[HN Gopher] My IRC client runs on Kubernetes
___________________________________________________________________
My IRC client runs on Kubernetes
Author : xena
Score : 64 points
Date : 2024-08-23 19:46 UTC (3 hours ago)
(HTM) web link (xeiaso.net)
(TXT) w3m dump (xeiaso.net)
| AdamJacobMuller wrote:
| KubeVirt is great, but I'm not sure why you wouldn't just run
| weechat inside a container.
|
| There's nothing so special about weechat that it wouldn't work,
| and you can just exec into the container and attach to tmux.
|
| Running tmux/screen inside a container is definitely cursed, but
| it works surprisingly well.
| xena wrote:
| Author of the article here. One of the main reasons is that I
| want to update weechat without rebuilding the container, taking
| advantage of weechat's soft upgrade method:
| https://weechat.org/files/doc/weechat/stable/weechat_user.en...
|
| And at that point, it may as well just be a normal Ubuntu
| server.
| AdamJacobMuller wrote:
| Ah, I didn't know weechat could do that, but I remember that
| from my irssi days.
|
| I would personally consider that a bit of an anti-pattern (I
| would always want to have software upgrades tied to the
| container) but that makes sense!
| xena wrote:
| As a long-time Docker-slonker I agree; however, that then
| requires me to remember to upgrade the container images,
| which makes it harder to ensure everything is up to date.
| If you set up a cronjob to update the VM's packages and a
| script that runs `/upgrade` in weechat every time the
| weechat package version changes, then you don't need to
| think.
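|
| As a sketch of that idea (not the article's actual config),
| assuming a cloud-init provisioned Ubuntu VM with weechat's
| fifo plugin enabled - the cron path, home directory, and
| fifo location are placeholders:
|
|     #cloud-config
|     # Keep the weechat package fresh and soft-upgrade the
|     # running client when the package version changes.
|     package_update: true
|     package_upgrade: true
|     write_files:
|       - path: /etc/cron.daily/weechat-soft-upgrade
|         permissions: "0755"
|         content: |
|           #!/bin/sh
|           old=$(dpkg-query -W -f='${Version}' weechat)
|           apt-get update -qq && apt-get install -qy --only-upgrade weechat
|           new=$(dpkg-query -W -f='${Version}' weechat)
|           # fifo syntax: "core.weechat *<command>" runs a command in
|           # the core buffer; the fifo path depends on your weechat
|           # version and the fifo.file.path setting.
|           if [ "$old" != "$new" ]; then
|             echo 'core.weechat */upgrade' > /home/user/.weechat/weechat_fifo
|           fi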
| fuero wrote:
| Consider having Renovate handle this for you - if you use a
| git repo to apply your manifests, it can pull in updated
| container images.
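|
| A sketch of what that looks like, with a placeholder image,
| assuming your manifests go through kustomize (Renovate's
| kustomize manager understands the images: override; for raw
| manifests you'd configure its kubernetes manager with a
| fileMatch pattern):
|
|     # kustomization.yaml
|     resources:
|       - deployment.yaml
|     images:
|       - name: ghcr.io/example/weechat  # placeholder image
|         newTag: "4.3.5"                # Renovate PRs bumps to this tag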
| tw04 wrote:
| So why not just make it a VM and skip k8s altogether?
| xena wrote:
| Because I already have a kubernetes cluster with
| distributed storage. This way makes it somewhat standard
| with the rest of my setup. In SRE work the real enemy is
| parts that are unique in ways that conflict with the normal
| way of doing things. Look at the "unique/standard" comment
| elsewhere in the thread and take a moment to think about it.
| It sounds absurd on first read; but when you realize that it
| lets me reuse the hardware I have and take advantage of the
| second-order properties of the work I've already done, it
| makes more sense.
| bbkane wrote:
| This is also why I design all my config files to use
| YAML. I've already got powerful tools to deal with its
| quirks (JSONSchema, yq, yamllint)
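|
| Even the lint config is just more YAML - a minimal
| .yamllint sketch, with illustrative rule choices:
|
|     extends: default
|     rules:
|       indentation:
|         spaces: 2
|       line-length:
|         max: 120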
| superkuh wrote:
| A prime example of silly cargo-culting. But hey, if it works
| for them, no problem. I'd just set up a crontab or systemd
| timer to rsync my logs to a second device as a backup every
| night or so.
| AdamJacobMuller wrote:
| I wouldn't say it's cargo-culting, but it's definitely silly
| (and intended to be).
|
| Not having to manage systems anymore, and fully relying on
| kubernetes to configure and manage everything is great,
| especially leveraging the redundancy and reliability of it.
| Alupis wrote:
| People are so very often quick to look down on anything
| using k8s these days, often citing complexity and some
| hand-wavy statement about "you're not web scale" or
| similar. These statements are usually accompanied by
| complex suggestions offered as "simpler" alternatives.
|
| On one hand you can manually provision a VM, configure the
| VM's firewall, update the OS, install several packages, write
| a bash script which can be used with cron, configure cron,
| set up backups, and more, followed by routine maintenance that
| is required to keep this system running and secure. This
| dance has to be done every time you deploy the app... oh, and
| don't forget that hack you did one late night to get your
| failing app back online but forgot to document!
|
| Or, on the other hand, you can write 1-3 relatively simple
| yaml files, just once, that explicitly describe what your app
| should look like within the cluster, and then it just does
| its thing forever.
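|
| For example, a minimal sketch of one such file (names and
| image are placeholders):
|
|     apiVersion: apps/v1
|     kind: Deployment
|     metadata:
|       name: myapp
|     spec:
|       replicas: 1
|       selector:
|         matchLabels:
|           app: myapp
|       template:
|         metadata:
|           labels:
|             app: myapp
|         spec:
|           containers:
|             - name: myapp
|               image: ghcr.io/example/myapp:1.0.0
|               ports:
|                 - containerPort: 8080
|
| The cluster restarts or reschedules the container on failure
| with no extra scripting on your part.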
|
| k8s is very flexible and very useful, even for simple things.
| "Why would you not use it?" should be the default question.
| drdaeman wrote:
| Setting up and maintaining a cluster along the happy path
| is not an issue: it's well documented, and there's existing
| automation if you want it, so you can set up a cluster in
| just a few minutes (if you already have the experience).
|
| And YAML files are nearly trivial in most cases.
| Boilerplate is not pretty but it's easy.
|
| The only Problem (with a capital P) with K8s is when it
| suddenly and unexpectedly breaks down - something fails and
| you need to figure out what is going on. That's where its
| complexity bites you, hard - because it's a whole new OS on
| top of another OS - so, a whole new giant behemoth to learn
| and debug. If you're lucky it's already documented by
| someone, so you can just follow the recipe; but if your
| luck runs out (it always eventually does) and it's not a
| common case, you're going to have a very unpleasant time.
|
| I'm a lazy ass, I hate having to debug extremely complex
| systems when I don't really need to. All those DIY
| alternatives exist not because people are struggling with
| NIH syndrome but because they are orders of magnitude
| simpler. YMMV.
|
| Or when you want to make K8s do something it isn't
| designed for or isn't presently capable of, so you need to
| hack on it. Grokking that codebase requires a lot of
| brainpower and time (I tried and gave up, deciding I don't
| want to until the day finally comes when I must).
| Alupis wrote:
| This is the other common pushback on using k8s, and it's
| usually unfounded. You will very rarely, if ever, need to
| troubleshoot k8s itself.
|
| I highly doubt most situations, especially ones like this
| article, will ever encounter an actual k8s issue, let
| alone discover a problem that requires digging into the
| k8s codebase.
|
| If you are doing complex stuff, then ya, it will be
| complex. If you are not doing complex stuff and find
| yourself thinking you need to troubleshoot k8s and/or dig
| into the codebase - then you are very clearly doing
| _something_ wrong.
|
| A homebrewed alternative will also be complex in those
| cases, and most likely _more_ complex than k8s because of
| all the non-standard duct tape you need to apply.
| drdaeman wrote:
| Hah. Okay, here's an anecdote.
|
| The first time I had to look under the hood (and discover
| it's not exactly easy to get through the wiring) was when
| I set up my first homelab cluster. I wanted to run it
| over an IPv6-only network I had, and while the control
| plane worked, the data plane did not - it turned out that
| some places were AF_INET only. That was quite a while ago,
| and it's all fine now. But it was documented exactly
| nowhere except some issue tracker, and it wasn't easy to
| get through the CNI codebase to find enough information to
| track down that issue.
|
| So it wasn't me doing _something_ wrong, but I still had to
| look into it.
|
| So I ran that cluster over IPv4. But it was a homelab,
| not some production-grade environment - so, commodity
| hardware, flaky networks and chaos monkeys. And I had it
| failing in various ways quite frequently. Problems of all
| kinds, mostly network-related - some easy and well-
| documented, like nf_conntrack capacity issues; some just
| weird (spontaneous intermittent unidirectional connectivity
| losses that puzzled me - the underlying network was totally
| okay, it was something in the CNI - so I had to migrate off
| Weave to, IIRC, Calico, as Weave was nicer to configure but
| too buggy and harder to debug).
|
| I also ran a production GKE cluster, and - indeed - it
| failed much less often, but I still had to scrap and
| rebuild it once, because something had happened to it,
| and by then I knew I simply wasn't capable of properly
| diagnosing it (though creating new nodes and discarding
| old ones so all the failing payloads migrate is easy -
| except, of course, that it solves nothing in the long
| term).
|
| In the end, for the homelab, I realized I don't need the
| flexibility of K8s. I briefly experimented with Nomad,
| and realized I don't even need Nomad for what I'm
| running. All I essentially wanted was failover, some
| replicated storage, and a private network underneath. And
| there are plenty of robust, age-proven tools for doing
| all of those. E.g. I needed a load balancer, but not
| specifically a K8s load balancer. I know nginx and
| traefik a little bit, including some knowledge of their
| codebases (I've debugged both, successfully), so I just
| picked one. Same for the networking and storage, I just
| built what I needed from the well-known pieces with a bit
| of Nix glue, much simpler and more direct/hardcoded
| rather than managed by a CNI stack. Most importantly, I
| didn't need fancy dynamic scaling capacities at all, as
| all my loads were and still are fairly static.
|
| As for that GKE cluster - it worked, until the company
| failed (for entirely non-technical reasons involving upper
| management).
|
| If you know what I did wrong, in general - please tell.
| I'm not entirely dismissing K8s (it's nice to have its
| capabilities), but currently have a strong opinion that
| it's way too complex to be used when you don't have a
| team or person who can actually understand it. Just like
| you wouldn't run GNU/Linux or FreeBSD on a server back in
| the '00s without an on-call sysadmin who knew how to deal
| with it (I was one, freezing my ass off in the server
| room, debugging a NIC driver crashing the kernel at 2am -
| fun times (not really)! - and maybe I've grown dumber, but
| a lot of Linux source code is somewhat comprehensible to
| me, while K8s is much harder to get through).
|
| This is all just my personal experience and my personal
| attitude to how I do things (I just hate not knowing how
| things in my care/responsibility work, because I feel
| helpless). As always, YMMV.
| OJFord wrote:
| If you stood up k8s just for an IRC client, that would
| definitely be silly, but honestly if you already have it (in
| this context presumably as the way you self-host stuff at
| home) then I don't really think there's anything silly about
| it.
|
| If anything it would be silly to have all your 'serious'
| stuff in k8s and then a little systemd unit on one of the
| nodes (or a separate instance/machine) or something for your
| IRC client!
|
| (Before _I'm_ accused of cargo-culting: I don't run it at
| home, and just finished moving off it at work last week.)
| erulabs wrote:
| > also want it to be a bit more redundant so that if one
| physical computer dies then it'll just be rescheduled to
| another machine and I'll be back up and chatting within minutes
| without human intervention.
|
| It seems an explicit requirement was not having to manually
| restore backups. You may or may not be wrong, but I find most
| arguments against tools like Kubernetes boil down to "if you
| ignore the requirements, you could have done it simpler with
| bash!", which, yea.
| petesergeant wrote:
| > if it works for him
|
| Don't think they're a him
| superkuh wrote:
| Thanks, default assumption on my part having only read the
| article itself. Changed.
| zzyzxd wrote:
| I think it's fine if you are happy with your crontab/systemd-
| based system. But the author already has a k8s cluster with
| storage and VMs automatically provisioned in a declarative
| way, so what's wrong with that? A crontab/systemd setup would
| be more complicated for them.
| dang wrote:
| Can you please edit out swipes from your HN comments? That's in
| the site guidelines:
| https://news.ycombinator.com/newsguidelines.html.
|
| ... as is this:
|
| " _' That is idiotic; 1 + 1 is 2, not 3' can be shortened to '1
| + 1 is 2, not 3._"
|
| https://news.ycombinator.com/newsguidelines.html
| erulabs wrote:
| Well, I think it's neat. The bit I find most provocative is the "if
| you already have Kubernetes..." premise. I find myself having a
| hard time not wanting to shove everything into the Kubernetes
| framework simply to avoid having to document what solutions I've
| chosen. `kubectl get all` gives me an overview of a project in a
| way that is impossible if every single project uses a different
| or bespoke management system.
|
| "simple/complex" is not the right paradigm. The real SRE
| controversy is "unique/standard". Yes, the standard approach is
| more complex. But it is better _in practice_ to have a single
| approach, rather than many individually-simpler, but in-
| aggregate-more-complex approaches.
|
| Kubernetes is never the perfect solution to an engineering
| problem, but it is almost always the most pragmatic solution to a
| business problem for a business with many such problems.
| Spivak wrote:
| Is the mountain of k8s code on top of your cloud of choice not
| strictly more complex than the cloud services it's
| orchestrating? Like, I think k8s is good software, but to hear
| that you're not choosing it because it's good engineering is
| surprising. To me that's the only reason you choose it, because
| you want or need the control-loop based infrastructure
| management that you can't get on say AWS alone.
| marcinzm wrote:
| There's always a mountain of code on top of the cloud
| provider.
| andrewstuart2 wrote:
| Furthermore, there's always a mountain of code aggregating
| heterogeneous systems into a uniform interface. Look at the
| Linux kernel, or LLVM, or pretty much any other
| (successful) project that provides abstractions to many
| different backends.
| Spivak wrote:
| Yes, but that code isn't typically running in production the
| way k8s is. Having a lot of Terraform is just a fancy
| method of statically provisioning infrastructure. The code
| is gone once it applies. Bringing k8s into the mix is a
| whole other application layer that's providing services to
| _your_ code. Things like security groups for pods, CNI
| plugins, and CoreDNS are code that wouldn't be there in a
| non-k8s world.
|
| Your problems now include k8s networking as well as VPC
| networking.
| zipmapfoldright wrote:
| That's the same with... the Linux kernel? I'd wager that most
| services people write have less complexity than the kernel
| they run on. We choose to do that not because we need
| its full feature set, but because it gives us useful
| abstractions.
| akdor1154 wrote:
| Yeah k8s is great. It gives you an infinite rope generator to
| let you hang yourself with ever-increasing complexity, but with
| a bit of restraint you can orchestrate pretty much anything in
| a simple or at least standard way.
|
| I'd take a stack of yaml over a stack of bespoke aws managed
| service scripts any day.
| Lord_Zero wrote:
| Just use irccloud... It's worth the $5/mo
| RockRobotRock wrote:
| I love irccloud! Only way I could use IRC and not go insane
| windlep wrote:
| I've been self-hosting a lot of things on a home kubernetes
| cluster lately, though via gitops using flux (apparently this
| genre is now home-ops?). I was kind of expecting this article to
| be along those lines, using the fairly popular gitops starting
| template cluster-template: https://github.com/onedr0p/cluster-template
|
| I set one of these up on a few cheap odroid-h4's, and have quite
| enjoyed having a fairly automated (though quite complex of
| course) setup that has centralized logging, metrics, dashboards,
| backups, etc., built by copying/adapting other people's setups.
|
| Instead of weechat, I went with a nice web-based IRC client
| (The Lounge) to replace my irccloud subscription. kubesearch
| makes it easy to find other people's configs to learn from
| (https://kubesearch.dev/hr/ghcr.io-bjw-s-helm-app-template-th...).
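|
| For anyone curious, a rough sketch of what one of those
| app-template HelmReleases looks like (API versions, values
| layout, and the image tag here are assumptions - check the
| chart docs):
|
|     apiVersion: helm.toolkit.fluxcd.io/v2
|     kind: HelmRelease
|     metadata:
|       name: thelounge
|     spec:
|       interval: 1h
|       chart:
|         spec:
|           chart: app-template
|           sourceRef:
|             kind: HelmRepository
|             name: bjw-s
|             namespace: flux-system
|       values:
|         controllers:
|           thelounge:
|             containers:
|               app:
|                 image:
|                   repository: docker.io/thelounge/thelounge
|                   tag: latest   # pin a real version in practice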
| zipmapfoldright wrote:
| TIL about Talos (https://github.com/siderolabs/talos, via your
| github/onedr0p/cluster-template link). I'd been previously
| running a k3s cluster on a mixture of x86 and ARM (RPi)
| nodes, and frankly it was a bit of a PITA to maintain.
| jbverschoor wrote:
| Why not proxmox?
| xena wrote:
| I already have a Kubernetes cluster with distributed storage
| set up. Adding Proxmox makes that setup unique. I don't want
| unique. I want to reuse the infrastructure I already have.
| minkles wrote:
| For those days when you _really_ fancy writing YAML...
| dijit wrote:
| I can't imagine a worse combination than Kubernetes and stateful
| connections.
| jeanlucas wrote:
| Is IRC still a thing? I mean seriously, do communities hang
| around there? I stopped using it in 2016.
| torvald wrote:
| Yes and yes.
| Tiberium wrote:
| Not a single mention of Quassel in the article or in the
| comments, which is honestly surprising. It's an IRC client with
| a client-server architecture, made specifically so it's easy to save
| logs and persist IRC sessions, since you can host the server part
| on an actual server/VPS and connect to it from all of your
| different devices.
| xena wrote:
| I'd use Quassel, but I have custom weechat scripts. I can't
| easily implement those in Quassel.
| Tiberium wrote:
| Fair enough, it's just that Quassel immediately came to mind
| when talking about persistent IRC logs/sessions :)
___________________________________________________________________
(page generated 2024-08-23 23:00 UTC)