[HN Gopher] Kubernetes at Home with K3s
       ___________________________________________________________________
        
       Kubernetes at Home with K3s
        
       Author : todsacerdoti
       Score  : 143 points
       Date   : 2021-12-05 08:33 UTC (14 hours ago)
        
 (HTM) web link (blog.nootch.net)
 (TXT) w3m dump (blog.nootch.net)
        
       | nunez wrote:
       | k3s is so 2019. Kind is where it's at. /s
       | 
       | Personally, k3s is good for bare-metal since it's slightly
       | lighter-weight than a kubeadm-provisioned cluster. However, I
       | found it too heavy-weight for running k8s on a laptop. I believe
       | it uses etcd, which is pretty noisy.
       | 
       | Kind is a lot lighter, I've found. And it runs in Docker, which
       | is great. I haven't tried using it for more complex stuff like
       | service meshes and storage; I'd expect issues there since it's
       | pretty stripped down. For local testing, however, it's solid.
       | 
       | I've heard k0s is similar.
        
         | cococonut wrote:
         | Kind is also used as the reference implementation for
         | kubernetes, so you know it's compliant and won't have weird
         | mismatch bugs.
        
         | raesene9 wrote:
          | Funny you say that, as one of k3s's main differences from
          | general Kubernetes is that it supports sqlite instead of
          | etcd, whereas kind is straight kubeadm clusters in Docker
          | containers, so it uses etcd...
         | 
         | I see the two projects as covering largely different ground.
         | kind is brilliant for test clusters where you don't need long
         | term persistence, but k3s is better for long-term low resource
         | clusters.
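          | 
          | For reference, k3s can read its flags from a config file, so
          | the datastore choice is a one-line change. Roughly (the
          | endpoint value below is just an illustration):
          | 
          |     # /etc/rancher/k3s/config.yaml
          |     # default is the embedded sqlite datastore; point k3s at
          |     # an external DB (or etcd) with an endpoint instead:
          |     # datastore-endpoint: "mysql://user:pass@tcp(db:3306)/k3s"
          |     write-kubeconfig-mode: "0644"
          |     disable:
          |       - traefik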
        
       | ilvez wrote:
       | > I'm still not a huge fan of K8s, but Docker has kind of
       | imploded and its Swarm project has been long dead, Nomad isn't
       | much better (or free)
       | 
       | Can someone elaborate on why Nomad should be avoided?
       | 
        | My currently known issues are a smaller user base (not a big
        | issue on its own) and being too dependent on Hashicorp
        | (potential risk, trust issue?).
       | 
        | I'm currently choosing technology for our company to host all
        | the services that currently run in their own custom
        | environments; it's manageable at the moment, but it has no
        | future. Nomad (together with Consul) has most of the features
        | I need without seeming bloated.
       | 
        | I played around with Swarm already and really liked its
        | simplicity, but a few use cases weren't supported without
        | workarounds. K8s is probably still too big of a gun, but it's
        | also a possible candidate.
        
         | gizdan wrote:
         | > Can someone elaborate on why Nomad should be avoided?
         | 
          | The answer is, as always, it depends on your use case. I'm
          | personally not a huge fan of Nomad, for reasons I won't go
          | into because a) they're irrelevant, being just opinions, and
          | b) they're probably outdated. However, to say "it isn't much
          | better" is
         | very vague and extremely subjective. I don't really know what
         | OP means with it, but consider the following things I _know_
         | about Nomad vs Kubernetes:
         | 
          | Kubernetes has many more features built in, so you or your
          | users don't have to deal with setting them up. Obviously
          | that brings in complexity that Nomad may not have to deal
          | with. It also means that with Nomad some features may need
          | to be supplemented with other software, which in turn means
          | learning that software and having different interfaces to
          | manage it. Kubernetes, on the other hand, provides all of
          | these features through the same interface. Of course, this
          | also means that with Nomad you can add some of these
          | features as you need them over time, instead of having them
          | all at the beginning regardless of whether or not you need
          | them. Some features I can think of: secrets management*,
          | load balancing, config management, service discovery.
         | 
          | Kubernetes has a much bigger community and many more tools
          | you can just plug and play. The relatively recent phenomenon
          | that is Kubernetes Operators is just awesome and makes
          | running software a breeze, even software that otherwise
          | requires a lot of knowledge to run.
         | 
          | Kubernetes only does Linux containers, compared to Nomad,
          | which has support for just about anything you can throw at
          | it (Java, containers, plain binaries) and has first-class
          | Windows support (via Windows executables). Last I checked,
          | Windows support for Kubernetes was still in its infancy.
         | 
         | In terms of support, with K8s you will need to get a third-
         | party to give you support, whereas with Nomad you can get
         | support directly from Hashicorp.
         | 
          | Nomad requires a Consul cluster, at least the last time I
          | looked this up, though as I understand it, HashiCorp has
          | been working on this in the past year or so, so this may not
          | be accurate any more. Kubernetes uses etcd internally, which
          | itself takes some understanding.
         | 
         | Lastly, Nomad will likely lead to vendor lock-in. Kubernetes
         | can run just about everywhere including on-prem, all major
         | cloud providers, and even at the edge (see KubeEdge). Chick-
         | fil-A famously runs Kubernetes on Intel NUCs in all their
         | stores. I'm assuming there is nothing stopping a Nomad cluster
         | being run at the edge, but I suppose it's not been proven yet.
         | 
         | * note: secrets management is a bit of an overstatement for
         | Kubernetes. By default it is base64 encoded "secrets", and the
         | only thing preventing one from accessing the secrets is ACLs,
         | but if you have access to the underlying etcd cluster, it's
         | game over. If you want proper secrets management (i.e.
         | encryption at rest and/or in transit), you'll need to integrate
          | it with something else, such as HashiCorp Vault (the most
          | advanced option) or Mozilla SOPS.
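          | 
          | To make that concrete, this is roughly all a default Secret
          | is; the value below is just base64("hunter2"), which anyone
          | with API read access (or etcd access) can decode:
          | 
          |     apiVersion: v1
          |     kind: Secret
          |     metadata:
          |       name: db-credentials
          |     type: Opaque
          |     data:
          |       password: aHVudGVyMg==   # base64, not encryption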
        
           | mountainriver wrote:
            | Very good synopsis; one note: Kubernetes now does have
            | encryption at rest for secrets.
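            | 
            | You do have to turn it on yourself, though, by pointing
            | the apiserver's --encryption-provider-config flag at a
            | file along these lines (a sketch; the key is a
            | base64-encoded 32-byte value you generate yourself):
            | 
            |     apiVersion: apiserver.config.k8s.io/v1
            |     kind: EncryptionConfiguration
            |     resources:
            |       - resources:
            |           - secrets
            |         providers:
            |           - aescbc:
            |               keys:
            |                 - name: key1
            |                   secret: <base64-encoded 32-byte key>
            |           - identity: {}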
        
       | wara23arish wrote:
       | Can anyone tell me when Kubernetes should be used and when it
       | shouldn't?
       | 
       | I've been learning about it along with terraform and other devops
        | tools, but I'm still not clear on how to identify whether it's
        | the right tool for the job.
        
         | dreyfan wrote:
          | Kubernetes is like a Rube Goldberg machine. It's complex to
          | set up, very satisfying to watch in action, and gets the job
          | done even if there are a few extra moving parts.
        
           | mountainriver wrote:
           | A bit in places, but it's all built on very simple primitives
        
         | atomicnumber3 wrote:
         | Imo, it's basically all about having software-defined
         | infrastructure. If that's worth it to you, use k8s. If you
         | don't mind just ssh'ing to a box and running docker, or using
         | puppet/ansible or something, that's fine too.
         | 
         | I personally like using k8s at work because our dev team is
         | 2000+ people and there's great support for using it in our
         | tooling.
         | 
          | I wrote my own thing at home that's basically just a
          | declarative "put this docker container on this host with
          | these options" manager, and it's WAY better QOL than when I
          | was using Helm and Digital Ocean's managed k8s (which is
          | great $ value, btw). But Jesus, was Helm such a pain; at
          | times I was spending more time reading Helm and k8s docs
          | than working on my own project.
         | 
          | That's my main gripe with k8s. Unlike Docker, it is a HUGE
         | surface area to learn, so it takes a lot for that to be worth
         | it.
        
         | nonameiguess wrote:
         | Are you trying to build a platform (as a service or otherwise)
         | that many developers will deploy many applications into?
         | Strongly consider Kubernetes.
         | 
         | Are you trying to deploy one application you're also the
         | developer of? Then use some existing platform as a service
         | unless and until you hit limitations that no commodity offering
         | can overcome and you need to roll your own.
         | 
         | Are you trying to deploy a static site or something consisting
         | mostly of informational content that isn't really an
         | application and doesn't have meaningful uptime requirements?
         | Fire and forget some VPS with httpd or nginx serving a
         | directory you update with rsync, or maybe pick some templated
         | off-the-shelf Wordpress thing.
        
       | throwawayboise wrote:
        | I use lxc and shell scripts to manage containers. I've made
       | several attempts at Kubernetes but I don't have enough oxygen to
       | climb that mountain.
        
       | Havoc wrote:
       | That's well timed - busy doing essentially the same. Finally got
       | self-hosted gitlab-CI to spit out docker images via kaniko so
       | need to deploy them somewhere.
       | 
       | Tried OpenFaasd but it isn't a good fit so now having a go at K8S
       | instead. Really don't need the clustering part but there is
       | something to be said for picking the mainstream tech.
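        | 
        | For anyone doing the same, the kaniko job in .gitlab-ci.yml is
        | roughly this (registry auth via /kaniko/.docker/config.json
        | omitted for brevity):
        | 
        |     build-image:
        |       stage: build
        |       image:
        |         name: gcr.io/kaniko-project/executor:debug
        |         entrypoint: [""]
        |       script:
        |         - /kaniko/executor
        |           --context "$CI_PROJECT_DIR"
        |           --dockerfile "$CI_PROJECT_DIR/Dockerfile"
        |           --destination "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"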
        
       | mrweasel wrote:
       | > I miss its programmatic approach to deployments
       | 
        | That seems to be people's main justification for using
        | Kubernetes. We really need to find a solution that will allow
        | us to deploy software programmatically, without dragging along
        | the complexity of Kubernetes.
       | 
        | I know, Kubernetes is "easy" on AWS, Azure or GCP. Well great,
        | but many of us can't use managed Kubernetes. Now we're stuck
        | managing an on-prem cluster, just so the developers can deploy
        | three containers and an ingress controller while complaining
        | that they need to update their YAML because some API changed
        | again.
        | 
        | Kubernetes is great for some projects, just not most. Sadly we
        | don't have any good alternative for deploying to VMs.
        
         | nunez wrote:
          | There are a few solutions that try to abstract away the
          | inner workings of k8s, like Garden and Tilt. They enable you to
         | use a Compose-like syntax for deployment, which covers the 80%
         | use case of "I just wanna deploy an app, bro"
         | 
         | The issue is that escaping the complexity of Kubernetes is
         | challenging because of that 20%. You want to use Vault for
         | pulling secrets in dynamically and customize how your service
         | mesh routes traffic to your app? Unless you're using something
         | like OpenShift that gives you one way to do it, you're gonna
         | mess with some YAML.
        
           | mountainriver wrote:
           | Yeah knative too
        
         | stavros wrote:
         | I wrote Harbormaster because I needed exactly this. It's very
         | simple and has worked extremely well for me, at home as well as
         | in production:
         | 
         | https://gitlab.com/stavros/harbormaster
        
           | andrelaszlo wrote:
           | I was just looking into Harbormaster. Does it play well with
           | Traefik? Seems like a good combo in theory at least. I'm
           | already running Traefik, but haven't found a nice way to
           | automatically upgrade and make sure docker-compose is running
           | for all projects I'm hosting.
        
             | stavros wrote:
             | Hmm, I've never used Traefik with it, I use Caddy in host
             | mode. Does Traefik autodetect the containers from the
             | Docker socket? I might look into it because that would make
             | automatic ingress painless.
             | 
             | Harbormaster is basically a thin layer/manager over
             | Compose. If Traefik works with a bunch of running
             | containers, it'll work with Harbormaster.
        
               | codethief wrote:
               | > Does Traefik autodetect the containers from the Docker
               | socket?
               | 
               | Given the overall expertise you've demonstrated here on
               | HN time and again, I'm sure you are aware of this but a
               | general warning to everyone else reading this might be in
               | order: Careful with exposing the Docker socket to
               | Traefik, see
               | https://github.com/traefik/traefik/issues/4174
        
               | stavros wrote:
               | Ah, yes, I'm unclear on whether Docker intends to be
               | secure these days (though there are things like Sysbox or
               | running rootless that help), but exposing the socket
               | inside a root container is removing a few layers of
               | defense, thank you for pointing this out.
               | 
               | Unfortunately I don't think there's a way for
               | Harbormaster to do anything about it, but I'll play
               | around with Traefik and see if there's a more secure way
               | to deploy it, thank you.
        
               | detaro wrote:
               | > _Does Traefik autodetect the containers from the Docker
               | socket?_
               | 
                | Yes, it can do that. You can put annotations (tags? I
                | forget what the Docker terminology is) on containers
                | that Traefik reads, so it knows "expose port xyz on
                | this container as https://example.com".
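                | 
                | On a Compose service it looks roughly like this
                | (Traefik v2 label names; the app name and host are
                | made up):
                | 
                |     services:
                |       myapp:
                |         image: myapp:latest
                |         labels:
                |           - "traefik.enable=true"
                |           - "traefik.http.routers.myapp.rule=Host(`myapp.example.com`)"
                |           - "traefik.http.services.myapp.loadbalancer.server.port=8080"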
        
               | stavros wrote:
               | Hmm very interesting, I'll need to play with that and add
               | annotation support on Harbormaster so it works out of the
               | box, thanks!
        
               | andrelaszlo wrote:
               | That would be really cool. I might mess around with it
               | around Christmas. If I run into any rough edges around
               | labels (Traefik uses labels like `traefik.enable=true`
               | https://doc.traefik.io/traefik/reference/dynamic-
               | configurati...) I'll submit an issue or PR.
        
               | stavros wrote:
               | Excellent, thanks! I think a label config item should be
               | enough to get Traefik working, but I'll know more after I
               | look at it.
        
           | guywhocodes wrote:
           | Interesting project, will give this a shot.
        
           | mrweasel wrote:
           | Not really what I expected. I was thinking more in terms of a
           | kubectl replacement, but this is actually pretty clever for
           | the projects that want to run a git-ops setup.
           | 
           | Wonderful little project, thank you.
        
             | stavros wrote:
             | Oh hmm, how do you use kubectl for programmatic
             | deployments? I've used it before but it required an
             | intermediate machine with cluster access to run kubectl on,
             | which is less convenient than git-ops.
        
               | mrweasel wrote:
               | We just use intermediate machines/jump-hosts :-)
               | 
               | It depends on the customer and the project. Most want
               | git-ops, but often they also want kubectl. The overhead
               | of having to build, instrument and manage the
               | infrastructure to run software on a Kubernetes cluster is
               | often a pretty big surprise for many of our customers. So
                | in order to debug, developers still resort to kubectl
                | to get a shell in a pod or view logs.
               | 
                | This is also why I'm against Kubernetes for most
                | projects. It requires a high level of maturity in the
                | development team to do right. When you see someone
                | who knows what they're doing, Kubernetes is amazing.
                | For most people, however, picking something more
                | "classic" is going to make their life much easier.
                | This is especially true in light of the fact that what
                | most people want is just some easy way of saying:
                | "Run this container image on those two hosts".
               | 
               | So far the best solution I've been able to come up with
                | is Ansible and AWX. Developers can then bump a version
                | number in a group_vars file and click "launch" in AWX.
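                | 
                | Concretely it's nothing fancier than something like
                | this (names made up, but that's the whole interface
                | the developers see):
                | 
                |     # group_vars/webapp.yml
                |     app_image: registry.example.com/webapp
                |     app_version: "1.4.2"
                | 
                |     # roles/webapp/tasks/main.yml
                |     - name: Run the app container
                |       community.docker.docker_container:
                |         name: webapp
                |         image: "{{ app_image }}:{{ app_version }}"
                |         restart_policy: unless-stopped
                |         state: started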
        
               | stavros wrote:
               | Ah yeah, this matches my experience 100%. Give
               | Harbormaster a try, I think you'll like it. For me, it's
               | exactly the simpler solution you mention.
        
         | 3np wrote:
         | I find Nomad (coupled with Terraform, Consul and Vault) hits
         | the sweet spot for me.
         | 
         | It provides all the orchestration, coordination and resource
          | management in source-controlled templates but doesn't add
          | all the layers of abstraction and indirection that k8s does.
        
         | doliveira wrote:
          | Nix sounds like it could be it, but it really needs better
          | learning resources or maybe some beginner-friendly wrappers.
        
         | mountainriver wrote:
         | Have you looked at kubevirt?
        
           | mrweasel wrote:
            | I have not, but just from quickly browsing the
            | documentation it looks like I absolutely should. Thank you.
        
       | anton5mith2 wrote:
       | I'm a new product manager at canonical for maas.io and just spent
       | a fair chunk of time producing a bare metal k8s video tutorial. I
       | did it as a learning exercise and it was really interesting.
       | 
       | Anyway, it uses Juju and a charm called kubernetes-core. I came
       | away impressed (even tho biased) at how easy it was to deploy a
       | cluster, scale it up/down etc. Didn't see anyone mentioning these
       | in this thread so thought I'd mention it.
       | 
       | https://youtu.be/sLADei_c9Qg
        
       | revskill wrote:
        | My issue with Kubernetes is mostly broken and outdated YAML
        | configuration for packages. Try googling for Nginx Kubernetes;
        | most of the results are broken if you apply them. Worse, you
        | have no idea how to fix them other than deleting the resources
        | and trying another way.
        | 
        | It's like someone updating a REST API and forgetting to update
        | the Swagger YAML documentation.
        | 
        | So K8s or K3s isn't the issue, they're fine; it's the
        | configuration that sucks to keep up with.
        
         | 4140tm wrote:
          | The main reason the NGINX examples don't work is that there
          | are two NGINX ingress controllers - one by NGINX and one
          | maintained by the Kubernetes project. Real fun if one does
          | not know this upfront. Ingress API version changes don't
          | help much either, of course.
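          | 
          | With the current API you at least get to pin which
          | controller should handle the object, e.g. (host and service
          | names are placeholders):
          | 
          |     apiVersion: networking.k8s.io/v1
          |     kind: Ingress
          |     metadata:
          |       name: myapp
          |     spec:
          |       ingressClassName: nginx   # must match the controller installed
          |       rules:
          |         - host: myapp.example.com
          |           http:
          |             paths:
          |               - path: /
          |                 pathType: Prefix
          |                 backend:
          |                   service:
          |                     name: myapp
          |                     port:
          |                       number: 80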
        
         | hvgk wrote:
          | Ah yes. Broken-ass Helm charts are 90% of my day. I maintain
          | internal forks of about 40 different public charts. Good
          | money, but a world of pain.
        
           | pojzon wrote:
            | This seems like a company issue, not seeing the value in
            | constantly updating your stack.
            | 
            | That's part of being DevOps.
        
             | hvgk wrote:
             | I think it's an external QA issue as we have to fix all the
             | broken shit upstream.
             | 
             | 90% of the stuff out there is garbage.
        
       | awinter-py wrote:
       | rancher's work on k3s is really impressive
       | 
       | it feels like they have the most design empathy for people who
       | are running a kube cluster, but a small one and it's not their
       | full-time job
        
       | IceWreck wrote:
        | You're mounting your server's storage to store MySQL data.
        | Doing that is hard to scale. And isn't scaling/high
        | availability the point of using Kubernetes? Without that, k3s
        | is just a regular container engine.
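        | 
        | By that I mean a hostPath-style setup, roughly like this,
        | which ties the data to one specific node:
        | 
        |     apiVersion: v1
        |     kind: PersistentVolume
        |     metadata:
        |       name: mysql-data
        |     spec:
        |       capacity:
        |         storage: 10Gi
        |       accessModes: ["ReadWriteOnce"]
        |       hostPath:
        |         path: /srv/mysql   # lives on one particular node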
        
         | frbaroni wrote:
         | I do that, most home servers don't need high
         | availability/scalability.
        
       | nmaggioni wrote:
       | > Nomad isn't much better (or free)
       | 
       | "Better" is quite subjective, in my experience I've had workloads
          | that were a breeze to set up in Nomad and painful enough to
       | replicate in K8S that I haven't followed through with the latter.
       | That is especially true in a home server/lab environment where
       | things change and break all the time. I still run Nomad and K3S
       | back-to-back to re-evaluate them periodically for my production
       | needs, but so far my take on the matter hasn't changed.
       | 
       | "Free" puzzles me as well. Free as in open and hackable? The
       | source and buildfiles are all publicly available, hack away to
       | your heart's content. Free as in free beer? The same concept
       | still applies, and if paid support is being referred to here,
       | how's that different from paying any cloud provider to use their
       | managed K8S services?
        
         | bantunes wrote:
         | Hey, author here. I meant "free" as in full featured - I
         | remember Nomad having features that I quite liked behind the
         | Enterprise paywall.
        
           | KronisLV wrote:
            | Ahh, I feel like this is probably the case for most of the
            | corporation-backed software out there - apparently
            | offering just support isn't always a viable strategy to
            | remain in business, so many of them lock bits of more
            | advanced functionality behind paywalls: like many of the
            | features of GitLab
            | (https://about.gitlab.com/pricing/gitlab-com/feature-comparis...)
            | and something like Artifactory
            | (https://www.jfrog.com/confluence/display/JFROG/Artifactory+C...)
           | 
           | I'm not sure whether that's a bad thing, though, since the
           | alternative would be having no free plans/editions for those
           | pieces of software at all, or maybe just using them for a few
           | years and the company going bankrupt.
        
           | nmaggioni wrote:
           | Thanks for chiming in. Any chance that you tried an older,
           | pre-v1.0 version? I used to share some of your
           | considerations, but lately I've seen Nomad evolve at a
           | quicker pace and catch up with some of the features I
           | wanted/liked (ex.: readiness checks were introduced in May
           | 2021 with v1.1, going by the changelog).
           | 
            | As others have already said, and based on the official
            | docs (https://www.nomadproject.io/docs/enterprise), the few
            | Enterprise-only features left don't really feel necessary
            | to me for small-to-medium deployments, especially when
            | there's an Ops team that can oversee things anyway.
           | 
           | Not trying to play the devil's advocate here, I'm genuinely
           | interested in what I might be missing with my current
           | approach.
        
         | nullwarp wrote:
         | Yeah I find "better" quite subjective too as I tend to find
         | Nomad better in almost every way and I run both k3s and Nomad
         | as well, both at home and professionally.
         | 
         | Also the "Enterprise" features of Nomad aren't really something
         | a home server user is going to need in the first place
         | (Multiregion Deployments? Doubtful. Non voting servers? Also
         | no)
        
       | gatestone wrote:
        | For geeking out this is great. For getting my existing
        | containers running, I would use Google's managed Kubernetes
        | service (GKE) with Terraform and be done with it in minutes.
        
       | vander_elst wrote:
        | I tried to go down that road myself and came back to System V
        | and cron. At most docker compose, but k8s: never again. It is
        | way too complex for small deployments; there are too many
        | levels of abstraction and indirection. If I absolutely need
        | reproducibility I can symlink (ln) the configurations that I
        | want to track into a repo and that's it.
        
         | spmurrayzzz wrote:
         | Even for large deployments, I've always viewed this quandary
         | as: what is the true opportunity cost of not using this when I
         | have working docker solutions running at scale today? And how
         | much time do I need to invest to get to subject matter
         | expertise parity?
         | 
         | The calculus there has never panned out for me, especially for
         | any service architectures with non-trivial networking
          | constraints. But I still do love playing around with the
          | platform to learn about it; some great ideas overall (even
          | if complexity abounds).
        
       | alias_neo wrote:
       | Unlike OP I DO love Kubernetes.
       | 
       | I didn't like it initially (years ago, when Docker Swarm was
       | still worth mentioning) but I spent a year+ during the pandemic
       | learning it on work time and porting one of our products. I
       | deployed k3s at home and love it.
       | 
       | Unlike many, we deploy our product on-prem to customers who want
       | to be hands off (not always technical) so I had to get very
       | familiar with deploying and configuring in a variety of
       | environments from bare metal to cloud providers to Raspberry Pi
       | and redeploying over and over, not just the workload but the
       | entire cluster.
       | 
       | While at one time Docker was my preferred method of deploying
       | software everywhere, it's now Kubernetes.
       | 
       | I blogged a 4 part series[0][1][2][3] about the process (read:
       | tutorial) and caveats of deploying K3s to devices like the
        | Raspberry Pi. It also covers monitoring with Influx/Telegraf,
        | rotating logs and reducing writes to SD cards (see the
        | caveats), and discusses networking considerations, storage,
        | and a few other things I thought might be useful to a beginner.
       | 
       | Kubernetes is my preferred way to deploy any software at home now
       | and I have a basic template for deploying anything that runs in a
       | container with auto-ingress, DNS, certificates (from my internal
       | CA), storage etc all auto-configured.
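        | 
        | A sketch of what the ingress/certificate half of such a
        | template can look like when cert-manager issues the certs from
        | an internal-CA ClusterIssuer (names and hosts here are
        | placeholders); the rest is the usual Deployment, Service and
        | PersistentVolumeClaim:
        | 
        |     apiVersion: networking.k8s.io/v1
        |     kind: Ingress
        |     metadata:
        |       name: myapp
        |       annotations:
        |         cert-manager.io/cluster-issuer: internal-ca
        |     spec:
        |       rules:
        |         - host: myapp.home.example
        |           http:
        |             paths:
        |               - path: /
        |                 pathType: Prefix
        |                 backend:
        |                   service:
        |                     name: myapp
        |                     port:
        |                       number: 80
        |       tls:
        |         - hosts: [myapp.home.example]
        |           secretName: myapp-tls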
       | 
       | [0] https://2byt.es/post/bantamcloud/01-build/
       | 
       | [1] https://2byt.es/post/bantamcloud/02-configure/
       | 
       | [2] https://2byt.es/post/bantamcloud/025-caveats/
       | 
       | [3] https://2byt.es/post/bantamcloud/03-kubernetes/
        
       | sascha_sl wrote:
       | Weird gaps of understanding in the introduction. Services don't
       | control pods, services provide a virtual IP that goes to any of
       | the selected pods. Control is done via Deployment, StatefulSet or
       | DaemonSet. ReplicaSet is technically correct, but a bit advanced
       | for an introduction.
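        | 
        | In manifest terms (a minimal sketch): the Service only selects
        | pods by label and load-balances a virtual IP across them,
        | while the Deployment is what actually creates and replaces
        | those pods (via a ReplicaSet under the hood).
        | 
        |     apiVersion: v1
        |     kind: Service
        |     metadata:
        |       name: web
        |     spec:
        |       selector:
        |         app: web            # matches the pod labels below
        |       ports:
        |         - port: 80
        |           targetPort: 80
        |     ---
        |     apiVersion: apps/v1
        |     kind: Deployment
        |     metadata:
        |       name: web
        |     spec:
        |       replicas: 3
        |       selector:
        |         matchLabels:
        |           app: web
        |       template:
        |         metadata:
        |           labels:
        |             app: web
        |         spec:
        |           containers:
        |             - name: web
        |               image: nginx:1.21
        |               ports:
        |                 - containerPort: 80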
        
       | atombender wrote:
       | I tried out K3s (via Rancher Desktop) on my Mac, and was
       | disappointed to discover that it would consume about 30% CPU
       | while idle, enough to run the fan at maximum speed. By "idle" I
       | mean running absolutely no containers other than what's included
       | in the base install.
       | 
       | The full version of Kubernetes has the same flaw, and is the
       | curse of distributions like Minikube and Kind. While K3s is
       | marketed as a lightweight Kubernetes distribution, it does not
       | fix the idle CPU usage issue. The reason, from what I can tell,
       | is that Kubelet relies on polling everything. It's not
       | efficiently implemented because nobody has ever needed it to be.
       | 
       | That means that, just like Minikube and Kind, it's not really
       | suitable as something to run on your desktop. Which is really
          | unfortunate, because it's a great way to run system software
          | (think Postgres or Redis) that's a shared dependency among a
          | team's apps.
        
         | tommek4077 wrote:
          | It's the same on servers: you just lose a lot of power and
          | energy. But someone will tell you servers and energy are
          | cheaper than capable devs and ops.
        
         | mountainriver wrote:
          | It's the Kubernetes reconciler pattern you're seeing. It
          | can't completely idle, but it's incredibly fault tolerant.
        
           | atombender wrote:
           | With no resources changing, there should be nothing to
           | reconcile. There should be some chatter from node status
           | updates, perhaps, but nothing else.
           | 
           | It's just a suboptimal implementation that nobody has
           | bothered to optimize, because running Kubernetes on a laptop
           | has never been a priority.
        
       | samgranieri wrote:
       | I've gone between using k3s and full-blown k8s on a raspberry pi
       | cluster at home.
       | 
       | Both work just fine in my humble opinion. K3s is a bit easier to
       | install.
       | 
       | For context, I've been using Kubernetes at work since I think
       | 2018 (amazon eks), and I'm quite familiar with it.
       | 
       | I highly recommend using k9s for day to day stuff. It's just so
       | much easier than having to memorize a ton of different commands.
        
       | stormbrew wrote:
       | > but Docker has kind of imploded and its Swarm project has been
       | long dead
       | 
       | Is it? I mean, it's still there in docker and its setup
       | instructions remain a lot more concise and simple than even the
       | most "it's easy, really!" tutorials of k8s-related things tend to
       | be.
       | 
        | It kind of seems like it's not so much dead as done: it
        | exists, it does some things, and it probably isn't going to do
        | a lot of new things in the future. But it's honestly pretty
        | appealing when you go looking at things like this for 'simple'
        | use cases.
        
         | davnicwil wrote:
         | Also came here to ask this. Seems like something being simple
         | and something being sort of done and not under active
         | development are two sides of the same coin.
         | 
         | A related question - the article mentions swarm here as the k8s
         | equivalent, but can docker compose also fill this role in even
         | simpler usecases, like just a handful of services in a typical
         | 3 tier web stack? (especially on a team without k8s expertise
         | that goes much beyond the functionality of docker compose)
         | 
         | I get a general sense from reading various threads on this
         | topic that using docker compose in prod isn't seen as a great
         | idea, without ever having seen the why properly articulated.
        
           | fivea wrote:
           | > Also came here to ask this. Seems like something being
           | simple and something being sort of done and not under active
           | development are two sides of the same coin.
           | 
           | This. To me Docker's swarm mode is already feature-complete
           | and solid. The only thing that I'm aware that needs some work
           | is the notorious intra-node connection speed issue, but
           | that's not a blocker in any way.
        
           | kubanczyk wrote:
           | > the article mentions swarm here as the k8s equivalent, but
           | can docker compose also fill this role
           | 
            | Docker-compose is inherently limited to a single node
            | (single daemon), so no redundancy. The multi-node version
            | of docker-compose is Docker Swarm, in the sense that the
            | YAML file is mostly ready for migration to Swarm as-is.
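            | 
            | i.e. the same file can already carry the Swarm bits; a
            | rough example (the deploy: block is ignored by plain
            | docker-compose, or needs --compatibility, but is what
            | "docker stack deploy" uses):
            | 
            |     version: "3.8"
            |     services:
            |       web:
            |         image: nginx:1.21
            |         ports:
            |           - "80:80"
            |         deploy:
            |           replicas: 3
            |           restart_policy:
            |             condition: on-failure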
           | 
           | > something being simple and something being sort of done and
           | not under active development are two sides of the same coin
           | 
            | If someone's resume is basically "5 years of experience
            | with Docker Swarm", they'll have a harder time on the
            | market than if they had spent those years with k8s.
        
         | wilsonfiifi wrote:
          | Yeah, so the author didn't do enough research before writing
          | that, because just the other day Mirantis announced that
          | Docker Swarm is going to be fully supported for the
          | foreseeable future [0]. So you can confidently use Swarm for
          | your projects.
         | 
         | [0] https://youtu.be/pCGCCQyBtjg
        
           | miked85 wrote:
           | I would still be wary about committing to Docker Swarm, even
           | Mirantis' home page is pushing Kubernetes.
        
         | KronisLV wrote:
          | That's a false statement as far as the technical aspects are
          | concerned (Swarm is still usable and supported), but it is a
          | true statement when you look at the social aspects
          | (Kubernetes won the container wars, and now both Swarm and
          | even Nomad are uncommon to run into).
         | 
          | Right now the company I'm in uses Swarm in a lot of places
          | due to its simplicity (Compose file support) and low
          | resource usage - Swarm hits the sweet spot when it comes to
          | getting started with container orchestration without needing
          | multiple people to wrangle the technical complexity of
          | Kubernetes, or large VMs to handle its resource usage, at
          | least in on-prem environments.
         | 
         | In combination with Portainer (https://www.portainer.io/) it's
         | perhaps one of the best ways to get things done, when you
         | expect everything to just work and aren't doing something too
         | advanced (think along the lines of 10 servers, rather than 100,
         | which is probably most of the deployments out there).
         | 
         | I actually wrote about some of its advantages in my blog post,
         | "Docker Swarm over Kubernetes":
         | https://blog.kronis.dev/articles/docker-swarm-over-kubernete...
         | 
         | That said, if there are any good options to replace Swarm, it
         | has to either be Hashicorp Nomad (https://www.nomadproject.io/)
         | which is a really nice platform, especially when coupled with
         | Consul (https://www.consul.io/), as long as you can get past
         | the weirdness of HCL, or it has to be K3s (https://k3s.io/),
         | which gives you Kubernetes without the insane bloat and
         | hardware usage.
         | 
         | I actually benchmarked K3s against Docker Swarm in similar app
         | deployments: 1 leader server, 2 follower servers, running a
         | Ruby on Rails app and an ingress, while they're under load
         | testing by K6 (https://k6.io/). I was attempting to see whether
         | COVID contract tracking with GPS would be viable as far as the
         | system load goes in languages with high abstraction level,
         | here's more info about the approach:
         | https://blog.kronis.dev/articles/covid-19-contact-tracing-wi...
         | 
          | Honestly, the results were pretty close - on the follower
          | servers, the overhead of the orchestrator agents was a few
          | percent (K3s being heavier, but a few dozen MB here or there
          | not being too relevant), whereas the bigger differences were
          | in the leader components, where K3s was heavier almost by a
          | factor of two, which isn't too much when you consider how
          | lightweight Swarm is (there was a difference of a few
          | hundred MB), and the CPU usage was reasonably close in both
          | cases as well. Sadly, the text of the paper is in Latvian,
          | so it's probably of no use to anyone, but I advise you to do
          | your own benchmarks! Being a student back then, I couldn't
          | afford many servers, so it's probably a good idea to
          | benchmark with more servers.
         | 
         | Of note, on those VPSes (4 GB of RAM, single core), the full
         | Kubernetes wouldn't even start, whereas at work, trying to get
         | the resources for also running Rancher on top of a "full"
         | Kubernetes cluster (e.g. RKE) can also take needlessly long due
          | to the backlash from ops. Also, personally I find the
          | Compose syntax far easier to deal with than the amalgamation
          | that Kubernetes uses; Helm probably wouldn't even need to be
          | a thing if the deployment descriptors weren't so bloated.
          | Just look at this:
          | https://docs.docker.com/compose/compose-file/compose-file-v3...
         | 
          | In short:
          | 
          |     - Docker Swarm is pretty good when you're starting out
          |       with containers and is reasonably stable and easy to
          |       use
          |     - with Portainer, you can take it pretty far, be it in
          |       regards to development environments or even production
          |     - Compose format is great and works for Portainer and
          |       Docker Swarm (with minor changes), it's really nice
          |     - sadly, none of that really matters, because Kubernetes
          |       has more hype and therefore you have to learn how to
          |       use it in the industry
          |     - because of this, expect most of the tooling to be for
          |       Kubernetes and Swarm to fade into irrelevance before
          |       being retired (killed)
          |     - on the bright side, K3s is a viable and mostly
          |       non-bloated alternative to the "full" Kubernetes
          |       distros
          |     - also, Portainer supports both, so you don't even
          |       always need Rancher either
        
         | random_kris wrote:
          | At work we use Swarm and I love it. In the past I was
          | exposed to badly configured Kubernetes. I'll take Docker
          | Swarm any day.
        
         | planb wrote:
         | I deployed Docker Swarm in a small cluster at work this year.
         | After evaluating K8S, Nomad and Swarm it was the solution that
         | best fit our needs (together with Portainer for graphical
         | management and a few scripts to deploy from jenkins). The
         | system is simple enough that it can be documented on one wiki
         | page and when stuff goes wrong (which fortunately hasn't
         | happened yet) there is always a way to deploy the containers
         | manually.
        
           | GordonS wrote:
           | I have Swarm clusters running in production too, and am
           | planning on deploying dozens more this year. Swarm is great -
           | if you already use Compose, then you can use Swarm with ease.
           | It's so much simpler than k8s.
        
             | piaste wrote:
             | Thirding the Docker Swarm... I wouldn't say "love", more
             | like the "it's a piece of our infrastructure and it chugs
              | along just fine, I guess it's cool?", which is a
              | sentiment I've _rarely_ seen anyone express towards
              | Kubernetes.
        
           | mountainriver wrote:
           | Just FYI that swarm is EOL
        
           | mikepurvis wrote:
           | Interesting. I did a similar evaluation in early 2021 and
           | landed on microk8s as the right balance of features, ease of
           | deployment, and not having to do much hand-rolling or other
           | manual stuff.
        
       ___________________________________________________________________
       (page generated 2021-12-05 23:02 UTC)