[HN Gopher] The container orchestrator landscape
       ___________________________________________________________________
        
       The container orchestrator landscape
        
       Author : chmaynard
       Score  : 208 points
       Date   : 2022-08-24 11:08 UTC (11 hours ago)
        
 (HTM) web link (lwn.net)
 (TXT) w3m dump (lwn.net)
        
       | jordemort wrote:
       | Hi, I'm the author of the article. This is my third for LWN and a
       | follow-up to my previous one about container engines (
       | https://lwn.net/Articles/902049/ ) - I wasn't initially planning
       | on writing this since I only have a wee bit of k8s experience,
       | but enough people asked for it that I felt that I should. Please
       | let me know if you'd like me to write about anything else :)
        
         | chucky_z wrote:
         | One note about Nomad: the point about it requiring low
         | latency isn't really true. I run a cluster with some servers
         | physically on different coasts of the United States, with
         | clients as far away as Singapore. It all just works :). (This
         | does need some tuning, but once it works, it just works.)
        
           | schmichael wrote:
           | Our docs are a bit overly conservative in their suggestions.
           | Glad that configuration is working out for you!
           | 
           | Clients (aka nodes-actually-running-your-work) can be run
           | globally distributed without issue, and we've even added some
           | more features to help tune their behavior:
           | 
           | https://www.hashicorp.com/blog/managing-applications-at-
           | the-...
           | 
           | Disclaimer: I am the HashiCorp Nomad Engineering Team Lead
        
         | igtztorrero wrote:
         | Super info, congratulations. My main takeaway: ten years ago,
         | very little of this technology even existed, and things are
         | still evolving quickly.
         | 
         | Thanks
        
         | heipei wrote:
         | Thanks for the great post!
         | 
         | One small addition: Nomad recently added basic service
         | discovery to Nomad itself, so for simple use cases you might
         | get away with just a Nomad cluster, without Consul running
         | alongside: https://www.hashicorp.com/blog/nomad-
         | service-discovery
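         | 
         | For anyone curious, the switch is a one-line change in the
         | job spec: point the service block's provider at Nomad
         | instead of Consul. A minimal sketch (names and image here
         | are made up):
         | 
         |     job "web" {
         |       group "web" {
         |         network {
         |           port "http" {}
         |         }
         |         service {
         |           name     = "web"
         |           port     = "http"
         |           provider = "nomad"  # built-in catalog, no Consul
         |         }
         |         task "web" {
         |           driver = "docker"
         |           config {
         |             image = "nginx:alpine"
         |             ports = ["http"]
         |           }
         |         }
         |       }
         |     }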
        
         | orsenthil wrote:
         | > Hi, I'm the author of the article.
         | 
         | Wow. I really enjoyed this. Your writing was clear, simple and
         | explained the landscape while navigating the marketing terms
         | extremely well.
         | 
         | I wish to read other articles that you have written.
        
         | MoOmer wrote:
         | Great post - I hadn't heard of Nomad Pack, and I have a small-
         | ish deployment of ~20 nodes.
         | 
         | There are many reasons to love Nomad, like its straightforward
         | architecture, how it plays well with other binaries to augment
         | its functionality (Consul & Vault), super easy service mesh
         | with Consul Connect + Traefik, and the ease of defining complex
         | applications/architectures with the Job Spec [0].
         | 
         | I've used k8s a few times for work, but I just love Nomad.
         | 
         | [0]: https://www.nomadproject.io/docs/job-specification
        
           | kungfufrog wrote:
           | I love nomad too. Quick question, how have you solved ingress
           | using nomad + consul connect for service mesh?
           | 
           | The big thing for me is needing mTLS support for some of my
           | services.
        
             | chucky_z wrote:
             | Yea consul connect is a good choice for mTLS, it's enforced
             | by default with no way to turn it off.
             | 
             | You can also go DIY with something like Traefik and just
             | configure everything to require TLS with client TLS pass-
             | through for enforced mutual TLS, that puts you into
             | 'require a set of central load balancers' but you still get
             | all the benefits.
        
             | MoOmer wrote:
             | I use Consul Connect + Traefik annotations, and am planning
             | to write about it soon. If you want a sneak peek, I'm happy
             | to screen-share the config to get you going, ping me at the
             | email in my profile!
        
           | candiddevmike wrote:
           | I'm not sure where I'd use Nomad Pack vs Waypoint or
           | Terraform. Any HashiCorp folks care to chime in?
        
         | schmichael wrote:
         | Disclaimer: I'm the HashiCorp Nomad Engineering Team Lead
         | 
         | This may be my favorite article on the subject, and I've read a
         | lot! Others have suggested some minor corrections but overall
         | you've done a fantastic job accurately and fairly portraying
         | Nomad as far as I'm concerned.
         | 
         | I really appreciate the final paragraph as I _constantly_ field
         | questions about whether or not a k8s-only world is a foregone
         | conclusion. I think there's room for more than one option, and
         | I'm glad you noticed HashiCorp thinks so too!
        
           | jordemort wrote:
           | Thanks, I'm glad you liked it! I've used Nomad a bit at a
           | previous job; due to its simplicity, we were much more
           | comfortable embedding it in the virtual appliance images we
           | were building. Nomad meant we had to rewrite the k8s configs
           | that the hosted version of the app was using, but even with
           | one of the all-in-one k8s distros, we were worried about
           | shipping that many hidden moving parts.
        
         | abriosi wrote:
         | Congratulations on the article. It is always nice to read sober
         | writing
         | 
         | Personally, our team found a match in Docker Swarm. The
         | reality is that we don't have a DevOps role and we needed
         | something that anyone could deploy with.
         | 
         | Although the general sentiment towards Docker Swarm is
         | negative, its simplicity is hard to beat.
         | 
         | This guide was useful for setting up our stack
         | https://dockerswarm.rocks/
        
           | bityard wrote:
           | > It is always nice to read sober writing
           | 
           | Are you used to reading drunken writing? :)
           | 
           | I read this earlier too and was very impressed by how
           | everything was explained in plain English. I wish I could
           | send this article to myself 2-3 years ago, it would have
           | saved me literally days of trying to figure out what parts of
           | k8s do what and why.
        
       | hellohowareu wrote:
       | I am trying to learn Kubernetes for work-- Docker + K8s + Helm.
       | 
       | Does anyone have recommendation(s) regarding books, video
       | courses, or live in-person trainings?
        
         | JustinGarrison wrote:
         | I work on the EKS team at Amazon and would recommend our
         | YouTube channel "Containers from the couch" (cftc.info) and
         | eksworkshop.com.
        
         | mmh0000 wrote:
         | For live training, I cannot recommend Guru Labs[0] highly
         | enough. They offer many courses, but, specific to your question,
         | lots of container-related ones[1][2][3][4].
         | 
         | [0] https://www.gurulabs.com/linux-training/courses/
         | 
         | [1] https://www.gurulabs.com/linux-training/courses/GL340/
         | 
         | [2] https://www.gurulabs.com/linux-training/courses/GL360/
         | 
         | [3] https://www.gurulabs.com/linux-training/courses/GL355/
         | 
         | [4] https://www.gurulabs.com/linux-training/courses/GL390/
        
         | paulgb wrote:
         | I learned a lot of the concepts just by typing search terms
         | into YouTube. "TechWorld with Nana" is one channel that comes
         | up a lot and has good, digestible explanations.
         | 
         | Hussein Nasser too, but he has less content directly about
         | Docker/k8s and more about networking. It's excellent though.
        
         | danwee wrote:
         | For K8s:
         | 
         | 1. Read the book "Kubernetes up and Running"
         | 
         | 2. After that you can start reading the https://kubernetes.io
         | documentation
         | 
         | 3. Alongside the documentation reading, practise what you
         | have learnt using minikube (a quick loop like the one
         | sketched after this list is enough to get going).
         | 
         | 4. Only after that, start with Helm (which is pretty bad, but
         | if your company is using it...)
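         | 
         | A minimal first practice loop with minikube might look
         | like this (the deployment name and image are just
         | examples):
         | 
         |     minikube start
         |     kubectl create deployment hello --image=nginx
         |     kubectl expose deployment hello --port=80 --type=NodePort
         |     minikube service hello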
        
       | yummybear wrote:
       | I do miss the days when you'd FTP to your server and sync
       | changes.
        
         | bityard wrote:
         | This is pretty much what I do with Ansible, and now my whole
         | server configuration lives in a git repo that I can access from
         | anywhere.
        
       | Alir3z4 wrote:
       | Docker Swarm all the way.
       | 
       | Started with 1 node acting as manager and worker at the same
       | time. Then added 2 more nodes, so 3 nodes all acting as
       | managers and workers at the same time.
       | 
       | Later we kept 3 managers and 3 workers, and now run around 10
       | workers alongside the 3 managers.
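       | 
       | For anyone curious, that progression is just a handful of
       | commands (addresses and node names below are placeholders):
       | 
       |     # on the first node (manager + worker)
       |     docker swarm init --advertise-addr <manager-ip>
       | 
       |     # print the join commands, then run them on new nodes
       |     docker swarm join-token manager
       |     docker swarm join-token worker
       | 
       |     # later, change a node's role if needed
       |     docker node promote <node-name>
       |     docker node demote <node-name>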
       | 
       | The cluster is full of apps, celery workers, static apps or SSR.
       | 
       | To manage it I didn't use any UI. The CLI is really simple to
       | use. Later, to give less experienced people access to it, I
       | installed Portainer.
       | 
       | It has never stopped working, even while upgrading or messing
       | around in production environments.
       | 
       | It just works.
       | 
       | The one and only complaint I have is the way it messes around
       | with the iptables rules I have; the Docker Swarm networking
       | overrides them. There are several approaches, and while they
       | work, they always feel like a hack to be honest. But yeah,
       | they work.
       | 
       | It's such a wonderfully simple-to-use piece of software that
       | gets things done.
        
         | JoyfulTurkey wrote:
         | I fully agree. Docker Swarm was perfect for my small employer
         | with an IT staff of 3 people. Went with the 3 manager nodes, as
         | recommended by Docker, to allow for upgrades without downtime,
         | and things were rock solid.
         | 
         | Also liked that encryption of the overlay networks was
         | included.
         | 
         | The only thing we missed/wanted was auto-scaling built-in, but
         | we rarely had a need to scale the number of containers.
        
         | bityard wrote:
         | What are you doing for storage?
         | 
         | We're thinking of deploying Docker Swarm where I work onto a
         | VMware cluster, and basically the options boil down to sharing
         | a storage volume between VMs (which vSphere makes difficult)
         | or replicating storage across nodes with something like
         | Gluster (which drastically increases storage used and has
         | performance implications).
        
       | lifeisstillgood wrote:
       | I was on a thread recently looking for this decade's Byte
       | magazine. How did we forget to mention LWN? Great article :-)
        
       | throwaway787544 wrote:
       | The main purpose of K8s is just to run microservices. It makes
       | things convenient by automatically restarting services, running
       | multiple copies across servers, routing from one load-balancing
       | port to many ephemerally-allocated ports, collecting logs,
       | providing a consistent control API, limiting system resources,
       | networking, secrets...
       | 
       | ...All things Systemd does. Some bored admin just needs to make a
       | patch to make systemd processes talk together, and half the point
       | of Kubernetes will be gone.
       | 
       | All the extra shit like CNI and Operators are there because K8s'
       | own abstraction is too complicated so they had to add more
       | abstractions to deal with it. Managed Distributions of K8s go
       | away if every Linux distro can do the same things out of the box.
       | "Managing fleets" just becomes running more Linux VMs or not. You
       | wouldn't need "clusters", you'd just run a bunch of boxes and run
       | replicas of whatever service you wanted, and all resources would
       | be managed using normal Linux system permissions and Systemd
       | configs. No volume mounts or persistent storage setup, it's just
       | regular filesystem mounts and disks. Containers would be
       | optional, eliminating a huge class of complexity (static apps
       | make a lot of containerization moot).
       | 
       | And most importantly to admins - no more bullshit 15 month
       | upgrade death march. Imagine just sticking to a stable Debian for
       | 3 years but still having all the power of a distributed
       | orchestrator. Imagine not having 50 components with APIs
       | listening on the internet to be hacked. Imagine just using
       | IPTables rather than custom plugins that change based on the
       | Cloud provider just to filter network traffic, and not letting
       | everything on the network access everything else by default.
       | 
       | K8s is just too expensive, complicated, and time consuming to
       | survive. As soon as Systemd can network as a cluster, the
       | writing's on the wall. The only reason it hasn't happened yet is
       | the Systemd developers are basically desktop-oriented Linux
       | fogies rather than Cloud-focused modern devs.
       | 
       | I also predict RedHat will block patches to make this happen,
       | since their money comes from K8s. But someone just needs to make
       | a Systemd fork using Systemd's own API and we can finally fix all
       | the crap Systemd devs like to pretend is somebody else's bug.
        
         | candiddevmike wrote:
         | Ha, I had a similar thought and started creating exactly that:
         | clusterd, a way to cluster and schedule things on systemd
         | instances. I haven't gotten very far, but if you're interested
         | in helping me please reach out (email in profile).
        
         | Fiahil wrote:
         | > The main purpose of K8s is just to run microservices.
         | 
         | No. The main purpose of K8s is to make portable << running
         | multiple copies across servers, routing from one load-balancing
         | port to many ephemerally-allocated ports, collecting logs,
         | providing a consistent control API, limit system resources,
         | networking, secrets... >>.
         | 
         | Systemd could do all that, but then it would not be Systemd
         | anymore; it would just be another K8s.
        
         | zaat wrote:
         | > All the extra shit like CNI and Operators are there because
         | K8s' own abstraction is too complicated so they had to add more
         | abstractions to deal with it.
         | 
         | For example, take two operators I'm currently implementing
         | for a client: ArgoCD and the RabbitMQ Topology operator. The
         | first allows GitOps, meaning that we update a config file and
         | the software running in the clusters immediately gets updated
         | or has its configuration applied. The second lets me declare
         | exchanges and queues like any other Kubernetes object. Now,
         | I'm sure it is possible to do the same in SystemD; anything
         | can be done with enough engineering. But is it really the
         | right tool for the job? In any case, the function of Operators
         | is not to simplify overly complex abstractions; that is simply
         | incorrect.
        
         | loriverkutya wrote:
         | This comment gives me the "if all you have is SystemD,
         | everything looks like a nail" feeling.
        
         | topspin wrote:
         | > The main purpose of K8s is just to run microservices.
         | 
         | Agreed, except I'd add 'stateless' to the microservices part.
         | That's all k3s is good for and trying to make k8s do more is
         | awful.
         | 
         | > Some bored admin just needs to make a patch to make systemd
         | processes talk together, and half the point of Kubernetes will
         | be gone.
         | 
         | Far more is needed. CRIU needs to work; live migrating
         | containers between hosts. Network overlay, with transparent
         | encryption, name resolution and access control are necessary.
         | Bulletproof network storage management is crucial. Containers
         | need to easily float among hosts, and the networking and
         | storage they depend on must follow seamlessly.
         | 
         | Most of the pieces exist, but we're a long way off. I believe
         | the future is an operating system with integrated clustering
         | solely focused on hosting containers and the networking and
         | storage they need.
        
           | yjftsjthsd-h wrote:
           | > CRIU needs to work; live migrating containers between hosts
           | 
           | Wait, k8s can use CRIU to live migrate containers?
        
             | metadat wrote:
             | Not commonly; I've never gotten CRIU working in real life.
             | AFAIK it's not feasible unless you go all-in on a VMware
             | stack.
        
           | 0xbadcafebee wrote:
           | You don't need any of that stuff to run microservices.
           | Stateless is up to the app, not the orchestrator (you can
           | still run a stateful containerized app in K8s). You
           | absolutely do not need containers, network overlay,
           | "transparent encryption" (do you mean a service mesh?), name
           | resolution (dnsmasq works fine), and access controls are
           | inherent in systemd. Network storage is unrelated if you mean
           | NFS; if you mean persistent storage volume claims shuffled
           | around nodes, yeah, that's... not a stateless microservice.
           | And.... no, we don't need CRIU, but it works just fine
           | without K8s.
           | 
           | All you need to run microservices is 1) a statically-compiled
           | stateless app, 2) a systemd unit file to run the app and
           | restart it when it stops, 3) load any credentials into
           | systemd's credential manager, 4) iptables for network ACLs,
           | 5) linux user/group permissions, 6) an Nginx load balancer to
           | route traffic to different services, 7) journald to keep
           | logs, 8) systemd to set per-service CPU/memory limits, 9) a
           | dynamic DNS client to register a service name in dnsmasq when
           | a service starts/stops. Systemd already handles intra-service
           | network communication by handing an open filehandle to a
           | service; alternately each service could grab its own random
           | port and register it with Nginx along with its service name
           | and the HTTP routes and hosts it wants to listen on.
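           | 
           | A hypothetical unit covering points 2, 3, 5 and 8 (the
           | name and paths are made up; the directives are real):
           | 
           |     # /etc/systemd/system/myservice.service
           |     [Unit]
           |     Description=Example stateless microservice
           |     After=network-online.target
           | 
           |     [Service]
           |     # (1)+(2): run the static binary, restart on exit
           |     ExecStart=/usr/local/bin/myservice
           |     Restart=always
           |     # (3): secret via systemd's credential manager
           |     LoadCredential=api-token:/etc/myservice/token
           |     # (5): unprivileged, dynamically allocated user
           |     DynamicUser=yes
           |     # (8): per-service CPU/memory limits
           |     MemoryMax=512M
           |     CPUQuota=50%
           | 
           |     [Install]
           |     WantedBy=multi-user.target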
           | 
           | If you shackle yourself to the design of K8s, or some
           | implementation of K8s (K8s doesn't have all the features you
           | described out of the box) and say "well that other system
           | isn't K8s!", then yeah, you will find systemd wanting.
        
             | chucky_z wrote:
             | You could run Nomad and use parameterized jobs with
             | raw_exec to get something like this. I know that systemd
             | has some lifecycle stuff where you can create/run systemd
             | units directly through the cli instead of using unit files
             | which should allow you to hook the lifecycle into a nomad
             | style service invocation but I can't recall exactly how to
             | do this.
             | 
             | This doesn't quite roll into the 'only systemd,' but it
             | allows you to build exactly what you're looking for.
        
               | 0xbadcafebee wrote:
               | I would rather systemd added a single feature to its
               | already vast arsenal, and then Nomad wouldn't be needed
               | either.
        
               | chucky_z wrote:
               | Figure out how to revive https://github.com/coreos/fleet
               | as something native in systemd?
        
         | TakeBlaster16 wrote:
         | > K8s is just too expensive, complicated, and time consuming to
         | survive. As soon as Systemd can network as a cluster, the
         | writing's on the wall. The only reason it hasn't happened yet
         | is the Systemd developers are basically desktop-oriented Linux
         | fogies rather than Cloud-focused modern devs.
         | 
         | So "all" we need to do is rewrite systemd as a distributed
         | system, and make it "cloud-focused" and "modern"?
         | 
         | No thanks. Right tool for the right job.
        
           | 0xbadcafebee wrote:
           | It's basically a distributed system already! All you need for
           | feature parity with K8s is a parent node sending child nodes
           | commands via its existing API. Everything else is already
           | there.
        
             | cpuguy83 wrote:
             | Like https://github.com/coreos/fleet?
        
             | qbasic_forever wrote:
             | Oh and what if that parent node dies, now all the children
             | are orphaned and without a leader to tell them what to do?
             | Kind of breaks down.
             | 
             | The problem is distributed state and consensus. It's not
             | as simple as "just have a leader send nodes commands".
             | There are well known algorithms and systems for dealing
             | with distributed consensus. It might annoy you to find out
             | kubernetes uses one of them (etcd)! Use kubernetes!
        
               | 0xbadcafebee wrote:
               | You don't need it. A single synchronous redundant storage
               | mechanism (e.g. SQL database, S3, SAN/NFS) works fine.
               | But you could use etcd (or Consul) if you wanted. (You
               | also didn't mention what happens when your etcd gets
               | corrupted or loses consensus or its TLS certs expire...
               | :)
               | 
               | K8s is a very "opinionated" system. People think this is
               | the only architecture that works, but of course it isn't
               | true.
        
               | qbasic_forever wrote:
               | You do need distributed consensus--your 'single
               | synchronous redundant storage mechanism' cannot work
               | without it. You're basically just offloading the
               | distributed system state to this magic system you're
               | inventing. That's fine, and that's exactly what k8s does
               | with etcd. If you want to boil the ocean and reinvent k8s
               | because you don't like its yaml or whatever, that's cool
               | and I hope your employer supports you in those endeavors.
        
             | zozbot234 wrote:
             | So, like systemd socket activation (which is already there,
             | too)?
        
               | 0xbadcafebee wrote:
               | More like the job controller: https://github.com/kubernet
               | es/kubernetes/blob/master/pkg/con...
               | 
               | It's a loop that schedules jobs. When you create a job it
               | starts them, when one crashes it schedules another, when
               | you delete one it stops them. Systemd already does this
               | for a single node's service units, so all that's lacking
               | is doing this _across_ nodes.
        
             | TakeBlaster16 wrote:
             | Yes... kind of. A distributed system all on one machine
             | skips many of the hard parts. How does systemd handle a
             | network partition? How does systemd mount storage or direct
             | network connections to the right nodes? How do you migrate
             | physical hosts without downtime?
        
               | 0xbadcafebee wrote:
               | > How does systemd handle a network partition?
               | 
               | The same way K8s does... it doesn't. K8s management nodes
               | just stop scheduling anything until the worker nodes are
               | back. When they're back the job controller just does
               | exactly what it was doing before, monitoring the jobs and
               | correcting any problems.
               | 
               | > How does systemd mount storage
               | 
               | Systemd has a _Mount unit configuration_ that manages
               | system mounts.
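               | 
               | A made-up example (the unit file name must match
               | the escaped mount path):
               | 
               |     # /etc/systemd/system/var-lib-app.mount
               |     [Mount]
               |     What=/dev/disk/by-label/appdata
               |     Where=/var/lib/app
               |     Type=ext4
               |     Options=defaults
               | 
               |     [Install]
               |     WantedBy=multi-user.target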
               | 
               | > or direct network connections to the right nodes
               | 
               | The same way K8s does... use some other load balancer.
               | Most people use cloud load balancers pointed at an
               | Ingress (Nginx & IPTables) or _NodePort_. The latter
               | would simply be an Nginx service (Ingress) listening on
               | port 80/443 and load-balancing to all the other nodes
               | that the service was on, which you can get by having the
               | services send Nginx an update when they start via dynamic
               | config.
               | 
               | > How do you migrate physical hosts without downtime?
               | 
               | The same way K8s does.... stop the services on the node,
               | remove the node, add a new node, start the services. The
               | load balancer sends connections to the other node until
               | services are responding on the new node.
        
               | teraflop wrote:
               | > The same way K8s does... it doesn't. K8s management
               | nodes just stop scheduling anything until the worker
               | nodes are back
               | 
               | This is not entirely accurate. K8s makes it _possible_ to
               | structure your cluster in such a way that it can tolerate
               | certain kinds of network partitions. In particular, as
               | long as:
               | 
               | * there is a majority of etcd nodes that can talk to each
               | other
               | 
               | * there is _at least one_ instance of each of the other
               | important daemons (e.g. the scheduler) that can talk to
               | the etcd quorum
               | 
               | then the control plane can keep running. So the cluster
               | administrator can control the level of fault-tolerance by
               | deciding how many instances of those services to run, and
               | where. For instance, if you put 3 etcd's and 2 schedulers
               | in different racks, then the cluster can continue
               | scheduling new pods even if an entire rack goes down.
               | 
               | If you assign the responsibility for your cluster to a
               | single "parent" node, you're inherently introducing a
               | point of failure at that node. To avoid that point of
               | failure, you have to offload the state to a replicated
               | data store -- which is exactly what K8s does, and which
               | leads to many of the other design decisions that people
               | call "complicated".
        
         | qbasic_forever wrote:
         | Systemd is great... for one machine. Systemd has no concept of
         | multi-machine orchestration. It has no distributed state or
         | consensus which is the foundation of multiple machines
         | orchestrating work. I suspect systemd by design will _never_
         | try to do multi-machine orchestration and prefer people use an
         | existing solution... like kubernetes.
         | 
         | I love systemd but it's not designed for managing a cluster of
         | machines.
        
         | devmunchies wrote:
         | How would you run multiple instances of an app on a single
         | machine with systemd? I.e., if they are listening on the same
         | port.
        
         | zozbot234 wrote:
         | You can run containers under systemd already via systemd-nspawn
         | (and even manage/orchestrate them via machinectl) but
         | configuring them is just as complex as on any other
         | containerization solution... Don't see much of a point. And
         | containers provide a lot more than just replicating static
         | linking, they're about namespacing of all resources and not
         | just a different filesystem root.
        
         | [deleted]
        
       | cad1 wrote:
       | Maybe it is implied by being on lwn.net, but the list is limited
       | to open source orchestration platforms. In a broader discussion I
       | would expect to see AWS ECS mentioned.
        
         | Kaytaro wrote:
         | They mentioned it indirectly-
         | 
         | > "seemingly every tech company of a certain size has its own
         | distribution and/or hosted offering to cater to enterprises"
        
         | jordemort wrote:
         | The implication is correct; I did not write about ECS because
         | it is not open source.
        
           | hkrgl wrote:
           | Out of curiosity, what other non-open source platforms do
           | people use?
        
       | bamazizi wrote:
       | One thing missing from the article is the "load balancer" aspect
       | of the orchestrators. I believe K8S favours big cloud providers
       | tremendously due to load balancer lock-in or handcuff or whatever
       | else you want to call it. For Kubernetes there's a WIP
       | MetalLB, but I personally haven't had confident success
       | running it on my own servers.
       | 
       | K8S setup complexity is already highlighted but I'd add upgrade
       | and maintenance pain to the mix as well.
       | 
       | I've only recently started with Nomad, and while it's so simple,
       | I question myself at every step because, coming from K8S, I've
       | gotten used to complexity and over-configuration. Nomad allows
       | you to bring your own loadbalancer, and I'm experimenting with
       | Traefik & Nginx now.
        
         | dixie_land wrote:
         | We ran a large number of services on a large number of k8s
         | clusters handling 1M+ QPS
         | 
         | What worked for us is to use k8s strictly for
         | deployment/container orchestration, we have our own Envoy-based
         | network layer on top of k8s and do not use Services or Ingress
         | or Ingress Controllers
        
         | hitpointdrew wrote:
         | >I believe K8S favours big cloud providers tremendously due to
         | load balancer lock-in or handcuff or whatever else you want to
         | call it. For Kubernetes there's a WIP MetalLB, but I
         | personally haven't had confident success running it on my own
         | servers.
         | 
         | > Nomad allows you to bring your own loadbalancer
         | 
         | I think this is a widely held myth that you can't easily self-
         | host k8s due to lack of access to a cloud load balancer. You
         | can use your own load balancers with Kubernetes, it isn't that
         | hard. I have production k8s clusters running on on-prem
         | hardware right now using haproxy.
         | 
         | I use kubeadm when setting up my cluster and point the control
         | plane endpoint to the haproxy load balancer.
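         | 
         | Roughly like this (the IPs and DNS name here are made up):
         | 
         |     # haproxy.cfg: TCP pass-through to the API servers
         |     frontend k8s-api
         |         bind *:6443
         |         mode tcp
         |         default_backend k8s-api-servers
         | 
         |     backend k8s-api-servers
         |         mode tcp
         |         balance roundrobin
         |         server cp1 10.0.0.11:6443 check
         |         server cp2 10.0.0.12:6443 check
         |         server cp3 10.0.0.13:6443 check
         | 
         | and then bootstrap the cluster against that endpoint:
         | 
         |     kubeadm init \
         |       --control-plane-endpoint "k8s-api.example.com:6443" \
         |       --upload-certs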
         | 
         | Then when it comes to the ingress I use the nginx ingress and
         | have it set up exactly as described in the "Using a self-
         | provisioned edge" section of the documentation here:
         | https://kubernetes.github.io/ingress-nginx/deploy/baremetal/
        
         | verdverm wrote:
         | k8s has good, non-cloud based ingress controllers.
         | 
         | https://github.com/kubernetes/ingress-nginx
         | 
         | https://doc.traefik.io/traefik/providers/kubernetes-ingress/
        
           | bamazizi wrote:
           | To run k8s on bare metal server clusters there's only Metallb
           | option to bring user traffic into clusters NodePort services,
           | via externally exposed IP address(es). I wasn't talking about
           | k8s internal load balancers and reverse proxies
        
             | hitpointdrew wrote:
             | This is not true at all, you can use a self-provisioned
             | edge (load balancer) like haproxy.
             | 
             | See https://kubernetes.github.io/ingress-
             | nginx/deploy/baremetal/
        
             | [deleted]
        
             | verdverm wrote:
             | You were talking about cloud, now you are talking about
             | bare metal?
             | 
             | Of course you have to route traffic to your cluster, but
             | you are implying some cloud based load balancer lockin
             | which just isn't true
        
               | chucky_z wrote:
               | I believe the OP is talking about LoadBalancer
               | (https://kubernetes.io/docs/tasks/access-application-
               | cluster/...), and I believe you're talking about Ingress
               | (https://kubernetes.io/docs/concepts/services-
               | networking/ingr...).
        
               | verdverm wrote:
               | It's still an optional, rarely used service config, and
               | does not imply lockin
        
               | buzer wrote:
               | It's very commonly used by the ingress controllers. You
               | can get things working with hostNetwork or custom service
               | port ranges, but that's a lot rarer than doing it via
               | load balancer.
               | 
               | Third option would be to expose pods directly in the
               | network. EKS does this by default, but from my experience
               | it's quite rare for people to leverage it.
        
               | verdverm wrote:
               | Yes, an ingress resource will typically create a load
               | balancer if you are in the cloud, but not a service
               | resource.
        
       | KronisLV wrote:
       | This seems like a pretty well written overview!
       | 
       | As someone who rather liked Docker Swarm (and still likes it,
       | running my homelab and private cloud stuff on it), it is a bit
       | sad to see it winding down like this, even though there were
       | attempts to capitalize on the nice set of simple functionality
       | that it brought to the table, like CapRover: https://caprover.com/
       | 
       | Even though there is still some nice software to manage installs
       | of it, like Portainer: https://www.portainer.io/ (which also
       | works for Kubernetes, like a smaller version of Rancher)
       | 
       | Even though the resource usage is far lower than that of almost
       | any Kubernetes distro that I've used (microk8s, K3s and K0s
       | included), the Compose format is pretty much amazing for most
       | smaller deployments, and Compose is still one of the better
       | ways to run things locally, in addition to Swarm for remote
       | deployments (Skaffold or other K8s local cluster solutions just
       | feel complex in comparison).
       | 
       | And yet, that's probably not where the future lies. Kubernetes
       | won. Well, Nomad is also pretty good, admittedly.
       | 
       | Though if you absolutely do need Kubernetes, personally I'd
       | suggest that you look in the direction of Rancher for a simple UI
       | to manage it, or at least drill down into the cluster state,
       | without needing too much digging through a CLI:
       | https://rancher.com/
       | 
       | Lots of folks actually like k9s as well, if you do like the TUI
       | approach a bit more: https://k9scli.io/
       | 
       | But for the actual clusters, assuming that you ever want to self-
       | host one, ideally a turnkey solution, RKE is good, K0s is also
       | promising, but personally I'd go with K3s: https://k3s.io/ which
       | has been really stable on DEB distros and mostly works okay on
       | RPM ones (if you cannot afford OpenShift or to wait for
       | MicroShift).
       | 
       | My only pet peeve is that the Traefik ingress is a little bit
       | under-documented (e.g. how to configure common use cases, like
       | an SSL certificate, one with an intermediate certificate, maybe
       | a wildcard, or perhaps just using Let's Encrypt, and how to set
       | defaults vs defining them per domain).
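       | 
       | For what it's worth, the per-domain case can at least be
       | handled with a plain Ingress plus a TLS secret (created with
       | "kubectl create secret tls"), which the bundled Traefik
       | should honour; a sketch with a made-up hostname:
       | 
       |     apiVersion: networking.k8s.io/v1
       |     kind: Ingress
       |     metadata:
       |       name: app
       |     spec:
       |       tls:
       |         - hosts: ["app.example.com"]
       |           secretName: app-example-com-tls
       |       rules:
       |         - host: app.example.com
       |           http:
       |             paths:
       |               - path: /
       |                 pathType: Prefix
       |                 backend:
       |                   service:
       |                     name: app
       |                     port:
       |                       number: 80
       | 
       | The cluster-wide defaults are the part that still feels
       | under-documented.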
       | 
       | For the folks with thicker wallets, though, I'd suggest to just
       | give in and pay someone to run a cluster for you: that way you'll
       | get something vaguely portable, will make lots of the aspects in
       | regards to running it someone else's problem and will be able to
       | leverage the actual benefits of working with the container
       | orchestrator.
       | 
       | Oh, also Helm is actually pretty nice, since the templates it
       | generates for you by default are actually reasonable and can
       | serve as a nice starting point for your own application
       | deployment descriptions, versus trying to write your own
       | Kubernetes manifest from 0 which feels like an uphill battle, at
       | least when compared with the simplicity of Compose.
       | 
       | In regards to Nomad, however, I feel like a UI that would let you
       | click around and get bunches of HCL to describe the deployment
       | that you want would be pretty cool. At least when starting out,
       | something like that can really flatten the learning curve. That's
       | also kind of why I like Portainer or Rancher so much, or even
       | something like: https://k8syaml.com/
       | 
       | > To extend its reach across multiple hosts, Docker introduced
       | Swarm mode in 2016. This is actually the second product from
       | Docker to bear the name "Swarm" -- a product from 2014
       | implemented a completely different approach to running containers
       | across multiple hosts, but it is no longer maintained. It was
       | replaced by SwarmKit, which provides the underpinnings of the
       | current version of Docker Swarm.
       | 
       | On an unrelated note, this, at least to me, feels like pretty bad
       | naming and management of the whole initiative, though. Of course,
       | if the features are there, it shouldn't be enough to scare anyone
       | away from the project, but at the same time it could have been a
       | bit simpler.
        
         | miralgj wrote:
         | Agreed. I actually run Docker Swarm in production for a small
         | number of applications that are for system administrators to
         | use and I have never had any real issues with it. It's sad to
         | see it move towards obscurity.
        
           | LeSaucy wrote:
           | Nomad is the heir apparent for swarm deployments that don't
           | want to take on all the extra complexity of k8s. I do prefer
           | a docker-compose-style YAML manifest file to HCL,
           | however.
        
             | ViViDboarder wrote:
             | Oh, I'm the opposite. I find HCL so much nicer that when I
             | was attempting to use k8s for my home servers I decided to
             | do it with Terraform. Now I'm running Nomad, Consul, and
             | Vault.
        
       ___________________________________________________________________
       (page generated 2022-08-24 23:01 UTC)