[HN Gopher] Tools to Run Kubernetes Locally
       ___________________________________________________________________
        
       Tools to Run Kubernetes Locally
        
       Author : EntICOnc
       Score  : 108 points
       Date   : 2021-08-06 11:37 UTC (11 hours ago)
        
 (HTM) web link (yankee.dev)
 (TXT) w3m dump (yankee.dev)
        
       | nickjj wrote:
       | I tried pretty much all of them and settled with kind.
       | 
        | It takes 1 command and 1 minute to spin up a 3-node cluster
        | from scratch, and everything works the way you'd expect on a
        | real managed Kubernetes cluster. It took only a few lines of
        | YAML to configure the cluster so that ingress works the same
        | way it would on a managed provider, meaning you can write
        | your manifests exactly as you would there. It's also super
        | easy to get your local Docker images into the cluster.
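        | 
        | For reference, a minimal sketch of such a setup (the ports,
        | node count, and image name are illustrative, not a
        | recommendation):
        | 
        |     # kind-config.yaml
        |     kind: Cluster
        |     apiVersion: kind.x-k8s.io/v1alpha4
        |     nodes:
        |     - role: control-plane
        |       kubeadmConfigPatches:
        |       - |
        |         kind: InitConfiguration
        |         nodeRegistration:
        |           kubeletExtraArgs:
        |             node-labels: "ingress-ready=true"
        |       extraPortMappings:
        |       - containerPort: 80
        |         hostPort: 80
        |       - containerPort: 443
        |         hostPort: 443
        |     - role: worker
        |     - role: worker
        | 
        |     kind create cluster --config kind-config.yaml
        |     kind load docker-image my-app:dev  # local image, no registry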
       | 
       | Did anyone else come to the same conclusion? Maybe I'm missing
       | something but you can't really ask for much more than the above.
        
         | 8bitbuddhist wrote:
         | Agreed, Kind is perfect for running disposable, reproducible
         | multi-node clusters that are essentially upstream K8s.
        
       | mfer wrote:
        | A couple of things I notice about this list are the use of
        | the terminal and the reliance on Linux or Docker being
        | available. For example, minikube (by default) and k3d both
        | look for Docker Desktop to be installed.
       | 
        | We've been working on something that takes a different
        | direction and makes it a simple app to run. It's called
        | Rancher Desktop (https://rancherdesktop.io/). It uses k3s
        | under the hood. The goal is to make running k8s locally as
        | easy as running a normal desktop app.
       | 
       | Note, as the creator of Rancher Desktop I'm entirely biased and
       | this is a bit of a shameless plug.
        
       | sul_tasto wrote:
       | I'm surprised docker desktop wasn't mentioned:
       | https://www.docker.com/products/docker-desktop
        
         | jayd16 wrote:
         | Yeah, and a simple docker compose for local dev does the job
         | pretty easily.
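          | 
          | E.g., a minimal sketch (service names and images are
          | placeholders):
          | 
          |     # docker-compose.yml
          |     version: "3.8"
          |     services:
          |       web:
          |         build: .
          |         ports: ["8000:8000"]
          |         depends_on: [db]
          |       db:
          |         image: postgres:13
          |         environment:
          |           POSTGRES_PASSWORD: dev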
        
       | silasb wrote:
        | Off topic: I really dig `fly` from `fly.io`. Is there any
        | model similar to that, akin to Heroku but for k8s?
        
         | loloquwowndueo wrote:
          | Do you mean flyctl? "fly" is the CLI tool from
          | concourse-ci.org.
        
           | silasb wrote:
           | Yes, `flyctl` from https://fly.io/docs/getting-started/login-
           | to-fly/
        
       | rckrd wrote:
       | One interesting fact is that minikube actually used to be very
       | similar to k3s. (I used to be a maintainer of minikube)
       | 
        | We called it localkube, and it was an all-in-one binary with
        | features stripped out that ran the kubelet, apiserver,
        | kube-proxy, etcd, and other components as goroutines.
        | Eventually it had to be removed: upstream k8s was moving too
        | fast, and there were fundamental issues that couldn't be
        | fixed without large patches (TPRs, the predecessor to CRDs,
        | were one example).
       | 
       | The problem with this approach is that it requires a ton of
       | maintenance and there can be subtle bugs that come up.
       | 
        | The other distinction not found on this list is that only
        | Docker for Desktop and minikube are cross-platform
        | solutions. Running Kubernetes and Docker on Linux is fairly
        | trivial; it's managing a lightweight virtual machine and
        | that extra layer of abstraction elsewhere that makes it
        | really difficult.
        
         | 0xbadcafebee wrote:
         | Personally I think it's time for a Kubernetes fork. Let K8s get
         | huge and change too frequently, but let's get a desktop/less-
         | robust alternative going that doesn't change frequently at all.
         | 
         | The hope would be that we would stop trying to chase our tails
         | keeping up to date with K8s, and instead we could maintain a
         | more generic abstraction layer which could be applied to other
         | PaaS/SaaS systems as well. You use the abstractions to separate
         | the design from the implementation. The "desktop"
         | implementation could be insecure, non-HA, non-robust, easy to
         | install & maintain. The "cloud" implementation could be all the
         | best practices. The containers would be the same, but the
         | service definition, command-line tools, etc _could_ all be
         | different, and just be written to interface with the
         | abstraction specification.
         | 
          | Obviously you would still need to test the hell out of the
          | service in the cloud cluster before releasing to
          | production, but initial development could be much faster
          | when done locally in a simpler system. I think trying to
          | replicate the cloud implementation locally is just asking
          | for trouble; we know it will never match up.
        
           | zzyzxd wrote:
           | > Let K8s get huge
           | 
            | Correct me if I am wrong, but I don't see a trend of k8s
            | getting huge. In fact, I get the feeling that it is
            | trying to be smaller. If you look at the release notes
            | for each minor version, lots of in-tree features are
            | being deprecated and users are being pointed to
            | out-of-tree implementations: DNS, networking, the
            | metrics server, cloud-vendor-specific features (storage
            | classes, load balancers), container runtimes... just to
            | name a few. And the community has put a great amount of
            | effort into hardening the API machinery, to make it
            | easier for 3rd parties to implement specific features
            | for specific businesses.
           | 
           | This kind of gives me hope that one day I may be able to just
           | run a vanilla k8s straight out of a release from
           | github.com/kubernetes/kubernetes without any "batteries" as a
           | platform, and I can later decide what I want to install, all
           | by myself.
        
         | gizdan wrote:
          | Out of curiosity, how come k3s has been able to keep up?
          | Or is my assumption incorrect?
        
           | rckrd wrote:
           | (1) Maturity of Kubernetes APIs. Fewer upstream changes.
           | 
           | (2) It's one of the main products of a venture-backed
           | company. At Google, at the time, it was tough to get
           | engineers on products like minikube.
        
         | SomaticPirate wrote:
          | Having worked around k3s quite a bit, one aspect I miss
          | from minikube is docker image loading. This hurts when
          | using k3s as a local dev cluster vs minikube & kind.
        
           | comma_at wrote:
           | In k3d this is already available.
           | https://k3d.io/usage/commands/k3d_image_import/
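            | 
            | E.g. (cluster and image names are placeholders):
            | 
            |     k3d image import my-app:dev -c dev-cluster
            |     # rough equivalents elsewhere:
            |     minikube image load my-app:dev
            |     kind load docker-image my-app:dev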
        
       | thiht wrote:
        | Honest question: what do you need a local Kubernetes for?
        | It's an orchestrator; surely you don't need that locally?
        | Don't you have a way to mock your services, or simple
        | compose files for local integration?
        | 
        | I have worked with Marathon and Kubernetes over the last few
        | years and never needed to deploy them locally; it seems
        | strange to even need that.
        
       | revskill wrote:
        | K8S is fine until you want to scale a service that uses
        | local storage. Is there any cheap cloud service that offers
        | such a plan?
        
         | cyberpunk wrote:
         | Don't do that...
        
       | atombender wrote:
        | I found that the biggest blocker to running Kubernetes
        | locally is that it's _very_ resource-hungry, especially on a
        | Mac with Linux virtualization. This is also true of Docker
        | on Mac, but Kubernetes is worse.
       | 
       | I have not tried k3s or any of the other attempts to produce a
       | stripped-down version, so I'm curious if they fix this problem.
       | But the fact remains that the Kubernetes daemons are heavily
       | based on polling; even a Kubernetes instance that does nothing
       | will be using a lot of CPU.
       | 
        | Right now, the tool of my dreams does not run anything
        | locally at all, but transparently maintains a small remote
        | Kubernetes cluster in the cloud of my choice. The benefit is
        | that I would have all the tools at my disposal: public load
        | balancers, the ability to effortlessly scale up my nodes
        | when needed, etc.
        
       | singhrac wrote:
       | I've recently become interested in learning Kubernetes for some
       | work I'm doing. But I'm not a sysops engineer or even really a
       | backend person, so I kind of want to pick the right horse to back
       | and learn some simple tools so that I can get builds onto GCP or
       | AWS. Mostly I want to make sure there is a StackOverflow post for
       | whatever problem I will encounter.
       | 
       | What do people think is the best 'stack' to invest in? I'm also
       | curious about experiences with CapRover and/or FastAPI.
        
         | mwaitjmp wrote:
         | Terraform to create and manage the cluster and then helm to
         | deploy any applications.
         | 
         | Those two tools have been around for a while and you'll find
         | plenty of documentation and other people looking for help
         | online.
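          | 
          | A minimal sketch of that workflow (the release and chart
          | names are hypothetical):
          | 
          |     terraform init && terraform apply   # provision cluster
          |     helm install my-app ./charts/my-app \
          |         --namespace my-app --create-namespace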
        
         | mplewis wrote:
         | Use a language you know and like, then use its popular web
         | framework. FastAPI is just fine.
         | 
         | Package your apps into containers. Then run them somewhere.
         | DigitalOcean Apps, AWS Lambda, and Google Cloud Run are all
         | fine. So is Kubernetes.
        
       | whalesalad wrote:
       | Running kube is the most trivial part, especially with some of
       | these tools like k3s/k0s.
       | 
       | Hard part is deployments. Managing Kube's raw yaml files is
       | tedious. What tools are y'all using there? I am having a tough
       | time with Helm, but I feel like that is the standard answer these
       | days.
       | 
       | Back in ~2016 at FarmLogs we built and operated our own
       | (Clojure!) deployment system that was backed by a Github
       | repository. Developers worked with a DSL that was really just a
       | slimmed down version of Kube yaml. This eliminated lots of
       | boilerplate thanks to our company-wide conventions for building
       | microservices. The master branch reflected the intended state of
       | the cluster. It was easy to reason about.
       | 
       | I have not found a modern/open-source tool that is similar, but I
       | have to imagine it exists.
        
         | luxurytent wrote:
         | This is how I feel too.
         | 
         | At my workplace, we can easily spin up a kube cluster locally
         | but our Helm charts aren't transportable for local development.
         | They can be modified, torn apart, extracted from, but we don't
         | have a seamless and elegant way to say "Hey, this thing in
         | production, that's running via Helm/k8s, I want the same thing
         | locally ... with these environment variables adjusted"
         | 
         | I don't think that's a difficult thing to accomplish and there
         | is appetite from others I work with to do so, but there's
         | _effort_ behind it.
         | 
         | Maybe there are smarter ways and we just don't know it yet.
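          | 
          | Something like layered values files would probably be the
          | obvious route (file names and values are hypothetical, and
          | assume the chart exposes them):
          | 
          |     helm upgrade --install my-app ./charts/my-app \
          |         -f values.yaml -f values.local.yaml \
          |         --set image.tag=dev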
        
         | stormbrew wrote:
         | > Back in ~2016 at FarmLogs we built and operated our own
         | (Clojure!) deployment system that was backed by a Github
         | repository. Developers worked with a DSL that was really just a
         | slimmed down version of Kube yaml. This eliminated lots of
         | boilerplate thanks to our company-wide conventions for building
         | microservices. The master branch reflected the intended state
         | of the cluster. It was easy to reason about.
         | 
          | The lack of _this_ is what frustrates me so much about all
          | of these things. Writing some bundle of yaml and just
          | telling the management software "here, go run this 'app'",
          | with no state or version management of it, makes me feel
          | like I'm running a stack like it's a Windows desktop. Just
          | random shit everywhere, and I don't know what any of it
          | is. Maybe you're keeping it in a git repo, but is what's
          | running live what's in the git repo? Who the heck knows.
         | 
          | It all feels like a huge step backwards to me. Let me use
          | version control as the source of truth, not just a spot to
          | throw things in.
        
         | oweiler wrote:
         | I've found kustomize to be really nice. Much less powerful than
         | Helm, easy to learn, good enough for most tasks.
         | 
         | https://github.com/kubernetes-sigs/kustomize
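          | 
          | A minimal kustomization is just (file names are whatever
          | you already have):
          | 
          |     # kustomization.yaml
          |     resources:
          |     - deployment.yaml
          |     - service.yaml
          |     images:
          |     - name: my-app
          |       newTag: dev
          | 
          |     kubectl apply -k .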
        
         | mbushey wrote:
         | Helm is horrible. Please use Kustomize. Life will get better,
         | it did for me. I also use Sops (with goabout/kustomize-
         | sopssecretgenerator) and ArgoCD for a very slick well
         | integrated system. The CRDs in ArgoCD are so well done that it
         | feels like native K8s CD.
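          | 
          | For reference, an ArgoCD Application is about this much
          | YAML (repo URL and paths are placeholders):
          | 
          |     apiVersion: argoproj.io/v1alpha1
          |     kind: Application
          |     metadata:
          |       name: my-app
          |       namespace: argocd
          |     spec:
          |       project: default
          |       source:
          |         repoURL: https://github.com/my-org/deploy
          |         path: apps/my-app
          |         targetRevision: main
          |       destination:
          |         server: https://kubernetes.default.svc
          |         namespace: my-app
          |       syncPolicy:
          |         automated: {}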
        
           | mfer wrote:
            | Kustomize and Helm take different routes. Kustomize
            | expects the user to know Kubernetes resources and their
            | schemas. Helm and charts expect the chart creator to be
            | an expert in Kubernetes, while the chart consumer (the
            | Helm user) doesn't need to be one; they get a simple
            | interface to deploy an application.
            | 
            | One of the issues for those running applications in
            | Kubernetes is the complexity. Kubernetes is complex. So
            | much so that people like Kelsey Hightower have said
            | Kubernetes is a platform for creating platforms. For
            | most people, learning Kubernetes will slow down their
            | velocity, which is a negative.
            | 
            | Most people need a simpler system on top of Kubernetes
            | to make them effective.
        
             | joshuak wrote:
              | There is no need to use Kubernetes if it slows down
              | your velocity; just use Docker or directly drive some
              | VMs. If you _NEED_ Kubernetes, then you simply won't
              | get most of the benefits it provides by leaning on
              | shortcut tools. Tools like Helm are easy to use, but
              | at the cost of breaking core design elements and a
              | massive increase in complexity just below the surface.
              | 
              | Here's the rule of thumb for Kubernetes: use tools to
              | manage the files, not the cluster; the cluster
              | description is the source of truth, not the cluster
              | itself.
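              | 
              | In concrete terms, a minimal sketch (manifests kept in
              | a manifests/ directory):
              | 
              |     kubectl diff -f manifests/    # drift vs. the files
              |     kubectl apply -f manifests/   # converge to them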
        
             | sascha_sl wrote:
              | Helm itself is pretty awful though.
              | 
              | For instance, when people upgraded to 1.16, which
              | retired apps/v1beta1, Kubernetes automatically and
              | transparently converted resources of those types.
              | 
              | Helm, having once created these resources as v1beta1,
              | will completely refuse to touch the deployment.
              | Without some specialized extra tooling that people
              | created in the wake of this, resolving it requires
              | deleting the entire namespace and reinstalling the
              | chart. The issue on the helm repo was closed as
              | wontfix.
        
               | mfer wrote:
               | This still comes down to assumptions.
               | 
               | If someone was using raw kubernetes manifests instead of
               | Helm and charts they would need to know the schemas and
               | details of those manifests. Then, when 1.16 came out they
               | would need to update to the new apiVersion and modify
               | their manifests for any differences in schema.
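                | 
                | For a Deployment, that migration is small but
                | manual; a minimal sketch (the selector became
                | mandatory in apps/v1):
                | 
                |     apiVersion: apps/v1        # was: apps/v1beta1
                |     kind: Deployment
                |     spec:
                |       selector:                # required in apps/v1
                |         matchLabels:
                |           app: my-app
                |       ...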
               | 
               | This requires someone who knows and understands k8s
               | resources. This IS NOT a typical app dev or person who
               | wants to run apps in k8s. I've been in numerous circles
               | of app folks who have complained about this expectation
               | from the k8s community.
               | 
                | On the main Helm project, the stance is that what's
                | in a chart is up to the chart's authors, just as
                | what's in a Debian package is up to the Debian
                | package's authors, and that those folks should do a
                | good job maintaining them. That's why it was a
                | wontfix issue.
               | 
                | What we see as an underlying issue with k8s is the
                | experience that app devs and people who want to run
                | apps have to deal with. It's currently like
                | expecting an app dev to write in assembly and know
                | when instructions are deprecated or changed. It's in
                | the pre-high-level-language phase. This comparison
                | to assembly is one that has come out of the k8s
                | community itself.
        
               | sascha_sl wrote:
                | I generally do not disagree that the experience for
                | appdevs on k8s is bad, but I do disagree that they
                | should be the audience to begin with.
               | 
               | Helm is still functionally broken. A helm chart that was
               | kept up to date still could've used v1beta1 early on. All
               | it takes is a cluster that hasn't been patched in a
               | while. This is 100% a helm issue. (Note that this bug
               | also triggers if the chart has been updated, but helm
               | recorded a different GVK)
               | 
               | What you actually need is a platform team that provides a
               | middle layer. I was part of such a team between 2018 and
               | 2021, and my main takeaway was to not let developers have
               | write access at all after slicing a namespace with 25k
               | pods out of etcd because gRPC refused to operate on it at
               | that point (that was a misconfigured CronJob).
               | 
               | You need something like KubeVela, not Helm.
               | 
               | https://kubevela.io/
        
           | booleanbetrayal wrote:
           | Big fan of ArgoCD here. The development team is also very
           | responsive with good processes in place, from my experience
           | in the repo.
        
         | dharmab wrote:
          | For some more complex deployments (i.e. not simple batch
          | jobs or web services), we've used the Kubernetes API
          | directly from programming languages like Python or Go,
          | often as a controller running within the cluster. That
          | gives us the full data-structure toolset and the freedom
          | of a programming language.
         | 
         | It reminds me of the old article "Beating the Averages":
         | http://www.paulgraham.com/avg.html
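          | 
          | As a minimal sketch with the official Python client
          | (pip install kubernetes; the names here are hypothetical):
          | 
          |     from kubernetes import client, config
          | 
          |     config.load_kube_config()  # or load_incluster_config()
          |     apps = client.AppsV1Api()
          | 
          |     # build a Deployment as plain data structures, no YAML
          |     deployment = client.V1Deployment(
          |         metadata=client.V1ObjectMeta(name="my-app"),
          |         spec=client.V1DeploymentSpec(
          |             replicas=2,
          |             selector=client.V1LabelSelector(
          |                 match_labels={"app": "my-app"}),
          |             template=client.V1PodTemplateSpec(
          |                 metadata=client.V1ObjectMeta(
          |                     labels={"app": "my-app"}),
          |                 spec=client.V1PodSpec(containers=[
          |                     client.V1Container(
          |                         name="my-app", image="my-app:1.0"),
          |                 ]),
          |             ),
          |         ),
          |     )
          |     apps.create_namespaced_deployment(
          |         namespace="default", body=deployment)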
        
           | whalesalad wrote:
           | Afraid to admit I had not even considered that idea, good
           | thinking.
        
           | q3k wrote:
           | Yeah, just writing some code that talks to the Kubernetes API
           | really ought to be more common.
           | 
           | It's not a great API, but it certainly beats writing bash
           | scripts that shell out to kubectl and yq.
           | 
           | In the same vein, I wish people knew that YAML isn't even
           | really a native Kubernetes thing, it's just one of the many
           | ways you can serialize k8s objects...
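            | 
            | E.g., the API server natively speaks JSON (and
            | protobuf); YAML is client-side convenience:
            | 
            |     kubectl get deployment my-app -o json
            |     kubectl apply -f manifest.json  # JSON works fine too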
        
             | sascha_sl wrote:
             | > writing bash scripts that shell out to kubectl and yq
             | 
              | We did this for a while, rendering our stack out in Go
              | from a container, back when server-side apply was not
              | yet a reality.
             | 
             | The main issue is that lifecycle management of resources
             | beyond writing code for a single deployment is hard to do
             | with the typed clients, and the dynamic client used to be
             | even more of a mess than it is now. There's a lot of
             | implicit magic happening, baked into kubectl in a non-
             | reusable way you just couldn't replicate before 1.16.
        
               | dharmab wrote:
               | When we started back in 1.13 we wrote a function that
               | took a set of structs as an arg and ran a kubectl
               | subprocess to deploy it. We still had all of the niceness
               | of native data structures. Later we transitioned to more
               | native API calls.
        
             | dharmab wrote:
              | In my experience, engineers with a formal education in
              | Data Structures and Algorithms tend to "get it" in a
              | way that others don't: the underlying data structure
              | is a set of maps/structs, and YAML, JSON, etc. are
              | just text formats.
        
         | jrsdav wrote:
          | Have you heard of Flux? It's not a manifest abstraction
          | tool per se (like Helm or Kustomize), but it automates the
          | management of your manifests using GitOps. It sounds like
          | it might be what you're looking for.
         | 
         | https://fluxcd.io/docs/concepts/
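          | 
          | A minimal sketch of getting started (Flux v2; the owner,
          | repo, and path are placeholders):
          | 
          |     flux bootstrap github --owner=my-org \
          |         --repository=fleet --path=clusters/dev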
        
         | gray_-_wolf wrote:
          | We are using tanka [0] and are fairly happy with it. The
          | jsonnet language allows a nice layer of abstraction while
          | still letting you directly generate Kubernetes resources
          | when necessary. We wrote an in-house library to map a
          | (very) simplified .jsonnet file to a Kubernetes
          | deployment; after all, 90% of deployments are mostly the
          | same and differ only in minor details. Seems to work
          | reasonably well.
         | 
         | 0: https://github.com/grafana/tanka
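          | 
          | A minimal sketch of that kind of library (hypothetical,
          | not the actual in-house code):
          | 
          |     // lib/app.libsonnet
          |     {
          |       deployment(name, image, replicas=1):: {
          |         apiVersion: 'apps/v1',
          |         kind: 'Deployment',
          |         metadata: { name: name },
          |         spec: {
          |           replicas: replicas,
          |           selector: { matchLabels: { app: name } },
          |           template: {
          |             metadata: { labels: { app: name } },
          |             spec: { containers: [
          |               { name: name, image: image },
          |             ] },
          |           },
          |         },
          |       },
          |     }
          | 
          |     // my-app.jsonnet
          |     local app = import 'lib/app.libsonnet';
          |     app.deployment('my-app', 'my-app:1.0', replicas=2)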
        
         | mushishi wrote:
          | In a previous project we had Kotlin-DSL-based definitions,
          | and even though I only read them, they seemed to make a
          | lot of sense: static typing, and reuse of code, e.g.
          | between QA and production environment definitions.
         | 
         | I think it may have been this:
         | https://github.com/fkorotkov/k8s-kotlin-dsl
        
         | marcc wrote:
         | I agree that "apps" and deploying to the cluster is the hard
         | part. Helm is definitely the most popular choice, but there are
         | others.
         | 
         | kpt: https://github.com/GoogleContainerTools/kpt
         | 
         | kapp: https://carvel.dev/kapp/
         | 
         | cnab: https://cnab.io
         | 
         | There are many (many) more. But these are somewhat popular
         | recently.
        
           | ithkuil wrote:
            | I made a variation on the theme of kpt that you may find
            | interesting: https://github.com/mkmik/knot8
           | 
           | (Rendered manpage at https://knot8.io/, if you like that
           | exposition format)
        
           | whalesalad wrote:
           | Thank you for the links. I had not heard of cnab until now
           | but that looks really interesting. I will look into all of
           | these.
        
         | birdyrooster wrote:
         | jsonnet, works great and is composable however you like
        
         | q3k wrote:
         | > Hard part is deployments. Managing Kube's raw yaml files is
         | tedious. What tools are y'all using there? I am having a tough
         | time with Helm, but I feel like that is the standard answer
         | these days.
         | 
         | kubecfg (with jsonnet) or CUE. Gives you actual composability
         | and a powerful programming language for reducing boilerplate.
         | Oh, and no text templating.
         | 
          | With kubecfg, deploying/updating is:
          | 
          |     kubecfg update foo.jsonnet
         | 
         | No need to first build any Helm images or for any scaffolding.
         | Just get `kubecfg` on your PATH and you're done. Version things
         | the same way you version the rest of your code. Build your own
         | abstractions that solve your particular problems as you go,
         | instead of relying on someone's idea on how your deployments
         | should work. Perfect for building orga-wide microservice
         | deployment boilerplate.
        
       | oxplot wrote:
        | If you're on Linux, `kind` is the way to go.
        
       | orsenthil wrote:
       | > tools to run kubernetes locally
       | 
       | And do what? This is the most important question.
        
         | bo0tzz wrote:
         | Development environments?
        
       ___________________________________________________________________
       (page generated 2021-08-06 23:01 UTC)