[HN Gopher] Everything useful I know about kubectl
___________________________________________________________________
Everything useful I know about kubectl
Author : alexhwoods
Score : 186 points
Date : 2021-07-05 12:04 UTC (10 hours ago)
(HTM) web link (www.atomiccommits.io)
(TXT) w3m dump (www.atomiccommits.io)
| vasergen wrote:
| In case you work a lot with k8s, you can also take a look at
| k9s; highly recommend it. It can save a lot of time typing,
| especially to quickly check what pods/deployments are running,
| execute a command in a pod, describe a resource to understand why
| it failed, change cluster/namespace and so on.
| dmitriid wrote:
| All that is fine and dandy until you run a command and it spews a
| serialised Go struct instead of a proper error. And, of course,
| that struct has zero relationship to what the actual error is.
|
| Example: The Job "export-by-user" is invalid:
| spec.template: Invalid value:
| core.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"",
| GenerateName:"", Namespace:"", SelfLink:"", UID:"",
| ResourceVersion:"", Generation:0,
| CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0,
| loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil),
| DeletionGracePeriodSeconds:(*int64)(nil),
| Labels:map[string]string{"controller-
| uid":"416d5527-9d9b-4d3c-95d2-5d17c969be19", "job-name": "export-
| by-user", Annotations:map[string]string(nil),
| OwnerReferences:[]v1.OwnerReference(nil),
| Finalizers:[]string(nil), ClusterName:"",
| ManagedFields:[]v1.ManagedFieldsEntry(nil)},
| Spec:core.PodSpec{Volumes:[]core.Volume(nil),
| InitContainers:[]core.Container(nil),
| Containers:[]core.Container{core.Container{Name:"....
|
| And it just goes on.
|
| The actual error? The job is already running and cannot be
| modified.
| nvarsj wrote:
| That one has annoyed people for a long time. See
| https://github.com/kubernetes/kubernetes/issues/48388.
|
| I'm pretty sure if you have the time to make a PR to fix it, it
| would be welcome. But I'm guessing it's non trivial or it would
| have been fixed by now - probably a quirk of the code
| generation logic.
| smarterclayton wrote:
| Wow, I'd forgotten about this. The reason no one has fixed it is
| partially because I didn't do a great job of describing the fix I
| expected to see (clarified now). Reopened and will poke folks to
| look.
| dmitriid wrote:
| > I'm pretty sure if you have the time to make a PR to fix
| it, it would be welcome.
|
| Google had a _net income_ of $17.9 _billion_ in just Q1 of
| 2021.
|
| I believe they have the resources to fix that, and I will not
| be shamed into "if you have time, please open a PR towards
| this opensource project".
|
| > But I'm guessing it's non trivial or it would have been
| fixed by now - probably a quirk of the code generation logic.
|
| I remember the "quirks of generation logic" being used as an excuse
| for Google's horrendous Java APIs for their cloud services. "It's
| just how we generate it from specs and we don't have the time to
| make it pretty".
|
| For the life of me I can't find that GitHub issue that called this
| out. Somehow their other APIs (for example, .NET) are much better.
|
| Edit: found it
|
| https://github.com/googleapis/google-cloud-
| java/issues/2331#... and
| https://github.com/googleapis/google-cloud-
| java/issues/2331#...
| busterarm wrote:
| Typically in my experience, the further you get away from
| AdWords, the more broken Google's client libraries are.
|
| I recall a little more than half a decade ago settling on
| the PHP version of their Geocoding API client library for a
| project because it was the only one whose documentation
| matched how you were actually supposed to authenticate.
| iechoz6H wrote:
| Break in case of Fire (i.e. rollback to the previous deployment
| version):
|
| kubectl rollout undo deployment <deployment-name>
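|
| If you want to see what you'd be rolling back to first, or roll
| back to a specific revision, something like this works (the
| revision number is just an example):
|       kubectl rollout history deployment <deployment-name>
|       kubectl rollout undo deployment <deployment-name> --to-revision=2
|       kubectl rollout status deployment <deployment-name>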
| throwaaskjdfh wrote:
| Along those lines, this was an interesting statement:
|
| "You should learn how to use these commands, but they shouldn't
| be a regular part of your prod workflows. That will lead to a
| flaky system."
|
| It seems like there's some theory vs. practice tension here. In
| theory, you shouldn't need to use these commands often, but in
| practice, you should be able to do them quickly.
|
| How often is it the case in reality that a team of Kubernetes
| superheroes, well versed in these commands, is necessary to
| make Continuous Integration and/or Continuous Deployment work?
| lazyant wrote:
| For the read-only commands, you can obviously use them as much as
| you want; the issue is with the write commands. I see them as a
| tool for troubleshooting (e.g., you are adding a debugging pod, not
| changing the running system) and for emergency work that would be
| faster on the command line than running the CI/CD pipeline, but the
| final state needs to be in sync with the code (tools like ArgoCD
| help with this), otherwise it's a mess.
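|
| A throwaway debugging pod is one line, something like (the image is
| just an example):
|       kubectl run tmp-debug --rm -it --image=busybox -- sh
| With --rm it's gone as soon as you exit the shell, so nothing
| drifts from what's in the code.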
| nsxwolf wrote:
| I'd love to know the easiest way to answer this question: "what
| IP address and port is the microservice pod the CI server just
| deployed listening on?"
| whalesalad wrote:
| Why wouldn't that be deterministic? You should be using a service
| for that.
|       kubectl get svc -l app=<your-app-name>
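|
| If you do want the raw address and port, something along these
| lines should work (standard Service fields):
|       kubectl get svc <your-app-name> -o jsonpath='{.spec.clusterIP}:{.spec.ports[0].port}'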
| lazyant wrote:
| The port is trivial: `kubectl get pod <yourpod> --output
| jsonpath='{.spec.containers[*].ports[*].containerPort}'` or, if you
| don't remember the JSON path, just `k describe pod <yourpod> | grep
| Port`.
|
| For the IP address, why do you need that? With k8s DNS you can
| easily find anything by name.
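|
| For example, assuming the default cluster.local domain, a service
| is reachable inside the cluster at a predictable name:
|       <service>.<namespace>.svc.cluster.local
| so hard-coding pod IPs is rarely necessary.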
| vvladymyrov wrote:
| One of the most useful things (for me) about kubectl is moving to
| the k9s CLI UI for k8s. Makes daily debugging so much easier.
| bobbyi_settv wrote:
| How is this command from the page:
|       # Lint a Helm chart
|       # Good to put in pre-merge checks
|       $ helm template . | kubeval -
|
| different/better than "helm lint"
| (https://helm.sh/docs/helm/helm_lint/)?
| alexhwoods wrote:
| I could be wrong here, but I think `helm lint` just checks that
| the chart is formed correctly -- Go templating and all.
|
| I don't think it validates the Kubernetes resources.
|
| Here's an example:
|
| $ helm create foo
|
| $ cd foo
|
| Then change "apiVersion" in deployment.yaml to "apiVersion:
| nonsense"
|
| In the linting, I got
|
|       $ helm lint
|       ==> Linting .
|       [INFO] Chart.yaml: icon is recommended
|
|       1 chart(s) linted, 0 chart(s) failed
|
| $ helm template . | kubeval -
|
| ERR - foo/templates/deployment.yaml: Failed initializing schema
| https://kubernetesjsonschema.dev/master-
| standalone/deploymen...: Could not read schema from HTTP,
| response status is 404 Not Found
| theden wrote:
| One useful debugging trick I use often is to edit a deployment or
| pod via `kubectl edit` and update the command to be `tail -f
| /dev/null`
|
| e.g.,
|       spec:
|         containers:
|         - command:
|           - bash
|           - -c
|           - tail -f /dev/null
|
| (and comment out any liveness or readiness probes)
|
| Very useful to then `exec` with a shell in the pod to debug things
| or test out different configs quickly, check the environment, etc.
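|
| Concretely, that's just something like:
|       kubectl exec -it <pod-name> -- bash
| and from there you can poke at env vars, config files, DNS, and so
| on while the container sits idle.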
| mdavid626 wrote:
| Nice trick, I usually use sleep 10000000 as the command.
| geofft wrote:
| "sleep infinity" works on GNU coreutils as well as recent
| versions of busybox (which is what Alpine uses as coreutils).
| LukeShu wrote:
| "recent" = BusyBox 1.31 (October 2019) or newer; for
| inclusion in Alpine 3.11 (December 2019) or newer.
| namelosw wrote:
| I have been using kubectl + zsh for quite a while.
|
| But now my choice is Intellij (or other IDEs from JetBrains) +
| Lens, which I find more productive and straightforward (more GUI,
| fewer commands to memorize). Here's my setup and workflow:
|
| 1. For each repository, I put the Kubernetes deployment, service
| configurations, etc. in the same directory. I open and edit them
| with Intellij.
|
| 2. There's also a centralized repository for Ingress, Certificate,
| Helm charts, etc., which I also open with Intellij. Spending some
| time organizing Kubernetes configs is really worth it. I'm working
| with multiple projects and the configs get overwhelming very
| quickly.
|
| 3. Set shortcuts in Intellij for applying and deleting the
| Kubernetes resources for the current configs, so I can create,
| edit, and delete resources in a blink.
|
| 4. There's a Kubernetes panel in Intellij for basic monitoring
| and operations.
|
| 5. For more information and operations, I would use Lens instead of
| Intellij. The operations are very straightforward; I can navigate
| back and forth and tweak configurations much faster than I could
| with shell commands alone.
| nielsole wrote:
| Want a crude way to see the pods running on a node with the READY,
| STATUS and RESTARTS fields instead of the `kubectl describe node`
| output?
|
| kubectl get po --all-namespaces -o wide | grep $NODE_NAME
|
| Of course this becomes unbearably slow the more pods you have.
| arianvanp wrote:
| You can use fieldSelector
|
| https://stackoverflow.com/questions/39231880/kubernetes-api-...
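|
| e.g., something like this pushes the filtering to the API server
| instead of grep:
|       kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=$NODE_NAME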
| throwaway984393 wrote:
| > kubectl is self-documenting, which is another reason why it's
| 100x better than a UI
|
| A console tool has a UI too: the shell. And GUIs can be self-
| documenting as well: tooltips, help bars, interactive prompts,
| manuals.
| tut-urut-utut wrote:
| I would add one more important point about kubectl:
|
| If you don't work at Google, you don't need the complexity of
| Kubernetes at all, so better to forget everything you already know
| about it. The company would be grateful.
|
| Joke aside, trying to sell something to the masses that could
| potentially benefit only 0.001% of the projects is just
| insincere.
|
| Pure CV pump and dump scheme.
| imglorp wrote:
| This is getting downvoted for cynicism maybe, but I feel it's
| the most important advice here. Know /when/ to use Kubernetes.
|
| It's very often the wrong tool to deploy our tiny app but many
| of us go along with it because it ticks some management boxes
| for various buzzwords, compliance, hipness, or whatever. Once you
| pull out this hammer factory, it's a big and complicated one, so
| you will probably need a full-time team to understand and manage
| it. It's also a metric hammer factory, so you'll need to adapt all
| your other tooling to interoperate. Most of us can get by with
| lesser hammer factories; even k3s is less management.
|
| If you just need to deploy some containers, think hard if you
| want to buy the whole tool factory or just a hammer.
| emptysongglass wrote:
| We get paid because we know these tools. It's why we're
| desired: because the company thinks they want K8s or they're
| one foot in EKS and they're doubling down. We don't get hired
| because we dare to suggest they dismantle their pilot cluster
| and take a sharp turn into Nomad.
|
| Most of us aren't the engineering heads of our departments.
| So you'll forgive us if we continue pushing the moneymakers
| we have in our heads and setting up our homelab clusters. I
| want to be paid, I want to be paid well. It may as well be
| pushing the technology stack that scales to megacorps because, who
| knows, maybe I'll make it there one day.
| nvarsj wrote:
| This kind of comment is on every single HN post about
| Kubernetes and is tiresome. I also think it's off topic (TFA
| is about kubectl tricks, not about the merits of K8s).
| busterarm wrote:
| I think it's important to have comments like those as
| Google, who does not use Kubernetes, is exerting a lot of
| pressure on the industry to adopt it. It is an extremely
| complicated tool to learn to use well and companies act
| like there aren't reasonable alternatives.
|
| Those of us who have gone through it are often coming back
| with war stories saying to use something else. Some of us
| have invested thousands of man hours into this already and
| have strong opinions. At the very least, give Nomad a look.
| It is maybe a tenth of the effort to run for exactly the
| features most people want and then some.
|
| People need to be made aware that there are options. I have
| friends at companies that have large teams just dedicated
| to managing Kubernetes and they still deal with failure
| frequently or they spend their entire day-to-day tuning
| etcd.
| kube-system wrote:
| Kubernetes is much simpler than what we would have to do without
| it, and my team is much, much smaller than anything at
| Google. For what it does, it offers some good opinions for what
| might otherwise be a tangle of dev ops scripts.
|
| If what you want to deploy is best described as "an
| application" it's probably not the right tool for the job. If
| what you want to deploy is best described as "50 interconnected
| applications" it's probably going to save you time.
| 0xEFF wrote:
| Modern istio provides a lot of value to a single application.
| mTLS security, telemetry, circuit breaking, canary
| deployments, and better external authentication and
| authorization. I've seen each done so many different ways.
| Nice to do it once at the mesh layer and have it be done for
| everything inside the cluster.
| busterarm wrote:
| As someone who runs both in production, Nomad would almost
| certainly meet your needs.
|
| Learning how to operate Kubernetes well takes a while, and I would
| say it's only worth the investment for an extremely tiny percentage
| of companies.
| sgarland wrote:
| Maybe, but for better or worse it's also become the
| industry standard, much like Terraform.
|
| If you don't know k8s and Terraform, you're shooting
| yourself in the foot for future jobs.
| busterarm wrote:
| K8s is an enormously complex piece of software and I
| haven't met a great many people who "know" it inside and
| out.
|
| Basic concepts and how to write a job/service/ingress,
| sure. Knowing the internals and how to operate it? I'd
| say that's only for specialists. Most people don't need
| to know what a Finalizer is or does. Most people aren't
| going to write operators.
|
| It is a multi-year investment of time to deeply
| understand this tool and it's not necessary for everyone.
| pgwhalen wrote:
| The same could be said for the linux kernel, and yet we
| still run all of our software on it.
| busterarm wrote:
| Except with the kernel, you only have to be familiar with
| the system calls and you don't need a team of people just
| to run, maintain and upgrade the kernel.
|
| That and it tries to make breaking changes on the
| timescale of decades rather than every other minor
| release (so, once or twice a year?).
| pgwhalen wrote:
| > Except with the kernel, you only have to be familiar
| with the system calls
|
| I think it's safe to assume that any non-trivial use of
| linux involves non-default configuration.
|
| > you don't need a team of people just to run, maintain
| and upgrade the kernel.
|
| My relatively small company employed linux admins before we adopted
| (on prem) kubernetes. Their work has changed a bit since then, but
| it isn't meaningfully more laborious.
|
| I assume that less effort is required for cloud
| kubernetes offerings.
| [deleted]
| alexhwoods wrote:
| Agreed. I think over time we'll just get more abstracted away from
| it. GKE Autopilot, for example.
|
| I think you still have to understand the lego block in
| your hand though, so you can combine it well with the
| other parts of your system.
| kube-system wrote:
| Maybe so, but anyone should definitely use more criteria
| than my few word generalization to choose their deployment
| infrastructure. :)
|
| We (mostly) chose k8s over other solutions because of other
| tools/providers in the ecosystem that made business sense
| for us. But we did need _something_ to abstract our
| deployment complexity.
|
| I'm mostly suggesting that I suspect many of the people
| with bad k8s experience didn't really need it.
|
| I've seen a number of people wrap a simple application in a
| container, slap it in a deployment/service/ingress and call it a
| day. It works, but using k8s that way doesn't really add much
| value.
| pgwhalen wrote:
| > If what you want to deploy is best described as "an
| application" it's probably not the right tool for the job. If
| what you want to deploy is best described as "50
| interconnected applications" it's probably going to save you
| time.
|
| This is an excellent way of looking at it. I've struggled for
| many years to come up with a response to hacker news comments
| saying you don't need kubernetes, but this sums it up about
| as well as I could imagine.
| alek_m wrote:
| Every time I'm starting a new service to run internally or
| reviewing something we have going, I find myself struggling to
| find the right instance type for the needs.
|
| For instance, there are three families (r, x, z) that optimize
| RAM in various ways in various combinations and I always forget
| about the x and z variants.
|
| So I put together this "cheat sheet" for us internally and
| thought I'd share it for anyone interested.
|
| Pull requests welcome for updates:
| https://github.com/wrble/public/blob/main/aws-instance-types...
|
| Did I miss anything?
| secondcoming wrote:
| Both AWS and GCP allow you to customise a machine's spec.
| You're not limited to the ones they offer.
| loriverkutya wrote:
| Maybe I'm not 100% up-to-date with AWS EC2 offerings, but as far as
| I'm aware, you can only choose from predefined instance types.
| seneca wrote:
| It seems this may have been posted on the wrong article, as
| it's completely unrelated to the topic.
| dmitryminkovsky wrote:
| Nice list. Learned a couple neat things. Thank you!
|
| Would like to add that my favorite under-appreciated can't-live-
| without kubectl tool is `kubectl port-forward`. So nice being
| able to easily open a port on localhost to any port in any
| container without manipulating ingress and potentially
| compromising security.
| ithkuil wrote:
| Not only containers, it can also forward services!
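|
| e.g. (names and ports are placeholders):
|       kubectl port-forward svc/my-service 8080:80
|       kubectl port-forward pod/my-pod 5432:5432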
| [deleted]
| icythere wrote:
| One of the issues I've often seen is that my teammates send the
| "right command" to the wrong cluster and context. We have a bunch
| of clusters, and it's always surprising to see some laptop
| deployments on ... production cluster.
| So I wrote this https://github.com/icy/gk8s#seriously-why-dont-
| just-use-kube... It doesn't come with any autocompletion by
| default, but it's a robust way to deal with multiple clusters.
| Hope this helps.
|
| Edit: Fix typo err0rs
| throwaway984393 wrote:
| I'm lazy and I don't like having to remember "the right way" to
| run something, so my solution is directories and wrappers. I
| keep a directory for every environment (dev, stage, prod, etc)
| for every account I manage.
|       env/
|         account-a/
|           dev/
|           stage/
|           prod/
|         account-b/
|           dev/
|           stage/
|           prod/
|
| I keep config files in each directory. I call a wrapper script,
| _cicd.sh_, to run certain commands for me. When I want to deploy to
| stage in account-b, I just do:
|       ~ $ cd env/account-b/stage/
|       ~/env/account-b/stage $ cicd.sh deploy
|       cicd.sh: Deploying to account-b/stage ...
|
| The script runs _../../../modules/deploy/main.sh_ and passes in
| configs from the current directory ("stage") and its parent
| directory ("account-b"). Those configs are hard-coded
| with all the correct variables. It's impossible for me to
| deploy the wrong thing to the wrong place, as long as I'm in
| the right directory.
|
| I use this model to manage everything (infrastructure,
| services, builds, etc). This has saved my bacon a couple times;
| I might have my AWS credentials set up for one account (_export
| AWS_PROFILE=prod_) but be trying to deploy nonprod, and the deploy
| immediately fails because the configs had hard-coded values that
| didn't match my environment.
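|
| A rough sketch of what such a wrapper can look like (the file names
| and layout here are made up, not the real script):
|       #!/usr/bin/env bash
|       # cicd.sh: run a module using the configs of the directory you're in
|       set -euo pipefail
|       here="$(pwd)"          # e.g. .../env/account-b/stage
|       cmd="$1"; shift
|       # hard-coded per-account and per-environment settings
|       source "$(dirname "$here")/account.env"
|       source "$here/env.env"
|       exec "$here/../../../modules/$cmd/main.sh" "$@"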
| onestone wrote:
| I simply put my apps in different namespaces on
| dev/stage/prod/etc. That way a kubectl command run against
| the wrong cluster will fail naturally.
| busterarm wrote:
| A lot of people like kubectx. Or specifying contexts.
| Personally I hate both approaches.
|
| For the several dozen clusters that I manage, I have separate
| kubeconfig files for each and I use the --kubeconfig flag.
|
| It's explicit and I have visual feedback in the command I run
| for the cluster I'm running against, by short name. No stupidly
| long contexts.
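|
| e.g. (the paths are just an illustration):
|       kubectl --kubeconfig ~/.kube/clusters/prod-us-east get pods
|       kubectl --kubeconfig ~/.kube/clusters/dev-eu-west get pods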
| icythere wrote:
| Exactly! Having separate configs is very easy. I have support for
| that in my tool ;)
| kemitche wrote:
| My approach was to have a default kubeconfig for dev/QA
| environments, and a separate for production. I had a quick
| wrapper script to use the prod config file - it would set the
| KUBECONFIG env var to use the prod file, and update my PS1 to
| be red, a clear differentiator that reminds me I'm pointed at
| prod.
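|
| Roughly this, assuming bash (the file name and colors are just an
| example):
|       # put in ~/.bashrc; run `kprod` to point the shell at prod
|       kprod() {
|         export KUBECONFIG="$HOME/.kube/config-prod"
|         PS1="\[\e[41m\][PROD]\[\e[0m\] $PS1"   # red [PROD] tag in the prompt
|       }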
| cfors wrote:
| Not a perfect solution, but I add a prompt signaling both my
| current namespace and cluster, along with some safeguards for any
| changes to our production environment. In practice, I haven't ever
| deployed something to production by mistake.
|
| I use a custom-written script, but I've used this one in the past;
| it's pretty nice.
|
| https://github.com/jonmosco/kube-ps1/blob/master/kube-ps1.sh
| majewsky wrote:
| I have a prompt display as well, but to my own dismay, earlier this
| year I applied some QA config to a prod system. (It did not cause
| substantial harm, thankfully.) After that, I changed my prompt
| display so that the names of production regions are highlighted
| with a red background. That seems to really help in situations of
| diminished attentiveness, from what I can tell.
| xyst wrote:
| Couldn't that be solved by not allowing production access to those
| clusters? Most k8s providers should allow role-based access
| (read/write/deploy).
| pchm wrote:
| Agree, this is a huge pain point when dealing with multiple
| clusters. I wrote a wrapper for `kubectl` that displays the
| current context for `apply` & `delete` and prompts me to
| confirm the command. It's not perfect, but it's saved me a lot
| of trouble already -- but encouraging other members of the team
| to have a similar setup is another story.
|
| Here's the script (along with a bunch of extra utils):
| https://github.com/pch/dotfiles/blob/master/kubernetes/utils...
| icythere wrote:
| Very valuable script. Thanks for sharing.
| iechoz6H wrote:
| We partially resolve this by having different namespaces in
| each of our environments. Nothing is ever run in the 'default'
| namespace.
|
| So if we think we're targeting the dev cluster and run 'kubectl
| -n dev-namespace delete deployment service-deployment' but our
| current context is actually pointing to prod then we trigger an
| error as there is no 'dev-namespace' in prod.
|
| Obviously we can associate specific namespaces with contexts to
| bypass this safety net, but it can help in some situations.
| physicles wrote:
| direnv is our magic sauce for this. We enforce that all devs
| store the current context in an environment variable
| (KUBECTL_CONTEXT), and define the appropriate kubectl alias to
| always use that variable as the current context. To do stuff in
| a cluster, cd into that cluster's directory, and direnv will
| automatically set the correct context. I also change prompt
| colors based on the current context.
|
| (This way, the worst you can do is re-apply some yaml that
| should've already been applied in that cluster anyway)
|
| We also have a Makefile in every directory, where the default
| pseudo-target is the thing you want 99% of the time anyway:
| kustomize build | kubectl apply -f -
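|
| A minimal version of that setup, assuming direnv is installed (the
| cluster name is an example):
|       # clusters/prod-eu/.envrc
|       export KUBECTL_CONTEXT=prod-eu
|
|       # in everyone's shell rc
|       alias kubectl='kubectl --context "${KUBECTL_CONTEXT:?cd into a cluster dir first}"'
| The :? expansion makes the alias refuse to run at all outside a
| cluster directory.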
| lazyant wrote:
| I like the spirit of this, but for dealing with multiple clusters
| kubectx is pretty standard, always highlights where you are, and we
| don't have to type the cluster name in every command. Also,
| avoiding "kubectl delete" seems like such a narrow case; I can
| still delete with "k scale --replicas=0" and possibly many other
| ways. At this point you are better off with a real RBAC
| implementation.
| jeffbee wrote:
| isn't kubectx the problem, not the solution? You think you
| are in one context but you are actually in another. You
| wanted to tear down the dev deployments but you nuked the
| production ones instead.
| Osiris wrote:
| k9s is a fantastic tool. It's a CLI GUI written in go.
| icythere wrote:
| I used k9s before and it's an awesome tool. Though it doesn't help
| when I want to send a command to a teammate and he just executes it
| on the wrong cluster. That's the problem I want to solve.
| birdyrooster wrote:
| Create a user/role for deleting (or whatever the dangerous action
| is) resources in the prod cluster/namespace. Set up RBAC which
| allows your employees to impersonate that user/role using kubectl
| --as. This way, if you send your coworkers a command for the dev
| environment and they try to run it in prod, it will fail because
| they didn't run kubectl as the impersonated user.
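|
| e.g., something along these lines (the names are illustrative):
|       # only the impersonated account is allowed to delete in prod
|       kubectl --context prod -n payments delete deployment api \
|         --as system:serviceaccount:payments:deleter
| Without the --as flag the caller's own role applies and the delete
| is denied.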
| pm90 wrote:
| I'm glad to hear that this is a more common problem. When
| sharing kubectl commands, I always specify the --context
| flag explicitly so the person using it has to manually edit
| the context name to whatever they are using before running
| it.
| jeffbee wrote:
| More of a TUI than a CLI. I love its presentation but it
| falls apart if you have about 1000 pods or more.
| forestgagnon wrote:
| I wrote a convoluted tool for this problem which isolates
| kubectl environments in docker containers:
| https://github.com/forestgagnon/kparanoid.
|
| This approach allows the convenience of short, context-free
| commands without compromising safety, because the context info
| in the shell prompt can be relied on, due to the isolation.
|
| There are some things which don't work well inside a docker
| container (port-forwarding for example), but it does make it
| simple to have isolated shell history, specific kubectl
| versions, etc.
| pm90 wrote:
| This is really nice. I like that it doesn't munge the
| `kubectl` command itself.
___________________________________________________________________
(page generated 2021-07-05 23:01 UTC)