[HN Gopher] Writing a Kubernetes CRD Controller in Rust
___________________________________________________________________
Writing a Kubernetes CRD Controller in Rust
Author : lukastyrychtr
Score : 62 points
Date : 2021-01-09 20:15 UTC (2 hours ago)
(HTM) web link (technosophos.com)
(TXT) w3m dump (technosophos.com)
| praveenperera wrote:
| The library used in this post has gotten much better since the
| post was written (in 2019).
|
| Recently the ability to do `exec` and `attach` was added. It
| should reach feature parity with the Go version soon once `port
| forward` is added.
|
| https://github.com/clux/kube-rs
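|
| For reference, a rough sketch of what the new `exec` support
| looks like, using kube's `Api::exec` and `AttachParams` (hedged:
| based on kube releases from around this time, assuming the `ws`
| feature and a tokio runtime; the pod name "mypod" is a
| placeholder):
|
|     // Sketch only: runs `ls /` in a pod, like `kubectl exec`.
|     // Assumes kube with the `ws` feature, k8s-openapi, tokio.
|     use k8s_openapi::api::core::v1::Pod;
|     use kube::{api::{Api, AttachParams}, Client};
|     use tokio::io::AsyncReadExt;
|
|     #[tokio::main]
|     async fn main() -> Result<(), Box<dyn std::error::Error>> {
|         let client = Client::try_default().await?;
|         let pods: Api<Pod> = Api::namespaced(client, "default");
|
|         // Start the remote process and capture its stdout.
|         let mut attached = pods
|             .exec("mypod", vec!["ls", "/"], &AttachParams::default())
|             .await?;
|         let mut stdout = attached.stdout().ok_or("no stdout")?;
|
|         let mut out = String::new();
|         stdout.read_to_string(&mut out).await?;
|         println!("{}", out);
|         Ok(())
|     }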
| sanxiyn wrote:
| This is (2019).
| parhamn wrote:
| How has Kubernetes being a giant reactive resource API panned
| out? My first reaction was to like it, but toward the end of my
| time invested in the space it started to feel like there were
| too many moving parts and a hectic mess. Stacks became more
| difficult to reproduce (controllers, their versions, and
| platform limitations), and people moved too many things into
| custom resources.
|
| Did the custom resource controllers pan out well?
| jsabo wrote:
| I'm not sure what you mean by "too many things ended up as
| custom resources" -- it's the recommended extension point, and
| you can do a lot with them that you can't do with core types.
|
| Custom resources had/have some growing pains but I think they
| worked out pretty well. It can be very, very hard to test and
| distribute them, though. As someone who maintained a controller
| with paid support, I can say the test matrix gets pretty large
| pretty fast once you account for different k8s versions,
| different hosted versions (GKE, AKS, etc.), and different
| distros (OpenShift, Rancher, etc.). And that's before you even
| get into specific configurations: pod security policies,
| whether the control plane can communicate with the data plane,
| whether there is a service mesh.
|
| Resource versioning is hard to get right. Once a resource type
| is v1, it becomes difficult to extend: you can't easily add a
| beta field to it. Revising the schema can be hard, since
| anything more than a no-op conversion between versions requires
| a webhook, which requires a certificate chain; and while cert-
| manager is popular, it is not ubiquitous and regularly has
| breaking changes. Webhook setup issues made up a large portion
| of our support requests.
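|
| To make the webhook requirement concrete: the conversion
| endpoint is just an HTTPS handler that receives a
| ConversionReview and returns the objects rewritten into the
| requested version. A hedged sketch of the payload handling in
| Rust (the serde structs are hand-rolled here and the
| `replicaCount` -> `replicas` rename is a made-up example; the
| ConversionReview shape follows apiextensions.k8s.io/v1):
|
|     use serde::Deserialize;
|     use serde_json::{json, Value};
|
|     #[derive(Deserialize)]
|     struct ConversionReview { request: ConversionRequest }
|
|     #[derive(Deserialize)]
|     struct ConversionRequest {
|         uid: String,
|         #[serde(rename = "desiredAPIVersion")]
|         desired_api_version: String,
|         objects: Vec<Value>,
|     }
|
|     // Convert each stored object to the requested version and
|     // wrap the results in a ConversionReview response.
|     fn convert(review: ConversionReview) -> Value {
|         let ConversionRequest { uid, desired_api_version, objects } =
|             review.request;
|         let converted: Vec<Value> = objects
|             .into_iter()
|             .map(|mut obj| {
|                 // Made-up field rename: v1beta1 `replicaCount`
|                 // becomes v1 `replicas`.
|                 if let Some(spec) =
|                     obj.get_mut("spec").and_then(Value::as_object_mut)
|                 {
|                     if let Some(v) = spec.remove("replicaCount") {
|                         spec.insert("replicas".to_string(), v);
|                     }
|                 }
|                 obj["apiVersion"] =
|                     Value::String(desired_api_version.clone());
|                 obj
|             })
|             .collect();
|         json!({
|             "apiVersion": "apiextensions.k8s.io/v1",
|             "kind": "ConversionReview",
|             "response": {
|                 "uid": uid,
|                 "result": { "status": "Success" },
|                 "convertedObjects": converted,
|             }
|         })
|     }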
|
| As far as the general "reconciliation loop" architecture goes,
| you end up with something similar in most orchestration systems
| I've worked on, or you wish that you did. So overall I think
| that worked out well. Getting it right can be hard, but I think
| that's the nature of the beast.
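|
| A minimal sketch of that reconciliation loop as kube_runtime's
| Controller expresses it (hedged: API shape circa early-2021
| releases, and watching ConfigMaps here is just a stand-in for a
| custom resource):
|
|     use futures::StreamExt;
|     use k8s_openapi::api::core::v1::ConfigMap;
|     use kube::{api::{Api, ListParams}, Client};
|     use kube_runtime::controller::{Context, Controller, ReconcilerAction};
|     use std::time::Duration;
|
|     #[derive(Debug)]
|     struct Error;
|     impl std::fmt::Display for Error {
|         fn fmt(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result {
|             write!(f, "reconcile failed")
|         }
|     }
|     impl std::error::Error for Error {}
|
|     // Level-triggered: look at the observed object, drive the
|     // world toward the desired state, then schedule a requeue.
|     async fn reconcile(
|         cm: ConfigMap,
|         _ctx: Context<()>,
|     ) -> Result<ReconcilerAction, Error> {
|         println!("reconciling {:?}", cm.metadata.name);
|         Ok(ReconcilerAction {
|             requeue_after: Some(Duration::from_secs(300)),
|         })
|     }
|
|     // On error, back off and try again.
|     fn error_policy(_e: &Error, _ctx: Context<()>) -> ReconcilerAction {
|         ReconcilerAction { requeue_after: Some(Duration::from_secs(60)) }
|     }
|
|     #[tokio::main]
|     async fn main() -> Result<(), Box<dyn std::error::Error>> {
|         let client = Client::try_default().await?;
|         let cms: Api<ConfigMap> = Api::namespaced(client, "default");
|         Controller::new(cms, ListParams::default())
|             .run(reconcile, error_policy, Context::new(()))
|             .for_each(|res| async move {
|                 match res {
|                     Ok((obj, _)) => println!("reconciled {:?}", obj),
|                     Err(e) => eprintln!("reconcile error: {}", e),
|                 }
|             })
|             .await;
|         Ok(())
|     }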
| eecc wrote:
| Don't know but it's a shame that the k8s API doesn't have link
| fields... it kind of breaks the whole concept of automatic
| interfacing
| BenTheElder wrote:
| Can you elaborate on what you mean by link field? I've
| searched a bit but I'm not certain what you're referring to
| in this context.
| Pirate-of-SV wrote:
| HATEOAS I suppose. https://restfulapi.net/hateoas/
| freedomben wrote:
| Custom resources are really "operators", and there are some
| really great operators and some not-so-great ones.
| Notably OpenShift 4 is implemented as operators on top of K8s,
| so there is a lot you can do with them.
| Thaxll wrote:
| Well, look how popular Kubernetes is and how many thousands of
| things have been built on top of it because of its powerful and
| extensible API model.
| jacques_chester wrote:
| > _How has Kubernetes being a giant reactive resource API
| panned out?_
|
| It was never reactive, at least in the sense of the Reactive
| Manifesto. There is no system of backpressure.
|
| > _Did the custom resource controllers pan out well?_
|
| I think it will prove to be useful for a handful of uses, but
| that in general, it will wind up like WordPress and Jenkins
| plugins. Powerful, popular, but a guaranteed mess.
|
| Kubernetes was not originally designed to accommodate CRDs.
| There's no concept of tenancy, only things you can cobble
| together into tenancy-ish-alike-kinda shapes. There's RBAC, but
| it was designed before CRDs, which means that poorly designed
| custom controllers can be attack vectors.
|
| I think CRDs are useful and necessary[0], but that they are
| greatly overused and have expensive design flaws.
| Unfortunately, I don't have the wits to join into the multi-
| billion dollar market of "papering over things Kubernetes
| doesn't do very well but which can't now be changed".
|
| [0] I mean, I wrote a book about Knative, which is a purely
| CRD-based architecture.
| bogomipz wrote:
| >"I think CRDs are useful and necessary[0], but that they are
| greatly overused and have expensive design flaws."
|
| Could you elaborate on the "design flaws" part? Do you feel
| that the CRD model itself has design flaws or are you
| referring to specific projects' CRDs?
| Thaxll wrote:
| "The Go version was over 1700 lines long and was loaded with
| boilerplate and auto-generated code. The Rust version was only
| 127 lines long"
|
| Right ...
| jacques_chester wrote:
| I've written CRD controllers. This is an accurate description
| of what your code looks like.
|
| Because of Golang's anemic type system, massive code generation
| is the only viable way to deal with the dynamic blackboard that
| the API Server presents to clients.
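|
| To make that concrete, a hedged sketch of the Rust side: kube's
| CustomResource derive (attribute names and trait bounds have
| shifted between releases; `Book` is a made-up example) generates
| at compile time roughly what client-go projects keep as
| checked-in generated code:
|
|     use kube::CustomResource;
|     use schemars::JsonSchema;
|     use serde::{Deserialize, Serialize};
|
|     // The derive produces a full `Book` Kubernetes object type
|     // (metadata, spec, API plumbing) plus `Book::crd()`, which
|     // yields the CustomResourceDefinition manifest to apply.
|     #[derive(CustomResource, Clone, Debug, Deserialize, Serialize, JsonSchema)]
|     #[kube(group = "example.com", version = "v1", kind = "Book", namespaced)]
|     pub struct BookSpec {
|         pub title: String,
|         pub authors: Vec<String>,
|     }
|
|     fn main() {
|         // serde_yaml is assumed here just to print the manifest.
|         println!("{}", serde_yaml::to_string(&Book::crd()).unwrap());
|     }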
| Thaxll wrote:
| I also write CRDs, but I don't believe for one second that you
| have more than 10x less code, especially for generated code
| that you don't touch at all.
|
| It's just a bad idea to write something for Kubernetes that is
| not in Go: you have zero support, and k8s releases a new
| version every 6 months; good luck keeping up with that.
| jacques_chester wrote:
| It's a different language with a different type system.
| I've also interacted with the API Server using a Ruby
| library that relied on metaprogramming. I could do things
| in a dozen lines that would require thousands of lines of
| checked-in Go.
|
| Replying to your edit:
|
| > _It's just a bad idea to write something for Kubernetes that
| is not in Go: you have zero support, and k8s releases a new
| version every 6 months; good luck keeping up with that._
|
| Well, a few things.
|
| First, using client-go doesn't save you from version
| shifts. It is explicitly unsupported, so if they break your
| downstream system, stiff shit.
|
| Second, client-go is derived from code generators. The same
| generators also produce other clients. Unfortunately they
| are terrible, by virtue of OpenAPI generator being a
| hellish maze and because they leak Go-isms like a spiteful
| sieve. I've used that code generator to develop a proxy. It
| sucked.
|
| Third, this library is derived independently. It uses the
| same OpenAPI definitions as client-go, client-java and so
| forth. It's as easy to keep up to date as any of the
| others.
|
| Fourth, the Kubernetes API isn't fully tested. Not even
| close. Many libraries can rightfully claim to be just as
| conformant as client-go, despite being broken on ~66% of
| endpoints: https://apisnoop.cncf.io/
| Arnavion wrote:
| As the maintainer of the Rust bindings that the library
| used in the article (kube) is backed by, I can confirm
| that Kubernetes' openapi spec requires a lot of
| Kubernetes-specific handling to generate a good client,
| handling that generic openapi generators do not provide.
| Yes, that includes all the other Kubernetes clients in
| github.com/kubernetes-client, like Python and .Net, too.
|
| See https://github.com/Arnavion/k8s-openapi/blob/master/README.m...
| for a full description.
|
| I also confirm that I keep it up-to-date with Kubernetes
| releases and have been doing so for the ~3 years that
| it's been around. Not just the minor ones every few
| months, but even the point ones; these days the latter
| usually only involve updating the test cases rather than code
| changes, and they're done within a few hours of the upstream
| release.
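|
| For context on how that works: k8s-openapi pins the Kubernetes
| version at compile time through Cargo features (along the lines
| of `features = ["v1_20"]`), so the generated types are strongly
| typed against one spec. A hedged sketch of consuming them
| (field optionality has varied across releases):
|
|     use k8s_openapi::api::core::v1::{Container, Pod, PodSpec};
|     use k8s_openapi::apimachinery::pkg::apis::meta::v1::ObjectMeta;
|
|     // Build a Pod from the generated structs: required fields
|     // are plain values, optional ones are Option<_>.
|     fn demo_pod() -> Pod {
|         Pod {
|             metadata: ObjectMeta {
|                 name: Some("demo".to_string()),
|                 ..Default::default()
|             },
|             spec: Some(PodSpec {
|                 containers: vec![Container {
|                     name: "app".to_string(),
|                     image: Some("nginx".to_string()),
|                     ..Default::default()
|                 }],
|                 ..Default::default()
|             }),
|             ..Default::default()
|         }
|     }
|
|     fn main() {
|         println!("{:?}", demo_pod().metadata.name);
|     }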
___________________________________________________________________
(page generated 2021-01-09 23:00 UTC)