[HN Gopher] Show HN: GitOps Template for Kubernetes
___________________________________________________________________
Show HN: GitOps Template for Kubernetes
Hello HN, we're Philip and Louis from Glasskube
(https://github.com/glasskube/glasskube). We are working on a
package manager for Kubernetes that simplifies packaging complex
applications with multiple dependencies, ensuring they are
installed and kept up-to-date across multiple Kubernetes clusters.

Nowadays, it is best practice to use Git as a revision control
system for your Kubernetes configurations. Update automation
workflows like Renovate or Dependabot can create pull requests for
new versions of Docker images and Helm charts, but verifying that
these new package versions actually work is still a manual task.
By using the central (or a private) Glasskube repository
(https://github.com/glasskube/packages) together with our Renovate
integration
(https://docs.renovatebot.com/modules/manager/glasskube/), you can
ensure that new package versions run through our Minikube-based CI
workflows before they get published--similar to how the Homebrew
core tap works. We've just introduced readiness checks for
manifest-based deployments and use the flux-helm-controller to
wait for a Helm release to succeed.

Dependencies are resolved by our package controller. These
dependencies can either be cluster-scoped (installed in the
recommended namespace, e.g., operators with CRDs) or
namespace-scoped components of a package (e.g., a database or
Redis cache). In the namespace-scoped case, we prefix resources
with the dependent package name so that multiple packages can use
the same dependencies without naming conflicts (we use Kustomize
on a virtual filesystem for this).

Glasskube packages can currently be Helm charts (from an OCI or
Helm repository) or manifests, which are mostly built using
Kustomize's overlay approach. Since neither the overlay approach
(using Kustomize) nor Helm's limited templating functionality will
help us and other Kubernetes users scale to more complex packages,
we are considering a more programmatic approach to package
creation, similar to Timoni. Currently, KCL is our frontrunner
(https://github.com/glasskube/glasskube/discussions/1018), as it
already integrates well with the Kubernetes ecosystem.

We would appreciate it if you gave our GitOps template a try. It
also works with existing Kubernetes clusters if you just want to
use GitOps for some applications. Just make sure that the argocd
and glasskube-system namespaces are not yet in use. See:
https://github.com/glasskube/gitops-template/
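As an illustration of the dependency prefixing mentioned above (a
sketch only; the file names and exact layout are hypothetical, not
Glasskube's internal implementation), a generated Kustomize
overlay for a namespace-scoped dependency could look like this:

```yaml
# kustomization.yaml -- hypothetical sketch of per-package
# prefixing for a namespace-scoped dependency (e.g., a Redis
# cache); file names and values are illustrative.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: my-app     # install the dependency next to the package
namePrefix: my-app-   # dependent package name as prefix to avoid conflicts
resources:
  - redis-deployment.yaml
  - redis-service.yaml
```

With this, two packages that both depend on Redis end up with
resources named `my-app-redis` and `other-app-redis` instead of
clashing on a single `redis` name.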
Author : pmig
Score : 48 points
Date : 2024-09-10 16:18 UTC (6 hours ago)
(HTM) web link (github.com)
(TXT) w3m dump (github.com)
| vanillax wrote:
| " It also works with existing Kubernetes clusters if you just
| want to use GitOps for some applications. Just make sure that
| the argocd and glasskube-system namespaces are not yet in use.
| See: https://github.com/glasskube/gitops-template/ "
|
| I assume this statement is for running this? glasskube bootstrap
| git --url <your-repo> --username <org-or-username> --token <your-
| token>
|
| I think I'd like to understand what the Argo CD / GitOps
| template is and how it's different from Argo CD Autopilot.
| Maybe some pictures of how Argo is deploying apps, etc.
| linkdd wrote:
| IIUC, it's basically "manage your Glasskube packages from Git,
| thanks to ArgoCD".
|
| The `glasskube install` command does a bunch of stuff that ends
| up as resources in your Kubernetes cluster, which are then
| interpreted by the Glasskube operator.
|
| The "GitOps template" makes use of ArgoCD and Git to do what
| `glasskube install` would have done.
| pmig wrote:
| Thanks linkdd. Exactly: Glasskube in "GitOps mode" outputs
| these package custom resources as YAML so you can commit them
| to Git, and Argo pulls these resources from Git into the
| cluster.
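| As a rough sketch, such a package custom resource committed to
| Git might look like this (the apiVersion, kind, and field names
| here are illustrative and may not match Glasskube's actual CRD
| schema exactly):

```yaml
# Hypothetical sketch of a Glasskube package custom resource as it
# might be committed to git; apiVersion, kind, and fields are
# illustrative and may differ from the real CRD schema.
apiVersion: packages.glasskube.dev/v1alpha1
kind: ClusterPackage
metadata:
  name: cert-manager
spec:
  packageInfo:
    name: cert-manager
    version: v1.15.3    # placeholder version
```

| Argo CD then syncs this file from the repo into the cluster,
| where the package controller reconciles it.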
| hadlock wrote:
| Thanks. It sounds more like glasskube is a plugin for ArgoCD
| IIUC
|
| I am not super thrilled about critical applications like Argo
| getting a plethora of plugins, otherwise we end up looping
| back to Jenkins and plugin hell
| linkdd wrote:
| Off-topic: To be honest, after trying almost all the CI/CD
| offerings out there, CircleCI, Github Actions, Gitlab CI,
| Travis, etc... I've started to believe that none of them
| actually did it better than Jenkins (despite all its
| flaws).
|
| On-topic: Glasskube isn't really an ArgoCD plugin as it can
| work standalone, but in 2024, can you really propose a
| package manager for k8s without having some integration
| with ArgoCD and GitOps in general?
|
| If you want to migrate, having interoperability between the
| tools can make the process smoother. And if you don't want
| to migrate and still benefit from a centralized, curated,
| audited repository of packages for Kubernetes so that your
| "Powered by ArgoCD" GitOps are easier to manage, that's
| what the GitOps template proposes.
|
| In Debian, you can just `apt install <that big thing i
| don't want to write a deploy script for>`. Imagine doing
| that with the usual big operators you want in your cluster
| (cert-manager, a hashicorp vault operator, istio or nginx
| ingress controller or envoy or ...)
| pmig wrote:
| Yes, you can get started by executing this command.
|
| Our bootstrap git subcommand is similar to Argo CD Autopilot.
| I'll give it a try right now to be able to better state the
| differences and follow up on this question.
| pmig wrote:
| I just set up an Argo CD Autopilot repository
| https://github.com/pmig/argocd-autopilot as a comparison.
| Autopilot gives you a great opinionated scaffold for internal
| applications.
|
| Our template already includes update automation with the
| Renovate integration and easier integration of third-party
| applications.
| vanillax wrote:
| I mean, Renovate in GitHub is one file and an app integration.
| It takes very little effort to set up. What exactly do you mean
| by easier integration of third-party apps? Why wouldn't someone
| just use https://operatorhub.io/?
| pmig wrote:
| If you are not on OpenShift, OLM and the Operator Hub are quite
| overkill. I wrote a more detailed explanation on our website:
| https://glasskube.dev/docs/comparisons/olm/
|
| Tl;dr: If you are already using OpenShift, make use of the
| Operator Hub; otherwise, Glasskube is the more lightweight and
| simpler solution with similar concepts.
| iwwr wrote:
| Why do you need an in-cluster operator/pod?
| pmig wrote:
| There are multiple reasons and limitations with current tooling
| that we want to overcome.
|
| We have abstracted all packages as custom resources and have a
| controller that reconciles these resources, which (1) enables
| drift detection. Additionally, (2) we use admission controllers
| to validate dependencies and package configurations before they
| are applied to the cluster. The custom resources also store and
| update the status of installed packages.
| iwwr wrote:
| Genuinely interested. What problems did you have dealing with
| the standard reconciliation mechanism provided by ArgoCD and by
| k8s itself? I understand the advantage of the operator
| approach, but it might be hard to show the state in ArgoCD, and
| it somewhat breaks the idea of GitOps.
|
| Can we benefit from your project in a more limited but
| agentless way? Limiting the types and CRDs we allow in k8s
| makes operations better, especially with the aggressive upgrade
| cycle that k8s already imposes.
| pmig wrote:
| A deeper integration into Argo CD (similar to how Helm is
| integrated) will be needed in order to display all status
| conditions.
|
| I don't think the idea of GitOps is broken: if the Glasskube
| package controller and all custom resources are versioned, you
| will always end up with a reproducible result.
|
| > Can we benefit from your project in a more limited but
| > agentless way?
|
| We are building a central package repository with a lot of
| CI/CD testing infrastructure to increase the quality of
| Kubernetes packages in general:
| https://github.com/glasskube/packages
| slederer wrote:
| Very interesting. Where do you see value:
|
| a.) in Kubernetes setups that operate the same software stack,
| with the ongoing updates and regular releases.
|
| b.) in Kubernetes setups that frequently install new
| software/diverse software
| pmig wrote:
| A package manager for a software stack that does not change
| that often (a) can move managed services like databases,
| message queues, or even more complex observability tools inside
| the cluster with the same convenience as a managed service.
|
| If your setup becomes more complex and changes often (b), I
| would recommend breaking it up into smaller pieces.
|
| For both scenarios it makes sense to use Git to keep track of
| revisions and previous states of your Kubernetes cluster and
| incorporate review processes with pull requests.
| cianuro_ wrote:
| Can't this be achieved using the app of apps pattern in ArgoCD?
| linkdd wrote:
| The ultimate goal is a centralized, curated, and audited
| repository, similar to Debian's official apt repository.
|
| Sure, you could make your own .deb, your own repository, and
| manage dependencies yourself. But do you really want to?
| pmig wrote:
| Teams can also build applications with the app-of-apps pattern
| in Argo CD or with Flux Kustomizations; both feature concepts
| of dependencies. But those packages cannot then be published
| and shared between different clusters and organizations that
| don't share a Git repository.
| esafak wrote:
| How do you handle updates, manually?
| raffraffraff wrote:
| How would I integrate it into an existing setup that uses tools
| like terraform, along with helm?
|
| See, there's a bunch of stuff that I deploy using terraform (VPC,
| DNS, EKS) and a bunch of stuff that I deploy with FluxCD. But in
| between, there's some awkward stuff like the monitoring stack
| that requires cloudy things and Kube things, and they're tightly
| coupled.
|
| Right now I end up going with terraform and the awful helm
| provider. Many of these helm charts have sub-charts, nested
| values etc, but thankfully the monitoring stack doesn't change
| that much. It's still not ideal but it works as a one-shot
| deployment that sets up buckets, policies, IAM roles for service
| accounts, more policies for the roles, and finally the helm
| charts with values from the outputs of those cloud resources.
| momothereal wrote:
| What is awful about the Helm provider for Terraform?
| linkdd wrote:
| I'd say both Helm and Terraform :)
| pmig wrote:
| Instead of using Terraform's Helm provider, you can simply
| install the Glasskube package controller and provision package
| custom resources via Flux. This is also how we manage our
| internal clusters.
|
| I recommend giving Glasskube a try with Minikube locally and
| joining our Discord to interact with the community.
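| As a rough sketch of that Flux setup (the repository URL and
| path are placeholders; the Flux kinds are standard, but the
| wiring is illustrative):

```yaml
# Sketch: let Flux apply Glasskube package custom resources from
# git. The repository URL and path are placeholders for your own
# GitOps repo.
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: gitops
  namespace: flux-system
spec:
  interval: 5m
  url: https://github.com/your-org/your-gitops-repo  # placeholder
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: glasskube-packages
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: gitops
  path: ./packages   # folder containing the package custom resources
  prune: true
```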
___________________________________________________________________
(page generated 2024-09-10 23:01 UTC)