[HN Gopher] Show HN: Glasskube - Open Source Kubernetes Package ...
       ___________________________________________________________________
        
       Show HN: Glasskube - Open Source Kubernetes Package Manager,
       alternative to Helm
        
       Hello HN, we're Philip and Louis from Glasskube
       (https://github.com/glasskube/glasskube). We're working on an
       open-source package manager for Kubernetes. It's an alternative
       to tools like Helm or Kustomize, primarily focused on making
       deploying, updating, and configuring Kubernetes packages simpler
       and a lot faster. Here is a demo video
       (https://www.youtube.com/watch?v=aIeTHGWsG2c#t=17s) with quick
       start instructions.

       Most developers working with Kubernetes use Helm, an open-source
       tool created during a hackathon nine years ago. However, with
       the rapid growth of Kubernetes packages to over 800 packages on
       the CNCF landscape today, the prerequisites have changed, and we
       believe it's time for a new package manager. Every engineer we
       talked to has a love-hate relationship with Helm, and we also
       found ourselves defaulting to Helm despite its shortcomings due
       to a lack of alternatives.

       We have spent enough time trying to get Helm to do what we need.
       From looking for the correct chart, trying to learn how each
       value affects the components, and hand-crafting a schemaless
       values.yaml file, to debugging the final release when it
       inevitably fails to install, the experience of using Helm is,
       for the most part, time-consuming and cumbersome.

       Charts often become more complex, requiring the use of
       sub-charts. These umbrella charts tend to be even harder to
       maintain and upgrade, because so many different components are
       bundled into a single release.

       We talked to over 100 developers and found that everyone
       developed their own little workarounds, with some working better
       than others. We collected the feedback and poured everything we
       learned from it into a new package manager. We want to build
       something that is as easy to use as Homebrew or npm and make
       package management on Kubernetes as easy as on every other
       platform.

       Some of the features Glasskube already supports:

       - Typesafe package configuration via UI or interactive CLI to
         inject values from other packages, ConfigMaps, and Secrets.

       - Browse our central package repository, so there is no need to
         look for a Helm repository to find a specific package.

       - All packages are dependency-aware, so they can be used and
         referenced by multiple other packages, even across namespaces.
         We validate the complete dependency tree, so packages get
         installed in the correct namespace.

       - Preview and perform pending updates to your desired version
         with a single click of a button. All updates have been tested
         in the Glasskube test suite before being available in the
         public repository.

       - Use multiple repositories and publish your own private
         packages (e.g., your company's internal services packages, so
         all developers have up-to-date and easily configured internal
         services).

       - All features are available via UI or interactive CLI. You can
         also manage all packages via GitOps.

       Currently, we are focused on enhancing the user experience,
       aiming to save engineers as much time as possible. We are still
       using Helm and Manifests under the hood. However, together with
       the community, we plan to develop an entirely new packaging and
       bundling format for all cloud-native packages. This will provide
       package developers with a straightforward way to define how to
       install and configure packages, offer simple upgrade paths, and
       enable us to provide feedback, crash reports, and analytics to
       every developer working on Kubernetes packages.

       We also started working on a cloud version. You can pre-sign up
       here in case you are interested: https://glasskube.cloud

       We'd greatly appreciate any feedback you have and hope you get
       the chance to try out Glasskube.
        
       Author : pmig
       Score  : 130 points
       Date   : 2024-06-25 15:30 UTC (7 hours ago)
        
        
       | fsniper wrote:
       | When I check the packages repo, I see that they are yaml files
       | linking to helm packages. Is this only capable of deploying helm
       | packages? Is there a packaging solution included too?
        
         | pmig wrote:
         | > We are still using Helm and Manifests under the hood.
         | However, together with the community, we plan to develop an
         | entirely new packaging and bundling format for all cloud-native
         | packages.
         | 
         | Yes, at the moment Glasskube packages wrap around Helm charts
         | and manifests. We don't have a dedicated packaging format as
         | of now.
         | 
         | We are actively looking into OCI images and other
         | possibilities for bundling Kubernetes packages, but are
         | focusing on features like multi-namespace dependencies and
         | simplicity for now.
         | 
         | How would you like to package Kubernetes packages or what
         | should be avoided from your perspective?
        
           | ryanisnan wrote:
           | This is the worst of both worlds.
           | 
           | Helm, as an abstraction layer, is a real pain in the ass.
           | Having another abstraction layer here is pure madness.
           | 
           | I wish you luck, but I do not wish to board your boat.
        
           | meling wrote:
           | Yaml files should be avoided.
        
           | fsniper wrote:
           | This was what I understood too.
           | 
           | As a power Kubernetes user providing a Kubernetes-based
           | PaaS to internal customers, we are not looking for more
           | GUIs or abstractions over helm.
           | 
           | There are already a lot of solutions out there in this area
           | like helmsman/argocd/helmfile/crossplane helm-provider. And
           | we prefer kubernetes resources + gitops-based automations
           | over any other fancy tools.
           | 
           | Most of the time, the problems around helm are its
           | string-based templating and lack of type safety. That's why
           | timoni looks like a more promising solution in this space.
           | Its lack of packages is the limiting factor.
           | 
           | Another interesting approach is kapp-controller and Carvel
           | tooling. Packaging helm charts, OCI images, etc. as OCI
           | artifacts to use as a combined source is really
           | interesting. We were considering using kapp-controller,
           | however our current dependence on helm and some
           | architectural concerns caused us to pass on it for now.
           | 
           | As to what the selling points of a new "package manager"
           | could be:
           | 
           | * Timoni-like packages that have templating with type
           |   safety.
           | * A large package pool, or
           | * Abstraction over helm packages that could add type
           |   safety, or better yet automatic or at least
           |   semi-automatic conversion of helm charts (One can dream
           |   ;))
           | * Full management through the kubernetes API / CRDs
           | * Multicluster management, or fleet management.
        
       | speedgoose wrote:
       | Do you plan to eventually support alternatives to the mix of
       | Golang templates in YAML? It's my main issue regarding Helm
       | charts and I dream about pkl helm charts.
        
         | pmig wrote:
         | Yes, but no concrete plans.
         | 
         | We already looked into pkl, but this would require every
         | package author to either have Java (and pkl) running on their
         | system or we would need to package the jre (and pkl) in order
         | to make it work properly. But Kubernetes examples are already
         | out there (https://github.com/apple/pkl-k8s-examples) and we
         | are keeping an eye on it.
        
         | gtirloni wrote:
         | Same here. It's where I spend 90% of my time working with Helm
         | charts and I absolutely despise it.
        
         | verdverm wrote:
         | I dream of Helm adopting CUE as an option for chart templates
         | and values. They are both written in Go, which makes the
         | integration a possibility. I've played around with a wrapper
         | that renders the CUE to YAML before running Helm, which
         | effectively means helm is deploying hardcoded templates,
         | lifting the values/templates merging out of Helm proper.
        
           | kingcan wrote:
           | Have you tried https://timoni.sh/?
        
             | verdverm wrote:
             | yes, see my other comment:
             | https://news.ycombinator.com/item?id=40793260
        
       | llama052 wrote:
       | This looks like an interesting take on package management. Would
       | be cool for homebrew clusters and the like.
       | 
       | However things like helmfile with renovate paired with a pipeline
       | is my personal preference even if just for ensuring things remain
       | consistent in a repo.
       | 
       | The `update all` button for instance seems terrifying on a
       | cluster that means anything at all. Nonetheless it's still cool
       | for personal projects and the like!
       | 
       | The package controller reminds me a lot of Helm tiller with older
       | versions of helm, and it became a big security issue for a lot of
       | companies, so much so that helm3 removed it and did everything
       | clientside via configmaps. Curious how this project plans on
       | overcoming that.
        
         | pmig wrote:
         | Thanks for your input, let me comment on your points one by
         | one.
         | 
         | > However things like helmfile with renovate paired with a
         | pipeline is my personal preference even if just for ensuring
         | things remain consistent in a repo.
         | 
         | Glasskube packages can also be put inside a GitOps repository
         | as every package is a CR (custom resource). They can even be
         | configured via the CLI using the `--dry-run` and `--output
         | yaml` flags and then put into git. In addition, we are
         | working on a pull request to support package updates via
         | Renovate:
         | https://github.com/renovatebot/renovate/issues/29322
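         | 
         | For illustration, the kind of manifest this flow produces is
         | a small custom resource along these lines (apiVersion and
         | field names are indicative only; the actual CLI output is
         | authoritative):
         | 
         |     # package.yaml, written with --dry-run --output yaml and
         |     # committed to the GitOps repo
         |     apiVersion: packages.glasskube.dev/v1alpha1
         |     kind: Package
         |     metadata:
         |       name: cert-manager
         |     spec:
         |       packageInfo:
         |         name: cert-manager
         |         version: v1.14.5+1   # example version string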
         | 
         | > The package controller reminds me a lot of Helm tiller with
         | older versions of helm, and it became a big security issue for
         | a lot of companies, so much so that helm3 removed it and did
         | everything clientside via configmaps. Curious how this project
         | plans on overcoming that.
         | 
         | As helm3 is now a client-side tool only, it can't enforce any
         | RBAC by itself. OLM introduced Operator Groups
         | (https://olm.operatorframework.io/docs/advanced-
         | tasks/operato...) which introduce permissions on an operator
         | level. We might introduce something similar for Glasskube
         | packages. Glasskube itself will still need to be quite
         | powerful, but we can then scope packages and introduce
         | granular permissions.
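         | 
         | For reference, an OLM OperatorGroup scopes an operator to a
         | set of namespaces roughly like this (names are illustrative):
         | 
         |     apiVersion: operators.coreos.com/v1
         |     kind: OperatorGroup
         |     metadata:
         |       name: team-operators
         |       namespace: operators
         |     spec:
         |       targetNamespaces:
         |         - team-a
         |         - team-b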
        
       | verdverm wrote:
       | I have a hard time seeing how a k8s package manager could ever be
       | as simple as brew or apt. One reason that stands out is that I
       | have different values depending on what environment I'm
       | targeting, and almost every user has a snowflake, just the way it
       | is. The idea of a repl like prompt or web UI for setting those
       | values is not appealing to me.
       | 
       | The main pains remain unaddressed
       | 
       | - authoring helm charts sucks
       | 
       | - managing different values per environment
       | 
       | - connecting values across charts so I don't have to
        
         | pmig wrote:
         | We are absolutely seeing a lot of snowflake clusters in the
         | wild. This is also a hot topic at all the cloud-native
         | conferences I attended lately.
         | 
         | Platform teams try to create internal developer platforms to
         | further standardize Kubernetes configurations across teams
         | and clusters, where developers can only make minor
         | modifications. From my experience, we want to reduce
         | snowflake configurations. This is also a reason why we
         | created Glasskube in the first place.
         | 
         | > - authoring helm charts sucks
         | 
         | Yes, 100%, and we are on a mission to change this in the
         | future.
         | 
         | > - managing different values per environment
         | 
         | Glasskube packages are still configurable, but come with
         | meaningful default values.
         | 
         | > - connecting values across charts so I don't have to
         | 
         | This is already possible: you can reference package
         | configuration values from other packages easily via
         | Glasskube, without needing to provide the same values
         | multiple times.
        
           | verdverm wrote:
           | > This is already possible, you can reference package
           | configuration values from other packages
           | 
           | This misses the point; what we need is something more like
           | Terraform, which has a way to get dynamic information from
           | resource values that are assigned by the system. One such
           | example would be the secret that the postgres operator
           | generates for connecting, in the api server that needs
           | access to it.
           | 
           | > > - managing different values per environment
           | 
           | Most charts already come with meaningful defaults. The issue
           | is that you need simpler defaults for multiple environments
           | that the user doesn't have to think about. There ought to be
           | some higher level information coming into the pipeline that
           | tells the tool what environment I'm working with and
           | assigns certain values automatically.
           | 
           | > Yes, 100% and we are on a mission to change this in future.
           | 
           | Configuration needs a proper language. Please avoid Yaml
           | and anything bespoke. There are a few configuration
           | languages emerging; CUE is my personal pick in the horse
           | race.
        
             | pmig wrote:
             | Thanks for clarifying your inputs.
             | 
             | > This misses the point, what we need is something more
             | like Terraform, which has a way to get dynamic information
             | from resource values that are assigned by the system. One
             | such example would be the secret that the postgres operator
             | generates for connecting in the api server that needs
             | access.
             | 
             | It is also already possible to inject values from secrets
             | at runtime. You can, for example, create a Glasskube
             | package that has a dependency on cnpg, add a
             | `cluster.yaml` to your package, and then dynamically
             | patch the connection string (or credentials) from it into
             | your deployment.
             | 
             | See the "ValueFrom" section of our configuration
             | documentation for the exact inner workings:
             | https://glasskube.dev/docs/design/package-config/
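             | 
             | A rough sketch of what such a values entry can look like
             | on a package CR (field names approximated from the linked
             | docs; secret name and key are made up):
             | 
             |     values:
             |       databaseUrl:
             |         valueFrom:
             |           # e.g. the connection secret created by the
             |           # postgres operator
             |           secretRef:
             |             name: my-db-app
             |             namespace: databases
             |             key: uri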
        
               | verdverm wrote:
               | > See the "ValueFrom" section of our configuration
               | documentation
               | 
               | ... this is what we have today, how do I know what value
               | to patch in from, like what is the name of the secret?
               | 
               | Looking at that link makes me think this is like another
               | layer of helm on helm, especially with the same go
               | template values in yaml that are going to be fed into
               | helm templates under the hood.
               | 
               | Putting more yaml on top of templated yaml is not the way
               | to create the next package manager for k8s.
        
       | leroman wrote:
       | Somehow I am able to get by with Kustomize. Can't stand the
       | mess that is helm and its ecosystem.
        
         | k8sToGo wrote:
         | Do you rewrite all the apps that are out there as your own
         | kustomize charts, or what do you mean by "its ecosystem"?
        
           | leroman wrote:
           | Kustomize is basically a higher-level file for K8s
           | deployments. I have all the resources as declarative code
           | that gets deployed when I apply the relevant directory. I
           | have istio + ssl certs + services and any other resource,
           | multiple projects with cross-project communication and
           | provisioning, etc.
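           | 
           | A minimal sketch of such a directory's kustomization.yaml,
           | applied with `kubectl apply -k <dir>` (file names are
           | illustrative):
           | 
           |     apiVersion: kustomize.config.k8s.io/v1beta1
           |     kind: Kustomization
           |     namespace: my-project
           |     resources:
           |       - deployment.yaml
           |       - service.yaml
           |       - ingress.yaml
           |       - certificate.yaml   # e.g. a cert-manager Certificate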
        
           | leroman wrote:
           | By "eco system" I mean all the charts that get shared, when
           | ever I try to look under the hood I instantly regret it..
        
             | lmm wrote:
             | So what do you do when you need to deploy something a bit
             | fiddly with multiple components (e.g. Cassandra) that would
             | just be importing a shared chart in Helm? Do you rewrite
             | that yourself in kustomize?
        
         | joshuak wrote:
         | Agreed.
         | 
         | Adding black boxes on top of black boxes is not a good way to
         | abstract complexity. Helm does nothing more than any template
         | engine does, yet requires me to trust not only the competency
         | of some random chart author but also that they will correctly
         | account for how my k8s environment is configured.
         | 
         | When I inevitably have to debug some deployment, now I'm
         | digging through not only the raw k8s config, but also whatever
         | complexity Helm has added on to obfuscate that k8s config
         | complexity.
         | 
         | Helm is an illusion. All it does is hide important details from
         | you.
        
         | cortesoft wrote:
         | I always see comments like this, and as someone who used
         | Kustomize first and then moved to Helm, it really doesn't fit
         | with my experience.
         | 
         | I found Kustomize extremely annoying to work with. Changing
         | simple configuration options required way too much work.
         | 
         | Take a simple example - changing the URL value of an ingress.
         | This is something every deploy is going to have to set, since
         | it will be different for every cluster.
         | 
         | In Kustomize, I first have to find the ingress resource, then
         | recreate the nesting properly in my kustomize file, and then
         | repeat that for every deployment.
         | 
         | In Helm, I just change one entry in a values file, that is
         | clearly named so I know what I am setting.
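         | 
         | To illustrate the difference with made-up names: the
         | Kustomize overlay has to re-create the path down to the
         | field, while the Helm side is a single values entry.
         | 
         |     # kustomization.yaml (overlay)
         |     apiVersion: kustomize.config.k8s.io/v1beta1
         |     kind: Kustomization
         |     resources:
         |       - ../../base
         |     patches:
         |       - target:
         |           kind: Ingress
         |           name: my-app
         |         patch: |-
         |           - op: replace
         |             path: /spec/rules/0/host
         |             value: my-app.cluster-a.example.com
         | 
         |     # values.yaml for the same change in a typical chart
         |     ingress:
         |       host: my-app.cluster-a.example.com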
         | 
         | In addition, it is REALLY hard to refactor resources once there
         | are a lot of Kustomization files. If I want to redo how the
         | deployment works, I have to change every Kustomize file that
         | any other repo using my project uses. If I have a lot of other
         | people who are pulling in my project and then using kustomize
         | on top of it, we have to coordinate any changes, because
         | changing the structure breaks all Kustomizations.
         | 
         | With Helm, as long as I keep the same values file structure, I
         | am free to move things around however I want. I can use values
         | in completely new locations without having to change anything
         | about the values files themselves.
         | 
         | I just don't see how it is easier. I find it a lot easier to
         | read a default values file and figure out what every setting
         | does rather than read 20 k8s yaml files trying to figure out
         | what does what.
         | 
         | In some ways, I kind of feel like the Kustomize enthusiasts
         | LIKE the things I find annoying about it; they think you SHOULD
         | have to read every resource and fully understand it, and they
         | don't want anyone to be able to change anything without every
         | kustomizer also changing things. I get the theory that everyone
         | should know the underlying resources, but in practicality I
         | find Kustomize to be the wrong level of abstraction for what I
         | want to do.
        
       | woile wrote:
       | Has anyone tried https://github.com/stefanprodan/timoni ? Seems
       | like a good alternative to helm
        
         | geiser wrote:
         | Reviewing their docs I found this:
         | https://glasskube.dev/docs/comparisons/timoni/
         | I have personally never worked with Timoni.
        
         | verdverm wrote:
         | I tried it early on, being a big CUE fan.
         | 
         | Pros:
         | 
         | - comes from someone with deep k8s experience
         | 
         | - has features for secrets and dynamic information based on k8s
         | version and CRDs
         | 
         | - thinks about the full life-cycle and e2e process
         | 
         | Cons: (at the time)
         | 
         | - holds CUE weird, there are places where they overwrite values
         | (helm style) which is antithetical to CUE philosophy. This was
         | rationalized to me as keeping with the Helm mindset rather than
         | using the CUE mindset, because it is what people are used to. I
         | think this misses the big opportunity for CUE in k8s config.
         | 
         | - has its own module system that probably won't integrate with
         | CUE's (as it stands today), granted CUE's module system wasn't
         | released at the time, but it seems the intention is to have a
         | separate system because the goal is to align with k8s more than
         | CUE
         | 
         | - Didn't allow for Helm modules to be dependencies. This seems
         | to have since changed, but requires you to use FluxCD (?)
         | 
         | I didn't ever adopt it because of the philosophical differences
         | (me from CUE, stefan from k8s). I have seen others speak highly
         | of it, definitely worth checking out to see if it is something
         | you'd like. I have plans for something similar once an internal
         | Go package in CUE is made publicly available
         | (https://github.com/cue-
         | lang/cue/commits/master/internal/core...). The plan is to
         | combine the CUE dep package with OpenTofu's graph solver to
         | power a config system that can span across the spaces.
        
       | slipheen wrote:
       | This looks interesting, thanks for sharing it.
       | 
       | Feel free to disregard, but it would help me understand if you
       | briefly explain how this fits in with / compares to existing
       | tools like argocd.
       | 
       | I watched your video and I saw that argo was one of the tools you
       | were installing, so clearly this is occupying a different niche -
       | But I'm not sure what that is yet :)
        
         | pmig wrote:
         | ArgoCD is a great tool to sync the state from your GitOps
         | repository to your cluster; it helps by visualizing the
         | installed resources and showcasing potential errors.
         | 
         | It is often used by developers to get a glimpse of the state
         | of a company's core applications without cluster access.
         | 
         | Glasskube focuses on the packages your core application
         | depends on: managing the life cycle of these infrastructure
         | components, testing updates, and providing upgrade paths. You
         | can still put Glasskube packages into your GitOps repo and
         | sync them via ArgoCD into the cluster. Our PackageController
         | will do the rest.
        
       | redbackthomson wrote:
       | First off I just wanted to say I think it's great that you're
       | attempting to tackle the problem that is Kubernetes package
       | management. I work at a Kubernetes SaaS startup and spend many
       | hours working with YAML and Helm charts every day, so I
       | absolutely feel the pain that comes with it.
       | 
       | That being said, I'm confused as to where Glasskube is positioned
       | in solving this problem. In the title of this post, you are
       | claiming Glasskube is an "alternative to Helm"; although in your
       | documentation you have a "Glasskube vs Helm" guide that
       | explicitly states that "Glasskube is not a full replacement of
       | Helm". I'm trying to understand how these two statements can be
       | true. To make things more confusing, under the hood Glasskube
       | repositories appear to be a repackaging of a Helm repository,
       | albeit with a nicer UI.
       | 
       | From what I've gathered after reading the docs, Glasskube is
       | being positioned as an easier way to interact with Helm charts -
       | offering some easy-to-use tooling for upgrades and dependency
       | management. To me, that doesn't exactly feel like it replaces
       | Helm, but simply supplements my use of it, because it doesn't
       | actually combat the real problems of using Helm.
       | 
       | My biggest pain points, some of which I don't think Glasskube is
       | addressing, that I think are at the crux of switching off Helm:
       | 
       | - The arbitrary nature of how value files are laid out - every
       | chart appears to have its own standards for which fields should
       | be exposed and the nomenclature for exposing them
       | 
       | - Helm releases frequently get stuck when updating or rolling
       | back, and can't be fixed without being uninstalled and
       | reinstalled
       | 
       | - I need to reference the Helm chart values file to know what is
       | exposed and what values and types are accepted (Glasskube
       | schema'd values files do address this! Yay!)
       | 
       | Apart from the Helm chart values schema, I don't think Glasskube
       | solves these fundamental problems. So I'm not sure why I would
       | spend the large amount of effort to migrate to this new paradigm
       | if the same problems could still cause headaches.
       | 
       | Lastly, I would also concur with @llama052's comment, that an
       | "update all" button will always be forbidden in my, and probably
       | most other, companies. Considering the serious lack of
       | standardisation that comes with Helm chart versioning (whether
       | the app version changes between charts, whether roles or role
       | bindings need to be updated, whether values have been deprecated
       | or their defaults have changed, etc.), it's incredibly risky to
       | update a Helm chart without understanding the implications that
       | come with it. Typically our engineers have to review the release
       | notes for the application between the two Helm chart versions, at
       | least test in dev and staging for a few days, and only then can
       | we feel comfortable releasing the changes - one chart at a time.
       | Not to mention that if you are in charge of running a system with
       | multiple applications, you probably want to use GitOps, and in
       | that case a version upgrade would require a commit to the Git
       | repository and not just a push of a button on the infra IDP.
        
       | epgui wrote:
       | My immediate question is whether/how this can be an improvement
       | over using the terraform helm provider.
       | 
       | If I need a better GUI anywhere, it's probably in EKS, or
       | something that makes working with EKS a bit less painful.
        
         | pmig wrote:
         | Helm (also in combination with terraform) is a client side tool
         | for deploying charts to your cluster.
         | 
         | Glasskube, on the other hand, is a package manager where you
         | can look up, install, and configure packages via a CLI and UI
         | and overcome some of the shortcomings of Helm.
        
           | fsniper wrote:
           | I think there is a fundamental misunderstanding about what a
           | package manager is.
           | 
           | From Wikipedia: "A package manager or package-management
           | system is a collection of software tools that automates the
           | process of installing, upgrading, configuring, and removing
           | computer programs for a computer in a consistent manner."
           | https://en.wikipedia.org/wiki/Package_manager
           | 
           | Helm is a package manager as it consistently:
           | 
           | * Can pull and deploy applications via packages
           | * Can manage (upgrade/reconfigure/delete) deployed
           |   applications
           | * Can search and find helm charts.
           | 
           | So the difference is it lacks a GUI? Afaik GUI was never a
           | requirement for a package manager.
           | 
           | And another perspective is, as GlassKube does not provide a
           | packaging mechanism and uses helm in the backend
           | (established in another question, to which I'll also
           | reply), it's not really a package manager but a frontend to
           | another one (examples: dpkg is the package manager;
           | apt-get/apt/aptitude are frontends).
           | 
           | Also IMHO, considering the CNCF landscape, Glasskube is
           | positioned more as a Continuous Delivery tool than a
           | package manager. But this is my take.
        
       | guhcampos wrote:
       | I think this might be a step in the right direction, but my main
       | problem with Kubernetes package management today might not be
       | fixable by a package manager, sadly. The biggest issue I have in
       | my daily life is handling the multiple levels of nested YAML and
       | the unpredictability of the results.
       | 
       | Think of an ArgoCD ApplicationSet that generates a bunch of
       | Applications. Those Applications render a bunch of Helm charts,
       | and inside those charts there are CRDs used by some random
       | operator like Strimzi, Grafana or Vector.
       | 
       | Given YAML's lack of syntax and the absence of any sort of
       | standard for rendering templates, it's practically impossible
       | to know what YAML is actually being injected into the
       | Kubernetes API when you make a top-level change. It's trial and
       | error, expensive blue-green deployments and hundreds of
       | debugging minutes all the way, every month.
        
         | pmig wrote:
         | This is actually a problem we want to focus on with Glasskube
         | Cloud (https://glasskube.cloud/) where our glasskube[bot] will
         | comment on your GitOps Pull request with an exact diff of
         | resources that will get changed across all connected clusters.
         | This diff will be performed by a controller running inside your
         | cluster.
         | 
         | Think of it as codecov analysis, but just for resource changes.
        
           | ForHackernews wrote:
           | This sounds like terraform. Is this TF for k8s?
        
             | pmig wrote:
             | That's an interesting analogy, but I think it's a
             | stretch.
        
           | granra wrote:
           | IMO the pull request should be the diff.
           | 
           | https://akuity.io/blog/the-rendered-manifests-pattern/
        
         | sandwitches wrote:
         | The solution is to not use Kubernetes.
        
         | theLiminator wrote:
         | The wide adoption of YAML for devops adjacent tooling was a
         | mistake.
         | 
         | I think proper programming language support is the way to go.
         | 
         | Ideally a static type system that isn't Turing complete and
         | is guaranteed to terminate. So something like starlark with
         | types.
        
           | dventimi wrote:
           | > _I think proper programming language support is the way to
           | go._
           | 
           | Personally, I would prefer a SQLite database. Ok I'll show
           | myself out.
        
           | verdverm wrote:
           | Have you looked at CUE?
           | (https://cuelang.org/docs/concept/the-logic-of-cue/)
           | 
           | CUE is also pragmatic in that it has integrations with yaml,
           | json, jsonschema, openapi, protobuf
        
           | dilyevsky wrote:
           | > So something like starlark with types.
           | 
           | this exists for k8s[0]. there have been other users based on
           | the same library[1], I heard reddit did something similar
           | internally
           | 
           | [0] - https://github.com/cruise-automation/isopod [1] -
           | https://github.com/stripe/skycfg
        
           | davidmdm91 wrote:
           | I have been developing my own package manager, and my core
           | idea is that proper programming languages are the proper
           | level for describing packages.
           | 
           | Programs take inputs and can output arbitrary data such as
           | resources. However they can do so with type safety, and
           | everything else a programming ecosystem can achieve.
           | 
           | For asset distribution it uses wasm, and that's it!
           | 
           | If you want to check it out, it's here: github:
           | (https://github.com/davidmdm/yoke) docs:
           | (https://davidmdm.github.io/yoke-website)
           | 
           | I like that you said: > I think proper programming language
           | support is the way to go.
           | 
           | I think we need to stop writing new ways of generating yaml
           | since we already have the perfect way of doing so. Typed
           | languages!
        
             | verdverm wrote:
             | > that isn't turing complete and guaranteed to terminate
             | 
             | This means general purpose languages do not qualify, and
             | more generally, no general recursion
        
         | zikohh wrote:
         | Have you tried the rendered manifests pattern?
         | https://akuity.io/blog/the-rendered-manifests-pattern/
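         | 
         | A rough sketch of that pattern as a CI step: render the chart
         | to plain manifests and commit the output, so pull requests
         | show the real diff (workflow, chart, and paths are
         | illustrative; assumes helm is available on the runner):
         | 
         |     # .github/workflows/render.yaml
         |     name: render-manifests
         |     on: push
         |     permissions:
         |       contents: write
         |     jobs:
         |       render:
         |         runs-on: ubuntu-latest
         |         steps:
         |           - uses: actions/checkout@v4
         |           - name: Render Helm chart to plain manifests
         |             run: |
         |               mkdir -p rendered/prod
         |               helm template my-app charts/my-app \
         |                 -f envs/prod/values.yaml \
         |                 > rendered/prod/my-app.yaml
         |           - name: Commit rendered manifests
         |             run: |
         |               git config user.name "ci"
         |               git config user.email "ci@example.com"
         |               git add rendered/
         |               if ! git diff --cached --quiet; then
         |                 git commit -m "chore: re-render manifests"
         |                 git push
         |               fi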
        
           | granra wrote:
           | I read this article a while ago and it seems like the most
           | sane way of dealing with this. Which tool you use to render
           | the manifests doesn't even matter anymore.
        
           | appplication wrote:
           | Interesting, we have a system (different context, though it
           | does use yaml) that allows nested configurations, and arrived
           | at a similar solution, where nested configs (implicit/human
           | interface) are compiled to fully qualified specifications
           | (explicit/machine interface). It works quite well for
           | managing e.g. batch configurations with plenty of
           | customization.
           | 
           | I was unaware there was a name for this pattern, thank you.
        
       | lars_francke wrote:
       | We[1] build a lot of Kubernetes operators[2] and deal a lot with
       | Helm and OLM issues. Good luck!
       | 
       | One immediate question: Your docs say "Upgrading CRDs will be
       | taken care of by Glasskube to ensure CRs and its operators don't
       | get out-of-sync." but searching for "CRD" in your docs doesn't
       | lead to any concrete results.
       | 
       | This is one of our biggest pain points with Helm right now. Can
       | you share your plans?
       | 
       | [1] <https://stackable.tech/en/>
       | 
       | [2] <https://www.youtube.com/watch?v=Q8OSYOgBdCc>
        
         | pmig wrote:
         | Packages available in the public Glasskube repo are configured
         | in a way to make sure changes in CRDs get applied (either via
         | Manifest or the helm-controller).
         | 
         | We will update the docs though.
        
       | 0xbadcafebee wrote:
       | Application packages are (traditionally) versioned immutable
       | binaries that come with pre- and post-install steps. They are
       | built with a specific platform in mind, with specific
       | dependencies in mind, and with extremely limited configuration at
       | install time. This is what makes packages work pretty well: they
       | are designed for a very specific circumstance, and allow as
       | little change as possible at install time.
       | 
       | Even with all that said, operating system packages require a vast
       | amount of testing, development, and patching, constantly, even
       | within those small parameters. Packages feel easy because
       | potentially hundreds of hours of development and testing have
       | gone into that package you're installing now, on that platform
       | you're on now, with the components and versions you have now.
       | 
       | Kubernetes "packages" aren't really packages. They are a set of
       | instructions of components to install and configure, which often
       | involves multiple distinct sets of applications. This is
       | different in a couple ways: 1) K8s "packages" are often extremely
       | "loose" in their definition, leading to a lot of variability, and
       | 2) they are built by all kinds of people, in all kinds of ways,
       | making all kinds of assumptions about the state of the system
       | they're being installed into.
       | 
       | There's actually multiple layers of dependencies and
       | configuration that have to come together correctly for a
       | Kubernetes "package" to work. The K8s API version has to be
       | right, the way the K8s components are installed and running have
       | to be right, the ACLs have to be right, there has to be no other
       | installed component which could conflict, the version of the
       | components and containers being installed by the package need to
       | be pinned (and compatible with everything else in the cluster),
       | and the user has to configure everything properly. Upgrades are
       | similarly chaotic, as there's no sense of a stable release tree,
       | or rolling releases. It's like installing random .deb or .rpm or
       | .dmg files into your OS and hoping for the best.
       | 
       | Nothing exists that does all of this today. To make Kubernetes
       | packaging as seamless as binary platform-specific packaging, you
       | need an entire community of maintainers, and either a rolling-
       | release style (ala Homebrew) or stable versioned release
       | branches. You basically need a project like ArtifactHub or
       | Homebrew to manage all of the packages in one way. That's a big
       | undertaking, and in no way profitable.
        
       | impalallama wrote:
       | Definitely looks like a solid improvement over Helm when it
       | comes to just downloading and installing Kubernetes packages,
       | but it's still too early for me to have a solid opinion, since
       | the aspect of actually building and distributing your own chart
       | doesn't seem to have been tackled, which is basically 90% of
       | the use case for Helm despite it calling itself a package
       | manager, ha.
        
       | rad_gruchalski wrote:
       | We started with some pretty cool Jsonnet-based build
       | infrastructure and we're pretty happy with it. Grafana Tanka is
       | also okay, there's tooling to generate Jsonnet libraries for
       | anything that has a CRD. There's jsonnet-bundler for package
       | management.
       | 
       | One can throw Flux and Git* actions or what not in the mix. The
       | outcome is a boring CI/CD implementation. Boring is good. Boring
       | and powerful because of how Jsonnet works.
       | 
       | It's pretty neat in the right hands.
        
       ___________________________________________________________________