[HN Gopher] Show HN: K8s Cleaner - Roomba for Kubernetes
       ___________________________________________________________________
        
       Show HN: K8s Cleaner - Roomba for Kubernetes
        
       Hello HN community!  I'm excited to share K8s Cleaner, a tool
       designed to help you clean up your Kubernetes clusters.  As
       Kubernetes environments grow, they often accumulate unused
       resources, leading to confusion, waste, and clutter. K8s-cleaner
       simplifies the process of identifying and removing unnecessary
       components.  The tool scans your Kubernetes clusters for unused or
       orphaned resources--including pods, services, ingresses, and
       secrets--and removes them safely. You can fully customize which
       resources to scan and delete, maintaining complete control over
       what stays and what goes.
        
       Getting Started: Visit
       https://sveltos.projectsveltos.io/k8sCleaner.html and click the
       "Getting Started" button to try K8s-cleaner.
        
       Key Features:
        
       - Easy to Use: No complex setup or configuration required--perfect
         for developers and operators alike
       - Open Source: Modify the code to better fit your specific needs
       - Community Driven: We welcome your feedback, feature ideas, and
         bug reports to help improve K8s-cleaner for everyone
        
       I'm here to answer questions, address feedback, and discuss ideas
       for future improvements.  Looking forward to your thoughts! And
       make sure all your Kubernetes clusters are sparkling clean for the
       holidays. :-)  Simone
        
       Author : pescerosso
       Score  : 37 points
       Date   : 2024-12-18 20:27 UTC (2 hours ago)
        
 (HTM) web link (sveltos.projectsveltos.io)
 (TXT) w3m dump (sveltos.projectsveltos.io)
        
       | empath75 wrote:
       | why isn't this just a cli tool? I don't see any reason it needs
       | to be installed on a cluster. There should at least be an option
       | to scan a cluster from the cli.
        
         | mgianluc wrote:
          | Agreed, the ability to also run it as a standalone binary
          | would be helpful.
        
         | sethhochberg wrote:
         | Ironically, I'd bet environments most desperately in need of
         | tools like this are some of the ones where there has been lots
         | of "just run this Helm chart/manifest!" admining over the
         | years...
        
       | devops99 wrote:
       | If you find yourself using something like this, you seriously
       | fucked up as DevOps / cloud admin / whatever.
        
         | llama052 wrote:
         | Agreed, this feels like the kubernetes descheduler as a service
         | or something. Wild.
        
         | paolop wrote:
         | Possible, but if someone needs it and it works well, why not?
        
         | pescerosso wrote:
         | I understand where you're coming from, and ideally, we strive
         | for well-managed Kubernetes environments. However, as DevOps
         | practitioners, we often face complexities that lead to stale or
         | orphaned resources due to fast deployment cycles, changing
          | application needs, or changing teams. Even the public clouds
          | make a lot of money from services that are left running
          | unused; some companies make a living helping clean those up.
         | 
         | K8s-cleaner serves as a helpful safety net to automate the
         | identification and cleanup of these resources, reducing manual
         | overhead and minimizing human error. It allows teams to focus
         | on strategic tasks instead of day-to-day resource management.
        
           | devops99 wrote:
           | > However, as DevOps practitioners, we often face
           | complexities that lead to stale or orphaned resources due to
           | fast deployment cycles
           | 
           | So, as a DevOps practitioner myself, I had enough say within
           | the organizations I worked at, who are now clients, and also
           | my other clients, that anything not in a dev environment goes
           | through our GitOps pipeline. Other than the GitOps pipeline,
           | there is zero write access to anything not dev.
           | 
           | If we stop using a resource, we remove a line or two (usually
           | just one) in a manifest file, the GitOps pipeline takes care
           | of the rest.
           | 
           | Not a single thing is unaccounted for, even if indirectly.
           | 
           | That said, the DevOps-in-name-only clowns far outnumber
           | actual DevOps people, and there is no doubt a large market
           | for your product.
           | 
           | edited: added clarity
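            |
            | [Ed.: the "remove a line or two in a manifest" step above
            | might look like the following hypothetical
            | kustomization.yaml; all filenames and the namespace are
            | made up for illustration.]

```yaml
# Hypothetical kustomization.yaml for a GitOps-managed environment.
# Deleting the "legacy-cache.yaml" line and committing is the whole
# decommissioning step; the GitOps controller prunes the live object
# on its next sync.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: payments
resources:
  - deployment.yaml
  - service.yaml
  - legacy-cache.yaml   # <- remove this line to retire the resource
```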
        
             | nine_k wrote:
             | How do you realize that you have stopped using a resource?
             | Can there be cases when you're hesitant to remove a
             | resource just yet, because you want a possible rollback to
             | be fast, and then it lingers, forgotten?
        
         | thephyber wrote:
          | Or the person working that role before you did, and now you
          | are trying to manage the situation as best you can.
        
           | devops99 wrote:
           | As a cloud/DevOps consultant, I don't believe in letting
           | management drag on the life support of failed deployments.
           | 
            | We (the team I have built out over the past couple of
            | years) carve out
           | a new AWS subaccount / GCP Project or bare-metal K8s or
           | whatever the environment is, instantiate our GitOps pipeline,
           | and services get cut-over in order to get supported.
           | 
           | When I was working in individual contributor roles, I managed
           | to "manage upward" enough that I could establish boundaries
           | on what could be supported or not. Yes this did involve a lot
           | of "soft skills" (something I'm capable of doing even though
           | my posts on this board are rather curt).
           | 
           | "Every DevOps job is a political job" rings true.
           | 
            | Do not support incumbent K8s clusters. As a famous -- more
            | like infamous -- radio host on 97.1 FM used to say in the
            | 90s: dump that bitch.
        
       | S0y wrote:
       | When I saw the headline I was pretty excited, but looking at your
       | examples, I'm really curious about why you decided to make
        | everything work via CRDs? Also, having to write code inside
        | those CRDs for the cleanup logic seems like a pretty steep
        | learning
       | curve and honestly I'd be pretty scared to end up writing
       | something that would delete my entire cluster.
       | 
       | Any reason why you chose this approach over something like a CLI
       | tool you can run on your cluster?
        
         | mgianluc wrote:
          | It has a DryRun mode that generates a report on which
          | resources would be affected, so you can see what it
          | identifies as unused before asking it to remove anything. I
          | would be scared as well otherwise. Agreed.
        
         | mgianluc wrote:
          | One more thing: it comes with a large library of use cases
          | already covered, so you don't necessarily need to write any
          | new instances.
         | 
         | You would need to do that only if you have your own proprietary
         | CRDs or some use case that is not already covered.
        
         | resouer wrote:
         | A CRD controller is expected, especially it will allow your
         | clean up logic long running in cluster or periodically. But
         | writing code inside YAML is very weird, at least there are CEL
         | as option here.
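          |
          | [Ed.: for readers wondering what "code inside the CRD"
          | means, here is an illustrative sketch of a Cleaner resource.
          | The field names are approximated from the project's
          | published examples and may not match the current schema
          | exactly; the selection logic is invented for illustration.]

```yaml
# Illustrative Cleaner instance (field names approximate; check the
# k8s-cleaner docs for the actual schema). The evaluate block is a Lua
# function that decides whether a matched resource counts as unused.
apiVersion: apps.projectsveltos.io/v1alpha1
kind: Cleaner
metadata:
  name: stale-secrets
spec:
  schedule: "0 3 * * *"      # run nightly
  action: Delete             # start with a dry-run report instead
  resourcePolicySet:
    resourceSelectors:
      - kind: Secret
        group: ""
        version: v1
        evaluate: |
          function evaluate()
            local hs = {}
            -- example logic only: flag unlabeled secrets as stale
            hs.matching = (obj.metadata.labels == nil)
            return hs
          end
```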
        
       | darkwater wrote:
       | How does it work in an IaC/CD scenario, with things like
       | Terraform or ArgoCD creating and syncing resources lifecycle
       | inside the cluster? A stale resource, as identified and cleaned
       | by K8s Cleaner, would be recreated in the next sync cycle, right?
        
         | mgianluc wrote:
         | In that case I would use it in DryRun mode and have it generate
         | a report. Then look at the report and if it makes sense, fix
         | the terraform or ArgoCD configuration.
         | 
         | More on report:
         | https://gianlucam76.github.io/k8s-cleaner/reports/k8s-cleane...
        
       | caust1c wrote:
       | So is this the first instance of a Cloud C-Cleaner then? You
       | could call it CCCleaner!
        
       | paolop wrote:
        | I've been using it for a bit now and I'm very happy with it.
        | The stale-persistent-volume-claim detection has had almost a
        | 100% hit rate in my case; it's a real game-changer for
        | cleaning up disk space.
       | 
       | Kubernetes clutter can quickly become a headache, and having a
       | tool like this to identify and remove unused resources has made
       | my workflow so much smoother.
        
       | zzyzxd wrote:
       | For resources that are supposed to be cleaned up automatically,
        | fixing your operator/finalizer is a better approach. Using
        | this tool just kicks the can down the road, which may cause
        | even bigger problems.
       | 
       | If you have resources that need to be regularly created and
       | deleted, I feel a cronjob running `kubectl delete -l <your-label-
        | selector>` should be more than enough, and less risky than
       | installing a 3rd party software with cluster wide list/delete
       | permission.
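        |
        | [Ed.: the cronjob approach suggested above can be sketched as
        | a standard Kubernetes CronJob. The label selector, namespace,
        | and ServiceAccount name are placeholders; the ServiceAccount
        | needs RBAC permissions to list and delete pods.]

```yaml
# Sketch of a scheduled kubectl delete scoped to a label selector.
# All names are placeholders for illustration.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: ephemeral-cleanup
spec:
  schedule: "0 * * * *"      # hourly
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: cleanup-sa
          restartPolicy: Never
          containers:
            - name: kubectl
              image: bitnami/kubectl:latest
              args: ["delete", "pods",
                     "-n", "scratch",
                     "-l", "lifecycle=ephemeral"]
```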
        
         | pcl wrote:
         | [delayed]
        
       ___________________________________________________________________
       (page generated 2024-12-18 23:00 UTC)