[HN Gopher] AWS Nuke - delete all resources associated with AWS ...
___________________________________________________________________
AWS Nuke - delete all resources associated with AWS account
Author : fortran77
Score : 111 points
Date : 2022-07-02 20:23 UTC (2 hours ago)
(HTM) web link (github.com)
(TXT) w3m dump (github.com)
| Centigonal wrote:
| This is a fantastic utility to have in my back pocket, and the
| attention to safety is commendable. Nice!
| pcthrowaway wrote:
| Needs a flag to run non-interactively
| danw1979 wrote:
| I nearly took this bait
| pcthrowaway wrote:
| `aws-nuke --no-dry-run --non-interactive --i-really-mean-it
| --yes-i-agree-to-the-second-prompt-also`
| code_duck wrote:
| I can barely imagine what is in my AWS account. The last time I
| used it was 2011.
| ezekiel11 wrote:
| So I have this weird monthly charge that I cannot, for the love
| of god, figure out the cause of. I cancelled my credit card and
| now my credit score has taken a hit.
|
| All because of some unknown AWS service in some zone that I
| cannot find at all, and neither can AWS support.
| arkadiyt wrote:
| 99% of our AWS resources are terraformed, but developers
| constantly push back / want to use the console to create stuff to
| test or play around with. So we set up a separate "hack" AWS
| account and give them admin access in there, and have an
| automated job using AWS Nuke to delete everything in there once a
| quarter.
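| A rough sketch of what that scheduled job might run, assuming a
| nuke-config.yml that blocklists every account except the sandbox
| one (the exact flag names should be double-checked against the
| aws-nuke README):
|
|   import subprocess
|
|   # Hypothetical wrapper a scheduled job (cron, or EventBridge +
|   # CodeBuild) could execute against the sandbox account.
|   result = subprocess.run(
|       [
|           "aws-nuke",
|           "--config", "nuke-config.yml",  # assumed config path
|           "--no-dry-run",  # actually delete; omit for a preview
|           "--force",       # skip the interactive confirmation
|       ],
|       check=False,
|   )
|   raise SystemExit(result.returncode)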
| Frajedo wrote:
| We give each one of our developers their very own AWS account,
| managed through the AWS Organizations service. They are full
| administrators and responsible for resources and cost.
|
| So far we haven't had any issues or bad surprises, although we
| have set up some AWS billing alerts just in case (one such alert
| is sketched below).
|
| Feel free to make them responsible for cost and resources and
| you'll be surprised how well they can manage their own account.
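| A minimal sketch of such a billing alert with boto3 (the alarm
| name, threshold, and SNS topic ARN are placeholders; billing
| metrics only exist in us-east-1 and must be enabled for the
| account first):
|
|   import boto3
|
|   cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
|   cloudwatch.put_metric_alarm(
|       AlarmName="dev-account-monthly-spend",  # hypothetical name
|       Namespace="AWS/Billing",
|       MetricName="EstimatedCharges",
|       Dimensions=[{"Name": "Currency", "Value": "USD"}],
|       Statistic="Maximum",
|       Period=21600,            # evaluate every 6 hours
|       EvaluationPeriods=1,
|       Threshold=50.0,          # alert once spend passes $50
|       ComparisonOperator="GreaterThanThreshold",
|       AlarmActions=[
|           "arn:aws:sns:us-east-1:123456789012:billing-alerts"
|       ],  # hypothetical SNS topic
|   )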
| vageli wrote:
| > We give each one of our developers their very own AWS
| account, managed through the AWS Organizations service. They are
| full administrators and responsible for resources and cost.
|
| How many developers work at your organization?
| jrockway wrote:
| I sure wish cloud providers had a "fake" implementation of their
| APIs that I could use in tests, for cases like the one that this
| program was written for. There are so many cross-dependencies
| that can be checked ("sorry can't create that VM, you need a
| service account first" or "VM names can only be 69 characters and
| can't contain '!'") without creating actual resources. You can
| test all the edge cases that resolve to failure in a unit test,
| and then only need one integration test to get that end-to-end
| "when I run 800 Terraform files, can I talk to my webserver at
| test239847abcf123.domain-name.com?".
|
| I've found that the alternative to this is either a CI run that
| takes hours, or simply not testing the edge cases and hoping for
| the best. The former is a nightmare for velocity; the latter is a
| nightmare for having more than one person working on the team
| ever.
|
| Lately, I've been fortunate to not be working in the "general
| cloud provider" space (where everything is a black box that can
| change at any moment, and documentation is an afterthought of
| afterthoughts) and have only focused on things going on in
| Kubernetes. To facilitate fast tests, I forked Kubernetes, made
| some internal testing infrastructure public, and implemented a
| kubelet-alike that runs "containers" in the same process as the
| test. The application I work on is basically a data-driven job
| management system that runs on K8s. People have already written
| the easy tests: build the code into a container,
| create a Kubernetes cluster, start the app running, poke at it
| over the API. These take for-fucking-ever to run. (They do give
| you good confidence that Linux's sleep system call works well,
| though! Boy can it sleep.) With my in-process Kubernetes cluster,
| most of that time goes away, and you can still test a lot of
| stuff. If you want to test "what happens when a very restrictive
| AppArmor policy is applied to all pods in the cluster", yeah, the
| "fake" doesn't work. If you want to test "when my worker starts
| up, will it start processing work", it works great. (And, since
| the 100% legit "api machinery" is up and running, you can still
| test things like "if my user's spec contains an invalid pod
| patch, will they get a good error message?") Most bugs (and
| mistakes that people make while adding new features) are in that
| second category, and so you end up spending milliseconds instead
| of minutes testing the parts of your application that are most
| hurtful to users when they break. (And, nobody is saying not to
| run SOME live integration tests. You should always start with,
| and keep, integration tests against real environments running
| real workloads. When they pass, you get some confidence that
| there are no major showstoppers. When they fail, you want the
| lighter-weight tests to point you with precision to the faulty
| assumption or bug.)
|
| Anyway... it makes me sad that things like aws-nuke are the kind
| of tooling you need to produce reliable software that deploys to
| cloud providers. I'd certainly pay $0.01 more per VM hour to be
| able to delete 99% of my slow tests. But I think I'm the only
| person in the world who thinks tests should be thorough and fast,
| so I'm on my own here. Sad.
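| For AWS at least, the closest thing to that "fake implementation"
| today is a community emulator such as moto or LocalStack rather
| than anything the provider ships. A minimal sketch with moto (the
| decorator name varies by version; mock_aws is the moto 5.x form,
| older releases use per-service decorators like mock_s3):
|
|   import boto3
|   from moto import mock_aws
|
|   @mock_aws
|   def test_bucket_roundtrip():
|       # Everything below hits moto's in-memory fake, not real AWS.
|       s3 = boto3.client("s3", region_name="us-east-1")
|       s3.create_bucket(Bucket="test-bucket")
|       s3.put_object(Bucket="test-bucket", Key="hello.txt", Body=b"hi")
|       body = s3.get_object(Bucket="test-bucket", Key="hello.txt")["Body"].read()
|       assert body == b"hi"
|
| That covers many of the cheap edge cases; anything that depends
| on real provider behavior still needs the end-to-end run the
| parent describes.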
| jmainguy wrote:
| time0ut wrote:
| I've contributed to and used this tool extensively. I have
| accounts where I run this thing out of a CodeBuild job on a cron
| schedule to dejunk after a day of terraform development.
| Fantastic tool.
| yieldcrv wrote:
| This will shave 20% off of Amazon's earnings!
| thedougd wrote:
| Are Azure resource groups still pretty comprehensive? You used to
| be able to create all your stuff in a resource group. Delete the
| group and it deletes all the stuff with it. Super convenient.
| bbkane wrote:
| Yeah they seem to be. Except Azure AD stuff, which kinda makes
| sense (still wish that part were more easily automated).
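| For reference, deleting a group (and everything in it) is a
| one-liner in the Python SDK; a minimal sketch, assuming
| azure-identity plus a recent azure-mgmt-resource (older SDKs use
| .delete instead of .begin_delete), with placeholder names:
|
|   from azure.identity import DefaultAzureCredential
|   from azure.mgmt.resource import ResourceManagementClient
|
|   subscription_id = "00000000-0000-0000-0000-000000000000"  # placeholder
|   client = ResourceManagementClient(DefaultAzureCredential(), subscription_id)
|
|   # Deleting the resource group tears down everything inside it.
|   poller = client.resource_groups.begin_delete("sandbox-rg")
|   poller.wait()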
| OJFord wrote:
| I would love a `terraform <plan|apply> --nuke`.
|
| Not `destroy` - the opposite - destroy/nuke what I _don't_ have
| in my config.
| [deleted]
| iasay wrote:
| Oh that would be glorious. Not sure how it'd be possible
| though.
| mrg2k8 wrote:
| May be possible using driftctl and some API calls.
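| A very rough sketch of that driftctl-style diff, assuming the
| Terraform-managed resources expose an "arn" attribute in
| terraform show -json and limiting the live side to whatever the
| Resource Groups Tagging API can see in one region:
|
|   import json
|   import subprocess
|
|   import boto3
|
|   # Managed side: ARNs Terraform knows about (root module only here).
|   state = json.loads(
|       subprocess.run(
|           ["terraform", "show", "-json"],
|           capture_output=True, check=True,
|       ).stdout
|   )
|   resources = state.get("values", {}).get("root_module", {}).get("resources", [])
|   managed = {
|       r["values"]["arn"]
|       for r in resources
|       if isinstance(r.get("values"), dict) and "arn" in r["values"]
|   }
|
|   # Live side: everything the tagging API reports in this region.
|   tagging = boto3.client("resourcegroupstaggingapi")
|   live = {
|       res["ResourceARN"]
|       for page in tagging.get_paginator("get_resources").paginate()
|       for res in page["ResourceTagMappingList"]
|   }
|
|   # Candidates for a hypothetical --nuke: live but not in state.
|   for arn in sorted(live - managed):
|       print("unmanaged:", arn)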
| lvh wrote:
| I've used aws-nuke a bunch but the use case seems significantly
| diminished now that AWS Organizations has the ability to delete
| entire accounts.
| wielebny wrote:
| Last time I checked, it was a lengthy process involving
| attaching a credit card, leaving the organization and then
| deleting the account. Has it been changed?
| dhess wrote:
| I'm in the process of doing this now. You can close accounts
| from Control Tower without needing to log in as root to each
| separate account, add a credit card, remove it from the org, and
| then close it manually.
|
| However, you can only close them from Control Tower at the
| rate of 2 to 3 per month, due to a hard limit quota which
| cannot be changed, even if you request it. Needless to say,
| this sucks when you've followed AWS's own best practices and
| created lots of accounts using Control Tower's "vending
| machine."
|
| AWS's archaic account model is one reason we've switched to
| GCP.
| Aeolun wrote:
| It seems to me this cannot possibly be a hard limit. If
| it's a hard limit it's only because AWS wants to milk you
| dry.
| janstice wrote:
| I suspect it's a hard limit to contain the blast radius of a
| disgruntled (former) admin.
| bonobocop wrote:
| https://docs.aws.amazon.com/organizations/latest/APIReferenc...
|
| Now has an API (with some caveats)
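| A minimal sketch of that API via boto3 (the account ID is a
| placeholder; it has to run from the management account, and
| Organizations caps how many accounts you can close per rolling
| 30-day window, hence the exception handling):
|
|   import boto3
|
|   org = boto3.client("organizations")
|
|   try:
|       org.close_account(AccountId="111122223333")  # placeholder ID
|   except org.exceptions.ConstraintViolationException as err:
|       # Raised when the closure quota for the window is exhausted.
|       print("Cannot close account right now:", err)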
| qwertyuiop_ wrote:
| Project sponsored by Azure?
| evanfarrar wrote:
| https://github.com/genevieve/leftovers
|
| A co-worker made this when we worked together on a project that
| ran a large number of terraform configurations in CI against real
| IaaSes. Each account was a sandbox, so we would run this at the
| end of a pipeline to clean up any failures or faulty teardowns.
| bilekas wrote:
| Wow! This is actually some great work!
| hk1337 wrote:
| I would think they should probably change the name so it doesn't
| come across as an official AWS product, but it seems nice.
| orf wrote:
| Shout out to AWS Batch, where if you delete the role assigned to
| a compute cluster the cluster itself becomes impossible to
| delete.
|
| Found this out after using AWS Nuke.
| iasay wrote:
| There's loads of stuff like this.
|
| If you detach an EC2 instance from a virtual tape library and
| destroy it, you can't delete the tape library any more. Even AWS
| support couldn't delete it. This is fine until you have 60TB of
| tapes online and are paying for it.
|
| Fortunately we found an EBS snapshot of the VM.
| OJFord wrote:
| That's a weird one, but surely an aws-nuke bug? It must already
| use a deliberate order - there are plenty of resources that need
| anything linked/constituent deleted first - so that order
| is/was just not correct for those?
| Marazan wrote:
| No, Batch's interaction with IAM roles and permissions is
| super weird and not at all documented.
|
| It is easy to screw it up.
| dekhn wrote:
| AWS Batch is a terrible product.
| bilekas wrote:
| Nah, I've seen this too. It is considered expected behavior.
| In a way it makes sense, but it's not well documented.
| bilekas wrote:
| Not sure why but I laughed out loud reading this.
|
| Edit: I've run into this, but it's considered a feature, not a
| bug!
| Marazan wrote:
| Even better: if your CloudFormation stack has the IAM role and
| the compute environment together and something goes wrong, the
| rollback can delete the IAM role, stranding the compute
| environment and failing the rollback.
| zxcvbn4038 wrote:
| Try logging into the console as root; that lets you delete
| anything.
|
| You can also try updating the cluster to have a new role.
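| A minimal sketch of that role-swap with boto3 (names and the role
| ARN are hypothetical, and it assumes the compute environment can
| still be updated, i.e. it isn't already wedged in a deleting
| state):
|
|   import boto3
|
|   batch = boto3.client("batch")
|   ce = "stuck-compute-env"  # hypothetical compute environment name
|
|   # Point the environment at a role that still exists, then
|   # disable and delete it (each update may need time to settle).
|   batch.update_compute_environment(
|       computeEnvironment=ce,
|       serviceRole="arn:aws:iam::123456789012:role/AWSBatchServiceRole",
|   )
|   batch.update_compute_environment(computeEnvironment=ce, state="DISABLED")
|   batch.delete_compute_environment(computeEnvironment=ce)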
| orf wrote:
| That's... not the issue. The cluster gets stuck in a deleting
| state forever, at which point you're unable to update it.
___________________________________________________________________
(page generated 2022-07-02 23:00 UTC)