[HN Gopher] Intentionally Leaking AWS Keys
       ___________________________________________________________________
        
       Intentionally Leaking AWS Keys
        
       Author : lelf
       Score  : 26 points
       Date   : 2021-01-18 20:02 UTC (1 day ago)
        
 (HTM) web link (brokenco.de)
 (TXT) w3m dump (brokenco.de)
        
       | nodesocket wrote:
       | How is AWS detecting keys pushed to public repos so fast?
        
         | lstamour wrote:
         | GitHub scans on their behalf:
         | https://docs.github.com/en/github/administering-a-repository...
        
       | lidder86 wrote:
       | Would you not be better off doing this with GitHub Actions and
       | using secrets?
        
       | markuman123 wrote:
       | AWS_PROFILE=broken aws s3 ls s3://deltars/simple/
       | 
       | 14 KB in your bucket, and you pay for outgoing traffic. That
       | means that while you're sleeping, I can run up your AWS bill.
       | 
       | In any case, committing credentials is always wrong and you
       | cannot justify it.
       | 
       | Depending on your CI tool (e.g. gitlab-runner, drone-ci, ...),
       | there are other and better ways to provide credentials to a git
       | project in a CI/CD pipeline.
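
       A minimal sketch of that, assuming a GitLab-CI-style runner where
       the credentials live in masked CI variables instead of the repo
       (the variable names here are placeholders):

         # the runner injects the secrets as environment variables,
         # and the AWS CLI/SDKs pick them up automatically
         export AWS_ACCESS_KEY_ID="$CI_AWS_ACCESS_KEY_ID"
         export AWS_SECRET_ACCESS_KEY="$CI_AWS_SECRET_ACCESS_KEY"
         export AWS_DEFAULT_REGION="us-east-1"
         aws s3 ls s3://deltars/simple/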
        
         | tylermenezes wrote:
         | You can run up charges by requesting files in any number of
         | public buckets without the AWS keys. The AWS keys don't change
         | the threat model in this situation.
        
           | markuman123 wrote:
           | That's the reason why you should always use aws:kms
           | encryption on S3.
        
             | QuinnyPig wrote:
             | Wait what? This conflates two entirely unrelated things.
        
               | markuman123 wrote:
               | Nope: even on public buckets you just get a 404, because
               | you have no access to the KMS key.
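
               A minimal sketch of the setting being described here:
               turning on SSE-KMS default encryption for a bucket
               (bucket name and key ARN are placeholders). Objects
               encrypted with a customer-managed KMS key cannot be
               fetched anonymously, because the caller also needs
               permission to use the key; note the default only
               applies to objects uploaded after it is enabled.

                 aws s3api put-bucket-encryption \
                   --bucket deltars \
                   --server-side-encryption-configuration '{
                     "Rules": [{
                       "ApplyServerSideEncryptionByDefault": {
                         "SSEAlgorithm": "aws:kms",
                         "KMSMasterKeyID": "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE"
                       }
                     }]
                   }'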
        
               | naikrovek wrote:
               | I know from your absolute conviction on this (coupled
               | with LOTS of experience with people who have absolute
               | conviction about stuff) that your own conviction is
               | preventing you from seeing valid uses for this, and is
               | potentially keeping you from seeing 100% of the
               | landscape you're professing about.
        
               | markuman123 wrote:
               | Sorry, I cannot follow.
               | 
               | To be clear, I'm no AWS advocate.
        
               | GauntletWizard wrote:
               | There are real uses for S3 buckets that are public and
               | cost you money: distributing files, acting as a web host,
               | anything that you'd use Dropbox with link sharing for.
               | 
               | Yes, it sucks if someone randomly decides to download
               | files from you all day. You should probably set your
               | budget to alert and attempt to blacklist them when it
               | happens. That's rare, though, and aside from a few cases
               | of actual malice, the convenience is worth the cost.
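
               For reference, a rough sketch of setting up such a budget
               alert with the AWS CLI (account id, limit, and e-mail
               address are placeholders):

                 aws budgets create-budget \
                   --account-id 111122223333 \
                   --budget '{"BudgetName": "monthly-cap",
                              "BudgetLimit": {"Amount": "10", "Unit": "USD"},
                              "TimeUnit": "MONTHLY", "BudgetType": "COST"}' \
                   --notifications-with-subscribers '[{
                       "Notification": {"NotificationType": "ACTUAL",
                                        "ComparisonOperator": "GREATER_THAN",
                                        "Threshold": 80,
                                        "ThresholdType": "PERCENTAGE"},
                       "Subscribers": [{"SubscriptionType": "EMAIL",
                                        "Address": "you@example.com"}]}]'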
        
             | tylermenezes wrote:
             | You know that AWS is frequently used as (and has an entire
             | product for use as) a CDN, right?
        
         | twodollars wrote:
         | Always wrong? No justification? This seems like a good
         | justification to me. Is there any difference between running up
         | the outbound traffic bill using the key vs accessing web assets
         | anonymously? OP has a budget alert set.
        
           | markuman123 wrote:
           | But the budget alert is delayed... I bet you're ruined by
           | the time you've noticed the alert.
        
       | jasonpeacock wrote:
       | They're being lazy, and trying to make an excuse for it.
       | 
       | They're self-limiting the effectiveness and thoroughness of their
       | testing because they aren't willing to set up a proper integration
       | test server w/credentials. Where else are they cutting corners?
       | 
       | The integration test server should have its own credentials to
       | access the resources it needs for testing; then it would support
       | all types of testing beyond the "safe" read-only tests they're
       | talking about in the article.
        
       | cryptonym wrote:
       | MinIO or S3 Ninja can help with S3 integration testing.
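
       A rough sketch of what that can look like with MinIO in Docker
       (the credentials and bucket name are arbitrary placeholders, and
       MinIO's env var names have changed across releases):

         # throwaway S3-compatible server for the test run
         docker run -d --rm -p 9000:9000 \
           -e MINIO_ROOT_USER=testkey -e MINIO_ROOT_PASSWORD=testsecret \
           minio/minio server /data

         # point the AWS CLI (or the SDK under test) at it instead of S3
         export AWS_ACCESS_KEY_ID=testkey AWS_SECRET_ACCESS_KEY=testsecret
         aws --endpoint-url http://localhost:9000 s3 mb s3://deltars
         aws --endpoint-url http://localhost:9000 s3 cp ./simple \
           s3://deltars/simple --recursive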
        
       | ashearer wrote:
       | Good to know that AWS is so fast to detect this.
       | 
       | If good uses were common -- and I'm struggling to come up with
       | them -- AWS could suppress the alert for IAM users that were
       | already sufficiently locked down. But since that would become
       | dangerous if the permissions were loosened later, AWS would wind
       | up creating two classes of keys, public and non-public, in order
       | to know whether to warn about loosening restrictions. Simpler
       | just to forbid making keys public.
       | 
       | To publish such a key anyway without having to go to the trouble
       | of unwinding an AWS auto-quarantine, breaking it up in code (like
       | "part1" + "part2") might be enough to foil the AWS bot. Can
       | anyone confirm?
        
         | Znafon wrote:
         | It's actually GitHub that contacts AWS even before the commit
         | finishes being sent to GitHub, so it is indeed very fast.
        
           | paultopia wrote:
           | Really? If Github is already detecting credentials that
           | reliably, I wonder why they don't just switch repositories to
           | temporarily private and e-mail the account owner
           | themselves...?
        
             | netzvieh wrote:
             | Because the key has to be revoked on the AWS side, not just
             | removed from the repo. And the person pushing to GitHub and
             | the person paying the AWS bill / the AWS admin are probably
             | not the same.
        
       | 430scuderia wrote:
       | When I first started with AWS I made this mistake. I was building
       | an MVP to test out some idea and forgot to remove the hardcoded
       | secret key when I pushed to GitHub.
       | 
       | AWS quickly shut it down and informed me. I came back to find
       | millions of queues created and that somebody had executed 100
       | million Lambda queries.
       | 
       | AWS refunded me and I have been with them since. This is a very
       | different experience to what happened with Google. They ignored
       | my email and basically I learned that if I did a chargeback, they
       | would shut down my other services too.
        
       | benlivengood wrote:
       | I guess the integration test is testing that the software can
       | connect to an S3 bucket _with an access key_, since it's trivial
       | to just make a public bucket?
       | 
       | But if that's the code path that needs testing then it should
       | probably be isolated to an auth library that can be tested by
       | simply authenticating with the secret key alone and not needing
       | to perform a chargeable action.
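
       If validating the credentials is really all that's needed, one
       cheap way to exercise the auth path without a chargeable S3 call
       is an STS request, which is free:

         # confirms the access key/secret are valid and shows which
         # principal they belong to, without touching any bucket
         aws sts get-caller-identity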
        
         | etimberg wrote:
         | Or if one really wanted an integration test, why not spin up
         | something like MinIO and test with that? The infrastructure
         | would exist only for the lifetime of the test run.
        
           | musingsole wrote:
           | > Fortunately our tests just needed to retrieve objects from
           | a bucket to confirm that an S3 bucket is presenting itself as
           | a Delta table properly.
           | 
           | It seems the test is about the data within the bucket
           | assuming the correct format, not the connection to S3.
           | Perhaps we can trust the author to have considered other ways
           | to test this more properly that either wouldn't work for
           | reasons or weren't nearly as fun.
        
             | benlivengood wrote:
             | > It seems the test is about the data within the bucket
             | assuming the correct format, not the connection to S3.
             | Perhaps we can trust the author to have considered other
             | ways to test this more properly that either wouldn't work
             | for reasons or weren't nearly as fun.
             | 
             | If security is legitimately not orthogonal to functional
             | testing then that is the interesting part of the problem,
             | but I don't see that laid out in the article. It would be a
             | case for AWS and others to improve or add methods to
             | support the test case in a secure way.
        
         | aECBNhWl4sqXr2 wrote:
         | This looks to be the integration test:
         | https://github.com/delta-io/delta-rs/commit/09d345e52ca3177d...
         | 
         | I'm not sure why the author thinks this required putting the
         | credentials in the code itself. Heck, I'd argue that this
         | integration test should be setting up the bucket and contents
         | as part of the setup for the test itself, pulling the
         | credentials from some sort of secrets manager.
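
         As a sketch of that shape of test (the bucket name, fixture
         path, and test filter below are made up, and the credentials
         are assumed to come from the environment, e.g. a CI secret
         store):

           #!/bin/sh
           set -eu

           # create a throwaway bucket and load the fixture data
           BUCKET="delta-rs-it-$(date +%s)"
           aws s3 mb "s3://$BUCKET"
           aws s3 cp tests/data/simple_table "s3://$BUCKET/simple/" --recursive

           # run the integration tests against it, then tear it down
           AWS_S3_TEST_BUCKET="$BUCKET" cargo test s3
           aws s3 rb "s3://$BUCKET" --force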
        
           | Raidion wrote:
           | Yeah, this just seems like someone is trying to rationalize
           | avoiding work. Injecting secrets is like one of the base
           | steps of CI/CD. How else would you make sure this works in
           | different environments? All of that might be unnecessary for
           | the most base case ever, but it's one of those things you
           | solve once and use it for the rest of the project.
        
       | rpedela wrote:
       | If you are going to put creds in git, at least use sops.
       | 
       | https://github.com/mozilla/sops
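
       A minimal sketch of that workflow, assuming a KMS key is used for
       the encryption (the key ARN is a placeholder):

         # encrypt so only principals who can use the KMS key can decrypt
         sops --encrypt \
           --kms arn:aws:kms:us-east-1:111122223333:key/EXAMPLE \
           secrets.yaml > secrets.enc.yaml

         # decrypt in CI or at deploy time
         sops --decrypt secrets.enc.yaml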
        
       | [deleted]
        
       | captn3m0 wrote:
       | I've had a _very interesting idea_ about public-code-execution
       | that relies on leaking your AWS keys:
       | 
       | 1. AWS Lambda is "verifiable infrastructure". You can fetch the
       | code and verify that it's the same code as what you've provided
       | elsewhere (as a reproducible build, for example).
       | 
       | 2. Use the lambda for trusted-code-execution (say collecting
       | hashes of your contact list, matching them against a bloom
       | filter). The code isn't supposed to log these contacts, or save
       | them in any way.
       | 
       | 3. You create an AWS IAM keypair that has permissions to get each
       | revision of the Lambda code and validate the corresponding API
       | endpoint.
       | 
       | 4. Instead of using a custom-domain API Gateway, you use the AWS
       | Lambda execute endpoint. The request directly reaches the Lambda
       | - verifiably (I think).
       | 
       | 5. Publish the keys for the AWS IAM keypair that was created
       | above.
       | 
       | Anyone in the world can then call up the Lambda management API to
       | validate the code at any time with these credentials. If you
       | trust AWS Lambda and IAM, there is a verifiable trust in the code
       | that is running on that lambda and that is processing your
       | contacts.
       | 
       | This might be possible by setting up an IAM Assume Role against
       | the * to allow any AWS user the same permissions, but I'm not
       | sure if that's allowed?
       | 
       | Edit: Documented on my ideas repo
       | https://github.com/captn3m0/ideas#verifiable-code-execution-... ,
       | if someone is interested in building this
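
       For step 1, a sketch of how the verification could work with the
       published read-only credentials (the function name is a
       placeholder): GetFunction returns a CodeSha256 for the deployed
       package, which can be compared against a hash of the reproducible
       build.

         # hash of the code currently deployed to the function
         aws lambda get-function --function-name contact-matcher \
           --query 'Configuration.CodeSha256' --output text

         # the same base64-encoded SHA-256 over the local build artifact
         openssl dgst -sha256 -binary function.zip | openssl base64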
        
         | Znafon wrote:
         | This is a funny idea. I imagine you could do this with other
         | services as well; for example, you could use an API Gateway as
         | long as you give read-only keys for API Gateway and Route53?
        
           | captn3m0 wrote:
           | Yeah, but then you're auditing more pieces (is the gateway
           | logging? is it mirroring traffic?).
           | 
           | Route53 is also tricky, because you will need to prove the
           | whole chain from your nameserver. It won't even work for
           | domains registered outside AWS, because you could have a
           | second NS listed, and that needs special treatment to catch.
           | 
           | Using the Lambda execution endpoints (the ones that look like
           | https://API-ID.execute-api.REGION.amazonaws.com/STAGE) avoids
           | a lot of these concerns.
        
       | gouggoug wrote:
       | I do not understand the justification given by the author. The
       | article does not actually say _why_ these keys must be versioned
       | in the repository. "I just need read access" is not a good
       | justification.
        
       ___________________________________________________________________
       (page generated 2021-01-19 23:02 UTC)