[HN Gopher] How to harden GitHub Actions
       ___________________________________________________________________
        
       How to harden GitHub Actions
        
       Author : moyer
       Score  : 189 points
       Date   : 2025-05-06 02:07 UTC (2 days ago)
        
 (HTM) web link (www.wiz.io)
 (TXT) w3m dump (www.wiz.io)
        
       | tomrod wrote:
        | I support places that use GH Actions like it's going out of style.
       | This article is useful.
       | 
       | I wonder how we get out of the morass of supply chain attacks,
       | realistically.
        
         | guappa wrote:
         | We use linux distributions.
        
           | tomrod wrote:
           | How do apt, dnf, and apk prevent malicious software from
           | getting into repositories?
        
             | liveoneggs wrote:
             | never update
        
               | photonthug wrote:
               | I can confirm there's real wisdom in this approach, lol.
               | Nothing bad had happened to me for a while so I decided
               | to update that one computer to ubuntu noble and YUP,
               | immediately bricked by some UEFI problem. Ok cool, it's
               | not like 2004 anymore, this will probably be a quick
               | fix.. 3 hours later...
        
               | RadiozRadioz wrote:
               | An OS upgrade broke UEFI. Huh? That doesn't sound right.
        
               | photonthug wrote:
               | In the newest iteration of a time-honored tradition, grub
               | (and/or whatever distro's treatment of it) has been
               | finding all kinds of ways to break upgrades for 30 years.
               | If you're on the happy path you can probably go a long
               | time without a problem.
               | 
                | But when you're the unlucky one and need to search
                | for a fix, checking hardware/distro/date details in
                | whatever forums or posts, that's when you notice that
                | the problems don't actually ever stop.. it just hasn't
                | happened to _you_ lately.
        
               | RadiozRadioz wrote:
                | No, that's not what I mean. I mean that, technologically,
                | UEFI is flashed into your motherboard and there isn't any way
               | for an OS to mess with that. You need to boot from a
               | specially prepared USB with compatible firmware in order
               | to change it. Your problem must have been above UEFI, or
               | an error in your OS that mentioned UEFI.
        
             | wongarsu wrote:
             | In principle by having the repository maintainer review the
             | code they are packaging. They can't do a full security
             | review of every package and may well be fooled by
             | obfuscated code or deliberately introduced bugs, but the
             | threshold for a successful attack is much higher than on
             | Github Actions or npm.
        
               | KronisLV wrote:
               | It kinda feels like any CI/CD should only be run on the
               | server after one of the maintainers gives it the okay to
               | do so, after reviewing the code. From this, one can also
               | make the assumption that most of the CI (linting, various
               | checks and tests) should all be runnable locally even
               | before any code is pushed.
        
       | DGAP wrote:
       | Great article!
       | 
       | I also found this open source tool for sandboxing to be useful:
       | https://github.com/bullfrogsec/bullfrog
        
         | mstade wrote:
         | It's pretty depressing that such functionality isn't a core
         | feature of GHA. Seems like low hanging fruit.
        
         | cedws wrote:
         | I came across this the other day but I couldn't really grok how
         | it works. Does it run at a higher privilege level than the
         | workflow or the same? Can a sophisticated enough attack just
         | bypass it?
        
           | mdaniel wrote:
           | I spent a few seconds clicking into it before the newfound
           | 429 responses from GitHub caused me to lose interest
           | 
           | I believe a sufficiently sophisticated attacker could unwind
           | the netfilter and DNS change, but in my experience every
           | action that you're taking during a blind attack is one more
           | opportunity for things to go off the rails. The increased
           | instructions (especially ones referencing netfilter and DNS
           | changes) also could make it harder to smuggle in via an
           | obfuscated code change (assuming that's the attack vector)
           | 
           | That's a lot of words to say that this approach could be
           | better than nothing, but one will want to weigh its gains
           | against the onoz of having to keep its allowlist rules up to
           | date in your supply chain landscape
        
           | DGAP wrote:
            | Yep, and there's an opt-in to disable sudo, which prevents
            | circumvention. However, this can break some actions,
            | especially ones deployed as Docker images. It also doesn't
            | work with macOS.
        
         | vin10 wrote:
          | Interesting project. I think I just found a way to crash the
          | sandbox; just reported it via an advisory.
        
       | kylegalbraith wrote:
       | Glad this got posted. It's an excellent article from the Wiz
       | team.
       | 
       | GitHub Actions is particularly vulnerable to a lot of different
       | vectors, and I think a lot of folks reach for the self-hosted
       | option and believe that closes up the majority of them, but it
       | really doesn't. If anything, it might open more vectors and
        | potentially scarier ones (e.g., a persistent runner could be
       | compromised, and if you got your IAM roles wrong, they now have
       | access to your AWS infrastructure).
       | 
       | When we first started building Depot GitHub Actions Runners [0],
       | we designed our entire system to never trust the actual EC2
       | instance backing the runner. The same way we treat our Docker
       | image builders. Why? They're executing untrusted code that we
       | don't control.
       | 
       | So we launch a GitHub Actions runner for a Depot user in 2-5
       | seconds, let it run its job with zero permissions at the EC2
       | level, and then kill the instance from orbit to never be seen
       | again. We explicitly avoid the persistent runner, and the IAM
       | role of the instance is effectively {}.
       | 
        | For folks reading the Wiz article, this is the line to be
        | thinking about when going the self-hosted route:
       | 
       | > Self-hosted runners execute Jobs directly on machines you
       | manage and control. While this flexibility is useful, it
       | introduces significant security risks, as GitHub explicitly warns
       | in their documentation. Runners are non-ephemeral by default,
       | meaning the environment persists between Jobs. If a workflow is
       | compromised, attackers may install background processes, tamper
       | with the environment, or leave behind persistent malware.
       | 
       | > To reduce the attack surface, organizations should isolate
       | runners by trust level, using runner groups to prevent public
       | repositories from sharing infrastructure with private ones. Self-
       | hosted runners should never be used with public repositories.
       | Doing so exposes the runner to untrusted code, including
       | Workflows from forks or pull requests. An attacker could submit a
       | malicious workflow that executes arbitrary code on your
       | infrastructure.
       | 
       | [0] https://depot.dev/products/github-actions
        
       | cedws wrote:
       | I've been reviewing the third party Actions we use at work and
       | seen some scary shit, even with pinning! I've seen ones that run
        | arbitrary unpinned install scripts from random websites, clone
        | the HEAD of repos and run code from there, and other stuff. I
       | don't think even GitHub's upcoming "Immutable Actions" will help
       | if people think it's acceptable to pull and run arbitrary code.
       | 
       | Many setup Actions don't support pinning binaries by checksum
       | either, even though binaries uploaded to GitHub Releases can be
       | replaced at will.
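        | 
        | If a setup Action won't verify the download for you, a plain
        | run step can; a rough sketch, with the URL and checksum as
        | placeholders:
        | 
        |     steps:
        |       # refuse to unpack the release unless its SHA-256 matches
        |       # a value recorded in the repository
        |       - name: Fetch and verify tool
        |         run: |
        |           curl -sSLo tool.tar.gz "https://example.com/tool.tar.gz"
        |           echo "<expected-sha256>  tool.tar.gz" | sha256sum -c -
        |           tar -xzf tool.tar.gz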
       | 
       | I've started building in house alternatives for basically every
       | third party Action we use (not including official GitHub ones)
       | because almost none of them can be trusted not to do stupid shit.
       | 
       | GitHub Actions is a security nightmare.
        
         | MillironX wrote:
         | Even with pinning, a common pattern I've seen in one of my orgs
         | is to have a bot (Renovate, I think Dependabot can do this too)
         | automatically update the pinned SHA when a new release comes
         | out. Is that practically any different than just referencing a
         | tag? I'm genuinely curious.
        
           | wongarsu wrote:
           | I guess you still have some reproducibility and stability
           | benefits. If you look at an old commit you will always know
           | which version of the action was used. Might be great if you
           | support multiple releases (e.g. if you are on version 1.5.6
           | but also make new point releases for 1.4.x and 1.3.x). But
           | the security benefits of pinning are entirely negated if you
           | just autoupdate the pin.
        
       | crohr wrote:
        | I guess the TL;DR is: just use ephemeral runners when self-
        | hosting? There are lots of solutions for that. Also, it would
        | be nice for GitHub to do something on the security front
        | (allowlists / blocklists of IPs, hostnames, etc., or at least
        | just reporting on traffic).
        
       | enescakir wrote:
       | The riskiest line in your repo isn't in "src/", it's in
       | ".github/workflows/"
       | 
       | Self-hosted runners feel more secure at first since they execute
       | jobs directly on machines you manage. But they introduce new
       | attack surfaces, and managing them securely and reliably is hard.
       | 
       | At Ubicloud, we built managed GitHub Actions runners with
       | security as the top priority. We provision clean, ephemeral VMs
       | for each job, and they're fully isolated using Linux KVM. All
       | communication and disks are encrypted.
       | 
       | They're fully compatible with default GitHub runners and require
       | just a one-line change to adopt. Bonus: they're 10x more cost-
       | effective.
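        | 
        | Concretely, the one-line change is the runs-on label, along
        | these lines (the label shown is illustrative; it depends on
        | the runner size you pick):
        | 
        |     jobs:
        |       build:
        |         # swap the hosted runner label for a Ubicloud one
        |         runs-on: ubicloud-standard-2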
       | 
       | https://www.ubicloud.com/use-cases/github-actions
        
       | Arch-TK wrote:
       | The recommendation is not to interpolate certain things into
       | shell scripts. Don't interpolate _anything_ into shell scripts as
       | a rule. Use environment variables.
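        | 
        | For example, instead of letting a ${{ }} expression expand
        | inside the script, pass it through env (a minimal sketch):
        | 
        |     # risky: the expression is spliced into the script before
        |     # the shell ever sees it
        |     - run: echo "${{ github.event.pull_request.title }}"
        | 
        |     # safer: the value arrives as data in an env variable
        |     - run: echo "$PR_TITLE"
        |       env:
        |         PR_TITLE: ${{ github.event.pull_request.title }}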
       | 
        | This, combined with people having no clue how to write bash
        | well/safely, is a major source of security issues in these things.
        
         | cedws wrote:
         | Zizmor has a check for this.
         | 
         | https://github.com/woodruffw/zizmor
        
       | diggan wrote:
       | > Using Third-Party GitHub Actions
       | 
        | Maybe I'm overly pedantic, but this whole section seems to miss
        | the absolute most obvious way to de-risk using 3rd-party
        | Actions: reviewing the code itself. It talks about using
        | popularity, number of contributors and a bunch of other things
        | for "assessing the risk", but it never actually mentions
        | reviewing the action/code itself.
       | 
       | I see this all the time around 3rd party library usage, people
       | pulling in random libraries without even skimming the source
       | code. Is this really that common? I understand for a whole
       | framework you don't have time to review the entire project, but
       | for these small-time GitHub Actions that handle releases, testing
       | and such? Absolute no-brainer to sit down and review it all
       | before you depend on it, rather than looking at the number of
        | stars or other vanity metrics.
        
         | KolmogorovComp wrote:
          | Because reading the code is useless if you can't pin the
          | version, and the article explains well why that's hard to do:
         | 
         | > However, only hash pinning ensures the same code runs every
         | time. It is important to consider transitive risk: even if you
         | hash pin an Action, if it relies on another Action with weaker
         | pinning, you're still exposed.
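          | 
          | Hash pinning itself is just the full commit SHA in place of
          | the tag (placeholder SHA below), but it does nothing for the
          | Action's own unpinned dependencies:
          | 
          |     # tag pinning: the tag can later be moved to other code
          |     - uses: actions/checkout@v4
          | 
          |     # hash pinning: resolves to exactly one commit
          |     - uses: actions/checkout@<full-commit-sha>  # v4.2.0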
        
           | ratrocket wrote:
           | Depending on your circumstances (and if the license of the
           | action allows it) it's "easy" to fork the action and use your
           | own fork. Instant "pinning".
        
             | carlmr wrote:
              | But how does that solve the issue of the forked action
              | not using pinned versions itself?
             | 
             | You need to recursively fork and modify every version of
             | the GHA and do that to its sub-actions.
             | 
              | You'd need something like a lockfile mechanism to prevent
             | this.
        
               | ratrocket wrote:
               | Yes, that is completely true -- transitive dependencies
               | are a problem. What I suggested only works in the
               | simplest cases and isn't a great solution, more of a
               | bandaid.
        
       | analytically wrote:
       | https://centralci.com/blog/posts/concourse_vs_gha
        
       | axelfontaine wrote:
       | This is a great article, with many important points.
       | 
       | One nitpick:
       | 
       | > Self-hosted runners should never be used with public
       | repositories.
       | 
       | Public repositories themselves aren't the issue, pull requests
       | are. Any infrastructure or data mutable by a workflow involving
       | pull requests should be burned to the ground after that workflow
       | completes. You can achieve this with ephemeral runners with JIT
       | tokens, where the complete VM is disposed of after the job
       | completes.
       | 
        | As always, the principle of least privilege is your friend.
       | 
       | If you stick to that, ephemeral self-hosted runners on disposable
       | infrastructure are a solid, high-performance, cost-effective
       | choice.
       | 
       | We built exactly this at Sprinters [0] for your own AWS account,
       | but there are many other good solutions out there too if you keep
       | this in mind.
       | 
       | [0] https://sprinters.sh
        
       | cyrnel wrote:
       | This has some good advice, but I can't help but notice that none
       | of this solves a core problem with the tj-actions/changed-files
       | issue: The workflow had the CAP_SYS_PTRACE capability when it
       | didn't need it, and it used that permission to steal secrets from
       | the runner process.
       | 
       | You don't need to audit every line of code in your dependencies
       | and their subdependencies if your dependencies are restricted to
       | only doing the thing they are designed to do and nothing more.
       | 
       | There's essentially nothing nefarious changed-files could do if
       | it were limited to merely reading a git diff provided to it on
       | stdin.
       | 
       | Github provides no mechanism to do this, probably because posts
       | like this one never even call out the glaring omission of a
       | sandboxing feature.
        
         | delusional wrote:
         | What would be outside the sandbox? If you create a sandbox that
          | only allows git diff, then I suppose you fixed this one issue,
         | but what about everything else? If you allow the sandbox to be
         | configurable, then how do you configure it without that just
         | being programming?
         | 
          | The problem with these "microprograms" has always been that
          | once you delegate so much, once you are willing to put in that
          | little effort, you can't guarantee anything.
         | 
         | If you are willing to pull in a third party dependency to run
         | git diff, you will never research which permissions it needs.
         | Doing that research would be more difficult than writing the
         | program yourself.
        
         | esafak wrote:
         | Where can I read about this? I see no reference in its repo:
         | https://github.com/search?q=repo%3Atj-actions%2Fchanged-file...
        
           | cyrnel wrote:
           | Every action gets these permissions by default. The reason we
           | know it had that permission is that the exploit code read
           | from /proc/pid/mem to steal the secrets, which requires some
           | permissions: https://blog.cloudflare.com/diving-into-proc-
           | pid-mem/#access...
           | 
           | Linux processes have tons of default permissions that they
           | don't really need.
        
         | abhisek wrote:
          | GitHub Actions by default provides an isolated VM with root
          | privilege to a workflow. I don't think job-level privilege
          | isolation is in its threat model currently, although it does
          | allow job-level scopes for the default GitHub token.
          | 
          | Also, the secrets are accessible only when a workflow is
          | invoked from a trusted trigger, i.e. not from a forked repo.
          | Not sure what else can be done here to protect against a
          | compromised 3rd-party action.
        
           | cyrnel wrote:
           | People have been running different levels of privileged code
           | together on the same machine ever since the invention of
           | virtual machines. We have lots of lightweight sandboxing
           | technologies that could be used when invoking a particular
           | action such as tj-actions/changed-files that only gives it
           | the permissions it needs.
           | 
           | You may do a "docker build" in a pipeline which does need
           | root access and network access, but when you publish a
           | package on pypi, you certainly don't need root access and you
           | also don't need access to the entire internet, just the pypi
           | API endpoint(s) necessary for publishing.
        
         | lmeyerov wrote:
         | Yes, by default things should be sandboxed - no network, no
         | repo writes, ... - and should be easy to add extra caps (ex:
         | safelist dockerhub)
         | 
          | Likewise, similar to modern smartphones asking if they should
          | remove excess unused privs granted to certain apps, GHA
          | should detect these super common overprovisionings and make
          | it easy for maintainers to flip those configs, e.g., with a
          | "yes" button.
        
       | wallrat wrote:
       | Been tracking this project for a while https://github.com/chains-
       | project/ghasum . It creates a verifiable checksum manifest for
       | all actions - still in development but looks very promising.
       | 
        | Will be a good complement to GitHub's Immutable Actions when they
       | arrive.
        
         | esafak wrote:
         | https://github.com/features/preview/immutable-actions
        
         | esafak wrote:
         | Here is an example in the wild:
         | https://github.com/actions/checkout/actions/workflows/publis...
        
       | bob1029 wrote:
       | > By default, the Workflow Token Permissions were set to read-
       | write prior to February 2023. For security reasons, it's crucial
       | to set this to read-only. Write permissions allow Workflows to
       | inadvertently or maliciously modify your repository and its data,
       | making least-privilege crucial.
       | 
       | > Double-check to ensure this permission is set correctly to
       | read-only in your repository settings.
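        | 
        | For reference, the per-workflow version of that setting looks
        | roughly like this:
        | 
        |     # restrict the default GITHUB_TOKEN for the whole workflow,
        |     # then grant write scopes only to the jobs that need them
        |     permissions:
        |       contents: read
        | 
        |     jobs:
        |       release:
        |         permissions:
        |           contents: write   # only this job may write contents
        |         runs-on: ubuntu-latest
        |         steps:
        |           - uses: actions/checkout@v4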
       | 
       | It sounds to me like the most secure GH Action is one that
       | doesn't need to exist in the first place. Any time the security
       | model gets this complicated you can rest assured that it is going
       | to burn someone. Refer to Amazon S3's byzantine configuration
       | model if you need additional evidence of this.
        
       | ebfe1 wrote:
        | After the tj-actions hack, I put together a little tool to go
        | through all of the GitHub Actions in a repository and replace
        | them with the commit hash of the version:
       | 
       | https://github.com/santrancisco/pmw
       | 
       | It has a few "features" which allowed me to go through a
       | repository quickly:
       | 
        | - It prompts the user and recommends the hash; it also gives
        | the URL of the current tag/action so you can double-check that
        | the hash value matches and review the code if needed
       | 
        | - Once you accept a change, it keeps that in a JSON file, so
        | the same exact version of the action will be pinned in the
        | future as well and won't be reprompted.
       | 
        | - It also lets you ignore version tags for GitHub Actions
        | coming from well-known, reputable organisations (like
        | "actions", which belongs to GitHub), as you may want to keep
        | updating those to receive hotfixes for breaking changes or
        | security fixes.
       | 
        | This way I have full control over what to pin and what not to,
        | and the config file is stored in the .github folder so I can
        | go back, rerun it, and repin everything.
        
         | tuananh wrote:
         | renovate can be configured to do that too :)
        
           | jquaint wrote:
           | Do you have an example config?
           | 
           | Trying to get the same behavior with renovate :)
        
         | loginatnine wrote:
         | This is good, just bear in mind that if you put the hash of an
          | external composite action and that action pulls in another one
         | without a hash, you're still vulnerable on that transitive
         | dependency.
        
         | newman314 wrote:
         | I don't know if your tool already does this but it would be
         | helpful if there is an option to output the version as a
         | comment of the form
         | 
         | action@commit # semantic version
         | 
         | Makes it easy to quickly determine what version the hash
         | corresponds to. Thanks.
        
         | fartbagxp wrote:
         | I've been using https://github.com/stacklok/frizbee to lock
         | down to commit hash. I wonder how this tool compares to that.
        
       | MadsRC wrote:
        | Shameless plug: I pushed a small CLI for detecting unpinned
        | dependencies and automatically fixing them the other day:
       | https://codeberg.org/madsrc/gh-action-pin
       | 
       | Works great with commit hooks :P
       | 
       | Also working on a feature to recursively scan remote dependencies
       | for lack of pins, although that doesn't allow for fixing, only
       | detection.
       | 
       | Very much alpha, but it works.
        
       | esafak wrote:
       | Can dependabot pin actions to commits while upgrading them?
        
       | duped wrote:
       | I feel like there was a desire from GH to avoid needing a "build"
        | step for actions so you could use `uses: someones/work` or
       | whatever, `git push` and see the action run.
       | 
       | But if you think about it, the entire design is flawed. There
       | should be a `gh lock` command you can run to lock your actions to
        | the checksum of the action(s) you're importing, and have it apply
       | transitively, and verify those checksums when your action runs
       | every time it pulls in remote dependencies.
       | 
       | That's how every modern package manager works - because the
        | alternative is gaping security holes.
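        | 
        | Purely hypothetically, such a lockfile might record something
        | like this (nothing of the sort exists in GHA today):
        | 
        |     # hypothetical .github/actions.lock
        |     actions:
        |       - uses: actions/checkout@v4
        |         resolved: <full-commit-sha>
        |         transitive: []
        |       - uses: some-org/setup-tool@v2
        |         resolved: <full-commit-sha>
        |         transitive:
        |           - uses: another-org/helper@v1
        |             resolved: <full-commit-sha>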
        
       | newman314 wrote:
       | Step Security has a useful tool that aids in securing GitHub
       | Actions here: https://app.stepsecurity.io/securerepo
       | 
        | Disclaimer: no conflict of interest, just a happy user.
        
       | RadiozRadioz wrote:
       | GHA Newbie here: what are all these 3rd-party actions that people
       | are using? How complicated is your build / deployment pipeline
       | that you need a bunch of premade special steps for it?
       | 
       | Surely it's simple: use a base OS container, install packages,
       | run a makefile.
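        | 
        | Something like this minimal sketch:
        | 
        |     jobs:
        |       build:
        |         runs-on: ubuntu-latest
        |         container: debian:bookworm   # base OS container
        |         steps:
        |           - uses: actions/checkout@v4
        |           - run: apt-get update && apt-get install -y make gcc
        |           - run: make test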
       | 
       | For deployment, how can you use pre-made deployment scripts?
        | Either your environment is a bespoke VPS/on-prem, in which case you
       | write your deployment scripts anyway, or you use k8s and have no
       | deployment scripts. Where is this strange middleground where you
       | can re-use random third party bits?
        
         | TrueDuality wrote:
         | Can't speak for everyone, but workflows can get pretty crazy in
         | my personal experience.
         | 
         | For example the last place I worked had a mono repo that
         | contained ~80 micro services spread across three separate
         | languages. It also contained ~200 shared libraries used by
         | different subsets of the services. Running the entire unit-test
         | suite took about 1.5 hours. Running the integration tests for
         | everything took about 8 hours and the burn-in behavioral QA
         | tests took 3-4 days. Waiting for the entire test suite to run
         | for every PR is untenable so you start adding complexity to
         | trim down what gets run only to what is relevant to the
         | changes.
         | 
         | A PR would run the unit tests only for the services that had
         | changes included in it. Library changes would also trigger the
         | unit tests in any of the services that depended on them. Some
         | sets of unit tests still required services, some didn't. We
         | used an in-house action that mapped the files changed to
         | relevant sets of tests to run.
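          | 
          | The simplest built-in form of that mapping is path filters
          | on the trigger, though the in-house action gave us far more
          | control; a rough sketch with illustrative paths:
          | 
          |     # run this service's tests only when its code or a
          |     # shared library it depends on changes
          |     on:
          |       pull_request:
          |         paths:
          |           - "services/billing/**"
          |           - "libs/shared-auth/**"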
         | 
          | When we updated a software dependency, we had a separate in-
          | house action that would locate all the services that use
          | that dependency and attempt to set them to the same value,
          | running the subsequent tests.
         | 
          | Dependency caching is a big one, and frankly GitHub's built-
          | in caching is so incredibly buggy and inconsistent it can't be
         | relied on... So third party there. It keeps going on:
         | 
         | - Associating bug reports to recent changes
         | 
         | - Ensuring PRs and issues meet your compliance obligations
         | around change management
         | 
         | - Ensuring changes touching specific lines of code have
         | specific reviewers (CODEOWNERS is not always sufficiently
         | granular)
         | 
         | - Running vulnerability scans
         | 
         | - Running a suite of different static and lint checkers
         | 
         | - Building, tagging, and uploading container artifacts for
         | testing and review
         | 
         | - Building and publishing documentation and initial set of
         | release notes for editing and review
         | 
         | - Notifying out to slack when new releases are available
         | 
         | - Validating certain kinds of changes are backported to
         | supported versions
         | 
         | Special branches might trigger additional processes like
         | running a set of upgrade and regression tests from previously
         | deployed versions (especially if you're supporting long-term
         | support releases).
         | 
          | That was a bit off the top of my head. Splitting the mono-
          | repo apart doesn't simplify the problem, unfortunately; it
          | just moves it.
        
       | 20thr wrote:
       | These suggestions make a lot of sense.
       | 
       | At Namespace (namespace.so), we also take things one step
       | further: GitHub jobs run under a cgroup with a subset of
       | privileges by default.
       | 
        | Running a job with full capabilities requires an explicit opt-
        | in: you need to enable "privileged" mode.
       | 
       | Building a secure system requires many layers of protection, and
       | we believe that the runtime should provide more of these layers
       | out of the box (while managing the impact to the user
       | experience).
       | 
       | (Disclaimer: I'm a founder at Namespace)
        
       | gose1 wrote:
       | > Safely Writing GitHub Workflows
       | 
       | If you are looking for ways to identify common (and uncommon)
       | vulnerabilities in Action workflows, last month GitHub shipped
       | support for workflow security analysis in CodeQL and GitHub Code
       | Scanning (free for public repos):
       | https://github.blog/changelog/2025-04-22-github-actions-work....
       | 
       | The GitHub Security Lab also shared a technical deep dive and
       | details of vulnerabilities that they found while helping develop
       | and test this new static analysis capability:
       | https://github.blog/security/application-security/how-to-sec...
        
       | maenbalja wrote:
       | Timely article... I recently learned about self-hosted runners
       | and set one up on a Hetzner instance. Pretty smooth experience
       | overall. If your action contains any SSH commands and you'd like
       | to avoid setting up a firewall with 5000+ rules[0], I would
       | recommend self-hosting a runner to help secure your target
       | server's SSH port.
       | 
       | [0] https://api.github.com/meta
        
         | woodruffw wrote:
         | FWIW: Self-hosted runners are non-trivial to secure[1]; the
         | defaults GitHub gives you are not necessarily secure ones,
         | particularly if your self-hosted runner executes workflows from
         | public repositories.
         | 
         | (Self-hosted runners are great for many other reasons, not
         | least of which is that they're a lot cheaper. But I've seen a
         | lot of people confuse GitHub Actions' _latent_ security issues
         | with something that self-hosted runners can fix, which is not
         | per se the case.)
         | 
         | [1]: https://docs.github.com/en/actions/security-for-github-
         | actio...
        
       | goosethe wrote:
        | Your article inspired me: https://github.com/seanwevans/Ghast
        | (still a WIP)
        
       | colek42 wrote:
       | We just built a new version of the witness run action that tracks
       | the who/what/when/where and why of the GitHub actions being used.
       | It provides "Trusted Telemetry" in the form of SLSA and in-toto
       | attestations.
       | 
       | https://github.com/testifysec/witness-run-action/tree/featur...
        
       ___________________________________________________________________
       (page generated 2025-05-08 23:01 UTC)