[HN Gopher] Any program can be a GitHub Actions shell
___________________________________________________________________
Any program can be a GitHub Actions shell
Author : woodruffw
Score : 265 points
Date : 2025-04-08 01:20 UTC (21 hours ago)
(HTM) web link (yossarian.net)
(TXT) w3m dump (yossarian.net)
| mkoubaa wrote:
| The next DOOM port
| leonheld wrote:
| Has been done: https://youtu.be/Z1Nf8KcG4ro?t=1107
| pseufaux wrote:
| uv --script
| digianarchist wrote:
| One cool undocumented GitHub Actions trick I spotted at work was
| the ability to use wildcards to match repository_dispatch event
| names:
|           on:
|             repository_dispatch:
|               types:
|                 - security_scan
|                 - security_scan::*
|
| Why would you want to do this?
|
| We centralize our release pipelines as it's the only way to force
| repositories through a defined reusable workflow (we don't want
| our product teams to have to maintain them).
|
| This allows us to dispatch an event like so:
|           {
|             "event_type": "security_scan::$product_name::$version",
|             "client_payload": { "field": "value" }
|           }
|
| Then it is far easier to identify which product and version a
| workflow run corresponds to when looking in the Actions tab of
| our central release repository.
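|
| For the curious, a rough sketch of how such a dispatch can be
| sent from a workflow step (the repo name, token secret, and
| product/version values here are placeholders):
|       - name: Trigger central release pipeline
|         run: |
|           curl -sS -X POST \
|             -H "Authorization: Bearer ${{ secrets.DISPATCH_TOKEN }}" \
|             -H "Accept: application/vnd.github+json" \
|             https://api.github.com/repos/acme/release-hub/dispatches \
|             -d '{"event_type":"security_scan::myproduct::1.2.3","client_payload":{"field":"value"}}'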
| larusso wrote:
| Nice trick!
| c0wb0yc0d3r wrote:
| How have you found this centralized approach to work for you?
| Does your org require everything be built in exactly the same
| way?
|
| I've just joined an organization that is trying to do similar,
| but in practice it seems nearly worthless. Templates are
| frequently broken, or written in ways that expect code to be
| built in a particular manner without any supporting docs
| describing the requirements.
| digianarchist wrote:
| We centralize releases but we don't centralize builds. That
| would remove too much autonomy from teams.
|
| We don't use templates for any of this. Our interface is the
| payload sent with repository_dispatch, a few metadata files
| in the repository (which we fetch) and a GitHub application
| that allows us to update the PRs with the release status
| checks.
|
| GitHub doesn't have a great story here; ideally we would want
| to listen to CI events emitted from a repo and run workflows
| as a reaction.
|
| The reusable-workflow story at a modular level is better, but
| here we're missing features that we would need to move some of
| our workflows into Actions. Notably, Action repos need to be
| public.
| connorgurney wrote:
| Only to use the actions within them on public repositories.
|
| https://docs.github.com/en/actions/sharing-
| automations/shari...
| markus_zhang wrote:
| Wait, I can finally write C for our CI/CD in production and call
| it a low-level system job.
|
| Probably can write assembly too.
| wilg wrote:
| put the C in CI
| Tabular-Iceberg wrote:
| And here's the library for it: https://github.com/tsoding/nob.h
| throw10920 wrote:
| As long as other readers of the action are aware of what's
| happening, this seems pretty useful. There have been many
| adventures where my shell script, starting out as a few lines
| basically mirroring what I typed in by hand, has grown into a
| hundred-line-plus monster where I wish I had real arrays and
| types and the batteries included in the Python stdlib.
|
| I'm definitely not going to use this to implement my company's
| build actions in elisp.
| hulitu wrote:
| > Any program can be a GitHub Actions shell
|
| systemd, echo "1" > /proc/sys/kernel/panic, echo > /bin/bash,
| etc.
| pjerem wrote:
| Doesn't mean that the shell is given more permissions than
| other commands.
| jstrieb wrote:
| I've used this in the past to force bash to print every command
| it runs (using the -x flag) in the Actions workflow. This can be
| very helpful for debugging.
|
| https://github.com/jstrieb/just.sh/blob/2da1e2a3bfb51d583be0...
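|
| For reference, a minimal sketch of what that can look like (the
| build script name is a placeholder; the flags mirror GitHub's
| bash defaults plus -x):
|       - name: Build
|         shell: bash --noprofile --norc -x -eo pipefail {0}
|         run: |
|           ./ci/build.sh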
| 0xbadcafebee wrote:
| FYI: with pipefail enabled, if one of the commands in a pipeline
| fails, your step will fail, there will be no error output, and
| you won't know why it failed.
|
| Pipefail also doesn't prevent more complex error states. For
| example this step from your config:
|       curl -L "https://github.com/casey/just/releases/download/${{ matrix.just-version }}/just-${{ matrix.just-version }}-x86_64-apple-darwin.tar.gz" \
|         | sudo tar -C /usr/local/bin -xzv just
|
| Here are the different error conditions you will run into:
|
| 1. _curl_ succeeds, _sudo_ succeeds, _tar_ succeeds, but _just_
| fails to extract from the tarball. Tar reports error, step
| fails.
|
| 2. _curl_ succeeds, _sudo_ succeeds, _tar_ fails. Sudo reports
| error, step fails.
|
| 3. _curl_ succeeds, _sudo_ fails. Shell reports error, step
| fails.
|
| 4. _curl_ begins running. _sudo_ begins running in a subshell
| /pipe. _tar_ begins running under the sudo pipe, extracting
| half of the _just_ binary. _curl_ fails due to a network error.
| Because pipefail is enabled, the shell exits immediately. There
| is no error message. A corrupt executable is left on disk
| (which a later step will attempt to run if your step had
| failure-skipping enabled).
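|
| A sketch of one way to sidestep case 4 (paths are illustrative):
| download to a temporary file, let curl fail on its own, and only
| then extract:
|       - name: Install just
|         run: |
|           set -euo pipefail
|           tmp="$(mktemp)"
|           curl -fsSL -o "$tmp" \
|             "https://github.com/casey/just/releases/download/${{ matrix.just-version }}/just-${{ matrix.just-version }}-x86_64-apple-darwin.tar.gz"
|           sudo tar -C /usr/local/bin -xzf "$tmp" just
|           rm -f "$tmp"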
| jchw wrote:
| > there will be no error output, you won't know why it failed
|
| That's probably why the -x is there. (Well, that and if
| something like curl or sudo fails it tends to output
| something to stderr...)
|
| > Pipefail also doesn't prevent more complex error states ...
| A corrupt executable is left on-disk (which will be attempted
| to run if your step had failure-skipping enabled)
|
| If I'm reading right, it seems like you're suggesting that the
| case pipefail doesn't handle is when you explicitly ignore the
| exit code. That doesn't exactly seem like the most concerning
| catch-22, to be honest.
| 0xbadcafebee wrote:
| -x does not show output/errors when pipefail triggers. It
| tells you _a pipe has started_, and that's it. No idea
| what specific part of the pipe failed, or why, or what its
| return status was.
|
| > the case pipefail doesn't handle is if you explicitly
| ignore the exit code
|
| No, I'm saying there is no exit code other than Bash's
| default exit code (1). -x will tell you that tar started
| and curl started, and then you will get exit code 1 when
| bash dies, and no other indication of _why_ bash died.
| Could've been sudo, could've been curl. Sudo might have
| had the error, or if it were killed by a signal
| (depending on how it handles wrapping execution of tar) it
| might not return any error and the shell won't either.
| cryptonector wrote:
| You're supposed to also use `set -e` if you're going to `set
| -o pipefail`, but of course that requires understanding that
| `set -e` will not apply to anything happening from inside a
| function called in a conditional expression -- this is a
| tremendous footgun.
|
| And if you want to know which command in a pipe failed
| there's `PIPESTATUS`.
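|
| A small sketch of what checking `PIPESTATUS` can look like in a
| `run:` block (the URL variable is a placeholder); it has to be
| captured immediately, before any other command resets it:
|       - shell: bash {0}
|         run: |
|           curl -fsSL "$TARBALL_URL" | sudo tar -C /usr/local/bin -xz
|           status=("${PIPESTATUS[@]}")
|           if [ "${status[0]}" -ne 0 ] || [ "${status[1]}" -ne 0 ]; then
|             echo "curl exited ${status[0]}, tar exited ${status[1]}" >&2
|             exit 1
|           fi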
| ZiiS wrote:
| You can also trick the default shell 'bash' into running any
| program.
| cdata wrote:
| I wonder if you could pair this with nix, e.g.:
|       - shell: nix develop --command {0}
|         run: ...
| asmor wrote:
| In my experience, the default VM size is so slow, you probably
| don't want Nix on a workflow that doesn't already take minutes.
|
| Even with a binary cache (we used R2), installing Lix, Devbox
| and some common tools costs us 2 1/2 minutes. Just evaluating
| the derivation takes ~20-30 seconds.
| manx wrote:
| Is there a way to cache the derivation evaluation?
| chuckadams wrote:
| You can cache arbitrary directories in github actions, but
| the nix package cache is enormous and probably bigger than
| GH's cache system will allow. Restoring multi-gig caches is
| also not instant, though it still beats doing everything
| from scratch. Might be more feasible to bake the cache into
| a container image instead. I think any nix enthusiast is
| still likely to go for self-hosted runners though.
| asmor wrote:
| The default cache action also has issues with anything
| that isn't owned by the runner user, and caches are per-
| repository, so you can't just have one cache like you do
| for binary caches.
| adobrawy wrote:
| You can use a self-hosted runner with an image that has
| anything pre-loaded.
| tadfisher wrote:
| Yes, we do this, although you need to do `nix develop --command
| bash -- {0}` to make it behave as a shell.
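|
| Rough shape of that in a workflow, in case it helps (the Nix
| install action version and the build command are illustrative):
|       jobs:
|         build:
|           runs-on: ubuntu-latest
|           steps:
|             - uses: actions/checkout@v4
|             - uses: cachix/install-nix-action@v27
|             - shell: nix develop --command bash -- {0}
|               run: |
|                 cargo build --release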
| emilfihlman wrote:
| I mean, exec already exists, so you can become anything anyways.
| PhilipRoman wrote:
| I've always been confused by how the CI script is evaluated in
| these yaml based systems. It is written as an array of lines,
| but seems to be running in a single context (local variables,
| etc.), but also there is some special handling for exit code of
| each line. I'm not sure how exec would work.
| Arnavion wrote:
| >It is written as an array of lines, but seems to be running
| in a single context (local variables, etc.) [...]
|
| It's documented for GH at
| https://docs.github.com/en/actions/writing-
| workflows/workflo...
|
| >>The shell command that is run internally executes a
| temporary file that contains the commands specified in the
| run keyword.
|
| ... and also mentioned in the article submitted here:
|
| >>If the command doesn't already take a single file as input,
| you need to pass it the special {0} argument, which GitHub
| replaces with the temporary file that it generates the
| template-expanded run block into.
|
| ---
|
| >[...] but also there is some special handling for exit code
| of each line.
|
| As you can see in the defaults in the first link, the Linux
| default is to run bash with `-e`
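|
| As a concrete (if hedged) illustration of that mechanism, any
| interpreter that accepts a file path works as the "shell":
|       - shell: python {0}
|         run: |
|           import platform
|           print("running on", platform.platform())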
| cturner wrote:
| Our generation shuddered in terror when we were asked to
| translate a spreadsheet to code while the spreadsheet continued
| to evolve.
|
| This generation will shudder when they are asked to bring
| discipline to deployments built from github actions.
| adastra22 wrote:
| What is undisciplined about this?
| TheDong wrote:
| yaml is roughly as disciplined as malbolge.
|
| Writing any meaningful amount of logic or configuration in
| yaml will inevitably lead to the future super-sentient yaml-
| based AI torturing you for all eternity for having taken any
| part in cursing it to a yaml-based existence. The thought-
| experiment of "Roko's typed configuration language" is
| hopefully enough for you to realize how this blog post needs
| to be deleted from the internet for our own safety.
| adastra22 wrote:
| I literally have no idea what you are talking about.
| Declarative languages are great for specifying these sorts
| of things.
| TheDong wrote:
| > Declarative languages are great for specifying these
| sorts of things
|
| Yes, good declarative languages are. I'm a happy nix
| user. I like dhall. cel is a cool experiment. jsonnet has
| its place. I stan XML.
|
| A language with byzantine type rules, where 'on: yes'
| parses the same as "true: true" in some parsers (like
| ruby's built-in yaml parser, for example) but not others,
| and only with some settings, is not it, chief.
|
| It isn't even one language, since most yaml parsers only
| have like 90% coverage of the spec, and it's a different
| 90% for each, so the same yaml document often won't be
| parsed the same even by two libraries in the same
| programming language. It's really like 20 subtly
| incompatible languages that are all called "yaml".
|
| It is indefensible in any context. Github actions should
| have been in starlark, xml, or even lisp or lua.
| rogerrogerr wrote:
| As someone currently working to move a large enterprise to GH
| Actions (not quite, but "yaml-based pipelines tied to git") -
| what would discipline look like? If you can describe it, I can
| probably make it happen at my org.
| TheDong wrote:
| I'll give a shot at some guiding principles:
|
| 1. Do not use yaml.
|
| All github action logic should be written in a language that
| compiles to yaml, for example dhall (https://dhall-
| lang.org/). Yaml is an awful language for programmers, and
| it's a worse language for non-programmers. It's good for no
| one.
|
| 2. To the greatest extent possible, do not use any actions
| which install things.
|
| For example, don't use 'actions/setup-node'. Use bazel, nix,
| direnv, some other tool to setup your environment. That tool
| can now also be used on your developer's machines to get the
| same versions of software as CI is using.
|
| 3. Actions should be as short and simple as possible.
|
| In many cases, they will be as simple as effectively
| "actions/checkout@v4", "run: ./ci/build.sh", and that's it
| (a minimal sketch follows at the end of this comment).
|
| Escape from yaml as quickly as possible, put basic logic in
| bash, and then escape from bash as quickly as possible too
| into a real language.
|
| 4. Do not assume that things are sane or secure by default.
|
| Ideally you don't accept PRs from untrusted users, but if you
| do, read all the docs very carefully about what actions can
| run where, etc. Github actions on untrusted repos are a
| nightmare footgun.
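|
| A minimal sketch of what point 3 looks like in practice
| (./ci/build.sh is whatever entry point your repo provides):
|       name: CI
|       on: [push, pull_request]
|       jobs:
|         build:
|           runs-on: ubuntu-latest
|           steps:
|             - uses: actions/checkout@v4
|             - run: ./ci/build.sh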
| eadmund wrote:
| > Escape from yaml as quickly as possible, put basic logic
| in bash, and then escape from bash as quickly as possible
| too into a real language.
|
| This should be your #1 rule. Don't compile logic to YAML,
| just write it in a real language and call it as quickly as
| possible.
|
| This way a developer can run it from his workstation.
| hnbad wrote:
| I agree with most of the points but I would condense #2 and
| #3 to "Move most things into scripts". Sometimes it's
| difficult to avoid complex workflows but generally it's a
| safer bet to have actual scripts you can re-use and use for
| other environments than GitHub. It's a bad idea to make
| yourself dependent entirely on one company's CI system,
| especially if it's free or an add-on feature.
|
| However I'd balk at the suggestion to use Dhall (or any
| equally niche equivalent) based on a number of factors:
|
| 1) If you need this advice, you probably don't know Dhall
| nor does anyone else who has worked or will work on these
| files, so everyone has to learn a new language and they'll
| all be novices at using that language.
|
| 2) You're adding an additional dependency that needs to be
| installed, maintained and supported. You also need to teach
| everyone who might touch the YAML files about this
| dependency and how to use it and not to touch the output
| directly.
|
| 3) None of the advice on GitHub Workflows out there will
| apply directly to the code you have, because that advice is
| written in YAML. So even if Dhall will generate YAML for
| you, you will need to understand enough YAML to convert it
| to Dhall correctly. This also introduces a chance for errors
| because of the friction in translating from the language of
| the code you read to the language of the code you write.
|
| 4) You are relying on the Dhall code to correctly map to
| the YAML code you want to produce. Especially if you're
| inexperienced with the language (see above) this means
| you'll have to double check the output.
|
| 5) It's a niche language so it's neither clear that it's
| the right choice for the project/team nor that it will
| continue to be useful. This is an extremely high bar
| considering the effort involved in training everyone to use
| it and it's not clear at all that the trade-off is worth it
| outside niche scenarios (e.g. government software that will
| have to be maintained for decades). It's also likely not to
| be a transferable skill for most people involved.
|
| The point about YAML being bad also becomes less of an
| issue if you don't have much code in your YAML because
| you've moved it into scripts.
| SOLAR_FIELDS wrote:
| The other problem with Github Actions that I always
| mention, which muddies the waters in discussions of it, is
| that GHA itself is actually several different things
| bundled together:
|
| 1. Event dispatching/triggers, the thing that spawns
| webhooks/events to do things
|
| 2. The orchestration implementation (steps/jobs, a DAG-
| like workflow execution engine)
|
| 3. The reusable Actions marketplace
|
| 4. The actual code that you are running as part of the
| build
|
| 5. The environment setup/secrets of GHA, in other words,
| the makeup of how variables and other configurations are
| injected into the environment.
|
| The most maintainable setups only leverage 1 directly
| from GHA. 2-5 can be ignored or managed through
| containerized workflows in some actual build system like
| Bazel, Nix, etc.
| michaelmior wrote:
| You're also adding an extra build step that by its nature
| can't run in CI since it generates the CI pipelines. So
| now you need some way to keep your Dhall and YAML in
| sync. I suppose you could write one job in YAML that
| compiles the Dhall and fails the build if it's out of
| date, but it seems like a lot of extra work for minimal
| payoff.
|
| Instead, if you want to stay away from YAML, I'd say just
| move as much of the build as possible into external
| scripts so that the YAML stays very simple.
| JoelMcCracken wrote:
| Not too long ago, I went down a rabbit hole of specifying
| GHA yaml via dhall, and quickly hit some problems; the
| specific thing I was starting with was the part I was
| frustrated with, which was the "expressions" evaluation
| stuff.
|
| However, I quickly ran into the whole "no recursive data
| structures in dhall" (at least, not how you would
| normally think about it), and of course, a standard
| representation of expressions is a recursively defined
| data type.
|
| I do get why dhall did this, but it did mean that I
| quickly ran into super advanced stuff, and realized that
| I couldn't in good conscience use this as my team of
| mixed engineers would need to read/maintain it in the
| future, without any knowledge of how to do recursive
| definitions in dhall, and without the inclination to care
| either.
|
| an intro to this: https://docs.dhall-lang.org/howtos/How-
| to-translate-recursiv...
|
| An example in the standard lib is how it works with JSON
| itself: https://store.dhall-
| lang.org/Prelude-v23.1.0/JSON/Type.dhall...
|
| basically, to do recursive definitions, you have to
| lambda encode your data types, work with them like that,
| and then finally "reify" them with, like, a concrete list
| type at the end, which means that all those lambdas
| evaluate away and you're just left with list data. This
| is neat and interesting and worthy of learning, but would
| be wildly overly complicated for most eng teams, I think.
|
| After hitting this point in the search, I decided to go
| another route: https://github.com/rhysd/actionlint
|
| and this project solved my needs such that I couldn't
| justify spending more time on it any longer.
| Intralexical wrote:
| YAML is supposed to be a strict superset of JSON. So if
| it's the footguns and complexity you're trying to avoid,
| just write it as JSON.
| michaelmior wrote:
| > To the greatest extent possible, do not use any actions
| which install things.
|
| Why not? I assume the concern is making sure development
| environments and production use the same configuration as
| CI. But that feels like somewhat of an orthogonal issue.
| For example, in Node.js, I can specify both the runtime and
| package manager versions using standard configuration. I
| think it's a bonus that how those specific versions get
| installed can be somewhat flexible.
| JoelMcCracken wrote:
| To me, the issue comes when something weird is going on
| in CI that isn't happening locally, and you're stuck
| debugging it with that typical insanity.
|
| Yeah, it may be that you'll get the exact same versions
| of things installed, but that doesn't help when some
| other weird thing is going on.
|
| If you haven't experienced this, well, keep doing what
| you're doing if you want, but just file this reflection
| away for if/when you do hit this issue.
| rafram wrote:
| > All github action logic should be written in a language
| that compiles to yaml
|
| An imperative language that compiles to a declarative
| language that emulates imperative control flow and calls
| other programs written in imperative languages that can
| have side effects that change control flow? Please no.
| lidder86 wrote:
| I have been looking at repo templates that by default
| include basic CI checks, for example linters for anything
| you can think of.
| huijzer wrote:
| Probably a good idea to explicitly pin GitHub Actions to
| commit hashes as I wrote about a few days ago:
| https://huijzer.xyz/posts/jas/
|
| Also put as much as possible in bash or justfile instead of
| inside the yaml. It avoids vendor lock-in and makes local
| debugging easier.
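|
| For illustration, pinning looks like this (the SHA is a
| placeholder; use the full commit that the release tag points at):
|       # instead of:
|       - uses: actions/checkout@v4
|       # pin the exact commit:
|       - uses: actions/checkout@0f1e2d3c4b5a69788796a5b4c3d2e1f00f1e2d3c  # vX.Y.Z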
| michaelmior wrote:
| I understand the arguments for putting more things in
| scripts instead of GHA YAML. However, I also like that
| breaking things up into multiple YAML steps means I get
| better reporting via GitHub. Of course I could have
| multiple scripts that I run to get the same effect. But I
| wish there was a standard protocol for tools to report
| progress information to a CI environment. Something like
| the Test Anything Protocol[0], but targeted at CI/CD.
|
| GitHub Actions workflow commands[1] are similar to what I'm
| thinking of, but not standardized.
|
| [0] https://testanything.org/ [1]
| https://docs.github.com/en/actions/writing-
| workflows/choosin...
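|
| For anyone who hasn't seen them, a sketch of what those workflow
| commands look like when emitted by any tool (file names and
| messages are made up):
|       - name: Annotate
|         run: |
|           echo "::notice file=app.py,line=10::Consider caching this result"
|           echo "::error file=app.py,line=42::Tests failed here"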
| mikepurvis wrote:
| I was involved in a discussion about this here a few
| weeks ago: https://news.ycombinator.com/item?id=43427996
|
| It's frustrating that we're beholden to Github to add
| support for something like this to their platform,
| especially when the incentives are in the wrong
| direction-- anything that's more generic and more
| portable reduces lock-in to Actions.
| oblio wrote:
| 0. Make 99% of your setups runnable locally, with Docker if
| need be. It's the fastest way to test something and nothing
| else comes close. #1 and #2 derive from #0. This is actually a
| principle for code, too: if you have stuff like Lambda, make
| sure you have a command-line entry point too, so you can also
| test things locally.
|
| 1. Avoid YAML if you can. Either plain configuration files
| (generated if need be - don't be afraid to do this) or full
| blown programming languages with all the rigor required
| (linting/static analysis, tests, etc).
|
| 2. Move ALL logic outside of the pipeline tool. Your actions
| should be ./my-script.sh or ./my-tool.
|
| Source: lots of years of experience in build
| engineering/release engineering/DevOps/...
| cturner wrote:
| Some themes I think about,
|
| 1. Distinct prod and non-prod environments. I think you
| should have distinct Lab and Production environments. It
| should be practical to commit something to your codebase, and
| then test it in Lab. Then, you deploy that to Production. The
| Github actions model confuses the concepts of (source
| control) and (deployment environment). So you easily end up
| with no lab environment, and people doing development work
| against production.
|
| 2. Distinguish programming language expression and DSLs.
| Github yaml reminds me of an older time where people built
| programming languages in XML. It is an interesting idea, but
| it does not work out. The value of a programming language:
| the more features it has, the better. The value of a DSL: the
| fewer features it has, the better.
|
| 3. Security. There is a growing set of github-action
| libraries. The Github ecosystem makes it easy to install
| runners on workstations to accept dispatch from github
| actions. This combination opens opportunities for remote
| attacks.
| vl wrote:
| For CI actions: pre-build a docker image with dependencies,
| then run your tests using this image in a single GitHub
| Actions command. If dependencies change, rebuild the image.
|
| Do not rely on gh caching, installs, multiple steps, etc.
|
| Otherwise there will be a moment when tests pass locally, but
| not on gh, and debugging will be super hard. In this case you
| just debug in the same image.
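|
| A sketch of that shape (image name and test command are
| placeholders):
|       jobs:
|         test:
|           runs-on: ubuntu-latest
|           container:
|             image: ghcr.io/acme/ci-deps:2025-04-01
|           steps:
|             - uses: actions/checkout@v4
|             - run: ./run-tests.sh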
| skrebbel wrote:
| The golden rule is "will I need to make a dummy commit to
| test this?" and if yes, find a different way to do it. All
| good rules in sibling comments here derive from this rule.
|
| You do not want to _ever_ need to make dummy commits to debug
| something in CI; it's awful. As a bonus, following this rule
| also means better access to debugging tools, local logs,
| "works on CI but not here" issues, etc. Finally, if you ever
| want to move away from GitHub to somewhere else, it'll be
| easy.
| ViperCode wrote:
| How many workflows could be simplified this way without
| sacrificing debuggability or security?
| z3t4 wrote:
| What are the advantages of GitHub CI YAML over just a bash
| script, e.g. run: pipeline.sh?
| hmhhashem wrote:
| - You get an overview in the Github UI for each step and can
| expand/collapse each step to inspect its output.
|
| - You can easily utilize Github actions that others have
| contributed in your pipeline.
|
| - You can modularize workflows and specify dependencies between
| them and control parallel executions.
|
| I'm sure there are more. But the main advantage is you don't
| need to implement all these things yourself.
| PhilipRoman wrote:
| For #1, you can output section markers from any software:
| https://docs.github.com/en/actions/writing-
| workflows/choosin... (I've only used this feature with
| GitLab)
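|
| For GitHub the markers look roughly like this, printable from
| any program (the script name is a placeholder):
|       - run: |
|           echo "::group::Install dependencies"
|           ./install-deps.sh
|           echo "::endgroup::"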
| hmhhashem wrote:
| Thanks, I didn't know about this!
| noirscape wrote:
| That second one sounds more like a security risk to me than a
| feature.
| dharmab wrote:
| - you have an automatic, managed GitHub API token which is
| useful for automating releases and publishing artifacts and
| containers
|
| - if your software is cross-platform you can run jobs across a
| variety of OSes and CPU architectures concurrently, e.g.
| building and testing natively on all platforms (see the sketch
| at the end of this comment)
|
| - you have access to a lot of contextual information about what
| triggered the job and the current state of the repo, which is
| handy for automating per-PR chores or release automation
|
| - You can integrate some things into the GitHub Web UI, such as
| having your linter annotate the PR line-by-line with flagged
| problems, or rendering test failures in the web page so you
| don't have to scan through a long log for them
|
| - You have a small cache you can use to avoid
| redownloading/rebuilding files that have not changed between
| builds
|
| Ideally you do as much as possible in a regular tool that runs
| locally (make/scripts/whatever) and you use the GitHub CI
| config for the little bit of glue that you need for the
| triggers, caching and GitHub integrations
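|
| Sketch of the cross-platform point (the build command is a
| placeholder):
|       jobs:
|         build:
|           strategy:
|             matrix:
|               os: [ubuntu-latest, macos-latest, windows-latest]
|           runs-on: ${{ matrix.os }}
|           steps:
|             - uses: actions/checkout@v4
|             - run: make build test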
| Hackbraten wrote:
| One advantage for GitHub is that you're less likely to migrate
| to another Git forge.
| misnome wrote:
| Which of the alternatives doesn't have its own unique
| solution?
| Hackbraten wrote:
| The question was "GitHub CI YAML vs. pipeline.sh", not
| "GitHub CI YAML vs. other forge's YAML."
|
| What I'm trying to say is that if you keep your build logic
| in `pipeline.sh` (and use GitHub CI only for calling into
| it), then you're going to have an easier time migrating to
| another forge's CI than in the alternative scenario, i.e.
| if your build logic is coded in GitHub CI YAML.
| misnome wrote:
| Obviously. But then you still have caching, passing
| data/artifacts between stages, workflow logic (like
| skipping steps if unnecessary), running on multiple
| platforms, and exposing test results/coverage to the
| system you are running in.
|
| Written properly, actually building the software is the
| least of what the CI is doing.
|
| If your build is simple enough that you don't need any of
| that - great. But pretending that the big CI systems
| never do anything except lock you in is a trifle
| simplistic.
| prmoustache wrote:
| Pipelines are usually just a list of sequential steps. I have
| been working with a lot of different CI/CD tools and they are
| among the easiest thing to move from one to another.
| Hackbraten wrote:
| One example: for my personal Python projects, I use two
| GitHub actions named `pypa/gh-action-pypi-publish` [0] and
| `sigstore/gh-action-sigstore-python` [1] to sign my wheels,
| publish my wheels to PyPI, and have PyPI attest (and
| publicly display via check mark [2]) that the uploaded
| package is tied to my GitHub identity.
|
| How would I even begin migrating this to another forge? And
| that's just a small part of the pipeline.
|
| [0]: https://github.com/marketplace/actions/pypi-publish
|
| [1]: https://github.com/marketplace/actions/gh-action-
| sigstore-py...
|
| [2]: https://pypi.org/project/itchcraft/
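|
| For context, the publishing job is roughly shaped like this (a
| sketch; triggers and signing details omitted, build step is
| illustrative):
|       jobs:
|         publish:
|           runs-on: ubuntu-latest
|           permissions:
|             id-token: write   # required for PyPI trusted publishing
|           steps:
|             - uses: actions/checkout@v4
|             - run: python -m pip install build && python -m build
|             - uses: pypa/gh-action-pypi-publish@release/v1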
| woodruffw wrote:
| This is only a small part, but FWIW: you don't need gh-
| action-sigstore-python to do the signing; gh-action-pypi-
| publish will do it automatically for you now.
|
| (Source: I maintain the former and contributed the
| attestations change to the latter.)
| prmoustache wrote:
| sigstore is not a GitHub Actions-specific tool; you can
| use the Python client with any CI/CD runner. You can
| attest with PyPI attestations and publish with twine.
|
| When migrating, the steps don't have to use the same
| syntax and tools, but for each step you can identify the
| desired outcome and create it without actions from the gh
| marketplace on a different CI/CD.
|
| More importantly, you consciously decided to make your
| pipeline not portable by using gh actions from the
| marketplace. This is not a requirement nor inevitable.
| dolmen wrote:
| This gives me hope to ease running Go code for CI jobs directly
| from GitHub workflow YAML files using goeval [1].
|
| However goeval doesn't yet have direct support for file input
| (only stdin), so shell tricks are needed.
|
| So far the way is:
|       run: |
|         go run github.com/dolmen-go/goeval@v1 - <<'EOF'
|         fmt.Println("Hello")
|         EOF
|
| but this requires a bit of boilerplate.
|
| Disclaimer: I'm the author of goeval.
|
| [1] https://github.com/dolmen-go/goeval
| michaelmior wrote:
| Why do you need any shell "tricks"? Wouldn't this work?
| go run github.com/dolmen-go/goeval@v1 - < file.go
| mdaniel wrote:
| I believe the difference is that in your example file.go
| needs to live in the repo (or created by a previous step:)
| whereas reading from stdin allows writing the logic in the
| .yaml itself, and thus is subject to uses: or other reuse
| tricks
| michaelmior wrote:
| Sure. But the previous post seemed to suggest that tricks
| were needed _because_ goeval doesn 't support reading from
| files.
| mdaniel wrote:
| I went to the repo to find out if your program automatically
| converted spaces into tabs but it seems it is more oriented
| toward oneliner expressions
|
| If you're going to plug your toy in this context, showing
| something more relevant than "print hello" would have saved me
| the click
| _def wrote:
| Ah I didn't know of the shell directive. Basically an equivalent
| to #! in shell scripts I guess:
| https://en.wikipedia.org/wiki/Shebang_%28Unix%29
| latexr wrote:
| Not exactly.
|
| > If the command doesn't already take a single file as input,
| you need to pass it the special `{0}` argument, which GitHub
| replaces with the temporary file that it generates the
| template-expanded `run` block into.
|
| It seems to be writing your script to a file, then using an
| executable to run your file. That ignores any shebang.
| nickysielicki wrote:
| Would be cool to use this with nix shell shebangs
| greener_grass wrote:
| My experience is that the less done in GitHub actions the better.
|
| I tend to prefer either:
|
| - Using a build system (e.g. Make) to encode logic and just
| invoking that from GitHub Actions; or
|
| - Writing a small CLI program and then invoking that from GitHub
| Actions
|
| It's so much easier to debug this stuff locally than in CI.
|
| So an interesting trick, but I don't see where it would be
| useful.
| latexr wrote:
| I don't see the post as the author suggesting you do this, but
| informing that it can be done. There's a large difference.
| Knowing the possibilities of a system, even if it's things you
| never plan on using, is useful for security and debugging.
| 6LLvveMx2koXfwn wrote:
| My take was that it is _not_ useful, definitely, categorically
| not useful. It _is_ a potential security hazard though.
| Especially for 'exploring' self-hosted runners.
| donatj wrote:
| Our workflows amount to essentially:
|       - make build
|       - make test
|
| We got bought out, and the acquiring company's workflow
| files are hundreds and hundreds of lines long, often with
| repeated sections.
|
| Call me old school, but I want to leave YAML town as soon as
| possible.
| lucasyvas wrote:
| I may be targeted for calling this the "correct" way, but it
| is - it's the only correct way.
|
| Otherwise you need complicated setups to test any of the
| stuff you put up there since none of it can be run locally /
| normally.
|
| GitHub Actions, like any CI/CD product, is for automating in
| ways you cannot with scripting - like parallelizing and
| joining pipelines across multiple machines, modelling the
| workflow. That's it.
|
| I would really appreciate an agnostic templating language for
| this so these workflows can be modelled generically and have
| different executors, so you could port them to run them
| locally or across different products. Maybe there is an
| answer to this that I've just not bothered to look for yet.
| donatj wrote:
| It was YAML, but I actually really liked Drone CI's "in this
| container, run these commands" approach; it was much more sane
| than GitHub Actions' "here's an environment we pre-installed a
| bunch of crap in, you can install the stuff you want every
| single time you run a workflow".
| vel0city wrote:
| You can specify a container image in GHA for a job to run
| in.
|
| https://docs.github.com/en/actions/writing-
| workflows/choosin...
| giancarlostoro wrote:
| > I would really appreciate an agnostic templating language
| for this so these workflows can be modelled generically and
| have different executors, so you could port them to run
| them locally or across different products. Maybe there is
| an answer to this that I've just not bothered to look for
| yet.
|
| Terraform? You can use it for more than just "cloud"
| jasonlotito wrote:
| In addition, adding our own custom modules for terraform
| is, all things considered, fairly easy. Much easier than
| dealing with the idiosyncrasies of trying to use YAML for
| everything.
| giancarlostoro wrote:
| I have not used Terraform outside of for Azure resources,
| but I am always astounded by how much it can handle.
| dleeftink wrote:
| Maybe what .devcontainers does? As a thin wrapper to
| Docker, I find it makes testing configurations easier.
| nonethewiser wrote:
| Who would the potential bad actor be here? Someone who's
| committing to your repo, right? I guess the risk is that they add
| something malicious in the commit and you don't see it, which is
| maybe obfuscated to some extent by this little-known fact. But
| all the malicious code would be there in the open just like any
| other commit.
|
| I mean, it seems like it would either take not noticing the
| malicious code, which is always a threat vector, or seeing it and
| mistakenly thinking "aha, but you aren't actually running it!" and
| then letting it through based on that (which is of course
| ridiculous).
|
| Or there is some other way to exploit this that I'm unaware of.
|
| Edit: OK, maybe this is a little better. Write some malicious
| bash look-alike somewhere outside the repo, install it from
| GitHub Actions (make it look like you are updating bash or
| something), and then it's doing the bad thing.
| woodruffw wrote:
| Not committing necessarily; there are plenty of GitHub Action
| triggers that a workflow can use that allow third-party use.
|
| I don't think there's a really direct security risk here, per
| se: it's just another way (there were plenty already) in which
| write implies execute in GHA.
| donatj wrote:
| I mean it's just a shell script jammed into YAML for reasons. The
| shell is just the shebang of said script
| aljarry wrote:
| Github Actions Runner code is pretty easy to read; here's a
| specific place that defines default arguments for popular shells /
| binaries:
| https://github.com/actions/runner/blob/main/src/Runner.Worke...,
| exposed through the method
| ScriptHandlerHelpers.GetScriptArgumentsFormat.
|
| In ScriptHandler.cs there's all the code for preparing the
| process environment, arguments, etc., but here specifically is
| the actual code that starts the process:
|
| https://github.com/actions/runner/blob/main/src/Runner.Worke...
|
| Overall I was positively surprised at the simplicity of this
| code. It's very procedural and it handles a ton of edge cases,
| but it seems to be easy to understand and debug.
___________________________________________________________________
(page generated 2025-04-08 23:01 UTC)