[HN Gopher] Open Source Security at Astral
___________________________________________________________________
Open Source Security at Astral
Author : vinhnx
Score : 305 points
Date : 2026-04-09 04:11 UTC (12 hours ago)
(HTM) web link (astral.sh)
(TXT) w3m dump (astral.sh)
| darkamaul wrote:
| With the recent incidents affecting Trivy and litellm, I find it
| extremely useful to have a guide on what to do to secure your
| release process.
|
| The advice here is really solid and actionable, and I would
| suggest any team read it, and implement it where possible.
|
| The scary part with supply chain security is that we are only as
| secure as our dependencies, and if the platform you're using has
| insecure defaults, the effort to secure the full chain is
| that much greater.
| sevg wrote:
| FYI it was actually William Woodruff (the article author) and his
| team at Trail of Bits that worked with PyPI to implement Trusted
| Publishing.
| ChrisArchitect wrote:
| Earlier submission from author:
| https://news.ycombinator.com/item?id=47691466
| raphinou wrote:
| One big problem (amongst others) with the current software supply
| chain is that a lot of tools and dependencies are downloaded (e.g.
| from GitHub releases) without any validation that it was
| published by the expected author. That's why I'm working on an
| open source, auditable, accountless, self-hostable, multi-sig
| file authentication solution. The multi-sig approach can protect
| against axios-like breaches. If this is of interest to you, take
| a look at https://asfaload.com/
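The validation described above boils down to checking a downloaded artifact against a pinned digest before trusting it. A minimal sketch (the function name and chunk size are illustrative, not Asfaload's API):

```python
import hashlib

def verify_sha256(path: str, expected_hex: str) -> bool:
    """Return True if the file's SHA-256 digest matches the pinned value."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Hash in chunks so large release artifacts aren't loaded into memory.
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest() == expected_hex.lower()
```

A multi-sig scheme like the one described would presumably layer several independent signatures over a manifest of such digests, so that one compromised signer cannot forge an artifact alone.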
| darkamaul wrote:
| Maybe I'm not understanding here, but isn't that the point of
| release attestations (to authenticate that the release was
| produced by the authors)?
|
| [0] https://docs.github.com/en/actions/how-tos/secure-your-
| work/...
| raphinou wrote:
| Artifact attestations are indeed another solution, based on
| https://www.sigstore.dev/ . I still think Asfaload is a good
| alternative, making different choices than sigstore:
|
| - Asfaload is accountless (keys are the identity), while
| sigstore relies on OpenID Connect [1], which will tie most
| users to a megacorp
|
| - Asfaload's backend is a public git repo, making it easily
| auditable
|
| - Asfaload will be easy to self-host, meaning you can easily
| deploy it internally
|
| - Asfaload is multi-sig, meaning even if a GitHub account is
| breached, malicious artifacts can be detected
|
| - validating a download is transparent to the user, who
| only needs the download url, contrary to sigstore [2]
|
| So Asfaload is not the only solution, but I think it has some
| unique characteristics that make it worth evaluating.
|
| [1]: https://docs.sigstore.dev/about/security/
|
| [2]: https://docs.sigstore.dev/cosign/verifying/verify/
| arianvanp wrote:
| The problem is nobody checks.
|
| All the axios releases had attestations except for the
| compromised one. npm installed it anyway.
| raphinou wrote:
| Yes, that's why I aim to make the checks transparent to the
| user. You only need to provide the download url for the
| authentication to take place. I really need to record a
| small demo of it.
| snthpy wrote:
| Overall I believe this is the right approach and something like
| this is what's required. I can't see any code or your product
| though so I'm not sure what to make of it.
| raphinou wrote:
| Here's the GitHub repo of the backend code:
| https://github.com/asfaload/asfaload
|
| There's also a spec of the approach at
| https://github.com/asfaload/spec
|
| I'm looking for early testers, let me know if you are
| interested in testing it!
| est wrote:
| > without any validation that it was published by the expected
| author
|
| SPOF. I'd suggest using automatic tools to audit every line of
| code, no matter who the author is.
| ramoz wrote:
| Created an agent skill based on this blog. Assessing my own repos
| now.
|
| https://github.com/backnotprop/oss-security-audit
| trashcan2137 wrote:
| The lengths people will go to rediscover Nix/Guix are beyond me
| 3abiton wrote:
| I don't see the connection though?
| Eufrat wrote:
| Nix provides declarative, reproducible builds. So,
| ostensibly, if you had your build system using Nix, then some
| of the issues here go away.
|
| Unfortunately, Nix is also not how most people function. You
| have to do things the Nix way, period. The value in part
| comes from this strong opinion, but it also makes it
| inherently niche. Most people do not want to learn an entire
| new language/paradigm just so they can get this feature. And
| so it becomes a chicken-and-egg problem. IMHO, it
| also suffers from a little bit of snobbery and poor naming
| (Nix vs. NixOS vs. Nixpkgs) which makes it that much harder
| to get traction.
| diffeomorphism wrote:
| There are different notions of "reproducible". Nix does not
| automatically make builds reproducible in the way that
| matters here:
|
| https://reproducible.nixos.org
|
| It is still good at that, but the difference from other
| distros is rather small:
|
| https://reproducible-builds.org/citests/
| trashcan2137 wrote:
| Nix, if not used incorrectly (and they really make it hard to
| use it, both correctly and incorrectly lol), gives you
| reproducible and verifiable builds.
|
| Unfortunately I have to agree with the sibling comment that
| it suffers from poor naming and the docs are very hard to
| grok which makes it harder to get traction.
|
| I really hate the idea of `it's all sales at the end of the
| day`, but if Nix could figure out how to "sell" itself to more
| people then we would probably have fewer of those problems.
| Zopieux wrote:
| Reading the paragraph on hash pinning and "map lookup files"
| (lockfiles) made me audibly sigh.
| sunshowers wrote:
| If it doesn't work on Windows, it is not a full replacement.
| mkj wrote:
| Isn't Nix just reinventing what Vesta did for software
| reproducibility decades earlier? https://vesta.sourceforge.net/
| dirkc wrote:
| The open source ecosystem has come very far and proven to be
| resilient. And while trust will remain a crucial part of any
| ecosystem, we urgently need to improve our tools and practices
| when it comes to sandboxing 3rd party code.
|
| Almost every time I bump into uv in project work, the touted
| benefit is that it makes it easier to run projects with different
| python versions and avoiding clashes of 3rd-party dependencies -
| basically pyenv + venv + speed.
|
| That sends a cold shiver down my spine, because it tells me that
| people are running all these different tools on their host
| machine with zero sandboxing.
| Oxodao wrote:
| Meh, not always. I do use uv IN docker all the time, it's
| quite handy.
| dirkc wrote:
| Honest question - what are the main benefits for you when you
| use it in docker?
|
| ps. I feel like I've been doing python so long that my
| workflows have routed around a lot of legit problems :)
| sersi wrote:
| The main reason I now use uv is being able to specify a
| cool-down period. pip allows it, but only with an absolute
| timestamp, so it's pretty much useless.
|
| And that doesn't prevent me from running it into a sandbox
| or vm for an additional layer of security.
| zwp wrote:
| > pip allows it but it's with a timestamp
|
| A PR to be able to use a relative timestamp in pip was
| merged just last week
|
| https://github.com/pypa/pip/pull/13837/commits
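The cool-down idea discussed above — refuse any release younger than N days, on the theory that compromised releases are usually caught quickly — is simple to express. A rough sketch, with a hypothetical data shape that loosely mirrors per-release upload timestamps from an index like PyPI:

```python
from datetime import datetime, timedelta, timezone

def apply_cooldown(releases, days=7, now=None):
    """Keep only releases older than the cool-down window.

    `releases` is an iterable of (version, upload_time) pairs,
    where upload_time is a timezone-aware datetime.
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=days)
    # A release is eligible only once it has aged past the cutoff.
    return [(v, t) for v, t in releases if t <= cutoff]
```

A resolver applying this filter would then pick the newest version among the survivors rather than the absolute latest upload.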
| Oxodao wrote:
| Mainly the "project" system. I'm only developing python in
| my free time, not professionally, so I'm not as well versed
| in its ecosystem as I am in PHP. There are tons of ways to do
| project-like stuff and I don't want to deal with those. I
| used to do raw python containers + requirements.txt, but the
| DX was absolutely not enjoyable. I'm just used to it now.
| silvester23 wrote:
| For us, the DX of uv for dependency management is much
| better than just using pip and requirements.txt.
|
| To be clear though, we only use uv in the builder stage of
| our docker builds; there is no uv in the final image.
| carderne wrote:
| If anyone from Astral sees this: at this level of effort, how do
| you deal with the enormous dependence on Github itself? You
| maintain social connections with upstream, and with PyPA... what
| if Github is compromised/buggy and changes the effect of some
| setting you depend on?
| bognition wrote:
| > what if Github is compromised/buggy
|
| What if? GitHub is already extremely buggy! I'm getting
| increasingly frustrated with the paper cuts that have become
| endemic across the entire platform. For example, it's not
| uncommon for one of our workflows to fail when cloning a
| branch of the repo it is running in.
| woodruffw wrote:
| We talk to GitHub as well! You're right that they are an
| enormous and critical dependency, and we pay close attention to
| the changes they make to their platform.
| tao_oat wrote:
| This is a really great overview; what a useful resource for other
| open-source projects.
| lrvick wrote:
| The only binaries of uv in the world you can get that were full
| source bootstrapped from signed package commits to signed reviews
| to multi-signed deterministic artifacts are the ones from my
| teammates and I at stagex.
|
| All keys on geodistributed smartcards held by maintainers tied to
| a web of trust going back 25 years with over 5000 keys.
|
| https://stagex.tools/packages/core/uv/
|
| Though thankful for clients that let individual maintainers work
| on stagex part time once in a while, we have had one donation
| ever for $50 as a project. (thanks)
|
| Why is it that a bunch of mostly unpaid volunteer hackers are
| putting more effort into supply chain security than OpenAI?
|
| I am annoyed.
| duskdozer wrote:
| >Why is it that a bunch of mostly unpaid volunteer hackers are
| putting more effort into supply chain security than OpenAI?
|
| Unpaid volunteer hackers provide their work for free under
| licenses designed for the purpose of allowing companies like
| OpenAI to use their work without paying or contributing in any
| form. OpenAI wants to make the most money. Why would they spend
| any time or money on something they can get for free?
| ra wrote:
| Not sure if you're fully aware of the context that OpenAI
| bought Astral - who "own" uv.
| hootz wrote:
| Yep. Permissive licenses, "open source", it's all just free
| work for the worst corporations you can think of.
| philipallstar wrote:
| It's free work for anyone.
| tclancy wrote:
| Never let the left hand know what the right hand is doing.
| I suppose it works both ways here, but the specific end
| user is not why people make code available, it's in the
| hope of improving things, even just the tiniest bit.
| MidnightRider39 wrote:
| Seems like the most cynical take on OSS possible.
|
| Like anything good you do an evil person could benefit from
| - is the solution to never do any good?
| fsflover wrote:
| The solution is to use AGPLv3.
| MidnightRider39 wrote:
| Maybe I'm daft, but AGPLv3 doesn't prevent $Evilcorp from
| using it; they just need to share any modifications or
| forks they make?
| 3form wrote:
| Only if they provide the software, or software as a
| service. I suspect it's enough to share modifications or
| forks internally if the software is used only internally,
| but then again, I'm not a lawyer.
| fsflover wrote:
| This is the point. They can use and modify it, but they
| also have to share their modifications, i.e., help its
| development. Yet most megacorps never even touch this
| license.
| pabs3 wrote:
| What are you using for signed reviews?
| lrvick wrote:
| I promise we are actively working on a much better solution
| we hope any distro can use, but... for now we just enforce
| signed merge commits by a different maintainer other than the
| author as something they only do for code they personally
| reviewed.
| pabs3 wrote:
| Are you looking at crev at all?
|
| https://github.com/crev-dev/
| saghm wrote:
| > Why is it that a bunch of mostly unpaid volunteer hackers
| are putting more effort into supply chain security than OpenAI?
|
| Didn't the acquisition only happen a few weeks ago? Wouldn't it
| be more alarming if OpenAI had gone in and forced them to
| change their build process? Unless you're claiming that the
| article is lying about this being a description of what they've
| already been doing for a while (which seems a bit outlandish
| without more evidence), it's not clear to me why you're
| attributing this process to the parent company.
|
| Don't get me wrong; there's _plenty_ you can criticize OpenAI
| over, and I'm not taking a stance on your technical claims,
| but it seems somewhat disingenuous to phrase it like this.
| woodruffw wrote:
| Yeah, I'll just establish for the record that we've been
| thinking about this for a long time, and that it has nothing
| to do with anybody except our own interests in keeping our
| development and release processes secure.
| saghm wrote:
| That fits what I had assumed (and would expect), but it
| definitely doesn't hurt to have that confirmed, so thank
| you!
| blitzar wrote:
| The private jet won't fuel itself now, will it?
| charcircuit wrote:
| >Why is it that a bunch of mostly unpaid volunteer hackers are
| putting more effort into supply chain security than OpenAI?
|
| To be frank: because more effort doesn't actually mean that
| something is more secure. Just because you check extra things
| or take extra steps doesn't mean it actually results in
| tangibly better security.
| MeetingsBrowser wrote:
| Exactly. Deterministic artifacts alone are not necessarily
| more secure and are tangential to a lot of what is being
| described in the blog post.
|
| The blog is mostly focused on hardening the CI/CD pipeline.
| woodruffw wrote:
| (I'm the author of TFA.)
|
| > All keys on geodistributed smartcards held by maintainers
| tied to a web of trust going back 25 years with over 5000 keys.
|
| Neither the age nor the cardinality of the key graph tells me
| anything if I don't trust the maintainers themselves; given
| that you're fundamentally providing third-party builds, what's
| the threat model you're addressing?
|
| It's worth noting that all builds of uv come from a locked
| resolution and, as mentioned in TFA, you can get signed
| artifacts from us. So I'm very murky on the value of signed
| package commits that come from a _different_ set of identities
| than the ones actually building the software.
| kaathewise wrote:
| StageX does reproducible builds, so they are signed
| independently and can also be verified locally. I don't think
| it applies to Astral, but it's useful for packages with a
| single maintainer or a vulnerable CI, where there is only one
| point of failure.
|
| But I also think it'd be nice if projects provided a
| first-party StageX build, like many do with a Dockerfile or
| a Nix flake.
| abigail95 wrote:
| I don't think you are annoyed. You have done this to produce a
| reproducible linux distribution which your partners sell
| support for.
|
| I wouldn't find this annoying at all - I would expect to have
| to do this for hundreds of packages.
|
| Without unpaid volunteers things like Debian do not exist.
| Don't malign the situation and circumstances of other projects,
| especially if they are your competitors.
|
| Compete by being better, not by complaining louder.
| jmalicki wrote:
| This is the market telling you what matters.
|
| OpenClaw has been an outstanding success, it is providing
| people the ability to leak their keys, secrets, and personal
| data, and allowing people to be subject to an incredible number
| of supply chain attacks when its users have felt their attack
| surface was just too low.
|
| Your efforts have been on increasing security and reducing
| supply chain attacks, when the market is strongly signaling to
| you that people want reduced security and more supply chain
| attacks!
| Zopieux wrote:
| The entire paragraph about version pinning using hashes (and
| using a map lookup for in-workflow binary deps) reminds me that
| software engineers are forever doomed to reinvent worse versions
| of nixpkgs and flakes.
|
| I don't even love Nix, it's full of pitfalls and weirdnesses, but
| it provides so much by-default immutability and reproducibility
| that I sometimes forget how others need to rediscover this stuff
| from first principles every time a supply chain attack makes the
| news.
| nDRDY wrote:
| >worse versions of nixpkgs and flakes
|
| You mean statically-compiled binaries and hash pinning? Those
| have been around a bit longer than Nix :-)
| Zopieux wrote:
| Were they deployed at scale in such a way that most (open and
| some non-free) software is packaged as such? I've never seen
| this happen until nixpkgs.
| tclancy wrote:
| Every generation thinks they invented sex. And hash pinning,
| which now sounds dirty.
| 12_throw_away wrote:
| I don't have much experience with GitHub's CI offering. But if
| this is an accurate description of the steps you need to take to
| use it securely... then I don't think it _can_, in fact, ever
| be used securely.
|
| Even if you trust Microsoft's cloud engineering on the backend,
| this is a system that does not appear to follow even the most
| basic principles of privilege and isolation? I'm not sure why you
| would even _try_ to build "supply-chain security" on top of
| this.
| wofo wrote:
| Out of curiosity, is there a build setup you have seen in the
| past that you think could be a good replacement for this
| complex GitHub CI setup? Asking for a friend ;)
|
| Update: now that I've finished reading the article, my
| impression is that complexity is mostly inherent to this
| problem space. I'd be glad to be proven wrong, though!
| everforward wrote:
| I think any of the webhook-based providers are better,
| because you can isolate your secrets. PRs go to a PR webhook
| that runs in an environment that just doesn't have access to
| any secrets.
|
| Releases go to the release webhook, which should output
| nothing and ideally should be a separate machine/VM with
| firewall rules and DNS blocks that prevent traffic to
| anywhere not strictly required.
|
| Things are a lot harder to secure with modern dynamic
| infrastructure, though. Makes me feel old, but things were
| simpler when you could say service X has IP Y and add
| firewall rules around it. Nowadays that service probably has
| 15 IP addresses that change once a week.
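The isolation described above — PR events never seeing secrets, release events running on a locked-down host — can be sketched as a dispatcher that only hands credentials to the release path. All names and shapes here are hypothetical, not any CI provider's API:

```python
# Hypothetical webhook dispatcher: PR handlers get an empty environment,
# so even malicious PR code has no secrets to exfiltrate.
RELEASE_SECRETS = {"RELEASE_SIGNING_KEY": "redacted"}

def run_pr_checks(payload, env):
    # Untrusted path: lint/test the PR with no secrets available.
    return {"ok": True, "secrets_seen": sorted(env)}

def run_release(payload, env):
    # Trusted path: would run on a separate host with egress rules.
    return {"ok": True, "secrets_seen": sorted(env)}

def dispatch(event_type, payload):
    if event_type == "pull_request":
        return run_pr_checks(payload, env={})
    if event_type == "release":
        return run_release(payload, env=dict(RELEASE_SECRETS))
    raise ValueError(f"unhandled event: {event_type}")
```

The point of the design is that secret access is decided by the routing layer, not by anything the workflow code itself can request.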
| WhyNotHugo wrote:
| The complexity comes from how the whole system is designed.
|
| There's no single repository of curated packages as is
| typical in a distribution: instead, actions pull other
| actions, and they're basically very complex wrappers around
| scripts which download binaries from all over the place.
|
| For lots of very simple actions, instead of installing a
| distribution package and running a single command, a whole
| "action" is used which creates and entire layer of
| abstraction over that command.
|
| It's all massive complexity on top of huge abstractions, none
| of which were designed with security in mind: it was just
| gradually bolted on top over the years.
| hardsnow wrote:
| I would agree with this. I recently tried to figure out how to
| properly secure agent-authored code in GitHub Actions. I
| believe I succeeded in doing this[1] but the secure
| configuration ended up being so delicate that I don't have high
| hopes of this being a scalable path.
|
| Now, as another commenter pointed out, maybe this is just
| inherent complexity in this space. But more secure defaults
| could go a long way toward making this more secure in practice.
|
| [1] https://github.com/airutorg/sandbox-action
| superpositions wrote:
| Yeah, this is usually where things break in practice
| s_ting765 wrote:
| Pinning github actions by commit SHA does not solve the supply
| chain problem if the pinned action itself is pulling in other
| dependencies which themselves could be compromised. An action can
| pull in a docker image as a dependency for example. It is
| effectively security theatre. The real fix is owning the code
| that runs in your CI pipelines. Or fork the action itself and
| maintain it as part of your infrastructure.
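Whatever side of this debate you land on, SHA pinning is at least mechanical to audit. A rough sketch that flags `uses:` references not pinned to a full 40-hex commit SHA — the regex is a simplification (it ignores Docker refs and local actions, for example):

```python
import re

# Matches `uses: owner/repo@ref` lines in a workflow file.
USES_RE = re.compile(r"uses:\s*([\w.\-/]+)@([\w.\-/]+)")

def unpinned_actions(workflow_text):
    """Return action references whose ref is not a full commit SHA."""
    flagged = []
    for action, ref in USES_RE.findall(workflow_text):
        # A tag or branch ref (e.g. @v4, @main) is mutable; only a
        # full 40-hex SHA counts as pinned.
        if not re.fullmatch(r"[0-9a-f]{40}", ref):
            flagged.append(f"{action}@{ref}")
    return flagged
```

As the comment above notes, this only covers the first layer: a pinned action can still pull mutable dependencies of its own, which is exactly why the article pairs pinning with auditing.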
| codethief wrote:
| Shouldn't you always read & double-check the 3rd-party GitHub
| actions you use, anyway? (Forking or copying their code alone
| doesn't solve the issue you mention any more than pinning a SHA
| does.)
| s_ting765 wrote:
| Double-checking GitHub actions does not mitigate threats from
| supply chain vulnerabilities. Forking an action moves the
| trust from a random developer to yourself. You still have to
| make sure the action is pulling in dependencies from trusted
| sources which can also be yourself depending on how far you
| want to go.
| zanie wrote:
| We do address this in the article! It's defense in depth, not
| theater.
|
| We audit all of our actions, check if they pull in mutable
| dependencies, contribute upstream fixes, and migrate off using
| any action when we can.
|
| (I work at Astral)
| MeetingsBrowser wrote:
| > It is effectively security theatre.
|
| I disagree. Security is always a trade-off.
|
| Owning, auditing, and maintaining your entire supply chain
| stack is more secure than pinning hashes, but it is not
| practical for most projects.
|
| Pinning your hashes is more secure than not pinning, and is
| close to free.
|
| At the end of the day, the line of trust is drawn somewhere (do
| you audit the actions provided by GitHub?). It is not possible
| to write and release software without trusting some third party
| at some stage.
|
| The important part is recognizing where your "points of trust"
| are, and making a conscious decision about what is worth doing
| yourself.
| anentropic wrote:
| Super useful info... but I feel so tired after reading it
| kdeldycke wrote:
| I maintain `repomatic`, a Python CLI + reusable workflows. It
| bakes most of the practices from this post into a drop-in setup
| for Python projects (uv-based, but works for others too). The
| goal is to make the secure default the easy default for
| maintainers who just want to ship packages. It also addresses
| a lot of GitHub Actions' own shortcomings.
|
| But thanks to the article I added a new check for the fork PR
| workflow approval policy.
|
| More at: https://github.com/kdeldycke/repomatic
___________________________________________________________________
(page generated 2026-04-09 17:00 UTC)