[HN Gopher] We should all be using dependency cooldowns
___________________________________________________________________
We should all be using dependency cooldowns
Author : todsacerdoti
Score : 233 points
Date : 2025-11-21 14:50 UTC (8 hours ago)
(HTM) web link (blog.yossarian.net)
(TXT) w3m dump (blog.yossarian.net)
| elevation wrote:
| You could do a lot of this with CI if you scheduled a job to
| fetch the most recent packages once a month and record a manifest
| with the current versions, then, if no security issues are
| reported before the end of the cooldown period, run integration
| tests against the new manifest. If no tests fail, automatically
| merge this update into the project.
|
| For projects with hundreds or thousands of active dependencies,
| the feed of security issues would be a real fire hose. You'd want
| to use an LLM to filter the security lists for relevance before
| bringing them to the attention of a developer.
|
| It would be more efficient to centralize this capability as a
| service so that 5000 companies aren't all paying for an LLM to
| analyze the same security reports. Perhaps it would be enough for
| someone to run a service like cooldown.pypi.org that served only
| the most vetted packages to everyone.
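The gating logic described in this comment (record versions, wait out a cooldown, promote only if no advisories appeared) can be sketched as a small function. This is a hypothetical illustration, not an existing tool; the function name, advisory-list shape, and dates are invented:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical cooldown gate for the CI scheme described above: a
# pinned version is promoted only once it has been public for the
# full cooldown window with no advisories reported against it.
COOLDOWN_DAYS = 30

def is_eligible(published_at: datetime,
                advisories: list[str],
                now: datetime,
                cooldown_days: int = COOLDOWN_DAYS) -> bool:
    """True if the release has aged past the cooldown and is advisory-free."""
    aged = now - published_at >= timedelta(days=cooldown_days)
    return aged and not advisories

now = datetime(2025, 11, 21, tzinfo=timezone.utc)
mature = datetime(2025, 10, 7, tzinfo=timezone.utc)   # 45 days old
fresh = datetime(2025, 11, 14, tzinfo=timezone.utc)   # 7 days old

print(is_eligible(mature, [], now))             # True: aged out, no advisories
print(is_eligible(fresh, [], now))              # False: still in cooldown
print(is_eligible(mature, ["GHSA-xxxx"], now))  # False: advisory filed
```

In practice the publication timestamps and advisory feed would come from the registry and a vulnerability database; the point is only that the gate is a pure age-plus-advisory check.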
| AndrewDucker wrote:
| Why would you use an LLM rather than just sort by CVE score?
| ok123456 wrote:
| CVE scores are junk. Most CVEs are noise.
| jayd16 wrote:
| Doesn't this mean you're leaving yourself open to known
| vulnerabilities during that "cool down" time?
| jcalvinowens wrote:
| Yep. Not only vulnerabilities, but just bugs in general, which
| usually matter more than vulnerabilities IMHO.
| bityard wrote:
| Do you believe new releases don't introduce new bugs?
| jcalvinowens wrote:
| Obviously. Every release introduces bugs. There's an
| inevitable positive correlation between the amount of code
| we write and the number of bugs we introduce; we just try
| to minimize it.
|
| The probability of introducing bugs is a function of the
| amount of development being done. Releasing less often
| doesn't change that. In fact, under that assumption,
| delaying releases _strictly increases_ the amount of time
| users are affected by the average bug.
|
| People who do this tell themselves the extra time allows
| them to catch more bugs. But in my experience that's a
| bedtime story; most bugs aren't noticed until after
| deployment anyway.
|
| That's completely orthogonal to slowly rolling out changes,
| btw.
| woodruffw wrote:
| To be clear, there's no reason why you can't update
| dependencies in advance of a cooldown period. The cooldown is
| an enforced policy that you can choose to override as needed.
|
| (This also doesn't apply to vulnerabilities per se, since
| known vulnerabilities typically aren't evaluated against
| cooldowns by tools like Dependabot.)
| jcalvinowens wrote:
| No, you can't: the cooldown period is _started_ by the new
| upstream release. So if you follow this "rule" you're
| _guaranteed to be behind the latest upstream release._
| woodruffw wrote:
| I don't understand what you mean. The cooldown period is
| something _you_ decide to enforce; you can _always_
| override it. It's your prerogative as a responsible
| engineer to decide the boundaries of policy enforcement.
| jcalvinowens wrote:
| I mean, you can do anything you want. But you're
| inventing a new definition of "cooldown" different than
| TFA...
| woodruffw wrote:
| I wrote TFA, so I can assure you that this is what I
| meant :-)
|
| (Can you say more about what you found unclear in the
| post? The post definitely does not say "thou shalt not
| update before the cooldown," the argument was that
| cooldowns are a great default. Engineers are
| fundamentally always expected to exercise discretion
| because, per the post, there's no single, sound, perfect
| solution to supply chain risks.)
| jcalvinowens wrote:
| > A "cooldown" is exactly what it sounds like: a window
| of time between when a dependency is published and when
| it's considered suitable for use.
|
| ^ This is what you wrote. I don't understand how that
| could possibly be interpreted any other way than I wrote
| above: an enforced delay on deploying the new code after
| upstream releases it.
|
| > The post definitely does not say "thou shalt not update
| before the cooldown," the argument was that cooldowns are
| a great default
|
| Sorry, that is such a cop out. "I didn't actually mean
| you should do this, I mean you should consider if you
| should maybe do this and you are free to decide not to
| and don't argue with me if you disagree every case is
| different". Either take a stand or don't.
| woodruffw wrote:
| I think this is an overly tendentious reading. Nobody
| else seems to have gotten hung up on this, because they
| understand that it's a policy, not an immutable law of
| nature.
|
| The argument advanced in the post is IMO clear: cooldowns
| are a sensible default to have, and empirically seem to
| be effective at mitigating the risk of compromised
| dependencies. I thought I took sufficient pains to be
| clear that they're not a panacea.
| jcalvinowens wrote:
| I'm simply saying I think the policy you're proposing is
| bad. It is completely bizarre to me you're trying to
| frame that as a semantic argument.
| woodruffw wrote:
| I'm not saying it's a semantic argument. I'm saying that
| the policy isn't universal, whereas your argument appears
| to hinge on me _thinking_ that it is. But this seems to
| have run its course.
| jcalvinowens wrote:
| That's a semantic argument.
|
| Me saying your proposed policy is bad is in no way
| predicated on any assumption that you intended it to be
| "universal". Quite the opposite: the last thing anybody
| needs at work is yet another poorly justified bullshit
| policy they have to constantly request an "exception" to,
| just to do their job...
| swatcoder wrote:
| No.
|
| A sane "cooldown" is just for _automated_ version updates
| relying on semantic versioning rules, which is a pretty
| questionable practice in the first place, but is indeed made a
| lot safer this way.
|
| You can still _manually_ update your dependency versions when
| you learn that your code is exposed to some vulnerability
| that's purportedly been fixed. It's no different than manually
| updating your dependency version when you learn that there's
| some implementation bug or performance cliff that was fixed.
|
| You might even still use an automated system to identify these
| kinds of "critical" updates and bring them to your attention,
| so that you can review them and can appropriately assume
| accountability for the choice to incorporate them early,
| bypassing the cooldown, if you believe that's the right thing
| to do.
|
| Putting in that effort, having the expertise to do so, and
| assuming that accountability is kind of your "job" as a
| developer or maintainer. You can't just automate and delegate
| everything if you want people to be able to trust what you
| share with them.
| jayd16 wrote:
| If you could understand the quality of updates you're pulling
| in, that solves the issue entirely. The point is that you
| can't.
|
| There's no reason to pretend we live in a world where
| everyone is manually combing through the source of every
| dependency update.
| astrobe_ wrote:
| TFA shows that most vulnerabilities have a "window of
| opportunity" smaller than one day. Are you anxious going on
| week-end because Friday evening a zero-day or a major bug could
| be made public?
| jayd16 wrote:
| Well then you agree that the answer is yes. At the end of the
| article a 14-day window is mentioned but not dismissed, and
| the downsides are not mentioned.
| Havoc wrote:
| > we should all
|
| Except if everyone does it, the chance of malicious things
| being spotted in source also drops by virtue of less eyeballs.
|
| It still helps, though, in cases where the maintainer spots it,
| etc.
| smaudet wrote:
| > also drops by virtue of less eyeballs
|
| I don't think the people automatically updating and getting hit
| with the supply chain attack are also scanning the code, so I
| don't think this will impact them much.
|
| If instead, updates are explicitly put on cooldowns, with the
| option of manually updating sooner, then there would be more
| eyeballs, not fewer, as people are more likely to investigate
| patch notes, etc., possibly even test in isolation...
| woodruffw wrote:
| (Author of the post.)
|
| The underlying premise here is that supply chain security
| vendors are _honest_ in their claims about proactively scanning
| (and effectively detecting + reporting) malicious and
| compromised packages. In other words, it's not about eyeballs
| (I don't think people who automatically apply Dependabot bumps
| are categorically reading the code anyways), but about rigorous
| scanning and reporting.
| tjpnz wrote:
| You might read the source if something breaks but in a
| successful supply chain attack that's unlikely to happen. You
| push to production, go home for the evening and maybe get
| pinged about it by some automation in a few weeks.
| pico303 wrote:
| I agree. This "cooldown" approach seems antithetical to some
| basic tenets of security in the open source world, namely that
| more eyeballs make for better code. If we all stop looking at
| or using the thing, are these security professionals really
| going to find the supply-chain problems for us in the thing,
| for free?
|
| Instead of a period where you don't use the new version,
| shouldn't we be promoting a best practice of not just
| blindly using a package or library in production? This
| "cooldown" should be a period of use in dev or QA environments
| while we take the time to investigate the libraries we use and
| their dependencies. I know this can be difficult in many
| languages and package managers, given the plethora of libraries
| and dependencies (I'm looking at you in particular, JavaScript).
| But "it's hard" shouldn't really be a good excuse for our best
| efforts to maintain secure and stable applications.
| jcalvinowens wrote:
| I hate this. Delaying real bugfixes to achieve some nebulous
| poorly defined security benefit is just bad engineering.
| wrs wrote:
| Do you upgrade all your dependencies _every day_? If not, then
| there's no real difference in upgrading as if it were 7 days
| ago.
| jerlam wrote:
| Your CI/CD might be set up to upgrade all your dependencies on
| every build.
| wrs wrote:
| I've seen a lot of CI/CD setups and I've never seen that.
| If that were common practice, it would certainly simplify
| the package manager, since there would be no need for
| lockfiles!
| jerlam wrote:
| I didn't necessarily say they were good CI/CD practices.
| ktpsns wrote:
| Unattended upgrades for server installations are very common.
| For instance, for Ubuntu/Debian this updates by default daily
| (source:
| https://documentation.ubuntu.com/server/how-to/software/auto...).
| No cooldown implemented, AFAIK.
|
| Of course we talk about OS security upgrades here, not
| library dependencies. But the attack vector is similar.
| jcalvinowens wrote:
| I upgrade all dependencies every time I deploy anything. If
| you don't, a zero day is going to bite you in the ass: that's
| the world we now live in.
|
| If upgrading like that scares you, your automated testing
| isn't good enough.
|
| On average, the most bug-free Linux experience is to run the
| latest version of everything. I wasted much more time
| backporting bugfixes before I started doing that than I have
| spent on new bugs since.
| smaudet wrote:
| > zero day is going to bite you in the ass
|
| Maybe your codebase truly is that riddled with flaws, but:
|
| 1) If so, updating will not save you from zero days, only
| from whatever bugs the developers have found.
|
| 2) Most updates are not zero day patches. They are as
| likely to (unintentionally) introduce zero days as they are
| to patch them.
|
| 3) In the case where a real issue is found, I can't imagine
| it's hard to use the aforementioned security vendors
| and their recommendations to force updates outside of a
| cooldown period.
| jcalvinowens wrote:
| My codebase runs on top of the same millions of lines of
| decades old system code that yours does. You don't seem
| to appreciate that :)
| smaudet wrote:
| If you mean operating system code, that is generally
| opaque, and not quite what the article is talking about
| (you don't use a dependency manager to install code that
| you have reviewed to perform operating system updates -
| you can, and that is fantastic for you, but I imagine that
| is not what you mean).
|
| Although, even for operating systems, cooldown periods on
| patches are not only a good thing, but something that
| e.g. a large org that can't afford downtime will employ
| (managing Windows or Linux software patches, e.g.). The
| reasoning is the same - updates have just as much chance
| to introduce bugs as fix them, and although you hope your
| OS vendor does adequate testing, _especially in the case
| where you cannot audit their code_, you have to wait so
| that either some 3rd-party security vendor can assess
| system safety, or you are able to perform adequate
| testing yourself.
| starburst wrote:
| Upgrading to a new version can also introduce new exploits;
| no amount of tests can find those.
|
| Some of these can be short-lived, existing only in a minor
| patch and fixed promptly in the next one, but you'll get them
| if you blindly upgrade to the latest constantly.
|
| There are always risks either way, but the latest version
| doesn't mean the "best" version: mistakes, errors,
| performance degradations, etc. happen.
| jcalvinowens wrote:
| Personally, I choose to aggressively upgrade and engage
| with upstreams when I find problems, not to sit around
| waiting and hoping somebody will notice the bugs and fix
| them before they affect me :)
| jhatemyjob wrote:
| That sounds incredibly stressful.
| icehawk wrote:
| > I upgrade all dependencies every time I deploy anything.
| If you don't, a zero day is going to bite you in the ass:
| that's the world we now live in.
|
| I think you're using a different definition of zero day
| than the standard one. A zero-day vulnerability is not
| going to have a patch you can get with an update.
| jcalvinowens wrote:
| Zero days often get fixed in fewer than seven days. If you
| wait seven days, you're pointlessly vulnerable.
| saurik wrote:
| Only if you already upgraded to the one with the bug in
| it, and then only if you ignore "this patch is actually
| different: read this notice and deploy it immediately".
| The argument is not "never update quickly": it is don't
| routinely deploy updates constantly that are not known to
| be high priority fixes.
| jcalvinowens wrote:
| > The argument is not "never update quickly": it is don't
| routinely deploy updates constantly that are not known to
| be high priority fixes.
|
| Yes. I'm saying that's wrong.
|
| The default should always be to upgrade to new upstream
| releases immediately. Only in exceptional cases should
| things be held back.
| midasz wrote:
| Renovate (a Dependabot equivalent, I think) creates PRs; I
| usually walk through them every morning or when there's a bit
| of downtime. I'm playing with the idea of automerging patches
| and maybe even minor updates, but up until now it's not that
| hard to keep up.
| swatcoder wrote:
| The point is to apply a cooldown to your "dumb" and
| unaccountable automation, not to your own professional judgment
| as an engineer.
|
| If there's a bugfix or security patch that applies to how your
| application uses the dependency, then you review the changes,
| manually update your version if you feel comfortable with those
| changes, and accept responsibility for the intervention if it
| turns out you made a mistake and rushed in some malicious code.
|
| Meanwhile, most of the time, most changes pushed to
| dependencies are not even in the execution path of any given
| application that integrates with them, and so don't need to be
| rushed in. And most others are "fixes" for issues that were
| apparently not presenting an imminent test failure or support
| crisis for your users, and don't warrant being rushed in.
|
| There's not really a downside here, for any software that's
| actually being actively maintained by a responsible engineer.
| jcalvinowens wrote:
| You're not thinking about the system dependencies.
|
| > Meanwhile, most of the time, most changes pushed to
| dependencies are not even in the execution path of any given
| application that integration with them
|
| Sorry, this is really ignorant. You don't appreciate how much
| churn there is in things like the kernel and glibc, even in
| stable branches.
| swatcoder wrote:
| > You're not thinking about the system dependencies.
|
| You're correct, because it's completely neurotic to worry
| about phantom bugs that have no observable presence but
| must absolutely positively be resolved as soon as a
| candidate fix has been pushed.
|
| If there's a zero day vulnerability that affects your
| system, which is a rare but real thing, you can be notified
| and bypass a cooldown system.
|
| Otherwise, you've presumably either adapted your workflow
| to work around a bug or you never even recognized one was
| there. Either way, waiting an extra <cooldown> before
| applying a fix isn't going to harm you, but it will dampen
| the much more dramatic risk of instability and supply chain
| vulnerabilities associated with being on the bleeding edge.
| jcalvinowens wrote:
| > You're correct, because it's completely neurotic to
| worry about phantom bugs that have no observable presence
| but must absolutely positively be resolved as soon
| as a candidate fix has been pushed.
|
| Well, I've made a whole career out of fixing bugs like
| that. Just because you don't see them doesn't mean they
| don't exist.
|
| It is shockingly common to see systems bugs that don't
| trigger for a long time by luck, and then suddenly
| trigger out of the blue everywhere at once. Typically
| it's caused by innocuous changes in unrelated code, which
| is what makes it so nefarious.
|
| The most recent example I can think of was an
| uninitialized variable in some kernel code: hundreds of
| devices ran that code reliably for a year, but an
| innocuous change in the userland application made the
| device crash on startup almost 100% of the time.
|
| The fix had been in stable for _months_; they just
| hadn't bothered to upgrade. If they had upgraded, they'd
| have never known the bug existed :)
|
| I can tell dozens of stories like that, which is why I
| feel so strongly about this.
| saurik wrote:
| If your internal process is willing to ship and deploy
| upgrades--whether to your code or that of a third party--
| without even testing minimally enough to notice that they
| cause almost a 100% chance of crashing, you need the
| advice to slow down your updates more than anyone...
| jcalvinowens wrote:
| Obviously, they caught the bug and didn't deploy it.
|
| The point is that a change to a completely unrelated
| component caused a latent bug to make the device
| unusable, which ended up delaying a release for weeks and
| causing them to have to pay me a bunch of money to fix it
| for them.
|
| If they'd been applying upgrades, they would have never
| even known it existed, and all that trouble would have
| been avoided.
|
| I mean, I'm sort of working against my own interest here:
| arguably I should _want_ people to delay upgrades so I
| can get paid to backport things for them :)
| 33a wrote:
| A lot of security problems can be solved by moving slower.
| solarist wrote:
| All security problems can be solved by not moving at all.
| kykat wrote:
| What about food security?
| solarist wrote:
| If you don't eat food you won't get food poisoning.
| ksherlock wrote:
| The slow food movement encourages eating local foods and
| gardening, among other things, so it actually improves food
| security, for people who aren't food insecure.
| theoldgreybeard wrote:
| joke's on them, I already have 10-year dependency cooldowns on
| the app I work on at work!
| cheschire wrote:
| Retirement Driven Development
| nitwit005 wrote:
| This did make me think of our app running on Java 8.
|
| Although, I suppose we've probably updated the patch version.
| marcosdumay wrote:
| There's always the one that requires Java 8, and the one that
| requires Java >= 11.
| layer8 wrote:
| People in this thread are worried that they are significantly
| vulnerable if they don't update right away. However, this is
| mostly not an issue in practice. A lot of software doesn't have
| continuous deployment, but instead has customer-side deployment
| of new releases, which follow a slower rhythm of several weeks or
| months, barring emergencies. They are fine. Most vulnerabilities
| that _aren't_ supply-chain attacks are only exploitable under
| special circumstances anyway. The thing to do is to _monitor_
| your dependencies and their published vulnerabilities, and for
| critical vulnerabilities to assess whether your product is
| affected by it. Only then do you need to update that specific
| dependency right away.
| silvestrov wrote:
| I think the main question is: does your app get unknown input
| (i.e. controlled by other people).
|
| Browsers get a lot of unknown input, so they have to update
| often.
|
| A weather app is likely to only get input from one specific
| site (controlled by the app developers), so it should be
| relatively safe.
| duped wrote:
| A million times this. You update a dependency when there are
| bug fixes or features that you need (and this includes patching
| vulnerabilities!). Those situations are rare. Otherwise you're
| just introducing risk into your system - and not that you're
| going to be caught in some dragnet supply chain attack, but
| that some dependency broke something you relied on by accident.
|
| Dependencies are good. Churn is bad.
| embedding-shape wrote:
| > for critical vulnerabilities to assess whether your product
| is affected by it. Only then do you need to update that specific
| dependency right away.
|
| This is indeed what's missing from the ecosystem at large.
| People seem to be under the impression that if a new release of
| a software/library/OS/application comes out, you need to move
| to it today. They don't seem to actually look through the
| changes, only doing that if anything breaks, and then proceed
| to upgrade because "why not" or "it'll only get harder in the
| future", neither of which feels like a solid choice considering
| the trade-offs.
|
| While we seem to have already known that it introduces
| massive churn and unneeded work, it seems like we're waking up
| to the realization that staying at the edge of version numbers
| is a security tradeoff as well. Sadly, not enough tooling
| seems to take this into account (yet?).
| hypeatei wrote:
| > Sadly, not enough tooling seems to take this into account
|
| Most tooling (e.g. Dependabot) allows you to set an interval
| between version checks. What more could be done on that front
| exactly? Devs can already choose to check less frequently.
| mirashii wrote:
| The check frequency isn't the problem, it's the latency
| between release and update. If a package was released 5
| minutes before dependabot runs and you still update to it,
| your lower frequency hasn't really done anything.
| hypeatei wrote:
| What are the chances of that, though? The same could
| happen if you wait X amount of days for the version to
| "mature" as well. A security issue could be found five
| minutes after you update.
|
| EDIT: GitHub supports this scenario too (as mentioned in
| the article):
|
| https://github.blog/changelog/2025-07-01-dependabot-supports...
|
| https://docs.github.com/en/code-security/dependabot/working-...
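The distinction debated in this subthread (check frequency versus the minimum age of what gets adopted) can be made concrete with a toy calculation; the timestamps below are invented for illustration:

```python
from datetime import datetime, timedelta, timezone

# Toy numbers: a weekly dependency-update run can still adopt a release
# published five minutes before the run, so the check interval alone
# puts no lower bound on a release's age.
run_time = datetime(2025, 11, 21, 14, 0, tzinfo=timezone.utc)
published = run_time - timedelta(minutes=5)

print(run_time - published <= timedelta(minutes=5))  # True: brand new

# A cooldown, by contrast, bounds the minimum age of anything adopted:
def passes_cooldown(published: datetime, now: datetime,
                    days: int = 7) -> bool:
    return now - published >= timedelta(days=days)

print(passes_cooldown(published, run_time))                      # False
print(passes_cooldown(published, run_time + timedelta(days=7)))  # True
```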
| tracnar wrote:
| You could use this funky tool from oss-rebuild which proxies
| registries so they return the state they were at a past date:
| https://github.com/google/oss-rebuild/tree/main/cmd/timewarp
| pas wrote:
| > "it'll only get harder in the future"
|
| that's generally true, no?
|
| of course waiting a few days/weeks should be the minimum,
| unless there's a CVE (or equivalent) that _applies_
| jerf wrote:
| I fought off the local imposition of Dependabot by executive
| fiat about a year ago by pointing out that it _maximizes_
| vulnerabilities to supply chain attacks if blindly followed
| or used as a metric excessively stupidly. Maximizing
| vulnerabilities was not the goal, after all. You do not want
| to harass teams with the fact that DeeplyNestedDepen just
| went from 1.1.54-rc2 to 1.1.54-rc3 because the worst case is
| that they upgrade just to shut the bot up.
|
| I think I wouldn't object to "Dependabot on a 2-week delay"
| as something that at least flags. However, working in Go more
| than anything else, it was often the case even so that
| dependency alerts were just an annoyance if they weren't tied
| to a security issue or something. Dynamic languages and
| static languages do not have the same risk profiles at all.
| The idea that some people have that all dependencies are
| super vital to update all the time and the casual expectation
| of a constant stream of vital security updates is not a
| general characteristic of programming, it is a _specific_
| characteristic not just of certain languages but arguably the
| community attached to those languages.
|
| (What we really need is capabilities, even at a very gross
| level, so we can all notice that the supposed vector math
| library suddenly at version 1.43.2 wants to add network
| access, disk reading, command execution, and cryptography to
| the set of things it wants to do, which would raise all sorts
| of eyebrows immediately, even perhaps in an automated
| fashion. But that's a separate discussion.)
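The capability idea in the parenthetical above could in principle be checked mechanically. A minimal sketch, assuming (hypothetically) that each release declares a coarse capability set; the capability names and version data are invented:

```python
# Hypothetical capability-diff check: flag any release that *adds*
# capabilities relative to the previous one. The capability names and
# manifests here are invented for illustration.

def added_capabilities(prev: set[str], curr: set[str]) -> set[str]:
    """Capabilities the new release requests that the old one did not."""
    return curr - prev

v_old = {"simd"}                                          # a vector math library
v_new = {"simd", "network", "fs-read", "exec", "crypto"}  # suspicious bump

added = added_capabilities(v_old, v_new)
if added:
    # The eyebrow-raising signal: a math library suddenly wanting
    # network, disk, and process access.
    print("new capabilities requested:", sorted(added))
```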
| skybrian wrote:
| It seems like some of the arguments in favor of doing
| frequent releases apply at least a little bit to
| dependency updates?
|
| Doing updates on a regular basis (weekly to monthly) seems
| like a good idea so you don't forget how to do them and the
| work doesn't pile up. Also, it's easier to debug a problem
| when there are fewer changes at once.
|
| But they could be rescheduled depending on what else is
| going on.
| catlifeonmars wrote:
| I use a dependabot config that buckets security updates
| into a separate pull than other updates. The non-security
| update PRs are just informational (can disable but I choose
| to leave them on), and you can actually spend the time to
| vet the security updates
| dap wrote:
| At my last job, we only updated dependencies when there was a
| compelling reason. It was awful.
|
| What would happen from time to time was that an important
| reason _did_ come up, but the team was now many releases
| behind. Whoever was unlucky enough to sign up for the project
| that needed the updated dependency now had to do all those
| updates of the dependency, including figuring out how they
| affected a bunch of software that they weren't otherwise
| going to work on. (e.g., for one code path, I need a bugfix
| that was shipped three years ago, but pulling that into my
| component affects many other code paths.) They now had to go
| figure out what would break, figure out how to test it, etc.
| Besides being awful for them, it creates bad incentives
| (don't sign up for those projects; put in hacks to avoid
| having to do the update), and it's also just plain bad for
| the business because it means almost any project, however
| simple it seems, might wind up running into this pit.
|
| I now think of it this way: either you're on the dependency's
| release train or you jump off. If you're on the train, you
| may as well stay pretty up to date. It doesn't need to be
| every release the minute it comes out, but nor should it be
| "I'll skip months of work and several major releases until
| something important comes out". So if you decline to update
| to a particular release, you've got to ask: am I jumping off
| forever, or am I just deferring work? If you think you're
| just deferring the _decision_ until you know if there's a
| release worth updating to, you're really rolling the dice.
|
| (edit: The above experience was in Node.js. Every change in a
| dynamically typed language introduces a lot of risk. I'm now
| on a team that uses Rust, where knowing that the program
| compiles and passes all tests gives us a lot of confidence in
| the update. So although there's a lot of noise with regular
| dependency updates, it's not actually that much work.)
| JoshTriplett wrote:
| > I'm now on a team that uses Rust, where knowing that the
| program compiles and passes all tests gives us a lot of
| confidence in the update.
|
| That's been my experience as well. In addition, the
| ecosystem largely holds to semver, which means a non-major
| upgrade tends to be painless, and conversely, if there's a
| _major_ upgrade, you know not to put it off for too long
| because it'll involve some degree of migration.
| lock1 wrote:
| I think it also depends on the community. The last time
| I touched Node.js and JavaScript-related things, every time
| I tried to update something, it practically guaranteed
| something would explode for no reason.
|
| Meanwhile, my recent legacy Java project migration from JDK 8
| -> 21 & a ton of dependency upgrades has been a pretty smooth
| experience so far.
| Terr_ wrote:
| Yeah, along with the community's attitudes to risk and
| quality, there is a... chronological component.
|
| I'd like to be juuuust far enough behind that most of the
| nasty surprises have already been discovered by somebody
| else, preferably with workarounds developed.
|
| At the same time, you don't want to be so far back that
| upgrading uncovers novel problems or ones that nobody
| else cares about anymore.
| coredog64 wrote:
| My current employer publishes "staleness" metrics at the
| project level. It's imperfect because it weights all the
| dependencies the same, but it's better than nothing.
| stefan_ wrote:
| That's because the security industry has been captured by
| useless middle-manager types who can see that "one dependency
| has a critical vulnerability", but could never in their life
| scrounge together the clue to analyze the impact of that
| vulnerability correctly. All they know is that the checklist
| fails, and the checklist cannot fail.
|
| (Literally at one place we built a SPA frontend that was
| embedded in the device firmware as a static bundle, served to
| the client and would then talk to a small API server. And
| because these NodeJS types liked to have libraries reused for
| server and frontend, we would get endless "vulnerability
| reports" - but all of this stuff only ever ran in the client's
| browser!)
| jerf wrote:
| Also, if you are updating "right away" it is presumably because
| of some specific vulnerability (or set of them). But if you're
| in an "update right now" mode you have the most eyes on the
| source code in question at that point in time, and it's
| probably a relatively small patch for the targeted problem.
| Such a patch is the absolute worst time for an attacker to try
| to sneak anything into a release, the exact and complete
| opposite of the conditions they are looking for.
|
| Nobody is proposing a system that utterly and completely locks
| you out of all updates if they haven't aged enough. There is
| always going to be an override switch.
| justsomehnguy wrote:
| > People in this thread are worried that they are significantly
| vulnerable if they don't update right away
|
| Most of them assume that, because they are working on some
| publicly accessible website, 99% of the people and orgs in the
| world are running nothing but publicly accessible websites.
| bumblehean wrote:
| >The thing to do is to monitor your dependencies and their
| published vulnerabilities, and for critical vulnerabilities to
| assess whether your product is affected by it. Only then do you
| need to update that specific dependency right away.
|
| The practical problem with this is that many large
| organizations have a security/infosec team that mandates a
| "zero CVE" posture for all software.
|
| Where I work, if our infosec team's scanner detects a critical
| vulnerability in any software we use, we have 7 days to update
| it. If we miss that window we're "out of compliance" which
| triggers a whole process that no one wants to deal with.
|
| The path of least resistance is to update everything as soon as
| updates are available. Consequences be damned.
| BrenBarn wrote:
| > The practical problem with this is that many large
| organizations have a security/infosec team that mandates a
| "zero CVE" posture for all software.
|
| The solution is to fire those teams.
| bumblehean wrote:
| Sure I'll go suggest that to my C-suite lol
| paulddraper wrote:
| Someone has the authority to fix the problem. Maybe it's
| them.
| acdha wrote:
| This isn't a serious response. Even if you had the clout to
| do that, you'd then own having to deal with the underlying
| pressure which lead them to require that in the first
| place. It's rare that this is someone waking up in the
| morning and deciding to be insufferable, although you can't
| rule that out in infosec, but they're usually responding to
| requirements added by customers, auditors needed to get
| some kind of compliance status, etc.
|
| What you should do instead is talk with them about SLAs and
| validation. For example, commit to patching CRITICAL within
| x days, HIGH with y, etc. but also have a process where
| those can be cancelled if the bug can be shown not to be
| exploitable in your environment. Your CISO should be
| talking about the risk of supply chain attacks and outages
| caused by rushed updates, too, since the latter are pretty
| common.
| IcyWindows wrote:
| Aren't some of these government regulations for cloud,
| etc.?
| tetha wrote:
| I really dislike that approach. We're by now evaluating high-
| severity CVEs ASAP in a group to figure out if we are
| affected, and if mitigations apply. Then there is the choice
| of crash-patching and/or mitigating in parallel, updating
| fast, or just prioritizing that update more.
|
| We had like 1 or 2 crash-patches in the past - Log4Shell was
| one of them, and blocking an API no matter what in a
| component was another one.
|
| In a lot of other cases, you could easily wait a week or two
| for directly customer facing things.
| cosmic_cheese wrote:
| I think there's a much stronger argument for policies that
| limit both the number and complexity of dependencies. Don't add
| a dependency unless it's highly focused (no "everything
| libraries" that pull
| in entire universes of their own) and carries a high level of
| value. A project's entire dependency tree should be small and
| clean.
|
| Libraries themselves should perhaps also take a page from the
| book of Linux distributions and offer LTS (long term support)
| releases that are feature frozen and include only security
| patches, which are much easier to reason about and periodically
| audit.
| pengaru wrote:
| I'd be willing to pay $100 to upvote your comment 100x.
| Y_Y wrote:
| How do you think dang puts bread on the table?
| tcfhgj wrote:
| Won't using highly focused dependencies increase the number of
| dependencies?
|
| Limiting the number of dependencies, but then rewriting them in
| your own code, will also increase the maintenance burden and
| compile times
| skydhash wrote:
| A lot of projects are using dependencies but only using a
| small part of them, or using them in a single place for a
| single use case. Like bringing in formik (npm) when you only
| have a single form, or moment because you want to format a
| single date.
| buu700 wrote:
| I think AI nudges the economics more in this direction as well.
| Adding a non-core dependency has historically bought short-term
| velocity in exchange for different long-term maintenance costs.
| With AI, there are now many more cases where a first-party
| implementation becomes cheaper/easier/faster in both the short
| term and the long term.
|
| Of course it's up to developers to weigh the tradeoffs and make
| reasonable choices, but now we have a lot more optionality.
| Reaching for a dependency no longer needs to be the default
| choice of a developer on a tight timeline/budget.
| xmodem wrote:
| Let's have AI generate the same vulnerable code across
| hundreds of projects, most of which will remain vulnerable
| forever, instead of having those projects all depend on a
| central copy of that code that can be fixed and distributed
| once the issue gets discovered. Great plan!
| buu700 wrote:
| You're attacking a straw man. No one said not to use
| dependencies.
| xmodem wrote:
| At one stage in my career the startup I was working at
| was being acquired, and I was conscripted into the due-
| diligence effort. An external auditor had run a scanning
| tool over all of our repos and the team I was on was
| tasked with going through thousands of snippets across
| ~100 services and doing _something_ about them.
|
| In many cases I was able to replace 10s of lines of code
| with a single function call to a dependency the project
| already had. In very few cases did I have to add a new
| dependency.
|
| But directly relevant to this discussion is the story of
| the most copied code snippet on stack overflow of all
| time [1]. Turns out, it was buggy. And we had more than
| one copy of it. If it hadn't been for the due diligence
| effort I'm 100% certain they would still be there.
|
| [1]: https://news.ycombinator.com/item?id=37674139
| buu700 wrote:
| Sure, but that doesn't contradict the case for
| conservatism in adding new dependencies. A maximally
| liberal approach is just as bad as the inverse. For
| example:
|
| * Introducing a library with two GitHub stars from an
| unknown developer
|
| * Introducing a library that was last updated a decade
| ago
|
| * Introducing a library with a list of aging unresolved
| CVEs
|
| * Pulling in a million lines of code that you're
| reasonably confident you'll never have a use for 99% of
|
| * Relying on an insufficiently stable API relative to the
| team's budget, which risks eventually becoming an
| obstacle to applying future security updates (if you're
| stuck on version 11.22.63 of a library with a current
| release of 20.2.5, you have a problem)
|
| Each line of code included is a liability, regardless of
| whether that code is first-party or third-party. Each
| dependency in and of itself is also a liability and
| ongoing cost center.
|
| Using AI doesn't magically make all first-party code
| insecure. Writing good code and following best practices
| around reviewing and testing is important regardless of
| whether you use AI. The point is that AI reduces the
| upfront cost of first-party code, thus diluting the
| incentive to make short-sighted dependency management
| choices.
| xmodem wrote:
| > Introducing a library with two GitHub stars from an
| unknown developer
|
| I'd still rather have the original than the AI's un-
| attributed regurgitation. Of course the fewer users
| something has, the more scrutiny it requires, and below a
| certain threshold I will be sure to specify an exact
| version and leave a comment for the person bumping deps
| in the future to take care with these.
|
| > Introducing a library that was last updated a decade
| ago
|
| Here I'm mostly with you, if only because I will likely
| want to apply whatever modernisations were not possible
| in the language a decade ago.
|
| > Introducing a library with a list of aging unresolved
| CVEs
|
| How common is this in practice? I don't think I've ever
| gone library hunting and found myself with a choice
| between "use a thing with unsolved CVEs" and "rewrite it
| myself". Normally the way projects end up depending on
| libraries with lists of unresolved CVEs is by adopting a
| library that subsequently becomes unmaintained. Obviously
| this is a painful situation to be in, but I'm not sure
| it's worse than if you had replicated the code instead.
|
| > Pulling in a million lines of code that you're
| reasonably confident you'll never have a use for 99% of
|
| It very much depends - not all imported-and-unused code
| is equal. Like yeah, if you have Flask for your web
| framework, SQLAlchemy for your ORM, Jinja for your
| templates, well you probably shouldn't pull in Django for
| your authentication system. On the other hand, I would be
| shocked if I had ever used more than 5% of the standard
| library in the languages I work with regularly. I am
| definitely NOT about to start writing my rust as no_std
| though.
|
| > Relying on an insufficiently stable API relative to the
| team's budget, which risks eventually becoming an
| obstacle to applying future security updates (if you're
| stuck on version 11.22.63 of a library with a current
| release of 20.2.5, you have a problem)
|
| If a team does not have the resources to keep up to date
| with their maintenance work, that's a problem. A problem
| that is far too common, and a situation that is unlikely
| to be improved by that team replicating the parts of the
| library they need into their own codebase. In my
| experience, "this dependency has a CVE and the security
| team is forcing us to update" can be one of the few ways
| to get leadership to care about maintenance work at all
| for teams in this situation.
|
| > Each line of code included is a liability, regardless
| of whether that code is first-party or third-party. Each
| dependency in and of itself is also a liability and
| ongoing cost center.
|
| First-party code is an individual liability. Third-party
| code can be a shared one.
| xmodem wrote:
| I've seen this argument made frequently. It's clearly a popular
| sentiment, but I can't help feel that it's one of those things
| that sounds nice in theory if you don't think about it too
| hard. (Also, cards on the table, I personally really _like_
| being able to pull in a tried-and-tested implementation of code
| to solve a common problem, one that's also used by, in some
| cases, literally millions of other projects. I dislike having
| to re-solve the same problem I have already solved elsewhere.)
|
| Can you cite an example of a moderately-widely-used open source
| project or library that is pulling in code as a dependency that
| you feel it should have replicated itself?
|
| What are some examples of "everything libraries" that you view
| as problematic?
| skydhash wrote:
| Anything that pulled in chalk. You need a very good reason to
| emit escape sequences. The whole npm (and rust, python,..)
| ecosystem assumes that if it's a tty, then it's a full blown
| xterm-256color terminal. And then you need to pipe to cat or
| less to have sensible output.
|
| So if you're adding chalk, that generally means you don't
| know jack about terminals.
| igregoryca wrote:
| Some people appreciate it when terminal output is easier to
| read.
|
| If chalk emits sequences that aren't supported by your
| terminal, then that's a deficiency in chalk, not the
| programs that wanted to produce colored output. It's easier
| to fix chalk than to fix 50,000 separate would-be
| dependents of chalk.
| Dylan16807 wrote:
| I appreciate your frustration but this isn't an answer to
| the question. The question is about implementing the same
| feature in two different ways, dependency or internal code.
| Whether a feature _should_ be added is a different
| question.
| igregoryca wrote:
| Most of your supply chain attack surface is social engineering
| attack surface. Doesn't really matter if I use Lodash, or 20
| different single-function libraries, if I end up trusting the
| exact same people to not backdoor my server.
|
| Of course, small libraries get a bad rap because they're often
| maintained by tons of different people, especially in less
| centralized ecosystems like npm. That's usually a fair
| assessment. But a single author will sometimes maintain 5, 10,
| or 20 different popular libraries, and adding another library
| of theirs won't really increase your social attack surface.
|
| So you're right about "pull[ing] in universes [of package
| maintainers]". I just don't think complexity or number of
| packages are the metrics we should be optimizing. They are
| correlates, though.
|
| (And more complex code can certainly contain more
| vulnerabilities, but that can be dealt with in the traditional
| ways. Complexity begets simplicity, yadda yadda; complexity
| that only begets complexity should obviously be eliminated)
| gr4vityWall wrote:
| The Debian stable model of having a distro handle common
| dependencies with a full system upgrade every few years looks
| more and more sane as years pass.
|
| It's a shame some ecosystems move waaay too fast, or don't have a
| good story for having distro-specific packages. For example, I
| don't think there are Node.js libraries packaged for Debian that
| allow you to install them from apt and use it in projects. I
| might be wrong.
| kykat wrote:
| It is possible to work with rust, using debian repositories as
| the only source.
| embedding-shape wrote:
| > For example, I don't think there are Node.js libraries
| packaged for Debian that allow you to install them from apt and
| use it in projects
|
| Web search shows some:
| https://packages.debian.org/search?keywords=node&searchon=na...
| (but also shows "for optimizing reasons some results might have
| been suppressed" so might not be all)
|
| Although probably different from other distros, Arch for
| example seems to have none.
| o11c wrote:
| Locally, you can do:
|     apt-cache showpkg 'node-*' | grep ^Package:
|
| which returns 4155 results, though 727 of them are type
| packages.
|
| Using these in commonjs code is trivial; they are
| automatically found by `require`. Unfortunately, system-
| installed packages are yet another casualty of the ESM
| transition ... there are ways to make it work but it's not
| automatic like it used to be.
| 9dev wrote:
| > Unfortunately, system-installed packages are yet another
| casualty of the ESM transition ...
|
| A small price to pay for the abundant benefits ESM brings.
| noosphr wrote:
| Never mistake motion for action.
|
| An ecosystem moving too quickly, when it isn't being
| fundamentally changed, isn't a sign of a healthy ecosystem, but
| of a pathological one.
|
| No one can think that JS has progressed substantially in the
| last three years, yet trying to build any three-year-old
| project without updates is so hard that a rewrite is a
| reasonable solution.
| gr4vityWall wrote:
| > No one can think that js has progressed substantially in
| the last three years
|
| Are we talking about the language, or the wider ecosystem?
|
| If the latter, I think a lot of people would disagree. Bun is
| about three years old.
|
| Other significant changes are Node.js being able to run
| TypeScript files without any optional flags, or being able to
| use require on ES Modules. I see positive changes in the
| ecosystem in recent years.
| l9o wrote:
| I wonder if LLMs can help here to some extent. I agree with
| others that cooldowns aren't helpful if everyone is doing it.
|
| I've been working on automatic updates for some of my [very
| overengineered] homelab infra and one thing that I've found
| particularly helpful is to generate PRs with reasonable summaries
| of the updates with an LLM. it basically works by having a script
| that spews out diffs of any locks that were updated in my
| repository, while also computing things like `nix store diff-
| closures` for the before/after derivations. once I have those
| diffs, I feed them into claude code in my CI job, which generates
| a pull request with a nicely formatted output.
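| the lock-diff step itself is simple; a toy sketch (lock contents
| reduced to a name -> version mapping, which is a big
| simplification of real lockfiles) might look like:

```python
def diff_locks(old: dict, new: dict) -> list[str]:
    """Summarize version changes between two lockfile snapshots,
    where each snapshot maps package name -> pinned version."""
    lines = []
    for name in sorted(old.keys() | new.keys()):
        before, after = old.get(name), new.get(name)
        if before == after:
            continue  # unchanged: not worth surfacing in the PR
        if before is None:
            lines.append(f"added {name} {after}")
        elif after is None:
            lines.append(f"removed {name} {before}")
        else:
            lines.append(f"{name}: {before} -> {after}")
    return lines
```

| the resulting lines are what would get pasted into the LLM
| prompt alongside the closure diff.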
|
| one thing I've been thinking is to lookup all of those
| dependencies that were upgraded and have the LLM review the
| commits. often claude already seems to lookup some of the commits
| itself and be able to give a high level summary of the changes,
| but only for small dependencies where the commit hash and
| repository were in the lock file.
|
| it would likely not help at all with the xz utils backdoor, as
| IIRC the backdoor wasn't even in the git repo, but on the release
| tarballs. but I wonder if anyone is exploring this yet?
| robszumski wrote:
| We built and launched this product about 2 months ago, HN
| thread here: https://news.ycombinator.com/item?id=45439721
|
| Totally agree that AI is great for this, it will work harder
| and go deeper and never gets tired of reading code or release
| notes or migration guides. What you want instead of summaries
| is to find the breaking changes, figure out if they matter,
| then comment on _that_.
| perlgeek wrote:
| Some scattered thoughts on that:
|
| * If everybody does it, it won't work so well
|
| * I've seen cases where folks pinned their dependencies, and then
| used "npm install" instead of "npm ci", so the pinning was
| worthless. Guess they are the accidental, free beta testers for
| the rest of us.
|
| * In some ecosystems, distributions (such as Debian) do both
| additional QA and apply a cooldown. Now we're trying to
| retrofit some of that into our package managers.
| exasperaited wrote:
| > * If everybody does it, it won't work so well
|
| Indeed, this is a complex problem to solve.
|
| And the "it won't work so well" of this is probably a general
| chilling effect on trying to fix things because people won't
| roll them out fast enough anyway.
|
| This may seem theoretical but for example in websites where
| there are suppliers and customers, there's quite a chilling
| effect on any mechanism that encourages people to wait until a
| supplier has positive feedback; there are fewer and fewer
| people with low enough stakes who are willing to be early
| adopters in that situation.
|
| What this means is that new suppliers often drop out too
| quickly, abandon platforms, work around those measures in a way
| that reduces the value of trust, and worse still there's a risk
| of bad reviews because of the reviewer's Dunning-Kruger etc.
|
| I think the mechanism is important for people who really _must_
| use it, but there will absolutely be side effects that are hard
| to qualify/correct.
| layer8 wrote:
| "If everybody does it, <some outcome>" is rarely a good
| argument, because the premise rarely becomes reality.
| dugidugout wrote:
| If you're prescribing a public practice you intend to be
| taken seriously, you should contend with what happens at
| scale, especially when evaluating competing alternatives,
| where comparative scale effects become decisive. Given the
| article's push for ecosystem-wide support, there's good
| reason to probe this hypothetical.
| K0nserv wrote:
| This is not how `npm install` works. This misunderstanding is
| so pervasive. Unless you change stuff in `package.json` `npm
| install` will not update anything, it still installs based on
| package-lock.json.
|
| Quoting from the docs:
|
| > This command installs a package and any packages that it
| depends on. If the package has a package-lock, or an npm
| shrinkwrap file, or a yarn lock file, the installation of
| dependencies will be driven by that [..]
| cxr wrote:
| What everyone should all be doing is practicing the decades-old
| discipline of source control. Attacks of the form described in
| the post, where a known-good, uncompromised dependency is
| compromised at the "supply chain" level, can be 100% mitigated--
| not fractionally or probabilistically--by cutting out the
| vulnerable supply chain. The fact that people are still dragging
| their feet on this and resist basic source control is the only
| reason why this class of attack is even possible. That vendoring
| has so many other benefits and solves other problems is even more
| reason to do so.
|
| Stacking up more sub-par tooling is not going to solve anything.
|
| Fortunately this is a problem that doesn't even have to exist,
| and isn't one that anyone falls into naturally. It's a problem
| that you have to actively opt into by taking steps like adding
| things to .gitignore to exclude them from source control,
| downloading and using third-party tools in a way that introduces
| this and other problems, et cetera--which means you can avoid all
| of it by simply not taking those extra steps.
|
| (Fun fact: on a touch-based QWERTY keyboard, the gesture to input
| "vendoring" by swiping overlaps with the gesture for
| "benefitting".)
| dbdr wrote:
| Doesn't vendoring solve the supply chain issue in the same way
| as picking a dependency version and never upgrading would?
| (especially if your package manager includes a hash of the
| dependency in a lock file)
| cxr wrote:
| Mm, no.
|
| The practice I described is neither "never upgrading" your
| dependencies nor predicated upon that. And your comment
| ignores that there's an entire other class of problem I
| alluded to (reproducibility/failure-to-reproduce) that
| normal, basic source control ("vendoring") sidesteps almost
| entirely, if not entirely in the majority of cases. That,
| too, is without addressing the fact that what you're tacitly
| treating as bedrock is not, in fact, bedrock (even if it can
| be considered the status quo within some subculture).
|
| The problem of "supply chain attacks" (which, to reiterate,
| is coded language for "not exercising source control"--or,
| "pretending to, but not actually doing it") is a problem that
| people have to _opt_ _in_ to.
|
| It takes _extra_ tooling.
|
| It takes _extra_ steps.
|
| You have to _actively do things_ to get yourself into the
| situation where you're susceptible to it--and then it takes
| more tooling and more effort to try and get out. In contrast,
| you can protect yourself from it +and+ solve the
| reproducibility problem all by simply doing nothing[1]--
| _not_ taking the extra steps to edit .gitignore to exclude
| third-party code (and, sometimes, _first-party_ code) from
| the SCM history, _not_ employing lock files to undo the
| consequences of selectively excluding key parts of the input
| to the build process (and your app's behavior).
|
| All these lock file schemes that language package manager
| projects have belatedly and sheepishly began to offer as a
| solution are inept attempts at doing what your base-level RCS
| already does for free (if only all the user guides for these
| package managers weren't all instructing people to subvert
| source control to begin with).
|
| Every package manager lock file format (requirements file,
| etc.) is an inferior, ad hoc, informally-specified, error-
| prone, incompatible re-implementation of half of Git.
|
| 1. <https://news.ycombinator.com/item?id=25623388>
| jhatemyjob wrote:
| That's what I do for some of my dependencies. But it's only
| for legal reasons. If licensing isn't a concern, vendoring is
| superior.
| callamdelaney wrote:
| Anything to save us from not being able to apply because of a CVE
| which is only relevant if you're doing something which we don't.
| pimlottc wrote:
| This assumes that most exploits are discovered by pro-active
| third-party security vendors, instead of being noticed in
| deployed projects. Is this actually true?
| woodruffw wrote:
| > Is this actually true?
|
| I don't know, but it's the claimed truth from a lot of vendors!
| The value proposition for a lot of supply chain security
| products is a _lot_ weaker if their proactive detection isn't
| as strong as claimed.
| nicoburns wrote:
| I would like to see a variant of this that is based on a manual
| review/audit process rather than a time-based cooldown.
|
| Something like, upgrade once there are N independent positive
| reviews AND less than M negative reviews (where you can configure
| which people are organisations you trust to audit). And of course
| you would be able to audit dependencies yourself (and make your
| review available for others).
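| A sketch of that gating rule (auditor names and thresholds are
| made up; a real system would also need signed reviews and a
| distribution mechanism):

```python
def should_adopt(reviews: list[tuple[str, bool]],
                 trusted: set[str],
                 min_positive: int = 3,
                 max_negative: int = 0) -> bool:
    """Gate a dependency upgrade on audits instead of elapsed
    time. `reviews` is (auditor, is_positive) pairs; only
    auditors you have chosen to trust are counted."""
    positive = sum(1 for who, ok in reviews if who in trusted and ok)
    negative = sum(1 for who, ok in reviews if who in trusted and not ok)
    return positive >= min_positive and negative <= max_negative
```

| With the defaults above, a single negative review from a trusted
| auditor is enough to block the upgrade until it is resolved.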
| testplzignore wrote:
| I've been wanting something like this for years. It's simply
| impossible for millions of companies to individually review the
| thousands of updates made to their thousands of dependencies
| every day.
|
| Imagine a world where every software update has hundreds of
| signoffs from companies across the industry. That is achievable
| if we work together. For only a few minutes a day, you too can
| save a CVSS 10.0 vulnerability from going unpatched :)
| DrScientist wrote:
| The thing I find most odd about the constant pressure to update
| to the most recent (and implied best) version is the implicit
| belief that software gets uniformly better with each release.
|
| Bottom line: those security bugs are not all from version 1.0,
| and when you update you may well just be swapping known bugs for
| unknown bugs.
|
| As has been said elsewhere - sure monitor published issues and
| patch if needed but don't just blindly update.
| switchbak wrote:
| I remember this used to actually be the case, but that was many
| moons ago when you'd often wait a long time between releases.
| Or maybe the quality bar was lower generally, and it was just
| easier to raise it?
|
| These days it seems most software just changes mostly around
| the margins, and doesn't necessarily get a whole lot better.
| Perhaps this is also a sign I'm using boring and very stable
| software which is mostly "done"?
| edoceo wrote:
| I do this with my pull-thru-proxy. When new stuff shows up, I
| know, makes it easy to identify if urgent update is needed. I
| usually lag a bit. Make a dependency update a dedicated process.
| Update a few at a time, test, etc. slower but stable.
| programmertote wrote:
| I know it's impossible in some software stacks and ecosystems.
| But I live mostly in the data world, so I can usually avoid
| such issues by aggressively keeping my upstream dependency list
| lean.
|
| P.S. When I was working at Amazon, I remember that a good number
| of on-call tickets were about fixing dependencies (in most of
| them are about updating the outdated Scala Spark framework--I
| believe it was 2.1.x or older) and patching/updating OS'es in our
| clusters. What the team should have done (I mentioned this to my
| manager) is to create clusters dynamically (do not allow long-
| lived clusters even if the end users prefer it that way) and
| upgrade the Spark library. Of course, we had a bunch of other
| annual and quarterly OKRs (and KPIs) to meet, so updating Spark
| got the lowest of priorities...
| jhatemyjob wrote:
| This is a decent approach, another approach is to vendor your
| dependencies and don't update them unless absolutely necessary.
| The only thing you'd need to react to is major security
| vulnerabilities.
| compumike wrote:
| There's a tradeoff and the assumption here (which I think is
| solid) is that there's more benefit from avoiding a supply chain
| attack by _blindly (by default)_ using a dependency cooldown vs.
| avoiding a zero-day by _blindly (by default)_ staying on the
| bleeding edge of new releases.
|
| It's comparing the likelihood of an update introducing a new
| vulnerability to the likelihood of it fixing a vulnerability.
|
| While the article frames this problem in terms of _deliberate,
| intentional_ supply chain attacks, I'm sure the majority of bugs
| and vulnerabilities were never supply chain attacks: they were
| just ordinary bugs introduced unintentionally in the normal
| course of software development.
|
| On the unintentional bug/vulnerability side, I think there's a
| similar argument to be made. Maybe even SemVer can help as a
| heuristic: a patch version increment is likely safer (less likely
| to introduce new bugs/regressions/vulnerabilities) than a minor
| version increment, so a patch version increment could have a
| shorter cooldown.
|
| If I'm currently running version 2.3.4, and there's a new release
| 2.4.0, then (unless there's a feature or bugfix I need ASAP), I'm
| probably better off waiting N days, or until 2.4.1 comes out and
| fixes the new bugs introduced by 2.4.0!
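| That heuristic is easy to express; a sketch (the day counts are
| arbitrary examples, and real version strings would need proper
| SemVer parsing for pre-release tags, etc.):

```python
def cooldown_days(current: str, candidate: str) -> int:
    """Pick a cooldown length from the size of the SemVer jump:
    the bigger the jump, the longer we wait before adopting it.
    The specific day counts are examples, not a recommendation."""
    cur = [int(x) for x in current.split(".")]
    new = [int(x) for x in candidate.split(".")]
    if new[0] != cur[0]:
        return 30   # major bump: wait the longest
    if new[1] != cur[1]:
        return 14   # minor bump: moderate wait
    return 7        # patch bump: least likely to regress
```

| So 2.3.4 -> 2.4.0 waits two weeks, while 2.3.4 -> 2.3.5 waits
| only one.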
| woodruffw wrote:
| Yep, that's definitely the assumption. However, I think it's
| also worth noting that zero-days, once disclosed, _do_
| typically receive advisories. Those advisories then (at least
| in Dependabot) bypass any cooldown controls, since the thinking
| is that a _known_ vulnerability is more important to remediate
| than the open-ended risk of a compromised update.
|
| > I'm sure the majority of bugs and vulnerabilities were never
| supply chain attacks: they were just ordinary bugs introduced
| unintentionally in the normal course of software development.
|
| Yes, absolutely! The _overwhelming_ majority of vulnerabilities
| stem from _normal_ accidental bug introduction -- what makes
| these kinds of dependency compromises uniquely interesting is
| how _immediately_ dangerous they are versus, say, a DoS
| somewhere in my network stack (where I'm not even sure it
| affects me).
| hinkley wrote:
| Defaults are always assumptions. Changing them usually means
| that you have new information.
| OhMeadhbh wrote:
| I'm not arguing that cooldowns are a bad idea. But I will argue
| that the article presents a simplified version of user behaviour.
| One of the reasons people upgrade their dependencies is to get
| bug fixes and feature enhancements. So there may be significant
| pressure to upgrade as soon as the fix is available, cooldowns be
| damned!
|
| If you tell people that cooldowns are a type of test and that
| until the package exits the testing period, it's not "certified"
| [*] for production use, that might help with some organizations.
| Or rather, would give developers an excuse for why they didn't
| apply the tip of a dependency's dev tree to their PROD.
|
| So... not complaining about cooldowns, just suggesting some
| verbiage around them to help contextualize the suitability of
| packages in the cooldown state for use in production. There are,
| unfortunately, several mid-level managers who are under pressure
| to close Jira tickets IN THIS SPRINT and will lean on the devs to
| cut whichever corners need to be cut to make it happen.
|
| [*] for some suitable definition of the word "CERTIFIED."
| nine_k wrote:
| There is a difference between doing
|
|     $ npm i
|     78 packages upgraded
|
| and upgrading just one dependency from 3.7.2_1 to 3.7.2_2,
| after carefully looking at the code of the bugfix.
|
| The cooldown approach makes the automatic upgrades of the
| former kind much safer, while allowing for the latter approach
| when (hopefully rarely) you actually need a fix ASAP.
| andrewla wrote:
| I'm not convinced that this added latency will help, especially
| if everyone uses it. It may protect you as long as nobody else
| uses a cooldown period, but once everyone uses one then the
| window between the vulnerability being introduced and it being
| detected will expand by a similar amount, because without
| exposure it is less likely to be found.
| cmckn wrote:
| I don't think it's bad advice, it really just depends on the
| project, its dependencies, and your attack surface. I so badly
| want this era of mindlessly ticking boxes to end. Security is a
| contact sport! "Best practices" won't get you to the promised
| land, you have to actually think critically about the details day
| to day.
| abalone wrote:
| Sure, but there's an obvious tradeoff: You're also delaying the
| uptake of fixes for zero-day vulnerabilities.
|
| The article does not discuss this tradeoff.
| woodruffw wrote:
| The article assumes that engineers have the technical
| wherewithal to know when they should manually upgrade their
| dependencies!
|
| Clearly I should have mentioned that Dependabot (and probably
| others) don't consider cooldown when suggesting security
| upgrades. That's documented here[1].
|
| [1]: https://docs.github.com/en/code-
| security/dependabot/working-...
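| (For context, a cooldown stanza in dependabot.yml looks roughly
| like the sketch below. The key names are the ones in the
| documentation linked above, but treat this as an illustrative
| example rather than a verified config; note that security
| updates bypass the cooldown regardless.)

```yaml
# .github/dependabot.yml -- sketch of a cooldown configuration
version: 2
updates:
  - package-ecosystem: "npm"
    directory: "/"
    schedule:
      interval: "daily"
    cooldown:
      default-days: 7         # baseline delay for all updates
      semver-major-days: 30   # wait longer on major bumps
      semver-minor-days: 14
      semver-patch-days: 7
```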
| awesome_dude wrote:
| Ye Olde "Cache Invalidation" problems really
|
| Instead of updating the cache of dependencies you have
| immediately, the suggestion is to use the cooldown to wait....
|
| As you point out, this means you can be left with a stale cache
| member even after a critical fix has been published.
|
| Next week's solution - have a dependency management tool that
| alerts you when critical fixes are created upstream for
| dependencies you have
|
| Followed by - now the zero day authors are publishing their
| stuff as critical fixes...
|
| Hilarity ensues
| jimbob45 wrote:
| Isn't this more or less what the Microsoft Long-Term Support
| (LTS) versus Short-Term Support (STS) is meant to do? LTS only
| receives critical updates but eschews all experimental/beta
| features. STS gets everything for the people that couldn't care
| less if their app gets hacked (e.g. apps like calculators,
| sandboxed internal tools, etc).
|
| I know Ubuntu and others do the same but I don't know what they
| call their STS equivalent.
| xg15 wrote:
| It's a good idea, but not without weak points, I think.
|
| One of the classic scammer techniques is to introduce artificial
| urgency to prevent the victim from thinking clearly about a
| proposal.
|
| I think this would be a weakness here as well: If enough projects
| adopt a "cooldown" policy, the focus of attackers would shift to
| manipulate projects into making an exception for "their"
| dependency and install it before the regular cooldown period
| elapsed.
|
| How to do that? By playing the security angle once again: An
| attacker could make a lot of noise about how a new critical
| vulnerability was discovered in their project and every dependant
| should upgrade to the emergency release as quickly as possible,
| or else - with the "emergency release" then being the actually
| compromised version.
|
| I think a lot of projects could come under pressure to upgrade
| if the perceived vulnerability seems imminent and the only
| argument against upgrading is some generic cooldown policy.
| __MatrixMan__ wrote:
| Along those lines: If you're packaging an exploit, it's
| probably best to fix a bug while you're at it. That way people
| who want to remove their ugly workarounds will be motivated to
| violate the dependency cooldown.
| mewpmewp2 wrote:
| How would they create that noise?
| xg15 wrote:
| Depends on the level of infiltration I guess. If the attacker
| managed to get themselves into a trusted position, as with
| the XZ backdoor, they could use the official communication
| channels of the project and possibility even file a CVE.
|
| If it's "only" technical access, it would probably be harder.
| andix wrote:
| If they file a CVE, they will draw a lot of attention from
| experts to the project. Even from people who have never heard
| of this package before.
| gbin wrote:
| Feels like the tragedy of the commons: I don't want to look at
| the change, I don't want to take responsibility, somebody else
| will take care of it, I just have to wait.
|
| OK, if this is such amazing advice and the entire ecosystem does
| it: just wait... then what? Do we wait even longer, to be sure
| someone else is affected first?
|
| Every time I see people saying you need to wait to upgrade, it
| feels like you are accumulating tech debt: the longer you wait,
| the more painful the upgrade will be. Just upgrade incrementally
| and make sure you have mitigations like zero trust or monitoring
| to cut off any weird behavior early.
| tempestn wrote:
| You're not taking on any meaningful tech debt by waiting a week
| after a new version goes public to adopt it. As the OP says,
| there are services that scan popular open source tools for
| vulnerabilities as soon as they are released; even if a large
| percentage of the user base is waiting a week to update, many
| will still be caught in that period. And for various reasons
| some will still upgrade immediately.
| bongodongobob wrote:
| This is just completely wrong. If you are talking about a
| sizeable number of devices, you're not getting anything updated
| immediately even if you wanted to. You roll out to groups over
| a period of time because you don't want to break everything if
| there are unintended consequences. Your personal device? Sure
| whatever, but any fleet of devices absolutely does not get
| immediate updates across the board.
| catlifeonmars wrote:
| You're implicitly assuming that it's exposure to downstream
| consumers that causes the malicious packages to be discovered,
| but we haven't actually seen that in the last couple of major
| supply chain attacks. Instead it just buys time for the
| maintainers to undo the damage.
| andix wrote:
| Even if fewer consumers notice a compromise and report it, it
| still gives additional time for security researchers to analyze
| the packages, and for maintainers to notice themselves that they
| got compromised.
|
| There are a lot of companies out there that scan and analyze
| packages. Maintainers might notice a compromise because a new
| release was published that they didn't authorize. Or just during
| development, by getting all their bitcoin stolen ;)
| BrenBarn wrote:
| The culture of constant updates is a cancer on the software
| world.
| ChrisMarshallNY wrote:
| I'm an advocate for safe dependency usage.
|
| I can't, in good conscience, say "Don't use dependencies," which
| solves a lot of problems, but I _can_ say "Let's be careful out
| there," to quote Michael Conrad.
|
| I strongly suspect that a lot of dependencies get picked because
| they have a sexy Website, lots of GH stars and buzz, and cool
| swag.
|
| I tend to research the few dependencies that I use. I don't
| depend lightly.
|
| I'm also fortunate to be in the position where I don't _need_ too
| many. I am quite aware that, for many stacks, there's no choice.
| jrowen wrote:
| _A "cooldown" is exactly what it sounds like: a window of time
| between when a dependency is published and when it's considered
| suitable for use._
|
| My understanding of a cooldown, from video games, is a period of
| time after using an ability where you can't use it again. When
| you're firing a gun and it gets too hot, you have to wait while
| it cools down.
|
| I was trying to apply this to the concept in the article and it
| wasn't quite making sense. I don't think "cooldown" is really
| used when taking a pie out of the oven, for example. I would call
| this more a trial period or verification window or something?
| largbae wrote:
| Wouldn't this be a great use case of any agentic AI coding
| solution? A background agent scanning code/repo/etc and making
| suggestions both for and against dependency updates?
|
| Copilot seems well placed with its GitHub integration here, it
| could review dependency suggestions, CVEs, etc and make pull
| requests.
| andix wrote:
| One reason for cooldowns is not mentioned: maintainers often
| notice by themselves they got compromised.
|
| The attacker will try to figure out when they are the least
| available: during national holidays, when they sleep, during
| conferences they attend, when they are on sick leave, personal
| time off, ...
|
| Many projects have only a few people, or even a single person,
| who is going to notice. They are often from the same country
| (time zone) or even work at the same company (they might all
| attend the same conference or company retreat weekend).
| kazinator wrote:
| Cooldowns won't do anything if it takes wide deployment in order
| for the problem to be discovered.
|
| If the code just sits there for a week without anyone looking at
| it, and is then considered cooled down just due to the passage of
| time, then the cool down hasn't done anything beneficial.
|
| A form of cooldown that would actually help mitigate problems is
| a gradual rollout. The idea is that the published change is
| somehow not visible to all downstreams at the same time.
|
| Every downstream consumer declares a delay factor. If your delay
| factor is 15 days, then you see all new change publications 15
| days later. If your delay factor is 0 days, you see everything as
| it is published, immediately. Risk-averse organizations configure
| longer delay factors.
|
| This works because the risk-takers get hit with the problem,
| which then becomes known, protecting the risk-averse from being
| affected. Bad updates are scrubbed from the queue so those who
| have not yet received them due to their delay factor will not see
| those updates.
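| The per-consumer delay factor described above can be sketched in
| a few lines (the function and field names here are my own,
| purely illustrative; real registries would need yank/scrub
| support on the server side):

```python
from datetime import datetime, timedelta

def visible_releases(releases, delay_days, now):
    """Return what a consumer with the given delay factor sees at `now`.

    releases: list of dicts with 'name', 'published' (datetime), and
    'yanked' (bool). A release is visible only once it is older than
    the consumer's delay, and yanked (scrubbed) releases are never
    shown at all.
    """
    cutoff = now - timedelta(days=delay_days)
    return [
        r for r in releases
        if r["published"] <= cutoff and not r["yanked"]
    ]

now = datetime(2025, 11, 21)
releases = [
    {"name": "pkg-1.0", "published": datetime(2025, 11, 1), "yanked": False},
    # A bad update: the risk-takers hit it first, it got scrubbed.
    {"name": "pkg-1.1", "published": datetime(2025, 11, 18), "yanked": True},
    {"name": "pkg-1.2", "published": datetime(2025, 11, 20), "yanked": False},
]

# Delay 0 sees everything still live; delay 15 never saw the bad
# release at all, because it was scrubbed during the delay window.
print([r["name"] for r in visible_releases(releases, 0, now)])   # pkg-1.0, pkg-1.2
print([r["name"] for r in visible_releases(releases, 15, now)])  # pkg-1.0
```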
| alphazard wrote:
| For some reason everyone wants to talk about all the solutions to
| supply chain attacks except designing languages to avoid them in
| the first place.
|
| Austral[0] gets this right. I'm not a user, just memeing a good
| idea when I see it.
|
| Most languages could be changed to be similarly secure. No global
| mutable state, no system calls without capabilities, no artisanal
| crafting of pointers. All the capabilities come as tokens or
| objects passed in to main, and they can be given out down the
| call tree as needed. It is such an easy thing to do at the
| language level, and it doesn't require any new syntax, just a new
| parameter in main, and the removal of a few bad ideas.
|
| [0] https://austral-lang.org/
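| Python can't enforce this at the language level the way Austral
| does, but the shape of the idea looks something like the sketch
| below (all names are illustrative, not Austral's actual API):

```python
# Sketch of capability-passing: instead of every module being able
# to import open()/sockets globally, code only gets the effects it
# was explicitly handed from main().

class FileReadCapability:
    """A token granting read access under a single directory."""
    def __init__(self, root):
        self.root = root

    def read(self, relpath):
        with open(f"{self.root}/{relpath}") as f:
            return f.read()

def parse_config(text):
    # A "pure" dependency: it takes no capability parameter, so (by
    # convention here, by the type system in a capability-safe
    # language) it cannot touch the filesystem or network at all.
    return dict(line.split("=", 1) for line in text.splitlines() if "=" in line)

def load_settings(files):
    # Needs filesystem access, so it must be handed the capability.
    return parse_config(files.read("app.conf"))

def main():
    # All capabilities are created once, at the program's entry
    # point, and flow down the call tree from here.
    files = FileReadCapability("/etc/myapp")
    return load_settings(files)
```

| A compromised transitive dependency that only exports pure
| functions like parse_config has nothing to exfiltrate with.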
| kachapopopow wrote:
| pick your poison:
|
| - you are vulnerable for 7 days because of a now public update
|
| - you are vulnerable for x (hours/days) because of a supply chain
| attack
|
| I think the answer is rather simple: subscribe to a vulnerability
| feed, evaluate & update. The number of times automatic updates
| are necessary is near zero: as someone who has run libraries that
| were at times 5 to 6 years out of date, exposed to the internet,
| I never had a single event of compromise, and it's not like these
| were random services, they were viewed by hundreds of thousands
| of unique addresses. There were only 3 times in the last 4 years
| where I had to perform updates because a vulnerability in a
| publicly exposed service actually affected me.
|
| Okay, the never-being-compromised part is a lie because of PHP,
| it's always PHP (the monero miner I am sure everyone is familiar
| with). The solution for that was to stop using PHP and associated
| software.
|
| Another one I had problems with was CveLab (GitLab, if you
| couldn't tell): there have been so many critical updates pointing
| to highly exploitable CVEs that I decided to simply migrate off
| it.
|
| In conclusion, in my experience avoiding bad software is just as
| important as updates, since it lowers the need for quick and
| automated actions.
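| The "subscribe, evaluate & update" flow above can be as little as
| a filter over an advisory feed; a sketch with made-up feed
| entries (field names are OSV-like but simplified, not a real
| API):

```python
# Decide which advisories from a feed actually demand an
# out-of-cooldown update: only those that hit a package we run,
# at a version we actually have installed.

INSTALLED = {"lodash": "4.17.20", "left-pad": "1.3.0"}

def needs_action(advisory, installed):
    pkg = advisory["package"]
    if pkg not in installed:
        return False
    return installed[pkg] in advisory["affected_versions"]

feed = [
    {"id": "ADV-1", "package": "lodash",
     "affected_versions": ["4.17.20"], "severity": "HIGH"},
    {"id": "ADV-2", "package": "express",
     "affected_versions": ["4.18.0"], "severity": "CRITICAL"},
    {"id": "ADV-3", "package": "left-pad",
     "affected_versions": ["1.0.0"], "severity": "LOW"},
]

actionable = [a["id"] for a in feed if needs_action(a, INSTALLED)]
print(actionable)  # only ADV-1 affects an installed version
```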
___________________________________________________________________
(page generated 2025-11-21 23:00 UTC)