[HN Gopher] We shouldn't have needed lockfiles
___________________________________________________________________
We shouldn't have needed lockfiles
Author : tobr
Score : 99 points
Date : 2025-08-06 15:33 UTC (7 hours ago)
(HTM) web link (tonsky.me)
(TXT) w3m dump (tonsky.me)
| ratelimitsteve wrote:
| anyone find a way to get rid of the constantly shifting icons at
| the bottom of the screen? I'm trying to read and the motion keeps
| pulling my attention away from the words toward the dancing
| critters.
| foobarbecue wrote:
| $("#presence").remove()
|
| And yeah, I did that right away. Fun for a moment but extremely
| distracting.
| vvillena wrote:
| Reader mode.
| karmakurtisaani wrote:
| Agreed. It's an absolutely useless feature for me to see as
| well.
| trinix912 wrote:
| Block ###presence with UBlock.
| bencevans wrote:
| https://times.hntrends.net/story/44813397
| zahlman wrote:
| I use NoScript, which catches all of these sorts of things by
| default. I only enable first-party JS when there's a clear good
| reason why the site should need it, and third-party JS
| basically never beyond NoScript's default whitelist.
| yawaramin wrote:
| https://github.com/t-mart/kill-sticky
| Joker_vD wrote:
| NPM has, starting with version 5.1.0, an absolutely lovely
| feature where it simply ignores the package-lock.json file
| altogether. Or to be more precise, "npm install" regenerates
| package-lock.json based on package.json. What's the point of "npm
| upgrade" then? Eh.
| thunderfork wrote:
| You can use `npm ci` for "don't touch the lockfile; fail if
| the lockfile and package.json are out of sync".
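|
| A minimal CI sketch of that flow (commands only, package
| names elided); `npm ci` installs exactly what
| package-lock.json records and errors out instead of
| regenerating it:
|
|   # on a dev machine: resolve ranges, write package-lock.json
|   npm install
|   git add package.json package-lock.json
|
|   # in CI / production: reproduce that exact tree, or fail
|   npm ci     # refuses to run if the lockfile is missing or out of sync
|   npm test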
| 0cf8612b2e1e wrote:
| I hate this reality.
| omnicognate wrote:
| What if your project also uses librupa, which also depends on
| liblupa? Follow the chain of reasoning from that thought, or
| maybe spend a couple of decades dealing with the horror created
| by people who didn't, and you'll get to lockfiles.
| hyperpape wrote:
| > But if you want an existence proof: Maven. The Java library
| ecosystem has been going strong for 20 years, and during that
| time not once have we needed a lockfile. And we are pulling
| hundreds of libraries just to log two lines of text, so it is
| actively used at scale.
|
| Maven, by default, does not check your transitive dependencies
| for version conflicts. To do that, you need a frustrating plugin
| that produces much worse error messages than NPM does:
| https://ourcraft.wordpress.com/2016/08/22/how-to-read-maven-....
|
| How does Maven resolve dependencies when two libraries pull in
| different versions? It does something insane.
| https://maven.apache.org/guides/introduction/introduction-to....
|
| Do not pretend, for even half a second, that dependency
| resolution is not hell in maven (though I do like that packages
| are namespaced by creators, npm shoulda stolen that).
| potetm wrote:
| The point isn't, "There are zero problems with maven. It solves
| all problems perfectly."
|
| The point is, "You don't need lockfiles."
|
| And that much is true.
|
| (Miss you on twitter btw. Come back!)
| jeltz wrote:
| You don't need package management by the same token. C is
| proof of that.
|
| Having worked professionally in C, Java, Rust, Ruby, Perl,
| PHP I strongly prefer lock files. They make it so much nicer
| to manage dependencies.
| potetm wrote:
| "There is another tool that does exactly the job of a
| lockfile, but better."
|
| vs
|
| "You can use make to ape the job of dependency managers"
|
| wat?
| jeltz wrote:
| I have worked with Maven and dependency management is a
| pain. Not much nicer than vendoring dependencies like you
| do for C. When I was first introduced to lock files it
| was amazing. It solved so many problems I had with
| vendored dependencies, CPAN and Maven.
|
| Just because thousands of programmers manage to suffer
| through your bad system every day does not make it good.
| aidenn0 wrote:
| Now you're moving the goalposts; I think lockfiles that
| are checked-in to version control are superior to Maven's
| "Let's YOLO it if your transitive dependencies conflict."
| Version ranges are _more expressive_ than single-
| versions, and when you add lockfiles you get
| deterministic builds.
| deepsun wrote:
| I don't understand how Maven's YOLO is different from
| NPM's range.
|
| If you force a transitive dependency in Maven, then yes,
| some other library may get incompatible with it. But in
| NPM when people declare a dependency as, say, ~1.2.3 they
| also don't know if they will be compatible with a future
| 1.2.4 version. They just _assume_ the next patch release
| won't break anything. Yes npm will try to find a version
| that satisfies all declarations, but library devs
| couldn't know the new version would be compatible because
| it wasn't published at that time.
|
| And my point is that it's _exactly_ the same probability
| that the next patch version is incompatible in both Maven
| and NPM. That's why NPM users are not afraid to depend on
| ~x.x or even ^x.x; they're basically YOLOing.
| eptcyka wrote:
| Yeah, npm people expect that semantic versioning will be
| abided by. Obviously, it will not work if a minor version
| bump introduces a breaking change. Obviously this is
| better than pinning the same one dependency in literally
| every package - imagine the churn and the amount of life
| lost to bumping dependencies in any given ecosystem if
| every package had to pin a specific version of a
| dependency.
|
| Ultimately, these are imperfect solutions to practical
| problems, and I know that I much prefer the semantic
| versioning and lockfile approach to whatever the java
| people are into.
| aidenn0 wrote:
| > I don't understand how Maven's YOLO is different from
| NPM's range.
|
| The person who wrote the range selected a range that they
| deem likely to work.
|
| I don't use NPM, but in Python it definitely happens that
| you see e.g.: foo >= 0.3.4, <= 0.5.6
|
| Which can save a lot of headaches early on for packages
| that use ZeroVer[1]
|
| 1: https://0ver.org/
| beart wrote:
| From my experience, this is a self-policing issue in the
| npm ecosystem. When major packages break semver, the
| maintainers take a ton of heat. When minor packages do
| it, they quickly fall out of the ecosystem entirely. It's
| not "YOLO"ing, but rather following the ecosystem
| conventions.
|
| But anyway.. isn't that exactly the purpose of lock
| files? If you don't trust the semver range, it shouldn't
| matter because every `npm ci` results in the same package
| versions.
| cogman10 wrote:
| Maven builds are deterministic (so long as you don't have
| SNAPSHOT dependencies). The version resolution is insane
| but deterministic. You'll only break that determinism if
| you change the dependencies.
|
| That's precisely because maven doesn't support version
| ranges. Maven artifacts are also immutable.
|
| Maven also supports manual overrides for when the insane
| resolution strategy fails: that's the
| <dependencyManagement> section.
| aidenn0 wrote:
| Lockfile builds are _also_ deterministic. You only break
| that determinism if you change the lockfile.
| deepsun wrote:
| I typically just add an <exclusion> to one of my top-level
| dependencies, so Maven picks it up from another one.
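|
| A sketch of that pattern in pom.xml (coordinates
| hypothetical): exclude the transitive copy you don't want,
| and the version declared elsewhere in the tree wins.
|
|   <dependency>
|     <groupId>com.example</groupId>
|     <artifactId>libuseful</artifactId>
|     <version>2.1.1</version>
|     <exclusions>
|       <exclusion>
|         <!-- stop libuseful from dragging in its own liblupa -->
|         <groupId>com.example</groupId>
|         <artifactId>liblupa</artifactId>
|       </exclusion>
|     </exclusions>
|   </dependency>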
| javanonymous wrote:
| You can use version ranges with a Maven plugin:
|
| https://maven.apache.org/enforcer/enforcer-rules/versionRang...
| hyperpape wrote:
| I think Maven's approach is functionally lock-files with
| worse ergonomics. You only get the dependency versions the
| libraries you use declare, so you're stuck waiting for those
| libraries to update.
|
| As an escape hatch, you end up doing a lot of exclusions and
| overrides, basically creating a lockfile smeared over your
| pom.
|
| P.S. Sadly, I think enough people have left Twitter that it's
| never going to be what it was again.
| potetm wrote:
| Of course it's functionally lock files. They do the same
| thing!
|
| There's a very strong argument that manually managing deps
| > auto updating, regardless of the ergonomics.
|
| P.S. You're right, but also it's where the greatest
| remnant remains. :(
| shadowgovt wrote:
| I fear it says something unfortunate about our entire
| subculture if the greatest remnant remains at the Nazi
| bar. :(
|
| (To be generous: it might be that we didn't build our own
| bar the moment someone who is at least Nazi-tolerant
| started sniffing around for the opportunity to purchase
| the deed to the bar. The big criticism might be "we, as a
| subculture, aren't punk-rock enough.")
| KerrAvon wrote:
| JFC, get off Twitter. It's a Nazi propaganda site and you
| are going to be affected by that even if you think you're
| somehow immune.
| Alupis wrote:
| > P.S. Sadly, I think enough people have left Twitter that
| it's never going to be what it was again.
|
| Majority of those people came back after a while. The
| alternatives get near-zero engagement, so it's just
| shouting into the wind. For the ones that left over
| political reasons, receiving near-zero engagement takes all
| the fun out of posting... so they're back.
| stack_framer wrote:
| I'd be willing to bet that 95% of my "followers" on
| Twitter are bots. So I get near-zero engagement, or
| engagement that is worth zero.
| Karrot_Kream wrote:
| When I used to lead a Maven project I'd take dependency-upgrade
| tickets that would just be me bumping up a package version then
| whack-a-moling overrides and editing callsites to make
| dependency resolution not pull up conflicting packages until it
| worked. Probably lost a few days a quarter that way. I even
| remember the playlists I used to listen to when I was doing
| that work (:
|
| Lockfiles are great.
| RobRivera wrote:
| > I even remember the playlists I used to listen to when I
| was doing that work (:
|
| I'm a big fan of anything Aphex Twin for these types of
| sessions.
| yawaramin wrote:
| How do lockfiles solve this problem? You would still have
| dependency-upgrade tickets and whack-a-mole, no? Or do you
| just never upgrade anything?
| chowells wrote:
| The difference is that the data is centralized with a
| single source of truth, and you have tools for working with
| it automatically. It doesn't mean lockfiles are cheap to
| update, but it does mean it's a much more streamlined
| process when it's time.
| yawaramin wrote:
| The data is also centralized without lockfiles
| though...it's in the package spec file itself, where all
| versions can be upgraded together. If you are saying the
| only difference is some tooling for automation, that's a
| temporary problem, not a fundamental one.
| growse wrote:
| Without a lock file your transitive dependencies are....
| by definition not centralised?
|
| Or have I misunderstood?
| chowells wrote:
| By "centralized" I mean a single file that contains all
| transitive dependency version information and nothing
| else. This is very different from having to recursively
| examine tiny portions of your dependencies' package specs
| to find all transitive dependencies.
| Muromec wrote:
| you press a button which triggers a pipeline which does npm
| update on all root dependencies and produces a new
| lockfile, then creates a PR you need to approve. creating a
| PR triggers running all the tests you bothered to write to
| also flag things that didn't go well automagically.
| yawaramin wrote:
| Is this a joke? What's the difference between this and
| pressing a button which triggers a pipeline which does
| npm update on all root dependencies and produces a new
| package.json file which you approve in a PR? Is the only
| difference that some convenient tooling already exists so
| you don't have to update each root dependency by hand?
| crote wrote:
| A package spec defines your _desires_ for your
| dependencies, a lockfile defines a _resolution_ of those
| desires.
|
| For example, as a developer I want Spring to stay on
| 6.3.x and not suddenly jump to 6.4 - as that is likely to
| break stuff. I do not care whether I get 6.3.1 or 6.3.6,
| as they are quite unlikely to cause issues. I do not care
| the _slightest_ what version of libfoobar I get 5
| dependencies down the line.
|
| However, I do _not_ want packages to suddenly change
| versions between different CI runs. Breakage due to minor
| version bumps is unlikely, but it _will_ happen. That
| kind of stuff is only going to cause noise when a rebuild
| of a PR causes it to break with _zero_ lines changed, so
| you only want version upgrades to happen specifically
| when you ask for them. On top of that there's the risk
| of supply chain attacks, where pulling in the absolute
| latest version of every single package isn't exactly a
| great idea.
|
| The package spec allows me to define "spring at 6.3.x",
| the lockfile stores that we are currently using "spring
| 6.3.5, libfoobar 1.2.3". I ask it to look for upgrades,
| it resolves to "spring 6.3.6, libfoobar 1.4.0". There's
| also a "spring 6.4.0" available, but the spec says that
| we aren't interested so it gets ignored. All tests pass,
| it gets merged, and we'll stay at those versions until we
| _explicitly_ ask for another upgrade.
|
| The whole flow exists for things which _aren't_ root
| dependencies. The major versions of those are trivial to
| keep track of manually, and you'll only have a handful of
| them. It's all the minor versions and downstream
| dependencies which are the issue: tracking them all
| manually is a nightmare, and picking whatever happens to
| be the latest version at time-of-build is a massive no-
| no.
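|
| As a concrete sketch (names, versions, and hashes
| hypothetical or truncated; the comments are annotations, not
| valid JSON), the npm version of that split looks roughly
| like:
|
|   // package.json - the desire: libspring 6.3.x (>= 6.3.1)
|   { "dependencies": { "libspring": "~6.3.1" } }
|
|   // package-lock.json (excerpt) - the resolution in use today
|   {
|     "packages": {
|       "node_modules/libspring": {
|         "version": "6.3.5", "integrity": "sha512-..."
|       },
|       "node_modules/libfoobar": {
|         "version": "1.2.3", "integrity": "sha512-..."
|       }
|     }
|   }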
| Muromec wrote:
| The difference is transitive dependencies with ranges,
| which results in less frequent updates of direct
| dependencies. And tooling of course, too.
| hyperpape wrote:
| I think the difference is that since libraries do not
| specify version ranges, you must manually override their
| choices to find a compatible set of dependencies.
|
| The solution is version ranges, but this then necessitates
| lockfiles, to avoid the problem of uncontrolled upgrades.
|
| That said, there's an option that uses version ranges, and
| avoids nondeterminism without lockfiles:
| https://matklad.github.io/2024/12/24/minimal-version-selecti....
|
| Note: maven technically allows version ranges, but they're
| rarely used.
| arcbyte wrote:
| I have YEARS of zero problems with Maven dependencies. And yet
| I can't leave a Node project alone for more than a month without
| immediately encountering transitive dependency breakage that
| takes days to resolve.
|
| Maven is dependency heaven.
| nemothekid wrote:
| That's a problem with node's culture, not with lockfiles.
| I've never experienced the level of bitrot Node suffers from
| in Rust, Ruby, Go, or PHP, which do have lockfiles.
| creesch wrote:
| You might not have issues, but they definitely happen.
| Especially once you bring in heavy frameworks like Spring,
| which for some reason ship with a ton of dependencies, some
| of which are surprisingly outdated.
|
| I had this happen with JUnit after a JDK upgrade. We needed
| to update to a newer major version of JUnit to match the new
| JDK, so we updated the test code accordingly. But then things
| broke. Methods were missing, imports couldn't be resolved,
| etc. Turned out something else in the dependency tree was
| still pulling in JUnit 4, and Maven's "nearest-wins" logic
| just silently went with the older version. No error, no
| warning. Just weird runtime/classpath issues. That something
| else turned out to be Spring, which for some odd reason was
| pulling in an ancient version of JUnit 4 as well.
|
| And yeah, you can eventually sort it out, maybe with mvn
| dependency:tree and a lot of manual overrides, but it is a
| mess. And Maven still doesn't give you anything like a
| lockfile to consistently reproduce builds over time. That's
| fine if your whole org pins versions very strictly, but it's
| naive to claim it "just works" in all cases. Especially
| because versions often don't get pinned that strictly, and it
| is easy to set things up in such a way that you think you
| have pinned a version when that isn't the case. Really fun
| stuff...
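|
| For reference, a rough sketch of that diagnosis workflow
| (coordinates are the usual junit:junit ones, flags as I
| remember them):
|
|   # print the resolved tree, filtered to the suspect artifact
|   mvn dependency:tree -Dincludes=junit:junit
|
|   # or dump everything, including omitted conflict losers
|   mvn dependency:tree -Dverbose > tree.txt
|   grep -n junit tree.txt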
| Bjartr wrote:
| Seems like there's room then in the Maven ecosystem for a
| plugin that does what maven-enforcer-plugin does, but which
| just looks at a lockfile to make its decisions.
| adrianmsmith wrote:
| If you use two dependencies, and one requires Foo 1.2.3 and the
| other Foo 1.2.4 then 99% of the time including either version
| of Foo will work fine. (I was a Java developer and used Maven
| for about 10 years.)
|
| For those times where that's not the case, you can look at the
| dependency tree to see which is included and why. You can then
| add a <dependency> override in your pom.xml file specifying the
| one you want.
|
| It's not an "insane" algorithm. It gives you predictability:
| whatever you write in your pom.xml overrides whatever
| dependency versions your dependencies require, and you can
| update your pom.xml whenever you need to.
|
| And because pom.xml is hand-written there are very few merge
| conflicts (as much as you'd normally find in source code), vs.
| a lock file where huge chunks change each time you change a
| dependency, and when it comes to a merge conflict you just have
| to delete the lot and redo it and hope nothing important has
| been changed.
| ffsm8 wrote:
| Depending on the dependency, you can also use shadow versions
| right? Essentially including both versions, and providing
| each dependency with its own desired version. I believe it's
| done with the Maven Shade plugin.
|
| Never used it myself though, just read about it but never had
| an actual usecase
| zdragnar wrote:
| > You can then add a <dependency> override in your pom.xml
| file specifying the one you want.
|
| Isn't that basically a crappy, hand-rolled equivalent to a
| lock file?
| Muromec wrote:
| Of course it is
| int_19h wrote:
| What happens when one requires Foo 1.0 and the other requires
| Foo 2.0, and the two are incompatible on ABI level?
| Muromec wrote:
| Then you sit and cry of course
| xg15 wrote:
| The irony is that it even has something like lockfiles as well:
| The <dependencyManagement> section:
|
| > _Dependency management - this allows project authors to
| directly specify the versions of artifacts to be used when they
| are encountered in transitive dependencies or in dependencies
| where no version has been specified._
|
| It's just less convenient because you have to manage it
| yourself.
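|
| Roughly, it looks like this (coordinates hypothetical);
| anything listed here wins whenever that artifact shows up
| transitively:
|
|   <dependencyManagement>
|     <dependencies>
|       <dependency>
|         <!-- pin the transitive liblupa for this whole build -->
|         <groupId>com.example</groupId>
|         <artifactId>liblupa</artifactId>
|         <version>0.7.9</version>
|       </dependency>
|     </dependencies>
|   </dependencyManagement>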
| oftenwrong wrote:
| To add: if you would like to avoid depending on Maven's
| dependency mediation behaviour, then a useful tool is Maven
| Enforcer's dependencyConvergence rule.
|
| https://maven.apache.org/enforcer/enforcer-rules/index.html
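|
| A sketch of the usual wiring for that rule (plugin version
| omitted):
|
|   <plugin>
|     <groupId>org.apache.maven.plugins</groupId>
|     <artifactId>maven-enforcer-plugin</artifactId>
|     <executions>
|       <execution>
|         <id>enforce-convergence</id>
|         <goals><goal>enforce</goal></goals>
|         <configuration>
|           <rules>
|             <!-- fail if two versions of one artifact are resolved -->
|             <dependencyConvergence/>
|           </rules>
|         </configuration>
|       </execution>
|     </executions>
|   </plugin>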
| kemitchell wrote:
| npm got around to `@{author}/{package}` and `@{org}/{package}`
| beyond just global `{package}`, albeit midstream, rather than
| very early on. The jargon is "scoped packages". I've seen more
| adoption recently, also with scopes for particular projects,
| like https://www.npmjs.com/package/@babel/core
| reactordev wrote:
| The issue is what happens when _libX@latest_ is updated and
| uses _libd@2.0_ but your other dependency _libA@latest_ uses
| _libd@1.3.1_? In maven, crazy things happen. Sometimes it's
| fine but if you have any kind of security, the version
| mismatch has different signatures and blows up. Ask any
| spring developer what happens when they have more than 1
| _slf4j_ in their classpath.
| simonw wrote:
| I see lockfiles as something you use for applications you are
| deploying - if you run something like a web app it's _very_
| useful to know exactly what is being deployed to production, make
| sure it exactly matches staging and development environments,
| make sure you can audit new upgrades to your dependencies etc.
|
| This article appears to be talking about lockfiles for libraries
| - and I agree, for libraries you shouldn't be locking exact
| versions because it will inevitably play havoc with other
| dependencies.
|
| Or maybe I'm missing something about the JavaScript ecosystem
| here? I mainly understand Python.
| kaelwd wrote:
| The lockfile only applies when you run `npm install` in the
| project directory, other projects using your package will have
| their own lockfile and resolve your dependencies using only
| your package.json.
| aidenn0 wrote:
| I think you missed the point of the article. Consider
| Application A, that depends on Library L1. Library L1 in turn
| depends on Library L2:
|
| A -> L1 -> L2
|
| They are saying that A should not need a lockfile because it
| should specify a single version of L1 in its dependencies (i.e.
| using an == version check in Python), which in turn should
| specify a single version of L2 (again with an == version
| check).
|
| Obviously if everybody did this, then we wouldn't need
| lockfiles (which is what TFA says). The main downsides (which
| many comments here point out) are:
|
| 1. Transitive dependency conflicts would abound
|
| 2. Security updates are no longer in the hands of the app
| developers (in my above example, the developer of A is
| dependent on the developer of L1 whenever a security bug
| happens in L2).
|
| 3. When you update a direct dependency, your transitive
| dependencies may all change, making what you thought was a
| small change into a big change.
|
| (FWIW, I put these in order of importance to me; I find #3 to
| be a nothingburger, since I've hardly ever updated a direct
| dependency without it increasing the minimum version of at
| least one of its dependencies).
| hosh wrote:
| Is the article also suggesting that if there are version
| conflicts, it goes with the top level library? For example,
| if we want to use a secure version of L2, it would be
| specified at A, ignoring the version specified by L1?
|
| Or maybe I misread the article and it did not say that.
| aidenn0 wrote:
| It's maybe implied since Maven lets you do that (actually
| it uses the shallowest dependency, with the one listed
| first winning ties), but the thrust of the article seems to
| be roughly: "OMGWTFBBQ we can't use L2 0.7.9 if L1 was only
| tested with L2 0.7.8!" so I don't know how the author feels
| about that.
|
| [edit]
|
| The author confirmed that they are assuming Maven's rules
| and added it to the bottom of their post.
| yawaramin wrote:
| > Transitive dependency conflicts would abound
|
| They would be resolved by just picking the version 'closest
| to root', as explained in the article.
|
| > Security updates are no longer in the hands of the app
| developers
|
| It is, the app developers can just put in a direct dependency
| on the fixed version of L2. As mentioned earlier, this is the
| version that will be resolved for the project.
|
| > When you update a direct dependency, your transitive
| dependencies may all change, making what you thought was a
| small change into a big change.
|
| This is the same even if you use a lockfile system. When you
| update dependencies you are explicitly updating the lockfile
| as well, so a bunch of transitive dependencies can change.
| crote wrote:
| > They would be resolved by just picking the version
| 'closest to root', as explained in the article.
|
| Which is going to lead to horrible issues when that library
| isn't compatible with all your other dependencies. What if
| your app directly depends on both L1 and L2, but L1 is
| compatible with L3 1.2 ... 1.5 while L2 is compatible with
| L3 1.4 ... 1.7? A general "stick to latest" policy would
| have L1: "L3==1.5", L2: "L3==1.7" (which breaks L1 if L2
| wins). A general "stick to oldest compatible" policy would
| have L1: "L3==1.2", L2: "L3==1.4" (which breaks L2 if L1
| wins).
|
| The obvious solution would be to use L3 1.4 ... 1.5 - but
| that will _never_ happen without the app developer manually
| inspecting the transitive dependencies and hardcoding the
| solution - in essence reinventing the lock file.
|
| > It is, the app developers can just put in a direct
| dependency on the fixed version of L2. As mentioned
| earlier, this is the version that will be resolved for the
| project.
|
| And how is that going to work out in practice? Is that
| direct dependency supposed to sit in your root-level spec
| file forever? Will there be a special section for all the
| "I don't really care about this, but we need to manually
| override it for now" dependencies? Are you going to have to
| manually specify and bump it until the end of time because
| you are at risk of your tooling pulling in the vulnerable
| version? Is there going to be tooling which automatically
| inspects your dependencies and tells you when it is safe to
| drop?
|
| > This is the same even if you use a lockfile system. When
| you update dependencies you are explicitly updating the
| lockfile as well, so a bunch of transitive dependencies can
| change.
|
| The difference is that in the lockfile world any changes to
| transitive dependencies are well-reasoned. If every package
| specifies a compatibility range for its dependencies, the
| dependency management system can be reasonably sure that
| any successful resolution will not lead to issues _and_
| that you are getting the newest package versions possible.
|
| With a "closest-to-root" approach, all bets are off. A
| seemingly-trivial change in your direct dependencies can
| lead to a transitive dependency completely breaking your
| entire application, or to a horribly outdated library
| getting pulled in. Moreover, you might not even be _aware_
| that this is happening. After all, if you were keeping
| track of the specific versions of every single transitive
| dependency, you'd essentially be storing a lockfile - and
| that's what you were trying to avoid...
| lalaithion wrote:
| What if your program depends on library a1.0 and library b1.0,
| and library a1.0 depends on c2.1 and library b1.0 depends on
| c2.3? Which one do you install in your executable? Choosing one
| randomly might break the other library. Installing both _might_
| work, unless you need to pass a struct defined in library c from
| a1.0 to b1.0, in which case a1.0 and b1.0 may expect different
| memory layouts (even if the public interface for the struct is
| the exact same between versions).
|
| The reason we have dependency ranges and lockfiles is so that
| library a1.0 can declare "I need >2.1" and b1.0 can declare "I
| need >2.3" and when you depend on a1.0 and b1.0, we can do
| dependency resolution and lock in c2.3 as the dependency for the
| binary.
| tonsky wrote:
| One of the versions will be picked up. If that version doesn't
| work, you can try another one. The process is exactly the same
| Joker_vD wrote:
| > If that version doesn't work, you can try another one.
|
| And what will this look like, if your app doesn't have library
| C mentioned in its dependencies, only libraries A and B? You
| are prohibited from answering "well, just specify _all_ the
| transitive dependencies manually" because that's precisely
| what a lockfile is/does.
| tonsky wrote:
| Maven's version resolution mechanism determines which
| version of a dependency to use when multiple versions are
| specified in a project's dependency tree. Here's how it
| works:
|
| - Nearest Definition Wins: When multiple versions of the
| same dependency appear in the dependency tree, the version
| closest to your project in the tree will be used.
|
| - First Declaration Wins: If two versions of the same
| dependency are at the same depth in the tree, the first one
| declared in the POM will be used.
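|
| For example (module names hypothetical), given the tree
| below, liblupa resolves to 0.7.8: the copy two levels from
| the root beats the one three levels down.
|
|   my-app
|   +- libpupa:1.2.3
|   |  \- liblupa:0.7.8      <- depth 2: nearest definition, wins
|   \- libother:2.0.0
|      \- libmiddle:1.1.0
|         \- liblupa:0.7.9   <- depth 3: omitted for conflict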
| Joker_vD wrote:
| Well, I guess this works if one appends their newly-added
| dependencies at the end of the section in the pom.xml,
| instead of generating it alphabetically sorted just in
| time for the build.
| yawaramin wrote:
| Maven pom.xml files are maintained by hand, not by some
| code generation tool.
| deredede wrote:
| It's not "all the transitive dependencies". It's only the
| transitive dependencies you need to explicitly specify a
| version for because the one that was specified by your
| direct dependency is not appropriate for X reason.
| deredede wrote:
| Alternative answer: both versions will be picked up.
|
| It's not always the correct solution, but sometimes it is. If
| I have a dependency that uses libUtil 2.0 and another that
| uses libUtil 3.0 but neither exposes types from libUtil
| externally, or I don't use functions that expose libUtil
| types, I shouldn't have to care about the conflict.
| shadowgovt wrote:
| This points to a software best-practice: "Don't leak types
| from your dependencies." If your package depends on A,
| never emit one of A's structs.
|
| Good luck finding a project of any complexity that manages
| to adhere to that kind of design sensibility religiously.
|
| (I think the only language I've ever used that provided
| top-level support for recognizing that complexity was
| SML/NJ, and it's been so long that I don't remember exactly
| how it was done... Modules could take parameters so at the
| top level you could pass to each module what submodule it
| would be using, and _only then_ could the module emit types
| originating from the submodule because the passing-in "app
| code" had visibility on the submodule to comprehend those
| types. It was... Exactly as un-ergonomic as you think. A
| real nightmare. "Turn your brain around backwards" kind of
| software architecting.)
| deredede wrote:
| I can think of plenty situations where you really want to
| use the dependency's types though. For instance the
| dependency provides some sort of data structure and you
| have one library that produces said data structure and a
| separate library that consumes it.
|
| What you're describing with SML functors is essentially
| dependency injection I think; it's a good thing to have
| in the toolbox but not a universal solution either. (I do
| like functors for dependency injection, much more than
| the inscrutable goo it tends to be in OOP languages
| anyways)
| shadowgovt wrote:
| I can think of those situations too, and in practice this
| is done all the time (by everyone I know, including me).
|
| In theory... None of us should be doing it. Emitting raw
| underlying structures from a dependency coupled with
| ranged versioning means part of your API is under-
| specified; "And this function returns a value, the
| type of which is whatever this third party that we don't
| directly communicate with says the type is." That's hard
| to code against in the general case (but it works out
| often enough in the specific case that I think it's safe
| to do 95-ish percent of the time).
| int_19h wrote:
| It works just fine in C land because modifying a struct
| in _any_ way is an ABI breaking change, so in practice
| any struct type that is exported has to be automatically
| deemed frozen (except for major version upgrades where
| compat is explicitly not a goal).
|
| Alternatively, it's a pointer to an opaque data
| structure. But then that fact (that it's a pointer) is
| frozen.
|
| Either way, you can rely on dependencies not just pulling
| the rug from under you.
| shadowgovt wrote:
| I like this answer. "It works just fine in C land because
| this is a completely impossible story in C land."
|
| (I remember, ages ago, trying to wrap my head around
| Component Object Model. It took me awhile to grasp it in
| the abstract because, I finally realized, it was trying
| to solve a problem I'd never needed to solve before: ABI
| compatibility across closed-source binaries with
| different compilation architectures).
| Revisional_Sin wrote:
| So you need to test if the version worked yourself (e.g. via
| automated tests)? Seems better to have the library author do
| this for you and define a range.
| boscillator wrote:
| Ok, but what happens when lib-a depends on lib-x:0.1.4 and lib-b
| depends on lib-x:0.1.5, even though it could have worked with any
| lib-x:0.1.*? Are these libraries just incompatible now? Lockfiles
| don't guarantee that new versions are compatible, but it
| guarantees that if your code works in development, it will work
| in production (at least in terms of dependencies).
|
| I assume java gets around this by bundling libraries into the
| deployed .jar file. That this is better than a lock file, but
| doesn't make sense for scripting languages that don't have a
| build stage. (You won't have trouble convincing me that every
| language should have a proper build stage, but you might have
| trouble convincing the millions of lines of code already written
| in languages that don't.)
| aidenn0 wrote:
| > I assume java gets around this by bundling libraries into the
| deployed .jar file. That this is better than a lock file, but
| doesn't make sense for scripting languages that don't have a
| build stage. (You won't have trouble convincing me that every
| language should have a proper build stage, but you might have
| trouble convincing the millions of lines of code already
| written in languages that don't.)
|
| You are wrong; Maven just picks one of lib-x:0.1.4 or
| lib-x:0.1.5 depending on the ordering of the dependency tree.
| yladiz wrote:
| How do you change the order?
| adrianmsmith wrote:
| You go into your pom.xml file (bunch of <dependency>) using
| a text editor and change the order.
| Tadpole9181 wrote:
| Maven will also silently choose different minor and _major_
| versions, destroying your application. Sometimes at compile
| time, sometimes at runtime.
|
| Java dependency management is unhinged, antiquated garbage to
| anyone who has used any other ecosystem.
| shadowgovt wrote:
| > Are these libraries just incompatible now?
|
| Python says "Yes." Every environment manager I've seen, if your
| version ranges don't overlap for all your dependencies, will
| end up failing to populate the environment. Known issue; some
| people's big Python apps just break sometimes and then three or
| four open source projects have to talk to each other to un-fsck
| the world.
|
| npm says "No" but in a hilarious way: if lib-a emits objects
| from lib-x, and lib-b emits objects from lib-x, you'll end up
| with objects that all your debugging tools will tell you should
| be the same type, and TypeScript will statically tell you are
| the same type, but don't `instanceof` the way you'd expect two
| objects that are the same type should. Conclusion: `instanceof`
| is sus in a large program; embrace the duck typing (and accept
| that maybe your a-originated lib-x objects can't be passed to
| b-functions without explosions because I bet b didn't embrace
| the duck-typing).
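|
| A tiny self-contained sketch of that failure mode (no real
| packages; the factory stands in for two nested copies of
| lib-x under lib-a and lib-b):
|
|   // Pretend each nested node_modules copy of lib-x defines its own class.
|   function makeLibX() {
|     class Thing {}
|     return { Thing, make: () => new Thing() };
|   }
|
|   const libXinA = makeLibX(); // copy resolved under lib-a/node_modules
|   const libXinB = makeLibX(); // copy resolved under lib-b/node_modules
|
|   const obj = libXinA.make();
|   console.log(obj instanceof libXinA.Thing); // true
|   console.log(obj instanceof libXinB.Thing); // false - different class object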
| epage wrote:
| Let's play this out in a compiled language like Cargo.
|
| If every dependency was a `=` and cargo allowed multiple versions
| of SemVer compatible packages.
|
| The first impact will be that your build will fail. Say you are
| using `regex` and you are interacting with two libraries that
| take a `regex::Regex`. All of the versions need to align to pass
| `Regex` between yourself and your dependencies.
|
| The second impact will be that your builds will be slow. People
| are already annoyed when there are multiple SemVer incompatible
| versions of their dependencies in their dependency tree, now it
| can happen to any of your dependencies and you are working across
| your dependency tree to get everything aligned.
|
| The third impact is if you, as the application developer, need a
| security fix in a transitive dependency. You now need to work
| through the entire bubble up process before it becomes available
| to you.
|
| Ultimately, lockfiles are about giving the top-level application
| control over their dependency tree balanced with build times and
| cross-package interoperability. Similarly, SemVer is a tool for
| any library with transitive dependencies [0].
|
| [0] https://matklad.github.io/2024/11/23/semver-is-not-about-you...
| hosh wrote:
| Wasn't the article suggesting that the top level dependencies
| override transitive dependencies, and that could be done in the
| main package file instead of the lock file?
| junon wrote:
| You should not be editing your Cargo.lock file manually. Cargo
| gives you a first-class way of overriding transitive
| dependencies.
| richardwhiuk wrote:
| You can also do cargo update -p
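|
| For example (crate name and versions hypothetical), to move
| just one transitive crate to a patched release without
| touching the rest of Cargo.lock:
|
|   cargo update -p libfoo --precise 1.2.4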
| oblio wrote:
| Java is compiled, FYI.
| deepsun wrote:
| And interpreted.
|
| Some call transforming .java to .class a transpilation, but
| then a lot of what we call compilation should also be called
| transpilation.
|
| Well, Java can ALSO be AOT-compiled to machine code, which is
| getting more popular nowadays (e.g. GraalVM).
| matklad wrote:
| This scheme _can_ be made to work in the context of Cargo. You
| can have all of:
|
| * Absence of lockfiles
|
| * Absence of the central registry
|
| * Cryptographically checksummed dependency trees
|
| * Semver-style unification of compatible dependencies
|
| * Ability for the root package to override transitive
| dependencies
|
| At the cost of
|
| * minver-ish resolution semantics
|
| * deeper critical path in terms of HTTP requests for resolving
| dependencies
|
| The trick is that, rather than using crates.io as the universe
| of package versions to resolve against, you look only at the
| subset of package versions reachable from the root package. See
| https://matklad.github.io/2024/12/24/minimal-version-selecti...
| yawaramin wrote:
| > All of the versions need to align to pass `Regex` between
| yourself and your dependencies.
|
| No, they don't. As the article explains, the resolution process
| will pick the version that is 'closest to the root' of the
| project.
|
| > The second impact will be that your builds will be
| slow....you are working across your dependency tree to get
| everything aligned.
|
| As mentioned earlier, no you're not. So there's nothing to
| support the claim that builds will be slower.
|
| > You now need to work through the entire bubble up process
| before it becomes available to you.
|
| No you don't, because as mentioned earlier, the version that is
| 'closest to root' will be picked. So you just specify the
| security fixed version as a direct dependency and you get it
| immediately.
| trjordan wrote:
| There is absolutely a good reason for version ranges: security
| updates.
|
| When I, the owner of an application, choose a library (libuseful
| 2.1.1), I think it's fine that the library author uses other
| libraries (libinsecure 0.2.0).
|
| But in 3 months, libinsecure is discovered (surprise!) to be
| insecure. So they release libinsecure 0.2.1, because they're good
| at semver. The libuseful library authors, meanwhile, are on
| vacation because it's August.
|
| I would like to update. Turns out libinsecure's vulnerability is
| kind of a big deal. And with fully hardcoded dependencies, I
| cannot, without some horrible annoying work like
| forking/building/repackaging libuseful. I'd much rather libuseful
| depend on libinsecure 0.2.*, even if libinsecure isn't terribly
| good at semver.
|
| I would love software to be deterministically built. But as long
| as we have security bugs, the current state is a reasonable
| compromise.
| tonsky wrote:
| It's totally fine in Maven, no need to rebuild or repackage
| anything. You just override the version of libinsecure in your
| pom.xml and it uses the version you told it to.
| zahlman wrote:
| So you... manually re-lock the parts you need to?
| aidenn0 wrote:
| Don't forget the part where Maven silently picks one
| version for you when there are transitive dependency
| conflicts (and no, it's not always the newest one).
| deredede wrote:
| Sure, I'm happy with locking the parts I need to lock. Why
| would I lock the parts I don't need to lock?
| deredede wrote:
| What if libinsecure 0.2.1 is the version that introduces the
| vulnerability, do you still want your application to pick up
| the update?
|
| I think the better model is that your package manager let you
| do exactly what you want -- override libuseful's dependency on
| libinsecure when building your app.
| trjordan wrote:
| Of course there's no 0-risk version of any of this. But in my
| experience, bugs tend to get introduced with features, then
| slowly ironed out over patches and minor versions.
|
| I want no security bugs, but as a heuristic, I'd strongly
| prefer the latest patch version of all libraries, even
| without perfect guarantees. Code rots, and most versioning
| schemes are designed with that in mind.
| MarkusQ wrote:
| Except the only reason code "rots" is that the environment
| keeps changing as people chase the latest shiny thing.
| Moreover, it rots _faster_ once the assumption that
| everyone is going to constantly update get established,
| since it can be used to justify pushing non-working
| garbage, on the assumption "we'll fix it in an update".
|
| This may sound judgy, but at the heart it's intended to be
| descriptive: there are two roughly stable states, and both
| have their problems.
| PhilipRoman wrote:
| Slightly off topic but we need to normalize the ability to
| patch external dependencies (especially transitive ones).
| Coming from systems like Yocto, it was mind boggling to see a
| company bugging the author of an open source library to release
| a new version to the package manager with a fix that they
| desperately needed.
|
| In binary package managers this kind of workflow seems like an
| afterthought.
| eitau_1 wrote:
| nixpkgs shines especially bright in this exact scenario
| guhcampos wrote:
| The author hints very briefly that Semantic Versioning is a
| hint, not a guarantee, to which I agree - but then I think we
| should be insisting that library maintainers treat semantic
| versioning as a guarantee, and in the worst case scenario,
| boycott libraries that claim to be semantically versioned but
| don't do it in reality.
| oiWecsio wrote:
| I don't understand why major.minor.patchlevel is a "hint". It
| had been an interface contract with shared libraries written
| in C when I first touched Linux, and that was 25+ years ago;
| way before the term "semantic version" was even invented
| (AFAICT).
| michaelt wrote:
| Imagine I make a library for loading a certain format of
| small, trusted configuration files.
|
| Some guy files a CVE against my library, saying it crashes
| if you feed it a large, untrusted file.
|
| I decide to put out a new version of the library, fixing
| the CVE by refusing to load conspicuously large files. The
| API otherwise remains unchanged.
|
| Is the new release a major, minor, or bugfix release? As I
| have only an approximate understanding of semantic
| versioning norms, I could go for any of them to be honest.
|
| Some other library authors are just as confused as me,
| which is why major.minor.patchlevel is only a hint.
| shwestrick wrote:
| I like this example.
|
| The client who didn't notice a difference would probably
| call it a bugfix.
|
| The client whose software got ever-so-slightly more
| reliable probably would call it a minor update.
|
| The client whose software previously was loading large
| files (luckily) without issue would call it major,
| because now their software just doesn't work anymore.
| seniorsassycat wrote:
| Yeah, this felt like a gap in the article. You'd have to wait
| for every package to update from the bottom up before you could
| update your top levels to remove a risk (or you could patch in
| place, or override).
|
| But what if all the packages had automatic ci/cd, and
| libinsecure 0.2.1 is published, libuseful automatically tests a
| new version of itself that uses 0.2.1, and if it succeeds it
| publishes a new version. And consumers of libuseful do the
| same, and so on.
| CognitiveLens wrote:
| The automatic ci/cd suggestion sounds appealing, but at least
| in the NPM ecosystem, the depth of those dependencies would
| mean the top-level dependencies would constantly be
| incrementing. On the app developer side, it would take a lot
| of attention to figure out when it's important to update top-
| level dependencies and when it's not.
| skybrian wrote:
| Go has a deterministic package manager and handles security
| bugs by letting library authors retract versions [1]. The 'go
| get' command will print a warning if you try to retrieve a
| retracted version. Then you can bump the version for that
| module at top level.
|
| You also have the option of ignoring it if you want to build
| the old version for some reason, such as testing the broken
| version.
|
| [1] https://go.dev/ref/mod#go-mod-file-retract
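|
| Roughly, in the library author's go.mod that looks like
| (version numbers and reason hypothetical):
|
|   module example.com/libinsecure
|
|   go 1.22
|
|   // 'go get' warns anyone who still asks for the bad release.
|   retract v0.2.0 // vulnerable; fixed in v0.2.1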
| alexandrehtrb wrote:
| I completely agree.
|
| .NET doesn't have lock files either, and its dependency tree runs
| great.
|
| Using fixed versions for dependencies is a best practice, in my
| opinion.
| horsawlarway wrote:
| This is wrong. DotNet uses packages.lock.json explicitly to
| support the case where you want to be able to lock transitive
| dependencies that are specified with a range value, or several
| other edge cases that might warrant explicitly declaring
| versions that are absent from csproj or sln files.
|
| https://devblogs.microsoft.com/dotnet/enable-repeatable-pack...
|
| https://learn.microsoft.com/en-us/nuget/consume-packages/pac...
|
| Again - there's no free lunch here.
| alexandrehtrb wrote:
| Practically no one uses that.
| andix wrote:
| Lockfiles are essential for somewhat reproducible builds.
|
| If a transitive dependency (not directly referenced) updates, this
| might introduce different behavior. If you test a piece of
| software and fix some bugs, the next build shouldn't contain
| completely different versions of dependencies. This might
| introduce new bugs.
| tonsky wrote:
| > Lockfiles are essential for somewhat reproducible builds.
|
| No they are not. Fully reproducible builds have existed without
| lockfiles for decades
| its-summertime wrote:
| of distros, they usually refer to an upstream by hash
|
| https://src.fedoraproject.org/rpms/conky/blob/rawhide/f/sour...
|
| also of flathub
|
| https://github.com/flathub/com.belmoussaoui.ashpd.demo/blob/...
|
| "they are not lockfiles!" is a debatable separate topic, but
| for a wider disconnected ecosystem of sources, you can't
| really rely on versions being useful for reproducibility
| andix wrote:
| > they usually refer to an upstream by hash
|
| exactly the same thing as a lockfile
| andix wrote:
| Sure, without package managers.
|
| It's also not about fully reproducible builds, it's about a
| tradeoff to get a modern package manager (npm, cargo, ...)
| experience and also somewhat reproducible builds.
| chriswarbo wrote:
| > modern package manger (npm, cargo, ...) experience
|
| Lol, the word "modern" has truly lost all meaning. Your
| list of "modern package managers" seems to coincide with a
| list of _legacy_ tooling I wrote four years ago!
| https://news.ycombinator.com/item?id=29459209
| pluto_modadic wrote:
| ...source?
|
| show me one "decades old build" of a major project that isn't
| based on 1) git hashes 2) fixed semver URLs or 3) exact
| semver in general.
| jedberg wrote:
| The entire article is about why this isn't the case.
| andix wrote:
| It suggests a way more ridiculous fix. As mentioned by other
| comments in detail (security patches for transitive
| dependencies, multiple references to the same transitive
| dependency).
| yawaramin wrote:
| The article and various comments in this same thread have
| explained why these are not real issues because the
| resolution process picks the version 'closest to root'.
| zokier wrote:
| umm wat
|
| > "But Niki, you can regenerate the lockfile and pull in all the
| new dependencies!"
|
| > Sure. In exactly the same way you can update your top-level
| dependencies.
|
| how does updating top-level deps help with updating leaf
| packages? Is the author assuming that whenever a leaf package is
| updated, every other package in the dep chain immediately gets
| a new release? That is fundamentally impossible considering that
| the releases would need to happen serially.
| tonsky wrote:
| I updated the post, see near the bottom
| nine_k wrote:
| The author seems to miss the point of version ranges. Yes,
| specific versions of dependencies get frozen in the lock file at
| the moment of building. But the only way to _determine_ these
| specific versions is to run version resolution across the whole
| tree. The process finds out which specific versions _within the
| ranges_ can be chosen to satisfy all the version constraints.
|
| This works with minimal coordination between authors of the
| dependencies. It becomes a big deal when you have _several
| unrelated_ dependencies, each transitively requiring that
| libpupa. The chance they converge on the same exact version is
| slim. The chance a satisfying version can be found within
| specified ranges is much higher.
|
| Physical things that are built from many parts have the very same
| limitation: they need to specify tolerances to account for the
| differences in production, and would be unable to be assembled
| otherwise.
| tonsky wrote:
| Yeah but version ranges are fiction. Someone says: we require
| libpupa 0.2.0+. Sure, you can find a version in that range. But
| what if it doesn't work? How can you know that your library
| will work with all future libpupa releases in advance?
| nine_k wrote:
| More often than not things are compatible within a major
| version. Very often things are compatible within a minor
| version.
|
| Not being able to build because one thing depends on libpupa
| 1.2.34.pre5 and another, on 1.2.35 would be a worse outcome,
| on average.
| mystifyingpoi wrote:
| It reminds me of the whole mess of Angular 2+ upgrades. It
| was I believe before lockfiles in npm? Literally every new
| person joining the team had to get the node_modules handed to
| them from someone else's machine for the project to work,
| since `npm install` could never install anything working
| together.
| wpollock wrote:
| Under semver, any dependency version X.Y.* is supposed to be
| compatible with any software that was built with version
| X.Z.* when Y > Z. If not, the author of the dependency has
| broken semver.
|
| "Supposed to" being the operative phrase. This is of little
| comfort when you need version X.Y for a security fix but your
| build breaks.
|
| Note that Maven is more complex than others here have
| mentioned. In some cases, Maven compares versions lexically
| (e.g. version 1.2 is considered newer than version 1.10).
|
| Dependency management is indeed hell.
| freetonik wrote:
| In the world of Python-based end-user libraries the pinned (non-
| ranged) versions result in users being unable to use your library
| in an environment with other libraries. I'd love to lock my
| library to numpy 2.3.4, but if the developers of another library
| pin theirs to 2.3.5 then game over.
|
| For server-side or other completely controlled environments the
| only good reason to have lock files is if they are actually
| hashed and thus allow you to confirm security audits. Lock files
| without hashes do not guarantee security (depending on the
| package registry, of course, but at least in Python world (damn
| it) the maintainer can re-publish a package with an existing
| version but different content).
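|
| For illustration (pin and hash truncated/hypothetical), a
| hashed lock entry of the kind `pip install --require-hashes`
| or pip-tools consumes looks like:
|
|   # requirements.txt, generated with: pip-compile --generate-hashes
|   numpy==2.3.4 \
|       --hash=sha256:<hash of the published wheel/sdist>
|   # with --require-hashes, pip rejects any artifact whose content differs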
| tonsky wrote:
| > I'd love to lock my library to numpy 2.3.4, but if the
| developers of another library pin theirs to 2.3.5 then game
| over.
|
| Why? Can't you specify which version to use?
| spooky_deep wrote:
| > The important point of this algorithm is that it's fully
| deterministic.
|
| The algorithm can be deterministic, but fetching the dependencies
| of a package is not.
|
| It is usually an HTTP call to some endpoint that might flake out
| or change its mind.
|
| Lock files were invented to make it either deterministic or fail.
|
| Even with Maven, deterministic builds (such as with Bazel) lock
| the hashes down.
|
| This article is mistaken.
| Tainnor wrote:
| Maven artifacts are immutable, so the whole resolution is
| deterministic (even if hard to understand), unless you're using
| snapshot versions (which are mutable) or you use version ranges
| (which is rare in the Maven world).
| mystifyingpoi wrote:
| I never understood this. I can delete anything from Nexus and
| reupload something else in its place. Is this supposed
| immutability just a convention that's followed?
| spooky_deep wrote:
| If your model is that you trust Maven to never change
| anything, then sure.
|
| However, I think most people in the reproducible build space
| would consider Maven an external uncontrolled input.
| beart wrote:
| Maven artifacts are not immutable. Some maven repositories
| may prevent overwriting an already published version, but
| this is not guaranteed. I've personally seen this cause
| problems where a misconfigured CI job overwrote already
| published versions.
|
| npm used to allow you to unpublish (and maybe overwrite?)
| published artifacts, but they removed that feature after a
| few notable events.
|
| Edit: I was not quite correct. It looks like you can still
| unpublish, but with specific criteria. However, you cannot
| ever publish a different package using the same version as an
| already published package.
|
| https://docs.npmjs.com/cli/v8/commands/npm-publish?v=true
|
| https://docs.npmjs.com/policies/unpublish
| chriswarbo wrote:
| > Maven artifacts are immutable, so the whole resolution is
| deterministic
|
| Nope, Maven will grab anything which happens to have a
| particular filename from `~/.m2`, or failing that it will
| accept whatever a HTTP server gives it for a particular URL.
| It _can_ compare downloaded artifacts against a hash; but
| that's misleading, since those hashes are provided by the
| same HTTP server as the artifact! (Useful for detecting a
| corrupt download; useless for knowing anything about the
| artifact or its provenance, etc.)
|
| This isn't an academic/theoretical issue; I've run into it
| myself: https://discuss.gradle.org/t/plugins-gradle-org-serving-inco...
| horsawlarway wrote:
| This is a great example of chesterton's fence.
|
| The author of this piece doesn't understand why a top-level
| project might want control of its dependencies' dependencies.
|
| That's the flaw in this whole article, if you can't articulate
| why it's important to be able to control those... don't write an
| article. You don't understand the problem space.
|
| Semantic versioning isn't perfect, but it's more than a "hint",
| and it sure as hell beats having to manually patch (or fork) an
| entire dependency chain to fix a security problem.
| aidenn0 wrote:
| Author puts up Maven as an example of no lockfiles. Maven does
| allow a top-level project to control its transitive
| dependencies (when there is a version conflict, the shallowest
| dependency wins; the trivial version of this is if you specify
| it as a top-level dependency).
|
| I think rather that the author doesn't realize that many people
| in the lockfile world _put their lockfiles under version
| control_. Which makes builds reproducible again.
| horsawlarway wrote:
| Yes, but Maven doesn't support reproducibility (outside of
| plugins that basically haul in a lockfile). So his whole
| point is moot. (Gradle now does support it, as an aside:
| https://docs.gradle.org/current/userguide/dependency_locking...)
|
| Again - I don't think the author is aware enough of the
| problem space to be making the sort of claim that he is. He
| doesn't understand the problem lockfiles are solving, so he
| doesn't know why they exist and wants them gone...
| chesterton's fence in action.
|
| ---
|
| Directly declaring deps is great. It's so great that we'd
| like to do it for every dependency in many (arguably most)
| cases. But doing that really sort of sucks when you start
| getting into even low 10s of deps. Enter... lockfiles and the
| tooling to auto-resolve them.
| junon wrote:
| I think people forget NPM added package-lock.json for the npm@5
| release that was rushed out the door to match the next node.js
| major and was primarily to cut down on server traffic costs as
| they weren't making money from the FOSS community to sustain
| themselves.
| palotasb wrote:
| The author is perhaps presenting a good argument for
| languages/runtimes like JavaScript/Node where dependencies may be
| isolated and conflicting dependencies may coexist in the
| dependency tree (e.g., "app -> { libpupa 1.2.3 -> liblupa 0.7.8
| }, { libxyz 2.0 -> liblupa 2.4.5 }" would be fine), but the
| proposed dependency resolution algorithm...
|
| > Our dependency resolution algorithm thus is like this:
|
| > 1. Get the top-level dependency versions
|
| > 2. Look up versions of libraries they depend on
|
| > 3. Look up versions of libraries they depend on
|
| ...would fail in languages like Python where dependencies are
| shared, and the steps 2, 3, etc. would result in conflicting
| versions.
|
| In these languages, there is good reason to define dependencies
| in a relaxed way (with constraints that exclude known-bad
| versions; but without pins to any specific known-to-work version
| and without constraining only to existing known-good versions) at
| first. This way dependency resolution always involves some sort
| of constraint solving (with indeterminate results due to the
| constraints being open-ended), but then for the sake of
| reproducibility the result of the constraint solving process may
| be used as a lockfile. In the Python world this is only done in
| the final application (the final environment running the code,
| this may be the test suite in for a pure library) and the pins in
| the lock aren't published for anyone to reuse.
|
| To reiterate, the originally proposed algorithm doesn't work for
| languages with shared dependencies. Using version constraints and
| then lockfiles as a two-layer solution is a common and reasonable
| way of resolving the dependency topic in these languages.
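|
| A minimal sketch of that two-layer setup with pip-tools (the
| libpupa constraint is the article's hypothetical):
|
|       # requirements.in: open-ended constraints
|       libpupa>=1.2,<2
|
|       $ pip-compile requirements.in   # solve, write pinned
|                                       # requirements.txt
|       $ pip-sync requirements.txt     # reproduce that exact
|                                       # environment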
| hosh wrote:
| What if the top level can override the transitive dependencies?
|
| I have had to do that with Ruby apps, where libraries are also
| shared.
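|
| In Bundler that looks something like declaring the transitive
| gem directly in the top-level Gemfile, so its constraint wins
| during resolution (gem names here are just placeholders):
|
|       # Gemfile
|       gem "some_library"      # pulls in rack transitively
|       gem "rack", "~> 2.2.8"  # top-level pin constrains which
|                               # rack the resolver may choose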
| tonsky wrote:
| > would fail in languages like Python where dependencies are
| shared
|
| And yet Java and Maven exist...
| zahlman wrote:
| > Imagine you voluntarily made your build non-reproducible by
| making them depend on time. If I build my app now, I get libpupa
| 1.2.3 and liblupa 0.7.8. If I repeat the same build in 10
| minutes, I'll get liblupa 0.7.9. Crazy, right? That would be
| chaos.
|
| No; in fact it's perfectly reasonable, and at the core of what
| the author doesn't seem to get. Developers have motivations other
| than reproducibility. The entire reason we have version number
| schemes like this is so that we can improve our code while also
| advertising reasonable expectations about compatibility. If we
| have dependents, then hopefully this also improves their UX
| indirectly -- whether by taking advantage of optimizations we
| made, not encountering bugs that were actually our fault, etc.
| Similarly, if we have dependencies, we can seek to take advantage
| of that.
|
| Upgrading environments is an opportunity to test new
| configurations, and see if they're any better than what's in
| existing lockfiles.
|
| > But this is what version ranges essentially are. Instead of
| saying "libpupa 1.2.3 depends on liblupa 0.7.8", they are saying
| "libpupa 1.2.3 depends on whatever the latest liblupa version is
| at the time of the build."
|
| But also, _developers aren't necessarily using the latest
| versions of their dependencies locally anyway_. If I _did_ pin a
| version in my requirements, it'd be the one _that I tested the
| build with_, not necessarily the one that was most recently
| released at the time of the build. Not everyone runs an
| industrial-strength CI system, and for the size of lots of useful
| packages out there, they really shouldn't have to, either. (And
| in the pathological case, someone else could re-release while I'm
| building and testing!)
|
| > But... why would libpupa's author write a version range that
| includes versions that don't exist yet? How could they know that
| liblupa 0.7.9, whenever it will be released, will continue to
| work with libpupa? Surely they can't see the future? Semantic
| versioning is a hint, but it has never been a guarantee.
|
| The thing about this is that "work with [a dependency]" is not
| really a binary. New versions also fix things -- again, that's
| the main reason that new versions get released in the first
| place. Why would I keep writing the software after it's "done" if
| I don't think there's anything about it that could be fixed?
|
| For that matter, software packages break for external reasons. If
| I pin my dependency, and that dependency is, say, a wrapper for a
| third-party web API, and the company operating that website makes
| a breaking change to the API, then I just locked myself out of
| new versions of the dependency that cope with that change.
|
| In practice, there are good reasons to not need a guarantee and
| accept the kind of risk described. Lockfiles exist for those who
| do need a guarantee that their local environment will be set in
| concrete (which has other, implicit risks).
|
| I see it as much like personal finance. Yes, investments beyond a
| HISA may carry some kind of risk. This is worthwhile for most
| people. And on the flip side, you also can't predict the future
| inflation rate, and definitely can't predict what will happen to
| the price of the individual goods and services you care about
| most.
|
| > The funny thing is, these version ranges end up not being used
| anyway. You lock your dependencies once in a lockfile and they
| stay there, unchanged. You don't even get the good part!
|
| ??? What ecosystem is this author talking about? Generating a
| lockfile doesn't cause the underlying dependency metadata to
| disappear. You "get the good part" as a developer by periodically
| _regenerating a lockfile, testing the resulting environment and
| shipping the new lock_. Or as a user by grabbing a new lockfile,
| or by just choosing not to use provided lockfiles.
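|
| In npm terms, one common shape of that loop (just a sketch):
|
|       $ npm update    # re-resolve within the declared ranges,
|                       # rewriting package-lock.json
|       $ npm test      # validate the refreshed environment
|       $ git commit package-lock.json -m "bump locked deps"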
|
| > "But Niki, you can regenerate the lockfile and pull in all the
| new dependencies!" Sure. In exactly the same way you can update
| your top-level dependencies.
|
| Has the author tried both approaches, I wonder?
|
| Not to mention: the lockfile-less world the author describes
| would require _everyone_ to pin dependency versions. In practice,
| this would require dropping support for anything else in the
| metadata format. And (I did have to look it up) this appears to
| be the world of Maven that gets cited at the end (cf.
| https://stackoverflow.com/questions/44521542).
|
| I like choice and freedom in my software, thank you.
|
| > "But Niki, lockfiles help resolve version conflicts!" In what
| way? Version conflicts don't happen because of what's written in
| dependency files.
|
| Perhaps the author hasn't worked in an ecosystem where people
| routinely attempt to install new packages into existing
| environments? Or one where users don't want to have multiple
| point versions of the same dependency downloaded and installed
| locally if one of them would satisfy the requirements of other
| software? Or one where dependency graphs end up having
| "diamonds"? (Yes, there are package managers that work around
| this, but not all _programming languages_ can sanely support
| multiple versions of the same dependency in the same
| environment.)
| aidenn0 wrote:
| > No; in fact it's perfectly reasonable, and at the core of
| what the author doesn't seem to get. Developers have
| motivations other than reproducibility. The entire reason we
| have version number schemes like this is so that we can improve
| our code while also advertising reasonable expectations about
| compatibility. If we have dependents, then hopefully this also
| improves their UX indirectly -- whether by taking advantage of
| optimizations we made, not encountering bugs that were actually
| our fault, etc. Similarly, if we have dependencies, we can seek
| to take advantage of that.
|
| I'm actually with the author on this one, but checking-in your
| lockfile to version-control gets you this.
| Joker_vD wrote:
| > No; in fact it's perfectly reasonable,
|
| And this is how I once ended spending a Friday evening in a
| frantic hurry because a dependency decided to drop support for
| "old" language versions (that is, all except the two newest
| ones) in its patch-version level update. And by "drop support"
| I mean "explicitly forbid from building with language versions
| less than this one".
|
| > The entire reason we have version number schemes like this is
| so that we can improve our code while also advertising
| reasonable expectations about compatibility.
|
| Except, of course, some library authors deliberately break
| semver because they just hate it, see e.g. quote in [0],
| slightly down the page.
|
| [0] https://dmytro.sh/blog/on-breaking-changes-in-transitive-
| dep...
| jedberg wrote:
| Tangential, but what is up with all those flashing icons at the
| bottom of the page? It made it nearly unreadable.
| jofla_net wrote:
| it looks like it's showing who's reading the page, and from
| where.
|
| just my guess.
| imtringued wrote:
| I disagree with this blogpost in its entirety. Lockfiles are
| neither unnecessary, nor are they complicated. The argument
| presented against lockfiles boils down to a misrepresentation. I
| also dislike the presentation using the godawful yellow color and
| the stupid websocket gadget in the footer.
|
| The entire point of lockfiles is to let the user decide when the
| version resolution algorithm should execute and when it
| shouldn't. That's all they do and they do it exactly as promised.
| mrspuratic wrote:
| OMG is it full of yellow. And there I was wondering how one
| might manage a modem or a mailqueue without a lock file: these
| are not your father's lockfiles[1]
|
| [1] https://linux.die.net/man/1/lockfile
| hosh wrote:
| Many comments talk about how top-level and transitive
| dependencies can conflict. I think the article is suggesting you
| can resolve those by specifying them in the top-level packages
| and overriding any transitive package versions. If we are doing
| that anyway, it circles back to whether lock files are necessary.
|
| Given that, I still see some consequences:
|
| The burden for testing if a library can use its dependency falls
| back on the application developer instead of the library
| developer. A case could be made that, while library developers
| should test what their libraries are compatible with, the
| application developer has the ultimate responsibility for making
| sure everything can work together.
|
| I also see that there would need to be tooling to automate
| resolutions. If ranges are retained, the resolver needs to report
| every conflict and force the developer to explicitly specify the
| version they want at the top-level. Many package managers
| automatically pick one and write it into the lock file.
|
| If we don't have lock files, and we want it to be automatic, then
| we can have it write to the top level package manager and not the
| lock file. That creates its own problems.
|
| One of those problems comes from humans and tooling writing to
| the same configuration file. I have seen problems with that idea
| pop up -- most recently, letting letsencrypt modify nginx
| configs, and now I have to manually edit those. Letsencrypt can
| no longer manage them. Arguably, we can also say LLMs can work
| with that, but I am a pessimist when it comes to LLM
| capabilities.
|
| So in conclusion, I think the article writer's reasoning is
| sound, but incomplete. Humans don't need lockfiles, but our
| tooling needs lockfiles until it is capable of working with the
| chaos of human-managed package files.
| hosh wrote:
| If the dependencies are specified as data, such as
| package.json, or a yaml or xml file, it may be structured
| enough that tools can still manage it. Npm install has a save
| flag that lets you do that. Python dep files may be structured
| enough to do this as well.
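|
| For instance (a sketch; libpupa is the article's hypothetical
| package):
|
|       $ npm install libpupa --save-exact
|       # records the exact resolved version in package.json
|       # itself, e.g. "libpupa": "1.2.3", rather than a ^ range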
|
| If the package specification file is code and not data, then
| this becomes more problematic. Elixir specifies deps as data
| within code. Arguably, we can add code to read and write from a
| separate file... but at that point, those might as well be lock
| files.
| skybrian wrote:
| Not quite. Library authors are expected to specify _the version
| that they tested with_. (Or the oldest version that they tested
| with that works.)
|
| If it's an old release of a library then it will specify old
| dependencies, but you just have to deal with that yourself,
| because library authors aren't expected to have a crystal ball
| that tells them which future versions will be broken or have
| security holes.
| 10000truths wrote:
| When discoverability and versioning of libraries is more-or-less
| standardized (a la Cargo/PyPI/NPM), automated tooling for
| dependency resolution/freezing follows naturally. The build tools
| for Java and C historically did not have this luxury, which is
| why their ecosystems have a reputation for caring a lot about
| backwards compatibility.
| andy99 wrote:
| In case the author is reading, I can't read your article because
| of that animation at the bottom. I get it, it's cute, but it
| makes it too distracting to concentrate on the article, so I
| ended up just closing it.
| somehnguy wrote:
| I read the article but that animation was incredibly
| distracting. I don't even understand what it's for - clicking
| it does nothing. Best guess is a representation of how many
| people active on page.
| fennecbutt wrote:
| It also covers a whole 1/4 of the screen on mobile...
| Aaargh20318 wrote:
| It covers 90% of the screen on iPad
| fellowniusmonk wrote:
| I've never seen something so egregious before; it made it
| impossible to read without covering it with my hand.
|
| But I realized something by attempting to read this article
| several times first.
|
| If I ever want to write an article and reduce people's ability
| to critically engage with its argument, I should add a
| focus-pulling animation that thwarts concerted focus.
|
| It's like the blog equivalent of public speakers who ramble
| their audience into a coma.
| deepsun wrote:
| https://neal.fun/stimulation-clicker/
| IrishTechie wrote:
| > I've never seen something so egregious before
|
| Streaming comments on YouTube give it a run for its money,
| what absolute garbage.
| politelemon wrote:
| Thankfully that can be collapsed.
| masklinn wrote:
| Do you mean the live chat? Those are, appropriately, for
| live streams. They do replay afterwards as depending on the
| type of stream the video may not make complete sense
| without them (and they're easy enough to fold if they don't
| have any value e.g. premieres).
| extraduder_ire wrote:
| I think the same blog used to show you the cursor position of
| every other reader on your screen. Surprised that's been
| removed.
| masklinn wrote:
| > I've never seen something so egregious before
|
| You should check how comments work on niconico.
| mvieira38 wrote:
| Give in to the noJS movement: there's no animation and it's a
| beautiful, minimalistic site if you disable JavaScript.
| hackrmn wrote:
| So Tonsky's punishing us for leaving JavaScript enabled?
| modernerd wrote:
| I did document.querySelector('#presence').remove();
| daveidol wrote:
| Yeah I just popped into devtools and added "display: none" to
| the CSS. It was crazy distracting.
| appease7727 wrote:
| Wow, that's one of the most abhorrent web designs I've ever
| seen
| J37T3R wrote:
| In addition, the solid yellow background is another readability
| impediment.
| sethpurc wrote:
| Same here, I also closed it within a few seconds.
| ddejohn wrote:
| It's downright awful and I'm having a hard time imagining the
| author proofreading their own page and thinking "yeah, that's
| great".
|
| As an aside, I have an article in my blog that has GIFs in it,
| and they're important for the content, but I'm not a frontend
| developer by any stretch of the imagination so I'm really at
| wit's end for how to make it so that the GIFs only play on
| mouse hover or something else. If anybody reading has some
| tips, I'd love to hear them. I'm using Zola static site
| generator, and all I've done is make minor HTML and CSS tweaks,
| so I really have no idea what I'm doing where it concerns
| frontend presentation.
| yladiz wrote:
| As someone who does like tonsky's stuff sometimes: I
| immediately closed the article when I saw it. I'm less
| charitable than you: it's not cute, it's just annoying, and it
| should be able to be switched off. For me it goes into the same
| box as his "dark mode" setting but it's worse because it can't
| be disabled. Why should I, as the reader, put in effort to
| overcome something the author found "cute" just to read their
| words? It's akin to aligning the words to the right, or
| vertically: I can read it but it's so much work that I'd rather
| just not.
| nerdjon wrote:
| Agreed, and the fact that there is not an easy "x" to close it
| is even worse.
|
| If you want to do something cute and fun, whatever, it's your
| site. But if you actually want people to use your site, make it
| easy to dismiss. We already have annoying ads and this is
| honestly worse than many ads.
|
| Also, from the bio that I can barely see he writes about "UI
| Design" and... included this?
| jraph wrote:
| To all people in this sub thread: suggestion to try reader
| mode.
| hans_castorp wrote:
| On sites like that, I typically just switch to "reader view",
| which leaves only the interesting content.
| pak9rabid wrote:
| Yes, it's horrible, this idea.
| jedahan wrote:
| I wonder if it respects prefers-reduced-motion, though I don't
| know if I have that set in my browser; I do have it set in my
| OS.
| bangaladore wrote:
| Even worse, it exposes the city of all viewers within the HTML,
| even if only the country code is displayed on the webpage.
|
| Obviously, the server gets your IP when you connect, but ideally
| it doesn't share that with all visitors. This isn't as bad as
| that, but it's still concerning.
| sitkack wrote:
| In ublock origin
|
| tonsky.me##.container
| _verandaguy wrote:
| I'll also add that the "night mode" is obnoxious as hell
| anyway.
|
| Inverted colours would've been _mostly fine._ Not great, but
| mostly fine, but instead, the author went out of their way to
| add this flashlight thing that's borderline unusable?
|
| What the hell is this website?
| hn-acct wrote:
| Thanos snap it if you're using ios
| meatmanek wrote:
| 12 years later, https://alisdair.mcdiarmid.org/kill-sticky-
| headers/ is still super useful.
| jerhewet wrote:
| Reader mode. Don't leave home without it.
| rs186 wrote:
| When I saw the yellow background, I knew this was _the_ website
| where I read the Unicode article [1]. Sure enough, it is. With
| great pain I finished it.
|
| I mean, just the fact that the background is yellow is a
| terrible UX decision. Not to mention that ridiculous "dark
| mode". No, it's not funny. It's stupid and distracting.
|
| [1] https://tonsky.me/blog/unicode/
| dom96 wrote:
| The animation? For me it was the blinding yellow background
| sneak wrote:
| My favorite websites are the weird ones that make people
| complain about stuff.
| kaptainscarlet wrote:
| I somewhat agree, because the main package file (e.g.
| package.json) can act as a lock file if you pin packages to
| specific versions.
| whilenot-dev wrote:
| No tag other than _latest_ has any special significance to npm
| itself. Tags can be republished and that's why integrity
| checks should be in place. Supply chain attacks are happening
| in open source communities, sadly.
| beart wrote:
| I don't think you can republish to npm.
|
| https://docs.npmjs.com/cli/v11/commands/npm-publish
|
| > The publish will fail if the package name and version
| combination already exists in the specified registry.
|
| > Once a package is published with a given name and version,
| that specific name and version combination can never be used
| again, even if it is removed with npm unpublish.
| bunjeejmpr wrote:
| This metadata should be at the top of your source, as
| documentation.
|
| We need the metadata. Not a new container.
| wedn3sday wrote:
| I absolutely abhor the design of this site. I cannot engage with
| the content as I'm filled with a deep burning hatred of the
| delivery. Anyone making a personal site: do not do this.
| shadowgovt wrote:
| This author's approach would probably work "fine" (1) for
| something like npm, where individual dependencies also have a
| subtree of their dependencies (and, by extension, "any situation
| where dependencies are statically linked").
|
| It doesn't work at all for something like Python. In Python,
| libpupa 1.2.3 depends on liblupa 0.7.8. But libsupa 4.5.6 depends
| on liblupa _0.7.9_. Since the Python environment can only have
| one version of each module at a time, I need to decide on a
| universe in which libpupa and libsupa can both have their
| dependencies satisfied simultaneously. Version ranges give me
| multiple possible universes, and then for reproducibility (2) I
| use a lockfile to define one.
|
| (1) npm's dependencies-of-dependencies design introduces its own
| risks and sharp edges. liblupa has a LupaStuff object in it. It
| changed very subtly between v0.7.8 and v0.7.9, so subtly that the
| author didn't think to bump the minor version. And that's okay,
| because both libpupa and libsupa should be wrapping their
| dependent objects in an opaque interface anyway; they shouldn't
| be just barfing liblupa-generated objects directly-accessible
| into their client code. Oh, you think people _actually_
| encapsulate like that? You're hilarious. So eventually, a
| LupaStuff generated by libpupa is going to get passed to libsupa,
| which is actually expecting a _subtly_ different object. Will it
| work? Hahah, who knows! Python actually avoids this failure mode
| by forcing one coherent environment; since 'pupa and 'supa
| _have_ to be depending on the same 'lupa (without very fancy
| module shenanigans), you can have some expectation that their
| LupaStuff objects will be compatible.
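|
| A sketch of that failure mode in npm terms (libpupa, libsupa and
| liblupa are the article's hypotheticals; makeStuff and consume
| are made-up helpers):
|
|       // each package ends up with its own nested copy of liblupa
|       const pupa = require('libpupa');  // bundles liblupa 0.7.8
|       const supa = require('libsupa');  // bundles liblupa 0.7.9
|
|       const stuff = pupa.makeStuff();   // a 0.7.8 LupaStuff
|       supa.consume(stuff);              // expects a 0.7.9 one;
|                                         // fine only while the
|                                         // two stay compatible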
|
| (2) I think the author is hitting on something real though, which
| is that semantic versioning is a convention, not a guarantee;
| nobody really _knows_ if your code working with 0.7.8 implies it
| will work with 0.7.9. It _should_. Will it? "Cut yourself and
| find out." In an ideal world, every dependency-of-a-dependency
| pairing has been hand-tested by someone before it gets to you; in
| practice, individual software authors are responsible for one web
| of dependencies, and the Lockfile is a candle in the darkness:
| "Well, it worked on my machine in this configuration."
| broken_broken_ wrote:
| I agree with the premise: just use a specific version of your
| dependencies; that's generally fine.
|
| However: You absolutely do need a lock file to store a
| cryptographic hash of each dependency to ensure that what is
| fetched has not been tampered with. And users are definitely not
| typing a hash when adding a new dependency to package.json or
| Cargo.toml.
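|
| That is exactly what a package-lock.json entry records; a sketch
| of its shape (the article's hypothetical libpupa, hash elided):
|
|       "node_modules/libpupa": {
|         "version": "1.2.3",
|         "integrity": "sha512-..."
|       }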
| chriswarbo wrote:
| > And users are definitely not typing a hash when adding a new
| dependency to package.json or Cargo.toml
|
| I actually much prefer that: specify the git revision to use
| (i.e. a SHA1 hash). I don't particularly care what "version
| number" that may or may not have.
| numbsafari wrote:
| ... no mention of golang and minimal version selection?
|
| https://go.dev/ref/mod#minimal-version-selection
| https://research.swtch.com/vgo-mvs
|
| Instead of a "lock" file, go includes a "sum" file, which
| basically tells you the checksums of the versions of the modules
| that were used during a build happened to be, so that you can
| download them from a central place later and ensure you are
| working from the same thing (so that any surreptitious changes
| are identified).
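|
| A couple of go.sum lines look roughly like this (the module path
| is made up, hashes elided):
|
|       example.com/liblupa v0.7.8 h1:...
|       example.com/liblupa v0.7.8/go.mod h1:...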
| nc0 wrote:
| So much thought work spent on not accepting the only real,
| future-proof, safe, and deterministic solution: downloading your
| dependencies' code next to your code forever (a.k.a.
| vendoring)...
| maxmcd wrote:
| For what it's worth I think Go's MVS somewhat meets the desire
| here. It does not require lockfiles, but also doesn't allow use
| of multiple different minor/patch versions of a library:
| https://research.swtch.com/vgo-mvs
|
| I believe Zig is also considering adopting it.
|
| If there are any dependencies with the same major version the
| algorithm simply picks the newest one of them all (but not the
| newest in the package registry), so you don't need a lockfile to
| track version decisions.
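|
| A tiny worked example of that rule (module paths made up):
|
|       // app requires example.com/liblupa v1.2.0
|       // another dependency requires example.com/liblupa v1.4.0
|       // MVS selects v1.4.0: the highest minimum anyone asked
|       // for, even if v1.5.0 already exists upstream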
|
| Go's go.sum contains checksums to validate content, but is not
| required for version selection decisions.
| nycticorax wrote:
| Strongly endorse. That paper is really wonderful. It seems to
| me that MVS is _the_ solution to the version selection problem,
| and now we just have to wait for awareness of this to fully
| percolate through the developer community.
| vl wrote:
| The indirect require section in the go.mod file is essentially a
| lockfile. Once a decision is made by the tool, it's codified for
| future builds.
| maxmcd wrote:
| The // indirect dependencies I believe are just there to track
| dependencies that are not imported directly by the project, or
| to help with caching: https://github.com/golang/go/issues/36460
|
| In go 1.17 they were added so that project loading did not
| require downloading the go.mod of every dependency in the
| graph.
| nemothekid wrote:
| Most of the issues in this thread and the article, are, IMO,
| problems with _Node_, not with _lockfiles_.
|
| > _How could they know that liblupa 0.7.9, whenever it will be
| released, will continue to work with libpupa? Surely they can't
| see the future? Semantic versioning is a hint, but it has never
| been a guarantee._
|
| Yes, this is a _social_ contract. Not everything in the universe
| can be locked into code, and with Semantic versioning, we hope
| that our fellow humans won't unnecessarily break packages in
| non-major releases. It happens, and people usually apologize and
| fix, but it's rare.
|
| This has worked successfully if you look at RubyGems which is 6
| years older than npm (although Gemfile.lock was introduced in
| 2010, npm didn't introduce it until 2017).
|
| RubyGems doesn't have the same reputation for dysfunction as Node
| does. Neither do Rust, Go, PHP, or Haskell, or plenty more that I
| probably don't use on a daily basis. Node is the only ecosystem
| where I will come back and find a docker container that straight
| up won't build, or a package that requires the entire dependency
| tree to update because one package pushed a minor-version change
| that ended up requiring a minor version change to _Node_, and
| then that new version of Node isn't compatible with some hack
| that another package did in its C extension.
|
| In fact, I expect some Node developer to read this article and
| deploy yet another tool that will break _everything_ in the build
| process. In other languages I don't even think I've ever really
| thought about dependency resolution in years.
| untech wrote:
| Don't see it mentioned in the comments, but the names liblupa and
| libpupa are based on a penis joke.
|
| The joke is this:
|
| Lupa and Pupa received their paycheques, but the accountant
| messed up, so Lupa received payment belonging to Pupa, and Pupa
| -- belonging to Lupa.
|
| "To Lupa" sounds like "dick head" when translated to Russian. The
| ending reads as if Pupa received a dick head, which means that he
| didn't receive anything.
|
| I am not sure, but it could be that the entire post's intent is
| to get English-speaking folks to discuss "libpupa" and "liblupa".
| jacksavage wrote:
| Let's say "A" has a direct dependency on "B". The author of "A"
| knows how they use "B" and are qualified to state what versions
| of "B" that "A" is compatible with. Yes, some assumptions are
| made about "B" respecting semver. It's imperfect but helpful. If
| I'm writing package/app "C" and I consume "A", I'm not qualified
| to decide what versions of "B" to use without studying the source
| code of "A". Some situations necessitate this, but it doesn't
| scale.
|
| As a separate thought, it seems that it would be possible to
| statically analyze the usage of "B" in the source code of "A" and
| compare it to the public API for any version of "B" to determine
| API compatibility. This doesn't account for package
| incompatibility due to side effects that occur behind the API of
| "B", but it seems that it would get you pretty far. I assume this
| would be a solution for purely functional languages.
| jchw wrote:
| Go MVS ought to be deterministic, but it _still_ benefits from
| modules having lockfiles as it allows one to guarantee that the
| resolution of modules is consistent without needing to trust a
| central authority.
|
| Go's system may be worth emulating in future designs. It's not
| perfect (still requires some centralized elements, module
| identities for versions >=2 are confusing, etc.) but it does
| present a way to both not depend strongly on specific centralized
| authorities without also making any random VCS server on the
| Internet a potential SPoF for compiling software. On the other
| hand, it only really works well for module systems that purely
| deal with source code and not binary artifacts, and it also is
| going to be the least hazardous when fetching and compiling
| modules is defined to not allow arbitrary code execution. Those
| constraints together make this system pretty much uniquely suited
| to Go for now, which is a bit of a shame, because it has some
| cool knock-on effects.
|
| (Regarding deterministic MVS resolution: imagine a@1.0 depending
| on b@1.0, and c@1.0 depending on a@1.1. What if a@1.1 no longer
| depends on b? You can construct trickier versions of this
| possibly using loops, but the basic idea is that it might be
| tricky to give a stable resolution to version constraints when
| the set of constraints that applies depends on which versions
| end up being selected. There are possible deterministic
| ways to resolve this of course, it's just that a lot of these
| edge cases are pretty hard to reason about and I think Go MVS had
| a lot of bugs early on.)
| zaptheimpaler wrote:
| Dependency management is a deep problem with a hundred different
| concerns, and every time someone says "oh here it's easy, you
| don't need that complexity" it turns out to only apply to a tiny
| subset of dependency management that they thought about.
|
| Maven/Java does absolutely insane things, it will just compile
| and run programs with incompatible version dependencies and then
| they crash at some point, and pick some arbitrary first version
| of a dependency it sees. Then you start shading JARs and writing
| regex rules to change import paths in dependencies and your
| program crashes with a mysterious error with 1 google result and
| you spend 8 hours figuring out WTF happened and doing weird
| surgery on your dependencies' dependencies in an XML file with
| terrible plugins.
|
| This proposed solution is "let's just never use version ranges
| and hard-code dependency versions". Now a package 5 layers deep
| is unmaintained and is on an ancient dependency version, other
| stuff needs a newer version. Now what? Manually dig through
| dependencies and update versions?
|
| It doesn't even understand lockfiles fully. They don't make your
| build non-reproducible, they give you both reproducible builds
| (by not updating the lockfile) and an easy way to update
| dependencies _if and when_ you want to. They were made for the
| express purpose of making your build reproducible.
|
| I wish there was a mega article explaining all the concerns,
| tradeoffs and approaches to dependency management - there are a
| lot of them.
| adrianmsmith wrote:
| 1) "it will just compile and run programs with incompatible
| version dependencies and then they crash at some point"
|
| 2) "Now a package 5 layers deep is unmaintained and is on an
| ancient dependency version, other stuff needs a newer version.
| Now what? Manually dig through dependencies and update
| versions?"
|
| You can't solve both of these simultaneously.
|
| If you want a library's dependencies to be updated to versions
| other than the original library author wanted to use (e.g.
| because that library is unmaintained) then you're going to get
| those incompatibilities and crashes.
|
| I think it's reasonable to be able to override dependencies
| (e.g. if something is unmaintained) but you have to accept
| there are going to be surprises and be prepared to solve them,
| which might be a bit painful, but necessary.
| nixosbestos wrote:
| Yeah, you have to bump stuff and use packages that are
| actually compatible. Like Rust. Which does not do the insane
| things that Maven does, that the post author is presumably
| advocating for.
| yawaramin wrote:
| > compile and run programs with incompatible version
| dependencies and then they crash at some point
|
| Just because Java does this doesn't mean every language has to.
| It's not strongly tied to the dependency management system
| used. You could have this even with a Java project using
| lockfiles.
|
| > a package 5 layers deep is unmaintained and is on an ancient
| dependency version, other stuff needs a newer version. Now
| what? Manually dig through dependencies and update versions?
|
| Alternatively, just specify the required version in the top-
| level project's dependency set, as suggested in the article.
| RangerScience wrote:
| No, no, a thousand times no.
|
| The package file (whatever your system) is communication _to
| other humans_ about _what you know_ about the versions you need.
|
| The lockfile is the communication _to other computers_ about the
| versions _you are using_.
|
| What you shouldn't have needed is fully defined versions in your
| package files (but you _do_ need them, in case some package or
| another doesn't do a good enough job following semver)
|
| So, this:
|
|       package1: latest
|       # We're stuck on an old version b/c of X, Y, Z
|       package2: ~1.2
|
| (Related: npm/yarn should use a JSON variant (or YAML, regular or
| simplified) that allows for comments for precisely this reason)
| skybrian wrote:
| With deterministic version control, library authors are
| supposed to document the exact version _that a library was
| tested with_. (Or the oldest version that they tested with and
| still works.)
|
| People who use a library might use newer versions (via diamond
| dependencies or because they use latest), but it will result in
| a combination of dependencies that wasn't tested by the
| library's authors. Often that's okay because libraries try to
| maintain backward compatibility.
|
| Old libraries that haven't had a new release in a while are
| going to specify older dependencies and you just have to deal
| with that. The authors aren't expected to guess which future
| versions will work. They don't know about security bugs or
| broken versions of dependencies that haven't been released yet.
| There are other mechanisms for communicating about that.
| furstenheim wrote:
| > But if you want an existence proof: Maven. The Java library
| ecosystem has been going strong for 20 years, and during that
| time not once have we needed a lockfile. And we are pulling
| hundreds of libraries just to log two lines of text, so it is
| actively used at scale.
|
| Maven and Java are simply broken when dealing with transitive
| dependencies.
|
| I've been hit so many times with a runtime exception
| ("MethodNotFound") because two libraries have the same transitive
| dependency and one version gets picked over the other one.
| TJTorola wrote:
| I generally agree: pinning versions and then having some script
| to automatically update to capture security updates makes sense,
| except that it also assumes that every package is just using
| standard semver, which in my experience is something like 99%
| true.
|
| But it's also missing the value of hashes. Even if every package
| used semver, and you had a script that could easily update to
| get recent security updates, we would still gain value from
| lockfile hashes to protect against source code changing
| underneath the same version number.
| jmull wrote:
| > ...why would [an] author write a version range that includes
| versions that don't exist yet? ... For that, kids, I have no good
| answer.
|
| When you first take a dependency, you typically want the latest
| compatible version, to have all the available bug fixes
| (especially security fixes).
|
| Once you've started building on top of a dependency you need
| stability and have to choose when to take updates.
|
| It's about validating the dependency... on first use, there's no
| question you will be validating its use in your app. Later, you
| have to control when you take an update so you can ensure you
| have a chance to validate it.
|
| BTW, of course semantic versioning isn't perfect. It just lowers
| the risk of taking certain bug fixes, making it feasible to take
| them more frequently.
|
| The lock file just holds the state for this mechanism.
| nailer wrote:
| You can remove the animations and read the article with:
| window.presence.remove()
|
| In Developer Tools. You don't even need to use querySelector,
| since IDs are JS globals.
| nixosbestos wrote:
| Oh the rich irony of using Maven. Maven apparently has the same
| basic fundamental issues it had 15 years ago when [redacted] paid
| me to write a Maven plugin that would detect these version skews.
| I thought I'd just done it wrong because of the massive, sweeping
| number of places that it did things that were not just
| unfortunate, but were _serious_ (you can imagine).
| choeger wrote:
| Version ranges solve the problem of transitive dependencies: if
| libA needs libZ 1.0 and libB needs libZ 1.1, how am I supposed to
| use both dependencies at the same time when my language doesn't
| allow for isolation of transitive deps?
| peterpost2 wrote:
| Ugh, a bunch of straw men and then the author comes to a
| conclusion.
|
| Not a good article.
| _verandaguy wrote:
| > But... why would libpupa's author write a version range that
| includes versions that don't exist yet? How could they know that
| liblupa 0.7.9, whenever it will be released, will continue to
| work with libpupa? Surely they can't see the future? Semantic
| versioning is a hint, but it has never been a guarantee.
| > For that, kids, I have no good answer.
|
| Because semantic versioning is good enough for me, as a package
| author, to say with a good degree of confidence, "if security or
| stability patches land within the patch (or sometimes, even
| minor) fields of a semver version number, I'd like to have those
| rolled out with all new installs, and I'm willing to shoulder the
| risk."
|
| You actually kind-of answer your own question with this bit.
| Semver not being a guarantee of anything is true, but I'd extend
| this (and hopefully it's not a stretch): package authors will
| republish packages with the same version number, but different
| package contents or dependency specs. Especially newer authors,
| or authors new to a language or packaging system, or with
| packages that are very early in their lifecycle.
|
| There are also cases where packages get yanked! While this isn't
| a universally-available behaviour, many packaging systems
| acknowledge that software will ship with unintentional
| vulnerabilities or serious stability/correctness issues, and give
| authors the ability to say, "I absolutely have to make sure that
| nobody can install this specific version again because it could
| cause problems." In those cases, having flexible subdependency
| version constraints helps.
|
| It might be helpful to think by analogy here. If a structure is
| _completely rigid,_ it does have some desirable properties, not
| the least of which being that you don't have to account for the
| cascading effects of beams compressing and extending, elements of
| the structure coming under changing loads, and you can forget
| about accounting for thermal expansion or contraction and other
| external factors. Which is great, in a vacuum, but structures
| exist in environments, and they're subject to wear from usage,
| heat, cold, rain, and (especially for taller structures), high
| winds. Incorporating a planned amount of mechanical compliance
| ends up being the easier way to deal with this, and forces the
| engineers behind it to account for failure modes that'll arise
| over its lifetime.
| sjrd wrote:
| Having read the article and read some of the comments here, I
| think many could learn from dependency management rules in the
| Scala ecosystem.
|
| Scala uses Maven repositories (where the common practice is to
| use fixed dependency versions) but with different resolution
| rules:
|
| * When there are conflicting transitive versions, the _highest_
| number prevails (not the closest to the root).
|
| * Artifacts declare the versioning scheme they use (SemVer is
| common, but there are others)
|
| * When resolving a conflict, the resolution checks whether the
| chosen version is compatible with the evicted version according
| to the declared version scheme. If incompatible, an error is
| reported.
|
| * You can manually override a transitive resolution and bypass
| the error if you need to (see the sketch below).
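|
| In sbt terms, a minimal sketch of those rules (the liblupa
| coordinates are made up):
|
|       // published artifacts declare the scheme they follow:
|       ThisBuild / versionScheme := Some("early-semver")
|
|       // consumers can force a transitive version if they know
|       // what they're doing:
|       dependencyOverrides += "com.example" %% "liblupa" % "0.7.9"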
|
| The above has all the advantages of all the approaches advocated
| for here:
|
| * Deterministic, time-independent resolution.
|
| * No need for lock files.
|
| * No silent eviction of a version in favor of an incompatible
| one.
|
| * For compatible evictions, everything works out of the box.
|
| * Security update in a transitive dependency? No problem, declare
| a dependency on the new version. (We have bots that even
| automatically send PRs for this.)
|
| * Conflicting dependencies, but you know what you're doing? No
| problem, force an override.
| pyrale wrote:
| Semver was always a scam. Or rather, people were glad to scam
| themselves with semver rather than fixing their dependencies.
|
| Language ecosystems that try to make it sane for developers
| usually end up with some sort of snapshot/BOM system that lists
| versions that are compatible together, and that nudges lib
| developers to
| stay compatible with each other. I'm not going to pretend this is
| easy, because this is hard work on the side of lib devs, but it's
| invaluable for the community.
|
| Compared to that, people extolling the virtues of semver always
| seem to miss the mark.
| pentagrama wrote:
| The OP is well-versed in UX design, so it's hard for me to
| understand why they shipped that feature showing user avatars in
| real time [1]. It's mostly useless, distracting, takes up
| valuable screen space, and feels creepy.
|
| [1] https://imgur.com/a/q1XVDZU
| _kst_ wrote:
| The author seems to have assumed that readers are going to know
| that he's talking about NPM and JavaScript, and that "lockfiles"
| are an NPM-specific feature (to me, it means something completely
| different).
|
| Perhaps that's a valid assumption for readers of his blog, but
| once it appears here there are going to be a lot of readers who
| don't have the context to know what it's about.
|
| Can an "NPM" tag be added to the subject of this post? More
| generally, I encourage authors to include a bit more context at
| the top of an article.
| deathanatos wrote:
| > _that "lockfiles" are an NPM-specific feature_
|
| ... they're not, though. Python & Rust both have lockfiles. I
| don't know enough Go to say if go.sum counts, but it might also
| be a lockfile. They're definitely not unique to NPM, because
| nothing about the problem being solved is unique to NPM.
| _kst_ wrote:
| OK, but to me a "lockfile" is a file whose existence signals
| that some resource is locked.
|
| https://en.wikipedia.org/wiki/File_locking#Lock_files
|
| When I saw the title "We shouldn't have needed lockfiles", I
| expected something about preferring some other mechanism for
| resource locking.
|
| More generally, I see a lot of articles that talk about an
| issue in some language or framework that don't mention that
| context. Just adding "JavaScript" or "NPM" (or whatever) in
| the title or near the top of the article would be very
| helpful.
| xp84 wrote:
| This is weird to me. (Note: i'll use ruby terms like 'gem' and
| 'bundle' but the same basic deal applies everywhere)
|
| Generally our practice is to pin everything to major versions, in
| ruby-speak this means like `gem 'net-sftp', '~> 4.0'` which
| allows 4.0.0 up to 4.9999.9999 but not 5. Exceptions for non-
| semver such as `pg` and `rails` which we just pin to exact
| versions and monitor manually. This little file contains our
| intentions of which gems to update automatically and for any
| exceptions, why not.
|
| Then we encourage aggressive use of `bundle update`,
| which pulls in tons of little security patches and minor bugfixes
| frequently, but intentionally.
|
| Without the lockfile though, you would not be able to do our
| approach. Every bundle install would be a bundle update, so any
| random build might upgrade a gem without anyone even meaning to
| or realizing it, and your builds would no longer be reproducible.
|
| So we'd fix reproducibility by reverting to pinning everything to
| X.Y.Z, specifically to make the build deterministic, and then
| count on someone to go in and update every gem's approved version
| numbers manually on a weekly or monthly basis. (yeah right,
| definitely will happen).
| chriswarbo wrote:
| I would agree with this _if_ the author's examples were using
| hashes, rather than "version numbers". Specifying a hash lets us
| check whether any random blob of code is or isn't what we
| specified; versions can't do this, because any blob of code can
| _claim_ to have any name, version, etc. it likes. As long as we
| have a hash, we don't need version numbers (or names, though
| it's usually helpful to provide them).
|
| Using hashes also makes it easier to distribute, fetch, proxy,
| etc. since there's no need for trust. In contrast, fetching code
| based only on (name and) version number requires more centralised
| repositories with a bunch of security hoops to jump through.
|
| Also, on that note, I can plug my own post on the topic:
| http://www.chriswarbo.net/blog/2024-05-17-lock_files_conside...
___________________________________________________________________
(page generated 2025-08-06 23:00 UTC)