[HN Gopher] Debian discusses vendoring again
___________________________________________________________________
Debian discusses vendoring again
Author : Tomte
Score : 179 points
Date : 2021-01-13 06:23 UTC (16 hours ago)
(HTM) web link (lwn.net)
(TXT) w3m dump (lwn.net)
| tpetry wrote:
| I don't understand why Debian wants to package nodejs libraries.
| There is already a package manager for nodejs. Why do they have
| to package it again? The same applies to php and python
| libraries; both ecosystems have their own package managers.
| duskwuff wrote:
| > Why do they have to package it again?
|
| Many of those package managers -- like npm and pip -- will
| require a compiler toolchain to build/install packages with
| native code components, like packages that bind against C
| libraries. That isn't an acceptable requirement.
| hashmush wrote:
| They're talking about "application projects" which I understood
| as actual programs, not libraries. As a user I don't care if
| the tools I'm using are written in C, Python or JS, so I
| shouldn't have to remember which package manager to use; Debian
| should include them all.
| erik_seaberg wrote:
| They don't want twenty .deb node apps to depend on twenty
| different versions of the same library, because backporting
| security fixes to each of them years from now could be a
| nightmare.
| nikisweeting wrote:
| Why is Debian responsible for backporting security fixes for
| the thousands of deb packages available? Shouldn't that
| responsibility be handed to the package authors/maintainers
| themselves?
| debiandev wrote:
| Debian Developer here: upstream developers almost never care
| to prepare fixes for existing releases.
| symlinkk wrote:
| If an upstream dev isn't supporting software anymore
| maybe the users should stop using it?
| tzs wrote:
| Here's the problem. Say I develop
| program/library/whatever Foo.
|
| I make a new release every six months, so we have Foo 1,
| six months later Foo 2, six months after that Foo 3, and
| so on.
|
| Between the time Foo n and Foo n+1 are released, I'll
| release minor updates to Foo n to fix bugs, and maybe
| even add minor features, but I don't make breaking
| changes.
|
| Foo n+1 can have breaking changes, and once Foo n+1 is out I
| stop doing bug fixes on Foo n. My policy is that once Foo
| n+1 is out, Foo n is frozen. If Foo n has a bug, move to
| Foo n+1.
|
| A new version of Debian comes out, and includes Foo n
| which is the current Foo at the time. Debian supports new
| versions for typically 3 years, and the Debian LTS
| project typically adds another 2 years of security
| support on top of that.
|
| That version of Debian is not going to update to Foo n+1,
| n+2, n+3, etc., as I release them, because they have
| breaking changes. A key point of Debian stable is that
| updates don't break it. That means it is going to stay on
| Foo n all 3 years, and then for the 2 after that when the
| LTS project is maintaining it.
|
| That means that Debian and later Debian LTS ends up
| backporting security fixes from Foo n+1 and later to Foo
| n.
| symlinkk wrote:
| > Debian supports new versions for typically 3 years, and
| the Debian LTS project typically adds another 2 years of
| security support on top of that.
|
| I don't think Debian should try to do this. They should
| just ship whatever the current release of upstream is. Or
| better yet, just allow upstream to ship directly to
| users.
| scbrg wrote:
| Debian _are_ the package maintainers.
| nikisweeting wrote:
| Right, and therein lies the rub. Unless Debian wants to
| "boil the ocean" and burn a ludicrous amount of effort on
| repackaging every last npm and pip package for Debian,
| that position seems unsustainable long-term.
|
| I'm of the opinion that Debian should create more
| distribution channels (that are available by default)
| where they are _not_ the maintainers, and old releases
| are not forced to stay patched in order to remain
| installable.
| sigotirandolas wrote:
| If I understand correctly, Debian decides to deliver a
| certain version of a package (say v1.2.3) for a certain
| Debian version and they generally keep that version of the
| package fixed (or try to stay 100% compatible with it),
| minus security/major impact bugs which get backported. By
| doing this, Debian can ensure that when you upgrade the
| system, nothing breaks.
|
| While it's not uncommon for upstreams to offer a stable or
| LTS channel that mostly works like this (and generally
| stable distributions decide to package this version), the
| whole value of Debian is to offer a layer on top of many
| upstreams with different speeds / practices / release
| policies / etc. and offer you a system that works well
| together and doesn't break. So the work/backports they need
| to do is mostly related to different upstreams working
| differently or in a way that doesn't allow Debian to stay
| pinned on a certain version.
| marcthe12 wrote:
| Apps. They have distribute bunch of tools written in does
| language. For example an electron app for nodejs. On top of
| that bunch language PKG manager are not made for final
| distribution, mainly for development. Pip does not have an
| uninstall. This is worse in Lang's like rust, go, Haskell,
| nodejs where ecosystem design is not really compatible policy
| of these distros. And rust and nodejs comes in wierd locations
| so it can eventually needed in base system.
| [deleted]
| quietbritishjim wrote:
| A friendly reminder that if you enjoyed this article, please
| consider subscribing to LWN. It's an excellent news source and
| they employ people full time so need real money in order to
| survive.
|
| Normally articles are restricted to subscribers initially, and
| are made available to everyone a week after being posted. But
| subscribers can make Subscriber Links that let non-subscribers
| read a specific article straight away. I've noticed a lot of
| subscriber links (like this one) posted to HN recently - there's
| nothing wrong with that but, again, please remember to subscribe
| if you like them.
| psanford wrote:
| I love that every year my LWN subscription expires and they
| don't auto-renew it. It's such a nice way to treat your
| customers that differentiates LWN from most other media
| companies.
| SEJeff wrote:
| Jonathan Corbet (the founder and head cheese @ lwn) is an
| exceedingly nice guy and as a result of all of his work, ended
| up as a maintainer of much of the linux kernel documentation.
| He's one of those real unsung heroes of linux and also does
| things like the "Linux Kernel Development Report". Super good
| people and very professional all around.
| wyldfire wrote:
| Do they have a bitcoin/monero donation address? I'd donate
| today.
| iforgotpassword wrote:
| > By trying to shoehorn node/go modules into Debian packages we
| are creating busy work with almost no value.
|
| Another problem, at least one I've encountered with python, is
| that the debian packages sometimes seem to fight what you
| downloaded via pip. They're not made to work together. I'm not a
| python dev, so it was very confusing to figure out what was
| going on, and I wouldn't be surprised if it were similar if you
| mix npm and deb packages for js libs. They don't know about each
| other, can't tell which libs were already provided by the other,
| the search paths are unknown to the user, and so on. I think I
| went through similar pain when I had to get some ruby project
| going.
|
| My gut feeling is that it would be best if debian only supplied
| the package of the software in question and let the "native"
| dependency management tool handle all the libs, but I guess that
| would give the debian folks a feeling of loss of control, as it
| indeed makes it impossible to backport a fix for a specific lib;
| rather you'd have to fiddle with the dependency tree somehow.
| rubyn00bie wrote:
| Can't you just use "update-alternatives" to set the versions
| you want?
|
| https://wiki.debian.org/DebianAlternatives
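|
| A minimal sketch of how that could look (the alternative group
| name and paths here are purely illustrative, not a
| recommendation to override the system python):
|     $ sudo update-alternatives --install \
|           /usr/local/bin/python python /usr/bin/python3.9 1
|     $ sudo update-alternatives --config python   # pick interactively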
| initplus wrote:
| Sure they may have to fiddle with the dependency tree, but Node
| & Go both have well defined dependency formats (go.mod,
| package.json). It should be relatively easy to record the
| go.mod/package.json when these applications are built, and
| issue mass dependency bump & rebuilds if some security issue
| comes up.
|
| Really seems like the best of both worlds, and less work than
| trying to wrangle the entire set of node/go deps & a selection
| of versions into the Debian repos. I mean Debian apparently has
| ~160,000 packages, while npm alone has over 1,000,000!
| erik_seaberg wrote:
| > mass dependency bump
|
| That's not an option for Debian stable. They intentionally
| backport security and stability patches, and avoid other
| changes that might break prod without a really good reason.
|
| https://www.debian.org/doc/manuals/debian-
| faq/choosing.en.ht...
| initplus wrote:
| The situation with backporting security fixes is still the
| same. Debian could backport the fix to any node/go lib the
| same way they backport security fixes to C libs.
|
| The only difference is that a backported fix in a language
| that uses vendored dependencies rather than .so's needs to
| have all depending packages rebuilt.
| debiandev wrote:
| Debian Developer here. Backporting fixes to tens of
| thousands of packages is already a huge amount of
| (thankless) work.
|
| But it's still done - as long as there's usually one
| version of a given library in the whole archive.
|
| Imagine doing that for e.g. 5 versions of a given
| library, embedded in the sources of 30 different
| packages.
| hackmiester wrote:
| I'm sorry to hear that it's thankless. Thank you for
| doing it. It is one of the pillars of my sanity, and I am
| not exaggerating.
| viraptor wrote:
| > the debian packages sometimes seem to fight what you
| downloaded via pip
|
| It's a bit annoying, but there are simple rules and it applies
| to pip/gem/npm the same (not sure about go): For each runtime
| installation you have a place for global modules. If you
| installed that runtime from a system package, you don't touch
| the global modules - they're managed using system packages.
|
| If you install the language runtime on a side (via pyenv, asdf
| or something else) or use a project-space environment (python
| venv, bundler, or local node_modules) you can install whatever
| modules you want for that runtime without conflicts.
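|
| For example, for Python (a minimal sketch; the project name is
| made up):
|     $ python3 -m venv ~/venvs/myproject       # project-scoped env
|     $ . ~/venvs/myproject/bin/activate
|     (myproject) $ pip install requests        # never touches /usr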
| pydry wrote:
| This is simple but completely counterintuitive. I've seen it
| go wrong hundreds of times, and it has been the subject of a
| bunch of different workarounds (e.g. pipx).
|
| Debian should probably ship a separate global python
| environment for the Debian packages that depend on python,
| where it is managing the environment -- one with a different
| name (e.g. debpy), a different folder and, preferably,
| without pip even being available so that it's unlikely
| people will accidentally mess with it.
|
| This could also have isolated the python 2 mess they had for
| years, decoupling the upgrade of the "python" package from
| the upgrade of all the various Debian things that depended on
| python 2.
|
| Really, it's easier to make "apt install python" be the way
| to install python "on the side".
| viraptor wrote:
| > without pip even being available so that it's unlikely
| people will accidentally mess with it.
|
| This has already happened. It only resulted in lots of
| "ubuntu broke pip" posts rather than any understanding of why
| it happened. (The fact that it's not entirely separate from
| venvs didn't help.) But considering that issue, imagine what
| would happen for people running `apt install python` and not
| being able to run `python` or `virtualenv`. Most setup
| guides just wouldn't apply to debian/ubuntu, and they can't
| afford that.
| pydry wrote:
| Yeah, of course it did! That's why my primary suggested
| fix wasn't "just removing pip" but hiving the Debian-
| managed python environment off somewhere different,
| calling it something else, and giving it different binary
| names (e.g. debsyspython) that debian package authors
| could rely upon.
|
| Then the default "python" and "pip" could be free of
| debian dependencies, and users could go wild doing
| whatever the hell they want without messing up anything
| else in the debian dependency tree (like they would with
| pyenv or conda).
| uranusjr wrote:
| I don't have much to add, but Python maintainers have been
| suggesting solutions like this for years, and IIUC Red
| Hat distros use an approach similar to the one you
| described. Debian devs refuse to budge, like they always
| do on many topics, for better or worse. They are not going
| to do it, not because your approach is technically wrong,
| but because it does not fit their idea of system packaging.
| toyg wrote:
| This was probably considered and discarded because altering
| all references in all packages would be a _ton_ of work,
| and bound to produce issues with every single merge.
| pydry wrote:
| If it's truly an unmanageable amount of work that's a
| sign that there are other bugs/problems lurking that need
| fixing.
|
| If they did consider it and reject it I imagine it is
| more likely it was about avoiding backwards compatibility
| issues than the amount of work.
|
| This would also signal that there are deeper bugs lurking
| that need fixing, however.
| debiandev wrote:
| This is good advice for software lifecycle management in
| general:
|
| https://wiki.debian.org/DontBreakDebian
| quietbritishjim wrote:
| Put more simply: _never_ run `sudo pip install foo`. That's
| never expected to work, and it's a pity it doesn't just give
| a simple error "don't do that!" rather than sometimes
| partially working.
|
| As you said, you should start a new environment instead and
| install whatever you like into that. For Python, that means
| using virtualenv or python -m venv. You can always use the
| --system-site-packages switch to get the best of both worlds:
| any apt install python3-foo packages show up in the virtual
| environment, but you can use pip to shadow them with newer
| versions if you wish.
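|
| Roughly like this (a sketch; "foo" is a placeholder package
| name):
|     $ sudo apt install python3-foo            # Debian-managed copy
|     $ python3 -m venv --system-site-packages ~/venvs/demo
|     $ . ~/venvs/demo/bin/activate
|     (demo) $ pip install --upgrade foo        # shadows the apt copy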
| kstrauser wrote:
| Pip has that feature now. Put this in your ~/.pip/pip.conf:
|
|     [global]
|     require-virtualenv = true
|
| and then you get errors like:
|
|     $ pip install foo
|     ERROR: Could not find an activated virtualenv (required).
| nemetroid wrote:
| That's not exactly the same. I might be fine with
| installing Python packages with pip into my home
| directory, just not /usr.
| kstrauser wrote:
| My workflow's been to temporarily disable that, do the
| stuff I need, then re-enable it. It's a bit clunky, but I
| don't install stuff outside a virtualenv frequently
| enough for it to be a major pain in the neck.
| skrtskrt wrote:
| Yeah I use pyenv + virtualenvwrapper and there are a few
| packages I am fine with having in the top-level pyenv
| version, rather than in any particular virtual
| environment: black, requests, click, etc.
| rcxdude wrote:
| I basically don't let anything but the package manager
| touch /usr. Too many mysterious issues in systems that
| had gotten screwed up that way. It's extremely rare that it's
| necessary for any project: if you need to build and install
| some other project, you can generally just install it in a
| directory dedicated to the single codebase you're working
| on (with appropriate PATH adjustments, which can be
| sourced from a shell script so they are isolated from the
| rest of the system). I really dislike tutorials and guides
| that encourage just blindly installing stuff into system-
| managed areas, but it's rife.
| zozbot234 wrote:
| > I basically don't let anything but the package manager
| touch /usr.
|
| That's the standard approach. Custom system-wide packages
| (as opposed to packages that are only installed for one
| user) should go in /usr/local/ or in a package-specific
| directory under /opt/.
| ogre_codes wrote:
| This is a bit of a pet peeve with Linux packaging systems.
|
| I want application X.
|
|     $ sudo apt-get install appX
|     This is going to install 463 packages. Do you want to continue (y/n)?
|     # HELL NO
|
| Seems like every time a language starts to get popular, this is
| an issue until you have 8 or 9 sets of language tools piled up
| that you never use.
| tpoacher wrote:
| Look, I need that leftpad import, ok?
| ogre_codes wrote:
| That's not even getting into NPM or node!
| Too wrote:
| Look for an appX-minimal package and add --no-install-
| recommends.
|
| Don't ask why these are not the defaults.
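|
| Something along these lines (appX-minimal is hypothetical):
|     $ sudo apt-get install --no-install-recommends appX-minimal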
| initplus wrote:
| I am worried here that the alternative we'll end up with is
| applications that rely on vendoring being distributed
| entirely outside the Debian repositories... hopefully via go
| get/npm install, hopefully not via "download it from my
| website!"... But either way you lose a lot of the benefits that
| being officially in the Debian repos would bring. Devs want to
| distribute their software to users, and they aren't going to
| chase down rabbit holes to get it packaged to comply with every
| different distribution's set of available dependency versions.
|
| Really this idea that a distro (even a large well maintained one
| like Debian) has the resources to package a set of known versions
| of go/node packages for common open source software seems wrong?
| If they aren't going to package every exact version that's
| required, how is it going to be possible to test for
| compatibility? There is no way. And no dev is going to downgrade
| some random dependency of their app just to comply with
| Debian's set of available versions.
|
| Developers hate this versioning issue with languages like C/C++
| on Linux; it's a huge pain. And that's partially why dependency
| management in languages like Go/Node works the way it does. A
| multitude of distros with slightly different versions of every
| lib you use is a huge headache to dev for, so people have
| designed languages to avoid that issue.
| throwaway098237 wrote:
| There has always been a split between software that is
| expected to run for 10 or 20 years and software that will be
| obsolete in 2 years.
|
| https://www.cip-project.org/ aims to backport fixes to released
| kernels for 25 (twenty-five) years.
|
| Because you don't "npm update" deployed systems on: banks,
| power plants, airplanes and airports, trains, industrial
| automation, phone stations, satellites. Not to mention military
| stuff.
|
| (And Debian is much more popular in those places than people
| believe.)
|
| > Devs want to distribute their software to users, and they
| aren't going to chase down rabbit holes to get it packaged to
| comply with every different distribution's set of available
| dependency versions.
|
| That's what stable ABIs are for.
|
| > Really this idea that a distro (even a large well maintained
| one like Debian) has the resources to package a set of known
| versions of go/node packages for common open source software
| seems wrong?
|
| Yes, incredibly so. Picking up after lazy developers to unbundle
| a library can take hours.
|
| Backporting security fixes for hundreds of thousands of
| libraries, including multiple versions, is practically impossible.
|
| > And no dev is going to downgrade some random dependency of
| their app just to comply with Debian's set of available
| versions.
|
| Important systems will keep running in the coming decades.
| Without the work of such developers.
| JoshTriplett wrote:
| This was exactly my concern. I believe that Debian packages
| should avoid vendoring when possible, but that means it must be
| possible to package the individual modules, even if there are
| multiple versions, and even if there are many small
| dependencies.
| jillesvangurp wrote:
| That's already the reality for most of this century. Openjdk,
| go, rust, docker, npm/yarn, etc. all provide up to date Debian,
| Red Hat, etc. packages for what they offer. There's zero
| advantage to sticking with the distribution specific versions
| of those packages which are typically out of date and come with
| distribution specific issues (including stability and security
| issues).
|
| Debian's claims to adding value in terms of security and
| stability to those vendor provided packages are IMHO dubious at
| best. At best they sort of ship security patches with
| significant delays by trying to keep up with their stable
| release channels. Worst case they botch the job, ship them
| years late, or introduce new bugs repackaging the software (I
| experienced all of that at some point).
|
| When it comes to supporting outdated versions of e.g. JDKs,
| there are several companies specializing in that which actually
| work with Oracle to provide patched, tested, and certified JDKs
| (e.g. Amazon Corretto, Azul, or AdoptOpenJDK). Of course for
| Java, licensing the test suite is also a thing. Debian is
| probably not a licensee, given the weird restrictive licensing
| for that, which implies their packages don't actually receive
| the same level of testing as the aforementioned ways of getting
| a supported JDK.
|
| On development machines, I tend to use things like pyenv, jenv,
| sdkman, nvm, etc. to create project specific installations.
| Installing any project specific stuff globally is just
| unprofessional at this point and completely unnecessary. Also,
| aligning the same versions of runtimes, libraries, tools, etc.
| with your colleagues using mac, windows, and misc. Linux
| distributions is probably a good thing. Especially when that
| also lines up with what you are using in production.
|
| Such development tools of course have no reason to exist on a
| production server. Which is why docker is so nice since you
| pre-package exactly what you need at build time rather than
| just in time installing run-time dependencies at deploy time
| and hoping that will still work the same way five years later.
| Clean separation of infrastructure deployment and software
| deployment and understanding that these are two things that
| happen at separate points in time is core to this. Debian
| package management is not appropriate for the latter.
|
| Shipping tested, fully integrated, self-contained binary images
| is the best way to ship software to production these days. You
| sidestep distribution specific packaging issues entirely that
| way and all of the subtle issues that happen when these
| distributions are updated. If you still want Debian package
| management, you can use it in docker form of course.
| cbmuser wrote:
| > Debian's claims to adding value in terms of security and
| stability to those vendor provided packages are IMHO dubious
| at best.
|
| That's not true. The idea is that the distribution is tested
| and stable as a whole, and replacing something like OpenJDK can
| cause a lot of breakage in other packages.
|
| There is a reason why enterprise distributions provide
| support only for the limited set of packages that they ship.
| jillesvangurp wrote:
| Depends; if you install a statically linked version from a
| third party, it won't cause many headaches. That is kind of
| the point of vendoring and static linking: not making too
| many assumptions about what is there and what version it is.
| It works great at the cost of a few extra MB, which in
| most cases is a complete non-issue for the user.
|
| Debian self-inflicts this breakage by trying to share
| libraries and dependencies between packages. That both
| locks you into obsolete stuff and creates inflexibility.
| Third parties actively try not to have this problem. Debian
| is more flaky on this front than it technically needs to
| be.
|
| Kind of the point of the article is that to vendor or not
| to vendor is a hot topic for Debian exactly because of
| this.
| Blikkentrekker wrote:
| > _That's already the reality for most of this century.
| Openjdk, go, rust, docker, npm/yarn, etc. all provide up to
| date Debian, Red Hat, etc. packages for what they offer.
| There's zero advantage to sticking with the distribution
| specific versions of those packages which are typically out
| of date and come with distribution specific issues (including
| stability and security issues)._
|
| The advantage is the very reason one would choose _Debian_ to
| begin with -- an inert, unchanging, documented system.
|
| A large part of this problem seems to be that users somehow
| install a system such as _Debian_ whose _raison d'etre_ is
| inertia, only to then complain about the inertia, which makes
| one wonder why they chose this system to begin with.
|
| > _Debian's claims to adding value in terms of security and
| stability to those vendor provided packages are IMHO dubious
| at best. At best they sort of ship security patches with
| significant delays by trying to keep up with their stable
| release channels. Worst case they botch the job, ship them
| years late, or introduce new bugs repackaging the software (I
| experienced all of that at some point)._
|
| Evidently they add value in terms of stability, but methinks
| many a man misunderstands what "stable" means in _Debian_ 's
| parlance. It does not mean "does not crash"; it means "is
| inert, unchanging" which is important for enterprises that
| absolutely cannot risk that something stop working on an
| upgrade.
|
| > _Shipping tested, fully integrated, self-contained binary
| images is the best way to ship software to production these
| days. You sidestep distribution specific packaging issues
| entirely that way and all of the subtle issues that happen
| when these distributions are updated. If you still want
| Debian package management, you can use it in docker form of
| course._
|
| Not for the use case that _Debian_ and _RHEL_ attempt to
| serve at all -- these are systems that for good reasons do
| not fix non-critical bugs but rather document their behavior
| and rule them features, for someone might have come to rely
| upon the faulty behavior, and fixing it would lead to
| breaking such reliance.
| jillesvangurp wrote:
| That's why most shops deploy docker containers: it's not
| convenient at all for them to have Debian, Red Hat, etc.
| repackage the software they deploy or be opinionated about
| which versions of things are supported. For such users, the OS
| is just a runtime and it just needs to get out of the way.
|
| Ten years ago, we were all doing puppet, chef and whatnot
| to customize our deployment infrastructure to run our
| software. That's not a common thing anymore for a lot of
| teams, and I have not had to do stuff like that for quite
| some time. A lot of that work, btw, involved working around
| packaging issues and distribution-specific or distribution-
| version-specific issues.
|
| I remember looking at the puppet package for installing ntp
| once and being horrified at the hundred lines of code
| needed to run something like that because of all the
| differences between platforms. Also, something as simple as
| going from one CentOS version to the next was a non-trivial
| issue because of all the automation dependencies on stuff
| that changed in some way (I remember doing the v5 to v6 at
| some point). Dealing with madness like that is a PITA I
| don't miss at all.
|
| There's definitely some value in having something that is
| inert and unchanging for some companies that run software
| for longer times. Pretty much all the solutions I mentioned
| have LTS channels. E.g. If you want java 6 or 7 support,
| you can still get that. And practically speaking, when that
| support runs out I don't see how Debian would be in any way
| positioned to provide that in a meaningful way. The type of
| company caring about such things would likely not be
| running Debian but some version of Red Hat or something
| similarly conservative.
| curt15 wrote:
| >It does not mean "does not crash"; it means "is inert,
| unchanging" which is important for enterprises that
| absolutely cannot risk that something stop working on an
| upgrade.
|
| But would enterprises accept being forever stuck with any
| bugs that aren't security related? Even RHEL backports
| patches from newer kernels while maintaining kABI.
| Blikkentrekker wrote:
| We're talking about entities that run _COBOL_ code from
| the 60s and are too afraid to update or replace it, for
| fear that something might break.
|
| There's a reason why most enterprise-oriented systems
| take inertia quite seriously -- their customers, who lose
| considerable capital on even minor downtime, greatly
| desire it.
| saagarjha wrote:
| > There's zero advantage to sticking with the distribution
| specific versions of those packages which are typically out
| of date and come with distribution specific issues (including
| stability and security issues).
|
| Uh, other than "apt install foo" versus "ok, let's go search
| for foo on the internet, skip that spam listing that Google
| sold ad space to, ok no I am on foo.net, let's find the one
| that corresponds to my computer...yeah amd64 Linux, rpms? no,
| I want debs...download, dpkg -i...oh wait I need libbar".
| sorisos wrote:
| Agreed, no one is going to downgrade, but there is another
| strategy -- always build your app against the package versions
| that are in Debian stable. Of course it can be problematic, but
| it has some advantages: it's well tested, and any bugs probably
| have a documented workaround.
| npsimons wrote:
| > But either way you lose a lot of the benefits that being
| officially in the Debian repos would bring.
|
| The first thing I do when I hear about a new (to me) piece of
| software is an "apt-cache search $SOFTWARE". If it doesn't show
| up there, that's a red flag to me: this software isn't mature
| or stable enough to be trusted on my production machines.
|
| Sure, I might go ahead and download it to play around with on
| my development machines, but for all the "I'm making it
| awesome!" arguments of developers, more often than not it's
| just an excuse for lack of discipline in development process.
| GlitchMr wrote:
| Debian already provides multiple versions of Rust crates; I
| don't see why such an approach wouldn't be viable for Node.js
| packages. For example, Debian provides versions 3, 4 and 5 of
| the nom crate:
|
| https://packages.debian.org/sid/librust-nom-3-dev
|
| https://packages.debian.org/sid/librust-nom-4-dev
|
| https://packages.debian.org/sid/librust-nom-dev
| wscott wrote:
| Rust is funny because it is perfectly possible to build a
| single binary that links multiple versions of the same library.
| Happens when a transitive dependency is written assuming the
| older API of a library.
| eclipseo76 wrote:
| I think it all boils down to manpower. Rust crates need a
| limited set of compat packages and have a much smaller ecosystem
| than nodejs. Node developers tend to use as many dependencies as
| they can, resulting in hundreds of deps per app. Rust programs
| generally have fewer than 10 direct dependencies, and at worst
| fewer than a hundred indirect dependencies, so it is still
| manageable.
| prepperdev wrote:
| Debian policy is very sane (no network access during build), but
| it does seem like modern software just assumes that the Internet
| is always available, and all dependencies (including transitive)
| are out there.
|
| The assumption is a bit fragile, as proven by the left-pad
| incident ([1]). I hope that whatever the outcome of the
| discussion in Debian, it keeps the basic policy in place: not
| relying on things outside of its immediate control during
| package builds.
|
| 1. https://evertpot.com/npm-revoke-breaks-the-build/
| cbmuser wrote:
| > Debian policy is very sane (no network access during build)
|
| openSUSE has that policy, too. And I'm pretty sure the same
| applies for Fedora.
|
| You don't want to rely on external dependencies during build
| that you can't control.
|
| That would be a huge security problem.
| arp242 wrote:
| The whole "download during build" thing is a minor issue; k8s,
| for example, puts all their dependencies in the /vendor/
| directory, and AFAIK many toolchains support this or something
| like it. And even if they don't, this is something that can be
| worked around in various ways.
|
| The _real_ issue is whether or not to use that vendor
| directory, or to always use generic Debian-provided versions of
| those dependencies, or some mix of both. This is less of a
| purely technical issue like the above, and more of a UX /"how
| should Debian behave"-kind of issue.
| JoshTriplett wrote:
| I don't think _that_ aspect of Debian Policy is in any danger
| of changing, nor should it.
| cbmuser wrote:
| It's also not very Debian-specific. It applies to openSUSE as
| well, for example.
| BelenusMordred wrote:
| Debian is incredibly conservative about versioning/updates and
| faces a lot of pressure to move faster. I hope they keep the
| same pace or even slow down.
|
| The world will keep turning.
| initplus wrote:
| I'm not entirely sure the security argument makes sense here.
|
| If a library is API compatible, does it matter if it's been
| vendored or not? If it's not vendored, you release the new build
| and be done. But if it's been vendored into 20 packages, you just
| need to bump the vendored version & rebuild those packages.
|
| The languages we are discussing where vendoring is common have
| simple build processes, and well defined dependency management
| mechanisms (go.mod, package.json). So it's not difficult to bump
| the version of a dependency and rebuild a package in those
| languages. A large part of the work here should even be able to
| be automated.
| viraptor wrote:
| > The languages we are discussing where vendoring is common
| have simple build processes
|
| For most packages, yes. But then you've got kubernetes. Or
| openstack. Or keras/tensorflow stack. They are significantly
| harder to deal with than anything else and essentially could
| build their own distributions around themselves.
|
| Or pandas+scipy+numpy+mpl which lots of people just give up on
| and use conda.
| bfrog wrote:
| Nix seems to effectively have solved this, by more or less
| vendoring everything, but in a way that still allows shared
| usage. Having made a few deb and rpm packages in my life, I don't
| miss it. At all.
| rbanffy wrote:
| I believe that one lesson here is that just because it's now
| possible to have a thousand dependencies doesn't mean you should
| have a thousand dependencies. It'll make your sysadmins very sad.
|
| I don't want the latest libraries on my servers. I want my
| servers to be boring and not change often. I want them to run the
| time-proven, battle-tested and well-understood software, because
| I don't want to be the first to debug those. There are people
| better at that than me.
|
| If, and only if, there's a blocker bug in a distro-provided
| package, I'll think of vendoring it in. And then only if there is
| no plausible workaround.
|
| Of course, I also do testing against the latest stuff so I'm not
| caught off-guard when the future breaks my apps.
| choeger wrote:
| Personally I believe that vendoring is just the lazy approach by
| developers that do not want to care about the ecosystem their
| software runs in. Consequently, their software will probably not
| be maintained for a long time (Red Hat offers 10 years of
| support, for instance). It's a shame but it seems like the cool
| kids simply tend to ignore sustainability in software
| development.
|
| Since npm, pip, go, cargo, etc. are open source projects, would it
| not be simpler to add a "debian mode" to them? In that mode, the
| tool could collaborate with the system package manager and follow
| any policies the distribution might have.
| macksd wrote:
| It's not like there's one ecosystem your software runs in.
| Sure, they could add a Debian mode. But you'd also need a Red
| Hat mode, Apple mode (and I don't know - do you need a homebrew
| mode that's different from the default mode?), Windows mode,
| etc. I think it's equally fair to say that the ecosystems just
| haven't solved the dependency management problem flexibly
| enough for everyone. Not everyone needs to or wants to support
| things for 10 years in order to have a Red Hat mode unless Red
| Hat is picking up that burden for them or paying them.
| floatboth wrote:
| With scripting runtimes like ruby and python, it's already kind
| of "Debian mode", with a system-wide location for packages.
| With compiled languages that really prefer to statically link
| stuff written in that language (rust, go, etc) it's really not
| feasible.
|
| In FreeBSD Ports we have a very pleasant solution for packaging
| rust apps. You just run `make cargo-crates` and paste the
| output into the makefile - boom, all cargo dependencies are now
| distfiles for the package.
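|
| A rough sketch of that workflow (the port directory and crate
| names below are made up):
|     $ cd /usr/ports/sysutils/myapp     # a port with USES=cargo
|     $ make cargo-crates                # prints a CARGO_CRATES= block
|     CARGO_CRATES=   libc-0.2.80 \
|                     serde-1.0.118
|     # paste that block into the port's Makefile, then fetch and
|     # checksum the crates like any other distfiles:
|     $ make makesum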
| npsimons wrote:
| > Personally I believe that vendoring is just the lazy approach
| by developers that do not want to care about the ecosystem
| their software runs in.
|
| As someone with a foot in both worlds, allow me to try to put
| it more diplomatically: developers' job is to get into the
| weeds, pay attention to tiny little details, and know every
| piece of their code so well they often visualize the problem
| and fix in their head before even putting hands to keyboard.
| They don't want to have to deal with "other peoples' software."
|
| System administrators (and by extension, distribution
| maintainers) have to take in the bigger picture: how will this
| affect stability? Resources? How will this package interact
| with other pieces of software? They have to consider use cases
| in everything from the embedded to insanely large clusters and
| clouds.
|
| It would behoove developers to try to expand their awareness of
| how their software fits in with the rest of the world, if even
| for a short while. It's hard (I know), but in the end it will
| make you a better developer, and make your software better.
|
| > Since npm,pip,go,cargo, etc. are open source projects, would
| it not be simpler to add a "debian mode" to them?
|
| This is the best fucking idea I've heard in a while!
| wizzwizz4 wrote:
| Cargo already has one: https://crates.io/crates/cargo-deb
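|
| For example (a minimal sketch, run from a crate's root):
|     $ cargo install cargo-deb
|     $ cargo deb     # builds a .deb from the Cargo.toml metadata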
| zozbot234 wrote:
| > Personally I believe that vendoring is just the lazy approach
| by developers that do not want to care about the ecosystem
| their software runs in.
|
| I have to agree. If your dependency is used by more than one
| package within the distribution, it should be split out and not
| vendored - ultimately, this will reduce the total workload on
| package maintainers.
|
| This however does not mean that distros should be splitting
| _every_ dependency into a separate package by default, much
| less have a separate package in the archive for each build
| configuration - there's no need to litter the distribution
| repo with idiosyncratic packages that will never have more than
| a single project/app/library depending on them, and that's
| precisely where a vendoring/bundling focused approach might be
| appropriate. Such dependencies are more like individual source-
| level or object files, that have always been "bundled" as part
| of a self-contained application.
| Blikkentrekker wrote:
| The problem is that in the real world, despite promises of
| semantic versioning, breakage often occurs, which makes it
| impossible for two packages to depend on the same version of
| another library even though in theory they should.
| ernesth wrote:
| Debian is used to managing and packaging programs from the
| real world.
|
| It is obviously easier to create breaking changes that will
| not be discovered in dynamic languages. But in the real
| world, where everybody does unit tests, it is difficult to
| understand what kind of new problems were not already
| encountered 20 years ago.
| Blikkentrekker wrote:
| Well _Rust_ and _npm_ happened in that time.
|
| I don't have much experience with the latter, but I know
| of the former that, despite promises of semantic
| versioning, it is quite common for a crate to not work
| with a newer version, requiring one to hard-depend on a
| specific version.
|
| Add to that that _Rust_'s definition of "non-breaking
| change" in theory can include changes that A) lead to
| compilation errors or B) compile fine but lead to
| different behavior.
| m000 wrote:
| > If your dependency is used by more than one package within
| the distribution, it should be split out and not vendored...
|
| That's like going back to square-one: Dependency Hell [1].
| That's a regression, not a solution.
|
| I believe the problem is not at the edges (OS packaging,
| software developers), but in the middle: The dependency
| handling in programming languages.
|
| I.e. you need to be allowed to install multiple versions of a
| library under the same "environment" and activate the version
| you want at launch time (interpreted languages) or build time
| (compiled languages).
|
| [1] https://en.wikipedia.org/wiki/Dependency_hell
| zozbot234 wrote:
| Dependency hell is a result of not using sensible semantic
| versioning. If libraries are properly versioned, it's easy
| to make sure that each dependency uses the latest
| compatible version.
| [deleted]
| mlang23 wrote:
| "One package manger to rule them all"...
| pip/gem/npm/cargo/cabal&stack/whatnot all pose the same issue.
| From a distributors point of view, I get why you dont want to
| solve all these problems. From a user point of view, there is no
| good reason why I should learn more then one package manager.
|
| Itsa a dilemma. A similar thing is happening with ~/.local/
|
| When I started with Linux 25 years ago, it was totally normal to
| know and do the configure/make/make install dance. 10 years
| later, most of what I wanted to use was available as a packge.
| And if it wasn't, I actually built one. These days, I have a 3G
| ~/.local/ directory. It has become normal again to build stuff
| locally and just drop it into ~/.local/. And in fact, sometimes
| it is way easier to do so then to try and find a current version
| packaged by your distribution.
| npsimons wrote:
| > These days, I have a 3G ~/.local/ directory. It has become
| normal again to build stuff locally and just drop it into
| ~/.local/. And in fact, sometimes it is way easier to do so
| then to try and find a current version packaged by your
| distribution.
|
| That's great; now move it to your production webserver where
| the user the server runs as doesn't have a "home" directory, or
| you run in to a dozen other reasonable security restrictions
| that break the tiny world that vendoring was never tested
| outside of.
| AnIdiotOnTheNet wrote:
| Indeed the fact that, in many cases, one has to compile
| software themselves just to get a reasonably recent version of
| it is one of Linux's most colossal and inexcusable failures.
|
| Fortunately people are starting to come around to things like
| AppImage, FlatPak, and (to a lesser extent) Docker in order to
| deal with it.
| aww_dang wrote:
| >totally normal to know and do the configure/make/make install
| dance
|
| I always cringe when the readme starts with something something
| Docker. "Don't try building this yourself", is an unfortunate
| norm for opensrc.
| marcosdumay wrote:
| I usually give up on the software if the INSTALL file only
| has references to Docker.
|
| A few times I tried really hard not to give up, but the reality
| is that Docker-only is highly correlated with non-working, so
| up to now I have eventually given up on every piece of
| Docker-only software I have met.
| matkoniecz wrote:
| Note "SubscriberLink" and "The following subscription-only
| content has been made available to you by an LWN subscriber."
|
| You are likely not supposed to post it to HN
| detaro wrote:
| https://lwn.net/op/FAQ.lwn says
|
| > _Where is it appropriate to post a subscriber link?_
|
| > _Almost anywhere. Private mail, messages to project mailing
| lists, and blog entries are all appropriate. As long as people
| do not use subscriber links as a way to defeat our attempts to
| gain subscribers, we are happy to see them shared._
| matkoniecz wrote:
| Thanks! I was unaware of that approach, thanks for the
| clarification!
|
| "subscription-only content" confused me and I failed to parse
| "has been made available to you by an LWN subscriber".
|
| It is the first time I have seen such an approach explicitly
| encouraged, and I really like it. Hopefully it is going well
| for LWN.
| makz wrote:
| What if linux distributions stop packaging stuff altogether?
|
| Most of the time it seems to create more trouble than it's worth
| (for the developers and maintainers of such distributions).
|
| Maybe just provide a base system and package management tools but
| leave the packaging to third parties.
|
| We can see that already with repositories such as EPEL and others
| more specialized.
| 3np wrote:
| With most any distribution you can configure your own
| repositories.
|
| Realistically, you could set up a minimal arch and host your
| own aur (there are projects for this). This is basically what
| the aur is.
|
| Or Debian ppas if you're looking for more self-contained
| bundles.
|
| And there's always gentoo.
|
| I think what you want kind of exists and is in practice already
| :)
|
| Personally I find A LOT of value in distributions, and it's
| obvious that others do too -- otherwise they wouldn't have the
| significance they do today.
| zeckalpha wrote:
| > Kali Linux, which is a Debian derivative that is focused on
| penetration testing and security auditing. Kali Linux does not
| have the same restrictions on downloading during builds that
| Debian has
|
| The security auditing distribution has less auditable
| requirements around building packages?
| Blikkentrekker wrote:
| Of course. _Kali_ is generally booted as a live image, and one
| runs the entire system as root.
|
| It is most certainly not designed to be secure. This is
| expecting a battering ram to be resistant against being
| battered.
| erik_seaberg wrote:
| raesene9 has a point about someone getting malware into your
| bleeding edge pentest tool.
| luch wrote:
| This is the magic of "offensive" security, where you don't
| really bother with your own security posture :D
|
| you know what they say, the cobbler's children are the worst
| shod ...
| erik_seaberg wrote:
| I think Kali is trying to offer the latest versions of each
| tool, because a pentest box that randomly breaks isn't as
| serious as prod servers.
| raesene9 wrote:
| yeah but the challenge that's being referred to (I think)
| is that having an unauditable supply chain on a security
| distribution is, in itself a security risk.
|
| You've got to imagine that Kali linux is a very tempting
| target for supply chain attacks, if you can compromise a
| load of security testers, you might get access to all sorts
| of information....
| nanna wrote:
| I recently returned to Debian after a long hiatus in Ubuntu. This
| time, I'm using Guix as my package manager.
|
| It's a wonderful combo. Bleeding edge, reproducible, roll-
| backable, any version I choose, packages if I want them via Guix.
| Apt and the occasional .deb file as a fallback or for system
| services (nginx etc). And Debian as the no-bs, no-snap, solid
| foundation of everything.
|
| To me this is the future.
| twentydollars wrote:
| hmmm, this is your desktop or a server?
| npsimons wrote:
| I haven't been through a lot of comments here or at the link, but
| I'll bring up something I ran into in what I realize now was an
| early version of "vendoring": over a decade ago I was playing
| around with https://www.ros.org/, and there were no distribution
| packages, so I went with the vendor method, and I distinctly
| remember it downloading gobs of stuff and building it, only to
| break here and there. It was fucking terrible to work with and I
| only did it because it was R&D, not a production grade project,
| and I was being paid full time for it.
|
| Vendoring "build" processes, IME, are incredibly prone to
| breakage, and that alone is reason I won't bother with them for a
| lot of production stuff. Debian is stable - I can "apt install
| $PACKAGE" and not have to worry about some random library being
| pulled from the latest GitHub version breaking the whole gorram
| build.
| IceWreck wrote:
| Fedora has separate packages for libraries. But for nodejs,
| packaging individual libs led to a huge clusterfuck of difficult-
| to-maintain packages. Now they've decided that nodejs-based
| packages will bundle their compiled/binary nodejs modules for now.
| https://fedoraproject.org/wiki/Changes/NodejsLibrariesBundle...
| eclipseo76 wrote:
| And for Golang, we try to unbundle; we have around 1,600 go
| libraries packaged. Some packages are still bundled, like k8s,
| due to dependency hell.
| symlinkk wrote:
| It's unsustainable to expect package maintainers to create
| packages and backport security fixes for every piece of software
| in existence, and big ecosystems like Node.js make this
| blindingly obvious.
|
| Developers should be able to just ship software directly to
| users, without package maintainers standing in the middle.
|
| Hopefully Snap or Flatpak solves this!
| viraptor wrote:
| I haven't seen it mentioned in that discussion, but vendoring is
| interesting from the reproducible-builds point of view,
| especially after the recent SolarWinds incident. The dependencies
| become one step removed from their upstream distribution and are
| potentially patched. Tracking what you're actually running
| becomes a harder problem than just looking at a package version.
|
| With vendoring we'll see Debian security bulletins for X-1.2.3
| which actually mean that the vendored Y-3.4.5 is vulnerable. And
| if you're monitoring some other vulnerabilities feed, Y will not
| show up as a package on your system at all.
| ncmncm wrote:
| I start with the assumption that Node.js is, itself, not fixable
| at the distro level, end of story.
|
| But, what about the rest? The problem to solve is things like Go
| packages that want to static-link their dependencies.
|
| One way forward is to consider programs that want to static-link
| as being, effectively, scripts. So, like scripts, they are not
| packaged as complete executables. Their dependencies do not refer
| to a specific version, but provide a link to where to get any
| specific version wanted. The thing in /usr/bin is a script that
| checks for a cached build and, _if needed_ : invokes the build
| tool, which uses local copies where found and usable, and
| downloads updates where needed, and links.
|
| A package- or dist-upgrade doesn't pull new versions of
| dependencies; it just notes where local, cached copies are stale,
| and flushes. On next execution -- exactly as with scripts --
| invalidated builds are scrapped, and rebuilt.
|
| It means that to use a Go program, you need a Go toolchain, but
| that is not a large burden.
|
| It means that the responsibility of the package system is only to
| flush caches when breaking or security updates happen. The target
| itself arranges to be able to start up quickly, on the 2nd run,
| much like the Python packages we already deal with.
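|
| A rough shell sketch of such a launcher (everything here -- the
| paths, the cache layout, and the "foo" program -- is
| hypothetical, not an existing Debian mechanism):
|     #!/bin/sh
|     # /usr/bin/foo: rebuild-on-demand launcher for a Go "script"
|     cache=/var/cache/go-bin/foo
|     src=/usr/share/gocode/src/foo
|     if [ ! -x "$cache" ] || [ "$src" -nt "$cache" ]; then
|         # cache missing or flushed by a package upgrade: rebuild
|         # from local sources, downloading only what is not
|         # already present locally
|         ( cd "$src" && go build -o "$cache" . ) || exit 1
|     fi
|     exec "$cache" "$@"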
| superkuh wrote:
| This futureshock is a result of the rapid pace of new features
| implemented within commonly used libraries and immediately used
| by devs. The rapid pace is good for commerce and servers but it's
| bad for the desktop. Commerce pays almost all the devs (except
| those wonderful people at debian) so futureshock will continue.
| The symptoms of this imbalance in development incentives versus
| user incentives express themselves as containerization and
| vendoring.
| jancsika wrote:
| If upstream has decided to vendor, I only see two sensible
| options:
|
| * package the vendored software in Debian, and annotate the
| category of vendored packages so it's clear to the user they
| _cannot_ follow the normal Debian policies. I've been bitten by
| the lack of such feedback wrt Firefox ESR. My frustration would
| have gone away completely if the package manager told me, "Hey,
| we don't have the volunteer energy to properly package this
| complex piece of software and its various dependencies. If you
| install it, it's _your_ job to deal with any problems arising
| from the discrepancy between Debian's support period and
| Mozilla's support period." As it is, Debian's policy advertises a
| level of stability (or "inertia" as people on this thread seem to
| refer to it) that isn't supported by the exceptions it makes for
| what are probably two of the most popular packages -- Chromium and
| Firefox.
|
| * do not package software that is vendored upstream
|
| I can understand either route, and I'm sure there are reasonable
| arguments for either side.
|
| What I _cannot_ understand -- and what I find borderline
| manipulative -- is pretending there's some third option where
| Debian volunteers roll up their sleeves and spend massive amounts
| of their limited time/cognitive load manually fudging around with
| vendored software to get it in a state that matches Debian's
| general packaging policy. There's already been a story posted
| about _two_ devs approaching what looked to me like burnout over
| their failed efforts to package the same piece of vendored
| software.
|
| Edit: clarification
| angrygoat wrote:
| A snippet of a quote from Pirate Praveen in the article.
|
| > All the current trends are making it easy for developers to
| ship code directly to users. Which encourages more isolation
| instead of collaboration between projects.
|
| This is where I would respectfully disagree. As a dev, packaged
| libraries from the system are often fine - until I hit a snag,
| and need to work with the devs from another project to work out a
| fix. With cargo/node/yarn/poetry/gopkg/... I can send a PR to
| another project, get that merged, and vendor in the fix while all
| of that is happening.
|
| If I can't do that, I'm left with hacky workarounds, as
| upstreaming a fix and then waiting up to 12 months (if I'm on a
| six-month release tempo OS) for the fix to be available to me is
| just not practical.
|
| Being able to work on a quick turnaround with dependencies to fix
| stuff is one of the huge wins with modern build tools.
| z3t4 wrote:
| I've been maintaining a Node.js app for about five years and
| almost all dependencies have been "vendored"/locked with forked
| libraries because some of the dependencies have been abandoned,
| and some have switched owners, where the new owner spends their
| days adding bugs to perfectly working code due to "syntax
| modernization", or where the maintainer didn't accept the pull
| request for various reasons. Software collaboration is not that
| easy, especially if it's done by people in their (very little)
| spare time.
| olau wrote:
| I think you misunderstood what he is talking about.
|
| The issue he's addressing is that you don't care about other
| projects also using this library.
| ohazi wrote:
| Even as an engineer, I usually draw a line between "stuff I
| like to hack on" and "core system components that I'd rather
| not touch." I'm fine pulling a dependency from nightly for a
| project I'm working on, or because some program I use has a
| cool new feature I want to play with. But I probably wouldn't
| do that with, say, openssh.
|
| I can certainly sympathize with this:
|
| > and then waiting up to 12 months (if I'm on a six-month
| release tempo OS) for the fix to be available
|
| but the needs of system administrators are not the same as the
| needs of developers. That's why my development machine is on a
| rolling release, but my servers run Debian stable with as few
| out-of-repository extras as possible.
|
| Those servers are really fucking reliable, and I don't need a
| massive team to manage them. Maybe this sort of "boring" system
| administration isn't as popular as it used to be with all of
| that newfangled container orchestration stuff, but this is the
| core of the vendoring argument.
|
| Installing who-knows-what from who-knows-where can work if
| you're Google, but it really sucks if you're one person trying
| to run a small server and have it not explode every time you
| poke at it.
| wbl wrote:
| The point of vendoring and not using dynamic linking is to
| avoid spooky action at a distance that screws up everything.
| npsimons wrote:
| > but the needs of system administrators are not the same as
| the needs of developers. That's why my development machine is
| on a rolling release, but my servers run Debian stable with
| as few out-of-repository extras as possible.
|
| As another code monkey-cum-sysadmin, I very much second this:
| my servers are Debian stable without non-free, and there's
| damn good reason for that.
|
| I can appreciate GP's argument, and I've been there, pulling
| down CL libraries for playing around on my development
| machines. But what GP leaves out is that more often than not,
| those distribution-external packages _break_, and if I was
| relying on them, I'd be left holding the bag.
|
| I do agree, there is a problem (that the LWN article goes
| into), and it definitely needs attention. Distributions might
| be able to handle newer ecosystems better.
|
| But for all the awesome whizbang packages of NPM, QuickLisp,
| etc., developers need to realize that sysadmins and
| _especially_ distro maintainers have to look at the bigger
| picture. Maybe consider that if your software introduces
| breaking changes or needs security updates on a weekly basis,
| it isn't production ready.
| newpavlov wrote:
| It's a very reasonable policy to require the ability to build
| everything offline, without accessing language "native"
| repositories. But I think a big problem is that Debian requires
| that each library be a separate package.
|
| For classic C/C++ libraries it's not a problem, since for
| historical reasons (lack of a good, standard language package
| manager, and thus a high level of pain caused by additional
| dependencies) they had relatively big libraries. Meanwhile, in
| new languages, good tooling (cargo, NPM, etc.) makes the
| "micro-library" approach quite viable and convenient (to the
| point of abuse, see leftpad). And packaging an application with
| sometimes several hundred dependencies is clearly a Sisyphean
| task.
|
| I think that, instead of vendoring, Debian should adopt a
| different packaging policy, which would allow them to package
| whole dependency trees into a single package. This should make it
| much easier for them to package applications written in Rust and
| similar languages.
| jillesvangurp wrote:
| Well, C/C++ historically had no separate dependency management,
| making Linux distributions effectively the de facto package
| managers for C/C++.
|
| Other languages do have package managers and not using those is
| typically not a choice developers make.
|
| I agree that packaging npm, maven, pip, etc. dependencies for
| the purpose of reusing them in other packages that need them (as
| opposed to just vendoring the correct versions with each
| package) is something that probably adds negative value. It's
| just not worth the added complexity of trying to even make that
| work correctly. Also, package locking is a thing with most of
| these package managers, meaning that anything else is, by
| definition, the wrong version.
| RcouF1uZ4gsC wrote:
| > For classic C/C++ libraries it's not a problem, since for
| historical reasons (lack of a good, standard language package
| manager and thus high-level of pain caused by additional
| dependencies)
|
| This is also one of the big reasons why header-only C++
| libraries are so popular.
| giovannibajo1 wrote:
| > I think, that instead of vendoring, Debian should instead
| adopt a different packaging policy, which would allow them to
| package whole dependency trees into a single package.
|
| I'm not sure how this is different from what I call vendoring,
| and I think this is indeed the solution.
|
| In Go, there's "go mod vendor" which automatically creates a
| tree called "vendor" with a copy of all the sources needed to
| build the application, and from that moment on, building the
| application transparently uses the vendored copy of all
| dependencies.
|
| In my ideal world, Debian would run "go mod vendor" and bundle
| the resulting tree into a source DEB package (notice that the
| binary DEB package would still be "vendored", because Go
| embraces static linking anyway).
|
| If the Debian maintainer of that application wants to "beat
| upstream" at releasing security fixes, they can monitor those
| dependencies' security advisories and then, whenever they want,
| update the required dependencies, revendor, and ship the
| security update -- roughly the workflow sketched below.
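|
| Concretely, the revendor-and-rebuild flow could look roughly like
| this (the dependency path and versions are made up for
| illustration):
|
|     # after upstream publishes a fixed release of the dependency
|     go get example.com/some/dep@v1.2.4
|     go mod tidy
|     go mod vendor                # refresh the vendor/ tree
|     dch -i "Update vendored example.com/some/dep to 1.2.4"
|     dpkg-buildpackage -us -uc    # rebuild the .deb offline
|
| The source package carries everything needed to build without
| network access; only the maintainer's revendor step touches the
| network.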
|
| What I totally disagree with is having "go-crc16" as a Debian
| package. I'm not even sure who would benefit from that, surely
| not Go developers, who will install packages through the Go
| package manager and choose and test their own dependencies
| without even knowing what Debian is shipping.
| l0b0 wrote:
| I hope Nix (or something like it) starts eating market share from
| other package managers, Docker, and the like. Nix solves this
| sort of thing at the cost of one of the cheapest things
| available, disk space. Every discussion about it mentions how
| complex it is; I remember giving up on creating a .deb after a
| few days of looking into that fractal of complexity, versus
| producing a Nix package within the first day of looking at the
| language.
| chalst wrote:
| Debian stable offers unattended patches, which is something I
| value highly in public-facing server deployments.
|
| I haven't seen anything like this even proposed for NixOS.
|
| https://wiki.debian.org/UnattendedUpgrades
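|
| For anyone who hasn't set it up, enabling it is roughly:
|
|     apt install unattended-upgrades
|     # answer "Yes" to enable the periodic upgrade job
|     dpkg-reconfigure -plow unattended-upgrades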
| antientropic wrote:
| NixOS does have unattended automatic upgrades:
| https://nixos.org/manual/nixos/stable/#sec-upgrading-
| automat...
| chalst wrote:
| Mea culpa: I should have been aware of that.
|
| I'm looking at https://status.nixos.org/
|
| Are the 6-12 month old releases more stable than the 0-6
| month releases? (i.e., is 20.03 more stable than 20.09?)
| mlang23 wrote:
| Maybe I was just unlucky... But when I tried Nix, the _first_
| thing that happened was that it did not deliver what it
| promised. The package was not properly isolated, so it ended up
| depending on something from /usr/*/bin. This was pretty
| disappointing for a first try.
|
| Also, while this is a minor cosmetic thing, I highly dislike
| the location of the nix-store. It doesn't belong in /
|
| _and_, long path names (due to the hashes) are very impractical
| to use. All in all, I hear a lot of good things about Nix, esp.
| from the Haskell community, but whenever I try it, I develop a
| deeply rooted dislike for the implementation of the concept.
| soraminazuki wrote:
| > the package was not properly isolated, so it ended up
| depending on something from /usr/bin.
|
| I don't know when you last used Nix, but Nix now enforces
| sandboxed builds by default, so it should be better at
| catching these kinds of things during packaging. But note that
| isolation in Nix is mostly a build-time thing, and it does
| not prevent running programs from accessing filesystem paths
| in /usr. You could still fire up a bash prompt and enter "ls
| /usr/bin"; there's nothing stopping you from doing so.
|
| > I highly dislike the location of the nix-store. It doesn't
| belong in /
|
| I see many people express this sentiment, but I'm not sure
| what's wrong with /nix/store when you've mostly[1] abandoned
| /usr. Nix is fundamentally incompatible with traditional Unix
| directory layouts.
|
| > long path names (due to the hashes) are very impractical to
| use
|
| That's why you never need to specify them directly. You could
| either install packages globally and get them symlinked into
| /run/current-system/sw/bin or ~/.nix-profile/bin, both of
| which are included in PATH, or use nix-shell and direnv to
| automatically add packages to PATH whenever you enter a
| specific directory (see the sketch below).
|
| [1]: "Mostly," because /usr/bin/env is kept for compatibility
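|
| A minimal sketch of the nix-shell/direnv workflow mentioned
| above (the package names are just examples):
|
|     # one-off: a shell with extra tools on PATH
|     nix-shell -p ripgrep jq
|
|     # per-project: let direnv load the environment automatically
|     # (assumes the project already has a shell.nix)
|     echo "use nix" > .envrc
|     direnv allow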
| Foxboron wrote:
| I don't think nix solves this. You are still left with having
| to deal with security issues, updates, and tracking on a per-
| package basis instead of once for the entire ecosystem.
|
| Admittedly, this is a hard problem. And the languages that do
| use vendoring make it hard to programmatically inspect all of
| this. But what do you do if, say, the Python library requests
| has a severe HTTP parsing issue which allows ACE (arbitrary
| code execution)?
|
| How many packages would you need to patch on nix?
|
| How many packages would you need to patch on Debian, Arch,
| Fedora, OpenSuse?
| l0b0 wrote:
| Do you mean as a package maintainer or as an end user? I
| expect automation and reproducible builds to make this
| near-trivial as a maintainer. As an end user, binary diffs will be
| helpful (not sure if Nix supports them yet), but modern
| hardware and network connections can easily upgrade a
| thousand small packages in less than a minute.
| Foxboron wrote:
| Reproducible as in build environments or deterministic
| binaries? Nix has only reproducible build environments.
|
| Package maintainer. For the end-user there is no practical
| difference between a container and nix, and you see how
| well the container ecosystem is currently handling security
| updates on their distributed images.
|
| The problem is not distributing the fix, it's getting the
| fix patched.
| soraminazuki wrote:
| I think you might be misunderstanding how packaging is
| handled in Nix. Nix devs use semi-automatic tools to convert
| packages from programming language ecosystems to Nix
| packages, but these tools still have means to properly apply
| patches where necessary. Whether the vendoring approach is
| used depends on the actual tools being used, but that is
| mostly irrelevant. Being able to apply patches to all
| intended packages is a requirement for any packaging tool
| because patching is absolutely essential for packaging work.
|
| > How many packages would you need to patch on nix?
|
| So to answer your question, you only need to change a single
| file. For the requests library, this one[1]. You might also
| be interested in how Nix manages patches for NPM packages[2].
| The number of manual fixes required is surprisingly small.
|
| [1]: https://github.com/NixOS/nixpkgs/blob/master/pkgs/develo
| pmen... [2]: https://github.com/NixOS/nixpkgs/blob/master/pkg
| s/developmen...
| Foxboron wrote:
| > Whether the vendoring approach is used depends on the
| actual tools being used, but that is mostly irrelevant.
|
| I don't think it is though? Because...
|
| >So to answer your question, you only need to change a
| single file. For the requests library, this one[1]. You
| might also be interested in how Nix manages patches for NPM
| packages[2]. The amount of manual fixes required is
| surprisingly few.
|
| Right, I assume Python is easier in this scenario, since
| there are not many cases where a Python project would
| install N different versions of one package. I don't quite
| understand how this works if a Python project depends on
| separate versions?
|
| For the nodejs part I'm more curious. node_modules sometimes
| contains multiple versions of the same dependency, sometimes
| across multiple major versions. The patching in those files
| seems to be fairly trivial sed replacements and rpath rewrites.
| But how would security patches be applied across versions?
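|
| To illustrate what I mean (hypothetical packages and versions):
|
|     node_modules/
|       lodash/              # 4.x, hoisted to the top level
|       foo/
|         node_modules/
|           lodash/          # 3.x, because foo needs the older major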
|
| I also took a quick look at the go stuff, and it seems like
| there is no such thing there, as `deleteVendor` defaults to
| false and thus each Go application is self-contained. How would
| patching dependencies work here?
|
| https://github.com/NixOS/nixpkgs/search?q=deleteVendor
| soraminazuki wrote:
| > I don't quite understand how these work if a python
| project depends on separate versions?
|
| For Python packages in the official Nix repository, the
| packages AFAIK aren't auto-generated. In this case, Nix devs
| split out the common part of the package definition to
| resemble the following pseudocode:
|
|     def commonDefinition(version):
|         return {
|             'src': 'http://...',
|             'sha256': '000...',
|             ...
|         }
|
|     packageV1 = commonDefinition(1)
|     packageV2 = commonDefinition(2)
|
| > For the nodejs part I'm more curious. ... But how would
| security patches be applied across versions?
|
| I guess this was a bad example, as I incorrectly assumed
| it was patching dependencies when it wasn't. But you could,
| though, by matching package names. The Nix language is
| powerful enough to do this.
|
| > thus each Go application is self-contained
|
| I wasn't aware of the Go situation, but this does seem to
| be the case. However, this looks incidental rather than
| a hard requirement. Many tools provide
| mechanisms to centrally maintain patches, which would
| work whether or not vendoring is enabled.
| Foxboron wrote:
| I think this illustrates my point though. Nix doesn't
| necessarily solve the overarching issue of having
| vendored dependencies. And it doesn't seem like it's
| being worked on either. There might be work on this on a
| per-ecosystem basis, but this isn't necessarily a goal of
| NixOS itself.
|
| The intention here isn't to talk shit about nix though. I
| just wonder why people present it as being the solution
| to this issue.
| Blikkentrekker wrote:
| _Gentoo_ probably solves it better.
|
| The developers allow multiple versions of the same library
| when there are problems and they have deemed it necessary,
| but libraries where it makes no real sense are not
| multislotted accordingly.
|
| When developers realize that two packages need a different
| version of the same library due to issues that should not
| exist, they multislot the library in response or, if it is
| trivial, patch whatever package relies on the faulty behavior.
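|
| A rough illustration (the package and slots are just examples):
| slots let two versions coexist and be depended upon explicitly.
|
|     # install two slots of the same package side by side
|     emerge --ask dev-lang/python:3.8 dev-lang/python:3.9
|
|     # an ebuild can then depend on one specific slot
|     DEPEND="dev-lang/python:3.8"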
| Foxboron wrote:
| Everyone can provide multiple versions, though; Arch does
| this. It works well to work around stuff in the existing
| ecosystem (e.g. openssl1.0, gtk2/3/4, qt and so on). But I
| don't quite see how this solves anything for modern
| languages like Rust and Go?
|
| Do you have any documentation for how this is dealt with in
| languages that utilize vendoring to the extreme?
| Blikkentrekker wrote:
| The difference with those systems is that the different
| versions are coded in the package name, and that they only
| go so far as to provide different versions of packages that
| are designed to be installed as such under different
| sonames, because they're binary distributions.
|
| On _Gentoo_, they are permitted to rename these
| libraries arbitrarily since software is compiled locally,
| so my _Krita_ can be linked to _Qt_ libraries with a
| different path than your _Krita_, so if ever _Qt_ would
| break a.p.i. or a.b.i. despite not updating its
| soname to reflect that, _Gentoo_ could elect to manually
| rename them and compile whatever needs them against the
| appropriate paths.
| shp0ngle wrote:
| Related article (linked in this one) from a few months back,
| about Kubernetes and its Go dependencies
|
| https://news.ycombinator.com/item?id=24948591
| chalst wrote:
| I'm surprised the option of moving the package to contrib got so
| little support. Many of these packages don't seem a good fit for
| Debian stable and its security-patch model.
| JoshTriplett wrote:
| contrib is for software that doesn't fit into a fully FOSS
| ecosystem. It's not for sidestepping security or quality
| concerns.
|
| I wouldn't want to see FOSS with no proprietary dependencies
| stuffed into contrib because of packaging issues.
| chalst wrote:
| It is hard to audit DFSG compliance for software whose build
| process pulls in dependencies on the fly.
| jcelerier wrote:
| then... they should just make a new "vendored" repo for that
| kind of software?
| nikisweeting wrote:
| ArchiveBox is fully FOSS but is almost unpackageable on stable
| because it depends on a mix of pip packages, npm packages, and
| Chromium (which is only distributed via snap).
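|
| A rough sketch of what a non-Debian install looks like today
| (the npm helper packages are omitted here):
|
|     pip install archivebox      # the Python core, from PyPI
|     snap install chromium       # the browser, snap-only
|     # ...plus a handful of npm-installed helper tools on top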
|
| The core value provided by ArchiveBox is the integration of
| these disparate tools into a single UX, so it's stuck in
| contrib/ppa for the foreseeable future.
|
| This is just one example of a FOSS package that doesn't fit
| neatly into Debian's distribution model, but there are many
| others.
| rezonant wrote:
| Speaking as a seasoned Node.js dev, if they think they can handle
| Node's nested vendored packaging system using flat Debian
| packaging and guarantee correct behavior of the app, they are
| sorely mistaken. It's a fool's errand. The sheer amount of effort
| being proposed here is astounding.
| zaarn wrote:
| If all you have is a hammer...
|
| It's not the first time Debian package policies seem backwards,
| trying to shove a square peg through a round hole. I hope
| the solution does not end up being "make APT do it", because APT
| is a terrible package manager to begin with (I hated every
| second that I had to fight APT over how to handle pip packages
| that I would very much like installed globally).
| bbarnett wrote:
| apt is an incredible package manager.
|
| Don't blame the hammer, blame the carpenter.
|
| The problem here is packaging, and maintainer decisions. And
| yes, I'm familiar with the issues here, and the bugs filed. I
| think it was handled... improperly as well.
| Ericson2314 wrote:
| Fine-grained dependencies are crucial, but vendoring is terrible.
|
| Check out https://github.com/kolloch/crate2nix/ and
| https://github.com/input-output-hk/haskell.nix for technical
| solutions that get the best of both worlds.
|
| Sorry, but there's just no way DPkg/APT and RPM/Yum are going to
| keep up here very well.
___________________________________________________________________
(page generated 2021-01-13 23:03 UTC)