[HN Gopher] Debian's approach to Rust - Dependency handling (2022)
       ___________________________________________________________________
        
       Author : zdw
       Score  : 43 points
       Date   : 2024-12-24 17:05 UTC (2 days ago)
        
 (HTM) web link (diziet.dreamwidth.org)
 (TXT) w3m dump (diziet.dreamwidth.org)
        
       | woodruffw wrote:
       | The author writes:
       | 
       | > I am proposing that Debian should routinely compile Rust
       | packages against dependencies in violation of the declared
       | semver, and ship the results to Debian's millions of users.
       | 
       | Followed by a rationale. However, the rationale rings hollow to
       | me:
       | 
       | * Compiling a Rust package outside of its declared semver might
       | not cause security problems in the form of exploitable memory
       | corruption, but it's almost certainly going to cause _stability_
       | problems with extremely undesirable characteristics (impossible
       | for the upstream to triage, _and_ definitionally unsupported).
       | 
       | * The assumption that semantic changes to APIs are encoded within
       | types is incorrect in the general case: Rust has its fair share
       | of "take a string, return a string" APIs where critical semantic
       | changes can occur without any changes to the public interface.
       | These again are unlikely to cause memory safety issues but they
       | can result in _logical_ security issues, especially if the
        | project's actual version constraints are intended to close off
       | versions that _do_ work but perform their operation in an
       | undesirable way.
       | 
       | As a contrived example of the above: `frobulator v1.0.0` might
       | have `fn frob(&str) -> &str` which shells out to `bash` for no
       | good reason. `frobulator v1.0.1` removes the subshell but doesn't
       | change the API signature or public behavior; Debian ends up using
       | v1.0.0 as the build dependency despite the upstream maintainer
       | explicitly requesting v1.0.1 or later. This would presumably go
       | unnoticed until someone files a RUSTSEC advisory on v1.0.0 or
       | similar, which I think is a risky assumption given the size (and
       | growth) of the Rust ecosystem and its tendency for large dep
       | trees.
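
The contrived frobulator example can be sketched in code. This is a hedged illustration: the crate name comes from the comment above, but the concrete behaviors (ASCII-only vs. full Unicode lowercasing) are invented stand-ins for the subshell change, chosen so the snippet stays self-contained. The point is that both versions type-check identically:

```rust
// Two hypothetical crate versions with the identical public signature
// `fn frob(&str) -> String` but different semantics. A resolver that
// only checks the declared interface cannot tell them apart.

mod frobulator_v1_0_0 {
    // v1.0.0: lowercases ASCII characters only (the "undesirable" behavior).
    pub fn frob(s: &str) -> String {
        s.chars().map(|c| c.to_ascii_lowercase()).collect()
    }
}

mod frobulator_v1_0_1 {
    // v1.0.1: same signature, but correct full Unicode lowercasing.
    pub fn frob(s: &str) -> String {
        s.to_lowercase()
    }
}

fn main() {
    // The difference only surfaces at runtime, on particular inputs:
    assert_eq!(frobulator_v1_0_0::frob("ÉCLAIR"), "Éclair");
    assert_eq!(frobulator_v1_0_1::frob("ÉCLAIR"), "éclair");
    println!("semantics differ; signatures do not");
}
```

Building against v1.0.0 when upstream pinned `>=1.0.1` compiles cleanly; nothing in the type system flags the substitution.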
       | 
       | The author is right that, in practice, this will work 99% of the
       | time. But I think the 1% will cause a _lot_ of unnecessary
       | downstream heartburn, all because of a path dependency
        | (assumptions around deps and dynamic linkage) that isn't
       | categorically relevant to the Rust ecosystem.
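
Concretely, the relaxation being debated is a change to declared requirements at package-build time. A minimal sketch, using the hypothetical frobulator crate from the comment above (this is illustrative, not the syntax of Debian's actual tooling):

```toml
# Upstream Cargo.toml: the maintainer deliberately floors the
# dependency at the fixed release.
[dependencies]
frobulator = ">=1.0.1, <2"

# The proposal amounts to rebuilding as if this read:
#
#   frobulator = ">=1.0.0, <2"
#
# so that the distribution's packaged frobulator 1.0.0 satisfies the
# (relaxed) range, even though upstream explicitly excluded it.
```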
        
         | jvanderbot wrote:
         | I really don't understand this problem at all. I can cargo-deb
         | and get a reasonable package with virtually no effort. What is
          | noncompliant about that?
        
           | woodruffw wrote:
           | My understanding (which could be wrong) is that this is an
           | attempt to preserve Debian's "global" view of dependencies,
           | wherein each Rust package has a set of dependencies that's
           | consistent with every other Rust package (or, if not every,
           | as many as possible). This is similar to the C/C++ packaging
           | endeavor, where dependencies on libraries are handled via
           | dynamic linkage to a single packaged version that's
           | compatible with all dependents.
           | 
           | If the above is right this contortion is similar to what's
           | happened with Python packaging in Debian and similar, where
           | distributions tried hard to maintain compatibility inside of
           | a single global environment instead of allowing distinct
           | incompatible resolutions within independent environments
           | (which is what Python encourages).
           | 
           | I think the "problem" with cargo-deb is that it bundles all-
           | in-one, i.e. doesn't devolve the dependencies back to the
           | distribution. In other words, it's technically sound but not
           | philosophically compatible with what Debian wants to do.
        
             | wakawaka28 wrote:
             | Packaging a distribution efficiently requires sharing as
             | many dependencies as possible, and ideally hosting as much
             | of the stuff as possible in an immutable state. I think
             | that's why Debian rejects language-specific package
             | distribution. How bad would it suck if every Python app you
             | installed needed to have its own venv for example? A distro
             | might have hundreds of these applications. As a maintainer
             | you need to try to support installing them all efficiently
             | with as few conflicts as possible. A properly maintained
             | global environment can do that.
             | 
             | Edit: I explained lower down but I also want to mention
             | here, static linkage of binaries is a huge burden and waste
             | of resources for a Linux distro. That's why they all tend
             | to lean heavily on shared libraries unless it is too
             | difficult to do so.
        
               | woodruffw wrote:
               | > Packaging a distribution efficiently requires sharing
               | as many dependencies as possible, and ideally hosting as
               | much of the stuff as possible in an immutable state.
               | 
               | I don't think any of this precludes immutability: my
               | understanding is Debian _could_ package every version
               | variant (or find common variants without violating
               | semver) and maintain both immutability and their global
               | view. Or, they could maintain immutability but sacrifice
               | their _package-level_ global view (but _not_ metadata-
               | level view) by having Debian Rust source packages contain
               | their fully vendored dependency set.
               | 
               | The former would be a lot of work, especially given how
               | manual the distribution packaging process is today. The
               | latter seems more tractable, but requires distributions
               | to readjust their approach to dependency tracking in
               | ecosystems that fundamentally don't behave like C or C++
               | (Rust, Go, Python, etc.).
               | 
               | > How bad would it suck if every Python app you installed
               | needed to have its own venv for example?
               | 
               | Empirically, not that badly. It's what tools like `uv`
               | and `pipx` do by default, and it results in a markedly
               | better net user experience (since Python tools _actually
               | behave_ like hermetic tools, and not implicit modifiers
                | of global resolution state). It's also what Homebrew
               | does -- every packaged Python formula in Homebrew gets
               | shipped in its own virtual environment.
               | 
               | > A properly maintained global environment can do that.
               | 
               | Agreed. The problem is the "properly maintained" part; I
               | would argue that ignoring upstream semver constraints
               | challenges the overall project :-)
        
               | wakawaka28 wrote:
               | >I don't think any of this precludes immutability: my
               | understanding is Debian could package every version
               | variant (or find common variants without violating
               | semver) and maintain both immutability and their global
               | view.
               | 
               | Debian is a binary-first distro so this would obligate
               | them to produce probably 5x the binary packages for the
               | same thing. Then you have higher chances of conflicts,
               | unless I'm missing something. C and C++ shared libraries
               | support coexistence of multiple versions via semver-based
               | name schemes. I don't know if Rust packages are
               | structured that well.
               | 
               | >Empirically, not that badly. It's what tools like `uv`
               | and `pipx` do by default, and it results in a markedly
               | better net user experience (since Python tools actually
               | behave like hermetic tools, and not implicit modifiers of
               | global resolution state). It's also what Homebrew does --
               | every packaged Python formula in Homebrew gets shipped in
               | its own virtual environment.
               | 
               | These are typically not used to install everything that
               | goes into a whole desktop or server operating system.
               | They're used to install a handful of applications that
               | the user wants. If you want to support as many systems as
               | possible, you need to be mindful of resource usage.
               | 
               | >I would argue that ignoring upstream semver constraints
               | challenges the overall project :-)
               | 
                | Yes, it's a horrible idea. "Let's programmatically add a
               | ton of bugs and wait for victims to report the bugs back
               | to us in the future" is what I'm reading. A policy like
               | that can be exploited by malicious actors. At minimum
               | they need to ship the correct required versions of
               | everything, if they ship anything.
        
               | woodruffw wrote:
               | > Debian is a binary-first distro so this would obligate
               | them to produce probably 5x the binary packages for the
               | same thing. Then you have higher chances of conflicts,
               | unless I'm missing something.
               | 
               | Ah yeah, this wouldn't work -- instead, Debian would need
               | to bite the bullet on Rust preferring static linkage and
               | accept that each package might have different interior
               | dependencies (still static and known, just not globally
               | consistent). This doesn't represent a conflict risk
               | because of the static linkage, but it's very much against
               | Debian's philosophy (as I understand it).
               | 
               | > I don't know if Rust packages are structured that well.
               | 
               | Rust packages of different versions can gracefully
               | coexist (they do already at the crate resolution level),
               | but static linkage is the norm.
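
For reference, the crate-level coexistence mentioned here is something Cargo handles natively: semver-incompatible major versions of the same crate can appear side by side in one build graph. A sketch with invented crate names:

```toml
[dependencies]
widget-ui  = "2"   # suppose this internally requires renderer = "2"
widget-cli = "1"   # suppose this internally requires renderer = "1"

# Cargo resolves both renderer 1.x and renderer 2.x into the lockfile
# (`cargo tree --duplicates` will list them) and statically links both
# into the final binary. What is missing, relative to C/C++, is a
# soname-style mechanism for a distro to ship each major version as a
# separately tracked shared object.
```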
               | 
               | > These are typically not used to install everything that
               | goes into a whole desktop or server operating system.
               | They're used to install a handful of applications that
               | the user wants.
               | 
               | I might not be understanding what you mean, but I don't
               | think the user/machine distinction is super relevant in
               | most deployments: in practice the server's software
               | shouldn't be running as root anyways, so it doesn't
               | matter much that it's installed in a user-held virtual
               | environment.
               | 
               | And with respect to resource consumption: unless I'm
               | missing something, I think the resource difference
               | between installing a stack with `pip` and installing that
               | same stack with `apt` should be pretty marginal --
               | installers will pay a linear cost for each new virtual
               | environment, but I can't imagine that being a dealbreaker
               | in most setups (already multiple venvs are atypical, and
               | you'd have to be pretty constrained in terms of storage
               | space to have issues with a few duplicate installs of
               | `requests` or similar).
        
               | wakawaka28 wrote:
               | >I might not be understanding what you mean, but I don't
               | think the user/machine distinction is super relevant in
               | most deployments: in practice the server's software
               | shouldn't be running as root anyways, so it doesn't
               | matter much that it's installed in a user-held virtual
               | environment.
               | 
               | Many software packages need root access but that is not
               | what I was talking about. Distro users just want working
               | software with minimal resource usage and
               | incompatibilities.
               | 
               | >Rust packages of different versions can gracefully
               | coexist (they do already at the crate resolution level),
               | but static linkage is the norm.
               | 
               | Static linkage is deliberately avoided as much as
               | possible by distros like Debian due to the additional
               | overhead. It's overhead on the installation side and mega
               | overhead on the server that has to host a download of
               | essentially the same dependency many times for each
               | installation when it could have instead been downloaded
               | once.
               | 
               | >And with respect to resource consumption: unless I'm
               | missing something, I think the resource difference
               | between installing a stack with `pip` and installing that
               | same stack with `apt` should be pretty marginal --
               | installers will pay a linear cost for each new virtual
               | environment, but I can't imagine that being a dealbreaker
               | in most setups (already multiple venvs are atypical, and
               | you'd have to be pretty constrained in terms of storage
               | space to have issues with a few duplicate installs of
               | `requests` or similar).
               | 
               | If the binary package is a thin wrapper around venv, then
               | you're right. But these packages are usually designed to
               | share dependencies with other packages where possible. So
               | for example, if you had two packages installed using some
               | huge library for example, they only need one copy of that
               | library between them. Updating the library only requires
               | downloading a new version of the library. Updating the
               | library if it is statically linked requires downloading
               | it twice along with the other code it's linked with,
               | potentially using many times the amount of resources on
               | network and disk. Static linking is convenient sometimes
               | but it isn't free.
        
               | int_19h wrote:
               | Historically, the main reason why dynamic linking is even
               | a thing is because RAM was too limited to run "heavy"
               | software like, say, an X server.
               | 
               | This hasn't been true for decades now.
        
               | wakawaka28 wrote:
               | RAM is still a limited resource. Bloated memory
               | footprints hurt performance even if you technically have
                | the RAM. The disk, bandwidth, and package-builder CPU
                | usage involved in statically linking everything is reason
                | enough not to do it, if possible.
        
               | LtWorf wrote:
               | This is still true.
               | 
               | Static linking works fine because 99% of what you run is
               | dynamically linked.
               | 
                | Try statically linking your entire distribution and see
                | how RAM usage and speed degrade :)
        
               | fragmede wrote:
               | > Then you have higher chances of conflicts, unless I'm
               | missing something.
               | 
               | For python, you could install libraries into a versioned
               | dir, and then create a venv for each program, and then in
               | each venv/lib/pythonX/site-packages/libraryY dir just
               | symlinks to the appropriate versioned global copy.
        
               | wakawaka28 wrote:
               | That would make it difficult to tell at a system level
               | what the exact installed dependencies of a program are.
               | It would also require the distro to basically re-invent
               | pip. Want to invoke one venv program from another one?
               | Well, good luck figuring out conflicts in their
               | environments which can be incompatible from the time they
               | are installed. Now you're talking about a wrapper for
               | each program just to load the right settings. This is not
               | even an exhaustive list of all possible complications
               | that are solved by having one global set of packages.
        
               | LtWorf wrote:
               | Do you think that's user friendly?
        
               | fragmede wrote:
               | I see it as more user friendly - instead of forgetting to
               | activate the venv and having the program fail to run/be
               | broken/act weird, you run the program and it activates
               | the venv for you so you don't have that problem.
        
               | zajio1am wrote:
               | > Debian could package every version variant ... Or, they
               | could maintain immutability ... by having Debian Rust
               | source packages contain their fully vendored dependency
               | set. The former would be a lot of work, especially given
               | how manual the distribution packaging process is today.
               | 
                | That would work for distributions that provide just
                | builds. But one major advantage of Debian is that it is
                | committed to providing security fixes regardless of
                | upstream availability. So they essentially stand in for
                | the maintainers. And maintaining many different versions
                | instead of just the latest one is a lot of redundant work
                | that nobody wants to do.
        
               | LtWorf wrote:
               | > my understanding is Debian could package every version
               | variant
               | 
                | Unlike PyPI, Debian patches CVEs, so having 3000 copies
               | of the same vulnerability gets a bit complicated to
               | manage.
               | 
               | Of course if you adopt the pypi/venv scheme where you
               | just ignore them, it's all much simpler :)
        
               | vbezhenar wrote:
               | > How bad would it suck if every Python app you installed
               | needed to have its own venv for example?
               | 
               | I would love to have that. Actually that's what I do: I
               | avoid distribution software as much as possible and
               | install it in venvs and similar ways.
        
               | LtWorf wrote:
                | Now tell your grandmother to install software that way
                | and report back with the results, please.
        
               | superkuh wrote:
               | >How bad would it suck if every Python app you installed
               | needed to have its own venv for example?
               | 
                | You just described every python3 project in 2024. Pretty
                | much none are expected to work with the system python.
                | But your point still stands: it's not a good thing that
                | there is no python, only pythons. And it's not a good
                | thing that there is no rustc, only rustcs, let alone
                | trying to deal with cargo.
        
               | woodruffw wrote:
               | It's not that they don't work with the system Python,
               | it's that they don't want to share the same _global
               | package namespace_ as the system Python. If you create a
               | virtual environment with your system Python, it'll work
               | just fine.
               | 
               | (This is distinct from Rust, where there's no global
               | package namespace at all.)
        
               | fragmede wrote:
               | > How bad would it suck if every Python app you installed
               | needed to have its own venv for example?
               | 
               | Yeah I hacked together a shim that searches the python
               | program's path for a directory called venv and shoves
               | that into sys.path. Haven't hacked together reusing venv
               | subdirs like pnpm does for JavaScript, but that's on my
               | list.
        
             | MrBuddyCasino wrote:
              | This seems like a fool's errand to me. Unlike C/C++, Rust
              | apps typically have many small-ish dependencies, and trying
              | to align them to a dist-global approved version seems
              | pointless and laborious. Pointless because Rust programs
              | will have far fewer CVEs that would warrant such an
              | approach.
        
               | sshine wrote:
               | Laborious but not pointless.
               | 
               | Rust programs have fewer CVEs for two reasons: its safe
               | design, and its experienced user base. As it grows more
               | widespread, more thoughtless programmers will create
               | insecure programs in Rust. They just won't often be
               | caused by memory bugs.
        
               | arccy wrote:
                | I'd think logic bugs are the majority of CVEs, and Rust
                | doesn't magically make those go away.
        
               | woodruffw wrote:
               | The "majority of CVEs" isn't a great metric, since (1)
               | anybody can file a CVE, and (2) CNAs can be tied to
               | vendors, who are incentivized to pre-filter CVEs or not
               | issue CVEs at all for internal incidents.
               | 
               | Thankfully, we have better data sources. Chromium
               | estimates that 70% of serious security bugs in their
               | codebases stem from memory unsafety[1], and MSRC
               | estimates a similar number for Microsoft's codebases[2].
               | 
               | (General purpose programming languages can't prevent
               | logic bugs. However, I would separately argue that
               | idiomatic Rust programs are less likely to experience
               | classes of logic bugs that are common in C and C++
               | programs, in part because strong type systems can make
               | invalid states unrepresentable.)
               | 
               | [1]: https://www.chromium.org/Home/chromium-
               | security/memory-safet...
               | 
               | [2]: https://msrc.microsoft.com/blog/2019/07/we-need-a-
               | safer-syst...
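
The "invalid states unrepresentable" point can be made concrete with a small sketch (the types and names here are invented for illustration): instead of a boolean flag plus an optional session that can drift out of sync, the states form a single enum, so the invalid combination cannot even be constructed:

```rust
// A connection modeled as an enum: "connected but no session" is not
// a value of this type, so that logic bug is ruled out at compile time.
enum Connection {
    Disconnected,
    Connected { session_id: String },
}

fn describe(c: &Connection) -> String {
    match c {
        Connection::Disconnected => "offline".to_string(),
        Connection::Connected { session_id } => format!("online ({session_id})"),
    }
}

fn main() {
    let c = Connection::Connected { session_id: "abc123".into() };
    assert_eq!(describe(&c), "online (abc123)");
    assert_eq!(describe(&Connection::Disconnected), "offline");
    // Compare the C-style `struct { bool connected; char *session; }`,
    // where `connected == true && session == NULL` is representable.
}
```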
        
             | Asooka wrote:
             | This is a general Linux distro problem, which is entirely
             | self-inflicted. The distro should only carry the software
             | for the OS itself and any applications for the user should
             | come with all their dependencies, like on Windows. Yes, it
             | kind of sucks that I have 3 copies of Chrome (Electron) and
             | 7 copies of Qt on my Windows system, but that sure works a
             | hell of a lot better than trying to synchronise the
             | dependencies of a dozen applications. The precise split
             | between OS service and user application can be argued
             | endlessly, but that's for the maintainers to decide. The OS
              | should not be a vehicle for delivering end-user
             | applications. Yes, some applications should remain as a
             | nice courtesy (e.g. GNU chess), but the rest should be
             | dropped. Basically the split should be "does this project
             | want to commit to working with major distros to keep its
             | dependencies reasonable". I really hope we can move most
             | Linux software to flatpak and have it updated and
             | maintained separately from the OS. After decades of running
             | both Linux and Windows, the Windows model of an application
             | coming with all its dependencies in a single folder really
             | is a lot better.
        
               | yjftsjthsd-h wrote:
               | > Yes, it kind of sucks that I have 3 copies of Chrome
               | (Electron) and 7 copies of Qt on my Windows system, but
               | that sure works a hell of a lot better than trying to
               | synchronise the dependencies of a dozen applications.
               | 
               | Does it? Even if 2 of those chromiums and all of the QTs
               | have actively exploited vulnerabilities and it's anyone's
               | guess if/when the application authors might bother
               | updating?
        
               | Xylakant wrote:
               | A common downside is that the distribution picks the
               | lowest common denominator of some dependency and all apps
               | that require a newer version are held behind. That
               | version may well be out of support and not receive fixes
               | at all any more, which leaves the burden of maintenance
               | on the distribution. Depending on the package maintainer,
                | results may vary. (We sysadmins still remember Debian's
                | effort to backport a fix to OpenSSL, which broke key
                | generation.)
               | 
               | This is clearly a tradeoff with no easy win for either
               | side.
        
               | vbezhenar wrote:
                | Another downside might be that the developer simply does
                | not test his software with OS-supplied library versions,
                | which can cause all kinds of bugs. There's a reason why
               | containers won in server-side development.
        
               | Arnavion wrote:
               | That is a problem for LTS distributions. Rolling
               | distributions do not have this problem.
        
               | yjftsjthsd-h wrote:
               | I don't think that's true? If packages foo and bar both
               | need libbaz, and foo always uses the latest version of
               | libbaz but bar doesn't, you're going to have a conflict
               | no matter whether you're rolling or not. If anything, a
               | slow-moving release-based distro _could_ have an easier
               | time if they can fudge versions to get an older version
               | of foo that overlaps dependencies with bar.
        
               | yjftsjthsd-h wrote:
               | > We sysadmins still remember debians effort to backport
               | a fix to OpenSSL and breaking key generation.
               | 
               | Could you remind those of us who don't remember that? The
               | big one I know about is CVE-2008-0166, but that was 1.
               | not a backport, and 2. was run by upstream before they
               | shipped it.
               | 
               | But yes, agreed that it's painful either way; I think it
               | comes down to that _someone_ has to do the legwork, and
               | who exactly does it has trade-offs either way.
        
               | vlovich123 wrote:
                | macOS and Windows both seem to do quite well actually on
               | this front. You should have OS-level defense mechanisms
               | rather than trying to keep every single application
               | secure. For example, qBitTorrent didn't verify HTTPS
               | certs for something like a decade. It's really difficult
               | to keep everything patched when it's your full time job.
               | When it's arbitrary users with a mix of technical
               | abilities and understandings of the issue it's a much
               | worse problem.
        
               | woodruffw wrote:
               | The problem here is ultimately visibility and
               | actionability: half a dozen binaries with known
               | vulnerabilities isn't much better than a single
               | distribution one, if the distribution isn't (or can't)
               | provide security updates.
               | 
               | Or, as another framing: everything about user-side
               | packaging cuts both ways: it's both a source of new
                | dependency and vulnerability tracking woes, _and_ it's a
               | significant accelerant to the process of getting patched
               | versions into place. Good and bad.
        
               | forrestthewoods wrote:
               | You're getting downvoted but you're not wrong. The Linux
               | Distro model of a single global shared library is a bad
               | and wrong design in 2024. In fact it's so bad and wrong
               | that everyone is forced to use tools like Docker to work
               | around the broken design.
        
               | LtWorf wrote:
               | > any applications for the user should come with all
               | their dependencies, like on Windows
               | 
               | Because that works so well on windows right?
        
             | jppittma wrote:
             | I think what I'm missing is why all of this is necessary in
             | a world without dynamic linking. We're talking about purely
              | build-time dependencies; who cares about those matching
             | across static binaries? If it's this much of a headache,
             | I'd rather just containerize the build and call it a day.
        
               | woodruffw wrote:
               | This is pretty much my thought as well -- FWICT the
               | ultimate problem here isn't a technical one, but a
               | philosophical disagreement between how Rust tooling
               | expects to be built and how Debian would like to build
               | the world. Debian _could_ adopt Rust 's approach with
               | minor technical accommodations (as evidenced by `cargo-
               | deb`), but to do so would be to shed Debian's commitment
               | to global version management.
        
               | mook wrote:
               | I believe that's usually so they can track when a library
               | has a security vulnerability and needs to be updated,
               | regardless of whether the upstream package itself has a
               | version that uses the fixed library.
        
               | cpuguy83 wrote:
               | Because they are also maintaining all those build time
                | dependencies, including dealing with CVEs, backporting
               | patches, etc.
        
             | iknowstuff wrote:
             | The sheer amount of useless busywork needed for this, only
             | to end up with worse results and piss off application
             | creators, is peak fucking Linux.
             | 
              | If I was _paid_ to do this kind of useless work for a
              | living I'd probably be well on the way to offing myself.
        
         | rcxdude wrote:
         | It is also something that will make upstream hate them,
         | possibly to the point of actively making their life difficult
         | in retribution.
         | 
         | (Problems along this line already are why the bcachefs dev told
         | users to avoid debian and the packager eventually dropped the
         | package: https://www.reddit.com/r/bcachefs/comments/1em2vzf/psa
         | _avoid...)
        
           | Hemospectrum wrote:
           | > ...and the packager eventually dropped the package
           | 
           | Related HN discussion from August/September:
           | https://news.ycombinator.com/item?id=41407768
        
           | woodruffw wrote:
           | I'm one of those potential upstreams. It's all OSS and I
           | understand the pain of path dependency, so I don't think it
           | would be fair to go out of my way to make any downstream
           | distribution maintainer's life harder than necessary.
           | 
           | At the same time, I suspect this kind of policy will make my
           | life harder than necessary: users who hit bugs due to
           | distribution packaging quirks will be redirected to me for
            | triage, and I'll have a very hard time understanding (much
            | less _fixing_) quirks that are tied to transitive
            | subdependencies that my own builds intentionally exclude.
        
           | kelnos wrote:
           | > _It is also something that will make upstream hate them_
           | 
           | This was my thought. Xubuntu decided to ship unstable
           | development pre-releases of the core Xfce components in their
           | last stable OS release, and I got really annoyed getting bug
           | reports from users who were running a dev release that was 4
           | releases behind current, where things were fixed.
           | 
           | This was completely unnecessary and wasted a bunch of my
           | time.
        
           | Arnavion wrote:
           | Users ought to report bugs to their distro first, and the
           | distro package maintainer should then decide whether it's an
           | upstream bug or a distro packaging bug. That would cut out
           | all the noise that upstream hates.
           | 
           | The problem is users have gotten savvy about talking to
           | upstream directly. It helps that upstream is usually on easy-
           | to-use and search-engine-indexed platforms like GitHub these
           | days vs mailing lists. It would still be fine if the user did
           | the diligence of checking whether the problem exists in
           | upstream or just the distro package, but they often don't do
           | that either.
        
             | int_19h wrote:
             | It's a lot of effort for the user to do that kind of
             | testing. They would need to either build that package from
             | scratch themselves, or to install a different distro.
        
               | Arnavion wrote:
               | "Users ought to report bugs to their distro first" does
               | not require any testing or compiling. Them reporting
               | directly upstream is the one that would require diligence
               | to prove it's not a distro bug, though that still doesn't
                | necessarily require compiling or checking other distros,
               | just looking at code.
        
       | spenczar5 wrote:
       | > I am not aware of any dependency system that has an explicit
       | machine-readable representation for the "unknown" state, so that
       | they can say something like "A is known to depend on B; versions
       | of B before v1 are known to break; version v2 is known to work".
       | 
       | Isn't this what Go's MVS does? Or am I misunderstanding?
        
         | arccy wrote:
          | Go's MVS only expresses >=, capped at the major version, with
          | the selected version being the minimum that satisfies all
          | declared requirements (not the latest upstream release).
          | 
          | A >= bound doesn't really mean anything below it will break.
       | wakawaka28 wrote:
       | This sounds like a very unsafe approach for this allegedly "safe"
       | language. You should not categorically ignore the advice of the
       | library and app authors in favor of some QA-driven time wasting
       | scheme. If the authors are not releasing software stable enough
       | to put together a distribution, then talk to them and get them to
       | commit to stable releases and support for stable versions of
       | libraries. I know they might not cooperate but at that point you
       | just wash your hands of these problems, perhaps by hosting your
       | own mirror of the language-specific repo. When users start having
       | problems due to space-wasting unruly apps, you can explain to
       | them that the problem lies upstream.
        
       | 01HNNWZ0MV43FF wrote:
       | > The resulting breakages will be discovered by automated QA
       | 
       | It's a bold move, Cotton.
       | 
       | Wiping `Cargo.lock` and updating within semver seems sort-of
       | reasonable if you're talking about security updates like for a
       | TLS lib.
       | 
       | "Massaging" `Cargo.toml` and hoping that automated tests catch
       | breakage seems doomed to fail.
       | 
       | Of course this was 3 years ago and this person is no fool. I
       | wonder what I'm missing.
       | 
       | Did this ever go into production?
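For concreteness, the "within semver" option described above corresponds roughly to these Cargo commands (the crate name `rustls` is just an example of a TLS dependency):

```shell
# Regenerate the lockfile, picking the newest versions that still
# satisfy the semver ranges declared in Cargo.toml:
rm Cargo.lock
cargo update

# Or bump just one dependency (e.g. a TLS library) within semver:
cargo update -p rustls
```

Editing `Cargo.toml` itself, by contrast, changes the declared ranges rather than resolving within them.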
        
       | Macha wrote:
       | To be blunt:
       | 
       | If you compile my Rust library against dependencies that are not
       | compatible as declared in Cargo.toml, the result is on you. If
       | you want to regenerate a compatible Cargo.lock, that would be
       | more understandable, and I don't go out of my way to specify
       | "=x.y.z" dependencies in my Cargo.toml, so I have effectively
       | given permission for that anyway.
       | 
       | They give the example of Debian "bumping" dependencies, but given
       | the relative "freshness" of your typical Rust package vs say...
       | Debian stable, I imagine the more likely outcome would be
       | downgrading dependencies.
       | 
       | This reminds me of the time Debian developers "knew better" with
       | the openssh key handling...
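Concretely, the distinction Macha draws lives in `Cargo.toml` version requirements. A small illustration (`serde` is a real crate; the pinned crate is hypothetical):

```toml
[dependencies]
# Default "caret" requirement: equivalent to ">=1.4.0, <2.0.0".
# Regenerating Cargo.lock anywhere in that range stays within the
# author's declared intent.
serde = "1.4"

# Exact pin: "=x.y.z" opts out of that flexibility entirely.
some-crate = "=0.1.2"
```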
        
         | MuffinFlavored wrote:
         | > If you want to regenerate a compatible Cargo.lock
         | 
          | I've only read the comments here so far and not the article,
          | but it sounds to me that people are up in arms about the "how
          | do we handle this" questions that arise when:
          | 
          | * the ecosystem is primarily C/GNU Makefile
          | 
          | then
          | 
          | you now want to add Rust to it
          | 
          | and
          | 
          | Cargo.lock only cares about Cargo.toml versions, not what
          | other crates' `build.rs` scripts are linking against from the
          | system, external to Rust?
        
       | devit wrote:
       | The correct solution to packaging Rust crates for a distribution
       | seems to be to compile them as Rust dynamic libraries and use
       | package names like "librust1.82-regex-1+aho-corasick" for the
       | regex crate, semver major 1, compiled for the Rust 1.82 ABI, for
       | the given set of features (which should be maximal, excluding
       | conflicting features).
       | 
        | For unclear reasons it seems Debian only packages the sources
        | and not the binaries, and thus doesn't need the Rust version;
        | it also apparently packages only the latest semver version
        | (which might make it problematic to compile old versions, but
        | is not an issue at runtime).
        
         | progval wrote:
         | Are Rust ABIs even guaranteed to be stable for a given compiler
         | version and library features? ie. doesn't the compiler allow
         | itself to change the ABI based on external factors, like how
         | other dependents use it?
        
           | woodruffw wrote:
           | The Rust ABI is unstable, but is consistent within a single
           | Rust version when a crate is built with `dylib` as its type.
           | So Debian could build shared libraries for various crates,
            | but at the cost of a _lot_ of shared library use in a less
            | common/ergonomic configuration for consumers.
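The configuration being referred to is a crate-type setting. A minimal sketch, for a hypothetical crate that a distro might build as a shared library:

```toml
# Cargo.toml fragment for a hypothetical crate built as a Rust
# dynamic library. The resulting .so only has a consistent ABI when
# every consumer is built with the same rustc version.
[lib]
crate-type = ["dylib"]
```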
        
             | zozbot234 wrote:
             | The thing is that even `dylib` is just not very useful
             | given the extent that monomorphized generic code is used in
             | Rust "library" crates. You end up having to generate your
             | binary code at the application level anyway, when all your
             | generic types and functions are finally instantiated with
              | their proper types. Which is also why it's similarly not
              | sensible to split out a C-compatible API/ABI from just
              | any random Rust library crate, which would otherwise be
              | the "proper" solution here (avoiding the need to rebuild
              | the world whenever the system rustc compiler changes).
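The monomorphization point can be seen in a few lines. In this sketch (my own example), the generic function has no single machine-code body to export from a `.so` -- the compiler emits one copy per concrete type in the downstream crate -- while the `dyn`-based function has exactly one body:

```rust
// Monomorphized generic: the compiler generates a separate copy of
// this function for every concrete T it is called with, in the
// *calling* crate. A shared library has nothing to export for it.
fn largest<T: PartialOrd + Copy>(items: &[T]) -> T {
    let mut max = items[0];
    for &x in &items[1..] {
        if x > max {
            max = x;
        }
    }
    max
}

// Dynamic dispatch: one concrete body, so this *could* in principle
// sit behind a stable library boundary.
fn describe(x: &dyn std::fmt::Display) -> String {
    format!("value: {}", x)
}

fn main() {
    // Two instantiations -> two generated copies of `largest`:
    assert_eq!(largest(&[1, 5, 3]), 5);
    assert_eq!(largest(&[1.0, 0.5]), 1.0);
    assert_eq!(describe(&42), "value: 42");
    println!("ok");
}
```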
        
               | woodruffw wrote:
               | Yep, all true.
        
       | atoav wrote:
        | As a software developer who works in Rust as well, I applaud
        | any effort to unify dependencies where possible -- but if you
        | took one of my projects and went all _yolo_ and changed my
        | chosen dependencies in a major distro, _I_ would get all the
        | heat for a choice I didn't make.
        | 
        | This means that if you change my chosen dependencies, you are
        | responsible for ensuring testing exists -- unless Debian
        | packagers provide the tests needed to catch logic errors.
        
       | davexunit wrote:
        | It's frustrating to see the "rewrite it in Rust" meme continue
        | to spread to all sorts of projects when there is no reasonable
       | packaging story for the language and no solution in sight because
       | the community does not see it as a problem. Cargo has introduced
       | a huge problem for distros that didn't exist with languages like
       | C. Compared to Rust, packaging C projects for distros is easy.
       | Because of these problems, developers are receiving more issue
       | reports from distro maintainers and are becoming more hostile to
       | distro packaging, adopting a "use my binary or go away"
       | mentality. Of course, this is a general problem that has been
       | happening for all languages that come with a language-specific
       | package manager; developers love them but they're typically
        | unaware of the massive downstream problems they cause. It's
        | getting harder and harder to exercise freedom 2 of free
        | software, and that's a shame.
        
         | ahupp wrote:
          | It's not like this is unique to Rust; you see similar issues
          | with Node and Python. Distributions have many jobs, but one
          | was solving the lack of package management in C. Now that
          | every modern language has a package manager, trying to apply
          | the C package management philosophy is untenable --
          | specifically, the idea of a single version, globally
          | installed, with distro packages produced for every language-
          | specific package.
        
           | llm_trw wrote:
            | Apart from the fact that building it the 'non-C way'
            | results in a mystery-meat package that you have no idea
            | what it contains:
            | https://hpc.guix.info/blog/2021/09/whats-in-a-package/
            | 
            | Guix is also a distro that allows for any number of
            | versions of the same package globally, something that
            | language-specific dependency managers do not.
            | 
            | Distros are there for a reason, and anyone who doesn't
            | understand that reason is just another contributor to the
            | ongoing collapse of the tower of abstractions we've built.
        
             | tcfhgj wrote:
             | > Distors are there for a reason
             | 
             | for me: make an os out of the kernel
        
         | zozbot234 wrote:
         | The "problems" are the same as with any statically-linked
         | package (or, for that matter, any usage of AppImage, Flatpak,
         | Snap, container images etc. as a binary distribution format).
         | You can use Rust to build a "C project" (either a dynamic
         | library exporting a plain C API/ABI, or an application linking
         | to "C" dynamic libraries) and its packaging story will be the
         | same as previous projects that were literally written in C. The
          | language is just not very relevant here; if anything, Rust
          | has a better story than comparable languages like Golang,
          | Haskell, OCaml, etc. that are just as challenging for
          | distros.
        
           | zajio1am wrote:
            | This is unrelated to whether it is statically or
            | dynamically linked; it is about maintaining API
            | compatibility for libraries.
           | 
           | In C, it is generally assumed that libraries maintain
           | compatibility within one major version, so programs rarely
           | have tight version intervals and maintainers could just use
           | the newest available version (in each major series) for all
           | packages depending on it.
           | 
           | If the build system (for Rust and some other languages) makes
           | it easy to depend on specific minor/patch versions (or upper-
           | bound intervals of versions), it encourages developers to do
           | so instead of working on fixing the mess in the ecosystem.
        
         | forrestthewoods wrote:
         | > and are becoming more hostile to distro packaging
         | 
         | This to me sounds like the distro packaging model isn't a good
         | one.
        
           | davexunit wrote:
           | Someone always makes this comment and it's always wrong.
        
             | forrestthewoods wrote:
             | The Linux model of global shared libraries is an objective
             | failure. Everyone is forced to hack around this bad and
             | broken design by using tools like Docker.
        
               | davexunit wrote:
               | It's not the "Linux model". It's an antiquated distro
               | model that has been superseded by distros like Guix and
               | NixOS that have shown you can still have an
               | understandable dependency graph of your entire system
               | without resorting to opaque binary blobs with Docker.
        
               | forrestthewoods wrote:
               | Oh you're a Nix Truther. Never mind.
        
               | c0l0 wrote:
               | And what is it that makes the *inside* of all (read:
               | almost all) these nice and working Docker/container
               | images tick?
               | 
               | Distributions using a "model of global shared libraries".
        
               | int_19h wrote:
                | Except that it's usually completely wasted on a Docker
                | container. You might as well just statically link
                | everything (and people do that).
        
             | woodruffw wrote:
             | It's been great for C and C++ packaging. I don't think the
             | track record has been great for Python, Go, JavaScript,
             | etc., all of which surfaced the same problems years before
             | Rust.
        
       | josephcsible wrote:
       | > I am proposing that Debian should routinely compile Rust
       | packages against dependencies in violation of the declared
       | semver, and ship the results to Debian's millions of users.
       | 
       | That sounds like a recipe for a lot more security vulnerabilities
       | like https://news.ycombinator.com/item?id=30614766
        
       ___________________________________________________________________
       (page generated 2024-12-26 23:01 UTC)