[HN Gopher] The new APT 3.0 solver
       ___________________________________________________________________
        
       The new APT 3.0 solver
        
       Author : todsacerdoti
       Score  : 96 points
       Date   : 2024-05-14 18:40 UTC (4 hours ago)
        
 (HTM) web link (blog.jak-linux.org)
 (TXT) w3m dump (blog.jak-linux.org)
        
       | amelius wrote:
       | Can anyone explain why libc versions are so often a problem when
       | installing software? I understand that all libraries in an
       | executable need to use the same version of malloc. But otherwise
       | I don't understand the reason for these clashes. Even malloc can
       | be swapped out for another function with the same functionality.
       | And I also don't understand why libc needs to be updated so
       | frequently, anyway.
        
         | codexon wrote:
         | They are a problem because gcc automatically links to the
         | latest version of glibc.
         | 
          | As to why they don't add an option to specify an older
          | version? I don't know either, and it's rather annoying to
          | have to use Docker images of older OSes to target older
          | glibc versions. It's just one of many things that prevent
          | Linux from being as popular as Windows for desktop users.
        
           | IshKebab wrote:
            | I think the main reason they don't offer a `--make-my-
            | binary-compatible-with-the-ancient-linux-versions-users-
            | often-have` flag is that GCC/glibc is a GNU project and
            | they are philosophically against distributing software
            | as binaries.
           | 
           | I don't think there's any technical reason why it couldn't be
           | done.
           | 
           | To be fair to them though, Mac has the same problem. I worked
           | at a company where we had to keep old Mac machines to produce
           | compatible binaries, and Apple makes it hard to even download
           | old versions of MacOS and Xcode.
           | 
           | I guess the difference is MacOS is easy to upgrade so you
           | don't have to support versions from 13 years ago or whatever
           | like you do with glibc.
        
             | codexon wrote:
              | > I think the main reason they don't offer a `--make-
              | my-binary-compatible-with-the-ancient-linux-versions-
              | users-often-have` flag is that GCC/glibc is a GNU
              | project and they are philosophically against
              | distributing software as binaries.
             | 
              | You don't have to statically compile glibc; gcc just
              | needs an option to tell the compiler to target, say,
              | version 2.14 instead of the latest one.
              | 
              | The newest glibc has all the older versions in it.
              | That's why you can compile on, say, Ubuntu 14 and have
              | it run on Ubuntu 24.
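              | 
              | There is a per-symbol workaround today, though: a
              | `.symver` directive pins a reference to an old
              | version. A minimal sketch (the GLIBC_2.2.5/GLIBC_2.14
              | pair for memcpy is x86_64-specific; check which
              | versions your libc actually exports):
              | 
              |     // Bind references to memcpy against the old
              |     // GLIBC_2.2.5 version instead of the newer
              |     // GLIBC_2.14 one the linker would pick by default.
              |     __asm__(".symver memcpy,memcpy@GLIBC_2.2.5");
              | 
              |     #include <cstring>
              | 
              |     int main() {
              |         char dst[16];
              |         std::memcpy(dst, "hello", 6);
              |         return 0;
              |     }
              | 
              | Doing that for every affected symbol by hand is the
              | painful part, which is why people fall back to
              | building in old Docker images.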
        
               | saurik wrote:
               | No like, the point is that the only reason you (and I: I
               | do this all the time, including with my open source
               | software... like: no judgment) want to target some old
               | version of glibc is so you can distribute that binary to
               | people without caring as much about what version of the
                | OS they have; but that would be unnecessary if you
                | just gave them the source code and had them compile
                | their own copy for their system, targeting the exact
                | libraries they have.
        
               | codexon wrote:
                | Unfortunately most people don't want to bother
                | compiling, myself included. I tried Gentoo one time,
                | and it took an hour to compile what apt-get installs
                | on Ubuntu in five minutes.
        
               | fweimer wrote:
                | Only for the dynamically linked bits; the statically
                | linked startup code and libc_nonshared.a are missing
                | from newer versions. Most programs don't need them
                | (who needs working ELF constructors in the main
                | program?). The libc_nonshared.a bits can be
                | reimplemented from scratch easily enough (but we
                | should switch them over to header-only
                | implementations eventually).
        
             | fweimer wrote:
             | I used to think that binary compatibility benefits
             | proprietary applications, but I'm not so sure anymore. From
             | a commercial perspective, when we break binary
             | compatibility (not that we want to), it's an opportunity
             | for selling more stuff.
             | 
             | Many distributions do periodic mass rebuilds anyway and do
             | not need that much long-term ABI compatibility. Binary
             | compatibility seems mostly for people who compile their own
             | software, but have not automated that and therefore
             | couldn't keep up with updates if there wasn't ABI
             | compatibility.
        
           | forrestthewoods wrote:
           | > They are a problem because gcc automatically links to the
           | latest version of glibc. As to why they don't add an option
           | to specify an older version?
           | 
            | Because glibc and ld/lld are badly designed. glibc is
            | stuck in the 80s with awful and unnecessary automagic
            | configure steps. ld/lld expect a full and complete
            | shared library to exist at link time, even though a
            | different shared library will be the one in place at
            | run time.
           | 
           | Zig solves the glibc linking issue. You can trivially target
           | any old version for any supported target platform. The only
           | thing you actually need are headers and a thin,
            | implementation-free lib that contains stub functions.
           | Unfortunately glibc is not architected to make this trivial.
           | But this is just because glibc is stuck with decades of
           | historic cruft, not because it's actually a hard problem.
        
             | einpoklum wrote:
             | > awful and unnecessary automagic configure steps
             | 
             | Steps taken when? When building glibc? And - what steps?
             | 
             | > ld/lld expect a full and complete shared library to exist
             | when compiling ...
             | 
             | But ld and lld are linkers...
             | 
             | > Zig solves the glibc linking issue.
             | 
             | But Zig is a language. Do you mean the Zig standard
             | library? The Zig compiler?
             | 
             | > The only thing you actually need are headers and a thin,
             | implementation free lib that contains stub functions.
             | 
             | Why do you need stub functions at all, if you're not
             | actually using them?
        
           | bregma wrote:
           | Hmm. The MSVCRT.DLL/MSVCRTD.DLL not being binary compatible
           | between releases of Visual Studio is the same thing, except
           | of course you can't even combine some modules compiled for
           | debug with modules compiled without debug in the same
           | executable. The Windows problem has always been so so much
            | worse that pretty much all developers simply resorted to
           | shipping the OS system runtime with every package and it's
           | just expected nowadays. It's where the phrase "DLL hell"
           | originated, after all.
           | 
           | Not to say the ABI problem isn't real if you want to combine
           | binary packages from different Linux-based OSes. Plenty of
           | solutions for that have cropped up as well: containers,
            | flatpaks, snaps, the list goes on.
        
             | forrestthewoods wrote:
             | > The Windows problem has always been so so much worse
             | 
             | Hard, hard disagree. The problems are somewhat comparable.
             | But if any platform is more painful it's Linux. Although
             | they're similar if you exclude glibc pain. At least in my
             | personal experience of writing lots of code that needs to
             | run on win/mac/linux/android.
             | 
              | > pretty much all developers simply resorted to shipping
             | the OS system runtime with every package
             | 
             | Meanwhile Linux developers have resorted to shipping an
             | entire OS via docker to run every program. Because managing
             | Linux environment dependencies is so painful you have to
             | package the whole system.
             | 
              | Needing docker just to launch a program is so
              | embarrassing.
             | 
             | > except of course you can't even combine some modules
             | compiled for debug with modules compiled without debug in
             | the same executable
             | 
             | That's not any different on Linux. That has more to do with
             | C++.
        
               | graemep wrote:
                | > Meanwhile Linux developers have resorted to
                | shipping an entire OS via docker to run every
                | program.
                | 
                | > Needing docker just to launch a program is so
                | embarrassing.
               | 
               | I have never needed docker "just to launch a program".
               | Docker makes it easy to provide multiple containerised
               | copies of an identical environment. Containers are a
               | light alternative to VM images.
               | 
               | I assume you find the existence of Windows containers
               | just as embarrassing? https://learn.microsoft.com/en-
               | us/virtualization/windowscont...
        
               | forrestthewoods wrote:
               | > Docker makes it easy to provide multiple containerised
               | copies of an identical environment.
               | 
                | Correct. The Linux architecture around a global pool
                | of dependencies is, imho, bad and wrong. The thesis
                | is that it's
               | good because you can deploy a security fix to libfoo.so
               | just once for the whole system. However we now live in a
               | world where you actually need to deploy the updated
               | libfoo.so to all your various hierarchical Docker images.
               | _sad trombone_
               | 
               | > Containers are a light alternative to VM images.
               | 
                | A light alternative to Docker is to simply deploy
                | your dependencies and not rely on a fragile,
                | complicated global environment.
               | 
               | > I assume you find the existence of Windows containers
               | just as embarrassing?
               | 
               | Yes.
               | 
               | I know my opinion is deeply unpopular. But I stand by it!
               | Running a program should be as simple as downloading a
               | zip, extracting, and running the executable. It's not
               | hard!
        
               | LtWorf wrote:
               | I guess your software is not libre, right?
               | 
               | In that case, I think you should stick to snap and
               | flatpak.
        
               | fullspectrumdev wrote:
                | Instead of Docker, static linking is a _much_ more
                | elegant solution if library/dep management is
                | painful.
               | 
               | Switching to static linking using a sane libc (not glibc)
               | can be a pain initially but you end up with way less
               | overhead IMO.
        
               | munchler wrote:
               | Static linking is a good way to avoid the problem, but
               | I'd hardly call it "elegant" to replicate the same
               | runtime library in every executable. It's very wasteful -
               | we're just fortunate to have enough storage these days to
               | get away with it.
        
             | delta_p_delta_x wrote:
             | This comment is full of inaccuracies and mistakes, and is a
             | terrible travesty of the Windows situation.
             | 
             | > except of course you can't even combine some modules
             | compiled for debug with modules compiled without debug in
             | the same executable.
             | 
             | There's a good reason for this, which IMO Unix-like
             | compilers and system libraries should start adopting, too.
             | Debug and Release binaries like the standard C and C++
             | runtimes and libraries cannot be inter-mixed because they
              | have different ABIs. They have different ABIs because
              | the former set of binaries have different type
              | layouts, many of which come with debug-specific
              | assertions and tests like bounds-checking, exception
              | try-catch, null-dereference tests, etc.
             | 
             | > The Windows problem has always been so so much worse that
              | pretty much all developers simply resorted to shipping the
             | OS system runtime with every package and it's just expected
             | nowadays.
             | 
             | This is not true at all. There are several layers to the
             | counterargument.
             | 
             | Firstly, UCRT has supplanted MSVCRT since Visual Studio
             | 2015, which is a decade old this year. Additionally, UCRT
             | can be statically linked: use `/MT` instead of `/MD`. And
             | linking back to the previous quote, to statically link a
             | debug CRT, use `/MTd`. Set this up in MSBuild or CMake
             | using release/debug build configurations. UCRT is available
             | for install (and maintained with Windows Update) in older
             | versions of Windows going back to Vista.
             | 
             | Next, Windows by default comes with several versions of C++
             | redistributables going back to Visual Studio 2005. All of
             | these redistributables are also regularly maintained with
             | Windows Update.
             | 
             | Finally, Windows SDK versions targeting various versions of
             | Windows are available for all supported Visual Studio
             | developer environments. The oldest currently available in
             | Visual Studio 2022 is Windows XP SP3[1].
             | 
             | These all serve to thoroughly solve both combinations of
             | backward-forward compatibility, where (a) the runtime
             | environment is newer than the developer environment, and
             | (b), the runtime environment is older than the developer
             | environment.
             | 
             | It is perfectly possible to compile a single `.exe` on
             | Visual Studio 2022 in Windows 11, and expect it to run on
             | Windows XP SP3, and the vice versa: compile a single `.exe`
             | on Visual Studio 6, and expect it to run on Windows 11. No
             | dynamic libraries, no DLLs, nothing; just a naked `.exe`.
             | Download from website, double-click to run. That's _it_. No
             | git clone, no GNU Autotools, no configure, no make, no make
             | install (and then f*cking with rpaths), nothing.
             | Prioritising binary-only software distribution means
             | Windows has prioritised the end-user experience.
             | 
             | > It's where the phrase "DLL hell" originated, after all.
             | 
             | This is also incorrect. 'DLL hell' is a pre-NT problem that
             | originated when packagers decided it was a good idea to
             | overwrite system binaries by installing their own versions
             | into system directories[2]. Sure, the versioning problem
             | was there too, but this is itself a result of the
             | aforementioned DLL stomping.
             | 
             | [1]: https://learn.microsoft.com/en-
             | us/cpp/build/configuring-prog... [2]:
             | https://en.wikipedia.org/wiki/DLL_Hell#DLL_stomping
             | 
             | I fully daresay writing C and C++ for Windows is an easier
             | matter than targeting _any_ Unix-like. For context, see
             | what video game developers and Valve have done to check off
             | the  'works on Linux' checkbox: glibc updates are so
             | ridiculously painful that Valve resorted to simply patching
             | WINE and releasing it as Proton, and WINE is the ABI target
             | for video games on Linux.
        
               | jraph wrote:
               | That Windows is generally better at handling
               | compatibility is one thing, but I'm curious about the
               | following part:
               | 
               | > There's a good reason for this, which IMO Unix-like
               | compilers and system libraries should start adopting,
               | too.
               | 
                | Why? I understand from your comment that you can't
                | mix debug and release objects on Windows because the
                | ABI is different. That's not a feature, that's a
                | limitation. If it works on Linux to mix debug-
                | enabled objects with "release" ones, what would be
                | the point of making that not work anymore?
               | 
               | IIUC debug symbols can be totally separated from the
               | object code, such that you can debug the release if you
               | download the debug symbols. A well configured GDB on
               | distros that offer this feature is able to do it
               | automatically for you. It seems very useful and elegant.
               | Why can't Windows do something like this and how is it an
               | advantage?
               | 
                | (Genuine question; I have a remote idea of how ELF
                | works (wrote a toy linker), not much of how DWARF
                | works, and not the slightest idea of how all this
                | stuff works on Windows.)
        
               | josephg wrote:
                | Yes, I wonder that too. The comment says that debug
                | and release builds have different ABIs because they
                | have different type layouts. But why do they have
                | different type layouts?
               | Bounds checking and assertions shouldn't change the type
               | layout. It seems to me that debug flags should generally
               | only modify code generation & asserts. This is usually
               | the case on Linux, and it's extremely convenient.
               | 
               | If windows is going to insist on different libraries in
               | debug and release mode, I wish the development version of
               | the library bundled debug and release builds together so
               | I could just say "link with library X" and the compiler,
               | linker and runtime would just figure it out. (Like
               | framework bundles on the Mac). Windows could start by
               | having a standard for library file naming - foo.obj/dll
               | for release and foo-debug.obj/dll for debug builds or
               | something. Then make the compiler smart enough to pick
               | the right file automatically.
               | 
               | Seriously. It's 2024. We know how to make good compiler
               | tooling (look at go, Swift, rust, etc). There's no sane
               | reason that C++ has to be so unbelievably complex and
               | horrible to work with.
        
               | forrestthewoods wrote:
               | > If it works on Linux to mix debug-enabled objects with
               | "release", what use would it have to make it not work
               | anymore?
               | 
               | There is no difference between Linux and Windows here.
               | The debug/release issue is ultimately up to the API
               | developer.
               | 
                | C++ has the standard template library (STL).
               | libstdc++, libc++, and MSVC STL are three different
               | implementations. STL defines various iterators. A common
               | choice is for a release-mode iterator to be a raw
               | pointer, just 8 bytes on 64-bit. But the debug-mode
               | iterator is a struct with some extra information for
               | runtime validation, so it's 24 bytes!
               | 
               | The end result is that if you pass an iterator to a
               | function that iterator is effectively two completely
               | different types with different memory layouts on debug
               | and release. This is a common issue with C++. Less so
               | with C. But it's not a platform choice per se.
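                | 
                | You can see the layout change with nothing more than
                | sizeof. A minimal check, assuming MSVC's STL (the
                | debug CRT sets _ITERATOR_DEBUG_LEVEL, which turns on
                | the checked iterators):
                | 
                |     // Build twice: `cl /EHsc /MD iter.cpp` (release
                |     // CRT) and `cl /EHsc /MDd iter.cpp` (debug CRT),
                |     // then run both. The printed sizes differ, which
                |     // is why mixing such objects corrupts memory.
                |     #include <cstdio>
                |     #include <vector>
                | 
                |     int main() {
                |         std::printf("sizeof(iterator) = %zu\n",
                |             sizeof(std::vector<int>::iterator));
                |         return 0;
                |     }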
               | 
                | > IIUC debug symbols can be totally separated from the
               | object code, such that you can debug the release if you
               | download the debug symbols. A well configured GDB on
               | distros that offer this feature is able to do it
               | automatically for you. It seems very useful and elegant.
               | Why can't Windows do something like this and how is it an
               | advantage?
               | 
               | MSVC always generates separate .pdb files for debug
                | symbols. Windows has spectacular tooling support
               | for symbol servers (download symbols) and source indexing
               | (download source code). It's great.
        
               | delta_p_delta_x wrote:
               | [delayed]
        
           | Sesse__ wrote:
            | Linking to the latest version of glibc is, in itself,
            | not a problem -- glibc hasn't bumped its soname in ages;
            | it uses symbol versioning instead. So you only get a
            | problem if you use a symbol that doesn't exist in older
            | glibc (i.e., some specific interface that you are using
            | changed).
           | 
           | As for using an older version of glibc, _linking_ isn't the
           | problem -- swapping out the header files would be. You can
           | probably install an old version of the header files somewhere
           | else and just -I that directory, but I've never tried.
           | libstdc++ would probably be harder, if you're in C++ land.
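            | 
            | You can even watch the versions coexist at runtime with
            | dlvsym(), the version-aware cousin of dlsym(). A small
            | sketch (the version strings are the x86_64 ones; other
            | architectures use different base versions):
            | 
            |     // g++ probe.cpp -ldl && ./a.out
            |     // (g++ predefines _GNU_SOURCE, which dlvsym needs)
            |     #include <cstdio>
            |     #include <dlfcn.h>
            | 
            |     int main() {
            |         void *libc = dlopen("libc.so.6", RTLD_NOW);
            |         if (!libc) return 1;
            |         // glibc kept the old memcpy@GLIBC_2.2.5 as a
            |         // compat alias when it added memcpy@GLIBC_2.14.
            |         void *o = dlvsym(libc, "memcpy", "GLIBC_2.2.5");
            |         void *n = dlvsym(libc, "memcpy", "GLIBC_2.14");
            |         std::printf("2.2.5: %p\n2.14:  %p\n", o, n);
            |         dlclose(libc);
            |         return 0;
            |     }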
        
             | fweimer wrote:
             | Recent libstdc++ has a _dl_find_object@GLIBC_2.35
             | dependency, so it's not exactly trivial anymore to link a
              | C++ program against an older, side-installed glibc version
             | because it won't have that symbol. It's possible to work
             | around that (link against a stub that has
             | _dl_find_object@GLIBC_2.35 as a compat symbol, so that
             | libstdc++ isn't rejected), but linking statically is much
              | more difficult because libstdc++.a (actually, libgcc_eh.a)
             | does not have the old code anymore that _dl_find_object
             | replaces (once GCC is built against a glibc version that
             | has _dl_find_object).
             | 
             | This applies to other libraries as well because there are
             | new(ish) math functions, strlcpy, posix_spawn extensions
             | etc. that seem to be quite widely used already.
        
           | zX41ZdbW wrote:
           | I've made a library named "glibc-compatibility": https://gith
           | ub.com/ClickHouse/ClickHouse/tree/master/base/gl...
           | 
           | When linking with this library before glibc, the resulting
           | binary will not depend on the new symbol versions. It will
           | run on glibc 2.4 and on systems as old as Ubuntu 8.04 and
           | CentOS 5 even when built on the most modern system.
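            | 
            | The core trick, roughly: hand the linker a local,
            | unversioned definition of each "new" symbol so the
            | binary never ends up referencing the versioned one from
            | glibc. A hypothetical sketch for strlcpy (which glibc
            | only added in 2.38):
            | 
            |     // strlcpy_compat.cpp -- linked ahead of glibc, this
            |     // definition satisfies any strlcpy reference, so no
            |     // strlcpy@GLIBC_2.38 dependency lands in the binary.
            |     #include <cstddef>
            | 
            |     extern "C" std::size_t strlcpy(char *dst,
            |             const char *src, std::size_t size) {
            |         std::size_t len = 0;
            |         while (src[len] != '\0') ++len;  // strlen(src)
            |         if (size > 0) {
            |             std::size_t n =
            |                 len < size - 1 ? len : size - 1;
            |             for (std::size_t i = 0; i < n; ++i)
            |                 dst[i] = src[i];
            |             dst[n] = '\0';
            |         }
            |         return len;  // full source length (BSD contract)
            |     }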
        
           | LtWorf wrote:
           | chroot has existed for many years.
        
         | fweimer wrote:
          | You don't have to update; it's just that developers seem to
         | like to run the latest stuff, and many appear to build
         | production binaries on their laptops (rather than in a
         | controlled build environment, where dependencies are tightly
         | managed, and you could deliberately stick to an older
         | distribution easily enough).
         | 
         | The dependency errors indicate real issues because of the way
         | most distributions handle backwards compatibility: you have to
         | build on the oldest version you want to support. Those errors
         | happen if this rule is violated. For glibc-based systems, the
         | effect is amplified because package managers have become quite
         | good at modeling glibc run-time requirements in the package-
         | level dependencies, and mismatches result in install-time
         | dependency errors. Admittedly, I'm biased, but I strongly
         | suspect that if we magically waved away the glibc dependency
         | issues, most applications still wouldn't work because they
         | depend on other distribution components, something that's just
         | not visible today.
        
       | yjftsjthsd-h wrote:
       | > The most striking difference to the classic APT solver is that
       | solver3 always keeps manually installed packages around, it never
       | offers to remove them.
       | 
       | Y'know, when you put it like that it does seem rather obvious,
       | doesn't it? Principle of least surprise and all that.
       | 
       | Edit: To be clear, not knocking anyone for it; hindsight is 20/20
       | and I've never written a dependency solving algorithm so I'm in
       | no position to judge.
        
         | giancarlostoro wrote:
          | I love it when this happens: I find a new product and
          | think to myself that it would be even better if they made
          | this one really simple tweak. Then a week or two later
          | they make that exact change.
          | 
          | Apt probably moves a little slower on "big" changes, since
          | there are many implications. I can't even imagine the
          | number of obscure scripts that something like this breaks
          | because the output and behavior is slightly off somehow.
        
         | layer8 wrote:
          | It's not that clear-cut, because sometimes you forget all
          | the packages you installed manually over the years and
          | aren't using anymore, and it can make sense to be offered
          | to remove one when that helps resolve a conflict with a
          | newer package that you actually want to use now.
        
           | yjftsjthsd-h wrote:
           | Yeah, I've hit that exact situation on one of my machines
           | running Void Linux (so obviously completely different package
           | manager). I personally think it's best to tell the user that
           | it can't make the given constraints work and suggest which
           | package(s) are the problem, and then if the user is willing
           | to part with them then they can `apt remove foo && apt
           | upgrade` or w/e (which is more or less my experience of how
           | Void's xbps does it).
        
           | inopinatus wrote:
           | The healthy choice is to rebuild rather than upgrade any
           | long-lived OS installation (e.g. your desktop) on distro
           | release.
           | 
           | Corollary: make it easy on yourself: mount /home separately
           | (and maybe /var too), and keep /etc in scm.
        
             | josephg wrote:
             | > The healthy choice is to rebuild rather than upgrade any
             | long-lived OS installation (e.g. your desktop) on distro
             | release.
             | 
             | I wish there was more rigor (and testing) with this sort of
             | thing. Generally systems should have the invariant that
             | "install old" + "upgrade" == "install new". This property
             | can be fuzz tested with a bit of work - just make an
             | automated distribution installer, install random sets of
             | packages (or all of them), upgrade and see if the files all
             | match.
             | 
             | /etc makes this harder, since you'd want to migrate old
             | configuration files rather than resetting configuration to
             | the defaults. But I feel like this should be way more
             | reliable in general. And people want that reliability - if
             | the popularity of docker and nix are anything to go by.
        
               | LtWorf wrote:
                | In theory yes, but this collides with the fact that
                | defaults change.
               | 
               | For example a new system won't have pulseaudio, but it
               | won't be removed and replaced automatically because that
               | would be potentially disruptive to existing users.
        
             | layer8 wrote:
             | I've been upgrading Debian in-place for almost two decades
             | now and prefer that approach. Security updates are
             | automated (daily), major-version upgrades have almost no
             | downtime, and only very rarely is there a critical
             | configuration that needs to be adjusted.
             | 
             | The installed packages can be listed with `dpkg --get-
             | selections`, and that list can be replayed should it be
             | necessary to recreate the installation, plus a backup of
              | /etc/. But I never had to do this; Debian just works.
        
               | hsbauauvhabzb wrote:
               | Debian releases are somewhat slower than Ubuntu or
               | similar - given infinite time, esoteric configurations
               | will break on update due to some edge case 4 dist-
               | upgrades ago.
        
               | inopinatus wrote:
               | I was previously a maintainer of certain Debian packages,
               | and of the same vintage, so this advice comes with the
               | extra salt of having seen how the sausage is made. I
               | shudder to think how many abandoned files, orphan
               | packages, and obsolete/bad-practice configurations would
               | be lurking in a build that has only been release-upgraded
               | for decades. Yes, no doubt it functions. By the same
               | token, people can live in their own filth. Should they? I
               | choose not to.
        
           | josephg wrote:
           | I don't know if other distributions do this, but I quite like
           | gentoo's "world" file. Its a text file listing all the
           | manually installed packages. Everything in the list is pinned
           | in the dependency tree, and kept up to date when you "emerge
           | update". I constantly install random stuff for random
           | projects then forget about it. The world list lets me easily
           | do spring cleaning. I'll scroll through the list, delete
           | everything I don't recognise and let the package manager auto
           | remove unused stuff.
           | 
           | I think nix has something similar. I wish Debian/ubuntu had
           | something like that - maybe it does - but I've never figured
           | it out.
        
       | david_draco wrote:
        | After reading so many GPT news items, I was confused about
        | what APT is -- no explanation on the page, no link... it
        | took me a while to realise it is about Debian Linux's
        | packaging tool apt.
        
       | gsich wrote:
       | From the guy that brought you the keepassxc downgrade.
       | 
       | https://github.com/keepassxreboot/keepassxc/issues/10725
        
         | k8sToGo wrote:
         | What downgrade are you referring to?
        
           | gsich wrote:
           | https://github.com/keepassxreboot/keepassxc/issues/10725
        
             | k8sToGo wrote:
             | Thanks. I love opensource drama.
        
         | bjoli wrote:
          | I never thought I would say this, but: I am a pretty
          | recent flatpak convert. This thing just made me more
          | convinced that it is a good thing to let the devs deliver
          | the apps themselves and let the distro people do the
          | distro stuff.
          | 
          | After accepting flatpak I started using Fedora Silverblue
          | and then switched to openSUSE Aeon, and I have been very
          | happy. The only pain point was getting Emacs working
          | properly.
        
         | j1elo wrote:
          | Wait, so this guy decided to fork the project, and
          | seemingly abused his position to supplant the previous
          | version with his own opinionated fork? All this while
          | disregarding the opinion of the upstream devs themselves
          | and being arrogant and stubborn in his replies.
          | 
          | What a spectacular way to break things for end users.
          | 
          | If there's one thing to learn and apply from Linus, IMHO,
          | it's his attitude about NEVER breaking userspace in the
          | Kernel. This lesson can be adapted to most software, and
          | we really should strive for more of it, not less
          | (obviously adjusting to each case; here, replace
          | "userspace" with "user setups").
        
           | layer8 wrote:
           | First, this is not in stable Debian (yet). Secondly, it is
           | common and expected for distributions to select the feature
           | flags they deem appropriate. It is not a fork. The mistake
           | here was not to provide a compatible upgrade option in
           | addition to the new default.
        
       ___________________________________________________________________
       (page generated 2024-05-14 23:00 UTC)