[HN Gopher] What autoconf got right
___________________________________________________________________
What autoconf got right
Author : WhyNotHugo
Score : 50 points
Date : 2024-06-01 10:00 UTC (13 hours ago)
(HTM) web link (leahneukirchen.org)
(TXT) w3m dump (leahneukirchen.org)
| endgame wrote:
| This is a great list, and it's good to see a counterpoint to the
| autobashing that's trendy these days.
|
| One additional point: autotools are bootstrappable without going
| all the way up to C++ (CMake, ninja) or Python (meson).
|
| And yes, muon and samurai are written in C99, but as soon as I
| wanted to support both -fvisibility=hidden and unit tests in my
| project, I was forced to build both static and shared libraries.
| This forced me back onto libtool, and once I was there I figured
| I may as well stick with the good ol' autotools.
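|
| Roughly, that libtool flow looks like this (a minimal sketch
| with a hypothetical libfoo built from a single foo.c):
|
|   # compile once; libtool produces PIC and non-PIC objects
|   libtool --mode=compile cc -fvisibility=hidden -c foo.c
|   # passing -rpath makes libtool build the shared libfoo.so
|   # alongside the static libfoo.a
|   libtool --mode=link cc -o libfoo.la foo.lo -rpath /usr/local/lib
|   # unit tests link -static, so symbols hidden in the shared
|   # object are still reachable from the archive
|   libtool --mode=link cc -static -o test_foo test_foo.c libfoo.la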
| RcouF1uZ4gsC wrote:
| > One additional point: autotools are bootstrappable without
| going all the way up to C++ (CMake, ninja) or Python (meson).
|
| GCC is written in C++.
|
| So on a modern system you are going to have to use C++ anyway.
| uecker wrote:
| People work on bootstrapping with GCC 4.8, which is still C.
| page_fault wrote:
| A large chunk of meson functionality is supported by muon,
| which is a pure C implementation with no dependencies. I tested
| it for fun with several projects and it was indistinguishable
| from "proper" meson. Haven't tried with anything large like
| qemu or systemd, though.
|
| https://muon.build
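|
| If memory serves, the drop-in flow mirrors meson's, something
| like this (check the muon docs for the current CLI):
|
|   muon setup build
|   muon samu -C build    # samurai, embedded as the ninja stand-in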
| endgame wrote:
| I tried to use muon once and immediately discovered that it
| rejects passing a dict to the project() function, does not
| support cross-compilation, and a few other things I consider
| important. I hope it continues to close the gap.
|
| But also, I still don't have an answer to "I want to build
| with -fvisibility=hidden and also have unit tests" that isn't
| "libtool".
| IshKebab wrote:
| Not very much then... Tbh I think the existence of "packager" as
| a distinct job is a huge failing of the Linux software ecosystem.
|
| As is the fact that 99% of Linux software defaults to hard-
| coding absolute paths and spewing its files all over the
| system, rather than installing into a single directory that
| you can freely relocate.
| ur-whale wrote:
| Another one is that everyone in the 90s believed shared
| libraries were a major innovation and were therefore to be
| used everywhere.
|
| They turned out to bring fairly little benefit (minuscule
| disk space savings) and giant drawbacks (the vaunted "your
| software will be upgraded / bugfixed behind your back", which
| instead turned out to be a configuration nightmare and an
| exponential multiplication of the attack surface for would-be
| attackers).
|
| To this day, I dread trying to run an old binary that was
| linked against shared libraries, say, 5 years ago on a modern
| system.
|
| Chances of it working are quasi zero, and good luck trying to
| reconstitute the tree of shared libraries that are recursively
| required to get that binary to run, glibc often being the worst
| offender of the lot.
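|
| For what it's worth, the closest you can get to reconstituting
| that tree is asking the loader (a rough sketch; output format
| varies by system):
|
|   # list the shared objects the loader would resolve,
|   # transitively, for the old binary
|   ldd ./old-binary
|   # equivalently, via the dynamic loader's own trace mode
|   LD_TRACE_LOADED_OBJECTS=1 ./old-binary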
| zajio1am wrote:
| They are. Static linking may be okay for small and simple
| libraries, but as a user, I do not want multiple versions of
| a complex library linked inside applications, each with
| slightly different behavior. And I do not want to upgrade all
| applications if there is a bug in, say, libpng or
| libfreetype.
| IshKebab wrote:
| > And i do not want to upgrade all applications if there is
| a bug in, say, libpng or libfreetype.
|
| I don't think many people have problems with the idea that
| core system libraries could be dynamically linked. But
| Linux distros take this to a ridiculous extreme. I
| absolutely do not care if a bug in libgmp means my package
| manager updates the 2 apps I have installed that use it
| instead of 1 library.
|
| Would be really interesting to see some actual stats about
| this though. How many library projects do people have
| installed that are depended on by more than say 3 apps? I'm
| guessing it's under 100.
|
| I think that's the approach Flatpak takes with its
| "runtimes" and it seems like a very sensible one.
| zajio1am wrote:
| Even if you just use 2 apps that depend on 1 library,
| there may be tens or hundreds of such apps in the
| distribution repository (which is, btw, the case for
| libgmp). A bug in such a library would force distribution
| maintainers to release new packages for all these
| applications, instead of just one.
|
| I think that if the library is an independent upstream
| project, then there is no reason why it should not be an
| independent, dynamically linked package in distributions.
| IshKebab wrote:
| > A bug in such a library would force distribution
| maintainers to release new packages for all these
| applications, instead of just one.
|
| _Surely_ that's automated?
| dralley wrote:
| Dynamic linking isn't a bad idea for, say, the top 100
| libraries on the system. There it can be genuinely quite
| useful. Beyond that though, the returns diminish quickly.
| dspillett wrote:
| In the 90s, even right at the end of them but especially
| earlier on, space was _far_ more expensive - even a minuscule
| saving could be worth a bit of faff (faff in terms of dev &
| admin time, and a small efficiency hit at the CPU). Also, RAM
| was expensive and shared libraries allow some of that to be
| saved too if your OS is bright enough. Back then the benefits
| really were apparent, especially for common core libraries
| (libc in particular).
|
| Of course these days storage and RAM are both massively
| cheaper, and there are a plethora of variants even for what
| might previously have been _the_ core version of something,
| so the benefits are minimised.
|
| We've seen similar with JS libs over more recent years: first
| everyone had their own copy, costing bandwidth and meaning
| every site/app dev had to update their version when there was
| a security issue. Then CDNs became a thing for libraries &
| such, reducing the bandwidth cost and (unless you pinned at a
| specific minor version) making security/efficiency updates
| easier. Now that shared sources for JS and the like are seen
| as a potential privacy or security issue, and tree-shaking &
| such can reduce the payload for most uses, we've swung back
| and (almost) everyone has their own version, as-is or
| crunched into a bundle.
|
| Package management, the likes of which wasn't available in
| the 90s, deals with the updating of JS libs even if everyone
| has local versions (they get updated on the next build, unless
| held back to fully test a new release), but that in itself
| is a security concern due to supply-chain attacks.
| uecker wrote:
| Shared libraries are critical for security updates.
| rkeene2 wrote:
| I think it's a harder problem on Linux -- when using shared
| objects -- for three reasons:
|
| - flat namespace, which you can sort of work around in some
| cases with dlmopen() but...
|
| - ...glibc's subordinate libraries (pthreads) don't like being
| in the same process as different versions of glibc, and loading
| two different libcs in the same process creates problems
|
| - There isn't a strong check for ABI compatibility at startup;
| the best you can hope for is the soname being set and symbol
| versioning.
|
| Symbol versioning (especially auto-versioning) with proper
| sonames and ABI stability help a lot, but there is still the
| problem that everything needs to use the same glibc within the
| same process.
|
| There's also static linking, but it has its own problems, and
| at that point you're just doing shared linking (plus some
| optimizations) and copying all your dependent libraries that
| all still had to be compiled with the same system library,
| hopefully in a way that integrates with the whole system (but
| probably not).
|
| I took a different approach with AppFS [0] than traditional
| package managers:
|
| - no install step for packages - since there's no install step,
| you can have as many concurrent versions available as you want
|
| However, the hard problems remain. A concrete example is my
| library CACKey [1], which is a PKCS#11 module. PKCS#11 dictates
| shared objects (so no getting around it with static linking)
| and that the library should be thread-safe, so to deal with
| threading it was written to use pthreads primitives (mutexes).
| However, if you try to load pthreads in a process with a
| different version of glibc, pthreads aborts. I tried to fix
| this with dlmopen() [2], but this refuses to load a different
| version of glibc (IIRC). So, your best bet is a package manager
| that compiled everything in the system with the system
| libraries.
|
| [0] https://AppFS.net/
|
| [1] https://CACKey.rkeene.org/
|
| [2]
| https://cackey.rkeene.org/fossil/file?name=build/libcackey_w...
| uecker wrote:
| I agree that packaging should be far easier, but I think the
| main problem is a lack of cross-distribution standardization.
|
| My other 2c: that languages now have their own package
| managers is a far bigger indication of the huge failing (of
| the overall ecosystem). apt-get is a blessing compared to npm,
| pip, cargo, and all that horrible garbage.
| IshKebab wrote:
| Indeed, but the reason these languages have their own package
| managers is precisely because Apt et al are so unsuitable for
| the task. Primarily because:
|
| 1. They aren't cross-platform. NPM, Pip and Cargo all work
| identically on Mac, Windows and every Linux distro.
|
| 2. They are too heavily gate-kept. I can upload a package to
| PyPI, NPM or Crates.io without having to convince anyone that
| my project is worthy. This is a very good thing!
|
| If the Linux community had made a packaging standard with
| similar properties maybe those languages would have used it.
| tmtvl wrote:
| > _I can upload a package to PyPI, NPM or Crates.io without
| having to convince anyone that my project is worthy. This
| is a very good thing!_
|
| It's a good thing for me because I can upload my ransomware
| disguised as a file manager and people won't know it's
| ransomware until they run it. It's not such a good thing
| for my victims because they need to do their due diligence
| for each project instead of having a trustworthy maintainer
| vet it for them.
| 38 wrote:
| Good riddance. These days I download a repo and "go build".
| That's literally it.
|
| I used C/C++ for years, and it was always a nightmare to build
| something, unless it was completely self-contained. When your
| "packaging" is so bad that everything has to be vendored, that
| should be a red flag.
| dlachausse wrote:
| Another thing I really appreciate about go is how easy it makes
| cross compilation.
| zajio1am wrote:
| I mostly agree with the list, but I think there are also some
| arguments about what is suboptimal in Autoconf:
|
| 1) We do not need to support old buggy platforms; we can
| generally assume that platforms are open source and their bugs
| can be fixed, so there is less pressure to work around them in
| application software. Consequently, we do not need such
| detailed checks.
|
| 2) We can assume there is a fully working POSIX shell, so the
| configure script and its checks could be written in plain
| shell; there is no need for M4-generated shell code.
|
| 3) It is probably better not to have a separate 'release
| phase', so end users could just work with code from git. Users
| could run Autoconf themselves to generate the configure
| script, but Autoconf is not well-suited for this, as it lacks
| backwards compatibility.
|
| 4) Most developers do not know M4, so they just hack around
| Autoconf macros without proper understanding.
|
| 5) It would be nice if all these directories that are given to
| configure scripts could be overridden at runtime using
| environment variables.
|
| Ideally, there could be a POSIX shell script library that
| could be used to write portable configure scripts, offering
| all the standard options and checks like the ones generated by
| Autoconf.
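|
| As a minimal sketch of one such check (hypothetical
| check_header helper, plain POSIX sh):
|
|   check_header() {
|       # try to compile a trivial program including the header
|       printf '#include <%s>\nint main(void){return 0;}\n' "$1" \
|           > conftest.c
|       if ${CC:-cc} -c conftest.c -o conftest.o 2>/dev/null; then
|           result=yes
|       else
|           result=no
|       fi
|       rm -f conftest.c conftest.o
|       echo "checking for $1... $result"
|       test "$result" = yes
|   }
|
|   check_header stdint.h && echo '#define HAVE_STDINT_H 1' >> config.h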
| hulitu wrote:
| > We can assume there is a fully working POSIX shell,
|
| Kids these days.
|
| Haven't we learned a long time ago that we shouldn't assume
| anything?
| thefilmore wrote:
| Bell Labs had a simpler alternative in IFFE [0], that consisted
| of a single shell script [1]. You would specify any required
| headers, libraries and tests in a simple text file [2].
|
| [0] https://www.cs.tufts.edu/~nr/cs257/archive/glenn-
| fowler/iffe...
|
| [1] https://github.com/att/ast/blob/master/src/cmd/INIT/iffe.sh
|
| [2]
| https://github.com/att/ast/blob/master/src/cmd/3d/features/s...
| jujube3 wrote:
| _It provides a standardized interface._
|
| This is just a long-winded way of saying it's popular. It's
| popular, therefore people know it, therefore they view whatever
| it does as "the standard." Kind of like... MS Windows?
|
| _It is based on checking features._
|
| The feature-checking approach is slow and makes cross-compilation
| unnecessarily difficult. Plus a lot of the "features" that
| autoconf checks for are things that haven't been an issue for
| decades, like "sizeof char".
| viraptor wrote:
| Standardised doesn't mean popular. It means you know what
| --help, --enable-foo, DESTDIR, etc. will do and that you can
| rely on that format. It means packaging can rely on the same
| flow and common options for each package using autoconf. It
| then means that unless you've got very specific needs, your
| packaging can be abstracted to "here's the source, the name and
| the version - go!" and that's the abstraction many distros
| provide.
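|
| Concretely, the flow a packager gets to assume for any
| autoconf-based project looks like this (a sketch; --enable-foo
| stands in for whatever feature flags the package defines):
|
|   ./configure --prefix=/usr --enable-foo
|   make
|   # stage the install into the package root, not the live system
|   make DESTDIR="$pkgdir" install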
___________________________________________________________________
(page generated 2024-06-01 23:02 UTC)