[HN Gopher] Apple's new library format combines the best of dyna...
___________________________________________________________________
Apple's new library format combines the best of dynamic and static
Author : covector
Score : 68 points
Date : 2023-06-07 20:13 UTC (2 hours ago)
(HTM) web link (developer.apple.com)
(TXT) w3m dump (developer.apple.com)
| liuliu wrote:
| Weird. Yes, link time is a big issue at development time, but
| with the new development of ld-prime and sold, trading that off
| with dylibs doesn't seem like that big of a deal.
|
| On the other hand, the other big use of embedded dylibs I've
| seen (other than providing 3rd-party libraries for other people
| to use) is to share code between the main app binary and its
| extensions, which mergeable libraries don't seem to support.
|
| I guess the biggest use is indeed to deliver 3rd-party closed-
| source libraries?
| compiler-guy wrote:
  | Many (perhaps most?) dylibs that a Mac app ships with are all
| within the app bundle itself, and signed as such, so dylib
| sharing is not a common use case.
| liuliu wrote:
    | Yeah, and the only reason they don't static link is that
    | these dylibs are some 3rd-party closed-source libraries,
    | which avoids a lot of version clashes compared to naively
    | shipping a static lib.
|
    | Otherwise you put some code in a dylib so your app extension
    | can share it with the main app, but that is tricky due to the
    | different RAM usage restrictions between app extensions and
    | the main app.
| enriquto wrote:
| give me static ape or give me death!
| notacoward wrote:
| This looks _really_ hype-y. AFAICT a "merged" binary is just a
| statically linked one. The only problem it solves is a self-
| inflicted one - failure of the old static linker to prune or de-
| duplicate stuff in linked libraries (particularly ObjC-specific
| stuff). It notably does _not_ solve the main problem with static
| linking - old insecure code which would have been fixed with an
| updated dynamic library but is instead "stranded" by having
| already been copied into executables (or bundles, though no
| reasonable person would quibble over that distinction), never
| to change again. Also: provenance, supply chain, etc.
|
| Yes, I'm aware that superkuh beat me to this last point, and also
| that updating dynamic libraries can also cause breakage. But I
| still think it's important to note that this isn't really
| advancing the state of the art like Apple would like you to
| believe. It's just a new middle-of-the-road approach with its own
| possibly-positive tradeoffs _and pitfalls_. Nothing here that
| wasn't already considered at least thirty years ago, and whether
| they were right or wrong to choose another path is less relevant
| than the fact that it's _not new_.
| compiler-guy wrote:
| I'm not seeing any hype. They have shipped a nice new feature
| for their build system. I don't see any claims of it advancing
  | the state of the art or being life-changing, just that it
  | improves certain workflows in an easy way and provides a
  | convenient way for their developers to use it.
| lgg wrote:
| There is a bit more to it than that. Yes, it was always
| possible to use a mess of build rules and shell scripts to make
| your debug and release builds swap between fully static and
| dynamic libraries, but it was a lot of work, and was difficult
| to maintain. The novelty of mergeable dylibs is that they now
| make it trivial to switch between the two without all of that
| work. In particular it solves two large problems people tended
| to run into:
|
  | 1. Static archives and dynamic libraries have different
  | semantics with respect to symbol lookup. In particular, due
  | to two-level namespaces, multiple dylibs can export the same
  | symbol without a runtime collision, since the binary importing
  | them stores the library a symbol came from in addition to the
  | name. This is different from static archives, where you have
  | sets of symbols brought in per .o file. That means it is often
  | non-trivial to switch between dynamic libraries and static
  | archives. Mergeable libraries solve this by letting you use
  | the semantics of dynamic libraries and two-level namespaces
  | for content that will be statically linked (a rough
  | runtime-lookup sketch is below).
|
  | 2. Most people use frameworks, not raw dylibs. They do that
  | for a lot of reasons, but the biggest one is to allow them to
  | distribute non-code resources that are associated with the
  | library. This is a common problem that has been solved in
  | various ways (Windows embeds the resources in the DLL files,
  | classic Mac OS depended on resource forks, etc.). Mergeable
  | dylibs are completely supported by the runtime in such a way
  | that enough of the dylib's identity is preserved so that
  | things like NSBundle continue to work as a way to find the
  | bundle's resources despite the code itself being merged.
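  |
  | A rough Swift sketch of the library-scoped lookup idea from
  | point 1, using runtime APIs purely as an illustration (the
  | dylib names and the exported C symbol "version" here are
  | hypothetical):
  |
  |   import Darwin
  |
  |   // Two embedded dylibs may both export a C symbol named
  |   // "version". Lookups below are scoped to a specific library
  |   // handle, so the duplicate names never collide -- much like
  |   // a two-level-namespace import records which dylib each
  |   // symbol came from.
  |   typealias VersionFn = @convention(c) () -> Int32
  |   for path in ["libA.dylib", "libB.dylib"] {
  |       guard let handle = dlopen(path, RTLD_NOW),
  |             let sym = dlsym(handle, "version")
  |       else { continue }
  |       let version = unsafeBitCast(sym, to: VersionFn.self)
  |       print(path, version())  // resolved per library
  |   }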
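  |
  | And a minimal sketch of the point-2 behavior; MyKit and the
  | "Icon" resource are hypothetical names, and in practice the
  | class would be defined inside the mergeable framework rather
  | than alongside this snippet:
  |
  |   import Foundation
  |
  |   // Hypothetical type; assume it ships in MyKit.framework, a
  |   // mergeable framework embedded in the app bundle.
  |   final class MyKitAPI {}
  |
  |   // Even after MyKit's code is merged into the app
  |   // executable, enough of the dylib's identity is preserved
  |   // that Bundle(for:) still resolves to MyKit.framework, so
  |   // resource lookup keeps working unchanged.
  |   let bundle = Bundle(for: MyKitAPI.self)
  |   let icon = bundle.url(forResource: "Icon",
  |                         withExtension: "png")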
| ridiculous_fish wrote:
| The problem being tackled here is link time in debug builds.
| This affects all platforms.
| roqi wrote:
| > The problem being tackled here is link time in debug
| builds. This affects all platforms.
|
| I've worked on many projects, big and small, and the link
| time of debug builds was never a problem that was worth
| fixing.
|
    | In fact, even today I was discussing with a coworker the
    | inefficiencies of the project's build process, and we
    | literally commented that only having to link a huge
    | subproject is a major blessing.
| compiler-guy wrote:
| For Google at least, link times are sufficiently important
      | that the company has rewritten the linker from scratch
      | twice--both open source: the GNU gold linker, which ships
      | as part of GNU binutils, and subsequently the ELF version
      | of LLVM's lld.
|
| So although you might not encounter issues with link times
| (debug or otherwise), it is a multi-million dollar problem
| for big companies like Google and Apple. Both in resources
| and engineer time.
| roqi wrote:
| > So although you might not encounter issues with link
| times (debug or otherwise), it is a multi-million dollar
| problem for big companies like Google and Apple. Both in
| resources and engineer time.
|
| I appreciate your appeal to authority, but I worked at a
| FANG on a project that had over 20 GB worth of static and
| dynamic/shared libraries.
|
| Linking was never an issue.
| compiler-guy wrote:
| Err, "Google has rewritten the linker twice. Both times
| with the stated goal to make link times much faster."
| isn't an appeal to authority. It's evidence that the
| company has found speeding up linking to be worth
| millions of dollars. Otherwise it wouldn't have done it.
|
| They surely weren't doing it for fun.
| roqi wrote:
| > Err, "Google has rewritten the linker twice. Both times
| with the stated goal to make link times much faster."
| isn't an appeal to authority.
|
| It is, and a very weak one considering Google has a
| history of getting people to work on promotion-oriented
| projects.
|
| https://news.ycombinator.com/item?id=31261488
|
| > It's evidence that the company has found speeding up
| linking to be worth millions of dollars.
|
| It really isn't. You're buying into the fallacy that a
| FANG can never do wrong and all their developers are
| infallible and walk on water. All you're able to say is
| google did this and google did that, and you're telling
| that to a guy who has first-hand experience on how this
| and that is actually made. You never mentioned any
| technical aspect or more importantly performance
| differences. You only name-dropped Google, and to a guy
| who already worked at a FANG.
|
| Linking was never an issue.
| KerrAvon wrote:
| There are many FAANG customers who care about link time;
| some of them are also FAANGs, but certainly not all.
| You're falling into the libertarian trap of thinking that
| because it didn't happen in your experience, it could not
| possibly happen to anyone.
| [deleted]
| roqi wrote:
| > But I still think it's important to note that this isn't
| really advancing the state of the art like Apple would like you
| to believe. It's just a new middle-of-the-road approach with
| its own possibly-positive tradeoffs and pitfalls.
|
| It also seems that this new library format barely solves any
| problem and in the process bumps up the number of library types
| that developers need to onboard and possibly support.
| baybal2 wrote:
| [dead]
| ChrisMarshallNY wrote:
| It won't make much difference to me, until it's supported in SPM
| (which will probably be soon).
| covector wrote:
| Documentation at
|
| https://developer.apple.com/documentation/xcode/configuring-...
| jevinskie wrote:
| They better keep the new linker open source like they did for
| ld64 (despite delays), or it is going to become a private ABI
| nightmare.
| superkuh wrote:
| It's really weird to me that they're talking about balancing
| static vs. dynamic in terms of dev-side build-time speed vs.
| load times when running the software.
|
| To me the dynamic vs. static balancing act is about
| compatibility vs. security: dynamically linked applications
| easily get lib security updates, but the binary may or may not
| work with your system libs; statically compiled ones are more
| compatible but don't automatically get system lib security
| updates.
| liuliu wrote:
| They are talking about app-bundled dylibs, which cannot be
| updated separately due to the whole-bundle signing.
| notacoward wrote:
| In other words it's a fix for a somewhat Apple-specific (and
| in this case Apple-created) problem?
| astrange wrote:
| iOS definitely isn't the only platform where apps ship
| their own versions of DLLs.
| notacoward wrote:
| No, not the only one, but the converse isn't rare either.
| Whether it's "bundles" or containers, a lot of people act
| as though bloat and slow build times because of static
| linking are inevitable when in fact the pain is _self
| inflicted_.
| conradev wrote:
| The problem is not Apple-specific and their solution could
| be useful elsewhere.
|
| The specific optimization this achieves is during build
| time only: these new files are static libraries that are
| quicker to link. It is a small shift of some of the linking
| pipeline into the (parallel) builds of dependent libraries,
| rather than heaping it all onto the linker at the end of
      | the build. Otherwise, the linker essentially has to
      | re-link from scratch for every small change.
|
| Parallelization has long been known as the best way to
| speed up linking. This Apple change comes in addition to
| rewriting their linker to be massively parallel. Mold did
| it first, though: https://github.com/rui314/mold
|
| A faster development cycle is one of the coolest
| improvements that Zig brings - lightning fast recompiles,
| bringing the native development cycle closer to web speed.
| More languages should get this capability, and Apple needs
| it for Swift to get instantly updating SwiftUI previews.
|
      | Static linking is required to get the best performance: no
      | linker runtime cost, cross-language LTO, and dead code
      | elimination.
|
| If this optimization is generally applicable and developers
| find it worthwhile, I could imagine this making its way to
| GCC-land. At the very least, gold needs replacing/rewriting
| to be competitive now.
|
| edited: for clarity
| ndesaulniers wrote:
| > Mold did it first, though:
| https://github.com/rui314/mold
|
| Before LLD?
| conradev wrote:
| I had forgotten about LLD, you are right, but also,
| they're the same person! :P
| RcouF1uZ4gsC wrote:
| I think the whole security thing of relying on dynamic linking
| is a red herring.
|
| It is neither necessary nor sufficient. It will miss private
| dynamic libraries, Docker containers, potentially virtual envs.
|
| And it has a huge cost in terms of maintenance and backwards
| compatibility.
|
| In a world where we have regular builds from source, this model
| really doesn't have a place.
| JohnFen wrote:
| I disagree, especially for the desktop. I think there's great
| value in being able to update a shared library and get a fix
| in place for all applications, versus waiting for each
| application to update.
| notacoward wrote:
| > It will miss private dynamic libraries, Docker containers,
| potentially virtual envs.
|
| Yes, it will. Does that make dynamic libraries a red herring,
| or does it mean these approaches all make the same mistake?
    | Please _think_ before you answer. Just because something's
| common doesn't mean it's wise or necessary.
|
| > it has a huge cost in terms of maintenance and backwards
| compatibility
|
| How much are those "huge costs" unique to that approach?
| Maintenance and backwards compatibility are _still_ problems
| even with static linking or embedding into containers or any
| other alternative. It 's disingenuous to count common
| problems against only one approach. The real tradeoff is
| between problems that are unique to each, and it's possible
| to disagree about which choice is best without stacking the
| deck for one side.
| nicoburns wrote:
| Ultimately if the app is unmaintained, then it's likely to
| have security issues even if it's using dynamic libraries.
| If it is maintained then it shouldn't be a big deal to
| update it.
| [deleted]
| hexomancer wrote:
| Frankly, I think dynamic libraries are _less_ secure (at least
| when talking about a desktop environment, not servers) because
| sometimes package managers update dynamic library dependencies
| without actually checking if binary compatibility was preserved
| in the new version, causing various crashes and bugs (I
  | experienced this first-hand with an Arch Linux package of
  | software I was developing).
| lgg wrote:
    | That may be true on Linux, but it is not the case on Apple
| platforms. On Darwin the base system is immutable and the
| dylibs embedded in an application bundle cannot be changed
    | without invalidating the code signature. In order to have this
| sort of issue occur you need to opt out of multiple security
| settings that are enabled by default (such as the hardened
| runtime and library validation) AND then be sloppy with your
| use of relative paths or dlopen() calls.
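    |
    | To make the "sloppy" part concrete, here is a rough Swift
    | sketch (libPlugin.dylib is a hypothetical name and the
    | dlopen calls are purely illustrative):
    |
    |   import Darwin
    |   import Foundation
    |
    |   // Sloppy: a bare relative path is resolved against the
    |   // dlopen search paths and the current working directory,
    |   // so the wrong library could end up being loaded.
    |   _ = dlopen("libPlugin.dylib", RTLD_NOW)
    |
    |   // Safer: an absolute path inside the signed app bundle.
    |   // With library validation on, a dylib signed by another
    |   // team is rejected at load time anyway.
    |   if let dir = Bundle.main.privateFrameworksURL {
    |       let url = dir.appendingPathComponent("libPlugin.dylib")
    |       _ = dlopen(url.path, RTLD_NOW)
    |   }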
| hexomancer wrote:
      | Yeah this is the right way of doing it. The Arch Linux (and
| most other distributions') way is profoundly stupid.
___________________________________________________________________
(page generated 2023-06-07 23:00 UTC)