[HN Gopher] Rust in illumos
___________________________________________________________________
Rust in illumos
Author : vermaden
Score : 130 points
Date   : 2024-09-10 21:15 UTC (1 day ago)
(HTM) web link (wegmueller.it)
(TXT) w3m dump (wegmueller.it)
| yjftsjthsd-h wrote:
| > The development model of illumos is different from Linux and
| thus there are no Rust drivers in upstream illumos yet. But that
| is to be expected for new things. In our model, we take the time
| to mature new tech in a fork, and for rust, the Oxide fork has
| taken that role. In there, we have several drivers for the Oxide
| Networking stack that are in rust. Based on that some experience
| could be gained.
|
| Soft forks are also good, but doesn't illumos have a stable
| driver ABI, such that you could just make your drivers completely
| independently? I thought that was how it had, e.g., Nvidia drivers (
| https://docs.openindiana.org/dev/graphics-stack/#nvidia )
| boricj wrote:
| Even if there is a stable driver ABI, to the point where
| upstream is oblivious of any Rust stuff going on inside
| drivers, that doesn't solve the problem of fragmentation
| amongst forks (assuming upstreaming to Illumos is a goal).
| jclulow wrote:
| I'm part of the illumos core team and I'm quite keen to use
| Rust in the base of illumos, FWIW. There are several contexts
| in which we could make good use of it, and the primary
| challenge in most cases is either release engineering
| or the resultant binary sizes. I'm confident we'll be able to
| work them out; it'll just take more exploration and effort!
|
| The rough areas where I think things would be useful, and
| probably also roughly the order in which I would seek to do
| them:
|
| * Rust in the tools we use to build the software. We have a
| fair amount of Perl and Python and shell script that goes
| into doing things like making sure the build products are
| correctly formed, preparing things for packaging, diffing the
| binary artefacts from two different builds, etc. It would be
| pretty easy to use regular Cargo-driven Rust development for
| these tools, to the extent that they don't represent shipped
| artefacts.
|
| * Rust to make C-compatible shared libraries. We ship a lot
| of libraries in the base system, and I could totally see
| redoing some of the parts that are harder to get right (e.g.,
| things that poke at cryptographic keys, or do a lot of string
| parsing, or complex maths!) in Rust; there's a sketch of the
| shape this takes after this list. I suspect we would _not_
| want to use Cargo for these bits, and would probably try to
| minimise the dependencies and keep tight control on the size
| of the output binaries etc.
|
| * Rust to make kernel modules. Kernel modules are pretty
| similar to C shared libraries, in that we expect them to have
| few dependencies and probably not be built through Cargo
| using crates.io and so on.
|
| * Rust to make executable programs like daemons and command-
| line commands. I think here the temptation to use more
| dependencies will increase, and so this will be the hardest
| thing to figure out.
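|
| To make the shared-library case concrete, here is a minimal
| sketch (a hypothetical function, not an actual illumos
| interface), built with crate-type = ["cdylib"] so that C
| callers can link against it:
|
|     use std::ffi::CStr;
|     use std::os::raw::c_char;
|
|     /// Length in bytes of a NUL-terminated string, or -1 if
|     /// the pointer is NULL. #[no_mangle] keeps the symbol
|     /// name stable for C callers and linkers.
|     #[no_mangle]
|     pub extern "C" fn xstr_len(s: *const c_char) -> isize {
|         if s.is_null() {
|             return -1;
|         }
|         // SAFETY: the caller guarantees `s` points at a
|         // valid NUL-terminated string.
|         unsafe { CStr::from_ptr(s) }.to_bytes().len() as isize
|     }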
|
| The other thing beyond those challenges is that we want the
| build to work completely offline, _and_ we don't want to
| needlessly unpack and "vendor" (to use the vernacular) a lot
| of external files. So we'd probably be looking at using some
| of the facilities Cargo is growing to use an offline local
| clone of parts of a repository, and some way to reliably
| populate that while not inhibiting the ease of development,
| etc.
|
| Nothing insurmountable, just a lot of effort, like most
| things that are worth doing!
| marvin-hansen wrote:
| Have you explored building with Bazel? What you describe as
| a problem is roughly what Bazel solves: Polyglot complex
| builds from fully vendored deps.
|
| Just pointing this out because I had a fair share of issues
| with Cargo and ultimately moved to Bazel and a bit later to
| BuildBuddy as CI. Since then my builds are reliable, run a
| lot faster and even stuff like cross compilation in a
| cluster works flawlessly.
|
| Obviously there is some complexity implied when moving to
| Bazel, but the bigger question is whether the capabilities
| of your current build solution keep up with the complexity
| of your requirements?
| steveklabnik wrote:
| (Not Josh, not involved with illumos, do work at Oxide)
|
| For at least one project at Oxide we use buck2, which is
| conceptually in a similar place. I'd like to use it more
| but am still too new to wield it effectively. In general
| I would love to see more "here's how to move to
| buck/bazel when you outgrow cargo" content.
| marvin-hansen wrote:
| Here you go!
|
| https://github.com/bazelbuild/examples/tree/main/rust-
| exampl...
|
| I wrote all of those examples and contributed them back
| to Bazel because I've been there...
|
| Personally, I prefer the Bazel ecosystem by a wide margin
| over buck2. By technology alone, buck2 is better, but as
| my requirements were growing, I needed a lot more mature
| rule sets, such as rules_oci to build and publish
| container images without Docker, and buck2 simply doesn't
| have the ecosystem available to support complex builds
| beyond a certain level. It may get there one day.
| steveklabnik wrote:
| Thanks!
|
| I fully agree with the ecosystem comments; I'm just a
| Buck fan because it's in Rust and I like the "no built-in
| rules" concept, but it's true that it's much younger and
| seemingly less widely used. Regardless I should spend
| some time with Bazel.
| marvin-hansen wrote:
| Here is another thing worth sharing. Database integration
| tests on Bazel used to be difficult. Ultimately I've
| found an elegant solution:
|
| https://github.com/diesel-
| rs/diesel/blob/master/examples/pos...
|
| The parallel testing with dangling transactions isn't
| Diesel or Postgres specific. You can do the same with
| pure SQL and any relational DB that supports
| transactions.
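|
| A minimal sketch of the pattern with the plain `postgres`
| crate (the connection string and table here are made up):
| each test opens a transaction and never commits it, so
| dropping it rolls everything back and parallel tests stay
| isolated:
|
|     use postgres::{Client, NoTls};
|
|     fn main() -> Result<(), postgres::Error> {
|         let mut client = Client::connect(
|             "host=localhost user=postgres password=test",
|             NoTls,
|         )?;
|         // Everything below happens inside one transaction.
|         let mut tx = client.transaction()?;
|         tx.execute("CREATE TABLE t (id INT)", &[])?;
|         tx.execute("INSERT INTO t VALUES ($1)", &[&1_i32])?;
|         let n: i64 =
|             tx.query_one("SELECT COUNT(*) FROM t", &[])?.get(0);
|         assert_eq!(n, 1);
|         // `tx` is dropped without commit => automatic
|         // ROLLBACK; the table and row never become visible
|         // to any other test.
|         Ok(())
|     }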
|
| For CI, BuildBuddy can spin up Docker in a remote
| execution host. You then write a custom util that tests
| whether the DB container is already running and starts
| one if not, put that in a test, and then let Bazel execute
| all integration tests in parallel. For some weird reason,
| all tests have to be in one file per isolated remote
| execution host, so I created one per table.
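|
| That custom util can be as small as a TCP probe plus a
| `docker run` fallback (the port, image, and password here
| are assumptions, not my actual setup):
|
|     use std::net::TcpStream;
|     use std::process::Command;
|
|     /// Start a Postgres container unless something is
|     /// already listening on 5432.
|     fn ensure_db() -> std::io::Result<()> {
|         if TcpStream::connect("127.0.0.1:5432").is_ok() {
|             return Ok(()); // DB already up; nothing to do.
|         }
|         Command::new("docker")
|             .args(["run", "-d", "-p", "5432:5432",
|                    "-e", "POSTGRES_PASSWORD=test",
|                    "postgres:16"])
|             .status()?;
|         Ok(())
|     }
|
|     fn main() -> std::io::Result<()> { ensure_db() }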
|
| Incremental builds that compile, test, build and publish
| images usually complete in about one minute. That's
| thanks to the 80-core BuildBuddy cluster with remote
| cache.
|
| GitHub took about an hour back in April, when the repo
| was half its current size.
|
| There is real gain in terms of developer velocity.
| hucker wrote:
| Agreed, my company tried buck2 first when evaluating a
| cargo replacement, but bazel is just so much more mature
| at this point that it ended up being the natural choice.
| Thanks for your examples, they helped us get started :)
| jclulow wrote:
| I'm not hugely excited about Bazel, or really any of the
| more complex build tools made either directly for, or in
| the image of, the large corporate monorepo tooling.
|
| Right now we have decades of accumulated make stuff, with
| some orchestrating shell. It's good in parts, and not in
| others. What I would ideally like is some bespoke
| software that produces a big Ninja file to build
| literally everything. A colleague at Oxide spent at least
| a few hours poking at a generator that might eventually
| suit us, and while it's not anywhere near finished I
| thought it was a promising beginning I'd like us to
| pursue eventually!
|
| https://github.com/oxidecomputer/eos
| cryptonector wrote:
| I don't know if Illumos follows the Sun model of "interface
| contracts", which allows for downstreams to get timely notice
| of upcoming backwards-incompatible changes to unstable
| interfaces. Of course, Illumos can choose to make some intra-
| kernel interfaces stable.
| jclulow wrote:
| The answer is definitely not "no", but as far as I'm aware we
| haven't found any particular cases where that level of
| formalism and that specific style of arrangement would help.
|
| In an open source world where multiple shipping operating
| systems are based on (different snapshots of) the same core
| code and interfaces, it seems much more valuable to try and
| push straight to Committed interfaces (public, stable,
| documented) where we can.
|
| There are plenty of cases where people use Uncommitted
| interfaces today in things that they layer on top in
| appliances and so on, but mostly it's expected that those
| people are paying attention to what's being worked on and
| integrated. When folks pull new changes from illumos-gate
| into their release engineering branches or soft forks,
| they're generally doing that explicitly and testing that
| things still work for them.
|
| Also, unlike the distributions (who ship artefacts to users)
| it doesn't really make much sense for us to version illumos
| itself. This makes it a bit harder to reason about when it
| would be acceptable to renege on an interface contract, etc.
| In general we try to move things to Committed over time, and
| then generally try very hard never to break or remove those
| things after that point.
| 4ad wrote:
| > The development model of illumos is different from Linux and
| thus there are no Rust drivers in upstream illumos yet. But that
| is to be expected for new things. In our model, we take the time
| to mature new tech in a fork, and for rust, the Oxide fork has
| taken that role.
|
| This is silly. Virtually all illumos development is done by
| Oxide. There is no other upstream to speak of.
|
| As far as I am concerned, illumos _is_ Oxide.
|
| Also, it's _illumos_, in lower case.
| dvtkrlbs wrote:
| Illumos is not Oxide. Oxide is just using an in-house
| distribution of illumos for their needs. illumos was
| created 14 years ago by Solaris developers to swap out the
| closed-source bits for open-source implementations.
| dvtkrlbs wrote:
| Looking at the last commits, it does not seem like the
| majority of upstream development is done by Oxide (I did not
| look at it extensively).
| yjftsjthsd-h wrote:
| user@machine:~/illumos-gate$ git log --since 2023-09-10 \
|     | grep ^Author | awk -F @ '{print $2}' | sed 's/>$//' \
|     | sort | uniq -c | sort -n | tail
|      11 mnx.io
|      16 grumpf.hope-2000.org
|      18 gmail.com
|      21 oxide.computer
|      29 hamachi.org
|      36 richlowe.net
|      37 racktopsystems.com
|      71 fiddaman.net
|      91 fingolfin.org
|     125 me.com
|
| I'm not going to post names because that feels a touch
| icky, but a little bit of cross-comparison suggests that
| one of those domains should be combined with oxide (read: I
| search for names and they work there) in a way that does
| probably make Oxide more than 50% of commits in the last
| year. Though even if that's true it doesn't make them the
| only player or anything.
| asveikau wrote:
| Considering that Oxide's CTO is a longtime prominent
| Solaris guy, it's not shocking. I wouldn't be surprised if
| that's the whole reason they use illumos.
| jclulow wrote:
| You can read a bit about our decision making process on
| the host OS and hypervisor in Oxide RFD 26:
| https://rfd.shared.oxide.computer/rfd/0026
| steveklabnik wrote:
| Telling someone they don't exist is a bold move.
|
| Our (Oxide's) distro isn't even listed on the main page for
| illumos.
|
| Also the author is using the lower case i as far as I can see?
| yjftsjthsd-h wrote:
| Okay, but that's just... not true. Joyent is still around,
| OmniOS is doing their thing. Oxide is a major player, but
| they're not the only player.
| turtle_heck wrote:
| > Joyent is still around
|
| They are but they're no longer contributing to illumos...
| jclulow wrote:
| The remainder of the staff working on SmartOS, and the
| remaining IP like trademarks, and customers and so on, were
| all bought from Samsung by MNX so they could take it over.
| They're doing a great job over there with it, and unlike
| Samsung they actually want it!
| re-thc wrote:
| > and unlike Samsung
|
| So what did Samsung do with Joyent? Just a failed
| acquisition or were there plans?
| panick21_ wrote:
| From what I learned from podcasts, they did a massive
| build-out of datacenters with Joyent technology for
| internal use at Samsung. But I don't know if that
| continued. I'd love to hear more from Bryan on that.
| bitfilped wrote:
| It seems like it has not continued, considering the Joyent
| tech (Triton, SmartOS, et al.) and engineers have been sold
| off to MNX.io for future care and feeding.
| turtle_heck wrote:
| My understanding is that there are not very many ex-
| Joyent folks at MNX.io, just a few; many more are
| at Oxide.
| re-thc wrote:
| > Joyent is still around
|
| Just in name. It's mostly Samsung infrastructure.
| rtpg wrote:
| This article really gets at the crux of the packaging pains of
| many languages. The "distro ships the software" idea is
| something many modern language toolkits are pretty far away
| from (and, as someone constantly burned by Debian packaging
| choices, I'm pretty happy about that). But without a
| good answer, there's going to be frustration.
|
| When distros are charging themselves with things like security,
| shared libraries do become a bit load-bearing. And for users, the
| idea that you update some lib once instead of "number of software
| vendors" times is tempting! But as a software developer, I really
| really enjoy how I can ship a thing with exactly a certain set of
| versions, so it's almost an anti-feature to have a distro swap
| functionality out from under me.
|
| Of course there can be balances and degrees to this. But a part
| of me feels like the general trend of software packaging is
| leaning towards "you bundle one thing at a time", away from "you
| have a system running N things", and in that model I don't know
| where distro packagers fit in (at least for server software).
| oconnor663 wrote:
| I think it would be pretty natural for a distro to run its own
| crates.io/PyPI/NPM mirror to build packages against, and have a
| unified set of library versions for the entire OS that way.
| Maybe not even a server, maybe just a giant Git repo or
| whatever, filtered down to the set of packages they actually
| use. Have any of them tried this?
| steveklabnik wrote:
| I'm not aware of any that have; this would require them to
| become experts in running parallel infra, in many languages
| and with varied setups, and would mean that they're no
| longer self-contained. That feels antithetical to their goals
| to me at least.
| rtpg wrote:
| I mean, Ubuntu has a bunch of Python libs. The problem is that
| you need to make a universe of mutually compatible libs, so
| if _any_ of the libs has old-ish dependencies, that holds
| everything else back.
|
| I think what happens in practice is package maintainers do
| the work to get things up to date, and so only include a
| subset.
| thristian wrote:
| Debian _effectively_ does this. It doesn't actually
| implement the crates.io API, but when building a Rust package
| they tell cargo "look in this directory for dependencies,
| don't talk to the network".
|
| As I understand it, the process goes something like this:
|
| - Debian wants to package a Rust tool
|
| - They examine the tool's Cargo.toml file to determine its
| immediate dependencies
|
| - Those Rust dependencies are converted into Debian package
| names
|
| - Generate a Debian package definition that lists those
| dependencies under "Build-Depends"
|
| - When building the Debian package in an isolated
| environment, Build-Depends packages are installed, providing
| things like `rustc` and `cargo` as well as putting the
| dependencies' source into the "local dependencies" directory
|
| - Cargo runs, configured to only look in the "local
| dependencies" directory for dependencies
|
| - the resulting binary is scooped up and packaged
| actionfromafar wrote:
| Is there some documentation on how to do this yourself? It
| sounds very useful for building software meant to be binary
| distributed.
| mkesper wrote:
| The whole process seems to be documented in this README:
| https://salsa.debian.org/rust-team/debcargo-
| conf/blob/master... How to use only local packages can be
| found in the last paragraph.
| nesarkvechnep wrote:
| Is this documented anywhere?
| thristian wrote:
| See https://wiki.debian.org/Teams/RustPackaging/Policy
| josephg wrote:
| Is this automated? Does doing this manually do anything for
| Debian other than create a headache and a mountain of work?
|
| If it were up to me, I'd consider carving off part of the
| Debian package namespace - like cargo--* for cargo
| packages, and just auto-create them (recursively) based on
| the package name in cargo. There are a few things to figure
| out - like security, and manually patched packages, and
| integrating with cargo itself. But it seems like it should
| be possible to make something like this work with minimal
| human intervention.
| thristian wrote:
| Yes, it's heavily automated.
| IshKebab wrote:
| Why not just vendor the source for each Rust tool
| separately? A lot simpler and more reliable.
|
| The only downside I can see is disk space usage on the
| builder machines, but it's source code... is it really that
| big?
| steveklabnik wrote:
| Generally distributions want to unify dependencies so
| that if there's a security fix, they only need to do it
| once instead of for every program.
| oconnor663 wrote:
| > Debian effectively does this. It doesn't actually
| implement the crates.io API, but when building a Rust
| package they tell cargo "look in this directory for
| dependencies, don't talk to the network".
|
| That sounds close to what I had in mind, but I assume you
| run into problems with the finer points of Rust's
| dependency model. Like if I depend on libfoo v1, and you
| depend on libfoo v2, there are a lot of situations (not
| all) where Cargo will happily link in both versions of
| libfoo, and applications that depend on both of us will
| Just Work. But if libfoo needs to be its own Debian
| package, does that mean every major version is a
| _different_ package?
|
| That's the first question that comes to mind, but I'm sure
| there are others like it. (Optional features? Build vs test
| dependencies?) In short, dependency models (like Cargo's
| and Debian/APT's) are actually pretty complicated, and
| ideally you want to find a way to avoid forcing one to
| represent all the features of the other. Not that I have
| any great ideas though :p
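|
| To make the two-majors point concrete, here's a toy sketch
| that simulates the two semver majors as local modules (under
| Cargo they would be two distinct crates, e.g. pulled in via
| renamed dependencies; `libfoo` is hypothetical):
|
|     // Stand-ins for libfoo v1 and v2; rustc treats two
|     // major versions of one crate as unrelated types, so
|     // both can be linked into a single binary.
|     mod foo_v1 { pub struct Thing(pub u32); }
|     mod foo_v2 { pub struct Thing(pub String); }
|
|     fn main() {
|         let a = foo_v1::Thing(1);
|         let b = foo_v2::Thing("two".into());
|         // Code written against v1 keeps compiling even with
|         // v2 elsewhere in the dependency graph.
|         println!("{} {}", a.0, b.0);
|     }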
| oefrha wrote:
| Haskell has Stackage[1] which maintains globally consistent
| sets of Haskell packages that build together. It's of course
| a huge pain to maintain, probably exponentially harder wrt
| <size of package registry> / <size of Hackage>. Also highly
| depends on whether the language and ecosystem help and
| encourage library authors to maintain compatibility; I don't
| think Rust does.
|
| [1] https://www.stackage.org/
| Scramblejams wrote:
| > I really really enjoy how I can ship a thing with exactly a
| certain set of versions
|
| Just curious, how do you handle security monitoring and updates
| when doing this?
|
| I've been exposed to two major approaches:
|
| At $FAANG we're on a treadmill where company CI force builds
| your stuff every time anything in your requirements.txt
| releases a new version (semver respected, at least) because in
| the absence of targeted CVE monitoring (which itself won't
| cover you 100%), that's the only practical way they see to keep
| up with security updates. If a dependency change breaks your
| project and you leave it broken for too long, peeps above you
| start getting autocut tickets. This leads to a lot of ongoing
| maintenance work for the life of the project because eventually
| you get forced into newer minor and eventually major versions
| whether you like it or not.
|
| Outside of $FAANG I've gotten a whole lot of mileage building
| customer software on top of Debian- or Ubuntu-packaged
| dependencies. This allows me to rely on the distro for security
| updates without the churn, and I'm only forced to take newer
| dependencies when the distro version goes EOL. Obviously this
| constrains my library selection a lot, but if you can do it
| there's very little maintenance work compared to the other
| approach.
|
| I'd like to hear how others handle this because both approaches
| obviously have considerable downsides.
| rtpg wrote:
| To be honest, I'm fortunate enough that keeping maintenance
| work down by sticking to "the latest and greatest" for
| application-level stuff is not hard. For system-level stuff
| it's generally been about relying on Debian or Ubuntu
| packages, but with application-level management for the top
| layer ("the application", so to speak).
| situations where Debian packaging outright meant we couldn't
| move forward on something (one funny and annoying one was
| gettext being too old, but gettext is so coupled with libc
| for some reason that we couldn't just compile a fresh
| version...)
|
| The joys of not having to deal with the extremes of any
| problem...
| jjnoakes wrote:
| I strive for the latter approach as much as possible. I'm
| happy doing more work as a developer (like using a slightly
| older version of libraries or languages) to ensure that I
| build on a stable base that gets security updates without
| tons of major version api churn.
| zozbot234 wrote:
| Keeping up with the 'treadmill' you described is perhaps the
| main job of a Linux distribution. See, e.g.
| https://release.debian.org/transitions/ for one example of
| how this can work practically.
| jeroenhd wrote:
| Don't forget the third option: don't. Updates are necessary
| when the customer asks for them. Most customers won't, and if
| they do, they don't know they need an update if you compile
| everything statically.
|
| Personally, the update story is why I find developing on
| Windows much easier than developing on Linux. Instead of
| hundreds of small, independent packages, you can just target
| the _massive_ Windows API. You can take the "living off the
| land" approach on Linux too, but it's harder, often relying
| on magical IOCTLs and magical file paths rather than library
| function calls.
|
| Not that I think the Win32/WinRT API is particularly well-
| designed, but at least the API is there, guaranteed to be
| available, and only breaks in extremely rare circumstances.
| lars_francke wrote:
| I don't know where you reside and what kind of software you
| build.
|
| With the European Cyber Resilience Act coming into full
| effect late 2027/early 2028 (depends on when it's finally
| signed) this will not always be an option anymore for a lot
| of people.
|
| a) You'll have to provide SBOMs so you can't (legally)
| "hide" stuff in your binary
|
| b) Security updates are mandatory for known exploitable
| vulnerabilities and other things, so you can't wait until a
| customer asks.
|
| This will take a few years before it bites (see GDPR) but
| the fines can be just as bad.
| repelsteeltje wrote:
| > [...] "hide" stuff in your binary
|
| While legislatively requiring SBOMs is obviously a good
| idea, it might also unintentionally incentivize companies
| to hide dependencies by rolling their own. After all, it's
| not a "dependency" if it's in the main code base you
| developed yourself.
|
| Not sure how _likely_ this is, but especially in case of
| network stacks or cryptographic functions that could
| potentially be disastrous.
| lars_francke wrote:
| I agree that there is a chance this could happen. But as
| a business owner myself it's pretty simple: It's far
| cheaper for me to take an existing dependency, create
| SBOMs and patch regularly compared to having to develop
| and maintain these things myself. But I do get your point
| that others might make different decisions.
|
| But the CRA also has provisions here, as you have to do a
| risk assessment for your product as well and publish the
| result of that in your documentation; hand-rolling
| crypto should definitely go on there too.
|
| Again... People can just ignore this and probably won't
| get caught anytime soon....
| twoodfin wrote:
| It's extremely likely, verging on certitude.
|
| In some ways, the pressure of requirements like this may
| be positive: "Do we really need a dependency on
| 'leftpad'?"
|
| But it will also add pressure to make unsound NIH choices
| as you suggest.
| Foobar8568 wrote:
| SBOM is already in place I believe for the financial
| sector, or soon enough. Add DORA going full in Jan 2025,
| and we have another legislation that no one understands
| and follows. Respect of the AI cloud act is such a joke.
| jeroenhd wrote:
| Yeah, the ECRA is going to wreck badly organised
| businesses and it's high time.
|
| I just hope there will be actual fines, even for "small"
| companies. GDPR enforcement against most companies seems
| rather lackluster, if it happens at all. If the ECRA ends
| up only applying to companies like Google, it'll be
| rather useless.
| lenkite wrote:
| > Don't forget the third option: don't.
|
| Good way to have malpractice lawsuits filed against you and
| either become bankrupt or go to jail.
| formerly_proven wrote:
| Source?
| lenkite wrote:
| The Evolution of Legal Risks Pertaining to Patch
| Management and Vulnerability Management
|
| https://dsc.duq.edu/cgi/viewcontent.cgi?article=3915&cont
| ext...
|
| Why Are Unpatched Vulnerabilities a Serious Business
| Risk? https://prowritersins.com/cyber-insurance-
| blog/unpatched-vul...
| solidninja wrote:
| What are the significant downsides of the first approach in
| your experience?
| Scramblejams wrote:
| > This leads to a lot of ongoing maintenance work for the
| life of the project because eventually you get forced into
| newer minor and eventually major versions whether you like
| it or not.
|
| Remember, the desire here is simply to keep up with
| security updates from dependencies. There is no customer
| requirement to be using the latest dependencies, but this
| approach requires you to eventually adopt them and that
| creates a bunch of work throughout the entire lifecycle of
| the project.
| giancarlostoro wrote:
| I think Debian should have a system-only environment with
| its pre-frozen packages, and then a separate userland
| environment where you can install any version of anything
| without affecting the system packages; then, when you open a
| terminal for dev work, you can choose a more modern Python,
| or what have you.
|
| I'm seeing an uptick in what they call "atomic" systems, where
| this is exactly the case, but last I tried to install one, it
| didn't even register correctly on boot up, so until I find one
| that boots at all, I'll be on Pop!_OS.
| yjftsjthsd-h wrote:
| The way immutable distros handle it is by running those
| things inside containers using distrobox or the like. You can
| always do the same on a traditional system; just run your
| Python stuff in docker/podman on pop-os, or use full
| distrobox if you like.
| turtle_heck wrote:
| > [I] for one would love to have people help me with the new
| Installer[0] and with the package Forge[1]
|
| Those repos really need some basic high-level information about
| what they are and how they work, the Forge doesn't even have a
| README.
|
| [0] https://github.com/Toasterson/illumos-installer
|
| [1] https://github.com/toasterson/forge
| fake-name wrote:
| > The Debian approach of putting everything into one system and
| only having one version of each dependency is not feasible for a
| huge international community of people that develop together but
| never meet.
|
| I find this to be overbroad and incorrect in the larger case.
|
| I want one version that has been reasonably tested for everything
| _but_ the parts I'm developing or working with. Having just one
| version for everyone generally means it's probably going to work,
| and maybe even has been tested.
|
| For my own stuff, I am happy to manage the dependencies myself,
| but moving away from the debian approach harms the _system_ in
| general.
| the8472 wrote:
| > Missing support for shared libraries
|
| Well, rust does support shared libraries. Call things by their
| name: what distros want is for all libraries to have a stable
| ABI. And I guess some sort of segmentation that the source of one
| library doesn't bleed into the binary output of another, and I
| think the latter part is even more tricky than ABI stability.
| Generics are monomorphized in downstream crates (aren't C++
| templates the same?), so I think you'd either have to ban generic
| functions and inlining from library APIs or do dependent-tree
| rebuilds anyway.
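|
| For example, the generic below is compiled into each
| downstream crate once per concrete T, so its code lives in
| the caller's binary and can't be swapped by updating a
| shared library; the extern "C" version has one fixed symbol
| and could be (a sketch, with made-up function names):
|
|     /// Monomorphized into every caller for every T used.
|     pub fn max_of<T: PartialOrd + Copy>(xs: &[T]) -> Option<T> {
|         xs.iter().copied().fold(None, |acc, x| match acc {
|             Some(m) if m >= x => Some(m),
|             _ => Some(x),
|         })
|     }
|
|     /// One non-generic symbol with a C-compatible
|     /// signature; this is the kind of API a stable shared-
|     /// library ABI can actually promise.
|     #[no_mangle]
|     pub extern "C" fn max_i32(p: *const i32, len: usize) -> i32 {
|         // SAFETY: caller passes a valid, non-empty array.
|         let xs = unsafe { std::slice::from_raw_parts(p, len) };
|         xs.iter().copied().fold(i32::MIN, i32::max)
|     }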
| silon42 wrote:
| Not necessarily a ban, but any incompatible change to those is
| an ABI break.
| the8472 wrote:
| Which would require rebuilding all dependents, yeah; that was
| the second option mentioned. If you're already willing to
| rebuild the world, then you could use dynamic libraries today.
| But you'd only get a shared-memory / size benefit from that,
| not drop-in updates.
___________________________________________________________________
(page generated 2024-09-11 23:01 UTC)