[HN Gopher] Apple's Darwin OS and XNU Kernel Deep Dive
___________________________________________________________________
Apple's Darwin OS and XNU Kernel Deep Dive
Author : tansanrao
Score : 425 points
Date : 2025-04-05 23:46 UTC (23 hours ago)
(HTM) web link (tansanrao.com)
(TXT) w3m dump (tansanrao.com)
| whalesalad wrote:
| I've been wanting to understand Darwin at this depth for a long
| time. Great read!
| tansanrao wrote:
| It's my first time condensing my research notes into a blog
| post like this, glad you liked it!
| ForOldHack wrote:
| It was easily comprehensive enough that I printed it out and
| was showing people today what the differences were in the
| last couple of OS releases, while ruminating about Snow
| Leopard. No one mentions that the Blue Box became Rosetta,
| and Rosetta 2 did the same thing in the switch from Intel to
| ARM.
|
| There are some very tiny points, but this is easily the best
| to date. (I started with Rhapsody, and Linux in Swedish, and
| NT 3.1.) (Ran MkLinux on a 7100, but never got accelerated
| video to work.)
| ForOldHack wrote:
| I should add, I have never seen accelerated video in Linux,
| except for built-in Intel video, and it's fine.
| jshier wrote:
| Mac OS X Internals by Singh is one of my favorite books, such
| a great in-depth examination of Mac OS X circa 10.4. I really
| wish there were an updated version.
|
| Edit: I see it's even cited at the end of this article. Truly a
| source for the (macOS) ages.
| wpm wrote:
| Jonathan Levin's three-part series "*OS Internals" is that
| update, but they stopped working on and writing about Darwin
| around Catalina.
| kccqzy wrote:
| I've also wanted to understand Windows NT at this depth for a
| while. Skip the Win32 stuff and discuss what's underneath it.
| As I understand it, Win32 is just one personality; there was
| also Windows Services for UNIX in the Windows XP days and the
| Subsystem for UNIX-based Applications in Windows Vista. The
| underlying NT kernel is flexible enough to allow POSIX
| compliance. That would be an interesting read.
| p_ing wrote:
| Windows Internals is the book you want.
|
| Or Inside Windows NT, if you want "version 1" of the
| Internals series. Or read the Windows NT OS/2 Design Workbook
| - https://computernewb.com/~lily/files/Documents/NTDesignWork
| b....
|
| Yes, Win32 is just one personality, but a required one.
| OpenNT, Interix, SFU, and SUA would ride alongside Win32. And
| of course there was the official OS/2 personality.
| nunez wrote:
| 100%. Russinovich, who now heads up Azure, co-wrote many of
| the follow-on books, and David Solomon, who co-wrote the NT
| kernel, co-wrote the first few. The latest version of this
| book covers Windows 10/Server 2016. They are very, very
| good.
| skissane wrote:
| > As I understand Win32 is just one personality
|
| Not really... although NT was designed to run multiple
| "personalities" (or "environment subsystems" to use the
| official term), relatively early in its development they
| decided to make Win32 the "primary" environment subsystem,
| with the result that the other two subsystems (OS/2 and
| POSIX) ended up relying on Win32 for essential system
| services.
|
| I think this multiple personalities thing was the original
| vision but it never really took off in the way its original
| architects intended - although there used to be OS/2 and
| POSIX subsystems, Microsoft never put a great deal of effort
| into them, and now they are both dead, so Win32 is the only
| environment subsystem left.
|
| Yes, there is WSL, but: WSL1 is not an environment subsystem
| in the classic NT sense - it has a radically different
| implementation from the old OS/2 and POSIX subsystems, a
| "picoprocess provider". And WSL2 is just a Linux virtual
| machine.
| ForOldHack wrote:
| WSL2 is just a Linux VM, and the POSIX subsystem is just a
| kluge. I never heard of an OS/2 subsystem for NT, for which
| Cutler would take extreme umbridge in.
|
| I have three charts on my wall (now 4): the Unix timeline,
| the Windows timeline, and the Linux distribution tree, and
| now a very decent Mac OS X timeline.
|
| The personalities became containers, which is just the
| Windows version of common subsystem virtualization.
| Containers were based on Virtual PC, but with the genius of
| Mark Russinovich.
| skissane wrote:
| > I never heard of an OS/2 subsystem for NT,
|
| It was there from NT 3.1 until Windows 2000; it was removed
| from Windows XP onwards.
|
| It was very limited - it only supported character-mode
| 16-bit OS/2 1.x applications. 32-bit apps, which IBM
| introduced with OS/2 2.0, were never supported. Microsoft
| offered an extra-cost add-on called "Microsoft OS/2
| Presentation Manager For Windows NT", aka "Windows NT Add-On
| Subsystem for Presentation Manager", which added support for
| GUI apps (but still only 16-bit OS/2 1.x apps). It was
| available for NT versions 3.1 through 4.0; I don't believe
| it was offered for Windows 2000.
|
| The main reason it existed: OS/2 1.x was jointly developed
| by IBM and Microsoft, with both having the right to sell it,
| so some business customers bought Microsoft OS/2 and then
| used it as the basis for their business applications. When
| Microsoft decided to replace Microsoft OS/2 with Windows NT,
| they needed to provide these customers with backward
| compatibility and an upgrade path, lest they jump ship to
| IBM OS/2 instead. But Microsoft never tried to support
| 32-bit OS/2, since Microsoft never sold it, and given their
| "divorce" with IBM they didn't have the rights to ship it
| (_possibly_ they might have retained rights to some early
| in-development version of OS/2 2.0 from before the breakup,
| but definitely not the final shipped OS/2 2.0 version). The
| OS/2 subsystem wasn't some completely from-scratch emulation
| layer; it was actually based on the OS/2 code, with the
| lower levels rewritten to run under Windows NT, while
| higher-level components included OS/2 code largely
| unchanged.
|
| > for which Cutler would take extreme umbridge in.
|
| Windows NT was originally called NT OS/2, because it was
| originally going to be Microsoft OS/2 3.0. Partway through
| development - by which point Cutler and his team had already
| got the basics of the OS up and running on Microsoft Jazz
| workstations (an in-house Microsoft workstation design using
| Intel i860 RISC CPUs) - Microsoft and IBM had a falling out
| and there was a change of strategy: instead of NT providing
| a 32-bit OS/2 API, they'd extend the 16-bit Windows 3.x API
| to 32-bit and use that. So I doubt Cutler would take
| "extreme umbrage" at something which was the plan at the
| time he was hired, and remained the plan through the first
| year or two of NT's development.
|
| > The personalities became containers which is just the
| windows version of common subsystem virtualization.
|
| Containers and virtualization are (at least somewhat)
| successors to personalities / environment subsystems in
| terms of the purpose they serve - but in terms of the
| actual implementation architecture, they are completely
| different.
| pjmlp wrote:
| Windows containers are built on top of the job objects
| infrastructure.
|
| https://learn.microsoft.com/en-
| us/virtualization/windowscont...
|
| https://learn.microsoft.com/en-
| us/windows/win32/procthread/j...
| p_ing wrote:
| > 32-bit apps, which IBM introduced with OS/2 2.0, were
| never supported.
|
| This was obviously due to the divorce, but also that the
| Cruiser API wasn't finalized.
|
| > Our initial OS/2 API set centers around the evolving
| 32-bit Cruiser, or OS/2 2.0 API set. (The design of
| Cruiser APIs is being done in parallel with the NT OS/2
| design.)
|
| ...
|
| > Given the nature of OS/2 design (the joint development
| agreement), we have had little success in influencing the
| design of the 2.0 APIs so that they are portable and
| reasonable to implement on non-x86 systems.
| ForOldHack wrote:
| "At the same time, NT (up to and including Windows 2000)
| shipped with an OS/2 subsystem which ran character-mode
| 16-bit OS/2 applications." From OS/2 museum.
| ForOldHack wrote:
| Turns out this is not true. Confirmed that os/2 2.0 was a
| skinning and compatibility layer for NT it came out for
| OS/2, not with windows and not from Microsoft. It came
| with OS/2 and from IBM. No idea whether it supported
| HPFS+ but it was not a subsystem.
| skissane wrote:
| > Turns out this is not true.
|
| No, what you quoted in your comment you are replying to
| is accurate. What you are saying in this comment isn't.
|
| > Confirmed that os/2 2.0 was a skinning and
| compatibility layer for NT it came out for OS/2,
|
| This is confused. OS/2 was not a "skinning and compatibility
| layer for NT"; it was a completely separate operating
| system.
|
| I think at one point NT was going to be OS/2 2.0, and then
| it was going to be OS/2 3.0 - but the OS/2 2.0 which
| eventually ended up shipping had nothing to do with NT; it
| was IBM's independent work, in which Microsoft was
| uninvolved (except maybe in its early stages).
| bch wrote:
| > Mach's virtual memory (VM) system was influential beyond the
| project - it was adopted by 4.4BSD and later FreeBSD as their
| memory management subsystem.
|
| ...and NetBSD[0], OpenBSD[1], but apparently not DragonFly
| BSD[2].
|
| [0] https://netbsd.org/docs/kernel/uvm.html
|
| [1] https://man.openbsd.org/OpenBSD-3.0/uvm.9
|
| [2]
| https://www.dragonflybsd.org/mailarchive/kernel/2011-04/msg0...
| tansanrao wrote:
| Ohhh interesting! I'll update the post to include this soon,
| thanks!
| inkyoto wrote:
| Sadly, that is not entirely correct.
|
| Whilst all three BSDs (386BSD, FreeBSD, and NetBSD; there was
| no OpenBSD in the beginning) did inherit the legacy Mach
| 2.5-style design, it did not live on in FreeBSD, whose core
| team started pretty quickly replacing all remaining vestiges of
| the Mach VM[0] with a complete, modern, and highly performant
| rewrite of the entire VM. FreeBSD 4 had none of the original
| Mach code left in the kernel codebase, and that happened in the
| late 1990s. Therefore, FreeBSD can't be referenced in
| relation to Mach apart from the initial separation/very
| early foundation stage.
|
| NetBSD (and OpenBSD) went on for a while but also quickly hit
| the wall with the Mach design (performance, SMP/scalability,
| networking) and also set out on a complete rewrite with UVM
| (unified virtual memory) designed and led by Chuck Cranor, who
| wrote his dissertation on the UVM. OpenBSD later borrowed and
| adopted the UVM implementation, which remains in use today.
|
| So out of all living BSDs[1], only XNU/Darwin continues to
| use Mach, and not Mach 2.5 but Mach 3. There have been Mach
| 2.5, 3, and 4 (GNU/Hurd uses Mach 4) in existence, and the
| compatibility between them is rather low, remaining mostly
| at the overall architectural level. They are better treated
| as distinct designs with shared influence.
|
| [0] Of which there were not that many to start off with.
|
| [1] I am not sure whether DragonBSD is dead or alive today at
| all.
| o11c wrote:
| > I am not sure whether DragonBSD is dead or alive today at
| all.
|
| It seems to have about the same level of activity as NetBSD.
| Take that how you will.
| bch wrote:
| > I am not sure whether DragonBSD is dead or alive today at
| all.
|
| Oof, yeah.[0][1]. I hope they're doing alright - technically
| fascinating, and charming as they march to the beat of their
| own accordion.[2][3][4][5]
|
| [0] https://www.dragonflybsd.org/release64/
|
| [1] https://gitweb.dragonflybsd.org/dragonfly.git
|
| [2] https://www.dragonflybsd.org/mailarchive/kernel/2012-03/m
| sg0...
|
| [3] http://www.bsdnewsletter.com/2007/02/Features176.html
|
| [4] https://en.wikipedia.org/wiki/Vkernel
|
| [5] https://en.wikipedia.org/wiki/HAMMER_(file_system)
| inkyoto wrote:
| The last release being <<Version 6.4.0 released 2022 12
| 30>>, links from 2007 and 2012 do not lend much assurance
| that the project is still alive in 2025 - compared to other
| similar projects.
|
| Also note that HAMMER (the previous design) and HAMMER2
| (the current design, since 2018) are two distinct,
| incompatible file system designs. I am not sure what the
| value is of mentioning the previous, abandoned design in
| this context.
| bch wrote:
| > The last release being <<Version 6.4.0 released 2022 12
| 30>>, links from 2007 and 2012 do not lend much assurance
| that the project is still alive in 2025 - compared to
| other similar projects.
|
| Right - the git repo has commits from yesterday, but it
| ain't no NetBSD... (h/t 'o11c)
|
| > Also note that HAMMER (the previous design) and HAMMER2
| (the current design, since 2018) are two distinct,
| incompatible file system designs. I am not sure what the
| value is of mentioning the previous, abandoned design in
| this context.
|
| Sure - I linked to the first for the general intro, which
| mentions Hammer2 in the first paragraph if anybody reads
| through... my mistake.
| swatson741 wrote:
| Whenever I see the Darwin kernel brought into the discussion
| I can't help but wonder how different things could have been
| if Apple had just forked Linux and run their OS services on
| top of that.
|
| Especially when I think about how committed they are to
| Darwin, it paints a poor image in my mind: the loss that
| open source suffers as a result, and the time and money
| Apple has to dedicate to this for a disproportionate return.
| skissane wrote:
| > Whenever I see the Darwin kernel brought into the discussion
| I can't help but wonder how different things could have been if
| Apple had just forked Linux
|
| XNU is only partially open sourced - the core is open
| sourced, but significant chunks are missing, e.g. the APFS
| filesystem.
|
| Forking Linux might have legally compelled them to make all
| kernel modules open source - which, while likely a positive
| for humanity, isn't what Apple wants to do.
| mattl wrote:
| At one point NeXT considered distributing GCC under the GPL
| with some proprietary parts linked into the binary at first
| boot.
|
| Stallman, after speaking with lawyers, rejected this.
|
| https://sourceforge.net/p/clisp/clisp/ci/default/tree/doc/Wh.
| ..
|
| Look for "NeXT" on this page.
| leoh wrote:
| Stallman's insistence that a judge would side with him is
| pretty arrogant in my opinion; e.g., looking at Oracle v.
| Google decades later and how the folks deciding the case
| seemed to be confused about technical matters.
| skissane wrote:
| I don't think it was "arrogant" - if you read the link, he
| explains that he originally thought differently, but he
| changed his mind based on what _his lawyer_ told him. I
| don't think you can label a non-lawyer "arrogant" for
| accepting the legal advice of their own attorney - whether
| that advice is correct or not can be debated, but it isn't
| arrogant for someone to trust the correctness of their own
| lawyer's advice.
| threeseed wrote:
| 1) We are talking about the late 90s, well before Ubuntu,
| when desktop Linux was pretty poor in terms of features and
| polish.
|
| 2) Apple had no money or time to invest in rewriting
| NeXTStep for a completely new kernel they had no experience
| with. Especially when so many of the dev team were involved
| in sorting out Apple's engineering and tech strategy, as
| well as all the features needed to make it more Mac-like.
|
| 3) Apple was still using PowerPC at the time, which NeXTStep
| supported but Linux did not. It took IBM a couple of years
| to get Linux running.
| CharlesW wrote:
| > _Apple had no money or time to invest in rewriting NeXTStep
| for a completely new kernel they had no experience in._
|
| And even if they had had the money and time, Avie
| Tevanian[1] was a principal designer and engineer of
| Mach[2]. There was no NeXTSTEP-based path where the
| Mach-derived XNU would not be at the core of Apple's new OS
| family.
|
| [1] https://en.wikipedia.org/wiki/Avie_Tevanian
| [2] https://en.wikipedia.org/wiki/Mach_(kernel)
| kergonath wrote:
| > Apple had no money or time to invest in rewriting NeXTStep
| for a completely new kernel they had no experience in.
|
| I broadly agree, but it is more nuanced than that. They
| actually had experience with Linux. Shortly before acquiring
| NeXT, they did the opposite of what you mentioned and ported
| Linux to the Mach microkernel for their MkLinux OS. It was
| cancelled at some point, but had things turned out a bit
| differently, it could have ended up more important than it
| actually was.
| wtallis wrote:
| There was never a right time for Apple to make such a switch.
| NeXTSTEP predates Linux, and when it was adapted into Mac OS X,
| Apple couldn't afford a wholesale kernel replacement project on
| top of everything else, and Linux in the late 1990s was far
| from being an obviously superior choice. Once they were a few
| versions into OS X and solidly established as the most
| successful UNIX-like OS for consumer PCs, switching to a Linux
| base would have been an expensive risk with very little short-
| term upside.
|
| Maybe if Apple had been able to keep classic MacOS going five
| years longer, or Linux had matured five years earlier, the OS X
| transition could have been very different. But throwing out XNU
| in favor of a pre-2.6 Linux kernel wouldn't have made much
| sense.
| swatson741 wrote:
| I agree with all of this. Moreover, depending on what
| Torvalds chose to do, Apple might have ended up with a more
| expensive XNU in the end, which would have been a disaster.
| Although I think Apple could have dealt with Torvalds just
| fine, who really knows how that would have played out.
| inopinatus wrote:
| It would not be fine. It would have never been fine. It
| would have been a titanic clash of egos and culture,
| producing endless bickering and finger-pointing, with
| little meeting of minds. Apple runs the most vertically
| integrated general systems model outside of mainframes.
| Linux and its ecosystem represent the least.
|
| In any case, as others have noted, the timeline here w.r.t.
| NeXTSTEP is backwards.
| lunarlull wrote:
| Making a switch is one thing, but using Linux from the start
| for OS X would have made more sense. The only reason that
| didn't happen is because of Jobs' attachment to his other
| baby. It wasn't a bad choice, but it was a choice made from
| vanity and ego over technical merit.
| andrewf wrote:
| This presumes that Apple brought in Jobs as a decision
| maker, and NeXTSTEP was attached baggage. At the time, the
| reverse was true - Apple purchased NeXTSTEP as their future
| OS, and Jobs came along for the ride. Given the disaster
| that was Apple's OS initiatives in the 90s, I doubt the
| Apple board would have bought into a Linux adventure.
| lunarlull wrote:
| Why wouldn't Apple have been interested in a Linux option?
| They bought NeXTSTEP _because_ of Jobs. Linux was already
| useable as a desktop OS in 2000, and they could have added
| in the UX stuff and drivers for their particular Macs on top
| of it. There wouldn't have been any downsides for them, and
| it would have strengthened something that was hurting their
| biggest rival.
| musicale wrote:
| > Linux was already useable as a desktop OS in 2000
|
| Apple made its decision in 1996.
| pjmlp wrote:
| Not only was the acquisition during the 1990s; as someone
| who happened to be a Linux zealot up to around 2004,
| "usable" was quite relative in 2000, even if one had the
| right desktop parts.
|
| And it only became usable as a Solaris/AIX/HP-UX replacement
| thanks to the money IBM, Oracle, and Compaq pumped into
| Linux's development around 2000; it is even in the official
| timeline history.
| DeathArrow wrote:
| >it would have strengthened something that was hurting
| their biggest rival.
|
| If by biggest rival you mean Microsoft, it was Microsoft who
| saved Apple from bankruptcy in 1997.
| kfir wrote:
| Microsoft did that not out of charity to Apple but as an
| attempt to fend off the DOJ trial accusing it of being a
| monopoly
| wpm wrote:
| Jobs initially did not want to come back to Apple. Apple
| bought NeXTSTEP because, between it and BeOS, Jean-Louis
| Gassee overplayed his hand and was asking way too much money
| for the acquisition; Apple then _defaulted_ to NeXT. Jobs
| thought Apple was hopeless, just like everyone else did at
| the time, and didn't want to take over a doomed company to
| steer it into the abyss - and it's not like NeXT was doing
| great at the time.
|
| >There wouldn't have been any downsides for them
|
| Really? NO downsides???
|
| - throwing away a decade and a half of work and engineering
| experience (Avie Tevanian helped _write_ Mach; this is like
| having Linus as your chief of software development and
| saying "just switch to Hurd!")
|
| - uncertain licensing (Apple still ships ancient bash 3.2
| because of GPL)
|
| - increased development time to a shipping, modern OS (it
| already took them 5 years to ship 10.0, and it was _rough_)
|
| That's just off the top of my head. I believe you think
| there wouldn't have been any downsides because you didn't
| stop to think of any, or are ideologically disposed to
| present the Linux kernel in 1996 as being better or safer
| than XNU.
| _rpf wrote:
| > Jean-Louis Gassee overplayed his hand
|
| Well, there's a parallel universe! Beige boxes running BeOS
| - late-90s cool, maybe - but would we still have had the
| same upending results for mobile phones, industrial design,
| world integration, streaming media services...
| musicale wrote:
| In 1996, Apple evaluated the options and decided (quite
| reasonably) that NeXTSTEP - the whole OS including kernel,
| userland, and application toolkit - was a better starting
| point than various other contenders (BeOS, Solaris, ...) to
| replace the failed Copland. Moreover, by acquiring NeXT,
| Apple got NeXTSTEP, NeXT's technical staff (including
| people like Bud Tribble and Avie Tevanian), and (ultimately
| very importantly) Steve Jobs.
| dagmx wrote:
| You haven't really expanded on why basing off the Linux
| kernel would have made more sense, especially at the time.
|
| People have responded to you with timelines explaining why
| it couldn't have happened, but you seem to keep restating
| this claim without more substance or context for the time.
|
| IMHO Linux would have been the wrong choice, and perhaps
| even rests on an incorrect assumption. The Mac is not really
| BSD-based outside of the userland. The kernel was and is
| significantly different, and would have hard-forked from
| Linux if they did use it at the time.
|
| Often when people say Linux they mean (the oft-memed)
| GNU/Linux, except GNU diverged significantly from the POSIX
| command line tools (in that sense macOS is truer) and the
| GPL3 license is anathema to Apple.
|
| I don't see any area where basing off Linux would have
| resulted in materially better results today.
| jart wrote:
| Well, for starters, it would have better memory management.
| The XNU kernel's memory manager has poor time complexity:
| if I create a bunch of sparse memory maps using mmap(), XNU
| starts to croak once I have 10,000+ of them.
| dagmx wrote:
| Please re-read the comment you're responding to, about how
| the kernel would have diverged significantly even if they
| did use the Linux kernel. Unless you think a
| three-decade-old kernel would have the same characteristics
| as today.
|
| What benefit would it have had at the time? What guarantees
| would it have given at the time that would have persisted
| three decades later?
| monocasa wrote:
| AFAICT Linux wasn't even ported to PowerPC at the time of
| NeXTSTEP being acquired by Apple.
| johndoe0815 wrote:
| Apple was itself involved in porting Linux to PPC, albeit
| running on top of Mach 3 in MkLinux, starting in early 1996:
|
| https://en.m.wikipedia.org/wiki/MkLinux
| GianFabien wrote:
| Back in the days when Apple acquired NeXT, Linux was
| undergoing lots of development and wasn't well established.
| Linux, being a monolithic kernel, didn't offer the levels of
| compartmentalization that Mach did.
|
| As things now stand, FreeBSD represents many of the benefits of
| Darwin and the open source nature of Linux. If you seek a more
| secure environment without Apple's increasing levels of lock-
| in, then FreeBSD (and the other BSDs) merit consideration for
| deployment.
| finnjohnsen2 wrote:
| Is the driver support fit for using FreeBSD as a desktop OS
| these days?
|
| Last I tried (~10 years ago) I gave up and I assumed FreeBSD
| was a Server OS, because I couldn't for the life of me get
| Nvidia drivers working in native resolution. I don't recall
| specifics but Bluetooth was problematic also.
| WD-42 wrote:
| I don't think so. Here's a report from this month:
| https://freebsdfoundation.org/blog/february-2025-laptop-
| supp...
|
| Looks like (some) laptops might sleep and wifi is on the
| way! (with help from Linux drivers)
| laurencerowe wrote:
| Isn't FreeBSD a monolithic kernel? I don't believe it
| provides the compartmentalisation that you talk about.
|
| As I understand it, Mach was based on BSD and was
| effectively a hybrid, with much of the existing BSD kernel
| running as a single big task under the microkernel. Darwin
| has since updated the BSD kernel under the microkernel with
| current developments from FreeBSD.
| TickleSteve wrote:
| Mach was never based on BSD; it replaced it. Mach is the
| descendant of the Accent and Aleph kernels. BSD came into
| the frame for the userland tools.
|
| "Mach was developed as a replacement for the kernel in the
| BSD version of Unix,"
| (https://en.wikipedia.org/wiki/Mach_(kernel))
|
| Interestingly, MkLinux was the same type of project but for
| Linux instead of BSD (i.e. Linux userland with Mach
| kernel).
| inkyoto wrote:
| > As things now stand, FreeBSD represents many of the
| benefits of Darwin and the open source nature of Linux.
|
| No. FreeBSD has committed the original sin of UNIX by
| deliberately dropping support for all non-Intel
| architectures, intending to focus on optimising FreeBSD for
| the Intel ISA and platforms. UNIX portability and support
| for a diverse range of CPUs and hardware platforms are
| ingrained in the DNA of UNIX, however.
|
| I would argue that FreeBSD has paid the price for this
| decision - FreeBSD has faded into irrelevance today (despite
| having introduced some of the most outstanding and brilliant
| innovations in UNIX kernel design) - because the FreeBSD
| core team bet heavily on Intel remaining the only hardware
| platform in existence, and they missed the turn (ARM,
| RISC-V, and marginally MIPS in embedded). Linux stepped in
| and filled the niche very quickly, and it now runs
| everywhere. FreeBSD is faster, but Linux is better.
|
| And it does not matter that Netflix still runs FreeBSD on
| its servers, serving up content at the theoretical speed of
| light - it is sad living proof of FreeBSD having become a
| niche within a niche.
|
| P.S. I would also argue that the BSD core teams
| (Free/Net/Open) were a major factor in the downfall of all
| BSDs, due to their insular nature and, especially in the
| early days, a near-hostile attitude towards outsiders.
| <<Customers>> voted with their feet - and chose Linux.
| pjmlp wrote:
| Not really everywhere; exactly because of the GPL, most
| embedded FOSS OSes are either Apache- or BSD-licensed.
|
| It is not only Netflix; Sony is also quite fond of
| cherry-picking stuff from the BSDs for their Orbis OS.
|
| Finally, I would assert that the Linux kernel as we know it
| today is only relevant because the ones responsible for its
| creation still walk this planet, and, like every project,
| when the creators are no longer around it will be taken in
| directions that no longer match the original goals.
| danieldk wrote:
| I am very skeptical that it's primarily caused by the focus
| on Intel CPUs. FreeBSD already fell into obscurity well
| before RISC-V. And even though they missed the ARM
| router/appliance boat, Linux had already overtaken FreeBSD
| when people were primarily using Linux for x86 servers and
| (hobbyist) desktops. The _Netcraft has confirmed: BSD is
| dying_ Slashdot meme was from the late 90s or early 2000s.
| Also, if this were the main reason, we would all be using
| OpenBSD or NetBSD.
|
| IMO it's really a mixture of factors, some I can think of:
|
| - BSD projects were slowed down by the AT&T lawsuit in the
| early 90s.
|
| - FreeBSD focused more on expert users, whereas Linux
| distributions focused on graphical installers and
| configuration tools early on. Some distributions had
| graphical installers by the end of the 90s. So Linux
| distributions could onboard people who were looking for a
| Windows alternative much more quickly.
|
| - BSD had forks very early on (FreeBSD, NetBSD, OpenBSD,
| BSDi). The cost of this is much higher than that of multiple
| Linux distributions, since all the BSDs maintain their own
| kernel and userland.
|
| - The BSDs (except BSDi) were non-profits, whereas many
| early Linux distributions were by for-profit companies (Red
| Hat, SUSE, Caldera, TurboLinux). This gave Linux a larger
| development and marketing budget and it made it easier to
| start partnerships with IBM, SAP, etc.
|
| - The BSD projects were organized as cathedrals and were
| more hierarchical, which made it harder for new contributors
| to step in.
|
| - The BSD projects provided full systems, whereas Linux
| distributions would piece together systems. This made Linux
| development messier, but allowed quicker evolution and made
| it easier to adapt Linux for different applications.
|
| - The GPL put a lot more pressure on hardware companies to
| contribute back to the Linux kernel.
|
| Besides that there is probably also a fair amount of
| randomness involved.
| inkyoto wrote:
| The AT&T lawsuits are a moot point, as they were all
| settled in the early 1990s. They are the sole reason why
| FreeBSD and NetBSD even came into existence - by forking
| the 4.4BSD-Lite codebase after the disputed code had been
| eliminated or replaced with non-encumbered
| reimplementations. Otherwise, we would all be running on
| descendants of 4.4BSD-Lite today.
|
| Linux has been running _uninterruptedly_ on s/390 since
| October 1999 (31-bit support, Linux v2.2.13) and since
| January 2001 for 64-bit (Linux v2.4.0). Linux mainlined
| PPC64 support in August 2002 (Linux v2.4.19), and it has
| been running on ppc64 happily ever since, whereas FreeBSD
| dropped ppc64 support around 2008-2010. Both s/390 and ppc64
| (as well as many others) are hardly hobbyist platforms, and
| both remain in active use today. Yes, IBM was behind each
| port, although the Linux community has been a net $0
| beneficiary of the porting efforts.
|
| I am also of the opinion that licensing is a red herring,
| as BSD/MIT licences are best suited for proprietary,
| closed-source development. However, the real issue with
| proprietary development is its siloed nature, and the
| fact that closed-source design and development very
| quickly start diverging from the mainline and become
| prohibitively expensive to maintain in-house long-term.
| So the bigwigs quickly figured out that they could make
| a sacrifice and embrace the GPL to reduce ongoing costs.
| Now, with the *BSD core team-led development, new
| contributors (including commercial entities) would be
| promptly shown the door, whereas the Linux community
| would give them the warmest welcome. That was the second
| major reason for the downfall of all things BSD.
| danieldk wrote:
| _The AT &T lawsuits are a moot point, as they were all
| settled in the early 1990s. They are the sole reason why
| FreeBSD and NetBSD even came into existence - by forking
| the 4.4BSD-Lite codebase after the disputed code had been
| eliminated or replaced with non-encumbered
| reimplementations. Otherwise, we would all be running on
| descendants of 4.4BSD-Lite today._
|
| The lawsuit was settled in Feb 1994, FreeBSD was started
| in 1993. FreeBSD was started because development on
| 386BSD was too slow. It took FreeBSD until Nov 1994 to
| rebase onto 4.4BSD-Lite (in FreeBSD 2.0.0).
|
| At the time 386BSD and then FreeBSD were much more mature
| than Linux, but it took from 1992 until the end of 1994
| for the legal clarity around 386BSD/FreeBSD to clear up.
| So Linux had about three years to try to catch up.
| _paulc wrote:
| > FreeBSD has committed the original sin of UNIX by
| deliberately dropping support for all non-Intel
| architectures, intending to focus on optimising FreeBSD for
| the Intel ISA and platforms.
|
| FreeBSD supports amd64 and aarch64 as Tier 1 platforms and
| a number of others (RiscV, PowerPC, Arm7) as Tier 2
|
| https://www.freebsd.org/platforms/
| inkyoto wrote:
| It is irrelevant what FreeBSD supports today.
|
| FreeBSD started demoting non-Intel platforms around
| 2008-2010, with FreeBSD 11 released in 2016 only
| supporting x86. The first non-Intel architecture support
| was reinstated in April 2021, with the official release
| of FreeBSD 13 - over a decade of time irrevocably lost.
|
| Plainly, FreeBSD has missed the boat - the first AWS
| Graviton CPU was released in 2018, and it ran Linux.
| Everything now runs Linux, but it could have been
| FreeBSD.
| adrian_b wrote:
| Having used continuously both FreeBSD and Linux, wherever
| they are best suited, since around 1995 until today, I
| disagree.
|
| In my opinion the single factor that has contributed the
| most to a greater success for Linux than for FreeBSD has
| been the transition to multithreaded and multicore CPUs
| even in the cheapest computers, which has started in 2003
| with the SMT Intel Pentium 4, followed in 2005 by the dual-
| core AMD CPUs.
|
| Around 2003, FreeBSD 4.x was the most performant and the
| most reliable operating system for single-core single-
| thread CPUs, for networking or storage applications, well
| above Linux or Microsoft Windows (source: at that time I
| was designing networking equipment and we had big server
| farms on which the equipment was tested, under all
| operating systems).
|
| However it could not use CPUs with multiple cores or
| threads, so on such CPUs it fell behind Linux and Windows.
| The support introduced in FreeBSD 5.x was only partial and
| many years have passed until FreeBSD had again a
| competitive performance on up-to-date CPUs. Other BSD
| variants were even slower in their conversion to
| multithreaded support. During those years the fraction of
| users of *BSD systems has diminished a lot.
|
| The second most important factor has been the much smaller
| set of device drivers for various add-on interface cards
| than for Linux. Only few hardware vendors have provided
| FreeBSD device drivers for their products, mostly only
| Intel and NVIDIA, and for the products of other vendors
| there have been few FreeBSD users able to reverse engineer
| them and write device drivers, in comparison with Linux.
|
| The support for non-x86 ISAs has also been worse than in
| Linux, but this was just a detail among the general support
| for less kinds of hardware than Linux.
|
| All this has been caused by positive feedback: FreeBSD
| started with fewer users because, by the time the
| lawsuits were settled favorably for FreeBSD, most
| potential users had already started to use Linux. Then
| the smaller number of users was less capable of porting
| the system to new hardware devices and newer
| architectures, which has led to even lower adoption.
|
| Nevertheless, there have always been various details in the
| *BSD systems that have been better than in Linux. A few of
| them have been adopted in Linux, like the software package
| systems that are now ubiquitous in Linux distributions, but
| in many cases Linux users have invented alternative
| solutions, which in enough cases were inferior, instead
| of studying the *BSD systems to see whether an already
| existing solution could have been adopted rather than
| inventing yet another alternative.
| inkyoto wrote:
| Whilst I do agree with most of your insights and the
| narrative of historic events, I also believe that BSD
| core teams were a major contributing factor to the demise
| of BSD's (however unpopular such an opinion might be).
|
| The first mistake was that all BSD core teams flatly
| refused to provide native support for the JVM back in its
| heyday. They eventually _partially_ conceded and made it
| work using Linux emulation; however, it was riddled with
| bugs, crashes and other issues for years before it could
| run Java server apps. Yet, users clamoured to run Java
| applications, like, _now_ and _vociferously_.
|
| The second grave mistake was to flatly refuse to support
| containerisation (Docker) due to not being kosher. Linux
| based containerisation is what underpins all cloud
| computing today. Again, FreeBSD arrived too late, and
| it was too little.
|
| P.S. I still hold the view that FreeBSD made matters even
| worse by dropping support for non-Intel platforms early
| on - at a stage when its bleak future was already all but
| certain. New CPU architectures are enjoying a
| renaissance, whilst FreeBSD nervously sucks its thumb by
| the roadside of history.
| usrnm wrote:
| Docker was created in 2013, long after BSDs had lost all
| their popularity. And, fwiw, FreeBSD pioneered containers
| long before Linux:
| https://en.m.wikipedia.org/wiki/FreeBSD_jail
| inkyoto wrote:
| FreeBSD jails are advanced chroot++. Although they set a
| precedent as a predecessor of true containers, they have:
|
| 1. Minimal kernel isolation.
| 2. Optional network stack isolation via VNET (but not
| used by default).
| 3. Rudimentary resource controls with no default
| enforcement (important!).
| 4. A simple capability security model.
|
| Most importantly, since FreeBSD was a very popular choice
| for hosting providers at the time, jails were originally
| invented to fully support partitioned-off web hosting,
| rather than to run self-sufficient, fully contained
| (containerised) applications as first-class citizens.
|
| The claim to have invented true containers belongs to
| Solaris 10 (not Linux) and its zones. Solaris 10 was
| released in January 2005.
| throw0101d wrote:
| > _3. Rudimentary resource controls with no default
| enforcement (important!)._
|
| Seems pretty extensive to me, including R/W bytes/s and
| R/W ops/s:
|
| * https://docs.freebsd.org/en/books/handbook/jails/#jail-
| resou...
|
| * https://klarasystems.com/articles/controlling-resource-
| limit...
|
| * https://man.freebsd.org/cgi/man.cgi?query=rctl
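| For instance, resource caps like the ones those articles
| describe boil down to one-line rctl rules. A hypothetical
| /etc/rctl.conf (the jail name "www" and the limit values
| are made up for illustration):

```
# Deny further allocations once the jail's virtual memory use hits 1G
jail:www:vmemoryuse:deny=1G
# Throttle the jail to 10M/s of reads and 1000 read ops/s
jail:www:readbps:throttle=10M
jail:www:readiops:throttle=1000
```

| Rules follow the rctl(8) subject:subject-id:resource:action
| format and can also be added at runtime with `rctl -a`.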
| tzs wrote:
| I don't know if this had much effect on anything, but
| another thing that hindered using FreeBSD for some users
| was that Linux worked better as a dual boot system with
| DOS/Windows on a typical home PC.
|
| There were two problems.
|
| The first was that FreeBSD really wanted to own the whole
| disk. If you wanted to dual boot with DOS/Windows you
| were supposed to put FreeBSD on a separate disk. Linux
| was OK with just having a partition on the same disk you
| had DOS/Windows on. For those of us whose PCs only had
| one hard disk, buying a copy of Partition Magic was
| cheaper than buying a second hard disk.
|
| The reason for this was that the FreeBSD developers felt
| that multiple operating systems on the same disk were not
| safe due to the lack of standards for how to emulate a
| cylinder/head/sector (CHS) addressing scheme on disks
| that used logical block addressing (LBA). They were
| technically correct, but greatly overestimated the
| practical risks.
|
| In the early days PC hard disks used CHS addressing, and
| the system software such as the PC BIOS worked in those
| terms. Software using the BIOS such as DOS applications
| and DOS itself worked with CHS addresses and the number
| of cylinders, heads, and sectors per track (called the
| "drive geometry") they saw matched the actual physical
| geometry of the drive.
|
| The INT 13h BIOS interface for low level disk access
| allowed for a maximum of 1024 cylinders, 256 heads, and
| 63 sectors per track (giving a maximum possible drive
| size of 8 GB if the sectors were 512 bytes).
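| Those interface limits are where the quoted "8 GB" ceiling
| comes from; a quick sanity check of the arithmetic:

```python
# INT 13h CHS limits: 1024 cylinders, 256 heads, 63 sectors/track,
# with the then-standard 512-byte sectors.
cylinders, heads, spt, bytes_per_sector = 1024, 256, 63, 512

max_bytes = cylinders * heads * spt * bytes_per_sector
print(max_bytes)  # 8455716864 bytes, i.e. ~8.4 GB (~7.9 GiB)
```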
|
| At some point as disks got bigger drives with more than
| 63 sectors per track became available. If you had a drive
| with for example 400 cylinders, 16 heads, and 256 sectors
| per track you would only be able to access about 1/4 of
| the drive using CHS addressing that uses the actual drive
| geometry.
|
| It wasn't really practical to change the INT 13h
| interface to give the sectors per track more bits, and so
| we entered the era of made up drive geometries. The BIOS
| would see that the disk geometry is 400/16/256 and make
| up a geometry with the same capacity that fit within the
| limits, such as 400/256/16.
|
| Another place with made up geometry was SCSI disks. SCSI
| used LBA addressing. If you had a SCSI disk on your PC
| whatever implemented INT 13h handling for that (typically
| the BIOS ROM on your SCSI host adaptor) would make up a
| geometry. Different host adaptor makers might use
| different algorithms for making up that geometry. Non-
| SCSI disk interfaces for PCs also moved to LBA
| addressing, and so the need to make up a geometry for INT
| 13h arose with those too, and different disk controller
| vendors might use a different made up geometry.
|
| So suppose you had a DOS/Windows PC, you repartitioned
| your one disk to make room for FreeBSD, and went to
| install FreeBSD. FreeBSD does not use the INT 13h BIOS
| interface. It uses its own drivers to talk to the low
| level disk controller hardware and those drivers use LBA
| addressing.
|
| It can read the partition map and find the entry for the
| partition you want to install on. But the entries in the
| partition map use CHS addressing. FreeBSD would need to
| translate the CHS addresses from the partition map into
| LBA addresses, and to do that it would need to know the
| disk geometry that whatever created the partition map was
| using. If it didn't get that right and assumed a made up
| geometry that didn't match the partitioner's made up
| geometry the actual space for DOS/Windows and the actual
| space for FreeBSD could end up overlapping.
|
| In practice you can almost always figure out from looking
| at the partition map what geometry the partitioner used
| with enough accuracy to avoid stomping on someone else's
| partition. Partitions started at track boundaries, and
| typically the next partition started as close as possible
| to the end of the previous partition and that
| sufficiently narrows down where the partition is supposed
| to be in LBA address space.
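| The translation itself is simple once the (possibly made-up)
| geometry is known; the hard part was knowing which geometry
| the partitioner had assumed. A sketch (the geometry numbers
| are illustrative):

```python
def chs_to_lba(c, h, s, heads, sectors_per_track):
    """Translate a CHS tuple to an LBA sector number.
    CHS sector numbers are 1-based, hence the (s - 1)."""
    return (c * heads + h) * sectors_per_track + (s - 1)

# Assuming a geometry of 16 heads and 63 sectors/track:
assert chs_to_lba(0, 0, 1, 16, 63) == 0     # first sector of the disk
assert chs_to_lba(0, 1, 1, 16, 63) == 63    # first sector of head 1
# The same CHS triple under a different made-up geometry lands on a
# different LBA - exactly the mismatch that worried the FreeBSD
# developers:
assert chs_to_lba(1, 0, 1, 16, 63) != chs_to_lba(1, 0, 1, 255, 63)
```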
|
| That was the approach taken by most SCSI vendors and it
| worked fine. I think eventually FreeBSD did start doing
| this too but by then Linux had become dominant in the
| "Dual boot DOS/Windows and a Unix-like OS on my one disk
| PC" market.
|
| The other problem was CD-ROM support. FreeBSD was slow to
| support IDE CD-ROM drives. Even people who had SCSI on
| their home PC and used SCSI hard disks were much more
| likely to have an IDE CD-ROM than a SCSI CD-ROM. SCSI CD-
| ROM drives were several times more expensive and it
| wasn't the interface that was the bottleneck so SCSI CD-
| ROM just didn't make much sense on a home PC.
|
| For many, then, it came down to this: with Linux they
| didn't need a two-disk system and they could install
| from a convenient CD-ROM, but for FreeBSD they would
| need a dedicated disk for it and would have to deal
| with a stack of floppies.
| LargoLasskhyfv wrote:
| Related fun fact up to maybe a decade ago: If you had a
| disk labeled/partitioned in FreeBSD's 'dangerously
| dedicated' style, and tried to image it, or to read the
| image of it with some forensic tool called Encase
| (running under Windows of course, how else could it
| be?), this tool would crash that Windows with an
| irrecoverable blue screen :)
|
| _I loved that!_
| tedunangst wrote:
| Not quite accurate history of SMP. FreeBSD had SMP well
| before 5.0, but not "fine grained" which is what the 5.0
| release was all about. But the conversion led to many
| regressions.
| linguae wrote:
| Interestingly enough, Apple did contribute to porting Linux to
| PowerPC Macs in the mid-1990s under the MkLinux project, which
| started in 1996 before Apple's purchase of NeXT later that
| year:
|
| https://en.m.wikipedia.org/wiki/MkLinux
|
| I don't think there was any work done on bringing the Macintosh
| GUI and application ecosystem to Linux. However, until the
| purchase of NeXT, Apple already had the Macintosh environment
| running on top of Unix via A/UX (for 68k Macs) and later the
| Macintosh Application Environment for Solaris and HP-UX; the
| latter ran Mac OS as a Unix process. If I remember correctly,
| the work Apple did for creating the Macintosh Application
| Environment laid the groundwork for Rhapsody's Blue Box, which
| later became Mac OS X's Classic environment. It is definitely
| possible to imagine the Macintosh Application Environment being
| ported to MkLinux. The modern FOSS BSDs were also available in
| 1996, since this was after the settlement of the lawsuit
| affecting the BSDs.
|
| Of course, running the classic Mac OS as a process on top of
| Linux, FreeBSD, BeOS, Windows NT, or some other contemporary OS
| was not a viable consumer desktop OS strategy in the mid 1990s,
| since this required workstation-level resources at a time when
| Apple was still supporting 68k Macs (Mac OS 8 ran on some 68030
| and 68040 machines). This idea would've been more viable in the
| G3/G4 era, and by the 2000s it would have been feasible to
| give each classic Macintosh program its own Mac OS process
| running
| on top of a modern OS, but I don't think Apple would have made
| it past 1998 without Jobs' return, not to mention that the NeXT
| purchase brought other important components to the Mac such as
| Cocoa, IOKit, Quartz (the successor to Display PostScript) and
| other now-fundamental technologies.
| CharlesW wrote:
| > _I don't think there was any work done on bringing the
| Macintosh GUI and application ecosystem to Linux._
|
| QTML (which became the foundation of the Carbon API) was OS
| agnostic. The Windows versions of QuickTime and iTunes used
| QTML, and in an alternate universe Apple could've empowered
| developers to bring Mac OS apps to Windows and Linux with a
| more mature version of that technology.
| threeseed wrote:
| Completely forgot about MkLinux. The timing is fascinating.
|
| MkLinux was released in February 1996 whilst Copland got
| officially cancelled in August 1996.
|
| So it's definitely conceivable that internally they were
| considering just giving up on the Copland microkernel and
| running it all on Linux. And maybe this was a legitimate
| third option besides BeOS and NeXT that was never made
| public.
| kalleboo wrote:
| What's crazy is that MkLinux was actually Linux-on-Mach,
| not just a baremetal PowerPC Linux. The work they did to
| port Mach to PowerPC for MkLinux was then reused in the
| port of NeXTSTEP Mach to PowerPC. Everything was very
| intertwined.
| masswerk wrote:
| Also, MkLinux wasn't that stable. I experimented a bit
| with it at the time and it wasn't really ripe for
| production. It kind of worked, but there would have been
| lots of work to be invested (probably more than Apple
| could afford) to turn this into a mainstream OS.
| askvictor wrote:
| Diverse systems are more resilient. It's probably a good thing
| for IT in a general sense, even if it's not the most efficient.
| toast0 wrote:
| Based on how often they pull in updated bits from FreeBSD
| (pretty much never), an Apple fork of Linux would be more or
| less Linux 2.4 today.
|
| I don't know what loss open source suffers in this
| context.
|
| I don't think Apple would need to spend less time or money on
| their kernel grafted on top of Linux 2.4 vs their kernel grafted
| on top of FreeBSD 4.4
| WD-42 wrote:
| Because presumably the GPL would force them to release their
| modifications. Apple gets/got away with leeching off the BSDs
| because of the permissive license.
| zigzag312 wrote:
| A bit off topic, but is there any data or estimates of how
| often big companies use modified versions of GPL
| software/libraries for their web services without releasing
| their modifications?
| toast0 wrote:
| They release their kernel source more or less timely
| without the GPL.
| surajrmal wrote:
| Why would we want more of a monoculture? We've put so many eggs
| in one basket already. I hope we see more diversity in kernels,
| not further consolidation.
|
| Taken a different way, it feels similar to suggesting Apple
| should rebase safari on chromium.
| hypercube33 wrote:
| Keep in mind they were also looking at BeOS which is more real
| time and notably not unix/Linux. I wish I lived in the timeline
| that they went with it as I'm a huge Be fan.
| palata wrote:
| Seen differently, I think it's great that there is yet another
| kernel being maintained out there.
|
| Imagine if Apple decided to open source Darwin: wouldn't that
| be a big win for open source?
| phendrenad2 wrote:
| Control is important. Apple has never had to fight with
| Torvalds or IBM or Microsoft over getting something added to
| the kernel. Just look at the fiasco when Microsoft wanted to
| add a driver for their virtualization system to the kernel.
|
| Also, one thing you'll notice about big companies - they know
| that not only is time valuable, worst-case time is important
| too. If someone in an open-source ecosystem CAN delay your
| project, that's almost as bad as if they regularly DO delay
| your project. This is why big companies like Google tend to
| invent everything themselves. E.g., Google may have "invented
| Kubernetes" (really, an engineer at Google uninvolved with the
| progenitor of K8s - Borg - invented it based on Borg), but they
| still use Borg, which every Xoogler here likes to say is
| "not as good as k8s". Yet they still use it. Because it gives
| them full control, and no possibility of outsiders slowing them
| down.
| DeathArrow wrote:
| >Whenever I see the Darwin kernel brought into the discussion I
| can't help but wonder how different things could have been if
| Apple had just forked Linux and ran their OS services on top of
| that.
|
| They have a long history with XNU and BSD. And Linux has a GPL
| license which might not suit Apple.
|
| >Especially when I think about how committed they are to Darwin
| it really paints a poor image in my mind. The loss that open
| source suffers from that, and the time and money Apple has to
| dedicate to this with a disproportionate return.
|
| They share a lot of code with FreeBSD, NetBSD and OpenBSD.
| Which are open source. And Darwin is open source, too. So
| there's no loss that open source suffers.
| piuantiderp wrote:
| The world is better with multiple flavors instead of one
| bloated one.
| emchammer wrote:
| Couldn't Apple have used ZFS instead of inventing APFS? Maybe
| modifying it to use less physical memory?
| inkyoto wrote:
| Supporting ZFS in a UNIX kernel requires excessively extensive
| modifications to the design and implementation of the VMM,
| namely:
|
| 1. Integration of the kernel's VM with ZFS's adaptive
| replacement cache which runs in user space - memory pressure
| cooperation, page accounting and unified memory management.
| It also requires extensive VM modifications to support ZFS's
| controlled page eviction, fine-grained dirty page tracking,
| plus other stuff.
| 2. VMM alignment with the ZFS transactional semantics and
| intent logs - delayed write optimisations, efficient page
| syncing.
| 3. Support for large memory pages and proper memory page
| alignment - support for superpages (to reduce TLB pressure
| and to map large ZFS blocks efficiently) and I/O alignment
| awareness (to ensure proper alignment of memory pages to
| avoid unnecessary copies).
| 4. Memory-mapped I/O: a different implementation of mmap and
| support for lazy checksumming of mmap'ed pages.
| 5. Integration with kernel thread management and scheduling,
| cooperation with VMM memory allocators.
| 6. ... and the list goes on and on.
|
| ZFS is just not the right answer for consumer-facing and
| mobile/portable devices, due to being a heavyweight _server_
| design with vastly different design provisions and due to
| being the answer to an entirely different question.
| AndrewDavis wrote:
| > Supporting ZFS in a UNIX kernel requires excessively
| extensive modifications to the design and implementation of
| the VMM, namely:
|
| FYI: Apple did a bunch of that work. They ported ZFS to OS X
| shortly after it was open-sourced, with only limited support
| landing in 10.5 and with it being listed as an upcoming
| feature in 10.6.
|
| But something happened and they abandoned it. The rumour is a
| sun exec let the cat out of the bag about it being the next
| main filesystem for osx (ie not just support for non root
| drives) and this annoyed Jobs so much he canned the whole
| project.
| lunarlull wrote:
| > The rumour is a sun exec let the cat out of the bag about
| it being the next main filesystem for osx (ie not just
| support for non root drives) and this annoyed Jobs so much
| he canned the whole project.
|
| Very petty if true.
| MBCook wrote:
| It would fit jobs though.
|
| That's one of the famous rumors.
|
| As others here have said, Oracle bought Sun two years
| later. Between the increased memory requirements,
| uncertainty about Sun's status as a going concern, and
| who knows what else, maybe it really did make sense not
| to go forward.
| JCattheATM wrote:
| The ZFS code was already released under the CDDL. What
| was stopping Apple from writing their own implementation
| like OpenZFS and FreeBSD, regardless of what happened
| with Sun?
| kergonath wrote:
| As other also said, there were patent issues around ZFS:
| https://wiki.endsoftwarepatents.org/wiki/NetApp's_filesys
| tem...
| flomo wrote:
| > But something happened and they abandoned it. The rumour
| is...
|
| The reality is NetApp sued Sun/Oracle over ZFS patents.
|
| https://www.theregister.com/2010/09/09/oracle_netapp_zfs_di
| s...
| inkyoto wrote:
| Yes, they did, but... it was more of a proof of concept
| and a promise rather than a production-quality release.
| They also had the OS X Server product line back then
| (since discontinued), which ZFS would have been the best
| fit for, and they also released the OS X ZFS port before
| the advent of the first iPhone.
|
| It is not a given that ZFS would have performed well within
| the tight hardware constraints of the first ten or so
| generations of the iPhone - file systems such as APFS,
| btrfs or bcachefs are better suited for the needs of mobile
| platforms.
|
| Another conundrum with ZFS is that ZFS disk pools really,
| really want a RAID setup, which is not a consumer grade
| thing, and Apple is a consumer company. Even if ZFS did see
| the light of day back then, there is no guarantee it would have
| lived on - I am not sure, anyway.
| cosmic_cheese wrote:
| IIRC that was something they had been working on, but it got
| axed when ZFS changed hands and licensing became potentially
| thorny. My memory may be failing me though.
| linguae wrote:
| I remember reading back in 2007-2008 that Apple was interested
| in bringing ZFS support to Mac OS X, but discussions ended once
| Oracle purchased Sun. This was a bummer; I would've loved ZFS
| on a Mac.
|
| After a cursory Google search, I found this article:
|
| https://www.zdnet.com/article/zfs-on-snow-leopard-forget-abo...
| Lammy wrote:
| There were a couple of (iirc read-only) beta ZFS PKGs from
| Apple in the 10.5 era:
| https://macintoshgarden.org/forum/looking-zfs-beta-seeds-
| ye-...
| leoh wrote:
| Kind of surprising that the Oracle deal would have killed it
| given that Jobs and Ellison were such close friends.
| krger wrote:
| They were probably close friends because they weren't
| business competitors.
| wpm wrote:
| Around the time of Snow Leopard, it was rumored. I assume the
| Oracle buyout of Sun around the same time had a big part in
| killing that particular idea.
| agentkilo wrote:
| The article states that the "pager daemons" that manage swap
| files run in user space, and that kernel memory can also get
| swapped out, but it never explains how a user space daemon
| swaps out kernel memory. Do they have hard-coded exceptions
| for special daemons, or use special system calls? Where can I
| find out more details about the user space memory management
| specifically?
| jjtheblunt wrote:
| https://github.com/apple-oss-distributions/xnu
| comex wrote:
| The claim is inaccurate and mixes together multiple different
| things:
|
| - The Mach microkernel originally supported true userland
| paging, like mmap but with an arbitrary daemon in place of the
| filesystem. You can see the interface here:
|
| https://web.mit.edu/darwin/src/modules/xnu/osfmk/man/memory_...
|
| But I'm not sure if Darwin ever used this functionality; it
| certainly hasn't used it for the last ~20 years.
|
| - dynamic_pager never used this interface. It used a different,
| much more limited Mach interface where xnu could alert it when
| it was low on swap; dynamic_pager would create swap files, and
| pass them back into the kernel using macx_swapon and
| macx_swapoff syscalls. But the actual swapping was done by the
| kernel. Here is what dynamic_pager used to look like:
|
| https://github.com/apple-oss-distributions/system_cmds/blob/...
|
| But that functionality has since moved into the kernel, so now
| dynamic_pager does basically nothing:
|
| https://github.com/apple-oss-distributions/system_cmds/blob/...
|
| - The vast majority of kernel memory is wired and cannot be
| paged out. But the kernel can explicitly ask for pageable
| memory (e.g. with IOMallocPageable), and yes, that memory can
| be swapped to disk. It's just rarely used.
|
| Still, any code that does this needs to be careful to avoid
| deadlocks. Even though userland is no longer involved in
| "paging" per se, it's still possible and in fact common for
| userland to get involved one or two layers down. You can have
| userland filesystems with FSKit (or third-party FUSE). You can
| have filesystems mounted on disk images which rely on userland
| to convert reads and writes to the virtual block device into
| reads and writes to the underlying dmg file (see `man
| hdiutil`). You can have NFS or SMB connections going through
| userland networking extensions. There are probably other cases
| I'm not thinking of.
|
| EDIT: Actually, I may be wrong about that last bit. You can
| definitely have filesystems that block on userspace, but it may
| not be supported to put swap on those filesystems.
| krackers wrote:
| >xnu could alert it when it was low on swap; dynamic_pager
| would create swap files, and pass them back into the kernel
|
| What's the benefit of this indirection through userspace for
| swap file creation? Can't the kernel create the swap file
| itself?
| comex wrote:
| Today the kernel does create the swap file itself. I don't
| know why it behaved differently in the past, given that the
| version of dynamic_pager I linked is only 355 lines of
| code, not obviously complex enough to be worth offloading
| to userspace. But this was written back in 1999 and maybe
| there was more enthusiasm for being microkernel-y (even if
| they had already backed away from full Mach paging).
| delusional wrote:
| Looking at some of the contemporary documentation, it does
| look like it was essentially a historical accident. The
| interface was built to be all-microkernel, but when it was
| adapted into a real system the microkernel concepts fell
| by the wayside as they were no longer useful; where they
| didn't impose too much of a burden (as in the pager
| interface) they were allowed to stick around.
| rollcat wrote:
| IMHO a kernel managing a file (any file) all on its own
| imposes too many assumptions about hardware and user space.
| This could unexpectedly bite you if you're in a rescue
| system trying to fsck, booting from external RO media,
| running diskless or from NFS, etc.
|
| Meanwhile Linux allows you to swapon(2) just about
| anything. A file, a partition, a whole disk, /dev/zram,
| even a zvol. (The last one could lead to a nasty deadlock,
| don't do it.)
|
| Perhaps the XNU/NeXT/Darwin/OSX developers wanted a similar
| level of flexibility? Have the right piece in place, even
| just as a stub?
| fithisux wrote:
| They should have fostered a better FOSS community around XNU;
| now that they have moved to ARM there should have been a
| runnable distribution for x64.
| larusso wrote:
| I'm not sure if I/O Kit was written in this C++ subset just for
| speed. There was this controversy at the time: Apple announced
| Mac OS X and said that it wouldn't be compatible with current
| software. All partners would need to rewrite their software in
| Objective-C. This didn't go over well. Apple backpedaled and
| introduced "Carbon", an API layer for C/C++ applications, as
| well as "Core Foundation", an underpinning to the Objective-C
| base framework "Foundation". This is also the reason why we
| have Obj-C++. The interesting part is that they managed to make
| the memory management toll-free, meaning an object allocated in
| the C/C++ world can be passed to Obj-C without extra overhead.
| comex wrote:
| IOKit C++ is running in the kernel, so it's not really related
| to any of the technologies you mentioned which are all
| userland-only.
| dcrazy wrote:
| Being able to port your existing C++ driver to IOKit instead
| of rewriting it in Objective-C is a selling point. For some
| reason a lot of people seem to dislike writing an
| Objective-C shell around their C++.
| comex wrote:
| At the risk of nitpicking, there are a bunch of things that are
| not _quite_ right. Nonexhaustive list:
|
| - Discussion of paging mixes together some concepts as I
| described in [1].
|
| - Mach port "rights" are not directly related to entitlements.
| Port rights are part of the original Mach design; entitlements
| are part of a very different, Apple-specific security system
| grafted on much later. They are connected in the sense that Mach
| IPC lets the receiver get an "audit token" describing the process
| that sent them, which it can then use to look up entitlements.
|
| - All IOKit calls go through Mach IPC, not just asynchronous
| events.
|
| - "kmem" (assuming this refers to the kmem_* functions) is not
| really a "general-purpose kernel malloc"; that would be kalloc.
| The kmem_* functions are sometimes used for allocations, but
| they're closer to a "kernel mmap" in the sense that they always
| allocate new whole pages.
|
| - It's true that xnu can map the same physical pages into
| multiple tasks read-only, but that's nothing special. Every OS
| does that if you use mmap or similar APIs. What does make the
| shared cache special is that it can also share physical page
| _tables_ between tasks.
|
| - The discussion about "shared address space" is mixing things
| up.
|
| The current 64-bit behavior is the same as the traditional 32-bit
| behavior: the lower half of the address space is reserved for the
| current user process, and the upper half is reserved for the
| kernel. This is typically called a shared address space, in the
| sense that the kernel page tables are always loaded, and only
| page permissions prevent userland from accessing kernel memory.
| Though you could also think of it as a 'separate' address space
| in the sense that userland and kernel stick to separate
| addresses. Anyway, this approach is more efficient (because you
| don't have to swap page tables for every syscall) and it's the
| standard thing kernels do.
|
| What was tricky and unusual was the _intermediate_ 32-bit
| behavior where the kernel and user page tables actually were
| completely independent (so the same address would mean one thing
| in user mode and another thing in kernel mode). This allowed
| 32-bit user processes to use more memory (4GB rather than 2GB),
| but at the cost of making syscalls more expensive.
|
| Even weirder, in the same era, xnu could even run 64-bit
| processes while itself being 32-bit! [2]
|
| - The part about Secure Enclave / Exclaves does not explain the
| main difference between them: the Secure Enclave is its own CPU,
| while Exclaves are running on the main CPU, just in a more-
| trusted context.
|
| - Probably shouldn't describe dispatch queues as a "new
| technique". They're more than 15 years old, and now they're sort
| of being phased out, at least as a programming model you interact
| with directly, in favor of Swift Concurrency. To be fair, Swift
| Concurrency uses libdispatch as a backend.
|
| [1] https://news.ycombinator.com/item?id=43599230
|
| [2] https://superuser.com/questions/23214/why-does-my-mac-
| os-x-1...
| saagarjha wrote:
| Also,
|
| > As of iOS 15, Apple even allows virtualization on iOS (to run
| Linux inside an iPad app, for example, which some developers
| have demoed), indicating the XNU hypervisor is capable on
| mobile as well, though subject to entitlement.
|
| Apple definitely does not allow this; in fact the hypervisor
| code has been removed from the kernel as of late.
| opengears wrote:
| Does this give us any indication on when Apple will be ditching
| x86 support?
| SG- wrote:
| any day now, usually after 6-7 years after hardware releases.
| snovymgodym wrote:
| Late 2027 or 2028 most likely. MacOS version lifespan is about
| 3 years give or take so we'll have a better idea once they
| announce whether or not MacOS 16 (Sequoia's successor) will
| support Intel Macs.
|
| I have a hunch we'll get one more MacOS version with Intel
| support since they were still making Mac Minis and Pros with
| Intel chips in the first half of 2023.
| devmtk wrote:
| oh interesting
| pjmlp wrote:
| Lots of love and work went into this article. As someone who was
| around for most of this history (ported code from NeXTSTEP to
| Windows, dived into the GNUstep attempts to clone the
| experience, remembers YellowBox and OpenStep, read the internals
| books, regular consumer of WWDC content), I would say the
| article pretty much matches my recollection of how most of these
| systems have evolved.
| lapcat wrote:
| Question for the author, who is here in the comments: for
| clarification, to what extent is the article a deep dive into the
| OS itself (e.g., reverse engineering) vs. a deep dive into the
| extant literature on the OS?
| naves wrote:
| Jobs tried to hire Torvalds to work on Mac OS X and Linus
| declined: https://www.macrumors.com/2012/03/22/steve-jobs-tried-
| to-hir...
| tux3 wrote:
| Hard to imagine Torvalds working on a microkernel, of all
| people
| mike_hearn wrote:
| That's a good history, but it skips over a lot of the nice
| security work that really distinguishes Apple's operating systems
| from Linux or Windows. There's a lack of appreciation out there
| for just how far ahead Apple now is when it comes to security. I
| sometimes wonder if one day awareness of this will grow and
| people working in sensitive contexts will be required to use a
| Mac by their CISO.
|
| The keystone is the code signing system. It's what allows apps to
| be granted permissions, or to be sandboxed, and for that to
| actually stick. Apple doesn't use ELF as most Unixes do; they
| use a format called Mach-O. The differences between ELF and
| Mach-O aren't important except for one: Mach-O supports an extra
| section containing a signed _code directory_. The code directory
| contains a series of hashes over code pages. The kernel has some
| understanding of this data structure and dyld can associate it
| with the binary or library as it gets loaded. XNU checks the
| signature over the code directory and the VMM subsystem then
| hashes code pages as they are loaded on demand, verifying the
| hashes match the signed hash in the directory. The hash of the
| code directory therefore can act as a unique identifier for any
| program in the Apple ecosystem. There's a bug here: the
| association hangs off the vnode structure, so if you
| overwrite a signed binary and then run it the kernel gets upset
| and kills the process, even if the new file has a valid
| signature. You have to actually replace the file as a whole for
| it to recognize the new situation.
|
| On top of this foundation Apple adds _code requirements_. These
| are programs written in a small expression language that
| specifies constraints over aspects of a code signature. You can
| write a requirement like, "this binary must be signed by Apple"
| or "this binary can be of any version signed by an entity whose
| identity is X according to certificate authority Y" or "this
| binary must have a cdhash of Z" (i.e. be that exact binary).
| Binaries can also expose a _designated requirement_, which is
| the requirement by which they'd like to be known by other
| parties. This system initially looks like overkill but enables
| programs to evolve whilst retaining a stable and unforgeable
| identity.
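|
| A designated requirement written in that language looks roughly
| like this (syntax per Apple's Code Signing Requirement Language;
| the bundle identifier and team ID here are made up):

```
anchor apple generic and identifier "com.example.MyApp" and
certificate leaf[subject.CN] = "Developer ID Application: Example Corp (ABCDE12345)"
```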
|
| The kernel exposes the signing identity of tasks to other tasks
| via ports. Requirements can then be imposed on those ports using
| a userspace library that interprets the constraint language. For
| example, if a program stores a key in the system keychain (which
| is implemented in user space) the keychain daemon examines the
| designated requirement of the program sending the RPC and ensures
| it matches future requests to use the key.
|
| This system is abstracted by _entitlements_. These are key=value
| pairs that express permissions. Entitlements are an open system
| and apps can define their own. However, most entitlements are
| defined by Apple. Some are purely opt-in: you obtain the
| permission merely by asking for it and the OS grants it
| automatically and silently. These seem useless at first, but
| allow the App Store to explain what an app will do up front, and
| more generally enable a least-privilege stance where apps don't
| have access to things unless they need them. Some require
| additional evidence like a provisioning profile: this is a signed
| CMS data structure provided by Apple that basically says "apps
| with designated requirement X are allowed to use restricted
| entitlement Y", and so you must get Apple's permission to use
| them. And some are basically abused as a generic signed flags
| system; they aren't security related at all.
|
| The system is then extended further, again through cooperation of
| userspace and XNU. Binaries being signable is a start but many
| programs have data files too. At this point the Apple security
| system becomes a bit hacky IMHO: the kernel isn't involved in
| checking the integrity of data files. Instead, a plist is
| included at a special place in the slightly ad-hoc bundle
| directory layout; the plist contains hashes of every data file
| in the bundle (at file, not page, granularity), the hash of the
| plist is placed in the code signature, and finally the whole
| thing is checked by Gatekeeper on first run. Gatekeeper is asked
| by the
| kernel if it's willing to let a program run and it decides based
| on the presence of extended attributes that are placed on files
| and then propagated by GUI tools like web browsers and
| decompression utilities. The userspace OS code like Finder
| invokes Gatekeeper to check out a program when it's been first
| downloaded, and Gatekeeper hashes every file in the bundle to
| ensure it matches what's signed in the binaries. This is why
| macOS has this slow "Verifying app" dialog that pops up on first
| run. Presumably it's done this way to avoid causing apps to stall
| when they open large data files without using mmap, but it's a
| pity because on fast networks the unoptimized Gatekeeper
| verification can actually be slower than the download itself.
| Apple doesn't care because they view out-of-store distribution as
| legacy tech.
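|
| The plist in question lives at _CodeSignature/CodeResources
| inside the bundle. A heavily trimmed sketch (keys as I
| understand them; the path and hash value are made up):

```xml
<key>files2</key>
<dict>
    <key>Resources/data.json</key>
    <dict>
        <key>hash2</key>
        <!-- base64 SHA-256 of the file; value made up -->
        <data>RkFLRUhBU0g=</data>
    </dict>
</dict>
```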
|
| Finally there is Seatbelt, a Lisp-based programming language for
| expressing sandbox rules. These files are compiled in userspace
| to some sort of bytecode that's evaluated by the kernel. The
| language is quite sophisticated and lets you express arbitrary
| rules for how different system components interact and what they
| can do, all based on the code signing identities.
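|
| A Seatbelt profile is Scheme-flavored text along these lines (a
| hedged sketch; the available operations and filters vary by OS
| release and aren't formally documented, and the service name is
| made up):

```scheme
(version 1)
(deny default)
;; allow reading system libraries
(allow file-read* (subpath "/usr/lib"))
;; allow talking to one specific Mach service
(allow mach-lookup (global-name "com.apple.example.service"))
```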
|
| The above scheme has an obvious loophole that was only closed in
| recent releases: data files might contain code and they're only
| checked once. In fact for any Electron or JVM app this is true
| because the code is in a portable format. So, one app could
| potentially inject code into another by editing data files and
| thus subvert code signing. To block this in modern macOS Seatbelt
| actually sandboxes every single app running. AFAIK there is no
| unsandboxed code in a modern macOS. One of the policies the
| sandbox imposes is that apps aren't allowed to modify the data
| files of other apps unless they've been granted that permission.
| The policy is quite sophisticated: apps can modify other apps if
| they're signed by the same legal entity as verified by Apple,
| apps can allow others matching code requirements to modify them,
| and users can grant permission on demand. To see this in action
| go into Settings -> Privacy & Security -> App Management, then
| turn it off for Terminal.app and (re)start it. Run something like
| "vim /Applications/Google Chrome.app/Contents/Info.plist" and
| observe that although the file has rw permissions vim thinks it's
| read-only.
|
| Now, I'll admit that my understanding of how this works ends here
| because I don't work for Apple. AFAIK the kernel doesn't
| understand app bundles, and I'm not sure how it decides whether
| an open() syscall should be converted to read only or not. My
| guess is that the default Seatbelt policy tells the kernel to do
| an upcall to a security daemon which understands the bundle
| format and how to read the SQLite permission database. It then
| compares the designated requirement of the opener against the
| policies expressed by the bundle and the sandbox to make the
| decision.
| adrian_b wrote:
| I do not think that "security" is the appropriate name for such
| features.
|
| In my opinion "security" should always refer to the security of
| the computer owners or users.
|
| These Apple features may be used for enhancing security, but
| the main purpose for which they have been designed is to
| provide enhanced control of the computer vendor on how the
| computer that they have sold, and which is supposed to no
| longer belong to them, is used by its theoretical owner, i.e.
| by allowing Apple to decide which programs are run by the end
| user.
| saagarjha wrote:
| I think you went for a lazy reply rather than actually
| reading the comment through. Most of the things mentioned
| here directly improve security for the computer's owner.
| lapcat wrote:
| > I think you went for a lazy reply rather than actually
| reading the comment through.
|
| https://news.ycombinator.com/newsguidelines.html
|
| Your reply could have omitted the first sentence.
|
| Many years ago, at Macworld San Francisco, I met "Perry the
| Cynic", the Apple engineer who added code signing to Mac OS
| X. Nice person, but I also kind of hate him and wish I
| could travel back in time to stop this all from happening.
| mike_hearn wrote:
| On macOS the security system is open even though the codebase
| is closed. You can disable SIP and get full root access.
| Gatekeeper can be configured to trust some authority other
| than Apple, or disabled completely. You can write and load
| your own sandbox policies. These things aren't well known and
| require reading obscure man pages, but the capabilities are
| there.
|
| Even in the default out-of-the-box configuration, Apple isn't
| exercising editorial control over what apps you can run. Out
| of store distribution requires only a verified identity and a
| notarization pass, but notarization is a fully automated
| malware scan. There's no human in the loop. The App Store is
| different, of course.
|
| Could Apple close up the Mac? Yes. The tech is there to do so
| and they do it on iOS. But... people have been predicting
| they'd do this from the first day the unfortunately named
| Gatekeeper was introduced. Yet they never have.
|
| I totally get the concern and in the beginning I shared it,
| but at some point you have to just stop speculating and give
| them credit for what they've actually done. It's _much easier_
| to distribute an app Apple executives don't like to a Mac than
| it is to distribute an app Linux distributors don't like to
| Linux users, because Linux app distribution barely works once
| you go outside the "store" (the distro repositories). In theory it
| should be the other way around, but it's not.
| p_ing wrote:
| > Even in the default out-of-the-box configuration, Apple
| isn't exercising editorial control over what apps you can
| run
|
| Perhaps not in the strictest sense, but Apple continues to
| ramp up the editorial friction for the end user to run un-
| notarized applications.
|
| I feel/felt that, pre-macOS 15, right-click Open was an OK
| approach, but as we know that's gone. It's xattr or
| Settings.app. More egregious is the monthly reminder that
| an application is doing something that you want it to do.
|
| A level between "disable all security" and what macOS 15
| introduces would be appreciated.
| jlcases wrote:
| What impresses me most about technical documentation like this is
| how it structures knowledge into comprehensible layers. This
| article manages to explain an extremely complex system by
| establishing clear relationships between components.
|
| I've been experimenting with similar approaches for documentation
| in open source projects, using knowledge graphs to link concepts
| and architectural decisions. The biggest challenge is always
| keeping documentation synchronized with evolving code.
|
| Has anyone found effective tools for maintaining this
| synchronization between documented architecture and implemented
| code? Large projects like Darwin must have established processes
| for this.
| rollcat wrote:
| > Has anyone found effective tools for maintaining this
| synchronization between documented architecture and implemented
| code?
|
| Yes, it's called structure, discipline, and iterative
| improvement.
|
| Keep the documentation alongside the code. Think in BSD terms:
| the OS is delivered as a whole; if I modify /bin/ls to support
| a new flag, then I update the ls.1 man page accordingly,
| preferably in the same commit/PR.
|
| The man pages are a good reference if you already have
| familiarity with the employed concepts, so it's good to have an
| intro/overview document that walks you through those basics.
| This core design rarely sees radical changes, it tends to
| evolve - so adapt the overview as you make strategic decisions.
| The best benchmark is always a new hire. Find out what it is
| that they didn't understand, and task them with improving the
| document.
| worik wrote:
| > Has anyone found effective tools for...
|
| Managing management?
|
| Code comments and documentation make no money, only features
| make money.
|
| Bitter experience...
| darksaints wrote:
| It seems like the XNU kernel is architecturally super close to
| the Mach kernel, and XNU drivers architecturally work like Mach
| drivers, except that they are compiled into the kernel instead
| of running in userspace as a separate process. And it seems like
| the only reason for doing so is performance.
|
| That makes me wonder: how hard would it be to run the XNU kernel
| in something like a "Mach mode", where you take the same kernel
| and drivers but run them separately as the Mach microkernel was
| intended?
|
| I feel like from a security standpoint, a lot of situations would
| gladly call for giving up a little bit of performance for the
| process isolation security benefits that come from running a
| microkernel.
|
| Is anybody here familiar enough with XNU to opine on this?
| dcrazy wrote:
| Many drivers are gradually moving back into userspace via
| DriverKit: https://developer.apple.com/documentation/driverkit
| llincerd wrote:
| Darwin is interesting because of the pace of radical changes to
| its core components. From dropping syscall backwards
| compatibility to mandatory code signing to dyld_shared_cache
| eliminating individual system library files to speed up dynamic
| executable loading. It's a very results-oriented design approach
| with no nostalgia and no sacred cows. I think only a big hardware
| vendor like Apple could pull it off.
| conradev wrote:
| Totally. The march continues with userspace drivers and
| exclaves[1]. I think it's fair to say that security is a big
| driver for their kernel evolution.
|
| [1]
| https://www.theregister.com/2025/03/08/kernel_sanders_apple_...
| mannyv wrote:
| I suppose with unified memory there's no real difference between
| the kernel and userspace; it's just different security zones.
|
| The MMU era used separate memory spaces to enforce security,
| but it's probably safer in the long run to actually have secure
| areas instead of "accidentally secure areas" that aren't that
| secure.
| dcrazy wrote:
| I think you're misunderstanding "unified memory". That term
| refers to whether the GPU has its own onboard memory chips
| which must be populated by a DMA transfer. It doesn't refer to
| whether the system has an MMU.
| adolph wrote:
| Can someone speak to the below statement from the article? I
| thought Objective-C did not have a runtime the way
| memory-managed languages like C# do.
|
| > avoid the runtime overhead of Objective-C in the kernel
|
| From Apple docs[0]:
|
| _You typically don't need to use the Objective-C runtime library
| directly when programming in Objective-C. This API is useful
| primarily for developing bridge layers between Objective-C and
| other languages, or for low-level debugging._
|
| 0.
| https://developer.apple.com/documentation/objectivec/objecti...
| twoodfin wrote:
| Objective-C does have a runtime that maintains all the state
| necessary to implement the APIs in the documentation you
| linked.
|
| For example, how to map class objects to string representations
| of their names.
| dcrazy wrote:
| Yep, ObjC programs call into the runtime every time they call
| a method.
___________________________________________________________________
(page generated 2025-04-06 23:00 UTC)