[HN Gopher] Asahi Linux for Apple M1 progress report, August 2021
___________________________________________________________________
Asahi Linux for Apple M1 progress report, August 2021
Author : fanf2
Score : 436 points
Date : 2021-08-14 12:32 UTC (10 hours ago)
(HTM) web link (asahilinux.org)
(TXT) w3m dump (asahilinux.org)
| mraza007 wrote:
| This is so cool and the Asahi Linux team has done amazing work.
|
| I can't wait to use Linux as a daily driver on my M1.
| stiltzkin wrote:
| Hector Martin is a hacker machine.
| mlindner wrote:
| It's unfortunate in a lot of ways that this is all getting
| written before Rust support is in the kernel. If that were the
| case, all these new drivers could be written in Rust
| instead. Oh well.
| aranchelk wrote:
| As described, the crux of this work seems to be learning how to
| interface with Apple's proprietary hardware. If Rust
| enthusiasts want to go back later and reimplement, they'll have
| a working open source reference implementation.
| emodendroket wrote:
| I'm not an early adopter for this kind of thing, but it's great
| to see progress all the same.
| 2bitencryption wrote:
| here's a (possibly dumb) question - assuming a 100% complete and
| successful "Asahi Linux", what does this mean for distros?
|
| Is this a kernel replacement, as in I could run Manjaro Gnome,
| and just load the Asahi Kernel and it all Just Works?
|
| Or will Asahi Linux need to be a full distro of its own, in order
| to be useful?
| sys_64738 wrote:
| This is genius at work. If I was 5% as smart as these guys I'd be
| doing well.
| dilap wrote:
| Damn, this is so cool. I can tell I'm going to be sucked back
| into trying Linux yet again...
| SirensOfTitan wrote:
| This kind of brilliant work makes me feel very very tiny as an
| engineer. I struggle after work to find time to learn basic Rust.
| I'm totally in awe of folks who can do this kind of stuff. It's
| just so impressive, and maybe one day I can benefit from all this
| awesome work.
| gigatexal wrote:
| Same. People who can easily straddle the lowest-level bits of a
| machine, write performant and good C code, and on top of all
| that reverse engineer things - amazing.
| ip26 wrote:
| Building useful and nonporous layers of abstraction and being
| able to quickly shift between them seems to be a key skill.
| When each layer leverages another, you can rapidly build
| something impressive.
|
| This in turn implies that, apart from technical skill, charting
| a good course from layer to layer can make a big difference,
| versus meandering without quite knowing where you are going.
| hivacruz wrote:
| I feel you. Sometimes I browse projects on GitHub and I'm
| astonished by what people can do that I can't. For example,
| OpenCore[0], a famous bootloader in the Hackintosh scene. How
| can people even start to code this? Awesome work, awesome
| people.
|
| [0]: https://github.com/acidanthera/OpenCorePkg
| sho_hn wrote:
| Preface: I can't do this (specifically). But I have done many
| types of software development across two decades. My journey
| began with LAMP-style web work, took me to C++ and the
| desktop (apps, GUI toolkits, browser engines), then to
| embedded - from smart TVs to smart speakers, to network
| protocols for drone systems and beefy car infotainment ECUs
| and lower-level microcontroller/borderline electronics work.
|
| My conclusion: You can get into just about anything, and for
| the most part the difficulty level is fairly uniform. But
| there's simply a vast sea of domain-specific spec knowledge
| out there. It doesn't mean that it's too hard for you or you
| can't learn it. Anything that is done at scale will
| fundamentally be approachable by most developers. Just be
| prepared you'll need to put in the time to acquire a
| familiarity with the local standards/specs and ways of doing
| things. Knowledge turns seemingly hard things into easy
| things, and if it's been done before chances are it's
| documented somewhere.
|
| The truly hard stuff is innovating/doing things that haven't
| been done before. Novel development happens rarely.
| bartvk wrote:
| Yeah, this is my conclusion too. I moved from the Oracle
| platform to embedded, scientific stuff like reading out
| custom electronics for IR cameras. And now I'm into iOS
| apps. It's more a question of what part of the stack feels
| interesting and doable to you, at a certain period in your
| professional life.
| [deleted]
| cmurf wrote:
| >acquire a familiarity with the local standards/specs
|
| And with the bugs. Especially with bootloaders because
| you're in a preboot environment.
| secondaryacct wrote:
| I've done a BitTorrent client, a 3D rendering engine on the
| PS3 and a printer driver for fun, and while clearly not at the
| level of a bootloader, it earns me a few cookie points in
| interviews for originality.
|
| What I learned starting these daunting tasks (especially the
| PS3, which had closed specs) is that it's still done by humans
| following familiar patterns. Most of the time you bang your
| head against unclear docs or go through successive layers of
| abstraction (to write a PS3 rendering engine, you'd better
| know your basic OpenGL on PC, which is only possible if you
| know matrix and vector geometry), but EVERYTHING is reachable
| at a workable level (expert level, I feel, comes with a money
| incentive, teammates and repetition). I spent 2 years on
| Japanese, 3 hours a day, and could take in the meaning of
| haiku at the end.
|
| I think the only true talent you must have is insane
| stubbornness. To go through, one doc at a time. Usually after
| the first insane challenge of your life (for me: learning
| English at a near-native level, reading literature or
| discussing politics), you understand it's all pretty much a
| matter of time investment.
| ostenning wrote:
| Re: finding time. I think this is true for all of life, as we
| get older our risk and reward profiles change. As we are
| encumbered by more responsibility, family, mortgage, etc, we
| stop taking the same risks and start playing life more "safely"
| or "securely". Of course it doesn't have to be that way and I
| personally advocate for a debt free lifestyle for this reason.
| Too many times have we heard the story of the midlife crisis:
| the false deity of security and comfort that robs the well-paid
| executive of the majority of his or her life. It's a sad story.
| So my advice to people is to work less for others and work more
| on your own projects and passions; life is simply too short to
| give your years to someone else, even if it pays well.
| knodi wrote:
| Linux on M1 mini is looking like a dream machine more and more.
| black_puppydog wrote:
| On one level yes. I just can't imagine (anymore) pumping money
| into an ecosystem that purposefully makes it so difficult to
| run free software.
|
| I admit that excludes more and more modern hardware, but Apple
| is the most expensive and profitable.
| vbezhenar wrote:
| Mini is not expensive. At least in its base configuration.
| It's a pity you can't upgrade it with cheap RAM and disks
| anymore.
| haberman wrote:
| > The silver lining of using this complicated DCP interface is
| that DCP does in fact run a huge amount of code - the DCP
| firmware is over 7MB! It implements complicated algorithms like
| DisplayPort link training, real-time memory bandwidth
| calculations, handling the DisplayPort to HDMI converter in the
| Mac mini, enumerating valid video modes and performing mode
| switching, and more. It would be a huge ordeal to reverse
| engineer all of this and implement it as a raw hardware driver,
| so it is quite likely that, in the end, we will save time by
| using this higher-level interface and leaving all the dirty work
| to the DCP. In particular, it is likely that newer Apple Silicon
| chips will use a shared DCP codebase and the same interface for
| any given macOS version, which gives us support for newer chips
| "for free", with only comparatively minor DCP firmware ABI
| updates.
|
| How interesting: by moving some of the driver to firmware, Apple
| has effectively made it easier for OSs like Linux to have good
| support for the hardware, because the OS<->hardware protocol
| operates at a higher level of abstraction and may work across
| multiple chips.
|
| The trade-off is that the protocol is not stable and so Linux has
| to decide which versions it will support, and make sure the OS
| driver always matches the version of the hardware firmware:
|
| > As a further twist, the DCP interface is not stable and changes
| every macOS version! [...] Don't fret: this doesn't mean you
| won't be able to upgrade macOS. This firmware is per-OS, not per-
| system, and thus Linux can use a different firmware bundle from
| any sibling macOS installations.
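|
| A minimal sketch of what that version pinning might look like
| on the driver side (purely illustrative; the names and version
| strings here are hypothetical, not anything from the Asahi
| tree):
|
|     /* Illustrative: a driver speaking a firmware-defined
|      * protocol pins itself to the ABI versions it knows. */
|     #include <stdio.h>
|     #include <string.h>
|
|     /* hypothetical DCP firmware ABIs this driver understands */
|     static const char *supported_fw[] = { "12.0", "12.1" };
|
|     static int fw_supported(const char *ver)
|     {
|         for (size_t i = 0;
|              i < sizeof(supported_fw) / sizeof(supported_fw[0]);
|              i++)
|             if (strcmp(ver, supported_fw[i]) == 0)
|                 return 1;
|         return 0;
|     }
|
|     int main(void)
|     {
|         /* would be read back from the firmware at probe time */
|         const char *fw = "12.1";
|
|         if (!fw_supported(fw)) {
|             fprintf(stderr, "unsupported DCP ABI %s\n", fw);
|             return 1;
|         }
|         printf("DCP ABI %s ok, continuing probe\n", fw);
|         return 0;
|     }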
| edeion wrote:
| If I recall correctly, high end hard drives used a protocol
| called SCSI in the 90s that was mostly embedded in hardware.
| minedwiz wrote:
| Somewhat related, SSDs run an entire microcontroller on-board
| to translate I/O requests from the OS into proper use of
| the flash hardware.
| skavi wrote:
| I remember hearing about how the M1 Macs were very quick to get
| displays running when they were plugged in and to change the
| display configuration. I guess the DCP is why.
| neilalexander wrote:
| > How interesting: by moving some of the driver to firmware,
| Apple has effectively made it easier for OSs like Linux to have
| good support for the hardware, because the OS<->hardware
| protocol operates at a higher level of abstraction and may work
| across multiple chips.
|
| If I am remembering right, this was the original idea behind
| UEFI drivers too -- you could in theory write drivers that the
| firmware would load and they would present a simpler/class-
| compatible interface to whatever operating system was loaded
| after that. I think Apple did pretty much this on Intel Macs
| for a number of things.
| rvz wrote:
| Great work, but as for the installer, use at your own risk or on
| another spare machine.
|
| Unless you want to risk losing your files or your entire machine.
| Maybe it would void your warranty if you do this. Who knows.
| marcan_42 wrote:
| Installing a custom kernel is an official feature on these
| machines and won't void your warranty.
|
| I've never had the installer do anything crazy, nor does it
| support any resize/deletion operations yet, so it's pretty
| unlikely that it'll wipe your data (it's much more likely to
| just fail to complete the installation properly).
|
| More likely is that if you boot Linux, a driver bug will cause
| data corruption. We've had some reports of filesystem issues,
| though as I said the kernel tree situation is haphazard right
| now, so we'll see if they still happen once we put together a
| tree with properly vetted patches :)
| rvz wrote:
| > More likely is that if you boot Linux, a driver bug will
| cause data corruption. We've had some reports of filesystem
| issues, though as I said the kernel tree situation is
| haphazard right now, so we'll see if they still happen once
| we put together a tree with properly vetted patches :)
|
| Well, that is another notable risk, isn't it? If something goes
| wrong at the filesystem level (as you admit can happen), then
| the worst case is that the OS and your important files become
| corrupted somehow and it's back to recovery once again; if that
| fails, it's Apple Configurator, which requires another Mac.
|
| Regardless, I would most certainly use another machine to
| test this, hence why I said _' use at your own risk'_,
| especially when the installation script is pre-alpha.
|
| What is wrong with such a disclaimer?
| zozbot234 wrote:
| The common feature with these embedded ARM chipsets is that a
| faulty kernel can very much hardbrick your machine. There's
| nothing like the amount of failsafes you get with ordinary
| PC-compatible hardware. I wouldn't want to rely on any
| promise that "installing a custom kernel won't void your
| warranty" - in practice, it very much will if you don't know
| what you're doing.
| my123 wrote:
| Same for x86 machines really; the number of UEFI firmwares
| that hard-bricked the machine if you dared delete all the
| NVRAM variables...
|
| (and many more other issues)
| floatboth wrote:
| That number is not that big, IIRC just a few specific
| laptops. I haven't heard of any desktop mainboards with
| that issue.
| my123 wrote:
| I remember some Lenovo server boards being bricked by an
| UEFI variable storage snafu, which was painful to handle.
| marcan_42 wrote:
| You're assuming Apple's ARM chipsets are like other
| embedded ARM chipsets. They aren't. Things are much cleaner
| and quite well designed.
|
| Storage-wise, you can't brick the machine by wiping NVMe.
| You can always recover via DFU mode, which you can also use
| from another Linux machine by using idevicerestore (no need
| for macOS). This is very much unlike your typical Android
| phone, which is indeed a flimsy mess that will brick itself
| at the slightest touch of the wrong partition in eMMC.
|
| Hardware-wise, there's the usual "you can technically do
| ugly things with I/O pins", but in practice that's quite
| unlikely to cause damage. The lowest level, dangerous
| things are handled by SMC, not the main OS. We barely have
| to deal with any of that. And this is a thing on PCs too,
| just read your chipset's user manual and you'll find out
| the registers to flip random I/O pins.
|
| Firmware-wise, I do know of one way of causing damage to
| these machines that cannot be recovered by the user: wiping
| NOR Flash. This isn't because it'll make DFU unusable, but
| rather because it contains things like system identity
| information and calibration data that only Apple can put
| back properly. But again, this isn't much different on PCs;
| you can brick a PC by wiping the NOR Flash from your OS
| too. In fact there have been BIOSes so broken that you can
| brick them by doing a `rm -rf /`, which descends into
| efivars and the UEFI implementation crashes when it finds
| no EFI variables.
|
| In order to avoid the NOR Flash bricking risk, I do not
| intend to instantiate the NOR device in our device tree at
| all by default. I do not believe we need it for regular
| operation. Flash memories have specific unlock sequences
| for writes/erases, so there is practically zero risk that a
| crazy random bug (i.e. scribbling over random I/O memory)
| could trigger such an operation, and without the NOR driver
| bound, the code that actually knows how to cause damage
| will never run.
|
| For what it's worth, I have quite some experience _not_
| bricking people's machines, even when dealing with
| unofficial exploits/hacks/jailbreaks. To my knowledge, not
| a single user out of the probably >10m people running The
| Homebrew Channel / BootMii on their Wii, which I helped
| develop, has bricked their system due to a problem
| traceable to our software. Nintendo has a much worse track
| record :-)
|
| https://marcan.st/2011/01/safe-hacking/
| floatboth wrote:
| > system identity information and calibration data that
| only Apple can put back properly
|
| Ha. This reminds me of a Motorola Linux phone I bricked
| as a kid by trying to write a package manager in C shell
| (lol) which inevitably led to rm -rf / being executed
| which wiped a unique per-device security partition
| required for the boot process.
|
| Why did Apple make the same choice of putting critical
| data onto an OS-writable flash chip? Chromebooks do it
| right - everything critical lives in the Security Chip's
| own on-chip flash, and the list of commands it would take
| from the host system is very limited by default (most
| things are only available to the debug cable).
| my123 wrote:
| It's an SPI flash that isn't mapped to /dev.
|
| As far as I know, fully wiping it is recoverable, but
| involves putting the Mac into device firmware update
| mode, and then recovering from another machine.
| marcan_42 wrote:
| AIUI if you actually wipe NOR flash entirely, DFU mode
| won't save you, because the Mac won't even know who it is
| (MAC addresses, serial number, etc.).
|
| However, I can't claim to have tried it, so there may
| well be additional safeguards. I'm just not particularly
| interested in risking it for our users either way :)
| intricatedetail wrote:
| We need lawmakers to force Apple and other companies to open up
| documentation that would enable writing drivers etc. Given
| Apple's anti-privacy stance, I can see why they don't want a
| platform they can't control on their hardware. This should be
| illegal.
| sydthrowaway wrote:
| Interesting. What's the project leader's day job, if you don't
| mind my asking?
| zamadatix wrote:
| From https://marcan.st/about/
|
| "Hello! I'm Hector Martin and like to go by the nickname
| "marcan". I currently live in Tokyo, Japan as an IT/security
| consultant by day and a hacker by night. For some definition of
| day and night, anyway."
|
| Hacker for hire, basically. This project got started by
| crowdfunding enough monthly sponsors (which happened almost
| overnight, thanks to his prior reputation and interest in the
| project), so as a result it's somewhat half day job and half
| night hacking out of personal interest.
| monocasa wrote:
| Are the ASC firmware blobs encrypted too? ie. are they easily
| available for static analysis?
| my123 wrote:
| They are regular Mach-Os, with some symbols still available
| too.
| porsupah wrote:
| Hugely nifty work! Makes me wish I had an M1 system even more so
| I could help out.
| rowanG077 wrote:
| Wow, that's some progress. This looks pretty close to usable, it
| seems. Is there some kind of expected timeline? Like: in 6
| months, keyboard, WiFi etc. work; in 1 year we expect the GPU to
| work?
| marcan_42 wrote:
| Unofficially, I still have it as a bit of a personal goal to
| get the GPU going by the end of the year. We already have the
| userspace side passing >90% of the GLES2 tests on macOS, it's
| just the kernel side that's missing, so it's not as far as it
| might seem.
| jtl999 wrote:
| How exactly do you test your own GPU "rendering" under macOS?
| gsnedders wrote:
| Essentially by replacing the macOS shader -> machine code
| compiler with one of your own (or, rather, running it
| alongside).
| rowanG077 wrote:
| That would truly be amazing! I don't really expect anything
| official, I mean software is hard to estimate as it is. Let
| alone when you are reverse engineering where you never know
| what kind of insanity you will find.
|
| So for the GPU we would need 3 pieces? Alyssa's work on reverse
| engineering the command stream and writing the userspace driver
| that generates the correct commands for the GPU. A working DCP
| driver to handle actually displaying rendered frames. And
| finally a kernel driver to be able to submit those commands to
| the GPU. And work on the last of those has not started yet,
| right? It's hard for me to piece together how everything fits.
| marcan_42 wrote:
| There's a whole different coprocessor tied to the GPU for
| the rendering (AGX), though the generic RTKit/mailbox stuff
| is the same as it is for DCP. That will largely involve
| command submission, memory management, and (a relatively
| big unknown) GPU preemption. In principle, the GPU kernel
| side should be quite a bit simpler than DCP, since display
| management is quite a bit hairier than (relatively) simpler
| GPU operations; most of the complexity of the GPU is in the
| userspace side. But we won't know until we get there.
| boris wrote:
| Thanks for working on this! It would be great if making Asahi
| generally usable was not blocked by the GPU driver. Currently
| Apple M1 is the only generally-available ARM hardware that can be
| used for testing other software for compatibility with ARM. So an
| installable version without desktop support would be very
| appreciated.
| opencl wrote:
| What makes all of the other ARM hardware available unsuitable?
| my123 wrote:
| Especially given that, for example, a Jetson AGX Xavier with
| 32GB of RAM is available at the same price point as an M1 Mac
| mini.
|
| For $50 more, you can get: https://www.solid-run.com/arm-
| servers-networking-platforms/h... on which you just add your
| own DRAM sticks too.
| rjzzleep wrote:
| If everything weren't so built around CUDA, the M1 GPU
| actually gives you twice the performance.
| bionade24 wrote:
| Source?
| kzrdude wrote:
| Why is this the kind of ARM processor you need, and not a
| raspberry pi?
| marcan_42 wrote:
| It's not blocked on the GPU driver; you can already boot a
| desktop on the boot-time framebuffer (this has worked for
| months). The issue right now is that as I mentioned at the end,
| things like USB and storage work but aren't quite _there_ yet
| and are spread around various kernel branches, so our next step
| is indeed going to be to get that all in shape so that there is
| one known good kernel branch for people to use. That will give
| you Ethernet, USB, NVMe, etc. _Then_ we'll tackle the GPU
| after that.
|
| https://twitter.com/alyssarzg/status/1419469011734073347
| boris wrote:
| That's great to hear, thanks!
| swiley wrote:
| I'm typing this from an AArch64 device that's not an M1 and
| runs Linux. What are you talking about.
| floatboth wrote:
| Just testing non-desktop software on ARM - you could use a
| Raspberry Pi 4 for that. Or a Pinebook Pro. Or just an EC2
| instance!
|
| For "real workstation" class hardware (i.e. working
| PCIe/XHCI/AHCI on standard ACPI+UEFI systems with mostly FOSS
| firmware) nothing beats SolidRun boards (MACCHIATObin/HoneyComb
| LX2K). Yeah yeah the Cortex-A72 cores are _really_ showing
| their age, oh well, on the other hand the SoCs are not embedded
| cursed crap :)
| opencl wrote:
| Avantek also sells a few workstations that are basically
| Ampere ARM server boards stuffed into desktop cases. Very
| powerful CPUs with up to 80 Neoverse N1 cores, lots of PCIe
| and RAM slots, very expensive.
|
| https://store.avantek.co.uk/arm-desktops.html
| jjcon wrote:
| Can anyone comment on the possibility of going in the opposite
| direction?
|
| I'm considering buying the frame.work laptop, daily driving
| Pop!_OS, and then virtualizing macOS on it for the few macOS
| programs I use (this supports Framework and not Apple).
|
| It looks like it may be fairly easy and possible with a 10-20%
| performance hit?
|
| https://github.com/foxlet/macOS-Simple-KVM
|
| From what I can tell you can pass through a single GPU with a bit
| of work, even an iGPU? Is that correct?
| mixmastamyk wrote:
| macOS doesn't work on non-Apple hardware, including under VMs.
| Unless you've patched it somehow?
| inside_out_life wrote:
| Have you been living under a rock? Do you know what a
| hackintosh is? It's been going on for so long that current
| bootloaders let you run it completely unmodified, with working
| updates etc. macOS also runs under some VMs, with some config
| tweaks.
| mixmastamyk wrote:
| I'm familiar with that; that's why I mentioned "patched" in my
| comment above. It's still a tenuous position to be in,
| unless you enjoy diagnosing errors rather than working.
| heavyset_go wrote:
| This and VirtualBox work well for Mac emulation on Linux.
| kitsunesoba wrote:
| Virtualizing macOS on non-macOS hosts works ok, with a couple
| of caveats:
|
| * macOS lacks drivers for GPUs commonly emulated by VM
| software, and as such runs absolutely horribly without graphics
| acceleration because it assumes the presence of a competent GPU
| at all times - software rasterization seems to be intended to
| be used only as an absolute last-resort fallback. As such,
| passthrough of a supported GPU (generally AMD, though Intel
| iGPUs might work) is basically a requirement.
|
| * It's technically against the macOS EULA. Apple hasn't gone
| after anybody for it to date, but it doesn't mean they won't.
| This is particularly relevant for devs building for Apple
| platforms.
| my123 wrote:
| For macOS on arm64, the software renderer fallback is gone.
|
| No GPU? WindowServer crashes and you'll never see the GUI.
| flatiron wrote:
| I've never passed a GPU to a mac guest but I have done Xcode
| work in a VM and it worked fine on Arch. If I had the cash, I
| too would get a Framework laptop!
| raihansaputra wrote:
| I'm also considering this. The software renderer should be a
| non-issue if you successfully pass through the iGPU. The Pop!_OS
| host won't have any display device, but you can still access it
| through SSH from the macOS VM. Look into /r/vfio and the Discord.
| stefan_ wrote:
| That RPC interface looks very nice for reverse engineering, but
| what kind of horror is this going to be in the kernel
| implementing KMS with JSON^2 and pretend-C++-functions?
| marcan_42 wrote:
| Oh yes. We're aware, we've had and continue to have debates
| about how to tackle the monster... :-)
|
| The C++ stuff isn't _that_ bad, in the end you can just pretend
| it's messages with fixed-layout structures. But yes, the
| serialized dict-of-arrays-of-dicts type stuff can be approached
| in a few ways, none of which are particularly beautiful.
| rjzzleep wrote:
| Any idea why they did that as opposed to something like
| RPMsg?
| marcan_42 wrote:
| Apple have no reason to use existing standards when they
| can roll their own version tailored to their needs. It's
| for internal consumption, after all. This is very much a
| theme throughout their design.
| rjzzleep wrote:
| I'm aware of that, but that's not my question. Oftentimes
| there is a reason why they choose to ignore standards.
| They added a simple C++ (not full C++) layer on top of
| their driver code when they took FreeBSD drivers and
| integrated them into their OS. But there was a benefit to
| doing so, making the drivers arguably easier to compose.
|
| In this case the answer might be just that they were too
| time constrained to design something better. But I was
| just curious if the RPMsg framework has too many issues
| that would make it unsuitable.
| gok wrote:
| RPMsg isn't really a standard, is it? I think it was
| added in Linux 4.11, in April 2017, way after this stuff
| was introduced in Apple devices.
| marcan_42 wrote:
| Taking a quick look at RPmsg, it seems it's from 2016.
| Apple have been doing coprocessors longer than that;
| Apple had their Secure Enclave in 2013 and that was
| already using their mailbox system, which is
| fundamentally different from the IRQs-only RPmsg that
| keeps all message passing entirely in shared memory.
|
| In general, this kind of message passing thing is easy to
| reinvent and everyone does it differently. You'll find
| tons of mailbox drivers in Linux already, all different.
| derefr wrote:
| I'm guessing that what Apple did here was:
|
| 1. Probably years ago: refactor the relevant components
| (display driver, etc.) to run as Mach components with Mach
| IPC, in the form of one Xcode project with:
|
|    - an "IPC client" build target
|    - an "IPC server" build target
|    - a "shared datatypes library" with well-defined IPC
|      serialization semantics, statically linked into both the
|      IPC client and server
|    - a single test suite that tests everything
|
| 2. Possibly when designing the M1, or possibly years ago
| for iOS: split the IPC server build target in two -- one
| build target that builds an IPC-server firmware blob (RTOS
| unikernel), and another that builds a regular on-CPU
| kernel-daemon IPC server;
|
| 3. Basically _do nothing_ in the IPC client build target --
| it should now be oblivious to whether its Mach messages are
| going to an on-CPU kernel daemon or to an RTOS-coprocessor-
| hosted daemon, as the Mach abstraction layer is taking care
| of the message routing. (Kinda like the location-
| obliviousness you get in Erlang when sending messages to
| PIDs.)
|
| This seems likely to me because there was (and still is!)
| both an Intel and an Apple Silicon macOS release; and they
| would want to share as much driver code between those
| releases as possible. So I think it's very likely that
| they've written drivers structured to be "split across"
| Apple Silicon, while running "locally" on their Intel
| devices, in such a way that the differences between these
| approaches is effectively transparent to the kernel.
|
| To achieve #3 -- and especially to achieve it while having
| only one shared test suite -- the firmware would have to be
| speaking the same _wire protocol_ to the IPC client that
| the in-kernel IPC daemon speaks.
|
| And, given that the in-kernel IPC daemons were designed to
| presume a reliable C++ interface, access to shared-memory
| copies of low-level Frameworks like IOKit, etc.; the
| firmware would need to provide this same environment to get
| the pre-existing code to run.
| rjzzleep wrote:
| That's the kind of theory I was looking for, thanks!
| marcan_42 wrote:
| It's not Mach IPC, it's a bespoke thing only for the main
| DCP endpoint (they have a whole _separate_ IPC framework
| for sub-drivers that have more sanely fully migrated to
| the DCP). It also has nothing to do with Intel, since
| these display controllers are only in Apple SoCs.
|
| They _do_ have a non-DCP driver they're still shipping,
| that runs all the same code entirely within the macOS
| kext. I'm not sure exactly what it's for.
| derefr wrote:
| Re: this bespoke protocol, could it be something that
| _emulates the same abstraction as_ Mach IPC (without
| being the same protocol), such that the kernel API of
| this protocol exposes functions similar-enough to Mach
| IPC functions to the driver that it would basically be a
| matter of an #ifdef to switch between the two? The
| tagged-64bit-messages thing sounds very reminiscent, is
| why I'm thinking along those lines.
|
| > They do have a non-DCP driver they're still shipping,
| that runs all the same code entirely within the macOS
| kext. I'm not sure exactly what it's for.
|
| Presumably, they didn't always have the coprocessor as
| part of the design, especially at the prototype phase.
| Imagine what type of workstations were built to prototype
| the Apple Silicon release of macOS -- they probably
| didn't have _any_ Apple-specific coprocessors at first
| (and then they were likely gradually added for testing by
| sticking them on a PCIe card or something.)
| marcan_42 wrote:
| It's much simpler than Mach IPC, and at the same time a
| strange beast in that they literally chopped the driver
| down the middle. The same objects exist on both sides,
| and certain methods call over to the other side. This is
| accomplished by having thunks on one side that marshal
| the arguments into a buffer and call the RPC function,
| and then the other side un-marshals and calls the real
| method on the object, then return data goes back in the
| opposite direction. I assume those thunks are
| autogenerated from some sort of IDL definition they have
| (they mostly use C++ templates on the argument types to
| perform the marshaling).
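|
| In C terms, the shape of it is roughly this (a toy sketch
| only; the real buffer layout, tags and method names are
| nothing like this):
|
|     /* Toy sketch of the split-driver thunk idea: the caller
|      * side packs a method tag plus arguments into a buffer,
|      * the other side unpacks and invokes the real method. */
|     #include <stdint.h>
|     #include <stdio.h>
|
|     struct rpc_buf {
|         uint32_t method;   /* which method to run remotely */
|         uint32_t args[2];  /* flat, fixed-layout arguments */
|         uint32_t ret;      /* filled in by the callee */
|     };
|
|     enum { METHOD_SET_POWER = 1 };
|
|     /* the "real" implementation, living on the far side */
|     static uint32_t set_power_impl(uint32_t on, uint32_t flags)
|     {
|         printf("remote: set_power(%u, %u)\n", on, flags);
|         return 0;
|     }
|
|     /* far side: un-marshal and call the real method */
|     static void rpc_dispatch(struct rpc_buf *b)
|     {
|         switch (b->method) {
|         case METHOD_SET_POWER:
|             b->ret = set_power_impl(b->args[0], b->args[1]);
|             break;
|         }
|     }
|
|     /* caller-side thunk: marshal arguments and "send" */
|     static uint32_t set_power(uint32_t on, uint32_t flags)
|     {
|         struct rpc_buf b = { .method = METHOD_SET_POWER };
|         b.args[0] = on;
|         b.args[1] = flags;
|         rpc_dispatch(&b);  /* stands in for the mailbox trip */
|         return b.ret;
|     }
|
|     int main(void)
|     {
|         return (int)set_power(1, 0);
|     }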
|
| As for the non-DCP driver, the odd thing is that it's
| specifically for the same chip generation (H13, i.e. A14
| and M1), not an older one or any weird prototype thing.
|
| No weird "workstations" were built to prototype the Apple
| Silicon release of macOS; Apple Silicon has existed for
| many years now, coprocessors like these included, in
| iPhones and iPads. The M1 is an iPad chip, the A14X, that
| they beefed up just enough to be able to stick it in
| laptops, and then marketing rebranded it as M1. DCP
| specifically is relatively recent, though, I think it
| only showed up in a relatively recent Ax silicon
| generation.
| derefr wrote:
| > DCP specifically is relatively recent, though, I think
| it only showed up in a relatively recent Ax silicon
| generation.
|
| This was more my point.
|
| I find it unlikely that Apple were doing most of the
| testing of macOS-on-ARM (which probably occurred for
| years prior to the M1 announcement, and prior to the A14X
| being created) directly using the iOS device
| architecture. Doing that wouldn't have allowed them to
| develop AS support for regular PCIe devices attached
| through Thunderbolt, for example, since there's nothing
| like a PCIe lane in that architecture.
|
| Instead, to test things like that, I suspect Apple would
| have needed some kind of testbench that allowed them to
| run macOS on an ARM CPU, _while_ attaching arbitrary
| existing peripherals into said ARM CPU's address space,
| _without_ having to fab a new one-off board for it every
| time they tweaked the proposed architecture.
|
| I would guess that their approach, then, would have been
| very similar to the approach used in prototype bring-up
| in the game-console industry:
|
| - Use a regular Intel machine as a host / hypervisor (in
| Apple's case, probably one of the internal mATX-
| testbench-layout Intel Mac Pros)
|
| - Put whatever recent-generation ARM CPU they made, onto
| a PCIe card
|
| - Build a bespoke hypervisor to drive that CPU, which
| presents to the CPU a virtualized chipset matching the
| current proposed architecture (e.g. a DCP or not, a
| Thunderbolt controller, etc.)
|
| - Have the hypervisor configure the host's IOMMU to
| present both virtual peripherals (for bring-up), and
| arbitrary host peripherals, to the CPU
|
| It's not like Apple are unfamiliar with "SoCs used to
| accelerate a host-run emulator"; decades ago, they put an
| Apple II on an accelerator card and drove it through a
| host hypervisor just like this :)
| marcan_42 wrote:
| > I find it unlikely that Apple were doing most of the
| testing of macOS-on-ARM (which probably occurred for
| years prior to the M1 announcement, and prior to the A14X
| being created) directly using the iOS device
| architecture.
|
| Of course they would be doing it like that, since the
| macOS kernel was already ported to ARM for iOS. They even
| handed out developer kits that literally used an iPad
| chip.
|
| https://en.wikipedia.org/wiki/Developer_Transition_Kit_(2
| 020...
|
| macOS on ARM is _very_ clearly a descendant of the way
| iOS works. It's just macOS userspace on top of an ARM
| XNU kernel, which was already a thing. The way the boot
| process works, etc. is clearly iOS plus their new fancy
| Boot Policy stuff for supporting multiple OSes and custom
| kernels.
|
| > Doing that wouldn't have allowed them to develop AS
| support for regular PCIe devices attached through
| Thunderbolt, for example, since there's nothing like a
| PCIe lane in that architecture.
|
| iPhones and iPads use PCIe! How do you think WiFi/storage
| are attached? PCIe isn't only a desktop thing, it's in
| most phones these days, the Nintendo Switch, Raspberry Pi
| 4, etc. Most modern ARM embedded systems use PCIe for
| something. You'd be hard pressed to find a high-end ARM
| SoC without PCIe lanes.
|
| They didn't have Thunderbolt, but since Apple rolled
| their own bespoke controller too, there is absolutely
| nothing to be gained by bringing that up on Intel first.
| Instead they probably did the same thing everyone does:
| bring it up on FPGAs. Possibly as PCIe add-ons for
| existing iPad chips, possibly hosting a whole (reduced)
| SoC; both approaches would probably be mixed at different
| design stages.
|
| I'm sure they also had at least one unreleased silicon
| spin/design before the M1, during this project. You never
| quite get these things right the first time. No idea if
| that would've made it close to tape-out or not, but I'm
| sure there was something quite a bit different from the
| M1 in design at some point.
|
| > Use a regular Intel machine as a host
|
| > Put whatever recent-generation ARM CPU they made, onto
| a PCIe card
|
| You can buy one of those, it's called a T2 Mac. Indeed
| some Apple Silicon technologies came from there (e.g. how
| SEP external storage is managed), but not in the way you
| describe; most of the T2<->Intel interface was thrown
| away for Apple Silicon, and now the OS sees things the
| same way as BridgeOS did natively on the T2 on those
| Macs.
| derefr wrote:
| Thanks for all the corrections! I'm learning a lot from
| this conversation. :)
|
| > there is absolutely nothing to be gained by bringing
| that up on Intel first
|
| If you're an OSdev fan, then "self-hosting" (i.e. being
| able to do your kernel development for your new system
| _on_ the system itself, making your testbench your new
| workstation) is usually considered a valuable property,
| to be achieved as soon as is feasible.
|
| Of course, the M1 effort probably had a lot of
| involvement from the iOS kernel teams; and the iOS kernel
| folks are a lot less like OS-enthusiast hackers and a lot
| more like game-console platform-toolchain developers,
| thoroughly used to a development paradigm for putting out
| new iOS devices that focuses on non-self-hosted bring-up
| via tethered serial debugging / whatever the Lightning-
| protocol equivalent of ADB is. So they probably don't
| really care about self-hosting. (Are they even there now?
| Can you iterate on XNU kernel drivers effectively on an
| M1 Mac?)
| marcan_42 wrote:
| The kernel tree was already shared; pre-M1 releases of
| the XNU kernel for Intel macOS _already_ had (only
| partially censored) support for Ax line CPUs on iOS
| devices :)
|
| You can build a completely open source XNU kernel core
| for M1 and boot it, and iterate kernel drivers, yes. All
| of that is fully supported, they have the KDKs,
| everything. It's been self hosted since release (well,
| things were a bit rocky the first month or two in the
| public version as far as custom kernels).
| haberman wrote:
| > But yes, the serialized dict-of-arrays-of-dicts type stuff
| can be approached in a few ways, none of which are
| particularly beautiful.
|
| For what it's worth, this sounds somewhat similar to protobuf
| (which also supports dicts, arrays, etc).
|
| After spending many years trying to figure out the smallest,
| fastest, and simplest way to implement protobuf in
| https://github.com/protocolbuffers/upb, the single best
| improvement I found was to make the entire memory management
| model arena-based.
|
| When you parse an incoming request, all the little objects
| (messages, arrays, maps, etc) are allocated on the arena.
| When you are done with it, you just free the arena.
|
| In my experience this results in code that is both simpler
| and faster than trying to memory-manage all of the sub-
| objects independently. It also integrates nicely with
| existing memory-management schemes: I've been able to adapt
| the arena model to both Ruby (tracing GC) and PHP
| (refcounting) runtimes. You just have to make sure that the
| arena itself outlives any reference to any of the objects
| within.
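|
| The core of the arena idea fits in a few lines (a simplified
| sketch; upb's real arena chains blocks, grows on demand,
| supports fusing, and so on):
|
|     /* Minimal bump-allocator arena, just to show the idea. */
|     #include <stdlib.h>
|
|     struct arena {
|         char  *base;
|         size_t used;
|         size_t cap;
|     };
|
|     static int arena_init(struct arena *a, size_t cap)
|     {
|         a->base = malloc(cap);
|         a->used = 0;
|         a->cap  = cap;
|         return a->base ? 0 : -1;
|     }
|
|     /* every parsed message/array/map is carved out of here */
|     static void *arena_alloc(struct arena *a, size_t size)
|     {
|         size = (size + 7) & ~(size_t)7;  /* 8-byte alignment */
|         if (a->used + size > a->cap)
|             return NULL;  /* a real arena grows a new block */
|         void *p = a->base + a->used;
|         a->used += size;
|         return p;
|     }
|
|     /* ...and the whole request is freed with one call */
|     static void arena_free(struct arena *a)
|     {
|         free(a->base);
|         a->base = NULL;
|     }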
|
| (Protobuf C++ also supports arenas, that's actually where the
| idea of using arenas for protobuf was first introduced. But
| Protobuf C++ also has to stay compatible with its pre-
| existing API based on unique ownership, so the whole API and
| implementation are complicated by the fact that it needs to
| support both memory management styles).
| megous wrote:
| For JSON I settled on no transformation to a different in-
| memory representation, just inline on-demand parsing of the
| JSON string buffer. Works nicely and you don't need to
| manage memory much at all.
|
| https://megous.com/git/megatools/tree/lib/sjson.h
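|
| A toy version of the idea (nothing like sjson.h's actual API,
| and it ignores escaping and nesting for brevity) looks like
| this:
|
|     /* Find the string value for a top-level key in a flat
|      * JSON object by scanning the buffer directly, without
|      * building any in-memory tree. */
|     #include <stdio.h>
|     #include <string.h>
|
|     static const char *find_string(const char *json,
|                                    const char *key, size_t *len)
|     {
|         char pat[64];
|         snprintf(pat, sizeof(pat), "\"%s\"", key);
|         const char *p = strstr(json, pat);   /* quoted key */
|         if (!p)
|             return NULL;
|         p = strchr(p + strlen(pat), ':');    /* skip to value */
|         if (!p || !(p = strchr(p, '"')))     /* opening quote */
|             return NULL;
|         const char *end = strchr(++p, '"');  /* closing quote */
|         if (!end)
|             return NULL;
|         *len = (size_t)(end - p);
|         return p;  /* points into the original buffer */
|     }
|
|     int main(void)
|     {
|         const char *doc = "{\"name\":\"asahi\",\"gpu\":\"agx\"}";
|         size_t len;
|         const char *v = find_string(doc, "gpu", &len);
|         if (v)
|             printf("gpu = %.*s\n", (int)len, v);
|         return 0;
|     }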
| black_puppydog wrote:
| Since there are clearly a lot of people who have been following
| the asahi development in detail, I would like to hear your takes
| on this quote from their FAQ:
|
| > No, Apple still controls the boot process and, for example, the
| firmware that runs on the Secure Enclave Processor. However, no
| modern device is "fully open" - no usable computer exists today
| with completely open software and hardware (as much as some
| companies want to market themselves as such).
|
| I think they're aiming at Purism here, but might have forgotten
| about the MNT Reform, even though it is currently specced at the
| lower end of "usable".
| ploxiln wrote:
| There's also the Raptor Computing Talos line ... which is
| interesting, all open firmware and busses, but even more
| expensive and less practical unfortunately.
| https://www.raptorcs.com/TALOSII/
| marcan_42 wrote:
| Open firmware, but who says the silicon isn't backdoored? And
| why is open firmware more important for your freedom than
| open silicon? What about on-chip ROMs? :-)
|
| In the end, you're always drawing arbitrary lines in the
| sand. If you really want to go all the way to actually reach
| a tangible goal, I'm only aware of one project that actually
| offers trust guarantees backed by hard engineering arguments:
| Precursor
|
| https://www.crowdsupply.com/sutajio-kosagi/precursor
|
| (TL;DR on the trick is that, by using an FPGA, you make it
| nearly impossible to backdoor, because it would take a huge
| amount of compute power to engineer a backdoor into the FPGA
| silicon that can analyze arbitrary randomized FPGA designs to
| backdoor them).
|
| For more practical computing, I find M1s have a very solid
| secureboot design. We can retain that security even after
| putting m1n1/Asahi on there; doing so requires physical
| presence assertion and admin credentials and locks you into a
| single m1n1 version without repeating the process, so we can
| use that to root our own chain of trust. Similarly we can
| continue to use Apple's SEP (in the same way one would use a
| TPM, HSM, or a YubiKey for that matter; sure, it's a
| proprietary black box, but it also can't hurt you except
| where you let it) just like macOS does, for things like
| encryption and storing SSH secrets. And all the coprocessors
| have IOMMUs, and the whole thing is very logically put
| together (unlike the giant mess that are Intel CPUs; e.g. we
| _know_ they have a JTAG backdoor in production chips after
| authenticating with keys only they have, never mind things
| like ME) and designed such that the main OS does not have to
| trust all the auxiliary firmwares.
|
| I'd love to have a more open system competing in the same
| space, but I have no problem trusting Apple's machines much
| more than I do the x86 PCs I use these days anyway, and it's
| hard to find things that compete in the same
| performance/power space that are more open than either of
| those, unfortunately.
| dm319 wrote:
| I find Marcan's live streams on this fascinating/mesmerising, and
| I'm not a programmer.
| fastssd wrote:
| Agreed. I have even learned some things casually watching his
| streams over the last month or so.
| mlindner wrote:
| > For the initial kernel DCP support, we expect to require the
| firmware released with macOS 12 "Monterey" (which is currently in
| public beta); perhaps 12.0 or a newer point release, depending on
| the timing. We will add new supported firmware versions as we
| find it necessary to take advantage of bugfixes and to support
| new hardware.
|
| Please don't do this! Monterey is an OS I will never install
| given that it will include the Apple-provided spyware. This
| shouldn't be the earliest supported version! I suggest instead
| finalizing on a current/later release of macOS 11, as updates for
| it will slow down once Monterey is released. A lot of people
| won't be updating to Monterey.
| marcan_42 wrote:
| You need Monterey firmware only. You can just do a side install
| once (to update your system firmware, which needs to be at
| least as new as the newest OS bundle) into another volume, nuke
| it, then install Asahi with the Monterey firmware bundle
| option, and keep your main macOS install at whatever version
| you want.
| derefr wrote:
| Re: the installation procedure, why does Asahi Linux have to have
| its own separate APFS container, rather than being an APFS volume
| in the pre-existing APFS container (and so sharing the Recovery
| volume with the pre-existing macOS install, not requiring you to
| resize the pre-existing APFS container smaller, etc.)?
| marcan_42 wrote:
| I actually haven't tried doing multiple installs in the same
| container, but there's no reason why it wouldn't work as far as
| I know. It would be easy to change the installer to let you do
| that.
|
| Though there isn't much of a difference for Linux; you can't
| actually share the Recovery image (since it's per-OS, even
| within a single Recovery volume), and since you need to
| repartition to create standard Linux partitions anyway (since
| linux-apfs isn't quite at the point where you'd want to use it
| as your root filesystem as far as I know...) it doesn't make
| much of a difference if you create a 2.5G stub container too,
| and makes it much easier to blow away the entire Linux install
| without affecting macOS. Perhaps in the future, when linux-apfs
| is solid enough, it would be interesting to support this in the
| installer in order to have a fully space-sharing dual-boot
| macOS/linux system, without partitioning.
|
| Also, U-Boot would have to grow APFS support in order to have a
| truly all-APFS system :) (we plan to use U-Boot as a layer for
| UEFI services and such, so you don't have to burn kernels into
| the m1n1 image, which would suck for upgrades).
| derefr wrote:
| > and since you need to repartition to create standard Linux
| partitions anyway
|
| If it's possible to do this in APFS (not sure), the installer
| could do something similar for APFS to what WUBI does for
| NTFS: fallocate(2) a contiguous-disk-block single-extent file
| inside the APFS volume, write the rootfs into it, and then
| configure the bootloader entry's kernel params with the APFS
| container+volume and the path to the file within the
| container.
|
| In this setup, linux-apfs would then only need to work well
| _enough_ to expose that file's single extent's raw-block-
| device offset during early boot; then the initramfs script
| could unmount the linux-apfs volume and mount the ext4 rootfs
| directly from the block device at that offset.
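|
| For what it's worth, the extent lookup itself is small. A
| sketch of what that early-boot helper could do, assuming the
| filesystem driver implements the FIEMAP ioctl (an assumption
| that would need checking for linux-apfs):
|
|     /* Ask where a (hopefully single-extent) backing file
|      * physically lives on the underlying block device. */
|     #include <fcntl.h>
|     #include <stdio.h>
|     #include <stdlib.h>
|     #include <sys/ioctl.h>
|     #include <linux/fs.h>
|     #include <linux/fiemap.h>
|
|     int main(int argc, char **argv)
|     {
|         if (argc != 2) {
|             fprintf(stderr, "usage: %s <rootfs-image>\n", argv[0]);
|             return 1;
|         }
|         int fd = open(argv[1], O_RDONLY);
|         if (fd < 0) {
|             perror("open");
|             return 1;
|         }
|
|         /* room for the header plus one extent record */
|         struct fiemap *fm =
|             calloc(1, sizeof(*fm) + sizeof(struct fiemap_extent));
|         if (!fm)
|             return 1;
|         fm->fm_length = ~0ULL;  /* map the whole file */
|         fm->fm_extent_count = 1;
|
|         if (ioctl(fd, FS_IOC_FIEMAP, fm) < 0) {
|             perror("FIEMAP");
|             return 1;
|         }
|         if (fm->fm_mapped_extents != 1) {
|             fprintf(stderr, "not a single-extent file\n");
|             return 1;
|         }
|
|         /* byte offset where the rootfs starts on the device */
|         unsigned long long off = fm->fm_extents[0].fe_physical;
|         printf("rootfs starts at physical offset %llu\n", off);
|         return 0;
|     }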
|
| Yes, it's almost the same as resizing the disk smaller -- but
| it's at least a less-"permanent" change, in the sense that
| deleting the rootfs file from macOS would give the space
| back, without needing to know to re-grow your macOS APFS
| container.
| marcan_42 wrote:
| I wrote a tool to do this many years ago on the Xbox 1 (the
| original) to avoid having to use loopmounted files, but
| it's not the kind of thing you want to rely on for a modern
| filesystem like APFS. There's no way to guarantee that the
| OS won't mess with your data, move it elsewhere, etc (and
| if it doesn't today, it might in tomorrow's macOS update).
| I would never trust this approach enough to recommend it
| for our users, to be honest.
| derefr wrote:
| > There's no way to guarantee that the OS won't mess with
| your data, move it elsewhere, etc
|
| I mean, "moving it elsewhere" is why you'd call into
| linux-apfs on every boot -- to find out where the rootfs
| backing file's extent is living today. It might move
| around between boots, but it won't move while macOS isn't
| even running.
|
| And this approach presumes that there exists an explicit
| file attribute for keeping a file single-extent /
| contiguous-on-disk (as far as the host-NVMe interface is
| concerned, at least.) Such an attribute _does_ exist on
| NTFS, which is why WUBI continued to work there. I've
| never heard about such an attribute existing on APFS,
| which is why I said I'm not sure whether APFS "supports"
| this.
|
| Usually there _is_ such an attribute in most-any
| filesystem, though -- it exists to be used with swap
| files, so that kernels can directly mmap(2) the disk-
| block range underlying the swap file rather than having
| every read/write go through the filesystem layer.
|
| I know that macOS uses an APFS _volume_ for swap (the
| "VM" volume), which is kind of interesting. Maybe in
| APFS, there isn't a per-file attribute for contiguous
| allocation, but rather a per-volume one?
|
| In that case, it might be possible for the Linux rootfs
| to be an APFS volume, in the same way that the macOS VM
| volume is an APFS volume.
| marcan_42 wrote:
| > I've never heard about such an attribute existing on
| APFS, which is why I said I'm not sure whether APFS
| "supports" this.
|
| Ah, yeah, I've never heard of anything like that. I
| didn't realize NTFS had this. I have no idea how APFS
| handles the VM volume, though.
| cassepipe wrote:
| Kind of unrelated, but I'd like to benefit from the knowledge
| of people hanging out here and ask: does anybody know how good
| and how usable Linux is on an Intel MacBook? Is there a specific
| Intel MacBook that's known to be particularly well supported by
| Linux? I've been searching but couldn't find much beyond vague
| information and lists of Linux distributions I could install on a
| MacBook.
| nicholasjarr wrote:
| I have a 2017 MacBook Pro with Touch Bar and have Manjaro
| installed on it. It works fine but it is not perfect: no
| reliable Bluetooth, and I never got the Touch Bar working (so no
| Esc or function keys). It is not my main OS (I use Windows for
| work), but I use it for studying and trying new things. I
| wouldn't risk installing it without leaving a macOS partition.
| horsawlarway wrote:
| Depends on the age of the MacBook. Older models have reasonable
| support, newer models are (at best) a pain in the ass.
|
| I generally hit the arch wikis with specific models for the
| best information.
|
| This GitHub repo also does a good job laying out current
| support for the 2016 models.
| https://github.com/Dunedan/mbp-2016-linux
|
| Frankly, I haven't tried on a newer model than that - I don't
| buy Apple hardware anymore.
| fastssd wrote:
| 15-inch MacBook Pros from 2015 are regarded as the best Macs
| for Linux and hackers in general. The ports are a big plus too.
| flatiron wrote:
| I use Arch as a daily driver on my 2013 MacBook. Their wiki has
| all I needed to tweak to get it running perfectly. It needs a few
| extra steps (I had to pass some Mac-specific kernel parameters to
| turn off Thunderbolt, for instance) but it's leaps and bounds
| better than macOS.
| syntaxing wrote:
| Pardon my ignorance, but what does the m1n1 hypervisor mean? Does
| this mean that it's a single-core hypervisor VM?
| marcan_42 wrote:
| m1n1 is our bootloader that bridges the Apple world and the
| Linux world together; it is also a hardware reverse engineering
| platform (remote controlled from a host) and now also a
| hypervisor.
|
| The hypervisor feature is indeed single core, but that's just
| because I haven't bothered implementing SMP (would need a bunch
| of extra locking, but it's doable, just not really necessary
| for us).
| zamadatix wrote:
| Absolutely amazing progress, I might even give it an install once
| the USB drivers mentioned at the end are working.
|
| https://www.patreon.com/marcan/
| https://github.com/sponsors/marcan
| rowanG077 wrote:
| If you can choose, please use GitHub Sponsors instead of
| Patreon :). It's better percentage-wise than Patreon.
___________________________________________________________________
(page generated 2021-08-14 23:00 UTC)