[HN Gopher] Tart: VMs on macOS using Apple's native Virtualizati...
       ___________________________________________________________________
        
       Tart: VMs on macOS using Apple's native Virtualization.Framework
        
       Author : PaulHoule
       Score  : 136 points
       Date   : 2024-01-19 18:24 UTC (4 hours ago)
        
 (HTM) web link (tart.run)
 (TXT) w3m dump (tart.run)
        
       | autoexecbat wrote:
       | I do like the idea of using container registries for VM images
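        | 
        | For reference, getting a VM out of a registry with tart is
        | roughly two commands (image name copied from the cirruslabs
        | examples, so treat it as illustrative):
        | 
        |     tart clone ghcr.io/cirruslabs/macos-sonoma-base:latest sonoma
        |     tart run sonoma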
        
         | mikepurvis wrote:
         | I set up a workflow at $DAY_JOB for building the rootfs as a
         | container and then "promoting" it to a vmdk and creating the
         | ovf metadata file to allow it to be imported into VMWare as its
         | own machine.
         | 
         | This was ~3 years ago and at least at the time I was annoyed at
         | how little established tooling there seemed to be for doing an
         | appliance image build offline-- everyone was just like "why?
         | Boot some public cloud-init template and use that as the basis
         | for your terraform/ansible/whatever. If you actually need an
         | OVF then export it from VMWare and be done with it."
         | 
         | On the other hand, once I got down in the weeds with things, I
         | did find there were some bits that were a bit hairy about the
         | promotion process-- especially with "minimal" containers that
         | have no users or init system, not to mention of course no
         | filesystem, kernel, or bootloader, there is a fair bit that you
         | have to do with a typical container to ready it for becoming a
         | real boy.
        
           | photonbeam wrote:
            | You could put everything into the image - I mean using the
            | registry just for storage/transmission rather than reusing
            | pre-existing minimal container images
        
       | TylerE wrote:
        | Now if only there were a simple solution for running x86-64 VMs.
        
         | olliej wrote:
         | The whole point of virtualization is you're running as close as
         | possible to directly on native hardware. That's literally what
         | makes virtualization distinct from emulation for VMs.
         | 
         | If you're trying to run an x86 client OS you need an emulator,
         | there's just no way around it. If you just have some x86
         | binaries and don't actually need a full x86 OS, they've made
         | rosetta available for client linux VMs.
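          | 
          | On the host side, exposing rosetta to a linux guest through
          | Virtualization.framework is basically one directory share.
          | A minimal sketch, assuming `config` is your
          | VZVirtualMachineConfiguration and error handling is omitted:
          | 
          |     // Share Rosetta into the guest as a virtio-fs device
          |     // tagged "ROSETTA"; the guest mounts it and registers
          |     // it with binfmt_misc.
          |     if VZLinuxRosettaDirectoryShare.availability == .installed {
          |         let rosetta = try! VZLinuxRosettaDirectoryShare()
          |         let fsDevice = VZVirtioFileSystemDeviceConfiguration(tag: "ROSETTA")
          |         fsDevice.share = rosetta
          |         config.directorySharingDevices = [fsDevice]
          |     }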
        
           | edgyquant wrote:
           | Your terminology is off and you seem to be talking about a
           | hypervisor or something. Virtualization is just virtualizing
           | computer hardware and emulation is one way to do it
        
             | dijit wrote:
             | Sorry, I'm not sure what's going on here, I replied to
              | another comment about this and there are multiple people
             | stating that (s)he's thinking of containers.
             | 
             | I think we have collective amnesia about what
              | virtualisation actually _is_ and why it's distinct from
             | containerisation.
             | 
             | Virtualisation is absolutely about "skipping" parts of the
             | emulation chain to do direct calls to the CPU in a way that
             | does not need to be translated; in this way it gets much
             | closer to the hardware.
             | 
             |  _Containerisation_ was considered faster still because
              | instead of even having the 5% overhead of not being able to
              | use the same process scheduler in the kernel, you can share
              | one.
             | 
             | Yes, containers are able to execute faster than VMs, but
             | the parent is absolutely right that the entire point of
             | virtualisation as a concept was to get closer to the CPU
             | from inside an emulated computer.
        
               | hnlmorg wrote:
               | ...or they edited their post afterwards ;)
               | 
               | I've been using containers for decades on FreeBSD and
               | Solaris, long before Linux ever caught on. And
               | virtualisation even longer.
               | 
               | In fact I have fond memories of using the first version
               | of VMWare (which was a literal emulator at that point
               | because x86 didn't support virtualisation back in 1999)
                | to run Windows 2000 from Linux.
                | 
                | So, like the others who responded, I definitely know the
                | difference between virtualisation and containerisation.
        
               | dijit wrote:
               | You have a similar background to me; though I used Zones
               | on Solaris before Jails on FreeBSD.
               | 
               | I also used VMWare when it was an emulator and hated how
               | abysmally slow it was
        
               | hnlmorg wrote:
                | Christ, was it slow!!
               | 
               | I wasn't using it for anything serious thankfully.
        
           | w0m wrote:
           | virtualization != containerization.
        
           | hnlmorg wrote:
           | You're thinking of containerisation. Virtualisation does
           | abstract away direct interfaces with the hardware. And some
           | virtual machines are literal emulators.
        
             | dijit wrote:
             | No, he's right.
             | 
             | Containerisation is distinct from virtualisation.
             | 
             | Virtualisation shares some areas with Emulation, but it's
             | essentially passing CPU instructions to the CPU _without
             | translation_ from some alternative CPU machine language.
             | 
             | The difference here is the level; in descending order:
             | 
              | * Containerisation emulates a userland environment, shares
              | the OS/kernel interfaces.
             | 
             | * Virtualisation emulates hardware devices, but not CPU
             | instructions; there are some "para-virt" providers (Xen)
             | that will share even a kernel here, but this is not that.
             | 
             | * Emulation emulates an entire computer including its CPU
        
               | lxgr wrote:
               | Note that these aren't necessarily layered: You can
               | virtualize with emulation, but you can also emulate
               | without virtualization, which is what e.g. Rosetta does
               | on macOS, or QEMU's userland emulation mode.
        
               | hnlmorg wrote:
               | I don't think you've read the comment chain correctly
               | because you're literally just repeating what I just said.
               | 
               | Though you make a distinction between virtualisation and
               | emulation when in fact they can be the same thing (they
               | aren't always, but sometimes they are. It just depends on
               | the problem you're trying to solve).
        
               | dijit wrote:
               | >> The whole point of virtualization is you're running
               | _as close as possible to directly on native hardware._
               | That's literally _what makes virtualization distinct from
               | emulation for VMs._
               | 
               | > You're thinking of containerisation.
               | 
               | no.
        
               | hnlmorg wrote:
               | They edited their post. If you look at the comments
               | others have made, you can see their original comment was
               | much more ambiguous
        
               | dijit wrote:
               | Is there a way to see original comments?
        
               | hnlmorg wrote:
               | Unlikely. It's too recent for Wayback machine to cache.
               | 
               | Their post was ostensibly the same but much more vaguely
               | worded. And if you say "virtualisation is about being as
               | close to the hardware as possible" without much more
                | detail to someone else who talks about wanting to run a VM
               | with a different guest CPU, then it's understandable that
               | people will assume the reply is mixing up virtualisation
               | with containerisation since there's nothing in
               | virtualisation that says you cannot emulate hardware in
               | the guest. Whereas containerisation is very much intended
               | to run natively on the hardware
        
           | TylerE wrote:
           | The thing is, such an emulator exists, MacOS _itself_ uses it
            | to run old x86 MacOS apps transparently. Why is it that as soon
           | as that app happens to be, say, VMWare, the OS suddenly says
           | nuh uh, not gonna do it?
           | 
           | The _technology_ works, I run AAA x86 games on my Mac Studio
            | via Crossover. Sure, the performance is a bit limited, but
            | it's limited by the nature of an integrated, albeit fairly
           | powerful, GPU. It works surprisingly well considering many of
           | these games are targeted at, say, 1000-series NVidia cards.
           | 
            | But wanting to run an 8GB Linux instance so I can run a local
           | dev environment is an impossible ask? (Before anyone asks,
           | no, ARM linux isn't really a viable solution for...reasons I
           | don't feel like going into but are mostly boring and
           | technical-debty).
        
             | andrewaylett wrote:
             | The canonical mechanism for running amd64 Linux processes
             | appears to be to virtualise aarch64 and use binfmt-misc
             | with Rosetta 2 to get emulation working.
             | 
             | It does make a certain amount of sense that Apple would
             | have hardware virtualisation support for native VMs but not
             | for emulated VMs. I can imagine (but I've not checked) that
             | support for emulation of the VT extensions is lacking.
             | 
             | As a random person on the Internet, I'm obviously
             | overqualified to suggest that you use native virtualisation
             | to run aarch64 Linux, then use Rosetta within that Linux VM
             | to run whatever amd64 software virtualisation tool you
             | prefer. This is quite similar to what containerisation
             | tooling does -- Docker (and similar) on aarch64 runs an ARM
             | VM, then uses Rosetta inside that VM to run containers. You
             | don't get a native amd64 kernel that way, but even without
             | nested virtualisation you get a complete (namespaced) amd64
             | userspace.
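              | 
              | Concretely, inside the aarch64 guest it's a virtiofs mount
              | plus a binfmt_misc registration, roughly (assuming the
              | host exported the share with the tag "ROSETTA"; the exact
              | magic/mask string for x86-64 ELF binaries is in Apple's
              | Rosetta-for-Linux docs):
              | 
              |     sudo mkdir -p /media/rosetta
              |     sudo mount -t virtiofs ROSETTA /media/rosetta
              |     # then register /media/rosetta/rosetta as the
              |     # binfmt_misc interpreter for x86-64 ELF binaries
              |     # (via update-binfmts or a write to
              |     # /proc/sys/fs/binfmt_misc/register)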
        
             | aseipp wrote:
             | You can do it with QEMU using its emulation backend to
             | emulate the entire x86 boot chain. It will be dozens
             | (hundreds?) of times slower.
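              | 
              | Roughly this, if you want to try it anyway (disk image
              | name is just a placeholder):
              | 
              |     qemu-system-x86_64 \
              |       -machine q35 -cpu max -smp 4 -m 4096 \
              |       -drive file=amd64.qcow2,if=virtio \
              |       -nographic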
             | 
             | Hypervisors work efficiently because they allow you to
             | "nest" certain functions inside of other userspace
             | processes, which are typically only available to the
             | operating system (with a higher privilege level), things
             | like delivering interrupts efficiently, or managing page
             | tables. The nesting means that the nested operating system
             | is running on the same hardware in practice, and so it
             | inherits many of the constraints of the host architecture.
             | So, virtualization extensions can be seen as a kind of
             | generalization of existing features, designed so that
             | nesting them is efficient and secure.
             | 
             | For example, different architectures have different rules
             | about how page tables are set up and how virtual memory
             | page faults are handled in that environment. The entire
             | memory mapped layout of the system is completely different.
             | The entire memory model (TSO vs weak ordering) is
             | different. There are strict correctness requirements. A
             | linux kernel for x86 has specific x86 code to manage that
             | and perform those sequences correctly. You cannot just
             | translate x86 code to ARM code and hope the same sequence
             | works; you have to emulate the entire processor
             | environment, so that x86 code works as it should.
             | 
             | Rosetta does not emulate code; it is a binary translator.
             | It translates x86 code to ARM code up front, then runs
             | that. The only reason this works is because normal
             | userspace programs have an ABI that dictates how they
             | interoperate with each other across process and address
             | space and function boundaries. When CrossOver runs an x86
             | game for example, it translates the x86 to ARM. That
             | program then calls Metal APIs, libmetal.dylib or something,
             | but that .dylib itself isn't x86-to-ARM translated. It is
             | simply "pass through" shim to your system-native Metal
             | APIs. So the graphics stack achieves native performance;
             | the application overhead comes from the x86-to-ARM
             | translation, which needs to preserve the semantics of the
             | original code.
             | 
             | Rosetta-for-Linux works the same way, because there is a
             | stable ABI that exists between processes and function
             | calls, and the Linux kernel ABI is considered stable
             | (though not between architectures in some cases). It
             | translates the x86-to-ARM, and then that binary is run and
             | it gets to call into the native Linux kernel, which is not
             | translated, etc. This basically works well in practice. It
             | is also how Windows' x86-on-ARM emulation works.
             | 
             | If you want to emulate an entire x86 processor, including
             | an x86 Linux kernel, you have to do exactly that, emulate
             | it. Which includes the entire boot process, the memory
             | model, CPU instructions that may not have efficient 1-to-1
             | translations, etc.
             | 
             | Unfortunately, what you are asking is not actually
             | reasonably possible, in a technical sense. Your options are
             | either to use Rosetta-on-Linux to translate your binary, or
             | get an actual x86 Linux machine.
        
               | astrange wrote:
               | > Rosetta does not emulate code; it is a binary
               | translator. It translates x86 code to ARM code up front,
               | then runs that. The only reason this works is because
               | normal userspace programs have an ABI that dictates how
               | they interoperate with each other across process and
               | address space and function boundaries. When CrossOver
               | runs an x86 game for example, it translates the x86 to
               | ARM.
               | 
               | I don't think this is true. I think code run under WINE
               | is always JITted because it's too much unlike a Mac
               | binary.
        
             | olliej wrote:
             | Ah, that's not quite accurate.
             | 
             | Rosetta is an emulator (sorry
             | <marketing>translator</marketing>) for userspace. Happily
             | Apple actually provides direct support for linux in rosetta
              | (as in it can parse linux binaries and libraries, remap
              | syscalls correctly, etc in linux VMs), and there's
             | even a WWDC video on how to integrate it, so that you can
             | install an arm linux kernel, and then run x86_64 linux
             | binaries via rosetta. Windows has its own translator so
             | doesn't need rosetta.
             | 
             | At a basic level the semantics of usermode x86 and usermode
             | arm are identical, so you can basically say "I just care
             | about translating the x86 instructions to arm equivalents",
             | and all the rest of the "make this code run correctly"
             | semantics of the hardware (memory protection, etc) then
             | just work.
             | 
             | That basically breaks down completely for kernel mode code,
             | for a variety of reasons, at one level a lot of the things
             | rosetta style translators can do for speed break down
             | because they can't just leverage memory protection any
             | more. But at a more fundamental level your translator now
             | needs to translate completely different mechanisms for how
             | memory is accessed, how page tables are set up, how
              | interrupts are handled, etc. It's hard to say exactly
             | which bit would be the worst part, but likely memory
             | protection - you could very easily end up in a state where
              | your translator has to add non-hardware-supported memory
             | protection checks on every memory access, and that is slow.
             | Significant silicon resources are expended (the extremely
             | high performance and physically colocated TLBs, caches,
             | etc) to make paged memory efficient, and suddenly you have
             | actual code running and walking what (to the cpu) is just
             | regular ram. So now what was previously a single memory
             | access for the CPU involves a code loop with multiple
             | (again, for the cpu) random memory accesses.
             | 
             | Those problems are why CPUs added explicit support for
             | virtualization in the early 2000s. VMWare, etc started
             | making virtual machines an actual viable product, because
             | they were able to set things up so that the majority of
             | code in the client OS was running directly on the host CPU.
             | The problem they had for performance was essentially what I
             | described above, only they didn't have to also translate
             | all of the instructions (I believe their implementation at
             | the time did translate some instructions, especially kernel
             | mode code, as part of their "tolerable" performance was
             | being very clever, and of course the client os kernel is
             | running in the host os's user mode so definitely has to
             | handle client kernel use of kernel mode only instructions).
             | A lot of virtualization support in CPUs basically boils
             | down to things like nested page tables, which lets your VM
             | just say "these are the page tables you should also be
             | looking at while I'm running".
             | 
             | Now for cross architecture emulation that's just not an
             | option, as the problem is your client kernel is trying to
             | construct the data structures used by its target cpu, and
             | those data structures don't match, and may or may not even
             | have an equivalent, so there is no "fix" that doesn't
             | eventually boil down to the host architecture supporting
             | the client architecture directly, and at that point you're
             | essentially adding multiple different implementations of
             | the same features to the silicon which is astronomically
             | expensive.
             | 
              | The far better solution is to have the child OS run a
              | kernel native to the host architecture, and then have that
              | do usermode translation like rosetta for application
              | compatibility.
        
           | olliej wrote:
           | This was going to be an addendum to the above comment, but I
            | took too long trying to hunt down old marketing material for
           | Virtual PC, anyway here is the intended addendum.
           | 
           | [edit: Giant addendum starts here, in response to comments in
           | replies, rather than saying variations on the same things
           | over and over again. Nothing preceding this comment has been
           | changed or edited from my original post. As I've said
           | elsewhere, I really wish HN provided edit history on
           | comments]
           | 
           | First off, for people saying I'm talking about containers, I
           | was not considering them at all, I consider them as
            | essentially tangential to the topic of virtual machines and
           | virtualization. I haven't looked into modern container
           | infrastructure to really understand the full nuances, my
            | exceptionally vague understanding of the current state of the
            | art is that the original full-vm-per-container model docker, et
            | al introduced has been improved somewhat to allow better
            | resource sharing between clients and the host hardware than a
            | full-vm provides, but still uses some degree of virtualization
            | to provide better security boundaries than basic chroot
            | containers could ever do. I'm curious about
           | exactly how much of a kernel modern container VMs have based
           | on the comments in this thread, and if I ever have time
           | between my job and long winded HN comments I'll try to look
           | into it - I'd love good references on exactly how kernel
           | level operation is split and shared in modern container
            | implementations.
           | 
            | Anyway, as commonly used, virtualization means "code in the VM
            | runs directly on the host CPU", and has done for more than
           | two decades now. An academic definition of a virtual machine
           | may include emulation, but if you see any company or person
           | talking about supporting virtual machines, virtual hosts, or
           | virtualized X in any medium - press, marketing, article (tech
           | or non-tech press) - you will know that they are not talking
           | about emulation. The reason is simply that the technical
           | characteristics of an emulated machine are so drastically
           | different that any use of emulation has to be explicitly
           | called out. Hence absent any qualifier virtualization means
            | the client code executes directly on the host hardware.
           | The introduction of hypervisors in the CPU simply meant that
           | more things could be done directly on the host hardware
           | without requiring expensive/slow runtime support from the VM
           | runtime, it did not change the semantics of what "virtual
           | machine" meant vs emulation even at the time CPUs with direct
           | support for virtualization entered the general market.
           | 
           | Back when VMWare first started out a big part of their
           | marketing and performance messaging boiled down to "Virtual
           | Machine != emulation" and that push was pretty much the death
           | knell for a definition of "virtualization" and "virtual
           | machine" including emulation. As that model took off,
           | "hypervisor" was introduced to general industry as the term
           | for the CPU mechanism to support virtualization more
           | efficiently (I'm sure in specialized industries and academia
           | it existed earlier) by allowing _more_ code to run directly,
           | but for the most part there was no change to userspace code
           | in the client machine. Most of the early
           | "hypervisor"/virtualization extensions (I believe on ARM
           | they're explicitly called the "virtualization extensions",
           | because virtualization does not mean emulation) were just
           | making it easier for VM runtimes to avoid having to do
           | anything to code running in kernel mode so that that code
           | could be left to run directly on the host CPU as well.
           | 
           | The closest emulation ever got to "virtualization" in non-
           | academic terminology that I recall is arguably "Virtual PC
           | for Mac" (for young folk virtual pc was an x86 emulator for
           | PPC macs that was eventually bought by MS IIRC), which said
           | "virtual pc" in the product name. It did not however use the
           | term virtualization, and was only ever described as
           | explicitly emulation in the tech press, I certainly have no
           | recollection of it ever even being described as a virtual
           | machine even back during its window of relevance. I'd love to
           | find actual marketing material from the era because I'm
           | genuinely curious what it actually said, but the product name
           | seems to have been reused over time so google search results
           | are fairly terrible and my attempts in the wayback machine
           | are also fairly scattershot :-/
           | 
            | But if we look at the context: once apple moved to x86, from
            | Day 1 the Parallels marketing that targeted the rapidly
            | irrelevant "virtual pc for Mac" product talked about using
            | virtualization rather than emulation to get better
            | performance than virtual pc. The rapid decline in the
            | relevance of PPC then meant that talking about not being
            | emulation ceased being relevant, because the meaning of a
            | virtual machine in common language became client code running
            | directly on the host CPU.
           | 
           | So while an academic argument may have existed that
           | virtualization included emulation in the past, the reality is
           | that the meaning of virtualization in any non-academic
           | context since basically the late 90s has been client code
           | runs directly on the host CPU, not via emulation. Given that
           | well established meaning, my statement that virtualization of
           | a non-host-architecture OS is definitionally not possible is
           | a reasonable statement, that is correct in the context of the
           | modern use of the word virtualization (again we're talking a
           | couple of decades here, not some change in the last few
           | months).
           | 
           | If you really want to argue with this, I want you to ask
           | yourself how you would respond if you had leased a hundred
           | virtualized x86 systems, and then found half of them were
           | running at 10% the speed of the rest because they were
           | actually emulated hardware, and then if you think that a
           | lawyer for that company would be able to successfully argue
            | that the definition of "virtualization includes emulation"
           | would pass muster when you could bring in reps from every
           | other provider, and every commercial VM product and none of
           | them involved emulation, and every article published for
           | decades about how [cloud or otherwise] VMs work (none of
           | which mention emulation). If you really think that your
           | response would be "ah you got me", or that that argument
           | would work in court, then fair play to you, you're ok with
           | your definition and we'll have to agree to disagree, but I
           | think the vast majority of people in tech would disagree.
        
       | zamalek wrote:
        | I was considering a similar approach back when I was still stuck
        | with Apple for work: make a Firecracker OCI runtime for MacOS. Fortunately
       | Intune for Linux came around before I had to resort to that.
        
         | duskwuff wrote:
         | Virtualization.framework does most of the things Firecracker
         | does on Linux. It's not literally the same, of course, but it
         | does a comparable amount of the work for you. Here's an example
         | application which uses it:
         | 
         | https://github.com/apinske/virt/blob/master/virt/virt/main.s...
         | 
         | And yes, that's really the whole thing. Once the VM is
         | configured (which is what most of the code is concerned with),
         | running it is fully handled by the framework.
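          | 
          | For a sense of how little code that is, a stripped-down
          | sketch of the same idea (paths and sizes are placeholders,
          | error handling omitted):
          | 
          |     import Virtualization
          | 
          |     // Boot an existing arm64 Linux kernel + initrd.
          |     let boot = VZLinuxBootLoader(kernelURL: URL(fileURLWithPath: "vmlinuz"))
          |     boot.initialRamdiskURL = URL(fileURLWithPath: "initrd")
          |     boot.commandLine = "console=hvc0"
          | 
          |     let config = VZVirtualMachineConfiguration()
          |     config.bootLoader = boot
          |     config.cpuCount = 2
          |     config.memorySize = 2 * 1024 * 1024 * 1024  // 2 GiB
          | 
          |     // Wire the guest console to stdin/stdout.
          |     let console = VZVirtioConsoleDeviceSerialPortConfiguration()
          |     console.attachment = VZFileHandleSerialPortAttachment(
          |         fileHandleForReading: FileHandle.standardInput,
          |         fileHandleForWriting: FileHandle.standardOutput)
          |     config.serialPorts = [console]
          | 
          |     try! config.validate()
          |     let vm = VZVirtualMachine(configuration: config)
          |     vm.start { result in
          |         if case .failure(let error) = result { fatalError("\(error)") }
          |     }
          |     RunLoop.main.run()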
        
           | zamalek wrote:
           | Firecracker is also the distro that makes assumptions (and
           | therefore boot time wins) about being run inside the
           | Firecracker VMM, as far as I understand it. You'd also need
           | the OCI runtime, and a Docker-compatible socket would make
           | tons of sense.
        
       | 0x69420 wrote:
       | heads up: it's under one of those BSL-esque weirdo licenses [1]
       | parameterised on seats and, get this, a seat is defined as a
       | single CPU core (if you are not an individual). so don't get any
       | ideas about running it on more than 5 mac studios if you're a
       | university that wants to run CI for some open-source project
       | along with those mirrors.
       | 
       | [1]: https://fair.io/?a
        
         | lxgr wrote:
         | > What counts as "using" Fair Source licensed software with
         | respect to the Use Limitation (e.g., 25-user limit)?
         | 
         | > The license doesn't define "use" exactly because the way
         | people deploy software changes all the time. Instead, it relies
         | on a common-sense definition. For example, executing the
         | software, modifying the code, or accessing a running copy of
         | the software constitutes "use."
         | 
         | Appealing to common sense for a critical definition in a
         | binding license agreement? What could go wrong!
        
         | jxy wrote:
         | I don't know what the intended audience is. But for managing
         | many instances among a few mac studios, it's much better to
         | invest an afternoon to get the right qemu command and just use
         | that, instead of all these fancy UIs.
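          | 
          | For the curious, the afternoon usually ends with something
          | along these lines (firmware and disk paths depend on your
          | install, so treat them as placeholders):
          | 
          |     qemu-system-aarch64 \
          |       -machine virt,accel=hvf -cpu host -smp 4 -m 8192 \
          |       -drive file=edk2-aarch64-code.fd,if=pflash,format=raw,readonly=on \
          |       -drive file=ubuntu.qcow2,if=virtio \
          |       -nographic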
        
       | navojoe wrote:
        | I tried the ubuntu latest image, but don't know what user and
        | password to use to log in. Any idea?
        
         | fragmede wrote:
         | creativity, should be user ubuntu pass ubuntu
        
           | navojoe wrote:
            | I have tried user ubuntu with... empty, pass, ubuntu. None
            | of them works.
        
             | dps wrote:
             | admin/admin worked for me
        
               | navojoe wrote:
               | Thanks! working now
        
         | pritambarhate wrote:
         | If you just need Ubuntu then you can try "Multipass" from
         | Canonical (https://multipass.run/). Works quite well on my M2
         | Air. I haven't tried using Linux GUI with it though as I need
         | only terminal based VMs.
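          | 
          | Day-to-day it's a couple of commands, e.g. (sizes are just
          | examples):
          | 
          |     multipass launch --name dev --cpus 4 --memory 8G --disk 40G
          |     multipass shell dev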
        
       | isodev wrote:
       | I don't get it - why would one pay for something that comes for
       | free on every Mac? Bootstrapping one in Swift is quite
       | straightforward and there are a number of tools and apps (with
       | UI) like virt (https://github.com/apinske/virt)
        
         | duskwuff wrote:
         | I guess their selling point is the container registry?
        
           | naikrovek wrote:
           | This tool used to be open (AGPLv3 for some reason; it's not a
           | network service) and they changed to this crap license once
           | they realized they had something good.
           | 
           | The AGPL version is available in homebrew.
        
       | olliej wrote:
        | [addendum: per u/pm215 Hypervisor.Framework does still exist and
        | is apparently supported on apple silicon; I assume the
       | absence of hardware docs just makes it miserable. OTOH maybe the
       | Asahi GPU drivers, etc can work in that model? I really haven't
       | ever done anything substantially more than what the WWDC demos do
       | so am not a deep fount of knowledge here :D, to avoid confusion
       | with replies I have not edited or changed my original comment. I
       | kind of wish that HN UX exposed edit histories for comments, or
       | provided separate addendum/correction options]
       | 
       | Virtualization.Framework is how you have to do virtualization on
       | apple silicon as it is the userspace API layer that interacts
        | with the kernel. There is no other API you can use.
       | 
       | Virtualization.Framework is pretty much everything you need out
       | of the box for a generic "I have an isolated virtual machine"
       | model, basically it's just missing a configuration UI and main()
       | 
       | There are a couple of WWDC sessions over the last few WWDCs on
       | using the framework, configuring rosetta, and improvements in the
       | 2023 OS update
       | 
       | https://developer.apple.com/videos/play/wwdc2022/10002
       | https://developer.apple.com/videos/play/wwdc2023/10007
       | 
       | [1] commercial VM products probably require more work to compete
       | in the market, things like the transparent desktop, window
       | hosting, etc
        
         | Someone wrote:
         | Apple also documents the Virtualization framework fairly well
         | at https://developer.apple.com/documentation/virtualization,
         | with links to various code samples.
         | 
         | For example, https://developer.apple.com/documentation/virtuali
         | zation/run...:
         | 
         |  _"This sample configures a virtual machine for a Linux-based
         | operating system. You run the sample from the command line, and
         | you specify the locations of the Linux kernel to run and
         | initial RAM disk to load as command-line parameters. The sample
         | configures the boot loader that the virtual machine requires to
         | run the guest operating system, and it configures a console
         | device to handle standard input and output. It then starts the
         | virtual machine and exits when the Linux kernel shuts down."_
        
         | stuff4ben wrote:
         | Interesting! I really need a cheap way to spin up an Apple
         | Silicon container to create binaries for an open source project
         | on GitHub. I don't want to spend money on an Apple Silicon
         | runner in GitHub and I also don't want to run the build
         | directly on my M2 MacBook Pro along with my other development
         | work.
        
         | pm215 wrote:
         | You don't _have_ to use Virtualization.Framework.
         | Hypervisor.Framework is the lower level API --
         | https://developer.apple.com/documentation/hypervisor ; QEMU
         | uses that.
        
       | codetheory wrote:
       | Just use lima. https://lima-vm.io/
       | 
        | You have the option to use the native backend or QEMU.
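        | 
        | Getting a shell is about two commands, e.g. (the vz backend is
        | selected with --vm-type, if I remember the flag right):
        | 
        |     limactl start --vm-type=vz default
        |     limactl shell default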
        
         | dijit wrote:
          | Or, if you're doing containers (like Tart/Virt): colima (which
          | uses lima, of course).
         | 
         | https://github.com/abiosoft/colima
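          | 
          | e.g. the Virtualization.framework backend plus Rosetta is
          | just flags on recent versions (flag names from memory):
          | 
          |     colima start --vm-type vz --vz-rosetta
          |     docker context ls   # colima registers a docker context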
        
         | otherjason wrote:
         | From a quick look at Lima, I don't think it's exactly the same
         | thing. Tart allows running macOS and Linux VMs, while Lima
         | seems focused on Linux VMs only. I don't use Tart, but having
         | infrastructure-as-code-like tooling that can be used to define
         | macOS environments (and store the VMs in container registries)
         | sounds useful and I'm not aware of another solution that does
         | it.
        
       | jamifsud wrote:
       | Are there any similar open source tools that allow you to manage
       | MacOS VMs? I'm aware of Lima / Colima but it seems they're for
       | Linux only.
        
         | lloeki wrote:
         | UTM maybe?
         | 
         | https://mac.getutm.app
        
         | shoo_pl wrote:
         | Yes, there is VirtualBuddy and Viable (not sure if this one is
         | Open Source):
         | 
          | - https://github.com/insidegui/VirtualBuddy
          | - https://eclecticlight.co/virtualisation-on-apple-silicon/
        
       | d3w4s9 wrote:
        | Serious question: how far can you go with the base model's 8GB RAM?
       | 
       | Doing VM workflows is one reason I didn't bother with recent
       | Macbooks, as nice as they are. It is simply much cheaper to get a
        | machine with removable RAM and then upgrade it later. Without
       | going there, I can also build a decent ThinkPad T14 with 32GB for
       | around $1,100 even though RAM is soldered.
        
         | mattl wrote:
         | I can edit video in Final Cut Pro on my 8GB M1 Mac Mini while
         | doing other things.
        
           | foofie wrote:
           | > I can edit video in Final Cut Pro on my 8GB M1 Mac Mini
           | while doing other things.
           | 
           | I can't use IntelliJ or vscode with autocompletion on a 2023
           | MacBook Air with 8GB of RAM with a bunch of my projects.
           | 
           | The same projects run like a breeze on a cheap and very
           | crappy Beelink minipc with 16GB of RAM whose total cost is
           | lower than a RAM upgrade on a MacBook Air.
        
             | mattl wrote:
             | I'm curious if a native editor like Panic's Nova or BBEdit
             | would work better than a Java or Electron app?
        
               | bluish29 wrote:
                | BBEdit is a lightweight editor compared to the IntelliJ
                | IDE. It is hard to compare the two as if they were on the
                | same footing. But yes, if you can work with BBEdit on a
                | project, go for it.
                | 
                | On the same note, Sublime will still win the editor
                | performance competition on Mac, and probably on all
                | platforms.
        
             | CharlesW wrote:
             | > _I can 't use IntelliJ or vscode with autocompletion on a
             | 2023 MacBook Air with 8GB of RAM with a bunch of my
             | projects._
             | 
             | That's surprising. (More developer anecdata:
             | https://duncsand.medium.com/is-apples-cheapest-macbook-
             | good-...)
             | 
             | Still, I'd absolutely recommend that devs and other
             | creators spend the extra $200 for 16GB. And yes, it's
             | outrageously priced in comparison to buying matched sticks
             | for your PC.
        
       | bradgessler wrote:
       | I'm curious why file system performance from the VM through the
       | hypervisor to the host is so slow on macOS. Is this some sort of
       | fundamental limitation or is it a case of "Apple hasn't made this
       | a priority"?
       | 
       | My knowledge could be out of date and maybe this is fixed, but
       | I've tried using Docker on macOS and it's almost unusable for dev
       | environments with lots of files because of the file system
       | performance.
        
         | JonathonW wrote:
         | How recently have you tried it? The current best-performance
         | option for bind mounts in Docker Desktop on macOS (VirtioFS,
         | using Hypervisor.framework) didn't become the default until
         | June (for new installs) or September (for existing installs),
         | and wasn't available at all prior to December 2022.
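          | 
          | A quick way to sanity-check it against your own tree is a
          | throwaway bind mount, something like:
          | 
          |     docker run --rm -v "$PWD":/src ubuntu \
          |       bash -c 'time ls -R /src > /dev/null'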
        
           | bradgessler wrote:
           | I haven't tried it on a large project recently. How good is
           | it?
        
             | 9dev wrote:
             | Better, but still a pain. Nothing compared to running
             | Docker on Linux.
        
               | ithkuil wrote:
               | But docker on Linux doesn't involve any virtualization
               | whatsoever so filesystem performance is 100% native
        
         | omederos wrote:
         | Did you have VirtioFS enabled when you tried it?
        
         | rtpg wrote:
         | File performance in itself isn't bad per se, but keeping two
         | file trees in sync is a mess with Docker because every file
         | operation needs to be duplicated or something. There's some
         | async stuff as well but ultimately most programs meant for
         | Linux assume file access to be super fast so any sort of
         | intermediate steps end up being slow (see also: git on Windows
         | being dog slow)
         | 
         | You want super speedy docker on Mac? Run docker _inside_ a
         | Linux vm. Even better, just use Linux!
        
       | jtotheh wrote:
       | I've been using UTM, it seems to work OK. I use arm64 images, I
       | haven't tried x86_64 images. Works for Windows and Debian.
        
         | treesknees wrote:
         | x86_64 runs using QEMU for emulation, and it's incredibly slow.
         | I'd call it practically unusable.
        
           | vsgherzi wrote:
           | Hard agree, it does a decent job emulating an x86 container.
           | However Angular builds were still painfully slow.
        
             | Cu3PO42 wrote:
             | For running an x64 container, I'd recommend running aarch64
              | Linux and then running the container using Rosetta 2; that
             | yields significantly better performance than running
             | everything through QEMU.
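              | 
              | With Rosetta enabled for the VM (Docker Desktop exposes it
              | as a "Use Rosetta for x86/amd64 emulation" setting, if I
              | remember the wording), it's just the platform flag:
              | 
              |     docker run --rm --platform linux/amd64 alpine uname -m
              |     # prints x86_64, executed via Rosetta rather than QEMU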
        
         | sillywalk wrote:
          | MacOS 9 (PPC) runs OK. A little slow (MacBook Air M2).
         | 
         | https://mac.getutm.app/gallery/
        
       | lupinglade wrote:
       | Also check out https://www.getvmtek.com for something very
       | polished. We just released a huge update a few days ago.
        
       | tambourine_man wrote:
       | Is there a modern easy GUI that allows snapshots without reaching
       | for QEMU?
        
         | lupinglade wrote:
         | Yes, VMTek does snapshots natively and takes advantage of APFS.
         | We have a great graphical snapshot UI as well.
        
           | tambourine_man wrote:
           | Interesting, thanks. I was looking for an open source
           | solution though, sorry for not specifying.
        
       | ramon156 wrote:
       | Has no one here heard of mutagen before? It solves our
       | performance issue regarding syncing
        
       | emmanueloga_ wrote:
        | Orb also runs linux machines, a feature I missed for the first few
       | weeks of using it!
       | 
       | https://docs.orbstack.dev/machines/
        
       ___________________________________________________________________
       (page generated 2024-01-19 23:00 UTC)