[HN Gopher] Ask HN: How are you dealing with the M1/ARM migration?
___________________________________________________________________
Ask HN: How are you dealing with the M1/ARM migration?
I love the M1 chips. I use a 2021 MacBook both personally and
professionally. My job is DevOps work, but the migration to ARM
is proving to be quite a pain point. Not being able to just do
things as I would on x86-64 is hurting my productivity and
forcing horrible workarounds.

As far as I know, none of our pipelines do multi-arch Docker
builds yet, so everything we have is heavily x86-64 oriented.
VirtualBox is out of the picture because it doesn't support ARM,
and that rules out other tools that rely on it, like Molecule.
My colleague wrote a sort of wrapper script that uses Multipass
instead, but Multipass can't do x86-on-ARM emulation. I've been
using Lima to create virtual machines, which works quite well
because it supports multiple architectures. I haven't tested it
on Linux though, and since it claims to be geared towards macOS,
that worries me. We are a company using a mix of MacBooks and
Linux machines, so we need a tool that will work for everyone.

The virtualisation situation on MacBooks in general isn't great.
I think Apple introduced Virtualization.framework to try to
improve things, but the performance is actually worse than
QEMU's: enable it in the Docker Desktop experimental options and
you'll notice things get more sluggish. Then there are other
annoyances, like having to run a VM in the background for Docker
all the time because 'real' Docker is not possible on macOS.
Sometimes I'll have three or more VMs going and everything
except my browser is paying that virtualisation penalty. Ugh.

Again, I love the performance and battery life, but the
fragmentation this has created is a nightmare. How is your
experience so far? Any tips/tricks?
Author : c7DJTLrn
Score : 101 points
Date : 2022-06-10 16:39 UTC (6 hours ago)
| yread wrote:
| I'm lucky enough that I need high performance and have native
| dependencies that don't provide M1 binaries. So, I can worry
| about other problems.
| boringuser1 wrote:
| xet7 wrote:
| I bought an M1 MacBook.
|
| I installed Asahi Linux, which made it possible to keep the OS
| running all the time, keep HexChat IRC running, and not shut
| down when away from the keyboard like on macOS.
|
| But that M1 only lasted 4 days, then it wouldn't boot anymore.
| So I sent it in for warranty repair and cancelled the purchase.
| jeroenhd wrote:
| I don't own an ARM computer (except the ones running Android,
| that is) but in my experience Linux tooling should work just fine
| on ARM if you pick the right distributions. That said, I have run
| Linux distros on Android a few times so I am somewhat familiar
| with what's out there.
|
| Running x64 and ARM together on one machine will work through
| tricks like Rosetta but I don't believe that stuff will ever work
| well in virtual machines, not until Apple open sources Rosetta
| anyway.
|
| I'd take a good, hard look at your tech stack and find out what's
| actually blocking ARM builds. Linux runs on ARM fine, so I'm
| surprised to hear people have so many issues.
|
| What you could try for running Docker is running your virtual
| machines on ARM and using the qemu-user-static/binfmt_misc
| infrastructure Linux has supported for years to get (less
| efficient than Rosetta) x64 translation for the parts of your
| build process that really need it. QEMU is more than just a
| virtualisation system; set up right, it can also execute ELF
| files from other instruction sets. Such a setup was very useful
| when I needed to run some RPi ARM binaries on my x64 machine,
| and I'm sure it'll work just as well in reverse.
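|
| As a sketch, on an arm64 Docker host (image names are the usual
| upstream ones; verify before relying on them):
|
|     # one-off: register qemu user-mode emulators via binfmt_misc
|     docker run --privileged --rm tonistiigi/binfmt --install amd64
|
|     # amd64 images now run under emulation on the arm64 host
|     docker run --rm --platform linux/amd64 alpine uname -m  # x86_64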
| dev_tty01 wrote:
| See my comment on original question. Apple's latest beta
| system, Ventura, enables using Rosetta within Linux VMs. Looks
| like they have done a lot of work on the virtualization
| frameworks since last year.
| InvaderFizz wrote:
| I work at a SaaS vendor.
|
| We are completing a project to upgrade most of our
| microservices to Java 11. This will also mean we do multiarch
| builds for our entire pipeline. Once that is complete, we will
| begin developing against ARM as the primary target for devs. This
| is needed because we are in the middle of a hardware refresh so
| by EOY, something like 80% of devs will be running on M1 Pro.
|
| This should end up saving us money long term as we move all the
| cloud workloads to Graviton2/3 and Ampere A1 hosts.
|
| We'll be multiarch for many years to come. I also don't see a
| timeline for us to sunset x86 support considering we also do on-
| prem installs and ARM rack mount servers are nearly impossible to
| source.
| hoofhearthed wrote:
| Just my 2c, results may vary, but my (short) experience with the
| M1 was so bad I switched back to a Dell XPS the week after I got
| it. Things may have gotten better meanwhile of course, but my
| local developer experience was dreadful. Some of the non-ARM-
| targeted images took ages to start, some didn't start, and
| others were straight up flaky. I'm not touching the M1 until I
| know all these issues have been resolved: the Docker filesystem
| APIs complete, all images targeting every arch, etc. It also
| doesn't help that a majority of the images we run locally are
| really fat Java dependencies like Kafka etc.
| 28304283409234 wrote:
| Vagrant + Parallels on M1 and Intel. Works beautifully.
| rgovostes wrote:
| I got an M1 MacBook Pro from work last year, and expecting to pay
| the price for being an early adopter, I set up my previous Intel-
| based MBP nearby in case I ran into any problems or needed to run
| one of my existing virtual machines. (I do varied development
| projects ranging from compiling kernels to building web
| frontends.)
|
| In reality I have hardly turned on the Intel MBP at all since I
| got it. _At all._
|
| Docker and VMware Fusion both have Apple Silicon support, and
| even in "tech preview" status they are both rock solid. Docker
| gets kudos for supporting emulated x86 containers, though I
| rarely use them.
|
| I was able to easily rebuild almost all of my virtual machines;
| thanks to the Raspberry Pi, almost all of the packages I use were
| already available for arm64, though Ubuntu 16.04 was a little
| challenging to get running.
|
| I also had to spend an afternoon updating my CI scripts to
| cross-compile my Docker containers, but that mostly involved
| switching to `docker buildx build`.
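|
| For reference, the invocation is roughly (registry/tag
| illustrative):
|
|     docker buildx create --use
|     docker buildx build \
|       --platform linux/amd64,linux/arm64 \
|       -t registry.example.com/app:latest --push .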
|
| Rosetta is flawless, including for userland drivers for USB and
| Bluetooth devices, but virtually all of my apps were rebuilt
| native very quickly. (Curious to see what, if anything, is
| running under translation, I just discovered that WhatsApp, the
| #1 Social Networking app in the App Store, still ships Intel-
| only.)
| 0x29aNull wrote:
| ...want to get rid of your old Intel MBP? The video on my 2017
| is dying.
| rgovostes wrote:
| Unfortunately it belongs to work, and not to me. Good luck
| finding a replacement, though.
| pishpash wrote:
| What's also nice is that iPhone/iPad ARM apps can run as
| desktop apps on M1, so when there was no native desktop app
| replacement, sometimes that other native app filled the gap.
| spockz wrote:
| Just to note: you can do `docker buildx install` to make it the
| default backend of `docker build`, which saves you from having
| to switch commands everywhere. I haven't figured out how to make
| it build multiple architectures by default, though, i.e. without
| having to pass the --platform flag every time.
|
| On my M1 I see 16x performance differences in builds in favour
| of native over emulated. Even simple shell scripts run slowly
| or seem to stall when emulated.
| Operyl wrote:
| If you open Activity Monitor you can see which processes are
| running as "Apple" or "Intel"!
| eslaught wrote:
| Are there more details on "docker buildx build" that you can
| point us to? The command line reference doesn't seem especially
| helpful:
| https://docs.docker.com/engine/reference/commandline/buildx_...
|
| E.g. if I wanted to start building ARM binaries on an x86 host,
| is that the sort of thing this would enable?
| jarrell_mark wrote:
| Multipass for an Ubuntu arm64 VM. Podman inside it to create
| and run x86 Docker images.
| mise_en_place wrote:
| My first task at my current job was porting our local dev
| environment to M1, out of necessity. Docker was relatively
| straightforward, but I ran into a hell of sorts trying to get
| deps to compile in my aarch64 container, especially for stale
| projects like leveldb and eleveldb.
|
| In short it was painful but once you get over the attrition of
| compiling (mainly C) deps it's smooth sailing from there on out.
| smoldesu wrote:
| I'm in much the same boat, and I've coped by just switching to a
| nice beefy Linux desktop for most things.
|
| I like how ARM is progressing (I owned a second-batch RPi!), and
| M1 would probably be right for me if I wasn't a technical user,
| but it's simply too exhausting to fight the machine,
| architecture, package manager and product all at the same time.
| Docker is (and has been for a while) loathsome on Mac.
| Virtualization is usually pretty bad too, which makes regression-
| testing/experimentation much slower. I might give it another go
| if Asahi figures out GPU acceleration, but I'm not very hopeful
| regardless. The M series of CPUs doesn't really make sense to me
| as a dev machine unless you have a significant stake in the Apple
| ecosystem as a developer. Otherwise, it's a lovely little machine
| that I have next to no use-cases for.
|
| > Any tips/tricks?
|
| Here's one (slightly controversial) tip: next time you're
| setting up a new Mac, ditch Homebrew and use Nix. This is really
| only feasible if you've got a spacious boot drive (Nix stores
| routinely grow to 70-80 GB in size), but the return is a top-
| notch developer environment. The ARM uptake on Nix is still hit-
| or-miss, but a lot of my package management woes are solved
| instantly with it. It's got a great package repository,
| fantastic reproducibility, hermetic builds and even ephemeral
| dev environments. The workflow is lovely, and it lets me mostly
| ignore all of the Mac-isms of macOS.
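|
| For a taste, an ephemeral dev shell (package names
| illustrative):
|
|     nix-shell -p python3 nodejs go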
| [deleted]
| olliej wrote:
| Whatever this year's macOS version is called includes support
| for running Rosetta in Linux VMs, at least. It sounds like once
| VM apps adopt the appropriate APIs, that will solve many of the
| problems in these comments.
| exabrial wrote:
| 0 Problems: I'm a JVM developer, all of my tools work as
| intended. We deploy things as one-jar with no OS dependencies
| other than ENV variables for configuration.
| bladegash wrote:
| We resorted to building multiple images manually and pushing
| them to ECR, then just having an override compose file that
| people with M1s have to use. Fortunately, we've only had to do
| that with a couple of images that aren't updated all that often.
| ThemalSpan wrote:
| On the whole it's been good.
|
| I work on scientific software, so the biggest technical issue I
| face day-to-day is that OpenMP-based threading seems almost
| fundamentally incompatible with the M1.
|
| https://developer.apple.com/forums/thread/674456
|
| The summary of the issue is that OpenMP threaded code typically
| assumes that a) processors are symmetric and b) there isn't a
| penalty for threads yielding.
|
| On M1 / macOS, what happens is that during the first OpenMP for
| loop, the performance cores finish much faster, their threads
| yield, and then they are forever scheduled on the efficiency
| cores which is doubly bad since they're not as fast and now have
| too many threads trying to run on them. As far as I can tell
| (from the linked thread and similar) there is not an API for
| pinning threads to a certain core type.
| physicsguy wrote:
| Can you not do this using the CPU affinity environment
| variables and just ignore the efficiency cores? I was under
| the impression you could bind to specific cores with:
|
| GOMP_CPU_AFFINITY="1 2 5 6"
|
| with thread 1 bound to core 1, thread 2 to core 2, thread 3 to
| core 5, and thread 4 to core 6. I don't have an M1 to play
| around on, but I'd have assumed the cores have fixed IDs.
|
| Aside from that, if the workload is predictable in time, a more
| complex scheduling pattern might help. You could perhaps look
| at how METIS partitions the workload, and see if it's
| modifiable by adding weights to the cores reflecting their
| relative performance. Generally, to get good OMP performance I
| always found it better to treat it almost like it's not shared
| memory, because on HPC clusters you have NUMA anyway, which
| drags performance down once you have more threads than a single
| processor in the machine has cores.
| neilalexander wrote:
| I do most of my work in Go (with the very occasional splash of
| Swift or Kotlin) and the move to M1 has been utterly seamless for
| me. So much so that I often forget I'm working on an ARM64
| machine until I forget to set GOARCH when compiling and then try
| to copy a binary to a remote machine.
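|
| The fix is just the usual cross-compile one-liner (output path
| illustrative):
|
|     GOOS=linux GOARCH=amd64 go build -o bin/app-linux-amd64 .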
|
| The majority of Docker images that I use are available for ARM
| and the few that aren't perform fine under Docker for Mac
| emulation (although the big performance boost that I saw
| ultimately came from enabling VirtioFS accelerated directory
| sharing).
|
| Just about all of the tools that I use are now available as
| universal binaries, but before that, Rosetta was utterly
| seamless.
|
| I really can't complain.
| matwood wrote:
| It seems like if someone's workflow is heavily local-container
| based, then the M1 has been a rough transition. Otherwise it's
| been pretty seamless.
|
| I'm on my second M1 machine (M1 MBA, now M1 Max MBP), and I
| only had a few issues early on with Terraform. My day-to-day
| software dev is web, Go, and Java.
| jacquesm wrote:
| For now I'm ignoring it, I'm usually about two to three years
| behind the curve and by then the bugs have typically been ironed
| out. I won't be running macOS anyway, but will wait until a fully
| supported version of Debian is out there that uses all of the
| peripherals properly. They call it the bleeding edge for a reason
| and I see no reason to spend extra effort that isn't driven by an
| immediate need. I like tech, I can't stand fashion.
| nojito wrote:
| You will be waiting far more than 3 years.
| tonyhb wrote:
| 1. Run ARM-based Debian using Parallels, headless via `prlctl`.
| SSH in and use tmux.
|
| 2. Everything you install will be ARM-based. Docker will pull
| ARM-based images locally. Most every project (that we use) now
| has ARM support via Docker manifests.
|
| 3. Use binfmt to cross-build x86 images within the VM, or have
| CI auto-build images on x86 machines.
|
| That pretty much does it.
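|
| A rough sketch of step 1 (VM name and IP illustrative):
|
|     prlctl start dev-debian    # boot the VM headless
|     prlctl list -f             # shows its IP address
|     ssh me@10.211.55.7 -t 'tmux new -A -s dev'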
| aidos wrote:
| Yup. We have a lot of complex dependencies, so a couple of us
| got M1s so we could charge into it headfirst and get it sorted.
| It wasn't too bad. We had a couple of 3rd-party things stuck on
| x86, so we emulated them with QEMU within the VM. Slow, but OK
| (eventually we replaced them).
|
| We were using UTM but have recently switched to Parallels,
| which is nice.
|
| Our prod stayed on x86, but we've started moving to Graviton3,
| which is better bang for the buck. I suspect it'll end up being
| a common story for others too.
|
| M1s are just such nice machines that I'd go quite out of my way
| to stay on them now.
| deeptote wrote:
| TL;DR - I tried it when the first M1s came out and it was a
| huge pain; I ended up going back to x86 for my primary machine.
|
| I got an M1 right when they came out because I started a new
| gig right around that time; it literally happened the same
| week. Getting all my dev tools installed became a rat's nest of
| issues. I work as a backend / dist sys / systems engineer for
| my day job, so I have to write and use things that are fairly
| close to bare metal. Brew hadn't been ported yet, so that added
| a whole new layer of issues.
|
| Docker still doesn't work, Rust libs compile in weird ways...
| just all kinds of stuff that I'm not smart enough or paid well
| enough to figure out. My title is "Developer", not "M1 developer
| advocate", so after about a month of running into issue after
| issue, I went back and found a used MBP with an Intel chip. I'm
| excited about the future of Apple silicon, and ARM as a whole,
| but it needs another couple years of refinement.
|
| I will say that I've been using an M1 Mac mini for general
| office work as part of my side business and it's quite good.
| mtoddsmith wrote:
| At some point we're going to have the opposite issue. Stuff will
| work for ARM but not x86. Thanks Apple.
| 0x0 wrote:
| You're welcome.
| skohan wrote:
| But apparently "thanks" unironically right? I am not a CPU
| expert, but from what I have read about ARM during this
| transition (plus with more ARM options becoming available in
| the cloud) it seems to me like x86 is bogged down with a fair
| amount of baggage and ARM/RISK is actually a better technology
| which has been held back by the inertia of x86.
|
| Happy to be corrected if I am wrong.
| viraptor wrote:
| It wasn't clear from the post, but do you work with things that
| actually depend on the architecture a lot? I'm dealing with the
| opposite (still on x86, applications get deployed on arm) and the
| answer was: pretend it isn't happening. If there are obvious
| issues, they'll be caught by the CI, which runs on the target
| architecture. If there are non-obvious problems, I can spin up
| a VM in AWS immediately.
| rowanG077 wrote:
| Maybe this should be a wake-up call to stop using Docker and
| virtualization for DevOps. They have their place in CI but
| should not be used for local development.
| xvilka wrote:
| GDB still doesn't work on M1 Macs.
| pickleMeTimbers wrote:
| The biggest challenge was getting multi-arch builds sorted. Ended
| up putting together a layer on top of QEMU to run both x86-64 and
| aarch64 VMs (https://github.com/beringresearch/macpine). Have
| pre-baked VMs with LXD installed inside each instance, with main
| software builds taking place inside LXD containers - works pretty
| well so far.
| zamalek wrote:
| This works so long as your build isn't compute-intensive. In my
| experience, you need real ARM (or to cross-compile) for stuff
| like C++.
| [deleted]
| _gllen wrote:
| I tried going from a 2018 MBP to an M1 MBA in 2021 and had too
| many issues to make it my primary machine. Docker and Android
| development were particularly brutal to get going reliably,
| IIRC. The M1 performed well for the things it could do, but I
| still needed the MBP (which constantly reached 100% CPU) for
| other stuff, so I ended up doing a horrible multi-machine setup
| with Synergy. That was a dark time in my life ;)
|
| Then I tried again early this year with an M1 Max MBP, and it
| has been the biggest step-change productivity boost of my life.
| There are definitely still some pain points, but the way this
| thing handles anything I throw at it is incredible.
|
| I'm mostly doing front-end dev (React Native). I have a minimum
| of 2 IDEs, 1 iOS simulator, 1 Android emulator, Windows (ARM)
| in a VM, and 2 browsers open at all times. Then add a mix of
| Docker, Xcode, Android Studio, Sketch, Affinity apps, Slack,
| Zoom, etc. I haven't ever heard the fan spin up. I was
| carefully managing what I had open on the 2018 MBP, and now I
| don't even think about it.
|
| The only things I'm still running in Rosetta are Apple
| software: Xcode and the iOS simulator, but they run smoothly,
| so I don't even think about it.
|
| The MBA setup I was just flailing my way through. For the M1
| Max setup, I found this guide very helpful in my initial setup
| (mostly focused on an RN dev): https://amanhimself.dev/blog/setup-
| macbook-m1/
| seanalltogether wrote:
| > The only things I'm still running in Rosetta are Apple
| software: Xcode and the iOS simulator, but they run smoothly,
| so I don't even think about it.
|
| How old is your version of Xcode? From what I can see, they
| added M1 support 1.5 years ago.
| andrewk17 wrote:
| I've been doing React Native development on my M1 MBP and it's
| been pretty great overall.
|
| But is anyone else surprised how long it's taking to get an
| ARM-native iOS Simulator? I feel like it would make a massive
| difference to my developer experience (especially to battery
| life). And I haven't seen any indication that it's coming
| anytime soon.
| _gllen wrote:
| I didn't realize it wasn't already ARM. I found one day
| (https://stackoverflow.com/a/68929949) that running Simulator
| with Rosetta allows momentum scrolling to work, and
| everything else seemed perfect, so I left it that way.
| navjack27 wrote:
| Isn't it... just in Xcode? I've used it. It's like, there,
| right?
|
| I open up my 13.3 beta 2, go to the Xcode menu, go down to Open
| Developer Tool, click on Simulator, then click on File > Open
| Simulator, mouse over to iOS 15.4, go down to the iPhone 13 for
| example, click on it, and then I have a simulated iPhone 13...
|
| And if I check Activity Monitor, nothing is showing up as
| Intel code except for Parsec right now...
| gabereiser wrote:
| You say your job is DevOps work so you probably feel the pain
| more than most people do.
|
| Not being able to run amd64 containers hit me hard. I fought it
| until I just gave in and made sure that everything we built
| could be built for both amd64 and arm64. For specific builds on
| a specific architecture, a GitHub Actions runner on a cloud box
| (or pick your flavor of CI/CD).
|
| Once I looked past my machine to the ecosystem and embraced ARM
| as just another build artifact, it was easier.
|
| I also reject testing locally as a measure of working software,
| so that eliminates some pain. If your coverage is high then
| this is an easy shift. Have a dev environment that you can test
| against that matches your assumed architecture, toolchain-wise.
| ArchOversight wrote:
| > That means other tools that rely on it are also out of the
| picture, like Molecule
|
| You can run Molecule against an EC2 instance or Docker
| containers. Since you can run x86_64 Docker containers on
| Docker for Mac, you can continue to use Molecule. I run
| Molecule tests against Docker containers or LXD in the cloud
| though, just because of how much faster they run on large EC2
| instances.
|
| As for everything else, I haven't really noticed many issues.
| Most of the work I do is built through CI/CD pipelines so what I
| use locally to build doesn't affect what is deployed to
| production.
| ridiculous_fish wrote:
| For me it's been mostly painless. I've even used Time Machine to
| migrate from a 2012 Intel iMac to an Apple Silicon Mac Mini and
| it worked perfectly!
|
| The two pain points:
|
| 1. No support for running older virtualized macOS. I like to test
| back to 10.9 and need an Intel Mac to do that.
|
| 2. One Python wheel which doesn't have Apple Silicon builds and
| doesn't build cleanly:
| https://github.com/CoolProp/CoolProp/issues/2003
| kitsunesoba wrote:
| It's been silky smooth for native desktop (macOS) and mobile
| (iOS/Android) development. I make it a point to keep the
| projects I'm responsible for on the latest toolchains, though.
|
| The amount of trouble that some seem to be having with backend
| dev on M1 makes me wonder if maybe it wasn't the best idea for
| the industry to put its collective eggs in the single basket of
| trying to perfectly match dev and prod environments. If nothing
| else, it feels weird for progress and innovation in the world of
| end-user/developer-facing computing to be held back by that of
| the server world.
| nicoburns wrote:
| It mainly seems to be Docker causing the problems. We run
| Node.js apps, and all we had to do was update to the latest
| version of Node and a couple of dependencies (those with native
| modules). No app changes.
|
| We run macOS aarch64 locally and Linux x86 in production and
| have yet to hit a single compatibility issue (and we run a
| staging environment that's identical to prod, so if there were
| occasionally an issue it probably wouldn't make it to
| production).
| treis wrote:
| What's the other option besides running prod stuff on my local
| machine?
| anotheracctfo wrote:
| Oracle sucks.
|
| I mean in general, but they have also not released ARM
| instantclient or even an ARM version of Java. I think it's
| crazy that I'm using Microsoft's build of ARM Java.
|
| I'm also using Windows 11 ARM in Parallels, which does seamless
| emulation of Oracle instantclient / Java / PL/SQL Developer. So
| most of my workflow has not been interrupted.
|
| Still, just another excuse to move to a better database. Now all
| I have to do is convince our heavily bureaucratic IT department
| to move away from Oracle. It'll be easy, right?
| frampytown wrote:
| > but they have also not released ARM instantclient or even an
| ARM version of Java
|
| Java has been available on ARM since the days of Nokia phone
| dominance. Not sure what you're referring to?
| atonse wrote:
| There have definitely been some rough edges (on my end mostly
| related to Terraform modules; I don't have a big Docker/VM-
| dependent workflow anymore, so that might be why).
|
| But apart from that it's been incredibly smooth.
| ArchOversight wrote:
| For Terraform, I have been using tfenv to manage the different
| versions, and you can set `TFENV_ARCH=amd64` in the environment
| so it downloads the Intel builds of Terraform.
|
| This will also download the Intel versions of all the providers
| when Terraform executes, which reduces the problems a ton,
| since some providers are definitely not built for aarch64,
| especially older versions.
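|
| e.g. (version number illustrative):
|
|     TFENV_ARCH=amd64 tfenv install 1.2.2
|     tfenv use 1.2.2
|     terraform init   # providers also come down as amd64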
| bredren wrote:
| I'm working on refactoring a developer environment from using
| Vagrant/VirtualBox and Docker to Docker only.
|
| The prior goal was to mock production as closely as possible.
|
| The realization is that macOS as a host machine for
| orchestration is close enough for builds. Stricter validation
| can be done in CI and a staging env.
|
| So for this project, the forced transition away from VirtualBox
| has actually led to simplification, and to asking questions
| about why it was "required" previously.
|
| It is a bit of a pain, only because some team members will need
| more support than others, so the entire setup needs to be clean
| and carefully documented when there is other stuff to do.
| nicoburns wrote:
| We go one step further and don't use Docker either!
| dev_tty01 wrote:
| At least some aspects of this issue are getting better as we
| speak. The latest macOS (in beta) not only supports
| virtualizing ARM Linux but also enables the ARM Linux guest to
| use Apple's speedy Rosetta 2 x86 binary translator (with JIT
| support) to run x86 programs within the ARM Linux VM. Based on
| descriptions, it seems the rest of the hypervisor VM framework
| has also matured substantially this release.
|
| https://developer.apple.com/documentation/virtualization/run...
|
| https://developers.apple.com/videos/play/wwdc2022/10002/
|
| If you are not familiar, Rosetta is how Apple Silicon Macs run
| existing Mac x86 binaries, and it is highly performant. It does
| binary pre-compilation and caching, and it also works with JIT
| systems. They are now making that available within Linux VMs
| running on Macs.
| mdp2021 wrote:
| > _highly performant_
|
| Last thing I read, about 70% of native performance was shown by
| running Geekbench through Rosetta (with a few odd results
| noted).
|
| If somebody has better info...
|
| Edit: I see that Nov 2020 checks showed around 80% performance,
| and there was discussion on HN at (at least)
| https://news.ycombinator.com/item?id=25105597
| dev_tty01 wrote:
| Here are my numbers for the original M1 (not Pro or Max) soon
| after release:
|
| ARM Geekbench single-core, M1 native macOS: 1734
| ARM Geekbench single-core, Windows-on-ARM VM on M1: 1550
| x86 Geekbench single-core, i9 MacBook Pro macOS: 1138
| x86 Geekbench single-core, Rosetta on M1 macOS: 1254
|
| Yes, 72% for x86 Rosetta vs. native M1. However, x86 Rosetta on
| M1 was faster than the previous i9 2019 MacBook Pro running x86
| natively. I consider that performant for running code that was
| compiled for a very different architecture.
| philistine wrote:
| When you compare it with the sad-trombone sound that Windows
| has produced for its ARM OS, it is _speedy_.
| azinman2 wrote:
| My understanding is the AOT won't be available to Linux; it's
| JIT only.
| runjake wrote:
| The WWDC video is unclear but seems to imply that it works
| exactly the same as on macOS.
|
| Hopefully, this is the right timestamp:
|
| https://developer.apple.com/videos/play/wwdc2022/10002/?time.
| ..
| dev_tty01 wrote:
| Interesting. Where did you see that? I'm still trying to get
| a handle on the latest changes.
| azinman2 wrote:
| Says so here, which was posted earlier this week. I cannot
| verify its accuracy. I do work for Apple, but not at all on
| related stuff.
|
| https://threedots.ovh/blog/2022/06/quick-look-at-rosetta-
| on-...
| 0x0 wrote:
| No problems here. Node, PHP, Apache, MariaDB and PostgreSQL run
| native out of the box via Homebrew. Java 11 and Java 17 have
| native aarch64 builds via Homebrew and/or Temurin (or the
| Oracle OpenJDK project, which unfortunately doesn't seem to
| care about being a responsible security-patch vendor at all).
| Android Studio is fine, except they don't support Android TV
| emulators yet. UTM with an aarch64 Debian guest runs MSSQL
| (Azure SQL Edge) in Docker natively, as well as anything else
| you'd expect from a high-quality Debian distribution. UTM with
| Windows 11 arm64 even runs VS2022 through its fairly efficient
| x64 usermode translator (WPF apps and everything). Xcode and
| the iOS simulator work great as expected, too.
|
| Even the x64 Java 8 SDK for macOS runs without a glitch. I
| mean, how impressive is that, with JIT and everything? Mind
| blown.
|
| I didn't even understand the point of the new macOS 13 Ventura
| Linux Rosetta thing until I realized some people are still
| running x64 Docker containers. (Why, though?)
| cyanydeez wrote:
| Apple is not worth thinking about or considering. They're
| basically Sony on steroids.
| usrn wrote:
| I almost exclusively use FOSS. Most of it was ported a decade ago
| at least.
| mountain_peak wrote:
| Great answer and definitely in keeping with the original vision
| of what computing should be - open and accessible.
|
| Maybe people here haven't lived through 68K to PPC migrations,
| or to DEC Alpha, or Sun SPARC to Intel, or PPC to Intel, or any
| number of platforms and platform shifts - some lasted longer
| than others, but all had their ups and downs. The largest
| 'down' was the predatory business practices of the 80s and 90s,
| which set computing back a decade (and apparently still
| continue today). It's unfortunate that many of these FUD-type
| articles pop up whenever a new platform/chip is announced. I'm
| excited for technological progress and think that every new
| announcement is another small miracle that I'm happy to be
| around for.
| xd1936 wrote:
| Higher Education IT here. For our users that we support, it's
| been great on the whole... except for those who need to use a VM
| for the occasional Windows-only desktop app. UTM[1] seems to be
| the best option (everything else is in technical preview or not
| supported?) but it's slow as a dog to emulate x86. ARM Windows
| isn't great either if you want to just virtualize. Suggestions
| welcome!
|
| 1. https://getutm.app/
| sdevonoes wrote:
| Vagrant + vmware vagrant plugin
| (https://www.vagrantup.com/vmware/downloads) + vmware fusion tech
| preview (https://communities.vmware.com/t5/Fusion-for-Apple-
| Silicon-T...).
|
| Currently running a bunch of Ubuntu (ARM) virtual machines, and
| my MBP M1 handles it really nicely.
| nitwit005 wrote:
| I just learned how to run our apps outside of Docker and
| VirtualBox. Setting up two Postgres DBs, a NodeJS process and a
| Python process wasn't completely trivial, but it wasn't all
| that difficult either.
| matsemann wrote:
| Of those, I feel Python is the big problem, at least if one
| deals with multiple projects needing different versions. When
| you've finally got the paths and venvs correctly set up, many
| packages won't install correctly because there is no wheel for
| M1 / your architecture. So then you have to compile everything
| yourself, which then fails at some new step.
| ryall wrote:
| I think I've commented before on this but I've had great success
| using VSCode Remote Containers. Essentially using the M1 as a
| frontend to an x86 environment.
|
| Works great, and I can move between local and cloud servers
| depending on requirements.
| silasb wrote:
| I'm an Eng for a startup using Rails, MySQL, ... Next.js.
|
| The only problem we've had is slow Docker performance with our
| databases. So much so that we've moved those out of Docker and
| back to native; performance is easily 6x faster. MySQL was also
| a headache because an official MySQL 5.7 Docker image doesn't
| exist for ARM, so we needed to use slow emulation through QEMU.
|
| We also have a CLI dev tool that is written in Python and
| distributed in Docker (x86), which has also been slow. There
| hasn't been enough time to build an ARM-based Docker image.
| jeppester wrote:
| There are many comments here that go: "we had to do all sorts
| of configuration for this to work, but it's been great and we
| like it".
|
| As a primarily Linux user, these feel like very familiar
| stories.
|
| It's kinda refreshing to hear those stories from Mac users.
| Maybe we are not so different after all.
| NegativeLatency wrote:
| Dropped Docker for local development; I just run stuff
| natively, relying on tests and CI to catch any issues, but I
| haven't really had any.
| jmartin2683 wrote:
| Adopting M1 has been virtually pain-free for us. Our projects
| are all Rust; we just specify the CPU target in the Docker
| builds.
| [deleted]
| davidmurdoch wrote:
| I just added building and publishing ARM64 Docker containers
| to our automated release process, and the CI (GitHub Actions)
| time went from about 10 minutes to an hour and a half.
|
| I don't expect many teams to volunteer to suffer this sort of
| slowdown and complexity in the near term.
| jacobwg wrote:
| I've been working on Depot (https://depot.dev) specifically for
| this reason: it's a hosted Docker builder service that runs
| BuildKit on managed VMs. When it receives incoming build
| requests, it routes them to a VM running the target
| architecture: x86 VMs run in Fly.io, arm64 VMs run in AWS.
|
| Since it's all BuildKit, you can swap `docker buildx build` for
| `depot build` and it works exactly the same - I made a
| depot/build-push-action to drop in place of the docker/build-
| push-action in GitHub Actions.
|
| It also has a persistent SSD cache disk for each builder. That
| was my other pain with GitHub Actions: the time spent saving
| and loading the layer cache was negating the speedups from
| cache hits. With a persistent disk, there's no saving or
| loading.
|
| Anyways, combo of having a local cache and running on real ARM
| machines gives like an order of magnitude speedup to builds
| compared to the QEMU emulation.
|
| Still a new project, not yet officially launched, and hosted
| services aren't for everyone, but exactly as you said, the
| status quo is amazingly painful.
| andreineculau wrote:
| That's because there are no ARM runners on GitHub Actions, so
| you end up emulating ARM, which is slow.
|
| You can add self-hosted ARM GitHub runners, or register ARM
| hosts for Docker, and see down-to-earth build times.
| zamalek wrote:
| Azure has ARM in preview, and AWS has had it for ages. You should
| be able to create multi-arch builds in CI.
|
| For actually creating multi-arch, I recommend you stay as far
| away as possible from Docker and use Podman and Buildah. The
| latter unbundles some of the Docker manifest commands, giving you
| far more control over how you create multi-arch images. I wasted
| 4 months on Docker tooling, and got it right in half a week with
| Podman. This meant switching from DCT (Podman doesn't support
| this at all) to Cosign, but Cosign is far more sensible than DCT.
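|
| A minimal sketch of the Buildah flow (names illustrative; check
| flags against your Buildah version):
|
|     buildah manifest create app:latest
|     buildah bud --arch amd64 --manifest app:latest .
|     buildah bud --arch arm64 --manifest app:latest .
|     buildah manifest push --all app:latest \
|       docker://registry.example.com/app:latest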
|
| There are a rare few containers that you can get away with
| running on x86.
| evantahler wrote:
| Over at Airbyte, we had a project this quarter to update all of
| our build & publish processes over to building multi-arch (AMD
| and ARM) docker images. As Airbyte runs entirely within docker,
| getting a smooth local experience for folks on M1/(2?) Macs was
| important. We had a long lived support thread (1) where you can
| see us grow through all the phases - from "nothing works", to
| "our deps don't work", to "the platform works" and finally to
| "the connectors work"!
|
| Assuming your base images are themselves already multi-arch,
| most of the tooling we needed was already built into the
| `docker buildx` tool, which is awesome - check it out if you
| haven't (2).
| Docker has bundled all the tooling and emulation packages (qemu)
| needed into a single docker image that can publish multi-arch
| docker images for you! You run docker to emulate docker to
| publish docker... There are some interesting things that you'll
| need to do if you publish multi-stage builds, like publish a tmp
| tag and delete it when you are done, but it's not /too/ terrible.
| Since Airbyte is OSS, you can check out our connector publish
| script here (3) to see some examples.
|
| I'd recommend spending the time to sort out your multi-arch
| tooling - not only does it make the local dev experience
| faster/better, it:
|
| 1. unlocks ARM cloud compute, which can be faster/cheaper in many
| cases (AWS)
|
| 2. removes a class of emulation bugs when running amd64 images
| on ARM - mostly around networking & timing (in the Java stack,
| anyway)
|
| Links:
|
| 1. https://github.com/airbytehq/airbyte/issues/2017
|
| 2. https://docs.docker.com/buildx/working-with-buildx
|
| 3.
| https://github.com/airbytehq/airbyte/blob/master/tools/integ...
| herpderperator wrote:
| If you need to work with amd64 Docker images on an M1, just SSH
| to an amd64 AWS instance and do the builds there while things get
| ironed out. Otherwise, you can do the builds with `docker build
| --platform linux/amd64` but it'll be slower since it's emulated.
| johnklos wrote:
| Simple: I never target specific CPUs to begin with ;)
|
| I'm only half joking. I'm of the group of people who know that
| Docker is a security nightmare unless you're generating your
| Docker images yourself, so wherever I've had to support that, I
| insist on that. If you don't use software that's either
| processor-centric (and therefore buggy, IMHO) or binary-only,
| then this is straightforward and a win for everyone.
|
| Run x86 and amd64 VMs on real x86 and amd64 servers, and access
| them remotely, like we've done since the beginning of time
| (teletypes predate stored program electronic computers).
|
| Since Docker is x86/amd64-centric, treat it like the snowflake
| it is, and run it on x86/amd64.
| porcoda wrote:
| I'm in a similar boat - love the performance/battery of my M1
| MacBook Air, but the ecosystem is just too messy at the moment
| for me. I have a few tools I need to use that haven't yet been
| making official Apple Silicon releases, due to GitHub Actions
| not supporting Apple Silicon fully yet. The workaround involves
| maintaining two versions of Homebrew, one for ARM and one for
| x86-64, and then being super careful not to forget whether
| you're working in an ARM environment or an x86 one. It's too
| much of a pain to keep straight for me (I admit it - I lack
| patience and am forgetful, so this is a bit of a "me" problem
| versus a tech problem).
|
| My solution was to give up using my M1 Mac for development
| work. It sits on a desk as my email and music machine, and I
| moved all my dev work to an x86 Linux laptop. I'll probably
| drift back to the Mac if the tools I need start to properly
| support Apple Silicon without hacky workarounds, but until
| GitHub Actions supports it and people start doing official
| releases through that mechanism, I'm kinda stuck.
|
| It is interesting how much impact GitHub has had by not having
| Apple Silicon support. Just look at the ticket for this issue to
| see the surprisingly long list of projects that are affected.
| (See: https://github.com/actions/virtual-
| environments/issues/2187)
| brundolf wrote:
| > It is interesting how much impact GitHub has had by not
| having Apple Silicon support
|
| Putting on my tin-foil hat for a sec: GitHub is owned by
| Microsoft, who would really stand to benefit from slowing down
| Apple Silicon adoption a bit...
| michaelt wrote:
| Alternative theory: Apple doesn't offer an M1 server. GitHub
| doesn't offer an M1 build server because M1 servers don't
| exist.
| spockz wrote:
| I'm having the same issue on Azure DevOps. The only way forward
| seems to be running your own ADO agents on ARM machines you've
| managed to arrange. ARM on Azure is a private beta that you
| have to sign up for.
|
| That wouldn't be too much of an issue if you could just cross-
| compile like you can with Go. However, GraalVM can't do this
| yet.
| cordite wrote:
| Waiting for GitHub actions to have ARM.
| frankwiles wrote:
| Multiarch builds are pretty darn easy to set up in my
| experience (exclusively Linux-based images, FYI), so I'd
| refocus the energy being spent on VirtualBox etc. into just
| setting them up, and then the problem is solved.
| dimgl wrote:
| I've had 0 issues. Everything has worked for me out of the box,
| so to speak.
| ab-dm wrote:
| Recently got an M1 Mac, I couldn't be happier. I use it all day
| every day for dev (ruby, node, react)
|
| Everything was far easier than I expected it to be. The only
| issue I had was with installing Python (a few CLI utils
| required it), but everything else has been smooth sailing and
| a much better experience than running things on my 2019 MBP.
|
| I'm not a huge docker user, but I run it for a few things and
| again, it was all smooth sailing.
| TameAntelope wrote:
| I switched basically at the very beginning of M1 release. It was
| an absolute nightmare until I got everything working (was using
| Docker Desktop extensively), and then I haven't thought about it
| since.
| stock_toaster wrote:
| I just have a headless x64 Linux machine running Docker and use
| the Docker CLI from my Mac to interact with the remote daemon
| (via docker context), plus a synced directory structure for
| any funky volume mounts I need. Works great.
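|
| The setup is just (host/user illustrative):
|
|     docker context create linuxbox \
|       --docker "host=ssh://me@buildhost.example.com"
|     docker context use linuxbox
|     docker ps   # now talking to the remote dockerd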
| rvz wrote:
| It would have made sense to simply ignore the M1 hype
| altogether. If the tools you require do not work on ARM, or
| run worse than on Intel, you're better off staying on Intel
| and waiting until the VM situation on Apple Silicon improves
| first: [0].
|
| For developers using VMs, Docker, Multipass, etc., I think it
| is more trouble than it is worth to jump onto the new shiny
| thing and invest time in workarounds that break on a new
| update. At least you weren't part of the November 2020
| launch-day chaos; had you gone all in on the M1 then, you
| would have been waiting 6 months to do any work.
|
| Looks like Intel is (still) the way to go for VMs until Apple
| Silicon gets better (eventually).
|
| [0] https://news.ycombinator.com/item?id=26159495
| pineconewarrior wrote:
| I use Docker and Colima constantly on M1 and have had very few
| issues. Granted, my use case for those things is probably quite
| simple compared to someone in Ops.
|
| For web development, I believe that Apple Silicon is really the
| place to be right now (especially if you also work on design
| projects!)
| c7DJTLrn wrote:
| Intel MacBook supplies are decreasing, which has actually
| caused them to go up in price. In a few years they will be
| difficult to get. Any company which uses MacBooks is going to
| have to make the switch at some point - better sooner than
| later.
|
| Also, the post you linked is over a year old and the situation
| has changed since.
| zamalek wrote:
| When I first joined my current employer in late November/early
| December (employee 1 with an M1), we could not source Intel
| machines directly from Apple. The only option was to purchase
| a refurbished device.
|
| If you aren't ready to switch to ARM, consider Linux.
| cehrlich wrote:
| I'm happy with it.
|
| Here's a tip for anyone with Docker compatibility problems: if
| you add `platform: "linux/amd64"` to your docker-compose file
| (there's also a similar directive for Dockerfiles, IIRC), it
| just pulls the x64 images and emulates those.
|
| There is emulation overhead of course, but compared to running
| native images it's not noticeable in my experience.
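|
| e.g. in docker-compose.yml (service/image illustrative; the
| Dockerfile equivalent is `FROM --platform=linux/amd64 ...`):
|
|     services:
|       db:
|         image: mysql:5.7
|         platform: "linux/amd64"   # pull x64 image, run emulated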
| jayd16 wrote:
| Something really annoying about this is that for some reason
| Docker can't seem to easily switch platforms for base images.
| If you have an x64 base image and try to run an arm64 image on
| top, it'll complain. Why doesn't it just download the right
| version automatically instead of forcing me to solve it? Seems
| like there are still some rough edges here.
| spockz wrote:
| AFAICT, this is the default behaviour? If there is no ARM-
| specific image, it just tries the x86-64 image, which is then
| emulated. With Docker 4 and Podman at least.
| gjsman-1000 wrote:
| When I started, I had Rosetta for almost everything, and I was
| able to do my workflow without Docker (which was super broken
| back then). Several months later, I reset my Mac and
| reinstalled everything, this time with far fewer Rosetta
| parts. Several months after that, I did it again, and this
| time I was completely free of anything needing Rosetta,
| because everything was native by that point.
| marpstar wrote:
| For me personally (as a freelancer), it's been a pretty smooth
| transition. I have a dozen or more projects relying on node-sass
| (which fails to compile on M1), which has been annoying but
| easily remedied.
|
| For my 9-5 employer, biggest drawback we've come across is that
| SQL Server can't be installed on Windows 11 ARM, which is
| preventing us from having a truly local development environment.
|
| We've gotten everything else working via Azure SQL Edge running
| via Docker for Mac, but it lacks several features that we require
| (e.g. full-text search, spatial data types).
|
| Despite a recent announcement
| (https://blogs.windows.com/windowsdeveloper/2022/05/24/create...)
| that Visual Studio will soon support ARM, there are no signs
| that SQL Server 2022 will support ARM.
|
| My employer is still moving forward with provisioning M1 MBPs for
| developers.
| [deleted]
| ChrisMarshallNY wrote:
| Mine has been great, but it's not a fair comparison. I write
| native apps for Apple stuff in Swift, so I'm pretty much who the
| new stuff was optimized for.
|
| I have noticed that some apps can get "hangy," including Xcode,
| SourceTree, and Slack. I sometimes need to force-quit the system
| (force-quitting apps seems to now have about a 25% success rate).
| SourceTree also crashes a lot. A lot of this happened after I got
| my MBPMax14. I don't know if it would happen with any other
| machine.
|
| These are not showstoppers (I've been having to force-quit for
| years. Has to do with the kind of code I write), but it is quite
| annoying. I have faith that these issues will get addressed.
| w0mbat wrote:
| I don't use Docker, so I have had no problems whatsoever.
| Getting all the Mac apps I code on building for ARM was easy.
| ratww wrote:
| Same. I waited a couple months to buy my M1, so when I got it
| everything I needed was running fine.
|
| There were a couple libraries my company needed that didn't
| have ARM support but I ported them and made pull requests to
| the repos, and now they work alright. It wasn't difficult at
| all, since lots of stuff already had ARM code because of ARM-
| Linux, Android or iOS.
|
| I go weeks without turning on my work-provided Intel Mac. I
| actually only use it for "personal" stuff (I help maintain some
| open source C/asm stuff that uses multiple OSs and
| architectures). My boss asked if I want an upgrade, though.
| dundarious wrote:
| It is quite interesting scrolling through the other comments
| and seeing that a large majority of the problems are with
| Docker pipelines (hard-coded x86_64 images that must be
| emulated, tooling), in other words, problems extrinsic to the
| platform/OS and the actual program code/dependencies.
| pdoege wrote:
| The desktop user experience has been quite good.
|
| Virtualization Framework's VLAN support is not mature and getting
| more than 100 machines per rack has proven difficult. The need
| for additional switches, patch panels, uplinks and cooling makes
| multi-thousand machine installations slow due to the recent
| logistics unpleasantness.
|
| Using Studios is hard because of massive delays to orders,
| especially for the 'big' machines in 1,000-unit quantities.
|
| x86 and x86/GPU still seems to be the best approach for prod
| datacenter use.
|
| Otherwise, I am a fan.
| sneak wrote:
| If you bring up a remote VM and set DOCKER_HOST to something like
| "ssh://root@$IP" and have key auth set up, the local docker CLI
| works as it always did but using a remote dockerd via ssh. I do
| all my container builds this way (on remote x64) because
| hotel/LTE internet sucks and I would rather download 47363367373
| npm packages 4700 times on datacenter gigabit.
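|
| i.e. (address illustrative):
|
|     export DOCKER_HOST="ssh://root@203.0.113.10"
|     docker build -t app .   # runs on the remote dockerd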
| Quinner wrote:
| Our entire dev team switched from MacBooks to laptops running
| linux.
| arnaudsm wrote:
| Same, and I'm not looking back. Docker performance is stellar,
| everything is dev friendly, and the OS actually treats you like
| an adult.
| forty wrote:
| I don't have a Mac; I'm a Linux user, but many of my colleagues
| have Macs, some of them M1s. And the ARM thing is really a
| pain, much more than the difference between OSes. Just a random
| example: we have an app which uses MySQL 5.7, and we use MySQL
| in Docker for integration tests. Unfortunately, MySQL 5.7 won't
| run on ARM (current workaround: they use a MariaDB image, which
| is apparently good enough, and the CI would catch any
| difference). There are many small things like this. I would
| currently not recommend using those new Macs until things
| improve, if you want to avoid wasting time on uninteresting
| issues.
| turtlebits wrote:
| You can run x64 containers on M1; they're just slower. Just
| add the `--platform linux/amd64` flag.
| forty wrote:
| Yes, it does work, but then those tests take an unacceptably
| long time to run.
| navjack27 wrote:
| Sucks you need 5.7, I guess. I have a native install of 8.0.29
| on my Mac Mini for my Gitea.
___________________________________________________________________
(page generated 2022-06-10 23:00 UTC)