[HN Gopher] Apple M1 Max Geekbench Score
___________________________________________________________________
Apple M1 Max Geekbench Score
Author : mv9
Score : 270 points
Date : 2021-10-20 17:50 UTC (5 hours ago)
(HTM) web link (browser.geekbench.com)
(TXT) w3m dump (browser.geekbench.com)
| diebeforei485 wrote:
| I'm in the situation where I really want to pre-order a 14", but
| I have no idea if going with the base model would be a mistake.
|
| Would upgrading to 32GB RAM make Xcode faster? Or would it be a
| waste of $400?
| nunez wrote:
| Hard to tell, given that M1s address and page memory completely
| differently from how x86 does it. My 8GB M1 MacBook Air
| performs extremely well even when memory pressure is high...and
| it never seems to hit swap space.
|
| Anecdotal example: I could have several Firefox tabs with
| active workers in the background (WhatsApp, Slack, etc.), a
| Zoom meeting with video conferencing on, with audio and video
| being routed via OBS (native), a Screen Sharing session over
| SSH going, and a Kubernetes cluster in Docker running, and that
| won't even make the MacBook hot. Nothing slows down. I could
| get maybe five hours out of the battery this way. Usually six.
|
| Doing that on a maxed out Intel MacBook Pro will make it sound
| like a jet engine and reduce my battery life to two or three
| hours. It will also slow to a crawl.
|
| I'm guessing buying a machine with 32GB of RAM is an investment
| into the future where workloads on m1 machines are high enough
| to actually give the memory a run for its money.
| thebean11 wrote:
| 16GB of RAM seems so low in 2021. OTOH, hard drives are so fast
| on these things that maybe 16 is good enough with the SSD as
| overflow.
|
| I ended up shelling out the extra $400 for 32GB, but didn't
| feel great about it!
| bluedays wrote:
| On the other hand I've been running 16GB of ram for a while
| and I can't conceive of a reason why I would need more. 32GB
| seems like overkill. What would you do with all of that ram?
| Open more tabs?
| t-writescode wrote:
| Compilation, graphics work, heavy-weight IDEs
|
| And, if you're spinning on 16GB of RAM on an M1, you might be
| eating through the SSD powering your swap space and not
| know it.
| EugeneOZ wrote:
| Indeed. In my experience, my MBA uses swap
| sometimes, but I don't notice it. Still, I want to avoid
| it.
| thebean11 wrote:
| Looking at activity monitor, I'm currently using ~19GB,
| doing nothing special (IntelliJ, 15 chrome tabs, Spotify,
| Slack). Docker Desktop was using 8GB before I killed it.
| And this is on an Intel Mac so it doesn't include GPU
| memory usage I believe, which is shared in M1 Macs.
|
| This likely isn't a perfect metric; if I were closer to the
| limit I think macOS would get more aggressive about
| compression and offloading stuff to disk. But still.
| Tagbert wrote:
| Browsers generally allocate RAM with abandon but any one
| process is not necessarily memory bound. It just means
| that they are using that as page cache.
| josephpmay wrote:
| I've heard from people with M1 laptops that they perform
| surprisingly well compared to Intel laptops, even with less
| RAM on the Mac. I imagine the same will hold with the Pro and
| Max, although it will depend a lot on what type of work you're
| doing.
| EugeneOZ wrote:
| I have an M1 Air with 8GB and I realize now that I should have
| gotten at least 16GB. Not sure about 32GB, but 16 is the
| minimum. Profile: JS, TS, Rust, sometimes Java.
| johnboiles wrote:
| I am one of those people. I bought an 8gb M1 Air last month
| for a project (while I waited for the new models to be
| released). It baffles me how well this little computer
| performs with less RAM than I've had in a computer since
| 2009. I'd love an explanation of how that works. Maybe the
| SSD is so fast that swapping isn't a big deal?
| runako wrote:
| I looked into this angle recently. The SSDs in the new
| MBPs are roughly 3x as fast as the one in my 2015 MBP
| (7.4GB/s vs 2GB/s). To contextualize, the new MBPs have
| roughly half as much SSD bandwidth as my MBP does memory
| bandwidth.
|
| Which is to say the SSD is much closer to being as fast
| as RAM, which would explain why it subjectively can make
| better use of less memory.
| johnboiles wrote:
| Neat!
| azinman2 wrote:
| Unlikely to make Xcode any faster. Just look at your RAM usage
| now and then forecast a bit to know.
| pbowyer wrote:
| Same situation as you, looking at 14". Need to see what the
| Xcode benchmarks are like for the variety of M1 Pro processors
| on offer, to see if any upgrades from the base model are
| worthwhile.
|
| If I were sticking with 16GB of RAM, I think I'd get an M1 Air
| instead. The smaller screen is the downside, but 300g lighter
| and substantially cheaper are good points.
| rvanmil wrote:
| I'm upgrading from an M1 Air (16GB) to a 14" Pro base model
| just for the display. The extra M1 Pro performance is a bonus,
| but the M1 has already been amazing for development work over
| the past year.
| pbowyer wrote:
| Is it the resolution you've found limiting on the M1 Air?
| My eyesight is slowly getting worse so I'm assuming I'll
| have to run both at 200%. Which makes the Air a 1280x800
| screen - something I last had on my 2009 13" MBP!
| hajile wrote:
| M1 only allows one external monitor (you can run 2 with
| some docks if you turn off the laptop screen). This isn't
| such a problem for me as I have an ultra-wide, but lots
| of people with dual/triple monitor setups haven't been
| super thrilled.
| rvanmil wrote:
| I'm lucky enough to still have good eyesight so it's not
| limiting for me personally. Most of the time when working
| I've got it hooked up to a big 4K display though. My
| expectation is I'll appreciate the new XDR display for
| the times I'm not on an external monitor.
| kstrauser wrote:
| My opinion: the jump from 16GB to 32GB probably won't make a
| huge difference today, especially if your workload is already
| running alright in 16GB. I think it'll greatly extend the
| useful life of the laptop, though. For example, I wouldn't want
| to be using an 8GB laptop today.
| [deleted]
| hajile wrote:
| When I start up Kubernetes, my usage for that alone goes up
| to 6+GB. Swapping to SSD is terrible for its lifespan. 32GB
| should have you going for quite a while unless you need/want
| lots of vRAM, in which case I'd go with 64GB.
| danieldk wrote:
| I wouldn't go for the base model, since it has 6 performance
| cores, rather than 8.
| rgbrenner wrote:
| RAM is RAM. Don't fall for marketing hype and think M1 means
| you need less RAM. If you think you need it, you still need it
| on M1... and I say that as someone who owns an M1 Mac.
| EricE wrote:
| Yes - I wish people would stop implying that the M1 magically
| doubles RAM and other such nonsense. I found the same. I have
| a game that requires at least 32GB (Cities:Skylines - mainly
| because of my self inflicted steam workshop asset addiction)
| and ended up returning the Air to wait for the next round.
| Decided to go all out - have a 16" 64GB M1 Max with 2TB of
| storage on the way.
| marcodiego wrote:
| Consider that you always can download more ram:
| https://downloadmoreram.com/
|
| /s
| busymom0 wrote:
| Depends upon your workflow, but I use Xcode and Android Studio
| and 16GB isn't enough if I run a simulator or emulator.
| Definitely get the 32GB imo.
| [deleted]
| wolrah wrote:
| RAM is something where you either have enough or you don't.
| When you have enough, adding more doesn't really get you
| anything. When you don't have enough, performance tanks.
|
| The integrated nature of the M1 Macs gives them a much better
| ability to predictively swap unused content to SSD and make the
| most of the available RAM when things get tight while
| multitasking, but if you have a single task that needs 17GB and
| you have 16GB it's going to suffer a lot compared to if you had
| 32GB.
|
| I wish Apple (and everyone else) would give up on the ultra-
| thin crap and make a real Pro machine that returns to
| upgradability at the cost of a few extra millimeters, but for
| now since you're stuck with what you start with I'd recommend
| always going at least one level above what you think you might
| ever need during its expected useful life.
| stocknoob wrote:
| Figure out your hourly wage, how many hours a day you use your
| laptop, and whether $400 amortized over the life of the device
| is worth the "risk".
| robertwt7 wrote:
| Is that supposed to be fast, guys?
|
| Isn't that still slower than some Ryzen 5 5600X? (my PC uses
| this but below is not my benchmark)
|
| https://browser.geekbench.com/v5/cpu/8238216
|
| I'm not sure how fast or good that number is... but I've heard
| good things about the M1 and am planning to probably upgrade.
| mciancia wrote:
| > Base Frequency 4.72 GHz
|
| > Maximum Frequency 6.03 GHz
|
| This was seriously overclocked; I wouldn't be surprised if it
| was with liquid nitrogen. So it's probably a poor comparison to
| a laptop CPU ;)
| jjcon wrote:
| We're also seeing slower m1max benchmarks though
|
| https://browser.geekbench.com/v5/cpu/10476727
| robertwt7 wrote:
| Ah yes that's true, didn't realise it was overclocked.
|
| Probably better to compare against the laptop Ryzens..
| maxpert wrote:
| Site not loading for me :(
| e0m wrote:
| Wow, it does much better than Geekbench's prior top processor
| (https://browser.geekbench.com/processor-benchmarks), the Intel
| Core i9-11900K
| (https://browser.geekbench.com/processors/intel-core-i9-11900...).
|
| 10,997 for Intel i9
|
| vs
|
| 12,693 for M1 Max
| systemvoltage wrote:
| Per core performance is the most interesting metric.
|
| Edit: _for relative_ comparison between CPUs, per core metric
| is the most interesting unless you also account for heat, price
| and many other factors. Comparing a 56-core CPU with 10-core M1
| is a meaningless comparison.
| 5faulker wrote:
| Just when you think things have hit the top, there's a new kid
| in town.
| PaulDavisThe1st wrote:
| Not when building large software projects.
| gchokov wrote:
| Like... everyone builds large projects all the time?
| akmarinov wrote:
| If you don't then you don't really need the top end do
| you?
| ajuc wrote:
| Most people who buy fast cars don't need them and it's
| the same with computers.
| YetAnotherNick wrote:
| By that logic you could build an array of mac mini if you
| don't care about price/heat.
| semicolon_storm wrote:
| What compiler could even make use of 10 cores? Most build
| processes I've run can't even fully utilize the 4 cores.
| pornel wrote:
| Rust (Cargo) does, and always wants more.
| destitude wrote:
| Xcode has no issues taking advantage of all cores.
| ukd1 wrote:
| Just running the tests in our Rails project (11k of them)
| can stress out a ton of cores; we're regularly running them on
| 80+ cores to keep our test completion time ~3 minutes. M1 Max
| should let me run all tests locally much faster than I
| can today.
| andy_ppp wrote:
| Wow, what is the system doing to have 11000 tests?
| throwawaywindev wrote:
| C++ compilers probably will.
| spacedcowboy wrote:
| I can stress out a 32+32 core 2990WX with 'make -j' on
| some of my projects, htop essentially has every core
| pegged.
| jlmorton wrote:
| Often a single compiler won't make use of more than a
| core, but it's generally easy to build independent
| modules in parallel.
|
| For example, make -j 10, or mvn -T 10.
| [deleted]
| PaulDavisThe1st wrote:
| Compilers typically don't use multiple cores, but the
| build systems that invoke them do, by invoking them in
| parallel. Modern build systems will typically invoke
| commands for 1 target per core, which means that on my
| system for example, building my software uses all 16
| cores more or less until the final steps of the process.
|
| The speed record for building my software is held by a
| system with over 1k cores (a couple of seconds, compared
| to multiple minutes on a mid-size Threadripper).
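The one-job-per-core pattern described above can be sketched with a toy scheduler; `compile_unit` is a hypothetical stand-in for invoking a real compiler on one translation unit:

```python
import multiprocessing as mp
from concurrent.futures import ProcessPoolExecutor

def compile_unit(name: str) -> str:
    # Hypothetical stand-in for running a compiler on one translation unit.
    return name + ".o"

def build(units):
    # Like `make -j N`: schedule up to one compile job per core at a time.
    with ProcessPoolExecutor(max_workers=mp.cpu_count()) as pool:
        return list(pool.map(compile_unit, units))

if __name__ == "__main__":
    objects = build(["module%d" % i for i in range(16)])
    print(objects[0], objects[-1])  # module0.o module15.o
```

The compiler processes themselves stay single-threaded; the parallelism comes entirely from how many of them run at once.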
| ur-whale wrote:
| > Not when building large software projects.
|
| Or run heavy renders of complex ray-traced scenes.
|
| Or do heavy 3D reconstruction from 2D images.
|
| Or run Monte-Carlo simulations to compute complex
| likelihoods on parametric trading models.
|
| Or train ML models.
|
| The list of things you can do with a computer with many,
| many cores is long, and some of these (or parts thereof)
| are sometimes rather annoying to map to a GPU.
| Someone wrote:
| It seems Apple thinks it _can_ map the essential ones to
| the GPU, though. If they didn't, there would be more CPU cores
| and less powerful other hardware.
|
| 'Rather annoying' certainly doesn't have to be a problem.
| Apple can afford to pay engineers lots of money to write
| libraries that do that for you.
|
| The only problem I see is that Apple might (and likely
| will) disagree with some of their potential customers
| about what functionality is essential.
| concinds wrote:
| I wish there were laptop-specific Geekbench rankings because
| right now it seems impossible to easily compare devices in the
| same class
| wmf wrote:
| The M1 Pro/Max are effectively H-class chips so you can
| search for 11800H, 11950H, 5800H, 5900HX, etc.
| zsmi wrote:
| Your comment got me wondering if there was actually a
| method to Intel's naming madness, and it turns out there
| is!
|
| https://www.intel.com/content/www/us/en/processors/processor...
|
| 11800H = Core i7-11800H -> family=i7 generation=11 sku=800
| H=optimized for mobile
|
| 11950H = Core i9-11950H -> family=i9 generation=11 sku=950
| H=optimized for mobile
|
| I didn't look up the AMD names.
|
| So, now that I know the names, why not use Core i9-11980HK?
|
| family=i9 generation=11 sku=980 HK=high performance
| optimized for mobile
|
| It seems like it exists
| https://www.techspot.com/review/2289-intel-core-i9-11980hk/
|
| P.S. General rant: WTF Intel. I'm really glad there is a
| decoder ring but does it really have to be that hard? Is
| there really a need for 14 suffixes? For example, option T,
| power-optimized lifestyle. Is it really different from
| option U, mobile power efficient?
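The decoder-ring logic above fits in a few lines. The pattern below is inferred from Intel's published naming scheme (it's not an official grammar) and only handles two-digit generations, i.e. 10th gen and later:

```python
import re

def decode_core_name(model: str) -> dict:
    # e.g. "i7-11800H" -> family i7, generation 11, SKU 800, suffix H.
    # Only handles two-digit generations (10th gen and later).
    m = re.fullmatch(r"i([3579])-(\d{2})(\d{3})([A-Z]{1,2})", model)
    if m is None:
        raise ValueError("unrecognized model number: " + model)
    family, gen, sku, suffix = m.groups()
    return {"family": "i" + family, "generation": int(gen),
            "sku": sku, "suffix": suffix}

print(decode_core_name("i7-11800H"))
# {'family': 'i7', 'generation': 11, 'sku': '800', 'suffix': 'H'}
```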
| alpha64 wrote:
| You sorted by single core performance, then compared multi core
| performance. Sort by multi core performance, and you will see
| that the i9-11900K is nowhere near the top spot.
|
| For example, the Ryzen 9 5950X has single/multi core scores of
| 1,688/16,645 - which is higher in multi core score than the M1
| Max, but lower in the single core.
| 28933663 wrote:
| Perhaps they were referencing the highest 8C chip. Certainly,
| a 5950X is faster, but it also has double the number of cores
| (counting only performance cores on the M1; I don't know if the
| 2 efficiency cores do anything on the multi-core benchmark).
| Not to mention the power consumption differences - one is in
| a laptop and the other is a desktop CPU.
|
| Looking at a 1783/12693 on an 8-core CPU shows about a 10%
| scaling penalty from 1 to 8 cores - suppose a 32-core M1 came
| out for the Mac Pro that could scale only at 50% per core,
| that would still score over 28000, compared to the real-world
| top scorer, the 64-core 3990X scoring 25271.
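Working through the scaling arithmetic in the comment above (Geekbench 5 scores as quoted there; treating all 8 performance cores as equal is a simplification):

```python
single, multi, p_cores = 1783, 12693, 8

# Multi-core efficiency vs. perfect linear scaling across 8 P-cores:
efficiency = multi / (single * p_cores)
print("%.0f%%" % (efficiency * 100))  # 89%, i.e. roughly a 10% penalty

# Hypothetical 32-core part scaling at only 50% per core:
projected = single * 32 * 0.5
print(int(projected))  # 28528, still above the 3990X's 25271
```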
| eMSF wrote:
| M1 Max has 10 cores.
| andy_ppp wrote:
| But the two efficiency cores are less than half a main
| core though, right?
| hajile wrote:
| 1/3 the performance, but 1/10 the power. Not adding more
| was a mistake IMO. Maybe next time...
| andy_ppp wrote:
| Really? I mean if it gets me 10-14h coding on a single
| charge that's awesome...
| hajile wrote:
| The A15 efficiency cores will be in the next model. They
| are A76-level performance (flagship-level for Android
| from 2019-2020), but use only a tiny bit more power than
| the current efficiency cores.
|
| At that point, their E-cores will have something like 80%
| the performance of a Zen 1 core. Zen 1 might not be the
| new hotness, but lots of people are perfectly fine with
| their Threadripper 1950X which Apple could almost match
| with 16 E-cores and only around 8 watts of peak power.
|
| I suspect we'll see Apple joining ARM in three-tiered
| CPUs shortly. Adding a couple in-order cores just for
| tiny system processes that wake periodically, but don't
| actually do much just makes a ton of sense.
| thenthenthen wrote:
| Still 8 more than my desktop PC :p
| [deleted]
| lostmsu wrote:
| Which is still not that much higher. Of the "consumer" CPUs,
| only the 5900X and 5950X score higher, and their stress power
| draw is about 2X the M1 Max's speculated draw.
| tedunangst wrote:
| That's maybe not a bad way to sort? Most of the time I'm
| interacting with a computer I'm waiting for some single
| thread to respond, so I want to maximize that, then look over
| a column to see if it will be adequate for bulk compute tasks
| as well.
| GeekyBear wrote:
| Interestingly, the iPhone's A15 SOC did get a newer version
| of Apple's big core this year.
|
| >On an adjacent note, with a score of 7.28 in the integer
| suite, Apple's A15 P-core is on equal footing with AMD's
| Zen3-based Ryzen 5950X with a score of 7.29, and ahead of M1
| with a score of 6.66.
|
| https://www.anandtech.com/show/16983/the-apple-a15-soc-perfo...
|
| On floating point, it's slightly ahead. 10.15 for the A15 vs.
| 9.79 for the 5950X.
| DeathArrow wrote:
| What about this? https://browser.geekbench.com/v5/cpu/7421821
| andy_ppp wrote:
| That's a strangely shaped laptop, what is the battery like on
| it?
| mcphage wrote:
| It's actually compatible with a tremendous range of third
| party external batteries like so:
| https://www.amazon.com/dp/B004918MO2
|
| And forget about fast charging--you can charge this battery
| up from 0% to 100% in less than a minute just by pouring
| some gasoline in the thing!
|
| It's the very pinnacle of portability!
| erk__ wrote:
| I really wonder how a single z/Architecture core would fare on
| this benchmark, though I imagine it's never been ported
| LASR wrote:
| Probably not as good as you might expect. Z machines are
| built for enterprise features like RAS, and performance on
| specific workloads.
|
| The ultra-high-clocked IBM CPUs are probably significantly
| faster at DB loads, and less than the best at more general
| benchmarks like Geekbench.
| mrtksn wrote:
| The single-core score is second to Intel's best, but the
| multi-core score is well below the top of the scale, comparable
| to the Intel Xeon W-2191B or Intel Core i9-10920X, which are 18
| and 12 core beasts with TDPs of up to 165W.
|
| Which means, at least for Geekbench, Apple M1 Max has a power
| comparable to a very powerful desktop workstation. But if you
| need the absolute best of the best on multicore you can get
| double the performance with AMD Ryzen Threadripper 3990X at
| 280W TDP!
|
| Can you imagine if Apple released some beast with a similar
| TDP? A 300W Apple M1 Unleashed, the trashcan design
| re-imagined, with 10X the power of the M1 Max if it can
| preserve similar performance per watt. That would be 5X over
| the best of the best.
|
| If Apple made an iMac Pro with a similar TDP to the Intel one
| and kept the performance per watt, that would mean a multicore
| score of about 60K, which is twice that of the best processor
| in the x86 world.
|
| I suspect these scores don't tell the full story: the Apple
| SoC has specialised units for processing certain kinds of
| data, with direct access to the data in memory. As a result it
| could be unmatched by anything for some workloads, while being
| comically slow for other types of processes where x86 shines.
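The perf/watt projection above is easy to check as back-of-envelope arithmetic. The ~60W figure for the M1 Max CPU package is an assumption, as is linear scaling at constant performance per watt:

```python
m1_max_multi = 12693   # Geekbench 5 multi-core score from this thread
m1_max_watts = 60      # assumed CPU package power (rough estimate)
imac_pro_watts = 280   # assumed desktop-class power budget

# Linear scaling at constant perf/watt (an idealization; real chips
# lose efficiency as clocks rise):
projected = m1_max_multi * imac_pro_watts / m1_max_watts
print(round(projected))  # 59234, i.e. roughly 60K
```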
| moreira wrote:
| Interestingly, the M1 Max is only a 10-core chip (of which only 8
| are high performance). I wonder what it will look like when
| it's a 20-core, or even a 64-core like the Threadripper.
| Imagine a 64-core M1 on an iMac or Mac Pro.
|
| We're in for some fun times.
| spacedcowboy wrote:
| John Siracusa - no, the chart isn't real, but maybe qualify
| that with "yet"...
|
| https://twitter.com/siracusa/status/1450202454067400711
| kzrdude wrote:
| Hm, related to that reply:
| https://twitter.com/lukeburrage/status/1450216654202343425
|
| Is this a yield trick, where one is the "chopped" part of the
| other? So they'll bin failed M1 Max chips as M1 Pros, if
| possible?
| GeekyBear wrote:
| Bloomberg's Gurman certainly has shown that he has reliable
| sources inside Apple over the years.
|
| >Codenamed Jade 2C-Die and Jade 4C-Die, a redesigned Mac
| Pro is planned to come in 20 or 40 computing core
| variations, made up of 16 high-performance or 32 high-
| performance cores and four or eight high-efficiency cores.
| The chips would also include either 64 core or 128 core
| options for graphics.
|
| https://www.macrumors.com/2021/05/18/bloomberg-mac-pro-32-hi...
|
| So right in line with the notion of the Mac Pro getting an
| SOC that has the resources of either 2 or 4 M1 Pros glued
| together.
| sudhirj wrote:
| John Siracusa had a diagram linked here that shows the die
| for M1 Max, and says the ultimate desktop version is
| basically 4 M1 Max packages. If true, that's a 40 core CPU
| 128 core GPU beast, and then we can compare to the desktop
| 280W Ryzens.
| lvl100 wrote:
| I was dead set on getting a new Mac but I think I will opt for
| Alder Lake. It appears Apple Silicon for desktop will be pretty
| much constrained to mobile designs and limitations. Perhaps I
| will revisit if they decide to release M1 Max inside a Mac Mini
| but I highly doubt that will happen.
| jccalhoun wrote:
| Is Geekbench seen as a valid benchmark? The PC hardware sites
| and YouTube channels I frequent don't seem to mention it. The
| only time I see it is for Mac stuff.
| 2OEH8eoCRo0 wrote:
| Not really. It's nice to run the same benchmark on all
| platforms but most hardware sites run a game demo that's easily
| repeatable and take an fps reading or run Cinebench which
| renders a scene.
| LeoPanthera wrote:
| It is one of the few cross-platform benchmarks that can be run
| on both PCs and Macs, as well as iOS, Android, and Linux.
| parhamn wrote:
| Anyone know how this compares to the M1 Pro? Curious if it's
| worth the additional $400. The search doesn't seem to allow exact
| matches.
| SegOnMyFault wrote:
| Should be the same, as the difference between the Max and the
| Pro lies in the GPU core count.
| ivalm wrote:
| Memory bandwidth as well.
| meepmorp wrote:
| No, the Max has twice the memory bandwidth as well - 400GB/s
| vs 200GB/s - which will have a big impact on anything memory
| bound.
| danieldk wrote:
| We have to see how much it matters in practice, since the
| M1 Max has roughly the same single threaded score as the
| M1.
| InvaderFizz wrote:
| How much and on what is the question. I'm not in the market
| for a laptop, so I can wait and see.
|
| That said, if Apple releases a MacMini M1 Max, I'll
| probably buy it.
| meepmorp wrote:
| Yeah, I would've already ordered a MaxMini if it existed.
| Maybe next year.
| czbond wrote:
| https://browser.geekbench.com/v5/cpu/10479712
| neogodless wrote:
| Overall only 5-15% increase.
|
| https://browser.geekbench.com/v5/cpu/compare/10496766?baseli...
| metahost wrote:
| This is the comparison to a mid-2015 15" MacBook Pro for anyone
| who is curious:
| https://browser.geekbench.com/v5/cpu/compare/10513492?baseli...
|
| Summary: single core performance gain is 2x whereas multi-core
| performance gain is 4x.
| neogodless wrote:
| Ha I couldn't stand the order that was, so here's the reverse:
|
| https://browser.geekbench.com/v5/cpu/compare/10496766?baseli...
|
| 3.1% increase in single core, and 69.4% increase in multi-core.
| karmelapple wrote:
| Sorry, I think you meant to reply to this post with it:
| https://news.ycombinator.com/item?id=28935095
| hazeii wrote:
| Over 6 years, 2x the performance in single-core and 4x (for
| 2.5x the cores, depending how you count) seems surprisingly low
| compared to stuff I've read (not experienced yet) with the M1.
| klelatti wrote:
| Well the 2019 16 inch i9 MacBook Pro scores are 1088 and 6821
| so you can see the massive uplift over just two years.
| r00fus wrote:
| The vast majority of that 6 years is very gradual or minimal
| improvements. Then M1.
| jermaustin1 wrote:
| I think a lot of the perceivable speedup in M1 Macs is less
| to do with these benchmarks and more to do with optimizing
| the OS and Apple software for the M1 and the various
| accelerators on the SoC (video encoding and ML).
| klelatti wrote:
| Don't really understand this argument - Apple has had 13
| years to optimise macOS for Intel and only a couple of
| years for the M1, and I can't see how a video encoding
| accelerator affects responsiveness.
| mhh__ wrote:
| Also remember that a lot of apple customers stick with
| absolutely ancient laptops and are then amazed when a
| modern one is much faster e.g. I had to explain what an
| NVME drive is to someone who is a great developer but just
| not a hardware guy.
| larrik wrote:
| I think you are on the right track, but I think the
| performance gain really comes from the RAM being on the
| package itself, which raw number crunching won't make use of
| (but actual usage will make great use of).
| destitude wrote:
| I've used both an M1 MacBook Air for work and a 2019 16" with
| 32GB of RAM, and the M1 MacBook Air feels as fast if not
| faster than the 16"..
| giantrobot wrote:
| My M1 MBA is measurably faster than the i9 MBP it replaced
| in several purely software tasks. At the very least it
| performs _on par_ with the i9.
|
| There's plenty of ARM-specific optimization in macOS that
| gives boosts to M1s over Intel but the chips are just
| faster for many tasks than Intel.
| spfzero wrote:
| Since Apple has been relying on Intel processors for many
| years now, I'd bet they've spent more time optimizing
| MacOS for Intel at this point. On the other hand, iOS has
| given them a quick way to transfer optimization
| knowledge.
| boopmaster wrote:
| This is the first bench I believe might be real. I saw 11542
| multi-core spammed across news sites a couple days back, but it
| didn't align with a 1.7x boost to performance (which this result
| actually does). The single-core score was 1749, which also
| didn't make sense, as I'd imagine there's maybe a tiny bit more
| TDP per core in the MBP 14 and 16 than the OG 13" M1 MBP. That
| and it was on macOS 12.4... which is maybe nonexistent and thus
| fake? This score here is believable and incredible.
| icosahedron wrote:
| I'm not exactly proficient with GeekBenchery, but what I see here
| is that the M1 Max per core barely outperforms the M1?
|
| https://browser.geekbench.com/v5/cpu/compare/10496766?baseli...
| GeekyBear wrote:
| Tick - New cores; Tock - Scaling up the number of those same
| cores
|
| I think most of the work went into the uncore portions of the
| SOC this time.
| sydthrowaway wrote:
| uncore?
| als0 wrote:
| Parts of the SoC that are not the main CPUs e.g. power
| management controllers, display controllers, etc.
| als0 wrote:
| FYI it's the reverse. "Tock" is a new microarchitecture, and
| "tick" is a process shrink.
|
| https://en.wikipedia.org/wiki/Tick-tock_model
| hbOY5ENiZloUfnZ wrote:
| The larger part of the upgrade comes from the GPU rather than
| the CPU.
| bichiliad wrote:
| I think this kinda makes sense to me -- the M1 Max has the same
| cores as the M1, just more of them and more of the performant
| ones, if I understand it right. Performance on the fastest
| core, when only a single core is working, is probably very
| similar.
| klelatti wrote:
| Maybe I'm a little surprised - presumably the thermal
| constraints on a 16 inch laptop are less limiting than on a
| 13 inch one, so the single core could be pushed to a higher
| frequency?
| tedunangst wrote:
| For how long? Longer than it takes to run the benchmark?
| floatingatoll wrote:
| Does the core speed of an M1 core change at all? I thought
| they used the low/high power cores at fixed-limit clock
| speeds.
|
| It sounds crazy to consider, but maybe they'd rather not
| try to speed up the individual cores in M1 Max, so that
| they can keep their overhead competitively low. That
| certainly would simplify manufacturing and QA; removing an
| entire vector (clock speed) from the binning process makes
| pipelines and platforms easier to maintain.
| hajile wrote:
| M1 uses TSMC high-density rather than high-performance.
| They get 40-60% better transistor density and less leakage
| (power consumption) at the expense of lower clockspeeds.
|
| Also, a core is not necessarily just limited by power.
| There are often other considerations like pipeline length
| that affect final target clocks.
|
| The fact is that at 3.2GHz, the M1 is very close to a 5800X
| in single-core performance. When that 5800X cranks up 8
| cores, it dramatically slows down the clocks. Meanwhile the
| M1 should keep its max clockspeeds without any issue.
|
| We know this because you can keep the 8 core M1 at max
| clocks for TEN MINUTES on passive cooling in the MacBook Air
| (you can keep max clocks indefinitely if you apply a
| little thermal pad on the inside of the case).
| klelatti wrote:
| Thanks - some very good points. Presumably this opens the
| possibility of higher single core performance on a future
| desktop design unless limited by pipeline length etc?
| icosahedron wrote:
| I thought I remembered that in the presentation they had
| souped up the individual cores too. Must be I'm
| misremembering.
| sliken wrote:
| They didn't. However the cores enjoy more main memory
| bandwidth.
| andy_ppp wrote:
| The A15 chip has core improvements. I suspect this is what
| we'll see from now on: 15-20% performance increases yearly for
| the next few years, assuming no issues with TSMC...
| jmartrican wrote:
| Anyone know the score for Intel 12th gen Alder Lake?
| mhh__ wrote:
| I can't check for you due to the HN hug of death, but the
| 12900K was clocking in at about 1900 ST IIRC
|
| Edit: Leaving this up so I can be corrected, don't think I have
| the right figure.
| gigatexal wrote:
| If true I'd be interested to see what that is in points per
| watt.
| mhh__ wrote:
| The points per watt is probably going to be crap but
| equally I don't care all that much.
|
| One thing as well is that there are always headlines
| complaining about power usage, but the figures are nearly
| always from _extreme_ stress tests which basically fully
| saturate the execution units.
|
| Geekbench is slightly different to those stresses so not
| sure.
| gigatexal wrote:
| Sure but I'm only asking for Points per watt to use as a
| baseline for testing Apple's claims.
| mhh__ wrote:
| Peak wattage is apparently 330W on a super heavy test,
| not really sure how to extrapolate that to geekbench.
| jeffbee wrote:
| Wow, there's a power virus for Alder Lake _client_ that
| can make it draw 330W? Reference?
| gigatexal wrote:
| Bitcoin? ;)
| marricks wrote:
| 1834 ST / 17370 MT [1]
|
| But that's also for something which has a TDP of 125W [2],
| unsure if that's the right number for a mobile chip? Also no
| clue what M1 Max's TDP is either.
|
| [1] https://browser.geekbench.com/v5/cpu/9510991
|
| [2] https://cpu-benchmark.org/cpu/intel-core-i9-12900k/
| runeks wrote:
| The Intel CPU is pretty underwhelming. It has twice the
| number of high performance cores (16 vs 8) but is only 37%
| faster for multi-core tasks.
| colinmhayes wrote:
| I guess that's what happens when Apple's process is a
| generation ahead of Intel's.
| ribit wrote:
| M1 Max TDP will be around 30-40 watt for the CPU cluster and
| 50-60 watt for the GPU cluster. Note that unlike x86 CPUs
| (which can draw far more than their TDP for brief periods of
| time), this is maximal power usage of the chip.
|
| M1 needs about 5W to reach those single-core scores; Tiger
| Lake needs 20W.
| websap wrote:
| Looks like Geekbench just got the HN bug of death.
| tambourine_man wrote:
| Site is under heavy load:
|
| 1783 Single-Core Score
|
| 12693 Multi-Core Score
| YetAnotherNick wrote:
| For comparison, M1 results:
|
| 1705 Single core, 7382 multi core
| M4R5H4LL wrote:
| For comparison, Mac Pro (Late 2019) 16-core: 1104 Single-Core
| Score, 15078 Multi-Core Score
| starfallg wrote:
| Pretty underwhelming from a pure performance standpoint after all
| the hype from the launch.
|
| https://browser.geekbench.com/v5/cpu/singlecore
| petersellers wrote:
| Pretty impressive, all things considered. Looks like it's roughly
| on par with an AMD 5800X, which is a Desktop CPU with 8C/16T and
| a 105W TDP.
| [deleted]
| mhh__ wrote:
| Impressive but unbelievably expensive considering what will
| actually be run on it.
|
| I kind of want one, but the 5800x for example is about 10x less
| expensive than a specced out MacBook Pro
| jchw wrote:
| People are probably going to be reading your comparison as an
| objective comparison rather than an opportunistic one. For
| example, if you are choosing between upgrading AM4 processors
| on your desktop versus buying a new Macbook Pro, then of
| course it makes sense to compare the cost in those terms.
| However, the price of the M1 chip itself is probably not
| that bad. Since you can't meaningfully buy it on its own, I
| guess there is no fair comparison to make here anyway.
| mhh__ wrote:
| The M1 chip is actually probably extremely expensive. The
| top of the line one is literally 60 BILLION transistors!
|
| My current machine has like 10 billion (GPU + CPU)
| jchw wrote:
| 60 billion is obviously a metric ton, but 10 billion is
| not that ridiculous for an SoC to clear; there are
| snapdragons at higher transistor counts. The AMD 5950X
| clears 19 billion, and it is just a CPU with no GPU or
| integrated RAM. I've got to guess the M1 transistor count
| is inflated a fair bit by RAM.
|
| I suppose it's not likely we'll know the actual price of
| the M1, but it would suffice to say it's probably a fair
| bit less than the full laptop.
| smiley1437 wrote:
| Not sure if these numbers can be believed, but apparently a
| 300mm wafer at the 5nm node costs about $17000 for TSMC
| to process
|
| https://www.tomshardware.com/news/tsmcs-wafer-prices-
| reveale...
|
| Since the M1 is 120 mm2 and a 300mm wafer is about 70695
| mm2, you could theoretically fit 589 M1 chips on a wafer.
|
| Subtract the chips lost at the edge and other flaws, you
| might be able to get 500 M1 off a wafer? (I know nothing
| about what would be a reasonable yield but I'm pretty
| sure a chip won't work if part of it is missing)
|
| Anyways, that would be $17000/500, or $34 per M1 chip -
| based just on area and ignoring other processing costs.
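The back-of-the-envelope math above can be sketched in a few lines (the $17,000 wafer price, 120 mm^2 die size, and the 500-usable-dies guess are all the commenter's figures, not official numbers):

```python
import math

# Rough cost-per-die estimate from the figures in the comment above.
wafer_cost = 17_000      # USD, reported TSMC 5nm wafer price
wafer_diameter = 300     # mm
die_area = 120           # mm^2, plain M1

wafer_area = math.pi * (wafer_diameter / 2) ** 2  # ~70,686 mm^2
gross_dies = wafer_area // die_area               # naive area ratio, ignores edge loss
usable_dies = 500                                 # commenter's guess after edge/defect losses

print(f"gross dies (naive): {gross_dies:.0f}")                  # 589
print(f"cost per usable die: ${wafer_cost / usable_dies:.0f}")  # $34
```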
| hajile wrote:
| The M1 Max is slightly off-square and 432mm^2. 19.1 x
| 22.6 seems like a decent fit.
|
| TSMC has stated that N7 defect density was 0.09 a year
| ago or so. They have also since stated that N5 defect
| density was lower than N7.
|
| Let's plug that in here https://caly-
| technologies.com/die-yield-calculator/
|
| If we go with a defect density of 0.07, that's 94 good
| dies and 32 harvest dies. At 0.08, it's 90 and 36
| respectively.
|
| If we put that at 120 dies per wafer and $17,000 per
| wafer, that's just $141 per chip. That's probably WAAAYY
| less than they are paying for the i9 chips in their 2019
| Macbook Pros.
|
| For comparison, AMD's 6900 GPU die is 519mm^2 and
| Nvidia's 3080 is 628mm^2. Both are massively more
| expensive to produce.
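The yield figures above can be roughly reproduced with the standard Poisson yield model Y = exp(-D*A). The gross die count and wafer price are taken from the comment; the Poisson model is a simplification of whatever the linked calculator uses:

```python
import math

# Poisson yield sketch for the M1 Max numbers quoted above.
wafer_cost = 17_000    # USD per 5nm wafer (reported figure)
die_area_cm2 = 4.32    # ~19.1 mm x 22.6 mm = 432 mm^2
defect_density = 0.07  # defects/cm^2 (TSMC N5 assumed better than N7's 0.09)
gross_dies = 126       # fitted dies per 300mm wafer, per the die-yield calculator

yield_fraction = math.exp(-defect_density * die_area_cm2)  # Poisson model
good_dies = gross_dies * yield_fraction
print(f"die yield: {yield_fraction:.1%}")         # 73.9%
print(f"good dies per wafer: {good_dies:.0f}")    # ~93 (thread says 94 + 32 harvestable)

# If partially defective "harvest" dies are also sold, ~120 sellable dies:
print(f"cost per sellable die: ${wafer_cost / 120:.0f}")  # ~$142
```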
| ribit wrote:
| Workstation laptops are expensive. These new Macs are priced
| quite competitively. E.g. the 14" with the full M1 Pro is
| $2.5k and is faster, more portable and has a much better
| display than a 2.7k Dell Precision 5560...
| TheBigSalad wrote:
| Aren't Dells usually pricer than retail because they
| include some kind of service plan?
| mhh__ wrote:
| Very true, although I note that I wouldn't actually be able
| to work on one for the day job because of Windows.
|
| Still might buy one, I want an arm box to test my work on
| compilers on.
| ribit wrote:
| For me it's going to be a massive improvement. Already
| the base M1 builds software faster than my Intel i9...
| and given that these new chips have 400GB/s bandwidth it
| will be a ridiculous improvement for my R data analysis
| code...
| megablast wrote:
| Are you comparing the price of a chip to a laptop??
| LegitShady wrote:
| can't really get it any other way.
|
| The issue to me is that because apple is the only one who
| can put these in computers this will have no real effect on
| PC component pricing. Apple makes, apple puts in apple
| computers, can't run windows, if you're in another
| ecosystem it's not even a real option.
| mhh__ wrote:
| Yes. If I bought one of these it would never actually move,
| so I don't mind the comparison all that much for my personal
| use case. Obviously it isn't apples to apples, but my point
| is that this performance is not free.
| whynotminot wrote:
| What is it with the M1 series of chips that causes people
| to bring completely disingenuous comparisons to the fray
| with a straight face.
| vinculuss wrote:
| A boxed CPU is an odd comparison without considering
| motherboard, ram, GPU, cooler, SSD, display in that
| calculation as well.
| hwita wrote:
| Then perhaps you may want to wait for Apple to release a
| desktop-class processor to make the comparison, perhaps
| early next year?
| yazaddaruvala wrote:
| It's not clear to me that Apple will make a desktop-class
| processor. The unit economics likely don't make sense for
| them.
|
| All of Apple's innovation seems to be towards better and
| cheaper AR/VR hardware. Desktop-class processors would be
| a distraction for them.
|
| And with all of the top cloud players building custom
| silicon these days, there is little room for Apple to
| sell CPUs to the server market even if they were inclined
| (which they are not).
|
| The only strategic Apple vertical that might align with
| desktop-class CPUs is the Apple Car initiative and
| specifically self-driving. Dedicated audio/image/video
| processing and inference focused hardware could better
| empower creatives for things like VFX or post-processing
| or media in general. However, it's not clear to me that that
| is enough of a market for Apple's unit economics compared
| with more iDevice / MacBook sales.
| hajile wrote:
| At worst, wait until they release these in the iMac Pro
| and maybe even the Mac Mini. Both easily have the
| headroom for these chips.
| xbar wrote:
| Can you explain your unit economic analysis?
|
| They have made a Mac Pro desktop for several decades. I
| am trying to follow your reasoning for Apple to sunset
| that category of workstation as a result of transitioning
| to Apple silicon, but it is not working out for me.
|
| My logic leads to a cheaper-to-produce-than-Macbook
| workstation in an updated Mac Pro chassis with best-
| binned M1X Max parts in the Spring followed by its first
| chiplet-based Apple silicon workstation using the same
| chassis in the Fall, followed by an annual iteration on
| Apple silicon across its product line, on about the same
| cadence as A/A-x iOS devices.
|
| Part of my reasoning is based on the assumption that Mac
| Pro sales generate Pro Display XDR sales at a higher rate
| than MacBook Pro sales do. I think the total
| profit baked into an Apple silicon Mac Pro + Pro XDR is
| big and fills a niche not filled by any thing else in the
| market. Why leave it unfilled?
| Closi wrote:
| IMO it's obvious that there will need to be a desktop
| version - and all the rumours are pointing towards a Mac
| Pro release with silly specs - i.e. an SOC with something
| like 40/64+ cores. Why would Apple want to give up their
| portion of the high-power desktop market to Windows?
|
| What's the alternative? That they release another Mac Pro
| with intel, despite their stated intention to move
| everything away from x86, or that they release a Mac Pro
| with just a laptop chip inside?
|
| Let's remember that Apple has an annual R&D budget of
| c$20 billion, so it won't be totally shocking if they
| diverted a small fraction of that to build out a desktop
| processor.
| xbar wrote:
| Quite right.
| lostmsu wrote:
| Perhaps. AMD should also release 5nm processors by then
| too.
| neogodless wrote:
| The 10-core (8 performance) M1 Max starts around $2700 in the
| 14" form factor.
|
| It's hard to compare to laptops, but since we started the
| desktop comparison, a Ryzen 7 5800X is $450 MSRP, or about
| $370 current street price. Motherboards can be found for
| $100, but you'll more likely spend $220-250 for a good match.
| 32GB RAM is $150, 1TB Samsung 980 Pro is a bit under $200.
| Let's assume desktop RTX 3060 for the graphics (which is
| probably slightly more powerful than the 32-core GPU M1 Max)
| for MSRP $330 but street price over $700.
|
| So we're at about $1670 for the components, before adding in
| case ($100), power supply ($100) and other things a laptop
| includes (screen, keyboard...).
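Summing the street prices quoted above confirms the ~$1670 figure (all numbers are the commenter's late-2021 estimates, not current prices):

```python
# Component tally for the comparable desktop build described above.
parts = {
    "Ryzen 7 5800X (street)": 370,
    "motherboard (good match)": 250,
    "32GB RAM": 150,
    "1TB Samsung 980 Pro": 200,
    "RTX 3060 (street)": 700,
}
core_total = sum(parts.values())
print(f"components: ${core_total}")                   # $1670
print(f"with case + PSU: ${core_total + 100 + 100}")  # $1870, still no display/keyboard
```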
| jshier wrote:
| M1 Pro has the same CPU cores as the M1 Max, just half the
| GPU cores (and video coding cores). So you can get the same
| CPU performance in the 14" for as little as $2499.
| Thaxll wrote:
| 5800x seems 20% faster for single core:
| https://browser.geekbench.com/v5/cpu/5874365 so definitely
| not on par.
| tbob22 wrote:
| That's certainly overclocked or very high PBO offsets, at
| stock the 5800x gets around 1700-1800 ST.
| YetAnotherNick wrote:
| This is the official score:
| https://browser.geekbench.com/processors/amd-ryzen-7-5800x
| hajile wrote:
| But if you run TWO single-threaded workloads at once, the
| clocks dial WAY back. Meanwhile, the M1 can keep all cores at
| 3.2GHz pretty much indefinitely.
| munro wrote:
| Here's the link to the MacBookPro18,2 OpenCL benchmark:
|
| * M1 Max OpenCL https://browser.geekbench.com/v5/compute/3551790
| [60,167 OpenCL Score]
|
| Comparing to my current MacBook Pro (16-inch Late 2019) & my
| Hetzner AX101 server:
|
| * MacBook Pro (16-inch Late 2019) vs M1 Max - CPU
| https://browser.geekbench.com/v5/cpu/compare/10496766?baseli...
| [single 163.6%, multi 188.8%]
|
| * MacBook Pro (16-inch Late 2019) vs M1 Max - OpenCL
| https://browser.geekbench.com/v5/compute/compare/3551790?bas...
| [180.8%]
|
| * Hetzner AX101 vs M1 Max - CPU
| https://browser.geekbench.com/v5/cpu/compare/10496766?baseli...
| [single 105.0%, multi 86.4%]
|
| * NVIDIA GeForce RTX 2060 vs M1 Max - OpenCL
| https://browser.geekbench.com/v5/compute/compare/3551790?bas...
| [80.7%]
|
| * NVIDIA GeForce RTX 3090 vs M1 Max - OpenCL
| https://browser.geekbench.com/v5/compute/compare/3551790?bas...
| [29.0%, boo]
|
| I'm surprised the MacBook holds its own against the Hetzner
| AX101's AMD Ryzen 9 5950X 16-core CPU! The multi-core SQLite
| performance surprises me, I would think the M1 Max's NVMe is
| faster than my server's SAMSUNG MZQL23T8HCLS-00A07.
| keymone wrote:
| > * NVIDIA GeForce RTX 3090 vs M1 Max - OpenCL
| https://browser.geekbench.com/v5/compute/compare/3551790?bas...
| [29.0%]
|
| 30% performance for ~15% of power use in laptop form factor?
| that's not boo, that seems like a clear win for apple.
| muro wrote:
| Performance doesn't go down linearly with power. I don't have
| that card to try, but maybe it would do even better at 15% of
| power.
| keymone wrote:
| i doubt it'll even initialize.
| r00fus wrote:
| There is a floor below which it will go straight to 0 (ie,
| nonfunctional).
|
| The M1 Max is well below that floor at max wattage.
| klelatti wrote:
| Thanks - interesting on OpenCL - presumably running on GPU? All
| that memory opens up some interesting possibilities.
|
| Also I thought Apple was deprecating OpenCL in favour of Metal?
| 58028641 wrote:
| OpenCL and OpenGL have been deprecated in favor of Metal.
| Geekbench also has a Metal compute benchmark.
| bredren wrote:
| Is there a metal benchmark? This is the score I've been
| most interested in.
| munro wrote:
| Nay, there isn't one for the new M1 Max yet, but FWIW the
| two are pretty comparable; OpenCL is a bit faster than Metal.
|
| * MacBook Pro (16-inch Late 2019) - Metal
| https://browser.geekbench.com/v5/compute/3139776 [31,937
| metal score] * MacBook Pro (16-inch Late 2019) - OpenCL
| https://browser.geekbench.com/v5/compute/3139756 [33,280
| OpenCL score]
| m15i wrote:
| Anyone know if that "Device Memory 42.7 GB" is fixed or can be
| increased?
| munro wrote:
| The MacBook Pro M1 Max can be ordered with either 32 GB or 64
| GB of unified memory. The geekbench report shows 64 GB of
| memory (maxed), not sure why only 42.7 GB is usable by OpenCL
| --so I guess we have to assume that's the max, unless there's
| some software fix to get it up to 64 GB.
| marcodiego wrote:
| I wonder what Asahi Linux devs can show us.
| [deleted]
| raylad wrote:
| Compared to the late 2020 Macbook Air:
|
| https://browser.geekbench.com/v5/cpu/compare/10508178?baseli...
| LeoPanthera wrote:
| This confirms that single-core performance is essentially the
| same as the basic M1.
| jkeddo wrote:
| One big thing to consider is that this is just the _first_
| M1 Max Geekbench score, compared against the world's best
| 11900Ks. Most 11900Ks are nowhere near the levels of the top
| performing one.
|
| Once Apple starts shipping M1 Max in volume, and as TSMC yields
| get better, you will see "golden chips" slowly start to score
| even higher than this one.
| cauk wrote:
| Golden chips?
| spacedcowboy wrote:
| In every manufacturing process there are always chips that
| perform better than others. They all reach the "bin" that
| they're designed for, but the _actual_ design is for higher
| than that, so that even imperfect chips can perform at the
| required level.
|
| The corollary of that is that there are some chips that
| perform better than the design parameters would have you
| expect, they're easier to overclock and get higher speeds
| from. These are the "golden" chips.
|
| Having said that, it's not clear to me that the M1* will do
| that, I don't know if Apple self-tune the chips on boot to
| extract the best performance, or they just slap in a standard
| clock-rate and anything that can meet it, does. I'd expect
| the latter, tbh. It's a lot easier, and it means there's less
| variation between devices which has lots of knock-on benefits
| to QA and the company in general, even if it means users
| can't overclock.
| trenchgun wrote:
| There are slight variations in materials, processes, etc.
|
| End result: some chips just end up being better than others.
| m15i wrote:
| How will these chips do for training neural nets? The 64 GB RAM
| would be awesome for larger models, so I'm willing to sacrifice
| some training speed.
| cschmid wrote:
| I know this doesn't answer your question, but can I just ask
| (out of curiosity) why you're training ML models on a laptop?
| m15i wrote:
| It's all about the RAM. 64GB would allow input of larger
| image sizes and/or nets with more parameters. Right now, the
| consumer card with the most RAM is the rtx 3090 which is only
| 24GB, and in my opinion overpriced and inefficient in terms
| of wattage (~350W). Even the ~$6000 RTX A6000 cards are only
| 48GB.
| cschmid wrote:
| I don't think replacing a workstation with a Macbook
| because of RAM makes too much sense: If running one
| minibatch of your model already takes up all the memory you
| have, where would the rest of your training data sit? In
| the M1, you don't have a separate main memory.
|
| Also, software support for accelerated training on Apple
| hardware is extremely limited: Out of the main frameworks,
| only tensorflow seems to target it, and even there, the
| issues you'll face won't be high on the priority list.
|
| I know that nvidia GPUs are very expensive, but if you're
| really serious about training a large model, the only
| alternative would be paying rent to Google.
| bhouston wrote:
| Where is the graphics test? The M1 Max versus an NVIDIA 3080 or
| similar?
| alfredxing wrote:
| In the keynote Apple said the M1 Max should be comparable to
| the performance of an RTX 3080 Laptop (the footnote on the
| graph specified the comparison was against an MSI GE76 Raider
| 11UH-053), which is still quite a bit below the desktop 3080.
| akmarinov wrote:
| No way it can get anywhere near 3080
| mhh__ wrote:
| I'm waiting to be corrected by someone who knows GPU
| architecture better than me but as far as I can tell the
| _synthetic_ benchmarks can trade blows with a 3070 or 80
| (mobile), but the actual gaming performance isn't going to
| be as rosy.
|
| Also recall that very few games needing that performance
| actually work on MacOS
| smoldesu wrote:
| The gaming performance will be CPU-bottlenecked. Without
| proper Wine/DXVK support, they have to settle for interpreted
| HLE or dynamic recompilation, neither of which are very
| feasible on modern CPUs, much less ARM chips.
| schleck8 wrote:
| Does someone know how much VRAM the M1X has? Because I bet
| it's far less than a 3070 or 3080.
| mhh__ wrote:
| The memory is unified, and very high bandwidth. No idea
| what that means in practice, guess we'll find out.
| pornel wrote:
| It's very high bandwidth for a CPU, but not that great
| for a GPU (400GB/s vs 440GB/s in 3070 and 980GB/s in
| 3090).
| artificialLimbs wrote:
| >> ...not that great for a GPU...
|
| ...one that almost equals the highest laptop GPU available.
| coayer wrote:
| But it's also a premium product, so it matching a 3070m
| isn't really above what you'd expect for the cost (but
| efficiency is another story)
| oneplane wrote:
| On the other hand, it's zero-copy between CPU and GPU.
| minhazm wrote:
| It's not quite apples to apples. The 3070 only has 8GB of
| memory available, whereas the M1 Max has up to 64 GB
| available. It's also unified memory in the M1 and doesn't
| require a copy between CPU & GPU. Some stuff will be
| better for the M1 Max and some stuff will be worse.
| nottorp wrote:
| Of course, with a 3070 and up you have to play with
| headphones so you don't hear the fan noise.
|
| This is the best feature of the new Apple CPUs if you ask
| me: silence.
|
| Now to wait for a decent desktop...
| ribit wrote:
| Up to 64GB...
| jbverschoor wrote:
| There's no vram. It's unified / shared memory. There's no
| M1X. There's M1 Pro and M1 Max
| akaij wrote:
| I think we can safely shorten them to M1P and M1X.
| amne wrote:
| the memory is unified so whatever ram is on there
| (16,32,64) can be allocated as vram.
|
| That's why during the presentation they bragged about how
| certain demanding 3d scenes can now be rendered on a
| notebook.
| EugeneOZ wrote:
| I still didn't get their example about the 100GB
| spaceship model - max RAM supported is 64GB...
| f0rmatfunction wrote:
| M1 Pro & Max (and plain M1 too for what it's worth) have
| unified memory across both CPU and GPU. So depending on the
| model it'd be up to 32gb or 64gb (not accounting for the
| amount being used by the CPU). Put differently - far more
| than 3070 and 3080.
| labby5 wrote:
| It's really hard to compare Apple and Nvidia, but a bit
| easier to compare Apple to AMD. My best guess is performance
| will be similar to a 6700xt. Of course, none of this really
| matters for gaming if studios don't support the Mac.
| neogodless wrote:
| The mobile RTX 3080 limited to 105W is comparable to about
| an RX 6700M, which is well behind the desktop RX 6700XT.
| EricE wrote:
| "Also recall that very few games needing that performance
| actually work on MacOS"
|
| But many Windows games do run under Crossover (a commercial
| wrap of WINE - well worth the measly licensing fee for
| seamless ease of use to me) or the Windows 10 ARM beta in
| Parallels. I got so many games to run on my M1 MacBook Air I
| ended up returning it to wait for the next round that could
| take more RAM. I'm very, very happy I waited for these and I
| fully expect it will replace my Windows gaming machine too.
| ribit wrote:
| Well, Apple G13 series are excellent rasterizers. I'd expect
| them do very well in games, especially with that massive
| bandwidth and humongous caches. The problem is that not many
| games run on macOS. But if you are only interested in games
| with solid Mac support, they will perform very well
| (especially if it's a native client like Baldur's Gate 3).
| Thaxll wrote:
| "very well" 45 fps top at 1080p medium settings
| https://www.youtube.com/watch?v=hROxRQvO-gQ
|
| You take a 3 years old card you get 2x more fps.
| ribit wrote:
| Which other passively cooled laptop can do it? And what 3
| year old card are you comparing it to? Hopefully
| something with 20W or lower power consumption.
|
| 45fps at medium Full HD is not far off a 1650 Max q
| Thaxll wrote:
| Apple compares itself to a 3080m; the perf from an M1 is
| not even close to a 3-year-old card. I don't care if it
| takes 10W if I can't even play at 60fps on recent-ish games.
| mhh__ wrote:
| What is the most intensive game you could actually run?
|
| I was going to say Fortnite but I'm guessing that's not the
| case anymore
| ribit wrote:
| Baldur's Gate 3, Metro Last Light, Total War...
| tromp wrote:
| Seems kind of unfair with the NVIDIA using up to 320W of power
| and having nearly twice the memory bandwidth. But if it runs
| even half as well as a 3080, that would represent amazing
| performance per Watt.
| throwawaywindev wrote:
| I believe they compared it to a ~100W mobile RTX 3080, not a
| desktop one. And the mobile part can go up to ~160W on gaming
| laptops like Legion 7 that have better cooling than the MSI
| one they compared to.
|
| They have a huge advantage in performance/watt but not in raw
| performance. And I wonder how much of that advantage is
| architecture vs. manufacturing process node.
| moron4hire wrote:
| I am very confused by these claims on M1's GPU performance.
| I build a WebXR app at work that runs at 120hz on the Quest
| 2, 90hz on my Pixel 5, and 90hz on my Window 10 desktop
| with an RTX 2080 with the Samsung Odyssey+ _and_ a 4K
| display at the same time. And these are just the native
| refresh rates; you can't run any faster with the way VR
| rendering is done in the browser. But on my M1 Mac Mini, I
| get 20Hz on a single 4K screen.
|
| My app doesn't do a lot. It displays high resolution
| photospheres, performs some teleconferencing, and renders
| spatialized audio. And like I said, it screams on
| Snapdragon 865-class hardware.
| astlouis44 wrote:
| What sort of WebXR app? Game or productivity app?
| moron4hire wrote:
| Productivity. It's a social VR experience for teaching
| foreign language. It's part of our existing class
| structure, so there isn't really much to do if you aren't
| scheduled to meet with a teacher.
| [deleted]
| wtallis wrote:
| The MSI laptop in question lets the GPU use up to 165W. See
| eg. AnandTech's review of that MSI laptop, which measured
| 290W at the wall while gaming:
| https://www.anandtech.com/show/16928/the-msi-ge76-raider-
| rev... (IIRC, it originally shipped with a 155W limit for
| the GPU, but that got bumped up by a firmware update.)
| fnordsensei wrote:
| The performance right now is interesting, but the
| performance trajectory as they evolve their GPUs over the
| coming generations will be even more interesting to follow.
|
| Who knows, maybe they'll evolve solutions that will
| challenge desktop GPUs, as they have done with the CPUs.
| moron4hire wrote:
| A "100W mobile RTX 3080" is basically not using the GPU at
| all. At that power draw, you can't do anything meaningful.
| So I guess the takeaway is "if you starve a dedicated GPU,
| then the M1 Max gets within 90%!"
| wilde wrote:
| Apple invited the comparison during their keynote. ;)
| Thaxll wrote:
| It's probably bad; the M1 could not get 60fps in WoW.
| I'd take Apple's comparisons with a grain of salt, because
| the M1 is not able to run any modern game at decent fps.
| williamtwild wrote:
| "the M1 is not able to run any modern game at decent fps."
|
| Do you have first hand experience with this? I do . We play
| WoW on MacBook air M1 and it runs fantastic . Better than my
| intel MacBook Pro from 2019
| schleck8 wrote:
| "Running fantastic" is what Apple would advertise, but what
| matters is fps, utilisation and thermals when benchmarking
| games
| Thaxll wrote:
| Define fantastic, because a 1080ti from 4 years ago runs
| faster than the M1. My 2070 could run WoW at 144fps, and
| it's a 2.5-year-old card.
|
| Yet most people can't get 60fps:
| https://www.youtube.com/watch?v=nhcJzKCcpMQ
|
| Edit: thanks for the dates update
| windowsrookie wrote:
| Comparing the M1 to a 1080ti is ridiculous. The 1080ti
| draws 250+ watts. The M1 draws 10w in the MacBook Air.
|
| In the current market you can buy a MacBook Air (an
| entire laptop computer) for less than buying just a
| midrange GPU.
| Thaxll wrote:
| Well, Apple compared itself to a 3080m, which is faster
| than a 1080ti.
| Zarel wrote:
| No one has explained what you got wrong, so in case
| anyone reading this is still confused, Apple compared an
| M1 Max to a 3080m. An M1 Max's graphics card is ~4x as
| fast as an M1.
| danieldk wrote:
| The 1080 Ti was made available in March 2017, so it's 4.5
| years old. Not 6.
| jbverschoor wrote:
| My M1 cannot show
| https://www.apple.com/macbook-pro-14-and-16/ without
| stuttering (esp. the second animation of opening and turning
| the laptop).
|
| Both safari and chrome
| garblegarble wrote:
| That's because it's actually a series of jpegs rather than
| a video(!!) - the same happens on my Intel Mac
| wil421 wrote:
| Are modern games built with Metal? Pretty sure Apple
| deprecated OpenGL support. Macs have never been gaming
| computers.
|
| The GPUs in the M1 family of Macs are for "professional"
| users doing Video Editing, Content creation, 3D editing,
| Photo editing and audio processing.
| lights0123 wrote:
| MoltenVK is Vulkan's official translation layer to Metal,
| and doesn't have too much overhead. Combine it with DXVK or
| vkd3d to translate from DirectX - DirectX before 12 is
| generally faster with DXVK than with Windows' native support.
| plandis wrote:
| That's not great especially because I believe WoW works
| natively for the M1 and uses the Metal API.
|
| My follow up would be what settings were you playing at?
| varjag wrote:
| Someone needs to come up with realistic Mac benchmarks,
| like 'seconds it takes for the Finder beachball to
| disappear'. By this metric my M1 sees no improvement over my
| MB12.
| bborud wrote:
| The beachball disappears when Apple's developers learn to
| write asynchronous code. (You can't blame the hardware for
| programming decisions that are fundamentally slow or
| stupid.)
| ur-whale wrote:
| How does that compare to top of the line AMD Ryzen latest gen
| (site is dead for me right now)?
| tbob22 wrote:
| Single core is similar to the 5900x but multicore is more in
| line with 3900x.
|
| Quite impressive for a laptop, I'm sure the power consumption
| will be much higher compared to the M1 but likely no where near
| desktop parts.
| EvgeniyZh wrote:
| Top intel mobile processor appears to be
| https://browser.geekbench.com/v5/cpu/10431820 M1 Max gives 9%
| boost for single-core and 34% for multicore, with similar or
| larger (?) TDP -- Intel is 35-45 W, M1 Max is 60W but I assume
| some of it (a lot?) goes to the GPU. Impressive, but it
| probably wouldn't be called a revolution if it came from
| Intel.
| Weryj wrote:
| At first glance yeah, but that's 45W at 2.5GHz, and TDP
| isn't the power rating of the CPU. That benchmark lists the
| Intel CPU boosting up to 4.9GHz; I would say its actual power
| draw was closer to the 100W mark for the CPU alone.
| EvgeniyZh wrote:
| 4.9 GHz should be the single-core max frequency, not
| applicable in a multicore benchmark. All-core turbo is
| lower, but also AFAIK there is a limit on how long it can
| run at that frequency (tau for PL2, I think around half a
| minute by default).
| klelatti wrote:
| Is there any reason why the Intel CPU wouldn't run at 4.9
| GHz for the single core benchmark though - whilst the M1
| would be limited to a much lower frequency which it can
| sustain for much longer?
| EvgeniyZh wrote:
| I think, but I'm not 100% sure, that limits are on
| overall TDP, i.e., single-core workloaad can run on turbo
| indefinitely. Then aalso I'd assume benchmarks are much
| longer than any reasonable value of tau which means it
| integrates out (i.e., Intel performance may be higher in
| short-term). Edit: that is probably one of the reasons
| single-core gap is smaller
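The PL1/PL2/tau behaviour discussed above can be sketched as a toy two-phase model (the 45W/100W/28s figures are illustrative defaults, not measurements of any specific chip):

```python
# Toy model of Intel's power limiting: the package may draw PL2 for
# roughly `tau` seconds, then must settle back to PL1. Over a long
# benchmark the boost window "integrates out" toward PL1.
def average_power(pl1, pl2, tau, duration):
    """Average package power over `duration` seconds (crude two-phase model)."""
    if duration <= tau:
        return pl2
    return (pl2 * tau + pl1 * (duration - tau)) / duration

# 45W sustained, 100W boost for ~28s (a common default tau):
print(average_power(pl1=45, pl2=100, tau=28, duration=20))   # short run: all boost
print(average_power(pl1=45, pl2=100, tau=28, duration=600))  # long run: ~47.6W average
```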
| klelatti wrote:
| Thanks - this all highlights some of the interesting
| trade offs in CPU design!
| neogodless wrote:
| Just to throw AMD in the mix, Ryzen 9 5980HX (35-54W).
|
| https://browser.geekbench.com/v5/cpu/10431820
|
| M1 Max leading 17% single core, 50% multi-core.
|
| https://browser.geekbench.com/v5/cpu/compare/10496766?baseli...
| foldr wrote:
| If the TDP was really the same we'd be seeing Intel laptops
| with 20 hour battery lives.
| EvgeniyZh wrote:
| Battery life is more about idle/low load TDP, not full load
| TDP. It's not like MBP has 60*20=1200 Wh battery (100Wh is
| FAA limit)
| hajile wrote:
| M1 only uses around 15-18W with all 8 cores active
| according to AnandTech's review (with E-cores using 1/10 the
| power, that's equivalent to a little less than 4.5 big
| cores). I'd guess 30W for all cores would be the upper-end
| power limit.
|
| Intel's "TDP" is a suggestion rather than reality. Their
| chips often hit 60-80 watts of peak power before hitting
| thermal limits and being forced to dial back.
| badhombres wrote:
| I'm curious about the difference between the 8-core and
| 10-core M1 Pro. I'm definitely not going to get the Max; I'm
| just not the user that needs the GPUs.
| ksec wrote:
| I have seen far too many people making comments on MacPro "Pro"
| Chip.
|
| A hypothetical _32_ Core CPU and _64_ Core GPU is about the max
| Apple could make in terms of die size reticle limit without going
| chiplet and TDP limit without some exotic cooling solution. Which
| means you cant have some imaginary 64 Core CPU and 128 Core GPU.
|
| We can now finally have a Mac Cube, where the vast majority of
| the Cube will simply be a heat sink and it will still be faster
| than current Intel Mac Pro. I think this chip makes most sense
| with the A16 design on 4nm next year [1].
|
| Interesting question would be memory, current Mac Pro support
| 1.5TB. A 16 Channel DDR5 would be required to feed the GPU, but
| that also means minimum memory on Mac Pro would be 128GB at 8GB
| per DIMM. One could use HBM on package, but that doesn't provide
| ECC Memory protection.
|
| [1] TSMC 3nm has been postponed, in case anyone wasn't
| paying attention, so the whole roadmap has shifted by a
| year. If this still doesn't quiet those who keep repeating
| that Moore's law hasn't died yet, I don't know what will.
| wmf wrote:
| It's not hard to imagine that "Jade 2C-Die" mean two compute
| die chiplets and "Jade 4C-Die" means four chiplets which makes
| 40 CPU cores and 128 GPU cores about the same size as Sapphire
| Rapids. It could have 256 GB RAM on a 2048-bit bus with 1,600
| GB/s of bandwidth.
| ksec wrote:
| Chiplets aren't a silver bullet and aren't without
| compromise. You then need to figure out the memory
| configuration and NUMA access. Basically the Jade 2C-Die
| chiplet analogy doesn't make sense unless you make
| significant changes to Jade-C.
|
| Jade 4C-Die on a single die only makes sense up to 40 cores
| (4x the current CPU cores, or 32 HP cores and 8 HE cores);
| unless there is some major cache rework there isn't any
| space for 128 GPU cores.
|
| But then we also know Apple has no problem with _only_ 2 HE
| cores on a MacBook Pro laptop, so why would they want to put
| _8_ HE cores on a desktop?
| lowbloodsugar wrote:
| If someone had said in 2018 "Apple is going to release a
| MacBook Pro with an ARM chip that will be faster at running
| x86 applications than an x86 chip while having a 20-hour
| battery life", then a lot of people would have responded
| with a lot of very clever-sounding reasons why this couldn't
| possibly work, what the limitations would be on a
| hypothetical chip, and how a hypothetical chip that they can
| imagine would behave.
|
| I think the evidence is that Apple's chip designers don't care
| about anyone's preconceived and very logical sounding ideas
| about what can and can't be done.
|
| So what we know is that there is going to be a Mac Pro, and
| that whatever it is is going to absolutely destroy the
| equivalent (i.e. absolute top of the range) x86 PC.
|
| If you want to be right, take that as a fact, and work
| backwards from there. Any argument that starts with "_I_ don't
| think it can be done" is a losing argument.
| runeks wrote:
| But does it support running x86 Docker images using hardware
| emulation?
|
| This is what's holding me off buying one, since I don't know
| whether to get 16 GB RAM (if it _doesn't_ work) or 32 GB RAM
| (if it _does_ work).
| ArchOversight wrote:
| There is no hardware in the M1 for x86 emulation. Rosetta 2
| does on-the-fly translation for JIT code and caches the
| translation of x86_64 binaries on first launch.
|
| Docker for Mac runs a VM. Inside that VM (which is Linux for
| ARM), if you run an x86_64 Docker image, it will use QEMU to
| emulate x86_64 and run Intel Linux ELF binaries as if they were
| native.
|
| That means that currently, using Docker on macOS, if a native
| ARM version of a Docker image is available it will be used, but
| Docker can and will fall back to the x86_64 image otherwise.
|
| That already works as-is. There is no hardware emulation
| though, it is all QEMU doing the work.
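| A quick way to see this in action (assuming Docker Desktop on an
| M1, where the QEMU emulation is set up out of the box) is to pin
| the platform explicitly:

```shell
# Run the x86_64 variant of an image under QEMU emulation;
# selecting linux/amd64 makes the container report x86_64.
docker run --rm --platform linux/amd64 alpine uname -m   # x86_64

# Run the arm64 variant (native on an M1) for comparison.
docker run --rm --platform linux/arm64 alpine uname -m   # aarch64
```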
| giantrobot wrote:
| Note that Docker Desktop for Mac has _always_ used a VM to
| run a Linux instance. Same with Docker Desktop on Windows (I
| don 't know if this has changed with WSL). The main
| difference on M1 Macs is the qemu emulation when a Docker
| image is only available as x86_64. If the image is available
| in AArch64 it runs natively on the M1.
| DeathArrow wrote:
| > Same with Docker Desktop on Windows (I don't know if this
| has changed with WSL).
|
| Not really. On Windows you can choose if you want to run
| Linux binaries in a VM or native Windows containers.
| thesandlord wrote:
| I feel Windows Containers are a whole separate thing. I
| personally have never seen anyone use them, but then
| again I have never worked on a Windows Server stack.
| ArchOversight wrote:
| Native Windows Containers run Native Windows Binaries.
| You can't just launch your Linux docker containers using
| Native Windows Containers.
| ArchOversight wrote:
| Oh yeah, I thought the whole VM thing was implied with the
| fact that Docker is a Linux technology...
| watermelon0 wrote:
| WSL1 doesn't support cgroups and other pieces needed to run
| containers, but Docker Desktop can use WSL2, which uses a
| lightweight VM in the background, so you are correct.
| nunez wrote:
| That said, x86_64 images that require QEMU are much buggier
| than pure arm64 images. Terraform's x86_64 image, for
| example, will run into random network hangups that the ARM
| image doesn't experience. It was bad enough that I now
| maintain my own set of arm64 Terraform images.
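| If you're unsure which variant of an image you've ended up with,
| you can check the architecture Docker actually pulled (a sketch,
| assuming a Docker Desktop setup; alpine stands in for any
| multi-arch image):

```shell
# Pull the arm64 variant explicitly, then confirm what's on disk.
docker pull --platform linux/arm64 alpine >/dev/null
docker image inspect --format '{{.Os}}/{{.Architecture}}' alpine
# -> linux/arm64
```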
| IceWreck wrote:
| Why does Apple not open up their hardware to other operating
| systems like Linux? They already get our money from the
| hardware purchases; what more do they want?
|
| I know Asahi Linux exists, but without Apple's (driver) support
| it will never reach the performance of macOS on M1.
|
| (If someone disagrees, please compare Nouveau with proprietary
| Nvidia drivers.)
| oneplane wrote:
| Because Apple intends to sell UX, not just some hardware.
| klelatti wrote:
| I'd like this but you make it sound like this would not require
| significant effort from them.
|
| I suspect they would say if you want to run Linux you can do it
| in a VM and use the already debugged MacOS device drivers.
| ctdonath wrote:
| Apple's core competency is selling hardware, with a tightly
| integrated ecosystem to optimize user experience with that
| hardware.
|
| There is no incentive to facilitate introducing that hardware
| into an uncontrolled foreign ecosystem. Apple does not want
| users looking at their computer while muttering "this sucks"
| when "this" is an OS+UI famous for incomplete/buggy behavior.
|
| (I've tried going all-in for Linux several times. The amount of
| "oh, you've got to tweak this..." makes it unusable.)
| basisword wrote:
| As cool as that would be, why would they? I mean what do they
| have to gain from it? Maybe a few more Linux users would buy
| MacBooks, but not enough to impact their bottom line. Plus
| those users wouldn't be part of the Apple ecosystem and at
| least since the iPod Apple's goal has been bringing you into
| the ecosystem.
| bilbo0s wrote:
| More importantly, those users would complain and some would
| even ask for support.
|
| As horrible as this is going to sound, I'm thinking both
| Apple and Microsoft file that under: "Some users are not
| worth the trouble."
|
| Probably OK for server users though? But Apple doesn't really
| make servers.
| someguydave wrote:
| Apple could gain some trust and goodwill from the technical
| user base they have been alienating for 10 years
| spacedcowboy wrote:
| _shrug_ Doesn't look like they need it. As long as they
| keep churning out machines and devices that make people's
| wallets reflexively open, they don't have to pander to the
| tiny minorities...
|
| Not a particularly _nice_ situation to be in, if you're not in
| the Apple ecosystem, but there's the world we live
| in, and the world we wish it to be. Only sometimes do those
| worlds overlap.
| LASR wrote:
| I bet the number of people within the technical user base
| for whom the lack of linux support is a deal-breaker is
| significantly smaller than the number of people who don't
| care.
|
| So while it's a nice boost in goodwill, it's probably small
| enough for Apple to safely ignore.
| walls wrote:
| Nobody wants the Linux desktop experience associated with their
| brand.
| masterof0 wrote:
| ^ I agree, and also the support burden of N distros, for N ->
| inf.
| PUSH_AX wrote:
| Steam hardware?
| ctdonath wrote:
| Operative word: "want".
|
| There was no other viable option.
| jzb wrote:
| They have some of your money, but they want more of it.
|
| Mac owners are more likely to buy iPads and iPhones and become
| embedded in the Apple ecosystem. That draws people into the app
| stores, which Apple gets a cut of, and into Apple Music and
| Apple TV...
|
| If they're very successful they get you to sign up for Apple
| Pay and the Apple Card and get a chunk of your spend for things
| entirely unrelated to Apple.
|
| If they just sell you the hardware and you can easily run Linux
| on it, you might not run macOS and never move further into
| their ecosystem.
| jzb wrote:
| Also - even if they only wanted your hardware money, if you
| run Linux on Apple hardware, there's less chance your next
| purchase will be Apple hardware. Linux is portable, macOS
| isn't (legally, easily -- Hackintoshes notwithstanding, but
| even those are going to go away when Apple ditches Intel
| hardware for good).
| DeathArrow wrote:
| >Hackintoshes notwithstanding, but even those are going to
| go away when Apple ditches Intel hardware for good
|
| Maybe people will find a way to build ARM hackintoshes. :)
| kitsunesoba wrote:
| Nouveau is a bit of a special case because Nvidia actively
| blocks third party drivers from fully leveraging the
| capabilities of their cards. While Apple isn't officially
| supporting any particular third party OS on M-series hardware,
| they're also not obstructing the creation of high performance
| third party drivers for it.
| gumby wrote:
| > Why does Apple not open up their hardware to other operating
| systems
|
| It's extra effort they don't want to go to. They spent a lot of
| engineering and support time working with MS on boot camp,
| handling (what are to them) special cases and back
| compatibility. They really needed it at the time, but no longer
| need it so make no effort in making it happen. And Apple never
| did linux support, it's simply that linux runs on most hardware
| that runs windows.
|
| Among other things, here's a major reason why it's hard: Apple
| supports their hardware and software for quite a long time, but
| is happy to break backward compatibility along the way. MS
| considers backward compatibility critical, despite the cost. I
| respect both positions. But getting windows running on intel
| Mac hardware wasn't automatic, and required developing support
| for bits of code designed (by MS) for all sorts of special
| cases. They simply needed it, so did the work, with the
| cooperation of MS.
| spitfire wrote:
| What am I missing here? The marketing presentations spoke of
| 200GB/sec and 400GB/sec parts. Existing CPUs generally have
| tens of GB/sec. But I see these parts beating out existing
| parts by small margins - 100% at best.
|
| Where is all that bandwidth going? Are none of these synthetic
| benchmarks stressing memory IO that much? Surely compression
| should benefit from 400GB/sec bandwidth?
|
| This also raises the question of how those multiple memory
| channels are laid out. Sequentially? Striped? And if striped:
| bitwise, byte-wise, or word-wise?
| hajile wrote:
| They have 5.2 and 10.4 TFLOPS of GPU power to feed in addition
| to 10 very wide cores.
| jb1991 wrote:
| Would like to see the Metal benchmark too, but it doesn't
| appear yet on the Geekbench Metal page. It would be interesting
| to see how Geekbench scores it against discrete GPUs.
| amatecha wrote:
| You can find more results at
| https://browser.geekbench.com/v5/cpu/search?q=Apple+M1+Max
|
| (it does, broadly, appear to be pretty comparable to Intel
| i9-11900K
| https://browser.geekbench.com/v5/cpu/search?q=Intel+Core+i9-... )
| newfonewhodis wrote:
| i9-11900K is 125W TDP, and the M1 is probably nowhere near that
| (M1 is 10-15W TDP)
| Flatcircle wrote:
| How does this compare to a Mac Pro for video editing?
| vineyardmike wrote:
| Should be better. They discussed how it was even better than a
| Mac Pro with Afterburner.
| JohnTHaller wrote:
| Looks like it's somewhere between my AMD Ryzen 5800X desktop
| (paid about $800) and a 5900X.
| blakesterz wrote:
| I just can't figure out what I'm missing on the "M1 is so fast"
| side of things. For years I worked* on an Ubuntu desktop
| machine I built myself. Early this year I switched to a brand
| new M1 mini, and this thing is slower and less reliable than
| the machine I built myself that runs Ubuntu. My Ubuntu machine
| had a few little issues every now and then. My Mini has weird
| bugs all the time, e.g. green-screen crashes when I have a
| thumb drive plugged in. Won't wake from sleep. Loses Bluetooth
| randomly. Not at all what I'd expect from something built by a
| company with unlimited funds. I would expect those issues from
| the Ubuntu box, but the problems were small on that thing.
|
| *Work... Docker, Ansible, Rails apps, nothing that requires
| amazing super power. Everything just runs slower.
| PragmaticPulp wrote:
| > I just can't figure out what I'm missing on the "M1 is so
| fast" side of things.
|
| Two reasons:
|
| 1. M1 is a super fast _laptop_ chip. It provides mid-range
| desktop performance in a laptop form factor with mostly fanless
| operation. No matter how you look at it, that's impressive.
|
| 2. Apple really dragged their feet on updating the old Intel
| Macs before the transition. People in the Mac world (excluding
| hackintosh) were stuck on relatively outdated x86-64 CPUs.
| Compared to those older CPUs, the M1 Max is a huge leap
| forward. Compared to modern AMD mobile parts, it's still faster
| but not by leaps and bounds.
|
| But I agree that the M1 hype may be getting a little out of
| hand. It's fast and super power efficient, but it's mostly on
| par with mid-range 8-core AMD desktop CPUs from 2020. Even
| AMD's top mobile CPU isn't that far behind the M1 Max in
| Geekbench scores.
|
| I'm very excited to get my M1 Max in a few weeks. But if these
| early Geekbench results are accurate, it's going to be about
| half as fast as my AMD desktop in code compilation (see Clang
| results in the detailed score breakdown). That's still mightily
| impressive from a low-power laptop! But I think some of the
| rhetoric about the M1 Max blowing away desktop CPUs is getting
| a little ahead of the reality.
| VortexDream wrote:
| The fact that this beats AMD's top laptop CPU is actually a
| huge deal. And that's before considering battery life and
| thermals.
|
| I'll never buy an Apple computer, but I can't help but be
| impressed with what they've achieved here.
| PragmaticPulp wrote:
| Don't get me wrong: It's impressive and I have huge respect
| for it. I also bought one.
|
| However, it would be surprising if Apple's new 5nm chip
| _didn't_ beat AMD's older 7nm chip at this point. Apple
| specifically bought out all of TSMC's 5nm capacity for
| themselves while AMD was stuck on 7nm (for now).
|
| It will be interesting to see how AMD's new 6000 series
| mobile chips perform. According to rumors they might be
| launched in the next few months.
| smoldesu wrote:
| This definitely is a factor. Another thing that people
| frequently overlook is how competitive Zen 2 is with M1:
| the 4800u stands toe-to-toe with the M1 in a lot of
| benchmarks, and consistently beats it in multicore
| performance.
|
| Make no mistake, the M1 is a truly solid processor. It
| has seriously stiff competition though, and I get the
| feeling x86 won't be dead for another half decade or so.
| By then, Apple will be competing with RISC-V desktop
| processors with 10x the performance-per-watt, and once
| again they'll inevitably shift their success metrics to
| some other arbitrary number ("The 2031 Macbook Pro Max XS
| has the highest dollars-per-keycap ratio out of any of
| the competing Windows machines we could find!")
| LDataReady wrote:
| I think people are missing the fact that it's performance +
| energy efficiency where M1 blows regular x86 out of the
| water.
| NovemberWhiskey wrote:
| > _Apple really dragged their feet on updating the old Intel
| Macs before the transition. People in the Mac world
| (excluding hackintosh) were stuck on relatively outdated
| x86-64 CPUs._
|
| Maybe my expectations are different; but my 16" MacBook Pro
| has a Core i9-9880H, which is a 19Q2 released part - it's not
| exactly ancient.
| camillomiller wrote:
| You need to consider the larger target group of
| professionals. It's really GPU capabilities that blow
| everything away. If you don't plan to use your MacBook Pro
| for video/photo editing or 3D modeling, then a M1 Pro with
| the same 10-core CPU and 16-core Neural Engine has all you
| need and costs less. Unless I'm missing something, I don't
| think there's much added benefit from the added GPU cores in
| your scenario, unless you want to go with the maximum
| configurable memory.
| eecc wrote:
| Well, the Max has double the memory bandwidth of the Pro, but
| I cannot see workloads other than the ones you mentioned
| where it would make a significant improvement.
|
| Perhaps ML, but that's all locked into CUDA, so it's
| unlikely.
|
| Perhaps Apple could revive OpenCL from the ashes?
| johnboiles wrote:
| https://developer.apple.com/metal/tensorflow-plugin/
| alexcnwy wrote:
| The Pro only supports 2 external displays, which is why I
| ordered the Max.
| jjcon wrote:
| > GPU capabilities that blow everything away
|
| Compared to previous Macs and iGPUs - an Nvidia GPU will
| still run circles around this thing.
| jachee wrote:
| In a quiet laptop?
| jjcon wrote:
| If you are trying to do hardcore video editing or
| modeling then 'quiet laptop' likely comes second to speed
| mataug wrote:
| > Compared to previous Macs and iGPUs - an Nvidia GPU
| will still run circles around this thing
|
| True, but the point here is that M1 is able to achieve
| outstanding performance per watt numbers compared to
| Nvidia or Intel.
| KptMarchewa wrote:
| Are you really rendering in a cafe, such that you need
| on-the-go GPU performance?
| alwillis wrote:
| GPUs are no longer special-purpose components; certainly
| in macOS, the computational capabilities of the GPU are
| used by all sorts of frameworks and APIs.
|
| It's not just about rendering any more.
| zsmi wrote:
| For the content creator class that needs to
| shoot/edit/upload daily, while minimizing staff, I can
| see definite advantages to having a setup which is both
| performant and mobile.
| kylemh wrote:
| Besides what the other person commented, also consider
| creatives that travel. Bringing their desktop with them
| isn't an option.
| ArchOversight wrote:
| Editing photos or reviewing them before getting back home
| to know if you need to re-shoot, reviewing 8K footage on
| the fly, applying color grading to get an idea of what
| the final might look like to know if you need to re-
| shoot, re-light or change something...
|
| There are absolutely use-cases where this is going to
| enable new ways of looking at content and give more
| control and ability to review stuff in the field.
| PostThisTooFast wrote:
| It has been a long time since editing photos has required
| anything more than run-of-the-mill performance.
| mataug wrote:
| Adding to all the usecases listed by other commenters.
|
| Higher performance-per-watt numbers also imply less heat
| from the M1. This means that even if someone isn't doing
| CPU/GPU-heavy tasks, they are still getting better battery
| life, since power isn't being wasted on spinning up the
| fans for cooling.
|
| For some perspective, My current 2019, 16inch i7 MBP gets
| warm even if I leave it idling for 20 - 30 mins and I can
| barely get ~4hrs of battery life. My wife's M1 macbook
| air stays cool despite being fanless, and lasts the whole
| day with similar usage.
|
| The point is performance per watt matters a lot in a
| portable device, regardless of its capabilities.
| FractalHQ wrote:
| I am often rendering and coding while traveling for work
| a few months out of the year. Even when I'm home, I
| prefer spending my time in the forest behind my house, so
| being able to use Blender or edit videos as well as code
| and produce music anywhere I want is pretty sweet.
| whynotminot wrote:
| Why would you even buy a laptop if you don't need to be
| mobile?
| gtirloni wrote:
| Fewer cables is one reason.
| monkmartinez wrote:
| Yes, Nvidia GPU's are a major reason I switched to PC
| about 3 years ago. That and I can upgrade RAM and SSD's
| myself on desktops and laptops. The performance of
| professional apps like SolidWorks, Agisoft Metashape, and
| some Adobe products with an Nvidia card and drivers was
| night and day compared with a Mac at the time I switched.
|
| Does Apple have any ISV certified offerings? I can't find
| one. I suspect Apple will never win over the engineering
| crowd with the M1 switch... so many variables go into these
| system builds, and Apple just doesn't have that business
| model.
|
| Even with these crazy M1's, I still have doubts about
| Apple winning the Movie/Creative market. LED walls,
| Unreal Engine, Unity are being used for SOOOO much more
| than just games now. The hegemony of US centric content
| creation is also dwindling... budget rigs are a heck of a
| lot easier to source and pay for than M1s in most parts
| of the world.
| DCKing wrote:
| Not so sure about that "running circles around". While
| the M1 Max will not beat a mobile RTX 3080 (~same chip as
| desktop RTX 3070), Apple is in the same ballpark of its
| performance [1] (or is being extremely misleading in
| their performance claims [2]).
|
| Nvidia very likely has leading top end performance still,
| but "running circles around this thing" is probably not a
| fair description. Apple certainly has a credible claim to
| destroy Ampere in terms of performance per watt - they are
| just limiting themselves to a smaller power envelope.
| noting that AMD's RDNA2 already edges out Ampere in
| performance per watt - that's not really Nvidia's strong
| suit in their current lineup).
|
| [1]: https://www.apple.com/v/macbook-
| pro-14-and-16/a/images/overv... - which in the footnote
| is shown to compare the M1 Max to this laptop with mobile
| RTX 3080: https://us-
| store.msi.com/index.php?route=product/product&pro...
|
| [2]: There's _a lot_ wrong with how vague Apple tends to
| be about performance, but their unmarked graphs have been
| okay for general ballpark estimates at least.
| jjcon wrote:
| Definitely impressive in terms of power efficiency, if
| Apple's benchmarks (vague as they are) come close to
| accurate. Comparing the few video benchmarks we are
| seeing from the M1 Max to leading Nvidia cards, I'm still
| seeing about 3-5x the performance across plenty of
| workloads (I'd consider anything >2x running circles).
|
| https://browser.geekbench.com/v5/compute/3551790
| spacedcowboy wrote:
| Anything that _can_ run rings around it is unlikely to be
| running on a battery in a laptop, at least for any
| reasonable length of time.
| derefr wrote:
| > an nvidia gpu will still run circles arounnd this thing
|
| Not for loading up models larger than 32GB it wouldn't.
| (They exist! That's what the "full-detail model of the
| starship Enterprise" thing in the keynote was about.)
|
| Remember that on any computer _without_ unified memory,
| you can only load a scene the size of the GPU's VRAM. No
| matter how much main memory you have to swap against, no
| matter how many GPUs you throw at the problem, no magic
| wand is going to let you render a _single tile of a
| single frame_ if it has more texture-memory as inputs
| than one of your GPUs has VRAM.
|
| Right now, consumer GPUs top out at 32GB of VRAM. The M1
| Max has, in a sense, 64GB (minus OS baseline overhead) of
| VRAM for its GPU to use.
|
| Of course, there is "_an_ Nvidia GPU" that can bench
| more than the M1 Max: the Nvidia A100 Tensor Core GPU,
| with 80GB of VRAM... which costs $149,000.
|
| (And even then, I should point out that the leaked Mac
| Pro M1 variant is apparently 4x larger _again_ -- i.e.
| it's probably available in a configuration with _256GB_
| of unified memory. That's getting close to "doing the
| training for GPT-3 -- a 350GB model before optimization
| -- on a single computer" territory.)
| jjcon wrote:
| Memory != Speed
|
| You could throw a TB of memory in something and it won't
| get any faster or be of any use for 99.99% of use cases.
|
| Large ML architectures don't just need more memory, they
| need distributed processing. Ignoring memory requirements,
| GPT-3 would take hundreds of years to train on a single
| high-end GPU (say a desktop 3090, which is >10x faster
| than the M1), which is why they aren't trained that way
| (and why Nvidia structures its offerings the way it does).
| [deleted]
| spacedcowboy wrote:
| I'm looking forward to playing BG3 on mine :)
| ryanjodonnell wrote:
| Is it optimized for mac?
| spacedcowboy wrote:
| There's a native ARM binary :) You can choose the x86 or
| ARM binary when you launch, and they're actually separate
| apps. That's how they get around the Steam "thou shalt
| only run x86 applications" mandate.
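| You can see this dual-binary arrangement from the command line
| on macOS: lipo lists the slices in a universal binary, and arch
| forces a particular slice (system paths used purely for
| illustration):

```shell
# List the architecture slices in a universal (fat) binary.
lipo -archs /bin/ls              # e.g. "x86_64 arm64e" on an M1 Mac

# Force the x86_64 slice, which runs under Rosetta 2 on Apple
# Silicon (requires Rosetta to be installed).
arch -x86_64 /usr/bin/uname -m   # prints x86_64 under Rosetta
```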
| fnord77 wrote:
| > Compared to those older CPUs, the M1 Max is a huge leap
| forward.
|
| Is it, though? Its single-core score is roughly 2x that of
| the top CPU Apple put in the mid-2012 MacBook Pro. 2x after
| 10 years doesn't seem that great to me.
|
| Maybe that's more of an indictment of Intel.
| baybal2 wrote:
| > were stuck on relatively outdated x86-64 CPUs.
|
| We have come to a point where x86 is faster to emulate than
| to run on a more modern microarchitecture.
|
| One point is clear: the per-transistor performance of x86 is
| diminishing with each new core generation, and it is already
| losing to the M1.
|
| x86 makers will not be able to keep up for long. They will
| have to keep releasing bigger and hotter chips to maintain
| parity, until they cannot.
| e40 wrote:
| Really curious to see what chips will go into the Mac Pro
| line next year (year after?). Will they be faster than the
| AMD desktop/workstation chips when they come out?
| lewantmontreal wrote:
| Has there been any research into OS response times between
| macOS, Windows and Linux?
|
| I've wanted to switch to Mac many times, most recently to an
| M1 Mac Mini, but I can't get over the frustrating slowness.
| Every action, like opening Finder, feels like it takes so
| long compared to Windows, even with "reduce animation" on. I
| always end up returning the Mac.
| factorialboy wrote:
| The M1 is much better compared to previous generations of
| MacBooks.
|
| My two-year-old Linux desktop is a beast compared to my M1
| Mac mini. But I love the Mac mini compared to my 2019 MBP.
| jokethrowaway wrote:
| This is easily explained. Linux distributions don't make money
| if you buy a new laptop.
| zitterbewegung wrote:
| I have had the opposite experience. Whenever an update to the
| kernel comes out for Ubuntu, my machine apparently forgets
| which kernel to choose and boots up incorrectly. My M1 MacBook
| Air drives a Pro Display XDR without any hiccups. It's
| completely silent when I work with it. But performance-wise,
| the M1 Pro and Max don't seem like they would be worth it for
| me to upgrade to from the M1 at all. I just want a completely
| silent laptop, and that makes a huge difference to me.
|
| But my workflows and workloads are slowly changing from local
| compilation of code to one where I will probably just SSH (or
| possibly figure out how to configure RDP) into my Threadripper
| when I do actual builds, keeping my MacBook Air M1 with 16GB
| of RAM; the synergy with other Apple devices is a huge plus.
|
| Have you talked with Apple to get a replacement?
| littlestymaar wrote:
| I share the same feeling, and I'm glad to learn I'm not alone:
| earlier this year I worked for a (really) short time with a
| company that gave me an M1 MacBook, which I was pretty excited
| about.
|
| Over the course of the two weeks I worked there, I faced:
|
| - frequent glitches when unplugging the HDMI adaptor (an Apple
| one, which cost an arm and a leg).
|
| - non-compatible software (Docker, Sublime Text)
|
| - and even a Kernel Panic while resuming from hibernation
|
| It was like working with Linux in the 2000s, but at least with
| Linux I knew it was going to be a bit rough around the edges.
| Given the hype found online, I wasn't at all prepared for such
| an experience.
| markdown wrote:
| M1 is hardware. Ubuntu is software.
|
| Your comparison makes no sense.
| babypuncher wrote:
| The Mac Mini isn't super impressive, because you're comparing
| it to desktops where TDP is far less of a concern.
|
| The M1 is getting a lot of attention because it's Apple's first
| laptop chip, and it is the fastest chip in that category by a
| fairly significant margin. Chips from Intel and AMD only
| compete with it on performance when drawing considerably more
| power.
| pgib wrote:
| I went from a 2016 top-of-the-line MacBook Pro to an M1 Mac
| mini, and I can't believe how much faster things are --
| including Docker with Rails, PostgreSQL, etc. Out of
| curiosity, are you
| running native (arm64) containers? Aside from an issue now and
| then with my secondary HDMI display, my machine has continued
| to blow me away. My development environment is almost on par
| with our production environment, which is a nice change.
| jinto36 wrote:
| Not particularly defending the M1, but it might be that much of
| what you're using so far isn't available natively and is going
| through Rosetta 2? The other issues could be OS issues, but
| maybe it would be worth doing a RAM test?
| vineyardmike wrote:
| Disclaimer, I don't have an M1 Mac, but I do have a buggy
| ubuntu desktop and used Macs my whole technical life.
|
| It seems that you're heavily in the minority with this. Even
| the weird bugs you mention are very unexpected. I've used a Mac
| for 15 years and never heard of an issue related to thumb
| drives. You may just have a lemon. See if you can just replace
| it (warranty, etc, not just spending more money).
| jandrese wrote:
| It's hardly unheard of for a Mac to pick up weird issues. My
| wife's Macbook has a thing where the mouse cursor will just
| disappear when she wakes the thing up from sleep. Poking
| around on the internet finds other people with the same
| problem and no good solution (zapping PRAM doesn't help,
| neither did a full OS reinstall). It's just a live-with-it
| affair. The only fix is to close the lid and open it again,
| which isn't too bad but the issue crops up multiple times in
| a day and is quite annoying.
|
| I manage a bunch of Ubuntu desktops at work and the most
| common issue seems to be that if you leave a machine alone
| for too long (a week or two), then when you log back in the
| DBUS or something seems to get hung up and the whole
| interface is mostly unusable until you log out and log back
| in. It can be so bad you can't even focus a window anymore or
| change the input focus. Next most common issue is DKMS
| randomly fucking up and installing a kernel without the
| nVidia or VirtualBox modules leaving the machine useless.
| thenthenthen wrote:
| Well if you actually do 'work' with the device, like
| installing software thats not in the appstore you might run
| into some troubles...i love my 2013 macbook air for most
| daily tasks, but never was there a time where i couldnt do
| with having a windows and linux device on hand. But yeah,
| thats just life. Happy to see sobering comment here that the
| m1 is a 'mobile' processor, my 12 year old pc agrees. Another
| question that came to mind is; what professional is gonna
| edit ProRes video while commuting? Is this the ultimate
| precarious labour creative industry machine?!
| jacurtis wrote:
| I have both, and use both everyday.
|
| I use an M1 MacBook Pro (16GB RAM) for personal projects and
| as my standard home/travel computer. It is amazing and fast.
|
| I use a Lenovo ThinkPad X1 Carbon with similar specs (i5, 16GB
| RAM, M.2 SSD) for work that runs RHEL 8 (Red Hat Enterprise
| Linux). It's insanely fast and stable.
|
| The overhead to run RHEL is so small it would blow your mind
| at the performance you get from almost nothing. Mac or
| Windows are crazy bloated by comparison. I know I am sparking
| an eternal debate by saying this, but I personally have never
| found Ubuntu to be as stable for a workstation (but ubuntu
| server is great) as RHEL is.
|
| With that being said, I still think the M1 mac is the best
| computer I have ever owned. While linux is great for work, I
| personally enjoy the polished and more joyful experience of
| Mac for personal use. There are a million quality of life
| improvements that Mac offers that you won't get in Linux. The
| app ecosystem on mac is incredible.
|
| When most people make comparisons for the M1 Mac, they
| compare Windows PCs (generally Intel-based ones, since Macs
| previously used Intel) and Intel-based Macs. I have never
| seen someone compare it to Linux performance.
| The speed of the M1 mac is far better than Windows and far
| better than old Macs. There is no question. Before my M1 Mac
| I used a MacBook Pro with an i7, 16GB RAM, and an upgraded
| dedicated graphics card. The little M1 MacBook outshines it
| at least 2 to 1. Best of all, the fans never turned on, and
| my old MacBook Pro had constant fan whine which drove me
| crazy.
|
| The other incredible feat of the M1 Mac is the battery life.
| I run my laptop nearly exclusively on battery power now. I
| treat it like an iPad. You plug it in when it gets low, but I
| can use it for about a week between charges (I use it for 2-3
| hours each day). I don't turn the screen down or modify my
| performance. I keep the screen fairly bright and just cruise
| away. I love it.
|
| While Linux might outshine on performance, it doesn't come
| close on battery life. My Lenovo laptop is worth
| about 2x my MacBook Pro. It is a premium laptop and yet
| running RHEL I will be lucky to get 6 hours. Compare that to
| ~20 hours of my MacBook.
| jandrese wrote:
| Downside of RHEL is the package repo is anemic and out of
| date. Sometimes horribly out of date. It's hardly uncommon
| to run into some issue with an application and then look it
| up online and find out that the fix was applied 8 versions
| after the one that's in the repo.
|
| Worse is when you start grabbing code off of Git and the
| configure script bombs out because it wants a library two
| versions ahead of the one in the repo. But you don't want
| to upgrade it because obviously that's going to cause an
| issue with whatever installed that library originally. So
| now you're thinking about containers but that adds more
| complication...
|
| Like everything, it is a double-edged sword.
| amelius wrote:
| My colleague has an M1 Mac. I have a Ubuntu desktop. My
| colleague always asks me to transcode videos on my machine
| because on her MacBook it is too slow.
| smoldesu wrote:
| Just my N=1 anecdata, but I'm in the same boat. I got a
| Macbook Air from work, and I have a hard time using it
| compared to my Linux setup (which is saying something, since
| I'm using a Torvaldsforsaken Nvidia card). Here's a list of
| the issues I can recall off the top of my head:
|
| - High-refresh displays cause strange green/purple
| artifacting
|
| - Plugging in multiple displays just outright doesn't work
|
| - Still a surprisingly long boot time compared to my x201
| (almost a teenager now!)
|
| - No user replaceable storage is a complete disservice when
| your OEM upgrades cost as much as Apple charges
|
| - Idle temps can get a little uncomfortable when you're
| running several apps at once
|
| ...and the biggest one...
|
| - A lot of software just isn't ready for ARM yet
|
| Maybe I'm spoiled, coming from Arch Linux, but the software
| side of things on ARM still feels like it did in 2012, when
| my parents bought me a Raspberry Pi for Christmas. Sure, it
| works, but compatibility and stability are still major
| sticking points for relatively common apps. Admittedly, Apple
| did a decent job of not breaking things any further, but
| without 32-bit library support it's going to be a hard pass
| from me. Plus, knowing that Rosetta will eventually be
| unsupported gives me flashbacks to watching my games library
| disappear after updating to Catalina.
| katbyte wrote:
| yep. if things don't work, contact apple support - they are
| actually pretty decent. i had a lemon mini that would
| randomly go into a bootloop after os updates - it had a bad
| mainboard, so apple replaced it and it's been fine since.
| lambdapsyc wrote:
| Easy: it is likely Docker that is making your Mac mini
| slower than your old Linux box.
|
| Docker on macOS is painfully slow because it is implemented
| through what amounts to a sledgehammer approach: Docker
| depends on Linux kernel features, so on macOS it starts a
| Linux virtual machine and layers on other high-overhead
| compatibility tricks to make things work. Volumes are the
| biggest culprit.
|
| If you are running docker through rosetta... (don't know the
| state of docker on apple silicon) then that is a double whammy
| of compatibility layers.
|
| Regarding bugs, yeah probably teething issues because the M1
| was/is such a large departure from the norm. They should really
| get those things fixed pronto.
| ArchOversight wrote:
| Docker for Mac Desktop supports the M1 and uses a Linux VM
| that is ARM-native.
|
| It is not using Rosetta 2 at all.
| johncolanduoni wrote:
| But if you run x86 images, it will use qemu's software
| emulation inside the Linux VM (which is quite slow).
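| For anyone wanting to check whether a given container runs
| natively or under emulation, here is a quick sketch (assumes
| Docker Desktop on an Apple Silicon Mac; `alpine` is just an
| arbitrary small multi-arch image):

```shell
# Runs natively inside the ARM Linux VM:
docker run --rm --platform linux/arm64 alpine uname -m   # reports aarch64

# Forces the x86_64 image, which runs under qemu user-mode
# emulation inside the VM (noticeably slower):
docker run --rm --platform linux/amd64 alpine uname -m   # reports x86_64
```

| Without `--platform`, Docker picks the native architecture
| whenever the image provides one.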
| sliken wrote:
| Well, switching hardware and OS at the same time makes it
| hard to tell what's responsible for the difference. I also
| suspect Docker and what's running in the containers might
| well be x86 code instead of native.
|
| I can tell you that I have a high-end MBP 16" Intel i9 with
| 32GB RAM and it feels much slower than a new Mac M1 mini. My
| i9 runs the fan hard during backups (CrashPlan), video
| conferencing, and sometimes for no clear reason, even just
| Outlook. The M1 mini, on the other hand, has been silent,
| fast, and generally a pleasure to use.
|
| Doubling the number of fast cores, doubling (or quadrupling)
| the GPU and memory bandwidth should make the new MBP 14 and 16"
| even better.
| localhost wrote:
| I have a three-year old hand-built 16-core ThreadRipper 1950X
| machine here with 64GB 4-channel RAM. I have a 14" MBP M1 Max
| on order. I just checked the Geekbench scores between these two
| machines:
|
| M1 Max: 1783 single-core | 12693 multi-core
| 1950X: 901 single-core | 7409 multi-core
|
| That's a big difference that I'm looking forward to.
|
| Also, just checked memory bandwidth. I have 80GB/s memory
| bandwidth [1] on my 1950X. The MBP has 400GB/s memory
| bandwidth. 5x(!)
|
| [1] https://en.wikichip.org/wiki/amd/ryzen_threadripper/1950x
|
| *EDIT adding memory bandwidth.
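| As a quick sanity check on the ratios above (the inputs are
| only the Geekbench 5 scores and bandwidth figures quoted in
| this comment, nothing else assumed):

```shell
# Ratios implied by the quoted numbers: M1 Max vs 1950X
awk 'BEGIN {
  printf "single-core: %.2fx\n", 1783/901     # Geekbench 5 single-core
  printf "multi-core:  %.2fx\n", 12693/7409   # Geekbench 5 multi-core
  printf "bandwidth:   %.1fx\n", 400/80       # GB/s, M1 Max vs 1950X
}'
```

| So roughly 2x single-core, 1.7x multi-core, and the full 5x
| on memory bandwidth.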
| plandis wrote:
| What are the specs on your Ubuntu machine?
| spamizbad wrote:
| The answer is the M1 is a great chip held back by a mediocre
| OS. Containers and certain other applications are just going
| to run faster on Linux, which has had a ton of performance
| tuning put into it relative to macOS.
| throwaway894345 wrote:
| That's Docker for Mac versus native Docker. Docker only runs on
| Linux, so Docker for Mac spins up a Linux VM to run your
| containers. When you mount your Ruby or Python projects into
| your containers, Docker for Mac marshals tons of filesystem
| events over the host/guest boundary which absolutely devastates
| your CPU.
|
| Docker for Mac is really just bad for your use case.
|
| No idea what's going on with the thumb drive, bluetooth, etc.
|
| Beyond that, it's a little silly to compare a desktop
| (presumably many times larger, ~500+W power supply, cooling
| system, etc) with a Mac mini (tiny, laptop chip, no fans, etc).
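| If you're stuck with Docker for Mac and bind mounts are the
| bottleneck, one common workaround is keeping the I/O-heavy
| paths inside the VM via named volumes. A sketch (the image,
| paths, and commands are illustrative, not from the thread):

```shell
# Dependency dirs live in a named volume (fast, stays in the VM);
# only the source tree crosses the host/guest boundary:
docker volume create gems
docker run --rm \
  -v "$PWD":/app \
  -v gems:/usr/local/bundle \
  -w /app ruby:3.0 bundle install
```

| Older Docker for Mac releases also accepted `:cached` and
| `:delegated` consistency suffixes on bind mounts to relax
| sync guarantees, though recent versions treat them as no-ops.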
| mvanbaak wrote:
| > Docker for Mac is really just bad for your use case.
|
| Nah, not just for Mac, Docker is really just bad. Period ;P
| shados wrote:
| > Beyond that, it's a little silly to compare a desktop
| (presumably many times larger, ~500+W power supply, cooling
| system, etc) with a Mac mini (tiny, laptop chip, no fans,
| etc).
|
| You're totally right, but I see a lot of folks around me
| making absolutely bonker claims on these M1 devices. If I
| believed everything I'm told, I'd expect them to run complex
| After Effects renders at 10,000 frames per second.
| BugsJustFindMe wrote:
| > _You 're totally right, but I see a lot of folks around
| me making absolutely bonker claims on these M1 devices._
|
| What "but"? The claims are less bonkers with a perspective
| that acknowledges the significance of those physical
| differences.
|
| Without the case, would the full internals
| (motherboard/CPU/GPU/PSU/ram/heatsinks/storage) of your
| desktop fit in your pants pockets? Because the M1 Mac
| Mini's fit in mine.
|
| How much fan noise does your desktop produce? How much
| electricity does it consume? M1s are fast compared to
| almost everything, but they're bonkers fast compared to
| anything getting even remotely close to 20 hours on a
| battery at 3 lbs weight.
| amelius wrote:
| > You're totally right, but I see a lot of folks around me
| making absolutely bonker claims on these M1 devices.
|
| Fake news happens on HN too. Apparently. Sadly.
|
| Hopefully it's mostly limited to discussions about Apple.
| alwillis wrote:
| I think that's disingenuous. I haven't seen any claims that
| haven't been backed up by benchmarks and specific use
| cases.
|
| What most people keep missing is that the M1 jumped way out
| ahead on performance per watt _on Apple's first attempt at
| making their own Mac processor_.
|
| I've seen the M1 Mac for less than $700. In that price
| range, there's not much competition when it comes to
| computation, GPU performance, etc., and it comes fairly
| close to much higher-priced x86 Macs and PCs.
|
| That's why people are excited about the M1. You generally
| can't edit 4K (and 8K) video in real-time on a machine in
| this price range--people happily paid 5x that price to get
| this level of performance just a few years ago.
| alwillis wrote:
| Let's review: "Black. Magic. Fuckery."--
| https://news.ycombinator.com/item?id=25202147
| dubcanada wrote:
| Seems like your computer may have issues; you shouldn't be
| crashing from USB devices. Sounds like you should go get it
| checked out, tbh.
| thenthenthen wrote:
| Well, it could just be a software issue. For example, if you
| installed the wrong CH340x driver to run your faux Arduino
| clones (the ones that didn't pay their dues to FTDI), you
| might run into this issue.
| jiveturkey wrote:
| Probably more to do with Bug Sur than the hardware. Catalina
| is pretty solid.
| lostlogin wrote:
| I hadn't heard that one. I had some pain, but I can laugh
| now.
| chrisco255 wrote:
| My AMD/Nvidia laptop running Ubuntu has had a number of
| issues, from sudden crashes to freeze-ups to external
| monitor connectivity problems, that have progressively
| gotten better as the drivers were updated. The drivers are
| likely buggy and will probably improve with time.
| merrvk wrote:
| Put the same thing in a fanless laptop that seems to last
| forever on a charge, and then you'll understand.
| 2OEH8eoCRo0 wrote:
| Meh. The M1 is a node ahead and memory is placed on package.
| I'm trying not to be dazzled by the tech when it comes at the
| price of freedom.
| outside1234 wrote:
| Ok - I get that comment for iPhone - but what is less free
| about the Mac vs. a PC?
| y4mi wrote:
| docker is a second-class citizen on anything but linux. i'm
| especially amazed you bothered with the m1 mac mini if you
| wanted to use docker, considering how little memory it has. the
| memory overhead of having a VM running with only 8GB available
| is significant.
|
| yes, i know docker on mac has "native" support for the m1
| cpu, but that doesn't mean it's not running a VM.
| jacurtis wrote:
| Yeah, people forget that Docker is built to run on Linux.
| That is why it exists: you deploy your containers onto a
| Linux host that runs Docker for containerization. That is
| the point.
|
| The reason we have Docker for Windows and Mac is so that
| developers can emulate the docker environment for testing and
| building purposes. But ultimately it is expected that you
| will take your container and eventually deploy it on Linux.
| So Docker will always be a second-class citizen on these
| other OSes, because it is just there for emulation purposes.
| mikhailt wrote:
| Many folks are comparing it to previous Macs running macOS,
| not to PCs or Linux distros.
|
| If you have an Intel Mac next to it, there's a clear,
| noticeable difference, assuming macOS is on it.
|
| If you put W10 on the Intel Mac, it is much faster than
| macOS on it (from my own experience). If we could run W10 or
| Linux on the M1, it would be much faster than the Intel Mac.
|
| Another example, Windows 10/11 on ARM in Parallels on my MBA m1
| is much faster than Surface Pro X, a MS-tuned/customized
| device.
| vmception wrote:
| For me, all the gains from W10 running faster than macOS
| are lost because I have to immediately go into control panel
| to fix everything. Those fast computers are waiting on me to
| improve the user experience most of the time. I'm personally
| not really splitting hairs over much longer running processes
| if I have to walk away anyway.
| EugeneOZ wrote:
| I'm comparing a Windows PC (i7-6700K, NVMe) with an MBA M1,
| and the MBA is noticeably faster at just about everything.
| Compilation of JS and Java code is literally 2 times faster
| on the MBA M1 than on the desktop i7.
| EugeneOZ wrote:
| for those who downvote the facts they don't want to know
| about: https://twitter.com/eugeniyoz/status/140751857088881
| 0497?s=2...
| ry4nolson wrote:
| nobody is arguing with your facts, just that your
| comparison is apples and oranges since the i7 6700k was
| released 5+ years ago.
| munchbunny wrote:
| Isn't the i7-6700k 3-4 generations older at this point than
| the m1?
| rahimnathwani wrote:
| i7-6700k is 6 years old. The current equivalent i7
| (i7-11700k) is almost 2x as fast when using all cores.
| ricardobeat wrote:
| If the latest gen, top cpu is "almost 2x as fast" it will
| still lose to the parent's "literally 2 times faster" M1.
| While using 4x more power.
| rahimnathwani wrote:
| Right but GP's main points were:
|
| - noticeably faster at just about everything
|
| - literally 2 times faster [compiling code]
|
| Neither of these statements is true when comparing
| against the latest generation, which is the relevant
| comparison.
|
| The power thing is true, but irrelevant to GP's point and
| my objection to it.
| wging wrote:
| That hardware is from 2015, though. A fairer comparison
| would be to current gen Intel. https://en.wikipedia.org/wik
| i/List_of_Intel_Core_i7_processo...
| jccalhoun wrote:
| I've never owned a Mac and I've never used an M1, so take
| this with a heavy grain of salt. Everyone I've seen talk
| about how fast the M1 is was coming from a 4-5+ year old
| Mac, not comparing it to Intel/AMD chips that came out last
| year in a similar price range.
| websap wrote:
| Currently typing this on an M1 laptop that I bought 2 weeks
| ago. This machine blows my 16-inch MacBook out of the water.
| My 16-inch MacBook is pretty well spec'd, but this laptop is
| something else.
| celsoazevedo wrote:
| - https://web.archive.org/web/20211020175422if_/https://browse...
|
| - https://archive.md/zQAC3
| busymom0 wrote:
| For someone with more knowledge: are these scores dependent
| on the OS? Like, would it have different scores if it ran
| Linux?
| timbit42 wrote:
| On macOS, if you are running x64 apps, they are
| JIT-translated and emulated. On Linux, all your repo and
| manually compiled apps are compiled for the native CPU, so
| they run at full speed.
| launchiterate wrote:
| So can someone build a mini cloud service with these machines?
| reacharavindh wrote:
| What did I miss? A Dell XPS 15 with an Intel CPU has way
| higher scores... is Geekbench not capable of working
| correctly on Apple Silicon?
|
| https://browser.geekbench.com/v4/cpu/16376678
| joshstrange wrote:
| Is that a legitimate GB score for that machine? Searching for
| scores from that machine gives me:
| https://browser.geekbench.com/v5/cpu/search?q=XPS+15+9510
|
| Which shows scores much lower than the linked test (maybe I'm
| missing something). Likewise unless it's coming from someone
| like AnandTech I'm skeptical of any benchmarks for the M1
| Pro/Max until these machines are actually released next week.
| [deleted]
| mrbuttons454 wrote:
| That's a v4 score, which can't be compared to a v5 score.
| [deleted]
| rokobobo wrote:
| I think you're looking at a geekbench 4 score, vs v5 in the
| post. It seems that this machine performs significantly better
| than an XPS 15.
| [deleted]
| unicornfinder wrote:
| That's a Geekbench v4 score, which isn't comparable to a
| Geekbench v5 score.
|
| You can see the same laptop on Geekbench v5 here, where it
| scores much, much lower than the M1:
| https://browser.geekbench.com/v5/cpu/10506812
| jkeddo wrote:
| I think there is a big variance in XPS 17 scores. For
| reference, here is a top scoring XPS 17. In this case, scores
| are much closer.
|
| https://browser.geekbench.com/v5/cpu/compare/10481116?baseli.
| ..
| jjcm wrote:
| For those that can't load and are curious, the M1 Max scores
| about 2x the Intel Core i7-11800H in the Dell laptop on both
| single core and multi core scores.
| jkeddo wrote:
| I don't think that was a good sample; most of the XPS 15s
| I see have a much higher single-thread score, almost double
| the one linked above:
|
| https://browser.geekbench.com/v5/cpu/10491562
| snuser wrote:
| It looks like the original has been updated; the
| differences don't seem very drastic, especially when you
| consider what the SKUs with the M1 Max cost.
|
| Guess it's about a 1.5-2x difference in battery life,
| though.
| smoldesu wrote:
| It also costs twice as much, so I should hope I'm getting
| something for the money.
| minimaul wrote:
| You're comparing Geekbench 4 and 5 scores - you can't do that.
| It's a different scale.
|
| Here's an example of a Geekbench 5 score:
| https://browser.geekbench.com/v5/cpu/10501477
| lalaithion wrote:
| Here's the actual Dell XPS 17 vs MacBook Pro:
| https://browser.geekbench.com/v5/cpu/compare/10502109?baseli...
| jkeddo wrote:
| That might be a lemon XPS 17. With a higher scoring XPS 17,
| the benchmark is much closer:
|
| https://browser.geekbench.com/v5/cpu/compare/10481116?baseli.
| ..
| throwawaybanjo1 wrote:
| https://browser.geekbench.com/v5/cpu/10501477 V5 score for that
| judge2020 wrote:
| Comparison for that https://browser.geekbench.com/v5/cpu/comp
| are/10501477?baseli...
| petecooper wrote:
| Geekbench 4 and Geekbench 5 are different scales.
|
| Edit: about 5 people beat me.
| biosed wrote:
| Compared to my gen 1 M1 macbook pro:
| https://browser.geekbench.com/v5/cpu/compare/10514397?baseli...
| hammock wrote:
| Single-core, the regular M1 MacBook is about the same.
| Multi-core, the Max is a lot higher.
| hbOY5ENiZloUfnZ wrote:
| It should be about the same. It is the same micro-architecture
| which is why they are still called M1. There are just more of
| the same cores.
| tyingq wrote:
| Looks not the same to me:
| https://browser.geekbench.com/v5/cpu/compare/10508124?baseli...
|
| Did I pick the wrong regular M1 Macbook?
|
| Edit: Hmm.
| https://browser.geekbench.com/v5/cpu/compare/10508059?baseli...
|
| I guess Geekbench is a little unpredictable?
|
| And it looks like HN is pushing their capacity...
| hammock wrote:
| Hmm, not sure what I was looking at before; that site is not
| the easiest to navigate.
| tyingq wrote:
| My second link shows what you're talking about. I'm not
| sure which result to trust.
| smoldesu wrote:
| Geekbench has always been a little unpredictable. It was most
| famously skewed quite heavily in Apple's favor before
| Geekbench 4, and even the modern scores are pretty opaque.
| I'd wait to see real-world performance metrics out of it.
| dzink wrote:
| A few days ago there were comments on HN that the Geekbench score
| on M1 Max was run on an operating system not optimized for M1.
| Has that changed with this one?
| [deleted]
| simonebrunozzi wrote:
| Wondering if some avid videogamers, in need of powerful
| GPUs, will consider MacBook Pros with the M1 Pro or Max a
| good option.
|
| AFAIK, some of the most popular videogames run on Mac too.
| lghh wrote:
| Some, but not nearly enough.
| mhh__ wrote:
| Almost none of the most popular ones, actually. No Fortnite,
| no Warzone.
| DisjointedHunt wrote:
| I'm not sure how Geekbench browser tests hold up to
| real-world usage, but note that Apple's ecosystem is WAY
| different from x86.
|
| For starters, the chip isn't general-purpose compute; it's
| more of an ASIC that is optimized to run the OS abstractions
| available through the official APIs. (Think on-screen
| animation such as rendering an image, or detecting things in
| it. On x86, you implement the code and in some cases
| proprietary libraries such as Intel's MKL make it seemingly
| faster to run. On Apple Silicon, there are DEDICATED chip
| areas for such common use cases. Thus, an ASIC.)
| lostmsu wrote:
| I will get one if it can run Windows and nothing comparable
| has been released by then.
| louwrentius wrote:
| The single-core performance is still the same as the M1,
| which, frankly, is awesome for a laptop, but I wish the
| single-core performance could hit AMD Ryzen scores.
| gjsman-1000 wrote:
| Depends which benchmark you ask. According to PassMark, the
| M1 is the best single-core CPU, no question.
| musesum wrote:
| Compared to my mid-2018 6-core i9 2.9GHz:
|
| 1.62x single-core
| 2.70x multi-core (10 vs 6 cores)
| 2x to 3x faster on ML and graphics
|
| Hopefully, less power and no more false positives on the
| Touch Bar.
___________________________________________________________________
(page generated 2021-10-20 23:00 UTC)