[HN Gopher] Qualcomm's Oryon core: A long time in the making
___________________________________________________________________
Qualcomm's Oryon core: A long time in the making
Author : rbanffy
Score : 147 points
Date : 2024-07-11 10:17 UTC (12 hours ago)
(HTM) web link (chipsandcheese.com)
(TXT) w3m dump (chipsandcheese.com)
| bhouston wrote:
| From the article it seems like it clones a lot of the Apple M1
| technology. So smart acquisition on Qualcomm's part to get
| competitive again.
|
| Apple engineers who worked on the Apple M4 should go start
| another company so Qualcomm can acquire it again for $1B+. :)
| Or better yet, get acquired by ARM itself.
|
| It is likely that as this tech makes it to the smartphone market,
| Android phones are going to get a major speed boost. They have
| been so uncompetitive against Apple for a while now.
| kernal wrote:
| >Android phones are going to get a major speed boost. They have
| been so uncompetitive against Apple for a while now.
|
| Uncompetitive? These numbers indicate otherwise while being on
| a previous generation TSMC 4nm node. Just imagine if they were
| on the same 3nm node as the A17.
|
| https://nanoreview.net/en/soc-list/rating
|
| https://www.androidauthority.com/snapdragon-8-gen-3-vs-apple...
| immibis wrote:
    | It doesn't matter how fast the CPU is if you can't run
    | the software you need.
| bhouston wrote:
| Looking at this chart Apple just destroys the competition:
|
| https://browser.geekbench.com/mobile-benchmarks
|
    | Also, Apple's mobile devices destroy the competition in
    | this chart as well if you sort by single-thread rating:
|
| https://www.cpubenchmark.net/CPU_mega_page.html
| kernal wrote:
| According to Geekerwan (who is one of the best mobile SoC
| reviewers) Apple does not destroy the competition. In fact,
| aside from the A17 single core score (which comes at the
| cost of increased thermals and aggressive throttling) Apple
| is the one destroyed in multi-core and GPU performance. All
| the while being on an inferior TSMC node.
|
| >Snapdragon 8 Gen 3 comparison with A17. Apple's multicore
| lead is gone, GPU lead is gone, and the single core lead
| hasn't been smaller in a decade and has worse thermals.
|
      | https://www.reddit.com/r/apple/comments/17gvtew/geekerwan_sn...
| dagmx wrote:
| This is because the Adreno ~830 excels at mobile
| benchmarks and falls significantly behind in "desktop"
| oriented benchmarks.
|
| You can see this borne out in all the Snapdragon X GPU
| benches where they share the same GPU architectures
| respectively.
|
| Apple have the better GPU for more modern and demanding
| workloads. Qualcomm have the better GPU for more dated
| but still highly relevant mobile gaming needs.
| walterbell wrote:
| _> because there has not been any upstreamed Device Tree for this
| laptop, we were not able to get a Linux desktop installed_
|
| Looking forward to tests of Linux on Arm UEFI ("SystemReady") for
| laptops based on Qualcomm Oryon.
| rjsw wrote:
| I don't think any of the Linux GPU drivers are written to work
| with UEFI/ACPI.
| assassinator42 wrote:
| I've been confused by this, aren't these systems using ACPI
| instead of Device Tree? I know AWS ARM systems use ACPI.
| wmf wrote:
| Qualcomm is using device tree.
| surajrmal wrote:
    | I believe it supports both; however, it only uses ACPI
    | when booting Windows.
| retskrad wrote:
| The M4 is the highest single core CPU in the world and it's in
| a ridiculously thin tablet without cooling. I don't understand
| where this rhetoric that Apple has lost most of its chip
| design talent and is in trouble is coming from. Apple still
| makes the world's most advanced and efficient consumer chips,
| years after the M1.
| Sakos wrote:
| I think people generally expected larger improvements between
| iterations. Intel and AMD continue to deliver sizeable
| performance and efficiency gains every 1-2 years while it feels
| like the Apple M-series isn't getting comparable gains. It
| definitely seems like Apple has suffered significant enough
| brain drain in recent years that they're finding it difficult
| to iterate on the M1.
| TylerE wrote:
| Hasn't each gen been like 20-30% performance? Isn't that
| better than what AMD and Intel have managed?
| Sakos wrote:
| Not really?
      | https://arstechnica.com/gadgets/2023/11/testing-apples-m3-pr...
|
| I've been waiting for an upgrade to my M1 and I still
| haven't seen one worth spending that much money on. I'd
| rather just sink that into upgrading my Windows tower.
| bhouston wrote:
        | https://docs.google.com/spreadsheets/d/1i5dBe_rsiNaQATH-D-Pj...
|
        | Single core performance went from 2300 with the M1 to
        | 3800 with the M4. That is a huge improvement for my
        | workflow (large TypeScript monorepos), which depends
        | heavily on single core performance (even with parallel
        | builds, because one rarely rebuilds everything and
        | mostly hot reloads).
| zamadatix wrote:
          | These statements are orthogonal though, i.e.
          | 2300->3800 is still less than 20% per generation
          | (~17% per generation if you use the exact single
          | core numbers for the M1 vs M4 iPad). That might be
          | meaningful for your workload, but it also means
          | 20-30 percent per generation is quite a bit off.
| TylerE wrote:
          | What increase has Intel made over the same time
          | period? Feels like a lot less. CPUs are a mature
          | technology.
| qwytw wrote:
| > Feels like a lot less
|
          | I think quite a bit more. Meteor Lake seems to have
          | ~100% faster multicore and ~25% faster single core
          | performance, plus better battery life.
| TylerE wrote:
          | Still has double the power draw of my entire
          | desktop Mac. For a supposedly mobile chip. Intel
          | power consumption/heat is terrible.
| qwytw wrote:
          | That wasn't the question. They did improve
          | tremendously over the last 4 years because they
          | were basically not doing anything in the 4 years
          | prior to that...
|
| To be fair desktop Macs these days are just laptops
| without a screen so it's not that surprising. Of course
| they are significantly more power efficient but also much
| slower than high end AMD/Intel chips (if you don't care
| about heat/power usage that much like a lot of desktop
| users).
|
          | Also, even on mobile the 165H seems to be not that
          | far from the M3, e.g. 10-20% worse battery life,
          | slightly slower single core but faster multi-core.
          | Not ideal, but considering where Intel was when the
          | M1 came out, not that bad either.
| TylerE wrote:
| You're the one that brought battery life into it, which
| certainly does put power consumption in play.
| qwytw wrote:
| Well yeah... I was saying that they improved power
| consumption massively compared to Apple since 2020. Which
| is true.
|
| 10th gen was horrible and now they have almost caught up
| with the M series.
| TylerE wrote:
          | According to Wikipedia the latest Meteor Lake parts
          | have a peak draw of 57W. My entire Mac Studio draws
          | 38W.
| Sakos wrote:
| I picked the Ars article because they showed real-world
| performance like encoding. Geekbench scores are difficult
| to impossible to equate to real-world results. There are
| ways to measure it properly, but most sites seem to just
| do Geekbench or something else like it and call it a day.
| Single-core performance isn't this universal thing.
| What's your actual workload like? I'm all over the Ryzen
| x3D CPUs because they have proven massive performance
| improvements for things I care about like Factorio. Some
| site reporting "yeah, single and multi-core scores are
| 20% better" doesn't mean anything. 20% better at what
| exactly?
| TylerE wrote:
| I will say that my M1 Mac runs Factorio like an absolute
| dream. There is an ARM native port now and it's really
| good. It something like doubled UPS over the old Intel
| binary.
|
          | One of the big advantages the M chips have is the
          | insanely fast unified memory, since it's all right
          | on the package. It's much closer to ultra high spec
          | GPU RAM than PC RAM.
          |
          | My M1 Studio has 200GB/sec of memory bandwidth.
          | Extant DDR5 modules are under 50GB/sec each.
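          |
          | Back-of-envelope on those figures (peak theoretical
          | numbers; the bus widths are my assumptions based on
          | public specs, not measurements):
          |
          |   # Peak DRAM bandwidth = transfer rate (MT/s)
          |   # x bus width (bytes), in GB/s.
          |   def peak_gbps(mt_per_s, bus_bits):
          |       return mt_per_s * (bus_bits // 8) / 1000
          |
          |   # One DDR5-4800 DIMM (64-bit): 38.4 GB/s
          |   print(peak_gbps(4800, 64))
          |   # 256-bit LPDDR5-6400 (M1 Pro class): 204.8 GB/s
          |   print(peak_gbps(6400, 256))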
| bee_rider wrote:
| CPUs got good enough for most applications a decade ago,
| it is hard to talk about upgrades being "worth spending
| money on" without specific workload info.
| bhouston wrote:
| I've been tracking the performance increase via GeekBench
| for M1, M2, M3 and now M4 and they are good incremental and
| consistent improvements:
|
      | https://docs.google.com/spreadsheets/d/1i5dBe_rsiNaQATH-D-Pj...
| zamadatix wrote:
| "Up to 20-30% faster" or "20-30% faster than <couple
| generation old version you might upgrade from". Comparing
| the "plain" M1 model vs M4 model (not pro/max as those
| aren't out yet, though follow nearly identical trends so
| far) there has been a total ~56% single core and ~59%
| multi-core performance uplift, or an average of ~16% per M*
| generation.
|
        | M1 released in November 2020. In almost exactly the
        | same time frame AMD went from Zen 3 -> Zen 5 (so one
        | less generation) with a total gain of ~34% single
        | core and ~59% multicore. Intel went from the 10900K
        | to the 14900K for a total of 76% single core and 128%
        | multicore gain (i.e. more than double).
|
| Two disclaimers before the conclusion: This is just from
| one lazy benchmark (Geekbench 6) and of the high performing
| model. That doesn't necessarily tell the whole story on
| both accounts, but it at least gives some context around
| general trends. E.g. passmark is going to say the
| differences on Intel are a little smaller, comparisons
| between Max and non-Max generations confuse multi core
| growth, and changes in low-mid market chips may follow a
| different slope than the high end products. Also there are
| other considerations like "and what about integrated
| graphics performance, which eats up a huge amount of die
| space and power budget?"
|
| Anyways, my conclusion is that Apple is doing reasonably
| well with generational improvements vs the competition.
| Maybe not the absolute best, but not the worst. Being on
| top in the single core realm with a mobile passively cooled
| device makes equivalent gains all the more difficult but
| they're still doing it. Apple may be a victim of its own
| success in that the M1 from 2020 is still such a
| performance powerhouse that a 50% gain 4 generations later
| just isn't that interesting whereas with Intel a catchup of
| ~doubling multicore performance in the same time seems more
| impressive especially when M*'s story isn't as tantalizing
| on that half of things.
| ggus wrote:
| > ~59% multi-core performance uplift, or an average of
| ~16% per M* generation.
|
          | +59% in 4 years is equivalent to ~12% every year;
          | percentages compound exponentially.
| phonon wrote:
| What are you correcting?
|
          | 59% from M1->M4: 1.59^(1/3) ≈ 1.167, i.e. a 16.7%
          | increase per generation.
|
| No one said "years" either...
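          |
          | A one-liner to sanity-check the compounding (taking
          | the ~59% total uplift from above as given):
          |
          |   # total_ratio ** (1/steps) is the per-step
          |   # compound growth factor.
          |   def per_step(total_ratio, steps):
          |       return total_ratio ** (1 / steps) - 1
          |
          |   print(per_step(1.59, 3))  # per generation: ~16.7%
          |   print(per_step(1.59, 4))  # per year: ~12.3%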
| mrbungie wrote:
| I'm still waiting for chips to compete with the perf/watt
| ratios and general efficiency of the M-series chips.
|
| Snapdragon X Elite + Windows ARM is getting there, but their
| GPU perf leaves a lot to be desired.
|
| Is Apple a victim of their own success? Idk, but diminishing
| returns are a thing you know.
| tedunangst wrote:
| You're only allowed to be the darling for one year or so. Then
| you suck again.
| Rinzler89 wrote:
| _> The M4 is the highest single core CPU in the world and it's
| in a ridiculously thin tablet_
|
| That's kind of the problem. The world's most powerful CPU is
| put in the world's most expensive and thinnest Netflix machine
| lol. Was the previous "thicker" M3 iPad holding anyone back?
|
| All that power and I can't use it to compile the Linux kernel,
| I can't use it to play the latest Steam/GOG games, or run CAD
| simulations, because it needs to be behind the restrictive iPad
| OS AppStore walled garden where only code blessed by Apple can
| run.
|
| Until it's put in a machine that can run Linux, it's more of a
| benchmarking flex than actually increasing productivity
| compared to previous M generations or the ARM/X86 competition
| that allows you to run any OS.
| dagmx wrote:
| That's a very narrow definition of "useful" and one I'd say
| is rather focused on yourself?
|
    | Why is Linux the arbiter of what is "useful"? Why would
    | it still be a benchmark flex if it was on macOS, an OS
    | where millions of people do professional work every day?
|
| And why is the iPad just a "Netflix machine" when tons of
| people use the iPad for professional creative use cases as
| well?
| gjsman-1000 wrote:
| > And why is the iPad just a "Netflix machine" when tons of
| people use the iPad for professional creative use cases as
| well?
|
| For my family, the iPhone and other Apple devices are the
| ultimate productivity machines. The App Store, fantastic.
|
| Why? Because they are (rightly) terrified of installing
| apps on Windows. Or any "computer." They've been burned too
| many times, warned too many times. Unless it's Microsoft
| Office, it doesn't happen. "Programs" are a threat.
|
| But apps? Apps don't hurt you. iPhones and iPads turn on
| every day when you want them to, they act predictably, it
| feels consistent in a way a laptop is not. As for the speed
| of the chip only being useful for "consumption" -
| technology advances. That A10 Fusion powered iPad from 6
| years ago stings to use now, even though it was plenty
| comfortable at the time. 5 years from now, nobody will
| regret an M4.
| iforgotpassword wrote:
| > That A10 Fusion powered iPad from 6 years ago stings to
| use now, even though it was plenty comfortable at the
| time. 5 years from now, nobody will regret an M4.
|
| Sorry, I've been around for too long to believe this. :)
|
| No matter how big of a milestone, a jump in performance
| anything ever brought, it never stopped software from
| eventually using up all the resources available. Just
| another layer of abstraction, just another framework,
| just another useless ui gimmick and you'll see that the
| M4 isn't immune to this either.
| thebeardisred wrote:
| I'll attempt the best interpretation of the comment.
| Installing Linux would allow for general purpose use of the
| device (in a freedom sense). This increases the "utility"
| of the device and lowers the bar for extending its
| functionality.
| zitterbewegung wrote:
| You can run iSH on the device for Linux (somewhat
| limited). It's on the App Store
|
| https://github.com/ish-app/ish
| Rinzler89 wrote:
| _> And why is the iPad just a "Netflix machine" when tons
| of people use the iPad for professional creative use cases
| as well?_
|
      | I never said people can't use iPads for professional
      | applications. I was asking what professional tasks
      | people are doing with the iPad that necessitated
      | putting the M4 in that closed platform instead of the
      | M3 or M2, rather than in a more open one like the Mac,
      | where it could be put to better use on more generic
      | compute tasks similar to what x86 and Nvidia chips are
      | used for, instead of being stuck in a very restricted
      | platform mostly targeted at content consumption.
|
| And so far nobody has provided an answer to that question.
| All the answers are either repeating Apple's marketing or
| vague ones like "you can use the iPad for professional
| applications too you know, it's not just a Netflix
| machine". OK, but what exactly are those professional iPad
| applications that mandate the M4 being in the iPad instead
| of the Mac?
|
      | I'm bringing this up because commenters bring up the M4
      | as the holy grail of chips in discussions about the
      | latest x86 and Qualcomm chips, so if you compare a chip
      | that can currently only run AppStore apps against chips
      | that can run most software ever written, we're
      | comparing apples and oranges.
| dagmx wrote:
| No, people have provided answers. One of those people is
| me. You just don't like them because you seem to have a
| problem imaging other people have differing uses than
| your own.
|
| Edit: this person keeps ignoring any responses that don't
| align with their world view and acting like they don't
| exist. When challenged they have called multiple people
| names instead.
|
| I'm not responding to them further to prevent dragging
| this out further.
| Rinzler89 wrote:
| Sorry but your answer didn't actually answer what I asked
| for though. Which is fine, just don't claim you did and
| then switch to gaslighting me that "I just don't like
| your answer". Please keep a mature attitude and not act
| like a toddler.
|
          | And it's not just "my use case" that I'm referring
          | to, since I don't do any of those. It's generic use
          | cases that I'm talking about in my examples, since
          | x86/Qualcomm are also generic chips used for a shit
          | tonne of use cases and not restricted to certain
          | app stores like the M4 is in the iPad. So if you
          | bring the M4 into comparison with those generic
          | chips, then you'd better provide an argument for
          | how they're comparable given the current SW
          | platform restrictions, and so far you haven't.
| burningChrome wrote:
      | Why is Apple the arbiter of what is "useful"? Why would
      | it still be a benchmark flex if it was on Linux, an OS
      | where millions of people do professional work every
      | day?
|
| Your argument cuts both ways.
| dagmx wrote:
| It doesn't cut both ways because I'm not making that
| argument at all. You interjected your own argument that
| you're now arguing against.
|
| I'm making the argument that "usefulness" extends outside
| of Linux use. I'm not saying it only extends to a
| specific group. Point me to where that is even implied.
| pantalaimon wrote:
| > The world's most powerful CPU
|
    | Maybe the most powerful per watt; desktop CPUs like the
    | 9950X are still way more powerful.
| qwytw wrote:
| M4 still has a 10% higher Geekbench single core score than
| the 9950X.
| SkiFire13 wrote:
        | Do note that the M4 is built on TSMC's 3nm process
        | while the 9950X is built on the relatively older 4nm
        | process. This has been the case for the earlier Apple
        | ARM processors too, as Apple made deals with TSMC to
        | get priority access to the newer processes. In the
        | end, as a user you get a slightly faster machine, but
        | that doesn't mean it's all thanks to the CPU
        | architecture.
        |
        | Apple also has lower memory latencies by virtue of
        | being a SoC design. Memory speed is increasingly
        | becoming the bottleneck compared to computing power.
        | However, this comes at the cost of other features,
        | like replaceable/upgradable RAM and CPU, which the
        | other desktop processors support. This is not to say
        | that one is better than the other, but there are
        | surely tradeoffs involved.
| qwytw wrote:
          | > in the end, as a user you get a slightly faster
          | machine
|
| You don't, though. Unless you only care about single core
| performance and/or power usage.
|
          | Apple's chips are not even close to AMD/Intel
          | value-wise when it comes to MT performance on
          | desktop. The $6,999 Mac Pro somehow loses to the
          | $400 14700K (of course you still need a GPU/etc.,
          | but unless you care about niche use cases, i.e.
          | very high amounts of VRAM, you can get a GPU
          | equivalent to the M2 Ultra for another $300-400 or
          | so).
| jjtheblunt wrote:
| by what metric?
| 0x457 wrote:
          | Number of exposed PCIe lanes, I guess.
| runjake wrote:
| It's not just a benchmark flex. I have been using Linux since
| 1992, for many or most of those years on both desktop and
| server. I am currently more productive while on an Apple
| Silicon Mac.
|
| I would venture to guess that for most people, they would be
| _more productive on a Mac or an iPad_ versus Linux.
|
| Pedantry: To my knowledge, there was no M3 iPad.
| Rinzler89 wrote:
      | The question from me was whether the M4 makes iPad
      | users more productive vs the M3 iPad or whatever the
      | last chip was, not whether the iPad itself makes you
      | more productive than Linux or Windows.
|
      | And since people keep bringing up the M4 as _the_
      | yardstick in discussions of generic x86 and Qualcomm
      | chips, my productivity metric was also meant in a
      | global, generic way. General purpose compute chips like
      | x86 and Nvidia have unlocked new innovations and
      | improvements to our lives over decades, able to run
      | code from specialty aerospace, CAD, and earthquake
      | prediction to protein folding for vaccines and medical
      | use, because you could run anything on those chips,
      | from YouTube to Fortnite to mainframes and
      | supercomputers. What similar improvements to humanity
      | does the M4 iPad bring over the M3 when it can only run
      | apps off the AppStore, the most used iPad apps being
      | YouTube and Netflix?
|
| As long as the most cutting edge MX chips are restricted to
| running only Apple approved AppStore apps because Apple is
| addicted to the 30% AppStore tax that they can't charge on
| their laptops running MacOS/Linux with the same chips, then
| they're relatively useless chips for humanity in the grand
| scheme of things in comparisons with X86 and Arm who make
| the world go round, power research and innovation because
| they can run anything you can think of despite scoring a
| bit lower benchmarks.
| runjake wrote:
| > The question was whether the M4 makes you more
| productive on the iPad vs the M3 iPad or whatever the
| last chip was
|
| This same question applies to any other computing
| platform upgrade. So far, the hardware for most common
| platforms far out-scales the majority of use cases.
|
        | Nonetheless, tech must advance, and is advancing,
        | regardless. Every platform is releasing newer and
        | faster versions, and only a tiny fraction of users
        | will make use of that power year over year.
|
| As to the rest of your comment, I see what you're saying,
| but those are _your_ opinions. However, the vast majority
| of the user base would probably disagree with you,
| because they are not technical people.
|
        | _They are productive_ on these closed platforms. They
        | have different workflows than you or I. I'm not very
        | productive on these platforms; these are consumption
        | devices for me. I need things like a development
        | toolchain and a command-line interface to be
        | productive.
|
| By and large, non-technical people are Apple's target
| audience, not technical people. In raw numbers, these
| people outnumber technical people by an order of
| magnitude.
|
| This dinosaur (me) recognizes that what constitutes a
| computer has evolved and shifted away from what I think
| of as a computer. And this shift will further continue.
| Rinzler89 wrote:
| _> They are productive on these closed platforms._
|
          | I never said otherwise. I asked what the point is
          | of commenters bringing up the M4's performance in
          | comparisons with x86/Qualcomm when, due to the open
          | vs closed nature of the platforms, they're not
          | directly comparable, because the M4 is much more
          | restricted in the iPad than the other chips are.
|
          | That's like comparing a Ferrari to a van and saying
          | how much faster Ferraris are. Sure, a Ferrari will
          | always be faster than a van, but you can do a lot
          | more things with a van, and just like M4 iPad Pros,
          | Ferraris are a lot less relevant to the functioning
          | of society than the vans which deliver your food,
          | medicine, kids to school, etc. Is the M4 good for
          | you and an improvement for your own workflow? Good
          | for you, just don't compare it to x86 until it can
          | run the same apps as those chips.
|
          | Like you said, it's mostly a consumption device,
          | and as such the M4 is mostly wasted in it, until
          | they bring it into a device with a more open OS
          | that can run the same SW as the other x86/ARM
          | platforms, which Apple delays intentionally because
          | they're trying to nudge users off the open Mac
          | platform towards the closed iPadOS platform for
          | that sweet 30% AppStore cut they can't get on their
          | devices running MacOS/Linux.
| qwytw wrote:
| > This same question applies to any other computing
| platform upgrade
|
          | Hardly. Or rather, to a much lesser extent; pro and
          | power users benefit a lot more from performance
          | improvements on open general purpose platforms,
          | since, well, you can actually do stuff on them.
          | What (performance sensitive) use cases does the
          | iPad even have? I guess video/image editing to an
          | extent, but pretty much all of those apps on iOS
          | are severely crippled and there are other
          | limitations (storage and extremely low memory
          | capacity).
| ben_w wrote:
| No year will ever be "the year of Linux on the desktop",
| despite that phrase being so old that if I'd conceived a
| child when I first heard it, I'd be a grandparent already.
|
| The fact that you can't imagine a device being useful until
| it can compile the Linux kernel etc., that you dismiss it as
| a "Netflix machine", says more about you than about Apple.
| Rinzler89 wrote:
| What are most people doing with the iPad that they were
| being held back by the M3 chip in order for the M4 to be
| such a game changer for them?
| ben_w wrote:
| All the things in the press release.
|
| AI inference. Editing, not merely watching, video. Ditto
| audio. Gets more done before the battery runs out.
|
| I cannot emphasise strongly enough how niche "compile the
| Linux kernel" is: to most people, once you've explained
| what those words mean, this is as mad as saying you don't
        | like a Tesla _because you recreationally make your
        | own batteries and Tesla cars don't have a
        | warranty-not-void method to let you just stick those
        | in_.
| Rinzler89 wrote:
          | So Linux kernel compiling is niche but not editing
          | videos and audio on the iPad? (Source: I have
          | friends making money in the audio and video
          | industry, and none of them use iPads professionally
          | enough for the M4 to matter; they all use MacBooks
          | or Mac Studios even though they tried iPad Pros.)
|
| And you haven't answered my question. For video and audio
| editing, were the M3 IPads being held back compared to M4
| for it to unlock new possibilities that couldn't have
| been done before and convince new users to switch to
| iPads?
| ben_w wrote:
| > So linux kernel compiling is niche but not editing
| videos and audio on the iPad?
|
| Correct.
|
| How many million youtube channels are there, and how
| exactly do you think that stuff gets made? Magic?
|
| New chip aimed at them, to make their work lives better.
|
| > And you haven't answered my question
|
| Yes I have. So have others who have replied to you.
|
| Performance.
|
| > were the M3 IPads being held back
|
| Disingenuous. "Held back" implies Apple could have made
| M4s a year sooner.
| Rinzler89 wrote:
          | _> How many million youtube channels are there, and
          | how exactly do you think that stuff gets made?
          | Magic?_
|
          | Since you asked, they're using Macs and PCs mostly,
          | sometimes Linux too. Rarely iPads though. That's
          | more of a strawman you're building here.
|
          | And I'm bringing in arguments that x86 and ARM
          | improvements are more important than the M4 chips,
          | because they power the world innovations, and
          | you're bringing up editing YouTube videos on iPads
          | as an argument for why the M4 is such a big deal. I
          | rest my case.
|
| _> Disingenuous. "Held back" implies Apple could have
| made M4s a year sooner._
|
          | I never said or meant such a thing; you're just
          | making stuff up at this point to stoke the fire,
          | and that's why I'll end the conversation with you
          | here.
| ben_w wrote:
          | > Using Macs and PCs mostly, sometimes Linux too.
          | Rarely iPads. That's more of a strawman you're
          | building here.
|
| You're arguing as if everyone changes production mode in
| one go when new hardware arrives. This chip was only
| released _50 days ago_.
|
| Your example of "power the world innovations" included
| "compile Linux kernel" and "play the latest Steam/GOG
| games".
|
| YouTube is more valuable to the world than endlessly
| recompiling the Linux kernel.
|
| Let me rephrase via metaphorical narrative:
|
| --
|
| "This internal combustion engine is very good, isn't it.
| I don't know why people keep saying they're a dead-end."
|
| "That's kind of the problem. The world's most powerful
| engine is put in the world's most expensive and thinnest
| carriage, lol. Was the previous horse-drawn carriage
| holding anyone back? Can't use car exhaust as manure!"
|
| "Obviously it held people back, just look at all the
| things horseless carriages let people do _faster_ , how
| the delivery of goods has been improved. And honestly,
| most people have moved on from organic manure."
|
| "What, manure is niche compared to trucks?"
|
| "Very much so. I mean, how do you think we get all the
| stuff in our shops?"
|
| "But most of the stuff is delivered by horses! And also,
| you're talking about trivial things like 'shopping', when
| horses power important things like 'cavalry'. I rest my
| case."
|
| --
|
          | > I never said or meant such a thing; you're just
          | making stuff up at this point to stoke the fire,
          | and that's why I'll end the conversation with you
          | here.
|
| I copy-pasted from your own previous comment, and at the
| time of writing this comment, the words "were the M3
| IPads being held back" are still present. I cannot see
| how they could mean something else. Just in case you edit
| that comment (you've edited a few others, that's fine, I
| do that too), here's the whole paragraph:
|
| > And you haven't answered my question. For video and
| audio editing, were the M3 IPads being held back compared
| to M4 for it to unlock new possibilities that couldn't
| have been done before and convince new users to switch to
| iPads?
| qwytw wrote:
| > example of "power the world innovations" included
| "compile Linux kernel"
|
| This is a bad faith argument. You should assume compile
| Linux kernel = doing any software development if you
| expect to have a rational discussion with anyone.
| Rinzler89 wrote:
          | Since your comments are deviating in a bad faith
          | direction, with you trying to score _gotchas_ off
          | interpretations of various words in my comments
          | instead of sticking to the chip comparison topic at
          | hand, I will have to stop replying to you, as we
          | can't have an objective and sane debate at this
          | point. Peace.
| qwytw wrote:
| > I cannot emphasise strongly enough how niche "compile
| the Linux kernel"
|
          | You can replace that with compiling pretty much any
          | software in general, or doing any real software
          | development on it, which isn't even remotely
          | niche...
| ben_w wrote:
| I think that even software development in general is
| pretty niche; about 29 million of us worldwide, putting
| us significantly behind sex workers (40 million or so),
| which I get the impression most regard as (at the very
| least) an unusual profession?
|
          | I want to say it's also _way_ behind the number of
          | YouTube channels, but I can't find a citation for
          | the 3rd party claim of "114 million" active
          | channels, though obviously most are not
          | professionals and there's not a 1-1 requirement
          | between channels and people.
| qwytw wrote:
| I doubt there are more professional video editors than
| software developers (or people working in related
| areas/fields).
|
          | Even if we focus on video editing, the apps
          | available on the iPad are crippled by having very
          | low amounts of memory, which also makes
          | "professional" usage rather difficult. The iPad Pro
          | is mainly a luxury product for people who just want
          | the "best" iPad and don't really care about the
          | cost.
|
| > 114 million" active channels
|
| You should be comparing this to the number of active
| Github accounts or something like that which seems to be
| about the same.
| dagmx wrote:
| There was no M3 iPad so the question is incorrect.
|
| But if you're talking compared to the M2, it has a number
| of updates
|
| 1. Significant performance and efficiency gains
|
          | 2. New GPU with raytracing and mesh shading, and
          | much higher performance.
|
| 3. AV1 decode
|
| 4. New display controllers required for the new tandem
| OLED
|
| 5. Huge upgrade to the neural engine
|
| I'm sure I'm missing stuff, but the M4 iPad Pro is a
| legitimate step up from the M2 for capabilities. Unless
| you fall in the camp of it being just a media consumption
| device
| Rinzler89 wrote:
          | Sure, those are nice improvements, no question
          | about it, but none of them change the iPad's
          | fundamentals or unlock new possibilities for it: it
          | wasn't previously compute limited but limited by
          | iPad OS. It's also not just my opinion but that of
          | almost all the users who reviewed the M4 iPad Pro,
          | like MKBHD, calling it an overpriced Netflix
          | machine.
| dagmx wrote:
          | I mean, if your question is "what new capabilities
          | did it unlock?" then doesn't that apply to the
          | whole CPU market? Has any of the stuff you
          | mentioned fundamentally changed on the desktop in
          | the last decade-plus?
          |
          | People like faster and snappier things. Along with
          | that, the new hardware unlocks the use of tandem
          | OLEDs, which is a big change for HDR creation and
          | color accuracy. They go hand in hand.
|
| A lot of people create on iPads. I used to work in film
| and almost all my concept art friends have shifted over
| to iPad. A lot of my on set friends use it for on set
| management of content, visualization and rushes.
|
| The reviewers you mentioned don't use the device that
| way. Would I similarly be right in taking their opinion
| about how niche it is to run Linux?
|
| Like this argument you're proposing just boils down to
| "it doesn't solve my needs and therefore I can't imagine
| it solving other people's needs"
| Rinzler89 wrote:
          | _> Like this argument you're proposing just boils
          | down to "it doesn't solve my needs and therefore I
          | can't imagine it solving other people's needs"_
|
          | I never said that. I asked what needs the M4 solves
          | that the M3/M2 couldn't. I asked that because
          | people keep bringing up the M4 in discussions
          | arguing against x86/Qualcomm chips, how they're
          | slower than Apple's latest M4 chips, and to that I
          | counter with the fact that in a lot of cases the
          | M4's extra performance over x86/Qualcomm is
          | irrelevant, since x86/Qualcomm chips solve
          | different and a lot more diverse problems than the
          | highly restrictive and niche problems the iPad
          | solves.
|
          | And it's not about me, because those are not my
          | needs. I don't compile the Linux kernel or do
          | CAD/CAE or microbiology simulations, but to me (and
          | to society and humanity) those are still more
          | important than movie writers having a slightly
          | faster iPad for drafts; it's not like that was the
          | reason most movies suck nowadays.
| ben_w wrote:
          | > I never said that. I asked what needs the M4
          | solves that the M3/M2 couldn't. I asked that
          | because people keep bringing up the M4 in
          | discussions arguing against x86/Qualcomm chips, how
          | they're slower than Apple's latest M4 chips, and to
          | that I counter with the fact that in a lot of cases
          | the M4's extra performance over x86/Qualcomm is
          | irrelevant, since x86/Qualcomm chips solve
          | different and a lot more diverse problems than the
          | highly restrictive and niche problems the iPad
          | solves.
|
| If you think one brand of "chips solve different and a
| lot more diverse problems" than another, it sounds like
| you don't know what "Turing machine" means.
|
| All chips can always do what other chips can do --
| eventually.
|
| M4 is faster. That's it. That's the whole selling point.
| Rinzler89 wrote:
| _> M4 is faster. _
|
          | Faster at what exactly? Where can I buy these M4
          | chips to upgrade my PC with to make it faster, as
          | you claim? Oh, it's only shipped as part of a very
          | locked down tablet OS and restricted ecosystem,
          | with totally different apps than those running on
          | the x86/generic ARM chips, which can run anything
          | you write for them? OK, fine, but then what's the
          | point of it being faster than those other chips if
          | they can't run the same SW?
|
| Like I said, you're comparing a Ferrari to a van. It's
| faster yes, but totally different use cases. And the
| world runs mostly on people driving vans/trucks, not on
| people driving Ferraris.
| ben_w wrote:
          | Your complaint is more like calling a people
          | carrier an overpriced sportscar because of its
          | inability to function as a backhoe, while ignoring
          | evidence not only from all the people who use
          | people carriers, but also disregarding the usage
          | and real-world value evidence in the fact that the
          | particular company behind this people carrier
          | manages to be wildly popular despite above-average
          | prices for every single model.
| dagmx wrote:
| You're arguing multiple axes and this argument feels
| really nonsensical to me as a result.
|
| So first of all, your entire argument hinges on YOUR
| belief that the iPad is just a consumption device. So you
| don't believe the M4 is a significant jump over the M2,
| even when I give you reasons that it is.
|
| Then your argument hinges on the comparison to Qualcomm
| SoCs, but isn't the use of the iPad irrelevant unless you
| also believe it'll not make its way to other devices?
| Which feels unfounded.
|
| Those are two distinct arguments that IMHO have no
| bearing on each other unless you also make the two
| assumptions that I think you're erroneously making.
| ben_w wrote:
          | > like MKBHD, calling it an overpriced Netflix
          | machine.
|
| I've not seen that quote anywhere.
|
          | Google no longer reliably finds exact quotes, so a
          | failed search doesn't mean the quote isn't present
          | on the internet. Do you have a citation for that?
| Rinzler89 wrote:
| It wasn't a quote, just implied via other words and
| expressions. Watch his M4 iPad review.
| ben_w wrote:
| I did*, he didn't.
|
| You might have "overpriced _if used as a_ ", but not
| "it's overpriced and only a".
|
| * Assuming "What" is a typo for "watch"; gosh, isn't
| auto-corrupt an annoying part of this era...
| gessha wrote:
| It's not a game changer, never meant to be. Apple updates
| the hardware, shows you the possibilities and charges you
| an arm and a leg for it. What you do with it is your own
| business *
|
| Another thing is you're comparing apples to oranges.
| iPads aren't meant to be used in that way and if you want
| to do it anyway you have to hack your way there. You
| should be perfectly capable of compiling the Linux kernel
| on their more general purpose machine - the Mac.
|
| * within the limitations of the iPad AppStore
| Rinzler89 wrote:
| _> iPads aren't meant to be used in that way _
|
          | Correct, and if they're only meant to be used
          | within the restrictive limitations of the AppStore,
          | then who cares about them, other than the small
          | market of iPad OS AppStore users, most of whom
          | don't even use the full potential of the M3/M2 on
          | their iPads, let alone need the M4?
          |
          | Chips from Intel, AMD, Nvidia, etc. are big news
          | because they're generic compute chips, so they
          | unlock new use cases and research that can improve
          | or even change the world, not just run iOS apps a
          | bit faster.
          |
          | For example, do you think those Apple EEs are using
          | iPads to design the M4 chips, or x86/Mac computers?
| 0x457 wrote:
| > the year of Linux on the desktop
|
| It already happened: Windows + WSL2. Windows is the best
| linux distro.
| Rinzler89 wrote:
        | Eh, yes and no. The Windows kernel plus its backend
        | features for security, emulation and virtualization,
        | which enable things like WSL2 and backwards
        | compatibility, are great, but they're hampered by a
        | crap front end with news, ads, web search in the
        | start menu, and dark patterns everywhere being forced
        | down your throat, like OneDrive holding your files
        | ransom in the cloud if you don't pay attention, or
        | the failed push for the Recall feature with
        | unencrypted screenshots, or Windows Explorer being
        | slow as shit due to it now running JavaScript code
        | for some reason.
|
| All in all, I'm moving away from it to linux, as I don't
| like the direction Microsoft is taking, and learning to
| fix the rough edges of Linux will serve me better in my
| career than trying to keep up with and fight the dark
| pattern frog that Microsoft keeps boiling slowly.
|
        | Mutahar on YouTube did a review of a leaked copy of
        | the Windows 11 Chinese Government Edition, which is
        | Windows 11 Enterprise with everything non-essential
        | stripped out: no AI, no telemetry, no OneDrive, no
        | ads, no news, no Edge, no media player, no web search
        | in the start menu, no Defender, nothing, just the
        | kernel, drivers, window manager and Explorer, kind of
        | like a lightweight Linux distro. If Microsoft would
        | sell that to us consumers I would buy it in a
        | heartbeat. But no, we get the adware and spyware
        | version.
| monocasa wrote:
    | To a degree, that makes sense. Due to the end of Dennard
    | scaling, most high end raw compute makes more sense as
    | more but simpler cores, and has for a long time. For
    | instance, the Blue Gene supercomputers were made of tons
    | of individually pretty anemic PowerPC 4xx cores.
|
| For battery powered devices, race to sleep is the current
| meta, where once you have some bit of heavy compute work, you
| power up a big core, run it fast to get through all of the
| work as quickly as possible, and get back to low power as
| quickly as possible.
| hajile wrote:
      | Because power ramps superlinearly with clock speed
      | (dynamic power scales roughly with V^2*f, and voltage
      | has to rise with frequency), there's a limit to how
      | high you can clock before the cost of racing outweighs
      | the savings of running for a shorter time.
      |
      | I believe I've read that Apple's chips run under 3GHz
      | unless their job runs longer than 100-150ms. I suspect
      | that's their peak race-to-sleep range.
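      |
      | A toy model of that crossover (every constant here is
      | made up for illustration; the only physics assumed is
      | dynamic power ~V^2*f, i.e. roughly f^3 once voltage
      | tracks frequency, plus a fixed power floor while the
      | core complex is awake):
      |
      |   # Energy to finish a burst of work at clock f, then
      |   # sleep for the rest of a fixed time window.
      |   def joules(f_ghz, work_cycles=3e8, window_s=0.5,
      |              k=0.05, active_w=2.7, sleep_w=0.05):
      |       busy_s = work_cycles / (f_ghz * 1e9)
      |       awake = (k * f_ghz**3 + active_w) * busy_s
      |       asleep = sleep_w * (window_s - busy_s)
      |       return awake + asleep
      |
      |   for f in (1.0, 2.0, 3.0, 4.0):
      |       print(f, round(joules(f), 3), "J")
      |
      | With these made-up numbers energy bottoms out around
      | 3GHz: below that the awake-floor term dominates (finish
      | faster!), above it the f^3 term means racing costs more
      | than it saves.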
| monocasa wrote:
        | Clocks aren't the only way to race. Powering up the
        | massive core, with its over a dozen pipelines,
        | massive reorder buffer, etc., is another way. This is
        | the way Apple has chosen to focus on.
| sitkack wrote:
    | I run Linux in a VM on my M1 Mac all day long. That VM
    | was the fastest Linux instance I had anywhere, faster
    | than my 5950 and way faster than anything in the cloud.
    | Not using it is your choice.
| g_p wrote:
| Out of interest, I assume this was an arm64 build of Linux?
| Which hypervisor or VM software did you use?
| seabrookmx wrote:
| Not OP, but likely lima or colima if working with
| containers.
| spockz wrote:
| Is Lima or Colima so much better than docker desktop for
| Mac these days? In what respect, the performance?
| seabrookmx wrote:
          | I don't have recent experience with Docker Desktop.
          | Around the time they went commercial we still had
          | lots of stability issues with it (eating memory and
          | needing to be restarted), so we told their sales
          | people to take a hike. We tried Rancher Desktop for
          | a while, but something with their docker-cli
          | implementation didn't play nice with dev
          | containers, so our remaining Mac users jumped to
          | colima. For my part I'd had enough of faking
          | docker/containerd on a non-native platform, so I'm
          | daily driving Linux (Fedora on a Framework 13). No
          | more VMs for me :)
| sitkack wrote:
| Arm64 Ubuntu 22.04 LTS.
|
| VMWare Fusion (now free beer) and Podman.
|
| Still running the beta of Fusion which I have literally
| had zero issues with. I have not benchmarked podman.
|
        | https://blogs.vmware.com/teamfusion/2024/05/fusion-pro-now-a...
|
| https://podman-desktop.io/docs/installation/macos-install
| Rinzler89 wrote:
      | Sir, in the comment you're replying to, I was talking
      | about the iPad, not the Mac. On the Mac you can indeed
      | use most of the M chips' potential; on the iPad you
      | can't.
| sitkack wrote:
| Look at such a fruitful branch of the conversation that
| you created! Did you think that would be an option when
| you bought it? Don't answer.
| entropicdrifter wrote:
| Asahi Linux runs great on all of the M-series Macs
| treyd wrote:
        | There are a lot of long-tail issues with peripherals
        | that still need to be worked out, to be fair. Sound
        | wasn't enabled until fairly recently because they
        | were still testing to ensure the drivers couldn't
        | damage the hardware.
| evilduck wrote:
| To be fully fair, that's the status quo for Linux on
| hardware that didn't ship with Linux support as a selling
| point. I had issues with Intel Wifi 6 hardware
| compatibility on a Lenovo laptop within the last year or
| so. On 3 different x86 laptops with fingerprint readers,
| I've never been able to get any of them to function in
          | Linux. On one of my laptops, the sound works but it's
| substantially degraded compared to the drivers for
| Windows on the same hardware. Another supports the USB4
| port for data but won't switch to DP alt mode with it,
| though it works in Windows. On the other hand, my Steam
| Deck shipped with Linux and everything works great.
|
| Linux not supporting your hardware perfectly is just the
| nature of the beast. Asahi on Apple hardware meets [very
| low linux user] expectations.
| awesomepeter wrote:
| I think my M3 isn't supported yet and won't be for some
| time
| Rinzler89 wrote:
        | Yeah, on the Macs, not on the iPad, which is what my
        | comment was talking about: the M4 chip is just wasted
        | there, and only for the flex.
| jjtheblunt wrote:
          | https://www.linuxfordevices.com/tutorials/linux/linux-on-the...
| turtlebits wrote:
| The fact that Apple put a fast chip in a limited device
| doesn't detract from their engineering ability.
| dagmx wrote:
| The idea that Apple lost their most important talent came from
| SemiAnalysis. It's a saucy idea so it spread from there without
| much backing.
|
| They're a tech news blog mixed with heavy doses of dramatized
| conjecture.
|
| The primary author has written multiple times that Apple has
| lost lots of their key talent but has never been able to back
| it up beyond "I keep tabs on LinkedIn".
|
| End of the day, tech folks like drama as much as the next
| person. Sites like that are the equivalent to celebrity focused
| tabloids.
| forrestthewoods wrote:
  | > I don't understand where this rhetoric that Apple has
  | lost most of its chip design talent and is in trouble is
  | coming from.
|
| Why are you saying this? The article doesn't seem to imply
| that?
|
| Apple isn't in any kind of trouble. But the gap between Apple
| and the competition does appear to be closing. Right now
| Apple's biggest advantage is they buy up all of TSMC supply for
| the latest node. They're always a little faster because they're
| always a node ahead.
|
| Qualcomm Snapdragon X Elite is reasonably impressive from a CPU
| and x86 emulation perspective. The GPU hardware seems kinda
| sorta ok. But their GPU drivers are dog poop. Which is why they
| suck for Windows gaming.
|
| I hope AMD and Nvidia start to release ARM SoC designs for
| laptops/desktops next year. That could get interesting fast.
| All hail competition!
| hajile wrote:
| The first 4 minute mile was revolutionary. It's not ordinary
| today, but isn't super surprising either.
|
    | Apple bet big that we hadn't hit the limits of how wide
    | a machine can go. They created ARM64 to push this idea,
    | as it tries to eliminate things that make going wide
    | hard.
|
| Everyone wrote off iPhone performance as mobile only and not
| really representative of performance on a "real" computer.
|
| M1 changed that and set the clock ticking. Once everyone
| realized it was possible, they needed to adjust future plans
| and start work on their own copy (with companies like Nuvia
| having a head start in accepting the new way of things due to
| leadership who believed in M1 performance).
|
| In the next few years, very wide machines will just be the
| way things are done and while they won't be hyper common,
| they won't be surprising either.
| Wytwwww wrote:
| > I hope AMD and Nvidia start to release ARM SoC designs for
| laptops/desktops next year
|
| Or AMD / Intel could just make more power efficient x86 core?
| What would they gain by switching to ARM?
|
    | Also, developing a competitive ARM core in less than a
    | year is pretty much impossible; it took Qualcomm several
    | years to catch up with Arm's own cores (hence the
    | title...). They even had to buy another company to
    | accomplish that.
| forrestthewoods wrote:
| > Or AMD / Intel could just make more power efficient x86
| core? What would they gain by switching to ARM?
|
      | I mixed some thoughts in rewrites. I hope Nvidia
      | releases a laptop/desktop SoC. AMD is getting better at
      | x86 mobile; the Steam Deck is pretty decent. I hope
      | they keep getting better.
|
| I'd like to see high-end integrated GPU on a SoC from AMD
| for laptops/desktops. That doesn't exist yet. It requires a
| discrete GPU and there's a kajillion issues that stem from
| having two GPU paths. Just give me one SoC with shared
| memory and a competitive GPU. I don't care if it's ARM or
| x86.
|
| > Also developing a competitive ARM core in less than a
| year is pretty much impossible
|
| What makes you think they'd just be starting? Nvidia has
| been shipping ARM cores for years. Nintendo Switch is an
| Nvidia Tegra.
|
| Here's an article from October 2023 claiming that Nvidia is
| working on CPUs for Windows to ship in 2025. Qualcomm has
| an "ARM for Windows" exclusive that expires at some point
      | in 2024. https://www.reuters.com/technology/nvidia-make-arm-based-pc-...
| Derbasti wrote:
| And that's exactly why it's so cool to see this in the Surface
| tablets.
|
| ...he writes, on his Snapdragon Surface tablet.
|
| (Also, I find it hella entertaining that a snapdragon is a cute
| little flower, martial naming notwithstanding)
| paulmd wrote:
  | It comes from the idea that apple didn't advance
  | significantly in M2 and M3, claims which are equally
  | untrue but equally pervasive.
|
| People were absolutely sure that apple made basically no real
| advancement but just were running up the TDPs and that's where
| all their M2 and M3 gains came from. That was the narrative for
| the last 2 years. But then you look at geekerwan and apple is
| making 20, 30% steps every gen, and with perf/w climbing
| upwards every gen too. Mainstream sites just didn't want to do
| the diligence, plus there's a weird persistent bias against the
| idea of apple being good. It's gotta just be the node... or the
| accelerators... or the OS...
|
| Reminder that we sit here 3 years later and even giving AMD a
| node _advantage_ (7940HS vs M2 /M3 family) they're still
| pulling >20W _core-only_ single-thread power (even ignoring the
| x86 platform power problems!) to compete with a 5W M2 thread.
| And yes, you can limit it but then they lose on performance
| instead.
|
| https://youtu.be/EbDPvcbilCs?t=928
|
| https://youtu.be/EbDPvcbilCs?t=1000
|
| https://youtu.be/EbDPvcbilCs?t=708
|
| But yeah, anyway, that's where it came from. People completely
| dismissed M2 and M3 as having any worthwhile advancement (even
| with laptops they could objectively analyze!) and were in the
| process of repeating this for M4 yet again. So why wouldn't you
| think that three generations of stagnation indicates apple has
| a problem? The problem is that apple hasn't actually stagnated
| - there is an epistemic closure issue and a lot of people won't
| admit it or aren't exposed to that information, because it's
| being proxied through this lens of 25% of the tech community
| being devout apple anti-fanboys.
|
| It's a problem with every single apple/android thread too.
| People will admit with the ML stuff that apple does a much
| better job handling PII (keeping it on-device, offering e2e
| encryption between devices, using anonymizing tokens when it
| needs to go to a service), and people intellectually understand
| they use the same approaches and techniques in other areas too,
| but suggest that maybe apple isn't quite as bad as the literal
| adtech company and you'll get the works. People don't want to
| think of themselves as fanboys but... there is a large
| contingent that does all the things fanboys would do and says
| all the things fanboys would say, and acts how fanboys would
| act, and nevertheless thinks they're the rock of neutrality
| standing between two equally-bad choices. False balance is very
| intellectually comforting.
|
| https://paulgraham.com/fh.html
| stefan_ wrote:
| I don't understand what prompted you to bring up the M4 in a
| meticulously researched deep-dive on the Oryon architecture.
| The original article doesn't mention it once. Is this just
| flame bait?
| refulgentis wrote:
| Thank you for saying it out loud. 100%. Seen a handful of
| very strange top comments this week that you usually wouldn't
| see on HN, I assume because of the holidays.
|
| i.e. more practicing rhetoric than contributing content,
| and/or, leaning on rhetorical strategies to do a more
| traditional definition of trolling circa early 2000s
| Slashdot. i.e. generate tons of replies by introducing a
| tangent that'll generate conversation.
| refulgentis wrote:
  | > I don't understand where this rhetoric that Apple has
  | lost most of its chip design talent and is in trouble is
  | coming from.
|
| I think...you?
|
| I read the article in full a few hours ago and didn't see
| anything like that.
|
  | I skimmed it again, and also gave grepping TFA a go for
  | "lost", "talent", "trouble": 0 results (Chrome, Command-F).
| m463 wrote:
| if you tried to say "Apple still makes the world's most
| advanced and efficient chips" instead of qualifying it with
| "consumer" who would you be cutting out?
|
| I think "consumer" literally means people who consume things,
| instead of people who create things. Before detouring into
| "content creators", people who create things are frequently
| engineers and scientists, who apple does not target.
|
| Unfortunately, I think apple doesn't do general purpose
| computing. sigh.
| phkahler wrote:
| Get proper Linux support and the RISC-V variant they seemed to be
| working on and I'll buy the laptop ;-) I have no interest in
| Windows and even less so on ARM.
| quic_bcain wrote:
| You don't want a RISC-V laptop. Not yet, at least. It will take
| quite some doing before those compete with x86_64/arm laptops.
|
| Beautiful thing about Windows computers is that it's generally
| not too hard to make them into linux/BSD/etc computers. :)
|
| Qualcomm [1] and some vendors [2][3] are making progress
| towards linux support though.
|
| [1] https://www.phoronix.com/news/Linux-6.8-ARM-Changes
|
  | [2] https://www.phoronix.com/news/ASUS-Vivbook-S-15-Elite-X-Linu...
|
| [3] https://www.phoronix.com/news/TUXEDO-Snapdragon-X-Elite
| mixmastamyk wrote:
| To get a good laptop, often you need to start with a bad one.
| hajile wrote:
    | Jim Keller is confident that Ascalon will have
    | performance close to Zen 5 when it is finished later
    | this year. Chips could be in hand sooner than a lot of
    | people seem to think.
| rjsw wrote:
| What GPU will be in it?
| kernal wrote:
| >Finally, Snapdragon X Elite devices are too expensive. Phoenix
| and Meteor Lake laptops often cost less, even when equipped with
| more RAM and larger SSDs. Convincing consumers to pay more for
| lower specifications is already a tough sell. Compatibility
| issues make it even tougher. Qualcomm needs to work with OEMs to
| deliver competitive prices.
|
| Qualcomm can start by working with their accountants and
| reducing the price they charge for their SoCs. Rumors
| indicate the Snapdragon 8 Gen 4 mobile SoC, with Oryon
| cores, will cost between $220 and $240 USD.
| perfsea wrote:
| I wonder how effectively it can utilize all of its execution
| units for common workloads. Frontend boundedness is often a
| big issue, especially with JITed code.
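|
| A back-of-envelope way to frame it, with assumed numbers
| (the 8-wide figure is what Oryon's decoder is reported to
| be; the IPC values are purely illustrative):
|
|   # Fraction of peak issue slots actually used.
|   def utilization(achieved_ipc, issue_width):
|       return achieved_ipc / issue_width
|
|   # Branchy, icache-hungry JITed code often sustains an IPC
|   # around 1-2, so a frontend-bound workload may keep only a
|   # fraction of an 8-wide core's execution units busy.
|   for ipc in (1.0, 2.0, 4.0):
|       print(ipc, utilization(ipc, 8))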
___________________________________________________________________
(page generated 2024-07-11 23:01 UTC)