[HN Gopher] Snapdragon X Elite benchmarks: Impressive gains over...
       ___________________________________________________________________
        
       Snapdragon X Elite benchmarks: Impressive gains over M2 and desktop
       CPUs
        
       Author : magnio
       Score  : 79 points
       Date   : 2023-10-30 15:13 UTC (7 hours ago)
        
 (HTM) web link (www.notebookcheck.net)
 (TXT) w3m dump (www.notebookcheck.net)
        
       | ZiiS wrote:
        | Next year's chip with twice the power budget winning a
        | first-party benchmark is the status quo; it would only be news
        | if this did not happen.
        
         | S_A_P wrote:
          | It almost reads like a Snapdragon X marketing flyer. Is this
          | page legitimate? Reading through the article and scanning a
          | few benchmarks, I see that it's still behind most of today's
          | chips but is within "striking distance". What does that even
          | mean? The chip is going to become sentient, attack the
          | M2/AMD/Intel chips, and get its performance revenge?
        
           | jandrese wrote:
           | I think it means that instead of being 3-4 years behind they
           | will only be 2 years behind. It's hard to say though because
           | Apple has been a hell of a moving target.
        
             | jorvi wrote:
              | > It's hard to say though because Apple has been a hell of
              | a moving target.
             | 
             | Have they?
             | 
             | They were stuck on Intel's incremental schedule for years
             | and years, then they made a gigantic leap with the
             | M1-series, and it has been incremental again since then.
              | Bigger increments than 7th gen > 8th gen Intel, but it's a
             | bit ridiculous to pretend they're making 8th gen Intel > M1
             | leaps every year.
        
               | jandrese wrote:
               | The competitors have been making only incremental
               | improvements as well, which has left them with just as
               | big of a gap year after year.
        
               | vlovich123 wrote:
                | It's important to remember that they are 3-4 years ahead
                | of ARM competitors (i.e. mobile). They were the first to
                | reach sufficient volume (plus years of planning for SW
                | and HW) for their mobile CPU tech to be scaled up to
                | their laptop/desktop offerings, outpacing AMD/Intel on a
                | per-watt basis (and being at least competitive on an
                | absolute basis).
                | 
                | For example, they have 800GB/s of unified memory
                | bandwidth with zero-copy sharing between the GPU and CPU.
                | That's not something anyone else has managed yet. They're
                | not making huge leaps every year, but they're making
                | enough to keep the advantage from their previous leap.
                | That leap was a result of process node and architectural
                | improvements, and they're continuing to buy up
                | manufacturing capacity from TSMC to retain their 6-12
                | month edge over competitors.
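                | 
                | As a rough illustration of what zero-copy sharing means
                | in practice, here's a minimal sketch using Metal's shared
                | storage mode on Apple silicon (illustrative only; the
                | buffer size and usage are made-up assumptions):
                | 
                |   import Metal
                | 
                |   // On a unified-memory SoC, .storageModeShared gives
                |   // the CPU and GPU the same physical allocation, so no
                |   // staging copy or blit is needed between them.
                |   let device = MTLCreateSystemDefaultDevice()!
                |   let count = 1024
                |   let buffer = device.makeBuffer(
                |       length: count * MemoryLayout<Float>.stride,
                |       options: .storageModeShared)!
                | 
                |   // The CPU writes directly into the memory that a GPU
                |   // compute kernel would read -- no copy either way.
                |   let ptr = buffer.contents()
                |       .bindMemory(to: Float.self, capacity: count)
                |   for i in 0..<count { ptr[i] = Float(i) }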
        
               | jorvi wrote:
               | They have certainly built an amazing architecture.
               | 
                | I'll admit I stated it rather uncharitably, a bit out of
                | annoyance at the extreme hyperbole surrounding the
                | M-series processors ("it's better than an i9 + 3080
                | system! Next year it'll beat an i9 + 4090!!").
                | 
                | Funnily enough, I plan to switch to an M-series MacBook
                | as soon as Asahi Linux is completely out of beta.
        
               | giantrobot wrote:
                | I replaced a 16" i9 MacBook Pro with an M1 MacBook Air
                | right after it was released. The Air was _at least_ as
                | fast as the MBP running full tilt with my workloads. The
                | Air has no fan so it's completely silent, and I've rarely
                | experienced any thermal throttling on it. The i9 MBP's
                | fans would spin up if you looked at it sideways, and
                | running full out it sounded like a jet engine. The Air
                | also lasts all day on battery at a fraction of the weight
                | and a fraction of the thickness of the MBP it replaced.
               | 
                | The M-series chips beat a lot of Intel and AMD offerings
                | in _some_ form factors. It's not the greatest chip of all
                | time, and there are Intel and AMD offerings that beat
                | them in single-thread performance or offer more
                | multi-thread performance at some price points.
               | 
                | At the high end, I think Intel and AMD (plus a GPU) are
                | more competitive with Apple's kit so long as the OEM
                | makes nice hardware. At the low end, the entry-level M2s
                | have ridiculous capability compared to x86 machines.
                | Maybe Qualcomm will actually be in the running with the
                | Snapdragon X if they can ditch the Windows millstone.
        
               | caycep wrote:
                | Granted, I think maybe more eyes are on this Snapdragon
                | release as it might have some Apple/Nuvia special sauce
                | in this batch. Albeit I'm not sure how motivated Gerard
                | Williams and co. are to be designing mobile/laptop chips
                | again, given the whole point of their leaving was to do
                | server chips...
        
               | AnthonyMouse wrote:
                | Apple is doing something interesting, but it's not at all
                | mysterious _how_ they're doing it:
               | 
               | > outpace on a per watt basis with AMD/Intel (and be at
               | least competitive with on an absolute basis)
               | 
                | They're the first on TSMC's new process nodes, so
                | whenever a new node comes out, people will compare the
                | new Apple chip on the new node to the previous-generation
                | AMD chip on the previous node. If you compare them this
                | way, then the newer node outperforms the older one, as
                | expected. Then AMD releases a chip on the new node and
                | the advantage basically disappears, or comes down to
                | different trade-offs, e.g. selling chips with more cores,
                | which consequently have a higher TDP in exchange for
                | higher multi-thread performance.
               | 
               | Intel hasn't been competitive with either of them on
               | power consumption for some time because Intel's
               | fabrication is less power efficient than TSMC's.
               | 
               | But if Apple's advantage is just outbidding everyone at
               | TSMC, that only lasts until TSMC builds more fabs (the
               | lead times for which get shorter as the COVID issues
               | subside), or someone else makes a better process.
               | 
                | > For example, they have 800GB/s of unified memory
                | bandwidth with zero-copy sharing between the GPU and CPU.
                | That's not something anyone else has managed yet.
               | 
                | It's also not something that is at all difficult to do.
                | It's a straightforward combination of two known
                | technologies: integrated GPUs have unified memory, and
                | discrete GPUs have high memory bandwidth.
                | 
                | There is no secret to attaching high-bandwidth memory to
                | a CPU with an integrated GPU; it's just a trade-off that
                | traditionally isn't worth it. CPUs typically have more
                | memory than GPUs, but if it's unified and you want to use
                | the fast memory, you're either going to get less of it or
                | pay more, and CPU applications that actually benefit from
                | that amount of memory bandwidth are uncommon.
               | 
                | One of the rare applications that does benefit from it is
                | LLM inference, and it's possible that's enough to create
                | market demand, but there's no real question of whether
                | they can figure out how to build that -- everybody knows
                | how. It's only a question of whether customers want to
                | pay for it.
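                | 
                | As a back-of-the-envelope sketch of why LLM inference is
                | one of those bandwidth-bound applications (the model
                | size, quantization, and the 800GB/s figure above are
                | illustrative assumptions, not measurements):
                | 
                |   // Each generated token streams roughly the whole set
                |   // of weights from memory, so the token rate is capped
                |   // at about bandwidth / model size.
                |   let params = 70e9        // assumed 70B-parameter model
                |   let bytesPerParam = 0.5  // assumed 4-bit quantization
                |   let modelBytes = params * bytesPerParam  // ~35 GB
                |   let bandwidth = 800e9    // bytes/s
                |   let tokensPerSec = bandwidth / modelBytes
                |   print(tokensPerSec)      // ~22.9 tokens/s upper bound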
               | 
               | And what we may see is something better -- high bandwidth
               | memory as a unified L4 cache. So then your CPU+iGPU gets
               | the amount of HBM traditionally found on a discrete GPU
               | and the amount of DRAM traditionally attached to a CPU
               | and you get the best of both worlds with no increase in
               | cost over a CPU + discrete GPU. But currently nobody
               | offers that -- the closest is the Xeon Max which has HBM
               | but no integrated GPU.
               | 
                | And none of these are why _Qualcomm_ isn't competitive
                | with Apple -- they're not competitive with AMD or Intel
                | either. It's not the fabs or the trade-offs. Their
                | designs just aren't as good. But all that means is that
                | they should hire more/better engineers.
        
         | lern_too_spel wrote:
         | It looks like it wins even with a smaller power budget. Apple
         | is going to have to pull a rabbit out of the hat with the M3
         | for it to be competitive.
         | 
         | https://www.xda-developers.com/snapdragon-x-elite-benchmarks...
         | shows a 23W Qualcomm processor having almost 50% higher
         | multicore performance than an M2 on Geekbench and more than
         | doubling the multicore performance on Cinebench.
        
       | officeplant wrote:
        | Can't wait for it to be complete trash in Windows and to suffer
        | the same issues I've had with every Snapdragon-powered Windows
        | machine, including the 2023 dev box. The GPU driver will crash
        | under heavy web rendering loads (Twitter GIF-heavy threads), and
        | it will probably be undercooled, kneecapping the P cores and
        | pushing applications to the E cores as soon as it warms up,
        | never to let them go back. Although I will note the MS ARM dev
        | box at least was adequately cooled, if poorly vented, so this
        | was a rare occurrence.
        | 
        | On top of that, Linux support will be non-existent, leaving you
        | to struggle as Microsoft's most underserved user population.
        | 
        | Good luck to anyone who takes the plunge; even as an ARM
        | enthusiast, I gave up on Qualcomm products as a whole.
        
         | Moral_ wrote:
          | The chip already has upstream Linux support; the benchmarks
          | they showed in their presentation were on 6.5.
        
           | wtallis wrote:
           | Did they show any GPU benchmarks or otherwise give an
           | indication that the Linux support will be relatively full-
           | featured? Because what I've seen so far is just a Geekbench
           | CPU score on Linux that was significantly higher than the
           | Windows score because fan control wasn't working on Linux.
        
             | hmottestad wrote:
              | Dunno myself, but I do remember that the company behind the
              | initial chip design was targeting the server market, so
              | that's probably the reason for Linux support. I kinda doubt
              | that the GPU has any Linux support; it's just a regular
              | Qualcomm-developed GPU.
        
         | tssva wrote:
         | "It will probably be undercooled causing P cores to be
         | kneecapped push applications to the E cores as soon as it warms
         | up, and never let them go back."
         | 
         | It doesn't have any E cores.
        
       | wmf wrote:
       | Other thread: https://news.ycombinator.com/item?id=38069887
        
       | gnabgib wrote:
       | AnandTech's article (49 points, 20 comments, 3 hours ago)[0]
       | seems like a better source/write-up.
       | 
       | [0]: https://news.ycombinator.com/item?id=38069887
        
         | CoolGuySteve wrote:
          | I disagree; this article shows how it compares against their
          | own benchmarks.
          | 
          | In particular, you can add a 7840U laptop to the comparisons to
          | see how the 23W Qualcomm part compares to the current low-power
          | AMD CPU at a comparable wattage.
        
       | asicsarecool wrote:
       | Framework motherboard please
        
       | hulitu wrote:
       | > Thus, it is becoming increasingly clear that the Snapdragon X
       | Elite's single-core performance can rival the best of current
       | offerings from Intel, AMD, and Apple.
       | 
       | The word "laptop" is missing from this sentence.
       | 
        | And since laptops have soldered CPUs, I think this would not
        | help the adoption of an exotic architecture with no software
        | support. ("Supports DirectX and OpenGL" - they just forgot to
        | mention the version numbers.)
        | 
        | I would love to see computers other than x86, but at the moment
        | they are limited to routers, laptops, and things like the
        | Raspberry Pi or Banana Pi.
        
       ___________________________________________________________________
       (page generated 2023-10-30 23:01 UTC)