[HN Gopher] Intel's Alder Lake big.little CPU design, tested: it...
       ___________________________________________________________________
        
       Intel's Alder Lake big.little CPU design, tested: it's a barn
       burner
        
       Author : neogodless
       Score  : 149 points
       Date   : 2021-11-04 13:24 UTC (9 hours ago)
        
 (HTM) web link (arstechnica.com)
 (TXT) w3m dump (arstechnica.com)
        
       | neogodless wrote:
       | Anandtech Review: https://www.anandtech.com/show/17047/the-
       | intel-12th-gen-core...
        
         | caycep wrote:
          | I feel like this review is more even-handed... but they don't
          | seem overly concerned by the close-to-300W elephant in the
          | room...
        
           | rsynnott wrote:
           | I mean, arguably if you're buying a desktop processor that
           | costs $700 just for the chip, you probably don't care THAT
           | much?
           | 
           | This is very much a niche product.
        
             | dijit wrote:
             | Still. Your VRMs pulling 90A and your cooling solution are
             | going to care.
             | 
              | So will your ears when that cooling solution has to try to
              | spread that thermal load.
        
       | paulpan wrote:
       | Looks like the i5-12600K is the real value play for Intel's 12th
       | gen. It essentially beats Ryzen 5600X at similar power
       | consumption. LTT review:
       | https://www.youtube.com/watch?v=-EogHCFd7w0
       | 
        | That said, the cost of ownership is likely much higher for Intel
        | vs. AMD - due to requiring DDR5 RAM and a new motherboard.
        
         | zucker42 wrote:
         | You don't need DDR5 RAM. DDR5 is a halo feature at this point,
         | only for the people for whom money is no issue. There's zero
         | reason to use DDR5 RAM with the 12600k (considering you could
         | be served better by spending the extra money on a better CPU or
         | more capacity). The performance benefit is varied and probably
         | not more than 5% at best.
         | 
         | The increased price of Z690 vs B550 and even X570 is
         | significant though.
        
           | Osiris wrote:
           | If that's true, what advantage is there to DDR5 besides lower
           | stock voltage?
        
             | GhettoComputers wrote:
             | I asked myself the same thing when DDR4 came out. I was
             | looking forward to skipping it, but Ryzen was able to
              | unofficially use ECC RAM, so that intrigued me. DDR5 will be
              | able to support larger RAM capacities as well, if I remember
              | correctly.
        
               | sliken wrote:
               | DDR4 didn't have much over DDR3, just a bit of bandwidth,
               | and most apps weren't that sensitive.
               | 
               | However DDR5 doubles the number of channels vs DDR4,
               | while also having a higher clock rate (more bandwidth),
               | so it's a bigger difference.
        
             | toast0 wrote:
             | DDR5 has more future promise, whether that's lower power,
             | higher bandwidth, higher capacity, maybe lower latency too.
             | There's something about DDR5 requiring ECC, but it's not
             | clear if that includes actual CPU/OS reporting or if it's
             | internally using ECC but with no error reporting.
             | 
             | But, I'm not sure how much of that shines through in the
             | first DDR5 product vs mature DDR4 products. Now is the part
             | of the cycle where the old memory is still high volume and
             | the new memory is starting to ramp up; in not too long,
             | we'll have the part of the cycle where they made too much
             | of the old memory and prices drop and it's time to scoop up
             | all the old memory you'll ever need.
        
             | smolder wrote:
             | One big thing is better resistance to cosmic ray bitflips
             | due to the on-chip error protection. (Different from ECC,
             | all DDR5 gets it.) Hopefully rowhammer style trickery is
             | effectively quashed now.
        
             | sliken wrote:
              | Twice as many channels, so twice as many cache misses in
              | flight. The Anandtech review shows significant gains on
              | multi-core workloads: the i9-12900K with DDR4 scores 61.33
              | int and 59.55 FP, while with DDR5 it gets 80.53 int and
              | 81.85 FP. Some of that is increased bandwidth, but having
              | more memory requests in flight helps as well.
              | 
              | Imagine all the cores generating cache misses and you have 2
              | channels of memory. When a channel handles a transaction,
              | it's unavailable for ~40 ns (keep in mind that at 3 GHz with
              | 2 instructions per cycle, that's 240 instructions per core).
              | 
              | Now with 4 channels you can have twice as many transactions
              | in flight, and your cores spend less time idle and waiting
              | on memory.
              | 
              | So that's over a 30% increase in performance going from DDR4
              | to DDR5. Of course, this doesn't matter if your application
              | has a high rate of cache hits.
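              | 
              | A rough back-of-the-envelope sketch of that math (plain
              | Python, using the numbers above: a ~40 ns channel busy time
              | and a 3 GHz core retiring 2 instructions per cycle):
              | 
              |     # instructions a core could retire while one memory
              |     # transaction ties up a channel
              |     busy_ns = 40            # DRAM channel busy time
              |     ghz, ipc = 3.0, 2       # core clock, instructions/cycle
              |     print(busy_ns * ghz * ipc)          # -> 240.0
              | 
              |     # more channels -> more misses serviced per microsecond
              |     def misses_per_us(channels, busy_ns=40):
              |         return channels * 1000 / busy_ns
              |     print(misses_per_us(2), misses_per_us(4))   # 50.0 100.0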
        
         | neogodless wrote:
         | Here's the specific segment comparing power consumption:
         | 
         | https://youtu.be/-EogHCFd7w0?t=503
         | 
         | In Blender, the i5-12600K draws about 125W, while the 5600X
         | appears to draw about 75W. In games, the difference is much
         | smaller.
         | 
         | You can also compare the heat dissipation of those two chips:
         | 
         | https://youtu.be/-EogHCFd7w0?t=604
        
       | freemint wrote:
        | *in synthetic benchmarks or multicore ALU-constrained workloads
        
       | flakiness wrote:
        | I wonder what the story is for Linux here.
        | 
        | I don't think it is possible for OSS projects to utilize this
        | big.little system: I tried to look at the instruction-level
        | details of the Thread Director but could find very little. If
        | that's the case, the future of the Linux desktop/laptop is even
        | dimmer than before.
        | 
        | Android has a downstream (partially upstream?) EAS scheduler to
        | handle this kind of architecture [1], but I'm not sure there is
        | any move to bring this to the Intel platform. As a Linux laptop
       | user, I hope someone does something.                 [1]
       | https://developer.arm.com/tools-and-software/open-source-
       | software/linux-kernel/energy-aware-scheduling
        
         | usr1106 wrote:
         | As a Linux desktop user I am still a bit disappointed about
         | AMD. My Ryzen 7 does not run 100% reliably (freezes many times
          | a year, although typically not more than once a month. The net
          | is full of similar stories, but nothing seems to help).
          | Independently, but equally annoyingly, the temperature sensors
          | are not well supported; obviously AMD does not share
          | documentation or contribute to the kernel. Don't remember the
          | details now.
         | Gave up frustrated 2 years ago or so.
        
           | GhettoComputers wrote:
           | Sounds like a motherboard issue. Never had that happen.
        
           | KingMachiavelli wrote:
           | FYI temperature sensors are fixed in the latest Linux kernel.
           | The only occasional bug I get is from the amdgpu driver (I
           | use an APU) but even that seems to have been resolved.
           | 
           | I would highly recommend making sure you are using the latest
           | BIOS and kernel.
        
           | bavell wrote:
           | YMMV... I bought a 5950X in late January for my main
           | workstation (Arch) and haven't had any problems with it at
           | all. FWIW my last Intel CPU (2700k IIRC) crashed my machine
           | regularly whenever the iGPU fired up for some tasks (e.g. I
           | could crash Blender in < 10 seconds...)
        
           | [deleted]
        
         | piperswe wrote:
         | Intel does a surprisingly fantastic job of open-sourcing
         | things. They're one of the biggest contributors to the Linux
         | kernel, and the official drivers for their integrated GPUs are
         | in the mainline kernel. I wouldn't be surprised if we soon see
         | patches submitted from Intel adding Alder Lake support to
         | Linux's Capacity Aware Scheduling [1], or if there already have
         | been patches submitted under the radar.
         | 
         | EDIT: Oh, and Linux support for their NICs is fantastic as well
         | in my experience, both wired and wireless. That's something
         | Broadcom can learn from - I end up replacing Broadcom wireless
         | NICs in my laptops with Intel ones because it's such a pain to
         | work with Broadcom on Linux.                   [1]
         | https://www.kernel.org/doc/html/latest/scheduler/sched-
         | capacity.html
        
         | jdub wrote:
         | Linux has supported big.LITTLE style systems for a long time,
         | and has been improving scheduler support for them for about 10
         | years.
         | 
         | (The main gap you tend to see in the Linux world is integration
         | further up the stack. For example, allowing app developers to
         | nominate a core preference by configuration. That's partly due
         | to the assumption of the distribution model.)
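          | 
          | As a rough illustration of what an app can already do on Linux
          | today (a minimal sketch; the core IDs are made up and would
          | have to be read from the CPU topology on a real hybrid system):
          | 
          |     import os
          |     # hypothetical: pin this process to cores 0-7, e.g. the
          |     # P-cores on some particular machine (IDs are illustrative)
          |     os.sched_setaffinity(0, set(range(8)))
          |     print(os.sched_getaffinity(0))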
        
         | neogodless wrote:
         | Phoronix does a suite of benchmarks on Linux.
         | 
         | https://www.phoronix.com/scan.php?page=article&item=intel-12...
        
         | kcb wrote:
          | Intel is one of the largest (possibly the biggest) contributors
          | to Linux. I wouldn't worry about it too much.
        
       | Symmetry wrote:
       | The prices do look very compelling, though Intel motherboards do
       | tend to be more expensive than AMD so spec out the whole system
       | before deciding which is the better deal.
       | 
        | It's great to have competition, even if it's coming out a year
       | behind the equivalent AMD systems.
        
       | bastardoperator wrote:
       | What's the price point though?
        
         | neogodless wrote:
         | Officially?
         | 
         | https://ark.intel.com/content/www/us/en/ark/products/134599/...
         | 
         | > $589.00 - $599.00
         | 
         | Retail will vary. Currently $649 at NewEgg.com, but sold out.
         | 
         | https://www.newegg.com/intel-core-i9-12900k-core-i9-12th-gen...
        
           | snuxoll wrote:
           | It's always important to note that pricing on ark is bulk
           | pricing (1K unit order), retail will always be higher.
        
           | bastardoperator wrote:
           | not bad, ty for sharing.
        
       | rektide wrote:
       | Overall fabulous news, but I am weirded out that mobile is now
       | the connected, featureful, good core, and desktop gets a bunch of
       | last gen bad tech:
       | 
        | * a UHD 770 GPU rather than Xe (understandable-ish, as many want
        | dedicated GPUs anyhow)
        | 
        | * no Thunderbolt 4, only USB4. Way, way less connected & capable.
        
         | neogodless wrote:
         | Z690 supports Thunderbolt 4.
         | 
         | See, for example, https://wccftech.com/review/intel-
         | core-i9-12900k-core-i5-126...
        
           | rektide wrote:
            | my understanding is that a number of (already deluxe,
            | expensive, fancy) motherboards have add-on chips to further
            | jack up the cost, i mean, add thunderbolt 4 support. i haven't
            | seen any indication that the kick-ass, readily available,
            | on-chip 4x 40 Gbps of connectivity that intel mobile chips
            | offer via integrated tb4 is available on desktop.
            | 
            | this, to me, constrains the desktop from being a good partner
            | to the great laptops available. at least usb4 mandates
            | host-to-host networking, but all these great laptops with
            | desktops lagging so far behind, having such lower standards,
            | is excessively sad, to me.
            | 
            | atm phones are way far behind. neither usb4 (with host-to-host
            | networking) nor tb4 is available. i have a hard time imagining
            | phones remaining market-segmented out, not participating in
            | the good, not meeting the new bar.
        
       | chmod775 wrote:
       | Intel's processor is drawing 307W to slightly edge past the 5950X
       | in Cinebench, which only draws around 214W.
       | 
       | It's hard to cool 307W over extended periods without your
       | computer becoming _really_ loud, even with a water cooling setup.
       | 
       | Now, if they had that kind of cooling available, they could have
       | easily gotten a lot more performance out of the Ryzen, while the
       | i9 is already operating at the limits of what consumer grade
       | cooling can sustain.
       | 
       | There's no way that Intel processor is beating a 5950X that is
       | _also_ drawing 300W. At that point some Ryzen 's will be
       | operating at close to 6GHz per core.
       | 
       | The most interesting takeaway is that Intel apparently has no
       | qualms selling a processor with a stock configuration that draws
       | 300W.
       | 
       | Also their Ryzen numbers are beyond suspicious. On a _silent_
       | configuration (peak power draw ~209W, 175W sustained, some
       | undervolting) my Ryzen 5900X hits a Cinebench R20 score of 8835
       | vs their 8390. I suspect my rig is worse since I cheaped out on
        | the RAM, but I wouldn't know because they omit information about
       | the Ryzen test rig.
       | 
       | Their Ryzen numbers don't add up with other benchmarks around the
       | internet either: The 5950X should hit around 10400, and the 5900X
       | should hit 8600 in their stock configurations with mid-to-high-
       | grade RAM[1]. Their scores are 400 points and 200 points lower
       | respectively.
       | 
       | To get numbers as bad as theirs, I have to mismatch my FCLK and
       | RAM intentionally. But again, we don't know what their setup is
       | because they didn't tell us.
       | 
        | We do know, however, that they used DDR5 RAM for the Intel
        | processor, which current-gen Ryzens don't support. Likely some
       | performance was gained there as well, but obviously they didn't
       | test the Intel processor with DDR4 RAM to get an idea of how much
       | comes down to faster RAM. It's still fair to use DDR5, since the
       | Intel processor supports it, but it's hard to say much about the
       | Intel processor performance (development) itself without some
       | additional information. Without this information we don't know
       | whether we can expect the next generation of Ryzen processors
       | with DDR5 support to immediately leave Intel in the dust again,
       | or whether DDR4/DDR5 doesn't really matter.
       | 
       | [1]:
       | https://www.guru3d.com/articles_pages/amd_ryzen_9_5900x_and_...
        
         | threatripper wrote:
         | > We however do know that they used DDR5 RAM for the Intel
         | processor, which the current gen Ryzen don't support. Likely
         | some performance was gained there as well, but obviously they
         | didn't test the Intel processor with DDR4 RAM to get an idea of
         | how much comes down to faster RAM.
         | 
         | Anandtech has a detailed section comparing DDR4 to DDR5:
         | https://www.anandtech.com/show/17047/the-intel-12th-gen-core...
        
           | sroussey wrote:
           | So, up to 18% uplift, weighted towards heavily threaded
           | loads.
        
         | GhettoComputers wrote:
         | Someone pointed out they're using a Windows 11 version that
         | cripples AMD. https://www.neowin.net/news/charlie-demerjian-
         | intel-used-uno...
        
           | jessermeyer wrote:
           | Can you please read the article before posting?
           | 
           | >We tested Alder Lake on the latest Windows 11 below, but our
           | Ryzen results are on Windows 10--this made certain to avoid
           | AMD being penalized by the current Windows 11 regressions in
           | L3 cache and Preferred Core selection, while giving Alder
           | Lake the big.little architecture support it needs from
           | Windows 11 itself.
           | 
           | Thanks.
        
             | GhettoComputers wrote:
             | Thanks I only read the charts
        
         | walrus01 wrote:
          | If I had to guess, the TDP of the new Intel CPU when maxed out
          | is actually around 270-280W. Not all of that 307W is the CPU;
         | there's power used by the motherboard itself, a mostly idle
         | GPU, fans, NVME SSD, etc.
         | 
         | Still a very serious heat producing thing in one socket
         | compared to the AMD.
         | 
         | This also means it's going to require some rather expensive
         | motherboards since reliably delivering 280W to one socket is no
         | simple task.
        
           | gambiting wrote:
           | You don't have to guess, Intel themselves say the "peak" TDP
            | of their top i9 is 250W. All to slightly beat the 5900X,
            | which tops out at 150W. It's insanity; I literally have no
            | idea who will buy this.
        
             | pie42000 wrote:
             | You don't get the bleeding edge tech for practicality or
             | reasonableness. You don't buy a Bugatti Veyron for cargo
             | space and gas mileage. You get silly tech for insane
             | performance at massive cost. Eventually this tech is
             | refined, optimized and made reasonable.
             | 
             | It's what humans do. We don't cross the Pacific on a tiki
             | raft or fly to the moon in a tin bucket because our
             | destination is nicer than where we are currently, we do it
             | because we can and because it's awesome.
        
             | noir_lord wrote:
             | People in cold climates.
             | 
             | Computing and heating in one go.
        
             | arcticbull wrote:
             | TDP isn't really a peak wattage number though, it's hand-
             | wavey.
        
             | r00fus wrote:
              | A few lifetimes ago, I worked at a small company where the
              | VP shared with me why we had some "Cadillac" service plans
              | (i.e., super expensive, unreasonable cost/benefit).
              | 
              | His reply: these are the showroom-only models that
              | customers won't buy, but they make their (already
              | expensive) not-as-high-end plan feel more reasonable.
             | 
             | Essentially it's a part of product marketing segmentation
             | where you create an unobtanium segment that's only there to
             | sell other segments.
        
               | sroussey wrote:
               | The old axiom about people buying the middle priced
               | choice...
        
               | r00fus wrote:
               | Product marketing version of the Overton Window, I guess.
        
         | kyrra wrote:
         | As others have sort of pointed out, your opening sentence is
         | incorrect. That "307W" draw is "Full System Power Consumption",
         | not just the CPU. The 12900K will draw around 230-240W at max
         | load (when doing work like Blender renders), compared to 140W
         | for a 5900X.
         | 
         | But for Gaming loads where the CPU maxes out, I haven't seen
         | reviewers hit those numbers. The 12900K was using ~125W in
         | those workloads, just a few watts lower than AMD.
         | 
          | Now, if you go down to Intel's midrange (i5, not the i9 you
          | were quoting), the power usage is way down (at 125W) vs. the
          | AMD 5600X at 75W with a Blender workload. In a gaming workload,
          | Intel is a few watts higher than AMD.
         | 
         | Yes, Intel is drawing way more power for super CPU intensive
         | tasks, but for what most people will be doing, it's pretty
         | even.
         | 
         | Source: LTT review:
         | https://www.youtube.com/watch?v=-EogHCFd7w0&t=513s
        
           | ChuckNorris89 wrote:
           | _> Yes, Intel is drawing way more power for super CPU
           | intensive tasks, but for what most people will be doing, it's
           | pretty even._
           | 
           | This. I honestly don't get why people are so focused suddenly
           | on the power consumption for a desktop CPU designed for
           | enthusiasts who want top performance at any cost.
           | 
            | GPUs are consuming way more power than that (Nvidia's Ampere
            | is a notorious power hog), pushing consumers to 700W+ PSUs,
            | and nobody bats an eye, but people seem to throw their hands
            | in the air when their CPUs get close to that of a GPU.
            | 
            | Like why? It's the top-spec model; it's supposed to be
            | ludicrous, not good value for money or highly efficient; they
            | have other, more competitive models for that. It feels like
            | people are just looking for reasons to create outrage about a
            | product they're not gonna buy anyway.
        
             | GhettoComputers wrote:
              | Because there isn't a proportional performance gain, and
              | the heat generated will degrade the computer. At what cost?
              | A few irrelevant percentage points?
        
               | ChuckNorris89 wrote:
               | If you're concerned about proportional performance gains,
               | this product isn't for you. So why are you complaining
               | about something you're not gonna buy anyway?
               | 
               | Most tech products that are the tip of the spear are bad
               | value for money and less energy efficient than the ones
               | down the range (Nvidia 3090 vs 3080 for example) as they
               | pass the point of diminishing returns of what the design
               | can do and what's cheap and easy to manufacture.
               | 
                | But they exist because there is a market of enthusiasts
               | for whom efficiency or price does not matter, they just
               | want the best performance money can buy because maybe
               | their business use case benefits from the 10% shorter
               | render/compile times at the cost of 50% extra power
               | consumption. Who are you to judge?
        
               | GhettoComputers wrote:
               | To feel better about my non Intel purchase by laughing at
               | the performance.
               | 
               | If that is true they should be comparing it to the Epyc
               | 7763 or other CPUs that aren't meant to be reasonably
               | priced. No reason a true enthusiast would pick this CPU
               | over Epyc.
        
               | nawgz wrote:
               | > they should be comparing it to the Epyc 7763
               | 
               | That's a server chip/line, no? And therefore will have
               | significantly worse per-core performance, one of the most
               | important things for non-embarrassingly-parallel compute
               | tasks?
               | 
               | Edit: and after a cursory search, is it not also true
               | that Epyc line is 4-8x higher priced than the 12900k
               | MSRP?
        
               | ChuckNorris89 wrote:
               | _> To feel better about my non Intel purchase by laughing
               | at the performance._
               | 
               | That just sounds childish.
        
               | renewiltord wrote:
               | It's a self deprecating joke, dude. People make these to
               | de-escalate situations safely. I sometimes really wonder
               | at the social skills of people on this website.
        
             | 24t wrote:
             | Intel have a rich and storied history of paying for biased
             | and dishonest reviews. You should be thankful that people
             | are willing to fill in the gaps so you hear the full story.
        
             | pipodeclown wrote:
             | But we're talking a small lead in many cases, a lot of
             | times a tie, for 30% higher power draw here.
        
           | fwip wrote:
           | So instead of drawing 50% more power than the AMD chip, the
           | Intel chip draws closer to 70% more power.
        
         | eloff wrote:
         | Your comment was more revealing than the article. Thanks for
         | sharing!
        
         | api wrote:
         | Maybe by barn burner they mean it would catch a barn on fire if
         | not properly cooled.
        
           | caycep wrote:
            | Ha! I thought they meant it literally when I read the
            | headline...
        
         | webmobdev wrote:
         | Apparently Intel used an older build of Windows 11 that has
          | bugs that slow down AMD processors, even though Microsoft has
         | fixed the issue - https://www.neowin.net/news/charlie-
         | demerjian-intel-used-uno... ...
        
           | [deleted]
        
         | mhh__ wrote:
          | Maxing out a CPU is very hard in practice; the power usage
          | under realistic loads is actually much less crap.
        
           | freemint wrote:
            | In particular, in gaming loads it draws even less than the
            | same-tier AMD products, with mostly better results (modulo
            | DDR5 vs DDR4).
        
         | vondur wrote:
          | In the Gamers Nexus review, they were using a high-end 360 mm
          | AIO cooler and the CPU temp shot up to 74C almost immediately
          | during a Blender session. Definitely a hot, power-hungry CPU.
        
           | zeusk wrote:
            | An immediate temp jump is more of a heat transfer problem
            | than an overall thermal budget problem.
        
           | smolder wrote:
           | It seems like all the focus has been on how these chips are
           | at maximum load. I wonder how they perform under more typical
           | light/mixed loads.
           | 
           | AFAIK, the theoretical benefit of doing the big-little
           | arrangement is improved power scaling with load. At partial
           | load, powering a wimpy "e-core" at its ideal clock speed
           | should be more efficient (instructions per kwh) & possibly
           | have better latency characteristics than the usual approach
           | of doing a drastic underclock & undervolt on big performance-
           | oriented cores.
           | 
           | Naturally I'm interested in seeing whether that theoretical
           | advantage of the architecture has paid off. The maximum speed
           | performance stats and wattage numbers give an incomplete
           | picture. I want to see wattage and performance metrics across
           | the spectrum of load levels, from <1% all the way up.
           | 
           | If the 12900K delivers roughly 5950 level performance using
           | 25% more power than the 5950, that looks pretty bad. But if
           | the 12900K can deliver 25% of its max performance for only
           | 10% of its max power envelope, and the 5950 needs 30% of its
           | max power envelope to keep up with that load, that's a big
           | deal, and the intel chip will actually be the cooler one more
           | often than not. That dynamism is what reviewers need to focus
           | on explaining & quantifying, IMO.
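            | 
            | One way to collect those numbers yourself on Linux is the
            | RAPL powercap counters (a rough sketch, assuming the usual
            | /sys/class/powercap/intel-rapl:0 package domain is present
            | and readable; the counter can wrap, which this ignores):
            | 
            |     import time
            |     RAPL = "/sys/class/powercap/intel-rapl:0/energy_uj"
            |     def avg_watts(seconds=5.0):
            |         # energy counter is in microjoules; average power is
            |         # delta energy / delta time
            |         e0 = int(open(RAPL).read())
            |         time.sleep(seconds)
            |         e1 = int(open(RAPL).read())
            |         return (e1 - e0) / 1e6 / seconds
            |     print(avg_watts())   # run alongside the workload of choice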
        
           | snuxoll wrote:
           | What's the die size of the ADL silicon? I know my 5800X will
           | rocket up past 70C even on a custom loop with 30C water temps
           | because the chiplets are so small and power dense that
           | there's not enough surface area to transfer heat any quicker
           | to the IHS.
        
         | zamadatix wrote:
         | You can give a 5950x all the wattage all day but it really
         | doesn't get much faster until you start dropping the temps to
         | sub ambient. I have a 480mm radiator dedicated to my 5950X
         | along with high quality RAM and extensive tuning (sub timings,
         | fabric clocks) and I'm not able to get anywhere near 6 GHz at
         | room temp without a voltage level that would destroy the chip
         | after a few benchmarks.
         | 
          | And while I think my highly optimized 5950X setup could
          | probably eke out the 12900K Cinebench multicore score, I'm
          | sure that if you gave me 10 minutes to be fair and tweak a few
          | settings on the 12900K, I could get that to win again, even
          | without messing with max power.
         | 
         | I do agree the Ryzen numbers in the article do in general seem
         | lower than normal though.
         | 
          | Given this is Intel's "catch up multiple nodes, new performance
          | design, 300-watt class, DDR5" CPU, and it's only able to eke
          | ahead of AMD's CPU from last year (which is about to get the
          | same litany of improvements plus their stacked cache
          | improvement), I'm not holding my breath that Intel is really
          | innovating fast enough again. But at least they don't seem to
          | be sitting still anymore.
        
           | tails4e wrote:
            | What is the cold temp helping with? AFAIK transistors no
            | longer get faster when cold; at least since 16nm, the
            | performance of sub-1V thin-oxide transistors is pretty flat
            | with temperature. Cold helps a lot with power though; leakage
            | when hot (100C) could be 20W, so maybe going cold allowed
            | more dynamic power dissipation? I'd be interested to hear
            | what overclocking experts like yourself think.
        
             | KingMachiavelli wrote:
              | Unless anything has changed, the silicon/die itself isn't
              | staying sub-ambient at all; it's just that there's a limit
              | to how steep a temperature gradient you can build when the
              | ambient/sink temperature is fixed.
        
             | magicalhippo wrote:
             | Transistors have a maximum operating temperature[1]
             | (Tj_max), and the package has a fixed thermal
             | resistance[2].
             | 
             | If the junction-to-case thermal resistance is 0.5 deg C/W
             | and CPU dissipates 100W, then the junction will operate at
             | 50 deg C _above_ the case temperature, whatever that is.
             | 
             | If the case is kept at room temperature (say 25 deg C),
             | which is the best possible a simple air or water cooled
             | system can do, then the junction will operate at 75 deg C.
             | 
             | For simple discrete transistors and diodes, Tj_max is
             | typically 150-175 deg C. For modern CPUs I believe it's
             | closer to 100-125 deg C.
             | 
             | Thus, lowering the case temperature means the chip can
             | consume more power before reaching the maximum junction
             | temperature.
             | 
             | A greater power budget means you could raise operating
             | voltages to help with overclocking for example. Keep in
             | mind power dissipation scales with voltage squared.
             | 
             | There might be other effects too, I'm no overclocking
             | expert, but this is one aspect.
             | 
             | [1]: https://en.wikipedia.org/wiki/Junction_temperature
             | 
             | [2]: https://en.wikipedia.org/wiki/Thermal_resistance
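              | 
              | The worked example above as a quick calculation (a sketch,
              | using the same assumed values):
              | 
              |     # Tj = Tcase + Rjc * P
              |     r_jc   = 0.5    # deg C per W, junction-to-case
              |     power  = 100    # W dissipated
              |     t_case = 25     # deg C, case held at room temperature
              |     print(t_case + r_jc * power)    # -> 75.0 deg C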
        
               | baybal2 wrote:
               | Leakage increases with temperature, and the speed at
               | which FET switches also increases with temperature.
        
           | 55873445216111 wrote:
           | If 5950x was designed for 240W max power like 12900k, then it
           | is likely that the optimizations AMD would have needed to
           | implement in the silicon would have also provided some
           | additional frequency headroom. It's all hypothetical though
           | since all we can do is compare the chips that were actually
           | produced.
        
         | theevilsharpie wrote:
         | > There's no way that Intel processor is beating a 5950X that
         | is also drawing 300W. At that point some Ryzen's will be
         | operating at close to 6GHz per core.
         | 
         | Modern desktop CPUs are already being pushed well past the peak
         | perf/watt efficiency in order to extract the maximum amount of
         | performance. A Ryzen 9 5950X would need substantially more than
         | 300W to hit 6 GHz, and you would need exotic sub-ambient
         | cooling to keep it stable at those speeds.
         | 
         | A more reasonable comparison would be against EPYC and
         | Threadripper, both of which use their respective power budgets
         | for more cores rather than more clocks. A 64-core EPYC 7763 has
         | a TDP of about 280W, and it's going to substantially outperform
         | the Core i9-12900K when given 300W of power.
        
           | native_samples wrote:
           | Outperform it assuming your workload can actually saturate 64
           | cores. In the desktop market though, and heck often in the
           | server market, you're going to struggle to do that. Single
           | thread performance is still hyper important and that isn't
           | going to change anytime soon. Intel seem to have the edge
           | there, in a big way.
        
         | formerly_proven wrote:
         | > Also their Ryzen numbers are beyond suspicious. On a silent
         | configuration (peak power draw ~209W, 175W sustained, some
         | undervolting) my Ryzen 5900X hits a Cinebench R20 score of 8835
         | vs their 8390. I suspect my rig is worse since I cheaped out on
         | the RAM, but I wouldn't know because they omit information
         | about the Ryzen test rig.
         | 
         | Firmware quality for AM4 boards is ... "variable". My board
         | restricted CPU power to 60% of nominal because they put in the
         | wrong ADC scaling factors in some BIOS revision. Some boards
         | have PBO enabled by default (which nets big % on the kill-a-
         | watt, less % in performance). Many boards have bad defaults
         | when you enable XMP which steal a few Watts from the actual CPU
         | cores.
         | 
         | CB mostly scales with core frequency, memory and FCLK are less
         | important.
         | 
         | > At that point some Ryzen's will be operating at close to 6GHz
         | per core.
         | 
         | There's no way any of them will get close to 6 GHz without LN2.
        
           | semi-extrinsic wrote:
           | > There's no way any of them will get close to 6 GHz without
           | LN2.
           | 
           | No need for LN2, you could probably achieve this easily using
           | a closed loop CO2 system. But I'm not aware of any such
           | systems readily available. Kinda seems like an unfilled niche
           | to me.
        
       | franciscop wrote:
       | > "Intel beats the pants off the Ryzen 9 line in both Geekbench 5
       | and Cinebench R20 multithreaded tests"
       | 
        | Am I reading the graph wrong, or is Intel barely 3% faster than
        | Ryzen in that Cinebench i9 / Ryzen 9 graph?
        
         | mey wrote:
          | Watching several detailed YouTube reviews (Gamers Nexus,
          | Hardware Unboxed), the performance difference is 0-20% above a
          | 5950X. Long-duration (above 8 min) render workloads seem to
          | push AMD back to break-even, but memory-dependent,
          | multithreaded, bursty workloads heavily favor the 12900K. Of
          | course halo products are going to halo. The interesting story
          | may be the 12600K, for its price-to-performance considerations.
          | Cooling the chips is not simple; air cooling is not enough,
          | with one reviewer recommending a 360 mm AIO as a requirement.
         | 
         | Edit: Gaming benchmarks are all over the place reviewer to
         | reviewer. Some showing huge jumps, others not, game to game.
        
         | meepmorp wrote:
          | Yeah, Geekbench is the only one where the Intel chip is really
          | meaningfully ahead. I guess you gotta talk up the parts a bit
         | to get the free test samples.
        
       | acomjean wrote:
        | I think this is a great improvement. (I own machines with Intel
        | and AMD processors.) It feels like AMD's progress has lit a fire
        | under Intel to innovate. Competition seems to be back, and I look
        | forward to the progress over the next few years. I think the i5,
        | not the i9, is the chip of note.
        | 
        | I'm using Linux more and more and like open-architecture PCs, so
        | I'm optimistic they can stay competitive with these SoCs. It
        | should be interesting how the Linux scheduler deals with the
        | different core types.
        | 
        | The architecture's bumps to DDR5 and PCIe 5.0 hopefully will help
        | performance in the future too.
        
         | jeffbee wrote:
         | I predict it will be many years before linux works in a
         | satisfactory way out of the box on these machines. The existing
         | kernel model of task priority is not rich enough. Perhaps this
         | will accelerate the adoption of user-space task scheduling.
        
       | cestith wrote:
        | Ars Technica, where 2.4% in the Cinebench R20 is "beats the pants
        | off".
        | 
        |     $ perl -E 'say ( ( ( 10323 - 10085 ) / 10085 ) * 100 )'
        |     2.35994050570154
        
         | [deleted]
        
         | nomel wrote:
         | > crushes AMD's Ryzen 9 5950x--even multithreaded.
         | 
         | > they're faster than AMD's latest Ryzens on both single-
         | threaded and most multithreaded benchmarks
         | 
         | They prove how silly their headline is in the second sentence.
         | This is screaming "biased".
        
           | GhettoComputers wrote:
            | That has been the case with every site I looked at; it's full
            | of Intel shilling. They compare a CPU that is many times as
            | expensive as AMD's, see it's got a slight edge with way
            | higher power usage, and declare the Intel processor the
            | winner.
        
             | Androider wrote:
             | The 12900K MSRP is $589, the AMD 5950X MSRP is $799.
        
               | GhettoComputers wrote:
               | Is MSRP relevant? Does the Nvidia 3080 Ti MSRP matter in
               | the real world?
               | 
                | Where is it $589? It's sold for $1,426.45
                | https://www.amazon.com/dp/B09FXDLX95 and the 5950X is
                | $749 https://www.amazon.com/dp/B0815Y8J9N/ so the Intel
                | is much more expensive in the real world than MSRP and
                | the AMD is slightly cheaper.
        
               | Androider wrote:
               | CPU prices are usually much closer to MSRP except perhaps
               | right at launch. You can put a 12700K or 12600K in your
               | bag at Newegg right now at ~MSRP+$20, and you were able
               | to do that for the 12900K Newegg pre-order last week. The
                | 12700K at $449 on Newegg right now is excellent value,
                | you have to admit (same P-cores as the 12900K, 4 fewer
                | efficiency cores), any way you look at it, and the 12600K
                | even more so.
               | 
               | Claiming that the Intel CPUs are "many times more
               | expensive" than the AMD comparables is just wrong.
        
               | GhettoComputers wrote:
               | If you want that CPU you'll have to pay much more than
               | MSRP, and the MSRP is higher than the real world price
                | you can get the AMD for. Using historical prices and pre-
                | order prices that nobody can get now is a dishonest way to
               | to reflect cost. MSRP is a useless representation in this
               | case and for GPUs.
               | 
               | Edit: https://www.bestbuy.com/site/intel-
               | core-i9-12900k-desktop-pr... Never mind found it for less
               | here, wonder why it's so much on amazon.
        
       | postalrat wrote:
       | I already have an i9 and the i5 is old. These CPUs aren't an
       | upgrade.
        
       | oblio wrote:
       | I really wish they wouldn't use obscure American expressions for
       | these article titles.
        
         | ant6n wrote:
         | I guess these chips run really hot.
        
           | GhettoComputers wrote:
            | It does. Intel is better because you can use it to heat your
            | house, like AMD's Bulldozer.
        
         | bagels wrote:
         | I'm American and I don't know what it means either. I assume
         | burning a barn is bad? I expect not a lot of overlap between
         | the audience of the article and people knowledgeable about
         | agriculture idioms.
        
           | oblio wrote:
           | Apparently it's good (at least in the sense they're using
           | here). Which is extremely counterintuitive.
        
           | kube-system wrote:
           | It's a sports colloquialism. It means 'exciting'. Someone may
           | say "the game was a real barn burner", meaning that it was
           | exciting and competitive. Ars is saying that Alder Lake is an
           | excitingly competitive CPU.
        
       | ksec wrote:
        | At roughly 25W per Performance core running at 5.2GHz on Intel's
        | 7nm high-performance node, with a ST Geekbench score of ~1900.
        | Compared to the Apple M1 Max running at 3.2GHz on a TSMC 5nm
        | low-power node, at roughly 5W per core, with a Geekbench score of
        | ~1750.
        | 
        | That is _5x_ the power for an 8.5% performance increase.
        | 
        | But I know there are many who just want the best ST performance
        | regardless of power usage. And considering someone overclocked
        | the thing past the 8GHz barrier, it is still pretty damn
        | impressive.
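        | 
        | Putting those rough numbers side by side (a quick sketch; the
        | per-core wattages are the estimates above, not measured values):
        | 
        |     intel_pts_per_w = 1900 / 25    # ~76 GB5 points per watt
        |     m1_pts_per_w    = 1750 / 5     # ~350 GB5 points per watt
        |     print(1900 / 1750 - 1)             # ~0.086 -> ~8.5% faster ST
        |     print(m1_pts_per_w / intel_pts_per_w)   # ~4.6x the perf/watt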
        
         | hnburnsy wrote:
         | I see this comparison all the time in articles about Intel, but
         | until I can build a desktop with M1 Pro/Max, they are
         | meaningless.
        
           | ksec wrote:
            | That is the difference between a consumer review and a tech
            | review. They may be meaningless to _consumers_, but they are
            | meaningful for those of us who just want to learn about the
            | tech.
            | 
            | This is the same with Graviton 2 on servers.
        
             | dirkg wrote:
              | They are meaningless to OEMs and manufacturers as well,
              | since they can't buy an M1 and build a system around it.
        
         | GhettoComputers wrote:
         | These biased sites will declare Intel the winner despite the
         | higher cost in motherboard, CPU, power usage, cost to
         | overclock, and heat creation. These tech sites are obviously
         | nothing but product shilling sites with a cheap veneer that
         | relies on their old reputation, but after seeing them all
         | saying the opposite of reality with Ryzen, I won't give them
         | clicks and I won't buy Intel again.
        
           | [deleted]
        
           | nyxaiur wrote:
           | lmao what?
        
             | formerly_proven wrote:
             | He's not wrong per se though he's wrong in this case,
             | because many tech reviewers were overly enthusiastic about
             | Ryzen parts and acted as-if they're much better than their
             | Intel counterparts because they performed better in various
             | benchmarks and productivity applications; this is kind of a
             | meme in the gaming scene (where AMD's best CPUs generally
             | still struggle with Intel's 10th gen offerings).
             | 
             | In tech your product has to be REALLY bad in order to
             | receive critical reviews and manufacturers are basically
             | smart enough to just not send these bad products to
             | reviewers, so you basically get no or few reviews on them.
             | Most reviewers are completely dependent on the
             | manufacturers to give them stuff to review at no cost, and
             | obviously when your reviews are too critical they're going
             | to stop sending you stuff, which threatens the reviewer's
             | business. There are very few reviewers who can do highly
             | critical reviews, GamersNexus comes to mind. Notice how
             | they're often not getting stuff from the manufacturer but
             | buy it off the shelf instead.
        
               | GhettoComputers wrote:
                | Lots of tech reviews will also not say anything bad; for
                | instance, anyone critical of Apple won't get access to
                | their events or products and will be blacklisted.
               | 
               | You can't trust these publications at all anymore.
        
               | nyxaiur wrote:
               | you are delusional.
        
               | GhettoComputers wrote:
                | Prove me wrong. Show me a large mainstream publication
                | that is critical of Apple and still gets their products
                | to review, or one that isn't full of nothing but praise
                | for Apple, unless you're deluded enough to deny it. I bet
                | you can't.
        
               | tomnipotent wrote:
               | Can you show me one that is critical and doesn't get
               | access?
        
               | GhettoComputers wrote:
               | https://techaeris.com/2015/01/29/have-i-been-blacklisted-
               | by-...
               | 
               | https://www.cultofmac.com/255618/how-apples-blacklist-
               | manipu...
               | 
               | > Yes, Apple maintains a press "blacklist," a list of
               | people in the media who are shunned and ignored --
               | "punished," as it were, for "disloyalty."
               | 
               | >"Blacklisted" reporters, editorialists and media
               | personalities are denied access to information, products
               | and events.
               | 
               | >Once you're on the list, it's almost impossible to get
               | off. (I've been on it for more than a decade.)
               | 
               | https://www.theregister.com/2016/09/07/reg_effort_to_atte
               | nd_...
               | 
               | > It's been a number of years since an Apple PR staffer
               | secretly admitted to one of our reporters that The
               | Register was on a blacklist.
        
               | rsj_hn wrote:
               | Dude, it's a company that makes consumer goods. It's not
               | the government refusing to credential a reporter.
               | 
               | If you are actually an objective news company trying to
               | evaluate a product, then do what Consumer Reports does
               | and go to the open market and buy the generally available
               | product for the full retail price, and then evaluate it.
               | That is a requirement to be an honest broker not
               | influenced by a vendor, which is why Consumer Reports
               | does it. So make up your mind as to whether you want to
               | be a journalist that is independent or whether you are
               | willing to compromise and do P.R. work for apple in
               | exchange for early access.
        
               | GhettoComputers wrote:
               | You asked me for journalists that are critical and didn't
               | get access and I showed you. What's your point in
               | justifying exactly what I proved?
        
               | rsj_hn wrote:
               | Perhaps you are confusing me with another poster?
        
               | [deleted]
        
               | nyxaiur wrote:
                | As I said in my other comments, you are delusional. You
                | see a conspiracy theory, without proof, based on your
                | feelings, and you disregard the facts from several highly
                | regarded sources like AnandTech, Phoronix, and everyone
                | who has published their benchmarks today. It's the
                | literal definition of a delusion.
        
               | GhettoComputers wrote:
               | Prove me wrong; you can't.
        
               | nyxaiur wrote:
                | Every reputable source has benchmarks of the new Alder
                | Lake CPUs, and the ones that cater to Apple users have
                | comparisons with the M1 variants. There is nothing to
                | prove except your accusations, because from The Verge to
                | Phoronix there is nothing even remotely helping your
                | argument.
        
               | GhettoComputers wrote:
               | Still waiting for a shred of evidence.
        
               | nyxaiur wrote:
                | This is just a list of issues and articles that came to
                | mind. Every one of these sites also criticized the
                | thermals of the last Intel MacBook Pros, all hated the
                | Touch Bar, and all of them are still reporting on Apple
                | on exactly the day the embargo lifts. wtf are YOU on
                | about?
               | 
               | Macbook Keyboard Issues: 21x Apple critical articles on
               | the verge
               | https://www.theverge.com/search?q=keyboard+macbook 10x
               | Apple critical articles on ifixit https://www.ifixit.com/
               | Search?doctype=news&query=macbook%20k... 5x Apple
               | critical articles on engadget https://search.engadget.com
               | /search;_ylc=X3IDMgRncHJpZAN0NEMu...
               | 
               | iPad Jelly Scrolling: 12x Apple critical articles on the
               | verge
               | https://www.theverge.com/search?q=jelly+scrolling+apple
               | 1x Apple critical articles on ifixit https://www.ifixit.c
               | om/Search?doctype=news&query=jelly%20scr... 4x Apple
               | critical articles on engadget https://search.engadget.com
               | /search;_ylt=AwrJ7FcGV4Rh6okADpZ8...
               | 
               | Mini LED Blooming: 4x Apple critical articles on the
               | verge https://www.theverge.com/search?q=apple+blooming 3x
               | Apple critical articles on ifixit https://www.ifixit.com/
               | Search?doctype=news&query=blooming%20... 6x Apple
               | critical articles on engadget https://search.engadget.com
               | /search;_ylt=A0geKJBBV4RhUYAANg58...
        
               | GhettoComputers wrote:
               | iFixit doesn't get access to Apple hardware.
               | 
               | What you're linking to proves the opposite of critical
               | articles. Let's show some titles you linked.
               | 
               | > The saga of Apple's bad butterfly MacBook keyboards is
               | finally over
               | 
               | >iPad mini review (2021): The best small tablet gets a
               | facelift
               | 
               | >Will Apple's Mini LED MacBook Pros avoid the iPad Pro's
               | downsides?
               | 
               | >Apple says the iPad mini's 'jelly scrolling' problem is
               | normal
               | 
               | So where are these critial articles?
        
               | nyxaiur wrote:
                | If you don't cherry-pick, there are enough critical
                | articles and critical comments in every review (you said
                | there aren't in your first comment).
                | 
                | You are totally right, they must have a giant blacklist,
                | if by blacklisting you mean not sending demo units of
                | unreleased products to every outlet. THE BLACKLIST IS
                | GIGANTIC DUDE.
                | 
                | If you ever look for help, call The National Alliance on
                | Mental Illness (NAMI): 1-800-950-6264
        
               | pie42000 wrote:
               | Dude, just give up. He called you delusional. He won the
               | argument. Your facts and logic don't matter here anymore.
               | You had the opportunity to call him delusional, but he
               | beat you to it. Sorry bud, better luck tomorrow
        
             | GhettoComputers wrote:
             | These biased sites will declare Intel the winner despite
             | the higher cost in motherboard, CPU, power usage, cost to
             | overclock, and heat creation. These tech sites are
             | obviously nothing but product shilling sites with a cheap
             | veneer that relies on their old reputation, but after
             | seeing them all saying the opposite of reality with Ryzen,
             | I won't give them clicks and I won't buy Intel again.
        
               | nyxaiur wrote:
                | On pure performance metrics, Intel will be the winner
                | every cycle they bring one "enthusiast" CPU that tops the
                | benchmarks in gaming or creator CPU loads. They are
                | expensive, they are power hogs, but they are the fastest.
                | I don't get why you are so agitated by it.
        
               | GhettoComputers wrote:
               | >they are expensive they are power hogs but they are the
               | fastest.
               | 
                | That is absolutely false and a lie for multicore
                | processes, real-world usage, and programs that I use.
                | This is an enormous lie, unless your workload only
                | consists of synthetic benchmarks that are Intel-optimized
                | and ignore the gains from other hardware, like the
                | premium DDR5 they use to set these synthetic benchmarks.
        
               | nyxaiur wrote:
                | As I said in my other comments, you are delusional. Have
                | a good one. You see a conspiracy theory without proving
                | it, and you disregard the facts from several highly
                | regarded sources like AnandTech, Phoronix, and everyone
                | who has published their benchmarks today. It's the
                | literal definition of a delusion.
        
               | GhettoComputers wrote:
               | Sorry, facts don't care about your feelings. You are
               | wrong and can't do anything more than shill for Intel
               | with no evidence, personal attacks and delusions you made
               | up and can't prove. I don't know why your ego is so
               | fragile over hearing the truth. Are you being paid well
               | to shill and spread lies?
        
               | nyxaiur wrote:
                | You think every benchmarking site, even the sites that
                | use open-source software to benchmark, is getting paid by
                | Intel? Phoronix and AnandTech have open benchmark suites;
                | you can go and check the results yourself. You are
                | arguing that commercial benchmark providers, open-source
                | software developers, tech journalists, enthusiasts, and
                | hobbyists are in a conspiracy to favor Intel in
                | performance benchmarks, without providing a single source
                | supporting your accusations except yourself. Insane.
        
               | GhettoComputers wrote:
               | Prove me wrong; you still can't.
        
               | nyxaiur wrote:
               | This is just a list of issues and articles that came
               | to mind. Every one of these sites also criticized the
               | thermals of the last Intel MacBook Pros, all of them
               | hated the Touch Bar, and all of them still report on
               | Apple on exactly the day the embargo lifts. wtf are YOU
               | on about?
               | 
               | Macbook Keyboard Issues: 21x Apple critical articles on
               | the verge
               | https://www.theverge.com/search?q=keyboard+macbook 10x
               | Apple critical articles on ifixit https://www.ifixit.com/
               | Search?doctype=news&query=macbook%20k... 5x Apple
               | critical articles on engadget https://search.engadget.com
               | /search;_ylc=X3IDMgRncHJpZAN0NEMu...
               | 
               | iPad Jelly Scrolling: 12x Apple critical articles on the
               | verge
               | https://www.theverge.com/search?q=jelly+scrolling+apple
               | 1x Apple critical articles on ifixit https://www.ifixit.c
               | om/Search?doctype=news&query=jelly%20scr... 4x Apple
               | critical articles on engadget https://search.engadget.com
               | /search;_ylt=AwrJ7FcGV4Rh6okADpZ8...
               | 
               | Mini LED Blooming: 4x Apple critical articles on the
               | verge https://www.theverge.com/search?q=apple+blooming 3x
               | Apple critical articles on ifixit https://www.ifixit.com/
               | Search?doctype=news&query=blooming%20... 6x Apple
               | critical articles on engadget https://search.engadget.com
               | /search;_ylt=A0geKJBBV4RhUYAANg58...
        
               | GhettoComputers wrote:
               | Critical articles like "iPad mini review (2021): The best
               | small tablet gets a facelift"?
               | https://www.engadget.com/apple-ipad-mini-6th-
               | generation-2021... You're grasping at straws: iFixit
               | doesn't get Apple press invites or hardware, and you
               | linked to searches full of Apple praise because you
               | can't find evidence to prove your point. Thanks, you
               | proved my point for me and linked to articles that show
               | I am right.
        
         | vadfa wrote:
         | 5x the power so you can run x86 32-bit and 64-bit code. I don't
         | see the performance part as being that important
        
           | threeseed wrote:
           | I run x86 code all the time on my M1 Mac.
           | 
           | And surprisingly well, given it's being translated.
        
         | liuliu wrote:
         | Sorry about my impulsive correction:
         | 
         | It is Intel 7 (formerly known as Intel's 10nm process).
         | 
         | There is also Intel 4 (formerly known as Intel's 7nm process).
        
           | ksec wrote:
           | Yes, except I got asked again and again what "Intel 7"
           | is. So I am going to stick with a simple naming scheme
           | that right now happens to be about the same across all
           | three foundries working on leading nodes (Samsung, Intel
           | and TSMC), and that everyone can relate to and understand.
        
             | SahAssar wrote:
             | If that is the case shouldn't you call it "Intel 10nm"
             | since that is what it was called when Intel was still using
             | the same units as everyone else (even if those units are
             | becoming meaningless)?
             | 
             | Not even Intel itself says that "Intel 7" means "7nm",
             | right?
        
         | nly wrote:
         | Imagine what Intel could accomplish on the TSMC node!
        
           | marricks wrote:
           | Only 2.5x the power usage?
           | 
           | More seriously, there's always a few comments on threads like
           | this claiming it's unfair to compare Intel to AMD or Apple
           | because "they have a larger node size and are hamstrung by
           | it." Which is Intel's choice, they were happy having a
           | superior node all to themselves for decades and now it's a
           | problem for them.
           | 
           | I don't remember people saying AMD was just as good as Intel
           | but was just on a worse node either, did it matter to anyone
           | but AMD fanboys?
        
             | kube-system wrote:
             | Well, it made less sense to make excuses about fabs back
             | when Intel and AMD both ran their own fabs.
        
               | GhettoComputers wrote:
               | When did they make these excuses? AMD hasn't had its
               | own fabs for about a decade, since well before Intel
               | was stuck on 14nm.
        
             | gpderetta wrote:
             | For those who do not care just about the fastest CPU in
             | a benchmark and the tribal fights, and who like instead
             | to nerd out about architectures, it is still interesting
             | to know whether Apple's superiority is due to the ARM
             | architecture, Apple's microarchitecture improvements or
             | TSMC's process.
             | 
             | It is likely a mix of all three, and of course the
             | microarchitecture can't easily be disentangled from the
             | process.
        
               | sroussey wrote:
               | There are relatively large differences in performance
               | between implementations of ARM X1, so yeah, there is more
               | to it than microarchitecture.
        
               | [deleted]
        
             | ksec wrote:
             | Yes. That is why I have been very explicit about node
             | numbers and characteristics along with clock speed.
             | Apple can't push to 5GHz without using another node and
             | possibly some redesign. Intel using their own ultra-low-
             | power node, or even TSMC's 5nm low-power node, will not
             | be able to reach 5GHz either.
             | 
             | And let people make their own judgement. Again this isn't
             | for consumer review, just for those interested in tech and
             | how it all fits together.
        
       | schmorptron wrote:
       | While this is exciting, the thing I'm most looking forward to
       | is Intel entering the dedicated GPU market next year. Arc is
       | looking really promising, and god knows Nvidia and AMD need
       | some disrupting.
        
         | heavyset_go wrote:
         | Intel's Linux drivers for their integrated graphics chipsets
         | are pretty good, at least compared to Nvidia's, so hopefully
         | the same effort will be spent on Linux support for Arc. It
         | would be nice to have another option than AMD on the table for
         | Linux-compatible GPUs.
        
       | KuiN wrote:
       | This might be completely irrational, but it really annoys me when
       | the press use 'big.little' for all heterogeneous CPUs. big.LITTLE
       | is an ARM trademark for ARM's implementation of heterogeneous
       | designs. Intel have clearly decided heterogeneous is a marketing
       | no-go and have gone with 'hybrid' architecture, which is fair
       | enough. But these are absolutely not big.LITTLE CPUs.
        
         | ziml77 wrote:
         | Unless we get something better sounding than "heterogeneous CPU
         | topology", I'm going to go with big.LITTLE. I know it's
         | incorrect to apply it outside of ARM processors, but it's also
         | not 13 syllables and it's more likely that someone will know
         | what I'm talking about.
        
         | freeAgent wrote:
         | Do you get just as annoyed about discussions where x86-64,
         | AMD64, Intel 64, and x64 are used interchangeably?
        
         | dlp211 wrote:
         | "big.little" is like "kleenex" or "ziploc". Yes, technically
         | they refer to a specific implementation of a thing,
         | heterogenous CPU core size, tissues, plastic storage bags, but
         | in reality, they were so perfectly branded that they just
         | replaced the item name. It's not a big deal.
        
           | OldHand2018 wrote:
           | And I'm sure that people are going to call the Google Tensor
           | "big.little" when it is really "big.medium.little" (it has 3
           | different types of ARM cores)!
        
           | guerrilla wrote:
           | or "Google" for web search.
        
           | seabrookmx wrote:
           | +1
           | 
           | It's exactly the same thing when people say an AMD Ryzen
           | CPU has "Hyper-Threading." That is Intel's trademark for
           | simultaneous multithreading (SMT).
           | 
           | We all know what someone means when they say this. There's no
           | use being so pedantic.
        
       | 37ef_ced3 wrote:
       | Please measure the AVX-512 ResNet50 inference performance (e.g.,
       | https://NN-512.com).
       | 
       | Does Alder Lake AVX-512 finally provide an advantage over AVX2
       | (the way it should)?
       | 
       | Is Intel serious about making AVX-512 worthwhile? Will AMD
       | provide efficient AVX-512 hardware?
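       | 
       | For anyone who wants to check what their own chip and OS
       | actually expose, a minimal sketch using the GCC/Clang
       | feature-check builtin (illustrative only):
       | 
       |   #include <stdio.h>
       | 
       |   int main(void) {
       |       /* nonzero if the running CPU (and OS)
       |          expose the feature */
       |       printf("avx2:    %d\n",
       |              __builtin_cpu_supports("avx2"));
       |       printf("avx512f: %d\n",
       |              __builtin_cpu_supports("avx512f"));
       |       return 0;
       |   }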
        
         | ksec wrote:
         | AVX-512 is "fused off", to quote Intel. Although AnandTech
         | discovered it is actually just disabled, not physically
         | fused off.
        
           | 37ef_ced3 wrote:
           | You're right:
           | 
           | "In order to get to this point, Intel had to cut down some of
           | the features of its P-core, and improve some features on the
           | E-core. The biggest thing that gets the cut is that Intel is
           | losing AVX-512 support inside Alder Lake. When we say losing
           | support, we mean that the AVX-512 is going to be physically
           | fused off, so even if you ran the processor with the E-cores
           | disabled at boot time, AVX-512 is still disabled."
           | 
           | "But it does mean that AVX-512 is probably dead for
           | consumers."
           | 
           | "Intel isn't even supporting AVX-512 with a dual-issue AVX2
           | mode over multiple operations - it simply won't work on Alder
           | Lake. If AMD's Zen 4 processors plan to support some form of
           | AVX-512 as has been theorized, even as dual-issue AVX2
           | operations, we might be in some dystopian processor
           | environment where AMD is the only consumer processor on the
           | market to support AVX-512."
           | 
           | https://www.anandtech.com/show/16881/a-deep-dive-into-
           | intels...
           | 
           | It's shocking how Intel has failed with AVX-512, and
           | unfortunate for those who embraced the technology.
        
             | ksec wrote:
             | My suspicion / theory is that Intel needed the hybrid
             | design for mobile; otherwise there is no way to keep up
             | with AMD in terms of perf/watt. So they basically threw
             | this hybrid design onto desktop as a test bed before
             | launching it on mobile.
        
               | dralley wrote:
               | It's probably part of their fab strategy. They want to
               | offer x86 IP blocks to SoC customers, and it's nice to
               | have options tuned for performance and ones tuned for
               | size / efficiency.
        
             | neogodless wrote:
             | It's odd, because in today's review, they have this to say:
             | 
             | > I have to say a side word about AVX-512 support, because
             | we found it. If you're prepared to disable the E-cores, and
             | use specific motherboards, it works. After Intel spent time
             | saying it was fused off, we dug into the story and found it
             | still works for those that need it. It's going to be
             | interesting to hear how this feature will be discussed by
             | Intel in future.
             | 
             | There's a whole page on it:
             | 
             | https://www.anandtech.com/show/17047/the-intel-12th-gen-
             | core...
        
               | selectodude wrote:
               | One thing that I genuinely appreciate about AMD is that I
               | don't need to do any research to see what CPUs support
               | what. They have a handful of SKUs, they're all
               | straightforward, you spend what you're willing to and go
               | from there. I feel like Intel requires me to do a bunch
               | of research to see if the CPU I've picked checks all the
               | boxes.
               | 
               | Then I buy my AMD and deal with their weird driver issues
               | and throw my hands up.
        
               | ksec wrote:
               | >I feel like Intel requires me to do a bunch of research
               | to see if the CPU I've picked checks all the boxes.
               | 
               | I will not be surprised if Intel follows AMD's example
               | soon. Pat Gelsinger is a product person, and product
               | people generally understand these issues better,
               | compared to marketing / financial people who have
               | _zero_ understanding. (I don't want to bash them too
               | much, but seriously, I have never met a marketing /
               | financial guy who was any good at products in 20+
               | years.)
        
               | klelatti wrote:
               | This is incredibly bizarre especially given AVX-512 takes
               | quite a lot of transistors. A late design decision to try
               | to limit power use?
        
               | monocasa wrote:
               | It's to allow easy e-core p-core thread migrations in the
               | OS. They probably weren't intending the p-cores to be
               | matched with e-cores when the RTL was being initially
               | written for the p-cores, which would have been a few
               | years ago.
        
               | klelatti wrote:
               | Very good point. I guess both cores have to have
               | identical ISAs, as the scheduler doesn't know which
               | instructions will be needed at the point at which it
               | decides which core to use.
        
               | stagger87 wrote:
               | Wow, and 2 FMA units/ports as well? Gotta love that the
               | first consumer chip with 2 x AVX-512 FMA units doesn't
               | officially support it.
        
       | InTheArena wrote:
       | It's very good to see three viable, great CPU architectures
       | emerging. Intel is important, and them getting their CPU node
       | up is critical. AMD has saved x86 from irrelevance and
       | stagnation, and has been sitting on their next node while Intel
       | caught up, but now has to release it to keep their spot on top.
       | Apple is remaking the game with the M1 / Pro and Max variants,
       | and we still have Jade 2c and Jade 4c coming.
       | 
       | Competition is great.
        
         | dlp211 wrote:
         | Jade 2/4c are additional processors being developed by Apple
         | for their <strike>Macbook Air line of notebooks</strike> Mac
         | Pro line for anyone who is like me and didn't know.
         | 
         | Edit: first thing I read was wrong, turns out that these are
         | Mac Pro level processors, see below comments for more details.
         | Thanks everyone for the clarification.
        
           | InTheArena wrote:
           | No, the 2c/4c are the MacPro / iMacPro CPUs. Think big-boy
           | versions of the Max / Pro (which are insane already).
           | 
           | Jade C "Chop" is the Pro, Jade C Die is the Max.
           | 
           | Jade 2c is two Maxes. Jade 4c is four Maxes.
           | 
           | https://twitter.com/siracusa/status/1395706013286809600?lang.
           | ..
        
             | SavantIdiot wrote:
             | All this die chopping really sounds like the consequence of
             | the Intel architects Apple hired away. Over the years lots
             | of Intel products segmented SKUs by laying them out in a
             | way that facilitated simple chops post wafersort.
        
             | kube-system wrote:
             | That Jade 4c might be pretty sweet in an Xserve.
        
           | neogodless wrote:
           | My understanding is that these are rumored code names.
           | 
           | But they are expected to be even higher-scaled iterations
           | of Apple Silicon. In other words, while the M1 had up to 4
           | performance (P) cores, 4 efficiency (E) cores and 8 GPU (G)
           | cores, the M1 Pro and Max scaled up to as high as 8P, 2E and
           | 32G.
           | 
           | The Jade 2/4c designation is rumored to go even higher on P
           | and G cores, which means it's much more likely to end up in a
           | Mac Pro than the Macbook Air.
        
         | superkuh wrote:
         | Saying Apple has remade the processor game is like saying the
         | PS4 remade the gaming market. Yeah, it's true. But only for
         | people who use consoles/Macs. That kind of hardware only
         | supports the configuration it's sold in, not the full spec.
         | If you try to do things that 99% of people won't do, it
         | won't work (i.e., attempting to boot off an external drive
         | when the internal one breaks).
        
           | agumonkey wrote:
           | The storage subsystem design failure seems extremely
           | irrelevant to processor design really (no matter how bad
           | Apple made it there)
        
           | megablast wrote:
           | > But only for people that use consoles/macs.
           | 
           | Or phones or tablets.
        
           | xmodem wrote:
           | I mean, the M1 gains are so great that it's making the
           | Mac relevant to entirely new market segments, so I'd say
           | that's pretty game-changing.
        
             | errantspark wrote:
             | I am curious how much of the M1 gains can be realized by
             | just slapping 16 gigs of cache on your die. I know that's a
             | gross oversimplification, and I'm 100% just going off
             | intuition here but it seems to me that memory access
             | efficiency is what carries the brunt of the M1's gains.
        
               | ericye16 wrote:
               | If you slapped 16GB of cache on your die, you would have
               | a humongous die and the most expensive processor in the
               | world. It would be very fast though.
        
               | smolder wrote:
               | At some point the physical distance to parts of cache
               | would mean you'd have rapidly diminishing returns on
               | adding more. For 16GB it'd need to be some kind of tiered
               | thing with nearer cache segments being quicker. Maybe you
               | could have it present itself as a giant unified cache...
               | sort of like what they did in the new IBM mainframes.
        
               | GhettoComputers wrote:
               | Is 1GB interesting enough?
               | https://gadgettendency.com/amds-new-processors-will-have-
               | nea...
        
               | criddell wrote:
               | Do the gains you talk about include power consumption
               | considerations? Could an Intel chip with a ton of cache
               | run all day on typical laptop battery?
        
               | formerly_proven wrote:
               | The M1 parts actually have way worse memory latency (i.e.
               | random accesses) than both AMD and Intel. A finely tuned
               | Intel system has around 3x lower memory latency than an
               | M1P/X.
               | 
               | All of this is SDRAM, the S stands for synchronous and
               | means that the memory is driven according to fixed
               | timings in relation to a fixed bus clock. All
               | LPDDR4X-4266 parts with the same timings perform exactly
               | the same, whether they are soldered to the interposer or
               | are 5 cm away on the board.
        
               | sliken wrote:
               | That's not at all what I'm seeing. Can you be specific as
               | to what chips you are looking at and what benchmark you
               | are using to draw this conclusion?
               | 
               | I'll go first.
               | 
               | Personally when talking about memory latency, I want to
               | measure time to main memory, without TLB thrashing. Not
               | that TLB latency isn't interesting, but it should be
               | separately quantified.
               | 
               | I've written a few microbenchmarks in this area, but I
               | get very similar numbers to the Anandtech R per RV
               | prange. Which puts the latency to main memory at around
               | 35ns looking like it might flatten out at 40ns. Yes, full
               | TLB thrashing is up at 110ns or so, but that's not a
               | usual use case. If it is, at least under linux, you can
               | switch to 1GB pages if that's important to you.
               | 
               | R per RV prange numbers on the Ryzen 9 5950x is 65ns or
               | so.
               | 
               | So sure the TLB worst case is higher latency on the M1,
               | but then you have to figure out how big the TLB is, and
               | how often that's an issue if you want to know the real
               | world performance impact.
               | 
               | The i9-12900K with DDR5 gets 30ns on the R per RV prange,
               | and 92ns on full random (tlb thrashing).
               | 
               | Even assuming the worst-case TLB behavior, the M1 Max
               | is 111ns and the 5950X is 79ns, or 1.4x higher. On the
               | Intel side it's 111ns vs 92ns, 1.2x higher.
               | 
               | Finding it hard to find any numbers that make the M1X
               | look like 3x higher memory latency.
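               | 
               | For anyone curious, a minimal pointer-chasing sketch
               | along those lines (purely illustrative, not the code
               | behind the numbers above): link a big buffer into a
               | random cycle, then time the dependent loads.
               | 
               |   #include <stdio.h>
               |   #include <stdlib.h>
               |   #include <time.h>
               | 
               |   enum { N = 1 << 23, STEPS = 1 << 24 };
               | 
               |   int main(void) {
               |       void **buf = malloc(N * sizeof *buf);
               |       size_t *idx = malloc(N * sizeof *idx);
               |       for (size_t i = 0; i < N; i++) idx[i] = i;
               |       srand(1);              /* shuffle */
               |       for (size_t i = N - 1; i > 0; i--) {
               |           size_t j = rand() % (i + 1), t = idx[i];
               |           idx[i] = idx[j]; idx[j] = t;
               |       }
               |       for (size_t i = 0; i < N; i++) /* cycle */
               |           buf[idx[i]] = &buf[idx[(i + 1) % N]];
               |       void **p = buf;
               |       struct timespec a, b;
               |       clock_gettime(CLOCK_MONOTONIC, &a);
               |       for (long i = 0; i < STEPS; i++) p = *p;
               |       clock_gettime(CLOCK_MONOTONIC, &b);
               |       double ns = (b.tv_sec - a.tv_sec) * 1e9
               |                 + (b.tv_nsec - a.tv_nsec);
               |       printf("%p %.1f ns/load\n", (void *)p,
               |              ns / STEPS);
               |       return 0;
               |   }
               | 
               | The 64 MB buffer mostly defeats the caches; shrinking
               | it (or switching to huge pages) separates out the TLB
               | effects mentioned above.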
        
               | formerly_proven wrote:
               | I feel like R per R isn't quite the perfect match for the
               | pointer-chasy workloads I have in mind where you end up
               | going through more-or-less random selections of
               | relatively small (<< page size) objects, so R per R
               | amortizes the randomization a little much. I still think
               | having the overall totally random access for a system is
               | a good number precisely because it contains all the
               | penalties and latencies that can add up. Embarrassing as
               | it may be, I also compared the wrong numbers across parts
               | here (and in the past too, I think) - thanks for calling
               | that out. I was kinda irritated how exactly they would be
               | able to get the performance they were getting across all
               | workloads with such poor latency but failed to double
               | check.
        
               | sliken wrote:
               | Ah, sensible. Generally, with the TLB numbers and the
               | memory latency you can get a good idea of the
               | performance by interpolating between the two numbers
               | for any rate of TLB misses (from 0% to 100%).
               | 
               | The M1 Max also has a crazy number of memory channels:
               | even the older M1 has 4 channels, the Pro has 8, and
               | the Max has 16. So you can have many more cache misses
               | in flight, which is part of why the M1 Max is 20%
               | faster than the 5950X despite the latter having twice
               | as many cores.
        
               | formerly_proven wrote:
               | Honestly it's kinda crazy what's possible these days.
               | Couple years ago you'd easily burn 30+ watts on just the
               | memory chips to get this kind of bandwidth.
        
               | sliken wrote:
               | Indeed, 64GB ram, 400GB/sec, decent GPU, ML acceleration,
               | 10 cores, etc all in a small package that gives good
               | battery life to a relatively thin laptop.
               | 
               | Here's hoping they put the same in the Mac mini.
               | Anyone interested in a Linux port should join the
               | marcan Patreon; I'm kicking in a few $ a month.
        
               | errantspark wrote:
               | Interesting, I didn't know that the memory latency was
               | worse. The bandwidth is still much higher right? In your
               | estimation how much of the perf gains of the M1 chip are
               | due to the increased memory bandwidth vs other
               | optimizations/advancements?
        
               | formerly_proven wrote:
               | I don't think you can single out one aspect when
               | comparing designs which are so different, but I'd say
               | most of the CPU performance is down to the core. There
               | you have the M1 core which is (iirc) 10 pipes wide, much
               | wider than both AMD and Intel, and a much fatter frontend
               | as well to feed the beast. I think that's the main
               | advantage the M1 has. It also has a larger L1 cache
               | (twice as big as x86's). These are two of the few areas
               | where x86 has an innate disadvantage, because fattening
               | up the frontend is much more complex on x86 compared to
               | ARM, and you can't have a VIPT L1 cache larger than 64K
               | with 4K pages, which is a hard-to-change default on
               | x86, while the M1 uses larger 16K pages by default.
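               | 
               | Rough rule of thumb for that VIPT limit: L1 size <=
               | page size x associativity (without aliasing tricks).
               | 4 KB pages x 12 ways gives the 48 KB L1D on current
               | Intel cores; 16 KB pages x 8 ways allows the M1's
               | 128 KB L1D, if I have the associativities right.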
        
               | errantspark wrote:
               | "x86 has an innate disadvantage [...] fattening up the
               | frontend" this is inherent largely because of ISA
               | complexity and variable width instructions?
               | 
               | "M1 core which is [...] much wider than both AMD and
               | Intel" The core width you're referring to is the decoder
               | width yeah? As another poster pointed out the M1 has a
               | large reorder buffer as well. Combined with the ability
               | to index a larger L1 cache what I'm getting here is that
               | the M1 can be a lot better about scheduling instructions
               | (and perhaps even running non-interfering instructions in
               | parallel on a given core (is that a thing?)) because the
               | frontend has more power and space to do so.
               | 
               | I guess that efficiency is then a big part of the puzzle
               | on why the increased bandwidth to ram makes such an
               | impact?
        
               | sliken wrote:
               | Many things don't push the memory bandwidth and are cache
               | friendly. However GPUs are bandwidth limited and the M1
               | Max does quite well against any other integrated graphics
               | from Intel or AMD.
               | 
               | Even on the CPU side it can be a big win, in particular
               | on SpecFPRate (a collection of heavy floating point real
               | world codes, not microbenchmarks) Anand has this to say:
               | The fp2017 suite has more workloads that are more memory-
               | bound, and it's here where the M1 Max is absolutely
               | absurd. The workloads that put the most memory pressure
               | and stress the DRAM the most, such as 503.bwaves,
               | 519.lbm, 549.fotonik3d and 554.roms, have all multiple
               | factors of performance advantages compared to the best
               | Intel and AMD have to offer.
               | 
               | To drive this home, compare the SPEC2017 FP Rate: the
               | M1 Max gets 81.07, while the Ryzen 5950X (a high-end
               | desktop chip with twice as many fast cores and a 105
               | watt TDP) gets 62.27.
               | 
               | So the M1 Max, with half as many cores and much lower
               | power, is 30% faster than AMD's highest-end desktop
               | chip. And instead of desktop size/volume/power, you can
               | get it in a laptop that's two-thirds of an inch thick.
        
               | klelatti wrote:
               | It's also very wide and has a big reorder buffer so 'just
               | slapping 16 gigs' of cache probably isn't the answer.
        
               | errantspark wrote:
               | Very wide in the sense of memory bandwidth particularly,
               | or is there another kind of wideness at play here?
               | 
               | "has a big reorder buffer", I'm interpreting this as "one
               | of the notable advantages of the M1 is it's ability to be
               | more clever about processing instructions out of order to
               | maximize resource utilization". Is that about right?
        
               | klelatti wrote:
               | Width in the sense of how many instructions can be
               | executed at the same time.
               | 
               | If you're really interested in this it might be worth
               | finding a copy of the Patterson and Hennessy book. It's a
               | big read and expensive (but older versions are on the
               | internet archive [1]) and covers all these design issues
               | in quite a lot of detail.
               | 
               | [1] https://archive.org/details/ComputerArchitectureAQuan
               | titativ...
        
             | monkmartinez wrote:
             | What market segments?
             | 
             | CAD/CAM, CAE/Engineering, Render farms, Movie making, GIS?
        
           | sliken wrote:
           | I disagree. Sure, faster, more power-efficient Apple
           | desktops/laptops primarily benefit Apple users. But it
           | also helps people realize what ARM designs can offer. The
           | M1 Max is a marvel: decent GPU performance compared to
           | discrete cards, and amazing GPU performance for integrated
           | graphics. Amazing perf/watt, and also amazing memory
           | bandwidth. Desktops and laptops are mostly in the
           | 50-70GB/sec range; the M1 Max is at 400 GB/sec.
           | 
           | Suddenly Intel and AMD have to keep an eye not just on each
           | other, but also on the various ARM designs targeting
           | microcontrollers up to supercomputers.
        
             | formerly_proven wrote:
             | I don't get why people are pouncing on the memory BW thing.
             | There are very few applications which are actually bound by
             | memory bandwidth (most of these will never be used on a
             | laptop) and I've seen nothing so far to suggest that the
             | CPU cores in the M1P/M1M can actually use it. The M1P/M
             | parts are basically a midrange GPU and CPU in the same
             | package and use a wide, high-speed memory interface (LPDDR
             | is conceptually more similar to GDDR than standard DDR) to
             | get GPU-like bandwidth because there's an actual GPU in
             | there - which is actually rather impressive.
             | 
             | For reference, in Zen 2/3 a chiplet (of eight cores) is
             | limited to around 25 GB/s write, 50 GB/s read.
             | 
             | Edit: The anandtech test has memory B/W numbers and gives
             | 100 GB/s for a single core and around 220 GB/s for all
             | cores, which is extremely high, but also not the full
             | memory bandwidth.
        
               | sliken wrote:
               | One clear benefit is decent GPU performance, without the
               | cost, physical size, and power/cooling for a discrete
               | GPU. It's way faster than Intel or AMD's integrated
               | graphics. You can see this difference when you unplug a
               | MBP vs any of the laptops with a discrete GPU.
               | 
               | Marketing brags about memory bandwidth based on
               | clockspeed * bus width = 400GB/sec. Getting 60% of peak
               | on some memory bandwidth benchmark is pretty common. Try
               | McCalpin's stream benchmark on your platform of choice to
               | verify. I suspect you'll find similar on Intel or AMD on
               | desktops, laptops, or servers.
               | 
               | Not sure I buy the memory-bound argument; sure, peak
               | performance will not change, but worst-case
               | performance is often dominated by cache misses. So I
               | expect that the new
               | MBPs will be more evenly fast than the x86-64
               | competition, even while doing stressful things. I don't
               | currently need a $3k laptop, but am hoping they ship a
               | mini with a M1 Max or Pro.
               | 
               | To quote anandtech: The fp2017 suite has more workloads
               | that are more memory-bound, and it's here where the M1
               | Max is absolutely absurd. The workloads that put the most
               | memory pressure and stress the DRAM the most, such as
               | 503.bwaves, 519.lbm, 549.fotonik3d and 554.roms, have all
               | multiple factors of performance advantages compared to
               | the best Intel and AMD have to offer.
               | 
               | So an Apple M1 Max is 19% faster on SPECfp (floating-
               | point applications) than a Ryzen 5950X, which has twice
               | as many fast cores (16 vs 8) and runs at a 105 watt
               | TDP. That's pretty amazing in my book.
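               | 
               | For reference, the triad kernel at the heart of
               | McCalpin's STREAM boils down to something like this
               | single-threaded sketch (illustrative only; the real
               | benchmark adds repetitions, alignment and threading):
               | 
               |   #include <stdio.h>
               |   #include <stdlib.h>
               |   #include <time.h>
               | 
               |   enum { N = 1 << 25 };  /* 32M doubles/array */
               | 
               |   int main(void) {
               |       double *a = malloc(N * sizeof *a);
               |       double *b = malloc(N * sizeof *b);
               |       double *c = malloc(N * sizeof *c);
               |       for (long i = 0; i < N; i++) {
               |           b[i] = 1.0; c[i] = 2.0;
               |       }
               |       struct timespec t0, t1;
               |       clock_gettime(CLOCK_MONOTONIC, &t0);
               |       for (long i = 0; i < N; i++)
               |           a[i] = b[i] + 3.0 * c[i];  /* triad */
               |       clock_gettime(CLOCK_MONOTONIC, &t1);
               |       double s = (t1.tv_sec - t0.tv_sec)
               |                + (t1.tv_nsec - t0.tv_nsec) * 1e-9;
               |       /* three 8-byte streams per iteration */
               |       printf("%.1f GB/s (%f)\n",
               |              3.0 * 8.0 * N / s / 1e9, a[0]);
               |       return 0;
               |   }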
        
               | zozbot234 wrote:
               | Memory bandwidth _per core, per clock cycle_ is actually
               | very limited on many systems. So you end up being bound
               | by memory bandwidth any time you 're running compute-
               | intensive apps on multiple cores, unless thermal
               | constraints hit sooner - which is quite unlikely on Apple
               | silicon, and ARM silicon more generally.
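               | 
               | To put rough (assumed) numbers on that: 50 GB/s shared
               | by 8 cores at 4 GHz is 50e9 / (8 x 4e9), about 1.6
               | bytes per core per clock, i.e. one 64-byte cache line
               | roughly every 40 cycles per core.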
        
             | DeathArrow wrote:
             | Apple laptops are twice as expensive as x86 hardware but
             | not twice as powerful.
             | 
             | What other kind of ARM hardware can we buy besides
             | Apple laptops?
        
               | GhettoComputers wrote:
               | Surface Pro X, Qualcomm laptops that run Windows 10 x64
               | emulation, JingPad, https://en.jingos.com/jingpad-a1/ and
               | PineBook https://pine64.com/product/14%e2%80%b3-pinebook-
               | pro-linux-la...
               | 
               | You can also buy RISC-V if you want something more
               | open and not ARM.
        
               | eptcyka wrote:
               | They are almost 2x as power efficient.
        
               | InTheArena wrote:
               | A Dell Latitude 5520 laptop, 15-inch, 4K, 2TB SSD is
               | $4,099.00 from Dell (after a 35% off coupon). An Apple
               | M1 Max 16-inch, with a far better display, a better
               | keyboard and the same storage, is also $4,099.
        
               | stagger87 wrote:
               | PC laptop pricing fluctuates so much and I don't know
               | why. You can kit out a ThinkPad P15 with these specs for
               | ~2500 right now. Next month the prices will be flipped.
               | Go figure. At least Apple is consistently priced high.
        
               | pie42000 wrote:
               | ThinkPads are made of shitty plastic, have stupid
               | keyboard layouts, worse trackpads, and come preloaded
               | with malware/bloatware. I say this as an exclusive x86
               | user. ThinkPads are average-quality tools; MacBooks are
               | delightful, beautifully designed devices. Really
               | tempted to switch.
        
               | dirkg wrote:
               | are you trying to troll?
               | 
               | ThinkPads are notoriously well made; 'shitty plastic'
               | is ultra durable. Macs are great if you are OK with
               | being locked into the Apple ecosystem.
        
               | threeseed wrote:
               | I don't understand how having a Mac locks you into the
               | Apple ecosystem.
               | 
               | In real world use I don't notice any difference in
               | restrictions from my Ubuntu server.
        
               | ricardobayes wrote:
               | There was a reason why you didn't before. If you are
               | willing to give that reason up in exchange for some
               | pretty pixels, go for it.
        
               | dboreham wrote:
               | Hmm. I don't know that model, but I have bought two Dell
               | XPS15s in the past year. One had 4K, 64G RAM, 2T SSD and
               | 8 cores. It was $2500. The other was slightly lower spec
               | and $2400.
        
               | GhettoComputers wrote:
               | Just wondering: why?
        
         | thesquib wrote:
         | Except Apple keeps their hardware locked into the Apple
         | ecosystem, but anyone can buy AMD or Intel.
        
           | zamadatix wrote:
           | I wouldn't say locked in when it comes to computer hardware,
           | bundled might be more apt.
        
           | LASR wrote:
           | What? Anyone can buy Apple. They aren't doing anything to
           | stop you from running your own ARM OS on it.
        
           | arbirk wrote:
           | https://asahilinux.org to the rescue
        
           | Wowfunhappy wrote:
           | Hardware ecosystem yes, but Macs continue to ship with
           | unlockable bootloaders and I expect Asahi Linux to be
           | extremely usable within another year.
        
             | kelnos wrote:
             | I'm skeptical of that claim. 5-year-old Intel Macs still
             | have issues running Linux, and they should be easier to
             | support than something like the M1.
        
               | Wowfunhappy wrote:
               | Things happen when someone wants to work on them. Running
               | Linux on Apple Silicon machines is a lot more exciting
               | than running Linux on a generic Intel machine which
               | happens to have an Apple logo, so there's been a lot of
               | development effort.
               | 
               | You can _already_ get a usable desktop on an M1 Mac, if
               | you really want to. The CPU is fast enough that you
               | don't absolutely need graphics acceleration, and WiFi,
               | USB, and display output all work.
               | 
               | The big missing piece is the GPU, but Alyssa has a fully-
               | custom user-space implementation for macOS that largely
               | works.
        
         | [deleted]
        
       | DeathArrow wrote:
       | Where are the days when a decent CPU was $100 and drew no more
       | than 100W?
        
         | toast0 wrote:
         | Right now, AMD is selling every chip they make, so they're not
         | making much for the low end. Intel's got quad core comet lake
         | chips around that price, but they've also not been selling low
         | end versions of their newer stuff either.
         | 
         | If Alder Lake's release signals Intel 10nm (aka Intel 7)
         | finally working for desktop chips, then we may see products
         | addressing the low end market again.
         | 
         | You should probably be able to limit power to 100W though;
         | power targeting is definitely a firmware feature on AMD, and
         | I'd be surprised if it's not available on Intel as well. If
         | nothing else, you can just provide 100W worth of cooling, and
         | thermal management should throttle back the power.
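         | 
         | As an aside, on Linux with the intel_rapl powercap driver
         | loaded, the package power limit can also be nudged at
         | runtime through sysfs. A rough sketch (the exact path varies
         | by machine and needs root; the firmware/BIOS limit is still
         | the usual route):
         | 
         |   #include <stdio.h>
         | 
         |   int main(void) {
         |       /* package power limit, in microwatts */
         |       const char *p = "/sys/class/powercap/"
         |           "intel-rapl:0/constraint_0_power_limit_uw";
         |       FILE *f = fopen(p, "w");
         |       if (!f) { perror(p); return 1; }
         |       fprintf(f, "%d", 100 * 1000 * 1000); /* 100 W */
         |       fclose(f);
         |       return 0;
         |   }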
        
         | theevilsharpie wrote:
         | Inflation is a thing that exists, so decent $100 CPUs are going
         | to be hard to come by if you expect to run modern applications
         | in 2021.
         | 
         | However, there's plenty of competitive CPUs that can operate
         | below 100W. CPU reviews tend to focus on the top-end products
         | which throw efficiency out the window for higher clock speeds,
         | but lower tier SKUs target 95W, 65W, and 35W power points,
         | while still maintaining quite reasonable performance.
         | 
         | While I'm not as familiar with the power management
         | capabilities of modern Intel CPUs, AMD Ryzen CPUs can operate
         | in an "Eco mode" where the power target is lowered, so you can
         | optimize for power efficiency without having to go out of your
         | way to buy a special "low-power" SKU.
        
         | zamadatix wrote:
         | If you're willing to accept inflation since the early 90s check
         | out something like the 5600G instead of the top end models like
         | the 5950X, or wait for the low end SKUs of Alder lake to come
         | out too. The 5600G is a fantastic new 6 core CPU that will
         | handle all but the most extreme multithreaded workloads with
         | ease while staying under 100 Watts.
        
         | zokier wrote:
         | Pentium G6400 is available <$100, is rated at 58W TDP, and is
         | "decent" by some measure. For example it is dramatically faster
         | than RPi4 for almost all tasks:
         | 
         | https://www.phoronix.com/scan.php?page=news_item&px=Raspberr...
         | 
         | And in general is in the same ballpark (smidgen faster) as the
         | now classic i5 2500k:
         | 
         | https://www.phoronix.com/scan.php?page=article&item=celeron-...
         | 
         | Considering that I'm typing this comment on a laptop that is
         | probably fair bit slower than that G6400, I do believe it is
         | decent for basic desktop usage.
        
       | mhh__ wrote:
       | Also, the 12900k is trading blows with the 5950x but I think the
       | mid range CPUs are actually the draw here.
        
       | tus89 wrote:
       | > symmetric multithreading (SMT, also known as "hyperthreading").
       | 
       | Say what now?
        
       | roody15 wrote:
       | " Jim Salter We typically consider Cinebench to be the gold
       | standard for general-purpose CPU tests--and for the first time in
       | years, Intel trounces AMD's best offerings here."
       | 
       | What nonsense; look at the graph this comment is referencing.
       | The new i9 barely edges ahead of Ryzen, and they use the term
       | "trounces".
       | 
       | Give me a break ... price per performance is still clearly in
       | AMD's favor.
        
         | 1024core wrote:
         | Also, this:
         | 
         | "Passmark is the only benchmark we ran that still gave the nod
         | to AMD's Ryzen 9 CPUs--and even there, AMD won only by a narrow
         | margin."
         | 
         | The "narrow margin" being > 20% better ... :roll eyes:
        
         | trynumber9 wrote:
         | >price per performance still clearly in AMD's favor
         | 
         | How do you reckon? The 12900K is cheaper than the 5950X. And
         | the 6+4 core 12600K is priced nearly the same as the 6 core
         | 5600X.
        
       | neogodless wrote:
       | Disappointingly, despite the big.little design adding efficiency
       | cores, this release focuses on performance, and is still far
       | behind AMD (and of course Apple) on efficiency.
        
         | meepmorp wrote:
         | Yeah, but this is also about "enthusiast" CPUs, which is very
         | much about eye popping top perf numbers, independent of
         | efficiency, etc. We're talking sports cars, not commuter
         | options.
        
           | neogodless wrote:
           | It is. Depending on the consumer, these are just what the
           | doctor ordered. Get a high quality motherboard, massive
           | cooling solution, and eke out the maximum performance.
           | 
           | But this won't translate well into excellent laptop chips, so
           | we'll continue to wait and see if Intel has something ready
           | in time for their next mobile release.
           | 
           | And personally, I don't get enough benefit from the last few
           | percentage points of performance to give up a quiet, cool-
           | running, relatively power-sipping desktop system. That's just
           | me, but that means I prefer to see advances in efficiency
           | regardless of the market segment!
        
             | BoorishBears wrote:
             | Lol how was a top-of-the-top end desktop release supposed
             | to translate well into excellent laptop chips?
             | 
             | Something about your wording here seems to imply that it's
             | not practically guaranteed that these don't translate into
             | laptop results
             | 
             | -
             | 
             | I honestly never get why the whole "quiet and cool" thing
             | comes up for these top of the line CPUs.
             | 
             | My current personal system is near-silent and already
             | drawing something like 450W peaks for GPU usage alone,
             | which translates to a lot more than "a few percentage
             | points of performance"
             | 
             | If you can pay, you can make just about anything short of
             | server parts quiet and cool, and the power difference won't
             | affect your CO2 footprint in any meaningful way. And when
             | you're looking at $600 CPUs, generally speaking you can
             | pay...
        
               | [deleted]
        
               | kalleboo wrote:
               | The whole point of big.little and having efficiency cores
               | is to save power on mobile when you don't need all that
               | performance.
               | 
               | The argument is made that in the same budget
               | (die/power/heat) of a performance core you can fit
               | multiple efficiency cores to help with threaded
               | workloads, but this Intel CPU uses even more power, so
               | wouldn't they have been better off with 100% performance
               | cores on an "enthusiast" chip?
        
               | BoorishBears wrote:
               | Intel didn't use 100% performance cores because
               | apparently they felt they could get more performance
               | from the little cores (which are literally smaller;
               | you can fit more in a given area on the die).
               | 
               | Shouldn't that be obvious? It's practically a
               | tautology.
        
               | kalleboo wrote:
               | The anandtech benchmarks show that they have better
               | performance in single-core benchmarks (that run on the
               | performance cores) and Ryzen has the upper-hand in multi-
               | threaded benchmarks (since it is all performance cores).
               | So it seems like if the goal was to make an enthusiast
               | chip they would have been better off stuffing their chip
               | with their superior performance cores.
               | 
               | Obviously Intel wants to compete on mobile. It's the
               | biggest market now, and they've been the worst at it. But
               | a new platform is expensive so in the beginning you have
               | to start with the high-margin enthusiast chips, so even
               | if a platform is designed to have gains on mobile you
               | have to start them at the enthusiast level. If they
               | weren't planning on scaling this down to mobile there
               | would be no reason for this architecture. But it doesn't
               | look like it will actually end up any more efficient with
               | their tech.
        
               | BoorishBears wrote:
               | I honestly don't get what you're trying to say.
               | 
               | You don't work at Intel I presume. Even of the people who
               | work at Intel I doubt there's any _one_ person who could
               | tell you why they decided on the given arrangement.
               | 
               | I expect this kind of baseless speculation on some more
               | casual forums not one where most of us are aware of how
               | insanely complex modern CPU design is.
               | 
               | In the end they came up with a design that is already
               | performing top of its class in the workloads that are
               | most common, and we're likely to see improvements over
               | time.
               | 
               | -
               | 
               | I mean, if your concern is productivity, here's what
               | the people building systems for the world's largest
               | production houses have to say:
               | https://www.pugetsystems.com/labs/articles/12th-Gen-
               | Intel-Co...?
               | 
               | > Overall, the 12th Gen Intel Core CPUs are terrific
               | across the board, providing a large performance boost
               | over the previous 11th Gen CPUs, and in almost every
               | single case, handily out-performed AMD's Ryzen 5000
               | series. Intel's lead is larger at the i5 and i7 level,
               | but even with the Core i9 12900K, Intel consistently came
               | out on top.
               | 
               | I don't know why you're acting like they gave something
               | up here... because their synthetic benchmarks aren't up
               | to snuff?
        
               | neogodless wrote:
               | > Lol how was a top-of-the-top end desktop release
               | supposed to translate well into excellent laptop chips?
               | 
               | See Zen 2. Zen 3. The desktop chips are very efficient.
               | When you throw 105W at them, they compete at the very top
               | of the desktop performance market. Previous to this Alder
               | Lake release, they were beating much more power hungry
               | Intel chips. Now they remain competitive at half the
               | power.
               | 
               | Cut them down to 15W-45W chips, and they work great in
               | laptops, with ample performance and excellent battery
               | life.
               | 
               | If Intel has massively more efficient chip designs that
               | they plan to use in mobile, why aren't they making use of
               | them? They could have competitive performance without
               | doubling the power consumption.
               | 
               | It's a similar story with Apple Silicon. They have chips
               | that can do amazing things at 10W. Crank 60W through them
               | and they are excellent performers.
               | 
               | If you have no choice but to consume massive amounts of
               | power to get top performance, so be it. But given the
               | choice...
        
               | LegitShady wrote:
               | Intel is competing on 10nm (now called Intel 7)
               | against TSMC's 7nm process.
               | 
               | >If Intel has massively more efficient chip designs that
               | they plan to use in mobile, why aren't they making use of
               | them? They could have competitive performance without
               | doubling the power consumption.
               | 
               | I'd guess they're aiming to make competitive
               | enthusiast-class desktop chips on their existing
               | process, from fabs they have already made many, many
               | chips in (so there's much less investment), while
               | saving any smaller-process capacity for laptop chips.
        
               | BoorishBears wrote:
               | Trying to draw conclusions about mobile parts from
               | desktop parts is what people magazine racing CPUs do.
               | 
               | It's silly.
               | 
               | And it's doubly silly when you're talking about
               | heterogeneous computing.
               | 
               | > If you have no choice but to consume massive amounts of
               | power to get top performance, so be it. But given the
               | choice...
               | 
               | I mean you have the choice? Don't get the top of the line
               | halo part meant to break records at all costs?
               | 
               | The i5 is almost as fast as the i9, faster than the
               | previous i5 and the 5800X, while using considerably less
               | power than the i9.
               | 
               | -
               | 
               | Top performance always equals "massive power draw"
               | 
               | A 5950X will draw over 300W if you OC, which, surprise
               | surprise, a lot of people who are spending $800 on a CPU
               | end up doing.
               | 
               | I guess I just figured by now people would understand
               | that and focus on performance for the halo SKUs
        
               | neogodless wrote:
               | You seem to have strong opinions about this, while also
               | missing the past decade of desktop and laptop CPU
               | history.
               | 
               | Intel based the entire Core line on the architecture they
               | designed with laptop chips in mind (initially, Pentium
               | M). They found that throwing more power at efficient
               | chips works pretty well. So they switched gears from
               | Netburst.
               | 
               | And while you just ignored my comments on Zen, they still
               | apply. AMD designed Zen for efficiency, which allowed it
               | to be excellent in high power, high performance
               | applications, while also being excellent in low power,
               | medium performance applications. While the chips have
               | differences due to being used differently, the core
               | architecture is used from 15W chips all the way up to
               | 125W desktop chips and even 280W workstation chips.
               | 
               | It's not impossible that Intel designs completely
               | different architectures for their next laptop chips than
               | what they revealed with Alder Lake, but it's also
               | unlikely, and the point of pairing efficiency cores with
               | performance cores is to allow for flexibility in how they
               | use the architecture. If Alder Lake is really just for
               | 300W workstations, then Intel made a mistake bothering
               | with efficiency cores.
               | 
               | What we didn't see today was evidence that Alder Lake is
               | ready for mobile, because these chips are much less
                | efficient than the alternatives. How can we expect
                | this architecture to be efficient when you throw less
                | power at it, when it's already proving inefficient and
                | requires lots of power just to perform?
        
               | BoorishBears wrote:
               | 99% of this boils down to two words:
               | 
                | _heterogeneous computing_
               | 
               | It means that they have exponentially more knobs to turn.
               | 
               | -
               | 
                | And I stand by my point: I'm ignoring the insistence
                | on speculating about unannounced hardware for no
                | discernible benefit.
               | 
                | It's also weird that this is the second comment acting
                | like outsiders know better than Intel what mix of
                | cores is best.
               | 
               | Especially since this has proven to be an excellent
               | workstation CPU:
               | 
               | > Overall, the 12th Gen Intel Core CPUs are terrific
               | across the board, providing a large performance boost
               | over the previous 11th Gen CPUs, and in almost every
               | single case, handily out-performed AMD's Ryzen 5000
               | series. Intel's lead is larger at the i5 and i7 level,
               | but even with the Core i9 12900K, Intel consistently came
               | out on top.
               | 
               | https://www.pugetsystems.com/labs/articles/12th-Gen-
               | Intel-Co...?
               | 
                | It's almost like people mistook 7zip and Cinebench for
                | actual productivity results?
        
           | notTheAuth wrote:
           | "Enthusiast" CPUs seem pointless. My CPU is never above 20%
           | in AAA games as it's all on the GFX card now.
           | 
           | Good to see CPUs going through the great decoupling that
           | software did.
           | 
            | IMO the Steam Deck is the future of home desktops. Both my
            | kids are into science; I'm excited to have a drone remote,
            | sensor base station, generic PC, etc., in a high-quality
            | package versus something like the PinePhone.
            | 
            | Valve and Apple are pushing hardware forward. Hopefully
            | they can obsolete the need for data centers of generic
            | CPUs and piles of Byzantine software by making hardware
            | with the best logic for a task built in, available to home
            | users.
        
             | shadowfacts wrote:
             | > My CPU is never above 20% in AAA games
             | 
             | That just means your GPU is the bottleneck, not that the
             | CPU couldn't be utilized more.
        
             | [deleted]
        
             | omni wrote:
             | There are plenty of games that actually use CPU: Microsoft
             | Flight Simulator, Factorio, Stellaris, Total War, pretty
             | much any city simulator game, etc. Sure, your average dumb
             | AAA action game won't, but that doesn't mean a good CPU is
             | worthless.
        
               | notTheAuth wrote:
                | With different software pipelines they could run right
                | on a GPU.
               | 
                | It's all state in a machine, and ML is showing us that
                | recursion + memory accomplish a lot; why keep all the
                | generic structure of x86 if we can prove the substrate
                | works just as well, with better power efficiency, when
                | it's structured for the specific task?
               | 
               | Chips aren't concepts, they're coupled to physics;
               | simplify the real geometry. I think that's what Apple is
               | really proving with its chips, and why Intel is trying to
               | become a foundry; they realize their culture can only
               | extend x86 and x86 comes from another era of
               | manufacturing.
               | 
                | I got into tech designing telecom hardware for mass
                | production in the late '90s and early '00s. I just
                | code now but still follow manufacturing, and have
                | friends who
               | work in fabs all over; this is just sort of a summary of
               | the trends we see _shrug emoji_
        
               | GhettoComputers wrote:
                | Is running it all on the GPU a realistic goal? Nvidia
                | wants ARM so they can make GPUs and CPUs together. The
                | idea is as intriguing as making games that are OS-
                | independent and run bare metal, targeting the ISA
                | directly. I don't think there are any games that do
                | that.
        
               | GhettoComputers wrote:
                | Factorio isn't stressing 8-core CPUs. Stellaris can be
                | played on a laptop. You're confirming his conclusion
                | that they don't need a strong CPU to play.
        
               | dragontamer wrote:
               | Endgame Factorio stresses CPUs because rocket-per-minute
               | bases are a thing.
               | 
               | 1 RPM is where a mega base starts. Stronger players can
               | do 20 RPM (yes, a rocket every 3 seconds).
               | 
                | In those conditions, your CPU becomes the limit on
                | your RPM as the game starts to slow down.
        
               | riversflow wrote:
               | lol! Stellaris _can_ be played on a laptop, but try
               | ramping the Galaxy size up to 1000 and /or increase the
               | habitable planet multiplier. You get a couple hundred
               | years in and the game just crawls even on nice hardware.
               | Its not unplayable, its a strategy game, but the pace
               | definitely slows down a lot, and space battles aren't as
               | fun to watch.
               | 
               | By the same token you can play virtually any game on a
               | cheap gaming rig. Just put all the graphics on low, run
               | it at 720p and be happy with 20 fps.
        
               | GhettoComputers wrote:
               | Most games don't need the latest or greatest hardware to
               | run well, there's a lack of good AAA games that make the
               | value proposition of new hardware much less appealing
               | versus the days of wanting to build a computer to play
               | Crysis.
        
               | omni wrote:
               | > Stellaris can be played on a laptop. You are confirming
               | his conclusion that they don't need a strong CPU to play.
               | 
               | Movies can be watched on phones. Does that mean theater
               | screens are pointless?
        
               | GhettoComputers wrote:
               | Just look at the attendance of movies or how often phones
               | are used for videos.
        
             | skocznymroczny wrote:
             | If it's all on the GFX card, why is there a performance
             | difference in games between Intel and AMD CPUs?
        
               | theevilsharpie wrote:
                | When testing games, CPU reviews tend to test at
                | reduced resolutions and quality settings with the
                | highest-end GPU they have, as a means of highlighting
                | the differences between the CPUs.
               | 
               | While there aren't any nefarious intentions on behalf of
               | the reviewer, this approach runs into the following
               | problems:
               | 
               | - People buying high-end GPUs are unlikely to be running
               | at resolutions of 1080p or below (or at lower quality
               | settings), and won't see as much (if any) performance
               | difference between CPUs as what reviewers show.
               | 
               | - People buying lower-end GPUs are going to be GPU-
               | bottlenecked, and won't see as much (if any) performance
               | difference between CPUs as what reviewers show.
               | 
               | - Each frame being rendered needs to be set up, animated,
               | sent to the GPU for display, etc., and like all
                | workloads, there are going to be portions that can't be
               | effectively parallelized. As such, the higher the frame
               | rate, the more likely the game is to be bottlenecked by
               | single-threaded performance, which is an area where Intel
               | CPUs have traditionally been strong relative to AMD's.
               | However, as frames get more complex and take longer to
               | render, the CPU has more of an opportunity to perform
               | that work in parallel, and raw computational throughput
               | is an area where AMD's modern CPUs have been strong
               | relative to Intel's. So just because a CPU has leading
               | performance in games today, doesn't necessarily mean that
               | will hold in the future as game worlds become more
               | complex (and reviewers revisiting the performance of
               | 2017-era AMD Zen 1 vs. Intel Kaby Lake in recently-
               | released titles have already started seeing this).
               | 
               | In short, the way that reviewers test CPU performance in
               | games results in the tests being artificial and not
               | really reflective of what most end users would actually
               | experience.
               | 
               | After all, a graph showing nearly identical CPU
               | performance across the lineup and the reviewer
               | concluding, "yep, still GPU-limited," doesn't make for an
               | interesting article/video.
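                | 
                | A rough sketch of that bottleneck arithmetic, with
                | made-up per-frame costs purely for illustration
                | (nothing here is measured data):
                | 
                |     # Frame rate is set by whichever side takes longer
                |     # per frame: CPU (simulation, draw-call setup) or
                |     # GPU (rendering). All numbers are assumptions.
                |     def fps(cpu_ms: float, gpu_ms: float) -> float:
                |         return 1000.0 / max(cpu_ms, gpu_ms)
                | 
                |     fast_cpu, slow_cpu = 4.0, 6.0   # assumed CPU ms/frame
                |     gpu_1080p, gpu_4k = 3.0, 12.0   # assumed GPU ms/frame
                | 
                |     print(fps(fast_cpu, gpu_1080p))  # 250.0
                |     print(fps(slow_cpu, gpu_1080p))  # ~166.7, CPU gap shows
                |     print(fps(fast_cpu, gpu_4k))     # ~83.3
                |     print(fps(slow_cpu, gpu_4k))     # ~83.3, gap hidden
                | 
                | Low-resolution CPU testing essentially forces the
                | first case so the CPU difference can be seen at all.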
        
             | GhettoComputers wrote:
              | Just a question about gaming. I haven't really seen any
              | good AAA games worth playing anymore. GPUs and CPUs have
              | great capabilities, but I don't see any good games that
              | make buying the hardware worth it. Most of the games I'm
              | interested in don't need good hardware. Do you notice
              | the same trend when you play games?
        
               | Someone wrote:
               | I don't know whether it applies to you, and don't even
               | know whether it's true, but I think that may have less to
               | do with new games being worse than with you being older
               | and/or having seen more games. Getting older makes people
               | less inclined to be obsessed with games, and having seen
               | more games decreases the chance of a new game being an
               | outlier, and outliers attract attention.
               | 
               | I think this applies to other fields, too. Watch your
                | umpteenth Super Bowl, and chances are you will think back
               | to the 'better' one you saw when you were young. Twenty
               | years from now, the kids watching their first one now
               | will say the same about this one.
        
               | GhettoComputers wrote:
                | I don't completely disagree, but the point is that
                | games haven't really gotten better. Gameplay hasn't
                | improved for many titles (look at Cyberpunk 2077), and
                | there are more and more HD remakes because new games
                | aren't exciting people (SC2 was never as loved as SC1,
                | same with Diablo 2 vs. 3); graphics have improved, but
                | gameplay has not. I think Nintendo is the most
                | consistent at making good new games, but that's not
                | really relevant to PC gaming.
        
               | notTheAuth wrote:
               | Yeah I think that's a side effect of knowing how the
               | sausage is made.
               | 
                | I have written my own ECS loops and rendering
                | pipelines, all naive, but beyond that it's optimizing
                | for product fit, and the emotional themes of these
                | products are pretty copy-paste, made to satisfy social
                | memes.
        
               | GhettoComputers wrote:
                | They still occasionally come out, but it's rare. I
                | haven't been happy with an AAA game aside from Prey
                | recently (2017 as "recent"), and Cyberpunk 2077 was
                | all hype and no substance. I think they're running out
                | of new interesting games (Paradox and Arkane are still
                | good studios, though), and many games I'm intrigued by
                | are just remakes.
                | 
                | Starcraft Remastered, AoE II HD, System Shock, and the
                | Halo collection, for instance; the Homeworld remake
                | didn't even interest me since I heard it was worse in
                | some ways, with the hit boxes. None of these need new
                | graphics cards either. It's so different from when PC
                | hardware upgrades and games were so much more closely
                | coupled.
        
               | staticman2 wrote:
                | Sony has started to port prestige first-party
                | PlayStation games like Horizon Zero Dawn to PC. If you
                | like that sort of thing, it's worth checking out...
        
               | Apocryphon wrote:
                | Feels like we've hit a plateau in graphics for nearly
                | a decade now, in terms of "good enough" or "realistic
                | enough."
        
               | GhettoComputers wrote:
                | I haven't been excited about graphics since Crysis in
                | 2008; nothing after that was very impressive in
                | comparison.
        
       | prirun wrote:
       | I think the heat given off by that CPU is what burnt down the
       | barn.
        
         | ant6n wrote:
         | Isn't that what the title means?
        
           | neogodless wrote:
           | barn burner: "an event, typically a sports contest, that is
           | very exciting or intense."
           | 
           | Though one would hope the title was selected for the double
           | meaning.
        
       ___________________________________________________________________
       (page generated 2021-11-04 23:01 UTC)