[HN Gopher] Intel Core i9-13900T CPU benchmarks show faster than...
       ___________________________________________________________________
        
       Intel Core i9-13900T CPU benchmarks show faster than 12900K 125W
       performance
        
       Author : metadat
       Score  : 71 points
       Date   : 2023-01-16 17:17 UTC (5 hours ago)
        
 (HTM) web link (wccftech.com)
 (TXT) w3m dump (wccftech.com)
        
       | 867-5309 wrote:
        | the same can also be said for the 12900K "at 125W": it has a
        | turbo power limit (PL2) of 241W
        
       | zython wrote:
        | Newer CPU faster than older CPU; can someone explain if I am
        | missing something, or is that the whole story here?
        
         | bhouston wrote:
          | It seems that AMD is losing the performance crown for consumer
          | PCs for both single threaded and multithreaded workloads at
          | high efficiencies. I love AMD, but it does seem that Intel is
          | back and at worst neck and neck. It isn't clear whether Intel
          | can clearly pull ahead.
         | 
          | This isn't great for AMD's stock; the boom times may be over?
         | 
         | What confuses me is that AMD still has a huge lead for server
         | CPUs. I have not seen massive adoption of AMD in the cloud yet.
        
           | smileybarry wrote:
           | > What confuses me is that AMD still has a huge lead for
           | server CPUs. I have not seen massive adoption of AMD in the
           | cloud yet.
           | 
            | This was years ago, so the tide may've shifted, but part of
            | it could still be vendor experience and "it just works"
            | familiarity?
           | 
           | When EPYC gen 1 & 2 came out, we were shopping for a bare
           | metal megaserver spec at work. I got a budget and free rein
           | (except it had to be Dell), and I really wanted to pick EPYC
           | for the better core clocks (which were significant in our
           | build architecture) _and_ more cores _and_ better cost.
           | 
            | With one EPYC spec and one Xeon Gold spec built (EPYC was I
            | think $13k vs $15k?), work was still a bit uneasy about AMD
            | processors. Our workload was MSVC compilation, but they were
            | concerned about architecture differences, since all of our
            | workstations & laptops were Intel. They preferred paying
            | more because "we already have Xeon servers and they're
            | proven".
           | 
           | So, we ended up getting the Xeon Gold spec instead.
        
           | vvillena wrote:
            | Each brand has its strengths and weaknesses. AMD isn't
            | dominating anymore, but they are still ahead in areas such
            | as AVX-512 (absent in Intel Core), the ability to fit more
            | than 8 performant cores on a chip (Intel's inter-core
            | communication architecture is difficult to scale, hence
            | their new P+E approach, which fortunately seems to work
            | well), and their ability to use their multi-chip process to
            | mount absurd amounts of cache. The multi-chip approach is
            | also a tech advantage that allows them to cut costs.
        
         | wmf wrote:
         | 12th gen and 13th gen are the same architecture on "the same"
         | process so it's somewhat surprising that efficiency has
         | increased.
        
       | MR4D wrote:
       | For AMD reference, it's almost 10% faster than a Ryzen 9 5950X.
        
         | ZeroCool2u wrote:
         | I'm not sure it's fair to say the 5950X is a good point of
         | reference. The 7950X has been out for quite some time at this
         | point and the 7950X3D is announced and expected to be available
         | next month.
        
           | bloodyplonker22 wrote:
            | For those of us living somewhere with very high electricity
            | costs, such as California, the AMDs have much better price
            | to performance than the Intel CPUs over a period of 3-5
            | years (which is how long I normally wait before upgrading
            | CPUs).
        
           | Godel_unicode wrote:
           | And a ton of people are cross-shopping more expensive newer
           | parts with still-available 5000 series CPUs which are much
           | cheaper. There's an interesting question about whether
           | absolute performance is more important or if you should
           | weight that against cost and/or power.
           | 
            | The GN benchmarks recently have done a good job of explaining
            | both the cost perspective and the performance-per-watt.
           | 
           | Edit: for reference, Newegg currently has the 5950x and 7950x
           | at $500 and $600 respectively. And that's before the higher
           | platform cost for AM5/DDR5.
        
             | vondur wrote:
              | The 5950X uses DDR4. The Ryzen 7000 series uses DDR5.
        
             | fbdab103 wrote:
              | If reading a review of a vendor's newest product, it is
              | only fair to give the performance of the competitor's
              | similar-generation release. If there are strong
              | availability/price concerns (eg generation N+1 is triple
              | the cost of generation N) those should be noted as to why
              | the most appropriate apples-to-apples comparison was not
              | performed.
             | 
              | Calculating performance per dollar is an entirely different
              | exercise (though a valuable one, and rarely is it the
              | newest generation that hits the sweet spot).
        
           | formerly_proven wrote:
           | The 7950X3D is not going to be faster in non-game benchmarks.
           | Most likely it will be a bit slower in most synthetic tests
           | and non-game workloads (rendering, encoding, FEA etc.)
        
           | Espressosaurus wrote:
           | It's a fair point of reference for those of us with a 5950X,
           | which was a top performer a couple years ago and is still a
           | damn lot of CPU no matter how you slice it.
        
       | chx wrote:
       | Someone did a power analysis of the 13900K
       | https://www.reddit.com/r/hardware/comments/10bna5r/13900k_po...
       | and the CPU scales very well with power limit up to around 100W
       | where the returns are rapidly diminishing.
        
         | metadat wrote:
         | The linked graphs paint a really clear picture, thank you for
         | sharing.
        
         | jeffbee wrote:
         | 100W is also my observed sweet spot on C++ project build times.
          | At 100W I got the minimum build time. At higher limits the CPU
          | does draw more power, but the build takes just as long.
        
       | htk wrote:
       | With the processor being rated "up to 106 watts" I would expect
       | to see M1 Ultra in the list.
        
       | kristianp wrote:
       | The 13th gen i9 in question has 8 performance cores and 16
        | efficiency cores, and 36MB of cache, whereas the 12th gen has 8
        | and 8, with 30MB of cache, so it's not that surprising it
        | performs better.
        
       | adamsmith143 wrote:
       | Why can't we get efficiency leaps like this in GPUs? Tired of my
       | 3080 heating my entire office up...
        
         | bryanlarsen wrote:
         | Performance/watt increases more every GPU generation than it
         | does every CPU generation.
        
         | zokier wrote:
          | Arguably Intel is now just catching up with the times after
          | their long manufacturing quagmire; I suspect a large part of
          | the "leaps" is due to that. As GPUs have been making steadier
          | progress, such leaps do not happen there. Also, there is a lot
          | of doubt around these sorts of benchmarks, especially for P+E
          | setups.
        
         | tarnith wrote:
          | Undervolt/downclock it? Afterburner or GPU Tweak, etc., will do
          | this relatively easily. (I've undervolted AMD cards that come
          | with ridiculous core voltages without having to drop clocks at
          | all in the past.)
         | 
          | Almost everything is stock-clocked past the point of
          | diminishing returns right now. This Intel part looks to mostly
          | be a downclocked version of the existing 13900.
         | 
          | Look at the recent AMD 7900 vs 7900X. You can get 90-95% of the
          | performance for far less power by just backing off the voltage
          | and clocks a bit. (In their TDP terms, going from 115W to 65W
          | loses less than 5-10%.)
         | 
          | Everyone's fighting for the chart crown or a significant
          | generational-improvement number they can point to, and missing
          | the sweet spot on the efficiency curve, but you can bring it
          | back yourself. I bet the 3080 still runs great with a 50-100W
          | lower power limit/TDP depending on your use case, and I doubt
          | that will result in anywhere near a 1:1 perf/power reduction.
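
The intuition behind undervolting at a fixed clock can be sketched with the first-order CMOS dynamic-power relation P ~ C * V^2 * f. The voltages below are illustrative assumptions, not measured values for any specific card:

```python
def dynamic_power(v_core: float, f_ghz: float, c: float = 1.0) -> float:
    """First-order CMOS dynamic power model: P ~ C * V^2 * f.
    Ignores static/leakage power; c is an arbitrary scale constant."""
    return c * v_core ** 2 * f_ghz

# Illustrative undervolt: assumed 1.05 V stock -> 0.90 V, same 1.92 GHz clock
ratio = dynamic_power(0.90, 1.92) / dynamic_power(1.05, 1.92)
print(f"power at 0.90 V is {ratio:.0%} of stock")  # roughly a quarter saved
```

Since voltage enters quadratically, even a modest undervolt cuts dynamic power substantially while leaving clocks (and thus performance) untouched.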
        
           | SketchySeaBeast wrote:
           | Yeah, exactly. I was able to get a decent ~50W drop and
           | stayed at a higher than base clock rate (1920 MHz @ 900 mV).
           | It's still a space heater, but it's better.
           | 
            | I also frame-rate limit myself in a lot of games - I don't
            | need my MMOs running at 160 fps, so in many games I'm at
            | 150-200 W. I still wish it were less, but that's much more
            | reasonable than 400 W.
        
           | wtallis wrote:
           | > (I've undervolted AMD cards that come with ridiculous core
           | voltages without having to drop clocks at all in past)
           | 
           | It's often even worse than that. There are plenty of cases
           | where you can undervolt so far that you now have enough
           | headroom in the power delivery and cooling to allow you to
           | run at substantially _higher_ clock speeds.
        
         | jnwatson wrote:
         | The 4090 is a lot better in terms of heat.
        
           | Godel_unicode wrote:
           | Depends what you mean. It generates more heat, it's just more
           | effective at getting that heat off the die and into your
           | room.
           | 
              | Edit: since its performance per watt is higher, if you're
              | capping frame rates then you can get less heat out of the
              | 4090. Like I said, depends what you mean.
        
             | MikusR wrote:
              | At half the power usage it only loses 10% performance.
             | 
             | https://videocardz.com/newz/nvidia-geforce-rtx-4090-power-
             | li...
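
Taking the linked figures at face value, the efficiency gain is simple arithmetic:

```python
# Figures cited above: half the power limit, only a 10% performance loss
power_ratio = 0.5
perf_ratio = 0.9

perf_per_watt_gain = perf_ratio / power_ratio
print(f"perf/watt improves by {perf_per_watt_gain:.1f}x at the lower limit")
```

In other words, capping the card nearly doubles performance per watt.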
        
       | rektide wrote:
        | Stunning performance, especially for a (heavily iterated-on) 10nm
        | node. Alas, I doubt we'll see many mini-PC/1L PCs configured with
        | such a top-tier chip, but what an epic mini-server that would
        | make! So many cores (8P+16E)!
        
         | layer8 wrote:
         | I can't wait to see a Cirrus7 [0] fanless build with that CPU.
         | It just needs a suitable Mini-ITX board.
         | 
         | [0] https://www.cirrus7.com/en/
        
         | whatever1 wrote:
          | Any idea how these heterogeneous processors compare to regular
          | processors with identical cores for server workloads?
         | 
         | Do workers get stuck in efficiency cores?
        
       | varispeed wrote:
       | Am I the only one who is no longer excited about Intel releases?
       | Until they release something substantially better than M1/M2,
       | I'll pass.
       | 
       | It's a shame that Intel is wasting their energy and fabs on
       | something that offers such mediocre improvements.
        
         | 2OEH8eoCRo0 wrote:
         | Substantially better how? Power/performance? For me, Intel and
         | AMD are already better because I can buy them and use them
         | however I please.
        
         | spicymaki wrote:
         | M1/M2 is fine if you want to develop on a closed platform.
        
         | bhouston wrote:
          | This has better single threaded performance than M1/M2 by a
          | significant margin, but at a higher total power. When the M1
          | was released, I believe it had the fastest single threaded
          | performance of any CPU. That is no longer the case, by quite a
          | bit now.
        
           | varispeed wrote:
            | So it's more like putting lipstick on a long-dead design.
            | Sure, it may look fresh from afar, but it still stinks.
           | 
            | What I am trying to say is that at this level of power
            | consumption Intel should be many times faster than M1 -
            | that is, offering substantially better performance per watt.
           | 
            | Sure, there are niche applications where power consumption
            | doesn't matter and only the single core performance counts,
            | but couldn't older tech be overclocked to achieve the same
            | result?
           | 
            | Not sure. The point is that what Intel is doing increases
            | pollution and causes people to bin perfectly working
            | machines just because there is a "new" CPU on the block,
            | when in real life they probably won't see a difference.
           | 
            | It's a really irresponsible thing for Intel to do. They
            | should go back to the drawing board and stop releasing
            | meaningless products until they actually have something
            | worth upgrading to.
        
           | merb wrote:
            | In a laptop, it's quite stupid to stay at 100W just to have
            | slightly better performance. I mean, it's only a significant
            | margin as long as you can get the heat away, which is often
            | the biggest problem in most laptops...
            | 
            | (The M1 Max uses half the power for Geekbench 5 at ~1650,
            | and it's a two-year-older chip. If the M2 Max can hold the
            | power limit and still get to ~2k it would be vastly
            | superior; i.e., the regular M2 is already at ~1850 with less
            | than half the power of the 13900T.)
        
           | vondur wrote:
            | For the most part it seems like Intel is throwing a ton of
            | power into their CPUs to get these results.
        
             | wongarsu wrote:
              | There's the hope that this is just a stopgap measure from
              | Intel to deal with AMD and Apple competition, and that
              | power demands will go back to normal once Intel has had
              | time to push a couple of architecture improvements through
              | the pipeline.
        
             | PragmaticPulp wrote:
             | If power efficiency is a concern, you wouldn't want to look
             | at the absolute top of the line enthusiast-targeted chips.
             | 
             | I care greatly about power consumption in my laptop, but I
             | don't really care at all in my desktop. If they can make my
             | compiles finish 10% faster by doubling the power usage,
             | bring it on. My CPU is rarely ever running at these 100%
             | usage levels, so it's not like it makes a difference in my
             | power bill. Modern coolers are plenty quiet.
        
               | MikusR wrote:
                | For example, the fastest AMD consumer CPU, the 7950X, is
                | really efficient and fast in eco mode.
        
       | ribit wrote:
        | What's the actual power consumption? Without those figures the
        | article is just clickbait. Looking at those multicore results, I
        | don't believe for a second that the CPU was drawing less than 90
        | watts - probably more.
        
       | [deleted]
        
       | wtallis wrote:
       | The article's title says "at 35W", but the article text says "the
       | T-series chip is rated at up to 106 Watts". That appears to refer
       | to the chip's short-term turbo power limit (PL2 in Intel
       | parlance), typically effective for a default of 28 seconds if
       | thermal limits don't kick in--but the time parameter can be
       | adjusted by the end user (and this benchmark run has unknown
       | provenance, so who knows what power management settings were in
       | effect).
       | 
       | Since the benchmark in question (Geekbench 5) only runs for a
       | minute or two, it does most of its work before the chip even
       | attempts to throttle down to anywhere near 35W. An actual power
       | _measurement_ averaged over the entire benchmark run would yield
       | a significantly higher value than 35W.
        
         | htk wrote:
         | That's disappointing, I thought they were closer to Apple
         | Silicon's performance/watt ratio.
         | 
         | The deceptive nature of such benchmarks is also troubling.
        
           | toast0 wrote:
           | It's kind of hard to compare performance/watt meaningfully
           | when the wattages aren't at all similar.
           | 
           | If you limit power on current generation Intel/AMD
           | processors, the perf/watt is pretty good; depending on the
           | specific load, sometimes better, sometimes worse than M2.
           | When you allow more watts, perf goes up, but perf/watt goes
           | down as you see diminishing returns. (With some loads, you
           | may also have a point where more watts reduces performance,
           | whoops)
           | 
           | Geekbench isn't great as it doesn't even attempt to capture
           | power usage, but that may not be available or accurate on all
           | systems anyway.
        
           | runnerup wrote:
           | The benchmarks themselves aren't really deceptive, but
           | wccftech's reporting on them is misleading. These aren't
           | claims being made by Intel, but just random user benchmarks
           | that appeared online.
           | 
           | For all benchmarks, context matters. These days, most in-
           | depth reviewers understand how to work with transient power
           | and thermal limits with respect to time. For example,
           | measuring the MacBook Air M1's peak speed/time vs. its lower
           | sustained speed after it hits thermal throttling around the
           | 5-7 minute mark.
        
         | bee_rider wrote:
          | It is a shame these kinds of knobs are rarely exposed on a
          | laptop. Sometimes I throttle my laptop CPU down to like
         | 800MHz-1.2GHz to save power/prevent the fan from going. But
         | this makes Vim less responsive for certain events. What I'd
         | actually like is normal behavior but with the turbo clock
         | limited to like... I dunno, a second.
        
           | jeffbee wrote:
           | In Linux you want powercap tools. You can tune all three
           | parameters in terms of power and time constant, so you can do
           | exactly what you suggest. You can change it on the fly
           | without rebooting so you could have a ridiculous setup like
           | mine where I have the power limits wired up to two unused
           | buttons on my mouse.
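
On Intel hardware these knobs are exposed through the RAPL powercap sysfs interface. A minimal sketch, assuming the usual `intel-rapl:0` package domain (domain and constraint numbering vary by machine, and the writes need root):

```python
from pathlib import Path

# Package-level RAPL domain; constraint_0 is typically the long-term
# limit (PL1) and constraint_1 the short-term limit (PL2).
RAPL = Path("/sys/class/powercap/intel-rapl:0")

def watts_to_uw(watts: float) -> int:
    """The sysfs power-limit files are denominated in microwatts."""
    return int(watts * 1_000_000)

def seconds_to_us(seconds: float) -> int:
    """...and the time windows in microseconds."""
    return int(seconds * 1_000_000)

def set_limit(constraint: int, watts: float, window_s: float) -> None:
    """Write one power limit and its time constant (requires root)."""
    (RAPL / f"constraint_{constraint}_power_limit_uw").write_text(
        str(watts_to_uw(watts)))
    (RAPL / f"constraint_{constraint}_time_window_us").write_text(
        str(seconds_to_us(window_s)))

# e.g. set_limit(0, 35, 28) would pin the long-term limit to 35 W / 28 s
```

Because the files accept writes at runtime, limits can be flipped on the fly - which is exactly what makes setups like mouse-button-triggered power profiles possible.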
        
             | bee_rider wrote:
             | Cunningham'd. Awesome, I'll look into it, thanks.
        
           | MikusR wrote:
           | https://www.techpowerup.com/download/techpowerup-
           | throttlesto...
        
         | [deleted]
        
         | ajross wrote:
         | > Since the benchmark in question (Geekbench 5) only runs for a
         | minute or two, it does most of its work before the chip even
         | attempts to throttle down to anywhere near 35W.
         | 
         | That doesn't sound right at all. Nothing in a PC system (absent
         | water systems with huge reservoirs I guess) is going to buffer
         | excess heat on minute-long timescales, not even close. There's
         | literally nowhere for that energy to go; 125W for a minute
         | would be melting the solder joints.
         | 
         | CPU throttling works on at most small-integer-second scales.
        
           | wtallis wrote:
           | CPU throttling happens on many different timescales. That's
           | why Intel processors have multiple power limits (PL1, PL2,
           | PL3, PL4), in addition to current and temperature-based
           | throttling mechanisms. The time constant for PL2 is usually
           | either 28 seconds or 56 seconds. At those timescales, the
           | concern is usually not with power delivery or CPU die
           | temperature but rather with exterior case temperatures that
           | the user may be directly touching.
           | 
           | Based on the reported benchmark performance, it seems very
           | unlikely that temperature-based throttling kicked in, and
           | it's clear that the chip was operating well above 35W for at
           | least a large portion of the benchmark run. So the PL1 and
           | PL2 turbo power limits are the relevant controls at play
           | here.
        
             | ajross wrote:
             | Yeah yeah, I know. I'm just saying that there's zero chance
             | that throttling is affecting this measurement. The idea
             | that an Intel machine is significantly faster for 2-3
             | minutes after the start of a benchmark is just silly,
             | that's not the way these things work. Go start a benchmark
             | of your own to see.
             | 
             | Again, the thermodynamic argument is fundamental here.
             | You're saying that a "35W" CPU is "actually" drawing power
             | equivalent to a 125W CPU for exactly the time of a
             | benchmark, which is several minutes. That excess would have
             | nowhere to go! There's no reservoir to store it. (Obviously
             | the cooling system could take it away, but part of your
             | argument is that the cooling system is only good for 35W!).
        
               | wtallis wrote:
               | > The idea that an Intel machine is significantly faster
               | for 2-3 minutes after the start of a benchmark is just
               | silly, that's not the way these things work. Go start a
               | benchmark of your own to see.
               | 
               | I've done so, on many occasions, with actual power meters
               | rather than trusting software power estimates. You really
                | _do_ commonly see a laptop's power consumption drop
               | significantly ~28 seconds into a multithreaded CPU
               | benchmark.
               | 
               | > (Obviously the cooling system could take it away, but
               | part of your argument is that the cooling system is only
               | good for 35W!).
               | 
               | I make no such claim that the cooling system is limited
               | to 35W. I only claim that the default platform power
               | management settings from Intel impose a 35W long-term
               | power limit, unless the system builder has adjusted the
               | defaults to account for whatever form factor and cooling
               | choices they've made.
               | 
               | Perhaps you haven't realized that the turbo power limits
               | will still kick in even if the CPU die temperature is not
               | too hot--because they're not actually a temperature-based
               | control mechanism?
        
               | ajross wrote:
                | Uh... this is a socketed CPU:
                | https://www.intel.com/content/www/us/en/products/sku/230498/...
               | 
               | Now I see where the disconnect was. You're right, if this
               | was a laptop that could happen. It isn't, and it didn't.
        
               | ribit wrote:
               | Even in a laptop CPUs regularly draw more power than the
               | TDP for non-trivial amounts of time.
        
               | wtallis wrote:
               | It's a socketed CPU intended for low-power small form
               | factor systems and thus will usually be running with
               | Intel-recommended power limits or lower, for all the same
               | reasons that laptop CPUs are usually running with low
               | power limits. The control mechanisms don't actually
               | function any differently between their laptop and desktop
               | CPUs, they just have different default parameters (the
               | various turbo limits).
               | 
               | The only relevance of the socketed nature of this part is
               | that it is easy to put it in a normal desktop form factor
               | where a big heatsink and possibly tweaked turbo limit
               | settings can be used to generate misleading benchmark
               | results. But it's not actually certain that this is
               | what's happening; the Intel-recommended default behavior
               | for this chip _can_ plausibly produce the reported
               | results--just not in any way that could be reasonably
               | described as  "35W".
        
               | adrian_b wrote:
               | Even if the configuration may be different from
               | motherboard to motherboard and from laptop to laptop,
               | exactly as wtallis said, most Intel CPUs are configured
               | by default to consume during the first 28 seconds a power
               | 2 to 3 times greater than the nominal TDP, e.g. 105 W for
               | a 35 W CPU.
               | 
                | Most, if not all, subtests of GeekBench need less than
                | 28 seconds, so it is quite possible for the entire
                | benchmark to be run at a 105 W power consumption.
                | Whenever a subtest finishes, the power consumption
                | momentarily drops, which resets the 28 second timer.
               | 
               | If the computer has poor cooling, it may happen that when
                | the CPU spends too much time and too frequently at a 105
                | W power consumption, the junction temperature limit is
               | reached, which triggers thermal throttling and the power
               | consumption is reduced. This is a different mechanism,
               | independent of the one that reduces the power consumption
               | down to the nominal TDP after 28 seconds, or after
               | another configured time.
               | 
               | Thermal throttling reduces the power consumption only
               | enough to keep the temperature under the limit, so the
               | power consumption may remain greater than the TDP until
               | the 28 seconds pass.
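
The behavior described above can be sketched with a toy model. The 35 W / 106 W limits and 28 s window are the figures from this thread; real hardware uses a weighted moving average of power rather than a hard cutoff:

```python
def allowed_power(t_s: float, pl1_w: float = 35.0, pl2_w: float = 106.0,
                  tau_s: float = 28.0) -> float:
    """Toy PL1/PL2 model: full PL2 draw for the first tau seconds of a
    sustained load, then a drop to the long-term PL1 limit."""
    return pl2_w if t_s < tau_s else pl1_w

# Average draw over a 90-second benchmark-like run under this model:
run_s = 90
avg_w = sum(allowed_power(t) for t in range(run_s)) / run_s
print(f"average draw: {avg_w:.1f} W")  # well above the nominal 35 W
```

And since short subtests reset the timer, a real Geekbench run can spend nearly all of its time at the PL2 level, pushing the effective average even higher than this model suggests.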
        
       | Aissen wrote:
       | IMHO, any benchmark should run with Turbo disabled. _And_ come
       | with additional tests about how much turbo brings, and how long
       | it can stay on within a given thermal setup. Otherwise all you
       | have is garbage, not data.
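
On Linux with the intel_pstate driver, a turbo-off baseline run is a one-line sysfs write; a minimal sketch (Intel CPUs only, and the write itself needs root):

```python
from pathlib import Path

# Global turbo switch exposed by the intel_pstate driver
NO_TURBO = Path("/sys/devices/system/cpu/intel_pstate/no_turbo")

def turbo_sysfs_value(enable_turbo: bool) -> str:
    """The file's sense is inverted: '1' disables turbo, '0' enables it."""
    return "0" if enable_turbo else "1"

# NO_TURBO.write_text(turbo_sysfs_value(False))  # turbo off for the baseline
# NO_TURBO.write_text(turbo_sysfs_value(True))   # restore afterwards
```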
        
         | brokencode wrote:
         | Why? It's not like turbo is not a real performance enhancement.
         | With enough thermal headroom, there's no reason why turbo can't
         | be sustained, as heat is the limiting factor.
         | 
         | Good CPU reviews include long render and compile benchmarks
         | that would suffer if the turbo performance couldn't be
         | sustained. If a CPU can sustain high performance, then it
         | doesn't matter if it's using a turbo mode.
        
         | sz4kerto wrote:
          | Turbo is such an inherent feature of new CPUs that turning it
          | off would make the benchmark completely unrepresentative of
          | real-world usage. For example, a 16-core CPU might run a
         | single core at extremely high clock rates during single
         | threaded workloads. What would be the point of turning it off?
         | 
         | Complex devices need complex benchmarking, unfortunately. You
         | won't get a simple, single number that shows how powerful a cpu
         | is.
        
         | xen2xen1 wrote:
         | Like hitting the button on my 486sx/16.
        
         | viraptor wrote:
         | You can have both at the same time by publishing a graph of
         | performance over time where you see the ramp up and the time
         | where the turbo can't be sustained. Quite a few reviewers
         | started doing that maybe 2 years ago?
        
       | PaulWaldman wrote:
        | What is the difference between an i9-13900T and an i9-13900 when
        | limiting PL1 and PL2?
       | 
       | Curiously, the i9-13900T scored better than the i9-13900 in
       | single threaded performance.
        
         | smileybarry wrote:
         | My guess is (slightly?) better binning in addition to "forced"
         | PL1 and PL2 limits. Better silicon can run stable at lower
         | voltage, ergo lower power, ergo lower temps, so I bet there's
         | some binning for T SKUs so Dell & co. can ship not-overheating
         | micro PCs.
        
           | PaulWaldman wrote:
            | Makes sense. T-series chips generally aren't as widely
            | available as individual components as their mainstream,
            | higher-power counterparts. To your point, they are available
            | from OEMs, who seem to get priority, as complete systems.
           | 
           | There is advice to just buy the non-T series and limit power.
           | Interesting to see that, at least in this example, they
           | aren't quite equal.
        
           | 0cf8612b2e1e wrote:
           | I see this frequently, but what is "better silicon"?
           | Physically something is off about the manufacturing which
           | impacts performance, but does not kill the chip. What are
           | those defects?
           | 
              | There are 0.1% bad transistors instead of 0.2%? Heat output
              | is more uniform? The bad transistors are clustered in such
              | a way that signal routing is more efficient and leads to a
              | measurable throughput difference?
        
             | dihydro wrote:
             | Very slight changes in the transistor junctions will make
             | them slightly more resistive, more inductive, physical
             | variance will make them narrower, wider, etc. All of these
             | factors, although extremely slight, will add up to
             | different response curves of switching time vs current vs
             | voltage.
        
       | simonebrunozzi wrote:
        | I might want to buy a powerful new 2023 laptop, but there's a ton
        | of confusion about CPUs, graphics cards, etc. As usual, there's
        | no simple way for a buyer to understand what's best.
        
         | xen2xen1 wrote:
         | Passmark. Always passmark.
        
       | metadat wrote:
       | The performance ratio relative to power consumption places this
       | in new territory for Intel. While the T-series isn't new, I don't
       | recall seeing it competing with the likes of K processors and
       | Ryzens.
       | 
        | Also, the 1.x GHz base clock which turbo boosts to 5GHz, wow. Do
        | other processors scale across such a wide band?
        
         | suprjami wrote:
         | I have a 12400F which idles at 600 MHz and boosts to 4.4 GHz,
         | so I guess this is the new normal now.
         | 
         | Mine also has a massive heatsink and never gets above 25C.
        
       | gjsman-1000 wrote:
        | Wow... I have to ask: what was wrong with the 10nm node before?
        | This seems like an impossible leap.
        
       ___________________________________________________________________
       (page generated 2023-01-16 23:00 UTC)