[HN Gopher] Intel enters a new era of chiplets
___________________________________________________________________
Intel enters a new era of chiplets
Author : giuliomagnifico
Score : 75 points
Date : 2022-08-27 15:06 UTC (7 hours ago)
(HTM) web link (www.servethehome.com)
(TXT) w3m dump (www.servethehome.com)
| pfoof wrote:
| It has kind of FPGA vibes iiuc
| nsteel wrote:
| As of this comment, there are 53 occurrences of "apple" and 4 of
| "chiplet" in the comments here (three from the same comment).
|
| And to comment on topic, this mix of processes, each optimised to
| the task at hand and all on the same package sounds perfect for
| Intel. But I wonder how accessible it is for regular TSMC
| customers. We've had chiplet presentations, but I don't
| recall anything like this being offered, despite our
| application requiring extremely high bandwidth.
| to11mtm wrote:
| > But I wonder how accessible it is for regular TSMC customers.
|
| TSMC is part of the UCIe (Universal Chiplet Interconnect Express)
| consortium, so I'd assume they have some capability. But the
| other members are ARM, Intel, AMD, Qualcomm, and Samsung... so
| I'm not sure if it's a matter of you have to be 'working in
| that club' or if TSMC can provide help on custom solutions.
| yaantc wrote:
| AMD fabs at TSMC (and previously GloFo for the I/O die,
| though this has recently moved to TSMC too) and has been
| using chiplets.
| TSMC does have 2.5 and 3D packaging for this. But it's still up
| to the fabless client of TSMC to design the chip. In other
| words, TSMC provides the enabling tools, but it's up to their
| client to use them. For now it's for the big players.
| nsteel wrote:
| Indeed. And they've got designs using at least two different
| processes (one for compute, another for IO) but the article
| has examples using 4 or 5! I just can't see anyone else going
| as far as that.
| jeffreygoesto wrote:
| What will sit on top of UCIe software wise? Any standards coming
| up there?
| carlycue wrote:
| Apple Silicon will have the title of world's best designed
| silicon for the foreseeable future. The only way for AMD and
| Intel to be competitive is if they can match the
| performance/watt, thinness, fanless operation, and battery
| life of the M2 MacBook Air. Forget having all of these
| things at once. AMD and Intel chips can't even function in
| fanless enclosures!
| Am4TIfIsER0ppos wrote:
| Stick it in a desktop chassis, mount a heat sink, crank the
| power to 100W, then it might be useful.
| SteveNuts wrote:
| Apple has the luxury of being able to design the chips and the
| chassis together to dissipate heat.
|
| Intel and AMD chips go to OEMs.
| zaroth wrote:
| And the operating system too, so they can fully leverage all
| the special features on the chip to improve UX.
|
| Once you have the critical mass to have enough experienced
| engineers up and down the whole stack, and you've figured
| out an organizational structure that allows them to work
| together efficiently, I don't see how you can beat that
| vertical integration in a world where the fabs Apple can
| use are actually better than Intel's own fabs.
| zaroth wrote:
| Correct me if I'm wrong, but a decade ago Intel would
| probably have said their fab/process node was their moat?
|
| I know nothing about chip design, but like most things, I
| feel like one exceptionally brilliant tech lead surrounded
| by a group of say ~100 merely very smart collaborators can
| make a world-class ARM-based processor design. A mere
| $100mm annually in hiring and overhead?
|
| The rest of Apple's advantage comes from being able to
| actually hire the best, and being their own customer at
| scale, which means being able to buy first place in line at
| the fab with billions of dollars of cash.
|
| This is perhaps less true now that the chip has so many
| specialized areas on the die? Like, does the neural engine
| get allocated a certain mm^2 and certain number of bus
| lanes, and then a fully separate team of 100 designs it? I
| suspect the neural engine part of the chip is actually
| super simple to design, it's the tight coupling with the OS
| and getting apps to properly leverage it which is tricky.
| fezfight wrote:
| I don't care about thinness, fanlessness, or battery life,
| assuming it's adequate.
|
| I care about openness and performance (watts are irrelevant if
| reasonable like they are now).
|
| Your list, to me, sounds like marketing. Move the goal posts to
| these arbitrary points, declare victory.
| zaroth wrote:
| That's not fair. Some people care about different things than
| you do. I think op's description is more aligned with the
| broader market, but that's just, like, my opinion.
| yakkityyak wrote:
| Those are the goal posts the majority of consumers value.
| fezfight wrote:
| Consumers believe what we tell them to believe.
| yakkityyak wrote:
| No, not really.
| Tagbert wrote:
| You think that openness and performance aren't marketing
| points? You've just chosen your two favorites.
| fezfight wrote:
| Yes. Just like OP. But with more freedom.
| pram wrote:
| Surely you mean RISC-V then. There's nothing free (like
| freedom) about x86 or ARM.
| fezfight wrote:
| That's the dream! And stop calling me Shirley!
| ChuckNorris89 wrote:
| Can we please have ONE. SINGLE. SILICON DISCUSSION ON HN
| that stays on the topic at hand, without fanboys spamming
| _"hurr durr, my M2 is da best, Intel suxxx, X86 is
| dead!!!111one"_, while not bringing anything useful or
| relevant to the discussion?
|
| Seeing your comment at the top makes me not want to ever open
| any silicon topic here again, as I'm sure it will be full of
| this kind of low-effort comment vomiting marketing garbage on
| how AS is the best and everything else is doomed to failure,
| while not bringing any useful info or arguments on-topic.
| dheera wrote:
| Exactly. I hate Apple with a passion.
|
| EDIT: F this downvoting, people seem to not get the obvious
| fruit pun.
| refulgentis wrote:
| Cosign, and thank you: any CPU discussion past "Apple ARM
| great" has been impossible to have since launch.
|
| A list of places I tried taking the conversation over that
| time, where it was ignored or read as complaints about
| Apple (see the disclaimers in the footer if you read these
| and think 'Wow, he just wanted to talk about why Apple was
| bad'):
|
| - M1 was the first processor on a particular node, so there
| was a short-term opportunity to do an apples-to-apples
| comparison by taking down M1 numbers and waiting for
| upcoming launches
|
| - it wasn't as trivial as "manufacturing on the improved
| node" for AMD, but it was for Qualcomm
|
| - performance of ARM vs. x86 could be teased out by tracking
| the tuple of node x manufacturer of chips and being patient;
| projections and tracking of Qualcomm & AMD chip performance
|
| - the initial M1 was beaten by Tiger Lake in desktop &
| sustained performance cases, which was two(!) nodes behind
|
| - performance and noise issues from Apple always optimizing
| for absolute fan silence, leading to the fans only kicking
| on at extremely high speeds, far into a performance
| workload that had already been throttled
|
| == DISCLAIMERS ==
|
| 1. I am a happy M2 MBP owner and think it's the best chip.
|
| 2. My more nuanced view, summarized, is that it is almost
| restrained: the hardware got bigger, somehow, and in
| software there are growing pains as drivers adapt from the
| iPhone use case to the mostly-plugged-in use case. To wit,
| throttling seems optimized more for ad copy around fan
| noise at low workloads than for the user.
|
| 3. If you feel these discussions were focused on denigrating
| Apple, please recommend curious thoughts to have about chips
| that aren't denigrating Apple.
| CoastalCoder wrote:
| One benefit of threaded conversations is that you can ignore
| one thread and still participate in the rest of the
| discussion.
| saiya-jin wrote:
| This is a US site. Apple is a US company. The biggest company in
| the world, source of so much pride. So tons and tons and tons
| of fanboyism here is inevitable. I'd say most of it is damn
| well earned.
|
| That said, fanboyism makes people blind and uncritical, and
| (at least to me) it's apparent a company like Apple needs
| some good old criticism, rather than blind worship.
| Otherwise they will fall (if they haven't already) into "we
| know what's best for you and you have no say in it", like
| with cough cough "child porn" filters or battery-gate.
|
| I truly honestly don't trust their "we are more secure"
| marketing pitch, especially as a non-US person.
|
| In the end, it's just another corporation driven by a huge
| army of managers whose main focus is salaries and bonuses.
| The idea that they are somehow morally better than
| everybody else when they keep hiring from companies like
| Facebook is pretty dangerous, and goes back to the
| beginning of my post.
| freeflight wrote:
| _> I'd say most of it is damn well earned._
|
| Respect is earned, while fanboyism is rarely a good thing,
| as by definition it's biased.
| echelon wrote:
| I think a suitable counter argument is that Apple has grown
| too large and needs to be split into multiple smaller
| companies to better aid competition.
|
| Between what they're doing to silicon, the outrageous App
| Store behavior, and how they flaunt that they're a quasi-
| governmental entity, I think this could find broad support
| in Congress and the DOJ.
|
| Nobody can compete with Apple, and that's a bad thing for
| everyone.
| umanwizard wrote:
| No comment on the other stuff, but the "silicon" part of
| this argument boils down to "anyone who makes something way
| better than the competition must be shut down because
| that's unfair".
|
| We'd still be in the Stone Age if everybody had that
| mentality.
| heavyset_go wrote:
| Apple successfully monopolized the 5nm node by buying out
| all of TSMC's manufacturing capability at that node size.
| Characterizing it as "anyone who makes something way
| better than the competition" is a strawman portrayal of
| the underlying issue at hand.
| umanwizard wrote:
| Apple contributes billions of dollars to TSMC's R&D. The
| node would not exist yet without Apple.
| jeromegv wrote:
| So Apple brought competitiveness to an industry that badly
| needed it and had been stalling in recent years... and your
| "solution" is that we should prevent them from doing so?
| smoldesu wrote:
| It's a strawman. Apple has the capacity to do immense
| good, but only because _they're the single largest
| company in the world!_ We should scrutinize concentration
| of power heavily, and so far Apple has done nothing to
| suggest their benevolence to the rest of the market.
| They're blowing off Dutch regulators like it's a middle
| school homework assignment, and refusing to loosen their
| asinine monopoly over software distribution on iPhone.
| They're behaving childishly, and everyone knows they're
| not a child. They're a company with hundreds of billions
| of dollars, and they're demonstrating organizational
| failure to address the demand of the market. On top of
| that, their largest revenue sources are rent-collection
| and unibody aluminum computers made by political
| prisoners in Chinese concentration camps.
|
| They need a slap, hard.
| danaris wrote:
| What, exactly, are they "doing to silicon"?
|
| They made a chip that's vastly better than anything out
| there at a given power consumption level. They are not
| attempting to use this advantage to corner the silicon
| market; indeed, they are neither licensing the design, nor
| selling the chips outside their own end-user hardware.
|
| How does any of that say "antitrust" to you?
| echelon wrote:
| Using their enormous lead in cellphones and their
| incredible negotiating power, and playing that into mobile
| business computing and a supply chain / process monopoly.
| franga2000 wrote:
| The same goes for Google. They have higher market share in
| many categories and cover many more of them. Near-monopoly
| on search, ads, online video, email, and at least half of
| the smartphone OS market...
| echelon wrote:
| Each of the trillion dollar tech companies could be split
| in half and still be trillion dollar tech companies.
|
| It would be a healthier ecosystem for startups and
| competitors and make for faster total sector growth.
| spaceman_2020 wrote:
| The App store fees are pretty atrocious but I fail to see
| how Apple Silicon is anything but a net positive for the
| industry.
| Flankk wrote:
| Why are you so triggered that M2 is the best chip on the
| market? I'm pretty sure that is relevant in a discussion
| about the CPU market. AMD did a similar thing to Intel with
| the Ryzen launch. Intel is currently stagnating. They need a
| miracle at this point.
| ChuckNorris89 wrote:
| _> M2 is the best chip on the market_
|
| There is no such thing as a "best chip on the market". Best
| chip for what? You're confusing the words SoC/CPU with
| "chip", which is a very generic word.
|
| The best "chip" is the one that best suits your individual
| application or business needs, but there is no such thing
| as a best chip on the market. That's why Apple is only a
| tiny fraction of the computing market share and so many
| other chip vendors are still in business, because every
| application requires different chips.
|
| The M2 doesn't meet every need, neither as a CPU (since you
| can't buy it outside the Apple ecosystem) nor as a generic
| "chip". Why can't you accept that?
| derNeingeist wrote:
| Not sure about others but I personally am not a big fan of
| generalising _that_ much: I'd prefer it if people would
| ideally say that it was for example the "most power
| efficient general purpose CPU/SoC" or at least something in
| that regard, not just "the best".
|
| For example, have a look at https://openbenchmarking.org/vs
| /Processor/AMD%20Ryzen%20Thre... (user benchmarks of M1 and
| Threadripper). Compiling Linux on the 2990WX appears to be
| about 4 times faster than on the M2. (There are lots of
| other examples of one of the two CPUs being faster than the
| other but compiling Linux is the most time-expensive task I
| regularly do on my 2990WX. The energy usage in this task on
| the 2990WX is almost certainly a lot higher of course; this
| will be true for most tasks. However, the 2990WX is also 4
| years older of course, manufactured in a different node,
| not very optimized for power saving and not operated in a
| very power saving mode.)
| jrockway wrote:
| Why are you so excited? Did you design the M2? Do you
| manufacture the M2? Did you fund the M2? If so, feel free
| to be proud of it. You made a technological advance happen.
| But if you just walked into a store and bought one, I
| dunno, I think you're arriving pretty late in the evolution
| to take a personal interest in its success.
| vachina wrote:
| Don't feed the troll. Dude's comment history is filled with
| inconsistencies, and he's an Apple shill.
| cwizou wrote:
| While I could agree with the general sentiment, I think it's
| hard to overstate how much of a role Apple played in the
| background of all of this.
|
| But in any case, there's plenty of things to be said about
| this article. About one year ago (random link with relevant
| quotes : https://www.pcgamer.com/intels-3d-chip-tech-is-
| perfect-so-it... ), Intel was mocking AMD for using a chiplet
| approach, before announcing today that it was - clickbaity
| title aside - going to change _everything_.
|
| The sad truth is, both Intel and AMD are in the exact same
| situation. AMD went chiplet in order to make their
| performance cores at TSMC, and their less critical cores
| ("IO") at GloFo.
|
| And Intel will be doing the same thing tomorrow (again,
| random link on the topic:
| https://www.tomshardware.com/news/intel-ceo-visits-tsmc-
| agai... ) by producing their performance cores at TSMC and
| their less critical ones on their own processes.
|
| In both cases, this is just a question of using a very
| limited resource (TSMC's best-in-class process) as
| effectively as you can (by throwing extra engineering at
| making a chiplet design that works).
|
| And it's supremely relevant to the discussion to talk about
| how Apple, by throwing capital at a company (TSMC) that
| was, for the couple of decades I used to cover this, at
| best 2 years behind the best in class, got TSMC to where
| they are today (far, far in front).
|
| We could definitely have a long discussion about the hubris
| that led 2015 Intel to where they are today (completely stuck
| with a 7 year old "+ paint coatings" aging 14nm "performance"
| process), or how Gelsinger is trying to make the best out of
| the situation (I personally think he's immensely qualified
| and Intel's best hope, though that may not be enough to bring
| Intel back to where it was), but at the end of the day, Apple
| threw a wrench in what seemed like an unshakable performance
| lead from Intel by spewing a bit of money left and right
| (they didn't only bet on TSMC early on, they threw money at
| GF for example, and it wasn't that massive early on from my
| understanding), and the silicon world hasn't been the same
| since.
| ChuckNorris89 wrote:
| _> a role Apple played in the background of all of this_
|
| All of what? This topic is about Intel chiplets, many of
| which will end up in datacenters, where most Intel chips
| go, and that's not a market Apple sells chips into.
|
| Not every chip made and sold in the world revolves around
| laptops, tablets, smartphones or the apple ecosystem.
|
| So can we please talk about Intel's chiplets impact on the
| industry and less about Apple silicon, which has nothing
| do with this?
| cwizou wrote:
| Maybe I wasn't clear or went too fast on some things, I'm
| not a native speaker.
|
| This is all about semi manufacturing, and the position
| that TSMC now has in the fab space, thanks to Apple (yes,
| really, that's what I went on about in the previous
| comment). No part of my comment referred to arm,
| architectures, anything of the sort, just that Apple's
| money, applied broadly at first in the semi manufacturing
| space, then in a very very targeted way, took TSMC from
| tier-2 manufacturer to the best in class.
|
| If you look back a few years, only x86 chips had volume at
| the bleeding edge of manufacturing processes. GPUs,
| smartphones, everything else was one node back at the very
| least. Apple threw money and orders with massive volume
| (iPhone + iPad is pretty close in units to the x86 CPU
| market, above 300M roughly off the top of my head) at TSMC,
| and that early + continuous investment helped them fast-
| forward their processes while Intel is still stalled in 2015.
|
| Apple is using TSMC today (the best bits), AMD is using
| TSMC today (the second best bits) for the performance
| part of their chiplets and so will Intel tomorrow for the
| exact same reason. This is the relevant bit that I was
| pointing at.
| to11mtm wrote:
| That's a fair assessment although perhaps not giving AMD
| enough credit.
|
| IMO GloFo's spin-off worked out very poorly for AMD in the
| short term, but long term it let them partially-leapfrog
| Intel much as they had done 20 years prior with the K6/K7.
|
| There are two main things that IMO give the M1/M2 their
| 'magic':
|
| - DRAM on package (helping their PPW, especially single-
| threaded PPW)
|
| - Tight integration between OS and CPU.
|
| This is, perhaps, where the x86 consortium has run into a
| challenge in the face of tight integration; the majority of
| that group would likely shriek at the idea of a DRAM-on-CPU,
| "here you go, that's all you get" design. I saw it a lot
| when I slung PC hardware: folks who would insist on having
| as much upgradeability as possible, but never actually
| _bought_ the upgrades between PC purchases. Even still, RAM
| is the main thing I personally find myself still upgrading
| on either purchased or older PCs.
|
| That being said, it would be interesting to see if they try
| doing DRAM chiplets for these; I'm sure some 'ideal state'
| would be where DRAM chiplets + slotted RAM cause the
| chiplets to be dedicated to integrated GPU resources, or
| act as a form of L4 cache for one or more banks of DRAM.
| tambourine_man wrote:
| Then don't. There's a big internet out there.
|
| Apple Silicon is the most exciting thing to happen in the
| field in decades. Apple handles platform transitions really
| well so it may seem less of a tectonic shift than it actually
| is.
|
| It's normal for people to be enthusiastic about such feats.
| ChuckNorris89 wrote:
| Then why not be enthusiastic about it on AS threads? This
| thread is about Intel's chiplet design; it has nothing to
| do with AS, so why pollute every silicon thread with this
| off-topic noise?
| yywwbbn wrote:
| I keep wondering, did people always use the word 'silicon'
| when talking about CPUs & SoCs? Somehow I never noticed it
| before Apple released their 'Apple Silicon'.
| heavyset_go wrote:
| It's Apple marketing speak
| kyriakos wrote:
| Apple has very strong marketing; for example, "Retina
| display" is a marketing term for a high-DPI screen.
| mycocola wrote:
| Just echoing what others are saying, no, we called them
| Intel chips/CPUs. What I don't get is why people go along
| with it. I personally prefer not being a miniature
| speakerphone for the marketing department at Apple.
| freeflight wrote:
| In hardware circles I've seen CPUs/GPUs colloquially
| being referred to as "silicon" for at least a decade.
|
| Which has very little to do with Apple PR, but everything
| with how CPUs/GPUs are overwhelmingly made from silicon.
| heavyset_go wrote:
| This trends chart[1] suggests that "Apple silicon" is
| very much a marketing term versus a colloquial term for
| chips in general.
|
| [1] https://trends.google.com/trends/explore?date=today%2
| 05-y&ge...
| programmer_dude wrote:
| This line of reasoning is just dumb. Every single chip on
| the motherboard is made from silicon. (There may be some
| gallium or germanium parts, but those are insignificant
| and irrelevant.)
| ohgodplsno wrote:
| Hint: people posting on HN about how the M1/M2 are
| revolutionary are definitely not part of any hardware
| circles, and are using the marketing speak.
| freeflight wrote:
| The original thread comment asked if this was a thing
| before "Apple Silicon", that's what I was referring to;
| I've seen it before, and not just years before but over a
| decade before, possibly even two decades.
|
| As such it's not something Apple PR invented, but rather
| hijacked.
| exmadscientist wrote:
| It used to be slang, though. Something you'd use to punch
| up the first paragraph of an article, not the way people
| actually talked. For some reason calling it "Apple
| Silicon" _really_ grates on me, too. But such are the
| whims of megacorps.
| n7pdx wrote:
| LOL. Anyone who works in chip design would know how much
| the M1/M2 changed the hardware game. It is actually the
| dabbler enthusiast talking about how power/perf isn't
| important because his LED-laden shitbox has a wall plug
| (muh absolute performance) that doesn't grasp how utterly
| irrelevant DIY builders are in the market. Just look at
| the relative sales of servers, laptops and desktops and
| see what we care about.
|
| Not a single second of thought is ever spent by the
| architects/designers on optimizing "absolute
| performance". We only care about perf/area and perf/watt.
| It is the marketing teams that try to hype up gamer
| performance. Overclocking/high voltage performance
| requires the engineering knowledge of a freshman intern:
| go raise the voltage/freq, run the test program, make a
| SKU.
|
| Source: worked on CPU/GPU arch/design for 20 years,
| including at Intel.
| kemotep wrote:
| I mean, they called the Bay Area "Silicon Valley" because
| of the computer companies there in the '70s.
| cercatrova wrote:
| Yes, because those chips are made from silicon.
| heavyset_go wrote:
| I'd save this judgment call for when there's node parity
| between Apple's chips, x86 and even other ARM manufacturers.
| The fact is that Apple's chips are on smaller nodes than the
| rest of the competition, because they bought up all of the
| manufacturing capacity on those nodes from TSMC. Performance
| per watt,
| power draw, thermals, etc are all functions of node size.
| urthor wrote:
| The sensible answer.
| senttoschool wrote:
| Apple Silicon is magnitudes more efficient. It cannot be
| explained by node size alone. TSMC 5nm is 15%-25% higher
| performance or 30% lower power compared to TSMC 7nm.
|
| Also, Apple's 7nm chips outperformed AMD/Intel 7nm chips.
| pulse7 wrote:
| Compare AMD Ryzen 7 5800U [1] with Apple M2 8 Core [2] and
| you will see that Apple Silicon is not "magnitudes more
| efficient": it is ca. 30% faster in single-thread, but at
| the same time 30% slower in multi-thread and with 25%
| higher TDP... This is not "magnitudes more efficient"...
|
| [1] https://www.cpubenchmark.net/cpu.php?cpu=AMD+Ryzen+7+58
| 00U&i...
|
| [2] https://www.cpubenchmark.net/cpu.php?cpu=Apple+M2+8+Cor
| e+350...
| senttoschool wrote:
| Passmark is a terrible benchmark. Stick to SPEC or
| Geekbench5.
|
| This Reddit post summarizes the efficiency and speed
| advantages of the M1: https://www.reddit.com/r/hardware/c
| omments/nii37s/comment/gz...
| [deleted]
| whalesalad wrote:
| microservices for silicon
| Animats wrote:
| The article notes some advantages of being able to use different
| processes for different wafers, but there's not much more on
| that. It might be helpful if you wanted different fabs for memory
| and compute, or for flash type memory. Anyone know what they're
| getting at here?
|
| There are situations when you really need that, but they mostly
| involve imagers. The Advanced Scientific Concepts flash LIDAR had
| a two-chip stack, with the light detectors made with InGaAs
| technology. The counters and timers were ordinary CMOS. This also
| shows up in some IR sensors.
| to11mtm wrote:
| > The article notes some advantages of being able to use
| different processes for different wafers, but there's not much
| more on that. It might be helpful if you wanted different fabs
| for memory and compute, or for flash type memory. Anyone know
| what they're getting at here?
|
| Well, one advantage is that it can be a cost/efficiency savings
| on a few levels.
|
| For example, in the case of Zen2/3, the main CPU chiplet is at
| 7nm, but the I/O die is at 12nm. If I had to guess, generally
| speaking it is useful in cases where parts of the final module
| would benefit from higher transistor density versus others;
| Memory controller tech generally gets fewer updates than the
| CPU/APU itself, so it allows faster design cycles; you already
| know the existing I/O controller works, one less portion to re-
| qualify.
| cm2187 wrote:
| My understanding is that one of the key benefits is that if you
| have an imperfection on a wafer, you lose one small chiplet
| instead of losing a big monolithic CPU. So for equivalent
| imperfections you get a better yield.
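The yield intuition above can be made concrete with the classic Poisson yield model, where the probability of a defect-free die is e^(-A*D0) for die area A and defect density D0. A minimal sketch; the defect density and die areas below are invented illustrative numbers, not real fab figures:

```python
# Sketch: Poisson yield model comparing one big monolithic die vs.
# four chiplets of a quarter the area. All numbers are hypothetical.
import math

def die_yield(area_mm2, defects_per_mm2):
    """Poisson model: probability a die has zero defects."""
    return math.exp(-area_mm2 * defects_per_mm2)

D0 = 0.002            # assumed defects per mm^2 (made up)
big_area = 600.0      # one monolithic 600 mm^2 die
small_area = 150.0    # each of four 150 mm^2 chiplets

monolithic = die_yield(big_area, D0)
chiplet = die_yield(small_area, D0)

# With chiplets, each small die passes or fails independently, so the
# fraction of usable silicon tracks the (much higher) per-chiplet yield
# rather than the monolithic yield.
print(f"monolithic yield: {monolithic:.1%}")   # ~30%
print(f"per-chiplet yield: {chiplet:.1%}")     # ~74%
```

The same wafer-level defects that scrap 70% of monolithic dies here scrap only a quarter of the chiplets, which is the yield argument in a nutshell.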
| atq2119 wrote:
| Different functions scale differently. With the latest
| processes, logic scales best, while analog scales worse: SRAM
| (which is internally pretty analog) still scales decently, but
| less than logic; and I/O scales very little at all (think of it
| this way: the size of transistors that drive outputs is pretty
| much determined by the current you need to drive, and the
| current is determined by e.g. the PCIe spec, which is itself
| subject to the physical constraints of relatively long wires).
|
| As a consequence, if your design has CPU dies and I/O dies,
| using a smaller process only for the CPU dies is likely to be a
| good trade-off.
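A toy back-of-the-envelope calculation makes the trade-off vivid. Every number here (wafer prices, usable wafer area, die areas) is invented purely for the arithmetic, not a real foundry quote:

```python
# Sketch: cost intuition for putting non-scaling I/O on an older node
# instead of paying leading-edge prices for area that doesn't shrink.
# All figures are hypothetical illustrations.
leading_wafer_cost = 17_000   # assumed $ per leading-edge wafer
mature_wafer_cost = 6_000     # assumed $ per mature-node wafer
wafer_area = 70_000           # rough usable mm^2 on a 300 mm wafer

logic_area = 80    # mm^2 of logic, which shrinks well on the new node
io_area = 120      # mm^2 of I/O, pad/driver limited, barely shrinks

leading_cost_mm2 = leading_wafer_cost / wafer_area
mature_cost_mm2 = mature_wafer_cost / wafer_area

# Monolithic: every mm^2, including I/O, pays leading-edge prices.
monolithic = (logic_area + io_area) * leading_cost_mm2
# Split: only the logic die pays leading-edge prices.
split = logic_area * leading_cost_mm2 + io_area * mature_cost_mm2

print(f"monolithic: ${monolithic:.2f}, split: ${split:.2f}")
```

Under these made-up numbers the split design cuts silicon cost by roughly 40%, before even counting the yield benefit of smaller dies.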
| yaantc wrote:
| Memories have their specialized processes indeed, but there are
| other reasons to specialize.
|
| New nodes are fine for logic, but it takes time for analog IPs
| to move to new nodes (and some may not). So what AMD did, using
| an advanced node for compute/logic and a less advanced one for
| I/Os should be typical. You can also see this in broadband
| cellular modems, where the baseband part is on an advanced node
| and the RF on an older one.
|
| When you look at process offerings, you often have variants
| optimized either for peak performance (frequency) or maximum
| efficiency. The peak performance would be the natural choice
| for (big) CPUs, and an efficiency node better suited for a GPU
| or any massively parallel accelerator where efficiency is more
| relevant than peak frequency (on this, I think Intel planned to
| use TSMC for their HPC GPU, could be related: they can focus on
| high perf for their CPUs).
| citizenpaul wrote:
| Something about this gives me the under-the-skin creepy
| feeling of piles of "locked" resources sitting on people's
| desks, going to complete waste.
|
| Like when they used to sell mainframes with excess processor
| capacity then you would pay to unlock the processor that was
| already there if you need it. If not it was simply manufactured
| to sit unused in a mainframe its entire life, then be thrown in
| the trash.
|
| I didn't specifically see anything that said this in the
| article, but there is a TON to digest in there. Modular
| hardware always has that built-to-waste vibe, even if they
| claim the opposite.
| artificialLimbs wrote:
| >> Like when they used to sell mainframes with excess processor
| capacity then you would pay to unlock the processor that was
| already there if you need it.
|
| IBM still does this.
| crazygringo wrote:
| It's not "going to complete waste" any more than if you
| download the MS Office suite and let the installer sit on your
| computer without installing it because you don't have a license
| key. Which... nobody cares.
|
| You're licensing the processor capacity. You're not paying for
| the actual piece of silicon, you're paying your fair share of
| the R&D and fab investment that went into it. You want to use
| more, you pay more. The same as software.
|
| The amount of silicon in the chip is, what, a small fraction of
| the amount of silicon in a single grain of sand? A whole
| processor is tens of grams of material total. The cardboard
| boxes a standalone chip comes in probably weigh more. I
| wouldn't get worried about "waste" here.
| to11mtm wrote:
| IIRC they have something like that but different in the works,
| 'Software Defined Silicon' (SDSi) is what it's called.
| bombcar wrote:
| It can actually result in less waste (one production line,
| no flying techs around for upgrades, etc.) depending on how
| you account for upgrade shipping and the like.
|
| People seem fine with software unlocks for software, but
| often confuse the price of hardware with the cost of
| manufacturing it.
| bee_rider wrote:
| I'd assume the opposite actually.
|
| Chips always have to be binned. But previously a chip would
| have to be binned down to its worst component, I guess: if
| they had a chip with great CPUs but the GPUs were a little
| wonky, and they didn't have an appropriate processor line
| for that combo, they'd have to bin the whole thing down to
| low-tier. Now they can instead match up the good CPUs and
| the good GPUs.
|
| Plus they'll be able to satisfy some of their lust for SKUs by
| mixing and matching tiles, rather than making a bazillion
| slightly different bins.
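The mix-and-match effect is easy to see in a toy Monte Carlo sketch. The bin probabilities here are invented; the only point is that pairing independently tested tiles beats requiring both blocks to be good on the same monolithic die:

```python
# Sketch: top-bin output for monolithic dies vs. mix-and-match tiles.
# Assume a top-bin part needs one "good" CPU block and one "good" GPU
# block. Probabilities are hypothetical.
import random

random.seed(0)
N = 100_000          # dies (or tile pairs) manufactured
p_cpu_good = 0.8     # assumed chance a CPU block bins "good"
p_gpu_good = 0.8     # assumed chance a GPU block bins "good"

cpu_good = sum(random.random() < p_cpu_good for _ in range(N))
gpu_good = sum(random.random() < p_gpu_good for _ in range(N))

# Monolithic: both blocks must be good on the SAME die (~p^2 = 0.64).
monolithic_top_bin = sum(
    random.random() < p_cpu_good and random.random() < p_gpu_good
    for _ in range(N)
)

# Tiled: any good CPU tile can be paired with any good GPU tile, so
# output is limited only by the scarcer of the two good-tile piles.
tiled_top_bin = min(cpu_good, gpu_good)

print(monolithic_top_bin / N)  # ~0.64
print(tiled_top_bin / N)       # ~0.80
```

With these assumed numbers, tiling recovers the good CPU blocks that a monolithic design would have binned down because of a wonky GPU on the same die.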
| Animats wrote:
| > lust for SKUs
|
| That's so Intel. All 37 variants of the Intel Core i9 CPU:
| [1]
|
| [1] https://www.intel.com/content/www/us/en/products/details/
| pro...
| toast0 wrote:
| Core i9 isn't a useful thing to complain about SKUs for.
| There's probably been thousands of Pentium products at this
| point.
|
| What you want to look at is how many SKUs for an
| architecture, like say Alder Lake for Desktop[1]. Do we
| really need 5 to 7 SKUs at 16, 12, 6, or 4 cores, but only
| 2 SKUs at 10 cores, and 4 SKUs at 2 cores?
|
| [1] https://ark.intel.com/content/www/us/en/ark/products/co
| denam...
| bee_rider wrote:
| Two cores seems like a niche product at this point;
| actually, 4 SKUs for that is more than I'd expect.
|
| Only 2 SKUs at 10 cores seems a little weird, I wonder if
| they are 12 core parts with some cores disabled or
| something like that.
|
| Edit: Note it is just the i5 [...]K's from Q4 '21 that
| have 10 cores. It isn't that surprising that the early
| enthusiast parts are a little weird, right?
| sidewndr46 wrote:
| My understanding is that binning is mostly about core count
| and core operating frequency.
|
| Intel has a hard-on for segmenting their product lines, to
| the point that they "launch" new products like the i9, which
| is just what an i7 used to be. They also deliberately cripple
| products, like selling SKUs with VT-x disabled. Not to
| mention them keeping ECC memory out of the entire desktop
| market for basically all of history.
| nwah1 wrote:
| >Intel also has a packaging line that spans 2D technologies as
| well as 3D technologies like its Foveros line.
|
| Very cool
| mugivarra69 wrote:
| Lisa Su and, by extension, AMD have been ahead of anything out
| there. If Intel can pull this off, I will be happy to have
| competition back in the market.
| midislack wrote:
| Pentium Pro used "chiplets." Ironically Intel mocked AMD recently
| for it.
| urthor wrote:
| What's old is new again.
|
| Had a chat with an old timer who was doing horizontally scaled
| compute with the IBM AS/400.
|
| 25 years later it's the starry-eyed wonder of the 2010s.
| alexklarjr wrote:
| Now their CPU lifespan will be the same as modern video cards
| - 3-5 years, same as new Ryzen chips. This will surely drive
| new product adoption and profits.
| eyegor wrote:
| Are you implying that the silicon somehow has a shorter
| lifetime due to the presence of an interposer layer? Or is this
| a way of saying you think cpus are going to become more
| powerful at a faster rate?
| aidenn0 wrote:
| I remember when some of the old Slot-1 P3 cpus had off-die, but
| on-board cache (The Katmai did, the Coppermine did not).
| to11mtm wrote:
| Yeah...
|
| Back in the day, as another comment mentioned, the PPro had a
| 'chiplet' style configuration where the CPU and cache were in
| the same package but on separate dies. The problem with this
| was the CPU and cache had to be bonded to the package first,
| then tested, and if either was bad, game over. Additionally,
| at the time die size was at more of a premium; in the case of
| a PPro, 256 KB of cache was close-ish to 2/3 the size of the
| CPU die. [0]
|
| The P2, Katmai P3, and the Athlon 'Classic' (Pluto/Orion) used
| off-board cache; this was far better from a yield standpoint
| (I'm assuming the cache chips could either be tested before
| install, or were easier to rework) but limited their speed.
|
| It's crazy to think that the Katmai P3 itself had around 9.5
| million transistors, but the 512 KB of cache was another 25
| million on its own!
|
| [0] -
| https://en.wikipedia.org/wiki/Pentium_Pro#/media/File:Pentiu...
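The cache transistor figure quoted above checks out with simple arithmetic, assuming a standard 6-transistor SRAM cell and counting only the data array (tags, decoders, and control logic would add more on top):

```python
# Rough sanity check of the "512 KB of cache = ~25M transistors" claim.
# Assumes a 6T SRAM cell and ignores tag/decoder/control overhead.
bits = 512 * 1024 * 8    # 512 KB expressed in bits: 4,194,304
transistors = bits * 6   # one 6-transistor cell per bit
print(f"{transistors:,}")  # 25,165,824 -- about 25.2 million
```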
| throw0101a wrote:
| > _I remember when some of the old Slot-1 P3 cpus had off-die,
| but on-board cache_
|
| I remember when you had to buy an extra processor to get
| floating point.
|
| * https://en.wikipedia.org/wiki/X87
|
| At one point there were video game(s) with 'extra'
| functionality that was only available with this hardware
| 'upgrade':
|
| * https://en.wikipedia.org/wiki/Falcon_3.0
|
| (Get off my lawn.)
| aidenn0 wrote:
| You're probably not that much older than me, the first PC I
| used was a Z80 based system (it came standard with a floppy
| drive though)
| Flankk wrote:
| bee_rider wrote:
| Hopefully Intel will be able to sustain their business by
| competing in the tiny non-macOS niche I guess.
| zekica wrote:
| How will the gap widen? An 8-instruction parallel decoder
| will give Apple an edge in single-core performance per watt,
| but other than that I don't see what Apple does differently.
| The M1 Pro 10-core is their best performance-per-watt part,
| and the Ryzen 6800U is just 6% behind [0].
|
| Apple will be ahead, but the gap will not widen.
|
| [0] https://www.notebookcheck.net/AMD-
| Ryzen-7-6800U-Efficiency-R...
| ohgodplsno wrote:
| Apple zealots have this hilariously weird obsession with
| performance per watt. Do not expect them to look at any
| actual data.
| to11mtm wrote:
| Wellllll...
|
| Ryzen is only 6% behind for multi-core PPW, but the M1
| appears to still have a huge advantage in single-core PPW.
|
| I have a half-dozen thoughts on this, but the foremost is:
|
| Apple keeping memory on chip is likely providing a huge
| memory latency advantage (1), as well as some power benefits.
| It would not surprise me if this is a large part (if not
| the majority) of the advantage here, outside of the bigger
| decoder.
|
| Think about a single threaded vs multithreaded benchmark; In
| the case of ST, there's only one thread the prefetcher can
| deal with, that one thread is going to be waiting for data.
| In the case of MT, there's a much greater likelihood that
| you'll have multiple threads making memory requests, and the
| latency can be amortized by having the threads do other work
| (i.e. Thread 1 can start working on data it got back while
| thread 2's request is already in-flight from controller to
| DRAM.)
|
| This is one of those moments I miss David Kanter
| (RealWorldTech, [0]) doing CPU arch breakdowns.
|
| (1) - Back in the day, one of the 'better' things you could
| do for a SDRAM P3/Athlon/Duron system, especially if you were
| replacing all modules for an upgrade anyway, was to hunt for
| CL2 memory.
|
| [0] - https://www.realworldtech.com/cpu/
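The ST-vs-MT amortization argument in the comment above can be sketched with a toy timing model. The latency and compute figures below are hypothetical round numbers, chosen only to show the shape of the effect:

```python
# Toy model of memory latency amortization (made-up numbers).
# Single thread: every miss stalls the core for the full latency.
# Many overlapping threads: misses are in flight concurrently, so
# roughly only compute time plus one latency is exposed.

LATENCY_NS = 100   # hypothetical DRAM round-trip latency
COMPUTE_NS = 20    # hypothetical work done per fetched cache line
MISSES = 1000

# Single-threaded: latency and compute serialize on every miss.
single_thread_ns = MISSES * (LATENCY_NS + COMPUTE_NS)

# Perfectly overlapped multi-threaded: while one thread waits on
# DRAM, others compute, leaving only one latency exposed overall.
multi_thread_ns = MISSES * COMPUTE_NS + LATENCY_NS

print(single_thread_ns, multi_thread_ns)  # 120000 vs 20100
```

The perfectly overlapped case is an upper bound, of course; real cores are limited by how many outstanding misses the memory subsystem can track.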
| zekica wrote:
| Exactly - they have on-package RAM with 100 GB/s of
| bandwidth and low latency.
| kcb wrote:
| > Apple keeping memory on chip, is likely providing a huge
| memory latency advantage
|
| How? It's the same LPDDR5 everyone else is using and it's
| on package not on "chip". The trace length has a negligible
| impact on latency.
| danaris wrote:
| I've never been clear on how much Intel _couldn't_ make cooler
| chips, and how much they just can't admit to themselves that
| it _matters_ (and thus didn't try very hard).
| n7pdx wrote:
| They know it matters, they just don't have the competence to
| do it since they promoted a bunch of toadies and charlatans
| into their technical leadership, and also outsourced a ton of
| technical work to "low cost geo" so managers can brag about
| cutting costs.
| yakkityyak wrote:
| Having worked there under BK and Bobby it is very clear why.
| They didn't invest anything into engineering. Even today
| their comp is peanuts compared to any other company in the
| tech industry.
| jamiek88 wrote:
| Bbbbbbut benchmarking.........market survey......advanced
| analytics......some HR goober said that we are
| 'competitive'.
| stavros wrote:
| Why didn't AMD? Is the M2 that much better than Ryzen, for
| example?
| kcb wrote:
| It's not. 5nm Ryzen will finally give us a more apples-to-
| apples comparison. Unfortunately we're probably a ways out
| from 5nm Ryzen mobile chips.
| stavros wrote:
| Hmm, how did Apple get to 5nm first? Does the fact its
| ARM have anything to do with it?
| fooker wrote:
| They bought out TSMC fab capacity by bidding significantly
| over AMD.
| cercatrova wrote:
| They pay TSMC loads of money to have the first chips for
| each process node. They were first to 5nm, they'll be
| first to 3nm next year too. This is because they sell
| whole products rather than chips like Intel and AMD, so
| Apple's profit margins are astronomical compared to them,
| and so they can afford to pay TSMC so much.
| stavros wrote:
| Oh I didn't know that, thanks. So it's more that they
| bought out all of TSMC's product, rather than that they
| came up with an innovative new process.
| cercatrova wrote:
| Well, they also have some of the best silicon engineers
| in the Valley, and the world. It's not just TSMC; TSMC only
| builds what others design.
| urthor wrote:
| Intel was in this situation before.
|
| They dug themselves out with the Core 2 Duo.
|
| There remains a nonzero chance Intel digs themselves out of the
| hole again.
|
| I'm strongly considering their stock.
| tambourine_man wrote:
| They were never in trouble on the manufacturing part. In
| fact, they've been the best at it for 40 years.
|
| I think it's way easier to dig yourself out of a hole by
| picking up a previous architecture and updating it (ditching
| the Pentium 4 and using the 3 as the basis for the Core
| architecture) than it is by regaining a manufacturing lead,
| especially when there's only one company that's able to do it
| these days. Lots of companies are able to make compelling
| architectures with different instruction sets. Actual chip
| making, however, is TSMC.
|
| How Intel lost their lead is probably the greatest business
| case to be studied in our industry's recent history.
| pyrolistical wrote:
| Only took 5 years https://www.pcgamer.com/intel-slide-criticizes-
| amd-for-using...
| washadjeffmad wrote:
| Has Intel ever released a benchmark or marketing comparison
| that wasn't the filtering equivalent of spelling and grammar
| mistakes in spam emails? If you notice it for what it is,
| you're not their target demographic.
|
| Their whole sour grapes culture is just bizarre to me.
___________________________________________________________________
(page generated 2022-08-27 23:00 UTC)