[HN Gopher] TSMC unveils 1.6nm process technology with backside ...
___________________________________________________________________
TSMC unveils 1.6nm process technology with backside power delivery
Author : elorant
Score : 318 points
Date : 2024-04-25 11:47 UTC (11 hours ago)
(HTM) web link (www.tomshardware.com)
(TXT) w3m dump (www.tomshardware.com)
| tvbv wrote:
| could someone ELI5 the backside power delivery please?
| bayindirh wrote:
| Signal wires and power wires are not routed together, but
| separately.
|
| As a result, transistors are sandwiched between two wiring
| stacks. This results in lower noise, but makes heat removal a
| bit harder AFAICS.
|
| See the image at: https://www.custompc.com/wp-
| content/sites/custompc/2023/06/I...
| Tempest1981 wrote:
| How many power layers are there? Or how do the power "wires"
| cross over each other?
|
| Are they sparse, like wires? Or solid, like the ground plane
| of a PCB? Are there "buried vias"?
| gostsamo wrote:
| An article on the topic, though about Intel:
|
| https://spectrum.ieee.org/backside-power-delivery
| AnimalMuppet wrote:
| So, if I understood that correctly, "frontside" actually is
| the _top_ side, and "backside" is actually the _bottom_
| side. They're delivering power from the bottom (substrate)
| side, and connecting the data on the top side.
| Keyframe wrote:
| instead of signals and power path going through the same side
| (frontside) causing all sorts of issues and inefficiency,
| they're decoupling where power is coming from (from the other,
| backside, err side).
|
| More importantly, Intel saw it as one of two key technologies
| for moving into the angstrom era, and touted that it would be
| the first to bring it to life (not sure they did).. so this
| seems to be more of a business power move.
|
| more on all of it from anandtech:
| https://www.anandtech.com/show/18894/intel-details-powervia-...
| wwfn wrote:
| I'm also clueless. First search brings up
| https://www.tomshardware.com/news/intel-details-powervia-bac...
| with a nice diagram
|
| It looks like the topology for backside moves the transistors
| to the middle so "signal wires and power wires are decoupled
| and optimized separately" instead of "compete[ing] for the same
| resources at every metal layer"
| __s wrote:
| https://www.imec-int.com/en/articles/how-power-chips-backsid...
| has some explanation that's a bit beyond ELI5, but they have 2
| pictures:
|
| frontside: https://www.imec-
| int.com/_next/image?url=%2Fsites%2Fdefault%...
|
| backside: https://www.imec-
| int.com/_next/image?url=%2Fsites%2Fdefault%...
|
| Another graphic samsung used:
| https://www.thelec.net/news/photo/202210/4238_4577_1152.jpg
|
| From https://www.tomshardware.com/pc-components/cpus/samsung-
| to-i...
|
| > Typically, backside power delivery enables thicker, lower-
| resistance wires, which can deliver more power to enable higher
| performance and save power. Samsung's paper noted a 9.2%
| reduction in wiring length, enhancing performance
| barfard wrote:
| the pics go to 404
| __s wrote:
| thanks, I've fixed them
|
| btw, you appear to be shadow banned, probably on account of
| being downvoted for making short comments like "good" or "I
| don't get it"
|
| Short & simple responses may be adequate at times, but
| usually people will view it as not adding anything
|
| Perhaps reviewing guidelines
| https://news.ycombinator.com/newsguidelines.html will give
| an idea of the ideal being yearned for (granted, it's an
| ideal). In general, trying to enhance the conversation with
| relevant information/ideas. "your links are 404" was
| definitely relevant here
| gcanyon wrote:
| How can you tell someone is shadowbanned?
| __s wrote:
| Their new posts will be dead. You can enable seeing dead
| posts in settings. Then if you click on their account &
| view their comments, it'll all be grayed out
|
| I vouched for barfard's comment in this thread
| vollbrecht wrote:
| there is a good youtube video that explains it
| https://www.youtube.com/watch?v=hyZlQY2xmWQ
| brohee wrote:
| Not ELI5 but I found this 20min video pretty informative
| https://www.youtube.com/watch?v=hyZlQY2xmWQ
| brennanpeterson wrote:
| Half your wires deliver power, half deliver signal. So if you
| do both on the same side, you need twice the density of wires.
| If you split the delivery into two parts, you get double the
| density without needing to make things smaller.
| blagie wrote:
| This isn't quite right. Big wires (ideally entire planes)
| deliver power. Small wires deliver signal. Half and half
| isn't the right split, and you don't want to make power wires
| smaller.
|
| The very different requirements of the two is where a lot of
| the gains come in.
| mbajkowski wrote:
| It is going to get even a bit more interesting when you
| consider power gating and virtual power supplies. Now the
| real power will be on the back side and the virtual power
| will be on the front side. Fun time for power analysis.
| blagie wrote:
| ELI5:
|
| ICs are manufactured on silicon disks called wafers. Discs have
| two sides, and traditionally, everything was done on top. We
| can now do power on the bottom. This makes things go faster and
| use less power:
|
| * Power wires are big (and can be a bit crude). The bigger the
| better. Signal wires are small and precise. Smaller is
| generally better.
|
| * Big wires, if near signal wires, can interfere with them
| working optimally (called "capacitance").
|
| * Capacitance can slow down signals on the fine signal wires.
|
| * Capacitance also increases power usage when ones become zeros
| and vice-versa on signal wires.
|
| * Big wires also take up a lot of space.
|
| * Putting them on the back of the wafer means that things can
| go faster and use less power, since you don't have big power
| wires near your fine signal wires.
|
| * Putting them in back leaves a lot more space for things in
| front.
|
| * Capacitance between power wires (which don't carry signal)
| actually helps deliver cleaner power too, which is a free
| bonus!
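|
| To put very rough numbers on the capacitance points above (a
| back-of-envelope Python sketch using the textbook RC-delay and
| dynamic-power relations; every value below is an illustrative
| assumption, not a real process figure):
|
|     # all numbers are made up for illustration only
|     C_wire = 2e-15    # signal wire capacitance, farads (2 fF)
|     C_couple = 1e-15  # coupling to a nearby power rail (1 fF)
|     R_driver = 5e3    # driving gate resistance, ohms
|     V = 0.7           # supply voltage, volts
|     f = 3e9           # switching frequency, hertz
|     alpha = 0.1       # activity factor (fraction of cycles toggled)
|
|     for label, C in [("without power rail", C_wire),
|                      ("with power rail   ", C_wire + C_couple)]:
|         delay = 0.69 * R_driver * C    # classic RC delay estimate
|         power = alpha * C * V * V * f  # dynamic switching power
|         print(f"{label}: ~{delay*1e12:.1f} ps, "
|               f"~{power*1e6:.2f} uW per wire")
|
| With those made-up numbers, the extra coupling adds about 50% to
| both the delay and the switching power of that one wire.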
|
| This is hard to do, since it means somehow routing power
| through the wafer. That's why we didn't do this before. You
| need very tiny wires through very tiny holes in locations very
| precisely aligned on both sides. Aligning things on the scale
| of nanometers is very, very hard.
|
| How did I do?
| pmx wrote:
| Thanks, this was a very useful summary for me!
| vintendo64 wrote:
| thank you for the explanation! It really helped
| pedrocr wrote:
| > You need very tiny wires through very tiny holes in
| locations very precisely aligned on both sides. Aligning
| things on the scale of nanometers is very, very hard.
|
| Do you need to align that precisely? Can't the power side
| have very large landing pads for the wires from the signal
| side to make it much easier?
| SAI_Peregrinus wrote:
| Not big enough, they still need to fit the size of the
| transistors on the signal side.
| AdamN wrote:
| There's still the question though of why they didn't do this
| decades ago - seems very obvious that this layout is better.
| What changed that made it possible only now and not earlier?
| blagie wrote:
| My knowledge isn't current enough to offer more than
| speculation.
|
| However, something like an 80286 didn't even require a
| heatsink, while my 80486 had a dinky heat sink similar to
| what you might find on a modern motherboard chipset. At the
| same time, on a micron node, wires were huge. A few special
| cases aside (DEC Alpha comes to mind), power distribution
| didn't require anything special beyond what you'd see on
| your signal wires, and wasn't a major part of the
| interconnect space.
|
| Mapping out to 2024:
|
| 1) Signal wires became smaller than ever.
|
| 2) Power density is higher than ever, requiring bigger
| power wires.
|
| So there is a growing disparity between the needs of the
| two.
|
| At the same time, there is continued progress in figuring
| out how to make through-wafer vias more practical (see
| https://en.wikipedia.org/wiki/Three-
| dimensional_integrated_c...).
|
| I suspect in 2000, this would have been basically
| restricted to $$$$ military-grade special processes and
| similar types of very expensive applications. In 2024, this
| can be practically done for consumer devices. As costs go
| down, and utility goes up, at some point, the two cross,
| leading to practical devices.
|
| I suspect a lot of this is driven by progress in imagers.
| There, the gains are huge. You want a top wafer which is as
| close as possible to 100% sensor, but you need non-sensor
| area if you want any kind of realtime processing, full
| frame readout (e.g. avoiding rolling shutter), or rapid
| readout (e.g. high framerate). The first time I saw 3D IC
| technology in mainstream consumer use was in
| prosumer-/professional-grade Sony cameras.
|
| I have strong fundamentals, but again, I stopped following
| this closely maybe 15 years ago, so much of the above is
| speculative.
| DSMan195276 wrote:
| > seems very obvious that this layout is better
|
| "Better" is relative, the layout introduces more
| fabrication steps so it's only better if you actually get
| some benefit from it. Decades ago designs didn't require as
| much power or have as many transistors to wire so it wasn't
| an issue.
| pjc50 wrote:
| > why they didn't do this decades ago
|
| You might as well ask why, since we can do it now, Shockley
| didn't simply start at 3nm. It's all a very long road of
| individual process techniques.
|
| > You need very tiny wires through very tiny holes in
| locations very precisely aligned on both sides.
|
| Key word here is "both sides". It has challenges similar to
| solder reflow on double sided boards: you need to ensure
| that work done on the first side isn't ruined/ruining work
| on the second side.
|
| https://semiwiki.com/semiconductor-
| services/techinsights/288... seems to be a good
| description.
|
| "The challenges with BPR are that you need a low resistance
| and reliable metal line that does not contaminate the Front
| End Of Line (FEOL). BPR is inserted early in the process
| flow and must stand up to all the heat of the device
| formation steps."
|
| Contamination = metals used mustn't "poison" the front-side
| chemistry. So they end up using tungsten rather than the
| more usual aluminium. (Copper is forbidden for similar
| chemistry reasons)
|
| It also (obviously) adds a bunch of processing steps, each
| of which adds to the cost, more so than putting the rails
| on the front side.
| 1oooqooq wrote:
| so the wafer is a huge ground plane? still can't see how one
| side is separate from the other if it's the same block.
| blagie wrote:
| The wafer is thick. Let's call it a mm thick (not quite,
| but close). Devices are tiny. The claim is 1.6nm, which
| isn't quite true, but let's pretend it is, since for the
| qualitative argument, it doesn't make a difference. That's
| on the order of a million times smaller than the thickness
| of the wafer.
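|
| A quick sanity check on that ratio, using the same round
| numbers (Python):
|
|     wafer_thickness_m = 1e-3  # "call it a mm thick"
|     feature_m = 1.6e-9        # taking the 1.6nm label at face value
|     print(f"{wafer_thickness_m / feature_m:,.0f}x")  # 625,000x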
|
| Historically, everything was etched, grown, deposited, and
| sputtered on one side of the wafer. The rest of the wafer
| was mostly mechanical support. The other side of the wafer
| is a universe away.
|
| The world is more complex today, but that's a good model to
| keep in mind.
|
| For a 3d integrated circuit, you would do this, and then
| e.g. grind away the whole wafer, and be left with a few
| micron thick sheet of just the electronics, which you'd
| mechanically place on top of another similar sheet. That's
| every bit as complex as it sounds. That's why this was
| restricted to very high-end applications.
|
| As for whether the wafer is a huge ground plane, that's
| complex too, since it depends on the type of device and
| the IC:
|
| * First, it's worth remembering a pure silicon crystal is
| an insulator. It's only when you dope it that it becomes a
| conductor. The wafer starts out undoped.
|
| * Early ICs had the whole wafer doped, and the collector of
| all the NPN transistors was just the wafer. There, it was a
| ground plane.
|
| * SOI processes deposit a layer of glass on top of the
| wafer, and everything else on the glass. There, the wafer
| is insulated from the circuit.
|
| So all of this can very quickly go in many directions,
| depending on generation of technology and application.
|
| I'm not sure this post is helpful, since it's a lot of
| complexity in an ELI5, so I'll do a TL;DR: It's
| complicated. (or: Ask your dad)
| 1oooqooq wrote:
| now I'm even more confused. why start with Si if you're
| going to put a glass layer before anything else? why not
| start with glass right away?
| blagie wrote:
| Ergo, the TL;DR :)
|
| Even so, I oversimplified things a lot (a lot of the
| processes leverage the silicon wafer, but some don't):
|
| https://en.wikipedia.org/wiki/Silicon_on_insulator
|
| One of the things to keep in mind is that a silicon wafer
| starts with a near-perfect silicon ingot crystal:
|
| https://en.wikipedia.org/wiki/Monocrystalline_silicon
|
| The level of purity and perfection there is a little bit
| crazy to conceive.
|
| It's also worth noting how insanely tiny devices are. A
| virus is ~100nm. DNA is 2nm diameter. We're at << 10nm
| for a device. That's really quite close to atomic-scale.
|
| There are something like ~100 billion transistors per
| IC for something like a high-end GPU, and a single failed
| transistor can destroy that fancy GPU. That's literally
| just a few atoms out-of-place or a few atoms of some
| pollutant.
|
| The level of perfection needed is insane, and the
| processes which go into that are equally insane. We are
| making things on glass, but the glass has to be nearly
| perfect glass.
| Detrytus wrote:
| >> There are something like ~100 billion transistors
| per IC for something like a high-end GPU, and a single
| failed transistor can destroy that fancy GPU.
|
| No, it can't, thanks to this fancy marketing strategy
| where you sell faulty GPUs at a lower price, as a
| lower-tier model.
| sylware wrote:
| It seems the actually impressive thing here is the 15-20% lower
| power consumption at the same complexity/speed as N2.
| 654wak654 wrote:
| Sounds like a response to Intel's 18A process [0], which is also
| coming in 2026.
|
| [0] https://www.tomshardware.com/tech-
| industry/manufacturing/int...
| Alifatisk wrote:
| This is a response to something that's coming in 2 years?
| Incredible how far ahead they are
| blackoil wrote:
| This will also come in 2026, so not that far ahead.
| nerbert wrote:
| Depends who will ultimately deliver.
| tgtweak wrote:
| Fabs have a 3+ year lead time from prototyping the node to
| having it produce production wafers.
| HarHarVeryFunny wrote:
| Intel is the one trying to catch up to TSMC, not vice versa!
|
| The link you give doesn't have any details of Intel's 18A
| process, including no indication of it innovating in any way,
| as opposed to TSMC with their "backside power delivery" which
| is going to be critical for power-hungry SOTA AI chips.
| shookness wrote:
| While you are correct that it is Intel trying to catch TSMC,
| you are wrong about the origin of backside power delivery.
| The idea originated at Intel some time ago, but it would be
| very ironic if TSMC implements it before Intel...
| sct202 wrote:
| Intel is not the inventor of backside power; they are the
| first planning to commercialize it. It's similar to FinFETs
| and GAA, where Intel or Samsung may be first to
| commercialize an implementation of those technologies, but
| the actual conceptual origin and first demonstrations are
| at universities or research consortiums like IMEC. For
| example, Imec demonstrated backside power in 2019
| https://spectrum.ieee.org/buried-power-lines-make-memory-
| fas... far before PowerVia was announced.
| s3p wrote:
| OP never said Intel wasn't trying to catch up. As far as
| backside power delivery, this is literally what Intel has
| been working on. It is called PowerVia.
|
| https://www.intel.com/content/www/us/en/newsroom/news/powerv.
| ..
| hinkley wrote:
| Except for backside power. Intel published and had a timeline
| to ship at least one generation ahead of TSMC. I haven't been
| tracking what happened since, but Intel was ahead on this one
| process improvement. And it's not a small one, but it doesn't
| cancel out their other missteps. Not by half.
| ajross wrote:
| This isn't a comparison of shipping processes though, it's
| just roadmap products. And in fact until this announcement
| Intel was "ahead" of TSMC on the 2026 roadmap.
| Havoc wrote:
| >1.6nm
|
| Gotta love how we now have fractions of a near meaningless
| metric.
| s_dev wrote:
| Why is it meaningless?
| Havoc wrote:
| It stopped being an actual measure of size long ago. The nm
| isn't nanometers of anything; it's just a vague marketing
| thing attempting some sort of measure of generations
| chmod775 wrote:
| Because it's not measuring anything, except the tech
| generation. It conveys about as much information as "iPhone
| 14".
| gwervc wrote:
| It's at least more easily understandable (lower is newer)
| than the average tech product naming scheme, especially
| those by Microsoft.
| tommiegannert wrote:
| The problem is it sounds like something any engineer can
| understand without domain knowledge, but interpreting it
| that way is completely wrong. The worst kind of naming.
| Not just IKEA-style random names (and I say that as a
| Swede), but reusing a standard, while not keeping to what
| the standard is normally used for, and what it previously
| meant even in this domain.
|
| N1.6 is much better for naming node processes. Or even
| TSMC16.
| bmicraft wrote:
| Normally? That "standard" hasn't been used "normally" for
| 20 years now. Arguably the new way is normal in every
| sense of the word
| dmix wrote:
| Do you think they'll be advertising 0.9nm in the future
| or switch it up at some point?
| bmicraft wrote:
| Intel has already switched to Angstrom for the most part
| smallmancontrov wrote:
| I see MTr/mm2 a lot, it feels like a better measure, and
| it feels like the time is ripe for marketing to make a
| jump. Bigger is better, "mega" is cool, the numbers are
| hundreds to thousands in the foreseeable future so no
| chance of confusion with nm. What's not to love? But hey,
| I don't make these decisions, and I see no indication
| that anyone in a position to make these decisions has
| actually made them. Shrug.
| tommiegannert wrote:
| Yes. By _standard_ I mean that nanometers ("nm") is used
| to describe physical distances. That's its normal use,
| understood by all engineers. That's also how it was born
| into the semiconductor domain. In that domain, it should
| have either stayed a description of physical distances,
| or been replaced.
|
| They could invent codenames, or start using a better
| (physical) metric, if this one is no longer relevant.
| brandall10 wrote:
| While it may be divorced from any particular measurement,
| doesn't it give an approximation of increased density?
| andrewSC wrote:
| process/node "size" has, for some time now, been divorced
| from any actual physical feature on or of the chip/transistor
| itself. See the second paragraph of:
| https://en.wikipedia.org/wiki/2_nm_process for additional
| details
| blagie wrote:
| Because it lost meaning somewhere between a micron and 100nm.
|
| From roughly the 1960s through the end of the 1990s, the
| number meant printed gate lengths or half-pitch (which were
| identical).
|
| At some point, companies started using "equivalences" which
| became increasingly detached from reality. If my 50nm node
| had better performance than your 30nm node because I have
| FinFETs or SOI or whatever, shouldn't I call mine 30nm? But
| at that point, if you have something less than 30nm somewhere
| in your process, shouldn't you call it 20nm? And so the
| numbers detached from reality.
|
| So now when you see a 1.6nm process, think "1.6nm
| class", rather than that corresponding to any specific
| feature size, and furthermore, understand that companies
| invent class number exaggerations differently. For example,
| an Intel 10nm roughly corresponds to Samsung / TSMC 7nm (and
| all would probably be around 15-30nm before equivalences).
|
| That should give you enough for a web search if you want all
| the dirty details.
| api wrote:
| The same thing happened with chip frequency around the end
| of the 1990s.
|
| Chip frequencies stagnated (end of Dennard scaling if I
| remember correctly) giving the impression that single
| threaded performance had stagnated, but since then chip
| makers have used increasing data and instruction
| parallelism to squeeze even more apparent single threaded
| performance out of chips. A 3ghz chip today is usually way
| faster on average code than a 3ghz chip 15 years ago. They
| also started expanding into multiple cores and adding more
| cache of course.
|
| For fab processes we should just switch to transistor
| density. That still wouldn't capture everything (e.g. power
| efficiency) but would be a lot better than not-actually-
| nanometers.
|
| For performance we really don't have a good single metric
| anymore since all these performance hacks mean different
| levels of gain on different code.
| thsksbd wrote:
| "For fab processes we should just switch to transistor
| density."
|
| Indeed
|
| May I propose transistor density divided by the (ergodic)
| average transistor switching power?
| antisthenes wrote:
| > For fab processes we should just switch to transistor
| density.
|
| The marketing departments of silicon companies are saving
| that as their ultimate weapon.
| Denvercoder9 wrote:
| > The same thing happened with chip frequency around the
| end of the 1990s.
|
| Not really the same thing, though. If I buy a chip that's
| advertised as 3.4GHz, it'll run at 3.4GHz. Maybe not all
| the time, but it'll achieve that speed. If I buy a chip
| advertised as being produced with a 3nm process, there's
| nothing even resembling 3nm on there.
| jl6 wrote:
| It's not meaningless, it's near meaningless. That's what the
| nm stands for, right?
| blackoil wrote:
| Number represents transistor density. 2nm has ~twice the
| density of 4nm. If you ignore nm as a unit of distance it
| makes sense.
| phonon wrote:
| Dimensions go by 1/root(2) to double density...so about .7
| per generation.
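|
| The geometry behind that, as a tiny Python sketch (pure
| arithmetic, no real process data):
|
|     import math
|
|     def density_ratio(linear_shrink):
|         # density scales with the inverse square of linear size
|         return 1.0 / linear_shrink ** 2
|
|     print(density_ratio(1 / math.sqrt(2)))  # 2.0: ~0.7x doubles it
|     print(density_ratio(0.5))               # 4.0: halving gives 4x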
| _heimdall wrote:
| They're using the wrong units if we need to consider the
| nanometer as something other than a measure of distance.
| defrost wrote:
| In some applications it helps to be fluid in exchanging
| distance as time with time as distance.
|
| http://dataphys.org/list/grace-hopper-nanoseconds/
|
| and the field of play here isn't 1D linear .. it's
| somewhere sort of a bit slightly > 2D (elements can be
| stacked, there are limitations on how much and how close)
| exe34 wrote:
| Wait till you talk to theoretical physicists who set
| c=h-bar=1.
| _heimdall wrote:
| I have more fundamental concerns to talk with a
| theoretical physicist about first :D
|
| Time to kick an ants nest here...
|
| Quantum mechanics is a useless study that had no business
| being a scientific field. The most fundamental
| principles, like quantum superposition, can never be
| tested or validated and therefore could never be used to
| be predictive of anything other than similarly
| theoretical ideas.
| SAI_Peregrinus wrote:
| Think of them as "node metrics". 1.4 node metrics, not
| nanometers. They haven't referred to any distance in years,
| so make up your favorite backronym. A can be Awesomes, so
| 1.4 node metrics = 14 Awesomes. Embrace nonsense, the chip
| makers did!
| misja111 wrote:
| Nope that's what it meant a long time ago. Nowadays, the nm
| number represents the smallest possible element on the chip,
| typically the gate length, which is smaller than the size of
| a transistor.
|
| This means that when different manufacturers use a different
| transistor design, their 'nm' process could be the same but
| their transistor density different.
| yau8edq12i wrote:
| No, it's not even related to gate length or any kind of
| physical feature anymore. It's a purely marketing term. A
| "3 nm" node has a gate length around 16-18 nm.
| s3p wrote:
| OP isn't wrong. Once the switch was made from 2D to 3D
| gates, transistor numbering became "this is the
| performance you would expect if you COULD shrink a gate
| to x nm", and while it's inaccurate it lets people
| understand how differentiated a new generation is from
| the previous ones. It also lets them stay consistent with
| understanding the progression of chips over time as in
| most of the history of silicon the generations were
| defined by the gates' nm sizing.
| hinkley wrote:
| We should have switched to gates per mm2 long ago. IMO
| that covers the 3D problem fine. Doesn't cover variance
| in feature size by task though. If the gates for a
| multiplier or cache need more space per gate than a
| logical OR then the marketing guys still get a little
| victory over truth.
|
| We could just divide total gates by surface area for the
| chip, but that would confuse the process efficiency with
| the implementation details and then we sacrifice
| instructions per second for a marketing number.
| maqp wrote:
| I don't see it as an issue if the marketing term
| maintains the original context:
|
| "how large a planar FET should be for the transistor
| density used in this process"
|
| Is this the case? I'm not entirely sure. If it is, there
| should be a better unit for this measurement, absolutely.
|
| PLANETS: PLANar Equivalent Transistor Size
|
| Or hire a marketing team to figure a good one.
| 0x457 wrote:
| Not even that. Those numbers just mean, "this 3D 2nm-ish
| process performs similar to a 2D 2nm process".
|
| It's the same situation as when AMD started putting numbers
| in their Athlon XP SKUs to tell "this CPU performs just as
| well as an Intel at that frequency".
|
| I, for one, think it makes sense. Because as a consumer I
| don't care about actual transistor length or density, all I
| care about is: performance, efficiency, price. I can assume
| that going from TSMC N5 to TSMC N3 would result in an
| increase in efficiency and performance.
|
| People that do care about those things (i.e. electrical
| engineers) most likely don't use marketing materials...
| audunw wrote:
| You have it the wrong way around. The nm number used to
| mean minimum gate length. But when moving to FinFET and
| other transistor innovations that provided increased
| density, they decreased the "nm" number AS IF the increased
| transistor density came from shrinking gate length with a
| traditional planar transistor.
|
| The nm has always been a very rough approximate proxy for
| transistor density. Nothing has really changed about how
| what the number implies.
|
| I find it so weird how so many people on Hacker News are so
| hung up on the "nm" numbering. None of these people are
| designing standard cell libraries and those that actually
| design standard cell libraries never cared about what's in
| the marketing material anyway.
|
| Lower number means higher transistor density in some ways,
| on some parts of the chip. That's all
| 2OEH8eoCRo0 wrote:
| Wouldn't 2nm have 4x the density of 4nm?
| helsinkiandrew wrote:
| Wonder how long until it's back to round numbers? - 800pm
| mazurnification wrote:
| It will be 18A first.
| __alexs wrote:
| TSMC N5 is indeed about 30% more Transistors/mm2 than N7 as you
| might expect so seems reasonably meaningful?
| thsksbd wrote:
| 1.6 nm is 16 A. If they continue the BS for much longer, the
| "feature size" will be smaller than the lattice of Si.
| phkahler wrote:
| And it's not that much better than the previous 2nm node,
| so it seems like the number is just meant to be smaller
| than Intel 18A. I wish Intel in the switch to "A" would
| have at least returned to a physically meaningful measure.
| Then just _maybe_ the industry could agree to use that for
| a while. Like 180A or whatever it needed to be - maybe just
| the smallest metal pitch or something.
| __alexs wrote:
| You're the one calling it a feature size, not TSMC.
| lostlogin wrote:
| I'm sure you agree that the measurement given is a
| marketing term and is generally unhelpful beyond 'it's
| smaller than the current number'.
| Tagbert wrote:
| it also seems to be a roughly proportional change in
| density so why would it be unhelpful?
|
| Just because you can't point to a specific feature and
| say that is 1.6nm, doesn't make the label meaningless as
| so many try to assert. It is a label that represents a
| process that results in a specific transistor density.
| What label would you prefer?
| __alexs wrote:
| That you think it needs to convey anything other than
| "better than the last version" suggests a fundamental
| misunderstanding of marketing. It's actually even kind of
| decent because it is actually approximately more dense as
| you might suspect given the numbers.
| thsksbd wrote:
| They're the ones specifying units.
| highwaylights wrote:
| I hear it also has many megahertz.
| audunw wrote:
| The metric means what it has always meant: lower numbers mean
| higher transistor density on some area (the digital part) of
| the chip.
|
| What more do you want? It's as meaningful as a single number
| ever could be.
|
| If you're a consumer - if you don't design chips - lower number
| means there's some improvement somewhere. That's all you need
| to know. It's like horsepower on a car. The number doesn't
| tell you everything about the performance under all conditions
| but it gives a rough idea comparatively speaking.
|
| If you're a chip designer then you never cared about the number
| they used in naming the process anyway. You would dig into the
| specification and design rules from the fab to understand the
| process. The "nm" number _might_ show up in the design rules
| somewhere.. but that's never been relevant to anyone, ever.
|
| I really don't understand why so many commenters feel the need
| to point out that "nm" doesn't refer to a physical dimension.
| Who cares? It doesn't have any impact on anyone. It's very
| mildly interesting at best. It's a near meaningless comment.
| spxneo wrote:
| what is a rough rule of thumb for each 1nm reduction? What
| does that increase in transistor density look like each year?
| If the jump is bigger (5nm->3nm), does that rate of change
| increase too?
|
| trying to understand the economic impact of these
| announcements as I don't understand this topic well enough
| Havoc wrote:
| >The metric means what it has always meant
|
| That's just patently wrong.
|
| The measurement used to refer to feature size [0]. It used to
| be a physical measurement of a thing in nanometers, hence nm
|
| Now it's a free-for-all; the marketing team names it whatever
| the fuck they want. That's why you get stuff like Intel 10nm
| being equivalent to AMD 7nm. [1]. It's not real.
|
| >Who cares?
|
| You...else we wouldn't be having this discussion
|
| [0] https://en.wikipedia.org/wiki/Semiconductor_device_fabric
| ati...
|
| [1] https://www.pcgamer.com/chipmaking-process-node-naming-
| lmc-p...
| gtirloni wrote:
| _> This technology is tailored specifically for AI and HPC
| processors that tend to have both complex signal wiring and dense
| power delivery networks_
|
| Uh?
| mastax wrote:
| Pardon?
| mazurnification wrote:
| There are certain trade-offs when designing a process. TSMC
| researched with partners how AI and HPC (high performance
| computing) chips will most likely look and adjusted the
| process accordingly.
|
| In fact this is a big deal, as until recently the processes
| were more tailored toward mobile applications (that is, they
| were trading some switching performance for lower power
| consumption). Looks like we are back in the 2000s, when
| speed/density is again more important than power consumption.
| magicalhippo wrote:
| I imagine it's because AI and HPC processors are typically
| utilized much more fully than your regular desktop processor.
|
| A typical desktop CPU is designed to execute very varied and
| branch-heavy code. As such they have a lot of cache and a lot
| of logic transistors sitting idle at any given time, either
| waiting for memory or because the code is adding not
| multiplying for example. You can see that in the die shots like
| this[1] for example. I imagine the caches are relatively
| regular and uniform and as such have less complex signal
| wiring, and idle transistors mean lower power requirements.
|
| AI and HPC processors are more stream-oriented, and as such
| contain relatively small caches and a lot of highly-utilized
| logic transistors. Compare the desktop CPU with the NVIDIA
| A100[2] for example. Thus you got both complex wiring, all
| those execution units needs to be able to very quickly access
| the register file, and due to the stream-oriented nature one
| can fully utilize most of the chip so a more complex power
| delivery network is required.
|
| edit: Power delivery tracks can affect signal tracks due to
| parasitic coupling if they're close enough, potentially causing
| signals to be misinterpreted by the recipient if power usage
| fluctuates which it will do during normal operation (if say an
| execution unit goes from being idle to working on an
| instruction, or vice versa). Thus it can be challenging to fit
| both power and signal tracks in close proximity.
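|
| As a first-order illustration of that coupling, a simple
| charge-sharing estimate in Python (it assumes an undriven
| victim wire, and the capacitance values are made up):
|
|     Vdd = 0.75      # aggressor swings the full supply, volts
|     C_couple = 0.5  # coupling cap to the neighbouring wire, fF
|     C_ground = 2.0  # victim wire's capacitance to ground, fF
|
|     # charge sharing: victim moves by Vdd * Cc / (Cc + Cg)
|     noise = Vdd * C_couple / (C_couple + C_ground)
|     print(f"induced noise ~{noise*1e3:.0f} mV")  # ~150 mV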
|
| [1]: https://wccftech.com/amd-ryzen-5000-zen-3-vermeer-
| undressed-...
|
| [2]: https://www.tomshardware.com/news/nvidia-
| ampere-A100-gpu-7nm
| gtirloni wrote:
| The needs of AI/HPC at the chip level weren't clear to me.
| Thanks for the insightful answer.
| mort96 wrote:
| As opposed to low power processors (e.g laptops/phones), that
| tend to have less complex wiring and less dense power delivery
| networks
| tgtweak wrote:
| A16 in 2027 vs Intel's 18A in full swing by 2026 feels like a
| miss on TSMC's part. This looks like an open door for fabless
| companies to try Intel's foundry service.
| akmittal wrote:
| Intel has made lots of big promises, they have yet to catch up
| to TSMC
| crowcroft wrote:
| Intel hasn't got a great track record in recent history. Really
| hoping they can hit their timeline though.
| SilverBirch wrote:
| There are definitely going to be people taking a bet on the
| Intel foundry, but Intel has tried this before and it has
| worked badly.
| onepointsixC wrote:
| Sounds like an easy short for you then.
| futureshock wrote:
| Comments about the marketing driven nm measurements aside, this
| still looks like another solid advance for TSMC. They are already
| significantly ahead of Samsung and Intel on transistor density.
| TSMC is at 197 MTr/mm2 while Samsung is at 150 MTr/mm2 and Intel
| is at 123 MTr/mm2. This 1.6nm process will put them around 230
| MTr/mm2 by 2026. When viewed by this metric, Intel is really
| falling behind.
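|
| For a rough sense of those gaps, the ratios work out as follows
| (Python, using the densities quoted above; the 230 figure is
| the 2026 estimate):
|
|     densities = {  # MTr/mm2, as quoted above
|         "TSMC (current)": 197,
|         "Samsung": 150,
|         "Intel": 123,
|         "TSMC A16, 2026 est.": 230,
|     }
|     intel = densities["Intel"]
|     for name, d in densities.items():
|         print(f"{name}: {d} MTr/mm2 ({d / intel:.2f}x Intel)")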
| Workaccount2 wrote:
| Intel has a 1.4nm process in the pipeline for ~2027. They just
| took delivery on their first high NA EUV machine in order to
| start working on it.
|
| Their gamble however is that they need to figure out DSA, a
| long storied technology that uses self-forming polymers to
| allow less light to sharply etch smaller features.
|
| If they figure out DSA, they will likely be ahead of TSMC. If
| not, it will just be more very expensive languishing.
| saganus wrote:
| Do you know the name of the company that produces the EUV
| machine? Is it ASML?
|
| It is my understanding that only ASML had cracked the EUV
| lithography, but if there's another company out there, that
| would be an interesting development to watch.
| singhrac wrote:
| It's ASML.
| Rinzler89 wrote:
| _> It is my understanding that only ASML had cracked the
| EUV litography_
|
| _Ackshually_ , EUV was cracked by Sandia Labs research in
| the US, with EUV light sources built by Cymer in the US.
| ASML was the only one allowed to license the tech and
| integrate it into their steppers after they bought Cymer in
| 2013. Hence why the US has veto rights over whom Dutch-based
| ASML can sell their EUV steppers to, as in not to China,
| despite how much ASML shareholders would like that extra
| Chinese money.
| ano-ther wrote:
| More on the history here:
|
| https://www.asml.com/en/news/stories/2022/making-euv-lab-
| to-...
| EMCymatics wrote:
| Thanks for the source. I'm glad Bell labs pounced on the
| opportunity.
| oivey wrote:
| Pretty cool story of the National Lab system and the
| closest thing the US has to "government scientists"
| massively shaping the future.
| s3p wrote:
| They don't need anyone else's money. Intel has bought all
| of ASML's EUV _and_ High-NA EUV machines for the next
| several years.
| Rinzler89 wrote:
| _> They don't need anyone else's money_
|
| There's no such thing in capitalism. Limited supply semi
| is(was) a bidding war, where China had a blank cheque and
| was willing to outbid everyone else to secure semi
| manufacturing supremacy.
|
| Do you think Intel or TSMC could have scored all that
| ASML supply at that price if China would have been
| allowed to bid as well? ASML would have been able to
| score way higher market prices per EUV stepper had China
| been allowed in the game, and therefore higher profits.
|
| You think ASML shareholders hate higher profits or what?
| Nvidia sure didn't during the pandemic. You wanted a GPU?
| Great, it would cost you now 2x-3x the original MSRP
| because free market capitalism and the laws of supply and
| demand.
| saganus wrote:
| Did not know that. Thanks
| ahartmetz wrote:
| >EUV was cracked
|
| More like it was started. There were a ton of gnarly
| problems left that took over ten years and billions of
| EUR to solve.
|
| Producing a few flashes of EUV and getting a few photons
| on the target is relatively easy. Producing a lot of EUV
| for a long time and getting a significant fraction
| (...like 1%) of the photons on the target is very hard.
| hajile wrote:
| EUV was supposed to ship in the early 2000s and the work
| on it started in earnest in the 90s. It turned out that EUV
| is way harder than anyone imagined. The only reason we
| did stuff like immersion DUV was because we couldn't get
| EUV to work.
| nabla9 wrote:
| Yes, and Intel was the first to install a High-NA EUV
| machine from ASML, the TWINSCAN EXE:5000
| https://www.youtube.com/watch?v=3PCtf1ONYMU
|
| ps. Intel used to own 15% of ASML in 2012, now they own
| less than 2%.
| hackernudes wrote:
| DSA = Directed Self-Assembly
| dev1ycan wrote:
| Intel says a lot of things but until they put them out in the
| field I don't believe a word they say
| Sysreq2 wrote:
| The nomenclature for microchip manufacturing left reality a
| couple generations ago. Intel's 14A process is not a true 14A
| half-pitch. It's kind of like how they started naming CPUs
| off "performance equivalents" instead of using raw clock
| speed. And this isn't just Intel. TSMC, Samsung, everyone is
| doing half-pitch equivalent naming nowadays.
|
| This is the industry roadmap from 2022: https://irds.ieee.org
| /images/files/pdf/2022/2022IRDS_Litho.p... If you look at
| page 6 there is a nice table that kind of explains it.
|
| Certain feature sizes have hit a point of diminishing
| returns, so they are finding new ways to increase
| performance. Each generation is better than the last but we
| have moved beyond simple shrinkage.
|
| Comparing Intel's 14A label to TSMC's 16A is meaningless
| without performance benchmarks. They are both just marketing
| terms. Like the Intel/AMD CPU wars. You can't say one is
| better because the label says it's faster. There's so much
| other stuff to consider.
| confused_boner wrote:
| would performance per watt be the right way to benchmark?
| futureshock wrote:
| Benchmarks are tricky because it all depends on workload
| and use case. If you are in a VR headset for example,
| it's all about power envelope and GPU flops. If you are
| in a Desktop used for productivity it might be all about
| peak CPU performance.
| wtallis wrote:
| When comparing fab processes, you wouldn't want
| performance of a whole processor but rather the voltage
| vs frequency curves for the different transistor
| libraries offered by each fab process.
| Workaccount2 wrote:
| "Likely better" doesn't come from 14A vs 16A. It comes from
| Intel using High NA-EUV vs TSMC using double pattern Low
| NA-EUV.
|
| If Intel pulls off DSA, they will be using a newer
| generation of technology compared to TSMC using an
| optimized older generation. Could TSMC still make better
| chips? Maybe. But Intel will likely be better.
| tivert wrote:
| > If Intel pulls off DSA, they will be using a newer
| generation of technology compared to TSMC using an
| optimized older generation. Could TSMC still make better
| chips? Maybe. But Intel will likely be better.
|
| Is Intel working on "an optimized older generation" as a
| backup plan? I don't follow semiconductors very closely,
| but my impression is the reason they're "behind" is they
| bet aggressively on an advanced technology that didn't
| pan out.
| Golden_Wind123 wrote:
| From what I remember the apocryphal story is that Intel
| dragged their feet on adopting EUV and instead tried to
| push multi-patterning past its reasonable limits.
|
| If that's the actual root cause, then Intel's lagging is
| due to optimizing their balance sheets (investors like
| low capital expenditures) at the expense of their
| technology dominance.
| martinpw wrote:
| > This is the industry roadmap from 2022: https://irds.ieee
| .org/images/files/pdf/2022/2022IRDS_Litho.p...
|
| Very interesting document - lots of numbers in there for
| real feature sizes that I had not seen before (Table
| LITH-1).
|
| And this snippet was particularly striking:
|
| _Chip making in Taiwan already uses as much as 10% of the
| island's electricity._
| ReptileMan wrote:
| Not understanding chip design - but is it possible to get more
| computational bang with fewer transistors - are there some
| optimizations to be had? Better design that could compensate
| for bigger nodes?
| pixelpoet wrote:
| Such optimisation would apply equally to the more dense
| processes, though.
| mort96 wrote:
| In general, yes, but all the chip design companies have
| already invested a whole lot of time and engineering
| resources into squeezing as much as possible from each
| transistor. But those kinds of optimizations are certainly
| part of why we sometimes see new CPU generations released on
| the same node.
| 7thpower wrote:
| Yes, generally one of the trends has been movement toward
| specialized coprocessors/accelerators. This was happening
| before the recent AI push and has picked up steam.
|
| If you think of an SOC, the chip in your phone, more and more
| of the real estate is being dedicated to specialized compute
| (AI accelerators, GPUs, etc.) vs general purpose compute
| (CPU).
|
| At the enterprise scale, one of the big arguments NVIDIA has
| been making, beyond their value in the AI market, has been
| the value of moving massive, resource intense workloads from
| CPU to more specialized GPU acceleration. In return for the
| investment to move their workload, customers can get a
| massive increase in performance per watt/dollar.
|
| There are some other factors at play in that example, and it
| may not always be true that the transistors/mm^2 is
| lower, but I think it illustrates the overall point.
| merb wrote:
| A JavaScript accelerator would probably halve the power
| consumption of the world. The problem is just that by the
| time it reached widespread usage it would probably already
| be too old.
| choilive wrote:
| I believe ARM has some instructions that are JavaScript
| specific so we're kinda in that direction already.
| merb wrote:
| Yeah the https://developer.arm.com/documentation/dui0801/
| h/A64-Floati... but a full jit helper that is generalized
| is way way harder and as said will take a long time to be
| generally available. Just look at wasmgc and that only
| has the minimum denominator.
| robocat wrote:
| Reminiscent of the Java CPUs: Not even used for embedded
| (ironically the reason Java was created?). And not used
| at all by the massive Java compute needed for corporate
| software worldwide?
| crote wrote:
| Weren't they used in Java Cards?
|
| Basically, every single credit card sized security chip
| (including actual credit cards, of course) is a small
| processor running Java applets. Pretty much everyone has
| one or more in their wallet. I'd assume those were actual
| Java CPUs directly executing bytecode?
| pjc50 wrote:
| Everyone generally does that _before_ sending the design to
| the fab.
|
| Not to say that improvements and doing more with less are
| impossible, they probably aren't, but it's going to require
| significant _per design_ human effort to do that.
| potatolicious wrote:
| Some yeah, but many of these optimizations aren't across-the-
| board performance improvements, but rather specializations
| that favor specific kinds of workloads.
|
| There are the really obvious ones like on-board GPUs and AI
| accelerators, but even within the CPU you have optimizations
| that apply to specific kinds of workloads like specialized
| instructions for video encode/decode.
|
| The main "issue", such as it is, is that this setup
| advantages vertically integrated players - the ones who can
| release software quickly to use these optimizations, or even
| go so far as to build specific new features on top of
| these optimizations.
|
| For more open platforms you have a chicken-and-egg problem.
| Chip designers have little incentive to dedicate valuable and
| finite transistors to specialized computations if the
| software market in general hasn't shown an interest. Even
| after these optimizations/specialized hardware have been
| released, software makers often are slow in adopting them,
| resulting in consumers not seeing the benefit for a long
| time.
|
| See for example the _many_ years it took for Microsoft to
| even accelerate the rendering of Windows' core UI with the
| GPU.
| jjk166 wrote:
| The design optimization software for modern semiconductors is
| arguably the most advanced design software on earth with
| likely tens if not hundreds of millions of man-years put into
| it. It takes into account not only the complex physics that
| apply at the nano-scale but also the interplay of the various
| manufacturing steps and optimizes trillions of features.
| Every process change brings about new potential
| optimizations, so rather than compensating for bigger nodes
| it actually widens the gap further. By analogy, the jump from
| hatchet to scalpel in the hands a layman is far less than the
| jump from hatchet to scalpel for a skilled surgeon.
| kjkjadksj wrote:
| What about the effect of heat? More transistors per area
| equate to a hotter chip, no?
| robocat wrote:
| More transistors switching I thought? It isn't about raw
| transistors e.g. memory isn't as power hungry as GPU?
| rowanG077 wrote:
| Is Intel really just the best chip designer on earth, or how
| else can they compete with AMD at such densities?
| smolder wrote:
| I'd say they haven't been very competitive with AMD in
| performance per watt/dollar in the ryzen era, specifically
| due to process advantage. (On CPU dies especially, with less
| advantage for AMDs I/O dies.) I'd agree they have done a good
| job advancing other aspects of their designs to close the
| gap, though.
| black_puppydog wrote:
| Stupid beginner question: is MTr/mm2 really the right thing to
| be looking at? Shouldn't it be more like mm2/MTr ? This feels
| kind of like these weird "miles per gallon" units, when
| "gallons per mile" is much more useful...
| simpsond wrote:
| 200 million transistors per square millimeter.
|
| Gallons per mile only makes sense when you are talking about
| dragsters.
| baq wrote:
| gallons per hundred miles makes much more sense than that.
|
| incidentally, this is the measure the rest of the world is
| advertising, except usually in liters per 100km.
|
| there's a good reason for this: comparisons are linear
| instead of inversely proportional. 6l/100km is 50% better than
| 9l/100km. 30mpg vs 20mpg is... not as simple.
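|
| the arithmetic, for anyone who wants to check (Python):
|
|     def gal_per_100mi(mpg):
|         return 100.0 / mpg
|
|     for mpg in (20, 30):
|         print(f"{mpg} mpg = {gal_per_100mi(mpg):.2f} gal/100mi")
|     # 20 mpg = 5.00, 30 mpg = 3.33: a third less fuel per mile,
|     # even though 30 is "50% more" than 20 -- the same asymmetry
|     # as 9 vs 6 l/100km.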
| Detrytus wrote:
| Gallons per hundred miles would be more useful.
|
| I guess it's a matter of approach. Europeans are traveling
| familiar, constant distances and worry about fuel cost.
| Americans just fill up their tank and worry how far they
| can go :)
| kumarvvr wrote:
| It's so hard to even fathom 200+ Million Transistors in 1 square
| millimeter !
|
| And, to think, it's all done with light !
|
| We live in interesting times !
| seastarer wrote:
| Translating 197 M/mm2 into the dimensions of a square, we get a
| dimension of 71nm. If we compute the "half-pitch", that's
| 35.5nm.
|
| 230 M/mm2 translates to 33nm "half-pitch".
|
| Of course, transistors aren't square and aren't so densely
| packed, but these numbers are more real IMO.
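|
| The same back-of-envelope in Python (just reproducing the
| arithmetic above):
|
|     import math
|
|     def equiv_half_pitch_nm(mtr_per_mm2):
|         area_nm2 = 1e12 / (mtr_per_mm2 * 1e6)  # 1 mm2 = 1e12 nm2
|         return math.sqrt(area_nm2) / 2
|
|     print(equiv_half_pitch_nm(197))  # ~35.6 nm
|     print(equiv_half_pitch_nm(230))  # ~33.0 nm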
| alberth wrote:
| > This 1.6nm process will put them around 230 MTr/mm2
|
| Would it be that x2 (for front & back)?
|
| E.g., 230 on front side and another 230 on back side = 460
| MTr/mm2 TOTAL
| m3kw9 wrote:
| So much for physical limitations, it seems you can keep halving
| smaller till infinity.
| ThinkBeat wrote:
| I am not sure I understand backside in this instance and the
| illustration in the article didn't entirely help.
|
| In general, at least in older times, one side of the CPU has
| all the nice pins on it, and the motherboard has a pincushion
| that the pins match nicely. On the top of the CPU you put a
| HUGE heatsink and off you go.
|
| In this configuration the power delivery must be via the
| pincushion, through some of the pins.
|
| Intuitively that sounds to me like the power is coming in the
| backside? But given that it is a big deal, I must be missing
| something.
|
| Is the power fed from the "top" of the cpu where the heatsink
| would sit?
| sgalloway wrote:
| Here's a very good youtube video on the subject of backside
| power delivery I watched a few days ago.
| https://youtu.be/hyZlQY2xmWQ?si=JuXtzRy0U0onEzoe
| gosub100 wrote:
| The "Hollywood accounting" transistor density aside, I think a
| new metric needs to become mainstream: "wafers per machine per
| day" and "machines per year manufactured".
|
| Getting these machines built and online is more important
| than what one machine (of which there might be fewer than 6
| made per year) can do.
|
| The information, I'm sure, is buried in their papers, but I want
| to know what node processes are in products available now.
___________________________________________________________________
(page generated 2024-04-25 23:01 UTC)