[HN Gopher] Apple and Intel first to use TSMC 3nm
       ___________________________________________________________________
        
       Apple and Intel first to use TSMC 3nm
        
       Author : walterbell
       Score  : 229 points
       Date   : 2021-07-03 08:15 UTC (14 hours ago)
        
 (HTM) web link (www.electronicsweekly.com)
 (TXT) w3m dump (www.electronicsweekly.com)
        
       | alkonaut wrote:
       | > If Intel can get back to a two-year node cadence with 2x
       | density improvements around mid-decade
       | 
       | If I cut my kilometer time in half every six months I'll do that
       | 3h marathon next year!
        
       | 0xy wrote:
       | I wonder how large the king's ransom was that Intel threw at TSMC
       | to make this happen. Seems like it was necessary given their
        | faltering fab strategy and the prospect of years of limping 10nm
        | and 14nm+++++++++ products.
        
         | smoldesu wrote:
         | > I wonder how large the king's ransom was that Intel threw at
         | TSMC to make this happen.
         | 
         | Probably similar to how much Apple paid to edge out the
         | competition on 5nm.
        
         | test_epsilon wrote:
         | Are you generally asking how much it costs to purchase early
         | slots on leading edge nodes?
         | 
         | I'm not sure if being Intel would have made TSMC jack up the
         | price or anything. It is in their interest to get large
         | profitable customers, and it is in their interest to take
         | business away from Intel's fabs. TSMC might well have offered
         | significant discounts or other special treatment to land Intel
         | contracts.
        
           | 0xy wrote:
           | TSMC are experiencing over 100% demand, they don't need to
           | offer discounts. In fact, they stopped giving any discounts
           | months ago even on older processes.
           | 
           | Apple has deep pockets and spends heavily to ensure supply
           | chain stability, but we haven't seen this behavior from Intel
            | so I'm curious how big a price they had to pay.
        
             | test_epsilon wrote:
             | > TSMC are experiencing over 100% demand, they don't need
             | to offer discounts. In fact, they stopped giving any
             | discounts months ago even on older processes.
             | 
             | I stand by what I wrote.
             | 
             | > Apple has deep pockets and spends heavily to ensure
             | supply chain stability, but we haven't seen this behavior
              | from Intel so I'm curious how big a price they had to
              | pay.
             | 
             | Would obviously highly depend on volumes, timelines they
             | committed to, layers, materials, and specific devices and
              | tweaks they are paying for. Could be less per transistor
              | than Apple; I can't see why it would be vastly more (unless
              | we're talking about extremely limited volumes).
        
       | greenknight wrote:
       | Weren't Intel to use TSMC on 5nm in 2H 2021? --
       | https://www.extremetech.com/computing/319301-report-intel-wi...
       | 
       | I doubt Intel would move any of their core designs across to
        | TSMC. Only fringe products... It's not like their CPU designs
        | port across natively, and from my understanding they would have
        | to have an entirely different team working with TSMC's IP.
        
         | xiphias2 wrote:
          | You can't have fringe products on 3nm; they have to be
          | profitable ones. It seems like a risky but exciting strategy.
         | I prefer that over the old boring Intel.
        
           | sharken wrote:
            | It's a much better strategy for sure. As for AMD, it seems
            | that hard times are ahead if they cannot move to 3nm after
            | 5nm.
           | 
           | It seems very likely that AMD has no other option than moving
           | most of their products to Samsung 3nm.
           | 
           | I hope that works out as the competition is needed.
           | 
           | https://www.gizmochina.com/2021/02/02/amd-outsource-gpu-
           | apu-...
        
             | smoldesu wrote:
              | Samsung's fabs have been getting better and better over the
              | years; we could see a surprise blowout on the 3nm node.
             | Hell, just the other day Samsung was testing an RDNA2 iGPU
             | that was faster than the GPU on the iPhone 12... on the 7nm
             | node. I'm no expert here, but AMD might finally have the
             | silicon it needs to compete with Nvidia in raw compute.
        
               | atty wrote:
               | That's exciting news, but I can't help but feel that at
               | least in my field (machine learning), Nvidia is far more
               | sticky than just their compute dominance. As long as CUDA
               | is Nvidia proprietary, I just don't think our team can
               | afford to move.
               | 
               | I'm seeing lots of very impressive movements from
               | Tensorflow and Pytorch to support ROCm, but it's just not
               | at a level yet where it would make good business sense
               | for us to switch, even if AMD GPUs were 50% faster than
               | Nvidia. And it seems like Nvidia is improving and
               | widening their compute stack faster than AMD is catching
               | up.
        
               | smoldesu wrote:
               | I'm right there with you, I have an Nvidia GPU on all my
               | machines too. With that being said, however, there are
               | plenty of software-agnostic workloads that I run that
               | could benefit from a truly open, powerful GPU.
        
         | baybal2 wrote:
          | From what a passing birdie told me, Intel will still be doing
          | TSMC 5nm, but chose to announce the far-in-the-future 3nm first
          | because it "sounds big" and makes it look to Wall Street people
          | as if they are ahead of AMD.
          | 
          | I haven't heard anything certain about AMD, but they are
          | guaranteed to follow TSMC's roadmap very aggressively, being
          | fully aware that cutting-edge node capacity is bought years in
          | advance and that Intel has wanted it for the last ~3 years.
         | 
         | They probably simply don't have things set in stone for so far
         | in the future.
        
           | belval wrote:
            | It's not so much to impress Wall Street as that they can
            | hijack TSMC production and constrain AMD's volume.
        
             | adventured wrote:
              | Exactly. Intel's profits are still running near all-time
              | record highs (roughly 12x those of AMD). They can trivially
              | constrain AMD by taking away their access to fabrication,
              | buying up as much of it as possible. If nothing else it'll
              | buy them more time to try to dig out of their mess.
        
               | samus wrote:
                | It is in TSMC's own interest not to burn their existing
               | customers because Intel still has their own fabs and can
               | drop TSMC the moment they catch up. Rebuilding business
               | relationships with clients takes time, and they are also
               | not really without options: Samsung is very much also a
               | key player in the market after all.
        
               | andy_ppp wrote:
               | Surely that would be anti competitive in the extreme? I'm
               | surprised TSMC are just taking the money now... I'd be
                | really surprised if Intel don't learn a load of tricks as
                | they drill into getting silicon onto a different fab
                | provider - both about their own processes and the
                | underlying details of the technology.
        
               | zsmi wrote:
               | > Surely that would be anti competitive in the extreme?
               | 
               | Now prove it.
               | 
               | TSMC accepts paying customers. Intel is a paying customer
               | too.
               | 
                | Haven't we spent years bemoaning how Intel is behind TSMC?
               | It seems natural Intel would take advantage of TSMC's
               | expertise if they are ahead. And Intel probably is happy
               | to use TSMC. Two rabbits with one shot.
               | 
               | Even if you could prove it, where do you plan to file the
               | complaint?
        
       | amelius wrote:
       | Is the US even aware at a political level that Taiwan is now
       | ahead of the US and that US businesses are giving up?
        
         | xiphias2 wrote:
          | They are giving incentives for TSMC to build factories in the
          | US for a good reason. Both China and the US are aware of how
          | important it is.
        
           | amelius wrote:
           | Yeah but a Taiwanese factory on US soil still isn't US tech.
           | Shouldn't the US invest more, while we can still catch up?
        
             | lotsofpulp wrote:
             | US should be handing out green cards to TSMC workers and
             | their families.
        
             | xiphias2 wrote:
             | TSMC is both a monopoly for the newest chips and a
             | geopolitical risk in the case of China taking over TSMC in
             | Taiwan.
             | 
              | I think that if TSMC has a subsidiary in the US, it would be
              | really hard for TSMC Taiwan to block TSMC US from licensing
              | IP from it after the knowledge is transferred. Regarding
             | the monopoly status: sure, healthy competition is good, but
             | it's something competitors (Samsung, Intel) should tackle
             | on the market.
        
             | smoldesu wrote:
             | If your house was built using "US tech", you'd be living an
             | Amish-adjacent lifestyle.
        
         | ohazi wrote:
         | Half the world won't even acknowledge that Taiwan is a country.
        
           | loyukfai wrote:
           | Publicly, currently, yes.
        
           | emayljames wrote:
           | Part of China, yes. You are right.
        
             | jtdev wrote:
             | The Taiwanese people don't seem to agree with your
             | statement.
        
               | amelius wrote:
               | The people in Hong Kong didn't agree either ... :(
        
               | rlanday wrote:
               | Hong Kong has been indisputably part of China since July
               | 1, 1997.
        
               | jlokier wrote:
               | The consent of the Hong Kong people in 1997 was entirely
               | dependent on the Hong Kong Basic Law and Sino-British
               | treaty being honoured by China.
               | 
                | China has indisputably reneged on both agreements since
               | 1997. Virtually nobody in Hong Kong or associated with
               | Hong Kong would have accepted the 1997 handover if they
               | had known in advance.
               | 
               | This should be borne in mind by anyone evaluating whether
               | to trust China on any important agreement in future.
               | 
               | "I am altering the deal. Pray I don't alter it any
               | further" comes to mind.
        
               | [deleted]
        
               | AussieWog93 wrote:
               | Publicly, many of them do.
               | 
               | A formal declaration of independence would force Beijing
               | to either resume war with Taiwan or lose face
               | domestically.
               | 
               | Pretending that your homeland isn't technically a real
               | country is a small price to pay to prevent insane
               | Mainlander nationalists from launching a flurry of DF-41s
               | into the heart of Taipei.
        
         | yyyk wrote:
         | Intel isn't out of the fab business, just behind. Being behind
         | is much less relevant economically than some people think,
         | especially in this seller's market.
         | 
          | It's not like 14nm processors will magically stop working;
          | you just pay more TDP for performance. If one day the US decides
         | Taiwan is not worth it, it would just pay a bit more until
         | Intel catches up (which will happen eventually).
        
           | amelius wrote:
           | The problem is that Intel is already more than 2 technology
           | nodes behind. Catching up could be exponentially expensive,
           | especially if all orders are now going to Taiwan.
        
             | mensetmanusman wrote:
             | EUV is the unknown here. It helps skip a lot of density
             | doubling steps, so we might see a quick catch-up.
        
             | yyyk wrote:
              | Intel 10nm is approximately like TSMC 7nm (different
             | measurements but similar transistor density). So I'd say
             | 1.5 nodes behind? AMD and TSMC used to be behind like that,
             | and eventually they caught up. Intel has the resources, and
             | given shortages, they have a bit of time too.
        
               | amelius wrote:
               | > So I'd say 1.5 nodes behind?
               | 
               | A node corresponds to a factor of .7 in size, see [1].
               | 
               | > AMD and TSMC used to be behind like that
               | 
               | Yes, but as a technology evolves it's getting harder to
               | catch up.
               | 
               | [1] https://en.wikichip.org/wiki/technology_node
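                | 
                | A rough sketch of what that factor implies (illustrative
                | arithmetic only, assuming the conventional 0.7x linear
                | shrink per node; real nodes deviate from this):
                | 
                |     /* back-of-the-envelope node scaling */
                |     #include <math.h>
                |     #include <stdio.h>
                | 
                |     int main(void) {
                |         double linear  = 0.7; /* feature-size scale/node */
                |         double density = 1.0 / (linear * linear);
                |         printf("density gain per node: %.2fx\n",
                |                density);              /* ~2.04x */
                |         printf("gap at 1.5 nodes: %.2fx\n",
                |                pow(density, 1.5));    /* ~2.9x  */
                |         return 0;
                |     }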
        
               | samus wrote:
               | Since 20nm, a node step does not necessarily have
               | anything to do with sizes and is purely a marketing term.
               | Even if there are size shrinks, they are hardly
                | comparable if a switch to another technology (FinFET ->
                | GAAFET) is involved as well.
        
       | heisenbit wrote:
       | > "If Intel can get back to a two-year node cadence with 2x
       | density improvements around mid-decade they can be roughly at
       | density parity with TSMC," says Jones
       | 
        | A guarded statement indeed. It is hard to see how Intel can catch
        | up on semiconductor technology. At this point of Moore's law it
        | is about financial capability, and while Intel's revenue may
        | still be a little bigger than TSMC's, the semiconductor-related
        | part measured in capacity is a third, and Intel is not in the
        | top 5.
       | 
        | Intel diverting money from its own fabs in order to keep its
        | options open and maintain a tech-lead position is telling. Watch
        | what they do, not what they tell you.
        
         | MangoCoffee wrote:
         | >while Intel's revenues may be still a little bigger than TSMC
         | the semiconductor
         | 
          | TSMC's Q1 2021 revenue is 362.41B
          | 
          | Intel's Q1 2021 revenue is 19.67B
          | 
          | TSMC market cap @553.16B
          | 
          | Intel market cap @229.20B
          | 
          | TSMC has already surpassed Intel.
         | 
         | https://www.google.com/finance/quote/INTC:NASDAQ
         | 
         | https://www.google.com/finance/quote/TSM:NYSE
        
           | colinmhayes wrote:
           | You actually believed TSMC had more than a trillion dollars
           | of revenue a year?
        
           | greenknight wrote:
           | TSMCs Q1 Revenue was 12.92B USD -- https://investor.tsmc.com/
           | english/encrypt/files/encrypt_file...
           | 
           | The 362.41B is in TWD
        
             | MangoCoffee wrote:
              | Ah ok, Google shows B. I assumed it's in USD.
        
               | greenknight wrote:
               | In any case, in this nanometer war, ASML will come out
               | the victor.
        
           | [deleted]
        
         | specialist wrote:
         | > _it is about financial capabilities_
         | 
         | Request for future: More about this, please.
         | 
         | Since binging on the acquired.fm podcast, I'm becoming more
         | aware of the money side of things.
         | 
         | By way of analogy:
         | 
         | "Amateurs talk strategy. Professionals talk logistics."
         | 
          | I spent my career solving cool problems ("strategy"), while the
          | geeks minding the business and finance stuff ("logistics")
          | fared quite a bit better.
        
       | lincpa wrote:
       | It is predicted that Intel will also switch to my
       | "warehouse/workshop model" to integrate multiple cores (workshop)
       | and memory (warehouse) in the chip, which requires 3nm.
       | 
       | Apple M1 has adopted my "warehouse/workshop model" and succeeded.
       | 
       | My "warehouse/workshop model":
       | https://github.com/linpengcheng/PurefunctionPipelineDataflow...
       | 
        | Discussions:
       | 
       | https://www.reddit.com/r/programming/comments/o0gxy3/predict...
        
       | ksec wrote:
        | I thought it was worth pointing out that Nikkei (the original
        | source of this piece) and DigiTimes have an exceptionally poor
        | track record on anything TSMC.
        | 
        | So exceptional that every prediction or rumour has been wrong.
        | Everything they got right was restating what TSMC said.
       | 
        | My suggestion for the HN community is whenever you see TSMC in
        | the headline, read it with the biggest grain of salt unless it
        | is coming from a fairly reputable site (AnandTech, for example).
        
       | jonplackett wrote:
       | > It is thought that the iPad will be first to get 3nm chips
       | 
        | Is anyone really pushing the current iPad's tech specs? Until they
       | massively beef up iPadOS I'm not sure this will benefit anyone
       | particularly. The battery already lasts all day.
        
         | Tempest1981 wrote:
         | High-end games
        
           | elorant wrote:
           | Are there triple A games for iPad?
        
         | gameswithgo wrote:
         | you could shrink the battery
        
       | jiggawatts wrote:
       | I just spent a month upgrading a bunch of cloud servers to AMD
       | EPYC, and it was such a _nostalgic_ feeling to be able to speed
       | up systems with a simple, risk-free hardware upgrade.
       | 
       | Remember those days? When every server upgrade magically sped up
       | everything by a factor of two or three? The stop button was
       | pressed on that wonderful time for about a decade, but what
       | seemed like an end of an era wasn't. It was just a pause, and now
       | we're playing with ever faster tin once again.
       | 
       | It feels almost strange for that era of ever increasing hardware
       | speed to make such a forceful comeback after having been stuck at
       | Intel 14nm in the server space for years. The upgrades to 7nm AMD
       | are already pretty impressive. I hear good things about Apple's
       | 5nm laptop chips. Server chips on 3nm should blow everyone's
       | frigging minds.
       | 
       | Exciting times!
        
         | dmitriid wrote:
         | Software will inevitably eat all those gains
        
           | api wrote:
           | Yeah, now we can have Electron apps compiled to WASM running
           | in virtual browser instances hosted remotely in the server
           | with feeds sent back via H.264 streams that then have to be
           | decoded by a local browser instance to be rendered to...
        
             | choeger wrote:
             | ... fill a spreadsheet.
        
             | AussieWog93 wrote:
             | I know this is a meme, but back in around 2012 or so, one
             | of my grandpa's friends pulled out a Windows 2000 laptop
             | and loaded up a spreadsheet in Excel 2000.
             | 
             | I was blown away by how snappy and responsive the whole
             | experience was compared to the then-not-bad Core2 Duo
             | Grandpa had in his machine, running Windows 7 and Office
             | 2010.
        
               | VortexDream wrote:
               | Honestly, it doesn't feel like personal computing has
               | improved much at all. I remember using Windows 2000. Run
               | it on an SSD and it flies (tried it recently). Yet I
               | can't identify anything that W10 does better (for me)
               | than W2000 that justifies its sluggishness on a C2D.
        
               | kilburn wrote:
               | While it is true that older software is extremely snappy
               | if you compare it with what we use today, it is not that
               | hard to find examples where we have come a long way. Off
               | the top of my head:
               | 
                | - You can mix Chinese and Russian characters in a
               | document [pervasive use and support for unicode]
               | 
               | - Your computer won't get zombified minutes after you
               | connect it to the internet [lots of security
               | improvements]
               | 
                | - You can connect a random USB thingy with a much, much
                | lower probability of your computer getting instantly
                | owned [driver isolation]
               | 
               | - You can use wifi with some reliability [more complex,
               | stable and faster communication protocols]
               | 
               | - You can have a trackpad that doesn't suck
                | [commoditization of non-trivial algorithms/techniques that
               | did not exist back then]
               | 
               | - Files won't get corrupted time and again at every power
               | failure [lots of stability improvements]
               | 
               | Whether all of the above could be achieved with the _very
               | performance oriented_ techniques and approaches that used
               | to be common in older software is debatable at least. In
                | any case, a lot of the slowness we pay for today is in
                | exchange for actually being able to deal with the
                | complexities necessary to achieve those things in
                | reasonable time/cost.
        
               | VortexDream wrote:
               | Honestly, I don't see why any of these things require
               | such terrible performance characteristics.
        
               | bruce343434 wrote:
               | > - You can have a trackpad that doesn't suck
               | 
               | Go on...
        
               | dmitriid wrote:
                | > In any case, a lot of the slowness we pay for today is
                | in exchange for actually being able to deal with the
               | complexities necessary to achieve those things in
               | reasonable time/cost.
               | 
               | This is also debatable at least.
               | 
               | Just a few weeks ago it turned out that the new Windows
               | Terminal can only do color output at 2fps [1].
               | 
               | The very infuriating discussion in the GitHub tracker
                | ended up with a Microsoft team member saying that you
                | need _"an entire doctoral research project in performant
                | terminal emulation"_ to do colored output. I kid you
                | not. [2]
               | 
               | Of course, the entire "doctoral research" is 82 lines of
               | code [3]. There will be a continuation of the saga [4]
               | 
               | And that is just a very small, but a very representative
               | example. But do watch Casey's rant about MS Visual Studio
               | [5]
               | 
               | You can see this _everywhere_. My personal anecdote is
                | this: with the introduction of the new M1 Macs, Apple put
                | it front and center that Macs now wake up instantly. For
                | reference: in 2008 I had exactly the same behaviour on a
                | 2007 MacBook Pro. In the thirteen years since, the
                | software has become so bad that you need a processor
                | that's anywhere from 3 to _15_ times more powerful to
                | barely, just barely, do the same thing [6].
               | 
               | The upcoming Windows 11 will require 4GB of RAM and 64GB
               | of storage space just for the empty, barebones operating
               | system alone [7]. Why? No "wifi works reliably" or
               | "trackpad doesn't suck" can justify any of this.
               | 
               | [1]
               | https://twitter.com/cmuratori/status/1401761848022560771
               | 
               | [2] https://github.com/microsoft/terminal/issues/10362#is
               | suecomm...
               | 
               | [3]
               | https://twitter.com/cmuratori/status/1405356794495442945
               | 
               | [4]
               | https://twitter.com/cmuratori/status/1406755159347130371
               | 
               | [5] https://www.youtube.com/watch?v=GC-0tCy4P1U
               | 
               | [6] https://gadgetversus.com/processor/apple-m1-vs-intel-
               | core-2-...
               | 
               | [7] https://www.microsoft.com/en-
               | us/windows/windows-11-specifica...
        
               | astrange wrote:
                | > In the thirteen years since, the software has become so
                | bad that you need a processor that's anywhere from 3 to
               | 15 times more powerful to barely, just barely, do the
               | same thing [6].
               | 
                | That is not "the software" unless you count EFI; the
                | sleep process is mostly a hardware thing and controlled
                | by Intel.
        
               | dmitriid wrote:
               | As luck would have it, here's the continuation to Windows
               | Terminal. Casey Muratori made a reference terminal
               | renderer: https://github.com/cmuratori/refterm
               | 
               | This uses all the constraints that the Windows terminal
               | team cited as excuses: it uses Windows subsystems etc.
               | One person, 3k lines of code, it runs 100x the speed of
               | Windows terminal.
               | 
               | See the epic demo (and stay till the end for color
               | output): https://www.youtube.com/watch?v=hxM8QmyZXtg
        
               | PragmaticPulp wrote:
               | > Yet I can't identify anything that W10 does better (for
               | me) than W2000 that justifies its sluggishness on a C2D.
               | 
               | Core 2 Duo was introduced 15 years ago (2006). Almost a
               | decade before Windows 10 was released.
               | 
               | Windows 10, and all modern operating systems, are
               | designed around the availability of modern GPUs. If you
               | try running it on an ancient machine without modern
               | graphics acceleration and without enough CPU power to
               | handle it in software, it's going to feel sluggish.
               | 
               | Windows 10 is perfectly fine and snappy on every machine
               | I've used it on in recent history.
        
               | hashhar wrote:
               | Hardware getting 100x faster isn't a good reason to make
               | your software 100x slower so that the user-experience
               | feels the same as from 20 years ago.
        
               | speedgoose wrote:
                | Have you tried to do heavy computing on an old Windows?
               | In my experience the multi-tasking under load is so bad
               | that you can't really use the machine while it's busy.
               | 
               | And then you can also think about the security
               | improvements.
        
               | VortexDream wrote:
               | I genuinely don't think modern OS's are any better at
               | multitasking with heavy loads. Particularly Linux and
                | Windows are terrible at heavy multitasking loads, at
                | least in my experience as a software developer working
                | with both environments.
               | 
               | I also don't see why the security improvements lead to
               | such a massive decrease in performance.
        
               | zozbot234 wrote:
               | The multitasking is just as bad on _modern_ Windows, tbh.
               | Given reasonably up-to-date hardware, a lightweight Linux
               | install really can be as snappy as Windows 2000 was back
               | in the day, and that 's with a lot of security and
               | usability improvements.
        
               | speedgoose wrote:
                | My Windows 10 laptops can redraw the mouse cursor and
                | windows without much issue when they're rendering a video
                | using all the CPU cores, for example.
        
               | api wrote:
               | It spies on you and monetizes you better.
        
         | hyperpallium2 wrote:
         | noob question, but how is the performance better? I thought
         | clocks weren't increasing (due to heat), so is it that smaller
         | chips mean more per wafer, therefore cheaper and you can buy
         | more?
         | 
          | I recall a Sophie Wilson talk about how things will never get
          | faster past 28nm.
        
           | techrat wrote:
            | IPC: instructions per clock.
           | 
            | We haven't had a strict clock speed = performance ratio for
            | over a decade; it's just one component of it now.
           | 
           | https://en.wikipedia.org/wiki/Cycles_per_instruction
           | 
           | https://en.wikipedia.org/wiki/Instructions_per_second
           | 
           | Rather than just bumping clock speed, they've made
           | improvements on what can be done within the clock cycle.
           | 
           | I recently did a desktop rebuild.
           | 
           | Went from a Ryzen 7 2700X to a Ryzen 5 5600X.
           | 
           | On paper, this looks like a downgrade. After all, the 7 is
           | higher than a 5, right? I have two more cores with a 2700X...
           | 
           | However, the 5600x has about a 28% IPC gain over the 2700x in
           | single core performance despite running at the same base
           | clock speed. Literally 20%+ faster in the same tasks even
           | when taking the turbo boost out of the equation.
           | 
            | The 2700X was on a 12/14nm process while the 5600X is on
            | 7nm, which also helps with power consumption, as the 5600X
            | has a 40-watt lower TDP.
           | 
           | Since what I need is better single core performance over more
           | cores, the 5600X is quite an upgrade despite only being two
           | years newer. (A LOT has happened in 2 years with AMD) With
            | two fewer cores but significantly higher single core
            | performance, the 5600X outperforms the 2700X in both single
            | core and multicore performance.
           | 
           | Unfortunately for Intel, they did a lot of little tweaks and
            | cheats to gain performance at the expense of security, and
            | now the mitigation patches pull their chips back to the
            | Bulldozer era in terms of current performance. They have
            | also been stuck on the same node for almost a decade
            | (Broadwell, 2014), so they got none of the gains a shrink
            | brings (shorter propagation distances, less heat, and higher
            | clocks).
           | 
            | My i7-4790K is full of jank and microstuttering now; it has
            | become unusable. Imagine how the servers might be doing if
           | (and they should be) they are being kept up to date on
           | security and microcode patches.
           | 
            | Then there are also spec upgrades that come with processor
            | generations: Ryzen 2700X to 5600X also means PCI Express went
           | from 3.0 to 4.0... Not hugely significant among desktop
           | users, but substantial for servers that need that amount of
           | link bandwidth for compute cards and storage.
           | 
           | TLDR: Magic.
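            | 
            | A back-of-the-envelope sketch of the IPC x clock arithmetic
            | above (using the ~28% figure quoted above; treat the numbers
            | as illustrative, not measured):
            | 
            |     /* perf ~ IPC x clock; same base clock, higher IPC */
            |     #include <stdio.h>
            | 
            |     int main(void) {
            |         double clock_ghz = 3.7;  /* roughly the same base clock */
            |         double ipc_old   = 1.00; /* normalized 2700X (Zen+)     */
            |         double ipc_new   = 1.28; /* ~28% higher IPC (Zen 3)     */
            |         double speedup = (ipc_new * clock_ghz)
            |                        / (ipc_old * clock_ghz) - 1.0;
            |         printf("single-core gain at equal clocks: %.0f%%\n",
            |                speedup * 100.0); /* ~28% */
            |         return 0;
            |     }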
        
           | ksec wrote:
            | A lot of comments mentioned IPC, but there is something
            | obvious no one has mentioned (yet).
           | 
            | For servers it is also about core count. For the same TDP,
            | AMD offers a _64_ core option. On a dual-socket system that
            | is 128 cores. Zen 3 Milan is also socket compatible with Zen
            | 2 Rome.
           | 
            | Basically, for servers Intel has been stuck on 14nm for far
            | too long. The first 14nm Broadwell Xeon was released in
            | _2015_, and as of mid 2021 Intel has barely started rolling
            | out 10nm Xeon parts based on Ice Lake.
           | 
            | That is half a decade of stagnation. But we are now finally
            | getting 7nm server parts, with 5nm to come with Zen 4, plus
            | SRAM die stacking. I am only hoping EUV + DDR5 will bring
            | server ECC DRAM prices down as well. In a few years' time we
            | will have affordable (relatively speaking) dual-socket
            | 256-core servers with terabytes of RAM. Get a few of those
            | and be done with scaling for 90% of us. (I mean, the whole of
            | Stack Overflow is served with 9 web servers [1] on some not
            | very powerful hardware [2].)
           | 
           | [1] https://stackexchange.com/performance
           | 
           | [2] https://nickcraver.com/blog/2016/03/29/stack-overflow-
           | the-ha...
        
           | slver wrote:
           | Clockrate is not a bottleneck on how fast your computer is.
           | It's just a synchronization primitive.
           | 
           | Think about it like the tempo of a song. The entire orchestra
           | needs to play in sync with the tempo, but how many notes you
           | play relative to the tempo is still up to each player. You
           | can play multiple notes per "tempo tick".
        
             | amelius wrote:
             | You will have to use a faster internal clock to play those
             | faster notes, though.
        
               | can_count wrote:
               | The point is that you don't. A clock tick is not the
               | smallest unit of operation. It's the smallest unit of
               | synchronization. A lot of work can be done in-between
               | synchronization points.
        
           | ben-schaaf wrote:
           | Clocks aren't increasing much, but IPC still is. Density is
            | also still increasing, meaning lower latencies, lower power,
           | and therefore higher performance per area.
        
             | Causality1 wrote:
             | IPC is going up but slowly compared to previous
             | generations. If AMD can sustain the kind of generation on
             | generation increases it achieved between Zen 2 and 3 I will
             | be tremendously impressed, but as it stands my brand new
             | 5600X has barely 50% more single-core performance than the
             | seven year old 4670K I replaced.
        
               | deviledeggs wrote:
               | Even crazier, my i5 750 (from 2009) was overclocked to
                | 3.5GHz, and per core, Ryzen 3 isn't even twice as fast.
               | 
               | We're near the end of IPC scaling per core. And it was
               | never that good in the first place. Pentium 3 IPC is only
               | 3-4x worse than the fastest Ryzen. Most of our speed
               | increases came from frequency.
               | 
               | IMO we need to get off silicon substrate so we can
               | frequency scale again.
               | 
               | I wonder if the end of scaling will push everyone into
                | faster languages like Rust. You can't sit around for 2
                | years waiting for your code's performance to double
                | anymore. Will this eventually kill slow languages? I
                | think so.
        
               | michaelmrose wrote:
                | Hardware speed increases began to slow down 20 years ago,
                | and people are still using software that is 50x slower
                | than the fastest possible technology. If this alone were
                | going to kill slow languages, one would suppose it would
                | have already done so.
               | 
                | The arguably more significant shift is not about
                | hardware; it's hopefully about newer languages like Rust
                | making this performance cost less in terms of safety and
                | development time, which is a more recent development.
        
               | jbluepolarbear wrote:
                | I doubt that very much. I have an i7 4790 and a Ryzen 7
                | 3700X. In my testing, single core speed is nearly 2.5
                | times in favor of my Ryzen. What were you using as
                | benchmarks?
               | 
                | My test was a single threaded software rasterizer with
                | real-time vertex lighting, which I compiled in VS C++
                | 2008 around 2010.
        
               | speeder wrote:
                | Need to point out in particular that the best Haswell i5
                | was FASTER than the best i7 for single-core workloads, so
                | this might be a factor in the OP's post.
               | 
                | And that is also the reason why I use such a processor in
                | my own computer. It was already "outdated" when I bought
                | it, but since one of the things I like to do is play
                | simulation-style games that rely heavily on single-core
                | performance, I chose the fastest single-core CPU I could
                | find without bankrupting myself (the i5 4690K, which with
                | some squeezing can even be pushed past 4GHz; it is a
                | beastly CPU, this one).
        
               | Causality1 wrote:
               | Exactly. The only thing I used my system for that was
               | really begging for more CPU was console emulation, and
               | that depends more on single core performance than
               | anything else.
        
               | aseipp wrote:
               | Basic single-core scalar-only workloads of my own
               | corroborate the grandparent, as well as most of the other
               | benchmarks I've seen. My own 5600X is "only" about 50-60%
               | better than my old Haswell i5-4950 (from Q2 '14) on this
               | note.
               | 
               | But the scalar speed isn't everything, because you're
               | often not bounded solely by retirement in isolation (the
               | system, in aggregate, is an open one not a closed one.)
               | Fatter caches and extraordinarily improved storage
               | devices with lots of parallelism (even on a single core
               | you can fire off a ton of asynchronous I/O at the device)
               | make a huge difference here even for single-core
               | workloads, because you can actually keep the core fed
               | well enough to do work. So the cumulative improvement is
               | even better in practice.
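                | 
                | A minimal sketch of the "fire off several I/Os from one
                | core" idea, using plain POSIX AIO (io_uring would be the
                | modern route; "data.bin" is a made-up file name, and
                | real code would use aio_suspend rather than spinning):
                | 
                |     #include <aio.h>
                |     #include <errno.h>
                |     #include <fcntl.h>
                |     #include <stdio.h>
                |     #include <string.h>
                |     #include <unistd.h>
                | 
                |     #define NREQ 4
                |     #define BLK  4096
                | 
                |     int main(void) {
                |         int fd = open("data.bin", O_RDONLY);
                |         if (fd < 0) { perror("open"); return 1; }
                | 
                |         static struct aiocb cb[NREQ];
                |         static char buf[NREQ][BLK];
                | 
                |         for (int i = 0; i < NREQ; i++) {
                |             memset(&cb[i], 0, sizeof cb[i]);
                |             cb[i].aio_fildes = fd;
                |             cb[i].aio_buf    = buf[i];
                |             cb[i].aio_nbytes = BLK;
                |             cb[i].aio_offset = (off_t)i * BLK;
                |             aio_read(&cb[i]);   /* all reads now in flight */
                |         }
                |         for (int i = 0; i < NREQ; i++) {
                |             while (aio_error(&cb[i]) == EINPROGRESS)
                |                 ;               /* demo only: busy-wait    */
                |             printf("req %d: %zd bytes\n",
                |                    i, aio_return(&cb[i]));
                |         }
                |         close(fd);
                |         return 0;   /* link with -lrt on older glibc */
                |     }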
        
               | jbluepolarbear wrote:
                | Now I'm curious: is this testing against some
                | requirements for an application? Are there any
                | applications that would benefit solely from scalar
                | performance vs scalar, cache size, and memory speed?
        
               | epmaybe wrote:
                | Haha, I'm in almost the exact same boat; I replaced a
                | Haswell i5 with a 5800X. I definitely went overkill with
               | my system and still haven't gotten a GPU upgrade yet due
               | to cost/laziness.
        
               | flumpcakes wrote:
               | I have a hunch that it would be much greater than 50% if
               | the software was recompiled to take advantage of the
               | newer instruction sets. Then again, you won't be
               | comparing exact binary copies of software, which some
               | people might say invalidates it as a
               | benchmark/comparison.
               | 
                | I remember many, many years ago my PC couldn't run the
                | latest Adobe software as I was missing SSE2 instructions
                | on my then-current CPU. I believe there was some hidden
                | compatibility mode, but the software ran extremely poorly
                | vs. the version of software that was out before, which
                | didn't _require_ SSE2.
        
               | aseipp wrote:
               | There have been almost no new broad "performance-
               | oriented" instruction sets introduced since the Haswell
               | era, all the way to the Zen 3 era (e.g. SIMD/vector) for
               | compilers to target. At least not any instructions that
               | the vast majority of software is going to see magical
                | huge improvements from and are widespread enough to
               | justify it (though specific software may benefit greatly
               | from some specific tuning; pext/pdep and AVX-512 for
               | instance.)
               | 
               | The microarchitectures have improved significantly
               | though, which does matter. For instance, Haswell-era AVX2
               | implementations were significantly poorer than the modern
               | ones in, say, Tiger Lake or Zen 3. The newer ones have
               | completely different power usage and per-core performance
               | characteristics for AVX code; even if you _could_ run
                | AVX2 on older processors, it might not have actually been
                | a good idea if the cumulative slowdowns they cause impact
                | the whole system (because the chips had to downclock the
                | whole system so they wouldn't brown out).
               | So it's not just a matter of instruction sets, but their
               | individual performance characteristics.
               | 
               | And also, it is not just CPUs that have improved. If
               | anything, the biggest improvements have been in storage
               | devices across the stack, which now have significantly
               | better performance and parallelism, and the bandwidth has
                | improved too (many more PCIe lanes). I can read gigabytes
                | a second from a single NVMe drive at millions of IOPS,
                | which is vastly better than you could get 7 years ago
               | on a consumer-level budget. Modern machines do not just
               | crunch scalar code in isolation, and neither did older
               | ones; we could just arbitrage CPU cycles more often than
               | we can now in an era where a lot of the performance
               | "cliffs" have been dealt with. Isolating things to just
               | look at how fast the CPU can retire instructions is a
               | good metric for CPU designers, but it's a very incomplete
               | view when viewing the system as a whole as an application
               | developer.
        
               | AtlasBarfed wrote:
                | I'm not deep in the weeds of high-performance compilers,
               | but just because ISA evolution hasn't happened doesn't
               | mean compilers can't evolve to use the silicon better.
               | 
               | There always was an "Intel advantage" to compilers for
               | decades (admittedly Intel invested in compilers more than
               | AMD, but they also were sneaky about trying to nerf AMD
               | in compilers), but with AMD being such a clear leader for
               | so many years, I would hope at least GCC has started
               | supporting AMD flavors of compilation better.
               | 
               | Anyone know if this has happened with GCC and AMD
               | silicon? Or at least is there a better body of knowledge
               | of what GCC flags help AMD more?
        
               | aseipp wrote:
               | Yes, I consider this to fall under the umbrella of
               | general microarchitectural improvements I mentioned. GCC
               | and LLVM are regularly updated with microarchitectural
               | scheduling models to better emit code that matches the
               | underlying architecture, and have featured these for at
               | least 5-7 years; there can be a big difference between
               | say, Skylake and Zen 2, for instance, so targeting things
               | appropriately is a good idea. You can use the `-march`
               | flag for your compiler to target specific architectures,
               | for instance -march=tigerlake or -march=znver3
               | 
               | But in general I think it's a bit of a red herring for
               | the thrust of my original post; first off you always have
                | to target the benchmark to test a hypothesis; you don't
                | run them in isolation for no reason. My hypothesis when
               | I ran my own for instance was "General execution of bog
               | standard scalar code is only up by about 50-60%" and
               | using the exact same binary instructions was the baseline
               | criteria for that; it was not "Does targeting a specific
               | microarchitecture scheduling model yield specific gains."
               | If you want to test the second one, you need to run
               | another benchmark.
               | 
               | There are too many factors for any particular machine for
               | any such post to be comprehensive, as I'm sure you're
               | aware. I'm just speaking in loose generalities.
        
               | jmgao wrote:
               | > You can use the `-march` flag for your compiler to
               | target specific architectures, for instance
               | -march=tigerlake or -march=znver3
               | 
               | Note that -march will use instructions that might be
               | unavailable on other CPUs of the target. -mtune (which is
               | implied by -march) is the flag that sets the cost tables
               | used by instruction selection, cache line sizes, etc.
        
               | scns wrote:
               | march=native
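                | 
                | A tiny sketch of that distinction (compiler invocations
                | in the comments; the znver3 target needs a reasonably
                | recent GCC or Clang, and the loop is just a stand-in for
                | "code the compiler can vectorize"):
                | 
                |     /* flags.c
                |        gcc -O3 -march=znver3 flags.c  # Zen 3 ISA + tuning;
                |                                       # binary may not run
                |                                       # on older CPUs
                |        gcc -O3 -mtune=znver3 flags.c  # Zen 3 cost model,
                |                                       # baseline x86-64 ISA
                |        gcc -O3 -march=native flags.c  # target the build CPU
                |     */
                |     #include <stdio.h>
                | 
                |     #define N 1024
                | 
                |     int main(void) {
                |         static float x[N], y[N];
                |         for (int i = 0; i < N; i++) { x[i] = i; y[i] = 1.0f; }
                |         /* -march decides which SIMD width this loop may be
                |            auto-vectorized to; -mtune only schedules and
                |            costs it for the chosen microarchitecture.    */
                |         for (int i = 0; i < N; i++)
                |             y[i] += 2.0f * x[i];
                |         printf("%f\n", y[N - 1]);
                |         return 0;
                |     }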
        
           | gameswithgo wrote:
            | Instructions per clock have been going up, L3 cache sizes
            | going up, core counts going up, new wider instructions
            | becoming available (AVX2 and AVX-512), branch predictors
            | improving.
           | 
            | Overall speed increases are still nothing like the old days,
            | and Spectre-style mitigations have been eating away at the
            | improvements.
        
             | [deleted]
        
             | bahmboo wrote:
              | Yes. Without an L1 cache, a CPU is just a hyperactive kid in
              | a box. L2 is important too. If one cares about performance,
              | learn about caches.
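              | 
              | A classic illustration of the point (a rough
              | micro-benchmark sketch, not rigorous: same arithmetic, but
              | row-major walks memory sequentially while column-major
              | strides and misses the cache far more often):
              | 
              |     #include <stdio.h>
              |     #include <time.h>
              | 
              |     #define N 4096
              |     static float a[N][N];
              | 
              |     int main(void) {
              |         double s = 0;
              |         clock_t t0 = clock();
              |         for (int i = 0; i < N; i++)      /* row-major:   */
              |             for (int j = 0; j < N; j++)  /* sequential   */
              |                 s += a[i][j];
              |         clock_t t1 = clock();
              |         for (int j = 0; j < N; j++)      /* column-major:*/
              |             for (int i = 0; i < N; i++)  /* strided      */
              |                 s += a[i][j];
              |         clock_t t2 = clock();
              |         printf("rows: %.3fs  cols: %.3fs  (%g)\n",
              |                (double)(t1 - t0) / CLOCKS_PER_SEC,
              |                (double)(t2 - t1) / CLOCKS_PER_SEC, s);
              |         return 0;
              |     }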
        
           | elif wrote:
           | Less heat = more cores
        
           | mbfg wrote:
            | I think for the data center the key number is performance per
            | watt. With smaller processes, the power requirements go down.
            | As the power goes down, heat goes down, so you can push the
            | processors harder.
           | 
            | There are of course architectural improvements that improve
            | performance with the space to add gates etc., but when you
            | have 1000s of chips, the cost of energy is the big thing.
        
         | prox wrote:
          | But we can't go much smaller, can we? Totally not well versed in
         | the chip space, so what can we expect?
        
           | tyingq wrote:
           | A revisit of how to do parallelism. Hopefully more
           | successfully than Itanium and its compilers fared. The Mill
           | CPU has some ideas there as well.
        
             | matthewfcarlson wrote:
              | It was interesting working at Microsoft next to some folks
              | who were around in the Itanium days and worked on it.
              | Hearing their stories and theories was really cool. I
              | wonder if now is the time for alternative ISAs, given that
              | JIT and other technologies have gotten so good.
        
           | MayeulC wrote:
          | The usual reminder that we're not actually getting smaller.
          | This is marketing speak; transistor gates are stuck at 20ish
          | nanometers.
          | 
          | What's still increasing is transistor density, but Dennard
          | scaling is dead: we stopped decreasing voltages some time
          | ago.
          | 
          | We have more transistors, so we can make smarter chips, but
          | we can't turn them all on at the same time ("dark silicon");
          | we don't want to melt the chips.
          | 
          | Short of using other materials such as GaN, frequency won't
          | really go above 5 GHz.
           | 
          | There remain plenty of ways to improve performance though:
          | improvements to system architecture (distributed, non-von
          | Neumann, changing the ISA), compilers, etc.; adiabatic
          | computing, 3D integration, carbon nanotubes, tri-gate
          | transistors, logic in memory, "blood" (cooling + power) and
          | other microfluidic advances, modularization with chiplets.
           | 
          | The "simple" Dennard scaling is over though, and we need to
          | move beyond CMOS and von Neumann to really leverage
          | increasing density without melting away.
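          | 
          | The textbook version of why stopping voltage scaling matters
          | (a standard approximation, stated from memory): dynamic power
          | is roughly
          | 
          |     P_{dyn} \approx \alpha \, C \, V^2 \, f
          | 
          | Under classic Dennard scaling, C and V shrank along with the
          | transistor, so power density stayed flat as density doubled.
          | With V now effectively fixed, doubling the active transistors
          | at the same frequency roughly doubles power, hence dark
          | silicon.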
        
             | dfdz wrote:
             | > 3D integration
             | 
             | In case anyone missed it, AMD announced a basic "3D" design
             | last month
             | 
             | https://www.hpcwire.com/2021/06/02/amd-
             | introduces-3d-chiplet...
             | 
             | Essentially AMD "stacked a 64MB 7nm SRAM directly on top of
             | each core complex, tripling the L3 cache available to the
             | Zen 3 cores."
             | 
             | I am excited to see what comes next!
        
             | ac29 wrote:
             | > A usual reminder that we're not getting smaller. This is
             | marketting speech, transistor gates are stuck at 20ish
             | nanometers.
             | 
              | A quick perusal of WikiChip seems to suggest that this
              | isn't true. Pretty much everything is getting smaller,
              | including fin pitch, which should directly affect
              | transistor size (not an EE, certainly could be wrong
              | there). You're absolutely right that terms like "7nm" have
              | become decoupled from a specific measurement and are
              | largely marketing terms, though.
             | 
             | https://en.wikichip.org/wiki/10_nm_lithography_process
             | 
             | https://en.wikichip.org/wiki/7_nm_lithography_process
        
           | MangoCoffee wrote:
            | Advanced packaging, chiplets, 3DFabric, etc.
        
             | peheje wrote:
             | If history has taught us anything it's that technology
              | won't stop evolving. And whenever humans think surely
             | we've reached the peak of technological advancements, time
             | proves us wrong. One thing is for sure, it's going to look
             | very different from what we can imagine today.
        
               | Retric wrote:
                | Technology stagnated all the time in history. The
                | pattern is that a new approach gets perfected, resulting
                | in ever smaller gains and different tradeoffs. Look at,
                | say, rings, where personal skill plays a larger role than
                | the advancements in technology. We might be better at
                | mining gold today, but that doesn't translate into a
                | better ring.
               | 
               | Include longevity as one of the points of comparison and
                | a lot of progress looks like a step back. Cheaper and
               | doesn't last as long has been a tradeoff people have been
               | willing to make for thousands of years.
        
               | andrewjl wrote:
               | > Look at say rings where personal skill plays a larger
               | role than the advancements in technology.
               | 
                | What about advancements in metallurgy? And to use an
               | adjacent example, cold-pressed steel techniques increase
               | the number of places steel can be used. (See here
               | https://en.wikipedia.org/wiki/Cold-formed_steel#Hot-
               | rolled_v...).
               | 
                | More often than not, technology enables trade-offs to be
                | made that couldn't be made before. Making something
                | cheaper and more fragile does lose something: service
                | life, and so on. In exchange, the cheaper thing is now
                | used more
               | widely. Perhaps this more widespread use unlocks
               | something that only a few expensive yet high-quality
               | units could not. Think of smartphones and the resulting
               | network effects.
        
               | Retric wrote:
               | 18k Gold jewelry has been using 18 parts gold, 3 parts
               | copper, and 3 parts silver for a _long_ time.
               | 
               | In theory sure we could probably improve on it, but it
               | works.
        
               | oblio wrote:
                | Technology also gets as good as necessary and no
                | further, for long periods of time.
               | 
               | Babies have "colics", which are probably some kind of
               | pain that we haven't identified yet, but because they go
               | away on their own and parents are taught that they just
               | have to deal with them, we still apply a medieval
               | solution to the problem ("tough it out").
               | 
               | Rings seem to be the kind of problem where our current
               | solution is good enough.
               | 
                | I don't foresee computing power being the same. We'll
                | want more and more of it.
               | 
                | So stagnation will be due to gaps in basic research, not
                | due to lack of interest.
        
               | rvanlaar wrote:
                | I expect that computing power will be in high demand for
                | quite some time to come.
               | 
               | I do believe there will be stagnation unless a different
                | way is found, in the same way Henry Ford said people
                | wanted a faster horse, not a car.
               | 
               | And regarding travel, I would like to have faster, much
                | faster transportation. However, it hasn't come yet. There
               | seems to be a local optimum between costs and
               | practicality.
        
           | xmodem wrote:
           | We're already at the point where a transistor is a double-
           | digit number of silicon atoms. The cost of each node shrink
           | is growing at an insane rate.
           | 
           | The good times might be back for now - and don't get me
           | wrong, I'm having a blast - but don't expect them to last for
           | long. I think this is probably the last sputter of gas in the
           | tank, not a return to the good times.
        
             | gsnedders wrote:
             | As already mentioned, there's plenty of innovation
             | happening still with packaging, plus even on the IC level
             | there's all kinds of possibilities for advancement: new
             | transistor designs (to reduce power consumption or to
             | increase density by decreasing spacing), monolithic 3D ICs
             | (the vast majority of current 3D approaches are
             | manufacturing multiple wafers or dies then wiring them
             | together; if you can do it on a single wafer you can do a
             | lot more movement between layers). Plus there's always the
             | potential to move away from silicon to make transistors
             | even smaller.
             | 
             | Away from the IC level itself we're only just starting to
             | scratch the surface of optimisation algorithms for many NP-
             | hard problems that occur in IC design, like floor plan
             | arrangement.
        
             | hutrdvnj wrote:
             | But what happens if we hit the final plateau: the same
             | processor speed (+ minor improvements) for decades to
             | come?
        
               | jl6 wrote:
               | We start optimizing software, and then we start
               | optimizing requirements, and then computing is finally
               | finished, the same way spoons are finally finished.
        
               | imtringued wrote:
               | The hardware will be finished. But the food we eat (the
               | software) will keep changing.
        
               | api wrote:
               | Computing will never be finished like spoons in the
               | software realm because software is like prose. It's a
               | language where we write down our needs, wants, and
               | desires, and instructions to obtain them, and those are
               | always shifting.
               | 
               | I could definitely see standard classical computer
               | hardware becoming a commodity though.
               | 
               | There will also be room for horizontal expansion for a
               | LONG time. If costs drop through the floor then we could
               | see desktops and servers with hundreds or thousands of
               | 1nm cores.
        
               | GuB-42 wrote:
               | Are spoons really finished? I am sure plenty of people
               | are designing better/cheaper spoons today. I love looking
               | at simple, everyday objects and seeing how they evolved
               | over time, like soda cans and water bottles. Even what
               | may be the oldest tool, the knife, is constantly
               | evolving: better steels, or even ceramics, folding
               | mechanisms for pocket knives, handle materials, and of
               | course all the industrial processes that get us usable
               | $1 stainless steel knives.
               | 
               | Computers are the most complex objects man has created;
               | there is no way they are going to be finished.
        
               | agumonkey wrote:
               | you can also optimize society because every time a human
               | gets in the loop, trillions of cycles are wasted, and
               | people / software / platforms are really far from
               | efficient.
               | 
               | actually, if companies and software were designed
               | differently (with teaching too, basically an ideal
               | context), you could improve a lot of things by 10x
               | factors just by removing resistance and pain at the
               | operator level
        
               | MR4D wrote:
               | This is a really good point you make.
               | 
               | A simple example for me is how the ATM replaced the bank
               | teller, but the ATM has in turn been replaced with cards
               | with chips in them. It's a subtle but huge change when
               | magnified across society.
        
               | agumonkey wrote:
               | are chips an issue?
               | 
               | having worked in various administrations, the time /
               | energy / resources wasted on old paper-based workflows
               | is flabbergasting
               | 
               | you'd think after 50 years of mainstream computing they'd
               | have some kind of adequate infrastructure, but it's
               | really, really sad (they still have paper inboxes for
               | internal mail routing errors)
        
               | TheRealSteel wrote:
               | Transistor size is not the only factor in processing
               | speed, architecture is also important. We will still be
               | able to create specialised chips, like deep learning
               | accelerators and such.
        
               | ghaff wrote:
               | You optimize software--which means more time/money to
               | write software for a given level of functionality. More
               | co-design of hardware and software, including more use of
               | ASICs/FPGAs/etc. And stuff just doesn't get faster/better
               | as easily so upgrade cycles are longer and potentially
               | less money flows into companies creating hardware and
               | software as a result. Maybe people start upgrading their
               | phones every 10 years like they do their cars.
               | 
               | We probably have a way to go yet but the CMOS process
               | shrink curve was a pretty magical technology advancement
               | that we may not see again soon.
        
           | kasperni wrote:
           | Jim Keller believes that at least 10-20 years of shrinking is
           | possible [1].
           | 
           | [1] https://www.youtube.com/watch?v=Nb2tebYAaOA&t=1800
        
             | stingraycharles wrote:
             | And for those who don't know, Jim Keller is a legend.
             | 
             | https://en.m.wikipedia.org/wiki/Jim_Keller_(engineer)
        
             | agumonkey wrote:
             | for the last 20 years people had serious doubts about
             | breaching 7nm (whatever the figure means today), but even
             | if Keller is a semigod (pun half intended).. I'm starting
             | to be seriously dubious about 20 years of continued
             | progress.. unless he means a slow descent to 1-2nm, or
             | he's thinking sub-atomic electronics / neutronics /
             | spintronics (in which case good on him).
        
               | Nokinside wrote:
               | Jim Keller is a legend in microarchitecture design, not
               | in process technology. All his arguments seem to be just
               | extrapolating from the past.
               | 
               | Process engineers & material scientists seem more
               | cautious. I'm sure shrinking goes on, but the gains get
               | smaller with each generation.
               | 
               | TSMC's 3nm process is something like 250 MTr/mm2, with a
               | single digit performance increase and a 15-30% power
               | efficiency increase compared to the older process.
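               | 
               | As a rough back-of-the-envelope sketch of what that
               | density figure implies (Python; the ~250 MTr/mm2
               | number is just the estimate above, and the 100 mm2
               | die area is a made-up example, not any real chip):
               | 
               |   # mtr_per_mm2: millions of transistors per mm2
               |   # area_mm2: hypothetical example die size (made up)
               |   def transistor_budget(mtr_per_mm2, area_mm2):
               |       # convert MTr (millions of transistors) to a count
               |       return mtr_per_mm2 * 1e6 * area_mm2
               |   # e.g. a hypothetical 100 mm2 die at ~250 MTr/mm2
               |   print(transistor_budget(250, 100))  # ~25 billion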
        
               | tyingq wrote:
               | It does, though, reduce heat, right? Which ultimately is
               | more cores per socket. Which hits the thing that actually
               | matters...price/performance.
        
               | Nokinside wrote:
               | Yes. But that's a huge decline compared to even recent
               | past.
               | 
               | Performance increases from generation to generation used
               | to be much faster. TSMC's N16 to N7 was still doubling or
               | almost doubling performance and price/performance over
               | the long term. N5 to N3 is just barely single digits.
               | 
               | Every fab generation is more expensive than the last.
               | Soon every GIGAFAB will cost $30 billion while technology
               | risk increases.
        
               | Robotbeat wrote:
               | That's true, but because Moore's Law has slowed, you'll
               | be able to amortize that $30 billion over a longer time.
        
               | ac29 wrote:
               | > because Moore's Law has slowed
               | 
               | Not sure that is really true based on the data. Remember,
               | Moore's law says the number of transistors in an IC
               | doubles every two years, which doesn't necessarily mean a
               | doubling of performance. For a while in the '90s,
               | performance was also doubling every two years, but that
               | was largely due to frequency scaling.
               | 
               | https://upload.wikimedia.org/wikipedia/commons/0/00/Moore
               | %27...
        
               | Robotbeat wrote:
               | To be precise, Moore's Law says the number of transistors
               | per unit cost doubles (every two years).
               | https://newsroom.intel.com/wp-
               | content/uploads/sites/11/2018/...
               | 
               | A lot of the new processes have not had the same cost
               | reductions. Also, some of the increase in transistor
               | count is due to physically larger chips. And the "Epyc
               | Rome" on that graph actually isn't a single chip but
               | uses chiplets.
        
               | analognoise wrote:
               | Yeah and after you have a working $30B fab, how many
               | people are going to follow you to build one?
               | 
               | The first one built will get cheaper to run every year -
               | it will pay for itself by the time a second company even
               | tries to compete. The first person to the "final" node
               | will have a natural, insurmountable monopoly.
               | 
               | You could extract rent basically forever after that
               | point.
        
               | labawi wrote:
               | That's only true if the supply satisfies demand.
        
               | fshbbdssbbgdd wrote:
               | I thought the drivers of cost are lots of design work,
               | patents, trade secrets, etc. involved with each process.
               | If there's a "final" node, those costs should decrease
               | over time and the process should eventually become more
               | of a commodity.
        
               | prox wrote:
               | The video that was posted goes into that (30min mark) and
               | seems to reflect what you are saying.
        
               | agumonkey wrote:
               | he might know something about the material science
               | behind things, but that said, I'd like to hear from
               | actual semi/physics researchers on the matter
        
               | rorykoehler wrote:
               | Check out the Lex Fridman Jim Keller podcast on YouTube
        
               | dkersten wrote:
               | Since the "nm" numbers are just marketing anyway, I think
               | they don't mean much with regard to how small we can go.
               | We can keep shrinking until the actual smallest feature
               | size hits physical limits, which is so decoupled from the
               | nm number that we can't possibly tell how close "7nm" is
               | (well, we can; there's a cool YouTube video showing the
               | transistors and measuring feature size with a scanning
               | electron microscope, but we can't tell just from the
               | naming/marketing).
        
             | prox wrote:
             | That was a nice watch! Thanks!
        
             | mirker wrote:
             | On the same podcast you can find David Patterson (known for
             | writing some widely used computer architecture books), who
             | disputes this claim.
             | 
             | https://www.youtube.com/watch?v=naed4C4hfAg
             | 
             | At 1:20:00
        
               | someperson wrote:
               | David Patterson is not disputing that there are decades
               | left of transistor shrinking; he's just saying that the
               | statement "transistor count doubling every 2 years"
               | doesn't hold up empirically.
               | 
               | David Patterson is saying he considers Moore's Law dead
               | because the current state of, say, "transistor count
               | doubling every _three_ years" doesn't match Moore's
               | Law's _exact_ statement.
               | 
               | In other words, he is simply being very pedantic about
               | his definition. I can see where he's coming from with
               | that argument.
        
               | zsmi wrote:
               | It's more than that, though; it's important to remember
               | why Moore made his law in the first place.
               | 
               | The rough organizational structure of a VLSI team that
               | makes CPUs is the following pipeline:
               | 
               | architecture team -> team that designs the circuits which
               | implement the architecture -> team that manufactures the
               | circuits
               | 
               | The law was a message to the architecture team that by
               | the time your architecture gets to manufacturing, you
               | should expect to have ~2x the number of transistors you
               | have today, and that should influence your decisions
               | when making trade-offs.
               | 
               | And that held for a long time. But if you're in a CPU
               | architecture team today and you operate that way, you
               | will likely be disappointed when it comes to
               | manufacturing. Therefore one should consider Moore's law
               | dead when architecting CPUs.
        
               | mirker wrote:
               | I don't think it's irrelevant to look at the changing
               | timescale. If the law has broken down to 3 years, there
               | isn't any reason it won't be 4, 5, or some other N years
               | in the future.
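               | 
               | A minimal sketch of how much that timescale matters
               | (Python; the starting count and the 10-year horizon
               | are made-up illustrative numbers, not a prediction):
               | 
               |   # count doubles every `period` years, Moore's-law style
               |   def projected(start, years, period):
               |       return start * 2 ** (years / period)
               |   start = 10e9  # hypothetical 10B-transistor chip today
               |   for period in (2, 3, 4):
               |       print(period, projected(start, 10, period))
               |   # 2-year cadence: ~320B; 3-year: ~100B; 4-year: ~57B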
        
               | vlovich123 wrote:
               | Every 2 years
        
               | Robotbeat wrote:
               | Right. But it is no longer 2 years so it's not Moore's
               | Law any more.
        
       | dilawar wrote:
       | How do these nm-based metrics relate to transistor density or
       | flops per watt?
        
         | bullen wrote:
         | I think we're going to be stuck at ~2 Gflops/W for CPUs.
         | 
         | If we ever see 3+ on cheap, low-power consumer hardware I'll
         | eat my shorts.
        
           | tyingq wrote:
           | Any thoughts on MicroMagic?
           | https://arstechnica.com/gadgets/2020/12/new-risc-v-cpu-
           | claim...
        
             | bullen wrote:
             | Ok, I should have added: with enough total power to do
             | some meaningful work...
        
       | jnjj33j3j wrote:
       | Meanwhile, zero innovation in Europe... No wonder, considering
       | that even in Germany (the country with the highest SW engineer
       | salaries in Europe), an average developer will only make 2500
       | per month after tax... For that money, you will only be able to
       | buy bread and butter, nothing else.
        
         | RealityVoid wrote:
         | Last time I checked, ASML is a European company. Regardless,
         | things are... different in the EU and there are plenty of
         | complaints regarding innovation, but this tired talking point
         | about EU stagnation lacks nuance and is not true.
        
         | cma wrote:
         | At this point ASML has a higher market cap than Intel.
        
         | akvadrako wrote:
         | The machines that power TSMC are exclusively from Europe.
         | 
         | Also the average programmer salaries might be low, but there is
         | plenty of full-time freelance work in Europe with competitive
         | rates; a decent gig in NL is around EUR100/hour.
        
         | moooo99 wrote:
         | > For that money, you will only be able to buy bread and
         | butter, nothing else.
         | 
         | Maybe you're trolling or maybe you're just actually clueless,
         | but when looking at how much people can afford, it's not
         | sufficient to simply convert euro amounts to dollars
        
       | swiley wrote:
       | We reached the point a long time ago where the freedom of the
       | platform is far more important than performance for all but the
       | most demanding tasks (really just video editing at this point).
        
         | ghaff wrote:
         | >performance for all but the most demanding tasks
         | 
         |  _Client-side_ performance is increasingly irrelevant except
         | for, as you say, video editing and a vanishingly small slice of
         | gaming (and that is at least as much about GPUs anyway). But
         | then, clients outside of phones are also increasingly
         | commodities anyway.
         | 
         | However, performance on servers remains incredibly important.
         | Of course, you can just throw more hardware at the problem
         | which increases costs but is otherwise perfectly doable for
         | most applications.
        
           | zozbot234 wrote:
           | > Client-side performance is increasingly irrelevant except
           | for, as you say, video editing and a vanishingly small slice
           | of gaming
           | 
           | I really don't see it that way. Web browsing used to be
           | considered a very light task that almost any hardware could
           | handle, but the performance demands have been steadily
           | climbing for quite some time.
        
             | ghaff wrote:
             | Performance and memory requirements have doubtless gone up
             | but I use a five year old MacBook Pro and it's perfectly
             | fine for browsing. Performance isn't really irrelevant of
             | course but browsing generally doesn't push anywhere close
             | to the limits of available processors.
        
       | veselin wrote:
       | I was wondering why AMD still didn't get access to the top
       | notch process at TSMC. They could have some of the best margins
       | on their products, just because they ended up being the best
       | now, and they certainly have the volume already. They didn't
       | even start addressing notebooks or the lower ends of the market
       | with Zen 3.
       | 
       | Zen 3 is probably the fastest CPU on the market, yet it is not
       | even scaled down to N5 or N6, which could get power consumption
       | down to the best we have seen for the notebook market.
       | 
       | Instead the rumors are that in 2022 the improvements will go
       | mostly into packaging extra cache. And now the next processes
       | are already booked by others.
        
         | selectodude wrote:
         | Because AMD doesn't have any money. Intel can fire off a 1
         | billion dollar wire transfer tomorrow. That would wipe out half
         | of AMD's cash reserves. Money talks.
        
         | colinmhayes wrote:
         | Intel has 20 billion in profit a year. AMD's revenue is half
         | that. AMD can't compete with Apple's and Intel's cash flow.
        
       | nly wrote:
       | One has to wonder how long it will be before Apple builds its
       | own fabs
        
         | narrator wrote:
         | The amount of centralization of fab tech is astonishing. You've
         | got all the sub 7nm fab tech that the whole world depends on
         | for the next generation of technology coming out of a few fab
         | plants in Taiwan. Software is easy. Material science is hard.
         | There is one company that makes the machines these fabs use.
         | They cost several hundred million dollars each, have a
         | multi-year backlog, and only TSMC has really mastered
         | integrating them into a manufacturing process that can scale.
         | 
         | Thankfully, things are getting more diversified. TSMC just
         | started construction of one of several new plants in Arizona
         | last month: https://www.phonearena.com/news/construction-of-
         | tsmc-5nm-fab...
        
         | eyesee wrote:
         | I wouldn't count on it. Apple is vertically integrated, but
         | doesn't do its own manufacturing. It's not necessary when they
         | can dominate the supply chain and hold a practical monopsony
         | on the best production capacity anywhere. This is why everyone
         | else was lagging in producing 5nm parts -- Apple simply bought
         | 100% of TSMC's capacity.
        
       | cwizou wrote:
       | Can't help feeling it's a bit of a diversion to make everyone
       | forget the just-announced delays of their 10nm server chips.
       | 
       | The original source, Nikkei, points to their own sources that
       | claim that both Apple and Intel are currently doing early
       | testing on 3nm. The article doesn't imply they are the only
       | ones, and according to Daniel Nenni, AMD got the PDK too.
       | 
       | Then it's about who's ready first and who bought the most
       | allocation. Rumours point to Intel buying a lot of it, although
       | when that allocation comes is a bit unclear. Nobody should
       | expect anyone but Apple to have first dibs at anything at TSMC.
       | 
       | Plus, it is Intel's first use of a bleeding-edge PDK from TSMC,
       | which is dramatically different from the older nodes they have
       | used so far (EUV and all). But it's been a long-standing
       | commitment (pre-Gelsinger) from them to outsource their
       | high-perf desktop and server chips for this node.
       | 
       | Their volume needs for this are high (not quite iPhone high,
       | but still far above non-Apple customers) and it will be
       | interesting to see what, and how soon, they can launch in 2023.
       | 
       | I would expect Intel's lineup, especially the articulation
       | around their so-called HEDT (High End Desktop) segment, to be
       | shaken up a bit.
       | 
       | So would their margins.
       | 
       | https://www.intel.com/content/www/us/en/newsroom/opinion/upd...
       | 
       | https://asia.nikkei.com/Business/Tech/Semiconductors/Apple-a...
       | 
       | https://semiwiki.com/forum/index.php?threads/apple-and-intel...
        
         | ac29 wrote:
         | Intel is already shipping 10nm server chips. See, for example:
         | https://www.anandtech.com/show/16594/intel-3rd-gen-xeon-scal...
         | 
         | The article you linked is about their next generation server
         | chip (which is also 10nm).
        
       | sschueller wrote:
       | From what I have gathered, the smallest trace spacing is 3nm,
       | but not all traces are that small. So it would be interesting to
       | know what percentage is, and what the progress is in getting all
       | or most to 3nm.
        
         | rob74 wrote:
         | I'm not an expert, but from what I have been reading these
         | "node names" have been pretty much decoupled from actual trace
         | dimensions for a few years already. So I wouldn't be surprised
         | to find out that most of the structures are in fact bigger than
         | 3nm.
         | 
         | To quote Wikipedia (on the 5nm process!):
         | 
         |  _The term "5 nanometer" has no relation to any actual physical
         | feature (such as gate length, metal pitch or gate pitch) of the
         | transistors. It is a commercial or marketing term used by the
         | chip fabrication industry to refer to a new, improved
         | generation of silicon semiconductor chips in terms of increased
         | transistor density, increased speed and reduced power
         | consumption._
        
           | intrasight wrote:
           | My understanding is that it's more like the marks on a ruler.
           | And the process shrinks the whole ruler. So it does
           | ultimately relate to the feature size.
        
       ___________________________________________________________________
       (page generated 2021-07-03 23:00 UTC)