[HN Gopher] AMD and GlobalFoundries Update: Orders Through 2024,...
___________________________________________________________________
AMD and GlobalFoundries Update: Orders Through 2024, Now Non-
Exclusive
Author : rbanffy
Score : 139 points
Date : 2021-05-17 12:09 UTC (10 hours ago)
(HTM) web link (www.anandtech.com)
(TXT) w3m dump (www.anandtech.com)
| woliveirajr wrote:
| AMD could use that capacity to re-launch older GPUs at a lower
| price and make gamers happy. Low-cost GPUs (even with higher
| wattage) might be marketable again.
| mastax wrote:
| MooresLawIsDead suggested this as well, and I think it's a good
| idea, especially in the current climate, though it's hard to
| react to market conditions when it takes >12 months to launch a
| product, even an iterative one. Take the 5600XT design and
| backport it to 12nm. You get something that performs pretty
| well at 1080p that you can sell for really cheap for a long
| time. Finally a replacement for the RX480/RX580.
|
| One of the strange things about AMD's new GPU push is that their
| GPUs are less profitable in a supply-constrained environment
| than their CPUs/APUs. Ian Cutress recently talked about this (I
| think on YouTube @TechTechPotato). Even their highest-end GPU
| dies, selling to consumers at a price point higher than they
| planned for, are much less profitable per wafer than a midrange
| APU. If there were no supply constraint they could just make as
| many of both as they can sell, but right now they're just
| trading off profits against GPU market growth.
| mrweasel wrote:
| Can you "just" take the 5600XT and switch it to 12nm? Won't
| it require more power and cooling?
| mastax wrote:
| You can't "just" do it, but it is possible as shown by
| Intel Rocket Lake.
| profile53 wrote:
| Sadly no, because everything is closely tied to the node.
| For example, it might take 0.3ns for a signal to physically
| travel from one edge of the chip to the other on a 7nm
| chip. On 12nm, it might take 0.5ns. The chip was designed
| to compensate for a 0.3ns delay, not 0.5, so now the cache
| is sending data to the core at the wrong time, leading to
| corruption and garbage data being processed. This, plus a
| million other things in both manufacturing and design, has to
| change to build a chip on a different node
| (manufacturing tolerances, clock speed, metallurgy,
| patterning issues, etc.).
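|
| A toy illustration of that timing budget (all numbers here are
| hypothetical, just to show the failure mode):
|
|     # A path closed against a 0.3ns wire delay blows its cycle
|     # budget when the same wire takes 0.5ns on the older node.
|     clock_ghz = 2.0
|     period_ns = 1.0 / clock_ghz       # 0.5ns cycle budget
|     logic_delay_ns = 0.15             # hypothetical gate delay in the path
|     for wire_delay_ns in (0.3, 0.5):  # 7nm vs 12nm cross-chip delay
|         slack = period_ns - (logic_delay_ns + wire_delay_ns)
|         ok = "meets timing" if slack >= 0 else "violates timing"
|         print(f"wire delay {wire_delay_ns}ns: slack {slack:+.2f}ns, {ok}")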
| ArkanExplorer wrote:
| By the time you actually get those products on the shelves,
| Ethereum will either be on Proof of Stake, in an ice age, or
| less valuable than now.
|
| And thus the GPU shortage will end.
|
| In 1-2 years we might see 5nm GPUs, and so the gulf between a
| 14nm GPU and the latest GPU (even if you can't find them on
| shelves) will be even greater.
|
| It's more realistic for GlobalFoundries to consider opening a
| new factory with 10nm/7nm, especially if they can pick up
| established knowledge and hardware from other players.
| rini17 wrote:
| I doubt it's so simple, there are many other profitable coins
| to mine.
| jkilpatr wrote:
| Proof of stake Ethereum is very similar to 5G: yes, there is
| real technology; yes, it's 'working' in some capacity now. But
| you really shouldn't plan on it reaching you any time soon.
|
| The ETH2 'beacon chain' is a meta-chain that's supposed to
| checkpoint a bunch of 'sub chains' for scalability. They have
| launched this beacon chain. But it doesn't do anything other
| than mint ETH2 tokens and operate their proof of stake
| consensus system.
|
| AKA it's another blockchain, different from ETH1, and it
| doesn't even move transactions yet. It's got a novel PoS
| system to its name; that's not nothing, but that is all it
| has at the moment.
|
| What does a 'sub chain' actually look like to interact with?
| How do you coordinate with many of them? All these questions
| have answers in theory, but not answers in solid production
| ready interoperable code.
|
| 'Proof of stake Ethereum' only happens if these questions are
| figured out and _then_ ETH1 is somehow brought under the
| beacon chain's governance, which is a whole different kettle
| of fish than a greenfield subchain. Exactly how this is to
| happen... well, I haven't even seen anything credible on this.
| As far as I can tell it's not even seriously being worked on
| yet.
|
| So we may, if things go well, have another blockchain calling
| itself ETH2 that is proof of stake by the end of this year.
| But that won't cause ETH1 to stop wasting electricity or
| gracefully move its economic activity to a proof of stake
| system.
|
| Moving billions of dollars of economic activity out of the
| hands of miners (who have every incentive to screw it up) and
| into the hands of a proof of stake validator set while at the
| same time not significantly disrupting said system or opening
| up new vulnerabilities is not trivial.
|
| tl;dr Proof of stake Ethereum is possible, but there
| definitely isn't a rush order on it. I would bet on Ethereum
| miners still being around 2-3 years from now.
| olouv wrote:
| This is not accurate. The merge is definitely being
| prioritized, and the current target is Q4'21 / Q1'22.
| Developers have chosen to perform an early merge by using
| the ETH2 consensus layer to validate ETH1 blocks; it's a
| pretty straightforward process where both ETH1 and ETH2
| clients co-exist with minimal changes. There is already a
| running testnet. More info:
| https://blog.ethereum.org/2021/01/20/the-state-of-
| eth2-janua...
| jensvdh wrote:
| All the more reason for environmental regulation of crypto. We
| can't keep destroying the environment just because "miners"
| want to make a profit.
| ClumsyPilot wrote:
| I have a long laundry list of things that are destroying
| the environment just to make a profit.
| lvs wrote:
| OK, so add it to your list then.
| Shorel wrote:
| My 2014 Radeon HD 6850 was recently sold for a nice profit.
|
| I don't think everyone in the world goes for only the latest
| and greatest, or even has the budget for a card more
| expensive than $200 USD.
|
| I am a member of a simracing league with dozens of people in
| South America, and only 1/4 of them have machines capable of
| running games launched in the last three years.
|
| Only one guy in the group has a GeForce RTX 3080. One, out of
| more than a hundred.
|
| Everyone else in the group will definitely welcome an
| affordable mid-range card. Including me.
| screye wrote:
| Computer gaming moves in console generations.
|
| The PS5/new Xbox are targeting a ~5-year generation. This
| means that games will mostly stay playable for 5 more years
| as long as you get a top-of-the-line 14nm GPU.
|
| Remember that graphics stopped being a bottleneck for
| games a few years ago. The top played games (Fortnite,
| Apex, LoL, Dota, FIFA and the like) are perfectly playable on
| mediocre graphics.
| jfrunyon wrote:
| The top played games have pretty much always been playable
| on mediocre hardware. Otherwise they wouldn't be top
| played.
| nradov wrote:
| Nethack also runs pretty well on old GPUs. I consistently
| get >60fps even on integrated graphics.
| dcow wrote:
| Are you sure your terminal emulator is rendering that
| quickly? d= e.g. iTerm2 caps at 30fps unless you change
| the default.
| stainforth wrote:
| The more important question is: does it register key
| release events?
| dragontamer wrote:
| Lol. Nethack is designed for the Unix console. All
| commands are key-down only.
|
| But you had me going for a second. Nethack is also turn-
| based, so there's no real-time element at all. If you
| come across a difficult turn, it's a common "strategy" to
| walk away from the computer, relax a bit, and then look
| at the screen with a different mindset. Maybe you can
| think of a better solution if you give yourself time.
| pandaman wrote:
| PS5/new Xbox are 7nm though. 14nm was the PS4
| shrink/Scorpio.
| simias wrote:
| At this point I really wouldn't dare make any projections
| about the future of cryptocurrencies. You can't rationalize
| irrationality. Let me remind you that Dogecoin, a literal
| joke fork of Litecoin that traded for a fraction of a cent
| mere months ago, is currently the fifth biggest
| cryptocurrency by market cap and is being pumped by Elon
| Musk for some reason.
|
| I have literally no idea what that space is going to look
| like one week from now.
| lobocinza wrote:
| Same reason as any pump.
| ClumsyPilot wrote:
| I see it as no worse than GameStop or synthetic CDOs and
| some of the other stuff happening on Wall Street.
| sbierwagen wrote:
| Why would AMD want to make _less_ money?
| babypuncher wrote:
| How would selling even more GPUs make them even less money?
|
| Right now their problem is that they literally cannot make RX
| 6000 series cards fast enough. Adding more GPU capacity in the
| form of reissued 14nm chips to fill some of the gap in demand
| would not result in fewer RX 6000 cards sold. People just
| want something, anything they can buy today to hold them over
| until they can get their hands on something good.
| rozab wrote:
| There are basically no cards available on the consumer
| market. Nobody is making money in this space.
|
| I was looking this week for a replacement for the GPU I
| bought in 2014 for £80 (R7 260X). The only sub-£600 card
| available on _any_ site when I checked was the GeForce GT
| 1030, which would give me slightly worse performance for
| £90. The situation is unbelievably dire.
| greggyb wrote:
| Rather, everyone is making money, but the OEMs could make
| more if they'd be willing to raise MSRP. It's a shortage of
| supply, not a stoppage.
| woliveirajr wrote:
| > and AMD in turn is required to pay for these wafers,
| whether they use this capacity or not.
|
| > AMD expects to buy approximately $1.6 billion [≈ box office
| sales of Bambi, 1942] in wafers from GlobalFoundries in the
| 2022 to 2024 period.
|
| They'll pay for it anyway: $1.6B. I'm pretty sure that
| there's a way to spend $1.6B making GPUs and selling them in
| the long-tail consumer segment without biting into your own
| sales, and to create the desire, in those users, to upgrade
| to a better GPU in the future.
|
| Or don't even make money on it; just ensure that your
| competitors will have to lose money too.
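|
| Back-of-the-envelope, with an entirely hypothetical wafer price
| and yield (die size is roughly Polaris 10 / RX 580 class):
|
|     import math
|
|     budget_usd = 1.6e9
|     wafer_price_usd = 4000   # hypothetical 12/14nm wafer price
|     wafer_diameter_mm = 300
|     die_area_mm2 = 232       # ~Polaris 10 (RX 580) die size
|     yield_rate = 0.85        # hypothetical
|
|     # Standard gross-dies-per-wafer approximation.
|     radius = wafer_diameter_mm / 2
|     gross = (math.pi * radius**2 / die_area_mm2
|              - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))
|     wafers = budget_usd / wafer_price_usd
|     gpus = wafers * gross * yield_rate
|     print(f"~{wafers:,.0f} wafers -> ~{gpus / 1e6:.0f}M good dies")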
| 55873445216111 wrote:
| Engineering resources are limited. It takes significant effort
| to transfer and verify a design for a new process. Why pull
| engineers off of next-gen products to re-tape-out an old
| product on an old node? I doubt it would have ROI.
| mrweasel wrote:
| How old would those GPUs be? It might be great for those of us
| who just need a reasonable GPU for older games and general
| desktop use.
| MangoCoffee wrote:
| AMD APU or Intel Xe for older games?
| dragontamer wrote:
| The 75W GPU market is completely stalled at the RX 560 and
| NVidia 1050 Ti, IIRC. People still buy those because:
|
| 1. 75W is the maximum the PCIe x16 slot can deliver without
| any additional power connectors (6-pin or 8-pin); see the
| sketch after this list.
|
| 2. 75W is too little to really make a "decent" GPU for modern
| AAA games. But still a huge step up from integrated graphics.
| Playing 5-year-old or 10-year-old games on lower settings is
| the only real possibility.
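|
| For reference, the PCIe power budgets work out like this (75W
| from the x16 slot, 75W per 6-pin, 150W per 8-pin):
|
|     SLOT_W = 75  # PCIe x16 slot limit, no extra connectors
|     CONNECTORS_W = {"none": 0, "6-pin": 75, "8-pin": 150,
|                     "8-pin + 6-pin": 225, "2x 8-pin": 300}
|     for conn, extra in CONNECTORS_W.items():
|         print(f"{conn:>13}: up to {SLOT_W + extra} W board power")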
| my123 wrote:
| Better 75W GPUs are possible, but they would be quite
| expensive on a perf per dollar basis.
|
| An RTX 3070 Laptop GPU on a PCIe card would be quite nice,
| but the market for that isn't big.
| SXX wrote:
| A laptop 3070 would be sold out immediately. It's too good
| at mining, and sometimes better than the desktop part.
| Macha wrote:
| AMD's last 12nm GPUs were the Radeon VII, Vega 56 and Vega 64,
| which were all pretty much flops. The RX 580, their preceding
| model, did quite well for a few years as a last-gen budget
| model.
|
| To my understanding, the Radeon VII and Vega were expensive
| chips to produce. All of them used HBM, which contributed to
| this, and it's not clear how easy it would be to make a non-
| HBM variant. Would customers accept them if they needed the
| same MSRP as the 6xxx GPUs?
|
| So if they were to relaunch an old GPU, I think the RX
| 580/590 at $200 would do. In this climate, $200 for a 1080p
| medium GPU would probably sell.
| dragontamer wrote:
| Radeon VII was 7nm.
|
| Re-releasing the Vega cards would be excellent for ROCm
| programmers. AMD needs to do something to get beginner
| programmers onto their system cheaply.
|
| Radeon VII / Radeon VII Pro / MI25 seem like the cheapest
| options for cards still in production. Vega 56 / 64 still
| work if you can find a used card on eBay.
| jedbrown wrote:
| Radeon VII has comparable memory bandwidth to an RTX 3090
| (or V100) and 7x the double precision performance of RTX
| 3090 for a quarter the cost. It has almost double the
| memory bandwidth of RX 6800 and 3x the double precision
| performance at half the price. It would be a compelling
| card today if it hadn't been EOL'd almost two years ago.
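|
| Roughly, using launch MSRPs and approximate spec-sheet numbers
| (treat these as ballpark figures):
|
|     # card: (launch USD, FP64 TFLOPS, memory bandwidth GB/s)
|     cards = {"Radeon VII": (699, 3.4, 1024),
|              "RTX 3090": (1499, 0.56, 936),
|              "RX 6800": (579, 1.0, 512)}
|     for name, (usd, fp64, bw) in cards.items():
|         print(f"{name:>10}: {fp64 / usd * 1000:.1f} FP64 GFLOPS/$, "
|               f"{bw / usd:.2f} (GB/s)/$")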
| dragontamer wrote:
| Radeon VII isn't really EOL'd. It's now called the Radeon
| VII Pro and sells for $2000. Or the MI50 (EDIT: the MI25 is
| Vega 64, IIRC).
|
| As long as AMD is selling those cards out at the higher
| price, there's no reason for the $700 price point to come
| back...
| my123 wrote:
| It was a stopgap product, not really profitable, meant to
| reduce the gaming performance gap between NV and AMD. Its
| marketing wasn't targeted much towards compute either.
|
| And as soon as they had cards with better gaming perf out,
| it outlived its usefulness in the AMD product matrix.
|
| Nvidia has the Titan V as a prosumer card with HBM2,
| 1/2-rate FP64 and all, for $3k, but that seems to be out of
| stock nowadays too...
| jedbrown wrote:
| Agreed that it was EOL'd because it was priced too low,
| though the Pro also lifted the double precision throttle
| (3.5 TF to 6.5 TF) and added Infinity Fabric.
| dragontamer wrote:
| A 7nm design uses less area and is probably just going to get
| cheaper over time. A smaller 7nm RDNA2 design might be easier
| to make than backporting RDNA2 to 14nm.
|
| I think an older GPU might still be useful, but from a
| cost/benefit perspective, it seems to make more sense to just
| make a smaller chip on the latest process (especially if your
| latest technology, like RDNA2, already has lots of data on the
| 7nm node).
| vzidex wrote:
| I think an older GPU would definitely be useful. Used GTX
| 1080 cards (I picked one up for C$375 back in September) are
| listed for C$750-850 where I live. I'm not as familiar with
| AMD's products, but I think 1-2 generation old GPUs would be
| in high demand until current-gen ones are more available.
| icegreentea2 wrote:
| 7nm capacity is likely to be tapped out for the next couple of
| years. The OP's suggestion (I believe) was literally just to
| keep making older 12nm cards (no backporting) to supplement
| current supply.
|
| Normally this would... not really work well. But in the
| current environment, it'd probably work just fine (just look
| at how much GT 1030s go on sale for...)
| devwastaken wrote:
| GPUs significantly increase performance between generations.
| My 1060 is already effectively outdated. The cost of production
| isn't lower than it was originally, and $200 for a 4-year-old
| GPU is far too much. Modern titles are going to require 6-8GB
| of VRAM at a minimum.
| tpxl wrote:
| Used 1080 Tis are going for €600+, so new ones at €400 would
| probably sell like hot cakes.
| ZeroCool2u wrote:
| Perhaps this will help auto-manufacturers in the U.S. by freeing
| up some supply that was previously committed to AMD?
| readams wrote:
| Note: "Now with that said, the net impact of this change is
| likely to be limited as AMD was already free to pursue other fabs
| for 7nm and smaller nodes - which will be the vast majority of
| AMD's needs over the next three years."
| [deleted]
| ramshanker wrote:
| What node is AMD's IO die on nowadays? It always felt like
| they were keeping the IO die one node behind to honor the GF
| agreement. Even with this, they are able to offer 128 PCIe
| lanes and better memory frequency. So when they are able to
| bring the IO die to a leading node as well, it is going to be
| awesome.
| exmadscientist wrote:
| Not as awesome as you think. The IOD is on the older process
| because it benefits comparatively little from transistor
| shrinks. (Logic-level IO transistors have to be physically
| large, with thicker gate oxides.) But it will help some.
| monocasa wrote:
| It might not help at all from a density perspective. The IO
| die has a lot of pads along the edges that can't be shrunk,
| which puts a lower limit on die area. I wouldn't be surprised
| if the logic areas in the middle already have a lot of blank
| space at 14nm. You used to see the same thing with north
| bridge chips, and that's basically what the IO die is anyway.
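|
| A toy pad-limited sizing check, with hypothetical pad count and
| pitch, showing why shrinking the logic may not shrink the die:
|
|     n_pads = 800          # hypothetical perimeter pad count
|     pad_pitch_mm = 0.06   # hypothetical 60um pad pitch
|     min_edge_mm = n_pads * pad_pitch_mm / 4   # square die
|     pad_limited_mm2 = min_edge_mm ** 2        # 144 mm^2 floor
|     for node, logic_mm2 in [("14nm", 100.0), ("7nm", 50.0)]:
|         die = max(logic_mm2, pad_limited_mm2)
|         tag = " (pad-limited)" if die == pad_limited_mm2 else ""
|         print(f"{node}: logic {logic_mm2:.0f} mm^2 "
|               f"-> die {die:.0f} mm^2{tag}")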
| MrFoof wrote:
| The AMD IO die is using the GloFo 14nm process.
| eatonphil wrote:
| What fab(s) does AMD use instead? I skimmed the article but
| didn't notice anything about this.
| MangoCoffee wrote:
| TSMC. AMD and Apple are TSMC's core customers.
| PartiallyTyped wrote:
| Also NVDA, and by proxy Sony and Microsoft, as they use AMD
| CPUs/APUs.
| Narishma wrote:
| I think Sony and Microsoft are AMD's customers, not TSMC's.
| They buy their chips from AMD. I don't think they care much
| where they are manufactured.
| DeRock wrote:
| Mostly TSMC, https://www.tomshardware.com/news/amd-tsmc-second-
| largest-cu...
___________________________________________________________________
(page generated 2021-05-17 23:01 UTC)