[HN Gopher] AMD Fires Back with 7 New Chips
___________________________________________________________________
AMD Fires Back with 7 New Chips
Author : ItsTotallyOn
Score : 144 points
Date : 2022-03-15 13:39 UTC (9 hours ago)
(HTM) web link (www.tomshardware.com)
(TXT) w3m dump (www.tomshardware.com)
| noselasd wrote:
| How can we get these vendors to do versioning of their products
| that we can relate to, or at least somewhat grasp?
| sliken wrote:
| Heh, I think the best we can do is embarrass them and complain
| when they do something stupid and confusing like some of the
| Ryzen 5XXX series being Zen 2 while others are Zen 3 cores. Not
| that Nvidia and Intel haven't done the same thing.
|
| Using http://ark.intel.com has been very useful for figuring
| out the details of Intel CPUs. I haven't found anything as
| useful for AMD or Nvidia.
| Sohcahtoa82 wrote:
| Intel and AMD use a similar naming scheme, but it gets a little
| confusing with suffixes, especially with AMD.
|
| Both include a target market, generation, and performance
| level. With Intel, it's "i{market}-{generation}{performance}",
| with market being 3 (budget), 5 (mainstream), 7 (enthusiast), 9
| (high-end). The numbers used in the "performance" level vary,
| but higher is always better. For example, the i7-3770 is a
| third-gen chip targeted towards enthusiasts and was the highest
| performer in its generation. My i9-9900 is the top-end model
| for the 9th gen. Intel will also use suffixes "K" to mean it
| has an unlocked multiplier (making overclocking easier) and "F"
| which means it does not have an on-die GPU, so you'll need a
| discrete GPU card.
|
| AMD is similar, they'll call them "Ryzen {market}
| {generation}{performance}", ie, Ryzen 5 5600. But where AMD
| goes crazy is with the damn suffixes that express an additional
| performance level that is impossible to decipher.
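|
| (A rough sketch of decoding an Intel-style model number in
| Python; the regex, field names, and suffix table below are
| illustrative, not an official Intel mapping:)
|
|     import re
|
|     SUFFIXES = {"K": "unlocked multiplier", "F": "no integrated GPU",
|                 "KF": "unlocked, no integrated GPU"}
|
|     def decode_intel(model: str) -> dict:
|         # e.g. "i7-3770K", "i9-9900", "i7-12700KF"
|         m = re.fullmatch(r"i(\d)-(\d{1,2})(\d{3})([A-Z]*)", model)
|         if not m:
|             raise ValueError(f"unrecognized model: {model}")
|         market, gen, perf, suffix = m.groups()
|         return {
|             "market tier": market,       # 3/5/7/9
|             "generation": int(gen),      # 3 for i7-3770, 12 for i7-12700KF
|             "performance level": perf,   # higher is better within a gen
|             "suffix": SUFFIXES.get(suffix, suffix or "none"),
|         }
|
|     print(decode_intel("i7-3770K"))
|     print(decode_intel("i9-9900"))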
| zekica wrote:
|       Desktop CPUs:
|         X  - slightly higher performance than the one with no suffix
|         XT - even higher performance
|
|       Desktop APUs:
|         G  - includes graphics, different architecture than non-G CPUs (65W)
|         GE - lower power than equivalent without E (similar to T with Intel) (35W)
|
|       Mobile APUs:
|         U  - lower power (15W, configurable)
|         H  - higher power (35W, configurable)
|         HS - lower clocks than H
|         HX - higher clocks than H
| dsr_ wrote:
| Sadly, AMD is even more opaque than that.
|
| A Ryzen 2700 is a second-generation CPU with 8c/16t.
|
| A 3700 is a third-generation CPU with 8c/16t
|
| A 4700g is a second-generation CPU with 8c/16t and a GPU
|
| A 4500h is a second-gen CPU with 6c/12t and a GPU
|
| There is no 4600G.
|
| An X suffix generally means higher clockrate; an H suffix
| generally means high efficiency (lower power draw); a U
| suffix means ultra-efficient (very low power draw).
|
| In the 5000 and 6000 series, possible suffixes include X, H,
| U, HX, HS, but not G.
|
| Some CPUs are OEM-only.
|
| Some CPUs are only sold in packaging for laptop/tinybox
| manufacturers.
|
| In general, if you get integrated graphics, you lose an
| entire processor generation, but more recently you only lose
| top-end features.
| paulmd wrote:
| for a while, a lot of that could be summarized with "APUs
| are 1000 higher than their desktop generation" (4000 series
| APUs = Zen 2 = 3000 series desktop) but then AMD went and
| made the 5000 laptop series _split between generations_, a
| 5700U is a Zen2 part and a 5800U is a Zen3 part.
|
| Intel did split the 10th gen but even then they changed up
| the naming between the two series.
| gruez wrote:
| The only "new" chip is the X3D one (with 3d v-cache). The
| others are the same chips as last year, just at a different
| price point (eg. the 5700X is basically a cheaper version of
| the 5800X, only being marginally slower).
| zekica wrote:
| They also seem to have released a bunch of CPUs that are APUs
| with disabled GPUs Ryzen 5 5500 - 5600G with
| broken and disabled GPU Ryzen 5 4500 - 4600G with
| broken and disabled GPU Ryzen 5 4100 - 4300G with
| broken and disabled GPU
| iszomer wrote:
| Kinda goes to show the resourcefulness of salvaging and
| repurposing based on yields, right? Sort of reminds me of
| the now-ancient Black Edition releases with unlockable
| cores with a big fat YMMV sticker on top.
| paulmd wrote:
| it depends on market situation but usually those kinds of
| SKUs (if they're released worldwide in large volumes) are
| not related to yields but rather artificially disabled.
|
| People hear about binning and assume that every product
| decision has to be related to binning, but usually it's
| not, it's just market segmentation. AMD had over 80% of
| Zen2 chiplets coming off the line with 8 fully-functional
| cores, and clock bins are generally selected such that
| most units will pass, by design. And that's at launch, on
| a new node, in 2019. Numbers have only gotten better over
| the last 3 years.
|
| AMD already has a bin for iGPUs with a defect - it's
| 5600G/5600U/5600H/etc. And they have 5300G below that
| allowing even more defects. There are very, very few APUs
| coming off the line with tons of GPU defects but 6
| workable cores, or with a defect only in the iGPU and not
| in the PCIe controller or the rest of the die, etc.
|
| The problem is that AMD has tons of supply of high-binned
| parts but the lowest demand for those parts. And they
| have the highest demand for low-binned parts but the
| lowest supply of those parts. How do you mesh those two
| curves? Disable cores on a high-binned part and sell it
| as a lower SKU. That's why those "black edition with
| unlockable cores" existed - those unlockable cores were
| locked off for market segmentation. Nowadays they just
| don't let you turn it back on.
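|
| (As a toy illustration of that mismatch, with completely
| made-up yield and demand numbers:)
|
|     dies = 1_000_000
|     full_8core_yield = 0.90              # dies with all 8 cores working
|     natural_6core = dies * (1 - full_8core_yield)
|
|     demand_8core = 300_000               # demand for the top SKU
|     demand_6core = 600_000               # demand for the cheaper SKU
|
|     surplus_8core = dies * full_8core_yield - demand_8core
|     shortfall_6core = demand_6core - natural_6core
|
|     # The shortfall can only be filled by fusing off cores on
|     # perfectly good 8-core dies:
|     down_binned = min(surplus_8core, shortfall_6core)
|     print(f"good 8-core dies sold as 6-core: {down_binned:,.0f}")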
|
| (Which isn't to say that none of the 5600X/etc are the
| result of a dead core/etc - but a lot of them aren't,
| probably _most_ of them aren't, given the likely >90%
| yields for 8-core at this point. And you pick your 5600X
| bin such that most 5800X failures can be sold as 5600X,
| meaning there's very little that falls through the cracks
| without being just utterly broken. True binning-generated
| "we have this pile of chips, let's do something with it"
| style SKUs tend to have extremely limited availability as
| a result, it's shit like the Ryzen 4700S or the Ryzen
| 3100, or the NVIDIA 1650 KO.)
|
| Anyway, it's not a coincidence this is coming a few
| months after the Alder Lake launch. This is market-
| driven, Alder Lake is not only faster but in many cases
| it's cheaper as well. AMD coasted a little bit while
| motherboard supply firmed up for Intel, but they finally
| have to respond. I'm sure they're selling lots of Milan
| but consumer marketshare matters too, and AMD is losing
| steam there with the price increases, with Intel back on
| top on performance, and with Intel undercutting their
| pricing heavily.
| kibwen wrote:
| I concur. As someone who only pays attention to the processor
| market twice a decade, I'm clueless when it comes to navigating
| the matrix of CPU market segmentation. It makes me yearn for
| automobile-style naming where you just have a year, a make, and
| a model.
| tormeh wrote:
| Just look at benchmarks and prices. Names can't solve
| anything.
| kemotep wrote:
| Largely, those style names do exist in the current product
| names. Intel has styled the Core i-series as 3, 5, and 7 (and
| the later 9) for over 12 years, and AMD is following their lead
| with Ryzen 3, 5, 7, and 9.
|
| The first digit is the generation, and then the second digit
| and suffix letter designate additional features. So a Ryzen 7
| 5700X is a 5th-gen Ryzen 7, meaning it has 8 cores. It is a
| step down in price and performance from a 5800X and also has no
| graphics capabilities (those have a G suffix). Intel uses
| different letter suffixes, such as the Core i7-12700KF being a
| 12th-generation Core i7 with overclocking features (K) and no
| integrated graphics (F).
|
| It is far worse with GPUs. Certainly an industry-wide naming
| convention reset would be nice, but everyone is not going to
| cooperate like that.
|
| It does take a minute to read up on but at least Intel has
| *mostly* followed this naming convention for 12 product
| generations so far.
| IE6 wrote:
| Even within one vendor it's super confusing. i3? i5? i7? Y
| series? U series? I think 12th gen Intel may even use different
| naming conventions already... A Y series i7 of olde delivered a
| significantly different experience than an i3 U series
| (literally the difference between being able to run some
| workloads comfortably vs. locking the machine up). And then
| marketing materials make it harder because they will drop the
| CPU model name for the nondescript "i5 CPU". I think it's
| probably somewhat intentional.
| paulmd wrote:
| if they don't call out mobile/ULV processors with their own
| series (Y and U series) then people whine that they're trying
| to "sneak lower-performing processors into the lineup using
| the same numbering".
|
| similarly, marketing a processor as being a "5700U" or
| "5700G" despite it being slower than a desktop 5700X could be
| seen as equally deceptive. That's not really any better than
| "i7" vs "m7" or whatever.
|
| unfortunately you're just going to have to learn the naming
| convention, there really isn't a good solution for that,
| given the wide range of applications that a given series
| might be applied to. You just happen to think that AMD's
| naming convention is worth taking the time to learn while
| throwing your hands up at the Intel naming. Same with people
| who think Intel is _just awful_ for internally codenaming all
| their products after Lakes and Coves while eagerly memorizing
| every single painter and city name that AMD uses in their
| codenames - says more about your priorities than their naming
| scheme.
| IE6 wrote:
| > You just happen to think that AMD's naming convention is
| worth taking the time to learn while throwing your hands up
| at the Intel naming. Same with people who think Intel is
| just awful for internally codenaming all their products
| after Lakes and Coves while eagerly memorizing every single
| painter and city name that AMD uses in their codenames -
| says more about your priorities than their naming scheme.
|
| Who are you talking to?
| paulmd wrote:
| The person who was saying they didn't understand the
| Intel naming convention because U and Y SKUs were too
| complex to understand.
|
| That's normal. Mobile and ULV SKUs are usually coded
| differently. AMD calls them "U" and "H" SKUs, I believe.
|
| The person in question just doesn't understand the Intel
| naming convention, which is fine, but AMD and every other
| company does the exact same thing. It's not that Intel is
| uniquely confusing, it's that the individual here doesn't
| feel that Intel's naming convention is worth the brain
| space. Which is also fine, but it's not a problem with
| the naming convention.
|
| As a tangential observation (people do make those in
| discussions!) architecture/product codenames are another
| place this comes up. There are many enthusiasts who will
| eagerly memorize that Rembrandt > Renoir but think the
| idea that Rocket Lake > Coffee Lake is perplexing and
| confusing. Or at least that's a repeated theme in many of
| these naming discussions.
|
| I'm doing my best here to say this politely, but a lot of
| people clearly just don't value those two bits of
| knowledge equally. And dipping into rhetorical "who are
| you even responding to!?" doesn't really further the
| discussion either.
|
| Naming isn't hard and naming discussions aren't
| interesting.
| IE6 wrote:
| As someone who understands the intel naming convention
| quite well and could understand the AMD naming convention
| if I had a reason to I can still empathize with random-
| consumer-x who does not need to have that any practical
| understanding of the naming convention and could be
| confused as to what they are really purchasing...
| paulmd wrote:
| Well, sadly, there is more than one use-case for
| computers so we need multiple power brackets, so U and Y
| SKUs are going to continue to exist. It's appropriate to
| call it out with the naming scheme, but that's exactly
| what AMD and Intel have done here.
|
| There is no solution which is going to be 100% intuitive
| to someone who specifically doesn't know anything about
| what they're looking for. If you move those products out
| to their own separate series, that's what Intel did with
| the Y series ("m7-xxxY" line - contrast to "i7"). You
| specifically don't like that. If you mark them within the
| existing series, that's what Intel and AMD do with the U
| series. You specifically don't like that either. If you
| move them into a single series, you end up with something
| like the Intel Ice Lake/Tiger Lake naming convention,
| where there is some part of the name that means "cores"
| and some part of the name that means "power" and part
| that means "graphics". Other people _really_ didn 't like
| that, because now you have one name that means 5
| different things.
|
| (And this is what I mean about the naming discussion
| being dumb and boring - whatever you think is how it
| should be done, someone else hates that, and thinks it is
| too complex and requires too much knowledge on the part
| of buyers. It's bikeshedding, product naming is low-
| stakes so everyone has an opinion on it and is _very
| upset_ that AMD and Intel are ignoring their urgent
| forum posts. At the end of the day it's just not that
| interesting, nor are any of these naming schemes that
| difficult if you bother to learn what they mean.)
|
| Anyway, it's unfortunate that there are features and
| distinctions which laymen may not understand, but that's
| a fact of life; there are things car people really care
| about that a Camry buyer doesn't know, and that's fine.
|
| Someone who bought an expensive truck with a base-model
| trim might be upset that a Camry with a top trim is
| "nicer", because they didn't understand what a "trim" is
| before they laid down their money, and that's unfortunate
| but it's not exactly hidden either, nor should we call to
| get rid of trim levels because one person didn't
| understand. Someone else might be really upset that their
| SUV doesn't tow like a truck even though they got the
| nicest trim level on the SUV.
|
| Again, sorry if this seems frustrated, but this is a
| topic that has been bikeshedded endlessly. The stakes are
| low, there's multiple reasonable options available, and
| there's a whole lot of people who are all _really upset_
| that AMD and Intel aren't taking their forum posts on
| the topic seriously. Naming conventions are fine, they're
| good enough to not matter.
| TillE wrote:
| You would still want to refer to specs and benchmarks to make
| good comparisons, so a "better" naming scheme doesn't actually
| solve any problems.
| flatiron wrote:
| Remember the Athlon 1800+ naming scheme?
|
| I'm not sure which is better: AMD's weird numbers or Apple's
| Ultra/Max/Pro M1.
| n00bface wrote:
| At least it's not monitors. I just pre-ordered an AW3423DW. The
| sexiest monitor version/model ever.
| piinbinary wrote:
| Depending on how the performance vs. Intel shakes out, I could
| see the Ryzen 5 5500 becoming the new default option for mid-
| range PC builds
| komuher wrote:
| The i3-12100F is almost $100 cheaper; in most builds you would
| use the extra cash for a better GPU, or just get the i5-12400F
| at the same or lower price with higher clocks. But let's wait
| for benchmarks.
| WithinReason wrote:
| How much more expensive are Intel motherboards though?
| throwmeariver1 wrote:
| DDR4 boards are in the same ballpark as AMD.
| BeefWellington wrote:
| Intel boards baseline around $80[1] and AMD boards around
| $50[2].
|
| [1]: https://www.newegg.com/p/pl?N=100007627%20601394305%20
| 601361...
|
| [2]: https://www.newegg.com/p/pl?N=100007625%20601292786%20
| 601312...
| onli wrote:
| For Intel, those are the cheapest of the cheap boards
| with the very limited H610 chipset. If you want something
| acceptable you will pay a lot more. AMD on the other hand
| has cheap, good options with B550 or even B450.
| jeffbee wrote:
| I am interested in your belief that most builds need a GPU
| more than a CPU, or that most builds even have GPUs. iGPUs have
| ~70% of the market. Most builds are going to spend the cash
| on CPU performance.
| daemoens wrote:
| If you're buying the CPU separately, it means you'll be
| building the PC yourself, not a prebuilt. Upgrading the GPU
| instead of the CPU will give you better graphics
| performance, which is what most of the people building one
| want. The only reason iGPUs have such a large share of
| the market is people buying regular prebuilts to
| use as general computers, not gaming ones.
| jeffbee wrote:
| That's not obviously true to me. Is graphics-intensive
| gaming - and I would point out that defining "gaming" as
| GPU-intensive would be too narrow - really that large of
| a market? I've personally built dozens of PCs and the
| last time I bought a GPU it was a 3DFX Voodoo.
| Retric wrote:
| Ryzen 5 5500 is a $159 6 core chip.
|
| The i3-12100F is a $100 4-core chip, which should line up with
| Ryzen 3 4100 at $100 or possibly Ryzen 5 4500 at $129.
| komuher wrote:
| The 5500's street price is $199; MSRP has been dead for a long
| time, especially for TSMC products.
| meragrin_ wrote:
| I can get a 5600x for $210. There is no way the 5500 will
| be anywhere near close to $199.
| BeefWellington wrote:
| 12100f is $178 right now at Newegg (shipped by third
| party; Newegg has none): https://www.newegg.com/intel-
| core-i3-12100f-core-i3-12th-gen...
| komuher wrote:
| 118 usd after VAT here in EU
| awill wrote:
| Is this a sign zen4 is delayed? I get they need new motherboards
| and socket, and DDR5 etc.., but launching zen3 stuff now is
| strange. A lot of us already upgraded.
| blihp wrote:
| I don't take it that way. Zen 4 is likely to only release high-
| end parts at first. Given the timing of these releases, I'd
| expect Zen 3 to serve as the mid- to low-end of the product
| range at least through mid- to late-2023 if not longer. Given
| the way AMD sandbagged these parts this generation, they'll
| probably hold off as long as possible next generation for low-
| end Zen 4 too.
| mjevans wrote:
| I couldn't get past the scalpers in 2020. That was already a
| _slightly_ delayed refresh driven by a combination of less than
| stellar AMD GPUs (Linux, so nVidia's not a good option) and
| cryptominer hell to that point.
|
| It's going to be 2 years later, and at this point I'd rather
| jump directly from DDR3 to DDR5.
| awill wrote:
| I thought the Radeon 6800 was a pretty good launch. Didn't it
| hold up with Nvidia in most benchmarks except RT?
| mjevans wrote:
| That would be the scalpers issue, both for the card and the
| rest of the system to make it worth upgrading the card.
| hajile wrote:
| I believe this was always the plan. This is the refresh with
| the next chips due late this year to early next year.
| sliken wrote:
| Not heard that, but Alder Lake has taken back most of the
| market share that Zen 3 managed to take from Intel. So a
| response sooner rather than later will help protect market
| share. In particular, they are doubling down on the lower-power
| chips that ADL is having problems competing with. ADL is fast,
| but power hungry. Thus chips like the 5700X with a 65 watt TDP
| instead of the previous 8c/16t chip with a 105 watt TDP.
| leguminous wrote:
| The differences between the 5600 and 5500 are interesting. The
| $199 5600 has 6 cores, 32MB of L3, no GPU, and uses the chiplet
| packaging. The $159 5500 has 6 cores, 16MB of L3, a
| disabled/fused off GPU, and is one larger, monolithic die.
|
| I thought the chiplet packaging was supposed to result in lower
| costs due to better yields on smaller dies? I guess the extra
| cache takes up more than enough die space to make up for that?
| xxs wrote:
| 5500 is APU based, and it has only PCIe-3... you can consider
| it a failed laptop sale.
| blihp wrote:
| Everything I've been reading has been that 7nm has been having
| excellent yields and chiplets do bring their own costs. Lisa Su
| (AMD CEO) indicated in an interview a while back that there is
| a price floor at which chiplets make sense. Another thing to
| consider is that most of the chiplets used in the consumer
| parts could be considered rejects from Epyc. (i.e. they need to
| make a lot of relative duds to get the small number of 'golden
| samples' they use in their server chips)
| faluzure wrote:
| The 5500 is less performant, therefore lower cost. It only
| exists to sell salvaged dies, and won't likely appear in large
| quantities (similar to how the 3300 / 3100 were impossible to
| find).
| meragrin_ wrote:
| None of their chiplet designs currently have a GPU. All of their
| APUs are currently monolithic. The 5500 is just a neutered
| 5600G. Probably mainly trying to make some money on dies with bad
| GPUs.
| Symmetry wrote:
| The yield issue is going to be much more of a factor when a
| process node is young and defect rates are high. The big
| savings is in engineering hours where you can re-use a design
| across multiple products.
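|
| (A rough back-of-the-envelope with the standard Poisson die-
| yield model, yield = exp(-defect_density * area); the defect
| densities and die areas below are made up, just to show why
| smaller dies matter most when defect rates are high:)
|
|     from math import exp
|
|     def yield_rate(defects_per_cm2: float, area_cm2: float) -> float:
|         return exp(-defects_per_cm2 * area_cm2)
|
|     for d in (0.5, 0.1):               # "young" vs "mature" process
|         chiplet = yield_rate(d, 0.8)   # ~80 mm^2 chiplet
|         mono = yield_rate(d, 1.8)      # ~180 mm^2 monolithic APU
|         print(f"D={d}/cm^2: chiplet {chiplet:.0%}, monolithic {mono:.0%}")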
| paulmd wrote:
| the 5500 is a monolithic die, it's based on the monolithic APU
| lineup rather than the chiplet enthusiast lineup.
|
| This also implies other limitations like PCIe 3.0 and half the
| cache of the equivalent enthusiast lineup, because that's how
| the laptop chips were designed. The APU lineup has always
| performed a bit worse than the equivalent desktop lineup as a
| result.
| GekkePrutser wrote:
| Too little too late. The 5600 should have been out from the
| start. I've basically wasted 100 bucks on the X version I didn't
| want.
|
| Only went for AMD so I could reuse the motherboard. But next time
| I'll go back to Intel.
| JohnTHaller wrote:
| I'd love to see proper testing of all the current chips from both
| AMD and Intel with Spectre v2 mitigations applied. Performance
| hits of up to 50% are being shown in some workloads.
| jeffbee wrote:
| I can't think of anything less relevant for gaming workloads
| than speculative-execution security. In that use case you'd obviously
| disable all of the mitigations, to get the highest possible
| speculative performance.
| paulmd wrote:
| AMD really also should be benched with KPTI turned on, since
| they've got a Meltdown-style vulnerability (discovered by the
| same team) they've left unpatched because "KPTI fixes it", but
| they also want KPTI off-by-default because it tanks their
| benchmarks. It also completely breaks KASLR (and that's been
| broken on AMD for a while thanks to prior work from the same
| team).
|
| https://mlq.me/download/amdprefetch.pdf
|
| Sadly people don't seem to take AMD vulnerabilities very
| seriously. Spectre/Meltdown were trumpeted from the rooftops,
| but when AMD leaves vulnerable defaults because mitigations
| would tank their benchmarks then it's fine, and everyone
| continues to benchmark in vulnerable configurations since it's
| "manufacturer-recommended".
|
| There seems to be a mindset for many that because the initial
| vulnerabilities didn't affect AMD that they're invulnerable
| forever.
| throwoutway wrote:
| There will be many more such security issues. So dual
| benchmarks should be standardized from here on out:
|
| With protections turned on
|
| With protections turned off
| [deleted]
| KronisLV wrote:
| Not to discount how cool healthy competition is, but i recently
| got a Ryzen 5 1600 (used), which came out in 2017 and its 6 cores
| and 12 threads have been enough for every workload that i've
| wanted to deal with, even in 2022.
|
| It was a nice upgrade up from Ryzen 3 1200 that i had previously
| and honestly, i don't see myself upgrading again for the years to
| come, unless things become visibly slow and sluggish because of
| Wirth's law (let's assume that in 2030 most of my desktop apps
| would run Electron instances).
|
| It does gaming, it does programming, it renders videos, it does
| 3D modelling and other content creation, it compiles code and
| runs Docker containers and oftentimes multiple of those at the
| same time with no overclocking and no issues (e.g. render a video
| in kdenlive while playing a game in the background).
|
| Somehow it feels like either hardware has scaled to the point
| where new improvements are pretty incremental, or maybe
| software's ability to be wasteful with the provided resources
| thankfully still hasn't caught up with these improvements - you
| no longer really need to get a new CPU every 2 or 3 years, even
| the Ryzen 3 1200 which also came out in 2017 was adequate (i just
| got a good deal on the upgrade), which is really nice to see!
|
| Even my homelab servers still run used Athlon 200GE CPUs from
| 2018 because of the 35W TDP and are on par with cloud resources
| that i can otherwise afford for my 24/7 available cloud stuff,
| just much cheaper when you consider how long they last.
|
| Also, there's something really cool about every device in my home
| (apart from laptops) running on the same CPU socket and all of
| the parts being interchangeable with no driver update weirdness.
| Though it'd be even better if the Ryzen CPUs were the variety
| with iGPUs, given the GPU prices still otherwise being pretty
| unfortunate (in case my RX 570 would break).
|
| The only reasons that i see for upgrading my setup in the next 5
| years would be one of the following:
|       - the stocks of used CPUs drying up or prices rising
|         (AliExpress still has them pretty affordable)
|       - the motherboards themselves getting deprecated or there
|         being other unforeseen issues
|
| Here's hoping that won't happen and that i can enjoy CPUs that
| are suitable for my needs (and help avoid the e-waste issue at
| least somewhat), while those more well off financially than me
| can dabble with the more recent releases to their heart's
| content!
| kemotep wrote:
| The biggest part of this announcement is the bios updates for 300
| series motherboards in my opinion. People who bought into the AM4
| ecosystem 5 years ago should be able to update to these brand new
| zen3 cpus.
|
| Have we ever had such a long lived socket and chipset before?
| Supporting brand new products for 5 years?
| tehbeard wrote:
| > People who bought into the AM4 ecosystem 5 years ago should
| be able to update to these brand new zen3 cpus.
|
| As long as people recognise that a brand new CPU in a 5 year
| old board will have some compromises...
|
| Which given the rates of #internetdrama these days, is probably
| not likely.
| kemotep wrote:
| It reduces potential e-waste but depending on the board there
| certainly will be issues getting the best performance out of
| the newer zen 3 cpus. No pcie 4.0 support being a big one.
| belval wrote:
| I see this line repeated a lot, but in practice is there
| any hard data that PCIe 3.0 is bottlenecking reasonably
| priced hardware?
|
| Doesn't seem like it (RTX 3090 is probably at the edge of
| what I'd consider "reasonably priced" anyway): https://www.
| reddit.com/r/hardware/comments/ikrteg/nvidia_rep...
| kmeisthax wrote:
| Most games can run fine on Thunderbolt external GPUs, and
| that's a PCIe 3.0x4 link (roughly) that, on most Intel
| implementations, is also bottlenecked by the chipset[0].
| Game assets are typically static, so available bandwidth
| mainly impacts how fast you can stream data into VRAM. Of
| course, with things like DirectStorage and consoles all
| embracing SSD-to-GPU streaming and hardware
| decompression, game developers might actually bother to
| use the extra bandwidth they've been given.
| However, that's still a ways off and definitely not a
| requirement to enjoy most games today.
|
| The reason why PCIe 4.0 actually became a thing was
| because of enterprise storage arrays. M.2 slots and U.2
| connectors don't have enough pins for 16 lanes, and using
| up so many of those lanes for one device makes no sense
| if you need to stick 10 or 20 of them in a server. That's
| also a use case that doesn't really make sense on AM4,
| unless you have a bifurcation[1]-capable motherboard or
| are spending way too much money on M.2 carriers with PLX
| chips in them.
|
| [0] AFAIK, Intel wanted Thunderbolt direct-to-CPU but
| there was some weird driver/certification nonsense with
| Microsoft or something, and going through the chipset
| apparently made it easier for vendors not named Apple to
| support it. I don't remember the details.
|
| [1] The ability to drive multiple PCIe devices off the
| same slot by splitting the slot's lanes. Most M.2 carrier
| boards are wired up to work this way because proper PCIe
| hubs are absurdly expensive... because the entire market
| for such chips are just storage array vendors.
| eulers_secret wrote:
| PCIe 3.0 is a bottleneck for modern SSDs. It's not a big
| deal at all, but it is a bottleneck.
| bcrosby95 wrote:
| I wouldn't call a $1400 graphics card anywhere close to
| "reasonably priced". $400 maybe.
| belval wrote:
| Right, but that's my point: if PCIe 3.0 is not a
| significant bottleneck to an RTX 3090, it can hardly be
| seen as one for anyone running non-enthusiast builds.
| paulmd wrote:
| Not generally, but there are a few edge-cases. AMD
| recently released the 6500XT GPU which has PCIe 4.0 but
| only x4, and when used in PCIe 3.0 motherboards that
| obviously becomes 3.0x4. Furthermore, AMD only put 4GB of
| VRAM on it, which isn't good enough for modern games
| (it's actually less than the previous 5500XT, where AMD
| made a big advertising push about "4GB isn't enough for
| modern games anymore") so it swaps all the time, which
| amplifies the PCIe bottleneck. The card loses something
| like 15% performance when used on a PCIe 3.0 motherboard.
| It's not like it won't run, but that's a significant
| amount of performance, that's like a half a tier of
| performance and far more than you see on (eg) 3090.
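|
| (For rough context, per-direction PCIe bandwidth by generation
| and lane count, counting only the 128b/130b line encoding and
| ignoring other protocol overhead:)
|
|     GT_PER_SEC = {3: 8, 4: 16, 5: 32}   # transfers/s per lane
|     ENCODING = 128 / 130                 # 128b/130b for PCIe 3.0+
|
|     def pcie_gb_per_s(gen: int, lanes: int) -> float:
|         return GT_PER_SEC[gen] * ENCODING * lanes / 8  # bits -> bytes
|
|     for gen, lanes in [(3, 4), (4, 4), (3, 16), (4, 16)]:
|         print(f"PCIe {gen}.0 x{lanes}: ~{pcie_gb_per_s(gen, lanes):.1f} GB/s")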
|
| Unfortunately we're still in something of a transition
| period. AMD blocked vendors from supporting PCIe 4.0 on
| 300/400 series boards that might have been capable of the
| required signal integrity (particularly on the first
| slot). AMD doesn't support PCIe 4.0 on their Zen3 APUs at
| all either - and some of the new processors are based on
| the APU die even though they don't have graphics, so they
| are limited to PCIe 3.0 as well. And obviously
| Skylake/Coffee Lake/Comet Lake stuff is all PCIe 3.0
| based since they're ancient. So there are definitely
| scenarios where you might think "throw in a cheap dGPU"
| and are still stuck with PCIe 3.0.
|
| Anyway though, what I would caution you here is, the 3090
| has lots of VRAM, so it doesn't swap. What the 6500XT
| shows is, low-end cards can be _more_ susceptible to PCIe
| bottlenecking - they have less VRAM, so they swap more,
| which increases the pressure on the PCIe bus. 3090
| results are not representative of a worst case scenario
| just because they do drawcalls really fast (high
| framerate), there are other parts of the pipeline where
| PCIe load can be generated. If you are swapping due to
| low VRAM, that's still PCIe load.
|
| Similarly - simply using a card placed into a PCIe 3.0x4
| slot is not representative of Thunderbolt 3 results
| either, despite both links being 3.0x4. Thunderbolt is
| usually implemented as an additional standalone chip
| attached to the chipset - so it's not a CPU-direct link,
| there's multiple hops with higher latency there. There is
| also contention for the chipset bandwidth - the chipset
| has a 3.0x4 link, so the GPU alone can saturate it, but
| there's also NVMe traffic (particularly for pre-11th gen
| Intel where NVMe has to attach to the chipset) and
| network traffic (chipset provides the network
| controller), etc. It's bidirectional so you can read at
| 4GB/s while you write 4GB/s to the GPU, but there's also
| just general contention for commands/etc. So performance
| results on Thunderbolt will be worse than a card attached
| to the chipset, which will be worse than a card attached
| to CPU-direct PEG lanes, even if lane count is the same.
|
| (the exception to this might be Ice Lake and newer Intel
| laptop chips, where the Thunderbolt controller is
| actually part of the CPU itself, the performance impact
| of that should be less. However, this does not apply to
| desktop chips, including Alder Lake.)
| matja wrote:
| $100 PCIe 4.0 SSDs exist (e.g. the 500GB SN850), which
| can comfortably run at close to the theoretical limit of
| PCIe 4.0 and would be bottlenecked on PCIe 3.0.
| deckard1 wrote:
| yeah but what's the use case here? I've been on NVMe
| since 2017. Without being told about it, I would never
| know it's not a regular SSD.
|
| PCIe 4.0 is one of those things that if you need it, then
| you already know you need it. But almost no one does.
| jotm wrote:
| Only thing I can think of is RAM/PCIe speed... which isn't
| much of an issue. What else is there?
| xattt wrote:
| What was the shortest lived socket? Socket 423?
| paulmd wrote:
| Probably depends on your definition of "shortest lived".
| There's quite a few sockets that only got one generation.
|
| AM1 comes to mind, there were only ever about 5 processors
| compatible with it and really only two were ever intended for
| retail market and only one of them ever really existed off
| the drawing board. That's probably one of the rarest consumer
| sockets ever produced, at least recently.
|
| There have also been a few HEDT sockets that saw short life
| and small numbers. The W-3175X platform and AMD Quad FX
| (Quadfather) both were exceptionally short lived in
| themselves, but on paper supported a decent number of server
| processors due to socket compatibility. They are probably
| some of the smallest numbers sold. Quadfather never lived
| past a single board and neither of them probably sold more
| than a few thousand units.
|
| TR4 and TRX40 both went down as exceptionally short-lived
| HEDT sockets with only one generation each and no cross-
| compatibility with server chips that share their sockets. But
| they probably sold higher numbers than W-3175X and
| Quadfather.
|
| WRX80 is probably one of the lowest-volume sockets around
| since it's basically OEM only and a niche of the niche HEDT
| market, but again then you've got two generations, even if
| the volume sold is small.
|
| Most Intel products really aren't contenders here, "one
| generation for the socket" is table-stakes here, it's volume
| that really decides it imo. Nobody can really say Intel
| doesn't produce volume for their stuff, a short-lived socket
| for Intel probably sold 100x the amount that Quadfather sold.
| Weird niche products like W-3175X or Kaby Lake-X are the
| exception but again then you've got other products in the
| same socket.
| flatiron wrote:
| That would be my guess as well. Only a single p4 generation
| iirc
| rsynnott wrote:
| Socket 4 or 5, probably (first two Pentium sockets). Socket 5
| had a new socket replace it faster than 4 did, but new chips
| were made for 5 for longer.
| awill wrote:
| I get it's good, but what took them so long? Wouldn't many
| people who had 300 series motherboards already have upgraded?
| They were stuck for 18 months without a path forward.
| blihp wrote:
| They're trying to make switching to Intel less appealing when
| upgrade time comes. Until 12th gen, they didn't seem to view
| that as much of a threat. Now it apparently is. Competition
| is good!
| awill wrote:
| interesting. I hadn't considered AMD doing this due to
| Intel 12th gen. If true, that's a sign that AMD isn't doing
| this to help customers, but to hurt Intel.
| jatone wrote:
| which helps customers. hence why competition is good.
| thawaya3113 wrote:
| AMD, like Intel, was almost certainly gonna take
| advantage of a lack of competition to try and maximize
| their returns from their existing chips. Without
| competitive pressure from Intel they have no incentive to
| release better chips.
|
| It's not that AMD is good and Intel is bad, or vice
| versa. Both are only as good as competition forces them to be.
|
| What we need is a healthy back and forth competition
| between both the companies (and some ARM sprinkled in as
| well).
| deckard1 wrote:
| > AMD isn't doing this to help customers, but to hurt
| intel
|
| Seems correct. ASRock already allowed 5000 series CPUs on
| their X370/B350/A320 motherboards with a beta BIOS
| released in _2020_ [1]. AMD told them to stop[2]. That
| beta BIOS was never officially released.
|
| https://wccftech.com/asrock-amd-ryzen-5000-cpu-bios-
| support-...
|
| https://wccftech.com/amd-warns-motherboard-makers-
| offering-r...
| Tomte wrote:
| So please help me understand: I have an MSI X370 mainboard, and
| as far as I understood until now, the Ryzen 7 3700X processor
| for around 300 Euros is the newest one I can get that still
| runs on my system (that would be relevant for being able to
| update to Windows 11, since I have a Ryzen 1700 now).
|
| Now I can use the 5800X3D, but nothing in between those, or is
| there some other wrinkle I haven't understood?
| neogodless wrote:
| https://arstechnica.com/gadgets/2022/03/amd-reverses-
| course-...
|
| Basically all Ryzen 5000 series chips can become compatible
| if your motherboard OEM releases an updated BIOS.
| iszomer wrote:
| If your OEM really does provide an update to the BIOS to
| accommodate the 5000, then it sounds like a relatively good
| incremental upgrade path.
|
| I currently have a 2600 paired with an X370 and have been
| meaning to try APUs and play around with passthroughs more
| with the RX580.
| neogodless wrote:
| I was pleasantly surprised after I checked my Asus Prime
| X470-Pro driver page today, and noticed they released a
| BIOS update for 5800X3D!
|
| Not to say for certain I plan to get it, but it's great
| that the support is there the day AMD announces it!
| zekica wrote:
| What do you mean nothing in between. They should support ALL
| Ryzen desktop CPUs.
| Nexxxeh wrote:
| Does this mean A300 boards may get official support for the
| Ryzen 5 4600G? Because I'd love to slap one into my DeskMini
| A300. With a (now pulled) Beta BIOS, an A300 ran a Ryzen 7 Pro
| 4750G just fine, and it'd be a big jump from my R3 3200G.
| paulmd wrote:
| A300/A320/X300 were always the exception to the Zen3 lockout
| code. You could run a 5950X on a $30 trash A320 board with a
| 16MB BIOS and a 3-phase VRM, but not a Crosshair VI Hero.
|
| anyway, there's nothing stopping vendors from releasing
| support for A300, Asrock simply has abandoned the Deskmini
| A300 and doesn't want to support it anymore. That one isn't
| on AMD, Asrock just seems to have abandoned it.
| kemotep wrote:
| It sounds like it depends on your board's manufacturer
| providing it for your system but it looks like AMD is
| releasing the BIOS firmware to allow x370, b350, and a320
| boards to support all the new cpus just announced today.
|
| In some cases your existing cpu might not be supported using
| the new firmware. So if you update the bios, you might have
| to swap the processor before you can boot the computer again.
| jotm wrote:
| Yeah, people who didn't get a board with the best VRMs must be
| kicking themselves :D
|
| Really though, there's some cheap mATX boards with high quality
| VRMs that can run a Ryzen 9 with no issues. Though tbf, if the
| processor starts but is not stable, an undervolt or maybe clock
| cap could work, at the cost of a slight performance drop.
|
| It's nice to have the option to just upgrade the processor.
| toast0 wrote:
| Socket 7 was around for a long time and supported a multitude
| of different processors from different companies. You could
| usually put a socket 5 processor into a socket 7 board as well
| (but not the other way). And that was a lot larger jump between
| the first and last processor.
| kemotep wrote:
| It looks like AM3+ went from 2011 to the first generation of
| Ryzen launch in 2017. Still you are right, socket 7 takes the
| crown for longevity and performance boost.
|
| Anecdotally, the increase in performance from my Ryzen 5 1600
| to the Ryzen 7 5700G was roughly a 70% increase in
| performance. Despite that feeling like a solid upgrade, it is
| nothing compared to the practical year-over-year doubling in
| performance back in the original Pentium/AMD K6 era.
| formerly_proven wrote:
| AM2 was also very long-lived.
| phkahler wrote:
| >> The biggest part of this announcement is the bios updates
| for 300 series motherboards...
|
| A great upgrade for the old Mellori-ITX running a 2400G.
| https://github.com/phkahler/mellori_ITX
|
| BTW my system is limited to 4K 30Hz on the HDMI for reasons I
| never figured out. Haven't found a good DisplayPort->HDMI
| adapter to fix it, and not sure why the CPU/mobo combination
| won't do it on HDMI. I don't do gaming on it so not much of a
| problem, but I'd prefer 60Hz.
| matja wrote:
| Is it because the mainboard is not HDMI 2.0 or does not
| support DSC (Display Stream Compression?) (https://en.wikiped
| ia.org/wiki/HDMI#Refresh_frequency_limits_...). Raven Ridge
| supports 2.0, but it's down to the mainboard to also support
| that.
| phkahler wrote:
| That linked table indicates even 4k30fps (at 4:4:4)
| requires 6.18Gbps, while the motherboard page here:
| https://www.gigabyte.com/Motherboard/GA-AB350N-Gaming-
| WIFI-r...
|
| Under the HDMI section indicates up to 5Gbps video
| bandwidth (in addition to 8 channel audio). So on the
| surface it sounds like I shouldn't even get 30fps, but I
| do. Thank you for the info, it looks like the board isn't
| up to the task via HDMI so I will make the effort to find a
| Displayport adapter. That PC has a LOT of life left in it
| and I'd like to get the most out of it ;-)
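|
| (A rough sanity check on those numbers, counting active pixel
| data only; real HDMI timings add blanking on top, which is
| roughly where the 6.18 Gbps figure in that table comes from:)
|
|     def active_gbps(width, height, bits_per_pixel, refresh_hz):
|         return width * height * bits_per_pixel * refresh_hz / 1e9
|
|     for hz in (30, 60):
|         rate = active_gbps(3840, 2160, 24, hz)
|         print(f"3840x2160 @ {hz} Hz, 24 bpp: ~{rate:.1f} Gbps")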
| cehrlich wrote:
| Socket A made it from the first Athlon all the way to right
| before AMD went 64bit, but there were several different
| chipsets IIRC
| freeAgent wrote:
| The first Athlons used Slot A. Socket A came later.
| cehrlich wrote:
| Yeah you're right, looks like the first Socket A CPU was
| the Athlon 650 [1]
|
| [1] https://www.cpu-world.com/CPUs/K7/AMD-
| Athlon%20650%20-%20A06...
| rocky1138 wrote:
| This has been my biggest pain point with Intel. Why are sockets
| changed, seemingly, every generation? There's no point in
| having a socket in that case. Might as well solder the CPU
| right to the board.
| eric__cartman wrote:
| Money!
| cogman10 wrote:
| Part of the answer there is it's easier from an engineering
| standpoint to get a new socket than it is to make a new
| architecture work with an existing socket.
|
| Sort of the same way in the software world where it's easier
| to add a new method than it is to enhance an existing method.
| New methods have no legacy baggage to worry about. Tweak an
| old one wrong and you run the risk of breaking a bunch of
| people.
|
| AMD's commitment to forward compatibility is pretty nice, but
| definitely more work on their part.
|
| Then again, AMD's legacy is making sockets/slots work for
| their processors :D (That was the whole point of the original
| athlon)
| belval wrote:
| I bought a 1700 in 2017 when they launched, upgraded to a 3800X
| 2 years ago and now I might even get to run a 5800X on my B350
| board that I paid CA$130 for.
|
| This is what good competition looks like. As a customer I love
| it!
| mikepurvis wrote:
| Seriously, yeah-- this is great for consumers. Though I
| wonder if the mobo manufacturers feel quite as thrilled about
| it.
| belval wrote:
| I am sure they aren't BUT it might push them to innovate to
| make the motherboard itself more interesting?
| Alternatively, money-wise if I had known that the board
| might last me 7-8 years I probably would've bought a better
| one that (probably) has a better margin for Gigabyte.
|
| For example, my B350 doesn't have a USB-C port, LAN is only
| 1Gbps, only one M.2 SSD slot.
| paulmd wrote:
| you can always get a 10gbe (or 2.5gbe, 5gbe, etc) network
| card for $50 or so, and M.2 NVMe can be added with $10
| pcie addon cards as well (it's just a physical adapter to
| the pcie slot, electrically it's still nvme).
|
| unfortunately now we get into the discussion about 3.5
| slot GPUs that overhang all but 1 or 2 of your pcie
| slots, and the general lack of PCIe connectivity on
| consumer boards... you can sorta work around it sometimes
| with vertical mounts/etc. But it's annoying and takes
| work/planning.
|
| I miss the days of X99 and having 28 lanes at my
| disposal, and every GPU being 2-slot or at most 2.5 slot
| such that I could actually use those lanes.
|
| It's also been really annoying watching the arms race
| between GPUs and motherboard slot spacing. For a long
| time, GPUs were 2 and 2.5 slot, so motherboards went to
| 3-slot spacing for their slots. But then 3-slot and
| 3.5-slot GPUs became common, which overhangs the middle
| slot (or puts the GPU right up against a card placed in
| the middle slot). What we really need now is for
| motherboards to go back to 2-slot spacing so you get a
| slot in the #4 spot to give 3- and 3.5-slot cards some
| breathing room...
| bayindirh wrote:
| > LAN is _only_ 1Gbps, _only one_ M.2 SSD slot.
|
| How far we've come. I remember having a semi-decent on
| board 100mbps LAN was only reserved for highest end.
|
| Same for M.2.
| nullc wrote:
| > How far we've come. I remember having a semi-decent on
| board 100mbps LAN was only reserved for highest end.
|
| These cpus are _much more_ than 10x faster than the CPUs
| you would have used then.
|
| Networking speeds in desktop devices haven't kept pace.
| Not just with cpu speeds but with storage speeds-- which
| is particularly obnoxious when you want to use network
| attached storage.
|
| I assume this is due to a mixture of internet and
| wireless creating an extremely low bandwidth least common
| denominator and that running >1Gb/s over copper is kinda
| problematic (finicky, power hungry, etc) -- and the
| industry seems to reason (perhaps correctly) that the
| customers in this segment can't handle fiber.
|
| I personally have 40GBE attached desktops (and 100gbe
| attached stuff within the server rack)-- thankfully quite
| economically due to surplus enterprise hardware, so I'm
| well aware that it can be done... but to do it in a small
| formfactor system is still a problem, e.g. for any of
| these non-apu systems your normal mini itx systems will
| use all their slot space for a graphics card and not have
| room for a mellanox adapter.
| bayindirh wrote:
| > These cpus are much more than 10x faster than the CPUs
| you would have used then.
|
| Actually, it's not only about the CPU performance,
| because NIC performance drew a cosine wave w.r.t.
| processor power over the years. Earlier NICs had much
| more machinery inside them and were expensive. Then
| Realtek came and we witnessed the era of (almost)
| software LAN adapters, then the silicon became cheap, and
| some of the stuff moved back into the LAN controller. So
| modern, higher end cards do not do significantly more
| offloading, but they handle other things like
| virtualization of cards between many VMs, or other higher
| level tasks.
|
| From what I've seen, making a faster CPU is easy, but a
| faster and wider fabric is harder to weave due to
| distance, switching related noise and other physics
| related phenomena. Cache and good memory controllers are
| also relatively easy, but expensive. Also, RAM and higher
| voltage doesn't go well together, because RAM uses a lot
| of energy for its size, so it comes with heat and
| stability problems and other lifetime related problems
| (yes, you can fry RAMs).
|
| Storage is another hard technology. Spinning rust is
| limited by physics, vibration, rotation speed and other
| mechanical stuff. SSDs came crashing, but before they
| were cheap and Sun didn't got swallowed by Oracle, they
| did nice ZFS arrays with some very expensive SSD caches
| and spinning drives combined. I've seen
| _whattheheckisthat_ amount of speed from an array with a
| size of 6-8U total and a couple of IB interfaces fitted
| to it at the factory. Currently we get that amount of
| speed from Lustre arrays with some SSDs and a lot of
| spinning disks.
|
| With the current backbone capacity of a standard desktop
| computer, even on the high end, 40Gbps network is
| overkill unless you're going to ingest that data directly
| at the CPU. Yes, absorbing the data at the disks is
| possible, but at what cost and for what use case? 10Gbps is
| understandable, but I still consider 1Gbps as a sweet
| spot for general purpose computing. If you're going to
| transfer vast amounts of data from a storage array to a
| local system, yes you can go higher, but it's still a
| niche. Also as you've said, going over 1Gbps is
| problematic from a signal integrity point of view, and
| fiber is too fragile and expensive for average customer.
|
| These Mellanox cards get quite hot when they're
| constantly utilized, so in a little ITX box, both the
| card and the system will be cooked. The process becomes
| faster if you use active fiber cables, because connectors
| sink the heat inside the case via heat sinks attached to
| the connectors on the cards. Even if you use them in a
| system room, building a large network with them requires
| expensive equipment which has adequate bandwidth and
| cable length. Also, running an IB network needs other
| software machinery to keep the network up. You can
| directly run ethernet, or TCP over IB, which kinda
| defeats the purpose and adds additional penalty and
| reduces the performance.
|
| All in all, a higher speed network still doesn't bring
| much value to the home of the average user, and
| is still not _that cheap_ for the enthusiast. Also the
| hardware is not scaled down for the home. Cards are big
| & hot, connectors/cables are bulky and the fabric gear is
| noisy, bulky and high maintenance for home.
|
| Yes they work rather nicely, but I'd not want them at
| home. I have enough of them at the system room already.
| mattferderer wrote:
| I try to push my builds as close to 10 years as possible.
| I've had good luck by trying to time it around important
| motherboard features that make a big difference & allow
| for minor upgrades over time.
|
| - Caveats - These are software dev & light multimedia
| editing computers. By the time I get rid of one, it's had
| as much RAM added as possible. I've done this twice now.
| Would not work on a gaming rig.
| bipson wrote:
| Hm, also some features (e.g. USB C/3.1 headers) were only
| available on high-end gaming MoBos for quite a long time,
| making the whole build power-hungry and needlessly
| fragile.
|
| Before that, MoBo manufacturers even made extensive use
| of additional chips for networking, USB, more SATA ports,
| RAID and what not, which could make the whole system more
| complicated/fragile (all the additional drivers!).
|
| I nowadays try to avoid that and aim for boards close to
| "reference" (whatever the chipset offers out of the box).
| mattferderer wrote:
| I didn't mean to call out anything against gaming rigs.
| Some of my components have been marketed towards gamers.
| Many of the better motherboards often are, excluding the
| very serious motherboards.
|
| I just never bought a very expensive video card & I can't
| comment on how long a good one would last. I imagine it
| depends on too many variables, from the games you play, the
| settings of the games you're okay with, the card, etc..
| delusional wrote:
| This isn't just good competition. It's damn good engineering.
| We have to remember that this wasn't something that AMD was
| forced to do by the competitive landscape. Someone pushed
| hard for this decision internally, and that person or team
| deserve recognition. Someone also had to actually build it,
| and they deserve even more recognition.
|
| Competition is fantastic, but the people who rise to the
| competition are, in my opinion, the real heroes.
| deckard1 wrote:
| ASRock already did this in 2020.
|
| https://wccftech.com/asrock-amd-ryzen-5000-cpu-bios-
| support-...
|
| Then AMD told them to stop it. Which is why that BIOS never
| got an official release and never appeared on ASRock's
| website.
|
| There are people that have been running 5000 CPUs on ASRock
| for over a year now.
|
| Conversely, you can run first gen Ryzen on B550/X570. Which
| many people seem to bizarrely think is impossible. Possibly
| due to AMD's marketing.
| bipson wrote:
| I remember something that there was a technical issue
| with that "compatibility"... Was it something about power
| ratings or the chipset/MoBo becoming such a bottleneck
| that this was essentially useless?
|
| Or am I just thinking about an entirely different
| thing...?
|
| It really does not fit the narrative of what AMD did with
| Zen so far, that's for sure.
| deckard1 wrote:
| If you go back to last year you will see a lot of AMD
| fanboys arguing such a thing, based on pure conjecture.
| One person replied to one of my comments saying 5000 can
| not run on 300 series BIOS because of [insert-technical-
| hypothetical-nonsense] reasons. But suddenly... it can!
|
| https://wccftech.com/amd-warns-motherboard-makers-
| offering-r...
|
| The reality is that AMD fanboys haven't woken up to the
| fact that AMD is no different than Intel. All the same
| marketing and segmenting games exist on AMD. They were
| the underdog and then suddenly, they are on top and
| acting just like Intel.
|
| https://www.youtube.com/watch?v=h74mZp0SvyE
| paulmd wrote:
| No. There was a lot of rationalizing from fanboys
| attempting to retcon a technical justification for what
| was clearly a business decision from AMD, but none of the
| technical explanations hold up.
|
| Most of the stack has a similar TDP to Zen1/Zen+/Zen2.
| The 5950X has the highest TDP, 105W, which is the same as
| the 3950X, which is allowed. In practice the 5950X
| actually pulls a bit less power. Furthermore, the A320
| boards (even lower-end) got official support, despite a
| lot of those boards having utter trash VRMs.
|
| (furthermore, there's nothing that guarantees that every
| A520 board has a VRM that is capable of supporting a
| 5950X either! it's not like board generation is some
| guarantee of VRM quality, there are X370 boards that can
| handle a 5950X and A520 boards that can't. That's not how
| support is determined.)
|
| 16MB bios isn't a limitation either. B450 boards are the
| literal exact same silicon - B350 is to B450 as Z170 is
| to Z270, basically, it's rebranded silicon - and B450
| boards with 16MB BIOS got support. There are B450 boards
| with 16MB bios that support literally the entire range of
| chips. And once again, so did A320 boards with 16MB BIOS.
|
| Releasing official support for A320 really just blew a
| hole in every retconned justification that people tried
| to make for AMD. It's uncouth to say it here on HN, but
| there really is a large number of people who are super
| emotionally attached to the AMD brand and willing to
| stretch to absurd lengths to justify what are clearly
| business decisions.
|
| And now they're just reversing their previous policy.
| Wasn't a problem at all, actually.
| bipson wrote:
| AMD was aiming for that when AM4/Zen was announced, knowing
| that this was a major pain point for enthusiasts and
| builders/"upgraders".
|
| This would have been my main reason to build an AMD system,
| although I was easy to win, I always built AMD systems IIRC...
| And I ended up postponing it again and again, since I don't
| game and the reasonable (budget) options were always lagging
| behind regarding technology/efficiency - most of all, my Phenom
| (actually a rebranded Athlon with 2/6 cores disabled) was still
| doing OK. Now there will be AM5... Didn't follow the news, is
| it supposed to live as long as AM4?
|
| Would it even pay off for AMD to repeat that effort?
| mastax wrote:
| I'm debating whether to buy this. I have an 1800X on a B450 board
| with DDR4 3000 RAM. I use the PC for gaming and compiling,
| basically, so I should see huge benefits from the extra cache. I
| could double my CPU performance and breathe new life into this
| computer for not much money.
|
| However, that computer is getting long in the tooth. The RAM is
| slow compared to modern stuff. The motherboard is cheap, has poor
| UEFI, and I zapped the ethernet port with static - destroying it.
| Do I really want to invest more into it when I could wait 6
| months and build an all new PC with DDR5 and Zen 4? It would be a
| lot more expensive but I have a lot more money now than when I
| built this computer.
| trillic wrote:
| Wait, buy some ECC RAM for the old machine and turn it into a
| home server.
| KarlKemp wrote:
| Because a "home server", a.k.a. a space heater to store extra
| porn on, is still a server, and everyone knows servers need
| ECC?
| TheBigSalad wrote:
| DDR4-3000 RAM is good enough. If you upgrade the CPU and GPU
| you'll get effectively the same performance as any shiny new
| build, and it can last another 5+ years. You can get the
| ethernet port back with a PCIe card or a USB adapter.
|
| That being said, if you can afford it get the shiny new stuff.
| You deserve it.
| lvl102 wrote:
| It's hard to justify buying these when you have M1 Max or Ultra
| in Mac Studio.
| sliken wrote:
| Er, a Studio is minimum $2k, well above the average price of a
| home-built AMD system with the same specs (32GB RAM and 512GB
| SSD). It also runs macOS instead of Windows or Linux.
|
| Not really the same market at all. The 5700X (2nd most
| expensive on the list of 7 CPUs) costs $300, 32GB of
| DDR4-3600 is $150, a 512GB SSD is $60, and a motherboard,
| case, and power supply are likely another $300. So $810 for
| a system, not including a GPU, but $1200 is more than enough
| for something decent.
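|
| A rough Python tally of that parts list (just a sketch; the
| names and prices are the ballpark figures above, not quotes):
|         # hypothetical parts list mirroring the prices above
|         parts = {
|             "Ryzen 7 5700X": 300,
|             "32GB DDR4-3600": 150,
|             "512GB SSD": 60,
|             "motherboard + case + PSU": 300,
|         }
|         print(sum(parts.values()))  # 810, before a discrete GPU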
| nyadesu wrote:
| Crazy how people would prefer spending $300-450 to upgrade
| a rig they already own, instead of throwing it away and buying
| an Apple device with a new M1 CPU that costs $700+.
| neogodless wrote:
| Not to mention the M1 Max equipped Studio starts at $1999.
|
| And can't run most of the games I've played over the past two
| years.
|
| That parent comment was particularly out of place since the
| 5800X3D is a gaming-focused CPU.
| kcb wrote:
| Those devices are completely irrelevant to the vast majority
| considering one of these CPUs.
| lvl102 wrote:
| For developers? I beg to differ.
| neogodless wrote:
| Did you read the article? The AMD Ryzen 7 5800X3D is being
| released for gaming performance. I checked and most of the
| games I've played over the past few years do not even
| launch on macOS. That's kind of a problem for using Apple
| Silicon in a gaming PC (among other problems).
| [deleted]
| PragmaticPulp wrote:
| Lower clock speeds but more cache. Also interesting that they
| stopped at the 8-core chips and aren't adding the cache to the 12
| and 16 core parts.
|
| It looks like the extra cache die isn't good for thermals, so
| maybe it's not viable on the 12 and 16-core chips without
| sacrificing too much clock speed.
|
| Looks to be a strictly gaming-focused CPU play. Not a bad move
| given the way the enthusiast market works, but it doesn't do much
| for a lot of non-gaming workloads.
| bick_nyers wrote:
| I'm hoping it's because they wanted everyone to save their
| money for Zen 4 16-core parts with 3D cache. The reasons are
| probably thermal-related, though.
| marktangotango wrote:
| Does that amount of L3 enable some memory-hard PoWs to be
| mined profitably?
| zozbot234 wrote:
| > Looks to be a strictly gaming-focused CPU play. Not a bad
| move given the way the enthusiast market works, but it doesn't
| do much for a lot of non-gaming workloads.
|
| How so? 8 cores still covers plenty of light-workstation
| workloads, where the extra cache should lead to an overall
| performance improvement (despite the minor drop in top clock
| speeds). With current software architectures, it's still hard
| to make full use of 12 to 16 cores.
| paulmd wrote:
| you're rather missing the point about the exclusion of 3D
| V-cache on the higher-end processors. If it was a
| productivity-focused play then you'd see it included on those
| as well.
|
| The 5800X is a good processor and it's true that people do
| productivity on it, nobody disputed that and there's no need
| to rush to its defense. But you've missed the point about the
| overall way the upgrade is (or rather, isn't) being rolled
| out.
| zozbot234 wrote:
| Parent comment pointed out that the whole 3d-cache business
| might come with challenging thermals on high-core-count
| desktops. There might be all sorts of technical reasons why
| V-cache is not being used in this particular segment as of
| yet.
| paulmd wrote:
| thermal problems are obvious based on AMD disabling
| overclocking on the 5800X3D. Even when gains had become
| minimal, they never explicitly _disabled_ it, so clearly
| either the potential gains are literally zero, or
| overclocking might actually damage the cache / lead to
| conditions where it could be bricked, or lead to glitches
| that might allow control of the PSP (the cache die being
| separate, and completely exposed, certainly is an
| interesting attack "surface").
|
| Thermals have always been the giant asterisk on die
| stacking. Everyone knows it's going to be a problem. But
| this is the first _consumer_ stacked-die product (that I
| am aware of?) so we don't really have any sense for
| overclocking on those processors. The tentative
| implication here is - bad. Even with a single cache die,
| it's still got a whole processor underneath it heating it
| up. There are implications here for GPUs as well, as both
| AMD and NVIDIA are expected to release stacked-die
| products next generation. It's looking like they will
| have to keep clocks under control to make that happen -
| maybe that is a counter-argument to the "500-600W" rumors
| (for both brands).
|
| But multi-die products have twice the area to dissipate
| their heat over - just like 5950X doesn't run at higher
| temperatures than 5800X. Higher power, yes, but twice the
| area to dissipate it over means thermals are about the
| same. That's not really the reason.
|
| A single-die limitation also wouldn't rule out a 3D
| version of the 5600X - it's no longer "the bottom of the
| stack" and there are finally value SKUs underneath it, so
| it would be appropriate to re-release it as a more
| performance-oriented 5600X3D SKU.
|
| Anyway, my personal opinion is this is going to be a very
| limited product that primarily exists for "homologation".
| AMD can say that they've released it, it's officially a
| consumer product, so it can re-establish AMD's place on
| top of the gaming benchmark charts, but it's going to be
| a 9900KS-style super-limited SKU that doesn't see any
| real volume production at least until AMD has sold their
| fill of Milan-X. It just exists to put AMD back on the
| top of the benchmark charts now that Intel has retaken it
| with Alder Lake.
|
| The 5800X is the best processor for them to do that with.
| The 5950X risks some small regressions from inter-CCD
| issues/etc, and the 5800X is enough for games for now anyway.
| sliken wrote:
| I do wonder if the days of DIMMs and long-lived sockets like AM4
| are over. AM4 maxes out at DDR4-3200 x 128 bit = 50GB/sec and,
| more importantly, just 2 memory channels (2 pending cache
| misses). A bit more with overclocking, but not much.
|
| Apple M1 = 128 bit x LPDDR4X-4266 = 68GB/sec and I believe 8
| memory channels (8 pending cache misses). A modest core count (4
| fast + 4 slow) helps keep the memory bandwidth from being a
| bottleneck.
|
| The M1 Pro doubles this to 256 bits, 16 channels, and 200GB/sec,
| which is a significant help for the integrated GPU and hits
| levels that AMD APUs cannot match.
|
| The M1 Max doubles again to 512 bits, 32 channels, and 400GB/sec.
|
| The M1 Ultra doubles again to 1024 bits, 64 channels, and 800GB/sec.
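|
| For reference, the same arithmetic as a quick Python sketch
| (theoretical peak figures; the LPDDR5-6400 rate for the
| Pro/Max/Ultra parts is my assumption, not stated above):
|         def peak_bw_gb_s(bus_bits, mega_transfers):
|             # (bytes per transfer) * (millions of transfers/sec)
|             return bus_bits / 8 * mega_transfers / 1000
|
|         print(peak_bw_gb_s(128, 3200))    # AM4, DDR4-3200   ~51
|         print(peak_bw_gb_s(128, 4266))    # M1, LPDDR4X-4266 ~68
|         print(peak_bw_gb_s(256, 6400))    # M1 Pro           ~205
|         print(peak_bw_gb_s(512, 6400))    # M1 Max           ~410
|         print(peak_bw_gb_s(1024, 6400))   # M1 Ultra         ~819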
|
| Not sure AMD can really compete; their APUs are severely limited
| by memory bandwidth unless you buy a PS5 or Xbox Series X. I'm
| hoping that AMD takes a page from the Apple playbook and ships a
| consumer CPU with dramatically more bandwidth, letting users
| skip the current curse of discrete GPUs that run hot, are not
| available at MSRP, and are hard to get even at 2x MSRP.
| bick_nyers wrote:
| I think you would find that an APU that powerful would suffer
| the same fate as discrete GPUs: low stock and/or expensive.
| Same with the M1 Ultra, which starts at $2k, it looks like.
| sliken wrote:
| Dunno. Mining is all about hashrate/$ and scalpers are all
| about the % profit. Seems like AMD could easily make today's
| APU with 2x the bandwidth (still less than 1/5th of the Apple
| Studio / M1 Max), make a tidy profit, provide 2x the APU
| performance, and make APUs a good fit for a much larger
| fraction of GPU-intensive applications.
|
| Not like they aren't shipping by the millions in the Xbox
| Series X and PS5; there's obviously demand for them, and AMD
| is obviously capable of making them (they make the CPU in
| both).
___________________________________________________________________
(page generated 2022-03-15 23:02 UTC)