[HN Gopher] Intel is reducing server chip pricing in attempt to ...
___________________________________________________________________
Intel is reducing server chip pricing in attempt to stem the AMD
tide
Author : rbanffy
Score : 365 points
Date : 2021-09-14 09:50 UTC (13 hours ago)
(HTM) web link (www.tomshardware.com)
(TXT) w3m dump (www.tomshardware.com)
| yyyk wrote:
| The only surprise is that it took so long; in fact, I'd argue
| Intel isn't doing it enough. Intel is losing market share and has
| some power-hungry chips, but financially it's doing well. It
| makes a lot of sense to compensate via price.
|
| As I keep saying, Intel is very far from dead. A company does not
| need to have the topmost-performing chips to do well, any more
| than AMD/TSMC needed to in the past. Especially not in this
| seller's chip market.
|
| It just means Intel needs to invest more for some more time and
| will lose some market share. If Intel has not improved its
| processors in a non-marginal way in 3-5 years, then they'll be
| in trouble.
| mensetmanusman wrote:
| This is exactly the purpose of competition, it shouldn't be news.
| rualca wrote:
| I disagree. This should be news because it's as close as you'll
| get to an official declaration that Intel acknowledges its time
| as the world's leading chip manufacturer is over, and that the
| crown now sits firmly on AMD's head.
|
| Also, Intel's long history of using unethical tricks to preserve
| its market share while avoiding competing on price makes this a
| historical turn of events.
| rbanffy wrote:
| I am sure Intel and AMD realize that being x86 is no longer
| the advantage it used to be.
| shartacct wrote:
| > This should be news because it's as close as you'll get to an
| official declaration that Intel acknowledges its time as the
| world's leading chip manufacturer is over, and that the crown
| now sits firmly on AMD's head.
|
| You're being premature here. Intel still makes more profit in
| one quarter than AMD makes in revenue over multiple years. Intel
| still puts out more than 10x as many CPUs in one quarter as AMD
| does in a year.
| habibur wrote:
| It's news because it's fun to finally see intel face the heat
| of competition after so many decades.
| hiram112 wrote:
| > As seen in renowned system distributor Puget Systems'
| statistics, AMD has risen from a 5% share in systems sold since
| June 2020, up to a dominating 60% as of June 2021.
|
| Wow, maybe this stat is misleading or only referring to some
| small segment of the market, but if not, that is an incredible
| loss for Intel in just a single year.
| freemint wrote:
| Puget Systems is a smallish boutique seller that builds you the
| best computer for a certain workload, given some benchmarks.
| It's niche, but they are an indicator of what is better for
| their customers' workloads.
| tw04 wrote:
| I've never had time for Intel creating 400 different CPUs just to
| create artificial market segmentation and force people into a
| more expensive CPU. Why is there an i3, i5, i7, i9 - ahh, right,
| because then you can try to justify charging incrementally more
| for each additional feature. Oh you want turbo boost? Sorry
| that's an i5! Oh you want hyperthreading/SMT? Nope, next model
| up. Oh you want ECC? That's a "workstation" feature, here's an
| identical xeon with nothing new other than ECC!
|
| Just STOP. _EVERY_ CPU they make should support ECC in 2021. Give
| me an option for with or without GPU, and with or without 10Gbe -
| everything else should be standard. Differentiate with clock
| speed, core count, and a low power option, and be done with it.
| xuki wrote:
| Yeah, I've switched to AMD Ryzen 5000 for my dedicated servers.
| They're faster and cheaper than Xeon, and they support ECC,
| which was the only reason I needed Xeon previously.
| polskibus wrote:
| Higher end Ryzens along with NVMe make for great high
| performance CI local worker nodes.
| [deleted]
| formerly_proven wrote:
| Fun-fact: Intel's 12th gen desktop CPUs will no longer have
| AVX-512. Well, I mean, the cores do have it, but it's disabled
| in all SKUs. So to do any AVX-512 development and testing _at
| all_ you will need an Intel Xeon machine in the future.
| blackhaz wrote:
| Business features are tied to software/hardware features. They
| want a piece of your business.
| omegalulw wrote:
| While I agree with your general sentiment, I don't agree that
| you should expect Intel to hand out features for free. That's
| what competition is for.
| freemint wrote:
| I don't want to pay more for cheap CPUs such that they have
| ECC. High prices on ECC subsidize cheaper parts without ECC.
| mook wrote:
| Except that if the cheaper chips have ECC, they probably
| couldn't go up much in price -- that price is limited by how
| much people (who don't care about ECC anyway) are willing to
| pay. So if prices for the low end went up, people (like you)
| would instead go without (meaning Intel doesn't get your
| money), or try to get second hand (Intel doesn't get your
| money), or go with AMD (Intel doesn't get your money). But
| Intel would really like to have your money, or at least
| generally more money.
| freemint wrote:
| Intel would like to make the same profit per wafer as
| before. Any savings you get as someone who wants ECC would be
| added back, weighted by fraction of volume, onto chips in my
| price class. No thanks.
| magila wrote:
| In theory Intel could use profits from Xeons to subsidize
| consumer chips, but I doubt they actually are. In practice
| you only see that happen in highly competitive commodity
| markets where the profit margin on consumer grade models is
| razor thin (e.g. SSDs). Intel's profit margin on their
| consumer chips is not particularly small, and AMD wasn't a
| significant competitive threat until a year or two ago.
| antonios wrote:
| Yes please. ECC support by now should come by default, both in
| CPU support and in motherboards, RAM chips etc.
|
| At least AMD Ryzen supports it, but the fact that one has to
| spend a lot of time to research through products, specs, forums
| and internet chats to figure out a good CPU, m/b & RAM
| combination that works is cumbersome, to say the least.
| wyager wrote:
| > Give me an option for with or without GPU, and with or
| without 10Gbe
|
| In what capacity is 10Gbe included as a CPU feature? I've only
| ever used PCIe cards.
| Taniwha wrote:
| So these days 10Gb PCIe and 10GbE are essentially the same
| thing at the low-level silicon/pins/wires level: the bit
| packing/unpacking/signalling stuff has a whole lot in common,
| and they're all sort of converging on some superset of hardware
| SerDes. The higher-level hardware is still different (Ethernet
| MACs vs PCIe, etc.), of course.
| billsnow wrote:
| ECC support is an actual +10-20% cost in materials for the
| motherboard and DIMM manufacturers. Also, ECC errors are
| basically non-existent on desktop/laptop workloads. ECC is
| worth the extra cost in servers, but for desktops and laptops,
| the market got it right.
| s1dev wrote:
| A consumer PC should see a single bit error roughly once a week.
| That's hardly non-existent.
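|
| Whether "once a week" is right depends entirely on the assumed
| per-bit error rate; published field studies disagree by orders
| of magnitude. A sketch of how such an estimate is derived, with
| an assumed, purely illustrative FIT rate (errors per billion
| device-hours):
|
| fit_per_mbit = 50           # assumed correctable-error FIT per Mbit (illustrative)
| mem_gib = 16
| mbits = mem_gib * 1024 * 8  # 16 GiB = 131,072 Mbit
|
| errors_per_hour = mbits * fit_per_mbit / 1e9
| hours_between = 1 / errors_per_hour
| print(f"~1 error every {hours_between:.0f} hours (~{hours_between / 24:.1f} days)")
| # ~1 error every 153 hours (~6.4 days) with these assumptions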
| billsnow wrote:
| According to who? I checked the edac module for a year on
| my work machine, and it never detected a single error. I
| know I'm just one anecdote, but I doubt I'm that lucky.
| Filligree wrote:
| Worst case those just trash some family photos of a dead
| relative. Hardly anything important.
|
| /s
| phendrenad2 wrote:
| The more expensive chips subsidize the cheaper ones. If they
| put ECC in low-end models, they would have to charge more for
| them, because fewer people would buy the high-end models.
|
| Also, there's some cross contamination between price point and
| market segment here. Nobody just buys a CPU, they buy a CPU
| wrapped in a laptop. So Intel's real customers are laptop
| manufacturers, not you. So the low-end chips have to appeal to
| a model that the laptop vendors want to introduce. That takes
| the form of thin & light laptops (or low-energy-usage "green"
| desktops for office workers).
|
| Adding ECC support adds heat and cost and die size. All things
| the thin & light market do not want under any circumstances.
| drewg123 wrote:
| And similarly with memory speed segmentation in the Xeon line.
| I'm kicking the tires on an Ice Lake 8352V, and I was
| disappointed (but not at all surprised) to learn that it is
| running its 3200 memory at 2933.
| Consultant32452 wrote:
| This is very common across many industries. It doesn't cost
| much more to manufacture a sports car vs a sedan, but the price
| is very different.
|
| No product's price is based on its cost of manufacture; it's
| based on the price people are willing to pay.
| hajile wrote:
| Let's say it costs 5 billion to design a car (it goes as high
| as 6 billion) and another 2-3 billion to create all the molds
| and custom tooling and change over a factory. If you sell 10
| million cars, that overhead costs $800 per car. If you sell
| only 1 million, that's $8,000 per car. Some sports cars sell
| even fewer units than that. This is the biggest reason prices
| are higher.
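|
| A back-of-the-envelope sketch of that amortization arithmetic (a
| hedged illustration: the ~$8B fixed-cost figure comes from the
| estimate above, and the volumes are just examples):
|
| # Fixed development cost spread evenly over every unit sold.
| def per_unit_overhead(fixed_cost_usd: float, units_sold: int) -> float:
|     return fixed_cost_usd / units_sold
|
| fixed_cost = 8_000_000_000  # ~$5B design + ~$3B tooling, per the figures above
| for units in (10_000_000, 1_000_000, 100_000):
|     print(f"{units:>10,} units -> ${per_unit_overhead(fixed_cost, units):,.0f} per unit")
| # 10,000,000 units -> $800 per unit
| #  1,000,000 units -> $8,000 per unit
| #    100,000 units -> $80,000 per unit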
| KronisLV wrote:
| I agree in principle, but it's pretty obvious that this would
| be bad for their profit margins and as a consequence wouldn't
| happen.
|
| After all, making your consumers buy the more expensive
| versions of your product just because they need one of its
| features is a sound business decision.
|
| Otherwise people will use the cheaper and lower-end versions if
| those cover the features they need - like I'm currently using
| 200GEs for my homelab servers, because I do not require any
| functionality that the low-power 2018 chip doesn't provide.
| CountSessine wrote:
| I agree but this is a game you can play with your customers
| when they actually want what you're selling and you have
| market power. When you're losing ground and customers are
| leaving the shop, it's time to cut the bullshit and give
| people what they want.
| tw04 wrote:
| >I agree in principle, but it's pretty obvious that this
| would be bad for their profit margins and as a consequence
| wouldn't happen.
|
| The only reason it hasn't happened is because they had no
| legitimate competition until recently. In a healthy market
| they would have been forced to do so long ago. Capitalism and
| "market forces" only work where competition exists.
| awestroke wrote:
| Well, now they are losing to AMD, so what does that tell you
| about it being a sound business decision?
| danielmarkbruce wrote:
| Exactly. It seemed like a sound business decision because
| it gave them measurably more money in their pocket over a
| short period of time. They don't appear to have taken into
| account that they left the door open for competition. It
| wasn't _just_ prices that left them vulnerable, but it sure
| didn't help.
|
| AMD should never have been able to get back in the game.
| apetrovic wrote:
| They aren't losing to AMD because of market segmentation,
| they are losing because their fabs are way behind TSMC.
| OrvalWintermute wrote:
| > they are losing because their fabs are way behind TSMC.
|
| I don't believe it is _merely_ an execution problem.
|
| AMD has out-innovated Intel. Evidence being the pivot to
| multi-core, massively increased PCIe, better fabric, chiplet
| design, and design efficiency per wafer, among others.
|
| Why did this happen?
|
| > Two years after Keller's restoration in AMD's R&D
| section, CEO Rory Read stepped down and the SVP/GM moved
| up. With a doctorate in electronic engineering from MIT
| and having conducted research into SOI (silicon-on-
| insulator) MOSFETS, _Lisa_ Su [1] had the academic
| background and the industrial experience needed to return
| AMD to its glory days. But nothing happens overnight in
| the world of large scale processors -- chip designs take
| several years, at best, before they are ready for market.
| AMD would have to ride the storm until such plans could
| come to fruition.
|
| >While AMD continued to struggle, Intel went from
| strength to strength. The Core architecture and
| fabrication process nodes had matured nicely, and at the
| end of 2016, they posted a revenue of almost $60 billion.
| For a number of years, Intel had been following a 'tick-
| tock' approach to processor development: a 'tick' would
| be a new architecture, whereas a 'tock' would be a
| process refinement, typically in the form of a smaller
| node.
|
| >However, not all was well behind the scenes, despite the
| huge profits and near-total market dominance. In 2012,
| Intel expected to be releasing CPUs on a cutting-edge
| 10nm node within 3 years. That particular tock never
| happened -- indeed, the clock never really ticked,
| either. Their first 14nm CPU, using the Broadwell
| architecture, appeared in 2015 and the node and
| fundamental design remained in place for half a decade.
|
| >The engineers at the foundries repeatedly hit yield
| issues with 10nm, forcing Intel to refine the older
| process and architecture each year. Clock speeds and
| power consumption climbed ever higher, but no new designs
| were forthcoming; an echo, perhaps, of their Netburst
| days. PC customers were left with frustrating choices:
| choose something from the powerful Core line, but pay a
| hefty price, or choose the weaker and cheaper
| FX/A-series.
|
| >But AMD had been quietly building a winning set of cards
| and played their hand in February 2016, at the annual E3
| event. Using the eagerly awaited Doom reboot as the
| announcement platform, the completely new Zen
| architecture was revealed to the public. Very little was
| said about the fresh design besides phrases such as
| 'simultaneous multithreading', 'high bandwidth cache,'
| and 'energy efficient finFET design.' More details were
| given during Computex 2016, including a target of a 40%
| improvement over the Excavator architecture.
|
| ....
|
| >Zen took the best from all previous designs and melded
| them into a structure that focused on keeping the
| pipelines as busy as possible; and to do this, required
| significant improvements to the pipeline and cache
| systems. The new design dropped the sharing of L1/L2
| caches, as used in Bulldozer, and each core was now fully
| independent, with more pipelines, better branch
| prediction, and greater cache bandwidth.
|
| ...
|
| >In the space of six months, AMD showed that they were
| effectively targeting every x86 desktop market possible,
| with a single, one-size-fits-all design. A year later,
| the architecture was updated to Zen+, which consisted of
| tweaks in the cache system and switching from
| GlobalFoundries' venerable 14LPP process -- a node that
| was under license from Samsung -- to an updated, denser 12LP
| system. The CPU dies remained the same size, but the new
| fabrication method allowed the processors to run at
| higher clock speeds.
|
| >Another 12 months after that, in the summer of 2019, AMD
| launched Zen 2. This time the changes were more
| significant and the term chiplet became all the rage.
| Rather than following a monolithic construction, where
| every part of the CPU is in the same piece of silicon
| (which Zen and Zen+ do), the engineers separated the
| Core Complexes from the interconnect system. The former
| were built by TSMC, using their N7 process, becoming full
| dies in their own right -- hence the name, Core Complex
| Die (CCD). The input/output structure was made by
| GlobalFoundries, with desktop Ryzen models using a 12LP
| chip, and Threadripper & EPYC sporting larger 14 nm
| versions.
|
| ...
|
| >It's worth taking stock with what AMD achieved with Zen.
| In the space of 8 years, the architecture went from a
| blank sheet of paper to a comprehensive portfolio of
| products, containing $99 4-core, 8-thread budget
| offerings through to $4,000+ 64-core, 128-thread server
| CPUs.
|
| From https://www.techspot.com/article/2043-amd-rise-fall-
| revival-...
|
| [1] https://en.wikipedia.org/wiki/Lisa_Su
| jjoonathan wrote:
| The secondary features (PCIe, ECC) and tertiary features
| (chiplets) wouldn't have mattered if Intel had delivered
| 10nm in 2015.
|
| It's a harsh truth, but nodes completely dominate the
| value equation. It's nearly impossible to punch up even a
| single node -- just look at consumer GPUs, where NVidia,
| the king of hustle, pulled out all the stops, all the
| power budget, packed all the extra features, and leaned
| harder than ever on all their incumbent advantage, and
| still they can barely punch up a single node. Note that
| even as they shopped around in the consumer space, NVidia
| still opted to pay the TSMC piper for their server
| offerings. The node makes the king.
| astatine wrote:
| Thanks! I had no idea about any of this. Very
| informative.
| lotsofpulp wrote:
| Hence they can no longer afford to do the market
| segmentation.
| R0b0t1 wrote:
| They are. The Intel segmentation was too restrictive. AMD
| started offering "server" grade features on desktop
| parts.
| colejohnson66 wrote:
| It's worth keeping in mind that the silicon lottery is very
| much a thing at these nanometer sizes. So _some_ market
| segmentation has to exist. If Intel threw away every chip that
| had one of the four cores come out broken, they'd lose a lot of
| money and have to raise prices to compensate. By fusing off the
| broken and one of the good ones, they can sell it as a two core
| SKU.
|
| Does this excuse Intel's form of market segmentation? No. They
| almost certainly disable, for example, hyperthreading on cores
| that support it - just for the segmentation. But we can't make
| every CPU support everything without wasting half good dies.
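|
| A rough sketch of why that salvaging matters, assuming core
| defects are independent with some per-core probability (the 10%
| figure is purely illustrative, not real yield data):
|
| from math import comb
|
| p_bad = 0.10  # assumed chance that any one core is defective
| n = 4         # cores per die
|
| def p_exactly_good(k: int) -> float:
|     """Probability that exactly k of the n cores are functional."""
|     return comb(n, k) * (1 - p_bad) ** k * p_bad ** (n - k)
|
| perfect = p_exactly_good(4)                       # sellable as a 4-core SKU
| salvage = sum(p_exactly_good(k) for k in (2, 3))  # sellable as 2-/3-core SKUs
| print(f"perfect 4-core dies: {perfect:.1%}")  # ~65.6%
| print(f"salvageable dies:    {salvage:.1%}")  # ~34.0% extra if binned, else scrapped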
| inetknght wrote:
| > _By fusing off the broken and one of the good ones, they
| can sell it as a two core SKU._
|
| Fuse off the broken one? Sure, makes sense.
|
| Fuse off a good one? That's arguably amoral and should be
| discouraged.
|
| Three cores can be better than two. Let the consumer disable
| the runt core if they need.
| deedree wrote:
| To be pedantic, you mean immoral right? It's bad, they
| shouldn't waste usable resources just to fit their
| marketing scheme.
|
| Amoral means that it is not moral or immoral.
| lupire wrote:
| It's amoral because they act based on incentives and not
| the commenter's religious beliefs.
| dev_tty01 wrote:
| Amoral? Why? They advertise a two core part, you pay for a
| two core part, you get a two core part. Completely fair.
| hnuser123456 wrote:
| Because if they're capable of making plenty of good
| 4-cores but have more demand for 2-cores, and so are cutting
| down good 4-core parts, they should just make the 4-cores a
| little cheaper. But maybe they already do this.
|
| Anyways, agreed ECC should be standard, but it requires
| an extra die and most people can do fine without it, so
| it probably won't happen. But an ECC CPU option with
| clearly marketed consumer full ECC RAM would be nice.
| DDR5 is a nice step in this direction but isn't "full"
| ECC.
| dageshi wrote:
| I don't know if mobile cores factor into the same
| process, but if you have a lot of demand for 2-core
| systems for cheap laptops that can't supply the power or
| cooling for a 4-core, then having more 4-cores, even
| cheaper ones, doesn't help.
| [deleted]
| brianwawok wrote:
| Wait till you find out that two people side by side on an
| airplane may pay 10x or more difference in ticket price,
| for the same ride.
| rozap wrote:
| Wait until they find out about the fact that all new BMWs
| come with heated seats, but you need to pay a monthly
| subscription to have them enabled.
| colejohnson66 wrote:
| Such as some airlines having a "business/first class"
| that's nothing but "board before the plebs"
| Arainach wrote:
| No, the scenario is that there are massive price
| differences even for the same class of seats.
| Traditionally, the major long haul airlines sold seats
| weeks/months in advance at rates that were basically
| losing money but made almost all of their per flight
| profit on last minute bookings at higher rates. These
| were usually business flights, but not necessarily (not
| usually, even) business class.
|
| Business models for budget airlines (RyanAir, etc.) are a
| bit different but that's not relevant here.
| jvanderbot wrote:
| That's not amoral. It's missing a market opportunity, but
| conflating that with morality is an interesting way of
| looking at it.
|
| Businesses don't owe you a product (before you pay for it)
| any more than you owe them loyalty after you pay for
| something. They will suffer when someone else offers what
| you want and you leave. That's the point of markets and
| competition.
| scoopertrooper wrote:
| Maybe 'amoral' is a bit strong, but I think there is
| something wrong with an economic system where producers
| destroy wealth, rather than distribute all that is
| produced.
|
| If it's wrong for the government to pay farmers to burn
| crops during a depression, then it's wrong for a monopoly
| to disable chip capabilities during a chip shortage.
| jvanderbot wrote:
| I think you're framing the supply chain in a very
| personal (strawman) way.
|
| The problem is just one of "efficiency". The production
| is not perfectly aligned with where people are willing to
| spend money. A purely efficient market exists only in
| theory / textbooks / Adam Smith's Treatise.
|
| The chips that roll off a fab are not done. They aren't
| "burning crops". Perhaps they are abandoned (not
| completed) because they need to recoup or save
| resources to focus on finishing and shipping the working
| (full core) products. They aren't driving their trucks of
| finished products into the ocean.
| scoopertrooper wrote:
| > The problem is just one of "efficiency". The production
| is not perfectly aligned with where people are willing to
| spend money. A purely efficient market exists only in
| theory / textbooks / Adam Smith's Treatise.
|
| Destroying wealth is not an appropriate market mechanism
| to deal with disequilibrium. Producers should either
| lower the price to meet the market or hold inventory if
| they anticipate increased future demand. However, the
| latter may be harder to do in the CPU business because
| inventory depreciates rapidly.
|
| Intel has hitherto been minimally affected by market
| pressures because they held an effective monopoly on the
| CPU market though that is fast changing.
|
| So, there is nothing necessarily "efficient" about what
| Intel is doing. They're maximising their returns through
| price discrimination at the _expense_ of allocative
| efficiency.
|
| > The chips that roll off a fab are not done. They aren't
| "burning crops". Perhaps they are abandoned (not
| completed) perhaps because they need to recoup or save
| resources to focus on finishing and shipping the working
| (full core) products. They aren't driving their trucks of
| finished products into the ocean.
|
| That may be true in some cases, but not in others. I'm
| speaking directly to the case where a component is
| deliberately modified to reduce its capability for the
| specific purpose of price discrimination.
| munificent wrote:
| _> Businesses don't owe you a product (before you pay
| for it) any more than you owe them loyalty after you pay
| for something._
|
| This is itself a moral claim. You may choose to base your
| morals on capitalism, but capitalism itself doesn't
| _force_ that moral choice.
|
| _> That's the point of markets and competition._
|
| And the point of landmines is to blow people's legs off,
| but the existence of landmines does not morally justify
| blowing people up. Markets are a technology and our moral
| framework should determine how we employ technologies and
| not the other way around.
| jvanderbot wrote:
| So, if I had changed to preface with "In today's western
| society, it is generally accepted that ... ", we'd be on
| a level playing field? That's reasonable.
| munificent wrote:
| You could make that claim, but I disagree that it is
| generally accepted that companies destroying products is
| a morally good thing.
|
| I don't know anyone in western society who thinks things
| like planned obsolescence are to be admired.
| techrat wrote:
| > So some market segmentation has to exist. If Intel threw
| away every chip that had one of the four cores come out
| broken, they'd lose a lot of money and have to raise prices
| to compensate.
|
| Except in the case with the Pentium special edition 2 cores
| and i3 parts, Intel actually designed a separate two core
| part that wouldn't have the benefit of re-enabling cores
| among hobbyists.
|
| And then there's the artificial segmentation by disabling
| Xeon support among consumer boards... even though the Xeon
| branded parts were identical to i7s (with the GPU disabled)
| and adding (or removing) a pin on a socket between
| generations even though the chipset supports the CPU itself
| (and the CPU runs on the socket fine with an adapter.)
|
| Intel definitely did everything they could to make it as
| confusing as possible.
| alexhawke wrote:
| Apple produces one A series chip for the iPhones every year.
| How does that work?
| secondaryacct wrote:
| It doesn't; you still can't game with it.
| tcoff91 wrote:
| Baseless speculation: perhaps they do actually throw away
| chips? They only really target a premium market segment so
| perhaps it's not worth it to their brand to try and keep
| those chips.
| jimbob45 wrote:
| There's no way they throw away that much revenue. Not
| even Apple is that committed to purity. I'm sure they
| have a hush-hush deal with another company to shove their
| chips in no-name microwave ovens or something.
| xxpor wrote:
| Funny story about microwaves: there's basically only 2
| main manufacturers. They're both in China, and you've
| never heard of them. But if you look at various brands in
| the US and take them apart, you'll see the only
| difference is the interface. The insides are _literally_
| the same.
|
| The only exception to this are Panasonic microwaves.
|
| https://www.nytimes.com/wirecutter/reviews/best-
| microwave/
|
| Granted, a microwave with a half broken M1 in it would be
| awesome.
| verall wrote:
| It's not that much revenue because the marginal cost of
| an individual chip is very low. Given that apple has
| plenty of silicon capacity, throwing away say 5-10% of
| chips that come off the line is likely cheaper than
| trying to build a new product around them or selling them
| off to some OEM who needs to see a bunch of proprietary
| info to use them.
| hajile wrote:
| You'll likely see them in other products like lower end
| tablets or the Apple TV where lasering a core or two
| doesn't matter.
| sdenton4 wrote:
| Turns out the Apple tax means you're also buying the
| three chips thrown away to produce your one...
| munificent wrote:
| Waste is a factor in the cost of all production goods. The
| price of every fish you eat takes into account dealing with
| bycatch. Your wooden table's price accounts for the offcuts.
| It's the nature of making (or harvesting, or whatever) things.
| arcticbull wrote:
| Waste is an inherent inefficiency.
|
| In silicon manufacturing, the inefficiency is actually
| pretty low specifically because of the kind of binning
| that Intel and AMD do, that GP was complaining about. In
| a fully vertically integrated system with no desire to
| sell outside, the waste is realized. In a less integrated
| system the waste is taken advantage of.
|
| In theory capitalism should broadly encourage the
| elimination of waste - literally every part of the animal
| is used, for instance. Even the hooves make glue, and the
| bones make jello.
| gleenn wrote:
| That's not really an Apple tax though, that's a cost of
| doing business tax. It's not like Intel and AMD and
| everyone else aren't effectively doing the same exact
| thing.
| dragontamer wrote:
| Intel and AMD __literally__ sell those broken chips to
| the open marketplace, recouping at least some of the
| costs (or possibly getting a profit from them).
|
| Apple probably does the same strategy PS3 did: create a
| 1-PPE + 8-SPE chip, but sell it as a 1-PPE + 7-SPE chip
| (assume one breaks). This increases yields, and it means
| that all 7-SPE + 8-SPE chips can be sold.
|
| 6-SPE-chips (and below) are thrown away, which is a small
| minority. Especially as the process matures and
| reliability of manufacturing increases over time.
| gleenn wrote:
| Apple sells a 7 core and 8 core version of their M1
| chips. Maybe Intel and AMD ship CPUs with even more cores
| disabled but it's not like Apple doesn't do this at all.
| qweqwweqwe-90i wrote:
| Apple doesn't sell chips at all. Next.
| barbecue_sauce wrote:
| Next?
| rteuionwiv wrote:
| I can confirm that the 5000 desktop Ryzen series has issues
| with turbo boost: basically, if you disable turbo and stay
| on base clock then everything is fine, but with turbo
| (CPB) enabled you get crashes and BSODs. I had this
| problem at work on my new workstation with a Ryzen 5900X.
| We RMAed it and the new CPU works fine. From what I read
| it's a pretty common problem, but it's strange that no one
| talks about it.
| pdimitar wrote:
| Can the turbo boost maximum frequency value be lowered a
| little in the BIOS to try and alleviate the problem?
| wayoutthere wrote:
| No way; the half-busted chips go into low-cost products
| like the iPhone SE. It costs little to accumulate and
| warehouse them until a spot in the roadmap for a budget
| device arises.
| officeplant wrote:
| The SE series uses the same chips, but cheaps out in
| other ways by going with an older body, older camera, and
| older screen.
| minhazm wrote:
| Look at the Apple A12x. They disabled a GPU core in it for
| the iPad, and then in the A12z they enabled that core. This
| was likely to help with yields. Then with the M1 chips they
| decided to sell a 7 core version of the chip with the base
| level Macbook Air and save the 8 core version for the
| higher trims.
|
| Even Apple is susceptible to it. But Apple doesn't sell
| chips, they sell devices and they can eat the cost for some
| of these. For example if a chip has 2 bad cores instead of
| selling a 6 core version Apple is probably just scrapping
| it.
| inasio wrote:
| The M1 (ok, in the 7 and 8 GPU core configurations) is in
| the MacBook Air, MacBook Pro, iPad, iMac, and Mac mini...
| arcticbull wrote:
| All of those devices perform exactly the same, as Apple
| has chosen the same power/thermal set point for all of
| them. This is going to start to look a lot different in
| coming years when the larger MacBook Pro transitions - I
| expect 2-3 more models there. Then when the Mac Pro
| transitions I expect another 2-3 models there.
|
| We'll start to see high-binned next-gen Apple Silicon
| parts moving to the MacBook Pro, and Mac Pro, and lower-
| binned parts making their way down-range.
| reissbaker wrote:
| Another commenter (dragontamer) pointed out elsewhere in
| the thread that Apple might be doing what Sony did for
| the PS3 (since Sony also made custom chips that had to
| perform identically in the end product): the strategy
| Sony took was to actually make better chips than
| advertised for the PS3, and disable the extra cores. That
| means that if one of the cores is broken, you can still
| sell it in a PS3; you were going to disable it anyway.
| Yields go up since you can handle a broken core, at the
| cost of some performance for your best-made chips since
| you disable a core on them.
|
| That could make sense for Apple; the M1 is already ~1
| generation ahead of competitors, so axing a bit of
| performance in favor of higher yields doesn't lose you
| any customers, but does cut your costs.
|
| Plus, they definitely do _some_ binning already, as
| mentioned with the 7 vs 8 core GPUs.
| monocasa wrote:
| We know from die shots that the M1 chips aren't disabling
| CPU cores, or any GPU cores other than the 7 vs 8
| binning.
| chippiewill wrote:
| > Does this excuse Intel's form of market segmentation? No.
| They almost certainly disable, for example, hyperthreading on
| cores that support it - just for the segmentation.
|
| I think even this is a bit unfair. Intel's segmentation is
| definitely still overkill, but it's worth bearing in mind
| that the cost of the product is not just the marginal cost of
| the materials and labour.
|
| Most of the cost (especially for intel) is going to be
| upfront costs like R&D on the chip design, and the chip
| foundry process. I don't think it's unreasonable for Intel to
| be able to sell an artificially gimped processor at a lower
| price, because the price came out of thin air in the first
| place.
|
| The point at which this breaks is when Intel doesn't have any
| real competition and uses segmentation as a way to raise
| prices on higher end chips rather than as a way to create
| cheaper SKUs.
| dodobirdlord wrote:
| > The point at which this breaks is when Intel doesn't have
| any real competition and uses segmentation as a way to
| raise prices on higher end chips rather than as a way to
| create cheaper SKUs.
|
| I'm not sure that this is really fair to call broken. This
| sort of fine granularity market segmentation allows Intel
| to maximize revenue by selling at every point along the
| demand curve, getting a computer into each customer's hands
| that meets their needs at a price that they are willing to
| pay. Higher prices on the high end enables lower prices on
| the low end. If Intel chose to split the difference and
| sell a small number of standard SKUs in the middle of the
| price range, it would benefit those at the high end and
| harm those at the low end. Obviously people here on HN have
| a particular bias on this tradeoff, but it's important to
| keep things in perspective. Fusing off features on lower-
| priced SKUs allows those SKUs to be sold at that price
| point _at all_. If those SKUs cannibalized demand for their
| higher tier SKUs, they would just have to be dropped from
| the market.
|
| Obviously Intel is not a charity, and they're not doing
| this for public benefit, but that doesn't mean it doesn't
| _have_ a public benefit. Enabling sellers to sell products
| at the prices that people are willing/able to pay is good
| for market efficiency, since otherwise vendors
| have to refuse some less profitable but still profitable
| sales.
|
| It is unfortunate though that this has led to ECC support
| being excluded from consumer devices.
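|
| A toy numerical version of that demand-curve argument (the two
| buyer groups and their willingness to pay are invented purely
| for illustration):
|
| segments = {"enthusiast": (400, 1_000), "budget": (150, 5_000)}  # (max price, buyers)
| high_wtp, high_n = segments["enthusiast"]
| low_wtp, low_n = segments["budget"]
|
| one_price_high = high_wtp * high_n              # price at $400: only enthusiasts buy
| one_price_low = low_wtp * (high_n + low_n)      # price at $150: everyone buys
| segmented = high_wtp * high_n + low_wtp * low_n  # two SKUs: each group pays its own price
|
| print(one_price_high, one_price_low, segmented)
| # 400000 900000 1150000 -> segmentation earns the most and still serves the low end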
| arcticbull wrote:
| Without knowing what the silicon lottery distribution
| actually looks like we can't really say that.
|
| > "... but it's worth bearing in mind that the cost of the
| product is not just the marginal cost of the materials and
| labour."
|
| Yes, you could choose to amortize it over every product but
| then you're selling each CPU for the same price no matter
| which functional units happen to be defective on a given
| part.
|
| Since that's not a great strategy (who wants to pay the
| same for a 12 core part as a 4 core part because the amount
| of sand that went into it is the same?) you then begin to
| assign more value to the parts with more function, do you
| not? And then this turns into a gradient. And eventually,
| you charge very little for the parts that only reception
| PCs require, and a lot more for the ones that perform much
| better.
|
| Once you get to diminishing returns there's going to be a
| demographic you can charge vastly more for that last 1%
| juice, because either they want to flex or at their scale
| it matters.
|
| Pretty soon once you get to the end of the thought exercise
| it starts to look an awful lot like Intel's line-up.
|
| I think what folks don't realize is even now, Intel 10nm
| fully functional yields are ~50%. That means the other half
| of those parts, if we're lucky, can be tested and carved up
| to lower bins.
|
| Even within the "good" 50% certain parts are going to be
| able to perform much better than others.
| xyzzy21 wrote:
| The "reason" is yield management combined with inventory
| management.
|
| The i3 through i9 are generally the exact same silicon. But
| yields are always variable. If you took the raw yield the
| actual i9 per wafer might only be 10%-20% which would not be
| economically viable.
|
| So designed into EVERY Intel product (and generally every other
| semiconductor company's products) are "fuses" and circuitry
| that can re-map and re-program out failed elements of the
| product die.
|
| So a failed i9 can AND DOES become i7, i5, or i3. There is no
| native i3 processor. The i3 is merely an i9 that has 6 failed
| cores or 6 "canceled" cores (for inventory/market supply
| management). Same goes for i5 and i7. They are "semi-failed"
| i9s!
|
| This is how the industry works. Memories work in similar ways
| for Flash or DRAM: there is a top-end product which is designed
| with either spare rows or columns as well as half-array and
| 3/4-array map-out fuses. Further there is speed binning with a
| premium on EMPIRICALLY faster parts (you can NOT predict or
| control all to be fast - it's a Bell curve distribution like
| most EVERYTHING ELSE in the universe)
|
| With this, nominal total yields can be in the 90% range.
| Without it, pretty much NO processor or memory chip would be
| economically viable. The segmentation is as much created to
| support this reality OF PHYSICS and ENGINEERING as it is to
| maximize profits.
|
| So generally, to use your example, a non-ECC processor is a
| regular processor "who's" ECC logic has failed and is
| inoperable. Similar for different cache size versions - part of
| the cache memory array has failed on smaller cache parts.
|
| So rather than trash the entire die which earns $0 (and
| actually costs money to trash), it has some fuses blown, gets
| packaged and becomes a non-ECC processor which for the right
| customer is 100% OK so that it earns something less than the
| ECC version but at an acceptable discount.
|
| When I worked at Intel, we had Commercial, Industrial and
| Military environmental plus extra ones for "emergencies: e.g.
| parts that completed 80% of military qual and then failed -
| hence the "Express" class part.
|
| We also had 10 ns speed bins which create 5-7 bins, and then
| the failed half- and quarter-array parts meant 3 more. So 4x7x3
| = 84 possible product just for the memory parts I worked on.
|
| For processors you could easily have separate categories for
| core failures, for ECC failures, for FPU/CPU failures. That
| takes you up to 100-200 easy. If you are simultaneously selling
| 2-3 technology generations (tick-tock or tick-tick-tock), that
| gets you to 500-1000 easy.
|
| This is about "portfolio effect" to maximize profits while
| still living with the harsh realities that the laws of physics
| impose upon semiconductor manufacturing. You don't rely on a
| single version and you don't toss out imperfect parts.
|
| BTW how do you think IPA and sour beers came about?? Because of
| market research? Or because someone had a whole lot of Epic
| Fail beer brew that they needed to get rid of??
|
| It was the latter originally, plus inspired marketing. And then
| people realized they could intentionally sell schlock made with
| looser process controls and make even more money!
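|
| The SKU count above is just a Cartesian product of bin
| dimensions; a quick sketch of the 4 x 7 x 3 = 84 arithmetic
| (the bin labels are illustrative):
|
| from itertools import product
|
| env_classes = ["commercial", "industrial", "military", "express"]
| speed_bins = [f"speed_bin_{i}" for i in range(7)]
| array_sizes = ["full", "half", "quarter"]
|
| skus = list(product(env_classes, speed_bins, array_sizes))
| print(len(skus))  # 4 * 7 * 3 = 84 distinct parts from one design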
| FredPret wrote:
| I really appreciated this explanation, thank you
| grumpyprole wrote:
| > So generally, to use your example, a non-ECC processor is a
| regular processor "who's" ECC logic has failed and is
| inoperable.
|
| I find that particular statement, very hard to believe.
| kabdib wrote:
| Right, I'm doubtful that the die area consumed by the
| chip's ECC circuitry would fail often enough to support a
| "non-ECC" manufacturing bin.
| tambre wrote:
| > So generally, to use your example, a non-ECC processor is a
| regular processor "who's" ECC logic has failed and is
| inoperable.
|
| But no high performance mainstream desktop Intel CPU supports
| ECC [0]. Meanwhile AMD doesn't have any that lack it.
|
| What gives? Surely Intel's ECC logic doesn't have such a huge
| defect ratio that Intel can't have even a single regular
| mainstream part with ECC.
|
| At work I need a fairly low-performance CPU with decent
| integrated graphics. Intel's iGPUs are great were it not for
| the lack of any parts with ECC. Nevermind that finding a non-
| server Intel motherboard with ECC support would restrict the
| choice such that there'd likely be none with also other
| desired features.
|
| [0] https://ark.intel.com/content/www/us/en/ark/search/featur
| efi...
| outside1234 wrote:
| Ok - I was with you until the IPA part. :)
|
| IPA came about because hops are a natural preservative and
| they needed to ship the beer all the way to India from
| England.
|
| Sour Beer is just air fermented beer ala Sourdough Bread. It
| is actually harder to make Sour Beer than "normal" beer (it
| does not come out of the failure of normal beer fermentation
| either).
|
| Sorry for being pedantic. :)
| babypuncher wrote:
| Market segmentation both raises and lowers prices. I don't
| think it is inherently bad. The low cost of entry level chips
| is only viable because of the high cost of premium chips. It is
| also critical in getting more viable chips out of your wafers,
| as defective parts of the silicon can be disabled and the chip
| placed in a lower SKU.
|
| If you eliminate the market segmentation practices, then the
| price of the small number of remaining SKUs will regress to the
| mean. This may save wealthy buyers money as they get more
| features for less cash, but poor buyers get left out completely
| as they can no longer afford anything.
|
| I do agree that Intel takes this to an absurd degree and should
| rein it in to a level more comparable to AMD. With ECC being
| mandatory in DDR5, I would expect all Intel chips to support it
| within a few years.
| deckard1 wrote:
| Just to note, AMD does every single thing you blame Intel for.
|
| AMD recently dicked b350/x370 chipset owners by sending
| motherboard manufacturers a memo telling them _not_ to support
| Zen 3 (5000 series) Ryzen CPUs on their older chipsets.[1] This
| was _after_ AsRock sent out a beta BIOS which proved that 5000
| series CPUs worked fine on b350 chipsets. Today, AsRock's beta
| BIOS _still_ isn't on their website and it's nearly a year
| after they put it out.
|
| Also, Ryzen APU CPUs do _not_ support ECC. Only the PRO branded
| versions. Which only exist as A) OEM laptop integration chips,
| or B) OEM desktop chips which can only be found outside North
| America (think AliExpress, or random sellers on eBay).
|
| It's more accurate to say AsRock supports ECC on Ryzen. And
| sometimes Asus. They are also incredibly cagey about exactly
| what level of ECC they support.
|
| Ryzen only supports UDIMMs. Not the cheaper RDIMMs. There are
| literally 2-3 models of 32GB ECC UDIMMs on the market. One of
| which is still labeled "prototype" on Micron's website, last I
| checked. Even if your CPU supports ECC, it takes the entire
| market to bring it to fruition. If no one is buying ECC
| (because non ECC will always be cheaper), then the market for
| those chips and motherboards won't exist. Want IPMI on Ryzen?
| You're stuck with AsRock Rack or Asus Pro WS X570-ACE. Go check
| the prices on those. Factor in the UDIMM ECC. It's not cheaper
| than Xeon.
|
| [1] https://wccftech.com/amd-warns-motherboard-makers-
| offering-r...
| tw04 wrote:
| >AMD recently dicked b350/x370 chipset owners by sending
| motherboard manufacturers a memo telling them not to support
| Zen 3 (5000 series) Ryzen CPUs on their older chipsets.[1]
| This was after AsRock sent out a beta BIOS which proved that
| 5000 series CPUs worked fine on b350 chipsets. Today,
| AsRock's beta BIOS still isn't on their website and it's
| nearly a year after they put it out.
|
| And they stated their reasoning: The average AMD 400 Series
| motherboard has key technical advantages over the average AMD
| 300 Series motherboard, including: VRM configuration, memory
| trace topology, and PCB layers
|
| Which is entirely reasonable, and accurate if you look at the
| quality of the average X370 motherboard compared to 400+.
|
| And no, AMD does not do everything I described. Which Ryzen
| model doesn't have SMT? I see it on the 3, the 5, the 7, and
| the 9. Which model doesn't have turbo boost? I see it on the
| 3, the 5, the 7, and the 9.
|
| As for ECC: I don't believe I said they're perfect, but it's
| a heck of a lot better than what Intel has to offer...
| jandrese wrote:
| The worst part is that adding ECC support should only
| increase the price of RAM by about 13%, which given that the
| RAM modules are about $50-$100 on most builds works out to
| $7-$13 to the total cost of the machine. _Every machine
| should come with ECC_. It's such cheap insurance. But
| because the chip manufacturers have to make more money by
| artificially segmenting the market almost nobody runs ECC on
| home machines.
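|
| The ~13% figure lines up with the extra silicon ECC needs: one
| more DRAM chip per eight data chips (72 bits stored per 64-bit
| word). A quick sketch of that overhead, using the module prices
| quoted above:
|
| ecc_overhead = 1 / 8  # one extra chip per eight -> 12.5% more DRAM
| for module_price in (50, 100):
|     print(f"${module_price} module -> ~${module_price * ecc_overhead:.0f} extra for ECC")
| # $50 module -> ~$6 extra for ECC
| # $100 module -> ~$12 extra for ECC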
| adamweld wrote:
| 13% is huge in a low-margin, highly competitive field. The
| price difference comes down more to economies of scale and
| less to artificial segmentation.
| PaulDavisThe1st wrote:
| But it's 13% of a tiny component of an overall system.
| Not the same as 13% of the total cost.
|
| Sure, if you're only buying memory modules, maybe you
| would go for the $7 savings. But as part of an overall
| system, nobody is even going to notice.
| qweqwweqwe-90i wrote:
| I notice, so there goes your argument down the drain.
| jandrese wrote:
| It is 13% of one of the cheaper components. Back in the
| 80s when all memory was expensive there was something of
| an excuse, but today we are needlessly trading the
| possibility for silent corruption over the multi-year
| lifetime of the machine for a couple of coffees. And
| worse, we make it really expensive and difficult for
| people who do want to reduce their risk by artificially
| segmenting the market.
| officeplant wrote:
| >OEM desktop chips which can only be found outside North
| America (think AliExpress, or random sellers on eBay).
|
| Lenovo offered Pro Series Ryzen APU small form factor PCs.
| Like the Lenovo ThinkCentre M715q with a 2400GE. I believe HP
| offered them as well with the 2400GE at some point.
| deckard1 wrote:
| by desktop I meant non-integrated/embedded. A standalone
| CPU you could buy and plop into any standard ATX/mATX/ITX
| motherboard.
|
| But even if you have a Pro embedded, it doesn't mean you
| get ECC. My Lenovo ThinkPad has a PRO 4750U. But they
| solder on one non-ECC DIMM. So it's rather pointless. Plus,
| it's SODIMM. So that's yet another factor at play when
| choosing RAM.
|
| The only real exception that I know of is the recent 5000G
| APUs _may_ support ECC. But this seems to be borderline
| rumor /speculation at this point. Level1Techs made the
| claim on YouTube and were supposed to have a follow up. Not
| sure if that ever happened.
| tails4e wrote:
| It is a bit much that ECC is only available on Xeons, as ECC is
| incredibly cheap in terms of circuitry. Glad to see AMD is
| including it on midrange products.
| a012 wrote:
| Isn't this always their strategy? But now they just hand out the
| discount code more easily to everyone.
| libertine wrote:
| It's one of the tools in Intel's tool belt; they have used
| shadier tools in the past - an aggressive sales force,
| manipulating benchmarks - which were probably the cause of
| AMD's fall.
| bastardoperator wrote:
| Interesting strategy. Based on my interactions with AMD, the work
| we're seeing materialize today was planned 5-7 years ago. I
| worked with the GPU team in Florida and they laid out at a high
| level how AMD planned to attack Intel at the business and
| consumer level. I'm not sure if it's viable, but when Intel is
| hiring back
| old engineers and slashing prices it makes me think they lack a
| long term plan.
| ulzeraj wrote:
| Looking forward to those cheap Xeons then. I'm eyeing an HP Gen
| 10 Plus and trying to find out if it's better than a Ryzen Pro
| build in the same price range.
| syshum wrote:
| In anti-trust terms, would this not be "dumping"?
| st_goliath wrote:
| Or maybe you could call it "being forced to return to
| reasonable prices now that their monopoly suddenly has
| competition again"?
|
| Well, at least that's what I remember comments on HN cheerfully
| proclaiming would happen back when Ryzen & Threadripper were
| launched.
| worrycue wrote:
| "Reasonable price" has always been what the market will bare.
| This is true for all companies.
| Tuna-Fish wrote:
| No. Dumping is selling below the cost of production, with the
| intent of driving someone else out of a market. This is just
| normal price competition. It's a good thing.
| Apes wrote:
| A monopoly becomes illegal when it negatively impacts
| consumers.
|
| If a company is able to lower costs to better compete with
| another company taking market share, wouldn't that imply
| that:
|
| 1. They had a defacto monopoly in the sector that allowed
| them to price above the fair market value.
|
| 2. They harmed consumers by pricing above fair market value.
| kinghajj wrote:
| No, it simply means that the market conditions changed. The
| price could very well have been a fair market value before,
| and still is a fair market value after; and the delta of
| these prices reflects the impact of the new conditions.
| jes wrote:
| What's happening with respect to any class-action lawsuits
| against Intel for the performance-damaging Spectre / Meltdown
| mitigations?
|
| I had expected these lawsuits to be significant, yet I haven't
| heard much about them.
| zamadatix wrote:
| I haven't heard much about them in a long time either but even
| if I had I wouldn't be expecting anything significant. Meltdown
| affected Intel, IBM, and ARM processors while spectre affected
| any processor that used branch prediction up until that point.
| Both were patched the best they could be on all target
| platforms via combinations of microcode and kernel patches.
|
| Significant class action suits tend to result from
| intentionally hidden and operated longstanding fraud or
| discrimination such as Enron or the tobacco settlements or
| Volkswagen. Even if Intel and every other manufacturer were
| found negligent for some part of Spectre/Meltdown, it wasn't an
| industry-wide, multi-decade conspiracy to defraud.
| api wrote:
| They are also doubling down on fab tech to catch up with TSMC.
| Markets working as intended.
| mettamage wrote:
| Well... I find that a bit of a stretch. I'd rather say we
| happen to be lucky.
|
| What if TSMC was a company that was about as good as Intel on
| specs and price. Would the market be "working" then?
|
| A new player might come along. That new player would need to
| have 20 billion dollars of money to play along though.
| roenxi wrote:
| > That new player would need to have 20 billion dollars of
| money to play along though.
|
| In 2020 Uber's net income _rose_ to losing only about 7
| billion dollars. And they are competing for a market far less
| interesting and defensible than advanced semiconductors.
|
| Competition would arise.
| netcan wrote:
| I don't disagree in the abstract. Massive entry costs and all
| sorts of structures _can_ and do obstruct the "as intended"
| mechanics a lot of the time. It's a struggle to term the
| revenue dynamics of an Alphabet, FB or JPM as "markets" at
| all.
|
| Chips though... chips are a market and it is working as a
| market. IMO chips are a rare example of Real economics in the
| modern economy, as opposed to the intangible-only economy
| that used to be mostly banking. A notable feature of the chip
| market is the persistent _demand,_ the ability to demand
| /consume more computing than chip manufacturers can produce.
|
| Compare to cars, say 100 years ago. Most people didn't have
| one yet. Demand could keep up with supply, markets grow fast,
| and also make consistent efficiency gains,. Eventually
| though, the market saturates. People have cars and just need
| periodic replacements. The market isn't growing. People still
| want lower prices, or shinier cars. If they get lower prices,
| the market will shrink. Efficiency gains in mature markets
| _can_ degrow a market, if demand is saturated. If car
| factories become twice as efficient, we'll probably have
| fewer of them. Our demand is not that flexible.
|
| Same thing happened with smartphones and laptops, to a
| degree. They do what they do well and we only need one each.
|
| In order to have a learning curve anything like Moore's law,
| the chip market has to grow every year. That requires a lot
| of demand, to offset all the efficiency gains. I don't think
| a lot of markets have the demand potential to support a
| Moore's law. In this scenario, market's working pretty well.
| AussieWog93 wrote:
| >That new player would need to have 20 billion dollars of
| money to play along though.
|
| From what I understand, quite a few national governments are
| at least looking into setting up their own local chip plants
| since semiconductors have become a critical industry. On that
| scale, $20b is not a huge speed hump.
| VortexDream wrote:
| All of these projects AFAIK involve enticing a company like
| TSMC to build new plants, not building their own competitor
| in the market. I don't think there's any appetite in Europe
| to invest tens of billions in building their own chip
| industry.
| ddalex wrote:
| > appetite in Europe to invest tens of billions in
| building their own chip industry.
|
| Why would they? They build the machines that create the
| chips. TSMC would merely manage the chip building orders
| and the consumables. In case of national emergency I
| doubt that any government would have qualms about
| nationalising the factories.
| MangoCoffee wrote:
| The foundry ecosystem forced Intel's hand. We're going to see
| more and more companies developing their own chips and
| outsourcing production to TSMC and Samsung.
|
| Intel's chips no longer fit what the market needs. Apple's M1,
| YouTube's own video-transcoding chips, AWS's Graviton and
| Google's own chip for the Pixel 6.
|
| We have reached the point where an off-the-shelf chip isn't
| going to fit the problem we are trying to solve. The ability to
| make a custom chip that fits your product/bottleneck matters
| more than the price, and the foundry ecosystem is reducing the
| cost of custom chips.
|
| I hope Intel IDM 2.0 can take off. We need more foundries that
| can do high-end nodes.
| giuliomagnifico wrote:
| Intel needs better chips (especially with lower energy
| consumption), not lower prices. If Intel lowers prices, they
| will have less cash for R&D, and that is Intel's trouble right
| now: not very competitive chips. And we are back at the
| beginning of the circle.
|
| I hope they will make an internal review of their
| offices/laboratories/whatever; it is not a price issue with the
| chips, it is a performance and technical issue.
| uluyol wrote:
| There have been big leadership shakeups at Intel over the past
| few months (see the CEO change). Long term, if they execute
| well, they should be back in a good position technically. In
| the meantime, their only option to offer competitive perf/$ is
| to lower cost.
|
| AMD managed to recover with a much smaller budget than Intel. I
| don't think that lower margins for a couple of years will
| prevent a recovery long term.
| colinmhayes wrote:
| Intel's profit is double AMD's revenue. Cash flow is not their
| problem.
| NewLogic wrote:
| The US taxpayer will backstop Intel no matter what, purely
| because of the fab business.
| worrycue wrote:
| Quite sure their bean counters have done the math. Intel, like
| any company, aims for max profitability given market conditions
| - i.e. Intel is only dropping prices because it maximizes their
| profit.
| Sohcahtoa82 wrote:
| Bean counters are often short-sighted, focusing on quarterly
| reports to keep shareholders happy.
| HelloNurse wrote:
| Don't assume that increased R&D spending leads to better
| products and/or reduced time to market, at least quickly.
|
| There are consistent signs of technical decline at Intel, and
| "reviewing" underperforming units into oblivion is likely to
| drain away talent and destroy more value faster.
|
| EDIT: other comments point out that Intel is sitting on an
| awful lot of cash. It can be safely assumed that Intel is
| spending as much as possibly useful on R&D and that their
| results are limited by talent and strategic choices, not by
| cheapness.
| [deleted]
| hrgiger wrote:
| When their new fab is ready they might have an even better
| advantage. Instruction-wise I believe they hold a small
| advantage as well, yet I still haven't seen any incredible
| benchmark where AVX-512 pays off in performance. I just built
| two Gen 3 EPYC servers for my homelab (awaiting delivery), but
| if they do a nice surprise with the upcoming Sapphire Rapids
| and CXL pricing, I will be willing to sell one of the servers
| and switch to Intel. Optane is not available for EPYC, but I
| think CXL will provide more pmem availability.
| rushiadhida wrote:
At a time when there is a shortage of chips across the globe,
isn't it a good idea to increase production and diversify into
more verticals? Any experts here who can share some thoughts?
| wmf wrote:
| Increasing production takes years.
| lvl100 wrote:
| Intel is NOT competing against AMD only. In the past couple of
| years, we've seen a number of big tech companies developing their
| own chips. Focusing on AMD would be quite myopic from a strategic
| pov. This market is only getting more competitive. Either you
| compete on performance or price.
| StillBored wrote:
Because in the past no one could justify competing with Intel.
But between the Xeon parts with huge profit margins, and
companies like Apple which tended to buy only the high-margin
parts for their devices, the business people realized that it
was cheaper to produce their own. Which is outrageous, if you
think about it, given the amount of engineering investment
required to build a competitive product. The fact that a slice
of the customer base has decided that the market is so broken
that the financials work out better if they avoid Intel says
they are way past the "too greedy" stage.
| Maakuth wrote:
| These companies are already not paying anything close to the
| list price, though.
| MangoCoffee wrote:
It's not about the price but the ability to create a chip that
fits what you need. For example: YouTube is now building its
own video-transcoding chips.

The biggest cost of making a chip is the foundry, and the
foundry ecosystem has reduced that cost to the point where
anyone can go fabless and just outsource to a foundry like
TSMC or Samsung.
|
| https://arstechnica.com/gadgets/2021/04/youtube-is-now-
| build...
| the-dude wrote:
| And still they are developing their own ARMs.
| swalsh wrote:
| After purchasing an M1, i'm starting to realize how viable ARM
| is as a main platform. Nearly everything I want to run on it
| has a natively built version, and runs great on it. I could
| easily move anything I've built to a server running ARM with
little frustration. I think that may be a big part of the
future.
| lvl100 wrote:
Depending on the next iteration of Apple silicon, I am
| seriously considering a Mac Mini farm for compute heavy
| tasks.
| skohan wrote:
| Would that really be a competitive option for your use case
| over something like graviton?
|
| It is kind of a wild state of affairs that as good a chip
| as M1 isn't available as commodity hardware.
| omegalulw wrote:
I think we should look one step further - RISC-V. Open
| source is the best way to ensure consumers don't get shafted
| by someone doing the Intel model again or Apple keeping M1
| limited to their devices.
| skohan wrote:
| Well I think it's clear that Apple will keep M1 to
| themselves. But I would imagine other vendors will come out
| with Arm offerings to compete.
|
| I agree a truly open-source option would be desirable.
| mhh__ wrote:
| You seem to be making the classic mistake of thinking that
| a given RISC-V processor is open source. The standard is
| open, the processor's source "code" (design) doesn't have
| to be.
|
| This does not mean RISC-V use wouldn't be a good thing, as
| it prevents a whole boatload of legal issues, but it just
| isn't what a lot of people seem to think it is.
|
| ARM could end up being a better ISA in the very high-clock
high-IPC domain; it remains to be seen.
| nicce wrote:
Open source alone is not enough. Look at Chromium:
controlled by a single company that decides what goes in.
| ashtonkem wrote:
| I've been eyeing Graviton for our server work loads. On paper
| it's price competitive.
| minimaul wrote:
| We've been moving more and more to it. It works, and
| surprisingly well. It's not quite up there for absolute
| single thread performance in our experience, but price/perf
| is excellent.
|
| edit: really, I'm just waiting for Graviton 2 Fargate
| support, and then I'll be able to move a _lot_ of
| workloads.
| jes wrote:
| Same. I've been loving my M1 Mac Mini for almost a year.
| Cool, silent, fast and compact.
| dippersauce wrote:
Before the M1 my only exposure to ARM had been low-power SBCs
and Android devices, and the experience was mediocre in the
"just works" department. Poor hardware support, and a lack of
proprietary software support. Performance was also lacking.
Apple's tight integration and high-end CPUs have resulted in
a vastly better experience, but I want to have more options
than just macOS and MacBooks. I think we're trending in the
right direction, but it's going to be a while (5 years IMO)
before we see anything competitive with the M-series chips
from other major market players. If Microsoft could fix their
frankly horrid x86 compatibility on aarch64 devices, things
would speed along nicely I think.
| nightski wrote:
| M1 is great but not everyone wants a SoC. I like the
| ability to swap out parts in my PC build.
| webmobdev wrote:
| And the freedom to run full featured alternative OSes on
| the bare machine.
| m4rtink wrote:
All the M1 machines have built-in RAM, right? And GPU as
| well.
| tenebrisalietum wrote:
| > If Microsoft could fix their frankly horrid x86
| compatibility on aarch64 device
|
| I don't think Microsoft is the real problem there, though.
|
| NT was developed to be portable and was working on
| architectures other than x86 in the beginning.
|
| So it was interesting when I heard things about "Windows on
| ARM" half a decade ago--and then the Surface RT. The RT was
| crap, but it did have real Windows NT working on non-Intel
| ARM, as was the OS on their Windows Series 10 phones or
| whatever.
|
| So Microsoft is already there on an OS level. It's the big
| software vendors that have to be corralled to switch
| somehow (Autodesk, Adobe, etc.) Honestly .NET overall was
| probably at least in part Microsoft trying to get
| developers on something more CPU-agnostic to reduce
| dependence on x86.
| q-big wrote:
| > I don't think Microsoft is the real problem there,
| though.
|
| > [...]
|
| > So it was interesting when I heard things about
| "Windows on ARM" half a decade ago--and then the Surface
| RT. The RT was crap, but it did have real Windows NT
| working on non-Intel ARM, as was the OS on their Windows
| Series 10 phones or whatever.
|
| In this specific case, Microsoft _is_ the real problem:
| Microsoft deeply locked down the Surface RT; you needed a
| jailbreak to run unsigned applications on it.
| atq2119 wrote:
| Parent's point was that Apple made the switch without
having to get software vendors on board first, due to
excellent emulation of x86 on their ARM chips.
| webmobdev wrote:
| Apple has more control over developers - these idiots pay
| them money for the "privilege" of developing on it. And
Apple started by deprecating support for all 32-bit apps.
| That forced many developers to refactor or port their
| code. The x86 emulation support will end in the near
| future and will force the remaining developers onto the
| ARM platform.
| cs2733 wrote:
| Rosetta 1 was supported for 6 years. I don't think that's
| too bad.
| codeflo wrote:
| I'm not so optimistic. There are some technical things
| Microsoft did poorly when going from x86 to x86-64, which
| in my opinion delayed the transition of a lot of software
| by a decade. And this is with processors that can run
| both instruction sets natively, where no actual software
| emulation was required.
|
| To give some context (this started with Windows Server
| 2003 64-bit and is still how it works in Windows 11):
| Instead of implementing fat binaries like OS X did, they
| decided to run old x86 applications in a virtualized
| filesystem where they see different files in the same
| logical path. This results in double the DLL hell
| nightmare, with lots of confusing issues around which
process sees which file where. For many use cases around
plugins, this made a gradual transition impossible. (Case
in point: The memory-hungry Visual Studio is currently
| _still_ 32-bit. Next release will hopefully finally make
| the switch.)
|
| Also, it's surprising how much stuff in Windows depends
| on loading unknown DLLs into your process, like showing
| the printer dialog. So you run into these problems all
| the time.
|
| Have they learned their lesson? It doesn't look like it.
| Last I checked, x86 on ARM uses the exact same system as
| x86 on x86-64. If they ever emulate x86-64 the same way,
| that's _triple_ DLL hell right there. And I don't think
| they'll get a decade to sort things out this time around.
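| To make the "different files in the same logical path" point
| concrete, here is a minimal sketch of WOW64 file-system
| redirection, assuming a 32-bit build running on 64-bit Windows
| (Wow64DisableWow64FsRedirection is the documented kernel32
| toggle):
|
|   /* wow64_demo.c - compile as 32-bit, run on 64-bit Windows */
|   #include <windows.h>
|   #include <stdio.h>
|
|   /* Return the size of the file behind a path, 0 if it can't
|      be opened. */
|   static DWORD size_of(const char *path) {
|       HANDLE h = CreateFileA(path, GENERIC_READ, FILE_SHARE_READ,
|                              NULL, OPEN_EXISTING, 0, NULL);
|       if (h == INVALID_HANDLE_VALUE) return 0;
|       DWORD size = GetFileSize(h, NULL);
|       CloseHandle(h);
|       return size;
|   }
|
|   int main(void) {
|       const char *path = "C:\\Windows\\System32\\kernel32.dll";
|
|       /* Redirection is on by default for 32-bit processes, so
|          this actually opens the copy in SysWOW64. */
|       printf("redirected: %lu bytes\n",
|              (unsigned long)size_of(path));
|
|       PVOID old = NULL;
|       if (Wow64DisableWow64FsRedirection(&old)) {
|           /* Same logical path, now served from the real,
|              64-bit System32. */
|           printf("native:     %lu bytes\n",
|                  (unsigned long)size_of(path));
|           Wow64RevertWow64FsRedirection(old);
|       }
|       return 0;
|   }
|
| The two sizes differ because the 32-bit and 64-bit kernel32.dll
| are different binaries behind the same path, which is exactly
| the plugin/DLL confusion described above.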
| MarkSweep wrote:
| Microsoft announced ARM64EC. It's an ABI for ARM64 that
| is similar to x64. They say it allows mixing x64 and
| ARM64 DLLs in the same process.
|
| https://blogs.windows.com/windowsdeveloper/2021/06/28/ann
| oun...
| robocat wrote:
Cool - perhaps that opens the way for an x64+ARM
big.LITTLE processor, with a few hot, fast x64 AMD cores
(big) and a lot of slow, efficient ARM cores (little).
| vanilla_nut wrote:
| I very nearly want them to double down on this disastrous
| strategy so in 3-5 years we'll all be saved from Windows
| by an MS-run Linux distro (with windows theming,
| naturally) that just runs Wine+some MS internal goodies
| for backwards compat. It's really not that different from
| Apple's approach with Rosetta 2 in M1.
| philistine wrote:
| It's crazy that this now aligns with Microsoft's goals
| and could conceivably happen.
|
| Microsoft has the capacity to realize that the value of
| Windows is not the codebase, but the compatibility. They
| could let the Linux subsystem swallow Windows and wrap
| Windows itself inside it.
|
| However, I believe we'll continue to see their colocation
| system instead, where Windows and Linux are both wrapped
| inside a system managing both.
| yjftsjthsd-h wrote:
| ... Windows subsystem for Windows? (Although I guess
| maybe wow64 was that already)
| mmerlin wrote:
| Internet Explorer became Chromium under the hood (MS
| Edge)
|
| Windows might be fully Linux under the hood one day!
|
| WSL2 is one of the early bridges across the divide.
| emptyparadise wrote:
| Microsoft-made Linux distribution finally making Linux on
| the desktop happen, did somebody wish for it on a monkey
| paw?
| plushpuffin wrote:
| What you described is actually closer to Apple's strategy
| for moving from Mac OS 9 to Mac OS X, with a virtual
| machine for running classic apps on the new OS.
| mschuster91 wrote:
| > NT was developed to be portable and was working on
| architectures other than x86 in the beginning.
|
| NT itself yes, but the userland? Not in the slightest.
| Apple provided Rosetta runtime translation at each arch
| transition, MS did not. As a result, no company even
| thought about switching PCs over to ARM which meant that
| there also was no incentive for the big players you
| mentioned to port their software over to RT.
| mnadkvlb wrote:
I have tried buying Microsoft ARM computers for the last two
generations now, both the Surface Pro X with the Qualcomm SQ1
and SQ2 as well as a Yoga Book 5G.

Windows performance on these platforms is so trash, you feel
like going back ten years on ultrabooks. Even their own apps
are not optimized, and some, like Visual Studio, didn't even
run.

Compare that to the built-in x86 emulation on the Apple M1:
it performs so close to native performance on a 1000-buck
MacBook Air.

Microsoft definitely has different priorities, like how to
change settings for a user without their permission or how to
hide settings so users have less choice. The Windows
experience has gone so downhill since Win7.
| freemint wrote:
| Downhill compared to Win7, yes. Downhill ever since, no.
Windows 8 was worse than 10.
| mnadkvlb wrote:
I agree. But it seems like Microsoft is back to its old habits
in Win11, e.g. browser settings having different standards for
Edge vs. others. HN
discussion: https://news.ycombinator.com/item?id=28225043
| hdjjhhvvhga wrote:
| Intel IS competing against AMD mainly in the server space now.
| Of course at some point ARM and RISC-V servers will become
| mainstream, but it will take years. Intel is taking action now
| and it's aimed directly at AMD.
| guiriduro wrote:
| ARM is already there in the cloud, cf AWS Graviton. With
| Apple's M1 in the laptop/desktop (mac mini) space, ARM and
| its superior power/performance ratio is a significant
| contender for mainstream compute now.
| pimeys wrote:
If only I could buy a motherboard and a fast ARM CPU for my
workstation and install NixOS on it - I'd be willing to try it
out.
| smoldesu wrote:
| I know a few VPS distributors, and I've heard pretty mixed
| things about ARM's viability in the server space. Not only
| is it pretty expensive relative to x86, it's also pretty
| slow: you won't be getting SIMD instructions like AVX,
| which are _huge_ in the server space. The only thing ARM
| has going for it is low IPC, but I really fail to see many
| applications where you could benefit from that, much less
| one where it would be worth the price premium over x86.
|
| Maybe in 5 or 10 years, ARM will be viable. But by then,
| we'll all be rocking RISC-V CPUs because someone realized
that accelerating for specialized workloads _isn't_ a
| crock of shit when 90% of your workload is video decoding.
| Hikikomori wrote:
Maybe read a bit about Graviton? All Intel/AMD instances in
AWS have Graviton processors handling network/disk IO unless
it's a very old instance type, and a large amount of AWS's own
services run on it as well.
| monocasa wrote:
| The rumor is that Graviton instances are being sold at
| below cost for Amazon to put negotiating pressure on
| Intel/AMD.
|
| And that M1 only looks as good as it does because of
| Apple's de facto monopoly on TSMC 5nm. That AMD cores are
| more than competitive at the same node.
| javchz wrote:
| Yeah, Custom ARM / RISC-V chips or even ASIC/FPGAs could start
| threatening x86/AMD64 for datacenters and "clouds" sooner than
| we think.
| dragontamer wrote:
| ARM maybe. But I'm not convinced that the ARM-alliance
| (Fujitsu, Apple, Ampere, Neoverse) is quite as unified as you
| might think. Apple has no apparent goals for cloud/servers,
| Fujitsu seems entirely focused on the Japanese market, and
| Ampere Altra isn't reaching critical mass (Amazon prefers a
| Neoverse rather than joining forces with Ampere / using
| Altra).
|
| As long as the ARM-community is fragmented, their
| research/investments won't really be as aligned as Xeon
| and/or EPYC servers.
|
| HiFive / RISC-V aren't anywhere close to the server-tier.
| skohan wrote:
| What about graviton? Isn't it already competitive with x86
| on price/performance?
| dragontamer wrote:
| Amazon doesn't offer graviton in the open market. You can
| only get those chips if you buy AWS.
|
| Graviton is a standard N1 neoverse core, which is
| slightly slower than a Skylake Xeon / Zen2 EPYC. There's
| hope that N2 will be faster, but even if it is, we don't
| really have an apples-to-apples comparison available
| (since Amazon doesn't sell that chip).
|
| The most likely source of Neoverse cores is the Ampere
| Altra, which is expected to have N2 cores shipping
| eventually. As usual though: since Ampere has lower
| shipping volume than other companies, the motherboards
| are very expensive.
|
x86 (both Intel and AMD) has extremely high volumes, so
from a TCO perspective it's hard to beat them, especially
when you factor motherboard prices into the mix.
| kumarvvr wrote:
| > we've seen a number of big tech companies developing their
| own chips
|
| Fortunately, only a handful of companies have the resources to
| do that.
|
| Unfortunately for Intel, those handful of companies are the
| biggest and only customers for large scale server farms.
| Guthur wrote:
| Lol, this stinks of PR BS.
|
| It's Intel's vertical integration that has hamstrung its chip
| design for about half a decade. 10nm transition was an
| unmitigated disaster and because of it Intel has haemorrhaged
| technical dominance and has only really maintained market
| dominance due to entrenched and slow moving decision cycles
within the data centre space and to a lesser extent the consumer
| market.
|
Intel will likely stabilise over time but they won't enjoy the
| market dominance they had for most of the last decade.
| vanilla_nut wrote:
| Not just technical dominance -- we've all heard of lead Intel
| engineers hired away to Apple/Google/Amazon/etc during this
| period of stagnation. How many senior engineers, staff
| engineers, and low-level talent in general has Intel bled in
| the last 5-7 years? How many of them have moved to Qualcomm,
| TSMC, Apple, Google, etc? At this point, I wonder if Intel is
| even capable of fixing their technical problems since most of
| their talent abandoned the sinking ship long ago.
| worrycue wrote:
| Talent comes and goes. If other companies can hire away
| talent, Intel can hire them back too. If you pay enough,
people will come. Intel currently seems quite willing to pay.
| graton wrote:
That's a change then. Before, they used to aim for paying at
about the 50th percentile. So Google and other companies
would literally pay twice as much or more in salary in
comparison.
| Brave-Steak wrote:
| There's also how Intel has repeatedly laid off older workers
|
| https://www.oregonlive.com/silicon-
| forest/2015/08/intel_layo...
|
| > Proportionately, employees in their 50s were three times
| more likely to lose their jobs than workers in their 30s,
| according to a document obtained by The Oregonian/OregonLive
| that tallies every Intel employee in the United States. The
| company was nearly five times more likely to lay off workers
| in their 60s than those in their 30s.
|
| I'm sure they've lost a lot of institutional knowledge
| through cost-cutting like this.
| cletus wrote:
| It's amazing how far Intel has fallen.
|
| The 10nm debacle exposed how far they've fallen behind on fabs to
| the point that they're outsourcing to TSMC. Like, how humiliating
| must that be?
|
| Intel completely missed the mobile revolution. They had a stake
| in that race but sold it (ie XScale).
|
| Intel's product segmentation is bewildering. They've also kept
| features "enterprise" only to prop up high server chip prices to
the detriment of computing as a whole, most notably ECC support.
|
| And on the server front, which I'm sure is what's keeping them in
| business now, they face an existential threat in the form of ARM.
|
| Intel had clearly shifted to a strategy of extracting as much
| money as possible from their captive market. I'm not sure price
cuts here are necessarily about AMD but more that their
| previously captive market now has more options in general.
|
| How the mighty have fallen.
| [deleted]
| woofie11 wrote:
| I think one more issue is support. If I want a chip from TI,
| Analog Devices, etc., I fill out a web form and get a sample.
| If I want to talk to an engineer, I place a phone call. If I
| want to order a dozen of a part, I go to Digikey. If I want a
| datasheet, it's online.
|
| Intel won't give you the time of day unless you're HP or Dell.
| That's optimal for capitalizing on old markets, but it means
| it's never in new markets. It always starts at a disadvantage.
| It's not that Intel never has chips startups want to use; it's
| that it's impossible to engineer with most of them.
|
| By the time a product has enough marketshare for Intel to care,
| they need to displace an existing supplier.
|
| This means they could never really diversify outside of PCs.
| pclmulqdq wrote:
| Intel and AMD are both like that, and it makes me wonder how
| much space they have opened up for ARM. I would love a small
| x86 SoC if it came with the same level of support that an NXP
| or TI ARM chip has, but they don't.
| myself248 wrote:
| I wonder how a small outfit like UDOO manages to design
| around an AMD embedded part then. The boards are out there
| and they work, but I have no idea how the negotiations
| happened.
| pclmulqdq wrote:
| My impression is that if you are an open source project
| (especially one with a few already existing designs), you
| can actually get some design support from large
| companies. This is especially true if you either meet the
| right person in marketing at those companies or know
| someone on the inside. The Raspberry Pi uses chips from a
| very user-hostile company (Broadcom) because they started
| as a side project by a few engineers at Broadcom.
| papercrane wrote:
| > Intel and AMD are both like that, and it makes me wonder
| how much space they have opened up for ARM.
|
| Arguably, this is what led to the creation of ARM. Acorn
| wanted to make a computer with a 286, but Intel ignored
| them, so they decided to build their own RISC based CPU,
| the "Acorn RISC Machine".
| IshKebab wrote:
| > If I want to talk to an engineer, I place a phone call. If
| I want to order a dozen of a part, I go to Digikey. If I want
| a datasheet, it's online.
|
| I notice you didn't list Broadcom... And bullshit can you
| call an engineer. Submit a support case through some online
| portal maybe. Zero chance they are giving you a direct line
| to their engineers.
| StillBored wrote:
Yeah, they are all like that. In the ARM space, outside of
really low-end devices and the RK3399, they won't even give
you minimal register docs for standard devices. I had problems
at my previous place trying to build a PCIe device where the
minimum to even get the most minimal documentation was 100k
units. Sure, you could buy the parts from Digikey, but they
were useless because the public docs were little more than
footprints and high-level whitepaper-like feature matrices.
| AtlasBarfed wrote:
| And everything you detailed there is solely a management
| issue.
|
| They could devote a market segment to support that as a long
| term emerging market support aspect of their business, but
| it's clear that short term hit-strike-price-for-execs has
| been the dominant management mode for quite some time.
| Aromasin wrote:
| This is a point where I would have to disagree. While their
| early access programs are generally restricted to larger
| customers, you can apply to join other schemes (called Docs
| and Docs+ as far as I remember) where they will assign you an
| account manager and a dedicated platform application engineer
| to help you with your design-in process.
|
| I worked at a small start-up producing COM-HPC boards for
| companies who wanted to keep their servers in-house, as
| opposed to using cloud infrastructure. We weren't purchasing
| any more than maybe 500 CPUs of their upcoming platform.
| Despite that, they supplied 1:1 tech support, reference
schematics/layouts, a reference validation platform on
which to test our design, and 1000's of documents
| including product design guides and white papers. This all
| came about by just contacting Intel's developer account
| support and filling in a few forms.
|
| We also produced the same product with AMD hardware and the
| difference was night an day. Say what you will about their
| production difficulties and roadmaps, their engineering
| support is years ahead of AMD.
| [deleted]
| woofie11 wrote:
| I wasn't comparing to AMD.
|
| I've had few enough interactions with AMD that I can't pass
judgement, but the few I've had were consistent with
| your assessment. AMD was a complete black hole. My
| interactions with Intel were lightyears ahead of AMD.
|
| But Intel, in turn, was lightyears behind Analog, Linear,
| Maxim, TI, and most other vendors I've dealt with (this was
| before Analog gobbled Linear and Maxim up).
| chithanh wrote:
| XMG (a gaming laptop brand) even publicly announced that
| AMD would not meet their request for validation samples
| of Ryzen 5800 and 5900 CPUs. CPUs that have been launched
| and are shipping to other customers already.
|
| https://www.reddit.com/r/XMG_gg/comments/n4i3x2/update_th
| rea...
| robocat wrote:
If AMD is at 100% production capacity, why would they want
to increase demand? Surely supplying validation samples
could only hurt AMD in that situation (technical costs,
disappointing the customer when the customer wants to shift
to production).
| digikata wrote:
| It really depends on who the targeted customers are. I
| remember inquiring on some TI lines and being told by the
| rep that unless you're a customer anticipating 1M+ units,
| that chip really isn't available.
| yyyk wrote:
| >The 10nm debacle exposed how far they've fallen behind on fabs
| to the point that they're outsourcing to TSMC. Like, how
| humiliating must that be?
|
| Every chip Intel buys from TSMC is a chip not made by its
| competitors. Doing this is extremely useful for Intel to the
| point I wonder why TSMC agreed in the first place.
|
| After all, eventually Intel will improve their fabs, and then
| it's the non-Intel players that will order from TSMC. Why
| hamper TSMC's future customers? Intel must have offered a lot
| of money.
| mizzack wrote:
| > And on the server front, which I'm sure is what's keeping
| them in business now, they face an existential threat in the
| form of ARM.
|
| HPC and inertia. Lots of inertia.
| petschge wrote:
| HPC still has a lot of Intel, because these systems run for
~5 years. But if I look at the Top500, there are systems with
AMD Rome (and Milan and Naples). There are systems with IBM
POWER9 (and POWER7) and the fastest system in the list is of
course running the Fujitsu A64FX. And there are exotic systems
| with Vector Engine, Marvell ThunderX2, Hygon Dhyana or
| Sunway.
|
| And while Xeon Phi (and predecessors) used to be very
| popular, the accelerator market is now dominated by Nvidia
| (mostly Volta, but also Ampere and Pascal) and AMD Vega.
|
| Actually only two systems (#7 in China and #10 in Texas) of
| the top 10 systems rely on Intel. And upcoming systems also
| feature a wild mix of architectures and vendors. So way less
inertia than you might think.
| dragontamer wrote:
| Intel has soundly lost HPC to NVidia at this point.
|
| Not only because of NVidia GPUs, but also because NVidia
| bought Mellanox (who makes those fancy InfiniBand NICs that
| those supercomputers use).
|
| Intel's Xeon Phi didn't work out so hot. They're working on
| Intel Xe (aka: Aurora Supercomputer), but Aurora has been
| bungled so hard that Intel's losing a lot of reputation right
| now. Intel needs to deliver Aurora if they want to be taken
| seriously.
| formerly_proven wrote:
| A lot of stuff in HPC still doesn't utilize GPUs (because
| the problem is not amenable to GPU architecture or laziness
| / lack of funding and interest) so at least for commercial
| deployments with a diverse set of solvers I'd say CPUs
| remain important. Intel might be unable to outperform their
| competitors at this time, but they have more than enough
| money to be temporarily cheaper to (seemingly) make up for
| that.
| mmastrac wrote:
| XScale was an awesome processor. Not only was it competitive,
| but it was 100% completely open and documented.
| OneEyedRobot wrote:
| I used it in a design. What I remember (it was a while ago)
| was the lack of OS and driver support from third party
| software houses. It was a mistake to use it.
| Animats wrote:
| This may increase Amazon's margins, but if you're on AWS, you
| won't get a price cut just because Amazon got a price cut.
| danpalmer wrote:
| This will make it harder for them to invest in production
| technology, which will make it harder for them to catch up to
| TSMC. It might be the only move they can make, but that doesn't
| make it a great one.
| jychang wrote:
| Intel has $24.8 billion in cash.
|
| https://www.macrotrends.net/stocks/charts/INTC/intel/cash-on...
|
| Intel dropping their prices and thus revenue temporarily should
| not affect their ability to compete at all. They're not THAT
| badly mismanaged to the point they're out of cash.
| joakleaf wrote:
... and dropping prices doesn't necessarily mean dropping
| revenue or even profit.
|
| Intel's per chip profit may drop, but if they sell more
| because of lower prices, they may actually increase their
| overall profit.
|
| It is really hard to tell without knowing Intel's current
| profit margin and the increase in number of chips sold from
| this maneuver (if any).
| [deleted]
| starfallg wrote:
| Intel drank the management consultant kool-aid like many
large pharmaceutical corps, relying more on financial
| engineering than their research pipeline to compete. The flip
| side is that they have lots of money to splash around.
| Companies like Pfizer for example.
| christophilus wrote:
| Did they? I thought they made a heavy bet that hasn't paid
| off (and maybe never will).
| cisvolk1016 wrote:
So what if Intel drops $0.8B on a risk-taker, learning from
what IBM did to make Watson and avoiding those mistakes?
What new thing could come from that? A new category of
product is what Intel should be looking for around the
corner(s).
| starfallg wrote:
| You're right for the 10nm (7nm) process development. They
| focused on shrinking the wrong parts and ended up with an
| inferior product. Instead of changing direction, they
| doubled down.
| galangalalgol wrote:
| Was this a case of a leader refusing to be wrong, or
| engineers thinking "we almost have this, give us another
| shot."
| GuuD wrote:
| I have no idea, but isn't it the usual real-world case of
| "it's complicated, and it's both"?
| tomalpha wrote:
| Is this not just the efficient-market at work? Simplistically:
| Intel's chips aren't as good as AMDs so it has to drop prices.
|
| And for future investment Intel still has ~ $24 billion cash on
| hand as of June 2021 [0]
|
| [0] https://www.macrotrends.net/stocks/charts/INTC/intel/cash-
| on...
| danpalmer wrote:
| TSMC are spending about that every year for the next 3 years
| on production improvements. Chip fabrication is so expensive
to develop, I'm concerned that $24bn is nowhere near enough
| build a 5nm process.
| pbalau wrote:
| Intel has 24bn available in cash. One could assume Intel
also has $FOO x 24bn available in borrowing power.
| MangoCoffee wrote:
Money can't buy a process node. Intel struggled with 10nm
for so long and 7nm is delayed, while TSMC is now
production-ready at 5nm and developing 3nm.
| the-dude wrote:
| Isn't a new fab around $20bn?
| aeyes wrote:
| You usually don't buy the construction of a new fab with
| cash.
| scandinavian wrote:
| > Simplistically: Intel's chips aren't as good as AMDs so it
| has to drop prices.
|
| Shouldn't you replace AMD with TSMC in that sentence, unless
| you meant design instead of chips? AMD doesn't manufacture
| chips.
| rcthompson wrote:
| By "AMD's chips", they clearly meant chips marketed and
| sold under AMD's brand. If you're trying to make the point
| that TSMC deserves the real credit for competing with
| Intel, then just say that.
| scandinavian wrote:
| You don't think it's important to make the distinction
| between chip design and manufacturing?
|
| Intel has clearly failed with regards to manufacturing
| new nodes, but is the chip design really that bad when
| they could compete for a long time with a large node
| disadvantage?
| tomalpha wrote:
| It's an interesting point, but I buy from Intel or AMD - I
| don't buy from TSMC. Intel and AMD supply me the product
| and set the pricing.
|
| As a simple consumer, I perhaps don't know about their
| upstream suppliers (granted the HN crowd will absolutely
| know...).
| Retric wrote:
If you own a cellphone or console you're likely buying from
TSMC. If you're buying AMD then you _are_ buying TSMC.

Intel is unusual in that they still manufacture their
high-end chips in house; cutting-edge fabs are simply
mind-bogglingly expensive. So basically everyone else
outsources, and if you're outsourcing high-end chips you
might as well buy from the best if you can.
| adrian_b wrote:
| It is true that the main reason why the AMD chips are
| better than the Intel chips is that the TSMC 7 nm
| manufacturing process is significantly better than the
| Intel process used for the Ice Lake Server chips.
|
| Nevertheless, the AMD designers must be praised for making
| the right design choices year after year for the last half
| of decade, which were needed to fully exploit the
| characteristics of the modern CMOS processes.
|
| On the other hand the Intel designers appear to have lived
| in a fantasy land, where they had absolutely no idea about
| how their future manufacturing processes will behave, even
| if in their case the required information should have come
| from another division of the same company, not from
| different foundry companies, like in the case of AMD.
|
Once again, Intel was not able to switch its style of
design in time to stay in sync with the advance of CMOS
technology.
|
| During 2003 - 2008, Intel needed 5 years to follow AMD and
| switch to CPUs with integrated memory controllers and now,
| during 2016 - 2021, Intel required again 5 years to follow
| AMD in the transition to the use of multiple interconnected
| chiplets instead of large monolithic chips.
| humps wrote:
| This move is to attract AMD customers back to Intel, so while
| in the short term it could hurt revenue, longer term it may
| mean increased profits and therefore offer more room to invest.
| There's also the potential for increased sales at the lower
| pricing, which will still have a profit attached. So I doubt
| that overall this will have much impact on investment.
| tehbeard wrote:
| Is a discounted chip price going to sway people enough to
| offset the hotter core (therefore pricier in terms of power
| and cooling), and limited (in comparison to EPYC) IO?
| dtech wrote:
I don't think money is the cause of Intel's troubles.
| adwn wrote:
| A lack of capital is hardly the reason for Intel falling behind
| TSMC. If it were, they wouldn't have lost their lead in the
| first place, and TSMC wouldn't have been able to overtake them.
| rafaelturk wrote:
| Not sure if price reduction will do the trick. Intel is behind on
| the product side.
| rajeevk wrote:
IMO, it has been mainly a price game between AMD and Intel for
| quite some time
| elorant wrote:
I got an i3 10100F the other day for a mere EUR80. For a four-
core CPU that sounds like a steal to me. Way to go Intel.
| buitreVirtual wrote:
| Is that a good bang for the pound even after the price drop?
| (cost/performance ratio)
| elorant wrote:
The processor is very efficient. Four cores at 3.6GHz, and
| can support up to 128GB of RAM. Even AMD can't beat that.
| mschuster91 wrote:
| Why would you want 128GB of RAM without ECC support,
| something totally mainstream at AMD?
| elorant wrote:
| I have ECC RAM on my servers. I don't care for it on my
| desktop rig.
| cedivad wrote:
| Sure, but if you don't have a GPU to go with that it won't even
| boot. The non-f version is twice that.
| mrjin wrote:
I had been with Intel for almost two decades; I finally moved
to AMD for the very first time recently and I'm glad I did.
Intel is being called "the toothpaste company" for a reason. It
has deliberately slowed down its innovation since gaining a
performance advantage over AMD with Core, for over a decade now.
Between each iteration there were not many changes, but they
kept adding fancy instruction sets such as AVX-512 that are
useless to most if not all ordinary users. It's a shame that I
bought into it, actually. But over time I gradually realized
that the only occasions I used those fancy features were
benchmarking new systems. So those fancy things mean nothing to
me other than showing off to friends.
| skohan wrote:
| > kept adding fancy instruction sets such as AVX512 useless to
| most if not all ordinary users
|
| I'm not a microprocessor expert, but this seems like one of the
| reasons RISC has so much potential in the future. It seems like
| x86 is just weighed down with so much cruft.
| dbatten wrote:
| For those who (like me) didn't get the "toothpaste company"
| reference - it seems to be a reference to Intel trying to
| squeeze every last bit of performance out of an old
| architecture (as one would squeeze every bit of toothpaste out
| of a tube), rather than innovating with new architectures and
| technologies.
|
| It's hard to figure out exactly where the toothpaste reference
| originated, but at least one source makes it sound like it was
| a mis-translation of materials published by AMD. See
| https://www.hardwaretimes.com/amd-takes-a-jab-at-intel-we-do...
| tambourine_man wrote:
| I think the parent meant an anecdote I've heard many times,
| in slightly different ways. It goes like this: a major
| toothpaste company was having a meeting, trying to increase
| sales. Many solutions were tried: new flavors, advertising,
| none had much effect.
|
| On a whim, a director asks the guy serving coffee:
- Jack, what would you do to increase sales?
- Have you tried making the hole in the toothpaste tube bigger?

There might be some truth to this: toothpaste tubes used to
be metal in the 60s and you were supposed to punch a hole in
the front with the back of the cover cap. That hole was a lot
smaller than the roughly 1cm-wide one in the plastic tubes of
today. It was also much easier to squeeze out the very last
gram by folding the tube.
| cge wrote:
| I had also heard a point for toothpaste involving the
| marketing: toothpaste advertisements, and all marketing
| imagery of toothpaste on a toothbrush, almost always show
| _absurdly_ larger amounts of toothpaste than is effective
| or appropriate to use brushing teeth, trying to increase
| consumption by increasing waste.
| LambdaComplex wrote:
| > toothpaste tubes used to be metal in the 60s and you were
| supposed to punch a hole on the front of it with the back
| of the cover cap
|
| I'm definitely too young to remember anything from the
| 1960s, but you can still buy tomato paste in tubes like
| that. Neat.
| lupire wrote:
| Many medicines have the same tube style.
| dimitrios1 wrote:
It's amazing how far backwards we went from a sustainability
perspective, when you consider that likely no one has this
issue front and center now the way they did in the early
industrial days.

We used reusable metals and glass much more. Now everything
is plastic.
| mschuster91 wrote:
| On the other side, just take a "Tragerl" of beer (German
| beer crate with 20x0.5l):
|
| - It weighs much more than a crate of 20x0.5l aluminium
| cans or plastic bottles
|
| - it is more voluminous: glass bottles have way thicker
walls and they need plastic spacers to keep the bottles
from crashing into each other, whereas cans and plastic
bottles can be shrink-wrapped just fine
|
| - the return logistics are simpler: glass bottles and the
| crates have to be returned to the brewery to be refilled,
| whereas PET bottles and aluminium cans enter the normal,
| regional recycling stream
|
| The switch to plastics has saved _lots_ of money and
| environmental pollution in logistics. What _was_ missed
| though was regulating recycling capabilities of plastics
| - compound foils are impossible to separate, for example
| - and mandating that plastics not end up in garbage, e.g.
| by having a small deposit on each piece of plastic sold.
| dimitrios1 wrote:
| > The switch to plastics has saved lots of money and
| environmental pollution in logistics
|
| Ah, but this is debatable!
|
| https://www.wri.org/insights/planes-trains-and-big-
| automobil...
|
| "Trains move 32% of goods in the United States, but
| generate only 6% of freight-related greenhouse gas
| emissions. Meanwhile trucks account for 40% of American
| freight transport and 60% of freight-related emissions."
|
| From the beginning of the industrial period, we relied on
rail and boat for logistics, and buggies for last-mile
deliveries; until the advent of affordable, mass-produced
vehicles and the interstate system, this didn't change
| much. Our reliance on plastics combined with airplanes
| and trucks for logistics results in much greater
| pollution in my view.
|
| Granted, coal was the primary fuel source for steamboats
| and steam engines, but sail still was common until iron
| boats became widespread, and still more economical for
| cross-sea transportation.
|
| All this to say, as an amateur historian, in my view,
this all comes to a head between the late 1950s and
early 1960s, with the completion of the interstate
highway system in the US, and DuPont proliferating
plastics in the 1960s.
| sbierwagen wrote:
| >in the United States.
|
| Fun fact, Europe moves most of its freight by road:
| https://www.eea.europa.eu/data-and-maps/figures/road-
| transpo...
|
| Compare with the US: https://encrypted-
| tbn0.gstatic.com/images?q=tbn:ANd9GcQ13vD9... (A
| screenshot from this PDF: https://www.kth.se/polopoly_fs/
| 1.87118.1550154619!/Menu/gene... )
| elihu wrote:
| That's an interesting point that without the interstate
| highway system (which had many benefits) we might be
| using rail a lot more than we are currently and therefore
| emitting less CO2.
|
| Another way of looking at it is that we could consider
| the interstate highways only half-complete, and that the
| important part that was never built was an electrical
| delivery system for the cars and trucks that use it, so
| they can recharge their batteries without even stopping.
| It's what we would have been forced to build if fossil
| fuels weren't plentiful and cheap and we still wanted to
| use cars and trucks for our main transportation. We could
| have built that in the 70's in response to the oil
| crisis, and we could've had 50 years of electric vehicles
| by now, and it could have worked even using awful lead-
| acid batteries if cars didn't have to go more than twenty
| miles or so between electrified road sections.
|
| Building the same thing now would be a lot easier.
| Battery technology is good enough that it would only be
| needed at regular intervals on the major freeways, and we
| can pair the electrified road sections with cheap solar
| power where it makes sense to do so.
| m4rtink wrote:
| Another big thing is cleaning - maybe someone put paint
thinner, bleach or some acid into their used beer bottle
before returning it?

It could even be an accident (e.g. someone turning in old
beer bottles found somewhere), but you still have to account
for that when cleaning _all_ the beer bottles before
refilling.
| Dylan16807 wrote:
| Aluminum is pretty great for recycling. And plastic
| bottles can work okay, but most types of plastic use are
| going to end up in the garbage.
| mizzack wrote:
| It has a bit of a double meaning.
|
| Starting with the Ivy Bridge (3rd) generation, Intel switched
| to using thermal paste between the core and heat spreader
| instead of solder on socketed desktop processors. Presumably
| this was done as a cost savings measure.
|
| This caused a marked increase in core temperatures and
| thermal throttling. Enthusiasts discovered that you could
| remove, or "delid", the heat spreader and replace the
| "toothpaste" with higher quality paste or liquid metal to
| drastically improve temperatures (15-20c) and improve
| overclocking headroom.
|
| Edit: This event is commonly reflected on to showcase Intel's
| greed at a time where they dominated the market. It wasn't
| until the i9-9900k that Intel went back to soldering
| heatspreaders for consumer CPUs, at which point they were
| forced to because they were being challenged by AMD.
| bserge wrote:
| Cost saving would've been to get rid of the IHS entirely.
| Their mobile chips work fine without them, I don't really
| understand why they're a thing for desktop processors.
|
| AMD uses them too, so there must be a reason... is it
| because they're afraid of improper installation breaking
| them? That's on the user.
|
| The weight of the desktop heatsinks? Small changes to latch
| design should suffice. Or you can have a metal spacer
| around the chip with the die exposed, kinda like GPUs do.
|
| I've replaced many laptop chips and even ran some on
| desktops with no issues.
| tjoff wrote:
| Desktop cooler: https://d1lss44hh2trtw.cloudfront.net/ass
| ets/editorial/2018/...
|
| Laptop cooler: https://guide-
| images.cdn.ifixit.com/igi/4h3FmQQNHsITcHTq.med...
| bserge wrote:
| nVidia GPU: https://www.techpowerup.com/gpu-
| specs/quadro-4000m.c1428
| xxs wrote:
The IHS is needed to prevent the die from being crushed, which
would otherwise mean an RMA.
| cptskippy wrote:
| > Cost saving would've been to get rid of the IHS
| entirely.
|
| The IHS itself is a cost saving measure.
|
| When Intel and AMD first introduced flip chips, they
| didn't have the IHS and the heatsink was balanced on top
| while you tensioned a spring. If you rocked the heatsink
| in any direction you would (not could) crush an edge or
| corner of the chip and likely kill the CPU.
|
| The IHS protected the chip and reduced the failure/return
| rate.
| mizzack wrote:
| > is it because they're afraid of improper installation
| breaking them?
|
| Yes. This was an issue back in the Athlon Thunderbird
| days.
|
| "It's on the user" doesn't work as an argument when all
| of your large desktop/server OEMs notice a large uptick
| in failure rate post-assembly.
| grp000 wrote:
| I don't know if there's any truth to this, but I heard
| that there were also issues that could arise more easily
| with electrically conductive thermal paste and that there
| was essentially fraud going on where lower end SKUs were
| being passed off as higher end units. That being said,
| that seems like something that would only affect the
| consumer used market.
| cptskippy wrote:
| Looking back it seems so barbaric.
|
| I remember how they briefly tried those black foam
| sticker pads in the corners of the substrate before
| acquiescing and using the IHS.
|
| At some point they realized they could do better than a
| heatsink mounting system that involved trying to balance
| a heavy metal object on a small pedestal while trying to
| hook a tensioned spring to a clip you couldn't see by
| exerting tremendous downward force with a flathead
| screwdriver. I guess those motherboard return rates
| finally got to them.
| aners wrote:
| I always wondered why that mounting mechanism even
| existed. Would've thought it would get scrapped on the
| drawing board but maybe no one in the design pipeline
| ever put a screwdriver through their motherboard.
| cptskippy wrote:
| It was probably all part of Intel's strategy to sell more
| chips. It's hard to repair a gouged motherboard and not
| worth the time to recover the chips soldered into it.
| After the introduction of the IHS and new cooling
| solutions the motherboard market became unprofitable,
| that's why Intel had to exit it. /s
| mizzack wrote:
| Only as barbaric as the ~50dB, 4krpm tiny fans on
| enthusiast coolers in those days.
| monocasa wrote:
| > Cost saving would've been to get rid of the IHS
| entirely. Their mobile chips work fine without them, I
| don't really understand why they're a thing for desktop
| processors.
|
| Because there's a huge difference between running 5watts
| sustained through something the size of your fingernail,
| and 100 watts sustained. That heat has to go somewhere
| and there's 20x more of it on a desktop part, as it
| requires way more integrated cooling to not immediately
| thermally throttle.
| gtirloni wrote:
| What tangible benefits are you getting from choosing AMD now?
| Honest question as I'm curious if there's another benefit
| besides price (which Intel is fighting now).
| Const-me wrote:
| They are faster for many practical applications, at every
| price point.
|
| Maybe it's just me but all my performance-sensitive
| applications are heavily multithreaded. AMD CPUs simply have
more cores. The benefit from Intel-only AVX-512 doesn't quite
cut it. Besides, not all apps are actually optimized to
leverage AVX-512, and C++ compilers won't do it for you.
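| For a sense of what "optimized to leverage AVX-512" means in
| practice, a rough sketch of the kind of hand-written kernel it
| usually takes (assumes an AVX-512 capable CPU and a build flag
| like -mavx512f; a plain scalar loop won't normally be compiled
| down to this on its own):
|
|   /* avx512_sum.c - sum an array 16 floats at a time */
|   #include <immintrin.h>
|   #include <stddef.h>
|
|   float sum_avx512(const float *v, size_t n) {
|       __m512 acc = _mm512_setzero_ps();
|       size_t i = 0;
|       for (; i + 16 <= n; i += 16)    /* 512 bits = 16 floats */
|           acc = _mm512_add_ps(acc, _mm512_loadu_ps(v + i));
|
|       float lanes[16];
|       _mm512_storeu_ps(lanes, acc);   /* horizontal reduction */
|       float s = 0.0f;
|       for (int k = 0; k < 16; k++)
|           s += lanes[k];
|       for (; i < n; i++)              /* scalar tail */
|           s += v[i];
|       return s;
|   }
|
| Unless the hot loops look roughly like this (or call a library
| that does), the extra silicon mostly sits idle, which is the
| point about compilers not doing it for you.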
| [deleted]
| zamadatix wrote:
| Most stuff eventually turns into cost so I'll ignore "costly"
| effects like heat/power and performance per dollar and focus
| on max performance scale for workloads and other unique
| differences not achievable via just throwing more money at
| the alternative.
|
| Per socket performance scaling is higher for equivalent tier
| sockets. At hyperscale that goes back into the price benefit
| (buy and maintain less physical data center) but for an
| individual server workload or individual user that also turns
| into a performance benefit, particularly for non NUMA aware
| workloads on the server side and just plain availability of
| such core counts for performance on the desktop or
| workstation side.
|
PCIe-wise you get about twice the lanes (128 total on AMD) of
even a 40-core 8380 in the base 8-core model of EPYC or a
Threadripper workstation CPU.
|
| A place Intel still wins is total NUMA scaling. For a NUMA
| aware app like SAP HANA Intel can scale to 8 sockets while
| AMD currently tops out at 2 so you can reach about 2x as many
| total threads that way.
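| For what "NUMA aware" means in code terms, a small sketch using
| libnuma on Linux (link with -lnuma); the idea is simply to keep
| a thread and the memory it touches on the same node:
|
|   /* numa_local.c - allocate memory on the node we run on */
|   #include <numa.h>
|   #include <stdio.h>
|
|   int main(void) {
|       if (numa_available() < 0) {
|           fprintf(stderr, "no NUMA support on this kernel\n");
|           return 1;
|       }
|       printf("%d NUMA node(s)\n", numa_max_node() + 1);
|
|       int node = 0;                  /* stay on node 0 */
|       numa_run_on_node(node);        /* pin this thread there */
|       size_t sz = 64UL << 20;        /* 64 MiB */
|       char *buf = numa_alloc_onnode(sz, node);  /* local pages */
|       if (!buf) return 1;
|
|       for (size_t i = 0; i < sz; i += 4096)
|           buf[i] = 1;                /* touch pages locally */
|
|       numa_free(buf, sz);
|       return 0;
|   }
|
| A non-NUMA-aware app just mallocs and lets threads wander
| across nodes, which is where the bigger per-socket domains
| mentioned above help.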
| gtirloni wrote:
| Awesome, thanks for the extra details.
| b9a2cab5 wrote:
| For non-NUMA aware workloads with high inter-core
| coordination (for example, a write heavy database workload)
| Intel will still perform much better because the cross-
| chiplet latency of EPYC chips is very high. Going through
| the IO die and to another chip is about as expensive as
| going to main memory.
|
| Hyperscalers are running web servers which is a different
| story. But if you're running web servers you might be
| better off with Graviton in perf/$.
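| A rough way to see this effect is to ping-pong a flag between
| two pinned threads and time the round trip. A sketch, with the
| caveat that the core numbers are an assumption; whether cores
| 0 and 8 actually sit on different chiplets depends on the SKU
| and topology:
|
|   /* pingpong.c - rough core-to-core round-trip timer (Linux,
|      build with -pthread) */
|   #define _GNU_SOURCE
|   #include <pthread.h>
|   #include <sched.h>
|   #include <stdatomic.h>
|   #include <stdio.h>
|   #include <time.h>
|
|   #define ROUNDS 1000000
|   static _Atomic int flag = 0;
|
|   static void pin(int cpu) {
|       cpu_set_t s; CPU_ZERO(&s); CPU_SET(cpu, &s);
|       pthread_setaffinity_np(pthread_self(), sizeof(s), &s);
|   }
|
|   static void *pong(void *arg) {
|       pin(*(int *)arg);
|       for (int i = 0; i < ROUNDS; i++) {
|           while (atomic_load(&flag) != 1) ;  /* wait for ping */
|           atomic_store(&flag, 0);
|       }
|       return NULL;
|   }
|
|   int main(void) {
|       int other = 8;         /* assumed: another chiplet */
|       pthread_t t;
|       pthread_create(&t, NULL, pong, &other);
|       pin(0);
|
|       struct timespec a, b;
|       clock_gettime(CLOCK_MONOTONIC, &a);
|       for (int i = 0; i < ROUNDS; i++) {
|           atomic_store(&flag, 1);
|           while (atomic_load(&flag) != 0) ;  /* wait for pong */
|       }
|       clock_gettime(CLOCK_MONOTONIC, &b);
|       pthread_join(t, NULL);
|
|       double ns = (b.tv_sec - a.tv_sec) * 1e9
|                 + (b.tv_nsec - a.tv_nsec);
|       printf("round trip: %.1f ns\n", ns / ROUNDS);
|       return 0;
|   }
|
| Run it for two cores in the same cluster and then for two cores
| far apart; the gap between the two numbers is the cross-chiplet
| penalty being described above.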
| Dylan16807 wrote:
| Though there is the effect that clusters of eight cores
| on EPYC have faster access to each other than on Intel.
| smolder wrote:
| I had a sister comment which wasn't as thorough as yours so
| I deleted it. It's worth adding though that for mobile
| applications, power consumption isn't just a cost factor,
| since better efficiency means you can have tighter
| packaging, get more battery life, not roast your lap as
| much, have quieter cooling, etc.
| zamadatix wrote:
| Good point regarding batteries
| nevi-me wrote:
| For the eventual consumers of the servers at the discounted
| prices: are we going to see the price decrease benefits?
|
| If say GCP/AWS/Azure decide to build a DC in a new region, and
| they go blue primarily because of the discounts, would the
pricing end up being slightly lower than otherwise?
|
| I can understand that electricity, cooling and other costs would
| have an influence; but I'm wondering whether performance & price
| per watt end up being passed or recouped downstream.
| wmf wrote:
| I noticed that Azure is charging significantly less for Ice
| Lake even though the MSRP is the same.
| chippiewill wrote:
| > and they go blue primarily because of the discounts, would
| the pricing end up being slightly smaller than otherwise?
|
| I don't know about the others, but AWS already has different
| prices for ec2 for AMD vs Intel vs ARM. It's not a case of
| "going blue", they'll support anything that people will pay
| them for. Pricing tends to be dictated more by power usage than
| by hardware cost.
|
| For non-directly-ec2 backed services (like ECS and s3, as
| opposed to say RDS) I'd guess they'd go all in on ARM
| regardless for the power savings.
| jtdev wrote:
| Runtime costs are generally higher for comparable Intel CPUs
| due to electricity usage... so I would not expect any cost
| reduction passed on to consumers in the scenario you describe.
| lmilcin wrote:
| Exactly. Energy usage and density are major costs. Data
| centers throw away perfectly good hardware because at some
| point if you factor in density and energy usage that CPU
| might be worth less than zero.
___________________________________________________________________
(page generated 2021-09-14 23:00 UTC)