[HN Gopher] Intel Problems
___________________________________________________________________
Intel Problems
Author : camillovisini
Score : 438 points
Date : 2021-01-19 14:27 UTC (8 hours ago)
(HTM) web link (stratechery.com)
(TXT) w3m dump (stratechery.com)
| jjoonathan wrote:
| The "US manufacturing is actually stronger than ever" camp used
| to cook their books by over-weighting Intel profits. Hopefully
| this will be a wake-up call.
| [deleted]
| me551ah wrote:
| I wonder how the move from x86 to ARM is going to affect desktop
| apps. With the move to ARM, Apple is already pushing its iOS apps
| into macOS. Once it becomes commonplace on Windows, it would be
| super easy to run Android apps on Windows via simulation (rather
| than emulation, which is much slower).
|
| Given that mobile apps are more lightweight and consume far less
| resources than their electron counterparts, would people prefer
| to use those instead? Especially if their UIs were updated to
| support larger desktop screens.
| [deleted]
| oblio wrote:
| Why do you think mobile apps are more lightweight?
|
| Android phones these days have at least 4GB of RAM, and mobile
| apps are in general more limited; plus you run fewer of them in
| parallel, as they tend to be offloaded from RAM once the limit
| is reached.
| phcordner wrote:
| > And in that planning the fact that TSMC's foundries -- and
| Samsung's -- are within easy reach of Chinese missiles is a major
| issue.
|
| Are processor fabs analogous to auto factories and shipyards in
| World War II? Is the United States military's plan for a nuclear
| exchange with China dependent on a steady supply of cutting edge
| semiconductors? Even if it is, is that strategy really going to
| help?
|
| This article is mostly concerned with Intel's stock price. Why
| bring this into it? Let's say Intel gets its mojo back and is
| producing cutting edge silicon at a level to compete with TSMC
| and supplying the Pentagon with all sorts of goodies... and then
| China nukes Taiwan? And now we cash in our Intel options just in
| time to see the flash and be projected as ash particles on a
| brick wall?
|
| "The U.S. needs cutting edge fabs on U.S. soil" is true only if
| you believe the failed assumptions of the blue team during the
| Millennium Challenge, that electronic superiority is directly
| related to battlefield superiority. If semiconductors are the key
| to winning a war, why hasn't the U.S. won one lately?
|
| And what does any of this have to do with Intel? Why are we
| dreaming up Dr. Strangelove scenarios? Is it just that some
| people are only comfortable with Keynesian stimulus if it's in
| the context of war procurement?
| lotsofpulp wrote:
| > If semiconductors are the key to winning a war, why hasn't
| the U.S. won one lately?
|
| Semiconductors aren't going to help change people's cultures or
| religion or tribal affiliations without decades of heavy
| investment in education and infrastructure, or other large
| scale wealth transfers.
|
| But if "winning a war" means killing the opposing members while
| minimizing your own losses, surely electronic superiority will
| help.
| tgtweak wrote:
| I don't feel that there is a meaningful TSMC alternative today.
| Samsung, Intel and GlobalFoundries are not suitable
| replacements for TSMC with regards to throughput or technology.
|
| The world does need some meaningful fabs outside of
| Taiwan/South Korea. All of the <10nm and most of the >10nm
| semiconductor fabrication takes place within a 750 km
| (460 mile) radius circle today. That is risky.
|
| Israel, Mexico, Germany, Canada, Japan (not that it would grow
| the circle much...) are all viable places to run a foundry. The
| fact that Intel is one of the few outside that circle doesn't
| inspire confidence in the security of the global supply chain.
| oblio wrote:
| It doesn't even have to be war:
| https://en.wikipedia.org/wiki/2011_Thailand_floods#Damages_t...
| mdasen wrote:
| I mostly agree that there isn't a great alternative to TSMC,
| but I would point out that the 2021 Qualcomm Snapdragon 888
| processors are being made by Samsung with their 5nm process
| (in addition to their new Exynos 2100). Intel and
| GlobalFoundries aren't really replacements, but Samsung has
| been winning business for latest-generation flagship
| processors. Maybe it isn't as advanced as TSMC and maybe
| Samsung will have problems, but a lot of the 2021 flagship
| phones will be shipping with Samsung-manufactured 5nm
| processors.
|
| Samsung seems to be keeping it close.
| tgtweak wrote:
| It's still within that circle. Samsung is a great fab,
| probably the only real contender to TSMC. Samsung has 17%
| of global semiconductor demand vs TSMC at 50%+. Included in
| that 17% is all of Samsung's own demand (Exynos, SSD/storage,
| memory, etc).
|
| Further aggravating the issue, South Korean SK Hynix is
| buying Intel's NAND business this year and will likely
| shift production out of Intel's US fabs when the time comes.
| Ericson2314 wrote:
| Gosh, splitting (if not anti-trust at least pro-competition) then
| subsidies sounds like way too sane government planning for the US
| to actually do it.
| eutropia wrote:
| I worked at Intel in 2012 and 2013. Back then, we had a swag
| t-shirt that said "I've got x86 problems but ARM aint one".
|
| I went and dug that shirt out of a box and had a good laugh when
| Apple dropped the M1 macs.
|
| Back then, the company was confident that they could make the
| transition to EUV lithography and had marketing roadmaps out to
| 5nm...
| trident5000 wrote:
| A company that is being cannibalized by companies pulling the rug
| out from under them by developing their own chips, yet Intel makes
| no effort to return the favor. They need to make an open source OS
| phone; maybe that will make a dent and serve as a carrier for the
| chips. They don't need to do all the work; they can partner.
| gmmeyer wrote:
| This article seems to mix up AMD and ARM
| NoNameHaveI wrote:
| Companies that use millions of micros will grow tired of paying
| royalties for ARM & other IP. I'm putting my money on RISC-V. If
| Intel is smart, they will too, and will offer design customization
| and contract manufacturing for RISC-V.
| totalZero wrote:
| > This is why Intel needs to be split in two. Yes, integrating
| design and manufacturing was the foundation of Intel's moat for
| decades, but that integration has become a straight-jacket for
| both sides of the business. Intel's designs are held back by the
| company's struggles in manufacturing, while its manufacturing has
| an incentive problem.
|
| The only comparable data point says that this is a terrible idea.
| AMD spun out GlobalFoundries after a deep slide in their
| valuation, and the stock (as well as the company's reputation)
| remained in the doldrums for several years after that. Chipmaking
| is a big business and there are many advantages to vertical
| integration when both sides of the company function
| appropriately. If you own the fabs and there is a surge in demand
| (as we see now at the less extreme end of the lithography
| spectrum), your designs get preferential treatment.
|
| Intel's problem isn't the structure of the company, it's the
| execution. Swan was not originally intended as the permanent
| replacement for Krzanich[0], and it's a bit strange to draw
| conclusions about whether the company can steer away from the
| rocks when the new captain isn't even going to take the helm
| until the middle of next month.
|
| People are viewing Intel's suggestion that it may use TSMC's fabs
| for some products as a negative for Intel, but I just see it as a
| way to exert pressure on AMD's gross margin by putting some
| market demand pressure on the extreme end of the lithography
| spectrum (despite sustained demand in TSMC's HPC segment, TSMC's
| 7nm+ and 5nm are not the main driver of current semiconductor
| shortages).
|
| [0] https://www.engadget.com/2019-01-31-intel-gives-interim-ceo-...
| garaetjjte wrote:
| >The only comparable data point says that this is a terrible
| idea.
|
| Huh, I would say the complete opposite. AMD wouldn't have
| survived if it had kept trying to improve its own process instead
| of going to TSMC.
| ZeroCool2u wrote:
| The problem here is not the success of AMD after splitting,
| but the complete retreat of Global Foundries from the SOTA
| process node. If this happens again with an Intel split then
| we have only TSMC left, off the coast of mainland China in
| Taiwan, in the middle of a game of thermonuclear tug of war
| between the West and China.
|
| While capitalism will likely be part of the solution, through
| subsidies for Intel or some other form, it must take a back
| seat to preventing the scenario described above from becoming
| reality. We are on the brink of this happening already with
| so many people suggesting such a split and ignoring what
| happened to AMD and GF.
|
| The geopolitical ramifications of completely centralizing the
| only leading process node in such a sensitive area between
| the world's superpowers cannot be overstated.
|
| Full disclosure: I'm a shareholder in Intel, TSMC, and AMD.
| renewiltord wrote:
| I feel like that disclosure isn't warranted since like two
| of them are in the S&P 500 and everyone here probably has
| some exposure to that.
| ZeroCool2u wrote:
| Fair enough.
| Symmetry wrote:
| The price of creating the fabs for a new node increases
| exponentially with every node. I remember when there were
| over 20 top node players. Now there are 3 if you aren't
| counting Intel out. If AMD had remained in the game there's
| no way they could have won.
| ZeroCool2u wrote:
| I agree with respect to AMD's situation. I think that was
| the right decision for them then.
|
| I'm saying that there is a difference between the two
| situations and there are geopolitical factors at play
| that mean the answer here is not as simple as splitting
| Intel into a foundry company and a chip design company,
| due to what we saw happen to AMD's foundry when they
| split.
|
| I think it's a bit misleading to say that there are 3 top
| node players right now. Samsung, TSMC, and Intel do compete
| from a business perspective, but from a technical
| perspective TSMC seems to have a fairly significant lead.
| Like you said, the price increases dramatically every
| node. If Intel were to split, why would that new foundry
| company bother investing a huge amount of money in nodes
| they can't yet produce at volume? Also, Samsung, while
| close to TSMC in competition at this point, still
| produces an inferior product. There seems to be solid
| evidence of this in the power consumption comparison of
| AMD vs NVIDIA top end cards.[1]
|
| My point being, if Intel were to follow the same road as
| AMD and split up, we could find ourselves in a situation
| that while better for Intel's business, would arguably
| leave the world worse off overall by leaving TSMC as the
| only viable manufacturer for high end chips.
|
| 1. https://www.legitreviews.com/amd-radeon-rx-6900-xt-video-car...
| morganw wrote:
| Let's just hope that if Intel's position is protected
| because of its strategic importance in the tug of war, it
| doesn't become another Boeing.
| totalZero wrote:
| This is a bizarre comparison. Boeing made an entire line
| of planes that could randomly dive into the ground, and
| insisted that there be no additional training required
| for the uptake of those planes. Intel, in contrast, was
| over-ambitious with 10nm and didn't wait a few more
| months to incorporate EUV into that process node. The
| government hasn't banned the use of Intel chips, but the
| 737 Max 8 was grounded for 20 months. While the pandemic
| slammed air travel, it has been a major tailwind for the
| PC and server markets alike.
| rjmunro wrote:
| I thought Boeing had many issues pre-covid, not just the
| 737 Max. Starliner immediately springs to mind.
| twblalock wrote:
| AMD had to go through that in order to become a competitive
| business again. Look at them now! Maybe Intel's chip design
| business needs to go through the same thing.
|
| Maybe there is a way for Intel to open up its fab business to
| other customers and make it more independent, without splitting
| it off into another company. However, it seems like that would
| require a change in direction that goes against decades of
| company culture. It might be easier to achieve that by actually
| splitting the fab business off.
| Covzire wrote:
| But look at Global Foundries now. The article does suggest
| that Intel's spun off fabs would need state funding to
| survive but is that really tenable for the long term? Is that
| TSMC's secret thus far?
| tgtweak wrote:
| Global Foundries has stopped at 12nm and I don't see any
| plans to go beyond that. A notable number of their fabs are
| in South Korea anyway, so they would fall into the same
| bucket.
| totalZero wrote:
| Self-immolation is only a path to growth if you're a magical
| bird -- it's not a reasonable strategy for a healthy public
| company. AMD went through seven years of pain and humiliation
| between that spinoff and its 2015 glow-up. I understand that
| sometimes the optimal solution involves a short-term hit, but
| you don't just sell your organs on a lark (nor because some
| finance bros at Third Point said so). There are obvious
| strategic reasons to remain an IDM, and AMD would never have
| gone fabless if the company hadn't been in an existential
| crisis. Intel is nowhere near that kind of crisis; it may
| have some egg on its face but the company still dominates
| market share in its core businesses and is making profits
| hand over fist.
|
| > Maybe there is a way for Intel to open up its fab business
| to other customers and make it more independent, without
| splitting it off into another company.
|
| Intel Custom Foundry. They have several years of experience
| doing exactly what you describe, and that's how their
| relationship with Altera (which they later acquired) began. I
| see AMD's subsequent bid for Xilinx as a copycat acquisition
| that demonstrates one of the competitive advantages of
| Intel's position as an IDM: information.
| twblalock wrote:
| > Intel is nowhere near that kind of crisis; it may have
| some egg on its face but the company still dominates market
| share in its core businesses and is making profits hand
| over fist.
|
| Years from now, people looking back will be amazed at how
| fast that changed. The time to react to disruption is now,
| when the company still has the ability to do so.
| totalZero wrote:
| It's easy to make that kind of prediction, and in fact
| people made the same prediction about AMD in far worse
| circumstances -- and were still wrong. Semiconductors are
| extremely important to the world's economy right now, not
| just in PC and server but all over the tech marketplace.
| sradman wrote:
| > Solution One: Breakup
|
| > Solution Two: Subsidies
|
| Solution Three: lower prices/margins (temporarily) to match the
| value proposition of AMD on Windows PCs and Linux Cloud servers.
| hakcermani wrote:
| Solution Four: Is it even possible that Intel and AMD merge?!
| With ARM based chips clearly accelerating and poised to take a
| big market share (Apple, Nvidia, Amazon, Qualcomm becoming
| major players), is there less of an antitrust issue?
| vinay_ys wrote:
| It is more likely they will stick to their guns like IBM
| sticking with Power (which is technically awesome) but still
| pricing it too high (because cost economics is likely out of
| whack) and in the process they will lose developer mindshare.
|
| I really hope Intel does better than IBM with Power.
| twblalock wrote:
| Intel needs to change the way it does business. Simply lowering
| prices won't achieve that. Becoming the cheap option is likely
| the beginning of a death spiral the company will never recover
| from -- it will give the company an excuse to double down on a
| failing strategy.
|
| Furthermore, AMD is not the biggest threat to Intel. The
| biggest threat is cloud providers like Amazon designing their
| own chips, which is already happening. If those succeed, who
| would build them? Certainly not Intel, if they continue to
| manufacture only their own designs -- that business, like so
| much other fab business, will go to TSMC.
| sradman wrote:
| > Becoming the cheap option is likely the beginning of a
| death spiral...
|
| Maybe. I didn't suggest becoming the cheap option; I suggested
| re-evaluating its premium pricing strategy in the short term
| to reflect current and future customer value. Margin
| stickiness seems to be a built-in bias, similar to the
| sunk-cost fallacy.
|
| Server-side Neoverse is a threat but a slow-moving one. I'm
| assuming that "Breakup" (going fabless) will not show
| benefits for many months if not years. Price seems like an
| obvious lever; perhaps I'm being naive about pricing but it's
| not obvious to me why.
| iamgopal wrote:
| Solution three: invest in making x86 power efficient enough
| to be at par with or better than its ARM counterparts, while
| outsourcing manufacturing to TSMC to fill the gap. Reach the
| level future Apple M chips will achieve. At the same time,
| start building bare metal cloud hosting solutions that allow
| other companies to provide their own cloud offerings (using
| that energy efficiency to your advantage), and use the same
| energy efficiency to create a mobile platform that providers
| like Mozilla and Ubuntu can build operating systems on.
| darshanime wrote:
| C'mon Intel, this is your opportunity to go all in on RISC-V
| moonbug wrote:
| why?
| jfb wrote:
| They're not losing money fast enough on x86?
| mikewarot wrote:
| They could make a strategic investment in reconfigurable
| computing, and pivot around this, _if_ they can survive long
| enough to profit from it.
| ashtonkem wrote:
| I think one day we're going to wake up and discover that AWS
| mostly runs on Graviton (ARM) and not x86. And on that day
| Intel's troubles will go from future to present.
|
| My standing theory is that the M1 will accelerate it. Obviously
| all the wholly managed AWS services (Dynamo, Kinesis, S3, etc.)
| can change over silently, but the issue is EC2. I have a MBP, as
| do all of my engineers. Within a few years all of these machines
| will age out and be replaced with m1 powered machines. At that
| point the idea of developing on ARM and deploying on x86 will be
| unpleasant, especially since Graviton 2 is already cheaper per
| compute unit than x86 for some workloads; imagine what
| Graviton 3 & 4 will offer.
| tapirl wrote:
| Intel's fate predicted in 2014:
| https://pbs.twimg.com/media/ErrFtv0UwAA4JaW?format=png
| afavour wrote:
| On the flip side that post illustrates just how things can go
| wrong, too: Windows RT was a flop.
| tapirl wrote:
| It is more a stance to show that Microsoft is ready to put
| Windows on ARM CPUs if x86 loses the market.
| deaddodo wrote:
| Precisely for the reasons he gave though. It wasn't a
| unified experience. RT had lackluster support, no
| compatibility and a stripped down experience.
|
| They're trying to fix it with Windows on ARM now, but
| that's what people were asking for back then.
| dfgdghdf wrote:
| Aren't most of us already programming against a virtual
| machine, such as Node, .NET or the JVM? I think the CPU
| architecture hardly matters today.
| dboreham wrote:
| Having worked some on maintaining a stack on both Intel and
| ARM, it matters less than it did, but it's not a NOOP. e.g.
| Node packages with native modules are often not available
| prebuilt for ARM, and then the build fails due to ... <after
| 2 days debugging C++ compilation errors, you might know>.
| DreadY2K wrote:
| Many people do code against some sort of VM, but there are
| still people writing code in C/C++/Rust/Go/&c that gets
| compiled to machine code and run directly.
|
| Also, even if you're running against a VM, your VM is running
| on an ISA, so performance differences between them are still
| relevant to your code's performance.
| ncmncm wrote:
| C, C++, Rust, & Go compile to an abstract machine, instead.
| It is quite hard these days to get it to do something
| different between x86, ARM, and Power, except relying on
| memory model features not guaranteed on the latter two; and
| on M1 the memory model apes x86's. Given a compatible
| memory model (which, NB, ARM _has not had_ until M1)
| compiling for the target is trivial.
|
| The x86 memory model makes it increasingly hard to scale
| performance to more cores. That has not held up AMD much,
| mainly because people don't scale things out that don't
| perform well when they do, and use a GPU when that does
| better. In principle it has to break at some point, but
| that has been said for a long time. It is prohibitively hard
| to port code developed on x86 to a more relaxed memory
| model, so the overwhelming majority of such code will
| never be ported.
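|
| A minimal C++ sketch of the hazard (illustrative only):
| formally this is a data race under the C++ memory model
| either way, but compiled x86 code tends to keep the two
| stores ordered, so the bug hides until the same logic runs
| on a weakly ordered ARM core.
|
|   #include <atomic>
|   #include <cassert>
|   #include <thread>
|
|   int payload = 0;
|   std::atomic<bool> ready{false};
|
|   void producer() {
|       payload = 42;
|       // Relaxed store: no release fence. x86's TSO usually
|       // hides the omission; ARM may reorder the two stores.
|       ready.store(true, std::memory_order_relaxed);
|   }
|
|   void consumer() {
|       while (!ready.load(std::memory_order_relaxed)) {}
|       assert(payload == 42);  // can fire on ARM
|   }
|
|   int main() {
|       std::thread a(producer), b(consumer);
|       a.join();
|       b.join();
|   }
|
| Changing the pair to memory_order_release /
| memory_order_acquire makes it correct on both, and costs
| almost nothing on x86 - exactly the discipline x86-first
| code often skips.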
| wmf wrote:
| Note that M1 only uses TSO for Rosetta; ARM code runs
| with the ARM weak memory model.
| d33lio wrote:
| While I generally agree with this sentiment, a lot of people
| don't realize how much the enterprise supply chain varies
| from its consumer equivalent. Huge customers that buy Intel
| chips at datacenter scale are pandered to and treated like
| royalty by both Intel and AMD. Companies are courted in the
| earliest stages of cutting edge technical and product
| development and given rates so low (granted, for huge volume)
| that most consumers would not even believe them. The fact
| that companies like Serve The Home exist proves this - for
| those who don't know, the real business model of Serve The
| Home is to give enterprise clients the ability to play around
| with a whole data center of leading edge tech; Serve The Home
| is simply a marketing "edge API" of sorts for the operation.
| Sure, it might look like Intel isn't "competitive", but many
| of the Intel vs. AMD flame wars in the server space over
| unreleased tech had their bidding wars settled years ago.
|
| Another thing to consider: the reason Amazon hugely
| prioritizes its "services" over deploying on bare metal is
| likely that it can execute those "services" on cheap ARM
| hardware. Bare metal boxes and VMs give the impression that
| customers' software will perform in an x86-esque manner. For
| Amazon, the cost of the underlying compute per core is
| irrelevant since they've already solved the problem of using
| blazing fast network links to mesh their hardware together -
| in this way, the ball is heavily in ARM's court for the
| future of Amazon data centers, although banking and gov
| clients will likely not move away from x86 any time soon.
| rauhl wrote:
| > I have a MBP, as do all of my engineers. Within a few years
| all of these machines will age out and be replaced with m1
| powered machines. At that point the idea of developing on ARM
| and deploying on x86 will be unpleasant
|
| Is it not at least somewhat possible that at least some of
| those Apple laptops will age out and be replaced with GNU/Linux
| laptops? Agreed that developing on ARM and deploying on x86 is
| unpleasant, but so too is developing on macOS and deploying on
| Linux. Apple's GNU userland is pretty ancient, and while the
| BSD parts are at least updated, they are also very austere.
| Given that friction is already there, is it likelier that folks
| will try to alleviate it with macOS in the cloud or GNU/Linux
| locally?
|
| Mac OS X was a godsend in 2001: it put a great Unix underneath
| a fine UI atop good hardware. It dragged an awful lot of folks
| three-quarters of the way to a free system. But frankly I
| believe Apple have lost ground UI-wise over the intervening
| decades, while free alternatives have gained it (they are still
| not at parity, granted). Meanwhile, the negatives of using a
| proprietary OS are worse, not better.
| narrator wrote:
| This is why you run your environment in Docker on both Linux
| and macOS, so you don't have these screwy deployment issues
| caused by macOS vs Linux differences.
| surajrmal wrote:
| I develop on GNU/Linux begrudgingly. It has all of my tools,
| but I have a never-ending stream of issues with WiFi,
| display, audio, etc. As far as I'm concerned, GNU/Linux is
| something that's meant to be used headless and ssh'd into.
| Koshkin wrote:
| Try a Raspberry Pi. It just works.
| ogre_codes wrote:
| > Is it not at least somewhat possible that at least some of
| those Apple laptops will age out and be replaced with
| GNU/Linux laptops?
|
| Has Linux desktop share been increasing lately? I'm not sure
| why a newer Mac with better CPU options is going to result in
| increasing Linux share. If anything, it's likely to be
| neutral or favor the Mac with its newer/faster CPU.
|
| > But frankly I believe Apple have lost ground UI-wise over
| the intervening decades, while free alternatives have gained
| it (they are still not at parity, granted).
|
| Maybe? I'm not as sold on Linux gaining a ton of ground here.
| I'm also not sold on the idea that the Mac as a whole is
| worse off interface wise than it was 10 years ago. While
| there are some issues, there are also places where it's
| significantly improved as well. Particularly if you have an
| iPhone and use Apple's other services.
| rurp wrote:
| As much as I would like it to happen, I think it's unlikely
| Linux will be taking any market share away from Macs. That
| said, I could imagine it happening a couple ways. The first
| being an increasingly iPhonified and restricted Mac OS that
| some devs get fed up with.
|
| The second would be Apple pushing all MacBooks to M1 too
| soon, breaking certain tools and workflows.
|
| While I think both of those scenarios could easily happen,
| most devs will probably decide to just put up with the extra
| trouble rather than switch to Linux.
| eitland wrote:
| > Has Linux desktop share been increasing lately?
|
| At least I feel I see a lot more Linux now, not just in the
| company I work for but also elsewhere.
|
| The speed advantage over Windows is so huge that it is
| painful to go back once you've seen it.
|
| MS Office isn't seen as a requirement anymore, nobody thinks
| it is funny if you use GSuite, and besides, the last time I
| used an office file for work was months ago: everything
| exists in Slack, Teams, Confluence or Jira, and these are
| about equally bad on all platforms.
|
| The same is true (except the awful part) for C#
| development: it is probably even better on Linux.
|
| People could switch to Mac and I guess many will. For
| others, like me, Mac just doesn't work, and for us Linux is
| an almost obvious choice.
| old-gregg wrote:
| > Is it not at least somewhat possible that at least some of
| those Apple laptops will age out and be replaced with
| GNU/Linux laptops?
|
| And I personally hope that by then, GNU/Linux will have an
| M1-like processor available to happily run on. The
| possibilities demonstrated by this chip
| (performance+silence+battery) are so compelling that it's
| inevitable we'll see them in non-Apple designs.
|
| Also, as usually happens with Apple hardware advancements,
| the Linux experience will gradually get better on M1
| MacBooks as well.
| davish wrote:
| I think we can look to mobile to see how feasible this
| might be: consistently over the past decade, iPhones have
| matched or exceeded Android performance with noticeably
| smaller capacity batteries. A-series chips and Qualcomm
| chips are both ARM. Apple's tight integration comes with a
| cost when it comes to flexibility, and, you can argue,
| developer experience, but it's clearly not just the silicon
| itself that leads to the performance we're seeing in the M1
| Macs.
| geogra4 wrote:
| I think there are serious concerns about Qualcomm's
| commitment to competitive performance instead of just
| being a patent troll. I think if AWS Graviton is followed
| by Microsoft[0] and Google[1] also having their own
| custom ARM chips it will force Qualcomm to either
| innovate or die. And will make the ARM landscape quite
| competitive. M1 has shown what's possible. MS and Google
| (and Amazon) certainly have the $$ to match what Apple is
| doing.
|
| 0: https://www.datacenterdynamics.com/en/news/microsoft-reporte...
| 1: https://www.theverge.com/2020/4/14/21221062/google-processor...
| tomp wrote:
| I wonder to what extent that's a consequence of Apple
| embracing reference counting (Swift/Objective-C with ARC)
| while Google is stuck on GC (Java)?
|
| I'm a huge fan of OCaml, Java and Python (RC but with
| cyclic garbage collection), and RC very likely incurs
| more developer headache and more bugs, but at the end of
| the day, that's just a question of upfront investment,
| and in the long run it seems to pay off - it's pretty
| hard for me to deny that pretty much _all_ GC software is
| slow (or singlethreaded).
| willtim wrote:
| Java can be slow for many complex reasons, not just GC.
| Oracle are trying to address some of this with major
| proposals such as stack-allocated value types, sealed
| classes, vector intrinsics etc, but these are potentially
| years away and will likely never arrive for Android.
| However, a lot of Android's slowness is not due to Java
| but rather just bad/legacy architectural decisions. iOS
| is simply better engineered than Android, and I say this
| as an Android user.
| jjoonathan wrote:
| Not to mention it took Android about a decade longer than
| iPhone to finally get their animations silky smooth. I
| don't know if the occasional hung frames were the result
| of GC, but I suspect it.
| john_alan wrote:
| You can easily bring macOS up to Linux level GNU with brew.
|
| I agree generally though. I see macOS as an important Unix OS
| for the next decade.
| Steltek wrote:
| "Linux" is more than coreutils. The Mac kernel is no where
| close to Linux in capability and Apple hates 3rd party
| drivers to boot. You'll end up running a half-baked Linux
| VM anyway so all macOS gets you is a SSH client with a nice
| desktop environment, which you can find anywhere really.
| Shared404 wrote:
| > all macOS gets you is a SSH client with a nice desktop
| environment
|
| Also proprietary software. Unfortunately, many people
| still need Adobe.
|
| I personally like Krita, Shotcut, and Darktable better
| than any of the Adobe products I used to use, but it's a
| real issue.
|
| E: Add "many people"
| majormajor wrote:
| > Is it not at least somewhat possible that at least some of
| those Apple laptops will age out and be replaced with
| GNU/Linux laptops?
|
| Sadly, fewer of my coworkers use Linux now than they did 10
| years ago.
| jjoonathan wrote:
| > GNU/Linux laptops
|
| Could we do a roll call of experiences so I know which ones
| work and which ones don't? Here are mine.
|
|   Dell Precision M6800: Avoid. Supported Ubuntu: so ancient
|   that Firefox and Chrome wouldn't install without
|   source-building dependencies. Ubuntu 18.04: installed but
|   resulted in the display backlight flickering on/off at 30Hz.
|
|   Dell Precision 7200: Supported Ubuntu: didn't even bother.
|   Ubuntu 18.04: installer silently chokes on the NVMe drive.
|   Ubuntu 20.04: just works.
| BanazirGalbasi wrote:
| Historically, Thinkpads have had excellent support. My
| T430S is great (although definitely aging out), and
| apparently the new X1 Carbons still work well. Also, both
| Dell and Lenovo have models that come with Linux if
| desired, so those are probably good ones to look at.
| jjoonathan wrote:
| I'll have to look into modern thinkpads. I had a bad
| experience about ~10 years ago, but it wouldn't be fair
| to bring that forward.
|
| > both Dell and Lenovo have models that come with Linux
|
| Like the Dell Precision M6800 above? Yeah. Mixed bag.
| mrj wrote:
| Most companies wouldn't end up trying to shove a disk into
| a computer, though; they would buy from a vendor with
| support and never have compatibility issues. I have owned 3
| System76 computers for this reason...
| jjoonathan wrote:
| > they would buy from a vendor with support
|
| Like the Dell Precision 6800 above? The one where the
| latest supported linux was so decrepit that it wouldn't
| install Firefox and Chrome without manually building
| newer versions of some of the dependencies?
|
| "System76 is better at this than Dell" is valid feedback,
| but System76 doesn't have the enterprise recognition to
| be a choice you can't be fired for.
|
| Maybe ThinkPads hit the sweet spot. I'll have to look at
| their newer offerings.
| Const-me wrote:
| Building server software on Graviton ARM creates vendor
| lock-in to Amazon, with very high costs of switching
| elsewhere. Despite using the A64 ISA and ARM's cores, they
| are Amazon's proprietary chips no one else has access to.
| Migrating elsewhere is going to be very expensive.
|
| I wouldn't be surprised if they subsidize their Graviton
| offering and take profits elsewhere. This might make it seem
| like a good deal for customers, but I don't think it is, at
| least not in the long run.
|
| This doesn't mean Graviton is useless. For services running
| Amazon's code as opposed to the customer's code (like those
| PaaS things billed per transaction), the lock-in is already
| in place; custom processors aren't going to make it any worse.
| timthorn wrote:
| Ubuntu 64 looks the same on Graviton as on a Raspberry Pi.
| You can take a binary you've compiled on the RPi, scp it to
| the Graviton instance and it will just run. That works the
| other way round too, which is great for speedy Pi software
| builds without having to set up a cross-compile environment.
| treve wrote:
| Maybe I'm missing something, but don't the vast majority of
| applications simply not care what architecture they run on?
|
| The main difference for us was lower bills.
| MaxBarraclough wrote:
| > Maybe I'm missing something, but don't the vast majority
| of applications simply not care what architecture they run
| on?
|
| There can be issues with moving to AArch64, for instance
| your Python code may depend on Python 'wheels' which in
| turn depend on C libraries that don't play nice with
| AArch64. I once encountered an issue like this, although
| I've now forgotten the details.
|
| If your software is pure Java I'd say the odds are pretty
| good that things will 'just work', but you'd still want to
| do testing.
| matwood wrote:
| Sure, but you're talking about short term problems. RPi,
| Graviton, Apple Silicon, etc... are making AArch64 a
| required mainstream target.
| MaxBarraclough wrote:
| That's true. AArch64 is already perfectly usable, and
| what issues there are will be ironed out in good time.
| lars-b2018 wrote:
| Our experience as well. We run a stack that comprises
| Python, Javascript via Node, Common Lisp and Ruby/Rails.
| It's been completely transparent to the application code
| itself.
| Slartie wrote:
| Even if the applications don't care, there's still the
| (Docker) container, which cares very much, and which seems
| to be the vehicle of choice to package and deliver many
| cloud-based applications today. Being able to actually run
| the exact same containers on your dev machine which are
| going to be running on the servers later is definitely a
| big plus.
| acdha wrote:
| Docker has had multiarch support for a while and most of
| the containers I've looked at support both. That's not to
| say this won't be a concern but it's at the level of
| "check a box in CI" to solve and between Apple and Amazon
| there'll be quite a few users doing that.
| rapsey wrote:
| Why would it be lock-in? If you can compile for ARM you can
| compile for x86.
| optimiz3 wrote:
| Memory model, execution units, simd instructions...
| rapsey wrote:
| The vast majority of running code is in Python, JS, the JVM,
| PHP, Ruby, etc., far removed from these concerns.
| jlawer wrote:
| Some of those languages (especially Python and PHP) utilise
| C-based modules or packaged external binaries, both of
| which have to be available for, and compatible with, ARM.
|
| When you run pip or composer on amd64 they often pull
| these down and you don't notice, but if you try on ARM
| you quickly discover that some packages don't support
| it. Sometimes there is a slower fallback option, but
| often there is none.
| oblio wrote:
| The real question is, can you compile for ARM and move
| the binary around as easily as you can for x86?
|
| I'm reasonably sure that you can take a binary compiled
| with GCC on a P4 back in the day and run it on the latest
| Zen 3 CPU.
| easton wrote:
| As far as I can tell, yes. Docker images compiled for
| arm64 work fine on the Macs with M1 chips without
| rebuilding. And as another commenter said, you can
| compile a binary on a Raspberry Pi 4 and move it to an EC2
| Graviton instance and it just works.
| cozzyd wrote:
| it will probably be a similar situation to x86, with
| various vendors implementing various instructions in some
| processors that won't be supported by all. I guess the
| difference is that there may be many more variants than
| in x86, but performance-critical code can always use
| runtime dispatch mechanisms to adapt.
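|
| An illustrative sketch of such dispatch (GCC/Clang on x86
| here; on Linux/ARM the analogous check reads HWCAP bits via
| getauxval rather than CPUID):
|
|   #include <cstdio>
|
|   // Built with AVX2 enabled for this function only, so the
|   // compiler may vectorize the loop with AVX2 instructions.
|   __attribute__((target("avx2")))
|   float sum_avx2(const float* x, int n) {
|       float s = 0;
|       for (int i = 0; i < n; ++i) s += x[i];
|       return s;
|   }
|
|   float sum_generic(const float* x, int n) {
|       float s = 0;
|       for (int i = 0; i < n; ++i) s += x[i];
|       return s;
|   }
|
|   int main() {
|       float data[8] = {1, 2, 3, 4, 5, 6, 7, 8};
|       // Runtime CPUID query: one binary runs everywhere and
|       // only takes the AVX2 path where it is supported.
|       float s = __builtin_cpu_supports("avx2")
|                     ? sum_avx2(data, 8)
|                     : sum_generic(data, 8);
|       std::printf("%g\n", s);
|   }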
| oblio wrote:
| It's true that there are extensions to x86, but 99.99% of
| software out there (what you'd commonly install on
| Windows or find in Linux distribution repos) doesn't use
| those instructions, or at most detects the features and
| then uses them.
|
| I don't recall encountering an "Intel-locked" or "AMD-
| locked" application in more than 20 years of using x86.
| Ok, maybe ICC, but that one kind of makes sense :-)
| cozzyd wrote:
| Encountering SIGILLs is not super uncommon on
| heterogeneous academic computer clusters (since
| -march=native).
|
| But yeah, typically binaries built for redistribution use
| a reasonably crusty minimum architecture. Reminds me of
| this discussion for Fedora:
| https://lists.fedoraproject.org/archives/list/devel@lists.fe...
| mitjam wrote:
| Audio software usually runs better on Intel than on AMD.
| ndesaulniers wrote:
| That doesn't mean compilers will emit such instructions;
| maybe hand written assembler will become less portable if
| such code is making use of extensions...but that should
| be obvious to the authors...and probably they should have
| a fallback path.
| volta87 wrote:
| > can you compile for ARM and move the binary around as
| easily as you can for x86?
|
| Yes.
| chasil wrote:
| As I understand it, ARM's new willingness to allow custom op-
| codes is dependent upon the customer preventing fragmentation
| of the ARM instruction set.
|
| In theory, your software could run faster, or slower,
| depending upon Amazon's use of their extensions within their
| C library, or associated libraries in their software stack.
|
| Maybe the wildest thing that I've heard is Fujitsu not
| implementing either 32-bit or Thumb on their new
| supercomputer. Is that a special case?
|
| "But why doesn't Apple document this and let us use these
| instructions directly? As mentioned earlier, this is
| something ARM Ltd. would like to avoid. If custom
| instructions are widely used it could fragment the ARM
| ecosystem."
|
| https://medium.com/swlh/apples-m1-secret-coprocessor-6599492...
| billiam wrote:
| It's interesting that if you step back and look at what
| Amazon has been most willing to just blow up and destroy,
| it is the idea of intellectual property of any kind. It
| comes out clearly in their business practices. This muscle
| memory may make it hard for ARM to have a long term stable
| relationship with a company like ARM.
| oblio wrote:
| What do you mean?
|
| Also, I think there's a typo in your last phrase.
| stephencanon wrote:
| > Maybe the wildest thing that I've heard is Fujitsu not
| implementing either 32-bit or Thumb on their new
| supercomputer. Is that a special case?
|
| What's wild about this? Apple dropped support for 32b (arm
| and thumb) years ago with A11. Supporting it makes even
| less sense in an HPC design than it does in a phone CPU.
| dragontamer wrote:
| I'm not necessarily disagreeing with you, but... maybe
| elaborating in a contrary manner?
|
| Graviton ARM is certainly vendor lock-in to Amazon. But a
| Graviton ARM is just a bog-standard Neoverse N1 core. Which
| means the core is going to show similar characteristics as
| the Ampere Altra (also a bog-standard Neoverse N1 core).
|
| There's more to a chip than its core. But... from a
| performance-portability and ISA perspective... you'd expect
| performance-portability between Graviton ARM and Ampere
| Altra.
|
| Now Ampere Altra is like 2x80 cores, while Graviton ARM is...
| a bunch of different configurations. So it's still not perfect
| compatibility. But a single-threaded program probably
| couldn't tell the difference between the two platforms.
|
| I'd expect that migrating between Graviton and Ampere Altra
| is going to be easier than Intel Skylake -> AMD Zen.
| Const-me wrote:
| > you'd expect performance-portability between Graviton ARM
| and Ampere Altra
|
| I agree, that's what I would expect too. Still, are
| there many public clouds built on these Ampere Altras?
| Maybe we'll have them widespread soon, but until then I
| wouldn't want to build stuff that only runs on Amazon or my
| own servers, with only a few on the market and not yet
| globally available at retail.
|
| Also, AFAIK on ARM the parts where CPUs integrate with the
| rest of the hardware are custom. The important thing for
| servers, disk and network I/O differs across ARM chips of
| the same ISA. Linux kernel abstracts it away i.e. stuff is
| likely to work, but I'm not so sure about performance
| portability.
| dragontamer wrote:
| > Also, AFAIK on ARM the parts where CPUs integrate with
| the rest of the hardware are custom. The important thing
| for servers, disk and network I/O differs across ARM
| chips of the same ISA. Linux kernel abstracts it away
| i.e. stuff is likely to work, but I'm not so sure about
| performance portability.
|
| Indeed. But Intel Xeon + Intel Ethernet integrates
| tightly and drops the Ethernet data directly into L3
| cache (bypassing DRAM entirely).
|
| As such, I/O performance portability between x86 servers
| (in particular: Intel Xeon vs AMD EPYC) suffers from
| similar I/O issues. Even if you have AMD EPYC + Intel
| Ethernet, you lose the direct-to-L3 DMA, and will have
| slightly weaker performance characteristics compared to
| Intel Xeon + Intel Ethernet.
|
| Or Intel Xeon + Optane optimizations, which also do not
| exist on AMD EPYC + Optane. So these I/O performance
| differences between platforms are already the status
| quo, and should be expected if you're migrating between
| platforms. A degree of testing and tuning is always
| needed when changing platforms.
|
| --------
|
| >Still, are there many public clouds built on these
| Ampere Altras? Maybe we'll have them widespread soon,
| but until then I wouldn't want to build stuff that only
| runs on Amazon or my own servers, with only a few on the
| market and not yet globally available at retail.
|
| A fair point. Still, since Neoverse N1 is a premade core
| available to purchase from ARM, many different companies
| have the ability to buy it for themselves.
|
| Current rumors look like Microsoft/Oracle are just
| planning to use Ampere Altra. But like all other standard
| ARM cores, any company can buy the N1 design and make
| their own chip.
| yaantc wrote:
| > > Also, AFAIK on ARM the parts where CPUs integrate
| with the rest of the hardware are custom. The important
| thing for servers, disk and network I/O differs across
| ARM chips of the same ISA. Linux kernel abstracts it away
| i.e. stuff is likely to work, but I'm not so sure about
| performance portability.
|
| > Indeed. But Intel Xeon + Intel Ethernet integrates
| tightly and drops the Ethernet data directly into L3
| cache (bypassing DRAM entirely).
|
| This will be less of a problem on ARM servers as direct
| access to the LLC from a hardware master is a standard
| feature of ARM's "Dynamic Shared Unit" or DSU, which is
| the shared part of a cluster providing the LLC and
| coherency support. Connect a hardware function to the DSU
| ACP (accelerator coherency port) and the hardware can
| control, for all write accesses, whether to "stash" data
| into the LLC or even the L2 or L1 of a specific core. The
| hardware can also control allocate on miss vs not. So any
| high performance IP can benefit from it.
|
| And if I understand correctly, the DSU is required with
| modern ARM cores. As most (besides Apple) tend to use ARM
| cores now, you have this in the package.
|
| More details here in the DSU tech manual:
| https://developer.arm.com/documentation/100453/0002/function...
| gchamonlive wrote:
| I think OP was talking about managed services, like Lambda,
| ECS and Beanstalk internal control, and the EC2 internal
| management system - that is, systems that are transparent
| to the user.
|
| AWS could very well run their platform systems entirely on
| Graviton. After all, serverless and the cloud are in essence
| someone else's server. AWS might as well run all their PaaS
| software on in-house architecture.
| ogre_codes wrote:
| While there is vendor lock-in with those services, it also
| has nothing to do with what CPU you are running. At that
| layer, CPU is completely abstract.
| gchamonlive wrote:
| Maybe I wasn't clear enough. I am talking about code that
| runs behind the scenes. Management processes, schedulers,
| server allocation procedures, everything that runs on the
| aws side of things, transparent for the client.
| pjmlp wrote:
| My Java and .NET applications don't care most of the time
| what hardware they are running on, and many of the other
| managed languages I use also do not, even when AOT compiled
| to native code.
|
| That is the beauty of having properly defined numeric types
| and a memory model, instead of the C-derived approach of
| taking whatever the CPU gives, with whatever memory model.
| jorblumesea wrote:
| Really, you could make the argument for any AWS service and
| generally using a cloud service provider. You get into the
| cloud, use their glue (Lambda, Kinesis, SQS, etc.) and suddenly
| migrating services somewhere else is a multi-year project.
|
| Do you think that vendor lock-in has stopped people in the
| past (or will in the future)? Those kinds of concerns are
| long term, and many companies think short term.
| ralph84 wrote:
| Heck, Amazon themselves got locked-in to Oracle for the
| first 25 years of Amazon's existence. Vendor lock-in for
| your IT stack doesn't prevent you from becoming a
| successful business.
| PaulDavisThe1st wrote:
| True, true (and heh, it was me who pushed for Oracle,
| oops)
|
| But ... the difference is that Oracle wasn't a platform
| in the sense that (e.g.) AWS is. Oracle as a corporation
| could vanish, but as long as you can keep running a
| compatible OS on compatible hardware, you can keep using
| Oracle.
|
| If AWS pulls the plug on you, either as an overall
| customer or ends a particular API/service, what do you do
| then?
| echelon wrote:
| > Building server software on Graviton ARM creates a vendor
| lock-in to Amazon
|
| Amazon already has lock-in. Lambda, SQS, etc. They've already
| won.
|
| You might be able to steer your org away from this, but
| Amazon's gravity is strong.
| deaddodo wrote:
| > they are Amazon's proprietary chips no one else has access
| to.
|
| Any ARM licensee (IP or architecture) has access to them.
| They're just Neoverse N1 cores and can be synthesized on
| Samsung or TSMC processes.
| skohan wrote:
| This is kind of what should happen, right? I'm not an expert,
| but my understanding is that one of the takeaways from the M1
| success has been the weaknesses of x86 and CISC in general. It
| seems as if there is a performance ceiling which exists for x86
| due to things like memory ordering requirements, and complexity
| of legacy instructions, which just don't exist for other
| instruction sets.
|
| My impression is that we have been living under the cruft of
| x86 because of inertia, and what are mostly historical reasons,
| and it's mostly a good thing if we move away from it.
| zucker42 wrote:
| M1's success shows how efficient and advanced the TSMC 5 nm
| node is. Apple's ability to deliver it with decent software
| integration also deserves some credit. But I wouldn't
| interpret it as the death knell for x86.
| sf_rob wrote:
| Isn't most of M1's performance success due to being a SoC /
| increasing component locality/bandwidth? I think ARM vs x86
| performance on its own isn't a disadvantage. Instead the
| disadvantages are a bigger competitive landscape (due to
| licensing and simplicity), growing performance parity, and
| SoCs arguably being contrary to x86 producers' business
| models.
| UncleOxidant wrote:
| ARM instructions are also much easier to decode than x86
| instructions which allowed the M1 designers to have more
| instruction decoders and this, IIRC, is one of the
| important contributors to the M1's high performance.
| erosenbe0 wrote:
| Umm, Intel laptop chips are SoCs with on-chip graphics, PCIe
| 4, WiFi, USB4, and Thunderbolt 4 controllers, direct
| connectivity to many audio codec channels, plus some other
| functionality for DSP and encryption.
| kllrnohj wrote:
| > weaknesses of x86 and CISC in general
|
| "RISC" and "CISC" distinctions are murky, but modern ARM is
| really a CISC design these days. ARM is not at all in a "an
| instruction only does one simple thing, period" mode of
| operation anymore. It's grown instructions like "FJCVTZS",
| "AESE", and "SHA256H"
|
| If anything CISC has overwhelmingly and clearly won the
| debate. RISC is dead & buried, at least in any high-
| performance product segment (TBD how RISC-V ends up faring
| here).
|
| It's largely "just" the lack of variable length instructions
| that helps the M1 fly (M1 under Rosetta 2 runs with the same
| x86 memory model, after all, and is still quite fast).
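|
| Those instructions aren't vendor secrets, to be clear:
| FJCVTZS is standard ARMv8.3, and ACLE-conforming compilers
| expose it as an intrinsic. A sketch, assuming a toolchain
| and target that define __ARM_FEATURE_JCVT:
|
|   #include <cstdint>
|   #include <cstdio>
|   #if defined(__ARM_FEATURE_JCVT)
|   #include <arm_acle.h>
|   #endif
|
|   // JavaScript-style double -> int32 conversion.
|   int32_t js_to_int32(double d) {
|   #if defined(__ARM_FEATURE_JCVT)
|       return __jcvt(d);  // compiles to a single FJCVTZS
|   #else
|       return (int32_t)(int64_t)d;  // rough portable fallback;
|                                    // not bit-exact for all inputs
|   #endif
|   }
|
|   int main() { std::printf("%d\n", js_to_int32(3.9)); }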
| setpatchaddress wrote:
| Most RISCs would fail the "instruction only does one thing"
| test. ISTR there were instructions substantially more
| complex than FJCVTZS in the PowerPC ISA.
|
| I think it's time for a Mashey CISC vs RISC repost:
|
| https://www.yarchive.net/comp/risc_definition.html
| lambda wrote:
| RISC vs CISC isn't really about instructions doing "one
| simple thing period."
|
| It's about increased orthogonality between ALU and memory
| operations, making it simpler and more predictable in an
| out-of-order superscalar design to decode instructions,
| properly track data dependencies, issue them to independent
| execution units, and to stitch the results back into
| something that complies with the memory model before
| committing to memory.
|
| Having a few crazy-ass instructions which either offload to
| a specialized co-processor or get implemented as
| specialized microcode for compatibility once you realize
| that the co-processor is more trouble than it's worth
| doesn't affect this very much.
|
| What ARM lacks is the huge variety of different
| instruction formats and addressing modes that Intel has,
| which substantially affect the size and complexity of the
| instruction decoder, and I'm willing to bet that creates a
| significant bottleneck on how large of a dispatch and
| reorder system they can have.
|
| For a long time, Intel was able to make up this difference
| with process dominance, clever speculative execution
| tricks, and throwing a lot of silicon and energy at it
| which you can do on the server side where power and space
| are abundant.
|
| But Intel is clearly losing the process dominance edge.
| Intel ceded the mobile race a long time ago. Power is
| becoming more important in the data center, which are
| struggling to keep up with providing reliable power and
| cooling to increasingly power-hungry machines. And Intel's
| speculative execution smarts came back to bite them in the
| big market they were winning in, the cloud, when it turned
| out that they could cause information leaks between
| multiple tenants, leading to them needing to disable a lot
| of them and lose some of their architectural performance
| edge.
|
| And meanwhile, software has been catching up with the newer
| multi-threaded world. 10-15 years ago, dominance on single
| threaded workloads still paid off considerably, because
| workloads that could take advantage of multiple cores with
| fine-grained parallelism were fairly rare. But systems and
| applications have been catching up; the C11/C++11 memory
| model makes it significantly more feasible to write portable
| lock-free concurrent code. Go, Rust, and Swift bring safer
| and easier parallelism for application authors, and I'm
| sure the .net and Java runtimes have seen improvements as
| well.
|
| These increasingly parallel workloads are likely another
| reason that the more complex front-ends needed for Intel's
| instruction set, as well as their stricter memory ordering,
| are becoming increasingly problematic; it's becoming
| increasingly hard to fit more cores and threads into the
| same area, thermal, and power envelopes. Sure, they can do
| it on big power hungry server processors, but they've been
| missing out on all of the growth in mobile and embedded
| processors, which are now starting to scale up into
| laptops, desktops, and server workloads.
|
| I should also say that I don't think this is the end of the
| road for Intel and x86. They have clearly had a number of
| setbacks of the last few years, but they've managed to
| survive and thrive through a number of issues before, and
| they have a lot of capital and market share. They have
| squeezed more life out of the x86 instruction set than I
| thought possible, and I wouldn't be shocked if they managed
| to keep doing that; they realized that their Itanium
| investment was a bust and were able to pivot to x86-64 and
| dominate there. They are facing a lot of challenges right
| now, and there's more opportunity than ever for other
| entrants to upset them, but they also have enough resources
| and talent that if they focus, they can probably come back
| and dominate for another few decades. It may be rough for a
| few years as they try to turn a very large boat, but I
| think it's possible.
| leshow wrote:
| > I'm willing to bet that creates a significant
| bottleneck on how large of a dispatch and reorder system
| they can have
|
| My understanding is that the reorder buffer of the M1 is
| particularly large:
|
| "A +-630 deep ROB is an immensely huge out-of-order
| window for Apple's new core, as it vastly outclasses any
| other design in the industry. Intel's Sunny Cove and
| Willow Cove cores are the second-most "deep" OOO designs
| out there with a 352 ROB structure, while AMD's newest
| Zen3 core makes due with 256 entries, and recent Arm
| designs such as the Cortex-X1 feature a 224 structure."
|
| https://www.anandtech.com/show/16226/apple-silicon-m1-a14-de...
| kllrnohj wrote:
| > These increasingly parallel workloads are likely
| another reason that the more complex front-ends needed
| for Intel's instruction set, as well as their stricter
| memory ordering, are becoming increasingly problematic;
| it's becoming increasingly hard to fit more cores and
| threads into the same area, thermal, and power envelopes.
| Sure, they can do it on big power hungry server
| processors, but they've been missing out on all of the
| growth in mobile and embedded processors, which are now
| starting to scale up into laptops, desktops, and server
| workloads.
|
| Except ARM CPUs aren't any more parallel in comparable
| power envelopes than x86 CPUs are, and x86 doesn't seem
| to have any issue hitting large CPU core counts, either.
| Most consumer software doesn't scale worth a damn,
| though. Particularly ~every web app which can't scale
| past 2 cores if it can even scale past 1.
| erosenbe0 wrote:
| There isn't any performance ceiling issue. The Intel ISA
| operates at a very slight penalty in terms of achievable
| performance per watt, but nothing in an absolute sense.
|
| I would argue it isn't time for Intel to switch until we see
| a little more of the future as process nodes may shrink at a
| slower rate. Will we have hundreds of cores? Field
| programmable cores? More fixed function hardware on chip, or
| less? How will high-bandwidth, high-latency GDDR-style memory
| mix with lower-latency, lower-bandwidth DDR memory? Will there
| be on-die memory like HBM for CPUs?
| mhh__ wrote:
| I can see this happening for things that run in entirely
| managed environments but I don't think AWS can make the switch
| fully until that exact hardware is on people's benches. Doing
| microbenchmarking is quite awkward on the cloud, whereas anyone
| with a Linux laptop from the last 20 years can access PMCs for
| their hardware.
| api wrote:
| I don't think it takes "exact" hardware. It takes ARM64,
| which M1 delivers. I already have a test M1 machine with
| Linux running in a Parallels (tech preview) VM and it works
| great.
| ashtonkem wrote:
| Professional laptops don't last that long, and a lot of
| developers are given MBPs for their work. I personally expect
| that I'll get an M1 laptop from my employer within the next 2
| years. At that point the pressure to migrate from x86 to ARM
| will start to increase.
| mhh__ wrote:
| You miss my point - if I am seriously optimizing something
| I need to be on the same chip not the same ISA.
|
| Graviton2 is a Neoverse core from Arm and it's totally
| separate from M1.
|
| Besides, Apple doesn't let you play with PMCs easily, and
| I'm assuming they won't be publishing any event tables any
| time soon, so unless they get reverse-engineered you'll
| have to do it through Xcode.
| foobarian wrote:
| We have MBPs on our desks but our cloud is all CentOS Xeon
| machines. The problems I run into are not about squeezing
| every last ms of performance, since it's vastly cheaper to
| just add more instances. The problems I care about are that
| some script I wrote suddenly doesn't work in production
| because of BSDisms, or Python incompatibilities, or old
| packages in brew, etc. It would be nice if Apple waved a
| magic wand and replaced its BSD subsystem with CentOS* but
| I won't be holding my breath :)
|
| * yes, I know CentOS is done, substitute as needed.
| eloisant wrote:
| I just wish my employer would let me work on a Linux PC
| rather than a MBP, then I wouldn't have this mismatch
| between my machine and server...
| singhrac wrote:
| I think this is a slightly different point from the other
| responses, but this is not true: if I am seriously
| optimizing something I need _ssh access_ to the same
| chip.
|
| I don't run my production profiles on my laptop - why
| would I expect to compare how my i5 or i7 chip performs
| on a thermally limited MBP to how my 64-core server
| performs?
|
| It's convenient for debugging to have the same
| instruction set (for some people, who run locally), but
| for profiling it doesn't matter at all.
| benibela wrote:
| I profile in valgrind :/
| ashtonkem wrote:
| Yes, the M1 isn't a Graviton 2. But then again the mobile
| i7 in my current MBP isn't the same as the Xeon
| processors my code runs on in production. This isn't
| about serious optimization, but rather the ability for a
| developer to reasonably estimate how well their code will
| work in prod (e.g. "will it deadlock"). The closer your
| laptop gets to prod, the narrower the error bars get, but
| they'll never go to zero.
|
| And keep in mind this is about reducing the incentive to
| switch to a chip that's cheaper per compute unit in the
| cloud. If Graviton 2 was more expensive or just equal in
| price to x86, I doubt that M1 laptops alone would be
| enough to incentivize a switch.
| mhh__ wrote:
| That's true, but the Xeon cores are much easier to compare
| and correlate because of the aforementioned access to
| well-defined and supported performance counters, rather
| than Apple's holier-than-thou approach to developers
| outside the castle.
| lostapathy wrote:
| This is typical Hacker News. Yes, some people "seriously
| optimize" but the vast majority of software written is
| not heavily optimized nor is it written at companies with
| good engineering culture.
|
| Most code is worked on until it'll pass QA then thrown
| over the wall. For that majority of people, an M1 is
| definitely close enough to a Graviton.
| mhh__ wrote:
| > typical hacker news
|
| Let me have my fun!
| sitkack wrote:
| Very little user code generates binaries that can _tell_
| they are running on non-x86 hardware. Rust is safe under
| the Arm memory model, and existing C/C++ code that targets
| the x86 memory model is slowly getting ported over; unless
| you are writing multithreaded C++ code that cuts corners,
| it isn't an issue.
|
| If you're running on the JVM, Ruby, Python, Go, Dlang,
| Swift, Julia, or Rust, you won't notice a difference. It
| will be sooner than you think.
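|
| To illustrate "cuts corners": relying on x86's strong (TSO)
| ordering through plain variables is a data race that often
| happens to work on x86 and then fails intermittently on
| Arm. A minimal sketch of the portable version (hypothetical
| names, nothing from a real codebase), using explicit
| release/acquire:
|
|     #include <atomic>
|     #include <cassert>
|     #include <thread>
|
|     int data = 0;                    // plain, non-atomic payload
|     std::atomic<bool> ready{false};  // synchronization flag
|
|     void producer() {
|         data = 42;
|         // The release store publishes the write to `data`.
|         ready.store(true, std::memory_order_release);
|     }
|
|     void consumer() {
|         // The acquire load pairs with the release store.
|         while (!ready.load(std::memory_order_acquire)) {}
|         assert(data == 42);  // holds on x86 *and* Arm
|     }
|
|     int main() {
|         std::thread t1(producer), t2(consumer);
|         t1.join();
|         t2.join();
|     }
|
| Make `ready` a plain bool and the assert can fire under
| Arm's weaker ordering (and the program is UB everywhere).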
| mhh__ wrote:
| It's not the memory model I'm thinking of but the cache
| design, ROB size etc.
|
| Obviously this is fairly niche, but the friction of making
| something fast is hugely lower locally.
| scythe wrote:
| If you use a VM language like Java, Ruby, etc., that work
| is largely abstracted away.
| tyingq wrote:
| True, though the work/fixes sometimes take a while to
| flow down. One example:
| https://bugs.openjdk.java.net/browse/JDK-8255351
| sitkack wrote:
| The vast majority of developers never profile their code.
| I think this is much less of an issue than anyone on HN
| would rank it. Only when the platform itself provides
| traces do they take it into consideration. And even then,
| I think most perf optimization is in the category of
| "don't do the obviously slow thing", or "don't do the
| accidentally n^2 thing" (a sketch below).
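|
| A sketch of the accidental n^2 (hypothetical helper names,
| just to show the shape of the fix):
|
|     #include <algorithm>
|     #include <string>
|     #include <unordered_set>
|     #include <vector>
|
|     // O(n^2): std::find rescans `out` for every element.
|     std::vector<std::string>
|     dedup_slow(const std::vector<std::string>& in) {
|         std::vector<std::string> out;
|         for (const auto& s : in)
|             if (std::find(out.begin(), out.end(), s)
|                     == out.end())
|                 out.push_back(s);
|         return out;
|     }
|
|     // O(n): a hash set answers "seen?" in O(1) on average.
|     std::vector<std::string>
|     dedup_fast(const std::vector<std::string>& in) {
|         std::vector<std::string> out;
|         std::unordered_set<std::string> seen;
|         for (const auto& s : in)
|             if (seen.insert(s).second)
|                 out.push_back(s);
|         return out;
|     }
|
|     int main() {
|         std::vector<std::string> v{"a", "b", "a", "c", "b"};
|         return dedup_slow(v) == dedup_fast(v) ? 0 : 1;
|     }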
|
| I partially agree with you, though: as the penetration of
| Arm goes deeper into the programmer ecosystem, any mental
| roadblocks about deploying to Arm will disappear. It is a
| mindset issue, not a technical one.
|
| In the 80s and 90s there were lots of alternative
| architectures and it wasn't a big deal; granted, the
| software stacks were much, much smaller and closer to the
| metal.
| Now they are huge, but more abstract and farther away
| from machine issues.
| mhh__ wrote:
| This isn't really about you or me but the libraries that
| work behind the spaghetti people fling into the cloud.
| vinay_ys wrote:
| Yes, and just like Intel & AMD spent a lot of
| effort/funding on building performance libraries and
| compilers, we should expect Amazon and Apple to invest in
| similar efforts.
|
| Apple will definitely give all the necessary tools as
| part of Xcode for iOS/MacOS software optimisation.
|
| AWS is going to be more interesting - this is a great
| opportunity for them to provide distributed
| profiling/tracing tools (as a hosted service, obviously)
| for Linux that run across a fleet of Graviton instances
| and help you do fleet-wide profile guided optimizations.
|
| We should also see a lot of private companies building
| high-performance services on AWS to contribute to highly
| optimized open-source libraries being ported to graviton.
| fhrifjr wrote:
| So far I found a getting-started repo for Graviton with a
| few pointers:
| https://github.com/aws/aws-graviton-getting-started
| vinay_ys wrote:
| What kind of pointers were you expecting?
|
| I found it to have quite a lot of useful pointers.
| Specifically -
| https://static.docs.arm.com/swog309707/a/Arm_Neoverse_N1_Sof...
|
| https://static.docs.arm.com/ddi0487/ea/DDI0487E_a_armv8_arm....
|
| These two docs give a lot of useful information.
|
| And the repo itself contains a number of examples (like
| ffmpeg) that have been optimized based on these manuals.
| jerf wrote:
| "The vast majority of developers never profile their
| code."
|
| Protip: New on the job and want to establish a reputation
| quickly? Find the most common path and fire a profiler at
| it as early as you can. The odds that there's some
| trivial win that will accelerate the code by huge amounts
| are fairly decent.
|
| Another bit of evidence developers rarely profile their
| code is that I can tell my mental model of how expensive
| some server process will be to run and most other
| developer's mental models tend to differ by at least an
| order of magnitude. I've had multiple conversations about
| the services I provide and people asking me what my
| hardware is, expecting it to be run on some monster boxes
| or something when I tell them it's really just two
| t3.mediums, which mostly do nothing, and I only have two
| for redundancy. And it's not like I go profile crazy... I
| really just do some spot checks on hot-path code. By no
| means am I doing anything amazing. It's just... as you
| write more code, the odds that you accidentally write
| something that performs stupidly badly go up steadily,
| even if you're trying not to.
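|
| A sketch of the kind of spot check I mean (made-up hot
| path and names; a real profiler is still the better tool):
|
|     #include <chrono>
|     #include <cstdio>
|
|     // Average wall time of f() over `iters` calls, in ms.
|     template <typename F>
|     double avg_ms(F&& f, int iters = 1000) {
|         auto t0 = std::chrono::steady_clock::now();
|         for (int i = 0; i < iters; ++i) f();
|         auto t1 = std::chrono::steady_clock::now();
|         return std::chrono::duration<double, std::milli>(
|                    t1 - t0).count() / iters;
|     }
|
|     int main() {
|         double ms = avg_ms([] {
|             volatile long sum = 0;             // stand-in
|             for (long i = 0; i < 100000; ++i)  // hot path
|                 sum = sum + i;
|         });
|         std::printf("avg %.3f ms per call\n", ms);
|     }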
| nicoburns wrote:
| > Find the most common path and fire a profiler at it as
| early as you can. The odds that there's some trivial win
| that will accelerate the code by huge amounts is fairly
| decent.
|
| I've found that a profiler isn't even needed to find
| significant wins in most codebases. Simple inspection of
| the code and removal of obviously slow or inefficient
| code paths can often lead to huge performance gains.
| mattbee wrote:
| I mean I love finding those "obvious" improvements too
| but how do you know you've succeeded without profiling
| it? ;)
| lumost wrote:
| Given a well-designed chip which achieves competitive
| performance across most benchmarks, _most code_ will run
| sufficiently well for _most_ use cases, regardless of the
| nuances of specific cache designs and sizes.
|
| There is certainly an exception to this for chips with
| radically different designs and layouts, as well as folks
| writing very low-level performance-sensitive code which
| can benefit from platform-specific optimization (graphics
| comes to mind).
|
| However, even in the latter case, I'd imagine the
| platform-specific and fallback platform-agnostic code will
| be within 10-50% of each other's performance. Meaning a
| particularly well designed chip could make the platform
| agnostic code cheaper on either a raw performance basis
| or cost/performance basis.
| Someone wrote:
| I would think the number of developers that have "that exact
| hardware" on their bench is extremely small (does AWS even
| tell you what CPU you get?)
|
| For what fraction of products deployed to the cloud have
| the developers even been seen doing _any_ microbenchmarking?
| ghettoimp wrote:
| If it can emulate x86, is there really a motivation for
| developers to switch to ARM? (I don't have an M1 and don't
| really know what it's like to compile stuff and deploy it to
| "the cloud.")
| ashtonkem wrote:
| Emulation is no way to estimate performance.
| acoard wrote:
| Sure, but as a counter example Docker performance on Mac
| has historically been abysmal[0][1], but everyone on Mac I
| know still develops using it. We ignore the performance hit
| on dev machines, knowing it won't affect prod (Linux
| servers).
|
| I don't see why this pattern would fail to hold, but am
| open to new perspectives.
|
| [0] https://dev.to/ericnograles/why-is-docker-on-macos-so-
| much-w...
|
| [1] https://www.reddit.com/r/docker/comments/bh8rpf/docker_
| perfo...
| remexre wrote:
| The VM that Docker uses takes advantage of
| hardware-accelerated virtualization for running amd64 VMs
| on amd64 CPUs. You
| don't have hardware-accelerated virtualization for amd64
| VMs on any ARM CPUs I know of...
| jayd16 wrote:
| I guess I don't understand why the M1 makes developing on
| Graviton easier. It doesn't make Android or Windows ARM dev any
| easier.
|
| I guess the idea is to run a Linux flavor that supports both
| the M1 and Graviton on the macs and hope any native work is
| compatible?
| _alex_ wrote:
| Dev in a Linux VM/container on your M1 MacBook, then deploy
| to a Graviton instance.
| wmf wrote:
| It's not hope; ARM64 is compatible with ARM64 by definition.
| The same binaries can be used in development and production.
|
| Windows ARM development (in a VM) should be much faster on an
| M1 Mac than on an x86 computer since no emulation is needed.
| Steltek wrote:
| How much does arch matter if you're targeting AWS? Aren't the
| differences between local service instances vs instances
| running in the cloud a much bigger problem for development?
| BenoitEssiambre wrote:
| Yeah, and I assume we are going to see Graviton/Amazon
| Linux-based notebooks any day now.
| agloeregrets wrote:
| Honestly, if Amazon spun this right and they came pre-setup
| for development and distribution and had all the right little
| specs (13 and 16 inch sizes, HiDPI matte displays, long
| battery life, solid keyboard, macbook-like trackpad) they
| could really hammer the backend dev market. Bonus points if
| they came with some sort of crazy assistance logic like each
| machine getting a pre-setup AWS Windows server for streaming
| Windows x86 apps.
| yesbabyyes wrote:
| Like a Bloomberg machine for devops.
| heartbreak wrote:
| > could really hammer the backend dev market
|
| That's worth, what, a few thousand unit sales?
| TheOperator wrote:
| The point wouldn't be to sell laptops.
| agloeregrets wrote:
| Exactly.
| easton wrote:
| If they could get it to $600-800 and have an option for
| Windows, decent trackpad/keyboard, you could sell them to
| students just as well. Shoot, if the DE for Amazon Linux
| was user-friendly enough they wouldn't even need Windows,
| since half of schools are on GSuite these days.
| ksec wrote:
| I commented [1] on something similar a few days ago:
|
| >Cloud (Intel) isn't really challenged yet....
|
| AWS is estimated to be ~50% of hyperscalers.
|
| Hyperscalers are estimated to be 50% of the server and
| cloud business.
|
| Hyperscalers are expanding at a faster rate than the rest
| of the market.
|
| The hyperscaler expansion trend is not projected to slow
| down anytime soon.
|
| AWS intends to have all of their own workloads and SaaS
| products running on Graviton / ARM (while still providing
| x86 services to those who need them).
|
| Google and Microsoft are already gearing up their own ARM
| offerings. Partly confirmed by Marvell's exit from ARM
| servers.
|
| >The problem is single core Arm performance outside of Apple
| chips isn't there.
|
| Cloud computing charges per vCPU. On all current x86
| instances, that is one hyper-thread. On AWS Graviton, one
| vCPU = one actual CPU core. There are plenty of workloads
| where AWS Graviton 2 vCPUs perform better than x86; large
| customers like Twitter and Pinterest have tested and shown
| this. All while being 30% cheaper. At the end of the day,
| it is workload per dollar that matters in cloud computing.
| And right now, in lots of applications, Graviton 2 is
| winning, in some cases by a large margin.
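|
| With made-up numbers just to show the arithmetic (the 30%
| discount is from above; the throughput edge is an assumed
| number):
|
|     #include <cstdio>
|
|     int main() {
|         // x86 baseline: one vCPU = one hyper-thread.
|         double x86_price = 1.00, x86_work = 1.00;
|         // Graviton 2: one vCPU = one full core, 30% cheaper;
|         // the 10% throughput edge is an assumption.
|         double grv_price = 0.70, grv_work = 1.10;
|         std::printf("x86:      %.2f work/dollar\n",
|                     x86_work / x86_price);
|         std::printf("Graviton: %.2f work/dollar\n",
|                     grv_work / grv_price);
|         // ~1.57x the work per dollar under these assumptions.
|     }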
|
| If AWS sells 50% of their services on ARM in 5 years' time,
| that is 25% of the cloud business alone. Since it offers a
| huge competitive advantage, Google and Microsoft have no
| choice but to join the race. And then there will be enough
| market force for Qualcomm, or maybe Marvell, to fab a
| commodity ARM server part for the rest of the market.
|
| Which is why I was extremely worried about Intel. (Half of)
| The lucrative server market is basically gone (and I
| haven't factored in AMD yet). 5 years in tech hardware is
| basically 1-2 cycles. And there is nothing on Intel's
| roadmap that shows they have a chance to compete apart from
| marketing and sales tactics. Those still go a long way, if
| I'm honest, but they are not sustainable in the long term;
| they are more of a delaying tactic. Along with a CEO who,
| despite trying very hard, had no experience in the market
| and product business. Luckily that is about to change.
|
| Evaluating an ARM switch takes time, software preparation
| takes time, and more importantly, getting wafers from TSMC
| takes time, as demand from all markets is exceeding
| expectations. But all of these are already in motion, and
| if this is the kind of response you get from Graviton 2,
| imagine Graviton 3.
|
| [1] https://news.ycombinator.com/item?id=25808856
| spideymans wrote:
| >Which is why I was extremely worried about Intel. (Half of)
| The lucrative Server market is basically gone.
|
| Right. I suspect we'll look back on this period and realize
| that it was already too late for Intel to right the ship,
| despite ARM having a tiny share of PC and server sales.
|
| Their PC business is in grave danger as well. Within a few
| years, we're going to see ARM-powered Windows PCs that are
| competitive with Intel's offerings in several metrics, but
| most critically, in power efficiency.
|
| These ARM PCs will have tiny market share (<5%) for the first
| few years, because the manufacturing capacity to supplant
| Intel simply does not exist. But despite their small
| marketshare, these ARM PCs will have a devastating impact on
| Intel's future.
|
| Assuming these ARM PCs can emulate x86 with sufficient
| performance (as Apple does with Rosetta), consumers and OEMs
| will realize that ARM PCs work just as well as x86 Intel PCs.
| At that point, the x86 "moat" will have been broken, and
| we'll see ARM PCs grow in market share in lockstep with the
| improvements in ARM manufacturing capacity (TSMC, etc...).
|
| Intel is in a downward spiral, and I've seen no indication
| that they know how to solve it. Their best "plan" appears to
| be to just hope that their manufacturing issues get sorted
| out quickly enough that they can right the ship. But given
| their track record, nobody would bet on that happening. Intel
| better pray that Windows x86 emulation is garbage.
|
| Intel does not have the luxury of time to sort out their
| issues. They need more competitive products to fend off ARM,
| _today_. Within a year or two, ARM will have a tiny but
| critical foothold in the PC and server market that will crack
| open the x86 moat, and invite ever increasing competition
| from ARM.
| ksec wrote:
| As long as Intel is willing to accept that margins will
| never be as good as they once were, I think there are
| _lots_ of things they could still do.
|
| The previous two CEOs chose profit margins. And hopefully
| we have enough evidence today that this was the wrong
| choice for the company's long-term survival.
|
| It is very rare that a CEO does anything radical; that is
| something I learned observing the difference between a
| founder and a CEO. But Patrick Gelsinger is the closest
| thing to that.
| dogma1138 wrote:
| At that point, what would be trouble for Intel would be a
| death sentence for AMD...
|
| Intel has fabs; yes, they may be what's holding them back
| atm, but they are also a big factor in what maintains
| Intel's value.
|
| If x86 dies and neither Intel nor AMD pivots in time, Intel
| can become a fab company; they already offer these
| services. Yes, nowhere near the scale of, say, TSMC, but
| they have a massive portfolio of fabs located in the West,
| and a massive IP portfolio related to everything from IC
| design to manufacturing.
| skohan wrote:
| AMD also has a GPU division.
| dogma1138 wrote:
| Intel makes more money selling their WiFi chipsets than AMD
| makes selling GPUs, heck, even including consoles...
| shrewduser wrote:
| got a source for that? sounds hard to believe.
| dogma1138 wrote:
| Computing and Graphics, which includes the Radeon
| Technologies Group, had revenue of $1.67B in AMD's last
| quarter; industry estimates are that $1.2-1.3B of that was
| from CPU sales.
|
| Intel's Internet of Things Group alone had revenue of $680M
| last quarter, and they have previously hit $1B quarterly
| IOTG revenue.
|
| https://www.statista.com/statistics/1096381/intel-
| internet-o...
| nickik wrote:
| AMD makes great designs; switching to ARM/RISC-V would make
| them lose value but not kill them.
| dogma1138 wrote:
| And Intel doesn't?
| api wrote:
| How hard would it be for AMD to make an ARM64 chip based
| partly on the IP of the Zen architecture? Seems like AMD
| could equal or beat the M1 if they wanted.
| phkahler wrote:
| >> Seems like AMD could equal or beat M1 if they wanted.
|
| Sometime around 5? years ago AMD was planning to have an
| ARM option. You'd get essentially an ARM core in an AMD
| chip with all the surrounding circuitry. They hyped it so
| much I wondered if they might go further than just that.
|
| Further? Maybe a core that could run either ISA, or a mix
| of both core types. I dunno, but they dumped that (or
| shelved it) to focus on Zen, which saved them. No doubt the
| idea and capability still exist within the company. I'd
| like to see them do a RISC-V chip compatible with existing
| boards.
| moonbug wrote:
| "Seattle". tried it, couldn't meet perf targets, canned it.
| Joe_Cool wrote:
| They already have that:
| https://www.amd.com/en/amd-opteron-a1100
|
| Didn't sell very well.
| oldgradstudent wrote:
| > Intel can become a fab company
|
| Not unless they catch up with TSMC in process technology.
|
| Otherwise, they become an uncompetitive foundry.
| dogma1138 wrote:
| You don't have to be a bleeding-edge foundry; there are
| tons of components that cannot be manufactured on
| bleeding-edge nodes, nor need to be.
|
| Intel can't compete right now on the bleeding-edge node,
| but they outcompete TSMC on essentially every other factor
| when it comes to manufacturing.
| MangoCoffee wrote:
| >Not unless they catch up with TSMC in process technology
|
| 1. Intel doesn't have to catch up. Intel's 14nm is more
| than enough for a lot of fabless companies. Not every chip
| needs a cutting-edge node.
|
| 2. Splitting up Intel's foundry into a pure-play would
| allow Intel to build up an ecosystem like TSMC's.
|
| 3. Intel's 10nm is much denser than TSMC's 7nm. Intel is
| not too far behind; they just need to solve the yield
| problem. Splitting up Intel's design and foundry would
| allow each group to be more agile and not handcuffed to
| the other.
|
| In fact, Intel Design should license out x86 like ARM does.
| Why not take the best business models from the current
| leaders? Intel Design takes ARM's business model and Intel
| Foundry takes TSMC's.
| esclerofilo wrote:
| The ARM business model isn't that profitable. Intel's
| market cap right now is about 240 billion, 6 times the
| amount Nvidia is paying for ARM.
| MangoCoffee wrote:
| >Intel's market cap right now is about 240 billion, 6
| times the amount Nvidia is paying for ARM
|
| So what? Yahoo was a giant in its heyday. BlackBerry was
| the king of phones. No empire stays on top forever.
|
| Apple and Amazon created their own CPUs. ARM is killing it
| in the mobile space.
|
| Intel is the king right now, but more and more of its
| customers are designing their own CPUs. How long before
| Intel falls?
| wwtrv wrote:
| ARM Ltd. is earning relatively very little from this, and
| there seems to be little reason why that would change in
| the future. This is why it can't really survive as an
| independent company.
|
| If you compare net income instead of market cap, Intel is
| ahead by 70 times (instead of 6) and is relatively
| undervalued compared to other tech companies.
| TheOtherHobbes wrote:
| The point is Intel can't compete as a fab _or_ as a design
| house.
|
| It's doubtful if Intel would have been able to design an
| equivalent to the M1, even with access to TSMC's 5nm
| process _and_ an ARM license.
|
| Which suggests there's no point in throwing money at Intel
| because the management culture ("management debt") itself
| is no longer competitive.
|
| It would take a genius CEO to fix this, and it's not
| obvious that CEO exists anywhere in the industry.
| altcognito wrote:
| AMD looked just as bad not so long ago.
| oblio wrote:
| Plus even though Intel has been super fat for 3 decades
| or so, everyone has predicted their death for at least
| another 3 decades (during their switch from memory to
| CPUs and then afterwards when RISCs were going to take
| over the world).
|
| So they do have a bit of history with overcoming these
| predictions. We'll just have to see if they became too
| rusty to turn the ship around.
| StillBored wrote:
| I don't know how you can predict the future like this.
| Yes, Intel greedily chose not to participate in the
| phone SoC market and is paying the price.
|
| But their choice not to invest in EUV early doesn't mean
| that they will never catch up. They still have plenty of
| cash, and presumably if they woke up and decided to, they
| wouldn't be any worse off than Samsung. And definitely
| better off than SMIC.
|
| Similarly, plenty of smart microarch people work at
| Intel; freeing them to create a design competitive with
| Zen 3 or the M1 is entirely possible. Given that AMD is
| still on 7nm and just a couple percent off of the M1, it
| seems that, if nothing else, Intel could get there too.
|
| But as you point out, Intel's failings are 100% bad mgmt
| at this point. It's hard to believe they can't hire or
| unleash what's needed to move themselves forward. At the
| moment they seem very "IBM" in their moves, but one has
| to believe that a good CEO with a good engineering
| background can cut the mgmt bullcrap and get back to
| basics. They fundamentally have just a single product to
| worry about, unlike IBM.
| pcdoodle wrote:
| If Intel made an SBC or SoC design for low-power applications,
| I'd consider it if they had long-term support. Intel used to
| power all of our edge needs in POS and security; now I see
| that slipping as well.
| ineedasername wrote:
| I found the geopolitical portion to be the most important aspect
| here. China has shown a willingness to flex its muscles on
| enforcing its values beyond their borders. China is smart, and
| plays a long game. We don't want to wake up one day and find
| they've flexed their muscles on their regional neighbors similar
| to their rare earths strong-arming from 2010-2014 and not have
| fab capabilities to fall back on in the West.
|
| (For that matter, I'm astounded that after 2014 the status quo
| returned on rare earths with very little state-level strategy or
| subsidy to address the risk there.)
| Spooky23 wrote:
| That's a good comparison... CPUs are increasingly a commodity.
| npunt wrote:
| Ben missed an important part of the geopolitical difference
| between TSMC and Intel: Taiwan is much more invested in TSMC's
| success than America is in Intel's.
|
| Taiwan's share of the semiconductor industry is 66% and TSMC is
| the leader of that industry. Semiconductors help keep Taiwan
| safe from China's encroachment because they buy it protection
| allies like the US and Europe, whose economies heavily rely on
| them.
|
| To Taiwan, semiconductor leadership is an existential question.
| To America, semiconductors are just business.
|
| This means Taiwan is also likely to do more politically to keep
| TSMC competitive, much like Korea with Samsung.
| mc10 wrote:
| > Semiconductors helps keep Taiwan from China's encroachment
| because it buys them protection from allies like the US and
| Europe, whose economies heavily rely on them.
|
| Are there any signed agreements that would enforce this? If
| China one day suddenly decides to take Taiwan, would the US
| or Europe step in with military forces?
| davish wrote:
| The closest I've found is this:
| https://en.wikipedia.org/wiki/Taiwan_Relations_Act
|
| Not guaranteed "mutual defense" of any sort, but the US at
| least has committed itself to helping Taiwan protect itself
| with military aid. The section on "Military provisions" is
| probably most helpful.
| koheripbal wrote:
| https://en.wikipedia.org/wiki/Taiwan_Relations_Act
|
| > The Taiwan Relations Act does not guarantee the USA will
| intervene militarily if the PRC attacks or invades Taiwan
| wwtrv wrote:
| There are no official agreements since neither US nor any
| major European countries recognize Taiwan/ROC, but the US
| has declared multiple times that it would defend Taiwan
| (see the 'Taiwan Relations Act' & the 'Six Assurances').
| okl wrote:
| > [...] and not have fab capabilities to fall back on in the
| West.
|
| I'm not too concerned:
|
| - There are still a number of foundries in western countries
| that produce chips which are good enough for "military
| equipment".
|
| - Companies like TSMC are reliant on imports of specialized
| chemicals and tools mostly from Japan/USA/Europe.
|
| - Any move from China against Taiwan would likely be followed
| by significant emigration/"brain drain".
| ineedasername wrote:
| National security doesn't just extend to direct military
| applications. Pretty much every industry and piece of
| critical infrastructure comes into play here. It won't matter
| if western fabs can produce something "good enough" if every
| piece of technological infrastructure from the past 5 years
| was built with something better.
|
| As for moves against Taiwan, China hasn't given up that
| prize. Brain drain would be moot if China simply prevented
| emigration. I view Hong Kong right now as China testing the
| waters for future actions of that sort.
|
| Happily though I also view TSMC's pending build of a fab in
| Arizona as exactly that sort of geographical diversification
| of industrial and human resources necessary. We just need
| more of it.
| MangoCoffee wrote:
| >As for moves against Taiwan, China hasn't given up that
| prize.
|
| The CCP hasn't given up since the KMT high-tailed it to
| Taiwan. For more than 40 years, America has cozied up to
| the Chinese government and done business with China.
|
| America told the Taiwanese government not to "make
| trouble", but we all know China is the one making all the
| trouble, with military threats and aircraft flown over
| Taiwan day in and day out.
|
| Taiwan has built up impressive defenses, from buying
| weapons (from the US) to developing its own. Yes, China
| can take Taiwan, that's 100%, but at what price?
|
| That's what the Taiwanese are betting on: China will think
| twice about invading.
| nine_k wrote:
| I bet TSMC has a number of bombs planted around the most
| critical machines, much like Switzerland has bombs
| planted around most critical tunnels and bridges.
|
| Trying to grab Taiwan with force alone, even if formally
| successful, would mean losing its crown jewels forever.
| icefo wrote:
| The bombs were removed some years ago in Switzerland, as
| the risk of them going off was deemed greater than the
| risk of a sudden invasion.
|
| Just to nitpick; your point absolutely stands.
| Lopiolis wrote:
| The issue isn't just military equipment though. When your
| entire economy is reliant on electronic chips, it's untenable
| for all of those chips to come from a geopolitical opponent.
| That gives them a lot of influence over business and politics
| without having to impact military equipment.
| bee_rider wrote:
| Yeah, for some reason, I assumed that military equipment
| mostly used, like, low-performance but reliable stuff:
| in-order processors, real-time operating systems,
| EM-hardening.
| Probably made by some company like Texas Instruments, who
| will happily keep selling you the same chip for 30 years.
| PKop wrote:
| >I'm astounded
|
| Our political system and over-financialized economy seem to
| suffer from the same hyper-short-term focus that many
| corporations chasing quarterly returns run into. No
| long-term planning or
| focus, and perpetual "election season" thrashing one way or
| another while nothing is followed through with.
|
| Plus, in 2, 4 or 8 years many of the leaders are gone and
| making money in lobbying or corporate positions. No possibly
| short-term-painful but long term beneficial policy gets
| enacted, etc.
|
| And many still uphold our "values" and our system as the ideal,
| and question any that would look towards the Chinese model as
| providing something to learn from. So, I anticipate this trend
| will continue.
| echelon wrote:
| It appears the Republicans are all-in on the anti-China
| bandwagon. Now you just have to convince the Democrats.
|
| I don't think this will be hard. Anyone with a brain looking
| at the situation realizes we're setting ourselves up for a
| bleak future by continuing the present course.
|
| The globalists can focus on elevating our international
| partners to distribute manufacturing: Vietnam, Mexico,
| Africa.
|
| The nationalists can focus on domestic jobs programs and
| factories. Eventually it will become clear that we're going
| to staff them up with immigrant workers and provide a path to
| citizenship. We need a larger population of workers anyway.
| ramoz wrote:
| A solution to yesterday's problems shouldn't discount tomorrow's
| innovations. I don't think iOS and Android are in the best long-
| term position. There are more things happening in our global
| infrastructure that should be accounted for. The Internet is
| priming for a potential reverse/re-evolution of itself. (5G
| is a large factor in this.)
| recursivedoubts wrote:
| Perhaps now the "Why do people obsess over manufacturing?"
| question that many tech workers ask when other US industries were
| decimated will become a bit less quizzical.
| coldtea wrote:
| It only becomes less quizzical when it hits home -- that is
| when one's own job is on the line.
| yourapostasy wrote:
| It was more economics and political policy wonks, economists
| and politicians in general, who didn't just ask that question,
| but thrust forth the prescriptive rhetoric through a large grid
| of trade agreements, "Globalism is here to stay! Get over it!
| Comparative Advantage!" This is only the first in a long,
| expensive series of lessons these people will be taught by
| reality in the coming decades. I'm guessing this kind of
| manufacturing loss will cost at least $1T to catch up and take
| the lead again after all is said and done, assuming the US can
| even take the lead.
|
| US organization, economic, and financial management at the
| macro scale is going through a kind of "architecture astronaut"
| multi-decade phase with financialization propping up abstracted
| processes of how to lead massive organizations as big blocks on
| diagrams instead of highly fractal, constantly shifting
| networks of ideas and stories repeatedly coalescing around
| people, processes, and resources into focused discrete action
| in contiguous and continuous OODA feedback loops absorbing and
| learning mistakes along the way. Ideally, the expensive BA and
| INTC lessons drive home the urgent need for an evolution in
| organizational management.
|
| I wryly note how similar the national comparative-advantage
| argument looks to the portrayal in much young-adult science
| fiction of space-opera galactic-empire settings, with entire
| worlds
| dedicated solely to one purpose. This world only produces
| agricultural goods. That world only provides academician
| services. It is a very human desire to simplify complex fractal
| realities, and effective modeling is one of our species'
| advantages, but at certain scales of size, agility and
| complexity it breaks down. We know this well in the software
| world; some problems are intrinsically hard and complex, and
| there is a baseline level of complexity the software must model
| to successfully assist with the problem space. Simplifying
| further past that point deteriorates the delivery.
| mountainb wrote:
| The US decided that it didn't like all the political disorder
| that came with managing a large proletariat. Instead, it
| decided to outsource the management of that proletariat to
| Asia and the 'global south.' Our proletariat instead was
| mostly liquidated and shifted itself into the service
| industry (not that amenable to labor organization) and the
| welfare/workfare rolls.
|
| There are so many things that the US would have to reform to
| become more competitive again, but we are so invested into
| the FIRE economy that it's not unlike the position of the
| southern states before the Civil War: they were completely
| invested into the infrastructure of slavery and could not
| contemplate an alternative economic system because of that.
| The US is wedded to an economy based on FIRE and Intellectual
| Property production, with the rest of the economy just in a
| support role.
|
| I'm not really a pro-organized-labor person, but I think that
| as a matter of national security we have to figure out a way
| to reform and compromise to get to the point to which we
| develop industry even if it is redundant due to
| globalization. The left needs to compromise on environmental
| protection, the rich need to compromise on NIMBYism, and the
| right needs to compromise on labor relations. Unfortunately
| none of this is on the table even as a point of discussion.
| Our politics is almost entirely consumed by insane gibberish
| babbling.
|
| This became very clear when COVID hit and there was no
| realistic prospect of spinning up significant industrial
| capacity to make in-demand goods like masks and filters. In
| the future, hostile countries will challenge and overtake the
| US in IP production (which is quite nebulous and based on
| legal control of markets anyway) and in finance as well. The
| US will be in a very weak negotiating position at that point.
| TheOperator wrote:
| An IP-based economy just on its face seems like such a
| laughable house of cards. So your economy is based on
| government enforced imaginary rights to ideas? The
| proliferation of tax havens should be a sign that the
| system is bullshit - it exposes how little is actually
| keeping the profits of IP endeavors within a nation.
|
| There is incredibly little respect for the society owning
| the means of production in a tangible real sense, instead
| we have economies that run on intangibles, where the
| intangibles allow 600lb gorillas like Oracle to engage in
| much rent seeking while simultaneously avoiding paying dues
| to the precise body that granted them their imaginary
| rights. The entire status quo feels like something some
| rich tycoons dreamed up to sell to the public the merits of
| systematically weakening their negotiating position on the
| promise that one day a Bernie Sanders type would descend
| from the heavens and deliver universal basic income fueled
| by the efficiency of private industry through nothing but
| incorruptibility and force of personality.
|
| China seems to be successful in part because they have no
| qualms with flexing dictatorial power to increase the
| leverage of the state itself. This may be less economically
| efficient but it means they actually get to harvest the
| fruits of any efficiency. Intellectual property law? They
| just ignore it and don't get punished, since punishing them
| would be anti-trade.
| mountainb wrote:
| Yes, the IP economy rests on a bunch of fragile
| international treaties the US has with its partner
| states. The government provides the court system that
| enforces IP claims, but the costs of litigation are
| mostly carried by rights holders. So when you are sued
| for patent infringement, the court's costs are fairly
| minimal and paid by both sides -- but the court's power
| is just an externality of state power.
| DoofusOfDeath wrote:
| > financialization propping up abstracted processes of how to
| lead massive organizations as big blocks on diagrams instead
| of highly fractal, constantly shifting networks of ideas and
| stories repeatedly coalescing around people, processes, and
| resources into focused discrete action in continguous and
| continuous OODA feedback loops absorbing and learning
| mistakes along the way.
|
| I had trouble reading this _without_ falling into the cadence
| of Howl! by Allen Ginsberg.
| yourapostasy wrote:
| Thanks for introducing me to that, it was enjoyable to
| listen to Allen.
|
| [1] https://www.youtube.com/watch?v=MVGoY9gom50
| DoofusOfDeath wrote:
| My pleasure :)
|
| But to my chagrin, I just realized that the reading
| cadence I had in mind wasn't Allen Ginsberg's, but
| instead Jack Kerouac's. [0]
|
| [0] https://youtu.be/3LLpNKo09Xk?t=197
| hyper_reality wrote:
| This was very nicely expressed, I would read a book in this
| style!
|
| By the way, I'm not sure the hnchat.com service linked in
| your profile works any more?
| WoodenChair wrote:
| The thing about all of these articles analyzing Intel's problems
| is that nobody really knows the details of Intel's "problems"
| because it comes down to just one "problem" that we have no
| insight into: node size. What failures happened in Intel's
| engineering/engineering management of its fabs that led to it
| getting stuck at 14 nm? Only the people in charge of Intel's fabs
| know exactly what went wrong, and to my knowledge they're not
| talking. If Intel had kept chugging along and got down to 10 nm
| years ago when they first said they would, and then 7 nm by now,
| it wouldn't have any of these other problems. And we don't know
| exactly why that didn't happen.
| ogre_codes wrote:
| Intel's problem was that they were slow getting their 10nm
| design online. That's no longer the case. Intel's new problem
| is much bigger than that at this point.
|
| Until fairly recently, Intel had a clear competitive advantage:
| Their near monopoly on server and desktop CPUs. Recent events
| have illustrated that the industry is ready to move away from
| Intel entirely. Apple's M1 is certainly the most conspicuous
| example, but Microsoft is pushing that way (a bit slower),
| Amazon is already pushing their own server architecture and
| this is only going to accelerate.
|
| Even if Intel can get their 7nm processes online this year,
| Apple is gone, Amazon is gone, and more will follow. If
| Qualcomm is able to bring their new CPUs online from their
| recent acquisition, that's going to add another
| high-performance desktop/server-ready CPU to the market.
|
| Intel has done well so far because they can charge a pretty big
| premium as the premier x86 vendor. The days when x86 commands a
| price premium are quickly coming to an end. Even if Intel
| fixes their process, their ability to charge a premium for
| chips is fading fast.
| JoshTko wrote:
| We actually have a lot of insight in that Intel still doesn't
| have a good grasp on the problem. Their 10nm was supposed to
| enter volume production in mid 2018, and they still haven't
| truly entered volume production today. Additionally Intel
| announced in July 2020 that their 7nm is delayed by at
| least a year, which suggests they still haven't figured
| out their node-delay problem.
| Spooky23 wrote:
| Wasn't the issue that the whole industry did a joint venture,
| but Intel decided to go it alone?
|
| I worked at a site (in an unrelated industry) where there was
| a lot of collaborative semiconductor stuff going on, and the
| only logo "missing" was Intel.
| cwhiz wrote:
| Didn't Samsung also go it alone, or am I mistaken?
| WoodenChair wrote:
| > We actually have a lot of insight in that Intel still
| doesn't have a good grasp on the problem. Their 10nm was
| supposed to enter volume production in mid 2018, and they
| still haven't truly entered volume production today.
| Additionally Intel announced in July 2020 that their 7nm is
| delayed by at least a year, which suggests they still
| haven't figured out their node-delay problem.
|
| Knowing something happened is not the same as knowing "why"
| it happened. That's the point of my comment. We don't know
| why they were not able to achieve volume production on 10 nm
| earlier.
| JoshTko wrote:
| The point of my comment is that Intel doesn't know either
| and that's a bigger problem.
| spideymans wrote:
| I'll also add that it's fascinating that both 10 nm and 7
| nm are having issues.
|
| My understanding (and please correct me if I'm wrong), is
| that the development of manufacturing capabilities for any
| given node is an _independent_ process. It 's like building
| two houses: the construction of the second house isn't
| dependent on the construction of the first. Likewise, the
| development of 7 nm isn't dependent on the perfection of 10
| nm.
|
| This perhaps suggests that there is a deep institutional
| problem at Intel, impacting multiple manufacturing
| processes. That is something more significant that a big
| manufacturing problem holding up the development of one
| node.
| okl wrote:
| I think that's not quite right. While it's true that for
| each node they build different manufacturing lines,
| generating the required know-how is an
| iterative/evolutionary process in the same way that
| process node technology usually builds on the proven tech
| of the previous node.
| sanxiyn wrote:
| I think it's just a difficult problem. Intel is trying to
| do 10 nm without EUV. TSMC never solved that problem
| because they switched to EUV at that node size.
| mchusma wrote:
| A key issue is volume. Intel is doing many times less
| volume than the mobile chipmakers. So Intel can't spend as
| much to solve the problem.
|
| It's a bad strategic position to be in, and I agree with
| Ben's suggestions as one of the only ways out of it.
| okl wrote:
| SemiAccurate has written a lot about the reasons; for me
| the essence was: complacency, unrealistic goals, and no
| plan B in case the schedule slipped.
| klelatti wrote:
| I feel this piece ducks one of the most important questions - what
| is the future and value of x86 to Intel? For a long time x86 was
| one half of the moat but it feels like that moat is close to
| crumbling.
|
| Once that happens the value of the design part of the business
| will be much, much lower - especially if they have to compete
| with an on-form AMD. Can they innovate their way out of this?
| Doesn't look entirely promising at the moment.
| stefan_ wrote:
| Why are people so hung up on the x86 thing? ARM keeps
| being sold on because everyone has now understood they
| don't really matter; they are not driving the innovations,
| they were simply the springboard for the Apples, Qualcomms
| and Amazons to drive their own processor designs, and they
| are not set up to profit from that. ARM's reference designs
| aren't competitive; the M1 is.
|
| Instruction set architecture at this point is a bikeshed
| debate, it's certainly not what is holding Intel back.
| usefulcat wrote:
| I'm not sure that's entirely true. According to this (see
| "Why can't Intel and AMD add more instruction decoders?"):
|
| https://debugger.medium.com/why-is-apples-m1-chip-so-
| fast-32...
|
| ...a big part of the reason the M1 is so fast is the large
| reorder buffer, which is enabled by the fact that arm
| instructions are all the same size, which makes parallel
| instruction decoding far easier. Because x86 instructions are
| variable length, the processor has to do some amount of work
| to even find out where the next instruction starts, and I can
| see how it would be difficult to do that work in parallel,
| especially compared to an architecture with a fixed
| instruction size.
| exmadscientist wrote:
| That doesn't make any sense. The ROB is after instructions
| have been cracked into uops; the internal format and length
| of uops is "whatever is easiest for the design", since it's
| not visible to the outside world.
|
| This argument does apply to the L1 cache, which sits before
| decode. (It does not apply to uop caches/L0 caches, but is
| related to them anyway, as they are most useful for CISCy
| designs, with instructions that decode in complicated ways
| into many uops.)
| usefulcat wrote:
| Maybe it wasn't clear, but the article I linked is saying
| that compared to M1, x86 architectures are decode-
| limited, because parallel decoding with variable-length
| instructions is tricky. Intel and AMD (again according to
| the linked article) have at most 4 decoders, while M1 has
| 8.
|
| So yes the ROB is after decoding, but surely there's
| little point in having the ROB be larger than can be kept
| relatively full by the decoders.
| AnimalMuppet wrote:
| Well, if we can have speculative execution, why not
| speculative decode? You could decode the stream as if the
| next instruction started at $CURRENT_PC+1, $CURRENT_PC+2,
| etc. When you know how many bytes the instruction at
| $CURRENT_PC takes, you could keep the right decode and
| throw the rest away.
|
| Sure, it would mean multiple duplicate decoders, which eats
| up transistors. On the other hand, we've got to find
| something useful for all those transistors to do, and this
| looks useful...
| usefulcat wrote:
| According to the article I linked, that's basically how
| they do it:
|
| "The brute force way Intel and AMD deal with this is by
| simply attempting to decode instructions at every
| possible starting point. That means x86 chips have to
| deal with lots of wrong guesses and mistakes which has to
| be discarded. This creates such a convoluted and
| complicated decoder stage that it is really hard to add
| more decoders. But for Apple, it is trivial in comparison
| to keep adding more.
|
| In fact, adding more causes so many other problems that
| four decoders according to AMD itself is basically an
| upper limit for them."
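|
| A toy model of that boundary-finding dependency (nothing
| like real x86 length rules, just the shape of the problem):
|
|     #include <cstdint>
|     #include <cstdio>
|     #include <vector>
|
|     // Variable length: instruction i+1's start depends on
|     // instruction i's length, so the walk is serial.
|     std::vector<std::size_t>
|     starts_variable(const std::vector<std::uint8_t>& code) {
|         std::vector<std::size_t> starts;
|         std::size_t pc = 0;
|         while (pc < code.size()) {
|             starts.push_back(pc);
|             // Pretend the low 4 bits encode a 1..16 byte
|             // length, standing in for x86's messier rules.
|             pc += (code[pc] & 0x0F) + 1;
|         }
|         return starts;
|     }
|
|     // Fixed 4-byte length: every start is known without
|     // reading the bytes, so N decoders can run in parallel.
|     std::vector<std::size_t>
|     starts_fixed(const std::vector<std::uint8_t>& code) {
|         std::vector<std::size_t> starts;
|         for (std::size_t pc = 0; pc + 4 <= code.size(); pc += 4)
|             starts.push_back(pc);
|         return starts;
|     }
|
|     int main() {
|         std::vector<std::uint8_t> code{0x03, 0x00, 0x01, 0x0F};
|         std::printf("%zu vs %zu starts found\n",
|                     starts_variable(code).size(),
|                     starts_fixed(code).size());
|     }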
| blinkingled wrote:
| Well put. People are being their usual team-sport
| participants on x86 vs ARM. Intel has execution problems in
| two departments - manufacturing and integration. ISA is not
| an issue - they can very well solve the integration issues,
| and investing in semiconductor manufacturing is the need of
| the hour for the US, so I can imagine them getting some
| traction there with enough money and will.
|
| IOW, even if Intel switched ISA to ARM, it wouldn't
| magically fix
| any of the issues. We've had a lot of ARM vendors trying to
| do what Apple did for too long.
| klelatti wrote:
| Intel / AMD had a duopoly on desktop / server because of
| x86 for a large number of years.
|
| Loss of that duopoly - even with competitive manufacturing
| - has profound commercial implications for Intel. M1 and
| Graviton will be followed by others that will all erode
| Intel's business.
| blinkingled wrote:
| On the other hand, if x86 stays competitive there's a lot of
| inertia in its favor. So it could go either way. Desktop
| especially has been a tough nut to crack for anyone other
| than Apple and they are only 8% of the market.
| klelatti wrote:
| Probably more than 8% by value and with the hyperscalers
| looking at Arm that's a decent part of their business at
| risk - and that's ignoring what Nvidia, Qualcomm etc
| might do in the future.
|
| Agreed that inertia is in their favour but it's not a
| great position to be in - it gives them a breathing space
| but not a long term competitive advantage.
| mhh__ wrote:
| It's worth saying that CPU design isn't like software. Intel
| and AMD cores are fairly different, and the ISA is the only
| thing that unites them.
|
| If x86 finally goes and Intel and AMD both switch elsewhere,
| we'd be seeing the same battle as usual but in different
| clothes.
|
| On top of the raw uarch design, there are also the
| peripherals, the RAM standard, etc.
| klelatti wrote:
| Fair points, but if you're saying that if we moved to a
| non-x86 (and presumably Arm based) world then its business as
| usual for Intel and AMD then I'd strongly disagree - it's a
| very different (and much less profitable) commercial
| environment with lots more competition.
| mhh__ wrote:
| The likelihood of Intel moving to ARM is probably nil. They
| have enough software to drag whatever ISA they choose with
| them, whereas AMD bringing up an ARM core could be fairly
| herculean as they have to convince their customers to not
| only buy their new chip but also trust AMD with a bunch of
| either missing or brand new software.
| hpcjoe wrote:
| AMD has already done an 8-core ARM chip. Then abandoned
| it.
|
| ISA changes require a long term investment and building
| up an ecosystem. Which were out of scope for AMD at the
| time.
|
| I think the market has changed somewhat, and they don't
| have to do all the heavy lifting. Would be interesting to
| see what happens there.
| klelatti wrote:
| They still have an architecture license I think.
|
| Given that x86 still has an advantage on servers, it makes
| sense for them to push that for the time being. When the
| Arm ecosystem is fully established I can't imagine it
| would be that hard to introduce a new Arm CPU using the
| innovation they've brought to x86 (chiplets etc).
| klelatti wrote:
| The days when Intel could single handedly successfully
| introduce a new (incompatible) ISA are long gone (if it
| ever could). I expect they will stick with x86 for as
| long as possible.
| yjftsjthsd-h wrote:
| > The days when Intel could single handedly successfully
| introduce a new (incompatible) ISA are long gone (if it
| ever could).
|
| Given itanium, I'd say they never could (although that
| _could_ have been a fluke of that specific design)
| klelatti wrote:
| Indeed and that was with HP.
|
| Look long enough back and you have iAPX 432!
| jecel wrote:
| There was also the three-way ISA battle at Intel: 486 vs
| 860 vs 960. In the end they decided that legacy software
| was too valuable, and they redefined the 860 as a graphics
| co-processor and the 960 as an intelligent DMA engine to
| keep people from building Unix computers with them.
| varispeed wrote:
| My view is that currently the only way for Intel to salvage
| themselves is to go the ARM route and start licensing x86
| IP, and perhaps even open-source some bits of the tech.
| They are unable to sustain this tech by themselves, nor
| with AMD, anymore. It seems to me that when Apple releases
| their new CPUs I am going to have to move to that platform
| in order to keep up with the competition (the quicker the
| core, the quicker I can do calculations and deliver the
| product). Currently I am on AMD, but it is only marginally
| faster than the M1, it seems.
| AlotOfReading wrote:
| Are they able to even do that legally? I'm pretty sure the
| licensing agreement for x86 with AMD explicitly prohibited
| this for both parties.
| totalZero wrote:
| The demise of x86 isn't something that can be fiated. It could
| come about, but there would need to be a very compelling reason
| to motivate the transition. Technologies that form basic
| business and infrastructural bedrock don't go away just because
| of one iteration -- look at Windows Vista for example.
|
| Even if every PC and server chip manufacturer were to eradicate
| x86 from their product offerings tomorrow, you'd still have
| over a billion devices in use that run on x86.
| klelatti wrote:
| I was extremely careful not to say that x86 would go away!
|
| But it doesn't have to for Intel to feel the ill effects.
| There just have to be viable alternatives that drive down the
| price of their x86 offerings.
| totalZero wrote:
| I wasn't trying to refute your comment, nor to imply that
| you said x86 is on its way out the door, but we are talking
| about the future of x86 after all.
|
| Intel has already driven its prices downward aggressively
| [0]. That seems to be part of their strategy to contain AMD
| while they get their own business firing on all cylinders
| again, and it's going to be true regardless of whether the
| majority of the market demands x86 or not. The more that
| Intel can pressure AMD's gross margin, the more relevant
| Intel's synergies from being an IDM can become.
|
| [0] https://www.barrons.com/articles/intel-is-starting-a-
| price-w...
| spideymans wrote:
| Windows Vista's problems were relatively easy to solve
| though. Driver issues naturally sorted themselves out over
| time, performance became less of an issue as computers got
| more powerful, and the annoyances with Vista's security model
| could be solved with some tweaking around the edges. There
| wasn't much incentive to jump from the Windows ecosystem, as
| there was no doubt that Microsoft could rectify these issues
| in the next release of Windows. Indeed, Windows 7 went on to
| be one of the greatest Windows releases ever, despite being
| nothing more than a tweaked version of the much maligned
| Vista.
|
| Intel's problems are a lot more structural in nature. They
| lost mobile, they lost the Mac, and we could very well be in
| the early stages of them losing the server (to Graviton,
| etc...) and the mobile PC market (if ARM PC chips take off in
| response to M1). Intel needs to right the ship expeditiously,
| before ARM gets a foothold and the x86 moat is irreversibly
| compromised. Thus far, we've seen no indication that they
| know how to get out of this downward spiral.
| garethrowlands wrote:
| Windows 7 was a _substantially fixed_ version of the much
| maligned Vista. Its fixes for memory usage, for example,
| were dramatic.
| nemothekid wrote:
| > _look at Windows Vista for example._
|
| This is a terrible example for the reasons stated in the
| article. Microsoft is already treating Windows more and more
| like a stepchild every day - Office and Azure are the new
| cool kids.
| marcosdumay wrote:
| It's not the demise of x86. It's the demise of x86 as a moat.
|
| Those are different things. We have seen a minuscule movement
| on the first, but we've been running towards the second since
| the 90's, and it looks like we are close now.
| tyingq wrote:
| I agree that the moat is falling away. There used to be things
| like TLS running faster because there was optimized x86 ASM in
| that path, but none for other architectures. That's no longer
| true.
|
| I suppose Microsoft would be influential here. Native Arm64 MS
| Office, for example.
| toonies555 wrote:
| Lets speed run a doomsday:
|
| 2022: share price tanks, CEO booted, they shuffle but don't
| have a plan, no longer blue chip so financing is hard to come
| by. Delisting. Everyone booted. Doors close.
|
| 2023/4: AMD is the only game in town. Profits and volumes are
| up. So are the faults and vulnerabilities. They spend most of
| their effort on fixes and not innovation.
|
| 2024: M1 chip available on Dells/HPs/ThinkPads. AWS only uses
| Graviton unless the customer specifically buys another chip.
|
| 2025: Desktop ARM chip available on Dells/HPs/ThinkPads.
|
| 2025: AWS makes a 'compile-to-anything' service. Decompiler
| and recompiler on demand.
|
| 2026: AMD still suffering. Hires Jim Keller for the 20th
| time. Makes a new Zen generation that beats M1 and Arm. AMD
| goes into mobile CPUs.
| billiam wrote:
| Thought-provoking, to be sure, but the problem with his
| solution of building up Intel's manufacturing arm through
| spinoff and subsidy is that we simply don't have the labor
| force to support it. With much more controlled immigration in
| the future, it will take decades to build up the engineering
| education needed to make the US compete with Taiwan, South
| Korea, and of course China.
| iamgopal wrote:
| The problem was created when they lost focus on energy
| efficiency. The rest is just an after-effect.
| hctaw wrote:
| Slightly more aggressive take: fully automated contract
| manufacturing is the future; those that resist its march will
| be trampled and those that ignore it will be left behind.
|
| Semiconductor manufacturing is just one example where this is
| happening, electronics is another. Maybe one day Toyota Auto fabs
| will be making Teslas.
| cbozeman wrote:
| This was a well-written article, but I don't think it came from
| someone with a deep understanding of semiconductor technology and
| fabrication.
|
| Intel hasn't lost to Apple and AMD because they employ idiots, or
| because of their shitty company culture (in fact, they're doing
| surprisingly well _in spite_ of their awful company culture).
| Intel lost because they made the wrong bet on the wrong _type_ of
| process technology. 10 years ago (or thereabouts), Intel's
| engineers were certain that they had the correct type of process
| technology outlined to successfully migrate down from 22nm to
| 14nm, then down to 10nm and eventually 7, 5, and 3nm. They were
| betting on future advances in physics, chemistry, and
| semiconductor processes. Advances that didn't materialize.
|
| EUV turned out to be the best way to pattern wafers at
| smaller transistor sizes.
|
| So now Intel's playing catch up. Their 10nm process is still
| error-prone and far from stable. There are no high-performance
| 10nm desktop or server chips.
|
| That's not going to continue forever though. Even on 14nm, Intel
| chips, while not as fast as Apple's M1 or AMD's Ryzen 5000
| series, are still competitive in many areas. Intel's 14nm chips
| are over 6 years old. The first was Broadwell in October 2014.
| What do you think will happen when Intel solves the engineering
| problems on 10nm, and then 7nm? And then 5nm?
|
| It took AMD 5 years to become competitive with Intel, and over 5
| to actually surpass them.
|
| If you think the M1 and 5950X are fast, then wait till we have an
| i9-14900K on 5nm. It'll make these offerings look _quaint_ by
| comparison.
|
| EDIT: I say this as a total AMD fanboy by the way, who bought a
| 3900X and RX 5700 XT at MicroCenter on 7/7/2019 and stood in line
| for almost five hours to get them, and as someone who now has a
| Threadripper 3990X workstation. I love AMD for what they've
| done... they took us out of the quad-core paradigm and brought us
| into the octa-core paradigm of x86 computing.
|
| But I am under _no_ illusions that they're technically superior
| to Intel. Their _process_ is what allows them to outperform
| Intel, not their design. I guarantee you that if Intel could mass
| produce _their_ CPUs on _their_ 7nm process (which is far, far
| more transistor-dense than TSMC's 7nm), AMD would be 15-25%
| behind on performance.
|
| It isn't so much that AMD is succeeding because they're
| technically superior... they're succeeding because Zen's design
| team made the right bet and because Intel's engineering process
| team made the _wrong_ bet.
| oblio wrote:
| Interesting perspective. Paradoxically, I actually want Intel
| to stay relevant.
|
| I think that everyone will take advantage of the migration to
| ARM to push more lock in, despite the supposedly open ARM
| architecture.
|
| A sort of poison pill: "you get more performance and better
| battery life, but you can't install apps of type A, B and C and
| those apps can only do X, Y and Z".
| adrian_b wrote:
| I agree with most of what you say, except that AMD was also
| technically superior during these last years.
|
| Intel certainly has the potential of being technically superior
| to AMD, but they do not appear to have focused on the right
| things in their roadmaps for CPU evolution.
|
| Many years before the launches of Ice Lake and Tiger Lake,
| Intel's enthusiastic presentations about the future claimed
| that these would bring marvelous improvements in
| microarchitecture, but the reality has proven to be much more
| modest.
|
| While there was a decent increase in IPC from Skylake in 2015
| to Ice Lake in 2019, it was still much less than expected
| after so many years. While they were waiting for a working
| manufacturing process, they should have redesigned their CPU
| cores to get something better than this.
|
| Moreover, the enhancements in Ice Lake and Tiger Lake seem
| somewhat unbalanced and random; no grand plan for how to
| improve a CPU can be discerned.
|
| On the other hand, the evolution of the Zen cores was
| perfect: each time, the AMD team seems to have been able to
| add precisely those improvements that could give the maximum
| performance increase with the minimum implementation effort.
|
| Thus they were able to go from Zen 1 (2017), with an IPC
| similar to Intel Broadwell (2014), to Zen 2 (2019), with an
| IPC a little higher than Intel Skylake (2015), and eventually
| to Zen 3 (2020), with an IPC a little higher than Intel Tiger
| Lake (2020).
|
| So even if AMD's main advantage remains the superior CMOS
| technology they get from TSMC, thanks to the competence of
| their design teams they have gone from being 3 years behind
| Intel in IPC in 2017 to being ahead of Intel in IPC in 2020.
|
| If that is not technical superiority, I do not know what is.
|
| Like I have said, I believe that Intel could have done much
| better than that, but they seem to have performed some sort
| of random walk, instead of a directed run like AMD's.
| varispeed wrote:
| Ever since I learned that Intel and Nvidia were allegedly
| fixing the laptop market, I have hoped that this company
| either goes down or goes through a substantial
| transformation. Their current management situation is
| untenable. If I were a shareholder (fortunately I am no
| longer), I would pressure them to sack everyone involved.
| m3kw9 wrote:
| If individual companies developing their own chips is a
| trend, and it sure seems like one is starting, Intel has a
| lot more competition to contend with. Before, the answer was
| always "buy"; now add "build" into the equation. That's where
| Intel's problems are coming from. That's a lot of headwinds;
| they could address that by splitting up and going the TSMC
| route, specializing further on design and using some form of
| licensing model like ARM's.
|
| This is like the Microsoft pivot into cloud to save itself.
| darig wrote:
| If you want to write an article about how your article website
| has matured, maybe start with proofreading the first sentence.
| ENOTTY wrote:
| Contrary to the article, AMD is not yet shipping 5nm in volume.
| (Rumors point to Zen 4 later in 2021.)
|
| Additionally, Intel works with ASML and other similar suppliers.
| Intel even owns a chunk of ASML.
| totalZero wrote:
| I looked up Canon's and Nikon's lithography numbers yesterday
| and was shocked to see that they barely make any money off
| their lithography businesses, considering that both companies
| make DUV machinery. Although they don't have the street cred of
| ASML, they are important because (A) there's a shortage, and
| (B) the demand side of the machinery market needs to foster
| competition in order to keep ASML from gouging its customers.
|
| To go even further than your comment (with which I agree, 5nm
| isn't the center of AMD's income right now), TSMC isn't even
| making most of its wafer revenue from 5nm and 7nm. Straight
| from the horse's mouth (Wendell Huang, CFO):
|
| _" Now, let's move on to the revenue by technology.
| 5-nanometer process technology contributed 20% of wafer revenue
| in the fourth quarter, while 7-nanometer and 16-nanometer
| contributed 29% and 13%, respectively. Advanced technologies,
| which are defined as 16-nanometer and below, accounted for 62%
| of wafer revenue. On a full-year basis, 5-nanometer revenue
| contribution came in at 8% of 2020 wafer revenue. 7-nanometer
| was 33% and 16-nanometer was 17%."_
|
| https://www.fool.com/earnings/call-transcripts/2021/01/14/ta...
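|
| As a quick sanity check, the quoted "advanced technologies"
| bucket does add up - in Python:
|
|     # Q4 wafer revenue shares (%) from the call, 16nm & below
|     q4 = {"5nm": 20, "7nm": 29, "16nm": 13}
|     print(sum(q4.values()))  # -> 62, matching the stated 62%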
| moonbug wrote:
| they'll still be depreciating the newer fabs.
| totalZero wrote:
| Sure, but my understanding is that (assuming that you have
| a choice about how much depreciation expense to write down)
| from a tax perspective that's what you're supposed to do
| when your business is making money. It's also a form of P&L
| smoothing.
| kolbe wrote:
| If TSMC is going to be a monopolist fab for x86, then they will
| ultimately suck all the profits out of the server/desktop
| markets. This isn't just kinda bad news for Intel/AMD, it's
| really bad news.
| jeffbee wrote:
| Well I got down to the part where the author said that AMD never
| threatened Intel in the data center market and I closed the tab.
| AMD won entire generations of data center orders while Intel was
| flailing with Itanium and NetBurst.
| secondcoming wrote:
| GCP only offers EPYC CPUs in some regions. None of those
| regions are ones we use! Gah!
|
| Can someone update us on where AWS offers them, if at all?
| jeffbee wrote:
| If you think about how these providers deploy a cloud
| facility, it makes sense that the offerings in a given place
| are relatively static. The whole network design,
| thermal/mechanical design, and floor plan is built with
| certain assumptions and they can't just go in and rack up
| some new machines. It evolves pretty slowly and when a
| facility gets a new machine it is because they refresh the
| whole thing, or a large subset of it.
|
| That said, the EPYC machine type is available in 12 zones of
| four different regions in the US, which isn't bad.
| vinay_ys wrote:
| Usually you would have some number of enclosed aisles of
| racks making up a deployment pod.
|
| You can usually customize machine configuration within a
| deployment pod while staying within the electrical and
| thermal envelope of the aisle and without changing the
| number of core-spine to pod-spine network links.
|
| You could potentially build out a data hall but not fully
| fill it with aisles. As demand starts to trend up you can
| forecast two quarters into the future and do the build-outs
| with just one quarter of lead time.
|
| I would expect very large operators to have perfected this
| supply chain song and dance.
| jeffbee wrote:
| They have perfected it, just not in the manner that you
| are suggesting.
| mhh__ wrote:
| Speaking of Itanium, if the x86 dam has truly burst, I'd much
| rather see something more like the Itanium than RISC-V.
| Something new.
|
| It's a shame the Mill team is so secretive, actually; their
| design is rather nice.
| sitkack wrote:
| One of RISC-V's main goals is to be boring and extensible.
| Think of it as the control-plane core, or the EFI for a
| larger system. You would take RISC-V and use it to drive your
| novel VLIW processor.
| mhh__ wrote:
| How? RISC-V will have to have a memory model, for example,
| which will define at least some effective execution model. If
| you turn RISC-V into not-RISC-V you might as well just start
| from scratch.
| garethrowlands wrote:
| Pretty sure you can't take RISC-V and use it to drive a
| Mill.
| tyingq wrote:
| There's Russia's Elbrus VLIW chip.
| https://www.anandtech.com/show/15823/russias-elbrus-8cb-
| micr...
| raverbashing wrote:
| Nah, I think the Itanic concept is dead in the water.
|
| VLIW works (especially in the way it was done in Itanium,
| IIRC) either when your workload is very predictable or when
| your compiler manages to be an order of magnitude smarter
| than it is today (even with LLVM, etc.)
|
| It seems even the M1 prefers to reorder scalar operations
| rather than work with SIMD ops in some cases (this is one of
| its processors)
| mhh__ wrote:
| Itanium is dead but VLIW as a concept is still interesting
| to me.
|
| If you look at uops-executed-per-port benchmarks you can see
| that CPUs are far from all-seeing eyes.
| hajile wrote:
| AMD and Nvidia _both_ used VLIW in the past and _both_
| moved away because they couldn't get it to run
| efficiently. If embarrassingly parallel problems can't
| execute efficiently on VLIW architectures, I somehow
| doubt that CPUs will either.
|
| The final versions of Itanic started adopting all the
| branch predictors and trappings from more traditional
| chips.
|
| The problem is that loops theoretically cannot be
| completely predicted at compile time (the halting
| problem). Modern OoO CPUs are basically hardware JITs
| that change execution paths and patterns on the fly based
| on previous behavior. This (at least at present) seems to
| get much better data resulting in much better real-world
| performance compared to what the compiler sees.
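|
| A toy model of that adaptivity - the textbook 2-bit
| saturating-counter branch predictor (a generic scheme, not
| any specific CPU's), in Python:
|
|     class TwoBit:
|         def __init__(self):
|             self.s = 2  # 0/1 predict not-taken, 2/3 taken
|
|         def predict(self):
|             return self.s >= 2
|
|         def update(self, taken):
|             if taken:
|                 self.s = min(self.s + 1, 3)
|             else:
|                 self.s = max(self.s - 1, 0)
|
|     # A loop that runs 20 iterations, exits once, reruns.
|     history = [True] * 20 + [False] + [True] * 20
|     p, hits = TwoBit(), 0
|     for taken in history:
|         hits += p.predict() == taken
|         p.update(taken)
|     print(f"{hits}/{len(history)}")  # 40/41: one miss at exit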
| garethrowlands wrote:
| Mill is claimed to run general purpose code well, unlike
| Itanic and VLIW in general. Are you claiming Mill would be
| like Itanium?
| thoughtsimple wrote:
| How does Moore's law figure into this? I suspect that TSMC runs
| into the wall that is quantum physics at around 1-2nm.
| Considering that TSMC has said that they will be in full
| production of 3nm in 2022, I can't see 1nm being much beyond
| 2026-2028. What happens then? Does a stall in die shrinks allow
| other fabs to catch up?
|
| It appears to me that Intel stalling at 14nm is what opened the
| door for TSMC and Samsung to catch up. Does the same thing happen
| in 2028 and allow China to finally catch up?
| MangoCoffee wrote:
| > I can't see 1nm being much beyond 2026-2028. What happens
| then?
|
| Whatever the marketing people come up with? Moore's law is
| not a law but an observation. It doesn't really matter,
| though. We are moving to 3D chips, chiplets, advanced
| packaging, etc.
| kasperni wrote:
| Jim Keller believes that at least 10-20 years of shrinking is
| possible [1].
|
| [1] https://www.youtube.com/watch?v=Nb2tebYAaOA&t=1800
| kache_ wrote:
| moar coars
| wffurr wrote:
| Quantum effects haven't been relevant for a while now. The
| "nanometer" numbers are marketing around different transistor
| topologies like FinFET and GAA (Gate-all-around). There's a
| published roadmap out to "0.7 eq nm". Note how the
| "measurements" all have quotes around them:
|
| https://www.extremetech.com/computing/309889-tsmc-starts-dev...
| viktorcode wrote:
| Eventually, CPUs will have to focus on going wide, i.e.
| growing the number of cores and improving interconnects.
| jng wrote:
| Modern process node designations (5nm, 3nm...) are not
| measurements any more; they are marketing terms. The actual
| shrink is a lot smaller than the name would seem to
| indicate, and not approaching the quantum limits as fast as
| it may appear.
| sobellian wrote:
| If I recall correctly from my uni days, one of the big
| challenges with further shrinking the physical gates is that
| the parasitic capacitance on the gates becomes very hard to
| control, and the power consumption of the chip is directly
| related to that capacitance. Of course, nothing is so simple
| and I'm sure Intel can make _some_ chips at very small
| process sizes, but at the cost of horrible yield.
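|
| For concreteness, the first-order model I'm thinking of, as
| a sketch with purely illustrative numbers:
|
|     # Dynamic power P ~ alpha * C * V^2 * f: switching power
|     # scales linearly with the switched capacitance.
|     def p_dyn(alpha, c_farads, volts, hertz):
|         return alpha * c_farads * volts ** 2 * hertz
|
|     # Made-up numbers: 20% activity factor, 10 nF effective
|     # switched capacitance, 1.2 V, 4 GHz.
|     print(p_dyn(0.2, 10e-9, 1.2, 4e9))  # ~11.5 W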
| chaorace wrote:
| I did not know that! Though, that answer raises its own
| questions...
|
| If the two are entirely unlinked, what's stopping Intel from
| slapping "Now 3nm!" on their next gen processors? Surely
| _some_ components must be at the advertised size, even if
| it's no longer a clear-cut all-or-nothing descriptor, right?
| What's actually being sized down and why is it seemingly
| posing so many challenges for Intel's supply chain?
| rrss wrote:
| This article has a pretty good overview of the situation
| and other metrics that actually track progress:
| https://spectrum.ieee.org/semiconductors/devices/a-better-
| wa...
| okl wrote:
| There's a nice wiki where you can look up more detailed
| specs on the processes of each contender, e.g. 5nm:
| https://en.wikichip.org/wiki/5_nm_lithography_process
| cbozeman wrote:
| I think it'll be a good thing when people stop worrying
| about process node technology and start worrying about
| performance and power usage.
|
| Intel's 14nm chips are already competitive with AMD's
| (TSMC's, really) 7nm chips. The i7-11700, or whatever the
| newest one coming out soon is called, is going to be pretty
| much exactly at parity with AMD's Ryzen 5000 series.
|
| So if node shrinkage brings such a dramatic improvement in
| performance and power usage, then when Intel unfucks
| themselves and refines their 10nm node and 7nm node and
| whatever node comes after that, they'll clearly be more
| performant than AMD... and Apple's M1.
|
| Process technology is holding Intel back. They fix that,
| they get scary again.
| kllrnohj wrote:
| > I think it'll be a good thing when people stop worrying
| about process node technology and start worrying about
| performance and power usage.
|
| I think it's more that people attribute too much
| significance to process node technology when trying to
| understand why performance & power are what they are.
|
| For single-core performance, the gains from a node shrink
| are in the low-teens percentage range. Power improvements
| at the same performance are a bit better, but still not as
| drastic as people tend to assume.
|
| 10-20 years ago just having a better process node was a
| _massive_ deal. These days it's overwhelmingly CPU
| design & architecture that dictate things like
| single-core performance. We've been "stuck" in the 3-5GHz
| range for something like half a decade now, and TSMC has
| worse performance here than Intel's existing 14nm. There
| still hasn't been a single TSMC 7nm or 5nm part that hits
| that magical 5GHz mark reliably enough for marketing, for
| example. And that's all process node performance is -
| clock speed. The M1 only runs at 3.2GHz - you could build
| that on Intel's 32nm without any issues. Power
| consumption would be a lot worse, but you could have had
| "M1-like" single-core performance way back in 2011 - if
| you had a time machine to take back all the single-core
| CPU design lessons & improvements, that is.
| adrian_b wrote:
| You are right that, due to their design, CPUs like the
| Apple M1 can reach the same single-thread performance as
| Intel/AMD at a much lower clock frequency, and that such
| a clock frequency could be reached much earlier - Intel's
| Nehalem already hit 3.3 GHz as turbo in 2009, and Sandy
| Bridge in 2011 had 3.4 GHz as its base clock frequency.
| Even so, it would have been impossible to make a CPU like
| the Apple M1 in any earlier technology, not even Intel's
| 14 nm.
|
| To achieve its very high IPC, M1 multiplies a lot of
| internal resources and also uses very large caches. All
| those require a huge number of transistors.
|
| Implementing an M1-like design in an earlier technology
| would have required a very large area, resulting in a
| price so large and also in a power consumption so large
| that such a design would have been infeasible.
|
| However, you are partially right in the sense that Intel
| clearly was overconfident due to their clock frequency
| advantage and they have decided on a roadmap to increase
| the IPC of their CPUs in the series Skylake => Ice Lake
| => Alder Lake that was much less ambitious than it should
| have been.
|
| While Tiger Lake and Ice Lake have about the same IPC,
| Alder Lake is expected to bring an increase similar to
| that from Skylake to Ice Lake.
|
| Maybe that will be competitive with Zen 4, but it is
| certain that the IPC of Alder Lake will still be lower
| than the IPC of Apple M1, so Intel will continue to be
| able to match the Apple performance only at higher clock
| frequencies, which cause a higher power consumption.
| kllrnohj wrote:
| > To achieve its very high IPC, M1 multiplies a lot of
| internal resources and also uses very large caches. All
| those require a huge number of transistors.
|
| Yes & no. Most of the M1 die isn't spent on the CPU; it's
| spent on things like the GPU, the neural engine, and SLC
| cache. A "basic" dual-core CPU-only M1 would have been
| very manufacturable back in 2011 or so. After all, Intel
| at some point decided to spend a whole lot of transistors
| adding a GPU to every single CPU regardless of worth -
| there were transistors to spare.
| adrian_b wrote:
| True, but the M1 CPU, together with the necessary cache
| and memory controller still occupies about a third of the
| Apple M1 die.
|
| In the Intel 32-nm process, the area would have been 30
| to 40 times larger than in the TSMC 5 nm process.
|
| The 32-nm die would have been as large as a book, many
| times larger than any manufacturable chip.
|
| By 2011, 2-core CPUs would not have been competitive, but
| even cutting the area in half is not enough to bring the
| size into the realm of the possible.
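|
| The back-of-envelope arithmetic, naively assuming area
| scales with the square of the (marketing) node number:
|
|     ratio = 32 / 5          # linear shrink, 32 nm -> 5 nm
|     print(ratio ** 2)       # ~41x area: the "30 to 40 times"
|     print(30 * ratio ** 2)  # 30 mm^2 block -> ~1230 mm^2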
| kllrnohj wrote:
| Where are you getting the claim that the M1 CPU + memory
| controller is about a third of the M1 die? Looking at
| this die shot + annotation:
| https://images.anandtech.com/doci/16226/M1.png The
| Firestorm cores + 12MB cache are _far_ less than 1/3rd of
| the die, and the memory controller doesn't look
| particularly large.
|
| The M1 total is 16B transistors. A 2700K on Intel's 32nm
| was 1.1B transistors. You're "only" talking something
| like ~4x the size necessary, if that. Of course the 2700K
| already has a memory controller on it, so you really just
| need the Firestorm-cores part of the M1, which is a _lot_
| less than 1/3rd of the die size.
|
| But let's say you're right and it is 1/3rd. That means
| you need ~5B transistors. Nvidia was doing 7B transistors
| on TSMC's 28nm in 2013 on consumer parts (GTX 780).
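|
| Spelling out the arithmetic with the figures above:
|
|     m1_total = 16e9        # reported M1 transistor count
|     cpu_third = m1_total / 3
|     i7_2700k = 1.1e9       # Intel 32nm, 2011
|     gtx_780 = 7e9          # TSMC 28nm, 2013
|     print(cpu_third / i7_2700k)  # ~4.8x a 2011 consumer CPU
|     print(cpu_third / gtx_780)   # ~0.76x a 2013 consumer GPU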
| adrian_b wrote:
| I was looking at the same image.
|
| A very large part of the die is not labelled and it must
| include some blocks that cannot be omitted from the CPU,
| e.g. the PCIe controller and various parts from the
| memory controller, e.g. buffers and prefetchers.
|
| The area labelled for the memory channels seems to
| contain just the physical interfaces for the memory,
| which is why it is small. The complete memory controller
| must include parts of the unlabelled area.
|
| Even if the CPU part of the M1 were smaller, e.g. just a
| quarter of the die, that would be 30 square mm. In the 32
| nm technology that would likely need much more than 1000
| square mm, i.e. it would be impossible to manufacture.
|
| The number of transistors claimed for various CPUs or
| GPUs is mostly meaningless and usually very far from the
| truth anyway.
|
| The only thing that matters for estimating the costs and
| the scaling to other processes is the area occupied on
| the die, which is determined by many more factors than
| the number of transistors used, even if that number were
| reported accurately. (The transistors can have very
| different sizes and the area of various parts of a CPU
| may be determined more by the number of interconnections
| than by the number of transistors.)
| branko_d wrote:
| > We've been "stuck" at the 3-5ghz range for something
| like half a decade
|
| It's closer to two decades, actually. Pentium 4
| (Northwood) reached 3.06 GHz in 2002, using 130 nm
| fabrication process.
| adrian_b wrote:
| Since AMD introduced its first 7-nm chip, Intel's
| 14-nm chips have never been competitive.
|
| Intel's 14-nm process has only 1 advantage over any other
| process node, including Intel's own 10 nm: the highest
| achievable clock frequency, of up to 5.3 GHz.
|
| This advantage is very important for games, but not for
| most other purposes.
|
| Since the first 7-nm chip of AMD, their CPUs consume much
| less power at a given clock frequency than Intel's 14 nm.
|
| Because of this, whenever more cores are active and the
| clock frequency is limited by the total power
| consumption, the clock frequency of the AMD CPUs is
| higher than that of any Intel CPU with the same number of
| active cores. This led to AMD winning any multi-threaded
| benchmark even with Zen 2, when they did not yet have the
| advantage of a higher IPC than Intel, like they have with
| Zen 3.
|
| With the latest variant of Intel's 10 nm process, Intel
| has about the same power consumption at a given frequency
| and the same maximum clock frequency as the TSMC 7 nm
| process.
|
| So Intel should have been able to compete now with AMD,
| except that they still appear to have huge difficulties
| in making larger chips in sufficient quantities, so they
| are forced to use workarounds, like the launch of the
| Tiger Lake H35 series of laptop CPUs with smaller dies,
| to have something to sell until they are able to produce
| the larger 8-core Tiger Lake H CPUs.
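|
| A crude sketch of the power-limited clock effect,
| modelling per-core power as k * f^3 (voltage roughly
| tracks frequency) and capping at a process-dependent
| fmax; every constant here is made up for illustration:
|
|     def clock_ghz(budget_w, cores, k, fmax):
|         f = (budget_w / (cores * k)) ** (1 / 3)
|         return min(fmax, f)
|
|     for n in (1, 8, 16):
|         eff = clock_ghz(105, n, k=0.35, fmax=4.7)  # efficient
|         hot = clock_ghz(105, n, k=0.60, fmax=5.3)  # high fmax
|         print(n, round(eff, 2), round(hot, 2))
|
|     # With 1 core active the high-fmax process wins; with
|     # 8 or 16 active cores the efficient process clocks
|     # higher, matching the benchmark pattern above.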
| Stevvo wrote:
| "This advantage is very important for games, but not for
| most other purposes."
|
| I disagree. The majority of desktop applications are only
| lightly threaded, e.g. Adobe products, office suites,
| Electron apps, and anything mostly written before 2008.
| adrian_b wrote:
| You are right that those applications benefit from a
| higher single-thread performance.
|
| Nevertheless, unlike in competitive games, the few
| percent of extra clock frequency that Intel previously
| had in Comet Lake versus Zen 2, and which Intel will
| probably have again in Rocket Lake versus Zen 3, is not
| noticeable in office applications or Web browsing, so it
| is not a reason to choose one vendor over the other.
| ksec wrote:
| >Intel's 14nm chips are already competitive with AMD's
| (TSMC's, really) 7nm chips.
|
| The only metric on which Intel's 14nm beats TSMC's 7nm
| is clock speed ceiling. Other than that, there is
| nothing competitive about an Intel 14nm chip compared to
| an AMD (TSMC) 7nm chip from a process perspective.
|
| And that is not a fault of TSMC or AMD. They just decided
| not to pursue that route.
| ksec wrote:
| >"Now 3nm!" on their next gen processors?
|
| Nothing.
|
| It started when Samsung used feature-size names just to
| gain a competitive marketing advantage. Then TSMC had to
| follow, because their customers and shareholders were
| putting a lot of pressure on them.
|
| While ingredient branding is important, at the end of the
| day the chip has to perform. Otherwise your ingredient
| branding would suffer and such a strategy would no longer
| work. Samsung is already tasting its own medicine.
|
| P.S - That "Now 3nm!" reminds me of "3D Now!" from AMD.
| cglace wrote:
| They can call it whatever they want but it will need to
| show huge performance improvements for anyone to actually
| care.
| [deleted]
| JumpCrisscross wrote:
| > _a federal subsidy program should operate as a purchase
| guarantee: the U.S. will buy A amount of U.S.-produced 5nm
| processors for B price; C amount of U.S. produced 3nm processors
| for D price; E amount of U.S. produced 2nm processors for F
| price; etc._
|
| I really like this concept, though I'd advocate for a straight
| subsidy (sales of American-made chips to a U.S.-registered and
| based buyer get $ credit, paid directly to the supplier and
| buyer, on proof of sale and proof of purchase) given the
| logistical issues of the U.S. government having a stockpile of
| cutting-edge chips it can't dump on the market.
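|
| To make the quoted A-F schedule concrete (quantities and
| prices below are entirely hypothetical placeholders):
|
|     schedule = [("5nm", 1_000_000, 300.0),  # node, units, $
|                 ("3nm", 750_000, 450.0),
|                 ("2nm", 500_000, 600.0)]
|     total = sum(u * p for _, u, p in schedule)
|     print(f"${total / 1e6:,.0f}M")  # worst-case outlay: $938M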
| Causality1 wrote:
| The funny thing is, in the time period being addressed first in
| the article (2013) Intel was better at mobile than it is now. Its
| Bay Trail and Cherry Trail chips had more performance per dollar
| than even today's offerings, eight years later. Intel just
| decided low-margin wasn't a concept in which they were
| interested.
| 11thEarlOfMar wrote:
| A major incentive for the US government to get involved is
| touched on. Not only is Taiwan 'just off the coast' of China,
| China is coming for it and intends to assimilate Taiwan back
| into China just as Hong Kong and Macau have been.
|
| At that point, the only sustainable leverage the rest of the
| world would have in chip technology would be ASML.
| aluminussoma wrote:
| It is thought-provoking to consider that the USA's interest
| in Taiwan may be more about protecting TSMC than protecting a
| democratic state in East Asia. By this line of thinking,
| building capacity in Arizona, or anywhere outside Taiwan, is
| good for TSMC and for the USA but weakens Taiwan.
| oblio wrote:
| > It is thought-provoking to consider that the USA's interest
| in Taiwan may be more about protecting TSMC than protecting a
| democratic state in East Asia.
|
| Why is it thought-provoking? It's always realpolitik. All
| wars are.
|
| There's always a pretext but the subtext is what actually
| causes wars.
| z77dj3kl wrote:
| A question for those who've been around in tech longer: was
| Google really the first and "disruptive" user of x86 commodity
| hardware in datacenters that everyone else then lagged behind? Or
| was it just a general wave and shift in the landscape?
| gorjusborg wrote:
| Totally reasonable argument, and I think most would be better off
| with an independent, US-based foundry.
|
| Unfortunately, I doubt that the US government functions well
| enough at this point to recognize the threat and overcome the
| influence Intel's money would wield against the effort.
| NortySpock wrote:
| I thought GlobalFoundries was US-based, and they have a fab
| in Vermont (and Germany and Singapore).
| nicoburns wrote:
| Yes, but GlobalFoundries has given up on developing
| leading-edge process nodes.
| ncmncm wrote:
| It's time for more predictions.
|
| 1. Apple's CPUs will not improve anywhere near as fast as the
| competition. Computation per watt of (some) competitors' products
| will outpace Apple's in just a few years.
|
| 2. Intel will come roaring back on the back of TSMC, but first
| will need to wait on growth of manufacturing capacity, as certain
| competitors can get more money per mm^2.
|
| 3. Intel will fail to address its product-quality problem, but it
| will not end up hurting them.
___________________________________________________________________
(page generated 2021-01-19 23:00 UTC)