[HN Gopher] Will Intel's AXG division survive Pat Gelsinger's axe?
___________________________________________________________________
Will Intel's AXG division survive Pat Gelsinger's axe?
Author : oumua_don17
Score : 54 points
Date : 2022-08-09 18:05 UTC (4 hours ago)
(HTM) web link (www.jonpeddie.com)
(TXT) w3m dump (www.jonpeddie.com)
| rektide wrote:
| None of the other items Intel has let go of are an existential
| must have. GPUs are a must-have in any computer, big or small.
| Raw number crunching is core to the very profitable HPC market.
| Players like Nvidia have fully custom interconnects that
| threaten to dry up even the CPU sales Intel still gets in this
| market. And GPUs are a focus in mobile, where expectations have
| risen sharply.
|
| This group is supposedly 6 years old. But by comparison, the
| other players have been pouring money in for decades. GPUs are
| fantastically complicated systems. Getting started here is
| enormously challenging, with vast demands. Just shipping is a
| huge accomplishment. Time to grow into it & adjust is necessary.
| It's such a huge challenge, and I really hope AXG is given the
| time, resources, iterations, & access to fabs it'll take to get
| up to speed.
| khitchdee wrote:
| Intel should double down on its core business which is selling
| x86 chips for Windows PCs. If it loses to AMD+TSMC here,
| there's nowhere left to hide. IFS is very long term and a new
| business opportunity in a market where Intel is not the leader.
| x86 for Windows is their bread and butter. They need to throw
| everything else out and downsize to just excelling at what
| they've always done. My guess is that AXG is a separate
| division from the one that makes the graphics for their core
| microprocessors.
| mschuster91 wrote:
| > Intel should double down on its core business which is
| selling x86 chips for Windows PCs. If it loses to AMD+TSMC
| here, there's nowhere left to hide. IFS is very long term and
| a new business opportunity in a market where Intel is not the
| leader.
|
| The very second someone other than Apple brings a competitive
| ARM desktop CPU to market, it's game over for Intel. x86_64
| literally _cannot_ compete with modern ARM designs, simply
| because of how much utter garbage from roughly thirty years of
| history it has accumulated and absolutely must keep supporting,
| since even the boot process still requires all that crap,
| whereas ARM was never shy about cutting things out and breaking
| backwards compatibility to stay performant.
|
| The only luck Intel has at the moment is that Samsung and
| Qualcomm are dumpster fires - Samsung has enough problems
| getting a phone to run at halfway decent performance with their
| Exynos line, and Qualcomm managed to _completely_ botch their
| exclusive deal with Microsoft [1] (hardly surprising to anyone
| who has ever had the misfortune of working with their crap). A
| small startup that is not bound by ages of legacy and corporate
| red tape should be able to complete such a project - Annapurna
| Labs has proven it's possible to break into the ARM server CPU
| market well enough to get acquired by Amazon.
|
| [1] https://www.xda-developers.com/qualcomm-exclusivity-deal-
| mic...
| khitchdee wrote:
| Apple will always be niche because they are too expensive.
| ARM-based PC client processors accounted for about 9% of the
| total market, but I don't expect that number to grow much
| higher, due to the cost of Macs. Also, you can't bring out a
| desktop PC based on Linux, because that's a very slowly growing
| market segment, and macOS is proprietary. I don't think Apple
| will start another Power Computing type scenario where they
| open their platform to others. So it's not a matter of
| performance; it's a matter of price. So, on Windows, assuming
| Macs don't take away any more market share from Windows, they
| need to beat AMD+TSMC.
| mschuster91 wrote:
| > So, on Windows, assuming Macs don't take away any more
| market share from Windows, they need to beat AMD+TSMC.
|
| No. All it needs is
|
| - Microsoft and Qualcomm breaking their unholy and IMHO
| questionably legal alliance
|
| - an ARM CPU vendor willing to do what Apple did and add
| hardware support for accelerating translated x86 code (IIRC,
| x86 and ARM use different memory access/barrier models, and
| Apple simply extended their cores so translated-x86 threads can
| run under the same memory ordering model as native x86; see the
| sketch at the end of this comment)
|
| - an ARM CPU vendor willing to implement basic
| functionality like PCIe actually according to spec - even
| the Raspberry Pi which is the closest you can get to a
| mass market general-purpose ARM computer has that broken
| [1]
|
| - someone (tm) willing to define a common standard of
| bootup sequence/standard feature set. Might be possible
| that UEFI fills the role; the current ARM bootloaders are
| a hot mess compared to the old and tried BIOS/boot sector
| x86 approach, and most (!) ARM CPUs/BSPs aren't exactly
| built with "the hardware attached to the chips may change
| at will" in mind.
|
| Rosetta isn't patented to my knowledge; absolutely nothing is
| stopping Microsoft from doing the same as part of Windows.
|
| [1] https://www.hackster.io/news/jeff-geerling-shows-off-
| an-amd-...
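|
| To make that memory-ordering point concrete, here's a minimal,
| hypothetical C++ sketch (not anyone's actual translator code):
| under x86's TSO model the two plain stores below become visible
| in order, while ARM's weaker model allows them to be reordered,
| so a naive binary translator has to insert barriers on basically
| every access unless the core offers an x86-style ordering mode.
|
|     // Hypothetical illustration of why the memory model
|     // matters when translating x86 binaries to ARM.
|     #include <atomic>
|     #include <cassert>
|     #include <thread>
|
|     std::atomic<int> data{0};
|     std::atomic<int> ready{0};
|
|     void producer() {
|         // On x86, plain stores already become visible in
|         // program order (TSO), so no barrier is needed here.
|         data.store(42, std::memory_order_relaxed);
|         ready.store(1, std::memory_order_relaxed);
|     }
|
|     void consumer() {
|         while (ready.load(std::memory_order_relaxed) == 0) {}
|         // Under x86-TSO this assert can't fire; under ARM's
|         // relaxed ordering it can, unless barriers (or a TSO
|         // mode like Apple's) restore the x86 guarantees.
|         assert(data.load(std::memory_order_relaxed) == 42);
|     }
|
|     int main() {
|         std::thread t1(producer), t2(consumer);
|         t1.join();
|         t2.join();
|     }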
| khitchdee wrote:
| If you translate, you do face performance issues. Apple or
| some other vendor cannot make an ARM chip so fast, at a
| competitive cost, that it beats an x86 chip in emulation mode.
| The underlying acceleration techniques for both ARM and x86 are
| the same.
| mschuster91 wrote:
| > Apple or some other vendor cannot make an ARM chip so fast,
| at a competitive cost, that it beats an x86 chip in emulation
| mode.
|
| This reminds me of Iron Man 1... "Tony Stark was able to
| build this in a cave! With a box of scraps! - Well, I'm
| sorry. I'm not Tony Stark."
|
| Apple has managed to pull it off so well that the M1 blasted
| an _i9_ to pieces [1]. The M1 is just so much more performant
| than an Intel i9 that the 20% performance loss compared to
| native code didn't matter.
|
| [1] https://www.macrumors.com/2020/11/15/m1-chip-
| emulating-x86-b...
| WkndTriathlete wrote:
| And for 99.9999% of users the M1 performance benchmarks
| vs. i9 _don't matter one bit._
|
| The use case for the vast majority of laptops involves I/O-
| and memory-bound applications. Very few CPU-bound applications
| are run on consumer laptops, or even corporate laptops, for the
| most part. CPU-bound applications should be getting run on ARM
| or GPU clusters in the cloud.
|
| The use case for an M1 in laptops is its _power_ numbers vs. an
| i9's.
| mschuster91 wrote:
| > The use case for the vast majority of laptops involves I/O-
| and memory-bound applications.
|
| That's where the M1 just blows anything desktop-Intel out of
| the water, partially because they integrate a lot of stuff
| directly on the SoC, and partially because they place things
| like RAM and persistent storage _extremely_ close to the SoC,
| whereas on an Intel desktop the RAM, storage and peripheral
| controllers are all separate chips.
|
| The downside is obviously that you can't get more than 16GB of
| RAM with an M1 or 24GB with the new M2, and you cannot upgrade
| either memory at all without a high-risk soldering job [1]...
| but given that Apple attaches the persistent storage so closely
| to the SoC for swapping, it doesn't matter all that much.
|
| [1] https://www.macrumors.com/2021/04/06/m1-mac-ram-and-
| ssd-upgr...
| khitchdee wrote:
| That performance difference is not due to architecture but to
| process technology. Intel is on Intel's best node and Apple is
| on TSMC's.
| spookie wrote:
| No. The architecture is the biggest factor. Damn, even Jim
| Keller talks about how most programs use a very small subset of
| instructions. It isn't that RISC works miracles, but it sure
| helps when your power budget is small.
| frostburg wrote:
| Windows runs on ARM, too (with issues, but those are also
| related to the questionable performance of the SoCs).
| spookie wrote:
| > Also, you can't bring out a desktop PC based on Linux,
| because that's a very slowly growing market segment
|
| Wouldn't bet on that one
| acomjean wrote:
| The fact that you can still run old software on any PC is a
| pretty tremendous feature. Apple kicked 32-bit apps off the Mac
| and iPhone; its users accept that, but business PC users might
| not. I have a number of iOS games I enjoyed that are no longer
| usable (yeah, I could have stopped upgrading my i-device, but
| the security problems make that not a viable option).
|
| Conventional wisdom in the mid 90s was that PowerPC (RISC)
| would eventually be better than x86, but it never happened.
| They worked around the issues. And Microsoft eventually made an
| OS that wasn't a crash fest (I'm looking at you, Windows ME).
|
| Also, no one can afford to be on TSMC's best node when Apple
| buys all the production. Apple's been exclusive on the best
| node for at least a couple of years now. Even AMD isn't using
| TSMC's best node yet.
| mschuster91 wrote:
| > Apple kicked 32-bit apps off the Mac and iPhone; its users
| accept that, but business PC users might not.
|
| Yeah, but that was (at least on the Mac) not a technical
| requirement; they just didn't want to carry around the
| kernel-side support any more. IIRC it didn't take long until
| WINE/CrossOver figured out a workaround to run old 32-bit
| Windows apps on modern Macs.
|
| > Also, no one can afford to be on TSMC's best node when
| Apple buys all the production. Apple's been exclusive on the
| best node for at least a couple of years now. Even AMD isn't
| using TSMC's best node yet.
|
| Samsung has their own competitive fab process, but they
| still have yield issues [1]. It's not like TSMC has a
| monopoly by default.
|
| [1] https://www.gsmarena.com/samsung_claims_that_yields_f
| rom_its...
| PaulHoule wrote:
| There is just one serious problem with the x86 architecture,
| and that is the difficulty of decoding instructions that can be
| anywhere between 1 and 15 bytes long. It's not hard to design a
| fast instruction decoder, but it is hard to design a fast
| instruction decoder that is power efficient.
|
| ... Oh yeah, and there are the junkware features like SGX and
| TSX, and the long legacy of pernicious segmentation, which
| means Intel is always playing with one hand tied behind its
| back; for instance, the new laptop chips that should support
| AVX-512 but don't, because they just had to add additional
| low-performance cores.
| Miraste wrote:
| I may be showing ignorance of some important use case here,
| but I'd put AVX-512 in the "junkware feature" category. It can
| run AI/HPC code faster than scalar code, but still so much
| slower than a GPU as to be pointless; and when it does run, it
| lowers the frequency and grinds the rest of the processor to a
| halt until it eventually overheats anyway. Maybe, maybe it has
| a place in their big desktop cores, but why would anyone want
| it in a laptop? You can't even use it in a modern
| ultrabook-style chassis without a thermal shutdown and probably
| first-degree burns, to say nothing of the battery drain.
| PaulHoule wrote:
| Yes and no.
|
| If you have all the cores running hard, the power consumption
| goes up a lot, but if it is just one core it won't go up too
| much. If worse comes to worst you can throttle the clock.
|
| In general it's a big problem with SIMD instructions that
| they aren't compatible across generations of
| microprocessors. It's not a problem for a company like
| Facebook that buys 20,000 of the same server but normal
| firms avoid using SIMD entirely or they use SIMD that is
| many years out of date. You see strange things like Safari
| not supporting WebP images on a 2013 Mac while Firefox supports
| them just fine, because Apple wants to use SIMD acceleration
| and they'd be happier if you replaced your 2013 Mac with a new
| one.
|
| I worked on a semantic search engine that used an
| autoencoder neural network that was made just before GPU
| neural networks hit it big and we wrote the core of our
| implementation in assembly language using one particular
| version of AVX. We had to do all the derivatives by hand
| and code them up in assembly language.
|
| By the time the product shipped we bought new servers
| that supported a new version of AVX that might have run
| twice as fast but we had no intention of rewriting that
| code and testing it.
|
| Most organizations don't want to go through the hassle of
| keeping up with the latest SIMD flavor of the month so a
| lot of performance is just left on the table. Intel is
| happy because their marketing materials can tell you how
| awesome the processor is but people in real life don't
| experience that performance.
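|
| The usual workaround is runtime dispatch: compile a couple of
| SIMD variants and pick one when the program starts. A minimal
| sketch, assuming GCC or Clang on x86 (a hypothetical example,
| not the code from that project):
|
|     #include <cstddef>
|
|     // Variant compiled with AVX2 enabled so the loop can be
|     // auto-vectorized with 256-bit instructions.
|     __attribute__((target("avx2")))
|     static void add_avx2(const float* a, const float* b,
|                          float* out, std::size_t n) {
|         for (std::size_t i = 0; i < n; ++i)
|             out[i] = a[i] + b[i];
|     }
|
|     // Baseline variant that runs on any x86-64 CPU.
|     static void add_scalar(const float* a, const float* b,
|                            float* out, std::size_t n) {
|         for (std::size_t i = 0; i < n; ++i)
|             out[i] = a[i] + b[i];
|     }
|
|     void add(const float* a, const float* b,
|              float* out, std::size_t n) {
|         // Detect the CPU once and cache the answer.
|         static const bool has_avx2 = [] {
|             __builtin_cpu_init();
|             return __builtin_cpu_supports("avx2") != 0;
|         }();
|         if (has_avx2) add_avx2(a, b, out, n);
|         else          add_scalar(a, b, out, n);
|     }
|
| It doesn't remove the testing burden, but it does mean one
| binary can use whatever the deployed hardware offers instead of
| being frozen at the SIMD level you picked years ago.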
| spookie wrote:
| China is gonna be that second someone, with RISC-V.
| Honestly, it's kind of amazing how bad it can get for them.
| ChuckNorris89 wrote:
| _> None of the other items Intel has let go of are an
| existential must have._
|
| Except mobile ARM chips and mobile LTE modems, both of which
| Intel sold off, and those are some of the most desirable things
| to make right now. Just ask Qualcomm.
| mrandish wrote:
| Yep. Competent mobile ARM and LTE modem cores would be _very_
| nice to have in their tech stack right about now. I think a
| _credible_ (if not fully competent) GPU stack is pretty
| essential for strategic relevance going forward. This seems
| like something they _have_ to make work.
| pjmlp wrote:
| Intel famously used to lie in their OpenGL drivers, asserting
| that features were supported when they were actually software
| rendered.
|
| Larrabee was shown at GDCE 2009 as if it were going to render
| the competition useless, then faded away.
|
| Their GPU debugger marketing sessions used to be focused on how
| to optimise games for integrated GPUs.
|
| I really don't get how they keep missing the mark versus AMD
| and NVidia in both GPU design and developer tooling.
| pinewurst wrote:
| Don't forget Intel's world-beating i740!
| Miraste wrote:
| Ah, the i740. Because VRAM is a conspiracy cooked up by
| 3DFX and Nvidia to take your money, and real gamers don't
| need any.
| muro wrote:
| Anyone who bought one will not forgive them that one.
| csense wrote:
| What on earth about GPU development costs $2 billion? That
| seems _obscenely_ expensive. Would anyone with detailed
| knowledge of the industry care to weigh in on what they might
| actually be spending that money on?
| killingtime74 wrote:
| I think the article alludes to it being massive payroll.
| terafo wrote:
| Drivers, and a couple of attempts that they took behind the
| barn and shot (but which did make it into silicon).
| ChuckNorris89 wrote:
| Try hiring GPU experts and see how "easy" it is. It's a very
| small talent pool.
| ksec wrote:
| >Started in 2016 the dGPU group snatched showboater Raja Koduri
| away from AMD
|
| We are in 2022 and I still don't understand why Raja is
| popular. That is one of the reasons I have been extremely
| sceptical of Intel's GPUs since the very beginning, to the
| point where I got a lot of bashing on Anandtech and HN.
|
| And I have been naming drivers as the major concern since 2016,
| citing the PowerVR Kyro on desktop as an example, and even
| pointing that out on Twitter, to which one of the Intel
| engineers on the GPU team replied with something like "GPU
| drivers are a solved problem".
|
| I do love to be wrong. But it is increasingly looking like
| another item to be added to my book of predictions that came
| true.
| philjohn wrote:
| To be fair, in the consumer space, he seemed to be holding them
| back. In the data centre acceleration space, they're still
| essentially using a descendant of the Vega uArch.
| slimginz wrote:
| Honestly, I was in the same boat as you, until I realized I
| didn't know anything about what he'd done before he became head
| of the Radeon group. According to his Wikipedia entry:
|
| > He became the director of advanced technology development at
| ATI Technologies in 2001.[3] Following Advanced Micro Devices's
| 2006 acquisition of ATI, he served as chief technology officer
| for graphics at AMD until 2009. At S3 and ATI he made key
| contributions to several generations of GPU architectures that
| evolved from DirectX Ver 3 till Ver 11.[4] He then went to
| Apple Inc., where he worked with graphics hardware, which
| allowed Apple to transition to high-resolution Retina displays
| for its Mac computers.[5]
|
| So he has a history of launching some really good products but
| I'd say it's been at least a decade since he's been involved in
| anything industry leading.
| PolCPP wrote:
| And ironically, after he left, AMD started improving its
| drivers.
| allie1 wrote:
| Very telling... the guy hasn't had a success in the last 10
| years and still gets top roles. Seems like he can sell
| himself well.
|
| There needs to be accountability for these failures of
| execution.
| Miraste wrote:
| I'm with you here. At AMD, Koduri released GPU after GPU that
| was slow, terribly hot and inefficient, plagued by driver
| problems, and overpriced. He left on terrible terms for a
| number of reasons, but one of them was his complaint that
| Radeon was underfunded and deprioritized, so he couldn't
| compete with Nvidia. Here we are, six years of practically
| unlimited funding and support from a company with a greater R&D
| budget than AMD's entire 2016 gross...
|
| and he's done the exact same thing.
| ChuckNorris89 wrote:
| _> and he's done the exact same thing._
|
| That's how failing upwards works in this racket. As long as
| you have a great looking resume at a few giants in the
| industry with some fancy titles to boot, you're set for life
| regardless of how incompetent you are.
| Scramblejams wrote:
| _Intel is now facing a much stronger AMD and Nvidia, plus six
| start-ups_
|
| Who are the six startups? It mentions four are in China, and two
| in the US.
| killingtime74 wrote:
| Vaporware unless we can buy their cards anyway
| Kon-Peki wrote:
| Apple Silicon has absolutely proven that an integrated gpGPU is a
| must on modern consumer chips.
|
| Intel has to slog through this. Forget about CUDA support - Apple
| is going to kill that monopoly for them.
| Aromasin wrote:
| I'm still convinced by Intel's and Pat's XPU strategy. Having a
| well rounded portfolio of CPUs, GPUs, SoCs, and FPGAs that all
| work flawlessly, alongside a comprehensive software environment
| to make cross-platform development easier (their OneAPI stack) is
| a dream for any OEM/ODM; provided it works.
|
| Their issue at the moment isn't strategy. It's execution.
| Axing their GPU division would only hurt their current plan,
| and do nothing to fix the systemic problem that they're missing
| deadlines and shipping incomplete products. From the outside
| looking in, it seems like there's some fat that needs trimming
| and people aren't pulling their weight. If they can scale back to
| efficient team and org sizes, cut the side projects, and focus on
| excellent software and hardware validation, I can see them
| pulling this off and Pat being lauded as a hero.
| klelatti wrote:
| You may well be right but if each of the individual components
| of that portfolio is worse than the competition then it will be
| an uphill struggle.
| PaulHoule wrote:
| People I know who develop high performance systems are so
| indifferent to OpenCL it's hard for me to picture what could
| make people develop for OpenCL.
|
| There is such a long term history of failure here that somebody
| has to make a very strong case that the next time is going to
| be different and I've never seen anyone at Intel try that or
| even recognize that history of failure.
| Miraste wrote:
| It's unfortunate, but the reality is it would take an act of
| god to unseat CUDA at this point.
| nicoburns wrote:
| Could Intel just implement CUDA support? That would certainly
| be a huge task, but Intel does have the resources for it.
| paulmd wrote:
| The GPU Ocelot project did exactly that at one point.
| It's probably a legal minefield for an organized
| corporate entity though.
|
| https://gpuocelot.gatech.edu/
| Aromasin wrote:
| Intel doesn't implement CUDA, but they're working on a
| CUDA to DPC++ (their parallel programming language)
| migration tool to make the jump less painful. I've tested
| some of the NVIDIA samples with it and it seems to do the
| vast majority of the grunt work, but there are still some
| sections where refactoring is required. You can find it
| in the OneAPI base toolkit.
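|
| For a sense of what the DPC++ side of such a migration roughly
| looks like, here's a minimal hypothetical SYCL vector-add (my
| own sketch, not output of the migration tool):
|
|     #include <sycl/sycl.hpp>
|     #include <cstddef>
|     #include <vector>
|
|     int main() {
|         constexpr std::size_t N = 1024;
|         std::vector<float> a(N, 1.0f), b(N, 2.0f), c(N);
|
|         sycl::queue q;  // default device, GPU if available
|         {
|             sycl::buffer<float> ba(a.data(), sycl::range<1>(N));
|             sycl::buffer<float> bb(b.data(), sycl::range<1>(N));
|             sycl::buffer<float> bc(c.data(), sycl::range<1>(N));
|
|             q.submit([&](sycl::handler& h) {
|                 sycl::accessor A(ba, h, sycl::read_only);
|                 sycl::accessor B(bb, h, sycl::read_only);
|                 sycl::accessor C(bc, h, sycl::write_only,
|                                  sycl::no_init);
|                 // Body of what would be the CUDA kernel:
|                 // c[i] = a[i] + b[i]
|                 h.parallel_for(sycl::range<1>(N),
|                                [=](sycl::id<1> i) {
|                     C[i] = A[i] + B[i];
|                 });
|             });
|         }  // buffers destruct here and copy results back
|         return 0;
|     }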
| TomVDB wrote:
| One of the big attractions of CUDA is not just the core
| (programming model, API, compiler, development tools) but also
| the libraries (cuDNN, cuFFT, cuSignal, ... it's a very long
| list). These libraries are usually not open source, and thus
| cannot be recompiled even if you have core CUDA support.
| VHRanger wrote:
| Vulkan compute might, over time, take root.
|
| The fact that a similar API can be used for training on servers
| and for inference everywhere from laptops to phones is an
| appealing proposition.
|
| Best part: most devices have decent Vulkan drivers. Unlike
| OpenCL.
| bobajeff wrote:
| That or they could just open their GPU cores to direct
| programming instead of locking it behind some API.
| amluto wrote:
| This depends on the degree to which the underlying
| hardware remains compatible across generations. nVidia
| can sell monster GPU systems today to run 5-year-old CUDA
| code faster without any recompilation or other mandatory
| engineering work. I don't know whether this is possible
| with direct hardware access.
| PaulHoule wrote:
| Intel's OneAPI (rebranded OpenCL) can allegedly do this.
|
| It also claims to do this across radically different
| architectures like FPGA and all I can say to that is "I
| find that very hard to believe"
| buildbot wrote:
| Yeah, in practice it is not really true. Maybe after hacking
| the code you can watch it compile for _days_ and then it will
| probably perform worse than a GPU anyway without extreme
| tuning, which defeats the point.
| wmf wrote:
| That's a "did you just tell me to go fuck myself?"
| solution. It didn't work for AMD and it won't work for
| anyone else. Developing a CUDA-to-GPU compiler and
| runtime stack is an immense amount of work and customers
| or "the community" can't afford it.
|
| Although for completeness I'll note that Intel's GPU
| architecture is documented:
| https://01.org/linuxgraphics/documentation/hardware-
| specific...
| tambourine_man wrote:
| Unless someone beats them on the hardware front, with GPUs
| that are 30-50% faster (or just as fast but 30-50% cheaper).
|
| I could see Apple and AMD, working with TSMC's latest node,
| stepping up to the challenge.
| ceeplusplus wrote:
| No way 30-50% is enough. You need 2x at least, if not
| more. There is a deep rooted ecosystem of CUDA apps. You
| would need to have at least 2-3 generations of 2x lead to
| get people to switch.
| pinewurst wrote:
| And could care even less about "oneAPI"(tm).
| mrtweetyhack wrote:
| khitchdee wrote:
| Execution is about engineers. Intel can't retain their
| engineers. Without good engineers you can't compete. So you
| downsize and try to compete only on one thing -- better x86.
| Pat G's return as CEO does help with engineer retention though.
| ChuckNorris89 wrote:
| _> Execution is about engineers._
|
| Meh, not really. Google has some of the best engineers in the
| world yet they fail at nearly everything that isn't related
| to search, android, ads and e-mail.
|
| Even Jensen Huang said that it's more about vision and not
| all about execution. He said in the early days of Nvidia all
| of their competitors at the time had engineers good enough to
| execute, yet only they had the winning vision that enabled
| them to make and sell exactly what the market wanted, nothing
| more, nothing less.
| R0b0t1 wrote:
| Necessary but not sufficient is the phrase.
| random314 wrote:
| > everything that isn't related to search, android, ads and
| e-mail.
|
| And YouTube.
|
| No wonder they are 10X as valuable as Intel
| khitchdee wrote:
| Google used to be search only. Now they have Android.
| That's pretty good.
| ChuckNorris89 wrote:
| And how many other dozens of products did Google build
| then kill off due to failure? Similar to Intel.
| khitchdee wrote:
| Intel does not have a number 2. They only own x86. They
| have failed to diversify.
| ChuckNorris89 wrote:
| _> Intel does not have a number 2_
|
| Yeah they do, Intel has the FPGA division they bought
| from Altera. That's a huge business on its own.
|
| And my point still stands: having great engineers is not
| enough for great execution. You need great leadership
| with a vision a-la Steve Jobs or Jensen Huang.
| __d wrote:
| They have _tried_ to diversify:
|
| They have competitive NICs, although they don't seem to
| be maintaining the lead there they once had.
|
| They bought competitive network switches. These have
| largely languished, in part because they sat on the IP
| and then targeted it at weird niches.
|
| They bought Altera. I feel like it had lost some momentum
| vs. Xilinx, but with AMD acquiring the latter, it's
| probably going to end up a wash.
|
| The AI chips are kinda too early to tell, but at least
| they're playing the game.
|
| Overall, I think they have squandered the massive
| advantage they had in CPUs for the last 3 decades.
| touisteur wrote:
| Oh yeah, I'm waiting for their 200G NICs while NVIDIA is
| ramping up ConnectX-7 with 400G on PCIe 5...
|
| The low-end AI chips are a mess. Myriad X can be used only
| through OpenVINO, and they've been closing off details about
| the internals... Keem Bay is... years late?
|
| Is there anything coming out of Intel these days?
|
| And what happened to Nervana Systems, which became PlaidML and
| then disappeared after being bought by Intel? MaxAs was really
| great and now, crickets.
|
| Even profiling tools, which they used to be on top of, don't
| seem to work well on Linux with the Tiger Lake GPU. It's so
| painful to debug and program, they might as well have not put
| it in...
| tambourine_man wrote:
| And that's what, 15 years old?
| AnimalMuppet wrote:
| The degree of market penetration that Android has
| maintained for that 15 years is a pretty impressive feat,
| both strategically and technically.
| pinewurst wrote:
| Android is still basically ads - in that it's another
| source of info fodder for their ad machine along with
| search.
| time0ut wrote:
| My impression is that most of Google's failures are
| strategic. Stadia, for example, has been executed very well
| in the technical sense. It just doesn't make a lot of sense
| strategically. I feel every failure I can think of fits
| this mold of great technology solving the wrong problem or
| held back from solving the right problem.
| PaulHoule wrote:
| For the last 20 years there have always been stories in the
| press about how Intel is laying off 10,000 engineers. It
| never struck me as a very psychologically safe place to work.
| AnimalMuppet wrote:
| IIRC, they had (maybe still have?) a policy of laying off
| the bottom 5%. Every year. For a company that size, it's a
| lot of engineers.
|
| But yes, that probably does not make it a psychologically
| safe place to work...
| alfalfasprout wrote:
| It's a bit of everything but yeah, what I've seen is
| basically that Intel has a combination of absolutely insane
| talent working on some really hard problems and a lot of
| mediocre talent that they churn through.
|
| And the reality is that yeah, with their comp for engineers
| so abysmally bad why would anyone really go there? Especially
| when it's not like they're getting great WLB.
| AceJohnny2 wrote:
| > _provided it works_
|
| Ay, there's the rub!
| khitchdee wrote:
| XPU is too generic a term and harmful looking at it from a
| business perspective.
| mugivarra69 wrote:
| isnt that amd rn
| tadfisher wrote:
| > Since Q1'21 when Intel started reporting on its dGPU group,
| known as AXG or accelerated graphics, the company has lost a
| staggering $2.1 billion and has very little to show for it.
|
| You know, except for a competitive desktop GPU. I'm actually
| impressed that it didn't take much longer and much more money to
| catch up with AMD and NVIDIA, given that those two were in
| business when 3dfx was still around.
| ChuckNorris89 wrote:
| I don't know why you're painting Intel as some new-entry GPU
| beginner here, but FYI, Intel has been making and selling
| integrated GPUs for over 20 years now, to the point that they
| pretty much dominate GPU market share in sheer numbers (most
| desktops and laptops sold in the last 10-15 years, even older
| Macs, are most likely going to have an Intel chip with
| integrated graphics, whether or not it's used in favor of a
| discrete GPU).
|
| Sure, their iGPU offerings were never competitive for gaming
| or complex tasks, but given that they came for "free" with your
| CPU, they were good enough for most businesses and consumers.
| They were also a major boon during the GPU shortage where
| gamers who built systems with AMD chips were left unable to use
| their PC while those who went Intel could at least use their PC
| for some productivity and entertainment until they could buy a
| dGPU.
|
| So it's not like they had to start absolutely from scratch
| here. In fact, Intel's latest integrated GPU architecture, Xe,
| was so good, it was beating the integrated Vega graphics AMD
| was shipping till the 6xxx series with RDNA2 in 2022, while
| also killing the demand for Nvidia's low end dGPUs for desktops
| (GT 1050) and mobile (MX350). Xe was also the first GPU on the
| market with AV1 decode support.
|
| So given this, Intel is definitely not a failure in the GPU
| space; they're definitely doing some things right, they just
| can't box in the ring with "Ali" yet. Anyone thinking they
| could leapfrog Radeon and Nvidia at the first attempt would be
| foolish. Intel should take the losses on the GPU division for a
| few more years and push through.
| hexadec wrote:
| > during the GPU shortage where gamers who built systems with
| AMD chips were left unable to use their PC
|
| This comment baffles me, both AMD and Intel have CPUs with
| onboard graphics and those without. You even noted the
| integrated graphics a sentence later.
|
| If anything, this is more evidence that AMD is following the
| Intel playbook by having that integrated CPU/ GPU
| architecture plan.
| ChuckNorris89 wrote:
| _> This comment baffles me, both AMD and Intel have CPUs
| with onboard graphics and those without. You even noted the
| integrated graphics a sentence later_
|
| Why does it baffle you? AMD has only been selling desktop
| chips with integrated GPUs for a few years now (they called
| them APUs), and those APUs were not that stellar on either the
| GPU or the CPU side, due to compromises on both.
|
| Most of the successful Ryzen chips AMD was selling for the
| desktop shipped without integrated GPUs, to save die space and
| cost, which hurt PC builders during the GPU scalpocalypse,
| while on the other hand almost Intel's entire desktop CPU range
| has had integrated GPUs for over 10 years now, enabling PC
| builders to at least use their PCs until a dGPU became
| available.
|
| Sure, Intel sold some CPUs without iGPUs but those were
| very few SKUs in comparison. Similarly, but in reverse, AMD
| also sold some Ryzen CPUs with iGPUs(APUs), but those were
| very few SKUs as their CPUs were weaker than the non-iGPU
| SKUs, and their outdated Vega iGPUs were pretty weak even
| compared to Intel's Xe.
|
| So that's the major difference between Intel and AMD that
| was a game changer for many: Intel shipped most of its
| chips with iGPUs for over a decade while AMD did not,
| meaning you always needed to buy a dGPU, and if you
| couldn't, like in the past ~2 years, well ... good luck,
| your new tower PC is now an expensive door stop.
|
| Still baffled?
| formerly_proven wrote:
| AMD CPUs with iGPUs are a very different product from their
| CPUs without, regardless of what the nomenclature might
| imply.
| paulmd wrote:
| > So it's not like they had to start absolutely from scratch
| here
|
| No, and actually in some respects that's not a good thing
| either. Their existing iGPU driver was designed with the
| assumption of GPU and CPU memory being pretty much fungible,
| and with the CPU being pretty "close" in terms of latency,
| just across the ringbus. It wasn't even PCIe-attached, the
| way AMD does it; it sat directly on the ringbus like another
| core.
|
| Now you need to take that legacy codebase and refactor it to
| have a conception of where data lives and how computation
| needs to proceed in order to feed the GPU with the minimum
| number of trips across the bus. Is that easier than writing a
| clean driver from scratch, and pulling in specific bits that
| you need? ....
|
| One of their recent raytracing bugs was literally due to one
| line of code in an allocator that was missing a flag to
| allocate the space in GPU memory instead of CPU memory; a
| one-line change produced a 100x speedup in raytracing
| performance.
|
| https://www.phoronix.com/news/Intel-Vulkan-RT-100x-Improve
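|
| Roughly the kind of decision that allocator has to get right
| (a hypothetical Vulkan sketch, not Intel's driver code): pick
| a memory type whose flags include DEVICE_LOCAL for resources
| the GPU hammers, otherwise the data quietly lands in system
| RAM on the far side of the bus.
|
|     #include <vulkan/vulkan.h>
|     #include <cstdint>
|
|     // Index of a memory type allowed by 'typeBits' that has
|     // all the 'wanted' flags, or UINT32_MAX if none does.
|     uint32_t find_memory_type(VkPhysicalDevice phys,
|                               uint32_t typeBits,
|                               VkMemoryPropertyFlags wanted) {
|         VkPhysicalDeviceMemoryProperties props;
|         vkGetPhysicalDeviceMemoryProperties(phys, &props);
|         for (uint32_t i = 0; i < props.memoryTypeCount; ++i) {
|             bool allowed = (typeBits & (1u << i)) != 0;
|             bool flags = (props.memoryTypes[i].propertyFlags
|                           & wanted) == wanted;
|             if (allowed && flags) return i;
|         }
|         return UINT32_MAX;
|     }
|
|     // e.g. for an acceleration structure, ask for VRAM:
|     // find_memory_type(phys, reqs.memoryTypeBits,
|     //     VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT);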
|
| It is most likely much easier to do what AMD did and go from
| discrete to integrated than the other way around... again, they
| don't have a tightly coupled design like Intel did; their iGPU
| is literally just a PCIe client that happens to be on the same
| die.
|
| (Also, AMD paid that penalty like 15 years ago... there were
| _TeraScale-based APUs_, and the GCN driver was developed as a
| dGPU/iGPU hybrid architecture from day 1; they never had to
| backport GCN itself from dGPU, that was all done in the
| TeraScale days.)
| buran77 wrote:
| I'd have to agree with GP here. It wasn't at all obvious to
| almost anyone that Intel could get this quickly to the level of
| performance Arc offers. After decades of bottom-of-the-barrel
| iGPUs, in a span of 2-3 years they went to decent iGPUs, and
| then to decent dGPUs (according to those who have touched
| them). That's quite an achievement in a market that had all but
| solidified around the 1.5 incumbents.
| jjoonathan wrote:
| > they went from decades of bottom of the barrel iGPUs
|
| Hah, I remember the time I spent a week dodging the software
| renderer to figure out why my shader wasn't working on an Intel
| iGPU -- turns out that someone at Intel decided to implement
| "float" as an 8-bit float. Not 8 bytes, 8 bits.
|
| "Sure, we support floating point, what's the problem?"
| bacro wrote:
| I have not seen any desktop GPU from Intel that is competitive
| with Nvidia or AMD yet. What surprises me is how naive Intel
| seems to be: it is extremely hard to enter the GPU market right
| now, so they should expect years of losses. Hopefully they will
| continue, and fix their driver and hardware problems, so we can
| have more competition. As for Alchemist, if they do not price
| it extremely aggressively (like $200 for an A770), I am afraid
| they will not get much market share.
| windowsrookie wrote:
| Intel quit making ARM CPUs right before smartphones took off.
| They sold off their LTE modem division, essentially giving
| Qualcomm a monopoly. Now they might give up on GPUs.
|
| Intel is way too quick to give up. Imagine if they had
| continued to make ARM CPUs. Apple might never have pursued
| making its own chips. Intel CPUs + modems could be competing
| with Qualcomm for every mobile device sold today.
| UncleOxidant wrote:
| I think it was more hubris that caused them to stop making ARM
| chips as opposed to "giving up". They were still in the "x86
| everywhere" mindset back then. Remember when they were trying
| to get x86 into phones & tablets?
| spookie wrote:
| >The drumroll never stopped, even to the point of talking about
| code improvements in a Linux driver--Linux? That's Intel's first
| choice?
|
| This phrase settles it: the guy has no clue whatsoever.
| Baffling.
| rossdavidh wrote:
| "Of the groups Gelsinger got rid of was Optane (started in 2017,
| never made a profit), sold McAfee (bought in 2010, never made a
| profit), and shut down the drone group (started in 2015, never
| made a profit). Last year, Intel sold off its NAND business to
| Hynix, giving up its only Chinese fab, and sold off its money-
| losing sports group; and this year, the company shut down its
| Russian operations. Since Gelsinger's return, Intel has dumped
| six businesses, saving $1.5 billion in operating costs and
| loses."
|
| None of those groups were remotely as close to Intel's core
| (pun intended) business as this one is. It is certainly true
| that this division needs to produce better results, but that's
| true of the company as a whole. Chopping this one, unless there
| is some plan for a replacement strategy in the GPU space, would
| be a bad sign, as it would suggest a company circling the
| drain, milking its existing winners and unable to create any
| new ones.
| bearjaws wrote:
| I really don't understand how Intel's drivers for Alchemist
| are so flawed; they have integrated GPUs that run esports
| titles at 100+ fps.
|
| It seemed to me that scaling the existing architecture up with
| more processing and bandwidth should have yielded a very
| competitive GPU while reusing their existing talent pool.
|
| Instead we end up with something very buggy and very particular
| (needs Resizable BAR? DX12 works well but other runtimes
| don't?), with requirements that today's Intel integrated
| graphics don't have.
| wtallis wrote:
| One pattern that has emerged is that many of the performance
| problems come from VRAM no longer being carved out of main
| system RAM. The drivers now have to be careful about which pool
| of memory they allocate from, and for CPU access to VRAM
| without ReBAR, the access patterns matter a lot more than they
| used to.
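|
| A hypothetical sketch of that access-pattern point (the
| 'mapped_vram' pointer stands in for whatever vkMapMemory or
| the D3D equivalent returns on a CPU-visible VRAM heap):
| mappings like that are typically write-combined, so bulk
| sequential writes are fine, but reading back through them is
| uncached and crawls across the bus.
|
|     #include <cstddef>
|     #include <cstring>
|     #include <vector>
|
|     void upload(void* mapped_vram,
|                 const std::vector<float>& src) {
|         // Fine: one forward, sequential copy into the
|         // write-combined mapping.
|         std::memcpy(mapped_vram, src.data(),
|                     src.size() * sizeof(float));
|     }
|
|     float checksum_bad(const float* mapped_vram,
|                        std::size_t n) {
|         // Pathological: every iteration is an uncached read
|         // that stalls on a round trip over the bus. The usual
|         // fix is to keep a shadow copy in system RAM and never
|         // read VRAM back from the CPU.
|         float sum = 0.f;
|         for (std::size_t i = 0; i < n; ++i)
|             sum += mapped_vram[i];
|         return sum;
|     }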
| wmf wrote:
| It's amazing that fixing that one thing takes over three
| years.
| terafo wrote:
| DX12 works well because it was designed to make driver
| development easier. Last-gen APIs required game-specific
| optimizations in the driver for every single title in order to
| run well. And as far as I've heard, the problem isn't in the
| drivers; the problem is in the hardware scheduler, which has
| some kind of limitation that becomes more apparent with scale.
| It might be impossible to fix this without a re-tape-out.
| khitchdee wrote:
| If I were Pat Gelsinger, I would keep PC Client, DataCenter and
| FPGA(Altera). Get rid of everything else.
| PaulsWallet wrote:
| I have to believe that Intel's GPU division will be fine,
| just because I refuse to believe that any executive over at
| Intel is short-sighted enough to think Intel was going to
| leapfrog AMD and Nvidia, or even make a profit, within the next
| 3-5 years. It took AMD years to get it right with Ryzen, and
| AMD still hasn't had a "Ryzen moment" with its GPUs. The fact
| that Intel's GPU can even compete with AMD and Nvidia is a feat
| of engineering magic. Intel should just take the L, make it a
| loss leader, and grind it out for the long haul.
| Miraste wrote:
| I would say RDNA/RDNA2 has delivered Ryzen-level
| improvements. It didn't get Ryzen-level fanfare because Nvidia
| improves its GPUs every product cycle instead of sitting still
| the way Intel was with its processors. I don't think Intel has
| the leadership or, after their Boeingization (getting rid of
| senior engineers), the engineering ability to compete with
| Nvidia.
| [deleted]
| pclmulqdq wrote:
| Intel missed a huge opportunity by killing Larrabee and the Xeon
| Phi. They aren't going to be able to beat CUDA with the "oneAPI"
| software layer they are trying to offer: an outgrowth of OpenCL
| is not going to be popular with people who care about the
| performance of their devices. The only programming API that Intel
| has and CUDA can't beat is x86 assembly.
|
| In my opinion, when they wanted to get back into graphics, they
| should have brought back Xeon Phi, maybe doubled the vector width
| (and added some special units), and hired some engineers from HFT
| firms to figure out how to make it pretend to be a fast GPU.
| fancyfredbot wrote:
| Intel can make money from deep learning accelerators even if they
| fail to get the gaming market this generation.
| protomyth wrote:
| I'm not sure how you can give up on the GPU market when Intel
| aspires to be a performance leader and must have some plan to
| get back into mobile. They have a GPU with bad drivers right
| now, but instead of axing the division, it would seem more
| appropriate to bring in some folks who know software. The
| NVIDIA/AMD duopoly is a lot more vulnerable than the incumbents
| in the other markets Intel could expand into.
| CameronNemo wrote:
| Aren't mobile devices usually iGPU only? I can't think of a
| single mobile device with a discrete GPU. Maybe some
| convertible tablets?
| keepquestioning wrote:
| It's clear Raja Koduri is the wrong choice.
| tibbydudeza wrote:
| Was the Arc disaster of his making?
| arcanus wrote:
| Yes. As the executive in charge of this business unit (since
| 2016!) he owns execution.
| tibbydudeza wrote:
| Hope it does not end up like Larrabee - I really don't get
| why Intel is competing in this market segment. Integrated
| graphics, yes, but surely there are bigger and more important
| fish to fry elsewhere (AMD's Zen)?
| PolCPP wrote:
| They don't want to depend on third-party companies to
| complete their stack offering in the datacenter. It's the same
| reason Nvidia wanted ARM, but in the other direction
| (CPU vs. GPU).
| pyrolistical wrote:
| Short term it may seem like a smart move to cut AXG, but long
| term it's a must-have.
|
| All three of AMD, Nvidia and Apple are unifying general-purpose
| compute with graphics. That is the direction the world is
| going. Unless Intel has a new architecture trick up its sleeve,
| it will soon be made redundant.
| barkingcat wrote:
| Since everyone is on the hybrid computing bandwagon, they
| might as well bring AXG's graphics on-package and make a Xeon
| with the GPU in the same package. That's what Apple is doing;
| might as well give it a try.
| nicoburns wrote:
| Yes, the insight being that "graphics" is actually mostly just
| highly parallel compute. So if you don't have a graphics
| solution then you're unlikely to have a competitive compute
| solution either. It's not quite there yet, but I think it will
| be soon.
| paulmd wrote:
| And it's actually not just graphics: the world is moving
| towards these highly unified solutions in general. The hardware
| world is going "full stack", and companies need to be able to
| offer top-to-bottom solutions that solve the whole problem
| without relying on someone else's silicon.
|
| Same reason NVIDIA wanted to buy ARM. Intel and NVIDIA are in
| trouble unless they can make that leap.
|
| Long-term it's the same reason AMD bought ATI too; that
| vision just took a long time to come to fruition. Remember,
| they "acquired" their way to success as well. RTG wasn't
| something that AMD indigenously developed either... just as AMD
| bought Xilinx and Intel bought Altera.
| klelatti wrote:
| To be successful as a new entrant against powerful incumbent
| players you need to have some sort of competitive angle /
| advantage. Intel used to have a fabrication lead but chose not to
| make dGPUs whilst they had that and allowed Nvidia and AMD to own
| the market.
|
| Really struggling to see this leading to a successful
| outcome. Even if they produce a reasonable product, it's likely
| to be third-placed, which is not a comfortable place to be.
___________________________________________________________________
(page generated 2022-08-09 23:00 UTC)