[HN Gopher] Intel Announces Inference-Optimized Xe3P Graphics Ca...
___________________________________________________________________
Intel Announces Inference-Optimized Xe3P Graphics Card with 160GB
VRAM
Author : wrigby
Score : 76 points
Date : 2025-10-14 18:30 UTC (4 hours ago)
(HTM) web link (www.phoronix.com)
(TXT) w3m dump (www.phoronix.com)
| RoyTyrell wrote:
| Will this have any support for open source libraries like PyTorch
| or will it be all Intel proprietary software that you need a
| license for?
| CoastalCoder wrote:
| Intel puts a huge priority on DL framework support before
| releasing related hardware, going back to at least 2017.
|
| I assume that hasn't changed.
| 0xfedcafe wrote:
| OpenVINO is entirely open-source and can run PyTorch and
| ONNX models, so this is definitely not a topic of concern.
| PyTorch also has native Intel GPU support:
| https://docs.pytorch.org/docs/stable/notes/get_start_xpu.htm...
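
A minimal sketch of the linked native XPU backend in use; this
assumes a PyTorch build with Intel GPU support, and the tensor
shapes are arbitrary:

    import torch

    # Fall back to CPU when no Intel GPU / XPU runtime is present.
    device = torch.device("xpu" if torch.xpu.is_available() else "cpu")

    x = torch.randn(1024, 1024, device=device)
    w = torch.randn(1024, 1024, device=device)
    y = x @ w  # matmul runs on the Intel GPU when XPU is available
    print(y.device)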
| knowitnone3 wrote:
| Any business people here who can explain why companies
| announce products a year before their release? I can
| understand getting consumers excited, but it also tells
| competitors what you are doing, giving them time to make
| changes of their own. What's the advantage here?
| teeray wrote:
| > What's the advantage here?
|
| Stock number go up
| creaturemachine wrote:
| The AI bubble might not last another year. Better get a few
| more pumps in before it blows.
| Mars008 wrote:
| AI is not going anywhere, and now everyone wants a piece.
| Local inference is expected to grow: document, image,
| video, etc. processing. Another obvious area is driverless
| farm vehicles and other automated equipment. "Assisted"
| books, images, and news are already here and growing fast.
| Translation is also a fact.
| thenaturalist wrote:
| The technology, maybe - and if it's local.
|
| The public-co valuations of quickly depreciating chip
| hoarders selling expensive fever dreams to enterprises
| are gonna pop, though.
|
| Spending 3-7 USD to get 20 cents in return, and 95%
| project failure rates for quarters on end, aren't gonna
| go unnoticed on Wall St.
| baq wrote:
| There is a serious possibility this isn't a bubble. Too
| many people watched The Big Short and now call every bull
| market a bubble; maybe the bubble was the dollar, and it's
| popping now instead.
| thenaturalist wrote:
| Have you looked in detail at the economics of this?
|
| Career finance professionals are calling it a bubble, not
| due to their suddenly found deep technological expertise,
| but because public cos like FAANG et al. are engaging in
| typical bubble-like behavior: shifting capex away from
| their balance sheets into SPACs co-financed by private
| equity.
|
| This is not a consumer debt bubble; it's gonna be a
| private market bubble.
|
| But as all bubbles go, someone's gonna be left holding the
| bag, with society covering the fallout.
|
| It'll be a rate hike, it'll be some Fortune X00
| enterprises cutting their non-ROI AI bleed, or it'll be an
| AI fanboy like Oracle over-leveraging themselves and then
| watching their credit default swaps go "Boom!", leading to
| a financing cut-off.
| baq wrote:
| It's possible - the circular financing is definitely fishy
| - but OTOH every OpenAI deal sama makes is swallowed by
| willing buyers at a fair market price. We'll be in a
| bubble when all the bears are dead and everyone accepts 'a
| new paradigm', not before; there's plenty of upside
| capitulation left judging by some hedge fund returns this
| year.
|
| ...and again, this is assuming AI capability stops growing
| exponentially in the widest possible sense (today, the
| 50%-task-completion time horizon doubles every ~7 months).
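
For concreteness, that time-horizon claim is plain compound
doubling; a tiny sketch, with a made-up 60-minute starting
horizon:

    # Horizon h(t) = h0 * 2^(months / doubling_period).
    def horizon_minutes(h0, months, doubling=7.0):
        return h0 * 2 ** (months / doubling)

    print(horizon_minutes(60, 21))  # 3 doublings in 21 months: 480.0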
| Mars008 wrote:
| To keep investors happy and the stock from falling? Fairy
| tales work as well; see Tesla robots.
| fragmede wrote:
| If you're Intel-sized, it's gonna leak. If you announce it
| first, you get to control the message.
|
| The other thing is that enterprise sales are ridiculously
| slow. If Intel wants corporate customers to buy these
| things, they've got to announce them ~a year ahead, so
| those customers can buy them next year when they upgrade
| hardware.
| AnthonyMouse wrote:
| If customers know your product exists before they can buy
| it, they may wait for it. If they buy the competitor's
| product today because they don't know your product will
| exist until the day they can buy it, you lose the sale.
|
| Samples of new products also have to go out to third-party
| developers and reviewers ahead of time so that third-party
| support is ready for launch day, and that stuff is going
| to leak to competitors anyway, so there's little point in
| not making it public.
| jsnell wrote:
| In this case there is no risk of anyone stealing Intel's ideas
| or even reacting to them.
|
| First, they're not even an also-ran in the AI compute
| space. Nobody is looking to them for roadmap ideas. Intel
| does not have any credibility, and no customer is going to
| go to Nvidia and demand that they match Intel.
|
| Second, what exactly would the competitors react to? The only
| concrete technical detail is that the cards will hopefully
| launch in 2027 and have 160GB of memory.
|
| The cost of doing this is really low, and the value of
| potentially getting into the pipeline of people looking to buy
| data center GPUs in 2027 soon enough to matter is high.
| baq wrote:
| Given how long it takes to develop a new GPU, I'm pretty
| sure this one was signed off by Pat, and given it survived
| Lip-Bu's axe, that says something, at least for Intel.
| reactordev wrote:
| This is a shareholder "me too" product
| thenaturalist wrote:
| What are they gonna do with their own FAB?
|
| Not release anything?
|
| There'll be a good market for comparatively "lower-power /
| good-enough" local AI. Check out Alex Ziskind's analysis
| of the B50 Pro [0]. Intel has an entire line-up of cheap
| GPUs that perform admirably for local use cases.
|
| This guy is building a rack of B580s, and a driver update
| alone has pushed his rig from 30 t/s to 90 t/s. [1]
|
| 0: https://www.youtube.com/watch?v=KBbJy-jhsAA
|
| 1: https://old.reddit.com/r/LocalLLaMA/comments/1o1k5rc/new_int...
| reactordev wrote:
| Watson...
|
| Yeah, even RTXs are limited in this space due to the lack
| of tensor cores. It's a race to integrate more cores and
| faster memory buses. My suspicion is that this is more of
| a "me too" product announcement so they can play partner
| to their business opportunities and continue greasing
| their wheels.
| epolanski wrote:
| I don't think you're giving anybody much of an advantage
| on such a small timeframe.
|
| Semiconductors are like container ships: extremely slow
| and hard to steer. You plan today the products you'll
| release in 2030.
| Perenti wrote:
| It can also prevent competitors from entering a particular
| space. I was told as an undergraduate that UNIX was irrelevant
| because the upcoming Windows NT would be POSIX compliant. It
| took a _very_ long time before that happened (and for a very
| flexible version of "compliant"), but the pointy-headed bosses
| thought that buying Microsoft was the future. And at first
| glance the upcoming NT _looked_ as if the TCO would be much
| lower than AIX, HP-UX or Solaris.
|
| Then of course Linux took over everywhere except the desktop.
| schmorptron wrote:
| Xe3P, as far as I remember, is built in Intel's own fabs,
| as opposed to Xe3 at TSMC. This could give them a huge
| advantage as possibly the only competitor not fighting
| over the same TSMC wafers.
| mft_ wrote:
| I have no idea of the likely price, but (IMO) this is the sort of
| disruption that Intel needs to aim at if it's going to make some
| sort of dent in this market. If they could release this for
| around the price of a 5090, it would be very interesting.
| schmorptron wrote:
| Maybe not that low, but given it's using LPDDR5 instead of
| GDDR7, at least the RAM should be a lot cheaper.
| Neywiny wrote:
| Certainly an interesting choice: dramatically worse
| performance, but dramatically larger. Only time will tell
| how it actually goes.
| Tepix wrote:
| It's LPDDR5X
| wtallis wrote:
| LPDDR5_x_ really just means LPDDR5 running at higher than
| the original speed of 6400 MT/s. Absent any information
| about _which_ faster speed they'll be using, this
| correction doesn't add anything to the discussion. Nobody
| would expect even Intel to use 6400 MT/s for a product
| that far in the future. Where they'll land on the spectrum
| from 8533 MT/s to 10700 MT/s is just a matter for
| speculation at the moment.
| baq wrote:
| With this much RAM, don't expect anything remotely
| affordable by civilians.
| wmf wrote:
| 160 GB of LPDDR5 is ~$1,200 retail, so the card could be
| sold for $2,000. The price will depend on how desperate
| Intel is. Intel probably can't copy Nvidia's pricing.
| dragonwriter wrote:
| I mean, even without that, the phrase "enterprise GPU"
| does not tend to convey "priced for typical consumers".
| api wrote:
| A not-absurdly-priced card that can run big models (even
| quantized) would sell like crazy. Lots and lots of fast RAM is
| key.
| bigwheels wrote:
| How does LPDDR5 (This Xe3P) compare with GDDR7 (Nvidia's
| flagships) when it comes to inference performance?
|
| Local inference is an interesting proposition because
| today, in real life, the NV H300 and AMD MI-300 clusters
| are operated by OpenAI and Anthropic in batching mode,
| which slows users down as they're forced to wait for
| enough similar-sized queries to arrive. For local
| inference, no waiting is required - so you could
| potentially get higher throughput.
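
A back-of-envelope bound behind that trade-off: batch-1 decode
has to stream the full weights once per generated token, so
memory bandwidth caps tokens/s. A sketch where the model size
and both bandwidth figures are assumptions for illustration:

    # Batch-1 decode upper bound: tokens/s <= bandwidth / weight bytes.
    def max_tokens_s(bw_gb_s, params_b, bytes_per_param):
        weight_bytes = params_b * 1e9 * bytes_per_param
        return bw_gb_s * 1e9 / weight_bytes

    # 70B params at 4-bit (~0.5 bytes/param):
    print(max_tokens_s(341, 70, 0.5))   # ~9.7 t/s at ~341 GB/s
    print(max_tokens_s(1792, 70, 0.5))  # ~51 t/s at ~1.8 TB/s

Batching raises aggregate throughput because the streamed weights
are reused across requests, which is exactly why providers batch.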
| qingcharles wrote:
| I asked GPT to pull real stats on both. Looks like the
| 50-series' RAM bandwidth is about 3X that of the Xe3P, but
| it wanted to remind me that this new Intel card is
| designed for data centers and is much lower power, and
| that the comparable Nvidia server cards (e.g. H200) have
| even better RAM than GDDR7, so the difference would be
| even higher for cloud compute.
| halJordan wrote:
| LPDDR5X (not LPDDR5) is 10.7 Gbps per pin. GDDR7 is 32
| Gbps per pin. So it's going to be slower.
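
Per-pin rate alone doesn't determine total bandwidth; bus width
matters just as much. A rough sketch, where the per-pin rates
are from the comment above and both bus widths are assumptions
for illustration:

    # Total bandwidth = per-pin rate x bus width / 8 bits per byte.
    def bandwidth_gb_s(gbps_per_pin, bus_bits):
        return gbps_per_pin * bus_bits / 8

    print(bandwidth_gb_s(10.7, 256))  # LPDDR5X, 256-bit: ~342 GB/s
    print(bandwidth_gb_s(32.0, 512))  # GDDR7, 512-bit: 2048 GB/s

An assumed 256-bit bus at 10.7 Gbps lands right at the ~341 GB/s
figure cited elsewhere in the thread.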
| codedokode wrote:
| Yes, but in matrix multiplication there are O(N^2) numbers
| and O(N^3) multiplications, so it might be possible that
| you are bounded by compute speed.
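
Made concrete: an NxN matmul does ~2N^3 FLOPs while moving only
~3N^2 values, so arithmetic intensity grows with N and large
matmuls can become compute-bound. A sketch with illustrative
hardware numbers:

    # Compare matmul arithmetic intensity (FLOPs per byte moved)
    # against a machine-balance point (peak FLOP/s per byte/s).
    def matmul_intensity(n, bytes_per_el=2):
        flops = 2 * n ** 3                       # multiply-adds
        bytes_moved = 3 * n ** 2 * bytes_per_el  # read A, B; write C
        return flops / bytes_moved

    balance = 200e12 / 341e9  # assumed 200 TFLOP/s, 341 GB/s: ~587
    print(matmul_intensity(4096), balance)  # ~1365 vs ~587

Above the balance point the kernel is compute-bound. Batch-1
token generation is matrix-vector rather than matrix-matrix,
though, so decode usually stays memory-bound; the compute-bound
regime matters more for prefill and batched serving.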
| btian wrote:
| Isn't that precisely what DGX Spark is designed for?
|
| How is this better?
| geerlingguy wrote:
| DGX Spark is $4000... this might (_might_) not be? (and
| with more memory)
| btian wrote:
| This starts shipping in 2027. I'm sure you can buy a DGX
| Spark for less than $4k in 2 years' time.
| bigmattystyles wrote:
| I remember the Larrabee and Xeon Phi announcements and
| getting so excited at the time. So I'll wait but curb my
| enthusiasm.
| Analemma_ wrote:
| Yeah, Intel's problem is that this is (at least) the third
| time they've announced a new ML accelerator platform, and
| the first two got shitcanned. At this point I wouldn't
| even glance at an Intel product in this space until it had
| been on the market for at least five years and several
| iterations, to be somewhat sure it isn't going to be
| killed. And Intel's current leadership inspires no
| confidence that they'll wait that long for success.
| wmf wrote:
| Xe works much, much better than Larrabee or Xeon Phi ever
| did. Xe3 might even be good.
| makapuf wrote:
| Funny they still call them _graphics_ cards when they're
| really... I don't know, matmul cards? Tensor cards? TPUs?
| Well, that sums it up maybe: what these really are is
| CUDA cards.
| halJordan wrote:
| Dude, this is asinine. Graphics cards have been doing
| matrix and vector operations since they were invented. No
| one had a problem with calling matrix multipliers graphics
| cards until it became cool to hate AI.
| adastra22 wrote:
| It was many generations before vector operations were moved
| onto graphics chips.
| boomskats wrote:
| If you s/graphics/3d graphics does that still hold true?
| shwaj wrote:
| I think they're using "vector" in the linear algebra sense,
| e.g. multiplying a matrix and a vector produces a different
| vector.
|
| Not, as I assume you mean, vector graphics like SVG, and
| renderers like Skia.
| yjftsjthsd-h wrote:
| GPUs may well have done the same-ish operations for a long
| time, but they were doing those operations for graphics.
| GPGPU didn't take off until relatively recently.
| wmf wrote:
| This sounds like a gaming card with extra RAM, so it's
| kind of appropriate to call it a graphics card.
| eadwu wrote:
| It'll either be "cheap" like the DGX Spark (with crap
| memory bandwidth) or overpriced, with the bus width of an
| M4 Max and the rhetoric of Intel's 50% margin.
| phonon wrote:
| Or it will be cheap, with the ability to expand 8X on a
| server. Particularly with PCIe 6.0 coming soon, it might
| be a very attractive package.
|
| https://www.linkedin.com/posts/storagereview_storagereview-a...
| Tepix wrote:
| Sounds as if it won't be widely available before 2027,
| which is disappointing for a 341 GB/s chip.
| storus wrote:
| Intel leadership actually reads HN? Mindblown...
| silisili wrote:
| Between 18A becoming viable and this, it seems Intel is finally
| climbing out of the hole it's been in for years.
|
| Makes me wonder whether Gelsinger put all this in motion,
| or if the new CEO lit a fire under everyone. Kind of a
| shame if it's the former...
___________________________________________________________________
(page generated 2025-10-14 23:01 UTC)