[HN Gopher] Raytracing on Intel's Arc B580
___________________________________________________________________
Raytracing on Intel's Arc B580
Author : rbanffy
Score : 80 points
Date : 2025-03-16 12:01 UTC (10 hours ago)
(HTM) web link (chipsandcheese.com)
(TXT) w3m dump (chipsandcheese.com)
| im_down_w_otp wrote:
| I love these breakdown writeups so much.
|
| I'm also hoping that Intel puts out an Arc A770-class upgrade
| in their B-series line-up.
|
| My workstation and my kids' playroom gaming computer both have
| A770s, and they've been really amazing for the prices I paid,
| $269 and $190. My triple-screen racing sim has an RX 7900 GRE
| ($499), and of the three, the GRE has surprisingly been the
| least consistently stable (e.g. driver timeouts, crashes).
|
| Granted, I came into the new Intel GPU game after they'd gone
| through 2 solid years of driver quality hell, but I've been
| really pleased with Intel's uncharacteristic focus and pace of
| improvement in both the hardware and _especially_ the software. I
| really hope they keep it up.
| 999900000999 wrote:
| I have a couple of these too, and I strongly believe Intel is
| effectively subsidizing these to try to get a foothold in the
| market.
|
| You get the equivalent of a $500 Nvidia card for around $300
| or less. And it makes sense, because Intel knows that if they
| can get a foothold in this market they're that much more
| valuable to shareholders.
|
| Great for gaming, no real downsides imo.
| DeepSeaTortoise wrote:
| They should drop a $600 card with 128GB of VRAM. This is just
| barely possible without losses on every sale.
|
| And then just watch heads explode.
| ryao wrote:
| At current market pricing on dramexchange, 128GB of 16Gbit
| GDDR6 chips would cost $499.58. That only leaves $100.42
| for the PCB, GPU die, miscellaneous parts, manufacturing,
| packaging, shipping, the store's margin, etcetera. I
| suspect that they could not do that without taking a loss.
|
| I wonder if they could mix clamshell mode and quad-rank to
| connect 64 memory chips to a GPU. If they connected 128GB
| of VRAM to a GPU, I would expect them to sell it for $2000,
| not $600.
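|
| Spelled out (a quick Python sketch; the only input is the
| dramexchange total above, so the per-chip price is just that
| figure divided out):
|
|   chip_gbit = 16                    # GDDR6 package density
|   chip_gb = chip_gbit / 8           # 2 GB per package
|   chips = 128 / chip_gb             # 64 packages for 128GB
|   memory_cost = 499.58              # dramexchange total
|   per_chip = memory_cost / chips    # ~$7.81 each
|   left_over = 600 - memory_cost     # ~$100.42 for all else
|   print(int(chips), round(per_chip, 2), round(left_over, 2))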
| DeepSeaTortoise wrote:
| Yup, just went with the $3 per GB formula.
|
| The GPU die should be about $200 at TSMC (400-450mm2).
| 
| Plus about $150 for the PCB, cooler, and other stuff I
| didn't consider.
|
| Times a 1.6 to 1.75 factor if they actually like being
| profitable (operations, R&D, sales, marketing, ...).
|
| So about $1.5k, I guess.
|
| Multiply that with a .33 "screw the competition" factor
| and my initial guess is almost spot on.
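|
| In code form (every number here is one of the rough guesses
| above, not a known figure):
|
|   vram = 128 * 3                # the $3/GB formula -> $384
|   gpu_die = 200                 # ~400-450mm2 at TSMC
|   board = 150                   # PCB, cooler, misc
|   bom = vram + gpu_die + board  # $734
|   msrp = bom * 1.75             # ~$1175-1285, rounded to ~$1.5k
|   aggressive = msrp * 0.33      # ~$425, ballpark of the $600 guess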
|
| .
|
| Real problem:
|
| The largest GDDR7 package money can buy right now is 3GB.
| That's a 1376-bit bus right there. Good luck fitting that
| on a sub-500mm2 die.
| 
| In the future you could put that amount of VRAM on a
| 512-bit bus, though.
|
| Also, normal DDR is getting really fast at the moment.
| Eight channels can already challenge most VRAM
| configurations. Maybe it's soon time to switch back to
| swappable memory.
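|
| The 1376-bit figure assumes one 32-bit channel per GDDR7
| package, which is how that math works out:
|
|   import math
|   packages = math.ceil(128 / 3)   # 43 3GB GDDR7 packages
|   bus_bits = packages * 32        # 43 * 32 = 1376 bits
|   print(packages, bus_bits)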
| rasz wrote:
| >Plus about $150 for the PCB, cooler, and other stuff
| 
| Assuming I had access to the gerbers, I could order a replica
| of the 5090 PCB for $65, including shipping. Intel's PCB is
| half that. And this is for a dude off the street buying 1-5
| copies, not a bulk order.
| ryao wrote:
| They are definitely selling them at close to no profit, but
| they are not anywhere near subsidizing them, unless they
| botched their supply chain so badly that they are overpaying
| on BOM costs.
| 999900000999 wrote:
| R&D isn't free.
|
| Even selling at cost is a subsidy.
|
| I'm proud to support them. Intel is also selling their
| Lunar Lake chips fairly cheaply. Let's all hope they
| make it through this rough patch. I can't imagine a world
| where we only have one x86 manufacturer.
| hypothesis wrote:
| > I can't imagine a world where we only have one x86
| manufacturer.
|
| Does it even matter? Some people wouldn't notice even if
| there were zero x86 manufacturers.
| 
| In fact, I'd say lots of people have not bought an x86 CPU
| in a while, between Macs, RPis, and RISC-V boards...
| 999900000999 wrote:
| x86 is still needed for a lot of software. The emulation
| just isn't there yet.
| hypothesis wrote:
| That would be news to people on Macs with Rosetta 2 /
| Crossover.
| 999900000999 wrote:
| A lot of server code and specialized software won't work.
|
| Competition is always good.
| ryao wrote:
| R&D is a sunk cost that is largely paid by their iGPUs.
| Selling at cost is not a subsidy and that is not relevant
| here since they should be making money off every sale. I
| tried estimating their costs a few months ago and found
| that they had room for up to a 10% margin on these, even
| after giving retailers a 10% margin. If they are not
| making money from these, it would be their fault for not
| building enough to leverage economies of scale.
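|
| The shape of that estimate, as a sketch (the MSRP is the
| B580's $249 launch price; the BOM number is a placeholder,
| not the actual figure from my earlier estimate):
|
|   msrp = 249.0
|   retailer_cut = 0.10 * msrp            # retailer's 10%
|   intel_revenue = msrp - retailer_cut   # ~$224
|   bom_guess = 200.0                     # placeholder only
|   intel_margin = (intel_revenue - bom_guess) / msrp
|   print(round(intel_margin * 100, 1))   # ~9.7%, i.e. up to 10%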
| 999900000999 wrote:
| https://slickdeals.net/f/17910114-acer-arc-a770-16gb-gpu-
| w-f...
|
| > Acer Arc A770 16gb GPU w/Free Game & Shipping $229.99
| $229.99 $399.99 at Newegg
|
| Sure, that margin is holding, when they had to mark the first
| generation down just to get it off the shelves. It would
| truly surprise me if they've made a significant profit off
| these cards.
| throwaway48476 wrote:
| They won't make a B770 or C770 because they lose money on every
| card they sell. The prices are low because otherwise they would
| sell 0, and they already paid for the silicon. The Intel
| graphics division is run by fools who won't give their cards
| a USP like the SR-IOV feature home labbers have been asking
| for for years. Doing what AMD and Nvidia do, but worse, is
| not a profitable strategy. There's a 50% chance the whole
| division gets fired in the next year, though.
| gruez wrote:
| >The Intel graphics division is run by fools who won't give
| their cards a USP like the SR-IOV feature home labbers have
| been asking for for years.
|
| Intel are "fools" for not adding a feature that maybe a few
| thousand people care about?
| throwaway48476 wrote:
| A few thousand people is a lot more than the number of
| gamers who'd buy an Intel GPU at market price. If they
| don't raise their ASP into the black they're going to ax
| the whole division.
|
| For reference, the B580 die is nearly the size of the 4070's
| but sells for a third of the price.
| freddi333 wrote:
| SR-IOV doesn't sell consumer cards; are you expecting Intel
| to produce an expensive Xeon equivalent of Arc? I'd expect
| them to attempt capturing some LLM market share by loading up
| the cards with RAM rather than expending effort on niche
| features.
| throwaway48476 wrote:
| SR-IOV would sell more cards than Intel would otherwise be
| able to sell if they charged market rate for their GPUs. The
| same goes for not selling a high-VRAM local-LLM variant.
| Intel is just allergic to competing by offering a USP.
| 
| AMD Ryzen CPUs have ECC enabled but not officially
| supported. Intel still locks away the feature.
| sergiotapia wrote:
| Was raytracing a psyop by Nvidia to lock out AMD? Games today
| don't look that much nicer than 10 years ago yet demand crazy
| hardware. Is raytracing a solution looking for a problem?
|
| https://x.com/NikTekOfficial/status/1837628834528522586
| im_down_w_otp wrote:
| I've kind of wondered about this a bit too. The visual
| quality side of it, that is. Especially in a context where
| you're actually playing a game: you're not just sitting
| there staring at side-by-side still frames looking for minor
| differences.
| 
| What I have assumed given the trend, but could be completely
| wrong about, is that the raytracing version of the world might
| be easier on the software & game dev side: you get great
| visual results without the overhead of meticulously
| engineering, using, and composing different lighting systems,
| shader effects, etc.
| gmueckl wrote:
| When path tracing works, it is a much, much, MUCH simpler and
| vastly saner algorithm than those stacks of 40+ complicated
| hacks in current rasterization-based renderers that barely
| manage to capture crude approximations of the first indirect
| light bounces. Rasterization as a rendering model for
| realistic lighting has outlived its usefulness. It overstayed
| because optimizing ray-triangle intersection tests for path
| tracing in hardware is a hard problem that took some 15 to 20
| years of research to even get to the first generation of RTX
| hardware.
| gruez wrote:
| >When path tracing works, it is a much, much, MUCH simpler
| and vastly saner algorithm than those stacks of 40+
| complicated hacks in current rasterization-based renderers
| that barely manage to capture crude approximations of the
| first indirect light bounces.
|
| It's ironic that you harp on the "hacks" used in
| rasterization, when raytracing is so computationally
| intensive that you need layers upon layers of performance
| hacks to get decent performance. The raytraced results
| need to be denoised because not enough rays are used. The
| output of that needs to be upscaled (because you need
| to render at low resolution to get acceptable performance),
| and then on top of all of that you need to hallu^W
| extrapolate frames to hit high frame rates.
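|
| A toy version of that chain (every stage is an illustrative
| stand-in, not any engine's or vendor's real API):
|
|   import numpy as np
|
|   def trace(h, w):
|       # 1 sample per pixel at low res: mostly Monte Carlo noise
|       return np.random.rand(h, w)
|
|   def denoise(img):
|       # crude 3x3 box blur standing in for a learned denoiser
|       p = np.pad(img, 1, mode="edge")
|       h, w = img.shape
|       return sum(p[i:i+h, j:j+w]
|                  for i in range(3) for j in range(3)) / 9.0
|
|   def upscale(img, f=2):
|       # nearest-neighbour stand-in for DLSS/FSR reconstruction
|       return img.repeat(f, axis=0).repeat(f, axis=1)
|
|   def generated_frame(prev, cur):
|       # "extrapolated" in-between frame: a plain blend here
|       return (prev + cur) / 2
|
|   a = upscale(denoise(trace(270, 480)))   # rendered frame 1
|   b = upscale(denoise(trace(270, 480)))   # rendered frame 2
|   shown = [a, generated_frame(a, b), b]   # 3 shown, 2 rendered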
| 1W6MIC49CYX9GAP wrote:
| Meanwhile, rasterization is fundamentally incapable of
| producing the same image.
| cubefox wrote:
| And you still need rasterization for ray traced games
| (even "fully" path traced games like Cyberpunk 2077)
| because the ray tracing sample count is too low to result
| in an acceptable image even after denoising. So the
| primary visibility rendering is done via rasterization
| (which has all the fine texture and geometry detail
| without shading), and the ray traced (and denoised)
| shading is layered on top.
|
| You can see the purely ray traced part in this image from
| the post: https://substack-post-
| media.s3.amazonaws.com/public/images/8...
|
| This combination of techniques is actually pretty smart:
| combine the powers of the rasterization and ray tracing
| algorithms to achieve the best quality/speed trade-off.
|
| The rendering implementation in software like Blender can
| afford to be primitive in comparison: it's not for real-
| time animation, so it doesn't use rasterization for primary
| visibility and can simply let the image converge rather
| than leaning on aggressive denoising. That's why rendering
| a simple scene takes seconds in Blender to converge but
| only milliseconds in modern games.
| juunpp wrote:
| This doesn't hold at all. Path tracing doesn't "just work";
| it is computationally infeasible on its own. It needs
| acceleration structures, ray traversal scheduling, denoisers,
| upscalers, and a million other hacks to get anywhere close
| to real time.
| kmeisthax wrote:
| For the vast majority of scenes in games, the best balance of
| performance and quality is precomputed visibility, lighting
| and reflections in static levels with hand-made model LoDs.
| The old Quake/Half-Life bsp/vis/rad combo. This is unwieldy
| for large streaming levels (e.g. open world games) and breaks
| down completely for highly dynamic scenes. You wouldn't want
| to build Minecraft in Source Engine[0].
|
| However, that's not what's driving raytracing.
|
| The vast majority of game development is "content pipeline" -
| i.e. churning out lots of _stuff_ - and engine and graphics
| tech is built around removing roadblocks to that content
| pipeline, rather than presenting the graphics card with an
| efficient set of draw commands. e.g. LoDs demand artists
| spend extra time building the same model multiple times;
| precomputed lighting demands the level designer wait longer
| between iterations. That goes against the content pipeline.
|
| Raytracing is Nvidia promising game and engine developers
| that they can just forget about lighting and delegate that
| entirely to the GPU at run time, at the cost of running like
| garbage on anything that isn't Nvidia. It's entirely
| impractical[1] to fully raytrace a game at runtime, but that
| doesn't matter if people are paying $$$ for roided out space
| heater graphics cards just for slightly nicer lighting.
|
| [0] That one scene in _The Stanley Parable_ notwithstanding
|
| [1] Unless you happen to have a game that takes place
| entirely in a hall of mirrors
| corysama wrote:
| Yep. I worked on the engine of a PS3/360 AAA game long ago.
| We spent a lot of time building a pipeline for precomputed
| lighting. But in the end, the game was 95% fully
| dynamically lit.
|
| For the artists, being able to wiggle lights around all
| over in real time was an immeasurable productivity boost
| over even just 10s of seconds between baked lighting
| iterations. They had a selection of options at their
| fingertips and used dynamic lighting almost all the time.
|
| But, that came with a lot of restrictions and limitations
| that make the game look dated by today's standards.
| juunpp wrote:
| Except that it isn't like that at all. All you get from the
| driver in terms of ray tracing is the acceleration structure
| and ray traversal. Then you have denoisers and upscalers
| provided as third-party software. But games still ship with
| thousands of materials, and it is up to the developer to
| manage lights, shaders, etc, and use the hardware and driver
| primitives intelligently to get the best bang for the buck.
| Plus, given that primary rays are a waste of time/compute,
| you're still stuck with G-buffer passes and rasterization
| anyway. So now you have two problems instead of one.
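|
| A runnable toy of that two-problem structure (numbers and
| stage names are illustrative, not any real engine's):
|
|   import numpy as np
|
|   H, W = 4, 4
|   albedo = np.full((H, W), 0.5)    # rasterized G-buffer layer
|   n_dot_l = np.random.rand(H, W)   # raster-derived shading term
|
|   direct = albedo * n_dot_l        # classic raster direct light
|
|   spp = 2                          # few indirect rays per pixel
|   gi = np.random.rand(spp, H, W).mean(axis=0)  # noisy bounces
|   gi = (np.roll(gi, 1, 1) + gi + np.roll(gi, -1, 1)) / 3  # blur
|
|   frame = direct + albedo * gi     # RT shading over raster base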
| Clemolomo wrote:
| It's a transition that's happening.
| 
| Research and progress are necessary; ray tracing is a clear
| advancement.
| 
| AMD could easily just skip it if they wanted to reduce costs,
| and we could simply not buy the GPUs. Neither is happening.
| 
| It does look better, and it would be a lot easier if we only
| did ray tracing.
| keyringlight wrote:
| I think there are two ways of looking at it. Firstly, raster
| has more or less plateaued: there haven't been any great
| advances in a long time, and it's not like AMD or any other
| company has offered an alternative path or vision for where
| they see 3D graphics going. The last thing a company like
| Nvidia wants is to be a generic good which is easy to compete
| with or simple to compare against. Nvidia was also making use
| of their strength/long-term investment in ML to drive DLSS.
| 
| Secondly, Nvidia is a company that wants to sell stuff for a
| high asking price, and once a certain tech gets good enough
| that becomes more difficult. If the 20 series had been just an
| incremental improvement over the 10, and so on, then I expect
| sales would have plateaued, especially if game requirements
| didn't move much.
| sergiotapia wrote:
| I don't believe we have reached a raster ceiling. More and
| more it seems like groups are in cahoots to push RTX and ray
| tracing. We are left to speculate why devs are doing this.
| nvidiabux? an easier time adding marketing keywords? who
| knows... i'm not a game dev.
|
| https://www.youtube.com/watch?v=NxjhtkzuH9M
| gruez wrote:
| There's no need to imply deals between Nvidia and game
| developers in smoke-filled rooms. It's pretty
| straightforward: raytracing means less work for developers,
| because they don't have to manually place lights to make
| things look "right". Plus, they can boast about how it looks
| "realistic". It's no different from the explosion of
| electron apps (and similar technologies making apps using
| html/js), which might be fast to develop, but are bloated
| and feel non-native. But it's not like there's an electron
| corp, giving out "electronbux" to push app developers to
| use electron.
| phatfish wrote:
| Raster quality is limited by how much effort engine
| developers are willing to put into finding computationally
| cheap approximations of how light/materials behave. But it
| feels like the easy wins are already taken?
| ThatPlayer wrote:
| I don't think it's just about looks. The advantage of ray
| tracing is that lighting is computed in real time rather than
| baked into static maps. One of the features I feel was lost
| with modern game lighting is dynamic environments, and as
| long as a game has to support non-raytraced rendering too,
| those kinds of interactions will stay disabled. Teardown and
| The Finals are examples of dynamic-environment games with
| raytraced lighting.
| 
| Another example: when was the last time you saw a game with
| a mirror that wasn't broken?
| gruez wrote:
| Hitman and GTA, both of which use non-raytraced
| implementations. More to the point, the lack of mirrors
| doesn't
| impact the gameplay. It's something that's trotted out as a
| nice gimmick, 99% of the time it's not there, and you don't
| really notice that it's missing.
| ThatPlayer wrote:
| GTA V's implementation did not extend to cars: rear-view and
| side-view mirrors are noticeably low quality and miss other
| cars while driving, which is pretty big for gameplay
| purposes.
| 
| Working mirrors are limited to less complex scenes in GTA.
| Hitman too, I believe.
| keyringlight wrote:
| Hitman is an example that contradicts your point about
| gameplay: guards will see you in mirrors and act
| appropriately. They'll be doing that through a non-graphical
| method, but you need to show it to the player graphically
| for them to appreciate what the guards can sense.
| gruez wrote:
| >Hitman is an example that contradicts your point about
| gameplay, guards will see you in mirrors and act
| appropriately.
|
| See:
|
| >It's something that's trotted out as a nice gimmick, 99%
| of the time it's not there, and you don't really notice
| that it's missing.
|
| Yeah, it's a nice detail for the 1% of the time that you're
| in a bathroom or whatever, but it's not like the
| immersion takes a hit when it's missing. Moreover, because
| the game is third-person, you can't even accurately judge
| whether you'll be spotted through a mirror or not.
| davikr wrote:
| lol, go play Cyberpunk 2077 with pathtracing and compare it to
| raster before you call it a gimmick.
| sergiotapia wrote:
| i own an rtx 4090 and yes cyberpunk looks amazing with
| raytracing - but worth the $2,000 and nvidia monopoly over
| the tech? a big resounding no (for me).
| TiredOfLife wrote:
| Here is a good video by Digital Foundry looking at Metro
| Exodus Enhanced Edition with devtools, where they show what
| raytracing is and how it differs from regular lighting.
| 
| https://youtu.be/NbpZCSf4_Yk
| 
| Simplified tl;dr: with raytracing you build the environment,
| designate which parts (like the sun or lamps) emit light,
| and you are done. With regular lighting, an artist has to
| spend hours to days adding many fake light sources to get
| the same result.
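|
| In scene-description terms (field names invented purely for
| illustration):
|
|   # Path traced: tag the real emitters; bounce light is free.
|   pt_scene = {
|       "sun":  {"emissive": True, "intensity": 10.0},
|       "lamp": {"emissive": True, "intensity": 2.5},
|       "room": {"emissive": False},       # lit indirectly
|   }
|
|   # Baked raster: hand-placed fakes approximate the bounces.
|   raster_scene = {k: dict(v) for k, v in pt_scene.items()}
|   for i in range(12):                    # the hours-to-days part
|       raster_scene[f"fill_{i:02d}"] = {"emissive": True,
|                                        "intensity": 0.3}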
| washadjeffmad wrote:
| If you think of either crypto or gaming and not accelerated
| compute for advanced modeling and simulation when you hear
| Nvidia, you won't have sufficient perspective to answer this
| question.
|
| What does RTX do, what does it replace, and what does it
| enable, for whom? Repeat for PhysX, etc. Give yourself a
| bonus point if you've ever heard of Nvidia Omniverse before
| right now.
| berkut wrote:
| Not if you want better fidelity: the VFX industry for film
| moved from rasterisation to raytracing / pathtracing starting
| back around 2012/2013, because of the higher fidelity it made
| possible. (This was on CPU initially, and a lot of final-
| frame rendering is still done on CPU even today due to memory
| requirements, although lookdev is often done on GPU if the
| shaders / light transport algorithms can be matched between
| GPU/CPU codepaths.)
| 
| It required discarding a lot of "tricks" that had been learnt
| with rasterisation to speed things up over the years, and it
| made things slower in some cases. But it meant everything
| could use raytracing to compute visibility / occlusion,
| rather than having shadow maps, irradiance caches, and
| pointcloud SSS caches, which simplified workflows greatly and
| allowed high-fidelity light transport simulation of things
| like volume scattering in difficult mediums such as
| water/glass and hair (i.e. TRRT lobes), where rasterisation
| makes it very difficult to get the medium transitions and
| light transport correct.
| shmerl wrote:
| If the suggested usage means upscaling, it's a dubious
| trade-off. That's why I'm not using it in Cyberpunk 2077, at
| least with RDNA 3 on Linux, since I don't want to use
| upscaling.
|
| Not sure how much RDNA 4 and on will improve it.
| achierius wrote:
| It feels like just yesterday that Chips and Cheese started
| publishing (*checked and they started up in 2020 -- so not that
| long ago after all!), and now they've really become a mainstay in
| my silicon newsletter stack, up there with
| Semianalysis/Semiengineering/etc.
|
| > Intel uses a software-managed scoreboard to handle dependencies
| for long latency instructions.
|
| Interesting! I've seen this in compute accelerators before,
| but both AMD and Nvidia manage their long-latency dependency
| tracking in hardware, so it's notable to see a major GPU
| vendor taking this approach. Looking more into it, the
| interface their `send`/`sendc` instructions expose is
| basically the same
| interface that the PE would use to talk to the NOC: rather than
| having some high-level e.g. load instruction that hardware then
| translates to "send a read-request to the dcache, and when it
| comes back increment this scoreboard slot", the ISA lets/makes
| the compiler state that all directly. Good for fine control of
| the hardware, bad if the compiler isn't able to make inferences
| that the hardware would (e.g. based on runtime data), but then
| good again if you really want to minimize area and so wouldn't
| have that fancy logic in the pipeline anyways.
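|
| A toy model of the software-managed idea (slot numbers and
| method names are invented; the real Xe ISA details differ):
|
|   class Scoreboard:
|       def __init__(self, slots=16):
|           self.pending = [False] * slots
|
|       def issue(self, slot):
|           # the compiler picks a slot when it emits the `send`
|           self.pending[slot] = True
|
|       def complete(self, slot):
|           # the memory/NOC response clears the slot
|           self.pending[slot] = False
|
|       def wait(self, slot):
|           # compiler-inserted sync before the first use of the
|           # result; a hardware scoreboard checks this implicitly
|           while self.pending[slot]:
|               pass                 # a real EU would stall here
|
|   sb = Scoreboard()
|   sb.issue(3)      # send a read-request to the dcache, tag slot 3
|   sb.complete(3)   # response arrives (asynchronously in reality)
|   sb.wait(3)       # consumer proceeds once slot 3 is clear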
| rayiner wrote:
| This is so cool! I think this is a video of Cyberpunk 2077
| with path tracing on versus off:
| https://www.youtube.com/watch?v=89-RgetbUi0. It seems like a
| real, next-generation advance in graphics quality that we
| haven't seen in a while.
| vinkelhake wrote:
| Just a heads up - it looks like the "Path Tracing Off" shots
| have ray tracing disabled as well. In the shots starting at
| 1:22 (the car and then the plaza), it looks like they just have
| the base screenspace reflections enabled. Path tracing makes a
| difference (sometimes big, sometimes small) for diffuse
| lighting in the game. The kind of reflection seen in those
| scenes can be had by enabling "normal" ray tracing in the game,
| which is playable on more systems.
| api wrote:
| Intel Arc could be Intel's comeback if they play it right.
| AMD's got the hardware to disrupt Nvidia, but their software
| sucks and they have a bad reputation for it. Apple's high-end
| M chips are good, but also expensive like Nvidia (and sold
| only with a high-end Mac), and don't quite have the RAM
| bandwidth.
| blagie wrote:
| Intel is close. Good history with software.
|
| If they started shipping GPUs with more RAM, I think they'd be
| in a strong position. The traditional disruption is to eat the
| low-end and move up.
|
| Silly as it may sound, a Battlemage where one could just plug
| in DIMMs, with some high total limit on RAM, would be the
| ultimate for developers who just want to test / debug LLMs
| locally.
| userbinator wrote:
| _Silly as it may sound, but a Battlemage where one can just
| plug in DIMMs, with some high total limit for RAM, would be
| the ultimate for developers who just want to test / debug
| LLMs locally._
|
| Reminds me of this old satire video:
| https://www.youtube.com/watch?v=s13iFPSyKdQ
| throwaway48476 wrote:
| Intel is run by fools. I don't see them coming back. They
| just don't have the willingness to compete and offer products
| with USPs. Intel today is just MBAs and the cheapest
| outsourced labor the MBAs can find.
___________________________________________________________________
(page generated 2025-03-16 23:01 UTC)