[HN Gopher] Mesh shaders talk at XDC 2022
___________________________________________________________________
Mesh shaders talk at XDC 2022
Author : JNRowe
Score : 28 points
Date : 2022-10-24 18:56 UTC (4 hours ago)
(HTM) web link (rg3.name)
(TXT) w3m dump (rg3.name)
| bsder wrote:
| I really wish that everybody would finally admit that the "fixed
| function pipeline" is dead and shift the development work to
| making the use of the GPU compute units mainstream.
|
| Unreal Engine 5 ("nanite") shows that the high end game engines
| are already stepping around a lot of the fixed function pipeline.
| It's just going to get worse.
|
| The problem is that the GPU compute systems throw away decades of
| CPU processor development. This is good in some ways. However,
| it's very bad in others.
|
| GPUs don't yet have a well-defined memory model (to be fair, even
| CPUs are still a bit sketchy in places)--so you can't guarantee
| atomicity or progress or fairness. GPUs don't have the equivalent
| of an MMU--so they can't be partitioned or protected, etc.
|
| The problem is that switching to a "compute" model and making
| that work well would commodify the GPU--and that's the last thing
| that AMD and NVIDIA want.
| moonchild wrote:
| > I really wish that everybody would finally admit that the
| "fixed function pipeline" is dead
|
| There is a great deal of space between 'fixed function
| pipeline' (which no one is using) and 'all compute all the
| time' (which pretty much no one is using for real-time
| graphics).
|
| > GPUs don't yet have a well-defined memory model
|
| Nvidia _has_ a memory model. (I have heard--from an nvidia
| employee--that amd plays a bit more fast and loose.)
|
| https://www.youtube.com/watch?v=VogqOscJYvk
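|
| (For what it's worth: as I understand it, the PTX memory model
| was designed to line up with the C++11 atomics model, so the
| usual acquire/release reasoning carries over. A minimal
| plain-C++ sketch of the message-passing guarantee a formal
| memory model buys you -- the names here are illustrative only:
|
|   #include <atomic>
|   #include <cstdio>
|   #include <thread>
|
|   int data = 0;                      // plain, non-atomic payload
|   std::atomic<bool> ready{false};    // publication flag
|
|   int main() {
|       std::thread producer([] {
|           data = 42;                 // ordinary write
|           ready.store(true, std::memory_order_release);
|       });
|       std::thread consumer([] {
|           while (!ready.load(std::memory_order_acquire)) {}
|           std::printf("%d\n", data); // guaranteed to print 42
|       });
|       producer.join();
|       consumer.join();
|   }
|
| Without a memory model you simply cannot make the "guaranteed
| to print 42" claim at all.)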
|
| > GPUs don't have the equivalent of an MMU--so they can't be
| partitioned or protected. etc.
|
| You do not need an MMU to partition or to protect. (Not to say
| that GPU security isn't a shitshow.)
|
| > The problem is that switching to a "compute" model and making
| that work well would commodify the GPU--and that's the last
| thing that AMD and NVIDIA want.
|
| https://s22.q4cdn.com/364334381/files/doc_financials/2023/Q2...
| look at where nvidia's revenue is coming from, and tell me
| again that they're not _already_ there.
| tambre wrote:
| Is there any point in switching to mesh shaders if I don't have
| any use for the flexibility they provide? For example, do the new
| semantics allow faster execution on newer hardware for boring
| vertex shaders that have simply been converted to mesh shaders?
| corysama wrote:
| Nope. They don't make anything faster by themselves if you
| stick to doing the same work the same way as vertex shaders.
| What they do is open up a lot more options for you to do
| something differently that can work better for your situation.
| phire wrote:
| My understanding is that if you are just doing the exact same
| work that regular vertex shaders would do, you are better off
| sticking with the regular pipeline. Might even be faster, since
| the old pipeline (potentially, depending on implementation) has
| a bunch of fixed-function hardware that gets disabled for mesh
| shaders.
|
| The performance advantages only come when you can take
| advantage of the flexibility to apply new optimisations, such
| as sharing work across a mesh invocation, culling in
| task/amplification shaders so that mesh invocations are never
| launched, or avoiding vertex fetch entirely (procedural
| generation).
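|
| For a concrete picture of that last point, here is a
| hypothetical host-side sketch (not from the talk) of the same
| geometry recorded both ways in Vulkan, assuming
| VK_EXT_mesh_shader and pipelines bound elsewhere:
|
|   #include <vulkan/vulkan.h>
|
|   // Traditional path: fixed-function vertex fetch pulls every
|   // vertex through the index/vertex buffers, then runs the
|   // vertex shader on each one.
|   void recordClassicDraw(VkCommandBuffer cmd, uint32_t indexCount) {
|       vkCmdDrawIndexed(cmd, indexCount, 1, 0, 0, 0);
|   }
|
|   // Mesh path: one task workgroup per 32 meshlets (an arbitrary
|   // grouping for this sketch). The task shader can cull whole
|   // meshlets so their mesh workgroups are never launched, and the
|   // mesh shader fetches (or generates) its own vertices, so there
|   // is no fixed-function vertex fetch at all. In real code
|   // vkCmdDrawMeshTasksEXT must be loaded via vkGetDeviceProcAddr.
|   void recordMeshDraw(VkCommandBuffer cmd, uint32_t meshletCount) {
|       const uint32_t taskGroups = (meshletCount + 31) / 32;
|       vkCmdDrawMeshTasksEXT(cmd, taskGroups, 1, 1);
|   }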
| dragontamer wrote:
| So it sounds like Vertex / Tessellation / Geometry shaders should
| be just one stage of the pipeline, and not necessarily locked
| into handling vertices (even if they end up outputting vertices).
| The new combined shader will be named "Mesh" shader and have the
| flexibility of all three.
| corysama wrote:
| What you are describing is how it works. A mesh shader is a lot
| more general and flexible than even all three of Vertex /
| Tessellation / Geometry shaders working together. And so, it
| replaces all three of them as an alternate pipeline option; it
| cannot be chained together with them.
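|
| A hypothetical sketch of what "alternate pipeline option" means
| on the Vulkan side (VK_EXT_mesh_shader): the stage list of a
| mesh pipeline contains only task (optional), mesh and fragment
| stages, and the vertex-input / input-assembly state is ignored
| for such a pipeline.
|
|   #include <array>
|   #include <vulkan/vulkan.h>
|
|   // Stage list for a mesh pipeline. There is no vertex,
|   // tessellation or geometry stage to chain to; the task+mesh
|   // pair replaces that whole front end.
|   std::array<VkPipelineShaderStageCreateInfo, 3>
|   meshPipelineStages(VkShaderModule task, VkShaderModule mesh,
|                      VkShaderModule frag) {
|       VkPipelineShaderStageCreateInfo s{};
|       s.sType = VK_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO;
|       s.pName = "main";
|
|       std::array<VkPipelineShaderStageCreateInfo, 3> st{s, s, s};
|       st[0].stage  = VK_SHADER_STAGE_TASK_BIT_EXT;  // amplification
|       st[0].module = task;
|       st[1].stage  = VK_SHADER_STAGE_MESH_BIT_EXT;  // replaces VS/tess/GS
|       st[1].module = mesh;
|       st[2].stage  = VK_SHADER_STAGE_FRAGMENT_BIT;  // unchanged back end
|       st[2].module = frag;
|       return st;
|   }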
| Const-me wrote:
| Yeah, that's more or less correct.
|
| > The new combined shader will be named "Mesh" shader
|
| It's only new for Vulkan. On Windows and XBox, mesh shaders
| were implemented a couple of years ago:
| https://devblogs.microsoft.com/directx/coming-to-directx-12-...
|
| This new Vulkan extension is a direct copy-paste of the above,
| leveraging the same hardware.
|
| However, the hardware support for mesh shaders is less than
| ideal. That thing needs feature level 12.2, so-called "DirectX
| 12 Ultimate". It only works on nVidia Turing (GeForce 20
| series) or newer, and AMD RDNA2 (Radeon RX 6000 series, and
| current generation of game consoles).
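|
| On the Vulkan side that turns into a per-device capability
| query; a hypothetical sketch (in real code, first check that
| VK_EXT_mesh_shader is among the enumerated device extensions
| before chaining the struct):
|
|   #include <vulkan/vulkan.h>
|
|   bool supportsMeshShaders(VkPhysicalDevice gpu) {
|       VkPhysicalDeviceMeshShaderFeaturesEXT mesh{};
|       mesh.sType =
|           VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_MESH_SHADER_FEATURES_EXT;
|
|       VkPhysicalDeviceFeatures2 features{};
|       features.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_FEATURES_2;
|       features.pNext = &mesh;
|
|       vkGetPhysicalDeviceFeatures2(gpu, &features);
|       // Pre-Turing / pre-RDNA2 hardware simply reports VK_FALSE
|       // here (or does not expose the extension at all).
|       return mesh.meshShader == VK_TRUE && mesh.taskShader == VK_TRUE;
|   }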
| Cloudef wrote:
| Mesh shaders actually go way back to the PlayStation 2; they are
| pretty much identical to the PS2's VU1:
| https://govanify.com/post/im-path-three-ps2/#not-so-quick-tr...
| Const-me wrote:
| They do a similar job in the sense that they both generate data
| for the hardware rasterizer, but I think the differences are
| rather large.
|
| That PS2 coprocessor is similar to a single ARMv7 core with
| NEON.
|
| Mesh shaders don't run loops over the elements of a meshlet;
| they process them in parallel, on different threads of the same
| thread group.
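|
| A hypothetical sketch of that "one element per thread" shape: a
| trivial GL_EXT_mesh_shader shader kept as a C++ string for
| reference. Each of the three threads in the workgroup writes
| exactly one vertex, and there is no per-meshlet loop:
|
|   static const char* kTriangleMeshShader = R"(
|   #version 450
|   #extension GL_EXT_mesh_shader : require
|
|   layout(local_size_x = 3) in;          // 3 threads per workgroup
|   layout(triangles, max_vertices = 3, max_primitives = 1) out;
|
|   const vec2 kPos[3] =
|       vec2[](vec2(0, -0.5), vec2(0.5, 0.5), vec2(-0.5, 0.5));
|
|   void main() {
|       SetMeshOutputsEXT(3, 1);          // 3 vertices, 1 triangle
|       uint v = gl_LocalInvocationIndex; // this thread's vertex
|       gl_MeshVerticesEXT[v].gl_Position = vec4(kPos[v], 0.0, 1.0);
|       if (v == 0)
|           gl_PrimitiveTriangleIndicesEXT[0] = uvec3(0, 1, 2);
|   }
|   )";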
___________________________________________________________________
(page generated 2022-10-24 23:00 UTC)