[HN Gopher] From GLSL to WGSL: the future of shaders on the Web ...
       ___________________________________________________________________
        
       From GLSL to WGSL: the future of shaders on the Web (2021)
        
       Author : rossant
       Score  : 39 points
       Date   : 2024-10-07 20:44 UTC (1 day ago)
        
 (HTM) web link (dmnsgn.me)
 (TXT) w3m dump (dmnsgn.me)
        
       | alkonaut wrote:
       | This is from 2021 and the main issue the author has with wgsl has
       | long since been fixed. There is a lot less <f32> needed in
       | ~2014~. Edit: in _2024_
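        | 
        | For example, early WGSL made you spell out the <f32> type
        | parameter everywhere, while current WGSL has predeclared
        | aliases like vec3f and infers types from initializers. A
        | rough sketch of the difference:
        | 
        |       // 2021-era WGSL: explicit type parameters everywhere
        |       var color: vec3<f32> = vec3<f32>(1.0, 0.5, 0.25);
        | 
        |       // current WGSL: aliases and inference cut the noise
        |       var color = vec3f(1.0, 0.5, 0.25);
        |       let scale = 2.0; // abstract float, concretized to f32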
        
         | kookamamie wrote:
         | But, on the other hand, more time-travel required?
        
           | dwattttt wrote:
           | Not anymore. The time travel issue was fixed with time
           | travel.
        
       | davexunit wrote:
       | WGSL is easily the worst part of WebGPU, which I like overall.
       | The world did not need another high-level shading language. They
        | could have just used SPIR-V bytecode if it weren't for
        | Apple's sabotage. The GPU API in SDL3, by comparison, has
        | chosen the only
       | sane option and just asks you to pass whatever the native shader
       | format is.
        
         | Rusky wrote:
         | They could not "just" have used SPIR-V bytecode. WebGL already
         | had to do a bunch of work to restrict GLSL's semantics to work
         | in the web sandbox; whatever WebGPU chose would have had the
         | same problem.
        
           | mwkaufma wrote:
           | WGSL is defined as a "bijection" to a "subset" of spir-v --
           | they could have simply specced that subset.
        
           | modeless wrote:
           | In comparison to inventing a new language and three
           | independent interoperable implementations from scratch, I
           | think "just" is appropriate.
        
         | johnnyanmac wrote:
         | Why am I not surprised that it's Apple that continues to make
         | the world of graphics programming even more difficult than it
         | needs to be.
         | 
         | > The GPU API in SDL3, by comparison, has chosen the only sane
         | option and just asks you to pass whatever the native shader
         | format is.
         | 
          | I get why they don't do this, to be fair. That would mean they
         | would need to have a proper parser for a half dozen shader
         | models. Or at least try to promise they will make and maintain
         | others soon.
        
           | mwkaufma wrote:
           | SDL3 offers a spir-v-cross reference impl as an optional
           | auxiliary library.
        
             | modeless wrote:
             | Also DirectX is adopting SPIR-V, so that will help in the
             | future as well.
             | 
             | Yes, it's true!
             | https://devblogs.microsoft.com/directx/directx-adopting-
             | spir...
        
               | NekkoDroid wrote:
               | DXC can already compile to SPIR-V (with some missing
                | features/limitations IIRC), this just effectively replaces
               | DXIL with SPIR-V in future shader models and makes it THE
               | language the compiler speaks.
               | 
               | They are also slowly but surely porting their DXC
               | compiler (forked from Clang 3.6 I think) to upstream
               | Clang.
        
           | davexunit wrote:
           | > That would mean they would need to have a proper parser for
           | a half dozen shader models.
           | 
           | It avoids this entirely. If you're on a system whose GPU
           | driver only speaks DXBC and you hand it SPIR-V, that would be
           | an error. This is what SDL GPU does. The SDL team
           | conveniently made a small library that can cross-compile
           | SPIR-V to the other major bytecode formats that you can
           | integrate into your build pipeline.
        
             | LinAGKar wrote:
             | That obviously wouldn't work for the web though, since it
             | would make webpages OS-specific
        
               | vetinari wrote:
                | Not only that; passthrough to real GPU hardware on the web
               | is a quick way to get 0wned. The GPU drivers - and a
               | bunch of hardware too - are not robust enough to be
               | exposed this way.
               | 
                | So WebGL and WebGPU filter and check everything
                | between the webpage and the real hardware.
        
               | beeflet wrote:
               | WebGL and WebGPU are still major vectors for device
               | fingerprinting. In an ideal world, GPU access would
               | result in a browser popup like location services and
               | notifications currently do.
               | 
               | But admittedly this is not the only major vector for
               | fingerprinting. I would also say that User-agent
                | shouldn't be a header but an autofillable form input,
               | and that cookies should be some transparently manageable
               | tab in the address bar (and should be renamed to
               | something more comprehensible to the average person like
               | "tokens" or "tickets").
        
               | davexunit wrote:
               | If only there were a shader IR that was made with
               | portability in mind!
        
               | beeflet wrote:
               | WebGL sort of makes webpages API-specific already by
               | pretty much forcing them to implement OpenGLES. I think
               | if anything SPIR-V is less imposing on the OS because you
                | don't have to implement the whole GLSL front end of
                | the compiler and can just deal with the intermediate
                | representation.
               | 
               | You end up with a strange situation where a company like
               | apple doesn't want to support OpenGL or provide a
               | translation layer in their OS, but they effectively end
               | up doing so in their browser anyways.
               | 
                | But the downside versus GLSL, I think, is that you
                | make the web less "open", because bytecode (unlike
                | GLSL or whatever SL) isn't immediately transparent to
                | the user. It's the same way we usually expect to open
                | up a webpage and inspect the javascript (because by
                | convention it is typically not minified), whereas the
                | introduction of WASM will require a decompiler to do
                | the same.
               | 
                | The web so far has been kind of a strange bastion of
                | freedom, with adblockers and other types of plugins being
               | able to easily modify webpages. In the future this will
               | be more difficult with web apps as it would amount to
               | decompiling and patching a portable executable (flutter,
               | etc).
        
           | dan-robertson wrote:
           | I think apple contributed lots of things that make wgpu a
           | nicer api for people. I'm not saying this particular
           | contribution was good, just that it's not so black and white.
        
             | davexunit wrote:
             | Apple and all the big players bring a lot of engineering
             | knowledge to the table, but they also indulge in corporate
             | sabotage on a regular basis.
        
         | bsder wrote:
          | WebGPU was a total surrender to Apple, and Apple _still_
          | didn't implement it.
         | 
         | Given that Microsoft has also thrown in with SPIR-V and Apple
         | still isn't shipping WebGPU, the next version of WebGPU should
         | tell Apple to fuck off, switch to SPIR-V, and pick up Windows,
         | XBox, and Linux at a stroke.
        
           | modeless wrote:
           | Apple is implementing it. They are just slow. There's no
           | point in a web standard that Apple won't implement as long as
           | they hold a monopoly on iOS browser engines.
        
             | bsder wrote:
             | Just like Microsoft was slow about web standards in IE6.
             | <rolls eyes>
             | 
             | Tell Apple to fuck off and roll it out--designers will
             | flock to Teh Shiny(tm). When enough designers can't run
             | their glitzy web thing, Apple will cave.
        
               | vetinari wrote:
               | Google and Mozilla are also pretty slow. Google still
               | doesn't support it on more than some SoCs on their own
               | Android, leaving the bulk of the market unsupported,
               | never mind Linux. Mozilla also got lost somewhere.
        
       | spookie wrote:
       | The built-ins are named inconsistently, aren't visually any
       | different from other parts of the code, and the change from sets
       | to groups when there are workgroups makes no sense.
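        | 
        | For instance (sketch from memory; "Params" is just an
        | illustrative name): what Vulkan GLSL calls a descriptor
        | "set" became a "group" in WGSL, even though "workgroup"
        | already names something unrelated there:
        | 
        |       // Vulkan GLSL: layout(set = 0, binding = 0) uniform ...
        |       // WGSL uses "group" for the same concept:
        |       @group(0) @binding(0) var<uniform> params: Params;
        | 
        |       // ...while a "workgroup" is the compute dispatch unit:
        |       @compute @workgroup_size(64)
        |       fn main() { }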
       | 
       | All around change for the sake of change.
        
         | jsheard wrote:
         | > All around change for the sake of change.
         | 
         | More like change for the sake of politics, Apple didn't want to
         | use any Khronos IP so the WebGPU committee had to work
         | backwards to justify inventing something new from scratch,
         | despite the feedback from potential users being
         | _overwhelmingly_ against doing that.
         | 
         | Then after sending the spec on a multi-year sidequest to
          | develop a shader language from scratch, Apple _still_ hasn't
         | actually shipped WebGPU in Safari, despite Google managing to
         | ship it across multiple platforms over a year ago. Apple only
         | needs to support Metal.
        
           | davexunit wrote:
           | Apple has been extremely slow about getting important
           | features into Safari. They're about a year behind Chrome and
           | Firefox on some WebAssembly things, too.
        
             | jsheard wrote:
             | Safari's limitations are baffling sometimes, like it's the
             | only browser that doesn't support SVG favicons aside from a
             | non-standard monochrome-only variant. Their engine supports
             | SVG, just not in that context, and you'd think they would
             | be all over resolution independent icons given their
             | fixation on extremely high DPI displays.
        
           | flykespice wrote:
           | Apple is the epitome of Not Invented Here syndrome.
        
           | rahkiin wrote:
            | I've read this is because they've been in a legal
            | conflict with Khronos for a while already.
        
       | andrewmcwatters wrote:
       | Man, since the departure from OpenGL/OpenGL ES, graphics
       | programming is such a pain in the butt. It's totally unfun and
       | ridiculous.
        
         | 01HNNWZ0MV43FF wrote:
         | I finally switched to WebGL 2, I think, to get nicer shadow
          | maps. I'll ride that as far as I can. Personally I liked gles2
         | a lot. Everything ran it.
        
         | davexunit wrote:
         | The fragmentation has been really frustrating but if things
         | like WebGPU and SDL GPU become well supported it will make
          | doing modern graphics programming _mostly_ pleasant. I
          | love/hate OpenGL and will miss it in a strange way.
        
         | jms55 wrote:
         | What part do you dislike? If it's the complexity of newer APIs
          | (Vulkan is 8 years old at this point, DirectX12 9 years), then
         | you might like WebGPU or any of the other userspace graphics
         | APIs such as blade or sdl3 that have been invented over the
         | past few years.
        
           | unconed wrote:
           | Not OP, but IMO the real issue is pretending graphics is
           | still about executing individual draw calls of meshes which
           | map 1-to-1 to visible objects.
           | 
           | It's not true anymore, because you have all sorts of
           | secondary rendering (e.g. shadow maps, or pre-passes), as
           | well as temporal accumulation. These all need their own
           | unique shaders. With meshlets and/or nanite, culling becomes
           | a cross-object issue. With deferred rendering, separate
           | materials require careful set up.
           | 
           | So now the idea that a dev can just bring their own shaders
           | to plug into an existing pipeline kind of falls apart. You
           | need a whole layer of infrastructure on top, be it node
           | graphs, shader closures, etc. And dispatch glue to go along
           | with it.
           | 
           | This is all true even with WebGPU where you don't have to
           | deal with synchronization and mutexes. Just a shit show all
           | around tbh. Rendering APIs have not kept up with rendering
           | techniques. The driver devs just threw up their hands and
           | said "look, it's a nightmare to keep up the facade of old-
           | school GL, so why don't you do it instead".
        
             | andrewmcwatters wrote:
             | Yep... You nailed it. It really bums me out. There's a lot
             | you can do with simple 90s era graphics programming while
             | still using the newer APIs, but you'll hit bottlenecks very
             | quickly, or run into architectural issues as soon as you
             | want to implement modern rendering techniques.
        
             | jms55 wrote:
             | > executing individual draw calls of meshes which map
             | 1-to-1 to visible objects.
             | 
             | This has not been true since deferred shading became
             | popular around 2008. Shadow maps were around much earlier
             | than that even.
             | 
             | There's a reason the 1:1 draw:object API has fallen out of
             | popularity - it doesn't scale well, be it CPU overhead,
             | lighting, culling and geometry processing, etc.
             | 
             | That said, you of course still can do this if you want to.
             | Draw calls and vertex buffers haven't gone away by any
             | means.
             | 
             | > So now the idea that a dev can just bring their own
             | shaders to plug into an existing pipeline kind of falls
             | apart. You need a whole layer of infrastructure on top, be
             | it node graphs, shader closures, etc. And dispatch glue to
             | go along with it.
             | 
             | That's the job of rendering engines, not graphics APIs. If
             | you want to work at that layer, then you use a
             | rendering/game engine that provides the tooling for
             | technical artists. If you _are_ the rendering/game engine,
             | then you're thankful for the increased level of control
             | modern graphics APIs provide you to be able to realize
             | better looking, higher performing (more stuff is possible),
             | and more flexible tools to provide your tech artists with.
             | 
             | > This is all true even with WebGPU where you don't have to
             | deal with synchronization and mutexes. Just a shit show all
             | around tbh. Rendering APIs have not kept up with rendering
             | techniques. The driver devs just threw up their hands and
             | said "look, it's a nightmare to keep up the facade of old-
             | school GL, so why don't you do it instead".
             | 
             | Users of the drivers got fed up with them being buggy,
             | slow, and limited. The industry's response was to move as
             | much code as possible out of the driver and into user
             | space, exposing more control and low-level details to
             | userspace. That way, you would never be bottlenecked by the
             | driver, be it performance or bugs. The industry has
             | realized time and time again that hardware companies are
             | often bad at software, and it would be better to let third
             | parties handle that aspect.
             | 
              | The real failure of the graphics industry imo was Vulkan
             | 1.0 trying to cater to old mobile devices and modern
             | desktop devices simultaneously, and much worse, never
             | starting a large community project to communally develop a
             | higher-level graphics API until WebGPU (which itself is
             | underfunded). Even then its higher-level nature is largely
             | a byproduct of wanting to enforce safety on untrusted
             | webapps.
             | 
             | But yes, even WebGPU is still more complicated than OpenGL
             | 2. If you find graphics APIs too much work, you're not
             | their target audience and you should be using a higher
             | level API.
        
       | jesse__ wrote:
       | Fuck me .. as if we needed another shading language.
        
       | beeflet wrote:
        | WGSL seems to inherit a more Rust-like syntax, versus GLSL,
        | which is similar to C.
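        | 
        | Side by side the flavor difference is clear (rough sketch;
        | the luma function is just illustrative):
        | 
        |       // GLSL, C-style declarations:
        |       //   float luma(vec3 c) { return dot(c, vec3(0.299, 0.587, 0.114)); }
        | 
        |       // WGSL, Rust-style fn / name: type / -> return type:
        |       fn luma(c: vec3f) -> f32 {
        |           return dot(c, vec3f(0.299, 0.587, 0.114));
        |       }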
       | 
        | I think the major advantage of WebGPU over WebGL2/OpenGLES3 is
        | that you can write GPGPU shaders directly, instead of going
        | through OpenGL's very clunky Transform Feedback system. But
        | this comes at the cost of compatibility for the time being.
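        | 
        | As a rough illustration (untested sketch; the buffer name
        | is made up), a doubling kernel in WGSL is just a compute
        | entry point over a storage buffer, with none of the
        | transform feedback plumbing:
        | 
        |       @group(0) @binding(0)
        |       var<storage, read_write> data: array<f32>;
        | 
        |       @compute @workgroup_size(64)
        |       fn main(@builtin(global_invocation_id) id: vec3u) {
        |           if (id.x < arrayLength(&data)) {
        |               data[id.x] = data[id.x] * 2.0;
        |           }
        |       }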
       | 
        | But in the Rust ecosystem at least, WebGPU has taken over the
        | role of OpenGL ES, with libraries like wgpu becoming dominant.
        
       ___________________________________________________________________
       (page generated 2024-10-08 23:00 UTC)