[HN Gopher] Shipping WebGPU on Windows in Firefox 141
       ___________________________________________________________________
        
       Shipping WebGPU on Windows in Firefox 141
        
       Author : Bogdanp
       Score  : 334 points
       Date   : 2025-07-16 06:32 UTC (16 hours ago)
        
 (HTM) web link (mozillagfx.wordpress.com)
 (TXT) w3m dump (mozillagfx.wordpress.com)
        
       | politelemon wrote:
       | Thanks, looking forward to the Linux implementation as well. Are
       | there any webgpu demos worth trying when this is released?
        
         | franga2000 wrote:
         | I found this quite impressive:
         | https://huggingface.co/spaces/webml-community/kokoro-webgpu
         | 
         | It also works without WebGPU, just very slowly.
        
         | elpocko wrote:
         | https://threejs.org/examples/?q=webgpu (with fallback to WebGL
         | if WebGPU is not available)
        
         | s-macke wrote:
         | Most of the sites I've seen are indeed just demos. I especially
         | like Compute Toys [0], a Shadertoy clone for WebGPU. [1] is
         | probably the best place to find demos. I have a site myself in
         | which I experiment mainly with WebGPU Compute Shaders [2].
         | 
         | [0] https://compute.toys/
         | 
         | [1] https://github.com/mikbry/awesome-webgpu
         | 
         | [2] https://github.com/s-macke/WebGPU-Lab
        
         | vesterde wrote:
         | I have this toy I've enjoyed building immensely, for composing
         | video effects.
         | 
         | https://vester.si/motion/
        
         | ath92 wrote:
         | I'd say this is one of the most impressive demos:
         | https://github.com/ArthurBrussee/brush
         | 
         | Gaussian splatting training and rendering using webgpu
        
           | snickerdoodle12 wrote:
            | The web demo is just a gray page, not very impressive. Maybe
            | once they support Linux.
        
         | diggan wrote:
          | In addition to other comments, (most of) Bevy's examples are
          | available in both WebGL and WebGPU, so they're useful for
          | comparison.
         | https://bevy.org/examples-webgpu/ + https://bevy.org/examples
         | (WebGL)
        
         | erichdongubler wrote:
         | We've got this index of apps we're aware of, if you'd like to
          | peruse: https://bugzilla.mozilla.org/show_bug.cgi?id=webgpu-apps
        
         | mkw5053 wrote:
         | Some Unity demos:
         | 
         | 1. https://boat-demo.cds.unity3d.com/
         | 
         | 2. https://www.keijiro.tokyo/WebGPU-Test/
         | 
         | 3. https://www.chatlord.com/4/
        
           | doctorpangloss wrote:
            | The Keijiro examples work flawlessly on iOS with the WebGPU
            | feature flag enabled. Big caveat, but it shows why this stuff
            | matters: Linux desktop, a little; iOS mobile Safari, a lot.
        
       | darkwater wrote:
        | Very cool! Now, let's see how long it will take for G-products to
        | actually use it instead of complaining about "browser not supported
        | for this feature, use Chrome".
        
         | sroussey wrote:
         | Which google products use it?
        
           | mort96 wrote:
           | The one I can think of is Google Meet, where some GPU-thing
           | is used to add background effects such as blur. However I'm
            | not sure this actually uses WebGPU; it _used_ to work on
            | Firefox until Google added a browser check, and AFAIK, if you
            | could fool Meet into thinking Firefox was Chrome, it would
            | still work.
           | 
            | This might still be a semi-legitimate thing, i.e. maybe they
           | kept around a WebGL implementation for a while as a fallback
           | but moved the main implementation to WebGPU and don't want to
           | maintain the fallback. It certainly fits well into their
           | strategy of making sure that the web really only works
           | properly with Chrome.
        
             | firtoz wrote:
              | I'm on Brave on Linux, which requires a special flag for
              | WebGPU (not turned on for me), and I can confirm that
              | background blur still works in Meet without WebGPU.
        
               | dragonelite wrote:
               | It sounds like something webgl2 should be able to handle
               | easily.
        
             | surajrmal wrote:
                | Official documentation suggests Meet works with Firefox:
                | https://support.google.com/meet/answer/7317473?hl=en#zippy=%...
        
               | mort96 wrote:
               | Yeah, Meet works in Firefox, but the background effects
               | don't.
        
               | CamouflagedKiwi wrote:
               | The background effects do work, but they're using the CPU
               | (and need a lot of it)
        
               | mort96 wrote:
               | Huh really? That must be a relatively recent change, last
               | time I was in a Meet meeting in Firefox I was just
               | blocked by a message saying my browser isn't supported.
               | That is admittedly some time ago though.
        
           | darkwater wrote:
           | I don't know :) I was referring basically to the Meet
           | situation back in the day with FF where the feature was there
           | but Meet complained the browser was not capable.
        
             | chrismorgan wrote:
             | There was a genuine technical reason for that, a part of
             | WebRTC that Firefox hadn't implemented yet, where if even a
             | single member of a group call lacked that feature, it had
             | to fall back to something that used a lot more CPU for
             | _everyone_. Can't remember the details exactly, but it was
             | approximately that.
        
               | r2vcap wrote:
               | If I remember correctly, the issue was related to newer
               | APIs like MediaStreamTrackProcessor, offscreen surfaces,
               | and WebRTC-WebCodecs interoperability, as well as the
               | ability to run ML inference efficiently in the browser.
               | At the time, Firefox hadn't fully implemented some of
               | these features, which impacted Google Meet's ability to
               | apply effects like background blur or leverage hardware-
               | accelerated video processing.
        
       | pixelpoet wrote:
       | Nice one, been waiting for this! Thanks to the devs.
        
       | pjmlp wrote:
        | Finally! Kudos to everyone involved in this.
        | 
        | I was feeling a bit dirty playing around with WebGPU with only
        | Chrome in the game thus far; even Safari has only enabled their
        | preview quite recently.
        
       | tux3 wrote:
       | Great news, congrats to the gfx team!
        
       | nmstoker wrote:
       | Great news. Now hope they can sort it for Firefox on Android.
        
       | JasperBekkers wrote:
        | Very happy to see this as it means that our gpu-allocator [0]
        | crate (currently used by wgpu's dx12 backend, but capable of
        | supporting vulkan & metal as well) will see a significantly wider
        | audience than what we've been using it for so far (which is
        | shipping our gpu benchmark suite: evolve [1]).
       | 
       | [0]: https://github.com/Traverse-Research/gpu-allocator/
       | 
       | [1]: https://www.evolvebenchmark.com/
        
         | erichdongubler wrote:
          | A well-earned accolade! Thanks for helping Firefox make WebGPU
          | happen.
        
       | coffeeaddict1 wrote:
        | I'm still hoping that WebGPU somehow takes off for non-web use so
        | that we have an easy-to-use cross-platform API with an official
        | spec (a replacement for OpenGL). However, outside of the Rust
        | world there doesn't seem to be much interest in using WebGPU for
        | native code. I don't know of any big projects using Dawn, for
        | example. Part of the reason seems to be that WebGPU came a bit
        | too late and everyone was already using custom-built abstractions
        | over DX, Vulkan and Metal.
        
         | crthpl wrote:
         | Another reason may be that WebGPU didn't allow for as much
         | optimization and control as Vulkan, and the performance isn't
         | as good as Vulkan. WebGPU also doesn't have all the extensions
         | that Vulkan has.
        
         | pjmlp wrote:
            | It is called middleware; no need to wait for WebGPU outside of
            | the browser, with all the constraints of an API designed for
            | browser sandboxes.
        
           | coffeeaddict1 wrote:
           | None of those middlewares have a spec and none of them offer
           | the compatibility guarantees that WebGPU provides.
        
             | pjmlp wrote:
              | As anyone used to Khronos APIs is aware, that is of little
              | value without actual Conformance Test Suites, and even
              | then there are plenty of forgotten guarantees when using
              | consumer hardware with all the usual OEM quality practices.
        
               | grovesNL wrote:
               | wgpu can run the WebGPU Conformance Test Suite for
               | validation against the WebGPU specification. Was there
               | something else you'd like to see?
        
               | le-mark wrote:
                | I think the parent is implying there are 1001 SoCs out
                | there with some form of embedded GPU that probably have
                | issues actually implementing WebGPU. Like those in
               | millions of Chinese tablets. Are they likely targets?
               | Probably not now but in 5 years? Mainstream desktop
               | hardware? No problem.
        
               | _bent wrote:
               | they just have to implement Vulkan, enough for it to run
               | Dawn or wgpu
        
               | pjmlp wrote:
               | See Android for how much fun it is to debug OpenGL ES and
               | Vulkan issues.
        
               | pjmlp wrote:
               | Not really, as mentioned, middleware does the job with
               | much better developer tooling.
               | 
               | What I really would like to see is browser vendors
               | finally providing WebGL and WebGPU debugging tools.
               | 
               | I think a decade has been more than enough for that.
               | 
               | Then again, no one is paying for browsers, so I guess I
               | should not complain.
        
               | flohofwoe wrote:
               | Good then that even WebGL mostly doesn't run on Khronos
               | APIs ;) (only on Linux, although I don't know whether
               | ANGLE is now actually using a Vulkan backend on Linux -
               | which of course is also a Khronos API though).
               | 
                | Both WebGL2 and WebGPU are probably the most 'watertightly'
                | specced and tested 3D APIs ever built, and especially
                | WebGPU has gone to great lengths to eliminate UB present
                | in native APIs (even at the cost of usability).
        
               | pjmlp wrote:
               | And yet it is lots of fun to debug on Android.
               | 
                | We only need to open chrome://gpu and see how many
                | workarounds are implemented.
                | 
                | Those that happen to own a device where workarounds are
                | yet to be implemented have quite interesting experiences,
                | depending on the root cause. And it is an increasing list
                | across Chrome releases.
                | 
                | Let's see how it works out there with Firefox and Safari,
                | the latter still not fully WebGL 2.0 compliant.
               | 
               | So much for the watertightness.
        
               | flohofwoe wrote:
               | The Android OS and the entire Android ecosystem being a
               | huge pile of excrement isn't really surprising though.
               | But it's 'too big to ignore' unfortunately, at least for
               | browser APIs.
        
             | mmis1000 wrote:
              | Even WebGL doesn't give any guarantee that your code will
              | run well on all devices though. It only guarantees that it
              | will run, but there can be a 10x performance difference
              | across platforms depending on how you wrote the shaders.
        
           | mandarax8 wrote:
           | Can you point me to some good middleware then? I haven't been
           | able to find any.
        
             | pjmlp wrote:
             | GDC Vault programming track has plenty of examples.
        
         | m-schuetz wrote:
          | It won't. It's barely simpler but lacks a lot of functionality.
          | Some things that became optional in Vulkan (render passes) are
          | still mandatory in WebGPU, and bind groups are static and thus
          | cumbersome. It also adds additional limitations and cruft, like
          | you can't easily transfer from host to a buffer subregion and
          | need staging buffers.
          | 
          | I'll use it for the web since there is no alternative, but for
          | desktop I'll stick with an OpenGL+CUDA interop framework until
          | a sane, modern graphics API shows up. I.e., a graphics API that
          | gets rid of render passes, static pipelines, mandatory explicit
          | syncing, bindings and descriptor sets (simply use buffers and
          | pointers), and all the other nonsense.
         | 
         | If allocating and populating a buffer takes more effort than a
         | simple cuMemAlloc and cuMemcpy, and calling a shader with
         | arguments takes more than simply passing the shader pointers to
         | the data, then I'm out.
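          | 
          | For comparison, a minimal sketch of that upload in WebGPU's JS
          | API (the 'device' and 'data' names are assumed to exist; the
          | staging-buffer dance only shows up once you want mapped writes
          | into a non-mappable buffer):
          | 
          |     const buffer = device.createBuffer({
          |       size: data.byteLength,
          |       usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_DST,
          |     });
          |     // Queue-managed copy of a typed array into the buffer.
          |     device.queue.writeBuffer(buffer, 0, data);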
        
           | flohofwoe wrote:
           | ...that's assuming that the WebGPU API is set in stone, which
           | hopefully it isn't.
           | 
           | They'd do well to follow the D3D model (major breaking
           | versions, while guaranteeing backward compatibility for older
           | versions) - e.g. WebGPU2, WebGPU3, WebGPU4 each being a
           | mostly new API without having to compromise for backward
           | compatibility.
        
             | m-schuetz wrote:
              | WebGPU had more features and capabilities back in 2020
             | before they started removing and limiting them. Forgive my
             | lack of enthusiasm and optimism for the future prospects of
             | an API that was already ancient when development started,
             | and even less capable by the time it was released.
        
               | flohofwoe wrote:
                | > WebGPU had more features and capabilities back in 2020
               | before they started removing and limiting them
               | 
               | I think that's the price to pay for trying to cover a
               | wide range of hardware. You can't just make all those
               | shitty Android phones disappear. At least for each WebGPU
               | limit, there's usually a Github ticket which explains why
               | exactly this limit exists.
        
               | m-schuetz wrote:
               | Yeah, unfortunately I'm in real-time rendering research
               | so I like to play with fairly modern desktop GPUs. The
               | no-phone-left-behind policy made WebGPU a somewhat
               | unattractive target for me. Which is unfortunate because
               | during the early days it felt like we'd get a cutting-
               | edge modern API for the browser and I was excited and
                | ready to abandon OpenGL for WebGPU. Instead, I ended up
                | switching to CUDA, which I had avoided for years due to
                | platform dependency. But once I noticed how pleasant it
                | is to work with I could not go back to graphics APIs. I
                | really like the "easy things should be easy, complex
                | things should be possible" design of CUDA.
        
         | thegrim33 wrote:
         | Some part of it is also probably the atrocious naming. I don't
         | do anything with web, only native coding, so whenever I heard
         | something about web gpu somewhere I just ignored it, for
         | literally years, because I just assumed it was some new web
         | tech and thus not relevant to me at all.
        
       | Zealotux wrote:
       | Great news, congrats to the team!
        
       | joelthelion wrote:
       | What are the use cases for this? Are we sure sites are not just
       | going to use it to mine bitcoins using their users' hardware?
        
         | popcar2 wrote:
         | It'll open the door for more ambitious webgames and web apps
         | that use the GPU.
        
           | pjmlp wrote:
            | I keep waiting to see ambitious webgames that could match the
            | experience of Infinity Blade from 2010, used to demo iOS's new
            | OpenGL ES 3.0 capabilities, the foundation of WebGL 2.0.
           | 
           | https://en.wikipedia.org/wiki/Infinity_Blade
           | 
           | Game demo, https://www.youtube.com/watch?v=_w2CXudqc6c
           | 
            | The only thing I like about Web 3D APIs is that, outside of
            | middleware engines, they are the only mainstream 3D APIs
            | designed with managed languages in mind, instead of after-
            | the-fact bindings.
           | 
           | Still waiting for something like RenderDoc on the respective
           | browser developer tools, we never got anything better than
           | SpectorJS.
           | 
           | It isn't even printf debugging, rather pixel colour
           | debugging.
        
             | josephg wrote:
              | They're also the only 3D APIs designed to safely allow
              | untrusted code to use your GPU.
        
         | andybak wrote:
         | Same use cases that native apps have for using a GPU except in
         | a browser?
         | 
         | > Are we sure sites are not just going to use it to mine
         | bitcoins using their users' hardware?
         | 
         | Some almost certainly will but like all similar issues the game
         | of cat and mouse will continue.
        
         | m-schuetz wrote:
          | - Streaming point cloud data sets over web browsers (used by
         | many surveying and construction companies, as well as
         | geospatial government agencies).
         | 
         | - Visualize other scan data such as gaussian splat data sets,
         | or triangle meshes from photogrammetry
         | 
         | - Things like google earth, Cesium, or other 3D globe viewers.
         | 
         | It's a pretty big thing in geospatial sciences and industry.
        
           | kam wrote:
           | What improvements does WebGPU bring vs WebGL for things like
           | Potree?
        
             | m-schuetz wrote:
             | Compute shaders, which can draw points faster than the
             | native rendering pipeline. Although I have to admit that
             | WebGPU implements things so poorly and restrictive, that
             | this benefit ends up being fairly small. Storage buffers,
             | which come along with compute shaders, are still fantastic
             | from a dev convenience point of view since it allows
             | implementing vertex pulling, which is much nicer to work
             | with than vertex buffers.
             | 
             | For gaussian splatting, WebGPU is great since it allows
             | implementing sorting via compute shaders. WebGL-based
             | implementations sort on the CPU, which means "correct"
             | front-to-back blending lags behind for a few frames.
             | 
              | But yeah, when you put it like that, it would have been much
              | better if they had simply added compute shaders to WebGL,
              | because other than that there really is no point in WebGPU.
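              | 
              | A rough sketch of what vertex pulling looks like, for
              | anyone curious - a WGSL vertex stage that indexes a storage
              | buffer instead of consuming a vertex buffer (the 'device'
              | and buffer names are made up):
              | 
              |     const module = device.createShaderModule({ code: `
              |       @group(0) @binding(0)
              |       var<storage, read> positions : array<vec4<f32>>;
              | 
              |       @vertex fn vs_main(@builtin(vertex_index) i : u32)
              |           -> @builtin(position) vec4<f32> {
              |         // "Vertex pulling": fetch our own vertex data,
              |         // no vertex buffer layout needed.
              |         return positions[i];
              |       }
              |     ` });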
        
             | flohofwoe wrote:
             | Access to slightly more recent GPU features (e.g. WebGL2 is
             | stuck on a feature set that was mainstream ca. 2008, while
             | WebGPU is on a feature set that was mainstream ca
             | 2015-ish).
        
               | m-schuetz wrote:
                | All of these new features could easily have been added to
                | WebGL. There was no need to create a fundamentally
                | different API just for compute shaders.
        
               | flohofwoe wrote:
                | GL programming only feels 'natural' if you've been
               | following GL development closely since the late 1990s and
               | learned to accept all the design compromises for sake of
               | backward compatibility. If you come from other 3D APIs
               | and never touched GL before it's one "WTF were they
               | thinking" after another (just look at VAOs as an example
               | of a really poorly designed GL feature).
               | 
               | While I would have designed a few things differently in
               | WebGPU (especially around the binding model), it's still
               | a much better API than WebGL2 from every angle.
               | 
               | The limited feature set of WebGPU is mostly to blame on
               | Vulkan 1.0 drivers on Android devices I guess, but
               | there's no realistic way to design a web 3D API and
               | ignore shitty Android phones unfortunately.
        
               | m-schuetz wrote:
                | It's not about feeling natural - I fully agree that
                | OpenGL is a terrible and outdated API. It's about the
                | completely overengineered and pointless complexity in
                | Vulkan-like APIs and WebGPU. Render passes are entirely
                | pointless complexity that should not exist. They're even
                | optional in Vulkan nowadays, but still mandatory in
                | WebGPU. Similarly, static bind groups are entirely
                | pointless; now I've got to cache thousands of vertex and
                | storage buffers. In Vulkan you can nowadays modify those,
                | but not in WebGPU. I wish I could batch those buffers
                | into a single one so I don't need to create thousands of
                | bind groups, but that's also made needlessly cumbersome
                | in WebGPU due to the requirement to use staging buffers.
                | And since buffer sizes are fairly limited, I can't just
                | create one that fits all, so I have to create multiple
                | buffers anyway - might as well have a separate buffer
                | for each node. Virtual/sparse buffers would be helpful
                | in single-buffer designs by growing them as much as
                | needed, but of course they also don't exist in WebGPU.
                | 
                | The one thing that WebGPU is doing better is that it does
                | implicit syncing by default. The problem is, it provides
                | no options for explicit syncing.
                | 
                | I mainly software-rasterize everything in CUDA nowadays,
                | which makes the complexity of graphics APIs appear
                | insane. CUDA allows you to get things done simply and
                | easily, but it still has all the functionality to make
                | things fast and powerful. The important part is that the
                | latter is optional, so you can get things done quickly,
                | and still make them fast.
                | 
                | In CUDA, allocating a buffer and filling it with data is
                | a simple cuMemAlloc and cuMemcpy. When calling a
                | shader/kernel, I don't need bindings and descriptors; I
                | simply pass a pointer to the data. Why would I need that
                | anyway? The shader/kernel knows all about the data; the
                | host doesn't need to know.
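                | 
                | For anyone who hasn't touched the API, this is roughly
                | the per-buffer ceremony being described, sketched with
                | made-up names ('device', 'someBuffer'):
                | 
                |     const layout = device.createBindGroupLayout({
                |       entries: [{
                |         binding: 0,
                |         visibility: GPUShaderStage.COMPUTE,
                |         buffer: { type: 'storage' },
                |       }],
                |     });
                |     // One bind group per distinct buffer the shader
                |     // should see, instead of just passing a pointer.
                |     const bindGroup = device.createBindGroup({
                |       layout,
                |       entries: [{ binding: 0, resource: { buffer: someBuffer } }],
                |     });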
        
               | flohofwoe wrote:
               | > Render Passes are entirely pointless complexity that
               | should not exist. It's even optional in Vulkan nowadays.
               | 
                | AFAIK Vulkan only eliminated pre-baked render pass
                | objects (which were indeed pointless), and now simply
                | copied Metal's design of transient render passes, e.g.
                | there are still 'render pass boundaries' between
                | vkCmdBeginRendering() and vkCmdEndRendering(), and the
                | VkRenderingInfo struct that's passed into the
                | vkCmdBeginRendering() function
                | (https://registry.khronos.org/vulkan/specs/latest/man/html/Vk...)
                | is equivalent to Metal's MTLRenderPassDescriptor
                | (https://developer.apple.com/documentation/metal/mtlrenderpas...).
               | 
               | E.g. even modern Vulkan still has render passes, they
               | just didn't want to call those new functions
               | 'Begin/EndRenderPass' for some reason ;) AFAIK the idea
               | of render pass boundaries is quite essential for tiler
               | GPUs.
               | 
               | WebGPU pretty much tries to copy Metal's render pass
               | approach as much as possible (e.g. it doesn't have pre-
               | baked pass objects like Vulkan 1.0).
               | 
               | > The one thing that WebGPU is doing better is that it
               | does implicit syncing by default.
               | 
               | AFAIK also mostly thanks to the 'transient render pass
               | model'.
               | 
               | > Why would I need that anyway, the shader/kernel knows
               | all about the data, the host doesnt need to know.
               | 
               | Because old GPUs are a thing and those usually don't have
               | such a flexible hardware design to make rasterizing (or
               | even vertex pulling) in compute shaders performant enough
               | to compete with the traditional render pipeline.
               | 
               | > Similarly static binding groups are entirely pointless
               | 
               | I agree, but AFAIK Vulkan's 1.0 descriptor model is
               | mostly to blame for the inflexible BindGroups design.
               | 
               | > but that's also made needlessly cumbersome in WebGPU
               | due to the requirement to use staging buffers
               | 
               | Most modern 3D APIs also switched to staging buffers
               | though, and I guess there's not much choice if you don't
               | have unified memory.
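                | 
                | WebGPU's version of the transient render pass, sketched
                | (a canvas 'context' and 'device' are assumed); the
                | load/store ops declared at the pass boundary are the
                | information a tiler can exploit:
                | 
                |     const encoder = device.createCommandEncoder();
                |     const pass = encoder.beginRenderPass({
                |       colorAttachments: [{
                |         view: context.getCurrentTexture().createView(),
                |         loadOp: 'clear',    // don't load previous contents
                |         storeOp: 'store',   // keep the result afterwards
                |         clearValue: { r: 0, g: 0, b: 0, a: 1 },
                |       }],
                |     });
                |     // ...draw calls...
                |     pass.end();
                |     device.queue.submit([encoder.finish()]);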
        
               | m-schuetz wrote:
               | > AFAIK the idea of render pass boundaries is quite
               | essential for tiler GPUs.
               | 
               | I've been told by a driver dev of a tiler GPU that they
               | are, in fact, not essential. They pick that info up by
               | themselves by analyzing the command buffer.
        
               | m-schuetz wrote:
               | > Most modern 3D APIs also switched to staging buffers
               | though, and I guess there's not much choice if you don't
               | have unified memory.
               | 
                | Well I wouldn't know since I switched to using CUDA as a
                | graphics API. It's mostly nonsense-free, and faster than
                | the hardware pipeline for points, and about as fast for
                | splats. Seeing how Nanite also software-rasterizes as a
                | performance improvement, CUDA may even be great for
                | triangles. I've only implemented a rudimentary triangle
                | rasterizer that can draw 10 million small textured
                | triangles per millisecond. Still working on the larger
                | ones, but that's low priority since I focus on point
                | clouds.
               | 
               | In any case, I won't touch graphics APIs anymore until
               | they make a clean break to remove the legacy nonsense.
               | Allocating buffers should be a single line, providing
               | data to shaders should be as simple as passing pointers,
               | etc..
        
         | flohofwoe wrote:
          | You can already do that in WebGL, WASM or JavaScript;
          | thankfully all those technologies are easily ad-blockable.
        
       | vFunct wrote:
        | Pretty sure Apple is going to release WebGPU support in Safari in
        | macOS 26 Tahoe as well. There was a WebGPU video in one of the
        | WWDC sessions.
        
       | bobajeff wrote:
       | >we plan to ship WebGPU on Mac and Linux in the coming months,
       | and finally on Android
       | 
        | Sounds good. I'm not really thrilled about it as of now.
        | Whatever the reason, it's not been supported on Linux in any
        | browser as of yet. My guess is it's too hard to expose without
        | creating terrible attack surfaces.
       | 
       | This seems to support my view that web standards are too
       | overgrown for how users actually use the web. It's obviously too
       | late to do anything about it now but all the issues of
       | monoculture and funding we are worried about today stem from the
       | complexity of making a web browser due to decisions tracing all
       | the way back to the days of Netscape.
        
         | pjmlp wrote:
         | Depends on which Linux, it is supported on Android/Linux,
         | WebOS/Linux and ChromeOS/Linux.
         | 
          | However, it kind of proves the point about how relevant browser
          | vendors consider GNU/Linux for this kind of workload.
        
       | grovesNL wrote:
       | Really excited to see WebGPU/wgpu ship in Firefox!
       | Congratulations and thanks to Mozilla for supporting this.
        
         | erichdongubler wrote:
         | Thanks! Can't forget your help in the process.
        
       | SaintSeiya wrote:
        | As I see it, the current state of graphics APIs is worse now than
        | in the OpenGL era: despite their promises, none of the modern
        | APIs are easier to use, truly portable, and cross-platform.
        | Having to reinvent OpenGL by creating custom wrappers around
        | Vulkan, Metal, DirectX12, etc. is as much of a time waster as
        | dropping strings and going back to raw char arrays in the name
        | of performance would be in every modern language.
        
         | naikrovek wrote:
         | Yeah, it's kind of insane how things have gotten.
         | 
         | There are no adults, no leaders with an eye on things leading
         | us away from further mistakes, and we keep going deeper.
        
           | 01HNNWZ0MV43FF wrote:
           | What would a leader do? Nvidia wants to sell hardware,
           | Nintendo wants to sell games, Microsoft wants to either buy
           | Linux or crush it. Nobody has a stake in things actually
           | working
        
           | dagmx wrote:
           | The point of the graphics APIs is to be as close to the metal
           | as possible. It's a balancing act between portability and
           | hardware design/performance. I really don't think it's as
           | trivial as non-graphics engineers make it out to be to make
           | something universal.
           | 
            | But even when it existed in the form of OpenGL, or now
           | WebGPU, people complain about the performance overhead. So
           | you end up back here.
        
             | m-schuetz wrote:
              | Vulkan isn't close to the metal, though. It's a high-level
              | wrapper around all the quirks and differences of everything
              | from ancient mobile to modern desktop GPUs. Render passes,
              | for example, are entirely irrelevant for desktop GPUs. They
              | are not close to the metal, but add needless complexity.
              | Recently, a Vulkan driver engineer even told me that they
              | are not necessary for the tile-based mobile GPUs for which
              | they were intended, since those can figure the necessary
              | things out by themselves. And I would guess they need to,
              | since render passes became optional in Vulkan, so drivers
              | can't rely on them anymore. They are still mandatory in
              | WebGPU, for no good reason.
             | 
             | And there are so many pointless things that are no longer
             | relevant, or should at best be optional so that devs can
             | get things done before optimizing.
        
               | dagmx wrote:
               | I said as close to it as possible.
               | 
               | Yes they're abstractions, because nobody really wants
               | anyone to be writing directly against the ISA either
               | since the vendors need the ability to change things over
               | time.
               | 
               | Again, to my point, it's about balancing portability and
               | power/perf.
        
               | m-schuetz wrote:
                | Yet they ended up creating something that makes OpenGL
                | still an attractive choice. That excessive complexity
                | certainly wasn't necessary.
                | 
                | Personally, I'll sit this generation out and wait for
                | whatever comes after. I ended up switching to doing
                | software rasterization in CUDA because that's easier than
                | drawing a triangle in Vulkan. CUDA has shown me how
                | insane Vulkan is. Like, why even have descriptor sets,
                | bindings, etc.? In CUDA you simply call a kernel and
                | provide the data (e.g. a vertex or storage buffer) as a
                | pointer argument.
        
         | dvdkon wrote:
         | I don't see the problem. There have been lower-level APIs in
         | the graphics stack for a long time (e.g. Mesa's Gallium), only
         | now they are standardised and people are actually choosing to
         | use them. It's not like higher-level APIs don't exist now,
         | OpenGL is still supported on reasonable platforms and WebGPU
         | has been usable from native code for some time.
         | 
         | As for true portability of those low-level APIs, you've
         | basically got Apple to blame (and game console manufacturers,
         | but I don't think anyone expected them to cooperate).
        
           | cogman10 wrote:
           | > you've basically got Apple to blame
           | 
           | Yeah, that's the thing that really irks me. WebGPU could have
           | been just a light wrapper over Vulkan like WebGL is (or was,
            | it's complicated now) for OpenGL. But Apple has been on a
            | dumb war with Khronos for the last decade, which has made
            | everything more difficult.
           | 
           | So now we have n+1 low level standards for GPU programming
           | not because we needed them, but because 1 major player is
           | obstinate.
        
             | flohofwoe wrote:
              | That would be a problem indeed if Metal weren't a much
              | better designed API than Vulkan. As it stands, Vulkan would
              | do well to 'steal' a few ideas from Metal to make the
              | Vulkan API more convenient to use without sacrificing too
              | much performance.
        
               | cogman10 wrote:
               | Or you could build on top of Vulkan and add the
               | convenience features into middleware.
               | 
               | Being simpler is an advantage. It means that 3rd party
               | GPU drivers can more simply implement the interface
               | correctly.
        
               | flohofwoe wrote:
               | The problem is, for that to happen Vulkan must be
               | convenient and enjoyable enough to use that people are
               | willing to put their free time into such middleware
               | layers, otherwise it mostly won't happen (unless Valve is
               | willing to sponsor the development - because outside of
               | Valve's effort to make Windows games run on Linux via
               | layering D3D on top of Vulkan, Vulkan is actually quite
               | irrelevant).
        
               | pjmlp wrote:
               | Almost, now with Vulkan being the only 3D API on Android
               | (OpenGL ES now builds on top of Vulkan), it is also
               | relevant there.
               | 
               | However, as discussed in other comments, that doesn't
               | change the driver quality mess of the platform.
        
               | m-schuetz wrote:
                | At least for WebGPU, you can't wrap around some of the
                | nonsense. Like bindings. They are an inherently bad
                | concept and the way to go would be bindless, but you
                | can't make a wrapper that turns an API that revolves
                | around bindings into a bindless API. And there are other
                | things like that, and I'm sure there are plenty between
                | Vulkan and Metal.
        
           | dagmx wrote:
           | Your last paragraph is fairly revisionist to me.
           | 
            | How is Apple solely to blame when there are multiple parties
            | involved? They went to Khronos to turn AMD's Mantle into a
            | true unified next-gen API. Khronos and NVIDIA shot them down
            | to further AZDO OpenGL. Therefore Metal came to be, then
            | DX12 followed, and then Vulkan when Khronos realized they had
            | to move that way.
           | 
           | But even if you exclude Metal, what about Microsoft and D3D?
           | Also similarly non-portable. Yet it's the primary API in use
           | for non-console graphics. You rarely see people complaining
           | about the portability of DX for some reason...
           | 
           | And then in an extremely distant last place is Vulkan. Very
           | few graphics apps actually use Vulkan directly.
           | 
           | Have you tried writing any of the graphics APIs?
        
             | hmry wrote:
             | People don't complain about DX portability because Windows
             | has first-party support for Vulkan and OpenGL, unlike
              | macOS. Also, since the Xbox also uses DirectX, you kill two
              | birds with one stone. And third, you aren't forced to use
              | Microsoft hardware to develop for DirectX (these days, you
              | don't even have to use Windows).
             | 
             | Basically, people are mad that you need to buy Apple
             | hardware, use Apple software (macOS), Apple tooling
             | (Xcode), just to develop graphics code for iOS and macOS.
             | At least you don't also need to use Apple language (Swift)
             | to use Metal, though I don't have any first-hand experience
             | with their C++ bindings so I can't judge if it's a painful
             | experience or not.
        
               | dagmx wrote:
               | Windows has third party support for Vulkan and OpenGL. It
               | is NOT first party.
               | 
               | It's definitely more convenient than Mac because it is
               | provided by the driver and so you can almost always
               | guarantee they exist, but Microsoft themselves do not
               | provide them. On Mac, for Vulkan you can use MoltenVK
               | which is also third party, and bundle it in the app,
               | though definitely less convenient and less fully
               | featured.
               | 
               | Regarding Xbox, that's a bit of an odd point because you
               | might as well include iOS as a platform at that point
               | which is a bigger gaming platform than Xbox. At least iOS
               | uses the same Metal as Mac, while Xbox does vary in some
               | ways from Windows. Granted, iOS gaming is much more
               | casual oriented but there are some AAA games as well.
               | 
               | Regarding Swift, Metal has always been ObjC first not
               | swift first. The C++ bindings are just for convenience,
               | but you've never been bound to Swift even before they
               | existed. Regarding Xcode, that's only to get the
               | toolchain or if you need instrumentation. You don't need
               | to use Xcode to actually develop things, this is no more
               | a burden than needing Visual Studio on Windows.
        
               | pjmlp wrote:
               | Windows doesn't have first party support for OpenGL and
               | Vulkan.
               | 
                | It has a pluggable driver system called ICD, left over
                | from the Windows NT/OpenGL 1.1 days, that driver vendors
                | use to add their OpenGL and Vulkan drivers.
                | 
                | https://learn.microsoft.com/en-us/windows-hardware/drivers/d...
                | 
                | In some subsystems like UWP, or on Windows on ARM, ICDs
                | aren't supported, and OpenGL/Vulkan have to be mapped on
                | top of DirectX.
                | 
                | https://devblogs.microsoft.com/directx/announcing-the-opencl...
        
               | my123 wrote:
               | On Snapdragon X and later, native Vulkan and OpenCL UMDs
               | are provided out of the box
               | 
               | GLon12 is still used for OpenGL however
        
               | dlivingston wrote:
               | To correct some misconceptions for readers:
               | 
               | Operating systems do _not_ implement graphics APIs for
               | GPUs. These are created by the GPU manufacturer
               | themselves (AMD, Nvidia, etc.). This _includes_ DirectX
               | drivers, both user-space and kernel-space drivers.
               | 
               | Graphics APIs like DirectX and Vulkan are better thought
               | of as (1) a formal specification for GPU behavior,
               | combined with (2) a small runtime. The actual DX/VK
               | drivers are thin shims around a GPU manufacturer's own
               | driver API.
               | 
               | For AMD, the DirectX / Vulkan / OpenGL graphics drivers
               | share a common layer called "PAL" which AMD has open
               | sourced: <https://github.com/GPUOpen-Drivers/pal>
               | 
               | Apple really isn't that different here: they leave the
               | graphics manufacturers to implement their own drivers.
               | Unfortunately, Apple is the sole graphics manufacturer
               | for their OS, and they've chosen to only implement Metal
               | drivers for their GPUs (and a legacy OpenGL driver too).
               | 
                | It's not _that_ big of a deal though, because Vulkan is
                | supported on macOS through the MoltenVK project, which
                | wraps the Vulkan API around the Metal API. And projects
                | like vkd3d wrap the DirectX 12 API around the Vulkan API,
                | which is then wrapped around the Metal API. This is how
                | you're able to run Windows games on Mac via the Game
                | Porting Toolkit or CrossOver, btw.
        
         | dist-epoch wrote:
         | Modern graphics APIs are the graphical equivalent to assembly
         | language - you are not supposed to use them directly, but
         | through another higher level layer, like a programming language
         | or a graphics engine.
         | 
         | They are a specialized API intended for tool writers.
        
         | whatevsmate wrote:
         | What promises were made, by whom? Graphics APIs have never been
         | about ease of use as a first order goal. They've been about
         | getting code and data into GPUs as fast as reasonably possible.
         | DevEx will always play second fiddle to that.
         | 
         | I think WebGPU is a decent wrapper for exposing compute and
         | render in the browser. Not perfect by any means - I've had a
         | few paper cuts working with the API so far - but a lot more
         | discoverable and intuitive than I ever found WebGL and OpenGL.
        
           | unconed wrote:
           | "What promises were made, by whom?"
           | 
           | Technically true, but practically tone deaf.
           | 
            | WebGPU is both years too late, and just a bit early. Whereas
           | WebGL was OpenGL circa 2005, WebGPU is native graphics circa
           | 2015. It shouldn't need to be said that the bleeding edge new
           | standard for web graphics shouldn't be both 10 years out of
           | date and awful.
           | 
           | Vendors are finally starting to deprecate the old binding
           | model as the byzantine machinery that it is. Bindless
           | resources are an absolute necessity for the modern style of
           | rendering with nanite and raytracing.
           | 
           | Rust's WGPU on native supports some of this, but WebGPU
           | itself doesn't.
           | 
           | It's only intuitive if you don't realize just how huge the
           | gap is between dispatching a vertex shader to render some
           | triangles, and actually producing a lit, shaded and
           | occlusioned image with PBR, indirect lighting, antialiasing
           | and postfx. Would you like to render high quality lines or
           | points? Sorry, it's not been a priority to make that simple.
           | Better go study up on SDFs and beziers.
           | 
           | Which, tbh, is the impression I get from webgpu efforts.
           | Everyone forgets the drivers have been playing pretend for
           | decades, and very few have actually done the homework. Of
           | those that have, most are too enamored with being a l33t gfx
           | coder to realize how terrible the dev exp is.
        
             | whatevsmate wrote:
             | I'm not sure I disagree with you really - and I ack that
             | webgpu feels like 2015 tech to someone who knows their
             | stuff. I don't have a take on "l33t gfx coder"; I'm a
             | hobbyist not a professional, and I've enjoyed getting up to
             | speed with WebGPU over and above my experiences with WebGL.
             | Happy to be schooled.
             | 
              | I've never implemented PBR or raytracing because my interests
             | haven't gone that way. I don't find SDFs to be a
             | particularly difficult concept to "study up on" either
             | though. It's about as close to math-as-drawing that I've
             | seen and doesn't require much more than a couple triangles
             | and a fragment shader. By contrast I've been learning about
             | SVT for a couple months and still haven't quite pieced
             | together a working impl in webgpu... though I understand
             | there are extensions specifically in support of virtual
             | tiling that WebGPU could pursue in a future version.
             | 
             | Agreed DevEx broadly isn't great when working on graphics.
             | But WebGPU feels like a considerable improvement rather
             | than a step backward.
        
               | BigJono wrote:
               | I can give a bit more context as someone that got on
               | WebGL, then WebGPU, and is now picking up Vulkan for the
               | first time.
               | 
               | The problem is that GPU hardware is rapidly changing to
               | enable easier development while still having low level
               | control. With ReBAR for example you can just take a
               | pointer into gigabytes of GPU memory and pump data into
               | it as if it was plain old RAM with barely any performance
               | loss. 100 lines of bullshit suddenly turn into a one line
               | memcpy.
               | 
               | Vulkan is changing to support all this stuff, but the
               | Vulkan API was (a) designed when it didn't exist and is
               | (b) fucking awful. I know that might be a hot take, and
               | I'm still going to use it for serious projects because
               | there's nothing better right now, but the same
               | extensibility that makes it possible for Vulkan to just
               | pivot huge parts of the API to support new stuff also
               | makes it dogshit to use day to day, the code patterns are
               | terrible and it feels like you're constantly compromising
                | on readability at every turn because there are simply zero
                | good options for how to format your code.
               | 
               | WebGPU doesn't have those problems, I quite liked it as
               | an API. But it's based on a snapshot of these other APIs
               | right at the moment before all this work has been done to
               | simplify graphics programming as a whole. And trying to
               | bolt new stuff onto WebGPU in the same way Vulkan is
               | doing is going to end up turning WebGPU into a bloated
               | pile of crap right alongside it.
               | 
               | If you're coming from WebGL, WebGPU is going to feel like
               | an upgrade (or at least it did for me). But now that I've
               | seen a taste of the future I'm pretty sure WebGPU is dead
               | on arrival, it just had horrendous timing, took too long
               | to develop, and now it's backed into a corner. And in the
               | same vein, I don't think extending Vulkan is the way
               | forward, it feels like a pretty big shift is happening
               | right now and IMO that really should involve overhauls at
               | the software/library level too. I don't have experience
               | with DX12 or Metal but I wouldn't be surprised if all 3
               | go bye bye soon and get replaced with something new that
               | is way simpler to develop with and reflects the current
               | state of hardware and driver capabilities.
        
               | MintPaw wrote:
               | If it weren't for the brand new shading language it might
               | have been a step forward. But instead it's further
               | fragmentation. Vulkan runs happily with GLSL, Proton runs
               | HLSL on Linux, SPIR-V isn't bad.
               | 
               | And the new shading language is so annoying to write it
               | basically has to be generated. Weird shader compilation
               | stuff was already one of the biggest headaches in
               | graphics. Feels like it'll be decades before it'll all be
               | stable.
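                | 
                | For anyone who hasn't seen WGSL, a trivial sketch of the
                | attribute-heavy style in question (a compute kernel that
                | doubles a buffer; 'device' is assumed):
                | 
                |     const module = device.createShaderModule({ code: `
                |       @group(0) @binding(0)
                |       var<storage, read_write> data : array<f32>;
                | 
                |       @compute @workgroup_size(64)
                |       fn main(@builtin(global_invocation_id) id : vec3<u32>) {
                |         data[id.x] = data[id.x] * 2.0;
                |       }
                |     ` });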
        
               | pjmlp wrote:
               | While I am also not happy with WGSL, note that GLSL has
               | reached a dead end, Khronos officially isn't developing
               | it any further other than extensions, see Vulkanised 2024
               | talks/panel.
               | 
                | Hence why NVIDIA's Slang offer was welcomed with open
                | arms.
        
             | jauntywundrkind wrote:
              | From Steve Wittens, a well-respected graphics hacker and
              | maker of the excellent Use.GPU (https://acko.net/tv/usegpu/).
              | I'm mostly posting to expand context and sprinkle in a
              | couple of light opinions.
             | 
             | > _Bindless resources are an absolute necessity for the
             | modern style of rendering with nanite and raytracing._
             | 
             | Yeah, for real. Looking at the November 2024 post "What's
             | next for WebGPU" and HN comments, bindless is pretty high
             | up there! There's a high level field survey & very basic
             | proposal (in the hackmd link), and wgpu seems to be filling
             | in the many gaps and seemingly quite far along in
             | implementation. Not seeing any signs yet that the broader
             | WebGPU implementors/spec folks are involved or following
             | along, but at least wgpu is very cross platform & well
             | regarded.
             | 
             | https://developer.chrome.com/blog/next-for-webgpu
             | https://news.ycombinator.com/item?id=42209272
             | https://hackmd.io/PCwnjLyVSqmLfTRSqH0viA
             | https://hackmd.io/@cwfitzgerald/wgpu-bindless
             | https://github.com/gfx-rs/wgpu/issues/3637
             | https://github.com/gpuweb/gpuweb/issues/380
             | 
             | > _Would you like to render high quality lines or points?
             | Sorry, it 's not been a priority to make that simple.
             | Better go study up on SDFs and beziers._
             | 
              | I realize lines and font rendering are insanely complex
              | fields, and that OpenGL offering at least lines while
              | Vulkan doesn't sure feels like a slap in the face. The work
              | being done by groups like https://linebender.org/ is
              | intense. Overall though, that intensity makes me question
              | the logic of trying to include it, and wonder whether
              | fighting to specify something that we clearly don't have
              | full mastery over makes sense: even the very best folks are
              | still improving the craft. We could specify an API without
              | specifying an exact implementation, without conformance
              | tests, perhaps, but that feels like a different risk. Maybe
              | having to reach for a library that does the work reflects
              | where we are, and causes the iteration & development we
              | sort of need?
             | 
             | > _actually producing a lit, shaded and occlusioned image
             | with PBR, indirect lighting, antialiasing and postfx_
             | 
              | I admit to envying the ambition to make this simple, to
              | have such great and deep knowledge as Steve does, and to
              | think such hard things possible.
             | 
             | I really really am so thankful and hope funding can
             | continue for the incredibly hard work of developing webgpu
             | specs & implementations, and wgpu. As @animats chimes in in
             | the HN submission, bindless in particular is quite a
             | crisis, which either will enable the web to go forward, or
             | remain a lasting real barrier to the web's growth. Really
             | seems to be the tension of Steve's opening position:
             | 
             | > _WebGPU is both years too late, and just a bit early.
              | Whereas WebGL was OpenGL circa 2005, WebGPU is native
             | graphics circa 2015._
        
               | flohofwoe wrote:
               | > OpenGL offering at least lines...
               | 
               | WebGPU _does_ have line (and point) primitives since they
               | are a direct GPU feature.
               | 
               | It just doesn't bother to 'emulate' lines or points that
               | are wider than 1 pixel, since this is not commonly
                | supported in modern native 3D APIs. Drawing thick lines
                | and points is better done by a high-level vector drawing
                | API.
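                | 
                | Concretely, line/point topology is just a pipeline
                | setting in the JS API (a sketch; 'device', 'module' and
                | the target format are assumptions):
                | 
                |     const pipeline = device.createRenderPipeline({
                |       layout: 'auto',
                |       vertex: { module, entryPoint: 'vs_main' },
                |       fragment: {
                |         module,
                |         entryPoint: 'fs_main',
                |         targets: [{ format: 'bgra8unorm' }],
                |       },
                |       // 'line-list' / 'point-list' work, but always 1px wide.
                |       primitive: { topology: 'line-list' },
                |     });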
        
             | jplusequalt wrote:
             | >It's only intuitive if you don't realize just how huge the
             | gap is between dispatching a vertex shader to render some
             | triangles, and actually producing a lit, shaded and
             | occlusioned image with PBR, indirect lighting, antialiasing
             | and postfx. Would you like to render high quality lines or
             | points? Sorry, it's not been a priority to make that
             | simple. Better go study up on SDFs and beziers.
             | 
             | I think this is a tad unfair. You're basically describing a
             | semi-robust renderer at that point. IMO to make
             | implementing such a renderer truly "intuitive" (I don't
             | know what this word means to you, so I'm taking it to mean
              | offloading these features to the API itself) would
             | require railroading the developer some, which appears to go
             | against the design of modern graphics APIs.
             | 
             | I think Unity/Unreal/Godot/Bevy make more sense if you're
             | trying to quickly iterate such features. But even then, you
             | may have to hand write the shader code yourself.
        
           | flohofwoe wrote:
           | > They've been about getting code and data into GPUs as fast
           | as reasonably possible. DevEx will always play second fiddle
           | to that.
           | 
            | That's a tiny bit of revisionist history. Each new major
            | D3D version (at least before D3D12) also fixed usability
            | warts compared to the previous one, with D3D11 probably
            | being the most convenient-to-use 3D API - while also giving
            | excellent performance.
           | 
           | Metal also definitely has a healthy balance between
           | convenience and low overhead - and more recent Metal versions
           | are an excellent example that a high performance modern 3D
           | API doesn't have to be hard to use, nor require thousands of
           | lines of boilerplate to get a triangle on screen.
           | 
            | OTOH, OpenGL has been on a steady downward usability trend
            | since the end of the 1990s, and Vulkan has unfortunately
            | continued this trend (but may steer in the right direction
            | in the future):
           | 
           | https://www.youtube.com/watch?v=NM-SzTHAKGo
        
             | whatevsmate wrote:
             | I hear you but I also don't see a ton of disagreement here
             | either. Like, the fact that D3D12 includes _some_ usability
             | fixes suggests that DevEx really does take a back seat to
             | the primary goal.
             | 
             | I'm not arguing that DevEx doesn't exist in graphics
             | programming. Just that it's second to dots on screen. I
             | also find webgpu to be a lot nicer in terms of DevEx than
             | WebGL.
             | 
             | Wdyt? Still revisionist, or maybe just a slightly different
             | framing of the same pov?
        
               | flohofwoe wrote:
               | > I also find webgpu to be a lot nicer in terms of DevEx
               | than WebGL.
               | 
               | Amen.
               | 
                | IMHO a new major and breaking D3D version is long
                | overdue. There _must_ be plenty of learnings about
                | which areas were actually worth sacrificing ease-of-use
                | for performance and which weren't.
                | 
                | Or maybe do something completely radical/ridiculous and
                | make HLSL the new "D3D API" (with some parts of HLSL
                | code running on the CPU, just enough to prepare
                | CPU-side data for upload to the GPU).
        
             | jms55 wrote:
             | > Metal also definitely has a healthy balance between
             | convenience and low overhead - and more recent Metal
             | versions are an excellent example that a high performance
             | modern 3D API doesn't have to be hard to use, nor require
             | thousands of lines of boilerplate to get a triangle on
             | screen.
             | 
             | Metal 4 has moved a lot in the other direction, and now
             | copies a lot of concepts from Vulkan.
             | 
             | https://developer.apple.com/documentation/metal/understandi
             | n...
             | 
             | https://developer.apple.com/documentation/metal/resource-
             | syn...
        
         | m-schuetz wrote:
         | I agree. I'll keep using OpenGL with CUDA interop until
         | something better shows up. Vulkan isn't it. I tried Vulkan to
         | get away from OpenGL, but ended up with CUDA instead since it's
         | so much nicer to work with. Vulkan has way too much
         | overengineered complexity with zero benefit.
        
         | on_the_train wrote:
          | OpenGL is still such a powerful technology. I use it all the
          | time because Vulkan is just so much more difficult to use.
          | It's a pity; so much good software isn't being built because
          | OpenGL is more or less a dead man walking.
        
         | pjmlp wrote:
          | Not really. For example, in the OpenGL era there was this
          | urban myth that game consoles used OpenGL; this was never
          | really the case.
         | 
          | Nintendo, after graduating to devkits where C and C++ could
          | be used (like the N64), had OpenGL-inspired APIs, which isn't
          | really the same, although there was some GLSL-like shader
          | support.
         | 
         | They only started supporting Khronos APIs with the Switch, and
         | even then, if you want the full power of the Switch, NVN is the
         | way to go.
         | 
          | PlayStation always had proprietary APIs; they did a small
          | stint with OpenGL ES 1.0 + Cg, which had very little to no
          | uptake among developers, and they dropped it from the
          | devkits.
         | 
         | Sega only had proprietary APIs, and there was a small
         | collaboration with Microsoft for DirectX, which only a few
         | studios took advantage of.
         | 
         | XBox naturally has always been about DirectX.
         | 
          | Go watch the GDC Vault programming track to see how many
          | developers you will find complaining about writing
          | middleware for their game engines, if any at all, versus how
          | many talks there are about taking advantage of every little
          | low-level detail of the hardware architecture.
        
           | BearOso wrote:
           | Early console APIs were more similar to Direct3D 1, with very
           | rudimentary immediate mode commands. Modern console APIs
           | still have a less stateful, easy API layer, like D3D10/11,
           | but also expose more low-level stuff, too.
           | 
           | OpenGL didn't match the hardware well except on SGI hardware
           | or carryover descendants like 3dfx.
        
         | flohofwoe wrote:
          | OpenGL became a mess of an API after 2.0, and WebGPU is
          | actually a fairly easy-to-use wrapper around Vulkan, D3D12
          | and Metal - definitely better designed than what OpenGL has
          | become.
          | 
          | Apart from that, D3D11 and Metal v1 are probably the sweet
          | spot between ease-of-use and performance (D3D11's
          | performance in particular is hard to beat, even in Vulkan
          | and D3D12).
        
           | shortrounddev2 wrote:
            | D3D11 is so nice to use that I feel like the ideal workflow
            | for ground-up graphics applications is to just write
            | everything in D3D11 and then let middleware layers on Linux
            | (Proton) or Mac (the Game Porting Toolkit) handle the
            | translation. DirectX also has a whole suite of first-party
            | software (DirectXMath, DirectXTK) which makes a lot of
            | common workflows much simpler.
           | 
            | If only the Windows team could get out of its tailspin,
            | because almost everything else MS produces on the Windows
            | side gets worse and worse every year.
        
         | zamadatix wrote:
          | I think what we learned from the OpenGL era is that it's
          | actually not very relevant whether all platforms use the
          | same high-level API to talk to the GPU hardware. What
          | matters is whether the platform's chosen API offers good
          | control of the hardware it uses.
          | 
          | You say this requires reinvention, but really the end work
          | is "translate OpenGL to something the hardware can actually
          | understand" in both scenarios. The difference with the
          | OpenGL era is that you did not have the option to avoid
          | using the wrapper, not that no wrapper existed. Targeting
          | the best of each possible hardware type individually,
          | without baking in assumptions about the hardware, has proven
          | not to be very practical, but that only matters if you're
          | building an "easy translation layer" rather than using one
          | or trying to target specific types of hardware very directly
          | (in which case you don't want something super generic or
          | simple; you want something which exposes the hardware as
          | directly as is reasonable for that hardware type).
        
       | simonw wrote:
       | I hadn't realized WebGPU was already available on macOS in the
       | Firefox Nightlies!
       | 
       | I just installed the Mac nightly from https://www.mozilla.org/en-
       | US/firefox/channel/desktop/ and now this demo works:
       | https://huggingface.co/spaces/reach-vb/github-issue-generato...
       | 
       | It runs the SmolLM2 model compiled to WebAssembly for structured
       | data extraction. I previously thought that demo only worked in
       | Chrome.
       | 
       | (If I try it in regular Firefox for Mac I get "Error: WebGPU is
       | not supported in your current environment, but it is necessary to
       | run the WebLLM engine.")
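       | 
       | For reference, a rough sketch of the kind of feature detection
       | a page can do before showing an error like that (the function
       | name is just illustrative):
       | 
       |     // True only if the browser exposes WebGPU *and* can
       |     // actually hand back an adapter for this machine.
       |     async function webgpuAvailable() {
       |       if (!('gpu' in navigator)) return false;
       |       const adapter = await navigator.gpu.requestAdapter();
       |       return adapter !== null;
       |     }
       | 
       |     webgpuAvailable().then((ok) => {
       |       console.log(ok ? 'WebGPU ready' : 'no WebGPU, fall back');
       |     });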
        
         | erichdongubler wrote:
         | Member of Firefox's WebGPU team here. This is expected. Stable
         | support for macOS is something we hope to ship soon! From the
         | post:
         | 
         | > Although Firefox 141 enables WebGPU only on Windows, we plan
         | to ship WebGPU on Mac and Linux in the coming months, and
         | finally on Android.
        
       | modeless wrote:
       | Seems like Firefox will ship WebGPU on Linux before Chrome does,
       | then.
        
         | josephg wrote:
          | Which is a bit weird, honestly - since Dawn (Google's WebGPU
          | implementation) works pretty well on Linux.
        
       | danjl wrote:
        | They can take a huge step into the present by supporting
        | native RTX ray-tracing instructions on the GPU.
        
         | Vipitis wrote:
          | That's an exciting feature to have available, as ray-tracing
          | hardware is now in mobile iGPUs and phones.
         | 
         | There is a tracking issue[1], although I am not sure how much
         | of that makes it to the browser.
         | 
         | [1] https://github.com/gfx-rs/wgpu/issues/6762
        
       | astlouis44 wrote:
       | This is very exciting, congrats to the Firefox team!
       | 
       | My company is working to bring Unreal to the browser, and we've
       | built out a custom WebGPU RHI for Unreal Engine 5.
       | 
       | Here are demos of the tech in action, for anyone interested:
       | 
       | (Will only work on Chromium-based browsers on desktop, and on
       | some Android phones)
       | 
       | Cropout: https://play-
       | dev.simplystream.com/?token=aa91857c-ab14-4c24-...
       | 
       | Car configurator: https://garage.cjponyparts.com/
        
         | aspenmayer wrote:
         | > (Will only work on Chromium-based browsers on desktop, and on
         | some Android phones)
         | 
         | This post is about WebGPU in Firefox. Do you plan to test
         | and/or release a Firefox-compatible version?
        
         | saubeidl wrote:
         | Will it work in Firefox 141 on Windows? If not, why?
        
         | Eduard wrote:
          | Android Chrome on a Pixel 7a: "cropout" just shows a 0%
          | loading bar; "car configurator" has the loading bar go up to
          | 97 or 98%, but then it also doesn't continue.
        
           | LtdJorge wrote:
           | Same on Firefox on Linux (Gentoo), same percentages.
        
             | astlouis44 wrote:
             | Linux doesn't have WebGPU support yet, that would be the
             | reason.
        
         | wslh wrote:
          | In Google Chrome for macOS: 0% and not moving on the first
          | link, and it stops at 98% (sometimes 97%) on the second one.
          | Same with Safari.
        
           | astlouis44 wrote:
           | Sorry to hear that! I will say that usually if you wait long
           | enough, it will eventually load. Try popping open your dev
           | console sidebar, you should see assets downloading over the
           | network.
           | 
            | If it does crash, you'll be able to see why. I'd be
            | interested in seeing any bug reports if you do find some;
            | we're always squashing bugs over here!
        
           | astlouis44 wrote:
           | Can you try this one instead? No loading bar, but it should
           | actually load and relatively fast.
           | 
           | https://topdown.tiwsamples.com/
        
             | wslh wrote:
             | Impressive, that works!
        
               | astlouis44 wrote:
               | Thanks! Believe it or not, that demo is actually WebGL
               | 2.0
        
       | mclau157 wrote:
        | Are there enough devs working on uses for this? I was hoping
        | for a resurgence like Flash devs 20 years ago.
        
         | fidotron wrote:
         | If you follow things like three.js you'll be painfully aware
         | that in truth there doesn't seem to be much use for this at
         | all. "3D on the web" is something that sounds fun until it's
         | possible at which point it becomes meh[1]. The exception
         | proving the rule would be that Marble Madness promo game
         | https://news.ycombinator.com/item?id=42212644
         | 
         | Consequently much of the JS 3D community has become obsessed
         | with gaussian splatting, and AR more generally.
         | 
         | [1] And I would extend this to what's going on here: people
         | prefer complaining about how missing features in APIs prevent
         | their genius idea from being possible, when in truth there's
         | simply no demand from users for this stuff at all. You could
         | absolutely have done web Minecraft years ago, and it's very
         | revealing such a thing is not wildly popular. I personally
         | wasted too long on WebGL ( https://www.luduxia.com/ ), and what
         | I learned is the moment it all works people just assume it was
         | nothing and move on.
        
           | pjmlp wrote:
           | There are so many blockers, versus old style Flash games.
           | 
            | Driver and OS blacklisting means that game developers
            | aren't aware of the user experience, nor can they control
            | it, as they can with native games or server-side rendering
            | with streaming.
           | 
           | No proper debugging tools other than printf/pixel debugging.
           | 
            | The number of loading screens that would be needed, given
            | the memory constraints of browser sessions.
            | 
            | This alone means there is hardly much ROI for 3D web
            | games, and most uses end up being in e-commerce or Google
            | Maps-style applications.
        
         | MintPaw wrote:
          | There's no way you'll see anything like that. Flash was dead
          | simple; a 12-year-old could throw a simple game together and
          | upload it. WebGPU will require a skilled graphics programmer
          | just to write (or more likely cross-compile) these weird
          | shaders.
         | 
          | And the SWF format had insane compatibility, literally
          | unmatched by any other technology imo; we didn't even think
          | about OSes, it really was "write once, run anywhere" (pre-
          | smartphone ofc). On the web, even basic CSS doesn't work the
         | same from OS to OS, and WebGL apps still crash on 10% of
         | devices randomly. It'll probably be 5 years before WebGPU is
         | even remotely stable.
         | 
         | Not even to mention the fully integrated editor environment.
         | 
         | Or I guess maybe you're saying someone should build something
         | like Flash targeting WebGPU? Probably the closest there is to
         | that right now is Figma? But it feels weak too imo, and was
         | already possible with WebGL. Maybe Unreal Engine is the bet.
        
       | Vipitis wrote:
        | I have been using wgpu for my main projects for nearly two
        | years now. Let's hope this rollout means more maintainers, so
        | the issues I opened 18 months ago bug more people and
        | eventually get resolved. I've never touched Rust myself, but
        | maybe I'll find the motivation and time to do it myself.
        | 
        | As I also depend on the wgpu-native bindings, updates are slow
        | to reach me. We just got to v25 last week, and v26 dropped a
        | couple of days before that.
        
       ___________________________________________________________________
       (page generated 2025-07-16 23:00 UTC)