[HN Gopher] First Public Working Drafts: WebGPU and WebGPU Shadi...
       ___________________________________________________________________
        
       First Public Working Drafts: WebGPU and WebGPU Shading Language
        
       Author : bpierre
       Score  : 127 points
       Date   : 2021-05-18 14:16 UTC (8 hours ago)
        
 (HTM) web link (www.w3.org)
 (TXT) w3m dump (www.w3.org)
        
       | brrrrrm wrote:
       | the performance is pretty impressive on some devices
       | https://news.ycombinator.com/item?id=26333369
        
         | kvark wrote:
         | This benchmark has little to do with the first public draft of
         | the API we are discussing. Safari's old implementation hasn't
         | been updated in years, and it's probably missing all the safety
          | checks we'll need to ship this, such as handling of out-of-
          | bounds array accesses.
        
       | dindresto wrote:
       | I took the opportunity to base my visual computing project for
       | this semester on the Rust WebGPU implementation (wgpu-rs).
       | Working with it is very nice, compared to OpenGL and Vulkan. The
       | biggest pain point right now is WGSL because it still is a moving
       | target, but it's already good enough to work with in my case. The
       | repository for my project currently lacks any info at all
       | unfortunately: https://github.com/niklaskorz/linon/
       | 
        | It is a ray caster that I plan to evolve into a nonlinear ray
        | caster where the viewing rays travel along a vector field, thus
        | distorting the image. You can enable the nonlinear mode by
       | setting the "linear_mode" constant in "src/compute.wgsl" to
       | false. I also implemented hot reloading for the shaders, which
       | made experimenting a lot easier. The project doesn't have any
       | non-Rust dependencies, so giving it a go is as easy as "cargo
       | run" in the repository directory. For an idea of what I'm aiming
       | for, see the "Visualizing Strange Worlds" paper by Eduard Groller
       | from 1995:
       | https://www.cs.drexel.edu/~david/Classes/Papers/Groller95.pd...
       | 
        | You can also drag&drop .obj models to load them, but I advise
        | you to only use small models (500 triangles max) in linear mode
        | and to avoid loading models entirely in nonlinear mode right
        | now, as the performance is really bad at the moment. :) I'm
        | looking into
       | acceleration structures next, so hopefully this will improve.
        
       | waynecochran wrote:
        | It would be cool if the shading document had an example ... just
       | diving into details makes it hard to get the flavor. Anyone have
       | a smallish SL example?
        
         | raphlinus wrote:
         | https://github.com/austinEng/webgpu-samples is your friend.
         | Look under sample/*/*.wgsl in particular for shader examples.
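          | 
          | For a quick taste, a minimal vertex + fragment pair (in the
          | style of the hello-triangle sample there) looks roughly like
          | this; WGSL is still in flux, so the exact attribute syntax may
          | have shifted since:
          | 
          |     [[stage(vertex)]]
          |     fn vs_main([[builtin(vertex_index)]] idx: u32)
          |             -> [[builtin(position)]] vec4<f32> {
          |         // A single hard-coded triangle in clip space.
          |         var pos = array<vec2<f32>, 3>(
          |             vec2<f32>(0.0, 0.5),
          |             vec2<f32>(-0.5, -0.5),
          |             vec2<f32>(0.5, -0.5));
          |         return vec4<f32>(pos[idx], 0.0, 1.0);
          |     }
          | 
          |     [[stage(fragment)]]
          |     fn fs_main() -> [[location(0)]] vec4<f32> {
          |         // Flat color output.
          |         return vec4<f32>(1.0, 0.0, 0.0, 1.0);
          |     }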
        
       | mrweasel wrote:
       | I was looking under security and I don't see any mention of
       | misuse, but that can also be hard to identify.
       | 
        | Potential misuse should not stop WebGPU from moving forward, but
        | isn't there a concern that this will make JavaScript injection
        | attacks more attractive? Rather than buying and operating a
        | crypto mining setup, it now becomes more attractive to attempt
        | to utilize the GPUs of unsuspecting web users. Or am I
        | misunderstanding how much
       | of the GPU this will grant you access to?
        
         | paulgb wrote:
         | > I was looking under security and I don't see any mention of
         | misuse, but that can also be hard to identify.
         | 
         | For what it's worth, the overall WebGPU spec draft (rather than
         | the shader spec) has a section on security/malicious use:
         | https://gpuweb.github.io/gpuweb/#malicious-use
        
         | modeless wrote:
         | The GPU is already accessible to JavaScript through WebGL.
         | WebGPU should hopefully be somewhat faster in many cases, and
         | simpler to use with a simpler/less buggy implementation. But
         | it's not really exposing a ton of attack surface that's not
         | already available in WebGL. Things like coin mining are already
         | possible with WebGL, and a small-to-medium sized speed boost
         | isn't going to change much.
         | 
         | Neither WebGL nor WebGPU will today give you 100% of native
         | (e.g. CUDA) performance for applications like coin mining or
         | machine learning. And even WebGPU may never get all the way
         | there, at least in a form that can be exposed in browsers,
         | because top performance requires deep knowledge of the user's
         | specific hardware configuration and specialized hardware-
         | specific code paths. Exposing that to the web raises legitimate
         | fingerprinting and portability concerns.
         | 
         | That said, I think there is a lot of promise for WebGPU as a
         | portable graphics API for native applications outside of
         | browsers, where said fingerprinting and portability concerns
         | are much less of an issue. In a native context, hardware-
         | specific extensions could easily be exposed and there's no
         | reason in theory why you couldn't have the best performance
         | possible on any given hardware.
        
           | Hizonner wrote:
           | > The GPU is already accessible to JavaScript through WebGL.
           | 
           | Mine isn't. Disabling WebGL is DEFINITELY on the checklist
            | for a new browser profile.
           | 
           | GPUs are way too complicated, way too opaque, and way too
           | privileged to be safely exposed to the Web. We're not talking
           | about some Web page using your machine to mine crypto here.
           | We're talking about some Web page owning your kernel.
           | 
           | Kill it with fire.
        
             | modeless wrote:
             | WebGL has been part of browsers for many years now and the
             | security track record is public. This fear has not proven
             | true in reality. Overall, WebGL has not had more or worse
             | security problems than other components of the browser.
             | Great care has been taken to properly isolate and sandbox
             | WebGL, far in excess of what any other graphics API has
             | ever had in terms of security. WebGPU in browsers will have
             | the same level of protection.
        
             | incrudible wrote:
             | Your machine is way more likely to get owned through some
             | bug parsing a dumb string. Anything your browser displays
              | ends up _in the graphics driver_, one way or another.
             | WebGL is arguably a much smaller surface to deal with than
             | all the code involved in rendering a web page efficiently.
        
       | bruce343434 wrote:
        | Yuck. So now there are 3 of them. What's wrong with GLSL? HLSL?
       | Somebody link that xkcd about competing standards.
       | 
       | But seriously, why do they reinvent the wheel? Either make
        | JavaScript work on the GPU (because that's the language of the
        | web, just like C++ and CUDA go hand in hand), or just stick with
       | HLSL.
        
         | rektide wrote:
         | Control-flow oriented languages like JavaScript are not a good
         | fit for GPUs. That's the accepted belief, at least.
         | 
         | Allegedly GLSL and HLSL do not come with enough protections /
         | guarantees for anyone to have felt comfortable porting them to
         | the web. That rules them out cleanly.
         | 
         | Also, these are both fairly legacy technologies, both predating
          | Vulkan by a good bit more than a decade. WebGPU at its core is
         | a remake of Vulkan for the web. Then there's a shading language
         | that works with this base. My understanding is this is a far
         | more competent, capable means of setting up & orchestrating
         | work, so that it can happen & flow effectively on GPUs. Doing
         | less than WebGPU sounds supremely unappealing.
        
         | kvark wrote:
         | Apple has tried to collaborate with Microsoft on making HLSL
         | viable for the Web and standardizing it. IIRC, we figured that
         | we can't accept HLSL as is anyway, and we'd have to develop a
         | flavor of it for the Web, in which case the point about using
         | an existing language becomes moot.
        
       | raphlinus wrote:
       | I've been doing something of a deep dive into the suitability of
        | WebGPU and WGSL for "serious" compute tasks (i.e. _potentially_
        | competing with CUDA). Here's my take on the current state.
       | 
       | The upside is that it's very pleasant to program compared with
       | existing APIs, and is portable. Even when the GPU doesn't quite
       | meet the requirements, it's possible to polyfill compute shaders
       | with SwiftShader. As this standard becomes more mature, it will
       | be reasonable to expect that it will be correct and performant
       | pretty much everywhere.
       | 
       | Note that both web and non-web implementations are viable.
       | 
       | The biggest drawback is that the "MVP" covers the basics of
       | compute shaders but not the advanced features. It's _roughly_
        | equivalent to DX11 / Shader Model 5, which is of course good for
       | compatibility on pre-Windows 10 devices. It's missing subgroup
       | operations, scalar types other than 32 bits, an explicit memory
       | model, and some other goodies. This potentially leaves quite a
       | bit of performance on the table, but that will depend on the
       | exact workload. For example, machine learning inference workloads
       | will probably suffer considerably due to the lack of f16. Thus,
        | if you want to ship something based on compute shaders _today_,
       | it makes sense to build your own portability layer on top of
       | existing APIs such as Vulkan, DX12, and Metal.
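        | 
        | To make that concrete, here is a rough sketch of what an
        | MVP-level WGSL compute shader looks like (everything in 32-bit
        | types, no subgroups); the attribute and storage-class syntax is
        | still shifting between drafts, so treat the details as
        | approximate:
        | 
        |     [[block]] struct Data {
        |         values: array<f32>;
        |     };
        | 
        |     [[group(0), binding(0)]] var<storage, read_write> data: Data;
        | 
        |     [[stage(compute), workgroup_size(64)]]
        |     fn main([[builtin(global_invocation_id)]] gid: vec3<u32>) {
        |         // One invocation per element; the implementation is
        |         // responsible for making out-of-bounds accesses safe.
        |         data.values[gid.x] = data.values[gid.x] * 2.0;
        |     }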
       | 
       | I do think WebGPU is an excellent on-ramp to learning compute
       | shader programming, as it will get stronger over time, and
       | concepts will transfer to other APIs as listed above.
       | 
       | Watch my Twitter feed for more depth on the points I listed
       | above.
        
         | mrec wrote:
         | Do we know yet what the evolution and conditional-support
         | mechanism is going to look like? Will there be "feature levels"
         | like D3D, or ad-hoc independent extensions like interim GL, or
         | some mix of the two?
         | 
         | Now that basically all relevant browsers are evergreen (sit
         | down, IE11) I wonder whether an API roadmap could take
         | advantage of that in a way that's less pesky than the
         | traditional approaches.
        
           | raphlinus wrote:
           | You'll probably be interested in section 3.6 of the spec[1],
           | "optional capabilities." One such is timestamp queries, which
           | I find especially annoying when they're not available.
           | 
           | [1]: https://gpuweb.github.io/gpuweb/#optional-capabilities
        
           | pjmlp wrote:
           | It will be extension spaghetti as always.
           | 
            | Note that even with the same browser version there are no
            | guarantees regarding GPGPU support.
           | 
           | A big difference between browsers and native 3D APIs is the
           | blacklisting of client hardware/drivers, so one never knows
           | what is actually being accelerated.
        
         | longstation wrote:
         | Could someone correct me? I am pretty new to graphics
          | programming but I was learning wgpu (the Rust binding of WebGPU)
         | the other day. I was under the impression that the shader
         | language is GLSL (version 450) and it is then converted into
         | SPIR-V.
         | 
         | Would those limitations still apply with SPIR-V?
        
           | lights0123 wrote:
           | The shader language you write in does not matter, as it can
           | be converted into whatever format the API expects.
           | 
           | wgpu historically accepts SPIR-V, but the latest version
           | released ~a month ago has moved to WGSL as much as it can in
           | the examples. Have you upgraded yet?
        
             | longstation wrote:
              | I haven't. I didn't realize they'd made the move. Looks
              | like I need to find some new learning material. BTW, do you
              | know of any up-to-date tutorials?
        
             | raphlinus wrote:
             | "Does not matter" is overstating the case. There are lots
             | of tools for converting between shader languages (naga,
             | spirv-cross, glslangValidator, dxc), but these conversions
             | aren't 100% lossless and not all shader languages have the
             | same capabilities. To give one example, HLSL (Shader Model
             | 6) has a fairly complete set of subgroup (wave) operations,
             | but is missing nonuniform shuffle. Another gap is that only
             | GLSL (Vulkan flavor) has support for explicit memory
             | semantics. Of course, WGSL currently has none of this, as
             | it's something of a lowest common denominator.
             | 
             | In the game world, it is very common to write shaders in
             | HLSL because of good support on Windows, and then use the
             | tools mentioned above to translate into other shader
             | languages. I explored this for my own work but ultimately
             | settled on GLSL because it allows access to advanced
             | features.
             | 
             | Fans of shader languages should also be aware of rust-gpu,
             | which promises to be considerably higher level; most
             | existing shader languages show their clear lineage to C
             | (through Cg).
        
           | raphlinus wrote:
           | This is still in transition, and the full story is
           | complicated.
           | 
           | The official shader language for WebGPU is WGSL, and that
           | will almost certainly be the only language supported for web
           | deployment. Originally this was going to be a textual
           | representation that had exactly the same semantics
           | ("bijective") as SPIR-V, but that's been diverging a bit[1].
           | Only very recently have atomics been added to the spec, and
           | implementations are still catching up.
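            | 
            | For a flavor of what just landed, an atomic counter in the
            | draft looks something like this (the exact placement of the
            | attributes and access qualifiers is still in flux):
            | 
            |     [[block]] struct Counter {
            |         value: atomic<u32>;
            |     };
            | 
            |     [[group(0), binding(0)]] var<storage, read_write> c: Counter;
            | 
            |     [[stage(compute), workgroup_size(64)]]
            |     fn main() {
            |         // atomicAdd returns the value before the addition.
            |         let old: u32 = atomicAdd(&c.value, 1u);
            |     }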
           | 
           | Because WGSL is still under construction, for native
           | deployment, wgpu supports SPIR-V shaders as well. On Vulkan,
           | those are passed pretty much straight through to the GPU
           | driver (with some validation). On DX12 and Metal, those are
           | translated into HLSL and MSL, respectively. Until recently,
           | in wgpu, all those translations were handled by spirv-cross.
           | More recently, the new naga crate is doing those
           | translations. Again, this is under construction and not
           | everything works yet.
           | 
           | [1]: https://kvark.github.io/spirv/2021/05/01/spirv-
           | horrors.html
        
             | alexkcd wrote:
             | From [1]:
             | 
             | > When writing SPIR-V, you can't have two integer types of
             | the same width. Or two texture types that match. All non-
             | composite types have to be unique for some reason. We don't
             | have this restriction in Naga IR, it just doesn't seem to
             | make any sense. For example, what if I'm writing a shader,
             | and I basically want to use the same "int32" type for both
             | indices and lengths. I may want to name these types
             | differently
             | 
             | This doesn't really make sense for IR. IR is not meant to
             | be human writeable. It's meant to be generated by a
             | compiler. So, having a one to one mapping between concept
             | and name in the IR is a feature, not a bug.
             | 
              | Honestly, WGSL just repeats the JavaScript mistake:
             | we should have started with something like WASM instead of
             | JavaScript. And JS could have been just one of the many
             | languages that targeted WASM.
             | 
             | We were this close to not repeating this mistake by
             | adopting an IR language (SPIR-V) for WebGPU, but then that
             | got abandoned mostly for political reasons. Too bad. Now we
             | get to write transpilers and hacks for decades to come,
             | just like web people have been trying to paper over JS
             | problems for decades.
        
             | longstation wrote:
             | I see. Thank you for the detailed information. I will read
             | the linked content.
             | 
             | I don't know if this is a good place to ask. But I was
              | interested in wgpu because of my interest in building a
              | cross-platform non-web UI library. I read a few of your
              | articles and also looked at druid. I am pretty new to it
              | and there seem to be no books on this topic. Do you have
             | any recommendations?
        
               | raphlinus wrote:
               | Yes: don't. It's very hard.
               | 
               | More seriously. There is _very_ good reason to be
               | encouraged about wgpu as a cross-platform low-level GPU
               | drawing interface for UI work. There are already
               | successful UI toolkits using it, of which probably the
               | most impressive is iced.
               | 
               | For 2D graphics in general, a good starting point is
                | Diego Nehab's 2D graphics course[1]. It has links to a bunch
               | of resources including papers on GPU rendering.
               | 
               | Of course, I'm working on my own stuff, which in time I
               | hope will become a very strong foundation for building
               | things such as UI toolkits, but it's not ready yet, and
               | of course you're already aware of it.
               | 
               | [1]: http://w3.impa.br/~diego/teaching/vg/
        
               | longstation wrote:
               | Saved in my note, much appreciated.
               | 
               | Yes, I agree building a UI library from scratch is hard
                | and probably not worth doing (too many people tried and
                | gave up in the middle; probably better to channel that
                | enthusiasm into improving existing ones). I should have
                | stated that more clearly. I was looking at something more
               | modest, like a canvas implementation based on wgpu that
               | enables others to build their UI libraries on top.
               | 
               | I guess it would be something like Skia, but it also
               | sounds like a tremendous effort, and I could probably
               | just use Skia instead. However, I guess I am not going to
               | match what Skia has (maybe what HTML canvas has is
                | enough?). Also, I have played with the skia-safe crate but I
               | personally feel the dev experience could be better (not
               | to blame skia-safe, it's wonderful and even provides a
               | build process that automatically downloads a precompiled
               | skia library).
               | 
                | Thanks for pointing out Iced. I didn't know they were
                | using wgpu under the hood. I will take a good look at
                | their code. I have been reading nannou's[1] code to learn
               | how to work with wgpu. If we skip my last paragraph, I
               | think building a UI library on top of nannou is pretty
               | doable.
               | 
               | [1]: https://github.com/nannou-org/nannou
        
           | [deleted]
        
         | pjmlp wrote:
         | Given the slow uptake on WebGL and the hit-and-miss on browser
         | hardware support, I fear it will take several years for WebGPU
          | to be a usable target beyond learning purposes.
        
       | gsnedders wrote:
       | For those unfamiliar with the W3C:
       | 
       | Working Drafts are essentially just "drafts with stable URLs for
       | review purposes and historical interest".
       | 
       | First Public Working Drafts also have consequences under the W3C
        | Patent Policy, primarily starting a timer within which any
        | participant of the group has to declare any patents they hold
        | that they wish to exclude from the W3C royalty-free grant.
        
       | adamnemecek wrote:
       | You should check out the Rust WebGPU implementation
       | https://github.com/gfx-rs/wgpu-rs
        
         | mrec wrote:
         | https://github.com/gfx-rs/wgpu is the actual implementation (in
         | Rust, but exposing a C API). wgpu-rs is an idiomatic wrapper
         | around that.
        
           | adamnemecek wrote:
            | I know; I considered for a second which one to link to and
            | went with the Rust bindings.
        
       | rubatuga wrote:
       | You can trigger some insane GPU bugs from just user-space
       | programs, and while using the WebGPU Python wrapper, I've managed
       | to completely corrupt the screen buffer (which required a
        | restart), completely freeze the GPU, and consistently trigger
        | denial-of-service attacks. I can guarantee that WebGPU opens a
       | huge attack space that is relatively unexplored.
        
         | kvark wrote:
         | The current implementations are heavy WIP and incomplete. The
         | fact that you managed to screw them up is not a big surprise.
         | Once all of the safety and validation work is done, we'll need
         | to check again and see.
        
           | TonyTrapp wrote:
           | Those bugs are in the graphics drivers, not in the WebGPU
           | implementation. Any remotely powerful GPU interface will
           | probably be able to talk to the drivers in a way that will
           | exploit those bugs. On the other hand, if you artificially
           | restrict the interface so those bugs are out of reach, it
           | will probably not be powerful enough for what it wants to be.
           | Graphics card vendors will have to step up their game and
           | improve their drivers.
        
         | anentropic wrote:
         | It feels like this is something a "Web-" prefixed API should
         | protect us from.
         | 
         | I'm curious if such catastrophic behaviour is inherent to the
            | WebGPU API as specified, or if the current implementation is
         | failing to provide the level of safety that it should according
         | to the spec?
        
           | paulgb wrote:
           | > It feels like this is something a "Web-" prefixed API
           | should protect us from.
           | 
           | It's worth noting here that the Python wgpu library (which
           | I'm assuming is the one GP is referring to?) is a WebGPU-
           | inspired API to wgpu. wgpu is the library that Mozilla
           | created as a native graphics abstraction layer to build
           | WebGPU on top of. It's possible that Mozilla has implemented
           | (or intends to implement) the security/validity checks
           | between wgpu and the JavaScript interface exposed to the
           | browser, in which case they would not automatically apply to
           | the Python bindings.
        
             | kvark wrote:
             | wgpu is work in progress, we don't have all the validation
              | in place yet. Also, it's not a Mozilla project; it's part
              | of the gfx-rs work by the Rust community.
        
               | paulgb wrote:
               | Thanks for the correction (and for wgpu!)
               | 
               | Was it a Mozilla-originated project that has been moved
                | to gfx-rs, or has it always been separate?
        
               | kvark wrote:
               | Was always separate. There was a movement towards trying
               | to make Mozilla-Central an upstream for it, but it hasn't
               | happened. So technically right now it's a community
               | project used by Firefox.
        
         | kitsunesoba wrote:
         | I cannot speak to the feasibility of such a thing, but it feels
         | a lot like OS vendors should be very seriously looking into
         | moving GPU drivers away from the kernel and into userspace.
         | GPUs are increasingly looking like a massive hole in otherwise
         | secure systems, even without the added possibilities of GPUs
         | being exposed to the web.
        
           | pjmlp wrote:
           | Windows has already done that for a while now, macOS is doing
            | that for all drivers, Android has Treble; it's the BSDs and
            | the Linux kernel that are still stuck with monolithic
            | designs.
        
             | kitsunesoba wrote:
             | Isn't macOS still using kernel extensions (kexts) for GPU
             | drivers? That might not technically be in the kernel itself
             | but it's still pretty close and certainly not userspace.
        
               | pjmlp wrote:
                | Yes, I should have been clearer; it is a multi-year
                | roadmap.
               | 
               | The long term plan is to remove all support for any kind
               | of kexts.
               | 
               | As presented at WWDC, after a kext gets a userspace
               | alternative, its support is removed on the following OS
               | release.
        
       | slmjkdbtl wrote:
        | I'm looking at the complex spec of this shading language, which
        | accomplishes almost the same thing as other modern high-level
        | shading languages, and wondering: is there any possibility we'll
        | reach a state of consensus on a single unified GPU graphics
        | interface? Right now things seem to be getting more fragmented
        | instead of unified. We moved away from "cross-platform" OpenGL
        | to the far more platform-specific Vulkan, Metal, and D3D (and
        | now WebGPU), so cross-platform graphics programming seems to be
        | getting further and further away; you have to rely on
        | abstraction layers, which are often huge dependencies and
        | tedious to develop / maintain.
        
         | kvark wrote:
          | WebGPU on native is the best bet to bring back the days when
         | you could reasonably write once and run everywhere. See also -
         | http://kvark.github.io/web/gpu/native/2020/05/03/point-of-we...
        
           | slmjkdbtl wrote:
            | Thanks for the article, it's a great write-up! Do you mean
            | it's possible that in the future vendors will ship WebGPU
            | drivers built into their products (couldn't find any
            | evidence on that yet), or that it's the best option for the
            | interface of a translation layer like wgpu? Right now wgpu
            | is already pretty solid as a translation layer, but direct
            | driver support would be so good. (Also thanks for your work
            | on Rust graphics-related stuff! It benefits me a lot)
        
             | kvark wrote:
              | I don't expect any driver support for this in the near
             | future. The promise of WebGPU on native is not in wgpu
             | library quality, although it's a part of the equation. The
             | promise is in a well-thought specification (of WebGPU API)
             | and the standardized API headers with multiple
             | implementations. This matches best practices from Khronos.
        
         | haxiomic wrote:
          | From the start WebGPU has been built with native use in mind,
         | so ideally you'd just target WebGPU for both your web and
         | native builds. The WebGPU native implementation can then drive
         | the native graphics API for your system (and one day perhaps
          | we'll see WebGPU at the driver level).
        
           | slmjkdbtl wrote:
            | > built with native use in mind
           | 
            | Do you mean they're thinking "this can easily be translated
            | to other graphics interfaces" or "we're going to convince
            | vendors to ship products with a WebGPU driver built in, like
            | OpenGL"? I'd be so happy to see WebGPU at the driver level,
            | but I can't find evidence that anyone is interested in
            | providing it.
        
             | gmueckl wrote:
              | Expect to have to ship a WebGPU implementation with your
             | application, similar to how you need to include ANGLE for
             | WebGL support. A platform level API needs a lot of
             | additional considerations about plumbing layers between
             | applications and one or more separately installed drivers.
             | I don't see this addressed for WebGPU.
        
         | gmueckl wrote:
          | The irony is that with OpenGL and WebGL getting left behind,
         | graphics programming becomes a bit more uniform across
         | platforms as a result, even though the underlying APIs are
         | different. They are based on very similar modern concepts.
         | OpenGL on the other hand really shows its age and isn't a good
         | fit anymore. Some newer things have been bolted on awkwardly
         | over time, but they are either optional extensions or part of
         | versions that are newer than what certain target platforms
         | actually support.
         | 
         | Looking at the various rendering backends for our product, the
         | OpenGL backend is the most byzantine one. The Vulkan backend
         | has more code, but all of that is fairly sane. It doesn't have
         | accidental complexity like having to pick the right GL function
         | to upload a specific texture out of about a dozen possibilities
         | depending on the texture type and target OpenGL
         | version/extensions.
        
       | baybal2 wrote:
        | The moment Google added JS throttling to kill miners, miners
        | switched to WebGL and WASM.
       | 
       | A WebGPU miner will be coming out the same second it is adopted.
        
       | throwaway34241 wrote:
       | For anyone unfamiliar with WebGPU:
       | 
       | It's a successor to WebGL, along the lines of newer graphics APIs
       | like Metal/Vulkan/DirectX 12. Google and Mozilla are also
       | developing a C-compatible library that should be available as an
       | OpenGL alternative for native code.
       | 
       | So if successful it might end up as the most convenient cross-
       | platform graphics library (across desktop/mobile/the web).
       | Currently the alternatives in this space are:
       | 
       | - OpenGL (which Apple has deprecated and is stuck on an older
       | version on their platforms)
       | 
        | - Vulkan, which is the lowest-level of the APIs and the most
        | difficult to get started with, and also doesn't work on the web,
        | but is
       | supported on Apple systems by using the MoltenVK library to
       | translate it to Metal
       | 
       | - some non-standardized libraries written by third party
       | developers like bgfx and Oryol.
       | 
       | For the shading language they settled on a text based language
       | designed to be easily translatable to/from Vulkan SPIR-V
       | bytecode, with the goal of being able to re-use existing shader
       | compilation toolchains (and most of the work put in to SPIR-V),
       | while still having the ability to write/read shaders without a
       | shader compiler. Also SPIR-V files are fairly large and
       | apparently don't compress well with standard algorithms (see
       | SMOL-V) so the text based format should probably be more
       | efficient to transmit on the web.
        
         | kvark wrote:
         | > Google and Mozilla are also developing a C-compatible library
         | that should be available as an OpenGL alternative for native
         | code.
         | 
         | To clarify, Google and Mozilla are both developing separate
         | libraries: Dawn (in C++) and wgpu (in Rust) respectively. Both
         | are designed to be usable behind the same native C API -
         | https://github.com/webgpu-native/webgpu-headers
         | 
         | > For the shading language they settled on a text based
         | language designed to be easily translatable to/from Vulkan
         | SPIR-V bytecode
         | 
         | Going from SPIR-V has been anything but easy for us (in
         | Naga/wgpu land), so far. Raph posted this link in the comment
         | above - http://kvark.github.io/spirv/2021/05/01/spirv-
         | horrors.html
        
         | trinovantes wrote:
         | > It's a successor to WebGL
         | 
         | Does this mean WebGL development/support has stopped?
        
           | gmueckl wrote:
           | Don't hold your breath in anticipation of an updated WebGL
           | standard. WebGPU seems to have a lot more support these days
           | and is a bit closer to DX12, Vulkan and Metal in its
           | concepts, so easier to implement in theory.
           | 
           | At this point, I'm not even waiting for a new OpenGL version
           | anymore. The good stuff has been mostly Vulkan exclusive for
           | a while now.
        
           | flohofwoe wrote:
           | I guess most WebGL development currently happens in Safari,
           | because Safari is switching from their own WebGL
            | implementation to ANGLE. Other than that, AFAIK there is no
           | "WebGL 2.1" or "WebGL 3" planned.
        
           | dindresto wrote:
           | Yes, compute shaders for WebGL have been cancelled in favor
           | of WebGPU.
        
         | mebr wrote:
          | A question: I was wondering if you happen to know whether there
          | is a long-term commitment from these orgs/companies to support
         | WebGPU, say for at least 10 years?
        
           | throwaway34241 wrote:
           | If it's approved as a web standard, I would expect them to
           | support it basically forever, since otherwise websites would
           | stop working. I don't think there's any plan to deprecate the
           | previous standard (WebGL) that came out around 10 years ago.
        
             | rewq4321 wrote:
             | > I don't think there's any plan to deprecate the previous
             | standard (WebGL) that came out around 10 years ago
             | 
             | There's basically zero uncertainty around this. It would
             | break the internet. As you probably know it's extremely
             | difficult to make even _tiny_ backwards-incompatible
             | changes to features that have become web standards and have
             | been implemented on all major browsers.
        
       | rektide wrote:
       | Maybe worth noting that Deno 1.8 has WebGPU support[1]. Deno's
       | quest to be the web-platform compatible alternative to Node is
       | expansive & impressive!! I have high hopes for it!
       | 
       | [1] https://deno.com/blog/v1.8
       | https://news.ycombinator.com/item?id=26323600
        
       | jmrm wrote:
        | I imagine that this, in addition to WebAssembly, could lead in
        | the future to playing more complex games from the browser, with
        | the benefit of being platform agnostic (in terms of device,
        | hardware architecture, and OS) as long as the browser, the OS,
        | and the hardware are compatible. Am I wrong?
        
         | astlouis44 wrote:
         | Yup, WASM/WebGPU will enable a new era of gaming in the
         | browser. Try this Baldur's Gate 2 demo ported to WASM:
         | 
         | https://personal-1094.web.app/gemrb.html
        
         | flohofwoe wrote:
         | The main problem isn't CPU and GPU performance, those problems
         | had (to some extent) already been solved with asm.js and WebGL,
         | at least for some types of games. It's all the other APIs and
         | the general "feature churn" in browsers which are problematic
         | for games.
         | 
         | Some examples:
         | 
         | - The fullscreen and pointerlock APIs show popup warnings which
         | behave entirely differently between browsers.
         | 
         | - Timer precision has been reduced to around 1ms post-
         | Spectre/Meltdown, and jittered on top. This makes it very hard
         | to avoid microstuttering (we don't even need a high-precision
         | timer, just a way to query the display refresh rate... but
         | guess what, there is no way to query the display refresh rate)
         | 
         | - WebAudio is ... I don't even know what... all we need is
         | simple buffer streaming but we got this monstrosity of a node-
         | based audio API. And the only two ways to do this in WebAudio
         | are either deprecated (ScriptProcessorNode) or not usable
         | without proper threading (audio worklets), and guess what,
         | threading is also disabled or behind HTTP response headers
         | post-Spectre.
         | 
         | - Games need UDP style non-guaranteed networking, but we only
         | get this as a tiny part of WebRTC (DataChannels).
         | 
         | ...and the list goes on. In theory there are web APIs useful
         | for gaming, but in practice those APIs have been designed for
         | entirely different and very specific high-level use cases (such
         | as creating an audio synthesizer in a webpage, or creating a
         | video chat solution for browsers), and those rigid high-level
         | APIs are not flexible enough to be reassigned to different use
         | cases (like games). The web needs a "game mode", or better a
         | "DirectX initiative", a set of low level APIs and features
         | similar to WASM and WebGL/WebGPU, and if not designed
          | specifically for games, then at least low-level and generic
         | enough to be useful for games.
         | 
         | This isn't a new idea, see the Extensible Web Manifesto:
         | 
         | https://extensiblewebmanifesto.org/
         | 
         | (backup: https://github.com/extensibleweb/manifesto)
         | 
         | But the ideas presented there didn't seem to have much of an
          | impact on the web people (with the notable exception of
         | WebGPU).
        
           | pjmlp wrote:
            | Basically, at the end of the day, what happened was that the
            | Web people killed Flash and slowed online gaming for 10
            | years, while nowadays everyone that cares about games has
            | moved to native mobile gaming.
            | 
            | It is quite telling that the cloud gaming efforts would
            | rather render everything server side and stream video to the
            | browser than fix the gaming APIs in the browser.
        
           | [deleted]
        
           | rewq4321 wrote:
           | > Timer precision has been reduced to around 1ms
           | 
           | > threading is also disabled or behind HTTP response headers
           | post-Spectre
           | 
           | It seems like you've written a long-winded complaint that you
            | have to add an HTTP header (like this[0]) to your server's
           | response? Though it's true that the Spectre stuff broke
           | everything for a little while there (and there are still
           | ergonomics-related teething problems with headers that are
           | being worked on).
           | 
           | > Games need UDP style non-guaranteed networking, but we only
           | get this as a tiny part of WebRTC (DataChannels).
           | 
           | WebRTC DataChannels work fine though? Does it matter that
           | it's a small part of the whole WebRTC spec or that server-
           | client use is a bit of a hack? Either way, WebTransport hits
           | origin trial in Chrome in about a week, and it's specifically
           | designed for UDP-like client-server communication, so the
           | WebRTC approach can be swapped out once that is stable.
           | 
           | [0]: https://web.dev/coop-coep/
        
             | kmeisthax wrote:
             | Ruffle developer here. A good chunk of our incoming issue
             | volume used to be "how do we configure the WASM mime type"
             | disguised in various "omg it doesn't work, what does this
             | error mean" type issues. This is because we were using
             | WASM's instantiateStreaming function to load our module,
             | which requires valid HTTP headers on the WASM file which
             | most web servers weren't configured to produce. Adding a
             | fallback to a slower WASM load approach that doesn't
             | require a header made all those complaints go away.
             | 
             | In our experience the average webmaster does not know how
             | to configure proper HTTP headers and should not be expected
             | to for a basic web library. So any web API that will not
             | work with the standard web server configurations of Apache,
             | nginx, or IIS is a non-starter for us. (Not to mention that
             | the site isolation headers have non-trivial implications
             | for sites that use iframes...)
        
               | rewq4321 wrote:
               | That's a fair point. I hope you're involved in the spec
               | discussions around these topics! There's definitely room
               | for improvement in terms of ease of use.
               | 
               | I noticed this update on the https://web.dev/coop-coep/
               | article recently:
               | 
               | > We've been exploring ways to deploy Cross-Origin-
               | Resource-Policy at scale, as cross-origin isolation
               | requires all subresources to explicitly opt-in. And we
               | have come up with the idea of going in the opposite
               | direction: a new COEP "credentialless" mode that allows
               | loading resources without the CORP header by stripping
               | all their credentials. We are figuring out the details of
               | how it should work, but we hope this will lighten your
               | burden of making sure the subresources are sending the
               | Cross-Origin-Resource-Policy header.
        
             | flohofwoe wrote:
             | > It seems like you've written a long-winded complaint that
              | you have to add an HTTP header (like this[0]) to your
             | server's response?
             | 
             | So how does this work on Github Pages or other hosting
             | solutions where the user has no control over the web server
             | configuration?
             | 
             | > WebTransport hits origin trial in Chrome in about a week
             | 
             | How long until this shows up in Firefox, and will Safari
             | ever support this before it's deprecated in Chrome again
             | because another better solution shows up?
        
               | rewq4321 wrote:
               | > how does this work on Github Pages
               | 
               | Spectre called for some drastic measures, and it'll be up
               | to Github/Netlify to decide how they react. Developers
               | who want simple hosting solutions for their little
               | projects will host elsewhere (e.g. replit.com,
               | glitch.com) if they need to. This isn't exactly a massive
               | obstacle if you've set out to build a complex game in the
               | browser. It's just a couple of headers...
               | 
               | >How long until this shows up in Firefox
               | 
               | WebRTC data channels work fine in the mean time. Have you
               | seen games like krunker.io and dotbigbang.com et al? It's
               | perfectly possible to create real-time games in the
               | browser using WebRTC.
        
               | lights0123 wrote:
               | > Netlify
               | 
               | Netlify already allows full control over headers even on
               | their subdomain, including COOP/COEP.
        
               | rewq4321 wrote:
               | Oh, good to know!
        
               | flohofwoe wrote:
               | > This isn't exactly a massive obstacle if you've set out
               | to build a complex game in the browser.
               | 
               | It is an obstacle (at least a massive annoyance) for
               | library authors (like this:
               | https://github.com/floooh/sokol). Those libraries can be
               | used for extremely simple and small WASM snippets
               | embedded in blog posts (like here:
               | https://floooh.github.io/2019/01/05/wasm-embedding.html),
               | or in "proper" games hosted through "proper" hosting
                | services which allow setting the response headers.
               | 
               | Right now the choice is to either support WASM threading,
               | but tell library users that the library will most likely
               | not work on the hosting solution of their choice, or not
               | support WASM threading and work everywhere. At least to
               | me it's clear that "works everywhere" is better than
               | "it's complicated", so I'll ignore WASM threading until
               | the problem is solved somehow (either most hosting
               | services turn on those response headers, or there's
               | another way to enable threading without requiring control
               | over the web server configuration).
        
               | rewq4321 wrote:
               | > It is an obstacle (at least a massive annoyance) for
               | library authors
               | 
               | Yeah that's a completely fair point. Ruffle dev makes a
               | similar argument up-thread:
               | https://news.ycombinator.com/item?id=27199260
        
               | meheleventyone wrote:
               | And we're using WebSockets right now for the networking
               | in dot big bang! Although we've talked a lot about moving
               | to WebRTC we've not gone as far as doing it yet.
               | 
               | More generally things get a lot simpler if you're "web
               | first". It's a lot easier to just be pragmatic about it
                | and build when you're not straitjacketed by existing
               | designs reliant on common low level OS APIs.
        
           | smaddox wrote:
           | > very hard to avoid microstuttering
           | 
            | It's not just _very hard_ to avoid microstuttering. It's
           | quite literally impossible. And it's not just the timer
           | precision. Chrome's `requestAnimationFrame` actually provides
           | a full precision frame start time, but even if all your page
           | does is render a single solid-color triangle (that alternates
           | between cyan and magenta every frame), you still cannot avoid
           | microstuttering, because the compositor will periodically
           | drop frames. Ironically, the browser engines on mobile phones
           | seem to do better, presumably because of limited
           | multitasking.
           | 
           | Source: I've spent more hours than I care to admit trying to
           | avoid dropped frames in https://sneakysnake.io . To see how
           | bad it remains, check out the triangle in the bottom left
           | corner of https://sneakysnake.io?dev=true . If it looks
           | anything other than grey, then the browser is dropping
           | frames.
        
       | Shadonototro wrote:
       | ---
       | 
        | [[stage(fragment)]]
        | fn main() -> [[location(0)]] vec4<f32> {
        |     return vec4<f32>(0.4, 0.4, 0.8, 1.0);
        | }
       | 
       | ---
       | 
        | That is very ugly and full of noise. C'mon people, it's one
        | thing to like Rust, but let's not import its ugly, noisy,
        | cluttered syntax... it took me a few seconds to understand
        | what's going on there.
       | 
       | ---
       | 
       | var<private> x: f32;
       | 
       | ---
       | 
       | let's not confuse things even more, we have vec4<f32> already,
       | C'MON PEOPLE
        
         | kvark wrote:
         | This is very similar to what you'd write in HLSL, except maybe
          | for the `[[stage(fragment)]]` part (since this information is
         | provided externally in HLSL) and `vec4<f32>` instead of
         | `float4`. Are these the only things you are concerned about? If
         | so, then I'd be happy with this, given that it's an artificial
         | example.
        
       | Jakobeha wrote:
       | Idk if WebGPU will ever be fully supported by enough browsers to
       | be viable for the web. Even if it is eventually supported people
       | are saying it will take 10+ years.
       | 
       | I do think WebGPU can be a great cross-platform compute shader
       | framework _outside_ of the web. My experience with compute
        | shaders is that they're really, really platform specific: you
       | need not just the right OS, but also the right graphics chip. It
       | would be nice to write cross-platform high-end 3D apps without
       | relying on Unity or Unreal.
       | 
       | Also, hopefully WebGPU has better debugging tools (if they're
       | even possible). I tried GPU programming before but it's a mess:
       | almost every example I tried to run crashed terribly (e.g. system
       | reboot), and even at best, I couldn't get much debugging output
       | other than "code = problem".
        
         | pjmlp wrote:
         | Don't have high hopes for debugging tools.
         | 
          | During last week's Khronos meetup the only suggestion was
          | native tooling like PIX.
        
         | rewq4321 wrote:
         | > people are saying it will take 10+ years
         | 
         | Who is saying that? Genuinely curious. It has strong buy-in
         | from Apple, Google and Mozilla, and all 3 are implementing the
         | spec right now (and updating implementations as the spec
         | changes). Seems very doubtful that it'd take anywhere near 10
         | years to be supported by major browsers. I guess it depends on
         | what subset of graphics/compute features you need for it to be
         | viable for your specific use case.
        
           | Jakobeha wrote:
            | Ok, maybe not 10 years. But look at WebGL2. From Wikipedia:
           | 
           | > Development of the WebGL 2 specification started in 2013
           | with final in January 2017
           | 
              | Moreover, WebGL2 support was only added to Edge in January
           | 2020, and as of now it's still not supported in Safari
           | (excluding Safari TP) (from https://caniuse.com/webgl2).
        
             | rewq4321 wrote:
              | I don't think you'd want to use WebGL2 as an analogue here.
             | Radically different depth and breadth of buy-in across the
             | browser ecosystem. I'm no expert, but I'm genuinely curious
             | where you got the 10+ year figure from (is there an up-to-
             | date estimate/roadmap from any of the people working on
             | this?). I was under the impression that we'd see an MVP
             | stabilise relatively soon (i.e. within a couple of years at
             | most). Of course, there will still be features being added
             | in 10 years time, since graphics APIs and GPUs are always
             | evolving.
        
               | fulafel wrote:
               | WebGL spec came about reasonably fast but took a very
               | long time to become robust and well supported across all
                | OS/hw/browser combos. Go read HN comment threads on WebGL
                | submissions and count the people complaining about OS or
                | browser crashes.
               | 
               | GPU drivers are really buggy, especially if you try to
               | cope with what is installed vs prompting users to upgrade
                | their OS or drivers.
        
       | pjmlp wrote:
        | I find it interesting that WGSL is clearly influenced by Rust.
        | 
        | GLSL and WGSL are completely unrelated; apparently it's time to
        | rewrite shaders.
        | 
        | And as far as debugging is concerned, better get good at telling
        | apart the actual application code from the browser's own
        | rendering engine, as the best solution remains using native
        | GPGPU debuggers.
        
         | corysama wrote:
         | From what little I know, they were mostly influenced by who
         | showed up. A lot of Rusties participated.
        
           | kvark wrote:
           | This isn't actually the reason. The first draft of WGSL,
            | proposed by the Google team, already had Rust-like syntax.
            | As far as I'm aware, nobody on that team had done any
            | significant work in Rust.
        
       ___________________________________________________________________
       (page generated 2021-05-18 23:01 UTC)