[HN Gopher] WebGPU is now available on Android
___________________________________________________________________
WebGPU is now available on Android
Author : astlouis44
Score : 158 points
Date : 2024-01-18 18:29 UTC (4 hours ago)
(HTM) web link (developer.chrome.com)
(TXT) w3m dump (developer.chrome.com)
| jsheard wrote:
| _> To help you anticipate memory limitations when allocating
| large amounts during the development of your app,
| requestAdapterInfo() now exposes memoryHeaps information such as
| the size and type of memory heaps available on the adapter._
|
| Oh nice, I was just complaining about that here the other day.
| The docs mention that browsers will probably guard that
| information behind a permission prompt to prevent it from being
| used for fingerprinting, but it's better than nothing.
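|
| A rough sketch of how reading that might look, going off the
| article's description (the exact shape of the memoryHeaps
| entries is my assumption, not something I've checked against
| the spec):
|
|   const adapter = await navigator.gpu?.requestAdapter();
|   if (!adapter) throw new Error("WebGPU not available");
|
|   // requestAdapterInfo() is the call the article refers to
|   const info = await adapter.requestAdapterInfo();
|
|   // memoryHeaps: assumed to be an array describing each
|   // heap's size and type, per the article's wording
|   for (const heap of (info as any).memoryHeaps ?? []) {
|     console.log("heap:", heap.size, heap);
|   }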
| mschuetz wrote:
| I'm really looking forward to 2034, when WebGPU features will
| catch up to 2024.
| vetinari wrote:
| By that time, it might even get supported by Chrome for Linux.
| bhouston wrote:
| Why is Linux support taking a while? I figured the
| underlying graphics subsystem on Android is Vulkan, right?
| Wouldn't that also be the main graphics subsystem on Linux
| these days?
| zamadatix wrote:
| Testing/Validation and bugfixing. Just having Vulkan isn't
| enough to enable it by default, everything actually has to
| work right. Even for Android this is only for specific
| types of devices. You should be able to force enable it on
| Linux right now though. It's just not GA quality
| guaranteed.
| p_l wrote:
| I tried and tried and tried; Chrome 120 always applies an
| undocumented Origin Trial that disables WebGPU.
| anthk wrote:
| about:flags in Chromium
|
| search for "accel"
|
| Disable the blacklist for your GPU.
| p_l wrote:
| That doesn't stop the disablement done through
| --origin-trial-disable-feature=WebGPU, and I have yet to
| figure out how to drop that without recompiling Chrome.
| 2OEH8eoCRo0 wrote:
| In 2034 it'll be as dead as Flash because of security issues.
| mschuetz wrote:
| Not really, that is not the problem of WebGPU. The worst you
| can do is crash the tab. With an unstable graphics driver,
| there might even be the option to crash the system but that's
| hardly a security issue, only an annoyance.
| kevingadd wrote:
| Historically any time an attack surface as big as WebGPU
| has been exposed, "the worst you can do is crash the tab"
| has not ever been true.
|
| Also note that for an unstable graphics driver, the way you
| usually crash the system is by touching memory you
| shouldn't (through the rendering API), which is definitely
| something that could be exploited by an attacker. It could
| also corrupt pages that later get flushed to disk and
| destroy data instead of just annoy you.
|
| Though I am skeptical as to whether it would happen,
| security researchers have come up with some truly
| incredible browser exploit chains in the past, so I'm not
| writing it off.
| mschuetz wrote:
| WebGL has been around for more than a decade and didn't
| turn out to be a security issue, other than occasionally
| crashing tabs. Neither will WebGPU be.
| csande17 wrote:
| By exposing vulnerable graphics drivers to arbitrary web
| code, WebGL has allowed websites to take screenshots of
| your desktop (https://www.mozilla.org/en-
| US/security/advisories/mfsa2013-8...) and break out of
| virtual machines
| (https://blog.talosintelligence.com/nvidia-graphics-
| driver-vu...), to use two examples I found via a web
| search.
| bhouston wrote:
| Nice!
|
| We just need Linux and iOS. And then we'll have somewhere around
| 80% support for WebGPU across all devices.
|
| I'm getting my numbers from https://web3dsurvey.com/webgpu
|
| Android: 0.34%
|
| Chromium OS: 78.15%
|
| iOS: 0.09%
|
| Linux: 0.75%
|
| Mac OS: 54.43%
|
| Windows: 77.96%
| wongarsu wrote:
| According to [1] Android 12+ covers nearly 60% of all Android
| devices. That's more than I would have expected.
|
| 1: https://gs.statcounter.com/os-version-market-
| share/android/m... (Android 14 is still counted as "other")
| npunt wrote:
| if safari tech preview is anything to go by, it may come to iOS
| sooner or later
|
| https://webkit.org/blog/14879/webgpu-now-available-for-testi...
| westurner wrote:
| How do you run the task manager with Android Chrome?
|
| Does Android Chrome have the same per-tab hover-card RAM-use
| feature as desktop Chrome?
|
| From https://news.ycombinator.com/item?id=37840416 :
|
| >> From "Manifest V3, webRequest, and ad blockers" (2022)
| https://news.ycombinator.com/item?id=32953286 :
|
| >> What are some ideas for UI Visual Affordances to solve for bad
| UX due to slow browser tabs and extensions?
|
| >> - [ ] UBY: Browsers: Strobe the tab or extension button when
| it's beyond (configurable) resource usage thresholds
|
| >> - [ ] UBY: Browsers: Vary the {color, size, fill} of the tabs
| according to their relative resource utilization
|
| >> - [ ] ENH,SEC: Browsers: specify per-tab/per-domain resource
| quotas: CPU
| Xeamek wrote:
| >devices running Android 12 and greater powered by Qualcomm and
| ARM GPUs.
|
| So... it won't work on any Exynos, since they have the AMD
| RDNA3 arch? Do I get that right?
| jsheard wrote:
| It should work on slightly older or lower end Exynos chips,
| which have ARM Mali GPUs. Their switch to AMD RDNA was a fairly
| recent thing, and so far it has only been integrated into their
| flagship-tier parts.
| astlouis44 wrote:
| My team has been building out Unreal Engine 5 support for
| WebGPU, for anyone interested.
| fidotron wrote:
| What will be interesting about WebGPU getting wider Android
| deployment is whether it reduces the effect of variation in
| the drivers, which very much remains a headache. For example,
| WebGL-type API implementations have had a somewhat flexible
| idea of data sizes and layout, which the nature of WebGPU
| makes much less acceptable. One of the big wins of Vulkan has
| been that it has levelled the playing field somewhat, so poor
| drivers have less of an impact.
|
| I think a lot of people will be disappointed by what
| proportion of devices currently in the wild actually make
| this jump, because the extent to which shortcuts have been
| taken is underappreciated. I look forward to the day I never
| have to think about the Mali GLSL compiler ever again.
| rezonant wrote:
| > Timestamp queries allow WebGPU applications to measure
| precisely (down to the nanosecond) how much time their GPU
| commands take to execute compute and render passes
|
| > ...
|
| > Due to timing attack concerns, timestamp queries are quantized
| with a resolution of 100 microseconds, which provides a good
| compromise between precision and security.
|
| I don't have a particular need for nanosecond-granularity
| timestamps in WebGPU (there are other parts of the web stack
| where I could really use better time measurement), but I
| understand the security concern, and it's far better to be
| safe than sorry.
|
| But they quote two wildly different granularities in the same
| article, within a paragraph of each other...
| jsheard wrote:
| The former is a spec detail (the result is returned in ns) and
| the latter is an implementation detail (browsers currently
| quantize the result to 100us). That is a useful distinction
| since you can use WebGPU outside of the browser by embedding
| Dawn or wgpu into your own application, and there you should
| get the maximum resolution the spec allows for. Environments
| like Electron might also opt-out of that timing attack
| mitigation since they're intended to run trusted code.
|
| I agree the article could have made that clearer though.
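|
| For reference, a minimal sketch of reading those (quantized)
| timestamps back, assuming the adapter exposes the
| 'timestamp-query' feature (buffer sizes and the empty pass
| are just illustrative):
|
|   const adapter = (await navigator.gpu.requestAdapter())!;
|   const device = await adapter.requestDevice({
|     requiredFeatures: ["timestamp-query"],
|   });
|   const querySet = device.createQuerySet({
|     type: "timestamp", count: 2 });
|   const resolveBuf = device.createBuffer({
|     size: 16,  // two 64-bit timestamps
|     usage: GPUBufferUsage.QUERY_RESOLVE | GPUBufferUsage.COPY_SRC,
|   });
|   const readBuf = device.createBuffer({
|     size: 16,
|     usage: GPUBufferUsage.COPY_DST | GPUBufferUsage.MAP_READ,
|   });
|
|   const enc = device.createCommandEncoder();
|   const pass = enc.beginComputePass({ timestampWrites: {
|     querySet, beginningOfPassWriteIndex: 0,
|     endOfPassWriteIndex: 1 } });
|   // ... dispatch work here ...
|   pass.end();
|   enc.resolveQuerySet(querySet, 0, 2, resolveBuf, 0);
|   enc.copyBufferToBuffer(resolveBuf, 0, readBuf, 0, 16);
|   device.queue.submit([enc.finish()]);
|
|   await readBuf.mapAsync(GPUMapMode.READ);
|   const [begin, end] = new BigUint64Array(readBuf.getMappedRange());
|   console.log(`pass took ${end - begin} ns (quantized)`);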
| rezonant wrote:
| Yes, indeed, they mention how to opt out of quantization
| directly in Chrome if you'd like (at your own risk), using
| the DevTools.
| eschaton wrote:
| What's the actual utility of this for anyone that isn't trying to
| replace native code with web pages? Is this ever going to be
| worth the no doubt massive investment it required?
| andybak wrote:
| I'm not sure I understand you.
|
| Can you expand your comment somewhat?
| hutzlibu wrote:
| "trying to replace native code with web pages? "
|
| No one wants that. But many like to write their apps only
| for one platform, and then still have them run almost
| everywhere.
|
| The web is the best we have to achieve this. And this will
| greatly improve the possibilities.
|
| Edit: My app will soon use no HTML elements at all. It is not
| a "webpage".
| mschuetz wrote:
| > No one wants that.
|
| I very much do want that, since the WebGPU API is far easier
| and nicer to use than Vulkan or OpenGL. It also makes apps
| much easier to distribute over the web, and web apps are much
| more secure to use than native apps. Unfortunately, WebGPU is
| way too limited compared to desktop APIs.
| fidotron wrote:
| It should enable much more performant (and battery friendly) 3D
| content on the web. WebGL has a level of synchronization in the
| main render loop of the browser that is just not the right way
| to do it, and WebGPU fixes that.
|
| Additionally it is more suited to GPU based compute, which can
| be used to accelerate neural network inferencing, though not
| quite as well as dedicated NN accelerators which are fairly
| common these days.
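|
| To illustrate the compute side, here's a minimal sketch of a
| WebGPU compute dispatch (a WGSL shader that doubles a buffer
| of floats; all names are illustrative, and it assumes WebGPU
| is available):
|
|   const adapter = (await navigator.gpu.requestAdapter())!;
|   const device = await adapter.requestDevice();
|   const data = new Float32Array(256).fill(1);
|   const buf = device.createBuffer({
|     size: data.byteLength,
|     usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_SRC,
|     mappedAtCreation: true,
|   });
|   new Float32Array(buf.getMappedRange()).set(data);
|   buf.unmap();
|
|   const module = device.createShaderModule({ code: `
|     @group(0) @binding(0)
|     var<storage, read_write> d: array<f32>;
|     @compute @workgroup_size(64)
|     fn main(@builtin(global_invocation_id) id: vec3<u32>) {
|       d[id.x] = d[id.x] * 2.0;
|     }` });
|   const pipeline = device.createComputePipeline({
|     layout: "auto",
|     compute: { module, entryPoint: "main" } });
|   const bind = device.createBindGroup({
|     layout: pipeline.getBindGroupLayout(0),
|     entries: [{ binding: 0, resource: { buffer: buf } }] });
|
|   const enc = device.createCommandEncoder();
|   const pass = enc.beginComputePass();
|   pass.setPipeline(pipeline);
|   pass.setBindGroup(0, bind);
|   pass.dispatchWorkgroups(data.length / 64);
|   pass.end();
|   device.queue.submit([enc.finish()]);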
|
| I would tend to agree that the business case for these things
| is not as strong as many would like though, and things have a
| distinct habit of ceasing to be interesting the moment they are
| widely achievable.
| crubier wrote:
| It's likely to become the best way to run cross-platform GPU
| code in the medium term.
| JayStavis wrote:
| I feel like WebGPU actually holds some amount of promise as a
| cross-platform convenience. I'd agree that there's not a great
| reason to update your native code for this right now though.
|
| If you're writing new gfx code though and are more familiar
| with web technology, there's definitely utility there. That's
| the bigger value prop: that people with web development skills
| can work on more pro (GPU-required) applications.
| FL33TW00D wrote:
| Well done WebGPU team! Looking forward to the announcement one
| day that this has landed:
| https://github.com/gpuweb/gpuweb/issues/4195
| modeless wrote:
| Cool, there's consensus between APIs on how to expose "tensor
| cores" now? Very exciting! Although I think that relaxing
| memory limitations and providing more visibility and control
| there is even more important for running ML on the web right
| now. And harder to make progress on because there isn't a
| single team that clearly owns "all memory management".
___________________________________________________________________
(page generated 2024-01-18 23:00 UTC)