[HN Gopher] Show HN: HipScript - Run CUDA in the Browser with We...
___________________________________________________________________
Show HN: HipScript - Run CUDA in the Browser with WebAssembly and
WebGPU
CUDA is NVIDIA's language for GPU programming, allowing you to mix
CPU and GPU code in C++ in one file. By chaining a few projects that
compile CUDA to OpenCL, then Vulkan, then WebGPU, you can experiment
with this GPGPU language on any hardware.
Author : lights0123
Score : 111 points
Date : 2025-01-07 15:44 UTC (7 hours ago)
(HTM) web link (hipscript.lights0123.com)
(TXT) w3m dump (hipscript.lights0123.com)
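For illustration only (not taken from the post or the site), here is
a minimal single-file CUDA program of the kind such a pipeline
compiles: the __global__ kernel and the host code that launches it
live together in one .cu file. Managed memory is used just to keep
the sketch short; whether HipScript supports it is not stated above.
```
#include <cstdio>

// GPU kernel: each thread adds one pair of elements.
__global__ void add(const float* a, const float* b, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = a[i] + b[i];
}

int main() {
    const int n = 1024;
    float *a, *b, *out;
    cudaMallocManaged(&a, n * sizeof(float));    // unified memory keeps the demo short
    cudaMallocManaged(&b, n * sizeof(float));
    cudaMallocManaged(&out, n * sizeof(float));
    for (int i = 0; i < n; i++) { a[i] = i; b[i] = 2 * i; }

    add<<<(n + 255) / 256, 256>>>(a, b, out, n); // launch 1024 threads on the GPU
    cudaDeviceSynchronize();                     // wait for the kernel to finish

    printf("out[10] = %f\n", out[10]);           // expect 30.0
    cudaFree(a);
    cudaFree(b);
    cudaFree(out);
    return 0;
}
```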
| CharlesW wrote:
| From the "Learn More" link
| (https://lights0123.com/blog/2025/01/07/hip-script/):
|
| _" By chaining chipStar1 (a HIP and NVIDIA(r) CUDA(r) to OpenCL
| compiler), Clspv2 (an OpenCL to Vulkan compiler), and Tint3
| (among others, a Vulkan shader to WebGPU shader compiler), you
| can run CUDA code in the browser!"_
|
| 1 https://github.com/CHIP-SPV/chipStar/
| 2 https://github.com/google/clspv/
| 3 https://dawn.googlesource.com/dawn/+/refs/heads/main/src/tin...
| bagels wrote:
| What an incredible demo/hack. This is the simplest way to
| actually execute CUDA code that I've seen.
| btown wrote:
| Not to mention that it Just Works on Apple devices! Really,
| really cool.
| _nalply wrote:
| Firefox supports WebGPU, but it needs a setting in about:config. I
| enabled the setting, but HipScript still refuses to run on Firefox,
| showing the message: "Please try a Chromium-based browser like
| Google Chrome or Microsoft Edge."
|
| Please do feature detection, not browser detection.
| lights0123 wrote:
| I _do_ do feature detection--WebGPU is blocked on Release
| Firefox regardless of config; you'll need Nightly. It does
| support Safari with its experimental mode enabled, for example.
| doctoboggan wrote:
| I enabled WebGPU in safari on my m1 Mac and got this error
| when running the GoL demo:
|
| ```
| TypeError: B.values().some is not a function. (In
| 'B.values().some(r=>r.args.length)', 'B.values().some' is
| undefined)
| ```
|
| EDIT: I got the same error with all three sample scripts
| JackYoustra wrote:
| I don't know why this never occurred to me. What a great website,
| glad you made it!
| Ameo wrote:
| The GoL example it loaded with seemed to be running way slower
| than I expected it to. It turns out that there's actually a
| `usleep(1000 * 100)` call in the code which was inserted to make
| it easier to see the output; the actual kernels execute quickly
| and take up very little GPU time.
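|
| Roughly, the pattern is something like this (a hypothetical
| sketch, not the demo's actual source--kernel and variable names
| are made up): the kernel itself finishes almost instantly, and
| the `usleep` accounts for nearly all of the wall-clock time.
|
| ```
| #include <cstring>
| #include <unistd.h>   // usleep
| #include <utility>    // std::swap
|
| // One Game of Life generation; each thread updates one cell
| // of a toroidal (wraparound) grid.
| __global__ void gol_step(const int* in, int* out, int w, int h) {
|     int x = blockIdx.x * blockDim.x + threadIdx.x;
|     int y = blockIdx.y * blockDim.y + threadIdx.y;
|     if (x >= w || y >= h) return;
|     int n = 0;
|     for (int dy = -1; dy <= 1; dy++)
|         for (int dx = -1; dx <= 1; dx++)
|             if (dx || dy)
|                 n += in[((y + dy + h) % h) * w + ((x + dx + w) % w)];
|     int alive = in[y * w + x];
|     out[y * w + x] = (n == 3) || (alive && n == 2);
| }
|
| int main() {
|     const int W = 64, H = 32;
|     int *cells, *next;
|     cudaMallocManaged(&cells, W * H * sizeof(int));
|     cudaMallocManaged(&next, W * H * sizeof(int));
|     memset(cells, 0, W * H * sizeof(int));
|     memset(next, 0, W * H * sizeof(int));
|     // seed a glider
|     cells[1 * W + 2] = 1;
|     cells[2 * W + 3] = 1;
|     cells[3 * W + 1] = cells[3 * W + 2] = cells[3 * W + 3] = 1;
|
|     dim3 block(16, 16), grid((W + 15) / 16, (H + 15) / 16);
|     for (int gen = 0; gen < 100; gen++) {
|         gol_step<<<grid, block>>>(cells, next, W, H);
|         cudaDeviceSynchronize();   // the kernel is done almost immediately
|         std::swap(cells, next);
|         // (the real demo prints the board here)
|         usleep(1000 * 100);        // 100 ms pause -- this is what looks slow
|     }
|     cudaFree(cells);
|     cudaFree(next);
|     return 0;
| }
| ```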
|
| When I looked at the profiler, I was confused to see that one
| worker thread was at 100% usage the whole time it was running. At
| first, I thought that maybe it was actually running the code via
| Wasm on the CPU rather than on the GPU like it said.
|
| Instead, it turns out that the worker was just running
| `emscripten_futex_wait` - which as far as I can tell is
| implemented by busy waiting in a loop. Probably doesn't matter
| for performance since I imagine that's just for the sleep call
| anyway.
|
| ----
|
| Altogether this is an incredibly cool tool. I'm sure there is
| some performance gap compared to native, but even so this is
| extremely impressive and likely has a ton of potential use cases.
| btown wrote:
| Thank you so much for this! I was a bit concerned that the
| performance on my Mac was nearly identical to my new 3090 on PC
| and thought I might have messed up the setup there!
| ryanmerket wrote:
| Very cool. Thank you for creating this!
| punnerud wrote:
| How performant is the CUDA code in the browser compared to a
| standalone program?
|
| Could we have PyTorch / ML training with CUDA through the
| browser, and would it perform OK?
| mordechai9000 wrote:
| Distributed large model training with web clients?
| bloomingkales wrote:
| How is this different than web-llm?
| lights0123 wrote:
| web-llm provides optimized kernels for neural network
| operations, and a convenient API for it. This project provides
| a place to experiment with CUDA, for any purpose--not
| necessarily for anything related to machine learning.
| JonChesterfield wrote:
| This feels like more stages than should be necessary (something
| should be able to do LLVM IR direct to WebGPU) but it's great to
| see it running, very nice!
| Cieric wrote:
| I love the idea of this, but sadly I can't get it to work in
| Firefox, Chrome or Edge on my work PC, probably because I can't
| find a "--enable-features=Vulkan" equivalent in about:flags and
| the argument doesn't appear to work on Windows. I'm actually a
| bit more curious about a standalone application that skips the
| WebGPU part and goes straight to Vulkan, as I would love to be
| able to experiment with some CUDA-only applications.
| lights0123 wrote:
| I don't currently expose the option to run the compiler if a
| GPU isn't detected, but on systems that do have one, there's a
| download option for the SPIR-V kernel so you can run it with
| Vulkan yourself.
___________________________________________________________________
(page generated 2025-01-07 23:00 UTC)