[HN Gopher] WebMonkeys: parallel GPU programming in JavaScript (2016)
___________________________________________________________________
WebMonkeys: parallel GPU programming in JavaScript (2016)
Author : surprisetalk
Score : 112 points
Date : 2025-05-04 17:00 UTC (3 days ago)
(HTM) web link (github.com)
(TXT) w3m dump (github.com)
| butokai wrote:
| By coincidence I was just having a look at the work by the same
| author on languages based on Interaction Nets. Incredibly cool
| work, although the main repos seem to have been silent for the
| last couple of months? This work, however, is much older and
| doesn't seem to follow the same approach.
| mattdesl wrote:
| The author is working on a program synthesizer using
| interaction nets/calculus, which should be released soon. It
| sounds quite interesting:
|
| https://x.com/VictorTaelin/status/1907976343830106592
| FjordWarden wrote:
| WebMonkeys feels a bit like array programming: you create
| buffers and then have a simple language to perform operations
| on those buffers.
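|
| To make that concrete, here is roughly the README's own example
| (API names are from the project docs, quoted from memory, so
| treat the details as approximate; the project is unmaintained):
|
|       const monkeys = require("WebMonkeys")();
|       monkeys.set("nums", [1, 2, 3, 4]);   // upload a buffer
|       // 4 parallel tasks, each squaring one element
|       monkeys.work(4, "nums(i) := nums(i) * nums(i);");
|       monkeys.get("nums");                 // -> [1, 4, 9, 16]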
|
| HVM is one of the most interesting developments in programming
| languages that I know of. I just don't know if it will prove
| to be relevant for the problem space it is trying to address.
| It is a very difficult technology that is trying to solve
| another very complex problem (AI) by seemingly sight stepping
| the issues. It's like you have to know linear algebra and
| statistics to do ML, and they are saying: yes, and you have to
| know category theory too.
| foobarbecue wrote:
| FYI, just in case you didn't know, it's "side-stepping," not
| "sight-stepping."
|
| Thanks for introducing me to the concept of higher-order
| virtual machines.
| Anduia wrote:
| The title should say 2016
| sylware wrote:
| Maybe the guys here know:
|
| Is there a little 3D/GFX/game engine (written in plain and
| simple C) strapped to a JavaScript interpreter (like QuickJS),
| without being buried inside Apple's or Google's gigantic and
| ultra-complex web engines?
|
| Basically, a set of JavaScript APIs with a runtime for
| Wayland/Vulkan 3D, FreeType2, and input devices.
| chirsz wrote:
| You could use Deno with WebGPU.
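|
| In recent Deno versions the WebGPU API is exposed on
| navigator.gpu behind an unstable flag (the exact flag name has
| varied across versions), so a headless setup starts roughly
| like:
|
|       // deno run --unstable-webgpu main.ts  (flag name may vary)
|       const adapter = await navigator.gpu.requestAdapter();
|       const device = await adapter.requestDevice();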
| FjordWarden wrote:
| You can access the GPU without a browser using Deno[1] (and
| probably Node too if you search for it).
|
| Not to be patronising, but if you are looking for something
| that makes 3D/GFX/game programming easier without all the
| paralysing complexity, you should recalibrate your expectations
| of how hard this is going to be.
|
| [1] https://windowing.deno.dev/
| jkcxn wrote:
| You can quite easily make bindings for raylib/sokol-gpu/bgfx
| from Bun, along the lines of the sketch below.
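|
| A rough sketch of what that could look like with bun:ffi (the
| raylib signatures are from its C API; the library path and the
| exact marshalling of strings are assumptions, so treat this as
| approximate):
|
|       import { dlopen, FFIType, suffix } from "bun:ffi";
|
|       const { symbols: rl } = dlopen(`libraylib.${suffix}`, {
|         InitWindow:        { args: [FFIType.i32, FFIType.i32,
|                                     FFIType.cstring],
|                              returns: FFIType.void },
|         WindowShouldClose: { args: [], returns: FFIType.bool },
|         BeginDrawing:      { args: [], returns: FFIType.void },
|         EndDrawing:        { args: [], returns: FFIType.void },
|         CloseWindow:       { args: [], returns: FFIType.void },
|       });
|
|       rl.InitWindow(800, 450, Buffer.from("raylib via Bun\0"));
|       while (!rl.WindowShouldClose()) {
|         rl.BeginDrawing();   // draw calls would go here
|         rl.EndDrawing();
|       }
|       rl.CloseWindow();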
| gr4vityWall wrote:
| You can use Node.js or Bun with bindings for stuff like raylib
| or SDL.
|
| Examples:
|
| https://github.com/RobLoach/node-raylib
| https://github.com/kmamal/node-sdl
| afavour wrote:
| I assume OP mentioned QuickJS specifically because they're
| looking for a tiny runtime. Node and Bun aren't that.
| rossant wrote:
| https://datoviz.org will have a WebGPU JS backend in a year or
| so.
| ronsor wrote:
| You could take raylib (https://www.raylib.com) and bolt
| QuickJS onto it.
| punkpeye wrote:
| So what are the practical use cases for this?
| qoez wrote:
| Awesome stuff. Btw: "For one, the only way to upload data is as
| 2D textures of pixels. Even worse, your shaders (programs) can't
| write directly to them". With WebGPU you have atomics, so you
| can actually write to them.
| kreetx wrote:
| Unfortunately this hasn't been maintained since 2017:
| https://github.com/VictorTaelin/WebMonkeys/issues/26
|
| Are there other projects doing something similar on current
| browsers?
| kaoD wrote:
| Still a draft, experimental, and not widely used[0], but
| WebGPU[1] will bring support for actual compute shaders[2] to
| the web.
|
| It's much more low-level than these "web monkeys", but I'd say
| that if you need real GPU performance, rather than toy examples
| like squaring a list of numbers, you have to go low level and
| understand how GPU threads and work batching work.
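|
| Even the squaring toy takes a fair bit of boilerplate through
| the raw API; a compressed sketch (standard WebGPU calls,
| readback via a MAP_READ staging buffer omitted):
|
|       const adapter = await navigator.gpu.requestAdapter();
|       const device = await adapter.requestDevice();
|       const shader = device.createShaderModule({ code: `
|         @group(0) @binding(0)
|         var<storage, read_write> nums : array<f32>;
|         @compute @workgroup_size(64)
|         fn main(@builtin(global_invocation_id) id : vec3<u32>) {
|           if (id.x < arrayLength(&nums)) {
|             nums[id.x] = nums[id.x] * nums[id.x];
|           }
|         }` });
|       const pipeline = device.createComputePipeline({
|         layout: "auto",
|         compute: { module: shader, entryPoint: "main" },
|       });
|       const data = new Float32Array([1, 2, 3, 4]);
|       const buf = device.createBuffer({
|         size: data.byteLength,
|         usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_SRC,
|       });
|       device.queue.writeBuffer(buf, 0, data);
|       const bind = device.createBindGroup({
|         layout: pipeline.getBindGroupLayout(0),
|         entries: [{ binding: 0, resource: { buffer: buf } }],
|       });
|       const enc = device.createCommandEncoder();
|       const pass = enc.beginComputePass();
|       pass.setPipeline(pipeline);
|       pass.setBindGroup(0, bind);
|       pass.dispatchWorkgroups(Math.ceil(data.length / 64));
|       pass.end();
|       device.queue.submit([enc.finish()]);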
|
| [0] https://developer.mozilla.org/en-US/docs/Web/API/WebGPU_API
|
| [1] https://en.m.wikipedia.org/wiki/WebGPU
|
| [2] https://webgpufundamentals.org/webgpu/lessons/webgpu-compute...
| kreetx wrote:
| With "going low level" do you mean leaving the browser all
| together and shipping a native application?
|
| Although I currently don't need anything like this for work
| then still, the use case I see for GPU use in browser is that
| it's often times the easiest way to run a program on the
| user's machine - anything else requires an explicit install.
| kaoD wrote:
| I meant to compare abstract-ish stuff (like these monkeys)
| vs actual low-level work within the GPU realm, i.e. thinking
| in GPU architecture terms: appropriately choosing a
| workgroup[0] size, optimizing your buffer layouts for
| specific access patterns, knowing when and how to read/write
| from/to VRAM, when (or whether) to split into multiple
| stages, etc.
|
| I see space for abstractions over this mess of
| complexity[1] but there's not a lot of room for
| simplification.
|
| It's _almost_ like thinking in bare-metal terms, but the GPU
| driver is your interface (and the browser's sandbox, of
| course).
|
| Although WGSL is not low-level itself (in the sense that
| you're not writing SPIR-V), that's for a good reason: it
| needs to be portable, and each vendor does its own thing, so
| the truly low-level details are often hardware dependent.
|
| Going native will still help with performance AFAIK (the
| aforementioned sandbox has a cost for example) but I agree
| with you. I love the web as a platform.
|
| [0] https://gpuweb.github.io/gpuweb/wgsl/#compute-shader-workgro...
|
| [1] https://developer.chrome.com/docs/capabilities/web-apis/gpu-...
| zackmorris wrote:
| This is cool, but it doesn't actually do any heavy lifting,
| because it runs GLSL 1.0 code directly instead of transpiling
| JavaScript to GLSL internally.
|
| Does anyone know of a JavaScript-to-GLSL transpiler?
|
| My interest in this is that the world abandoned true multicore
| processing 30 years ago around 1995 when 3D video cards went
| mainstream. Had it not done that, we could have continued with
| Moore's law and had roughly 100-1000 CPU cores per billion
| transistors, along with local memories and data-driven processing
| using hash trees and copy-on-write provided invisibly by the
| runtime or even in microcode so that we wouldn't have to worry
| about caching. Apple's M series is the only mainstream CPU I know
| of that is attempting to do anything close to this, albeit poorly
| by still having GPU and AI cores instead of emulating single-
| instruction-multiple-data (SIMD) with multicore.
|
| So I've given up on the world ever offering a 1000+ core CPU for
| under $1000, even though it would be straightforward to design
| and build today. The closest approximation would be some kind of
| multiple-instruction-multiple-data (MIMD) transpiler that
| converts ordinary C-style code to something like GLSL without
| intrinsics, pragmas, compiler-hints, annotations, etc.
|
| In practice, that would look like simple for-loops and other
| conditionals being statically analyzed to detect codepaths free
| of side effects, which would then be auto-parallelized for a
| GPU. We would never deal with SIMD or copying buffers to/from
| VRAM directly.
| The code would probably end up looking like GNU Octave, MATLAB or
| Julia, but we could also use stuff like scatter-gather arrays and
| higher-order methods like map reduce, or even green threads.
| Vanilla fork/join code could potentially run thousands of times
| faster on GPU than CPU if implemented properly.
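|
| As a sketch of what that analysis would have to prove
| (hypothetical transpiler, plain JavaScript input):
|
|       // parallelizable: iteration i touches only out[i] and
|       // reads a[i], so each i can become one GPU thread
|       for (let i = 0; i < n; i++) {
|         out[i] = Math.sqrt(a[i]) * 0.5;
|       }
|
|       // not parallelizable as written: the loop-carried
|       // dependency on sum forces a reduction tree, not a map
|       let sum = 0;
|       for (let i = 0; i < n; i++) sum += a[i];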
|
| The other reason I'm so interested in this is that GPUs can't
| easily do genetic programming with thousands of agents acting and
| evolving independently in a virtual world. So we're missing out
| on the dozen or so other approaches to AI which are getting
| overshadowed by LLMs. I would compare the current situation to
| using React without knowing how simple the HTTP form submit model
| was in the 1990s, which used declarative programming and
| idempotent operations to avoid build processes and the imperative
| hell we've found ourselves in. We're all doing it the hard way
| with our bare hands and I don't understand why.
___________________________________________________________________
(page generated 2025-05-07 23:01 UTC)