 (DIR) Post #B2OYAwm1d8au1h581w by mcc@mastodon.social
       2026-01-18T00:04:49Z
       
       0 likes, 0 repeats
       
       A couple weeks ago I was thinking about how waypipe is sort of more like VNC, sending raw display buffer updates, and less like X or NeWS, which have the capacity to actually send "semantic" draw-update information— you could imagine an X extension that knows what a "button" is and tells the server to draw it, improving both bandwidth and responsiveness. This is too bad, but also kind of irrelevant, as NeWS failed at market, and X was frankly never any good at this. (1/2)
       
 (DIR) Post #B2OYAxW6rnSQKcdvrU by mcc@mastodon.social
       2026-01-18T00:05:51Z
       
       0 likes, 0 repeats
       
       However, could we, in the Wayland era, do *better* than X? Wayland is essentially a way of negotiating a connection to a GPU. You could imagine a network GUI streamer which negotiates a connection to a *remote* GPU. You could imagine that when the app uploads a texture to the GPU, the data is sent over the network to the "display server", and GPU command buffers are likewise sent over the network to be executed there. You could imagine this being much more efficient than Waypipe, especially if the app is designed anticipating it. (2/2)
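
       A rough sketch of what such a wire protocol might look like, as plain data rather than a real protocol (every name here is invented for illustration; textures and buffers get stable ids so command buffers can refer to them remotely):

           use serde::{Deserialize, Serialize};

           // Hypothetical messages for a "remote GPU" display protocol.
           #[derive(Serialize, Deserialize)]
           enum RemoteGpuMsg {
               // Texel data crosses the network once; the display server keeps
               // it in its own VRAM afterwards.
               UploadTexture { id: u64, width: u32, height: u32, rgba8: Vec<u8> },
               // Vertex / uniform data, likewise uploaded by id.
               UploadBuffer { id: u64, bytes: Vec<u8> },
               // A recorded command buffer that references resources by id
               // instead of by pointer, so it can be replayed on the remote GPU.
               Submit { commands: Vec<DrawCommand> },
               // Tell the remote compositor the frame is done.
               Present,
           }

           #[derive(Serialize, Deserialize)]
           enum DrawCommand {
               SetPipeline { pipeline_id: u64 },
               BindTexture { slot: u32, texture_id: u64 },
               SetVertexBuffer { slot: u32, buffer_id: u64 },
               Draw { vertex_count: u32, instance_count: u32 },
           }

       For content that doesn't change, only Submit/Present would need to cross the wire each frame, which is presumably where the win over shipping whole display buffers would come from.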
       
 (DIR) Post #B2OYAyEQD2u2Y3NJvk by dotstdy@mastodon.social
       2026-01-18T01:13:49Z
       
       1 likes, 0 repeats
       
       @mcc unfortunately it's really really not so easy anymore. If I have an application which uses 2GB of VRAM, then you'd need to actually stream those 2GB of data before even rendering the first frame. Furthermore, applications can stream tremendous amounts of data between the CPU and GPU, and they can do it at any time, completely outside the control of the driver APIs and Wayland: you can just map the entire GPU memory into CPU address space and memcpy whenever you like.
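
       For concreteness, a minimal sketch of that pattern (using the ash Vulkan bindings, and assuming the device and a HOST_VISIBLE | HOST_COHERENT allocation already exist from ordinary setup):

           use ash::vk;

           // Once memory is persistently mapped, the app writes into it with
           // plain memcpy, and no Wayland- or driver-level hook sees the writes.
           unsafe fn stream_to_gpu(device: &ash::Device, memory: vk::DeviceMemory, payload: &[u8]) {
               // Map once; many engines keep this mapping alive for the
               // program's whole lifetime.
               let ptr = device
                   .map_memory(memory, 0, vk::WHOLE_SIZE, vk::MemoryMapFlags::empty())
                   .expect("map_memory failed") as *mut u8;

               // From here on the app can copy at any time, from any thread,
               // with no API call a transparent network layer could intercept.
               std::ptr::copy_nonoverlapping(payload.as_ptr(), ptr, payload.len());
               // With HOST_COHERENT memory there isn't even a flush call left to hook.
           }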
       
 (DIR) Post #B2OYB2ToPlafiwbfTU by mcc@mastodon.social
       2026-01-18T00:07:36Z
       
       0 likes, 0 repeats
       
       The way I'd design this is to have the wire protocol be based on WebGPU. I'd do this because WebGPU maps well onto a broader set of target GPUs than Vulkan (the UNIX machine running the program may support Vulkan, but what machine is running the "display server"?), it's simpler, it's designed to be virtualized, and most Rust GUI programs already rest atop wgpu. However you *could* imagine going lower level and streaming Vulkan… or going the other way (and more deranged) and using HTML (3/2)
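
       One way to picture the "designed anticipating it" part: the app renders against a narrow, WebGPU-shaped seam, and that seam is backed either by a local wgpu queue or by something that serializes the same calls onto a socket. A sketch, with the trait and struct names invented for illustration:

           use std::collections::HashMap;
           use std::io::Write;
           use std::net::TcpStream;

           // The seam the app renders against.
           trait GpuTransport {
               fn write_buffer(&mut self, buffer_id: u64, offset: u64, data: &[u8]);
               fn submit(&mut self, encoded_commands: &[u8]);
           }

           // Local backend: hand the bytes straight to wgpu.
           struct LocalGpu {
               queue: wgpu::Queue,
               buffers: HashMap<u64, wgpu::Buffer>,
           }

           impl GpuTransport for LocalGpu {
               fn write_buffer(&mut self, buffer_id: u64, offset: u64, data: &[u8]) {
                   self.queue.write_buffer(&self.buffers[&buffer_id], offset, data);
               }
               fn submit(&mut self, _encoded_commands: &[u8]) {
                   // ...decode and replay through a wgpu command encoder...
               }
           }

           // Remote backend: the same calls become messages on a socket.
           struct NetworkGpu {
               socket: TcpStream,
           }

           impl GpuTransport for NetworkGpu {
               fn write_buffer(&mut self, buffer_id: u64, offset: u64, data: &[u8]) {
                   // A real protocol would need proper framing; this only shows the shape.
                   let _ = self.socket.write_all(&buffer_id.to_le_bytes());
                   let _ = self.socket.write_all(&offset.to_le_bytes());
                   let _ = self.socket.write_all(data);
               }
               fn submit(&mut self, encoded_commands: &[u8]) {
                   let _ = self.socket.write_all(encoded_commands);
               }
           }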
       
 (DIR) Post #B2OYB3FfXps67Mzt4K by dotstdy@mastodon.social
       2026-01-18T01:19:48Z
       
       0 likes, 0 repeats
       
       @mcc If you want to stream UI so you can render it on the remote end, IMO you need to do it at a much higher level than GPU commands or Wayland. E.g. you can stream the draw command stream for imgui pretty easily, so long as people don't go too hog wild with their draws. (We do this at work for remote debugging of game servers and consoles and whatnot!) As an API, Wayland has very little to do with these things; like X direct rendering, you're only passing a buffer and a GPU fence to the compositor.
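
       For reference, the data that approach puts on the wire is small and renderer-agnostic; this sketch mirrors the shape of Dear ImGui's draw data (vertex and index buffers plus a list of textured, clipped draw commands), with the type names invented for illustration:

           use serde::{Deserialize, Serialize};

           #[derive(Serialize, Deserialize)]
           struct Vertex {
               pos: [f32; 2],
               uv: [f32; 2],
               color: u32, // packed RGBA
           }

           #[derive(Serialize, Deserialize)]
           struct DrawCmd {
               clip_rect: [f32; 4],
               texture_id: u64, // font atlas or user texture, uploaded once up front
               index_offset: u32,
               index_count: u32,
           }

           #[derive(Serialize, Deserialize)]
           struct DrawList {
               vertices: Vec<Vertex>,
               indices: Vec<u16>,
               commands: Vec<DrawCmd>,
           }

           // A frame of UI is then just a Vec<DrawList>, which the remote side
           // can rasterize with whatever renderer it has.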
       
 (DIR) Post #B2OYB7KQO63IlHFk2K by mcc@mastodon.social
       2026-01-18T00:31:29Z
       
       0 likes, 0 repeats
       
       Observation (4/2): If you *did* make the transport for my "semantic Waypipe" concept be HTML, since the most reasonable way to prototype this would be to use Tauri as the programming interface, then instead of Waypipe you could name it SendTaur
       
 (DIR) Post #B2OYLNn828ueiTZe4m by shironeko@fedi.tesaguri.club
       2026-01-18T02:15:06.983244Z
       
       0 likes, 0 repeats
       
       @mcc https://docs.mesa3d.org/drivers/venus.html maybe?