Post AXReO7h8x7YVPRoghk by zhuowei@notnow.dev
 (DIR) Post #AXReO7h8x7YVPRoghk by zhuowei@notnow.dev
       2023-07-07T08:11:47.152538Z
       
       0 likes, 0 repeats
       
       What's the best way to encode a video with an alpha channel?
       
 (DIR) Post #AXRfopSQxxnEcXEbMu by zhuowei@notnow.dev
       2023-07-07T08:27:53.279235Z
       
       0 likes, 0 repeats
       
The Internet says the best way is to encode two videos, one holding the YUV colour data and one holding the alpha channel, then recombining them at decode time. Not sure how to interleave them, though. (Or I could just key out one colour, but visionOS's app icon grid, for example, probably won't look good without full alpha...)
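The split-and-recombine idea above can be sketched in a few lines. This is an illustration, not anything from the thread: plain tuples stand in for decoded pixel data, whereas a real pipeline would run the two planes through an actual codec per frame.

```python
# Sketch of the two-stream approach: split RGBA pixels into a colour
# plane (fed to a normal YUV codec) and a greyscale alpha plane (fed to
# a second encoder), then zip them back together after decoding.

def split_rgba(frame):
    """Split a list of (r, g, b, a) pixels into colour and alpha planes."""
    color = [(r, g, b) for r, g, b, a in frame]
    alpha = [a for _, _, _, a in frame]
    return color, alpha

def recombine(color, alpha):
    """Zip the two decoded planes back into RGBA pixels."""
    return [(r, g, b, a) for (r, g, b), a in zip(color, alpha)]

frame = [(255, 0, 0, 255), (0, 255, 0, 128), (0, 0, 255, 0)]
color, alpha = split_rgba(frame)
assert recombine(color, alpha) == frame
```

The round trip is lossless here only because no codec is involved; with lossy compression the recombined alpha edges are where artifacts tend to show up.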
       
 (DIR) Post #AXRnGyRlpxNEHuI9rs by torokati44@mastodon.social
       2023-07-07T08:47:35Z
       
       1 like, 1 repeat
       
       @zhuowei That's exactly how "VP6 with alpha" in Flash works. The color is premultiplied, and the alpha is a separate greyscale stream. One constraint: any frame that is a keyframe (I-frame) in the color stream must also be a keyframe in the alpha stream. Other than that, VP8/VP9 in WebM can apparently handle alpha natively...?
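The keyframe constraint described above reduces to a simple per-frame check. A sketch (my own illustration, with each stream represented as a list of booleans marking which frames are keyframes):

```python
def alpha_keyframes_ok(color_keyframes, alpha_keyframes):
    """VP6-with-alpha constraint: every keyframe (I-frame) in the color
    stream must also be a keyframe in the alpha stream. The reverse is
    not required. Arguments are per-frame booleans (True = keyframe)."""
    return all(not c or a for c, a in zip(color_keyframes, alpha_keyframes))

# Valid: keyframes line up.
assert alpha_keyframes_ok([True, False, True], [True, False, True])
# Invalid: frame 2 is a color keyframe but a delta frame in alpha.
assert not alpha_keyframes_ok([True, False, True], [True, False, False])
```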
       
 (DIR) Post #AXRnH0RUPyKyTcAxhw by torokati44@mastodon.social
       2023-07-07T08:53:11Z
       
       1 like, 0 repeats
       
       @zhuowei This is how it works in SWF/FLV:
       
 (DIR) Post #AXRnMoRFxc3Yz8prKC by pulkomandy@mastodon.tetaneutral.net
       2023-07-07T08:50:03Z
       
       0 likes, 0 repeats
       
       @zhuowei a GIF file? (probably wrong answer but it technically fits the given requirements)
       
 (DIR) Post #AXRnMp5JYg6CzNZqlM by zhuowei@notnow.dev
       2023-07-07T09:52:27.181784Z
       
       0 likes, 0 repeats
       
       @pulkomandy The constraint is "minimum modifications to ALVR or some other VR streaming app", so, unfortunately, no
       
 (DIR) Post #AXSJXKCjuL3Ivz6N0a by keithahern@mastodon.social
       2023-07-07T13:30:15Z
       
       0 likes, 0 repeats
       
       @zhuowei I played around with this using OBS, chroma keying out colours while looking at blocks of colour in the sim. Due to the inherent alpha blending of the app icon grid and the frosted-glass panels, it didn't look great. Are you able to isolate the layers pre-composition?
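The limitation described above is inherent to chroma keying: it yields a binary transparent/opaque decision per pixel, with no partial alpha for the key colour to blend against. A minimal sketch of such a keyer (my own illustration; the key colour and tolerance are arbitrary):

```python
def chroma_key(frame, key=(0, 255, 0), tol=30):
    """Make pixels near the key colour fully transparent and everything
    else fully opaque. The all-or-nothing alpha produced here is why
    soft blends (frosted glass, the icon grid) look bad when keyed."""
    out = []
    for r, g, b in frame:
        dist = abs(r - key[0]) + abs(g - key[1]) + abs(b - key[2])
        a = 0 if dist <= tol else 255
        out.append((r, g, b, a))
    return out

pixels = [(0, 255, 0), (10, 250, 5), (255, 255, 255)]
# First two pixels are close to the green key and get alpha 0;
# the white pixel stays fully opaque.
assert chroma_key(pixels) == [(0, 255, 0, 0), (10, 250, 5, 0), (255, 255, 255, 255)]
```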
       
 (DIR) Post #AXSJXL5KdMilfIdy6K by zhuowei@notnow.dev
       2023-07-07T15:52:54.786494Z
       
       0 likes, 0 repeats
       
       @keithahern Not sure yet; I'll have to take a look. I remember visionOS supports rendering the environment to a separate layer (see createPatchAndUpdateWithFrameDescription: etc.), so I might be able to use that.
       
 (DIR) Post #AXSJXNpUQipcB2FrKC by keithahern@mastodon.social
       2023-07-07T13:34:49Z
       
       0 likes, 0 repeats
       
       @zhuowei OBS made the chroma-keyed stream available as a virtual camera device; I then streamed that over WebRTC to the Magic Leap, which was also sending gyro events. It was laggy, and I haven't gone back to optimise it.
       
 (DIR) Post #AXTR53nOJcOaj5EqvY by zhuowei@notnow.dev
       2023-07-08T04:52:10.205701Z
       
       0 likes, 0 repeats
       
       @keithahern Figured it out: I can just hook RealityKit to prevent it from adding composeSyntheticEnvironment.rerendergraph, and I get a PNG of the visionOS UI without the background scene. https://notnow.dev/notice/AXTQmrNkveomxqdIrw