[HN Gopher] Show HN: 30ms latency screen sharing in Rust
       ___________________________________________________________________
        
       Show HN: 30ms latency screen sharing in Rust
        
       Author : Sean-Der
       Score  : 294 points
       Date   : 2024-07-09 14:07 UTC (4 days ago)
        
 (HTM) web link (github.com)
 (TXT) w3m dump (github.com)
        
       | krick wrote:
       | I don't get what it does, exactly? This doesn't seem to be an OBS
       | alternative (judging by the description), but... I mean, isn't it
       | exactly the same as just running OBS directly?
        
         | notjoemama wrote:
         | Looks like a LAN tele...er, screen sharing server/client.
         | Presumably you could serve over the internet but it will not
         | get the 30ms latency. Aside from the streaming (I only spent a
         | few minutes reviewing the source) it's a live jpeg kind of
         | thing. I built something similar to screen share with my kids
         | when we played Minecraft together. It was really for me because
         | once we got in game they would take off and in 5 minutes be
         | screaming for help 10 chunks away in some zombie skeleton
         | infested cave at or near bedrock. Being kids, I never got good
         | enough directions to help them in time. Anyway, it was a fun
         | project. I used CUDA and could get 60fps per client on CAT5 and
         | 45-ish over WiFi, dropping to 10-15fps when I walked in and out
          | of rooms with the laptop. 60fps is ~16.7ms per frame, so
          | 20ms is 50fps.
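The fps/frame-time arithmetic above is just 1000 divided by the rate; a quick illustrative sketch (not from the project): at 60fps each frame has a ~16.7ms budget, and a 20ms budget corresponds to 50fps.

```rust
// Convert between frames per second and the per-frame time budget (ms).
fn frame_period_ms(fps: f64) -> f64 {
    1000.0 / fps
}

fn fps_from_period_ms(ms: f64) -> f64 {
    1000.0 / ms
}

fn main() {
    // 60fps leaves ~16.7ms to capture, encode, send, and display a frame.
    println!("60fps -> {:.1}ms per frame", frame_period_ms(60.0));
    // A 20ms per-frame budget corresponds to 50fps.
    println!("20ms -> {:.0}fps", fps_from_period_ms(20.0));
}
```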
        
           | imtringued wrote:
           | >Presumably you could serve over the internet but it will not
           | get the 30ms latency.
           | 
           | Indeed, you'll have to live with something like 80ms to 100ms
           | latency over the internet and a horrifying 160 ms if you want
           | to have things respond to keyboard and mouse inputs.
        
             | jeffhuys wrote:
             | Then how does something like moonlight, parsec, or Geforce
             | Now work? Sub-10ms latency, sometimes even sub-5 depending
             | on time of day and network congestion.
        
               | geraldwhen wrote:
                | That is only ever theoretically possible with a direct
                | fiber connection and <= 200 miles between the two
                | nodes.
               | 
               | So the answer is "there's a data center in your city."
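The "data center in your city" point follows from propagation delay alone. Light in fiber travels at roughly two thirds of c, about 200 km per millisecond; a back-of-envelope sketch (the figures are illustrative, not measured):

```rust
// Back-of-envelope fiber propagation delay: signals in glass cover
// roughly 200 km per millisecond (about 2/3 of c in vacuum).
const FIBER_KM_PER_MS: f64 = 200.0;

fn one_way_ms(distance_km: f64) -> f64 {
    distance_km / FIBER_KM_PER_MS
}

fn main() {
    let two_hundred_miles_km = 322.0; // ~200 miles
    let rtt = 2.0 * one_way_ms(two_hundred_miles_km);
    // ~3.2ms round trip over 200 miles of fiber, before any switching,
    // encoding, or queuing delay is added on top.
    println!("RTT over ~200 miles of fiber: {:.1}ms", rtt);
}
```

So a sub-5ms budget is consumed almost entirely by distance once the peer is more than a few hundred kilometres away.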
        
               | notjoemama wrote:
               | Ever heard of the Akamai network? Netflix might be a good
               | example. Trace routes show latency between network hops.
                | To reduce latency you either buy better network hardware,
                | buy better cabling, or reduce hops in the network. Since
                | the first two are more expensive than the third, if your
                | service must have very fast response between server and
                | client, move the server closer to the client. Large
                | corporations run cache servers in multiple data centers
                | everywhere geographically so the response time for
                | clients is better than their competition. New video
                | services struggle to compete with YouTube in part
                | because YouTube can afford this kind of architecture
                | where a startup cannot. Even if it's the best code money
                | can buy, it will never provide the same level of
                | experience to users as local cache servers. Kinda sucks
                | nobody can compete.
        
         | Sean-Der wrote:
         | It is also a player!
         | 
         | You can either pull the video from a WHEP source or run in a
         | P2P mode. I wanted to demonstrate the flexibility and
         | hackability of it all :)
        
       | 1oooqooq wrote:
        | mostly glues two libraries? ffmpeg for capture, play and whip?
        
         | Sean-Der wrote:
         | Yep! It glues ffmpeg, str0m[0] and SDL together. I hope bitwhip
         | doesn't need to exist someday. When WHIP/WHEP has enough
         | traction it will be easier to land in FFMPEG
         | 
         | [0] https://github.com/algesten/str0m
        
       | Sean-Der wrote:
       | I wrote this to solve a few things I cared about.
       | 
       | * I want to show people that native WebRTC players can be a
       | thing. I hope this encourages hangouts/discord/$x to implement
        | WHIP and WHEP; it would let people do so much more
       | 
       | * I wanted to make low latency sharing easier. I saw the need for
       | this working on adding WebRTC to OBS and Broadcast Box[0]
       | 
       | * I wanted to show devs what a great ecosystem exists for WebRTC.
       | Lots of great implementations in different languages.
       | 
        | * It was a bit of a 'frustration project'. I saw a company
        | claiming
       | only their proprietary protocol can do latency this low. So I
       | thought 'screw you I will make an open source version!'
       | 
       | [0] https://github.com/glimesh/broadcast-box
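For anyone unfamiliar with WHIP: on the wire it is essentially an HTTP POST carrying an SDP offer, with the answer coming back in the response. A sketch of the exchange (the endpoint URL and token below are placeholders):

```shell
# WHIP publish handshake, sketched with curl (placeholder endpoint/token).
curl -X POST "https://broadcast.example.com/api/whip" \
  -H "Authorization: Bearer <token>" \
  -H "Content-Type: application/sdp" \
  --data-binary @offer.sdp
# On success the SDP answer is in the response body, and the Location
# header names a session resource; DELETE that URL to stop publishing.
```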
        
         | Sean-Der wrote:
          | Another thing in this realm: I am adding native co-
         | streaming/conferencing to OBS [0]. OBS can send WebRTC, next I
         | want to make receiving work well.
         | 
         | Between that and Simulcast I hope to make real-time video
         | dramatically cheaper and easier
         | 
         | [0]
         | https://docs.google.com/document/d/1Ed2Evze1ZJHY-1f4tYzqNZfx...
        
           | ta988 wrote:
           | This would be fabulous, thank you so much for working on
           | that. What kind of latency does dual encoding (on client then
            | on receiver again) add? Are there codecs that can have
           | multiple streams on the same image (as in zones of
           | independent streams on the video surface)?
        
         | tamimio wrote:
         | > I saw a company claiming only their proprietary protocol
         | 
         | Did the company have a "ripple" in its name? Curious
        
           | Sean-Der wrote:
           | Let me find it again! I saw it on LinkedIn and it was such a
           | bullshit promo thing
        
             | tamimio wrote:
             | I remember around two years ago, we got in touch with a
             | company--without mentioning the name but it has "ripple" in
             | it--and after an hour-long seminar, NDA, password-protected
             | binaries, and other BS, they barely delivered ~150ms
             | latency..
        
         | slashink wrote:
         | Hey Sean, we both worked at Twitch Video but I left just as you
         | were joining. I currently work on the Discord video stack and
         | am somewhat curious about how you imagine Discord leveraging
         | WHIP/WHEP. Do you see it as a way for these clients to
         | broadcast outwards to services like Twitch or more as an
         | interoperability tool?
        
           | Sean-Der wrote:
            | Users want to send WHIP into Discord. The lack of control on
           | screen sharing today is frustrating. Users want to capture
           | via another tool and control bitrate/resolution.
           | 
           | Most Broadcast Box users tell me that's their reason for
           | switching off discord.
           | 
           | ------
           | 
           | With WHEP I want to see easier co-streaming. I should be able
            | to connect a room to my OBS instance and have everyone's
            | video automatically show up.
           | 
           | I don't have this figured out yet. Would love your opinion
           | and feedback. Wanna comment on the doc or would love to talk
           | 1:1 ! siobud.com/meeting
        
         | makapuf wrote:
         | Great things are accomplished by spite programming!
         | https://hackaday.com/2018/01/03/spite-thrift-and-the-virtues...
        
         | Karrot_Kream wrote:
          | Thanks, the code is really useful to read through.
        
         | mcwiggin2 wrote:
         | This is awesome. I would love if you had some examples on how
         | to use AntMedia as a source. I am mostly in video engineering
          | so reading the source comes slower to me. This would be really
         | handy in many cases.
        
         | mwcampbell wrote:
         | Is the restriction to NVIDIA necessary for the low latency?
        
           | Sean-Der wrote:
           | Nope! I want to add all the other flows.
           | 
           | nvidia is the absolute lowest I believe. I wanted to do it
           | first to know if it was worth building.
        
         | namibj wrote:
         | Any plans on integrating L4S with e.g. Tetrys-based FEC and
         | using a way where the congestion feedback from L4S acts on the
         | quantizer/rate-factor instead of directly on bitrate?
         | 
         | It's much more appropriate to do perceptual fairness than
         | strict bitrate fairness.
         | 
         | Happy to have a chat on this btw; you can best catch me on
         | discord.
        
           | stefan_ wrote:
           | Depends on the network, surely? Lots of applications for low
           | latency video where you are not sharing the channel, but it
           | has a fixed bandwidth.
        
             | namibj wrote:
             | E.g. "Low Latency DOCSIS"[0] and related, WiFi[1], support
             | it and with the former it's about non-exclusive scarce
             | uplink capacity where cross-customer capacity sharing may
             | rely on post-hoc analysis of flow behavior to check for
             | abuse, switching to forced fairness if caught by such
             | heuristics. For downstream it's even more natural to have
             | shared capacity with enough congestion to matter, but often
             | only the WiFi side would have a large discretionary range
             | for bandwidth scheduling/allocation to matter much.
             | 
             | Apple already opportunistically uses L4S with TCP-Prague
             | and there are real-world deployments/experiments [2] with
             | end-to-end L4S.
             | 
              | 
              | [0]: https://github.com/cablelabs/lld
              | 
              | [1]: relevant excerpt from [0]: "Applications that send
              | large volumes of traffic that need low latency, but that
              | are responsive to congestion in the network. These
              | applications can benefit from using a technology known as
              | 'Low Latency, Low Loss, Scalable Throughput (L4S)'.
              | Support for this technology is included in the LLD feature
              | set, but is beyond the scope of what we have in this
              | repository. Information on L4S can be found in this IETF
              | draft architecture."
              | 
              | [2]: https://www.vodafone.com/news/technology/no-lag-gaming-
              | vodaf...
        
         | cchance wrote:
         | Question, 30ms latency sounds amazing but how does it actually
         | compare to "the standard" sharing tools for desktops, like do
         | you know what the latency on say MSRDP is as comparison or VNC?
        
           | Sean-Der wrote:
           | I doubt the protocol itself makes a big difference. I bet you
            | can get 30ms with VNC. The differences with BitWHIP:
           | 
           | * Can play WebRTC in browser. That makes things easier to
           | use.
           | 
           | * simpler/hackable software. BitWHIP is simple and uses nvenc
           | etc... if you use nvenc with VNC I bet you can get the same
           | experience
        
       | comex wrote:
       | Ooh, I've been looking for a good solution for this for years.
       | Currently I use Parsec, but it's closed source and not compatible
       | with direct streaming from OBS etc. I'll definitely check this
       | out.
        
       | eigenvalue wrote:
       | Couldn't get it to work in Windows 11. Was able to run the just
       | install script only after editing it to use the full path to the
       | 7zip binary. Said it installed correctly, but then when I try to
        | do `just run play whip` I got this:
        | 
        |       cargo:rustc-cfg=feature="ffmpeg_7_0"
        |       cargo:ffmpeg_7_0=true
        |       --- stderr
        |       cl : Command line warning D9035 : option 'o' has been
        |       deprecated and will be removed in a future release
        |       thread 'main' panicked at C:\Users\jeffr\.cargo\registry\src\index.crates.io-6f17d22bba15001f\bindgen-0.69.4\lib.rs:622:31:
        |       Unable to find libclang: "couldn't find any valid shared
        |       libraries matching: ['clang.dll', 'libclang.dll'], set the
        |       `LIBCLANG_PATH` environment variable to a path where one of
        |       these files can be found (invalid: [])"
        |       note: run with `RUST_BACKTRACE=1` environment variable to
        |       display a backtrace
        
         | mintplant wrote:
         | Looks like you need libclang for the ffmpeg bindings.
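The panic message itself names the fix: bindgen needs a libclang shared library at build time. One possible remedy on Windows (the LLVM package name and install path are assumptions; point `LIBCLANG_PATH` at wherever clang.dll actually lives on your machine):

```shell
# Install LLVM and tell bindgen where its libclang is.
winget install LLVM.LLVM
setx LIBCLANG_PATH "C:\Program Files\LLVM\bin"
```

Then reopen the terminal so the new environment variable is picked up.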
        
           | warkdarrior wrote:
           | Looks like the install script is incomplete and fails to
           | check for and install all prerequisites.
        
       | synthoidzeta wrote:
       | vdo.ninja is another excellent alternative but I'll definitely
       | check this out!
        
       | Dwedit wrote:
       | How does this compare with Moonlight?
        
       | tamimio wrote:
        | Amazing work! The best I could achieve was ~40ms on video streams,
       | although it was over a cellular network from a drone. But 30ms is
       | a new milestone! I will see if I can repurpose this and test out
       | a real-time video stream from a robot if I get some spare time.
        
       | daghamm wrote:
       | What is the reason for using "just" here?
       | 
       | I understand people have their tooling preferences, but this
       | looks like something that build.rs or a plain makefile could have
       | handled?
        
         | mijoharas wrote:
         | I was also wondering if anyone could chime in on advantages of
         | using just.
         | 
         | I'm familiar with makefiles, is there a particular advantage to
         | using just over makefiles or is it personal preference? (which
         | is a totally valid answer! I'm just wondering if I'm missing
         | something)
        
           | aerzen wrote:
           | I think that the appeal of just is that it is simpler than
           | make. It is not checking timestamps of files, but executes a
           | DAG of tasks unconditionally.
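For illustration, a minimal justfile (the recipes and binary name here are hypothetical, not from this repo). Running `just run` always executes `build` first; nothing is skipped based on file timestamps the way make would:

```just
# Recipes form a plain DAG and run unconditionally.
build:
    cargo build --release

run: build
    ./target/release/bitwhip play
```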
        
             | mijoharas wrote:
             | My first thought was that that was dropping one of the main
             | features of make.
             | 
                | On reflection though, the timestamp-dependent part isn't
             | really something used much nowadays apart from compiling C.
             | 
             | It'd be cool if it was an opt-in feature for just files so
             | that it could actually function as a replacement for make
             | in all cases.
             | 
             | I went looking in the docs and found this[0] which I'd
             | missed last time I looked into justfiles.
             | 
             | [0] https://github.com/casey/just?tab=readme-ov-file#what-
             | are-th...
        
               | daghamm wrote:
               | I don't really buy his justification that ".PHONY: xxx"
               | is hard to remember so we should have a completly new
               | tool instead.
               | 
               | Make has its issues, but it also has two big advantages:
                | it's simple and everyone already has it.
        
               | IshKebab wrote:
               | Everyone already has it... _on Linux and Mac_. It 's
               | pretty rare for it to be available on Windows.
               | 
               | That said I kind of agree. I like the idea of `just` but
               | it does seem like they have just created a complicated
               | DSL.
               | 
               | I think it is better to just write your infra scripting
               | in a real language. I generally use Deno or Rust itself
               | and a thin wrapper that `cargo run`'s it. Using Rust
               | eliminates a dependency.
        
               | mort96 wrote:
               | Anyone who's halfway serious about software development
               | on Windows surely has make there too, and it's not like
               | non-developers are the target audience for 'just' scripts
        
               | IshKebab wrote:
               | > Anyone who's halfway serious about software development
               | on Windows surely has make there too
               | 
               | Not even remotely. I know it might be hard to imagine if
               | you only program on Linux/Mac but there's a whole world
               | out there that isn't built on janky shell scripts and
               | Makefiles. If you use C# or Java or Visual C++ or Qt on
               | Windows it's pretty unlikely that you'd have Make. It's
               | kind of a pain to install and you don't need it.
        
               | galdosdi wrote:
               | I agree, and even more strongly: you don't even need to
               | remember .PHONY as long as your target names don't
               | overlap with actual filenames, which is usually easy.
               | 
               | In fact, I didn't even know about .PHONY and have used
               | make for a long time. That's what's great about it, even
               | if you stick to the most basic features make is
               | incredibly easy and straightforward. Dare I say, it
               | "just" works lol.
               | 
               | I hate the proliferation of new tools that are the same
               | as a tool that's been around for 20 years and is no
               | different in any significant way except being trendy.
               | Just unnecessary entropy. Our job is to manage and
               | reduce, not maximize entropy.
        
               | Arnavion wrote:
               | Also this:
               | 
               | >The explicit list of phony targets, written separately
               | from the recipe definitions, also introduces the risk of
               | accidentally defining a new non-phony target.
               | 
               | ... seems to think the only way to define phony targets
                | is:
                | 
                |       .PHONY: foo bar
                |       foo:
                |           ...
                |       bar:
                |           ...
                | 
                | ... which has the problem that bar's definition is
                | distant from its declaration as a phony target. But this
                | form is equivalent and doesn't have that problem:
                | 
                |       .PHONY: foo
                |       foo:
                |           ...
                |       .PHONY: bar
                |       bar:
                |           ...
               | 
               | This ability to declare dependencies of a target over
               | multiple definitions isn't even unique to `.PHONY`.
        
             | daghamm wrote:
              | Wouldn't a shell script work just as well, then?
             | 
             | I'm not against new better tooling, but I also want to keep
             | my dev machine reasonably clean.
        
               | IshKebab wrote:
               | Shell scripts don't work well on Windows.
        
               | hughesjj wrote:
               | Even powershell sometimes with execution policies
        
               | galdosdi wrote:
               | I would just use WSL then, if native windows dev tooling
               | is such a shit show
        
           | mharrig1 wrote:
           | I recently switched my (small) company over to using just
           | files within our codebases and it's been going over very well
           | thus far.
           | 
           | We're building a set of apps that need to run on Linux,
           | MacOS, and Windows so having a consistent solution for each
           | is better than shell scripting and I personally have never
          | felt great about make and its weirdness.
           | 
           | It also helps that we have a pretty big monorepo so that
           | anyone can bounce from one app to another and `just run` to
           | use any of them, no matter the platform.
           | 
           | Either way the justification for me came from COSMIC[0].
           | 
           | [0] https://github.com/pop-os/cosmic-
           | epoch/blob/master/justfile
        
         | Sean-Der wrote:
         | John did all the work on this.
         | 
         | Just is nice as a Windows user. When I started committing
         | everything worked really well already. Editing the just stuff
          | also is really easy. Much nicer to read than scripts, I think.
        
       | jmakov wrote:
       | Can this be used as remote desktop?
        
         | Sean-Der wrote:
         | Yes! I want to add remote control features to it. Lots of
         | things left to do
         | 
         | Any interest in getting involved? Would love your help making
         | it happen
        
       | Tielem wrote:
        | Always a bit sceptical when it comes to latency claims,
       | especially in the sub 100ms space, but screen sharing 1-1 or
       | video ingest should be a great use case for WebRTC
       | 
       | WebRTC is a great technology, but it still suffers from a scaling
       | problem that is harder to resolve. On top of that, the protocol
       | itself does not define things like adaptive bitrate switching or
       | stalling recovery
       | 
       | Curious to hear what you think of some (proprietary) options for
       | low latency playback like LLHLS LLDASH, WebRTC or HESP
        
         | Sean-Der wrote:
          | WebRTC has congestion control and Simulcast/SVC; what is
          | missing for adaptive bitrate switching? What is stalling
         | recovery? I believe NACK/PLI handle this?
         | 
         | WebRTC doesn't have a scaling problem. I think it was a
         | software problem! Twitch, Tencent, Agora, Phenix all do 100k+
         | these days
         | 
         | I like WebRTC because of the open-ness of it. I also like that
         | I only need one system for ingest and playback. I am HEAVILY
         | biased though, way over invested in WebRTC :) I tend to care
         | about greenfield/unique problems and not enough about scaling
         | and making money
        
       | kiririn wrote:
        | As someone who set up a Discord-streaming-like service to use
       | alongside Mumble, this is very exciting. I couldn't get anything
       | involving webrtc working reliably, but the only broadcasting
       | clients I found were web browsers and OBS, so I am interested to
       | see how this compares!
       | 
       | What I eventually settled on was https://github.com/Edward-
       | Wu/srt-live-server with OBS and VLC player, which gives robust
       | streaming at high bitrate 4k60, but latency is only 1-2 seconds
        
         | Sean-Der wrote:
         | Excited to hear what you think! If there is anything I can
          | change/improve, tell me and I will make it better :)
        
       | kierank wrote:
       | I wrote a blog post about how numbers like "30ms latency" are
       | thrown around called "How to lie about latency":
       | https://www.obe.tv/how-to-lie-about-latency/
       | 
       | It's left as an exercise to the reader which methods of lying are
       | being used in this case.
        
       ___________________________________________________________________
       (page generated 2024-07-13 23:01 UTC)