https://lwn.net/SubscriberLink/847412/d7826b1353e33734

PipeWire: The Linux audio/video bus

March 2, 2021

This article was contributed by Ahmed S. Darwish

For more than a decade, PulseAudio has been serving the Linux desktop as its predominant audio mixing and routing daemon -- and its audio API. Unfortunately, PulseAudio's internal architecture does not fit the growing sandboxed-applications use case, even though there have been attempts to amend that. PipeWire, a new daemon created (in part) out of these attempts, will replace PulseAudio in the upcoming Fedora 34 release. It is a coming transition that deserves a look.

Speaking of transitions, Fedora 8's own switch to PulseAudio in late 2007 was not a smooth one. Longtime Linux users still remember having the daemon branded as the software that will break your audio. After a bumpy start, PulseAudio emerged as the winner of the Linux sound-server struggles. It provided a native client audio API, but also supported applications that used the common audio APIs of the time -- including the raw Linux ALSA sound API, which typically allows only one application to access the sound card. PulseAudio mixed the different applications' audio streams and provided a central point for audio management, fine-grained configuration, and seamless routing to Bluetooth, USB, or HDMI. It positioned itself as the Linux desktop equivalent of the user-mode audio engine for Windows Vista and the macOS CoreAudio daemon.

Cracks at PulseAudio

By 2015, PulseAudio was still enjoying its status as the de facto Linux audio daemon, but cracks were beginning to develop. The gradual shift to sandboxed desktop applications may be proving fatal to its design: with PulseAudio, an application can snoop on other applications' audio, have unmediated access to the microphone, or load server modules that can interfere with other applications. Attempts were made at fixing PulseAudio, mainly through an access-control layer and a per-client memfd-backed transport. This was all necessary, but not yet sufficient, for isolating clients' audio.
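The memfd-backed transport builds on sealed, anonymous, fd-backed memory segments. As a rough sketch of the underlying kernel primitive -- illustrative only, not PulseAudio's actual code -- a server can create a per-client segment, size it, and seal it against resizing before handing it out over the client's socket:

    #define _GNU_SOURCE
    #include <fcntl.h>      /* F_ADD_SEALS, F_SEAL_* */
    #include <sys/mman.h>   /* memfd_create() */
    #include <unistd.h>     /* ftruncate(), close() */

    /* Hypothetical helper: create an anonymous, sealed memory segment. */
    static int create_sealed_segment(size_t size)
    {
        int fd = memfd_create("client-segment", MFD_CLOEXEC | MFD_ALLOW_SEALING);
        if (fd < 0)
            return -1;

        if (ftruncate(fd, size) < 0 ||
            /* Forbid shrinking/growing (and further sealing), so the peer
             * can mmap() the segment without risking SIGBUS surprises. */
            fcntl(fd, F_ADD_SEALS, F_SEAL_SHRINK | F_SEAL_GROW | F_SEAL_SEAL) < 0) {
            close(fd);
            return -1;
        }
        return fd;   /* passed to the client over its Unix-domain socket */
    }

Unlike a named segment under /dev/shm, such a descriptor has no filesystem name; it is only reachable by the processes it is explicitly passed to, which is part of what makes a per-client transport attractive.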
Around that time, David Henningsson, one of the core PulseAudio developers, resigned from the project. He cited frustrations over the daemon's poor fit for the sandboxed-applications use case, and its intermixing of mechanism and policy for audio-routing decisions. At the end of his message, he wondered whether the combination of these problems might be the birth pangs of a new and much-needed Linux audio daemon:

    In software nothing is impossible, but to re-architecture PulseAudio to support all of these requirements in a good way (rather than to "build another layer on top" [...]) would be very difficult, so my judgment is that it would be easier to write something new from scratch. And I do think it would be possible to write something that took the best from PulseAudio, JACK, and AudioFlinger, and get something that would work well for both mobile and desktop; for pro-audio, gaming, low-power music playback, etc. [...] I think we, as an open source community, could have great use for such a sound server.

PulseVideo to Pinos

Meanwhile, GStreamer co-creator Wim Taymans was asked to work on a Linux service to mediate web browsers' access to camera devices. Initially, he called the project PulseVideo. The idea behind the name was simple: similar to the way PulseAudio was created to mediate access to ALSA sound devices, PulseVideo was created to mediate and multiplex access to the Video4Linux2 camera device nodes. A bit later, Taymans discovered a similarly named PulseVideo prototype [video], created by William Manley, and helped in upstreaming the GStreamer features required by its code. To avoid conflicts with the PulseAudio name, and due to scope extension beyond just camera access, Taymans later renamed the project to Pinos -- in a reference to his town of residence in Spain.

Pinos was built on top of GStreamer pipelines, using some of the infrastructure that had earlier been refined for Manley's prototype. D-Bus, with file-descriptor passing, was used for interprocess communication. At the GStreamer 2015 conference, Taymans described the Pinos architecture [PDF] to attendees and gave a demo of multiple applications accessing the system camera feed in parallel.

Due to its flexible, pipeline-based, file-descriptor-passing architecture, Pinos also supported media broadcasting in the other direction: applications could "upload" a media stream by passing a memfd or dma-buf file descriptor. The media stream could then be further processed and distributed to other applications and to system multimedia sinks like ALSA sound devices.

While only discussed in passing, the ability to send streams in both directions and across applications allowed Pinos to act as a generic audio/video bus -- efficiently funneling media between isolated, and possibly sandboxed, user processes. The scope of Pinos (if properly extended) could thus overlap with, and possibly replace, PulseAudio. Taymans was explicitly asked that question [video, 31:35], and he answered: "Replacing PulseAudio is not an easy task; it's not on the agenda [...] but [Pinos] is very broad, so it could do more later." As the PulseAudio deficiencies discussed in the earlier section became more problematic, "could do more later" was not a far-off target.
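The file-descriptor passing at the heart of the Pinos design is the standard Unix SCM_RIGHTS mechanism; D-Bus's file-descriptor support is built on it as well. As an illustration of the mechanism only -- this is not Pinos or PipeWire code -- one process can hand a memfd or dma-buf descriptor to another over a Unix-domain socket like this:

    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    /* Hypothetical helper: send one file descriptor (e.g. a memfd or a
     * dma-buf) over a connected Unix-domain socket. */
    static int send_fd(int sock, int fd)
    {
        char dummy = 0;
        struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };
        union {                        /* ensures correct cmsg alignment */
            struct cmsghdr align;
            char buf[CMSG_SPACE(sizeof(int))];
        } u;
        struct msghdr msg = {
            .msg_iov = &iov, .msg_iovlen = 1,
            .msg_control = u.buf, .msg_controllen = sizeof(u.buf),
        };
        struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);

        cmsg->cmsg_level = SOL_SOCKET;
        cmsg->cmsg_type = SCM_RIGHTS;
        cmsg->cmsg_len = CMSG_LEN(sizeof(int));
        memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

        return sendmsg(sock, &msg, 0) == 1 ? 0 : -1;
    }

The receiver ends up with its own descriptor for the same underlying buffer, which it can mmap() or hand to the GPU; the media data itself is never copied through the daemon.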
PipeWire

By 2016, Taymans started rethinking the foundations of Pinos, extending its scope to become the standard Linux audio/video daemon. This included the "plenty of tiny buffers" low-latency audio use case typically covered by JACK. There were two main areas that needed to be addressed.

First, the hard dependency on GStreamer elements and pipelines for the core daemon and client libraries proved problematic. GStreamer has plenty of behind-the-scenes logic to achieve its flexibility. When a GStreamer pipeline was processed within the context of the Pinos realtime threads, this flexibility came at the cost of implicit memory allocations, thread creation, and locking -- all actions that are well known to hurt the predictability needed for hard-realtime code. To achieve part of the GStreamer pipelines' flexibility while still satisfying hard-realtime requirements, Taymans created a simpler multimedia pipeline framework and called it SPA -- the Simple Plugin API [PDF]. The framework is designed to be safely executed from realtime threads (e.g. the Pinos media-processing threads), within a specific time budget that should never be exceeded. SPA performs no memory allocations; those are the sole responsibility of the application hosting the SPA framework.

Each SPA node has a well-defined set of states: one for configuring the node's ports, formats, and buffers (done by the main, non-realtime thread); one in which the host allocates all the buffers the node needs after its configuration; and a separate state in which the actual processing is done in the realtime threads. During streaming, if any of the media pipeline nodes changes state (e.g. due to an event), the realtime threads can be notified so that control is switched back to the main thread for reconfiguration.

Second, D-Bus was replaced as the IPC protocol. Instead, a native, fully asynchronous protocol inspired by Wayland -- without the XML serialization part -- was implemented over Unix-domain sockets. Taymans wanted a protocol that is simple and hard-realtime safe.

By the time the SPA framework was integrated and the native IPC protocol was developed, the project had long outgrown its original purpose: from a D-Bus daemon for sharing camera access to a full realtime-capable audio/video bus. It was thus renamed again, to PipeWire -- reflecting its new status as a prominent pipeline-based engine for multimedia sharing and processing.

Lessons learned

From the start, the PipeWire developers applied an essential set of lessons from existing audio daemons like JACK, PulseAudio, and the Chromium OS Audio Server (CRAS). Unlike PulseAudio, which intentionally divided the Linux audio landscape into consumer-grade and professional realtime audio, PipeWire was designed from the start to handle both.

To avoid the PulseAudio sandboxing limitations, security was baked in: a per-client permissions bitfield is attached to every PipeWire node (each of which wraps one or more SPA nodes). This security-aware design allowed easy and safe integration with Flatpak portals -- the sandboxed-application permissions interface that has since been promoted to a freedesktop XDG standard.

Like CRAS and PulseAudio, but unlike JACK, PipeWire uses timer-based audio scheduling: a dynamically reconfigurable timer is used to schedule the wake-ups that fill the audio buffer, instead of depending on a constant rate of sound-card interrupts. Besides the power-saving benefits, this lets the audio daemon provide dynamic latency: high for power-saving and consumer-grade audio like music playback; low for latency-sensitive workloads like professional audio.
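Timer-based scheduling of this kind is typically built on the kernel's timerfd interface, one of the fd-based APIs discussed below. The following sketch (illustrative only, not PipeWire's code) arms a periodic wake-up whose interval can be changed on the fly to trade latency against power:

    #include <stdint.h>
    #include <sys/timerfd.h>
    #include <unistd.h>

    /* Re-arm the wake-up timer; period_ns can be lowered at any time,
     * e.g. when a latency-sensitive (pro-audio) stream appears. */
    static int arm_wakeup(int tfd, long period_ns)
    {
        struct itimerspec its = {
            .it_value    = { .tv_nsec = period_ns },   /* first expiry  */
            .it_interval = { .tv_nsec = period_ns },   /* then periodic */
        };
        return timerfd_settime(tfd, 0, &its, NULL);
    }

    int main(void)
    {
        int tfd = timerfd_create(CLOCK_MONOTONIC, TFD_CLOEXEC);
        uint64_t expirations;

        if (tfd < 0)
            return 1;
        arm_wakeup(tfd, 10 * 1000 * 1000L);   /* 10 ms: relaxed playback */
        for (;;) {
            /* A real daemon would poll this fd along with its sockets in
             * an epoll loop; here we simply block until the timer fires. */
            if (read(tfd, &expirations, sizeof(expirations)) < 0)
                break;
            /* ... mix and queue the next chunk of audio here ... */
        }
        close(tfd);
        return 0;
    }

With a 10 ms period the daemon wakes up 100 times per second; dropping the period (and shrinking the buffers accordingly) moves it toward pro-audio latencies, at the cost of more wake-ups.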
Similar to CRAS, but unlike PulseAudio, PipeWire is not modeled on top of audio-buffer rewinding. When timer-based audio scheduling is used with huge buffers (as in PulseAudio), support for rewriting the sound card's buffer is needed to provide a low-latency response to unpredictable events like a new audio stream or a stream's volume change: the big buffer already sent to the audio device must be revoked and a new one submitted. This has resulted in significant code complexity and corner cases [PDF]. Both PipeWire and CRAS limit the maximum latency/buffering to much lower values -- eliminating the need for buffer rewinding altogether.

Like JACK, PipeWire chose an external-session-manager setup. Professional audio users typically build their own audio pipelines in a session-manager application like Catia or QjackCtl, then let the audio daemon execute the final result. This has the benefit of separating policy (how the media pipeline is built) from mechanism (how the audio daemon executes the pipeline). At GUADEC 2018, developers explicitly asked Taymans [video, 23:15] to let GNOME, and possibly other external daemons, take control of that part of the audio stack. Several system integrators had already run into problems because PulseAudio embeds audio-routing policy decisions deep within its internal modules' code; this was also one of the pain points mentioned in Henningsson's resignation email.

Finally, following the trend of multiple influential system daemons created in the last decade, PipeWire makes extensive use of Linux-kernel-only APIs. This includes memfd, eventfd, timerfd, signalfd, epoll, and dma-buf -- all of which make the file descriptor the primary identifier for events and shared buffers in the system. PipeWire's support for importing dma-buf file descriptors was key in implementing efficient Wayland screen capture and recording. For large 4K and 8K screens, the CPU does not need to touch any of the massive GPU buffers: GNOME Mutter (or a similar application) passes a dma-buf descriptor that can then be integrated into PipeWire's SPA pipelines for further processing and capturing.

Adoption

The native PipeWire API has been declared stable since the project's major 0.3 release. Existing raw ALSA applications are supported through a PipeWire ALSA plugin. JACK applications are supported through a reimplementation of the JACK client libraries, plus the pw-jack tool for when the native JACK and PipeWire JACK libraries are installed in parallel. PulseAudio applications are supported through a pipewire-pulse daemon that listens on PulseAudio's own socket and implements its native communication protocol; this way, containerized desktop applications that ship their own copy of the native PulseAudio client libraries are still supported. WebRTC, the communication framework (and code) used by all major browsers, includes native PipeWire support for Wayland screen sharing -- mediated through a Flatpak portal.

The graph below shows a PipeWire media pipeline on an Arch Linux system, generated using pw-dot and then slightly beautified. A combination of PipeWire-native and PulseAudio-native applications is shown:

    [Graph: a PipeWire media pipeline]

On the left, both GNOME Cheese and a GStreamer pipeline instance created with gst-launch-1.0 are accessing the same camera feed in parallel. In the middle, Firefox is sharing the system screen (for a Jitsi meeting) using WebRTC and Flatpak portals. On the right, the Spotify music player (a PulseAudio app) is playing audio, which is routed to the system's default ALSA sink -- with GNOME Settings (another PulseAudio app) live-monitoring the left/right channel status of that sink.
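For existing ALSA clients, all of this redirection is invisible. A minimal playback loop like the one below (an illustrative sketch, not taken from any particular project) works unchanged; whether its "default" device resolves to the hardware directly, to PulseAudio's ALSA plugin, or to PipeWire's is decided purely by the installed ALSA configuration:

    #include <alsa/asoundlib.h>   /* link with -lasound */

    int main(void)
    {
        short silence[2 * 480] = { 0 };   /* 10 ms of stereo S16 at 48 kHz */
        snd_pcm_t *pcm;

        /* "default" is resolved through the ALSA configuration files; the
         * PipeWire (or PulseAudio) ALSA plugin hooks in right here. */
        if (snd_pcm_open(&pcm, "default", SND_PCM_STREAM_PLAYBACK, 0) < 0)
            return 1;

        snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
                           SND_PCM_ACCESS_RW_INTERLEAVED,
                           2, 48000, 1 /* allow resampling */,
                           20000 /* 20 ms of latency */);

        for (int i = 0; i < 100; i++)     /* about one second of silence */
            snd_pcm_writei(pcm, silence, 480);

        snd_pcm_drain(pcm);
        snd_pcm_close(pcm);
        return 0;
    }

Roughly speaking, pw-jack works the same way for JACK clients: it arranges for the PipeWire reimplementation of the JACK client library to be loaded instead of the native one.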
On the Linux distributions side of things, Fedora has been shipping the PipeWire daemon (only for Wayland screen capture) since its Fedora 27 release. Debian offers PipeWire packages, but replacing PulseAudio or JACK is "an unsupported use case." Arch Linux provides PipeWire in its central repository and officially offers extra packages for replacing both PulseAudio and JACK, if desired. Similarly, Gentoo provides extensive documentation for replacing both daemons. The upcoming Fedora 34 release will be the first distribution release to have PipeWire fully replace PulseAudio by default and out of the box.

Overall, this is a critical period in the Linux multimedia scene. While open source is a story about technology, it's also a story about the people hard at work creating it. There has been notable agreement from both PulseAudio and JACK developers that PipeWire and its author are on the right track. The upcoming Fedora 34 release should provide a litmus test for PipeWire's adoption by Linux distributions moving forward.

Index entries for this article: GuestArticles: Darwish, Ahmed S.

Comments

PipeWire: The Linux audio/video bus
Posted Mar 2, 2021 21:55 UTC (Tue) by josh (subscriber, #17465)

Is anyone aware of any plans to support Chromecast devices in Pipewire? I'd love to be able to route playing audio or video to a TV.

PipeWire: The Linux audio/video bus
Posted Mar 2, 2021 23:14 UTC (Tue) by tchernobog (subscriber, #73595)

I am not sure encoding and network playback is a goal for Pipewire on its own. Maybe through gstreamer?

PipeWire: The Linux audio/video bus
Posted Mar 3, 2021 0:29 UTC (Wed) by josh (subscriber, #17465)

I wouldn't expect it to be directly in Pipewire; rather, it'd be nice to have a Pipewire output sink that could stream audio and video to a device, with Pipewire handling low-latency provision of data, and the output sink doing hardware-accelerated encoding and streaming.

PipeWire: The Linux audio/video bus
Posted Mar 3, 2021 21:29 UTC (Wed) by westurner (guest, #145208)

How could mkchromecast most optimally include support for PipeWire? From https://github.com/muammar/mkchromecast :

> [mkchromecast] is a program to cast your macOS audio, or Linux audio to your Google Cast devices or Sonos speakers. It can also cast video files.

> It is written for Python3, and it can stream via node.js, parec (Linux), ffmpeg, or avconv. Mkchromecast is capable of using lossy and lossless audio formats provided that ffmpeg, avconv (Linux), or parec (Linux) are installed. It also supports Multi-room group playback, and 24-bit/96kHz high audio resolution. Additionally, a system tray menu is available.

https://github.com/topics/chromecast

PipeWire: The Linux audio/video bus
Posted Mar 3, 2021 16:22 UTC (Wed) by NightMonkey (subscriber, #23051)

For my use-case (my home network, HiFiBerry analogue-to-digital converters, Raspberry Pis with USB digital-to-analogue converters, and analogue audio playback setups), the network audio slinging capability is actually my favorite part of PulseAudio. :) I do hope that use-case gets even more robust with PipeWire. The way PulseAudio seems to magically keep video and audio in sync when playing video locally but directing the audio over a WiFi network is impressive.
(Even SnapCast can't do that, but it wasn't built to handle video/audio synchronization.) Cheers!

Please don't ask for Chromecast anywhere
Posted Mar 3, 2021 18:10 UTC (Wed) by fratti (subscriber, #105722)

Support a proprietary undocumented protocol that requires loading a Google-controlled web page and is merely a glorified backdoored HLS streaming server? Why, that sounds grand; please repeatedly request it on the mpv issue tracker as well.

Please don't ask for Chromecast anywhere
Posted Mar 3, 2021 20:52 UTC (Wed) by josh (subscriber, #17465)

I'm sorry to hear that you've had issues with unhelpful or overly expectant bug reports. Please don't project that onto everyone who's interested in seeing two technologies work together. I asked if someone had any efforts to make the two work together. I did *not* imply at any point that it was someone's job or obligation to do so.

PipeWire: The Linux audio/video bus
Posted Mar 3, 2021 2:03 UTC (Wed) by gerdesj (subscriber, #5446)

This: https://bugzilla.redhat.com/show_bug.cgi?id=1906086 is quite devoid of complete meltdown, which is a good sign. I've been running the thing on Arch for a month or two now and it seems quite stable. I plug in my USB headset and fire up Teams (*sigh*, yes) and usually I get better audio than my Windows-sporting colleagues. At least one or two will have a broken audio setup at any time. Then there's that SOF project thing. It all seems to hang together for me at the moment, which is nice. When my Samsung bluetooth earbud jobbies start working fully (with mic) then it'll all be golden. I am aware that there is quite a lot of stuff involved in that, most of which I don't understand!

PipeWire: The Linux audio/video bus
Posted Mar 3, 2021 2:42 UTC (Wed) by Subsentient (subscriber, #142918)

Nobody liked Pulseaudio, but I hope the transition here goes a whole lot smoother than the ALSA to Pulseaudio one did. I don't want to be out of luck for fixing broken shit for the next 3 years while the world gets used to PipeWire.

PipeWire: The Linux audio/video bus
Posted Mar 3, 2021 4:32 UTC (Wed) by darwi (subscriber, #131202)

> Nobody liked Pulseaudio, but I hope the transition here goes a whole lot smoother than the ALSA to Pulseaudio one did.

I had no space left in the article to further expand on this, so I'll write it here: breaking users' audio was not always PulseAudio's fault. It was the first heavy user of a certain class of ALSA APIs like audio buffer rewinding; e.g., snd_pcm_rewind() and friends. The proprietary Adobe Flash plugin, prevalent back then, also caused its own unique set of problems.

Nonetheless, yes, PulseAudio (and especially its glitch-free support) was pushed too early on users. "I'll break your audio", the constant tongue-in-cheek remark by PulseAudio's lead developer back then, did not always help either. Humility, and some sympathy for users with broken audio setups, were definitely lacking.

The good news is that the community as a whole learned a lot from the ALSA-PulseAudio transition fiasco. Remember that everyone is also more than a decade older now... Everyone is (hopefully) a little bit wiser.

By the way, the PipeWire developers are keenly aware of the previous audio API transition issues. This was discussed at length in the PipeWire 2018 hackfest: an adults-in-the-room, orderly PulseAudio-PipeWire transition was planned early on.
I'm personally optimistic :) [Famous last words?]

PipeWire: The Linux audio/video bus
Posted Mar 3, 2021 4:48 UTC (Wed) by Subsentient (subscriber, #142918)

> Remember that everyone is also more than a decade older now... Everyone is (hopefully) a little bit wiser.

You think so?

PipeWire: The Linux audio/video bus
Posted Mar 3, 2021 13:44 UTC (Wed) by pizza (subscriber, #46)

> The good news is that the community as a whole learned a lot from the ALSA-PulseAudio transition fiasco. Remember that everyone is also more than a decade older now

It's probably more accurate to say that a _huge_ number of ALSA and application[+library] bugs were uncovered and fixed during the early PA days, and that work continues to pay dividends, making future migrations much easier.

(The main technical reason for the ease of the PA->PW migration is that PA and PW both provide ALSA plugins so that software natively using the ALSA API continues to JustWork, and of course PW provides a drop-in PA library replacement.)

PipeWire: The Linux audio/video bus
Posted Mar 3, 2021 16:55 UTC (Wed) by HenrikH (subscriber, #31152)

> Nonetheless, yes, PulseAudio (and especially its glitch-free support) was pushed too early on users

I beg to differ: the only reason that the ALSA drivers were ultimately fixed, that PA became usable, and that software started to use PA was that it was pushed out to all users as early as it was. Had it not been done this way, PA would be closer to where Wayland is today and PipeWire would have been decades away in the future. The real QA for any software project only comes on the day that you get it pushed to end users; it doesn't matter how much internal QA you have done.

PipeWire: The Linux audio/video bus
Posted Mar 3, 2021 8:52 UTC (Wed) by chris_se (subscriber, #99706)

> Nobody liked Pulseaudio

I'd like to disagree here. I don't love PulseAudio, but with PulseAudio it was the first time I actually got audio working somewhat reliably on Linux. Previously audio was always a real pain, and it never worked reliably at all. I remember bare OSS, bare ALSA, and even such things as aRts. I always spent a long time trying to get it to work somehow, and then some application came along that didn't work with the setup and I had to figure stuff out again.

Note that I'm the opposite of an audiophile; my only requirements are that the sound comes out of the device I want it to, that I can dynamically plug in a headset, that I can adjust the volume, that there are no cracks due to buffer underruns, that multiple applications can output audio at the same time, and that I can change settings easily via a GUI. I don't think these requirements are particularly onerous (I don't even care about stereo audio; I'd be more than happy with mono), but pre-PulseAudio I was never able to achieve even these basic things.

Once PulseAudio came, the first couple of releases weren't great, but no worse than my prior experience -- and after a while it actually became reliable. It was the first time I could actually use a Bluetooth headset. (I experimented back when support first appeared in the audio stack before PulseAudio, and that barely worked at all.) Sure, some things still didn't work quite as well as I'd hoped, and there are some issues, but for me PulseAudio was a major improvement compared to everything that came before it.
I think PulseAudio has a reputation that is worse than it deserves by a long shot. That said, PipeWire does sound extremely promising, and I'm very excited to try it out at some point when I can find some time, because I do sometimes bump into some of the warts of PulseAudio.

PipeWire: The Linux audio/video bus
Posted Mar 3, 2021 13:14 UTC (Wed) by ms-tg (subscriber, #89231)

I wonder how we are culturally at the point where it seems worth the time to write a comment simply to agree that I shared this experience. PulseAudio was a huge improvement for me over what came before. Am I misremembering, or wasn't the prior state that we would have apps crash with "could not open ..." if two apps were trying to make sound at once?

PipeWire: The Linux audio/video bus
Posted Mar 3, 2021 13:26 UTC (Wed) by pizza (subscriber, #46)

> PulseAudio was a huge improvement for me over what came before. Am I misremembering, or wasn't the prior state that we would have apps crash with "could not open ..." if two apps were trying to make sound at once?

Followed by trying to figure out exactly what application was holding the sound card open. Or a zombified sound-mixing daemon, of which there were several. (Unless you were one of the fortunate few to have a sound card that could natively handle multiple simultaneous streams!)

Still, even in its early days, PA was a _huge_ net improvement, and most of the issues that folks attributed to PA were really bugs in the underlying device drivers or applications themselves. (I recall the Flash plugin being one of the worst offenders.)

(Something similar played out with NetworkManager/wpa_supplicant and wifi devices that didn't implement the WEXT APIs consistently, and/or applications that were written expecting the quirks of a single driver to apply everywhere.)

PipeWire: The Linux audio/video bus
Posted Mar 3, 2021 15:47 UTC (Wed) by chris_se (subscriber, #99706)

> Am I misremembering, or wasn't the prior state that we would have apps crash with "could not open ..." if two apps were trying to make sound at once?

All of the other "sound servers", such as aRts and ESD, were created for just this reason. For programs that weren't compatible with them (basically anything not KDE / not GNOME) you had to start them with a wrapper that used an LD_PRELOAD library to hijack the system calls that open the ALSA/OSS devices and reroute them through aRts/ESD -- but that didn't work with all software, so you sometimes had to kill the sound servers to use certain applications. And then other programs using the sound server would misbehave. And don't get me started on suspend-to-RAM, which typically killed any application that was currently outputting audio when you closed your laptop lid...

I do remember ALSA having native software mixing, but from what I recall it required all applications to use the same sample rate for their audio, which of course was often not the case. (Maybe that's changed in the meantime?) And you had to configure it manually in your ~/.asoundrc. Also, I don't know the exact time this was added, but I remember seeing ALSA software mixing for the first time when PulseAudio was already a thing, so maybe it came way too late. Or at the very least it wasn't advertised very well, because I didn't read about it before I was already switching to PulseAudio.
PipeWire: The Linux audio/video bus
Posted Mar 3, 2021 19:32 UTC (Wed) by dezgeg (subscriber, #92243)

At least my personal recollection is that there was a period where ALSA software mixing (without any tweaking of .asoundrc or anything, on most distros) was working very well -- better than PulseAudio -- as in, many audio problems could be solved by a "killall pulseaudio" (I do not remember if root was required for this or not). However this was a long time (years) ago, definitely before I owned any USB or Bluetooth exclusive audio devices. I do not even remember when "killall pulseaudio" last solved anything.

And regardless of that, being able to adjust sound levels per-application with PA has been very useful (the lack of it is not a show stopper, but it is very nice to have).

PipeWire: The Linux audio/video bus
Posted Mar 3, 2021 21:49 UTC (Wed) by ncm (subscriber, #165)

Everybody had trouble with PA, even people who liked it in principle. LP wasn't joking about breaking sound: things did break, many, many times for many, many people, for years. And almost always the only information readily available about what went wrong was just sound no longer coming out, or going in. And almost always the reliable fix was to delete PA. But it really was often a consequence of something broken outside of PA. That doesn't mean there was always nothing the PA developers could do, and often they did. The only way it all ended up working as well as it does today -- pretty well -- is that those things finally got done, and bulldozed through the distro release pipelines. The result was that we gradually stopped needing to delete PA. Gstreamer crashed all the damn time, for a very long time, too. I never saw PA crash much.

The thing is, all that most of us wanted, almost all the time, was for exactly one program to operate on sound at any time, with exactly one input device and one output device. UI warbling and meeping was never a high-value process. Mixing was most of the time an unnecessary complication and source of latency. The only complicated thing most of us ever wanted was to change routing to and from a headset when it was plugged or unplugged. ALSA was often wholly good enough at that. To this day, I have UI warbling and meeping turned off, not because it is still broken or might crash gstreamer, but because it is a net-negative feature. I am happiest that it is mostly easy to turn off. (I *wish* I could make my phone not scritch every damn time it sees a new wifi hub.)

Pipewire benefits from things fixed to make PA work, so I have expectations that the transition will be quicker. But Pipewire is (like PA and systemd) coded in a language that makes correct code much harder to write than buggy, insecure code; and Pipewire relies on kernel facilities that are not always especially mature. Those are both risk factors. I would be happier if Pipewire were coded in modern C++ (Rust is -- let's be honest, at least with ourselves! -- not portable enough yet), for reliability and security. I would be happier if it used only mature kernel features in its core operations, and dodgy new stuff only where needed for correspondingly dodgy Bluetooth configurations that nobody, seriously, expects ever to work anyway.

What would go a long way toward smoothing the transition would be a way to see, graphically, where it has stopped working.
The graph in the article, annotated in real time with flow rates, sample rates, bit depths, buffer depths, and attenuation figures, would give us a hint about what is failing, with a finer resolution than "damn Pipewire". If we had such a thing for PA, it might have generated less animosity.

PipeWire: The Linux audio/video bus
Posted Mar 3, 2021 22:35 UTC (Wed) by pebolle (subscriber, #35204)

> Everybody had trouble with PA, even people who liked it in principle.

Please speak for yourself.

PipeWire: The Linux audio/video bus
Posted Mar 3, 2021 9:38 UTC (Wed) by zuki (subscriber, #41808)

Yep. It seems to be going surprisingly smoothly in F34 beta. Issues that are being seen are mostly related to bluetooth -- and bluetooth support was always a bit iffy. Often it's not even clear if pipewire is relevant at all, or if changes in bluez or the kernel are the cause. The overall feeling I get from my testing, and from what other people report, is that things at this point are running just as well as with pulseaudio or marginally better.

PipeWire: The Linux audio/video bus
Posted Mar 3, 2021 9:31 UTC (Wed) by dottedmag (subscriber, #18590)

> XML serialization part

This is confusing. XML protocol description, not serialization, right?

PipeWire: The Linux audio/video bus
Posted Mar 3, 2021 10:59 UTC (Wed) by smcv (subscriber, #53363)

> Debian offers PipeWire packages, but replacing PulseAudio or JACK is "an unsupported use case."

The ability to use PipeWire to replace the other audio services didn't arrive at a great time for the Debian 11 release cycle -- in the versions that were available at freeze time, it can be made to work, but it didn't seem mature enough to support for two years with minimal changes. The packaging is set up to make it possible, but making it straightforward without introducing regressions will need help from domain experts.

The maintainer-of-record for Debian's PipeWire packaging has been busy with other things, so the 0.2 -> 0.3 transition had to be done by other contributors (such as me) in order to keep screen sharing and screencasting working in GNOME, and I wouldn't feel comfortable pushing forward a transition to PipeWire for audio at its current level of maintainer bandwidth. I suspect PipeWire for audio might become the recommendation in the Debian 12 cycle if more knowledgeable maintainers step in.

PipeWire: The Linux audio/video bus
Posted Mar 3, 2021 21:19 UTC (Wed) by paravoid (subscriber, #32869)

Thank you so much for all of the efforts here :) The timing is indeed unfortunate! It's kind of a pity that we're missing all the 0.3.{20,21,22} improvements though (AIUI Fedora will release with .22). Perhaps it would be worth asking the Debian release team for a freeze exception for 0.3.22 here, so that users could at least have the same level of experience as Fedora 34 users by opting in?

PipeWire: The Linux audio/video bus
Posted Mar 3, 2021 15:07 UTC (Wed) by clump (subscriber, #27801)

I'd be curious whether PipeWire really could replace JACK right now. JACK has been dragged through the wringer to get latency extremely low, for quite a long time. While mentioned in the article, it's not just low latency that a JACK-based workflow enables. JACK's workflow enables extremely flexible signal routing.
It's quite easy with command-line or GUI tools to take a guitar signal from an external capture device and route it through an effects stack like Guitarix and into a DAW like Ardour. What if you'd like to keep the dry signal at the same time? Simply route the raw signal into a new track in the DAW. What if you want one signal or the other to be played through certain speakers? Simply adjust accordingly in JACK's tools. And so on.

Some of the early discussion on Linux Musicians is promising, however: https://linuxmusicians.com/viewtopic.php?f=27&t=22150

PipeWire: The Linux audio/video bus
Posted Mar 3, 2021 17:43 UTC (Wed) by jsmith45 (guest, #125263)

My understanding is that basically all the routing flexibility of JACK is fully present, and works. From what I've read, not only does it work, but the existing tools you use to manage the JACK graph will let you re-configure the audio graph for PulseAudio API applications too, as though they had magically become JACK apps. This is a very different experience from the nearly-impossible-to-set-up PulseAudio over JACK, where you could not configure each individual PulseAudio stream.

PipeWire: The Linux audio/video bus
Posted Mar 3, 2021 20:22 UTC (Wed) by andresfreund (subscriber, #69562)

In the bug tracker for pipewire there were people using it for pretty complex jack setups. Seems perf / latency unsurprisingly isn't on par yet, but improving.

PipeWire: The Linux audio/video bus
Posted Mar 3, 2021 22:16 UTC (Wed) by Wol (subscriber, #4433)

But that's rather irrelevant. As I understood it, the PRIMARY motivation behind Jack was the minimal latency. If you haven't got that, you won't even be considered as a Jack replacement, whatever else you may offer.

Cheers,
Wol

PipeWire: The Linux audio/video bus
Posted Mar 3, 2021 18:17 UTC (Wed) by dbnichol (subscriber, #39622)

I'm not a JACK user, but https://feaneron.com/2020/12/07/switching-to-pipewire/ shows a JACK application in action on pipewire.

PipeWire: The Linux audio/video bus
Posted Mar 3, 2021 18:18 UTC (Wed) by dancol (subscriber, #142293)

As an aside, isn't it time to increase default file descriptor limits? In a world where literally everything is a file descriptor --- and that's a good world! --- a process should be able to create more than a few thousand file descriptors. The current low limits feel antiquated to me.

PipeWire: The Linux audio/video bus
Posted Mar 3, 2021 20:57 UTC (Wed) by josh (subscriber, #17465)

Yes, absolutely. Someone just needs to take this upstream with a patch and clear justification.

PipeWire: The Linux audio/video bus
Posted Mar 3, 2021 19:33 UTC (Wed) by g2boojum (subscriber, #152)

This article was very well written, explaining a lot of arcane stuff quite clearly. Thanks!

PipeWire: The Linux audio/video bus
Posted Mar 3, 2021 20:28 UTC (Wed) by jafd (subscriber, #129642)

There's one thing that I cannot find an answer for, or the answer is evasive: right now I have a few things with speakers attached that speak AirPlay. Most of my family has at least one Mac, or iPhone, or an iPad, so those just work. My setup works because PulseAudio works with AirPlay too. After many years it's in the state where it just works without me thinking about it too much. A Linux desktop nirvana.
Does it mean that if I switch to Pipewire now, I'm going to lose this, because Pipewire audio cannot into network? On that note, can Pipewire and PulseAudio run side by side (using, say, different sockets), with Pulse handling the AirPlay stuff and Pipewire piping audio to it as needed?

PipeWire: The Linux audio/video bus
Posted Mar 3, 2021 22:50 UTC (Wed) by flussence (subscriber, #85566)

PulseAudio handles all audio (AirPlay, Bluetooth, other network layers, even ALSA) via internal plugins, so those would have to be ported to PW's plugin API. Which sounds like a fairly tractable task for someone with the hardware and the motivation, as it's at least a well-defined and thought-out API from what I can see.

In the meantime, if PW can be told to spin up a Pulse-compatibility network socket on localhost (I haven't looked into it, but I know it has *some* PA emulation) then you can use that for duplex audio as you would from a networked Pulse daemon. Be warned though that Pulse will choke badly on latency in the audio routing layer, so minimise how many physical network hops it uses. Learned that the hard way.

Copyright (c) 2021, Eklektix, Inc. Comments and public postings are copyrighted by their creators. Linux is a registered trademark of Linus Torvalds.