[HN Gopher] PipeWire: The Linux audio/video bus
       ___________________________________________________________________
        
       PipeWire: The Linux audio/video bus
        
       Author : darwi
       Score  : 360 points
       Date   : 2021-03-03 13:05 UTC (9 hours ago)
        
 (HTM) web link (lwn.net)
 (TXT) w3m dump (lwn.net)
        
       | symlinkk wrote:
       | I have never had problems with audio on Linux. What problems does
       | this solve?
        
         | MayeulC wrote:
         | Better sandboxing, basically.
         | 
         | A system was needed for video, turns out it was a good fit for
         | audio.
         | 
         | Audio and video aren't that different, TBH (audio just has more
         | alpha/blending rules, and lower tolerance on missed frames;
         | video has higher bandwidth requirements). Wouldn't surprise me
         | if both pipelines eventually completely converge. Both "need"
         | compositors anyways.
        
         | tremon wrote:
         | Same thing that ALSA, esd, Pulseaudio and Phonon solved: the
         | previous incarnation itched.
        
         | SilverRed wrote:
          | Consumer audio already works reasonably well, but this
          | apparently has massive improvements for Bluetooth, especially
          | the HFP profile, which is used with a headset's built-in mic.
          | 
          | The main benefit imo is for pro audio, so you don't need to
          | configure separate tools and manually swap between Pulse and
          | JACK every time you want pro audio.
          | 
          | It also manages permissions to record audio and the screen for
          | Wayland users.
        
       | yewenjie wrote:
        | Can somebody please elaborate on what this means for a user who
        | installed `pulseaudio` once long ago and has never had to bother
        | about audio since?
        
         | eulers_secret wrote:
         | I'm curious as well.
         | 
         | I've seen lots of folks talking about pipewire, but I'm a
         | simple audio user - I want software mixing and audio out via a
         | headphone jack and that's all.
         | 
         | I'm pretty sure for most folks we'll just wait until our distro
         | decides to move over, it'll happen in the background, and we'll
         | not notice or care.
        
       | robotbikes wrote:
        | This is an exciting development. As someone who has supported
        | desktop Linux audio for community radio users, I've found it
        | very frustrating at times how things don't work. I remember
        | going to a presentation on Linux audio at Ohio Linux Fest a
        | decade ago, and recently I decided to dive in and see what the
        | best solution would be for a user-friendly and foolproof audio
        | setup (easier said than done). I found that JACK is still too
        | complicated for novices to set up, and PulseAudio can just be
        | inconsistent. So PipeWire seems like it has a lot of potential,
        | and I'm excited that people are working on this. It'll perhaps
        | make Linux audio better able to compete with CoreAudio and
        | whatever audio subsystem Windows uses. I especially appreciate
        | that the flexibility and modularity allow for both professional
        | and consumer applications. The future is bright.
        
       | ElijahLynn wrote:
        | I actually read the whole thing, as I've been wondering for a
        | while now what this new word, PipeWire, is. I understood maybe
        | 30% of it, and I think I'll get even more out of it in the
        | future having read this.
        
       | josteink wrote:
       | I wanted to give this a spin, but it's seemingly not packaged in
       | a meaningful way on Ubuntu yet. That is, there is no pipewire-
       | pulse, pipewire-jack, etc.
       | 
       | Oh well. Maybe next version?
        
       | moistbar wrote:
       | Does anyone know if PipeWire can do audio streaming like
       | PulseAudio can? I had a rather nice setup using a raspberry pi
       | and an old stereo system a while back that I'd like to replicate.
        
         | cycloptic wrote:
         | TCP sockets appear to still be supported by pipewire-pulse.
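          | 
          | For reference, something along these lines in
          | /etc/pipewire/pipewire-pulse.conf should expose the native
          | PulseAudio protocol over TCP (a sketch; the exact key names
          | may vary between PipeWire versions):
          | 
          |     pulse.properties = {
          |         server.address = [
          |             "unix:native"
          |             "tcp:4713"    # the standard PulseAudio TCP port
          |         ]
          |     }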
        
       | sp1rit wrote:
        | I'm currently trying PipeWire on openSUSE Tumbleweed. I'm very
        | impressed with it so far.
        | 
        | (After realizing it was broken because I didn't have the
        | pipewire-alsa package installed => no audio devices.) The Pulse
        | drop-in worked flawlessly out of the box. I did have some issues
        | with the JACK drop-in libraries though (metallic voice,
        | basically unusable). To fix this, I had to change the sample
        | rate in /etc/pipewire/pipewire.conf from the default 48000 to
        | 44100.
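        | 
        | For reference, the knob in question lives under
        | context.properties (a sketch; this is the upstream key name,
        | and your distro's default config may differ):
        | 
        |     context.properties = {
        |         # default sample rate of the graph (48000 by default)
        |         default.clock.rate = 44100
        |     }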
        
       | tylerjl wrote:
       | > Second, D-Bus was replaced as the IPC protocol. Instead, a
       | native fully asynchronous protocol that was inspired by Wayland
       | -- without the XML serialization part -- was implemented over
       | Unix-domain sockets. Taymans wanted a protocol that is simple and
       | hard-realtime safe.
       | 
       | I'm surprised to read this; I was under the impression that D-Bus
       | was the de jure path forward for interprocess communication like
       | this. That's not to say I'm disappointed - the simpler, Unix-y
       | style of domain sockets sounds much more in the style of what I
       | hope for in a Linux service. I've written a little bit of D-Bus
       | code and it always felt very ceremonial as opposed to "send bytes
       | to this path".
       | 
       | Are there any discussions somewhere about this s/D-Bus/domain
       | socket/ trend, which the article implies is a broader movement
       | given Wayland's similar decision as well?
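        | 
        | The "send bytes to this path" style really is about this small;
        | a minimal sketch of a Unix-domain-socket client in C (the socket
        | path and message are made up for illustration, error handling
        | mostly omitted):
        | 
        |     #include <stdio.h>
        |     #include <string.h>
        |     #include <sys/socket.h>
        |     #include <sys/un.h>
        |     #include <unistd.h>
        | 
        |     int main(void)
        |     {
        |         /* Connect to a hypothetical service on a socket path. */
        |         int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        |         struct sockaddr_un addr = { .sun_family = AF_UNIX };
        |         strncpy(addr.sun_path, "/run/user/1000/demo.sock",
        |                 sizeof(addr.sun_path) - 1);
        |         if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        |             perror("connect");
        |             return 1;
        |         }
        | 
        |         /* No object model, no XML: the protocol is whatever
        |          * bytes the two sides agree on. */
        |         const char msg[] = "hello";
        |         write(fd, msg, sizeof(msg));
        |         close(fd);
        |         return 0;
        |     }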
        
         | zajio1am wrote:
          | That sounds great! D-Bus always seemed to me like a byzantine,
          | overcomplicated design, and I was pleasantly surprised when I
          | saw the Wayland protocol with its simple design.
        
           | bitwize wrote:
           | Maybe you haven't read Havoc Pennington's posts on why dbus
           | was designed the way it was and the problems it solves. Start
           | here: https://news.ycombinator.com/item?id=8649459
           | 
           | Dbus is about the _simplest_ approach that solves the issues
           | that need addressing.
        
         | cycloptic wrote:
          | D-Bus isn't suitable for realtime. Getting that to work would
          | require additional changes within the D-Bus daemon to add
          | realtime scheduling, and even with all that, it would still
          | introduce latency because it requires an extra context switch:
          | client -> dbus-daemon -> pipewire. Maybe they could have
          | re-used the D-Bus wire format? That's the only bit that might
          | have been suitable.
        
           | vlovich123 wrote:
           | To this day I still don't understand why messages are routed
           | through dbus-daemon instead of just using FD-passing to
           | establish the p2p connection directly. I remember we were
           | using D-Bus on WebOS @ Palm & a coworker rewrote the DBus
           | internals (keeping the same API) to do just that & the
           | performance win was significant (at least 10 years ago).
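            | 
            | For the curious, the fd passing itself is done with
            | SCM_RIGHTS ancillary data over a Unix socket; a rough sketch
            | of the sending side (error handling omitted):
            | 
            |     #include <string.h>
            |     #include <sys/socket.h>
            |     #include <sys/uio.h>
            | 
            |     /* Send `fd` across the connected Unix socket `sock`,
            |      * so the peer can then talk to it directly with no
            |      * daemon in the middle. */
            |     static int send_fd(int sock, int fd)
            |     {
            |         char byte = 0;  /* must carry at least one byte */
            |         struct iovec iov = { .iov_base = &byte, .iov_len = 1 };
            |         char ctrl[CMSG_SPACE(sizeof(int))];
            |         memset(ctrl, 0, sizeof(ctrl));
            | 
            |         struct msghdr msg = {
            |             .msg_iov = &iov, .msg_iovlen = 1,
            |             .msg_control = ctrl, .msg_controllen = sizeof(ctrl),
            |         };
            |         struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
            |         cmsg->cmsg_level = SOL_SOCKET;
            |         cmsg->cmsg_type = SCM_RIGHTS;
            |         cmsg->cmsg_len = CMSG_LEN(sizeof(int));
            |         memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));
            |         return sendmsg(sock, &msg, 0);
            |     }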
        
             | cycloptic wrote:
             | Among other things, pushing everything through the message
             | bus allows for global message ordering, and security
             | policies down to the individual message. Rewriting the
             | internals would work in an embedded situation like that
             | where every application is linking against the same version
             | of libdbus, but that is not really the case on a desktop
             | system, where there are multiple different D-Bus protocol
             | implementations.
             | 
             | If applications have hard performance requirements, most
             | D-Bus implementations do have support for sending peer-to-
             | peer messages, but applications have to set up and manage
             | the socket themselves.
        
               | admax88q wrote:
               | Also lets you restart either end of the connection
               | transparently to the other end.
               | 
                | With fd passing, if the daemon I'm talking to dies or
                | restarts, my fd is now stale and I have to get another.
               | 
               | Also allows starting things on demand similar to inetd.
               | 
               | Also allows transparent multicast.
               | 
               | So yeah, fd passing would be faster, but routing through
               | the daemon is easier.
        
               | cycloptic wrote:
               | I didn't mention those because in theory a lot of that
               | could be done by the library, or done by the daemon
               | before passing off the fd for a peer-to-peer connection.
               | (If a connection dies, the library would transparently
               | handle that by sending a request back to the daemon for
               | another connection, etc) But of course another thing that
               | having a message bus allows you to do is reduce the
                | number of fds that a client has to poll on to just one
               | for the bus socket.
        
         | nerdponx wrote:
         | I don't know of any discussions on it, but I like it. Client-
         | server architectures seem like a Good Thing, and I'm growing to
         | like the idea of a small handful of core "system busses" that
         | can interoperate with each other.
        
           | hedora wrote:
           | The problem with these busses is that each hand rolls its own
           | security primitives and policies.
           | 
           | This sort of thing is better handled by the kernel, with
           | filesystem device file permissions. As a bonus, you save
           | context switching into the bus userspace process on the fast
           | path. So, "the unix way" is simpler, faster and more secure.
        
             | aseipp wrote:
             | File permissions are completely insufficient to achieve the
             | kind of design that PipeWire is aiming for. None of the
              | problems outlined in the article regarding PulseAudio
              | (e.g. the ability of applications to interfere with or
              | snoop on each other, or to request that code be loaded
              | into a shared address space with unlimited access) can be
              | easily handled with file permissions at all. The model is simply
             | not expressive enough; no amount of hand-wringing about
             | context switching will change that. This is one of the
             | first things addressed in the article and it's very simple
             | to see how file permissions aren't good enough to solve it.
        
             | cycloptic wrote:
             | That won't work here, the design of pipewire is to allow
             | for things like a confirmation dialog appearing when an
             | application tries to use the microphone or webcam, the
             | application can then get temporary access to the device.
             | That is a security policy that isn't really easy to do with
             | filesystem device file permissions.
        
         | aseipp wrote:
         | Well, D-Bus was originally designed to solve the problem of...
         | a message bus. So you can pass messages down the bus and
         | multiple consumers can see it, you can call "into" other bus
          | services as a kind of RPC, etc. Even today, there's no real
          | alternative natively-built solution to the message-bus problem
          | for Linux. There have been various proposals to solve this
          | directly in Linux (e.g. multicast AF_UNIX, bus1, kdbus) but
         | they've all hit various snags or been rejected by upstream.
         | It's something Linux has always really lacked as an IPC
         | primitive. The biggest is multicast; as far as I know there's
         | just no good way to write a message once and have it appear
         | atomically for N listeners, without D-Bus...
         | 
         | Now, the more general model of moving towards "domain sockets"
         | and doing things like giving handles to file descriptors by
         | transporting them over sockets, etc can all be traced back to
         | the ideas of "capability-oriented security". The idea behind
         | capability oriented security is very simple: if you want to
         | perform an operation on some object, you need a handle to that
         | object. Easy!
         | 
         | For example, consider rmdir(2). It just takes a filepath. This
         | isn't capability-secure, because it requires ambient authority:
         | you simply refer to a thing by name and the kernel figures out
         | if you have access, based on the filesystem permissions of the
         | object. But this can lead to all kinds of huge ramifications;
         | filesystem race conditions, for instance, almost always come
         | down to exploiting ambient authority.
         | 
         | In contrast, in a capability oriented design, rmdir would take
         | a _file descriptor_ that pointed to a directory. And you can
         | only produce or create this file descriptor either A) from a
         | more general, permissive file descriptor or B) on behalf of
         | someone else (e.g. a privileged program passes a file
         | descriptor it created to you over a socket.... sound familiar,
         | all of a sudden?) And this file descriptor is permanent,
         | immutable, and cannot be turned into  "another" descriptor of
         | any kind that is more permissive. A file descriptor can only
         | become "more restrictive" and never "more permissive" -- a
         | property called "capability monotonicity." You can extend this
         | idea basically as much as you want. Capabilities (glorified
         | file descriptors) can be extremely granular.
         | 
         | As an example, you might obtain a capability to your homedir
         | (let's say every process, on startup, has such a capability.)
         | Then you could turn that into a capability for access to
         | `$HOME/tmp`. And from that, you could turn it into a read-only
         | capability. And from that, you could turn it into a read-only
         | capability for _exactly one file_. Now, you can hand that
         | capability to, say, gzip as its input file. Gzip can now never
         | read from any other file on the whole system, no matter if it
         | was exploited or ran malicious code.
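          | 
          | On Linux you can approximate that attenuation chain today with
          | directory file descriptors and openat(2); a sketch (paths
          | illustrative, error handling omitted):
          | 
          |     #include <fcntl.h>
          |     #include <stdlib.h>
          |     #include <unistd.h>
          | 
          |     int main(void)
          |     {
          |         /* Broad capability: a handle to the home directory. */
          |         int home = open(getenv("HOME"), O_PATH | O_DIRECTORY);
          | 
          |         /* Attenuate: a handle to just $HOME/tmp. */
          |         int tmp = openat(home, "tmp", O_PATH | O_DIRECTORY);
          |         close(home);
          | 
          |         /* Attenuate again: read-only, exactly one file. */
          |         int file = openat(tmp, "input.txt", O_RDONLY);
          |         close(tmp);
          | 
          |         /* Hand the final capability to an external program
          |          * as its stdin; it has no way back to the rest of
          |          * $HOME. (Plain fds aren't fully monotonic on Linux;
          |          * /proc tricks exist, so real designs add openat2(2)
          |          * RESOLVE_BENEATH, seccomp, etc. on top.) */
          |         dup2(file, STDIN_FILENO);
          |         execlp("gzip", "gzip", "-c", (char *)NULL);
          |         return 1;
          |     }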
         | 
          | For the record, this kind of model is what Google Chrome used
          | from the beginning. As an example, renderer processes in
          | Chrome, the processes that determine _how_ to render a "thing"
          | on the screen, don't _actually_ talk to OpenGL contexts or
          | your GPU at all; they write command buffers over sockets to a
          | separate process that manages the context. Rendering logic in
          | a browser is extremely security sensitive since it operates
          | exactly on potentially untrusted input. (This might have
          | changed over time, but I believe it was true at one point.)
         | 
         | There's one problem with capability oriented design: once you
         | learn about it, everything else is obviously, painfully broken
         | and inadequate. Because then you start realizing things like
         | "Oh, my password manager could actually rm -rf my entire
         | homedir or read my ssh key, and it shouldn't be able to do
         | that, honestly" or "Why the hell can an exploit for zlib result
         | in my whole system being compromised" and it's because our
         | entire permission model for modern Unix is built on a 1970s
         | model that had vastly different assumptions about how programs
         | are composed to create a usable computing system.
         | 
          | In any case, Linux is moving more and more towards adopting a
          | capability-based model for userspace. Such a design is
         | absolutely necessary for a future where sandboxing is a key
         | feature (Flatpak, AppImage, etc.) I think the kernel actually
         | has enough features now to where you could reasonably write a
         | userspace library, similar to libcapsicum for FreeBSD, which
         | would allow you to program with this model quite easily.
        
           | bitwize wrote:
           | > The biggest is multicast; as far as I know there's just no
           | good way to write a message once and have it appear
           | atomically for N listeners, without D-Bus...
           | 
           | I once wrote a proof of concept that uses the file system to
           | do this. Basically, writers write their message as a file to
           | a directory that readers watch via inotify. When done in a
           | RAM based file system like tmpfs, you need not even touch the
           | disk. There are security and permission snags that I hadn't
            | thought of, and it may be difficult if not totally
            | infeasible to make work in production, but yeah... the file
            | system is pretty
           | much _the_ traditional one-to-many communication channel.
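            | 
            | The reader side of such a scheme is only a few lines; a
            | rough sketch (the directory path is invented, error handling
            | omitted):
            | 
            |     #include <stdio.h>
            |     #include <sys/inotify.h>
            |     #include <unistd.h>
            | 
            |     int main(void)
            |     {
            |         /* Aligned so inotify_event access is safe. */
            |         char buf[4096] __attribute__((aligned(8)));
            | 
            |         /* A writer "publishes" by rename(2)-ing a finished
            |          * file into the watched directory; rename is
            |          * atomic, so every watcher sees whole messages. */
            |         int in = inotify_init1(0);
            |         inotify_add_watch(in, "/dev/shm/bus", IN_MOVED_TO);
            | 
            |         for (;;) {
            |             ssize_t n = read(in, buf, sizeof(buf));
            |             for (char *p = buf; p < buf + n;) {
            |                 struct inotify_event *ev =
            |                     (struct inotify_event *)p;
            |                 if (ev->len)
            |                     printf("new message: %s\n", ev->name);
            |                 p += sizeof(*ev) + ev->len;
            |             }
            |         }
            |     }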
        
       | grawprog wrote:
       | >JACK applications are supported through a re-implementation of
       | the JACK client libraries and the pw-jack tool if both native and
       | PipeWire JACK libraries are installed in parallel
       | 
       | >unlike JACK, PipeWire uses timer-based audio scheduling. A
       | dynamically reconfigurable timer is used for scheduling wake-ups
       | to fill the audio buffer instead of depending on a constant rate
       | of sound card interrupts. Beside the power-saving benefits, this
       | allows the audio daemon to provide dynamic latency: higher for
       | power-saving and consumer-grade audio like music playback; low
       | for latency-sensitive workloads like professional audio.
       | 
        | That's pretty interesting. It sounds like it's backwards
        | compatible with JACK programs but uses timer-based scheduling
        | similar to PulseAudio. Can you actually get the same low levels
        | of latency needed for audio production without realtime
        | scheduling?
        | 
        | JACK is typically used over Pulse for professional audio
        | because of its realtime scheduling. How does PipeWire provide
        | low enough latency for recording or other audio production
        | using timer-based scheduling?
        | 
        | Does anyone have any experience using PipeWire for music
        | recording or production?
        | 
        | It would be nice to have one sound server, instead of three
        | layered on top of each other precariously, if it works well for
        | music production.
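        | 
        | In outline, the quoted timer-based approach might look like
        | this (a sketch of the idea only, not PipeWire's actual code):
        | 
        |     #include <stdint.h>
        |     #include <sys/timerfd.h>
        |     #include <time.h>
        |     #include <unistd.h>
        | 
        |     /* Wake up once per buffer period, fill the buffer, and
        |      * keep going.  A pro-audio client would ask for a short
        |      * period; music playback can use a long one, and the
        |      * timer can be re-armed at any time to change latency. */
        |     void run(int out_fd, long period_ns)
        |     {
        |         int tfd = timerfd_create(CLOCK_MONOTONIC, 0);
        |         struct itimerspec its = {
        |             .it_value    = { .tv_nsec = period_ns },
        |             .it_interval = { .tv_nsec = period_ns },
        |         };
        |         timerfd_settime(tfd, 0, &its, NULL);
        | 
        |         for (;;) {
        |             uint64_t ticks;
        |             read(tfd, &ticks, sizeof(ticks)); /* block */
        |             /* fill_and_write_buffer(out_fd) would go here;
        |              * call timerfd_settime() again with a new period
        |              * to trade latency for fewer wake-ups. */
        |         }
        |     }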
        
       | dbrgn wrote:
       | The fact that PipeWire has the potential to replace both
       | PulseAudio (for consumer audio) and Jack (for pro audio) with a
       | unified solution is very exciting.
        
         | bluGill wrote:
          | Particularly if you are on the pro audio side. Consumer audio
          | can ignore pro audio for the most part. However, everyone on
          | the pro audio side needs to do consumer audio once in a while,
          | if only to run a web browser.
        
           | dbrgn wrote:
           | Exactly. If you're working on a project in Ardour and want to
           | quickly watch a video on YouTube, stopping Jack and starting
           | PulseAudio was a bit of a pain the last time I did that.
           | 
           | Yes, you can configure PulseAudio as a Jack client, but the
           | session handling is also a bit messy. (I used to have a PA ->
           | Jack setup on my work computer just so I could use the Calf
           | equalizer / compressor plugins for listening to music. I
           | dropped it again after a while, because session handling and
           | restoring wasn't always working properly. But that was around
           | 6-7 years ago, maybe it would work better nowadays.)
        
         | SilverRed wrote:
          | This is gonna be huge imo. I'm a Linux veteran at this point
         | and I can't get JACK to work without an hour of fiddling every
         | time.
        
       | pta2002 wrote:
       | Just tried it on NixOS, had no idea it was so fleshed out
       | already! Thought it'd be full of bugs but was pleasantly
       | surprised, it just worked. No issues with compatibility,
       | extremely low latency and has JACK and PulseAudio shims, so
       | everything works out of the box, including pro audio stuff like
       | Reaper and Ardour. And thanks to the JACK shim I can patch around
       | the outputs with qjackctl. This is compared to JACK, which I
       | never managed to properly combine with PulseAudio.
        
         | kevincox wrote:
         | Thanks for the datapoint. I've been following
         | https://github.com/NixOS/nixpkgs/issues/102547 and considering
         | trying it out for a while.
         | 
         | Did you just set services.pipewire.pulse.enable=true?
         | 
         | https://search.nixos.org/options?channel=unstable&show=servi...
         | 
         | My major concern is that I use PulseEffects as a key component
         | of my setup so I'll need to check if that works well with
         | PipeWire. But the only way to be sure is to try it!
        
           | slabity wrote:
            | I also have Pipewire running on NixOS. This is what I
            | recommend configuring:
            | 
            |     services.pipewire = {
            |       enable = true;
            |       alsa.enable = true;
            |       alsa.support32Bit = true;
            |       jack.enable = true;
            |       pulse.enable = true;
            |       socketActivation = true;
            |     };
            | 
            | That allows me to run pretty much any application that uses
            | ALSA, JACK, or PulseAudio.
        
             | aidenn0 wrote:
             | That gave me "services.pipewire.alsa" does not exist on
             | 20.09; does this require unstable?
        
               | slabity wrote:
               | I believe so, I am on the unstable channel.
        
           | toggleton wrote:
            | https://github.com/wwmm/pulseeffects#note-for-users-that-
            | did... PulseEffects only supports PipeWire as of version 5;
            | PulseAudio support continues in the legacy Pulseaudio
            | branch.
        
           | mxmilkb wrote:
           | I didn't get to try it under PipeWire on my Arch laptop
           | before that died the other day, but a friend had said
           | PulseEffects is no longer such a massive CPU hog under
           | PipeWire, so much so that they run it all the time now.
        
             | kevincox wrote:
             | Interesting. I have PulseEffects running all of the time on
             | PulseAudio and don't notice much CPU usage. However maybe
             | that is because I only apply effects to the mic and it
             | seems to disable itself when nothing is recording.
        
           | kaba0 wrote:
           | I just tried it on NixOS and the new version is already
           | packaged and working.
        
         | _0ffh wrote:
         | >This is compared to JACK, which I never managed to properly
         | combine with PulseAudio.
         | 
         | Yeah making PulseAudio play nice with JACK seems to be tricky.
         | Over time I configured it in four different environments
         | (different Linux Distributions and/or Versions) and for each of
         | them I had to do things (at least slightly) differently to get
         | them to work.
        
           | mxmilkb wrote:
           | Using JACK apps to route between PulseAudio apps under
           | PipeWire is magic, as is being able to turn on external DACs
           | after login and still be able to use them with JACK apps
           | without restarting any software. Also PulseAudio not randomly
           | switching to the wrong sample rate when I open pavucontrol is
            | a blessing. (And it's so easy to set up, at least on Arch
            | Linux.)
           | 
           | I have come to describe PW as like a superset of JACK and
           | PulseAudio.
           | 
           | Also to note, #pipewire is very active on freenode, and wtay
           | regularly drops into #lad.
        
             | _0ffh wrote:
             | I've first read about PipeWire about two months ago and I'd
             | really love to try it. But! My setup is working and I'm
             | really not the kind of person who likes to unnecessarily
             | tamper with a smoothly running system. So I'll probably try
             | it the next time I need to do a fresh install. Promise!
        
           | declnz wrote:
            | I found it tricky at first, but I've got a mostly smooth
            | setup on two machines now, with a wide variety of uses.
            | 
            | Definitely for day-to-day use, the Ubuntu Studio app has
            | actually been the most helpful (direct control / visibility
            | into the Jack <-> PA bridging is great), or a combo of
            | qjackctl and Carla for more audio-focused stuff.
        
         | simias wrote:
         | But does it work with the OSS shim for alsa shim for pulseaudio
         | shim for jack shim for pipewire?
         | 
          | Jokes aside, my first reaction upon hearing about PipeWire was
          | "oh no, not yet another Linux audio API", but maybe a miracle
          | will happen and it'll be the Chosen One.
         | 
         | I know that audio is hard but man the situation on Linux is
         | such a terrible mess, not in small part because everybody
         | reinvents the wheel instead of fixing the existing solutions.
         | Jack is definitely the sanest of them all in my experience
         | (haven't played with pipewire) but it's also not the most
         | widely supported so I often run into frustrating issues with
         | the compatibility layers.
        
           | SilverRed wrote:
           | > not in small part because everybody reinvents the wheel
           | instead of fixing the existing solutions.
           | 
           | I'm using all of these reinventions. Wayland, systemd,
           | flatpak, btrfs and soon pipewire. I'm absolutely loving linux
           | right now. Everything works so nice in a way it will never on
           | a distro with legacy tools. Some of these projects like
           | flatpak have a few rough edges but the future is very bright
           | for them and most problems seem very short term rather than
           | architectural.
        
         | severino wrote:
         | I gave JACK a try more than a decade ago, and I remember how
         | cool it was to be able to pipe the audio from one application
         | to the input of another (unrelated) app, possibly adding
          | effects or transformations in between. But JACK never became
          | "mainstream", so I never got to use it for anything serious;
          | still, I miss the flexibility it offered even for non-
          | professional use cases. What I wonder is whether PipeWire will
          | allow this kind of routing or patching of audio streams as
          | well.
        
           | pta2002 wrote:
           | It does, exactly the same way as JACK, and you can even do it
           | with pulseaudio apps! I could pipe audio from a firefox tab
           | through guitarix (guitar amp emulator) into a second firefox
           | tab if I wanted to. With just JACK or just Pulse this
           | wouldn't be possible. And if I understand it correctly, it
            | should work for video streams too. I'm imagining piping a
            | screenshare through OBS before it goes into Discord or
            | something; that should be very useful.
        
             | Nullabillity wrote:
             | You can do that in Pulse, by using null sinks and monitor
             | sources.
        
               | pta2002 wrote:
               | I see what you mean, still a lot more complicated than
               | just dragging a "wire" in qjackctl though.
        
               | viraptor wrote:
               | https://github.com/futpib/pagraphcontrol
               | 
               | It's not super polished, but you can do similar wire
               | dragging here.
        
           | teddyfrozevelt wrote:
           | I'm running PipeWire on Arch and I can do it through `pw-jack
           | carla`[1]. You can do surprisingly advanced stuff through the
           | JACK compatibility.
           | 
           | [1] https://i.imgur.com/EFUxR41.png
        
         | capableweb wrote:
         | > extremely low latency
         | 
         | How low is "extremely low", especially compared to JACK that
         | I'm currently using when doing music production?
        
       | xorcist wrote:
       | What does real world latency look like with Pipewire?
       | 
       | Is it comparable to jackd when used with something like Ardour?
        
         | cptn_brittish wrote:
          | It has JACK and PulseAudio API wrappers. I use it because it
          | means I don't need to muck around with configuring Pulse and
          | JACK to work nicely together.
        
         | dralley wrote:
         | The FAQ provides some great information - no hard numbers
         | though
         | 
         | https://gitlab.freedesktop.org/pipewire/pipewire/-/wikis/FAQ...
         | 
         | https://gitlab.freedesktop.org/pipewire/pipewire/-/wikis/FAQ...
         | 
         | https://gitlab.freedesktop.org/pipewire/pipewire/-/wikis/FAQ...
        
       | superluserdo wrote:
       | Does anyone know if pipewire has its own audio protocol for
       | applications, as well as taking the place of JACK and Pulse? Or
       | will future applications still just decide whether to talk to
       | "JACK" or "Pulseaudio"? (Both actually being pipewire)
        
         | bitbang wrote:
         | It does. The support for Pulse and JACK APIs is to ease
         | adoption.
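          | 
          | For a flavor of the native API: a playback client is built
          | around pw_stream and a realtime process callback. A rough
          | sketch of a tone generator (modeled on the upstream
          | tutorials; details may differ between PipeWire versions):
          | 
          |     #include <math.h>
          |     #include <spa/param/audio/format-utils.h>
          |     #include <pipewire/pipewire.h>
          | 
          |     #define RATE     44100
          |     #define CHANNELS 2
          | 
          |     struct data {
          |         struct pw_main_loop *loop;
          |         struct pw_stream *stream;
          |         double phase;
          |     };
          | 
          |     /* Called on the realtime thread when the graph wants audio. */
          |     static void on_process(void *userdata)
          |     {
          |         struct data *d = userdata;
          |         struct pw_buffer *b = pw_stream_dequeue_buffer(d->stream);
          |         if (b == NULL)
          |             return;
          | 
          |         int16_t *dst = b->buffer->datas[0].data;
          |         int stride = sizeof(int16_t) * CHANNELS;
          |         int n_frames = b->buffer->datas[0].maxsize / stride;
          | 
          |         for (int i = 0; i < n_frames; i++) {   /* 440 Hz sine */
          |             d->phase += 2.0 * M_PI * 440.0 / RATE;
          |             int16_t val = (int16_t)(sin(d->phase) * 16000);
          |             for (int c = 0; c < CHANNELS; c++)
          |                 *dst++ = val;
          |         }
          |         b->buffer->datas[0].chunk->offset = 0;
          |         b->buffer->datas[0].chunk->stride = stride;
          |         b->buffer->datas[0].chunk->size = n_frames * stride;
          |         pw_stream_queue_buffer(d->stream, b);
          |     }
          | 
          |     static const struct pw_stream_events stream_events = {
          |         PW_VERSION_STREAM_EVENTS,
          |         .process = on_process,
          |     };
          | 
          |     int main(int argc, char *argv[])
          |     {
          |         struct data d = { 0 };
          |         uint8_t buf[1024];
          |         struct spa_pod_builder pb =
          |             SPA_POD_BUILDER_INIT(buf, sizeof(buf));
          |         const struct spa_pod *params[1];
          | 
          |         pw_init(&argc, &argv);
          |         d.loop = pw_main_loop_new(NULL);
          |         d.stream = pw_stream_new_simple(
          |             pw_main_loop_get_loop(d.loop), "tone",
          |             pw_properties_new(PW_KEY_MEDIA_TYPE, "Audio",
          |                               PW_KEY_MEDIA_CATEGORY, "Playback",
          |                               NULL),
          |             &stream_events, &d);
          | 
          |         params[0] = spa_format_audio_raw_build(&pb,
          |             SPA_PARAM_EnumFormat,
          |             &SPA_AUDIO_INFO_RAW_INIT(
          |                 .format = SPA_AUDIO_FORMAT_S16,
          |                 .channels = CHANNELS, .rate = RATE));
          | 
          |         pw_stream_connect(d.stream, PW_DIRECTION_OUTPUT, PW_ID_ANY,
          |                           PW_STREAM_FLAG_AUTOCONNECT |
          |                           PW_STREAM_FLAG_MAP_BUFFERS,
          |                           params, 1);
          |         pw_main_loop_run(d.loop);
          |         return 0;
          |     }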
        
       | shmerl wrote:
       | I hope KDE will implement direct Pipewire support for general
       | audio controls, to avoid going through the PulseAudio plugin.
        
       | jeffnappi wrote:
       | This looks promising for Linux audio. I spent some time
       | investigating the state of Linux audio servers a while back while
       | diagnosing Bluetooth headset quality issues and ultimately opened
       | this bug:
       | https://bugs.launchpad.net/ubuntu/+source/pulseaudio/+bug/18...
       | 
        | Sounds like a lot of lessons have been learned from JACK,
        | PulseAudio, etc. that have been factored into the architecture
       | PipeWire. Maybe it really is the true coming of reliable Linux
       | audio :)
        
       | OJFord wrote:
        | It's great to see PipeWire coming along; PulseAudio development
        | seems (to a spectator) to have been a little.. well..
       | https://gitlab.freedesktop.org/pulseaudio/pulseaudio/-/merge...
       | 
       | https://gitlab.freedesktop.org/pulseaudio/pulseaudio/-/merge...
        
         | dralley wrote:
         | While that seems like a huge cluster, it does kind of seem that
         | the patch rejections come from a set of principles. If the
         | patches would improve Bluetooth audio at the expense of
         | breaking existing features, saying "we don't break existing
         | features" is a valid position to hold.
        
         | entropie wrote:
         | This is so sad. I had no idea.
        
         | uluyol wrote:
          | Isn't this probably because most developers have shifted focus
          | to PipeWire? I thought both came from more-or-less the same
         | community?
        
         | SilverRed wrote:
          | Well, whenever I report issues about PulseAudio, the response
          | I get is "this is fixed in PipeWire". Seems like the
          | development community has moved on, and it's time for the
          | users to move too.
        
       | londons_explore wrote:
       | Without audio buffer rewinding, you're going to have to suffer
       | random stutters and jumpiness every time your system comes under
        | heavy load. Your system does an SMI because you plugged the power
       | cable in? Your audio will glitch. It will also mean you won't be
       | able to sit with an idle CPU while playing music - the audio
       | daemon will have to wake up to reload buffers multiple times per
       | second, killing battery life unacceptably for playing audiobooks
       | on a phone...
       | 
       | Saying rewindable audio is a non-feature might simplify the
       | codebase, but if it makes it work badly for most use cases, it
       | ought to be rethought.
        
         | freeqaz wrote:
          | I'm not sure I understand this. Why can't you just increase
          | buffer sizes and write more data to them to reduce the
          | frequency of wake-ups?
         | 
         | Edit: does this help?
         | https://gitlab.freedesktop.org/pipewire/pipewire/-/wikis/FAQ...
        
           | fullstop wrote:
           | Here's a decent (and corny) explanation video from Google on
           | the matter:
           | 
           | https://www.youtube.com/watch?v=PnDK17zP9BI
        
           | phkahler wrote:
           | Low latency is a goal. From video conferencing to music
           | recording it's very important.
        
             | jjoonathan wrote:
             | Eh, just process all audio in a once-a-day batch job at
             | 2am, it'll be great!
        
             | jancsika wrote:
             | Except that _professional_ music recording is a domain in
             | which low latency _always_ trumps power consumption. So if
             | a system based on such a tradeoff fails even once due to
             | complexity of buffer rewinding or whatever, the
             | professional musician loses.
             | 
              | Hell, JACK could be re-implemented as power-hungry,
              | ridiculous blockchain tech, and if it resulted in half the
              | round-trip latency, professional musicians would still use
              | it.
             | 
             | Edit: added "complexity of" for clarification
        
               | bluGill wrote:
               | In practice it can't though. You cannot do complex
               | computation and still meet low latency as complex
               | computation takes time which adds to latency. Also pros
               | intend to use their computer, so something that complex
               | leaves less CPU free for the other things they are trying
               | to do.
               | 
                | In practice the only way this comes into play is pros are
               | willing to fix their CPU frequency, while non-pros are
               | willing to suffer slightly longer latency in exchange for
               | their CPU scaling speed depending on how busy it is. It
               | is "easy" to detect if CPU scaling is an allowed setting
               | and if so increase buffer sizes to work around that.
               | 
               | Even for non-pros, low latency audio is important. You
               | can detect delays in sound pretty quickly.
        
             | robert_foss wrote:
          | I think supporting the low-latency use case is a goal, but
          | not the only one. As far as I understand it, PipeWire
          | provides configurable latency.
        
         | phkahler wrote:
         | One of the goals is low latency realtime audio to take the
         | place of Jack. That requires small buffers frequently filled. I
         | doubt that's very power hungry on today's systems. Also
            | handling other tasks can be pushed onto another core. So far it
         | works better for me than Pulse did.
        
           | kevincox wrote:
           | It did talk about being adaptive. So if you are just
           | listening to music it should be able to use large buffers.
           | However if you switch to something with low-latency demands
           | it can start using smaller buffers.
           | 
            | My main concern is that without rewinding, how can you
            | handle pressing play or pause? Sure, that music isn't
            | realtime and can use large buffers, but if I start playing
            | something else, or stop the music, I still want it to be
            | responsive, which may require remixing.
        
             | bluGill wrote:
             | A 100ms delay between hitting pause and the pause happening
             | is plenty fast for that use case. The same delay for pro
             | audio mixing is way too long.
        
               | londons_explore wrote:
               | Except a typical desktop system is usually a mix of low
               | latency and high latency audio streams. You're playing
                | music, _and_ you're typing on a 'clacky' virtual
               | keyboard. The user doesn't want 100ms of lag with each
               | finger tap till they hear the audible feedback. Yet when
               | no typing is happening, the CPU doesn't want to be waking
               | up 10x per second just to fill audio buffers.
               | 
               | The solution is to fill a 5 minute buffer with 5 minutes
               | of your MP3 and send the CPU to sleep, and then _if_ the
               | user taps the keyboard, rewind that buffer, mix in the
               | 'clack' sound effect, and then continue.
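                | 
                | With ALSA, that rewind step is what snd_pcm_rewind() is
                | for. A rough sketch (the remix helper is hypothetical,
                | error handling omitted):
                | 
                |     #include <alsa/asoundlib.h>
                | 
                |     /* Hypothetical helper: re-render the rewound span
                |      * with the clack mixed in, then snd_pcm_writei()
                |      * it back to the device. */
                |     void remix_and_write(snd_pcm_t *pcm,
                |                          const short *clack, long frames,
                |                          snd_pcm_sframes_t span);
                | 
                |     /* On a UI sound event: pull back already-queued
                |      * samples, mix in the effect, requeue them. */
                |     void mix_in_clack(snd_pcm_t *pcm,
                |                       const short *clack, long frames)
                |     {
                |         /* How much queued audio we may take back. */
                |         snd_pcm_sframes_t r = snd_pcm_rewindable(pcm);
                |         snd_pcm_sframes_t rewound = snd_pcm_rewind(pcm, r);
                |         remix_and_write(pcm, clack, frames, rewound);
                |     }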
        
           | amluto wrote:
           | In some sense it's worse on modern systems. Modern systems
           | are pretty good at using very little power when idle, but
           | they can take a while to become idle. Regularly waking up
           | hurts quite a bit.
        
       | alexfromapex wrote:
       | The real link: https://pipewire.org/
        
         | cozzyd wrote:
         | In this case, I disagree. LWN is always well worth reading.
        
       | alvatar wrote:
       | I hope this fixes my issues with bluetooth on Linux. When I'm on
       | battery the audio breaks all the time. I've tried all sorts of
       | obscure config tweaks with Pulseaudio.
        
         | m45t3r wrote:
          | This sounds like you're using very aggressive power settings
          | in the kernel (powertop, tlp?), and it doesn't seem related to
          | PulseAudio at all.
        
       | nialv7 wrote:
        | pipewire already works surprisingly well. Even Bluetooth source
        | and sink work.
       | 
       | Only problems I had so far are:
       | 
       | * sometimes bluetooth devices would connect but not output audio,
       | have to restart pipewire.
       | 
        | * sometimes pipewire gets confused and doesn't assign audio
        | outputs properly. (shows up in pavucontrol as "Unknown output")
        
       | squarefoot wrote:
        | Does it also work with WINE and manage MIDI devices, or is it
        | audio-only? From the project page on GitLab it seems it doesn't;
        | apologies for hijacking the thread if that's the case, but I'm
        | out of ideas.
        | 
        | I'm currently looking for a last resort before reinstalling
        | everything: probably after an apt upgrade, all native Linux
        | software kept working perfectly with all my MIDI devices, while
        | all WINE applications simply stopped detecting them, no matter
        | the software or WINE version used. No error messages; they
        | suddenly just disappeared from every WINE application but kept
        | working under native Linux. Audio still works fine in WINE
        | software; it just can't be used with MIDI devices because,
        | according to the applications, I have none. Reinstalling WINE
        | and the applications didn't work.
        
       | nitsky wrote:
       | PipeWire has worked very well for me both as a drop-in
       | replacement for PulseAudio and to enable screen sharing on
       | Wayland.
        
         | ledbettj wrote:
          | It also worked as a drop-in replacement for PulseAudio for me,
         | except all my audio now had stutters and pops. I ended up going
         | back to Pulse.
         | 
          | I got suggestions that I could go tweak buffer-size settings
          | in a config file somewhere, but for my simple desktop use case
          | I'd rather my audio just sound right out of the box.
         | 
         | Hopefully this sort of thing gets straightened out, because
         | having to muck with config files to make my sound server
         | actually work is like going back to working directly with ALSA
         | or OSS.
        
           | pkulak wrote:
            | I had a couple of little issues as well when I switched over
            | a couple of months ago, but they just fell away over the
            | ensuing weeks of updates until there was nothing left. Give
            | it another try sometime soon.
        
             | cptn_brittish wrote:
              | I can confirm that an issue causing my audio to completely
              | drop out at random points was resolved about a month ago,
              | and now everything works perfectly.
        
       | cyborgx7 wrote:
       | This is giving me xkcd "Standards" vibes.
       | 
       | https://xkcd.com/927/
       | 
       | I hope I'm wrong. There is a lot of potential to do better in
       | that realm.
        
         | declnz wrote:
          | Yeah, but: the key differentiator to me is supplying drop-in
          | replacements / adapters for all the other standards from early
          | on in the process. This is why it _isn't_ #927, I say...
        
         | fit2rule wrote:
          | Rest your eyes on the delights that Linux standards can
          | provide:
         | 
         | http://zynthian.org/
        
         | spijdar wrote:
          | Given that it supports ALSA, PulseAudio, and JACK, I don't
          | think it's like that at all. Assuming it works, it subsumes
          | all the other standards while adding a new one, keeping
          | existing applications working and bringing its own new
          | advantages.
        
       | chenxiaolong wrote:
       | I've been trying out the latest master builds of pipewire
       | recently and have been pretty impressed with it:
       | 
       | * My bluetooth headset can now use the HFP profile with the mSBC
       | codec (16 kHz sample rate) instead of the terrible CVSD codec (8
       | kHz sample rate) with the basic HSP profile.
       | 
       | * Higher quality A2DP stereo codecs, like LDAC, also work.
       | 
       | * AVDTP 1.3 delay reporting works (!!) to delay the video for
       | perfect A/V sync.
       | 
       | * DMA-BUF based screen recording works with OBS + obs-xdg-portal
       | + pipewire (for 60fps game capture).
       | 
       | For my use cases, the only things still missing are automatic
       | switching between A2DP and HFP bluetooth profiles and support for
       | AVRCP absolute volume (so that the OS changes the headset's
       | hardware volume instead of having a separate software volume).
        
         | Abishek_Muthian wrote:
         | Would there be improvements for remote audio streaming over
         | pulseaudio with ssh?
        
           | chenxiaolong wrote:
           | I'm not sure about this one. I haven't tried streaming audio
           | over the network with either pulseaudio or pipewire.
        
         | foobarbecue wrote:
          | Did you get 2-way (in & out) 16 kHz Bluetooth to work? Am I
          | right that this isn't possible?
        
         | JeremyNT wrote:
         | If you use Arch Linux, it's really easy to just drop this right
         | in from official packages as a replacement for Pulse/ALSA [0]
         | and start using it. I've been running it for about a month and
         | everything seems to work exactly as I expect it to. I honestly
          | notice no difference, other than that the PulseAudio
          | input/output picker extension I had been using seems confused
          | now (the
         | native GNOME sound control panel applet works just fine
         | though).
         | 
         | On the video front I use obs-xdg-portal for Wayland screen
         | capture as well - finally there's a good story for doing this!
         | You even get a nifty permission dialogue in GNOME. You have to
          | launch OBS in forced Wayland mode with 'QT_QPA_PLATFORM=wayland
          | obs'.
         | 
         | [0] https://wiki.archlinux.org/index.php/Pipewire#Audio
        
           | ElijahLynn wrote:
           | Thanks for this, I was wondering if a future Arch update
           | would just auto install this and I would be left wondering
           | what happened when it broke. I am going to remember your post
           | here and try to upgrade to it soon!
        
         | Lucretia9 wrote:
         | Is that X11 or Wayland?
        
           | chenxiaolong wrote:
           | This was all with Wayland. I haven't tried using pipewire
           | with X11.
        
           | rav wrote:
           | I just installed Pipewire in Arch Linux running GNOME on
           | Wayland, and I finally have working screensharing in Firefox:
           | I can share e.g. GNOME Terminal (a Wayland-native non-
           | Xwayland app) in a video meeting, which I wasn't able to do
           | without Pipewire.
        
         | fulafel wrote:
         | Why would this affect bluetooth codecs?
        
           | chenxiaolong wrote:
           | I'm not super familiar with the pipewire internals, but I
           | believe pipewire is the daemon responsible for talking to
           | bluez/bluetoothd and ensuring that the audio stream is
           | encoded with a codec that the headset supports.
           | 
           | For example, this is the PR that enabled mSBC support for the
           | HFP profile: https://gitlab.freedesktop.org/pipewire/pipewire
           | /-/merge_req...
        
       | onli wrote:
       | > _including the raw Linux ALSA sound API, which typically allows
       | only one application to access the sound card._
       | 
       | If the _raw_ in that sentence is not meant as a special qualifier
       | and this is meant as a statement about ALSA in general, this is
       | wrong. I recently read up on this to confirm my memory was right
       | when reading a similar statement. In fact, ALSA had just a very
       | short period where typically only one application accessed the
       | sound card. After that, dmix was enabled by default. Supporting
       | multiple applications was actually the big advantage of ALSA
       | compared to OSS, which at the time really did support only one
        | application per soundcard (unless the card did hardware mixing,
        | which was dying out at that time). I'm not sure why this seems
        | to be remembered so wrongly?
       | 
       | > _Speaking of transitions, Fedora 8 's own switch to PulseAudio
       | in late 2007 was not a smooth one. Longtime Linux users still
       | remember having the daemon branded as the software that will
       | break your audio._
       | 
        | This wasn't just a Fedora problem. Ubuntu also broke audio on
        | countless systems when making the switch. I was active as a
        | supporter in an Ubuntu support forum at that time and we got
       | flooded with help requests. My very own system did not work with
        | Pulseaudio when I tried to switch, and that was years later. I
        | still use only ALSA because of that experience. At that time
        | Pulseaudio was garbage; it should never have been used then. It
        | only got acceptable later - but it still has bugs and issues.
       | 
       | That said, PipeWire has a better vibe than Pulseaudio did. It
       | intends to replace a system that never worked flawlessly, seems
       | to focus on compatibility, and the apparent endorsement from the
        | JACK developers also does not hurt. User reports I have seen so
       | far have been positive, though I'm not deep into support forums
        | anymore. Maybe this can at least replace Pulseaudio; that would
       | be a win. I'm cautiously optimistic about this one.
        
         | fulafel wrote:
          | Trivia: some sound cards (e.g. the PAS16) showed up as
          | multiple devices under OSS too, and you could output PCM audio
          | to two of them simultaneously.
        
           | bluGill wrote:
            | Trivia: FreeBSD looked at ALSA, decided that Linux was using
            | a rewrite as an excuse not to fix OSS, and so they dug in
            | and fixed OSS so it worked.
        
         | mwcampbell wrote:
         | Another point against dmix: I would be surprised if it worked
         | with sandboxes like Flatpak. That may be another reason why
         | major desktop-oriented distros like Fedora Workstation haven't
         | embraced it.
        
           | nitrogen wrote:
           | How do dmix and pulseaudio do IPC?
        
         | war1025 wrote:
         | My understanding back in the mid 2000s when I first got into
         | Linux was that OSS only let you play one audio stream at once,
         | and the whole point of ALSA was that it let multiple
         | applications access the sound card.
         | 
         | I guess I could be remembering that wrong, but I know I was
         | listening to multiple audio streams long before PulseAudio came
         | onto the scene.
        
           | bluGill wrote:
           | OSS in non-free versions supported multiple sources. Linux
           | sound guys decided that instead of fixing the free OSS they
           | would write ALSA. They never really worked out all the bugs
           | around mixing before pulseaudio took over.
        
             | cycloptic wrote:
             | It's more than 20 years later and still I don't understand
             | these complaints. ALSA was designed to have a broader API
             | than OSS, and it has supported OSS emulation for quite some
             | time. What else could have been done when OSS went non-
             | free?
        
               | trasz wrote:
               | Same what FreeBSD has done: keep developing Open Source
               | OSS. One implementation going non-free doesn't affect
               | other implementations of the same API.
        
               | cycloptic wrote:
               | Didn't they do that by developing ALSA OSS emulation?
               | That is effectively another implementation of the same
               | API.
        
             | bitwize wrote:
             | PulseAudio and PipeWire embody the correct approach. The
             | problem with OSS is, if you're using anything resembling
             | modern audio formats, it risks introducing floating-point
             | code into the kernel, which is a hard "no" in mainline
             | Linux. So if you need software mixing, it should be done in
             | a user-space daemon that has exclusive access to the sound
             | card.
        
             | _pmf_ wrote:
             | OSS was the last time I had Linux audio in a state that I
             | would call "basically working".
        
           | onli wrote:
            | Yep, definitely. I still remember this clearly because I was
            | into gaming. And while I could play some games with Wine,
            | Counter-Strike iirc, my friends used TeamSpeak. TeamSpeak
            | had a proprietary Linux version, but it used OSS. Before
            | `aoss` became a thing (or maybe just known to me) there was
            | no way to have TeamSpeak on together with in-game sound, and
            | TeamSpeak needed to be started before the game.
            | 
            | Only using ALSA fixed this; Mumble, I think, then became a
            | good alternative for a short while.
        
         | orra wrote:
         | > My very own system did not work with Pulseaudio when I tried
         | to switch, that was years later. I still use only ALSA because
         | of that experience. At that time Pulseaudio was garbage, it
         | should never have been used then. It only got acceptable later
         | - but still has bugs and issues
         | 
         | In the interest of balance, PulseAudio was a huge improvement
         | for me.
         | 
         | I remember playing SuperTux on my laptop. After the switch to
         | PulseAudio, the sound was flawless. Before that, on ALSA, the
         | audio was dominated by continuous 'popping' noises--as if
         | buffers were underrunning.
         | 
         | > the apparent endorsement from the JACK-developers also does
         | not hurt.
         | 
         | Indeed, it seems a better UX to only require one sound daemon,
         | instead of having to switch for pro work.
        
         | asabil wrote:
         | The ALSA interface can actually refer to two different things:
         | 
         | 1. The ALSA kernel interface
         | 
         | 2. The interface provided by libasound2
         | 
          | The former is basically the device files living in /dev/snd.
          | This interface is hardware dependent, and whether you can or
          | cannot send multiple streams to the sound card all depends on
          | the actual underlying hardware and driver support.
          | 
          | The latter is actually a shared library that, when linked into
          | your application, exposes "virtual" devices (such as `default`
          | or `plughw:0` ...); these devices are defined through plugins.
          | The actual configuration of these virtual devices is defined
          | in `/etc/asound.conf` and `~/.asoundrc`. This is typically
          | where dmix is defined/used. Which means that if you have any
          | application that does not use libasound2, or uses a different
          | libasound2 version, you are in trouble.
         | 
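          | A typical dmix setup in `~/.asoundrc` looks something like
          | this (device names illustrative):
          | 
          |     pcm.!default {
          |         type plug
          |         slave.pcm "dmixed"
          |     }
          | 
          |     pcm.dmixed {
          |         type dmix
          |         ipc_key 1025          # any unique integer
          |         slave {
          |             pcm "hw:0,0"      # first device on first card
          |         }
          |     }
          | 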
         | p.s. Pulseaudio implements alsa API compatibility by exporting
         | an alsa device plugin to reroute audio from all applications
         | making use of libasound2 (except itself).
        
         | FooBarWidget wrote:
         | > In fact, ALSA had just a very short period where typically
         | only one application accessed the sound card.
         | 
          | For some definition of 'short period'. Software mixing via
          | dmix worked for me, but back then I heard for years that dmix
          | was broken for many other people. Not sure whether things are
          | better nowadays.
         | 
          | The breakage seems to have been caused by hardware bugs.
          | Various authors took the stance of refusing, on principle, to
          | work around hardware bugs. I guess I understand the technical
         | purism, but as a user that attitude was unhelpful: there was no
         | way to get sound working other than to ditch your laptop, and
         | hope that the next one doesn't have hardware bugs. In practice,
         | it seems a large number of hardware had bugs. Are things better
         | nowadays?
        
           | onli wrote:
           | > _For some definition of 'short period'._
           | 
            | According to https://alsa.opensrc.org/Dmix, dmix has been
            | enabled by default since 1.0.9rc2, and
            | https://www.alsa-project.org/wiki/Main_Page_News shows that
            | was 2005. The ALSA 1.0.1 release was 2004. So it's only
            | short when counting from then on; the project started in
            | 1998. But https://www.linuxjournal.com/article/6735, for
            | example, called it new in 2004, so I don't think it was much
            | of a default choice before then.
        
             | aidenn0 wrote:
             | I had a sound card that would not work with OSS in
             | 2002-ish, so I guess define "default choice". Even
             | though dmix was technically disabled by default, I had
             | to enable it to get sound working.
        
               | ori_b wrote:
               | > _I guess define "default choice."_
               | 
               | Default choice: The choice that is made with no user
               | configuration.
        
         | iovrthoughtthis wrote:
         | Dmix seems rather limited and doesn't come automatically set
         | up for all audio devices. [1]
         | 
         | [1]: https://alsa.opensrc.org/Dmix
        
           | onli wrote:
           | That's a very old wiki page, with decades-old workarounds
           | for decade-old issues. I'm not saying you are wrong, but if
           | you take this impression solely from that wiki page you are
           | likely misled. AFAIK this always works and has for many
           | years - but I might be wrong, and perhaps I was just lucky
           | with all the systems where I tested it?
        
             | justaj wrote:
             | I think you've hit on one of the most painful topics of
             | ALSA: its documentation.
        
               | zbuf wrote:
               | Yes, you can do an incredible amount of very useful
               | stuff with ALSA and its asoundrc files.
               | 
               | Sadly that logic is quite opaque, poorly documented,
               | and produces mysterious error messages or, worse, no
               | error at all.
        
         | jcastro wrote:
         | > I'm not sure why this seems to be remembered so wrongly?
         | 
         | It didn't work reliably on all chipsets/soundcards.
        
           | onli wrote:
           | I don't remember this at all, but it might explain that.
           | Or maybe a distribution like Debian stable shipped an
           | outdated ALSA version, taken from the short period between
           | the 1.0 release and dmix. Or just disabled dmix. I would
           | love it if someone remembered specifics.
           | 
           | I kind of assume people mix up ALSA and OSS, or don't
           | remember anymore what actually did and did not work before
           | PulseAudio was introduced.
        
             | Delk wrote:
             | In the early 00's (before PulseAudio), my desktop had an
             | old SoundBlaster Live PCI card that was pretty common
             | around the turn of the millennium. ALSA dmix Just Worked
             | with that one.
             | 
             | Any other hardware I encountered required some kind of
             | software mixing, IIRC. Not that my experience was
             | extensive, but I got the impression that hardware or driver
             | support for dmix wasn't that common.
        
               | onli wrote:
               | > _Any other hardware I encountered required some kind of
               | software mixing, IIRC._
               | 
               | Yes, that was dmix :) And it fits the timeline;
               | hardware mixing was killed off around then by sound
               | card vendors/Microsoft, IIRC.
        
           | darwi wrote:
           | Indeed. It also glitches like hell under any kind of
           | system load.
        
         | m45t3r wrote:
         | > My very own system did not work with Pulseaudio when I tried
         | to switch, that was years later. I still use only ALSA because
         | of that experience. At that time Pulseaudio was garbage, it
         | should never have been used then. It only got acceptable later
         | - but still has bugs and issues.
         | 
         | I remember the transition to PulseAudio. Initially, most
         | things were broken, and we still had some applications that
         | worked only with OSS, so the whole audio stack on Linux was
         | a mess. By then I had already switched from Fedora (very
         | broken) and Ubuntu (slightly less broken) to Arch Linux, and
         | for some time I kept using ALSA too.
         | 
         | Eventually, between switching desktop environments (I think
         | GNOME already used PulseAudio by default then, while in KDE
         | it was optional but also recommended(?)), I decided to try
         | PulseAudio and was surprised how much better the situation
         | was afterwards (also, OSS eventually died completely on
         | Linux systems, so I stopped using OSS emulation).
         | 
         | Over time it got better and better, until PulseAudio just
         | worked. And getting audio output nowadays is much more
         | complex (Bluetooth, HDMI audio, network streaming, etc.). So
         | yeah, while I understand why PipeWire exists (and I am
         | planning a migration after the next NixOS release, which
         | will bring multiple PipeWire changes), I am still glad that
         | PulseAudio was created.
        
           | figomore wrote:
           | I'm using PipeWire on NixOS unstable and it's working very
           | well. I know they are working on integrating the new
           | PipeWire configuration with the Nix configuration.
        
             | m45t3r wrote:
             | I am just waiting for the release of the next NixOS
             | stable version, since the integration in the current
             | stable version (20.09) is still lacking some important
             | features.
        
       | jancsika wrote:
       | I'm just thinking of all the disparate use cases for Linux audio,
       | all the disparate types of inputs/outputs, complex device types
       | involved, etc.
       | 
       | But then I think about pro-audio:
       | 
       | * gotta go fast
       | 
       | * devices don't suddenly appear and disappear after boot
       | 
       | * hey Paul Davis-- isn't the current consensus that people just
       | wanna run a _single_ pro-audio software environment and run
       | anything else they need as plugins within that environment? (As
       | opposed to running a bazillion different applications and gluing
       | them together with Jack?)
       | 
       | So for pro-audio, rather than dev'ing more generic solutions to
       | rule all the generic solutions (and hoping pro-audio still fits
       | one of the generic-inside-generic nestings), wouldn't time be
       | better spent creating a dead simple round-trip audio latency test
       | GUI (and/or API), picking a reference distro, _testing_ various
       | alsa configurations to _measure_ which one is most reliable at
       | the lowest latency, and publishing the results?
       | 
       | Perhaps start with most popular high-end devices, then work your
       | way down from there...
       | 
       | Or has someone done this already?
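       | 
       | To make "dead simple" concrete: the core of such a test can be
       | tiny. Here is a sketch (my own, not an existing tool) against
       | the ALSA `default` devices, assuming a physical loopback cable
       | from line-out to line-in; there is no error handling or clock
       | compensation, so treat the number as a rough upper bound:
       | 
       |     /* click-and-listen round-trip latency probe */
       |     #include <alsa/asoundlib.h>
       |     #include <stdio.h>
       |     #include <stdlib.h>
       | 
       |     #define RATE    48000
       |     #define WINDOW  (RATE / 2)  /* record half a second  */
       |     #define CLICK   4800        /* 100 ms playback chunk */
       | 
       |     int main(void)
       |     {
       |         snd_pcm_t *out, *in;
       |         short *buf = calloc(WINDOW, sizeof(short));
       | 
       |         snd_pcm_open(&out, "default",
       |                      SND_PCM_STREAM_PLAYBACK, 0);
       |         snd_pcm_open(&in, "default",
       |                      SND_PCM_STREAM_CAPTURE, 0);
       |         /* mono S16; small playback buffer, big capture one */
       |         snd_pcm_set_params(out, SND_PCM_FORMAT_S16_LE,
       |             SND_PCM_ACCESS_RW_INTERLEAVED, 1, RATE, 1, 20000);
       |         snd_pcm_set_params(in, SND_PCM_FORMAT_S16_LE,
       |             SND_PCM_ACCESS_RW_INTERLEAVED, 1, RATE, 1, 500000);
       | 
       |         snd_pcm_start(in);       /* capture runs first    */
       |         buf[0] = 32767;          /* single-sample click   */
       |         snd_pcm_writei(out, buf, CLICK);
       | 
       |         buf[0] = 0;
       |         snd_pcm_readi(in, buf, WINDOW);
       | 
       |         /* the loudest captured sample approximates the
       |          * round-trip delay through the whole stack */
       |         int peak = 0;
       |         for (int i = 1; i < WINDOW; i++)
       |             if (abs(buf[i]) > abs(buf[peak]))
       |                 peak = i;
       |         printf("~%.1f ms round trip\n",
       |                peak * 1000.0 / RATE);
       |         return 0;
       |     }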
        
         | cozzyd wrote:
         | But there's pro audio in a studio, and then there are people
         | like me who occasionally record stuff on our normal desktop
         | systems and find it annoying to remember / look up how to
         | switch audio stacks.
        
         | ubercow13 wrote:
         | Can't pro audio also mean plugging in your laptop at a
         | nightclub and performing? Why is pro audio limited to things as
         | unchanging as a permanent recording studio? Someone making
         | tunes on their laptop in their bedroom can also require 'pro
         | audio'.
        
       | ncmncm wrote:
       | Everybody had trouble with PulseAudio, even people who liked it
       | in principle.
       | 
       | LP wasn't joking about breaking sound: things did break, many,
       | many times for many, many people, for years. And, almost always
       | the only information readily available about what went wrong was
       | just sound no longer coming out, or going in. And, almost always
       | the reliable fix was to delete PA.
       | 
       | But it really was often a consequence of something broken outside
       | of PA. That doesn't mean there was always nothing the PA
       | developers could do, and often they did. The only way it all
       | ended up working as well as it does today--pretty well--is that
       | those things finally got done, and bulldozed through the distro
       | release pipelines. The result was that we gradually stopped
       | needing to delete PA.
       | 
       | GStreamer crashed all the damn time, for a very long time,
       | too. I never saw PA crash much.
       | 
       | The thing is, all that most of us wanted, almost all the time,
       | was for exactly one program to operate on sound at any time, with
       | exactly one input device and one output device. UI warbling and
       | meeping was never a high-value process. Mixing was most of the
       | time an unnecessary complication and source of latency. The only
       | complicated thing most of us ever wanted was to change routing to
       | and from a headset when it was plugged or unplugged. ALSA was
       | often wholly good enough at that.
       | 
       | To this day, I have UI warbling and meeping turned off, not
       | because it is still broken or might crash GStreamer, but
       | because it is a net-negative feature. I am happiest that it is
       | mostly easy to turn off. (I _wish_ I could make my phone not
       | scritch every damn time it sees a new wifi hub.)
       | 
       | Pipewire benefits from things fixed to make PA work, so I
       | expect the transition will be quicker. But Pipewire is (like
       | PA and systemd) coded in a language that makes correct code
       | much harder to write than buggy, insecure code; and Pipewire
       | relies on kernel facilities that are not always especially
       | mature. Those are both risk factors. I would be happier if
       | Pipewire were coded in modern C++ (Rust is--let's be honest,
       | at least with ourselves!--not portable enough yet), for
       | reliability and security. I would be happier if it used only
       | mature kernel features in its core operations, and dodgy new
       | stuff only where needed for correspondingly dodgy Bluetooth
       | configurations that nobody, seriously, expects ever to work
       | anyway.
       | 
       | What would go a long way to smoothing the transition would be a
       | way to see, graphically, where it has stopped working. The graph
       | in the article, annotated in real time with flow rates, sample
       | rates, bit depths, buffer depths, and attenuation figures, would
       | give us a hint about what is failing, with a finer resolution
       | than "damn Pipewire". If we had such a thing for PA, it might
       | have generated less animosity.
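       | 
       | (For what it's worth, PipeWire does ship some tooling in this
       | direction. Assuming a reasonably current build, something
       | like:
       | 
       |     pw-dot    # dump the node graph as Graphviz "pw.dot"
       |     pw-top    # live per-node rate/quantum statistics
       | 
       | gets part of the way toward that annotated live graph.)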
        
       | voortuckian wrote:
       | I believe Mars has the largest percentage, by planet, of Linux
       | machines with working sound.
       | 
       | Is this audio/video bus a result of the space program?
        
         | ncmncm wrote:
         | It is driven mainly by automotive uses. Modern instrument
         | clusters in cars are running Linux, and need to handle sound
         | and video streams of terrifying variety, including dashcams,
         | back-up cams, Sirius radio, phone Bluetooth, and more to
         | come, directed to various output devices including actual
         | screens, the instrument cluster, speakers, and phone calls.
        
       | gmueckl wrote:
       | Nah, another audio daemon is not what Linux needs IMO. This
       | should be merged into the kernel, especially since process
       | isolation is one of the stated goals. Running hard realtime stuff
       | in a user space that is designed to not provide useful guarantees
       | related to hard deadlines is brave, but ultimately somewhat
       | foolish.
       | 
       | I know that there are arguments against having high quality audio
       | rate resampling inside the kernel that are routinely brought up
       | to block any kind of useful sound mixing and routing inside the
       | kernel. But I think that all necessary resampling can easily be
       | provided as part of the user space API wrapper that hands buffers
       | off to the kernel. And the mixing can be handled in integer
       | maths, including some postprocessing. Device specific corrections
       | (e.g. output volume dependent equalization) can also fit into the
       | kernel audio subsystem if so desired.
       | 
       | AFAIK, Windows runs part of the audio subsystem outside the
       | kernel, but those processes get special treatment from the
       | scheduler to meet deadlines. And the system is built in a way
       | that applications have no way to touch these implementation
       | details. On Linux, the first thing audio daemons do is bypass
       | the kernel-provided interface, forcing applications to become
       | aware of yet another audio API that may or may not be present.
       | 
       | This is just my general opinion on how the design of the Linux
       | audio system is lacking. I am aware that it's probably not a
       | terribly popular opinion. No need to hate me for it.
       | 
       | [End of rambling.]
        
         | cycloptic wrote:
         | Putting this into the kernel won't solve anything that isn't
         | already solved with things like the Linux realtime patch. The
         | way this works is that the applications themselves need to have
         | a realtime thread to fill their buffer, and the audio daemon
         | has to be able to schedule them at the right time, so it's not
         | just the daemon that needs to have special treatment from the
         | scheduler.
         | 
         | Also keep in mind that these audio daemons act as an IPC
         | mechanism to route sound between applications and over the
         | network, not just to audio hardware. Even if you put a new
         | API in the kernel that did the graph processing and routing
         | there, you would still likely need a daemon for all the
         | other things.
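         | 
         | As an illustration of that client-side piece: the processing
         | thread of an audio client typically asks for something like
         | the following (a sketch; succeeding as an ordinary user
         | needs an rtprio rlimit or a helper such as rtkit):
         | 
         |     #include <pthread.h>
         |     #include <sched.h>
         |     #include <string.h>
         | 
         |     /* Spawn a SCHED_FIFO realtime thread, roughly the way
         |      * JACK clients run their process callbacks. */
         |     static pthread_t spawn_rt_thread(void *(*fn)(void *),
         |                                      void *arg)
         |     {
         |         pthread_attr_t attr;
         |         struct sched_param sp;
         |         pthread_t tid;
         | 
         |         pthread_attr_init(&attr);
         |         /* don't inherit the parent's ordinary policy */
         |         pthread_attr_setinheritsched(&attr,
         |                                      PTHREAD_EXPLICIT_SCHED);
         |         pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
         |         memset(&sp, 0, sizeof sp);
         |         sp.sched_priority = 20;  /* arbitrary RT priority */
         |         pthread_attr_setschedparam(&attr, &sp);
         |         pthread_create(&tid, &attr, fn, arg);
         |         return tid;
         |     }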
        
           | gens wrote:
           | It would solve the needless IPC, cache thrashing, priority
           | scheduling (since it becomes a kernel thread instead of a
           | userspace thread), and other busywork.
        
             | cycloptic wrote:
             | Would it? Linux does support realtime priority
             | scheduling; JACK has worked this way for years. The
             | thing is, you need userspace realtime threads because
             | that is what the clients use; it's not enough to change
             | just the mixing thread into a kernel thread.
        
         | jsmith45 wrote:
         | But one of the goals of this is to be able to handle video
         | and audio together. (This enables an easier API for ensuring
         | audio and video remain in sync with each other, which can be
         | tricky in some scenarios when both use totally separate
         | APIs.)
         | 
         | The other main goal is to simultaneously support both pro-audio
         | flows like JACK, and consumer flows like PulseAudio without all
         | the headaches caused by trying to run both of those together.
         | 
         | Lastly, PipeWire is specifically designed to support the
         | protocols of basically all existing audio daemons. So if the
         | new APIs provide no benefit to your program, you might as
         | well just ignore them and continue to use the PulseAudio
         | APIs, or the JACK APIs, or the ESD APIs, or the ALSA APIs,
         | or ... (you get the idea).
         | 
         | Now you are not wrong that audio is a real time task, and that
         | there are advantages to running part of it kernel side
         | (especially if low latency is desired, since the main way to
         | mitigate issues from scheduling uncertainties is to use large
         | buffers, which is the opposite of low latency).
         | 
         | On the other hand, I'm not sure an API like you propose will
         | work as needed. For example, there really are cases where
         | sources A, B, C and D need to be output to devices W, X, Y,
         | and Z, but with different mixes for each, some of which
         | might need delays added or effects applied (like reverb,
         | compression, or frequency equalization curves), and I have
         | not even mentioned yet that device W is not a physical
         | device, but actually the audio feed for a video stream to be
         | encoded and transmitted live.
         | 
         | Try designing something that can handle all of that kernel
         | side. Some of it you will obviously have no chance of
         | running in kernel mode. That typically implies that
         | everything before it in the audio pipeline ought to get done
         | in user mode; otherwise the kernel-mode to user-mode
         | transition has most of the scheduling concerns that a full
         | user-space audio pipeline implementation has. For things
         | like per-output-device effects, that would imply basically
         | the whole pipeline being in user mode.
         | 
         | The whole thing is a very thorny issue with no perfect
         | solutions, just a whole load of different potential
         | tradeoffs. Moving more into kernel mode may be a sensible
         | tradeoff for some scenarios, yet for others that kernel-side
         | implementation may be unusable, just contributing more
         | complexity to the endless array of possible audio APIs.
        
         | tinco wrote:
         | I only read this article, so I'm still fuzzy on the exact
         | technical details, but couldn't a system like pipewire
         | eventually be adopted into the kernel after it has proven
         | itself adequate? Or is that not a thing the kernel does?
        
           | bitbang wrote:
           | Probably not. Kernel handles the hardware. User-space deals
           | with things like routing, mixing, resampling, fx, etc. Having
           | that functionality outside of the kernel offers a lot more
           | flexibility. Despite people chafing at the user-space audio
           | API churn, it does allow advancements that would be much more
           | difficult to do if implemented in the kernel.
        
         | regularfry wrote:
         | Crossing the streams a bit, I'm wondering if there's enough
         | grunt in eBPF to do mixing and resampling.
        
         | gens wrote:
         | Resampling in userspace and then sending the result to the
         | kernel is how it already works... in ALSA. The only real
         | problem with how ALSA does things is that you can't just
         | switch the output (for example, sound card to HDMI) for a
         | running stream. PA solves this by basically being a network
         | packet router (bus, switch, "sound daemon", however you want
         | to call it). PulseVideo^H PipeWire, from the little I cared
         | to look, is basically the same thing.
         | 
         | Another problem with ALSA, as well as PA, is that you can't
         | change the device settings (sampling rate, bit depth, buffer
         | size and shape) without basically restarting all audio.
         | (Note: you can't really do it anyway, as multiple programs
         | could want different rates, buffers, and such.)
         | 
         | In my opinion, the proper way to do audio would be to do it
         | in the kernel and to have one (just one) daemon that
         | controls the state of the system. That would require
         | resampling in the kernel for almost all audio hardware.
         | Resampling is not really a problem. Yes, resampling should
         | be fixed-point, and not just because the kernel doesn't want
         | floating-point math in it. Controlling volume is a cheap
         | multiply (or divide), and mixing streams is just an addition
         | (but with saturation, of course).
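         | 
         | As a sketch of how cheap that integer path is (my own
         | illustration, not code from any real kernel):
         | 
         |     /* fixed-point volume and saturated mixing for
         |      * signed 16-bit PCM -- integer-only math */
         |     #include <stdint.h>
         | 
         |     /* volume: Q15 gain, 0..32768 maps to 0.0..1.0 */
         |     static inline int16_t scale_s16(int16_t s,
         |                                     uint16_t gain_q15)
         |     {
         |         return (int16_t)(((int32_t)s * gain_q15) >> 15);
         |     }
         | 
         |     /* mix: widen, add, clip to the 16-bit range */
         |     static inline int16_t mix_s16(int16_t a, int16_t b)
         |     {
         |         int32_t sum = (int32_t)a + (int32_t)b;
         |         if (sum > INT16_MAX) sum = INT16_MAX;
         |         if (sum < INT16_MIN) sum = INT16_MIN;
         |         return (int16_t)sum;
         |     }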
         | 
         | Special cases are one program streaming to another (a la
         | JACK), and stuff like Bluetooth or audio over the network.
         | Those should be in userspace, for the most part. Oh, and
         | studio hardware, as it often has special hardware switches,
         | DSPs, or whatever.
         | 
         | Sincerely: I doubt I could do it (and even if I could,
         | nobody would care and the Fedoras would say "no, we are
         | doing what ~we~ want"). So I gave up a long while ago. And I
         | doubt anybody else would fight up that hill to do it
         | properly. Half-assed solutions usually prevail, especially
         | if presented as full-ass (as most don't know better).
         | 
         | P.S. Video is a series of bitmaps, just as audio is a series
         | of samples. They are already in memory (system or GPU).
         | Treating either of them as a networking problem is the wrong
         | way of thinking, IMO. The only thing that matters is timing.
         | 
         | P.P.S. And transparency: a user should always easily be able
         | to see when a stream is being resampled, where it is going,
         | etc. And they should be able to change anything relating to
         | that stream, and to the hardware, in flight via a GUI.
        
         | the8472 wrote:
         | > AFAIK, Windows runs part of the Audio subsystem outside the
         | kernel, but these processes get special treatment by the
         | scheduler to meet deadlines.
         | 
         | Wouldn't assigning deadline-based scheduling priorities to
         | the PipeWire daemon do the same job?
        
           | gmueckl wrote:
           | Isn't the deadline realtime scheduler optional? How many
           | distros do actually ship it in their default kernels? I
           | honestly didn't manage to keep track of this.
        
             | cycloptic wrote:
             | The deadline scheduler is upstream; see "man 7 sched"
             | for a description:
             | https://man7.org/linux/man-pages/man7/sched.7.html
             | 
             | What is not upstream (yet) is the PREEMPT_RT patch which
             | makes all kernel threads fully preemptible.
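             | 
             | As a sketch of what opting in to the deadline policy
             | looks like, per sched(7) and sched_setattr(2) -- glibc
             | has no wrapper, so it's a raw syscall; the struct layout
             | follows the man page and the budget numbers are made-up
             | examples (needs CAP_SYS_NICE to succeed):
             | 
             |     #define _GNU_SOURCE
             |     #include <stdint.h>
             |     #include <string.h>
             |     #include <sys/syscall.h>
             |     #include <unistd.h>
             | 
             |     #ifndef SCHED_DEADLINE
             |     #define SCHED_DEADLINE 6
             |     #endif
             | 
             |     /* not exported by glibc; see sched_setattr(2) */
             |     struct sched_attr {
             |         uint32_t size;
             |         uint32_t sched_policy;
             |         uint64_t sched_flags;
             |         int32_t  sched_nice;
             |         uint32_t sched_priority;
             |         uint64_t sched_runtime;
             |         uint64_t sched_deadline;
             |         uint64_t sched_period;
             |     };
             | 
             |     /* ask for 1 ms of CPU out of every 5 ms */
             |     static int go_deadline(void)
             |     {
             |         struct sched_attr a;
             |         memset(&a, 0, sizeof a);
             |         a.size           = sizeof a;
             |         a.sched_policy   = SCHED_DEADLINE;
             |         a.sched_runtime  = 1000000;  /* ns */
             |         a.sched_deadline = 5000000;  /* ns */
             |         a.sched_period   = 5000000;  /* ns */
             |         return syscall(SYS_sched_setattr, 0, &a, 0);
             |     }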
        
       | [deleted]
        
       ___________________________________________________________________
       (page generated 2021-03-03 23:01 UTC)