[HN Gopher] FLAC 1.5 Delivers Multi-Threaded Encoding
___________________________________________________________________
FLAC 1.5 Delivers Multi-Threaded Encoding
Author : mikece
Score : 188 points
Date : 2025-02-11 13:58 UTC (9 hours ago)
(HTM) web link (www.phoronix.com)
(TXT) w3m dump (www.phoronix.com)
| masklinn wrote:
| That's nice, although probably not of much use to most people:
| IIRC FLAC encoding was already 60x+ realtime on modern machines,
| so unless you need to transcode your entire library (which you
| could do in parallel anyway), odds are you spend more time
| setting up the encoding than actually running it.
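|
| (At 60x realtime, a 5-minute track encodes in roughly 5 seconds
| on a single core.)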
| flounder3 wrote:
| Was about to say the same thing. It was already blazingly fast,
| with a typical album only taking seconds.
| diggan wrote:
| > That's nice although probably not of much use to most people
|
| Doesn't that depend on the hardware of "most people"? Even if
| you have a shit CPU, you probably have more than 1 core, so
| this will be at least a little bit helpful for those people,
| wouldn't it?
|
| Edit: Just tried turning a ~5 minute .wav into .flac (without
| multithreading) on an Intel Celeron N4505 (worst CPU I have
| running atm) and it took around 20 seconds, FWIW
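|
| Roughly what that looks like on the command line (illustrative
| filenames, not my exact invocation; -j is the new threads flag
| in 1.5):
|
|       # single-threaded encode at max compression
|       flac -8 -f -o output.flac input.wav
|       # same encode using both Celeron cores with flac 1.5
|       flac -j2 -8 -f -o output.flac input.wav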
| _flux wrote:
| But even fewer people have individual, very long raw audio
| files.
|
| I've converted a bunch of sound packs (for music production)
| to flac and it really takes next to no time at all. I suppose
| those are quite short audio files, but there's a lot of them,
| 20 gigabytes in total (in flac).
|
| Perhaps the person who wrote this improvement did have a use
| case for it, though :).
| johncolanduoni wrote:
| In most situations you'd be encoding more than one song at a
| time, which would already parallelize enough unless you had a
| monster CPU and only one album.
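|
| For example, a per-file fan-out already keeps every core busy
| (a sketch assuming GNU parallel or xargs is available):
|
|       # one flac process per file, one per core (GNU parallel)
|       parallel flac -8 -f {} ::: *.wav
|       # or the same idea with find + xargs
|       find . -name '*.wav' -print0 | xargs -0 -P"$(nproc)" -n1 flac -8 -f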
| diggan wrote:
| I dunno, when I export a song I'm working on, it's just
| that one song. I think there are more use cases for .flac
| than just converting a big existing music collection.
| stonemetal12 wrote:
| FLAC is more than 20 years old at this point.
|
| At least according to Wikipedia, it doesn't look like they
| have changed the algorithm much in the meantime, so just about
| anything should be able to run it today.
| masklinn wrote:
| > this will be at least little bit helpful for those people,
| wouldn't it
|
| Probably not, because they're unlikely to have enough stuff
| to export for it to be relevant.
|
| > Edit: Just tried turning a ~5 minute .wav into .flac
| (without multithreading) on a Intel Celeron N4505 (worst CPU
| I have running atm) and took around 20 seconds, FWIW
|
| Which is basically nothing. It takes more time than that to
| fix the track's tagging.
| diggan wrote:
| I mean, you're again assuming the only use case is "encode
| and tag a huge music collection"; encoding is used for more
| than that.
|
| For example, I have a raspberry pi that is responsible for
| recording as soon as it powers up. Then I can press a
| button, and it grabs the last 60 recorded minutes, which
| happen to be saved as .wav right now. I'm fine with that,
| since the rest of my pipeline works with .wav.
|
| But if my pipeline required flac, I would need to turn the
| wav into flac on the raspberry pi at that point, for 60
| minutes of audio, and of course I'd like that to be faster
| if possible, so I can start editing it right away.
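|
| Something like this would be the sketch, assuming a 4-core Pi
| and hypothetical filenames:
|
|       # encode the captured hour using all four cores (default -5 compression)
|       flac -j4 -f -o last60min.flac last60min.wav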
| 2OEH8eoCRo0 wrote:
| Even so, it was no issue saturating all CPU cores, since each
| core could transcode one track at a time.
| dale_glass wrote:
| I have a possible use for FLAC for realtime audio.
|
| We (Overte, an open source VR environment) have a need for fast
| audio encoding, and sometimes audio encoding CPU time is the
| main bottleneck. For this purpose we support multiple codecs,
| and FLAC is actually of interest because it turns out that the
| niche of "compressing audio really fast but still in good
| quality" is a rare one.
|
| We mainly use Opus, which is great, but it's fairly CPU
| heavy, so there can be cases when one might want to sacrifice
| some bandwidth in exchange for less CPU time.
| dijital wrote:
| For folks working in bioacoustics I think it might be pretty
| relevant. I'm working on a project with large batches of high
| fidelity, ultrasonic bioacoustic recordings that need to be in
| WAV format for species analysis but, at the data sizes
| involved, FLAC is a good archive format (~60% smaller).
|
| This release will probably be worth a look to speed up the
| archiving/restoring jobs.
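|
| A sketch of the round trip (hypothetical filenames;
| --keep-foreign-metadata preserves the extra RIFF chunks some
| analysis tools expect):
|
|       # archive: WAV -> FLAC, multi-threaded encode
|       flac -j8 -8 --keep-foreign-metadata -o recording.flac recording.wav
|       # restore: FLAC -> WAV for the analysis tools (decoding is still single-threaded)
|       flac -d --keep-foreign-metadata -o recording.wav recording.flac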
| Lockal wrote:
| It could be useful for audio editors like here:
| https://manual.audacityteam.org/man/undo_redo_and_history.ht...
| - many steps require a full save of the tracks (potentially
| dozens of them). It is possible to compress history
| retrospectively, but why, if it can be done in parallel?
| CyberDildonics wrote:
| If you have multiple tracks you would just put different
| tracks on different threads anyway and parallelization is
| trivial.
| macawfish wrote:
| Will this translate to low latency FLAC streaming?
| shawabawa3 wrote:
| probably not as FLAC is basically only useful for archival
| purposes
|
| for streaming you are better off with an optimised lossy codec
| timcobb wrote:
| why is that?
| shawabawa3 wrote:
| the human ear just isn't good enough at processing sound to
| need lossless codecs
|
| a modern audio codec at 320kbps bitrate is more than good
| enough.
|
| Lossless is useful for recompressing stuff when new codecs
| come out or for different purposes without introducing
| artefacts, not really for listening (in before angry
| audiophiles attack me)
| arp242 wrote:
| > a modern audio codec at 320kbps bitrate is more than
| good enough.
|
| MP3 V0 should already be, and is typically smaller.
|
| That said, it does depend on the quality of the encoder;
| back in the day a lot of MP3 encoders were not very good,
| even at high quality settings. These days LAME is the de-
| facto standard and it's pretty good, but maybe some
| others aren't(?)
| givinguflac wrote:
| The human ear is absolutely good enough to hear the
| difference. However the vast majority of the population
| has not had listening training. I've done double blind
| tests repeatedly and provided it's not on crap audio gear
| I can absolutely tell the difference, as can most of my
| golden ears trained acquaintances.
| PaulDavisThe1st wrote:
| This conflicts with every published double blind study
| that was not in a context that had the word "audiophile"
| in its name.
|
| When you say you "can absolutely tell the difference",
| what score are you getting that proves you are doing
| better than guessing? And with what type of lossy
| encoding?
| hackingonempty wrote:
| People who make such claims either repeat the tests until
| they get one with a perfect score or otherwise don't
| count every trial, do a poor job of conversion so there
| is clipping or other artifacts that break blinding,
| compare different sources like the standard version that
| accompanies the "high rez" version on an SACD but may be
| from a different mastering, don't level match, don't
| actually do a real double blind test, don't do enough
| trials, or are just lying.
| givinguflac wrote:
| I've done multiple online tests and always scored at
| least in the 70s. Using foobar and my own CDs or hi-res
| downloads, I've encoded the same exact wav file to flac,
| mp3, and ogg. Flac wins. Using monitor speakers (Mackie
| MR5's) and a high end DAC it was not at all difficult for
| me.
|
| I truly appreciate you calling me a liar though; really
| adds to the conversation.
| BoingBoomTschak wrote:
| > Mackie MR5
|
| Don't want to sound snarky, but these are only "decent"
| (the succeeding MRx24 add an actually designed waveguide)
| and you'll never hear the sound of a DAC unless it's very
| badly implemented or has SNR troubles that show with
| ultra sensitive IEMs.
|
| Anyone who has read enough of the research on the matter
| will tell you that the codec itself is an improbable
| culprit. The way it's encoded and the test setup usually
| are "at fault" in this situation.
| hackingonempty wrote:
| I don't know why you're hearing a difference; I'm just
| pointing out that there are many reasons why you could be
| hearing a difference that are not a specific effect of
| the codec itself.
|
| You're right, I shouldn't be making crappy posts and will
| try to do better in the future.
| hylaride wrote:
| > However the vast majority of the population has not had
| listening training.
|
| This pretty much means they don't need it. And even if
| they're all trained, there's still very much a "good
| enough" for many situations. I don't need to waste data on
| lossless if I'm streaming to my phone in a noisy
| environment, even with noise cancelling. Add the fact
| that 99% of Bluetooth headphones are lossy anyways and
| you're left overoptimizing.
|
| Sitting on a beanbag at home with a pair of Hifiman
| Susvaras or some other overpriced headset, that's maybe
| another story...
| seba_dos1 wrote:
| > Add to the fact that 99% of Bluetooth headphones are
| lossy anyways and you're left overoptimizing.
|
| Perhaps somewhat counterintuitively, Bluetooth headphones
| are actually a use case where lossless audio helps the
| most, as you're avoiding double-lossy compression. SBC XQ
| isn't that bad, but it gets much worse when what you feed
| it is already lossy.
| Marsymars wrote:
| I really like my 2.4 GHz RF headphones. Not portable
| outside of the house, but optical input to the base,
| lossless wireless transmission, and compared to
| Bluetooth, better
| range/interference/pairing/obsolescence. I like them so
| much I bought a second pair that I have as a backup for
| when the first breaks.
| givinguflac wrote:
| Right, and children shouldn't be taught to name distinct
| colors and therefore they would not need it. Hot take
| there bud.
| timrichard wrote:
| Possibly, depending on the listener.
|
| But why would I bother recompressing when the various
| media players in the house can deal with the FLAC files
| just fine? On a typical home wifi network, a track
| probably transfers in about a second.
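|
| (A CD-quality FLAC track is on the order of 25-40 MB, so a few
| hundred Mbps of real wifi throughput moves one in about a
| second.)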
| shawabawa3 wrote:
| right, but I was talking about the context of low latency
| streaming, where the costs of sending FLAC over <insert
| modern audio codec here> are considerably higher (in terms
| of latency, bandwidth, etc.)
| iamacyborg wrote:
| I can understand why a big streaming provider might want to
| use a lossy codec from a bandwidth cost perspective but what
| about in the context of streaming in your own network (eg
| through Roon or similar)?
| masklinn wrote:
| Why would you transcode _to_ FLAC when streaming? And
| transcode _from_ what?
| VyseofArcadia wrote:
| Off the top of my head, let's say the file you want to
| stream is Ogg Opus, but the device you're streaming to
| only supports FLAC and MP3. You could transcode to MP3
| and get all the artifacts that come with a double lossy
| encode, or you could transcode to FLAC which doesn't buy
| you any bandwidth savings but it does avoid double lossy
| artifacts.
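|
| A minimal sketch of that transcode, assuming ffmpeg is
| available (hypothetical filenames):
|
|       # decode the Opus stream and re-wrap it losslessly as FLAC
|       ffmpeg -i song.opus -c:a flac song.flac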
| jprjr_ wrote:
| Chained Ogg FLAC works really well as an
| intermediary/internal streaming format.
|
| In my case - I have an internet radio station available
| in a few different codec/bitrate combinations. I generate
| a chained Ogg FLAC stream so I have in-band metadata with
| lossless audio.
|
| The stream gets piped into a process that encodes the lossy
| versions and updates metadata the correct way per stream
| (e.g. HLS with timed ID3, Icecast with chained Ogg Opus,
| Icecast with AAC + Shoutcast-style metadata).
| bojanvidanovic wrote:
| Out of curiosity, can you provide a link to your station?
| I have created a website for listening to lossless internet
| radio stations: https://audiophile.fm
| jprjr_ wrote:
| Well, I only use FLAC internally - none of the public
| streams are FLAC
| Night_Thastus wrote:
| Audio files are tiny, itty bitty things - even uncompressed.
| If you have the ability to use a lossless file at 0 extra
| cost... why not? Massive streaming services like Spotify
| obviously don't; the economics are way different.
| she46BiOmUerPVj wrote:
| I have a flac collection that I was streaming, and I ended
| up writing some software to encode the entire library to
| opus because when you are driving around you never know how
| good your bandwidth will be. Since moving to opus I never
| have my music cut off anymore. Even with the nice stereo in
| my car I don't notice any quality problems. There are
| definitely reasons to not stream wav or flac around all the
| time.
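|
| Not the software I wrote, but the same idea can be sketched
| with opus-tools and GNU parallel, assuming both are installed:
|
|       # transcode every FLAC to 128 kbps Opus, one file per core
|       find Music -name '*.flac' | parallel opusenc --bitrate 128 {} {.}.opus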
| givinguflac wrote:
| Why re-encode to a crap codec when you could just use
| plex with adaptive bitrate streaming?
| aeroevan wrote:
| I doubt whatever plex would do could beat opus (unless
| it's already transcoding to opus)
| pimeys wrote:
| If you decide to stream at a lower bitrate in Plexamp,
| it transcodes to Opus.
|
| You shouldn't need to re-encode the files; just use Plex or
| Jellyfin and choose a lower bitrate when playing on your
| phone. Jellyfin uses AAC and Plexamp uses Opus.
| amlib wrote:
| Could you elaborate on Opus being a crap codec? AFAIK
| it's a state of the art lossy codec for high quality
| sound/music (and some other applications)
| givinguflac wrote:
| Because it's lossy, period. You may not notice it if
| you're not looking hard enough, but you wouldn't accept a
| .zip file of a Word doc that was missing letters or words
| in the document. You'd use lossless compression.
|
| I'm not saying there's no use for Opus - just that if your
| goal is a high quality listening experience, that ain't
| it.
|
| https://www.ecstuff4u.com/2023/03/opus-vs-flac-
| difference-co...
| izackp wrote:
| That's a bad illustration. The letters are there...
| they're just slightly lower res. Like going from a
| 256x256 space per letter to 128x128. Is there a
| difference? Sure. Can you read it perfectly fine? Of
| course.
|
| You could probably argue that these are handwritten
| letters, but the argument still stands.
| amlib wrote:
| That's like saying cars are crap because they're not as
| powerful as trucks. They're completely different classes of
| vehicles optimizing for different use cases. So are lossy vs
| lossless codecs; you can't just say one is superior to the
| other without specifying the use case.
|
| For instance, I've got a navidrome instance with all my
| music library accessible from anywhere in the world
| through my phone. However, there are situations where I may
| not have any internet connection, so I use the app on the
| phone (Tempo) to mark the songs I want downloaded and
| available even when offline. But my phone storage wouldn't
| hold even a quarter of my playlists if I went with the
| original encode of the songs (mostly lossless flacs), so I
| instead set it to download a transcoded Opus 128kbps version
| of it all, and it fits on my phone with room to spare. It
| sounds pretty damn good through my admittedly average IEMs,
| and I get the benefit of offline playback. Even with the
| absolute best playback system connected to my phone you
| might be able to tell the difference, but it beats having
| to rely on internet connectivity.
| pimeys wrote:
| Tell that to some of my 24bit/192kHz flac files. About 300
| megabytes each. Not nice to stream with plexamp using my 40
| Mbps upstream... Easy to encode in opus though.
| Marsymars wrote:
| Even uncompressed, 24-bit/192 kHz is <10 Mbps.
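|
| (2 channels x 24 bits x 192,000 samples/s = 9,216,000 bit/s,
| i.e. roughly 9.2 Mbps raw, and FLAC typically cuts that down
| further.)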
| theandrewbailey wrote:
| > for streaming you are better off with an optimised lossy
| codec
|
| If you are Spotify, that probably makes sense. But if you are
| someone with a homelab, you probably have enough bandwidth
| and then some, so streaming FLAC to a home theater (your own
| or your friend's) makes sense.
| PaulDavisThe1st wrote:
| It is what (originally) SlimDevices (now Logitech Media
| Server) does, for example.
| epcoa wrote:
| The major streaming platforms except Spotify have offered
| lossless streaming as an upgrade or benefit (Apple Music) for
| years, and even Spotify, the holdout, is releasing "Super
| Premium" soon. Opinion aside, lossless streaming is a big
| deal.
| vodou wrote:
| Probably not. It only mentions multi-threaded encoding. Not
| decoding. But for streaming it shouldn't matter a lot since you
| only decode smaller chunks at a time. Latency should be good.
| At least that is my experience and 95% of my music listening is
| listening to FLAC files.
| jprjr_ wrote:
| For FLAC, latency is primarily driven by block size and the
| library's own internal buffering.
|
| Using the API you can set the blocksize, but there's no manual
| flush function. The only way to flush output is to call the
| "finish" function, which, as its name implies, marks the
| stream as finished.
|
| I actually wrote my own FLAC encoder for this exact reason -
| https://github.com/jprjr/tflac - focused on latency over
| efficient compression.
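|
| (For a sense of scale: at the typical 4096-sample blocksize
| and 44.1 kHz, one block is ~93 ms of audio; a 1024-sample
| blocksize gets that down to ~23 ms, before any internal
| buffering.)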
| ksec wrote:
| People looking for low latency lossless streaming may want to
| take a look at WavPack.
|
| https://www.wavpack.com
| hart_russell wrote:
| Would this be ideal for streaming from my navidrome server?
| Currently I stream FLAC on the local network and it converts
| it to opus on the fly when I'm on mobile.
| jprjr_ wrote:
| The thing I'm excited about is decoding chained Ogg FLAC files.
|
| Some software wouldn't work correctly with FLAC-based Icecast
| streams if it used libFLAC/libFLAC++ for demuxing and decoding.
| Usually these streams mux into Ogg and send updated metadata by
| closing out the previous Ogg bitstream and starting a new one. If
| you were using libFLAC to demux and decode - when the stream
| updated, it would just hang forever. Apps would have to do their
| own Ogg demuxing and reset the decoder between streams.
|
| Chained Ogg FLAC allows having lossless internet radio streams
| with rich, in-band metadata instead of relying on out-of-band
| methods. So you could have in-band album art, artist info, links
| - anything you can cram into a Vorbis comment block.
| nullify88 wrote:
| Are there any public lossless radio streams out there?
| longitudinal93 wrote:
| You can filter by "flac" on radio-browser:
|
| https://www.radio-
| browser.info/search?page=1&order=clickcoun...
| BoingBoomTschak wrote:
| Massive waste to use FLAC for Internet streaming, though. Opus
| was made for this purpose (in part).
| jiehong wrote:
| Interestingly, FLAC is now published as RFC 9639 [0].
|
| [0]: https://www.rfc-editor.org/rfc/rfc9639.html
| lazka wrote:
| On Windows (so libwinpthread), 8C/16T machine:
|
|       $ flac --version
|       flac 1.5.0
|       $ hyperfine -r5 "flac -f -8 a.wav a.flac" "flac -j16 -f -8 a.wav a.flac"
|       Benchmark 1: flac -f -8 a.wav a.flac
|         Time (mean +- s):     13.148 s +- 0.194 s  [User: 12.758 s, System: 0.361 s]
|         Range (min ... max):  12.934 s ... 13.318 s  5 runs
|       Benchmark 2: flac -j16 -f -8 a.wav a.flac
|         Time (mean +- s):     2.404 s +- 0.012 s  [User: 14.126 s, System: 1.355 s]
|         Range (min ... max):  2.395 s ... 2.425 s  5 runs
|       Summary
|         flac -j16 -f -8 a.wav a.flac ran
|           5.47 +- 0.09 times faster than flac -f -8 a.wav a.flac
___________________________________________________________________
(page generated 2025-02-11 23:01 UTC)