[HN Gopher] JPEG XL and the Pareto Front
___________________________________________________________________
JPEG XL and the Pareto Front
Author : botanical
Score : 414 points
Date : 2024-03-01 06:55 UTC (16 hours ago)
(HTM) web link (cloudinary.com)
(TXT) w3m dump (cloudinary.com)
| mikae1 wrote:
| If only Google could be convinced to adopt this marvelous
| codec... Not looking super positive at the moment:
|
| https://issues.chromium.org/issues/40270698
|
| https://bugs.chromium.org/p/chromium/issues/detail?id=145180...
| pgeorgi wrote:
| All those requests to revert the removal are funny: you want
| Chrome to re-add jxl behind a feature flag? Doesn't seem very
| useful.
|
| Also, all those Chrome offshoots (Edge, Brave, Opera, etc)
| could easily add and enable it to distinguish themselves from
| Chrome ("faster page load", "less network use") and don't.
| Makes me wonder what's going on...
| silisili wrote:
| Simply put, these offshoots don't really seem to do browser
| code, and realize how expensive it would be for them to
| diverge at the core.
| eviks wrote:
| No, obviously to re-add jxl without a flag
| pgeorgi wrote:
| "jxl without a flag" can't be re-added because that was
| never a thing.
| albert180 wrote:
| What stupid pedantry. Feel better now?
| elygre wrote:
| Or (re-add jxl) (without a flag).
| eviks wrote:
| It can, and that's why you didn't just say "re-add jxl" but
| had to mention the flag. 're-add' has no flag implication;
| that pedantic constraint is something you've made up, and
| it's not what people want. Just read those linked issues.
| pgeorgi wrote:
| It has a flag implication because jpeg-xl never came
| without being hidden behind a flag. Nothing was taken
| away from ordinary users at any point in time.
|
| And I suppose the Chrome folks have the telemetry to know
| how many people set that damn flag.
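|
| (For reference: the experiment was exposed roughly like this.
| Names are from memory, so they may differ by Chrome version.)
|
|   # JPEG XL decoding in roughly Chrome 91-109 was gated behind
|   # chrome://flags/#enable-jxl, or from the command line:
|   google-chrome --enable-features=JXL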
| jdiff wrote:
| >"But the plans were on display..."
|
| > "On display? I eventually had to go down to the cellar
| to find them."
|
| > "That's the display department."
|
| > "With a flashlight."
|
| > "Ah, well, the lights had probably gone."
|
| > "So had the stairs."
|
| > "But look, you found the notice, didn't you?"
|
| > "Yes," said Arthur, "yes I did. It was on display in
| the bottom of a locked filing cabinet stuck in a disused
| lavatory with a sign on the door saying 'Beware of the
| Leopard.'"
| pgeorgi wrote:
| I guess you're referring to the idea that the flag made
| the previous implementation practically non-existent for
| users. And I agree!
|
| But "implement something new!" is a very different demand
| from "you took that away from us, undo that!"
| jdiff wrote:
| > No, obviously to re-add jxl without a flag
|
| Is asking for the old thing to be re-added, but without
| the flag that sabotaged it. It is the same as "you took
| that away from us, undo that!" Removing a flag does not
| turn it into a magical, mystical new thing that has to be
| built from scratch. This is silly. The entire point of
| having flags is to provide a testing platform for code
| that may one day have the flag removed.
| eviks wrote:
| I suppose I'll trust the reality of what actual users are
| expressly asking for vs. your imagination that something
| different is implied
| pgeorgi wrote:
| Actual users, perhaps. Or maybe concern trolls paid by a
| patent holder who's trying to prepare the ground for a
| patent-based extortion scheme. Or maybe Jon Sneyers with
| an army of sock puppets. These "actual users" are just as
| real to me as Chrome's telemetry.
|
| That said: these actual users didn't demonstrate any
| hacker spirit or interest in using JXL in situations
| where they could. Where's the wide-spread use of jxl.js
| (https://github.com/niutech/jxl.js) to demonstrate that
| there are actual users desperate for native codec
| support? (aside: jxl.js is based on Squoosh, which is a
| product of GoogleChromeLabs) If JXL is sooo important,
| surely people would use whatever workaround they can
| employ, no matter if that convinces the Chrome team or
| not, simply because they benefit from using it, no?
|
| Instead all I see is people _not_ exercising their
| freedom and initiative to support that best-thing-since-
| sliced-bread-apparently format, but whining that Chrome is
| oh-so-dominant and forces their choices of codecs upon
| everybody else.
|
| Okay then...
| 149765 wrote:
| I tried jxl.js, but it was very finicky on iPad: out of
| memory errors [0] and blurry images [1]. In the end I
| switched to a proxy server that re-encoded JXL images
| into PNG.
|
| [0]: https://github.com/niutech/jxl.js/issues/6
|
| [1]: https://github.com/niutech/jxl.js/issues/7
| pgeorgi wrote:
| Both issues seem to have known workarounds that could
| have been integrated to support JXL on iOS properly
| earlier than by waiting on Apple (who integrated JXL in
| Safari 17 apparently), so if anything that's a success
| story for "provide polyfills to support features without
| relying on the browser vendor."
| 149765 wrote:
| The blur issue is an easy fix, yes, but the memory one
| doesn't help that much.
| lonjil wrote:
| > And I suppose the Chrome folks have the telemetry to
| know how many people set that damn flag.
|
| How is that relevant? Flags are to allow testing, not to
| gauge interest from regular users.
| lonjil wrote:
| > you want Chrome to re-add jxl behind a feature flag?
| Doesn't seem very useful.
|
| Chrome has a neat feature where some flags can be enabled by
| websites, so that websites can choose to cooperate in
| testing. They never did this for JXL, but if they re-added
| JXL behind a flag, they could do so with such testing
| enabled. Then they could get real data from websites actually
| using it, without committing to supporting it if it isn't
| useful.
|
| > Also, all those Chrome offshoots (Edge, Brave, Opera, etc)
| could easily add and enable it to distinguish themselves from
| Chrome ("faster page load", "less network use") and don't.
| Makes me wonder what's going on...
|
| Edge doesn't use Chrome's own codec support. It uses
| Windows's media framework. JXL is being added to it next
| year.
| firsching wrote:
| > Edge doesn't use Chrome's own codec support. It uses
| Windows's media framework. JXL is being added to it next
| year.
|
| Interesting!
| sergioisidoro wrote:
| It's so frustrating how the Chromium team is ending up as a
| gatekeeper of the Internet by picking and choosing what gets
| developed or not.
|
| I recently came across another issue pertaining to the
| Chromium team not budging on their decisions, despite
| pressure from the community and an RFC backing it up - in my
| case, custom headers in WebSocket handshakes, which are
| supported by other JavaScript runtimes like Node and Bun, but
| the Chromium maintainer just disagrees with it -
| https://github.com/whatwg/websockets/issues/16#issuecomment-...
| hwbunny wrote:
| Question is for how long. Time to slam the hammer on them.
| hhh wrote:
| Why not make a better product than slam some metaphorical
| hammer?
| mort96 wrote:
| That's not how this works. Firefox is the closest we
| have, and realistically the closest we will get to a
| "better product" than Chromium for the foreseeable
| future, and it's clearly not enough.
| KingOfCoders wrote:
| And Firefox does not support the format. Mozilla is the
| same political company as everyone else.
| bombcar wrote:
| The _only_ hammer _at all_ left is Safari, basically on
| iPhones only.
|
| That hammer is _very_ close to going away; if the EU does
| force Apple to really open the browsers on the iPhone,
| everything will be Chrome as far as the eye can see in
| short order. And then we fully enter the chromE6 phase.
| Certhas wrote:
| Because "better" products don't magically win.
| caskstrength wrote:
| What hammer? You want US president or supreme court to
| compel Chrome developers to implement every image format in
| existence and every JS API proposed by anyone anywhere?
|
| Unless it is some kind of anti-competitive behavior, like
| intentionally stifling adoption of a standard that competes
| with their own proprietary patent-encumbered implementation
| which they expect to collect royalties on (which doesn't seem
| to be the case), then I don't see the problem.
| madeofpalk wrote:
| Where's Firefox's and Webkit's position on the proposal?
| jonsneyers wrote:
| Safari/Webkit has added JPEG XL support already.
|
| Firefox is "neutral", which I understand as meaning they'll
| do whatever Chrome does.
|
| All the code has been written, patches to add JPEG XL
| support to Firefox and Chromium are available and some of
| the forks (Waterfox, Pale Moon, Thorium, Cromite) do have
| JPEG XL support.
| lonjil wrote:
| I believe they were referring to that WebSocket issue,
| not JXL.
| pgeorgi wrote:
| > It's so frustrating how the Chromium team is ending up as a
| gatekeeper of the Internet by picking and choosing what gets
| developed or not.
|
| https://github.com/niutech/jxl.js is based on Chromium tech
| (Squoosh from GoogleChromeLabs) and provides an opportunity
| to use JXL with no practical way for Chromium folks to
| intervene.
|
| Even if that's a suboptimal solution, JXL's benefits
| supposedly should outweigh the cost of integrating it, and
| yet I haven't seen actual JXL users running to it in droves.
|
| So JXL might not be good support for your theory: where
| people could, they still don't. Maybe the format isn't
| actually that important; it's just a popular meme to rehash.
| jillesvangurp wrote:
| They didn't "lose interest", their lawyers pulled the emergency
| brakes. Blame patent holders, not Google. Like Microsoft:
| https://www.theregister.com/2022/02/17/microsoft_ans_patent/.
| Microsoft could probably be convinced to be reasonable. But
| there may be a few others. Google actually also holds some
| patents over this but they've done the right thing and license
| those patents along with their implementation.
|
| To fix this, you'd need to convince Google, and other large
| companies that would be exposed to law suits related to these
| patents (Apple, Adobe, etc.), that these patent holders are not
| going to insist on being compensated.
|
| Other formats are less risky, especially the older ones. JPEG
| is fine because it's been out there for so long that any
| patents applicable to it have long expired. Same with GIF,
| which was once held up by patents. PNG is at this point also
| fine: if any patents applied at all, they will soon have
| expired, as the PNG standard dates back to 1997 and work on
| it depended on research from the seventies and eighties.
| lifthrasiir wrote:
| > [...] other large companies that would be exposed to law
| suits related to these patents (Apple, Adobe, etc.) [...]
|
| Adobe included JPEG XL support in their products, and it is
| also in the DNG specification. So that argument is pretty
| much dead, no?
| jillesvangurp wrote:
| Not that simple. Maybe they struck a deal with a few of the
| companies or they made a different risk calculation. And of
| course they have a pretty fierce patent portfolio
| themselves so there's the notion of them being able to
| retaliate in kind to some of these companies.
| lifthrasiir wrote:
| I don't think that's true (see my other comment for what
| the patent is really about), but even if it is, Adobe's
| adoption means that JPEG XL is worth the supposed "risk".
| And Google does ship a lot of technologies that are
| clearly patent-encumbered. If the patent is the main
| concern, they could have answered so because there are
| enough people wondering about the patent status, but the
| Chrome team's main reason against JPEG XL was quite
| different.
| izacus wrote:
| Adobe also has an order of magnitude fewer installs than
| Chrome or Firefox, which makes patent fees much cheaper. And
| their software is actually paid for by users.
| luma wrote:
| Adobe sells paid products and can carve out a license fee
| for that, like they do with all the other codecs and
| libraries they bundle. That's part of the price you are
| paying.
|
| Harder to do for users of Chrome.
| lifthrasiir wrote:
| The same thing can be said of many patent-encumbered video
| codecs which Chrome does support nevertheless. That alone
| can't be a major deciding factor, especially given that the
| rate of JPEG XL adoption has been remarkably faster than that
| of any recent media format.
| afavour wrote:
| Is this not simply a risk vs reward calculation? Newer
| video codecs present a very notable bandwidth saving over
| old ones. JPEG XL presents minor benefits over WebP,
| AVIF, etc. So while the dangers are the same for both the
| calculation is different.
| KingOfCoders wrote:
| Video = billions in lower costs for YouTube.
| zokier wrote:
| > their lawyers pulled the emergency brakes
|
| Do you have source for that claim?
| kasabali wrote:
| Probably this:
| https://www.theregister.com/2022/02/17/microsoft_ans_patent/
|
| I think it would have been much better for everyone involved,
| and for humanity, if Mr. Duda himself had gotten the patent
| in the first place instead of praying no one else would.
| mananaysiempre wrote:
| Duda published his ideas, that's supposed to be it.
| lonjil wrote:
| Prior art makes patents invalid anyway.
| michaelt wrote:
| Absolutely.
|
| And nothing advances your career quite like getting your
| employer into a multi-year legal battle and spending a
| few million on legal fees, to make some images 20%
| smaller and 100% less compatible.
| lonjil wrote:
| Well, lots of things other than JXL use ANS. If someone
| starts trying to claim ANS, you'll have Apple, Disney,
| Facebook, and more, on your side :)
| mort96 wrote:
| But that doesn't matter. If a patent is granted, choosing
| to infringe on it is risky, even if you believe you could
| make a solid argument that it's invalid given enough
| lawyer hours.
| lonjil wrote:
| The Microsoft patent is for an "improvement" that I don't
| believe anyone is using, but Internet commentators seem
| to think it applies to ANS in general for some reason.
|
| A few years earlier, Google was granted a patent for ANS
| in general, which made people very angry. Fortunately
| they never did anything with it.
| JyrkiAlakuijala wrote:
| I believe that Google's patent application dealt with
| interleaving non-compressed and ANS data in a manner that
| made streaming coding easy and fast in software, not a
| general ANS patent. I didn't read it myself, but I discussed
| it briefly with a capable engineer who had.
| mort96 wrote:
| If the patent doesn't apply to JXL then that's a different
| story; then it doesn't matter whether it's valid or not.
|
| ...
|
| The fact that Google _does_ have a patent which covers
| JXL is worrying though. So JXL is patent encumbered after
| all.
| lonjil wrote:
| I misrecalled. While the Google patent is a lot more
| general than the Microsoft one, it doesn't apply to most
| uses of ANS.
| jillesvangurp wrote:
| I'm just inferring from the fact that MS got a patent and
| then this whole thing ground to a halt.
| peppermint_gum wrote:
| In other words, there's no source.
| lifthrasiir wrote:
| Not only do you have no source backing your claim, there is
| a glaring counterexample. Chromium's experimental JPEG XL
| support carried an expiry milestone which was delayed
| multiple times; it was bumped for the last time in _June_
| 2022 [1], before the final removal in October - months
| _after_ the patent was granted!
|
| [1] https://issues.chromium.org/issues/40168998#comment52
| peppermint_gum wrote:
| >To fix this, you'd need to convince Google, and other large
| companies that would be exposed to law suits related to these
| patents (Apple, Adobe, etc.), that these patent holders are
| not going to insist on being compensated.
|
| Apple has implemented JPEG XL support in macOS and iOS. Adobe
| has also implemented support for JPEG XL in their products.
|
| Also, if patents were the reason Google removed JXL from
| Chrome, why would they make up technical reasons for doing
| so?
|
| Please don't present unsourced conspiracy theories as if they
| were confirmed facts.
| jillesvangurp wrote:
| You seem to be all over this. So, what's your alternate
| theory?
|
| I've not seen anything other than "google is evil,
| boohoohoo" in this thread. That's a popular sentiment but
| it doesn't make much sense in this context.
|
| There must be a more rational reason than that. I've not
| heard anything better than legal reasons. But do correct me
| if I'm wrong. I've worked in big companies, and patents can
| be a show stopper. Seems like a plausible theory (i.e. not
| a conspiracy theory). We indeed don't know what happened
| because Google is clearly not in a mood to share.
| Scaevolus wrote:
| If you want a simple conspiracy theory, how about this:
|
| The person responsible for AVIF works on Chrome, and is
| responsible for choosing which codecs Chrome ships with.
| He obviously prefers his AVIF to a different team's JPEG-
| XL.
|
| It's a case of simple selfish bias.
| lonjil wrote:
| Mate, you're literally pulling something from your ass.
| Chrome engineers claim that they don't want JXL because
| it isn't good enough. Literally no one involved has said
| that it has anything to do with patents.
| JyrkiAlakuijala wrote:
| Why not take Chrome's word for it:
|
| ---cut---
|
| Helping the web to evolve is challenging, and it requires
| us to make difficult choices. We've also heard from our
| browser and device partners that every additional format
| adds costs (monetary or hardware), and we're very much
| aware that these costs are borne by those outside of
| Google. When we evaluate new media formats, the first
| question we have to ask is whether the format works best
| for the web. With respect to new image formats such as
| JPEG XL, that means we have to look comprehensively at
| many factors: compression performance across a broad
| range of images; is the decoder fast, allowing for speedy
| rendering of smaller images; are there fast encoders,
| ideally with hardware support, that keep encoding costs
| reasonable for large users; can we optimize existing
| formats to meet any new use-cases, rather than adding
| support for an additional format; do other browsers and
| OSes support it?
|
| After weighing the data, we've decided to stop Chrome's
| JPEG XL experiment and remove the code associated with
| the experiment. [...]
|
| From: https://groups.google.com/a/chromium.org/g/blink-
| dev/c/WjCKc...
| JyrkiAlakuijala wrote:
| I'll try to make a bullet-point list of the individual
| concerns; the original statement is written in a style that
| is a bit confusing for a non-native speaker such as me.
|
| * Chrome's browser partners say JPEG XL adds monetary or
| hardware costs.
|
| * Chrome's device partners say JPEG XL adds monetary or
| hardware costs.
|
| * Does JPEG XL work best for the web?
|
| * What is JPEG XL compression performance across a broad
| range of images?
|
| * Is the decoder fast?
|
| * Does it render small images fast?
|
| * Is encoding fast?
|
| * Hardware support keeping encoding costs reasonable for
| large users.
|
| * Do we need it at all or just optimize existing formats
| to meet new use-cases?
|
| * Do other browsers and OSes support JPEG XL?
|
| * Can it be done sufficiently well with WASM?
| JyrkiAlakuijala wrote:
| * [...] monetary or hardware costs.
|
| We could perhaps create a GoFundMe page for making it
| cost neutral for Chrome's partners. Perhaps some industry
| partners would chime in.
|
| * Does JPEG XL work best for the web?
|
| Yes.
|
| * What is JPEG XL compression performance across a broad
| range of images?
|
| All of them. The more difficult it is to compress, the
| better JPEG XL is. It is at its best at natural images
| with noisy textures.
|
| * Is the decoder fast?
|
| Yes. See blog post.
|
| * Does it render small images fast?
|
| Yes. I don't have a link, but I tried it.
|
| * Is encoding fast?
|
| Yes. See blog post.
|
| * Hardware support keeping encoding costs reasonable for
| large users.
|
| https://www.shikino.co.jp/eng/ is building it based on
| libjxl-tiny.
|
| * Do we need it at all or just optimize existing formats
| to meet new use-cases?
|
| Jpegli is great. JPEG XL allows for 35% more. Compared to
| jpegli, it creates wealth of a few hundred billion in users'
| waiting times. So, it's a yes.
|
| * Do other browsers and OSes support JPEG XL?
|
| Possibly. iOS and Safari support it. DNG supports it.
| Windows and some Androids don't.
|
| * Can it be done sufficiently well with WASM?
|
| Wasm creates additional complexity, adds to load times,
| and possibly to computation times too.
|
| Some more work is needed before all of Chrome's questions
| can be answered.
| peppermint_gum wrote:
| >There must be a more rational reason than that. I've not
| heard anything better than legal reasons. But do correct
| me if I'm wrong. I've worked in big companies, and
| patents can be a show stopper. Seems like a plausible
| theory (i.e. not a conspiracy theory)
|
| In your first comment, you _stated as a fact_ that
| "lawyers pulled the emergency brakes". Despite literally
| no one from Google ever saying this, and Google giving
| very different reasons for the removal.
|
| And now you act as if something you made up in your mind
| is the default theory and the burden of proof is on the
| people disagreeing with you.
| youngtaff wrote:
| The people who look after Chrome's media decoding are an
| awkward bunch; they point-blank refuse to support <img
| src=*.mp4>, for example.
| JyrkiAlakuijala wrote:
| That ANS patent supposedly relates to refining the coding
| tables based on the symbols being decoded.
|
| That is slower for decoding, and JPEG XL does not do it, for
| decoding speed reasons.
|
| The specification doesn't allow it: all coding tables need to
| be in final form.
| bmicraft wrote:
| Safari has supported JXL since version 17.
| jonsneyers wrote:
| There are no royalties to be paid on JPEG XL. Nobody but
| Cloudinary and Google is claiming to hold relevant patents,
| and Cloudinary and Google have provided a royalty free
| license. Of course the way the patent system works, anything
| less than 20 years old is theoretically risky. But so far,
| there is nobody claiming royalties need to be paid on JPEG
| XL, so it is similar to WebP in that regard.
| bombcar wrote:
| "Patent issues" has become a (sometimes truthful) excuse
| for not doing something.
|
| When the big boys _want_ to do something, they find a way
| to get it done, patents or no, especially if there's only
| "fear of patents" - see Apple and the whole watch fiasco.
| lonjil wrote:
| The Microsoft patent doesn't apply to JXL, and in any case,
| Microsoft has literally already affirmed that they will not
| use it to go after any open codec.
| bombcar wrote:
| How exactly is that done? I assume even an offhand comment
| by an official (like CEO, etc) that is not immediately
| walked back would at least protect people from damages
| associated with willful infringement.
| Pikamander2 wrote:
| Mozilla effectively gave up on it before Google did.
|
| https://bugzilla.mozilla.org/show_bug.cgi?id=1539075
|
| It's a real shame, because this is one of those few areas where
| Firefox could have led the charge instead of following in
| Chrome's footsteps. I remember when they first added APNG
| support and it took Chrome years to catch up, but I guess those
| days are gone.
|
| Oddly enough, Safari is the only major browser that currently
| supports it despite regularly falling behind on tons of other
| cutting-edge web standards.
|
| https://caniuse.com/jpegxl
| JyrkiAlakuijala wrote:
| I followed the Mozilla/Firefox integration closely. I was
| able to observe enthusiasm from their junior to staff level
| engineers (LinkedIn-assisted analysis of the related bugs
| ;-). However, an engineering director stepped in and locked
| the discussions because they had reached the "no new
| information" stage. Their position has remained neutral on
| JPEG XL, and the integration has not progressed from the
| nightly builds to the next stage.
|
| Ten years ago Mozilla had the most prominent image and video
| compression effort, called Daala. They posted inspiring blog
| posts about their experiments. Some of their work was
| integrated with Cisco's Thor and On2's/Chrome's VP8/9/10,
| leading to AV1 and AVIF. Today, I believe, Mozilla has moved
| away from this research and the ex-Daala researchers have
| found new roles.
| lonjil wrote:
| Daala's and Thor's features were supposed to be integrated
| into AV1, but in the end, they wanted to finish AV1 as fast
| as possible, so very little that wasn't in VP10 made it
| into AV1. I guess it will be in AV2, though.
| JyrkiAlakuijala wrote:
| I like to think that there might be an easy way to
| improve AV2 today -- drop the whole keyframe coding and
| replace it with JPEG XL images as keyframes.
| derf_ wrote:
| _> ... very little that wasn't in VP10 made it into AV1._
|
| I am not sure I would say that is true.
|
| The entire entropy coder, used by every tool, came from
| Daala (with changes in collaboration with others to
| reduce hardware complexity), as did some major tools like
| Chroma from Luma and the Constrained Directional
| Enhancement Filter (a merger of Daala's deringing and
| Thor's CLPF). There were also plenty of other
| improvements from the Daala team, such as structural
| things like pulling the entropy coder and other inter-
| frame state from reference frames instead of abstract
| "slots" like VP9 (important in real-time contexts where
| you can lose frames and not know what slots they would
| have updated) or better spatial prediction and coding for
| segment indices (important for block-level quantizer
| adjustments for better visual tuning). And that does not
| even touch on all of the contributions from other AOM
| members (scalable coding, the entire high-level
| syntax...).
|
| Were there other things I wish we could have gotten in?
| Absolutely. But "done" is a feature.
| miragecraft wrote:
| It feels like nowadays Mozilla is extremely shorthanded.
|
| They probably gave up because they simply don't have the
| money/resources to pursue this.
| Zamicol wrote:
| > The new version of libjxl brings a very substantial reduction
| in memory consumption, by an order of magnitude, for both lossy
| and lossless compression. Also the speed is improved, especially
| for multi-threaded lossless encoding where the default effort
| setting is now an order of magnitude faster.
|
| Very impressive! The article too is well written. Great work all
| around.
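|
| For anyone wanting to poke at the new lossless path, something
| like this should do it with the cjxl tool from libjxl 0.10
| (flag names per cjxl --help; treat it as a sketch):
|
|   # -d 0 selects lossless, -e 7 is the default effort,
|   # and encoding is multi-threaded
|   cjxl input.png output.jxl -d 0 -e 7 --num_threads=8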
| Modified3019 wrote:
| At the very low quality settings, it's kinda remarkable how
| JPEG manages to keep a sharper approximation of detail that
| preserves the holistic quality of the image better, in spite
| of the obvious artifacts making it look like a mess of cubism
| when examined up close. It's basically converting the image
| into some kind of abstract art style.
|
| Whereas jxl and avif just become blurry.
| porker wrote:
| Yes, my takeaway from this was that JPEG keeps edge sharpness
| really well (e.g. the eyelashes) while JXL and AVIF smooth
| all the detail out of the image.
| JyrkiAlakuijala wrote:
| It is because JPEG is given 0.5 bits per pixel, whereas JPEG
| XL and AVIF are given around 0.22 and 0.2.
|
| These images attempt to be at equal level of distortion, not at
| equal compression.
|
| Bpps are reported beside the images.
|
| In practice, quality 65 is rarely used on the internet, and
| only by the lowest quality tier of sites. Quality 75 seems to
| be the usual poor quality and quality 85 the average. I use
| quality 94 with yuv444 or better when I need to compress.
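|
| With cjpegli that would be something along these lines (flag
| names from a recent libjxl build; double-check cjpegli --help):
|
|   cjpegli input.png output.jpg -q 94 --chroma_subsampling=444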
| bmacho wrote:
| You refer to this? https://res.cloudinary.com/jon/qp-low.png
|
| Bitrates are in the left column, jpg low quality is the same
| size as jxl/avif med-low quality (0.4bpp), so you should
| compare the bottom left picture to the top mid and right
| pictures.
| mrob wrote:
| JPEG bitrates are higher, so all it means is that SSIMULACRA2
| is the wrong metric for this test. It seems that SSIMULACRA2
| heavily penalizes blocking artifacts but doesn't much care
| about blur. I agree that the JPEG versions look better at the
| same SSIMULACRA2 score.
| JyrkiAlakuijala wrote:
| Ideally one would use human ratings.
|
| The author of the blog post did exactly that in a previous
| blog post:
|
| https://cloudinary.com/labs/cid22/plots
|
| Human ratings are expensive and clumsy, so people often use
| computed (aka objective) metrics, too.
|
| The best OSS metrics today are butteraugli, dssim and
| ssimulacra. The author is using one of them. None of the
| codecs was optimized for those metrics, except jpegli
| partially.
| jonsneyers wrote:
| Humans generally tend to prefer smoothing over visible
| blocking artifacts. This is especially true when a direct
| comparison to the original image is not possible. Of course
| different humans have different tastes, and some do prefer
| blocking over blur. SSIMULACRA2 is based on the aggregated
| opinions of many thousands of people. It does care more about
| blur than metrics like PSNR, but maybe not as much as you do.
| izacus wrote:
| Well, that's because JPEG is still using about twice as many
| bits per pixel, making the output size significantly larger.
|
| Don't get swept away by false comparisons, JXL and AVIF look
| significantly better if you give them twice as much filesize to
| work with as well.
| aidenn0 wrote:
| One does wonder how much of JXL's awesomeness is the encoder vs.
| the format. Its ability to make high quality, compact images just
| with "-d 1.0" is uncanny. With other codecs, I had to pass
| different quality settings depending on the image type to get
| similar results.
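|
| For reference, that invocation is just this (assuming the cjxl
| tool from libjxl):
|
|   # -d is the max Butteraugli distance; 1.0 is roughly
|   # "visually lossless"
|   cjxl input.png output.jxl -d 1.0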
| kasabali wrote:
| That's a very good point. At this rate of development I
| wouldn't be surprised if libjxl becomes the x264 of image
| encoders.
|
| On the other hand, libvpx has always been a mediocre encoder
| which I think might be the reason for disappointing performance
| (I mean in general, not just speed) of vp8/vp9 formats, which
| inevitably also affected performance of lossy WebP. Dark
| Shikari even did a comparison of still image performances of
| x264 vs vp8 [0].
|
| [0]
| https://web.archive.org/web/20150419071902/http://x264dev.mu...
| JyrkiAlakuijala wrote:
| While WebP lossy still has image quality issues it has
| improved a lot over the years. One should not consider a
| comparison done with 2010-2015 implementations indicative of
| quality performance today.
| kasabali wrote:
| I'm sure it's better now than 13 years ago, but the
| conclusion I got from looking at very recent published
| benchmark results is that lossy webp is still only slightly
| better than mozjpeg at low bitrates and still has worse
| max. PQ ceiling compared to JPEG, which in my opinion makes
| it not worth using over plain old JPEG even in web
| settings.
| JyrkiAlakuijala wrote:
| That matches my observations. I believe that WebP lossy
| does not add value when Jpegli is an option and is having
| hard time to compete even with MozJPEG.
| edflsafoiewq wrote:
| They've also made a JPEG encoder, cjpegli, with the same "-d
| 1.0" interface.
| dingdingdang wrote:
| Excellent run-through of jpegli encoder here:
| https://giannirosato.com/blog/post/jpegli/ - wish I could
| find a pre-compiled terminal utility for cjpegli!
| lonjil wrote:
| I have heard that it will see a proper standalone release
| at some point this year, but I don't know more than that.
| kasabali wrote:
| They're available at
| https://github.com/libjxl/libjxl/releases/ for linux and
| windows.
| dingdingdang wrote:
| Dear lord.. despite browsing and using github on a daily
| basis I still miss releases section sometimes! Before I
| saw your reply I checked the Scoop repos and sure enough,
| on Windows this will get you latest cjpegli version
| installed and added to path in one go:
|
| scoop install main/libjxl
|
| Note.. now that I tried it: that is really next level for
| an old format..!
| botanical wrote:
| It's in the static github release files here:
| https://github.com/libjxl/libjxl/releases/tag/v0.10.1
| JyrkiAlakuijala wrote:
| Pik was initially designed without quality options, aiming
| only to do the best it could at distance 1.0.
|
| We kept a lot of focus on visually lossless and I didn't want
| to add format features which would add complexity but not help
| at high quality settings.
|
| In addition to modeling features, the context modeling and
| the efficiency of entropy coding are critical at high
| quality. I consider AVIF's entropy coding ill-suited for high
| quality or lossless photography.
| anewhnaccount2 wrote:
| Should the Pareto front not be drawn with lines perpendicular
| to the axes rather than with diagonal lines?
| penteract wrote:
| Yes, it should, but it looks like they just added a line to the
| jxl 0.10 series of data on whatever they used to make the
| graph, and labelled it the Pareto front. Looking closely at the
| graphs, they actually miss some points where version 0.9 should
| be included in the frontier.
| lifthrasiir wrote:
| I think it can be understood as an expected Pareto frontier
| _if enough options are added to make it continuous_, which is
| often implied in this kind of discussion.
| penteract wrote:
| I'm not sure that's reasonable - the effort parameters are
| integers between 1 and 10, with behavior described here:
| https://github.com/libjxl/libjxl/blob/main/doc/encode_effort...
| and the intermediate options don't exist as implemented
| programs. This is a comparison of concrete programs, not an
| attempt to analyze the best theoretically achievable.
|
| Also, the frontier isn't convex, so it's unlikely that if
| intermediate options could be added then they would all be
| at least as good as the lines shown; and the use of
| log(speed) for the y-axis affects what a straight line on
| the graph means. It's fine for giving a good view of the
| dataset, but if you're going to make a guess about
| intermediate possibilities, 'speed' or 'time' should also
| be considered.
| jonsneyers wrote:
| You are right, but that would make an uglier plot :)
|
| Some of the intermediate options are available though,
| through various more fine-grained encoder settings than
| what is exposed via the overall effort setting. Of course
| they will not fall exactly on the line that was drawn,
| but as a first approximation, the line is probably closer
| to the truth than the staircase, which would be an
| underestimate of what can be done.
| jamesthurley wrote:
| Perpendicular to which axis?
| penteract wrote:
| both - staircase style.
| deathanatos wrote:
| Good grief. A poorly phrased question, and an answer that
| doesn't narrow the possibilities.
|
|             *
|             |
|             |
|             |
|       *-----+
|
| or
|
|       +-----*
|       |
|       |
|       |
|       *
|
| ... and why?
| lonjil wrote:
| Whichever is more pessimistic. So for the axes in this
| article, the first one. If you have an option on the
| "bad" side of the Pareto curve, you can always find an
| option that is better in both axes. If a new option is
| discovered that falls on the good side of the curve,
| well, then the curve needs to be updated to pass thru
| that new option.
| JyrkiAlakuijala wrote:
| Often with this kind of Pareto front it can be argued that
| even when continuous settings are not available, a
| compression system could encode every second image at effort
| 7 and every second image at effort 6 (or any other ratio),
| leading, on average, to interpolated results. Naturally such
| interpolation does not produce straight lines in log space.
| kasabali wrote:
| I'm surprised mozjpeg performed worse than libjpeg-turbo at high
| quality settings. I thought its aim was having better pq than
| libjpeg-turbo at the expense of speed.
| JyrkiAlakuijala wrote:
| It is consistent with what I have seen, both in metrics and
| in eyeballing. Mozjpeg gives good results around quality 75,
| but less good at 90+++
| AceJohnny2 wrote:
| I do not understand why this article focuses so much on
| encode speed, but for decode, which I believe represents 99%
| of usage in this web-connected world, gives only a cursory...
|
| > _Decode speed is not really a significant problem on modern
| computers, but it is interesting to take a quick look at the
| numbers._
| lifthrasiir wrote:
| Anything more than 100 MB/s is considered "enough" for the
| internet because at that point your bottleneck is no longer
| decoding. Most modern compression algorithms are asymmetric,
| that is, you can spend much more time on compression without
| significantly affecting the decompression performance, so it is
| indeed less significant once the base performance is achieved.
| oynqr wrote:
| When you actually want good latency, using the throughput as
| a metric is a bit misguided.
| lifthrasiir wrote:
| As others pointed out, that's why JPEG XL's excellent support
| for progressive decoding is important. Other formats do not
| support progressive decoding at all or make it optional, so
| it cannot even be compared at this point. In other words, the
| table can be regarded as evidence that you can have both
| progressive decoding _and_ performance at once.
| lonjil wrote:
| If you don't have progressive decoding, those metrics are
| essentially the same.
| JyrkiAlakuijala wrote:
| During the design process of Pik/JPEG XL I experimented with
| decode speed myself to form an opinion about this. I tried a
| special version of Chrome that artificially throttled image
| decoding. Once the decoding speed got above 20 megapixels per
| second, the benefit of additional speed was difficult to
| notice. I tried 2, 20 and 200 megapixels per second
| throttling. This naturally depends on image sizes and uses,
| too.
|
| There was a much easier to notice impact from progressive
| images, and even from sequential images displayed in a
| streaming manner during the download. As a rule of thumb,
| sequential top-to-bottom streaming feels 2x faster than
| waiting for the full rendering, and progressive feels 2x
| faster than sequential streaming.
| silvestrov wrote:
| Decoding speed is important for battery time.
|
| If a new format drains battery twice as fast, users don't
| want it.
| jonsneyers wrote:
| This matters way more for video (where you are decoding 30
| images per second continuously) than it does for still
| images. For still images, the main thing that drains your
| battery is the display, not the image decoding :)
|
| But in any case, there are no _major_ differences in
| decoding speed between the various image formats. The
| difference caused by reducing the transfer size (network
| activity) and loading time (user looking at a blank screen
| while the image loads) is more important for battery life
| than the decoding speed itself. Also the difference between
| streaming/progressive decoding and non-streaming decoding
| probably has more impact than the decode speed itself, at
| least in the common scenario where the image is being
| loaded over a network.
| JyrkiAlakuijala wrote:
| Agreed. For web use they all decode fast enough. Any time
| difference might be in progression or streaming decoding,
| vs. waiting for all the data to arrive before starting to
| decode.
|
| For image gallery use of camera-resolution photographs
| (12-50 Mpixels) it can be more fun to have 100+ Mpixels/s,
| even 300 Mpixels/s.
| caskstrength wrote:
| > This matters way more for video (where you are decoding
| 30 images per second continuously) than it does for still
| images.
|
| OTOH video decoding is highly likely to be hardware
| accelerated on both laptops and smartphones.
|
| > For still images, the main thing that drains your
| battery is the display, not the image decoding :)
|
| I wonder if it becomes noticeable on image-heavy sites
| like tumblr, 500px, etc.
| jonsneyers wrote:
| Assuming the websites are using images of appropriate
| dimensions (that is, not using huge images and relying on
| browser downscaling, which is a bad practice in any
| case), you can quite easily do the math. A 1080p screen
| is about 2 megapixels, a 4K screen is about 8 megapixels.
| If your images decode at 50 Mpx/s, that's 25 full screens
| (or 6 full screens at 4K) per second. You need to scroll
| quite quickly and have a quite good internet connection
| before decode speed will become a major issue, whether
| for UX or for battery life. Much more likely, the main
| issue will be the transfer time of the images.
| j16sdiz wrote:
| The parent was talking about battery life.
|
| Mobile phone CPUs can switch between power states very
| quickly. If the image decoding is fast, they can sleep more.
| lonjil wrote:
| And the one you're replying to is also talking about
| battery life. The energy needed to display an image for a
| few seconds is probably higher than the energy needed to
| decode it.
| JyrkiAlakuijala wrote:
| I wasn't able to convince myself of that when approaching the
| question with back-of-the-envelope calculations, published
| research and prototypes.
|
| Very few applications are constantly decoding images. Today a
| single image is often decoded in a few milliseconds, but
| viewed 1000x longer. Even if you 10x or 100x the energy
| consumption of image decoding, it is still not going to
| compete with the display, the radio and video decoding as a
| battery drain.
| jsheard wrote:
| Is it practical to use hardware video decoders to decode the
| image formats derived from video formats, like AVIF/AV1 and
| HEIC/HEVC? If so, that could be a compelling reason to prefer
| them over a format like JPEG XL, which has to be decoded in
| software on all of today's hardware. HEVC decode is very
| widely available and AV1 decode is steadily becoming a
| standard feature as well.
| jonsneyers wrote:
| No browser bothers with hardware decode of WebP or AVIF even
| if it is available. It is not worth the trouble for still
| images. Software decode is fast enough, and can have
| advantages over hw decode, such as streaming/progressive
| decoding. So this is not really a big issue.
| izacus wrote:
| No, not really - mostly because the setup time and concurrent
| decode limitations of HW decoders across platforms tend to
| undermine any performance or battery gains from that
| approach. As far as I know, not even mobile platforms bother
| with it in their native decoders for any format.
| PetahNZ wrote:
| My server will encode 1,000,000 images itself, but each client
| will only decode like 10.
| okamiueru wrote:
| That isn't saying much or anything.
| bombcar wrote:
| But you may have fifty million clients, so the total "CPU
| hours" spend on decoding will outlast encoding.
|
| But the person _encoding_ is picking the format, not the
| decoder.
| lonjil wrote:
| But the server doesn't necessarily have unlimited time to
| encode those images. Each of those 1 million images needs
| to be encoded before it can be sent to a client.
| lonjil wrote:
| Real-time encoding is pretty popular, for which encoding speed
| is pretty important.
| stusmall wrote:
| In some use cases the company is paying for the encoding, but
| the client is doing the decoding. As long as the client can
| decode the handful of images on the page fast enough for the
| human to not notice, it's fine. Meanwhile any percentage
| improvement for encoding can save real money.
| youngtaff wrote:
| Because that's Cloudinary's use case... they literally spend
| millions of dollars encoding images
| mips_r4300i wrote:
| This is really impressive even compared to WebP. And unlike WebP,
| it's backwards compatible.
|
| I have forever associated WebP with macroblocking, poor
| colors, and a general ungraceful degradation that doesn't
| really happen the same way even with old JPEG.
|
| I am gonna go look at the complexity of the JXL decoder vs WebP.
| Curious if it's even practical to decode on embedded. JPEG is
| easily decodable, and you can do it in small pieces at a time to
| work within memory constraints.
| bombcar wrote:
| Everyone hates WebP because when you save it, nothing can open
| it.
|
| That's improved somewhat, but the formats that will have an
| easy time winning are the ones that people can use, even if
| that means a browser should "save JPGXL as JPEG" for awhile or
| something.
| ComputerGuru wrote:
| Everyone hates webp for a different reason. I hate it because
| it can only do 4:2:0 chroma, except in lossless mode.
| Lossless WebP is better than PNG, but I will take the peace
| of mind of knowing PNG is always lossless over having a WebP
| and not knowing what was done to it.
| 149765 wrote:
| > peace of mind of knowing PNG is always lossless
|
| There is pngquant:
|
| > a command-line utility and a library for lossy
| compression of PNG images.
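|
| Typical usage looks roughly like this, if I recall the flags
| correctly:
|
|   # lossy palette reduction; refuses to save a result whose
|   # quality falls below the minimum
|   pngquant --quality=65-80 --output small.png input.png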
| bombcar wrote:
| You also have things like https://tinypng.com which do
| (basically) lossy PNG for you. Works pretty well.
| ComputerGuru wrote:
| Neither of these are really what I'm referring to, as I
| view these as ~equivalent to converting a jpeg to png.
| What I mean is within a pipeline, once you have ingested
| a [png|webp|jpeg] and you need to now render it at
| various sizes or with various filters for $purposes. If
| you have a png, you know that you should always maintain
| losslessness. If you have a jpeg, you know you don't. You
| don't need to inspect the file or store additional
| metadata, the extension alone tells you what you need to
| know. But when you have a webp, the default assumption is
| that it's lossy but it _can_ sometimes be otherwise.
| lonjil wrote:
| Actually, if you already have loss, you should try as
| hard as possible to avoid further loss.
| ComputerGuru wrote:
| I don't disagree, in principle. But if I have a lossy
| 28MP jpeg, I'm not going to encode it as a lossless
| thumbnail (or other scaled-down version).
| lonjil wrote:
| I think JXL has been seeing adoption by apps faster than Webp
| or AVIF.
| mfkp wrote:
| I've noticed in chrome-based browsers, you can right click on
| a webp file and "edit image". When you save it, it defaults
| to png download, which makes a simple conversion.
|
| Mobile browsers seem to default to downloading in png as
| well.
| CharlesW wrote:
| > _And unlike WebP, it's backwards compatible._
|
| No, JPEG XL files can't be viewed/decoded by software or
| devices that don't have a JPEG XL decoder.
| Zardoz84 wrote:
| JPEG XL can be converted to/from JPEG without any loss of
| quality. See another comment that shows an example where
| doing JPEG -> JPEG XL -> JPEG generates a binary-exact copy
| of the original JPEG.
|
| Yes, that's not what we usually call backwards compatibility,
| but it allows usage like storing the images as JPEG XL and
| sending, on the fly, a JPEG to clients that can't use it,
| without any loss of information. WebP can't do that.
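|
| A quick way to check this yourself, using the cjxl/djxl tools
| from libjxl (lossless JPEG transcoding is the default for JPEG
| input, as far as I know):
|
|   cjxl photo.jpg photo.jxl      # lossless transcode, ~20% smaller
|   djxl photo.jxl roundtrip.jpg  # reconstruct the original JPEG
|   cmp photo.jpg roundtrip.jpg   # no output: bit-for-bit identical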
| throwaway81523 wrote:
| Does JPEG XL have patent issues? I half remember something about
| that. Regular JPG seems fine to me. Better compression isn't
| going to help anyone since they will find other ways to waste any
| bandwidth available.
| lifthrasiir wrote:
| The main innovation claimed by Microsoft's rANS patent is the
| adaptive probability distribution, that is, being able to
| efficiently correct the distribution so that fewer bits are
| used. While that alone is an absurd claim (it's a benefit
| shared with arithmetic coding and its variants!) and there is
| very clear prior art, JPEG XL doesn't dynamically vary the
| distribution, so it is thought to be unrelated to the patent
| anyway.
| jonsneyers wrote:
| No it doesn't.
|
| And yes, regular JPEG is still a fine format. That's part of
| the point of the article. But for many use cases, better
| compression is always welcome. Also having features like alpha
| transparency, lossless, HDR etc can be quite desirable, and
| those things are not really possible in JPEG.
| taylorius wrote:
| The article mentions encoding speed as something to consider,
| alongside compression ratio. I would argue that decoding speed is
| also important. A lot of the more modern formats (webp, avif etc)
| can take significantly more CPU cycles to decode than a plain old
| jpg. This can slow things down noticeably, especially on mobile.
| oynqr wrote:
| JPEG and JXL have the benefit of (optional) progressive
| decoding, so even if the image is a little larger than AVIF,
| you may still see content faster.
| lifthrasiir wrote:
| Note that JPEG XL always supports progressive decoding,
| because the top-level format is structured in that way. The
| optional part is a finer-grained adjustment to make the
| output more suitable for specific cases.
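|
| If I remember the cjxl options right, that finer-grained
| adjustment is a single switch at encode time:
|
|   # emit a more progressive/responsive bitstream
|   cjxl input.png output.jxl -d 1.0 -p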
| izacus wrote:
| That's great, are there any comparison graphs and benchmarks
| showing that in real life (similarly to this article)?
| 149765 wrote:
| A couple of videos comparing progressive decoding of jpeg,
| jxl and avif:
|
| https://www.youtube.com/watch?v=UphN1_7nP8U
|
| https://www.youtube.com/watch?v=inQxEBn831w
|
| There's more on the same channel, generation loss ones are
| really interesting.
| izacus wrote:
| Awesome, thanks.
| lifthrasiir wrote:
| Any computation-intensive media format on mobile is likely
| using a hardware decoder module anyway, and that most
| frequently includes JPEG. So that comparison is not adequate.
| kasabali wrote:
| "computation-intensive media" = videos
|
| Seriously, when is the last time mobile phones used hardware
| decoding for showing images? Flip phones in 2005?
|
| I know camera apps use hardware encoding but I doubt gallery
| apps or browsers bother with going through the hardware
| decoding pipeline for hundreds of JPEG images you scroll
| through in seconds. And when it comes to showing a single
| image they'll still opt for software decoding because it's
| more flexible when it comes to integration, implementation,
| customization and format limits. So not surprisingly I'm not
| convinced when I repeatedly see this claim that mobile phones
| commonly use hardware decoding for image formats and software
| decoding speed doesn't matter.
| jeroenhd wrote:
| I don't know the current status of web browsers, but
| hardware encoding and decoding for image formats is alive
| and well. Not really relevant for showing a 32x32 GIF arrow
| like on HN, but very important when browsing high
| resolution images with any kind of smoothness.
|
| If you don't really care about your users' battery life you
| can opt to disable hardware acceleration within your
| applications, but it's usually enabled by default, and for
| good reason.
| lonjil wrote:
| Hardware acceleration of image decoding is very uncommon
| in most consumer applications.
| kasabali wrote:
| > hardware encoding and decoding for image formats is
| alive and well
|
| I keep hearing and hearing this, but nobody has yet provided
| a concrete real-world example of smartphones using hw
| _decoding_ for displaying images.
| izacus wrote:
| No, not a single mobile platform uses hardware decode modules
| for still image decoding as of 2024.
|
| At best, the camera processors output encoded JPEG/HEIF for
| taken pictures, but that's about it.
| bmacho wrote:
| How is lossless WebP 0.6x the size of lossless AVIF? I find
| that hard to believe.
| 149765 wrote:
| Lossless webp is actually quite good, especially on text heavy
| images, e.g. screenshots of a terminal with `cwebp -z9` are
| usually smaller than `jxl -d 0 -e 9` in my experience.
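|
| For anyone who wants to try this on their own screenshots
| (tool names and flags as I use them; adjust to taste):
|
|   cwebp -z 9 shot.png -o shot.webp   # lossless WebP, max effort
|   cjxl shot.png shot.jxl -d 0 -e 9   # lossless JXL, effort 9
|   ls -l shot.webp shot.jxl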
| lonjil wrote:
| Lossless AVIF is just really quite bad. Notice how for
| photographic content it is barely better than PNG, and for
| non-photographic content it is far worse than PNG.
| edflsafoiewq wrote:
| It's so bad you wonder why AV1 even has a lossless mode.
| Maybe lossy mode has some subimages it uses lossless mode on?
| jonsneyers wrote:
| It has lossless just to check a box in terms of supported
| features. A bit like how JPEG XL supports animation just to
| have feature parity. But in most cases, you'll be better
| off using a video codec for animation, and an image format
| for images.
| samatman wrote:
| There are some user-level differences between an animated
| image and a video, which haven't really been
| satisfactorily resolved since the abandonment of GIF-the-
| format. An animated image should pause when clicked, and
| start again on another click, with setting separate from
| video autoplay to control the default. It should _not_ have
| visible controls of any sort; that's the whole interface. It
| should save and display on the
| computer/filesystem as an image, and degrade to the
| display frame when sent along a channel which supports
| images but not animated ones. It doesn't need sound, or
| CC, or subtitles. I should be able to add it to the photo
| roll on my phone if I want.
|
| There are a lot of little considerations like this, and
| it would be well if the industry consolidated around an
| animated-image standard, one which was an image, and not
| a video embedded in a way which looks like an image.
| F3nd0 wrote:
| Hence why AVIF might come in handy after all!
| JyrkiAlakuijala wrote:
| I believe it is more fundamental. I like to think that
| AV1 entropy coding just becomes ineffective for large
| values. Large values are dominantly present in high
| quality photography and in lossless coding. Large values
| are repeatedly prefix coded and this makes effective
| adaptation of the statistics difficult for large
| integers. This is a fundamental difference and not a
| minor difference in focus.
| jug wrote:
| WebP is awesome at lossless and way better than even PNG.
|
| It's because WebP has a special encoding pipeline for lossless
| pictures (just like PNG) while AVIF is basically just asking a
| lossy encoder originally designed for video content to stop
| losing detail. Since it's not designed for that, it's
| terrible at the job, taking lots of time and resources to
| produce a worse result.
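|
| Easy to reproduce: avifenc has a lossless switch (as far as I
| know it also needs an RGB/identity pipeline to be truly
| lossless, and warns otherwise):
|
|   avifenc --lossless input.png output.avif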
| derf_ wrote:
| Usually the issue is not using the YCgCo-R colorspace. I do not
| see enough details in the article to know if that is the case
| here. There are politics around getting the codepoint included:
| https://github.com/AOMediaCodec/av1-avif/issues/129
| eviks wrote:
| Welcome efficiency improvements
|
| And in general, Jon's posts provide a pretty good overview of
| the topic of codec comparison.
|
| Pity such a great format is being held back by the much less
| rigorous reviews
| a-french-anon wrote:
| Pretty good article, though I would have used oxipng instead
| of optipng in the lossless comparisons; it's the new standard
| there.
| jonsneyers wrote:
| Thanks for the suggestion, oxipng is indeed a better choice.
| Next time I will add it to the plots!
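|
| For anyone reproducing the PNG baseline in the meantime, the
| usual invocation is something like:
|
|   # near-maximum optimization, strip metadata that is safe to drop
|   oxipng -o max --strip safe input.png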
| gaazoh wrote:
| The inclusion of QOI in the lossless benchmarks made me smile.
| It's a basically irrelevant format that isn't supported by
| default in any general-public software and that aims to be
| just OK, not even good, yet it has a spot on one of these
| charts (non-photographic encoding). Neat.
| lifthrasiir wrote:
| And yet didn't reach the Pareto frontier! It's quite obvious in
| hindsight though---QOI decoding is inherently sequential and
| can't be easily parallelized.
| gaazoh wrote:
| Of course it didn't; it wasn't designed to be either the
| fastest or the best, just OK and simple. Yet in some cases
| it's not completely overtaken by the competition, and I think
| that's cool.
|
| I don't believe QOI will ever have any sort of real-world
| practical use, but that's quite OK, and I love it because it
| has made me and plenty of others look into binary file
| formats and compression, demystify them, and dig further. I
| wrote a fully functional streaming codec for QOI, and it has
| taught me many things and started me on other projects,
| either working with more complex file formats or thinking
| about how to improve upon QOI. I would probably never have
| gotten to this point if I had tried the same thing starting
| with any other format, as they are at least an order of
| magnitude more complex, even the simple ones.
| lonjil wrote:
| > Of course it didn't, it wasn't designed to be either the
| fastest nor the best. Just OK and simple. Yet in some cases
| it's not completely overtaken by competition, and I think
| that's cool.
|
| Actually, there was a big push to add QOI to stuff a few
| years ago, specifically due to it being "fast". It was
| claimed that while it has worse compression, the speed can
| make it a worthy trade off.
| p0nce wrote:
| It can be interesting if you need fast decode at low
| complexity, and it's an easy-to-improve format (-20 to -30%).
| Base QOI isn't that great.
| theon144 wrote:
| >I don't believe QOI will ever have any sort of real-world
| practical use
|
| Prusa (the 3d printer maker) seems to think otherwise!
| https://github.com/prusa3d/Prusa-Firmware-
| Buddy/releases/tag...
| phoboslab wrote:
| As far as I understand this benchmark, JXL was using 8 CPU
| cores, while QOI naturally only used one. If you were to plot
| the graph with compute used (watts?) instead of Mpx/s, QOI
| would compare much better.
|
| Also, it's curious that they only benchmarked QOI for "non-
| photographic images (manga)", where QOI fares quite badly
| because it doesn't have a paletted mode. QOI does much better
| with photos.
| lonjil wrote:
| Actually, they did try QOI for the photographic images:
|
| > Not shown on the chart is QOI, which clocked in at 154
| Mpx/s to achieve 17 bpp, which may be "quite OK" but is
| quite far from Pareto-optimal, considering the lowest
| effort setting of libjxl compresses down to 11.5 bpp at 427
| Mpx/s (so it is 2.7 times as fast and the result is 32.5%
| smaller).
|
| 17 bpp is way outside the area shown in the graph. All the
| other results would've gotten squished and been harder to
| read, had QOI been shown.
| phoboslab wrote:
| Thanks, I missed that.
|
| I just ran qoibench on the photos they used [1] and QOI
| does indeed fare pretty badly, with a compression ratio of
| 71.1% vs. 49.3% for PNG.
|
| The photos in the QOI benchmark suite [2] somehow compress
| a lot better (e.g. photo_kodak/, photo_tecnick/ and
| photo_wikipedia/). I guess it's the film grain in the
| high-resolution photos used in [1].
|
| [1] https://imagecompression.info/test_images/
|
| [2] https://qoiformat.org/benchmark/
| shdon wrote:
| GameMaker Studio actually jumped onto the QOI bandwagon rather
| quickly: two years ago it replaced PNG textures with QOI (with
| BZ2 compression added on top) and found a 20% average reduction
| in size. So GameMaker Studio, and all the games produced with it
| in the past two years or so, do actually use QOI internally.
|
| Not something a consumer _knowingly_ uses, but also not quite
| irrelevant either.
| pornel wrote:
| Oof that looks like a confused mess of a format.
|
| bz2 is obsolete. It's very slow, and not that good at
| compressing. zstd and lzma beat it on both compression and
| speed at the same time.
|
| QOI's only selling point is simplicity of implementation that
| doesn't require a complex decompressor. Addition of bz2
| completely defeats that. QOI's poorly compressed data inside
| another compressor may even make overall compression worse.
| It could have been a raw bitmap, or a PNG with gzip replaced
| by zstd.
| btdmaster wrote:
| Missing from the article is rav1e, which encodes AV1, and hence
| AVIF, a lot faster than the reference implementation aom. I've
| had cases where aom could not finish in a minute of waiting what
| rav1e would do in less than 10 seconds.
| JyrkiAlakuijala wrote:
| Is rav1e's Pareto curve ahead of libaom's Pareto curve?
|
| Does fast rav1e look better than jpegli at high encode speeds?
| btdmaster wrote:
| Difficult to know without reproduction steps from the
| article, but I would think it behaves better than libaom for
| the same quality setting.
|
| Edit: found https://github.com/xiph/rav1e/issues/2759
| JyrkiAlakuijala wrote:
| If Rav1e found better ways of encoding, why would the aom
| folks copy it into libaom?
| pornel wrote:
| rav1e is generally head to head with libaom on static images,
| and which one wins on the speed/quality/size frontier depends
| a lot on the image and settings, as much as +/- 20%. I
| suspect rav1e has an inefficient block size selection
| algorithm, so the particular shape of the blocks is make-or-break
| for it.
|
| I've only compared rav1e to mozjpeg and libwebp, and at the
| fastest speeds it's only barely ahead.
| jonsneyers wrote:
| Both rav1e and libaom have a speed setting. At similar speeds,
| I have not observed huge differences in compression performance
| between the two.
| jug wrote:
| Pay attention to just how good WebP is in the _lossless_
| comparison though!
|
| I've always thought of that one as flying under the radar. Most
| get stuck on WebP not offering tangible enough benefits (or even
| being worse) over MozJPEG encoding, but WebP _lossless_ is
| absolutely fantastic for performance/speed! PNG or even OptiPNG
| is far worse. It's very well supported online now, and it leaves
| the horrible lossless AVIF in the dust too, of course.
| jonsneyers wrote:
| Lossless WebP is very good indeed. The main problem is that it
| is not very future-proof since it only supports 8-bit. For SDR
| images that's fine, but for HDR this is a fundamental
| limitation that is about as bad as GIF's limitation to 256
| colors.
| jug wrote:
| Ah, I didn't know this, and I agree this is a fairly big issue,
| and increasingly so over time. I think smartphones in particular
| hastened the demand for HDR quite a bit; it was once a
| premium/enthusiast feature you had to explicitly buy into.
| omoikane wrote:
| I haven't run across websites that serve up HDR images, and I am
| not sure I would notice the difference. WebP seems
| appropriately named and optimized for image delivery on the
| web.
|
| Maybe you are thinking of high bit depth for archival use? I
| can see some use cases there where 8-bit is not sufficient,
| though personally I store high bit depth images in whatever
| raw format was produced by my camera (which is usually some
| variant of TIFF).
| lonjil wrote:
| 8-bit can have banding even without "HDR". Definitely not
| enough. 10 bit HDR video is becoming more common, and
| popularity for images will follow. Adoption is hampered by
| the fact that Windows has bad HDR support, but it all works
| plenty well on macOS and mobile platforms.
| adgjlsfhk1 wrote:
| Unfortunately, Linux HDR is pretty much completely absent. That
| said, Wayland looks like it's slowly getting there.
| chungy wrote:
| Lossless WebP is also stuck with a low dimension limit of 16383
| pixels per axis.
|
| It is a good format when you can use it, but JPEG XL almost
| always compresses better anyway, and has no such color space or
| dimension limits.
| JyrkiAlakuijala wrote:
| Thank you! <3
|
| WebP also has a near-lossless encoding mode, based on the
| lossless WebP specification, that is mostly unadvertised but
| should be preferred over real lossless in almost every use case.
| Often you can halve the size without additional visible loss.
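|
| If you want to try it, something like this should work (a sketch
| assuming the cwebp CLI is installed and a hypothetical input file
| in.png; -near_lossless takes 0-100, lower means more
| preprocessing):
|
|     import subprocess
|
|     # Near-lossless: cwebp nudges pixel values to improve
|     # compressibility, then encodes with the lossless WebP codec.
|     subprocess.run(
|         ["cwebp", "-near_lossless", "60", "-m", "6",
|          "in.png", "-o", "out.webp"],
|         check=True,
|     )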
| netol wrote:
| Is this mode picked automatically in "mixed" mode?
|
| Unfortunately, that option doesn't seem to be available in
| gif2webp (I mostly use WebP for GIF images, as animated AVIF
| support is poor on browsers and that has an impact on
| interoperability).
| JyrkiAlakuijala wrote:
| I don't know
| kurtextrem wrote:
| do you know why Jon didn't compare near-lossless in the
| "visually lossless" part?
| a-french-anon wrote:
| An issue with lossless WebP is that it only supports (A)RGB and
| encodes grayscale via hacks that aren't as good as simply
| supporting monochrome.
|
| If you compress a whole manga, PNG (via oxipng, optipng is
| basically deprecated) is still the way to go.
|
| Something else not mentioned here is that lossless
| JPEG2000 can be surprisingly good and fast on photographic
| content.
| edflsafoiewq wrote:
| IIRC the way you encode grayscale in WebP is a SUBTRACT_GREEN
| transform that makes the red and blue channels 0 everywhere, and
| then a 1-element prefix code for R and B, so the R and B of each
| pixel take zero bits. Same idea with A for opaque images. Do you
| know why that's not good enough?
| JyrkiAlakuijala wrote:
| I made a mistake there with subtract green.
|
| If I had just added 128 to the residuals, all the remaining
| prediction arithmetic would have worked better and it would
| have given 1% more density.
|
| This is because most of the related arithmetic for predicting
| pixels is done in unsigned 8-bit arithmetic. Subtract green
| often moves such predictions across the 0 -> 255 boundary, and
| then averaging, deltas, etc. make little sense and add to the
| entropy.
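|
| A toy illustration of that wraparound effect (Python, not actual
| libwebp code; the neighbour values are made up):
|
|     def sub_green(chan, green, offset=0):
|         return (chan - green + offset) & 0xFF
|
|     left, top = 130, 126  # hypothetical R values of two neighbours
|     green = 128           # their shared green value
|
|     for offset in (0, 128):
|         l = sub_green(left, green, offset)
|         t = sub_green(top, green, offset)
|         print(offset, l, t, (l + t) // 2)
|     # offset 0:   residuals 2 and 254, average 128 (useless)
|     # offset 128: residuals 130 and 126, average 128 (sane prediction)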
| edflsafoiewq wrote:
| Can you explain why?
| JyrkiAlakuijala wrote:
| I edited the answer into the previous message for better
| flow.
| a-french-anon wrote:
| Thankfully the following comment explains more than I know; I
| was speaking purely from empirical experience.
| edflsafoiewq wrote:
| Then you can't know that any difference you see is
| because of how WebP encodes grayscale.
| out_of_protocol wrote:
| Just tried it on a random manga page:
|
| - OxiPNG - 730k
|
| - webp lossless max effort - 702k
|
| - avif lossless max effort - 2.54MB (yay!)
|
| - jpegxl lossless max effort - 506k (winner!)
| Andrex wrote:
| Probably depends on the manga itself. Action manga probably
| don't compress as well as more dialogue-heavy works.
| ComputerGuru wrote:
| I would, at first blush, disagree with that
| characterization? Dialogue equals more fine-grained
| strokes and more individual, independent "zones" to
| encode.
| lonjil wrote:
| I wonder if the text would be consistent enough for JXL's
| "patches" feature to work well.
| Andrex wrote:
| Now I'm really curious. If anyone has more time than me
| they could do some sample compressions and I'd be
| interested in the results.
| Akronymus wrote:
| I really like WebP. Sadly there are still a lot of applications
| that don't work with it (looking at Discord).
| ocdtrekkie wrote:
| It is ironic you said this, because when I disabled WebP in my
| browser (because it had a huge security vulnerability), Discord
| was the only site which broke and didn't just immediately serve
| me more reasonable image formats.
| kodabbb wrote:
| Looks like there are more savings coming on lossless AVIF:
| https://www.reddit.com/r/AV1/comments/1b3lh08/comment/kstmbr...
| JyrkiAlakuijala wrote:
| Also more savings will come for JPEG XL soon.
|
| Possibly mostly focused on medium and low quality.
| kodabbb wrote:
| will those require a bitstream change too?
| jonsneyers wrote:
| It is unlikely that there will be any bitstream changes
| in JPEG XL. There is still a lot of potential for encoder
| improvements within the current bitstream, both for lossy
| and for lossless.
| jug wrote:
| Wow, that new jpegli encoder. Just wow. Look at those results.
| Haha, JPEG has many years left still.
| kasabali wrote:
| > JPEG has many years left still
|
| Such a shame arithmetic coding (which is already in the
| standard) isn't widely supported in the real world. Because
| converting Huffman coded images losslessly to arithmetic coding
| provides an _easy_ 5-10% size advantage in my tests.
|
| Alien technology from the future indeed.
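|
| For anyone who wants to reproduce this, the usual approach looks
| something like the sketch below (it assumes a jpegtran build with
| arithmetic coding enabled and a hypothetical in.jpg; the DCT
| coefficients are untouched, only the entropy coding changes):
|
|     import subprocess
|
|     # Losslessly re-encode the entropy-coded data with arithmetic
|     # coding. Most decoders won't read the result, which is exactly
|     # the real-world support problem mentioned above.
|     subprocess.run(
|         ["jpegtran", "-arithmetic", "-copy", "all",
|          "-outfile", "out_arith.jpg", "in.jpg"],
|         check=True,
|     )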
| dancemethis wrote:
| "Pareto" being used outside the context of Brazil's best prank
| call ever (Telerj Prank) will always confuse me. I keep thinking,
| "what does the 'thin-voiced lawyer' have to do with
| statistics?"...
| MikeCapone wrote:
| I really hope this can become a new standard and be available
| everywhere (image tools, browsers, etc).
|
| While in practice it won't change my life much, I like the
| elegance of using a modern standard with this level of
| performance and efficiency.
| TacticalCoder wrote:
| Without taking into account whether JPEG XL shines on its own or
| not (which it may or may not), JPEG XL completely rocks for sure
| because it does this:
|
|     $ ls -l a.jpg && shasum a.jpg
|     ... 615504 ... a.jpg
|     716744d950ecf9e5757c565041143775a810e10f a.jpg
|     $ cjxl a.jpg a.jxl
|     Read JPEG image with 615504 bytes.
|     Compressed to 537339 bytes including container
|     $ ls -l a.jxl
|     ... 537339 ... a.jxl
|
| But, wait for it:
|
|     $ djxl a.jxl b.jpg
|     Read 537339 compressed bytes. Reconstructed to JPEG.
|     $ ls -l b.jpg && shasum b.jpg
|     ... 615504 ... b.jpg
|     716744d950ecf9e5757c565041143775a810e10f b.jpg
|
| Do you realize how many _billions_ of JPEG files there are out
| there which people want to keep? If you recompress your old JPEG
| files using a lossy format, you lower their quality.
|
| But with JPEG XL, you can save 15% to 30% and still, if you want,
| get your original JPG 100% identical, bit for bit.
|
| That's wonderful.
|
| P.S.: I'm sadly on Debian stable (12 / Bookworm), which is on
| ImageMagick 6.9, and my Emacs uses (AFAIK) ImageMagick to display
| pictures. JPEG XL support was only added in ImageMagick 7. I
| haven't looked into that more yet.
| izacus wrote:
| I'm sure that will be hugely cherished by users who take
| screenshots of JPEGs so they can resend them on WhatsApp :P
| F3nd0 wrote:
| This particular feature might not, but if said screenshots
| are often compressed with JPEG XL, they will be spared the
| generation loss that becomes blatantly visible in some other
| formats: https://invidious.protokolla.fi/watch?v=w7UDJUCMTng
| IshKebab wrote:
| Maybe. But to know for sure you need to offset the image
| and change encoder settings.
| JyrkiAlakuijala wrote:
| I managed to add that requirement to JPEG XL. I think it will
| be helpful to preserve our digital legacy intact without lossy
| re-encodings.
| kodabbb wrote:
| Looks like there are more savings coming on lossless AVIF:
| https://www.reddit.com/r/AV1/comments/1b3lh08/comment/kstmbr...
| JyrkiAlakuijala wrote:
| I plan to add 15-25% more quality at the ugly lowest-quality
| end of JPEG XL in the coming two months.
| JyrkiAlakuijala wrote:
| It is worth noting that the JPEG XL effort produced a nice new
| SIMD parallelism library called Highway. This library powers not
| only JPEG XL but also Google's latest Gemma AI models.
| jhalstead wrote:
| [0] for those interested in Highway.
|
| It's also mentioned in [1], which starts off
|
| > Today we're sharing open source code that can sort arrays of
| numbers about ten times as fast as the C++ std::sort, and
| outperforms state of the art architecture-specific algorithms,
| while being portable across all modern CPU architectures. Below
| we discuss how we achieved this.
|
| [0] https://github.com/google/highway
|
| [1] https://opensource.googleblog.com/2022/06/Vectorized%20and%
| 2..., which has an associated paper at
| https://arxiv.org/pdf/2205.05982.pdf.
| sandstrom wrote:
| JPEG XL is awesome!
|
| One thing I think would help with its adoption is if they would
| work with e.g. the libvips team to better implement it.
|
| For example, a streaming encoder and streaming decoder would be
| the preferred integration method in libvips.
| ImageXav wrote:
| Maybe someone here will know of a website that describes each
| step of the JPEG XL format in detail? Unlike for traditional
| JPEG, I have found it hard to find a document providing clear
| instructions on the relevant steps, which is a shame, as there
| are clearly tons of interesting innovations that have been
| compiled together to make this happen, and I'm sure the
| individual components are useful in their own right!
| jeffbee wrote:
| The choice to represent the speed based on multithreaded encoding
| strikes me as somewhat arbitrary. If your software has a critical
| path dependent on minimal latency of a single image, then it
| makes some sense, but you still may have more or fewer than 8
| cores. On the other hand if you have another source of
| parallelism, for example you are encoding a library of images,
| then it is quite irrelevant. I think the fine data in the article
| would be even more useful if the single threaded speed and the
| scalability of the codec were treated separately.
| redder23 wrote:
| AVIF looks better here: JPEG XL looks very blurred out on the
| bottom with high compression. AVIF preserves much more detail and
| sharpness.
|
| https://res.cloudinary.com/jon/qp-low.png
| jiggawatts wrote:
| The problem with JPEG XL is that it is written in an unsafe
| language and has already had several memory safety
| vulnerabilities found in it.
|
| Image codecs are used in a wide range of attacker-controlled
| scenarios and need to be completely safe.
|
| I know Rust advocates sound like a broken record, but this is the
| poster child for a library that should never even have been
| started in C++ in the first place.
|
| It's absolute insanity that we write codecs -- pure functions --
| in an unsafe language that has a compiler that defaults to
| "anything goes" as an optimisation technique.
| lonjil wrote:
| Pretty much every codec in every browser is written in an
| unsafe language, unfortunately. I don't see why JXL should be
| singled out. On the other hand, there is a JXL decoder in Rust
| called jxl-oxide [1] which works quite well, and has been
| confirmed by JPEG as conformant. Hopefully it will be adopted
| for decode-only use cases.
|
| [1] https://github.com/tirr-c/jxl-oxide/pull/267
|
| > It's absolute insanity that we write codecs -- pure functions
| -- in an unsafe language that has a compiler that defaults to
| "anything goes" as an optimisation technique.
|
| Rust and C++ are exactly the same in how they optimize:
| compilers for both assume that your code has zero UB. The
| difference is that Rust makes it much harder to accidentally
| have UB.
___________________________________________________________________
(page generated 2024-03-01 23:01 UTC)