[HN Gopher] Show HN: JXL.js - JPEG XL Decoder in JavaScript Usin...
___________________________________________________________________
Show HN: JXL.js - JPEG XL Decoder in JavaScript Using WebAssembly
in Web Worker
Author : niutech
Score : 104 points
Date : 2022-11-22 12:37 UTC (10 hours ago)
(HTM) web link (github.com)
(TXT) w3m dump (github.com)
| samwillis wrote:
| The WASM is only 311kb, so for an image heavy site this can
| easily be offset by the savings on image sizes. It seems
| quite quick too; are there any benchmarks?
|
| There isn't any source in the repository for the WASM, which
| is slightly worrying as it's difficult to confirm the Apache
| License really applies. I assume you are using an existing
| JPEG XL decoding lib?
|
| (Edit - Source here:
| https://github.com/GoogleChromeLabs/squoosh/tree/dev/codecs/...)
|
| Are you doing any progressive decoding while it downloads? I
| believe that is one of the features of JPEG XL.
|
| For anyone wanting an overview of JPEG XL, there is a good
| one here:
| https://cloudinary.com/blog/how_jpeg_xl_compares_to_other_im...
|
| Aside: This is a great example of how awesome Mutation
| Observers are; they are the foundation of so many nice new
| "minimal" front end tools like HTMX and Alpine.js.
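| A minimal sketch of the pattern (decodeJxl is a stand-in for
| whatever does the actual decoding):
|
|     const mo = new MutationObserver((mutations) => {
|       for (const m of mutations) {
|         for (const node of m.addedNodes) {
|           if (node.tagName === 'IMG' &&
|               node.src.endsWith('.jxl')) {
|             decodeJxl(node); // hand the <img> to the decoder
|           }
|         }
|       }
|     });
|     mo.observe(document.documentElement,
|                { childList: true, subtree: true });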
| BiteCode_dev wrote:
| Maybe on desktop, but on mobile I don't think the benefit
| would be that clear cut, because mobile devices are slow and
| heavy computation drains battery pretty fast.
| Jyaif wrote:
| > The WASM is only 311kb
|
| That's gargantuan.
| samwillis wrote:
| Not if you have 10 MB of images on the page... at that
| point it makes a significant saving.
| Jyaif wrote:
| Yes, it will save bandwidth in many cases, but that doesn't
| change the fact that for an image decoder 300 KB is huge.
| lifthrasiir wrote:
| If you are going to leverage vectorization for performance
| (and WebAssembly does support 128-bit vector instructions),
| an increase in binary size is all but inevitable.
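| For example, a codec built with Emscripten might enable it
| via the -msimd128 flag (file names are stand-ins):
|
|     emcc -O3 -msimd128 jxl_decode.cc -o jxl_decode.js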
| yakubin wrote:
| Which is 1-2 full-sized photos from a 30 Mpx camera. More if
| it's just thumbnails.
| samwillis wrote:
| To answer my own question, they have extracted the decoder from
| this Google Chrome Labs "Squoosh" project [0], all the sources
| are here:
|
| https://github.com/GoogleChromeLabs/squoosh/tree/dev/codecs/...
|
| It's under an Apache license, and so the correct license
| applies.
|
| 0: https://squoosh.app
| cxr wrote:
| > all the sources are here:
|
| https://github.com/GoogleChromeLabs/squoosh/tree/dev/codecs/...
|
| Did you check? There aren't really any sources there, either.
|
| Having already spent some time looking into this to track
| down exactly what you would need to patch to make
| modifications: it looks like it's using the decoder from the
| JPEG XL reference implementation, libjxl (which is available
| under a BSD license):
|
| <https://github.com/libjxl/libjxl>
| haolez wrote:
| I don't know why, but the design of Squoosh's page is very
| appealing to me. Must be some design witchcraft :)
| lucideer wrote:
| > _this can easily be offset by the savings on image sizes_
|
| Maybe, but I wouldn't say "easily": only if you're treating
| bandwidth in isolation as your only bottleneck and neglecting
| client-side processing. The combination of it being WASM and
| being run in a Web Worker should mitigate that a lot, but it's
| still going to be a non-zero cost, particularly in terms of RAM
| usage for any reasonably "large" image perf problem being
| solved.
|
| On top of that, the 311 kB download and execution/IO
| processing, while small, is an up-front blocking perf cost,
| whereas loading images directly is parallelised entirely
| (small images especially render immediately, before the WASM
| would even have downloaded).
|
| It's a really cool project that'd definitely improve perf
| for many cases. I'd just be reluctant to say "easily offset".
|
| The other factor here is the overhead of abstraction,
| maintenance, and likely developer mistakes due to the added
| code complexity of your new client-side decoding pipeline.
| [deleted]
| [deleted]
| cosarara wrote:
| Is it possible to fall back to a JPEG if the browser does not
| support JS, WASM, or Web Workers? With a <picture> element,
| maybe?
|
| I did some tests on my own server and found that for some
| reason it's quite fast when running on HTTPS, but super slow
| on insecure HTTP. Not sure why that is; maybe the browser
| disallows something important here for insecure connections.
| niutech wrote:
| Yes, you can use <picture> to fall back to JPEG/PNG/WebP.
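| A sketch of that markup (file names are placeholders):
|
|     <picture>
|       <source srcset="photo.jxl" type="image/jxl">
|       <img src="photo.jpg" alt="fallback for non-JXL UAs">
|     </picture>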
| Retr0id wrote:
| > Then the JPEG XL image data is transcoded into JPEG image
|
| Why not PNG?
| DaleCurtis wrote:
| I think you can skip the transcode step entirely in favor of
| transmuxing to BMP, too.
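| A rough sketch of that transmux step, assuming the worker
| hands back raw RGBA pixels (all names are stand-ins):
|
|     function rgbaToBmp(pixels, width, height) {
|       const off = 54; // 14-byte file + 40-byte DIB header
|       const buf = new ArrayBuffer(off + pixels.length);
|       const dv = new DataView(buf);
|       dv.setUint8(0, 0x42); dv.setUint8(1, 0x4D); // "BM"
|       dv.setUint32(2, buf.byteLength, true); // file size
|       dv.setUint32(10, off, true);  // pixel data offset
|       dv.setUint32(14, 40, true);   // DIB header size
|       dv.setInt32(18, width, true);
|       dv.setInt32(22, -height, true); // negative: top-down
|       dv.setUint16(26, 1, true);    // colour planes
|       dv.setUint16(28, 32, true);   // bits per pixel
|       dv.setUint32(30, 0, true);    // BI_RGB (uncompressed)
|       const out = new Uint8Array(buf);
|       for (let i = 0; i < pixels.length; i += 4) {
|         out[off + i] = pixels[i + 2];     // B
|         out[off + i + 1] = pixels[i + 1]; // G
|         out[off + i + 2] = pixels[i];     // R
|         out[off + i + 3] = pixels[i + 3]; // A (ignored)
|       }
|       return new Blob([buf], { type: 'image/bmp' });
|     }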
| niutech wrote:
| The main reason is that JPEG takes less space in the cache.
| samwillis wrote:
| JPEG is significantly quicker to encode/decode than PNG.
|
| Plus I believe there may be a particularly fast route from JPEG
| XL -> JPEG, as you can go JPEG -> JPEG XL without having to
| decode to pixels. That then lets you take advantage of the
| browser/hardware jpeg decoder.
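| If I recall the libjxl tool defaults correctly, that
| lossless round trip is:
|
|     cjxl original.jpg image.jxl  # lossless recompression
|     djxl image.jxl restored.jpg  # reconstructs the JPEG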
| pornel wrote:
| Uncompressed PNG can be quick to encode. There are also
| dedicated encoders like fpng that are an order of magnitude
| faster than zlib.
| Retr0id wrote:
| That fast route would only be possible for a subset of JXLs,
| though.
|
| With control over codec parameters (turning down/off zlib
| compression), PNG can definitely encode/decode faster (in
| software). There might be a gap in the market for such an
| implementation, though; perhaps I should make it.
| vanderZwan wrote:
| I'm guessing the author suspects the most common early
| application of JXL will be losslessly re-encoded JPEGs. In
| other words: the very subset you mention, right?
|
| Having said that, the author seems very open to suggestions
| and contributions - I suggested using canvas instead of a
| data URL-based PNG a week ago and within a day they had
| implemented a version of it.
|
| edit: the other reason might be having smaller image sizes
| in the cache.
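| For reference, the canvas route looks roughly like this
| (pixels, width and height come from the decoder and are
| stand-in names):
|
|     const data = new Uint8ClampedArray(pixels);
|     const imageData = new ImageData(data, width, height);
|     const canvas = document.createElement('canvas');
|     canvas.width = width;
|     canvas.height = height;
|     canvas.getContext('2d').putImageData(imageData, 0, 0);
|     img.replaceWith(canvas); // swap out the original <img>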
| DrNosferatu wrote:
| An awesome variation would be to use this _inside PDF files_
| instead of web pages.
|
| Some PDF newspaper subscriptions (often sent by email) have
| very poor quality in the contained photos. I suppose the
| intent is to keep the already big file size (multiple MB)
| down. Having the newspaper photos in JPEG XL - or even AVIF -
| would be a great upgrade.
|
| PS: And no, I don't think the poor-quality photos are a
| deliberate, forced "VHS-style Macrovision" degradation to
| minimize financial losses on easy-to-copy content - the same
| articles are also partially available online.
| niutech wrote:
| PDF 1.5 supports the JPXDecode filter, which is based on the
| JPEG 2000 standard.
| DrNosferatu wrote:
| But no JPEG XL, AVIF or HEIF! I think AVIF would be the most
| useful.
| aidenn0 wrote:
| I use JPEG 2000 in a PDF for encoding comics; it's better
| than any pre-JXL compressor I tried for such images; HEIC and
| WebP both blur out the stippling and hatching at much higher
| bitrates than j2k. JXL soundly defeats it, though, and is
| much easier to use (I have to use different quantization
| settings for greyscale and colour images with OpenJPEG, but
| the default "lossy" setting with libjxl never creates a
| larger file than j2k).
|
| So for me, at least, I'd like jxl to make it into PDF.
| jbverschoor wrote:
| Great, but it won't work on iPhones with Lockdown Mode
| enabled.
| niutech wrote:
| Lockdown Mode disables WebAssembly.
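| A simple feature check lets a page fall back to JPEG in
| that case (a sketch):
|
|     const wasmOK = typeof WebAssembly === 'object' &&
|       typeof WebAssembly.instantiate === 'function';
|     if (!wasmOK) {
|       // keep the <img> pointing at its JPEG fallback
|     }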
| jbverschoor wrote:
| Exactly. Not sure if that's actually a security feature or
| just something to preempt App Store-less apps.
___________________________________________________________________
(page generated 2022-11-22 23:01 UTC)