[HN Gopher] Writing my own dithering algorithm in Racket
       ___________________________________________________________________
        
       Writing my own dithering algorithm in Racket
        
       Author : venusgirdle
       Score  : 92 points
       Date   : 2025-04-13 19:43 UTC (3 hours ago)
        
 (HTM) web link (amanvir.com)
 (TXT) w3m dump (amanvir.com)
        
       | pixelpoet wrote:
       | The thresholding should be done in linear space I think, not
       | directly on the sRGB encoded values.
       | 
       | Also I think the final result has some pretty distracting
       | structured artifacts compared to e.g. blue noise dithering.
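pixelpoet's first point can be made concrete with a small sketch (Python here for illustration; the article's code is Racket). The decode formula is the standard sRGB transfer function; `threshold_linear` is a hypothetical helper, not from the article:

```python
def srgb_to_linear(c8):
    """Decode an 8-bit sRGB code value to linear light in [0, 1]."""
    c = c8 / 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def threshold_linear(c8):
    """Threshold at 50% *linear* light rather than at sRGB code 128."""
    return 255 if srgb_to_linear(c8) >= 0.5 else 0

# sRGB code 128 is only about 21.6% linear light, so it thresholds to
# black here, while a naive `c8 >= 128` test would keep it white.
```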
        
       | _ache_ wrote:
        | I did the same thing about 2 weeks ago. In Rust. ^^
        | 
        | I'm still trying to improve it a little.
        | https://git.ache.one/dither/tree/?h=%f0%9f%aa%b5
        | 
        | I haven't published it because it's hard to actually put dithered
        | images on the web: you can't resize a dithered image, so you have
        | to dither it on the fly. That's why there are some artifacts in
        | the images in the article. I still need to learn more about
        | dithering.
       | 
       | Reference:
       | https://sheep.horse/2022/12/pixel_accurate_atkinson_ditherin...
       | 
       | Cool links about dithering: -
       | https://beyondloom.com/blog/dither.html -
       | https://blog.maximeheckel.com/posts/the-art-of-dithering-and...
        
         | 01HNNWZ0MV43FF wrote:
          | Why can't you resize it? Because of the filtering? You can turn
          | that off in CSS, right?
        
           | AndrewStephens wrote:
           | I am the author of the sheep.horse link above, although
           | here[0] is an updated link.
           | 
           | Even with filtering turned off you get slightly incorrect
           | results, especially if you are resizing down where aliasing
           | might completely ruin your image. Harsh black-and-white
           | dithering is very susceptible to scaling artifacts.
           | 
           | If you want pixel perfect dithering for the screen you are
           | viewing the page on, you need to do it client side. Whether
           | or not this is worth the bother is up to you.
           | 
           | [0] https://sheep.horse/2023/1/improved_web_component_for_pix
           | el-...
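The aliasing point can be demonstrated in a few lines (Python, purely illustrative): a nearest-neighbor downscale of a 50% checkerboard dither keeps only pixels whose coordinates are both even, collapsing the "gray" field to solid white.

```python
# A 4x4 checkerboard: a perfect 50% black/white dither pattern.
checker = [[255 if (x + y) % 2 == 0 else 0 for x in range(4)] for y in range(4)]

# Nearest-neighbor downscale by 2: sample every second pixel on each axis.
half = [[checker[y][x] for x in range(0, 4, 2)] for y in range(0, 4, 2)]

# Every sampled pixel has even x and even y, so (x + y) % 2 == 0 everywhere:
# the 50%-gray field aliases to pure white.
```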
        
       | v9v wrote:
       | I love the small image previews to the left of the lines of code
       | loading and saving images. Which editor is this?
        
         | venusgirdle wrote:
         | I love them too :)
         | 
         | Visual Studio Code with this extension:
         | https://marketplace.visualstudio.com/items/?itemName=kisstko...
        
       | Aryezz wrote:
       | Great read and nice drawings!
       | 
       | I made some impractical dithering algorithms a while ago, such as
       | distributing the error to far away pixels or distributing more
       | than 100% of the error: https://burkhardt.dev/2024/bad-dithering-
       | algorithms/
       | 
       | Playing around with the distribution matrices and exploring the
       | resulting patterns is great fun.
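The pluggable-matrix idea generalizes nicely. Here is a minimal one-dimensional sketch (Python, not the linked post's code) where the kernel is just a dict mapping forward offsets to weights, so weights summing to more than 1.0 reproduce the "more than 100% of the error" experiment:

```python
def diffuse_1d(samples, weights):
    """One-dimensional error diffusion over grayscale samples in [0, 255].

    `weights` maps a forward offset to the fraction of the quantization
    error pushed onto that neighbor.
    """
    vals = list(samples)
    out = []
    for i, v in enumerate(vals):
        new = 255 if v >= 128 else 0   # 1-bit quantization
        err = v - new
        out.append(new)
        for offset, w in weights.items():
            j = i + offset
            if j < len(vals):
                vals[j] += err * w
    return out

# Classic full diffusion to the next pixel: a flat mid-gray row becomes
# the familiar alternating black/white pattern.
print(diffuse_1d([128] * 8, {1: 1.0}))
```

Try `{1: 1.5}` (over-diffusion) or `{4: 1.0}` (error sent to a far-away pixel) to recreate the broken variants from the post.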
        
         | _ache_ wrote:
          | Nice! Thank you for the link! :)
        
       | turnsout wrote:
       | This is awesome! But be careful, if you dig much further you're
       | going to get into blue noise, which is a very deep rabbit hole.
        
       | qwertox wrote:
       | Somewhat related and worth watching:
       | 
       | Surface-stable fractal dithering explained
       | 
       | https://youtu.be/HPqGaIMVuLs
       | 
       | There's a follow-up video to that one.
        
       | jansan wrote:
       | I think implementing a dithering algorithm is one of the most
       | satisfying projects, because it is fun, small(ish) and you know
       | when you are done.
       | 
       | Of course, unless you are trying to implement something
       | completely insane like Surface-Stable Fractal Dithering
       | https://www.youtube.com/watch?v=HPqGaIMVuLs
        
       | lampiaio wrote:
        | Wouldn't it make more sense to display the samples at 100% in the
        | article? I had to open the images in a new tab to fully
        | appreciate the dithering.
        
       | neilv wrote:
       | (Kudos on doing this in Racket. Besides being a great language to
       | learn and use, using Racket (or another Scheme, or other less-
       | popular language) is a sign that the work comes from genuine
       | interest, not (potentially) just pursuit of keyword
       | employability.)
       | 
        | Side note on Lisp formatting: The author mixes idiomatic cuddling
        | of parentheses with some more curly-brace-like formatting, and
        | then cuddles a trailing small term so that it doesn't line up
        | vertically (like people sometimes do in other languages with,
        | e.g., a numeric constant after a multi-line closure argument in a
        | timer or event-handler registration).
       | 
       | One thing some Lisp people like about the syntax is that parts of
       | complex expression syntax can line up vertically, to expose the
       | structure.
       | 
        | For example, here, you can clearly see that the `min` is between
        | 255 and this big other expression:
        | 
        |     (define luminance
        |       (min (exact-round (+ (* 0.2126 (bytes-ref pixels-vec (+ pixel-pos 1)))    ; red
        |                            (* 0.7152 (bytes-ref pixels-vec (+ pixel-pos 2)))    ; green
        |                            (* 0.0722 (bytes-ref pixels-vec (+ pixel-pos 3)))))  ; blue
        |            255))
        | 
        | Or, if you're running out of horizontal space, you might do this:
        | 
        |     (define luminance
        |       (min (exact-round
        |             (+ (* 0.2126 (bytes-ref pixels-vec (+ pixel-pos 1)))    ; red
        |                (* 0.7152 (bytes-ref pixels-vec (+ pixel-pos 2)))    ; green
        |                (* 0.0722 (bytes-ref pixels-vec (+ pixel-pos 3)))))  ; blue
        |            255))
        | 
        | Or you might decide those comments should be language, and do
        | this:
        | 
        |     (define luminance
        |       (let ((red   (bytes-ref pixels-vec (+ pixel-pos 1)))
        |             (green (bytes-ref pixels-vec (+ pixel-pos 2)))
        |             (blue  (bytes-ref pixels-vec (+ pixel-pos 3))))
        |         (min (exact-round (+ (* red   0.2126)
        |                              (* green 0.7152)
        |                              (* blue  0.0722)))
        |              255)))
       | 
       | One of my teachers would still call those constants "magic
       | numbers", even when their purpose is obvious in this very
       | restricted context, and insist that you bind them to names in the
       | language. Left as an exercise to the reader.
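neilv's closing exercise, sketched in Python for brevity (the constant names are mine, not from the article; the values are the standard Rec. 709 luma coefficients):

```python
# Rec. 709 luma coefficients, named instead of left as inline magic numbers.
REC709_RED   = 0.2126
REC709_GREEN = 0.7152
REC709_BLUE  = 0.0722

def luminance(red, green, blue):
    """Weighted luma of an 8-bit RGB pixel, clamped to 255."""
    return min(round(REC709_RED * red + REC709_GREEN * green
                     + REC709_BLUE * blue), 255)
```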
        
       | crazygringo wrote:
        | > _Atkinson dithering is great, but what's awesome about
        | dithering algorithms is that there's no definitive "best"
        | algorithm!_
       | 
       | I've always wondered about this. Sure, if you're changing the
       | contrast then that's a subjective change.
       | 
       | But it's easy to write a metric to confirm the degree to which
       | brightness and contrast are maintained correctly.
       | 
       | And then, is it really impossible to develop an objective metric
       | for the level of visible detail that is maintained? Is that
       | really psychovisual and therefore subjective? Is there really
       | nothing we can use from information theory to calculate the level
       | of detail that emerges out of the noise? Or something based on
       | maximum likelihood estimation?
       | 
       | I'm not saying it has to be fast, or that we can prove a
       | particular dithering algorithm is theoretically perfect. But I'm
       | surprised we don't have an objective, quantitative measure to
       | prove that one algorithm preserves more detail than another.
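One naive way to start on such a metric (a sketch, not a validated perceptual measure): blur both images to mimic the eye's spatial averaging, then take the mean squared error. On a flat gray field this correctly scores a fine-grained dither better than a clustered one.

```python
def lowpass_mse(original, dithered, radius=1):
    """Compare two flat 1-D pixel lists after a small moving-average blur,
    a crude stand-in for the eye's spatial averaging."""
    def blur(img):
        n = len(img)
        return [sum(img[max(0, i - radius):i + radius + 1]) /
                len(img[max(0, i - radius):i + radius + 1])
                for i in range(n)]
    a, b = blur(original), blur(dithered)
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

gray = [128] * 8
fine = [255, 0] * 4              # alternating dither
clustered = [255] * 4 + [0] * 4  # same mean, coarse structure
print(lowpass_mse(gray, fine) < lowpass_mse(gray, clustered))  # prints True
```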
        
         | zamadatix wrote:
         | I think the problem is less with the possibility of developing
         | something to maximize a metric (though that could be hard
          | depending on how you define the metric) and more with no single
          | metric meeting all use cases, so you're not going to end up with
          | a definitive answer anyway. Some images may be better suited
         | for an algorithm with the metric of preserving the most literal
         | detail. Others for preserving the most psychovisual detail.
          | Others for something which optimizes visibility even if it's not
         | as true to the source. No one metric will be definitively the
         | best thing to measure against for every image and use case fed
         | to it.
         | 
         | You find the same in image resizing. No one algorithm can be
         | the definitive best for e.g. pixel art and movie upscaling. At
         | the same time nobody can agree what the best average metric of
         | all of that could be. Of course if you define a non-universally
         | important metric as the only thing which matters you can end up
         | with certain solutions like sinc being mathematically optimal.
         | 
         | It does lead to the question though: are there well defined
         | objective metrics of dithering quality for which we don't have
         | a mathematically optimal answer?
        
       | virtualritz wrote:
       | The dithered images have the wrong brightness mapping.
       | 
        | The reason is that the described approach will estimate the error
        | correction term incorrectly, as the input RGB values are
        | non-linear sRGB.
       | 
       | The article doesn't mention anything about this so I assume the
       | author is oblivious to what color spaces are and that an
       | 8bit/channel RGB value will most likely not represent linear
       | color.
       | 
        | This is not bashing the article; most people who start doing
        | anything with color in CG without first reading up on the
        | relevant theory get this wrong.
       | 
       | And coming up with your own dither is always cool.
       | 
       | See e.g. [1] for an in-depth explanation why the linearization
       | stuff matters.
       | 
       | [1] http://www.thetenthplanet.de/archives/5367
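A quick way to see the wrong brightness mapping numerically (Python sketch; the encode formula is the standard sRGB transfer function): a 50/50 black/white dither field averages to 0.5 linear light, which a display encodes as sRGB code 188, noticeably brighter than "mid gray" code 128.

```python
def linear_to_srgb(l):
    """Encode linear light in [0, 1] as an 8-bit sRGB code value."""
    c = 12.92 * l if l <= 0.0031308 else 1.055 * l ** (1 / 2.4) - 0.055
    return round(255 * c)

# A 50/50 black/white dither field averages to 0.5 linear light, which
# corresponds to sRGB code 188, not the "mid gray" code value 128.
print(linear_to_srgb(0.5))  # prints 188
```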
        
         | Lerc wrote:
         | Are dithering patterns proportional in perceived brightness to
         | a uniform grey for any percentage of set pixels?
         | 
         | I can see them not being linearly proportional to a smooth
         | perceptual grey gradient as the ratio of black to white
         | changes, but I suspect it might change also with the clustering
         | of light and dark at the same ratio.
        
         | venusgirdle wrote:
         | Hi, OP here!
         | 
         | Thank you so much for pointing this out! Just read the post you
         | linked and did some of my own research on the non-linearity of
         | sRGB - really fascinating stuff :)
         | 
         | For now, I've acknowledged this limitation of my implementation
         | so that any new readers are aware of it:
         | https://amanvir.com/blog/writing-my-own-dithering-algorithm-...
         | 
         | But I'll definitely revisit the article to add proper
         | linearization to my implementation when I have the time. Thanks
         | again for mentioning this!
        
       | SillyUsername wrote:
        | The square artifacts in the dithered image are caused by the
        | distribution not making second passes over pixels that already
        | have error distributed to them. This is a byproduct of the
        | "custom" approach the OP uses: they've traded (greater)
        | individual colour error for general picture cohesion.
       | 
       | Me, I adjusted Atkinson a few years ago as I prefer the "blown
       | out" effect:
       | https://github.com/KodeMunkie/imagetozxspec/blob/master/src/...
       | 
        | A similar custom approach to prevent second-pass diffusion is in
        | the code too; it's a slightly different implementation that
        | processes the image in 8x8 pixel "attribute" blocks, where the
        | error never leaves those bounds. The same artifacts occur there
        | too but are more distinct as a consequence.
       | https://github.com/KodeMunkie/imagetozxspec/blob/3d41a99aa04...
       | 
        | Nb. 8x8 is not arbitrary: the ZX Spectrum computer this is used
        | for only allowed 2 colours in every 8x8 block, so seeing the
        | artifact on a real machine matters less, as the whole image
        | potentially had 8x8 artifacts anyway.
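The attribute-block constraint can be sketched in one dimension (Python, illustrative only, not the linked Java code): error that would cross a block boundary is simply dropped, which is what produces the visible block-edge artifacts.

```python
def dither_row_blocked(row, block=8):
    """1-D error diffusion that never pushes error across a block boundary
    (the ZX Spectrum 8x8 attribute idea, reduced to one dimension).
    Error that would leave the current block is discarded."""
    vals = list(row)
    out = []
    for i, v in enumerate(vals):
        new = 255 if v >= 128 else 0
        out.append(new)
        if i + 1 < len(vals) and (i + 1) % block != 0:
            vals[i + 1] += v - new  # diffuse only inside the block
    return out
```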
        
       ___________________________________________________________________
       (page generated 2025-04-13 23:00 UTC)