[HN Gopher] Ditherpunk: The article I wish I had about monochrom...
___________________________________________________________________
Ditherpunk: The article I wish I had about monochrome image
dithering
Author : todsacerdoti
Score : 1251 points
Date : 2021-01-04 16:27 UTC (2 days ago)
(HTM) web link (surma.dev)
(TXT) w3m dump (surma.dev)
| dehrmann wrote:
| Since printers do this, I always wondered if there's a good way
| to undo dithering (at the cost of resolution) for scans. Would it
| just be scaling down the image?
| amelius wrote:
| How would you apply dithering to animations (without the screen
| looking noisy from the different starting conditions of each
| frame)?
| gruez wrote:
| discussed by the author of the mentioned game (return of the
| obra dinn) here:
| https://forums.tigsource.com/index.php?topic=40832.msg136374...
| cpmsmith wrote:
| I originally found out about _Obra Dinn_ by reading that
| devlog -- it's totally fascinating, and shows impressive
| attention to detail.
|
| > It feels a little weird to put 100 hours into something
| that won't be noticed by its absence. Exactly no one will
| think, "man this dithering is stable as shit. total magic
| going on here."
|
| Luckily, thanks to this post, we have the privilege of
| thinking that.
| jerf wrote:
| He's not entirely correct, either. I haven't played it, but
| I did see video of it, and I _was_ impressed by the
| stability of the dithering. It takes work to avoid it
| looking like https://www.youtube.com/watch?v=2AKtp3XHn38 or
| something.
| raphlinus wrote:
| Any ordered dithering algorithm will animate pretty smoothly,
| but techniques based on error diffusion will have a "swimming"
| effect. Lucas Pope's devlog goes into considerable detail.
| Here's a good starting point:
| https://forums.tigsource.com/index.php?topic=40832.msg104520...
|
| (ETA: also the link that gruez posted. HN seems to be having
| some performance issues)
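|
| For reference, a minimal sketch of ordered (Bayer) dithering
| (Python/numpy; mine, not from the devlog): the output is a pure
| function of pixel position and value, which is why it stays
| stable across frames.
|
|     import numpy as np
|
|     def bayer(n):
|         # build a 2^n x 2^n Bayer matrix, thresholds in (0, 1)
|         m = np.array([[0, 2], [3, 1]])
|         for _ in range(n - 1):
|             m = np.block([[4 * m + 0, 4 * m + 2],
|                           [4 * m + 3, 4 * m + 1]])
|         return (m + 0.5) / m.size
|
|     def ordered_dither(img, m):
|         # img: grayscale floats in [0, 1]; tile the threshold
|         # map over the image and compare pixel-wise
|         h, w = img.shape
|         reps = (h // m.shape[0] + 1, w // m.shape[1] + 1)
|         return (img > np.tile(m, reps)[:h, :w]).astype(np.uint8)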
| dassurma wrote:
| Author here!
|
| I can only refer to an article that another comment already
| mentioned. It is _excellent_ and covers animation & dithering:
| https://bisqwit.iki.fi/story/howto/dither/jy/
| bullen wrote:
| Is this what Steam uses for the "News" animated gifs?
| kliments wrote:
| I had never read about dithering before, but this article sparked
| my interest. Coupled with the sample images, it is fun to read
| about the different dithering algorithms. Thanks for sharing!
| ssivark wrote:
| Somewhat off-topic, but this reminds me of the
| impressionist/pointillist styles of painting. There the
| motivation is not to use a smaller palette, but to typically use
| a _richer_ palette (including colors on the opposite side of the
| wheel) so that the image looks much more vibrant and realistic on
| zooming out, circumventing the limitation of one (flat) color per
| location.
| foobiekr wrote:
| I always thought the floyd-steinberg algorithm produced images
| that looked like they were made from a nest of worms, at least in
| the implementations I came across in the Amiga era, so it's
| interesting to look at his FS image and realize that, most
| likely, it was an artifact of the low resolution display more
| than anything else.
| Chazprime wrote:
| Fascinating read!
|
| Almost makes me miss the days of getting to the graphics lab late
| and getting stuck with one of the old Mac SEs.
|
| Almost.
| aharris6 wrote:
| In case anyone wants to play around with some basic dithering, I
| run a webapp called Dither it! that does just this:
|
| https://ditherit.com
| bolofan wrote:
| Since we're talking about monochromatic art... anyone else here a
| fan of the Atari ST game Bolo?
|
| https://www.mobygames.com/game/atari-st/bolo
|
| Probably the best monochromatic aesthetics I've ever seen in a
| game. The screen shots don't do it justice.
| enriquto wrote:
| This article is missing a crucial pre-processing step to
| dithering algorithms: apply a Retinex-like filter to enhance the
| local contrast before doing the dithering. This gives a dramatic
| improvement in the final result. In fact, by exploring the scale
| parameter of the pre-processing step, you find a continuous
| family of binarisations that interpolates between global
| thresholding and local dithering.
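|
| A rough single-scale sketch of the kind of filter I mean
| (Python/numpy; a Gaussian blur stands in for the illumination
| estimate, and sigma is the scale parameter):
|
|     import numpy as np
|     from scipy.ndimage import gaussian_filter
|
|     def retinex_like(img, sigma=20.0):
|         # img: grayscale floats in (0, 1]
|         eps = 1e-6
|         illum = gaussian_filter(img, sigma)
|         log_ratio = np.log(img + eps) - np.log(illum + eps)
|         # rescale to [0, 1] before the dithering step
|         lo, hi = log_ratio.min(), log_ratio.max()
|         return (log_ratio - lo) / (hi - lo)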
| JKCalhoun wrote:
| I wonder how "Retinex" relates to levels, contrast, white and
| black point cutoff....
| crazygringo wrote:
| That's fascinating -- do you have any links to examples?
|
| I'm searching online but can't find anything at all. I've never
| heard of using Retinex in the context of dithering, and
| wondering what specifically you mean by Retinex-"like"?
|
| I'm also really curious what contexts this has been most
| successful in. E.g. was it used for dithering in images or
| games back in the 1990s when we were limited to 16-bit or
| 256-color displays? Or is this something more recently explored
| in academia or in some niche imaging applications?
| enriquto wrote:
| > I'm also really curious what contexts this has been most
| successful in. E.g. was it used for dithering in images or
| games back in the 1990s when we were limited to 16-bit or
| 256-color displays? Or is this something more recently explored
| in academia or in some niche imaging applications?
|
| No need to speak in the past tense! It is not a "niche"
| application, either. Think about it: gray ink is almost never
| used. _All_ printing onto paper is done by dithering black
| ink onto white paper. This includes bank notes, passports,
| product labels, etc. Besides dithering being used everywhere,
| it is a very active area of research, both in academia and in
| industry. In my lab we have seen a few industrial projects
| concerning dithering. It's a vast and very beautiful
| subject.
|
| > do you have any links to examples?
|
| Take a look here for a couple of examples:
| http://gabarro.org/ccn/linear_dithering.html
| crazygringo wrote:
| Huh, to be honest I feel like I've only ever seen
| halftoning when printing onto paper -- I've never
| associated dithering with printing at all.
|
| And the "linear dithering" you're describing, when I think
| of images in certain banknotes and passports or quality
| seals that I'd call "engraved", I've always assumed were
| hand-drawn by an artist.
|
| But I like what you're describing and linking to, as a way
| to achieve that hand-drawn effect algorithmically, to
| include a directional texture element! Thanks for sharing.
| enriquto wrote:
| > Huh, to be honest I feel like I've only ever seen
| halftoning when printing onto paper -- I've never
| associated dithering with printing at all.
|
| Yep, sorry about my sloppy terminology. I always use
| "halftoning" and "dithering" interchangeably. Yet, notice
| that today's printers are often matrix-based, i.e., like
| a high-resolution binary screen, with a bit of ink
| smearing depending on the type of paper/plastic.
|
| > I've always assumed were hand-drawn by an artist.
|
| Maybe some are still drawn by hand, but most printed
| stuff is at some point dithered automatically (and a lot
| of critical information can be embedded on the dithering
| patterns).
| chromaton wrote:
| GIMP has it: Colors > Tone Mapping > Retinex
| crazygringo wrote:
| Ah thanks, just tried it out and it indeed produces quite a
| different result using that filter (default settings)
| before dithering.
|
| Here's a side-by-side comparison using an image from the
| front page of nytimes.com (be sure to click to zoom in for
| the full effect):
|
| https://imgur.com/a/mrHl7FW
|
| Without it (left), a photo remains "accurate" in terms of
| brightness levels.
|
| But with it (right), it becomes far more high-contrast to
| feel closer to an _illustration_ or painting. Which
| certainly makes it _clearer_. But while it brings out
| details in middle levels, it totally blows out shadows and
| highlights.
|
| E.g. the texture of his mask, shirt, and her hand are much
| clearer. But on the other hand, their hair (and a
| background object) just turn solid black and lose all
| detail. But certainly, the vastly higher contrast makes for
| a much more compelling image IMO.
| toast0 wrote:
| Here's a link to the color image
| https://static01.nyt.com/images/2020/12/21/well/21well-
| klass...
|
| I agree; the filtered image (right) is more aesthetically
| pleasing; but it feels much less accurate. It would
| depend on the intent of the image I think. If you're
| creating art, and using photographs in the creation, it's
| not a problem. If you're reporting on the world and just
| want to reduce the data size (or use a monochrome output
| medium), I wouldn't do it like this.
| anotheryou wrote:
| Black and white images need more contrast to be pleasing.
| I think that's most of the effect you see here.
|
| Maybe you'd want to start with a decent black and white
| photograph to get a better comparison.
|
| No matter what you do, you probably also don't want to end
| up with large patches of solid white or black in your
| source image (unless it's the background). The hair
| already feels like it's drowning in black. But to take care of
| that you need to use Photoshop and be careful with the
| gradient curves :)
| cheschire wrote:
| Tangential, but when I view certain dithered images in that
| article and on the demo page, my entire monitor drops in
| brightness by a bit. As soon as I scroll away, the monitor
| returns to normal brightness.
|
| I wonder if this is a result of hardware or something on the
| software / driver side.
| vanni wrote:
| Probably a dynamic contrast feature of the graphics driver
| and/or monitor that is tricked/triggered by the unusual dithered
| patterns during scrolling.
| giovannibonetti wrote:
| I think there is a trade-off between spatial resolution and color
| depth. Dithering reduces the former to simulate increasing the
| latter.
|
| Perhaps we can get a similar result by interpreting the original
| signal (after linearization) in a different way. It could
| represent (after normalization) the probability each pixel would
| be 1 or 0. That way, brighter areas would have more density of
| bright pixels, and darker areas would have more density of dark
| pixels.
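|
| A minimal sketch of that idea (Python/numpy, assuming the
| values are already linearized to [0, 1]):
|
|     import numpy as np
|
|     rng = np.random.default_rng(0)
|
|     def probabilistic_dither(img):
|         # each value is the probability its pixel turns white
|         return (rng.random(img.shape) < img).astype(np.uint8)
|
| As the article's white-noise example suggests, this is the same
| as thresholding against uniform random noise, so the output is
| comparatively grainy.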
| Syzygies wrote:
| > "Bayer dithering" uses a Bayer matrix as the threshold map.
| They are named after Bryce Bayer, inventor of the Bayer filter,
| which is in use to this day in digital cameras.
|
| My Dad! I can hear him talking about this...
| person_of_color wrote:
| This is amazing. Is there any work using ML for "optimal"
| dithering?
| raphlinus wrote:
| Output dependent feedback is another good way to get creamy,
| evenly dispersed dots in highlight and shadow areas:
|
| https://levien.com/output_dependent_feedback.pdf
| enriquto wrote:
| > Output dependent feedback
|
| Wow, didn't know about that. It sounds quite similar to the
| more recent "electrostatic halftoning" algorithm by Weickert et
| al., which apparently does not cite your work:
|
| https://www.mia.uni-saarland.de/Research/IP_Halftoning.shtml
| CyberRabbi wrote:
| Pretty shameless Raph :P
| raphlinus wrote:
| I'm trying to figure out whether this is a compliment, a
| criticism, or both. When topics related to 2D graphics come
| up, there's a fair chance I've done some relevant work in the
| area. I plan to continue shamelessly hawking my results, as
| that's an important part of a research pipeline :)
| mmastrac wrote:
| Please continue hawking relevant papers. This is the
| perfect place to do it.
| dassurma wrote:
| Well, I hadn't heard of it, so I'll read this tomorrow :D
| Thanks for sharing.
| stuaxo wrote:
| Really enjoyable article, reminds me of being at school where the
| first scanner I encountered only output dithered black and white.
| swiley wrote:
| Does anyone else get errors on their HDMI monitors when viewing
| these images? Mine red shifts the entire screen and I'm not sure
| why.
| InvisibleUp wrote:
| Another good article about dithering algorithms is
| https://www.freesion.com/article/3406137250/. It goes through
| most of the matrix-based dithering algorithms, including the more
| obscure ones.
| CyberRabbi wrote:
| Another way to think of dither (that may only make sense to
| people with a signals background) is that it linearizes the error
| introduced by the quantization step (which is a non-linear
| process). This has a bunch of conceptual implications (like
| elimination of error harmonics being a natural consequence) but
| maybe most importantly allows you to continue using linear
| analysis tools and techniques in systems with a quantization
| filter.
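|
| A tiny 1-D sketch of that view (Python/numpy, audio-flavored;
| my example, not from the thread):
|
|     import numpy as np
|
|     rng = np.random.default_rng(0)
|     t = np.linspace(0, 1, 4096, endpoint=False)
|     signal = 0.3 * np.sin(2 * np.pi * 8 * t)
|     step = 0.1  # quantizer step size
|
|     plain = step * np.round(signal / step)
|     tpdf = (rng.random(t.size) - rng.random(t.size)) * step
|     dithered = step * np.round((signal + tpdf) / step)
|
|     # (plain - signal) is a deterministic function of the signal
|     # and shows up as harmonic distortion; (dithered - signal)
|     # behaves like signal-independent noise, which is what lets
|     # you keep using linear analysis tools.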
| nullc wrote:
| This thesis on noise shaping and dither
| http://uwspace.uwaterloo.ca/bitstream/10012/3867/1/thesis.pd...
| is an excellent mathematical treatment of the subject.
| CyberRabbi wrote:
| Very cool paper. Something that bothers me is that the Floyd-
| Steinberg error diffusion without dithering looks superior
| to the version with the dithering. I think this
| demonstrates that perceptible patterns in the error (non-
| linear) don't always look so bad.
| nullc wrote:
| One thing I've found (admittedly with audio) is that you
| can usually get away with less dither power than is needed
| to completely linearize the error, at the cost of some
| harmonics for pathological signals, which still ends up
| being less noticeable than the higher-power dither noise.
|
| I expect something similar to apply to images.
| CyberRabbi wrote:
| Right, I think that's what's going on in my image
| example.
|
| I think this is a key lesson to people who apply these
| types of mathematical techniques to the lossy compression
| of data meant to be experienced by humans. Random noise
| is not always preferable to non-linear error, i.e.
| reducing non-linear error is not necessarily the
| equivalent of improving the perceived quality of the data
| in question. It can be, but the function that determines
| what "looks good" or what "sounds good" to humans is
| probably a bit more complex.
|
| This is something I've run into with image compression
| techniques but it applies here too (quantization being a
| form of compression). E.g. JPEG compresses images in 8x8
| blocks because doing the whole image at once would look
| horrible. Figuring out the redundant information that can
| be thrown away with the least impact to image quality is
| still fundamentally an art.
| mikaelaast wrote:
| Shameful tangential plug alert: My sideproject is a dithering-
| based image-to-pattern converter for plastic fuse beads (you know
| them, the ones you place on a platter and iron to fuse them
| together): https://www.beadifier.pro/
| throwaway201103 wrote:
| You could probably adapt this to rug hooking kits pretty
| easily. Random example site (first one in DDG search results:
| https://woolery.com/rug-hooking/kits.html)
| dassurma wrote:
| Omg I didn't think of this application for dithering. Love
| that.
| aarchi wrote:
| It would work great for LEGO mosaics as well
| phkahler wrote:
| Nice. A conceptually simple program made into a nice app. Well,
| I assume it's a nice app ;-)
|
| Does it make money? There a couple other simple apps I've
| considered writing to see if I can make a few side bucks.
| mikaelaast wrote:
| Yeah, it's a doing-one-thing-and-doing-it-well kind of
| mentality. It even makes (a little) money.
| cxcorp wrote:
| See also, DITHER.txt:
| http://web.archive.org/web/20190316064436/http://www.efg2.co...
| makeworld wrote:
| DHALF.TXT might also be useful.
|
| https://raw.githubusercontent.com/cyotek/Dithering/master/re...
|
| DITHER.TXT is also available in the same folder.
| firebotipt wrote:
| Here's a trick: before you do anything, add noise. Then use
| error diffusion. It looks pretty good for how crazy fast it is.
|
| Figuring out the strength of the noise is tricky. Usually 5 to 10
| percent. I'd suggest random RGB and not monochrome noise (so not
| quite how you've coded the randomness, but similar idea), as this
| should create more unique values and reduce patterned artifacts.
| sweetheart wrote:
| Oh boy, I think it's about time to hack on some dithered
| generative art. What an inspiration this article was!
| shadowfaxRodeo wrote:
| I just finished making an online dithering tool,
| doodad.dev/dither-me-this, if anyone wants to play around with
| dithering.
|
| I'll be re-jigging it based on some info from that article, and
| definitely adding 'blue noise' as an option. Thanks for sharing.
| denki39 wrote:
| Wow! I was just about to recommend dither-me-this and found you
| already here :) Also, the pattern generator
| (https://doodad.dev/pattern-generator) is really amazing --
| applause!
| steerablesafe wrote:
| The author didn't cover it, but it's common to alternate the
| direction of error diffusion passes line-by-line. This improves
| Floyd-Steinberg dithering by a lot in my experience.
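|
| A sketch of the serpentine variant (Python/numpy, 1-bit output;
| the diffusion kernel is mirrored on right-to-left rows):
|
|     import numpy as np
|
|     def fs_serpentine(img):
|         # img: grayscale floats in [0, 1]
|         h, w = img.shape
|         buf = img.astype(float).copy()
|         for y in range(h):
|             ltr = y % 2 == 0
|             xs = range(w) if ltr else range(w - 1, -1, -1)
|             d = 1 if ltr else -1  # scan direction
|             for x in xs:
|                 old = buf[y, x]
|                 new = 1.0 if old >= 0.5 else 0.0
|                 buf[y, x] = new
|                 err = old - new
|                 if 0 <= x + d < w:
|                     buf[y, x + d] += err * 7 / 16
|                 if y + 1 < h:
|                     if 0 <= x - d < w:
|                         buf[y + 1, x - d] += err * 3 / 16
|                     buf[y + 1, x] += err * 5 / 16
|                     if 0 <= x + d < w:
|                         buf[y + 1, x + d] += err * 1 / 16
|         return buf.astype(np.uint8)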
| krackers wrote:
| A few more error-diffusion dithering algorithms are covered in
| https://tannerhelland.com/2012/12/28/dithering-eleven-algori...
| esimov wrote:
| I've written a library that supports various dithering
| algorithms by plugging in the dithering method you wish:
| https://github.com/esimov/dithergo
| ksm1717 wrote:
| Here's a cool Rust crate that does color quantization and
| dithering at the same time, i.e. picking the palette to use and
| dithering the picture with that palette as an evolving context.
| https://github.com/okaneco/rscolorq
|
| It's a port of an older C library that's based on a
| paper/algorithm called spatial color quantization.
| aeontech wrote:
| The article mentions Bill Atkinson's dithering algorithm,
| invented during his work on the original Macintosh. You can also
| read more about it here:
| https://www.evilmadscientist.com/2012/dithering/
|
| It's actually implemented in the BitCam iOS app by the Iconfactory:
| https://iconfactory.com/bc.html
|
| And Emilio Vanni did a neat e-paper display experiment with it
| here: https://www.emiliovanni.com/atkinson-dithering-machine
| JKCalhoun wrote:
| Also, drop an image here:
|
| http://gazs.github.io/canvas-atkinson-dither/
| jamesfmilne wrote:
| An old acquaintance of mine, John Balestrieri, made a Mac app
| based on Bill Atkinson's dither algorithm too:
|
| https://www.tinrocket.com/content/hyperdither/
| ohazi wrote:
| > As this image shows, the dithered gradient gets bright way too
| quickly.
|
| _sigh_
|
| Not in Firefox on Linux.
|
| I vaguely recall seeing a multi-year-old bug related to subtly
| broken gamma behavior in either Firefox or Chrome, but can't seem
| to find it right now.
| BlueTemplar wrote:
| Are you sure about that?
|
| The default Ubuntu photo viewer shows it in the same way.
|
| Also, for his second 'improved' dithering, the dithering
| clearly doesn't correspond to the undithered gradient (both
| undithered gradients seem to be the same in both pictures?)
|
| See also:
|
| https://news.ycombinator.com/item?id=25644176
| dassurma wrote:
| Ah, this happens when Firefox/Chrome scales the image. I added
| a note to the article a couple hours ago, not sure if you saw
| that.
|
| If you open the image in question in a new tab (to prevent any
| scaling) you'll see the image as intended with the "desired"
| effect.
| Cactus2018 wrote:
| http://www.effectgames.com/demos/worlds/
|
| http://www.markferrari.com/
|
| ....
|
| I recall a recent kick-starter-type blog post about a work-in-
| progress game with huge 2D side-scrolling dither art - can't find
| the link though :(
| corysama wrote:
| Video of Mark talking about his process
| https://www.youtube.com/watch?v=aMcJ1Jvtef0
| Cactus2018 wrote:
| yes! That is it. The game is https://thimbleweedpark.com/
| jansan wrote:
| Slightly off-topic, but this is a great opportunity to ask what I
| always wanted to know. Can anybody tell me how portraits like the
| one in the linked article below are made? I always loved this
| dithering style.
|
| https://www.wsj.com/articles/SB10001424052702304418404579467...
|
| Edit: I found that the style seems to be called "hedcut".
| sgerenser wrote:
| The traditional WSJ Hedcut portraits are drawn by hand,
| although they now have a ML tool that makes a reasonable
| facsimile: https://www.wsj.com/articles/whats-in-a-hedcut-
| depends-how-i...
| bane wrote:
| It's so interesting to read about the rediscovery and
| reengagement with dithering by newer generations. I grew up when
| dithering was simply a fact of life because of the extremely
| limited graphics capabilities of early computers.
|
| I love the reference to Obra Dinn as the graphics remind me of
| very fond feelings I had for the first black and white
| Macintoshes. There was something wonderful about the crisp 1-bit
| graphics, on a monitor designed for only those two colors, that
| made it look "better" than contemporary color displays in most
| respects -- almost higher resolution than it actually was. It's
| almost impossible to replicate _how_ it looked on modern
| color displays.
|
| I didn't experience that feeling looking at an electronic display
| again until the Kindle. It also had the funny side-effect of
| making art assets on Macintoshes a fraction of the size they were
| on color systems, making both the software smaller and the hard
| drives hold more.
|
| There's also something somewhat unsatisfying about automatically
| generated dithering patterns used to recast color graphics into
| 1-bit b&w. It seems to really take an artist's hand to make it
| look beautiful. However, the author of this post ends up with
| some very nice examples and it's really well written.
|
| If anybody is interested in seeing how the old systems looked,
| and some great uses of dithering throughout, I'd recommend
| checking out this amazing Internet Archive Mac in a Browser -
| https://archive.org/details/mac_MacOS_7.0.1_compilation
|
| You get dithering literally at the system startup background.
| TJSomething wrote:
| I messed with the source of this and I came up with an entirely
| local dithering algorithm based on the van der Corput sequence.
|
| https://tjsomething.neocities.org/ditherpunk-demo/
| SoSoRoCoCo wrote:
| This is such a well-written article: it describes the impetus, it
| is researched, it has great examples both as code and as output,
| and it piques interest. In the late 1990's I contracted with an
| embedded software company to optimize a dithering algorithm for
| 8-bit MCUs that was used in most laser printers & copiers, and
| this paper is a really good overview.
| cnity wrote:
| This guy's articles are wonderful. I was experimenting with
| compiling C to WASM[0] and Surma's article was really helpful.
|
| [0]: https://surma.dev/things/c-to-webassembly/index.html
| ayush000 wrote:
| His HTTP203 series with Jake Archibald is amazing if you
| happen to be a frontend developer.
|
| https://www.youtube.com/watch?v=9-6CKCz58A8&list=PLNYkxOF6rc...
| cnity wrote:
| I'll give it a watch, thanks.
| dassurma wrote:
| Thanks for the kind words!
| Eric_WVGG wrote:
| What an odd coincidence... I just acquired an "Inkplate" for my
| birthday (a recycled Kindle screen glued onto a board with an
| Arduino and wifi) and was in the process of looking for old 1-bit
| art for it when I stumbled across the term "ditherpunk" just last
| night. https://inkplate.io
|
| Artists:
|
| - https://unomoralez.com
| - https://www.instagram.com/mattisdovier/?hl=en
| - https://wiki.xxiivv.com/site/dinaisth.html
|
| I'm going to start raiding old Hypercard stacks next
| gregsadetsky wrote:
| I started extracting images from old Hypercard stacks, and then
| found this page which has done all of this work!
|
| https://mariteaux.somnolescent.net/junk/hypercard/
|
| Raid away :)
| dassurma wrote:
| ... I think I need to buy an Inkplate now
| Eric_WVGG wrote:
| It's fantastic. You can program in MicroPython, or C using
| Arduino IDE. I'm barely fluent in C and had no problem
| getting some basic stuff running last night.
| ogou wrote:
| Easy ditherpunk video treatment:
|
| ffmpeg -i video-input.mp4 -i palette.png -lavfi
| "paletteuse=dither=sierra2" video-output.mp4
|
| You'll need to create a 16px by 16px PNG with only black and
| white pixels. Also, there are other dithering algos that
| paletteuse offers.
| smasher164 wrote:
| I learned something new today! Nice work, and well-explained.
| supernovae wrote:
| Interesting, I "dither" (shift the telescope around between
| photos) my astrophotos to go from noise to less noise - funny
| seeing it go from image to noise. Does anyone have any papers on
| dithering + image integration to go in this direction? always
| been interested in knowing more about it.
| greggman3 wrote:
| Great article!
|
| Frustrated that Firefox doesn't support "image-rendering:
| pixelated". FF supports "crisp-edges" and happens to implement
| that as nearest-neighbor filtering but the spec says that is not
| the meaning of "crisp-edges".
|
| I don't understand why Firefox is dragging their feet on this. It
| seems like such an easy thing to add. In fact, given their current
| implementation, they could just make `pixelated` a synonym for
| `crisp-edges` and ship it.
|
| Here's the 8-year-old issue:
|
| https://bugzilla.mozilla.org/show_bug.cgi?id=856337
| londons_explore wrote:
| Send a patch!
| vanderZwan wrote:
| Here is a cross-browser compatible workaround, courtesy of
| Gavin Kistner from phrogz.net:
|
|     .pixelated {
|       image-rendering: optimizeSpeed;             /* Legal fallback */
|       image-rendering: -moz-crisp-edges;          /* Firefox */
|       image-rendering: -o-crisp-edges;            /* Opera */
|       image-rendering: -webkit-optimize-contrast; /* Safari */
|       image-rendering: optimize-contrast;         /* CSS3 Proposed */
|       image-rendering: crisp-edges;               /* CSS4 Proposed */
|       image-rendering: pixelated;                 /* CSS4 Proposed */
|       -ms-interpolation-mode: nearest-neighbor;   /* IE8+ */
|     }
|
| And if you want to do pixelated upscaling while drawing an
| image on a <canvas> element:
|
|     const canvas = document.getElementById('canvas');
|     const ctx = canvas.getContext('2d');
|     // turn off image smoothing when upscaling with ctx.drawImage()
|     ctx.imageSmoothingEnabled = false;
|
| [0] http://phrogz.net/tmp/canvas_image_zoom.html
|
| [1] https://developer.mozilla.org/en-
| US/docs/Web/API/CanvasRende...
|
| [2] https://observablehq.com/@jobleonard/gotta-keep-em-
| pixelated
| dassurma wrote:
| TIL! I just redeployed with these styles added. Should be
| live in a couple minutes. Thank you.
| sk65535 wrote:
| IIRC, back in ~'99, Unreal Engine was doing dithering on the u/v
| coordinates (!!) for texture mapping instead of bilinear
| interpolation. They used a fixed Bayer matrix.
|
| This was much faster, visually pleasant, and adapted nicely
| to viewing distance.
|
| Try freezing some close-up frames around 0:45 here:
| https://www.youtube.com/watch?v=aXA3360awec
|
| Tim Sweeney called this technique "ordered texture coordinate
| space dither", apparently:
|
| https://www.flipcode.com/archives/Texturing_As_In_Unreal.sht...
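|
| My reading of the trick, as a rough sketch (Python/numpy; a
| guess at the general idea, not Unreal's actual code):
|
|     import numpy as np
|
|     # 2x2 Bayer thresholds in (0, 1), one per screen position
|     B = (np.array([[0.0, 2.0], [3.0, 1.0]]) + 0.5) / 4.0
|
|     def sample_dithered(tex, u, v, x, y):
|         # nearest-neighbor fetch with an ordered-dither offset
|         # added to the texture coordinates before truncation;
|         # averaged over neighboring screen pixels this looks
|         # like bilinear filtering at point-sampling cost
|         h, w = tex.shape
|         ju = int(np.floor(u + B[y % 2, x % 2]))
|         jv = int(np.floor(v + B[(y + 1) % 2, (x + 1) % 2]))
|         return tex[jv % h, ju % w]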
| BlueTemplar wrote:
| > color palettes are mostly a thing of the past
|
| > With HDR and wide gamut on the horizon, things are moving even
| further away to ever requiring any form of dithering.
|
| My impression is that, if anything, the almost total dominance of
| sRGB (in the consumer space) is coming to an end in digital
| mediums, on one side from the generalization of wide-gamut and
| high-dynamic-range transmissive LCD/LED screens, and on the other
| side from the rising wave of both (higher-frequency) black & white
| and color e-ink. (And I'm still hopeful for the resurrection of
| transflective LCDs, like in the Pebble.)
|
| So I would expect dithering to come back, either to avoid banding
| on displays with fewer bits per channel, and/or to 'add' colors to
| lower-gamut ones.
|
| For instance, my 2009 LCD is an 8bpc wide-gamut one (and fairly
| high dynamic range too). So if I were to take full advantage of
| it (leaving sRGB), I would require dithering to avoid banding.
| dljsjr wrote:
| > According to Wikipedia, "Dither is an intentionally applied
| form of noise used to randomize quantization error", and is a
| technique not only limited to images. It is actually a technique
| used to this day on audio recordings [...]
|
| Dithering as a digital signal processing technique is also used
| frequently in the digital control of physical systems. One
| example of this is in the control of hydraulic servo valves[1];
| these valves are usually pretty small and their performance can
| be dominated by a lot of external factors. One of the biggest
| ones is "stiction", or static friction, wherein if the moving
| parts of the valve are at rest, it can take a large amount of
| current to get them going again, which translates into poor
| control of the valve and, in turn, poor control of the thing the
| valve is trying to move. It's common to use very high-
| frequency, small-amplitude dithering on these valves to eliminate
| the effects of stiction without compromising accuracy, which
| greatly improves the control stability and responsiveness of the
| servo valves.
|
| 1: https://en.wikipedia.org/wiki/Electrohydraulic_servo_valve
| zlsa wrote:
| I believe this is what the engines are doing during the
| majority of the ascent of the Starship SN8 test vehicle[0]. You
| can see the engines gimbaling very slightly in a circular
| pattern.
|
| [0]: https://youtu.be/ap-BkkrRg-o?t=6516
| aidenn0 wrote:
| Anything controlled by a PID can easily end up in a circular
| pattern, so it's not a given that this was to avoid stiction.
|
| [edit]
|
| One-dimensional PIDs can end up in a sinusoidal dynamic
| equilibrium, and two orthogonal sinusoids trace out an ellipse.
| JKCalhoun wrote:
| I worked with a team that did printing and low-level graphics
| rendering. When we were able to rename our lab at work, we went
| with "Dithering Heights."
| torginus wrote:
| I think the best way of thinking about dithering is that it's
| the 'whitening' of quantization noise. Quantization can be
| modelled by taking the difference between the originally
| continuous signal, and the resultant quantized image as
| "quantization noise". The resultant noise has a spectrum that's
| pretty 'spiky' due to its non-continuous nature: it has
| significant energy in its higher-frequency harmonics. By adding
| some noise before sending it off to be thresholded by the
| quantizer, the noise's spectrum is made a lot flatter, thus
| making the quantized image look more like the original image,
| but with a higher noise floor.
| im3w1l wrote:
| I would not define it exactly like that. I would say "Dithering
| is any method of reducing the bit depth of a signal that
| prioritizes accurate representation of low frequencies over
| that of high frequencies". This frames it as essentially an
| optimization problem, with random noise being a heuristic way
| of accomplishing it.
| longtom wrote:
| Good image dithering algorithms do maintain sharp features
| like edges.
| tomc1985 wrote:
| I feel like that is still a very narrow definition.
| Dithering's useful anywhere quantization produces an unwanted
| result, and is useful in a lot of places where "bitrate"
| isn't even a concept
| dahart wrote:
| PWM signals are dithered by definition, and are probably the
| most common interface, no?
| dljsjr wrote:
| Servo valve dithering is overlaid on top of the PWM duty
| cycle.
| jVinc wrote:
| I am in the same boat as the author, having only recently played
| Return of the Obra Dinn between Christmas and New Year's. I
| cannot recommend it enough; if you haven't played it and like
| puzzlers, you should pick it up.
|
| It's an extremely engaging story, and a narrative tool I have not
| previously encountered. No spoilers, as this is revealed
| immediately, but essentially you are navigating past events
| through frozen timepoints at the moments when people died, and
| have to determine the identities and fates of all of the roughly
| 60 crew aboard the ship, which requires a bit of puzzling things
| together across the different events that led to people's deaths.
|
| I doubt we'll see a similar follow-up game from Lucas Pope, as he
| has commented that this game grew much larger than he expected and
| that he would scale back for his next projects. Also, from Papers,
| Please to Obra Dinn he seems to be one to break the mold at each
| iteration. But I really wish there were more games like this,
| with different stories to investigate.
| dheera wrote:
| Not in the same boat as the author but in the past few weeks
| I've been getting into e-Ink displays and making gadgets and
| framed art with them, and most of the displays I can get a hold
| of are either 1-bit or 4-bit greyscale, so this is super
| relevant.
|
| My "crude" dithering algorithm I wrote though seems not
| mentioned in the article. What I do is (in the case of 1-bit)
| just let the grey value (from 0.0=black to 1.0=white) determine
| the probability that the pixel is on or off, and render
| pseudorandom numbers according to those probabilities. In the
| case of 4-bit greyscale I do the same but within 16 bins.
|
| I'm not sure how it compares to the methods in the article but
| maybe I can test this sometime.
| mrob wrote:
| >(in the case of 1-bit) just let the grey value (from
| 0.0=black to 1.0=white) determine the probability that the
| pixel is on or off
|
| This is equivalent to the random noise [-0.5; 0.5] before
| quantization example.
| dheera wrote:
| I think you're right. Very interesting -- that means I can do
| a whole lot better in the visual quality I get out of these
| e-Ink displays.
|
| Well damn, I'm excited I found this article! Thanks
| @dassurma!
| jfim wrote:
| You can actually combine error diffusion with your
| probability-based approach, which helps reduce patterns if I
| remember correctly (it's been a very long time).
| the_af wrote:
| I share your love for Return of the Obra Dinn, truly a
| masterpiece and a game which shocked me out of my usual apathy
| towards recent videogames.
|
| I have faith in Lucas Pope for whatever project he decides to
| tackle next. Two of his games are masterpieces, this one and
| Papers, Please, and I also liked Helsing's Fire a lot. Whatever
| he does next, regardless of scope and theme, will surely please
| me.
| cdelsolar wrote:
| This game was so freaking good, and I also agree, I'd become
| sort of bored of video games until this was recommended. I
| stayed glued to the game until I 100%ed it (didn't take me
| too long, maybe around 20 hours). I really really wish I
| could forget the game and replay it, or that there was a part
| 2, or something. Incredibly creative and well-made.
| croissants wrote:
| May as well ask here, the ESRB rating at the bottom of the Obra
| Dinn page [1] highlights "intense violence". Is this accurate?
| I'm a big wimp about visceral violence, so I'd prefer to have
| some idea before paying up. If 1/4 of the crew got disemboweled
| or something, I'm probably out.
|
| [1] https://obradinn.com/
| germinalphrase wrote:
| I have only played for a few hours, but all the violence (so
| far) has been communicated via sound effects during blacked
| out cut scenes (no visuals). The sound effects in the game
| are really well done and visceral, but otherwise you just see
| the low res frozen time 'results' of these cut scenes (e.g. a
| dead body under a cannon, a skeleton crushed and distorted,
| etc).
| dassurma wrote:
| To be safe: There _is_ some more graphic violence, but due
| to the dithering it's not very offensive imo. However,
| pretty much the entire crew does die rather brutal deaths
| (and that's what the plot of the game is centered around).
| jVinc wrote:
| There definitely are violent scenes and sounds in the game. I
| would not recommend playing it with young children, for
| instance. However, I would myself not put it in the same box as
| other games in the "intense violence" category.
|
| Mostly because, and this may sound silly, it's not
| "violence in motion". You are viewing a murder scene, and
| someone died, and you might hear someone take the last
| breaths of their life. That has a very high emotional impact
| which should not be ignored. But it is still, to me,
| fundamentally different from gory/bloody games with often
| fast, visceral violence. As mentioned by others, the scenes
| themselves are calmed a lot by the dithering art style.
|
| I would say about the emotional content though that this hits
| differently from other stories where characters die because
| of the narrative tool. You never have a Game of Thrones
| moment where a character you are heavily invested in suddenly
| dies, because even though you learn about the passengers and
| feel for them in their misery, you also realize up front even
| before you learn about them that they have died and you are
| just looking at memories before that event.
| owenversteeg wrote:
| What about dithering for the smallest possible images? I'm
| talking in the 300 byte - 2kb range here. Does anyone have any
| suggestions for what to do to really get file size down?
| dassurma wrote:
| Author here! I don't think many people have researched or
| optimized on this, but I also work on https://squoosh.app, and
| from that experience I know that dithering makes compression
| _worse_ most of the time (unless you use PNG and use a super
| small palette of colors). Interesting idea tho!
| shadowfaxRodeo wrote:
| Hi Surma! fantastic article. You can save a lot of data
| switching to a lossless format as you said, and especially
| when using ordered dithering. Even if the color palette is
| quite large.
|
| Error diffusion causes problems for certain color palettes,
| but usually results in a smaller image size.
|
| I've made a tool for doing this:
| https://doodad.dev/dither-me-this -- you can easily halve the
| size of a JPEG by dithering it and exporting it as a PNG.
| [deleted]
| steerablesafe wrote:
| pngquant is a FOSS tool designed for that.
|
| https://pngquant.org/
| mattdesl wrote:
| Quantizing to <= 256 colors will let you use a single byte per
| pixel, but there are other techniques like Block Truncation
| Coding that work well with 8bit images to go down to 2 bits per
| pixel or lower. Even at 2 bits per pixel, this is still quite
| big as raw data, so you typically will want to use compression
| on top such as RLE, DEFLATE, etc.
|
| I'm currently exploring this for my own applications,
| compressing large 8bit sprite sheet images, and it's producing
| much smaller images than most palette based image formats (GIF,
| WebP, PNG, etc). Follow my progress here:
| https://twitter.com/mattdesl/status/1346048282494177280?s=21
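|
| For the curious, a sketch of classic BTC on one grayscale block
| (Python/numpy; the stored data per block is the 1-bit mask plus
| two levels chosen to preserve the block's mean and variance):
|
|     import numpy as np
|
|     def btc_block(block):
|         # block: e.g. a 4x4 float array
|         mean, std = block.mean(), block.std()
|         mask = block >= mean
|         q, n = mask.sum(), block.size
|         if q in (0, n):
|             return np.full_like(block, mean)
|         lo = mean - std * np.sqrt(q / (n - q))
|         hi = mean + std * np.sqrt((n - q) / q)
|         return np.where(mask, hi, lo)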
| kccqzy wrote:
| I did use dithering for unimportant background images of a
| website. Use just 256 colors, and both PNG and GIF will use a
| color palette instead of describing each pixel separately.
| Really helps with the file size. Afterwards muck with the
| various lossless compression parameters in PNG with optipng to
| shave off a few more percent.
| makach wrote:
| Excellent article. I've also always wondered about this: I was
| making some gradients the other day and was curious how I could
| dither between two colours. Big thank you for this article!
| at_a_remove wrote:
| This is a great overview.
|
| A few years back, when TechShop still existed and was open, I
| made a present for my mother: a glass laser-etched _and_ engraved
| with an image from her favorite comic. Because the comic was
| painted in a set of watercolors, this was going to be difficult.
| I ended up tracing the lines (for the deeper engraving) and then
| stomping on the color palette for the etching. Finally, I settled
| on different newspaper halftone sets for each "color."
|
| It took several tries for it to come out alright. This might have
| saved me a few runs.
| jstrieb wrote:
| This article is very well put together, and a helpful reference,
| thanks for sharing!
|
| A sibling comment links to a Show HN that didn't get much
| attention, but is definitely worth checking out. I will also
| point out that it is possible to do dithering using GIMP
| (https://docs.gimp.org/en/gimp-image-convert-indexed.html).
| Alternatively, use ImageMagick if that is more your speed
| (https://legacy.imagemagick.org/Usage/quantize/#ordered-dithe...)
| sek wrote:
| This might be an interesting application for some early medical
| devices for blind people.
|
| What if the output is binary but at a higher resolution -- would
| they see like that?
| pompez wrote:
| Is it just my eyes, or does the gradient dithered in sRGB look
| more accurate than the one dithered in linear space?
| Ericson2314 wrote:
| I think both gradients are "wrong" in that they themselves
| interpolate without correcting for gamma. I think in the first
| example the original and the dither are wrong in the same way,
| while in the second the dither is more right than the gradient
| is.
|
| Basically I'm afraid the author of this post is a bit of a
| "careless eager student" archetype who, while generously
| pointing out the gotchas that to an expert might be second
| nature, is also introducing unintentional errors that add some
| confusion right back.
|
| I'm _not_ an expert in color, but with anything with so many
| layers of abstraction (physical, astronomical, psychological,
| various models that approximate each), it helps to work
| symbolically as long as you possibly can so the work can be
| audited. Trying to correct from the current "baked" state is
| numerical hell.
| BlueTemplar wrote:
| Yes, this is also my impression.
|
| But see also:
|
| https://news.ycombinator.com/item?id=25644176
| crazygringo wrote:
| If it does, then it probably means you've got some funky
| substandard or non-standard LCD panel or gamma setting or color
| correction on whatever you're viewing it on.
|
| Which isn't terribly unusual. But if your screen is well
| calibrated, then no -- the sRGB gradient should by definition
| be identical. That's literally the specification.
|
| (And it is on my MacBook, as Apple screens tend to be among the
| most accurate of general consumer screens.)
| firebotipt wrote:
| The sRGB version is evenly balanced bitwise yet 'gamma free'.
| The linear RGB version appears bitwise imbalanced due to gamma
| correction, but cross your eyes and blur your vision, and
| you'll see the linear RGB is actually more gamma correct!
| (Better contrast and luminosity)
| sunkencity wrote:
| It's also possible to get slightly better results by cheating a
| little with the error distribution: Ulichney's x1 and x2, as in
| "Simple gradient-based error-diffusion method", Journal of
| Electronic Imaging, Jul/Aug 2016.
| stephencanon wrote:
| If you want to play around with dithering on macOS or iOS, vImage
| in the Accelerate framework provides most of the algorithms
| discussed in the article (including Atkinson!), with performance
| more than adequate for most applications (and with convenient
| vImage utilities to fit them into a CV pixel buffer pipeline for
| video work). vImage also supports dithering to other bit depths,
| though one-bit output is what you want for that vintage look.
|
| https://developer.apple.com/documentation/accelerate/1533024...
| andrenotgiant wrote:
| Semi-related: the origin of the word "dither", as explained by
| this wonderful little Wikipedia page[1], is worth reading about.
|
| [1]: https://en.wikipedia.org/wiki/Dither
| dagss wrote:
| 3 minutes to generate 64 by 64 pixels of blue noise? I think one
| should be able to instead draw white noise, shape it with a noise
| amplitude profile in the frequency domain, and then do an inverse
| 2D FFT... it should only take milliseconds.
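|
| Something like this sketch (Python/numpy, with a simple
| high-pass ramp as the amplitude profile; real blue-noise masks
| are usually built with void-and-cluster, so this is only an
| approximation):
|
|     import numpy as np
|
|     def approx_blue_noise(n=64, cutoff=0.2, seed=0):
|         rng = np.random.default_rng(seed)
|         spectrum = np.fft.fft2(rng.standard_normal((n, n)))
|         fx = np.fft.fftfreq(n)
|         r = np.hypot(*np.meshgrid(fx, fx))     # radial frequency
|         spectrum *= np.clip(r / cutoff, 0, 1)  # suppress low freqs
|         noise = np.real(np.fft.ifft2(spectrum))
|         lo, hi = noise.min(), noise.max()
|         return (noise - lo) / (hi - lo)  # threshold map in [0, 1]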
| linuxlizard wrote:
| Back when I worked at Marvell Semiconductor (circa 2005), we made
| laser printer ASICs for HP. We did a lot of dithering in
| hardware. We had a hardware block that did error diffusion, and
| one that did regular diffusion. We also had a
| hardware block that did Blue Noise. I was responsible for
| implementing the firmware that drove those printers' scan/copy
| path: scan an image in monochrome, run through the dither
| hardware to create the bit pattern fed to the laser engine.
|
| No one could explain to me how to use the blue noise block. I
| couldn't understand what the blue noise block was doing. This is
| the first article that explained, in terms I could understand,
| how blue noise dithering works.
|
| I can die happy. Thank you.
| spectramax wrote:
| Now that's amazing. Not only because it was hardware-based, but
| because you were solving a real problem, a limitation, instead
| of pursuing a purely aesthetic endeavor. It was an engineering
| solution. There is a lot of aesthetic beauty in mundane
| engineering, rarely seen by anyone. So kudos to the designers for
| picking out something interesting and giving it the light of day
| :) It's all cool.
| dassurma wrote:
| This is a lovely compliment. Thank you for taking the time to
| write it :)
| linuxlizard wrote:
| Thank you for such a great blog post that even years later, I
| could catch up and understand!
|
| (We never did turn on the blue noise block. Even today, it
| sits idle. Sad.)
| willis936 wrote:
| There is no math.Random() in hardware, so I have to ask: what
| algorithm did the noise block use? :)
| H8crilA wrote:
| Not the OP, but since repeatability is not a problem you can
| just use any cheap and insecure random number generator and
| hardcode a constant for the seed.
| willis936 wrote:
| The period needs to be sufficiently long such that it won't
| show up as visible artifacts. I would think something like
| PRBS23 would do the trick and be trivial to implement.
|
| That's the cheapest choice. Better whiteness could come
| with some added complexity.
| darusnna wrote:
| For fewer visual artifacts it is recommended to use a PRBS
| whose primitive polynomial has roughly 50% of its taps set
| to 1. Same period (2^n - 1), but fewer short-term
| correlations.
| linuxlizard wrote:
| The great thing ("great") was that we did the lower-end laser
| printer niche for HP. We were able to make ASICs more cheaply
| than HP could make them for themselves. So we had the
| (cough) less impressive hardware (scan sensors, laser
| engines, motors) to work with, and image quality was so-so
| even at the best of times.
|
| We were able to bury a lot of bodies under the sensor
| noise and engine output. But we made a super reliable,
| super cheap laser printer -- the VW Bug of laser printers, if
| I may brag a bit. Twelve years later, the M1005 is still
| selling like hotcakes, I hear.
| vanderZwan wrote:
| I have no idea what those parameters represent, but I'm
| very curious! Could you give a layman's explanation?
| darusnna wrote:
| The standard PRBS23 polynomial is x^23 + x^18 + 1. Most
| of the coefficients are zero; only those of x^23, x^18,
| and the constant term are 1. This causes poor bit mixing,
| i.e. the output sequence will have strong correlations
| every 23 bits.
|
| Choosing a "fat" primitive polynomial, ie a polynomial
| not from this list[0] but rather a polynomial with 50% of
| the taps are 1 (but it still must be primitive[1]),
| increases the avalanche effect[2] to the optimal 50%
| probability, ie 50% of the state bits affect the output
| at each step, instead of just 2 or 3 taps out of 23.
|
| Note: the LFSR sequence length will remain the same,
| 2^23-1 in either case. It's just that the short-term
| correlation between bits will be lower.
|
| [0] https://en.wikipedia.org/wiki/Linear-
| feedback_shift_register...
|
| [1] https://en.wikipedia.org/wiki/Primitive_polynomial_(f
| ield_th...
|
| [2] https://en.wikipedia.org/wiki/Avalanche_effect
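|
| A software sketch of the construction (Python; a Fibonacci
| LFSR, with the PRBS23 taps as the example):
|
|     def lfsr_bits(taps, state, n):
|         # taps: 1-based positions of the feedback polynomial;
|         # state: nonzero initial register contents
|         width = max(taps)
|         for _ in range(n):
|             fb = 0
|             for t in taps:
|                 fb ^= (state >> (t - 1)) & 1
|             yield state & 1
|             state = (state >> 1) | (fb << (width - 1))
|
|     # PRBS23: x^23 + x^18 + 1, period 2^23 - 1
|     bits = list(lfsr_bits([23, 18], state=1, n=16))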
| vanderZwan wrote:
| I'll be honest, that is a bit terse for a lay-person like
| me, but the links hopefully give enough pointers to
| properly grok what you said with a bit of reading. Thank
| you!
| darusnna wrote:
| Note: in the software world, the equivalent construction
| to the LFSR with "fat" polynomial is the xorshift PRNG.
| For more details, see the book: Numerical Recipes, 3rd
| edition (not 2nd ed!), Chapter 7 random numbers, section
| 7.1 uniform RNG - history.
| linuxlizard wrote:
| We had to generate values in firmware then populate a LUT.
| IIRC we just used a simple pseudo-random number generator
| from the C library. Non-crypto so it didn't matter too much.
| torginus wrote:
| Well, LFSR RNGs are pretty efficient in terms of HW space.
| You take a single-bit-wide shift register of length n, and
| feed back its output to the beginning, XOR-ed with predefined
| bits in the shift register.
| jacobolus wrote:
| Folks interested in learning more about dithering in general or
| blue noise dithering in particular should read Bart Wronski's
| blog posts here:
|
| https://bartwronski.com/2016/10/30/dithering-in-games-mini-s...
| CalChris wrote:
| _Dynamic Rounding_, developed by Quantel back in the 90s for
| digital video+film, is underappreciated. Dynamic Rounding + Blue
| Noise is severely underappreciated.
|
| https://en.wikipedia.org/wiki/Quantel
|
| https://patents.google.com/patent/US6281938B1
| jiggawatts wrote:
| Are there any sample images floating around on the Internet?
| germanka wrote:
| The article brings back my ZX-Spectrum memories
| dandep wrote:
| Great resource! I played with dithering myself, applying it to
| images fetched from NASA's APIs from the rovers on Mars, and
| open-sourced it. I really enjoyed the visual outcome; you can see
| it here: https://github.com/danieledep/rovers-dithering-playground
| mark-r wrote:
| It was nice to see a mention of Robert Ulichney. His 1987 book
| "Digital Halftoning" covers most of the ground that this blog
| post does, plus more.
| dassurma wrote:
| I saw this book mentioned a couple of times during my research.
| I guess I should read it.
| Tomte wrote:
| Donald Knuth also has two nice chapters in "Digital
| Typography".
| mark-r wrote:
| I might have to look that up. It's hard for me to imagine
| what dithering and typography would have in common, other
| than they might both be used to produce a book.
| Tomte wrote:
| IIRC he created a special font for half-toning images.
| labawi wrote:
| > However, sRGB is not linear, meaning that (0.5,0.5,0.5) in sRGB
| is not the color a human sees when you mix 50% of (0,0,0) and
| (1,1,1). Instead, it's the color you get when you pump half the
| power of full white through your Cathode-Ray Tube (CRT).
|
| While true, to avoid confusion, it might be better rephrased
| without bringing human color perception or even colors into the
| mix.
|
| sRGB uses non-linear (gamma-encoded) values, which are good for
| 8-bit representation. However, operating on them as if they were
| normal (linear) numbers -- averaging them and expecting a blur, or
| adding and multiplying them and expecting the corresponding light
| to add and multiply -- gives nonsensical, mathematically and
| physically inaccurate results, which any sensor, human or animal,
| would register as not what was expected.
|
| The RGB colorspace in its linear form is actually very good for
| calculations; it's the gamma that messes things up.
|
| In a simplified monochrome sRGB/gamma space, a value _v_ means
| _k*v^g_ units of light, for some _k_ and gamma _g = 2.2_.
| Attempting to calculate an average like below is simply
| incorrect -- you need to degamma[1] (remove the ^g), calculate,
| and then re-gamma (reapply the ^g):
|
|     (k*v1^g + k*v2^g)/2 = k*(v1^g + v2^g)/2 != k*((v1+v2)/2)^g
|
| [1] The gamma function in sRGB is a bit more complex than
| f(x) = x^2.2.
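|
| In code, the standard sRGB transfer functions and a correct
| average look like this (Python/numpy sketch):
|
|     import numpy as np
|
|     def srgb_to_linear(v):
|         v = np.asarray(v, dtype=float)
|         return np.where(v <= 0.04045, v / 12.92,
|                         ((v + 0.055) / 1.055) ** 2.4)
|
|     def linear_to_srgb(v):
|         v = np.asarray(v, dtype=float)
|         return np.where(v <= 0.0031308, v * 12.92,
|                         1.055 * v ** (1 / 2.4) - 0.055)
|
|     # correct average of black and white: degamma, mix, re-gamma
|     mid = linear_to_srgb((srgb_to_linear(0.0) +
|                           srgb_to_linear(1.0)) / 2)
|     # mid is ~0.735, not 0.5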
| BlueTemplar wrote:
| Are you sure about sRGB being already good enough for averaging?
| (As long as we don't want to go to a wider color space, of
| course.)
|
| We have recently been taught how to do it 'properly', and we
| had to go through CIELAB 76 (which, AFAIK, is still an
| approximation, as human perception is actually non-Euclidean).
| labawi wrote:
| If you want physically accurate averaging (resize, blur etc),
| then RGB is fine, as long as you use linear values (or do
| long-winded transformed math). AFAIU it is by definition 100%
| physically accurate. As was said, sRGB uses gamma values,
| where typical math creates ill effects, as in many if not
| most typical programs.
|
| If you want to do perceptually uniform averaging of colors,
| color mixing / generating / artistic effects, that's
| something else entirely.
| BlueTemplar wrote:
| I'm not sure what you mean by "physically accurate"?
|
| No color reproduction is going to be physically accurate,
| except by accident, since color is an (average) human
| qualia, not a physical observable.
|
| And especially because, whatever way the colors are
| going to be reproduced, there's no guarantee that they will
| correspond to the same light spectrum as the original,
| since there's an infinity of spectra corresponding to the
| same impression for a specific color.
|
| And if you want to do perceptually accurate averaging,
| you're going to have to work in a perceptually uniform
| color space! Which (even linear) sRGB isn't.
|
| All that said, it's perfectly possible that linear sRGB is
| just good enough for most use cases of averaging, even
| despite not being perceptually uniform. Especially in OP's
| example with just 2 shades : black and white.
| labawi wrote:
| Yes, color is a human concept for certain stimuli; no
| recording or reproduction is 100% accurate, and even the
| model is not perfect (different people have shifted color
| sensitivities, some are even tetrachromats, and we are
| ignoring rods vs. cones altogether).
|
| However, the stimulus is predominantly light, which can be
| quantified, and certain operations with light are well
| studied. Not all of them, but the most noticeable ones are,
| and that is what is used when doing photorealistic
| rendering, etc.
|
| If you do a 2x downscale, you need to average 4 pixels
| together. Linear RGB should (by theory and definition)
| give you a (theoretically) physically accurate result, as
| if you did a physical 2x downscale by putting it further
| away in a uniformly lit room or whatever. You can't
| reproduce that precisely, but neither can you reproduce
| 1l + 1l of milk = 2l of milk.
|
| This is in stark contrast to downsizing, blur or whatever
| "performed" in sRGB, where images get curiously darker
| and slightly off-color (or is it lighter?).
|
| I'm not sure what a perceptual average of colors is, but
| if you mix pixels in a 1:1 ratio and apply blur / look
| from far away, there is only one correct result, and a
| linear RGB average is one way that gives you that single
| correct result. (Ignoring screen color reproduction
| deviations from what they are supposed to be.)
| BlueTemplar wrote:
| My bad, I was wrong - you can't have both linearity and
| perceptual uniformity, and I guess that in use cases like
| dithering, linearity is more important?
|
| https://news.ycombinator.com/item?id=25648490
| fireattack wrote:
| I always feel we have way too many historical burdens (which
| were good compromises at the time) in the (digital) image/video
| field.
|
| In no particular order (and some are overlapping), I can
| immediately think of gamma, RGB/YCbCr/whatever color models,
| different (and often limited) color spaces, (low-)color depth
| and dithering, chroma subsampling, PAL/NTSC, 1001/1000 in fps
| (think 29.97), interlaced, TV/PC range, different color
| primaries, different transfer functions, SDR/HDR, ..
|
| The list can go on and on, and almost all of them constantly
| cause problems in all the places you consume visual media (I do
| agree gamma is one of the worst ones). Most of them are not
| going anywhere in the near future, either.
|
| I often fantasize about a world with only linear, 32-bit (or
| better), perception-based-color-model-of-choice, 4:4:4 digital
| images (and similarly for videos). It would save us so much
| trouble.
| steerablesafe wrote:
| I don't think of colorspaces as "historical burdens". I don't
| like that CRT monitors are brought up every time sRGB is
| mentioned though. I know it has historical relevance, but
| it's not relevant anymore, and it's not needed to understand
| the difference between linear and non-linear colorspaces.
| fireattack wrote:
| I misunderstood what you meant. Please ignore. On a side
| note, by colorspace I mainly meant that we could just stick
| with one with an ultra-wide gamut. There are indeed other
| reasons to have different color spaces.
|
| (Below is my original comment for transparency.)
|
| -------------
|
| Gamma isn't really about CRTs; or I should say, they're two
| different things. The fact that a CRT has a somewhat physical
| "gamma" (light intensity varies nonlinearly with the
| voltage) is likely just a coincidence with the gamma we're
| talking about here.
|
| The reason gamma is used in sRGB is that human eyes are
| more sensitive to changes in darker areas, i.e. if the light
| intensity changes linearly, it feels more "jumpy" at the
| dark end (which causes perceptible banding). This is
| especially an issue at lower color depths. To solve this, we
| invented gamma-encoded spaces that give the dark end more
| bits/intensity intervals to smooth out perceived brightness.
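|
| A back-of-the-envelope sketch of that, using the common
| gamma ~2.2 approximation of the sRGB curve: one 8-bit code
| step near black is a far smaller jump in light intensity
| with gamma coding than with linear coding.
|
|       gamma = 2.2
|       # one code step near black, 8-bit linear coding:
|       linear_step = 2/255 - 1/255                    # ~3.9e-3
|       # the same code step with gamma coding:
|       gamma_step = (2/255)**gamma - (1/255)**gamma   # ~1.8e-5
|       # ~200x finer in the shadows, at the cost of coarser
|       # steps in the highlights, where we don't notice them.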
|
| >it's not needed to understand the difference between
| linear and non-linear colorspaces
|
| It absolutely should, since any gamma space has problems
| with "averaging", as explained by the GP. Actually, it's so
| bad that almost all the image editing/processing tasks we
| have today do it wrong (resizing, blurring, mixing...).
|
| This topic has been discussed extensively on the Internet,
| so I'm not going to go into too much detail. Good starting
| points are [1][2].
|
| [1] https://www.youtube.com/watch?v=LKnqECcg6Gw [2]
| http://blog.johnnovak.net/2016/09/21/what-every-coder-
| should...
| magicalhippo wrote:
| > It absolutely should
|
| The GP just pointed out that the CRT link is not needed
| to motivate the talk about linear vs non-linear.
| fireattack wrote:
| You're right, I misunderstood (I overlooked the CRT part in
| the OP's article).
| BlueTemplar wrote:
| I don't see why you would avoid talking about it.
|
| As far as I've understood, CRT monitor gamma has basically
| evolved to become the inverse of human eye gamma :
|
| http://poynton.ca/PDFs/Rehabilitation_of_gamma.pdf
|
| (With some changes for a less accurate, but more visually
| pleasing/exciting replication of brightness levels ?)
|
| Now, with many modern, digital screens (LCD, LED, e-ink?),
| as far as I've understood, the electro-optical hardware
| response is actually linear, so the hardware has to apply
| a non-linear conversion ?
|
| I'm still somewhat confused about this, as I expected to
| have to do gamma correction when making a gradient
| recently, but in the end it looked like I didn't have to
| (or maybe it's because I didn't do it properly : didn't do
| it two-way?).
|
| Note that the blog author might be confused there too, as
| just after he says :
|
| > With these conversions in place, dithering produces
| (more) accurate results:
|
| - you can clearly see that the new dithered gradient
| doesn't correspond to the undithered one ! (Both undithered
| gradients seem to be the same.)
| magicalhippo wrote:
| > I expected to have to do gamma correction when making a
| gradient recently, but in the end it looked like I didn't
| have to
|
| If you don't explicitly specify the color space you're
| working in, then you're using some implicitly defined
| color space in which case you basically need to know what
| that is (at least roughly).
|
| So traditionally in Windows, way back, when you created a
| bitmap, wrote some data to it and then drew it, there was
| no explicit mention of a color space. Instead it was
| implicit, and it was de-facto non-linear.
|
| These days you can specify[1][2] the color space you're
| working in, and Windows will then transform the colors
| into the specified device color space. So then you can
| specify if you want to work in linear RGB space or say
| non-linear sRGB.
|
| Unity has something similar[3], which affects how you
| write shaders, how your textures should be saved etc.
|
| [1]: https://docs.microsoft.com/en-us/previous-
| versions/windows/d...
|
| [2]: https://docs.microsoft.com/en-
| us/windows/win32/api/wingdi/nf...
|
| [3]: https://docs.unity3d.com/Manual/LinearRendering-
| LinearOrGamm...
| BlueTemplar wrote:
| Yes, and in almost all of these discussions, the implicit
| color space is (non-linear) sRGB. (IIRC Macs might have
| used a different default color space one-two decades ago
| ?)
|
| Also, I'm on Linux, and doing picture manipulation with
| Octave, but thank you for the links anyway !
| magicalhippo wrote:
| Yeah, I was just using Windows as the example because that's
| what I'm familiar with. I guess on Linux it can vary a lot
| more depending on the setup.
|
| So yeah, for Octave you need to know what Octave does with
| the data afterwards. If it's saving a matrix to a PNG,
| say, it could assume the matrix is linear and convert to
| sRGB, which would be a good choice if it also supported,
| say, OpenEXR files. However, it could also just take the
| values raw, assuming you'd converted the data to sRGB
| before saving. Or it could even allow you to specify the
| color space in the PNG's iCCP chunk, which would give you
| more control.
|
| Again, what it actually does is something that needs to
| be looked up.
| BlueTemplar wrote:
| Yeah, since we're learning, we're not doing anything
| fancy on that side (at least not yet), so for now we're
| working with sRGB bmp/png/jpg input and sRGB bmp output.
| steerablesafe wrote:
| sRGB gamma is often approximated as 2.2 [1], but the
| actual function has a linear section near 0 and a non-
| linear section with a gamma of 2.4; the linear section
| possibly exists to avoid numerical difficulties near 0.
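|
| For reference, the actual sRGB encoding function [1] is
| the piecewise curve (in LaTeX notation):
|
|       C_{sRGB} = \begin{cases}
|         12.92\, C_{lin} & \text{if } C_{lin} \le 0.0031308 \\
|         1.055\, C_{lin}^{1/2.4} - 0.055 & \text{otherwise}
|       \end{cases}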
|
| The document you cite claims that CRT gamma is typically
| between 2.35 and 2.55.
|
| Human eye gamma can probably be approximated with cieLAB,
| which is designed to be a perceptually uniform colorspace
| and seemingly has a gamma of 3 [2], although it also has a
| linear section, so maybe a slightly lower overall gamma.
| cieLAB is not state of the art among perceptually uniform
| colorspaces, though.
|
| [1] https://en.wikipedia.org/wiki/SRGB
|
| [2] https://en.wikipedia.org/wiki/CIELAB_color_space
|
| What I don't like about this whole CRT/gamma topic:
|
| 1. It brings in perceptually uniform colorspaces to the
| discussion, while it's completely unnecessary.
| Perceptually uniform colorspaces are mostly unsuitable
| for arithmetic on colors like any other non-linear
| colorspace.
|
| 2. While the sRGB colorspace and a colorspace defined by
| a CRT monitor's transfer function are closer to
| perceptually uniform than a linear colorspace, they are
| still pretty damn far from it. sRGB still does a decent
| job of preventing banding in dark areas.
|
| 3. The sRGB colorspace is not identical to a colorspace
| defined by a CRT monitor's transfer function.
|
| 4. "gamma" is a crude approximation for transfer
| functions, assuming they follow a power function on all
| of their domain.
|
| 5. This whole thing about CRTs and gamma doesn't matter
| if you just want to understand the key point: to do
| arithmetic on color components, you most probably want
| them represented in a linear colorspace, yet most color
| values you encounter are actually encoded in sRGB, so you
| want to convert to linear first and convert the result
| back afterwards, depending on what your output requires.
| Getting this wrong is the most widespread bug in computer
| color; you don't need the history of CRTs to understand
| it, and in fact it has nothing to do with perceptually
| uniform colorspaces.
| BlueTemplar wrote:
| > ciaLAB is not state of the art though in perceptually
| uniform colorspaces.
|
| Which _are_ state of the art ?
|
| > 1. It brings in perceptually uniform colorspaces to the
| discussion, while it's completely unnecessary.
| Perceptually uniform colorspaces are mostly unsuitable
| for arithmetic on colors like any other non-linear
| colorspace.
|
| I don't get what you mean; linearity (dL*) being defined
| wrt perceptual uniformity, isn't CIELAB 76 linear by
| definition (to an approximation) ?? And color arithmetic
| pretty much by definition implies dealing with distances
| and angles in a perceptually uniform color space, doesn't
| it ??
|
| (I would _really_ like to know, since it actually is the
| very topic of 'practical work' that we have to do for
| mid-January. We were told to do the transformations to
| CIELAB 76 and, after the modifications, back to sRGB,
| using the D65 illuminant.)
|
| Otherwise, I didn't mention the finer details about CRT
| transfer functions because it didn't seem to be relevant
| enough.
|
| > This is the most widespread bug in computer color and
| you don't need the history of CRTs to do that, and in
| fact this has nothing to do with perceptually uniform
| colorspaces.
|
| Yeah, I know, and while just doing this kind of
| transformation might be just good enough for the most
| common color arithmetic, is it really good enough for
| _all_ use cases ? To take an example from our practical
| work, seeing this kind of effect :
|
| https://en.wikipedia.org/wiki/Impression,_Sunrise#Luminance
|
| You know what, I think I'm going to try and do this
| practical work in two versions, one using just linear
| sRGB (and back). I'll see if I get noticeable
| differences. But that will have to wait a week or so, I'm
| too busy searching for internships right now (and have
| already spent too much time in this discussion...)
| steerablesafe wrote:
| > I don't get what you mean; linearity (dL*) being defined
| wrt perceptual uniformity, isn't CIELAB 76 linear by
| definition (to an approximation) ?? And color arithmetic
| pretty much by definition implies dealing with distances
| and angles in a perceptually uniform color space, doesn't
| it ??
|
| No. Linearity is about physical light intensities
| relating to perceived color. Imagine having two light
| sources on the same spot that you can turn on or off
| separately. If you turn the first one on you perceive a
| certain color, if you turn the other on you perceive an
| other color and if you turn both on you perceive a third
| one. It turns out the perceivable colors (under normal
| viewing conditions) are representable in a three
| dimensional vector-space (for most people) so that the
| first two colors add up to the third color for every
| possible two light sources. Such a linear colorspace is
| XYZ for example [1].
|
| This has nothing to do with perceptual uniformity.
| Perceptual uniformity is about the capability of
| distinguishing nearby colors. This defines a distance
| between colors, and there are three-dimensional
| colorspace representations where the Euclidean distance
| approximates this perceptual distance well. cieLAB is such
| a colorspace, but AFAIK there are better, state-of-the-art
| colorspaces for the same purpose. I'm not very well
| versed in this; I learned about them from this video [2].
|
| [1] https://en.wikipedia.org/wiki/CIE_1931_color_space
|
| [2] https://www.youtube.com/watch?v=xAoljeRJ3lU
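|
| A minimal sketch of the two-lights thought experiment
| (Python; the sRGB decode constants are standard, the colors
| are arbitrary): in a linear space, superposing lights is
| plain componentwise addition.
|
|       import numpy as np
|
|       def srgb_to_linear(c):
|           c = np.asarray(c, dtype=float)
|           return np.where(c <= 0.04045, c / 12.92,
|                           ((c + 0.055) / 1.055) ** 2.4)
|
|       # Two lights hitting the same spot: in linear RGB (XYZ
|       # works the same way) the superposition is just the sum.
|       # Values above 1.0 simply mean "brighter than display
|       # white", which is physically fine.
|       red  = srgb_to_linear([0.8, 0.2, 0.2])
|       teal = srgb_to_linear([0.2, 0.6, 0.6])
|       both = red + teal
|       # Summing the raw sRGB triplets instead would predict a
|       # different, wrong color: sRGB values don't add the way
|       # light does.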
|
| edit: gimp 2.10 now defaults to using a linear colorspace
| (not a perceptually uniform one!) for most if not all of
| its functionality. This affects alpha-blending layers, the
| paintbrush, resizing, blur, and pretty much everything
| that involves adding/averaging colors. There is still a
| "legacy" option on these tools to switch back to the
| probably-wrong sRGB methods, presumably for compatibility
| with old gimp files. There is a dramatic difference when
| you use a soft green brush on a red background, for
| example; it's worth trying out.
| BlueTemplar wrote:
| Ok, my bad, I should have re-read our lesson more
| carefully : we're actually supposed to do sRGB => XYZ =>
| CIELAB (and later back).
|
| And it looks like you can either have a Euclidean (linear)
| vector space (linear sRGB, XYZ) or a perceptually uniform
| one (CIELAB), but not both !?
|
| (I guess that I should have figured that out myself,
| sigh... this is why it isn't CIELAB that is used for
| monitor calibration, but CIELU'V' ? EDIT : Nope :
| "[CIELUV is] a simple-to-compute transformation of the
| 1931 CIE XYZ color space, but which attempted perceptual
| uniformity. It is extensively used for applications such
| as computer graphics which deal with colored lights.
| Although additive mixtures of different colored lights
| will fall on a line in CIELUV's uniform chromaticity
| diagram (dubbed the CIE 1976 UCS), such additive mixtures
| will not, contrary to popular belief, fall along a line
| in the CIELUV color space unless the mixtures are
| constant in lightness. ")
|
| So you have to pick the best color space for the job: for
| color averages, that would be one of the linear ones
| (linear sRGB, XYZ), while if you are trying to design a
| perceptually uniform gradient for data visualization, you
| would be better off picking a perceptually uniform space
| (CIELAB, CIELUV) ?
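|
| For what it's worth, that tradeoff is visible in the CIELAB
| lightness function itself; a minimal Python sketch (the
| constants are the standard CIELAB ones):
|
|       # CIELAB lightness L* from relative luminance Y (with
|       # reference white Y_n): a cube-root curve ("gamma ~3")
|       # plus a linear segment near black. Uniform in perceived
|       # lightness, but decidedly non-linear in light.
|       def lab_lightness(y, yn=1.0):
|           t = y / yn
|           delta = 6 / 29
|           f = (t ** (1 / 3) if t > delta ** 3
|                else t / (3 * delta ** 2) + 4 / 29)
|           return 116 * f - 16
|
|       print(lab_lightness(0.5))  # ~76: half the light already
|                                  # looks ~76% as bright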
| raphlinus wrote:
| See the recent Oklab post[0] for guidance on choosing a
| perceptually uniform color space for gradients. It's
| better than CIELab and CIELuv, both of which I would
| consider inferior to newer alternatives. CIELab in particular
| has quite bad hue shifts in the blue range.
|
| I'm also working on a blog post on this topic (there's an
| issue open in the repo for my blog, for the curious).
|
| [0]: https://news.ycombinator.com/item?id=25525726
| BlueTemplar wrote:
| That's a bit like saying that we shouldn't use lossy
| image/video compression.
|
| Do you realize the amount of 'waste' that a 32bpc+, 4:4:4,
| imaginary color triangle gamut picture would have ?
|
| Plus, it looks like the human eye has a sensitivity range
| of 9 orders of magnitude, with roughly 1% discrimination
| (so add 2 more orders of magnitude). That's ~10^11
| distinguishable levels, and log2(10^11) ~ 36.5, so you
| would need at least 37 bits per color with a linear
| coding, the overwhelming majority of which would be
| horribly wasted !
| fireattack wrote:
| I have no issue with lossy compression; but things like
| 4:2:0 aren't really typical lossy compression. It's like
| calling resizing an image to half its resolution
| "compression".
|
| Also lossless compression can reduce plenty of "wasted"
| bits already (see: png vs bmp).
| BlueTemplar wrote:
| But they are ! Like other lossy forms of compression,
| chroma subsampling takes advantage of the human visual
| system's lower acuity for color differences than for
| luminance.
| corysama wrote:
| Since we're all posting our favorite dithering-related links:
| https://loopit.dk/banding_in_games.pdf
| gfaure wrote:
| This is a great counterpart to this slightly more implementation-
| focused article on the Obra Dinn aesthetic:
| https://danielilett.com/2020-02-26-tut3-9-obra-dithering/
| twic wrote:
| I'm surprised the algorithm for producing blue noise is so
| complicated.
|
| Could you not generate white noise, then apply a high-pass
| filter? Say, by blurring it and then subtracting the blurred
| version from the original?
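|
| Something like this, maybe (a Python sketch, not the
| void-and-cluster method from the article; scipy's
| gaussian_filter stands in for the blur, and a final
| rank-ordering turns the result into a threshold map with a
| uniform histogram):
|
|       import numpy as np
|       from scipy.ndimage import gaussian_filter
|
|       rng = np.random.default_rng(0)
|       white = rng.random((64, 64))
|       # high-pass: subtract a blurred copy from the original
|       highpass = white - gaussian_filter(white, sigma=1.5,
|                                          mode='wrap')
|       # rank-order pixels into a uniform-histogram threshold map
|       ranks = highpass.ravel().argsort().argsort()
|       threshold_map = (ranks.reshape(64, 64) + 0.5) / ranks.size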
|
| Could you split the map into blocks, fill each block with a
| greyscale ramp, then shuffle the pixels inside the block?
|
| Could you take a random sudoku approach, where you start with a
| blank map, then randomly select pixels, look at the distribution
| of pixels in their neighbourhood, randomly pick one of the
| greyscale values not present (or least present) for that pixel,
| then repeat?
| HelloNurse wrote:
| The first technique is challenging because the filter needs to
| have a specific frequency response, without shortcuts. Such
| high quality filtering can be done more simply and more exactly
| with an inverse Fourier transform.
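|
| Roughly like this (a Python sketch; a plain linear ramp
| stands in for the properly chosen radial profile, so the
| result is only "blue-ish" noise):
|
|       import numpy as np
|
|       n = 64
|       rng = np.random.default_rng(0)
|       # keep the random phases, reshape the magnitudes
|       spectrum = np.fft.fft2(rng.random((n, n)))
|       fy, fx = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n),
|                            indexing='ij')
|       spectrum *= np.hypot(fx, fy)   # attenuate low frequencies
|       blue_ish = np.real(np.fft.ifft2(spectrum))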
|
| The second technique doesn't seem promising because shuffling
| is very crude: differently shuffled small blocks are going to
| have border artifacts, repeating small blocks are going to have
| worse periodic artifacts, large blocks are going to approximate
| white noise rather than blue noise. Higher quality would
| require precomputing blue noise images, losing the advantage of
| on-the-fly computation.
|
| The third technique, being sequential, is unlikely to be
| practically cheaper than an inverse Fourier transform.
| twic wrote:
| I tried the first couple of ideas:
|
| https://gist.github.com/tomwhoiscontrary/337cb8aaef013327a89.
| ..
|
| I only went as far as generating threshold maps, not actually
| using them. Couldn't see how to do that using ImageMagick,
| and didn't want to write it manually!
|
| The high-pass filter maps "look okay", but I haven't looked
| at their spectrum. How important is it that they have a
| specific frequency response? What is the failure mode if they
| don't?
|
| The shuffling maps don't "look" so hot. There aren't border
| artifacts or repeating blocks (and you wouldn't expect these
| a priori - not sure why you think that), but indeed, it's not
| very different from white noise.
| semireg wrote:
| I use the excellent imageworsener compiled to WASM to dither
| the images in my label printer Electron app:
| https://label.live
|
| Read more at http://entropymine.com/imageworsener/
| syssam1897 wrote:
| There was recently a Show HN thread about an interactive
| dithering tool called Dither Me This[1]. I think you'll
| like it if you liked this article.
|
| [1]: https://news.ycombinator.com/item?id=25469163
| vanderZwan wrote:
| If you don't mind, I'll plug a little innovation of my own:
| mixing ordered and error diffusion dithering. The idea behind it
| is actually pretty simple: _technically_ all forms of dithering
| use a threshold map; we just don't tend to think of it when it's
| one flat threshold for the entire image. So there is nothing
| stopping us from decoupling the threshold map from the rest of
| the dithering algorithm, meaning it's trivial to combine error
| diffusion with more complex threshold maps:
|
| https://observablehq.com/@jobleonard/ordered-error-diffusion...
|
| (For the record, I picked a default example that highlighted a
| "hybrid" dither with a very dramatic difference from its
| "parents" instead of the _prettiest_ result)
|
| Interestingly, and perhaps not surprisingly, a variable threshold
| map interacts with the error diffusion itself, making it amplify
| local contrast and recover some fine detail in shadows and
| highlights (although also possibly overdoing it and crushing the
| image again).
|
| What's also somewhat interesting (to me at least) is that this
| is really simple to implement: take any error diffusion
| dithering kernel and make it use the threshold map from ordered
| dithering. In principle it would have been possible to use this
| on any old hardware that could handle error diffusion dithering.
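|
| A rough grayscale sketch of the idea in Python (the notebook
| above is the real implementation; this just swaps
| Floyd-Steinberg's flat 0.5 threshold for a tiled Bayer map):
|
|       import numpy as np
|
|       # standard 4x4 Bayer threshold matrix, scaled to (0, 1)
|       BAYER4 = (np.array([[ 0,  8,  2, 10],
|                           [12,  4, 14,  6],
|                           [ 3, 11,  1,  9],
|                           [15,  7, 13,  5]]) + 0.5) / 16.0
|
|       def hybrid_dither(img):
|           # Floyd-Steinberg error diffusion, except the per-pixel
|           # threshold comes from the tiled Bayer matrix.
|           img = img.astype(float).copy()
|           h, w = img.shape
|           out = np.zeros((h, w))
|           for y in range(h):
|               for x in range(w):
|                   t = BAYER4[y % 4, x % 4]
|                   out[y, x] = 1.0 if img[y, x] >= t else 0.0
|                   err = img[y, x] - out[y, x]
|                   if x + 1 < w:
|                       img[y, x + 1] += err * 7 / 16
|                   if y + 1 < h:
|                       if x > 0:
|                           img[y + 1, x - 1] += err * 3 / 16
|                       img[y + 1, x] += err * 5 / 16
|                       if x + 1 < w:
|                           img[y + 1, x + 1] += err * 1 / 16
|           return out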
| semi-extrinsic wrote:
| I'll also plug an invention of my own: error diffusion with a
| random parameter in the diffusion matrix, keeping the sum of
| weights constant (and equal to 4/5, so boosting contrast
| slightly).
|
| I came up with this for a Code Golf challenge a few years ago;
| personally, I think it looks really good. I haven't seen it
| elsewhere.
|
| Disclaimer: yes I like to write ugly Fortran code for fun (and
| profit).
|
| https://codegolf.stackexchange.com/a/26569
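|
| One plausible reading of the scheme in Python (the actual
| Fortran is in the link; Sierra Lite normally diffuses
| err*2/4 right, err*1/4 down-left and err*1/4 down, while
| here a random parameter r shuffles weight between two taps
| and the weights always sum to 4/5):
|
|       import numpy as np
|
|       def randomized_sierra_lite(img, seed=0):
|           rng = np.random.default_rng(seed)
|           img = img.astype(float).copy()
|           h, w = img.shape
|           out = np.zeros((h, w))
|           for y in range(h):
|               for x in range(w):
|                   out[y, x] = 1.0 if img[y, x] >= 0.5 else 0.0
|                   err = img[y, x] - out[y, x]
|                   r = rng.random()   # the random parameter
|                   # (2+r)/5 + (1-r)/5 + 1/5 == 4/5 < 1, so a
|                   # fifth of the error is dropped each step,
|                   # slightly boosting contrast
|                   if x + 1 < w:
|                       img[y, x + 1] += err * (2 + r) / 5
|                   if y + 1 < h:
|                       if x > 0:
|                           img[y + 1, x - 1] += err * (1 - r) / 5
|                       img[y + 1, x] += err * 1 / 5
|           return out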
| vanderZwan wrote:
| That is a very elegant trick, I love it! I wonder what comes
| out of that when applied to a flat grayscale image - perhaps
| it leads to a decent blue noise pattern, or an approximation
| of it? EDIT: The reason I'm half-expecting that is because
| semi-randomizing the diffusion matrix reminds me a bit of
| Bridson's Algorithm, in that it mixes constraints with
| randomization[0].
|
| And kudos for sticking to the programming language you love
| and feel comfortable in :)
|
| EDIT: Something I never noticed before: a black and white
| dithered image causes _flickering_ when scrolling on an LCD
| screen, at least on mine, and it amplifies regions with
| patterns, like the checkerboards in ordered dithering, or the
| regular artifacts in the example image of the challenge.
|
| However, your "randomized Sierra Lite" version seems to mask
| that flickering: it's still there, but feels much more like
| white noise that is relatively easy to ignore.
|
| [0] https://observablehq.com/@techsparx/an-improvement-on-
| bridso...
| elitepleb wrote:
| As for arbitrary-palette positional dithering,
|
| there's no better write-up than
| https://bisqwit.iki.fi/story/howto/dither/jy/
| pdkl95 wrote:
| Bisqwit's discussion of dithering is _outstanding_. He presents
| a _very_ impressive algorithm for arbitrary-palette dithering
| that is _animation safe_.
|
| > This paper introduces a patent-free positional (ordered)
| dithering algorithm that is applicable for arbitrary palettes.
| Such dithering algorithm can be used to change truecolor
| animations into paletted ones, while maximally avoiding
| unintended jitter arising from dithering.
|
| He demonstrates it "live coding" style in this[1] video where
| he writes a demo in 256 colors of a "starfield" animation with
| color blending and Gaussian blur style bloom. The first
| animation at 6:33 using traditional ordered dithering has the
| usual annoying artifacts. The animation at 13:00 using an
| optimal palette and his "gamma-aware Knoll-Yliluoma positional
| dithering" changed my understanding of what was possible with a
| 256 color palette. The animation even looks decent[2] dithered
| all the way down to a _16 color_ palette!
|
| If that wasn't crazy enough, he also "live codes" a
| raytracer[3] in DOS that "renders in 16-color VGA palette at
| 640x480 resolution."
|
| [1] https://www.youtube.com/watch?v=VL0oGct1S4Q
|
| [2] https://www.youtube.com/watch?v=W3-kACj3uQA
|
| [3] https://www.youtube.com/watch?v=N8elxpSu9pw
| dahart wrote:
| > He presents a very impressive algorithm for arbitrary-
| palette dithering that is animation safe.
|
| They do look good. This makes me want to run his animation
| examples on a blue noise dither, since he didn't compare to
| blue noise, and it's also animation safe...
| rubatuga wrote:
| TL;DR: use Floyd-Steinberg dithering with gamma correction.
| Synaesthesia wrote:
| Thought the blue noise looked the best.
| nitrogen wrote:
| _With HDR and wide gamut on the horizon, things are moving even
| further away from ever requiring any form of dithering._
|
| You still need dithering to prevent visible banding in really
| subtle color gradients.
| dahart wrote:
| You still need dither for print, where subtle gradients on-
| screen can suddenly become very visible. I've had some large
| format giclee prints surprise me with nasty color banding.
| nitrogen wrote:
| I suspect some of the lines that were showing up in the OP's
| error diffusion test might be paralleling color banding lines
| in the original images.
___________________________________________________________________
(page generated 2021-01-06 23:03 UTC)