[HN Gopher] All 16,777,216 RGB colours (2005)
___________________________________________________________________
All 16,777,216 RGB colours (2005)
Author : samrohn777
Score : 109 points
Date : 2021-09-25 16:38 UTC (6 hours ago)
(HTM) web link (davidnaylor.org)
(TXT) w3m dump (davidnaylor.org)
| mysterypie wrote:
| If I hadn't been told what this is, I would never have guessed
| that it's a representation of all colors. I know it's all the
| colors -- I can zoom in and see lots of different colors -- but
| why doesn't it _look_ like that? My initial description would
| have been "purple and green squares". I might elaborate that
| it's "purple and green squares with a bit of blue separating the
| rows". But why doesn't it feel like all colors? Because of the
| way it's organized? For the same reason that his randomized
| pixels look like a grey blob?
| nightcracker wrote:
| > I can zoom in and see lots of different colors
|
| If you zoom in further, you'll soon realize your screen is
| nothing but pure red, green and blue subpixels at varying
| intensities.
| thaumasiotes wrote:
| You can't see the individual pixels. This is illustrated better
| by the second image, in which the same pixels occur in a random
| order. It is mostly a uniform gray, despite the fact that
| almost all of the pixels are not gray.
| zerocrates wrote:
| The reason it specifically looks like "purple and green"
| squares _is_ because of how it's organized.
|
| Each of those smaller squares is made by having red at 0 and
| blue at 0 at the top left corner, then increasing each along
| one of the x/y axes until you have red 255 and blue 255 at the
| bottom right corner. The "green" value is constant for each
| square, and just increases by one for each subsequent square
| packed in. At distant/normal viewing you're not going to
| resolve all those individual colors but will instead see the
| averages (to say nothing of the computer itself averaging
| things to make a zoomed-out view or a smaller rendering).
|
| So you basically have subsquares that start, on average, as
| "purple" (or let's say magenta). They have no or very little
| green and mostly are mixtures of red and blue. As you go toward
| the bottom, green increasingly dominates as what were once the
| darkest parts of each sub-square become just green and what
| were the magenta parts tend toward white. Blur your vision a
| little and the image just looks like a magenta-to-green
| gradient.
|
| You could construct this image so it would still contain
| "all the RGB colors" but look from afar like "yellow and blue
| squares" or "cyan and red squares" instead, just by changing
| your method for constructing/organizing it.
| There'll be variation in how well these different variants
| "blend" and so on just based on how we perceive the different
| colors, of course.
|
| In terms of that perceptual variation, take for example this:
| https://allrgb.com/thingy , in particular the little preview on
| the left. This is what would be the "yellow to blue" kind of
| ordering, but it doesn't really "blend" as much. The bottom
| subsquares in particular read as "cyan/magenta/blue"
| combinations rather than just "blue", and in the top "yellow"
| isn't really as dominant either. Blur your vision again though
| and it's a yellow-to-blue gradient.
| pavlov wrote:
| _> "But why doesn 't it feel like all colors?"_
|
| Because RGB is not a perceptual color space.
|
| With RGB, a lot of values are wasted encoding colors that are
| perceptually very similar. Many colors that are "interesting"
| to humans are encoded in a narrow slice of the RGB cube. Much
| of the cube is a deep ocean of indistinguishable bluish shades.
|
| Video traditionally uses various luminance+chroma color spaces,
| which are more perceptually effective than RGB and allow for
| easy subsampling to save bandwidth on the less important color
| data.
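|
| You can probe that non-uniformity directly. A minimal sketch,
| assuming scikit-image is installed (the color pairs below are
| arbitrary, not from the article):
|
|     import numpy as np
|     from skimage.color import rgb2lab, deltaE_ciede2000
|
|     def perceptual_distance(c1, c2):
|         # CIEDE2000 distance between two 8-bit RGB colors.
|         to_lab = lambda c: rgb2lab(np.array(c, float).reshape(1, 1, 3) / 255)
|         return deltaE_ciede2000(to_lab(c1), to_lab(c2)).item()
|
|     # The same 20-step move through the RGB cube, once in the
|     # dark-blue corner and once near mid gray: compare the outputs.
|     print(perceptual_distance((0, 0, 20), (0, 0, 40)))
|     print(perceptual_distance((117, 117, 117), (137, 137, 137)))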
| H8crilA wrote:
| Now I feel cheated by the OP link and want to see an image
| with all colors but from some good perceptual-based space.
| 5faulker wrote:
| It's literally so densely packed that it exceeds our capacity
| to resolve colors at typical viewing sizes.
| jonplackett wrote:
| https://en.m.wikipedia.org/wiki/RGB_color_space
|
| If you just think about it as a cube of colour, with redness
| (0-255) on one axis and green and blue on the others, then the
| image is just 256 slices of that cube and it all makes sense
| intuitively.
| perl4ever wrote:
| This gave me an idea for reorganizing it.
|
| I want to do a kind of bubble sort on the pixels and see what
| it looks like.
|
| To be specific, let's say we take two pixels at random and
| compare each to their 8 neighbors. If they are closer to the
| surrounding pixels when swapped, then swap them. Repeat.
|
| I may try this, but I'm lazy, so in case someone else wants
| to...
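|
| A minimal sketch of that swap criterion (hypothetical; assumes
| squared Euclidean RGB distance, ignores the image border, and
| doesn't special-case the two pixels being neighbors of each
| other):
|
|     import numpy as np
|
|     OFFSETS = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)
|                if (dy, dx) != (0, 0)]
|
|     def neighbour_cost(img, y, x, colour):
|         # Sum of squared distances from `colour` to the 8 neighbours.
|         return sum(np.sum((img[y + dy, x + dx].astype(int) - colour) ** 2)
|                    for dy, dx in OFFSETS)
|
|     def try_swap(img, rng):
|         # Pick two random non-border pixels; swap them if that
|         # brings both closer to their surroundings.
|         h, w, _ = img.shape
|         (y1, x1), (y2, x2) = rng.integers(1, [h - 1, w - 1], size=(2, 2))
|         c1, c2 = img[y1, x1].astype(int), img[y2, x2].astype(int)
|         before = (neighbour_cost(img, y1, x1, c1)
|                   + neighbour_cost(img, y2, x2, c2))
|         after = (neighbour_cost(img, y1, x1, c2)
|                  + neighbour_cost(img, y2, x2, c1))
|         if after < before:
|             img[y1, x1], img[y2, x2] = c2, c1
|             return True
|         return False
|
| Calling try_swap in a loop with a numpy Generator is the whole
| "bubble sort".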
| heyitsguay wrote:
| I actually just did try it! I'm finding that random pixels
| don't really get swapped. Except along comparatively uncommon
| borders between squares, pixels are already very similar to
| their immediate neighbors. Creating a video with 10,000
| attempted swaps per frame, I found no successful swaps in 50
| frames.
|
| That said, I bet there's some other criterion that would have
| the desired effect.
| zamadatix wrote:
| I don't think this has a stable "end" it can reach. Diagonals
| of pixels of the same x,y coordinates in linearly adjacent
| (green incrementation) cells would trigger a swap as the
| diagonal is +-1 in both the red and blue channel but the same
| pixel offset in a different cell is +-1 only in the green
| channel. I can't think of any 2d arrangement where the
| difference of a pixel in a cell to all 8 of its neighbors
| could be 1 for every pixel, so this "sort" would really just
| cycle pixels around based on the order your random number
| generator happens to pick swappable diagonals by chance. As
| such it'd never converge and look like slowly adding distance
| capped noise over time. The seams between cells would only
| seek to increase the cap on this noise from 2 to 255. 255
| being the minimal maximum distance I can imagine for a sorted
| grid of 2d pixels (i.e. I think this image is a min-max
| optimal ordering).
|
| It'd also take a ridiculous number of iterations to really
| notice the noise being added. Perhaps infeasibly many if you
| want to go deep and truly rely on random comparisons.
| fallingknife wrote:
| Doesn't really have them all with the compression though.
| nightcracker wrote:
| PNG is lossless, so it really does.
| fallingknife wrote:
| It has a compression parameter though
| calibas wrote:
| An image can be compressed and still lossless.
| fallingknife wrote:
| Right, but it lets you adjust the level, so I assumed it
| was lossy. But I googled it and it's not.
| Dylan16807 wrote:
| Yeah, all kinds of lossless compression let you adjust
| how much CPU time is spent.
|
| For png, you end up with a variety of settings all the
| way from "very weak" up to "weak".
| [deleted]
| detaro wrote:
| There are none missing. (A proof to the contrary would be
| easy: just list a color that is missing.)
| blhack wrote:
| A fun programming challenge: take a source image, and then
| recreate the image using the entire RGB color space, one
| different RGB value per pixel.
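|
| One cheap way in (a hypothetical sketch, not any particular
| entry; "source.png" is a placeholder): rank the source pixels
| and all 2^24 colors by luminance and match them up in order.
|
|     import numpy as np
|     from PIL import Image
|
|     # Resize the source so it has exactly 2^24 pixels.
|     src = np.asarray(Image.open("source.png").convert("RGB")
|                      .resize((4096, 4096)))
|
|     # Every 24-bit color exactly once.
|     codes = np.arange(1 << 24, dtype=np.uint32)
|     colors = np.stack([(codes >> 16) & 255, (codes >> 8) & 255,
|                        codes & 255], axis=1).astype(np.uint8)
|
|     luma = np.array([0.299, 0.587, 0.114])   # Rec. 601 weights
|     by_src = np.argsort(src.reshape(-1, 3) @ luma, kind="stable")
|     by_col = np.argsort(colors @ luma, kind="stable")
|
|     out = np.empty_like(colors)
|     out[by_src] = colors[by_col]  # i-th darkest pixel, i-th darkest color
|     Image.fromarray(out.reshape(4096, 4096, 3)).save("allrgb.png")
|
| Smarter entries use spatial data structures to match on full
| color rather than just luminance.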
| suvakov wrote:
| https://allrgb.com/
| mmaunder wrote:
| All?
| st_goliath wrote:
| It's quite simple actually. If you have 8 bits per color
| channel, you can have 256 values per channel (or 16777216 in
| total if you look at it as a 24 bit value).
|
| A rectangle of 256*256 pixels can display all possible
| combinations for two color channels, with the third one held
| constant. If you step through every possible value of the third
| channel, you end up with 256 of those pictures, which can be
| arranged neatly in a 16*16 grid (16*16 = 256).
|
| Because each image has 256*256 pixels, and we have 16*16 of
| them, we end up with a 4096*4096 pixel image (because 16*256 =
| 4096), which now holds every possible combination of the 3
| color channels.
|
| Because this systematic approach creates neatly ascending rows
| of color values, the resulting image compresses quite well.
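|
| A direct transcription of that layout (a sketch, assuming
| numpy and Pillow; red varies along x within a square, blue
| along y, green is the per-square constant, as described in
| the parent comments):
|
|     import numpy as np
|     from PIL import Image
|
|     ramp = np.arange(256, dtype=np.uint8)
|     r, b = np.meshgrid(ramp, ramp)     # red along x, blue along y
|     img = np.zeros((4096, 4096, 3), dtype=np.uint8)
|     for g in range(256):
|         gy, gx = divmod(g, 16)         # position in the 16*16 grid
|         tile = img[gy * 256:(gy + 1) * 256, gx * 256:(gx + 1) * 256]
|         tile[..., 0], tile[..., 1], tile[..., 2] = r, g, b
|     Image.fromarray(img).save("all16777216rgb.png")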
| jonplackett wrote:
| Visualising it as a cube of colour helps me think about it.
|
| Each of the 3 colours is 100% on one face and 0% on the
| opposite face, the cube being 256x256x256 pixels in size.
|
| Then in that image you're showing slices of that cube.
| jedberg wrote:
| Whoa, that's a really good way to visualize it. I'd love to
| see that rendered somehow.
| jonplackett wrote:
| https://en.m.wikipedia.org/wiki/RGB_color_space
|
| Turns out, it's also the way everyone else likes to
| visualise colours. That's why it's called a colour space
| I guess.
| pgn674 wrote:
| A long time ago, I made a version of this by taking a 3D Hilbert
| Curve, coloring the line with a 24 bit RGB color cube, and
| repacking the line as a 2D Hilbert Curve:
| http://niftythings.paulnickerson.net/2011/03/hilbert-rgb-pal...
| mastax wrote:
| > Edit 2011-06-30: Was inspired by one of the comments to try and
| compress the 58 kB png file. IzArc can create a 7z file which is
| only 705 bytes large! That is 1/71436th the size of the
| uncompressed image in Tiff format. That's pretty impressive
| stuff!
|
| This is hard to believe. An already compressed PNG can be
| substantially re-compressed by a generic lossless process?
| dynlib wrote:
| Why is it that the image with the scrambled pixels _looks_ mostly
| grey, even though it contains all the colors, just scrambled?
| The effect is amplified even more when the image is zoomed out.
| willis936 wrote:
| Mean([R(0:255), G(0:255), B(0:255)]) = (127.5, 127.5, 127.5)
| = 50% gray
| zerocrates wrote:
| The small image on the main page is a 256x256 version of a
| full-size... 4096x4096 image.
|
| So each pixel in the small version is actually going to be an
| average of a 16x16 block of original pixels. Because they're
| randomly distributed, mostly that averaging is going to result
| in medium gray. Zooming out is going to do something similar
| (though the exact effect would depend on how the zoom algorithm
| works).
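|
| A quick check of that claim (a sketch; exact numbers vary
| with the random seed):
|
|     import numpy as np
|
|     rng = np.random.default_rng(0)
|     block = rng.integers(0, 256, size=(16, 16, 3))  # one 16x16 RGB block
|     print(block.mean(axis=(0, 1)))  # each channel lands near 127.5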
|
| Plus, your eyes/brain are going to do some smoothing/averaging
| of their own even where the screen is able to display
| individual 1:1 source pixels (consider for example that every
| pixel on what's probably an LCD you're viewing the image on is
| itself some combination of red, green and blue subpixels).
| 333c wrote:
| Needs (2005) in the title.
| lindseymysse wrote:
| I was with my cousin-in-law at the Getty a few years ago and we
| were talking about how much information is lost when you look at
| a screen. The world is a lot richer than a mere 16,777,216
| colors. I think there are things about our sensation of colors
| that don't have names yet -- there are just things that greens
| and browns do that I have not been able to find words to
| describe. There are interactions light has with materials that we
| have not separated out into "colors" or "textures" yet.
| jmiserez wrote:
| 3D scenes retain some of that information, see
| https://en.wikipedia.org/wiki/Bidirectional_scattering_distr...,
| https://en.wikipedia.org/wiki/Bidirectional_reflectance_dist....
| romwell wrote:
| Evil UX idea of the day:
|
| Color selector where you pick the color from the randomized image
| with an eyedropper tool.
|
| The colors are all in there, what's there to complain about?
| mgdlbp wrote:
| Excellent demonstration of PNG progressive decoding by the
| randomized image (when opened full-res in-browser).
| 1270018080 wrote:
| When I clicked the randomized image, why did it go through waves
| of rendering?
| fred256 wrote:
| It appears to be interlaced:
|     all16777216rgb-scrambled.png: PNG image data,
|     4096 x 4096, 8-bit/color RGB, interlaced
| wtallis wrote:
| That's an interlaced PNG:
| https://en.wikipedia.org/wiki/Interlacing_(bitmaps)
|
| The image data is stored such that downloading the first
| section of the file gets you enough information to display a
| low-resolution version of the image, and continuing the
| download allows for progressive refinement of the image.
| calibas wrote:
| I assume it's because the image follows very specific patterns
| that the PNG algorithm is able to compress it so much? Each
| individual square has a set green value, the pixel to the right
| is red + 1, and the pixel below is blue + 1.
|
| I was trying to figure out how it's possible to store 16 million
| separate colors in 58,000 bytes. Looks like it's storing
| information about the gradient patterns, and not each individual
| color.
| jl6 wrote:
| And you can name them all here:
|
| https://colornames.org/
| LifeIsBio wrote:
| Ha, I was just playing around with that last night.
|
| I was bummed to realize that the api and the zip file only
| expose the top name for any color at one time. It'd be cool to
| have access to all the data.
| anderskaseorg wrote:
| I crafted a PNG image that squeezes all 16777216 colors into just
| 49131 bytes. This comes very close to the 1032:1 maximum
| compression ratio that DEFLATE compression can achieve on any
| data (a length/distance pair covering the maximum 258-byte match
| can cost as little as about 2 bits, and 258 * 8 / 2 = 1032).
|
| https://codegolf.stackexchange.com/a/217544/39242
| madars wrote:
| This is fantastic! The actual image in the post looks almost
| like <hr> so linking it here:
| https://i.stack.imgur.com/hPcJr.png (it also gzip -9's to just
| 253 bytes and to 241 bytes with bzip2 -9)
| yoru-sulfur wrote:
| A few years ago I wrote some software to create allrgb images
| that resemble a source image. I even wrote a small blog post
| about it
|
| https://davidbuckley.ca/post/colour-sort/
| tombh wrote:
| I wonder if dithering techniques could be used to convert _any_
| image into a 16 million colour image?
|
| Edit: Yes: https://allrgb.com (should have googled)
| nightcracker wrote:
| See also https://allrgb.com/.
| Laremere wrote:
| The source for many of these images appears to be the 2014 Code
| Golf Stack Exchange thread "Images With All Colors":
| https://codegolf.stackexchange.com/questions/22144/images-wi...
|
| Lots of interesting imagery, with source included.
| seligman99 wrote:
| Indeed, here's my version of the implementation; it's
| currently my wallpaper on most of my machines:
|
| My background (I took a bit of liberty with the colors):
| https://imgur.com/a/OFcsjZn
|
| And an animated version: https://youtu.be/mO7VuqLNK1w
|
| A fun use case to play around with octrees to try and speed
| it all up.
| santiagobasulto wrote:
| Sorry for the stupid question, I understand so little about
| image encoding. If each image has ALL the RGB colors ("not one
| color missing, and not one color twice") how can they be so
| different in size? How does JPG/PNG encode a pixel of
| information? Isn't it always the same size?
| ArchStanton wrote:
| Imagine how small the file would be if it were a couple of
| for loops. Compression can take advantage of the colors of
| adjacent/nearby pixels, adjacent pixels in time (for video),
| fewer bits for more common situations, removing differences
| that are invisible, etc.
|
| Gotta ask. Has anyone worked on image compression using
| machine learning? (that sounds like an obvious thing to do).
| It would be funny to end up with an algorithm no one
| understands.
| sgtnoodle wrote:
| A lot of old video games with impressive graphics were
| implemented sort of like that. Rather than storing bitmap
| data, the algorithm to render the artwork would be executed
| on the fly. It was actually faster than doing otherwise,
| since hard drives were so slow and the algorithm itself
| could fit in RAM.
|
| Tangentially related is the Crash Bandicoot game on the
| PlayStation. The developers figured out that untextured
| polygons were way faster to draw than textured polygons,
| and so they made the player character model out of tons of
| tiny colored polygons rather than fewer larger textured
| polygons. The result was a significantly better looking
| graphic for the same rendering time.
| cwp wrote:
| Yup. Consider that all compression boils down to some form
| of a model that can predict the next bits of information,
| and then an encoding that only includes the deviation of
| the actual data from the prediction. Machine learning gives
| us new ways of making predictions.
| ArchStanton wrote:
| I'll have to look into it.
|
| Given both the financial value of image compression
| (given the amount of video shoved down the net) and the
| asymmetry of codecs (it's OK to use resources to compress,
| not so much for decompress), I'd expect some real money
| to be spent in this area.
| NavinF wrote:
| > Has anyone worked on image compression using machine
| learning?
|
| Dunno about the state of the art, but pretty much every ML
| tutorial has a section titled "Image Compression Using
| Autoencoders" right at the beginning. It's the Hello World
| of ML. The perceptual quality vs. file size curve for such a
| simple network is pretty mediocre, but I'm sure you could
| do better if that was your goal.
| alex_smart wrote:
| Both PNG and JPG use compression (lossless in case of PNG,
| lossy in case of JPG) to store the image.
|
| Now your question is basically reduced to "how can text files
| with same number of bytes, each having ALL the ascii codes,
| compress to files of such different size". The answer is that
| it MUST necessarily be so. You can't have a one-to-one map
| from [2]^N to [2]^K where K < N.
| thaumasiotes wrote:
| > The answer is that it MUST necessarily be so. You can't
| have a one-to-one map from [2]^N to [2]^K where K < N.
|
| Except that you've already noted that JPG is a lossy
| compression scheme, so it doesn't matter that a one-to-one
| map isn't possible.
| sgtnoodle wrote:
| As a human analog, just think about how you would write down
| instructions for someone to recreate the image.
|
| For the ordered image: "Make a 16 by 16 grid of boxes, each
| getting redder from left to right and top to bottom. Inside
| each box make a 16x16 grid of boxes getting greener, and
| inside each of those boxes make a 16x16 grid of pixels
| getting bluer". Done.
|
| Vs. the random image: "Make a blueish red pixel with a bit of
| green. Then make a reddish blue pixel with a moderate
| amount of green. Then make a brownish pixel. Then make a
| greenish pixel..." That will go on for about 16 million
| sentences!
|
| Image compression is simply coming up with a language that's
| good for describing images, and then an algorithm for writing
| succinct descriptions in that language. Rather than being
| human friendly languages (a modest number of complex words
| made from an alphabet), they are computer friendly languages
| (a ton of simple words made from 0s and 1s.)
|
| JPEG compression separates out the image into "component"
| images: one for brightness (luma) and two that carry the
| color (chroma). Each of those components maps nicely to how
| human vision works. In particular, brightness needs high
| resolution and precision, while the chroma components can get
| away with much less resolution. Therefore, each component can be
| compressed differently and independently from each other. The
| component images are further broken up into a grid of 8x8
| boxes, and then each of those boxes is approximated via a
| weighted sum of reference box images (the encoder and decoder
| have a dictionary of reference boxes they both agree to use.)
| For each box, only the weights are saved. The weights
| themselves can have varying precision, and that's basically
| what you're controlling when you set the "jpeg quality". Higher
| quality jpegs have more precision in their weights, and lower
| quality jpegs have less precision in their weights.
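|
| A toy illustration of that 8x8 step (a sketch using a 2-D DCT
| from scipy as the "reference box" basis; real JPEG also
| quantizes and entropy-codes the weights):
|
|     import numpy as np
|     from scipy.fft import dctn
|
|     # A smooth 8x8 gradient block...
|     block = np.add.outer(np.arange(8.0), np.arange(8.0)) * 8
|     weights = dctn(block, norm="ortho")
|     # ...concentrates its energy in a handful of weights: here
|     # everything outside the first row and column is zero.
|     print(np.round(weights, 1))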
| Jasper_ wrote:
| PNG supports what's known as "filtering" [0], where a pixel
| can be stored as the difference between it and the one to its
| left/top. So if you have the values { 1, 2, 3, 4, 5, 6 },
| then the differences will be { 1, 1, 1, 1, 1, 1 }, which is
| very compressible by DEFLATE (analogy in English: "6 1's" is
| shorter than "1 1 1 1 1 1").
|
| JPEG uses a very different technology; it breaks the image
| into 8x8 blocks, and tries to fit the resulting 64 pixels to
| a gradient (yes, I know I'm simplifying). So pixels that tend
| to be "smooth" and "gradient-like" will compress much better
| than random noise.
|
| [0] https://www.w3.org/TR/PNG/#9Filters
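|
| A quick demo of why the filtering helps (a sketch; the "Sub"
| filter mimicked here is one of PNG's five filter types, and
| DEFLATE is what zlib implements):
|
|     import zlib
|     import numpy as np
|
|     row = np.arange(256, dtype=np.uint8)        # 0, 1, 2, ..., 255
|     sub = np.diff(row, prepend=np.uint8(0))     # 0, 1, 1, ..., 1
|     print(len(zlib.compress(row.tobytes(), 9))) # ramp: ~no savings
|     print(len(zlib.compress(sub.tobytes(), 9))) # filtered: tiny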
| romwell wrote:
| This is the best layman's explanation of JPEG compression
| that I've ever seen.
|
| I feel it's very much spot on, and true to the math within.
| willis936 wrote:
| This is why I like things like svg for pure gradient
| graphics. Uncompressed plaintext can be used to render all
| RGB colors with a similar filesize as png.
|
| I'll see if I can come up with a clever way to do it without
| having to resort to tricks that move the goalposts (such as
| having repeated colors).
|
| Okay, I think I got pretty close. The PNG export doesn't
| have 16.7 million colors, so there must be some error in my
| logic.
|
| 1370 bytes as a human readable plaintext file.
|
| 711 bytes as a 7-zip.
|
| 658 bytes as a usable compressed svgz file.
|
| 159936 bytes as a png.
|
|     <?xml version="1.0" encoding="UTF-8"?>
|     <svg width="256" height="65536" version="1.1"
|          viewBox="0 0 256 65536"
|          xmlns="http://www.w3.org/2000/svg"
|          xmlns:cc="http://creativecommons.org/ns#"
|          xmlns:dc="http://purl.org/dc/elements/1.1/"
|          xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
|       <defs>
|         <linearGradient id="a" x2="256" y1="32768" y2="32768"
|                         gradientUnits="userSpaceOnUse">
|           <stop stop-color="#f00" offset="0"/>
|           <stop stop-color="#ff0" offset=".166666667"/>
|           <stop stop-color="#0f0" offset=".333333333"/>
|           <stop stop-color="#0ff" offset=".5"/>
|           <stop stop-color="#00f" offset=".666666667"/>
|           <stop stop-color="#f0f" offset=".833333333"/>
|           <stop stop-color="#f00" offset="1"/>
|         </linearGradient>
|         <linearGradient id="b" x1="128" x2="128" y1="0" y2="65536"
|                         gradientUnits="userSpaceOnUse">
|           <stop offset="0"/>
|           <stop stop-color="#808080" stop-opacity="0" offset=".5"/>
|           <stop stop-color="#fff" offset="1"/>
|         </linearGradient>
|       </defs>
|       <metadata>
|         <rdf:RDF>
|           <cc:Work rdf:about="">
|             <dc:format>image/svg+xml</dc:format>
|             <dc:type rdf:resource="http://purl.org/dc/dcmitype/StillImage"/>
|           </cc:Work>
|         </rdf:RDF>
|       </metadata>
|       <rect width="256" height="65536" fill="url(#a)"
|             stroke-width="24" style="mix-blend-mode:normal"/>
|       <rect width="256" height="65536" fill="url(#b)"
|             stroke-width="24" style="mix-blend-mode:normal"/>
|     </svg>
| enriquto wrote:
| An image with the upper half black and the lower half white
| can be compressed to a few bytes, regardless of its size.
|
| An image with the same pixels but randomly arranged cannot
| really be compressed.
| SmooL wrote:
| A lot of encoders try to exploit patterns in the data, and so
| instead of saving the "raw" data, they save the "pattern",
| with the idea being that the pattern will be smaller.
| Different encoding formats use different patterns, so you get
| different file sizes
| 867-5309 wrote:
| I'd imagine it's compression along the lines of saving
| 255,255,255 as 3x255, or a series such as #98DEEE and #EEEE89
| as 98D(7E)89; only not those exact examples, and more
| sophisticated.
| Andrex wrote:
| In short, image files don't include all RGB colors. They
| basically include a line like "usesColorSpace: 'RGB'". Only
| the decoder needs to know how to interpret the RGB
| colorspace.
|
| When you save an image as JPG, it's compressed using an
| algorithm that gets it "pretty close" to the source image.
| The software doing the JPG decoding (browser, image viewer,
| etc.) basically reverses this algorithm to display the image
| back to you. For JPG, this compression is "lossy" and so
| you've lost detail from the original source image.
|
| PNG is a lossless image format, but it basically works the
| same way without sacrificing the source image's quality.
|
| The "standard" for an image format dictates how an encoder
| creates the image and how a decoder displays the image. Since
| everyone is "on the same page," the individual image files
| only need to contain what they need to -- basically, "this
| pixel = RGB(0,1,2)".
|
| Hope this makes sense and helps.
| Andrex wrote:
| I apologize for my ignorance on this subject. Please ignore
| the parent comment.
| jollybean wrote:
| You can start here: Huffman coding [1], which gives a really
| good overview of one kind of coding.
|
| [1] https://en.wikipedia.org/wiki/Huffman_coding
| boomlinde wrote:
| If you consider the image in terms of the differences between
| adjacent pixels, you get almost the same value across the
| entire image.
|
| PNG works like this. You can run each horizontal row through
| any one of a variety of delta encoders that are suitable for
| different situations. The goal is to minimize the range of
| values and maximize the repetition before you pass the
| encoded deltas through a dictionary based compressor.
| Pictures like this are near optimal for this approach.
| OscarCunningham wrote:
| My favourite: https://allrgb.com/order-from-chaos
|
| It uses simulated annealing to arrange the colours smoothly.
___________________________________________________________________
(page generated 2021-09-25 23:01 UTC)