[HN Gopher] Polychromatic Pixels
___________________________________________________________________
Polychromatic Pixels
Author : bluehat974
Score : 197 points
Date : 2024-07-18 16:11 UTC (1 day ago)
(HTM) web link (compoundsemiconductor.net)
(TXT) w3m dump (compoundsemiconductor.net)
| Teknomancer wrote:
| OLED tech has been very transformative for lots of my old gear
| (synthesizers and samplers mostly) that originally came with
| backlit LCD displays. But the OLEDs are offered in static colors,
| usually blue or amber. Sometimes white, red, or green.
|
| It would be very cool to have a display with adjustable color.
| 01100011 wrote:
| Any idea if there is a plan to produce discrete LEDs that are
| tunable?
| dmitrygr wrote:
| These still produce a single [adjustable] wavelength, which means
| some colors that are displayable on displays of today are not
| representable using just one of these, and multiples will be
| required.
| Scene_Cast2 wrote:
| I suppose anything besides the edge of the CIE horseshoe will
| need multiples.
| yig wrote:
| Two adjustable wavelength emitters should be sufficient, right?
| So the pick-and-place problem gets easier by a factor of 3:2
| rather than 3:1.
| modeless wrote:
| I bet you might run into some interesting problems trying to
| represent white with two wavelengths. For example, colorblind
| people (7% of the population) might not perceive your white
| as white. And I wonder if there is more widespread variation
| in human eye responses to single wavelengths between primary
| colors that is not classified as colorblindness but could
| affect the perception of color balance in a 2-wavelength
| display.
| mistercow wrote:
| If the refresh rate is high enough, a single LED could flip
| between multiple wavelengths to dither to non-spectral colors.
| layer8 wrote:
| Higher refresh/modulation rates imply higher power
| consumption. It's already a trade-off in current display tech
| for mobile.
| mistercow wrote:
| Sure, but that's assuming you need a higher rate than is
| already used for brightness. That's a question I think can
| only be determined experimentally by putting real human
| eyes on it, although I think you could do the experiment
| with traditional RGB LEDs. But the other question is
| whether the wavelength tuning can be changed at the same
| rate as intensity.
| MT4K wrote:
| Or if pixel density is high enough, adjacent pixels could
| display the colors to combine with no flickering. Unlike
| regular RGB subpixels, this would only be needed for areas
| where the color cannot be displayed by an individual pixel
| alone.
| mistercow wrote:
| Yeah, and both techniques can be combined, which is common
| with LCD screens, although it does sometimes lead to
| visible moving patterns when viewed close up.
|
| There's more flexibility with tunable wavelengths, though,
| since there will often be multiple solutions for what
| colors and intensities can be combined to create a
| particular photoreceptor response. By cycling through
| different solutions, I wonder if you could disrupt the
| brain's ability to spot any patterns, so that it's just a
| very faint noise that you mostly filter out.
| layer8 wrote:
| Yes, it'd be two subpixels instead of the current three. It's
| not clear that that's worth the added complexity of having to
| control each subpixel across two dimensions (brightness and
| wavelength) instead of just one (brightness).
| Retr0id wrote:
| Can you produce "white" with just two wavelengths?
| layer8 wrote:
| Yes, mix two complementary colors like orange and cyan. You
| just need two wavelengths that hit all three cone types [0]
| in the right ratio. There's the possibility that it's
| subject to more variation across individuals though, as not
| everyone has exactly the same sensitivity curves.
|
| [0] https://upload.wikimedia.org/wikipedia/commons/f/f1/141
| 6_Col...
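|
| As a rough numerical sketch of that (Python/numpy; the colour-
| matching values below are coarse approximate CIE 1931 samples,
| not authoritative data), scanning the intensity ratio of a few
| spectral pairs shows which two-wavelength chords pass near D65
| white:
|
|     import numpy as np
|
|     # coarse approximate CIE 1931 (xbar, ybar, zbar) samples
|     CMF = {480: (0.0956, 0.1390, 0.8130),
|            490: (0.0320, 0.2080, 0.4652),
|            580: (0.9163, 0.8700, 0.0017),
|            600: (1.0622, 0.6310, 0.0008)}
|     D65 = np.array([0.3127, 0.3290])  # target white chromaticity (x, y)
|
|     def chromaticity(xyz):
|         X, Y, Z = xyz
|         return np.array([X, Y]) / (X + Y + Z)
|
|     def closest_to_white(wl_a, wl_b):
|         # scan intensity ratios of the two spectral lines and return
|         # the smallest chromaticity distance to D65 the pair reaches
|         a, b = np.array(CMF[wl_a]), np.array(CMF[wl_b])
|         return min(np.linalg.norm(chromaticity(t * a + (1 - t) * b) - D65)
|                    for t in np.linspace(0.01, 0.99, 199))
|
|     for pair in [(490, 600), (480, 580), (480, 600)]:
|         print(pair, "-> distance to D65: %.3f" % closest_to_white(*pair))
|
| The first two (roughly cyan/orange) pairs pass within about 0.01
| of D65 in xy, while the 480/600 chord misses it by almost 0.1,
| which is the complementary-wavelength condition in action.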
| exmadscientist wrote:
| Human vision in the yellow (~590nm) region is known to be
| _extremely_ sensitive to particular wavelengths. Observe
| how quickly things go from green through yellow to amber/orange!
|
| So this is probably a nonstarter.
| Retr0id wrote:
| This vaguely reminds me of "CCSTN" (Color Coded Super Twisted
| Nematic) LCD displays, which were used in a few Casio calculators
| to produce basic colour output without the usual RGB colour
| filter approach.
|
| https://www.youtube.com/watch?v=quB60FmzHKQ
|
| https://web.archive.org/web/20240302185148/https://www.zephr...
| accrual wrote:
| I had a feeling the YouTube link would be Posy and was
| delighted when it was. His videos on display technologies are
| top notch.
| nayuki wrote:
| I noticed an unusual color LCD technology on the Pokemon
| Pikachu 2 GS too:
| https://electronics.stackexchange.com/questions/201827/how-d...
| ,
| https://bulbapedia.bulbagarden.net/wiki/Pok%C3%A9mon_Pikachu...
| chefandy wrote:
| For some reason I find those displays' shades of orange and
| green to be SUPER appealing. The blue is nice enough.
| itishappy wrote:
| This is super cool!
|
| I can certainly see these being useful in informational displays,
| such as rendering colored terminal output. The lack of subpixels
| should make for crisp text and bright colors.
|
| I don't see this taking over the general purpose display
| industry, however, as it looks like the current design is
| incapable of making white.
| hidelooktropic wrote:
| Incredible accomplishment, but the question remains what this
| will look like at the scale of a display on any given consumer
| device.
|
| Of course, it's only just now been announced, but I'd love to see
| what a larger scale graphic looks like with a larger array of
| these to understand whether perceived quality is equal or better,
| whether brightness is consistent across the spectrum, how pixels
| behave at high frame rates, and how resilient they are to
| potential burn-in.
| p1esk wrote:
| I hope this will go into AVP3
| joshmarinacci wrote:
| Alien Vs Predator 3?
| p1esk wrote:
| Apple Vision Pro 3
| maxrumpf wrote:
| I imagine color consistency will be such a pain here.
| Retr0id wrote:
| I'd hope that per-pixel calibration would solve that, but I
| wonder how much that calibration would drift over time.
| mensetmanusman wrote:
| Whatever the drift would be, inorganics would drift less than
| organic materials.
| k7sune wrote:
| Does this need a very accurate DAC to cover the entire color
| spectrum? Maybe even fine-tuning on each pixel?
| Joel_Mckay wrote:
| LEDs are somewhat temperature-sensitive devices, and getting
| repeatable high-granularity bit-depth may prove a difficult
| problem in itself.
|
| There are ways to compensate for perceptual drift, as modern
| LCD drivers do, but unless the technology addresses the same
| burn-in issues as OLED, it won't matter how great it looks.
|
| You may want to look at how DMD drivers handled the color-wheel
| shutter timing to increase perceptual color quality. There are
| always a few tricks people can try to improve the look at the
| cost of lower frame rates. =)
| jjmarr wrote:
| My ultimate hope is that this will allow us to store and display
| color data as Fourier series.
|
| Right now we only represent colour as combinations of red, green,
| and blue, when a colour signal itself is really a combination of
| multiple "spectral" (pure) colour waves, which can be anything in
| the rainbow.
|
| Individually controllable microLEDs would change this entirely.
| We could visualize any color at will by combining them.
|
| It's depressing that nowadays we have this technology yet video
| compression means I haven't seen a smooth gradient in a movie or
| TV show in years.
| ginko wrote:
| With two wavelength-tunable LEDs you should be able to cover
| the entire CIE colorspace.
|
| That's because the points on the outer edge of the CIE diagram
| are pure wavelengths, and you can get to any point inside by
| interpolating between two of them.
| bobmcnamara wrote:
| How do you make white?
| meindnoch wrote:
| E.g. mix 480nm cyan and 590nm orange.
| llama_drama wrote:
| Would this be practical? Or would it be similar to how
| printers have separate black ink, which is theoretically
| unnecessary?
| layer8 wrote:
| By mixing two complementary colors.
| meindnoch wrote:
| What would be the purpose of this?
|
| The human eye can't distinguish light spectra producing
| identical tristimulus values. Thus for display purposes [1],
| color can be perfectly represented by 3 scalars.
|
| [1] lighting is where the exact spectrum matters, c.f. color
| rendering index
| layer8 wrote:
| Color data has three components for the simple reason that the
| human eye has three different color receptors. You can change
| the coordinate system of that color space, but three components
| will remain the most parsimonious representation.
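|
| A minimal sketch of that projection, with toy Gaussian stand-ins
| for the cone sensitivities (not real data): whatever the physical
| spectrum is, the eye only reports three inner products of it, so
| three numbers per pixel are all a display has to reproduce.
|
|     import numpy as np
|
|     wl = np.arange(400, 701, 5)            # wavelengths, nm
|
|     def bump(center, width):               # toy Gaussian sensitivity
|         return np.exp(-0.5 * ((wl - center) / width) ** 2)
|
|     cones = np.stack([bump(565, 35),       # rough "L" cone
|                       bump(540, 35),       # rough "M" cone
|                       bump(445, 25)])      # rough "S" cone
|
|     broadband = bump(560, 80)                            # smooth, wide spectrum
|     two_lines = 1.2 * bump(530, 4) + 1.1 * bump(610, 4)  # two narrow lines
|
|     # each spectrum collapses to three numbers before the brain sees it
|     print("broadband ->", np.round(cones @ broadband, 2))
|     print("two lines ->", np.round(cones @ two_lines, 2))
|
| Two spectra that happen to produce the same three numbers are
| metamers: physically different, visually identical.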
| lsaferite wrote:
| I started working with a hyperspectral imager a while back
| and the idea of storing image data in 3 wide bands seems so
| odd to me now. Just the fact that my HSI captures 25 distinct
| 4nm bands inside a single 100nm band of what we are used to
| with a 3-band image is awesome.
|
| Sorry, I get excited every time I work with hyperspec stuff
| now and love talking about it to anyone that will listen.
| meindnoch wrote:
| Hyperspectral imaging has its applications. A hyperspectral
| _display_ on the other hand makes no sense (unless your
| target audience consists of mantis shrimps).
| mncharity wrote:
| > I get excited every time I work with hyperspec stuff now
| and love talking about it to anyone that will listen.
|
| Color is widely taught down to K-2, but content and
| outcomes are poor. So I was exploring how one might better
| teach color, with an emphasis on spectra. Using
| multispectral/hyperspectral images of everyday life,
| objects, and art, seemed an obvious opportunity. Mousing
| over images like[1] for example, showing spectra vaguely
| like[2]. But I found very few (non-terrain) images that
| were explicitly open-licensed for reuse. It seemed the
| usual issue - there's so much nice stuff out there, living
| only on people's disks, for perceived lack of interest in
| it. So FWIW, I note I would have been delighted to find
| someone had made such images available. Happy to chat about
| the area.
|
| [1] http://www.ok.sc.e.titech.ac.jp/res/MSI/MSIdata31.html
| [2] https://imgur.com/a/teaching-color-using-spectra-
| zOtxQwe
| nfriedly wrote:
| > 6,800 pixel-per-inch display (around 1.1 cm by 0.55 cm, and
| around 3K by 1.5K pixels)
|
| That sounds like it's getting close to being a really good screen
| for a VR headset.
| k__ wrote:
| Nice, that's double what the Vision Pro has.
| Retr0id wrote:
| Hm, thinking about this further, this would need dithering to
| work properly (which probably works fine, but the perceived
| quality difference would mean pixel density comparisons aren't
| apples-to-apples)
|
| Presumably, you get to control hue and brightness per-pixel. But
| that only gives you access to a thin slice of the sRGB gamut
| (i.e. the parts of HSL where saturation is maxed out);
| dithering can solve that. Coming up with ideal dithering
| algorithms could be non-trivial (e.g. maybe you'd want temporal
| stability).
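|
| As a sketch of the temporal side of that (the chromaticity
| coordinates below are made up, nothing from the article): a
| per-pixel error accumulator, i.e. a first-order sigma-delta,
| emits whichever reachable "pure" colour best cancels the error
| carried over from earlier frames, so the time-average converges
| on a target the pixel can't show directly.
|
|     import numpy as np
|
|     palette = np.array([[0.07, 0.20],    # hypothetical cyan-ish xy
|                         [0.55, 0.44],    # hypothetical orange-ish xy
|                         [0.17, 0.70]])   # hypothetical green-ish xy
|     target = np.array([0.3127, 0.3290])  # D65-ish white, inside the triangle
|
|     err = np.zeros(2)
|     frames = []
|     for _ in range(600):
|         want = target + err                           # compensate past error
|         i = np.argmin(np.linalg.norm(palette - want, axis=1))
|         frames.append(palette[i])
|         err += target - palette[i]                    # carry residual forward
|
|     print("time-averaged output:", np.round(np.mean(frames, axis=0), 4))
|     print("target:              ", target)
|
| Error diffusion across neighbouring pixels is the same idea with
| the residual pushed sideways in space instead of forward in time.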
| refulgentis wrote:
| I'm not sure why saturation couldn't be controlled.
|
| I probably missed something in the article, though I do see e.g.
| desaturated yellow in the photographs so I'm not sure this is
| accurate.
|
| If you can't control saturation, I'm not sure dithering would
| help; I don't see how you'd approximate a less saturated color
| from a more saturated color.
|
| HSL is extremely misleading; it's a crude approximation for
| 1970s computing constraints. An analogy I've used previously is to
| think of there being a "pure" pigment, where saturation is at
| peak, mixing in dark/light (changing the lightness) changes the
| purity of the pigment, causing it to lose saturation.
| Retr0id wrote:
| Saturation can't be controlled on a per-pixel basis because,
| per the article, they're tuned to a specific wavelength at
| any given time.
|
| You're right though, there appear to be yellows on display.
| Maybe they're doing temporal dithering.
|
| Edit: Oh wait, yellow doesn't need dithering in any case.
| Yellow can be represented as a single wavelength. Magenta, on
| the other hand, would (and there does seem to be a lack of
| magenta on display).
| refulgentis wrote:
| Honestly, it might just be the limits of photography; there's
| so much contrast between the ~97 L* brightness of pure
| yellow and black that the sensor might not be able to
| capture the "actual" range.
|
| I've been called a color scientist in marketing, but sadly
| never grokked the wavelength view of color. It sounds off
| to me; that's a *huge* limitation not to mention. But then
| again, if they had something a year ago, it's unlikely that
| e.g. Apple folds the microLED division it's been investing in
| for a decade. Either A) it sucks, B) it doesn't scale in
| manufacturing, or C) no one's noticed yet. (A) seems likely,
| given their central claim is that (B) is, at the least, much
| improved.
| cubefox wrote:
| > Saturation can't be controlled on a per-pixel basis
| because, per the article, they're tuned to a specific
| wavelength at any given time.
|
| Where does the article say this? I couldn't find it.
| o11c wrote:
| Dithering is at worst equivalent to subpixels, which we already
| use.
|
| If you take the "no subpixels" claim out of the article, this
| technology still seems useful for higher DPI and easier
| manufacture.
| Retr0id wrote:
| Sure, but PPI/DPI headline figures are usually counted per-
| pixel, not per-subpixel, so the raw density numbers aren't
| directly comparable (and I'm not really sure what a fair
| "adjustment factor" would be)
| kurthr wrote:
| You really can't think about single-wavelength tunable pixels
| as anything except the edge of HSL.
|
| I think about it from the CIE "triangle" where wavelength
| traces the outer edge, or even the Lab (Luminance a-green/red
| b-yellow/blue) color space since it's more uniform in
| perceivable SDR color difference (dE).
|
| https://luminusdevices.zendesk.com/hc/article_attachments/44...
|
| One key realization is that although 1 sub-pixel can't cover
| the gamut of sRGB (or Rec2020), only 2 are needed with
| wavelength and brightness control, rather than 3 for RGB.
| Realistically, this allows something like super-resolution,
| because your blue (and red) visual resolution is much lower
| than your green (eg 10-30 pix/deg rather than ~60 ppd).
| However, your eye's sensitivity away from the XYZ peaks is
| lower, and perceived brightness would fall.
|
| I guess what I'm saying is that a lot of the assumptions baked
| into displays have to be questioned and worked out for these
| kinds of pixels to get their full benefit.
| Retr0id wrote:
| Good point, the HSL edge includes magenta which is of course
| not a wavelength.
| cubefox wrote:
| The "line of purples":
| https://en.wikipedia.org/wiki/Line_of_purples
| juancn wrote:
| You could use several pixels as sub-pixels or if the color
| shift time is fast enough, temporal dithering.
|
| Even if these could produce just three wavelengths, if you can
| pulse them fast enough and accurately, the effect would be that
| color reproduction is accurate (on average over a short time
| period).
| jessriedel wrote:
| > only gives you access to a thin slice of the sRGB gamut (i.e.
| the parts of HSL where saturation is maxed out)
|
| Note that even if we restrict our attention to the max-
| saturation curve, these pixels can't produce shades of
| purple/magenta (unless, as you say, they use temporal dithering
| or some other trick).
| hobscoop wrote:
| Are they able to adjust the color and brightness simultaneously?
| Or would brightness be controlled with PWM?
| kurthr wrote:
| Brightness is PWM controlled, but likely at the microsecond-
| to-millisecond level. The required brightness range is about
| 100k:1.
|
| Black levels would be determined more by reflectivity of the
| display than illumination.
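|
| Back-of-the-envelope for what that implies, assuming (not from
| the article) that the whole range is done by duty cycle inside a
| single 120 Hz frame:
|
|     frame_rate_hz = 120                   # assumed refresh rate
|     dimming_range = 100_000               # ~100k:1, as above
|
|     frame_time_s = 1.0 / frame_rate_hz
|     min_on_time_s = frame_time_s / dimming_range   # shortest pulse needed
|
|     print("frame time    : %.2f ms" % (frame_time_s * 1e3))   # ~8.33 ms
|     print("minimum pulse : %.0f ns" % (min_on_time_s * 1e9))  # ~83 ns
|     print("PWM clock     : ~%.0f MHz" % (1e-6 / min_on_time_s))
|
| So tens-of-nanosecond pulses and a PWM clock on the order of
| 10 MHz, before any per-colour compensation is layered on top.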
| nomdep wrote:
| This sounds awesome for future VR gear, when you need small
| displays with more pixels than is currently possible.
|
| 4K virtual monitors, here we come!
| Joel_Mckay wrote:
| They already have these, but people need to modify the GPU
| designs before it is really relevant. The current AI hype cycle
| has frozen development in this area for now... so a super fast
| 1990's graphics pipeline is what people will iterate on for
| a while.
|
| Nvidia is both a blessing and a curse in many ways for
| standardization... =3
| GrantMoyer wrote:
| A single wavelength can't reproduce all visible colors. These
| pixels are variable wavelength, but can only produce one at a
| time, so you'd still need at least 2 of these pixels to reproduce
| any visible color.
|
| The fundamental problem is that color space is 2D[1] (color +
| brightness is 3D, hence 3 subpixels on traditional displays), but
| monochromatic light has only 1 dimension to vary for color.
|
| [1]: https://en.wikipedia.org/wiki/Chromaticity
| FriedPickles wrote:
| It can produce all the colors of the rainbow. But no magenta.
| Perhaps they can quickly pulse the LED enough between multiple
| wavelengths.
| PaulHoule wrote:
| See also https://en.wikipedia.org/wiki/Spectral_color
|
| This reminds me of the observation I had in high school that
| I could immerse LEDs in liquid nitrogen and run them at
| higher than usual voltage and watch the color change.
|
| I got a PhD in condensed matter physics later on but never
| got a really good understanding of the phenomenon but I think
| it has something to do with
|
| https://www.digikey.com/en/articles/identifying-the-
| causes-o...
|
| Here is a video of people doing it
|
| https://www.youtube.com/watch?v=5PquJdIK_z8
| duskwuff wrote:
| > I got a PhD in condensed matter physics later on but
| never got a really good understanding of the phenomenon but
| I think it has something to do with
|
| The color of most* LEDs is controlled by the band gap of
| the semiconductor they're using. Reducing the temperature
| of the material widens the band gap, so the forward voltage
| of the diode increases and the wavelength of the emitted
| light gets shorter.
|
| https://www.sciencedirect.com/science/article/abs/pii/00318
| 9...
|
| *: With the exception of phosphor-converted LEDs, which are
| uncommon.
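|
| For reference, the band-gap relation above as a quick
| calculation (E = h*c / lambda; the example gap energies are just
| illustrative):
|
|     H = 6.626e-34    # Planck constant, J*s
|     C = 2.998e8      # speed of light, m/s
|     EV = 1.602e-19   # joules per electron-volt
|
|     def gap_to_wavelength_nm(eg_ev):
|         # emission wavelength for a (direct) band gap of eg_ev
|         return H * C / (eg_ev * EV) * 1e9
|
|     for eg in (1.9, 2.3, 2.8):   # red-ish, green-ish, blue-ish gaps
|         print("Eg = %.1f eV -> ~%.0f nm" % (eg, gap_to_wavelength_nm(eg)))
|
| A wider gap means a higher forward voltage and a shorter emitted
| wavelength, which is the direction of the cryogenic shift above.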
| exmadscientist wrote:
| > phosphor-converted LEDs, which are uncommon
|
| No, they're extremely common. Every white LED in the
| market is phosphor-converted: they're blue LEDs, usually
| ~450nm royal blue, with yellow-emitting phosphors on top.
| Different phosphors and concentrations give different
| color temperatures for the final LED, from about 7500K
| through 2000K. (Last I looked, anything below about 2000K
| didn't look right at all, no matter what its manufacturer
| claimed.)
|
| Bigger LEDs are often phosphor-converted as well. Most
| industrial grow lamps use this type of LED. So they're
| around! You're probably looking at some right now!
| jessriedel wrote:
| It also can't produce white or anything else in the interior
| of this diagram (as well as, as you mention, shades of
| magenta and purple that lie on the flat lower edge):
|
| https://upload.wikimedia.org/wikipedia/commons/b/ba/Planckia.
| ..
| mensetmanusman wrote:
| The human eye will see white when a pixel flashes through
| all of the colors quickly in time.
| ajuc wrote:
| But that means it has reduced refresh rate.
| Etherlord87 wrote:
| How quickly? Surely well above 1 kHz (1000 FPS).
| Otherwise you will see flickering.
| jessriedel wrote:
| According to this, humans can't see flicker above 100 Hz
| for most smooth images, but if the image has high
| frequency spatial edges then they can see flicker up to
| 500-1000 Hz. It has to do with saccades.
|
| https://www.nature.com/articles/srep07861
| jessriedel wrote:
| Ha, yea, in particular these monochromatic pixels can't simply
| be white. Notably ctrl-f'ing for "white" gives zero results on
| this page.
|
| Relatedly, the page talks a lot about pixel density, but this
| confused me: if you swap each R, G, or B LED with an adjustable
| LED, you naively get a one-time 3x boost in pixel area density,
| which is a one-time sqrt(3)=1.73x boost in linear resolution.
| So I think density is really a red herring.
|
| But they _also_ mention mass transfer ("positioning of the
| red, green and blue chips to form a full-colour pixel") which
| plausibly is a much bigger effect: If you replace a process
| that needs to delicately interweave 3 distinct parts with one
| that lays down a grid of identical (but individually
| controllable) parts, you potentially get a _much_ bigger
| manufacturing efficiency improvement that could go way beyond
| 3x. I think that's probably the better sales pitch.
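|
| The density arithmetic in the second paragraph, spelled out as a
| tiny check:
|
|     import math
|
|     area_gain = 3                        # one tunable emitter per 3 subpixels
|     linear_gain = math.sqrt(area_gain)   # pixels per unit length
|     print("linear resolution gain: %.2fx" % linear_gain)   # ~1.73x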
| etrautmann wrote:
| It would be interesting to plot all of the achievable colors
| of this LED on the chromaticity diagram. Presumably it'd be
| some sort of circle/ellipse around white but might have some
| dropouts in certain parts of the spectrum?
| formerly_proven wrote:
| Pure wavelengths are on the horseshoe-shaped outline of the
| CIE 1931 space. The straight line connecting the ends of
| the horseshoe is the line of purples, which also isn't
| monochromatic.
|
| https://en.wikipedia.org/wiki/Chromaticity#/media/File:Plan
| c...
| Y_Y wrote:
| Presumably they wouldn't need to do a pixel-to-pixel
| mapping, but could account for the wavelengths of
| neighbouring pixels to produce a more faithful colour
| reproduction at an effectively lower resolution.
| meindnoch wrote:
| It's going to be the spectral locus.
| daniel_reetz wrote:
| Don't forget about bond wires that need to be run to each die
| and/or connected to a backplane.
| ricardobeat wrote:
| Doesn't the fact they have successfully demonstrated
| displays at 2000, 5000 and 10000 DPI alleviate those
| concerns a little bit?
| creshal wrote:
| It's not really meant as a concern, more a supporting
| argument: If every subpixel is identical, you can use
| simpler wiring patterns.
| jmu1234567890 wrote:
| However, you would have more flexibility to do sub-pixel
| tricks to improve resolution?
| thomassmith65 wrote:
| Surely the 'tricks' we have for RGB displays would be more
| effective when every element has the same color range as
| every other. For example, the subpixel rendering of
| typography for RGB displays had an unavoidable rainbow halo
| that would no longer be an issue for most colors of text with
| polychromatic pixels.
| Remnant44 wrote:
| This is definitely a problem; if the control circuitry is up
| for it you could PWM the pixel color, basically dithering in
| time instead of space to achieve white or arbitrary non-
| spectral colors.
| brookst wrote:
| Yep. DLP color wheels come to mind.
| TJSomething wrote:
| One thing I noticed is that they were talking about demoing
| 12,000 ppi displays, which is way more resolution than you're
| going to resolve with your eye. So using 2 pixels is still
| probably a win.
| mensetmanusman wrote:
| Those are the densities needed for near eye displays. The
| best displays can still show pixelization to the human eye up
| close.
| golergka wrote:
| > color space is 2D
|
| Human eyes have three different color receptors, each tuned to
| its own frequency, so it's already 3D. However, apart from
| human perception, color, just like sound, can contain any
| combination of frequencies (when you split the signal with a
| Fourier transform), and many animals do have more receptors
| than us.
| paipa wrote:
| You only need to mix two different wavelengths to render any
| human perceptible color. They give you four parameters to
| work with (wavelength1, brightness1, wavelength2,
| brightness2) which makes it an underdetermined system with an
| infinite number of solutions for all but the pure, spectral
| boundary of the gamut.
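|
| A small geometric illustration of that freedom (the
| colour-matching samples are coarse approximate CIE 1931 values,
| sketch only): a two-wavelength mix lies on the straight chord
| between the two spectral points in xy, so wherever two chords
| cross, the same chromaticity has two entirely different
| wavelength recipes.
|
|     import numpy as np
|
|     CMF = {480: (0.0956, 0.1390, 0.8130),
|            490: (0.0320, 0.2080, 0.4652),
|            580: (0.9163, 0.8700, 0.0017),
|            600: (1.0622, 0.6310, 0.0008)}
|
|     def xy(wl):
|         X, Y, Z = CMF[wl]
|         return np.array([X, Y]) / (X + Y + Z)
|
|     def chord_intersection(a1, a2, b1, b2):
|         # intersect segments a1-a2 and b1-b2; t and s are the
|         # positions along each chord (both in [0, 1] if they cross)
|         A = np.column_stack([a2 - a1, -(b2 - b1)])
|         t, s = np.linalg.solve(A, b1 - a1)
|         return a1 + t * (a2 - a1), t, s
|
|     p, t, s = chord_intersection(xy(480), xy(580), xy(490), xy(600))
|     print("shared chromaticity:", np.round(p, 3))  # ~(0.33, 0.33), near white
|     print("position on 480/580 chord:", round(t, 2))
|     print("position on 490/600 chord:", round(s, 2))
|
| (The chord positions aren't intensity ratios directly; converting
| adds a luminance weighting. The point is just the multiplicity of
| solutions.)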
| GrantMoyer wrote:
| Humans perceive all stimulation in the same ratio of the L,
| M, and S cones to be the same color, but with different
| brightnesses. So only two dimensions are necessary to
| represent human-visible colors, hence HSV or L*a*b* space.
| a-priori wrote:
| According to the opponent-process model of colour perception
| you need three axes to represent all colours: luminosity
| [L+M+S+rods], red-green [L-M] and blue-yellow [S - (L+M)].
| dahart wrote:
| There is a fair point there, but a few things - HSV and Lab
| are only models, they don't necessarily capture all visible
| colors (esp. when it comes to tetrachromats). Brightness is
| a dimension, and can affect the perception of a color, esp.
| as you get very bright - HSV and Lab are 3D spaces. Arguing
| that brightness should be ignored or factored out is
| problematic and only a small step from arguing that
| saturation should be factored out too and that color is
| mostly one dimensional.
| BurningFrog wrote:
| In this sense our hearing is _much_ better than our color
| vision.
|
| We can distinguish combinations of a huge number of
| frequencies between 20 and 20,000 Hz.
|
| But we can only distinguish 3 independent colors of light.
|
| Of course our vision is vastly better than hearing for
| determining _where_ the sound/light comes from.
| 6gvONxR4sf7o wrote:
| Total tangent, but is that because of the wavelengths
| involved? I imagine a "sound camera" would have to be huge
| to avoid diffraction (but that's just intuition), requiring
| impractically large ears. Likewise I imagine that
| perceiving "chords" of light requires sensing on really
| tiny scales, requiring impractically small complex
| structure in the eyes?
|
| Anybody know the answer?
| cubefox wrote:
| > These pixels are variable wavelength, but can only produce
| one at a time
|
| Citation needed. The article doesn't say anything about how the
| colors are generated, and whether they can only produce one
| wavelength at a time.
|
| Assuming they are indeed restricted to spectral colors,
| dithering could be used to increase the number of colors
| further. However, dithering needs at least 8 colors to cover
| the entire color space: red, green, blue, cyan, magenta,
| yellow, white, black. And two of those can't be produced using
| monochromatic light -- magenta and white. This would be a major
| problem.
| jiggawatts wrote:
| Dithering just black, red, green, and blue is sufficient to
| produce a full-colour image. Everything else is a combination
| of those. That's effectively how normal LCD or OLED monitors
| work!
| cubefox wrote:
| No, normal monitors use additive color mixing, but
| dithering isn't additive, it's averaging. With just red,
| green, blue, black you couldn't dither cyan, magenta,
| yellow, white, just some much darker versions of them. E.g.
| you get grey instead of white.
|
| You can check this by trying to dither a full color image
| in a program like Photoshop. It doesn't work unless you use
| at least the 8 colors.
|
| In fact, ink jet printers do something similar: They use
| subtractive color mixing to create red, green and blue dots
| (in addition to cyan, magenta, yellow and black ink and
| white paper), then all the remaining shades are dithered
| from those eight colors. It looks something like this:
| https://as2.ftcdn.net/v2/jpg/01/88/80/47/1000_F_188804787_u1...
| (though there black is also created with subtractive color
| mixing).
|
| The color mixing type used by dithering is sometimes called
| "color blending". Apart from dithering it's also used when
| simulating partial transparency (alpha).
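|
| A toy numerical version of that averaging point (linear-light
| RGB triples, nothing display-specific):
|
|     import numpy as np
|
|     red, green = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
|
|     dithered = 0.5 * red + 0.5 * green  # half the pixels red, half green
|     additive = red + green              # red and green emitters lit together
|
|     print("dithered checkerboard:", dithered)  # [0.5 0.5 0. ] darker yellow
|     print("additive subpixels:   ", additive)  # [1.  1.  0. ] full yellow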
| jiggawatts wrote:
| The article is talking about microLEDs, which are an
| emissive light source.
| cubefox wrote:
| You can dither not just in print but also on illuminated
| screens. For example:
|
| http://caca.zoy.org/study/out/lena6-1-2.png
|
| This picture has only pixels of the aforementioned eight
| colors.
| incrudible wrote:
| Emissive means additive, not averaging. Cyan, magenta and
| yellow are not primaries here. Red and green light adds
| up to perceptual yellow. Red, green and blue adds up to
| perceptual white (or grey, at very low luminance).
| Treating each of these pixels like subpixels (which is
| arguably a form of dithering) will produce a full color
| image (at a lower resolution), but given that they did
| not demonstrate it, color reproduction and/or luminance
| likely is far from competitive at this point.
| cubefox wrote:
| That's not true. Dithering can be used in emissive
| screens, but dithering is not additive. If you mix red
| and green with color blending (e.g. by dithering), you
| get less red and less green in your mix, and therefore
| the resulting mix (a sort of ochre) is different from
| additive color mixing (yellow), where the amount of red
| and green stays the same. Or when you mix black and
| white, you get white with additive color mixing, but grey
| with blending. You also get grey when blending
| (dithering) red, green and blue. You can test this in
| software like Gimp, you won't be able to dither a full
| color image without at least the eight colors I
| mentioned.
| incrudible wrote:
| I am not saying you can use the exact same math as in an
| image manipulation program, these work with different
| assumptions. Mixing colors in those is usually not
| correct anyway.
|
| https://www.youtube.com/watch?v=LKnqECcg6Gw
|
| I am saying you can think of subpixels, which already
| exist, as a form of dithering. Most displays use just
| three primaries for subpixels - red, green and blue.
| Their arrangement is fixed, but that is not a limitation
| of this new technology.
| pfg_ wrote:
| This seems like a non-problem, cut the display resolution in
| half on one axis and reserve two 'subpixels' for each pixel.
| Then you have a full color display with only one physical pixel
| type and that needs one less subpixel. These displays could
| even produce some saturated colors with specific wavelengths
| that can't be represented on regular rgb displays.
| ajuc wrote:
| You'd still be unable to produce different brightness pixels.
| You'd get white but no grayscale.
|
| I guess you could cheat it by moving the wavelength outside
| the visible spectrum?
| enragedcacti wrote:
| Assuming they can PWM the brightness while getting consistent
| color (seems reasonable since microLEDs have extremely fast
| response time) then I think what you're saying would work
| great. It would be akin to 4:2:2 chroma subsampling where
| luminance (which we have higher acuity for) gets more
| fidelity and the resulting image quality is closer to full-
| res than half-res.
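|
| A rough sketch of that trade, with a toy image and the per-pixel
| mean as a crude stand-in for luminance: chromaticity is shared
| across horizontal pixel pairs while per-pixel brightness stays
| exact, which is the 4:2:2 idea in miniature.
|
|     import numpy as np
|
|     rng = np.random.default_rng(0)
|     img = rng.random((4, 8, 3))                    # toy linear-RGB image
|
|     luma = img.mean(axis=2)                        # crude per-pixel brightness
|     chroma = img / img.sum(axis=2, keepdims=True)  # per-pixel "chromaticity"
|
|     pair_avg = chroma.reshape(4, 4, 2, 3).mean(axis=2)  # one chroma per pair
|     chroma_sub = np.repeat(pair_avg, 2, axis=1)         # back to full width
|     recon = chroma_sub * (luma / chroma_sub.mean(axis=2))[..., None]
|
|     print("luminance error:", np.abs(recon.mean(axis=2) - luma).max())  # ~0
|     print("colour error   :", np.round(np.abs(recon - img).mean(), 3))  # small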
| Bumblonono wrote:
| There are plenty of monochromatic cases. Right now hardware
| has a lot of orange.
|
| Dynamic resolution / subpixel rendering. Retina looks really
| good already, not sure if the effect would be relevant or
| interesting but it might open up something new.
| creshal wrote:
| What Apple sells as "retina" still doesn't match common print
| densities, there's definitely room for improvement.
| meatmanek wrote:
| I'm assuming that in most cases they'll just make these act as
| RGB displays, either by sequentially tuning the wavelength of
| each pixel to red, green, blue in a loop, or by assigning each
| pixel to be red, green, or blue and just having them act as
| subpixels.
| modeless wrote:
| I understand that one of the big issues with microLED is huge
| brightness variation between pixels. Due to some kind of
| uncontrollable (so far) variations in the manufacturing process,
| some pixels output 1/10 the light (or less) of others. Ultimately
| the brightness of the whole display is constrained by the least
| bright pixels because the rest have to be dimmed to match.
| Judging by their pictures they have not solved this problem.
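|
| A toy version of that constraint (the per-emitter numbers are
| made up): flat-field correction scales every pixel down to what
| the dimmest emitter can do, so the panel's usable peak is set by
| the worst pixel.
|
|     import numpy as np
|
|     rng = np.random.default_rng(1)
|     peak = rng.uniform(0.1, 1.0, size=(4, 4))  # hypothetical per-emitter peaks
|
|     gain = peak.min() / peak                   # per-pixel scale for uniformity
|     corrected = peak * gain                    # now flat across the panel...
|
|     print("uniform peak after correction:", round(corrected.max(), 3))
|     print("fraction of brightest emitter used:",
|           round(float(peak.min() / peak.max()), 3))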
| mensetmanusman wrote:
| It is solvable with enough capital investment though; the
| question is how much it will cost to solve.
| modeless wrote:
| Is it? I feel like there has already been a lot of capital
| investment by the various organizations working on microLED.
| cubefox wrote:
| > I understand that one of the big issues with microLED is huge
| brightness variation between pixels. Due to some kind of
| uncontrollable (so far) variations in the manufacturing
| process, some pixels output 1/10 the light (or less) as others.
|
| I instead understand that this is false. Available MicroLED
| screens (TVs) are in fact brighter than normal screens.
|
| The issue with MicroLED is instead that they are extremely
| expensive to produce, as the article points out, due to the
| required mass transfer. Polychromatic LEDs would simplify this
| process greatly.
| wtallis wrote:
| > Available MicroLED screens (TVs) are in fact brighter than
| normal screens.
|
| Does that in any way contradict the claim that there are
| large variations in brightness between microLED pixels on the
| same screen?
| modeless wrote:
| I should have specified that I was talking about microLED
| _microdisplays_ , as shown in the article. Sounds redundant
| but there are also large format microLED displays which are
| manufactured by individually cutting LEDs from a chip and
| placing them on a different substrate with bigger spacing.
| This process allows replacing the ones with poor brightness
| during assembly. For microdisplays, on the other hand, the
| LEDs are fabricated in place and not individually moved
| afterwards. The chip is the display.
| georgeburdell wrote:
| The promotional document focuses on wavelength tunability but I
| imagine brightness at any one wavelength suffers because to emit
| at one wavelength requires an electron to lose the amount of
| energy in that photon by transitioning from a high to low energy
| state. Maximum brightness then corresponds to how many of these
| transitions are possible in a given amount of time.
|
| Some states are not accessible at a given time (voltage can tune
| which states are available) but my understanding is the number of
| states is fixed without rearranging the atoms in the material.
| FriedPickles wrote:
| I didn't realize we even had a discrete LED tunable across the
| visible spectrum, let alone a Micro-LED array of them. Anybody
| know where I can buy one? I want to build a hyperspectral imager.
| jessriedel wrote:
| Do you mean hyperspectral imager (i.e., camera), or a
| hyperspectral _display_?
| FriedPickles wrote:
| An imager/camera: by illuminating a scene (or light box)
| solely with the tunable LED, sweeping it across the spectrum,
| and capturing it with an achromatic camera.
| jessriedel wrote:
| Ahh, that makes sense. Thanks!
|
| Btw, is that still reasonably effective if the scene has
| ambient illumination, but (in addition to shining each
| wavelength at it) you take a monochrome photo in only the
| ambient light and you subtract that out from all your other
| images?
| FriedPickles wrote:
| Sure that would work. The higher the ratio of
| controlled/ambient light, and the slower you can do the
| sweep, the better for SNR of the hyperspectral image.
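|
| A sketch of that capture loop with the hardware simulated (the
| scene, ambient level and sweep range below are all made up; in a
| real rig the capture calls would come from the camera SDK and
| the LED driver):
|
|     import numpy as np
|
|     rng = np.random.default_rng(0)
|     BANDS_NM = np.arange(420, 681, 10)   # assumed sweep range/step
|     H, W = 32, 32
|
|     scene = rng.random((len(BANDS_NM), H, W))   # unknown ground truth
|     ambient = 0.2 * rng.random((H, W))          # fixed ambient light
|
|     def capture_frame(band=None):
|         # simulated monochrome exposure: ambient light plus, optionally,
|         # the scene lit by the tunable LED at one wavelength band
|         lit = scene[band] if band is not None else 0.0
|         return ambient + lit
|
|     reference = capture_frame()                 # ambient-only frame
|     cube = np.stack([capture_frame(i) - reference
|                      for i in range(len(BANDS_NM))])
|
|     # noise-free toy, so the subtraction recovers the scene exactly
|     print("max recovery error:", np.abs(cube - scene).max())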
| lsaferite wrote:
| > achromatic camera
|
| Is that the same as a panchromatic camera?
|
| Edit:
|
| Asking because I have a 410x410px hyperspectral imager that
| has an aligned 1886x1886px panchromatic imager that is used
| to perform pan-sharpening of the HSI data, bringing it up to
| 1886x1886. I'd never heard of a panchromatic camera before
| I got involved in this business and I've never heard of an
| achromatic camera either. All I seem to find is achromatic
| lenses.
| FriedPickles wrote:
| Yes, "panchromatic" is probably the more accurate term
| for it. It's just a camera with no color filters and a
| known spectral response curve that's high enough across
| the frequencies being imaged.
| lsaferite wrote:
| Ah, yeah, I'd say that fits 'panchromatic camera' then.
| The panchromatic imager on my setup uses the exact same
| CCD and covers the exact same spectral range
| (350nm-1000nm), but it doesn't have the HSI
| lenses/filters. The company actually sells a smaller unit
| that is made from the same imager, but with the HS
| lens/filters.
| mxfh wrote:
| Would be fun if displays came full circle with variable
| addressable geometry / glowing goo too.
|
| Not quite a vector display, but something organic that can be
| addressed with stimulators like reaction-diffusion, Gaussian,
| FFT, Laplacian, or Gabor filters, Turing patterns, etc. Get
| fancy patterns with the lowest amount of data.
|
| https://www.sciencedirect.com/science/article/pii/S092547739...
| https://onlinelibrary.wiley.com/doi/10.1111/j.1755-148X.2010...
| KennyBlanken wrote:
| This appears to be done by varying current, from a slide in this
| 'webinar': https://youtu.be/MI5EJk8cPwQ?t=238
|
| That's not hugely surprising given that (I believe) LEDs have
| always shifted spectrum-wise a bit with drive current (well,
| mostly junction temperature, which can be a function of drive
| current.)
|
| I guess that means they're strictly on/off devices, which seems
| supported by this video from someone stopping by their booth:
|
| https://youtu.be/f0c10q2S_PQ?t=107
|
| You can clearly see some pretty shit dithering, so I guess they
| haven't figured out how to do PWM based brightness (or worse, PWM
| isn't possible at all?)
|
| I guess that explains the odd fixation on pixel density that is
| easily 10x what your average high-dpi cell phone display has (if
| you consider each color to be its own pixel, i.e. ~250 dpi x 3).
|
| It seems like the challenge will be finding applications for
| something with no brightness control etc. Without that, it's
| useless even for a HUD display type widget.
|
| In the meantime, if they made 5050-sized LEDs, they would
| probably print money... which would certainly be a good way to
| fund further development of brightness control.
| exmadscientist wrote:
| > if they made 5050-sized LEDs
|
| I doubt they can. Probably the process only works (or yields)
| small pieces, otherwise they'd be doing exactly what you
| suggest.
|
| I also notice that their blues look _terrible_ in the provided
| images. Which will be a problem. I don't think they get much
| past 490nm or so? That would also explain why they don't talk
| at all about phosphors, which seem like a natural complement to
| this tech... I don't think they can actually pump them. Which
| is disappointing :(
| knotimpressed wrote:
| I think a lot of these comments are missing the point - even if you
| have to reduce their reported density numbers by half, they made
| a display with dimensions of "around 1.1 cm by 0.55 cm, and
| around 3K by 1.5K pixels", which is _insane_! All without having
| to dice and mass-transfer wafer pieces, since every pixel is the
| same.
|
| A lot of the article is focused on how this matters for the
| production side of things, since combining even 10 um wafer
| pieces from 3 different wafers is exceedingly time consuming,
| which I think is the more important part. Sure, the fact that
| each emitter can be tuned to "any colour" might be misleading,
| but even if you use rapid dithering like plasma displays did, and
| pin each emitter to one wavelength, you suddenly have a valid
| path to manufacturing insanely high density microLED displays!
| Hopefully this becomes viable soon, so I can buy a nice vivid and
| high contrast display without worrying about burn in.
| speakspokespok wrote:
| Did anybody notice just how fast their website loads? I didn't
| even look at the content yet and I'm already impressed.
| bluehat974 wrote:
| Porotech proposes the same concept:
|
| https://www.porotech.com/technology/dpt/
|
| Demo video
|
| https://youtu.be/758Xzi_nK8w
| hosh wrote:
| I wonder if these would improve VR/AR headset displays.
___________________________________________________________________
(page generated 2024-07-19 23:11 UTC)