[HN Gopher] Debunking HDR [video]
___________________________________________________________________
Debunking HDR [video]
Author : plastic3169
Score : 39 points
Date : 2025-06-11 15:56 UTC (3 days ago)
(HTM) web link (yedlin.net)
(TXT) w3m dump (yedlin.net)
| _wire_ wrote:
| Solid overview of applied color theory for video, so worth
| watching.
|
 | As to what was to be debunked, the presentation not only fails to
 | set out a thesis in the introduction, it doesn't even pose a
 | question, so you've got to watch for hours to get to the point:
 | SDR and HDR are two measurement systems which, when correctly
 | used, must in most cases (legacy and conventional content)
 | produce the same visual result. The increased fidelity of HDR
 | makes it possible to
| expand the sensory response and achieve some very realistic new
| looks that were impossible with SDR, but the significance and
| value of any look is still up to the creativity of the
| photographer.
|
 | This point could be conveyed more easily if the author explained
 | that human visual adaptation exposes a moment-by-moment contrast
 | window of about 100:1, which constantly adjusts over time based
 | on average luminance to create a much larger window of
 | perception of billions:1(+) that allows us to operate under the
 | luminance conditions on earth. But until recently, we didn't
 | expect electronic display media to be used in every condition on
 | earth, and even if it can work, you don't pick everywhere as
 | your reference environment for system alignment.
|
 | (+)Regarding the difference between numbers such as 100 or
 | billions, don't let your common sense about big or small values
 | faze your thinking about differences: perception is logarithmic;
 | it's the degree of ratios that matters more than the absolute
 | magnitude of
| the numbers. As a famous acoustics engineer (Paul Klipsch) said
| about where to focus design optimization of response traits of
| reproduction systems: "If you can't double it or halve it, don't
| worry about it."
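Since perception is logarithmic, the natural unit for these ratios is the doubling (a photographic "stop"). A minimal Python sketch of the two windows quoted above (the billions:1 figure is taken as 10^9 for illustration):

```python
import math

def stops(contrast_ratio: float) -> float:
    """Number of doublings ("stops") spanned by a contrast ratio."""
    return math.log2(contrast_ratio)

# The instantaneous ~100:1 window spans only ~6.6 stops, while a
# billions:1 adaptive window spans ~30 stops - on a log scale the
# two are far closer than the raw numbers suggest.
print(round(stops(100), 1))            # 6.6
print(round(stops(1_000_000_000), 1))  # 29.9
```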
| strogonoff wrote:
   | Regardless of whether it is HDR or SDR, when processing raw
   | data for display spaces one must throw out 90%+ of the
   | information _captured by the sensor_ (which is often a small
| amount of what was available at the scene already). There can
| simply be no objectivity, it is always about what you saw and
| what you want others to see, an inherently creative task.
| ttoinou wrote:
| 90% really ? What color information get ejected exactly ? For
| the sensor part are you talking about the fact that the
| photosites don't cover all the surface ? Or that we only
| capture a short band of wavelength ? Or that the lens only
| focuses rays unto specific exact points and make the rest
| blurry and we loose 3D ?
| adgjlsfhk1 wrote:
| Presumably they're referring to the fact that most cameras
| capture ~12-14 bits of brightness vs the 8 that (non-hdr)
| displays show.
| ttoinou wrote:
| Oh that's normal then. There are mandatory steps of
| dynamic range reduction in the video editing / color
| grading pipeline (like a compressor in audio production).
| So the whole information is not lost but the precision /
| details can be yes. But that's a weird definition, there
| are so many photons in daylight capture that you could
| easily say we really need minimum 21 bits per channel
| minimum (light intensity of sun / light intensity of
| moon)
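That figure can be sanity-checked with rough, commonly cited illuminance numbers (the exact values below are assumptions; only the order of magnitude matters):

```python
import math

# Rough, commonly cited illuminance figures (order of magnitude only)
SUNLIGHT_LUX = 100_000  # direct sunlight
MOONLIGHT_LUX = 0.1     # full-moon night scene

ratio = SUNLIGHT_LUX / MOONLIGHT_LUX  # roughly 1,000,000:1
bits = math.ceil(math.log2(ratio))    # bits to span it linearly
print(bits)  # 20
```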
| bdavbdav wrote:
| But that's not seen at the sensor - at least not at once
| - look at the sun and then look immediately at the dark
| sky moon (if it were possible) - the only reason you get
| the detail on the moon is the aperture in front. You
| couldn't see the same detail if they were next to each
| other. The precision is the most dark in the scene next
| to the most bright, as opposed to the most dark possible
| next to the most bright. That's the difference.
| ttoinou wrote:
| Hum I can look at a moon croissant and the sun at the
| same time
| sansseriff wrote:
| Cameras capture linear brightness data, proportional to the
| number of photons that hit each pixel. Human eyes (film
| cameras too) basically process the logarithm of brightness
| data. So one of the first things a digital camera can do to
| throw out a bunch of unneeded data is to take the log of
| the linear values it records, and save that to disk. You
| lose a bunch of fine gradations of lightness in the
| brightest parts of the image. But humans can't tell.
|
       | Gamma encoding, which has been around since the earliest
       | CRTs, was a very basic solution to this fact. Nowadays it
       | would be silly for any high-dynamic-range image recording
       | format not to encode data in a log format, because log is
       | so much more representative of human vision.
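The linear-to-log step described above can be sketched in a few lines of Python (a toy curve chosen for illustration, not any real camera's transfer function): a 14-bit linear value is squeezed into a 10-bit log-spaced code, and neighbouring values collapse only in the highlights.

```python
import math

def encode_log(linear: int, bits_out: int = 10, bits_in: int = 14) -> int:
    """Map a linear sensor value to a log-spaced code (toy curve)."""
    max_in, max_out = 2**bits_in - 1, 2**bits_out - 1
    # log2(1 + x) so that a linear 0 still maps to code 0
    return round(max_out * math.log2(1 + linear) / math.log2(1 + max_in))

# Bright neighbours collapse to one code (fine highlight gradations
# are discarded), while dark values keep distinct codes.
print(encode_log(16000) == encode_log(16001))  # True
print(encode_log(10) == encode_log(11))        # False
```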
| ttoinou wrote:
| Ok so similar to the other commentator then, thanks.
| According to that metric its much more than 90% we're
| throwing out then (:
| empiricus wrote:
   | At a minimum we should start from something captured close to
   | reality, and then get creative from that point. SDR is like
| black-and-white movies (not quite, but close). We can get
| creative with it, but can we just see the original natural look?
| HDR (and the wider color space associated) has a fighting chance
| to look real, but looking real seems far away from what movie
| makers are doing.
| naikrovek wrote:
| The switch from 24-bit color to 30-bit color is very similar to
| the move from 15-bit color on old computers to 16-bit color.
|
| You didn't need new displays to make use of it. It wasn't
| suddenly brighter or darker.
|
 | The change from 15- to 16-bit color was at least _visible_,
 | because the dynamic range of 16-bit color is much lower than
 | that of 30-bit color, so you could see the color banding
 | improve; but it wasn't some new world of color, the way HDR is
 | sold.
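The banding difference is easy to demonstrate numerically: quantize the same subtle gradient at 8 and 10 bits per channel and count the distinct output steps (a toy sketch; the 1% gradient width is an arbitrary choice):

```python
def quantize(x: float, bits: int) -> int:
    """Quantize x in [0, 1] to an integer code at the given bit depth."""
    return round(x * (2**bits - 1))

# A subtle gradient spanning just 1% of the brightness range
samples = [i / 1000 * 0.01 for i in range(1001)]
steps_8 = len({quantize(s, 8) for s in samples})
steps_10 = len({quantize(s, 10) for s in samples})
print(steps_8, steps_10)  # 4 11 - the 10-bit ramp has ~4x finer steps
```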
|
| Manufacturers want to keep the sales boom that large cheap TVs
| brought when we moved away from CRTs. That was probably a "golden
| age" for screen makers.
|
| So they went from failing to sell 3D screens to semi-successfully
| getting everyone to replace their SDR screen with an HDR screen,
| even though almost no one can see the difference in those color
| depths when shown with everything else being equal.
|
| What really cheeses me on things like this is that TV and monitor
| manufacturers seem to gate the "blacker blacks" and "whiter
| whites" behind HDR modes and disable those features for SDR
| content. That is indefensible.
| ttoinou wrote:
   | As long as the right content was displayed, I instantly saw
   | the upgrade on HDR screens (the first one I saw was a
   | smartphone less than 10 years ago, I believe); I knew
   | something was new.
|
   | In the same way, I could instantly tell when I saw a screen
   | showing footage at more than 40 fps. And on YouTube I
   | constantly see footage wrongly converted from 24 fps to 25
   | fps, where one frame every second jumps / is duplicated.
| jakkos wrote:
| > Manufacturers want to keep the sales boom that large cheap
| TVs brought when we moved away from CRTs. That was probably a
| "golden age" for screen makers.
|
| IMO the difference between LCD and OLED is massive and "worth
| buying a new tv" over.
|
   | I've never tried doing an 8-bit vs 10-bit-per-color "blind"
   | test, but I _think_ I'd be able to see it?
|
| > What really cheeses me on things like this is that TV and
| monitor manufacturers seem to gate the "blacker blacks" and
| "whiter whites" behind HDR modes and disable those features for
| SDR content. That is indefensible.
|
   | This 100%. The hackery I have to regularly perform just to get
   | my "HDR" TV to show an 8-bit-per-color "SDR" signal with its
   | full range of brightness is maddening.
| nayuki wrote:
| Unrelated to the video content, the technical delivery of the
| video is stunningly good. There is no buffering time, and
| clicking at random points in time on the seek bar gives me a
| result in about 100 ms. The minimal UI is extremely fast - and
| because seek happens onmousedown, oftentimes the video is already
| ready by the time I do onmouseup on the physical button. This is
| important to me because I like to skip around videos to skim the
| content to look for anything interesting.
|
| Meanwhile, YouTube is incredibly sluggish on my computer, with
| visible incremental rendering of the page UI, and seeking in a
| video easily takes 500~1000 ms. It's an embarrassment that the
| leading video platform, belonging to a multi-billion-dollar
| company, has a worse user experience than a simple video file
| with only the web browser's built-in UI controls.
| phonon wrote:
| More I-frames.
| CharlesW wrote:
| Not necessarily more, but importantly the cadence is fixed at
| one GOP per second -- a good (and not-unusual) choice for
| progressive download delivery.
| echoangle wrote:
   | It's not that surprising that a massive site would have to
   | compromise quality for scalability (to decrease server load
   | and storage) compared to a smaller site with fewer visitors.
| CharlesW wrote:
| > _Unrelated to the video content, the technical delivery of
| the video is stunningly good._
|
| To save readers a "View Source", this is the typical
| progressive file download user experience with CDNs that
| support byte-range requests.
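To make the mechanism concrete, here is a toy parser for the `Range` header that drives those byte-range requests (RFC 7233 semantics, inclusive end; a sketch that handles only a single range, not a full implementation):

```python
def parse_range(header: str, total: int) -> tuple[int, int]:
    """Resolve a single RFC 7233 Range header against a resource size."""
    unit, _, spec = header.partition("=")
    if unit != "bytes":
        raise ValueError("only byte ranges are supported")
    start_s, _, end_s = spec.partition("-")
    if start_s:   # "bytes=500-999", or open-ended "bytes=500-"
        start = int(start_s)
        end = int(end_s) if end_s else total - 1
    else:         # suffix form "bytes=-500": the last 500 bytes
        start, end = total - int(end_s), total - 1
    return start, min(end, total - 1)

# A seek into the middle of a 1 MiB file becomes one small request:
print(parse_range("bytes=524288-589823", 2**20))  # (524288, 589823)
```

This is why clicking the seek bar can respond in ~100 ms: the player fetches only the few hundred kilobytes around the seek point instead of everything before it.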
| ttoinou wrote:
 | His previous stuff is so interesting, and it's very refreshing
 | to see a Hollywood professional able to dig so deep into these
 | topics and teach us about them:
 | https://yedlin.net/NerdyFilmTechStuff/index.html
|
 | I think the point that SDR input (to a monitor) can be _similar_
 | to HDR input on monitors that have high dynamic range is obvious
 | if you look at the maths involved. Higher dynamic range gives
 | you more precision in the information, and you can choose what
 | to do with it: higher maximum luminosity, better blacks with
 | less noise, more detail in the midtones, etc.
|
| Of course we should also see "HDR" as a social movement, a new
| way to communicate between engineers, manufacturers and
| consumers, it's not "only" a math conversion formula.
|
 | I believe we could focus first on comparing SDR and HDR
 | black-and-white images, to see how a higher dynamic range in
 | luminosity alone is in itself very interesting to experience.
|
 | But at the beginning he is saying the images look similar on
 | both monitors. Surely we could find counterexamples, and that
 | claim only applies to his cinema stills? If he can show this is
 | true for all images, then indeed he can show that "SDR input to
 | an HDR monitor" is good enough for all human vision. I'm not
 | sure this is true: as I do psychedelic animation, I like to use
 | the whole gamut of colors at my hand, and I don't care about
 | representing scenes from the real world. I just want maximum
 | color p0rn to feed my acid brain: 30 bits per pixel surely
 | improves that, as well as wider color gamuts / new LED
 | wavelengths not used before.
| sansseriff wrote:
 | An excellent video. I've admired Yedlin's past work debunking
 | the need for film cameras over digital when you're going after
 | a 'film look'.
|
| I wish he shared his code though. Part of the problem is he can't
| operate like a normal scientist when all the best color grading
| tools are proprietary.
|
| I think it would be really cool to make an open source color
| grading software that simulates the best film looks. But there
| isn't enough information on Yedlin's website to exactly reproduce
| all the research he's done with open source tools.
| dcrazy wrote:
| I'm 14 minutes into this 2 hour 15 minute presentation that
| hinges on precision in terminology, and Yedlin is already making
| oversimplifications that hamper delivery of his point. First of
| all, he conflates the actual RGB triplets with the colorspace
| coordinates they represent. He chooses a floating point
| representation where each value of the triplet corresponds to a
| coordinate on the normalized axes of the colorspace, but there
| are other equally valid encodings of the same coordinates.
| Integers are very common.
|
| Secondly, Rec. 2100 defines more than just a colorspace. A
 | coordinate triple in the Rec. 2100 colorspace does not by
 | itself dictate both luminance and chromaticity. You also need
 | to specify a
| _transfer function_, of which Rec. 2100 defines two: PQ and HLG.
| They have different nominal maximum luminance: 10,000 nits for PQ
| and 1,000 nits for HLG. Without specifying a transfer function, a
| coordinate triple merely identifies chromaticity. This is true of
| _all_ color spaces.
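For reference, the PQ curve mentioned above (SMPTE ST 2084, one of Rec. 2100's two transfer functions) is compact enough to write down; a sketch using the constants from the spec:

```python
# SMPTE ST 2084 (PQ) constants, as published in the spec
M1 = 1305 / 8192   # 0.1593017578125
M2 = 2523 / 32     # 78.84375
C1 = 107 / 128     # 0.8359375
C2 = 2413 / 128    # 18.8515625
C3 = 2392 / 128    # 18.6875
PEAK = 10_000      # nominal peak luminance, nits (cd/m^2)

def pq_encode(nits: float) -> float:
    """Absolute luminance in nits -> non-linear PQ signal in [0, 1]."""
    y = (nits / PEAK) ** M1
    return ((C1 + C2 * y) / (1 + C3 * y)) ** M2

def pq_decode(signal: float) -> float:
    """Non-linear PQ signal in [0, 1] -> absolute luminance in nits."""
    e = signal ** (1 / M2)
    return PEAK * (max(e - C1, 0.0) / (C2 - C3 * e)) ** (1 / M1)

print(pq_encode(10_000))         # 1.0: top of the PQ range
print(round(pq_encode(100), 2))  # ~0.51: a diffuse-white-ish 100 nits
                                 # already uses half the code range
```

Unlike a relative gamma curve, a PQ code value pins down an absolute luminance, which is why the transfer function must be known before a triple means anything more than chromaticity.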
|
| On the other hand his feet/meters analogy is excellent and I'm
| going to steal it next time I need to explain colorspace
| conversion to someone.
| layer8 wrote:
| Continue watching, his overall points are quite valid.
|
| The presentation could surely be condensed, but also depends on
| prior knowledge and familiarity with the concepts.
___________________________________________________________________
(page generated 2025-06-14 23:00 UTC)