[HN Gopher] Sharper mipmapping using shader based supersampling ...
       ___________________________________________________________________
        
       Sharper mipmapping using shader based supersampling (2019)
        
       Author : tosh
       Score  : 56 points
       Date   : 2022-01-03 20:49 UTC (1 day ago)
        
 (HTM) web link (bgolus.medium.com)
 (TXT) w3m dump (bgolus.medium.com)
        
       | hypertele-Xii wrote:
       | > we ran into a curious problem. We had a sign in the game with a
       | character's name written across it. [...] The problem was the
       | name was almost completely illegible when playing the game.
       | 
       | Engineer solution: Spend hours hacking in a small amount of mip
       | LOD bias and forcing anisotropic filtering on for that texture.
       | 
       | Designer solution: Lose the sign. Problem avoided, hours saved.
        
         | midnightclubbed wrote:
         | Often a game engine will have LOD biasing and filtering
         | settings exposed to the artist at the model and/or material
         | level. It's only engineering time if those settings are not
         | implemented.
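         | 
         | Roughly, the bias just shifts the mip level the hardware would
         | otherwise pick. A toy sketch of the idea (plain Python, not any
         | particular engine's API):
         | 
         |     import math
         | 
         |     def select_mip_level(texel_footprint, lod_bias=0.0, num_mips=10):
         |         # Approximate GPU mip selection: log2 of the screen-space
         |         # texel footprint, shifted by the texture's LOD bias.
         |         # A negative bias picks a larger, sharper mip.
         |         lod = math.log2(max(texel_footprint, 1e-6)) + lod_bias
         |         return min(max(lod, 0.0), num_mips - 1)
         | 
         |     # A texel covering ~2.5 screen pixels lands between mips 1 and 2;
         |     # a -0.5 bias pulls it back toward the sharper mip.
         |     print(select_mip_level(2.5))                  # ~1.32
         |     print(select_mip_level(2.5, lod_bias=-0.5))   # ~0.82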
         | 
         | I would expect/hope that the designer asked both the artist and
         | engineer if the sign could be fixed (and at what cost) before
         | spending the time to re-design an alternative. Designers will
         | often spend huge amounts of time doing tasks that could be
         | greatly simplified if they only asked the relevant engineer
         | (and vice-versa).
        
       | dahart wrote:
       | This is a cool technique, and I really love the article and all
       | the visual examples! The effort put in writing this post and
       | making it clear, and adding all the neat examples at the end, is
       | awesome. So this is not a fair critique, maybe not even entirely
       | relevant, but the hard part for me, having worked in CG film, is
       | starting from a Box filter, going even sharper than that, and
       | being able to clearly see pixels. Good filtering hides the very
       | existence of pixels from view, it's exactly blurry enough to make
       | little square shapes invisible. This technique is great for games
       | _relative_ to single sampling, but it wouldn't get past a lighting
       | sup and wouldn't suffice for high quality prints.
       | 
       | One thing I think is underappreciated when people talk about
       | blurriness and sharpness is that a slight amount of visible
       | blurriness can be much better for the overall clarity of an image
       | than erring on the side of sharpness. I learned this accidentally
       | many years ago doing a Siggraph video on VHS tape, you know, with
       | the old 480i 60 fields per second with alternating odd/even
       | fields. Certainly there's a little nuance there that is different
       | than 1080p LCD pixels, but I found out that over-blurring a
       | little vertically fixed all the interlace tearing and made the
       | image _so_ much more clear. A friend who'd written papers on
       | antialiasing called me to ask how I'd done it, why it was so
       | clear, and was as surprised as I was to hear that it was blurrier
       | than expected. Ever since then whenever I see someone trying to
       | squeeze the last bit of sharpness out of their filter, it almost
       | always seems to come at the cost of seeing pixels and damaging
       | clarity. Experts will say that a Gaussian filter is too soft and
       | you can get sharper, but for high quality filtering, I haven't
       | found anything else that will reliably hide the pixels and leave
       | you looking only at the image rather than the sampling.
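       | 
       | To make that concrete, the difference is mostly in the
       | downsampling weights. A toy NumPy sketch (illustrative only, not
       | a production resampler) of a plain 2x box reduction versus a
       | Gaussian-weighted one:
       | 
       |     import numpy as np
       | 
       |     def gaussian_kernel(radius, sigma):
       |         # Normalized 1D Gaussian weights.
       |         x = np.arange(-radius, radius + 1)
       |         w = np.exp(-(x * x) / (2.0 * sigma * sigma))
       |         return w / w.sum()
       | 
       |     def downsample_2x(img, kernel):
       |         # Separable filter along both axes, then keep every 2nd
       |         # sample. kernel = [0.5, 0.5] is the 2x box filter; a
       |         # wider Gaussian is softer but hides the pixel grid better.
       |         for axis in (0, 1):
       |             img = np.apply_along_axis(
       |                 lambda r: np.convolve(r, kernel, mode="same"),
       |                 axis, img)
       |         return img[::2, ::2]
       | 
       |     img = np.random.rand(64, 64)
       |     box = downsample_2x(img, np.array([0.5, 0.5]))
       |     soft = downsample_2x(img, gaussian_kernel(2, 1.0))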
        
       | gallerdude wrote:
       | I've heard before that watching 4K video on a 1080p display is
       | still better than watching the same video in 1080p - is that
       | because there's a kind of supersampling going on there?
        
         | s_gourichon wrote:
         | Generally not the point. Depending on your video player you
         | might only get aliasing artifacts (and virtually always higher
         | resource consumption).
         | 
         | The point of selecting a higher definition video stream can be
         | to work around a bitrate limit, if the platform compresses what
         | it sends by default too heavily.
        
           | gallerdude wrote:
           | I thought I heard it applied even with proportionally equal
           | bitrate, but that could be incorrect.
        
         | Const-me wrote:
         | I think if you watch that 4k video with nearest neighbor
         | downsampling (i.e. a player that drops 75% of the pixels and
         | only renders 25% of them, one from each 2x2 quad), you'll get
         | worse quality.
         | 
         | Fortunately, that's not how video players normally resize the
         | videos they play. At the very least, players are using bilinear
         | sampling. For the exact 50% downsampling of 4k into 1080p, this
         | means the GPU averages 2x2 quad of the source video texture
         | into one output pixel. This averaging step hides a substantial
         | amount of codec artifacts. That bilinear sampling is even
         | borderline free on GPUs: they have tons of VRAM bandwidth, and
         | texture samplers are dedicated fixed-function pieces of
         | hardware.
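         | 
         | In code terms, the exact-half case is just a 2x2 box average; a
         | tiny NumPy illustration of the math (not how a real player or
         | GPU actually implements it):
         | 
         |     import numpy as np
         | 
         |     def halve_4k_to_1080p(frame):
         |         # Exact 50% downscale: average each 2x2 quad of source
         |         # pixels into one output pixel -- what bilinear sampling
         |         # reduces to at exactly half resolution.
         |         h, w, c = frame.shape
         |         return frame.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))
         | 
         |     frame_4k = np.random.rand(2160, 3840, 3)   # stand-in for a frame
         |     frame_1080p = halve_4k_to_1080p(frame_4k)  # shape (1080, 1920, 3)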
         | 
         | If you're using Windows and Media Player Classic, the player
         | has a preference to switch the resizing algorithm: Options >
         | Playback > Output, the "Resizer" combobox.
        
       | s_gourichon wrote:
       | Interesting article.
       | 
       | > Warning: this page has about 72MB of Gifs! Medium also has a
       | tendency to not load them properly, so if there's a large gap or
       | overly blurry image, try reloading the page. Ad blockers may
       | cause problems, no script may make it better. It's also broken in
       | the official Medium app. Sorry, Medium is just weird.
       | 
       | There may be a fix: vote with your choice, stop publishing on
       | Medium, publish elsewhere?
       | 
       | This content is worth it.
        
         | Jyaif wrote:
         | At the end of the article the author explains "As strange as it
         | might seem, I'm using gifs on purpose."
        
       | Jyaif wrote:
       | Would it be possible to have more intermediate mip levels (for
       | instance, instead of {1, 0.5, 0.25} have {1, 0.75, 0.5, 0.375,
       | 0.25})?
       | 
       | How much would it help?
        
         | dahart wrote:
         | Yes it would be possible! There isn't GPU hardware that does
         | it, but it could be done in GPU or CPU software. (This is
         | slower than GPU hardware texture filtering.)
         | 
         | Hard to say how much it would help the visual quality, but it's
         | easy to calculate the costs, which are largely memory. A mip map
         | consumes 1/3rd more memory than the base texture. If you add a
         | 2nd set of mips starting at 75% size, then I think (napkin math)
         | it'll bring your memory consumption closer to 2x the original
         | texture. https://en.wikipedia.org/wiki/Mipmap#Mechanism
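         | 
         | Spelling out the napkin math (assuming a 2D texture where each
         | mip level is 1/4 the area of the one above it):
         | 
         |     standard_chain = 4 / 3               # 1 + 1/4 + 1/16 + ... = 4/3
         |     extra_chain = (0.75 ** 2) * 4 / 3    # same series, starting at 75% size
         |     print(standard_chain)                # ~1.33x the base texture
         |     print(standard_chain + extra_chain)  # ~2.08x -- "closer to 2x"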
        
         | midnightclubbed wrote:
         | You would need to texture sample the intermediate mip levels
         | separately. All GPUs (that I know of) expect the mips to be
         | power-of-2 reductions. You are also increasing texture memory
         | usage (storage and bandwidth) by roughly 50%.
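         | 
         | Roughly what "sample separately" would look like, sketched in
         | Python rather than shader code (the chain and lookup here are
         | illustrative stand-ins, not a real texture unit):
         | 
         |     import numpy as np
         | 
         |     def sample_mip(mip, u, v):
         |         # Nearest-texel lookup at normalized coords (u, v) in [0, 1).
         |         h, w = mip.shape[:2]
         |         return mip[int(v * h), int(u * w)]
         | 
         |     def sample_custom_chain(mips, u, v, level):
         |         # The hardware won't blend non-power-of-2 levels for you,
         |         # so you fetch the two nearest levels yourself and lerp --
         |         # two samples plus a mix instead of one trilinear fetch.
         |         lo = int(np.floor(level))
         |         hi = min(lo + 1, len(mips) - 1)
         |         t = level - lo
         |         return ((1 - t) * sample_mip(mips[lo], u, v)
         |                 + t * sample_mip(mips[hi], u, v))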
         | 
         | There is probably some quality improvement, but there is a
         | fairly large cost. Would be interesting to see what the results
         | look like.
        
       ___________________________________________________________________
       (page generated 2022-01-04 23:02 UTC)