[HN Gopher] Neural Rendering: How Low Can You Go in Terms of Input?
       ___________________________________________________________________
        
       Neural Rendering: How Low Can You Go in Terms of Input?
        
       Author : Hard_Space
       Score  : 53 points
       Date   : 2021-05-13 10:21 UTC (12 hours ago)
        
 (HTM) web link (www.unite.ai)
 (TXT) w3m dump (www.unite.ai)
        
       | Jakobeha wrote:
        | This neural rendering work is really, really cool: things
        | like the GTA5 demo, this
        | (https://www.youtube.com/watch?v=miLIwQ7yPkA), and pix2pix.
        | The general concept - turning a dumb sketch into a real photo
        | or an artistic masterpiece - is the most impressive thing I've
        | seen this past year. Seriously.
       | 
        | Best case scenario (from a technology perspective), people
        | could make games with the most bare-bones, low-end graphics,
        | plus neural enhancement, and they would look better than
        | today's top-of-the-line AAA games. Someone with no experience
        | could follow a basic tutorial and in 5 minutes be creating
        | hyper-realistic landscapes, cities, characters, etc. from lazy
        | sketches and clunky 3D shapes. Anyone could be a "talented"
        | artist.
       | 
        | Which is actually really bad for real, talented artists and
        | raises serious ethical issues. That's why, in the best case
        | from a technology perspective, it would be catastrophic if we
        | had this technology today. But that best case is too good to
        | be true; we're not going to have it anytime soon.
       | 
        | Right now we have tools like pix2pix that turn decent-quality
        | sketches into uncanny-valley products. If you squint they look
        | realistic, but they're also obviously AI-generated. And
        | overfitted: you can see what the network was trained on in the
        | output, and you simply can't create anything too far from the
        | training data. That's probably what we can expect in the near
        | future.
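        | 
        | For reference, pix2pix's objective is just a conditional GAN
        | loss plus an L1 term that keeps the output close to the paired
        | target, which is also why it sticks so close to its training
        | data. A minimal PyTorch sketch (the tiny G and D here are
        | stand-ins, not the paper's U-Net and PatchGAN):
        | 
        |   import torch
        |   import torch.nn as nn
        | 
        |   # Stand-in generator (sketch -> photo) and discriminator
        |   # ((sketch, photo) pair -> real/fake score map).
        |   G = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
        |                     nn.Conv2d(64, 3, 3, padding=1))
        |   D = nn.Sequential(nn.Conv2d(6, 64, 3, padding=1), nn.ReLU(),
        |                     nn.Conv2d(64, 1, 3, padding=1))
        | 
        |   bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
        |   lam = 100.0                        # L1 weight from the paper
        | 
        |   sketch = torch.randn(1, 3, 64, 64)  # input drawing
        |   photo = torch.randn(1, 3, 64, 64)   # paired ground truth
        | 
        |   # Generator: fool D *and* stay near the paired target.
        |   fake = G(sketch)
        |   pred = D(torch.cat([sketch, fake], dim=1))
        |   g_loss = bce(pred, torch.ones_like(pred)) \
        |            + lam * l1(fake, photo)
        | 
        |   # Discriminator: separate real pairs from generated ones.
        |   pred_real = D(torch.cat([sketch, photo], dim=1))
        |   pred_fake = D(torch.cat([sketch, fake.detach()], dim=1))
        |   d_loss = bce(pred_real, torch.ones_like(pred_real)) \
        |            + bce(pred_fake, torch.zeros_like(pred_fake))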
       | 
        | But even that is really impressive, and I see a lot of
        | practical uses for it. You can make art with these images; it
        | will be obviously AI-generated art, but people won't really
        | care.
        
         | orbital-decay wrote:
          | _> Which is actually really bad for real, talented artists
          | and raises serious ethical issues._
         | 
          | This comes up every time a spectacular ML demo emerges. The
          | generic answer is that the real value in artistic work is
          | conceptualization; execution is secondary. This method
          | doesn't conceptualize anything; you probably need strong AI
          | for that. Artists will just use the new tool, simple as
          | that.
        
         | saeranv wrote:
          | I would love to see this incorporated into a HUD in actual
          | cars to augment unsafe driving conditions - for example,
          | clearly outlining cars and roads at night or in rainy/snowy
          | weather.
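          | 
          | Even without full neural rendering, the outlining part could
          | be roughly prototyped today with an off-the-shelf
          | segmentation model. A sketch with torchvision (the model
          | choice and file name are just placeholders; a real HUD would
          | need something far faster and actually validated for
          | safety):
          | 
          |   import torch
          |   from torchvision import models, transforms
          |   from PIL import Image
          | 
          |   # Pretrained segmentation model as a stand-in.
          |   model = models.segmentation.deeplabv3_resnet50(
          |       pretrained=True).eval()
          |   prep = transforms.Compose([
          |       transforms.ToTensor(),
          |       transforms.Normalize(mean=[0.485, 0.456, 0.406],
          |                            std=[0.229, 0.224, 0.225])])
          | 
          |   frame = Image.open("dashcam_frame.jpg").convert("RGB")
          |   with torch.no_grad():
          |       out = model(prep(frame).unsqueeze(0))["out"][0]
          |   labels = out.argmax(0)    # per-pixel class ids
          | 
          |   car_mask = labels == 7    # class 7 is "car" in VOC labels
          |   # ...draw car_mask as a bright outline on the HUD view...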
        
       | arduinomancer wrote:
        | The linked video of applying a neural net to GTAV for
        | photorealism is really impressive, if you haven't seen it.
       | 
       | https://www.youtube.com/watch?v=P1IcaBn3ej0
        
         | aantix wrote:
          | I like the look of the model when it's trained with the
          | Vistas dataset. Much more saturation.
         | https://youtu.be/yLLhMkctfBY?t=4314
        
       | dmwallin wrote:
        | The approach that would be most interesting to me: build out
        | your scenes with really high-quality assets, along with
        | engine-friendly low-quality versions, and then use a slower
        | but high-quality ray-tracing setup to render extremely well-
        | labeled training sets. This would potentially give you
        | detailed aesthetic control over the end results.
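        | 
        | In pseudocode the data-generation loop might look like this
        | (the render functions are hypothetical stand-ins, not any real
        | engine API):
        | 
        |   from pathlib import Path
        | 
        |   def render_realtime(scene, camera):   # engine-quality pass
        |       ...                               # hypothetical
        | 
        |   def render_raytraced(scene, camera):  # offline hero pass
        |       ...                               # hypothetical
        | 
        |   def make_pairs(scenes, cameras, out_dir):
        |       out = Path(out_dir)
        |       for i, (scene, cam) in enumerate(zip(scenes, cameras)):
        |           # Same camera, same scene: pixel-aligned pairs.
        |           lo = render_realtime(scene.low_quality, cam)
        |           hi = render_raytraced(scene.high_quality, cam)
        |           lo.save(out / f"{i:06d}_input.png")
        |           hi.save(out / f"{i:06d}_target.png")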
        
       | drummer wrote:
       | All of this AI stuff is wizardry and magic.
        
       | orbital-decay wrote:
        | I think one possible problem with this is that you don't
        | really have a world space in the traditional sense, which
        | makes many things like convincing reflections impossible. So
        | they probably have to be rendered by the conventional pipeline
        | and then augmented with the NN.
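        | 
        | If I understand the GTA V paper right, that's roughly what it
        | does: the engine's G-buffers (albedo, normals, depth, etc.) go
        | into the network alongside the conventionally rendered frame.
        | Schematically (the tiny enhancer below is a stand-in, not
        | their architecture):
        | 
        |   import torch
        |   import torch.nn as nn
        | 
        |   frame = torch.randn(1, 3, 256, 256)   # conventional render
        |   albedo = torch.randn(1, 3, 256, 256)  # G-buffer channels
        |   normal = torch.randn(1, 3, 256, 256)
        |   depth = torch.randn(1, 1, 256, 256)
        | 
        |   enhancer = nn.Sequential(             # stand-in network
        |       nn.Conv2d(10, 64, 3, padding=1), nn.ReLU(),
        |       nn.Conv2d(64, 3, 3, padding=1))
        | 
        |   x = torch.cat([frame, albedo, normal, depth], dim=1)
        |   enhanced = frame + enhancer(x)  # refines, doesn't replace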
        
       | ineedasername wrote:
       | > _" How low can you go?"_
       | 
       | Let's see what pops out when Intel points it at XKCD
        
       ___________________________________________________________________
       (page generated 2021-05-13 23:01 UTC)