[HN Gopher] Fuzzy Metaballs: Approximate differentiable rendering with algebraic surfaces
       ___________________________________________________________________
        
       Fuzzy Metaballs: Approximate differentiable rendering with
       algebraic surfaces
        
       Author : andersource
       Score  : 112 points
       Date   : 2022-11-01 11:05 UTC (11 hours ago)
        
 (HTM) web link (leonidk.github.io)
 (TXT) w3m dump (leonidk.github.io)
        
       | ThouYS wrote:
       | sweet! if you ask me, differentiable rendering is the next big
       | thing (tm).
        
         | enriquto wrote:
         | > if you ask me, differentiable rendering is the next big thing
         | 
          | Do you have a canonical reference for your usage of the
          | adjective "differentiable"? If people look up the
          | definition, they invariably get to this:
         | 
         | https://en.m.wikipedia.org/wiki/Differentiable_function
         | 
          | That is clearly not what you mean. I'm not asking what
          | it means, but for a reference. I teach calculus for a
          | living, and I make a big deal of the two meanings of
          | this word, but have never found a definitive reference
          | to cite.
        
           | Timon3 wrote:
           | Actually, that is the correct definition! It might sound
           | unintuitive at first, but I think this paper[1] describes it
           | really well (p.1):
           | 
           | > The last years have clearly shown that neural networks are
           | effective for 2D and 3D reasoning. However, most 3D
           | estimation methods rely on supervised training regimes and
           | costly annotations, which makes the collection of all
           | properties of 3D observations challenging. Hence, there have
           | been recent efforts towards leveraging easier-to-obtain 2D
           | information and differing levels of supervision for 3D scene
           | understanding. One of the approaches is integrating graphical
           | rendering processes into neural network pipelines. This
           | allows transforming and incorporating 3D estimates into 2D
           | image level evidence.
           | 
           | > Rendering in computer graphics is the process of generating
           | images of 3D scenes defined by geometry, materials, scene
           | lights and camera properties. Rendering is a complex process
           | and its differentiation is not uniquely defined, which
           | prevents straightforward integration into neural networks.
           | 
           | > Differentiable rendering (DR) constitutes a family of
           | techniques that tackle such an integration for end-to-end
           | optimization by obtaining useful gradients of the rendering
           | process. By differentiating the rendering, DR bridges the gap
           | between 2D and 3D processing methods, allowing neural
           | networks to optimize 3D entities while operating on 2D
           | projections.
           | 
           | [1] https://arxiv.org/abs/2006.12057
        
             | enriquto wrote:
             | > Actually, that is the correct definition!
             | 
             | Hmmm; no, it isn't?
             | 
              | Your quotation is vague and not a definition at all.
              | The notion of differentiability has nothing to do
              | with deep learning, rendering, or neural networks.
              | None of these terms should appear in a definition of
              | the concept.
              | 
              | The classical mathematical definition of
              | differentiability concerns the function itself. For
              | example, the function f(x)=x^2 is differentiable
              | while the function g(x)=abs(x) isn't. On the other
              | hand, the "modern" definition of differentiability
              | concerns the computer implementation of the function.
              | Given the same function f(x)=x^2, you can have a
              | differentiable implementation of it (e.g., using a
              | high-level language) and a non-differentiable
              | implementation (e.g., calling a library written in
              | assembler that evaluates this function). At the same
              | time, you can have a differentiable implementation of
              | g(x)=abs(x), whose derivative would then be the sign
              | function, and a non-differentiable implementation as
              | well. Thus, the two concepts are really independent!
             | 
              | I'm still longing for a formal, canonical definition
              | of the "modern" notion of differentiability.
        
               | andersource wrote:
                | Parent is correct that the definition you linked is
                | indeed broadly the right one, i.e. there's no
                | principled difference between "differentiable" as
                | in calculus and differentiable as in differentiable
                | rendering, except maybe that the modern sense is a
                | bit more relaxed. The
               | differentiability of the rendering process doesn't depend
               | on the implementation - incidentally look at the first
               | line of this paper's abstract:
               | 
               | > Differentiable renderers provide a direct mathematical
               | link between an object's 3D representation and images of
               | that object
               | 
               | When we call a process differentiable (in the "modern"
               | sense you're referring to) we simply mean that we have a
               | formulation of the process as a differentiable function.
               | In rendering this means the function's domain is the set
               | of tuples (3D object, camera parameters) and the
               | function's range is a set of 2D images of some sort.
                | These functions have very high-dimensional domains
                | and ranges, but they are mathematical functions
                | nonetheless, and calling them "differentiable"
                | means the same as in the definition you linked to.
               | 
                | Your examples regarding implementation make me
                | think you may be conflating differentiability with
                | "autodiff"[0] - languages / libraries that let you
                | write a function and have it differentiated for you
                | automatically.
               | 
               | [0]
               | https://en.wikipedia.org/wiki/Automatic_differentiation
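                | 
                | To make this concrete, here is a toy sketch in JAX
                | (my own illustration, not this paper's method): a
                | "renderer" that splats 3D points onto an image
                | using only smooth operations, so gradients of an
                | image-space loss flow back to the 3D
                | representation:
                | 
                |   import jax
                |   import jax.numpy as jnp
                | 
                |   H = W = 32
                | 
                |   def render(points, focal):
                |       # pinhole projection of 3D points to pixels
                |       u = focal * points[:, 0] / points[:, 2] + W / 2
                |       v = focal * points[:, 1] / points[:, 2] + H / 2
                |       ys, xs = jnp.mgrid[0:H, 0:W]
                |       # splat each point as an isotropic Gaussian
                |       # and sum; every step is smooth, so the map
                |       # (3D object, camera) -> image has gradients
                |       d2 = (xs[None] - u[:, None, None]) ** 2 \
                |          + (ys[None] - v[:, None, None]) ** 2
                |       return jnp.sum(jnp.exp(-d2 / 8.0), axis=0)
                | 
                |   def loss(points, focal, target):
                |       img = render(points, focal)
                |       return jnp.mean((img - target) ** 2)
                | 
                |   target = render(jnp.array([[0.2, 0.1, 5.0]]), 20.0)
                |   points = jnp.array([[0.0, 0.0, 5.0]])
                |   # d(image error) / d(3D representation):
                |   g = jax.grad(loss)(points, 20.0, target)
                | 
                | Very roughly, the paper's fuzzy metaballs are a
                | much more careful version of this kind of pipeline,
                | with Gaussian mixtures as the 3D representation.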
        
               | 6gvONxR4sf7o wrote:
               | You're confusing "everywhere differentiable" with
               | "differentiable at the points we care about." g(x) =
               | abs(x) is differentiable almost everywhere, and that's
               | enough for gradient descent to work in practice. And
               | fitting models by gradient-descent-like methods is the
               | single unifying feature of deep learning.
               | 
                | I don't understand the implementation thing you're
                | worried about. Assuming it maps the same inputs to
                | the same outputs, how you implement the absolute
                | value function has no bearing on its limit
                | properties, which are what differentiability is
                | about.
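                | 
                | A quick sketch in JAX (my example; what autodiff
                | returns exactly at the kink is just an
                | implementation convention):
                | 
                |   import jax
                |   import jax.numpy as jnp
                | 
                |   g = jax.grad(jnp.abs)
                |   g(3.0), g(-2.0)   # 1.0, -1.0: correct everywhere
                |                     # except the single point x = 0
                | 
                |   # gradient descent on |x| never notices the kink
                |   x = 5.0
                |   for _ in range(100):
                |       x = x - 0.1 * g(x)
                |   # x ends up within one step size of the minimum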
        
         | 6gvONxR4sf7o wrote:
          | More generally: take any controllable, well-understood
          | thing and make it differentiable, so we can start
          | combining the successes of deep learning with the
          | grokkability of more specific solutions.
        
       ___________________________________________________________________
       (page generated 2022-11-01 23:01 UTC)