[HN Gopher] Triangle splatting: radiance fields represented by t...
       ___________________________________________________________________
        
       Triangle splatting: radiance fields represented by triangles
        
       Author : ath92
       Score  : 151 points
       Date   : 2025-05-30 04:07 UTC (18 hours ago)
        
 (HTM) web link (trianglesplatting.github.io)
 (TXT) w3m dump (trianglesplatting.github.io)
        
       | dsp_person wrote:
       | > In this paper, we argue for a triangle come back
       | 
       | Go team triangles!
        
         | dr_dshiv wrote:
         | Pythagoras ftw
        
         | franky47 wrote:
         | Vercel approves.
        
       | adastra22 wrote:
       | Can someone explain what a splat is? I did graphics programming
       | 25 years ago, but haven't touched it since. I don't think I've
       | ever heard this word before.
        
         | 0xfffafaCrash wrote:
          | A splat is basically a point in a point cloud, except that
          | a Gaussian splat isn't infinitesimally small: it's a 3D
          | Gaussian whose mean is the point's position. It also has
          | color and opacity, and you can stretch it into an ellipsoid
          | instead of keeping perfect radial symmetry.
        
           | mjfisher wrote:
           | And in case it helps further in the context of the article:
           | traditional rendering pipelines for games don't render fuzzy
           | Gaussian points, but triangles instead.
           | 
           | Having the model trained on how to construct triangles
           | (rather than blobbly points) means that we're closer to a
           | "take photos of a scene, process them automatically, and walk
           | around them in a game engine" style pipeline.
        
             | yazaddaruvala wrote:
              | Any insights into why game engines prefer triangles
              | rather than Gaussians for fast rendering?
             | 
             | Are triangles cheaper for the rasterizer, antialiasing, or
             | something similar?
        
               | adastra22 wrote:
               | Yes. Triangles are cheap. Ridiculously cheap. For
               | everything.
        
               | jiggawatts wrote:
               | Triangles are the simplest polygons, and simple is good
               | for speed and correctness.
               | 
               | Older GPUs natively supported quadrilaterals (four sided
               | polygons), but these have fundamental problems because
               | they're typically specified using the vertices at the
               | four corners... but these may not be co-planar!
               | Similarly, interpolating texture coordinates smoothly
               | across a quad is more complicated than with triangles.
               | 
               | Similarly, older GPUs had good support for "double-sided"
               | polygons where both sides were rendered. It turned out
               | that 99% of the time you only want one side, because you
               | can only see the _outside_ of a solid object. Rendering
                | the inside back-face is a pointless waste of computing
                | power. Dropping it actually simplified rendering algorithms by
               | removing some conditionals in the mathematics.
               | 
               | Eventually, support for anything but single-sided
               | triangles was in practice _emulated_ with a bunch of
               | triangles anyway, so these days we just stopped
               | pretending and use only triangles.
        
               | mjfisher wrote:
               | Cheaper for everything, ultimately.
               | 
                | A triangle by definition is guaranteed to be co-planar;
               | three vertices must describe a single flat plane. This
               | means every triangle has a single normal vector across
               | it, which is useful for calculating angles to lighting or
               | the camera.
               | 
               | It's also very easy to interpolate points on the surface
               | of a triangle, which is good for texture mapping (and
               | many other things).
               | 
               | It's also easy to work out if a line or volume intersects
               | a triangle or not.
               | 
               | Because they're the simplest possible representation of a
               | surface in 3D, the individual calculations per triangle
               | are small (and more parallelisable as a result).
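The two properties above (one flat normal per face, cheap interpolation) can be sketched in a few lines. This is an illustrative snippet, not from any engine; the function names are invented for the example.

```python
# Why a triangle is cheap to work with: three vertices give a single
# normal (a cross product), and barycentric coordinates interpolate any
# per-vertex attribute across the face.

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def triangle_normal(v0, v1, v2):
    # One flat normal for the whole face -- guaranteed because three
    # points always lie on a single plane.
    return cross(sub(v1, v0), sub(v2, v0))

def interpolate(attrs, bary):
    # Barycentric interpolation of a per-vertex attribute (color, UV, ...).
    w0, w1, w2 = bary
    return tuple(w0 * a + w1 * b + w2 * c for a, b, c in zip(*attrs))

v0, v1, v2 = (0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)
print(triangle_normal(v0, v1, v2))            # (0.0, 0.0, 1.0)
colors = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
print(interpolate(colors, (1/3, 1/3, 1/3)))   # an equal mix of the three colors
```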
        
               | Daub wrote:
                | Yes, cheaper. Quads are subject to becoming non-planar,
                | leading to shading artifacts.
                | 
                | In fact, I believe that under the hood all 3D models are
                | triangulated.
        
               | strangecasts wrote:
               | As an aside, a few early 90s games did experiment with
               | spheroid sprites to approximate 3D rendering, including
               | the DOS game Ecstatica [1] and the (unfortunately named)
               | SNES/Genesis game Ballz 3D [2]
               | 
               | [1] https://www.youtube.com/watch?v=nVNxnlgYOyk
               | 
               | [2] https://www.youtube.com/watch?v=JfhiGHM0AoE
        
               | jplusequalt wrote:
               | >triangles cheaper for the rasterizer
               | 
                | Yes, using triangles simplifies a lot of math, and GPUs
                | were created to be really good at the math behind
                | triangle rasterization (affine transformations).
        
           | ragebol wrote:
           | So instead of a point, a splat is more of a (colored) cloud
           | itself.
           | 
           | So a gaussian splat scene is not a pointcloud but rather a
           | cloudcloud.
        
             | Daub wrote:
             | > so a gaussian splat scene is not a pointcloud but rather
             | a cloudcloud.
             | 
             | A good way of putting it.
        
         | jorlow wrote:
         | It's basically a blob in space. When you have millions of them,
         | you can use gradient descent to minimize loss between them and
         | source images.
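The loop described above can be shown with a toy: gradient descent fitting a single 1D "Gaussian splat" to a target signal. This is NOT the actual 3DGS pipeline (which uses millions of splats and analytic gradients from a differentiable rasterizer); all names and constants here are made up for illustration.

```python
import math

def splat(x, mu, sigma, amp):
    return amp * math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

# "Source images": samples of the ground-truth splat we try to recover.
targets = [(i * 0.1, splat(i * 0.1, 2.0, 0.5, 1.0)) for i in range(60)]

def loss(mu, sigma, amp):
    return sum((splat(x, mu, sigma, amp) - y) ** 2 for x, y in targets)

mu, sigma, amp = 1.0, 1.0, 0.5   # deliberately bad initial guess
lr, eps = 0.01, 1e-4

for _ in range(3000):
    # Central-difference gradients stand in for the analytic gradients a
    # real differentiable rasterizer provides.
    g_mu = (loss(mu + eps, sigma, amp) - loss(mu - eps, sigma, amp)) / (2 * eps)
    g_si = (loss(mu, sigma + eps, amp) - loss(mu, sigma - eps, amp)) / (2 * eps)
    g_am = (loss(mu, sigma, amp + eps) - loss(mu, sigma, amp - eps)) / (2 * eps)
    mu, sigma, amp = mu - lr * g_mu, sigma - lr * g_si, amp - lr * g_am

# The parameters drift toward the ground truth (2.0, 0.5, 1.0) as the loss shrinks.
print(round(mu, 2), round(sigma, 2), round(amp, 2))
```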
        
         | samsartor wrote:
         | I always assumed "gaussian splatting" was a reference to old
         | school texture splatting, where textures are alpha-blended
          | together. AFAIK the graphics terminology of splats as objects
          | (in addition to splatting as an operation) is new.
        
         | mxfh wrote:
          | The prior paper by the authors spends more time explaining
          | what's happening; I'd start there:
         | 
         | https://convexsplatting.github.io/
         | 
         | the seminal paper is still this one:
         | 
         | https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/
        
         | Daub wrote:
          | Practically, what differentiates a splat from standard
         | photogrammetry is that it can capture things like reflections,
         | transparency and skies. A standard photogram of (for example) a
         | mirror would confuse the reflection in the mirror for a space
         | behind the mirror. A photogram of a sheet of glass would
         | likewise suffer.
         | 
         | The problem is that any tool or process that converts splats
         | into regular geometry produces plain old geometry and RGB
          | textures, thus losing its advantage. For this reason splats
         | are (in my opinion) a tool in search of an application.
         | Doubtless some here will disagree.
        
           | taylorius wrote:
           | I've never been quite clear on how Splats encode specular
           | (directional) effects. Are they made to only be visible from
           | a narrow field of view (so you see a different splat for
           | different view angles?) or do they encode the specular stuff
           | internally somehow?
        
             | Daub wrote:
             | This is a good question. As I understand it, the only
             | material parameters a splat can recognize are color and
             | transparency. Therefore the first of your two options would
             | be the correct one.
        
               | brianshaler wrote:
               | You can use spherical harmonics to encode a few
               | coefficients in addition to the base RGB for each splat
               | such that the rendertime view direction can be used to
               | compute an output RGB. A "reflection" in 3DGS isn't a
               | light ray being traced off the surface, but instead a way
               | of saying "when viewed from this angle, the splat may
               | take an object's base color, while from that angle, the
                | splat may be white because the input image had glare."
               | 
               | This ends up being very effective with interpolation
               | between known viewpoints, and hit-or-miss extrapolation
               | beyond known viewpoints.
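The spherical-harmonics scheme described above can be sketched as follows. The basis constants are the standard real SH ones and the sign convention mirrors common 3DGS implementations, but this is a hedged sketch, not the reference code; the function and variable names are invented.

```python
# View-dependent color via degree-1 real spherical harmonics: a base (DC)
# RGB plus three l=1 RGB coefficient triples, evaluated per view direction.

SH_C0 = 0.28209479177387814   # l=0 basis constant
SH_C1 = 0.4886025119029199    # l=1 basis constant

def sh_color(base, coeffs, view_dir):
    # base: DC RGB; coeffs: three l=1 RGB coefficient triples;
    # view_dir: unit vector from splat toward the camera.
    x, y, z = view_dir
    rgb = []
    for ch in range(3):
        c = SH_C0 * base[ch]
        c += -SH_C1 * y * coeffs[0][ch]
        c += SH_C1 * z * coeffs[1][ch]
        c += -SH_C1 * x * coeffs[2][ch]
        rgb.append(max(0.0, c + 0.5))   # 3DGS-style offset and clamp
    return tuple(rgb)

base = (0.8, 0.8, 0.8)
l1 = [(0.3, 0.0, 0.0), (0.0, 0.0, 0.0), (0.0, 0.0, 0.0)]
front = sh_color(base, l1, (0.0, 0.0, 1.0))
side = sh_color(base, l1, (0.0, 1.0, 0.0))
# front and side differ in the red channel: the appearance is view-dependent
```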
        
             | vessenes wrote:
             | Because you have source imagery and colors (and therefore
             | specular and reflective details) from different angles you
             | can add a view angle and location based component to the
             | material/color function; so the material is not just
             | f(point in 3d space) it's f(pt, view loc, view direction).
             | That's made differentiable and so you get viewpoint
             | dependent colors for 'free'.
        
         | spookie wrote:
          | To add to the rest of the replies: color comes from spherical
          | harmonics, which I'm sure you've come across (traditionally
          | used for diffuse light or shadows; SuperTuxKart uses them).
        
       | rossant wrote:
       | Very exciting! Neural rendering finally coming to standard
       | rasterizer engines.
        
       | ToJans wrote:
        | What if we use triangular pyramids instead of triangles?
       | 
       | Wouldn't this lead to the full 3D representation?
        
         | ImHereToVote wrote:
         | A pyramid is unnecessarily bound, a triangle performs better if
         | it is free flowing. I understand that this performs better
         | because there is less IO but slightly more processing. IO is
         | the biggest cost when it comes to GPUs.
        
         | cubefox wrote:
         | Something like this? https://arxiv.org/abs/2406.01579
        
       | qwertox wrote:
       | I'm so confused about the citation:
       | 
       | author = {Held, Jan and Vandeghen, Renaud and Deliege, Adrien and
       | Hamdi, Abdullah and Cioppa, Anthony and Giancola, Silvio and
       | Vedaldi, Andrea and Ghanem, Bernard and Tagliasacchi, Andrea and
       | Van Droogenbroeck, Marc},
       | 
       | while on arxiv and the top of the page
       | 
       | Jan Held, Renaud Vandeghen, Adrien Deliege, Abdullah Hamdi,
       | Silvio Giancola, Anthony Cioppa, Andrea Vedaldi, Bernard Ghanem,
       | Andrea Tagliasacchi, Marc Van Droogenbroeck
        
         | enriquto wrote:
         | it's the same list of authors, written in different formats
         | 
         | the first one is SURNAME, NAME separated by "and"
         | 
         | the second one is NAME SURNAME separated by commas
         | 
         | The second one is easier to read by humans, but the first one
          | makes it clearer which part is the surname (which would be ambiguous
         | otherwise, when there are composite names). But then again, the
         | first format breaks when someone has "and" in their name, which
         | is not unheard of.
        
           | growlNark wrote:
           | Why do they use "and"? Why not use an unambiguous joining
           | token like `/`? This just feels like an abuse of informal
           | language to produce fundamentally formal data.
           | 
           | As it stands, it certainly does not resemble readable or
            | parseable English.
        
             | bowsamic wrote:
              | It's how the BibTeX author field is defined. You don't get
              | free choice here. As far as I'm aware, BibTeX defines
              | "and" as the separator.
             | 
             | https://bibtex.eu/fields/author/
        
               | growlNark wrote:
               | Yea but like... why? Typically you use human language
               | operators to produce readable phrases, and this doesn't
                | even approach readable English.
        
               | bowsamic wrote:
                | The answer is probably "who knows", since we are talking
                | about software from 1985 that has only been updated once
                | since 1988, to clarify its licensing.
        
               | jameshart wrote:
               | The person who designed it was solving primarily for
               | lexical sorting of the author field, thought maybe having
               | more than two authors was an edge case, and wanted the
               | two author case to be a logical extension of the single
               | author one?
        
             | robotresearcher wrote:
              | It's BibTeX format. It's ancient, ubiquitous, very fussy,
             | and reads badly for humans in some cases. But it's what
             | we've been using since the 1980s.
             | 
             | 'Better' formats have been proposed but none have stuck
             | nearly as well. It works, and there's tooling for it.
        
         | cubefox wrote:
         | That's just the formatting?
         | 
         | author = {surname 1, first name 1 and surname 2, first name 2
         | and ...}
         | 
         | "and" is the separator.
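The `Surname, Given and Surname, Given` format above can be split mechanically. This is a toy sketch, not a real BibTeX parser: BibTeX has more rules (brace protection, "von" parts, a literal `{and}` in names) that this deliberately ignores.

```python
# Toy normalizer for a BibTeX-style author field: split on " and ",
# then handle both "Surname, Given" and "Given Surname" entries.

def parse_authors(field):
    authors = []
    for entry in field.split(" and "):
        if "," in entry:
            surname, given = (part.strip() for part in entry.split(",", 1))
            authors.append((given, surname))
        else:
            # Without a comma, assume the last word is the surname --
            # exactly the ambiguity the comma format exists to avoid.
            *given, surname = entry.strip().split()
            authors.append((" ".join(given), surname))
    return authors

print(parse_authors("Held, Jan and Van Droogenbroeck, Marc"))
# [('Jan', 'Held'), ('Marc', 'Van Droogenbroeck')]
```

Note how the comma form keeps "Van Droogenbroeck" intact, while the comma-free form would misparse it as given name "Marc Van" plus surname "Droogenbroeck".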
        
       | praveen9920 wrote:
       | > The triangles are well aligned with the underlying geometry.
       | All triangles share a consistent orientation and lie flat on the
       | surface.
       | 
       | When I first read "triangle splatting," I assumed Gaussians were
       | just replaced by triangles, but triangles being aligned with
       | geometry changes everything. Looking forward to seeing this in
       | action in traditional rendering pipelines.
        
         | mxfh wrote:
          | Normal-aligned 3DGS was already proposed earlier [1]. This
          | seems more like an iteration that gains performance by moving
          | from the theoretical principle closer to what the hardware
          | already does well, finding a sweet spot between sheer number
          | of primitives, a fast rasterization method, and perceived
          | image quality.
         | 
          | It's already noticeable that there doesn't seem to be a
          | one-size-fits-all approach. Volumetric, feathered features
          | like clouds won't benefit as much from a triangle
          | representation as high-visual-frequency features will.
         | 
         | There are various avenues for speeding up rendering and
         | improving 3d performance of 3DGS.
         | 
         | it's surely a very interesting research space to watch.
         | 
         | https://arxiv.org/pdf/2410.20593
         | 
         | https://speedysplat.github.io/
         | 
          | Another avenue is increasing the complexity of the gradient
          | function, e.g. applying Gabor filters:
         | 
         | https://arxiv.org/abs/2504.11003
         | 
          | So many ways to adapt and extend the 3DGS principles.
        
           | praveen9920 wrote:
           | I think the major computational task is sorting the
           | primitives, which works great on GPUs but not so much on
            | CPUs. I'm sure there is some research happening on sort-free
            | primitives.
        
       | sorenjan wrote:
        | This looks really nice, but I can't help thinking this is a
        | stopgap solution like other splatting techniques. It's certainly
        | better than NeRFs, where the whole scene is contained in a black
        | box, but reality is not made up of a triangle soup or Gaussian
       | blobs. Most of the real world is made up of volumes, but can
       | often be thought of as surfaces. It makes sense to represent the
       | ground, a table, walls, etc with planes, not a cloud of semi
       | translucent triangles. This is like pouring LEGO on the floor and
       | moving them around until you get something that looks ok from a
       | distance, instead of putting them together. Obviously looking
       | good is often all that's needed, but it doesn't feel very
       | elegant.
       | 
       | Although, the normals look pretty good in their example images,
       | maybe you can get good geometry from this using some post
       | processing? But then is a triangle soup really the best way of
       | doing that? My impression is that this is chosen specifically to
       | get a final representation that is efficient to render on GPUs. I
       | haven't done any graphics programming in years, but I thought
       | you'd want to keep the number of draw calls down, do you need to
       | cluster these triangles into fewer draw calls?
       | 
       | Is there any work being done to optimize a volumetric
       | representation of scenes and from that create a set of surfaces
       | with realistic looking shaders or similar? I know one of the big
       | benefits of these splatting techniques is that it captures
        | reflections, opacity, anisotropy, etc, so "old school"
       | photogrammetry with marching cubes and textured meshes have a
       | hard time competing with the visual quality.
        
         | hirako2000 wrote:
          | A digital image is a soup of RGB dots of various sizes.
          | 
          | Gaussian splatting radically changed the approach to
          | photogrammetry. Prior approaches, which generate surface
          | models and map the captures to materials that a renderer
          | rasterizes with more or less physical accuracy, were hitting
          | the ceiling of the technique.
          | 
          | NeRF was also a revolution, but it is very compute intensive.
          | 
          | Even in a browser, a mid-range GPU can render millions of
          | splats at 60 frames per second. That's how fast it is, and a
          | dense scene of under a million splats can already fool the
          | eye from most angles.
          | 
          | Splatting is the most advanced and promising technique for
          | photogrammetry, and it has already delivered on that promise.
          | The limit is that you can't modify a point cloud as much as
          | you can a surface with good PBR attributes.
        
           | sorenjan wrote:
           | No, an image is a well ordered grid of pixels. The 3D variant
           | would be voxels, and Nvidia recently released a project to do
           | scene reconstruction with sparse voxels [0].
           | 
           | If you take these triangles, make them share vertices, and
           | order them in a certain way, you have a mesh. You can then
           | combine some of them into larger flat surfaces when that
           | makes sense, draw thousands of them in one draw call,
           | calculate intersections, volumes, physics, LODs, use textures
           | with image compression instead of millions of colored
           | objects, etc with them. Splatting is one way of answering the
           | question "how do we reproduce these images in a way that lets
           | us generate novel views of the same scene", not "what is the
           | best representation of this 3D scene".
           | 
           | The aim is to find the light field that describes the scene,
           | and if you have solid objects that function can be described
           | on the surface of those objects. Seems like a much more
           | elegant end result than a cloud of separate objects, no
           | matter what shape they have, since that's much closer to how
           | reality works. Obviously we need to handle volumetrics and
           | translucency as well, but if we model the real surfaces as
           | virtual surfaces I think things like reflections and shadow
           | removal will be easier. At least gaussian splats have a hard
           | time with reflections, they look good from some viewing
           | angles, but the reflections are often handled as geometry
           | [1].
           | 
           | I'm not arguing that it doesn't look good or that it doesn't
           | serve a purpose, sometimes a photorealistic novel view of a
           | real scene is all you want. But I still don't think it's the
           | best representation of scenes.
           | 
           | [0] https://svraster.github.io/
           | 
           | [1] https://www.youtube.com/watch?v=yq6gtdpLUCo
        
             | spencerflem wrote:
             | I still love this older paper on Plenoxels :
             | https://alexyu.net/plenoxels/
             | 
             | It made so much sense to me: voxels with view dependent
             | color, using eg. spherical gaussians.
             | 
             | I don't know how it compares to newer techniques, probably
             | badly since nobody seems to be talking about it.
        
               | sorenjan wrote:
               | They're mentioned in the SVRaster paper.
               | 
               | https://svraster.github.io/images/teaser.jpg
        
         | haeric wrote:
         | > I haven't done any graphics programming in years, but I
         | thought you'd want to keep the number of draw calls down, do
         | you need to cluster these triangles into fewer draw calls?
         | 
          | GPUs can draw tens of thousands of vertices per draw call,
          | whether they are connected together into logical objects or
          | are "triangle soup" like this. There is some benefit to having
         | triangles connected together so they can "share" a vertex, but
         | not as much as you might think. Since GPUs are massively
         | parallel, it does not matter much where on the screen or where
         | in the buffer your data is.
         | 
         | > Is there any work being done to optimize a volumetric
         | representation of scenes and from that create a set of surfaces
         | with realistic looking shaders or similar?
         | 
          | This is basically where the field was going until NeRFs and
          | splats. But then NeRFs and splats were such HUGE steps in
          | fidelity, it inspired a ton of new research towards it, and I
          | think rightfully so! Truth is that reality is really messy, so
          | trying to reconstruct logically separated meshes for everything
          | you see is a very hard way to try to recreate reality. NeRFs
          | and splats recreate reality much more easily.
        
       | rossant wrote:
       | After an email exchange with the lead author, I just rendered one
       | of their demo datasets using my Datoviz GPU rendering library
       | [1]. It looks nice and it's quite fast. I'm just rendering
       | uniform triangles using standard 3D rasterization.
       | 
       | https://github.com/user-attachments/assets/6008d5ee-c539-451...
       | 
       | (or
       | https://github.com/datoviz/data/blob/main/gallery/showcase/s...)
       | 
       | I'll add it to the official gallery soon, with proper credits.
       | 
       | [1] https://datoviz.org/
        
         | meindnoch wrote:
         | 404
        
           | rossant wrote:
           | Which link? Both seem to work for me at the moment.
        
             | xeonmc wrote:
             | Try viewing while logged out of GitHub
        
               | rossant wrote:
               | Works too
        
               | meindnoch wrote:
               | It does not.
        
               | heliophobicdude wrote:
               | Might be private repo? I get 404 as well
        
             | littlestymaar wrote:
             | The first one, most likely a permission issue.
        
       | meindnoch wrote:
       | trianglebros we're so back
        
       | helloplanets wrote:
       | I wonder how hard it would be to implement an extra processing
       | step, to turn this to a more 'stylized low poly' look. Basically,
       | the triangle count would be drastically smaller, but the topology
       | would have to be crisp.
        
       | jplusequalt wrote:
        | Can anyone detail the use case for Gaussian splatting to me?
        | What are we trying to solve, or what direction are we heading
        | in?
       | 
       | I'm more familiar with traditional 3D graphics, so this new wave
       | of papers around gaussian splatting lies outside my wheelhouse.
        
         | hnuser123456 wrote:
         | I get the impression the goal is to save 3D environments with
         | baked lighting without having to run raytracing, at a level
         | above explicitly defined meshes with faces covered by 2D
         | textures, which can't represent fog, translucency, reflection
         | glints, etc without a separate lighting pass. Basically trying
         | to get raytracing without doing raytracing.
        
           | kridsdale1 wrote:
            | I would say they're an attempt to extend the concept of a
            | photograph into truly 3 dimensions (not a 2D bitmap with a
            | depth layer).
        
         | machiaweliczny wrote:
         | AFAIK Gaussian Splatting is somehow connected to NeRFs (neural
          | radiance fields), so the job is turning multiple 2D images into
          | a 3D scene. I actually tried something like this recently for
         | drone navigation (using older point cloud methods) but no luck
         | so far.
         | 
         | Can anyone who read this suggest something to use to scan room
         | geometry using camera only in real-time (with access to beefy
         | NVIDIA computer if needed) for drone navigation purposes?
        
           | walterlw wrote:
           | have you tried ORB SLAM v3?
        
         | jtolmar wrote:
         | Gaussian splatting models the scene as a bunch of normal
         | distributions (fuzzy squished spheres) instead of triangles,
         | then renders those with billboarded triangles. It has
         | advantages (simpler representation, easy to automatically
         | capture from a scan) and disadvantages (not what the hardware
         | is designed for, not watertight). The biggest disadvantage is
         | that most graphics techniques need to be reinvented for it, and
         | it's not clear what the full list of advantages and
         | disadvantages will be until people have done all of those. But
         | that big disadvantage is also a great reason to make tons of
         | papers.
        
           | marcellus23 wrote:
           | what does "not watertight" mean?
        
             | lambdaone wrote:
             | They don't create contiguous surfaces, and GPUs are
             | optimized to deal with sets of triangles that share
             | vertices (a vertex typically being shared by four to six
             | triangles), rather than not shared at all as with this.
             | 
                | "Watertight" is actually a stronger criterion, which
                | requires not only a contiguous surface but one that
                | encloses a volume without any gaps; "not watertight"
                | suffices to describe this case.
        
               | marcellus23 wrote:
               | Interesting thank you!
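The shared-vertex layout mentioned above is just an indexed mesh. A minimal sketch (data invented for illustration) of the difference between indexed triangles and a "soup":

```python
# Indexed mesh: vertices are stored once and triangles reference them by
# index, versus a "soup" where every triangle carries its own vertices.

vertices = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]  # a quad's corners
triangles = [(0, 1, 2), (0, 2, 3)]     # two triangles sharing an edge

# Expanding to a soup duplicates the shared vertices.
soup = [[vertices[i] for i in tri] for tri in triangles]

# Indexed: 4 unique vertices + 6 small indices. Soup: 6 full vertices.
print(len(vertices), sum(len(t) for t in soup))  # 4 6
```

The shared edge (vertices 0 and 2) is stored once in the indexed form but twice in the soup, which is the memory and cache advantage GPUs exploit.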
        
       | pttrn wrote:
       | Make triangles great again
        
       | porphyra wrote:
       | This seems like the natural next step after Gaussian splatting.
       | After all, triangles are pretty much the most "native" rendering
       | that GPUs can do. And as long as you figure out a way to make it
       | differentiable (e.g. with their windowing function), it should be
       | possible to just throw your triangles into a big optimizer.
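One plausible way to make triangle coverage differentiable, as a sketch only: this is not the paper's actual windowing function, and the names and the sigmoid-of-edge-distance construction are assumptions for illustration.

```python
import math

# Replace the hard inside/outside rasterizer test with a product of
# sigmoids of signed distances to the three edges: coverage is smooth in
# the vertex positions, so gradients can flow to the triangle.

def edge_signed_distance(p, a, b):
    # Positive when p lies to the left of edge a->b (counter-clockwise).
    ex, ey = b[0] - a[0], b[1] - a[1]
    px, py = p[0] - a[0], p[1] - a[1]
    return (ex * py - ey * px) / math.hypot(ex, ey)

def soft_coverage(p, tri, sharpness=20.0):
    # Approaches the hard test as sharpness -> infinity, but has nonzero
    # gradients with respect to the vertices everywhere.
    w = 1.0
    for i in range(3):
        d = edge_signed_distance(p, tri[i], tri[(i + 1) % 3])
        w *= 1.0 / (1.0 + math.exp(-sharpness * d))
    return w

tri = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]   # counter-clockwise winding
print(soft_coverage((0.25, 0.25), tri))  # close to 1: well inside
print(soft_coverage((2.0, 2.0), tri))    # close to 0: well outside
```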
        
       ___________________________________________________________________
       (page generated 2025-05-30 23:01 UTC)