https://blog.maximeheckel.com/posts/painting-with-math-a-gentle-study-of-raymarching/
@MaximeHeckel
Painting with Math: A Gentle Study of Raymarching
September 12, 2023 / 37 min read
Last Updated: September 12, 2023
Most of my experience writing GLSL so far has focused on enhancing
pre-existing Three.js/React Three Fiber scenes that contain diverse
geometries and materials with effects that wouldn't be achievable
without shaders, such as my work with dispersion and particle effects.
However, during my studies of shaders, I always found my way to
Shadertoy, which contains a multitude of impressive 3D scenes
featuring landscapes, clouds, fractals, and so much more, entirely
implemented in GLSL. No geometries. No materials. Just a single
fragment shader.
One video titled Painting a Landscape with Math from Inigo Quilez
pushed me to learn about the technique behind those 3D shader scenes:
Raymarching. I was very intrigued by the perfect blend of creativity,
code, and math involved in this rendering technique, which allows
anyone to sculpt and paint entire worlds in just a few lines of code.
So I decided to spend my summer studying every aspect of Raymarching I
could by building as many scenes as possible, such as the ones below,
which are the result of these past few months of work (and, more
importantly, I took my time doing so to avoid burning out, as the
subject can be overwhelming, hence the title).
In this article, you will find a condensed version of my study of
Raymarching to get a gentle head start on building your own
shader-powered scenes. It aims to introduce this technique alongside
the concept of signed distance functions and give you the tools and
building blocks to build increasingly more sophisticated scenes, from
simple objects with lighting and shadows to fractals and infinite
landscapes.
Before you start
This article assumes you have basic knowledge about shaders, noise,
and GLSL, or that you have read The Study of Shaders with React Three Fiber.
This article will quote and link the work of many authors/creators I
relied on to teach myself Raymarching. Among them are:
* Inigo Quilez
* The Art of Code
* Syntopia
* SimonDev, also known as @iced_coffee_dev on Twitter
* The Book of Shaders
I'm very thankful for the quality content they put out there, without
which I would have probably not been able to grasp the concept of
Raymarching.
Snapshots of Math: demystifying the concept of Raymarching
If you're already familiar with Three.js or React Three Fiber 3D
scenes, you most likely encountered the concepts of geometry,
material, and mesh and maybe even built quite a few scenes with those
constructs. Under the hood, rendering with them involves a technique
called Rasterization, the process of converting 3D geometries to
pixels on a screen.
Raymarching, on the other hand, is an alternative technique to render
a 3D scene without requiring geometries or meshes...
Marching rays
Raymarching consists of marching step-by-step along rays cast from an
origin point (a camera, the observer's eye, ...) through each pixel of
an output image until they intersect with objects in the scene within
a set maximum number of steps. When an intersection occurs, we draw
the resulting pixel.
That's the simplest explanation I could come up with. However, it
never hurts to have a little visualization to go with the definition
of a new concept! That's why I built the widget below illustrating:
* The step-by-step aspect of Raymarching: the visualizer below lets you iterate up to 13 steps.
* The rays cast from a single point of origin (bottom panel).
* The intersections of those rays with an object, resulting in pixels on the fragment shader (top panel).
Notice how the rays that did not intersect with the sphere in the
visualizer above resulted in black pixels. That means there was
nothing to draw, since those rays did not intersect anything before we
reached the maximum number of steps.
Defining the World with Signed Distance Fields
The definition I just gave you above is only approximately correct.
Usually, when working with Raymarching:
* We won't go step-by-step with a constant step distance along our rays; that would make the process very long.
* We also won't be relying on the intersection points between the rays and the object.
Instead, we will use Signed Distance Fields: functions that calculate
the shortest distance between a point reached while marching along our
rays and the surfaces of the objects in our scene. Relying on the
distance to a surface lets us define the entire scene with simple math
formulas.
For each step, calculating and marching that resulting distance along
the rays lets us approach those objects until we're close enough to
consider we've "hit" the surface and can draw a pixel. The diagrams
below showcase this process:
Diagram showcasing the Raymarching process of 3 rays beamed from a
single point and marching step-by-step by a distance d obtained
through a Signed Distance Field.
Notice how:
* Each step of the raymarching (in green) goes as far as the distance to the object.
* If the distance between a point on our rays and the surface of an object is small enough (under a small value e), we consider we have a hit (in orange).
* If the distance is not under that threshold, we continue the process over and over using our SDF until we reach the maximum number of steps.
By using SDFs, we can define a variety of basic shapes, like spheres,
boxes, or toruses, that can then be combined to make more
sophisticated objects (which we'll see later in this article). Each of
these has a specific formula that had to be reverse-engineered from
the distance of a point to its surface. For example, the SDF of a
sphere is equivalent to:
SDF for a sphere centered at the origin of the scene
float sdSphere(vec3 p, float radius) {
  return length(p) - radius;
}
To help you understand why the SDF of a sphere is defined as such, I
made the diagram below:
Diagram showcasing 3 points, P1, P2, and P3, being respectively at a
positive distance from, a small distance from, and inside a sphere.
In it:
* P1 is at a distance d from the surface that is positive, since the distance between P1 and the center of the sphere c is greater than the radius of the sphere r.
* P2 is very close to the surface and would be considered a hit, since its distance to the surface is positive but lower than e.
* P3 lies "within" the sphere; we generally want our Raymarcher to never end up in such a case (at least for what is presented in this article).
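As a quick sanity check (with made-up sample points rather than the diagram's exact coordinates), here is what sdSphere returns for a unit sphere centered at the origin:

// Unit sphere: sdSphere(p, 1.0) = length(p) - 1.0
// sdSphere(vec3(0.0, 0.0, 2.0), 1.0)   =  1.0    -> positive: outside, like P1
// sdSphere(vec3(0.0, 0.0, 1.005), 1.0) =  0.005  -> under the threshold e: a hit, like P2
// sdSphere(vec3(0.0, 0.0, 0.5), 1.0)   = -0.5    -> negative: inside, like P3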
SDFs
Inigo Quilez compiled a list of SDFs for most shapes that you may
need to get started with Raymarching.
If you wish to re-derive those formulas yourself to get a deeper
understanding of SDFs, The Art of Code has a great video showcasing
the process.
This is where the Math in Raymarching resides: through the definition
and the combination of SDFs, we can literally define entire worlds
with math. To showcase that power, however, we first need to create
our first "Raymarcher" and put into code the different constructs we
just introduced.
Our first Raymarched scene
This introduction to the concept of Raymarching might have left you
perplexed as to how one is supposed to get started building anything
with it. Lucky for us, there are many ways to render a Raymarched
scene, and for this article we're going to take perhaps the most
obvious approach: use a simple Three.js/React Three Fiber
planeGeometry as a canvas, and paint our shader on it.
@winkerVSbecks uses a different approach in his blog post titled
Iridescent crystal with raymarching and signed distance fields, where
he uses canvas-sketch, which seems better suited to rendering simple
shaders on the web. Regardless of which medium you use, you should
definitely check out his write-up on Raymarching after reading this
post.
The canvas
Rendering a shader on top of a fullscreen planeGeometry is the
technique I used during this Raymarching study:
* I didn't want to spend too much time investigating more lightweight solutions.
* I still wanted to have easy access to tools like Leva.
* I was familiar with React Three Fiber's render loop and still wanted to reuse code I've written over the past two years, like uniforms, OrbitControls, mouse movements, etc.
Below is a code snippet of my canvas that served as the basis for my
Raymarching work:
React Three Fiber scene used as canvas for Raymarching
import { Canvas, useFrame, useThree } from '@react-three/fiber';
import { useRef, Suspense } from 'react';
import * as THREE from 'three';
import { v4 as uuidv4 } from 'uuid';
import vertexShader from './vertexShader.glsl';
import fragmentShader from './fragmentShader.glsl';

const DPR = 0.75;

const SDF = () => {
  const mesh = useRef();
  const { viewport } = useThree();

  const uniforms = {
    uTime: new THREE.Uniform(0.0),
    uResolution: new THREE.Uniform(new THREE.Vector2()),
  };

  useFrame((state) => {
    const { clock } = state;
    mesh.current.material.uniforms.uTime.value = clock.getElapsedTime();
    mesh.current.material.uniforms.uResolution.value = new THREE.Vector2(
      window.innerWidth * DPR,
      window.innerHeight * DPR
    );
  });

  // Fullscreen plane scaled to the viewport, used as a canvas for the fragment shader
  return (
    <mesh ref={mesh} scale={[viewport.width, viewport.height, 1]}>
      <planeGeometry args={[1, 1]} />
      <shaderMaterial
        key={uuidv4()}
        fragmentShader={fragmentShader}
        vertexShader={vertexShader}
        uniforms={uniforms}
      />
    </mesh>
  );
};

const Scene = () => {
  // Camera position and dpr here are reasonable defaults, not necessarily the exact demo values
  return (
    <Canvas camera={{ position: [0, 0, 6] }} dpr={DPR}>
      <Suspense fallback={null}>
        <SDF />
      </Suspense>
    </Canvas>
  );
};

export default Scene;
I'm also passing a couple of essential uniforms to my shaderMaterial,
which I advise you include as well in your own work, as they may
become pretty handy to have around:
* uResolution contains the current resolution of the window.
* uTime represents the time since the scene was rendered on the screen.
DPR
Here's yet another PSA about device pixel ratio! Raymarching scenes
can be really intensive, especially at higher resolutions (you'll see
why very soon). That's why I'd recommend running most examples at a
DPR of 1, but if you have a computer more on the weaker end in terms
of specs, I'd recommend going even lower.
With a lower DPR, you should be able to appreciate the shader-based
scenes from this article at the maximum FPS possible. All demos will
be editable so you can fine-tune that value to fit your needs.
We need to do a few tweaks to our fragment shader before we can start
implementing our Raymarcher:
1. Normalize our UV coordinates.
2. Shift our UV coordinates to be centered.
3. Adjust them to the current aspect ratio of the screen.
These steps give us a coordinate system where the center of the screen
is at the coordinates (0, 0) while preserving the appearance of our
shader regardless of screen resolutions and aspect ratios (it won't
appear stretched). That process may be hard to understand at first,
but here's a diagram to illustrate the math involved:
Diagram illustrating the normalization, shift, and aspect ratio
adjustments of our UV coordinates, which ensure that our raymarched
scenes will be resizable and at the correct aspect ratio.
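In GLSL, those three adjustments boil down to a few lines at the top of main (this is the same setup used in the final scene at the end of this article):

vec2 uv = gl_FragCoord.xy / uResolution.xy; // 1. normalize to the [0, 1] range
uv -= 0.5;                                  // 2. shift so (0, 0) is the center of the screen
uv.x *= uResolution.x / uResolution.y;      // 3. correct for the aspect ratio of the screen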
Beaming rays
Let's implement our Raymarching algorithm step-by-step from the
definition established earlier. We need:
1. A rayOrigin from where all our rays will emerge. E.g. vec3(0.0, 0.0, 5.0).
2. A rayDirection, equivalent to normalize(vec3(uv, -1.0)), to allow us to beam rays in every direction on the screen along the negative z-axis.
3. A raymarch function to march from the rayOrigin following the rayDirection and detect when we're close enough to a surface to draw it.
4. An SDF of any kind (we'll use a sphere) that our raymarch function will use to calculate how close it is to the surface at any given point of the raymarching loop.
5. A maximum number of steps MAX_STEPS and a surface distance SURFACE_DIST below which we can safely assume we're close enough to draw a pixel.
Diagram representing the position of a theoretical sphere and our ray
origin in relation to the center of the scene.
Our raymarch function will loop for up to MAX_STEPS: either we hit the
surface of the shape defined by our SDF, or we reach the step limit,
in which case we draw nothing.
Raymarch function
#define MAX_STEPS 100
#define MAX_DIST 100.0
#define SURFACE_DIST 0.01

float scene(vec3 p) {
  float distance = sdSphere(p, 1.0);
  return distance;
}

float raymarch(vec3 ro, vec3 rd) {
  float dO = 0.0; // distance traveled from the ray origin

  for(int i = 0; i < MAX_STEPS; i++) {
    vec3 p = ro + rd * dO;

    float dS = scene(p);
    dO += dS;

    if(dO > MAX_DIST || dS < SURFACE_DIST) {
      break;
    }
  }
  return dO;
}
If we try running this code within our canvas, we should obtain the
following result.
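To see something on screen, we still need a main function that turns the distance returned by raymarch into a pixel color. A minimal sketch could look like this, drawing white pixels for rays that hit the sphere and black pixels everywhere else (the camera position matches the rayOrigin listed above; the colors are just illustrative):

void main() {
  // Normalized, centered, aspect-ratio-corrected UV coordinates
  vec2 uv = gl_FragCoord.xy / uResolution.xy;
  uv -= 0.5;
  uv.x *= uResolution.x / uResolution.y;

  vec3 ro = vec3(0.0, 0.0, 5.0);         // ray origin
  vec3 rd = normalize(vec3(uv, -1.0));   // ray direction through this pixel

  float d = raymarch(ro, rd);

  // Rays that stopped before MAX_DIST hit the sphere; the rest missed
  vec3 color = d < MAX_DIST ? vec3(1.0) : vec3(0.0);

  gl_FragColor = vec4(color, 1.0);
}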
Adding some depth with light
We just drew a sphere with nothing but GLSL code. However, it looks
more like a circle because our scene has no light or lighting model
implemented, which means it doesn't have much depth. That is quite
similar to the first mesh you render in Three.js using
MeshBasicMaterial: the lack of shadows, reflections, or diffuse
lighting makes the result look flat.
If you read my article Refraction, dispersion, and other shader light
effects, we had a similar issue there, and that's where we introduced
the concept of diffuse light. Lucky for us, we can reuse the same
formula and principles from that blog post: by using the dot product
of the normal of the surface and a light direction vector, we can get
some simple lighting in our raymarched scene.
GLSL implementation of diffuse lighting
float diffuse = max(dot(normal, lightDirection), 0.0);
Diffuse
You may wonder why we use the dot product to know whether a given
point of the surface is hit by light or not, so here's a quick
refresher:
* The dot product between 2 vectors a and b is equal to a · b = |a| × |b| × cos(θ), where θ is the angle between a and b.
* When the vectors point in the same direction, the dot product is positive.
* When the vectors are orthogonal, the dot product is 0.
If we transpose those notions to lighting, we get that any point where
the normal is orthogonal or opposite to the light direction (i.e.
where the dot product is lower than or equal to 0) is in darkness, and
any other point should receive some light:
Diagram showcasing how the dot product is used to obtain the 'amount'
of light that a given point receives from a given light source.
The only issue is that we do not have easy access to the normal vector
as we do in rasterized scenes: we need to calculate it for each "hit"
between our rays and a surface. Luckily, Inigo Quilez already went
deep into this subject, and I invite you to read his article on it, as
it will give you a better understanding of the underlying formula,
which we'll use throughout all the examples of this article:
getNormal function that returns the normal vector of a point p of the
surface of an object
vec3 getNormal(vec3 p) {
  vec2 e = vec2(0.01, 0.0);

  vec3 n = scene(p) - vec3(
    scene(p - e.xyy),
    scene(p - e.yxy),
    scene(p - e.yyx));

  return normalize(n);
}
Applying both of those formulas gives us a nicely lit raymarched
scene. Sprinkle some uTime on top of our light position, and we can
appreciate a more dynamic composition that reacts to light in real
time.
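Putting getNormal and the diffuse formula together, the shading part of the fragment shader could look roughly like the sketch below, placed right after the raymarching call in main (the light position, its uTime-based animation, and the surface color are illustrative choices, not the exact values from the demo):

float d = raymarch(ro, rd);
vec3 color = vec3(0.0);

if (d < MAX_DIST) {
  vec3 p = ro + rd * d;                    // point where the ray hit the surface
  vec3 normal = getNormal(p);

  // Animate the light position over time with the uTime uniform
  vec3 lightPosition = vec3(cos(uTime), 2.0, sin(uTime));
  vec3 lightDirection = normalize(lightPosition - p);

  float diffuse = max(dot(normal, lightDirection), 0.0);
  color = vec3(1.0) * diffuse;
}

gl_FragColor = vec4(color, 1.0);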
Soft shadows
With SDFs, we easily have access to information about the entire
scene, which lets us apply some more advanced lighting/shadowing
techniques. Inigo explores this idea in his article on soft shadows,
where he presents a way to calculate the softness of shadows through
a loop that is akin to the Raymarching loop:
* It uses SDFs to calculate the distance to a surface.
* It decreases the illumination of a given point (which starts at 1.0) as we go through the iterations of this loop, based on the SDF distance and a softness parameter k.
* If the distance obtained through the SDF falls below a certain threshold, the illumination of the point is set to 0.0, i.e. completely dark.
Soft shadow function from Inigo Quilez
float softshadow(in vec3 ro, in vec3 rd, float mint, float maxt, float k) {
  float res = 1.0;
  float t = mint;

  for(int i = 0; i < 256 && t < maxt; i++) {
    float h = scene(ro + rd * t);
    if(h < 0.001) {
      // The shadow ray hit a surface: the point is fully in shadow
      return 0.0;
    }
    // The closer a surface passes by the shadow ray, the darker the shadow
    res = min(res, k * h / t);
    t += h;
  }
  return res;
}
Soft shadows are not the only thing SDFs make easy: they also let us combine objects. However, when we combine several SDFs with the plain min (for a union) or max (for an intersection) operator, the surfaces meet with a hard crease: the combined distance function is continuous, but its derivative is not, as the chart below illustrates.
Chart representing a non-smooth intersection of 2 objects through the
function f: x -> abs(x/2) and its derivative, which is discontinuous.
We can use a pinch of math to obtain a smooth minimum/maximum. Once
again, Inigo Quilez wrote on the subject pretty extensively, and his
polynomial smoothmin variant became the standard in many Shadertoy
scenes. This video from The Art of Code also goes into more detail,
but with a more visual approach on how to get to the formula
step-by-step.
Charts representing, respectively, a curve obtained by using the
standard min operator and a curve obtained by using smoothmin.
GLSL implementation of the smoothmin function
float smoothmin(float a, float b, float k) {
  float h = clamp(0.5 + 0.5 * (b - a) / k, 0.0, 1.0);
  return mix(b, a, h) - k * h * (1.0 - h);
}
Thanks to this smoothmin function, we not only get prettier unions,
but we can also have objects behave more like liquids, or feel more
organic, when moving and blending together. That's something that is
quite difficult to do in a rasterized scene and would require a lot of
vertices, but it only takes a few lines of GLSL to obtain a great
result with Raymarching!
The scene below is an example of smooth minimum applied to three
spheres alongside some Perlin noise, akin to the one I made for this
showcase.
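As a rough sketch of what such a scene function could look like, here are three spheres blended with smoothmin; the positions, radii, smoothing factor, and uTime-based motion are illustrative, and the Perlin noise displacement used in the demo is left out for brevity. Each sphere is positioned by offsetting the sampling point p, a trick detailed in the next section:

float scene(vec3 p) {
  float sphere1 = sdSphere(p - vec3(-0.8, 0.0, 0.0), 0.6);
  float sphere2 = sdSphere(p - vec3(0.8, 0.0, 0.0), 0.6);
  float sphere3 = sdSphere(p - vec3(0.0, cos(uTime) * 0.8, 0.0), 0.6);

  // Blend the spheres with a smooth minimum instead of a hard min
  float distance = smoothmin(sphere1, sphere2, 0.5);
  distance = smoothmin(distance, sphere3, 0.5);

  return distance;
}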
Moving, rotating, and scaling
While the union and intersection of SDFs may be straightforward to
picture in one's mind, operations such as translations, rotations,
and scale can feel a bit less intuitive, especially when having only
dealt with rasterized scenes in the past.
To me, positioning a sphere in a raymarched scene at a given set of
coordinates first made more sense as rendering the SDF, picking it up,
and moving it to the desired position, which, unfortunately for this
intuition, is wrong. In the world of Raymarching, you need to move the
sampling point in the opposite direction of where you wish to place
your SDF object. A simple way to visualize this is to:
* Imagine yourself as a point in a raymarched scene containing a sphere.
* If you take two steps to the right, the sphere will appear to you two steps further to the left.
Example of moving SDFs by moving the sampling point p
float scene(vec3 p) {
  float plane = p.y + 1.0;
  float sphere = sdSphere(p - vec3(0.0, 1.0, 0.0), 1.0);

  float distance = min(sphere, plane);
  return distance;
}
Rotating follows the same way of thinking:
* We don't rotate the SDF itself.
* We apply the rotation to the sampling point instead.
Example of a rotation in a raymarched scene applied to the sampling point
float scene(vec3 p) {
  // rotate is assumed to be a helper rotating p around the given axis by the given angle
  // (e.g. the rotate function from glsl-rotate mentioned below)
  vec3 p1 = rotate(p, vec3(0.0, 1.0, 0.0), 3.14 * 2.0);
  float distance = sdSphere(p1, 1.0);

  return distance;
}
Rotation Matrix
Rotating stuff in GLSL requires the use of rotation matrices. I know
those can seem frightening at first, but if you take the time to learn
how to get to those formulas, trust me, you'll feel way more
comfortable using them.
For brevity, I'm not going to detail in this blog post how to obtain
those matrices (maybe in a dedicated quaternions and rotation matrices
article in the future), but I'd recommend checking out this video
instead.
In the meantime, you can use the following rotate2D and rotate3D GLSL
snippets:
Example of rotate2D and rotate3D GLSL functions using rotation
matrices
// 2D rotation around the origin
mat2 rotate2D(float angle) {
  float s = sin(angle);
  float c = cos(angle);
  return mat2(c, -s, s, c);
}

// 3D rotation around the x-axis
mat3 rotateX3D(float angle) {
  float s = sin(angle);
  float c = cos(angle);
  return mat3(
    1.0, 0.0, 0.0,
    0.0, c, -s,
    0.0, s, c
  );
}

// 3D rotation around the z-axis
mat3 rotateZ3D(float angle) {
  float s = sin(angle);
  float c = cos(angle);
  return mat3(
    c, -s, 0.0,
    s, c, 0.0,
    0.0, 0.0, 1.0
  );
}
You can also use one of the pre-made rotation helper functions from
glsl-rotate if you have the setup required to load them.
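For example, here is a sketch of a box spinning around the x-axis over time by rotating the sampling point (sdBox is the box SDF we'll define in the fractal section below; the box size and rotation speed are arbitrary):

float scene(vec3 p) {
  // Rotate the sampling point, not the object itself
  vec3 p1 = rotateX3D(uTime) * p;
  float distance = sdBox(p1, vec3(0.75));

  return distance;
}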
Scaling is even weirder. To scale, you need to multiply your sampling
point by a factor:
* Multiplying by two will make the resulting shape half the size.
* Multiplying by 0.5 will make the shape twice as big.
However, by multiplying our sampling point, we mess a bit with our
raymarcher and may accidentally have it step inside our object. To
work around this issue, we have to divide our step size (i.e. the
distance returned by the SDF) by the same factor we're scaling our
shape by:
Example of scaling an SDF in a raymarched scene
float scene(vec3 p) {
  float scale = 2.0;
  vec3 p1 = p * scale;

  float sphere = sdSphere(p1, 1.5);
  // Divide the resulting distance by the scale factor to keep the raymarcher from overstepping
  float distance = sphere / scale;

  return distance;
}
Combining all those operations and transformations and adding our
uTime uniform to the mix can yield gorgeous results. You can see one
such beautifully executed raymarched scene that uses those operations
on Richard Mattka's portfolio, which @Akella reproduced in one of his
streams.
I give you my own simplified implementation of it below, also
featured on my React Three Fiber showcase website, which leverages
all the building blocks of Raymarching featured in this article so
far:
Scaling to infinity
One trippy aspect of Raymarching that really blew my mind early on is
the ability to render infinite-looking scenes with very little code.
You can achieve that by putting together lots of SDFs, positioning
them programmatically, moving your camera, or increasing the maximum
number of steps to render further in space. However, the more SDFs we
use, the slower our scene gets.
If you've tried to do the same in a classic rasterized scene, you have
faced the same issues and worked around them using mesh instances
instead of rendering discrete meshes. Luckily, Raymarching lets us use
a similar principle: reusing a single SDF to add as many objects as
desired to our scene.
Repeat function used to periodically duplicate our sampling point
vec3 repeat(vec3 p, float c) {
  // mod tiles space with a period of c; subtracting 0.5 * c centers each tile around the origin
  return mod(p, c) - 0.5 * c;
}
The function above is what makes this possible:
* Using the mod function (modulo) on the sampling point p lets us take a chunk of space defined by the second argument and tile it infinitely in all directions (see the diagram below showcasing the mod function applied to a single dimension).
* We can then "instantiate" many objects from a single SDF, one per tile, giving the illusion of shapes stretching to infinity.
Chart representing the mod function with a period of 2. Notice the
repeating pattern along the x-axis. Now use this model to picture in
your mind the same repetition happening along all axes x, y, and z in
our raymarched scene.
The demo scene below showcases how you can include the repeat
function with any SDF to create infinite instances of an object in
every direction in space:
Notice that if the second argument of the modulo function is low, the
objects will appear closer to one another (more frequent
repetitions). If higher, they will appear further apart.
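In code, combining repeat with the sphere SDF from earlier might look like the sketch below (the tile size of 4.0 is arbitrary):

float scene(vec3 p) {
  // Tile space every 4 units in every direction, then place one sphere per tile
  vec3 p1 = repeat(p, 4.0);
  float distance = sdSphere(p1, 1.0);

  return distance;
}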
While being able to render scenes that stretch to infinity is
impressive, the mod function can also have an incredible effect in a
more "limited" way: creating fractals.
That's what Inigo Quilez explores in his article about Menger
fractals, which are nothing more than an "iterated intersection of a
cross and a box SDF" that relies solely on the operations we've seen
in this part:
* Render a simple box using its SDF.

float sdBox(vec3 p, vec3 b) {
  vec3 q = abs(p) - b;
  return length(max(q, 0.0)) + min(max(q.x, max(q.y, q.z)), 0.0);
}

float scene(vec3 p) {
  float d = sdBox(p, vec3(6.0));

  return d;
}
* Render an infinite cross SDF, which is the union of 3 boxes.
* Intersect them to obtain a box with square holes at the center of each face, using the max operator.

float sdBox(vec3 p, vec3 b) {
  vec3 q = abs(p) - b;
  return length(max(q, 0.0)) + min(max(q.x, max(q.y, q.z)), 0.0);
}

// The SDF of this cross is 3 boxes stretched to infinity along all 3 axes
// (in practice, replace inf with a very large constant, e.g. 1e10)
float sdCross(in vec3 p) {
  float da = sdBox(p.xyz, vec3(inf, 1.0, 1.0));
  float db = sdBox(p.yzx, vec3(1.0, inf, 1.0));
  float dc = sdBox(p.zxy, vec3(1.0, 1.0, inf));
  return min(da, min(db, dc));
}

float scene(vec3 p) {
  float d = sdBox(p, vec3(6.0));
  float c = sdCross(p);

  float distance = max(d, c);
  return distance;
}
By doing these operations in a loop, making our combined SDF smaller
at each iteration by scaling it down and increasing the number of
repetitions, the resulting SDF can output intricate objects displaying
repeating patterns that could theoretically go on forever. Hence, this
falls into the category of fractals.
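Here is a sketch of that loop, along the lines of the version in Inigo's Menger fractal article; it starts from a unit box (the demo uses a bigger one), and the iteration count is tweakable:

float scene(vec3 p) {
  float d = sdBox(p, vec3(1.0));

  float s = 1.0;
  for(int i = 0; i < 4; i++) {
    // Tile space into ever-smaller cells (the cells shrink by a factor of 3 each iteration)
    vec3 a = mod(p * s, 2.0) - 1.0;
    s *= 3.0;
    vec3 r = abs(1.0 - 3.0 * abs(a));

    // Remove the cross-shaped tunnels at this scale from what remains of the box
    float c = sdCross(r) / s;
    d = max(d, c);
  }

  return d;
}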
The demo below showcases the Menger fractal implementation from
Inigo, using the building blocks we laid out in this article with the
inclusion of soft shadows, which really shine (no pun intended) for
this specific use case.
In the demo above, try to:
* Increase the number of iterations and notice how each iteration adds ever-smaller details to the fractal.
* Change the scale factor to bigger numbers and notice how quickly we converge to more intricate SDFs.
Building Worlds with Raymarching and noise derivatives
We've finally reached the part focusing on the reason I wanted to
write this blog post in the first place. Now that we've warmed up and
gotten familiar with the building blocks of Raymarching, we can
explore the beautiful art of painting landscapes with those same
techniques.
If you spend some time searching on Shadertoy, those raymarched
landscapes can feel both breathtaking when looking at the results they
yield and quite intimidating when looking at the code displayed on the
right-hand side of the website. That is why I spent a great deal of
time analyzing a couple of those landscapes, trying to find the
repetitive patterns used by the authors and breaking them down for you
into more digestible bits.
Composing noise with Fractal Brownian Motion
You've probably played quite a bit with noise in your own shader work
and are familiar with the ability of the different types to generate
more organic patterns.
Refresher
If not, don't worry! You can head to The Study of Shaders with React
Three Fiber, where I give a quick walkthrough of what Perlin or
Simplex noises are and how to use them in your own creations.
In that blog post, I also briefly mention the concept of Fractal
Brownian Motion: a method to compose noises and obtain a more
granular resulting noise featuring more fine details:
* The final detailed noise builds itself in a loop.
* We start with a simple noise with a given amplitude and frequency for the first iteration.
* Then, for each iteration, we apply the same noise but decrease the amplitude and increase the frequency (and add some transformation if we want to), thus creating sharper details but with less influence on the overall scene.
We can visualize this in 2D by looking at a simple curve. Each
iteration is called an Octave, and the higher we go in terms of
octaves, the sharper and better looking our noise will be:
Charts representing the application of a noise on top of itself
through 3 octaves, where at each octave we decrease its amplitude and
increase its frequency, thus yielding a more organic-looking and
sharper noise the bigger the maximum octave number is.
Applying that type of noise to the SDF of a plane, like in the code
snippet below, can yield some very sharp-looking mountainous
landscapes stretching to infinity.
Example of Fractal Brownian Motion applied to a raymarched plane
// importing perlin noise from glsl-noise through glslify
#pragma glslify: cnoise = require(glsl-noise/classic/2d)

#define PI 3.14159265359

mat2 rotate2D(float a) {
  float sa = sin(a);
  float ca = cos(a);
  return mat2(ca, -sa, sa, ca);
}

float fbm(vec2 p) {
  float res = 0.0;
  float amp = 0.8;
  float freq = 1.5;

  for(int i = 0; i < 12; i++) {
    res += amp * cnoise(p * 0.8);
    amp *= 0.5;
    freq *= 1.05;
    p = p * freq * rotate2D(PI / 4.0);
  }
  return res;
}

float scene(vec3 p) {
  float distance = 0.0;
  distance += fbm(p.xz * 0.3);
  distance += p.y + 2.0;

  return distance;
}
Add to that the diffuse lighting model we looked at in the earlier
parts of this article and some soft shadows, and you can get a
beautiful raymarched landscape with just a few lines of GLSL. Those
are the techniques I used to build my very first raymarched terrain,
and I was quite satisfied with the result. Also, look at how the
shadows update in real time as we move the position of the light.
Tweet from @MaximeHeckel, 2:40 AM - Jul 13, 2023: "You can create
entire procedurally generated worlds with shaders with a couple of
well-placed math formulas! From the sharpness of the terrain, the
light, the fog, and the shadows of those mountains: it's all GLSL
(don't run this on your phone pls)"
However, I quickly realized that:
1. I needed my FBM loop to reach high octaves for a sharp-looking result. That caused the frame rate to drop significantly, as the higher the octaves for my FBM, the higher the complexity of my raymarcher (nested loops). This scene was pulling a lot of juice from my laptop, and even more so at higher resolutions!
2. Despite using Perlin noise as the base for my FBM, the resulting landscape was just an endless series of mountains. Each looked distinct and unique from its neighbor, but the overall result looked repetitive.
Noise derivatives
By studying Inigo's own 3D landscape creations, I noticed that in
many of them, he was using a tweaked Fractal Brownian Motion to
generate his terrains through the use of noise derivatives.
In his blog posts on the topic, he presents this technique as an
updated version of FBM to generate realistic-looking noise patterns.
This technique is, at first glance, a little bit more complicated to
explain concisely and also involves a little bit more math than most
people might be comfortable with, but here's my own attempt at
highlighting its key features:
* It relies on sampling a grayscale noise texture (we'll get to that in a bit) at various points, i.e. looking at the color value stored at a given location, and interpolating between them.
* Instead of relying only on those "noise values", we also use the derivative between the sampled points, which represents the steepness or rate of change.
We thus end up with more "data" about the physical properties of our
terrain: the higher derivatives correspond to steeper regions of our
landscapes, while lower values will result in flat plateaux or
downward slopes, resulting in better-looking, more detailed terrains.
The GLSL code for that function looks like this:
Function returning noise value and noise derivative
// Noise texture passed as a uniform
uniform sampler2D uTexture;

vec3 noised(vec2 x) {
  vec2 p = floor(x);
  vec2 f = fract(x);
  vec2 u = f * f * (3.0 - 2.0 * f);

  float a = textureLod(uTexture, (p + vec2(0.0, 0.0)) / 256., 0.).x;
  float b = textureLod(uTexture, (p + vec2(1.0, 0.0)) / 256., 0.).x;
  float c = textureLod(uTexture, (p + vec2(0.0, 1.0)) / 256., 0.).x;
  float d = textureLod(uTexture, (p + vec2(1.0, 1.0)) / 256., 0.).x;

  float noiseValue = a + (b - a) * u.x + (c - a) * u.y + (a - b - c + d) * u.x * u.y;
  vec2 noiseDerivative = 6.0 * f * (1.0 - f) * (vec2(b - a, c - a) + (a - b - c + d) * u.yx);

  return vec3(noiseValue, noiseDerivative);
}
Texture
I noticed from several Shadertoy raymarched landscapes that they were
all sampling more or less the same texture for their noise
derivatives.
Noise texture used in our noised function. Originally obtained on
Shadertoy.
That gave me better results at lower octaves than using a hash
function (we'll talk about that below).
[Optional] Quick Math refresher on how to obtain the derivative
From these noise derivatives, we can generate the terrain in a
similar fashion to the FBM method. For each iteration:
* We call our noised function for our sample point.
* We accumulate the derivatives, which will accentuate the features of the terrain as we go through the iterations of our loop.
* We adjust the height a of our terrain based on the value of the noise.
* We reduce and flip the sign of the scaling factor b. That will result in each subsequent iteration having less effect on the global aspect of the terrain, while also alternating between increases and decreases in the overall height of our terrain.
* We transform the sampling point for the next loop by multiplying it by a rotation matrix (which results in a slight rotation for the following iteration) and scaling it up, which increases the noise frequency and adds finer details at each iteration.
Alternate FBM process using the noise value alongside the noise derivative
// 2D rotation matrix used to slightly rotate the sampling point between octaves
// (assumed here: the original snippet references an m defined elsewhere)
const mat2 m = mat2(0.8, -0.6, 0.6, 0.8);

float terrain(vec2 p) {
  vec2 p1 = p * 0.06;
  float a = 0.0;
  float b = 2.5;
  vec2 d = vec2(0.0);
  float scl = 2.75;

  for(int i = 0; i < 8; i++) {
    vec3 n = noised(p1);
    d += n.yz;
    a += b * n.x / (dot(d, d) + 1.0);
    b *= -0.4;
    a *= .85;
    p1 = m * p1 * scl;
  }

  return a * 3.0;
}
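To feed this height into the raymarcher, the scene function typically compares the sampling point's height against the terrain height, along these lines (a sketch; depending on your sign convention and offsets, you may need to flip or shift the terrain term, as in the earlier FBM example):

float scene(vec3 p) {
  // Positive above the terrain surface, negative below it
  float distance = p.y - terrain(p.xz);
  return distance;
}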
The screenshot below showcases the terrain yielded at each octave
(i.e. each iteration of our FBM loop) from 2 to 7:
Screenshots representing a raymarched terrain viewed from the top,
from octaves 2 to 7, obtained through FBM and noise derivatives.
Notice the cracks and slopes forming after the 5th octave is reached.
Applying this technique on top of everything we've learned through
this article gives us a magnificent landscape that is entirely
tweakable, more detailed, and less repetitive than its standard FBM
counterpart. I'll let you play with the scale factors, noise weight,
and height in the demo below so you can experiment with more diverse
terrains.
Sky, fog, and Martian landscape
Generating the terrain is not all there is when building landscapes
with Raymarching. One of the foremost things I like to add is fog:
the further in the distance an element of my landscape is, the more
faded and enveloped in mist it should appear. This adds a layer of
realism to the scene and can also help you color it!
Once again, we can use some math and physics principles to create such
an effect. Using Beer's law, which states that the intensity of light
passing through a medium decays exponentially with the distance it
travels, we can get a realistic fog effect:
I = I0 * exp(-a * d), where a is the absorption or attenuation
coefficient describing how "thick" or "dense" the medium is.
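In its most basic form, that law translates to mixing the rendered color toward a fog color based on an exponential of the traveled distance. Here is a minimal sketch of the idea (the attenuation coefficient and fog color are illustrative), before looking at Inigo's more elaborate version:

vec3 applyBasicFog(vec3 color, float d) {
  float a = 0.07;                        // attenuation: how dense the medium is
  float fogAmount = 1.0 - exp(-a * d);   // ~0.0 up close, approaching 1.0 far away
  vec3 fogColor = vec3(0.5, 0.6, 0.7);

  return mix(color, fogColor, fogAmount);
}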
That's the math that Inigo uses as the base for his own fog
implementation, which is a little bit more elaborate and is also
featured in most of his creations.
Inigo Quilez's implementation of fog using exponential decay
// This fog is presented in Inigo Quilez's article.
// It's a version of the fog function that keeps the "fog" at the
// bottom of the scene and doesn't let it go above the horizon/mountains.
vec3 fog(vec3 ro, vec3 rd, vec3 col, float d) {
  vec3 pos = ro + rd * d;
  float sunAmount = max(dot(rd, lightPosition), 0.0);

  float b = 1.3;
  // Applying exponential decay to the fog based on distance
  float fogAmount = 0.2 * exp(-ro.y * b) * (1.0 - exp(-d * rd.y * b)) / rd.y;
  vec3 fogColor = mix(vec3(0.5, 0.2, 0.15), vec3(1.1, 0.6, 0.45), pow(sunAmount, 2.0));

  return mix(col, fogColor, clamp(fogAmount, 0.0, 1.0));
}
When it comes to adding a background color for our sky, it's really
straightforward: whatever was not hit by the raymarching loop is our
sky and thus can be colored in any way we want!
Applying a sky color to the background and fog to a raymarched scene
vec3 lightPosition = vec3(-1.0, 0.0, 0.5);

void main() {
  vec2 uv = gl_FragCoord.xy / uResolution.xy;
  uv -= 0.5;
  uv.x *= uResolution.x / uResolution.y;

  vec3 color = vec3(0.0);
  vec3 ro = vec3(0.0, 18.0, 5.0);
  vec3 rd = normalize(vec3(uv, 1.0));

  float d = raymarch(ro, rd);
  vec3 p = ro + rd * d;

  vec3 lightDirection = normalize(lightPosition - p);
  // Shade hits with diffuse lighting; everything the rays missed is our sky.
  // Then blend both into the fog (the color values here are illustrative).
  if (d < MAX_DIST) {
    vec3 normal = getNormal(p);
    float diffuse = max(dot(normal, lightDirection), 0.0);
    color = vec3(1.0, 0.6, 0.35) * diffuse;
  } else {
    color = vec3(0.6, 0.45, 0.4);
  }

  color = fog(ro, rd, color, d);

  gl_FragColor = vec4(color, 1.0);
}