Ray Marching
3/13/2022

Ray marching is a technique for rendering signed distance fields (SDFs). An SDF is a function that, for any point in space, returns the distance from that point to the surface of a volume, with the sign indicating whether the point is inside or outside. Ray marching renders such volumes by detecting collisions between rays sent from the camera and the surfaces the SDFs define.
In principle, a ray is sent out for each pixel on the screen, and if the ray intersects an SDF, that pixel is rendered with the SDF's material. To decide whether a ray intersects an SDF, the ray is "marched" forward by some amount; if the resulting point is within (or very close to) the surface, it is considered a hit.
How is the amount the ray marches determined? An SDF takes a point and returns the distance from that point to the nearest surface. Since nothing can be closer than that distance, the ray can safely be marched forward by exactly the distance returned; this is the largest step that cannot overshoot a surface, which makes the technique efficient.
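The loop described above can be sketched in JavaScript (the article's demos run the real version in GLSL); the `sceneSDF` parameter, step limit, and hit threshold here are illustrative assumptions:

```javascript
// March a ray from `origin` along unit vector `dir`.
// `sceneSDF(p)` returns the signed distance from point p to the scene.
// Returns the distance traveled on a hit, or null on a miss.
function rayMarch(origin, dir, sceneSDF, maxSteps = 100, maxDist = 100, epsilon = 0.001) {
  let traveled = 0;
  for (let i = 0; i < maxSteps; i++) {
    const p = [
      origin[0] + dir[0] * traveled,
      origin[1] + dir[1] * traveled,
      origin[2] + dir[2] * traveled,
    ];
    const d = sceneSDF(p);
    if (d < epsilon) return traveled; // close enough: hit
    traveled += d;                    // safe step: nothing is closer than d
    if (traveled > maxDist) break;    // ray escaped the scene
  }
  return null;
}
```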
A simple SDF to start with is a sphere. The distance from a point to the surface of a sphere is the distance between the point and the sphere's center, minus the radius. The result of this formula is positive for any point outside the sphere, zero on its surface, and negative for any point within.
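That formula translates directly into code; a minimal JavaScript version (the demos use the GLSL equivalent):

```javascript
// Signed distance from point p to a sphere with the given center and radius.
function sphereSDF(p, center, radius) {
  const dx = p[0] - center[0];
  const dy = p[1] - center[1];
  const dz = p[2] - center[2];
  return Math.sqrt(dx * dx + dy * dy + dz * dz) - radius;
}
```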
The first example incorporates everything up to this point. For demonstration, the shader itself is written in GLSL for WebGL, and the demo is written in JavaScript. This example, and every example in this article, specifies a plane made of two triangles that covers the entire viewport; the shader is then rendered onto this mesh. The vertex shader is simple and does not change between examples: it only takes the positions of the plane's vertices and passes them to the fragment shader, where each pixel's position is interpolated. The range of positions is from (-1,-1) to (1,1). As this range is square, and the viewport is wider than it is tall, we need to pass the aspect ratio to the shader so the image is not squashed. These are the only values passed to the shader from the program; the rest are hard-coded for simplicity (except for the texture, which gets passed in a later example).
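For reference, the fullscreen plane is just six vertices forming two triangles; a sketch of the vertex data (the exact winding order here is an assumption):

```javascript
// Two triangles covering clip space from (-1,-1) to (1,1),
// drawn with gl.drawArrays(gl.TRIANGLES, 0, 6) via a position attribute.
const quadVertices = new Float32Array([
  -1, -1,   1, -1,   1,  1,  // first triangle
  -1, -1,   1,  1,  -1,  1,  // second triangle
]);
```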
This does not look very interesting yet: it is just a red circle. To give the surface some depth, and make it look like a sphere, we can generate normals. This is done by sampling the change in distance when moving the point slightly along each of the x, y and z axes. Plugging those three differences into a vector and normalizing it gives the normal at that point: if moving along an axis increases the distance, the normal points toward that axis, and if the distance decreases, it points the opposite way.
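A sketch of this gradient estimate in JavaScript (the epsilon value is an assumption; the demos do the same in GLSL):

```javascript
// Estimate the surface normal at point p by sampling how the SDF
// changes along each axis (a finite-difference gradient).
function estimateNormal(p, sdf, eps = 0.0001) {
  const n = [
    sdf([p[0] + eps, p[1], p[2]]) - sdf([p[0] - eps, p[1], p[2]]),
    sdf([p[0], p[1] + eps, p[2]]) - sdf([p[0], p[1] - eps, p[2]]),
    sdf([p[0], p[1], p[2] + eps]) - sdf([p[0], p[1], p[2] - eps]),
  ];
  const len = Math.hypot(n[0], n[1], n[2]);
  return [n[0] / len, n[1] / len, n[2] / len];
}
```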
Once we have a normal, simple lighting is easy: take the dot product between the normal and the direction toward the light to get the intensity at that point, and multiply by the diffuse color, in this case a dark gray. This example also adds an ambient light so that the dark spots are not too dark.
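A minimal sketch of this diffuse-plus-ambient shading in JavaScript (the default colors and the clamp to zero are illustrative assumptions):

```javascript
// Lambertian diffuse shading with an ambient floor.
// `normal` and `lightDir` are unit vectors; returns an RGB color.
function shade(normal, lightDir, diffuse = [0.3, 0.3, 0.3], ambient = 0.1) {
  const d = normal[0] * lightDir[0] + normal[1] * lightDir[1] + normal[2] * lightDir[2];
  const intensity = Math.max(d, 0) + ambient; // clamp so back faces get only ambient
  return diffuse.map(c => c * intensity);
}
```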
The distance being negative inside an SDF can be used to combine SDFs in interesting ways. When measuring the distance to two different surfaces, taking the minimum of the two combines both surfaces into one: the closer surface is always the one returned, so it occludes the other, which replicates how vision works. Similarly, taking the maximum of the two distances yields the volume in which the volumes intersect: the surface farther away is returned, and since either distance is positive when the point is outside its surface, the max only returns a negative value (or one within the hit threshold) when both distances are small enough, that is, when the point is inside or close to both volumes. I have also included an example of smoothly combining two volumes, and an example of cutting one volume out of another; for both I suggest examining the corresponding functions in the shader for the formulas.
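The min/max combinations, plus subtraction via negating one distance, can be sketched as follows (the smooth blend in the demo uses a separate formula not reproduced here):

```javascript
// Combine two signed distances.
const union     = (d1, d2) => Math.min(d1, d2);  // both volumes
const intersect = (d1, d2) => Math.max(d1, d2);  // overlap only
const subtract  = (d1, d2) => Math.max(d1, -d2); // d1 with d2 cut away
```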
Ray marching is already an expensive process, and lighting can be very taxing as well. For this reason you would want to bake lighting for more lightweight applications, but baked lighting does not work for dynamic objects. A way of working around this is a matcap texture, which is a texture that provides the color at every point on a sphere. Since this also provides a color for every normal direction, we can use matcaps to render fake lighting and texture. In this case I also kept the previous lighting for a bit more realism, as matcaps do not generally include shadows.
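The usual matcap lookup maps the view-space normal's x and y components into texture coordinates; a sketch, assuming the normal is already in view space:

```javascript
// Map a view-space unit normal to matcap texture coordinates in [0, 1].
// The sphere in the matcap image is sampled where its normal matches ours.
function matcapUV(normal) {
  return [normal[0] * 0.5 + 0.5, normal[1] * 0.5 + 0.5];
}
```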
Many more shapes can be achieved through ray marching (in fact, googling for SDFs provides many such examples), but for this article I will end with a cube. The way the cube's distance is calculated is easier to visualize than to explain, so below is a chart. In this arrangement we are looking to find d2 - d1. d2 is a known value: the distance between the origin and the point being sampled. We can see that the inner distance d1 has a component equal to r, and this component corresponds to the larger component of d2, in this case the x component. Since the inner triangle is similar to the larger triangle, its hypotenuse d1 is proportional to d2, and can be determined by multiplying d2 by the ratio between r and the point's x component. Because the cube is at the origin, this ratio is r/x. If the larger component were the y component, the ratio would be r/y. This idea extends easily to the third dimension: find the largest component (among the absolute values of the point's x, y and z) and divide r by it. The result is that d2 - d1 = d2 * (1 - r/max(|x|,|y|,|z|)).
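A JavaScript sketch of this cube distance (the demos implement the same idea in GLSL):

```javascript
// Distance from point p to a cube centered at the origin with
// half-width r, following the similar-triangles derivation above.
// Assumes p is not exactly at the origin (the ratio is undefined there).
function cubeSDF(p, r) {
  const d2 = Math.hypot(p[0], p[1], p[2]);
  const largest = Math.max(Math.abs(p[0]), Math.abs(p[1]), Math.abs(p[2]));
  return d2 * (1 - r / largest);
}
```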
For this example I also included a rotation function, though the obvious next step would be to incorporate proper projection and transform matrices.
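Such a rotation can be sketched as rotating the sample point before evaluating the SDF, which effectively rotates the shape (the axis and angle convention here are assumptions; the demo's own function may differ):

```javascript
// Rotate point p around the y axis by `angle` radians.
// Rotating the sample point the opposite way rotates the rendered SDF.
function rotateY(p, angle) {
  const c = Math.cos(angle);
  const s = Math.sin(angle);
  return [c * p[0] + s * p[2], p[1], -s * p[0] + c * p[2]];
}
```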
All of the code, along with a simple demo HTML/JavaScript file, can be found at this GitHub link: https://github.com/realJavabot/raymarching_example_webgl