You've probably run into terrain where the steep sides of a cliff have their texture stretched so much that it looks unrealistic. Maybe you have a procedurally generated world that you have no way to UV unwrap and texture. Tri-planar mapping provides an elegant technique to solve these issues and give you realistic textures from any angle or on any complex shape. Here you will learn about the technique, see the code, and look at some of the benefits, downsides, and other possibilities when using tri-planar mapping.
The most common issue is stretched textures, in particular when it comes to terrain. The problem lies in the UV coordinates of the object you are texturing. In the case of terrain, the UV coordinates are spread out in a grid, evenly spaced in the X-Y plane like so:
This UV layout doesn't take into account height difference in the terrain and causes stretching. You can take measures to even out the area for the steep polygons by carefully unwrapping the UV coordinates, but that leads to a less than ideal result. You still have warped textures and some tiles, such as the center one, are compressed.
You might also be in a position where you cannot unwrap the UV coordinates of the mesh: the terrain or shape could be procedurally generated. Maybe you have a cave system or holes in your shape.
We can solve these issues with the tri-planar mapping technique (also known as "tri-planar texturing").
Tri-Planar Mapping in Detail
First, let's look at the terrain again with tri-planar mapping applied to it:
Now that is much nicer! The stretching is gone and the steep slopes look more realistic.
Tri-planar mapping does this by rendering the texture 3 times, projected along 3 different directions: the X, Y, and Z axes. Picture a box. First the texture is projected down from the positive X-axis towards the negative X-axis. Any fragments (pixels of the geometry) that are facing in the direction of the X-axis get the texture applied to them. Then the same process is applied to the Y-axis and the Z-axis.
These renderings are blended together. So a fragment that is facing half on the X-axis and half on the Z-axis will take half of the X-axis rendering and half of the Z-axis rendering. If the fragment is facing 90% towards the X-axis instead, then it receives 90% of the X-axis rendering and only 10% of the Z-axis. It's like taking 3 spray cans and spraying from the top, the side, and the front.
All of this is done in the fragment shader of your material. It essentially textures the geometry 3 times, once in each direction and then blends the result.
Tri-planar mapping does not use UV coordinates at all. Instead it uses the fragment's world coordinates. Knowing this, let's look at the code.
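Before the fragment shader can do its work, it needs the world-space position and normal. One way to supply them is from the vertex shader; this is a minimal sketch, and the uniform and attribute names here are illustrative, not fixed by the technique:

```glsl
// Vertex shader: pass world-space position and normal to the fragment shader.
// modelMatrix and modelViewProjectionMatrix are assumed uniform names.
uniform mat4 modelMatrix;
uniform mat4 modelViewProjectionMatrix;
attribute vec3 position;
attribute vec3 normal;
varying vec3 coords; // world-space position, used as texture coordinates
varying vec3 wNorm;  // world-space normal

void main() {
    coords = (modelMatrix * vec4(position, 1.0)).xyz;
    // This simple normal transform assumes uniform scaling; for non-uniform
    // scaling, use the inverse-transpose of the model matrix instead.
    wNorm = normalize(mat3(modelMatrix) * normal);
    gl_Position = modelViewProjectionMatrix * vec4(position, 1.0);
}
```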
The first part is to calculate the blend factor for each direction:
// in wNorm is the world-space normal of the fragment
vec3 blending = abs( wNorm );
blending = normalize(max(blending, 0.00001));
// Force weights to sum to 1.0
float b = (blending.x + blending.y + blending.z);
blending /= vec3(b, b, b);
Here it takes in the world-space normal of the fragment (which will be normalized, with each component in the range of -1 to 1) and we take its absolute value. We do not care whether a normal is facing in -X or +X, just that it is on the X-axis. If we did care about the sign, we would be painting the shape from the front, back, left, right, top, and bottom: 3 more projections than we need.
Next we divide each component by the sum of all three, so the weights add up to 1 and we end up with a percentage multiplier for each axis. If the normal is facing straight up along the Y-axis, we get a Y value of 1 and it gets all of the Y-axis painting, while the other axes have values of 0 and get none.
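To make the weights concrete, here is the same computation traced for one hypothetical normal, with the intermediate values as comments:

```glsl
// Worked example: a fragment facing halfway between the X and Y axes.
vec3 wNorm = normalize(vec3(1.0, 1.0, 0.0)); // ~(0.707, 0.707, 0.0)
vec3 blending = abs(wNorm);                   // (0.707, 0.707, 0.0)
blending = normalize(max(blending, 0.00001)); // still ~(0.707, 0.707, 0.0)
float b = blending.x + blending.y + blending.z; // ~1.414
blending /= vec3(b, b, b); // (0.5, 0.5, 0.0): half X projection, half Y
```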
That's the hard part. Next we just mix the three blend values (x,y,z) with the texture at that texture coordinate. Remember, the texture coordinate is in world-space:
vec4 xaxis = texture2D( rockTexture, coords.yz);
vec4 yaxis = texture2D( rockTexture, coords.xz);
vec4 zaxis = texture2D( rockTexture, coords.xy);
// blend the results of the 3 planar projections.
vec4 tex = xaxis * blending.x + yaxis * blending.y + zaxis * blending.z;
And there we have it. "tex" is the final color of the fragment, blended three times from the 3 axes.
It can be very handy to apply a scale factor to the texture coordinates, as you will no doubt want to control the tiling:
// in float scale
vec4 xaxis = texture2D( rockTexture, coords.yz * scale);
vec4 yaxis = texture2D( rockTexture, coords.xz * scale);
vec4 zaxis = texture2D( rockTexture, coords.xy * scale);
vec4 tex = xaxis * blending.x + yaxis * blending.y + zaxis * blending.z;
If you are using tri-planar mapping and normal maps, you will also want to apply the same procedure to the normals in the fragment shader, like so:
vec4 xaxis = texture2D( rockNormalTexture, coords.yz * scale);
vec4 yaxis = texture2D( rockNormalTexture, coords.xz * scale);
vec4 zaxis = texture2D( rockNormalTexture, coords.xy * scale);
vec4 tex = xaxis * blending.x + yaxis * blending.y + zaxis * blending.z;
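Putting the pieces together, a complete fragment shader for the color pass might look like this; it is a sketch, and the uniform and varying names are assumptions rather than requirements:

```glsl
uniform sampler2D rockTexture;
uniform float scale;
varying vec3 coords; // world-space position from the vertex shader
varying vec3 wNorm;  // world-space normal from the vertex shader

void main() {
    // Blend weights from the world-space normal, forced to sum to 1.0.
    vec3 blending = abs(wNorm);
    blending = normalize(max(blending, 0.00001));
    float b = blending.x + blending.y + blending.z;
    blending /= vec3(b, b, b);

    // Sample the texture once per projection axis, in world coordinates.
    vec4 xaxis = texture2D(rockTexture, coords.yz * scale);
    vec4 yaxis = texture2D(rockTexture, coords.xz * scale);
    vec4 zaxis = texture2D(rockTexture, coords.xy * scale);

    // Blend the results of the 3 planar projections.
    gl_FragColor = xaxis * blending.x + yaxis * blending.y + zaxis * blending.z;
}
```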
The first downside you will encounter is performance. Each fragment is textured 3 times, once in each direction. This means the color lookups (and the normal-map lookups, if you use them) are repeated and then blended. If you are already strapped for frame time, you might not want to use tri-planar mapping.
The next significant downside is the blending at 45-degree angles, especially where different textures overlap when you are using texture splatting. You could perform 4 more projections from the diagonal directions, but the performance hit probably will not be worth it. You could instead try blending with a depth map, a technique sometimes used in texture splatting.
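A cheaper way to tighten the transition zones, not covered above but commonly used with tri-planar mapping, is to raise the blend weights to a power before renormalizing them. The exponent here is an arbitrary choice; higher values give harder edges:

```glsl
// Sharpened blend weights: same inputs as before, narrower transitions.
vec3 blending = abs(wNorm);
blending = pow(blending, vec3(4.0)); // exponent is a tunable assumption
blending /= (blending.x + blending.y + blending.z);
```

This costs a few ALU operations instead of extra texture samples, so it is usually the first thing to try before reaching for more projections.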
You should now have an understanding of how tri-planar mapping works and what it can be used for. But it has many other applications where it can be changed slightly to produce interesting results.
As mentioned before, procedural terrain is a good candidate for the technique. Caves, cliffs, and complex lava tunnels are now easy to texture. You could even influence what texture is used where based on some random, or pseudo-random (noise), routines. Elevation or even slope could determine what texture is used.
By modifying the routine to project a texture only from the top (Y-axis) and clamping the blend value to an acceptable threshold, e.g. 10%, you could render snow on the tops of everything in the scene. An atomic blast could scorch everything radiating out from a certain world-coordinate origin point using the same technique, but basing the projection direction on the vector from that origin point and using a dark burn texture.
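The snow idea can be sketched as a small addition to the fragment shader. This assumes a snow sampler and a threshold of your choosing; `snowTexture`, the 0.9 cutoff, and the 0.1 fade range are illustrative values, and `tex` is the blended color from earlier:

```glsl
// Sketch: layer a snow texture onto upward-facing fragments only.
// Fragments with wNorm.y below 0.9 get no snow; full snow by wNorm.y = 1.0.
uniform sampler2D snowTexture;
float snowAmount = clamp((wNorm.y - 0.9) / 0.1, 0.0, 1.0);
vec4 snow = texture2D(snowTexture, coords.xz * scale); // top-down projection
gl_FragColor = mix(tex, snow, snowAmount);
```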
Do any other applications come to mind? Let us know and feel free to discuss.