Displacement Mapping on WebGL

I realized that every real-time rendering application I’ve ever implemented used just a simple Phong shader, with diffuse and specular lighting. So I’m working to change that, and here is the first step: Displacement Mapping.

3D Mesh

This was tricky to do, mainly because WebGL doesn’t support the Tessellation Shader, the pipeline stage used to subdivide a mesh and increase its vertex resolution. So I made my own 3D mesh with a very high polygon count using Blender. This mesh is a Quad Sphere subdivided 4x, with sharp UV coordinates.

It’s essential to use sharp UV coordinates because this way each vertex position is associated with exactly one UV coordinate. If the same position could have more than one UV coordinate, the shape after applying the displacement mapping would be deformed, with holes between adjacent triangles. This happens because two or more vertices at the same position would be displaced to different locations, since their different UV coordinates can map to different displacement values. For example, two coincident vertices whose UVs sample displacement values of 0.2 and 0.8 would end up 0.6 * uDispFactor apart along the normal, leaving a visible crack.

One downside of using “pre-tessellated” meshes like this is the lack of adaptability. For example, even if I’m applying the displacement map of a mirror texture (which is almost flat), I would be spending a considerable part of my frame time rendering an unnecessarily detailed mesh.

Displacement

Since we have a high-resolution quad mesh, to apply the displacement we can simply shift every vertex position according to the value sampled from the displacement map. The shift is made along the direction of the surface normal at each vertex. This is the exact snippet of code applying the shift:

// calculate the displaced position using position + normal * displacement
// (both the position and the normal are in object space here)
vec3 displacedPos = aPosition + normalize(aNormal) * disp * uDispFactor;

Note that both the position and the normal in the snippet are in object space, so the two vectors agree; the model transform is applied once, afterwards. The full code for the vertex shader:

// attributes coming from the vertex buffers
attribute vec3 aPosition;
attribute vec3 aNormal;
attribute vec3 aTangent;
attribute vec3 aBitangent;
attribute vec2 aUV;

// all the variables sent as inputs to the pixel shader
varying highp vec2 oUV;
varying highp vec3 oWPosition;
varying highp vec3 oWNormal;
varying highp vec3 oTangentLightPos;
varying highp vec3 oTangentFragPos;
varying highp vec3 oTangentCameraPos;

// model transform
uniform mat4 uModel;
// transpose inverse model, necessary to apply the model transform to normal vectors
uniform mat4 uTransposeInverseModel;
// view-projection transform
uniform mat4 uViewProjection;

// auxiliary texture struct
struct Texture {
    bool valid;    // presumably set by the application when a map is bound
    sampler2D map;
};

// auxiliary light struct
struct Light {
    vec3 position;
    vec3 ambient;
    vec3 diffuse;
    vec3 specular;
};

// scene components
uniform Light uLight;
uniform vec3 uCamPosition;

// displacement map  
uniform Texture uDisplacement;

// auxiliary value to control the displacement magnitude
uniform float uDispFactor;

// surprisingly WebGL's GLSL (ES 1.00) doesn't provide a built-in transpose function
mat3 transpose(mat3 m) {

    // grab the three columns of the input matrix (GLSL matrices are column-major)
    vec3 i = m[0];
    vec3 j = m[1];
    vec3 k = m[2];

    // rebuild the matrix: each row of m becomes a column of the result
    return mat3(
        vec3(i.x, j.x, k.x),
        vec3(i.y, j.y, k.y),
        vec3(i.z, j.z, k.z)
    );
}

void main(void) {

    // create the transformation to tangent space
    vec3 T = normalize(vec3(uModel * vec4(aTangent, 0.0)));
    vec3 N = normalize(vec3(uTransposeInverseModel * vec4(aNormal, 0.0)));
    // re-orthogonalize T with respect to N
    T = normalize(T - dot(T, N) * N);
    vec3 B = cross(N, T);
    mat3 TBN = transpose(mat3(T, B, N));
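    // since T, B and N are orthonormal, the transpose of the TBN matrix is
    // also its inverse, so this matrix maps world space to tangent space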
    
    // sample the displacement value from the displacement map using the UV
    // coordinate (this runs in the vertex shader, so the device must support
    // vertex texture fetch, i.e. MAX_VERTEX_TEXTURE_IMAGE_UNITS > 0)
    float disp = texture2D(uDisplacement.map, aUV).r;

    // copy the UV coordinate to be used inside the pixel shader
    oUV = aUV;
    // apply the model transform to the normal vector
    // (using the transpose inverse of the original model matrix)
    oWNormal = normalize(mat3(uTransposeInverseModel) * aNormal);

    // calculate the displaced position using position + normal * displacement,
    // keeping both vectors in object space so they agree under any model transform
    vec3 displacedPos = aPosition + normalize(aNormal) * disp * uDispFactor;
    // apply the model transform (w = 1.0, so the translation is applied too)
    vec3 wPosition = vec3(uModel * vec4(displacedPos, 1.0));
    oWPosition = wPosition;

    // get the light, fragment and camera positions in tangent space
    // (this has to happen after the displaced world position is known)
    oTangentLightPos = TBN * uLight.position;
    oTangentFragPos = TBN * wPosition;
    oTangentCameraPos = TBN * uCamPosition;

    // apply the view-projection transform to the world position
    gl_Position = uViewProjection * vec4(wPosition, 1.0);
}
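
The varyings above feed the pixel shader, which this post doesn’t cover. For reference, here is a minimal sketch of a tangent-space Phong pixel shader that would match these outputs; the uDiffuse and uNormal uniforms and the fixed shininess exponent of 32.0 are assumptions of mine, not part of the original code:

precision highp float;

// varyings produced by the vertex shader
varying highp vec2 oUV;
varying highp vec3 oWPosition;
varying highp vec3 oWNormal;
varying highp vec3 oTangentLightPos;
varying highp vec3 oTangentFragPos;
varying highp vec3 oTangentCameraPos;

// same auxiliary structs as in the vertex shader
struct Texture {
    bool valid;
    sampler2D map;
};

struct Light {
    vec3 position;
    vec3 ambient;
    vec3 diffuse;
    vec3 specular;
};

uniform Light uLight;

// hypothetical material inputs, just for this sketch
uniform Texture uDiffuse;
uniform Texture uNormal;

void main(void) {

    // tangent-space normal: sample the normal map if one is bound,
    // otherwise the geometric normal in tangent space is (0, 0, 1)
    vec3 N = vec3(0.0, 0.0, 1.0);
    if (uNormal.valid) {
        N = normalize(texture2D(uNormal.map, oUV).rgb * 2.0 - 1.0);
    }

    // base color: diffuse map if bound, plain white otherwise
    vec3 albedo = uDiffuse.valid ? texture2D(uDiffuse.map, oUV).rgb : vec3(1.0);

    // Phong lighting, entirely in tangent space
    vec3 L = normalize(oTangentLightPos - oTangentFragPos);
    vec3 V = normalize(oTangentCameraPos - oTangentFragPos);
    vec3 R = reflect(-L, N);

    vec3 ambient = uLight.ambient * albedo;
    vec3 diffuse = uLight.diffuse * albedo * max(dot(N, L), 0.0);
    vec3 specular = uLight.specular * pow(max(dot(V, R), 0.0), 32.0);

    gl_FragColor = vec4(ambient + diffuse + specular, 1.0);
}

Because the light, camera and fragment positions were already moved to tangent space per vertex, a normal sampled from a normal map can be used directly, without an extra matrix multiply per fragment.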