Dive Into WebGPU — Part 3
By Martin Laxenaire

Part 3 — glTF scene

Welcome to the third article of our gpu-curtains tutorial series.

In Part 1 and Part 2, we learned how to draw various meshes defined by basic geometries, and we’re starting to have a good understanding of how the library works and its various capabilities.

But there are still a lot of things that can be done.

One of the most useful features of any web 3D rendering engine is the ability to display objects exported from 3D sculpting software.

That is exactly what we’ll learn in this chapter, but with an additional little bonus offered by the library. Not only are we going to load and render a glTF object, but the glTF scene size and position will be synced to a DOM Element!


Table of Contents

  1. glTF scene setup
  2. Adding the glTF
  3. Syncing the glTF with the DOM and using basic shading
  4. Physically Based Rendering (PBR) shading
  5. Adding interactions
    1. Drag to rotate interaction
    2. Update object’s color on button click
  6. Wrapping up
  7. Going further

#1. glTF scene setup

By now, you should be comfortable with setting up a new scene. Let’s start by switching to the 16-gltf-1-setup git branch:

git checkout 16-gltf-1-setup

There’s not much to notice about the HTML and CSS setup, except for one thing: this time, our canvas container will not cover the whole viewport but only a small portion of it, around where we’ll be drawing the object. The fewer pixels we render, the better the performance, so let’s take advantage of that.

We’ve also sized our div#gltf-scene-object with aspect-ratio: 16 / 10; because those are the actual proportions of the credit card we’ll load.

#2. Adding the glTF

17-gltf-2-adding-the-gltf

As usual, create a GLTFScene.js file inside our /js/gltf-scene/ folder:

// js/gltf-scene/GLTFScene.js
import { GLTFLoader, GLTFScenesManager } from 'gpu-curtains'
import { ScrollTrigger } from 'gsap/ScrollTrigger'
import { DemoScene } from '../DemoScene'

export class GLTFScene extends DemoScene {
  constructor({ renderer }) {
    super({ renderer })
  }
  
  init() {
    this.section = document.querySelector('#gltf-scene')
    
    super.init()
  }
  
  setupWebGPU() {
    this.loadGLTF()
  }
  
  destroyWebGPU() {
    this.gltfScenesManager?.destroy()
  }
  
  addScrollTrigger() {
    this.scrollTrigger = ScrollTrigger.create({
      trigger: this.section,
      onToggle: ({ isActive }) => {
        this.onSceneVisibilityChanged(isActive)
      },
    })
    
    this.onSceneVisibilityChanged(this.scrollTrigger.isActive)
  }
  
  removeScrollTrigger() {
    this.scrollTrigger.kill()
  }
  
  onSceneVisibilityChanged(isVisible) {
    if (isVisible) {
      this.section.classList.add('is-visible')
      this.renderer.shouldRenderScene = true
    } else {
      this.section.classList.remove('is-visible')
      this.renderer.shouldRenderScene = false
    }
  }
  
  async loadGLTF() {
    this.gltfLoader = new GLTFLoader()
    this.gltf = await this.gltfLoader.loadFromUrl('./assets/gltf/metal_credit_card.glb')
    
    this.gltfScenesManager = new GLTFScenesManager({
      renderer: this.renderer,
      gltf: this.gltf,
    })
    
    const { scenesManager } = this.gltfScenesManager
    const { node, boundingBox } = scenesManager
    const { center, radius } = boundingBox
    
    // center the scenes manager parent node
    node.position.sub(center)
    
    // position camera based on glTF scene bounding box radius
    this.renderer.camera.position.z = radius * 2
    
    this.gltfMeshes = this.gltfScenesManager.addMeshes()
  }
}

Let’s focus on the loadGLTF() method and explain step by step what it’s doing:

  1. We create a new GLTFLoader() instance. This class allows us to load .gltf and .glb files.
  2. Load the .glb model (in this case, a credit card model), parse its content, and create the array buffers needed to handle the data.
  3. Create a new GLTFScenesManager() instance with our renderer and the previously parsed gltf as parameters. This handles textures, samplers, child mesh geometries, scene graph nodes, and computes a global bounding box for the entire glTF scene.
  4. Since glTF scenes and meshes can have various initial sizes and positions, we manually center the entire scene within the canvas and position the camera so the final scene renders at a convenient size.
  5. Finally, we call addMeshes() on the gltfScenesManager to create the meshes (in this case, there’s only one).

Tip: As of gpu-curtains v0.7.7, not all glTF 2.0 features are supported. Unsupported features include animations, skinning, morph targets, sparse accessors, and various KHR extensions.

Next, we need to instantiate the GLTFScene class inside our Demo.js file to see it on the screen:

// js/Demo.js
createScenes() {
  this.createIntroScene()
  this.createPlanesScene()
  this.createGLTFScene()
  
  this.lenis.on('scroll', (e) => {
    this.gpuCurtains.updateScrollValues({ x: 0, y: e.scroll })
    
    this.scenes.forEach((scene) => scene.onScroll(e.velocity))
  })
}
createIntroScene() {
  const introScene = new IntroScene({
    renderer: new GPUCameraRenderer({
      deviceManager: this.deviceManager,
      label: 'Intro scene renderer',
      container: '#intro-scene-canvas',
      pixelRatio: this.pixelRatio,
    }),
  })
  
  this.scenes.push(introScene)
}
createPlanesScene() {
  const planesScene = new PlanesScene({
    renderer: new GPUCurtainsRenderer({
      deviceManager: this.deviceManager,
      label: 'Planes scene renderer',
      container: '#planes-scene-canvas',
      pixelRatio: this.pixelRatio,
    }),
  })
  
  this.scenes.push(planesScene)
}
createGLTFScene() {
  const gltfScene = new GLTFScene({
    renderer: new GPUCurtainsRenderer({
      deviceManager: this.deviceManager,
      label: 'glTF scene renderer',
      container: '#gltf-scene-canvas',
      pixelRatio: this.pixelRatio,
    }),
  })
  
  this.scenes.push(gltfScene)
}

We’re using a GPUCurtainsRenderer again because we’ll need the library’s DOM syncing capabilities later on.

And there we go:

The credit card is correctly displayed, centered in our canvas. Of course, we haven’t applied any custom shaders yet, but by now, you should be accustomed to the default look of the mesh rendered with normal shading.


#3. Syncing the glTF with the DOM and using basic shading

18-gltf-3-DOM-sync-and-basic-shading

The first step is to sync the glTF mesh with the div#gltf-scene-object element. We’ll use the DOMObject3D class. To create a DOMObject3D, we pass the following:

  • First argument: the renderer.
  • Second argument: the DOM element to sync with.
  • Third argument (optional): an object with parameters.

Under the hood, this class calculates sizes and positions based on the renderer container, the DOM element, and the camera’s visible sizes. It uses these values, along with transformation properties like position, rotation, and scale, to compute a model matrix for syncing the mesh with the DOM.
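To make that less abstract, here is a rough sketch of the kind of DOM-to-world size mapping DOMObject3D performs. The function and parameter names below are illustrative only, not the library’s actual API:

```javascript
// Illustrative sketch (NOT the actual gpu-curtains code): converting a DOM
// element's pixel size into world units, given the world size visible to the
// camera at the object's depth and the renderer container size.
function domToWorldSize(domRect, containerRect, visibleWorldSize) {
  // The element's share of the canvas, applied to the camera's visible sizes
  return {
    width: (domRect.width / containerRect.width) * visibleWorldSize.width,
    height: (domRect.height / containerRect.height) * visibleWorldSize.height,
  }
}

// A 400x250 element inside an 800x500 canvas, with a 4x2.5 world units view
const size = domToWorldSize(
  { width: 400, height: 250 },
  { width: 800, height: 500 },
  { width: 4, height: 2.5 }
)
// size.width === 2, size.height === 1.25
```

The library then feeds this world size, together with the element’s position and the usual position/rotation/scale transforms, into the model matrix.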

Tip: The DOMMesh and Plane classes mentioned earlier both use DOMObject3D under the hood!

// js/gltf-scene/GLTFScene.js
init() {
  this.section = document.querySelector('#gltf-scene')
  this.gltfElement = document.querySelector('#gltf-scene-object')
  
  this.parentNode = new DOMObject3D(this.renderer, this.gltfElement, {
    watchScroll: false, // no need to watch the scroll
  })
  
  // Add it to the scene graph
  this.parentNode.parent = this.renderer.scene
  
  super.init()
}

Here, we create a DOMObject3D and add it to the scene graph. Since the div#gltf-scene-object and the canvas are already relatively positioned, we set watchScroll to false.

Scaling the glTF meshes

// js/gltf-scene/GLTFScene.js
async loadGLTF() {
  this.gltfLoader = new GLTFLoader()
  this.gltf = await this.gltfLoader.loadFromUrl('./assets/gltf/metal_credit_card.glb')
  
  this.gltfScenesManager = new GLTFScenesManager({
    renderer: this.renderer,
    gltf: this.gltf,
  })
  
  const { scenesManager } = this.gltfScenesManager
  const { node, boundingBox } = scenesManager
  const { center } = boundingBox
  
  // Center the scenes manager parent node
  node.position.sub(center)
  
  // Add parent DOMObject3D as the scenes manager node parent
  node.parent = this.parentNode
  
  // Copy the new scene's bounding box into the DOMObject3D bounding box
  this.parentNode.boundingBox.copy(boundingBox)
  
  this.gltfMeshes = this.gltfScenesManager.addMeshes()
}

The code centers the mesh, sets the parent node, and syncs the bounding box. Since we align the mesh’s center with (0, 0, 0), we don’t need to set an arbitrary camera position anymore.

Look at that, the glTF scene is now synced with our DOM element!

Handling depth alignment

Let’s test this with a solid red cube to understand the challenge of aligning the object’s front face:

// js/gltf-scene/GLTFScene.js
async loadGLTF() {
  const redCubeTest = new Mesh(this.renderer, {
    label: 'Red cube test',
    geometry: new BoxGeometry(),
    shaders: {
      fragment: {
        code: '@fragment fn main() -> @location(0) vec4f { return vec4(1.0, 0.0, 0.0, 1.0); }',
      },
    },
  })
  
  redCubeTest.parent = this.parentNode
  
  // copy mesh bounding box to parent node bounding box
  this.parentNode.boundingBox.copy(redCubeTest.geometry.boundingBox)
  
  const updateParentNodeDepthPosition = () => {
    // move our parent node along the Z axis so the cube front face lies at (0, 0, 0) instead of the cube's center
    this.parentNode.position.z = -0.5 * this.parentNode.boundingBox.size.z * this.parentNode.DOMObjectWorldScale.z
  }
  
  updateParentNodeDepthPosition()
  this.parentNode.onAfterDOMElementResize(() => updateParentNodeDepthPosition())
}

This ensures the front face lies at (0, 0, 0). The Z‑axis depth is calculated using DOMObjectWorldScale.z, which is managed internally by the library.
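As a quick sanity check of that formula, with plain illustrative numbers outside the library: if the bounding box is 2 units deep and the DOM-driven world scale along Z is 0.5, the node must be pushed back by half the scaled depth:

```javascript
// Worked example of the Z offset formula above (standalone, illustrative values)
const boundingBoxSizeZ = 2 // depth of the bounding box in local units
const worldScaleZ = 0.5    // the DOMObjectWorldScale.z value computed by the library

// Push the node back by half its scaled depth so the front face sits at z = 0
const positionZ = -0.5 * boundingBoxSizeZ * worldScaleZ
// positionZ === -0.5
```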

Adding basic shading

We can now extend the addMeshes() call with a callback that defines the shaders for our glTF meshes. Using the buildShaders helper, we display the baseColorTexture:

// js/gltf-scene/GLTFScene.js
async loadGLTF() {
  this.gltfLoader = new GLTFLoader()
  this.gltf = await this.gltfLoader.loadFromUrl('./assets/gltf/metal_credit_card.glb')
  
  this.gltfScenesManager = new GLTFScenesManager({
    renderer: this.renderer,
    gltf: this.gltf,
  })
  
  const { scenesManager } = this.gltfScenesManager
  const { node, boundingBox } = scenesManager
  const { center } = boundingBox
  
  // Center the scenes manager parent node
  node.position.sub(center)
  
  // Add parent DOMObject3D as the scenes manager node parent
  node.parent = this.parentNode
  
  // Copy new scenes bounding box into DOMObject3D bounding box
  this.parentNode.boundingBox.copy(boundingBox)
  
  const updateParentNodeDepthPosition = () => {
    // move our parent node along the Z axis so the glTF front face lies at (0, 0, 0) instead of the glTF's center
    this.parentNode.position.z = -0.5 * this.parentNode.boundingBox.size.z * this.parentNode.DOMObjectWorldScale.z
  }
  
  updateParentNodeDepthPosition()
  this.parentNode.onAfterDOMElementResize(() => updateParentNodeDepthPosition())
  
  this.gltfMeshes = this.gltfScenesManager.addMeshes((meshDescriptor) => {
    const { parameters } = meshDescriptor
    
    // Disable frustum culling
    parameters.frustumCulling = false
    
    // Add shaders
    parameters.shaders = buildShaders(meshDescriptor)
  })
}

There it is! Our DOM-synced, unlit credit card is ready with basic shading applied.


#4. Physically Based Rendering (PBR) shading

19-gltf-4-PBR-shading

We can, of course, improve the look of that card by adding some lights to our scene. Let’s start with basic Lambert shading, as we’ve seen in our first example.

Fortunately, the buildShaders function accepts a shaderParameters object as a second argument, allowing us to pass different shader chunk string properties:

  • additionalFragmentHead: Used to define additional functions in our fragment shader.
  • preliminaryColorContribution: Used to tweak the color before applying any lighting.
  • ambientContribution: Used for the ambient light contribution.
  • lightContribution: Used for any other kind of light contribution.
  • additionalColorContribution: Used to tweak the final color before outputting it.

As always, we’ll start by adding the uniforms:

// js/gltf-scene/GLTFScene.js
async loadGLTF() {
  this.gltfLoader = new GLTFLoader();
  this.gltf = await this.gltfLoader.loadFromUrl('./assets/gltf/metal_credit_card.glb');
  
  this.gltfScenesManager = new GLTFScenesManager({
    renderer: this.renderer,
    gltf: this.gltf,
  });
  
  const { scenesManager } = this.gltfScenesManager;
  const { node, boundingBox } = scenesManager;
  const { center, radius } = boundingBox;
  
  // Center the scenes manager parent node
  node.position.sub(center);
  
  // Add parent DOMObject3D as the scenes manager node parent
  node.parent = this.parentNode;
  
  // Copy new scene's bounding box into DOMObject3D's own bounding box
  this.parentNode.boundingBox.copy(boundingBox);
  
  const updateParentNodeDepthPosition = () => {
    // Move our parent node along the Z axis so the glTF front face lies at (0, 0, 0) instead of the glTF's center
    this.parentNode.position.z = 
      -0.5 * this.parentNode.boundingBox.size.z * this.parentNode.DOMObjectWorldScale.z;
  };
  
  updateParentNodeDepthPosition();
  this.parentNode.onAfterDOMElementResize(() => updateParentNodeDepthPosition());
  
  this.gltfMeshes = this.gltfScenesManager.addMeshes((meshDescriptor) => {
    const { parameters } = meshDescriptor;
    
    // Disable frustum culling
    parameters.frustumCulling = false;
    
    // Add lights
    const lightPosition = new Vec3(-radius * 1.25, radius * 0.5, radius * 1.5);
    
    parameters.uniforms = {
      ...parameters.uniforms,
      ...{
        ambientLight: {
          struct: {
            intensity: {
              type: 'f32',
              value: 0.1,
            },
            color: {
              type: 'vec3f',
              value: new Vec3(1),
            },
          },
        },
        directionalLight: {
          struct: {
            position: {
              type: 'vec3f',
              value: lightPosition,
            },
            intensity: {
              type: 'f32',
              value: 0.3,
            },
            color: {
              type: 'vec3f',
              value: new Vec3(1),
            },
          },
        },
      },
    };
    
    parameters.shaders = buildShaders(meshDescriptor);
  });
}

Now, let’s create the actual ambientContribution and lightContribution chunks. We’ll put them inside a new gltf-contributions.wgsl.js file located in the js/shaders/chunks directory:

// js/shaders/chunks/gltf-contributions.wgsl.js
export const ambientContribution = /* wgsl */ `
  lightContribution.ambient = ambientLight.intensity * ambientLight.color;
`;

export const lightContribution = /* wgsl */ `
  // Diffuse Lambert shading
  // N is already defined as: normalize(normal)
  let L = normalize(directionalLight.position - worldPosition);
  let NDotL = max(dot(N, L), 0.0);
  
  lightContribution.diffuse = NDotL * directionalLight.color * directionalLight.intensity;
`;

We can safely assign values to the lightContribution.ambient and lightContribution.diffuse components because they are already declared as vec3f variables in the WGSL code generated by our buildShaders function. Additionally, the generated shaders provide access to the normalized normal (N) and worldPosition, which can be used directly in the lighting calculations.

Now, we just need to pass these chunks into the buildShaders call, and we’re done:

// js/gltf-scene/GLTFScene.js
async loadGLTF() {
  this.gltfLoader = new GLTFLoader();
  this.gltf = await this.gltfLoader.loadFromUrl('./assets/gltf/metal_credit_card.glb');
  
  this.gltfScenesManager = new GLTFScenesManager({
    renderer: this.renderer,
    gltf: this.gltf,
  });
  
  const { scenesManager } = this.gltfScenesManager;
  const { node, boundingBox } = scenesManager;
  const { center, radius } = boundingBox;
  
  // Center the scenes manager parent node
  node.position.sub(center);
  
  // Add parent DOMObject3D as the scenes manager node parent
  node.parent = this.parentNode;
  
  // Copy new scene's bounding box into DOMObject3D's own bounding box
  this.parentNode.boundingBox.copy(boundingBox);
  
  const updateParentNodeDepthPosition = () => {
    // Move our parent node along the Z axis so the glTF front face lies at (0, 0, 0) instead of the glTF's center
    this.parentNode.position.z = 
      -0.5 * this.parentNode.boundingBox.size.z * this.parentNode.DOMObjectWorldScale.z;
  };
  
  updateParentNodeDepthPosition();
  this.parentNode.onAfterDOMElementResize(() => updateParentNodeDepthPosition());
  
  // Add meshes and configure lighting
  this.gltfMeshes = this.gltfScenesManager.addMeshes((meshDescriptor) => {
    const { parameters } = meshDescriptor;
    
    // Disable frustum culling
    parameters.frustumCulling = false;
    
    // Define light properties
    const lightPosition = new Vec3(-radius * 1.25, radius * 0.5, radius * 1.5);
    
    parameters.uniforms = {
      ...parameters.uniforms,
      ...{
        ambientLight: {
          struct: {
            intensity: { type: 'f32', value: 0.1 },
            color: { type: 'vec3f', value: new Vec3(1) },
          },
        },
        directionalLight: {
          struct: {
            position: { type: 'vec3f', value: lightPosition },
            intensity: { type: 'f32', value: 0.3 },
            color: { type: 'vec3f', value: new Vec3(1) },
          },
        },
      },
    };
    
    parameters.shaders = buildShaders(meshDescriptor, {
      chunks: {
        ambientContribution,
        lightContribution,
      },
    });
  });
}

Let’s ensure that it’s actually working:

We now have a working Lambert shader, which is cool. However, it’s still not ideal: we promised PBR rendering, so there’s more work to be done.

First, let’s replace our buildShaders function with the new buildPBRShaders function. This is extremely simple:

// js/gltf-scene/GLTFScene.js
parameters.shaders = buildPBRShaders(meshDescriptor, {
  chunks: {
    ambientContribution,
    lightContribution,
  },
});

At this point, nothing has visually changed yet because we’re still using Lambert shading computations for our lightContribution. We’ll need to change that.

The difference between buildShaders and buildPBRShaders is that the latter adds several functions to our fragment shader, utilizing the additionalFragmentHead parameter we mentioned earlier. I won’t go into too much detail about how PBR shading works, but suffice it to say that we’ll now have access to new WGSL functions, such as FresnelSchlick, DistributionGGX, and GeometrySmith, to calculate physically accurate light contributions.
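If you are curious about what one of those helpers computes, the Schlick approximation of the Fresnel term is simple enough to sketch in plain JavaScript. This is a scalar version for illustration; the WGSL helper operates on vec3f:

```javascript
// Scalar Schlick approximation of the Fresnel term:
// F = f0 + (1 - f0) * (1 - VdotH)^5
// where f0 is the base reflectance at normal incidence (~0.04 for dielectrics)
function fresnelSchlick(VdotH, f0) {
  return f0 + (1 - f0) * Math.pow(1 - VdotH, 5)
}

fresnelSchlick(1, 0.04) // viewing straight on: reflectance stays at f0
fresnelSchlick(0, 0.04) // grazing angle: reflectance rises to 1
```

Intuitively, this is why metallic surfaces viewed at grazing angles look mirror-like regardless of their base color.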

Now, update the light contribution chunk with the following code:

// js/shaders/chunks/gltf-contributions.wgsl.js
export const lightContribution = /* wgsl */ `  
  // Here N, V, and NdotV are already available
  // They are defined as follows:
  // let N: vec3f = normalize(normal);
  // let viewDirection: vec3f = fsInput.viewDirection
  // let V: vec3f = normalize(viewDirection);
  // let NdotV: f32 = clamp(dot(N, V), 0.0, 1.0);
  let L = normalize(directionalLight.position - worldPosition);
  let H = normalize(V + L);
  
  let NdotL: f32 = clamp(dot(N, L), 0.0, 1.0);
  let NdotH: f32 = clamp(dot(N, H), 0.0, 1.0);
  let VdotH: f32 = clamp(dot(V, H), 0.0, 1.0);
  
  // Cook-Torrance BRDF
  let NDF = DistributionGGX(NdotH, roughness);
  let G = GeometrySmith(NdotL, NdotV, roughness);
  let F = FresnelSchlick(VdotH, f0);
  
  let kD = (vec3(1.0) - F) * (1.0 - metallic);
  
  let numerator = NDF * G * F;
  let denominator = max(4.0 * NdotV * NdotL, 0.001);
  
  let specular = numerator / vec3(denominator);
  
  // Not needed now since directional lights do not have any attenuation,
  // but will be useful later
  let attenuation = 1.0;
  
  let radiance = directionalLight.color * directionalLight.intensity * attenuation;
  
  lightContribution.diffuse += (kD / vec3(PI)) * radiance * NdotL;
  lightContribution.specular += specular * radiance * NdotL;
`;

We’ll also need to tweak the light uniforms a bit:

// js/gltf-scene/GLTFScene.js
parameters.uniforms = {
  ...parameters.uniforms,
  ...{
    ambientLight: {
      struct: {
        intensity: {
          type: 'f32',
          value: 0.35,
        },
        color: {
          type: 'vec3f',
          value: new Vec3(1),
        },
      },
    },
    directionalLight: {
      struct: {
        position: {
          type: 'vec3f',
          value: lightPosition,
        },
        intensity: {
          type: 'f32',
          value: 1,
        },
        color: {
          type: 'vec3f',
          value: new Vec3(1),
        },
      },
    },
  },
};

We now have physically based rendering shading!

That’s neat. Note that we can still improve this quite a bit. We’ve used a directional light here, which can be compared to the light emitted by the sun. But what if we wanted to use a point light — something that mimics the light of a bare lightbulb?

Luckily, the concept is almost the same. We’d just need to account for light attenuation in our shading calculations. These calculations are typically based on an additional light range uniform and the distance from the light source to the object.

Start by adding a new chunk to calculate the point light attenuation based on its range and distance:

// js/shaders/chunks/gltf-contributions.wgsl.js
export const additionalFragmentHead = /* wgsl */ `
  fn rangeAttenuation(range: f32, distance: f32) -> f32 {
    if (range <= 0.0) {
        // A non-positive range means no cutoff: pure inverse-square falloff
        return 1.0 / pow(distance, 2.0);
    }
    return clamp(1.0 - pow(distance / range, 4.0), 0.0, 1.0) / pow(distance, 2.0);
  }
`;
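To get a feel for that falloff, here is the same function mirrored in plain JavaScript, with a couple of sample values:

```javascript
// JavaScript mirror of the WGSL rangeAttenuation function above, for intuition
function rangeAttenuation(range, distance) {
  if (range <= 0) {
    // No cutoff: pure inverse-square falloff
    return 1 / (distance * distance)
  }
  const factor = Math.min(Math.max(1 - Math.pow(distance / range, 4), 0), 1)
  return factor / (distance * distance)
}

rangeAttenuation(10, 5)  // 0.0375: inside the range, slightly damped inverse-square
rangeAttenuation(10, 10) // 0: the light is fully cut off at the range boundary
```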

Then update the uniforms and add the chunk:

// js/gltf-scene/GLTFScene.js
async loadGLTF() {
  this.gltfLoader = new GLTFLoader();
  this.gltf = await this.gltfLoader.loadFromUrl('./assets/gltf/metal_credit_card.glb');
  
  this.gltfScenesManager = new GLTFScenesManager({
    renderer: this.renderer,
    gltf: this.gltf,
  });
  
  const { scenesManager } = this.gltfScenesManager;
  const { node, boundingBox } = scenesManager;
  const { center, radius } = boundingBox;
  
  // Center the scenes manager parent node
  node.position.sub(center);
  
  // Add parent DOMObject3D as the scenes manager node parent
  node.parent = this.parentNode;
  
  // Copy new scene's bounding box into DOMObject3D's own bounding box
  this.parentNode.boundingBox.copy(boundingBox);
  
  const updateParentNodeDepthPosition = () => {
    // Move our parent node along the Z axis so the glTF front face lies at (0, 0, 0) instead of the glTF's center
    this.parentNode.position.z = 
      -0.5 * this.parentNode.boundingBox.size.z * this.parentNode.DOMObjectWorldScale.z;
  };
  
  updateParentNodeDepthPosition();
  this.parentNode.onAfterDOMElementResize(() => updateParentNodeDepthPosition());
  
  this.gltfMeshes = this.gltfScenesManager.addMeshes((meshDescriptor) => {
    const { parameters } = meshDescriptor;
    
    // Disable frustum culling
    parameters.frustumCulling = false;
    
    // Add lights
    const lightPosition = new Vec3(-radius * 1.25, radius * 0.5, radius * 1.5);
    const lightPositionLength = lightPosition.length();
    
    parameters.uniforms = {
      ...parameters.uniforms,
      ...{
        ambientLight: {
          struct: {
            intensity: { type: 'f32', value: 0.35 },
            color: { type: 'vec3f', value: new Vec3(1) },
          },
        },
        pointLight: {
          struct: {
            position: { type: 'vec3f', value: lightPosition },
            intensity: { type: 'f32', value: lightPositionLength * 0.75 },
            color: { type: 'vec3f', value: new Vec3(1) },
            range: { type: 'f32', value: lightPositionLength * 2.5 },
          },
        },
      },
    };
    
    parameters.shaders = buildPBRShaders(meshDescriptor, {
      chunks: {
        additionalFragmentHead,
        ambientContribution,
        lightContribution,
      },
    });
  });
}

We’ve renamed our directionalLight uniform to pointLight, so we’ll also need to update that in our light contribution chunk. Getting the correct point light intensity and range for a scene can be tricky and might require some fine-tuning. In this case, we’ve based these values on the light’s distance from the object’s center, which depends on the glTF scene’s bounding box radius. However, this approach can be adjusted as needed.
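For reference, here is the plain math behind those values, using the same multipliers as the lightPosition in loadGLTF() with radius set to 1 for readability:

```javascript
// How the point light values scale with the glTF bounding box radius
const radius = 1
const lightPosition = [-radius * 1.25, radius * 0.5, radius * 1.5]

// Distance from the light to the object's center
const lightPositionLength = Math.hypot(...lightPosition) // ≈ 2.016

const intensity = lightPositionLength * 0.75 // ≈ 1.51
const range = lightPositionLength * 2.5      // ≈ 5.04
```

Because everything is derived from the bounding box radius, the lighting stays consistent no matter how large the loaded glTF scene actually is.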

Next, update the WGSL code with the new uniform struct name and include the point light attenuation using the rangeAttenuation function:

// js/shaders/chunks/gltf-contributions.wgsl.js
export const lightContribution = /* wgsl */ `
  // Here N, V, and NdotV are already available
  // Defined as follows:
  // let N: vec3f = normalize(normal);
  // let viewDirection: vec3f = fsInput.viewDirection
  // let V: vec3f = normalize(viewDirection);
  // let NdotV: f32 = clamp(dot(N, V), 0.0, 1.0);
  let L = normalize(pointLight.position - worldPosition);
  let H = normalize(V + L);
  
  let NdotL: f32 = clamp(dot(N, L), 0.0, 1.0);
  let NdotH: f32 = clamp(dot(N, H), 0.0, 1.0);
  let VdotH: f32 = clamp(dot(V, H), 0.0, 1.0);
  
  // Cook-Torrance BRDF
  let NDF = DistributionGGX(NdotH, roughness);
  let G = GeometrySmith(NdotL, NdotV, roughness);
  let F = FresnelSchlick(VdotH, f0);
  
  let kD = (vec3(1.0) - F) * (1.0 - metallic);
  
  let numerator = NDF * G * F;
  let denominator = max(4.0 * NdotV * NdotL, 0.001);
  
  let specular = numerator / vec3(denominator);
  
  let distance = length(pointLight.position - worldPosition);
  let attenuation = rangeAttenuation(pointLight.range, distance);
  
  let radiance = pointLight.color * pointLight.intensity * attenuation;
  
  lightContribution.diffuse += (kD / vec3(PI)) * radiance * NdotL;
  lightContribution.specular += specular * radiance * NdotL;
`;

That’s it for the PBR shading! One of the best ways to ensure our lighting is correctly applied is to rotate our object and observe the shading changes in real time:

// js/gltf-scene/GLTFScene.js
onRender() {
  // Temporary, will be changed later
  this.parentNode.rotation.y += 0.01;
}

And it’s working… err, wait — why does the object appear blurry when rotated?

Any idea of what could go wrong here?

We’re facing a common texture sampling issue that arises when textures are viewed from steep angles. Fortunately, we can address this by using an anisotropic sampler.

// js/gltf-scene/GLTFScene.js
async loadGLTF() {
  this.gltfLoader = new GLTFLoader();
  this.gltf = await this.gltfLoader.loadFromUrl('./assets/gltf/metal_credit_card.glb');
  
  this.gltfScenesManager = new GLTFScenesManager({
    renderer: this.renderer,
    gltf: this.gltf,
  });
  
  const { scenesManager } = this.gltfScenesManager;
  const { node, boundingBox } = scenesManager;
  const { center, radius } = boundingBox;
  
  // Center the scenes manager parent node
  node.position.sub(center);
  
  // Add parent DOMObject3D as the scenes manager node parent
  node.parent = this.parentNode;
  
  // Copy new scene's bounding box into DOMObject3D's own bounding box
  this.parentNode.boundingBox.copy(boundingBox);
  
  const updateParentNodeDepthPosition = () => {
    // Move our parent node along the Z axis so the glTF front face lies at (0, 0, 0) instead of the glTF's center
    this.parentNode.position.z = 
      -0.5 * this.parentNode.boundingBox.size.z * this.parentNode.DOMObjectWorldScale.z;
  };
  
  updateParentNodeDepthPosition();
  this.parentNode.onAfterDOMElementResize(() => updateParentNodeDepthPosition());
  
  // Create a new sampler to address the anisotropic issue
  this.anisotropicSampler = new Sampler(this.renderer, {
    label: 'Anisotropic sampler',
    name: 'anisotropicSampler',
    maxAnisotropy: 16,
  });
  
  this.gltfMeshes = this.gltfScenesManager.addMeshes((meshDescriptor) => {
    const { parameters } = meshDescriptor;
    
    // Disable frustum culling
    parameters.frustumCulling = false;
    
    // Add anisotropic sampler to the parameters
    parameters.samplers.push(this.anisotropicSampler);
    
    // Assign our anisotropic sampler to every textureSample call 
    // used inside our buildPBRShaders function
    meshDescriptor.textures.forEach((texture) => {
      texture.sampler = this.anisotropicSampler.name;
    });
    
    // Add lights
    const lightPosition = new Vec3(-radius * 1.25, radius * 0.5, radius * 1.5);
    const lightPositionLength = lightPosition.length();
    
    parameters.uniforms = {
      ...parameters.uniforms,
      ...{
        ambientLight: {
          struct: {
            intensity: { type: 'f32', value: 0.35 },
            color: { type: 'vec3f', value: new Vec3(1) },
          },
        },
        pointLight: {
          struct: {
            position: { type: 'vec3f', value: lightPosition },
            intensity: { type: 'f32', value: lightPositionLength * 0.75 },
            color: { type: 'vec3f', value: new Vec3(1) },
            range: { type: 'f32', value: lightPositionLength * 2.5 },
          },
        },
      },
    };
    
    parameters.shaders = buildPBRShaders(meshDescriptor, {
      chunks: {
        additionalFragmentHead,
        ambientContribution,
        lightContribution,
      },
    });
  });
}

And with that, we’re fully done with the PBR shading!


#5. Adding interactions

20-gltf-5-adding-interactions

The article stated we were going to build a product configurator, but as of now, we’re just displaying the glTF object as it is. We’d like to add two kinds of interactions here. First, we’d like to be able to rotate the object a bit by dragging it. Next, we’d like to be able to change its color.

Before actually implementing those, we’ll start by adding a little animation to display the UI elements and text content, as we’ll need them later.

// js/gltf-scene/GLTFScene.js
onSceneVisibilityChanged(isVisible) {
  if (isVisible) {
    this.section.classList.add('is-visible');
    this.renderer.shouldRenderScene = true;
    this.timeline?.restart(true);
  } else {
    this.section.classList.remove('is-visible');
    this.renderer.shouldRenderScene = false;
    this.timeline?.pause();
  }
}

addEnteringAnimation() {
  this.autoAlphaElements = this.section.querySelectorAll('.gsap-auto-alpha');
  
  this.timeline = gsap
    .timeline({
      paused: true,
    })
    .set(this.autoAlphaElements, { autoAlpha: 0 })
    .to(
      this.autoAlphaElements,
      {
        autoAlpha: 1,
        duration: 1,
        stagger: 0.125,
        ease: 'power2.inOut',
      },
      0.5
    );
}

removeEnteringAnimation() {
  this.timeline.kill();
}

#5.1 Drag to rotate interaction

21-gltf-5-1-drag-to-rotate-interaction

The idea behind this interaction is that we’ll detect when the user starts or stops dragging and keep track of the pointer position delta while dragging. We’re not going to apply these deltas directly to our object rotation but lerp them instead and apply those lerped values, as it will create a smoother and more pleasing effect.

Let’s start by adding this code:

// js/gltf-scene/GLTFScene.js
addEvents() {
  this.gltfContainer = document.querySelector('#gltf-scene-object-container');
  
  this.mouse = {
    lerpedInteraction: new Vec2(),
    currentInteraction: new Vec2(),
    last: new Vec2(),
    multiplier: 0.015,
    isDown: false,
  };
  
  this._onPointerDownHandler = this.onPointerDown.bind(this);
  this._onPointerUpHandler = this.onPointerUp.bind(this);
  this._onPointerMoveHandler = this.onPointerMove.bind(this);
  
  this.section.addEventListener('mousedown', this._onPointerDownHandler);
  this.section.addEventListener('mouseup', this._onPointerUpHandler);
  this.gltfContainer.addEventListener('mousemove', this._onPointerMoveHandler);
  
  this.section.addEventListener('touchstart', this._onPointerDownHandler, {
    passive: true,
  });
  this.section.addEventListener('touchend', this._onPointerUpHandler);
  this.gltfContainer.addEventListener('touchmove', this._onPointerMoveHandler, {
    passive: true,
  });
}

removeEvents() {
  this.section.removeEventListener('mousedown', this._onPointerDownHandler);
  this.section.removeEventListener('mouseup', this._onPointerUpHandler);
  this.gltfContainer.removeEventListener('mousemove', this._onPointerMoveHandler);
  
  this.section.removeEventListener('touchstart', this._onPointerDownHandler, {
    passive: true,
  });
  this.section.removeEventListener('touchend', this._onPointerUpHandler);
  this.gltfContainer.removeEventListener('touchmove', this._onPointerMoveHandler, {
    passive: true,
  });
}

onPointerDown(e) {
  if (e.which === 1 || (e.targetTouches && e.targetTouches.length)) {
    this.mouse.isDown = true;
  }
  
  const { clientX, clientY } = e.targetTouches && e.targetTouches.length ? e.targetTouches[0] : e;
  this.mouse.last.set(clientX, clientY);
}

onPointerUp() {
  this.mouse.isDown = false;
}

onPointerMove(e) {
  if (this.mouse.isDown) {
    const { clientX, clientY } = e.targetTouches && e.targetTouches.length ? e.targetTouches[0] : e;
    
    const xDelta = clientX - this.mouse.last.x;
    const yDelta = clientY - this.mouse.last.y;
    
    this.mouse.currentInteraction.x += xDelta * this.mouse.multiplier;
    this.mouse.currentInteraction.y += yDelta * this.mouse.multiplier;
    
    // Clamp X rotation
    this.mouse.currentInteraction.y = Math.max(-Math.PI / 4, Math.min(Math.PI / 4, this.mouse.currentInteraction.y));
    
    this.mouse.last.set(clientX, clientY);
  }
}

Nothing particularly difficult here. Just note that we’ll clamp the final rotation along the X‑axis to avoid running into nightmarish quaternion issues. Besides, we don’t need to fully rotate the object along this axis.

Next, we need to actually lerp the interaction and apply this to our parentNode DOMObject3D:

// js/gltf-scene/GLTFScene.js
onRender() {
  this.mouse.lerpedInteraction.lerp(this.mouse.currentInteraction, 0.2);
  
  this.parentNode.rotation.x = this.mouse.lerpedInteraction.y;
  this.parentNode.rotation.y = this.mouse.lerpedInteraction.x;
}

This works like a charm. Now you also understand why we made the canvas container overflow its parent: so that we can rotate the object without it being cropped.

#5.b Update object’s color on button click

22-gltf-5-2-update-object-color-button-click

Next, we’re going to add the ability to update the object’s color when clicking on the bottom buttons.

Tip: Since our glTF scene contains only one mesh, this will save us some time. It might be different with a model containing multiple meshes, where you’d have to actually update some meshes’ colors but not all of them.
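To illustrate that tip (a hypothetical sketch, not part of the tutorial’s scene: the mesh names and `card_body` / `card_chip` identifiers are made up, and plain objects stand in for the actual gpu-curtains meshes), with a multi-mesh model you could filter the meshes before touching their uniforms:

```javascript
// Hypothetical multi-mesh model: only the card body should change color,
// not the chip. Plain objects stand in for the actual gpu-curtains meshes.
const gltfMeshes = [
  { name: 'card_body', uniforms: { interaction: { baseColorFactor: { value: null } } } },
  { name: 'card_chip', uniforms: { interaction: { baseColorFactor: { value: null } } } },
];

const goldFactor = [240 / 255, 140 / 255, 15 / 255];

gltfMeshes
  .filter((mesh) => mesh.name === 'card_body') // skip meshes that keep their color
  .forEach((mesh) => {
    mesh.uniforms.interaction.baseColorFactor.value = goldFactor;
  });
```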

We’ll start with a very basic setup that changes the background and text colors. This specific interaction will be added and removed only if WebGPU is available, as there’s no point in adding this otherwise.

// js/gltf-scene/GLTFScene.js
setupWebGPU() {
  this.loadGLTF();
  
  this.addButtonInteractions();
}

destroyWebGPU() {
  this.gltfScenesManager?.destroy();
  this.removeButtonInteractions();
}

addButtonInteractions() {
  this.buttons = this.section.querySelectorAll('#gltf-scene-controls button');
  
  // update card color
  this.cards = [
    { name: 'silver', baseColorFactor: new Vec3(1) },
    { name: 'gold', baseColorFactor: new Vec3(240 / 255, 140 / 255, 15 / 255) },
    { name: 'black', baseColorFactor: new Vec3(0.55) },
  ];
  
  // init with first color
  this.section.classList.add(this.cards[0].name);
  
  this._buttonClickHandler = this.onButtonClicked.bind(this);
  
  this.buttons.forEach((button) => {
    button.addEventListener('click', this._buttonClickHandler);
  });
}

removeButtonInteractions() {
  this.buttons.forEach((button) => {
    button.removeEventListener('click', this._buttonClickHandler);
  });
}

onButtonClicked(e) {
  const { target } = e;
  const cardName = target.hasAttribute('data-card-name') ? target.getAttribute('data-card-name') : this.cards[0].name;
  
  const card = this.cards.find((c) => c.name === cardName);
  
  // remove all previous card name classes
  this.cards.forEach((card) => {
    this.section.classList.remove(card.name);
  });
  
  // add active card class name
  this.section.classList.add(cardName);
}

We need to plug that into our fragment shader somewhere. To do this, we’ll add a new uniform to send the baseColorFactor. This uniform must be used in the fragment shader before applying any lighting, or else it will distort the result.

Patch Our Uniforms

// js/gltf-scene/GLTFScene.js
parameters.uniforms = {
  ...parameters.uniforms,
  ...{
    interaction: {
      struct: {
        baseColorFactor: {
          type: 'vec3f', // a single vec3f for now
          value: this.cards[0].baseColorFactor.clone(),
        },
      },
    },
    ambientLight: {
      struct: {
        intensity: {
          type: 'f32',
          value: 0.35,
        },
        color: {
          type: 'vec3f',
          value: new Vec3(1),
        },
      },
    },
    pointLight: {
      struct: {
        position: {
          type: 'vec3f',
          value: lightPosition,
        },
        intensity: {
          type: 'f32',
          value: lightPositionLengthSq,
        },
        color: {
          type: 'vec3f',
          value: new Vec3(1),
        },
        range: {
          type: 'f32',
          value: lightPositionLength * 7.5,
        },
      },
    },
  },
};

Update the Uniform When Clicking a Button

// js/gltf-scene/GLTFScene.js
onButtonClicked(e) {
  const { target } = e;
  const cardName = target.hasAttribute('data-card-name') ? target.getAttribute('data-card-name') : this.cards[0].name;
  
  const card = this.cards.find((c) => c.name === cardName);
  
  // remove all previous card name classes
  this.cards.forEach((card) => {
    this.section.classList.remove(card.name);
  });
  
  // add active card class name
  this.section.classList.add(cardName);
  
  this.gltfMeshes?.forEach((mesh) => {
    mesh.uniforms.interaction.baseColorFactor.value.copy(card.baseColorFactor);
  });
}

Modify the Shader

A basic idea to apply this to our shader would be to multiply the base color with our baseColorFactor uniform. We’ll use the preliminaryColorContribution to achieve this, as we want to modify the color before lighting calculations.

// js/shaders/chunks/gltf-contributions.wgsl.js
export const preliminaryColorContribution = /* wgsl */ `
  // multiply our base color by the interaction base color factor
  color = vec4(color.rgb * interaction.baseColorFactor, color.a);
`;

export const lightContribution = /* wgsl */ `
  // here N, V and NdotV are already available
  // they are defined as follows:
  // let N: vec3f = normalize(normal);
  // let viewDirection: vec3f = fsInput.viewDirection;
  // let V: vec3f = normalize(viewDirection);
  // let NdotV: f32 = clamp(dot(N, V), 0.0, 1.0);
  let L = normalize(pointLight.position - worldPosition);
  let H = normalize(V + L);
  
  let NdotL: f32 = clamp(dot(N, L), 0.0, 1.0);
  let NdotH: f32 = clamp(dot(N, H), 0.0, 1.0);
  let VdotH: f32 = clamp(dot(V, H), 0.0, 1.0);
  
  // cook-torrance brdf
  let NDF = DistributionGGX(NdotH, roughness);
  let G = GeometrySmith(NdotL, NdotV, roughness);
  let F = FresnelSchlick(VdotH, f0);
  
  let kD = (vec3(1.0) - F) * (1.0 - metallic);
  
  let numerator = NDF * G * F;
  let denominator = max(4.0 * NdotV * NdotL, 0.001);
  
  let specular = numerator / vec3(denominator);
  
  let distance = length(pointLight.position - worldPosition);
  let attenuation = rangeAttenuation(pointLight.range, distance);
  
  let radiance = pointLight.color * pointLight.intensity * attenuation;
  
  lightContribution.diffuse += (kD / vec3(PI)) * radiance * NdotL;
  lightContribution.specular += specular * radiance * NdotL;
`;

Passing the New Chunk to buildPBRShaders

// js/gltf-scene/GLTFScene.js
parameters.shaders = buildPBRShaders(meshDescriptor, {
  chunks: {
    additionalFragmentHead,
    preliminaryColorContribution,
    ambientContribution,
    lightContribution,
  },
});

Unfortunately, this approach doesn’t work very well, particularly for the gold color, which appears overly dull:

To achieve the desired result, we’ll need to use Photoshop-like blending techniques. However, no single blending mode works for all three colors, so we’ll have to handle this manually. To accomplish this, we’ll refactor our code to send all three base color factors as a single uniform. Additionally, we’ll add a new baseColorBlendIndex uniform to identify which color to use in the shader.

Updating the Uniforms

// js/gltf-scene/GLTFScene.js
// Add lights
const lightPosition = new Vec3(-radius * 1.25, radius * 0.5, radius * 1.5);
const lightPositionLength = lightPosition.length();

// Put all base color factors into a single array
const baseColorFactorsArray = this.cards
  .map((card) => {
    return [card.baseColorFactor.x, card.baseColorFactor.y, card.baseColorFactor.z];
  })
  .flat();
  
parameters.uniforms = {
  ...parameters.uniforms,
  ...{
    interaction: {
      struct: {
        baseColorFactorsArray: {
          type: 'array<vec3f>', // Pass an array of vec3f values
          value: baseColorFactorsArray,
        },
        baseColorBlendIndex: {
          type: 'i32',
          value: 0, // Default index
        },
      },
    },
    ambientLight: {
      struct: {
        intensity: {
          type: 'f32',
          value: 0.35,
        },
        color: {
          type: 'vec3f',
          value: new Vec3(1),
        },
      },
    },
    pointLight: {
      struct: {
        position: {
          type: 'vec3f',
          value: lightPosition,
        },
        intensity: {
          type: 'f32',
          value: lightPositionLength * 0.75,
        },
        color: {
          type: 'vec3f',
          value: new Vec3(1),
        },
        range: {
          type: 'f32',
          value: lightPositionLength * 2.5,
        },
      },
    },
  },
};

Tip: You can use arrays in uniforms as long as the total size of your uniform buffer stays within 65,536 bytes (64 KB), WebGPU’s default maxUniformBufferBindingSize. For larger arrays, you’ll need to use storage buffers, which we’ll explore in the next chapter.
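As a back-of-the-envelope check (assuming WebGPU’s default `maxUniformBufferBindingSize` of 65,536 bytes, queryable via `device.limits.maxUniformBufferBindingSize`, and the 16-byte element stride an `array<vec3f>` gets in the uniform address space), the limit leaves plenty of room for our three colors:

```javascript
// Rough uniform budget: how many vec3f array entries fit in one uniform buffer?
const maxUniformBufferBindingSize = 65536; // WebGPU default, in bytes
const vec3fArrayStride = 16; // vec3f is 12 bytes, padded to 16 in uniform arrays

const maxVec3Entries = Math.floor(maxUniformBufferBindingSize / vec3fArrayStride);
console.log(maxVec3Entries); // 4096
```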

Sending the Right baseColorBlendIndex Value on Button Click

// js/gltf-scene/GLTFScene.js
onButtonClicked(e) {
  const { target } = e;
  const cardName = target.hasAttribute('data-card-name')
    ? target.getAttribute('data-card-name')
    : this.cards[0].name;
    
  const cardIndex = this.cards.findIndex((c) => c.name === cardName);
  
  // Remove all previous card name classes
  this.cards.forEach((card) => {
    this.section.classList.remove(card.name);
  });
  
  // Add the active card class name
  this.section.classList.add(cardName);
  
  this.gltfMeshes?.forEach((mesh) => {
    mesh.uniforms.interaction.baseColorBlendIndex.value = cardIndex;
  });
}

Updating the Shader

// js/shaders/chunks/gltf-contributions.wgsl.js
export const preliminaryColorContribution = /* wgsl */ `
  // Multiply our base color by the interaction base color factor
  color = vec4(color.rgb * interaction.baseColorFactorsArray[interaction.baseColorBlendIndex], color.a);
`;

At this point, there are no visible changes yet, but we now have a clean base for implementing the color blending functionality. Moreover, this structure will assist us later when animating the color transitions.

Color Blending Operations: Saturation and Luminosity

We will use two color blending operations: saturation and luminosity. To achieve this, we need to define several helper functions and implement the final getBlendedColor function to calculate the desired color based on the selected blend mode.

// js/shaders/chunks/gltf-contributions.wgsl.js
export const additionalFragmentHead = /* wgsl */ `
  fn rangeAttenuation(range: f32, distance: f32) -> f32 {
    if (range <= 0.0) {
        // Negative range means no cutoff
        return 1.0 / pow(distance, 2.0);
    }
    return clamp(1.0 - pow(distance / range, 4.0), 0.0, 1.0) / pow(distance, 2.0);
  }

  // photoshop-like blending
  // port of https://gist.github.com/floz/53ad2765cc846187cdd3
  fn rgbToHSL(color: vec3f) -> vec3f {
    var hsl: vec3f;
    
    let fmin: f32 = min(min(color.r, color.g), color.b); // Min. value of RGB
    let fmax: f32 = max(max(color.r, color.g), color.b); // Max. value of RGB
    let delta: f32 = fmax - fmin; // Delta RGB value
  
    hsl.z = (fmax + fmin) / 2.0; // Luminance
  
    // This is a gray, no chroma...
    if (delta == 0.0) {
      hsl.x = 0.0; // Hue
      hsl.y = 0.0; // Saturation
    } else {
      // Chromatic data...
      if (hsl.z < 0.5) {
        hsl.y = delta / (fmax + fmin); // Saturation
      } else {
        hsl.y = delta / (2.0 - fmax - fmin); // Saturation
      }
      
      let deltaR: f32 = (((fmax - color.r) / 6.0) + (delta / 2.0)) / delta;
      let deltaG: f32 = (((fmax - color.g) / 6.0) + (delta / 2.0)) / delta;
      let deltaB: f32 = (((fmax - color.b) / 6.0) + (delta / 2.0)) / delta;
  
      if (color.r == fmax) {
        hsl.x = deltaB - deltaG; // Hue
      } else if (color.g == fmax) {
        hsl.x = (1.0 / 3.0) + deltaR - deltaB; // Hue
      } else if (color.b == fmax) {
        hsl.x = (2.0 / 3.0) + deltaG - deltaR; // Hue
      }
        
      if (hsl.x < 0.0) {
        hsl.x += 1.0; // Hue
      } else if (hsl.x > 1.0) {
        hsl.x -= 1.0; // Hue
      }
    }
  
    return hsl;
  }
  
  fn hueToRGB(f1: f32, f2: f32, hue: f32) -> f32 {
    var h = hue;
  
    if (h < 0.0) {
      h += 1.0;
    } else if (h > 1.0) {
      h -= 1.0;
    }
    
    var res: f32;
    
    if ((6.0 * h) < 1.0) {
      res = f1 + (f2 - f1) * 6.0 * h;
    } else if ((2.0 * h) < 1.0) {
      res = f2;
    } else if ((3.0 * h) < 2.0) {
      res = f1 + (f2 - f1) * ((2.0 / 3.0) - h) * 6.0;
    } else {
      res = f1;
    }
    
    return res;
  }
  
  fn hslToRGB(hsl: vec3f) -> vec3f {
    var rgb: vec3f;
    
    if (hsl.y == 0.0) {
      rgb = vec3(hsl.z); // Luminance
    } else {
      var f2: f32;
      
      if (hsl.z < 0.5) {
        f2 = hsl.z * (1.0 + hsl.y);
      } else {
        f2 = (hsl.z + hsl.y) - (hsl.y * hsl.z);
      }
        
      let f1: f32 = 2.0 * hsl.z - f2;
      
      rgb.r = hueToRGB(f1, f2, hsl.x + (1.0 / 3.0));
      rgb.g = hueToRGB(f1, f2, hsl.x);
      rgb.b = hueToRGB(f1, f2, hsl.x - (1.0 / 3.0));
    }
    
    return rgb;
  }  
  
  // Saturation Blend mode creates the result color by combining the luminance and hue of the base color with the saturation of the blend color.
  fn blendSaturation(base: vec3f, blend: vec3f) -> vec3f {
    let baseHSL: vec3f = rgbToHSL(base);
    return hslToRGB(vec3(baseHSL.r, rgbToHSL(blend).g, baseHSL.b));
  }
  
  // Luminosity Blend mode creates the result color by combining the hue and saturation of the base color with the luminance of the blend color.
  fn blendLuminosity(base: vec3f, blend: vec3f) -> vec3f {
    let baseHSL: vec3f = rgbToHSL(base);
    return hslToRGB(vec3(baseHSL.r, baseHSL.g, rgbToHSL(blend).b));
  }
  
  // Use the correct blend equation based on the blendIndex to use
  // and add small adjustments for a more visually pleasing result
  fn getBlendedColor(baseColor: vec4f, blendIndex: i32) -> vec4f {
    var blendedColor: vec4f;
    let blendColor: vec3f = interaction.baseColorFactorsArray[blendIndex];
    
    if (blendIndex == 1) {
      // gold
      blendedColor = vec4(blendLuminosity(blendColor, baseColor.rgb), baseColor.a);
    } else if (blendIndex == 2) {
      // different blending for black card
      blendedColor = vec4(blendColor * blendSaturation(baseColor.rgb, blendColor), baseColor.a);
    } else {
      // default to silver
      blendedColor = vec4(blendLuminosity(blendColor, baseColor.rgb), baseColor.a);
      
      // brighten silver card
      blendedColor = vec4(blendedColor.rgb * vec3(1.25), blendedColor.a);
    }
    
    return blendedColor;
  }
`;
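If you want to sanity-check the RGB ↔ HSL conversions outside the shader, here is a small JavaScript mirror of them (using the standard HSL formulas rather than a line-for-line port of the WGSL above):

```javascript
// Colors are [r, g, b] arrays with components in [0, 1].
function rgbToHSL([r, g, b]) {
  const fmax = Math.max(r, g, b);
  const fmin = Math.min(r, g, b);
  const delta = fmax - fmin;
  const l = (fmax + fmin) / 2;
  if (delta === 0) return [0, 0, l]; // achromatic: no hue, no saturation
  const s = l < 0.5 ? delta / (fmax + fmin) : delta / (2 - fmax - fmin);
  let h;
  if (fmax === r) h = ((g - b) / delta + 6) % 6;
  else if (fmax === g) h = (b - r) / delta + 2;
  else h = (r - g) / delta + 4;
  return [h / 6, s, l];
}

function hueToRGB(f1, f2, hue) {
  let h = hue;
  if (h < 0) h += 1;
  else if (h > 1) h -= 1;
  if (6 * h < 1) return f1 + (f2 - f1) * 6 * h;
  if (2 * h < 1) return f2;
  if (3 * h < 2) return f1 + (f2 - f1) * (2 / 3 - h) * 6;
  return f1;
}

function hslToRGB([h, s, l]) {
  if (s === 0) return [l, l, l];
  const f2 = l < 0.5 ? l * (1 + s) : l + s - s * l;
  const f1 = 2 * l - f2;
  return [hueToRGB(f1, f2, h + 1 / 3), hueToRGB(f1, f2, h), hueToRGB(f1, f2, h - 1 / 3)];
}

// The round trip should give back the original color
const gold = [240 / 255, 140 / 255, 15 / 255];
const roundTrip = hslToRGB(rgbToHSL(gold));
console.log(roundTrip.every((c, i) => Math.abs(c - gold[i]) < 1e-9)); // true
```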

Using the Blended Color in the Shader

// js/shaders/chunks/gltf-contributions.wgsl.js
export const preliminaryColorContribution = /* wgsl */ `
  // Get blended color based on the baseColorBlendIndex uniform
  color = getBlendedColor(color, interaction.baseColorBlendIndex);
`;

The output is now visually interesting. However, there are still no transitions, so the experience feels somewhat rough. Adding smooth transitions will enhance the overall effect.

Let’s add a cool transition! I assume you already have a clue on how we’ll do that. We’re going to add another uniform, let’s say colorChangeProgress, and mix the new and old colors together. There are loads of different effects available, and we’ll just have to pick one. You can find inspiration here, for example: https://gl-transitions.com/gallery (they are written in GLSL but are easy enough to port to WGSL).

In fact, we’re actually going to base the final animation on this one, but with a few tweaks: https://gl-transitions.com/editor/wipeRight

Okay, now for the uniforms:

// js/gltf-scene/GLTFScene.js
parameters.uniforms = {
  ...parameters.uniforms,
  ...{
    interaction: {
      struct: {
        baseColorFactorsArray: {
          type: 'array<vec3f>', // Pass an array of vec3f values
          value: baseColorFactorsArray,
        },
        currentBaseColorBlendIndex: {
          type: 'i32',
          value: 0,
        },
        nextBaseColorBlendIndex: {
          type: 'i32',
          value: 0,
        },
        colorChangeProgress: {
          type: 'f32',
          value: 0,
        },
      },
    },
    ambientLight: {
      struct: {
        intensity: {
          type: 'f32',
          value: 0.35,
        },
        color: {
          type: 'vec3f',
          value: new Vec3(1),
        },
      },
    },
    pointLight: {
      struct: {
        position: {
          type: 'vec3f',
          value: lightPosition,
        },
        intensity: {
          type: 'f32',
          value: lightPositionLength * 0.75,
        },
        color: {
          type: 'vec3f',
          value: new Vec3(1),
        },
        range: {
          type: 'f32',
          value: lightPositionLength * 2.5,
        },
      },
    },
  },
};

In the shader, we’ll just mix the current and next base color factors based on our colorChangeProgress for now:

// js/shaders/chunks/gltf-contributions.wgsl.js
export const preliminaryColorContribution = /* wgsl */ `
  // mix between blended color
  // based on our currentBaseColorBlendIndex, nextBaseColorBlendIndex and colorChangeProgress uniforms
  color = mix(
    getBlendedColor(color, interaction.currentBaseColorBlendIndex),
    getBlendedColor(color, interaction.nextBaseColorBlendIndex),
    interaction.colorChangeProgress
  );
`;

Finally, just add a GSAP tween to animate our colorChangeProgress uniform value:

// js/gltf-scene/GLTFScene.js
removeButtonInteractions() {
  this.updateColorTween?.kill();
  
  this.buttons.forEach((button) => {
    button.removeEventListener('click', this._buttonClickHandler);
  });
  
  this.buttons = [];
}

onButtonClicked(e) {
  const { target } = e;
  const cardName = target.hasAttribute('data-card-name') ? target.getAttribute('data-card-name') : this.cards[0].name;
  
  const cardIndex = this.cards.findIndex((c) => c.name === cardName);
  
  // remove all previous card name classes
  this.cards.forEach((card) => {
    this.section.classList.remove(card.name);
  });
  
  // add active card class name
  this.section.classList.add(cardName);
  
  const changeProgress = {
    value: 0,
  };
  
  this.updateColorTween?.kill();
  
  this.updateColorTween = gsap.to(changeProgress, {
    value: 1,
    duration: 1.25,
    ease: 'expo.inOut',
    onStart: () => {
      this.gltfMeshes.forEach((mesh) => {
        mesh.uniforms.interaction.nextBaseColorBlendIndex.value = cardIndex;
      });
    },
    onUpdate: () => {
      this.gltfMeshes.forEach((mesh) => {
        mesh.uniforms.interaction.colorChangeProgress.value = changeProgress.value;
      });
    },
    onComplete: () => {
      this.gltfMeshes.forEach((mesh) => {
        mesh.uniforms.interaction.currentBaseColorBlendIndex.value = cardIndex;
      });
    },
  });
}

Now we have a fade transition. We’ve done the most difficult part! We just have to improve the transition in our shader and we’ll be done.

We’re going to start by just using the wipe transition we’ve talked about above:

// js/shaders/chunks/gltf-contributions.wgsl.js
export const preliminaryColorContribution = /* wgsl */ `
  // get blended colors
  // based on our currentBaseColorBlendIndex, nextBaseColorBlendIndex uniforms
  let currentColor: vec4f = getBlendedColor(color, interaction.currentBaseColorBlendIndex);
  let nextColor: vec4f = getBlendedColor(color, interaction.nextBaseColorBlendIndex);
  
  // based on https://gl-transitions.com/editor/wipeRight
  let p: vec2f = fsInput.uv / vec2(1.0);
  
  color = mix(currentColor, nextColor, step(p.x, interaction.colorChangeProgress));
`;

It’s definitely better, but still a bit rough. So we’re going to add a little wavy effect to that transition. It will be scaled by the colorChangeProgress value as well, so it affects the transition less at the beginning and at the end:

// js/shaders/chunks/gltf-contributions.wgsl.js
export const preliminaryColorContribution = /* wgsl */ `
  // get blended colors
  // based on our currentBaseColorBlendIndex and nextBaseColorBlendIndex uniforms
  let currentColor: vec4f = getBlendedColor(color, interaction.currentBaseColorBlendIndex);
  let nextColor: vec4f = getBlendedColor(color, interaction.nextBaseColorBlendIndex);
  
  var uv: vec2f = fsInput.uv;
  let progress: f32 = interaction.colorChangeProgress;
  
  // convert to [-1, 1]
  uv = uv * 2.0 - 1.0;
  
  // apply deformation
  let uvDeformation: f32 = sin(abs(fsInput.uv.y * 2.0) * 3.141592) * 3.0;
  
  // 0 -> 0.5 -> 0
  let mappedProgress: f32 = 0.5 - (abs(progress * 2.0 - 1.0) * 0.5);
  
  // apply to X
  uv.x *= 1.0 - mappedProgress * uvDeformation;
  
  // convert back to [0, 1]
  uv = uv * 0.5 + 0.5;
  
  // mix between a simple slide change (from https://gl-transitions.com/editor/wipeRight)
  // and our custom animation based on progress
  let p: vec2f = mix(uv, fsInput.uv, smoothstep(0.0, 1.0, progress)) / vec2(1.0);
    
  color = mix(currentColor, nextColor, step(p.x, progress));
`;
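Outside the shader, the `mappedProgress` line is easy to verify in plain JavaScript:

```javascript
// The "0 -> 0.5 -> 0" comment in the shader describes this triangle mapping:
// the deformation peaks mid-transition and vanishes at both ends.
const mappedProgress = (progress) => 0.5 - Math.abs(progress * 2 - 1) * 0.5;

console.log(mappedProgress(0)); // 0
console.log(mappedProgress(0.5)); // 0.5
console.log(mappedProgress(1)); // 0
```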

We’re reusing the concept of UV-based distortion seen in the previous example’s post-processing shader and applying it based on our colorChangeProgress uniform value.

Mission accomplished!


#6. Wrapping up

23-gltf-6-wrapping-up

The last thing to do now before closing this chapter is adding a little entering animation to our 3D credit card object, polishing things a bit, and we’ll be done! We’ll just tween the parentNode scale and rotate it along the Y axis.

First, the scale tween:

// js/gltf-scene/GLTFScene.js
addEnteringAnimation() {
  this.autoAlphaElements = this.section.querySelectorAll('.gsap-auto-alpha');
  
  // animation
  this.animations = {
    meshesProgress: 0,
  };
  
  this.timeline = gsap
    .timeline({
      paused: true,
    })
    .call(() => {
      // reset mouse interaction and parent node scale on start
      this.mouse.currentInteraction.set(0);
      this.parentNode.scale.set(0);
    })
    .set(this.autoAlphaElements, { autoAlpha: 0 })
    .to(this.animations, {
      meshesProgress: 1,
      ease: 'expo.out',
      duration: 3,
      delay: 0.25,
      onUpdate: () => {
        this.parentNode.scale.set(this.animations.meshesProgress);
      },
    })
    .to(
      this.autoAlphaElements,
      {
        autoAlpha: 1,
        duration: 1,
        stagger: 0.125,
        ease: 'power2.inOut',
      },
      0.5
    );
}

Next, we’re going to use the tweened animations.meshesProgress value inside our onRender() method to rotate the object, et voilà!

// js/gltf-scene/GLTFScene.js
onRender() {
  this.mouse.lerpedInteraction.lerp(this.mouse.currentInteraction, 0.2);
  
  this.parentNode.rotation.x = this.mouse.lerpedInteraction.y;
  this.parentNode.rotation.y = this.animations.meshesProgress * Math.PI * 4 + this.mouse.lerpedInteraction.x;
}

Our product configurator is finally done. This was a long ride, but now we know how to load glTF objects and how to apply various shading to them. We’ve also seen how gpu-curtains lets us sync our glTF scenes with the DOM and how we can update a mesh’s base color.


#7. Going further

24-gltf-7-going-further

Could you come up with a solution to tweak our color change animation a bit more? Something like this for example?

Check out how I’d do that in the ‘24-gltf-7-going-further’ git branch!


Dive Into WebGPU — Part 4
Coming Soon…


Additional resources:
