Dive Into WebGPU — Part 2
By Martin Laxenaire

Part 2 — Planes scene

In Part 1 of this series, we saw how to draw meshes with custom shading.

In this tutorial, we will learn to create DOM synced planes (sometimes referred to as quads), detect when they enter the camera frustum, and apply a post-processing effect on top of our scene.


Table of Contents

  1. Gallery setup
  2. Adding the planes
  3. Adding the textures
  4. Adding post-processing
  5. Hooking to the scroll velocity
  6. Adding animations
  7. Going further

#1. Gallery setup

Start by switching to the 9-gallery-1-setup git branch.

git checkout 9-gallery-1-setup

There are some new HTML elements in our index.html file, along with some basic CSS rules that define the layout of our second scene.

You can already notice something that’s different from the previous scene: here, the HTML canvas container’s position is fixed. That’s because we’ll update the planes’ y positions as we scroll to keep them in sync with their DOM elements, while the canvas itself will always remain within the viewport.

This won’t pose any performance issue because the planes will be frustum culled (they won’t be drawn if they’re not inside our camera frustum). Plus, we can still toggle our renderer’s shouldRenderScene flag whenever the section leaves the viewport.

The other little trick is that we’re hiding the .plane element images with visibility: hidden;. We don’t want to render the original HTML images since we’re going to draw the WebGPU quads instead.

The last thing to note is the data-texture-name="planeTexture" attribute on the image tags. This will automatically set the texture binding name so we can use it in our shaders.
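
Putting these pieces together, the markup for one gallery item might look something like this (a hypothetical sketch; the real class names and structure live in the branch’s index.html):

<!-- hypothetical gallery item markup: the image is hidden via CSS but
     still drives the WebGPU plane's size, position and texture -->
<div class="plane">
  <img src="./images/gallery-image-1.jpg" data-texture-name="planeTexture" alt="" />
</div>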


#2. Adding the planes

git checkout 10-gallery-2-adding-planes

Let’s create a new PlanesScene.js inside the js/planes-scene folder and add this code:

// js/planes-scene/PlanesScene.js
import { Plane } from 'gpu-curtains'
import { ScrollTrigger } from 'gsap/ScrollTrigger'
import { DemoScene } from '../DemoScene'
import { gsap } from 'gsap'
export class PlanesScene extends DemoScene {
  constructor({ renderer }) {
    super({ renderer })
  }
  
  init() {
    this.section = document.querySelector('#planes-scene')
    
    this.planes = []
    
    this.planesElements = document.querySelectorAll('#planes-scene .plane')
    
    super.init()
  }
  
  setupWebGPU() {
    this.planesElements.forEach((planeEl, index) => {
      const plane = new Plane(this.renderer, planeEl)
      this.planes.push(plane)
    })
  }
  
  destroyWebGPU() {
    this.planes.forEach((plane) => {
      plane.remove()
    })
  }
  
  addScrollTrigger() {
    this.scrollTrigger = ScrollTrigger.create({
      trigger: '#planes-scene',
      onToggle: ({ isActive }) => {
        this.onSceneVisibilityChanged(isActive)
      },
    })
    
    this.onSceneVisibilityChanged(this.scrollTrigger.isActive)
  }
  
  removeScrollTrigger() {
    this.scrollTrigger.kill()
  }
  
  onSceneVisibilityChanged(isVisible) {
    if (isVisible) {
      this.section.classList.add('is-visible')
      this.renderer.shouldRenderScene = true
    } else {
      this.section.classList.remove('is-visible')
      this.renderer.shouldRenderScene = false
    }
  }
}

If you’ve been following the first article, nothing should really surprise you here.

Instead of creating a bunch of meshes, we’re using the Plane class here. It still takes our renderer as the first argument, but now the second argument is an HTML element whose position and size will be mapped to the created mesh under the hood.

We could of course pass additional parameters as the third argument, but we’ll leave that for later. We don’t have to pass any geometry as an option either, since the Plane class already creates a PlaneGeometry internally.

We also need to add it to our Demo.js script and create a new renderer.

// js/Demo.js
createScenes() {
  this.createIntroScene()
  
  this.createPlanesScene()
  
  this.lenis.on('scroll', (e) => {
    this.gpuCurtains.updateScrollValues({ x: 0, y: e.scroll })
    this.scenes.forEach((scene) => scene.onScroll(e.velocity))
  })
}
createIntroScene() {
  const introScene = new IntroScene({
    renderer: new GPUCameraRenderer({
      deviceManager: this.deviceManager,
      label: 'Intro scene renderer',
      container: '#intro-scene-canvas',
      pixelRatio: this.pixelRatio,
    }),
  })
  
  this.scenes.push(introScene)
}
createPlanesScene() {
  const planesScene = new PlanesScene({
    renderer: new GPUCurtainsRenderer({
      deviceManager: this.deviceManager,
      label: 'Planes scene renderer',
      container: '#planes-scene-canvas',
      pixelRatio: this.pixelRatio,
    }),
  })
  
  this.scenes.push(planesScene)
}

We’re instantiating a GPUCurtainsRenderer here. This renderer extends the GPUCameraRenderer we’ve used before by adding a couple of extra methods and properties that allow syncing meshes with DOM elements. This is achieved through two special mesh classes, DOMMesh and Plane. In this example, as we’ve seen above, we’ll use the Plane class.

Tip: Each renderer is responsible for its own canvas context, but the WebGPU resources are actually handled by the deviceManager. This means that WebGPU resources are shared between renderers, and you can even change a mesh renderer at runtime without any drawbacks!

You should now see something like this:

Once again, if you’ve been following the previous chapter closely, the result should not be surprising. The meshes are correctly created, their positions and sizes correspond to the various planesElements HTML elements (try inspecting the DOM with your dev tools), and we render them using our default normal shading. If you resize your screen or scroll, the planes’ sizes and positions adapt to the new values.


#3. Adding the textures

git checkout 11-gallery-3-adding-planes-textures

We need a fragment shader to display the planes’ textures. You’ll see this is pretty straightforward, since each plane has already automatically created a GPUTexture from its img child element.

Tip: The textures are automatically created because the Plane class options object autoloadSources property is set to true by default. You could disable this behavior by setting it to false and handle it yourself.

Create a gallery-planes.wgsl.js file inside the /js/shaders directory, and add this fragment shader code:

// js/shaders/gallery-planes.wgsl.js
export const planesFs = /* wgsl */ `
  struct VSOutput {
    @builtin(position) position: vec4f,
    @location(0) uv: vec2f,
  };
  
  @fragment fn main(fsInput: VSOutput) -> @location(0) vec4f {
    return textureSample(planeTexture, defaultSampler, fsInput.uv);
  }
`

As you can see, the textureSample WGSL function has 3 mandatory arguments: the name of our GPUTexture uniform, the name of a GPUSampler uniform to use to sample the texture, and the UV coordinates.

Where do those names come from?

  • Our GPUTexture uniform name has been set by using the data-texture-name attribute on our img child element.
  • The defaultSampler sampler uniform is, as the name suggests, a default GPUSampler created by our renderer and automatically added to our fragment shader as a uniform.

Of course, we need to add this shader as a parameter when instancing the Plane:

// js/planes-scene/PlanesScene.js
setupWebGPU() {
  this.planesElements.forEach((planeEl, index) => {
    const plane = new Plane(this.renderer, planeEl, {
      label: `Plane ${index}`,
      shaders: {
        fragment: {
          code: planesFs,
        },
      },
    })
    
    this.planes.push(plane)
  })
}

And if you check the result, there are our textured planes!

Neat. But hey, the portrait images don’t seem to have the correct aspect ratio; they look compressed along the X-axis. What happens is that we take 1280×720 images as inputs for the textures and display them on planes that have an aspect ratio of 10 / 15. They’re indeed distorted.
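
Concretely, the source images have an aspect ratio of 1280 / 720 ≈ 1.78 while the portrait planes sit at 10 / 15 ≈ 0.67, so the textures end up compressed by a factor of roughly 2.7 along the X-axis.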

What we’d like to achieve is an effect similar to the CSS background-size: cover property.

Fortunately, gpu-curtains has a little trick to help us achieve that. Each time a DOMMesh or a Plane loads a GPUTexture from a DOM image element, the library uses a DOMTexture class to handle it. This class has a textureMatrix property: a 4×4 matrix representing the actual scale of the texture relative to its parent mesh’s bounding rectangle. It is passed as a uniform to our vertex shader, named after the texture with ‘Matrix’ appended at the end. In our case: planeTextureMatrix.
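
To build an intuition for what that matrix encodes, here’s a rough JavaScript sketch of the underlying “cover” math (this is not gpu-curtains’ actual implementation, just the idea):

// rough sketch of the "background-size: cover" scale the texture matrix encodes
const planeRatio = 10 / 15 // plane width / height
const imageRatio = 1280 / 720 // image width / height

// shrink the UV range along the overflowing axis so the image is cropped, not stretched
const scale =
  imageRatio > planeRatio
    ? { x: planeRatio / imageRatio, y: 1 } // image wider than plane: crop left/right
    : { x: 1, y: imageRatio / planeRatio } // image taller than plane: crop top/bottom

console.log(scale) // { x: 0.375, y: 1 } with these values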

We thus need to create a vertex shader that computes the adjusted, scaled UV coordinates using this matrix and passes them to our fragment shader. Go back to our gallery-planes.wgsl.js file and add this vertex shader:

// js/shaders/gallery-planes.wgsl.js
export const planesVs = /* wgsl */ `
  struct VSOutput {
    @builtin(position) position: vec4f,
    @location(0) uv: vec2f,
  };
  
  @vertex fn main(
    attributes: Attributes,
  ) -> VSOutput {
    var vsOutput: VSOutput;
    
    vsOutput.position = getOutputPosition(attributes.position);
    
    // get correctly scaled UV coordinates
    vsOutput.uv = getUVCover(attributes.uv, planeTextureMatrix);
    
    return vsOutput;
  }
`
export const planesFs = /* wgsl */ `
  struct VSOutput {
    @builtin(position) position: vec4f,
    @location(0) uv: vec2f,
  };
  
  @fragment fn main(fsInput: VSOutput) -> @location(0) vec4f {
    return textureSample(planeTexture, defaultSampler, fsInput.uv);
  }
`

We’re using a built-in function called getUVCover to achieve that. If you remember the WGSL code appended by the library to our shaders in the first tutorial, you may have noticed this function defined in there. Now you’ll know what it’s for.

Next, don’t forget to add the vertex shader to our Plane parameters:

// js/planes-scene/PlanesScene.js
setupWebGPU() {
  this.planesElements.forEach((planeEl, index) => {
    const plane = new Plane(this.renderer, planeEl, {
      label: `Plane ${index}`,
      shaders: {
        vertex: {
          code: planesVs,
        },
        fragment: {
          code: planesFs,
        },
      },
    })
    
    this.planes.push(plane)
  })
}

And that’s it: we have perfectly scaled textures, whatever the sizes of the input images and HTML plane elements!

Before we move on to adding post-processing, there’s one last thing we could improve with these textures. On small screens, or while scrolling, you might notice aliasing artifacts known as moiré patterns. That’s because we’re using 1280×720 images and rendering them on much smaller quads, so the GPU has a hard time figuring out which texel (texture pixel) to sample.

We can improve this by telling the renderer to generate mipmaps for each texture. Mipmaps are a set of smaller textures generated from the original high-resolution texture. The GPU uses them when an object appears smaller on screen, reducing aliasing and improving rendering performance by sampling lower-resolution textures.
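
As a reference point, a full mip chain goes all the way down to 1×1, so the number of levels for a texture is floor(log2(max(width, height))) + 1:

// number of mip levels in a full chain (standard WebGPU formula)
const mipLevelCount = (width, height) => Math.floor(Math.log2(Math.max(width, height))) + 1

console.log(mipLevelCount(1280, 720)) // 11 levels, from 1280×720 down to 1×1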

With gpu-curtains, enabling this is super easy. Just add a texturesOptions object to the Plane parameters and set its generateMips option to true:

// js/planes-scene/PlanesScene.js
setupWebGPU() {
  this.planesElements.forEach((planeEl, index) => {
    const plane = new Plane(this.renderer, planeEl, {
      label: `Plane ${index}`,
      shaders: {
        vertex: {
          code: planesVs,
        },
        fragment: {
          code: planesFs,
        },
      },
      texturesOptions: {
        generateMips: true,
      },
    })
    
    this.planes.push(plane)
  })
}

#4. Adding post-processing

git checkout 12-gallery-4-adding-post-processing

We’ve successfully added WebGPU planes synced to their respective DOM elements and correctly displayed their textures. But right now, the result is exactly the same as not using them at all, since we’re just rendering them at the same place and size. So why bother?

Because now that we’ve set all of that up, we can easily apply any WebGPU-powered effect we want. In this example, we’ll demonstrate a simple distortion-based post-processing effect hooked to the scroll velocity, but really, anything is possible.

Adding a post-processing pass is straightforward using the built-in ShaderPass class:

// js/planes-scene/PlanesScene.js
setupWebGPU() {
  this.planesElements.forEach((planeEl, index) => {
    const plane = new Plane(this.renderer, planeEl, {
      label: `Plane ${index}`,
      shaders: {
        vertex: {
          code: planesVs,
        },
        fragment: {
          code: planesFs,
        },
      },
      texturesOptions: {
        generateMips: true,
      },
    })
    
    this.planes.push(plane)
  })
  
  this.shaderPass = new ShaderPass(this.renderer)
}
destroyWebGPU() {
  this.planes.forEach((plane) => {
    plane.remove()
  })
  
  this.shaderPass.remove()
}

Before checking the result, what do you think will be displayed on the screen? Since we haven’t passed our shaderPass any shader yet, you might expect it to display a pale violet quad covering the screen, corresponding to the plane normals.

Let’s have a look at the result:

But nothing changes. Isn’t that weird? Have we actually correctly added the post-processing pass?

Yes, everything is working as expected. Shader passes can use default built-in shaders like other meshes, but they don’t use the same ones!

To get a better understanding of what’s being drawn here, let’s use the getShaderCode() helper method again:

// js/planes-scene/PlanesScene.js
setupWebGPU() {
  this.planesElements.forEach((planeEl, index) => {
    const plane = new Plane(this.renderer, planeEl, {
      label: `Plane ${index}`,
      shaders: {
        vertex: {
          code: planesVs,
        },
        fragment: {
          code: planesFs,
        },
      },
      texturesOptions: {
        generateMips: true,
      },
    })
    
    this.planes.push(plane)
  })
  
  this.shaderPass = new ShaderPass(this.renderer)
  
  this.shaderPass.onReady(() => {
    console.log(
      '// >>> SHADER PASS VERTEX SHADER\n\n',
      this.shaderPass.material.getShaderCode('vertex'),
      '\n\n// >>> SHADER PASS FRAGMENT SHADER\n\n',
      this.shaderPass.material.getShaderCode('fragment')
    )
  })
}

Now, look at the console output:

// >>> SHADER PASS VERTEX SHADER
struct Attributes {
	@builtin(vertex_index) vertexIndex : u32,
	@builtin(instance_index) instanceIndex : u32,
	@location(0) position: vec3f,
	@location(1) uv: vec2f,
	@location(2) normal: vec3f
};
fn getUVCover(uv: vec2f, textureMatrix: mat4x4f) -> vec2f {
  return (textureMatrix * vec4f(uv, 0.0, 1.0)).xy;
}
@group(0) @binding(0) var defaultSampler: sampler;
@group(0) @binding(1) var renderTexture: texture_2d<f32>;
struct VSOutput {
  @builtin(position) position: vec4f,
  @location(0) uv: vec2f,
};
@vertex fn main(
  attributes: Attributes,
) -> VSOutput {
  var vsOutput: VSOutput;
  
  vsOutput.position = vec4f(attributes.position, 1.0);
  
  vsOutput.uv = attributes.uv;
  
  return vsOutput;
}
// >>> SHADER PASS FRAGMENT SHADER
fn getVertex2DToUVCoords(vertex: vec2f) -> vec2f {
  return vec2(
    vertex.x * 0.5 + 0.5,
    0.5 - vertex.y * 0.5
  );
}
fn getVertex3DToUVCoords(vertex: vec3f) -> vec2f {
  return getVertex2DToUVCoords(vec2(vertex.x, vertex.y));
}
fn getUVCover(uv: vec2f, textureMatrix: mat4x4f) -> vec2f {
  return (textureMatrix * vec4f(uv, 0.0, 1.0)).xy;
}
@group(0) @binding(0) var defaultSampler: sampler;
@group(0) @binding(1) var renderTexture: texture_2d<f32>;
struct VSOutput {
  @builtin(position) position: vec4f,
  @location(0) uv: vec2f,
};
@fragment fn main(fsInput: VSOutput) -> @location(0) vec4f {
  return textureSample(renderTexture, defaultSampler, fsInput.uv);
}

The vertex shader is different because it doesn’t use any matrices: it just outputs the position attribute as is. In fact, a ShaderPass has no matrices at all, which is why none are passed to the vertex shader as uniforms. This saves some memory on the GPU and avoids useless matrix computations on the CPU. The fragment shader samples the renderTexture, which holds the content of our main frame buffer; this is why we get the same result as before.

Tip: WebGPU doesn’t have an exact equivalent of WebGL’s frame buffer objects. Instead, you use a render pass descriptor to explicitly tell onto which texture(s) you want to draw your meshes, and that’s what gpu-curtains uses internally for its post-processing passes. This can be very helpful for things like multisampled anti-aliasing or drawing to multiple targets, and it explains why, even with an additional pass, we still get MSAA out of the box.
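
To make that tip concrete, here’s what such a descriptor looks like in raw WebGPU (a minimal sketch, independent of gpu-curtains; commandEncoder and offscreenTexture are assumed to already exist):

// raw WebGPU sketch: the render pass descriptor decides which texture the pass draws into
const passEncoder = commandEncoder.beginRenderPass({
  colorAttachments: [
    {
      view: offscreenTexture.createView(), // render into an offscreen texture
      loadOp: 'clear', // clear it before drawing
      storeOp: 'store', // keep the result so a later pass can sample it
      clearValue: { r: 0, g: 0, b: 0, a: 1 },
    },
  ],
})

// ... encode draw calls here, then:
passEncoder.end()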

Next, let’s write our post-processing fragment shader. Create a gallery-shader-pass.wgsl.js file inside our /js/shaders directory with the following code:

// js/shaders/gallery-shader-pass.wgsl.js
export const galleryShaderPassFs = /* wgsl */ `
  struct VSOutput {
    @builtin(position) position: vec4f,
    @location(0) uv: vec2f,
  };
  
  @fragment fn main(fsInput: VSOutput) -> @location(0) vec4f {
    var uv: vec2f = fsInput.uv;
    
    // convert to [-1, 1]
    uv = uv * 2.0 - 1.0;
    
    // apply deformation
    let uvDeformation: f32 = cos(abs(uv.y) * 3.141592 * 0.5);
    
    uv.x *= 1.0 + uvDeformation;
    
    // convert back to [0, 1]
    uv = uv * 0.5 + 0.5;
    
    return textureSample(renderTexture, defaultSampler, uv);
  }
`
  1. We first convert our UV coordinates to the [-1, 1] range.
  2. Then we compute a deformation based on the remapped uv.y coordinate: when uv.y equals 0, the deformation is at its maximum; when abs(uv.y) equals 1, there’s no deformation at all. We use a cos() function to give the deformation a smooth, curved falloff.
  3. We apply that deformation to the UV along the X-axis.
  4. We remap the UV coordinates back to the [0, 1] range.
  5. We use the tweaked UV to sample our renderTexture.

Tip: In WebGPU, UV coordinates range from 0 to 1 on both axes, with [0, 0] at the top-left pixel and [1, 1] at the bottom-right pixel. This is different from WebGL, where the Y axis is flipped, ranging from 0 at the bottom to 1 at the top.

As usual, we add it to our ShaderPass parameters:

// js/planes-scene/PlanesScene.js
setupWebGPU() {
  this.planesElements.forEach((planeEl, index) => {
    const plane = new Plane(this.renderer, planeEl, {
      label: `Plane ${index}`,
      shaders: {
        vertex: {
          code: planesVs,
        },
        fragment: {
          code: planesFs,
        },
      },
      texturesOptions: {
        generateMips: true,
      },
    })
    
    this.planes.push(plane)
  })
  
  // ShaderPass is imported from 'gpu-curtains', like Plane
  this.shaderPass = new ShaderPass(this.renderer, {
    label: 'Distortion shader pass',
    shaders: {
      fragment: {
        code: galleryShaderPassFs,
      },
    },
  })
}

And the result:

Ok, so the distortion is correctly applied, but the texture seems to be repeated on both sides. What’s up with that?

This happens because the defaultSampler we’re using has its address modes set to repeat on both axes. This means that whenever a UV coordinate goes above 1 or below 0, the texture wraps around and repeats endlessly.
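
For example, with a repeat address mode, sampling at u = 1.25 reads the same texel as sampling at u = 0.25.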

We could add the following line to our shaders just before sampling the texture:

uv = clamp(uv, vec2(0.0), vec2(1.0));

But that’s more of a hack. Instead, what we can do is create a new GPUSampler with both address modes clamped to the edges. Each gpu-curtains mesh has a samplers property that accepts an array of Sampler class objects, which we can use to sample our textures.

To instantiate a new Sampler, we pass the renderer as the first parameter as usual, and the second parameter is an object where we define the GPUSampler properties:

// js/planes-scene/PlanesScene.js
setupWebGPU() {
  this.planesElements.forEach((planeEl, index) => {
    const plane = new Plane(this.renderer, planeEl, {
      label: `Plane ${index}`,
      shaders: {
        vertex: {
          code: planesVs,
        },
        fragment: {
          code: planesFs,
        },
      },
      texturesOptions: {
        generateMips: true,
      },
    })
    
    this.planes.push(plane)
  })
  
  this.shaderPass = new ShaderPass(this.renderer, {
    label: 'Distortion shader pass',
    shaders: {
      fragment: {
        code: galleryShaderPassFs,
      },
    },
    // Sampler is imported from 'gpu-curtains' as well
    samplers: [
      new Sampler(this.renderer, {
        label: 'Clamp sampler',
        name: 'clampSampler',
        addressModeU: 'clamp-to-edge',
        addressModeV: 'clamp-to-edge',
      }),
    ],
  })
}

Note the name property. That’s what we’re going to use in our shader:

// js/shaders/gallery-shader-pass.wgsl.js
export const galleryShaderPassFs = /* wgsl */ `
  struct VSOutput {
    @builtin(position) position: vec4f,
    @location(0) uv: vec2f,
  };
  
  @fragment fn main(fsInput: VSOutput) -> @location(0) vec4f {
    var uv: vec2f = fsInput.uv;
    
    // convert to [-1, 1]
    uv = uv * 2.0 - 1.0;
    
    // apply deformation
    let uvDeformation: f32 = cos(abs(uv.y) * 3.141592 * 0.5);
    
    uv.x *= 1.0 + uvDeformation;
    
    // convert back to [0, 1]
    uv = uv * 0.5 + 0.5;
    
    return textureSample(renderTexture, clampSampler, uv);
  }
`

The texture isn’t repeated anymore:


#5. Hooking to the scroll velocity

git checkout 13-gallery-5-adding-scroll-effect

What we’d like is for the deformation to depend on the scroll velocity. Besides, the current deformation is a bit too strong. We just need to add a couple of uniforms to control the maximum strength and the current scroll strength, update them on scroll, and use them in the fragment shader.

Add the uniforms:

// js/planes-scene/PlanesScene.js
setupWebGPU() {
  this.planesElements.forEach((planeEl, index) => {
    const plane = new Plane(this.renderer, planeEl, {
      label: `Plane ${index}`,
      shaders: {
        vertex: {
          code: planesVs,
        },
        fragment: {
          code: planesFs,
        },
      },
      texturesOptions: {
        generateMips: true,
      },
    })
    
    this.planes.push(plane)
  })
  
  this.shaderPass = new ShaderPass(this.renderer, {
    label: 'Distortion shader pass',
    shaders: {
      fragment: {
        code: galleryShaderPassFs,
      },
    },
    samplers: [
      new Sampler(this.renderer, {
        label: 'Clamp sampler',
        name: 'clampSampler',
        addressModeU: 'clamp-to-edge',
        addressModeV: 'clamp-to-edge',
      }),
    ],
    uniforms: {
      deformation: {
        struct: {
          maxStrength: {
            type: 'f32',
            value: 0.1,
          },
          scrollStrength: {
            type: 'f32',
            value: 0,
          },
        },
      },
    },
  })
}

Tip: We don’t have to set the uniform visibility to ['fragment'] here, as it is already the default for ShaderPass.

Update the scrollStrength uniform on scroll:

// js/planes-scene/PlanesScene.js
onScroll(velocity = 0) {
  if (this.shaderPass) {
    this.shaderPass.uniforms.deformation.scrollStrength.value = Math.abs(velocity) * 0.05
  }
}

And apply them in our fragment shader:

// js/shaders/gallery-shader-pass.wgsl.js
export const galleryShaderPassFs = /* wgsl */ `
  struct VSOutput {
    @builtin(position) position: vec4f,
    @location(0) uv: vec2f,
  };
  
  @fragment fn main(fsInput: VSOutput) -> @location(0) vec4f {
    var uv: vec2f = fsInput.uv;
    
    // convert to [-1, 1]
    uv = uv * 2.0 - 1.0;
    
    // apply deformation
    let uvDeformation: f32 = cos(abs(uv.y) * 3.141592 * 0.5);
    
    // apply deformation uniforms
    uv.x *= 1.0 + deformation.maxStrength * deformation.scrollStrength * uvDeformation;
    
    // convert back to [0, 1]
    uv = uv * 0.5 + 0.5;
    
    return textureSample(renderTexture, clampSampler, uv);
  }
`

It works, but it’s not perfect. Each time a wheel event fires, the scroll velocity spikes drastically, causing the animation to stutter.

To work around this, we can weight the velocity value by the velocity delta, smoothing out those spikes.

We’ll need to keep track of that weighted scroll velocity:

// js/planes-scene/PlanesScene.js
init() {
  this.section = document.querySelector('#planes-scene')
  
  this.planes = []
  
  this.planesElements = document.querySelectorAll('#planes-scene .plane')
  
  this.velocity = {
    weightRatio: 0.75, // the smaller, the closer to the original velocity value
    weighted: 0,
  }
  
  super.init()
}

And apply the weight in our onScroll callback:

// js/planes-scene/PlanesScene.js
onScroll(velocity = 0) {
  // no weight if current velocity is null
  const weight = velocity ? Math.abs(velocity - this.velocity.weighted) * this.velocity.weightRatio : 0
  
  // apply weight
  this.velocity.weighted = (this.velocity.weighted * weight + Math.abs(velocity)) / (weight + 1)
  
  if (this.shaderPass) {
    this.shaderPass.uniforms.deformation.scrollStrength.value = this.velocity.weighted * 0.05
  }
}
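
To see how this damps spikes, suppose the weighted value is currently 0 and a wheel event suddenly reports a velocity of 10: the weight becomes |10 - 0| × 0.75 = 7.5, and the new weighted value is (0 × 7.5 + 10) / (7.5 + 1) ≈ 1.18. The bigger the jump, the more the previous value dominates; steady scrolling, on the other hand, produces a near-zero weight and passes through almost unchanged.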

#6. Adding animations

git checkout 14-gallery-6-adding-animations

Before we’re done, let’s add two types of animations.

The first one is a basic fade-in animation for the title and description when the section comes into view. This is easy to implement:

// js/planes-scene/PlanesScene.js
onSceneVisibilityChanged(isVisible) {
  if (isVisible) {
    this.section.classList.add('is-visible')
    this.renderer.shouldRenderScene = true
    this.timeline?.restart(true)
  } else {
    this.section.classList.remove('is-visible')
    this.renderer.shouldRenderScene = false
    this.timeline?.pause()
  }
}
addEnteringAnimation() {
  this.autoAlphaElements = this.section.querySelectorAll('.gsap-auto-alpha')
  
  this.timeline = gsap
    .timeline({ paused: true })
    .fromTo(
      this.autoAlphaElements,
      { autoAlpha: 0 },
      {
        autoAlpha: 1,
        duration: 1,
        stagger: 0.2,
        ease: 'power2.inOut',
      },
      0.25
    )
}
removeEnteringAnimation() {
  this.timeline.kill()
}

The next animation fades in each plane while scaling down its texture when it enters the viewport.

Instead of using a ScrollTrigger to detect when the planes enter the viewport (which would not work correctly if we applied arbitrary scaling or rotation), we can rely on gpu-curtains’ frustum culling, which provides the onLeaveView() and onReEnterView() callbacks whenever a mesh leaves or re-enters the camera frustum.

// js/planes-scene/PlanesScene.js
setupWebGPU() {
  this.planesElements.forEach((planeEl, index) => {
    const plane = new Plane(this.renderer, planeEl, {
      label: `Plane ${index}`,
      shaders: {
        vertex: { code: planesVs },
        fragment: { code: planesFs },
      },
      texturesOptions: { generateMips: true },
    })
    
    plane.onLeaveView(() => {
      console.log(`${plane.options.label} just left the camera frustum`)
    }).onReEnterView(() => {
      console.log(`${plane.options.label} just reentered the camera frustum`)
    })
    
    this.planes.push(plane)
  })
  
  this.shaderPass = new ShaderPass(this.renderer, {
    label: 'Distortion shader pass',
    shaders: { fragment: { code: galleryShaderPassFs } },
    samplers: [
      new Sampler(this.renderer, {
        label: 'Clamp sampler',
        name: 'clampSampler',
        addressModeU: 'clamp-to-edge',
        addressModeV: 'clamp-to-edge',
      }),
    ],
    uniforms: {
      deformation: {
        struct: {
          maxStrength: { type: 'f32', value: 0.1 },
          scrollStrength: { type: 'f32', value: 0 },
        },
      },
    },
  })
}

We can check the console to ensure it’s working.

Next, add an opacity uniform and set the Plane’s transparent property to true so that blending is handled correctly. No need for alpha-blending hacks in the fragment shader.

// js/planes-scene/PlanesScene.js
setupWebGPU() {
  this.planesElements.forEach((planeEl, index) => {
    const plane = new Plane(this.renderer, planeEl, {
      label: `Plane ${index}`,
      shaders: {
        vertex: { code: planesVs },
        fragment: { code: planesFs },
      },
      texturesOptions: { generateMips: true },
      transparent: true,
      uniforms: {
        params: {
          struct: {
            opacity: { type: 'f32', value: 1 },
          },
        },
      },
    })
    
    plane.onReEnterView(() => {
      // we'll start our animation here later
    })
    
    this.planes.push(plane)
  })
}

We also need to update the plane’s fragment shader to use the opacity uniform:

// js/shaders/gallery-planes.wgsl.js
export const planesFs = /* wgsl */ `
  struct VSOutput {
    @builtin(position) position: vec4f,
    @location(0) uv: vec2f,
  };
  
  @fragment fn main(fsInput: VSOutput) -> @location(0) vec4f {
    var color: vec4f = textureSample(planeTexture, defaultSampler, fsInput.uv);
    
    color.a *= params.opacity;
    
    return color;
  }
`

Finally, let’s add the animation itself. We’ll store the animation timeline using the userData property of the plane:

// js/planes-scene/PlanesScene.js
setupWebGPU() {
  this.planesElements.forEach((planeEl, index) => {
    const plane = new Plane(this.renderer, planeEl, {
      label: `Plane ${index}`,
      shaders: {
        vertex: { code: planesVs },
        fragment: { code: planesFs },
      },
      texturesOptions: { generateMips: true },
      transparent: true,
      uniforms: {
        params: {
          struct: {
            opacity: { type: 'f32', value: 1 },
          },
        },
      },
    })
    
    plane.userData.animationTimeline = gsap
      .timeline({ paused: true })
      .fromTo(
        plane.uniforms.params.opacity,
        { value: 0 },
        {
          value: 1,
          duration: 1.5,
          ease: 'expo.out',
          onUpdate: () => {
            const textureScale = 1.5 - plane.uniforms.params.opacity.value * 0.5
            plane.domTextures[0]?.scale.set(textureScale, textureScale, 1)
          },
        }
      )
      
    plane.onReEnterView(() => {
      plane.userData.animationTimeline.restart()
    })
    
    this.planes.push(plane)
  })
}
destroyWebGPU() {
  this.planes.forEach((plane) => {
    plane.userData.animationTimeline.kill()
    plane.remove()
  })
  
  this.shaderPass.remove()
}

Two things to note:

  1. We access the DOM texture using the domTextures[0] property.
  2. We kill the animation before removing the plane to avoid memory leaks.

And that’s it! We’ve added DOM synced planes, handled their visibility in the camera frustum, applied post-processing effects, and animated them based on scroll velocity.


#7. Going further

Can you think of a way to animate each plane sequentially when they re-enter the viewport?

Find out how to do it in the 15-gallery-7-going-further branch!


Next up: Dive Into WebGPU — Part 3

