In Part 1 of this series, we saw how to draw meshes with custom shading.
In this tutorial, we will learn to create DOM synced planes (sometimes referred to as quads), detect when they enter the camera frustum, and apply a post-processing effect on top of our scene.
There are some new HTML elements in our index.html file, along with some basic CSS rules that define the layout of our second scene.
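It boils down to something like this (a simplified sketch: the class names, the `#canvas` id and the image paths are placeholders, and only the CSS rules relevant to this discussion are shown):

```html
<!-- fixed canvas container: the WebGPU canvas always stays within the viewport -->
<div id="canvas"></div>

<section id="planes">
  <div class="plane">
    <!-- data-texture-name sets the texture binding name we'll use in our shaders -->
    <img src="path/to/image-1.jpg" data-texture-name="planeTexture" />
  </div>
  <!-- ...more .plane elements... -->
</section>

<style>
  #canvas {
    position: fixed;
    inset: 0;
  }

  /* hide the original images: the WebGPU quads will be drawn instead */
  .plane img {
    visibility: hidden;
  }
</style>
```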
You can already notice that something’s different from the previous scene: here, the HTML canvas container has a fixed position. That’s because we’ll update the planes’ y position while we scroll to keep them in sync with the DOM elements, while the canvas itself always remains within the viewport.
This won’t pose any performance issue because the planes will be frustum culled (they won’t be drawn if they’re not inside our camera frustum). Plus, we can still toggle our renderer’s shouldRenderScene flag whenever the section leaves the viewport.
The other little trick is that we’re hiding the .plane element images with visibility: hidden;. We don’t want to render the original HTML images since we’re going to draw the WebGPU quads instead.
The last thing to note is the data-texture-name="planeTexture" attribute on the image tags. This will automatically set the texture binding name so we can use it in our shaders.
If you’ve been following the first article, nothing should really surprise you here.
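Here is a stripped-down sketch of what our gallery setup could look like (the surrounding class structure is omitted, and renderer stands for the GPUCurtainsRenderer we’ll create in Demo.js right after):

```js
import { Plane } from 'gpu-curtains'

// query every DOM element we want to turn into a WebGPU quad
const planeElements = document.querySelectorAll('.plane')

// one Plane per DOM element: the renderer comes first, then the element to sync with
const planes = Array.from(planeElements).map((element) => new Plane(renderer, element))
```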
Instead of creating a bunch of meshes, we’re using the Plane class here. It still takes our renderer as the first argument, but the second argument is now an HTML element whose position and size will be mapped to the created mesh under the hood.
We could of course pass additional parameters as the third argument, but we’ll leave that for later. We don’t have to pass any geometry as an option, since the Plane class already creates a PlaneGeometry internally.
We also need to add it to our Demo.js script and create a new renderer.
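A minimal version could look like this (assuming the GPUDeviceManager setup from the first part; the container selector is a placeholder for our fixed canvas container):

```js
import { GPUDeviceManager, GPUCurtainsRenderer } from 'gpu-curtains'

// the device manager holds the actual WebGPU adapter and device
const gpuDeviceManager = new GPUDeviceManager()
await gpuDeviceManager.init()

// the curtains renderer adds DOM syncing capabilities on top of the camera renderer
const renderer = new GPUCurtainsRenderer({
  deviceManager: gpuDeviceManager,
  container: '#canvas', // placeholder selector
})
```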
We’re instancing a GPUCurtainsRenderer here. This renderer extends the GPUCameraRenderer we’ve used before by adding a couple of extra methods and properties that allow syncing meshes with DOM elements. This is achieved by using two special mesh classes, DOMMesh and Plane. In this example, as we’ve seen above, we’ll use the Plane class.
Tip: Each renderer is responsible for its own canvas context, but the WebGPU resources are actually handled by the deviceManager. This means that WebGPU resources are shared between renderers, and you can even change a mesh renderer at runtime without any drawbacks!
You should now see something like this:
Once again, if you’ve been following the previous chapter closely, the result should not be surprising. The meshes are correctly created, their positions and sizes match the various .plane HTML elements (try inspecting the DOM with your dev tools), and we render them using our default normal shading. If you resize your screen or try to scroll, the planes’ sizes and positions should adapt to the new values.
We need a fragment shader to display the planes’ textures. You’ll see this is pretty straightforward since each plane has already automatically created a GPUTexture containing the plane img child element.
Tip: The textures are automatically created because the Plane class’s autoloadSources option is set to true by default. You could disable this behavior by setting it to false and handle the loading yourself.
Create a gallery-planes.wgsl.js file inside the /js/shaders directory, and add this fragment shader code:
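A minimal version could look like the following sketch. The export name is just a placeholder, planeTexture and defaultSampler are the binding names explained right below, and the VSOutput struct only declares the UV varying we need (assumed to sit at location 0, which is also what our own vertex shader will output in the next step):

```js
// js/shaders/gallery-planes.wgsl.js
export const galleryPlanesFs = /* wgsl */ `
  struct VSOutput {
    @builtin(position) position: vec4f,
    @location(0) uv: vec2f,
  };

  @fragment fn main(fsInput: VSOutput) -> @location(0) vec4f {
    // sample our plane texture with the renderer's default sampler
    return textureSample(planeTexture, defaultSampler, fsInput.uv);
  }
`
```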
As you can see, the textureSample WGSL function has 3 mandatory arguments: the name of our GPUTexture uniform, the name of the GPUSampler to use to sample it, and the UV coordinates.
Where do those names come from?
Our GPUTexture uniform name has been set by using the data-texture-name attribute on our img child element.
The defaultSampler sampler uniform is, as the name suggests, a default GPUSampler created by our renderer and automatically added to our fragment shader as a uniform.
Of course, we need to add this shader as a parameter when instancing the Plane:
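That could look like this (the import path depends on your project structure, and element stands for one of our .plane elements):

```js
import { Plane } from 'gpu-curtains'
import { galleryPlanesFs } from '../shaders/gallery-planes.wgsl.js'

const plane = new Plane(renderer, element, {
  label: 'Gallery plane', // optional, but handy when debugging
  shaders: {
    fragment: {
      code: galleryPlanesFs,
    },
  },
})
```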
And if you check the result, there are our textured planes!
Neat. But hey, the portrait images don’t seem to have the correct aspect ratio; they look compressed along the X-axis. What happens is that we’re using 1280×720 images as texture inputs and displaying them on planes with a 10 / 15 aspect ratio, so they’re indeed distorted.
What we’d like to achieve is an effect similar to the CSS background-size: cover property.
Fortunately, gpu-curtains has a little trick to help us achieve that. Each time a DOMMesh or Plane loads a GPUTexture from a DOM image element, the library uses a DOMTexture class to handle it. This class has a property called textureMatrix that computes a 4×4 matrix representing the actual scale of the texture relative to its parent mesh container bounding rectangle. It is passed as a uniform to our vertex shader using the texture uniform name, with ‘Matrix’ appended at the end. In our case: planeTextureMatrix.
We thus need to create a vertex shader that computes the adjusted, scaled UV coordinates using this matrix and passes them to our fragment shader. Go back to our gallery-planes.wgsl.js file and add this vertex shader:
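A possible implementation, assuming the Attributes struct and the getOutputPosition() projection helper that the library injects into our shaders (planeTextureMatrix and getUVCover are the names we’ve just discussed):

```js
// js/shaders/gallery-planes.wgsl.js
export const galleryPlanesVs = /* wgsl */ `
  struct VSOutput {
    @builtin(position) position: vec4f,
    @location(0) uv: vec2f,
  };

  @vertex fn main(attributes: Attributes) -> VSOutput {
    var vsOutput: VSOutput;

    // project the vertex position
    vsOutput.position = getOutputPosition(attributes.position);

    // scale the UV with the texture matrix to mimic background-size: cover
    vsOutput.uv = getUVCover(attributes.uv, planeTextureMatrix);

    return vsOutput;
  }
`
```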
We’re using a built-in function called getUVCover to achieve that. If you remember the WGSL code appended by the library to our shaders in the first tutorial, you may have noticed this function defined in there. Now you’ll know what it’s for.
Next, don’t forget to add the vertex shader to our Plane parameters:
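Which could translate to:

```js
import { galleryPlanesVs, galleryPlanesFs } from '../shaders/gallery-planes.wgsl.js'

const plane = new Plane(renderer, element, {
  label: 'Gallery plane',
  shaders: {
    vertex: {
      code: galleryPlanesVs, // the newly added vertex shader
    },
    fragment: {
      code: galleryPlanesFs,
    },
  },
})
```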
And that’s it, we have perfectly scaled textures, whatever the sizes of the input images and HTML plane elements!
Before we move on to adding post-processing, there’s one last thing we could improve with these textures. On small screens, or while scrolling, you might notice aliasing artifacts known as moiré patterns. That’s because we’re using 1280×720 images and rendering them on smaller quads, and the GPU has a hard time figuring out which texel (texture pixel) to sample.
We can improve this by telling the renderer to generate mipmaps for each texture. Mipmaps are a set of smaller textures generated from the original high-resolution texture. The GPU uses them when an object appears smaller on screen, reducing aliasing and improving rendering performance by sampling lower-resolution textures.
With gpu-curtains, it is super easy to use. Just add a texturesOptions object to the Plane parameters and set its generateMips option to true:
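For instance (keeping the shaders from the previous steps):

```js
const plane = new Plane(renderer, element, {
  // ...shaders from the previous steps
  texturesOptions: {
    generateMips: true, // generate mipmaps for every texture loaded by this plane
  },
})
```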
We’ve successfully added WebGPU planes synced to their respective DOM elements and correctly displayed their textures. But right now, the result is exactly the same as not using them at all, since we’re just rendering them at the same place and size. So why bother?
Because now that we’ve set all of that up, we can easily apply any WebGPU-powered effect we want. In this example, we’ll demonstrate a simple distortion-based post-processing effect hooked to the scroll velocity, but really, anything is possible.
Adding a post-processing pass is straightforward using the built-in ShaderPass class:
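A bare-bones pass, without any shaders for now, could be created like this (the label is just for debugging purposes):

```js
import { ShaderPass } from 'gpu-curtains'

// no shaders passed yet: the pass will fall back to its built-in defaults
const shaderPass = new ShaderPass(renderer, {
  label: 'Gallery shader pass',
})
```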
Before checking the result, what do you think will be displayed on the screen? Since we haven’t passed our shaderPass any shader yet, you might expect it to display a pale violet quad covering the screen, corresponding to the plane normals.
Let’s have a look at the result:
But nothing changes. Isn’t that weird? Have we actually correctly added the post-processing pass?
Yes, everything is working as expected. Shader passes can use default built-in shaders like other meshes, but they don’t use the same ones!
To get a better understanding of what’s being drawn here, let’s use the getShaderCode() helper method again:
The vertex shader is different because it does not use any matrices and just outputs the position attribute as is. In fact, a ShaderPass has no matrices at all, which is why none are passed to the vertex shader as uniforms; this saves some memory on the GPU and avoids useless matrix computations on the CPU. The fragment shader samples the renderTexture, which holds the content of our main frame buffer. That’s why we get the same result as before.
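In short, the default ShaderPass fragment shader boils down to something like this (a simplified paraphrase, not the library’s exact code):

```wgsl
struct VSOutput {
  @builtin(position) position: vec4f,
  @location(0) uv: vec2f,
};

@fragment fn main(fsInput: VSOutput) -> @location(0) vec4f {
  // output the content of our main frame buffer, untouched
  return textureSample(renderTexture, defaultSampler, fsInput.uv);
}
```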
Tip: WebGPU does not have an exact equivalent of WebGL’s frame buffer objects. Instead, you use a render pass descriptor to explicitly tell onto which texture(s) you want to draw your meshes, and that’s what gpu-curtains uses internally for post-processing passes. This can be very helpful for things like multisampled anti-aliasing or drawing to multiple targets, and it explains why, even with an additional pass, we still get MSAA out of the box.
Next, create a post-processing fragment shader. Create a gallery-shader-pass.wgsl.js inside our /js/shaders directory with the following code:
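Something along these lines (the export name is a placeholder, and the exact deformation formula and the 0.1 strength are illustrative values for now, we’ll drive them with uniforms a bit later):

```js
// js/shaders/gallery-shader-pass.wgsl.js
export const galleryShaderPassFs = /* wgsl */ `
  struct VSOutput {
    @builtin(position) position: vec4f,
    @location(0) uv: vec2f,
  };

  @fragment fn main(fsInput: VSOutput) -> @location(0) vec4f {
    // 1. convert the UV from [0, 1] to [-1, 1]
    var uv = fsInput.uv * 2.0 - 1.0;

    // 2. full deformation at the vertical center (uv.y == 0), none at the edges (abs(uv.y) == 1)
    let PI = 3.14159265;
    let deformation = cos(uv.y * PI * 0.5) * 0.1;

    // 3. apply the deformation to the UV along the X axis
    uv.x = uv.x * (1.0 + deformation);

    // 4. remap back to [0, 1]
    uv = uv * 0.5 + 0.5;

    // 5. sample the scene render texture with our tweaked UV
    return textureSample(renderTexture, defaultSampler, uv);
  }
`
```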
We first convert our UV coordinates to the [-1, 1] range.
Then we compute a deformation based on the newly computed uv.y coordinate: when uv.y equals 0, we apply the full deformation; when abs(uv.y) equals 1, there’s no deformation at all. We use a cos() function to give the deformation a smooth, sinusoidal falloff.
We apply that deformation to the UV along the X‑axis.
We remap the UV coordinates to the [0, 1] range.
We use the tweaked UV to sample our renderTexture.
Tip: In WebGPU, UV coordinates range from 0 to 1 on both axes, with the UV at coordinate [0, 0] representing the top-left pixel and UV at coordinate [1, 1] representing the bottom-right pixel. This is different from WebGL, where the Y coordinate is upside down, ranging from 0 at the bottom to 1 at the top.
Ok, so the distortion is correctly applied, but the texture seems to be repeated on both sides. What’s up with that?
This happens because the defaultSampler we’re using has its address modes set to repeat on both axes. This means that each time the UVs are greater than 1 or less than 0, the texture will repeat endlessly.
We could add the following line to our shaders just before sampling the texture:
uv = clamp(uv, vec2(0.0), vec2(1.0));
But that’s more of a hack. Instead, what we can do is create a new GPUSampler with both address modes clamped to the edges. Each gpu-curtains mesh has a samplers property that accepts an array of Sampler class objects, which we can use to sample our textures.
To instantiate a new Sampler, we pass the renderer as the first parameter as usual, and the second parameter is an object where we define the GPUSampler properties:
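Which could look like this (clampSampler is our own name choice, and it becomes the binding name to use in the shader instead of defaultSampler when sampling renderTexture):

```js
import { Sampler, ShaderPass } from 'gpu-curtains'
import { galleryShaderPassFs } from '../shaders/gallery-shader-pass.wgsl.js'

// a sampler that clamps the UV to the edges instead of repeating the texture
const clampSampler = new Sampler(renderer, {
  label: 'Clamp sampler',
  name: 'clampSampler', // binding name used in the shader
  addressModeU: 'clamp-to-edge',
  addressModeV: 'clamp-to-edge',
})

// pass it to our shader pass via the samplers array
const shaderPass = new ShaderPass(renderer, {
  label: 'Gallery shader pass',
  shaders: {
    fragment: {
      code: galleryShaderPassFs,
    },
  },
  samplers: [clampSampler],
})
```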
What we’d like is for the deformation to depend on the scroll velocity. Besides, the deformation is currently a bit too much. We just need to add a couple of uniforms to control the maximum scroll strength, update the scroll velocity, and use them in the fragment shader.
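A possible setup, building on the previous parameters (the deformation binding name is our choice, and currentScrollVelocity is a hypothetical name standing for whatever velocity value you compute in your scroll handling). Those values should then be available in the fragment shader as deformation.maxStrength and deformation.scrollVelocity:

```js
const shaderPass = new ShaderPass(renderer, {
  shaders: { fragment: { code: galleryShaderPassFs } },
  samplers: [clampSampler],
  uniforms: {
    deformation: {
      struct: {
        // maximum horizontal distortion reached at full scroll speed
        maxStrength: { type: 'f32', value: 0.1 },
        // current scroll velocity, fed from JavaScript
        scrollVelocity: { type: 'f32', value: 0 },
      },
    },
  },
})

// then, wherever we track the scroll (event listener, lerped value in a render loop...):
shaderPass.uniforms.deformation.scrollVelocity.value = currentScrollVelocity // hypothetical variable
```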
The next animation fades in each plane while scaling out its texture when it enters the viewport.
Instead of using a ScrollTrigger to detect when the planes enter the viewport (which would not work correctly if we applied arbitrary scaling or rotation), we can rely on gpu-curtains’s frustum culling, which provides the callbacks onLeaveView() and onReEnterView() whenever a mesh leaves or enters the camera frustum.
Next, add an opacity uniform and set the Plane’s transparent property to true to handle blending correctly. No need for alpha blending hacks in the fragment shader.
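Putting it together, a rough sketch could look like this, using GSAP for the tween (implied by the ScrollTrigger mention). The global binding name and the timeline values are arbitrary, the DOM texture scale animation assumes the texture exposes an animatable scale, and in the fragment shader the output alpha simply gets multiplied by global.opacity:

```js
const plane = new Plane(renderer, element, {
  // ...shaders, texturesOptions and samplers from the previous steps
  transparent: true, // enable blending so the opacity uniform actually fades the plane
  uniforms: {
    global: {
      struct: {
        opacity: { type: 'f32', value: 0 }, // multiplied with the output alpha in the fragment shader
      },
    },
  },
})

// fade the plane in while scaling its texture back down
// (hypothetical: assumes the DOM texture exposes an animatable scale and that GSAP is available)
const revealTween = gsap
  .timeline({ paused: true })
  .to(plane.uniforms.global.opacity, { value: 1, duration: 1.5 })
  .fromTo(plane.domTextures[0].scale, { x: 1.25, y: 1.25 }, { x: 1, y: 1, duration: 1.5 }, 0)

plane
  .onReEnterView(() => revealTween.play())
  .onLeaveView(() => revealTween.progress(0).pause())

// when tearing the scene down, kill the tween before removing the plane
const destroyPlane = () => {
  revealTween.kill()
  plane.remove()
}
```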
We access the DOM texture using the domTextures[0] property.
We kill the animation before removing the plane to avoid memory leaks.
And that’s it! We’ve added DOM synced planes, handled their visibility in the camera frustum, applied post-processing effects, and animated them based on scroll velocity.