It is now time for our fourth and last article of this in-depth series. If you haven't checked out the previous ones, I encourage you to do so, as this one builds on a lot of previously covered notions.
In this last chapter, we’re going to finally put to good use the most awaited WebGPU feature: compute shaders. Be prepared, because it is going to be a long one!
What we are going to build is a particle system animated thanks to compute shaders. We will also learn how 3D objects can cast and receive shadows using a shadow map, and we’ll do all of that from scratch!
// 'js/Demo.js'
createScenes() {
this.createIntroScene()
this.createPlanesScene()
this.createGLTFScene()
this.createShadowedParticlesScene()
this.lenis.on('scroll', (e) => {
this.gpuCurtains.updateScrollValues({ x: 0, y: e.scroll })
this.scenes.forEach((scene) => scene.onScroll(e.velocity))
})
}
createIntroScene() {
const introScene = new IntroScene({
renderer: new GPUCameraRenderer({
deviceManager: this.deviceManager,
label: 'Intro scene renderer',
container: '#intro-scene-canvas',
pixelRatio: this.pixelRatio,
}),
})
this.scenes.push(introScene)
}
createPlanesScene() {
const planesScene = new PlanesScene({
renderer: new GPUCurtainsRenderer({
deviceManager: this.deviceManager,
label: 'Planes scene renderer',
container: '#planes-scene-canvas',
pixelRatio: this.pixelRatio,
}),
})
this.scenes.push(planesScene)
}
createGLTFScene() {
const gltfScene = new GLTFScene({
renderer: new GPUCurtainsRenderer({
deviceManager: this.deviceManager,
label: 'glTF scene renderer',
container: '#gltf-scene-canvas',
pixelRatio: this.pixelRatio,
}),
})
this.scenes.push(gltfScene)
}
createShadowedParticlesScene() {
const shadowedParticlesScene = new ShadowedParticlesScene({
renderer: new GPUCameraRenderer({
deviceManager: this.deviceManager,
label: 'Shadowed particles scene renderer',
container: '#shadowed-particles-scene-canvas',
pixelRatio: this.pixelRatio,
}),
})
this.scenes.push(shadowedParticlesScene)
}
We’ll be using a GPUCameraRenderer because we won’t sync anything with the DOM this time.
Next, before we actually write any more code, let’s talk a bit about particles in WebGPU. If you’ve already worked with particles in WebGL, you might expect to use some kind of points geometry primitive and scale them with a built-in point size input. Typically, their positions would be updated in a vertex shader.
However, even though WebGPU does implement a point-list primitive topology, there is no way to control their rendered size, and they will always be drawn as 1px-sized points. If we wanted to achieve the same behavior as in WebGL, we’d need to find an alternative approach.
Fortunately, the solution is straightforward: we’ll use instanced billboarded quads instead. This means we will draw a set of 1×1 plane geometries in a single draw call. Billboarding refers to a technique used in the vertex shader to ensure that these planes always face the camera, effectively behaving like particles.
Let’s take a look at how this works:
// 'js/shadowed-particles-scene/ShadowedParticlesScene.js'
setupWebGPU() {
this.createParticles()
}
destroyWebGPU() {
this.particlesSystem?.remove()
}
createParticles() {
const geometry = new PlaneGeometry({
instancesCount: this.nbInstances,
})
this.particlesSystem = new Mesh(this.renderer, {
label: 'Shadowed particles system',
geometry,
frustumCulling: false,
})
// Since our camera is far away, let's scale our mesh for better visibility
this.particlesSystem.scale.set(25, 25, 1)
}
Here, we create a Mesh using a PlaneGeometry. By setting the instancesCount parameter, we efficiently render 100,000 planes in a single draw call!
Let’s see how it goes:
Hmm… 100,000 planes, are you sure?
Yes! It’s just that they are all rendered at the exact same position: (0, 0, 0).
Alright, now let’s try to assign each instance a random position based on our radius inside a vertex shader.
First, let’s add a uniform:
// 'js/shadowed-particles-scene/ShadowedParticlesScene.js'
createParticles() {
const geometry = new PlaneGeometry({
instancesCount: this.nbInstances,
})
this.particlesSystem = new Mesh(this.renderer, {
label: 'Shadowed particles system',
geometry,
frustumCulling: false,
uniforms: {
params: {
struct: {
radius: {
type: 'f32',
value: this.radius * 10, // Space them out a bit
},
},
},
},
})
}
Now, create a shadowed-particles.wgsl.js file inside the /js/shaders folder and paste this vertex shader:
// 'js/shaders/shadowed-particles.wgsl.js'
export const shadowedParticlesVs = /* wgsl */ `
struct VSOutput {
@builtin(position) position: vec4f,
@location(0) uv: vec2f,
@location(1) normal: vec3f,
};
// https://gist.github.com/munrocket/236ed5ba7e409b8bdf1ff6eca5dcdc39
// On generating random numbers, with help of y= [(a+x)sin(bx)] mod 1", W.J.J. Rey, 22nd European Meeting of Statisticians 1998
fn rand11(n: f32) -> f32 { return fract(sin(n) * 43758.5453123); }
@vertex fn main(
attributes: Attributes,
) -> VSOutput {
var vsOutput : VSOutput;
let instanceIndex: f32 = f32(attributes.instanceIndex);
const PI: f32 = 3.14159265359;
var position: vec3f;
// random radius in the [0, params.radius] range
let radius: f32 = rand11(cos(instanceIndex)) * params.radius;
let phi: f32 = (rand11(sin(instanceIndex)) - 0.5) * PI;
let theta: f32 = rand11(sin(cos(instanceIndex) * PI)) * PI * 2;
position.x = radius * cos(theta) * cos(phi);
position.y = radius * sin(phi);
position.z = radius * sin(theta) * cos(phi);
// billboarding
var mvPosition: vec4f = matrices.modelView * vec4(position, 1.0);
mvPosition += vec4(attributes.position, 0.0);
vsOutput.position = camera.projection * mvPosition;
vsOutput.uv = attributes.uv;
// normals in view space to follow billboarding
vsOutput.normal = getViewNormal(attributes.normal);
return vsOutput;
}
`
Here’s what it does:
Cast our instance index (ranging from 0 to 99,999), which gpu-curtains internally adds to the attributes, as a float.
Compute a radius and two angles randomly based on the instance index.
Use them to calculate a position inside a sphere of our given params.radius.
Billboarding: apply the modelView matrix to our computed position, then add the position attribute (the plane’s vertex positions), then multiply by the camera projection matrix.
Now let’s add this back to our createParticles() method. To ensure the billboarding works, we’ll temporarily make our camera rotate around a pivot. We’ll remove that later:
// 'js/shadowed-particles-scene/ShadowedParticlesScene.js'
createParticles() {
const geometry = new PlaneGeometry({
instancesCount: this.nbInstances,
})
this.particlesSystem = new Mesh(this.renderer, {
label: 'Shadowed particles system',
geometry,
frustumCulling: false,
shaders: {
vertex: {
code: shadowedParticlesVs,
},
},
uniforms: {
params: {
struct: {
radius: {
type: 'f32',
value: this.radius * 10, // space them a bit
},
},
},
},
})
// Just to check the billboarding is actually working
this.cameraPivot = new Object3D()
this.cameraPivot.parent = this.renderer.scene
this.renderer.camera.position.z = this.radius * 15
this.renderer.camera.parent = this.cameraPivot
}
onRender() {
if (this.cameraPivot) {
this.cameraPivot.rotation.y += 0.01
}
}
This works!
Cool, so now we just have to animate the particles in the vertex shader and we’ll be done with that part, right?
Well, no. Using the vertex shader to update the particles' positions would work, but it is far from being the most efficient solution: the work would be redone for every vertex of every instance, each frame, and we couldn't easily persist values such as velocities between frames. Compute shaders are a much better fit, since they run the simulation once per particle across the GPU's parallel threads and write the results to buffers we can reuse.
Before writing any code, let’s take a step back and analyze what we’re going to achieve and how we’ll do it.
We want to build a particle system using curl noise. To update the particles’ positions, we’ll need to compute a new velocity vector each frame. We will also need to track each particle’s lifetime and reset their positions once they’ve reached the end of their lives.
To accomplish this, we’ll need two compute passes:
The first pass will run once to set the particles’ initial positions.
The second pass will run each frame to update the particles.
Additionally, we’ll need two buffers:
One containing the initial positions and velocity values.
The second responsible for storing the updated values.
We are going to create the compute pass responsible for setting the initial positions first. It will essentially perform the same function as our initial vertex shader.
// 'js/shadowed-particles-scene/ShadowedParticlesScene.js'
setupWebGPU() {
this.createComputePasses()
this.createParticles()
}
destroyWebGPU() {
this.particlesSystem?.remove()
}
async createComputePasses() {
this.initComputeBuffer = new BufferBinding({
label: 'Compute particles init buffer',
name: 'initParticles',
bindingType: 'storage',
access: 'read_write', // we want a readable AND writable buffer!
usage: ['vertex'], // we're going to use this buffer as a vertex buffer along default usages
visibility: ['compute'],
struct: {
position: {
type: 'array<vec4f>',
value: new Float32Array(this.nbInstances * 4),
},
velocity: {
type: 'array<vec4f>',
value: new Float32Array(this.nbInstances * 4),
},
},
})
// update buffer, cloned from init one
this.updateComputeBuffer = this.initComputeBuffer.clone({
...this.initComputeBuffer.options,
label: 'Compute particles update buffer',
name: 'particles',
})
this.computeBindGroup = new BindGroup(this.renderer, {
label: 'Compute particles bind group',
bindings: [this.initComputeBuffer, this.updateComputeBuffer],
uniforms: {
params: {
visibility: ['compute'],
struct: {
radius: {
type: 'f32',
value: this.radius * 10, // * 10 for temporary debugging purpose
},
maxLife: {
type: 'f32',
value: 60, // in frames
},
},
},
},
})
}
As stated above, we will need to create two buffers using the BufferBinding class.
You can already recognize the parameters from our previous uniform declarations; these are almost the same. What’s important here is that we’ll be using storage buffers instead of uniform buffers by setting the bindingType to 'storage', because we’ll need to have 'read_write' access, and uniforms don’t allow that. Also, note the usage flag set to ['vertex'] because we’ll also use these as our instance vertex buffer.
The last important thing about these buffers is that we’re going to use array<vec4f> because we’ll handle the life values in the respective position and velocity W vector component.
Next, we create a BindGroup using those two buffer bindings plus a couple of uniforms. This bind group will be shared by both compute passes.
Tip: When you pass uniforms to a mesh parameters, the mesh material internally creates a BindGroup and the corresponding BufferBinding.
We now have everything in place to write our first compute shader!
Create a compute-particles.wgsl.js file inside the /js/shaders/ folder:
// 'js/shaders/compute-particles.wgsl.js'
export const computeParticles = /* wgsl */ `
// https://gist.github.com/munrocket/236ed5ba7e409b8bdf1ff6eca5dcdc39
// On generating random numbers, with help of y= [(a+x)sin(bx)] mod 1", W.J.J. Rey, 22nd European Meeting of Statisticians 1998
fn rand11(n: f32) -> f32 { return fract(sin(n) * 43758.5453123); }
fn getInitLife(index: f32) -> f32 {
return round(rand11(cos(index)) * params.maxLife * 0.95) + params.maxLife * 0.05;
}
const PI: f32 = 3.14159265359;
// set initial positions and data
@compute @workgroup_size(256) fn setInitData(
@builtin(global_invocation_id) GlobalInvocationID: vec3u
) {
let index = GlobalInvocationID.x;
if(index < arrayLength(&particles)) {
let fIndex: f32 = f32(index);
// calculate a random particle init life, in number of frames
var initLife: f32 = getInitLife(fIndex);
initParticles[index].position.w = initLife;
particles[index].position.w = initLife;
// now the positions
// calculate an initial random position inside a sphere of a defined radius
var position: vec3f;
// random radius in the [0.5 * params.radius, params.radius] range
let radius: f32 = (0.5 + rand11(cos(fIndex)) * 0.5) * params.radius;
let phi: f32 = (rand11(sin(fIndex)) - 0.5) * PI;
let theta: f32 = rand11(sin(cos(fIndex) * PI)) * PI * 2;
position.x = radius * cos(theta) * cos(phi);
position.y = radius * sin(phi);
position.z = radius * sin(theta) * cos(phi);
// initial velocity
var velocity: vec3f = vec3(0.0);
particles[index].velocity = vec4(velocity, initLife);
// write positions
particles[index].position.x = position.x;
particles[index].position.y = position.y;
particles[index].position.z = position.z;
initParticles[index].position.x = position.x;
initParticles[index].position.y = position.y;
initParticles[index].position.z = position.z;
}
}
`
As you can see, this is very similar to our previous vertex shader.
We’re using the global invocation id instead of the instance index to compute the positions. We’re also setting a random initial life value and an empty velocity for now.
We now have everything we need to create the first compute pass:
// 'js/shadowed-particles-scene/ShadowedParticlesScene.js'
async createComputePasses() {
this.initComputeBuffer = new BufferBinding({
label: 'Compute particles init buffer',
name: 'initParticles',
bindingType: 'storage',
access: 'read_write', // we want a readable AND writable buffer!
usage: ['vertex'], // we're going to use this buffer as a vertex buffer along default usages
visibility: ['compute'],
struct: {
position: {
type: 'array<vec4f>',
value: new Float32Array(this.nbInstances * 4),
},
velocity: {
type: 'array<vec4f>',
value: new Float32Array(this.nbInstances * 4),
},
},
})
// update buffer, cloned from init one
this.updateComputeBuffer = this.initComputeBuffer.clone({
...this.initComputeBuffer.options,
label: 'Compute particles update buffer',
name: 'particles',
})
this.computeBindGroup = new BindGroup(this.renderer, {
label: 'Compute particles bind group',
bindings: [this.initComputeBuffer, this.updateComputeBuffer],
uniforms: {
params: {
visibility: ['compute'],
struct: {
radius: {
type: 'f32',
value: this.radius * 10, // * 10 for temporary debugging purpose
},
maxLife: {
type: 'f32',
value: 60, // in frames
},
},
},
},
})
const computeInitDataPass = new ComputePass(this.renderer, {
label: 'Compute initial data',
shaders: {
compute: {
code: computeParticles,
entryPoint: 'setInitData',
},
},
dispatchSize: Math.ceil(this.nbInstances / 256),
bindGroups: [this.computeBindGroup],
autoRender: false, // we don't want to run this pass each frame
})
// we should wait for pipeline compilation!
await computeInitDataPass.material.compileMaterial()
// now run the compute pass just once
this.renderer.renderOnce([computeInitDataPass])
}
We instantiate a new ComputePass using our renderer as the first parameter and some options as the second parameter.
We’re setting a custom entryPoint for our compute shader because we are going to put both compute shaders’ code in the same chunk so we can reuse the same functions.
The dispatchSize is equal to our number of instances divided by 256 and rounded up, since 256 is the workgroup size we’ve set in our shader (it’s also the default maximum workgroup size along the X and Y dimensions). This means we’ll dispatch a bit fewer than 400 workgroups (100,000 / 256 ≈ 391), each running 256 threads in parallel. This is much, much faster than our previous vertex shader example!
Last but not least, we’ve set the autoRender parameter to false, meaning that this compute pass will not be executed automatically. We need to manually render it, and that’s what we do (but only once) after its material has been compiled.
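To make the dispatch math concrete, here is the arithmetic spelled out (plain JavaScript, just for illustration):
// dispatch size arithmetic, spelled out
const nbInstances = 100000
const workgroupSize = 256
const dispatchSize = Math.ceil(nbInstances / workgroupSize) // 391 workgroups
// 391 workgroups * 256 invocations = 100,096 invocations, slightly more than our 100,000 particles,
// which is why the shader guards against out-of-bounds indices with 'if(index < arrayLength(&particles))'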
We still need to tell our geometry to use the updateComputeBuffer as its instance vertex buffer:
// 'js/shadowed-particles-scene/ShadowedParticlesScene.js'
createParticles() {
const geometry = new PlaneGeometry({
instancesCount: this.nbInstances,
vertexBuffers: [{
// use instancing
stepMode: 'instance',
name: 'instanceAttributes',
buffer: this.updateComputeBuffer.buffer, // pass the compute buffer right away
attributes: [{
name: 'particlePosition',
type: 'vec4f',
bufferFormat: 'float32x4',
size: 4,
},
{
name: 'particleVelocity',
type: 'vec4f',
bufferFormat: 'float32x4',
size: 4,
},
],
}, ],
})
this.particlesSystem = new Mesh(this.renderer, {
label: 'Shadowed particles system',
geometry,
frustumCulling: false,
shaders: {
vertex: {
code: shadowedParticlesVs,
},
},
})
// just to check the billboarding is actually working
this.cameraPivot = new Object3D()
this.cameraPivot.parent = this.renderer.scene
this.renderer.camera.position.z = this.radius * 15
this.renderer.camera.parent = this.cameraPivot
}
The first thing to do is tell our PlaneGeometry to use the updateComputeBuffer’s underlying GPU buffer, and describe how it is structured.
By setting the stepMode parameter to 'instance', WebGPU lets us directly access attributes.particlePosition in our vertex shader, without having to use the instanceIndex anymore.
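Here is, as a quick sketch, how the billboarding part of our vertex shader changes (the full shader is shown later in this article): the random position computation goes away, and we read the instanced attribute written by the compute pass instead.
// 'js/shaders/shadowed-particles.wgsl.js' (vertex shader excerpt, a sketch)
// billboarding, now using the per-instance position coming from the compute buffer
var mvPosition: vec4f = matrices.modelView * vec4(attributes.particlePosition.xyz, 1.0);
mvPosition += vec4(attributes.position, 0.0);
vsOutput.position = camera.projection * mvPosition;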
As mentioned earlier, we are going to animate the particles with curl noise. Curl noise is a type of procedural noise used primarily in computer graphics to create fluid-like, turbulent effects.
We’re going to add a curlNoise function as a chunk. You don’t have to actually look at that code, just put it inside a curl-noise.wgsl.js file inside our /js/shaders/chunks directory:
// 'js/shaders/chunks/curl-noise.wgsl.js'
export const curlNoise = /* wgsl */ `
// some of the utility functions here were taken from
// https://gist.github.com/munrocket/236ed5ba7e409b8bdf1ff6eca5dcdc39
// snoise4 and curlNoise have been ported from a previous WebGL experiment
// can't remember where I found them in the first place
// if you know it, please feel free to contact me to add due credit
fn mod289_4(x: vec4f) -> vec4f {
return x - floor(x * (1.0 / 289.0)) * 289.0;
}
fn mod289_3(x: vec3f) -> vec3f {
return x - floor(x * (1.0 / 289.0)) * 289.0;
}
fn mod289_2(x: vec2f) -> vec2f {
return x - floor(x * (1.0 / 289.0)) * 289.0;
}
fn mod289(x: f32) -> f32 {
return x - floor(x * (1.0 / 289.0)) * 289.0;
}
fn permute4(x: vec4f) -> vec4f {
return mod289_4(((x*34.0)+1.0)*x);
}
fn permute3(x: vec3f) -> vec3f {
return mod289_3(((x*34.0)+1.0)*x);
}
fn permute(x: f32) -> f32 {
return mod289(((x*34.0)+1.0)*x);
}
fn taylorInvSqrt4(r: vec4f) -> vec4f {
return 1.79284291400159 - 0.85373472095314 * r;
}
fn taylorInvSqrt(r: f32) -> f32 {
return 1.79284291400159 - 0.85373472095314 * r;
}
// note: the snoise4() 4D simplex noise function called below also needs to be defined in this chunk; it is omitted here for brevity
fn curlNoise(p: vec3f, noiseTime: f32, persistence: f32) -> vec3f {
var xNoisePotentialDerivatives: vec4f = vec4(0.0);
var yNoisePotentialDerivatives: vec4f = vec4(0.0);
var zNoisePotentialDerivatives: vec4f = vec4(0.0);
for (var i: i32 = 0; i < 3; i++) {
let twoPowI: f32 = pow(2.0, f32(i));
let scale: f32 = 0.5 * twoPowI * pow(persistence, f32(i));
xNoisePotentialDerivatives += snoise4(vec4(p * twoPowI, noiseTime)) * scale;
yNoisePotentialDerivatives += snoise4(vec4((p + vec3(123.4, 129845.6, -1239.1)) * twoPowI, noiseTime)) * scale;
zNoisePotentialDerivatives += snoise4(vec4((p + vec3(-9519.0, 9051.0, -123.0)) * twoPowI, noiseTime)) * scale;
}
return vec3(
zNoisePotentialDerivatives[1] - yNoisePotentialDerivatives[2],
xNoisePotentialDerivatives[2] - zNoisePotentialDerivatives[0],
yNoisePotentialDerivatives[0] - xNoisePotentialDerivatives[1]
);
}
`
Adding the Curl Noise Chunk to Our Compute Shader
We now need to include the curlNoise function in our compute shader and use it to generate an initial velocity for our particles.
// 'js/shaders/compute-particles.wgsl.js'
import { curlNoise } from './chunks/curl-noise.wgsl'
export const computeParticles = /* wgsl */ `
${curlNoise}
// https://gist.github.com/munrocket/236ed5ba7e409b8bdf1ff6eca5dcdc39
// On generating random numbers, with help of y= [(a+x)sin(bx)] mod 1", W.J.J. Rey, 22nd European Meeting of Statisticians 1998
fn rand11(n: f32) -> f32 { return fract(sin(n) * 43758.5453123); }
fn getInitLife(index: f32) -> f32 {
return round(rand11(cos(index)) * params.maxLife * 0.95) + params.maxLife * 0.05;
}
const PI: f32 = 3.14159265359;
// set initial positions and data
@compute @workgroup_size(256) fn setInitData(
@builtin(global_invocation_id) GlobalInvocationID: vec3u
) {
let index = GlobalInvocationID.x;
if(index < arrayLength(&particles)) {
let fIndex: f32 = f32(index);
// calculate a random particle init life, in number of frames
var initLife: f32 = getInitLife(fIndex);
initParticles[index].position.w = initLife;
particles[index].position.w = initLife;
// now the positions
// calculate an initial random position inside a sphere of a defined radius
var position: vec3f;
// random radius in the [0.5 * params.radius, params.radius] range
let radius: f32 = (0.5 + rand11(cos(fIndex)) * 0.5) * params.radius;
let phi: f32 = (rand11(sin(fIndex)) - 0.5) * PI;
let theta: f32 = rand11(sin(cos(fIndex) * PI)) * PI * 2;
position.x = radius * cos(theta) * cos(phi);
position.y = radius * sin(phi);
position.z = radius * sin(theta) * cos(phi);
// calculate initial velocity
var velocity: vec3f = curlNoise(position * 0.02, 0.0, 0.05);
velocity *= 10.0; // temporary
particles[index].velocity = vec4(velocity, initLife);
// apply to position
position += velocity;
// write positions
particles[index].position.x = position.x;
particles[index].position.y = position.y;
particles[index].position.z = position.z;
initParticles[index].position.x = position.x;
initParticles[index].position.y = position.y;
initParticles[index].position.z = position.z;
}
}
`
Cleaning Up the JavaScript Code and Observing the Result
We can now clean up the temporary debugging bits (the rotating camera pivot and the various temporary multipliers) and create the second, per-frame compute pass that will actually animate the particles, reusing the same bind group and shader chunk as the init pass.
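Here is a sketch of what creating that update pass could look like, right after the init pass in createComputePasses(). It mirrors the init pass parameters; the updateData entry point itself is shown a bit later, once we add the mouse interaction.
// 'js/shadowed-particles-scene/ShadowedParticlesScene.js' (in createComputePasses(), a sketch)
this.computePass = new ComputePass(this.renderer, {
  label: 'Compute particles update',
  shaders: {
    compute: {
      code: computeParticles,
      entryPoint: 'updateData', // the per-frame update entry point
    },
  },
  dispatchSize: Math.ceil(this.nbInstances / 256),
  bindGroups: [this.computeBindGroup], // share the same bind group as the init pass
  // autoRender defaults to true, so this pass runs every frame
})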
It is still difficult to visually understand what’s happening because we haven’t added any proper shading yet. Besides, a bunch of particles stuck in the middle of our scene, even animated with a curl noise, is quite boring. This part is all about improving those two points.
In our fragment shader, we’ll need to access the current particle velocity, so we’ll need to pass it from the vertex to the fragment shader. Everything else is straightforward:
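Here is a minimal sketch of what the fragment shader could look like at this stage. It assumes the two color uniforms, shading.lightColor and shading.darkColor, that we add to the mesh (you’ll see them in the final mesh code later), and a velocity varying written by the vertex shader with vsOutput.velocity = attributes.particleVelocity;.
// 'js/shaders/shadowed-particles.wgsl.js' (a sketch of the fragment shader at this stage)
export const shadowedParticlesFs = /* wgsl */ `
  struct VSOutput {
    @builtin(position) position: vec4f,
    @location(0) uv: vec2f,
    @location(1) normal: vec3f,
    @location(2) velocity: vec4f,
  };

  @fragment fn main(fsInput: VSOutput) -> @location(0) vec4f {
    // clamp the velocity length and use it to blend between our two colors
    let velocity = clamp(length(fsInput.velocity.xyz), 0.0, 1.0);
    let color: vec3f = mix(shading.darkColor, shading.lightColor, vec3(velocity));
    return vec4(color, 1.0);
  }
`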
Next, we want to change the shape of the particles, which is also super easy: we just have to discard fragments in our fragment shader based on their distance to the center of the quad.
The only thing to note here is that we’re going to put that part in a separate chunk, because we’ll need to use it elsewhere later.
Create a discard-particle-fragment.wgsl.js file in your /js/shaders/chunks/ folder and put this inside:
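As a sketch, the chunk could be as simple as this. It is meant to be inlined at the top of a fragment shader main function, so it assumes a fsInput.uv varying is available:
// 'js/shaders/chunks/discard-particle-fragment.wgsl.js'
export const discardParticleFragment = /* wgsl */ `
  // discard every fragment outside of a circle of radius 0.5 centered on the quad
  if(length(fsInput.uv - vec2(0.5)) > 0.5) {
    discard;
  }
`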
Now our particles have circular shapes. Even if that’s not easily visible because of the particles’ size and speed, this is much better. One way to check it is to move the camera a bit closer, and there you have a convincing result:
// 'js/shadowed-particles-scene/ShadowedParticlesScene.js'
init() {
this.section = document.querySelector('#shadowed-particles-scene')
// particle system radius
this.radius = 50
// just so we can better visualize the shape of the particles
this.renderer.camera.position.z = 150
super.init()
}
Now the last thing we can do to improve the look at this point is to tweak the particle size.
We’re going to add a new uniform first to control the maximum size of the particles:
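One way to do it, as a sketch, is to extend the params uniform struct we already pass to the mesh (0.7 is the value used in the final demo):
// 'js/shadowed-particles-scene/ShadowedParticlesScene.js' (mesh parameters excerpt, a sketch)
uniforms: {
  params: {
    struct: {
      size: {
        type: 'f32',
        value: 0.7, // maximum particle size
      },
    },
  },
},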
Next, we’re going to scale our particles based on their current life and initial life. We’ll put it into a chunk, similar to the discard chunk, so it can be reused elsewhere.
Create a get-particle-size.wgsl.js file inside the /js/shaders/chunks/ folder:
// 'js/shaders/chunks/get-particle-size.wgsl.js'
export const getParticleSize = /* wgsl */ `
fn getParticleSize(currentLife: f32, initialLife: f32) -> f32 {
// scale from 0 -> 1 when life begins
let startSize = smoothstep(0.0, 0.25, 1.0 - currentLife / initialLife);
// scale from 1 -> 0 when life ends
let endSize = smoothstep(0.0, 0.25, currentLife / initialLife);
return startSize * endSize * params.size;
}
`
Finally, we need to apply the scaling function inside our vertex shader:
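Here is the relevant excerpt, as a sketch (the chunk is interpolated into the shader string with ${getParticleSize}):
// 'js/shaders/shadowed-particles.wgsl.js' (vertex shader excerpt, a sketch)
// current life is stored in particlePosition.w, initial life in particleVelocity.w
let size: f32 = getParticleSize(attributes.particlePosition.w, attributes.particleVelocity.w);
// scale the billboarded quad by the computed size
var mvPosition: vec4f = matrices.modelView * vec4(attributes.particlePosition.xyz, 1.0);
mvPosition += vec4(attributes.position, 0.0) * size;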
If you recall, we used the W components of our particlePosition and particleVelocity vectors to store the current and initial life values of the particles. That’s what we’re using here to compute the scale in/out values.
Alright, that’s much better now. If we check with the camera still positioned at 150 along the Z axis:
It’s time for us to add the interactive part. We’ll make the particles follow our mouse, and just like that, the scene will become way cooler!
We’re going to listen to the mouse and pointer events and compute a value along the X and Y axes to send to our updateData compute shader.
We need these values to be normalized first, so that when the pointer is in the top-left corner, its coordinates would be (-1, 1), and when it’s in the bottom-right corner, its coordinates would be (1, -1).
The Y coordinate is inverted because in our world space, the Y-axis is oriented toward the top of the screen.
We’ll then need to clamp these values between (-1, -1) and (1, 1) because our mouse might leave our current section by scrolling the page up.
Finally, we’ll convert those normalized coordinates into world space. To do that, we’ll use the camera getVisibleSizeAtDepth() method again.
As always with these things, we’re not going to send the mouse position as is, but actually lerp it before sending it to our shader for a more visually pleasing effect.
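Here is a sketch of the pointer handling described above. The property names (this.mouse, this.visibleSize, this.boundingRect) are illustrative, and we’re assuming a mouse vec2f uniform has been added to the compute params struct; the lerped world-space value is what gets uploaded to params.mouse each frame.
// 'js/shadowed-particles-scene/ShadowedParticlesScene.js' (a sketch)
onPointerMove(e) {
  const { clientX, clientY } = e.targetTouches?.length ? e.targetTouches[0] : e

  // normalize to the [-1, 1] range (Y flipped to match world space) and clamp
  this.mouse.current.x = Math.max(-1, Math.min(1, (clientX / this.boundingRect.width) * 2 - 1))
  this.mouse.current.y = Math.max(-1, Math.min(1, -((clientY / this.boundingRect.height) * 2 - 1)))
}

onRender() {
  // convert to world space using the camera visible size at the particles' depth
  const targetX = this.mouse.current.x * this.visibleSize.width * 0.5
  const targetY = this.mouse.current.y * this.visibleSize.height * 0.5

  // lerp toward the target for a smoother motion,
  // then upload this value to the compute pass 'params.mouse' uniform
  this.mouse.lerped.x += (targetX - this.mouse.lerped.x) * 0.5
  this.mouse.lerped.y += (targetY - this.mouse.lerped.y) * 0.5
}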
And we’re done, aren’t we? Kind of. It will work as long as you don’t scroll the page, because our clientY value is relative to the viewport, not our container. We’ll have to account for our renderer bounding rectangle top value and initial scroll value, and do so each time the window is resized:
Note that we’ll also be keeping track of the camera visible sizes in there so we don’t have to compute it again on each pointer move event.
Tip: The renderer.onResize callback is called after the renderer bounding rectangle has been computed and the camera has been resized but before any rendered objects have been resized, which is perfect for our use case.
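A sketch of what that could look like, assuming the renderer exposes the onResize callback mentioned in the tip and reusing the illustrative properties from the previous snippet:
// 'js/shadowed-particles-scene/ShadowedParticlesScene.js' (a sketch)
this.renderer.onResize(() => {
  // keep track of our renderer bounding rectangle:
  // its top value, combined with the current scroll value, turns clientY into a container-relative coordinate in onPointerMove
  this.boundingRect = this.renderer.boundingRect
  // also cache the camera visible size so we don't recompute it on every pointer move
  this.visibleSize = this.renderer.camera.getVisibleSizeAtDepth()
})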
// 'js/shaders/compute-particles.wgsl.js'
export const computeParticles = /* wgsl */ `
${curlNoise}
// https://gist.github.com/munrocket/236ed5ba7e409b8bdf1ff6eca5dcdc39
// On generating random numbers, with help of y= [(a+x)sin(bx)] mod 1", W.J.J. Rey, 22nd European Meeting of Statisticians 1998
fn rand11(n: f32) -> f32 { return fract(sin(n) * 43758.5453123); }
fn getInitLife(index: f32) -> f32 {
return round(rand11(cos(index)) * params.maxLife * 0.95) + params.maxLife * 0.05;
}
const PI: f32 = 3.14159265359;
@compute @workgroup_size(256) fn updateData(
@builtin(global_invocation_id) GlobalInvocationID: vec3u
) {
let index = GlobalInvocationID.x;
if(index < arrayLength(&particles)) {
let fIndex: f32 = f32(index);
var vPos: vec3f = particles[index].position.xyz;
var life: f32 = particles[index].position.w;
life -= 1.0;
var vVel: vec3f = particles[index].velocity.xyz;
vVel += curlNoise(vPos * 0.02, 0.0, 0.05);
vVel *= 0.4;
particles[index].velocity = vec4(vVel, particles[index].velocity.w);
let mouse = vec3(params.mouse, 0);
if (life <= 0.0) {
// respawn particle to original position + mouse position
let newPosition = initParticles[index].position.xyz + mouse;
// reset init life to random value
initParticles[index].position.w = getInitLife(fIndex * cos(fIndex));
particles[index].position = vec4(
newPosition,
initParticles[index].position.w
);
particles[index].velocity.w = initParticles[index].position.w;
} else {
// apply new curl noise position and life
// accounting for mouse position
let delta: vec3f = mouse - vPos;
let friction: f32 = 1000.0;
vPos += delta * 1.0 / friction;
vPos += vVel;
particles[index].position = vec4(vPos, life);
}
}
}
`
Nothing has changed inside setInitData, so you can skip that.
Inside updateData, we’re getting the input mouse position. If the particle life is over, we make it respawn at its original position with our mouse position added. If not, we make it slowly drift toward the new mouse position.
That’s way more playful now!
The result is kind of satisfying now, and it already was a long ride, but we’re far from being done yet.
There’s still quite a piece of work ahead of us: implementing the shadows.
To render shadows in real-time, we are going to use a technique called shadow mapping.
The idea is that for each light that needs to cast shadows, we will render the objects that need to cast shadows onto a depth texture in what we call a depth pass. The objects will be rendered from each light’s perspective, meaning we’ll have to compute both view and projection matrices for each light. Depending on the type of light, we’d need an orthographic or a perspective projection matrix.
Once we have rendered all our depth passes, we then render our scene as usual and use each resulting depth texture to compute each shadow-receiving object’s fragment visibility and accordingly apply shadows.
Simplifying the Process
This might sound cumbersome — if your scene contains multiple shadow-casting light sources, it certainly can be. However, in our case, we will only use:
One light source with an orthographic projection matrix
One shadow-casting object (the particles)
This makes the approach both performant and straightforward to implement.
We are going to create a new ShadowMap class. Don’t worry, we’ll build each of its methods step by step so you can follow what’s going on. Create a ShadowMap.js file inside our /js/shadowed-particles-scene/ folder:
// 'js/shadowed-particles-scene/ShadowMap.js'
import { BufferBinding, Mat4, RenderMaterial, RenderTarget, Sampler, Texture, Vec3 } from 'gpu-curtains'
export class ShadowMap {
constructor({
renderer,
depthTextureSize = 1024,
depthTextureFormat = 'depth24plus',
light = {
position: new Vec3(renderer?.camera.position.z || 1),
target: new Vec3(),
up: new Vec3(0, 1, 0),
orthographicCamera: {
left: renderer?.camera.position.z * -0.5,
right: renderer?.camera.position.z * 0.5,
top: renderer?.camera.position.z * 0.5,
bottom: renderer?.camera.position.z * -0.5,
near: 0.1,
far: renderer?.camera.position.z * 5,
},
},
}) {
this.renderer = renderer
this.depthTextureSize = depthTextureSize
this.depthTextureFormat = depthTextureFormat
// mandatory so we could use textureSampleCompare()
// if we'd like to use MSAA, we would have to use an additional pass
// to manually resolve the depth texture before using it
this.sampleCount = 1
this.light = light
// keep track of the meshes that will cast shadows
this.meshes = []
this.createLightSource()
this.createShadowMap()
this.setDepthPass()
}
createLightSource() {
// create the light view matrix
// equivalent to Mat4().lookAt(this.light.position, this.light.target, this.light.up).invert() but faster
this.light.viewMatrix = new Mat4().makeView(this.light.position, this.light.target, this.light.up)
// create the light projection matrix
this.light.projectionMatrix = new Mat4().makeOrthographic(this.light.orthographicCamera)
// create one uniform buffer that will be used by all the shadow casting meshes
this.lightProjectionBinding = new BufferBinding({
label: 'Light',
name: 'light',
bindingType: 'uniform',
struct: {
viewMatrix: {
type: 'mat4x4f',
value: this.light.viewMatrix,
},
projectionMatrix: {
type: 'mat4x4f',
value: this.light.projectionMatrix,
},
position: {
type: 'vec3f',
value: this.light.position,
},
},
})
}
createShadowMap() {}
setDepthPass() {}
destroy() {}
}
To create our shadow map, we’ll need to define a couple of things: the light matrices and the depth pass settings.
Here, we’re starting with the light setup. To compute the matrices, we need a few things like its position, target, up vector, and the orthographic projection settings. Once we’ve computed the matrices, we create a BufferBinding that we’ll use when rendering to the depth pass.
Next, the depth pass:
// 'js/shadowed-particles-scene/ShadowMap.js'
createShadowMap() {
// create the depth texture
this.depthTexture = new Texture(this.renderer, {
label: 'Shadow map depth texture',
name: 'shadowMapDepthTexture',
type: 'depth',
format: this.depthTextureFormat,
sampleCount: this.sampleCount,
fixedSize: {
width: this.depthTextureSize,
height: this.depthTextureSize,
},
})
// create the render target
this.depthPassTarget = new RenderTarget(this.renderer, {
label: 'Depth pass render target',
useColorAttachments: false,
depthTexture: this.depthTexture,
sampleCount: this.sampleCount,
})
// create depth comparison sampler
// used to compute shadow receiving object visibility
this.depthComparisonSampler = new Sampler(this.renderer, {
label: 'Depth comparison sampler',
name: 'depthComparisonSampler',
// we do not want to repeat the shadows
addressModeU: 'clamp-to-edge',
addressModeV: 'clamp-to-edge',
compare: 'less',
type: 'comparison',
})
}
First, we create the depth texture using the Texture class. As always, we pass our renderer as the first argument and then a bunch of parameters:
We obviously set its type to 'depth'.
We use the format and sampleCount defined earlier.
The fixedSize parameter is important to indicate that this texture should not be resized whenever the renderer size changes.
Then we create a RenderTarget object. This creates a render pass descriptor that tells WebGPU onto which texture our shadow-casting objects will be rendered before issuing their draw calls. Let’s have a look at the parameters:
We set useColorAttachments to false to explicitly state that we’re not going to render to any color targets, but only to the depth texture.
Speaking of the depth texture, we pass it via the depthTexture parameter.
We once again set the sampleCount so that it matches the depth texture setting.
Finally, we create a new Sampler that we’ll use in the shadow-receiving objects’ fragment shader to compute the shadows.
Since we’ve added a couple of WebGPU resources here, we need to ensure they’ll be destroyed if needed:
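A minimal sketch of what that could look like, assuming both classes expose a destroy() method:
// 'js/shadowed-particles-scene/ShadowMap.js' (a sketch)
destroy() {
  this.depthPassTarget?.destroy()
  this.depthTexture?.destroy()
}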
Next up, we’ll write a method that will add a shadow-casting mesh to the meshes stack.
// 'js/shadowed-particles-scene/ShadowMap.js'
addShadowCastingMesh(mesh, parameters = {}) {
if (!parameters.shaders) {
const defaultDepthVs = /* wgsl */ `
@vertex fn main(
attributes: Attributes,
) -> @builtin(position) vec4f {
return light.projectionMatrix * light.viewMatrix * matrices.model * vec4(attributes.position, 1.0);
}
`
parameters.shaders = {
vertex: {
code: defaultDepthVs,
},
fragment: false, // we do not need to output to a fragment shader unless we do late Z writing
}
}
parameters = { ...mesh.material.options.rendering, ...parameters }
// explicitly set empty output targets
// we just want to write to the depth texture
parameters.targets = []
parameters.sampleCount = this.sampleCount
parameters.depthFormat = this.depthTextureFormat
if (parameters.bindings) {
parameters.bindings = [
this.lightProjectionBinding,
mesh.material.getBufferBindingByName('matrices'),
...parameters.bindings,
]
} else {
parameters.bindings = [this.lightProjectionBinding, mesh.material.getBufferBindingByName('matrices')]
}
mesh.userData.depthMaterial = new RenderMaterial(this.renderer, {
label: mesh.options.label + ' Depth render material',
...parameters,
})
// keep track of original material as well
mesh.userData.originalMaterial = mesh.material
this.meshes.push(mesh)
}
This method takes two parameters: first, the mesh that we’d like to add to our depth pass, then some optional material parameters to apply, such as custom depth shaders, additional bindings, and so on.
Here’s the detailed explanation:
First, if there aren’t any shaders defined, we use a default one. You can see we’re using the light matrices in the vertex shader, but also that we’re explicitly stating that we do not want to use a fragment shader. Writing to the depth texture is actually done by outputting to the built-in vertex shader position.
We then patch the material parameters: we explicitly tell it not to output to any color target and add the mesh’s matrices bindings as well as our lightProjectionBinding.
We create a new RenderMaterial using those parameters and add it to the mesh’s userData object.
We also add the original material to the userData object because we’ll swap those two materials when rendering the depth pass.
Do not forget to clean things up in the destroy() method as well. Next, we need a patchShadowReceivingParameters() method that the shadow-receiving meshes will use to get access to the shadow map resources.
That one is simple: we just add the depthTexture, depthComparisonSampler, and lightProjectionBinding to the mesh parameters and return them.
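A sketch of what such a method could look like, assuming the mesh parameters accept textures, samplers and bindings arrays as we’ve done before:
// 'js/shadowed-particles-scene/ShadowMap.js' (a sketch)
patchShadowReceivingParameters(parameters = {}) {
  // expose the shadow map depth texture and the comparison sampler to the mesh shaders
  parameters.textures = parameters.textures ? [...parameters.textures, this.depthTexture] : [this.depthTexture]
  parameters.samplers = parameters.samplers ? [...parameters.samplers, this.depthComparisonSampler] : [this.depthComparisonSampler]

  // and the light view/projection matrices uniform buffer
  parameters.bindings = parameters.bindings ? [this.lightProjectionBinding, ...parameters.bindings] : [this.lightProjectionBinding]

  return parameters
}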
We’re almost done. We still need to actually render our depth pass at some point.
// 'js/shadowed-particles-scene/ShadowMap.js'
setDepthPass() {
// add the depth pass (rendered each tick before our main scene)
this.depthPassTaskID = this.renderer.onBeforeRenderScene.add((commandEncoder) => {
if (!this.meshes.length) return
// assign depth material to meshes
this.meshes.forEach((mesh) => {
mesh.useMaterial(mesh.userData.depthMaterial)
})
// reset renderer current pipeline
this.renderer.pipelineManager.resetCurrentPipeline()
// begin depth pass
const depthPass = commandEncoder.beginRenderPass(this.depthPassTarget.renderPass.descriptor)
// render meshes with their depth material
this.meshes.forEach((mesh) => {
if (mesh.ready) mesh.render(depthPass)
})
depthPass.end()
// reset depth meshes material to use the original
// so the scene renders them normally
this.meshes.forEach((mesh) => {
mesh.useMaterial(mesh.userData.originalMaterial)
})
// reset renderer current pipeline again
this.renderer.pipelineManager.resetCurrentPipeline()
})
}
We add a callback to the renderer onBeforeRenderScene task manager. As the name states, this will be called each frame just before rendering our scene. This is exactly when we want that to happen. This method returns a task ID that allows us to unsubscribe from the event anytime we want (in our case, in the destroy() method, see below).
Here’s what we do in this function:
Set every mesh’s material to our custom depthMaterial.
Reset the current renderer pipeline manager active pipeline, so that the renderer can set the corresponding pipeline before drawing the mesh to the depth pass.
Begin our depth render pass using our depthPassTarget render pass descriptor.
Render each shadow casting mesh.
End our depth render pass.
Reset every mesh’s material to their original one.
Reset the current renderer pipeline manager active pipeline again, so that we’ll be able to then render our scene normally.
Do not forget to unsubscribe from the onBeforeRenderScene task when destroying, and we’ll be done with the ShadowMap class!
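In the destroy() method from earlier, that boils down to something like this (assuming the task queue exposes a remove() method):
// 'js/shadowed-particles-scene/ShadowMap.js' (in destroy(), a sketch)
this.renderer.onBeforeRenderScene.remove(this.depthPassTaskID)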
Before we move on to actually adding shadows, we instantiate the class in our ShadowedParticlesScene file:
// 'js/shadowed-particles-scene/ShadowedParticlesScene.js'
onSceneVisibilityChanged(isVisible) {
if (isVisible) {
this.section.classList.add('is-visible')
this.renderer.shouldRender = true
} else {
this.section.classList.remove('is-visible')
this.renderer.shouldRender = false
}
}
setupWebGPU() {
const distance = this.renderer.camera.position.z
this.shadowMap = new ShadowMap({
renderer: this.renderer,
depthTextureSize: 1024,
light: {
position: new Vec3(distance * 0.5, distance * 0.325, distance * 0.5),
// add a bit of spacing on every side
// to avoid out of view particles to be culled
// by the shadow map orthographic matrix
orthographicCamera: {
left: distance * -1.05,
right: distance * 1.05,
top: distance * 1.05,
bottom: distance * -1.05,
near: 0.1,
far: distance * 5,
},
},
})
this.createComputePasses()
this.createParticles()
}
destroyWebGPU() {
this.shadowMap.destroy()
// destroy both compute pass and compute bind group
this.computePass?.destroy()
this.computeBindGroup?.destroy()
this.particlesSystem?.remove()
}
Note that we’ve changed our onSceneVisibilityChanged() method a bit, swapping the renderer’s shouldRenderScene property for shouldRender: now we want to disable every render call when the section is not visible, including the onBeforeRenderScene callbacks.
We want our particles to both cast and receive shadows, in order to apply what’s called self-shadowing. We need to:
Patch the particlesSystem mesh parameters using shadowMap.patchShadowReceivingParameters() so they can receive shadows.
Add the particlesSystem as a shadow casting object using shadowMap.addShadowCastingMesh().
// 'js/shadowed-particles-scene/ShadowedParticlesScene.js'
createParticles() {
const geometry = new PlaneGeometry({
instancesCount: this.nbInstances,
vertexBuffers: [
{
// use instancing
stepMode: 'instance',
name: 'instanceAttributes',
buffer: this.updateComputeBuffer.buffer, // pass the compute buffer right away
attributes: [
{
name: 'particlePosition',
type: 'vec4f',
bufferFormat: 'float32x4',
size: 4,
},
{
name: 'particleVelocity',
type: 'vec4f',
bufferFormat: 'float32x4',
size: 4,
},
],
},
],
})
// since we need this uniform in both the depth pass and regular pass
// create a new buffer binding that will be shared by both materials
const particlesParamsBindings = new BufferBinding({
label: 'Params',
name: 'params',
bindingType: 'uniform',
visibility: ['vertex'],
struct: {
size: {
type: 'f32',
value: 0.7,
},
},
})
this.particlesSystem = new Mesh(
this.renderer,
this.shadowMap.patchShadowReceivingParameters({
label: 'Shadowed particles system',
geometry,
frustumCulling: false,
shaders: {
vertex: {
code: shadowedParticlesVs,
},
fragment: {
code: shadowedParticlesFs,
},
},
uniforms: {
shading: {
struct: {
lightColor: {
type: 'vec3f',
value: new Vec3(255 / 255, 240 / 255, 97 / 255),
},
darkColor: {
type: 'vec3f',
value: new Vec3(184 / 255, 162 / 255, 9 / 255),
},
},
},
},
bindings: [particlesParamsBindings],
})
)
this.shadowMap.addShadowCastingMesh(this.particlesSystem, {
bindings: [particlesParamsBindings],
})
}
And… nothing changed. Do you know why? I hope you do. We did not make any change to the particles shaders, obviously!
We need a function to compute the shadow intensity in the fragment shader based on our shadow map depth texture and the depth comparison sampler.
We’ll use a percentage closer filtering function to do that, as it gives good results at a reasonable price.
To compute the shadows, we need to pass to this function the positions in the light view space.
We need to create 2 new chunks.
First, create a get-shadow-position.wgsl.js inside the /js/shaders/chunks folder:
// 'js/shaders/chunks/get-shadow-position.wgsl.js'
export const getShadowPosition = /* wgsl */ `
fn getShadowPosition(lightProjectionMatrix: mat4x4f, modelViewPosition: vec4f) -> vec3f {
// XY is in (-1, 1) space, Z is in (0, 1) space
let posFromLight = lightProjectionMatrix * modelViewPosition;
// Convert XY to (0, 1)
// Y is flipped because texture coords are Y-down.
return vec3(
posFromLight.xy * vec2(0.5, -0.5) + vec2(0.5),
posFromLight.z,
);
}
`
Create a get-pcf-soft-shadows.wgsl.js in the same folder:
// 'js/shaders/chunks/get-pcf-soft-shadows.wgsl.js'
export const getPCFSoftShadows = /* wgsl */ `
fn getPCFSoftShadows(shadowPosition: vec3f) -> f32 {
// Percentage-closer filtering. Sample texels in the region
// to smooth the result.
var visibility: f32 = 0.0;
let bias: f32 = 0.001;
let size: f32 = f32(textureDimensions(shadowMapDepthTexture).y);
let oneOverShadowDepthTextureSize = 1.0 / size;
for (var y = -1; y <= 1; y++) {
for (var x = -1; x <= 1; x++) {
let offset = vec2(f32(x), f32(y)) * oneOverShadowDepthTextureSize;
visibility += textureSampleCompare(
shadowMapDepthTexture,
depthComparisonSampler,
shadowPosition.xy + offset,
shadowPosition.z - bias
);
}
}
visibility /= 9.0;
return visibility;
}
`
We can now apply those changes to our particles shaders. The shadow position will be passed from the vertex to the fragment shader. Do not forget that we need to take billboarding into account to compute it:
// 'js/shaders/shadowed-particles.wgsl.js'
export const shadowedParticlesVs = /* wgsl */ `
struct VSOutput {
@builtin(position) position: vec4f,
@location(0) uv: vec2f,
@location(1) normal: vec3f,
@location(2) velocity: vec4f,
@location(3) shadowPosition: vec3f,
};
${getParticleSize}
${getShadowPosition}
@vertex fn main(
attributes: Attributes,
) -> VSOutput {
var vsOutput : VSOutput;
let size: f32 = getParticleSize(attributes.particlePosition.w, attributes.particleVelocity.w);
// billboarding
var mvPosition: vec4f = matrices.modelView * vec4(attributes.particlePosition.xyz, 1.0);
mvPosition += vec4(attributes.position, 0.0) * size;
vsOutput.position = camera.projection * mvPosition;
vsOutput.uv = attributes.uv;
// normals in view space to follow billboarding
vsOutput.normal = getViewNormal(attributes.normal);
vsOutput.velocity = attributes.particleVelocity;
// the shadow position must account for billboarding as well!
var mvShadowPosition: vec4f = light.viewMatrix * matrices.model * vec4(attributes.particlePosition.xyz, 1.0);
mvShadowPosition += vec4(attributes.position, 0.0) * size;
vsOutput.shadowPosition = getShadowPosition(
light.projectionMatrix,
mvShadowPosition
);
return vsOutput;
}
`
export const shadowedParticlesFs = /* wgsl */ `
struct VSOutput {
@builtin(position) position: vec4f,
@location(0) uv: vec2f,
@location(1) normal: vec3f,
@location(2) velocity: vec4f,
@location(3) shadowPosition: vec3f,
};
${getPCFSoftShadows}
@fragment fn main(fsInput: VSOutput) -> @location(0) vec4f {
${discardParticleFragment}
// clamp velocity
let velocity = clamp(length(fsInput.velocity.xyz), 0.0, 1.0);
// use it to mix between our 2 colors
var color: vec3f = mix(shading.darkColor, shading.lightColor, vec3(velocity));
var visibility = getPCFSoftShadows(fsInput.shadowPosition);
color *= visibility;
return vec4(color, 1.0);
}
`
Note how we multiply our final color by the visibility. This should give us the right shading now, including our shadows.
But… nothing’s changed! Again! Damn, we must have missed something. Any idea?
Remember that we had the opportunity to pass custom shaders when adding a shadow casting mesh? That’s it!
When rendering our particles to the depth pass, we need to account for billboarding as well!
So, let’s create another pair of shaders for the depth pass in our shadowed-particles.wgsl.js file:
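Here is a sketch of what those depth pass shaders could look like. They reuse the getParticleSize and discardParticleFragment chunks, as well as the light matrices and particle params bindings provided when adding the mesh as a shadow caster; the export names are illustrative.
// 'js/shaders/shadowed-particles.wgsl.js' (depth pass shaders, a sketch)
export const shadowedParticlesDepthVs = /* wgsl */ `
  struct DepthVSOutput {
    @builtin(position) position: vec4f,
    @location(0) uv: vec2f,
  };

  ${getParticleSize}

  @vertex fn main(
    attributes: Attributes,
  ) -> DepthVSOutput {
    var depthVsOutput: DepthVSOutput;

    let size: f32 = getParticleSize(attributes.particlePosition.w, attributes.particleVelocity.w);

    // billboarding, but from the light point of view this time
    var mvPosition: vec4f = light.viewMatrix * matrices.model * vec4(attributes.particlePosition.xyz, 1.0);
    mvPosition += vec4(attributes.position, 0.0) * size;

    depthVsOutput.position = light.projectionMatrix * mvPosition;
    depthVsOutput.uv = attributes.uv;

    return depthVsOutput;
  }
`

export const shadowedParticlesDepthFs = /* wgsl */ `
  struct DepthVSOutput {
    @builtin(position) position: vec4f,
    @location(0) uv: vec2f,
  };

  // late-Z writing: no color output, the fragment shader is only here to discard fragments
  @fragment fn main(fsInput: DepthVSOutput) {
    ${discardParticleFragment}
  }
`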
I guess you expected we’d use only a vertex shader, since we’re writing to the depth texture by outputting to the vertex shader built-in position variable.
But by adding a fragment shader, we can perform what’s called late-Z writing, which lets us discard the fragments that need to be discarded.
Let’s go back to our particles and add the shaders:
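A sketch, assuming the depth pass shaders from above: in createParticles(), we now pass them (along with the shared params binding) when registering the particles as a shadow caster.
// 'js/shadowed-particles-scene/ShadowedParticlesScene.js' (in createParticles(), a sketch)
this.shadowMap.addShadowCastingMesh(this.particlesSystem, {
  shaders: {
    vertex: {
      code: shadowedParticlesDepthVs,
    },
    fragment: {
      code: shadowedParticlesDepthFs,
    },
  },
  bindings: [particlesParamsBindings],
})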
Now that our particles properly cast shadows, let’s give them something bigger to cast them onto: we’re going to wrap the whole scene inside a big box. Once again, you should already be familiar with this kind of code by now.
We create a cube, scale it using our visibleSize object, and position it so it perfectly fits our viewport.
The only new thing here is the cullMode parameter. We’ll want to draw the inside of the cube instead of the outside.
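Here is a sketch of what createWrappingBox() could look like. The wrappingBoxVs shader name and the uniform values are illustrative; the important parts are the cullMode, the shadow-receiving patch, and the scaling based on visibleSize.
// 'js/shadowed-particles-scene/ShadowedParticlesScene.js' (a sketch)
createWrappingBox() {
  this.wrappingBox = new Mesh(
    this.renderer,
    this.shadowMap.patchShadowReceivingParameters({
      label: 'Shadowed wrapping box',
      geometry: new BoxGeometry(),
      cullMode: 'front', // draw the inside of the cube
      shaders: {
        vertex: {
          code: wrappingBoxVs,
        },
        fragment: {
          code: wrappingBoxFs,
        },
      },
      uniforms: {
        shading: {
          struct: {
            color: {
              type: 'vec3f',
              value: new Vec3(0.65),
            },
            shadowIntensity: {
              type: 'f32',
              value: 0.5,
            },
          },
        },
      },
    })
  )

  // scale and position the box so it wraps the visible viewport
  this.wrappingBox.scale.set(this.visibleSize.width * 0.5, this.visibleSize.height * 0.5, this.radius)
  this.wrappingBox.position.z = -this.radius
}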
Of course, we need to actually add it and destroy it:
// 'js/shadowed-particles-scene/ShadowedParticlesScene.js'
setupWebGPU() {
const distance = this.renderer.camera.position.z
this.shadowMap = new ShadowMap({
renderer: this.renderer,
depthTextureSize: 1024,
light: {
position: new Vec3(distance * 0.5, distance * 0.325, distance * 0.5),
// add a bit of spacing on every side
// to avoid out of view particles to be culled
// by the shadow map orthographic matrix
orthographicCamera: {
left: distance * -1.05,
right: distance * 1.05,
top: distance * 1.05,
bottom: distance * -1.05,
near: 0.1,
far: distance * 5,
},
},
})
this.createComputePasses()
this.createParticles()
this.createWrappingBox()
}
destroyWebGPU() {
this.shadowMap.destroy()
// destroy both compute pass and compute bind group
this.computePass?.destroy()
this.computeBindGroup?.destroy()
this.particlesSystem?.remove()
this.wrappingBox?.remove()
}
There it is:
The thing is, since this mesh is not actually lit, it’s hard to tell where the faces are.
So let’s add some basic lighting. Fortunately, we already have the shadow map light at our disposal to do that. We just need to add a couple of uniforms defining the ambient and directional light colors and intensities, and implement basic Lambert shading:
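A sketch of the extra uniforms that could be added to the wrapping box (the values are placeholders; the names match the fragment shader below):
// 'js/shadowed-particles-scene/ShadowedParticlesScene.js' (wrapping box uniforms excerpt, a sketch)
uniforms: {
  // ...existing shading uniforms
  ambientLight: {
    struct: {
      color: {
        type: 'vec3f',
        value: new Vec3(1),
      },
      intensity: {
        type: 'f32',
        value: 0.35,
      },
    },
  },
  directionalLight: {
    struct: {
      color: {
        type: 'vec3f',
        value: new Vec3(1),
      },
      intensity: {
        type: 'f32',
        value: 1,
      },
    },
  },
},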
We’ve already seen those things before so no need to explain this in detail.
Just note we apply the shadows to our diffuse result only.
But if we look at the result, it suddenly became all dark:
What’s happening here again?
Well, we’re using the normals to calculate the light contribution, but since we’re drawing only the back faces of our cube, the normals should be negated!
WGSL has a front_facing built-in variable that lets us know whether we’re drawing the front or back face of a fragment, so let’s use it to invert our normals:
// 'js/shaders/shadowed-wrapping-box.wgsl.js'
export const wrappingBoxFs = /* wgsl */ `
struct VSOutput {
@builtin(position) position: vec4f,
@builtin(front_facing) frontFacing: bool,
@location(0) uv: vec2f,
@location(1) normal: vec3f,
@location(2) shadowPosition: vec3f,
@location(3) worldPosition: vec3f,
};
${getPCFSoftShadows}
@fragment fn main(fsInput: VSOutput) -> @location(0) vec4f {
var visibility = getPCFSoftShadows(fsInput.shadowPosition);
visibility = clamp(visibility, 1.0 - clamp(shading.shadowIntensity, 0.0, 1.0), 1.0);
// ambient light
let ambient: vec3f = ambientLight.intensity * ambientLight.color;
// inverse the normals if we're using front face culling
let faceDirection = select(-1.0, 1.0, fsInput.frontFacing);
// diffuse lambert shading
let N = normalize(faceDirection * fsInput.normal);
let L = normalize(light.position - fsInput.worldPosition);
let NDotL = max(dot(N, L), 0.0);
let diffuse: vec3f = NDotL * directionalLight.color * directionalLight.intensity;
// apply shadow to diffuse
let lightAndShadow: vec3f = ambient + visibility * diffuse;
return vec4(shading.color * lightAndShadow, 1.0);
}
`
And now our shading is correct!
There’s still one last thing though. If you look closely, you’ll see that our light shading has introduced a bit of color banding:
Luckily we have a cheap trick to solve this: apply a little bit of dithering.
// 'js/shaders/shadowed-wrapping-box.wgsl.js'
export const wrappingBoxFs = /* wgsl */ `
struct VSOutput {
@builtin(position) position: vec4f,
@builtin(front_facing) frontFacing: bool,
@location(0) uv: vec2f,
@location(1) normal: vec3f,
@location(2) shadowPosition: vec3f,
@location(3) worldPosition: vec3f,
};
${getPCFSoftShadows}
fn applyDithering(color: vec3f, fragCoord: vec2f) -> vec3f {
// Simple random noise based on fragment coordinates
let scale = 1.0 / 255.0; // Adjust this value to control the strength of the dithering
let noise = fract(sin(dot(fragCoord, vec2(12.9898, 78.233))) * 43758.5453);
// Apply the noise to the color
return color + vec3(noise * scale);
}
@fragment fn main(fsInput: VSOutput) -> @location(0) vec4f {
var visibility = getPCFSoftShadows(fsInput.shadowPosition);
visibility = clamp(visibility, 1.0 - clamp(shading.shadowIntensity, 0.0, 1.0), 1.0);
// ambient light
let ambient: vec3f = ambientLight.intensity * ambientLight.color;
// inverse the normals if we're using front face culling
let faceDirection = select(-1.0, 1.0, fsInput.frontFacing);
// diffuse lambert shading
let N = normalize(faceDirection * fsInput.normal);
let L = normalize(light.position - fsInput.worldPosition);
let NDotL = max(dot(N, L), 0.0);
let diffuse: vec3f = NDotL * directionalLight.color * directionalLight.intensity;
// apply shadow to diffuse
let lightAndShadow: vec3f = ambient + visibility * diffuse;
// apply dithering to reduce color banding
let color = applyDithering(shading.color * lightAndShadow, fsInput.position.xy);
return vec4(color, 1.0);
}
`
As usual, let’s finish with the entering animation. Since this article is already really dense, we’ll keep it pretty basic here: we’ll just scale the particles in when the scene enters the viewport.
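A minimal sketch of one way to do it (not the actual demo code): ease a progress value toward 1 when the scene becomes visible and use it to scale the particle system.
// 'js/shadowed-particles-scene/ShadowedParticlesScene.js' (a sketch)
onSceneVisibilityChanged(isVisible) {
  this.isVisible = isVisible
  // ...existing visibility handling
}

onRender() {
  // ease toward 1 when the section is visible, back toward 0 otherwise
  this.showProgress = this.showProgress ?? 0
  this.showProgress += ((this.isVisible ? 1 : 0) - this.showProgress) * 0.05
  this.particlesSystem?.scale.set(this.showProgress, this.showProgress, this.showProgress)
}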
That was a very long ride, and I hope you made it this far. But this was very instructive since we’ve learned how to use compute shaders to animate a particle system, and we’ve also seen how objects can cast and receive shadows using a shadow map from scratch.
I also hope you’ve enjoyed the articles and working with gpu-curtains. If you want to help me keep building the library, you can always sponsor me on GitHub. If you have any questions regarding these articles, or gpu-curtains in general, feel free to reach out to me on social media.