WebGPU is an emerging technology that promises to bring cutting-edge GPU computing capabilities to the web, benefiting all consumer platforms through a shared code base.
Its predecessor, WebGL, is powerful, but it lacks compute shader capabilities, which limits its range of applications.
WGSL (WebGPU Shading Language) draws on best practices from languages like Rust and GLSL.
As I was learning WebGPU, I came across a gap in the documentation: I was hoping to find a simple starting point for using a compute shader to compute data for the vertex and fragment shaders.
The single-file HTML for all the code in this tutorial can be found at https://www.php.cn/link/2e5281ee978b78d6f5728aad8f28fedb - read on for a detailed breakdown.
Here is a one-click demo of this HTML running on my domain: https://www.php.cn/link/bed827b4857bf056d05980661990ccdc (it requires a WebGPU-enabled browser such as Chrome or Edge: https://www.php.cn/link/bae00fb8b4115786ba5dbbb67b9b177a).
High-level setup
This is a particle simulation - it advances in discrete time steps.
Time is tracked in JS on the CPU and passed to the GPU as a (float) uniform.
Particle data is managed entirely on the GPU - though the CPU still interacts with it: it allocates the memory and sets the initial values. It is also possible to read the data back to the CPU, but that is omitted in this tutorial.
The magic of this setup is that each particle is updated in parallel with all the others, enabling incredible calculation and rendering speeds in the browser (the parallelization is bounded by the number of cores on the GPU: dividing the particle count by the core count gives the true number of iterations per core per update step).
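To make that arithmetic concrete, here is a back-of-the-envelope sketch in plain JS. The core count is an illustrative assumption, not a value queried from any real device:

```javascript
// PARTICLE_COUNT matches the tutorial; CORE_COUNT is a hypothetical example
const PARTICLE_COUNT = 10000;
const CORE_COUNT = 1024;

// Each core handles roughly this many particles per update step,
// instead of one core looping over all 10k sequentially.
const stepsPerCore = Math.ceil(PARTICLE_COUNT / CORE_COUNT);
console.log(stepsPerCore); // 10
```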
Bindings
The mechanism WebGPU uses to exchange data between the CPU and GPU is binding - a JS array (such as a Float32Array) can be "bound" to a memory location in WGSL via a WebGPU buffer. A WGSL memory location is identified by two integers: a group number and a binding number.
In our case, both the compute shader and the vertex shader rely on two data bindings: time and particle position.
Time - uniforms
The uniform is declared in both the compute shader (https://www.php.cn/link/2e5281ee978b78d6f5728aad8f28fedb#L43) and the vertex shader (https://www.php.cn/link/2e5281ee978b78d6f5728aad8f28fedb#L69) - the compute shader updates positions based on time, and the vertex shader updates colors based on time.
Let's take a look at the binding setup in JS and WGSL, starting with the compute shader.
<code>const computeBindGroup = device.createBindGroup({
  /* see the computePipeline definition at
     https://www.php.cn/link/2e5281ee978b78d6f5728aad8f28fedb#L102
     it links the JS string holding the WGSL code into WebGPU */
  layout: computePipeline.getBindGroupLayout(0), // group number 0
  entries: [{
    // time is bound at binding number 0
    binding: 0,
    resource: {
      /* for reference, the buffer is declared as:
         const timeBuffer = device.createBuffer({
           size: Float32Array.BYTES_PER_ELEMENT,
           usage: GPUBufferUsage.UNIFORM | GPUBufferUsage.COPY_DST})
         https://www.php.cn/link/2e5281ee978b78d6f5728aad8f28fedb#L129 */
      buffer: timeBuffer
    }
  }, {
    // particle position data is at binding number 1 (still group 0)
    binding: 1,
    resource: { buffer: particleBuffer }
  }]
});</code>
and the corresponding declaration in the compute shader
<code>// from the compute shader - the vertex shader has a similar declaration
@group(0) @binding(0) var<uniform> t: f32;
@group(0) @binding(1) var<storage, read_write> particles : array<Particle>;</code>
Importantly, we bind the timeBuffer on the JS side to WGSL by matching the group number and binding number in JS and WGSL.
This allows us to control the value of the variable from JS:
<code>/* only 1 element is needed in the array, since time is a single float value */
const timeJs = new Float32Array(1)
let t = 5.3
/* plain JS, just set the value */
timeJs.set([t], 0)
/* pass the data from CPU/JS to GPU/WGSL */
device.queue.writeBuffer(timeBuffer, 0, timeJs);</code>
Particle Position - WGSL Storage
We store and update particle positions directly in GPU-accessible memory - allowing us to update them in parallel by taking advantage of the GPU's massive multi-core architecture.
Parallelization is coordinated via the workgroup size, declared in the compute shader:
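Before the GPU can update positions, the CPU has to allocate a storage buffer large enough to hold them. A minimal sizing sketch in plain JS (the actual device.createBuffer and writeBuffer calls live in the linked source; the names here are illustrative):

```javascript
const PARTICLE_COUNT = 10000;
const FLOATS_PER_PARTICLE = 2; // each position is a vec2<f32>

// CPU-side staging array for the initial particle positions
const particlesJs = new Float32Array(PARTICLE_COUNT * FLOATS_PER_PARTICLE);
for (let i = 0; i < PARTICLE_COUNT; i++) {
  particlesJs[i * 2] = 0;     // x
  particlesJs[i * 2 + 1] = 0; // y
}

// the GPU storage buffer must be sized to this many bytes
console.log(particlesJs.byteLength); // 80000 (10000 * 2 * 4 bytes per f32)
```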
<code>@compute @workgroup_size(64)
fn main(@builtin(global_invocation_id) global_id : vec3<u32>) {
  // ...
}</code>
The declaration @builtin(global_invocation_id) global_id : vec3<u32> gives each invocation a global ID. By definition, global_invocation_id = workgroup_id * workgroup_size + local_invocation_id - which means it can be used as a particle index.
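The formula can be mirrored in plain JS to see why the global ID works as a particle index (a sketch only - on the GPU these values are builtins, not something you compute yourself):

```javascript
const WORKGROUP_SIZE = 64;

// global_invocation_id = workgroup_id * workgroup_size + local_invocation_id
function globalInvocationId(workgroupId, localInvocationId) {
  return workgroupId * WORKGROUP_SIZE + localInvocationId;
}

// invocation 5 of workgroup 2 processes particle 133
console.log(globalInvocationId(2, 5)); // 133
```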
For example, if we have 10k particles and workgroup_size is 64, we need to schedule Math.ceil(10000/64) workgroups. Each time a compute pass is triggered from JS, we will explicitly tell the GPU to perform this amount of work:
<code>computePass.dispatchWorkgroups(Math.ceil(PARTICLE_COUNT / WORKGROUP_SIZE));</code>
If PARTICLE_COUNT == 10000 and WORKGROUP_SIZE == 64, we start 157 workgroups (10000/64 = 156.25). Within each workgroup, local_invocation_id ranges from 0 to 63 (while workgroup_id ranges from 0 to 156). Since 157 * 64 = 10048, we end up doing slightly more work than needed. We handle the overflow by discarding the redundant invocations.
Here is the complete compute shader after taking these factors into account:
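The dispatch arithmetic is easy to check in plain JS:

```javascript
const PARTICLE_COUNT = 10000;
const WORKGROUP_SIZE = 64;

const workgroupCount = Math.ceil(PARTICLE_COUNT / WORKGROUP_SIZE);
const totalInvocations = workgroupCount * WORKGROUP_SIZE;
// these extra invocations return early in the shader
const discarded = totalInvocations - PARTICLE_COUNT;

console.log(workgroupCount, totalInvocations, discarded); // 157 10048 48
```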
<code>@compute @workgroup_size(${WORKGROUP_SIZE})
fn main(@builtin(global_invocation_id) global_id : vec3<u32>) {
  let index = global_id.x;
  // discard the extra invocations, since the workgroup grid is not aligned
  if (index >= arrayLength(&particles)) {
    return;
  }
  /* convert the integer index to a float so we can
     compute a position update based on the index (and time) */
  let fi = f32(index);
  particles[index].position = vec2<f32>(
    /* no grand intent behind the formula - just an example using time + index */
    cos(fi * 0.11) * 0.8 + sin((t + fi)/100)/10,
    sin(fi * 0.11) * 0.8 + cos((t + fi)/100)/10
  );
}</code>
These values persist across compute passes because particles is declared as a storage variable.
Reading compute-shader particle positions in the vertex shader
To read the particle positions written by the compute shader from the vertex shader, we need a read-only view, since only compute shaders can write to storage.
Here is the declaration in WGSL:
<code>@group(0) @binding(0) var<uniform> t: f32;
@group(0) @binding(1) var<storage> particles : array<vec2<f32>>;
/* or equivalently:
@group(0) @binding(1) var<storage, read> particles : array<vec2<f32>>;
*/</code>
Trying to reuse the compute shader's read_write access mode here simply errors:
<code>var with 'storage' address space and 'read_write' access mode cannot be used by vertex pipeline stage</code>
Note that the binding numbers in the vertex shader do not have to match the compute shader binding numbers - they only need to match the vertex shader's binding group declaration:
<code>const renderBindGroup = device.createBindGroup({
  layout: pipeline.getBindGroupLayout(0),
  entries: [{
    binding: 0,
    resource: { buffer: timeBuffer }
  }, {
    binding: 1,
    resource: { buffer: particleBuffer }
  }]
});</code>
I chose binding: 2 in the GitHub sample code (https://www.php.cn/link/2e5281ee978b78d6f5728aad8f28fedb#L70) - just to explore the boundaries of the constraints WebGPU imposes.
Run the simulation step by step
With all settings in place, the update and render loops are coordinated in JS:
<code>/* start the simulation at t = 0 */
let t = 0
function frame() {
  /* for simplicity, use a constant integer time step -
     this renders consistently regardless of frame rate. */
  t += 1
  timeJs.set([t], 0)
  device.queue.writeBuffer(timeBuffer, 0, timeJs);

  // compute pass to update the particle positions
  const computePassEncoder = device.createCommandEncoder();
  const computePass = computePassEncoder.beginComputePass();
  computePass.setPipeline(computePipeline);
  computePass.setBindGroup(0, computeBindGroup);
  // important to dispatch the right number of workgroups to process all particles
  computePass.dispatchWorkgroups(Math.ceil(PARTICLE_COUNT / WORKGROUP_SIZE));
  computePass.end();
  device.queue.submit([computePassEncoder.finish()]);

  // render pass
  const commandEncoder = device.createCommandEncoder();
  const passEncoder = commandEncoder.beginRenderPass({
    colorAttachments: [{
      view: context.getCurrentTexture().createView(),
      clearValue: { r: 0.0, g: 0.0, b: 0.0, a: 1.0 },
      loadOp: 'clear',
      storeOp: 'store',
    }]
  });
  passEncoder.setPipeline(pipeline);
  passEncoder.setBindGroup(0, renderBindGroup);
  passEncoder.draw(PARTICLE_COUNT);
  passEncoder.end();
  device.queue.submit([commandEncoder.finish()]);

  requestAnimationFrame(frame);
}
frame();</code>
Conclusion
WebGPU unleashes the power of massively parallel GPU computing in the browser.
It runs in passes - each pass gets its local variables through a pipeline with memory bindings that bridge CPU memory and GPU memory.
Compute passes coordinate parallel workloads through workgroups.
While it does require some heavy setup, I think the local binding/state style is a huge improvement over WebGL's global state model - making it easier to use while finally bringing the power of GPU compute to the web.
The above is the detailed content of WebGPU tutorial: compute, vertex, and fragment shaders on the web. For more information, please follow other related articles on the PHP Chinese website!
