Advanced post-processing
Introduction
This tutorial describes an advanced method for post-processing in Godot. In particular, it will explain how to write a post-processing shader that uses the depth buffer. You should already be familiar with post-processing, and in particular with the methods covered in the custom post-processing tutorial.
Full screen quad
One way to make custom post-processing effects is by using a viewport. However, there are two main drawbacks of using a Viewport:
You cannot access the depth buffer
You cannot see the effect of the post-processing shader in the editor
To get around the limitation on using the depth buffer, use a MeshInstance3D with a QuadMesh primitive. This allows us to use a shader and to access the depth texture of the scene. Next, use a vertex shader to make the quad cover the screen at all times so that the post-processing effect will be applied at all times, including in the editor.
First, create a new MeshInstance3D and set its mesh to a QuadMesh. This creates a quad centered at position (0, 0, 0) with a width and height of 1. Set the width and height to 2 and enable Flip Faces. Right now, the quad occupies a position in world space at the origin. However, we want it to move with the camera so that it always covers the entire screen. To do this, we will bypass the coordinate transforms that translate the vertex positions through the different coordinate spaces and treat the vertices as if they were already in clip space.
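The editor steps above can also be done from a script. This is a minimal sketch assuming the script is attached to a parent node; the node and variable names are illustrative only:

```gdscript
extends Node3D

func _ready():
	# Create the full-screen quad from script instead of the editor.
	var quad = MeshInstance3D.new()
	var quad_mesh = QuadMesh.new()
	# Width and height of 2, matching the clip-space range described below.
	quad_mesh.size = Vector2(2.0, 2.0)
	# Equivalent to enabling Flip Faces in the inspector.
	quad_mesh.flip_faces = true
	quad.mesh = quad_mesh
	add_child(quad)
```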
The vertex shader expects coordinates to be output in clip space, which are coordinates ranging from -1 at the left and bottom of the screen to 1 at the top and right of the screen. This is why the QuadMesh needs to have a height and width of 2.
Godot handles the transform from model to view space to clip space behind the scenes, so we need to nullify the effects of Godot's transformations. We do this by setting the POSITION built-in to our desired position. POSITION bypasses the built-in transformations and sets the vertex position in clip space directly.
shader_type spatial;
// Prevent the quad from being affected by lighting and fog. This also improves performance.
render_mode unshaded, fog_disabled;
void vertex() {
POSITION = vec4(VERTEX.xy, 1.0, 1.0);
}
Note
In versions of Godot earlier than 4.3, this code recommended using POSITION = vec4(VERTEX, 1.0);, which implicitly assumed the clip-space near plane was at 0.0. That code is now incorrect and will not work in versions 4.3+, as we now use a "reversed-z" depth buffer where the near plane is at 1.0.
Even with this vertex shader, the quad keeps disappearing. This is due to frustum culling, which is done on the CPU. Frustum culling uses the camera matrix and the AABBs of Meshes to determine if the Mesh will be visible before passing it to the GPU. The CPU has no knowledge of what we are doing with the vertices, so it assumes the coordinates specified refer to world positions, not clip space positions, which results in Godot culling the quad when we turn away from the center of the scene. In order to keep the quad from being culled, there are a few options:
Add the QuadMesh as a child of the camera so that the camera always points at it
Set the geometry property extra_cull_margin on the QuadMesh to be as large as possible
The second option ensures that the quad is visible in the editor, while the first option guarantees that it will still be visible even if the camera moves outside the cull margin. You can also use both options.
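Both options can also be applied from a script. The sketch below assumes the quad has already been added as a child of the camera; the node path is an assumption for this example:

```gdscript
extends Node3D

func _ready():
	# Option 1: the quad is a child of the camera (node path is an
	# assumption), so it moves with the camera automatically.
	var quad: MeshInstance3D = $Camera3D/Quad

	# Option 2: grow the cull margin so the AABB test always passes.
	# 16384.0 is the maximum value accepted by extra_cull_margin.
	quad.extra_cull_margin = 16384.0
```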
Depth texture
To read from the depth texture, we first need to create a texture uniform set to the depth buffer by using hint_depth_texture.
uniform sampler2D depth_texture : hint_depth_texture;
Once defined, the depth texture can be read with the texture() function.
float depth = texture(depth_texture, SCREEN_UV).x;
Note
Similar to accessing the screen texture, accessing the depth texture is only possible when reading from the current viewport. The depth texture cannot be accessed from another viewport you have rendered to.
The values returned by depth_texture are between 1.0 and 0.0 (corresponding to the near and far plane, respectively, because of the "reversed-z" depth buffer) and are nonlinear. When displaying depth directly from the depth_texture, everything will look almost black unless it is very close, due to that nonlinearity. In order to make the depth value align with world or model coordinates, we need to linearize the value. When we apply the projection matrix to the vertex position, the z value is made nonlinear, so to linearize it, we multiply it by the inverse of the projection matrix, which in Godot is accessible with the variable INV_PROJECTION_MATRIX.
Firstly, take the screen space coordinates and transform them into normalized device coordinates (NDC). NDC run from -1.0 to 1.0 in the x and y directions and from 0.0 to 1.0 in the z direction when using the Vulkan backend. Reconstruct the NDC using SCREEN_UV for the x and y axes, and the depth value for z.
void fragment() {
float depth = texture(depth_texture, SCREEN_UV).x;
vec3 ndc = vec3(SCREEN_UV * 2.0 - 1.0, depth);
}
Note
This tutorial assumes the use of the Forward+ or Mobile renderers, which both use Vulkan NDCs with a Z-range of [0.0, 1.0]. In contrast, the Compatibility renderer uses OpenGL NDCs with a Z-range of [-1.0, 1.0]. For the Compatibility renderer, replace the NDC calculation with this instead:
vec3 ndc = vec3(SCREEN_UV, depth) * 2.0 - 1.0;
You can also use the CURRENT_RENDERER and RENDERER_COMPATIBILITY built-in defines for a shader that will work in all renderers:
#if CURRENT_RENDERER == RENDERER_COMPATIBILITY
vec3 ndc = vec3(SCREEN_UV, depth) * 2.0 - 1.0;
#else
vec3 ndc = vec3(SCREEN_UV * 2.0 - 1.0, depth);
#endif
Convert the NDC to view space by multiplying the NDC by INV_PROJECTION_MATRIX. Recall that view space gives positions relative to the camera, so the z value will give us the distance to the point.
void fragment() {
...
vec4 view = INV_PROJECTION_MATRIX * vec4(ndc, 1.0);
view.xyz /= view.w;
float linear_depth = -view.z;
}
Because the camera is facing the negative z direction, the position will have a negative z value. In order to get a usable depth value, we have to negate view.z.
The world position can be constructed from the depth buffer using the following code, using the INV_VIEW_MATRIX to transform the position from view space into world space.
void fragment() {
...
vec4 world = INV_VIEW_MATRIX * INV_PROJECTION_MATRIX * vec4(ndc, 1.0);
vec3 world_position = world.xyz / world.w;
}
Example shader
Once we add a line to output to ALBEDO, we have a complete shader that looks something like this. This shader lets you visualize the linear depth or the world space coordinates, depending on which line is commented out.
shader_type spatial;
// Prevent the quad from being affected by lighting and fog. This also improves performance.
render_mode unshaded, fog_disabled;
uniform sampler2D depth_texture : hint_depth_texture;
void vertex() {
POSITION = vec4(VERTEX.xy, 1.0, 1.0);
}
void fragment() {
float depth = texture(depth_texture, SCREEN_UV).x;
vec3 ndc = vec3(SCREEN_UV * 2.0 - 1.0, depth);
vec4 view = INV_PROJECTION_MATRIX * vec4(ndc, 1.0);
view.xyz /= view.w;
float linear_depth = -view.z;
vec4 world = INV_VIEW_MATRIX * INV_PROJECTION_MATRIX * vec4(ndc, 1.0);
vec3 world_position = world.xyz / world.w;
// Visualize linear depth
ALBEDO.rgb = vec3(fract(linear_depth));
// Visualize world coordinates
//ALBEDO.rgb = fract(world_position).xyz;
}
Optimization
You can use a single large triangle rather than a full screen quad. The reason is explained here. However, the benefit is very small and only worthwhile when running especially complex fragment shaders.
Set the Mesh in the MeshInstance3D to an ArrayMesh. An ArrayMesh is a tool that allows you to easily construct a Mesh from Arrays for vertices, normals, colors, etc.
Now, attach a script to the MeshInstance3D and use the following code:
extends MeshInstance3D

func _ready():
	# Create a single triangle out of vertices:
	var verts = PackedVector3Array()
	verts.append(Vector3(-1.0, -1.0, 0.0))
	verts.append(Vector3(-1.0, 3.0, 0.0))
	verts.append(Vector3(3.0, -1.0, 0.0))

	# Create an array of arrays.
	# This could contain normals, colors, UVs, etc.
	var mesh_array = []
	mesh_array.resize(Mesh.ARRAY_MAX) # Required size for the ArrayMesh array.
	mesh_array[Mesh.ARRAY_VERTEX] = verts # Position of the vertex array in the ArrayMesh array.

	# Create the mesh from mesh_array:
	mesh.add_surface_from_arrays(Mesh.PRIMITIVE_TRIANGLES, mesh_array)
备注
The triangle is specified in normalized device coordinates. Recall, NDC run from -1.0 to 1.0 in both the x and y directions. This makes the screen 2 units wide and 2 units tall. In order to cover the entire screen with a single triangle, use a triangle that is 4 units wide and 4 units tall, double the screen's height and width.
Assign the same vertex shader from above and everything should look exactly the same.
The one drawback to using an ArrayMesh over using a QuadMesh is that the ArrayMesh is not visible in the editor because the triangle is not constructed until the scene is run. To get around that, construct a single triangle Mesh in a modeling program and use that in the MeshInstance3D instead.