Solutions for Grabbing the Screen Image in URP

Introduction

When building post-processing effects such as Bloom or Blur, we need the current screen image to work with. This article explains how to get that screen image under Unity's URP pipeline.

Environment

Windows 10
Unity 2022.3.8f1c1
Universal RP 14.0.8

URP's Built-in Screen Grab

Open the URP asset settings ([see this article on how to find the URP settings](https://www.starloong.top/2024/03/31/Unity%E6%9F%A5%E6%89%BEURP%E8%AE%BE%E7%BD%AE%E7%9A%84%E6%96%B9%E6%B3%95/)) and tick Opaque Texture. On the main camera, set Opaque Texture (under Rendering in the camera settings) to Use setting from Render Pipeline Asset. URP will then capture the screen image for us automatically. Below is a simple shader that samples and displays it:

Shader "Unlit/SimpleGetScreen"
{
Properties
{
_OffsetVal("Offset",Vector) = (0,0,0,0)
}
SubShader
{
Tags { "RenderType"="Transparent" "Queue" = "Transparent"}
LOD 100

Pass
{
HLSLPROGRAM
#pragma vertex vert
#pragma fragment frag

#include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Core.hlsl"
#include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/DeclareOpaqueTexture.hlsl"

struct appdata
{
float4 vertex : POSITION;
float2 uv : TEXCOORD0;
};

struct v2f
{
float2 uv : TEXCOORD0;
float4 vertex : SV_POSITION;
float4 screenPos : TEXCOORD1;
};

CBUFFER_START(UnityPerMaterial)
float4 _OffsetVal;
CBUFFER_END

v2f vert (appdata v)
{
v2f o;
o.vertex = TransformObjectToHClip(v.vertex);
o.screenPos = ComputeScreenPos(o.vertex);
return o;
}

half4 frag (v2f i) : SV_Target
{
// sample the texture
half4 screenPos = i.screenPos;
half2 uv = screenPos.xy / screenPos.w + _OffsetVal.xy;
half4 col;
col.rgb = SampleSceneColor(uv);
col.a = 1;
return col;
}
ENDHLSL
}
}
}

URP itself provides the function for sampling this capture; it lives in Packages/com.unity.render-pipelines.universal/ShaderLibrary/DeclareOpaqueTexture.hlsl. All we have to do is pass it correct UVs. To prove the shader really is sampling the screen, I added the _OffsetVal property; adjust its x and y components and watch the image shift.
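
For reference, DeclareOpaqueTexture.hlsl essentially declares the _CameraOpaqueTexture that URP fills in and a small helper for sampling it. A rough sketch of what it provides (paraphrased from the package file, not a verbatim copy):

```hlsl
// Roughly what DeclareOpaqueTexture.hlsl provides (paraphrased, not verbatim):
TEXTURE2D_X(_CameraOpaqueTexture);          // the opaque-pass color copy that URP fills in
SAMPLER(sampler_CameraOpaqueTexture);

float3 SampleSceneColor(float2 uv)
{
    // XR-safe sample of the opaque color copy at the given screen-space UV
    return SAMPLE_TEXTURE2D_X(_CameraOpaqueTexture, sampler_CameraOpaqueTexture,
                              UnityStereoTransformScreenSpaceTex(uv)).rgb;
}
```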

One extra thing to note: URP copies the screen after Camera.RenderSkybox and before DrawTransparentObjects (you can verify this with the Frame Debugger). At that point the opaque objects have already been rendered, so we must not put this shader in an opaque render queue; otherwise the object using it would itself end up in URP's capture and the result would be wrong. That is why the shader above is placed in the Transparent queue.

Custom Screen Grab

Although URP already grabs the screen for us, the limitation is obvious: the capture point is fixed. What if I want a capture taken after the transparent objects have been rendered? That kind of requirement is not rare; a typical example is the distortion effect commonly used with particles. Particles usually sit in the transparent queue, and sometimes they need to be distorted as well. In that case we have to grab the screen ourselves.

URP certainly supports this. We create a new Renderer Feature (see this article for an introduction to Renderer Features), which I named PostRT. The code is as follows:

```csharp
using System;
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

public class PostRT : ScriptableRendererFeature
{
    [Serializable]
    public class PostSetting
    {
        public RenderPassEvent renderPassEvent = RenderPassEvent.AfterRenderingTransparents;
        public string grabbedTextureName = "_GrabbedTexture";
    }

    class PostRTRenderPass : ScriptableRenderPass
    {
        private PostSetting postSetting;
        private RTHandle _grabbedTextureHandle;
        private int _grabbedTexturePropertyId;

        public override void OnCameraSetup(CommandBuffer cmd, ref RenderingData renderingData)
        {
            base.OnCameraSetup(cmd, ref renderingData);
        }

        public PostRTRenderPass(PostSetting postSetting)
        {
            renderPassEvent = postSetting.renderPassEvent;
            this.postSetting = postSetting;
            _grabbedTexturePropertyId = Shader.PropertyToID(postSetting.grabbedTextureName);
            _grabbedTextureHandle = RTHandles.Alloc(postSetting.grabbedTextureName, postSetting.grabbedTextureName);
        }

        public override void Configure(CommandBuffer cmd, RenderTextureDescriptor cameraTextureDescriptor)
        {
            cmd.GetTemporaryRT(_grabbedTexturePropertyId, cameraTextureDescriptor);
            cmd.SetGlobalTexture(postSetting.grabbedTextureName, _grabbedTextureHandle.nameID);
        }

        public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
        {
            var cmd = CommandBufferPool.Get(nameof(PostRTRenderPass));
            cmd.Clear();
            var renderer = renderingData.cameraData.renderer;
            if (renderer.cameraColorTargetHandle.rt != null)
            {
                Blit(cmd, renderer.cameraColorTargetHandle, _grabbedTextureHandle);
            }
            context.ExecuteCommandBuffer(cmd);
            CommandBufferPool.Release(cmd);
        }

        public override void FrameCleanup(CommandBuffer cmd)
        {
            cmd.ReleaseTemporaryRT(_grabbedTexturePropertyId);
        }

        public override void OnCameraCleanup(CommandBuffer cmd)
        {
        }
    }

    PostRTRenderPass m_ScriptablePass;
    public PostSetting postSetting = new PostSetting();

    public override void Create()
    {
        m_ScriptablePass = new PostRTRenderPass(postSetting);
    }

    public override void AddRenderPasses(ScriptableRenderer renderer, ref RenderingData renderingData)
    {
        renderer.EnqueuePass(m_ScriptablePass);
    }
}
```

PS: Part of this code was adapted from an article I found online, but I could no longer find the original when writing this post.

One thing to point out: in URP 14, the line Blit(cmd, renderer.cameraColorTargetHandle, _grabbedTextureHandle); is actually problematic, which is why I added the if check in front of it. The problem has been raised in a discussion thread, but nobody there explains how to solve it, and I could not track down the cause either, so the if check is there simply to keep it from throwing errors.
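
If you want to avoid the pass-level Blit entirely, one possible alternative, sketched below and not verified against this exact setup, is the Blitter helper from the core render pipeline package, which is the copy path Unity recommends for RTHandle targets in recent URP versions. It would replace the Execute method above:

```csharp
// Sketch only: a variant of Execute that copies with Blitter instead of ScriptableRenderPass.Blit.
// Assumes the same _grabbedTextureHandle field as in PostRTRenderPass above.
public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
{
    var cmd = CommandBufferPool.Get(nameof(PostRTRenderPass));
    var source = renderingData.cameraData.renderer.cameraColorTargetHandle;
    if (source != null && source.rt != null)
    {
        // Blitter.BlitCameraTexture takes care of RTHandle scaling and XR texture arrays.
        Blitter.BlitCameraTexture(cmd, source, _grabbedTextureHandle);
    }
    context.ExecuteCommandBuffer(cmd);
    CommandBufferPool.Release(cmd);
}
```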

Now add the Render Feature in the current URP asset; PostRT will appear among the options. Once it has been added, it looks like the screenshot below:

In the PostRT entry in the screenshot, Render Pass Event decides at which point in the frame the screen is captured, and Grabbed Texture Name is the name given to the captured texture. In the screenshot I also added a Render Objects feature (named Self Post Render), which controls at which point the objects using our shader are rendered; the event I picked guarantees that PostRT has already finished before they draw. The modified shader looks like this:

Shader "Unlit/SelfSimpleGetScreen"
{
Properties
{
_OffsetVal("Offset",Vector) = (0,0,0,0)
}
SubShader
{
Tags { "RenderType"="Transparent" "Queue" = "Transparent"}
LOD 100

Pass
{
Tags{"LightMode"="SelfPostRender"}
HLSLPROGRAM
#pragma vertex vert
#pragma fragment frag

#include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Core.hlsl"
#include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/DeclareOpaqueTexture.hlsl"

struct appdata
{
float4 vertex : POSITION;
float2 uv : TEXCOORD0;
};

struct v2f
{
float2 uv : TEXCOORD0;
float4 vertex : SV_POSITION;
float4 screenPos : TEXCOORD1;
};

TEXTURE2D_X(_GrabbedTexture);
SAMPLER(sampler_GrabbedTexture);

CBUFFER_START(UnityPerMaterial)
float4 _OffsetVal;
CBUFFER_END

v2f vert (appdata v)
{
v2f o;
o.vertex = TransformObjectToHClip(v.vertex);
o.screenPos = ComputeScreenPos(o.vertex);
return o;
}

half4 frag (v2f i) : SV_Target
{
// sample the texture
half4 screenPos = i.screenPos;
half2 uv = screenPos.xy / screenPos.w + _OffsetVal.xy;
half4 col;
col.rgb = SAMPLE_TEXTURE2D_X(_GrabbedTexture, sampler_GrabbedTexture, UnityStereoTransformScreenSpaceTex(uv)).rgb;
col.a = 1;
return col;
}
ENDHLSL
}
}
}

Some Drawbacks

While using PostRT later on, I found the setup somewhat cumbersome: I always had to add an extra Render Objects feature to get the rendering result I wanted. Put simply, PostRT itself offers no way to specify a LightMode. It is not that this cannot be done; in the code below I added exactly that.

```csharp
using System;
using UnityEngine;
using UnityEngine.Experimental.Rendering.Universal;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;
using static UnityEngine.Experimental.Rendering.Universal.RenderObjects;

public class PostRT : ScriptableRendererFeature
{
    [Serializable]
    public class PostSetting
    {
        public RenderPassEvent renderPassEvent = RenderPassEvent.AfterRenderingTransparents;
        public string grabbedTextureName = "_GrabbedTexture";
        public FilterSettings filterSettings = new FilterSettings();
    }

    class PostRTRenderPass : ScriptableRenderPass
    {
        private PostSetting postSetting;
        private RTHandle _grabbedTextureHandle;
        private int _grabbedTexturePropertyId;

        public override void OnCameraSetup(CommandBuffer cmd, ref RenderingData renderingData)
        {
            base.OnCameraSetup(cmd, ref renderingData);
        }

        public PostRTRenderPass(PostSetting postSetting)
        {
            renderPassEvent = postSetting.renderPassEvent;
            this.postSetting = postSetting;
            _grabbedTexturePropertyId = Shader.PropertyToID(postSetting.grabbedTextureName);
            _grabbedTextureHandle = RTHandles.Alloc(postSetting.grabbedTextureName, postSetting.grabbedTextureName);
        }

        public override void Configure(CommandBuffer cmd, RenderTextureDescriptor cameraTextureDescriptor)
        {
            cmd.GetTemporaryRT(_grabbedTexturePropertyId, cameraTextureDescriptor);
            cmd.SetGlobalTexture(postSetting.grabbedTextureName, _grabbedTextureHandle.nameID);
        }

        public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
        {
            var cmd = CommandBufferPool.Get(nameof(PostRTRenderPass));
            cmd.Clear();
            var renderer = renderingData.cameraData.renderer;
            if (renderer.cameraColorTargetHandle.rt != null)
            {
                Blit(cmd, renderer.cameraColorTargetHandle, _grabbedTextureHandle);
            }
            context.ExecuteCommandBuffer(cmd);
            CommandBufferPool.Release(cmd);
        }

        public override void FrameCleanup(CommandBuffer cmd)
        {
            cmd.ReleaseTemporaryRT(_grabbedTexturePropertyId);
        }

        public override void OnCameraCleanup(CommandBuffer cmd)
        {
        }
    }

    PostRTRenderPass m_ScriptablePass;
    public PostSetting postSetting = new PostSetting();

    RenderObjectsPass renderObjectsPass;

    public override void Create()
    {
        m_ScriptablePass = new PostRTRenderPass(postSetting);
        renderObjectsPass = new RenderObjectsPass("PostRT", postSetting.renderPassEvent, postSetting.filterSettings.PassNames,
            postSetting.filterSettings.RenderQueueType, postSetting.filterSettings.LayerMask, new CustomCameraSettings());
    }

    public override void AddRenderPasses(ScriptableRenderer renderer, ref RenderingData renderingData)
    {
        renderer.EnqueuePass(m_ScriptablePass);
        renderer.EnqueuePass(renderObjectsPass);
    }
}
```

But in practice this just embeds a Render Objects pass inside PostRT. It does satisfy the requirement, yet it feels like a rather clumsy way to do it.