- Shadow Mapping
- Shadow mapping was introduced by Lance Williams in 1978, in a paper entitled "Casting curved shadows on curved surfaces". It has been used extensively since, both in offline rendering and real-time graphics. Shadow mapping is used by Pixar's RenderMan and was used on major films such as "Toy Story".
- Shadow mapping is just one of many different ways of producing shadows in your graphics applications, each with its own advantages and disadvantages. In the case of shadow mapping, these include:
- Advantages:
- No knowledge or processing of the scene geometry is required, since shadow mapping is an image space technique, working automatically with objects created or altered on the GPU.
- Only a single texture is required to hold shadowing information for each light; the stencil buffer is not used.
- Avoids the high fill requirement of shadow volumes.
- Disadvantages:
- Aliasing, especially when using small shadow maps.
- The scene geometry must be rendered once per light in order to generate the shadow map for a spotlight, and more times for an omnidirectional point light.
- This tutorial will focus on basic shadow mapping for a single spotlight, but there are plenty of papers about how to extend and improve the technique.
- Theory - Shadow mapping as a depth test
- Consider a simple scene lit by a single point light, with hard shadows. How does a given point in the scene know whether it is lit, or in shadow? Put simply, a point in the scene is lit if there is nothing blocking a straight line path between the light and that point. The key step in understanding shadow mapping is that these points are exactly those which would be visible (i.e. not occluded) to a viewer placed at the light source.
- We already have a technique to see what is visible to a given viewer, and use it when drawing almost any scene using 3d hardware. That technique is z-buffering. So, the points which would pass the depth test if we were rendering the scene from the light's point of view are precisely those which should not be in shadow.
- If we draw the scene from the light's point of view, we can save the values from the depth buffer. Then, we draw the scene from the camera's point of view, and use the saved depth buffer as a texture which is projected from the light's position. At a given point, we can then compare the value from the texture projected onto the point to the distance from the point to the light, and hence calculate which points should be in shadow.
- Letting the value in the saved depth texture be D, and the distance from the point to the light be R, we have:
- R = D: There was nothing occluding this point when drawing from the light source, so this point is unshadowed.
- R > D: There must have been an object in front of this point when looking from the light's position. This point is thus in shadow.
- Application
- How do we go about performing the above using OpenGL?
- The technique requires at least 2 passes, but to keep each pass simple, we will use 3.
- Firstly, we draw the scene from the light's point of view. This is achieved by using gluLookAt to look from the light's position at the centre of the scene. The scene is then drawn as normal, and the depth buffer read.
- All the calculations for the shadowing are performed at the precision of the depth buffer. Using an equality to test for an unshadowed point is likely to produce many incorrect results, due to a lack of precision. This is the same reason as that behind "Do not compare floats with ==". So, when drawing the scene from the light's point of view, we instruct OpenGL to cull front faces. Thus the back faces of our objects are drawn into the shadow map. Hence the depth values stored in the shadow map are greater than the depth of the faces which can be seen by the light. By marking as unshadowed those points for which D >= R, all surfaces visible to the light will be unshadowed.
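A minimal sketch of this first pass follows. `lightPosition`, `sceneCentre`, `shadowMapSize`, `shadowMapTexture` and `DrawScene()` are assumed to be provided by the application; this is an illustration under those assumptions, not the tutorial's exact code:

```cpp
// Pass 1 sketch: render from the light, culling front faces so back-face
// depths land in the shadow map, then copy the depth buffer into a texture.
glViewport(0, 0, shadowMapSize, shadowMapSize);

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(lightPosition.x, lightPosition.y, lightPosition.z,  // eye at the light
          sceneCentre.x,  sceneCentre.y,  sceneCentre.z,      // looking at the scene centre
          0.0f, 1.0f, 0.0f);                                  // up vector

glEnable(GL_CULL_FACE);
glCullFace(GL_FRONT);      // draw back faces into the depth buffer

DrawScene();               // application-provided scene drawing

// Grab the depth buffer into the shadow map texture.
glBindTexture(GL_TEXTURE_2D, shadowMapTexture);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, shadowMapSize, shadowMapSize);

glCullFace(GL_BACK);       // restore default culling
```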
- This technique will only work if all objects are closed. If you have open objects in your scene, it is possible instead to use polygon offset to increase the depth values stored in the depth buffer.
- For simplicity, we will draw this first pass to the standard back buffer. This means that our window must be large enough to fit the shadow map texture within it, and the window must not be occluded by others. These restrictions can be bypassed by using an off-screen pbuffer when generating the shadow map.
- The other two passes are drawn from the camera's point of view. Firstly, we draw the entire scene with a dim light, as it would be shown if shadowed. In theory, this pass should draw the scene using only ambient light. However, in order that the curved surfaces in shadow do not appear unnaturally flat, we use a dim diffuse light source.
- The third pass is where the shadow comparison mentioned above occurs. This comparison is so vital to shadow mapping that it is actually possible to get the hardware to perform it per pixel, using the ARB-approved extension ARB_shadow. We set up the texture unit so that the comparison will affect the alpha value as well as the color components. Any fragments which "fail" the comparison (R > D) will generate an alpha value of 0, and any which pass will have alpha of 1. By using the alpha test, we can discard any fragments which should be shadowed. Now, using a bright light with specular enabled, we can draw the lit parts of the scene.
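The comparison and alpha-test state for this third pass can be sketched as follows. The `_ARB` tokens come from the ARB_shadow and ARB_depth_texture extensions; a depth texture is assumed to be bound, and the 0.99 alpha threshold is an illustrative choice:

```cpp
// Enable the per-pixel comparison of the interpolated R texture coordinate
// against the depth value D stored in the bound shadow map.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE_ARB, GL_COMPARE_R_TO_TEXTURE_ARB);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC_ARB, GL_LEQUAL);   // pass when R <= D
glTexParameteri(GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE_ARB, GL_INTENSITY);  // result feeds alpha too

// Discard fragments that failed the comparison (alpha 0).
glAlphaFunc(GL_GEQUAL, 0.99f);
glEnable(GL_ALPHA_TEST);
```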
- Using a linear filter on the depth texture will filter the values produced after the shadow comparison. This is called "Percentage Closer Filtering" (or PCF) and will produce slightly soft shadow edges. If we allow the lower alpha values to pass the alpha test however, the lit fragments, modulated by the shadow map, may actually be darker than the shadowed pixel already within the framebuffer. This produces a dark border around the shadowed regions. So, in this demo, the alpha test is used to discard all but fully lit regions. The dark border could be eliminated by using a different, more complicated method to combine the 2 passes. In my main shadow mapping project, MAX blending is used to combine the results. However, to keep this tutorial as simple as possible, PCF has not been used.
- Projective texturing
- How do we project the light's depth buffer, encoded in a texture, onto the scene's geometry when rendered from the camera's point of view?
- Firstly, let's look at the coordinate spaces and matrices involved in this demo:
- The shadow map is a snapshot of the light's viewport, which is a 2d projection of the light's clip space. In order to perform the texture projection, we will use OpenGL's EYE_LINEAR texture coordinate generation, which generates texture coordinates for a vertex based upon its eye-space position. We need to map these generated texture coordinates to ones appropriate for addressing the shadow map, using the texture matrix. The texture matrix thus needs to take coordinates from the camera's eye space into the light's clip space.
- The best way to do this is to use:
- T = Pl * Vl * Vc^-1
- where:
- T is the texture matrix
- Pl is the light's projection matrix
- Vl is the light's view matrix
- Vc is the camera's view matrix
- Remembering that OpenGL applies a matrix M to a texture coordinate set T as MT, this will transform the camera's eye space coordinates into the light's clip space by going through world space and the light's eye space. This avoids object space and the use of any model matrices, and hence does not need to be recalculated for each model we are drawing.
- There is one final operation which needs to be performed on the texture coordinates once they are in the light's clip space. After the perspective divide, the clip space X, Y and Z coordinates are in the range -1 to 1 (written [-1, 1]). The texture map is addressed by X and Y coordinates in [0, 1], and the depth value stored in it is also in [0, 1]. We need to generate a simple matrix to map [-1, 1] to [0, 1] for each of X, Y and Z coordinates, and pre-multiply our texture matrix T by it.
- We can actually perform this projection while avoiding use of the texture matrix altogether. This can be achieved because we actually specify a matrix when we enable EYE_LINEAR texgen. Typical code to enable the texture coordinate generation for a single coordinate is:
- glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR);
- glTexGenfv(GL_S, GL_EYE_PLANE, VECTOR4D(1.0f, 0.0f, 0.0f, 0.0f));
- glEnable(GL_TEXTURE_GEN_S);
- If we look at the eye planes for all four texture coordinates together, they form the 4x4 identity matrix. Texture coordinates are generated based upon this "texgen" matrix, and are then manipulated using the texture matrix. We can gain a small speed-up by ignoring the texture matrix and placing what we would use for the texture matrix directly into the eye planes.
- Finally, the most expensive part of setting up the projection is calculating the inverse of Vc. OpenGL will even do that for us! When the eye planes are specified, the GL will automatically post-multiply them with the inverse of the current modelview matrix. All we have to do is ensure that at this time, the modelview matrix contains the camera's view matrix. The inverse of this will then be multiplied onto our texgen matrix.
- So, the final code to set up the texture projection, including these optimisations, is:
- //Calculate texture matrix for projection
- //This matrix takes us from eye space to the light's clip space
- //It is postmultiplied by the inverse of the current view matrix when specifying texgen
- static MATRIX4X4 biasMatrix
- (0.5f, 0.0f, 0.0f, 0.0f,
- 0.0f, 0.5f, 0.0f, 0.0f,
- 0.0f, 0.0f, 0.5f, 0.0f,
- 0.5f, 0.5f, 0.5f, 1.0f);
- MATRIX4X4 textureMatrix=biasMatrix*lightProjectionMatrix*lightViewMatrix;
- //Set up texture coordinate generation.
- glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR);
- glTexGenfv(GL_S, GL_EYE_PLANE, textureMatrix.GetRow(0));
- glEnable(GL_TEXTURE_GEN_S);
- glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR);
- glTexGenfv(GL_T, GL_EYE_PLANE, textureMatrix.GetRow(1));
- glEnable(GL_TEXTURE_GEN_T);
- glTexGeni(GL_R, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR);
- glTexGenfv(GL_R, GL_EYE_PLANE, textureMatrix.GetRow(2));
- glEnable(GL_TEXTURE_GEN_R);
- glTexGeni(GL_Q, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR);
- glTexGenfv(GL_Q, GL_EYE_PLANE, textureMatrix.GetRow(3));
- glEnable(GL_TEXTURE_GEN_Q);
- Original article: http://www.paulsprojects.net/tutorials/smt/smt.html
- Translator's note: building on this article and other references, I have written a soft shadow demo.
- Demo executable: http://www.fileupyours.com/view/219112/GLSL/Soft%20Shadow%20Demo%20V2.0.rar