Parallax Mapping Sample


[Screenshot of the Parallax Occlusion Mapping sample]

This program is the ParallaxOcclusionMapping sample from the D3D samples. I looked into this technique because I found someone using the DXT5 format for normal maps. DXT5 means the texture still keeps a full-range alpha channel. As I dug deeper into the code, I found they use that alpha channel for parallax mapping. This makes the normal-map effect and the details on a flat surface much more obvious: a low-polygon mesh can show a high-polygon look with less vertex data, fewer triangles, and fewer textures.
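As a minimal sketch of that packing idea (the sampler and function names below are mine, not the sample's): the tangent-space normal is read from the RGB channels and the height from the alpha of the same DXT5 texture.

sampler2D g_nmhSampler;   // hypothetical sampler: DXT5 normal map with height packed in alpha

void SampleNormalAndHeight( float2 texCoord, out float3 vNormalTS, out float fHeight )
{
    float4 vSample = tex2D( g_nmhSampler, texCoord );
    vNormalTS = normalize( vSample.rgb * 2.0 - 1.0 );   // unpack normal from [0,1] to [-1,1]
    fHeight   = vSample.a;                              // height stays in [0,1]
}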

 

[Figure 1: the basics of parallax mapping]

The Basics

When a texture map representing an uneven surface is applied to a flat polygon, the surface appears flat. In Figure 1 you can see that when viewing the polygon along the depicted eye vector, you will see point A of the surface. However, if you were viewing the actual surface instead of a texture-mapped polygon, you would see point B. If the texture coordinate corresponding to point A could be corrected, then you would see point B instead. By offsetting all the texture coordinates individually, high areas of the surface shift toward the eye and low areas shift away from the eye. (The original text reads: "high areas of the surface would shift away from the eye and low areas of the surface would shift toward the eye." But as Figure 1 shows, A lies in a high area and its corrected position B is closer to the eye than A, so I think high areas should shift toward the eye, not away from it.) The process of parallax mapping requires that, for each pixel drawn, the texture coordinate used to index one or more texture maps be corrected by some displacement.

 

[Figure 2: computing the offset texture coordinate]

To compute an offset texture coordinate for a pixel, three components are required: a starting texture coordinate, a value for the surface height, and a tangent-space vector pointing from the pixel to the eye point. An application programmer must supply a tangent, binormal, and normal at each vertex. These can be used to create a rotation matrix that transforms vectors from the global coordinate system into tangent space.
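Written out, that rotation matrix has the world-space tangent, binormal, and normal as its rows (this matches how the shader code further down builds mWorldToTangent):

$$M_{W \to T} = \begin{pmatrix} T_x & T_y & T_z \\ B_x & B_y & B_z \\ N_x & N_y & N_z \end{pmatrix}$$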

A standard height map, or one channel of a texture (usually the alpha channel), is used to represent the varying height of the surface. The height map correlates to the surface's regular texture map and stores one height value per texel. These values are in the range [0.0, 1.0]. You can use the following function to remap values from another range into [0, 1]:

$$h' = \frac{h - h_{\min}}{h_{\max} - h_{\min}}$$
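In HLSL this remap is a one-liner (the function and parameter names are mine):

float RemapHeight( float h, float hMin, float hMax )
{
    // Remap a height from [hMin, hMax] into [0, 1], clamping against rounding error.
    return saturate( (h - hMin) / (hMax - hMin) );
}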

And you can use the following function to calculate the corrected texture coordinate:

$$T_{\text{corrected}} = T_0 + h \cdot \frac{V_{xy}}{V_z}$$
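Here $T_0$ is the starting texture coordinate, $h$ the height sampled at $T_0$, and $V$ the normalized tangent-space vector from the pixel to the eye. Putting the pieces together, a minimal per-pixel version might look like this (my own sketch with assumed sampler names, not the sample's code):

sampler2D g_baseSampler;     // hypothetical: color texture
sampler2D g_heightSampler;   // hypothetical: height stored in the alpha channel

float4 ParallaxSample( float2 texCoord, float3 vViewTS )   // vViewTS: normalized, pixel -> eye
{
    float  fHeight    = tex2D( g_heightSampler, texCoord ).a;        // height in [0, 1]
    float2 vCorrected = texCoord + fHeight * vViewTS.xy / vViewTS.z; // shift toward the eye
    return tex2D( g_baseSampler, vCorrected );
}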

Here are some implementation details in the shader:

To build the tangent-space basis in world space:

float3 vNormalWS   = mul( vInNormalOS,   (float3x3) g_mWorld );
float3 vTangentWS  = mul( vInTangentOS,  (float3x3) g_mWorld );
float3 vBinormalWS = mul( vInBinormalOS, (float3x3) g_mWorld );

vNormalWS   = normalize( vNormalWS );
vTangentWS  = normalize( vTangentWS );
vBinormalWS = normalize( vBinormalWS );

// Compute position in world space:
float4 vPositionWS = mul( inPositionOS, g_mWorld );

// Build the matrix that transforms vectors from world space to tangent space
// (its rows are the world-space tangent, binormal, and normal):
float3x3 mWorldToTangent = float3x3( vTangentWS, vBinormalWS, vNormalWS );

 

To calculate the parallax offset vector (the actual coordinate correction happens in the pixel shader, sketched after this block):

// Propagate the view and the light vectors (in tangent space):
Out.vLightTS = mul( vLightWS, mWorldToTangent );   // note: vector * matrix here,
Out.vViewTS  = mul( mWorldToTangent, vViewWS );    // but matrix * vector here

// Compute initial parallax displacement direction:
float2 vParallaxDirection = normalize(  Out.vViewTS.xy );

// The length of this vector determines the furthest amount of displacement:
float fLength         = length( Out.vViewTS );
float fParallaxLength = sqrt( fLength * fLength - Out.vViewTS.z * Out.vViewTS.z ) / Out.vViewTS.z;

// Compute the actual reverse parallax displacement vector:
Out.vParallaxOffsetTS = vParallaxDirection * fParallaxLength;
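In the pixel shader this precomputed direction is then scaled by the height sampled at the original coordinate and added to it. Conceptually (a sketch using the same assumed sampler names as above):

float  fHeight    = tex2D( g_nmhSampler, texCoord ).a;        // height from the alpha channel
float2 texCorrect = texCoord + vParallaxOffsetTS * fHeight;   // corrected texture coordinate
float4 cBase      = tex2D( g_baseSampler, texCorrect );       // sample color at the new coordinate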

 

Here, there are still several things that leave me a bit confused. Why do we need to use mul(matrix, vector) instead of mul(vector, matrix) when calculating the view direction in tangent space? I tried to modify the code, replacing mul(matrix, vector) with mul(vector, matrix), but artifacts appear as the angle between the camera view and the polygon surface becomes very small. (As far as I can tell, this is because HLSL's mul treats the vector as a column vector in mul(M, v) and as a row vector in mul(v, M), so swapping the arguments multiplies by the transpose instead; see the sketch below.) Another thing is that Parallax Occlusion Mapping gives a better effect than plain parallax mapping, but I did not check how it works.
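A small illustration of that argument-order question, using the mWorldToTangent matrix built above:

// mWorldToTangent has rows vTangentWS, vBinormalWS, vNormalWS.
// mul( mWorldToTangent, v ) dots v with each row:
//     float3( dot(T, v), dot(B, v), dot(N, v) )   -> world to tangent
// mul( v, mWorldToTangent ) forms v.x*T + v.y*B + v.z*N
//     -> tangent to world, because for an orthonormal basis
//        the transpose is the inverse.
float3 vViewTS = mul( mWorldToTangent, vViewWS );   // world -> tangent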

 

At first, I wanted to port this sample into my own D3D framework, which would mean re-doing the whole process with my own code. But why should I keep asking for a Visual Studio IDE under Ubuntu when I could simply use VS under Windows XP? If something works, we should use it, no matter whether we created it ourselves or not. Knowing it and knowing how to apply it is enough. The wheel should not be rebuilt again and again.

 

The full source code can be downloaded from here.

Reposted from: https://www.cnblogs.com/open-coder/archive/2012/08/24/2653606.html
