Code Notes: 3D Gaussian Splatting Rendering Code

Preface

  • The rendering stage of 3D GS (from 3D Gaussians to a 2D image) is arguably the most important part of the whole framework. However, it is implemented in CUDA, and the authors have already wrapped it into a library that only needs to be called, so we start by looking at how the 3D GS code organizes this part to perform rendering (the imports it relies on are sketched right after this list).
  • Rendering projects the 3D Gaussian point cloud onto a 2D plane to produce the rendered image.
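
The rasterization backend is the diff-gaussian-rasterization CUDA extension that ships with the official repository; render only talks to it through the two classes imported below. A minimal sketch of the imports the function relies on, reconstructed from the identifiers used in the source rather than copied verbatim:

```python
import math
import torch

# CUDA rasterization backend bundled with the official 3D GS code
from diff_gaussian_rasterization import GaussianRasterizationSettings, GaussianRasterizer

# Gaussian point-cloud container and the spherical-harmonics helper used below
from scene.gaussian_model import GaussianModel
from utils.sh_utils import eval_sh
```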

Location

  • The render function is located in gaussian_renderer/__init__.py.

Source Code

def render(viewpoint_camera, pc : GaussianModel, pipe, bg_color : torch.Tensor, scaling_modifier = 1.0, override_color = None):
    """
    Render the scene. 
    
    Background tensor (bg_color) must be on GPU!
    """
 
    # Create zero tensor. We will use it to make pytorch return gradients of the 2D (screen-space) means
    screenspace_points = torch.zeros_like(pc.get_xyz, dtype=pc.get_xyz.dtype, requires_grad=True, device="cuda") + 0
    try:
        screenspace_points.retain_grad()
    except:
        pass

    # Set up rasterization configuration
    tanfovx = math.tan(viewpoint_camera.FoVx * 0.5)
    tanfovy = math.tan(viewpoint_camera.FoVy * 0.5)

    raster_settings = GaussianRasterizationSettings(
        image_height=int(viewpoint_camera.image_height),
        image_width=int(viewpoint_camera.image_width),
        tanfovx=tanfovx,
        tanfovy=tanfovy,
        bg=bg_color,
        scale_modifier=scaling_modifier,
        viewmatrix=viewpoint_camera.world_view_transform,
        projmatrix=viewpoint_camera.full_proj_transform,
        sh_degree=pc.active_sh_degree,
        campos=viewpoint_camera.camera_center,
        prefiltered=False,
        debug=pipe.debug
    )

    rasterizer = GaussianRasterizer(raster_settings=raster_settings)

    means3D = pc.get_xyz
    means2D = screenspace_points
    opacity = pc.get_opacity

    # If precomputed 3d covariance is provided, use it. If not, then it will be computed from
    # scaling / rotation by the rasterizer.
    scales = None
    rotations = None
    cov3D_precomp = None
    if pipe.compute_cov3D_python:
        cov3D_precomp = pc.get_covariance(scaling_modifier)
    else:
        scales = pc.get_scaling
        rotations = pc.get_rotation

    # If precomputed colors are provided, use them. Otherwise, if it is desired to precompute colors
    # from SHs in Python, do it. If not, then SH -> RGB conversion will be done by rasterizer.
    shs = None
    colors_precomp = None
    if override_color is None:
        if pipe.convert_SHs_python:
            shs_view = pc.get_features.transpose(1, 2).view(-1, 3, (pc.max_sh_degree+1)**2)
            dir_pp = (pc.get_xyz - viewpoint_camera.camera_center.repeat(pc.get_features.shape[0], 1))
            dir_pp_normalized = dir_pp/dir_pp.norm(dim=1, keepdim=True)
            sh2rgb = eval_sh(pc.active_sh_degree, shs_view, dir_pp_normalized)
            colors_precomp = torch.clamp_min(sh2rgb + 0.5, 0.0)
        else:
            shs = pc.get_features
    else:
        colors_precomp = override_color

    # Rasterize visible Gaussians to image, obtain their radii (on screen). 
    rendered_image, radii = rasterizer(
        means3D = means3D,
        means2D = means2D,
        shs = shs,
        colors_precomp = colors_precomp,
        opacities = opacity,
        scales = scales,
        rotations = rotations,
        cov3D_precomp = cov3D_precomp)

    # Those Gaussians that were frustum culled or had a radius of 0 were not visible.
    # They will be excluded from value updates used in the splitting criteria.
    return {"render": rendered_image,
            "viewspace_points": screenspace_points,
            "visibility_filter" : radii > 0,
            "radii": radii}

Input Parameters

  • viewpoint_camera: the camera viewpoint to render from
  • pc: the Gaussian model object (GaussianModel)
  • pipe: the pipeline parameter object; its flags (compute_cov3D_python, convert_SHs_python, debug) control which steps run in Python and which run inside the CUDA rasterizer
  • bg_color: background color tensor (must be on the GPU)
  • scaling_modifier: scaling modification factor applied to the Gaussian scales
  • override_color: optional precomputed colors that override the SH-derived colors

Return Values

  • rendered_image (returned as "render"): the rendered image
  • screenspace_points (returned as "viewspace_points"): the screen-space points tensor; its gradients are read after backpropagation
  • visibility_filter: a boolean mask, radii > 0, marking the Gaussians that are actually visible on screen
  • radii: the on-screen radius of each Gaussian (a usage sketch follows this list)
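
To put the inputs and outputs in context, here is a minimal sketch of how render is typically consumed in the training loop (variable names such as viewpoint_camera and gaussians are illustrative, and the real train.py adds SSIM loss, densification and logging on top of this):

```python
import torch
import torch.nn.functional as F

# Assumed to exist already: a camera `viewpoint_camera`, a GaussianModel `gaussians`,
# the pipeline parameter object `pipe`, and the render function shown above.
background = torch.tensor([0.0, 0.0, 0.0], dtype=torch.float32, device="cuda")

render_pkg = render(viewpoint_camera, gaussians, pipe, background)
image = render_pkg["render"]                         # (3, H, W) rendered image
viewspace_points = render_pkg["viewspace_points"]    # screen-space means (gradient carrier)
visibility_filter = render_pkg["visibility_filter"]  # boolean mask: radii > 0
radii = render_pkg["radii"]                          # per-Gaussian on-screen radius

loss = F.l1_loss(image, viewpoint_camera.original_image.cuda())
loss.backward()

# After backward(), the 2D positional gradients of the visible Gaussians are
# available for the densification statistics.
grads_2d = viewspace_points.grad[visibility_filter]
```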

Detailed Analysis

screenspace_points = torch.zeros_like(pc.get_xyz, dtype=pc.get_xyz.dtype, requires_grad=True, device="cuda") + 0
  • The positions of the points in screen space; screenspace_points has shape (N, 3), where N is the number of Gaussians. It exists only so that PyTorch returns the gradients of the 2D (screen-space) means after backpropagation (see the sketch right after this item).
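
The trailing + 0 turns the zero tensor into a non-leaf node of the autograd graph, which is why retain_grad() has to be called; a small self-contained sketch of the same trick (not taken from the repository):

```python
import torch

xyz = torch.rand(5, 3)  # stand-in for pc.get_xyz (the real tensor lives on CUDA)

# zeros_like(...) + 0 creates a non-leaf tensor; retain_grad() keeps its .grad
screenspace_points = torch.zeros_like(xyz, requires_grad=True) + 0
screenspace_points.retain_grad()

# Any computation involving the tensor now populates its gradient
(screenspace_points.sum() * 2.0).backward()
print(screenspace_points.grad)  # filled with 2.0, shape (5, 3)
```
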
# Set up rasterization configuration
tanfovx = math.tan(viewpoint_camera.FoVx * 0.5)
tanfovy = math.tan(viewpoint_camera.FoVy * 0.5)
  • Compute the tangent of half the field of view in the horizontal and vertical directions (their link to the focal length is sketched below).
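
These tangents relate the field of view to the pinhole focal length through tan(FoV/2) = (image size / 2) / focal. A tiny illustrative check (the numbers are made up):

```python
import math

image_width = 1600   # pixels (example value)
focal_x = 1200.0     # focal length in pixels (example value)

FoVx = 2.0 * math.atan(image_width / (2.0 * focal_x))  # focal length -> field of view
tanfovx = math.tan(FoVx * 0.5)                          # what render() computes

# Same quantity straight from the pinhole model
assert abs(tanfovx - image_width / (2.0 * focal_x)) < 1e-9
```
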
raster_settings = GaussianRasterizationSettings(
    image_height=int(viewpoint_camera.image_height),
    image_width=int(viewpoint_camera.image_width),
    tanfovx=tanfovx,
    tanfovy=tanfovy,
    bg=bg_color,
    scale_modifier=scaling_modifier,
    viewmatrix=viewpoint_camera.world_view_transform,  # camera extrinsics (world-to-view matrix)
    projmatrix=viewpoint_camera.full_proj_transform,  # full world-to-clip projection (view matrix composed with the perspective projection)
    sh_degree=pc.active_sh_degree,  # current degree of the spherical harmonics
    campos=viewpoint_camera.camera_center,  # camera center position
    prefiltered=False,
    debug=pipe.debug  # whether to enable debug mode
)

rasterizer = GaussianRasterizer(raster_settings=raster_settings)
  • The library function GaussianRasterizationSettings(…) is called to configure rasterization, and a rasterizer object is then created to splat the Gaussians onto the screen (the matrix composition behind viewmatrix and projmatrix is sketched below).
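
For reference, world_view_transform is the world-to-camera (extrinsic) matrix and full_proj_transform composes it with the perspective projection, so a single matrix takes world-space points all the way to clip space; the repository stores the transposed (row-vector) versions of both. The sketch below reconstructs that composition in the usual column-vector convention; the helper names are illustrative, not the repository's:

```python
import math
import numpy as np

def world_to_view(R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """4x4 extrinsic matrix taking world coordinates to camera coordinates."""
    M = np.eye(4)
    M[:3, :3] = R
    M[:3, 3] = t
    return M

def perspective(fovx: float, fovy: float, znear: float = 0.01, zfar: float = 100.0) -> np.ndarray:
    """Symmetric perspective projection built from the two fields of view."""
    tx, ty = math.tan(fovx * 0.5), math.tan(fovy * 0.5)
    P = np.zeros((4, 4))
    P[0, 0] = 1.0 / tx
    P[1, 1] = 1.0 / ty
    P[2, 2] = zfar / (zfar - znear)
    P[2, 3] = -zfar * znear / (zfar - znear)
    P[3, 2] = 1.0
    return P

# Column-vector convention: p_clip = projmatrix @ p_world
viewmatrix = world_to_view(np.eye(3), np.zeros(3))                         # extrinsics
projmatrix = perspective(math.radians(60), math.radians(45)) @ viewmatrix  # full projection
```
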
rendered_image, radii = rasterizer(
    means3D = means3D,
    means2D = means2D,
    shs = shs,
    colors_precomp = colors_precomp,
    opacities = opacity,
    scales = scales,
    rotations = rotations,
    cov3D_precomp = cov3D_precomp)
  • Finally, the rasterizer is called to produce the rendered image together with the per-Gaussian on-screen radii.
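
One branch not walked through above is convert_SHs_python, which evaluates the view-dependent color from the spherical-harmonics coefficients in Python (via eval_sh) and passes precomputed RGB to the rasterizer. A minimal sketch of that evaluation for degrees 0 and 1 only (the constants are the standard real SH basis factors; the repository's eval_sh also handles the higher degrees):

```python
import torch

C0 = 0.28209479177387814  # Y_0^0
C1 = 0.4886025119029199   # magnitude of the degree-1 basis functions

def eval_sh_deg1(sh: torch.Tensor, dirs: torch.Tensor) -> torch.Tensor:
    """sh: (N, 3, 4) coefficients per RGB channel, dirs: (N, 3) unit view directions."""
    x, y, z = dirs[:, 0:1], dirs[:, 1:2], dirs[:, 2:3]
    return (C0 * sh[..., 0]
            - C1 * y * sh[..., 1]
            + C1 * z * sh[..., 2]
            - C1 * x * sh[..., 3])

# Mirrors the render() code: shift by 0.5 and clamp so the RGB stays non-negative
def sh_to_rgb(sh: torch.Tensor, dirs: torch.Tensor) -> torch.Tensor:
    return torch.clamp_min(eval_sh_deg1(sh, dirs) + 0.5, 0.0)
```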