(9) Point Cloud Data Processing Notes — Global Registration

This article walks through global registration of point clouds with the Open3D library, covering the RANSAC-based method and fast global registration. Worked examples show how to extract point-cloud features, run the registration, refine the result, and compare the performance of the different methods.


1. Main references

(1) Official documentation

Global registration — Open3D 0.16.0 documentation

2. Purpose and principle

2.1 Personal understanding

PS, my understanding: (1) ICP fine-tunes the alignment of two point clouds when a rough (known but imprecise) initial transformation is already available. (2) Global registration assumes no knowledge of the initial relationship between the two point clouds; it computes a rough transformation directly, which is then handed to ICP for refinement.

2.2 Introduction

ICP registration and colored point cloud registration are both known as local registration methods because they rely on a rough alignment as initialization. This tutorial shows another class of registration methods, known as global registration. This family of algorithms does not require an alignment for initialization; they usually produce less tight alignment results and are used as initialization for the local methods.

2.3 The visualization helper used in the examples

The helper function below visualizes the transformed source point cloud together with the target point cloud:

import copy
import open3d as o3d

def draw_registration_result(source, target, transformation):
    # Work on deep copies so the original clouds are left untouched
    source_temp = copy.deepcopy(source)
    target_temp = copy.deepcopy(target)
    # Paint the source yellow and the target cyan for easy comparison
    source_temp.paint_uniform_color([1, 0.706, 0])
    target_temp.paint_uniform_color([0, 0.651, 0.929])
    source_temp.transform(transformation)
    o3d.visualization.draw_geometries([source_temp, target_temp],
                                      zoom=0.4559,
                                      front=[0.6452, -0.3036, -0.7011],
                                      lookat=[1.9892, 2.0208, 1.8945],
                                      up=[-0.2779, -0.9482, 0.1556])

2.4 Extracting geometric features

We downsample the point cloud, estimate normals, and then compute an FPFH feature for each point. The FPFH feature is a 33-dimensional vector that describes the local geometric properties of a point. A nearest-neighbor query in the 33-dimensional space returns points with similar local geometric structure. See [Rasu2009] for details.

def preprocess_point_cloud(pcd, voxel_size):
    print(":: Downsample with a voxel size %.3f." % voxel_size)
    pcd_down = pcd.voxel_down_sample(voxel_size)

    radius_normal = voxel_size * 2
    print(":: Estimate normal with search radius %.3f." % radius_normal)
    pcd_down.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=radius_normal, max_nn=30))

    radius_feature = voxel_size * 5
    print(":: Compute FPFH feature with search radius %.3f." % radius_feature)
    pcd_fpfh = o3d.pipelines.registration.compute_fpfh_feature(
        pcd_down,
        o3d.geometry.KDTreeSearchParamHybrid(radius=radius_feature, max_nn=100))
    return pcd_down, pcd_fpfh
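
A quick sketch of the 33-dimensional nearest-neighbor query mentioned above (this is not part of the tutorial; it assumes source_fpfh and target_fpfh were produced by preprocess_point_cloud, and uses Open3D's KDTreeFlann over the feature matrix):

# Sketch: build a KD-tree over the target FPFH features (.data is a 33 x N array)
# and find the target point whose feature is closest to that of source point 0.
feature_tree = o3d.geometry.KDTreeFlann(target_fpfh)
_, idx, _ = feature_tree.search_knn_vector_xd(source_fpfh.data[:, 0], 1)
print("source point 0 best matches target point", idx[0])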

3. Test example

3.1 Reading the two input point cloud files

The code below reads a source point cloud and a target point cloud from two files. They are misaligned when drawn with the identity matrix as the transformation.

(1) Note: np.identity(4) in the code below applies no translation and no rotation, so the clouds are drawn in their loaded (misaligned) poses.
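
For reference, a 4×4 homogeneous transform packs a 3×3 rotation and a translation vector, and the identity applies neither; the trans_init defined below is a pure axis-permuting rotation with zero translation. A tiny illustrative sketch (the 0.5 m shift is my own example value, not from the tutorial):

import numpy as np

no_motion = np.identity(4)      # no rotation, no translation
shift_x = np.identity(4)
shift_x[0, 3] = 0.5             # translate 0.5 m along x, keep the orientation
# point_cloud.transform(shift_x) would move the whole cloud by that offset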

import open3d as o3d
import numpy as np
from copy import deepcopy

def draw_registration_result(source, target, transformation):
    # source_temp = copy.deepcopy(source)
    # target_temp = copy.deepcopy(target)
    source_temp = deepcopy(source)
    target_temp = deepcopy(target)
    source_temp.paint_uniform_color([1, 0.706, 0])
    target_temp.paint_uniform_color([0, 0.651, 0.929])
    source_temp.transform(transformation)
    o3d.visualization.draw_geometries([source_temp, target_temp],
                                      zoom=0.4559,
                                      front=[0.6452, -0.3036, -0.7011],
                                      lookat=[1.9892, 2.0208, 1.8945],
                                      up=[-0.2779, -0.9482, 0.1556])


def preprocess_point_cloud(pcd, voxel_size):
    print(":: Downsample with a voxel size %.3f." % voxel_size)
    pcd_down = pcd.voxel_down_sample(voxel_size)

    radius_normal = voxel_size * 2
    print(":: Estimate normal with search radius %.3f." % radius_normal)
    pcd_down.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=radius_normal, max_nn=30))

    radius_feature = voxel_size * 5
    print(":: Compute FPFH feature with search radius %.3f." % radius_feature)
    pcd_fpfh = o3d.pipelines.registration.compute_fpfh_feature(
        pcd_down,
        o3d.geometry.KDTreeSearchParamHybrid(radius=radius_feature, max_nn=100))
    return pcd_down, pcd_fpfh


def prepare_dataset(voxel_size):
    print(":: Load two point clouds and disturb initial pose.")

    demo_icp_pcds = o3d.data.DemoICPPointClouds()
    source = o3d.io.read_point_cloud(demo_icp_pcds.paths[0])
    target = o3d.io.read_point_cloud(demo_icp_pcds.paths[1])
    trans_init = np.asarray([[0.0, 0.0, 1.0, 0.0], [1.0, 0.0, 0.0, 0.0],
                             [0.0, 1.0, 0.0, 0.0], [0.0, 0.0, 0.0, 1.0]])
    source.transform(trans_init)
    draw_registration_result(source, target, np.identity(4))

    source_down, source_fpfh = preprocess_point_cloud(source, voxel_size)
    target_down, target_fpfh = preprocess_point_cloud(target, voxel_size)
    return source, target, source_down, target_down, source_fpfh, target_fpfh


if __name__ == "__main__":
    voxel_size = 0.05  # means 5cm for this dataset
    source, target, source_down, target_down, source_fpfh, target_fpfh = prepare_dataset(
        voxel_size)

(2) After downloading, the demo files shown in the (omitted) screenshot are the ones that get used.

(3) The initial relationship between the two point clouds, as displayed (screenshot omitted).

(4) Operation flow (screenshot omitted).

3.2 The RANSAC algorithm and test

3.2.1 Principle

We use RANSAC for global registration. In each RANSAC iteration, ransac_n random points are picked from the source point cloud. Their corresponding points in the target point cloud are detected by querying the nearest neighbor in the 33-dimensional FPFH feature space. A pruning step then applies fast pruning algorithms to reject false matches early.

Open3D provides the following pruning algorithms (a small construction sketch follows the list):

  • CorrespondenceCheckerBasedOnDistance checks whether the aligned point clouds are close (closer than the specified threshold).
  • CorrespondenceCheckerBasedOnEdgeLength checks whether the lengths of any two arbitrary edges (lines formed by two vertices) drawn individually from the source and target correspondences are similar, i.e. whether ||edge_source|| > 0.9 × ||edge_target|| and ||edge_target|| > 0.9 × ||edge_source|| both hold.
  • CorrespondenceCheckerBasedOnNormal considers the vertex normal affinity of any correspondences. It computes the dot product of the two normal vectors and takes a radian value as the threshold.
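
As referenced above, a minimal construction sketch of the three checkers. The tutorial's RANSAC call in 3.2.2 only uses the distance and edge-length checkers; the normal-based checker and its 0.52 rad (about 30°) threshold are my own illustrative assumptions:

# Sketch only: distance_threshold follows the voxel_size * 1.5 choice used in the
# test code below; the 0.52 rad normal threshold is an assumed example value.
checkers = [
    o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(distance_threshold),
    o3d.pipelines.registration.CorrespondenceCheckerBasedOnEdgeLength(0.9),
    o3d.pipelines.registration.CorrespondenceCheckerBasedOnNormal(0.52),
]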

Only matches that pass the pruning step are used to compute a transformation, which is validated on the entire point cloud. The core function is registration_ransac_based_on_feature_matching. Its most important hyperparameter is RANSACConvergenceCriteria, which defines the maximum number of RANSAC iterations and the confidence probability. The larger these two numbers, the more accurate the result, but also the more time the algorithm takes.

We set the RANSAC parameters based on the empirical values provided by [Choi2015].
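
To illustrate the trade-off only (these particular numbers are my own assumptions, not the [Choi2015] values), looser and stricter criteria could look like:

# Fewer iterations / lower confidence -> faster but less reliable RANSAC;
# more iterations / higher confidence -> slower but more accurate.
criteria_fast = o3d.pipelines.registration.RANSACConvergenceCriteria(50000, 0.99)
criteria_strict = o3d.pipelines.registration.RANSACConvergenceCriteria(1000000, 0.9999)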

3.2.2 Test code

(1) Code

import open3d as o3d
import numpy as np
from copy import deepcopy

def draw_registration_result(source, target, transformation):
    # source_temp = copy.deepcopy(source)
    # target_temp = copy.deepcopy(target)
    source_temp = deepcopy(source)
    target_temp = deepcopy(target)
    source_temp.paint_uniform_color([1, 0.706, 0])
    target_temp.paint_uniform_color([0, 0.651, 0.929])
    source_temp.transform(transformation)
    o3d.visualization.draw_geometries([source_temp, target_temp],
                                      zoom=0.4559,
                                      front=[0.6452, -0.3036, -0.7011],
                                      lookat=[1.9892, 2.0208, 1.8945],
                                      up=[-0.2779, -0.9482, 0.1556])


def preprocess_point_cloud(pcd, voxel_size):
    print(":: Downsample with a voxel size %.3f." % voxel_size)
    pcd_down = pcd.voxel_down_sample(voxel_size)

    radius_normal = voxel_size * 2
    print(":: Estimate normal with search radius %.3f." % radius_normal)
    pcd_down.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=radius_normal, max_nn=30))

    radius_feature = voxel_size * 5
    print(":: Compute FPFH feature with search radius %.3f." % radius_feature)
    pcd_fpfh = o3d.pipelines.registration.compute_fpfh_feature(
        pcd_down,
        o3d.geometry.KDTreeSearchParamHybrid(radius=radius_feature, max_nn=100))
    return pcd_down, pcd_fpfh


def prepare_dataset(voxel_size):
    print(":: Load two point clouds and disturb initial pose.")

    demo_icp_pcds = o3d.data.DemoICPPointClouds()
    source = o3d.io.read_point_cloud(demo_icp_pcds.paths[0])
    target = o3d.io.read_point_cloud(demo_icp_pcds.paths[1])
    trans_init = np.asarray([[0.0, 0.0, 1.0, 0.0], [1.0, 0.0, 0.0, 0.0],
                             [0.0, 1.0, 0.0, 0.0], [0.0, 0.0, 0.0, 1.0]])
    source.transform(trans_init)
    draw_registration_result(source, target, np.identity(4))

    source_down, source_fpfh = preprocess_point_cloud(source, voxel_size)
    target_down, target_fpfh = preprocess_point_cloud(target, voxel_size)
    return source, target, source_down, target_down, source_fpfh, target_fpfh


def execute_global_registration(source_down, target_down, source_fpfh,
                                target_fpfh, voxel_size):
    distance_threshold = voxel_size * 1.5
    print(":: RANSAC registration on downsampled point clouds.")
    print("   Since the downsampling voxel size is %.3f," % voxel_size)
    print("   we use a liberal distance threshold %.3f." % distance_threshold)
    result = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
        source_down, target_down, source_fpfh, target_fpfh, True,
        distance_threshold,
        o3d.pipelines.registration.TransformationEstimationPointToPoint(False),
        3, [
            o3d.pipelines.registration.CorrespondenceCheckerBasedOnEdgeLength(
                0.9),
            o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(
                distance_threshold)
        ], o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))
    return result

if __name__ == "__main__":
    # (1) Load the originals, downsample, and visualize
    voxel_size = 0.05  # means 5cm for this dataset
    source, target, source_down, target_down, source_fpfh, target_fpfh = prepare_dataset(
        voxel_size)

    # (2) RANSAC-based global registration
    result_ransac = execute_global_registration(source_down, target_down,
                                                source_fpfh, target_fpfh,
                                                voxel_size)
    print(result_ransac)
    draw_registration_result(source_down, target_down, result_ransac.transformation)

(2) Results (screenshot omitted)

(3) The result is actually quite good. Looking at the two metrics introduced in the previous ICP tutorial: fitness=6.745798e-01, inlier_rmse=3.051079e-02.
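
The printed metrics can also be read programmatically from the returned RegistrationResult (a small sketch, assuming result_ransac from the script above):

# RegistrationResult exposes the fitness, inlier RMSE, and the estimated 4x4 transform
print("fitness:     %.6e" % result_ransac.fitness)
print("inlier RMSE: %.6e" % result_ransac.inlier_rmse)
print("transform:\n", result_ransac.transformation)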

Note: Open3D provides a faster implementation for global registration. Refer to the fast global registration section below.

4. Local refinement

(1) Principle

For performance reasons, global registration is only performed on heavily downsampled point clouds. The result is also not tight. We use point-to-plane ICP to further refine the alignment.

(2) Note: the method below uses the point-to-plane registration method from the previous tutorial.

 

import open3d as o3d
import numpy as np
from copy import deepcopy

def draw_registration_result(source, target, transformation):
    # source_temp = copy.deepcopy(source)
    # target_temp = copy.deepcopy(target)
    source_temp = deepcopy(source)
    target_temp = deepcopy(target)
    source_temp.paint_uniform_color([1, 0.706, 0])
    target_temp.paint_uniform_color([0, 0.651, 0.929])
    source_temp.transform(transformation)
    o3d.visualization.draw_geometries([source_temp, target_temp],
                                      zoom=0.4559,
                                      front=[0.6452, -0.3036, -0.7011],
                                      lookat=[1.9892, 2.0208, 1.8945],
                                      up=[-0.2779, -0.9482, 0.1556])


def preprocess_point_cloud(pcd, voxel_size):
    print(":: Downsample with a voxel size %.3f." % voxel_size)
    pcd_down = pcd.voxel_down_sample(voxel_size)

    radius_normal = voxel_size * 2
    print(":: Estimate normal with search radius %.3f." % radius_normal)
    pcd_down.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=radius_normal, max_nn=30))

    radius_feature = voxel_size * 5
    print(":: Compute FPFH feature with search radius %.3f." % radius_feature)
    pcd_fpfh = o3d.pipelines.registration.compute_fpfh_feature(
        pcd_down,
        o3d.geometry.KDTreeSearchParamHybrid(radius=radius_feature, max_nn=100))
    return pcd_down, pcd_fpfh


def prepare_dataset(voxel_size):
    print(":: Load two point clouds and disturb initial pose.")

    demo_icp_pcds = o3d.data.DemoICPPointClouds()
    source = o3d.io.read_point_cloud(demo_icp_pcds.paths[0])
    target = o3d.io.read_point_cloud(demo_icp_pcds.paths[1])
    trans_init = np.asarray([[0.0, 0.0, 1.0, 0.0], [1.0, 0.0, 0.0, 0.0],
                             [0.0, 1.0, 0.0, 0.0], [0.0, 0.0, 0.0, 1.0]])
    source.transform(trans_init)
    draw_registration_result(source, target, np.identity(4))

    source_down, source_fpfh = preprocess_point_cloud(source, voxel_size)
    target_down, target_fpfh = preprocess_point_cloud(target, voxel_size)
    return source, target, source_down, target_down, source_fpfh, target_fpfh


def execute_global_registration(source_down, target_down, source_fpfh,
                                target_fpfh, voxel_size):
    distance_threshold = voxel_size * 1.5
    print(":: RANSAC registration on downsampled point clouds.")
    print("   Since the downsampling voxel size is %.3f," % voxel_size)
    print("   we use a liberal distance threshold %.3f." % distance_threshold)
    result = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
        source_down, target_down, source_fpfh, target_fpfh, True,
        distance_threshold,
        o3d.pipelines.registration.TransformationEstimationPointToPoint(False),
        3, [
            o3d.pipelines.registration.CorrespondenceCheckerBasedOnEdgeLength(
                0.9),
            o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(
                distance_threshold)
        ], o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))
    return result


def refine_registration(source, target, source_fpfh, target_fpfh, voxel_size):
    distance_threshold = voxel_size * 0.4
    print(":: Point-to-plane ICP registration is applied on original point")
    print("   clouds to refine the alignment. This time we use a strict")
    print("   distance threshold %.3f." % distance_threshold)
    # Note: as in the official tutorial, result_ransac is read from the
    # enclosing script scope, so the RANSAC step must run before this call.
    result = o3d.pipelines.registration.registration_icp(
        source, target, distance_threshold, result_ransac.transformation,
        o3d.pipelines.registration.TransformationEstimationPointToPlane())
    return result


if __name__ == "__main__":
    # (1) Load the originals, downsample, and visualize
    voxel_size = 0.05  # means 5cm for this dataset
    source, target, source_down, target_down, source_fpfh, target_fpfh = prepare_dataset(
        voxel_size)

    # (2) RANSAC-based global registration
    result_ransac = execute_global_registration(source_down, target_down,
                                                source_fpfh, target_fpfh,
                                                voxel_size)
    print(result_ransac)
    draw_registration_result(source_down, target_down, result_ransac.transformation)

    # (3) Point-to-plane ICP refinement
    result_icp = refine_registration(source, target, source_fpfh, target_fpfh,
                                    voxel_size)
    print(result_icp)
    draw_registration_result(source, target, result_icp.transformation)

(3) Test results (screenshot omitted)

(4) The refined metrics look very good: fitness=6.210325e-01, inlier_rmse=6.564120e-03.

5. Fast global registration

5.1 Principle

(1) Description

The RANSAC-based global registration solution may take a long time because of the countless model proposals and evaluations. [Zhou2016] introduced a faster approach that quickly optimizes line-process weights over a relatively small number of correspondences. Since no model proposal and evaluation is involved in each iteration, the approach proposed in [Zhou2016] can save a lot of computation time.

(2) Related functions

The test code in 5.2 calls registration_fgr_based_on_feature_matching. Open3D also provides a correspondence-based variant (discussed in section 5.3):

o3d.pipelines.registration.registration_fgr_based_on_correspondence(
        source_down, target_down, correspondence_set,
        o3d.pipelines.registration.FastGlobalRegistrationOption())

This tutorial compares the running time of the RANSAC-based global registration against the implementation of [Zhou2016].

5.2 Test code

import open3d as o3d
import numpy as np
from copy import deepcopy
import time

def draw_registration_result(source, target, transformation):
    # source_temp = copy.deepcopy(source)
    # target_temp = copy.deepcopy(target)
    source_temp = deepcopy(source)
    target_temp = deepcopy(target)
    source_temp.paint_uniform_color([1, 0.706, 0])
    target_temp.paint_uniform_color([0, 0.651, 0.929])
    source_temp.transform(transformation)
    o3d.visualization.draw_geometries([source_temp, target_temp],
                                      zoom=0.4559,
                                      front=[0.6452, -0.3036, -0.7011],
                                      lookat=[1.9892, 2.0208, 1.8945],
                                      up=[-0.2779, -0.9482, 0.1556])


def preprocess_point_cloud(pcd, voxel_size):
    print(":: Downsample with a voxel size %.3f." % voxel_size)
    pcd_down = pcd.voxel_down_sample(voxel_size)

    radius_normal = voxel_size * 2
    print(":: Estimate normal with search radius %.3f." % radius_normal)
    pcd_down.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=radius_normal, max_nn=30))

    radius_feature = voxel_size * 5
    print(":: Compute FPFH feature with search radius %.3f." % radius_feature)
    pcd_fpfh = o3d.pipelines.registration.compute_fpfh_feature(
        pcd_down,
        o3d.geometry.KDTreeSearchParamHybrid(radius=radius_feature, max_nn=100))
    return pcd_down, pcd_fpfh


def prepare_dataset(voxel_size):
    print(":: Load two point clouds and disturb initial pose.")

    demo_icp_pcds = o3d.data.DemoICPPointClouds()
    source = o3d.io.read_point_cloud(demo_icp_pcds.paths[0])
    target = o3d.io.read_point_cloud(demo_icp_pcds.paths[1])
    trans_init = np.asarray([[0.0, 0.0, 1.0, 0.0], [1.0, 0.0, 0.0, 0.0],
                             [0.0, 1.0, 0.0, 0.0], [0.0, 0.0, 0.0, 1.0]])
    source.transform(trans_init)
    draw_registration_result(source, target, np.identity(4))

    source_down, source_fpfh = preprocess_point_cloud(source, voxel_size)
    target_down, target_fpfh = preprocess_point_cloud(target, voxel_size)
    return source, target, source_down, target_down, source_fpfh, target_fpfh


def execute_global_registration(source_down, target_down, source_fpfh,
                                target_fpfh, voxel_size):
    distance_threshold = voxel_size * 1.5
    print(":: RANSAC registration on downsampled point clouds.")
    print("   Since the downsampling voxel size is %.3f," % voxel_size)
    print("   we use a liberal distance threshold %.3f." % distance_threshold)
    result = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
        source_down, target_down, source_fpfh, target_fpfh, True,
        distance_threshold,
        o3d.pipelines.registration.TransformationEstimationPointToPoint(False),
        3, [
            o3d.pipelines.registration.CorrespondenceCheckerBasedOnEdgeLength(
                0.9),
            o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(
                distance_threshold)
        ], o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))
    return result


def refine_registration(source, target, source_fpfh, target_fpfh, voxel_size):
    distance_threshold = voxel_size * 0.4
    print(":: Point-to-plane ICP registration is applied on original point")
    print("   clouds to refine the alignment. This time we use a strict")
    print("   distance threshold %.3f." % distance_threshold)
    # Note: as in the official tutorial, result_ransac is read from the
    # enclosing script scope, so the RANSAC step must run before this call.
    result = o3d.pipelines.registration.registration_icp(
        source, target, distance_threshold, result_ransac.transformation,
        o3d.pipelines.registration.TransformationEstimationPointToPlane())
    return result


def execute_fast_global_registration(source_down, target_down, source_fpfh,
                                     target_fpfh, voxel_size):
    distance_threshold = voxel_size * 0.5
    print(":: Apply fast global registration with distance threshold %.3f" \
            % distance_threshold)
    result = o3d.pipelines.registration.registration_fgr_based_on_feature_matching(
        source_down, target_down, source_fpfh, target_fpfh,
        o3d.pipelines.registration.FastGlobalRegistrationOption(
            maximum_correspondence_distance=distance_threshold))
    return result


if __name__ == "__main__":
    # (1) Load the originals, downsample, and visualize
    voxel_size = 0.05  # means 5cm for this dataset
    source, target, source_down, target_down, source_fpfh, target_fpfh = prepare_dataset(
        voxel_size)

    # (2) RANSAC-based global registration, timed
    start = time.time()
    result_ransac = execute_global_registration(source_down, target_down,
                                                source_fpfh, target_fpfh,
                                                voxel_size)
    print("Global registration took %.3f sec.\n" % (time.time() - start))
    print(result_ransac)
    draw_registration_result(source_down, target_down, result_ransac.transformation)

    # # (3) Point-to-plane ICP refinement (call commented out in this test)
    # result_icp = refine_registration(source, target, source_fpfh, target_fpfh,
    #                                 voxel_size)
    # print(result_icp)
    # draw_registration_result(source, target, result_icp.transformation)

    # (4) Fast global registration, timed
    start = time.time()
    result_fast = execute_fast_global_registration(source_down, target_down,
                                                source_fpfh, target_fpfh,
                                                voxel_size)
    print("Fast global registration took %.3f sec.\n" % (time.time() - start))
    print(result_fast)
    draw_registration_result(source_down, target_down, result_fast.transformation)


5.3 Conclusion

PS: I have not looked into this carefully; my numbers look about the same as the official results, and the speedup did not really show up for me. Worth digging into when you have time.

With proper configuration, the accuracy of fast global registration can even be comparable to ICP. See [Zhou2016] for more experimental results.

Besides FPFH-feature-based FGR, global registration can also be performed with correspondence-based FGR via registration_fgr_based_on_correspondence. This is useful if your correspondence front-end differs from FPFH but you still want to run FGR on a given set of putative correspondences. It is invoked as follows:

Note that this call differs from the test code above.

 o3d.pipelines.registration.registration_fgr_based_on_correspondence(
        source_down, target_down, correspondence_set,
        o3d.pipelines.registration.FastGlobalRegistrationOption())
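
Here correspondence_set is an o3d.utility.Vector2iVector of (source index, target index) pairs. A minimal sketch of producing such a set by nearest-neighbor matching in FPFH space and feeding it to the correspondence-based FGR (my own illustration, not code from the tutorial; it assumes source_down/target_down and their FPFH features came from prepare_dataset):

import numpy as np
import open3d as o3d

def fpfh_correspondences(source_fpfh, target_fpfh):
    # For every source feature, find its nearest target feature in 33-D space.
    target_tree = o3d.geometry.KDTreeFlann(target_fpfh)
    pairs = []
    for i in range(source_fpfh.data.shape[1]):
        _, idx, _ = target_tree.search_knn_vector_xd(source_fpfh.data[:, i], 1)
        pairs.append([i, idx[0]])
    return o3d.utility.Vector2iVector(np.asarray(pairs, dtype=np.int32))

correspondence_set = fpfh_correspondences(source_fpfh, target_fpfh)
result_corr = o3d.pipelines.registration.registration_fgr_based_on_correspondence(
    source_down, target_down, correspondence_set,
    o3d.pipelines.registration.FastGlobalRegistrationOption())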
