Fast Point Feature Histogram (FPFH) Descriptors in PCL

This post introduces the Fast Point Feature Histogram (FPFH), a point cloud processing technique suited to real-time applications. By exploiting multi-core/multi-threaded parallel computation, the method can speed up feature extraction on dense point clouds by as much as 6-8x. A complete FPFH computation example based on the PCL library is included.


For real-time or near-real-time applications, computing Point Feature Histograms (PFH) on dense point clouds is a major performance bottleneck. The Fast Point Feature Histogram (FPFH) is a simplified form of the PFH computation. Because it drastically reduces the overall complexity of PFH, FPFH becomes practical for real-time use.
For users with demanding speed requirements, PCL provides an alternative FPFH estimation implementation that uses multi-core/multi-threaded parallelism via OpenMP. The class is named pcl::FPFHEstimationOMP, and its API is 100% compatible with the single-threaded pcl::FPFHEstimation, so it can be used as a drop-in replacement. On an 8-core system, the OpenMP implementation typically finishes the same computation 6-8 times faster than the single-core version.
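As a minimal sketch of the drop-in swap (assuming a cloud and its normals have already been computed; the helper name below is only for illustration), switching to the OpenMP version changes nothing but the estimator type, optionally pinning the thread count with setNumberOfThreads:

```cpp
#include <pcl/features/fpfh_omp.h>   // pcl::FPFHEstimationOMP
#include <pcl/point_types.h>

// Illustrative helper: cloud and normals are assumed to have been computed beforehand.
pcl::PointCloud<pcl::FPFHSignature33>::Ptr
computeFPFH(const pcl::PointCloud<pcl::PointXYZ>::Ptr &cloud,
            const pcl::PointCloud<pcl::Normal>::Ptr &normals)
{
    // Swapping pcl::FPFHEstimation for pcl::FPFHEstimationOMP is only a type change.
    pcl::FPFHEstimationOMP<pcl::PointXYZ, pcl::Normal, pcl::FPFHSignature33> fpfh;
    fpfh.setNumberOfThreads(8);   // 0 (the default) lets OpenMP choose the thread count
    fpfh.setInputCloud(cloud);
    fpfh.setInputNormals(normals);
    fpfh.setRadiusSearch(0.05);   // must be larger than the normal-estimation radius
    pcl::PointCloud<pcl::FPFHSignature33>::Ptr fpfhs(new pcl::PointCloud<pcl::FPFHSignature33>);
    fpfh.compute(*fpfhs);
    return fpfhs;
}
```

Passing 0 to setNumberOfThreads lets OpenMP decide automatically; on an 8-core machine this is where the 6-8x speed-up quoted above comes from.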

//main.cpp
#include <iostream>
#include <string>
#include <stdio.h>
#include <Eigen/Core>
#include <pcl/io/ply_io.h>
#include <pcl/io/pcd_io.h>
#include <pcl/correspondence.h>                 //correspondences represent matches between two entities (e.g. points, descriptors)
#include <pcl/features/normal_3d.h>             //normal estimation

#include <pcl/filters/uniform_sampling.h>    //uniform sampling
#include <pcl/filters/voxel_grid.h>
#include <pcl/visualization/pcl_visualizer.h>   //visualization
#include <pcl/common/transforms.h>              //transformation matrices
#include <pcl/console/parse.h>
#include <pcl/point_types.h>

#include <pcl/common/time.h>
#include <pcl/console/print.h>
#include <pcl/features/normal_3d_omp.h>
#include <pcl/features/fpfh_omp.h>
#include <pcl/filters/filter.h>
#include <pcl/registration/icp.h>
#include <pcl/registration/sample_consensus_prerejective.h>
#include <pcl/segmentation/sac_segmentation.h>
using namespace std;

typedef pcl::PointXYZRGB PointT;             //Point with color
typedef pcl::PointCloud<PointT> PointCloud; //PointCloud with color

int main(int argc, char *argv[]){
    if (argc < 2)
    {
        std::cerr << "Usage: " << argv[0] << " <model.pcd>" << std::endl;
        return -1;
    }
    string model_filename = argv[1];
    bool display = true;
    bool downSampling = false;

    //load pcd/ply point cloud
    pcl::PointCloud<PointT>::Ptr model(new pcl::PointCloud<PointT>()); //model point cloud
    /*
    if (pcl::io::loadPCDFile<PointT>(model_filename, *model)<0)
    {
      return -1;
    }
    */
    if (pcl::io::loadPCDFile(model_filename, *model) < 0)
    {
      std::cerr << "Error loading model cloud." << std::endl;
      return (-1);
    }


    if (downSampling)
    {
        //create the filtering object
        std::cout << "Number of points before downSampling: " << model->points.size() << std::endl;
        pcl::VoxelGrid<PointT> sor;
        sor.setInputCloud(model);
        sor.setLeafSize(0.01, 0.01, 0.01);
        sor.filter(*model);
        std::cout << "Number of points after downSampling: " << model->points.size() << std::endl;
    }

    // Normal estimation
    pcl::NormalEstimation<PointT, pcl::PointNormal> ne;
    pcl::PointCloud<pcl::PointNormal>::Ptr normals(new pcl::PointCloud<pcl::PointNormal>);
    pcl::search::KdTree<PointT>::Ptr tree(new pcl::search::KdTree<PointT>());
    ne.setInputCloud(model);
    ne.setSearchMethod(tree);
    ne.setKSearch(10);
    //boost::shared_ptr<pcl::PointIndices> indices;
    //indices->indices.push_back(200);
    //ne.setIndices(indices);
    ne.compute(*normals);
    /*
    for(int i = 100; i<101; ++i)
        printf("points[%d]=[%f, %f, %f], normal=[%f, %f, %f], curvature = %f\n",i, model->points[i].x, model->points[i].y, model->points[i].z, normals->points[i].normal_x, normals->points[i].normal_y, normals->points[i].normal_z, normals->points[i].curvature);
    //for (int i = 0; i < normals->points.size(); ++i)
    //  printf("points[%f, %f, %f], normal[%f, %f, %f]", normals->points[i].x, normals->points[i].y, normals->points[i].z, normals->points[i].normal_x, normals->points[i].normal_x, normals->points[i].y, normals->points[i].normal_z);
    */
    //create the FPFH estimation class, and pass the input dataset+normals to it
    pcl::FPFHEstimationOMP<PointT, pcl::PointNormal, pcl::FPFHSignature33> fpfh;
    pcl::PointCloud<pcl::FPFHSignature33>::Ptr fpfhs(new pcl::PointCloud<pcl::FPFHSignature33>());
    fpfh.setInputCloud(model);
    fpfh.setInputNormals(normals);
    fpfh.setSearchMethod(tree);
    fpfh.setRadiusSearch(0.02); // Use all neighbors within a 2 cm sphere; IMPORTANT: this radius must be larger than the neighborhood used to estimate the surface normals!
    fpfh.compute(*fpfhs);// Compute the features


    FILE* fid = fopen("source.bin", "wb");
    int nV = normals->size(), nDim = 33;
    fwrite(&nV, sizeof(int), 1, fid);            // size_t fwrite(const void *ptr, size_t size, size_t nmemb, FILE *stream)
    fwrite(&nDim, sizeof(int), 1, fid);
    for (int v = 0; v < nV; v++)
    {
        const pcl::PointNormal &pt = normals->points[v];
        float nxyz[3] = { pt.normal_x, pt.normal_y, pt.normal_z };   // note: these are the estimated normal components, not the point coordinates
        fwrite(nxyz, sizeof(float), 3, fid);
        const pcl::FPFHSignature33 &feature = fpfhs->points[v];
        fwrite(feature.histogram, sizeof(float), 33, fid);
    }
    fclose(fid);

    if (display)
    {
        // Concatenate the XYZ and normal fields
        //pcl::PointCloud<pcl::PointNormal>::Ptr cloud_with_normals(new pcl::PointCloud<pcl::PointNormal>());
        //pcl::concatenateFields(*model, *normals, *cloud_with_normals);
        //pcl::io::savePLYFile("result.ply", *cloud_with_normals);
        boost::shared_ptr<pcl::visualization::PCLVisualizer> viewer(new pcl::visualization::PCLVisualizer("Viewer"));
        viewer->setBackgroundColor(0, 0, 0);
        viewer->addPointCloud<PointT>(model, "model");
        viewer->setPointCloudRenderingProperties(pcl::visualization::PCL_VISUALIZER_POINT_SIZE, 3, "model");
        viewer->addPointCloudNormals<PointT, pcl::PointNormal>(model, normals, 10, 0.05, "normals");  //display a normal for every 10th point, with an arrow length of 0.05
        viewer->addCoordinateSystem(1.0);
        viewer->initCameraParameters();
        while (!viewer->wasStopped())
        {
            viewer->spinOnce(100);
            boost::this_thread::sleep(boost::posix_time::microseconds(100000));
        }
    }
  return 0;
}
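For reference, here is a minimal sketch of reading `source.bin` back, assuming the layout written above (an int point count, an int descriptor dimension, then per point three floats followed by a 33-float histogram):

```cpp
#include <cstdio>
#include <vector>

// Minimal sketch of a reader for the source.bin layout written above:
// int nV, int nDim, then for each point 3 floats (the normal) + nDim floats (the FPFH histogram).
int main()
{
    std::FILE* fid = std::fopen("source.bin", "rb");
    if (!fid) { std::perror("source.bin"); return -1; }

    int nV = 0, nDim = 0;
    std::fread(&nV, sizeof(int), 1, fid);
    std::fread(&nDim, sizeof(int), 1, fid);

    std::vector<float> normals(3 * nV);
    std::vector<float> features(nDim * nV);
    for (int v = 0; v < nV; ++v)
    {
        std::fread(&normals[3 * v], sizeof(float), 3, fid);
        std::fread(&features[nDim * v], sizeof(float), nDim, fid);
    }
    std::fclose(fid);

    std::printf("Read %d descriptors of dimension %d\n", nV, nDim);
    return 0;
}
```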

The corresponding CMakeLists.txt:

# set project's name
PROJECT( PointCloudSegmentation )

###############################################################################
# CMake settings
CMAKE_MINIMUM_REQUIRED(VERSION 2.8.3)

add_definitions(-std=c++11)

# OpenMP
FIND_PACKAGE(OpenMP)
IF(OPENMP_FOUND)
    OPTION(WITH_OPENMP "Whether to use parallel processing capabilities of OPENMP. ON/OFF" ON)
ENDIF(OPENMP_FOUND)

IF(OPENMP_FOUND AND WITH_OPENMP)
    MESSAGE(STATUS "With OpenMP ")
    SET (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -DMAX_OPENMP_NUM_THREADS=${PROCESSOR_COUNT} -DOPENMP_NUM_THREADS=${PROCESSOR_COUNT} ${OpenMP_CXX_FLAGS} -DOPENMP")
    SET(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -DMAX_OPENMP_NUM_THREADS=${PROCESSOR_COUNT} -DOPENMP_NUM_THREADS=${PROCESSOR_COUNT} ${OpenMP_CXX_FLAGS} ${OpenMP_C_FLAGS} -DOPENMP")
ELSE(OPENMP_FOUND AND WITH_OPENMP)
    MESSAGE(STATUS "Without OpenMP")
    SET (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -DMAX_OPENMP_NUM_THREADS=1 -DOPENMP_NUM_THREADS=1")
    SET (CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -DMAX_OPENMP_NUM_THREADS=1 -DOPENMP_NUM_THREADS=1")
ENDIF(OPENMP_FOUND AND WITH_OPENMP)

# OpenCV (not actually used by main.cpp)
FIND_PACKAGE(OpenCV REQUIRED)

# PCL
FIND_PACKAGE(PCL 1.8 REQUIRED)


add_definitions(${PCL_DEFINITIONS})

INCLUDE_DIRECTORIES( ${PCL_INCLUDE_DIRS})




add_executable(main src/main.cpp)
target_link_libraries(main  ${PCL_LIBRARIES})
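With this CMakeLists.txt and the source placed at `src/main.cpp` (as referenced by `add_executable`), a typical out-of-source build is `mkdir build && cd build && cmake .. && make`, after which the example is run as `./main <model.pcd>`.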
### FPFH Point Feature Histograms: A PCL Implementation Tutorial

#### 1. Background

The Fast Point Feature Histogram (FPFH) is an efficient descriptor of the local geometry around a point in a point cloud. Compared with the classic Point Feature Histogram (PFH), FPFH simplifies the computation and significantly lowers the time complexity, making it better suited to real-time processing of large point clouds.

The core idea of FPFH is the Simplified Point Feature Histogram (SPFH): the descriptor is built by statistically analysing the geometric relations between each query point and its neighbours. Concretely, FPFH decomposes PFH's expensive fully connected neighbourhood computation into a series of simple local-neighbourhood operations and fuses these local results by weighted averaging[^3].

---

#### 2. How the computation works

The main steps of FPFH are:

- **Step 1: compute the SPFH.** For each query point \( p_q \), the SPFH is a low-dimensional histogram built from the geometric attributes (normal directions, angles and distances) between \( p_q \) and the points in its k-neighbourhood[^4].
- **Step 2: combine the SPFHs into the FPFH.** The SPFHs obtained above are merged with distance-dependent weights into the final FPFH descriptor of the target point,

  \[
  \mathrm{FPFH}(p_q) = \mathrm{SPFH}(p_q) + \frac{1}{k}\sum_{i=1}^{k}\frac{1}{\omega_i}\,\mathrm{SPFH}(p_i),
  \]

  where \( \omega_i \) is the distance between \( p_q \) and its neighbour \( p_i \). This strategy speeds up the overall computation while retaining most of PFH's discriminative power[^1].

---

#### 3. Computing FPFH with PCL: code examples

The following Python and C++ fragments show how PCL can be used to map a raw set of 3D coordinates into the corresponding FPFH feature space.

##### (1) Python version

```python
import pcl


def compute_fpfh_features(cloud):
    """
    Compute Fast Point Feature Histograms (FPFH) descriptors for a given point cloud.

    Parameters:
        cloud : pcl.PointCloud
            Input point cloud data

    Returns:
        Computed FPFH feature set
    """
    # Estimate normals first
    ne = cloud.make_NormalEstimation()
    tree = cloud.make_kdtree()
    ne.set_SearchMethod(tree)
    ne.set_RadiusSearch(0.03)  # Set the search radius according to the scale of your dataset
    normals = ne.compute()

    # Initialize the FPFH estimation object
    fpfh_estimator = pcl.features.FPFHEstimation()
    fpfh_estimator.setInputCloud(cloud)
    fpfh_estimator.setInputNormals(normals)

    # Define a k-d tree again, specifically for the FPFH computation
    fpfh_tree = cloud.make_kdtree_FLANN()
    fpfh_estimator.set_SearchMethod(fpfh_tree)
    fpfh_estimator.set_RadiusSearch(0.05)  # Adjust according to the desired neighborhood size

    # Perform the actual calculation and return the result
    return fpfh_estimator.compute()


if __name__ == "__main__":
    input_cloud = pcl.load_XYZRGB('path_to_your_pointcloud.pcd')
    result = compute_fpfh_features(input_cloud)
    print("Computed FPFH features:", result.size())
```

##### (2) C++ version

```cpp
#include <iostream>
#include <pcl/point_types.h>
#include <pcl/io/pcd_io.h>
#include <pcl/search/kdtree.h>
#include <pcl/features/normal_3d.h>
#include <pcl/features/fpfh.h>

void estimateFPFH(const pcl::PointCloud<pcl::PointXYZ>::Ptr &inputCloud,
                  const pcl::search::KdTree<pcl::PointXYZ>::Ptr &tree,
                  pcl::PointCloud<pcl::FPFHSignature33> &fpfhs)
{
    // Create a normal estimator instance
    pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> norm_est;
    pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>());
    norm_est.setInputCloud(inputCloud);
    norm_est.setSearchMethod(tree);
    norm_est.setRadiusSearch(0.03);   // Radius should be adjusted to the scene dimensions
    norm_est.compute(*normals);

    // Instantiate an FPFH estimation object
    pcl::FPFHEstimation<pcl::PointXYZ, pcl::Normal, pcl::FPFHSignature33> fpfh_est;
    fpfh_est.setInputCloud(inputCloud);
    fpfh_est.setInputNormals(normals);
    fpfh_est.setSearchMethod(tree);
    fpfh_est.setRadiusSearch(0.05);   // Must be larger than the normal-estimation radius

    // Run the FPFH computation
    fpfh_est.compute(fpfhs);
}

int main(int argc, char** argv)
{
    pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::io::loadPCDFile<pcl::PointXYZ>("example_pcl_file.pcd", *cloud);

    pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>());
    pcl::PointCloud<pcl::FPFHSignature33> fpfhs;
    estimateFPFH(cloud, tree, fpfhs);

    std::cout << "Number of estimated FPFH signatures: " << fpfhs.width * fpfhs.height << std::endl;
    return 0;
}
```

The two fragments above show how, with the PCL toolkit, a raw set of 3D coordinates can be mapped into the corresponding high-dimensional feature space in different programming environments[^2][^3].

---

#### 4. Key parameters

A few hyper-parameters need tuning in practice to get good results:

- `setRadiusSearch`: the search radius directly controls the balance between approximation quality and accuracy;
- the quality of the input cloud also affects the stability of the output, so it is worth denoising and smoothing the data during preprocessing.

---
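As a small illustration of the radius guideline, here is a minimal sketch that derives both search radii from a single scene-dependent scale; the leaf size and the 2x/5x multipliers are assumed starting values to tune per dataset, not values prescribed above:

```cpp
// Minimal sketch: tie the normal-estimation and FPFH radii to one scene-dependent scale.
// The 0.01 leaf size and the 2x / 5x multipliers are assumed starting points, not prescriptions.
const float leaf_size     = 0.01f;             // e.g. the voxel-grid leaf size used for downsampling
const float normal_radius = 2.0f * leaf_size;  // neighborhood radius for normal estimation
const float fpfh_radius   = 5.0f * leaf_size;  // FPFH radius -- must be larger than normal_radius

norm_est.setRadiusSearch(normal_radius);       // norm_est / fpfh_est as in the C++ example above
fpfh_est.setRadiusSearch(fpfh_radius);
```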