Kinect DK body recognition and pose comparison

This post describes a study on Kinect DK–based human action recognition and comparison. It references an OpenPose paper and implements a preliminary angle-based comparison. The author shares how to use open-source code from GitHub to compute joint distances and angles, and walks a beginner through running the code: installing the SDKs, configuring the OpenCV environment, and adding the required NuGet packages.


I have recently been working on action comparison, drawing on two papers: 《基于 OpenPose 的人体动作识别对比研究》 (a comparative study of human action recognition based on OpenPose) and 《基于Kinect的康复训练辅助系统设计》 (design of a Kinect-based rehabilitation training assistance system). So far I have a first, rough implementation that compares actions by joint angles; the next step is to evaluate motions using sequences of angle changes, along the lines of the sketch below.
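As a rough illustration of that idea (this is only my sketch, not code from the post or from the repository linked below; the function name sequence_score, the per-frame vector of angles, and the mean-absolute-difference score are all assumptions), a recorded angle sequence could be scored against a reference sequence like this:

#include <vector>
#include <cmath>
#include <cstddef>
#include <algorithm>

// One frame = the joint angles (in degrees) extracted from that frame.
using AngleFrame = std::vector<float>;

// Hypothetical scoring helper: mean absolute difference (in degrees) between a
// recorded angle sequence and a reference sequence, over the frames both have.
// Smaller scores mean the performed motion is closer to the reference; dynamic
// time warping would be a natural refinement for sequences of different speed.
float sequence_score(const std::vector<AngleFrame>& recorded,
                     const std::vector<AngleFrame>& reference)
{
    const std::size_t frames = std::min(recorded.size(), reference.size());
    float total = 0.0f;
    std::size_t count = 0;
    for (std::size_t f = 0; f < frames; ++f)
    {
        const std::size_t joints = std::min(recorded[f].size(), reference[f].size());
        for (std::size_t j = 0; j < joints; ++j)
        {
            total += std::fabs(recorded[f][j] - reference[f][j]);
            ++count;
        }
    }
    return count > 0 ? total / static_cast<float>(count) : 0.0f;
}

A threshold on this score (per action, or per joint) can then decide whether a performed action counts as a match.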
Without further ado: if you want to compute joint distances and joint angles with the Kinect DK, you can download this open-source code from GitHub: https://github.com/yincangqiong/body-tracker-DK
How a beginner can get this code running:
1. Download the camera (Sensor) SDK and the Body Tracking SDK.
2. Download OpenCV (the opencv3140 build used here) and add its path to the system environment variables.
3. Configure the project environment as follows:
Under the project's include and library path settings, add the corresponding directories, taken from the two SDKs and from OpenCV.
4. From NuGet, add the package Microsoft.Azure.Kinect.Sensor. That used to be enough on its own, but this time three more packages (see the screenshot below) had to be added before the project would build. Once the packages are installed, a quick smoke test like the sketch below can confirm that the SDK and OpenCV are wired up correctly.
(Screenshot: the three additional NuGet packages.)
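A minimal smoke test might look like the following. This is only a sketch I am adding for convenience (it is not from the original post); the include paths mirror the repository's layout, and the calls are the standard k4a C API plus OpenCV's CV_VERSION macro:

#include <iostream>
#include "include/k4a.h"
#include "include/opencv.hpp"

int main()
{
    // OpenCV is visible to the project if this prints the expected version string
    std::cout << "OpenCV version: " << CV_VERSION << std::endl;

    // The Sensor SDK is working if it can enumerate and open the device
    uint32_t count = k4a_device_get_installed_count();
    std::cout << "Installed K4A devices: " << count << std::endl;
    if (count == 0) return 1;

    k4a_device_t device = NULL;
    if (k4a_device_open(K4A_DEVICE_DEFAULT, &device) != K4A_RESULT_SUCCEEDED)
    {
        std::cout << "Failed to open the device." << std::endl;
        return 1;
    }

    char serial[64] = { 0 };
    size_t serial_size = sizeof(serial);
    if (k4a_device_get_serialnum(device, serial, &serial_size) == K4A_BUFFER_RESULT_SUCCEEDED)
    {
        std::cout << "Device serial number: " << serial << std::endl;
    }

    k4a_device_close(device);
    return 0;
}

If this prints the OpenCV version and the device serial number, the include paths, libraries, and runtime DLLs are all in place.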
Below is my rough modification, which saves the joint-angle data to a txt file once every three frames:

#include <iostream>
#include <fstream>
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
// OpenCV
#include "include/opencv.hpp"
// Kinect DK sensor and body-tracking SDKs
#include "include/k4a.h"
#include "include/k4a.hpp"
#include "include/k4abt.h"
using namespace std;
#define MAX 10

#define VERIFY(result, error)                                                                            \
    if (result != K4A_RESULT_SUCCEEDED)                                                                  \
    {                                                                                                    \
        printf("%s \n - (File: %s, Function: %s, Line: %d)\n", error, __FILE__, __FUNCTION__, __LINE__); \
        exit(1);                                                                                         \
    }
// Angle (in degrees) at point 2 of the triangle formed by points 1-2-3,
// computed from the three side lengths with the law of cosines.
float get_angle(float x1, float y1, float z1, float x2, float y2, float z2, float x3, float y3, float z3)
{
    float dis1 = sqrt(pow((x1 - x2), 2) + pow((y1 - y2), 2) + pow((z1 - z2), 2));
    float dis2 = sqrt(pow((x2 - x3), 2) + pow((y2 - y3), 2) + pow((z2 - z3), 2));
    float dis3 = sqrt(pow((x1 - x3), 2) + pow((y1 - y3), 2) + pow((z1 - z3), 2));
    float angle = acos((dis1 * dis1 + dis2 * dis2 - dis3 * dis3) / (2 * dis1 * dis2)) * 180.0 / 3.1415926;
    return angle;
}
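// Illustrative use (my example, not from the post): the elbow angle would be the
// angle at the elbow joint of the shoulder-elbow-wrist triangle, e.g.
//   get_angle(shoulder.x, shoulder.y, shoulder.z,
//             elbow.x,    elbow.y,    elbow.z,
//             wrist.x,    wrist.y,    wrist.z);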

int main()
{
    // Measured joint angles for the current frame, and the reference ("standard") angles
    float Angel[10];
    float standard[10] = { 155.82, 156.59, 102.03, 97.76, 101.81, 93.06, 119.62, 121.65 };

    // Make sure exactly one device is connected
    const uint32_t device_count = k4a_device_get_installed_count();
    if (1 == device_count)
    {
        std::cout << "Found " << device_count << " connected device. " << std::endl;
    }
    else
    {
        std::cout << "Error: expected exactly one K4A device, found " << device_count << ". " << std::endl;
        return 1;
    }

    // Open the device
    k4a_device_t device = NULL;
    VERIFY(k4a_device_open(0, &device), "Open K4A Device failed!");
    std::cout << "Done: open device. " << std::endl;

    // Start camera. Make sure depth camera is enabled.
    k4a_device_configuration_t deviceConfig = K4A_DEVICE_CONFIG_INIT_DISABLE_ALL;
    deviceConfig.depth_mode = K4A_DEPTH_MODE_NFOV_2X2BINNED;
    deviceConfig.color_resolution = K4A_COLOR_RESOLUTION_720P;
    deviceConfig.camera_fps = K4A_FRAMES_PER_SECOND_30;
    deviceConfig.color_format = K4A_IMAGE_FORMAT_COLOR_BGRA32;
    deviceConfig.synchronized_images_only = true;// ensures that depth and color images are both available in the capture

    // Start the cameras
    VERIFY(k4a_device_start_cameras(device, &deviceConfig), "Start K4A cameras failed!");
    std::cout << "Done: start camera." << std::endl;

    // Query the sensor calibration (used by the body tracker and for the 3D-to-2D projection below)
    k4a_calibration_t sensor_calibration;
    VERIFY(k4a_device_get_calibration(device, deviceConfig.depth_mode, deviceConfig.color_resolution, &sensor_calibration),
        "Get depth camera calibration failed!");

    // Create the body tracker
    k4abt_tracker_t tracker = NULL;
    k4abt_tracker_configuration_t tracker_config = K4ABT_TRACKER_CONFIG_DEFAULT;
    VERIFY(k4abt_tracker_create(&sensor_calibration, tracker_config, &tracker), "Body tracker initialization failed!");

    cv::Mat cv_rgbImage_with_alpha;
    cv::Mat cv_rgbImage_no_alpha;
    cv::Mat cv_depth;
    cv::Mat cv_depth_8U;

    int frame_count = 1;
    // Counters for the four reference actions
    int action1 = 0;
    int action2 = 0;
    int action3 = 0;
    int action4 = 0;

    //while (true)
    while (frame_count <= 30)
    {
        k4a_capture_t sensor_capture;
        k4a_wait_result_t get_capture_result = k4a_device_get_capture(device, &sensor_capture, K4A_WAIT_INFINITE);
        // Get the RGB and depth images from the capture
        k4a_image_t rgbImage = k4a_capture_get_color_image(sensor_capture);
        k4a_image_t depthImage = k4a_capture_get_depth_image(sensor_capture);

        // RGB: wrap the BGRA buffer in a cv::Mat and drop the alpha channel
        cv_rgbImage_with_alpha = cv::Mat(k4a_image_get_height_pixels(rgbImage), k4a_image_get_width_pixels(rgbImage), CV_8UC4, k4a_image_get_buffer(rgbImage));
        cvtColor(cv_rgbImage_with_alpha, cv_rgbImage_no_alpha, cv::COLOR_BGRA2BGR);
        // Depth: wrap the 16-bit depth buffer and convert to 8-bit for display
        cv_depth = cv::Mat(k4a_image_get_height_pixels(depthImage), k4a_image_get_width_pixels(depthImage), CV_16U, k4a_image_get_buffer(depthImage), k4a_image_get_stride_bytes(depthImage));
        cv_depth.convertTo(cv_depth_8U, CV_8U, 1);

        // Run body tracking on this capture
        if (get_capture_result == K4A_WAIT_RESULT_SUCCEEDED)
        {
            frame_count++;
            k4a_wait_result_t queue_capture_result = k4abt_tracker_enqueue_capture(tracker, sensor_capture, K4A_WAIT_INFINITE);
            k4a_capture_release(sensor_capture); // Remember to release the sensor capture once you finish using it
            if (queue_capture_result == K4A_WAIT_RESULT_TIMEOUT)
            {
                // It should never hit timeout when K4A_WAIT_INFINITE is set.
                printf("Error! Add capture to tracker process queue timeout!\n");
                break;
            }
            else if (queue_capture_result == K4A_WAIT_RESULT_FAILED)
            {
                printf("Error! Add capture to tracker process queue failed!\n");
                break;
            }

            k4abt_frame_t body_frame = NULL;
            k4a_wait_result_t pop_frame_result = k4abt_tracker_pop_result(tracker, &body_frame, K4A_WAIT_INFINITE);
            if (pop_frame_result == K4A_WAIT_RESULT_SUCCEEDED)
            {
                // Successfully popped the body tracking result. Start your processing 
                size_t num_bodies = k4abt_frame_get_num_bodies(body_frame);

                for (size_t i = 0; i < num_bodies; i++)
                {
                    k4abt_skeleton_t skeleton;
                    k4abt_frame_get_body_skeleton(body_frame, i, &skeleton);
                    //std::cout << typeid(skeleton.joints->position.v).name();

                    k4a_float2_t P_NOSE_2D;
                    k4a_float2_t P_EYE_RIGHT_2D;
                    k4a_float2_t P_EAR_RIGHT_2D;
                    k4a_float2_t P_EYE_LEFT_2D;
                    k4a_float2_t P_EAR_LEFT_2D;
                    k4a_float2_t P_SHOULDER_RIGHT_2D;
                    k4a_float2_t P_SHOULDER_LEFT_2D;
                    k4a_float2_t P_ELBOW_RIGHT_2D;
                    k4a_float2_t P_ELBOW_LEFT_2D;
                    k4a_float2_t P_WRIST_RIGHT_2D;
                    k4a_float2_t P_WRIST_LEFT_2D;
                    k4a_float2_t P_HAND_RIGHT_2D;
                    k4a_float2_t P_HAND_LEFT_2D;
                    k4a_float2_t P_THUMB_RIGHT_2D;
                    k4a_float2_t P_THUMB_LEFT_2D;
                    k4a_float2_t P_HANDTIP_RIGHT_2D;
                    k4a_float2_t P_HANDTIP_LEFT_2D;
                    k4a_float2_t P_SPINE_CHEST_2D;
                    k4a_float2_t P_HEAD_2D;
                    k4a_float2_t P_NECK_2D;
                    k4a_float2_t P_SPINE_NAVEL_2D;
                    k4a_float2_t P_PELVIS_2D;
                    k4a_float2_t P_CLAVICLE_RIGHT_2D;
                    k4a_float2_t P_CLAVICLE_LEFT_2D;
                    k4a_float2_t P_HIP_RIGHT_2D;
                    k4a_float2_t P_HIP_LEFT_2D;
                    k4a_float2_t P_KNEE_LEFT_2D;
                    k4a_float2_t P_KNEE_RIGHT_2D;
                    k4a_float2_t P_ANKLE_LEFT_2D;
                    k4a_float2_t P_ANKLE_RIGHT_2D;
                    k4a_float2_t P_FOOT_LEFT_2D;
                    k4a_float2_t P_FOOT_RIGHT_2D;
                    int result;

                    // Head joints
                    k4abt_joint_t  P_NOSE = skeleton.joints[K4ABT_JOINT_NOSE];
                    k4abt_joint_t  P_HEAD = skeleton.joints[K4ABT_JOINT_HEAD];
                    k4abt_joint_t  P_EYE_RIGHT = skeleton.joints[K4ABT_JOINT_EYE_RIGHT];
                    k4abt_joint_t  P_EAR_RIGHT = skeleton.joints[K4ABT_JOINT_EAR_RIGHT];
                    k4abt_joint_t  P_EYE_LEFT = skeleton.joints[K4ABT_JOINT_EYE_LEFT];
                    k4abt_joint_t  P_EAR_LEFT = skeleton.joints[K4ABT_JOINT_EAR_LEFT];
                    // Project the 3D joints to 2D and draw them on the color image
                    k4a_calibration_3d_to_2d(&sensor_calibration, &P_HEAD.position, K4A_CALIBRATION_TYPE_DEPTH, K4A_CALIBRATION_TYPE_COLOR, &P_HEAD_2D, &result);
                    k4a_calibration_3d_to_2d(&sensor_calibration
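The listing is cut off here, before the saving step described above. As a minimal sketch of the "append the angles to a txt file every three frames" behaviour (the file name angles.txt, the std::ofstream opened in append mode, and the assumption that eight angles have been stored in Angel[] are mine, not the author's exact code), the per-body loop could end with something like:

                    // Hypothetical logging step: every third frame, append the measured
                    // angles (assumed to be stored in Angel[0..7]) to a text file.
                    if (frame_count % 3 == 0)
                    {
                        std::ofstream angle_file("angles.txt", std::ios::app);
                        for (int k = 0; k < 8; k++)
                        {
                            angle_file << Angel[k] << (k < 7 ? " " : "\n");
                        }
                    }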