Related links:
Project:
- tf-pose-estimation https://github.com/ildoonet/tf-pose-estimation
- tf-pose-estimation for ROS https://github.com/ildoonet/tf-pose-estimation/blob/master/etcs/ros.md
Paper: tf-pose is essentially the TensorFlow + Python version of openpose; the openpose paper is Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields
Environment:
- 9th-gen i5 + GTX 1660 Ti, Ubuntu 18.04
- NVIDIA driver 418.74, CUDA 10.0.130, Anaconda 4.3.0, tensorflow-gpu 1.13.1
- OpenCV 3.4.1
- TensorRT
- protobuf-3.11.1
Contents
1. cv_bridge depends on Python 2 and conflicts with the ROS Python 3 environment
一、Setting up the workspace
# Create the ROS workspace
mkdir -p ~/gesture_recognition/catkin_tfpose/src
cd ~/gesture_recognition/catkin_tfpose/src
catkin_init_workspace   # generates catkin_tfpose/src/CMakeLists.txt
cd ..
catkin_make   # builds the workspace, generating catkin_tfpose/build/ and catkin_tfpose/devel/
# Create a package
#cd src
#catkin_create_pkg bagname std_msgs roscpp rospy   # the last three are the package's dependencies
# Download the package
cd src
git clone https://github.com/ildoonet/tf-pose-estimation
pip install -r tf-pose-estimation/requirements.txt
roslaunch tfpose_ros demo_video.launch
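Before going further, a quick sanity check (my own addition, not part of the original steps) that the tf_pose Python package is importable and a pretrained graph can be located. It assumes you run it from the cloned repo root, or have installed the package with python setup.py install:
# Confirm tf_pose imports and the bundled mobilenet_thin graph resolves.
from tf_pose.networks import get_graph_path
print(get_graph_path('mobilenet_thin'))   # prints the path to the frozen graph_opt.pb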
二、Running openpose standalone
For this part only the standalone package matters; run it directly with Python.
(一) Image demo
cd ~/gesture_recognition/catkin_tfpose/src/tf_pose_estimation
python run.py --model=mobilenet_thin --resize=432x368 --image=./images/p1.jpg
1. Following the GitHub instructions, the final run failed with: Failed to initialize TensorRT. This is either because the TensorRT installation path is not in LD_LIBRARY_PATH, or because you do not have it installed. I installed TensorRT per https://zhuanlan.zhihu.com/p/165359425 but got the same error. Per issues/515, commenting out the line import tensorflow.contrib.tensorrt as trt fixed it (an optional-import variant is sketched after this list). The line lives in:
- tf_pose_estimation/tf-pose/estimator.py
- /home/muxi/anaconda3/lib/python3.6/site-packages/tf_pose-0.1.1-py3.6-linux-x86_64.egg/tf_pose/estimator.py
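As an alternative to deleting the import outright, here is a minimal sketch of making it optional (my own variation on the issues/515 workaround, assuming nothing else in estimator.py actually needs TensorRT on this setup):
# estimator.py: make the TensorRT import optional instead of commenting it out.
# Catch Exception broadly, since the failure is raised while TensorRT initializes,
# not necessarily as a plain ImportError.
try:
    import tensorflow.contrib.tensorrt as trt
    HAS_TENSORRT = True
except Exception:
    trt = None
    HAS_TENSORRT = False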
2. Running again raised another error: E tensorflow/stream_executor/cuda/cuda_dnn.cc:334] Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR. This is most likely a GPU memory allocation problem, so configure the session to let the GPU allocate memory on demand. Following https://blog.youkuaiyun.com/icestorm_rain/article/details/105337071, add the following at the top of the script; this solved the problem.
import tensorflow as tf
from tensorflow.compat.v1 import InteractiveSession
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
session = InteractiveSession(config=config)
With that, it runs successfully.
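If allow_growth alone is not enough on another machine, a second knob worth trying (my own suggestion, not from the referenced post) is to cap the fraction of GPU memory TensorFlow may claim:
import tensorflow as tf
from tensorflow.compat.v1 import InteractiveSession
config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.7   # 0.7 is an arbitrary example value; tune it for your card
session = InteractiveSession(config=config)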
(二) Webcam demo
#python run_webcam.py --model=mobilenet_thin --resize=432x368 --camera=0
python ~/gesture_recognition/catkin_tfpose/src/tf_pose_estimation/run_webcam.py --model=mobilenet_thin --resize=432x368 --camera=0
Running it failed with: Cannot mix incompatible Qt library (version 0x50907) with this library (version 0x50905), which presumably means Qt 5.9.7 is expected but 5.9.5 is being picked up.
Spyder used to throw a pile of Qt-related errors too; back then the cause was shared-library precedence in the loader path. The environment is such a mess that it gets annoying, and Anaconda's promise of isolated, non-interfering virtual environments is not as rosy as advertised. So locate the Qt 5.9.7 libraries:
sudo find ~ -name 'libQt*' -print
The 5.9.7 libraries turn out to live under Anaconda's pkgs directory, so add the following to ~/.bashrc:
export LD_LIBRARY_PATH=/home/muxi/anaconda3/pkgs/qt-5.9.7-h5867ecd_1/lib:$LD_LIBRARY_PATH
Done. The frame rate is decent, around 30 FPS.
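To confirm which Qt the Python process actually loads after the change, a quick check (my own debugging snippet, assuming PyQt5 is the binding Anaconda installed):
# Compare the Qt version PyQt5 was built against with the one loaded at runtime.
from PyQt5.QtCore import QT_VERSION_STR, qVersion
print("Qt version PyQt5 was built against:", QT_VERSION_STR)
print("Qt version loaded at runtime:", qVersion())   # should now report 5.9.7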
(三) Video demo
#python run_video.py --model=mobilenet_thin --video=/home/muxi/datasets/UAVGesture/AllClear/S1_allClear_HD.mp4
python ~/gesture_recognition/catkin_tfpose/src/tf_pose_estimation/run_video.py --model=mobilenet_thin --video=/home/muxi/datasets/UAVGesture/AllClear/S1_allClear_HD.mp4
For accuracy use --model=cmu instead, but the frame rate drops to around 8 FPS.
Problem: the video plays but no skeleton is drawn.
- Method 1: in run_webcam.py, around line 50, change cam = cv2.VideoCapture(args.camera) to cam = cv2.VideoCapture('/home/muxi/datasets/UAVGesture/AllClear/S1_allClear_HD.mp4'), then run python run_webcam.py --model=mobilenet_thin --resize=432x368 --camera=0. Works (see the sketch after this list).
- Method 2: apply "Fix 'no skeleton graph' issue in run_video.py #650". Also works.
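For reference, method 1 comes down to this one-line change in run_webcam.py (a sketch of the edit described above; the path is the one used throughout this post):
# run_webcam.py, around line 50: read frames from the video file instead of the camera index
video_path = '/home/muxi/datasets/UAVGesture/AllClear/S1_allClear_HD.mp4'
cam = cv2.VideoCapture(video_path)   # original line: cam = cv2.VideoCapture(args.camera)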
(四) Printing keypoint coordinates to the terminal while running
To print the skeleton keypoint coordinates in the terminal while still visualizing them, follow https://github.com/ildoonet/tf-pose-estimation/issues/619 and modify draw_humans in estimator.py:
def draw_humans(npimg, humans, imgcopy=False):
    if imgcopy:
        npimg = np.copy(npimg)
    image_h, image_w = npimg.shape[:2]  # img.shape[:2] gives the image height and width; 480x640 for the webcam
    print("image size:", image_h, image_w)
    centers = {}
    j = 1
    for human in humans:  # loop over the detected humans
        # print(human.body_parts.items())
        print("human", j, "of", len(humans), ":")
        j = j + 1
        # draw point
        for i in range(common.CocoPart.Background.value):  # loop over the keypoints
            if i not in human.body_parts.keys():
                continue
            body_part = human.body_parts[i]
            center = (int(body_part.x * image_w + 0.5), int(body_part.y * image_h + 0.5))  # pixel position in the input image, used for drawing
            print(" ", body_part, ", image location:", center)  # e.g. BodyPart:0-(0.56, 0.20) score=0.73
            centers[i] = center
            cv2.circle(npimg, center, 2, common.CocoColors[i], thickness=2, lineType=8, shift=0)
        # draw line