Paper Deep-Dive — A2J: Anchor-to-Joint Regression Network for 3D Articulated Pose Estimation from a Single Depth Image
The title is quite long and would not fit, so its formatting is a bit off — apologies.
Last week I tried moving toward code reproduction, but with daily life and the Qingming holiday, the paper-reading work got delayed. This week I continue the deep-dive; my initial pick is the A2J network model from HUST (ICCV 2019).
My reasons for choosing this paper:
- I am currently looking for a project and code by starting from datasets. A2J has open-source code and is evaluated on several mainstream depth-based hand pose datasets, such as:
  - NYU Hand Pose Dataset
  - ICVL Hand Pose Dataset
  - HANDS 2017 dataset
Since I have run into many problems downloading datasets (lack of experience), the priority is to find a paper that can actually be reproduced (open-source code, mainstream datasets), and from that angle this paper is a good choice.
- It won first place in the 2019 hand pose estimation challenge, and it still performs strongly on the three datasets above, so it is well worth studying.
- The paper looked short (it was not).
Without further ado, on to the main content:
#############################################################################
Source
Title: A2J: Anchor-to-Joint Regression Network for 3D Articulated Pose Estimation from a Single Depth Image
Citation: Xiong F., Zhang B., Xiao Y., et al. A2J: Anchor-to-Joint Regression Network for 3D Articulated Pose Estimation from a Single Depth Image. ICCV, 2019.
Links & downloads:
Paper: https://arxiv.org/abs/1908.09999v1
I have also uploaded the paper to CSDN; it can be downloaded here:
https://download.youkuaiyun.com/download/Jason_____Wang/16502249
Related links:
Open-source code:
https://github.com/zhangboshen/A2J
Performance comparison (Papers with Code):
https://www.paperswithcode.com/paper/a2j-anchor-to-joint-regression-network-for-3d#code
Paper overview
Abstract:
For the 3D hand and body pose estimation task on depth images, a novel anchor-based approach termed Anchor-to-Joint regression network (A2J), with end-to-end learning ability, is proposed. Within A2J, anchor points able to capture global-local spatial context information are densely set on the depth image as local regressors for the joints. They contribute to predicting the positions of the joints in an ensemble way to enhance generalization ability. The proposed 3D articulated pose estimation paradigm differs from the state-of-the-art encoder-decoder based FCN, 3D CNN, and point-set based manners. To discover informative anchor points for a certain joint, an anchor proposal procedure is also proposed for A2J. Meanwhile, a 2D CNN (i.e., ResNet-50) is used as the backbone network to drive A2J, without using time-consuming 3D convolutional or deconvolutional layers. Experiments on 3 hand datasets and 2 body datasets verify A2J's superiority. Meanwhile, A2J runs at high speed.

This paper introduces a novel network, A2J, for estimating 3D hand and body joint poses from a single depth image. A2J captures spatial context information via densely placed anchor points, learns end-to-end, and avoids the computational cost of 3D convolutions. Experiments on multiple datasets verify A2J's superiority in both accuracy and efficiency.
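To make the anchor-to-joint idea above concrete, here is a minimal NumPy sketch of the core aggregation step as I understand it from the abstract: every densely placed anchor regresses an in-plane offset (and a depth estimate) toward a joint, and the joint's final position is a softmax-weighted average over all anchors, with the weights coming from the anchor proposal branch. This is an illustration only, not the authors' implementation; the function and variable names are my own.

```python
import numpy as np

def aggregate_joint(anchors, offsets, depths, responses):
    """Hypothetical sketch of A2J-style anchor-to-joint aggregation.

    anchors:   (A, 2) anchor xy positions on the depth image
    offsets:   (A, 2) per-anchor in-plane offsets toward one joint
    depths:    (A,)   per-anchor depth estimates for that joint
    responses: (A,)   per-anchor informativeness scores (logits)
    """
    # Softmax over anchor responses: informative anchors dominate.
    w = np.exp(responses - responses.max())
    w /= w.sum()
    # Ensemble estimate: weighted average of anchor + offset predictions.
    xy = (w[:, None] * (anchors + offsets)).sum(axis=0)
    z = (w * depths).sum()
    return np.array([xy[0], xy[1], z])

# Toy usage: four anchors around a joint, all pointing at (2, 2).
anchors = np.array([[0., 0.], [4., 0.], [0., 4.], [4., 4.]])
offsets = np.array([[2., 2.], [-2., 2.], [2., -2.], [-2., -2.]])
depths = np.full(4, 0.5)
responses = np.zeros(4)  # uniform logits -> equal weights
print(aggregate_joint(anchors, offsets, depths, responses))
```

In the toy example every anchor agrees, so the joint lands at (2, 2) with depth 0.5; in practice the response branch learns to down-weight uninformative anchors, which is what gives the ensemble its robustness.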





