The directory structure is shown in the figure below.
annot-test.h5 is provided by OriNet; the link is https://github.com/chenxuluo/OriNet-demo.
annot-test.h5 contains most of the information for the MPI-INF-3DHP test set, including the 3D joint positions and the image names, but it does not contain the images themselves, so the pixel data has to be loaded separately with cv2.
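A quick way to see what the file provides is to list its datasets with h5py. Since the real annot-test.h5 is not bundled here, the sketch below first builds a tiny stand-in file with the two fields used later (`imagename` stored as per-character codes, `part_3D_univ` as N x 17 x 3 joints); the example path and shapes are assumptions for illustration, not the exact contents of the real file.

```python
import h5py
import numpy as np

# Build a tiny stand-in for annot-test.h5 so the snippet runs anywhere.
# Dataset names follow the description above; the example image path and
# exact shapes are made up and may differ from the real file.
with h5py.File("annot-demo.h5", "w") as f:
    name = "TS1/imageSequence/img_000001.jpg"   # hypothetical relative path
    codes = np.array([ord(c) for c in name], dtype=np.float64)
    f.create_dataset("imagename", data=codes[None, :])        # (1, len(name))
    f.create_dataset("part_3D_univ", data=np.zeros((1, 17, 3)))

with h5py.File("annot-demo.h5", "r") as f:
    print(list(f.keys()))           # dataset names stored in the file
    print(f["part_3D_univ"].shape)  # one sample of 17 joints x 3 coordinates
```

Inspecting the keys this way is also how you find the other fields mentioned at the end of this post before wiring them into the Dataset.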
The code is as follows:
```python
import os
from collections import defaultdict

import cv2
import h5py
import torch
from torch.utils.data import DataLoader, Dataset


class MyDatasets(Dataset):
    def __init__(self):
        # Open the annotation file once; samples are read lazily per index.
        self.file = h5py.File('annot-test.h5', 'r')

    def __getitem__(self, index):
        # Raw string so the backslashes in the Windows path are not
        # interpreted as escape sequences.
        root_path = r"E:\PythonCodes\ContextPose2\data\mpi_inf_3dhp"
        sample = defaultdict(list)

        # 'imagename' stores the relative path as an array of character
        # codes; decode it back into a string.
        image_path = ""
        for code in self.file["imagename"][index]:
            image_path += chr(int(code))
        image_path = os.path.join(root_path, image_path)

        # The h5 file has no pixel data, so read the image from disk.
        image = cv2.imread(image_path)
        sample['images'] = image

        # Universal 3D joints: (17, 3) -> (17, 4) by appending a column
        # of ones to each joint.
        part_3D_univ = torch.Tensor(self.file["part_3D_univ"][index])
        p = torch.ones(17).unsqueeze(-1)
        part_3D_univ = torch.cat((part_3D_univ, p), -1)
        sample['keypoints_3d'] = part_3D_univ

        sample['indexex'] = index
        sample['pred_keypoints_3d'] = []
        return sample

    def __len__(self):
        return len(self.file['part_3D_univ'])


data_test = MyDatasets()
data_loader_test = DataLoader(data_test, batch_size=8, shuffle=False)
```
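The `imagename` field stores each path as an array of numeric character codes rather than as a string, which is why `__getitem__` rebuilds the path with `chr`. A minimal sketch of just that decoding step, with made-up code values standing in for one row of the dataset:

```python
# One 'imagename' row: numeric character codes (example values, not taken
# from the real annot-test.h5).
row = [109.0, 112, 105, 47, 105, 109, 103, 95,
       48, 48, 48, 48, 48, 49, 46, 106, 112, 103]

# Joining chr() of each code recovers the relative image path.
image_path = "".join(chr(int(code)) for code in row)
print(image_path)  # -> "mpi/img_000001.jpg"
```

`int()` is needed because the codes are stored as floats; `chr` then maps each integer back to its character.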
In short, the dataset uses the image names from annot-test.h5 to locate each image on disk and reads it with cv2. Which fields to load generally depends on what the model needs; other fields from annot-test.h5 can be added to the sample in the same way.
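The `torch.ones(17)` / `torch.cat` step above turns each of the 17 joints from `(x, y, z)` into `(x, y, z, 1)` by appending a column of ones. The same shape arithmetic in NumPy, as a self-contained sketch (dummy joint values, since the real annotations are not loaded here):

```python
import numpy as np

# 17 joints x 3 coordinates, standing in for one part_3D_univ sample.
joints = np.zeros((17, 3))

# Append a column of ones -> (17, 4); mirrors
# torch.cat((part_3D_univ, p), -1) in the Dataset above.
ones = np.ones((17, 1))
joints_h = np.concatenate((joints, ones), axis=-1)
print(joints_h.shape)  # -> (17, 4)
```

The extra constant column is a common convention for homogeneous coordinates or per-joint validity flags; here it simply matches the (17, 4) keypoint layout the downstream model expects.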