Interpreting the NTUDataset class:
It defines three methods: the initializer, the length method, and index-based access that returns the sample x and label y at a given index.
import numpy as np
from torch.utils.data import Dataset

class NTUDataset(Dataset):
    def __init__(self, x, y):
        self.x = x                               # skeleton samples
        self.y = np.array(y, dtype='int')        # integer class labels

    def __len__(self):
        return len(self.y)                       # number of samples

    def __getitem__(self, index):
        return [self.x[index], int(self.y[index])]   # one (sample, label) pair
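A minimal usage sketch (the arrays and labels here are made-up placeholders, not real NTU skeleton data), just to show how the three methods interact:

import numpy as np
x = np.random.randn(4, 20, 75).astype('float32')   # 4 fake skeleton sequences
y = [0, 1, 2, 3]                                    # 4 fake class labels
ds = NTUDataset(x, y)
print(len(ds))            # 4 -- __len__ returns len(self.y)
sample, label = ds[2]     # __getitem__ returns the list [self.x[2], int(self.y[2])]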
Interpreting the NTUDataLoaders class
1. The __init__ method:
def __init__(self, dataset='NTU', case=0, aug=1, seg=30):
    self.dataset = dataset
    self.case = case
    self.aug = aug
    self.seg = seg
    self.create_datasets()                                     # load the arrays from the .h5 file (see below)
    self.train_set = NTUDataset(self.train_X, self.train_Y)    # wrap the arrays in NTUDataset objects
    self.val_set = NTUDataset(self.val_X, self.val_Y)
    self.test_set = NTUDataset(self.test_X, self.test_Y)
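A hedged sketch of how the class would be used (this assumes the NTU_CS.h5 / NTU_CV.h5 file described below already exists under ./data/ntu, and that the loader methods take batch_size and num_workers arguments, as the DataLoader calls in the next step suggest):

loaders = NTUDataLoaders(dataset='NTU', case=0, aug=1, seg=30)
train_loader = loaders.get_train_loader(batch_size=64, num_workers=4)
val_loader = loaders.get_val_loader(batch_size=64, num_workers=4)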
2. The get_train_loader method:
if self.aug == 0:  -》 if augmentation is disabled (aug == 0)
    return DataLoader(self.train_set, batch_size=batch_size,
                      shuffle=True, num_workers=num_workers,
                      collate_fn=self.collate_fn_fix_val, pin_memory=False, drop_last=True)
    -》 returns a DataLoader, an iterable object: it is consumed with a for loop or iter(); next() cannot be called on the DataLoader itself. train_set is the dataset, batch_size the batch size, num_workers the number of worker processes.
elif self.aug == 1:  -》 if augmentation is enabled (aug == 1)
    return DataLoader(self.train_set, batch_size=batch_size,
                      shuffle=True, num_workers=num_workers,
                      collate_fn=self.collate_fn_fix_train, pin_memory=True, drop_last=True)
    -》 same as above, except that the augmenting collate_fn_fix_train is used and pin_memory=True (a usage sketch follows).
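Because the method returns a DataLoader, batches are obtained by iterating, not by calling next() on the loader itself. Continuing the sketch above (and assuming the collate functions return an (x, y) pair per batch, like the default collate does):

train_loader = loaders.get_train_loader(batch_size=64, num_workers=4)
for x_batch, y_batch in train_loader:   # the usual way: the for loop calls iter() implicitly
    break
it = iter(train_loader)                 # or create the iterator explicitly
x_batch, y_batch = next(it)             # next() works on the iterator, not on the DataLoader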
3. The get_val_loader method: analogous to get_train_loader.
4. The get_test_loader method: loads the test data.
5. get_train_size, get_val_size, get_test_size -》 return the sizes of the training, validation, and test sets (a likely implementation is sketched below).
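These three methods are not shown in the article; given the attributes created in create_datasets, a likely implementation is simply the following (an assumption, not the verified source):

def get_train_size(self):
    return len(self.train_Y)

def get_val_size(self):
    return len(self.val_Y)

def get_test_size(self):
    return len(self.test_Y)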
6. The create_datasets method:
if self.dataset == 'NTU':
    if self.case == 0:
        self.metric = 'CS'    # cross-subject evaluation
    elif self.case == 1:
        self.metric = 'CV'    # cross-view evaluation
    path = osp.join('./data/ntu', 'NTU_' + self.metric + '.h5')  -》 build the path of the .h5 file to load (an example of the resulting path is shown below)
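For example, with the default case == 0 the path resolves as follows (osp is the usual alias for os.path):

import os.path as osp
path = osp.join('./data/ntu', 'NTU_' + 'CS' + '.h5')
print(path)   # ./data/ntu/NTU_CS.h5 on a POSIX system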
7. Read the .h5 file:
f = h5py.File(path, 'r')
self.train_X = f['x'][:]                      -》 training samples (skeleton data)
self.train_Y = np.argmax(f['y'][:], -1)       -》 training labels: argmax turns each one-hot label vector into a class index
self.val_X = f['valid_x'][:]                  -》 validation samples
self.val_Y = np.argmax(f['valid_y'][:], -1)   -》 validation labels
self.test_X = f['test_x'][:]                  -》 test samples
self.test_Y = np.argmax(f['test_y'][:], -1)   -》 test labels
f.close()
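The np.argmax(..., -1) on every label line indicates that the labels are stored one-hot in the file. A tiny illustration with made-up labels:

import numpy as np
y_onehot = np.array([[0, 0, 1],
                     [1, 0, 0]])
print(np.argmax(y_onehot, -1))   # [2 0]: each one-hot row collapses to its class index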
8. Combine the training data and the validation data, as ST-GCN does:
self.train_X = np.concatenate([self.train_X, self.val_X], axis=0)  -》 np.concatenate joins several arrays along a given axis in one call
self.train_Y = np.concatenate([self.train_Y, self.val_Y], axis=0)
self.val_X = self.test_X  -》 the test set is then also used as the validation set
self.val_Y = self.test_Y
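A small numeric sketch of what the concatenation does to the shapes (the shapes here are made up for illustration):

import numpy as np
a = np.zeros((100, 300, 75))                 # pretend training samples
b = np.zeros((20, 300, 75))                  # pretend validation samples
merged = np.concatenate([a, b], axis=0)
print(merged.shape)                          # (120, 300, 75): samples stacked along axis 0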
Interpreting the collate_fn_fix_train method
Purpose: put each data field into a tensor whose outer dimension is the batch size.
1. x, y = zip(*batch)  -》 zip(*batch) unzips the list of (sample, label) pairs into a tuple of samples x and a tuple of labels y
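What zip(*batch) does, shown with toy stand-ins for the (sample, label) pairs that the DataLoader hands to the collate function:

batch = [('s0', 0), ('s1', 1), ('s2', 2)]    # toy (sample, label) pairs
x, y = zip(*batch)
print(x)   # ('s0', 's1', 's2')
print(y)   # (0, 1, 2)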

This article has walked through the NTUDataset class and the NTUDataLoaders class, covering how the datasets are initialized, how data is retrieved, and how the NTU RGB+D dataset is loaded and processed. It also introduced the preprocessing done in the collate_fn_fix_train, collate_fn_fix_val and collate_fn_fix_test methods, which build the batch tensors and adjust the data, and finally touched on the rotation and transformation operations used for data augmentation.