Original article: Converting Lite-HRNet (a lightweight HRNet) to ONNX (CSDN blog by xz1203)
Reference: [Dev tips] Converting between AdaptivePooling and Max/AvgPooling (CSDN blog by 小宋是呢)
Adaptive Pooling is a family of pooling layers in PyTorch; combining the 1D, 2D, and 3D variants with Max and Avg gives six forms.
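Concretely, the six layer classes are:

import torch.nn as nn

# 1D / 2D / 3D, each in a Max and an Avg flavor:
nn.AdaptiveMaxPool1d, nn.AdaptiveAvgPool1d
nn.AdaptiveMaxPool2d, nn.AdaptiveAvgPool2d
nn.AdaptiveMaxPool3d, nn.AdaptiveAvgPool3d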
Adaptive Pooling differs from standard Max/AvgPooling in that Adaptive Pooling takes the desired output_size directly as its argument, while standard Max/AvgPooling derives output_size from kernel_size, stride, and padding:

output_size = floor((input_size + 2 * padding - kernel_size) / stride) + 1

(PyTorch uses floor here by default; ceil applies only when ceil_mode=True.)
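A minimal illustration of the two conventions (shapes chosen arbitrarily):

import torch
import torch.nn as nn

x = torch.randn(1, 3, 32, 32)

# Adaptive: the target output size is given directly.
print(nn.AdaptiveAvgPool2d((8, 8))(x).shape)                      # torch.Size([1, 3, 8, 8])

# Standard: the output size follows from the formula above:
# floor((32 + 2*0 - 4) / 4) + 1 = 8
print(nn.AvgPool2d(kernel_size=4, stride=4, padding=0)(x).shape)  # torch.Size([1, 3, 8, 8])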
Adaptive Pooling exists only in PyTorch, so porting code that contains Adaptive Pooling to Keras or TensorFlow (or exporting it to ONNX) runs into problems.
To emulate it with a standard pooling layer, derive the parameters from the input size and the target output size:

stride = floor(input_size / output_size)
kernel_size = input_size - (output_size - 1) * stride
padding = 0
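With these formulas the standard layer reproduces adaptive average pooling exactly whenever input_size is an integer multiple of output_size (otherwise adaptive pooling uses variable-sized windows and the two differ slightly). A minimal sketch:

import numpy as np
import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 32, 24)
output_size = np.array([8, 6])
input_size = np.array(x.size()[-2:])

stride = np.floor(input_size / output_size).astype(np.int32)   # [4, 4]
kernel = input_size - (output_size - 1) * stride               # [4, 4]

a = F.adaptive_avg_pool2d(x, tuple(output_size.tolist()))
b = F.avg_pool2d(x, kernel_size=kernel.tolist(), stride=stride.tolist())
print(torch.allclose(a, b))  # True: the sizes divide evenly here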
The patched forward of the cross-resolution weighting module then becomes:

def forward(self, x):
    # Per-branch stride/kernel so that AvgPool2d mimics adaptive_avg_pool2d.
    stridesz_list, kernelsz_list, outputsz = self.get_avgpool_para(x)
    # Pool every branch down to the spatial size of the smallest (last) branch.
    out = [nn.AvgPool2d(kernel_size=kernelsz.tolist(), stride=stridesz.tolist())(s)
           for s, kernelsz, stridesz in zip(x[:-1], kernelsz_list, stridesz_list)] + [x[-1]]
    out = torch.cat(out, dim=1)
    out = self.conv1(out)
    out = self.conv2(out)
    out = torch.split(out, self.channels, dim=1)
    # Upsample each attention map back to its branch's resolution and reweight.
    out = [
        s * F.interpolate(a, size=s.size()[-2:], mode='nearest')
        for s, a in zip(x, out)
    ]
    return out
def get_avgpool_para(self, x):
    # Target spatial size: the (H, W) of the smallest branch.
    output_size = np.array(x[-1].size()[-2:])
    stride_size_list = []
    kernel_size_list = []
    for index in range(len(x)):
        input_size = np.array(x[index].size()[-2:])
        # stride = floor(input_size / output_size)
        stride_size = np.floor(input_size / output_size).astype(np.int32)
        # kernel_size = input_size - (output_size - 1) * stride
        kernel_size = input_size - (output_size - 1) * stride_size
        stride_size_list.append(stride_size)
        kernel_size_list.append(kernel_size)
    return stride_size_list, kernel_size_list, output_size
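As a sanity check, the derived parameters match F.adaptive_avg_pool2d on a Lite-HRNet-style list of branch tensors (the channel counts and resolutions below are made up for illustration):

import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F

# Four branches at decreasing resolution, highest first.
shapes = [(40, 64, 48), (80, 32, 24), (160, 16, 12), (320, 8, 6)]
x = [torch.randn(1, c, h, w) for c, h, w in shapes]

output_size = np.array(x[-1].size()[-2:])  # target (8, 6)
for s in x:
    input_size = np.array(s.size()[-2:])
    stride = np.floor(input_size / output_size).astype(np.int32)
    kernel = input_size - (output_size - 1) * stride
    pooled = nn.AvgPool2d(kernel_size=kernel.tolist(), stride=stride.tolist())(s)
    adaptive = F.adaptive_avg_pool2d(s, tuple(output_size.tolist()))
    print(pooled.shape, torch.allclose(pooled, adaptive))  # shapes match, True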
Code from the original article:
out = [torch.nn.AvgPool2d(kernel_size=kernelsize_list.tolist(), stride=stridesize_list.tolist())(s) for s, kernel_size, stride_size in
       zip(x[:-1], kernelsize_list, stridesize_list)] + [x[-1]]

(The bug: .tolist() is applied to the whole parameter lists instead of the per-branch values, and the loop variables kernel_size and stride_size are never used.)
Replaced with the code from the article's comment section:
out = [torch.nn.AvgPool2d(kernel_size=kernel_size, stride=stride_size)(s) for s, kernel_size, stride_size in
       zip(x[:-1], kernelsize_list, stridesize_list)] + [x[-1]]
For the actual conversion you can use pytorch2onnx.py under the mmpose tools folder.
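If you prefer not to go through the mmpose script, a bare torch.onnx.export call also works once the adaptive pooling is gone (a minimal sketch; build_model and the input resolution are placeholders, not real APIs):

import torch

model = build_model()  # hypothetical: construct the patched Lite-HRNet however your codebase does
model.eval()

dummy = torch.randn(1, 3, 256, 192)  # assumed input resolution; adjust to your config
torch.onnx.export(
    model, dummy, 'lite_hrnet.onnx',
    opset_version=11,
    input_names=['input'], output_names=['output'],
)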