import torch
from torch import nn
from torch.nn import functional as F
Differences between nn.Xxx and nn.functional.xxx:
1. nn.Xxx wraps nn.functional.xxx and is more convenient. For example, with the nn.Xxx version of dropout, calling m.eval() is enough to disable it, whereas F.dropout must be passed training=self.training by hand (sketch below).
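A minimal sketch of this difference (Net is a hypothetical module; only the imports at the top are assumed):
class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.dropout = nn.Dropout(p=0.5)  # module version: follows the model's train/eval mode

    def forward(self, x):
        # the equivalent functional call would need the flag passed manually:
        # F.dropout(x, p=0.5, training=self.training)
        return self.dropout(x)

m = Net()
m.eval()  # dropout is now a no-op, with no extra bookkeeping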
2. nn.Xxx is more convenient because you do not have to pass the parameters in on every call:
out = F.conv2d(x, self.cnn1_weight, self.bias1_weight)  # functional: weights and bias passed explicitly every call
out = self.maxpool2(self.relu2(self.cnn2(out)))         # module: parameters live inside the layers
3. nn.functional.xxx is lower-level and more flexible. For example, TridentNet applies convolutions with different dilations depending on the input scale while sharing the same weights; the functional form makes this straightforward (see the module sketch after the snippet):
x_1 = F.conv2d(x, self.weight, dilation=1, padding=1)
x_2 = F.conv2d(x, self.weight, dilation=2, padding=2)
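A self-contained sketch of this weight-sharing pattern (SharedDilationConv is a hypothetical name, not TridentNet's actual code):
class SharedDilationConv(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size=3):
        super().__init__()
        # a single weight tensor shared by all branches
        self.weight = nn.Parameter(
            torch.randn(out_channels, in_channels, kernel_size, kernel_size))

    def forward(self, x):
        # same parameters, different receptive fields;
        # padding == dilation keeps the spatial size unchanged for a 3x3 kernel
        x_1 = F.conv2d(x, self.weight, dilation=1, padding=1)
        x_2 = F.conv2d(x, self.weight, dilation=2, padding=2)
        x_3 = F.conv2d(x, self.weight, dilation=3, padding=3)
        return x_1, x_2, x_3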
Convolution
>>> input: (batch, in_channels, height, width) --> weight: (out_channels, in_channels, kernel_height, kernel_width)
>>> P = nn.Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=False)
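A quick shape check of the above (the concrete numbers are illustrative):
x = torch.randn(1, 3, 32, 32)                    # (batch, in_channels, height, width)
conv = nn.Conv2d(3, 16, kernel_size=3, padding=1, bias=False)
print(conv.weight.shape)                         # torch.Size([16, 3, 3, 3])
out = conv(x)
print(out.shape)                                 # torch.Size([1, 16, 32, 32])
# H_out = floor((H + 2*padding - dilation*(kernel_size - 1) - 1) / stride) + 1
#       = floor((32 + 2 - 2 - 1) / 1) + 1 = 32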