PyTorch Usage Notes
Notes and a summary of the functions I need when writing models. Official documentation:
pytorch-lightning
torchvision
nn.Module
CNN
torch.nn.Conv1d
torch.nn.Conv1d(in_channels,    # int, number of channels in the input signal
                out_channels,   # int, number of channels produced by the convolution
                kernel_size,    # int/tuple, size of the convolving kernel
                stride=1,       # int/tuple, optional, stride of the convolution
                padding=0,      # int/tuple, optional, zero-padding added to both sides of the input
                dilation=1,     # int/tuple, optional, spacing between kernel elements
                groups=1,       # int, optional, number of blocked connections from input channels to output channels
                bias=True)      # bool, optional, if True, adds a learnable bias to the output
shape
Input: $(N, C_{in}, L_{in})$
weight: $(C_{out}, C_{in}, k\_s)$ (the $C_{out}$ kernels are computed in parallel)
Output: $(N, C_{out}, L_{out})$
$L_{out} = \lfloor (L_{in} + 2p - d(k\_s - 1) - 1)/s + 1 \rfloor$, where $p$ = padding, $d$ = dilation, $s$ = stride
How Conv1d input and output sizes are computed
Example
import torch
import torch.nn as nn

# input (1, 5, 7) convolved with weight (3, 5, 2) -> output (1, 3, 6)
a = torch.ones(1, 5, 7)
b = nn.Conv1d(in_channels=5, out_channels=3, kernel_size=2)(a)
print(b.size())  # torch.Size([1, 3, 6]); L_out = (7 + 0 - 1*(2-1) - 1)/1 + 1 = 6
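A further sketch (the parameter values here are my own, chosen for illustration) showing how stride, padding, and dilation enter the $L_{out}$ formula above:

import torch
import torch.nn as nn

x = torch.ones(1, 5, 20)   # (N, C_in, L_in) = (1, 5, 20)
conv = nn.Conv1d(in_channels=5, out_channels=3, kernel_size=3,
                 stride=2, padding=1, dilation=2)
y = conv(x)
# L_out = floor((20 + 2*1 - 2*(3-1) - 1)/2 + 1) = floor(9.5) = 9
print(y.size())            # torch.Size([1, 3, 9])
print(conv.weight.size())  # torch.Size([3, 5, 3]) = (C_out, C_in, k_s)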
torch.nn.Conv2d
torch.nn.Conv2d(in_channels,
out_channels,
kernel_size,
stride=1,
padding=0,
dilation=1,
groups=1,
bias=True)
The parameters kernel_size, stride, padding, and dilation:
- can be a single int, in which case the same value is used for both the height and the width dimension;
- or a tuple, where the first element gives the value for the height and the second element gives the value for the width (see the sketch below).
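A minimal sketch (the channel counts and kernel sizes are my own, illustrative) of the int vs. tuple forms of kernel_size; the same rule applies to stride, padding, and dilation:

import torch.nn as nn

conv_a = nn.Conv2d(3, 8, kernel_size=3)            # int: same as kernel_size=(3, 3)
conv_b = nn.Conv2d(3, 8, kernel_size=(3, 3))
print(conv_a.weight.size(), conv_b.weight.size())  # both torch.Size([8, 3, 3, 3])

conv_c = nn.Conv2d(3, 8, kernel_size=(1, 5))       # tuple: kernel height 1, width 5
print(conv_c.weight.size())                        # torch.Size([8, 3, 1, 5])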
shape
Input: $(N, C_{in}, H_{in}, W_{in})$
weight: $(C_{out}, C_{in}, k_H, k_W)$
Output: $(N, C_{out}, H_{out}, W_{out})$
$H_{out} = \lfloor (H_{in} + 2p - d(k_H - 1) - 1)/s + 1 \rfloor$, and analogously for $W_{out}$ with $k_W$ ($p$, $d$, $s$ are the height/width components of padding, dilation, and stride)
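A hedged sketch (the input size and layer parameters are illustrative, not from the original notes) that checks the Conv2d output shape against the $H_{out}$ / $W_{out}$ formula above:

import torch
import torch.nn as nn

x = torch.ones(1, 3, 32, 32)  # (N, C_in, H_in, W_in)
conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=(3, 5),
                 stride=2, padding=1)
y = conv(x)
# H_out = floor((32 + 2*1 - 1*(3-1) - 1)/2 + 1) = 16
# W_out = floor((32 + 2*1 - 1*(5-1) - 1)/2 + 1) = 15
print(y.size())            # torch.Size([1, 16, 16, 15])
print(conv.weight.size())  # torch.Size([16, 3, 3, 5]) = (C_out, C_in, k_H, k_W)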