1. Common form:
torch.nn.Linear(in_features, out_features, bias=True)
2. Parameter explanation
- a = nn.Linear(in, out) defines a layer a that computes y = x * A^T + b, i.e. the weight applied to x is the transpose of the stored matrix A (a minimal verification sketch follows this list)
- A is the stored weight (a.weight) and has size (out, in)
- b is the bias (a.bias) and has size (out)
- in: the number of input features, i.e. the number of neurons in the previous layer
- out: the number of output features, i.e. the number of neurons in this layer
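To make the formula and the shapes concrete, here is a minimal sketch (the layer size 3 -> 2 and the batch of 4 samples are made up for illustration) that checks y = x * A^T + b by hand:
import torch

layer = torch.nn.Linear(3, 2)        # 3 input features -> 2 output features
print(layer.weight.shape)            # A has size (out, in): torch.Size([2, 3])
print(layer.bias.shape)              # b has size (out):     torch.Size([2])

x = torch.randn(4, 3)                # 4 samples, 3 features each
y = layer(x)

# recompute y = x * A^T + b manually and compare with the layer's output
y_manual = x @ layer.weight.T + layer.bias
print(torch.allclose(y, y_manual))   # True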
3. Example
Program:
import torch
x = torch.randn(128, 20) # the input has shape (128, 20): 128 samples, 20 features each
m = torch.nn.Linear(20, 30) # 20 input features, 30 output features
output = m(x)
print('m.weight.shape:\n ', m.weight.shape)
print('m.bias.shape:\n', m.bias.shape)
print('output.shape:\n', output.shape)
Result:
m.weight.shape:
torch.Size([30, 20])
m.bias.shape:
torch.Size([30])
output.shape:
torch.Size([128, 30])