Text in NLP can generally be represented as a tensor of shape [batch, seq, embed_dim].
- CNN
Usually a 1D convolution is used, with the embedding dimension treated as the channel dimension. For each feature dimension, the kernel slides over all tokens, so the convolution aggregates that feature over adjacent characters; the aggregated multi-dimensional features are then summed or averaged. In practice the input must be transposed to [batch, embed_dim, seq]; without the transpose, the kernel would slide over the features of a single token, which is meaningless.
import torch
import torch.nn as nn

# 1D convolution slides over the last dimension
m = nn.Conv1d(in_channels=16, out_channels=33, kernel_size=3, stride=2)
input = torch.randn(20, 16, 50)  # [batch, embed_dim, seq]: 16 = embed_dim, 50 = seq
output = m(input)  # [20, 33, 24] = [batch, out_channels, L_out]
out_channels is the number of convolution kernels, each kernel acting as a different feature extractor.
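If the text tensor starts out in the usual [batch, seq, embed_dim] layout, a minimal sketch of the transpose mentioned above (reusing the Conv1d module m; variable names are illustrative):
x = torch.randn(20, 50, 16)  # [batch, seq, embed_dim]
x = x.transpose(1, 2)        # -> [batch, embed_dim, seq] = [20, 16, 50]
y = m(x)                     # -> [20, 33, 24], same as above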
$$L_{out} = \left\lfloor \frac{L_{in} + 2 \times padding - kernel\_size}{stride} \right\rfloor + 1$$
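Plugging in the Conv1d example above (L_in = 50, padding = 0, kernel_size = 3, stride = 2): floor((50 + 0 - 3)/2) + 1 = 23 + 1 = 24, which matches the output length of 24.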
Alternatively, the input can be fed through a 2D convolution:
Input shape: [batch, 1, seq, embed_dim]
Kernel shape: [k, embed_dim], so the kernel sweeps across the full embedding width at once
Output shape: [batch, out_channels, L_out, 1]
2D convolution works like 1D convolution, except that the kernel slides over two dimensions.
# 2D convolution: input is [batch, C, H, W]
m = nn.Conv2d(in_channels=16, out_channels=33, kernel_size=3, stride=2)
input = torch.randn(20, 16, 50, 100)
output = m(input)  # [20, 33, 24, 49]
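For the [batch, 1, seq, embed_dim] setup described above, a minimal sketch (the kernel size (3, 100) and variable names are my own illustrative choices, not from the original):
conv = nn.Conv2d(in_channels=1, out_channels=33, kernel_size=(3, 100))
x = torch.randn(20, 1, 50, 100)  # [batch, 1, seq, embed_dim]
y = conv(x)                      # [20, 33, 48, 1] = [batch, out_channels, L_out, 1]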
# 3D convolution
# With square kernels and equal stride
m = nn.Conv3d(in_channels=16, out_channels=33, kernel_size=3, stride=2)
# non-square kernels, unequal stride, and padding:
# m = nn.Conv3d(16, 33, (3, 5, 2), stride=(2, 1, 1), padding=(4, 2, 0))
input = torch.randn(20, 16, 10, 50, 100)
output = m(input)  # [20, 33, 4, 24, 49]
- RNN
Input shape: [seq, batch, input_size]
h0: [num_layers*num_directions, batch, hidden_size], optional
Output shapes:
output: [seq, batch, num_directions*hidden_size]
hn: [num_layers*num_directions, batch, hidden_size]
# RNN
rnn = nn.RNN(input_size=10, hidden_size=20, num_layers=2, bidirectional=False)
input = torch.randn(5, 3, 10)  # [time_step, batch, feature] = [seq, batch, input_size]
h0 = torch.randn(2, 3, 20)     # [num_layers*num_directions, batch, hidden]
output, hn = rnn(input, h0)
# output: [seq, batch, num_directions*hidden] = [5, 3, 20]; last dim doubles when bidirectional
# hn: [num_layers*num_directions, batch, hidden] = [2, 3, 20]
# num_directions: 2 for bidirectional, 1 for unidirectional
# batch_first: if True, the input shape is [batch, seq, input_size]; default is False, i.e. [seq, batch, input_size]
# with batch_first=False: if bidirectional=False, output[-1] == hn[-1]; if bidirectional=True, they differ
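A quick sanity check of the last comment (my own addition, assuming the unidirectional setup above):
print(torch.allclose(output[-1], hn[-1]))  # True: last time step of output equals the top layer's final hidden state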
- LSTM
lstm = nn.LSTM(input_size=10, hidden_size=20, num_layers=2)
input = torch.randn(5, 3, 10)  # [time_step, batch, feature] = [seq, batch, input_size]
# h0 and c0 are optional and can be omitted
h0 = torch.randn(2, 3, 20)  # [num_layers*num_directions, batch, hidden]
c0 = torch.randn(2, 3, 20)  # [num_layers*num_directions, batch, hidden]
# With 2 stacked LSTM layers, output holds the top layer's hidden state at every time step,
# so its length depends on the sequence length, not on the number of layers
output, (hn, cn) = lstm(input, (h0, c0))
# output: [seq, batch, hidden_size] = [5, 3, 20]
# hn: [num_layers*num_directions, batch, hidden] = [2, 3, 20]
# cn: [num_layers*num_directions, batch, hidden] = [2, 3, 20]
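A minimal sketch of the bidirectional case (my own example; the shapes follow the doubling rule mentioned in the RNN comments above):
bilstm = nn.LSTM(input_size=10, hidden_size=20, num_layers=2, bidirectional=True)
out, (hn, cn) = bilstm(torch.randn(5, 3, 10))
# out: [5, 3, 40] = [seq, batch, num_directions*hidden]
# hn:  [4, 3, 20] = [num_layers*num_directions, batch, hidden]
# cn:  [4, 3, 20]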
- GRU
Similar to the LSTM, but with only two gates: an update gate and a reset gate.
One way to picture a GRU: you have a full cup of drink A; pour some of it into cup B, add some other drink to B, mix it and filter out the impurities, then pour it back into A so that A stays full.
I forget where I read this analogy; writing it down so I don't forget.
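For completeness, a minimal nn.GRU sketch (my own example; the call pattern mirrors nn.RNN, and there is no cell state):
gru = nn.GRU(input_size=10, hidden_size=20, num_layers=2)
output, hn = gru(torch.randn(5, 3, 10))
# output: [5, 3, 20] = [seq, batch, hidden]
# hn: [2, 3, 20] = [num_layers*num_directions, batch, hidden]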