Transformer Learning | Task 5: Project Practice
I. Code Overview
This code implements a Transformer in PyTorch for a machine-translation task: it translates Chinese sentences into English.
II. Code Walkthrough
1. Data Preparation
(1) Defining the sentences and vocabularies
First, we define a few simple Chinese-English sentence pairs together with their vocabularies:
sentence = [
['我 是 学 生 P', 'S I am a student', 'I am a student E'],
['我 喜 欢 学 习', 'S I like learning P', 'I like learning P E'],
['我 是 男 生 P', 'S I am a boy', 'I am a boy E']
]
- '我 是 学 生 P': the Chinese source sentence; P is a padding token used to pad sentences to a fixed length.
- 'S I am a student': the English decoder input; S is the start token.
- 'I am a student E': the English decoder target; E is the end token.
(2) Vocabularies
We map the Chinese characters and the English words to integer indices:
src_vocab = {'P':0, '我':1, '是':2, '学':3, '生':4, '喜':5, '欢':6,'习':7,'男':8}
tgt_vocab = {'S':0, 'E':1, 'P':2, 'I':3, 'am':4, 'a':5, 'student':6, 'like':7, 'learning':8, 'boy':9}
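As a quick sanity check (this small snippet is not part of the original code), applying src_vocab to one source sentence gives its index sequence:
tokens = '我 是 学 生 P'.split()                  # split the sentence on spaces
indices = [src_vocab[tok] for tok in tokens]      # look every token up in the source vocabulary
print(indices)                                    # [1, 2, 3, 4, 0] -- 'P' maps to 0, which is also the padding index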
2. Data Preprocessing
(1) Converting sentences to index sequences
The make_data function converts each sentence into index sequences and builds the encoder input, decoder input, and decoder output:
def make_data(sentences):
enc_inputs, dec_inputs, dec_outputs = [], [], []
for i in range(len(sentences)):
enc_input = [[src_vocab[n] for n in sentences[i][0].split()]]
dec_input = [[tgt_vocab[n] for n in sentences[i][1].split()]]
dec_output = [[tgt_vocab[n] for n in sentences[i][2].split()]]
enc_inputs.extend(enc_input)
dec_inputs.extend(dec_input)
dec_outputs.extend(dec_output)
return torch.LongTensor(enc_inputs), torch.LongTensor(dec_inputs), torch.LongTensor(dec_outputs)
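A brief usage sketch (assuming the sentence list, the vocabularies, and the torch import from above): make_data returns three LongTensors of shape [3, 5]; the exact values appear in the run output at the end of this post.
enc_inputs, dec_inputs, dec_outputs = make_data(sentence)
print(enc_inputs.shape)      # torch.Size([3, 5])
print(enc_inputs[0])         # tensor([1, 2, 3, 4, 0]) -- '我 是 学 生 P'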
(2) Custom dataset
We create a custom dataset class, MyDataset, for loading the data:
class MyDataset(Data.Dataset):
def __init__(self, enc_inputs, dec_inputs, dec_outputs):
super(MyDataset, self).__init__()
self.enc_inputs = enc_inputs
self.dec_inputs = dec_inputs
self.dec_outputs = dec_outputs
def __len__(self):
return self.enc_inputs.shape[0]
def __getitem__(self, idx):
return self.enc_inputs[idx], self.dec_inputs[idx], self.dec_outputs[idx]
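Usage sketch, assuming the tensors produced by make_data above: wrap them in MyDataset and hand the dataset to a DataLoader (the complete code below uses batch_size=2 and shuffle=False, so the three sentences form one batch of 2 and one batch of 1).
import torch.utils.data as Data

loader = Data.DataLoader(dataset=MyDataset(enc_inputs, dec_inputs, dec_outputs),
                         batch_size=2, shuffle=False)
for enc_batch, dec_in_batch, dec_out_batch in loader:
    print(enc_batch.shape)   # torch.Size([2, 5]) for the first batch, torch.Size([1, 5]) for the last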
3. Model Components
(1) Positional encoding
We add positional information to every token so the model can perceive word order:
class PositionalEncoding(nn.Module):
def __init__(self, d_model, dropout=0.1, max_len=5000):
super(PositionalEncoding, self).__init__()
self.dropout = nn.Dropout(p=dropout)
pos_table = np.array([
[pos / np.power(10000, 2 * i / d_model) for i in range(d_model)]
if pos != 0 else np.zeros(d_model) for pos in range(max_len)])
pos_table[1:, 0::2] = np.sin(pos_table[1:, 0::2])
pos_table[1:, 1::2] = np.cos(pos_table[1:, 1::2])
self.pos_table = torch.FloatTensor(pos_table)
    def forward(self, enc_inputs):                    # enc_inputs: [seq_len, batch_size, d_model]
        enc_inputs = enc_inputs + self.pos_table[:enc_inputs.size(0), :].unsqueeze(1).to(enc_inputs.device)
        return self.dropout(enc_inputs)
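To make the table construction concrete, here is the same computation on a tiny standalone example (illustrative values d_model=8 and max_len=10; the model itself uses d_model=512 and max_len=5000). Each row of pos_table holds the sinusoidal encoding of one position, and the forward pass adds the first seq_len rows to the token embeddings.
import numpy as np

d_model, max_len = 8, 10
pos_table = np.array([[pos / np.power(10000, 2 * i / d_model) for i in range(d_model)]
                      if pos != 0 else np.zeros(d_model) for pos in range(max_len)])
pos_table[1:, 0::2] = np.sin(pos_table[1:, 0::2])   # sine on even embedding dimensions
pos_table[1:, 1::2] = np.cos(pos_table[1:, 1::2])   # cosine on odd embedding dimensions
print(pos_table.shape)                              # (10, 8): one row per position
print(pos_table[0])                                 # position 0 is left as all zeros in this implementation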
(2) Attention
The Transformer uses self-attention to model the relationships between tokens:
class ScaledDotProductAttention(nn.Module):
def forward(self, Q, K, V, attn_mask):
scores = torch.matmul(Q, K.transpose(-1, -2)) / np.sqrt(d_k)
scores.masked_fill_(attn_mask, -1e9)
attn = nn.Softmax(dim=-1)(scores)
context = torch.matmul(attn, V)
return context, attn
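A toy forward pass (a sketch assuming the ScaledDotProductAttention class above, the torch/numpy imports, and a global d_k as used inside the class) shows how the mask drives the attention weight of masked key positions to roughly zero:
import torch
import numpy as np

d_k = 64                                                      # global used inside the class
Q = torch.randn(1, 2, 4, d_k)                                 # [batch_size, n_heads, len_q, d_k]
K = torch.randn(1, 2, 4, d_k)                                 # [batch_size, n_heads, len_k, d_k]
V = torch.randn(1, 2, 4, 64)                                  # [batch_size, n_heads, len_v, d_v]
attn_mask = torch.zeros(1, 2, 4, 4, dtype=torch.bool)
attn_mask[..., -1] = True                                     # mask the last key position (e.g. a padding token)
context, attn = ScaledDotProductAttention()(Q, K, V, attn_mask)
print(context.shape)                                          # torch.Size([1, 2, 4, 64])
print(attn[0, 0, 0])                                          # rows sum to 1; the masked position is ~0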
(3) Multi-head attention
Several attention heads run in parallel to capture information from different representation subspaces:
class MultiHeadAttention(nn.Module):
def __init__(self):
super(MultiHeadAttention, self).__init__()
self.W_Q = nn.Linear(d_model, d_k * n_heads, bias=False)
self.W_K = nn.Linear(d_model, d_k * n_heads, bias=False)
self.W_V = nn.Linear(d_model, d_v * n_heads, bias=False)
self.fc = nn.Linear(n_heads * d_v, d_model, bias=False)
def forward(self, input_Q, input_K, input_V, attn_mask):
residual, batch_size = input_Q, input_Q.size(0)
Q = self.W_Q(input_Q).view(batch_size, -1, n_heads, d_k).transpose(1,2)
K = self.W_K(input_K).view(batch_size, -1, n_heads, d_k).transpose(1,2)
V = self.W_V(input_V).view(batch_size, -1, n_heads, d_v).transpose(1,2)
attn_mask = attn_mask.unsqueeze(1).repeat(1, n_heads, 1, 1)
context, attn = ScaledDotProductAttention()(Q, K, V, attn_mask)
context = context.transpose(1, 2).reshape(batch_size, -1, n_heads * d_v)
output = self.fc(context)
return nn.LayerNorm(d_model)(output + residual), attn
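A minimal self-attention shape check (a sketch assuming the two attention classes above plus the globals d_model, d_k, d_v, n_heads, which the complete code defines later):
import torch

d_model, d_k, d_v, n_heads = 512, 64, 64, 8
mha = MultiHeadAttention()
x = torch.randn(2, 5, d_model)                        # [batch_size, seq_len, d_model]
mask = torch.zeros(2, 5, 5, dtype=torch.bool)         # nothing masked in this toy check
out, attn = mha(x, x, x, mask)
print(out.shape, attn.shape)                          # torch.Size([2, 5, 512]) torch.Size([2, 8, 5, 5])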
(4) Feed-forward network
Two fully connected layers transform the features of each token:
class FF(nn.Module):
def __init__(self):
super(FF, self).__init__()
self.fc = nn.Sequential(
nn.Linear(d_model, d_ff, bias=False),
nn.ReLU(),
nn.Linear(d_ff, d_model, bias=False)
)
def forward(self, inputs):
residual = inputs
output = self.fc(inputs)
return nn.LayerNorm(d_model)(output + residual)
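FF preserves the [batch_size, seq_len, d_model] shape, which the residual connection requires; a quick check (a sketch assuming d_model and d_ff are defined as in the complete code):
import torch

d_model, d_ff = 512, 2048
ff = FF()
x = torch.randn(2, 5, d_model)
print(ff(x).shape)          # torch.Size([2, 5, 512]) -- input and output dimensions match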
4. Model Architecture
(1) Encoder
The encoder processes the source sentence and produces a contextual representation:
class Encoder(nn.Module):
def __init__(self):
super(Encoder, self).__init__()
self.src_emb = nn.Embedding(src_vocab_size, d_model)
self.pos_emb = PositionalEncoding(d_model)
self.layers = nn.ModuleList([EncoderLayer() for _ in range(n_layers)])
def forward(self, enc_inputs):
enc_outputs = self.src_emb(enc_inputs)
enc_outputs = self.pos_emb(enc_outputs.transpose(0, 1)).transpose(0, 1)
enc_self_attn_mask = get_attn_pad_mask(enc_inputs, enc_inputs)
enc_self_attns = []
for layer in self.layers:
enc_outputs, enc_self_attn = layer(enc_outputs, enc_self_attn_mask)
enc_self_attns.append(enc_self_attn)
return enc_outputs, enc_self_attns
(2) Decoder
The decoder generates the target sentence conditioned on the encoder output (a short sketch of how its two masks are combined follows the class definition):
class Decoder(nn.Module):
def __init__(self):
super(Decoder, self).__init__()
self.tgt_emb = nn.Embedding(tgt_vocab_size, d_model)
self.pos_emb = PositionalEncoding(d_model)
self.layers = nn.ModuleList([DecoderLayer() for _ in range(n_layers)])
def forward(self, dec_inputs, enc_inputs, enc_outputs):
dec_outputs = self.tgt_emb(dec_inputs)
dec_outputs = self.pos_emb(dec_outputs.transpose(0, 1)).transpose(0, 1)
dec_self_attn_pad_mask = get_attn_pad_mask(dec_inputs, dec_inputs)
dec_self_attn_subsequence_mask = get_attn_subsequence_mask(dec_inputs)
dec_self_attn_mask = torch.gt((dec_self_attn_pad_mask + dec_self_attn_subsequence_mask), 0)
dec_enc_attn_mask = get_attn_pad_mask(dec_inputs, enc_inputs)
dec_self_attns, dec_enc_attns = [], []
for layer in self.layers:
dec_outputs, dec_self_attn, dec_enc_attn = layer(
dec_outputs, enc_outputs, dec_self_attn_mask, dec_enc_attn_mask)
dec_self_attns.append(dec_self_attn)
dec_enc_attns.append(dec_enc_attn)
return dec_outputs, dec_self_attns, dec_enc_attns
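The decoder self-attention mask is the element-wise OR of a padding mask and an upper-triangular "future" mask; the helpers get_attn_pad_mask and get_attn_subsequence_mask are defined in the complete code below, but the combination via torch.gt can be sketched with plain tensor operations:
import torch

dec_inputs = torch.LongTensor([[4, 5, 6, 0, 0]])                        # a toy target sequence padded with index 0
tgt_len = dec_inputs.size(1)
pad_mask = dec_inputs.eq(0).unsqueeze(1).expand(-1, tgt_len, tgt_len)   # True where the attended key is padding
future_mask = torch.triu(torch.ones(1, tgt_len, tgt_len), diagonal=1)   # 1 strictly above the diagonal
dec_self_attn_mask = torch.gt(pad_mask + future_mask, 0)                # True = this position may not be attended to
print(dec_self_attn_mask[0].int())
# tensor([[0, 1, 1, 1, 1],
#         [0, 0, 1, 1, 1],
#         [0, 0, 0, 1, 1],
#         [0, 0, 0, 1, 1],
#         [0, 0, 0, 1, 1]], dtype=torch.int32)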
5. Training and Testing
(1) Define the model, loss function, and optimizer
model = Transformer()
criterion = nn.CrossEntropyLoss(ignore_index=0)
optimizer = optim.SGD(model.parameters(), lr=1e-3, momentum=0.99)
(2) Training
The model parameters are optimized over multiple epochs:
for epoch in range(50):
for enc_inputs, dec_inputs, dec_outputs in loader:
outputs, _, _, _ = model(enc_inputs, dec_inputs)
loss = criterion(outputs, dec_outputs.view(-1))
print('Epoch:', '%04d' % (epoch + 1), 'loss =', '{:.6f}'.format(loss))
optimizer.zero_grad()
loss.backward()
optimizer.step()
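The loss is an ordinary cross-entropy over the flattened logits [batch_size * tgt_len, tgt_vocab_size] against the flattened targets. Note that ignore_index=0 skips targets whose index is 0; in tgt_vocab index 0 is 'S' (which never appears in dec_outputs), while the pad token 'P' is index 2, so in this toy setup nothing is actually ignored. A small standalone sketch of the shapes:
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss(ignore_index=0)
logits = torch.randn(10, 10)                                   # [batch_size * tgt_len, tgt_vocab_size] after flattening
targets = torch.LongTensor([3, 4, 5, 6, 1, 3, 7, 8, 2, 1])     # one flattened batch of dec_outputs
loss = criterion(logits, targets)                              # any target equal to 0 would be excluded from the loss
print(loss.item())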
(3) Translation test
Use the trained model to translate a Chinese sentence into English:
def translate(model, enc_input, start_symbol):
enc_outputs, enc_self_attns = model.Encoder(enc_input)
dec_input = torch.zeros(1, tgt_len).type_as(enc_input.data)
next_symbol = start_symbol
for i in range(0, tgt_len):
dec_input[0][i] = next_symbol
dec_outputs, _, _ = model.Decoder(dec_input, enc_input, enc_outputs)
projected = model.projection(dec_outputs)
prob = projected.squeeze(0).max(dim=-1, keepdim=False)[1]
next_word = prob.data[i]
next_symbol = next_word.item()
return dec_input
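Usage mirrors the test block in the complete code below: take one source sentence from the loader, let translate build the decoder input greedily, then run a full forward pass and read off the argmax token at every position.
enc_inputs, _, _ = next(iter(loader))
predict_dec_input = translate(model, enc_inputs[1].view(1, -1), start_symbol=tgt_vocab["S"])
predict, _, _, _ = model(enc_inputs[1].view(1, -1), predict_dec_input)
predict = predict.data.max(1, keepdim=True)[1]
print([src_idx2word[int(i)] for i in enc_inputs[1]], '->',
      [tgt_idx2word[n.item()] for n in predict.squeeze()])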
III. Execution Flow
- Define and load the data.
- Preprocess the data into index sequences.
- Build the Transformer model.
- Define the loss function and optimizer.
- Train the model.
- Translate with the trained model to test it.
With these steps you can use a Transformer model to translate Chinese sentences into English.
The complete code is given below:
import math
import torch
import numpy as np
import torch.nn as nn
import torch.optim as optim
import torch.utils.data as Data
# Encoder_input        Decoder_input          Decoder_output (predicts the next token)
sentence = [['我 是 学 生 P', 'S I am a student', 'I am a student E'],        # S: start token
            ['我 喜 欢 学 习', 'S I like learning P', 'I like learning P E'],  # E: end token
            ['我 是 男 生 P', 'S I am a boy', 'I am a boy E']]                # P: padding token used to reach the fixed length (pad index 0)
# The first batch below contains sentence[0] and sentence[1]
src_vocab = {'P':0, '我':1, '是':2, '学':3, '生':4, '喜':5, '欢':6, '习':7, '男':8}   # source vocabulary: token -> index
src_idx2word = {src_vocab[key]: key for key in src_vocab}       # index -> token
src_vocab_size = len(src_vocab)                                 # source vocabulary size
# Note: in the target vocabulary 'S' shares index 0 with the padding index
tgt_vocab = {'S':0, 'E':1, 'P':2, 'I':3, 'am':4, 'a':5, 'student':6, 'like':7, 'learning':8, 'boy':9}   # target vocabulary: token -> index
tgt_idx2word = {tgt_vocab[key]: key for key in tgt_vocab}       # index -> token
tgt_vocab_size = len(tgt_vocab)                                 # target vocabulary size
src_len = len(sentence[0][0].split(" "))   # source sequence length
tgt_len = len(sentence[0][1].split(" "))   # target sequence length
# Convert the sentences into vocabulary indices
def make_data(sentences):
    enc_inputs, dec_inputs, dec_outputs = [], [], []            # encoder inputs, decoder inputs, decoder outputs
    for i in range(len(sentences)):                             # iterate over every sentence triple
        enc_input = [[src_vocab[n] for n in sentences[i][0].split()]]    # Encoder_input indices
        dec_input = [[tgt_vocab[n] for n in sentences[i][1].split()]]    # Decoder_input indices
        dec_output = [[tgt_vocab[n] for n in sentences[i][2].split()]]   # Decoder_output indices
        enc_inputs.extend(enc_input)
        dec_inputs.extend(dec_input)
        dec_outputs.extend(dec_output)
    return torch.LongTensor(enc_inputs), torch.LongTensor(dec_inputs), torch.LongTensor(dec_outputs)   # convert to tensors

enc_inputs, dec_inputs, dec_outputs = make_data(sentence)
print(enc_inputs)
print(dec_inputs)
print(dec_outputs)
'''
sentence holds three training examples, Chinese -> English. Encoder_input, Decoder_input and Decoder_output are converted
to vocabulary indices, e.g. "学" -> 3 and "student" -> 6. The data is then grouped into batches of size 2, so the three
sentences form two batches: one with 2 sentences and one with 1. src_len is the fixed maximum length of the Chinese
sentences, tgt_len the fixed maximum length of the English sentences.
'''
# Custom dataset class
class MyDataset(Data.Dataset):
    def __init__(self, enc_inputs, dec_inputs, dec_outputs):
        super(MyDataset, self).__init__()
        self.enc_inputs = enc_inputs      # encoder inputs
        self.dec_inputs = dec_inputs      # decoder inputs
        self.dec_outputs = dec_outputs    # decoder outputs
    def __len__(self):
        return self.enc_inputs.shape[0]   # dataset size
    def __getitem__(self, idx):
        return self.enc_inputs[idx], self.dec_inputs[idx], self.dec_outputs[idx]   # the idx-th example

loader = Data.DataLoader(dataset=MyDataset(enc_inputs, dec_inputs, dec_outputs), batch_size=2, shuffle=False)
d_model = 512     # embedding dimension
d_ff = 2048       # hidden dimension of the feed-forward network
d_k = d_v = 64    # dimensions of K (= Q) and V; V may differ from K = Q
n_layers = 6      # number of encoder and decoder layers
n_heads = 8       # number of attention heads
# Positional encoding
class PositionalEncoding(nn.Module):
    def __init__(self, d_model, dropout=0.1, max_len=5000):
        super(PositionalEncoding, self).__init__()
        self.dropout = nn.Dropout(p=dropout)
        pos_table = np.array([
            [pos / np.power(10000, 2 * i / d_model) for i in range(d_model)]
            if pos != 0 else np.zeros(d_model) for pos in range(max_len)])   # positional encoding table
        pos_table[1:, 0::2] = np.sin(pos_table[1:, 0::2])                    # sine on the even embedding dimensions
        pos_table[1:, 1::2] = np.cos(pos_table[1:, 1::2])                    # cosine on the odd embedding dimensions
        self.pos_table = torch.FloatTensor(pos_table)                        # pos_table: [max_len, d_model]
    def forward(self, enc_inputs):                                           # enc_inputs: [seq_len, batch_size, d_model]
        # Add the positional encoding of the first seq_len positions to the token embeddings
        # (reference: https://www.cnblogs.com/d0main/p/10447853.html)
        enc_inputs = enc_inputs + self.pos_table[:enc_inputs.size(0), :].unsqueeze(1).to(enc_inputs.device)
        return self.dropout(enc_inputs)
'''
Mask the positions that carry no real content, i.e. the padding placeholder. In '我 是 学 生 P', the token P has no
meaning and must be masked; this applies to both Encoder_input and Decoder_input.
This exists because sentences differ in length while the inputs must have a fixed length, so short sentences are padded
with P; the attention computation should not attend to those pad positions, hence the mask.
The core of the function below is seq_k.data.eq(0), which returns a tensor of the same size as seq_k containing only
True and False: True where the value equals 0, False otherwise. For example, with seq_data = [1, 2, 3, 4, 0],
seq_data.data.eq(0) returns [False, False, False, False, True].
'''
def get_attn_pad_mask(seq_q, seq_k):
    """
    Build the padding mask (the tokens are still indices here, not embeddings).
    Sentences are padded with index 0; every token in seq_q "looks at" every token in seq_k once.
    Args:
        In the encoder self-attention, seq_q and seq_k are both enc_input:
            seq_q: [batch, src_len]   (Chinese sentence length)
            seq_k: [batch, src_len]   (Chinese sentence length)
        In the decoder self-attention, seq_q and seq_k are both dec_input:
            seq_q: [batch, tgt_len]   (English sentence length)
            seq_k: [batch, tgt_len]   (English sentence length)
        In the decoder-encoder attention, seq_q is dec_input and seq_k is enc_input:
            seq_q: [batch, tgt_len]   (English sentence length)
            seq_k: [batch, src_len]   (Chinese sentence length)
    Returns:
        [batch_size, len_q, len_k] tensor of True/False values
    """
    batch_size, len_q = seq_q.size()                                 # seq_q only provides the query length for broadcasting
    batch_size, len_k = seq_k.size()
    pad_attn_mask = seq_k.data.eq(0)                                 # True where the key token is the pad index 0, [batch, len_k]
    pad_attn_mask = pad_attn_mask.unsqueeze(1)                       # [batch, 1, len_k]
    pad_attn_mask = pad_attn_mask.expand(batch_size, len_q, len_k)   # [batch_size, len_q, len_k]
    return pad_attn_mask                                             # [batch_size, len_q, len_k], elements are True/False
'''
# Decoder input mask
Masks future positions; the function returns an upper-triangular matrix. In Chinese-to-English translation we first feed
the whole sentence "我是学生" into the Encoder and obtain the last layer's output, and only then feed "S I am a student"
(S marks the start) into the Decoder. That target sentence is not consumed all at once: at step T0 only "S" is given and
the model predicts the first word "I"; at T1 both "S" and "I" are given to predict "am"; at T2 "S, I, am" are given to
predict "a"; and so on until the whole target "I am a student E" has been predicted.
'''
def get_attn_subsequence_mask(seq):
    """
    Build the upper-triangular attention mask.
    Args:
        seq: [batch_size, tgt_len]
    Returns:
        [batch_size, tgt_len, tgt_len]
    """
    attn_shape = [seq.size(0), seq.size(1), seq.size(1)]          # [batch_size, tgt_len, tgt_len]
    subsequence_mask = np.triu(np.ones(attn_shape), k=1)          # 1 strictly above the main diagonal; lower triangle and diagonal are 0
    subsequence_mask = torch.from_numpy(subsequence_mask).byte()  # [batch_size, tgt_len, tgt_len]
    return subsequence_mask
# Scaled dot-product attention
class ScaledDotProductAttention(nn.Module):
    def __init__(self):
        super(ScaledDotProductAttention, self).__init__()
    def forward(self, Q, K, V, attn_mask):
        '''
        Note: d_q and d_k must be equal; d_v may differ from d_q and d_k.
        len_k and len_v must be equal (in this translation task K and V are both derived from the same sequence).
        :param Q: [batch_size, n_heads, len_q, d_k]
        :param K: [batch_size, n_heads, len_k, d_k]
        :param V: [batch_size, n_heads, len_v, d_v]
        :param attn_mask: [batch_size, n_heads, len_q, len_k], boolean (True = masked)
        :return: [batch_size, n_heads, len_q, d_v], [batch_size, n_heads, len_q, len_k]
        '''
        scores = torch.matmul(Q, K.transpose(-1, -2)) / np.sqrt(d_k)   # scores: [batch_size, n_heads, len_q, len_k]
        scores.masked_fill_(attn_mask, -1e9)                           # set masked (padding) positions to -1e9, in place
        attn = nn.Softmax(dim=-1)(scores)                              # masked positions receive ~0 attention weight
        # [batch_size, n_heads, len_q, len_k] x [batch_size, n_heads, len_v, d_v] = [batch_size, n_heads, len_q, d_v]
        context = torch.matmul(attn, V)                                # note: len_k and len_v must be equal
        return context, attn
# Multi-head attention
# The heads are concatenated, projected through a linear layer, then a residual connection and LayerNorm are applied
class MultiHeadAttention(nn.Module):
    def __init__(self):
        super(MultiHeadAttention, self).__init__()
        self.W_Q = nn.Linear(d_model, d_k * n_heads, bias=False)
        self.W_K = nn.Linear(d_model, d_k * n_heads, bias=False)
        self.W_V = nn.Linear(d_model, d_v * n_heads, bias=False)
        self.fc = nn.Linear(n_heads * d_v, d_model, bias=False)
    def forward(self, input_Q, input_K, input_V, attn_mask):
        '''
        Encoder self-attention: input_Q, input_K, input_V are all [batch_size, src_len, d_model] (embedding + position)
        Decoder self-attention: input_Q, input_K, input_V are all [batch_size, tgt_len, d_model]
        Decoder-encoder attention:
            input_Q: output of the decoder self-attention sublayer  [batch_size, tgt_len, d_model]
            input_K: encoder output                                 [batch_size, src_len, d_model]
            input_V: encoder output                                 [batch_size, src_len, d_model]
        :param attn_mask:
            enc_self_attn_mask: [batch_size, src_len, src_len], True marks the (pad) positions to mask
            dec_self_attn_mask: [batch_size, tgt_len, tgt_len], True marks the (pad/future) positions to mask
            dec_enc_attn_mask:  [batch_size, tgt_len, src_len], True marks the (pad) positions to mask
        :return: [batch_size, len_q, d_model]
        '''
        residual, batch_size = input_Q, input_Q.size(0)
        Q = self.W_Q(input_Q).view(batch_size, -1, n_heads, d_k).transpose(1, 2)   # Q: [batch_size, n_heads, len_q, d_k]
        K = self.W_K(input_K).view(batch_size, -1, n_heads, d_k).transpose(1, 2)   # K: [batch_size, n_heads, len_k, d_k]
        V = self.W_V(input_V).view(batch_size, -1, n_heads, d_v).transpose(1, 2)   # V: [batch_size, n_heads, len_v, d_v]
        attn_mask = attn_mask.unsqueeze(1).repeat(1, n_heads, 1, 1)                # attn_mask: [batch_size, n_heads, len_q, len_k]
        context, attn = ScaledDotProductAttention()(Q, K, V, attn_mask)            # context: [batch_size, n_heads, len_q, d_v]
                                                                                   # attn: [batch_size, n_heads, len_q, len_k]
        # Concatenate the heads
        context = context.transpose(1, 2).reshape(batch_size, -1, n_heads * d_v)   # context: [batch_size, len_q, n_heads * d_v]
        output = self.fc(context)                                                  # project back to d_model -> [batch_size, len_q, d_model]
        # Residual connection and LayerNorm (note: constructing LayerNorm in forward means its affine parameters are never trained)
        return nn.LayerNorm(d_model)(output + residual), attn
'''
## Feed-forward network
The input passes through two fully connected layers; the result is added to the input (residual connection) and then
LayerNorm is applied. LayerNorm normalizes over the feature dimension of each token, as opposed to BatchNorm, which
normalizes across the batch.
'''
class FF(nn.Module):
    def __init__(self):
        super(FF, self).__init__()
        self.fc = nn.Sequential(
            nn.Linear(d_model, d_ff, bias=False),
            nn.ReLU(),
            nn.Linear(d_ff, d_model, bias=False)
        )
    def forward(self, inputs):                           # inputs: [batch_size, seq_len, d_model]
        residual = inputs
        output = self.fc(inputs)
        return nn.LayerNorm(d_model)(output + residual)  # [batch_size, seq_len, d_model]
## Encoder layer (block)
class EncoderLayer(nn.Module):
    def __init__(self):
        super(EncoderLayer, self).__init__()
        self.enc_self_attn = MultiHeadAttention()   # multi-head self-attention
        self.pos_ffn = FF()                         # feed-forward network
    def forward(self, enc_inputs, enc_self_attn_mask):
        '''
        :param enc_inputs: [batch_size, src_len, d_model], token embedding + positional encoding
        :param enc_self_attn_mask: [batch_size, src_len, src_len], True marks the (pad) positions to mask
        :return:
        '''
        # enc_inputs is passed in three times and multiplied by W_Q, W_K, W_V to obtain Q, K, V
        enc_outputs, attn = self.enc_self_attn(enc_inputs, enc_inputs, enc_inputs,   # enc_outputs: [batch_size, src_len, d_model]
                                               enc_self_attn_mask)                   # attn: [batch_size, n_heads, src_len, src_len]
        # After multi-head self-attention (Add & Norm), apply the feed-forward network (Add & Norm)
        enc_outputs = self.pos_ffn(enc_outputs)                                       # enc_outputs: [batch_size, src_len, d_model]
        return enc_outputs, attn
'''
## Encoder
Step 1: embed the Chinese token indices into 512-dimensional vectors.
Step 2: add positional information to the token vectors.
Step 3: mask the padding tokens in the sentence.
Step 4: pass the result through 6 encoder layers (each layer's output is the next layer's input).
'''
class Encoder(nn.Module):
    def __init__(self):
        super(Encoder, self).__init__()
        self.src_emb = nn.Embedding(src_vocab_size, d_model)
        self.pos_emb = PositionalEncoding(d_model)
        self.layers = nn.ModuleList(
            [EncoderLayer() for _ in range(n_layers)]
        )
    def forward(self, enc_inputs):
        '''
        enc_inputs: [batch_size, src_len], elements are vocabulary indices
        '''
        enc_outputs = self.src_emb(enc_inputs)                                    # [batch_size, src_len, d_model]
        enc_outputs = self.pos_emb(enc_outputs.transpose(0, 1)).transpose(0, 1)   # [batch_size, src_len, d_model]
        enc_self_attn_mask = get_attn_pad_mask(enc_inputs, enc_inputs)            # [batch_size, src_len, src_len]
        enc_self_attns = []
        for layer in self.layers:
            # enc_outputs: [batch_size, src_len, d_model], enc_self_attn: [batch_size, n_heads, src_len, src_len]
            enc_outputs, enc_self_attn = layer(enc_outputs, enc_self_attn_mask)   # one encoder block's output and attention scores
            enc_self_attns.append(enc_self_attn)                                  # record the attention scores
        # enc_outputs: [batch_size, src_len, d_model]
        # enc_self_attns: list of [batch_size, n_heads, src_len, src_len]
        return enc_outputs, enc_self_attns
# Decoder layer (block)
# The decoder calls MultiHeadAttention twice. In the first call Q, K, V are all derived from dec_inputs; in the second
# call Q comes from the decoder side while K and V come from the encoder output enc_outputs.
class DecoderLayer(nn.Module):
    def __init__(self):
        super(DecoderLayer, self).__init__()
        self.dec_self_attn = MultiHeadAttention()
        self.dec_enc_attn = MultiHeadAttention()
        self.pos_ffn = FF()
    def forward(self, dec_inputs, enc_outputs, dec_self_attn_mask, dec_enc_attn_mask):
        """
        One decoder block contains two multi-head attention sublayers.
        Args:
            dec_inputs: [batch_size, tgt_len, d_model]
            enc_outputs: [batch_size, src_len, d_model]   # encoder output
            dec_self_attn_mask: [batch_size, tgt_len, tgt_len]
            dec_enc_attn_mask: [batch_size, tgt_len, src_len]
        """
        # dec_outputs: [batch_size, tgt_len, d_model]
        # dec_self_attn: [batch_size, n_heads, tgt_len, tgt_len]
        dec_outputs, dec_self_attn = self.dec_self_attn(dec_inputs, dec_inputs, dec_inputs,
                                                        dec_self_attn_mask)
        # The output of the decoder self-attention becomes Q; K and V come from the encoder output
        # dec_outputs: [batch_size, tgt_len, d_model]
        # dec_enc_attn: [batch_size, n_heads, tgt_len, src_len]
        dec_outputs, dec_enc_attn = self.dec_enc_attn(dec_outputs, enc_outputs, enc_outputs,
                                                      dec_enc_attn_mask)
        dec_outputs = self.pos_ffn(dec_outputs)          # dec_outputs: [batch_size, tgt_len, d_model]
        return dec_outputs, dec_self_attn, dec_enc_attn
'''
# Decoder
Step 1: embed the English token indices into 512-dimensional vectors.
Step 2: add positional information to the token vectors.
Step 3: mask the padding tokens and the future positions.
Step 4: pass the result through 6 decoder layers (each layer's output is the next layer's input).
'''
class Decoder(nn.Module):
    def __init__(self):
        super(Decoder, self).__init__()
        self.tgt_emb = nn.Embedding(tgt_vocab_size, d_model)
        self.pos_emb = PositionalEncoding(d_model)
        self.layers = nn.ModuleList([DecoderLayer() for _ in range(n_layers)])
    def forward(self, dec_inputs, enc_inputs, enc_outputs):
        '''
        dec_inputs: [batch_size, tgt_len]
        enc_inputs: [batch_size, src_len]
        enc_outputs: [batch_size, src_len, d_model]
        '''
        dec_outputs = self.tgt_emb(dec_inputs)                                    # [batch_size, tgt_len, d_model]
        dec_outputs = self.pos_emb(dec_outputs.transpose(0, 1)).transpose(0, 1)   # [batch_size, tgt_len, d_model]
        # Padding mask for the decoder input (in this toy example the decoder input has no real padding, but real
        # applications do). Note that 'S' maps to index 0 in tgt_vocab, so it is the first position that gets masked here.
        dec_self_attn_pad_mask = get_attn_pad_mask(dec_inputs, dec_inputs)        # [batch_size, tgt_len, tgt_len], True/False
        '''
        For the first batch ['S I am a student', 'S I like learning P'], dec_self_attn_pad_mask is:
        tensor([[[ True, False, False, False, False],
                 [ True, False, False, False, False],
                 [ True, False, False, False, False],
                 [ True, False, False, False, False],
                 [ True, False, False, False, False]],
                [[ True, False, False, False, False],
                 [ True, False, False, False, False],
                 [ True, False, False, False, False],
                 [ True, False, False, False, False],
                 [ True, False, False, False, False]]])'''
        # Masked self-attention: the current position must not see future positions
        dec_self_attn_subsequence_mask = get_attn_subsequence_mask(
            dec_inputs)   # [batch_size, tgt_len, tgt_len]; lower triangle incl. diagonal is 0, upper triangle is 1
        '''
        tensor([[[0, 1, 1, 1, 1],
                 [0, 0, 1, 1, 1],
                 [0, 0, 0, 1, 1],
                 [0, 0, 0, 0, 1],
                 [0, 0, 0, 0, 0]],
                [[0, 1, 1, 1, 1],
                 [0, 0, 1, 1, 1],
                 [0, 0, 0, 1, 1],
                 [0, 0, 0, 0, 1],
                 [0, 0, 0, 0, 0]]], dtype=torch.uint8)'''
        # Add the two masks so that both the pad positions and the future positions are masked.
        # torch.gt(a, b) compares element-wise and returns True where a > b, otherwise False.
        dec_self_attn_mask = torch.gt((dec_self_attn_pad_mask + dec_self_attn_subsequence_mask),
                                      0)   # [batch_size, tgt_len, tgt_len]
        '''tensor([[[ True,  True,  True,  True,  True],
                    [ True, False,  True,  True,  True],
                    [ True, False, False,  True,  True],   # each position attends to earlier ones; the start token 'S' is masked (it shares index 0 with pad), and later pad positions still attend to earlier positions
                    [ True, False, False, False,  True],
                    [ True, False, False, False, False]],
                   [[ True,  True,  True,  True,  True],
                    [ True, False,  True,  True,  True],
                    [ True, False, False,  True,  True],
                    [ True, False, False, False,  True],
                    [ True, False, False, False, False]]])'''
        # This mask is used by the encoder-decoder attention layer.
        # get_attn_pad_mask builds the pad mask of enc_inputs (the encoder side supplies K and V; attention is a weighted
        # sum of v1, v2, ..., vm, so the weights of pad positions must be forced to 0 so no attention lands on pad vectors).
        # dec_inputs only provides the query length for the expand.
        dec_enc_attn_mask = get_attn_pad_mask(dec_inputs, enc_inputs)   # [batch_size, tgt_len, src_len]
        '''
        For the same batch the encoder inputs are '我 是 学 生 P' and '我 喜 欢 学 习'; the corresponding
        dec_enc_attn_mask (True where the encoder key position is padding) is:
        tensor([[[False, False, False, False,  True],
                 [False, False, False, False,  True],
                 [False, False, False, False,  True],
                 [False, False, False, False,  True],
                 [False, False, False, False,  True]],
                [[False, False, False, False, False],
                 [False, False, False, False, False],
                 [False, False, False, False, False],
                 [False, False, False, False, False],
                 [False, False, False, False, False]]])'''
        dec_self_attns, dec_enc_attns = [], []
        for layer in self.layers:
            # dec_outputs: [batch_size, tgt_len, d_model]
            # dec_self_attn: [batch_size, n_heads, tgt_len, tgt_len]
            # dec_enc_attn: [batch_size, n_heads, tgt_len, src_len]
            dec_outputs, dec_self_attn, dec_enc_attn = layer(dec_outputs, enc_outputs, dec_self_attn_mask,
                                                             dec_enc_attn_mask)
            dec_self_attns.append(dec_self_attn)
            dec_enc_attns.append(dec_enc_attn)
        return dec_outputs, dec_self_attns, dec_enc_attns
'''
# Transformer
The overall structure: the input first goes through the Encoder and then the Decoder, and the decoder output is finally
projected to a classification over the English vocabulary, i.e. a probability for every target token.
'''
class Transformer(nn.Module):
    def __init__(self):
        super(Transformer, self).__init__()
        self.Encoder = Encoder()
        self.Decoder = Decoder()
        # Classification over the English vocabulary
        self.projection = nn.Linear(d_model, tgt_vocab_size, bias=False)
    def forward(self, enc_inputs, dec_inputs):
        """
        Args:
            enc_inputs: [batch_size, src_len]
            dec_inputs: [batch_size, tgt_len]
        """
        # Encoder
        # enc_outputs: [batch_size, src_len, d_model]
        # enc_self_attns: [n_layers, batch_size, n_heads, src_len, src_len]
        enc_outputs, enc_self_attns = self.Encoder(enc_inputs)
        # Decoder
        # dec_outputs: [batch_size, tgt_len, d_model]
        # dec_self_attns: [n_layers, batch_size, n_heads, tgt_len, tgt_len]
        # dec_enc_attns: [n_layers, batch_size, n_heads, tgt_len, src_len]
        dec_outputs, dec_self_attns, dec_enc_attns = self.Decoder(dec_inputs, enc_inputs, enc_outputs)
        dec_logits = self.projection(dec_outputs)                 # dec_logits: [batch_size, tgt_len, tgt_vocab_size]
        dec_logits = dec_logits.view(-1, dec_logits.size(-1))     # dec_logits: [batch_size * tgt_len, tgt_vocab_size]
        return dec_logits, enc_self_attns, dec_self_attns, dec_enc_attns
# Test: greedy decoding with the trained model
def translate(model, enc_input, start_symbol):
    '''
    Greedy decoding: generate the target tokens one position at a time.
    enc_input: [1, src_len]
    '''
    enc_outputs, enc_self_attns = model.Encoder(enc_input)
    dec_input = torch.zeros(1, tgt_len).type_as(enc_input.data)    # decoder input, filled in position by position
    next_symbol = start_symbol
    for i in range(0, tgt_len):
        dec_input[0][i] = next_symbol
        dec_outputs, _, _ = model.Decoder(dec_input, enc_input, enc_outputs)
        projected = model.projection(dec_outputs)
        prob = projected.squeeze(0).max(dim=-1, keepdim=False)[1]  # argmax over the target vocabulary at every position
        next_word = prob.data[i]                                   # token predicted at position i
        next_symbol = next_word.item()
    return dec_input
if __name__ == '__main__':
    # Define the model, loss function, and optimizer
    model = Transformer()
    criterion = nn.CrossEntropyLoss(ignore_index=0)               # ignore targets with index 0
    optimizer = optim.SGD(model.parameters(), lr=1e-3, momentum=0.99)
    # Training loop
    for epoch in range(50):
        for enc_inputs, dec_inputs, dec_outputs in loader:        # [2, 5] [2, 5] [2, 5]
            outputs, _, _, _ = model(enc_inputs, dec_inputs)      # outputs: [batch_size * tgt_len, tgt_vocab_size]
            loss = criterion(outputs, dec_outputs.view(-1))
            print('Epoch:', '%04d' % (epoch + 1), 'loss =', '{:.6f}'.format(loss))
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    # Test the model
    enc_inputs, _, _ = next(iter(loader))
    predict_dec_input = translate(model, enc_inputs[1].view(1, -1), start_symbol=tgt_vocab["S"])
    predict, _, _, _ = model(enc_inputs[1].view(1, -1), predict_dec_input)
    predict = predict.data.max(1, keepdim=True)[1]
    print([src_idx2word[int(i)] for i in enc_inputs[1]], '->', [tgt_idx2word[n.item()] for n in predict.squeeze()])
Sample run output:
tensor([[1, 2, 3, 4, 0],
[1, 5, 6, 3, 7],
[1, 2, 8, 4, 0]])
tensor([[0, 3, 4, 5, 6],
[0, 3, 7, 8, 2],
[0, 3, 4, 5, 9]])
tensor([[3, 4, 5, 6, 1],
[3, 7, 8, 2, 1],
[3, 4, 5, 9, 1]])
Epoch: 0001 loss = 2.458865
Epoch: 0001 loss = 2.614825
Epoch: 0002 loss = 2.137760
Epoch: 0002 loss = 1.978979
Epoch: 0003 loss = 1.742040
Epoch: 0003 loss = 1.073070
Epoch: 0004 loss = 1.528814
Epoch: 0004 loss = 0.651220
Epoch: 0005 loss = 1.293689
Epoch: 0005 loss = 0.317707
Epoch: 0006 loss = 1.065461
Epoch: 0006 loss = 0.121336
Epoch: 0007 loss = 0.797166
Epoch: 0007 loss = 0.066953
Epoch: 0008 loss = 0.576325
Epoch: 0008 loss = 0.036890
Epoch: 0009 loss = 0.356691
Epoch: 0009 loss = 0.029474
Epoch: 0010 loss = 0.226635
Epoch: 0010 loss = 0.055043
Epoch: 0011 loss = 0.174936
Epoch: 0011 loss = 0.127223
Epoch: 0012 loss = 0.121806
Epoch: 0012 loss = 0.180432
Epoch: 0013 loss = 0.096962
Epoch: 0013 loss = 0.170093
Epoch: 0014 loss = 0.081268
Epoch: 0014 loss = 0.068879
Epoch: 0015 loss = 0.083818
Epoch: 0015 loss = 0.040747
Epoch: 0016 loss = 0.077952
Epoch: 0016 loss = 0.016654
Epoch: 0017 loss = 0.077648
Epoch: 0017 loss = 0.009613
Epoch: 0018 loss = 0.138340
Epoch: 0018 loss = 0.010694
Epoch: 0019 loss = 0.126777
Epoch: 0019 loss = 0.007976
Epoch: 0020 loss = 0.116539
Epoch: 0020 loss = 0.007063
Epoch: 0021 loss = 0.079524
Epoch: 0021 loss = 0.004437
Epoch: 0022 loss = 0.029921
Epoch: 0022 loss = 0.004003
Epoch: 0023 loss = 0.015221
Epoch: 0023 loss = 0.002186
Epoch: 0024 loss = 0.008588
Epoch: 0024 loss = 0.002981
Epoch: 0025 loss = 0.005541
Epoch: 0025 loss = 0.003300
Epoch: 0026 loss = 0.005953
Epoch: 0026 loss = 0.004076
Epoch: 0027 loss = 0.004823
Epoch: 0027 loss = 0.007718
Epoch: 0028 loss = 0.005726
Epoch: 0028 loss = 0.015953
Epoch: 0029 loss = 0.006219
Epoch: 0029 loss = 0.017494
Epoch: 0030 loss = 0.005834
Epoch: 0030 loss = 0.008275
Epoch: 0031 loss = 0.007575
Epoch: 0031 loss = 0.011956
Epoch: 0032 loss = 0.005285
Epoch: 0032 loss = 0.017082
Epoch: 0033 loss = 0.005033
Epoch: 0033 loss = 0.018587
Epoch: 0034 loss = 0.005241
Epoch: 0034 loss = 0.006266
Epoch: 0035 loss = 0.004937
Epoch: 0035 loss = 0.002235
Epoch: 0036 loss = 0.003399
Epoch: 0036 loss = 0.000733
Epoch: 0037 loss = 0.004403
Epoch: 0037 loss = 0.000702
Epoch: 0038 loss = 0.003656
Epoch: 0038 loss = 0.000234
Epoch: 0039 loss = 0.002913
Epoch: 0039 loss = 0.000217
Epoch: 0040 loss = 0.002105
Epoch: 0040 loss = 0.000166
Epoch: 0041 loss = 0.002711
Epoch: 0041 loss = 0.000139
Epoch: 0042 loss = 0.002726
Epoch: 0042 loss = 0.000234
Epoch: 0043 loss = 0.002743
Epoch: 0043 loss = 0.000211
Epoch: 0044 loss = 0.002601
Epoch: 0044 loss = 0.000223
Epoch: 0045 loss = 0.002893
Epoch: 0045 loss = 0.000354
Epoch: 0046 loss = 0.002688
Epoch: 0046 loss = 0.000465
Epoch: 0047 loss = 0.002145
Epoch: 0047 loss = 0.000747
Epoch: 0048 loss = 0.001917
Epoch: 0048 loss = 0.000610
Epoch: 0049 loss = 0.002184
Epoch: 0049 loss = 0.000721
Epoch: 0050 loss = 0.001443
Epoch: 0050 loss = 0.001024
['我', '喜', '欢', '学', '习'] -> ['I', 'am', 'learning', 'P', 'E']