RNN, LSTM, GRU, DeepRNN, and BiLSTM


RNN (Recurrent Neural Network)

$$\boldsymbol H_t=\phi(\boldsymbol X_t\boldsymbol W_{xh}+\boldsymbol H_{t-1}\boldsymbol W_{hh}+\boldsymbol b_h) \tag{1}$$

$$\boldsymbol{OUTPUT}_t=\boldsymbol H_t \boldsymbol W_{hq}+\boldsymbol b_q \tag{2}$$

Here, $\boldsymbol X_t \in R^{n \times d}$ is the mini-batch input at time step $t$ and $\boldsymbol H_t \in R^{n \times h}$ is the hidden state of that step, where $n$ is the batch size, $d$ is the dimension of an encoded token vector, and $h$ is the number of hidden units; $\boldsymbol W_{xh} \in R^{d \times h}$, $\boldsymbol W_{hh} \in R^{h \times h}$, $\boldsymbol b_h \in R^{1 \times h}$, and $\phi$ is a nonlinear activation function. Because the term $\boldsymbol H_{t-1}\boldsymbol W_{hh}$ is included, $\boldsymbol H_t$ can capture the sequence history up to the current step. At time step $t$ the output layer produces $\boldsymbol{OUTPUT}_t$, with $\boldsymbol W_{hq} \in R^{h \times q}$, $\boldsymbol b_q \in R^{1 \times q}$, and $q$ the output dimension. Equations (1) and (2) therefore contain 5 learnable parameters: $\boldsymbol W_{xh}$, $\boldsymbol W_{hh}$, $\boldsymbol b_h$, $\boldsymbol W_{hq}$, $\boldsymbol b_q$.
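
To make the shapes concrete, here is a minimal sketch (with assumed toy values $n=2$, $d=5$, $h=4$, $q=5$) that checks equations (1) and (2) dimensionally; the parameter setup and forward loop of the scratch implementation then follow.

import torch

n, d, h, q = 2, 5, 4, 5                 # toy batch size, input dim, hidden units, output dim
X_t = torch.randn(n, d)                 # mini-batch input at time step t
H_prev = torch.zeros(n, h)              # previous hidden state H_{t-1}
W_xh, W_hh, b_h = torch.randn(d, h), torch.randn(h, h), torch.zeros(1, h)
W_hq, b_q = torch.randn(h, q), torch.zeros(1, q)

H_t = torch.tanh(X_t @ W_xh + H_prev @ W_hh + b_h)   # equation (1), with phi = tanh
out_t = H_t @ W_hq + b_q                             # equation (2)
print(H_t.shape, out_t.shape)                        # torch.Size([2, 4]) torch.Size([2, 5])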

import torch
from torch import nn
import numpy as np
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
num_inputs, num_hiddens, num_outputs = vocab_size, 256, vocab_size  # vocab_size: vocabulary size of the dataset built earlier
# num_inputs: d
# num_hiddens: h, the number of hidden units is a hyperparameter
# num_outputs: q

def get_params():
    def _one(shape):
        param = torch.zeros(shape, device=device, dtype=torch.float32)
        nn.init.normal_(param, 0, 0.01)
        return torch.nn.Parameter(param)

    # hidden layer parameters
    W_xh = _one((num_inputs, num_hiddens))
    W_hh = _one((num_hiddens, num_hiddens))
    b_h = torch.nn.Parameter(torch.zeros(num_hiddens, device=device))
    # output layer parameters
    W_hq = _one((num_hiddens, num_outputs))
    b_q = torch.nn.Parameter(torch.zeros(num_outputs, device=device))
    return (W_xh, W_hh, b_h, W_hq, b_q)

The RNN forward computation uses a loop over the time steps:

def rnn(inputs, state, params):
    # inputs and outputs are both lists of num_steps matrices of shape (batch_size, vocab_size)
    W_xh, W_hh, b_h, W_hq, b_q = params
    H, = state
    outputs = []
    for X in inputs:
        H = torch.tanh(torch.matmul(X, W_xh) + torch.matmul(H, W_hh) + b_h)  # equation (1)
        Y = torch.matmul(H, W_hq) + b_q  # equation (2)
        outputs.append(Y)
    return outputs, (H,)

In addition, the network needs an initial hidden state $\boldsymbol H_0$; it is initialized to zeros:

def init_rnn_state(batch_size, num_hiddens, device):
    return (torch.zeros((batch_size, num_hiddens), device=device), )
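
As a usage sketch (assuming vocab_size, device, and the definitions above are in scope), a toy batch of token indices can be one-hot encoded per time step and pushed through the scratch RNN:

import torch
import torch.nn.functional as F

X = torch.randint(0, vocab_size, (2, 3), device=device)   # toy batch: 2 sequences of 3 token indices
# num_steps inputs, each of shape (batch_size, vocab_size)
inputs = [F.one_hot(X[:, t], vocab_size).float() for t in range(X.shape[1])]

params = get_params()
state = init_rnn_state(batch_size=2, num_hiddens=num_hiddens, device=device)
outputs, (H,) = rnn(inputs, state, params)
# 3 outputs of shape (batch_size, vocab_size); the final hidden state H has shape (2, 256)
print(len(outputs), outputs[0].shape, H.shape)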

LSTM


$$\boldsymbol I_t=\sigma(\boldsymbol X_t \boldsymbol W_{xi}+\boldsymbol H_{t-1} \boldsymbol W_{hi}+\boldsymbol b_i) \tag{3}$$

$$\boldsymbol F_t=\sigma(\boldsymbol X_t \boldsymbol W_{xf}+\boldsymbol H_{t-1}\boldsymbol W_{hf}+\boldsymbol b_f) \tag{4}$$

$$\boldsymbol O_t = \sigma(\boldsymbol X_t \boldsymbol W_{xo} + \boldsymbol H_{t-1}\boldsymbol W_{ho} + \boldsymbol b_o) \tag{5}$$

$$\boldsymbol{\widetilde C_t} = \tanh(\boldsymbol X_t \boldsymbol W_{xc} + \boldsymbol H_{t-1} \boldsymbol W_{hc} +\boldsymbol b_c) \tag{6}$$

$$\boldsymbol C_t = \boldsymbol F_t \bigotimes \boldsymbol C_{t-1}+\boldsymbol I_t \bigotimes \boldsymbol{\widetilde C_t} \tag{7}$$

$$\boldsymbol H_t=\boldsymbol O_t \bigotimes \tanh(\boldsymbol C_t) \tag{8}$$

Here, $\boldsymbol X_t \in R^{n \times d}$ is the mini-batch input at time step $t$ and $\boldsymbol H_t \in R^{n \times h}$ is the hidden state of that step; $\boldsymbol I_t$, $\boldsymbol F_t$, $\boldsymbol{\widetilde C_t}$, $\boldsymbol O_t$ denote the input gate, forget gate, candidate memory cell, and output gate, $\sigma$ is the sigmoid function, and $\bigotimes$ denotes the Hadamard (element-wise) product. Each LSTM unit contains $\boldsymbol W_{xi} \in R^{d \times h}, \boldsymbol W_{hi} \in R^{h \times h}, \boldsymbol b_i \in R^{1 \times h}$; $\boldsymbol W_{xf} \in R^{d \times h}, \boldsymbol W_{hf} \in R^{h \times h}, \boldsymbol b_f \in R^{1 \times h}$; $\boldsymbol W_{xo} \in R^{d \times h}, \boldsymbol W_{ho} \in R^{h \times h}, \boldsymbol b_o \in R^{1 \times h}$; $\boldsymbol W_{xc} \in R^{d \times h}, \boldsymbol W_{hc} \in R^{h \times h}, \boldsymbol b_c \in R^{1 \times h}$; together with $\boldsymbol W_{hq} \in R^{h \times q}, \boldsymbol b_q \in R^{1 \times q}$, that is 14 learnable parameters.
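
Counting scalar entries rather than tensors (each of the four gate/candidate blocks contributes $dh+h^2+h$ values and the output layer contributes $hq+q$), the total number of values to learn, with $d=q=\text{vocab\_size}$ and $h$ the number of hidden units (256 in the code below), is:

$$4(dh+h^2+h)+hq+q$$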

The output is computed as:
$$\boldsymbol{OUTPUT}_t=\boldsymbol H_t \boldsymbol W_{hq}+\boldsymbol b_q$$
The code is as follows:

num_inputs, num_hiddens, num_outputs = vocab_size, 256, vocab_size
print('will use', device)

def get_params():
    def _one(shape):
        ts = torch.tensor(np.random.normal(0, 0.01, size=shape), device=device, dtype=torch.float32)
        return torch.nn.Parameter(ts, requires_grad=True)
    def _three():
        return (_one((num_inputs, num_hiddens)),
                _one((num_hiddens, num_hiddens)),
                torch.nn.Parameter(torch.zeros(num_hiddens, device=device, dtype=torch.float32), requires_grad=True))
    
    W_xi, W_hi, b_i = _three()  # input gate parameters
    W_xf, W_hf, b_f = _three()  # forget gate parameters
    W_xo, W_ho, b_o = _three()  # output gate parameters
    W_xc, W_hc, b_c = _three()  # candidate memory cell parameters
    
    # output layer parameters
    W_hq = _one((num_hiddens, num_outputs))
    b_q = torch.nn.Parameter(torch.zeros(num_outputs, device=device, dtype=torch.float32), requires_grad=True)
    return nn.ParameterList([W_xi, W_hi, b_i, W_xf, W_hf, b_f, W_xo, W_ho, b_o, W_xc, W_hc, b_c, W_hq, b_q])

In addition, both the hidden state and the memory cell need to be initialized:

def init_lstm_state(batch_size, num_hiddens, device):
    return (torch.zeros((batch_size, num_hiddens), device=device), 
            torch.zeros((batch_size, num_hiddens), device=device))

The scratch LSTM performs its forward computation by iterating over the time steps:

def lstm(inputs, state, params):
    [W_xi, W_hi, b_i, W_xf, W_hf, b_f, W_xo, W_ho, b_o, W_xc, W_hc, b_c, W_hq, b_q] = params
    (H, C) = state
    outputs = []
    for X in inputs:
        I = torch.sigmoid(torch.matmul(X, W_xi) + torch.matmul(H, W_hi) + b_i)  # equation (3)
        F = torch.sigmoid(torch.matmul(X, W_xf) + torch.matmul(H, W_hf) + b_f)  # equation (4)
        O = torch.sigmoid(torch.matmul(X, W_xo) + torch.matmul(H, W_ho) + b_o)  # equation (5)
        C_tilda = torch.tanh(torch.matmul(X, W_xc) + torch.matmul(H, W_hc) + b_c)  # equation (6)
        C = F * C + I * C_tilda  # equation (7)
        H = O * C.tanh()  # equation (8)
        Y = torch.matmul(H, W_hq) + b_q
        outputs.append(Y)
    return outputs, (H, C)
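
As a quick forward-pass sketch (assuming vocab_size, device, and the definitions above are in scope), note that the LSTM state now carries both the hidden state and the memory cell:

import torch
import torch.nn.functional as F

X = torch.randint(0, vocab_size, (2, 3), device=device)   # toy batch: 2 sequences, 3 steps
inputs = [F.one_hot(X[:, t], vocab_size).float() for t in range(X.shape[1])]

state = init_lstm_state(batch_size=2, num_hiddens=num_hiddens, device=device)
outputs, (H, C) = lstm(inputs, state, get_params())
print(H.shape, C.shape)   # both torch.Size([2, 256])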

Using PyTorch's built-in nn.LSTM:

num_hiddens=256
num_epochs, num_steps, batch_size, lr, clipping_theta = 160, 35, 32, 1e2, 1e-2
pred_period, pred_len, prefixes = 40, 50, ['分开', '不分开']

lr = 1e-2 # note: adjust the learning rate
lstm_layer = nn.LSTM(input_size=vocab_size, hidden_size=num_hiddens)
model = d2l.RNNModel(lstm_layer, vocab_size)
d2l.train_and_predict_rnn_pytorch(model, num_hiddens, vocab_size, device,
                                corpus_indices, idx_to_char, char_to_idx,
                                num_epochs, num_steps, lr, clipping_theta,
                                batch_size, pred_period, pred_len, prefixes)
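
For reference, here is a minimal sketch of the raw nn.LSTM interface that d2l.RNNModel wraps; the shapes assume the default batch_first=False, and 1027 is a hypothetical vocab_size:

import torch
from torch import nn

lstm_layer = nn.LSTM(input_size=1027, hidden_size=256)   # 1027: hypothetical vocab_size
X = torch.rand(35, 32, 1027)                             # (num_steps, batch_size, input_size)
output, (h_n, c_n) = lstm_layer(X)                       # initial state defaults to zeros
print(output.shape)          # torch.Size([35, 32, 256]) -> one hidden state per time step
print(h_n.shape, c_n.shape)  # torch.Size([1, 32, 256]) each -> final step, (num_layers, batch, hidden)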

GRU

$$\boldsymbol R_t=\sigma(\boldsymbol X_t\boldsymbol W_{xr}+\boldsymbol H_{t-1}\boldsymbol W_{hr}+\boldsymbol b_r) \tag{9}$$

$$\boldsymbol Z_t=\sigma(\boldsymbol X_t \boldsymbol W_{xz}+\boldsymbol H_{t-1}\boldsymbol W_{hz}+\boldsymbol b_z) \tag{10}$$

$$\boldsymbol{\widetilde H_t}=\tanh(\boldsymbol X_t \boldsymbol W_{xh}+(\boldsymbol R_t \bigotimes \boldsymbol H_{t-1})\boldsymbol W_{hh}+\boldsymbol b_h) \tag{11}$$

$$\boldsymbol H_t = \boldsymbol Z_t \bigotimes \boldsymbol H_{t-1}+(1-\boldsymbol Z_t)\bigotimes \boldsymbol{\widetilde H_t} \tag{12}$$

Here, $\boldsymbol X_t \in R^{n \times d}$ is the mini-batch input at time step $t$ and $\boldsymbol H_t \in R^{n \times h}$ is the hidden state of that step; $\boldsymbol R_t$ and $\boldsymbol Z_t$ are the reset gate and the update gate. Each GRU unit contains $\boldsymbol W_{xr} \in R^{d \times h}, \boldsymbol W_{hr} \in R^{h \times h}, \boldsymbol b_r \in R^{1 \times h}$; $\boldsymbol W_{xz} \in R^{d \times h}, \boldsymbol W_{hz} \in R^{h \times h}, \boldsymbol b_z \in R^{1 \times h}$; $\boldsymbol W_{xh} \in R^{d \times h}, \boldsymbol W_{hh} \in R^{h \times h}, \boldsymbol b_h \in R^{1 \times h}$; together with $\boldsymbol W_{hq} \in R^{h \times q}, \boldsymbol b_q \in R^{1 \times q}$, that is 11 learnable parameters.
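
A tiny sketch (toy shapes $n=2$, $h=4$) of equation (12) shows why the update gate enables long-range carry-over: when $\boldsymbol Z_t$ is all ones, the new hidden state simply copies $\boldsymbol H_{t-1}$. The scratch parameter initialization and forward pass then follow.

import torch

H_prev  = torch.randn(2, 4)       # previous hidden state H_{t-1}
H_tilda = torch.randn(2, 4)       # candidate hidden state
Z = torch.ones(2, 4)              # update gate saturated at 1
H = Z * H_prev + (1 - Z) * H_tilda   # equation (12)
print(torch.allclose(H, H_prev))     # True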

num_inputs, num_hiddens, num_outputs = vocab_size, 256, vocab_size
print('will use', device)

def get_params():  
    def _one(shape):
        ts = torch.tensor(np.random.normal(0, 0.01, size=shape), device=device, dtype=torch.float32)  # normal distribution
        return torch.nn.Parameter(ts, requires_grad=True)
    def _three():
        return (_one((num_inputs, num_hiddens)),
                _one((num_hiddens, num_hiddens)),
                torch.nn.Parameter(torch.zeros(num_hiddens, device=device, dtype=torch.float32), requires_grad=True))
     
    W_xz, W_hz, b_z = _three()  # update gate parameters
    W_xr, W_hr, b_r = _three()  # reset gate parameters
    W_xh, W_hh, b_h = _three()  # candidate hidden state parameters
    
    # output layer parameters
    W_hq = _one((num_hiddens, num_outputs))
    b_q = torch.nn.Parameter(torch.zeros(num_outputs, device=device, dtype=torch.float32), requires_grad=True)
    return nn.ParameterList([W_xz, W_hz, b_z, W_xr, W_hr, b_r, W_xh, W_hh, b_h, W_hq, b_q])

The output at time step $t$ is computed as:
$$\boldsymbol{OUTPUT}_t=\boldsymbol H_t \boldsymbol W_{hq}+\boldsymbol b_q$$

The GRU only needs to initialize the hidden state:

def init_gru_state(batch_size, num_hiddens, device):   # hidden state initialization
    return (torch.zeros((batch_size, num_hiddens), device=device), )

The GRU forward computation, implemented iteratively over the time steps:

def gru(inputs, state, params):
    W_xz, W_hz, b_z, W_xr, W_hr, b_r, W_xh, W_hh, b_h, W_hq, b_q = params
    H, = state
    outputs = []
    for X in inputs:
        Z = torch.sigmoid(torch.matmul(X, W_xz) + torch.matmul(H, W_hz) + b_z)  # equation (10)
        R = torch.sigmoid(torch.matmul(X, W_xr) + torch.matmul(H, W_hr) + b_r)  # equation (9)
        H_tilda = torch.tanh(torch.matmul(X, W_xh) + torch.matmul(R * H, W_hh) + b_h)  # equation (11): reset gate applied to H before the matmul
        H = Z * H + (1 - Z) * H_tilda  # equation (12)
        Y = torch.matmul(H, W_hq) + b_q
        outputs.append(Y)
    return outputs, (H,)

Deep Recurrent Neural Network (DeepRNN)


A deep RNN stacks multiple recurrent layers, so the hidden states of one layer serve as the inputs of the layer above; with PyTorch's built-in modules this only requires the num_layers argument. Code:

num_hiddens=256
num_epochs, num_steps, batch_size, lr, clipping_theta = 160, 35, 32, 1e2, 1e-2
pred_period, pred_len, prefixes = 40, 50, ['分开', '不分开']

lr = 1e-2 # note: adjust the learning rate

lstm_layer = nn.LSTM(input_size=vocab_size, hidden_size=num_hiddens, num_layers=2)
model = d2l.RNNModel(lstm_layer, vocab_size).to(device)
d2l.train_and_predict_rnn_pytorch(model, num_hiddens, vocab_size, device,
                                corpus_indices, idx_to_char, char_to_idx,
                                num_epochs, num_steps, lr, clipping_theta,
                                batch_size, pred_period, pred_len, prefixes)
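
A small sketch (again with a hypothetical input size of 1027) shows how num_layers=2 changes the state shapes: the leading dimension of h_n and c_n becomes the number of stacked layers, while the per-step output still comes from the top layer only:

import torch
from torch import nn

deep_lstm = nn.LSTM(input_size=1027, hidden_size=256, num_layers=2)  # 1027: hypothetical vocab_size
X = torch.rand(35, 32, 1027)                      # (num_steps, batch_size, input_size)
output, (h_n, c_n) = deep_lstm(X)
print(output.shape)          # torch.Size([35, 32, 256]) -> top layer's hidden states
print(h_n.shape, c_n.shape)  # torch.Size([2, 32, 256]) each -> one slice per stacked layer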

BiLSTM


A bidirectional RNN runs one set of hidden states forward in time and another backward, then concatenates them at every step, so the per-step feature size becomes 2 * num_hiddens; with PyTorch's built-in modules this only requires bidirectional=True. Code:

num_hiddens=128
num_epochs, num_steps, batch_size, lr, clipping_theta = 160, 35, 32, 1e-2, 1e-2
pred_period, pred_len, prefixes = 40, 50, ['分开', '不分开']

lr = 1e-2 # note: adjust the learning rate

lstm_layer = nn.LSTM(input_size=vocab_size, hidden_size=num_hiddens, bidirectional=True)
model = d2l.RNNModel(lstm_layer, vocab_size).to(device)
d2l.train_and_predict_rnn_pytorch(model, num_hiddens, vocab_size, device,
                                corpus_indices, idx_to_char, char_to_idx,
                                num_epochs, num_steps, lr, clipping_theta,
                                batch_size, pred_period, pred_len, prefixes)
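
A minimal sketch (with a hypothetical input size of 1027) illustrates the doubled feature dimension produced by bidirectional=True:

import torch
from torch import nn

bi_lstm = nn.LSTM(input_size=1027, hidden_size=128, bidirectional=True)  # 1027: hypothetical vocab_size
X = torch.rand(35, 32, 1027)                      # (num_steps, batch_size, input_size)
output, (h_n, c_n) = bi_lstm(X)
print(output.shape)          # torch.Size([35, 32, 256]) -> forward and backward states concatenated (2 * 128)
print(h_n.shape, c_n.shape)  # torch.Size([2, 32, 128]) each -> (num_layers * num_directions, batch, hidden)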
