[Notes][Transformer] Attention Is All You Need

Reading notes on the original Transformer paper, Attention Is All You Need.
Best read alongside the original paper.

Reference:
Vaswani A, Shazeer N, Parmar N, et al. Attention is all you need[J]. Advances in neural information processing systems, 2017, 30.

圈圈 2022/3/12 first draft: introduces the model together with code
Main text below ↓

Attention Is All You Need

Transformer

model architecture

$(x_1,...,x_n)\xrightarrow{\ \text{encoder}\ }(z_1,...,z_n)\xrightarrow{\ \text{decoder}\ }(y_1,...,y_m)$
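
The encoder maps an input sequence of symbols to continuous representations $z$; given $z$, the decoder generates the output sequence one symbol at a time, auto-regressively consuming the previously generated symbols as additional input. A minimal greedy-decoding sketch of this interface (the names encode, decode, bos_id, eos_id and max_len are placeholders of mine, not from the paper or the reference code):

import torch

@torch.no_grad()
def greedy_decode(encode, decode, src, bos_id, eos_id, max_len=50):
    # (x_1, ..., x_n) --encoder--> (z_1, ..., z_n)
    z = encode(src)
    # start every target sequence with <bos>
    ys = torch.full((src.size(0), 1), bos_id, dtype=torch.long, device=src.device)
    for _ in range(max_len):
        # (y_1, ..., y_t) and z --decoder--> logits for y_{t+1}
        logits = decode(ys, z)
        next_tok = logits[:, -1].argmax(dim=-1, keepdim=True)
        ys = torch.cat([ys, next_tok], dim=1)
        if (next_tok == eos_id).all():  # stop once every sequence emitted <eos>
            break
    return ys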

encoder and decoder stacks

encoder

composed of a stack of $N=6$ identical layers; each layer has two sub-layers:

  • multi-head self-attention mechanism $x\to S(x)$,
    followed by a residual connection and layer normalization:
    $x\to \mathrm{LayerNorm}(x+S(x))$

  • simple position-wise fully connected feed-forward network,
    wrapped in the same residual connection + layer normalization

To facilitate these residual connections, all sub-layers in the model, as well as the embedding layers, produce outputs of dimension $d_{model}=512$

import torch.nn as nn

# reference code by Yu-Hsiang Huang:
# https://github.com/jadore801120/attention-is-all-you-need-pytorch
# (MultiHeadAttention and PositionwiseFeedForward are defined in that repo)


class EncoderLayer(nn.Module):
    ''' Compose with two layers '''

    def __init__(self, d_model, d_inner, n_head, d_k, d_v, dropout=0.1):
        super(EncoderLayer, self).__init__()
        self.slf_attn = MultiHeadAttention(n_head, d_model, d_k, d_v, dropout=dropout)
        self.pos_ffn = PositionwiseFeedForward(d_model, d_inner, dropout=dropout)

    def forward(self, enc_input, slf_attn_mask=None):
        # self-attention: queries, keys and values all come from the encoder input
        enc_output, enc_slf_attn = self.slf_attn(
            enc_input, enc_input, enc_input, mask=slf_attn_mask)
        # position-wise feed-forward network
        enc_output = self.pos_ffn(enc_output)
        return enc_output, enc_slf_attn

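In the reference code above, the residual connection and layer normalization are applied inside MultiHeadAttention and PositionwiseFeedForward themselves. A standalone sketch of the Add & Norm wrapper $x\to \mathrm{LayerNorm}(x+S(x))$, using a hypothetical SublayerConnection module of my own that is not part of the repo:

import torch.nn as nn

class SublayerConnection(nn.Module):
    ''' Hypothetical Add & Norm wrapper: x -> LayerNorm(x + sublayer(x)) '''

    def __init__(self, d_model, dropout=0.1):
        super().__init__()
        self.layer_norm = nn.LayerNorm(d_model, eps=1e-6)
        self.dropout = nn.Dropout(dropout)

    def forward(self, x, sublayer):
        # sublayer is a callable, e.g. self-attention or the feed-forward network
        return self.layer_norm(x + self.dropout(sublayer(x)))
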
decoder

also composed of a stack of $N=6$ identical layers, but with three sub-layers each

the decoder inserts a third sub-layer, which performs multi-head attention over the output of the encoder stack; its self-attention sub-layer is also masked to prevent positions from attending to subsequent positions.


class DecoderLayer(nn.Module):
    ''' Compose with three layers '''

    def __init__(self, d_model, d_inner, n_head, d_k, d_v, dropout=0.1):
        super(DecoderLayer, self).__init__()
        self.slf_attn = MultiHeadAttention(n_head, d_model, d_k, d_v, dropout=dropout)
        self.enc_attn = MultiHeadAttention(n_head, d_model, d_k, d_v, dropout=dropout)
        self.pos_ffn = PositionwiseFeedForward(d_model, d_inner, dropout=dropout)

    def forward(
            self, dec_input, enc_output,
            slf_attn_mask=None, dec_enc_attn_mask=None):
        # masked self-attention over the already generated target tokens
        dec_output, dec_slf_attn = self.slf_attn(
            dec_input, dec_input, dec_input, mask=slf_attn_mask)
        # encoder-decoder attention: queries from the decoder,
        # keys and values from the encoder output
        dec_output, dec_enc_attn = self.enc_attn(
            dec_output, enc_output, enc_output, mask=dec_enc_attn_mask)
        # position-wise feed-forward network
        dec_output = self.pos_ffn(dec_output)
        return dec_output, dec_slf_attn, dec_enc_attn
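
The decoder's self-attention mask prevents position $i$ from attending to positions after $i$; combined with offsetting the output embeddings by one position, this ensures the prediction at position $i$ depends only on known outputs at positions less than $i$. A causal-mask sketch in the spirit of the reference repo's get_subsequent_mask (details may differ):

import torch

def get_subsequent_mask(seq):
    ''' Causal mask for decoder self-attention: True where attention is allowed,
        i.e. position i may only attend to positions <= i. Shape: (1, len, len). '''
    len_s = seq.size(1)
    return torch.tril(torch.ones((1, len_s, len_s), device=seq.device)).bool()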

attention

An attention function can be described as mapping a query and a set of key-value pairs to an output, where the query, keys, values, and output are all vectors. The output is computed as a weighted sum of the values, where the weight assigned to each value is computed by a compatibility function of the query with the corresponding key.

scaled dot-product attention
  • Q: packed queries, query dim $d_k$
  • K: packed keys, key dim $d_k$
  • V: packed values, value dim $d_v$

$Attention(Q,K,V)=\mathrm{softmax}\left(\dfrac{QK^T}{\sqrt{d_k}}\right)V$

reason for scaling the dot products by $\frac{1}{\sqrt{d_k}}$:

While for small values of $d_k$ the two mechanisms perform similarly, additive attention outperforms dot-product attention without scaling for larger values of $d_k$. We suspect that for large values of $d_k$, the dot products grow large in magnitude, pushing the softmax function into regions where it has extremely small gradients.

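A minimal functional sketch of the formula above (dropout omitted):

import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v, mask=None):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    # q, k: (..., len, d_k); v: (..., len, d_v)
    # mask: broadcastable to the score shape, nonzero where attention is allowed
    d_k = q.size(-1)
    scores = torch.matmul(q, k.transpose(-2, -1)) / d_k ** 0.5
    if mask is not None:
        scores = scores.masked_fill(mask == 0, -1e9)  # block masked-out positions
    attn = F.softmax(scores, dim=-1)
    return torch.matmul(attn, v), attn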

multi-head attention

several attention layers in parallel + projection
$MultiHead(Q,K,V)=\mathrm{Concat}(head_1,...,head_h)W^O$
$\text{where } head_i=Attention(QW^Q_i,\,KW^K_i,\,VW^V_i)$
projection matrices: $W^Q_i\in\mathbb{R}^{d_{model}\times d_k}$, $W^K_i\in\mathbb{R}^{d_{model}\times d_k}$, $W^V_i\in\mathbb{R}^{d_{model}\times d_v}$, $W^O\in\mathbb{R}^{hd_v\times d_{model}}$
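
In the paper $h=8$ parallel heads are used, with $d_k=d_v=d_{model}/h=64$. A compact sketch of the whole computation, reusing the scaled_dot_product_attention sketch above (simplified: dropout, residual connection and LayerNorm are omitted, unlike the reference MultiHeadAttention):

import torch.nn as nn

class MultiHeadAttentionSketch(nn.Module):
    ''' Simplified multi-head attention (no dropout, residual or LayerNorm) '''

    def __init__(self, n_head, d_model, d_k, d_v):
        super().__init__()
        self.n_head, self.d_k, self.d_v = n_head, d_k, d_v
        self.w_q = nn.Linear(d_model, n_head * d_k, bias=False)  # all W^Q_i stacked
        self.w_k = nn.Linear(d_model, n_head * d_k, bias=False)  # all W^K_i stacked
        self.w_v = nn.Linear(d_model, n_head * d_v, bias=False)  # all W^V_i stacked
        self.w_o = nn.Linear(n_head * d_v, d_model, bias=False)  # W^O

    def forward(self, q, k, v, mask=None):
        b, len_q, len_k = q.size(0), q.size(1), k.size(1)
        # project, then split into heads: (batch, len, h*d) -> (batch, h, len, d)
        q = self.w_q(q).view(b, len_q, self.n_head, self.d_k).transpose(1, 2)
        k = self.w_k(k).view(b, len_k, self.n_head, self.d_k).transpose(1, 2)
        v = self.w_v(v).view(b, len_k, self.n_head, self.d_v).transpose(1, 2)
        # scaled dot-product attention per head (uses the sketch defined above)
        out, attn = scaled_dot_product_attention(q, k, v, mask=mask)
        # concat heads: (batch, h, len_q, d_v) -> (batch, len_q, h*d_v), then apply W^O
        out = out.transpose(1, 2).contiguous().view(b, len_q, -1)
        return self.w_o(out), attn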

position-wise feed-forward networks

$FFN(x)=\mathrm{ReLU}(xW_1+b_1)W_2+b_2$

two fully connected layers, applied to each position separately and identically (inner dimension $d_{ff}=2048$ in the paper)

# inside the reference PositionwiseFeedForward module
# (nn = torch.nn, F = torch.nn.functional):
self.w_1 = nn.Linear(d_in, d_hid)  # position-wise
self.w_2 = nn.Linear(d_hid, d_in)  # position-wise

# forward pass (the module's residual connection and LayerNorm are omitted here):
x = self.w_2(F.relu(self.w_1(x)))

embeddings and softmax

share the same weight matrix between the two embedding layers and the pre-softmax linear transformation

self.trg_word_prj = nn.Linear(d_model, n_trg_vocab, bias=False)

seq_logit = self.trg_word_prj(dec_output)
if self.scale_prj:
    seq_logit *= self.d_model ** -0.5
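
The sharing itself is done by pointing the pre-softmax projection at the decoder embedding's weight, roughly as below (attribute names paraphrased from the reference Transformer's __init__ and may differ slightly):

# tie the target-side embedding and the pre-softmax projection
# (paraphrased from the reference implementation; names may differ)
if trg_emb_prj_weight_sharing:
    self.trg_word_prj.weight = self.decoder.trg_word_emb.weight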

a softmax then converts the logits into predicted next-token probabilities

positional encoding

Since our model contains no recurrence and no convolution, in order for the model to make use of the order of the sequence, we must inject some information about the relative or absolute position of the tokens in the sequence.

$PE_{(pos,2i)}=\sin(pos/10000^{2i/d_{model}})$
$PE_{(pos,2i+1)}=\cos(pos/10000^{2i/d_{model}})$

This is the sinusoidal version; the paper also experimented with learned positional embeddings and found the two produced nearly identical results.

sinusoid_table[:, 0::2] = np.sin(sinusoid_table[:, 0::2])  # dim 2i
sinusoid_table[:, 1::2] = np.cos(sinusoid_table[:, 1::2])  # dim 2i+1
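
A self-contained sketch of building the whole table (in the spirit of the reference repo's PositionalEncoding helper, which is written somewhat differently):

import numpy as np
import torch

def get_sinusoid_encoding_table(n_position, d_model):
    # PE[pos, 2i]   = sin(pos / 10000^(2i/d_model))
    # PE[pos, 2i+1] = cos(pos / 10000^(2i/d_model))
    position = np.arange(n_position)[:, None]                       # (n_position, 1)
    div = np.power(10000, 2 * (np.arange(d_model) // 2) / d_model)  # 10000^(2i/d_model)
    sinusoid_table = position / div                                 # angles, (n_position, d_model)
    sinusoid_table[:, 0::2] = np.sin(sinusoid_table[:, 0::2])  # dim 2i
    sinusoid_table[:, 1::2] = np.cos(sinusoid_table[:, 1::2])  # dim 2i+1
    return torch.FloatTensor(sinusoid_table).unsqueeze(0)      # (1, n_position, d_model)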

We chose this function because we hypothesized it would allow the model to easily learn to attend by relative positions, since for any fixed offset $k$, $PE_{pos+k}$ can be represented as a linear function of $PE_{pos}$.
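
The reason behind the hypothesis: with $\omega_i = 1/10000^{2i/d_{model}}$, the angle-addition identities give, for any fixed offset $k$,

$\sin(\omega_i(pos+k))=\sin(\omega_i\,pos)\cos(\omega_i k)+\cos(\omega_i\,pos)\sin(\omega_i k)$
$\cos(\omega_i(pos+k))=\cos(\omega_i\,pos)\cos(\omega_i k)-\sin(\omega_i\,pos)\sin(\omega_i k)$

so each pair $(PE_{(pos+k,2i)}, PE_{(pos+k,2i+1)})$ is a fixed rotation of $(PE_{(pos,2i)}, PE_{(pos,2i+1)})$ that depends only on $k$, i.e. $PE_{pos+k}$ is a linear function of $PE_{pos}$.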

why self-attention?

  • total computational complexity per layer
  • the amount of computation that can be parallelized
  • the path length between long-range dependencies in the network
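
From Table 1 of the paper ($n$ = sequence length, $d$ = representation dimension, $k$ = convolution kernel size):

  layer type       complexity per layer   sequential operations   maximum path length
  self-attention   O(n²·d)                O(1)                    O(1)
  recurrent        O(n·d²)                O(n)                    O(n)
  convolutional    O(k·n·d²)              O(1)                    O(log_k(n))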

As a side benefit, self-attention could yield more interpretable models.
