[ICLR2021] Analysis and Implementation of the ViT Model

The ViT model applies the transformer encoder to vision tasks and achieves strong results. Here we annotate and study its implementation from the timm library.

Notes on some key modules:
The PatchEmbed module:

import torch.nn as nn

class PatchEmbed(nn.Module):
    """
    2D Image to Patch Embedding
    """
    def __init__(self, img_size=224, patch_size=16, in_c=3, embed_dim=768, norm_layer=None):
        super().__init__()
        img_size = (img_size, img_size)
        patch_size = (patch_size, patch_size)
        self.img_size = img_size
        self.patch_size = patch_size
        self.grid_size = (img_size[0] // patch_size[0], img_size[1] // patch_size[1])
        self.num_patches = self.grid_size[0] * self.grid_size[1]

        self.proj = nn.Conv2d(in_c, embed_dim, kernel_size=patch_size, stride=patch_size)
        self.norm = norm_layer(embed_dim) if norm_layer else nn.Identity()

    def forward(self, x):
        B, C, H, W = x.shape
        assert H == self.img_size[0] and W == self.img_size[1], \
            f"Input image size ({H}*{W}) doesn't match model ({self.img_size[0]}*{self.img_size[1]})."

        # flatten: [B, C, H, W] -> [B, C, HW]
        # transpose: [B, C, HW] -> [B, HW, C]
        x = self.proj(x).flatten(2).transpose(1, 2)
        x = self.norm(x)
        return x

This module converts the input image into a sequence of tokens. The input x has shape [B, C, H, W]; the Conv2d projection, flatten, and transpose turn it into [B, num_patches, embed_dim], e.g. [B, 3, 224, 224] -> [B, 196, 768]. Since each patch is 16×16, there are 224/16 = 14 patches per side, so 14 × 14 = 196 tokens in total.
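As a quick sanity check (a minimal sketch, not from the original post; the batch size of 2 is arbitrary), the module can be exercised like this:

import torch

patch_embed = PatchEmbed(img_size=224, patch_size=16, in_c=3, embed_dim=768)
x = torch.randn(2, 3, 224, 224)   # [B, C, H, W]
tokens = patch_embed(x)           # Conv2d with stride 16 -> 14x14 grid, then flatten + transpose
print(tokens.shape)               # torch.Size([2, 196, 768])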

The Attention module:

class Attention(nn.Module):
    def __init__(self,
                 dim,   # dimension of the input tokens
                 num_heads=8,
                 qkv_bias=False,
                 qk_scale=None,
                 attn_drop_ratio=0.,
                 proj_drop_ratio=0.):
        super(Attention, self).__init__()
        self.num_heads = num_heads
        head_dim = dim // num_heads
        self.scale = qk_scale or head_dim ** -0.5
        self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias)
        self.attn_drop = nn.Dropout(attn_drop_ratio)
        self.proj = nn.Linear(dim, dim)
        self.proj_drop = nn.Dropout(proj_drop_ratio)

    def forward(self, x):
        # [batch_size, num_patches + 1, total_embed_dim]
        B, N, C = x.shape

        # qkv(): -> [batch_size, num_patches + 1, 3 * total_embed_dim]
        # reshape: -> [batch_size, num_patches + 1, 3, num_heads, embed_dim_per_head]
        # permute: -> [3, batch_size, num_heads, num_patches + 1, embed_dim_per_head]
        qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4)
        # [batch_size, num_heads, num_patches + 1, embed_dim_per_head]
        q, k, v = qkv[0], qkv[1], qkv[2]  # make torchscript happy (cannot use tensor as tuple)

        # transpose: -> [batch_size, num_heads, embed_dim_per_head, num_patches + 1]
        # @: multiply -> [batch_size, num_heads, num_patches + 1, num_patches + 1]
        attn = (q @ k.transpose(-2, -1)) * self.scale
        attn = attn.softmax(dim=-1)
        attn = self.attn_drop(attn)

        # @: multiply -> [batch_size, num_heads, num_patches + 1, embed_dim_per_head]
        # transpose: -> [batch_size, num_patches + 1, num_heads, embed_dim_per_head]
        # reshape: -> [batch_size, num_patches + 1, total_embed_dim]
        x = (attn @ v).transpose(1, 2).reshape(B, N, C)  
        x = self.proj(x)
        x = self.proj_drop(x)
        return x

Input: [B, Token_number, Embedding_dim]
Output: [B, Token_number, Embedding_dim]
This module is one of the core components of ViT and computes self-attention. Its input is the token sequence produced by the patch embedding. First, x.shape yields the three shape parameters: B is the batch size, N the number of tokens, and C the embedding dimension. In __init__, the layer
self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias) produces the q, k, v matrices in a single projection.
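To see why one dim -> 3*dim Linear suffices, note that slicing its output along the last dimension recovers three separate projections. A minimal sketch (the variable names here are illustrative, not from the timm source):

import torch
import torch.nn as nn

dim = 768
x = torch.randn(2, 197, dim)               # [B, N, C], N = num_patches + 1 (class token)
qkv_layer = nn.Linear(dim, dim * 3, bias=False)
q, k, v = qkv_layer(x).chunk(3, dim=-1)    # split the 3*dim output into three [B, N, C] slices
print(q.shape, k.shape, v.shape)           # torch.Size([2, 197, 768]) each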
In the forward function:
qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4) splits the projection into multi-head form. The output of self.qkv(x) has shape [batch_size, num_patches + 1, 3 * total_embed_dim]; the reshape separates it per head into [batch_size, num_patches + 1, 3, num_heads, embed_dim_per_head]; the permute then moves the 3-axis to the front so q, k, v can be split apart.
The attention weights are then computed according to the attention formula, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V, implemented here as matrix multiplications over the last two dimensions (i.e. within each head). attn has shape [B, num_heads, Token_number, Token_number]; finally, multiplying it with v and reshaping yields an output with the same shape as the input.
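A quick shape check of the whole module (a hedged sketch assuming the Attention class above; 197 = 196 patch tokens + 1 class token):

import torch

attn = Attention(dim=768, num_heads=8, qkv_bias=True)
x = torch.randn(2, 197, 768)   # [B, num_patches + 1, total_embed_dim]
out = attn(x)
print(out.shape)               # torch.Size([2, 197, 768]) -- same shape as the input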
