Deep Learning in Practice, Case 1, Lesson 3: Building ResNet, a Residual Network Model

Understanding the Overall Model

A plain stack of convolutional layers struggles to reach higher image-recognition accuracy, so we introduce ResNet, the residual network model; compared against other models, ResNet delivers better results.

The input image is [b, 3, 224, 224]. The convolutional stages transform it to [b, 256, 3, 3], which is then flattened to [b, 256*3*3] and fed through a final linear layer.

The final output is torch.Size([b, num_class]), one score per ground-truth class; here num_class = 5.
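To make the size bookkeeping concrete, here is a minimal sketch (illustration only, separate from the lesson code below) that applies the standard convolution output formula, out = (h + 2*padding - kernel) // stride + 1, to each downsampling convolution. Inside each residual block only the first convolution carries the stride, so it alone determines the block's output size:

def conv_out(h, kernel, stride, padding):
    # standard convolution output-size formula
    return (h + 2 * padding - kernel) // stride + 1

h = 224
for name, (k, s, p) in [
    ('conv1', (3, 3, 0)),   # stem convolution
    ('blk1',  (3, 3, 1)),   # ResBlk(16, 32, stride=3)
    ('blk2',  (3, 3, 1)),   # ResBlk(32, 64, stride=3)
    ('blk3',  (3, 2, 1)),   # ResBlk(64, 128, stride=2)
    ('blk4',  (3, 2, 1)),   # ResBlk(128, 256, stride=2)
]:
    h = conv_out(h, k, s, p)
    print(name, h)          # prints 74, 25, 9, 5, 3 in turn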

import torch
from torch import nn
from torch.nn import functional as F


"定义残差块,改变通道数,改变高宽[b,ch_in,h,w]=>[b.ou_ch,(h-3+stride+2)/stride,(w-3+stride+2)/stride]"
class ResBlk(nn.Module):
    """
    resnet block
    """

    def __init__(self, ch_in, ch_out, stride=1):
        """
        :param ch_in: number of input channels
        :param ch_out: number of output channels
        :param stride: stride of the first convolution (controls downsampling)
        """
        super(ResBlk, self).__init__()
        # [b, ch_in, h, w] => [b, ch_out, (h-1)//stride + 1, (w-1)//stride + 1]
        self.conv1 = nn.Conv2d(ch_in, ch_out, kernel_size=3, stride=stride, padding=1)
        # batch-normalize activations over the mini-batch
        self.bn1 = nn.BatchNorm2d(ch_out)
        self.conv2 = nn.Conv2d(ch_out, ch_out, kernel_size=3, stride=1, padding=1)
        self.bn2 = nn.BatchNorm2d(ch_out)

        # identity shortcut by default; switch to a 1x1 projection whenever the
        # main path changes the channel count or the spatial size, so that the
        # element-wise addition in forward() stays shape-compatible
        self.extra = nn.Sequential()
        if ch_out != ch_in or stride != 1:
            # [b, ch_in, h, w] => [b, ch_out, (h-1)//stride + 1, (w-1)//stride + 1]
            self.extra = nn.Sequential(
                nn.Conv2d(ch_in, ch_out, kernel_size=1, stride=stride),
                nn.BatchNorm2d(ch_out)
            )


    def forward(self, x):
        """
        :param x: [b, ch_in, h, w]
        :return: [b, ch_out, h', w'] after the residual addition
        """
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        # shortcut: self.extra maps [b, ch_in, h, w] => [b, ch_out, h', w']
        # so it can be added element-wise to the main path:
        out = self.extra(x) + out
        out = F.relu(out)

        return out



"定义残差网络"
class ResNet18(nn.Module):

    def __init__(self, num_class):
        super(ResNet18, self).__init__()

        self.conv1 = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=3, padding=0),
            nn.BatchNorm2d(16)
        )
        # followed by 4 residual blocks; each widens channels and downsamples
        # (spatial sizes below assume a 224x224 input)
        # [b, 16, 74, 74] => [b, 32, 25, 25]
        self.blk1 = ResBlk(16, 32, stride=3)
        # [b, 32, 25, 25] => [b, 64, 9, 9]
        self.blk2 = ResBlk(32, 64, stride=3)
        # [b, 64, 9, 9] => [b, 128, 5, 5]
        self.blk3 = ResBlk(64, 128, stride=2)
        # [b, 128, 5, 5] => [b, 256, 3, 3]
        self.blk4 = ResBlk(128, 256, stride=2)

        # flatten [b, 256, 3, 3] => [b, 256*3*3], then classify
        self.outlayer = nn.Linear(256*3*3, num_class)

    def forward(self, x):
        """
        :param x: [b, 3, 224, 224]
        :return: [b, num_class] logits
        """
        x = F.relu(self.conv1(x))        #[b,16,74,74]
        #print('conv1(x):',x.shape)
        x = self.blk1(x)  #[b,32,25,25]
        #print('blk1(x):',x.shape)
        x = self.blk2(x)  #[b,64,9,9]
        #print('blk2(x):',x.shape)
        x = self.blk3(x)  #[b,128,5,5]
        #print('blk3(x):',x.shape)
        x = self.blk4(x)  #[b,256,3,3]
        #print('blk4(x):',x.shape)

        x = x.view(x.size(0), -1)
        #print('flatten:',x.shape)
        x = self.outlayer(x)


        return x



def main():
    # sanity-check a single residual block
    blk = ResBlk(64, 128, stride=3)
    tmp = torch.randn(2, 64, 224, 224)
    out = blk(tmp)
    print('block:', out.shape)

    # sanity-check the full network
    model = ResNet18(5)
    tmp = torch.randn(2, 3, 224, 224)
    out = model(tmp)
    print('resnet:', out.shape)

    num_params = sum(p.numel() for p in model.parameters())
    print('parameters size:', num_params)




if __name__ == '__main__':
    main()
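Running main() as written should print the shapes below. The standalone block test pushes a 224x224 input through ResBlk(64, 128, stride=3), so both branches come out at (224 - 1) // 3 + 1 = 75; the parameter count is what this architecture works out to, roughly 1.23 M:

block: torch.Size([2, 128, 75, 75])
resnet: torch.Size([2, 5])
parameters size: 1234885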

Understanding the ResBlk block:

[b, ch_in, h, w] => [b, ch_out, (h-1)//stride + 1, (w-1)//stride + 1]

self.conv1 = nn.Conv2d(ch_in, ch_out, kernel_size=3, stride=stride, padding=1)
# batch-normalize activations over the mini-batch
self.bn1 = nn.BatchNorm2d(ch_out)
self.conv2 = nn.Conv2d(ch_out, ch_out, kernel_size=3, stride=1, padding=1)
self.bn2 = nn.BatchNorm2d(ch_out)
The convolutions above change the channel count and the spatial size: [b, ch_in, h, w] => [b, ch_out, (h + 2 - 3)//stride + 1, (w + 2 - 3)//stride + 1] = [b, ch_out, (h-1)//stride + 1, (w-1)//stride + 1]. Note that only conv1 carries the stride; conv2 uses stride 1 and preserves the size.
self.extra = nn.Sequential()
if ch_out != ch_in or stride != 1:
    # [b, ch_in, h, w] => [b, ch_out, (h-1)//stride + 1, (w-1)//stride + 1]
    self.extra = nn.Sequential(
        nn.Conv2d(ch_in, ch_out, kernel_size=1, stride=stride),
        nn.BatchNorm2d(ch_out)
    )

The 1x1 convolution in the shortcut produces exactly the same shape change: [b, ch_in, h, w] => [b, ch_out, (h-1)//stride + 1, (w-1)//stride + 1].

In other words, both code paths yield identical channel counts, heights, and widths, which is exactly what makes the subsequent element-wise addition well-defined:

out = self.extra(x) + out

The image size changes through the network as follows: [b, 3, 224, 224] => conv1 => [b, 16, 74, 74] => blk1 => [b, 32, 25, 25] => blk2 => [b, 64, 9, 9] => blk3 => [b, 128, 5, 5] => blk4 => [b, 256, 3, 3] => flatten => [b, 2304] => linear => [b, 5].
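As a quick sanity check, here is a minimal sketch (assuming the ResBlk class from the listing above is in scope) confirming that the shortcut branch and the main branch agree in shape before the addition:

import torch

blk = ResBlk(16, 32, stride=3)   # changes both the channels and the spatial size
x = torch.randn(2, 16, 74, 74)

main_path = blk.bn2(blk.conv2(torch.relu(blk.bn1(blk.conv1(x)))))
shortcut = blk.extra(x)          # 1x1 projection mirrors the main path's shape change
print(main_path.shape, shortcut.shape)   # both torch.Size([2, 32, 25, 25])
assert main_path.shape == shortcut.shape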
