A Roundup of Spatial Pyramid Pooling Variants: SPP, SPPF, SimSPPF, ASPP, PPM, DAPPM, and PAPPM

Original SPP

The original SPP (Spatial Pyramid Pooling) was proposed by Kaiming He et al. in the 2014 paper "Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition" (see my earlier CSDN post "SPP: Spatial Pyramid Pooling" for a detailed introduction). It was designed to resolve the mismatch between the fully connected layers at the end of a network, which require a fixed input dimension, and the input images, whose sizes vary.

A quick aside: at the time, the standard practice was to crop or resize the input to a fixed size. The authors pointed out that cropping may fail to fully contain the target object, while resizing geometrically distorts it. Ten years on, detection models hardly seem to care about either problem: RandomCrop has become a very common data augmentation, and although resizing usually preserves the aspect ratio to avoid distortion, affine-transform augmentations are deliberately added to distort object geometry on purpose. This is probably because models have become much more capable, and stronger augmentation helps them generalize to different scenes and variations, improving performance and robustness.

Back to SPP. The idea is to divide the feature map evenly into several grids of bins and pool inside each bin. The number of bins is fixed while each bin's size scales with the input, so the pooled feature dimension is always the same. This neatly resolves the conflict between variable input sizes and the fixed input dimension demanded by the fully connected layers.
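A minimal sketch of this idea in PyTorch, using adaptive pooling to realize the fixed bin counts (the bin sizes (1, 2, 4) here are illustrative; the paper uses pyramids such as {1x1, 2x2, 3x3, 6x6}):

import torch
import torch.nn as nn

class OriginalSPP(nn.Module):
    # Pool into a fixed number of bins per pyramid level, so the flattened
    # output length depends only on channels and bin counts, never on H or W.
    def __init__(self, bins=(1, 2, 4)):
        super().__init__()
        self.pools = nn.ModuleList(nn.AdaptiveMaxPool2d(b) for b in bins)

    def forward(self, x):  # x: (N, C, H, W) with arbitrary H, W
        n = x.size(0)
        return torch.cat([p(x).view(n, -1) for p in self.pools], dim=1)

spp = OriginalSPP()
for hw in (32, 57):  # two different input sizes
    print(spp(torch.randn(1, 256, hw, hw)).shape)  # both (1, 256 * 21) = (1, 5376)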

SPP in YOLOv3

In the ultralytics (u-version) YOLOv3, the author modified the original SPP: the pooling kernels now have fixed sizes, and the goal is no longer a fixed-size output. Instead, pooling with kernels of different sizes aggregates context at multiple scales and enlarges the receptive field, which helps detect objects of different sizes. The implementation is shown below.

import warnings

import torch
import torch.nn as nn

# Conv is YOLOv5's standard Conv2d + BatchNorm2d + SiLU block (models/common.py)

class SPP(nn.Module):
    # Spatial Pyramid Pooling (SPP) layer https://arxiv.org/abs/1406.4729
    def __init__(self, c1, c2, k=(5, 9, 13)):
        super().__init__()
        c_ = c1 // 2  # hidden channels
        self.cv1 = Conv(c1, c_, 1, 1)
        self.cv2 = Conv(c_ * (len(k) + 1), c2, 1, 1)
        self.m = nn.ModuleList([nn.MaxPool2d(kernel_size=x, stride=1, padding=x // 2) for x in k])

    def forward(self, x):
        x = self.cv1(x)
        with warnings.catch_warnings():
            warnings.simplefilter('ignore')  # suppress torch 1.9.0 max_pool2d() warning
            return self.cv2(torch.cat([x] + [m(x) for m in self.m], 1))

SPPF in YOLOv5

The SPPF introduced in YOLOv5 is an improved version of YOLOv3's SPP, replacing the parallel structure with a serial one. In the serial arrangement, the second 5x5 max pool has an effective 9x9 receptive field, making it equivalent to a parallel 9x9 max pool, and the third is likewise equivalent to a 13x13 one. Since max pooling has no parameters, the parameter count is unchanged; the serial design reduces computation while preserving SPP's multi-scale feature aggregation.

class SPPF(nn.Module):
    # Spatial Pyramid Pooling - Fast (SPPF) layer for YOLOv5 by Glenn Jocher
    def __init__(self, c1, c2, k=5):  # equivalent to SPP(k=(5, 9, 13))
        super().__init__()
        c_ = c1 // 2  # hidden channels
        self.cv1 = Conv(c1, c_, 1, 1)
        self.cv2 = Conv(c_ * 4, c2, 1, 1)
        self.m = nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2)

    def forward(self, x):
        x = self.cv1(x)
        with warnings.catch_warnings():
            warnings.simplefilter('ignore')  # suppress torch 1.9.0 max_pool2d() warning
            y1 = self.m(x)
            y2 = self.m(y1)
            return self.cv2(torch.cat([x, y1, y2, self.m(y2)], 1))
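A quick numerical sanity check of that equivalence claim (a standalone sketch, independent of the Conv block): chaining 5x5 stride-1 max pools reproduces the 9x9 and 13x13 pools of SPP(k=(5, 9, 13)) exactly.

import torch
import torch.nn.functional as F

x = torch.randn(1, 8, 20, 20)
y1 = F.max_pool2d(x, kernel_size=5, stride=1, padding=2)
y2 = F.max_pool2d(y1, kernel_size=5, stride=1, padding=2)
y3 = F.max_pool2d(y2, kernel_size=5, stride=1, padding=2)
print(torch.equal(y2, F.max_pool2d(x, 9, 1, 4)))   # True: two 5x5 pools == one 9x9
print(torch.equal(y3, F.max_pool2d(x, 13, 1, 6)))  # True: three 5x5 pools == one 13x13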

SimSPPF in YOLOv6

From the implementation below, the only difference between SimSPPF and SPPF is that the activation in the conv layers is switched from SiLU to ReLU, which improves speed. The YOLOv6 authors did not publish a concrete accuracy comparison or speedup figure; other blogs report their own tests showing SimSPPF to be about 18% faster than SPPF.

class SimSPPF(nn.Module):
    """Simplified SPPF with ReLU activation"""
    def __init__(self, in_channels, out_channels, kernel_size=5):
        super().__init__()
        c_ = in_channels // 2  # hidden channels
        self.cv1 = SimConv(in_channels, c_, 1, 1)
        self.cv2 = SimConv(c_ * 4, out_channels, 1, 1)
        self.m = nn.MaxPool2d(kernel_size=kernel_size, stride=1, padding=kernel_size // 2)

    def forward(self, x):  # (64,256,20,20)
        x = self.cv1(x)  # (64,128,20,20)
        with warnings.catch_warnings():
            warnings.simplefilter('ignore')
            y1 = self.m(x)  # (64,128,20,20)
            y2 = self.m(y1)  # (64,128,20,20)
            return self.cv2(torch.cat([x, y1, y2, self.m(y2)], 1))  # (64,256,20,20)
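For reference, a minimal sketch of SimConv as a Conv-BN-ReLU block (this mirrors the spirit of YOLOv6's yolov6/layers/common.py; the actual class carries extra options such as groups and bias):

import torch.nn as nn

class SimConv(nn.Module):
    # Conv2d + BatchNorm2d + ReLU; the ReLU is the only difference from
    # YOLOv5's Conv block, which uses SiLU.
    def __init__(self, in_channels, out_channels, kernel_size, stride):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size, stride,
                              padding=kernel_size // 2, bias=False)
        self.bn = nn.BatchNorm2d(out_channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))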

ASPP in DeepLabv2

In the semantic segmentation model DeepLab v2 (see my earlier post on the DeepLab series: v1, v2, v3, v3+ for details), the authors note that although deep networks already acquire an implicit multi-scale representation simply by training on datasets containing objects of various sizes, explicitly accounting for object size improves a DCNN's ability to handle both large and small objects. DeepLab v1 introduced atrous (dilated) convolution, which enlarges a kernel's receptive field without adding parameters. In DeepLab v2, inspired by SPP, the authors combined atrous convolution with the SPP idea and proposed ASPP to handle scale variation in semantic segmentation. The mmsegmentation implementation is shown below.

# Excerpted from mmsegmentation; ConvModule is mmcv.cnn's Conv-Norm-Act block
class ASPPModule(nn.ModuleList):
    """Atrous Spatial Pyramid Pooling (ASPP) Module.

    Args:
        dilations (tuple[int]): Dilation rate of each layer.
        in_channels (int): Input channels.
        channels (int): Channels after modules, before conv_seg.
        conv_cfg (dict|None): Config of conv layers.
        norm_cfg (dict|None): Config of norm layers.
        act_cfg (dict): Config of activation layers.
    """

    def __init__(self, dilations, in_channels, channels, conv_cfg, norm_cfg,
                 act_cfg):
        super().__init__()
        self.dilations = dilations
        self.in_channels = in_channels
        self.channels = channels
        self.conv_cfg = conv_cfg
        self.norm_cfg = norm_cfg
        self.act_cfg = act_cfg
        for dilation in dilations:
            self.append(
                ConvModule(
                    self.in_channels,
                    self.channels,
                    1 if dilation == 1 else 3,
                    dilation=dilation,
                    padding=0 if dilation == 1 else dilation,
                    conv_cfg=self.conv_cfg,
                    norm_cfg=self.norm_cfg,
                    act_cfg=self.act_cfg))

    def forward(self, x):
        """Forward function."""
        aspp_outs = []
        for aspp_module in self:
            aspp_outs.append(aspp_module(x))

        return aspp_outs
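The key detail is the padding rule above: a 3x3 conv with dilation d and padding d spans a (2d+1)x(2d+1) window while preserving spatial size, so all branches can be concatenated. A plain-PyTorch sketch of the branches (dilations (1, 6, 12, 18) are a common DeepLab setting; DeepLab v2's ASPP-L uses rates (6, 12, 18, 24), and ConvModule's norm/activation are omitted here):

import torch
import torch.nn as nn

dilations = (1, 6, 12, 18)
branches = nn.ModuleList(
    nn.Conv2d(512, 256,
              kernel_size=1 if d == 1 else 3,
              dilation=d,
              padding=0 if d == 1 else d)  # padding=dilation keeps H, W unchanged
    for d in dilations)

x = torch.randn(1, 512, 65, 65)
outs = [branch(x) for branch in branches]
print([tuple(o.shape) for o in outs])  # all (1, 256, 65, 65)
fused = torch.cat(outs, dim=1)         # (1, 1024, 65, 65), ready for a 1x1 fusion conv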

PPM in PSPNet

In the paper on the semantic segmentation model PSPNet (see my earlier post "PSPNet: Pyramid Scene Parsing Network" for details), the authors analyze a set of failure cases, including confused categories, mismatched relationships, and inconspicuous classes, and conclude that these errors are all more or less related to contextual relationships at different receptive fields and to global information. They therefore propose the Pyramid Pooling Module (PPM) as an effective global contextual prior that noticeably improves scene parsing performance.

The structure of PPM is shown in figure (c) of the paper. Inspired by SPP, global average pooling is a natural choice for the global contextual prior. As in the original SPP, what is fixed is the set of output bin sizes, 1x1, 2x2, 3x3, and 6x6, rather than the pooling kernel size (although once the input size is fixed, the kernel size is determined as well). Unlike SPP, whose output is flattened and fed into fully connected layers for classification, PSPNet is an FCN-style segmentation network: each pooled map is resized back to the original resolution via interpolation, concatenated with the input along the channel dimension, and passed to the subsequent conv layers.

The mmseg implementation of PPM is shown below. nn.AdaptiveAvgPool2d performs adaptive pooling for a given output size, with pool_scales=(1, 2, 3, 6).

# Excerpted from mmsegmentation; resize is mmseg's thin wrapper around F.interpolate
class PPM(nn.ModuleList):
    """Pooling Pyramid Module used in PSPNet.

    Args:
        pool_scales (tuple[int]): Pooling scales used in Pooling Pyramid
            Module.
        in_channels (int): Input channels.
        channels (int): Channels after modules, before conv_seg.
        conv_cfg (dict|None): Config of conv layers.
        norm_cfg (dict|None): Config of norm layers.
        act_cfg (dict): Config of activation layers.
        align_corners (bool): align_corners argument of F.interpolate.
    """

    def __init__(self, pool_scales, in_channels, channels, conv_cfg, norm_cfg,
                 act_cfg, align_corners, **kwargs):
        super().__init__()
        self.pool_scales = pool_scales
        self.align_corners = align_corners
        self.in_channels = in_channels
        self.channels = channels
        self.conv_cfg = conv_cfg
        self.norm_cfg = norm_cfg
        self.act_cfg = act_cfg
        for pool_scale in pool_scales:
            self.append(
                nn.Sequential(
                    nn.AdaptiveAvgPool2d(pool_scale),
                    ConvModule(
                        self.in_channels,
                        self.channels,
                        1,
                        conv_cfg=self.conv_cfg,
                        norm_cfg=self.norm_cfg,
                        act_cfg=self.act_cfg,
                        **kwargs)))

    def forward(self, x):
        """Forward function."""
        ppm_outs = []
        for ppm in self:
            ppm_out = ppm(x)
            upsampled_ppm_out = resize(
                ppm_out,
                size=x.size()[2:],
                mode='bilinear',
                align_corners=self.align_corners)
            ppm_outs.append(upsampled_ppm_out)
        return ppm_outs
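A quick shape check of PPM (a sketch assuming mmseg and mmcv are installed; the import path below matches mmseg's psp_head module but may differ across versions):

import torch
from mmseg.models.decode_heads.psp_head import PPM  # path may vary by mmseg version

ppm = PPM(pool_scales=(1, 2, 3, 6), in_channels=2048, channels=512,
          conv_cfg=None, norm_cfg=dict(type='BN'),
          act_cfg=dict(type='ReLU'), align_corners=False)
x = torch.randn(2, 2048, 64, 64)
outs = ppm(x)                         # four maps, each resized to (2, 512, 64, 64)
feats = torch.cat([x] + outs, dim=1)  # (2, 2048 + 4 * 512, 64, 64) for the fusion conv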

DAPPM in DDRNet

In the paper on the real-time semantic segmentation model Deep Dual-resolution Networks (see my earlier DDRNet post for a walkthrough of the principles and code), the authors, inspired by Res2Net (see my earlier Res2Net post), improve PPM and propose DAPPM, shown below.

By introducing hierarchical convolutions into PPM to fuse contextual information across scales, DAPPM provides richer context than PPM.

# Excerpted from mmsegmentation (mmseg/models/utils/ppm.py in recent versions);
# BaseModule, ModuleList and Sequential come from mmengine.model
class DAPPM(BaseModule):
    """DAPPM module in `DDRNet <https://arxiv.org/abs/2101.06085>`_.

    Args:
        in_channels (int): Input channels.
        branch_channels (int): Branch channels.
        out_channels (int): Output channels.
        num_scales (int): Number of scales.
        kernel_sizes (list[int]): Kernel sizes of each scale.
        strides (list[int]): Strides of each scale.
        paddings (list[int]): Paddings of each scale.
        norm_cfg (dict): Config dict for normalization layer.
            Default: dict(type='BN').
        act_cfg (dict): Config dict for activation layer in ConvModule.
            Default: dict(type='ReLU', inplace=True).
        conv_cfg (dict): Config dict for convolution layer in ConvModule.
            Default: dict(order=('norm', 'act', 'conv'), bias=False).
        upsample_mode (str): Upsample mode. Default: 'bilinear'.
    """

    def __init__(self,
                 in_channels: int,
                 branch_channels: int,
                 out_channels: int,
                 num_scales: int,
                 kernel_sizes: List[int] = [5, 9, 17],
                 strides: List[int] = [2, 4, 8],
                 paddings: List[int] = [2, 4, 8],
                 norm_cfg: Dict = dict(type='BN', momentum=0.1),
                 act_cfg: Dict = dict(type='ReLU', inplace=True),
                 conv_cfg: Dict = dict(
                     order=('norm', 'act', 'conv'), bias=False),
                 upsample_mode: str = 'bilinear'):
        super().__init__()

        self.num_scales = num_scales
        self.upsample_mode = upsample_mode
        self.in_channels = in_channels
        self.branch_channels = branch_channels
        self.out_channels = out_channels
        self.norm_cfg = norm_cfg
        self.act_cfg = act_cfg
        self.conv_cfg = conv_cfg

        self.scales = ModuleList([
            ConvModule(
                in_channels,
                branch_channels,
                kernel_size=1,
                norm_cfg=norm_cfg,
                act_cfg=act_cfg,
                **conv_cfg)
        ])
        for i in range(1, num_scales - 1):
            self.scales.append(
                Sequential(*[
                    nn.AvgPool2d(
                        kernel_size=kernel_sizes[i - 1],
                        stride=strides[i - 1],
                        padding=paddings[i - 1]),
                    ConvModule(
                        in_channels,
                        branch_channels,
                        kernel_size=1,
                        norm_cfg=norm_cfg,
                        act_cfg=act_cfg,
                        **conv_cfg)
                ]))
        self.scales.append(
            Sequential(*[
                nn.AdaptiveAvgPool2d((1, 1)),
                ConvModule(
                    in_channels,
                    branch_channels,
                    kernel_size=1,
                    norm_cfg=norm_cfg,
                    act_cfg=act_cfg,
                    **conv_cfg)
            ]))
        self.processes = ModuleList()
        for i in range(num_scales - 1):
            self.processes.append(
                ConvModule(
                    branch_channels,
                    branch_channels,
                    kernel_size=3,
                    padding=1,
                    norm_cfg=norm_cfg,
                    act_cfg=act_cfg,
                    **conv_cfg))

        self.compression = ConvModule(
            branch_channels * num_scales,
            out_channels,
            kernel_size=1,
            norm_cfg=norm_cfg,
            act_cfg=act_cfg,
            **conv_cfg)

        self.shortcut = ConvModule(
            in_channels,
            out_channels,
            kernel_size=1,
            norm_cfg=norm_cfg,
            act_cfg=act_cfg,
            **conv_cfg)

    def forward(self, inputs: Tensor):  # (16,1024,8,8)
        feats = []
        feats.append(self.scales[0](inputs))

        for i in range(1, self.num_scales):
            feat_up = F.interpolate(
                self.scales[i](inputs),
                size=inputs.shape[2:],
                mode=self.upsample_mode)
            feats.append(self.processes[i - 1](feat_up + feats[i - 1]))
        # [(16,128,8,8),(16,128,8,8),(16,128,8,8),(16,128,8,8),(16,128,8,8)]

        return self.compression(torch.cat(feats,
                                          dim=1)) + self.shortcut(inputs)  # (16,256,8,8)
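A shape walkthrough matching the comments above (a sketch assuming the DAPPM class is importable from your mmseg installation; the channel numbers follow the (16,1024,8,8) annotations):

import torch
from mmseg.models.utils import DAPPM  # import path may vary by mmseg version

dappm = DAPPM(in_channels=1024, branch_channels=128,
              out_channels=256, num_scales=5)
x = torch.randn(16, 1024, 8, 8)
print(dappm(x).shape)  # torch.Size([16, 256, 8, 8])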

PAPPM in PIDNet

Depth-wise, DAPPM cannot be parallelized and is therefore slow: each branch's output must be added to the previous branch's result before its 3x3 conv, so every branch has to wait for the one before it. The authors improved DAPPM so the branches can run in parallel, and applied the result in the small and medium variants of PIDNet (CVPR 2023; see my earlier PIDNet post for details). In PAPPM, every pooled branch is added only to the unpooled 1x1 branch, so the branches are mutually independent and can be computed in parallel.

class PAPPM(DAPPM):
    """PAPPM module in `PIDNet <https://arxiv.org/abs/2206.02066>`_.

    Args:
        in_channels (int): Input channels.
        branch_channels (int): Branch channels.
        out_channels (int): Output channels.
        num_scales (int): Number of scales.
        kernel_sizes (list[int]): Kernel sizes of each scale.
        strides (list[int]): Strides of each scale.
        paddings (list[int]): Paddings of each scale.
        norm_cfg (dict): Config dict for normalization layer.
            Default: dict(type='BN', momentum=0.1).
        act_cfg (dict): Config dict for activation layer in ConvModule.
            Default: dict(type='ReLU', inplace=True).
        conv_cfg (dict): Config dict for convolution layer in ConvModule.
            Default: dict(order=('norm', 'act', 'conv'), bias=False).
        upsample_mode (str): Upsample mode. Default: 'bilinear'.
    """

    def __init__(self,
                 in_channels: int,
                 branch_channels: int,
                 out_channels: int,
                 num_scales: int,
                 kernel_sizes: List[int] = [5, 9, 17],
                 strides: List[int] = [2, 4, 8],
                 paddings: List[int] = [2, 4, 8],
                 norm_cfg: Dict = dict(type='BN', momentum=0.1),
                 act_cfg: Dict = dict(type='ReLU', inplace=True),
                 conv_cfg: Dict = dict(
                     order=('norm', 'act', 'conv'), bias=False),
                 upsample_mode: str = 'bilinear'):
        super().__init__(in_channels, branch_channels, out_channels,
                         num_scales, kernel_sizes, strides, paddings, norm_cfg,
                         act_cfg, conv_cfg, upsample_mode)
        # e.g. in PIDNet-S: in_channels=512, branch_channels=96, out_channels=128

        self.processes = ConvModule(
            self.branch_channels * (self.num_scales - 1),
            self.branch_channels * (self.num_scales - 1),
            kernel_size=3,
            padding=1,
            groups=self.num_scales - 1,
            norm_cfg=self.norm_cfg,
            act_cfg=self.act_cfg,
            **self.conv_cfg)

    def forward(self, inputs: Tensor):
        x_ = self.scales[0](inputs)
        feats = []
        for i in range(1, self.num_scales):
            feat_up = F.interpolate(
                self.scales[i](inputs),
                size=inputs.shape[2:],
                mode=self.upsample_mode,
                align_corners=False)
            feats.append(feat_up + x_)
        # [(16,96,8,8),(16,96,8,8),(16,96,8,8),(16,96,8,8)]
        scale_out = self.processes(torch.cat(feats, dim=1))  # (16,384,8,8)
        return self.compression(torch.cat([x_, scale_out],
                                          dim=1)) + self.shortcut(inputs)
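Note how self.processes collapses into a single grouped 3x3 conv: with groups = num_scales - 1, each branch's channels are convolved independently, exactly like the separate per-branch convs in DAPPM but in one kernel launch. A minimal check of that semantics (the 96-channel, four-branch numbers follow the shape comments above):

import torch
import torch.nn as nn

grouped = nn.Conv2d(384, 384, kernel_size=3, padding=1, groups=4, bias=False)
singles = [nn.Conv2d(96, 96, kernel_size=3, padding=1, bias=False) for _ in range(4)]
for i, s in enumerate(singles):  # copy each group's weights into a separate conv
    s.weight.data.copy_(grouped.weight.data[i * 96:(i + 1) * 96])

x = torch.randn(2, 384, 8, 8)
y_split = torch.cat([s(x[:, i * 96:(i + 1) * 96]) for i, s in enumerate(singles)], dim=1)
print(torch.allclose(grouped(x), y_split, atol=1e-6))  # True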

 
