SiamRPN Code Analysis: architecture


Preface

  This SiamRPN code analysis works from the code itself, combined with my own understanding of the paper, and consists of three parts: architecture, training, and test. This article is the first part.
  SiamRPN paper
  Code


1. Innovations of SiamRPN over SiamFC

  The most obvious weakness of SiamFC is that it cannot follow a target through large deformations. The reason is its fixed multi-scale tracking scheme: it generates a few boxes of fixed scales around the target position of the previous frame and takes the highest-scoring one as the result for the current frame, so when the target deforms strongly this strategy breaks down. SiamRPN solves this problem nicely by being the first to bring anchors into visual tracking; with anchors, the multi-scale strategy is no longer needed. One might ask: aren't anchors also multi-scale? The key difference is how the two images are matched. SiamFC uses a fully-convolutional structure that slides the target template over the search image like a sliding window, so the scale change between adjacent frames is small and large deformations cannot be followed. SiamRPN instead exhaustively enumerates, on the search image, all anchors that might contain the target, so the scale between adjacent frames can change considerably, for example from a tall box to a wide one. Anchors can be understood as boxes of many sizes generated over the search image, enough to cover every object in it (including the target being tracked). The schematic below illustrates this: the left image is the original picture after data preprocessing, the middle image lists all the anchor scales, and the right image shows anchors of all scales generated, centred on every pixel, inside the designated square region of the image.
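To make the idea concrete, below is a minimal anchor-generation sketch in the spirit of typical SiamRPN implementations. It is not necessarily the repository's own generate_anchors; the parameter values total_stride=8, base_size=8, scales=(8,) and ratios=(0.33, 0.5, 1, 2, 3) are assumptions chosen to give 5 anchors per position, as used later in this post.

import numpy as np

def generate_anchors(total_stride=8, base_size=8, scales=(8,), ratios=(0.33, 0.5, 1, 2, 3), score_size=19):
    # One base anchor per (ratio, scale) pair, expressed as (cx, cy, w, h) centred at the origin.
    anchor_num = len(ratios) * len(scales)
    base_anchors = np.zeros((anchor_num, 4), dtype=np.float32)
    size = base_size * base_size
    idx = 0
    for ratio in ratios:
        w = int(np.sqrt(size / ratio))
        h = int(w * ratio)
        for scale in scales:
            base_anchors[idx] = [0, 0, w * scale, h * scale]
            idx += 1
    # Tile the base anchors over every position of the score map; anchor centres are
    # spaced total_stride pixels apart on the (pre-processed) search image.
    anchors = np.tile(base_anchors, score_size * score_size).reshape(-1, 4)
    shift = (score_size // 2) * total_stride
    xx, yy = np.meshgrid([-shift + total_stride * dx for dx in range(score_size)],
                         [-shift + total_stride * dy for dy in range(score_size)])
    xx = np.tile(xx.flatten(), (anchor_num, 1)).flatten()
    yy = np.tile(yy.flatten(), (anchor_num, 1)).flatten()
    anchors[:, 0], anchors[:, 1] = xx, yy
    return anchors  # shape: (anchor_num * score_size * score_size, 4)

print(generate_anchors().shape)  # (1805, 4) = (5 * 19 * 19, 4)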
  Comparing the experimental results of the two papers, SiamRPN's fps is higher than SiamFC's. Even though the hardware differs, the gap should not be that large (86 fps for SiamFC vs. 160 fps for SiamRPN); when I ran both in the same environment, SiamRPN was only slightly faster than SiamFC. Looking at the network structures alone, SiamRPN is the more complex one, and everything the SiamFC network has, SiamRPN has as well, so the speed difference comes from the test procedure: SiamFC needs multi-scale testing, which means several full cross-correlations per frame, while SiamRPN needs only one. That is the main factor behind the speed gap.

2. architecture

  Below is the SiamRPN framework figure taken from the paper.
On the left is the Siamese sub-network used for feature extraction. The region proposal sub-network sits in the middle; it has two branches, the upper one for classification and the lower one for regression. The outputs of the two branches are obtained by cross-correlation, and the details of these two output feature maps are shown on the right. In the classification branch the output feature map has 2k channels, corresponding to foreground and background for each of the k anchors. In the regression branch the output feature map has 4k channels, corresponding to four coordinates used to fine-tune the anchor positions: regression produces the predicted target box by adjusting each anchor so that its location becomes more precise.
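To pin down what "fine-tuning the anchor position" means, the 4k regression channels predict offsets (dx, dy, dw, dh) that refine each anchor in the standard RPN way. A minimal decoding sketch is below; the function and variable names are mine, and the repository's own decoding is covered in the test part.

import numpy as np

def decode_anchors(anchors, offsets):
    """anchors: (N, 4) as (cx, cy, w, h); offsets: (N, 4) predicted (dx, dy, dw, dh)."""
    boxes = np.zeros_like(anchors)
    boxes[:, 0] = anchors[:, 0] + offsets[:, 0] * anchors[:, 2]  # cx = cx_a + dx * w_a
    boxes[:, 1] = anchors[:, 1] + offsets[:, 1] * anchors[:, 3]  # cy = cy_a + dy * h_a
    boxes[:, 2] = anchors[:, 2] * np.exp(offsets[:, 2])          # w  = w_a * exp(dw)
    boxes[:, 3] = anchors[:, 3] * np.exp(offsets[:, 3])          # h  = h_a * exp(dh)
    return boxes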

2.1 Feature extraction network (Siamese network) and RPN

  The code is below; this part covers the network structure definition __init__ of SiamRPNNet and the parameter initialization _init_weights.

import torch
import torch.nn as nn
import torch.nn.functional as F
from config import config   # config provides anchor_num; the exact import path depends on the repository layout

class SiamRPNNet(nn.Module):
    def __init__(self, init_weight=False):
        super(SiamRPNNet, self).__init__()
        self.featureExtract = nn.Sequential(
            nn.Conv2d(3, 96, 11, stride=2),  # stride=2  [batch,3,127,127]->[batch,96,59,59]
            nn.BatchNorm2d(96),
            nn.MaxPool2d(3, stride=2),       # stride=2  [batch,96,59,59]->[batch,96,29,29]
            nn.ReLU(inplace=True),
            nn.Conv2d(96, 256, 5),           # [batch,96,29,29]->[batch,256,25,25]
            nn.BatchNorm2d(256),
            nn.MaxPool2d(3, stride=2),       # stride=2  [batch,256,25,25]->[batch,256,12,12]
            nn.ReLU(inplace=True),
            nn.Conv2d(256, 384, 3),          # [batch,256,12,12]->[batch,384,10,10]
            nn.BatchNorm2d(384),
            nn.ReLU(inplace=True),
            nn.Conv2d(384, 384, 3),          # [batch,384,10,10]->[batch,384,8,8]
            nn.BatchNorm2d(384),
            nn.ReLU(inplace=True),
            nn.Conv2d(384, 256, 3),          # [batch,384,8,8]->[batch,256,6,6]
            nn.BatchNorm2d(256),
        )
        self.anchor_num = config.anchor_num    # 5 anchors at every position
        """ Classification and regression kernels from the template branch """
        self.examplar_cla = nn.Conv2d(256, 256 * 2 * self.anchor_num, kernel_size=3, stride=1, padding=0)
        self.examplar_reg = nn.Conv2d(256, 256 * 4 * self.anchor_num, kernel_size=3, stride=1, padding=0)
        """ Classification and regression feature maps from the search branch """
        self.instance_cla = nn.Conv2d(256, 256, kernel_size=3, stride=1, padding=0)
        self.instance_reg = nn.Conv2d(256, 256, kernel_size=3, stride=1, padding=0)
        # This 1x1 convolution does not appear in the SiamRPN framework figure and feels optional;
        # it is only applied to the regression branch. A simple way to see it is as a little extra learning capacity.
        self.regress_adjust = nn.Conv2d(4 * self.anchor_num, 4 * self.anchor_num, 1)
        if init_weight:
            self._init_weights()
    def _init_weights(self):
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                nn.init.xavier_uniform_(m.weight, 1)  # Xavier initialization keeps input and output variances consistent, so the outputs do not all collapse towards 0
                if m.bias is not None:
                    nn.init.constant_(m.bias, 0)      # biases initialized to 0
            elif isinstance(m, nn.BatchNorm2d):       # before the activation we want the outputs to have a well-behaved distribution, which makes computing gradients and updating parameters easier; that is what BatchNorm2d is for
                nn.init.constant_(m.weight, 1)
                nn.init.constant_(m.bias, 0)
            elif isinstance(m, nn.Linear):
                nn.init.xavier_uniform_(m.weight, 1)
                if m.bias is not None:
                    nn.init.constant_(m.bias, 0)

self.featureExtract defines the feature extraction network, which is AlexNet; a detailed walk-through of this part can be found in the blog post linked here. The RPN part defines four different nn.Conv2d layers, corresponding to the four orange Conv blocks in the architecture figure; the main derivation is detailed in Section 2.2.
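If you want to check the layer-by-layer sizes written in the comments above, a quick diagnostic like the following (not part of the repository; it assumes the SiamRPNNet class defined above is in scope) prints the shape after every layer of featureExtract for a 127x127 template:

import torch

model = SiamRPNNet()
x = torch.randn(1, 3, 127, 127)   # a dummy template image
with torch.no_grad():
    for layer in model.featureExtract:
        x = layer(x)
        print(type(layer).__name__, tuple(x.shape))
# The final line printed should be: BatchNorm2d (1, 256, 6, 6), matching the comments above.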

2.2 Network forward pass for training and testing

2.2.1 Training forward pass

  The code comes first, then the analysis. The training forward pass is as follows:

"""——————————前向传播用于训练——————————————————"""
    def forward(self, template, detection):
        N = template.size(0)    # batch=32
        template_feature = self.featureExtract(template)    #[32,256,6,6]
        detection_feature = self.featureExtract(detection)  #[32,256,24,24]
        """对应模板分支,求分类核以及回归核"""
        # [32,256*2*5,4,4]->[32,2*5,256,4,4]
        kernel_score = self.examplar_cla(template_feature)
        # 类似可得[32,4*5,256,4,4]
        kernel_regression = self.examplar_reg(template_feature)
        """对应搜素图像的分支,得到搜索图像的特征图"""
        conv_score = self.instance_cla(detection_feature)      #[32,256,22,22]
        conv_regression = self.instance_reg(detection_feature)   #[32,256,22,22]
        """对应模板和搜索图像的分类"""
        # [32,256,22,22]->[1,32*256=8192,22,22]
        conv_scores = conv_score.reshape(1, -1, 22, 22)
        score_filters = kernel_score.reshape(-1, 256, 4, 4)    #[32*2*5,256,4,4]
        #inout=[1,8192,22,22],filter=[320,256,4,4],得到output=[1,32*2*5,19,19]->[32,10,19,19] 32始终是batch不变,勿忘!
        pred_score = F.conv2d(conv_scores, score_filters, groups=N).reshape(N, 10, 19,19)
        """对应模板和搜索图像的回归----------"""
        #[32,256,22,22]->[1,32*256=8192,22,22]
        conv_reg = conv_regression.reshape(1, -1, 22, 22)
        reg_filters = kernel_regression.reshape(-1, 256, 4, 4)   #[32*4*5,256,4,4]
        #input=[1,8192,22,22],filter=[340,256,4,4],得到output=[1,32*4*5,19,19]->[32,4*5=20,19,19]——>微调
        pred_regression = self.regress_adjust(F.conv2d(conv_reg, reg_filters, groups=N).reshape(N, 20, 19, 19))
        #score.shape=[32,10,19,19],regression.shape=[32,20,19,19]
        return pred_score, pred_regression

In the code comments the training batch_size is assumed to be 32. First, the 127x127x3 template image and the 271x271x3 detection (search) image pass through the feature extraction network with shared parameters, giving feature maps of size 6x6x256 and 24x24x256 respectively (in the paper the detection image is 255x255, so there the output feature map is 22x22x256). Then the 24x24x256 detection feature map goes through the two convolution layers self.instance_cla and self.instance_reg; these two layers have exactly the same structure and both output a 22x22x256 map, but the two feature maps serve different purposes, one for classification and one for regression. The 6x6x256 feature map from the upper (template) branch is passed through self.examplar_cla and self.examplar_reg, producing feature maps of size [32,256x2x5,4,4] and [32,256x4x5,4,4], where the 2 and the 4 are the quantities actually being learned: the 2 stands for foreground and background (the higher the foreground score, the more likely that location is the target), and the 4 stands for the four fine-tuning offsets dx, dy, dw, dh.
  Next, the four feature maps obtained above are cross-correlated in pairs (if cross-correlation is new to you, it is explained in the blog post mentioned earlier). The resulting feature maps used for classification and regression have sizes [32,10,19,19] and [32,20,19,19] respectively, and these are what training operates on.
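The trick that makes this pairwise cross-correlation efficient is F.conv2d with groups=N: the batch dimension is folded into the channel dimension, so each sample's search features are correlated only with that sample's own template kernels. The small self-contained sketch below (batch of 2 and random tensors, for illustration only) shows that the grouped convolution matches a per-sample loop:

import torch
import torch.nn.functional as F

N = 2                                      # batch size (32 in the training code above)
search = torch.randn(N, 256, 22, 22)       # per-sample search features (classification branch)
kernels = torch.randn(N * 10, 256, 4, 4)   # per-sample template kernels, 2*5=10 per sample

# Fold the batch into the channel axis, then convolve with groups=N so that
# group i (channels 256*i .. 256*i+255) only sees kernels 10*i .. 10*i+9.
out = F.conv2d(search.reshape(1, -1, 22, 22), kernels, groups=N)   # [1, N*10, 19, 19]
out = out.reshape(N, 10, 19, 19)

# The same result, computed sample by sample (slower, but easier to read):
ref = torch.cat([F.conv2d(search[i:i + 1], kernels[i * 10:(i + 1) * 10]) for i in range(N)])
print(out.shape, torch.allclose(out, ref, atol=1e-4))   # torch.Size([2, 10, 19, 19]) True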

2.2.2 Testing forward pass

  The code first:

"""—————————————初始化————————————————————"""
    def track_init(self, template):
        N = template.size(0) #1
        template_feature = self.featureExtract(template)# [1,256, 6, 6]
        # kernel_score=[1,2*5*256,4,4]   kernel_regression=[1,4*5*256,4,4]
        kernel_score = self.examplar_cla(template_feature)
        kernel_regression = self.examplar_reg(template_feature)
        self.score_filters = kernel_score.reshape(-1, 256, 4, 4)    #[2*5,256,4,4]
        self.reg_filters = kernel_regression.reshape(-1, 256, 4, 4) #[4*5,256,4,4]
    """—————————————————跟踪—————————————————————"""
    def track_update(self, detection):
        N = detection.size(0)
        # [1,256,24,24]
        detection_feature = self.featureExtract(detection)
        """----得到搜索图像的feature map-----"""
        conv_score = self.instance_cla(detection_feature)     #[1,256,22,22]
        conv_regression = self.instance_reg(detection_feature)  #[1,256,22,22]
        """---------与模板互相关"""
        #input=[1,256,22,22] filter=[2*5,256,4,4] gropu=1 得output=[1,2*5,19,19]
        pred_score = F.conv2d(conv_score, self.score_filters, groups=N)
        # input=[1,256,22,22] filter=[4*5,256,4,4] gropu=1 得output=[1,4*5,19,19]
        pred_regression = self.regress_adjust(F.conv2d(conv_regression, self.reg_filters, groups=N))
        #score.shape=[1,10,19,19],regression.shape=[1,20,19,19]
        return pred_score, pred_regression

Looking carefully, this code is very similar to forward, just organized differently: during training the two branches are fed and produce outputs together, while at test time the network is explicitly split into the part run on the initial frame and the part run on every subsequent frame, with each later frame being cross-correlated against the fixed kernels from the initial frame. The paper also mentions this, as quoted below; it presents this as the reason SiamRPN is fast, but note that it is not the reason SiamRPN is faster than SiamFC.
[Screenshot: the relevant passage from the SiamRPN paper]
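In actual tracking this split means the template branch runs exactly once per video. A minimal usage sketch is below (random tensors stand in for the real template/search crops, which are produced by the data-preprocessing code covered in the test part):

import torch

model = SiamRPNNet().eval()
with torch.no_grad():
    z = torch.randn(1, 3, 127, 127)        # template crop from the first frame
    model.track_init(z)                    # template kernels computed once and cached
    for _ in range(3):                     # stands in for the loop over subsequent frames
        x = torch.randn(1, 3, 271, 271)    # search crop around the previous target position
        pred_score, pred_regression = model.track_update(x)
        # pred_score: [1,10,19,19], pred_regression: [1,20,19,19]
        # the tracker then decodes the best-scoring anchor to update the target box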

Finally, let's verify that the network is built correctly. The verification code and its result are below; as you can see, the network is assembled correctly.

if __name__ == '__main__':
    model = SiamRPNNet()
    z_train = torch.randn([32, 3, 127, 127])  # batch = 32
    x_train = torch.randn([32, 3, 271, 271])
    # returned shapes: pred_score [32,10,19,19], pred_regression [32,20,19,19]  (10 = 5*2, 20 = 5*4)
    pred_score_train, pred_regression_train = model(z_train, x_train)
    z_test = torch.randn([1, 3, 127, 127])
    x_test = torch.randn([1, 3, 271, 271])
    model.track_init(z_test)
    # returned shapes: pred_score [1,10,19,19], pred_regression [1,20,19,19]
    pred_score_test, pred_regression_test = model.track_update(x_test)
    print(pred_score_train.shape, pred_regression_train.shape)
    print(pred_score_test.shape, pred_regression_test.shape)

[Screenshot of the verification output] Over — that wraps up the first part, the architecture code analysis!

Link to the next part

  SiamRPN Code Analysis: training
