Inserting CBAM into MMYOLO
Since CBAM is an attention module that ships with MMYOLO, inserting it only requires modifying the corresponding config file. Taking configs/yolov5/yolov5_s-v61_syncbn_8xb16-300e_coco.py as an example, modify the backbone inside model:
model = dict(
    backbone=dict(
        frozen_stages=4,
        # add the plugin settings on top of the original backbone config
        plugins=[
            dict(cfg=dict(type='CBAM'),
                 stages=(False, True, True, True))
        ]),
    bbox_head=dict(
        head_module=dict(num_classes=num_classes),
        prior_generator=dict(base_sizes=anchors)))
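To verify that the plugin is actually inserted, you can build the model from the modified config and print the backbone. The snippet below is a minimal sketch, assuming MMYOLO and MMEngine are installed and the modified file is saved at the path shown:

from mmengine.config import Config
from mmengine.registry import init_default_scope

from mmyolo.registry import MODELS

cfg = Config.fromfile('configs/yolov5/yolov5_s-v61_syncbn_8xb16-300e_coco.py')
init_default_scope('mmyolo')  # resolve 'CBAM' and other types in MMYOLO's registry

model = MODELS.build(cfg.model)
print(model.backbone)  # CBAM blocks should appear at the end of the enabled stages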
In the plugin settings, the stages parameter controls which backbone stages the module is inserted into, and the type parameter selects which attention module to use. A simple example:
Examples:
    >>> plugins=[
    ...     dict(cfg=dict(type='xxx', arg1='xxx'),
    ...          stages=(False, True, True, True)),
    ...     dict(cfg=dict(type='yyy'),
    ...          stages=(True, True, True, True)),
    ... ]

Suppose ``stage_idx=0``, the structure of blocks in the stage would be:

.. code-block:: none

    conv1 -> conv2 -> conv3 -> yyy

Suppose ``stage_idx=1``, the structure of blocks in the stage would be:

.. code-block:: none

    conv1 -> conv2 -> conv3 -> xxx -> yyy
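For instance, a sketch that inserts CBAM only into the last two stages of the YOLOv5 backbone and tunes its arguments (assuming CBAM's constructor exposes reduce_ratio and kernel_size, as in mmyolo/models/plugins/cbam.py):

model = dict(
    backbone=dict(
        plugins=[
            dict(cfg=dict(type='CBAM', reduce_ratio=16, kernel_size=7),
                 stages=(False, False, True, True))  # stages 3 and 4 only
        ]))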
Since CBAM ships with MMYOLO, setting the type parameter to 'CBAM' is enough. To use an attention module that MMYOLO does not provide, you first have to register it as a custom module, as sketched below.
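A minimal sketch of such a registration, assuming the file is placed at mmyolo/models/plugins/my_attention.py; the class name MyAttention and its internal layout are hypothetical, and the first argument should stay in_channels because the backbone passes the stage's channel number when building each plugin:

import torch
import torch.nn as nn

from mmyolo.registry import MODELS


@MODELS.register_module()
class MyAttention(nn.Module):
    """Hypothetical channel-attention plugin, for illustration only."""

    def __init__(self, in_channels: int, reduce_ratio: int = 16):
        super().__init__()
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(in_channels, in_channels // reduce_ratio, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(in_channels // reduce_ratio, in_channels, bias=False),
            nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Squeeze spatial dims, compute per-channel weights, rescale the input.
        b, c, _, _ = x.size()
        weight = self.fc(self.avg_pool(x).view(b, c)).view(b, c, 1, 1)
        return x * weight

Export the class in mmyolo/models/plugins/__init__.py (or import its module through the config's custom_imports field) so the registry can find it, then reference it as dict(cfg=dict(type='MyAttention'), stages=...) in plugins.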
The modified configs/yolov5/yolov5_s-v61_syncbn_8xb16-300e_coco.py:
_base_ = ['../_base_/default_runtime.py', '../_base_/det_p5_tta.py']
# ========================Frequently modified parameters======================
# -----data related-----
data_root = 'data/coco/' # Root path of data
# Path of train annotation file
train_ann_file = 'annotations/instances_train2017.json'
train_data_prefix = 'train2017/' # Prefix of train image path
# Path of val annotation file
val_ann_file = 'annotations/instances_val2017.json'
val_data_prefix = 'val2017/' # Prefix of val image path
num_classes = 80 # Number of classes for classification
# Batch size of a single GPU during training
train_batch_size_per_gpu = 16
# Worker to pre-fetch data for each single GPU during training
train_num_workers = 8
#