Introduction
To improve the detection accuracy of YOLOv8 without adding extra parameters or inference time, this article borrows the Asymmetric Convolution Block (ACBlock) proposed by ACNet (ICCV 2019) to improve the Conv module of YOLOv8. ACBlock gains accuracy at no extra inference cost by strengthening the square convolution with 1D asymmetric convolutions during training and fusing the branches at inference time. Specifically, during training ACBlock learns image features through three parallel branches: a square convolution, a horizontal 1D asymmetric convolution and a vertical 1D asymmetric convolution. At inference time, each convolution is first fused with its BatchNorm layer, and the fused results are then merged into a single standard convolution layer, so no extra inference overhead is introduced (the fusion rule is written out after the results table below). The ACNet paper verifies the effectiveness of ACBlock experimentally and attributes it to improved robustness against rotational distortion and to strengthening the central skeleton of the square convolution kernel.

The experimental results are as follows (performance is evaluated on the VOC dataset with 100 epochs, a batch size of 32 and an image size of 640×640):
| Model | mAP50-95 | mAP50 | run time (h) | params (M) | inference time (ms) |
|---|---|---|---|---|---|
| YOLOv8 | 0.549 | 0.760 | 1.051 | 3.01 | 0.2+0.3(postprocess) |
| YOLO11 | 0.553 | 0.757 | 1.142 | 2.59 | 0.2+0.3(postprocess) |
| yolov8_AConv | 0.550 | 0.760 | 1.052 | 3.19 | 0.2+0.3(postprocess) |
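
Concretely, the per-branch conv–BN fusion described above is the standard BatchNorm folding: writing $\mu$, $\sigma^2$, $\gamma$ and $\beta$ for the running mean, running variance, scale and shift of a branch's BN layer, each branch collapses into a convolution with weight $W'$ and bias $b'$:

$$
W' = \frac{\gamma}{\sqrt{\sigma^{2} + \epsilon}}\,W, \qquad b' = \beta - \frac{\gamma\,\mu}{\sqrt{\sigma^{2} + \epsilon}}
$$

The fused $K\times 1$ and $1\times K$ kernels are then added onto the central column and central row of the fused $K\times K$ kernel, and the three biases are summed, which yields the single standard convolution used at inference time.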

Important note: the modified code may simply not be a good fit for the dataset I used; it may still prove effective on other datasets.
The purpose of these modifications is to lower the difficulty of migrating the latest research advances into YOLO code, so as to provide a reference for readers interested in recent research.
Code Migration
Key Content
Step 1: Migrate the Code
In the ultralytics framework, module code mainly lives under the ultralytics/nn folder. To keep the new code separate from the official code, create an extra_modules folder there, then create a new file and add the following code:
```python
import torch
import torch.nn as nn
import torch.nn.init as init

class ACBlock(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1,
                 padding_mode='zeros', deploy=False, use_affine=True, reduce_gamma=False, gamma_init=None):
        super(ACBlock, self).__init__()
        self.deploy = deploy
        if deploy:
            # Inference mode: a single fused KxK convolution with bias
            self.fused_conv = nn.Conv2d(in_channels, out_channels, (kernel_size, kernel_size), stride=stride,
                                        padding=padding, dilation=dilation, groups=groups, bias=True,
                                        padding_mode=padding_mode)
        else:
            # Training mode: KxK square conv plus Kx1 and 1xK asymmetric convs, each followed by its own BN
            self.square_conv = nn.Conv2d(in_channels, out_channels, (kernel_size, kernel_size), stride=stride,
                                         padding=padding, dilation=dilation, groups=groups, bias=False,
                                         padding_mode=padding_mode)
            self.square_bn = nn.BatchNorm2d(num_features=out_channels, affine=use_affine)
            if padding - kernel_size // 2 >= 0:
                # Common use case, e.g. k=3, p=1 or k=5, p=2
                self.crop = 0
                # The padding of the 1xK and Kx1 layers is adjusted so their sliding windows
                # align with those of the KxK layer (Fig. 2 in the ACNet paper)
                hor_padding = [padding - kernel_size // 2, padding]
                ver_padding = [padding, padding - kernel_size // 2]
            else:
                # padding - kernel_size // 2 < 0 amounts to negative padding, i.e. cropping;
                # nn.Conv2d does not support negative padding, so it is applied manually in forward()
                self.crop = kernel_size // 2 - padding
                hor_padding = [0, padding]
                ver_padding = [padding, 0]
            self.ver_conv = nn.Conv2d(in_channels, out_channels, (kernel_size, 1), stride=stride,
                                      padding=ver_padding, dilation=dilation, groups=groups, bias=False,
                                      padding_mode=padding_mode)
            self.hor_conv = nn.Conv2d(in_channels, out_channels, (1, kernel_size), stride=stride,
                                      padding=hor_padding, dilation=dilation, groups=groups, bias=False,
                                      padding_mode=padding_mode)
            self.ver_bn = nn.BatchNorm2d(num_features=out_channels, affine=use_affine)
            self.hor_bn = nn.BatchNorm2d(num_features=out_channels, affine=use_affine)
            if reduce_gamma:
                self.init_gamma(1.0 / 3)
            if gamma_init is not None:
                assert not reduce_gamma
                self.init_gamma(gamma_init)

    def init_gamma(self, gamma_value):
        # Set the BN scale (gamma) of all three branches to the same value
        for bn in (self.square_bn, self.ver_bn, self.hor_bn):
            init.constant_(bn.weight, gamma_value)

    def forward(self, input):
        if self.deploy:
            return self.fused_conv(input)
        square_outputs = self.square_bn(self.square_conv(input))
        if self.crop > 0:
            ver_input = input[:, :, :, self.crop:-self.crop]
            hor_input = input[:, :, self.crop:-self.crop, :]
        else:
            ver_input, hor_input = input, input
        vertical_outputs = self.ver_bn(self.ver_conv(ver_input))
        horizontal_outputs = self.hor_bn(self.hor_conv(hor_input))
        return square_outputs + vertical_outputs + horizontal_outputs
```
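
As a rough, self-contained sketch of the deployment-time fusion described earlier (not the original conversion code), the snippet below folds each BatchNorm into its convolution and merges the three branches into one K×K convolution. It assumes the common case padding == kernel_size // 2 (i.e. crop == 0) and use_affine=True; the helper names fuse_conv_bn and acblock_to_fused_conv are illustrative, not from the ACNet repository.

```python
import torch
import torch.nn as nn


def fuse_conv_bn(conv: nn.Conv2d, bn: nn.BatchNorm2d):
    # Fold a BatchNorm layer into the preceding bias-free convolution:
    # W' = W * gamma / sqrt(var + eps),  b' = beta - gamma * mean / sqrt(var + eps)
    std = (bn.running_var + bn.eps).sqrt()
    t = bn.weight / std
    return conv.weight * t.reshape(-1, 1, 1, 1), bn.bias - bn.running_mean * t


@torch.no_grad()
def acblock_to_fused_conv(block):
    # 'block' is a trained ACBlock in its training-time (deploy=False) form
    k_sq, b_sq = fuse_conv_bn(block.square_conv, block.square_bn)
    k_ver, b_ver = fuse_conv_bn(block.ver_conv, block.ver_bn)
    k_hor, b_hor = fuse_conv_bn(block.hor_conv, block.hor_bn)
    # Add the Kx1 / 1xK kernels onto the central column / row of the KxK kernel
    k = k_sq.clone()
    center = k.size(2) // 2
    k[:, :, :, center:center + 1] += k_ver
    k[:, :, center:center + 1, :] += k_hor
    b = b_sq + b_ver + b_hor
    sq = block.square_conv
    fused = nn.Conv2d(sq.in_channels, sq.out_channels, sq.kernel_size, stride=sq.stride,
                      padding=sq.padding, dilation=sq.dilation, groups=sq.groups, bias=True)
    fused.weight.copy_(k)
    fused.bias.copy_(b)
    return fused
```

With the block in eval() mode, torch.allclose(block(x), acblock_to_fused_conv(block)(x), atol=1e-5) should hold for a random input x, which is a quick way to check that the fusion preserves the output.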
