1. CBAM
https://blog.youkuaiyun.com/qq_44666320/article/details/105694019
https://blog.youkuaiyun.com/qq_38410428/article/details/103694759
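The linked posts cover CBAM, which pairs channel attention (like the module in the next section) with a spatial branch. A minimal sketch of that spatial module, following the standard CBAM formulation (my sketch, not the linked posts' exact code):

import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):
        super(SpatialAttention, self).__init__()
        # Two pooled maps (avg, max) are fused into one spatial attention map
        self.conv1 = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        avg_out = torch.mean(x, dim=1, keepdim=True)    # (N, 1, H, W)
        max_out, _ = torch.max(x, dim=1, keepdim=True)  # (N, 1, H, W)
        out = torch.cat([avg_out, max_out], dim=1)      # (N, 2, H, W)
        return self.sigmoid(self.conv1(out))            # weights in (0, 1)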
2. SENet
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, in_planes, ratio=16):
        super(ChannelAttention, self).__init__()
        # Squeeze step: global average- and max-pooled descriptors, (N, C, 1, 1)
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.max_pool = nn.AdaptiveMaxPool2d(1)
        # Shared MLP as 1x1 convolutions; use `ratio` rather than a hard-coded 16
        # so the constructor argument actually takes effect
        self.fc1 = nn.Conv2d(in_planes, in_planes // ratio, 1, bias=False)
        self.relu1 = nn.ReLU()
        self.fc2 = nn.Conv2d(in_planes // ratio, in_planes, 1, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        # Both descriptors pass through the same shared MLP, then are summed
        avg_out = self.fc2(self.relu1(self.fc1(self.avg_pool(x))))
        max_out = self.fc2(self.relu1(self.fc1(self.max_pool(x))))
        out = avg_out + max_out
        # Per-channel attention weights in (0, 1)
        return self.sigmoid(out)
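A quick usage sketch (assuming the module above; the shapes are illustrative): the sigmoid output has shape (N, C, 1, 1) and is broadcast-multiplied onto the feature map. Strictly speaking, the max-pooling branch is what distinguishes this from a plain SE block, which keeps only the average-pool path; this variant is the channel-attention half of CBAM.

# Illustrative usage: reweight a feature map channel-wise
x = torch.randn(2, 64, 32, 32)             # (N, C, H, W), values are arbitrary
ca = ChannelAttention(in_planes=64, ratio=16)
x = ca(x) * x                              # broadcast (N, C, 1, 1) over H and W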

This post describes how to apply the CBAM and SENet attention mechanisms in a ResNet and presents the implementation code in detail. It also discusses how the position at which the attention module is inserted affects whether pretrained parameters can still be used.
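On the pretrained-parameter point, a minimal sketch (my illustration, not the article's code; it assumes the ChannelAttention class above plus torchvision's ResNet): attaching the attention module as a new attribute inside a residual block leaves the original parameter names (conv1.weight, bn1.*, ...) unchanged, so the backbone's pretrained weights still load with strict=False, and only the new attention parameters train from scratch.

import torch
from torchvision.models.resnet import BasicBlock, ResNet

class AttnBasicBlock(BasicBlock):
    # BasicBlock with channel attention applied before the residual addition
    def __init__(self, inplanes, planes, *args, **kwargs):
        super().__init__(inplanes, planes, *args, **kwargs)
        self.ca = ChannelAttention(planes)  # new attribute; existing names untouched

    def forward(self, x):
        identity = x
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        out = self.ca(out) * out            # channel reweighting before the add
        if self.downsample is not None:
            identity = self.downsample(x)
        return self.relu(out + identity)

# ResNet-18 layout; "resnet18.pth" is a hypothetical local weights file
model = ResNet(AttnBasicBlock, [2, 2, 2, 2])
state = torch.load("resnet18.pth")
missing, unexpected = model.load_state_dict(state, strict=False)  # only ca.* are missing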