[Semantic Segmentation] CGNet -- A Light-weight Context Guided Network for Semantic Segmentation

CGNet is an efficient and extremely lightweight semantic segmentation network. Through its distinctive Context Guided Block design, it achieves 64.8% mean IoU on the Cityscapes dataset with fewer than 0.5 M parameters. CGNet adopts depth-wise convolutions to maintain channel independence and leverages global context to refine the feature representation, effectively improving segmentation accuracy.


Paper: CGNet
GitHub: Code



Without any post-processing or multi-scale testing, the proposed CGNet achieves 64.8% mean IoU on Cityscapes with fewer than 0.5 M parameters.

Context Guided Block

Firstly, the CG block learns the joint feature of both the local feature and the surrounding context. Thus, the CG block learns the representation of each object from both the object itself and its spatially related objects, which contains rich co-occurrence relationships.

Secondly, the CG block employs the global context to improve the joint feature. The global context is used to re-weight the joint feature channel-wise, so as to emphasize useful components and suppress useless ones.
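This channel-wise re-weighting is essentially a squeeze-and-excitation style gate driven by global average pooling. Below is a minimal PyTorch sketch of such a gate for illustration; the bottleneck reduction ratio and exact layer composition are my assumptions, not necessarily the paper's exact f_glo definition.

```python
import torch
import torch.nn as nn

class GlobalContextGate(nn.Module):
    """Channel-wise re-weighting of the joint feature using global context.

    Sketch only: global average pooling summarizes the whole feature map,
    and a small bottleneck MLP produces per-channel weights in [0, 1].
    The reduction ratio of 16 is an assumption for illustration.
    """
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        n, c, _, _ = x.size()
        w = x.mean(dim=(2, 3))            # global average pooling -> (n, c)
        w = self.fc(w).view(n, c, 1, 1)   # per-channel weights
        return x * w                      # emphasize useful channels, suppress useless ones

# usage: gate = GlobalContextGate(64); y = gate(torch.randn(1, 64, 32, 32))
```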

The CG block adopts channel-wise convolutions (i.e., depth-wise convolutions).
Many previous works reduce computation by replacing standard convolutions with depth-wise separable convolutions. In contrast, the CG block uses only the depth-wise convolution and removes the point-wise (1×1) convolution.

This design is not suitable for the proposed CG block, since the local feature and the surrounding context in the CG block need to maintain channel independence.
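To make the difference concrete, here is a small PyTorch sketch contrasting a depth-wise separable convolution (depth-wise conv followed by a channel-mixing 1×1 point-wise conv) with the depth-wise-only (channel-wise) convolution the CG block keeps; the channel count is arbitrary.

```python
import torch.nn as nn

channels = 64

# Depth-wise separable convolution used by many efficient networks:
# a per-channel 3x3 depth-wise conv followed by a 1x1 point-wise conv
# that mixes information across channels.
depthwise_separable = nn.Sequential(
    nn.Conv2d(channels, channels, kernel_size=3, padding=1,
              groups=channels, bias=False),
    nn.Conv2d(channels, channels, kernel_size=1, bias=False),  # cross-channel mixing
)

# The CG block instead keeps only the depth-wise (channel-wise) convolution:
# every channel is filtered independently, so the local feature and the
# surrounding context remain channel-independent.
channel_wise_only = nn.Conv2d(channels, channels, kernel_size=3, padding=1,
                              groups=channels, bias=False)
```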
[Figure: local residual learning (LRL) vs. global residual learning (GRL) in the CG block]
Intuitively, global residual learning (GRL) has a stronger capability than local residual learning (LRL) to promote the flow of information in the network.
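Putting the pieces together, below is a minimal PyTorch sketch of a CG block as described above: f_loc and f_sur as channel-wise convolutions, the joint feature via concatenation + BN + PReLU, f_glo as a global-context gate, and a global residual connection. The entry 1×1 channel-reduction and the reduction ratio are assumptions on my part; consult the official code for the exact configuration.

```python
import torch
import torch.nn as nn

class CGBlock(nn.Module):
    """Minimal sketch of a Context Guided (CG) block.

    f_loc: 3x3 channel-wise conv for the local feature.
    f_sur: 3x3 dilated channel-wise conv for the surrounding context.
    f_joi: concatenation of the two branches + BN + PReLU.
    f_glo: global average pooling + bottleneck MLP that re-weights f_joi channel-wise.
    A global residual connection (GRL) adds the block input to the output.
    """
    def __init__(self, channels, dilation=2, reduction=16):
        super().__init__()
        half = channels // 2
        # assumed entry 1x1 conv that halves the channels before the two branches
        self.reduce = nn.Sequential(
            nn.Conv2d(channels, half, kernel_size=1, bias=False),
            nn.BatchNorm2d(half),
            nn.PReLU(half),
        )
        # f_loc: channel-wise (depth-wise) conv, no point-wise conv afterwards
        self.f_loc = nn.Conv2d(half, half, kernel_size=3, padding=1,
                               groups=half, bias=False)
        # f_sur: channel-wise dilated conv with a larger receptive field
        self.f_sur = nn.Conv2d(half, half, kernel_size=3, padding=dilation,
                               dilation=dilation, groups=half, bias=False)
        self.bn_act = nn.Sequential(nn.BatchNorm2d(channels), nn.PReLU(channels))
        # f_glo: global-context gate, as sketched earlier
        self.f_glo = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        y = self.reduce(x)
        joi = torch.cat([self.f_loc(y), self.f_sur(y)], dim=1)  # joint feature
        joi = self.bn_act(joi)
        n, c, _, _ = joi.size()
        w = self.f_glo(joi.mean(dim=(2, 3))).view(n, c, 1, 1)   # global context weights
        return x + joi * w  # global residual learning (GRL)
```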


Context Guided Network

CGNet follows the major principle of “deep and thin” to save memory footprint as much as possible.

CG block is utilized in all stages of CGNet. Thus, CGNet captures contextual information in all stages of the network and is specially tailored for increasing segmentation accuracy.

Current mainstream segmentation networks have five down-sampling stages, which learn overly abstract object features and lose much of the discriminative spatial information, causing over-smoothed segmentation boundaries. In contrast, CGNet has only three down-sampling stages, which helps preserve spatial information.
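A rough skeleton of the resulting three-stage network might look like the following, reusing the CGBlock sketch above. The stage widths, block counts (M=3, N=21, as suggested by the CGNet_M3N21 configuration) and dilation rates follow my reading of the paper; plain strided convolutions stand in for the paper's down-sampling blocks, and the input-injection mechanism is omitted.

```python
import torch.nn as nn
import torch.nn.functional as F

def conv_bn_prelu(in_ch, out_ch, stride=1):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=stride, padding=1, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.PReLU(out_ch),
    )

class CGNetSkeleton(nn.Module):
    """Three down-sampling stages only (overall stride 8), unlike the five
    stages of mainstream backbones, to preserve spatial information."""
    def __init__(self, num_classes=19, M=3, N=21):  # 19 classes for Cityscapes
        super().__init__()
        self.stage1 = nn.Sequential(                   # 1/2 resolution
            conv_bn_prelu(3, 32, stride=2),
            conv_bn_prelu(32, 32),
            conv_bn_prelu(32, 32),
        )
        self.down2 = conv_bn_prelu(32, 64, stride=2)   # 1/4 resolution
        self.stage2 = nn.Sequential(*[CGBlock(64, dilation=2) for _ in range(M)])
        self.down3 = conv_bn_prelu(64, 128, stride=2)  # 1/8 resolution
        self.stage3 = nn.Sequential(*[CGBlock(128, dilation=4) for _ in range(N)])
        self.classifier = nn.Conv2d(128, num_classes, kernel_size=1)

    def forward(self, x):
        size = x.shape[2:]
        x = self.stage1(x)
        x = self.stage2(self.down2(x))
        x = self.stage3(self.down3(x))
        x = self.classifier(x)
        # recover full resolution from the 1/8-resolution prediction
        return F.interpolate(x, size=size, mode="bilinear", align_corners=False)
```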
[Figure/Table: overall architecture of CGNet]

Experiment

1. Ablation Studies

[Tables: ablation study results]

2. Comparison with state-of-the-art methods

[Table: comparison with state-of-the-art methods]
