READING NOTE: Wider or Deeper: Revisiting the ResNet Model for Visual Recognition

This paper proposes an interpretation of the behaviour of ResNet models and, based on that understanding, designs a group of relatively shallow convolutional networks that achieve excellent results on the ImageNet classification dataset. It also examines how different network structures affect semantic image segmentation performance.


TITLE: Wider or Deeper: Revisiting the ResNet Model for Visual Recognition

AUTHOR: Zifeng Wu, Chunhua Shen, Anton van den Hengel

ASSOCIATION: The University of Adelaide

FROM: arXiv:1611.10080

CONTRIBUTIONS

  1. A further-developed intuitive view of ResNets is introduced, which helps to understand their behaviour and to identify directions for further improvement.
  2. A group of relatively shallow convolutional networks is proposed based on this new understanding. Some of them achieve state-of-the-art results on the ImageNet classification dataset.
  3. The impact of using different networks on semantic image segmentation performance is evaluated; used as pre-trained feature extractors, these networks substantially boost existing algorithms.

SUMMARY

For residual unit i, let y_{i-1} be its input, and let f_i(·) be its trainable non-linear mapping, also named Block i. The output of unit i is recursively defined as

y_i = f_i(y_{i-1}, ω_i) + y_{i-1}

where ω_i denotes the trainable parameters, and f_i(·) is often two or three stacked convolution stages in a ResNet building block. The top-left network in the paper's figure can then be formulated as

y_2 = y_1 + f_2(y_1, ω_2)

    = y_0 + f_1(y_0, ω_1) + f_2(y_0 + f_1(y_0, ω_1), ω_2)
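As a sanity check, the two-unit recursion and its unrolled form can be compared numerically. The sketch below uses a single ReLU-activated linear map as a hypothetical stand-in for each block f_i (the paper uses two or three stacked convolution stages):

```python
import numpy as np

def f(y, w):
    """Hypothetical stand-in for one block f_i: a ReLU-activated linear map."""
    return np.maximum(w @ y, 0.0)

rng = np.random.default_rng(0)
y0 = rng.standard_normal(4)
w1 = rng.standard_normal((4, 4))
w2 = rng.standard_normal((4, 4))

# Sequursion applied sequentially: y_i = f_i(y_{i-1}, w_i) + y_{i-1}
y1 = f(y0, w1) + y0
y2_seq = f(y1, w2) + y1

# Unrolled form: y_2 = y_0 + f_1(y_0, w_1) + f_2(y_0 + f_1(y_0, w_1), w_2)
y2_unrolled = y0 + f(y0, w1) + f(y0 + f(y0, w1), w2)

assert np.allclose(y2_seq, y2_unrolled)
```

The two computations agree exactly, since the unrolled expression is just the recursion with y_1 substituted in.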

Thus, in an SGD iteration, the backward gradients are:

Δω_2 = (df_2/dω_2) Δy_2

Δy_1 = Δy_2 + (df_2/dy_1) Δy_2

Δω_1 = (df_1/dω_1) Δy_2 + (df_1/dω_1)(df_2/dy_1) Δy_2
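The two-term gradient for ω_1 can be verified numerically in the scalar case. The sketch below uses f_i(y, ω) = tanh(ω·y) as a hypothetical block, so dy_2/dω_1 = (df_1/dω_1)(1 + df_2/dy_1), and compares that analytic form against a central finite difference:

```python
import math

def f(y, w):
    # hypothetical scalar block f_i(y, w) = tanh(w * y)
    return math.tanh(w * y)

def forward(y0, w1, w2):
    y1 = y0 + f(y0, w1)          # y_1 = y_0 + f_1(y_0, w_1)
    y2 = y1 + f(y1, w2)          # y_2 = y_1 + f_2(y_1, w_2)
    return y1, y2

y0, w1, w2 = 0.7, 0.3, -0.5
y1, y2 = forward(y0, w1, w2)

# Analytic: dy2/dw1 = (df1/dw1) * (1 + df2/dy1), i.e. the sum of both terms
sech2 = lambda x: 1.0 - math.tanh(x) ** 2   # derivative of tanh
df1_dw1 = y0 * sech2(w1 * y0)
df2_dy1 = w2 * sech2(w2 * y1)
grad_analytic = df1_dw1 + df1_dw1 * df2_dy1  # first term + second term

# Central finite-difference check on w1
eps = 1e-6
_, y2_plus = forward(y0, w1 + eps, w2)
_, y2_minus = forward(y0, w1 - eps, w2)
grad_fd = (y2_plus - y2_minus) / (2 * eps)

assert abs(grad_analytic - grad_fd) < 1e-6
```

The second term is the part of the gradient that flows through Block 2; if df_2/dy_1 vanishes, only the first (shortcut) term remains, which is exactly the over-deepened case discussed next.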

Ideally, when the effective depth l ≥ 2, both terms of Δω_1 are non-zero, as illustrated by the bottom-left case. However, when the effective depth l = 1, the second term goes to zero, as illustrated by the bottom-right case. When this happens, we say the ResNet is over-deepened: it cannot be trained in a fully end-to-end manner, even with the shortcut connections.

To summarize, shortcut connections enable us to train wider and deeper networks. As networks grow to some point, we face a dilemma between width and depth. Beyond that point, going deeper actually yields a wider network with extra features that are not completely end-to-end trained; going wider literally yields a wider network without changing its end-to-end characteristic.

The authors designed three kinds of network structures, illustrated in a figure in the paper, and report their classification performance on the ImageNet validation set.
