Learning Pixel-level Semantic Affinity with Image-level Supervision

background

Training data for semantic segmentation is hard to obtain because it needs pixel-wise labels.

contribution

Proposes a new way to generate pixel-level pseudo labels for segmentation from image-level class labels only.

method

(1) Use a classification network to generate the CAM seeds:

classification network: a series of convolution layers followed by global average pooling and a fully connected classification layer

$M_c$ is the CAM for class $c$: $M_c(x, y) = \mathbf{w}_c^{\top} f^{cam}(x, y)$, where $\mathbf{w}_c$ is the weight vector of the last fc layer for class $c$ and $f^{cam}(x, y)$ is the feature vector at position $(x, y)$ in the output of the last convolution layer.
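As a minimal sketch of this step (the array shapes and the name `compute_cam` are my own, not from the paper), the CAM for a class is just a channel-weighted sum of the conv feature map:

```python
import numpy as np

def compute_cam(f_cam, w_c):
    """Minimal CAM computation: M_c(x, y) = w_c^T f^cam(x, y).

    f_cam : (C_feat, H, W) output of the last conv layer
    w_c   : (C_feat,) fc weight vector for class c
    """
    # Weighted sum over the channel axis gives an (H, W) class activation map.
    cam = np.tensordot(w_c, f_cam, axes=([0], [0]))
    cam = np.maximum(cam, 0)            # keep positive evidence only
    cam = cam / (cam.max() + 1e-5)      # normalize to [0, 1]
    return cam

# Toy example: a 512-channel 7x7 feature map and fc weights for one class.
f_cam = np.random.rand(512, 7, 7).astype(np.float32)
w_c = np.random.rand(512).astype(np.float32)
print(compute_cam(f_cam, w_c).shape)    # (7, 7)
```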

I used to wonder how a 7×7 feature map becomes a smooth heat-map rather than a chessboard-like grid. I tried it myself and found that it is due to the interpolation performed by "cv2.resize".
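A quick way to see this (toy arrays, not the actual network output): resizing the same 7×7 map with nearest-neighbor vs. the default bilinear interpolation shows where the smoothness comes from.

```python
import cv2
import numpy as np

# A coarse 7x7 CAM (e.g. from a 224x224 input with stride-32 features).
cam_7x7 = np.random.rand(7, 7).astype(np.float32)

# Nearest-neighbor keeps the chessboard look; bilinear interpolation
# (the cv2.resize default) blends neighboring cells into a smooth heat-map.
blocky = cv2.resize(cam_7x7, (224, 224), interpolation=cv2.INTER_NEAREST)
smooth = cv2.resize(cam_7x7, (224, 224), interpolation=cv2.INTER_LINEAR)
print(blocky.shape, smooth.shape)  # (224, 224) (224, 224)
```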

(2) Use the CAM seeds to generate "affinity" with AffinityNet

Affinity $W$:

A measure of how likely two pixels are to belong to the same class. In this paper, AffinityNet outputs a feature vector for each pixel; for a pair of pixels, $D$ is the distance between their two feature vectors (the paper uses the L1 distance), and the affinity is $e^{-D}$.
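A minimal sketch of that affinity value, assuming `f_aff` is the (channels, H, W) feature map produced by AffinityNet (the function name and shapes are my own):

```python
import numpy as np

def pairwise_affinity(f_aff, i, j):
    """Affinity between pixels i=(y1, x1) and j=(y2, x2):
    W_ij = exp(-||f_aff(i) - f_aff(j)||_1), i.e. e^{-D} with D the
    L1 distance between the two per-pixel feature vectors."""
    d = np.abs(f_aff[:, i[0], i[1]] - f_aff[:, j[0], j[1]]).sum()
    return np.exp(-d)

f_aff = np.random.rand(64, 56, 56).astype(np.float32)  # toy feature map
print(pairwise_affinity(f_aff, (10, 10), (10, 11)))    # close to 1 for similar features
```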

Data to train AffinityNet:

Pixel pairs in which both pixels confidently belong to either the background or an object area (according to the CAM seeds), and whose coordinate distance is less than a threshold (a hyperparameter).

The problem with this data:

But this causes an imbalance between positive and negative samples, because most pixel pairs within such a small neighborhood belong to the same class and therefore form positive samples.
So the pairs are divided into subsets (positive vs. negative) and balanced against each other during training, as sketched below.
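Here is a rough sketch of how such balanced pairs could be collected from a confident seed map (the 0/object-id/255 encoding and the helper name are my assumptions, not the paper's exact implementation):

```python
import numpy as np

def collect_pairs(seed, radius=5, max_per_set=1000):
    """Collect affinity training pairs from a confident CAM seed map.

    seed : (H, W) int map; 0 = background, >0 = an object class,
           255 = not confidently labeled (ignored).
    Returns equally sized lists of positive (same-class) and negative pairs.
    """
    H, W = seed.shape
    pos, neg = [], []
    ys, xs = np.nonzero(seed != 255)                  # confident pixels only
    for y1, x1 in zip(ys, xs):
        # Pair each pixel only with confident pixels inside a small radius
        # (the radius is the distance hyperparameter mentioned above).
        for dy in range(radius + 1):
            for dx in range(-radius, radius + 1):
                if dy == 0 and dx <= 0:
                    continue                          # skip self / duplicate pairs
                y2, x2 = y1 + dy, x1 + dx
                if 0 <= y2 < H and 0 <= x2 < W and seed[y2, x2] != 255:
                    pair = ((y1, x1), (y2, x2))
                    (pos if seed[y1, x1] == seed[y2, x2] else neg).append(pair)
    # Most nearby pairs share a class, so truncate both subsets to the same size.
    np.random.shuffle(pos)
    np.random.shuffle(neg)
    k = min(len(pos), len(neg), max_per_set)
    return pos[:k], neg[:k]
```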

The background area:

For any class $c'$ that is not in the image-level label, $M_{c'}$ is disregarded by setting its activation scores to zero. The background activation is then estimated from the remaining CAMs as $M_{bg}(x, y) = \{1 - \max_{c} M_c(x, y)\}^{\alpha}$, where $\alpha \ge 1$ controls the background strength: amplifying $M_{bg}$ (smaller $\alpha$) leaves only confident object regions, while weakening it (larger $\alpha$) leaves only confident background regions.
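A small sketch of this background score and the role of $\alpha$ (the function name and the example $\alpha$ values are only illustrative):

```python
import numpy as np

def background_score(cams, alpha):
    """M_bg(x, y) = (1 - max_c M_c(x, y)) ** alpha.

    cams  : (num_classes, H, W) CAMs normalized to [0, 1], with classes
            absent from the image-level label already zeroed out.
    alpha : >= 1; a lower alpha amplifies the background (used when
            extracting confident object seeds), a higher alpha weakens it
            (used when extracting confident background).
    """
    return (1.0 - cams.max(axis=0)) ** alpha

cams = np.random.rand(20, 56, 56).astype(np.float32)        # toy CAMs, 20 classes
bg_for_object_seeds = background_score(cams, alpha=4)       # amplified background
bg_for_background_seeds = background_score(cams, alpha=24)  # weakened background
```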

Loss of AffinityNet:

Class-balanced cross-entropy loss on the pair labels (same class or not). Since the pair labels are derived from the CAM seeds, the only supervision ultimately used is the image-level labels; a sketch follows below.
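A hedged sketch of such a class-balanced pair loss in PyTorch (the paper additionally splits the positive pairs into foreground and background subsets, which is omitted here):

```python
import torch

def affinity_loss(pred_aff, pair_label):
    """Class-balanced cross-entropy over pixel pairs.

    pred_aff   : (N,) predicted affinities in (0, 1], e.g. exp(-||d||_1)
    pair_label : (N,) 1 if the two pixels share a class, 0 otherwise.
    Positive and negative pairs are averaged separately so the (dominant)
    positive pairs do not swamp the loss.
    """
    eps = 1e-5
    pos = pair_label == 1
    neg = pair_label == 0
    loss_pos = -torch.log(pred_aff[pos] + eps).mean()
    loss_neg = -torch.log(1.0 - pred_aff[neg] + eps).mean()
    return 0.5 * (loss_pos + loss_neg)
```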

(3) Random walk

Now we can build a transition matrix $T$ from the affinity matrix $W$: the $i$-th row of $T$ gives the transition probabilities from pixel $i$ to the pixels within a certain distance of it. Concretely, $T = D^{-1} W^{\circ \beta}$, where $D_{ii} = \sum_j W_{ij}^{\beta}$ and $\beta > 1$ is a hyperparameter that sharpens the affinities.
Applying this random walk propagates the CAM activations to nearby pixels with high affinity, which gives us the refined CAMs; a sketch follows.
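A toy sketch of the propagation step, assuming a dense affinity matrix for a small map (in practice $W$ is sparse because affinities are only computed within a local radius; names and sizes are mine):

```python
import numpy as np

def random_walk_refine(cam, aff_mat, beta=8, t_steps=4):
    """Propagate a single-class CAM with the affinity-derived transition matrix.

    cam     : (H, W) activation map for one class
    aff_mat : (H*W, H*W) affinity matrix (zero outside the local radius)
    T = D^{-1} W**beta with D_ii = sum_j W_ij**beta, then
    vec(cam) <- T @ vec(cam), repeated t_steps times.
    """
    Wb = aff_mat ** beta                             # Hadamard power sharpens affinities
    T = Wb / (Wb.sum(axis=1, keepdims=True) + 1e-5)  # row-normalize: D^{-1} W^beta
    v = cam.reshape(-1)
    for _ in range(t_steps):
        v = T @ v
    return v.reshape(cam.shape)

# Toy example on a 28x28 map with a dense random affinity matrix.
cam = np.random.rand(28, 28).astype(np.float32)
aff = np.random.rand(28 * 28, 28 * 28).astype(np.float32)
refined = random_walk_refine(cam, aff)
print(refined.shape)   # (28, 28)
```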
