1. Motivation
- Traditional methods: lack semantic-level understanding and cannot complete complex scenes;
- Deep-learning methods: attempt to recover the entire target at once, under unreasonable constraints;
- Progressive completion methods: lead to distortion of the information;
- Attention-based methods: do not consider the relationship between different recurrences.
2. Approach
2.1 Network structure
Area Identification: integrates several partial convolution layers to update the mask and the feature map.
Feature Reasoning: a convolution-based encoder-decoder structure.
Feature Merging: directly takes the mean of the feature maps produced across recurrences (see the sketch below).
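As one concrete reading of "directly takes the mean", here is a minimal sketch of merging by a masked average, where each location is averaged only over the recurrences in which it was valid. The helper name merge_features and the mask convention (1 = valid) are assumptions for illustration, not the authors' code.

import torch

def merge_features(feature_maps, masks, eps = 1e-8):
    # feature_maps: list of [B, C, H, W] tensors, one per recurrence.
    # masks: list of [B, 1, H, W] tensors, 1 where the feature is valid.
    # Hypothetical helper: masked mean over the recurrence dimension.
    feats = torch.stack(feature_maps, dim = 0)    # [T, B, C, H, W]
    valid = torch.stack(masks, dim = 0)           # [T, B, 1, H, W]
    return (feats * valid).sum(dim = 0) / (valid.sum(dim = 0) + eps)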
2.2 Knowledge consistent attention
First, the similarity between the target pixel and its surrounding pixels is computed:

$$\mathrm{sim}^{i}_{x,y,x',y'} = \left\langle \frac{f^{i}_{x,y}}{\lVert f^{i}_{x,y} \rVert},\ \frac{f^{i}_{x',y'}}{\lVert f^{i}_{x',y'} \rVert} \right\rangle$$

where $f^{i}_{x,y}$ is the feature vector at location $(x, y)$ in the $i$-th recurrence; in the implementation this cosine similarity is further smoothed over a $3 \times 3$ neighborhood before the softmax.
I have the impression that the released code differs slightly from the paper here; I am not sure whether I have misunderstood. The paper states that a softmax yields the current attention score $\mathrm{score}'^{\,i}$, after which a weighted sum with the previous recurrence's score gives the final attention score:

$$\mathrm{score}^{i}_{x,y,x',y'} = \lambda\, \mathrm{score}'^{\,i}_{x,y,x',y'} + (1 - \lambda)\, \mathrm{score}^{i-1}_{x,y,x',y'}$$

In the code below, by contrast, the two scores are blended with a learnable ratio and normalized by the previous mask.
import torch
import torch.nn as nn
import torch.nn.functional as F

class KnowledgeConsistentAttention(nn.Module):
    def __init__(self, patch_size = 3, propagate_size = 3, stride = 1):
        super(KnowledgeConsistentAttention, self).__init__()
        self.patch_size = patch_size
        self.propagate_size = propagate_size
        self.stride = stride
        self.prop_kernels = None
        # Attention scores and masks cached from the previous recurrence,
        # used by the knowledge-consistency weighted sum in forward().
        self.att_scores_prev = None
        self.masks_prev = None
        # Learnable blending weight (the lambda of the paper's weighted sum).
        self.ratio = nn.Parameter(torch.ones(1))
    def forward(self, foreground, masks):
        bz, nc, w, h = foreground.size()
        if masks.size(3) != foreground.size(3):
            # Resize the mask to the spatial size of the feature map.
            masks = F.interpolate(masks, foreground.size()[2:])
        background = foreground.clone()
        # Turn every spatial location of the background into a 1x1 matching kernel.
        conv_kernels_all = background.view(bz, nc, w * h, 1, 1)
        conv_kernels_all = conv_kernels_all.permute(0, 2, 1, 3, 4)
        output_tensor = []
        att_score = []
        for i in range(bz):
            feature_map = foreground[i:i + 1]
            conv_kernels = conv_kernels_all[i] + 0.0000001
            # L2-normalize each kernel so the convolution computes cosine similarity.
            norm_factor = torch.sum(conv_kernels ** 2, [1, 2, 3], keepdim = True) ** 0.5
            conv_kernels = conv_kernels / norm_factor
            conv_result = F.conv2d(feature_map, conv_kernels, padding = self.patch_size // 2)
            if self.propagate_size != 1:
                if self.prop_kernels is None:
                    # Note: created once but not used below; the smoothing is done with avg_pool2d instead.
                    self.prop_kernels = torch.ones([conv_result.size(1), 1, self.propagate_size, self.propagate_size])
                    self.prop_kernels.requires_grad = False
                    self.prop_kernels = self.prop_kernels.to(conv_result.device)  # .cuda() in the original repo
                # Sum the similarity over a 3x3 neighborhood (avg_pool2d * 9).
                conv_result = F.avg_pool2d(conv_result, 3, 1, padding = 1) * 9
            # Softmax over all candidate locations yields the current attention score.
            attention_scores = F.softmax(conv_result, dim = 1)
            if self.att_scores_prev is not None:
                # Blend with the previous recurrence's score, weighted by the previous
                # mask and the learnable ratio (the knowledge-consistency step).
                attention_scores = (self.att_scores_prev[i:i + 1] * self.masks_prev[i:i + 1]
                                    + attention_scores * (torch.abs(self.ratio) + 1e-7)) \
                                   / (self.masks_prev[i:i + 1] + (torch.abs(self.ratio) + 1e-7))
            att_score.append(attention_scores)
            # Reconstruct the features as an attention-weighted sum of all locations.
            feature_map = F.conv_transpose2d(attention_scores, conv_kernels, stride = 1, padding = self.patch_size // 2)
            output_tensor.append(feature_map)
        # Cache scores and mask for the next recurrence.
        self.att_scores_prev = torch.cat(att_score, dim = 0).view(bz, h * w, h, w)
        self.masks_prev = masks.view(bz, 1, h, w)
        return torch.cat(output_tensor, dim = 0)
class AttentionModule(nn.Module):
    def __init__(self, inchannel, patch_size_list = [1], propagate_size_list = [3], stride_list = [1]):
        assert isinstance(patch_size_list, list), "patch_size should be a list containing scales, or you should use Contextual Attention to initialize your module"
        assert len(patch_size_list) == len(propagate_size_list) and len(propagate_size_list) == len(stride_list), "the input_lists should have same lengths"
        super(AttentionModule, self).__init__()
        self.att = KnowledgeConsistentAttention(patch_size_list[0], propagate_size_list[0], stride_list[0])
        self.num_of_modules = len(patch_size_list)
        # 1x1 convolution that fuses the attended features with the originals.
        self.combiner = nn.Conv2d(inchannel * 2, inchannel, kernel_size = 1)

    def forward(self, foreground, mask):
        outputs = self.att(foreground, mask)
        # Concatenate the attention output with the input, then project back.
        outputs = torch.cat([outputs, foreground], dim = 1)
        outputs = self.combiner(outputs)
        return outputs
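For orientation, a toy shape check of the module above (random inputs, untrained weights; it assumes the device-agnostic prop_kernels change above, so it also runs on CPU):

att = AttentionModule(inchannel = 64)
feat = torch.randn(2, 64, 32, 32)       # [B, C, H, W] feature map
mask = torch.ones(2, 1, 32, 32)          # fully valid mask
out = att(feat, mask)
print(out.shape)                         # torch.Size([2, 64, 32, 32])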
2.3 Loss function
Perceptual loss (computed on VGG-16 feature maps $\phi_i$, where $N_i$ is the number of elements in the $i$-th feature map):

$$\mathcal{L}_{\mathrm{perceptual}} = \sum_{i} \frac{1}{N_i} \bigl\lVert \phi_i(I_{\mathrm{pred}}) - \phi_i(I_{\mathrm{gt}}) \bigr\rVert_1$$

Style loss (an $\ell_1$ distance between Gram matrices of the same features):

$$\mathcal{L}_{\mathrm{style}} = \sum_{i} \frac{1}{C_i^2} \bigl\lVert G_i(I_{\mathrm{pred}}) - G_i(I_{\mathrm{gt}}) \bigr\rVert_1, \qquad G_i(I) = \frac{1}{N_i}\, \phi_i(I)^{\top} \phi_i(I)$$

Final loss: a weighted sum of the $\ell_1$ reconstruction losses on the valid and hole regions together with the two terms above [1]:

$$\mathcal{L}_{\mathrm{total}} = \lambda_{\mathrm{valid}} \mathcal{L}_{\mathrm{valid}} + \lambda_{\mathrm{hole}} \mathcal{L}_{\mathrm{hole}} + \lambda_{\mathrm{perc}} \mathcal{L}_{\mathrm{perceptual}} + \lambda_{\mathrm{style}} \mathcal{L}_{\mathrm{style}}$$
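A minimal PyTorch sketch of the perceptual and style terms, assuming VGG-16 features at relu1_2 / relu2_2 / relu3_3; the layer choice, the class name VGGFeatures, and the helpers are illustrative, not the authors' training code.

import torch
import torch.nn.functional as F
import torchvision.models as models

class VGGFeatures(torch.nn.Module):
    # Hypothetical frozen extractor: relu1_2, relu2_2, relu3_3 of VGG-16
    # (torchvision >= 0.13 weights API).
    def __init__(self):
        super().__init__()
        vgg = models.vgg16(weights = models.VGG16_Weights.DEFAULT).features.eval()
        self.slices = torch.nn.ModuleList([vgg[:4], vgg[4:9], vgg[9:16]])
        for p in self.parameters():
            p.requires_grad = False

    def forward(self, x):
        feats = []
        for s in self.slices:
            x = s(x)
            feats.append(x)
        return feats

def gram(f):
    # Gram matrix, normalized by the number of elements.
    b, c, h, w = f.size()
    f = f.view(b, c, h * w)
    return torch.bmm(f, f.transpose(1, 2)) / (c * h * w)

def perceptual_and_style_loss(extractor, pred, gt):
    l_perc, l_style = 0.0, 0.0
    for fp, fg in zip(extractor(pred), extractor(gt)):
        l_perc += F.l1_loss(fp, fg)
        l_style += F.l1_loss(gram(fp), gram(fg))
    return l_perc, l_style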
3. Discussion
I think there are two main novelties: a new attention mechanism, and a new plug-and-play recurrent feature reasoning module.
4. References
[1] Li, Jingyuan, et al. "Recurrent Feature Reasoning for Image Inpainting." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020.