In our approach, we leverage guidance captions as associated natural-language text to steer visual attention. Our image captioning network proceeds in the following steps:
1. Given an input image, a similar image and its caption are retrieved from the training set.
2. The similar image and its caption are transformed into separate multi-dimensional feature vectors, which are used to compute an attention map.
3. In the attention map, the regions relevant to the guidance caption are highlighted while irrelevant regions are suppressed. The decoder generates an output caption based on a weighted sum of the image feature vectors, where the weights are determined by the attention map.
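The attention step above can be sketched as follows. This is a minimal illustration, not the paper's exact architecture: the function and variable names are hypothetical, the relevance score is a simple dot product, and the guidance caption is assumed to already be embedded into the same feature space as the image regions.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over the last axis
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def guided_attention(region_feats, guidance_feat):
    """Attention map from a guidance-caption embedding over image regions.

    region_feats:  (R, D) array, one feature vector per image region
    guidance_feat: (D,)   embedding of the guidance caption
    Returns the attention weights (R,) and the context vector (D,).
    """
    scores = region_feats @ guidance_feat  # relevance of each region to the caption
    alpha = softmax(scores)                # highlight relevant, suppress irrelevant regions
    context = alpha @ region_feats         # weighted sum fed to the decoder
    return alpha, context

# toy example: 4 regions, 8-dimensional features
rng = np.random.default_rng(0)
regions = rng.standard_normal((4, 8))
guide = rng.standard_normal(8)
alpha, ctx = guided_attention(regions, guide)
```

The softmax ensures the weights form a distribution over regions, so the context vector is a convex combination of region features dominated by the regions most aligned with the guidance caption.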
The ground-truth caption would be the ideal guidance caption, but it is unavailable at test time, so we instead obtain the guidance caption by sampling from retrieved candidates. Similar images tend to share salient regions and descriptions, which makes their captions useful guidance.
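A simple way to obtain a guidance caption at test time is to retrieve the most similar training image and reuse its caption. The sketch below uses cosine similarity with a hard argmax for clarity; the names are illustrative, and the sampling strategy mentioned above would draw stochastically from the top-ranked candidates instead.

```python
import numpy as np

def retrieve_guidance_caption(query_feat, train_feats, train_captions):
    """Return the caption of the training image most similar to the query.

    query_feat:     (D,)   global feature of the test image
    train_feats:    (N, D) global features of training images
    train_captions: list of N caption strings
    """
    q = query_feat / np.linalg.norm(query_feat)
    t = train_feats / np.linalg.norm(train_feats, axis=1, keepdims=True)
    idx = int(np.argmax(t @ q))  # cosine similarity; sampling could replace argmax
    return train_captions[idx]

# toy example with 3 training images and 2-dimensional features
feats = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
caps = ["a dog on grass", "a red car", "a dog running"]
result = retrieve_guidance_caption(np.array([0.9, 0.8]), feats, caps)
print(result)  # → "a dog running"
```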