Multimodal Compact Bilinear Pooling for Multimodal Neural Machine Translation
(Submitted on 23 Mar 2017)
In state-of-the-art Neural Machine Translation, an attention mechanism is used during decoding to enhance the translation. At every step, the decoder uses this mechanism to focus on different parts of the source sentence to gather the most useful information before outputting its target word. Recently, the effectiveness of the attention mechanism has also been explored for multimodal tasks, where it becomes possible to focus both on sentence parts and image regions. Approaches to pool two modalities usually include element-wise product, sum or concatenation. In this paper, we evaluate the more advanced Multimodal Compact Bilinear pooling method, which takes the outer product of two vectors to combine the attention features for the two modalities. This has been previously investigated for visual question answering. We try out this approach for multimodal image caption translation and show improvements compared to basic combination methods.
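The outer product of two attention feature vectors is too large to use directly, so Multimodal Compact Bilinear pooling approximates it by convolving Count Sketch projections of the two vectors, computed efficiently in the FFT domain. The following is a minimal NumPy sketch of that idea; the function names, the sketch dimension `d`, and the toy inputs are illustrative, not the paper's implementation.

```python
import numpy as np

def count_sketch(v, h, s, d):
    """Project vector v into a d-dimensional Count Sketch using
    index hashes h (in [0, d)) and random signs s (+/-1)."""
    out = np.zeros(d)
    np.add.at(out, h, s * v)  # accumulate signed entries into hashed bins
    return out

def mcb_pool(x, y, d=16, seed=0):
    """Compact bilinear pooling of x and y: the Count Sketch of the
    outer product x (x) y equals the circular convolution of the two
    individual sketches, which we compute via FFT."""
    rng = np.random.default_rng(seed)
    hx = rng.integers(0, d, size=x.shape[0])
    sx = rng.choice([-1.0, 1.0], size=x.shape[0])
    hy = rng.integers(0, d, size=y.shape[0])
    sy = rng.choice([-1.0, 1.0], size=y.shape[0])
    cx = count_sketch(x, hx, sx, d)
    cy = count_sketch(y, hy, sy, d)
    # Circular convolution in the frequency domain.
    return np.real(np.fft.ifft(np.fft.fft(cx) * np.fft.fft(cy)))
```

Because every step (sketching, FFT, inverse FFT) is linear in each input, the pooled feature is bilinear in `x` and `y`, mirroring the outer product it approximates, while the output stays `d`-dimensional rather than `len(x) * len(y)`-dimensional.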
Submission history
From: Jean-Benoit Delbrouck [v1] Thu, 23 Mar 2017 14:20:52 GMT (135kb,D)
This paper proposes using Multimodal Compact Bilinear pooling to enhance the attention mechanism in Neural Machine Translation. The method combines visual and textual features via an outer product and achieves clear improvements on the image caption translation task.