Multimodal Compact Bilinear Pooling for Visual Question Answering and Visual Grounding
(Submitted on 6 Jun 2016 (v1), last revised 24 Sep 2016 (this version, v3))
Modeling textual or visual information with vector representations trained from large language or visual datasets has been successfully explored in recent years. However, tasks such as visual question answering require combining these vector representations with each other. Approaches to multimodal pooling include element-wise product or sum, as well as concatenation of the visual and textual representations. We hypothesize that these methods are not as expressive as an outer product of the visual and textual vectors. As the outer product is typically infeasible due to its high dimensionality, we instead propose utilizing Multimodal Compact Bilinear pooling (MCB) to efficiently and expressively combine multimodal features. We extensively evaluate MCB on the visual question answering and grounding tasks. We consistently show the benefit of MCB over ablations without MCB. For visual question answering, we present an architecture which uses MCB twice, once for predicting attention over spatial features and again to combine the attended representation with the question representation. This model outperforms the state-of-the-art on the Visual7W dataset and the VQA challenge.
Submission history
From: Marcus Rohrbach
[v1] Mon, 6 Jun 2016 17:59:56 GMT (2194kb,D)
[v2] Thu, 23 Jun 2016 19:52:41 GMT (3358kb,D)
[v3] Sat, 24 Sep 2016 01:58:59 GMT (3443kb,D)

This paper proposes Multimodal Compact Bilinear pooling (MCB), a method that effectively combines visual and textual information, particularly for visual question answering. By combining the visual and textual vectors in the manner of an outer product, it overcomes the limited expressiveness of conventional pooling methods. Experiments show that models using MCB achieve significantly better performance on visual question answering and visual grounding tasks.
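The compact bilinear trick at the heart of MCB can be illustrated with a short sketch: each modality's vector is projected with Count Sketch, and the sketch of the outer product is recovered as a circular convolution of the two sketches, computed element-wise in the FFT domain. The following NumPy snippet is a minimal illustration of that idea, not the paper's implementation; the output dimension `d=1024`, the random seed, and the input sizes are illustrative assumptions (the paper uses much larger `d`, up to 16,000).

```python
import numpy as np

def count_sketch(x, h, s, d):
    """Count Sketch projection of x to d dims.
    h: random bucket index per input dim; s: random sign per input dim."""
    y = np.zeros(d)
    np.add.at(y, h, s * x)  # accumulate signed entries into hashed buckets
    return y

def mcb(v, t, d=1024, seed=0):
    """Toy Multimodal Compact Bilinear pooling of visual v and textual t.
    Approximates the (flattened) outer product v t^T projected to d dims:
    the Count Sketch of an outer product equals the circular convolution
    of the individual sketches, which is an element-wise product in the
    frequency domain."""
    rng = np.random.default_rng(seed)  # illustrative fixed seed
    hv = rng.integers(0, d, v.size); sv = rng.choice([-1, 1], v.size)
    ht = rng.integers(0, d, t.size); st = rng.choice([-1, 1], t.size)
    pv = np.fft.rfft(count_sketch(v, hv, sv, d))
    pt = np.fft.rfft(count_sketch(t, ht, st, d))
    return np.fft.irfft(pv * pt, n=d)  # inverse FFT of element-wise product

# Example: combine a 2048-d visual feature with a 300-d text embedding.
phi = mcb(np.random.rand(2048), np.random.rand(300))
print(phi.shape)  # (1024,)
```

The appeal of this construction is that the d-dimensional output can stand in for a 2048 x 300 outer product (over 600k dimensions) at a small fraction of the memory and compute cost.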