End-to-end Concept Word Detection for Video Captioning, Retrieval, and Question Answering
(Submitted on 10 Oct 2016 (v1), last revised 13 Dec 2016 (this version, v2))
We propose a high-level concept word detector that can be integrated with any video-to-language model. It takes a video as input and generates a list of concept words that serve as useful semantic priors for language generation models. The proposed word detector has two important properties. First, it does not require any external knowledge sources for training. Second, it is trainable in an end-to-end manner jointly with any video-to-language model. To maximize the value of the detected words, we also develop a semantic attention mechanism that selectively focuses on the detected concept words and fuses them with the word encoding and decoding in the language model. To demonstrate that the proposed approach indeed improves the performance of multiple video-to-language tasks, we participate in four tasks of LSMDC 2016. Our approach achieves the best accuracy in three of them: fill-in-the-blank, multiple-choice test, and movie retrieval. We also attain comparable performance on the remaining task, movie description.
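The abstract describes a semantic attention mechanism that attends over the detected concept words and fuses the result into the language model's decoding step. As a rough illustration only, the sketch below shows one plausible way such a layer could look in PyTorch; the class name SemanticAttention, the additive scoring scheme, and all dimensions are assumptions made for this example and are not taken from the paper.

```python
import torch
import torch.nn as nn

class SemanticAttention(nn.Module):
    """Hypothetical semantic attention over detected concept words.

    Given a decoder hidden state and embeddings of the top-K detected
    concept words, it computes attention weights over the concepts and
    returns a fused vector to mix back into the decoding step.
    """

    def __init__(self, hidden_dim, embed_dim, attn_dim=256):
        super().__init__()
        self.query_proj = nn.Linear(hidden_dim, attn_dim)   # project decoder state
        self.key_proj = nn.Linear(embed_dim, attn_dim)       # project concept embeddings
        self.score = nn.Linear(attn_dim, 1)                  # additive attention score
        self.fuse = nn.Linear(hidden_dim + embed_dim, hidden_dim)

    def forward(self, hidden, concept_embeds):
        # hidden:         (batch, hidden_dim)   current decoder state
        # concept_embeds: (batch, K, embed_dim) embeddings of K detected words
        q = self.query_proj(hidden).unsqueeze(1)              # (batch, 1, attn_dim)
        k = self.key_proj(concept_embeds)                     # (batch, K, attn_dim)
        scores = self.score(torch.tanh(q + k)).squeeze(-1)    # (batch, K)
        weights = torch.softmax(scores, dim=-1)               # attention over concepts
        context = (weights.unsqueeze(-1) * concept_embeds).sum(dim=1)  # (batch, embed_dim)
        fused = torch.tanh(self.fuse(torch.cat([hidden, context], dim=-1)))
        return fused, weights
```

In a captioning decoder, such a layer would typically be called once per generated word, so that the attention weights can shift across the detected concepts as the sentence unfolds.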
Submission history
From: Jongwook Choi [view email]
[v1] Mon, 10 Oct 2016 15:03:15 GMT (2876kb,D)
[v2] Tue, 13 Dec 2016 14:27:20 GMT (3934kb,D)

Summary: The paper proposes a high-level concept word detector that can be integrated with any video-to-language model; it takes a video as input and generates a list of concept words as useful semantic priors for language generation models. The detector requires no external knowledge sources for training and can be trained jointly with any video-to-language model in an end-to-end manner.




