Spatio-Temporal Attention Models for Grounded Video Captioning
(Submitted on 17 Oct 2016 (v1), last revised 18 Oct 2016 (this version, v2))
Automatic video captioning is challenging due to the complex interactions in dynamic real scenes. A comprehensive system would ultimately localize and track the objects, actions and interactions present in a video and generate a description that relies on temporal localization in order to ground the visual concepts. However, most existing automatic video captioning systems map directly from raw video data to a high-level textual description, bypassing localization and recognition, thus discarding information that is potentially valuable for content localization and generalization. In this work we present an automatic video captioning model that combines spatio-temporal attention and image classification by means of deep neural network structures based on long short-term memory. The resulting system is demonstrated to produce state-of-the-art results on the standard YouTube captioning benchmark while also offering the advantage of localizing the visual concepts (subjects, verbs, objects) over space and time, with no grounding supervision.
Submission history
From: Mihai Zanfir
[v1] Mon, 17 Oct 2016 08:05:12 GMT (6260kb,D)
[v2] Tue, 18 Oct 2016 08:27:23 GMT (6260kb,D)

This paper presents an automatic video captioning model that combines a spatio-temporal attention mechanism with deep neural networks. The model localizes the visual concepts in a video, such as subjects, actions, and objects, over space and time without requiring additional grounding supervision.
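To make the general idea concrete, below is a minimal, hypothetical sketch (in PyTorch) of an LSTM caption decoder that, at each word step, computes additive attention over spatio-temporal region features (T frames, R regions per frame). The module names, feature dimensions, and the exact attention form are illustrative assumptions for exposition, not the authors' implementation; the attention weights are what would provide the unsupervised spatial and temporal grounding of the generated words.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SpatioTemporalAttentionCaptioner(nn.Module):
        def __init__(self, vocab_size, feat_dim=2048, embed_dim=256, hidden_dim=512):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            # Additive attention: score each region feature jointly with the decoder state.
            self.att_feat = nn.Linear(feat_dim, hidden_dim, bias=False)
            self.att_hid = nn.Linear(hidden_dim, hidden_dim, bias=False)
            self.att_score = nn.Linear(hidden_dim, 1, bias=False)
            self.lstm = nn.LSTMCell(embed_dim + feat_dim, hidden_dim)
            self.out = nn.Linear(hidden_dim, vocab_size)

        def forward(self, feats, captions):
            # feats: (B, T, R, feat_dim) region features for T frames, R regions each
            # captions: (B, L) ground-truth word indices (teacher forcing)
            B, T, R, D = feats.shape
            flat = feats.view(B, T * R, D)               # flatten space and time
            h = feats.new_zeros(B, self.lstm.hidden_size)
            c = feats.new_zeros(B, self.lstm.hidden_size)
            logits, attentions = [], []
            for t in range(captions.size(1)):
                # Attention weights over all T*R spatio-temporal regions.
                scores = self.att_score(torch.tanh(
                    self.att_feat(flat) + self.att_hid(h).unsqueeze(1)))  # (B, T*R, 1)
                alpha = F.softmax(scores, dim=1)
                context = (alpha * flat).sum(dim=1)       # (B, feat_dim) attended context
                x = torch.cat([self.embed(captions[:, t]), context], dim=1)
                h, c = self.lstm(x, (h, c))
                logits.append(self.out(h))
                attentions.append(alpha.view(B, T, R))    # where this word "looks" in space-time
            return torch.stack(logits, dim=1), torch.stack(attentions, dim=1)

In such a sketch, the per-word attention maps (B, L, T, R) can be visualized to localize the subject, verb, and object of the generated sentence in space and time, even though no bounding-box or temporal-segment supervision is used during training.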
