Spatio-Temporal Attention Models for Grounded Video Captioning

This paper presents an automatic video captioning model that combines a spatio-temporal attention mechanism with deep neural networks. The model localizes visual concepts in a video, such as subjects, actions, and objects, over space and time without requiring additional grounding supervision.



Automatic video captioning is challenging due to the complex interactions in dynamic real-world scenes. A comprehensive system would ultimately localize and track the objects, actions and interactions present in a video and generate a description that relies on temporal localization in order to ground the visual concepts. However, most existing automatic video captioning systems map raw video data to a high-level textual description, bypassing localization and recognition, thus discarding potentially valuable information for content localization and generalization. In this work we present an automatic video captioning model that combines spatio-temporal attention and image classification by means of deep neural network structures based on long short-term memory. The resulting system is demonstrated to produce state-of-the-art results on the standard YouTube captioning benchmark while also offering the advantage of localizing the visual concepts (subjects, verbs, objects), with no grounding supervision, over space and time.
Comments: To appear in Asian Conference on Computer Vision (ACCV), Taipei, Taiwan, 2016
Subjects: Computer Vision and Pattern Recognition (cs.CV)
Cite as: arXiv:1610.04997 [cs.CV]
  (or arXiv:1610.04997v2 [cs.CV] for this version)
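
As a rough illustration of the kind of model the abstract describes, below is a minimal sketch, assuming a PyTorch implementation, of a caption decoder that attends over a grid of per-frame CNN features in both space and time before each LSTM step. The class name SpatioTemporalAttentionCaptioner, all layer sizes, and the single-layer additive attention form are illustrative assumptions, not the architecture from the paper; the point is only that the attention weights over (frame, grid-cell) locations are the quantity that can provide spatio-temporal grounding without extra supervision.

# Minimal sketch of spatio-temporal attention feeding an LSTM caption decoder.
# All module names, dimensions, and the attention form are illustrative
# assumptions, not the architecture from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpatioTemporalAttentionCaptioner(nn.Module):
    def __init__(self, feat_dim=512, hidden_dim=512, vocab_size=10000, embed_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Attention scores are computed from the decoder state and each
        # spatio-temporal feature location (frame t, spatial cell s).
        self.att_feat = nn.Linear(feat_dim, hidden_dim)
        self.att_hidden = nn.Linear(hidden_dim, hidden_dim)
        self.att_score = nn.Linear(hidden_dim, 1)
        self.lstm = nn.LSTMCell(embed_dim + feat_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def attend(self, feats, h):
        # feats: (B, T*S, D) flattened frame x spatial-grid features
        # h:     (B, H) current decoder hidden state
        scores = self.att_score(torch.tanh(self.att_feat(feats) + self.att_hidden(h).unsqueeze(1)))
        alpha = F.softmax(scores, dim=1)        # (B, T*S, 1) attention over space-time
        context = (alpha * feats).sum(dim=1)    # (B, D) attended video context
        return context, alpha.squeeze(-1)

    def forward(self, feats, captions):
        # feats:    (B, T, S, D) per-frame CNN feature grids
        # captions: (B, L) token ids, used with teacher forcing during training
        B, T, S, D = feats.shape
        flat = feats.view(B, T * S, D)
        h = feats.new_zeros(B, self.lstm.hidden_size)
        c = feats.new_zeros(B, self.lstm.hidden_size)
        logits, alphas = [], []
        for t in range(captions.size(1)):
            context, alpha = self.attend(flat, h)
            h, c = self.lstm(torch.cat([self.embed(captions[:, t]), context], dim=1), (h, c))
            logits.append(self.out(h))
            # Attention maps per word are what would ground concepts in space and time.
            alphas.append(alpha.view(B, T, S))
        return torch.stack(logits, dim=1), torch.stack(alphas, dim=1)


if __name__ == "__main__":
    model = SpatioTemporalAttentionCaptioner()
    feats = torch.randn(2, 8, 49, 512)       # 2 clips, 8 frames, 7x7 feature grid, 512-d features
    caps = torch.randint(0, 10000, (2, 12))  # 12-token captions
    logits, alphas = model(feats, caps)
    print(logits.shape, alphas.shape)        # (2, 12, 10000) (2, 12, 8, 49)

In such a sketch, inspecting alphas for the time steps that emit the subject, verb, or object words is one plausible way to read off the spatio-temporal localization the abstract refers to.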

Submission history

From: Mihai Zanfir
[v1] Mon, 17 Oct 2016 08:05:12 GMT (6260kb,D)
[v2] Tue, 18 Oct 2016 08:27:23 GMT (6260kb,D)