ICCV 2017 UCT: Learning Unified Convolutional Networks for Real-time Visual Tracking (Paper Notes)

The paper proposes the Unified Convolutional Tracker (UCT), which learns convolutional features and performs the tracking process end to end, improving tracking accuracy while maintaining real-time speed. It tracks generic objects and achieves leading results on multiple benchmark datasets.



《UCT: Learning Unified Convolutional Networks for Real-time Visual Tracking》

Abstract

Convolutional neural network (CNN) based tracking approaches have shown favorable performance on recent benchmarks.


Nonetheless, the chosen CNN features are usually pre-trained on a different task, and the individual components of the tracking system are learned separately, so the achieved tracking performance may be suboptimal.


Besides, most of these trackers are not designed for real-time applications because of their time-consuming feature extraction and complex optimization details.


In this paper, we propose an end-to-end framework that learns the convolutional features and performs the tracking process simultaneously, namely a unified convolutional tracker (UCT).


Specifically, UCT treats both the feature extractor and the tracking process (ridge regression) as convolution operations and trains them jointly, so the learned CNN features are tightly coupled to the tracking process.
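The ridge-regression step common to correlation-filter style trackers has a closed form that can be evaluated efficiently in the Fourier domain, which is what makes it equivalent to a (circular) convolution. The sketch below is a generic single-channel illustration of that idea, not the paper's exact network formulation; `train_ridge_filter` and `respond` are hypothetical helper names.

```python
import numpy as np

def train_ridge_filter(x, y, lam=1e-2):
    """Closed-form ridge regression in the Fourier domain.

    Solves w = argmin ||w (*) x - y||^2 + lam * ||w||^2, where (*) is
    circular convolution, by element-wise division of spectra.
    """
    X = np.fft.fft2(x)
    Y = np.fft.fft2(y)
    # Regularized Wiener-style solution: W = conj(X) Y / (|X|^2 + lam)
    return (np.conj(X) * Y) / (np.conj(X) * X + lam)

def respond(W, z):
    """Apply the learned filter to a new patch z; peak marks the target."""
    Z = np.fft.fft2(z)
    return np.real(np.fft.ifft2(W * Z))
```

In a tracker, `y` would be a Gaussian label map centered on the target, and the peak of `respond(W, z)` on the next frame gives the new target position. UCT's contribution is to express this step as a convolutional layer so it can be trained jointly with the feature extractor.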


In online tracking, an efficient updating method is proposed by introducing a peak-versus-noise ratio (PNR) criterion, and scale changes are handled efficiently by incorporating a scale branch into the network.


The proposed approach achieves superior tracking performance while maintaining real-time speed.


The standard UCT and UCT-Lite track generic objects at 41 FPS and 154 FPS, respectively, without further optimization.


Experiments are performed on four challenging benchmark tracking datasets: OTB2013, OTB2015, VOT2014 and VOT2015. Compared with other real-time trackers, our method achieves state-of-the-art results on these benchmarks.
