Details
About the authors: Yanjing SUN: Male, born in 1977, Professor and Ph.D. supervisor. His research interests include wireless sensor networks, video object tracking, artificial intelligence, and cyber-physical systems.
Sainan WANG: Female, born in 1992, M.S. candidate. Her research interest is video object tracking.
Yunkai SHI: Male, born in 1994, M.S. candidate. His research interest is artificial intelligence.
Xiao YUN: Female, born in 1986, Lecturer. Her research interests include video object tracking and artificial intelligence.
Wenjuan SHI: Female, born in 1981, Ph.D. candidate. Her research interest is video quality assessment.
Corresponding author: Yanjing SUN, yjsun@cumt.edu.cn
CLC number: TP391.4
Publication history
Received: 2017-12-04
Revised: 2018-05-02
Available online: 2018-07-12
Published in print: 2018-09-01
Visual Tracking Algorithm Based on Global Context and Feature Dimensionality Reduction
Yanjing SUN, Sainan WANG, Yunkai SHI, Xiao YUN, Wenjuan SHI
School of Information and Control Engineering, China University of Mining and Technology, Xuzhou 221116, China
Funds: The Natural Science Foundation of Jiangsu Province (BK20150204), the National Key Research and Development Program of China (2016YFC0801403), the National Natural Science Foundation of China (51504214, 51504255, 51734009, 61771417), and the Key Research and Development Program of Jiangsu Province (BE2015040)
Abstract: Trackers based on correlation filters are easily disturbed by deformation, motion blur, and background clutter, which can result in tracking failure. To address these problems, a visual tracking algorithm based on global context and feature dimensionality reduction is proposed. First, image patches sampled uniformly around the target are extracted as negative samples for the classifier, so that similar background patches near the target are suppressed. Then, an update strategy based on Principal Component Analysis (PCA) is proposed: a projection matrix is constructed to compress the dimensionality of the HOG features, reducing their redundancy while the classifier is updated. Finally, color features are added to represent the moving target, and the two feature channels are adaptively fused according to the strength of their responses to the system state. Experiments on a standard online tracking benchmark show that the proposed method performs favorably in both accuracy and robustness compared with state-of-the-art trackers such as Staple and KCF. Under deformation, the proposed method outperforms the Staple and KCF trackers by 8.3% and 13.1%, respectively, in distance precision.
Key words: Visual tracking; Global context information; Feature dimensionality reduction; Adaptive fusion
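The abstract describes three components: global-context negative samples, PCA-based compression of the HOG features, and adaptive fusion of the HOG and color responses. The snippet below is a minimal NumPy sketch of the last two ideas only, not the authors' implementation; the function names, the choice of `n_components`, and the peak-strength fusion weights are illustrative assumptions.

```python
import numpy as np

def pca_projection(feature_samples, n_components=18):
    """Build a PCA projection matrix from stacked HOG descriptors.

    feature_samples: (num_samples, hog_dim) array of HOG vectors.
    Returns a (hog_dim, n_components) matrix whose columns are the
    leading principal directions.
    """
    centered = feature_samples - feature_samples.mean(axis=0, keepdims=True)
    # Eigen-decomposition of the sample covariance; keep the top components.
    cov = centered.T @ centered / max(len(centered) - 1, 1)
    eigvals, eigvecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
    return eigvecs[:, ::-1][:, :n_components]     # top n_components directions

def fuse_responses(resp_hog, resp_color):
    """Adaptively fuse two correlation response maps.

    The weights here are proportional to each map's peak value, a simple
    stand-in for the paper's response-strength based fusion.
    """
    peak_hog, peak_color = resp_hog.max(), resp_color.max()
    w_hog = peak_hog / (peak_hog + peak_color + 1e-12)
    return w_hog * resp_hog + (1.0 - w_hog) * resp_color

# Toy usage with random stand-ins for HOG samples and response maps.
rng = np.random.default_rng(0)
hog_samples = rng.normal(size=(200, 31))   # 200 cells, 31-dim HOG descriptors
P = pca_projection(hog_samples)            # (31, 18) projection matrix
compressed = hog_samples @ P               # dimensionality-reduced features
fused = fuse_responses(rng.random((50, 50)), rng.random((50, 50)))
print(compressed.shape, fused.shape)
```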
Full-text PDF download:
https://jeit.ac.cn/article/exportPdf?id=2e71bba5-9797-40b8-b548-e0e47444f8f3