
A Perspective-independent Method for Behavior Recognition in Depth Video via Temporal-spatial Correlating


Peiliang WU1, 2, 3,
Xiao YANG1,
Bingyi MAO1, 3,
Lingfu KONG1, 3,
Zengguang HOU2
1. School of Information Science and Technology, Yanshan University, Qinhuangdao 066004, China
2. State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
3. The Key Laboratory for Computer Virtual Technology and System Integration of Hebei Province, Qinhuangdao 066004, China
Funding: The National Natural Science Foundation of China (61305113), The Natural Science Foundation of Hebei Province (F2016203358), China Postdoctoral Science Foundation (2018M631620), The Doctoral Fund of Yanshan University (BL18007)

Details
Author biographies: Peiliang WU: male, born in 1981, associate professor; research interests include behavior recognition and learning for home service robots and affordance cognition.
Xiao YANG: male, born in 1993, master's student; research interest is behavior recognition for home service robots.
Bingyi MAO: male, born in 1964, associate research fellow; research interest is home service robots.
Lingfu KONG: male, born in 1957, professor; research interests include intelligent robot systems and intelligent information processing.
Zengguang HOU: male, born in 1969, research fellow; research interests include robotics and intelligent systems, rehabilitation robots, and minimally invasive interventional surgical robots.
Corresponding author: Bingyi MAO, ysdxmby@163.com
CLC number: TP242.6+2

Publication history

Received: 2018-05-21
Revised: 2018-12-04
Available online: 2018-12-14
Published: 2019-04-01



Abstract: Current behavior recognition methods achieve low accuracy when actions are observed from different viewpoints, so this paper proposes a perspective-independent temporal-spatial correlation method for behavior recognition in depth video. First, the fully connected layer of a deep Convolutional Neural Network (CNN) is used to map human postures seen from different viewpoints into a viewpoint-independent high-dimensional space, building a Human Posture Model (HPM) of the depth behavior video in the spatial domain. Second, to exploit the temporal-spatial correlation between frames of the video sequence, a Rank Pooling (RP) function is applied piecewise to the activation time series of each neuron, encoding the temporal sub-sequences of the video. The Fourier Temporal Pyramid (FTP) is then applied to each pooled time series, and the results are concatenated to form the final spatio-temporal feature representation. Finally, behavior recognition and classification are evaluated with different methods on several datasets. Experimental results show that the proposed method (HPM+RP+FTP) improves recognition accuracy on depth video across viewpoints, exceeding the best existing method by 18% on the UWA3DII dataset. The method also generalizes well, achieving an accuracy of 82.5% on the MSR Daily Activity3D dataset.
Keywords: Video behavior recognition / Depth video / Perspective-independent / Convolutional Neural Network (CNN) / Temporal-spatial correlation
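
As a concrete illustration of the temporal encoding described in the abstract, the sketch below applies rank pooling piecewise to a sequence of fully connected CNN activations. It is a minimal approximation under stated assumptions: rank pooling is realised here as a per-neuron least-squares slope fitted against the frame index over the running mean of the activations (a common simplification, not necessarily the paper's exact formulation), and the segment count, feature dimension, and the `fc_activations` array are hypothetical rather than the paper's reported settings.

```python
import numpy as np

def rank_pool_per_neuron(activations):
    """Approximate rank pooling of each neuron's activation time series.

    activations: array of shape (T, D), one row per frame.  For every
    neuron d a scalar w_d is fitted by least squares so that
    w_d * m[t, d] increases with the frame index t, where m is the
    running mean of the activations; the vector of slopes summarises
    the temporal evolution of the segment.
    """
    T, _ = activations.shape
    running_mean = np.cumsum(activations, axis=0) / np.arange(1, T + 1)[:, None]
    t = np.arange(1, T + 1, dtype=np.float64)[:, None]
    # Closed-form 1-D least-squares slope, computed per neuron.
    return (running_mean * t).sum(axis=0) / ((running_mean ** 2).sum(axis=0) + 1e-12)

def piecewise_rank_pool(activations, n_segments=8):
    """Rank-pool equal temporal segments, giving a shorter pooled series."""
    segments = np.array_split(activations, n_segments, axis=0)
    return np.stack([rank_pool_per_neuron(seg) for seg in segments])

# Example: 64 frames of hypothetical 4096-D fully connected activations.
fc_activations = np.random.rand(64, 4096)
pooled_series = piecewise_rank_pool(fc_activations)  # shape (8, 4096)
```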
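
The final spatio-temporal representation can then be sketched with a standard Fourier Temporal Pyramid, assuming the usual construction: recursive binary partitioning in time, keeping the magnitudes of a few low-frequency FFT coefficients per segment, and concatenating all segment descriptors. The pyramid depth, number of coefficients, and the shape of `pooled_series` are illustrative assumptions rather than the paper's reported settings; in the described pipeline the resulting vector would be passed on to the classification stage.

```python
import numpy as np

def fourier_temporal_pyramid(series, levels=3, n_coeffs=4):
    """Fourier Temporal Pyramid over a (T, D) pooled time series.

    At pyramid level l the series is split into 2**l equal segments;
    for each segment and each feature dimension the magnitudes of the
    first `n_coeffs` low-frequency FFT coefficients are kept.  All
    segment descriptors are concatenated into one feature vector.
    """
    _, D = series.shape
    blocks = []
    for level in range(levels):
        for segment in np.array_split(series, 2 ** level, axis=0):
            spectrum = np.abs(np.fft.rfft(segment, axis=0))   # (F, D) magnitudes
            coeffs = np.zeros((n_coeffs, D))                  # pad short segments
            k = min(n_coeffs, spectrum.shape[0])
            coeffs[:k] = spectrum[:k]
            blocks.append(coeffs.ravel())
    return np.concatenate(blocks)

# Example: FTP of a hypothetical pooled series with 8 steps and 128 dimensions.
pooled_series = np.random.rand(8, 128)
ftp_feature = fourier_temporal_pyramid(pooled_series)  # (1 + 2 + 4) * 4 * 128 values
```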



PDF full-text download link:

https://jeit.ac.cn/article/exportPdf?id=429c6a4f-abfc-4e63-bdc5-991eecb284f1