Author biographies:
Tianliang LIU: born in 1980, male, Ph.D., associate professor, Master's supervisor; research interests include image processing and computer vision
Qingwei QIAO: born in 1989, male, Master's student; research interests include image processing and multimedia communication
Junwei WAN: born in 1993, male, Master's student; research interests include image processing and multimedia communication
Xiubin DAI: born in 1980, male, Ph.D., associate professor, Master's supervisor; research interests include medical image reconstruction, image processing and pattern recognition
Jiebo LUO: born in 1968, male, Ph.D., professor, Ph.D. supervisor; research interests include image processing, computer vision, machine learning, data mining and social media
Corresponding author: Tianliang LIU, liutl@njupt.edu.cn
CLC number: TP391.41
Publication history
Received: 2017-11-27
Revised: 2018-07-26
Available online: 2018-08-02
Issue date: 2018-10-01
Human Action Recognition via Spatio-temporal Dual Network Flow and Visual Attention Fusion
Tianliang LIU1, Qingwei QIAO1,
Junwei WAN1,
Xiubin DAI1,
Jiebo LUO2
1. Jiangsu Provincial Key Laboratory of Image Processing and Image Communication, Nanjing University of Posts and Telecommunications, Nanjing 210003, China
2. Department of Computer Science, University of Rochester, Rochester, NY 14627, USA
Funds: The National Natural Science Foundation of China (61001152, 31200747, 61071091, 61071166, 61172118), The Natural Science Foundation of Jiangsu Province of China (BK2012437), The Natural Science Foundation of NJUPT (NY214037), China Scholarship Council
Abstract: Inspired by the visual perception mechanism of the human brain, an action recognition approach that fuses a dual spatio-temporal network stream with visual attention is proposed within a deep learning framework. First, optical flow features capturing human body motion are extracted frame by frame from the video using coarse-to-fine Lucas-Kanade flow estimation. Then, a GoogLeNet network fine-tuned from a pre-trained model convolves layer by layer and aggregates, respectively, the appearance images and the corresponding optical flow features within a selected time window. Next, multi-layer Long Short-Term Memory (LSTM) networks cross-recursively perceive these inputs to obtain spatio-temporal semantic feature sequences with high-level salient structure; the inter-dependent hidden states within the time window are decoded, yielding an attention-salient feature sequence from the temporal stream together with the spatial-stream visual feature descriptors and a per-frame label probability distribution. Subsequently, a temporal attention confidence for each frame with respect to the human action is computed using the relative entropy measure and fused with the action-category probability distributions from the spatial perception network stream. Finally, a softmax classifier identifies the human action category in the given video sequence. Experimental results show that the proposed approach achieves significantly higher classification accuracy than existing methods.
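The abstract does not spell out the fusion formula, so the following NumPy sketch illustrates one plausible reading of the attention-fusion step: each frame's attention confidence is taken as the relative entropy (KL divergence) of its temporal-stream label distribution from the uniform distribution (a peaked, confident frame diverges more and thus receives more weight), and the attention-weighted temporal prediction is averaged with the spatial-stream prediction. The function names, the uniform reference distribution, and the equal-weight stream averaging are all assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """Relative entropy D(p || q) between two discrete distributions."""
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    return float(np.sum(p * np.log(p / q)))

def fuse_streams(temporal_probs, spatial_probs):
    """Fuse per-frame temporal-stream label distributions (T x C) with a
    spatial-stream distribution (C,), weighting each frame by its relative
    entropy from the uniform distribution as an attention confidence."""
    T, C = temporal_probs.shape
    uniform = np.full(C, 1.0 / C)
    conf = np.array([kl_divergence(p, uniform) for p in temporal_probs])
    weights = conf / (conf.sum() + 1e-12)          # normalized per-frame attention
    temporal_agg = weights @ temporal_probs        # attention-weighted temporal prediction
    fused = 0.5 * (temporal_agg + spatial_probs)   # simple two-stream averaging (assumed)
    return fused / fused.sum()

# toy example: 3 frames, 4 action classes
tp = np.array([[0.70, 0.10, 0.10, 0.10],
               [0.25, 0.25, 0.25, 0.25],   # uninformative frame -> near-zero weight
               [0.60, 0.20, 0.10, 0.10]])
sp = np.array([0.50, 0.20, 0.20, 0.10])
fused = fuse_streams(tp, sp)
print(int(np.argmax(fused)))  # prints 0 (the class with the largest fused probability)
```

Note that the uniform-distribution frame contributes zero KL divergence and is effectively ignored, which matches the intuition of suppressing frames that carry no action evidence.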
Key words: Human action recognition/
Optical flow/
Spatio-temporal dual network flow/
Visual attention/
Convolutional Neural Network (CNN)/
Long Short-Term Memory (LSTM)
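The coarse-to-fine Lucas-Kanade estimation named in the abstract repeats a single least-squares solve across pyramid levels; the pyramid itself is omitted here, but the core single-level step can be sketched in NumPy as below. The function name, window size, and the toy ramp image are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def lucas_kanade(frame0, frame1, y, x, win=7):
    """Estimate the (vx, vy) motion at pixel (y, x) by solving the
    single-scale Lucas-Kanade least-squares system over a local window.
    (A coarse-to-fine method runs this at each pyramid level and warps.)"""
    # spatial gradients of the first frame and the temporal gradient
    Iy, Ix = np.gradient(frame0.astype(float))
    It = frame1.astype(float) - frame0.astype(float)
    h = win // 2
    sl = (slice(y - h, y + h + 1), slice(x - h, x + h + 1))
    # stack the brightness-constancy equations Ix*vx + Iy*vy = -It
    A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
    b = -It[sl].ravel()
    v, *_ = np.linalg.lstsq(A, b, rcond=None)  # least-squares flow vector
    return v  # (vx, vy)

# toy example: a horizontal intensity ramp shifted one pixel to the right
f0 = np.tile(np.arange(32, dtype=float), (32, 1))
f1 = np.roll(f0, 1, axis=1)
vx, vy = lucas_kanade(f0, f1, 16, 16)
print(round(float(vx), 2), round(float(vy), 2))  # prints 1.0 0.0
```

Because the ramp has no vertical gradient, the system is rank-deficient in vy; `np.linalg.lstsq` returns the minimum-norm solution, so vy comes out as 0 while the horizontal shift is recovered exactly.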
Full-text PDF download:
https://jeit.ac.cn/article/exportPdf?id=ae292eac-0748-4a00-9604-e9e5ce3607f9