SUN Hao1
CHEN Jin2
LEI Lin1
JI Kefeng1
KUANG Gangyao1
1. State Key Laboratory of Complex Electromagnetic Environment Effects on Electronics and Information System, National University of Defense Technology, Changsha 410073, China
2. Beijing Institute of Remote Sensing Information, Beijing 100192, China
Funds: The National Natural Science Foundation of China (61971426, 61601035)
More Information
Author biographies:
SUN Hao (1984–), male, from Sanyuan, Shaanxi, Ph.D., is an associate professor at the College of Electronic Science and Technology, National University of Defense Technology. His research interests include collaborative interpretation of multi-source images, adversarial machine learning, and causal representation learning.
CHEN Jin (1981–), male, from Liyang, Jiangsu, Ph.D., is an associate research fellow at the Beijing Institute of Remote Sensing Information. His research interest is intelligent interpretation of remote sensing data.
LEI Lin (1980–), female, from Hengyang, Hunan, Ph.D., is a professor at the College of Electronic Science and Technology, National University of Defense Technology. Her research interests include remote sensing image processing, image fusion, and target recognition.
JI Kefeng (1974–), male, from Changwu, Shaanxi, Ph.D., is a professor and doctoral supervisor at the College of Electronic Science and Technology, National University of Defense Technology. His research interests include SAR image interpretation, target detection and recognition, feature extraction, and SAR-AIS matching.
KUANG Gangyao (1966–), male, from Hengdong, Hunan, Ph.D., is a professor and doctoral supervisor at the CEMEE State Key Laboratory, College of Electronic Science and Technology, National University of Defense Technology. His research interests include intelligent interpretation of remote sensing imagery and SAR image target detection and recognition.
Corresponding author: SUN Hao, sunhao@nudt.edu.cn
Corresponding Editor: XU Feng
CLC number: TP391
Metrics
Article views: 802
HTML full-text views: 215
PDF downloads: 258
Citations: 0
Publication History
Received: 2021-04-14
Revised: 2021-05-21
Published online: 2021-06-07
Adversarial Robustness of Deep Convolutional Neural Network-based Image Recognition Models: A Review
Abstract
In recent years, deep recognition models represented by convolutional neural networks have achieved major breakthroughs, repeatedly advancing the state of the art in optical and SAR image scene classification, object detection, semantic segmentation, change detection, and other tasks. However, deep recognition models are fundamentally statistical learners: they depend on large-scale, high-quality training data and can offer only limited reliability guarantees. Deep convolutional neural network image recognition models are easily fooled by visually imperceptible adversarial perturbations, which poses serious risks to their wide deployment in security-sensitive fields such as healthcare, surveillance, autonomous driving, and military applications. This paper first analyzes, from an information-security perspective, the potential security risks of image recognition systems based on deep convolutional neural networks, focusing on the characteristics of poisoning and evasion attacks and the causes of adversarial vulnerability. It then gives a formal definition of adversarial robustness, establishes adversary models for both adversarial attack and defense, systematically reviews research progress on adversarial example attacks, active and passive adversarial defenses, and adversarial robustness evaluation, and analyzes representative methods through adversarial attack examples on SAR image target recognition. Finally, drawing on our group's research, it identifies open problems, providing a reference for improving the robustness of deep convolutional neural network image recognition models in open, dynamic, adversarial environments.
Keywords: Deep convolutional neural network /
SAR image recognition /
Information security /
Adversarial attacks and defense /
Robustness evaluation
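The abstract's central claim — that recognition models can be flipped by small, visually imperceptible perturbations — can be illustrated with a minimal sketch of the Fast Gradient Sign Method (FGSM), one of the evasion attacks this class of surveys covers. For self-containment the sketch uses a toy linear (logistic) classifier rather than a deep CNN; the weights `w`, `b`, the input `x`, and the budget `eps` below are illustrative assumptions, not values from the paper:

```python
# Minimal FGSM sketch: perturb the input in the direction that maximizes
# the loss, i.e. x_adv = x + eps * sign(dL/dx). Toy logistic model only;
# all numbers here are illustrative assumptions.
import numpy as np

def fgsm_attack(x, w, b, y, eps):
    """One FGSM step against a logistic classifier p = sigmoid(w.x + b)."""
    z = w @ x + b
    p = 1.0 / (1.0 + np.exp(-z))
    # Gradient of binary cross-entropy loss w.r.t. the input x is (p - y) * w
    grad_x = (p - y) * w
    # Step along the sign of the gradient, bounded by eps in L-infinity norm
    return x + eps * np.sign(grad_x)

def predict(x, w, b):
    return int(w @ x + b > 0)

w = np.array([1.0, -2.0, 0.5])   # fixed (hypothetical) model weights
b = 0.1
x = np.array([0.3, -0.2, 0.4])   # clean input, correctly classified as 1
y = 1                            # true label
x_adv = fgsm_attack(x, w, b, y, eps=0.5)
print(predict(x, w, b), predict(x_adv, w, b))  # → 1 0: the attack flips the label
```

In the white-box CNN setting the survey discusses, the same single gradient-sign step is applied to the image pixels (with the gradient obtained by backpropagation), and `eps` bounds the perturbation's L∞ norm so that it stays visually imperceptible.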
Full-text PDF download:
https://plugin.sowise.cn/viewpdf/198_280b3c3a-c15a-40ed-a720-7b70740df260_R21048