

Details
About the authors:
QIAN Yaguan: male, born in 1976, associate professor; research interest: artificial intelligence security
ZHANG Ximin: female, born in 1996, M.S. candidate; research interest: adversarial machine learning
WANG Bin: male, born in 1978, research professor; research interest: network and information security
GU Zhaoquan: male, born in 1989, professor; research interest: artificial intelligence security
LI Wei: female, born in 1978, associate professor; research interest: computer vision
YUN Bensheng: male, born in 1980, associate professor; research interest: data mining
Corresponding author: Bin WANG, 32874546@qq.com
CLC number: TN915.08; TP309.2

Publication history

Received: 2020-08-06
Revised: 2021-08-20
Published online: 2021-09-16
Issue date: 2021-11-23

Adversarial Training Defense Based on Second-order Adversarial Examples

Yaguan QIAN1,
Ximin ZHANG1,
Bin WANG2,
Zhaoquan GU3,
Wei LI1,
Bensheng YUN1
1. School of Science/School of Big-data Science, Zhejiang University of Science and Technology, Hangzhou 310023, China
2. Network and Information Security Laboratory, Hangzhou Hikvision Digital Technology Co., Ltd., Hangzhou 310052, China
3. Cyberspace Institute of Advanced Technology (CIAT), Guangzhou University, Guangzhou 510006, China
Funds: The National Key Research and Development Program of China (2018YFB2100400), the National Natural Science Foundation of China (61902082)


Abstract: Although Deep Neural Networks (DNNs) achieve high accuracy in image recognition, they are significantly vulnerable to adversarial examples. Adversarial training is one of the effective empirical defenses against adversarial examples. Generating more powerful adversarial examples better solves the inner maximization problem of adversarial training, which is the key to improving its effectiveness. In this paper, to address the inner maximization problem, an adversarial training method based on second-order adversarial examples is proposed: stronger adversarial examples are generated through a quadratic polynomial approximation of the loss in a small input neighborhood. Theoretical analysis shows that second-order adversarial examples are stronger than first-order ones. Experiments on the MNIST and CIFAR10 datasets show that second-order adversarial examples achieve a higher attack success rate and better concealment. Compared with PGD adversarial training, adversarial training based on second-order adversarial examples is robust to the current typical adversarial attacks.
Key words: Adversarial examples; Adversarial training; Second-order Taylor expansion
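The abstract describes maximizing a second-order Taylor expansion of the loss, L(x + d) ≈ L(x) + gᵀd + ½ dᵀHd, within a small input neighborhood. The sketch below illustrates that general idea only; it is not the authors' implementation. The toy quadratic loss, the finite-difference derivatives, and the projected-ascent parameters are all illustrative assumptions; a real attack would use automatic differentiation on a network's loss.

```python
# Sketch: craft a perturbation d that maximizes the quadratic (second-order
# Taylor) model of a loss L around x, subject to an L-infinity bound:
#   L(x + d) ~ L(x) + g.d + 0.5 * d'Hd,   ||d||_inf <= eps

def grad(L, x, h=1e-5):
    """Central finite-difference gradient of L at x."""
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((L(xp) - L(xm)) / (2 * h))
    return g

def hessian(L, x, h=1e-4):
    """Finite-difference Hessian: differentiate each gradient component."""
    n = len(x)
    H = []
    for i in range(n):
        gi = lambda y, i=i: grad(L, y)[i]  # i-th gradient component
        H.append(grad(gi, x, h))
    return H

def second_order_attack(L, x, eps=0.1, steps=20, lr=0.05):
    """Projected gradient ascent on the quadratic model of L around x."""
    g = grad(L, x)
    H = hessian(L, x)
    n = len(x)
    d = [0.0] * n
    for _ in range(steps):
        # Gradient of the quadratic model at d is g + H d.
        md = [g[i] + sum(H[i][j] * d[j] for j in range(n)) for i in range(n)]
        # Ascent step, then clip back into the eps-ball (projection).
        d = [max(-eps, min(eps, d[i] + lr * md[i])) for i in range(n)]
    return [x[i] + d[i] for i in range(n)]

# Toy loss with curvature, standing in for a network's loss surface.
L = lambda x: x[0] ** 2 + 2 * x[0] * x[1] + 3 * x[1] ** 2
x_adv = second_order_attack(L, [1.0, -0.5])
```

For a real DNN, materializing the full Hessian is intractable; second-order methods typically rely on Hessian-vector products (e.g., via double backpropagation) instead of explicit Hessians.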



Full-text PDF download:

https://jeit.ac.cn/article/exportPdf?id=70550882-c8e9-434b-b6aa-6e54989de102