Dr. Zhanxing Zhu (朱占星): Adversarial Training for Deep Learning: A General Framework for Improving Robustness and Generalization




Academy of Mathematics and Systems Science, CAS
Colloquia & Seminars

Speaker: Dr. Zhanxing Zhu (朱占星), School of Mathematical Sciences, Peking University
Inviter: Prof. Pingbing Ming (明平兵)
Title:
Adversarial Training for Deep Learning: A General Framework for Improving Robustness and Generalization
Time & Venue:
2019.09.19 10:00-11:00 N202
Abstract:
Deep learning has achieved tremendous success in various application areas, such as computer vision, natural language processing, and game playing (AlphaGo). Unfortunately, recent works show that an adversary can fool deep learning models into producing incorrect predictions by maliciously manipulating the inputs. The manipulated samples are called adversarial examples. This vulnerability severely hinders the deployment of deep learning, particularly in safety-critical applications. In this talk, I will first introduce various approaches for constructing adversarial examples. I will then present a framework, called adversarial training, for improving the robustness of deep networks against adversarial examples, and discuss how to accelerate the training. Finally, I will show that this adversarial learning framework can be extended into an effective regularization strategy that improves generalization in semi-supervised learning.
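The abstract itself contains no code, but the two ideas it names can be illustrated with a minimal sketch. The PyTorch snippet below constructs adversarial examples with the fast gradient sign method (one common attack) and then uses them as the training inputs, which is the basic single-step form of adversarial training; the function names, the `eps` perturbation budget, and the [0, 1] input range are illustrative assumptions, not details taken from the talk.

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, eps=8 / 255):
    """Construct adversarial examples with the fast gradient sign method (FGSM)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad, = torch.autograd.grad(loss, x_adv)
    # One signed-gradient step, then clamp back to the valid input range.
    return (x_adv + eps * grad.sign()).clamp(0.0, 1.0).detach()

def adversarial_training_step(model, optimizer, x, y, eps=8 / 255):
    """One training step taken on adversarial examples instead of clean inputs."""
    model.train()
    x_adv = fgsm_example(model, x, y, eps)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Stronger multi-step attacks such as PGD follow the same pattern, taking several small signed steps and projecting back into the epsilon-ball around the clean input; the training loop is unchanged.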
