Journal of Information Security Research ›› 2021, Vol. 7 ›› Issue (4): 294-309.


Summary of the Security of Image Adversarial Samples


  • Online: 2021-04-05  Published: 2021-04-14


Xu Jincai 1,2, Ren Min 1,2, Li Qi 2, Sun Zhenan 2

  1. School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049
  2. Center for Research on Intelligent Perception and Computing, Institute of Automation, Chinese Academy of Sciences, Beijing 100190

  • Corresponding author: Sun Zhenan
  • About the authors: Xu Jincai, master's student; main research interests: adversarial example attacks and defenses. 13527449440@163.com. Ren Min, PhD student; main research interest: generalization in iris recognition. min.ren@cripac.ia.ac.cn. Li Qi, associate professor; main research interests: face recognition and security, facial attribute editing, face deepfakes. qli@nlpr.ia.ac.cn. Sun Zhenan, professor; main research interests: face recognition and security, computer vision. znsun@nlpr.ia.ac.cn

Abstract: Improvements in computer performance and the emergence of deep learning have led to the widespread application of artificial intelligence, and increasing attention is being paid to the security of deep learning models. The existence of adversarial examples is one of the main threats to deep learning models, and it limits their deployment in applications with high privacy and security requirements, such as face recognition systems and autonomous driving. Beyond high performance, deep learning models are also required to be sufficiently robust. A central concern is whether deep neural networks can be applied in real-world settings stably, reliably, and effectively. If our understanding of a deep neural network remains that of a black box that merely produces a satisfying output for a given input, it is difficult to deploy it safely in practice. Research on adversarial examples is therefore an active area. In this paper, we explain why adversarial examples exist and categorize algorithms for both adversarial attack and defense. We also experimentally verify several representative methods on MNIST, CIFAR-10, and ImageNet. Finally, we discuss the outlook and trends of this field.
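As a minimal illustration of the gradient-sign family of attacks that surveys of this kind cover, the sketch below shows the Fast Gradient Sign Method (FGSM) applied to a toy linear softmax classifier. The model, weights, and epsilon value here are hypothetical and chosen only for illustration; they are not from the paper's experiments.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over logits."""
    e = np.exp(z - z.max())
    return e / e.sum()

def fgsm(x, y, W, eps):
    """FGSM step: x_adv = clip(x + eps * sign(dL/dx), 0, 1).

    For a linear model logits = W @ x with cross-entropy loss on label y,
    the input gradient is W.T @ (softmax(W @ x) - onehot(y)).
    """
    p = softmax(W @ x)
    onehot = np.zeros_like(p)
    onehot[y] = 1.0
    grad_x = W.T @ (p - onehot)
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)

# Toy data: an 8-dimensional "image" in [0, 1] and a 3-class linear model.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 8))
x = rng.uniform(size=8)
y = int(np.argmax(W @ x))       # attack the model's current prediction
x_adv = fgsm(x, y, W, eps=0.3)  # perturbation is bounded: ||x_adv - x||_inf <= eps
```

The same one-step, sign-of-gradient structure carries over to deep networks, where the input gradient is obtained by backpropagation instead of the closed-form expression used here.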

Key words: Adversarial Samples, Adversarial Attack, Adversarial Defense, Privacy Security, Artificial Intelligence, Deep Learning
