[1] Szegedy C, Zaremba W, Sutskever I, et al. Intriguing properties of neural networks [C] //Proc of the 2nd Int Conf on Learning Representations. Banff: Conference Track Proceedings, 2014
[2] Goodfellow I, Shlens J, Szegedy C. Explaining and harnessing adversarial examples [C] //Proc of the 3rd Int Conf on Learning Representations. San Diego: Conference Track Proceedings, 2015
[3] Kurakin A, Goodfellow I, Bengio S. Adversarial examples in the physical world [C] //Proc of the 5th Int Conf on Learning Representations. Toulon: Conference Track Proceedings, 2017
[4] Dong Yinpeng, Liao Fangzhou, Pang Tianyu, et al. Boosting adversarial attacks with momentum [C] //Proc of Conf on Computer Vision and Pattern Recognition (CVPR). Piscataway, NJ: IEEE, 2018: 9185-9193
[5] Carlini N, Wagner D. Towards evaluating the robustness of neural networks [C] //Proc of Symp on Security and Privacy (SP). Piscataway, NJ: IEEE, 2017: 39-57
[6] Carlini N, Wagner D. Audio adversarial examples: Targeted attacks on speech-to-text [C] //Proc of Security and Privacy Workshops (SPW). Piscataway, NJ: IEEE, 2018: 1-7
[7] Bose A, Aarabi P. Adversarial attacks on face detectors using neural net based constrained optimization [C] //Proc of the 20th Int Workshop on Multimedia Signal Processing (MMSP). Piscataway, NJ: IEEE, 2018
[8] Song Qing, Wu Yingqi, Yang Lu. Attacks on state-of-the-art face recognition using attentional adversarial attack generative network [J]. arXiv preprint, arXiv:1811.12026, 2018
[9] Gao Ji, Lanchantin J, Soffa M, et al. Black-box generation of adversarial text sequences to evade deep learning classifiers [C] //Proc of Security and Privacy Workshops (SPW). Piscataway, NJ: IEEE, 2018: 50-56
[10] Zügner D, Akbarnejad A, Günnemann S. Adversarial attacks on neural networks for graph data [C] //Proc of the 24th Int Conf on Knowledge Discovery & Data Mining (KDD). New York: ACM, 2018
[11] 阿里聚安全 (Alibaba Security). Revealing the underground industry: The inside story of "CAPTCHA-solving platforms" [EB/OL]. (2017-05-04) [2019-08-02]. https://zhuanlan.zhihu.com/p/24011861