[1] Chai J, Zeng H, Li A, et al. Deep learning in computer vision: A critical review of emerging techniques and application scenarios[J]. Machine Learning with Applications, 2021, 6: 100134
[2] Socher R, Bengio Y, Manning C D. Deep learning for NLP (without magic)[C]//Tutorial Abstracts of ACL 2012. Stroudsburg, PA: ACL, 2012: 5-5
[3] Baccouche M, Mamalet F, Wolf C, et al. Sequential deep learning for human action recognition[C]//Proc of Int Workshop on Human Behavior Understanding. Berlin: Springer, 2011: 29-39
[4] Goodfellow I J, Shlens J, Szegedy C. Explaining and harnessing adversarial examples[J]. arXiv preprint, arXiv:1412.6572, 2014
[5] Szegedy C, Zaremba W, Sutskever I, et al. Intriguing properties of neural networks[C]//Proc of the 2nd Int Conf on Learning Representations, 2014
[6] Akhtar N, Mian A. Threat of adversarial attacks on deep learning in computer vision: A survey[J]. IEEE Access, 2018, 6: 14410-14430
[7] Yuan X, He P, Zhu Q, et al. Adversarial examples: Attacks and defenses for deep learning[J]. IEEE Transactions on Neural Networks and Learning Systems, 2019, 30(9): 2805-2824
[8] Chen Yuefeng, Mao Xiaofeng, Li Yuhong, et al. AI security: A survey and applications of adversarial example techniques[J]. Journal of Information Security Research, 2019, 5(11): 1000-1007 (in Chinese)
[9] Krizhevsky A, Sutskever I, Hinton G E. ImageNet classification with deep convolutional neural networks[J]. Advances in Neural Information Processing Systems, 2012, 25: 1097-1105
[10] Le Q V. Building high-level features using large scale unsupervised learning[C]//Proc of IEEE Int Conf on Acoustics, Speech and Signal Processing. Piscataway, NJ: IEEE, 2013: 8595-8598
[11] Tramèr F, Kurakin A, Papernot N, et al. Ensemble adversarial training: Attacks and defenses[C]//Proc of the 6th Int Conf on Learning Representations, 2018
[12] Kurakin A, Goodfellow I J, Bengio S. Adversarial examples in the physical world[J]. arXiv preprint, arXiv:1607.02533, 2016
[13] Papernot N, McDaniel P, Jha S, et al. The limitations of deep learning in adversarial settings[C]//Proc of IEEE European Symp on Security and Privacy (EuroS&P). Piscataway, NJ: IEEE, 2016: 372-387
[14] Carlini N, Wagner D. Towards evaluating the robustness of neural networks[C]//Proc of IEEE Symp on Security and Privacy (SP). Piscataway, NJ: IEEE, 2017: 39-57
[15] Jin D, Jin Z, Zhou J T, et al. Is BERT really robust? A strong baseline for natural language attack on text classification and entailment[C]//Proc of AAAI Conf on Artificial Intelligence. Menlo Park, CA: AAAI, 2020: 8018-8025
[16] Gao J, Lanchantin J, Soffa M L, et al. Black-box generation of adversarial text sequences to evade deep learning classifiers[C]//Proc of IEEE Security and Privacy Workshops (SPW). Piscataway, NJ: IEEE, 2018: 50-56
[17] Ren S, Deng Y, He K, et al. Generating natural language adversarial examples through probability weighted word saliency[C]//Proc of the 57th Annual Meeting of the ACL. Stroudsburg, PA: ACL, 2019: 1085-1097
[18] Alzantot M, Sharma Y, Elgohary A, et al. Generating natural language adversarial examples[C]// Proc of the 2018 Conf on Empirical Methods in Natural Language Processing. Stroudsburg, PA: ACL, 2018
[19] Garg S, Ramakrishnan G. BAE: BERT-based adversarial examples for text classification[C]//Proc of the 2020 Conf on Empirical Methods in Natural Language Processing. Stroudsburg, PA: ACL, 2020: 6174-6181
[20] Devlin J, Chang M W, Lee K, et al. BERT: Pre-training of deep bidirectional transformers for language understanding[C]// Proc of the 2019 Conf of the North American Chapter of the ACL: Human Language Technologies. Stroudsburg, PA: ACL, 2019
[21] Song L, Yu X, Peng H T, et al. Universal adversarial attacks with natural triggers for text classification[C]//Proc of the 2021 Conf of the North American Chapter of the ACL: Human Language Technologies. Stroudsburg, PA: ACL, 2021: 3724-3733
[22] Zhao J, Kim Y, Zhang K, et al. Adversarially regularized autoencoders[C]//Proc of the 35th Int Conf on Machine Learning. New York: ACM, 2018: 5902-5911
[23] Morris J X, Lifland E, Yoo J Y, et al. TextAttack: A framework for adversarial attacks, data augmentation, and adversarial training in NLP[C]//Proc of the 2020 Conf on Empirical Methods in Natural Language Processing: System Demonstrations. Stroudsburg, PA: ACL, 2020
[24] Araujo V, Carvallo A, Aspillaga C, et al. On adversarial examples for biomedical NLP tasks[J]. arXiv preprint, arXiv:2004.11157, 2020
[25] Wang J, Xu W, Fu X, et al. ASTRAL: adversarial trained LSTM-CNN for named entity recognition[J]. Knowledge-Based Systems, 2020, 197: 105842
[26] Simoncini W, Spanakis G. SeqAttack: On adversarial attacks for named entity recognition[C]//Proc of the 2021 Conf on Empirical Methods in Natural Language Processing: System Demonstrations. Stroudsburg, PA: ACL, 2021: 308-318
[27] Eger S, et al. Text processing like humans do: Visually attacking and shielding NLP systems[C]//Proc of the 2019 Conf of the North American Chapter of the ACL: Human Language Technologies. Stroudsburg, PA: ACL, 2019
[28] Ebrahimi J, Lowd D, Dou D. On adversarial examples for character-level neural machine translation[C]//Proc of the 27th Int Conf on Computational Linguistics. Stroudsburg, PA: ACL, 2018: 653-663
[29] Zou W, Huang S, Xie J, et al. A reinforced generation of adversarial examples for neural machine translation[C]//Proc of the 58th Annual Meeting of the ACL. Stroudsburg, PA: ACL, 2020: 3486-3497
[30] Papineni K, Roukos S, Ward T, et al. BLEU: A method for automatic evaluation of machine translation[C]//Proc of the 40th Annual Meeting of the ACL. Stroudsburg, PA: ACL, 2002: 311-318
[31] Vaswani A, Shazeer N, Parmar N, et al. Attention is all you need[C]//Proc of the 31st Annual Conf on Neural Information Processing Systems. Cambridge: MIT Press, 2017: 5998-6008
[32] Idrissi B Y, Clinchant S. Masked adversarial generation for neural machine translation[J]. arXiv preprint, arXiv:2109.00417, 2021
[33] Clark K, Luong M T, Le Q V, et al. ELECTRA: Pre-training text encoders as discriminators rather than generators[C]//Proc of the 8th Int Conf on Learning Representations, 2020
[34] Zhang X, Zhang J, Chen Z, et al. Crafting adversarial examples for neural machine translation[C]//Proc of the 59th Annual Meeting of the ACL and the 11th Int Joint Conf on Natural Language Processing. Stroudsburg, PA: ACL, 2021: 1967-1977
[35] Tong Xin, Wang Binjun, Wang Runzheng, et al. A survey of adversarial examples in deep learning for natural language processing[J]. Computer Science, 2021, 48(1): 258-267 (in Chinese)
[36] Zhang W E, Sheng Q Z, Alhazmi A, et al. Adversarial attacks on deep-learning models in natural language processing: A survey[J]. ACM Transactions on Intelligent Systems and Technology (TIST), 2020, 11(3): 1-41
[37] Zheng Haibin, Chen Jinyin, Zhang Yan, et al. A survey of adversarial attack, defense and robustness analysis for natural language processing[J]. Journal of Computer Research and Development, 2021, 58(8): 1727 (in Chinese)
[38] Iyyer M, Wieting J, Gimpel K, et al. Adversarial example generation with syntactically controlled paraphrase networks[C]//Proc of the 2018 Conf of the North American Chapter of the ACL: Human Language Technologies, Volume 1 (Long Papers). Stroudsburg, PA: ACL, 2018: 1875-1885
[39] Yu Y, Lee H J, Kim B C, et al. Investigating vulnerability to adversarial examples on multimodal data fusion in deep learning[J]. arXiv preprint, arXiv:2005.10987, 2020
[40] Ren K, Zheng T, Qin Z, et al. Adversarial attacks and defenses in deep learning[J]. Engineering, 2020, 6(3): 346-360