References
[1] Dong L, Wei F, Zhou M, et al. Question answering over Freebase with multi-column convolutional neural networks[C] Proc of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th Int Joint Conf on Natural Language Processing (Volume 1: Long Papers). Stroudsburg, PA: ACL, 2015: 260-269
[2] Abdel-Hamid O, Mohamed A, Jiang H, et al. Convolutional neural networks for speech recognition[J]. IEEE/ACM Trans on Audio, Speech, and Language Processing, 2014, 22(10): 1533-1545
[3] Tu Z, Hu B, Lu Z, et al. Context-dependent translation selection using convolutional neural network[C] Proc of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th Int Joint Conf on Natural Language Processing (Volume 2: Short Papers). Stroudsburg, PA: ACL, 2015: 536-541
[4] Kim Y. Convolutional neural networks for sentence classification[J]. arXiv preprint, arXiv:1408.5882, 2014
[5] 金志刚, 周峻毅, 何晓勇. Research and prospect of adversarial attacks in natural language processing[J]. Journal of Information Security Research, 2022, 8(3): 202-211
[6] Marra F, Gragnaniello D, Verdoliva L. On the vulnerability of deep learning to adversarial attacks for camera model identification[J]. Signal Processing: Image Communication, 2018, 65: 240-248
[7] Yu Y, Lee H J, Kim B C, et al. Investigating vulnerability to adversarial examples on multimodal data fusion in deep learning[J]. arXiv preprint, arXiv:2005.10987, 2020
[8] Ganin Y, Ustinova E, Ajakan H, et al. Domain-adversarial training of neural networks[J]. The Journal of Machine Learning Research, 2016, 17(1): 2096-2030
[9] Mahmood F, Chen R, Durr N J. Unsupervised reverse domain adaptation for synthetic medical images via adversarial training[J]. IEEE Trans on Medical Imaging, 2018, 37(12): 2572-2581
[10] Jin D, Jin Z, Zhou J T, et al. Is BERT really robust? A strong baseline for natural language attack on text classification and entailment[C] Proc of the AAAI Conf on Artificial Intelligence. Menlo Park, CA: AAAI, 2020: 8018-8025
[11] Yang Y, Huang P, Cao J, et al. A prompting-based approach for adversarial example generation and robustness enhancement[J]. arXiv preprint, arXiv:2203.10714, 2022
[12] Devlin J, Chang M W, Lee K, et al. BERT: Pre-training of deep bidirectional transformers for language understanding[J]. arXiv preprint, arXiv:1810.04805, 2018
[13] Hochreiter S, Schmidhuber J. Long short-term memory[J]. Neural Computation, 1997, 9(8): 1735-1780
[14] 王文琦, 汪润, 王丽娜, et al. Adversarial examples generation approach for tendency classification on Chinese texts[J]. Journal of Software, 2019, 30(8): 2415-2427
[15] 仝鑫, 王罗娜, 王润正, et al. Word-level adversarial example generation method for Chinese text classification[J]. Netinfo Security, 2020, 20(9): 12-16
[16] Szegedy C, Zaremba W, Sutskever I, et al. Intriguing properties of neural networks[J]. arXiv preprint, arXiv:1312.6199, 2013
[17] Moosavi-Dezfooli S M, Fawzi A, Frossard P. DeepFool: A simple and accurate method to fool deep neural networks[C] Proc of the IEEE Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2016: 2574-2582
[18] Carlini N, Wagner D. Towards evaluating the robustness of neural networks[C] Proc of the 2017 IEEE Symp on Security and Privacy (SP). Piscataway, NJ: IEEE, 2017: 39-57
[19] Goodfellow I J, Shlens J, Szegedy C. Explaining and harnessing adversarial examples[J]. arXiv preprint, arXiv:1412.6572, 2014
[20] Liang B, Li H, Su M, et al. Deep text classification can be fooled[C] Proc of the 27th Int Joint Conf on Artificial Intelligence. Menlo Park, CA: AAAI, 2018: 4208-4215
[21] Gao J, Lanchantin J, Soffa M L, et al. Black-box generation of adversarial text sequences to evade deep learning classifiers[C] Proc of the 2018 IEEE Security and Privacy Workshops (SPW). Piscataway, NJ: IEEE, 2018: 50-56
[22] Li J, Ji S, Du T, et al. TextBugger: Generating adversarial text against real-world applications[J]. arXiv preprint, arXiv:1812.05271, 2018
[23] Papernot N, McDaniel P, Swami A, et al. Crafting adversarial input sequences for recurrent neural networks[C] Proc of the 2016 IEEE Military Communications Conf. Piscataway, NJ: IEEE, 2016: 49-54
[24] Vaswani A, Shazeer N, Parmar N, et al. Attention is all you need[C] Proc of the 31st Int Conf on Neural Information Processing Systems. New York: ACM, 2017: 6000-6010
[25] Garg S, Ramakrishnan G. BAE: BERT-based adversarial examples for text classification[C] Proc of the 2020 Conf on Empirical Methods in Natural Language Processing (EMNLP). Stroudsburg, PA: ACL, 2020: 6174-6181
[26] Kusner M, Sun Y, Kolkin N, et al. From word embeddings to document distances[C] Proc of the Int Conf on Machine Learning. New York: PMLR, 2015: 957-966