[1]中国信息安全测评中心. 2022年度网络安全漏洞态势报告[EB/OL]. (2023-07-19) [2024-04-10]. https://www.cnnvd.org.cn/home/netSecurity
[2]国家计算机病毒应急处理中心. 西北工业大学遭美国NSA网络攻击事件调查报告(之二)[EB/OL]. (2022-09-27) [2024-04-23]. https://www.cverc.org.cn/head/zhaiyao/news20220927-NPU2.htm
[3]Happe A, Cito J. Getting pwn'd by AI: Penetration testing with large language models[C] //Proc of the 31st ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering. San Francisco, CA, USA: ACM, 2023: 2082-2086
[4]高亚楠. 大模型技术的网络安全治理和应对研究[J]. 信息安全研究, 2023, 9(6): 551-556
[5]Bakhshandeh A, Keramatfar A, Norouzi A, et al. Using ChatGPT as a static application security testing tool[J]. arXiv preprint, arXiv:2308.14434, 2023
[6]Ferrag M A, Battah A, Tihanyi N, et al. SecureFalcon: Are we there yet in automated software vulnerability detection with LLMs?[DB/OL]. [2024-06-04]. http://arxiv.org/abs/2307.06616
[7]刘宝旭, 李昊, 孙钰杰, 等. 智能化漏洞挖掘与网络空间威胁发现综述[J]. 信息安全研究, 2023, 9(10): 932-939
[8]Deng G, Liu Y, Mayoral-Vilches V, et al. PentestGPT: An LLM-empowered automatic penetration testing tool[J]. arXiv preprint, arXiv:2308.06782, 2023
[9]Xu C, Sun Q, Zheng K, et al. WizardLM: Empowering large language models to follow complex instructions[J]. arXiv preprint, arXiv:2304.12244, 2023
[10]Hu E J, Shen Y, Wallis P, et al. LoRA: Low-rank adaptation of large language models[J]. arXiv preprint, arXiv:2106.09685, 2021
[11]Patil S G, Zhang T, Wang X, et al. Gorilla: Large language model connected with massive APIs[J]. arXiv preprint, arXiv:2305.15334, 2023
[12]Yao S, Zhao J, Yu D, et al. ReAct: Synergizing reasoning and acting in language models[J]. arXiv preprint, arXiv:2210.03629, 2022