[1] Ienca M. Don't pause giant AI for the wrong reasons[J/OL]. Nature Machine Intelligence, 2023 [2024-06-12]. https://www.nature.com/articles/s42256-023-00649-x
[2] Piktus A. Online tools help large language models to solve problems through reasoning[J]. Nature, 2023, 618: 465-466
[3] OpenAI. Moderation documentation[EB/OL]. [2023-06-21]. https://platform.openai.com/docs/guides/moderation/overview
[4] Raimondi R, Tzoumas N, Salisbury T, et al. Comparative analysis of large language models in the Royal College of Ophthalmologists fellowship exams[J/OL]. Eye, 2023 [2024-06-12]. https://www.nature.com/articles/s41433-023-02563-3
[5] OpenAI. Introducing ChatGPT[EB/OL]. [2023-06-21]. https://openai.com/blog/chatgpt
[6] Ouyang L, Wu J, Jiang X, et al. Training language models to follow instructions with human feedback[J]. Advances in Neural Information Processing Systems, 2022, 35: 27730-27744
[7] Si W M, Backes M, Blackburn J, et al. Why so toxic? Measuring and triggering toxic behavior in open-domain chatbots[C] //Proc of the 2022 ACM SIGSAC Conf on Computer and Communications Security. New York: ACM, 2022: 2659-2673
[8] Kang D, Li X, Stoica I, et al. Exploiting programmatic behavior of LLMs: Dual-use through standard security attacks[J]. arXiv preprint, arXiv:2302.05733, 2023
[9] Cui M. Analysis of criminal argot in telecom fraud[J]. Journal of Beijing Police College, 2021(3): 102-105 (in Chinese)
[10] Bordes A, Boureau Y-L, Weston J. Learning end-to-end goal-oriented dialog[J]. arXiv preprint, arXiv:1605.07683, 2016
[11] Milano S, McGrane J A, Leonelli S. Large language models challenge the future of higher education[J]. Nature Machine Intelligence, 2023, 5(4): 333-334
[12] Dasigi P, Lo K, Beltagy I, et al. A dataset of information-seeking questions and answers anchored in research papers[J]. arXiv preprint, arXiv:2105.03011, 2021
[13] Vaswani A, Shazeer N, Parmar N, et al. Attention is all you need[C/OL] //Proc of the Annual Conf on Neural Information Processing Systems (NIPS). 2017 [2024-06-12]. https://papers.nips.cc/paper_files/paper/2017/hash/3f5ee243547dee91fbd053c1c4a845aa-Abstract.html
[14] Zhong L, Wang Z. A study on robustness and reliability of large language model code generation[J]. arXiv preprint, arXiv:2308.10335, 2023