Journal of Information Security Research ›› 2024, Vol. 10 ›› Issue (9): 795-.


A Large Language Model Detection System for Domain-Specific Jargon

Ji Xu, Zhang Jianyi, Zhao Zhangchi, Zhou Ziyin, Li Yilong, and Sun Zezheng

  1. (Department of Cyberspace Security, Beijing Electronic Science and Technology Institute, Beijing 100070)
  • Online: 2024-09-25  Published: 2024-09-29

  • Corresponding author: Zhang Jianyi, PhD, associate professor, CCF member. His main research interests include privacy protection and system security. zjy@besti.edu.cn
  • About the authors: Ji Xu, master's student. His main research interests include large language model security and knowledge graphs. 1164972083@qq.com; Zhang Jianyi, PhD, associate professor, CCF member. His main research interests include privacy protection and system security. zjy@besti.edu.cn; Zhao Zhangchi. His main research interest is artificial intelligence security. xuelianwinter@gmail.com; Zhou Ziyin, master's student. His main research interest is data privacy. mrzhouziyin@126.com; Li Yilong, master's student. His main research interests include natural language processing and artificial intelligence security. elonisme@163.com; Sun Zezheng, master's student. His main research interests include federated learning and privacy security. 420264993@qq.com

Abstract: Large language models (LLMs) retrieve knowledge from their own structures and reasoning processes to generate responses to user queries, so many researchers have begun to evaluate their reasoning capabilities. However, while these models have demonstrated strong reasoning and comprehension skills on generic language tasks, their proficiency on domain-specific problems, such as the jargon used in telecommunications fraud, still needs to be evaluated. To address this challenge, this paper presents the first evaluation system for assessing LLM reasoning about domain-specific jargon and proposes the first domain-specific jargon dataset. To handle the cross-matching problem and the complex data-calculation problem, we propose a collaborative harmony algorithm and a data-aware algorithm based on indicator functions. These algorithms provide a multidimensional assessment of the performance of large language models. Our experimental results demonstrate that the system can flexibly evaluate the accuracy of question answering by large language models within specialized domains. Moreover, our findings reveal, for the first time, how recognition accuracy varies with the question style and contextual cues used by the models. In conclusion, our system serves as an objective auditing tool to enhance the reliability and security of large language models, particularly when they are applied to specialized domains.
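The abstract names a data-aware algorithm based on indicator functions but does not spell it out. The short Python sketch below is only an illustration, under the assumption that each evaluated item pairs a model answer with a gold jargon meaning, of how an indicator-function-based accuracy score for jargon question answering could be computed; all function and field names (indicator, data_aware_accuracy, model_answer, gold_meaning) are hypothetical and not taken from the paper.

    # Minimal, illustrative sketch (not the paper's implementation) of an
    # indicator-function-based accuracy score for jargon question answering.
    # The record format and all names below are assumptions for illustration.

    from typing import List, Dict

    def indicator(predicted: str, expected: str) -> int:
        """Return 1 if the model's answer contains the expected jargon meaning, else 0."""
        return int(expected.strip().lower() in predicted.strip().lower())

    def data_aware_accuracy(records: List[Dict[str, str]]) -> float:
        """Average the indicator over all evaluated question-answer pairs."""
        if not records:
            return 0.0
        hits = sum(indicator(r["model_answer"], r["gold_meaning"]) for r in records)
        return hits / len(records)

    # Example usage with two hypothetical telecom-fraud jargon items:
    records = [
        {"model_answer": "Here the term refers to a phishing scam.", "gold_meaning": "phishing scam"},
        {"model_answer": "The term means an ordinary hobby.", "gold_meaning": "money mule"},
    ]
    print(data_aware_accuracy(records))  # -> 0.5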

Key words: large language model, domain-specific jargon, cant detection, evaluation system, slang, reasoning

