Journal of Information Security Research ›› 2025, Vol. 11 ›› Issue (3): 265-.

• Technical Application •

A Method for Extracting Vulnerability Entities Based on Small-Sample Semantic Analysis

Ding Quan1, Zhang Lei2, Huang Shuai3, Zha Zhengpeng3, and Tao Tao4

  1(Electric Power Science Research Institute, State Grid Anhui Electric Power Co., Ltd., Hefei 230601)
    2(School of Information Science and Technology, University of Science and Technology of China, Hefei 230026)
    3(Institute of Advanced Technology, University of Science and Technology of China, Hefei 230031)
    4(School of Computer Science and Technology, Anhui University of Technology, Ma’anshan, Anhui 243032)
  • Online: 2025-03-18  Published: 2025-03-31
  • Corresponding author: Ding Quan, M.S., senior engineer. Main research interests: cybersecurity and information technology supervision. 394961975@qq.com
  • About the authors: Ding Quan, M.S., senior engineer. Main research interests: cybersecurity and information technology supervision. 394961975@qq.com. Zhang Lei, Ph.D., associate researcher. Main research interests: computer vision, NLP, information retrieval, and multimodal language models. leizh23@ustc.edu.cn. Huang Shuai, master's student. Main research interests: NLP and cybersecurity. misaki@mail.ustc.edu.cn. Zha Zhengpeng, M.S., researcher. Main research interests: cybersecurity and cryptographic applications. zhazp@ustc.edu.cn. Tao Tao, professor. Main research interests: network and data security. taotao@ahut.edu.cn

Abstract: At present, different information security vulnerability databases follow different standards, emphasize different aspects of vulnerability data, and remain relatively independent of one another, making it difficult to obtain high-value vulnerability information quickly and comprehensively; a unified vulnerability entity standard therefore needs to be established. Accordingly, this paper focuses on entity extraction from vulnerability data. Most vulnerability data is provided as unstructured natural language that mixes Chinese and English; rule-based methods generalize poorly, while deep-learning-based methods consume excessive resources and rely on large amounts of annotated data. To address these issues, this paper presents a vulnerability entity extraction method based on small-sample semantic analysis. The method pre-trains BERT (bidirectional encoder representations from transformers) on vulnerability description data to obtain a pre-trained model for the vulnerability domain, allowing better understanding of vulnerability data and reducing reliance on large annotated corpora. In addition, a self-supervised incremental learning approach is applied to improve model performance with very limited annotated data (1785 annotated samples). The proposed model extracts 12 types of vulnerability entities in the cybersecurity domain. Experimental results show that the method outperforms other extraction models, achieving an F1 score of 0.8643 and high overall recognition performance, and enabling accurate extraction of vulnerability entities.

Key words: small sample, semantic analysis, vulnerability entity extraction, BERT, CRF
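As a rough illustration of the BERT-plus-CRF architecture named in the keywords, the following is a minimal sketch, not the authors' released code, of a token-classification model that could back such a vulnerability entity extractor. It assumes PyTorch, the Hugging Face transformers package, and the pytorch-crf package; the checkpoint name bert-base-chinese and the BIO tag count are placeholders, since the paper pre-trains its own BERT on vulnerability descriptions and defines 12 entity types.

# Hypothetical BERT+CRF tagger sketch; not the model published in the paper.
import torch.nn as nn
from transformers import AutoModel   # Hugging Face transformers
from torchcrf import CRF             # pytorch-crf package

NUM_TAGS = 2 * 12 + 1  # 12 entity types under BIO tagging, plus the "O" tag

class BertCrfTagger(nn.Module):
    def __init__(self, pretrained_name="bert-base-chinese", num_tags=NUM_TAGS):
        super().__init__()
        # Encoder: in the paper this would be a BERT model further pre-trained
        # on vulnerability descriptions; a public checkpoint stands in here.
        self.encoder = AutoModel.from_pretrained(pretrained_name)
        self.dropout = nn.Dropout(0.1)
        # Per-token emission scores over the tag set.
        self.emissions = nn.Linear(self.encoder.config.hidden_size, num_tags)
        # CRF layer models transitions between adjacent tags.
        self.crf = CRF(num_tags, batch_first=True)

    def forward(self, input_ids, attention_mask, labels=None):
        hidden = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
        scores = self.emissions(self.dropout(hidden))
        mask = attention_mask.bool()
        if labels is not None:
            # Training: negative log-likelihood of the gold tag sequence.
            return -self.crf(scores, labels, mask=mask, reduction="mean")
        # Inference: Viterbi decoding of the best tag sequence per sentence.
        return self.crf.decode(scores, mask=mask)

Entity-level F1 such as the 0.8643 reported above would then be computed from the decoded BIO sequences, for example with a tool such as seqeval.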

CLC number: