• Research Paper •

### Research on Verification of Neural Networks Based on the Softplus Function by the Reluplex Algorithm

1. (School of Computer Science and Engineering, Tianjin University of Technology, Tianjin 300384, China)
• Published: 2022-09-02 Online: 2022-09-02
• Corresponding author: LU Mingyuan, M.S. His main research interests include formal software verification, artificial neural networks, and information security. 1933044495@qq.com
• About the authors: LU Mingyuan, M.S. His main research interests include formal software verification, artificial neural networks, and information security. 1933044495@qq.com HOU Chunyan, Ph.D., associate professor, master's supervisor. Her main research interests include formal software verification, software vulnerability mining, software testing, and deep learning. chunyanhou@tjut.edu.cn WANG Jinsong, Ph.D., professor, doctoral supervisor. His main research interests include artificial intelligence, data mining, blockchain, and information security. jswang@tjut.edu.cn

### Research on Verification of Neural Networks Based on the Softplus Function by the Reluplex Algorithm

• Online: 2022-09-02 Published: 2022-09-02

Abstract: Formal verification is a method in computer science that uses mathematical logic to verify whether a system behaves correctly. Applying formal verification methods to neural networks helps us study the properties and applications of neural networks more thoroughly. Reluplex is a simplex-based algorithm for verifying deep neural networks that use ReLU as the activation function; however, ReLU neurons are fragile and may die during training. Softplus is an activation function similar to ReLU but smoother. We extended the Reluplex algorithm to verify deep neural networks that use the Softplus activation function, and obtained experimental results by testing adversarial robustness under Softplus. Comparison with the corresponding results under ReLU confirms that verification under Softplus is significantly more efficient, and that Softplus is more balanced than ReLU, allowing the neural network to learn faster. This study extends the capability of neural network verification algorithms and provides a comparative analysis, which will help to better verify and improve deep neural networks and to ensure their security in the future.
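As background to the comparison in the abstract, the two activation functions can be sketched with their standard textbook definitions. This is illustrative only, not the paper's verification code: ReLU is piecewise linear with a zero gradient on the negative side (the source of "dying" neurons), while Softplus, ln(1 + e^x), is a smooth approximation of ReLU that stays strictly positive everywhere.

```python
import math

def relu(x: float) -> float:
    # ReLU: max(0, x). Piecewise linear; the gradient is exactly 0
    # for x < 0, which is what allows neurons to "die" in training.
    return max(0.0, x)

def softplus(x: float) -> float:
    # Softplus: ln(1 + e^x). Smooth, strictly positive, and close
    # to ReLU for large |x|; log1p keeps the computation accurate
    # for small exp(x).
    return math.log1p(math.exp(x))

# Compare the two activations at a few sample points.
for x in (-2.0, 0.0, 2.0):
    print(f"x={x:+.1f}  relu={relu(x):.4f}  softplus={softplus(x):.4f}")
```

Note that softplus(0) = ln 2 ≈ 0.6931 rather than 0, and that the two functions converge as x grows, which is why Softplus is described as a smoother stand-in for ReLU.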