Table of Contents
01 March 2022, Volume 8 Issue 3
Research and Prospect of Adversarial Attack in the Field of Natural Language Processing
2022, 8(3): 202.
With the continuous development of artificial intelligence, deep learning has been applied to various fields. However, recent studies have shown that deep learning is susceptible to adversarial attacks, which can deceive deep learning models into making wrong judgments about sample categories. Research on adversarial attacks in computer vision has gradually matured, but because of the special structure of text data, research on adversarial attacks in natural language processing is still at an early stage. Therefore, after introducing the concept of adversarial attacks and their applications in computer vision, this paper reviews the current state of adversarial attack research in natural language processing and surveys popular attack schemes for specific downstream tasks. Finally, prospects for the development of adversarial attacks in natural language processing are presented. This paper provides a reference for researchers in the field of natural language processing adversarial attacks.
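To make the underlying idea concrete, the following is a toy, hypothetical sketch of a word-substitution adversarial attack against a hand-built bag-of-words sentiment classifier; the word weights, synonym table, and example sentence are all invented for illustration and do not come from the paper.

```python
# Toy classifier: positive score => positive sentiment (weights are invented).
WEIGHTS = {"great": 2.0, "good": 1.5, "fine": 0.3, "bad": -1.5, "poor": -1.2, "awful": -2.0}

# Toy synonym candidates an attacker might substitute with little change in meaning.
SYNONYMS = {"great": ["fine", "good"], "good": ["fine"], "awful": ["poor", "bad"]}

def score(tokens):
    return sum(WEIGHTS.get(t, 0.0) for t in tokens)

def greedy_attack(tokens):
    """Greedily replace words with synonyms that push the score toward the opposite label."""
    tokens = list(tokens)
    original_label = score(tokens) > 0
    for i, tok in enumerate(tokens):
        if (score(tokens) > 0) != original_label:
            break                                   # prediction already flipped
        best, best_score = tok, score(tokens)
        for cand in SYNONYMS.get(tok, []):
            trial = tokens[:i] + [cand] + tokens[i + 1:]
            s = score(trial)
            # Accept the candidate that moves the score toward the opposite label.
            better = s < best_score if original_label else s > best_score
            if better:
                best, best_score = cand, s
        tokens[i] = best
    return tokens

if __name__ == "__main__":
    sentence = ["good", "start", "but", "awful", "ending"]
    adv = greedy_attack(sentence)
    print(sentence, "-> positive" if score(sentence) > 0 else "-> negative")
    print(adv, "-> positive" if score(adv) > 0 else "-> negative")
```

Real NLP attacks differ mainly in scale: the victim is a neural model, candidate substitutions come from embeddings or language models, and semantic similarity constraints are enforced, but the greedy search structure is similar.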
Physical Adversarial Attacks Against Deep Reinforcement Learning Based Navigation
2022, 8(3): 212.
This paper studies the security of deep reinforcement learning (DRL) based laser navigation systems and, for the first time, proposes the concept of an adversarial map together with a physical attack method based on it. The method uses an adversarial example generation algorithm to compute the noise to be injected into the laser sensor readings and then modifies the original map so that this noise is physically realized, yielding the adversarial map. The adversarial map can induce the agent to deviate from the optimal path in a particular area and ultimately cause the robot's navigation to fail. In physical simulation experiments, this paper compares the navigation results of an agent on multiple original maps and adversarial maps, demonstrates the effectiveness of the adversarial map attack, and points out the hidden security risks of applying current DRL technology in navigation systems.
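As a rough illustration of the first step only (computing sensor-level noise with an adversarial example algorithm), here is a hypothetical FGSM-style sketch against a toy navigation policy; the network, scan size, and perturbation budget are invented, and the paper's further step of realizing the noise as physical map changes is not shown.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

N_BEAMS, N_ACTIONS = 36, 5                       # toy laser scan size / discrete actions
policy = nn.Sequential(nn.Linear(N_BEAMS, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))

# Toy laser ranges in meters; made a leaf tensor so its gradient is retained.
scan = (torch.rand(1, N_BEAMS) * 5.0).detach().requires_grad_(True)

logits = policy(scan)
clean_action = logits.argmax(dim=1)              # action the clean scan would take

# Maximize the loss of the currently chosen action to push the policy away from it.
loss = F.cross_entropy(logits, clean_action)
loss.backward()

epsilon = 0.2                                    # max per-beam perturbation (meters)
adv_scan = (scan + epsilon * scan.grad.sign()).detach().clamp(0.0, 5.0)

print("clean action:", clean_action.item())
print("action on perturbed scan:", policy(adv_scan).argmax(dim=1).item())
```

With an untrained random policy the action may or may not change; against a trained navigation policy, repeated perturbations of this kind are what an adversarial map would need to reproduce physically.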
A Survey on Threats to Federated Learning
2022, 8(3): 223.
Federated learning is currently regarded as an effective solution to the problems of data islands and privacy protection, yet its own security and privacy issues have attracted widespread attention from industry and academia. Existing federated learning systems have been shown to contain vulnerabilities that adversaries, whether inside or outside the system, can exploit to compromise data security. This paper first introduces the concept, classification, and threat models of federated learning in specific scenarios. It then introduces the confidentiality, integrity, and availability (CIA) model of federated learning and classifies the attack methods that violate each property of this model. Finally, it discusses the current challenges and future research directions for the federated learning CIA model.
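As a concrete illustration of one integrity threat of this kind, the assumed sketch below runs vanilla federated averaging on a one-parameter model with a single client that submits scaled, sign-flipped (poisoned) updates; all data and hyperparameters are invented for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_client(n=50):
    """Each honest client holds data generated from the true model y = 2*x."""
    x = rng.normal(size=n)
    return x, 2.0 * x + rng.normal(scale=0.1, size=n)

def local_update(global_w, data):
    """One local gradient-descent step on the least-squares loss (w*x - y)^2."""
    x, y = data
    grad = np.mean(2 * x * (global_w * x - y))
    return global_w - 0.1 * grad

clients = [make_client() for _ in range(5)]
w = 0.0
for rnd in range(20):
    updates = [local_update(w, data) for data in clients]
    if rnd >= 10:                         # the attacker becomes active half-way through
        updates[0] = -10.0 * updates[0]   # scaled, sign-flipped model-poisoning update
    w = np.mean(updates)                  # vanilla FedAvg aggregation, no robustness
    if rnd in (9, 19):
        print(f"round {rnd}: global w = {w:.3f} (true value 2.0)")
```

The contrast between the two printed rounds shows how a single unfiltered participant can drag the global model away from the true parameter, which is exactly the kind of integrity violation the CIA-based classification covers.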
Technology and Research Progress of Generative Adversarial Networks
2022, 8(3): 235.
In recent years, research on generative adversarial networks (GANs) has grown exponentially. Generative adversarial networks use zero-sum game theory to pit two competing neural networks against each other so that they can produce clearer and more realistic outputs. In fields such as computer vision, medicine, and finance, significant progress has been made in image and video processing and generation, data set augmentation, and time-series prediction. This paper introduces the basic framework, theory, and training procedure of generative adversarial networks, analyzes the mainstream research of recent years, and, by reviewing GAN variants and their application scenarios, lists the problems that still need to be addressed. In addition, this paper focuses on how generative adversarial networks can be applied to privacy protection and the handling of sensitive data, as well as the future development trends of generative adversarial network technology in related fields.
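For readers unfamiliar with the two-network minimax setup, a compressed, generic training-loop sketch on toy one-dimensional data is given below; the architectures and hyperparameters are illustrative assumptions and are not tied to any specific GAN variant discussed in the paper.

```python
import torch
import torch.nn as nn

real_data = lambda n: torch.randn(n, 1) * 0.5 + 3.0          # target distribution N(3, 0.5^2)
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))   # generator: noise -> sample
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))   # discriminator: sample -> logit
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    # Discriminator step: label real samples 1 and generated samples 0.
    real = real_data(64)
    fake = G(torch.randn(64, 8)).detach()
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to make the discriminator label fakes as real.
    fake = G(torch.randn(64, 8))
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

samples = G(torch.randn(1000, 8))
print("generated mean/std:", samples.mean().item(), samples.std().item())
```

After training, the generated mean and standard deviation should approach 3.0 and 0.5, which is the one-dimensional analogue of the image-quality improvements the survey discusses.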
A Survey of Deep Face Forgery Detection
2022, 8(3): 241.
Video media has developed rapidly with the popularity of the mobile Internet in recent years, and face forgery technology has likewise made great progress with the development of computer vision. Face forgery technology can be used to build entertaining short-video applications, but because forged faces are highly realistic and easy and fast to generate, their malicious use poses a great threat to social stability and information security. How to detect forged face videos on the Internet has therefore become an urgent problem. Thanks to the efforts of scholars worldwide, forgery detection has made great breakthroughs in recent years. This review summarizes existing forgery detection methods in detail. We first introduce the forgery detection data sets, then summarize existing methods from the perspectives of forgery traces in videos, neural network architecture, temporal information, face identity information, and the generalization ability of detection algorithms, and compare and analyze their detection results. Finally, we summarize the research directions and open problems of deep forgery detection and discuss its challenges and development trends, providing a reference for related research.
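As one small, hypothetical example of a trace-based cue of the kind surveyed (generator upsampling tends to leave grid-like high-frequency artifacts), the sketch below compares the high-frequency spectral energy of a synthetic "real" (smooth) image with that of a nearest-neighbour-upsampled "fake"; both images are invented stand-ins, not data from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def high_freq_energy(img, cutoff=0.25):
    """Fraction of spectral energy above a normalized radial frequency cutoff."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = img.shape
    yy, xx = np.mgrid[-h // 2:h // 2, -w // 2:w // 2]
    radius = np.sqrt((yy / (h / 2)) ** 2 + (xx / (w / 2)) ** 2)
    return spec[radius > cutoff].sum() / spec.sum()

# "Real": smooth image obtained by low-pass filtering white noise.
freqs = np.abs(np.fft.fftfreq(64))
lowpass = (freqs[:, None] < 0.1) & (freqs[None, :] < 0.1)
real = np.real(np.fft.ifft2(np.fft.fft2(rng.normal(size=(64, 64))) * lowpass))

# "Fake": nearest-neighbour upsampling of a small map, mimicking generator artifacts.
fake = np.kron(rng.normal(size=(16, 16)), np.ones((4, 4)))

print("real high-freq energy:", round(high_freq_energy(real), 4))
print("fake high-freq energy:", round(high_freq_energy(fake), 4))
```

The gap between the two numbers is the kind of statistical trace that frequency-based detectors exploit; the surveyed methods learn such cues with neural networks rather than a fixed threshold.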
A Review of Generation and Detection Technology for Deepfakes
2022, 8(3): 258.
In recent years, deepfake technology has become able to tamper with or generate highly realistic audio and video content that is difficult to distinguish from genuine material, and it has been widely used in both benign and malicious applications. Experts and scholars at home and abroad have conducted in-depth research on the generation and detection of deepfakes and have proposed corresponding generation and detection schemes. This paper gives a comprehensive overview and detailed analysis of existing deep learning based audio and video deepfake generation and detection techniques, the associated data sets, and future research directions, helping interested readers understand deepfakes and supporting research on the prevention and detection of malicious deepfakes.
A Traceable Deep Learning Classifier Based on Differential Privacy
2022, 8(3): 277.
With the application of deep learning in various fields, privacy leakage during data collection and training has become one of the obstacles to the further development of artificial intelligence. Many existing studies combine deep learning with homomorphic encryption or differential privacy to achieve privacy protection in deep learning. This paper approaches the problem from another perspective: achieving traceability of the computing nodes that handle training data while still guaranteeing a degree of privacy for the training data set. We therefore propose a traceable deep learning classifier based on differential privacy. It combines differential privacy with digital fingerprinting to protect the privacy of training data sets and to ensure that, when a training model or data set is illegally transmitted, the offending training node can be located from the embedded fingerprint information. The designed classifier preserves the classification function required for security decisions while guaranteeing basic fingerprint properties such as robustness, imperceptibility, reliability, and feasibility. Theoretical analysis derived from the formulas, together with simulation results on real data, shows that the scheme can satisfy the privacy protection and traceability requirements of deep learning.
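A minimal sketch of the differential-privacy side of such a scheme is given below, assuming DP-SGD-style per-example gradient clipping with Gaussian noise for a logistic-regression classifier; the clipping norm, noise scale, and data are illustrative assumptions, and the paper's fingerprint-embedding component is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 10
X = rng.normal(size=(n, d))
true_w = rng.normal(size=d)
y = (X @ true_w + 0.1 * rng.normal(size=n) > 0).astype(float)

w = np.zeros(d)
clip_norm, noise_mult, lr, batch = 1.0, 1.0, 0.5, 50

for step in range(300):
    idx = rng.choice(n, size=batch, replace=False)
    # Per-example gradients of the logistic loss.
    p = 1.0 / (1.0 + np.exp(-(X[idx] @ w)))
    grads = (p - y[idx])[:, None] * X[idx]                 # shape (batch, d)
    # Clip each example's gradient to L2 norm <= clip_norm.
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    grads = grads / np.maximum(1.0, norms / clip_norm)
    # Add Gaussian noise calibrated to the clipping norm, then average.
    noisy = grads.sum(axis=0) + rng.normal(scale=noise_mult * clip_norm, size=d)
    w -= lr * noisy / batch

acc = np.mean(((X @ w) > 0).astype(float) == y)
print("training accuracy with DP-SGD-style updates:", round(acc, 3))
```

Clipping bounds each example's influence and the added noise hides individual contributions, which is what gives the formal privacy guarantee that the fingerprinting component is then layered on top of.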
Semantic Recognition for Attack Behavior Based on Heterogeneous Attributed Graph
2022, 8(3): 292.
In order to bridge the semantic gap between security device logs and attack behaviors, an automatic semantic recognition method for attack behaviors based on a heterogeneous attributed graph is proposed. First, a heterogeneous graph is used to model the threats recorded in system logs. Then, taking the attack context as the semantics and combining it with knowledge graph representation learning, vector representations of the nodes and edges in the graph are obtained. Meanwhile, hierarchical clustering is used to aggregate similar logs, and the most representative log is selected as the behavioral representation of each cluster. Finally, experimental verification shows that the method achieves high accuracy in abstracting both normal system behaviors and malicious behaviors. More importantly, in the attack investigation and forensics stage, it can greatly reduce the workload of security operations.
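The clustering-and-representative step can be sketched as follows, assuming the learned node/edge embeddings are already available (random vectors stand in for them here): similar log embeddings are grouped by agglomerative (hierarchical) clustering, and the medoid of each cluster is kept as the representative log.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)
# 30 toy log embeddings drawn around 3 behaviour prototypes (placeholders for learned vectors).
prototypes = rng.normal(size=(3, 16))
embeddings = np.vstack([p + 0.1 * rng.normal(size=(10, 16)) for p in prototypes])

# Agglomerative clustering on cosine distances, cut into 3 clusters.
dist = pdist(embeddings, metric="cosine")
labels = fcluster(linkage(dist, method="average"), t=3, criterion="maxclust")

full = squareform(dist)
for c in sorted(set(labels)):
    members = np.where(labels == c)[0]
    sub = full[np.ix_(members, members)]
    medoid = members[sub.sum(axis=1).argmin()]      # the log closest to all others in its cluster
    print(f"cluster {c}: {len(members)} logs, representative log index {medoid}")
```

Replacing each cluster by its medoid is what lets an analyst review a handful of representative logs instead of every raw event during attack investigation.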
Design and Implementation of Program Vulnerability Detection Tool Based on Dynamic Taint Analysis
2022, 8(3): 301.
Because of the semantic differences between a program's source code and its compiled executable, and the uncertainty of user input, properties verified in the source code cannot be guaranteed to still hold in the executable in practical applications. Therefore, when the source code is not available, performing vulnerability analysis and detection directly on the executable is more meaningful. This paper studies the application of dynamic taint analysis to program vulnerability detection. A program vulnerability detection tool based on the libdft API is designed, with customizable taint sources and taint sinks to adapt to different vulnerability patterns. The tool does not need to analyze the source code of the target program and can detect binary programs directly. Test results show that the tool can effectively detect a variety of vulnerabilities, such as Heartbleed, format string attacks, data leakage, and return-to-libc attacks. It provides a useful reference for helping programmers check program security and protect computer data.
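The idea of byte-level taint tagging with custom sources and sinks can be illustrated with the conceptual sketch below; it is plain Python and does not use the actual libdft API (the class, the source/sink helpers, and the format-string check are invented purely for illustration).

```python
class TaintedBytes:
    """Byte string plus one taint tag per byte, like a byte-level shadow memory."""
    def __init__(self, data, tainted=False):
        self.data = data
        self.tags = [tainted] * len(data)

    def __add__(self, other):                    # taint propagates through concatenation
        out = TaintedBytes(self.data + other.data)
        out.tags = self.tags + other.tags
        return out

def taint_source(user_input: bytes) -> TaintedBytes:
    """Everything read from the attack surface (network, file, argv) is marked tainted."""
    return TaintedBytes(user_input, tainted=True)

def format_sink(fmt: TaintedBytes):
    """Sink check: a tainted format string reaching printf-style code is a classic bug."""
    if any(fmt.tags):
        print("ALERT: tainted data reached a format-string argument")
    else:
        print("sink reached with clean data only")

prefix = TaintedBytes(b"log: ")                  # program constant, untainted
user = taint_source(b"%x %x %n")                 # attacker-controlled input
format_sink(prefix + user)                       # taint flows from source to sink
format_sink(prefix)                              # the constant alone raises no alert
```

A real dynamic-taint tool does the same bookkeeping at the instruction level on the running binary, which is why no source code is needed.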
Great Attention to Artificial Intelligence Security Issues
2022, 8(3): 311.