
Table of Contents

    04 June 2023, Volume 9 Issue 6
    ChatGPT’s Applications, Status and Trends in the Field of Cyber Security
    2023, 9(6):  500. 
    ChatGPT, as a large language model technology, demonstrates extremely strong language understanding and text generation capabilities. It has not only attracted tremendous attention across various industries but also brought new transformations to the field of cybersecurity. Currently, research on ChatGPT in the cybersecurity field is still in its infancy. To help researchers systematically understand the research status of ChatGPT in cybersecurity, this paper provides the first comprehensive summary of ChatGPT’s applications in the field of cybersecurity and potential accompanying security issues. The article first outlines the development of large language model technologies and briefly introduces the technology and features of ChatGPT. Then, it discusses the enabling effects of ChatGPT in the cybersecurity field from two perspectives: assisting attacks and assisting defense. This includes vulnerability discovery, exploitation and remediation, malicious software detection and identification, phishing email generation and detection, and potential use cases in security operations scenarios. Furthermore, the article delves into the accompanying risks of ChatGPT in the cybersecurity field, including content risks and prompt injection attacks, providing a detailed analysis and discussion of these risks. Finally, the paper looks into the future of ChatGPT in the cybersecurity field from the perspectives of security enablement and accompanying security, pointing out the direction for future research on ChatGPT in the cybersecurity domain.
    Consideration on Some Problems in the Development of GPT-4 and Its Regulation Scheme
    2023, 9(6):  510. 
    With the release of GPT-4, a new generation of generative artificial intelligence (AI) foundation model, the era of AI has arrived. GPT-4’s rapid popularity also raises several risks. Regarding data security, in the face of frequent data-leakage incidents, data storage periods should be set so that data security and the technology can develop in parallel. Regarding intellectual property, GPT-4 raises challenges concerning copyright infringement, legal subject status, and the identification of works, which must be kept in mind going forward. Regarding the core algorithm, GPT-4 harbors the risk of algorithmic discrimination; the algorithm should be continuously optimized to move GPT-4 toward true artificial general intelligence. At present, GPT-4 is still evolving, so it is too early to design a detailed regulation scheme. To better address the risks it poses, independent innovation in the digital age should be pursued, and generative AI should be brought within the category of deep-synthesis technology through special legislation on AI, combined with existing algorithm-governance practice.
    Research on Artificial Intelligence Data Falsification Risk Based on GPT Model
    2023, 9(6):  518. 
    The rapid development and application of artificial intelligence technology have led to the emergence of AIGC (AI-Generated Content), which has significantly enhanced productivity. ChatGPT, a product built on AIGC, has gained worldwide popularity thanks to its diverse application scenarios and has spurred rapid commercialization. This paper takes the risk of AI data forgery as its research goal and the GPT model as its research object, focusing on the possible causes of data forgery and how it can be carried out, by analyzing security risks that have already been exposed. Drawing on the offensive and defensive practice of traditional cyberspace security and data security, the paper conducts a practical study of data forgery based on model fine-tuning and speculates on data-forgery scenarios that may arise once artificial intelligence is widely commercialized. Finally, the paper offers suggestions on how to deal with the risk of data forgery and provides directions for avoiding it before the large-scale application of artificial intelligence in the future.
    Research on Detection of Content Generated by Large Language Models and the Mechanism of Bypassing
    2023, 9(6):  524. 
    In recent years, large language models have developed rapidly. Although AI chatbots such as ChatGPT contain large-scale safety-confrontation mechanisms, attackers can still craft question-and-answer patterns to bypass them and, with the models’ help, automatically produce phishing emails and carry out network attacks. How to identify text generated by AI chatbots has therefore become a hot issue. To carry out LLM-generated-content detection experiments, our team collected question-and-answer data samples from an Internet social platform and from the ChatGPT platform, and proposed a series of detection strategies for different conditions of AI-text availability: text-similarity analysis when controllable online AI samples are available; text data mining based on statistical differences under offline conditions; adversarial analysis based on the LLM generation method when AI samples are unavailable; and AI-model analysis that builds a classifier by fine-tuning the target LLM itself. We measured and compared the detection capability of the analysis engine in each case. From the perspective of network attack and defense, we also present evasion techniques against AI-text detection engines based on the characteristics of these detection strategies.
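One of the strategies listed above is text-similarity analysis against controllable online AI samples. As a minimal illustration (not the authors' code; the thresholds and example texts are invented), a candidate text can be compared with a known AI-generated reference using cosine similarity over bag-of-words vectors:

```python
# Sketch of similarity-based AI-text detection: flag a candidate text when
# its bag-of-words vector is unusually close to a known AI-generated sample.
# Threshold and sample texts are illustrative assumptions.
import math
from collections import Counter

def bow_cosine(a: str, b: str) -> float:
    """Cosine similarity between word-count vectors of two texts."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def looks_ai_generated(candidate: str, ai_reference: str, threshold: float = 0.5) -> bool:
    # Flag the candidate when it is unusually similar to the AI reference.
    return bow_cosine(candidate, ai_reference) >= threshold

ai_ref = "as an ai language model i cannot provide assistance with that request"
human  = "sorry mate no idea about that one you should ask someone else"
clone  = "as an ai language model i cannot help with that request"

print(looks_ai_generated(clone, ai_ref))   # near-verbatim AI phrasing
print(looks_ai_generated(human, ai_ref))   # dissimilar human phrasing
```

A real detector would of course use many reference samples and stronger features (the paper also considers statistical and model-based detectors), but the comparison-and-threshold structure is the same.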
    Research on ChatGPT’s Security Threats
    2023, 9(6):  533. 
    With the rapid development of deep learning and natural language processing, large language models represented by ChatGPT have emerged. However, while showing surprising capabilities in many fields, ChatGPT has also exposed many security threats, which has aroused concern in academia and industry. This paper first introduces the development history, working mode, and training methods of ChatGPT and its series of models, then summarizes and analyzes the security problems ChatGPT may currently encounter, dividing them into two levels: user and model. Countermeasures and solutions are then proposed according to the characteristics of ChatGPT at each stage. Finally, the paper looks forward to the development of a safe and trusted ChatGPT and large language models.
    Research on the Integration of Full Lifecycle Data Security Management and Artificial Intelligence Technology
    2023, 9(6):  543. 
    With data becoming a new production factor, China has elevated data security to the national strategic level. Driven by a new round of technological revolution and the deepening of digital transformation, artificial intelligence technology has growing development potential and is gradually empowering the field of data security management. This paper first introduces the concept and significance of data security lifecycle management, analyzes the security risks data faces at each stage of the lifecycle, and discusses the problems and challenges traditional data security management technologies face in the context of massive data processing and upgraded attack methods. It then introduces the potential advantages of artificial intelligence in solving these problems, and summarizes the currently mature AI-based data security management technologies and their typical application scenarios. Finally, the paper offers an outlook on future development trends of artificial intelligence in data security management. It aims to provide useful references for researchers and practitioners in the field and to promote the innovation and application of artificial intelligence in data security management technology.
    Research on Network Security Governance and Response of Large-scale AI Models
    2023, 9(6):  551. 
    With the continuous development of artificial intelligence technology, large-scale AI models have become an important research direction in the field. The releases of ChatGPT 4.0 and ERNIE Bot have rapidly promoted the development and application of this technology. However, the emergence of large-scale AI models has also brought new challenges to network security. This paper starts from the definition, characteristics, and applications of large-scale AI model technology and analyzes the network security situation it creates. A network security governance framework for large-scale AI models is proposed, and the steps given can serve as a reference for network security work on large-scale AI models.
    Towards a Privacy-preserving Research for AI and Blockchain Integration
    2023, 9(6):  557. 
    With the widespread attention to and application of artificial intelligence (AI) and blockchain technologies, the privacy protection techniques arising from their integration are of notable significance. In addition to protecting individual privacy, these techniques also guarantee the security and dependability of data. This paper first presents an overview of AI and blockchain, summarizing their combination and the privacy protection technologies derived from it. It then explores specific application scenarios in data encryption, de-identification, multi-tier distributed ledgers, and k-anonymity methods. Moreover, the paper evaluates five critical aspects of AI-blockchain-integrated privacy protection systems: authorization management, access control, data protection, network security, and scalability. It analyzes their deficiencies and the underlying causes, offering corresponding suggestions, and classifies and summarizes privacy protection techniques by AI-blockchain application scenario and technical scheme. In conclusion, the paper outlines future directions for privacy protection technologies emerging from AI and blockchain integration, including enhancing efficiency and security to achieve more comprehensive privacy protection.
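Among the techniques surveyed above is k-anonymity. As a minimal sketch (our illustration, not the paper's scheme; the records, column names, and generalization rule are invented), a dataset satisfies k-anonymity when every combination of quasi-identifier values is shared by at least k records, which generalization can enforce:

```python
# Sketch of a k-anonymity check: exact quasi-identifiers (age, zip) make
# records unique; generalizing age to decade buckets groups them.
from collections import Counter

def is_k_anonymous(records, quasi_ids, k):
    """True if every quasi-identifier combination occurs at least k times."""
    groups = Counter(tuple(r[q] for q in quasi_ids) for r in records)
    return all(count >= k for count in groups.values())

def generalize_age(record):
    # Generalize an exact age to a decade bucket, e.g. 34 -> "30-39".
    lo = (record["age"] // 10) * 10
    return {**record, "age": f"{lo}-{lo + 9}"}

raw = [
    {"age": 34, "zip": "100", "disease": "flu"},
    {"age": 36, "zip": "100", "disease": "cold"},
    {"age": 52, "zip": "100", "disease": "flu"},
    {"age": 57, "zip": "100", "disease": "cold"},
]
print(is_k_anonymous(raw, ["age", "zip"], 2))    # exact ages are unique
anon = [generalize_age(r) for r in raw]
print(is_k_anonymous(anon, ["age", "zip"], 2))   # decade buckets group records
```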
    Research on Image Steganography and Extraction Scheme Based on Implicit Symmetric Generative Adversarial Network
    2023, 9(6):  566. 
    Aiming at the problems in image steganography that the quality of the carrier image degrades and the embedding is vulnerable to attacks when a secret image is embedded, this paper proposes an image steganography and extraction scheme based on an implicit symmetric generative adversarial network. The scheme first abstracts the task of image steganography and extraction into a mathematical optimization problem. An implicit symmetric generative adversarial network model is then proposed for this optimization problem. The network contains two independent generative adversarial subnetworks: a steganographic adversarial subnetwork and an extraction adversarial subnetwork. In the steganographic adversarial subnetwork, an encoder first converts the cover image and the secret image into a set of high-dimensional feature vectors containing sufficient information about both; a decoder then reconstructs these feature vectors into images embedded with the secret information. In the extraction adversarial subnetwork, the image embedded with secret information is passed through another encoder-decoder pair to extract the hidden image. Finally, a loss function suited to the model is designed. Experimental results show that the proposed scheme achieves high image quality and maintains good robustness against various common attacks.
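The abstract states that steganography and extraction are first abstracted into a mathematical optimization problem. A generic GAN-style formulation of such an objective (our illustration; the paper's exact loss, symbols, and weights $\lambda_i$ are not reproduced here) writes the stego image as $c' = D_1(E_1(c, s))$ for cover $c$ and secret $s$, and balances imperceptibility, recoverability, and an adversarial discriminator $A$:

```latex
\min_{E_1, D_1, E_2, D_2} \; \max_{A} \;
  \lambda_1 \underbrace{\lVert c - c' \rVert_2^2}_{\text{imperceptibility}}
+ \lambda_2 \underbrace{\lVert s - D_2(E_2(c')) \rVert_2^2}_{\text{recoverability}}
+ \lambda_3 \underbrace{\mathbb{E}\big[\log A(c) + \log\big(1 - A(c')\big)\big]}_{\text{adversarial term}},
\qquad c' = D_1(E_1(c, s))
```

Here $E_1, D_1$ form the steganographic subnetwork and $E_2, D_2$ the extraction subnetwork, mirroring the symmetric structure described above.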
    Image Steganalysis Method Based on Multi-attention Mechanism and Siamese Network
    2023, 9(6):  573. 
    Aiming at the problem of extracting more significant steganographic features from images to improve the detection accuracy of steganalysis, a Siamese-network image steganalysis method based on a multi-attention mechanism is proposed. The method uses feature fusion to let the steganalysis model extract richer steganographic features. First, a Siamese subnetwork is designed, composed of a ParNet block, a depthwise separable convolution block, a normalization-based attention module, a squeeze-and-excitation module, and an external attention module; the multi-branch structure and multi-attention mechanism extract features that are more useful for classification and improve the detection ability of the model. Cyclical Focal loss is then used to adjust the weights of training samples at different stages of training, improving the training effect. Experiments on the BOSSbase 1.01 dataset cover five adaptive steganography algorithms: WOW, S-UNIWARD, HUGO, MiPOD, and HILL. The results show that this method outperforms SRNet, ZhuNet, and SiaStegNet in detection accuracy while using fewer parameters.
    A Method of Active Defense for Intelligent Manufacturing Device Swarms Based on Remote Attestation
    2023, 9(6):  580. 
    With the development of artificial intelligence technology, intelligent manufacturing has become an inevitable choice for enterprise production. However, a compromised device not only causes issues such as confidentiality leaks and production chain errors but can also serve as a springboard for attackers, affecting the security of the entire swarm. This paper proposes a proactive defense solution for intelligent manufacturing swarms based on remote attestation (SecRA). SecRA generates an independent challenge for each device, enabling point-to-point communication between gateways and devices. By extending the functionality of gateways, SecRA uses asynchronous communication to adapt to the existing network structure. In addition, based on a challenge-query attestation protocol, communication and computation costs are shifted to resource-rich gateways, greatly reducing the burden on devices. Finally, the efficiency and feasibility of SecRA are verified experimentally.
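The per-device fresh challenge described above is the core of most remote attestation protocols. A minimal challenge-response sketch (illustrative only, not SecRA itself; the key, firmware strings, and HMAC construction are assumptions): the gateway sends a random nonce, the device returns an HMAC over the nonce and a measurement of its firmware, and the gateway compares against the expected value for known-good firmware:

```python
# Sketch of challenge-response remote attestation between a gateway and a
# device sharing a symmetric key. A fresh nonce per challenge prevents replay.
import hashlib, hmac, os

DEVICE_KEY = b"per-device shared key (assumption)"
GOOD_FIRMWARE = b"firmware v1.0"

def measure(firmware: bytes) -> bytes:
    """Hash the firmware image to obtain its measurement."""
    return hashlib.sha256(firmware).digest()

def device_respond(key: bytes, challenge: bytes, firmware: bytes) -> bytes:
    """Device side: authenticate (challenge || measurement) with the key."""
    return hmac.new(key, challenge + measure(firmware), hashlib.sha256).digest()

def gateway_verify(key: bytes, challenge: bytes, response: bytes) -> bool:
    """Gateway side: recompute the response expected from good firmware."""
    expected = hmac.new(key, challenge + measure(GOOD_FIRMWARE), hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = os.urandom(16)   # independent challenge per device
ok  = gateway_verify(DEVICE_KEY, challenge, device_respond(DEVICE_KEY, challenge, GOOD_FIRMWARE))
bad = gateway_verify(DEVICE_KEY, challenge, device_respond(DEVICE_KEY, challenge, b"tampered"))
print(ok, bad)
```

In SecRA the verification burden sits on the resource-rich gateway, as in this sketch, while the device only computes one HMAC.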
    Neural Network Backdoor Detection Method Based on Multi-level Measurement Difference
    2023, 9(6):  587. 
    Deep neural networks have achieved state-of-the-art performance in a variety of tasks. However, because deep learning models lack transparency and interpretability, a model will behave abnormally and its performance will degrade when a backdoor planted by a malicious attacker is triggered at inference time. To address this problem, this paper proposes a backdoor detection scheme based on multi-level measurement differences (MultMeasure). Test cases are generated against the source model and an authorized model into which a backdoor has been maliciously injected, and both white-box and black-box measurements are computed over them. Finally, a statistical threshold on the measured differences determines whether a backdoor has been injected into the model. Experiments show that MultMeasure performs well in backdoor attack scenarios with trojaned models, under both multiple triggers and invisible triggers. Compared with existing detection schemes from recent years, MultMeasure shows better effectiveness and stability.
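The black-box side of the idea above can be sketched in a few lines (an illustration of the difference-and-threshold structure, not MultMeasure itself; the toy "models" and threshold are assumptions): generate test cases, measure how far a suspect model's outputs diverge from a clean reference, and flag the model when the divergence exceeds a statistical threshold:

```python
# Sketch of measurement-difference backdoor detection: a backdoored model
# agrees with the reference except on its trigger region, so its mean output
# divergence over generated test cases crosses the threshold.
import statistics

def clean_model(x: float) -> float:
    return 2.0 * x

def backdoored_model(x: float) -> float:
    # Behaves like the clean model except on a (hidden) trigger region.
    return 1000.0 if 0.4 < x < 0.6 else 2.0 * x

def divergence_score(model_a, model_b, test_cases):
    """Mean absolute output difference over the test cases (black-box)."""
    return statistics.mean(abs(model_a(x) - model_b(x)) for x in test_cases)

def is_backdoored(model, reference, test_cases, threshold=1.0):
    return divergence_score(model, reference, test_cases) > threshold

cases = [i / 20 for i in range(21)]   # generated test inputs
print(is_backdoored(clean_model, clean_model, cases))
print(is_backdoored(backdoored_model, clean_model, cases))
```

MultMeasure additionally combines white-box measurements (internal activations) with this black-box view before thresholding.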
    Detection and Classification Method of Network Attacks Based on Generalized Neural Networks
    2023, 9(6):  593. 
    Nowadays the virtual world is becoming more and more complex, and network attacks and emerging security threats are gradually increasing. It is therefore necessary to study intelligent detection and classification methods for network attacks, so as to comprehensively observe network activity and prevent malicious behavior. This paper proposes an intrusion detection system based on a generalized regression neural network (GRNN), which can intelligently detect and classify malicious network attacks, and tests it on the mainstream NSL-KDD dataset. The experimental results show that the proposed technique identifies and classifies malicious behavior more effectively than other current attack detection techniques.
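A GRNN is essentially Gaussian-kernel-weighted regression over the training set, which makes a minimal sketch possible in pure Python (our illustration, not the paper's implementation; the tiny two-feature dataset is an invented stand-in for NSL-KDD records):

```python
# Sketch of GRNN-style classification: each training point votes for its
# class with a Gaussian kernel weight; the class with the largest total wins.
import math

def grnn_predict(train_x, train_y, x, sigma=0.5):
    """Return the class whose kernel-weighted vote is largest."""
    scores = {}
    for xi, yi in zip(train_x, train_y):
        d2 = sum((a - b) ** 2 for a, b in zip(xi, x))
        w = math.exp(-d2 / (2 * sigma ** 2))   # Gaussian kernel weight
        scores[yi] = scores.get(yi, 0.0) + w
    return max(scores, key=scores.get)

# label 0 = normal traffic, 1 = attack (illustrative feature vectors)
X = [(0.1, 0.2), (0.2, 0.1), (0.9, 0.8), (0.8, 0.9)]
y = [0, 0, 1, 1]
print(grnn_predict(X, y, (0.15, 0.15)))
print(grnn_predict(X, y, (0.85, 0.85)))
```

The smoothing parameter sigma is the only hyperparameter, one reason GRNNs are attractive for intrusion detection compared with networks needing iterative training.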
    Research on the Disclosure and Sharing Policy of Cybersecurity Vulnerabilities in China and the United States
    2023, 9(6):  602. 
    With the increasing scale and complexity of computer software systems, vulnerability attacks on software and systems are becoming more frequent and attack methods more diverse. Many countries have published vulnerability management regulations to mitigate the threat software and system vulnerabilities pose to national cyberspace security. Proper disclosure and sharing of security vulnerabilities help security researchers learn about security threats quickly and reduce vulnerability repair costs through communication, and have become essential to mitigating security risks. This paper introduces the public vulnerability databases and summarizes the policies and regulations of China and the United States on cybersecurity vulnerability disclosure and sharing. It then discusses possible problems and countermeasures in vulnerability disclosure and sharing in China, so that security researchers can better understand the security vulnerability disclosure process and sharing-related regulations and can study security vulnerabilities to the extent permitted by regulation.