Most Downloaded Articles


    In last 2 years
    ChatGPT’s Applications, Status and Trends in the Field of Cyber Security
    Journal of Information Security Research    2023, 9 (6): 500-.
    Abstract (907)      PDF (2555KB) (717)
    ChatGPT, as a large language model technology, demonstrates extremely strong language understanding and text generation capabilities. It has not only attracted tremendous attention across various industries but also brought new transformations to the field of cybersecurity. Currently, research on ChatGPT in the cybersecurity field is still in its infancy. To help researchers systematically understand the research status of ChatGPT in cybersecurity, this paper provides the first comprehensive summary of ChatGPT’s applications in the field of cybersecurity and potential accompanying security issues. The article first outlines the development of large language model technologies and briefly introduces the technology and features of ChatGPT. Then, it discusses the enabling effects of ChatGPT in the cybersecurity field from two perspectives: assisting attacks and assisting defense. This includes vulnerability discovery, exploitation and remediation, malicious software detection and identification, phishing email generation and detection, and potential use cases in security operations scenarios. Furthermore, the article delves into the accompanying risks of ChatGPT in the cybersecurity field, including content risks and prompt injection attacks, providing a detailed analysis and discussion of these risks. Finally, the paper looks into the future of ChatGPT in the cybersecurity field from the perspectives of security enablement and accompanying security, pointing out the direction for future research on ChatGPT in the cybersecurity domain.
    Reference | Related Articles | Metrics
    Journal of Information Security Research    2023, 9 (E2): 4-.
    Abstract (75)      PDF (2945KB) (594)
    Related Articles | Metrics
    Research on Network Security Governance and Response of Large-scale AI Models
    Journal of Information Security Research    2023, 9 (6): 551-.
    Abstract (458)      PDF (1101KB) (433)
    With the continuous development of artificial intelligence technology, large-scale AI model technology has become an important research direction in the field of artificial intelligence. The release of ChatGPT-4.0 and ERNIE Bot has rapidly promoted the development and application of this technology. However, the emergence of large-scale AI model technology has also brought new challenges to network security. This paper starts with the definition, characteristics, and applications of large-scale AI model technology, and analyzes the network security situation under large-scale AI models. A network security governance framework for large-scale AI models is proposed, and the given steps can serve as a reference for network security work on large-scale AI models.
    Reference | Related Articles | Metrics
    Journal of Information Security Research    2023, 9 (6): 498-.
    Abstract (347)      PDF (472KB) (427)
    Related Articles | Metrics
    Towards a Privacy-preserving Research for AI and Blockchain Integration
    Journal of Information Security Research    2023, 9 (6): 557-.
    Abstract (637)      PDF (1307KB) (339)
    With the widespread attention and application of artificial intelligence (AI) and blockchain technologies, privacy protection techniques arising from their integration are of notable significance. In addition to protecting the privacy of individuals, these techniques also guarantee the security and dependability of data. This paper initially presents an overview of AI and blockchain, summarizing their combination along with the derived privacy protection technologies. It then explores specific application scenarios in data encryption, de-identification, multi-tier distributed ledgers, and k-anonymity methods. Moreover, the paper evaluates five critical aspects of AI-blockchain integrated privacy protection systems, including authorization management, access control, data protection, network security, and scalability. Furthermore, it analyzes the deficiencies and their actual causes, offering corresponding suggestions. This research also classifies and summarizes privacy protection techniques based on AI-blockchain application scenarios and technical schemes. In conclusion, this paper outlines the future directions of privacy protection technologies emerging from AI and blockchain integration, including enhancing efficiency and security to achieve more comprehensive protection of AI privacy.
    Reference | Related Articles | Metrics
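    The k-anonymity methods mentioned in the abstract above can be illustrated with a minimal sketch: a dataset is k-anonymous when every combination of quasi-identifiers appears in at least k records. The column names and records below are hypothetical, and real systems would also generalize or suppress values to reach the target k.

```python
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    """Check whether every quasi-identifier combination occurs at least k times."""
    combos = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return all(count >= k for count in combos.values())

# Hypothetical example: ZIP code and birth year are the quasi-identifiers.
records = [
    {"zip": "100085", "birth_year": 1990, "diagnosis": "A"},
    {"zip": "100085", "birth_year": 1990, "diagnosis": "B"},
    {"zip": "100086", "birth_year": 1985, "diagnosis": "C"},
]
print(is_k_anonymous(records, ["zip", "birth_year"], k=2))  # False: the 100086 group has only one record
```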
    Research on Content Detection Generated by Large Language Model and the Mechanism of Bypassing
    Journal of Information Security Research    2023, 9 (6): 524-.
    Abstract (504)      PDF (1924KB) (333)
    In recent years, there has been a surge in the development of large language models. Although AI chatbots such as ChatGPT have large-scale safety confrontation mechanisms built in, attackers can still craft question-and-answer patterns that bypass these mechanisms and use the models to automatically produce phishing emails and carry out network attacks. In this context, how to identify text generated by AI chatbots has become a hot issue. To carry out LLM-generated content detection experiments, our team collected a number of question-and-answer data samples from an Internet social platform and the ChatGPT platform, and proposed a series of detection strategies according to the different conditions of AI text availability. These include text similarity analysis based on online controllable AI samples, text data mining based on statistical differences under offline conditions, adversarial analysis based on the LLM generation method when AI samples are not available, and AI model analysis based on building a classifier by fine-tuning the target LLM itself. We calculated and compared the detection capabilities of the analysis engine in each case. In addition, from the perspective of network attack and defense, we present several evasion techniques against AI text detection engines based on the characteristics of the detection strategies.
    Reference | Related Articles | Metrics
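    One way to picture the statistics-based, offline detection strategy described above is a perplexity score from a reference language model: text produced by a model tends to be more predictable, and therefore lower in perplexity, than human-written text. The sketch below assumes the Hugging Face transformers package and the public gpt2 checkpoint, and the threshold is a hypothetical value that would have to be tuned on labelled samples; it is not the detection engine built by the authors.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under GPT-2; lower values suggest more 'model-like' text."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return torch.exp(out.loss).item()

THRESHOLD = 35.0  # hypothetical cut-off, tuned on labelled human/AI samples

def looks_ai_generated(text: str) -> bool:
    return perplexity(text) < THRESHOLD
```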
    Journal of Information Security Research    2023, 9 (E1): 105-.
    Abstract (597)      PDF (1450KB) (325)
    Reference | Related Articles | Metrics
    Journal of Information Security Research    2024, 10 (E1): 236-.
    Abstract (383)      PDF (796KB) (306)
    Reference | Related Articles | Metrics
    Security Risks and Countermeasures to Artificial Intelligence
    Journal of Information Security Research    2024, 10 (2): 101-.
    Abstract (223)      PDF (469KB) (304)
    Related Articles | Metrics
    Research on Privacy Protection Technology in Federated Learning
    Journal of Information Security Research    2024, 10 (3): 194-.
    Abstract (271)      PDF (1252KB) (289)
    In federated learning, multiple participants jointly train models through parameter coordination without sharing raw data. However, the extensive parameter exchange in this process leaves the model vulnerable to threats not only from external users but also from internal participants. Therefore, research on privacy protection techniques in federated learning is crucial. This paper introduces the current state of research on privacy protection in federated learning. It classifies the security threats to federated learning into external attacks and internal attacks. Based on this classification, it summarizes external attack techniques such as model inversion attacks, external reconstruction attacks, and external inference attacks, as well as internal attack techniques such as poisoning attacks, internal reconstruction attacks, and internal inference attacks. From the perspective of the correspondence between attacks and defenses, the paper summarizes data perturbation techniques such as central differential privacy, local differential privacy, and distributed differential privacy, as well as process encryption techniques such as homomorphic encryption, secret sharing, and trusted execution environments. Finally, the paper analyzes the difficulties of federated learning privacy protection technology and identifies the key directions for its improvement.
    Reference | Related Articles | Metrics
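    The data perturbation defenses summarized above can be made concrete with a minimal local differential privacy sketch: each client clips its update to a fixed norm and adds Gaussian noise before uploading it. The clip norm and noise multiplier below are illustrative values, not parameters from the paper.

```python
import torch

def privatize_update(grad: torch.Tensor, clip_norm: float = 1.0, noise_multiplier: float = 1.1) -> torch.Tensor:
    """Clip an update to `clip_norm` and add calibrated Gaussian noise (Gaussian mechanism)."""
    scale = min(1.0, clip_norm / (grad.norm().item() + 1e-12))
    clipped = grad * scale
    noise = torch.normal(0.0, noise_multiplier * clip_norm, size=grad.shape)
    return clipped + noise

# Each client would apply this to its local update before uploading it to the server:
update = torch.randn(1000)
print(privatize_update(update).norm())
```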
    Research and Thinking on Data Classification and Grading of Important Information Systems
    Journal of Information Security Research    2023, 9 (7): 631-.
    Abstract (271)      PDF (1882KB) (268)
    With the development of information technology and networking, incidents surrounding data security are also increasing. As data becomes a new production factor, ensuring the security of important data is particularly important. The “Data Security Law of the People’s Republic of China” clearly stipulates that the country should establish a data classification and grading protection system to implement classified and graded protection for data. This paper studies China’s data security management regulations and policies, analyzes the degree of impact and the objects affected by data damage, proposes specific data classification and grading methods, and provides security protection and governance measures under data classification and grading management based on the industry characteristics and application scenarios of government data. This supports the openness and sharing of data under security protection, and provides a reference for future data classification and grading protection.
    Reference | Related Articles | Metrics
    Research on ChatGPT’s Security Threats
    Journal of Information Security Research    2023, 9 (6): 533-.
    Abstract (296)      PDF (1801KB) (264)
    With the rapid development of deep learning and natural language processing technology, large language models represented by ChatGPT have emerged. However, while showing surprising capabilities in many fields, ChatGPT has also exposed many security threats, which has aroused the concern of academia and industry. This paper first introduces the development history, working mode, and training methods of ChatGPT and its series of models, then summarizes and analyzes the various security problems that ChatGPT may currently encounter, dividing them into two levels: user and model. Countermeasures and solutions are then proposed according to the characteristics of ChatGPT at each stage. Finally, this paper looks forward to the development of a safe and trusted ChatGPT and large language models.
    Reference | Related Articles | Metrics
    Research on the Integration of Full Lifecycle Data Security Management and Artificial Intelligence Technology
    Journal of Information Security Research    2023, 9 (6): 543-.
    Abstract (297)      PDF (1143KB) (260)
    With data becoming a new production factor, China has elevated data security to the level of national strategy. Driven by a new round of technological revolution and the deepening of digital transformation, artificial intelligence technology has growing development potential and is gradually empowering the field of data security management. Firstly, the paper introduces the concept and significance of data security lifecycle management, analyzes the security risks faced by data in the various stages of the lifecycle, and further discusses the problems and challenges faced by traditional data security management technologies in the context of massive data processing and upgraded attack methods. Then, the paper introduces the potential advantages of artificial intelligence in solving these problems and challenges, and summarizes the currently mature data security management technologies based on artificial intelligence and their typical application scenarios. Finally, the paper provides an outlook on the future development trends of artificial intelligence technologies in the field of data security management. This paper aims to provide useful references for researchers and practitioners in the field of data security management, and to promote the innovation and application of artificial intelligence in data security management technology.
    Reference | Related Articles | Metrics
    Key Technologies and Research Prospects of Privacy Computing
    Journal of Information Security Research    2023, 9 (8): 714-.
    Abstract (348)      PDF (1814KB) (254)
    Privacy computing, as an important technical means that takes into account both data circulation and privacy protection, can effectively break down “data island” barriers while ensuring data security; it enables open data sharing and promotes the deep mining and use of data as well as cross-domain integration. In this paper, the background knowledge, basic concepts, and architecture of privacy computing are introduced; the basic concepts of three key technologies of privacy computing, namely secure multiparty computation, federated learning, and trusted execution environments, are elaborated; existing studies on privacy security are reviewed; and a multi-dimensional comparison and summary of the differences among the three key technologies is made. On this basis, future research directions of privacy computing are discussed in terms of its technical integration with blockchain, deep learning, and knowledge graphs.
    Reference | Related Articles | Metrics
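    Of the three key technologies elaborated above, secure multiparty computation is the easiest to illustrate compactly: additive secret sharing lets several parties jointly compute a sum without any single party seeing the inputs. The sketch below is a toy example over a fixed modulus, not a production MPC protocol.

```python
import secrets

MOD = 2**61 - 1  # arbitrary prime-sized modulus for this toy example

def share(value: int, n_parties: int = 3):
    """Split `value` into n additive shares modulo MOD."""
    shares = [secrets.randbelow(MOD) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % MOD)
    return shares

def reconstruct(shares):
    return sum(shares) % MOD

a_shares, b_shares = share(25), share(17)
# Each party adds its own shares locally; only the recombined result reveals the sum.
sum_shares = [(x + y) % MOD for x, y in zip(a_shares, b_shares)]
print(reconstruct(sum_shares))  # 42
```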
    Research on Identity Authentication Technology Based on Block Chain and PKI
    Journal of Information Security Research    2024, 10 (2): 148-.
    Abstract (208)      PDF (1573KB) (254)
    Public key infrastructure (PKI) is a security system based on asymmetric cryptographic algorithms and digital certificates that realizes identity authentication and encrypted communication, operating on the principle of trust transmission from a trust anchor. However, this technology has the following problems: the CA center is unique, creating a single point of failure, and the authentication process involves a large number of operations, such as certificate parsing, signature verification, and certificate chain verification. To solve these problems, this paper builds an identity authentication model based on Changan Chain and proposes an identity authentication scheme based on Changan Chain digital certificates and public key infrastructure. Theoretical analysis and experimental data demonstrate that this scheme reduces certificate parsing, signature verification, and other operations, simplifies the authentication process, and improves authentication efficiency.
    Reference | Related Articles | Metrics
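    For readers unfamiliar with the PKI operations that the scheme aims to reduce, the sketch below shows one certificate-based signature verification using the Python cryptography package, assuming an RSA certificate; certificate chain validation and the Changan Chain integration described in the paper are not shown.

```python
from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding

def verify_with_certificate(cert_pem: bytes, message: bytes, signature: bytes) -> bool:
    """Verify `signature` over `message` with the RSA public key in a PEM certificate."""
    cert = x509.load_pem_x509_certificate(cert_pem)
    try:
        cert.public_key().verify(signature, message, padding.PKCS1v15(), hashes.SHA256())
        return True
    except Exception:
        return False
```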
    Research for Zero Trust Security Model
    Journal of Information Security Research    2024, 10 (10): 886-.
    Abstract (279)      PDF (2270KB) (245)
    Zero trust is considered a new security paradigm. From the perspective of security models, this paper reveals the deepening and integration of security models in the zero trust architecture, with “identity and data” as the main focus. Zero trust establishes a panoramic control object chain with identity at its core, builds defense-in-depth mechanisms around object attributes, functions, and lifecycles, and centrally redirects the flow of information between objects. It integrates information channels to achieve layered protection and fine-grained, dynamic access control. Finally, from an attacker’s perspective, it sets up proactive defense mechanisms at key nodes along the information flow path. Since zero trust systems are bound to become high-value assets, this paper also explores the essential issues of inherent security and resilient service capability in zero trust systems. Through the analysis of the security models embedded in zero trust and of its inherent security, this paper aims to provide a clearer technical development path for the architectural design, technological evolution, and self-protection of zero trust in practice.
    Reference | Related Articles | Metrics
    Journal of Information Security Research    2024, 10 (E2): 105-.
    Abstract (363)      PDF (929KB) (241)
    Reference | Related Articles | Metrics
    Malicious Client Detection and Defense Method for Federated Learning
    Journal of Information Security Research    2024, 10 (2): 163-.
    Abstract (408)      PDF (806KB) (239)
    Federated learning allows participating clients to collaborate in training machine learning models without sharing their private data. Since the central server cannot control the behavior of clients, malicious clients may corrupt the global model by sending manipulated local gradient updates, and there may also be unreliable clients whose data quality is low but still of some value. To address these problems, this paper proposes FedMDD, a malicious client detection and defense approach for federated learning, which handles detected malicious and unreliable clients in different ways based on their local gradient updates, while defending against sign flipping, additive noise, single-label flipping, multi-label flipping, and backdoor attacks. Four baseline algorithms are compared on two datasets, and the experimental results show that FedMDD can successfully defend against various types of attacks in a training environment containing 50% malicious clients and 10% unreliable clients, with better results in both improving model test accuracy and reducing backdoor accuracy.
    Related Articles | Metrics
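    FedMDD itself is not reproduced here, but the general idea of screening local updates before aggregation can be sketched as follows: updates whose direction deviates strongly from a robust reference (here, the coordinate-wise median of all updates) are treated as suspicious and excluded. The similarity threshold is hypothetical, and this is only an illustration of the class of defense, not the paper’s algorithm.

```python
import torch

def filter_suspicious_updates(updates, cos_threshold: float = 0.0):
    """Keep only client updates whose cosine similarity to the median update exceeds a threshold."""
    stacked = torch.stack(updates)                    # [num_clients, dim]
    reference = stacked.median(dim=0).values          # robust reference direction
    kept = []
    for u in updates:
        cos = torch.nn.functional.cosine_similarity(u, reference, dim=0)
        if cos.item() > cos_threshold:
            kept.append(u)
    return kept

# The server then averages only the kept updates:
updates = [torch.randn(100) for _ in range(10)]
benign = filter_suspicious_updates(updates)
global_update = torch.stack(benign).mean(dim=0) if benign else torch.zeros(100)
```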
    Survey of Intelligent Vulnerability Mining and Cyberspace Threat Detection
    Journal of Information Security Research    2023, 9 (10): 932-.
    Abstract (265)      PDF (1093KB) (238)
    At present, threats in cyberspace are becoming more and more serious, and a large number of studies have focused on cyberspace security defense techniques and systems. Vulnerability mining techniques can be applied to detect and repair vulnerabilities in time before network attacks occur, reducing the risk of intrusion, while threat detection techniques can be applied during and after network attacks to detect threats in a timely manner and respond to them, reducing the harm and loss caused by intrusion. This paper analyzes and summarizes research on vulnerability mining and cyberspace threat detection based on intelligent methods. For intelligent vulnerability mining, the current research progress is summarized across several application categories combined with artificial intelligence techniques, namely vulnerability patch identification, vulnerability prediction, code comparison, and fuzz testing. For cyberspace threat detection, the current research progress is summarized according to the information carriers involved in detection, namely network traffic, host data, malicious files, and cyber threat intelligence.
    Reference | Related Articles | Metrics
    The Status and Trends of Confidential Computing
    Journal of Information Security Research    2024, 10 (1): 2-.
    Abstract (242)      PDF (1466KB) (232)
    Related Articles | Metrics
    A Review of Hardware Acceleration Research on Zero-knowledge Proofs
    Journal of Information Security Research    2024, 10 (7): 594-.
    Abstract (237)      PDF (1311KB) (232)
    Zero-knowledge proofs (ZKP) are cryptographic protocols that allow a prover to demonstrate the correctness of a statement to a verifier without revealing any additional information. This article primarily surveys research on accelerating zero-knowledge proofs, with a particular focus on ZKPs based on quadratic arithmetic programs (QAP) and inner product arguments (IPA). Studies have shown that the computational efficiency of zero-knowledge proofs can be significantly improved through hardware acceleration technologies, including the use of GPUs, ASICs, and FPGAs. The article first introduces the definition and classification of zero-knowledge proofs, as well as the difficulties encountered in their current application. It then discusses in detail the acceleration methods of different hardware systems, their implementation principles, and their performance improvements over traditional CPUs. For example, cuZK and GZKP use GPUs to perform multi-scalar multiplication (MSM) and the number theoretic transform (NTT), while PipeZK, PipeMSM, and BSTMSM accelerate these computations through ASICs and FPGAs. Additionally, the article mentions applications of zero-knowledge proofs in blockchain for concealing transaction details, such as the private transactions in ZCash. Lastly, the article proposes future research directions, including accelerating more types of ZKPs and applying hardware acceleration to practical scenarios, in order to resolve issues of inefficiency and promote the widespread application of zero-knowledge proof technology.
    Reference | Related Articles | Metrics
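    The number theoretic transform (NTT) named above is one of the two kernels that the surveyed GPU, ASIC, and FPGA designs accelerate. The sketch below is a plain iterative radix-2 NTT over a small NTT-friendly prime field, useful only for understanding the butterfly computation that the hardware parallelizes at scale.

```python
P = 998244353          # NTT-friendly prime: 119 * 2**23 + 1
PRIMITIVE_ROOT = 3

def ntt(coeffs, invert=False):
    """Iterative radix-2 number theoretic transform over GF(P); len(coeffs) must be a power of two."""
    a = list(coeffs)
    n = len(a)
    # Bit-reversal permutation.
    j = 0
    for i in range(1, n):
        bit = n >> 1
        while j & bit:
            j ^= bit
            bit >>= 1
        j |= bit
        if i < j:
            a[i], a[j] = a[j], a[i]
    # Butterfly passes.
    length = 2
    while length <= n:
        w_len = pow(PRIMITIVE_ROOT, (P - 1) // length, P)
        if invert:
            w_len = pow(w_len, P - 2, P)
        for start in range(0, n, length):
            w = 1
            for k in range(start, start + length // 2):
                u, v = a[k], a[k + length // 2] * w % P
                a[k], a[k + length // 2] = (u + v) % P, (u - v) % P
                w = w * w_len % P
        length <<= 1
    if invert:
        n_inv = pow(n, P - 2, P)
        a = [x * n_inv % P for x in a]
    return a

vals = ntt([1, 2, 3, 4, 0, 0, 0, 0])
print(ntt(vals, invert=True))   # recovers [1, 2, 3, 4, 0, 0, 0, 0]
```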
    Research on Artificial Intelligence Data Falsification Risk Based on GPT Model
    Journal of Information Security Research    2023, 9 (6): 518-.
    Abstract (258)      PDF (1887KB) (228)
    The rapid development and application of artificial intelligence technology have led to the emergence of AIGC (artificial intelligence generated content), which has significantly enhanced productivity. ChatGPT, a product built on AIGC, has gained popularity worldwide due to its diverse application scenarios and has spurred rapid commercialization. This paper takes the risk of artificial intelligence data forgery as its research goal and the GPT model as its research object, and, by analyzing the security risks that have already been exposed, focuses on the possible causes of data forgery and how it can be carried out. Building on the offensive and defensive countermeasures of traditional cyberspace security and data security, the paper conducts a practical study of data forgery based on model fine-tuning and speculates on data forgery exploitation scenarios that may arise after the widespread commercialization of artificial intelligence. Finally, the paper puts forward suggestions on how to deal with the risk of data forgery and provides directions for avoiding this risk before the large-scale application of artificial intelligence in the future.
    Reference | Related Articles | Metrics
    A Review of Adversarial Attack on Autonomous Driving Perception System
    Journal of Information Security Research    2024, 10 (9): 786-.
    Abstract (284)      PDF (1560KB) (227)
    The autonomous driving perception system collects surrounding environmental information through various sensors and processes these data to detect vehicles, pedestrians, and obstacles, providing real-time foundational data for subsequent control and decision-making functions. Since sensors are directly exposed to the external environment and often lack the ability to judge the credibility of inputs, perception systems are potential targets for various attacks. Among these, the adversarial example attack is a mainstream attack method characterized by high concealment and high harm: attackers manipulate or forge the input data of the perception system to deceive the perception algorithms, leading the system to produce incorrect outputs. Based on existing literature, this paper systematically summarizes how autonomous driving perception systems work and analyzes adversarial example attack schemes and defense strategies targeting them. In particular, the paper subdivides adversarial examples against autonomous driving perception systems into signal-based and object-based attack schemes. It also comprehensively discusses defense strategies against adversarial example attacks on perception systems, subdividing them into anomaly detection, model defense, and physical defense. Finally, the paper discusses future research directions for adversarial example attacks targeting autonomous driving perception systems.
    Reference | Related Articles | Metrics
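    The simplest signal-level adversarial example attack covered by such reviews is the fast gradient sign method (FGSM), which perturbs an input in the direction of the loss gradient within a small L-infinity budget. The sketch below assumes a PyTorch image classifier with inputs normalized to [0, 1]; physical-world attacks on sensors are far more involved.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model: torch.nn.Module, image: torch.Tensor, label: torch.Tensor, epsilon: float = 8 / 255):
    """Return an adversarial version of `image` (shape [1, C, H, W]) within an L-infinity budget."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)   # label: shape [1], class index
    loss.backward()
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()
```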
    Android Malware Multiclassification Model Based on Transformer
    Journal of Information Security Research    2023, 9 (12): 1138-.
    Abstract (221)      PDF (2073KB) (226)
    Due to its open-source nature and openness, the Android system has become a popular target for malware attacks, and there is currently a large body of research on Android malware detection, in which machine learning algorithms are widely used. In this paper, the Transformer algorithm is used to classify and detect grayscale images converted from the classes.dex files of Android applications, reaching an accuracy of 86%, which is higher than that of CNN, MLP, and other models.
    Reference | Related Articles | Metrics
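    The preprocessing step the abstract relies on, turning a classes.dex file into a grayscale image that a Transformer (or CNN/MLP) can classify, can be sketched as follows. The fixed 256-pixel width is a common but hypothetical choice; the paper’s exact conversion parameters are not reproduced here.

```python
import numpy as np
from PIL import Image

def dex_to_grayscale(dex_path: str, width: int = 256) -> Image.Image:
    """Interpret the raw bytes of a classes.dex file as an 8-bit grayscale image."""
    data = np.fromfile(dex_path, dtype=np.uint8)
    height = int(np.ceil(len(data) / width))
    padded = np.zeros(width * height, dtype=np.uint8)
    padded[: len(data)] = data
    return Image.fromarray(padded.reshape(height, width), mode="L")

# img = dex_to_grayscale("classes.dex"); img.resize((224, 224)) would then feed the classifier.
```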
    Classification and Grading Method of Transportation Government Data
    Journal of Information Security Research    2023, 9 (8): 808-.
    Abstract (216)      PDF (1008KB) (224)
    In order to promote the open sharing of government data and improve data security, the classification and grading of government data resources urgently needs to be addressed. This paper summarizes domestic and foreign experience with government data classification and grading, and uses a hybrid classification method combining “surface” and “line” approaches to build a classification framework for transportation government data. A five-level data grading model is formed based on a grading method driven by data security risk analysis, and the effect of the method is verified with actual data. The transportation government data classification and grading method can effectively assist the relevant departments in classifying and grading government data and protecting important data, and it promotes the industry’s level of data security governance and the advancement of security technology.
    Reference | Related Articles | Metrics
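    As an illustration of how a five-level grading model driven by risk analysis can be operationalized, the sketch below maps hypothetical impact-scope and impact-degree scores to a grade from 1 to 5. The thresholds are invented for illustration and do not reproduce the rules in the paper.

```python
def grade_data(impact_scope: int, impact_degree: int) -> int:
    """Map impact scope (1=individual .. 4=national) and impact degree (1=minor .. 3=severe)
    to a data security grade from 1 (lowest) to 5 (highest). Thresholds are illustrative only."""
    score = impact_scope * impact_degree          # ranges from 1 to 12
    if score >= 10:
        return 5
    if score >= 7:
        return 4
    if score >= 4:
        return 3
    if score >= 2:
        return 2
    return 1

print(grade_data(impact_scope=3, impact_degree=2))  # 3: provincial scope, moderate damage
```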
    Research on the Disclosure and Sharing Policy of Cybersecurity Vulnerabilities in China and the United States
    Journal of Information Security Research    2023, 9 (6): 602-.
    Abstract (264)      PDF (2305KB) (223)
    With the increasing scale and complexity of computer software systems, vulnerability attacks on software and systems are becoming more frequent, and attack methods are becoming more diverse. Many countries have published vulnerability management regulations to avoid the threat that software and system vulnerabilities pose to national cyberspace security. Proper disclosure and sharing of security vulnerabilities can help security researchers learn about security threats quickly and reduce vulnerability repair costs through sharing and communication, which has become essential to mitigating security risks. This paper introduces the major public vulnerability databases, summarizes the policies and regulations related to cybersecurity vulnerability disclosure and sharing in China and the United States, and discusses possible problems and countermeasures in vulnerability disclosure and sharing in China, so that security researchers can better understand the security vulnerability disclosure process and the regulations governing sharing, and can study security vulnerabilities to the extent permitted by regulations.
    Reference | Related Articles | Metrics
    Challenges and Responses to Data Governance in China
    Journal of Information Security Research    2023, 9 (7): 612-.
    Abstract (255)      PDF (924KB) (220)
    At present, data holds substantial value in promoting economic and social development and possesses important strategic significance. Data governance has also become a significant topic and practical direction in the development of China’s digital economy and the construction of Digital China. By analyzing the difficulties in data rights confirmation, data security, data compliance, and data circulation, the institutional dilemmas and practical issues faced by data governance are clarified. A comprehensive approach to data governance is then proposed, including protecting data rights and interests, strengthening compliance guidance, stimulating the vitality of the data market, and promoting technological empowerment, with the aim of advancing the process of data governance in China.
    Reference | Related Articles | Metrics
    Research on Source Code Vulnerability Detection Based on BERT Model
    Journal of Information Security Research    2024, 10 (4): 294-.
    Abstract (204)      PDF (3199KB) (219)
    Techniques such as code metrics, machine learning, and deep learning are commonly employed in source code vulnerability detection. However, these techniques have problems, such as their inability to retain the syntactic and semantic information of the source code and their requirement for extensive expert knowledge to define vulnerability features. To cope with these problems, this paper proposes a source code vulnerability detection model based on BERT (bidirectional encoder representations from transformers). The model splits the source code to be detected into multiple small samples, converts each small sample into an approximate natural-language form, automatically extracts vulnerability features from the source code through the BERT model, and then trains a well-performing vulnerability classifier to detect multiple types of vulnerabilities in Python code. The model achieves an average detection accuracy of 99.2%, precision of 97.2%, recall of 96.2%, and an F1 score of 96.7% across various vulnerability types, a performance improvement of 2% to 14% over existing vulnerability detection methods. The experimental results show that the model is a general, lightweight, and scalable vulnerability detection method.
    Reference | Related Articles | Metrics
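    A compressed sketch of the kind of pipeline the abstract describes: a code snippet is treated as approximate natural language, encoded with a pretrained BERT model, and passed to a classification head. It assumes the Hugging Face transformers package and the public bert-base-uncased checkpoint, and omits the sample-splitting and fine-tuning that give the reported accuracy.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

def classify_snippet(code: str) -> int:
    """Return 1 if the snippet is predicted vulnerable, else 0 (untrained head: interface illustration only)."""
    enc = tokenizer(code, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        logits = model(**enc).logits
    return int(logits.argmax(dim=-1).item())

# Fine-tuning the head on labelled vulnerable/benign samples is what actually produces a useful
# classifier; the call above only shows how code text flows through the tokenizer and model.
print(classify_snippet("eval(input())"))
```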
    Intelligent Fuzz Testing Method Based on Sequence Generative Adversarial Networks
    Journal of Information Security Research    2024, 10 (6): 490-.
    Abstract (165)      PDF (2426KB) (215)
    The growing number of vulnerabilities and the emergence of many highly dangerous ones, such as super-critical and high-risk vulnerabilities, pose great challenges to network security. As a mainstream security testing method, fuzz testing is widely used, and test case generation, as its core step, directly determines the quality of fuzz testing. However, traditional test case generation methods based on pre-generation, random generation, and mutation strategies face bottlenecks such as low coverage, high labor costs, and low quality. Generating high-quality, highly usable, and comprehensive test cases is a difficult problem in intelligent fuzz testing. To address this issue, this paper proposes an intelligent fuzz testing method based on the sequence generative adversarial network (SeqGAN) model. By incorporating ideas from reinforcement learning, test case generation is abstracted as a problem of learning and approximately generating universally applicable, variable-length discrete sequence data. A configurable embedding layer is added to the generator to standardize generation, and a reward function is designed along the dimensions of authenticity and diversity through dynamic weight adjustment. This ultimately achieves the goal of automatically and intelligently constructing a comprehensive, complete, and usable test case set for flexible and efficient intelligent fuzz testing. The proposed scheme is verified in terms of both effectiveness and universality: an average test case pass rate of over 95% and an average target defect detection rate of 10% across four different testing targets demonstrate its universality, while a 98% test case pass rate, a 9% target defect detection rate, and the ability to generate 20,000 usable test cases per unit time across four different schemes demonstrate its effectiveness.
    Reference | Related Articles | Metrics
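    The dynamically weighted authenticity/diversity reward mentioned above can be pictured with a simple stand-in: a discriminator score measures authenticity, a distinct-n-gram ratio measures diversity, and the blending weight shifts over training. Both metrics and the schedule below are illustrative choices, not the paper’s formulation.

```python
def distinct_ngram_ratio(tokens, n: int = 2) -> float:
    """Fraction of n-grams in a generated test case that are unique (a crude diversity signal)."""
    ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    return len(set(ngrams)) / max(len(ngrams), 1)

def reward(disc_score: float, tokens, step: int, total_steps: int) -> float:
    """Blend authenticity (discriminator score) and diversity with a weight that shifts over training."""
    w = 0.9 - 0.4 * (step / max(total_steps, 1))   # start authenticity-heavy, end more diversity-driven
    return w * disc_score + (1.0 - w) * distinct_ngram_ratio(tokens)

print(reward(disc_score=0.8, tokens=list("GET /index.html HTTP/1.1"), step=100, total_steps=1000))
```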
    A Network Intrusion Detection Model Integrating CNN-BiGRU and Attention Mechanism
    Journal of Information Security Research    2024, 10 (3): 202-.
    Abstract (258)      PDF (2042KB) (214)
    To enhance the feature extraction capability and classification accuracy of network intrusion detection models, a network intrusion detection model integrating CNN-BiGRU (convolutional neural network-bidirectional gated recurrent unit) and an attention mechanism is proposed. The CNN is employed to effectively extract nonlinear features from traffic datasets, while the BiGRU extracts time-series features. The attention mechanism is then integrated to weight different types of traffic data according to their importance, thereby improving the overall performance of the model in feature extraction and classification. The experimental results indicate that the overall accuracy is 2.25% higher than that of the BiLSTM (bidirectional long short-term memory) model. K-fold cross-validation results demonstrate the proposed model’s good generalization performance, avoiding overfitting, and affirm its effectiveness and rationality.
    Reference | Related Articles | Metrics
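    A minimal PyTorch sketch of the architecture described above: a 1-D convolution extracts local nonlinear features from each traffic record, a bidirectional GRU models sequential structure, and a learned attention weighting pools the steps before classification. Layer sizes and the 41-feature input are hypothetical (41 matches NSL-KDD-style records).

```python
import torch
import torch.nn as nn

class CNNBiGRUAttention(nn.Module):
    def __init__(self, n_features: int, n_classes: int, hidden: int = 64):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv1d(1, 32, kernel_size=3, padding=1), nn.ReLU())
        self.bigru = nn.GRU(32, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)
        self.fc = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                                 # x: [batch, n_features]
        h = self.conv(x.unsqueeze(1))                     # [batch, 32, n_features]
        out, _ = self.bigru(h.transpose(1, 2))            # [batch, n_features, 2*hidden]
        weights = torch.softmax(self.attn(out), dim=1)    # attention over feature positions
        context = (weights * out).sum(dim=1)              # [batch, 2*hidden]
        return self.fc(context)

model = CNNBiGRUAttention(n_features=41, n_classes=5)
print(model(torch.randn(8, 41)).shape)                    # torch.Size([8, 5])
```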
    Journal of Information Security Research    2024, 10 (E1): 246-.
    Abstract (339)      PDF (1562KB) (209)
    Reference | Related Articles | Metrics
    Research on the Security Architecture of Artificial Intelligence Computing Infrastructure
    Journal of Information Security Research    2024, 10 (2): 109-.
    Abstract (171)      PDF (1146KB) (199)
    Artificial intelligence computing infrastructure is a crucial foundation for the development of artificial intelligence. However, due to its diverse attributes, complex nodes, large number of users, and the inherent vulnerability of artificial intelligence itself, the construction and operation of artificial intelligence computing infrastructure face severe security challenges. This article analyzes the connotation and security background of artificial intelligence computing infrastructure, and proposes a security architecture for it from three aspects: strengthening its own security, ensuring operational security, and facilitating security compliance. It puts forward development suggestions that aim to provide methodological ideas for the security construction of artificial intelligence computing infrastructure, offer a basis for selecting and using secure artificial intelligence computing infrastructure, and provide decision-making references for the healthy and sustainable development of the artificial intelligence industry.
    Reference | Related Articles | Metrics
    Federated Foundation Model Fine-tuning Based on Differential Privacy
    Journal of Information Security Research    2024, 10 (7): 616-.
    Abstract (342)      PDF (1752KB) (198)
    As the availability of private data decreases, large model fine-tuning based on federated learning has become a research area of great concern. Although federated learning itself provides a certain degree of privacy protection, privacy and security issues such as gradient leakage attacks and embedding inversion attacks on large models still threaten the sensitive information of participants. In the current context of increasing awareness of privacy protection, these potential privacy risks have significantly hindered the adoption of federated large model fine-tuning in practical applications. Therefore, this paper proposes a federated large model embedding differential privacy control algorithm, which adds controllable random noise to the embedding layer of the large model during parameter-efficient fine-tuning through a dual global and local privacy control mechanism, thereby enhancing the privacy protection of federated large model fine-tuning. In addition, the paper demonstrates the privacy protection effect of this algorithm through experimental comparisons across different federation settings, and verifies its feasibility through performance comparison experiments between centralized and federated settings.
    Reference | Related Articles | Metrics
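    The core mechanism the algorithm adds on top of parameter-efficient fine-tuning can be sketched as clipping the embedding-layer gradient and perturbing it with Gaussian noise before the update leaves the client; in the paper the noise scale is set by the global and local privacy controllers, whereas the values below are placeholders.

```python
import torch

def perturb_embedding_grad(embedding: torch.nn.Embedding, clip_norm: float, noise_std: float) -> None:
    """Clip the embedding-layer gradient and add Gaussian noise in place, before the update is shared."""
    grad = embedding.weight.grad
    if grad is None:
        return
    grad.mul_(min(1.0, clip_norm / (grad.norm().item() + 1e-12)))
    grad.add_(torch.randn_like(grad) * noise_std)

# In a federated round, each client would call this between loss.backward() and optimizer.step(),
# with clip_norm / noise_std chosen by the global and local privacy controllers.
emb = torch.nn.Embedding(1000, 64)
loss = emb(torch.tensor([1, 2, 3])).sum()
loss.backward()
perturb_embedding_grad(emb, clip_norm=1.0, noise_std=0.1)
```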
    Journal of Information Security Research    2023, 9 (7): 610-.
    Abstract (189)      PDF (519KB) (197)
    Related Articles | Metrics
    A Secure Data Sharing Scheme Supporting Fine-grained Authorization
    Journal of Information Security Research    2023, 9 (7): 667-.
    Abstract (169)      PDF (1681KB) (197)
    Considering problems such as centralized data storage and the difficulty of data sharing in cloud computing environments, a multi-conditional attribute-based threshold proxy re-encryption scheme supporting multiple authorization conditions is proposed, based on the combination of multi-conditional proxy re-encryption and attribute-based proxy re-encryption. The scheme supports fine-grained access to ciphertext data under multiple keyword authorization conditions, and can limit the authorization conditions and the scope of ciphertext sharing. Only when a user's attribute set satisfies the access structure in the ciphertext and the keywords match those set in the ciphertext can the user access the data. This solution achieves fine-grained access to ciphertext data under multiple keyword authorization conditions, supports flexible user revocation, prevents unauthorized decryption of ciphertext by colluders, and protects the sensitive information of data owners. The provable security analysis shows that, under the generic group model, the scheme can resist chosen-plaintext attacks; compared with other conditional proxy re-encryption schemes, it supports a more diverse set of functions.

    Reference | Related Articles | Metrics
    Legislative Thinking of Artificial Intelligence Law in the Era of Generative Artificial Intelligence
    Journal of Information Security Research    2024, 10 (2): 103-.
    Abstract (218)      PDF (874KB) (195)
    With the technological advancement and widespread adoption of generative artificial intelligence (GAI), the structure of human society has undergone fundamental changes, and the development of artificial intelligence technology has brought new risks and challenges. The “Interim Measures for the Management of Generative Artificial Intelligence Services” represents China’s latest exploration in the field of GAI. It emphasizes the equal importance of development and security, advocates innovation together with governance in accordance with the law, and serves as a reference and inspiration for the ongoing legislative process of the Artificial Intelligence Law. Specifically, the Artificial Intelligence Law should consider adopting a promotion-oriented legislative model, reduce the use of normative references in the legislative content, clarify a legislative approach based on classification and grading, enhance international exchange and cooperation on artificial intelligence, and promote the positive use of science and technology by establishing a more scientific and reasonable top-level design.
    Reference | Related Articles | Metrics
    Journal of Information Security Research    2024, 10 (E2): 24-.
    Abstract (147)      PDF (555KB) (194)
    Reference | Related Articles | Metrics
    Research on Security Risks and Protection of Container Images
    Journal of Information Security Research    2023, 9 (8): 792-.
    Abstract (158)      PDF (1788KB) (192)
    As digital transformation accelerates, more and more enterprises are adopting container technology to improve business productivity and scalability and to deepen industrial digital transformation. As the basis for running containers, container images contain packaged applications and their dependencies, as well as the process information needed for container instantiation. However, container images also carry various insecure factors. In order to solve the problem at its source and reduce the security risks and threats that containers face after instantiation, full-lifecycle management of container images should be implemented. This paper investigates the advantages that container images bring to application development and deployment, and analyzes the security risks they face. Key technologies for container image security protection across the three stages of construction, distribution, and operation are proposed, and a container image security scanning tool is developed that can scan the container images of applications and the underlying infrastructure that use container technology. The tool has proved to have good practical effect and can help enterprises achieve full-lifecycle image security protection.
    Reference | Related Articles | Metrics
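    A few of the build-stage checks that image scanners perform can be illustrated with static Dockerfile rules (unpinned base images, ADD from remote URLs, running as root). Real tools, including the scanner described in the paper, also inspect image layers and known-vulnerability databases, which this sketch does not attempt.

```python
import re

RULES = [
    (re.compile(r"^FROM\s+\S+:latest\b", re.I), "base image pinned to 'latest' tag"),
    (re.compile(r"^FROM\s+[^:\s]+\s*$", re.I),  "base image has no tag or digest"),
    (re.compile(r"^ADD\s+https?://", re.I),     "ADD fetches remote content; prefer verified COPY"),
    (re.compile(r"^USER\s+root\b", re.I),       "container runs as root"),
]

def lint_dockerfile(text: str):
    """Return (line_number, message) findings for a handful of build-time image risks."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for pattern, message in RULES:
            if pattern.search(line.strip()):
                findings.append((lineno, message))
    return findings

print(lint_dockerfile("FROM ubuntu:latest\nADD http://example.com/app.tar.gz /opt\nUSER root\n"))
```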
    A Review of GPU Acceleration Technology for Deep Learning in Plaintext and Private Computing Environments
    Journal of Information Security Research    2024, 10 (7): 586-.
    Abstract (188)      PDF (1274KB) (188)
    With the continuous development of deep learning technology, the training time of neural network models keeps growing, and using GPU computing to accelerate neural network training has increasingly become a key technology. In addition, the importance of data privacy has also promoted the development of privacy computing technology. This article first introduces the concepts of deep learning, GPU computing, and two privacy computing technologies, secure multiparty computation and homomorphic encryption, and then discusses GPU acceleration technology for deep learning in plaintext and privacy computing environments. For the plaintext environment, it introduces the two basic parallel training modes of data parallelism and model parallelism, analyzes two memory optimization technologies, recomputation and GPU memory swapping, and introduces gradient compression technology used in distributed neural network training. For the privacy computing environment, it introduces GPU acceleration of deep learning under secure multiparty computation and homomorphic encryption. Finally, the similarities and differences of GPU-accelerated deep learning methods in the two environments are briefly analyzed.
    Reference | Related Articles | Metrics
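    Of the plaintext-environment techniques surveyed above, gradient compression is the most self-contained to illustrate: top-k sparsification keeps only the largest-magnitude entries of a gradient before workers exchange it. The 1% compression ratio below is an arbitrary example.

```python
import torch

def topk_compress(grad: torch.Tensor, ratio: float = 0.01):
    """Keep the largest-magnitude `ratio` fraction of entries; return (values, indices, shape)."""
    flat = grad.flatten()
    k = max(1, int(flat.numel() * ratio))
    _, indices = torch.topk(flat.abs(), k)
    return flat[indices], indices, grad.shape

def topk_decompress(values, indices, shape):
    """Rebuild a dense (sparse-filled) gradient from the transmitted values and indices."""
    flat = torch.zeros(shape).flatten()
    flat[indices] = values
    return flat.reshape(shape)

grad = torch.randn(256, 256)
vals, idx, shape = topk_compress(grad, ratio=0.01)
approx = topk_decompress(vals, idx, shape)
print(vals.numel(), "of", grad.numel(), "entries transmitted")
```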
    Research on the Progress of Crossborder Data Flow Governance
    Journal of Information Security Research    2023, 9 (7): 624-.
    Abstract (391)      PDF (1036KB) (183)
    While promoting the sharing of global data resources, cross-border data flows inevitably threaten data sovereignty and national security. Competition for the international right of discourse on data, with cross-border data flow governance as the arena, will become a focus of competition in the international community in the future. This paper introduces the background and constraints of cross-border data flows, surveys and compares the cross-border data flow governance models of the United States, the European Union, Russia, Japan, and Australia, and analyzes the current policy status and challenges of cross-border data flow governance in China. On this basis, countermeasures and suggestions for China’s governance of cross-border data flows are proposed from the perspective of data sovereignty, including promoting classified supervision of cross-border data flows, innovating and developing cross-border data flow governance models, improving countermeasures against extraterritorial “long-arm jurisdiction”, and actively participating in and leading the formulation of international governance rules.
    Reference | Related Articles | Metrics