
Table of Contents

    07 February 2026, Volume 12 Issue 2
    A Graphembedded Data Security Audit Scheme Based on Risk Elements
    2026, 12(2):  100. 
    With the increasing complexity of data security risks in big data environments, existing data security audit technologies are limited by fragmented feature utilization and insufficient scalability, preventing comprehensive lifecycle risk coverage and thereby reducing risk detection efficiency. To address these challenges, a graph-embedded data security audit scheme based on risk elements (REGDSA) is proposed. The scheme first constructs a security risk element space comprising data attributes (D), user characteristics (U), carrier environment (C), and actions (A), achieving a structured mapping of risk features across the entire data lifecycle. It then employs graph embedding technology to map these security risk elements into low-dimensional semantic vectors, constructs a cross-dimensional association model for integrated analysis, and achieves efficient risk detection. The feasibility of the scheme is validated through effectiveness and performance analysis.
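    The abstract does not spell out the embedding procedure; the following is a minimal sketch of the graph-embedding idea, assuming the D/U/C/A risk elements are modeled as nodes of a relation graph and embedded with a DeepWalk-style random walk plus skip-gram (via networkx and gensim). All node names, edges, and parameters are illustrative, not the paper's actual construction.

```python
# Illustrative sketch only: a DeepWalk-style embedding of a hypothetical
# D/U/C/A risk-element graph. Node names, edges, and parameters are assumptions.
import random
import networkx as nx
from gensim.models import Word2Vec

# Risk-element graph: data attributes (D), users (U), carriers (C), actions (A)
G = nx.Graph()
G.add_edges_from([
    ("U:analyst_01", "A:bulk_export"),    # user performed an action
    ("A:bulk_export", "D:customer_pii"),  # action touched a data attribute
    ("A:bulk_export", "C:usb_drive"),     # action used a carrier environment
    ("U:analyst_01", "C:office_lan"),
    ("D:customer_pii", "C:db_server"),
])

def random_walks(graph, num_walks=10, walk_len=8):
    """Generate uniform random walks over the risk-element graph."""
    walks = []
    for _ in range(num_walks):
        for node in graph.nodes():
            walk = [node]
            while len(walk) < walk_len:
                nbrs = list(graph.neighbors(walk[-1]))
                if not nbrs:
                    break
                walk.append(random.choice(nbrs))
            walks.append(walk)
    return walks

# Map each risk element to a low-dimensional semantic vector (skip-gram).
model = Word2Vec(random_walks(G), vector_size=32, window=3, min_count=1, sg=1)

# Cross-dimensional association: similarity between element embeddings can
# then feed a downstream risk-detection model.
print(model.wv.similarity("U:analyst_01", "D:customer_pii"))
```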
    Research on Trusted Data Collection Metrics Mechanism for IoT in Smart Cities
    2026, 12(2):  109. 
    The diversity, heterogeneity, and wide distribution of IoT devices expose their operation to risks such as forgery or tampering of data sources in sensing devices. However, current trust evaluation models for multi-domain IoT scenarios in smart cities exhibit limited dynamic adaptability and lack comprehensive capabilities for addressing security threats. To address these issues, this study proposes a framework from the macro-operational perspective of IoT that integrates trusted computing technologies. We construct static and dynamic attribute metric mechanisms for IoT device nodes, categorize trust levels using clustering algorithms, and establish a comprehensive trusted metrics mechanism tailored to multi-source heterogeneous IoT devices. Through simulation experiments on a multi-domain distributed IoT architecture, we validate that the proposed trusted metrics scheme effectively detects the initial propagation of malicious nodes, confines malicious propagation within a limited scope, and robustly addresses security challenges under varying proportions of malicious nodes.
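    As a hedged illustration of the clustering step only (the paper's concrete metric definitions are not given in the abstract), the sketch below assumes each device node is summarized by a static-attribute score and a dynamic-attribute score and uses k-means to partition nodes into three trust levels; the feature values and the number of levels are assumptions.

```python
# Illustrative sketch: clustering device trust scores into trust levels.
# Feature values and the number of levels are assumptions.
import numpy as np
from sklearn.cluster import KMeans

# Each row: [static score (identity, firmware integrity),
#            dynamic score (data consistency, behavior anomaly rate)]
device_metrics = np.array([
    [0.95, 0.90],
    [0.92, 0.85],
    [0.40, 0.30],   # suspicious node
    [0.88, 0.91],
    [0.35, 0.20],   # suspicious node
])

# Categorize nodes into three trust levels (untrusted / uncertain / trusted).
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(device_metrics)

# Order clusters by mean score so level 0 = lowest trust, level 2 = highest.
order = np.argsort(kmeans.cluster_centers_.mean(axis=1))
level = {cluster: rank for rank, cluster in enumerate(order)}
print([level[c] for c in kmeans.labels_])
```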
    Research on the Development Challenges and Governance Pathways of Network Data Labeling and Tagging Technology
    2026, 12(2):  118. 
    Network data labeling and tagging technology serves as a critical enabler for ensuring the trusted circulation and secure controllability of data elements, offering significant application prospects and developmental potential. This paper reviews the global governance landscape of data labeling and tagging technologies, identifies three core challenges hindering their advancement and proposes targeted governance strategies. By addressing technical bottlenecks through institutional innovation, technological optimization, and collaborative supervision, this study provides theoretical guidance for building a secure, efficient, and modernized network data governance system in China.
    Research on Dynamic Risk Assessment and Security Supervision System of Enterprise Outbound Data Transfer
    2026, 12(2):  124. 
    The demand for cross-border data flow has grown significantly with the globalization of the digital economy, and the security risks surrounding data such as national information, corporate secrets, and personal privacy have attracted wide attention. To mitigate the risks of outbound data transfer, this article evaluates the risk factors from the regulatory perspective and builds a risk assessment and security supervision framework that combines monitoring and sampling mechanisms on top of the outbound data flow model. The whole-chain risk supervision approach covers risk pre-assessment based on multi-factor merging analysis before the business, risk adjustment and response based on statistical monitoring and sampling mechanisms during the business, and post-business disposal of illegal behaviors and supervision optimization, thereby regulating the outbound data transfer behavior of cross-border enterprises. The study makes recommendations for enhancing the technical framework of outbound data transfer security supervision, which is crucial for fostering high-quality and sound growth of the digital economy.
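    As a rough illustration of how a multi-factor pre-assessment could feed the monitoring and sampling mechanism, the sketch below merges hypothetical risk factors with assumed weights into a single score that sets a sampling rate; it is not the paper's actual model.

```python
# Illustrative sketch of the pre-assessment step: a weighted multi-factor risk
# score for an outbound transfer, used to set a sampling rate for in-business
# monitoring. Factor names and weights are assumptions.
FACTOR_WEIGHTS = {
    "data_sensitivity": 0.4,    # e.g. personal privacy, corporate secrets
    "destination_risk": 0.3,    # regulatory rating of the receiving party
    "volume_scale": 0.2,        # relative size of the transfer
    "enterprise_history": 0.1,  # past compliance record
}

def pre_assess(factors: dict) -> float:
    """Merge factor scores in [0, 1] into a single risk score."""
    return sum(FACTOR_WEIGHTS[name] * value for name, value in factors.items())

def sampling_rate(risk: float, base=0.01, cap=0.5) -> float:
    """Higher pre-assessed risk leads to sampling a larger share of transfers."""
    return min(cap, base + risk * (cap - base))

risk = pre_assess({
    "data_sensitivity": 0.8,
    "destination_risk": 0.5,
    "volume_scale": 0.6,
    "enterprise_history": 0.2,
})
print(round(risk, 2), round(sampling_rate(risk), 2))
```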
    Compound Admissibility Rules of Blockchain Evidence in Online Litigation
    2026, 12(2):  134. 
    Blockchain evidence offers a solution to the limitations of traditional electronic evidence by establishing a new model of “evidence self-authentication”. However, current regulations in China exhibit obvious limitations: they fail to fully cover the application of blockchain evidence in both online and offline spaces, and they prioritize authenticity at the expense of admissibility. To realize the proper application of blockchain evidence in the Chinese context, this paper proposes a dual-space framework integrating technological self-authentication with legal presumptions. This approach aims to achieve consensual justice and composite admissibility rules for preservation, presentation, cross-examination, and authentication, thereby fostering a novel form of evidentiary rule of law in which legal rules and technical rules interact benignly.
    China’s Mirror and Insights for the Legitimate Interest Rule from the EU Law Perspective
    2026, 12(2):  142. 
    The rapid development of generative artificial intelligence (GAI) poses significant challenges to traditional informed consent rules. The European Union (EU) addresses this tension through the “legitimate interest rule” established under the General Data Protection Regulation. By adopting an open-structured framework and dynamic balancing mechanisms, the EU effectively reconciles data protection with technological innovation. In contrast, China’s Personal Information Protection Law diverges from its EU counterpart on the lawful bases for data processing, making it difficult for informed consent rules to meet the demands of large-scale data processing in the context of GAI. The EU’s approach is rooted in a governance doctrine that harmonizes rights protection with risk management, alongside an economic logic that prioritizes a unified market. China adopts a risk-based regulatory strategy and has developed a “strong protection, weak circulation” regulatory model. To address the technical complexities of GAI, China should construct a localized legitimate interest rule confined to commercial scenarios. This framework would incorporate a three-tiered analysis consisting of an interest test, a necessity test, and a balance test, supported by risk mitigation measures and accountability mechanisms. Such institutional innovation would overcome the dilemma of applying consent while enabling adjudication to dynamically balance data subjects’ rights, commercial interests, and public values case by case. This solution offers both a theoretical framework and practical feasibility for optimizing data governance in the AI era.
    Research on Phishing Email Detection Based on Large Language Model
    2026, 12(2):  151. 
    With the rapid increase in phishing email volumes and the continuous evolution of adversarial techniques, traditional phishing detection methods face significant challenges in efficiency and accuracy. To address issues such as low detection rates, high false-negative rates, and poor human-computer interaction in existing systems, the authors propose a phishing email detection system based on a large language model. Through comprehensive analysis of key phishing email characteristics, including header fields, body content, URLs, QR codes, attachments, and HTML pages, they constructed a high-quality training dataset using feature insertion algorithms. Building on the pre-trained LLaMA model, they applied LoRA fine-tuning, achieving domain knowledge transfer by updating only 0.72% of model parameters (approximately 50MB). Experimental results demonstrate that, compared with traditional methods, the LLM-based approach achieves 94.5% overall accuracy with enhanced robustness, effectively reduces false-positive rates, improves the classification and interpretation of phishing email features, and provides a more practical and reliable solution for phishing detection.
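    The LoRA setup described above can be sketched with the Hugging Face transformers and peft libraries as below; the checkpoint name, adapter rank, and target modules are assumptions, and only the figure of roughly 0.72% trainable parameters comes from the abstract.

```python
# Sketch of a LoRA fine-tuning setup for a LLaMA-family model using
# transformers + peft. Checkpoint name, rank, and target modules are
# assumptions; the paper reports updating ~0.72% of parameters.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_name = "meta-llama/Llama-2-7b-hf"   # placeholder LLaMA checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_name)
model = AutoModelForCausalLM.from_pretrained(base_name)

# Low-rank adapters on the attention projections; only these are trained.
lora_cfg = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # prints the small trainable fraction

# Training would then run over prompts built from email headers, body text,
# URLs, QR-code and attachment features, labeled phishing / legitimate.
```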
    A Method for IP Positioning and Mapping Based on Multi-source Data Fusion and Dynamic Clustering
    2026, 12(2):  164. 
    As the global network continues to grow, IP positioning and mapping has become a core technology for fine-grained network resource scheduling and attack tracing, and its accuracy and real-time performance are critical for ensuring high-quality service in emerging scenarios such as 5G and the Internet of Things. Because of their static parameter settings and insufficient adaptability to dynamic topologies, traditional methods struggle to meet high-precision positioning requirements under multi-source heterogeneous data. This paper proposes an IP positioning and mapping method that coordinates multi-source data fusion and dynamic clustering. By integrating multi-source heterogeneous data such as WiFi hotspots, BGP routing, and ZoomEye protocol fingerprints, a dynamic screening mechanism based on geographical location entropy is constructed, and the recall rate of reference points reaches 92.3% (15.2% higher than the comparison method). A dynamic clustering optimization algorithm is then designed to achieve differentiated clustering for enterprise dedicated lines and residential areas. Finally, combined with network topology mapping technology, the positioning offset is corrected through the analysis of common adjacent nodes, suppressing errors in dynamic networks.
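    One way to picture the entropy-based screening and the clustering step is the sketch below: a candidate reference point is kept only when the locations reported by different sources (WiFi, BGP, fingerprint databases) have low Shannon entropy, and reference-point coordinates are then clustered with DBSCAN. The entropy definition, threshold, and clustering parameters are assumptions rather than the paper's exact algorithm.

```python
# Illustrative sketch: an entropy-based screen for candidate reference points
# followed by density-based clustering. Threshold and DBSCAN parameters are
# assumptions.
import math
from collections import Counter
import numpy as np
from sklearn.cluster import DBSCAN

def location_entropy(observed_cities):
    """Shannon entropy of the cities reported for one landmark across sources.
    Low entropy means the sources agree, so the landmark is kept."""
    counts = Counter(observed_cities)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

landmarks = {
    "203.0.113.10": ["Shanghai", "Shanghai", "Shanghai"],
    "198.51.100.7": ["Beijing", "Tianjin", "Shanghai"],   # inconsistent, dropped
}
refs = {ip for ip, cities in landmarks.items() if location_entropy(cities) < 1.0}

# Differentiated clustering of reference-point coordinates (lat, lon):
# a tighter radius could be used for enterprise lines than residential areas.
coords = np.array([[31.23, 121.47], [31.24, 121.48], [39.90, 116.40]])
labels = DBSCAN(eps=0.05, min_samples=1).fit_predict(coords)
print(refs, labels)
```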
    Research on ECDSA Key Recovery Attacks Based on the Extended Hidden Number Problem
    2026, 12(2):  174. 
    The elliptic curve digital signature algorithm (ECDSA) is one of the most widely used digital signature algorithms. Signing requires computing scalar multiplication on elliptic curves, which is typically the most time-consuming part of the signature. To reduce this cost, many current cryptographic libraries represent the ephemeral key in windowed non-adjacent form. This exposes a side-channel vulnerability: malicious attackers can extract partial information about the ephemeral key from side-channel traces and subsequently recover the signing key. Extracting this information via the extended hidden number problem and applying lattice-based attacks to recover the key constitutes one of the mainstream attack frameworks against ECDSA. Building on this framework, we propose three optimizations. First, we introduce a neighboring dynamic constraint merge strategy: by dynamically adjusting the merging parameters, we reduce the dimension of the lattice and control the amount of known information lost during the attack, ensuring high key-recovery success rates across all signatures. Second, we analyze and optimize the embedding number in the lattice, reducing the Euclidean norm of the target vector by approximately 8%, which improves the success rate and reduces time consumption. Finally, we propose a linear predicate method that significantly reduces the time overhead of lattice sieving. With these optimizations, we achieve a success rate of 0.99 in recovering the private key using only two signatures.
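    For orientation, the sketch below constructs the standard hidden-number-problem lattice for ECDSA nonce leakage (not the paper's extended and optimized construction), assuming the top l bits of each ephemeral key are known to be zero; a lattice-reduction step (LLL/BKZ, e.g. via fpylll or SageMath) would then be applied to the returned basis.

```python
# Sketch of the standard (unextended) hidden-number-problem lattice for ECDSA
# nonce leakage. Assumes the top l bits of every ephemeral key k_i are zero.
# Lattice reduction would then be run on `basis`; the secret key d appears in
# the recovered short vector.

def hnp_basis(sigs, n, l):
    """sigs: list of (r, s, h) ECDSA signature triples modulo group order n.
    Each signature gives k_i = A_i + B_i*d (mod n) with k_i < n / 2**l."""
    m = len(sigs)
    A, B = [], []
    for r, s, h in sigs:
        s_inv = pow(s, -1, n)
        A.append(s_inv * h % n)   # known constant term
        B.append(s_inv * r % n)   # known coefficient of the secret key d
    scale = 2 ** l                # weights the small k_i entries up to ~n
    dim = m + 2
    basis = [[0] * dim for _ in range(dim)]
    for i in range(m):
        basis[i][i] = scale * n                    # reduction modulo n
    basis[m] = [scale * b for b in B] + [1, 0]     # row multiplied by d
    basis[m + 1] = [scale * a for a in A] + [0, n] # constant row
    # Target short vector: (scale*k_1, ..., scale*k_m, d, n)
    return basis
```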
    Research on the Regulation of Cross-border Data Flow in China from the Perspective of Dynamic Systems Theory
    2026, 12(2):  181. 
    Given the significant role data plays in today's world, this paper clarifies and analyzes the dynamic changes in China’s cross-border data flow regulation measures and their driving forces. Dynamic systems theory is introduced as a new methodological approach to interpret the legitimacy and rationality of China’s cross-border data flow regulation and to clarify the legislative path for future development. In legislation, it is essential to define the elements that should be considered in regulating data flows in China: protection of national security and public interests, protection of personal privacy rights, ensuring the free flow of cross-border data, compliance with international agreements, the necessity of restricting data flows, and a mechanism for balancing and evaluating these elements in the judicial context, so as to achieve flexible legal effect in data flow regulation.
    Design of a Port Industrial Control System Based on Zero Trust Architecture
    2026, 12(2):  189. 
    With the increasing intelligence of port industrial control systems (ICS), traditional perimeter-based security models face severe challenges such as expanded attack surfaces and rigid permission management. This paper presents a zero trust architecture (ZTA)-based security protection scheme for port ICS, establishing a hierarchical defense system through dynamic trust evaluation, software-defined perimeter (SDP), and micro-segmentation technologies. The core contributions include a four-layer architecture (terminal, access, control, and data), a dynamic trust evaluation model that integrates identity authentication, device health, and behavioral characteristics, and fine-grained instruction-level access control for industrial protocols. Experimental results demonstrate that the proposed architecture reduces the attack surface exposure rate from 100% to 8%, optimizes the average authentication time to 0.8 s, and limits the permission adjustment response time to 45 s, significantly enhancing both security and real-time performance in port industrial control systems.
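    A minimal sketch of what a dynamic trust-evaluation step might look like is given below, combining identity, device-health, and behavioral scores into a weighted trust value with threshold-based decisions; the weights, thresholds, and actions are assumptions, not the paper's evaluated model.

```python
# Illustrative sketch of a dynamic trust-evaluation step combining identity
# authentication, device health, and behavioral scores into a threshold-based
# access decision. Weights, thresholds, and actions are assumptions.
from dataclasses import dataclass

@dataclass
class AccessContext:
    identity_score: float   # strength/outcome of identity authentication
    device_health: float    # patch level, endpoint integrity check
    behavior_score: float   # deviation from the node's historical behavior

WEIGHTS = (0.4, 0.3, 0.3)
GRANT, STEP_UP = 0.75, 0.5   # trust thresholds

def evaluate(ctx: AccessContext) -> str:
    trust = (WEIGHTS[0] * ctx.identity_score
             + WEIGHTS[1] * ctx.device_health
             + WEIGHTS[2] * ctx.behavior_score)
    if trust >= GRANT:
        return "allow"            # open a narrowly scoped SDP tunnel
    if trust >= STEP_UP:
        return "re-authenticate"  # require step-up authentication
    return "deny"                 # block and alert; micro-segment isolates node

print(evaluate(AccessContext(0.9, 0.8, 0.7)))
```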