[1] Google. Google transparency report [EB/OL]. [2024-09-08]. https://transparencyreport.google.com/https/overview
[2] WatchGuard. WatchGuard's threat lab analyzes the latest malware and internet attacks [EB/OL]. [2024-09-08]. https://www.watchguard.com/wgrd-resource-center/security-report-q2-2024
[3] 张稣荣, 卜佑军, 陈博, et al. Encrypted traffic classification method based on multi-layer bidirectional SRU and attention model [J]. 计算机工程, 2022, 48(11): 127-136
[4] Liu C, He L, Xiong G, et al. FS-Net: A flow sequence network for encrypted traffic classification [C] //Proc of IEEE Conf on Computer Communications. Piscataway, NJ: IEEE, 2019: 1171-1179
[5] 邓昕, 刘朝晖, 欧阳燕, et al. Encrypted malicious traffic identification based on CNN-CBAM-BiGRU-Attention [J]. 计算机工程, 2023, 49(11): 178-186
[6] Song H, Kim M, Park D, et al. Learning from noisy labels with deep neural networks: A survey [J]. IEEE Trans on Neural Networks and Learning Systems, 2022, 34(11): 8135-8153
[7] 童家铖, 陈伟, 倪嘉翼, et al. A noisy label detection method for encrypted malicious traffic [J]. 信息安全研究, 2023, 9(10): 1023-1027
[8] Yuan Q, Zhu Y, Xiong G, et al. ULDC: Unsupervised learning-based data cleaning for malicious traffic with high noise [J]. The Computer Journal, 2024, 67(3): 976-987
[9] Qing Y, Yin Q, Deng X, et al. Low-quality training data only? A robust framework for detecting encrypted malicious network traffic [J]. arXiv preprint, arXiv:2309.04798, 2023
[10] Goldberger J, Ben-Reuven E. Training deep neural-networks using a noise adaptation layer [C] //Proc of the 5th Int Conf on Learning Representations. Virtual: OpenReview.net, 2016: 1-9
[11] Lee K, Yun S, Lee K, et al. Robust inference via generative classifiers for handling noisy labels [C] //Proc of the 36th Int Conf on Machine Learning. Cambridge, MA: JMLR, 2019: 3763-3772
[12] Ma X, Huang H, Wang Y, et al. Normalized loss functions for deep learning with noisy labels [C] //Proc of the 37th Int Conf on Machine Learning. Cambridge, MA: JMLR, 2020: 6543-6553
[13] Liu Y, Guo H. Peer loss functions: Learning from noisy labels without knowing noise rates [C] //Proc of the 37th Int Conf on Machine Learning. Cambridge, MA: JMLR, 2020: 6226-6236
[14] Xia X, Liu T, Han B, et al. Robust early-learning: Hindering the memorization of noisy labels [C] //Proc of the 9th Int Conf on Learning Representations. Virtual: OpenReview.net, 2021: 1-9
[15] Wang J, Wang E X, Liu Y. Estimating instance-dependent label-noise transition matrix using a deep neural network [J]. arXiv preprint, arXiv:2105.13001, 2021
[16] Rui X, Cao X, Xie Q, et al. Learning an explicit weighting scheme for adapting complex HSI noise [C] //Proc of the IEEE/CVF Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2021: 6739-6748
[17] Han B, Yao Q, Yu X, et al. Co-teaching: Robust training of deep neural networks with extremely noisy labels [C] //Proc of the 32nd Conf on Neural Information Processing Systems. Cambridge, MA: MIT Press, 2018: 8536-8546
[18] Wang Y, Sun X, Fu Y. Scalable penalized regression for noise detection in learning with noisy labels [C] //Proc of the IEEE/CVF Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2022: 346-355
[19] Karim N, Rizve M N, Rahnavard N, et al. Unicon: Combating label noise through uniform selection and contrastive learning [C] //Proc of the IEEE/CVF Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2022: 9676-9686
[20] Patel D, Sastry P S. Adaptive sample selection for robust learning under label noise [C] //Proc of the IEEE/CVF Winter Conf on Applications of Computer Vision. Piscataway, NJ: IEEE, 2023: 3932-3942
[21] MontazeriShatoori M, Davidson L, Kaur G, et al. Detection of DoH tunnels using time-series classification of encrypted traffic [C] //Proc of the 5th IEEE Cyber Science and Technology Congress. Piscataway, NJ: IEEE, 2020: 63-70
[22] Xu J, Li Y, Deng R H. Differential training: A generic framework to reduce label noises for Android malware detection [C] //Proc of the Network and Distributed System Security Symposium. Reston, VA: Internet Society, 2021: 1-14