[1] Quinonez R, Giraldo J, Salazar L, et al. SAVIOR: Securing autonomous vehicles with robust physical invariants[C] //Proc of USENIX Security Symposium. Berkeley, CA: USENIX Association, 2020: 895-912
[2] Wang Zhibo, Wang Xue, Ma Jingjing, et al. Survey on adversarial example attacks for computer vision systems[J]. Chinese Journal of Computers, 2023, 46(2): 436-468 (in Chinese)
(王志波, 王雪, 马菁菁, 等. 面向计算机视觉系统的对抗样本攻击综述[J]. 计算机学报, 2023, 46(2): 436-468)
[3] Zhang Tian, Yang Kuiwu, Wei Jianghong, et al. Survey on adversarial example detection and defense techniques for image data[J]. Journal of Computer Research and Development, 2022, 59(6): 1315-1328 (in Chinese)
(张田, 杨奎武, 魏江宏, 等. 面向图像数据的对抗样本检测与防御技术综述[J]. 计算机研究与发展, 2022, 59(6): 1315-1328)
[4] Xu Jincai, Ren Min, Li Qi, et al. Overview of security research on image adversarial examples[J]. Journal of Information Security Research, 2021, 7(4): 294-309 (in Chinese)
(徐金才, 任民, 李琦, 等. 图像对抗样本的安全性研究概述[J]. 信息安全研究, 2021, 7(4): 294-309)
[5] Chen Yuefeng, Mao Xiaofeng, Li Yuhong, et al. AI security: Survey and applications of adversarial example techniques[J]. Journal of Information Security Research, 2019, 5(11): 1000-1007 (in Chinese)
(陈岳峰, 毛潇锋, 李裕宏, 等. AI安全——对抗样本技术综述与应用[J]. 信息安全研究, 2019, 5(11): 1000-1007)
[6] 小淳. More details of the world's first fatal driverless-car crash released: The pedestrian was detected 5.6 seconds before the collision[EB/OL]. (2019-11-08) [2024-01-18]. https://smart.huanqiu.com/article/7RqEXrzi3gk (in Chinese)
(小淳. 全球首例无人车致死案更多细节公布 车祸发生前5.6秒检测到行人[EB/OL]. (2019-11-08) [2024-01-18]. https://smart.huanqiu.com/article/7RqEXrzi3gk)
[7] 扉旅汽车. XPeng car hits a pedestrian on the highway: Autonomous driving that keeps "getting into trouble"[EB/OL]. (2022-08-12) [2024-03-07]. https://baijiahao.baidu.com/s?id=1740950921932517400&wfr=spider&for=pc (in Chinese)
(扉旅汽车. 小鹏高速撞人,一再“闯祸”的自动驾驶[EB/OL]. (2022-08-12) [2024-03-07]. https://baijiahao.baidu.com/s?id=1740950921932517400&wfr=spider&for=pc)
[8] Shen J, Wang N, Wan Z, et al. SoK: On the semantic AI security in autonomous driving[J]. arXiv preprint, arXiv:2203.05314, 2022
[9] Kuang Boyu, Li Yuze, Gu Fangming, et al. Survey on Internet of Vehicles security: Threats, countermeasures, and future prospects[J]. Journal of Computer Research and Development, 2023, 60(10): 2304-2321 (in Chinese)
(况博裕, 李雨泽, 顾芳铭, 等. 车联网安全研究综述:威胁、对策与未来展望[J]. 计算机研究与发展, 2023, 60(10): 2304-2321)
[10] Zhang Yanyong, Zhang Sha, Zhang Yu, et al. Multi-modal fusion based perception and computing for autonomous driving[J]. Journal of Computer Research and Development, 2020, 57(9): 1781-1799 (in Chinese)
(张燕咏, 张莎, 张昱, 等. 基于多模态融合的自动驾驶感知及计算[J]. 计算机研究与发展, 2020, 57(9): 1781-1799)
[11] Zhou C, Yan Q, Shi Y, et al. DoubleStar: Long-range attack towards depth estimation based obstacle avoidance in autonomous systems[C] //Proc of USENIX Security Symposium. Berkeley, CA: USENIX Association, 2022: 1885-1902
[12] Yan C, Xu Z, Yin Z, et al. Rolling colors: Adversarial laser exploits against traffic light recognition[C] //Proc of USENIX Security Symposium. Berkeley, CA: USENIX Association, 2022: 1957-1974
[13] Wang W, Yao Y, Liu X, et al. I can see the light: Attacks on autonomous vehicles using invisible lights[C] //Proc of ACM SIGSAC Conf on Computer and Communications Security. New York: ACM, 2021: 1930-1944
[14] Zhang Q, Jin S, Zhu R, et al. On data fabrication in collaborative vehicular perception: Attacks and countermeasures[J]. arXiv preprint, arXiv:2309.12955, 2023
[15] Vennam R R, Jain I K, Bansal K, et al. mmSpoof: Resilient spoofing of automotive millimeter-wave radars using reflect array[C] //Proc of IEEE Symp on Security and Privacy. Piscataway, NJ: IEEE, 2023: 1807-1821
[16] Jin Z, Ji X, Cheng Y, et al. PLA-LiDAR: Physical laser attacks against LiDAR-based 3D object detection in autonomous vehicle[C] //Proc of IEEE Symp on Security and Privacy. Piscataway, NJ: IEEE, 2023: 1822-1839
[17] Cao Y, Bhupathiraju S H, Naghavi P, et al. You can’t see me: Physical removal attacks on LiDAR-based autonomous vehicles driving frameworks[C] //Proc of USENIX Security Symposium. Berkeley, CA: USENIX Association, 2023: 2993-3010
[18] Sun J S, Cao Y C, Chen Q A, et al. Towards robust LiDAR-based perception in autonomous driving: General black-box adversarial sensor attack and countermeasures[C] //Proc of USENIX Security Symposium. Berkeley, CA: USENIX Association, 2020: 877-894
[19] Cao Y, Xiao C, Cyr B, et al. Adversarial sensor attack on LiDAR-based perception in autonomous driving[C] //Proc of ACM SIGSAC Conf on Computer and Communications Security. New York: ACM, 2019: 2267-2281
[20] Shen J, Won J Y, Chen Z, et al. Drift with devil: Security of multi-sensor fusion based localization in high-level autonomous driving under GPS spoofing[C] //Proc of USENIX Security Symposium. Berkeley, CA: USENIX Association, 2020: 931-948
[21] Jia W, Lu Z, Zhang H, et al. Fooling the eyes of autonomous vehicles: Robust physical adversarial examples against traffic sign recognition systems[J]. arXiv preprint, arXiv:2201.06192, 2022
[22] Jing P, Tang Q, Du Y, et al. Too good to be safe: Tricking lane detection in autonomous driving with crafted perturbations[C] //Proc of USENIX Security Symposium. Berkeley, CA: USENIX Association, 2021: 3237-3254
[23] Sato T, Shen J, Wang N, et al. Dirty road can attack: Security of deep learning based automated lane centering under physical-world attack[C] //Proc of USENIX Security Symposium. Berkeley, CA: USENIX Association, 2021: 3309-3326
[24] Nassi B, Mirsky Y, Nassi D, et al. Phantom of the ADAS: Securing advanced driver-assistance systems from split-second phantom attacks[C] //Proc of ACM SIGSAC Conf on Computer and Communications Security. New York: ACM, 2020: 293-308
[25] Zhu Y, Miao C, Xue H, et al. TileMask: A passive-reflection-based attack against mmWave radar object detection in autonomous driving[C] //Proc of ACM SIGSAC Conf on Computer and Communications Security. New York: ACM, 2023: 1317-1331
[26] Zhu Y, Miao C, Zheng T, et al. Can we use arbitrary objects to attack LiDAR perception in autonomous driving?[C] //Proc of ACM SIGSAC Conf on Computer and Communications Security. New York: ACM, 2021: 1945-1960
[27] Hallyburton R S, Liu Y, Cao Y, et al. Security analysis of camera-LiDAR fusion against black-box attacks on autonomous vehicles[C] //Proc of USENIX Security Symposium. Berkeley, CA: USENIX Association, 2022: 1903-1920
[28] Cao Y, Wang N, Xiao C, et al. Invisible for both camera and LiDAR: Security of multi-sensor fusion based perception in autonomous driving under physical-world attacks[C] //Proc of IEEE Symp on Security and Privacy. Piscataway, NJ: IEEE, 2021: 176-194
[29] Song R, Ozmen M O, Kim H, et al. Discovering adversarial driving maneuvers against autonomous vehicles[C] //Proc of USENIX Security Symposium. Berkeley, CA: USENIX Association, 2023: 2957-2974
[30] Man Y, Muller R, Li M, et al. That person moves like a car: Misclassification attack detection for autonomous systems using spatiotemporal consistency[C] //Proc of USENIX Security Symposium. Berkeley, CA: USENIX Association, 2023: 6929-6946
[31] Xiao Q, Pan X, Lu Y, et al. Exorcising “Wraith”: Protecting LiDAR-based object detector in automated driving system from appearing attacks[J]. arXiv preprint, arXiv:2303.09731, 2023