Journal of Information Security Research ›› 2022, Vol. 8 ›› Issue (8): 812-.

• Special Topic: Cybersecurity Governance •

A Deep Learning Side-Channel Attack Method Based on the Self-Attention Mechanism

Zhou Zixin, Zhang Gongxuan, Kou Xiaoyong, Yang Wei

  1. (School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing 210094)
  • Online: 2022-08-08  Published: 2022-08-08
  • Corresponding author: Zhou Zixin, master's student. His main research interests are side-channel attacks and machine learning. zixinzhou@njust.edu.cn
  • About the authors: Zhou Zixin, master's student; main research interests are side-channel attacks and machine learning (zixinzhou@njust.edu.cn). Zhang Gongxuan, PhD, professor, CCF member; main research interests are cloud computing, Web services, and distributed systems (gongxuan@njust.edu.cn). Kou Xiaoyong, PhD candidate; main research interests are masking attacks and countermeasures (Kouxy@njust.edu.cn). Yang Wei, PhD; main research interests are side-channel analysis, detection, and security evaluation (generalyzy@njust.edu.cn).


Abstract: Deep learning can freely extract and combine features, and deep learning-based side-channel attacks avoid preprocessing steps such as selecting points of interest and trace alignment, so a growing number of researchers use deep learning to mount side-channel attacks. Existing deep learning side-channel attack models are built on multi-layer perceptrons, convolutional neural networks, and recurrent neural networks, and suffer from rapid overfitting, vanishing gradients, and slow convergence during training. Meanwhile, the self-attention mechanism has demonstrated strong feature-extraction ability in natural language processing, computer vision, and other fields. After analyzing the principle of self-attention in depth, and based on the characteristics of deep learning side-channel attacks, this paper proposes SADLSCA, a deep learning side-channel attack model based on the self-attention mechanism, adapting self-attention to the field of deep learning side-channel attacks. SADLSCA fully exploits the ability of self-attention to extract points of interest from a global view, alleviating the rapid overfitting, vanishing gradients, and slow convergence that deep learning side-channel attack models exhibit during training. Experiments verify that the number of power traces required for a successful attack on the public datasets ASCAD and CHES CTF 2018 is reduced by 23.1% and 41.7%, respectively.
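As a rough illustration of the mechanism the abstract refers to (not the authors' SADLSCA implementation, which is not reproduced here), scaled dot-product self-attention over a one-dimensional trace can be sketched in NumPy. The trace length, embedding size, and random weight matrices below are arbitrary placeholders; a real model would learn the weights:

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    # x: (T, d) -- T time samples of a power trace, each embedded in d dims.
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    # (T, T) pairwise scores: every sample attends to every other sample,
    # which is the "global view" that lets attention weight points of
    # interest anywhere in the trace.
    scores = q @ k.T / np.sqrt(k.shape[-1])
    # numerically stable row-wise softmax
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v  # one context vector per time sample

rng = np.random.default_rng(0)
T, d = 8, 4  # tiny synthetic "trace" for illustration
x = rng.normal(size=(T, d))
w_q, w_k, w_v = (rng.normal(size=(d, d)) for _ in range(3))
out = self_attention(x, w_q, w_k, w_v)
print(out.shape)  # (8, 4)
```

Because each output position is a weighted sum over all T positions, no convolution window or alignment step limits which samples can influence which; in practice this single-head sketch would be replaced by learned, multi-head attention in a deep learning framework.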

Keywords: deep learning, side-channel attack, self-attention mechanism, neural network, modeling attack

