- Title
Imperceptible adversarial attacks against traffic scene recognition.
- Authors
Zhu, Yinghui; Jiang, Yuzhen
- Abstract
Adversarial examples have begun to receive widespread attention owing to their potential to damage the most popular DNNs. They are crafted from original images by embedding well-calculated perturbations. In some cases, the perturbations are so slight that neither human eyes nor detection algorithms can notice them, and this imperceptibility makes them more covert and dangerous. To investigate the invisible dangers in applications of traffic DNNs, we focus on imperceptible adversarial attacks on different traffic vision tasks, including traffic sign classification, lane detection and street scene recognition. We propose a universal logits map-based attack architecture against image semantic segmentation and design two targeted attack approaches on top of it. All the attack algorithms generate micro-noise adversarial examples by the iterative method of C&W optimization and achieve a 100% attack success rate with very low distortion. Our experimental results indicate that the MAE (mean absolute error) of the perturbation noise in the traffic sign classifier attack is as low as 0.562, while the two semantic segmentation-based algorithms yield only 1.503 and 1.574. We believe that our research on imperceptible adversarial attacks offers a useful reference for the security of DNN applications.
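The iterative C&W-style optimization the abstract describes can be illustrated with a minimal, self-contained sketch. This is not the authors' method: the paper attacks real traffic DNNs, whereas here a hypothetical linear "logits" model `Z(x) = W @ x` stands in for the network so the gradient is analytic, and the loop minimizes the standard targeted C&W objective `||delta||^2 + c * f(x + delta)` while reporting the MAE of the perturbation on a 0-255 pixel scale. All names, dimensions, and hyperparameters below are illustrative assumptions.

```python
import numpy as np

# Hypothetical stand-in for a traffic-sign DNN: a linear logits layer
# Z(x) = W @ x keeps the sketch self-contained while preserving the
# shape of the C&W targeted optimization loop.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 8))          # 3 classes, 8-dim toy "image"
x = rng.uniform(0.0, 1.0, size=8)    # original input, pixels in [0, 1]
target = int(np.argmin(W @ x))       # attack toward the least-likely class

def cw_loss(z, t, kappa):
    """Targeted C&W loss: negative once the target logit leads by kappa."""
    others = np.delete(z, t)
    return max(float(others.max() - z[t]), -kappa)

c, kappa, lr = 1.0, 0.5, 0.05        # illustrative hyperparameters
delta = np.zeros_like(x)
f_init = cw_loss(W @ x, target, kappa)

for _ in range(500):
    z = W @ (x + delta)
    if cw_loss(z, target, kappa) > -kappa:
        others = np.delete(z, target)
        j = int(np.argmax(others))
        j = j if j < target else j + 1       # map back to full logit index
        grad_f = W[j] - W[target]            # d(z_j - z_t) / d(delta)
    else:
        grad_f = np.zeros_like(x)            # margin reached: only shrink
    grad = 2.0 * delta + c * grad_f          # d/d(delta) of ||delta||^2 + c*f
    delta -= lr * grad
    delta = np.clip(x + delta, 0.0, 1.0) - x # keep the image in [0, 1]

f_final = cw_loss(W @ (x + delta), target, kappa)
mae_255 = float(np.abs(delta).mean()) * 255  # perturbation MAE, 0-255 scale
print(f"C&W loss {f_init:.3f} -> {f_final:.3f}, noise MAE {mae_255:.3f}")
```

Against a real network, `grad_f` would come from backpropagation rather than the closed form above, and the reported MAE on the 0-255 scale is what the abstract's figures (0.562, 1.503, 1.574) measure.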
- Subjects
TRAFFIC signs & signals; IMAGE segmentation; TRAFFIC noise; REFERENCE values; LOGITS
- Publication
Soft Computing - A Fusion of Foundations, Methodologies & Applications, 2021, Vol 25, Issue 20, p13069
- ISSN
1432-7643
- Publication type
Article
- DOI
10.1007/s00500-021-06148-8