- Title
From Spatial to Spectral Domain, a New Perspective for Detecting Adversarial Examples.
- Authors
Liu, Zhiyuan; Cao, Chunjie; Tao, Fangjian; Li, Yifan; Lin, Xiaoyu
- Abstract
Deep neural networks (DNNs) have been likened to Pandora's box since their inception. Although they achieve high accuracy on real-world tasks (e.g., object detection and speech recognition), they retain serious vulnerabilities and flaws. Malicious attackers can cause a DNN to misclassify simply by adding tiny perturbations to the original image; these crafted samples are called adversarial examples. One effective defense is to detect them before they are fed into the model. In this paper, we examine the representation of adversarial examples in the spatial and spectral domains. Qualitative and quantitative analysis confirms that the high-level representations and high-frequency components of abnormal samples carry richer discriminative information. To further explore the interaction between these two factors, we perform an ablation study, whose results show a win-win effect. Building on this finding, we propose a detection method (HLFD) based on extracting high-level representations and high-frequency components. Compared with other state-of-the-art detection methods, it achieves better detection performance in most scenarios across a series of experiments on MNIST, CIFAR-10, CIFAR-100, SVHN, and Tiny-ImageNet. In particular, it improves detection rates by a large margin against DeepFool and CW attacks.
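The abstract's spectral-domain idea, isolating an image's high-frequency components, can be sketched with a simple FFT-based high-pass filter. This is an illustrative assumption, not the paper's actual HLFD pipeline: the function name `high_frequency_components` and the cutoff `radius` are hypothetical choices for the sketch.

```python
import numpy as np

def high_frequency_components(image: np.ndarray, radius: int = 8) -> np.ndarray:
    """Return the high-frequency part of a 2D grayscale image.

    Frequencies within `radius` of the DC component (image mean and
    slow variations) are zeroed out; only fine detail survives. The
    cutoff radius is a hypothetical hyperparameter for illustration.
    """
    # Move to the frequency domain; shift DC to the center of the spectrum.
    spectrum = np.fft.fftshift(np.fft.fft2(image))

    # Build a circular high-pass mask: True outside the low-frequency disk.
    h, w = image.shape
    yy, xx = np.ogrid[:h, :w]
    mask = (yy - h // 2) ** 2 + (xx - w // 2) ** 2 > radius ** 2

    # Apply the mask and transform back to the spatial domain.
    filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * mask))
    return np.real(filtered)
```

A detector in the spirit of the abstract would compute such high-frequency maps (alongside a model's high-level feature representations) for clean and adversarial inputs and train a classifier on them; a perfectly smooth input has no high-frequency content, so the filter output is near zero.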
- Subjects
ARTIFICIAL neural networks
- Publication
Security & Communication Networks, 2022, p1
- ISSN
1939-0114
- Publication type
Article
- DOI
10.1155/2022/5501035