- Title
Improving the Adversarial Robustness of Neural ODE Image Classifiers by Tuning the Tolerance Parameter.
- Authors
Carrara, Fabio; Caldelli, Roberto; Falchi, Fabrizio; Amato, Giuseppe
- Abstract
Deep learning-based solutions now pervade nearly every area of everyday life, often outperforming classical systems. Since many applications handle sensitive data and procedures, there is a strong demand to know how reliable these technologies actually are. This work analyzes the robustness characteristics of a specific kind of deep neural network, the neural ordinary differential equation (N-ODE) network. N-ODEs are interesting both for their effectiveness and for a peculiar property: a test-time tunable tolerance parameter that trades off accuracy against efficiency. Adjusting this tolerance also grants robustness against adversarial attacks; notably, decoupling its value between training and test time can strongly reduce the attack success rate. On this basis, we show how the tolerance can be tuned during the prediction phase to improve the robustness of N-ODEs to adversarial attacks. In particular, we exploit this property to construct an effective detection strategy that increases the chances of identifying adversarial examples in a non-zero-knowledge attack scenario. Our experimental evaluation on two standard image classification benchmarks shows that the proposed detection technique rejects a high fraction of adversarial examples while retaining most pristine samples.
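The core idea in the abstract — an adaptive ODE solver whose tolerance can differ between training and prediction, with the resulting output discrepancy used to flag unstable inputs — can be illustrated with a minimal sketch. This is not the authors' implementation: the linear dynamics, the `scipy` solver, the tolerance values, and the `tolerance_discrepancy`/`is_suspicious` helpers and threshold are all illustrative assumptions standing in for a trained N-ODE block and a calibrated detector.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy vector field standing in for a learned neural ODE block f(t, h).
# In a real N-ODE classifier this would be a trained network.
A = np.array([[-0.1, 2.0],
              [-2.0, -0.1]])

def dynamics(t, h):
    return A @ h

def ode_features(h0, rtol):
    """Integrate the dynamics over [0, 1] at a given solver tolerance,
    returning the final state (the 'features' fed to the classifier head)."""
    sol = solve_ivp(dynamics, (0.0, 1.0), h0, rtol=rtol, atol=1e-9)
    return sol.y[:, -1]

def tolerance_discrepancy(h0, train_rtol=1e-3, test_rtol=1e-7):
    """Distance between features computed at the train-time tolerance and at
    a decoupled, stricter test-time tolerance. The paper's premise is that
    adversarial inputs tend to be less stable under this change."""
    return float(np.linalg.norm(
        ode_features(h0, train_rtol) - ode_features(h0, test_rtol)))

def is_suspicious(h0, threshold=1e-2):
    # Hypothetical detector: reject inputs whose solver-tolerance
    # discrepancy exceeds a calibrated threshold.
    return tolerance_discrepancy(h0) > threshold

x = np.array([1.0, 0.0])
print(tolerance_discrepancy(x))
```

In the paper's setting the same comparison would be made on the logits or predicted labels of the full classifier, with the threshold calibrated on pristine samples to keep their rejection rate low.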
- Subjects
ORDINARY differential equations
- Publication
Information (2078-2489), 2022, Vol 13, Issue 12, p555
- ISSN
2078-2489
- Publication type
Article
- DOI
10.3390/info13120555