- Title
A Survey of Adversarial Example Attack and Defense Methods for Visual Perception in Intelligent Driving (面向智能驾驶视觉感知的对抗样本攻击与防御方法综述)
- Authors
杨弋鋆; 邵文泽; 王力谦; 葛琦; 鲍秉坤; 邓海松; 李海波
- Abstract
Deep learning has become one of the most active research directions in machine learning, achieving great success in fields such as image recognition, object detection, speech processing, and question-answering systems. However, the emergence of adversarial examples has prompted a rethinking of deep learning: the performance of deep learning models can be severely degraded by adversarial examples constructed by adding specially designed, subtle perturbations to the input. The existence of adversarial examples poses new threats and challenges to many safety-critical technical fields, especially automatic driving systems that rely on visual perception as their primary sensing technology. Research on adversarial attacks and active defenses has therefore become an important cross-cutting topic spanning deep learning and computer vision. This paper first summarizes the relevant concepts of adversarial examples, then introduces a series of typical adversarial attack methods and defense algorithms in detail. Subsequently, a number of physical-world attacks against visual perception are presented, along with a discussion of their potential impact on automatic driving. Finally, we give a technical outlook on future research into adversarial attacks and defenses.
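The abstract's core idea, crafting a subtle perturbation that raises a model's loss, can be illustrated with a minimal FGSM-style sketch on a toy logistic-regression model. The toy model, its weights, and the step size `eps` are all illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=16)   # fixed weights of a toy logistic "model"
x = rng.normal(size=16)   # a clean input vector (stand-in for an image)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(v):
    # binary cross-entropy against true label y = 1
    return -np.log(sigmoid(w @ v))

# Gradient of the loss w.r.t. the input (analytic for this toy model):
# dL/dx = (sigmoid(w @ x) - 1) * w
grad_x = (sigmoid(w @ x) - 1.0) * w

eps = 0.1
# FGSM step: nudge every input component by eps in the direction
# that increases the loss, keeping the perturbation imperceptibly small
x_adv = x + eps * np.sign(grad_x)
```

Each component of `x_adv` differs from `x` by at most `eps`, yet the perturbation is aligned with the loss gradient, so `loss(x_adv)` is at least `loss(x)` for this convex toy model; on deep networks the same first-order step is often enough to flip the prediction.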
- Publication
Journal of Nanjing University of Information Science & Technology (Natural Science Edition) / Nanjing Xinxi Gongcheng Daxue Xuebao (ziran kexue ban), 2019, Vol 11, Issue 6, p651
- ISSN
1674-7070
- Publication type
Article
- DOI
10.13878/j.cnki.jnuist.2019.06.003