- Title
Robust and privacy-preserving collaborative training: a comprehensive survey.
- Authors
Yang, Fei; Zhang, Xu; Guo, Shangwei; Chen, Daiyuan; Gan, Yan; Xiang, Tao; Liu, Yang
- Abstract
Increasing numbers of artificial intelligence systems employ collaborative machine learning techniques, such as federated learning, to build a shared, powerful deep model among participants while keeping their training data local. However, concerns about integrity and privacy have significantly hindered the adoption of collaborative learning systems. Numerous efforts have therefore been proposed to preserve the model’s integrity and reduce the privacy leakage of training data throughout the training phase of various collaborative learning systems. In contrast to prior surveys that focus on a single collaborative learning system, this survey seeks to provide a systematic and comprehensive evaluation of security and privacy studies in collaborative training. Our survey begins with an overview of collaborative learning systems from various perspectives. We then systematically summarize the integrity and privacy risks of collaborative learning systems. In particular, we describe state-of-the-art integrity attacks (e.g., Byzantine, backdoor, and adversarial attacks) and privacy attacks (e.g., membership, property, and sample inference attacks), as well as the associated countermeasures. We additionally analyze open problems to motivate future studies.
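The collaborative-training pattern the abstract describes (clients train on private local data; only model parameters, never raw data, are shared and aggregated by a server) can be illustrated with a minimal federated averaging (FedAvg) sketch. This is a generic, self-contained toy example, not code from the surveyed systems; the model (a 1-D linear regressor), data, and all names are illustrative assumptions.

```python
# Minimal FedAvg sketch: each client runs local SGD on its private data,
# and the server averages the returned model weights. No raw data ever
# leaves a client -- only the (w, b) parameters are communicated.
import random


def local_update(weights, data, lr=0.1, epochs=5):
    """One client's local training: per-sample SGD on y = w*x + b."""
    w, b = weights
    for _ in range(epochs):
        for x, y in data:
            err = (w * x + b) - y
            w -= lr * err * x
            b -= lr * err
    return (w, b)


def fed_avg(client_weights):
    """Server step: average the clients' locally trained weights."""
    n = len(client_weights)
    w = sum(cw[0] for cw in client_weights) / n
    b = sum(cw[1] for cw in client_weights) / n
    return (w, b)


def train(client_datasets, rounds=20):
    """Alternate local training and server-side averaging."""
    global_weights = (0.0, 0.0)
    for _ in range(rounds):
        updates = [local_update(global_weights, d) for d in client_datasets]
        global_weights = fed_avg(updates)
    return global_weights


def make_clients(n_clients=3, n_samples=20, seed=0):
    """Illustrative private datasets, all drawn from y = 2x + 1."""
    rng = random.Random(seed)
    return [
        [(x, 2 * x + 1) for x in (rng.random() for _ in range(n_samples))]
        for _ in range(n_clients)
    ]


if __name__ == "__main__":
    w, b = train(make_clients())
    print(f"w={w:.2f}, b={b:.2f}")  # approaches the true (2, 1)
```

The honest-but-curious threat models surveyed in the paper target exactly the `fed_avg` aggregation step: Byzantine or backdoor attackers submit poisoned updates, while privacy attackers try to infer training-data properties from the shared weights.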
- Publication
Artificial Intelligence Review, 2024, Vol 57, Issue 7, p1
- ISSN
0269-2821
- Publication type
Article
- DOI
10.1007/s10462-024-10797-0