- Title
Federated Learning with Efficient Aggregation via Markov Decision Process in Edge Networks.
- Authors
Liu, Tongfei; Wang, Hui; Ma, Maode
- Abstract
Federated Learning (FL), as an emerging paradigm in distributed machine learning, has received extensive research attention. However, few works consider the impact of device mobility on the learning efficiency of FL. In fact, training results suffer if heterogeneous clients migrate or go offline during the global aggregation process. To address this issue, an Optimal Global Aggregation strategy (OGAs) is proposed. OGAs first models the interaction between clients and servers in FL as a Markov Decision Process (MDP), jointly considering device mobility and data heterogeneity to determine which local participants are conducive to global aggregation. To obtain the optimal client participation strategy, an improved σ-value iteration method is used to solve the MDP, ensuring that the number of participating clients stays within an optimal interval in each global round. Furthermore, Principal Component Analysis (PCA) is applied to reduce the dimensionality of the original features, taming the complex state space of the MDP. Experimental results demonstrate that, compared with existing aggregation strategies, OGAs achieves faster convergence and higher training accuracy, significantly improving the learning efficiency of FL.
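The abstract's core idea, solving an MDP to pick how many clients join each aggregation round, rests on standard value iteration. The paper's actual formulation (PCA-reduced state features, the improved σ-value iteration) is not given here, so the following is only an illustrative sketch of plain value iteration on a hypothetical toy MDP; the shapes and reward design are assumptions, not the authors' method.

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-6):
    """Plain value iteration on a finite MDP.

    P[a][s][t] -- probability of moving from state s to t under action a
    R[s][a]    -- immediate reward for taking action a in state s
    Returns the value function V and a greedy policy (one action per state).
    """
    n_actions, n_states, _ = P.shape
    V = np.zeros(n_states)
    while True:
        # Q[s][a] = R[s][a] + gamma * sum_t P[a][s][t] * V[t]
        Q = R + gamma * np.einsum("ast,t->sa", P, V)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)
        V = V_new

# Toy example (hypothetical): two "participation levels" as states,
# action 0 keeps the current set of clients, action 1 switches level;
# only level 1 (enough reliable clients) yields aggregation reward.
P = np.zeros((2, 2, 2))
P[0] = np.eye(2)                     # action 0: stay at current level
P[1] = np.array([[0., 1.], [1., 0.]])  # action 1: switch level
R = np.array([[0., 0.],              # level 0: no reward either way
              [1., 0.]])             # level 1: reward 1 for staying
V, policy = value_iteration(P, R)
print(policy)  # greedy policy: move to level 1, then stay there
```

In the paper's setting the state would instead encode PCA-reduced client features, and the reward would reflect aggregation quality under mobility; this sketch only shows the solution machinery.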
- Subjects
FEDERATED learning; MARKOV processes; PRINCIPAL components analysis; HIGH speed trains; MACHINE learning
- Publication
Mathematics (2227-7390), 2024, Vol 12, Issue 6, p920
- ISSN
2227-7390
- Publication type
Article
- DOI
10.3390/math12060920