- Title
State-based episodic memory for multi-agent reinforcement learning.
- Authors
Ma, Xiao; Li, Wu-Jun
- Abstract
Multi-agent reinforcement learning (MARL) algorithms have made promising progress in recent years by leveraging the centralized training and decentralized execution (CTDE) paradigm. However, existing MARL algorithms still suffer from the sample inefficiency problem. In this paper, we propose a simple yet effective approach, called state-based episodic memory (SEM), to improve sample efficiency in MARL. SEM adopts episodic memory (EM) to supervise the centralized training procedure of CTDE in MARL. To the best of our knowledge, SEM is the first work to introduce EM into MARL. SEM has lower space complexity and time complexity than state-and-action-based EM (SAEM), originally proposed for single-agent reinforcement learning, when used for MARL. Experimental results on two synthetic environments and one real environment show that introducing episodic memory into MARL can improve sample efficiency, and that SEM can reduce storage cost and time cost compared with SAEM.
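To illustrate the storage argument in the abstract, here is a minimal sketch (not the authors' code; all names are hypothetical) of an episodic memory keyed on state alone. Keying on state gives one table entry per visited state, whereas a state-and-action-keyed memory (SAEM-style) would multiply storage by the size of the joint action space, which grows exponentially with the number of agents:

```python
# Hypothetical sketch of a state-keyed episodic memory.
# Entries map a state key to the highest discounted return observed
# from that state, which can then supervise centralized training.

class StateEpisodicMemory:
    def __init__(self):
        self.table = {}  # state key -> best discounted return seen so far

    def update(self, states, rewards, gamma=0.99):
        """After one finished episode, record for each visited state the
        discounted return from that point, keeping the maximum."""
        ret = 0.0
        # Walk the episode backwards to accumulate discounted returns.
        for s, r in zip(reversed(states), reversed(rewards)):
            ret = r + gamma * ret
            if s not in self.table or ret > self.table[s]:
                self.table[s] = ret

    def lookup(self, state):
        """Return the best known return from this state, or None."""
        return self.table.get(state)

mem = StateEpisodicMemory()
mem.update(states=["s0", "s1"], rewards=[0.0, 1.0], gamma=0.9)
print(mem.lookup("s0"))  # 0.0 + 0.9 * 1.0 = 0.9
```

In a real system the global state would be hashed or projected into a compact key; this sketch assumes hashable states for brevity.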
- Subjects
EPISODIC memory; REINFORCEMENT learning; TIME complexity; MARL
- Publication
Machine Learning, 2023, Vol 112, Issue 12, p5163
- ISSN
0885-6125
- Publication type
Article
- DOI
10.1007/s10994-023-06365-2