- Title
Map-based experience replay: a memory-efficient solution to catastrophic forgetting in reinforcement learning.
- Authors
Hafez, Muhammad Burhan; Immisch, Tilman; Weber, Tom; Wermter, Stefan
- Abstract
Deep reinforcement learning (RL) agents often suffer from catastrophic forgetting: when training on new data, they forget previously found solutions in other parts of the input space. Replay memories are a common solution to this problem, decorrelating and shuffling old and new training samples. However, they naively store state transitions as they arrive, without regard for redundancy. We introduce a novel cognitively inspired replay memory approach based on the Grow-When-Required (GWR) self-organizing network, which resembles a map-based mental model of the world. Our approach organizes stored transitions into a concise, environment-model-like network of state nodes and transition edges, merging similar samples to reduce the memory size and increase pair-wise distance among samples, which increases the relevancy of each sample. Overall, our study shows that map-based experience replay allows for significant memory reduction with only small decreases in performance.
- Subjects
REINFORCEMENT learning; COGNITIVE robotics; COLLECTIVE memory; SELF-organizing maps; WORLD maps
- Publication
Frontiers in Neurorobotics, 2023, p1
- ISSN
1662-5218
- Publication type
Article
- DOI
10.3389/fnbot.2023.1127642
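The GWR-style merging described in the abstract can be sketched roughly as follows. This is an illustrative toy, not the paper's implementation: the distance threshold, the learning rate, and the way edges store transition data are all assumptions made here for the example.

```python
import numpy as np

class MapBasedReplay:
    """Illustrative sketch of a Grow-When-Required-style replay memory:
    incoming states are merged into nearby prototype nodes instead of
    being stored verbatim, so near-duplicate transitions share storage.
    Hyperparameters and update rules are hypothetical."""

    def __init__(self, insert_threshold=0.5, lr=0.1):
        self.nodes = []               # state prototypes (np.ndarray)
        self.edges = {}               # (i, j) -> (action, reward)
        self.insert_threshold = insert_threshold
        self.lr = lr

    def _match(self, s):
        """Return the node index representing state s, growing if required."""
        if self.nodes:
            dists = [np.linalg.norm(s - n) for n in self.nodes]
            i = int(np.argmin(dists))
            if dists[i] <= self.insert_threshold:
                # Merge: nudge the winning prototype toward the new sample.
                self.nodes[i] += self.lr * (s - self.nodes[i])
                return i
        # No sufficiently close node: grow the network with a new one.
        self.nodes.append(np.array(s, dtype=float))
        return len(self.nodes) - 1

    def add(self, s, a, r, s_next):
        """Store a transition as an edge between two (possibly merged) nodes."""
        i = self._match(np.asarray(s, dtype=float))
        j = self._match(np.asarray(s_next, dtype=float))
        self.edges[(i, j)] = (a, r)   # keep latest transition on this edge

# Three near-duplicate transitions collapse onto two nodes and one edge,
# instead of occupying three separate buffer slots.
mem = MapBasedReplay(insert_threshold=0.5)
for eps in (0.0, 0.01, 0.02):
    mem.add([0.0 + eps, 0.0], 0, 1.0, [1.0 + eps, 0.0])
print(len(mem.nodes), len(mem.edges))  # 2 1
```

The key effect shown is the memory reduction: redundant samples are absorbed into existing prototypes, so buffer size grows with the diversity of experience rather than its volume.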