- Title
Output feedback reinforcement learning based optimal output synchronisation of heterogeneous discrete-time multi-agent systems
- Authors
Rizvi, Syed Ali Asad; Lin, Zongli
- Abstract
This study proposes a model-free distributed output feedback control scheme that synchronises the outputs of heterogeneous follower agents with the output of the leader agent in a directed network. A distributed two-degree-of-freedom approach is presented that separates the learning of the optimal output feedback term from that of the feedforward term in each agent's local control law. The local feedback parameters are learned using the proposed off-policy Q-learning algorithm, whereas a gradient adaptive law is presented to learn the local feedforward control parameters that achieve asymptotic tracking for each agent. Neither the learning scheme nor the resulting distributed control laws require access to the agents' local internal states, and no additional distributed leader state observer is needed. The proposed approach has an advantage over previous state augmentation approaches in that it circumvents the need to introduce a discounting factor in the local performance functions. It is shown that the proposed algorithm converges to the optimal solution of the algebraic Riccati equation and the output regulator equations without explicitly solving them, as long as the leader agent is reachable, directly or indirectly, from every follower agent. Simulation results validate the proposed scheme.
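To make the feedback-learning step of the abstract concrete, below is a minimal, illustrative Python sketch of off-policy Q-learning for a single discrete-time linear-quadratic problem. It is not the paper's algorithm: the paper learns an output feedback law distributively over a network and pairs it with a gradient adaptive law for the feedforward term, whereas this sketch assumes a single agent with full state measurement, and the system matrices and weights below are hypothetical placeholders. The off-policy element is that one batch of exploratory data is reused to evaluate every improved target policy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical open-loop-stable agent (placeholder matrices, not from the paper)
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[0.0],
              [0.1]])
Qw, Rw = np.eye(2), np.eye(1)   # weights of the local performance function
n, m = 2, 1

# Collect one batch of data under an exploratory behaviour policy.
T = 200
X = np.zeros((T + 1, n)); X[0] = [1.0, -1.0]
U = np.zeros((T, m))
for k in range(T):
    U[k] = 0.1 * rng.standard_normal(m)       # pure exploration input
    X[k + 1] = A @ X[k] + B @ U[k]

def q_basis(x, u):
    z = np.concatenate([x, u])
    return np.kron(z, z)                      # quadratic features z (x) z

K = np.zeros((m, n))                          # initial stabilising policy
for _ in range(20):                           # policy iteration
    # Policy evaluation: least-squares fit of the Q-function Bellman equation
    # Q(x_k, u_k) = c_k + Q(x_{k+1}, K x_{k+1}), reusing the fixed data batch.
    Phi, y = [], []
    for k in range(T):
        u_next = K @ X[k + 1]                 # target policy's (virtual) action
        Phi.append(q_basis(X[k], U[k]) - q_basis(X[k + 1], u_next))
        y.append(X[k] @ Qw @ X[k] + U[k] @ Rw @ U[k])
    h, *_ = np.linalg.lstsq(np.array(Phi), np.array(y), rcond=None)
    H = h.reshape(n + m, n + m); H = (H + H.T) / 2
    # Policy improvement from the Q-function blocks: K = -H_uu^{-1} H_ux
    Huu, Hux = H[n:, n:], H[n:, :n]
    K = -np.linalg.solve(Huu, Hux)

print("learned feedback gain K =", K)
```

Under these assumptions the iteration converges to the gain obtained from the discrete-time algebraic Riccati equation without the model being used in the learning updates; the redundant Kronecker features are handled by the minimum-norm least-squares solution followed by symmetrisation of H. The paper's contribution goes further: output (rather than state) feedback, a distributed network setting, and avoidance of a discounting factor.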
- Subjects
FEEDBACK control systems; REINFORCEMENT learning; DISCRETE-time systems; MULTIAGENT systems; RICCATI equation; ALGEBRAIC equations
- Publication
IET Control Theory & Applications (Wiley-Blackwell), 2019, Vol. 13, Issue 17, p. 2866
- ISSN
1751-8644
- Publication type
Academic Journal
- DOI
10.1049/iet-cta.2018.6266