Swarm intelligence in cooperative environments: introducing the N-step dynamic tree search algorithm
Abstract
Uncertainty and partial or unknown information about environment dynamics have led reward-based methods to play a key role in both single-agent and multi-agent learning problems. Tree-based planning approaches such as the Monte Carlo Tree Search algorithm have been a striking success in single-agent domains where a perfect simulator model is available, e.g., the strategic board games Go and chess. This paper presents a decentralized tree-based planning scheme that combines forward planning with direct reinforcement learning temporal-difference updates, applied to the multi-agent setting. Forward planning requires an engine model, which is learned from experience and represented via function approximation. Evaluation and validation are carried out in the Hunter-Prey Pursuit cooperative environment, and performance is compared with state-of-the-art RL techniques. N-Step Dynamic Tree Search (NSDTS) aims to adapt the most successful single-agent learning methods to the multi-agent setting within a decentralized system structure, addressing the scalability issues and exponential growth of computational resources suffered by centralized systems. NSDTS proves to be a remarkable advance over the conventional Q-learning temporal-difference method.
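The abstract contrasts NSDTS with conventional one-step Q-learning and mentions n-step temporal-difference targets combined with forward planning. As a rough illustration of the baseline ingredient only (not the paper's actual algorithm), a tabular n-step Q-learning update might look like the following sketch; the function name, dictionary-based Q-table, and default hyperparameters are illustrative assumptions:

```python
def n_step_q_update(Q, steps, bootstrap_state, alpha=0.1, gamma=0.99):
    """Illustrative n-step Q-learning update on a tabular Q function.

    Q:               dict mapping state -> dict mapping action -> value
    steps:           list of n consecutive (state, action, reward) transitions
    bootstrap_state: state reached after the last transition (None if terminal)
    """
    # Accumulate the discounted n-step return from the observed rewards.
    G = 0.0
    for i, (_, _, reward) in enumerate(steps):
        G += (gamma ** i) * reward
    # Bootstrap from the greedy Q-value of the state n steps ahead.
    if bootstrap_state is not None:
        G += (gamma ** len(steps)) * max(Q[bootstrap_state].values())
    # Move the value of the first (state, action) pair toward the n-step target.
    s0, a0, _ = steps[0]
    Q[s0][a0] += alpha * (G - Q[s0][a0])
    return Q
```

In NSDTS the n-step target is obtained from forward planning in a learned model rather than from a purely observed trajectory, and each agent maintains its own estimates in the decentralized structure described above.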