Strategic conflict management using recurrent multi-agent reinforcement learning for urban air mobility operations considering uncertainties
Abstract
The rapid growth of urban air mobility (UAM) is driving heavy demand for public air transport and poses great challenges to safe and efficient operation in low-altitude urban airspace. In this paper, operational conflicts are managed in the strategic phase with multi-agent reinforcement learning (MARL) in dynamic environments. To enable efficient operation, aircraft flight performance is integrated into multi-resolution airspace design, trajectory generation, conflict management, and MARL training. The demand and capacity balancing (DCB) issue, separation conflicts, and block unavailability introduced by wind turbulence are resolved by the proposed multi-agent asynchronous advantage actor-critic (MAA3C) framework, in which recurrent actor-critic networks enable automatic action selection among ground delay, speed adjustment, and flight cancellation. To evaluate the trained models, the learned parameters in MAA3C are replaced with random values as a baseline for comparison. Simulated training and test experiments on a small urban prototype and various combined use cases demonstrate the superiority of the MAA3C solution in resolving conflicts under complicated wind fields. The generalization, scalability, and stability of the model are also demonstrated when applying it to complex environments.
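The paper's MAA3C implementation is not reproduced here. As an illustration only, the sketch below shows a generic recurrent actor-critic network with a discrete three-action head matching the strategic actions named in the abstract (ground delay, speed adjustment, flight cancellation), assuming a PyTorch implementation; the class and parameter names (RecurrentActorCritic, obs_dim, hidden_dim) are hypothetical and not taken from the paper.

```python
import torch
import torch.nn as nn


class RecurrentActorCritic(nn.Module):
    """Illustrative recurrent actor-critic: a GRU encodes the observation
    history so the policy can condition on past traffic and wind states;
    separate heads output a policy over the three strategic actions and a
    state-value estimate (hypothetical sketch, not the paper's MAA3C code)."""

    def __init__(self, obs_dim: int, hidden_dim: int = 128, n_actions: int = 3):
        super().__init__()
        self.encoder = nn.Linear(obs_dim, hidden_dim)
        self.gru = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        # 0 = ground delay, 1 = speed adjustment, 2 = flight cancellation
        self.policy_head = nn.Linear(hidden_dim, n_actions)
        self.value_head = nn.Linear(hidden_dim, 1)

    def forward(self, obs_seq, hidden=None):
        # obs_seq: (batch, time, obs_dim); hidden carries the recurrent state
        x = torch.relu(self.encoder(obs_seq))
        x, hidden = self.gru(x, hidden)
        logits = self.policy_head(x)             # action logits per timestep
        value = self.value_head(x).squeeze(-1)   # state value per timestep
        return torch.distributions.Categorical(logits=logits), value, hidden


# Example: one rollout for a batch of 4 agents over 10 timesteps
net = RecurrentActorCritic(obs_dim=16)
dist, value, h = net(torch.randn(4, 10, 16))
action = dist.sample()  # sampled strategic action per agent and timestep
```

In an asynchronous advantage actor-critic setup such as MAA3C, multiple workers of this kind would collect experience in parallel and update shared parameters with policy-gradient and value losses; the recurrent state is what lets action selection account for the evolving wind and demand conditions described in the abstract.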