Cooperative planning for an unmanned combat aerial vehicle fleet using reinforcement learning

dc.contributor.author: Yuksek, Burak
dc.contributor.author: Demirezen, Mustafa Umut
dc.contributor.author: Inalhan, Gokhan
dc.contributor.author: Tsourdos, Antonios
dc.date.accessioned: 2021-10-01T12:20:12Z
dc.date.available: 2021-10-01T12:20:12Z
dc.date.issued: 2021-07-07
dc.description.abstract: In this study, reinforcement learning (RL)-based centralized path planning is performed for an unmanned combat aerial vehicle (UCAV) fleet in a human-made hostile environment. The proposed method provides a novel approach in which closing-speed and approximate time-to-go terms are used in the reward function to obtain cooperative motion while satisfying no-fly-zone (NFZ) and time-of-arrival constraints. The proximal policy optimization (PPO) algorithm is used in the training phase of the RL agent. System performance is evaluated in two different cases. In case 1, the warfare environment contains only the target area, and simultaneous arrival is desired to obtain a saturated attack effect. In case 2, the warfare environment contains NFZs in addition to the target area, along with the standard saturated attack and collision avoidance requirements. A particle swarm optimization (PSO)-based cooperative path planning algorithm is implemented as the baseline method and compared with the proposed algorithm in terms of execution time and the developed performance metrics. Monte Carlo simulation studies are performed to evaluate the system performance. According to the simulation results, the proposed system is able to generate feasible flight paths in real time while considering physical and operational constraints such as acceleration limits, NFZ restrictions, simultaneous arrival, and collision avoidance requirements. In that respect, the approach provides a novel and computationally efficient method for solving the large-scale cooperative path planning problem for UCAV fleets.
dc.identifier.citation: Yuksek B, Demirezen MU, Inalhan G, Tsourdos A. (2021) Cooperative planning for an unmanned combat aerial vehicle fleet using reinforcement learning, Journal of Aerospace Information Systems, Volume 18, Issue 10, October 2021, pp. 739-750.
dc.identifier.issn: 2327-3097
dc.identifier.uri: https://doi.org/10.2514/1.I010961
dc.identifier.uri: https://dspace.lib.cranfield.ac.uk/handle/1826/17128
dc.language.iso: en
dc.publisher: American Institute of Aeronautics and Astronautics (AIAA)
dc.rights: Attribution-NonCommercial 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by-nc/4.0/
dc.title: Cooperative planning for an unmanned combat aerial vehicle fleet using reinforcement learning
dc.type: Article
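
The abstract describes a reward function built from closing-speed and approximate time-to-go terms to encourage simultaneous arrival while avoiding NFZs. The following minimal sketch illustrates one way such a shaped reward could look for a centralized fleet policy; the weights, function name, and disc-shaped NFZ model are illustrative assumptions, not the paper's actual formulation.

# Hypothetical sketch of a shaped fleet reward, assuming a 2-D kinematic
# setting, circular NFZs, and illustrative weights. Not the paper's actual
# reward function.
import numpy as np

def fleet_reward(positions, velocities, target, nfz_centers, nfz_radius,
                 w_close=1.0, w_sync=0.5, w_nfz=10.0):
    # positions, velocities: (n, 2) arrays; target: (2,); nfz_centers: (m, 2).
    to_target = target - positions
    dist = np.linalg.norm(to_target, axis=1)
    speed = np.linalg.norm(velocities, axis=1)
    # Closing speed: velocity component along each vehicle's line of sight
    # to the target (positive when closing in).
    closing = np.einsum("ij,ij->i", velocities, to_target) / np.maximum(dist, 1e-6)
    # Approximate time-to-go = range / speed; penalizing its spread across
    # the fleet encourages simultaneous arrival (the saturated attack effect).
    t_go = dist / np.maximum(speed, 1e-6)
    # Count vehicles currently inside any circular NFZ.
    d_nfz = np.linalg.norm(positions[:, None, :] - nfz_centers[None, :, :], axis=2)
    violations = np.sum(d_nfz < nfz_radius)
    return w_close * closing.mean() - w_sync * np.std(t_go) - w_nfz * violations

In practice such a reward would be evaluated at each simulation step inside the training environment, with the centralized policy trained by a standard PPO implementation; the sketch above only fixes the shaping terms the abstract names.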

Files

Original bundle
Name: unmanned_combat_aerial_vehicle_fleet-2021.pdf
Size: 2.77 MB
Format: Adobe Portable Document Format

License bundle
Name: license.txt
Size: 1.63 KB
Format: Item-specific license agreed upon at submission