Autonomous path selection of unmanned aerial vehicle in dynamic environment using reinforcement learning
Abstract
The Unmanned Aerial Vehicle (UAV) is an emerging area within the aviation industry. Currently, fully autonomous UAV operations in real-world scenarios are rare due to low technology readiness and a lack of trust. However, Artificial Intelligence (AI) offers powerful tools for adapting to changing conditions and handling complex perception. In the automotive domain, self-driving technologies have made significant advances. To raise the level of autonomy in aviation, it is beneficial to analyze these frameworks and extend autonomous driving principles to autonomous flying. This research introduces a novel solution for safe UAV navigation by adopting the concept of autonomous lane or path selection strategies used in cars. The approach employs deep reinforcement learning (DRL) for high-level decision-making, selecting the appropriate path from candidates generated by established algorithms that consider different scenarios. Specifically, the Interfered Fluid Dynamical System (IFDS) is used for guidance and a PID controller for the flight control system. The UAV can choose between global and local paths and determine the appropriate speed for following them. The proposed framework lays the foundation for future research into practical and safe navigation strategies for UAVs.
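To make the high-level decision layer concrete, the sketch below illustrates one way such a DRL selector could be structured: a small Q-network that, given a UAV state vector, picks a (path, speed) pair, where the path is either the global route or a locally generated avoidance route. This is not the authors' implementation; the state features, network sizes, speed levels, and action discretisation are assumptions made purely for illustration.

import random
import torch
import torch.nn as nn

PATHS = ["global", "local"]            # candidate paths from the planners (e.g. IFDS-based local path)
SPEEDS = [5.0, 10.0, 15.0]             # hypothetical speed levels in m/s (assumed)
ACTIONS = [(p, v) for p in PATHS for v in SPEEDS]

STATE_DIM = 8                          # e.g. relative obstacle/goal geometry (assumed feature count)

class QNetwork(nn.Module):
    """Maps a UAV state vector to Q-values over (path, speed) actions."""
    def __init__(self, state_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

def select_action(q_net: QNetwork, state: torch.Tensor, epsilon: float):
    """Epsilon-greedy choice of a (path, speed) pair for the current state."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    with torch.no_grad():
        q_values = q_net(state.unsqueeze(0)).squeeze(0)
    return ACTIONS[int(torch.argmax(q_values))]

if __name__ == "__main__":
    q_net = QNetwork(STATE_DIM, len(ACTIONS))
    dummy_state = torch.zeros(STATE_DIM)   # placeholder for a real sensed state
    path, speed = select_action(q_net, dummy_state, epsilon=0.1)
    print(f"Selected path: {path}, commanded speed: {speed} m/s")

In a full pipeline of the kind the abstract describes, the chosen path would be handed to the guidance layer (IFDS in this work) and the commanded speed tracked by the low-level PID flight controller; the Q-network would be trained with a standard DRL algorithm such as DQN against a reward that penalises collisions and deviation from the mission route.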