Browsing by Author "Perrusquia, Adolfo"
Now showing 1 - 4 of 4
Item Open Access
Autonomous path selection of unmanned aerial vehicle in dynamic environment using reinforcement learning (AIAA, 2025-01-06)
Tamanakijprasart, Komsun; Perrusquia, Adolfo; Mondal, Sabyasachi; Tsourdos, Antonios
The Unmanned Aerial Vehicle (UAV) is an emerging area within the aviation industry. Currently, fully autonomous UAV operations in real-world scenarios are rare due to low technology readiness and a lack of trust. However, Artificial Intelligence (AI) offers powerful tools for adapting to changing conditions and handling complex perception. In the automotive domain, self-driving technologies have made significant advances; to enhance the level of autonomy in aviation, it is beneficial to analyze these frameworks and extend autonomous driving principles to autonomous flying. This research introduces a novel solution for ensuring safe navigation of UAVs by adopting the lane and path selection strategies used in autonomous cars. The approach employs deep reinforcement learning (DRL) for the high-level decision of selecting the appropriate path among those generated by established algorithms for different scenarios. Specifically, the Interfered Fluid Dynamical System (IFDS) \cite{IFDS_OG} is utilized for guidance and a PID controller for the flight control system. The UAV can choose between global and local paths and determine the appropriate speed for following them. The proposed framework lays the foundation for future research into practical and safe navigation strategies for UAVs. (A toy sketch of this decision layer follows the next item's abstract.)

Item Open Access
Explaining data-driven control in autonomous systems: a reinforcement learning case study (IEEE, 2024-10-18)
Zou, Mengbang; Perrusquia, Adolfo; Guo, Weisi
Explaining what a data-driven control algorithm learns plays a crucial role in the safety-critical control of autonomous platforms in transportation. This is especially acute in reinforcement learning control algorithms, where the learned control policy depends on various factors hidden within the data. Explainable artificial intelligence methods have been used to explain the outcomes of machine learning methods by analysing input-output relations. However, data-driven control is not a simple input-output mapping, and hence the resulting explanations lack depth. To deal with this issue, this paper proposes an explainable data-driven control method that makes it possible to understand what the data-driven method is learning from the data. The model is composed of a Q-learning algorithm enhanced by a dynamic mode decomposition with control (DMDc) algorithm for state-transition function estimation. Both the Q-learning and DMDc components provide the elements that are learned from the data and allow the construction of counterfactual explanations. The proposed approach is robust and does not require hyperparameter tuning. Simulation experiments are conducted to observe the benefits and challenges of the method.
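For the path-selection paper above, the following is a minimal sketch of the kind of high-level agent it describes: a reinforcement learning policy picking a (path, speed) pair that is then handed to the IFDS guidance and PID control loops. The discrete action space, the state discretisation, and the use of tabular Q-learning (a simplified stand-in for the paper's DRL agent) are all assumptions made for illustration, not the authors' implementation.

```python
import numpy as np

# Hypothetical action space: which path to follow and at what speed.
PATHS = ["global", "local"]            # assumed options, per the abstract
SPEEDS = [5.0, 10.0, 15.0]             # m/s, illustrative values only
ACTIONS = [(p, v) for p in PATHS for v in SPEEDS]

class PathSelector:
    """Tabular Q-learning over a discretised environment state.

    A simplified stand-in for the paper's DRL agent: the chosen
    (path, speed) action would be passed to IFDS guidance and the
    PID flight controller, which are outside this sketch.
    """
    def __init__(self, n_states=10, alpha=0.1, gamma=0.95, eps=0.1):
        self.Q = np.zeros((n_states, len(ACTIONS)))
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def act(self, s):
        if np.random.rand() < self.eps:      # epsilon-greedy exploration
            return np.random.randint(len(ACTIONS))
        return int(np.argmax(self.Q[s]))

    def update(self, s, a, r, s_next):
        td = r + self.gamma * self.Q[s_next].max() - self.Q[s, a]
        self.Q[s, a] += self.alpha * td
```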
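The second paper above pairs Q-learning with DMDc for state-transition function estimation. The DMDc step itself has a standard least-squares form, sketched below on a toy linear system; the matrix shapes and the toy dynamics are illustrative and not taken from the paper.

```python
import numpy as np

def dmdc_fit(X, Xp, U):
    """Estimate x_{k+1} ~ A x_k + B u_k from snapshot matrices.

    X  : (n, m) states x_1 .. x_m
    Xp : (n, m) shifted states x_2 .. x_{m+1}
    U  : (q, m) inputs u_1 .. u_m
    """
    Omega = np.vstack([X, U])            # stacked state-input data matrix
    G = Xp @ np.linalg.pinv(Omega)       # least-squares operator [A B]
    n = X.shape[0]
    return G[:, :n], G[:, n:]            # split into A and B

# Toy usage: recover a known linear system from input-output data.
rng = np.random.default_rng(0)
A_true = np.array([[0.9, 0.1], [0.0, 0.8]])
B_true = np.array([[0.0], [1.0]])
x = rng.normal(size=(2, 1))
X, Xp, U = [], [], []
for _ in range(200):
    u = rng.normal(size=(1, 1))
    x_next = A_true @ x + B_true @ u
    X.append(x); U.append(u); Xp.append(x_next)
    x = x_next
A_hat, B_hat = dmdc_fit(np.hstack(X), np.hstack(Xp), np.hstack(U))
```

The estimated (A, B) pair exposes what the transition model has learned from the data and, together with the learned Q-function, supports the counterfactual explanations the abstract mentions.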
Item Open Access
Integrating explainable AI into two-tier ML models for trustworthy aircraft landing gear fault diagnosis (AIAA, 2025-01-06)
KN, Kadripathi; Perrusquia, Adolfo; Tsourdos, Antonios; Ignatyev, Dmitry
As the aviation industry increasingly relies on data-driven intelligence to enhance safety and operational efficiency, the demand for AI solutions that are both technically robust and readily interpretable continues to intensify. This research presents a pioneering methodology for advanced fault diagnosis in aircraft landing gear systems that not only achieves high predictive accuracy but also provides transparent, actionable insights. Building upon a two-tier machine learning framework that integrates fault classification with intelligent sensor data imputation, we demonstrate how state-of-the-art explainability techniques, notably LIME and SHAP, can elucidate the underlying logic of complex models. By exposing the critical features and sensor parameters driving each decision, this approach empowers maintenance engineers and operations personnel to understand, validate, and trust the model’s outputs rather than relying on opaque “black-box” predictions. Our results indicate that interpretable fault diagnoses facilitate more confident decision-making, streamline maintenance interventions, and reduce the likelihood of unforeseen component failures. Beyond mere compliance with emerging regulatory standards for AI transparency, this method establishes a blueprint for deploying machine learning solutions that are not only accurate and robust but also inherently comprehensible. In an era where aerospace systems must seamlessly integrate precision, reliability, and human oversight, our work sets a precedent for creating intelligent tools that foster trust, enhance collaboration between technical experts and AI models, and ultimately contribute to safer and more efficient aviation operations. (A toy sketch of the SHAP attribution step follows the next item's abstract.)

Item Open Access
Selective exploration and information gathering in search and rescue using hierarchical learning guided by natural language input (IEEE, 2024-10-06)
Panagopoulos, Dimitrios; Perrusquia, Adolfo; Guo, Weisi
In recent years, robots and autonomous systems have become increasingly integral to our daily lives, offering solutions to complex problems across various domains. Their application in search and rescue (SAR) operations, however, presents unique challenges. Comprehensively exploring a disaster-stricken area is often infeasible due to the vastness of the terrain, the transformed environment, and the time constraints involved. Traditional robotic systems typically operate on predefined search patterns and lack the ability to incorporate and exploit ground truths provided by human stakeholders, which can be key to speeding up the learning process and enhancing triage. Addressing this gap, we introduce a system that integrates social interaction via large language models (LLMs) with a hierarchical reinforcement learning (HRL) framework. The proposed system is designed to translate verbal inputs from human stakeholders into actionable RL insights and to adjust its search strategy accordingly. By leveraging human-provided information through LLMs and structuring task execution through HRL, our approach not only bridges the gap between autonomous capabilities and human intelligence but also significantly improves the agent's learning efficiency and decision-making process in environments characterised by long horizons and sparse rewards.
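For the landing-gear paper, the sketch below shows only the SHAP attribution step, run on a stand-in classifier trained on synthetic data. The two-tier pipeline (imputation tier plus classification tier), the real sensor parameters, and the LIME analysis are not reproduced here; the feature names are invented for illustration.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

# Hypothetical landing-gear sensor channels; the real parameters differ.
feature_names = ["strut_pressure", "brake_temp", "wheel_speed", "shock_travel"]

rng = np.random.default_rng(1)
X = rng.normal(size=(500, len(feature_names)))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # synthetic "fault" label

# Stand-in for the classification tier of the two-tier framework.
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# SHAP attributes each prediction to the sensor features that drove it,
# which is the kind of per-decision transparency the abstract describes.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:10])      # per-feature attributions
```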
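For the search-and-rescue paper, the sketch below shows only the shape of the coupling the abstract describes: a verbal report is turned into a prior over search regions, which biases the high-level tier of an HRL agent. The keyword matcher is a hypothetical stand-in for the LLM, and the region set and weighting scheme are invented for illustration, not the authors' method.

```python
import numpy as np

REGIONS = ["north", "south", "east", "west"]     # invented search regions

def parse_stakeholder_input(text):
    """Hypothetical stand-in for the LLM step: map a verbal report to
    priority weights over regions. The real system queries an LLM."""
    weights = np.ones(len(REGIONS))
    for i, region in enumerate(REGIONS):
        if region in text.lower():
            weights[i] += 2.0                    # boost regions that are named
    return weights / weights.sum()

class HighLevelPolicy:
    """Top tier of an HRL agent: picks which region to explore next,
    biased by the language-derived prior."""
    def __init__(self, n_regions=len(REGIONS)):
        self.preferences = np.zeros(n_regions)   # learned values go here

    def select_region(self, prior, beta=1.0):
        # Softmax over learned preferences shifted by the log-prior.
        logits = self.preferences + beta * np.log(prior + 1e-8)
        probs = np.exp(logits - logits.max())
        return int(np.random.choice(len(probs), p=probs / probs.sum()))

prior = parse_stakeholder_input("Survivors were last seen near the north bridge")
region = HighLevelPolicy().select_region(prior)  # low-level tier not shown
```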