Integrating explainable AI into two-tier ML models for trustworthy aircraft landing gear fault diagnosis
Abstract
As the aviation industry increasingly relies on data-driven intelligence to enhance safety and operational efficiency, the demand for AI solutions that are both technically robust and readily interpretable continues to grow. This research presents a methodology for fault diagnosis in aircraft landing gear systems that achieves high predictive accuracy while providing transparent, actionable insights. Building upon a two-tier machine learning framework that integrates fault classification with intelligent sensor data imputation, we demonstrate how state-of-the-art explainability techniques, notably LIME and SHAP, can elucidate the underlying logic of complex models. By exposing the critical features and sensor parameters driving each decision, this approach enables maintenance engineers and operations personnel to understand, validate, and trust the model's outputs rather than relying on opaque "black-box" predictions. Our results indicate that interpretable fault diagnoses support more confident decision-making, streamline maintenance interventions, and reduce the likelihood of unforeseen component failures. Beyond compliance with emerging regulatory standards for AI transparency, this method establishes a blueprint for deploying machine learning solutions that are not only accurate and robust but also inherently comprehensible. In an era where aerospace systems must integrate precision, reliability, and human oversight, our work sets a precedent for intelligent tools that foster trust, enhance collaboration between technical experts and AI models, and ultimately contribute to safer and more efficient aviation operations.
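To illustrate the kind of local explanation the abstract describes, the following is a minimal sketch of the core LIME idea applied to a fault classifier: perturb the sensor readings around one instance, query the black-box model, and fit a proximity-weighted linear surrogate whose coefficients indicate which features drove that prediction. The sensor names, the synthetic data, and the classifier choice are all hypothetical stand-ins, not the thesis's actual two-tier model or dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Hypothetical landing-gear sensor features (illustrative names only).
feature_names = ["strut_pressure", "brake_temp", "shock_travel"]

# Synthetic data: the fault label is driven mainly by the first two sensors.
X = rng.normal(size=(500, 3))
y = (0.8 * X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=500) > 0).astype(int)

# Stand-in "black-box" fault classifier.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def lime_style_explanation(model, x, n_samples=2000, kernel_width=0.75):
    """Fit a locally weighted linear surrogate around instance x (core LIME idea)."""
    # Perturb the instance to sample the model's local neighbourhood.
    Z = x + rng.normal(scale=0.5, size=(n_samples, x.size))
    probs = model.predict_proba(Z)[:, 1]          # black-box fault probabilities
    dists = np.linalg.norm(Z - x, axis=1)
    weights = np.exp(-(dists ** 2) / kernel_width ** 2)  # closer samples count more
    surrogate = Ridge(alpha=1.0).fit(Z, probs, sample_weight=weights)
    # Each coefficient approximates that sensor's local influence on the decision.
    return dict(zip(feature_names, surrogate.coef_))

explanation = lime_style_explanation(clf, X[0])
```

In the full LIME and SHAP libraries the same end product, a per-feature attribution for one prediction, is what lets a maintenance engineer check that a fault call rests on physically plausible sensor evidence.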