A data-driven approach for automatic aircraft engine borescope inspection defect detection using computer vision and deep learning
Abstract
Regular aircraft engine inspections play a crucial role in aviation safety. However, traditional inspections are often performed manually and rely heavily on the judgment and experience of the operator. This paper presents a data-driven deep learning framework capable of automatically detecting defects on engine blades. Specifically, the study develops deep neural network models, based on Computer Vision and YOLOv8n object detection techniques, to detect defects in borescope images across several datasets. First, engine blade images are collected from public resources, then annotated and preprocessed into different groups using Computer Vision techniques. In addition, synthetic images are generated with Deep Convolutional Generative Adversarial Networks and through a manual data augmentation approach that randomly pastes defects onto blade images. YOLOv8n-based deep learning models are subsequently fine-tuned and trained on these dataset groups. The results indicate that the model trained on wide-shot blade images detects blade defects better overall than the model trained on zoomed-in images. A comparison across multiple models reveals inherent uncertainties in model performance: while some models trained on data enhanced by Computer Vision techniques may appear more reliable for certain defect types, the relationship between these techniques and the resulting performance cannot be generalized. An analysis of the impact of training epochs and optimizers on model performance indicates that incorporating rotated images and selecting an appropriate optimizer are key factors for effective training. Furthermore, models trained solely on artificially generated collage images perform poorly at detecting defects in real images; a potential remedy is to train on a mixture of synthetic and real images. Future work will focus on improving the framework's performance and conducting a more comprehensive uncertainty analysis by using larger and more diverse datasets, supported by greater computational power.
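To illustrate the synthetic-image generation step, the sketch below shows a minimal DCGAN generator in PyTorch of the kind commonly used for this purpose; the latent dimension, feature widths, and 64x64 output size are illustrative assumptions rather than the exact architecture used in the study.

```python
# Minimal DCGAN generator sketch (PyTorch) for synthesising 64x64 blade images.
# Latent size, channel widths, and output resolution are illustrative assumptions.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, latent_dim=100, feat=64, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(latent_dim, feat * 8, 4, 1, 0, bias=False),
            nn.BatchNorm2d(feat * 8), nn.ReLU(True),   # -> 4x4
            nn.ConvTranspose2d(feat * 8, feat * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feat * 4), nn.ReLU(True),   # -> 8x8
            nn.ConvTranspose2d(feat * 4, feat * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feat * 2), nn.ReLU(True),   # -> 16x16
            nn.ConvTranspose2d(feat * 2, feat, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feat), nn.ReLU(True),       # -> 32x32
            nn.ConvTranspose2d(feat, channels, 4, 2, 1, bias=False),
            nn.Tanh(),                                 # -> 64x64 image in [-1, 1]
        )

    def forward(self, z):
        # z has shape (batch, latent_dim, 1, 1)
        return self.net(z)

# Example: sample a batch of synthetic blade images from random noise.
fake_images = Generator()(torch.randn(16, 100, 1, 1))
```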
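The manual "collage" augmentation can be sketched as pasting a cropped defect patch at a random location on a clean blade image and writing a YOLO-format bounding-box label for it. The file names, the alpha-masked patch, and the single-class label below are illustrative assumptions, not the study's actual pipeline.

```python
# Sketch of collage-style augmentation: paste a defect patch onto a clean blade
# image at a random position and emit a YOLO-format label for the pasted region.
import random
from PIL import Image

def paste_defect(blade_path, defect_path, out_image, out_label, class_id=0):
    blade = Image.open(blade_path).convert("RGB")
    defect = Image.open(defect_path).convert("RGBA")  # alpha channel used as paste mask

    # Random top-left corner that keeps the patch fully inside the blade image.
    x = random.randint(0, blade.width - defect.width)
    y = random.randint(0, blade.height - defect.height)
    blade.paste(defect, (x, y), defect)
    blade.save(out_image)

    # YOLO label format: class, normalized center x/y, normalized width/height.
    cx = (x + defect.width / 2) / blade.width
    cy = (y + defect.height / 2) / blade.height
    w = defect.width / blade.width
    h = defect.height / blade.height
    with open(out_label, "w") as f:
        f.write(f"{class_id} {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}\n")

# Hypothetical file names for illustration only.
paste_defect("clean_blade.jpg", "crack_patch.png", "synthetic_blade.jpg", "synthetic_blade.txt")
```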
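Fine-tuning YOLOv8n on such a dataset group is typically done through the Ultralytics Python API, as in the sketch below; the dataset configuration file, epoch count, and optimizer choice are placeholders standing in for the values compared in the study.

```python
# Sketch of fine-tuning YOLOv8n with the Ultralytics API.
# Dataset config, epochs, and optimizer are illustrative, not the study's settings.
from ultralytics import YOLO

# Start from COCO-pretrained YOLOv8n weights and fine-tune on the blade dataset.
model = YOLO("yolov8n.pt")

model.train(
    data="blade_defects.yaml",  # hypothetical dataset config (train/val paths, class names)
    epochs=100,                 # illustrative; the study varies the number of epochs
    imgsz=640,
    optimizer="AdamW",          # illustrative; the study compares different optimizers
)

# Evaluate on the validation split and run inference on a borescope frame.
metrics = model.val()
results = model.predict("borescope_frame.jpg")  # hypothetical image path
```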