Illuminating the neural landscape of pilot mental states: a convolutional neural network approach with SHapley Additive exPlanations interpretability

dc.contributor.author: Alreshidi, Ibrahim
dc.contributor.author: Bisandu, Desmond Bala
dc.contributor.author: Moulitsas, Irene
dc.date.accessioned: 2023-11-10T11:32:38Z
dc.date.available: 2023-11-10T11:32:38Z
dc.date.issued: 2023-11-11
dc.description.abstract: Predicting pilots’ mental states is a critical challenge in aviation safety and performance, with electroencephalogram (EEG) data offering a promising avenue for detection. However, the interpretability of the machine learning and deep learning models often used for such tasks remains a significant issue. This study addresses these challenges by developing an interpretable model to detect four mental states (channelised attention, diverted attention, startle/surprise, and normal state) in pilots using EEG data. The methodology involves training a convolutional neural network on power spectral density features of EEG data from 17 pilots. The model’s interpretability is enhanced using SHapley Additive exPlanations (SHAP) values, which identify the top 10 most influential features for each mental state. The results demonstrate high performance across all metrics, with an average accuracy of 96%, a precision of 96%, a recall of 94%, and an F1 score of 95%. An examination of the effects of mental states on EEG frequency bands further elucidates the neural mechanisms underlying these states. The innovative nature of this study lies in its combination of high-performance model development, improved interpretability, and in-depth analysis of the neural correlates of mental states. This approach not only addresses the critical need for effective and interpretable mental state detection in aviation but also contributes to our understanding of the neural underpinnings of these states. This study thus represents a significant advancement in the field of EEG-based mental state detection.
dc.identifier.citation: Alreshidi I, Bisandu D, Moulitsas I. (2023) Illuminating the neural landscape of pilot mental states: a convolutional neural network approach with SHapley Additive exPlanations interpretability, Sensors, Volume 23, Issue 22, November 2023, Article Number 9052
dc.identifier.issn: 1424-8220
dc.identifier.uri: https://doi.org/10.3390/s23229052
dc.identifier.uri: https://dspace.lib.cranfield.ac.uk/handle/1826/20535
dc.language.iso: en
dc.publisher: MDPI
dc.rights: Attribution 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/
dc.subject: aviation safety
dc.subject: convolutional neural network
dc.subject: deep learning
dc.subject: EEG
dc.subject: electroencephalogram
dc.subject: interpretability/explainability
dc.subject: machine learning
dc.subject: mental states classification
dc.subject: pilot deficiencies
dc.subject: SHapley Additive exPlanations
dc.title: Illuminating the neural landscape of pilot mental states: a convolutional neural network approach with SHapley Additive exPlanations interpretability
dc.type: Article
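
The abstract above describes a convolutional neural network trained on power spectral density features of pilots' EEG, with SHapley Additive exPlanations used to rank the most influential features per mental state. The minimal sketch below illustrates that style of pipeline under assumed tooling (SciPy's Welch PSD, a small Keras CNN, and shap's GradientExplainer) and synthetic data; all shapes, layer choices, and names are illustrative assumptions, not the authors' implementation.

# Minimal sketch: PSD features -> CNN -> SHAP, with synthetic EEG-shaped data.
# Shapes, the CNN architecture, and the explainer choice are illustrative assumptions.
import numpy as np
from scipy.signal import welch
import tensorflow as tf
import shap

N_EPOCHS, N_CHANNELS, N_SAMPLES, FS = 500, 20, 256, 256   # epochs, EEG channels, samples, Hz
N_CLASSES = 4   # channelised attention, diverted attention, startle/surprise, normal state

def psd_features(epochs):
    """Welch PSD per channel: (n_epochs, n_channels, n_samples) -> (n_epochs, n_channels, n_freqs)."""
    _, psd = welch(epochs, fs=FS, nperseg=128, axis=-1)
    return np.log1p(psd)   # log scaling is a common choice, not stated in the abstract

rng = np.random.default_rng(0)
X = psd_features(rng.standard_normal((N_EPOCHS, N_CHANNELS, N_SAMPLES)))[..., np.newaxis]
y = rng.integers(0, N_CLASSES, N_EPOCHS)

# A small CNN classifier over the channel-by-frequency PSD map (placeholder architecture).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=X.shape[1:]),
    tf.keras.layers.Conv2D(16, (3, 3), padding="same", activation="relu"),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=2, batch_size=32, verbose=0)

# SHAP values rank the channel/frequency features that drive each mental-state prediction.
explainer = shap.GradientExplainer(model, X[:50])
sv = explainer.shap_values(X[:10])
sv_class0 = sv[0] if isinstance(sv, list) else sv[..., 0]   # list-per-class or stacked, by shap version
mean_abs = np.abs(sv_class0).mean(axis=0).squeeze()          # (n_channels, n_freqs)
top10 = np.column_stack(np.unravel_index(np.argsort(mean_abs, axis=None)[-10:], mean_abs.shape))
print("Top-10 (channel, frequency-bin) features for class 0:\n", top10)

Reading the SHAP output as (channel, frequency-bin) pairs mirrors the abstract's aim of mapping the top 10 influential features back onto EEG frequency bands for each mental state.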

Files

Original bundle
Name: SHapley_Additive_exPlanations_interpretability-2023.pdf
Size: 2.02 MB
Format: Adobe Portable Document Format

License bundle
Name: license.txt
Size: 1.63 KB
Description: Item-specific license agreed upon to submission