Explainable artificial intelligence for 6G: improving trust between human and machine
Abstract
As 5G mobile networks bring about global societal benefits, the design phase for 6G has started. Evolved 5G and 6G will need sophisticated AI to automate information delivery simultaneously for mass autonomy, human-machine interfacing, and targeted healthcare. Trust will become increasingly critical for 6G as it manages a wide range of mission-critical services. As we migrate from traditional mathematical model-dependent optimization to data-dependent deep learning, the insight and trust we have in our optimization modules decrease. This loss of model explainability leaves us vulnerable to malicious data, poor neural network design, and the loss of trust from stakeholders and the general public, all with a range of legal implications. In this review, we outline the core methods of explainable artificial intelligence (XAI) in a wireless network setting, including public and legal motivations, definitions of explainability, performance vs. explainability trade-offs, and XAI algorithms. Our review is grounded in case studies for both wireless PHY- and MAC-layer optimization and provides the community with an important research area to embark upon.
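As a minimal sketch of the kind of post-hoc XAI attribution the review surveys, the example below computes gradient saliency for a toy neural classifier. The model, the feature dimensions, and the task (classifying a modulation scheme from channel features) are illustrative assumptions, not the paper's actual case studies.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a learned PHY-layer module: a small
# classifier mapping 16 channel features to 4 modulation classes.
torch.manual_seed(0)
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))

# One input sample; requires_grad lets us attribute the output to it.
x = torch.randn(1, 16, requires_grad=True)

logits = model(x)
pred = logits.argmax(dim=1).item()

# Gradient saliency: magnitude of d(class score)/d(input) per feature,
# a basic post-hoc attribution for the predicted class.
logits[0, pred].backward()
saliency = x.grad.abs().squeeze()
print("predicted class:", pred)
print("feature saliency:", saliency)
```

High-saliency features are those to which the predicted class score is most sensitive, giving a first, inspectable signal of what the otherwise opaque optimizer relies on.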