Browsing by Author "Li, Chen"
Now showing 1 - 6 of 6
Item Open Access
Quality-of-Trust in 6G: combining emotional and physical trust through explainable AI (IEEE, 2023-12-11) Li, Chen; Qi, Weijie; Jin, Bailu; Demestichas, Panagiotis; Tsagkaris, Kostas; Kritikou, Yiouli

Wireless networks, like many multi-user services, have to balance limited resources in real time. In 6G, increased network automation makes consumer trust crucial. Trust is reflected both in personal emotional sentiment and in a physical understanding of the transparency of AI decision making. Whilst there have been isolated studies of consumer sentiment towards wireless services, these are not well linked to the underlying decision-making engineering. Likewise, the limited recent research in explainable AI (XAI) has not established a link to consumer perception. Here, we develop a Quality-of-Trust (QoT) KPI that balances personal perception with the quality of decision explanation. That is to say, the QoT varies with both the time-varying sentiment of the consumer and the accuracy of XAI outcomes. We demonstrate this idea with an example in Neural Water-Filling (N-WF) power allocation, where the channel capacity is perceived by artificial consumers that communicate through Large Language Model (LLM) generated text feedback. Natural Language Processing (NLP) analysis of emotional feedback is combined with a physical understanding of N-WF decisions via meta-symbolic XAI. Combined, they form the basis for QoT. Our results show that whilst the XAI interface can explain up to 98.9% of the neural network decisions, a small proportion of explanations can have large errors, causing drops in QoT. These drops have immediate transient effects on physical mistrust, but the emotional perception of consumers is more persistent.
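The combination of a fast-moving physical-trust signal (XAI explanation accuracy) with a slow-moving emotional signal (NLP sentiment) could be sketched roughly as follows. This is a hypothetical illustration only: the function name `qot_series`, the exponential-smoothing constant `alpha`, and the weighting `w_phys` are illustrative assumptions, not the paper's actual QoT formulation.

```python
# Hypothetical sketch: a QoT-style KPI where a single bad explanation causes
# a transient dip via the instant physical signal, while the emotional
# component (a smoothed sentiment trend) changes only gradually.

def qot_series(xai_accuracy, sentiment, alpha=0.2, w_phys=0.5):
    """Combine per-step XAI accuracy (0..1) and consumer sentiment (0..1).

    Emotional trust is an exponential moving average of sentiment, so it is
    persistent; physical trust tracks the current explanation accuracy.
    """
    qot = []
    emotional = sentiment[0]
    for acc, s in zip(xai_accuracy, sentiment):
        emotional = (1 - alpha) * emotional + alpha * s  # long-term trend
        physical = acc                                   # instant signal
        qot.append(w_phys * physical + (1 - w_phys) * emotional)
    return qot
```

With this toy weighting, an isolated explanation error drops the combined score sharply for one step, after which it largely recovers, mirroring the transient-versus-persistent behaviour described in the abstract.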
As such, QoT tends to combine both instant physical mistrust and long-term emotional trends.

Item Open Access
Scarce data driven deep learning of drones via generalized data distribution space (Springer, 2023-04-06) Li, Chen; Sun, Schyler C.; Wei, Zhuangkun; Tsourdos, Antonios; Guo, Weisi

Increased drone proliferation in civilian and professional settings has created new threat vectors for airports and national infrastructure. The economic damage to a single major airport from drone incursions is estimated at millions per day. Owing to the lack of balanced representation in drone data, training accurate deep learning drone detection algorithms under scarce data is an open challenge. Existing methods largely rely on collecting diverse and comprehensive experimental drone footage, artificially induced data augmentation, transfer and meta-learning, and physics-informed learning. However, these methods cannot guarantee capturing diverse drone designs or fully understanding the deep feature space of drones. Here, we show how understanding the general distribution of the drone data via a generative adversarial network (GAN), and explaining the under-learned data features using topological data analysis (TDA), can allow us to acquire under-represented data and achieve rapid and more accurate learning. We demonstrate our results on a drone image dataset, which contains both real drone images and simulated images from computer-aided design. When compared to random, tag-informed and expert-informed data collection (discriminator accuracy of 94.67%, 94.53% and 91.07%, respectively, after 200 epochs), our proposed GAN-TDA-informed data collection method offers a significant 4% improvement (99.42% after 200 epochs).
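The core data-selection idea, ranking candidate images by how poorly a trained GAN discriminator handles them so that under-represented regions of the data distribution are collected first, might look like the following sketch. The scoring rule (distance of the discriminator score from 0.5) and the function name are assumptions for illustration; the paper additionally uses TDA to explain which features are under-learned, which is not shown here.

```python
# Illustrative sketch: rank candidate samples by discriminator uncertainty
# and return the k most ambiguous ones as collection targets.
import numpy as np

def select_underrepresented(scores, k):
    """Return indices of the k candidates whose discriminator score is
    closest to 0.5 (the discriminator is least certain they are real),
    treating those regions of the data distribution as under-represented."""
    scores = np.asarray(scores, dtype=float)
    uncertainty = np.abs(scores - 0.5)
    return np.argsort(uncertainty)[:k]
```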
We believe that this approach of exploiting general data distribution knowledge from neural networks can be applied to a wide range of scarce-data open challenges.

Item Open Access
Soft body pose-invariant evasion attacks against deep learning human detection (IEEE, 2023-09-01) Li, Chen; Guo, Weisi

Evasion attacks on deep neural networks (DNNs) use manipulated data to let targets evade detection and/or classification across a wide range of DNNs. Most existing evasion attacks focus on planar images (e.g., photographs, satellite imaging) and ignore the distortion of the evasion pattern in practical attacks (e.g., object rotation, deformation). Here, we build evasion patterns for soft-body human stakeholders, where the patterns are designed to take into account body rotation, fabric stretch, printability, and lighting variations. We show that these are effective and robust across different human poses. This poses a significant threat to the safety of autonomous vehicles, and adversarial training should take this new area into account.

Item Open Access
A transistor operations model for deep learning energy consumption scaling law (IEEE, 2022-12-14) Li, Chen; Tsourdos, Antonios; Guo, Weisi

Deep Neural Networks (DNNs) have transformed the automation of a wide range of industries and find increasing ubiquity in society. The high complexity of DNN models and their widespread adoption have led to global energy consumption doubling every 3-4 months. Current energy consumption measures largely monitor system-wide consumption or make linear assumptions about DNN models. The former approach captures other, unrelated energy consumption anomalies, whilst the latter does not accurately reflect nonlinear computations. In this paper, we are the first to develop a bottom-up Transistor Operations (TOs) approach to expose the role of non-linear activation functions and neural network structure.
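A bottom-up count in the spirit of the Transistor Operations idea, where, unlike a plain FLOPs count, each nonlinear activation is assigned its own (higher) cost, could be sketched as below. The per-activation cost table is an illustrative assumption, not the paper's calibrated transistor-level values.

```python
# Minimal sketch: operation count for a dense feedforward stack where the
# activation function contributes a cost of its own, so two networks with
# identical FLOPs but different activations get different estimates.

ACTIVATION_COST = {"relu": 1, "sigmoid": 8, "tanh": 10}  # hypothetical units

def layer_ops(n_in, n_out, activation="relu"):
    """Estimate operations for one dense layer: multiply-accumulates for
    the weights plus the bias adds, plus a nonlinear cost per output."""
    macs = n_in * n_out + n_out          # weights + bias
    nonlinear = n_out * ACTIVATION_COST[activation]
    return macs + nonlinear

def network_ops(layer_sizes, activation="relu"):
    """Sum the estimate over a feedforward stack, e.g. [784, 128, 10]."""
    return sum(layer_ops(a, b, activation)
               for a, b in zip(layer_sizes, layer_sizes[1:]))
```

The point of the sketch is only that the estimate is nonlinear in the activation choice; the paper then fits statistical scaling laws on top of such counts rather than using them as absolute energy values.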
As there will be inevitable energy measurement errors at the core level, we statistically model the energy scaling laws rather than absolute consumption values. We offer models for both feedforward DNNs and convolutional neural networks (CNNs) on a variety of data sets and hardware configurations, achieving 93.6%-99.5% precision. This outperforms existing FLOPs-based methods, and our TOs method can be further extended to other DNN models.

Item Open Access
Trustworthy deep learning in 6G-enabled mass autonomy: from concept to quality-of-trust key performance indicators (IEEE, 2020-09-30) Li, Chen; Guo, Weisi; Sun, Schyler Chengyao; Al-Rubaye, Saba; Tsourdos, Antonios

Mass autonomy promises to revolutionize a wide range of engineering, service, and mobility industries. Coordinating complex communication among hyperdense autonomous agents requires new artificial intelligence (AI)-enabled orchestration of wireless communication services in beyond-5G and 6G mobile networks. In particular, safety- and mission-critical tasks will legally require both transparent AI decision processes and quantifiable quality-of-trust (QoT) metrics for a range of human end users (consumer, engineer, and legal). We outline the concept of trustworthy autonomy for 6G, including essential elements such as how explainable AI (XAI) can generate the qualitative and quantitative modalities of trust. We also provide XAI test protocols for integration with radio resource management, and associated key performance indicators (KPIs) for trust.
The research directions proposed will enable researchers to start testing existing AI optimization algorithms and developing new ones, with the view that trust and transparency should be built in from the design phase through the testing phase.

Item Open Access
Uncertainty propagation in neural network enabled multi-channel optimisation (IEEE, 2020-06-30) Li, Chen; Sun, Schyler C.; Al-Rubaye, Saba; Tsourdos, Antonios; Guo, Weisi

Multi-channel optimisation relies on accurate channel state information (CSI) estimation. Error distributions in CSI can propagate through optimisation algorithms and cause undesirable uncertainty in the solution space. The transformation of uncertainty distributions differs between classic heuristic and Neural Network (NN) algorithms. Here, we investigate how CSI uncertainty transforms from an additive Gaussian error in CSI into different power allocation distributions in a multi-channel system. We offer theoretical insight into the uncertainty propagation of classic Water-Filling (WF) power allocation in comparison to diverse NN algorithms. We use the Kullback-Leibler divergence to quantify uncertainty deviation from the trusted WF algorithm, and offer insight into the role of NN structure and activation functions in the uncertainty divergence. We found that the choice of activation function is more important than the size of the neural network.
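The trusted baseline referred to above, classic water-filling power allocation, together with a discrete KL divergence for comparing allocations, can be sketched as follows. The bisection implementation, its tolerance, and the example noise/power values are illustrative assumptions; the paper's NN allocators and uncertainty analysis are not reproduced here.

```python
# Minimal sketch: water-filling finds a water level mu such that
# sum(max(mu - noise_i, 0)) equals the power budget; the per-channel
# allocation is the water above each channel's noise floor.
import numpy as np

def water_fill(noise, total_power, tol=1e-9):
    """Bisection for the water level; returns per-channel power."""
    noise = np.asarray(noise, dtype=float)
    lo, hi = noise.min(), noise.max() + total_power
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        if np.maximum(mu - noise, 0.0).sum() > total_power:
            hi = mu  # water level too high, allocated too much power
        else:
            lo = mu
    return np.maximum(0.5 * (lo + hi) - noise, 0.0)

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two allocations normalised to distributions,
    e.g. a trusted WF allocation versus a neural network's output."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))
```

For example, with noise floors [1.0, 2.0] and a budget of 3.0, the water level settles at 3.0 and the allocation is [2.0, 1.0]; the KL divergence of an allocation against itself is zero, and it grows as an NN allocation drifts from the WF baseline.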