CERES

Browsing by Author "Li, Chen"

Now showing 1 - 7 of 7
  • Open Access
    Quality-of-Trust in 6G: combining emotional and physical trust through explainable AI
    (IEEE, 2023-12-11) Li, Chen; Qi, Weijie; Jin, Bailu; Demestichas, Panagiotis; Tsagkaris, Kostas; Kritikou, Yiouli; Guo, Weisi
    Wireless networks, like many multi-user services, have to balance limited resources in real time. In 6G, increased network automation makes consumer trust crucial. Trust is reflected both in a personal emotional sentiment and in a physical understanding of the transparency of AI decision making. Whilst there have been isolated studies of consumer sentiment towards wireless services, these are not well linked to the underlying decision-making engineering. Likewise, the limited recent research in explainable AI (XAI) has not established a link to consumer perception. Here, we develop a Quality-of-Trust (QoT) KPI that balances personal perception with the quality of decision explanation. That is to say, the QoT varies with both the time-varying sentiment of the consumer and the accuracy of XAI outcomes. We demonstrate this idea with an example in Neural Water-Filling (N-WF) power allocation, where the channel capacity is perceived by artificial consumers that communicate through Large Language Model (LLM) generated text feedback. Natural Language Processing (NLP) analysis of emotional feedback is combined with a physical understanding of N-WF decisions via meta-symbolic XAI. Combined, they form the basis for QoT. Our results show that whilst the XAI interface can explain up to 98.9% of the neural network decisions, a small proportion of explanations can have large errors, causing drops in QoT. These drops have immediate transient effects on physical mistrust, but the emotional perception of consumers is more persistent. As such, QoT tends to combine both instant physical mistrust and long-term emotional trends.
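As background for the N-WF scheme above, classic water-filling power allocation can be sketched in a few lines; the bisection search, gain values and power budget below are illustrative assumptions, not the paper's neural implementation.

```python
import numpy as np

def water_filling(gains, total_power, tol=1e-9):
    """Classic water-filling over parallel channels.

    Power on channel i is p_i = max(0, mu - 1/g_i), with the water
    level mu found by bisection so that sum(p_i) == total_power.
    """
    inv = 1.0 / np.asarray(gains, dtype=float)
    lo, hi = inv.min(), inv.max() + total_power  # mu is bracketed in [lo, hi]
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        if np.maximum(mu - inv, 0.0).sum() > total_power:
            hi = mu  # water level too high: allocation exceeds the budget
        else:
            lo = mu
    return np.maximum(0.5 * (lo + hi) - inv, 0.0)

# Toy example: three channels with decreasing gains and a power budget of 3
gains = np.array([1.0, 0.5, 0.25])
p = water_filling(gains, total_power=3.0)   # ~[2, 1, 0]: weakest channel unused
capacity = np.log2(1.0 + p * gains).sum()   # achieved sum capacity
```

In the paper's setting a neural network approximates this mapping; a closed-form baseline like the above is the kind of trusted reference an XAI layer can explain deviations against.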
  • Open Access
    Scarce data driven deep learning of drones via generalized data distribution space
    (Springer, 2023-04-06) Li, Chen; Sun, Schyler C.; Wei, Zhuangkun; Tsourdos, Antonios; Guo, Weisi
    Increased drone proliferation in civilian and professional settings has created new threat vectors for airports and national infrastructures. The economic damage for a single major airport from drone incursions is estimated to be millions per day. Due to the lack of balanced representation in drone data, training accurate deep learning drone detection algorithms under scarce data is an open challenge. Existing methods largely rely on collecting diverse and comprehensive experimental drone footage data, artificially induced data augmentation, transfer and meta-learning, as well as physics-informed learning. However, these methods cannot guarantee capturing diverse drone designs and fully understanding the deep feature space of drones. Here, we show how understanding the general distribution of the drone data via a generative adversarial network (GAN), and explaining the under-learned data features using topological data analysis (TDA) can allow us to acquire under-represented data to achieve rapid and more accurate learning. We demonstrate our results on a drone image dataset, which contains both real drone images as well as simulated images from computer-aided design. When compared to random, tag-informed and expert-informed data collections (discriminator accuracy of 94.67%, 94.53% and 91.07%, respectively, after 200 epochs), our proposed GAN-TDA-informed data collection method offers a significant 4% improvement (99.42% after 200 epochs). We believe that this approach of exploiting general data distribution knowledge from neural networks can be applied to a wide range of scarce data open challenges.
  • Open Access
    Soft body pose-invariant evasion attacks against deep learning human detection
    (IEEE, 2023-09-01) Li, Chen; Guo, Weisi
    Evasion attacks on deep neural networks (DNNs) use manipulated data to let targets evade detection and/or classification across a wide range of DNNs. Most existing evasion attacks focus on planar images (e.g., photo, satellite imaging) and ignore the distortion of evasion patterns in practical attacks (e.g., object rotation, deformation). Here, we build evasion patterns for soft-body human targets, where patterns are designed to take into account body rotation, fabric stretch, printability, and lighting variations. We show that these are effective and robust across different human poses. This poses a significant threat to the safety of autonomous vehicles, and adversarial training should consider this new area.
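The general mechanics of a gradient-based evasion attack can be illustrated on a toy linear detector; this stand-in (random weights, an FGSM-style step) is a hypothetical sketch and not the paper's pose-invariant soft-body pattern method.

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=10)          # weights of a toy linear "detector": score = w @ x
x = rng.normal(size=10)          # clean input
if w @ x <= 0:                   # flip so the clean input is initially detected
    x = -x

# FGSM-style step: perturb against the sign of the score gradient (which is w)
grad = w
eps = (w @ x) / np.abs(w).sum() + 0.01   # just enough to cross the decision boundary
x_adv = x - eps * np.sign(grad)

detected_before = bool(w @ x > 0)        # True: clean input is detected
detected_after = bool(w @ x_adv > 0)     # False: perturbed input evades detection
```

A physical attack like the one in the paper must additionally keep this evasion property under rotation, stretch and lighting changes, which is what distinguishes it from a planar-image perturbation.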
  • Open Access
    Swarm drones - efficient machine learning and informatics
    (Cranfield University, 2022-12) Li, Chen; Guo, Weisi; Tsourdos, Antonios
    In 2020, worldwide consumer drone unit shipments were 5 million, and are expected to reach 9.6 million by 2030. This creates a global drone market with 26.3 billion USD in revenue. The popularity of drones in civilian and professional environments has changed the way humans live and work. However, drones have also brought new challenges and threats to the environment and society (e.g., property and personal damage caused by inappropriate drone operations or hostile drones), which require more robust regulatory mechanisms and more advanced drone technologies. Human supervision of small, high-mobility drones is inefficient and inaccurate, whereas Artificial Intelligence (AI) methods, especially deep learning (DL), show potential in drone detection, classification and tracking. However, as data-driven models, the performance of DL models is determined by the quality of the data and the model structure. At the same time, the structural complexity of black-box DL models affects their explainability and energy efficiency. These factors affect people's willingness to trust DL models. Therefore, this thesis aims to analyze the impact of drone informatics on DL behaviour, and to achieve more efficient training of highly trustworthy DL models through efficient drone informatics. This requires research on DL explainability, trustworthiness and efficiency, all of which are interrelated and highly dependent on the data. In this thesis, firstly, explainable AI (XAI) and DL trust factors are reviewed, and a theoretical DL trustworthiness metric, Quality of Trust (QoT), together with a lifelong AI trustworthiness supervision protocol, is proposed. Secondly, a novel partially explainable Gaussian-process-based neural network structure is proposed; compared with conventional machine learning methods, it is more transparent without any sacrifice in accuracy. Thirdly, a GAN-TDA method is proposed to analyze the learning efficiency of convolutional layers on drone images and to guide the collection of new data; directed data collection can boost DL model performance more efficiently in time and cost. Fourthly, a transistor operations (TOs) model is proposed to analyze how the DL energy consumption scaling law varies with different model architectures and settings. Finally, a physical visual-stealth drone canopy is designed using the hard-to-learn design features identified by GAN-TDA and painted with adversarial evasion features to escape DL drone detection and classification; the canopy design method is further extended to swarm drone scenarios. This thesis shows that: 1) both model explainability and performance are related to DL trustworthiness, and need to be traded off according to the QoT of different tasks; 2) combining human-understandable, efficient drone informatics with an understanding of DL energy scaling laws can identify high-efficiency datasets and network structures, resulting in efficient DL models with high trustworthiness; and 3) the above knowledge can be used to formulate attacks on drone-related DL models to reduce their trustworthiness.
  • Open Access
    A transistor operations model for deep learning energy consumption scaling law
    (IEEE, 2024-01-01) Li, Chen; Tsourdos, Antonios; Guo, Weisi
    Deep Neural Networks (DNNs) have transformed the automation of a wide range of industries and find increasing ubiquity in society. The high complexity of DNN models and their widespread adoption have led to global energy consumption doubling every 3-4 months. Current energy consumption measures largely monitor system-wide consumption or make linear assumptions about DNN models. The former approach captures other, unrelated energy consumption anomalies, whilst the latter does not accurately reflect nonlinear computations. In this paper, we are the first to develop a bottom-up Transistor Operations (TOs) approach to expose the role of non-linear activation functions and neural network structure. As there will be inevitable energy measurement errors at the core level, we statistically model the energy scaling laws as opposed to absolute consumption values. We offer models for both feedforward DNNs and convolutional neural networks (CNNs) on a variety of data sets and hardware configurations, achieving 93.6% - 99.5% precision. This outperforms existing FLOPs-based methods, and our TOs method can be further extended to other DNN models.
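The bottom-up idea of weighting different operation types differently, rather than counting uniform FLOPs, can be sketched for a feedforward network; the per-operation cost weights below are hypothetical placeholders, not the paper's calibrated transistor-level counts.

```python
def layer_op_counts(n_in, n_out):
    """Operation counts for one dense layer: W @ x plus a nonlinearity."""
    macs = n_in * n_out   # multiply-accumulate operations in the matrix product
    acts = n_out          # one activation evaluation per output neuron
    return macs, acts

# Hypothetical per-operation costs (illustrative only; the paper instead derives
# transistor-operation counts per arithmetic op and per activation type).
COST_MAC, COST_ACT = 1.0, 8.0

def model_cost(layer_sizes):
    """Weighted operation cost of a feedforward net given its layer widths."""
    total = 0.0
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        macs, acts = layer_op_counts(n_in, n_out)
        total += COST_MAC * macs + COST_ACT * acts
    return total

cost = model_cost([4, 3, 2])  # layer 1: 12 + 3*8 = 36; layer 2: 6 + 2*8 = 22
```

Because MAC counts scale with the product of layer widths while activation counts scale only with the output width, the two terms grow at different rates, which is one reason a single FLOPs number cannot capture the nonlinear contribution.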
  • Open Access
    Trustworthy deep learning in 6G-enabled mass autonomy: from concept to quality-of-trust key performance indicators
    (IEEE, 2020-09-30) Li, Chen; Guo, Weisi; Sun, Schyler Chengyao; Al-Rubaye, Saba; Tsourdos, Antonios
    Mass autonomy promises to revolutionize a wide range of engineering, service, and mobility industries. Coordinating complex communication among hyperdense autonomous agents requires new artificial intelligence (AI)-enabled orchestration of wireless communication services beyond 5G and 6G mobile networks. In particular, safety and mission-critical tasks will legally require both transparent AI decision processes and quantifiable quality-of-trust (QoT) metrics for a range of human end users (consumer, engineer, and legal). We outline the concept of trustworthy autonomy for 6G, including essential elements such as how explainable AI (XAI) can generate the qualitative and quantitative modalities of trust. We also provide XAI test protocols for integration with radio resource management and associated key performance indicators (KPIs) for trust. The research directions proposed will enable researchers to start testing existing AI optimization algorithms and develop new ones with the view that trust and transparency should be built in from the design through the testing phase.
  • Open Access
    Uncertainty propagation in neural network enabled multi-channel optimisation
    (IEEE, 2020-06-30) Li, Chen; Sun, Schyler C.; Al-Rubaye, Saba; Tsourdos, Antonios; Guo, Weisi
    Multi-channel optimisation relies on accurate channel state information (CSI) estimation. Error distributions in CSI can propagate through optimisation algorithms to cause undesirable uncertainty in the solution space. The transformation of uncertainty distributions differs between classic heuristic and Neural Network (NN) algorithms. Here, we investigate how an additive Gaussian error in CSI transforms into different power allocation distributions in a multi-channel system. We offer theoretical insight into the uncertainty propagation of classic Water-Filling (WF) power allocation in comparison to diverse NN algorithms. We use the Kullback-Leibler divergence to quantify uncertainty deviation from the trusted WF algorithm and offer insight into the role of NN structure and activation functions in the uncertainty divergence, where we found that the choice of activation function is more important than the size of the neural network.
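The KL-divergence comparison described above can be sketched as a Monte-Carlo experiment; the proportional toy allocator, noise levels and binning below are illustrative assumptions standing in for the paper's WF and NN mappings.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """Discrete KL(p || q) between two nonnegative histograms on shared bins."""
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def allocate(gains, total_power=1.0):
    # Toy allocator: power proportional to gain (stand-in for WF/NN mappings)
    g = np.maximum(gains, 1e-9)
    return total_power * g / g.sum()

rng = np.random.default_rng(0)
true_gains = np.array([1.0, 0.6, 0.3])
bins = np.linspace(0.3, 0.8, 21)

def power_hist(sigma, n=5000):
    """Histogram of channel-1 power under additive Gaussian CSI error."""
    powers = [allocate(true_gains + rng.normal(0.0, sigma, 3))[0]
              for _ in range(n)]
    return np.histogram(powers, bins=bins)[0].astype(float)

# A larger CSI error reshapes the induced allocation distribution, so the
# divergence from the low-noise reference is strictly positive.
d = kl_divergence(power_hist(0.02), power_hist(0.10))
```

The same comparison with an NN allocator in place of `allocate` is, in spirit, how divergence from the trusted WF baseline can be quantified.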
