CERES

Browsing by Author "Jin, Bailu"

Now showing 1 - 5 of 5
  • Explainable reinforcement and causal learning for improving trust to 6G stakeholders (Open Access)
    (IEEE, 2025-06-01) Arana-Catania, Miguel; Sonee, Amir; Khan, Abdul-Manan; Fatehi, Kavan; Tang, Yun; Jin, Bailu; Soligo, Anna; Boyle, David; Calinescu, Radu; Yadav, Poonam; Ahmadi, Hamed; Tsourdos, Antonios; Guo, Weisi; Russo, Alessandra
    Future telecommunications will increasingly integrate AI capabilities into network infrastructures to deliver seamless and harmonized services closer to end-users. However, this progress also raises significant trust and safety concerns. The machine learning systems orchestrating these advanced services will widely rely on deep reinforcement learning (DRL) to process multi-modal requirements datasets and make semantically modulated decisions, introducing three major challenges. (1) Most explainable AI research is stakeholder-agnostic while, in reality, explanations must cater for diverse telecommunications stakeholders, including network service providers, legal authorities, and end users, each with unique goals and operational practices. (2) DRL lacks prior models or established frameworks to guide the creation of meaningful long-term explanations of the agent's behaviour in a goal-oriented RL task; we introduce state-of-the-art approaches such as reward machines and sub-goal automata, which can be universally represented and easily manipulated by logic programs and verifiably learned by inductive logic programming of answer set programs. (3) Most explainability approaches focus on correlation rather than causation, and we emphasise that understanding causal learning can further enhance 6G network optimisation. Together, in our judgement, these form crucial enabling technologies for trustworthy services in 6G. This review offers a timely resource for academic researchers and industry practitioners by highlighting the methodological advancements needed for explainable DRL (X-DRL) in 6G. It identifies key stakeholder groups, maps their needs to X-DRL solutions, and presents case studies showcasing practical applications. By identifying and analysing these challenges in the context of 6G case studies, this work aims to inform future research, transform industry practices, and highlight unresolved gaps in this rapidly evolving field. (A minimal reward-machine sketch appears after this list.)
  • Federated learning of wireless network experience anomalies using consumer sentiment (Open Access)
    (IEEE, 2023-03-23) Guo, Weisi; Jin, Bailu; Sun, Schyler C.; Wu, Yue; Qi, Weijie; Zhang, Jie
    In wireless networks, consumer experience is important both for short-term monitoring of the Quality of Experience (QoE) and for long-term customer retention. Current 4G and 5G networks are not equipped to measure QoE in an automated way, and experience is still reported through traditional customer care and drive-testing. In recent years, large-scale social media analytics has enabled researchers to gather statistically significant data on consumer experience and correlate it with major events such as social celebrations or significant network outages. However, the translational pathway from language to topic-specific emotions (e.g., sentiment) to detecting anomalies in QoE is challenging. This challenge lies in two issues: (1) the social experience data remains sparsely distributed across space, and (2) anomalies in experience jump across sub-topic spaces (e.g., from data rate to signal strength). Here, we address these two challenges by examining the spectral space of experience across topics using federated learning (FL) to identify anomalies. This can alert telecom operators to potential network demand or supply issues in real time using relatively sparse and distributed data. We use real social media data curated for our telecommunication projects across London and the United Kingdom to demonstrate our results. FL achieved 74-92% QoE anomaly detection accuracy, with the benefit of a 30-45% reduction in data transfer and better privacy preservation than transferring raw data. (An illustrative federated-averaging sketch appears after this list.)
  • How to find opinion leader on the online social network? (Open Access)
    (Springer, 2025-05-01) Jin, Bailu; Zou, Mengbang; Wei, Zhuangkun; Guo, Weisi
    Online social networks (OSNs) provide a platform for individuals to share information, exchange ideas, and build social connections beyond in-person interactions. For a specific topic or community, opinion leaders are individuals who have a significant influence on others’ opinions. Detecting opinion leaders and modeling influence dynamics is crucial as they play a vital role in shaping public opinion and driving conversations. Existing research has extensively explored various graph-based and psychology-based methods for detecting opinion leaders, but there is a lack of cross-disciplinary consensus on definitions and methods. For example, node centrality in graph theory does not necessarily align with the opinion leader concepts of social psychology. This review paper addresses this multi-disciplinary research area by introducing and connecting the diverse methodologies for identifying influential nodes. The key novelty is to review connections and cross-compare different multi-disciplinary approaches that have origins in social theory, graph theory, compressed sensing theory, and control theory. Our first contribution is a cross-disciplinary discussion of how these approaches tell different tales of networked influence. Our second contribution is a trans-disciplinary research method that embeds socio-physical influence models into graph signal analysis. We showcase inter- and trans-disciplinary methods through a Twitter case study to compare their performance and elucidate the research progression in relation to psychological theory. We hope the comparative analysis can inspire further research in this cross-disciplinary area. (An illustrative centrality comparison appears after this list.)
  • Quality-of-Trust in 6G: combining emotional and physical trust through explainable AI (Open Access)
    (IEEE, 2023-12-11) Li, Chen; Qi, Weijie; Jin, Bailu; Demestichas, Panagiotis; Tsagkaris, Kostas; Kritikou, Yiouli; Guo, Weisi
    Wireless networks, like many multi-user services, have to balance limited resources in real time. In 6G, increased network automation makes consumer trust crucial. Trust is reflected both in personal emotional sentiment and in a physical understanding of the transparency of AI decision making. Whilst there have been isolated studies of consumer sentiment towards wireless services, these are not well linked to the underlying decision-making engineering. Likewise, the limited recent research in explainable AI (XAI) has not established a link to consumer perception. Here, we develop a Quality-of-Trust (QoT) KPI that balances personal perception with the quality of decision explanation. That is to say, QoT varies with both the time-varying sentiment of the consumer and the accuracy of XAI outcomes. We demonstrate this idea with an example in Neural Water-Filling (N-WF) power allocation, where the channel capacity is perceived by artificial consumers that communicate through Large Language Model (LLM) generated text feedback. Natural Language Processing (NLP) analysis of emotional feedback is combined with a physical understanding of N-WF decisions via meta-symbolic XAI. Combined, they form the basis for QoT. Our results show that whilst the XAI interface can explain up to 98.9% of the neural network decisions, a small proportion of explanations can have large errors, causing drops in QoT. These drops have immediate transient effects on physical mistrust, but the emotional perception of consumers is more persistent. As such, QoT tends to combine both instant physical mistrust and long-term emotional trends. (An illustrative classical water-filling sketch appears after this list.)
  • Revealing the excitation causality between climate and political violence via a neural forward-intensity Poisson process (Open Access)
    (arXiv, 2022-07-29) Sun, Schyler C.; Jin, Bailu; Wei, Zhuangkun; Guo, Weisi
    The causal relationship between climate and political violence is fraught with complex mechanisms. Current quantitative causal models rely on one or more assumptions: (1) the climate drivers persistently generate conflict, (2) the causal mechanisms have a linear relationship with the conflict generation parameter, and/or (3) there is sufficient data to inform the prior distribution. Yet we know that conflict drivers often excite a social transformation process which leads to violence (e.g., drought forces agricultural producers to join urban militia), while further climate effects do not necessarily contribute to further violence. Therefore, not only is this bifurcation relationship highly non-linear, but there is also often a lack of data to support prior assumptions for high-resolution modeling. Here, we aim to overcome the aforementioned causal modeling challenges by proposing a neural forward-intensity Poisson process (NFIPP) model. The NFIPP is designed to capture the potentially non-linear causal mechanism in climate-induced political violence, whilst being robust to sparse and timing-uncertain data. Our results span the last 20 years and reveal an excitation-based causal link between extreme climate events and political violence across diverse countries. Our climate-induced conflict model results are cross-validated against qualitative climate vulnerability indices. Furthermore, we label historical events that either improve or reduce our predictability gain, demonstrating the importance of domain expertise in informing interpretation. (An illustrative excitation-intensity sketch appears after this list.)
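
To make the reward-machine idea mentioned in the 6G explainable-DRL review above more concrete, the following Python sketch shows a minimal reward machine for a hypothetical two-subgoal task. The states, events, and rewards are illustrative assumptions and are not taken from the paper.

# Minimal reward-machine sketch: a finite-state machine that maps
# high-level events to state transitions and rewards.
# The task, states, and events below are hypothetical.

# Transition table: (rm_state, event) -> (next_rm_state, reward)
TRANSITIONS = {
    ("u0", "reached_subgoal_a"): ("u1", 0.0),
    ("u1", "reached_subgoal_b"): ("u_accept", 1.0),
}

def rm_step(state, event):
    """Advance the reward machine on one observed high-level event."""
    return TRANSITIONS.get((state, event), (state, 0.0))

state, total_reward = "u0", 0.0
for event in ["noise", "reached_subgoal_a", "noise", "reached_subgoal_b"]:
    state, reward = rm_step(state, event)
    total_reward += reward
print(state, total_reward)  # expected: u_accept 1.0

In the review's framing, such automata can be encoded as logic programs and learned by inductive logic programming of answer set programs; that learning step is not reproduced here.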
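The federated-learning abstract above describes aggregating distributed consumer-experience models without sharing raw data. Below is a minimal, hypothetical federated-averaging (FedAvg) sketch with synthetic data and a simple logistic-regression anomaly scorer; the model form, client data, and hyperparameters are assumptions, not the paper's actual pipeline.

import numpy as np

def local_update(weights, features, labels, lr=0.1, epochs=5):
    """One client's local logistic-regression update on its own data."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-(features @ w)))
        grad = features.T @ (preds - labels) / len(labels)
        w -= lr * grad
    return w

def fed_avg(client_weights, client_sizes):
    """Server aggregates client models weighted by local sample counts."""
    return np.average(np.stack(client_weights), axis=0, weights=np.asarray(client_sizes, float))

rng = np.random.default_rng(0)
global_w = np.zeros(4)
# Three hypothetical clients, each holding its own (features, labels) data.
clients = [(rng.normal(size=(50, 4)), rng.integers(0, 2, 50)) for _ in range(3)]

for _ in range(10):  # communication rounds: only model weights are exchanged
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = fed_avg(updates, [len(y) for _, y in clients])
print(global_w)

The design point the sketch illustrates is that only model parameters travel to the server, which is why the abstract reports reduced data transfer and better privacy than transferring raw data.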
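To illustrate the point in the opinion-leader review above that different graph-theoretic notions of centrality can nominate different "leaders", here is a small Python sketch using networkx on a standard toy network; it is an illustration only, not the paper's Twitter case study.

import networkx as nx

# Compare three common centrality measures on a toy social network.
G = nx.karate_club_graph()

scores = {
    "degree": nx.degree_centrality(G),
    "betweenness": nx.betweenness_centrality(G),
    "pagerank": nx.pagerank(G),
}

def top3(score_dict):
    """Return the three highest-scoring nodes."""
    return sorted(score_dict, key=score_dict.get, reverse=True)[:3]

# The rankings need not agree, which is one reason graph centrality does not
# map one-to-one onto the social-psychology notion of an opinion leader.
for name, score_dict in scores.items():
    print(f"{name:12s} top nodes: {top3(score_dict)}")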
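The Quality-of-Trust abstract above builds on water-filling power allocation. The sketch below implements only the classical water-filling step via bisection on the water level, with illustrative channel values; the paper's Neural Water-Filling, LLM-generated feedback, and XAI components are not reproduced.

import numpy as np

def water_filling(noise_over_gain, total_power, tol=1e-9):
    """Classical water-filling: p_i = max(mu - n_i, 0) with sum(p_i) = total_power."""
    n = np.asarray(noise_over_gain, dtype=float)
    lo, hi = n.min(), n.max() + total_power
    while hi - lo > tol:          # bisection on the water level mu
        mu = 0.5 * (lo + hi)
        if np.maximum(mu - n, 0.0).sum() > total_power:
            hi = mu               # water level too high, lower it
        else:
            lo = mu
    return np.maximum(0.5 * (lo + hi) - n, 0.0)

# Hypothetical per-channel noise-to-gain ratios and a total power budget.
allocation = water_filling([0.1, 0.5, 1.0, 2.0], total_power=3.0)
print(allocation, allocation.sum())  # allocations sum to ~3.0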
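Finally, to illustrate the forward-intensity idea in the climate-and-conflict abstract above, here is a toy sketch of an excitation-style inhomogeneous Poisson intensity and its log-likelihood; the exponential kernel, parameters, and event times are illustrative assumptions, not the learned neural intensity (NFIPP) from the paper.

import numpy as np

def intensity(t, climate_event_times, base=0.2, jump=0.8, decay=0.5):
    """lambda(t) = base + jump * sum over past climate events of exp(-decay * (t - t_e))."""
    t_e = np.asarray(climate_event_times, dtype=float)
    past = t_e[t_e <= t]
    return base + jump * np.exp(-decay * (t - past)).sum()

def log_likelihood(conflict_times, climate_event_times, horizon):
    """Inhomogeneous-Poisson log-likelihood of observed conflict times."""
    point_terms = sum(np.log(intensity(t, climate_event_times)) for t in conflict_times)
    grid = np.linspace(0.0, horizon, 2001)
    compensator = np.trapz([intensity(t, climate_event_times) for t in grid], grid)
    return point_terms - compensator

climate_events = [1.0, 4.0, 7.5]        # hypothetical extreme-climate event times
conflict_events = [1.4, 2.0, 4.3, 8.0]  # hypothetical conflict event times
print(log_likelihood(conflict_events, climate_events, horizon=10.0))

Each past climate event temporarily raises the conflict intensity and then decays, which is the excitation-based (rather than persistent or linear) causal link the abstract describes.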
