Green deep reinforcement learning for radio resource management: architecture, algorithm compression, and challenges
Abstract
Artificial intelligence (AI) heralds a step-change in wireless networks but may also cause irreversible environmental damage due to its high energy consumption. Here, we address this challenge in the context of 5G and beyond, where there is a complexity explosion in radio resource management (RRM). For high-dimensional RRM problems in a dynamic environment, deep reinforcement learning (DRL) provides a powerful tool for scalable optimization, but it consumes a large amount of energy over time and risks compromising progress made in green radio research. This article reviews and analyzes how to achieve green DRL for RRM via both architecture and algorithm innovations. Architecturally, a cloud-based training and distributed decision-making DRL scheme is proposed, where RRM entities can make lightweight, deep, local decisions while being assisted by on-cloud training and updating. At the algorithm level, compression approaches are introduced for both deep neural networks (DNNs) and the underlying Markov decision processes (MDPs), enabling accurate low-dimensional representations of challenging RRM problems. To scale learning across geographic areas, a spatial transfer learning scheme is proposed to further promote the learning efficiency of distributed DRL entities by exploiting the traffic demand correlations. Together, our proposed architecture and algorithms provide a vision for green and on-demand DRL capability.
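To make the DNN compression idea concrete, the following is a minimal sketch of unstructured magnitude pruning, one common way to obtain a low-dimensional representation of a policy network so that a distributed RRM entity can run a lightweight local copy. The function name, layer shape, and sparsity target are illustrative assumptions, not details from the article.

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    # Zero out the smallest-magnitude fraction of weights (unstructured
    # pruning); the cloud would retrain/fine-tune before redeploying the
    # compressed network to local decision-making entities.
    flat = np.abs(weights).ravel()
    k = int(len(flat) * sparsity)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    mask = np.abs(weights) > threshold
    return weights * mask

# Illustrative example: prune 80% of a randomly initialized layer.
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64))
W_pruned = magnitude_prune(W, sparsity=0.8)
print(f"fraction zeroed: {np.mean(W_pruned == 0):.2f}")
```

In practice, pruning would be combined with the article's cloud-based training loop: the full-precision network is trained and compressed on the cloud, and only the sparse model is pushed to edge RRM entities for inference.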