Virtual electroencephalogram acquisition: a review on electroencephalogram generative methods
Abstract
Driven by the remarkable capabilities of machine learning, brain–computer interfaces (BCIs) are finding an ever-expanding range of applications across diverse fields. Electroencephalogram (EEG) signals have become the most widely used signals in BCIs, owing to their non-invasive nature, portability, cost-effectiveness, and high temporal resolution. Despite this progress, the scarcity of EEG data has emerged as the main bottleneck, limiting the generalization of decoding algorithms. Inspired by the success of generative models in computer vision and natural language processing, the generation of synthetic EEG data from limited recorded samples has recently attracted growing attention. This paper presents a comprehensive review of the techniques and methodologies underpinning the principal generative models for EEG, namely the variational autoencoder (VAE), the generative adversarial network (GAN), and the diffusion model, with special emphasis on their practical utility for EEG data augmentation. The structural designs and performance metrics of the different generative approaches across various application domains are dissected and discussed. A comparative analysis of the strengths and weaknesses of each existing model is carried out, and prospective avenues for future enhancement and refinement are put forward.