Publication Type : Journal Article
Publisher : Springer Science and Business Media LLC
Source : Artificial Intelligence Review
Url : https://doi.org/10.1007/s10462-025-11126-9
Campus : Chennai
School : School of Computing
Department : Computer Science and Engineering
Year : 2025
Abstract :
Emotion recognition from electroencephalography (EEG) signals is crucial for human–computer interaction yet poses significant challenges. While various techniques exist for detecting emotions from EEG signals alone, contemporary studies have explored combining EEG with other modalities. The field is still evolving rapidly, however, and new advances appear constantly, so a comprehensive review that distills these factors into a single manuscript is essential for staying current with the latest findings. This review offers an overview of multimodal learning in EEG-based emotion recognition and discusses the literature in this domain from 2017 to 2024. Three primary challenges are addressed: the fusion algorithm, the representation of different modalities, and the classification scheme; each is examined through empirical studies with a detailed analysis of its effectiveness. Fusion approaches are compared and evaluated across conventional and deep-learning fusion methods. The results show that poor performance is attributable to a lack of rigor and to inadequate methods for identifying correlated patterns across modalities when creating a unified representation, indicating a need for more thorough analysis and integration of data in future studies. When more than two modalities are involved, it becomes increasingly important to consider different aspects of the classification scheme, such as the number of features and the choice of model; designing a classification scheme without accounting for the number of parameters and emotional categories may compromise classification accuracy. To aid readers in understanding the findings, the results of different classification schemes and their corresponding accuracies are summarized.
The tables in this review present the fusion algorithms researchers employ and evaluate the effectiveness of the selected modalities, providing valuable insights for decision-making. Key contributions include a systematic survey of EEG features, an exploration of EEG integration with behavioral modalities, an investigation of fusion methods, and an overview of key challenges and future research directions for implementing multimodal emotion recognition systems. © The Author(s) 2025.
Cite this Research Publication : Rajasekhar Pillalamarri, Udhayakumar Shanmugam, A review on EEG-based multimodal learning for emotion recognition, Artificial Intelligence Review, Springer Science and Business Media LLC, 2025, https://doi.org/10.1007/s10462-025-11126-9