UTAR Institutional Repository

Deep learning-based classification of multichannel bio-signals for emotion recognition

Ng, Wei Hong (2025) Deep learning-based classification of multichannel bio-signals for emotion recognition. Final Year Project, UTAR.

Full text not available from this repository.

Abstract

Emotion recognition is a critical component in advancing applications such as human-computer interaction and mental health diagnostics. While traditional methods often rely on external cues, physiological bio-signals offer a more objective measure of an individual's internal emotional state. This project presents the design, implementation, and comprehensive evaluation of a deep learning-based framework for multimodal emotion recognition, leveraging electroencephalography (EEG), galvanic skin response (GSR), electromyography (EMG), and speech audio. The research used the DEAP and RAVDESS datasets to conduct a comparative analysis of different modeling approaches. Hybrid deep learning architectures, including Convolutional Neural Networks combined with Long Short-Term Memory (CNN+LSTM) and Self-Attention mechanisms, were implemented to capture spatio-temporal patterns from EEG. These were systematically compared against a benchmark model using traditional, handcrafted features (EEG band power, GSR/EMG statistics). To integrate information from disparate sources, both early fusion (for homogeneous physiological signals) and a novel late fusion prototype (for heterogeneous, cross-dataset signals) were developed and evaluated. The experiments yielded several key findings. In rigorous cross-subject validation, the traditional feature-based benchmark model generalized better than the end-to-end deep learning models, which struggled with overfitting. Concurrently, a standalone CNN model proved highly effective for classifying arousal from speech. The final late fusion prototype successfully integrated the independently trained physiological and audio "expert" models, arbitrating conflicting evidence and demonstrating a viable strategy for building robust, cross-dataset multimodal systems. This project contributes a detailed analysis of the challenges of subject-independent classification and delivers a functional proof-of-concept for heterogeneous multimodal fusion.
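
As a rough illustration of the hybrid architecture the abstract describes, the sketch below wires a 1-D CNN front end, a bidirectional LSTM, and a self-attention pooling layer into a single EEG classifier. It assumes DEAP's 32-channel EEG format; all layer widths, kernel sizes, and head counts are illustrative assumptions, not values taken from the project.

    import torch
    import torch.nn as nn

    class CnnLstmAttention(nn.Module):
        """Sketch of a CNN+LSTM+Self-Attention EEG classifier (assumed sizes)."""
        def __init__(self, n_channels=32, n_classes=2):
            super().__init__()
            # 1-D convolutions over time learn local patterns across channels.
            self.cnn = nn.Sequential(
                nn.Conv1d(n_channels, 64, kernel_size=7, padding=3),
                nn.BatchNorm1d(64), nn.ReLU(), nn.MaxPool1d(4),
                nn.Conv1d(64, 128, kernel_size=5, padding=2),
                nn.BatchNorm1d(128), nn.ReLU(), nn.MaxPool1d(4),
            )
            # Bidirectional LSTM captures longer-range temporal structure.
            self.lstm = nn.LSTM(128, 64, batch_first=True, bidirectional=True)
            # Self-attention re-weights the LSTM frames before pooling.
            self.attn = nn.MultiheadAttention(128, num_heads=4, batch_first=True)
            self.head = nn.Linear(128, n_classes)

        def forward(self, x):                # x: (batch, channels, time)
            f = self.cnn(x).transpose(1, 2)  # -> (batch, frames, 128)
            h, _ = self.lstm(f)              # -> (batch, frames, 128)
            a, _ = self.attn(h, h, h)        # attend over frames
            return self.head(a.mean(dim=1))  # mean-pool, then classify

The late fusion prototype could be as simple as a weighted combination of the class probabilities produced by the independently trained physiological and audio "expert" models; the weight below is a placeholder, since the abstract does not state the arbitration rule used.

    def late_fusion(p_physio: torch.Tensor, p_audio: torch.Tensor,
                    w: float = 0.5) -> torch.Tensor:
        """Blend two experts' class probabilities; w is an assumed weight."""
        return w * p_physio + (1.0 - w) * p_audio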

Item Type: Final Year Project / Dissertation / Thesis (Final Year Project)
Subjects: T Technology > T Technology (General)
Divisions: Faculty of Information and Communication Technology > Bachelor of Computer Science (Honours)
Depositing User: ML Main Library
Date Deposited: 29 Dec 2025 16:02
Last Modified: 29 Dec 2025 16:02
URI: http://eprints.utar.edu.my/id/eprint/7213
