Cheah, Kit Hwa (2021) Variants of convolutional neural networks for classification of multichannel EEG signals: A study based on influence of music and emotion on human brain. Master's dissertation, UTAR.
Abstract
Electroencephalography (EEG) records the electrical potential fields generated by neuronal activity in various parts of the brain. With increasing interest from research communities of different disciplinary backgrounds, EEG is finding ever more promising applications, from research settings to clinical neurology for diagnosis and treatment monitoring. Nonetheless, efficiently identifying and extracting highly representative EEG signal features for a particular scenario is crucial to the success of a classification task. Convolutional neural networks (CNNs), which specialize in processing data with grid-like topology, can help automate the extraction of key representative features from multichannel EEG signals. Although EEG recordings and images both have grid-like topology, their data are organized differently within the grid. This project, which consists of three studies, aims to develop CNN classifiers better suited to processing EEG signals and to identify the factors that influence classifier performance, based on EEG data obtained from experiments studying the influence of music and emotion on the human brain.

Study 1, based on the influence of music on the brain, evaluates the impact of various architectural aspects of the CNN on classification performance, the importance of spatial-dimension convolution in EEG classification, and the computational resource efficiency of CNNs with 2D versus 1D convolution kernels. Study 2, which deals with emotion recognition, investigates whether the number of internal model parameters can be reduced by using double-path convolution with kernels of different dilation factors. Study 3, also an emotion recognition study, investigates the applicability of CNN models originally developed for image processing to EEG classification and further explores architectural changes that improve their performance on EEG data.

The project also reveals a non-uniform, lateralized influence of music and emotion on the human brain, based on discrepancies in classification accuracy between EEG subsets from different brain regions. For classifying EEG recorded while listening to different pieces of music, the test accuracy achieved using channels from the left cerebral hemisphere (88.91%) is approximately 5% higher than that achieved with the right hemisphere (84.12%). The accuracy discrepancy in music-EEG classification is even larger (about 10%) between channels from the frontal lobes (84.93%) and channels from the temporal, parietal, and occipital lobes combined (74.69%). For emotion classification using EEG in Study 3, the accuracy achieved with EEG from the temporal region (83.84%) is approximately 7% higher than that achieved with EEG from the frontal (76.90%) and parietal (76.78%) regions, and there is a 5.1% accuracy discrepancy between channels from the left (88.48%) and right (83.38%) cerebral hemispheres. Music thus appears to affect the frontal lobes more strongly than the temporal, parietal, and occipital lobes, whereas emotion is more strongly reflected in EEG recorded near the temporal lobes. In addition, both music and emotion influence the EEG of the left cerebral hemisphere more than the right.
These neurological findings on the influence of music and emotion on the human brain are potentially helpful in selecting a smaller subset of EEG channels for a particular classification application.
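To make Study 1's architectural contrast concrete, the following is a minimal PyTorch sketch, not the dissertation's actual models, of the two kernel layouts: a 2D-kernel network that convolves across both the electrode (spatial) and time dimensions, and a 1D-kernel network that convolves along time only. The electrode count, trial length, class count, and all layer sizes are illustrative assumptions.

```python
# Minimal sketch of 2D- vs 1D-kernel CNNs for multichannel EEG (Study 1's
# contrast). The 32-channel / 512-sample input and 4-class output are
# assumptions, not the dissertation's configuration.
import torch
import torch.nn as nn

N_EEG_CHANNELS = 32   # assumed electrode count
N_SAMPLES = 512       # assumed time samples per trial
N_CLASSES = 4         # assumed number of stimulus classes


class EEG2DConvNet(nn.Module):
    """Treats a trial as a 1-channel 'image' of shape (electrodes, time).

    The (3, 7) kernels span the spatial (electrode) dimension as well as
    time, so features can mix information across neighbouring channels.
    """

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=(3, 7), padding=(1, 3)),
            nn.ReLU(),
            nn.MaxPool2d((1, 4)),                  # pool along time only
            nn.Conv2d(16, 32, kernel_size=(3, 7), padding=(1, 3)),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, N_CLASSES)

    def forward(self, x):                          # x: (batch, 1, chans, time)
        return self.classifier(self.features(x).flatten(1))


class EEG1DConvNet(nn.Module):
    """Treats each electrode as an input channel; convolves along time only."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(N_EEG_CHANNELS, 16, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(32, N_CLASSES)

    def forward(self, x):                          # x: (batch, chans, time)
        return self.classifier(self.features(x).flatten(1))


if __name__ == "__main__":
    trial = torch.randn(8, N_EEG_CHANNELS, N_SAMPLES)
    print(EEG2DConvNet()(trial.unsqueeze(1)).shape)  # torch.Size([8, 4])
    print(EEG1DConvNet()(trial).shape)               # torch.Size([8, 4])
```

The 1D variant carries cheaper convolutions but gives up the explicit spatial mixing that the 2D kernels perform across adjacent electrodes, which is the trade-off Study 1 evaluates.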
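Study 2's parameter-reduction idea can be sketched similarly. The block below runs two parallel 1D convolution paths with different dilation factors, covering both short- and long-range temporal context with small kernels. The dilation factors, channel widths, and concatenation-based fusion are assumptions for illustration; the closing comparison shows the double path holding fewer parameters than a single wide-kernel convolution of the same output width.

```python
# Hedged sketch of double-path convolution with kernels of different
# dilation factors (Study 2's parameter-reduction idea).
import torch
import torch.nn as nn


class DoublePathDilatedBlock(nn.Module):
    def __init__(self, in_ch, out_ch, dilation_a=1, dilation_b=4):
        super().__init__()
        half = out_ch // 2
        # Path A: undilated kernel for fine-grained local features.
        self.path_a = nn.Conv1d(in_ch, half, kernel_size=3,
                                dilation=dilation_a, padding=dilation_a)
        # Path B: dilated kernel widens the receptive field at the same cost.
        self.path_b = nn.Conv1d(in_ch, half, kernel_size=3,
                                dilation=dilation_b, padding=dilation_b)
        self.act = nn.ReLU()

    def forward(self, x):                      # x: (batch, in_ch, time)
        return self.act(torch.cat([self.path_a(x), self.path_b(x)], dim=1))


if __name__ == "__main__":
    block = DoublePathDilatedBlock(in_ch=32, out_ch=64)
    print(block(torch.randn(8, 32, 512)).shape)   # torch.Size([8, 64, 512])
    # Parameter count vs a single kernel-9 conv of the same output width:
    single = nn.Conv1d(32, 64, kernel_size=9, padding=4)
    count = lambda m: sum(p.numel() for p in m.parameters())
    print(count(block), "vs", count(single))      # 6208 vs 18496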
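For Study 3's direction of reusing image-classification CNNs, a common adaptation pattern is shown below with torchvision's ResNet-18 standing in for the image models: the RGB input stem is replaced with a one-channel stem so an (electrodes x time) trial can be fed in as an image, and the 1000-way ImageNet head is replaced with an emotion classifier. The choice of ResNet-18 and all shapes are assumptions; the dissertation's base architectures and architectural changes may differ.

```python
# Minimal sketch of adapting an image CNN to EEG "images" (Study 3's
# direction). Requires torchvision >= 0.13 for the weights=None argument.
import torch
import torch.nn as nn
from torchvision.models import resnet18

N_CLASSES = 3  # assumed number of emotion classes

model = resnet18(weights=None)
# Replace the 3-channel RGB stem with a 1-channel stem for EEG trials.
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
# Replace the ImageNet classification head with an emotion classifier.
model.fc = nn.Linear(model.fc.in_features, N_CLASSES)

eeg_trial = torch.randn(8, 1, 32, 512)   # (batch, 1, electrodes, time)
print(model(eeg_trial).shape)            # torch.Size([8, 3])
```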