Lee, Hao Jie (2022) Deep learning model for opinion mining. Final Year Project, UTAR.
Abstract
LSTM (Long Short-Term Memory) networks perform well in sentiment analysis, but backpropagation through time is a critical drawback: it makes training longer and the whole process slower. The attention mechanism behaves more like a human reading a sentence, focusing on specific words, and addresses this limitation of LSTM. BERT (Bidirectional Encoder Representations from Transformers) uses the attention mechanism and outperforms other attention-based models such as GPT (Generative Pre-trained Transformer) and ELMo (Embeddings from Language Models) because it learns deep bidirectional representations through the MLM (Masked Language Model) and NSP (Next Sentence Prediction) pre-training tasks. The ERNIE (Enhanced Language Representation with Informative Entities) and ZEN models modify how BERT learns language and have achieved strong results in Chinese NLP (Natural Language Processing). RoBERTa (A Robustly Optimized BERT Pretraining Approach) from Facebook shows that NSP does not help the model, so this project likewise replaces the NSP task with other objectives for learning the language. Sentiment analysis is one of the NLP tasks in which ERNIE and ZEN beat BERT-Chinese, which takes Chinese characters as input and uses WordPiece embeddings for word embedding. Word-level input and embeddings are needed to improve how the BERT model works on Chinese sentiment analysis. Motivated by improving Chinese sentiment analysis, this project combines the experience of these different models to propose a better version of the BERT model. Among the various NLP tasks, the scope of this project is limited to improving sentiment analysis.
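For context, the character-level BERT-Chinese baseline the abstract compares against can be sketched with the HuggingFace transformers library. This is a minimal illustration, not the project's proposed model: the bert-base-chinese checkpoint, the binary label set, and the example sentence are assumptions for demonstration, and the classification head produces meaningful predictions only after fine-tuning on a sentiment dataset.

```python
# Minimal sketch of character-level Chinese sentiment classification with
# BERT, assuming the HuggingFace `transformers` library and the public
# `bert-base-chinese` checkpoint. Labels and input text are illustrative.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-chinese",
    num_labels=2,  # assumed binary sentiment: 0 = negative, 1 = positive
)
model.eval()

text = "这部电影非常好看"  # "This movie is very good"
# BERT-Chinese tokenizes at the character level, the input granularity the
# abstract argues should be complemented with word-level information.
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# The untuned head gives arbitrary outputs; fine-tune before trusting these.
pred = logits.argmax(dim=-1).item()
print("positive" if pred == 1 else "negative")
```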