Yong, Hong Long (2022) Anomaly detection with Attention-based deep autoencoder. Final Year Project, UTAR.
Abstract
Anomaly detection has become one of the most active topics in the Information Technology domain. Many existing approaches investigate its application in areas such as video surveillance, financial technology, telecommunication, and healthcare. However, to the best of our knowledge, there is currently no single solution reliable enough to be deployed in real-world applications. We examine key observations from real-world applications and aim to improve an existing anomaly detection model to the point where it is practical and reliable for such applications. We have identified several key observations that may help to improve the existing MemAE work from [4]. Firstly, anomaly detection on surveillance cameras always processes frames from the same scene. Secondly, it is not practical for an anomaly detection model to be trained on a huge amount of data in an actual deployment. Thirdly, an anomaly in a video frame often occupies only a small portion of the frame rather than the whole frame. In this project, a Conv2D autoencoder that mimics the Conv3D autoencoder was built from scratch to process images. Two attention mechanisms were applied to the baseline Conv2D autoencoder separately, forming two different attention-based deep autoencoders: the Convolutional Block Attention Module (CBAM) [10] and the attention-based approach proposed by [11]. Throughout the experiments, applying the attention mechanisms to the baseline autoencoder improved its performance and hence the accuracy of anomaly detection.
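For illustration, the sketch below (assuming a PyTorch implementation; the layer widths, kernel sizes, and single-channel input are hypothetical choices, not the project's actual configuration) shows how a CBAM block can be inserted into a Conv2D autoencoder so that reconstruction, and therefore the reconstruction-error anomaly score, is guided toward the informative channels and spatial regions of each frame.

```python
import torch
import torch.nn as nn


class CBAM(nn.Module):
    """Convolutional Block Attention Module: channel attention followed by
    spatial attention, in the spirit of [10]."""

    def __init__(self, channels, reduction=16, spatial_kernel=7):
        super().__init__()
        # Channel attention: shared MLP over average- and max-pooled descriptors
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.max_pool = nn.AdaptiveMaxPool2d(1)
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )
        # Spatial attention: conv over concatenated channel-wise avg/max maps
        self.spatial = nn.Conv2d(2, 1, spatial_kernel,
                                 padding=spatial_kernel // 2, bias=False)

    def forward(self, x):
        # Refine features with the channel attention map
        ca = torch.sigmoid(self.mlp(self.avg_pool(x)) + self.mlp(self.max_pool(x)))
        x = x * ca
        # Refine features with the spatial attention map
        avg_map = torch.mean(x, dim=1, keepdim=True)
        max_map = torch.max(x, dim=1, keepdim=True)[0]
        sa = torch.sigmoid(self.spatial(torch.cat([avg_map, max_map], dim=1)))
        return x * sa


class AttentionAutoencoder(nn.Module):
    """Illustrative Conv2D autoencoder with CBAM after each encoder stage;
    channel sizes are assumptions, not the project's configuration."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            CBAM(32),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            CBAM(64),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 3, stride=2, padding=1, output_padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 1, 3, stride=2, padding=1, output_padding=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))


# The per-frame reconstruction error serves as the anomaly score:
# frames the model reconstructs poorly are flagged as anomalous.
frame = torch.rand(1, 1, 128, 128)  # single grayscale frame
model = AttentionAutoencoder()
score = torch.mean((model(frame) - frame) ** 2)
```

Placing the attention modules inside the encoder lets the channel and spatial maps emphasise the small regions where anomalies tend to appear, which matches the third observation above; the actual architecture and training setup are described in the full report.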