Foo, Wen Shun (2022) Fighting video analysis employing computer vision technique. Final Year Project, UTAR.
Abstract
This project analyses and classifies fighting videos from the UCF-Crimes dataset to explore computer vision techniques for detecting fighting events. Different scenes are expected to require different approaches, so a suitable solution for each category should be implemented and tested. Anomaly detection is one of the most challenging tasks in computer vision because of the ambiguous nature of anomalies and the complexity of human behaviour. Anomalous events occur rarely compared to normal events, so monitoring for them manually wastes labour and time. The motivation of this project is to detect several categories of fighting events so that such incidents can be signalled as timely warnings; its innovation is to adopt automatic anomaly detection and eliminate manual inspection. The fields of study involved are computer vision, image processing, machine learning, and deep learning. In the methodology, the input video frames are first split into training and testing data and undergo pre-processing steps such as grayscale conversion to reduce noise and dilation to enlarge the white regions. The important features between two consecutive frames of the input videos are then extracted, their optical flow is calculated, and the resulting tracks are drawn as randomly coloured lines. An observation stage follows to verify that the generated optical flow is meaningful and suitable for the project solution: YOLO is used to compare detected human body sizes, the optical flow is recoloured according to its orientation value, Delaunay triangles and Voronoi diagrams are drawn, and a frequency histogram of the orientation values is generated. After observation, the standard deviation of the optical-flow orientation is recorded for every dataset video and normalised.
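The orientation-based feature described above can be sketched as follows. This is a minimal illustration, not the author's code: the flow vectors are assumed to come from an optical-flow routine such as OpenCV's Lucas-Kanade tracker (not shown here), and the naive linear standard deviation over angles ignores wrap-around at ±π.

```python
import math

def orientation_std(flow_vectors):
    """Standard deviation of optical-flow orientations.

    flow_vectors: list of (dx, dy) displacements of tracked feature
    points between two consecutive frames. Angles are in (-pi, pi];
    this naive std does not handle angle wrap-around.
    """
    angles = [math.atan2(dy, dx) for dx, dy in flow_vectors]
    mean = sum(angles) / len(angles)
    var = sum((a - mean) ** 2 for a in angles) / len(angles)
    return math.sqrt(var)

def orientation_histogram(flow_vectors, bins=8):
    """Frequency histogram of flow orientations over `bins` equal
    sectors of the full circle, as in the observation step."""
    counts = [0] * bins
    for dx, dy in flow_vectors:
        angle = math.atan2(dy, dx) % (2 * math.pi)
        counts[min(int(angle / (2 * math.pi) * bins), bins - 1)] += 1
    return counts

def min_max_normalize(values):
    """Min-max normalise the per-video std values to [0, 1]
    before they are fed to the classifier."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]
```

The intuition is that chaotic motion in a fight produces flow vectors pointing in many directions at once, so the orientation standard deviation of a fighting clip should exceed that of routine pedestrian motion.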
The normalised data is used to fit an SVM classification model, and the final step performs classification to detect fighting events in the dataset videos. The trained model is evaluated using a confusion matrix, a classification report, the AUC-ROC curve, and the learning curve.
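The classification and evaluation steps can be sketched with scikit-learn as below. The feature values and labels are illustrative placeholders, not results from the UCF-Crimes experiments; the kernel and split settings are assumptions, since the abstract does not specify them.

```python
from sklearn.svm import SVC
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.model_selection import train_test_split

# One normalised orientation-std value per video (hypothetical data):
# more variable flow direction is taken to indicate fighting.
X = [[0.05], [0.10], [0.15], [0.20], [0.75], [0.80], [0.90], [0.95]]
y = [0, 0, 0, 0, 1, 1, 1, 1]  # 0 = normal, 1 = fighting

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

# probability=True allows predicted scores for an AUC-ROC curve later.
model = SVC(kernel="rbf", probability=True)
model.fit(X_train, y_train)

y_pred = model.predict(X_test)
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))
```

In the same spirit, `sklearn.metrics.roc_curve` and `sklearn.model_selection.learning_curve` would cover the remaining two evaluation views mentioned in the abstract.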
Item Type: Final Year Project / Dissertation / Thesis (Final Year Project)
Subjects: Q Science > Q Science (General); T Technology > T Technology (General)
Divisions: Faculty of Information and Communication Technology > Bachelor of Computer Science (Honours)
Depositing User: ML Main Library
Date Deposited: 13 Oct 2022 15:29
Last Modified: 13 Oct 2022 15:29
URI: http://eprints.utar.edu.my/id/eprint/4648