Khoo, Chia Hong (2020) Action detection system for alerting driver using computer vision. Final Year Project, UTAR.
Abstract
Nowadays, the increasing number of careless drivers on the road has resulted in more accident cases. A driver's decisions and behaviour are key to maintaining road safety. However, many drivers tend to perform secondary tasks such as playing with their phones, adjusting the radio, eating or drinking, answering phone calls, or, worst of all, reading text messages. In previous efforts, many approaches have been introduced to recognize and capture potentially dangerous careless-driving behaviour inside the car. This project focuses on recognizing the driver's secondary tasks using an action detection method. A camera is set up inside the car to capture the driver's actions in real time. The video is processed by a human pose estimator framework that extracts human pose frames without the background. Inside this framework, a raw image is fed into a CNN that computes activation maps for the human key points. The key point coordinates are then computed from these activation maps and drawn on a new blank frame. These frames are fed into a pose-based convolutional neural network for action classification. If the action performed by the driver is considered a dangerous secondary task, an alert is given. The proposed framework achieves a higher speed than other frameworks when run on a Raspberry Pi CPU. It is able to detect 10 different driver actions, of which only talking to passengers and normal driving do not trigger the buzzer that alerts the driver.
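The abstract outlines a pipeline of keypoint activation maps, keypoint coordinates drawn on a blank frame, pose-based CNN classification, and a buzzer alert. The following is only a minimal Python sketch of that flow, not the thesis's actual implementation: the model loaders, the 10-class action list, and the safe-action set are illustrative assumptions.

```python
import cv2
import numpy as np

# Hypothetical stand-ins for the project's trained models (assumption):
# the thesis describes a keypoint-heatmap CNN followed by a pose-based CNN
# classifier, but the exact architectures and weights are not given here.
def load_pose_net():
    """Assumption: returns a callable mapping a BGR frame to keypoint heatmaps."""
    raise NotImplementedError("load the trained pose-estimation CNN here")

def load_action_net():
    """Assumption: returns a callable mapping a pose image to a class index."""
    raise NotImplementedError("load the trained pose-based action CNN here")

ACTIONS = [
    "normal driving", "talking to passenger", "texting", "talking on phone",
    "adjusting radio", "drinking", "eating", "reaching behind",
    "hair and makeup", "reading",
]  # illustrative 10-class list; only the first two are treated as safe
SAFE = {"normal driving", "talking to passenger"}

def heatmaps_to_keypoints(heatmaps, frame_shape):
    """Take the argmax of each activation map and rescale it to frame coordinates."""
    h, w = frame_shape[:2]
    points = []
    for hm in heatmaps:  # one activation map per keypoint
        y, x = np.unravel_index(np.argmax(hm), hm.shape)
        points.append((int(x * w / hm.shape[1]), int(y * h / hm.shape[0])))
    return points

def draw_pose(points, frame_shape):
    """Draw keypoints on a blank canvas so the classifier sees only the pose."""
    canvas = np.zeros((frame_shape[0], frame_shape[1], 3), dtype=np.uint8)
    for (x, y) in points:
        cv2.circle(canvas, (x, y), 4, (255, 255, 255), -1)
    return canvas

def main():
    pose_net, action_net = load_pose_net(), load_action_net()
    cap = cv2.VideoCapture(0)  # in-car camera
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        heatmaps = pose_net(frame)                # keypoint activation maps
        keypoints = heatmaps_to_keypoints(heatmaps, frame.shape)
        pose_img = draw_pose(keypoints, frame.shape)
        action = ACTIONS[action_net(pose_img)]    # pose-based action classification
        if action not in SAFE:
            print(f"ALERT: dangerous secondary task detected ({action})")
            # on a Raspberry Pi, a GPIO-driven buzzer would replace this print
    cap.release()

if __name__ == "__main__":
    main()
```

Classifying the pose drawing rather than the raw frame, as the abstract describes, removes background and lighting variation before the action classifier sees the input.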