Choo, Yong Quan (2022) An object finder for the visually impaired. Final Year Project, UTAR.
Abstract
Vision loss or vision impairment has a profound impact on daily life. Visually impaired people cannot rely on sight to perceive their surroundings precisely, and they often require assistance to carry out everyday tasks. In this project, an assistive solution is proposed to help people with visual impairment identify and locate objects accurately so that they can pick them up. The project aims to develop an object finder application that helps visually impaired users easily locate and identify common objects and daily essentials in an indoor environment. The system implements an object detection module built on a pre-trained YOLO model, a single-shot convolutional neural network that detects objects in real time from images or video. The high speed and high accuracy of the YOLO model achieve high responsiveness for the user. The system outputs the detection results by labelling bounding boxes around the objects in the visual data, which enables it to compute the direction of each object relative to the user’s hand in the next stage. For hand detection, a background frame is extracted from the video and used to perform background subtraction against each subsequent frame. The system then computes the direction of the object relative to the user’s hand. When the user’s hand overlaps an object in the visual data, the system triggers a notification prompting the user to pick up the object.
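The post-detection stages described above (background subtraction for the hand, bounding-box overlap, and relative direction) can be sketched in plain Python. This is a minimal illustration under assumed simplifications, not the project's actual implementation: it treats frames as single-channel NumPy arrays, takes the hand as the largest foreground region from a simple absolute-difference subtraction, and uses axis-aligned boxes in the form (x1, y1, x2, y2). All function names here are hypothetical.

```python
import numpy as np

def subtract_background(background, frame, threshold=30):
    """Foreground mask via absolute-difference background subtraction."""
    diff = np.abs(frame.astype(int) - background.astype(int))
    return diff > threshold

def hand_bbox(mask):
    """Bounding box (x1, y1, x2, y2) of the foreground pixels, or None."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return (int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max()))

def boxes_overlap(a, b):
    """True when two (x1, y1, x2, y2) boxes intersect."""
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def relative_direction(hand, obj):
    """Coarse direction from the hand's centre to the object's centre."""
    hx, hy = (hand[0] + hand[2]) / 2, (hand[1] + hand[3]) / 2
    ox, oy = (obj[0] + obj[2]) / 2, (obj[1] + obj[3]) / 2
    horizontal = "right" if ox >= hx else "left"
    vertical = "down" if oy >= hy else "up"
    return horizontal, vertical

if __name__ == "__main__":
    # Synthetic 10x10 frames: the "hand" appears as a bright patch.
    bg = np.zeros((10, 10), dtype=np.uint8)
    frame = bg.copy()
    frame[2:5, 2:5] = 200
    hand = hand_bbox(subtract_background(bg, frame))
    obj = (6, 6, 8, 8)             # a detected object's box (e.g. from YOLO)
    if boxes_overlap(hand, obj):
        print("notify: pick up the object")
    else:
        print("guide:", relative_direction(hand, obj))
```

In the full system, the object boxes would come from the YOLO detector rather than being hard-coded, and the subtraction would run per frame of the live video.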