UTAR Institutional Repository

Object Localization In 3D Point Cloud

Chung, Hui Sze (2020) Object Localization In 3D Point Cloud. Final Year Project, UTAR.


    Abstract

    Object localization in 3D point clouds is one of the most complex yet interesting problems in computer vision, robotics and autonomous agents. Localization results are often affected by factors such as the quality of the point clouds and the sensitivity of the algorithms to occlusion. This project provides an efficient algorithm that can recognize and localize more than one object in a scene at the same time, including objects that have undergone a transformation. The object localization pipeline consists of four major steps: Scale Invariant Feature Transform (SIFT) keypoint detection to mark descriptive points in the cloud; Signature of Histograms of OrienTations (SHOT) descriptor construction to encode the geometric properties of the keypoints; feature matching to collect point-to-point correspondences between the scene and the model; and Hough voting to generate hypotheses, construct a model instance and localize it in the scene. In this project, the parameters of each step were adjusted to analyse their effects on the final localization result, and the results obtained at each step under these adjustments were analysed and discussed. The SIFT detector produced highly descriptive keypoints, which were mostly located along the outlines of the point clouds. In the descriptor construction step, two methods, Point Feature Histogram (PFH) and SHOT, were compared; SHOT outperformed PFH, computing descriptors more efficiently. The high accuracy of the feature matching process indicated that it generated correct correspondences between the scene and the model. In the final localization step, after parameter adjustment, the algorithm correctly localized all input models in the scene point cloud, achieving 100% localization accuracy.
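    The pipeline described above matches the local-feature correspondence-grouping approach available in the Point Cloud Library (PCL). The sketch below shows how the four steps could be wired together with PCL's SIFTKeypoint, SHOTEstimationOMP, KdTreeFLANN matching and Hough3DGrouping classes; it is a minimal illustration under assumed parameter values (SIFT scales, SHOT radius, Hough bin size and threshold) and hypothetical file names, not the exact configuration tuned in the report.

```cpp
// Sketch of the four-step pipeline (SIFT keypoints -> SHOT descriptors ->
// descriptor matching -> Hough 3D voting) with the Point Cloud Library (PCL).
// Numeric parameters and file names below are illustrative assumptions.
#include <cmath>
#include <iostream>
#include <pcl/io/pcd_io.h>
#include <pcl/common/io.h>
#include <pcl/point_types.h>
#include <pcl/search/kdtree.h>
#include <pcl/features/normal_3d_omp.h>
#include <pcl/features/shot_omp.h>
#include <pcl/features/board.h>
#include <pcl/keypoints/sift_keypoint.h>
#include <pcl/kdtree/kdtree_flann.h>
#include <pcl/recognition/cg/hough_3d.h>

using PointT  = pcl::PointXYZRGB;
using NormalT = pcl::Normal;
using RFT     = pcl::ReferenceFrame;

int main()
{
  pcl::PointCloud<PointT>::Ptr model(new pcl::PointCloud<PointT>);
  pcl::PointCloud<PointT>::Ptr scene(new pcl::PointCloud<PointT>);
  pcl::io::loadPCDFile("model.pcd", *model);   // hypothetical input files
  pcl::io::loadPCDFile("scene.pcd", *scene);

  // Surface normals, required by SHOT and by the local reference frames.
  pcl::PointCloud<NormalT>::Ptr model_normals(new pcl::PointCloud<NormalT>);
  pcl::PointCloud<NormalT>::Ptr scene_normals(new pcl::PointCloud<NormalT>);
  pcl::NormalEstimationOMP<PointT, NormalT> ne;
  ne.setKSearch(10);
  ne.setInputCloud(model); ne.compute(*model_normals);
  ne.setInputCloud(scene); ne.compute(*scene_normals);

  // Step 1: SIFT keypoint detection on both clouds.
  pcl::SIFTKeypoint<PointT, pcl::PointWithScale> sift;
  sift.setSearchMethod(pcl::search::KdTree<PointT>::Ptr(new pcl::search::KdTree<PointT>));
  sift.setScales(0.005f, 6, 4);      // min scale, octaves, scales per octave (assumed)
  sift.setMinimumContrast(0.005f);
  pcl::PointCloud<pcl::PointWithScale> model_sift, scene_sift;
  sift.setInputCloud(model); sift.compute(model_sift);
  sift.setInputCloud(scene); sift.compute(scene_sift);
  pcl::PointCloud<PointT>::Ptr model_kp(new pcl::PointCloud<PointT>);
  pcl::PointCloud<PointT>::Ptr scene_kp(new pcl::PointCloud<PointT>);
  pcl::copyPointCloud(model_sift, *model_kp);
  pcl::copyPointCloud(scene_sift, *scene_kp);

  // Step 2: SHOT descriptors computed at the keypoints.
  pcl::PointCloud<pcl::SHOT352>::Ptr model_desc(new pcl::PointCloud<pcl::SHOT352>);
  pcl::PointCloud<pcl::SHOT352>::Ptr scene_desc(new pcl::PointCloud<pcl::SHOT352>);
  pcl::SHOTEstimationOMP<PointT, NormalT, pcl::SHOT352> shot;
  shot.setRadiusSearch(0.02f);       // descriptor support radius (assumed)
  shot.setInputCloud(model_kp); shot.setInputNormals(model_normals);
  shot.setSearchSurface(model); shot.compute(*model_desc);
  shot.setInputCloud(scene_kp); shot.setInputNormals(scene_normals);
  shot.setSearchSurface(scene); shot.compute(*scene_desc);

  // Step 3: feature matching - nearest neighbour in SHOT descriptor space.
  pcl::CorrespondencesPtr corrs(new pcl::Correspondences);
  pcl::KdTreeFLANN<pcl::SHOT352> match_tree;
  match_tree.setInputCloud(model_desc);
  for (std::size_t i = 0; i < scene_desc->size(); ++i)
  {
    std::vector<int> idx(1);
    std::vector<float> sq_dist(1);
    if (!std::isfinite(scene_desc->at(i).descriptor[0])) continue;  // skip NaN descriptors
    if (match_tree.nearestKSearch(scene_desc->at(i), 1, idx, sq_dist) == 1 && sq_dist[0] < 0.25f)
      corrs->push_back(pcl::Correspondence(idx[0], static_cast<int>(i), sq_dist[0]));
  }

  // Local reference frames, needed by the Hough voting stage.
  pcl::PointCloud<RFT>::Ptr model_rf(new pcl::PointCloud<RFT>);
  pcl::PointCloud<RFT>::Ptr scene_rf(new pcl::PointCloud<RFT>);
  pcl::BOARDLocalReferenceFrameEstimation<PointT, NormalT, RFT> rf;
  rf.setRadiusSearch(0.015f);        // LRF support radius (assumed)
  rf.setInputCloud(model_kp); rf.setInputNormals(model_normals);
  rf.setSearchSurface(model); rf.compute(*model_rf);
  rf.setInputCloud(scene_kp); rf.setInputNormals(scene_normals);
  rf.setSearchSurface(scene); rf.compute(*scene_rf);

  // Step 4: Hough 3D voting clusters the correspondences into model instances.
  pcl::Hough3DGrouping<PointT, PointT, RFT, RFT> hough;
  hough.setHoughBinSize(0.01f);      // assumed bin size
  hough.setHoughThreshold(5.0);      // assumed minimum number of votes
  hough.setUseInterpolation(true);
  hough.setInputCloud(model_kp); hough.setInputRf(model_rf);
  hough.setSceneCloud(scene_kp); hough.setSceneRf(scene_rf);
  hough.setModelSceneCorrespondences(corrs);

  std::vector<Eigen::Matrix4f, Eigen::aligned_allocator<Eigen::Matrix4f>> poses;
  std::vector<pcl::Correspondences> clustered;
  hough.recognize(poses, clustered);

  std::cout << "Model instances localized: " << poses.size() << std::endl;
  return 0;
}
```

    Each entry in poses is a 4x4 rigid transformation that places the model in the scene, i.e. the localization result for one detected instance, with clustered holding the correspondences that voted for it.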

    Item Type: Final Year Project / Dissertation / Thesis (Final Year Project)
    Subjects: R Medicine > R Medicine (General)
    Divisions: Lee Kong Chian Faculty of Engineering and Science > Bachelor of Engineering (Honours) Biomedical Engineering
    Depositing User: Sg Long Library
    Date Deposited: 18 Aug 2021 19:52
    Last Modified: 18 Aug 2021 19:52
    URI: http://eprints.utar.edu.my/id/eprint/4232
