UTAR Institutional Repository

An optimal near-infrared subcutaneous vein extraction using deep learning.

Chan, Xiao Jing (2022) An optimal near-infrared subcutaneous vein extraction using deep learning. Final Year Project, UTAR.


    Abstract

    Intravenous (IV) access is a common yet important daily clinical procedure that delivers fluids or medication into a patient’s vein. However, IV insertion is challenging: clinicians often struggle to locate subcutaneous veins because of patients’ physiological factors, such as hairy forearms and thick dermal fat, as well as medical staff fatigue. As a result, patients frequently endure multiple insertion attempts. Although numerous studies have attempted to overcome this limitation, the problem remains unsolved. This project therefore proposes an optimal near-infrared subcutaneous vein extraction technique using deep learning. The proposed model is intended for smart healthcare machines that capture images of patients’ forearms to assist medical staff in locating subcutaneous veins during IV insertion. U-Net, a fully convolutional network (FCN) architecture, was used because of its robustness in biomedical image segmentation. During development, the original images were used to train the proposed model without further preprocessing, so that the model generalizes better to the problem statement and objectives of this project. In addition, data augmentation was applied to increase the dataset size and reduce overfitting. The original U-Net architecture was optimized by replacing upsampling with transpose convolution and by adding batch normalization. The model was further trained with different hyperparameters, including the learning rate, activation function, number of epochs, filter size, and number of layer blocks. After hyperparameter fine-tuning, unsupervised vein segmentation was evaluated by manually selecting 20 checkpoints of true and false vein pixels from the unlabelled forearm images. The saved checkpoints were then compared against the predicted output to determine model performance. The proposed model achieved an accuracy of 0.8871, a specificity of 0.9935, a sensitivity of 0.7806, and a precision of 0.9918, fulfilling the defined objectives.
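
    The optimized architecture described in the abstract (U-Net with transpose-convolution upsampling and batch normalization) can be sketched as follows. This is a minimal illustration in Keras/TensorFlow; the input resolution, number of layer blocks, filter sizes, and optimizer settings shown here are illustrative assumptions, not the configuration reported in the project.

        # Minimal U-Net-style sketch: transpose convolution replaces plain
        # upsampling, and batch normalization follows every convolution.
        # Layer counts, filter sizes and input shape are assumptions.
        import tensorflow as tf
        from tensorflow.keras import layers, models

        def conv_block(x, filters):
            """Two 3x3 convolutions, each followed by batch normalization and ReLU."""
            for _ in range(2):
                x = layers.Conv2D(filters, 3, padding="same")(x)
                x = layers.BatchNormalization()(x)
                x = layers.Activation("relu")(x)
            return x

        def build_unet(input_shape=(256, 256, 1), base_filters=32):
            """U-Net-style encoder-decoder with transpose-convolution upsampling."""
            inputs = layers.Input(shape=input_shape)

            # Encoder: repeated conv blocks with max pooling; keep skip tensors.
            skips = []
            x = inputs
            for depth in range(4):
                x = conv_block(x, base_filters * 2 ** depth)
                skips.append(x)
                x = layers.MaxPooling2D(2)(x)

            # Bottleneck.
            x = conv_block(x, base_filters * 16)

            # Decoder: transpose convolution upsamples, skip connections are
            # concatenated from the matching encoder level.
            for depth in reversed(range(4)):
                x = layers.Conv2DTranspose(base_filters * 2 ** depth, 2,
                                           strides=2, padding="same")(x)
                x = layers.concatenate([x, skips[depth]])
                x = conv_block(x, base_filters * 2 ** depth)

            # Per-pixel vein / non-vein probability map.
            outputs = layers.Conv2D(1, 1, activation="sigmoid")(x)
            return models.Model(inputs, outputs)

        model = build_unet()
        model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
                      loss="binary_crossentropy",
                      metrics=["accuracy"])

    The checkpoint-based evaluation can likewise be sketched: manually selected true-vein and non-vein pixel coordinates are compared against the predicted binary mask to derive accuracy, specificity, sensitivity, and precision. Function and variable names below are hypothetical.

        # Sketch of the checkpoint comparison, assuming pred_mask is a 2-D
        # NumPy array of 0/1 pixels and each checkpoint is a (row, col) tuple.
        import numpy as np

        def checkpoint_metrics(pred_mask, vein_points, background_points):
            """Compute accuracy, specificity, sensitivity and precision from
            manually selected true-vein and non-vein pixel checkpoints."""
            tp = sum(pred_mask[y, x] == 1 for y, x in vein_points)
            fn = len(vein_points) - tp
            tn = sum(pred_mask[y, x] == 0 for y, x in background_points)
            fp = len(background_points) - tn

            accuracy = (tp + tn) / (tp + tn + fp + fn)
            specificity = tn / (tn + fp)
            sensitivity = tp / (tp + fn)
            precision = tp / (tp + fp)
            return accuracy, specificity, sensitivity, precision

        # Example usage with a dummy prediction (illustrative only).
        dummy_mask = np.zeros((256, 256), dtype=np.uint8)
        dummy_mask[100:120, 50:60] = 1
        vein_pts = [(105, 52), (110, 55)]
        bg_pts = [(10, 10), (200, 200)]
        print(checkpoint_metrics(dummy_mask, vein_pts, bg_pts))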

    Item Type: Final Year Project / Dissertation / Thesis (Final Year Project)
    Subjects: Q Science > Q Science (General)
    T Technology > T Technology (General)
    Divisions: Faculty of Information and Communication Technology > Bachelor of Computer Science (Honours)
    Depositing User: ML Main Library
    Date Deposited: 20 Oct 2022 14:50
    Last Modified: 20 Oct 2022 14:52
    URI: http://eprints.utar.edu.my/id/eprint/4642
