UTAR Institutional Repository

Defense against adversarial attack in image recognition

Ng, Shi Qi (2024) Defense against adversarial attack in image recognition. Final Year Project, UTAR.


    Abstract

    Deep neural networks have proven extremely useful and perform exceedingly well in machine learning tasks such as computer vision, speech recognition, and natural language processing, and in domains such as healthcare and autonomous driving. The high accuracy exhibited by deep learning models has attracted practitioners to deploy them in many real-world scenarios, including safety-critical ones. However, high accuracy does not necessarily mean that these models are reliable or robust enough to be employed directly in daily life. It was recently found that these high-performance models can be fooled by adversarial examples: perturbed inputs that are almost indiscernible from normal inputs to the human eye but can severely disrupt a model's behavior. If adversaries exploited this inherent weakness, the consequences could be dire. Hence, defensive methods should be deployed to make machine learning models robust against adversarial examples. In the reviewed literature, researchers have tried incorporating randomness into models through methods such as RSE and Adv-BNN, but only Adv-BNN combines adversarial training with a Bayesian neural network (BNN) that infuses randomness into the defensive strategy; other works rarely investigate the effect of combining adversarial training and randomness incorporation in a single defensive method, or of combining adversarial purification with adversarial training. In this thesis, we propose a defensive method that combines a noise-introducing preprocessing pipeline, adversarial purification, and adversarial training, and we investigate various ways of incorporating noise, and the effect of doing so, in an adversarial defense system.
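
    For readers unfamiliar with the ingredients the abstract combines, the sketch below illustrates, in PyTorch, FGSM adversarial-example generation and one adversarial-training step that routes the adversarial input through a noise-injecting preprocessing stage. This is a minimal sketch under stated assumptions, not the thesis's actual implementation: `model`, `optimizer`, `fgsm_example`, `noisy_preprocess`, `epsilon`, and `sigma` are illustrative names and values, and the adversarial-purification stage is omitted because it requires a separately trained purifier model.

        import torch
        import torch.nn.functional as F

        def fgsm_example(model, x, y, epsilon=8 / 255):
            """Craft an FGSM adversarial example: perturb x along the sign of
            the loss gradient, then clip back to the valid image range [0, 1]."""
            x_adv = x.clone().detach().requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)
            loss.backward()
            x_adv = x_adv + epsilon * x_adv.grad.sign()
            return torch.clamp(x_adv, 0.0, 1.0).detach()

        def noisy_preprocess(x, sigma=0.05):
            """Hypothetical noise-injection step standing in for the thesis's
            preprocessing pipeline: add Gaussian noise before classification."""
            return torch.clamp(x + sigma * torch.randn_like(x), 0.0, 1.0)

        def adversarial_training_step(model, optimizer, x, y):
            """One adversarial-training step: fit the model on adversarial
            examples that have passed through the noisy preprocessing stage."""
            x_adv = fgsm_example(model, x, y)
            optimizer.zero_grad()
            loss = F.cross_entropy(model(noisy_preprocess(x_adv)), y)
            loss.backward()
            optimizer.step()
            return loss.item()

    In a full defense of the kind described above, a purification network would sit between `noisy_preprocess` and the classifier, attempting to strip the adversarial perturbation before the model sees the input.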

    Item Type: Final Year Project / Dissertation / Thesis (Final Year Project)
    Subjects: T Technology > T Technology (General)
    Divisions: Faculty of Information and Communication Technology > Bachelor of Computer Science (Honours)
    Depositing User: ML Main Library
    Date Deposited: 21 Feb 2025 11:05
    Last Modified: 21 Feb 2025 11:05
    URI: http://eprints.utar.edu.my/id/eprint/6989
