SURVEY AND PROPOSED METHOD TO DETECT ADVERSARIAL EXAMPLES USING AN ADVERSARIAL RETRAINING MODEL



Bibliographic Details
Main Authors: Phan, Thanh Son; Ta, Quang Hua; Pham, Duy Trung; Truong, Phi Ho
Format: Article
Language: English
Published: Trường Đại học Đà Lạt, 2024
Subjects:
Online Access: https://scholar.dlu.edu.vn/thuvienso/handle/DLU123456789/256905
https://tckh.dlu.edu.vn/index.php/tckhdhdl/article/view/1150
Holding Library: Thư viện Trường Đại học Đà Lạt
Description
Abstract: Artificial intelligence (AI) has found applications across many sectors and industries, offering numerous advantages to people. One prominent area of contribution is machine learning models, which have transformed fields ranging from self-driving cars and intelligent chatbots to automated facial authentication systems. In recent years, however, machine learning models have become the target of various attack methods. One common and dangerous method is the adversarial attack, in which subtly modified input images cause a model to misclassify or make erroneous predictions. To confront this challenge, we present a novel approach called adversarial retraining, which uses adversarial examples to train machine learning and deep learning models. This technique aims to enhance the robustness and performance of these models by exposing them to adversarial scenarios during the training process. In this paper, we survey detection methods and propose a method to detect adversarial examples using YOLOv7, a widely used and intensively researched object detection model. Through adversarial retraining and experiments, we show that the proposed method is an effective way to help deep learning models detect certain cases of adversarial examples.
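
The abstract describes adversarial retraining only at a high level, so the following is a minimal sketch of the general idea, assuming a PyTorch image classifier and an FGSM-style attack; the function names, the epsilon perturbation budget, and the adv_weight mixing coefficient are illustrative assumptions and do not reproduce the paper's actual YOLOv7 pipeline.

import torch
import torch.nn.functional as F

def fgsm_perturb(model, images, labels, epsilon=0.03):
    # FGSM step: x_adv = clip(x + epsilon * sign(grad_x loss)); assumes inputs in [0, 1].
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    return (images + epsilon * images.grad.sign()).clamp(0.0, 1.0).detach()

def adversarial_retraining_epoch(model, loader, optimizer, epsilon=0.03, adv_weight=0.5):
    # One epoch that mixes clean and adversarial examples, as the abstract describes.
    model.train()
    for images, labels in loader:
        # Craft adversarial counterparts of the current batch on the fly.
        adv_images = fgsm_perturb(model, images, labels, epsilon)

        optimizer.zero_grad()  # discard gradients accumulated while crafting the attack
        clean_loss = F.cross_entropy(model(images), labels)
        adv_loss = F.cross_entropy(model(adv_images), labels)

        # Weighted objective: the model is exposed to adversarial scenarios during training.
        loss = (1.0 - adv_weight) * clean_loss + adv_weight * adv_loss
        loss.backward()
        optimizer.step()

In principle, the same pattern carries over to a detection model such as YOLOv7 by replacing the classification loss with the detector's own loss and generating the perturbations against that objective, though the paper's specific training setup is not detailed in this record.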