Electronic Theses and Dissertations

Date of Award

1-1-2024

Document Type

Thesis

Degree Name

M.S. in Engineering Science

First Advisor

Charles Walter

Second Advisor

Yixin Chen

Third Advisor

Thai Le

Relational Format

dissertation/thesis

Abstract

With the increasing use of AI across applications, its impact on different communities is crucial to the advancement of humanity and technology. However, because AI models often behave in unexplained ways, they could wreak havoc if loopholes are found. Adversarial attack algorithms have been shown to cause deep neural networks to fail on image classification and object detection tasks while remaining visually imperceptible to humans. Adversarial attacks produce out-of-distribution predictions and can cause a model to fail in unexpected ways.
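One well-known member of the adversarial attack family the abstract alludes to is the Fast Gradient Sign Method (FGSM), which perturbs each input feature by a small, bounded amount in the direction of the loss gradient. The following is a minimal sketch only, using a hypothetical toy linear model rather than a deep network; the function name and values are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

def fgsm_perturb(x, grad, epsilon):
    """Shift input x by epsilon in the sign direction of the loss gradient.

    Each feature moves by at most epsilon, which is why pixel-level
    attacks of this kind can remain visually imperceptible.
    """
    return x + epsilon * np.sign(grad)

# Toy linear model (hypothetical): score = w . x, so the gradient of the
# score with respect to x is simply w.
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.2, 0.1, -0.3])

x_adv = fgsm_perturb(x, grad=w, epsilon=0.1)

# Every feature shifts by exactly epsilon in magnitude.
print(np.abs(x_adv - x))
```

In practice the gradient would come from backpropagating a classification loss through the network, but the bounded, sign-based perturbation shown here is the core of the technique.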
