Committee Chair

Yang, Li

Committee Members

Kizza, Joseph; Xie, Mengjun; Ward, Michael


Dept. of Computational Science


College of Engineering and Computer Science


University of Tennessee at Chattanooga

Place of Publication

Chattanooga (Tenn.)


Abstract

In today’s technology-driven world, the use of Machine Learning (ML) systems is becoming ubiquitous, albeit often in the background, in many areas of daily life. ML systems are being used to detect malware, control autonomous vehicles, classify images, assist with medical diagnosis, and block internet ads with high precision. Although the use of these ML systems has become widespread in our society, there is the potential for systems used in high-stakes situations to make faulty predictions that can have serious consequences. Recently, researchers have shown that even deep neural networks (DNNs) can be “fooled” into misclassifying an input sample that has been minimally modified in a specific way. These modified samples are known as adversarial examples and are crafted with the goal of causing the target DNN to change its behavior. It has been shown that adversarial examples can be crafted even when the attacker does not have access to the training parameters and model architecture of the victim DNN. An attack made under this threat model is known as a black-box attack and is made possible by the transferability of adversarial examples from one model to another. In this dissertation, we first present an overview of DNNs and capsule networks, the currently known adversarial example crafting methods, defenses against adversarial examples, and possible explanations for the existence of adversarial examples. Next, we explore a recently developed technique, natural-adversarial mutual information-based defense (NAMID), which uses mutual information (MI) as an additional feature for the adversarial training of classification models. We describe our extensive evaluation of NAMID, introduce our novel method for crafting adversarial examples, termed MI-Craft, and apply NAMID to the domain of malware classification.
We compare MI-Craft to standard projected gradient descent (PGD) for the creation of adversarial examples, and demonstrate the effectiveness of MI-Craft and NAMID on the CIFAR-10 and MalImg datasets.
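For readers unfamiliar with the PGD baseline mentioned above, the following is a minimal, hedged sketch of an L-infinity PGD attack. It is not the dissertation's implementation: the linear "classifier" `w`, the hinge-style loss, and all parameter values are illustrative stand-ins chosen so the projection step is visible without a deep-learning framework.

```python
import numpy as np

# Hypothetical toy setup: a linear binary "classifier" score w . x with
# loss = -y * (w . x), standing in for a DNN so the PGD update is explicit.
def pgd_attack(x, w, y, eps=0.1, alpha=0.02, steps=10):
    """L-infinity PGD: ascend the loss gradient, then project back into
    the eps-ball around the clean input and into the valid [0, 1] range."""
    x_adv = x.copy()
    for _ in range(steps):
        grad = -y * w                              # d(loss)/dx for this toy loss
        x_adv = x_adv + alpha * np.sign(grad)      # signed gradient ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)   # project into the eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)           # keep the input valid
    return x_adv

rng = np.random.default_rng(0)
x = rng.uniform(0.2, 0.8, size=8)  # clean input
w = rng.normal(size=8)             # fixed "model" weights (illustrative)
y = 1                              # true label in {-1, +1}
x_adv = pgd_attack(x, w, y)
```

The two `clip` calls are the "projected" part of PGD: the perturbation can never exceed `eps` in any coordinate, and the result remains a valid input.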


Ph.D.; A dissertation submitted to the faculty of the University of Tennessee at Chattanooga in partial fulfillment of the requirements for the degree of Doctor of Philosophy.




Deep learning (Machine learning); Neural networks (Computer science)


Adversarial Examples; Deep Learning; Malware Classification; Security of AI; Machine Learning; MI-Craft

Document Type

Doctoral dissertations




xvii, 127 leaves