Project Director
Howell, Roland
Department Examiner
Sakib, Shahnewaz Karim
Publisher
University of Tennessee at Chattanooga
Place of Publication
Chattanooga (Tenn.)
Abstract
The rapid growth of Large Language Models (LLMs) and their continually increasing capabilities have affected many professions and people. Due to advancements in areas such as coding and data analysis, they are now also being utilized in Cybersecurity. Recent research has examined their use in many different areas, such as vulnerability detection in code and analysis of network traffic. With this rapid growth, most organizations around the world are eager to advance faster than their competition, with limited consideration of the potential harm and risks these tools could bring. Some research has been conducted on malicious uses, but as the benefits grow, so does the potential for misuse, causing this balance to constantly shift. This paper examines the benefits LLMs bring, as well as the growing risks they pose, to evaluate their net impact on the field of Cybersecurity. The research consists of two main sections designed to achieve this objective. The first is a comprehensive literature review covering studies that focus on beneficial use cases of LLMs as well as their misuse. This review highlights the vast possibilities of LLMs but also reveals gaps and common flaws in these models. The second part of the research is an experiment designed to evaluate, from a security perspective, how LLMs perform when asked to complete coding tasks that require attention to security. Major findings from the first section show that while LLMs can accelerate existing cybersecurity tasks and enable new applications, they also introduce new risks. These risks include not only attacks that LLMs make easier to carry out, but also LLMs themselves emerging as new targets. The second section evaluates how safely LLMs such as ChatGPT and Gemini can be used to create code that is exposed to the internet. Findings from this section show that while LLMs can create working code for inexperienced users, they introduce a multitude of security flaws, making their output dangerous to use unless it is reviewed by an expert. This research is significant because the use of LLMs is rapidly changing the digital world. As LLMs are integrated into an increasing number of devices and online services, knowing whether this can be done safely or whether the risks outweigh the potential benefits is of great value. Due to ongoing growth, future research is expected to further evaluate these issues, as more capable and reliable LLMs provide new opportunities in cybersecurity.
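To make the abstract's claim about security flaws in generated code concrete, the following is a minimal hypothetical sketch (not taken from the thesis experiment) of one flaw class frequently reported in LLM-generated web code: user input concatenated directly into a SQL query, alongside the parameterized form that avoids it. All names here are illustrative assumptions.

```python
# Hypothetical illustration only; not drawn from the thesis experiment or its trials.
import sqlite3

# In-memory database with a single users table for demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

def find_user_insecure(name: str):
    # Pattern often seen in generated code: the input is interpolated
    # directly into the SQL string, so crafted input can rewrite the query.
    query = f"SELECT email FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def find_user_parameterized(name: str):
    # Safer equivalent: the driver binds the value as a parameter,
    # so the input can never change the query structure.
    return conn.execute(
        "SELECT email FROM users WHERE name = ?", (name,)
    ).fetchall()

# Crafted input that the insecure version treats as SQL.
payload = "' OR '1'='1"
print(find_user_insecure(payload))       # returns every row in the table
print(find_user_parameterized(payload))  # returns no rows
```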
Acknowledgments
I would like to extend my thanks and deepest gratitude to my Thesis Director, Roland Howell, for his guidance throughout this project. His feedback and insightful recommendations of relevant research were of great help to me and provided perspectives that would otherwise have gone overlooked. I am also grateful to Professor Shahnewaz Karim Sakib for his help in refining my experimental design and for providing technical guidance. Finally, I want to thank Professor Will Kuby and the Honors College for their invaluable help with the development of initial ideas and the planning of this thesis.
Degree
B. S.; An honors thesis submitted to the faculty of the University of Tennessee at Chattanooga in partial fulfillment of the requirements of the degree of Bachelor of Science.
Date
5-2026
Subject
Generative artificial intelligence; Data protection; Cyber intelligence (Computer security)
Discipline
Cybersecurity
Document Type
Theses
Extent
iv, 48 leaves
DCMI Type
Text
Language
English
Rights
http://rightsstatements.org/vocab/InC/1.0/
License
http://creativecommons.org/licenses/by/4.0/
Recommended Citation
Dobler, Niklas P., "Friend or foe? The benefits and risks of LLMs in Cybersecurity" (2026). Honors Theses.
https://scholar.utc.edu/honors-theses/667
This file contains the full LLM outputs for each trial of the experiment performed in this paper.
Department
Dept. of Computer Science and Engineering