Calibrating human trust in the age of generative AI: an examination of ethical and social challenges
Project Director
Campbell, Curtis
Department Examiner
Liang, Yu
Publisher
University of Tennessee at Chattanooga
Place of Publication
Chattanooga (Tenn.)
Abstract
Generative AI (GenAI), a set of AI technologies capable of producing original, human-like outputs, is beginning to transform the way information is distributed, composed, published, obtained, analyzed, and consumed. GenAI has seen massive adoption by internet users, businesses, and organizations in recent years despite persistent ethical concerns and social implications. In particular, existing research has identified multiple critical trust-related issues associated with AI in general, including widespread mistrust and distrust, overreliance on AI, and a lack of AI trustworthiness. A broader understanding of these issues as they relate specifically to generative AI is still needed. Valuable models of trust in AI, trustworthiness of AI, and user acceptance exist, but they are either limited in scope and not designed specifically for generative AI, or they lack sufficient specificity and detail. Through a literature review, I examine the areas in which trust is currently affected by choices made in the usage, implementation, and development of GenAI, with the goal of identifying the specific ethical, social, and legal factors that shape trust in, and trustworthiness of, GenAI. In addition to the literature review, I conducted a survey of university students to gain insight into how different factors affect participants' trust in, and interactions with, GenAI, media content, and other people. I use the findings from both the literature review and the survey to construct a more comprehensive visual model of how different factors affect human trust and AI trustworthiness in the context of generative AI.
The findings of this study indicate that key factors include elements of the context in which GenAI is used, such as the type of task and the degree of human involvement; qualities of the GenAI tool itself, such as accuracy, transparency, safety, and fairness; and prior user experience with GenAI and AI-generated content, which varies with field of study, age, education level, and digital literacy, among other widely discussed factors. I also highlight current needs for ensuring trust in and trustworthiness of generative AI tools, which call for changes in the development, regulation, integration, and usage of generative AI.
Acknowledgments
I would like to thank Dr. Curtis Campbell for her continuous guidance and encouragement throughout the entire process. I would also like to thank Dr. Yu Liang for his participation and feedback as a member of my examination committee. I am likewise grateful to Dr. Will Kuby for supervising and coordinating the thesis process, and to the entire Honors College for their support. Finally, I thank God for blessing me with friends, family, mentors, and instructors whose support and guidance have made it possible for me to complete this research.
IRB Number
#25-181
Degree
B. S.; An honors thesis submitted to the faculty of the University of Tennessee at Chattanooga in partial fulfillment of the requirements of the degree of Bachelor of Science.
Date
5-2026
Subject
Artificial intelligence--Moral and ethical aspects; Artificial intelligence--Security measures; Generative artificial intelligence; Human-computer interaction; User trust
Discipline
Artificial Intelligence and Robotics
Document Type
Theses
Extent
iv, 75 leaves
DCMI Type
Text
Language
English
Rights
http://rightsstatements.org/vocab/InC/1.0/
License
http://creativecommons.org/licenses/by-nc-nd/4.0/
Recommended Citation
Mondido, Abigail M., "Calibrating human trust in the age of generative AI: an examination of ethical and social challenges" (2026). Honors Theses.
https://scholar.utc.edu/honors-theses/661
Department
Dept. of Computer Science and Engineering