Department

University of Tennessee at Chattanooga. Dept. of Psychology

Publisher

University of Tennessee at Chattanooga

Place of Publication

Chattanooga (Tenn.)

Abstract

Background

Underemployment is a challenge that many psychology undergraduates face (Burning Glass Institute, 2024). This can be discouraging, and students are often unaware of postgraduate alternatives to graduate school (Halonen, 2013). To address this, the research team has created Eugene, an online career-exploration tool. Students select from a list of psychology courses, and Eugene produces the knowledge, skills, and abilities (KSAs) they have gained in those courses, alongside a list of jobs they may be qualified for. The team's current methodology involves having multiple faculty members rate the KSAs gained in their courses and then come to a consensus (Bott et al., 2023). This process is time-intensive, and the team is seeking to improve it by using artificial intelligence (AI).

Methods

This study will focus on streamlining the current KSA reconciliation process by training an AI agent to rate syllabi from psychology courses and provide a rationale for each KSA rating. The team will continue to ask professors to rate each KSA; once each professor's ratings have been collected, the AI platform will rate the same KSAs using the professors' syllabi. The team will train the AI by providing it with all the KSAs and anchors currently in use. Once ratings from both the professor and the AI have been collected, a report detailing both sets of ratings will be written and sent back to the professor, who will decide whether to update their ratings. Incorporating AI into the consensus process is supported by recent work showing that human–AI hybrid frameworks can enhance the efficiency and transparency of expert consensus while maintaining human oversight (Speed & Metwally, 2025).

Expected Results & Implications

This improved process is expected to reduce the burden on faculty, as it will allow them to self-reconcile on their own time. This may improve their perceptions of the KSA rating process and, ultimately, of Eugene. More positive faculty experiences could translate into greater student awareness of Eugene, particularly if faculty recommend the tool during advising appointments. Collectively, these improvements position Eugene as a more efficient, credible, and widely supported resource for connecting psychology coursework to meaningful career pathways.

Subject

Industrial and organizational psychology

Document Type

posters

Language

English

Rights

http://rightsstatements.org/vocab/InC/1.0/

License

http://creativecommons.org/licenses/by/4.0/

Automating Insight: AI-Driven KSA Extraction for Career Clarity
