Department

University of Tennessee at Chattanooga. Dept. of Psychology

Publisher

University of Tennessee at Chattanooga

Place of Publication

Chattanooga (Tenn.)

Abstract

The current project responds to Kell et al.'s (2017) call for research on non-traditional methods of developing behaviorally anchored rating scales (BARS). BARS are typically used in structured interviews to ensure high predictive validity and reliability while minimizing bias. The traditional method of constructing BARS requires time and resources, which may deter organizations from adopting such scales. Our proposed methodology is intended to address these barriers. We have designed a survey to collect critical incidents from subject matter experts (SMEs) online using Prolific, a crowdsourcing platform. For the purposes of this study, participants will be screened on age, position, and industry to ensure they are over 18 and have spent at least one year in a middle-management role. Additionally, participants in the healthcare and education industries will be excluded, per expert recommendation, due to the unique nature of these fields. Through automatic randomization, each participant will be presented with 15 questions drawn from a bank of 25 behaviorally based interview questions. Upon completion, participants will be asked to read a debriefing form within the survey and will then be prompted to submit it. Participants will be compensated for their time. SMEs will then be tasked with extracting and filtering behavioral statements from the critical incidents while blind to the competency for which each incident was generated. Next, SMEs will be split into two groups. The first group will individually sort each statement into the competency they interpret as most representative; agreement will be calculated, and statements not reaching a predetermined threshold will be removed. The second group will rate each behavioral statement for its relevance to all five competencies, and each statement will be assigned to its highest-rated competency. SMEs will then trade behavioral statements and rate each one on effectiveness. The average effectiveness rating for each critical incident will be calculated and used to determine where to place the behavioral anchors on the rating scale. Given the relevance of the current project and the rich qualitative data that will be collected, disseminating the data and project findings offers an exciting opportunity. The key audience for this project is other industrial-organizational researchers and professionals. We aim to present these findings in a way that is attractive to organizations and encourages adoption of this resource-friendly, streamlined process.
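As a purely illustrative aside, not part of the project materials, the agreement-screening and anchor-placement steps described above can be sketched in a few lines of Python. The competency labels, example statements, ratings, and 75% retention threshold below are assumptions for demonstration only; the abstract does not specify the actual threshold, rating scale, or data structures.

    from collections import Counter
    from statistics import mean

    # Hypothetical SME sorting data: statement -> competency chosen by each SME.
    # Labels and statements are illustrative assumptions, not project data.
    sorting = {
        "Delegated tasks based on team members' strengths": ["Leadership", "Leadership", "Leadership", "Communication"],
        "Summarized project status in a weekly email": ["Communication", "Communication", "Planning", "Communication"],
    }

    AGREEMENT_THRESHOLD = 0.75  # assumed retention cutoff; not stated in the abstract

    def retained_statements(sorting, threshold):
        """Keep statements whose modal competency reaches the agreement threshold."""
        kept = {}
        for statement, choices in sorting.items():
            competency, count = Counter(choices).most_common(1)[0]
            if count / len(choices) >= threshold:
                kept[statement] = competency
        return kept

    # Hypothetical effectiveness ratings (1-7) from the second SME group.
    effectiveness = {
        "Delegated tasks based on team members' strengths": [6, 7, 6, 5],
        "Summarized project status in a weekly email": [4, 5, 4, 4],
    }

    def anchor_positions(effectiveness):
        """Mean effectiveness rating determines where each anchor sits on the scale."""
        return {stmt: round(mean(ratings), 2) for stmt, ratings in effectiveness.items()}

    if __name__ == "__main__":
        kept = retained_statements(sorting, AGREEMENT_THRESHOLD)
        anchors = anchor_positions(effectiveness)
        for stmt, competency in kept.items():
            print(f"{competency}: anchor at {anchors[stmt]} -- {stmt}")

In this sketch, a statement is retained only if its modal competency reaches the assumed agreement threshold, and each retained statement's mean effectiveness rating gives its candidate position on the rating scale.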

Date

October 2022

Subject

Industrial and organizational psychology

Document Type

posters

Language

English

Rights

http://rightsstatements.org/vocab/InC/1.0/

License

http://creativecommons.org/licenses/by/4.0/

Title

Creating competency-based behaviorally anchored rating scales using an online sample
