College of Arts and Sciences
University of Tennessee at Chattanooga
In his recent confirmatory factor analysis of the Instructional Development and Effectiveness Assessment (IDEA) rating instrument, Marsh (1994) identified six factors matching those from his Students' Evaluation of Educational Quality (SEEQ) rating instrument. However, four of these factors, Enthusiasm, Interaction, Learning, and Organization, were found to be highly intercorrelated. Consequently, other researchers have questioned whether these four factors are truly independent constructs, as Marsh asserts, and many argue that greater reliance should be placed on global rating items rather than on items designed to measure specific dimensions of instructional effectiveness. Marsh counters that responses to global items are nothing more than a weighted average of specific dimensions. In a parallel line of research, Cadwell and Jenkins (1985) hypothesized that the semantic similarity of individual items underlies the robust factor structure found in Marsh's SEEQ and other rating instruments. Their findings suggested that the synonymous wording of items within scales artificially inflates inter-item correlations, resulting in an illusory robust factor structure. This study hypothesized that using global open-ended questions in conjunction with the Enthusiasm, Interaction, Learning, and Organization scales from the IDEA would help disentangle the issues of semantic similarity and independent constructs. Following a content analysis that categorized responses to the open-ended items into themes matching the semantic meaning of the four IDEA scales, a correlational analysis revealed that responses to both the closed-ended IDEA scales and the open-ended items possessed fairly good convergent validity, effectively disputing the Semantic Item Similarity hypothesis. Three structural equation models were then estimated.
The first model demonstrated that a Rater Bias construct, representing global response tendencies on the part of student raters, accounted for a significant portion of the variance in each of the four scales and offered a possible explanation for the high factor intercorrelations found in Marsh's (1994) study. The second model indicated that the Rater Bias construct also significantly influenced responses to the open-ended items. In the final model, a global item was introduced; it was found to have significant loadings on the Rater Bias, Learning, and Organization latent variables, thereby providing some support for Marsh's assertion that responses to global items are a composite of specific dimensions of teaching effectiveness.
M. S.; A thesis submitted to the faculty of the University of Tennessee at Chattanooga in partial fulfillment of the requirements for the degree of Master of Science.
College students -- Attitudes; Teacher-student relationships
viii, 44 leaves
LB2369.2 .C377 1996
Cassill, Bill C., "A content analysis of student's perceptions of instructors" (1996). Masters Theses and Doctoral Dissertations.