Committee Chair

Biderman, Michael D.

Department

Dept. of Psychology

College

College of Arts and Sciences

Publisher

University of Tennessee at Chattanooga

Place of Publication

Chattanooga (Tenn.)

Abstract

In his confirmatory factor analysis of the Instructional Development and Effectiveness Assessment rating instrument (IDEA), Marsh (1994) identified six factors matching those from his Students' Evaluation of Educational Quality (SEEQ) rating instrument. However, four of these factors, Enthusiasm, Interaction, Learning, and Organization, were found to be highly intercorrelated. Because of these high intercorrelations, other researchers have questioned whether the four factors are truly independent constructs, as Marsh asserts, and many have argued that greater reliance should be placed on global rating items rather than on items designed to measure specific dimensions of instructional effectiveness. Marsh counters that responses to global items are nothing more than a weighted average of the specific dimensions. In a parallel line of research, Cadwell and Jenkins (1985) hypothesized that the semantic similarity of individual items was the underlying source of the robust factor structure found in Marsh's SEEQ and other rating instruments; their findings suggested that the synonymous wording of items within scales artificially inflates inter-item correlations, producing an illusory factor structure. This study hypothesized that using global open-ended questions in conjunction with the Enthusiasm, Interaction, Learning, and Organization scales of the IDEA would help disentangle the issues of semantic similarity and construct independence. Following a content analysis that categorized responses to the open-ended items into themes matching the semantic meaning of the four IDEA scales, a correlational analysis revealed that responses to both the closed-ended IDEA scales and the open-ended items possessed fairly good convergent validity, effectively disputing the Semantic Item Similarity hypothesis. Three structural equation models were then estimated. The first model demonstrated that a Rater Bias construct, representing global response tendencies on the part of student raters, accounted for a significant portion of the variance in each of the four scales and offered a possible explanation for the high factor intercorrelations found in Marsh's (1994) study. The second model indicated that the Rater Bias construct also significantly influenced responses to the open-ended items. In the final model, a global item was introduced; it was found to have significant loadings on the Rater Bias, Learning, and Organization latent variables, thereby providing some support for Marsh's assertion that responses to global items are a composite of specific dimensions of teaching effectiveness.
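
To make the Rater Bias models concrete, the following is a minimal sketch of a bifactor-style measurement equation of the kind the abstract describes; the symbols and the assumption that the bias factor is orthogonal to the trait factors are illustrative, not the thesis's exact specification:

x_{ij} = \tau_j + \lambda_j \, \xi_{t(j),i} + \gamma_j \, B_i + \varepsilon_{ij}, \qquad \operatorname{Cov}(\xi_k, B) = 0

Here x_{ij} is rater i's response to item j, \xi_{t(j),i} is that rater's standing on the trait factor (Enthusiasm, Interaction, Learning, or Organization) to which item j belongs, B_i is a general Rater Bias factor influencing all items, \lambda_j and \gamma_j are factor loadings, \tau_j is an item intercept, and \varepsilon_{ij} is a residual. Under this sketch, the third model would correspond to giving the global item free loadings on Rater Bias and on particular trait factors (Learning and Organization in the results reported above).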

Degree

M. S.; A thesis submitted to the faculty of the University of Tennessee at Chattanooga in partial fulfillment of the requirements of the degree of Master of Science.

Date

12-1996

Subject

College students -- Attitudes; Teacher-student relationships

Keyword

Instructors; Student perceptions

Discipline

Psychology

Document Type

Masters theses

DCMI Type

Text

Extent

viii, 44 leaves

Language

English

Call Number

LB2369.2 .C377 1996

Rights

https://rightsstatements.org/page/InC/1.0/?language=en

License

http://creativecommons.org/licenses/by-nc-nd/3.0/
