Leveraging Course Evaluations To Empower Students And Improve Teaching
DOI: https://doi.org/10.13021/itlcp.2019.2584

Abstract
Location: JC Room D
Course evaluations have, in recent decades, been used increasingly for high-stakes employment decisions such as promotion, salary, and re-hire. Research, however, clearly indicates that these forms, which are often biased, are best treated as a snapshot of student impressions of teaching and are therefore better suited to formative than to summative assessment. Because of this misuse and misunderstanding of the form, faculty are missing the opportunity to use student-centered evaluative methods to gain insight into student perceptions of learning.

Faculty generally dread looking at their course evaluation results. Low ratings may mean no salary increase, no promotion, or even termination. Even when ratings are high, instructors are well aware that course evaluation results are frequently biased by factors such as race, gender, age, grade expectation, course subject, number of respondents, level of instruction, type of course (required vs. elective), and enjoyment of the course (Boring, Ottoboni & Stark 2016; Kornell & Hausman 2016; Ray, Babb & Wooten 2018).

This roundtable discusses a series of possible measures to address this problem. It presents the results of the Mason Effective Teaching Committee's work revising the current course evaluation form. The presentation focuses on research suggesting ways to use the course evaluation form, along with other measures, to obtain valuable information about student learning. These changes go beyond creating and validating a set of new items for the institutional form; they also require a cultural shift in understanding how student perceptions of their own learning can be used as an important formative tool rather than merely a summative one. In particular, the presentation describes the process the committee followed to create new items and re-word old ones in order to shift the focus from the instructor to the student.
It also discusses how the committee consulted with students to ensure that the items were comprehensible, as well as with stakeholders across Mason and with statisticians to establish test reliability and validity. Last, and most importantly, this roundtable discusses a series of recommendations at the instructor, program, and university levels needed for this cultural shift to happen.
Boring, A., Ottoboni, K., & Stark, P. B. (2016). Student evaluations of teaching (mostly) do not measure teaching effectiveness. ScienceOpen Research. DOI: 10.14293/S2199-1006.1.SOR-EDU.AETBZC.v1.