Teaching Toolbox: Student Evaluations of Teaching

by Emily O. Gravett


If you can believe it, the end of the semester is almost upon us. Soon enough, those of us teaching will receive access to our students’ evaluations of teaching (SETs; sometimes called “student ratings of instruction” or “SRIs” in the research literature).


SETs regularly crop up in the media and on higher-ed listservs, and the coverage is rarely positive (e.g., “Zero Correlation between Evaluations and Learning” or “Student Course Evaluations Get an ‘F’”). While many academics find the controversies, which center on questions of validity and usefulness, tired and old, SETs continue to draw interest year after year because they are so often used as the primary measure of teaching effectiveness in the promotion and tenure process.


About five years back, JMU convened a task force to study SETs and to make recommendations about how best to create, interpret, and use them. As your course evaluations roll in and as you begin to review your students’ feedback, I encourage you to revisit the task force’s final report and to consider some of the following conclusions and caveats, also found throughout the literature on SETs (e.g., the IDEA Paper “Student Ratings of Teaching: A Summary of Research and Literature”):


·         High-quality teaching brings together a number of different dimensions, such as knowledge of content, knowledge of pedagogy, course preparation, and respect for students, not all of which are, or even can be, captured by any given SET.

·         Because teaching is so complex and multifaceted, experts agree that SETs should not be the sole source of information used to judge teaching effectiveness. SETs should be complemented by other forms of evaluation (especially formative ones), such as peer observation, self-reflection, or teaching portfolios. These other sources, however, come with their own challenges, as they can be time-intensive and potentially unreliable too.

·         While there is a pervasive sense that certain factors bias SETs (e.g., the gender or popularity of the instructor), the studies behind these conclusions are sometimes themselves flawed (e.g., the sample was small or unrepresentative, the difference was statistically insignificant, or the findings haven’t been replicated), so it has been difficult to draw definitive conclusions from them.

·         It does appear that course level, class size, discipline, and workload affect SET data. It may surprise us to learn, however, that students tend to give higher ratings to more difficult courses with heavier workloads.

·         Students are ill-equipped to offer certain kinds of feedback (e.g., is the instructor an expert in the subject matter?) but well qualified to offer other kinds (e.g., did the instructor provide timely feedback on graded work? Did the course change their way of thinking?). As the JMU task force recommends, SETs should focus on soliciting students’ perspectives on the latter.

·         Finally, as the task force recognizes, “SET responses from small classes (number of responses below 15-20) or classes with low response rate (below 66%-75%) are unreliable.”


In their report, the task force also offered several important reminders that may be helpful to keep in mind at this time of year, especially if you feel any trepidation about looking over your SETs:

·         “Students are critical stakeholders in the process of teaching and learning and have valuable perspectives that can be used in the development of an instructor’s teaching practice.”

·         “All instructors have room for improvement—as instructors we are never ‘done’ when it comes to developing our craft.”

·         “SET responses for individual faculty are best viewed longitudinally, with previous semesters or comparable courses providing context for the current semester or course.”

·         “Look for trends in responses rather than anomalies. The comments that you should pay attention to are those that occur frequently in a single class or that occur across multiple classes.”


It may also be helpful to meet with someone else to review and discuss your course evaluations. While this will be the final Teaching Toolbox email of the semester, you can request a CFI teaching consultation at any time. And be on the lookout for an evaluation of the Teaching Toolbox itself, as we decide whether to continue this pilot initiative.


Best of luck with the end of the semester!


About the author: Emily O. Gravett is Assistant Director of Teaching Programs at the Center for Faculty Innovation and a faculty member in the Philosophy & Religion department. She can be reached at [log in to unmask].
