6 Comments
Dec 5, 2022 · Liked by David Thomson

One of the frustrations I have with student feedback is that in my working environments, the student surveys are often written without direct input from the teachers. In other words, I don't really get to ask about the things I think are important. Instead, I tend to receive fairly generic feedback that, taken as a whole, doesn't amount to much.

As a result, I have been making up small surveys that I deploy throughout the semester. I try to make them innocuous so that the students don't feel any pressure to give me the answer I want to hear. I ask questions like, "If you had to present this chapter of the book, what would you focus on? Why?" I've had mixed results, with a lot of students treating it like homework and giving the bare minimum answer. But when I get a substantive answer, it tends to be something I can, or maybe should, address in my teaching. That does depend, of course, on finding the right question in the first place.


Thanks for this post, David. We're circulating student evaluations now, and to me they feel more perfunctory and ritualistic than anything. I have chosen to devote the last class session to a much more thorough and tailored review of the class and my choices during its run. Since my course is a seminar, I also have the chance to sound out successes, failures, and blunders throughout the semester. But that last session dissects the readings, the projects, the assignments, and the interactions much more thoroughly and discursively than any checkboxed form can. (Class size and the students' familiarity with each other help to provide a safe space, too.) I've also used the last class session to float ideas for changes in the seminar by the students, just to see what they think. They're insightful, especially after having gone through a semester's study, so the students do have an influence on ways to improve the course.

I can't expect much from the normal evaluations. The tactic I've used has been informative and useful.

Dec 3, 2022 · Liked by David Thomson

Student evaluations of teaching (SET) suffer from trying to do too many things at the same time. We use SET to 1) get student feedback on their experience, 2) give instructors feedback on what is and isn't working in the classroom, and 3) rate the teaching effectiveness of instructors. And while all three purposes are worthy (and even necessary), trying to accomplish all three at once in the same instrument is the source of much of the criticism of SET.

Despite the substantial research indicting SET, we continue to use them because they are a cheap and easy way to accomplish all three purposes at once. We could easily design three different methods to gather the information and feedback needed for the three different purposes, but that would cost more time and money, so we don't.

If we were really serious about getting student feedback on their educational experience, we would inquire throughout the semester and not just at the end. We would also ask questions unrelated to individual classes and instead ask about their overall experience.

If we were really serious about giving instructors feedback on their teaching, we would have other instructors visit the classroom regularly, so that peers could work together to analyze the classroom experience and offer ongoing feedback.

And if we were serious about evaluating the quality of teaching, we would do more than ask students to fill out a survey at the end of each semester.

Many of the problems with SET can be addressed, but we need to recognize that it will take time and money to fix the problems.
