
Unwrapping Student Evaluations

KU CEL

Confession: I used to listen to Wishbone by Slaid Cleaves on repeat while reading my student evaluations, letting the chorus play over and over:

Spin the bottle cap
Throw a shot back
Everything's gonna be all right
Spin the bottle cap
Throw a shot back
Cough and cry, lay down and die

Let’s be honest—reading student evaluations can be stressful. I’ve had meaningful comments in my SRIs (Student Response to Instruction, now called Student Feedback Instrument, or SFI): heartfelt thank-yous, recognition of new skills or ideas learned, and occasionally useful advice about improving an assignment structure. I believe it matters deeply how our students feel in our classes, at our university, and about their learning. Asking them directly is a valuable way to gain that perspective.


But when it comes to SFIs and their role in our evaluation process, there are multiple layers to unpack. For “unexpected” or “under-represented” faculty, reading them often exposes racist, sexist, or biased themes that reflect societal prejudices rather than teaching performance.


“A carelessly worded comment or deliberately cruel statement from even one student can become lodged in our brains for years, undermining our teaching self-efficacy and our desire to keep building our teaching skills.” —Neuhaus, “Figuring Out Student Feedback”

That quote resonates deeply with me. Neuhaus even advocates for “harm reduction” strategies when engaging with student evaluations. Let that phrase sink in: harm reduction. Proceeding with caution and care is vital when working with these materials. I’ve had evaluations claiming I didn’t know the material (for a 100-level course in my PhD field) or that I was “mean.” And that’s the PG-rated version—many comments can be much worse.


Balancing the Complex Layers

The academic world continues to grapple with these complexities. Researchers like Angela Linse (2017) argue for the effective use of student evaluations, while others urge caution, as summarized by David Delgado Shorter in The Chronicle of Higher Education.

SFIs are here to stay, but here’s what I’ve been thinking and reading about lately.


What Students Can (and Can’t) Evaluate

Students provide their authentic perspective, comparing a course to their expectations and experiences. They can comment on:

  • Teaching style and presentation,

  • Organization and structure,

  • Workload and pacing,

  • Whether the environment felt welcoming and accessible,

  • The clarity and timeliness of feedback.


However, as novices, students lack the expertise to evaluate an instructor’s breadth of subject knowledge, long-term teaching effectiveness, or the pedagogical quality of assessments. For example, if a student expects lectures and multiple-choice tests, they may feel frustrated or disappointed by active learning methods.


Timing also matters. At KU, students often complete SFIs as early as three-quarters of the way through the semester, before final projects are submitted or final feedback is given. This snapshot captures a partially completed course experience during a challenging phase of the semester for both students and faculty.


The Role of Bias

Decades of research across multiple fields and countries show that student evaluations are influenced by bias. Factors include:

  • Instructor characteristics (e.g., race, gender, and age),

  • Course type (STEM fields tend to score lower; electives higher),

  • The weather (Beran & Violato, 2014),

  • And my personal favorite—whether cookies were offered during class (peer-reviewed study).


Why Teaching Pedagogies Matter

Surprisingly, research shows teaching effectiveness is negatively correlated with student evaluation scores. Effective teaching often requires learning-centered approaches, such as CUREs (Course-based Undergraduate Research Experiences) or problem-based learning. These methods demand more responsibility and effort from students, which doesn’t always “feel good.”


Learning involves struggle, and students generally dislike struggling. A study from the Harvard Physics program (Deslauriers et al., 2019) found that the most effective techniques for student learning were the least liked by students. The authors warned:

“Student evaluations of teaching should be used with caution as they rely on students’ perceptions of learning and could inadvertently favor inferior passive teaching methods over research-based active pedagogical approaches.”

As Angela Linse (2017) aptly put it:

“Student ratings are not measures of student learning.”

Practical Tips for Engaging with SFIs

Here are a few strategies to make the process more manageable:

  1. Make a plan. Don’t just open the packet when it randomly arrives in your email. Decide where and how you can feel as safe and as prepared as possible to engage. How can you meaningfully wrap up and ‘close’ the experience?


  2. Construct a Useful Approach. Determine how you’ll use them. In The Chronicle of Higher Education, Dr. Manya Whitaker provides helpful advice about coding the information and finding patterns. Whitaker also suggests making meaningful adjustments to address certain types of comments in future semesters or evaluations. Dr. Neuhaus recommends asking ourselves this specific question after seeing data or reading a comment: “Does this student comment and/or response give formative feedback for teaching?” If it doesn’t, move on.


  3. Use a template. End-of-experience reflective assessment exercises aren’t just for students. Duquesne University offers a ‘course wrapper’ formatted like a student exam wrap, and Linse (2017) includes a helpful table for breaking down themes in the comments. After reading the course evaluations, what changes can you make? The CEL's Basics of Universal Design for Learning course challenges us to think about changes we've already made, and to make one more— a "Plus 1" approach.


  4. Don’t be a loner. Seek out a supportive environment – online groups, a colleague or friend, or a mentor to read them for you or with you.  For example, if there are racist or sexist comments, you can ask them to blank those phrases out so you don’t have to see them.  Or ask them to summarize the key themes and compare them to your impressions. 


  5. MYOA (Make Your Own Assessment). Dr. Neuhaus strongly recommends building your own course assessment with specific questions about your instruction to supplement and augment the SFI-required version. For example, ask about the types of assignments in your class: ‘Describe how you used the in-class problem activities in your test preparation.’ or ‘Did you find the online text readings accessible? Explain.’ These course-focused questions guide students toward specific, useful feedback on their experience. We can also ask about their commitment to the course, for example, ‘How often did you attend?’ or ‘Did you regularly turn your work in on time?’ This contextualizes student comments about things like late feedback or unclear presentations in class. Delayed feedback and confusion over lecture materials might be expected if a student is frequently absent or turning in work late. Additional information like this helps supplement the limited perspective of SFI feedback.

Want a cleverer version of this? Check out Loleen Berdahl’s Substack post, complete with cat picture!


The CEL is here for you

Drop us an email if you’d like some confidential support, even if it’s just someone to bring you chocolate and sit in the room with you while you read them. We also have example templates for course-specific feedback aligned with your content and teaching methods—called a Student Assessment of Learning Gains, or SALG—and we can help you survey your own course.

  

Finally, a good song on repeat never hurts. Remember: “Everything’s gonna be all right…”


References

Neuhaus, J. (Ed.). (2022). Picture a Professor: Interrupting Biases about Faculty. West Virginia University Press. Available online: https://wvupressonline.com/node/909. Bonus chapter: "Figuring Out Student Feedback" (PDF). Available at: https://static1.squarespace.com/static/607b07fdbab4b20ba4638f17/t/63b4451ab804904714887741/1672758554343/Neuhaus%2C+Figuring+Out+Student+Feedback.pdf.


Beran, T., & Violato, C. (2014). The relationship between student ratings of instruction and student learning: A meta-analysis. Studies in Educational Evaluation, 43, 22-30. Available at: https://www.sciencedirect.com/science/article/pii/S0272775714000417.


Esarey, J., & Valdes, N. (2021). Unbiased, reliable, and valid student evaluations can still be unfair. Medical Education, 55 (6), 671-679. Available at: https://asmepublications.onlinelibrary.wiley.com/doi/10.1111/medu.13627.


Deslauriers, L., et al. (2019). Measuring actual learning versus feeling of learning in response to being actively engaged in the classroom. Proceedings of the National Academy of Sciences, 116 (39), 19251-19257. Available at: https://www.pnas.org/doi/10.1073/pnas.1821936116.


Linse, A. R. (2017). Interpreting and using student ratings data: Guidance for faculty serving as administrators and on evaluation committees. Studies in Educational Evaluation, 54, 94-106. Available at: https://www.sciencedirect.com/science/article/pii/S0191491X16300232#fig0005.


Beran, T., & Violato, C. (2014). Students' ratings of teaching effectiveness: How to interpret and use them. Studies in Educational Evaluation, 43, 1-7. Available at: https://www.sciencedirect.com/science/article/pii/S0272775714000417.


Whitaker, M. (2022). How to make the best of bad course evaluations. The Chronicle of Higher Education. Available at: https://www.chronicle.com/article/how-to-make-the-best-of-bad-course-evaluations/.


Berdahl, L. (2022). How to read student evaluations without losing your mind. Substack. Available at: https://loleen.substack.com/p/how-to-read-student-evaluations-without?r=2tz6yf&utm_campaign=post&utm_medium=web&triedRedirect=true.


Duquesne University course wrapper resource (Google Doc). Available at: https://docs.google.com/document/d/1-fW3rsZL91oYCRcQMa_7Xw5XLgjdU1Ww/edit.


 

This piece represents the personal opinions, experiences, and musical tastes of Erin Kraal, and does not represent any official administrative, faculty union, or tenure or promotion position from Kutztown University, the Center for Engaged Learning, or the Department of Physical Sciences within the College of Liberal Arts and Sciences.


Dr. Erin Kraal is the current Faculty Director for the Center for Engaged Learning and a professor in the Department of Physical Sciences where she teaches planetary science, astronomy, geology, and science writing. She is particularly interested in exploring how faculty teach and students learn the process of science. In her non-work time, she likes to hike, travel, and cook and has recently taken up a new hobby of learning to watercolor (yeah, YouTube videos!)

© 2023 Kutztown University Center for Engaged Learning
