Transforming Student Evaluations: Modern Approaches to Assessing Teaching Effectiveness at UGA

Student evaluations of teaching have long been a cornerstone of assessing instructional quality in higher education. However, traditional methods that rely heavily on these evaluations face increasing scrutiny due to concerns about bias, validity, and their limited impact on teaching improvement. This article explores the limitations of conventional student evaluations and examines modern approaches to teaching assessment, particularly in the context of the University of Georgia (UGA). It also outlines the six-stage pattern of teaching evaluation reform and provides a toolkit for institutions looking to modernize their evaluation methods.

The Limitations of Traditional Student Evaluations

Traditional student evaluations, often the primary or exclusive measure of teaching effectiveness, have several shortcomings:

  • Bias: Student evaluations can be influenced by factors unrelated to teaching quality, such as instructor likeability, gender, race, and even attractiveness.
  • Lack of Validity: Research indicates that student evaluations do not reliably reflect teaching effectiveness; an AAUP survey revealed that fewer than half of faculty members believe in their validity.
  • Limited Usefulness for Improvement: Student comments often lack specificity and actionable feedback, making it difficult for instructors to improve their teaching.
  • Inappropriate Comparisons: Using means to compare faculty in evaluations can be misleading and unfair due to variations in course content, student demographics, and grading practices.
  • Emphasis on Instructor Likeability: Traditional surveys often prioritize instructor likeability over pedagogical effectiveness.
  • Subjectivity: Traditional approaches evaluate teaching "quality" in the abstract, deferring to the subjective definitions of student survey respondents and evaluation reviewers.
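
The "inappropriate comparisons" concern above is easy to illustrate with a short sketch (using hypothetical ratings, not real UGA data): two instructors can share the exact same mean on a 1-5 scale while their rating distributions tell very different stories.

```python
# Hypothetical 1-5 ratings for two instructors (illustrative data only)
from statistics import mean
from collections import Counter

instructor_a = [3, 3, 3, 3, 3, 3]  # every student moderately satisfied
instructor_b = [1, 1, 1, 5, 5, 5]  # a polarized class

# Identical means...
assert mean(instructor_a) == mean(instructor_b) == 3

# ...but very different distributions
print(Counter(instructor_a))  # Counter({3: 6})
print(Counter(instructor_b))  # Counter({1: 3, 5: 3})
```

This is one reason modern evaluation policies favor reporting rating distributions, rather than ranking faculty by single means.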

These limitations have led to widespread dissatisfaction with traditional evaluation methods, prompting institutions like UGA to seek more comprehensive and reliable approaches.

Modernizing Teaching Evaluation: A Multifaceted Approach

Modern approaches to teaching evaluation address the shortcomings of traditional methods by incorporating multiple dimensions of effectiveness and multiple sources of evidence.

Multiple Dimensions of Effectiveness

Instead of relying on a single, overall rating of "quality," modern approaches use a framework to define specific dimensions or elements of effective teaching. These frameworks, often called "teaching quality frameworks" or "teaching effectiveness frameworks," typically include four to seven distinct dimensions, such as:

  • Goals, content, and alignment
  • Teaching practices
  • Class climate
  • Achievement of learning outcomes
  • Reflection and iterative growth
  • Mentoring and advising
  • Involvement in teaching service, scholarship, or community

By specifying these dimensions, institutions can clarify what they mean by "good" teaching and provide a more nuanced assessment of instructional quality.

Multiple Sources of Evidence

Modern approaches recognize that teaching is too complex to be evaluated by any single metric. Therefore, they encourage data triangulation by compiling a portfolio of evidence from multiple "lenses" or "voices":

  • Instructor or Self Lens: Evidence from the instructor, such as a CV, teaching statement, or self-reflection.
  • Student Lens: Evidence from students, such as course evaluations, samples of student work, or student reflections.
  • Peer or Third-Party Lens: Evidence from peers or external observers, such as classroom observations, peer reviews of course materials, or ratings from field experience supervisors.

This multifaceted approach provides a more comprehensive and reliable picture of teaching effectiveness.

Formative Feedback and Development

Modern evaluation systems emphasize formative feedback to guide instructional improvement over time. While these evaluations still inform summative decisions such as tenure and promotion, they also provide actionable feedback to help instructors develop their teaching skills.

UGA's Approach to Modernizing Student Evaluations

Recognizing the need for change, UGA has taken steps to modernize its teaching evaluation methods.

The DeLTA Project

In 2022, faculty working on the DeLTA project at UGA shepherded a new teaching evaluation policy that established expectations for departments to evaluate teaching using three voices: instructor/self, peer, and student. This policy aimed to better support and recognize teaching effectiveness, provide meaningful and trustworthy information, and ensure fairness while minimizing bias.

Designing a New Student Evaluation Survey

The DeLTA team adopted a grassroots approach to design a new student evaluation of teaching survey. They conducted comprehensive research to understand how UGA values teaching and reviewed surveys from LCC4 institutions. Drawing inspiration from the University of Oregon's approach, which centered their survey around institutionally-valued teaching elements, the DeLTA team developed a UGA-specific version that prioritized learning.

Stakeholder Input and Feedback

The team solicited input and feedback from stakeholders at UGA through multiple rounds of focus groups with faculty of varying ranks, positions, and disciplines, and surveyed students of varying levels and years. The focus groups and surveys asked participants to brainstorm possible questions and then provide input on how they value and use instructional feedback to improve teaching, the drafted teaching elements, and the example survey structure from the University of Oregon.

Piloting the Survey

The last part of the design process involved piloting the survey in courses across the institution to ensure that students were able to interpret the survey items as expected. The team recruited diverse faculty volunteers from across campus and provided them with customized feedback reports as compensation.

Institutionalizing the Survey

The final phase focused on institutionalizing the survey through policy, which leveraged the team's existing expertise in faculty governance. The original 2022 teaching evaluation policy was revised, then presented to two education-related committees during the 2024-2025 academic year. To prepare for these meetings, the DeLTA team prepared comprehensive informational materials that documented how the survey was developed and shared evidence that the survey works as intended.

A Six-Stage Pattern for Teaching Evaluation Reform

Based on interviews with leaders at institutions that have successfully modernized their teaching evaluation systems, a six-stage pattern for reform emerges:

  1. Recognizing the Problem: Someone within the institution recognizes a problem with traditional teaching evaluations and commits to doing something about it.
  2. Determining an Objective and Strategy: An objective and strategy for change are determined, and an official channel for making change is established at the department, school, or university level.
  3. Adopting a Research-Informed Framework: A research-informed framework that defines effective teaching across multiple dimensions is adopted.
  4. Formalizing Structures, Guidelines, and Policies: Structures, guidelines, and policies for evaluating multiple sources of evidence are formalized.
  5. Evaluating and Revising the Process: The new evaluation process is evaluated using real data and revised over time.
  6. Adopting Modern Technologies and Reaching a Sustainable Steady State: The new evaluation process adopts modern technologies and reaches a sustainable steady state.

Broad Approaches to Assessment

There are several broad approaches to assessment that might direct your choice of assessment measures:

  • Direct and Indirect Assessment: Direct measures require students to demonstrate their learning, yielding tangible, visible evidence, while indirect measures capture students’ attitudes, perceptions, or opinions about their learning.
  • Formative and Summative Assessment: Formative measures focus on learning processes and can be used for immediate curricular changes, while summative measures focus on learning outcomes after a course or program is complete.
  • Embedded and Add-on Assessment: Embedded measures are already in use as course or program work, while add-on measures go beyond course requirements.
  • Qualitative and Quantitative Assessment: Quantitative measures involve counts or predetermined response options, while qualitative measures are flexible and analyzed for recurring patterns.

Tips for a Balanced Approach to Assessment

  • Study the course or curriculum for existing, embedded assessments. Which ones might be used for program assessment data?
  • Use direct measures! No assessment of knowledge or skills should consist of indirect evidence alone.
  • Use both formative and summative assessment. Look for multiple points throughout a program to collect data on student learning that can be used for immediate improvements.

Why Not Use Grades?

Course grades are usually insufficient measures of program student learning outcomes for several reasons. While final grades offer one source of information about student achievement, cumulative grades can include factors such as class participation and general education outcomes (e.g., writing) that are not directly related to a program’s learning outcomes. Additionally, course grades are approached differently by individual faculty members and result from widely varying grading policies and practices. Finally, program learning outcomes often span multiple courses within a major, and individual course syllabi do not always align precisely with the program’s learning goals.

Determining a Timeline for Assessment

It is not necessary to assess every SLO every year. While some SLOs may be easily assessed each year (e.g., indirect evidence collected through a program satisfaction survey), others may be more suited to intermittent assessment (e.g., a paper from a research methods course offered every other year).

Student Learning Outcomes (SLOs)

Student learning outcomes (SLOs) are brief statements of what successful students should know or be able to do by the end of a course of study. Well-formed learning outcomes are written in terms of skills or actions, as that formulation creates measurable and useful SLOs.

Why are SLOs important?

At their best, SLOs provide definition and scope for a course or program. They guide instructors in making instructional and assessment choices, direct student attention to targeted learning goals while setting clear expectations, and help colleagues understand how courses fit together within a curriculum.

Evaluating your SLOs

Strong SLOs are clearly defined, measurable, and address the context or criteria under which students will demonstrate learning. Use this checklist to identify ways in which your SLOs for a course or program might need revision:

  • Number: There are 3-5 SLOs defined for the course or program (without using excessive conjunctions!).
  • Action-Oriented: Each SLO is action oriented – most likely through use of an action verb at the beginning of each SLO.
  • Measurable: Each SLO is directly measurable/assessable and avoids use of less measurable terms like “understand”, “know” and “gain an appreciation for”.
  • Time Bound: SLOs are articulated within a time-bound context (e.g., “By the end of this course…”, “Upon successful completion of this degree…”).
  • Learner-Centered: SLOs focus on what students will know or be able to do by the end of the lesson/course/degree.
  • Jargon-Free: SLOs are free of discipline-specific terms and abbreviations that students in the course or major are not likely to understand (or those terms are provided with further explanation or definition).
  • Provides Scope: Each SLO is articulated in a way that specifies the limits (or “scope”) of expected application of the skill.
  • Alignment with Level: Verbs used in SLOs trend toward lower-level cognitive skills (e.g., remember and understand categories of Bloom’s taxonomy) for courses that are introductory in nature, and trend toward higher-level cognitive skills for more advanced courses.

Thinking Programmatically

If you are developing SLOs for a larger curriculum or program, it is important to also consider the ways in which the courses embedded in this curriculum or program serve to support the bigger-picture SLOs. In reviewing your curriculum, consider the following questions:

  • Are all SLOs both introduced and reinforced through the curriculum?
  • Do students have sufficient opportunities to attain SLOs through coursework or other opportunities?
  • Can some SLOs be “skipped”?
  • Are SLOs addressed at appropriate times in the curriculum? The curriculum should not expect students to demonstrate high-level SLOs too early or low-level SLOs too late in the program.

The Benefits of a Multiple Methods Approach

The use of multiple methods in academic program-level assessment is encouraged, since a single method can restrict the interpretation of student learning. The limitations of one method may prompt the selection of others, and together, multiple methods provide a more accurate frame for assessing student learning. Moreover, a combination of quantitative and qualitative assessment methods adds reliability and yields a more comprehensive approach. Using a multiple-methods approach to academic program-level assessment has several advantages:

  • Minimizes potential limitations of data collection and analysis inherent in a single method
  • Provides alternative ways for students to demonstrate learning outcomes that may not be possible with other methods
  • Provides a more complete understanding and interpretation of student achievement
  • Values the diversity of different learning methods

Responding to Assessment Information

The primary goal of student learning outcomes assessment is to improve student learning. Once evidence is collected and the faculty have analyzed it to determine if students are attaining the defined SLOs, it is important to take the appropriate steps in response to that information.

If student learning meets expectations

  • Consider it a program strength
  • Consider raising expectations
  • Move on to assess the next SLO

If student learning does not meet expectations

Consider program changes:

  • Adjust teaching & learning methods
  • Reinforce specific course content
  • Change pre-requisites
  • Revise course sequencing
  • Enhance advising

Assessment results are meant to improve teaching and learning as well as inform planning and decision making.
