HOW WE DO IT


A number of good practices and shared ideas emerged from the project. Here are three of the top strategies for other universities to consider.

· Achieving high response rates

The issue addressed most frequently in the literature on student evaluation is response rates. Many universities discovered that the migration from paper to electronic surveys was accompanied by a plunge in response rates, and low response rates raise doubts about the validity of the data.

One of the universities enlisted its students to come up with a solution. The student executive designed a system whereby, after a certain date, students were required to complete their electronic surveys in order to access their learning management system (LMS) subject sites. Students did not want compulsory evaluation, so they also designed an opt-out option. Rather than completing the survey, students could click a button reading, "I have considered completing this student evaluation survey and decided not to complete it." Before being given access to the LMS, however, they were still required to answer a single rating question: "Overall, rate the quality of this subject/educator." Critics worried that this forced rating would skew results towards low ratings. However, evidence showed that these ratings were high overall and equivalent to the ratings from the full surveys.

· Qualitative thematic analysis of student comments

Universities commonly report means, and sometimes modes, medians and graphical representations, of results from Likert-scale items. But how do universities deal with student comments so that these data can be used to inform improvements?

Increasingly, universities are choosing electronic student evaluation systems with built-in or transferable qualitative analysis tools. In qualitative research, narrative data is often analysed by identifying common words and themes, and this technique has been applied to student evaluation systems. Staff no longer have to read through pages of comments: emerging themes are identified automatically and keywords can be searched. For example, qualitative analysis means that universities can efficiently derive a report about how students perceive assessment practices.
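A minimal sketch of what such keyword-based theme tallying might look like, as a stand-in for the commercial tools described above. The theme dictionary here is an illustrative assumption, not a validated coding scheme.

```python
from collections import Counter
import re

# Hypothetical theme -> keyword mapping; real systems derive themes
# from the data rather than from a fixed hand-written list.
THEMES = {
    "assessment": {"assessment", "assignment", "exam", "marking", "feedback"},
    "workload": {"workload", "hours", "busy", "overloaded"},
}

def tally_themes(comments: list[str]) -> Counter:
    """Count how many comments mention each theme at least once."""
    counts: Counter = Counter()
    for comment in comments:
        words = set(re.findall(r"[a-z']+", comment.lower()))
        for theme, keywords in THEMES.items():
            if words & keywords:  # any keyword present in this comment
                counts[theme] += 1
    return counts
```

Run over a semester's comments, such a tally would support the kind of report mentioned above, e.g. how often assessment-related concerns appear across subjects.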

· Embedding student evaluation in the overall culture and context of quality learning and teaching

Student evaluation has been criticised as being largely a tick-and-flick exercise. A common theme in the literature is that students do not believe anything is done with their feedback, while educators believe that student survey feedback is an unfair and inaccurate means of judging their teaching.

Universities are seeing potential in student evaluation as part of a larger quality assurance and quality improvement process. Question sets are validated and carefully aligned with these universities' strategic priorities and enabling actions. Students are engaged in the process. Student evaluation is married with academic development, learning and teaching action research, curriculum renewal and benchmarking. One enabling action towards these goals is for universities to adopt common student evaluation systems, questions and reporting mechanisms so that they can collaborate on systemic improvement.