Listening to the student voice
Evaluation, control and commitment are the central tenets of student feedback, but it’s what institutions do with the data that really matters, says John Atherton
Policy changes mean that UK universities are having to take a more robust and strategic approach to course and module evaluation.
The National Student Survey (NSS) asks students whether they have had the opportunity to give feedback and whether that feedback has been acted on – and the Teaching Excellence and Student Outcomes Framework (TEF), which gives students a resource for judging teaching quality in universities, draws on data from the NSS. All this points to student engagement rising higher up universities’ priority lists than ever before.
Why does the student voice matter?
Student satisfaction, informed and ultimately supported by an engaged student population, is fundamental to the future of higher education institutions and the strategic goals of vice-chancellors and deputy- or pro-vice-chancellors directly responsible for this agenda.
Our new report – The Student Voice: how can UK universities ensure that module evaluation feedback leads to continuous improvement across their institution? – explores the views of senior leaders in UK universities who are tasked with devising related strategies. It is clear that, driven by external pressures around the NSS, TEF and other metrics, universities are generally ramping up their approaches to capturing and responding to student feedback.
Module evaluation surveys are recognised by senior leaders as playing a strategically important role in the student voice, giving institutions the opportunity to respond to issues and concerns before the NSS is completed. They also provide a valuable opportunity for individuals, departments, faculties and universities as a whole to reflect on their teaching practice and the student experience within it.
As such, many universities are embedding module evaluation within their wider strategies around student engagement and student experience – and these surveys are perceived to support broader initiatives around student retention. Module evaluation surveys are seen as particularly valuable for identifying areas of excellence or underperformance at a module level – the ‘detail’ of what is going on.
More widely, the principles of ‘co-creation’ and ‘co-production’ are being championed in some universities to foster greater engagement between students and staff – and these principles are being applied in module evaluation. Good practice has been identified around student engagement in module evaluation activity, both in its planning and in follow-up. Faculty engagement is also recognised as important.
There are, however, clear issues with the consistency of approach to feedback and evaluation within institutions and across the sector more widely. When undertaken well, surveys can be used to ensure that decision-making is guided by evidence and they can support staff in being recognised and rewarded for their good practice.
Yet senior leaders also recognise that module evaluation surveys are just one way of gathering student feedback, and that they need to be supported by more holistic approaches.
The real challenge facing most universities is developing a wider system which allows them to gather students’ learning experiences and then use these for both quality assurance and quality enhancement purposes. Some institutions are making advances in this area; others are at the start of their journey and restricted by the absence of consistent, institutional approaches to module evaluation. There are also gaps in their ability to benchmark, and historical issues around engagement remain.
For too long, student evaluation data has been under-used. Universities have tended to focus on improving the process – for example, by automating surveys – rather than on using the data for improvement. There has also been too much focus on the scores that come back from the data and whether an individual score is better or worse than the average. While this is helpful, it does not foster an understanding of the issues and trends among an institution’s students. However, the discussion is definitely shifting and becoming a more strategic conversation.
Committing to improvement
Many institutions interviewed for our report expressed an underlying commitment to creating a culture of continuous improvement, with an enhanced focus on data analytics aimed at improving teaching and learning and supporting student and staff development. A combination of external and internal success measures was identified, all in line with institutional priorities for the student experience.
In summary, feedback matters – and we all have a responsibility to help universities respond to this shift, not least in terms of how module evaluation feedback is gathered and used.
John Atherton is higher education director (UK and Ireland) at Explorance: www.explorance.com