Higher Education and Research Bill Debate

Department: Department for Education
Moved by
187: Clause 25, page 15, line 32, at end insert—
“( ) The scheme introduced under subsection (1) must be laid before and approved by a resolution of each House of Parliament before it may come into effect.”
Lord Lipsey (Lab)

My Lords, I start by apologising for the absence from this debate of the noble Lord, Lord Bew, who has been delayed on his flight from Northern Ireland by weather. He was very keen to be here and will greatly regret that he has missed this debate.

I have four amendments in this group, beginning with Amendment 187. I can describe them most concisely as a range of options to de-fang the National Student Survey (NSS) as an ingredient in the Teaching Excellence Framework (TEF). The options range from requiring parliamentary approval of the scheme proposed under Clause 25, through an independent inquiry into the statistical validity of NSS data, to, finally, the nuclear option: that the Committee does not agree to Clause 25 standing part of the Bill.

I shall start where we left off in an excellent debate touching on these issues last Wednesday. That debate had a rather wider proposition at its heart: that the link between the TEF and the ability of universities to raise fees should not come into being straight away. Time would instead be allowed for the TEF, and the statistical ingredients and metrics within it, to be got right. I sympathise very much with that view, but it is not the question today.

In the debate last Wednesday, a majority were certainly critical of the metrics being used, and of whether the things the National Student Survey asks students are indeed a good way of measuring the quality of teaching in an institution. Some pretty key difficulties were raised. For example, there seems to be very little correlation (none at all, according to a paper by the Royal Statistical Society) between the scores an institution achieves in the NSS and the quality of its degree results. That seems worrying to many people. Those who defended the NSS did not actually argue that it was perfect; the noble Lord, Lord Willetts, was very frank that it is not. They made the reasonable point that if we wait for perfection on this earth we get nowhere very much, and therefore argued that we should include these metrics.

As I said, I shall not go over that argument again in detail this afternoon, though we shall probably come back to it on Report. However, I have to be absolutely clear: my worries about the NSS are not primarily about whether its metrics are good measures of teaching quality, or whether they are the best available; they are almost purely statistical. When NSS results are compared across institutions, they do not reliably reflect the opinions of students in those institutions about the quality of the teaching they are getting. The results are statistically flawed, as well as, arguably, flawed as metrics.

I am in danger of going on all night and being extremely boring. I know the Committee will have a limited appetite for a great deal of statistical discourse, but anybody who shares my nerdish love of these things should read the two documents by the Government's own Office for National Statistics (ONS) on the statistical basis of the metrics, and the excellent document by the Royal Statistical Society, which analyses this matter in detail.

I shall just mention one or two problems that are relatively easy to comprehend. Response rates to the NSS vary greatly between institutions. It is perfectly clear from what we know that the non-responders are not the same as the responders; in particular, ethnic minorities are greatly under-represented in the responses. This can have a dramatic effect on the results. Let us suppose that in one year there is a 70% response rate, with 60% of responders satisfied. Satisfied responders then account for 42% of all students. Had the response rate been 100%, the remaining 30% might all have been satisfied, or none of them might have been, so the true figure could lie anywhere between 42% and 72%: a band 30 percentage points wide, compared with the single figure of 60% that the NSS reports.

There are particular problems with sample sizes in small institutions such as my own, Trinity Laban. Music students are our biggest group; there are 112 of them, and the statistical margin of error for a number that small is very large, roughly nine percentage points either way under standard sampling assumptions.
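A minimal sketch of both calculations follows. The 70% response rate, the 60% satisfaction figure and the cohort of 112 come from the speech itself; the 95% confidence level, the worst-case proportion of 0.5 and the assumption of simple random sampling are illustrative additions, and real NSS sampling is more complicated than this.

```python
import math

# Worked example of the non-response bounds described above.
response_rate = 0.70    # 70% of students returned the survey
satisfied_rate = 0.60   # 60% of those responders reported satisfaction

# Satisfied responders as a share of ALL students.
satisfied_known = response_rate * satisfied_rate          # 0.42

# Extreme cases for the 30% who did not respond.
lower_bound = satisfied_known                             # none satisfied -> 42%
upper_bound = satisfied_known + (1 - response_rate)       # all satisfied  -> 72%

print(f"True satisfaction could lie anywhere in "
      f"[{lower_bound:.0%}, {upper_bound:.0%}]")

# Margin of error for a small cohort such as the 112 music students.
# Assumes simple random sampling, a 95% confidence level and the
# worst-case proportion p = 0.5 (assumptions not in the speech).
n = 112
p = 0.5
margin = 1.96 * math.sqrt(p * (1 - p) / n)
print(f"95% margin of error for n={n}: +/- {margin:.1%}")
```

Run as written, the sketch prints a possible satisfaction band of 42% to 72% and a margin of error of about +/- 9.3% for a cohort of 112, which is the "very large" uncertainty the speech refers to.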