The University of Southampton
Public Policy|Southampton

Consultation response | TEF Year Two technical response

Teaching Excellence Framework Technical Consultation

Department for Business, Innovation & Skills

A response from the University of Southampton | July 2016


Question 1 (Chapter 1)

Do you agree with the criteria proposed in Figure 4? No

 

There are many aspects of these criteria which we would support. However, we believe that some modifications of the criteria are necessary. Firstly, there are some additional criteria we would suggest:

 

Secondly, there are some areas of concern arising from the criteria, which include:

 

 

Question 2 (Chapter 3)

 A) How should we include a highly skilled employment metric as part of the TEF?

The most important element to securing a highly skilled employment metric for the TEF is to develop new and effective metrics as part of the consultation on the future of DLHE. Both the Government and the sector are aware of the shortcomings of DLHE and it is vital that this work results in a credible metric in which the sector can have confidence.

 

We realise that this will not be ready for Year 2 of the TEF, so existing data must be used for that year; however, the Government and the TEF reviewers must treat those data with due caution, aware of their limitations.

In particular, we agree with the tension set out at the end of Paragraph 72. We would go further and state that the perceived value of the same degree varies greatly between institutions across the country, and that this has a direct effect on the employment prospects of students. That perceived value is driven as much by entry standards as by quality of teaching. A low-tariff university with excellent teaching may score badly on this measure; a higher-tariff institution with only average teaching may score considerably better.

Whilst this measure actually benefits high-tariff institutions such as the University of Southampton, we do not believe that it provides a meaningful proxy for teaching quality.

 

B) If included as a core metric, should we adopt employment in Standard Occupational Classification (SOC) groups 1-3 as a measure of graduates entering highly skilled jobs? Yes

 

NOTE: We hope that the review of DLHE will produce more robust metrics. Whilst we agree with this for TEF Year 2, we think that, for this criterion in particular, it is important not to “lock in” what is used in TEF Year 2 for future years of the TEF.

C) Do you agree with our proposal to include all graduates in the calculation of the employment/destination metrics? Not sure

 

We note from paragraph 81 that the proposal is to include only UK-domiciled graduates rather than all graduates. We are happy with this, as the data for non-UK-domiciled graduates are not robust. For UK-domiciled students, we are content to include all graduates provided that the benchmarks described later in the consultation are in place. Without those benchmarks there could be distortions, as the proportion of students taking up highly skilled employment varies not only by institution but also by subject, meaning that a provider’s TEF performance could be driven in part by the choice of subjects it teaches. There could be an unintended incentive for universities to change the subjects they teach, as reflected in Annex D.

 

 

Question 3 (Chapter 3)

 A) Do you agree with the proposed approach for setting benchmarks? Yes

 

It is absolutely vital that benchmarks take into consideration the subject mix.

NOTE: In addition to the metrics stated, to fulfil the ambitions of the White Paper we believe that data should be collected on non-continuation rates relating to ethnicity, sex and disability as well as employment destinations relating to disability. However, we recognise that this may not be possible for TEF Year 2.

 

B) Do you agree with the proposed approach for flagging significant differences between indicator and benchmark (where differences exceed 2 standard deviations and 2 percentage points)? Yes

NOTE: The key issue is not how things are flagged but how the assessors use the information. It is not clear how much of their judgement will be based on the metrics and how much on the contextual information.
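The proposed rule flags a difference only when it exceeds both 2 standard deviations and 2 percentage points. As a minimal sketch of how such a dual threshold behaves, the following illustration assumes that the standard deviation of a percentage indicator can be approximated by the binomial standard error around the benchmark; the consultation does not specify this calculation, and the function and parameter names here are our own:

```python
import math

def flag(indicator_pct, benchmark_pct, n_students):
    """Flag a percentage indicator against its benchmark.

    Returns '+' (significantly above), '-' (significantly below) or ''
    (no flag). Hypothetical illustration only: the standard deviation is
    approximated here by the binomial standard error of a proportion.
    """
    p = benchmark_pct / 100.0
    sd_pct = 100.0 * math.sqrt(p * (1.0 - p) / n_students)
    diff = indicator_pct - benchmark_pct
    # Flag only if the difference exceeds BOTH thresholds.
    if abs(diff) > 2.0 * sd_pct and abs(diff) > 2.0:
        return '+' if diff > 0 else '-'
    return ''

# With 400 students, a 6-point gap clears both thresholds and is flagged;
# a 1-point gap never is, because of the 2-percentage-point floor.
print(flag(78.0, 72.0, 400))  # '+'
print(flag(73.0, 72.0, 400))  # ''
```

The dual threshold matters in practice: for small cohorts the 2-standard-deviation test dominates, so even sizeable gaps may go unflagged, while for very large cohorts the 2-percentage-point floor prevents trivially small differences from being flagged.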

 

Question 4 (Chapter 3)

 

Do you agree that TEF metrics should be averaged over the most recent three years of available data? Yes

 

This seems the right compromise to allow for variations.

 

Question 5 (Chapter 3)

Do you agree the metrics should be split by the characteristics proposed above? Yes

 

 

We are happy with the split, but we note that the split in the metrics outlined in paragraph 88 differs from the contextual information used to aid interpretation of the metrics (Table 1, paragraph 95, Part A). We are not sure what use contextual information on gender will be, for example, if the metrics are not actually split by gender. Over time, would it make sense for these to be the same list?

The Government should also recognise and plan for the challenges of splitting the data. Some splits would leave very small numbers of students, particularly in small institutions, and thresholds may be needed for including data. Other splits may generate a grouping not intended by the split itself: for example, when nearly all the students for a particular subject fall on one side of a split, that subject will be over-represented in that group and under-represented in the other. TEF assessors need to be made specifically aware of these data issues and of the care needed in interpreting them.
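To illustrate the kind of threshold that could address the small-numbers problem, a split-level metric might be suppressed when its cohort is too small to be meaningful. This is purely a sketch: the threshold value and all names below are our own assumptions, not part of the consultation proposals.

```python
MIN_STUDENTS = 23  # assumed reporting threshold, for illustration only

def reportable_splits(splits, min_n=MIN_STUDENTS):
    """Partition metric splits into reportable and suppressed.

    `splits` maps a split label (e.g. 'part-time', 'disabled') to a
    (metric_value_pct, n_students) pair. Suppressed labels are returned
    alongside the kept splits so that assessors can see what was
    withheld and why, rather than it disappearing silently.
    """
    kept, suppressed = {}, []
    for label, (value, n) in splits.items():
        if n >= min_n:
            kept[label] = (value, n)
        else:
            suppressed.append(label)
    return kept, suppressed
```

Returning the suppressed labels, rather than silently dropping them, would let assessors distinguish “no data” from “data withheld for robustness”.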

 

 

Question 6 (Chapter 3)

 

Do you agree with the contextual information that will be used to support TEF assessments proposed above? Yes

 

Part A - Data: We are happy with the list of contextual information but, as raised in Q5 above, we note the difference between this information (Table 1, paragraph 95, Part A) and the split in metrics outlined in paragraph 88.

Part B – Data Maps: We note the use in Table 1, Paragraph 95, Part B of the phrase “where students who study at the provider grew up”. We assume that this means “the address at which the student was living when he or she applied to study at the provider”, as people can grow up in more than one place. We are not sure what this adds to the POLAR quintile data in Part A. Whilst maps as suggested in Part B might be of interest to the institution, we are not convinced that they will provide additional information of use to the reviewers which is not already captured by the information in Part A.

 

Question 7 (Chapter 3)

A) Do you agree with the proposed approach for the provider submission? No

 

 

B) Do you agree with the proposed 15 page limit? No

 

The proposed methodology will work well for institutions with small numbers of students offering degrees in relatively few subjects. The larger the number of students and the broader the range of degree courses, the more difficult it will be to meet the guidelines. It will also be important to establish a clear definition of what constitutes a subject.

For example, Paragraph 101 emphasises that the reviewers are looking for examples of excellence across the entire provision, not just ones affecting a small number of students. The needs of students learning different subjects are quite different (nursing vs engineering vs modern languages, for example). Large comprehensive universities with tens of thousands of students and more than 100 degree programmes might well have excellent examples covering most of their students, but these will not be concentrated in one or two initiatives; there will be many.

This also means that the 15 page limit will be significantly more problematic for larger, comprehensive HEIs than smaller, specialist ones.

How might these problems be addressed? Our suggestions include:

We also note that, even with the level of detail available here, it is possible that TEF reviewers will interpret things differently from each other, and significant effort in training will be needed to ensure a consistent interpretation by reviewers. It will also be important to establish transparent, published criteria which show how reviewers will make their judgements. In particular, the weighting or importance attached to the metrics compared with the commentary should be clarified.

 

 

Question 8 (Chapter 3)

 

Without the list becoming exhaustive or prescriptive, we are keen to ensure that the examples of additional evidence included in Figure 6 reflect a diversity of approaches to delivery. Do you agree with the examples? Not sure.

We feel that, as things stand, there are too many things on this list. This could have two negative consequences. The first is that the more things on the list, the more scope there is for reviewers to reach different judgements for different universities on similar types of evidence. This is made more likely because the rapid timetable for the introduction of TEF Year 2 leaves insufficient time to put in place a really robust training programme for reviewers and a robust system of moderation. Although TEF Year 2 is still in the development phase, it will have immediate reputational and financial consequences for universities, and BIS should anticipate appeals and judicial reviews from universities not given the highest rating. BIS could ameliorate the problem by not using this list in TEF Year 2 and instead adopting a much simpler system, for example one in which universities are merely allowed to comment on the metrics. This would allow a more comprehensive system to be introduced in Year 3 once robust training and moderation were in place, and would reduce the likely number of appeals and judicial reviews.

 

The second problem with having so many areas listed is that HEIs will inevitably attempt to cover all or most of them in their submissions, which will be extremely challenging in 15 pages. A worrying possibility is that they fail to cover some important areas for which they have good material and are subsequently penalised. This could be mitigated in Year 2 by reducing the commentary as described in the paragraph above. In future years of the TEF, it could be mitigated either by reducing the number of criteria or by the suggestions set out in our response to Q7 above (increasing the page limit for larger institutions and making the institution’s last HER report available to the reviewers).

Although our preference would be to reduce the number of examples of additional evidence, there are other pieces of evidence which BIS might like to consider:

 

For all the aspects:

 

 

For the Teaching Quality aspect:

 

One area of concern is in the Student Outcomes and Learning Gains aspect, where one of the examples is “learning gain and distance travelled by students”. We are concerned that, with no agreed mechanism to measure this, assessors will not be able to make judgements on a fair basis. We would therefore suggest that it is removed and reintroduced only once an agreed system of assessing it is in place.

 

Question 9 (Chapter 4)

 

A) Do you think the TEF should issue commendations? No.

B) If so, do you agree with the areas identified above? No.

 

We believe that it will take at least two or three cycles of the TEF before the methodology is sufficiently robust, yet TEF outcomes will have an immediate reputational effect on HEIs from Year 2 of operation. This opens up the possibility of all sorts of challenge, and commendations will be even harder to defend robustly than the overall judgements. We believe that commendations should only be considered once the metrics and methodology have been tested over a few cycles of the TEF. Although we are not in favour of them, if they do proceed it would be helpful to clarify whether commendations, like the TEF award, would also last for three years.

 

Question 10 (Chapter 4)

Do you agree with the assessment process proposed? Not sure

 

The whole timeframe is tight. Given that guidance will only come out in October 2016, we would prefer that providers were given until the end of January 2017 to make their submissions, although we realise that this squeezes the time available for the assessment.

Whilst the process looks achievable, it would be helpful, in the response to this consultation, to give some further information about:

It should also be clarified when the evidence will be sampled, and how much of it. It will be important that reviewers do not make judgements based solely on assertions in the institution's commentary.

 

Question 11 (Chapter 4)

 

Do you agree that in the case of providers with less than three years of core metrics, the duration of the award should reflect the number of years of core metrics available? Yes

 

This seems the best compromise. It allows those with less of a track record to participate.

NOTE: Although BIS has not asked a specific question about this, we note that, as a TEF Year 2 award will stand for three years, HEIs that achieve an “outstanding” rating in Year 2 will have no incentive to take part in Year 3 of the TEF – and many with an “excellent” rating may not choose to do so either. As the TEF is still developing in its early years, it might not be in the interests of BIS to have some of the UK’s best teaching universities not taking part in Year 3. It might consider whether there could be incentives for good universities to take part (e.g. if a provider’s Year 3 rating were below its Year 2 rating, it could still use its Year 2 rating).

 

Question 12 (Chapter 5)

Do you agree with the descriptions of the different TEF ratings proposed in Figure 9? No

We agree with much of what is in the descriptions but have the following suggestions:
