The Higher Education Funding Council for England is conducting a consultation on the use of metrics to assess research quality. Under the current UK system, every five years a time-consuming and expensive research assessment exercise is conducted. The last one was called the ‘Research Excellence Framework’ (you can see where this is going). It involves dozens and dozens of academics reading through submissions from more or less every research-active university scholar in the country. At the end of this, an ever-dwindling pot of money is divided up between universities in order to promote further research and ‘reward excellence’ (i.e. concentrate money where there is already lots of it).
Naturally, the government would prefer the bean-counting to be done in a cheaper way, ideally not involving actual people (whether this constitutes an accelerationist moment, I will leave you to judge). As a result, they are keen to promote metrics-based assessment, hence this consultation.
The primary way of measuring the quality of a piece of work would be to count how many citations it attracted. This raises huge questions about the adequacy of the measure employed and, of course, about how the measure distorts the very thing being measured, as people indulge in all sorts of game-playing to give and get citations. Meera Sabaratnam and Paul Kirby have written a response to the consultation, arguing against the proposal. You can add your name to it, if you wish to support it, though this applies principally to academics based in England. Others might wish to read the response anyway, and reflect upon the neoliberalisation of humanities research, and how it might be resisted.