The Metric Tide – Stephen Curry, Imperial College London, and Ben Johnson, HEFCE
Open infrastructures – Clifford Tatum, Leiden University
Open citation – Cameron Neylon, Curtin University
Jisc and CNI conference, 6 July 2016
21. About HEFCE
• We invest in the teaching, research and knowledge exchange activities of English higher education institutions (£4bn / year)
• We regulate and oversee English HEIs (quality and financial sustainability)
• We operate the UK-wide Research Excellence Framework (REF) – www.ref.ac.uk
22. Measurement matters
If we can tell the good research from the bad:
• Researchers can build on high-quality work and pursue promising directions
• HEIs can appoint and promote good researchers, and support good departments
• Funders and governments can benchmark performance and fund high-quality research
…and if we can put numbers on it, we can be more objective in our decisions!
30. “I have asked HEFCE to undertake a review of the role of metrics in research assessment and management. The review will consider the robustness of metrics across different disciplines and assess their potential contribution to the development of research excellence and impact…”
David Willetts, Minister for Universities & Science, speech to Universities UK, 3 April 2014
31. Steering group
The review was chaired by James Wilsdon, Professor of Science and Democracy at the Science Policy Research Unit (SPRU), University of Sussex. He was supported by an independent steering group and a secretariat from HEFCE’s Research Policy Team:
• Dr Liz Allen (Head of Evaluation, Wellcome Trust)
• Dr Eleonora Belfiore (Associate Professor of Cultural Policy, University of Warwick)
• Sir Philip Campbell (Editor-in-Chief, Nature)
• Professor Stephen Curry (Department of Life Sciences, Imperial College London)
• Dr Steven Hill (Head of Research Policy, HEFCE)
• Professor Richard Jones FRS (Pro-Vice-Chancellor for Research and Innovation, University of Sheffield) – representing the Royal Society
• Professor Roger Kain FBA (Dean and Chief Executive, School of Advanced Study, University of London) – representing the British Academy
• Dr Simon Kerridge (Director of Research Services, University of Kent) – representing the Association of Research Managers and Administrators
• Professor Mike Thelwall (Statistical Cybermetrics Research Group, University of Wolverhampton)
• Jane Tinkler (Parliamentary Office of Science & Technology)
• Dr Ian Viney (Head of Evaluation, Medical Research Council) – representing RCUK
• Professor Paul Wouters (Centre for Science & Technology Studies, Leiden University)
32. Approach and evidence sources
• Steering group: diverse expertise and extensive involvement
• Broad terms of reference: opening up, rather than closing down, questions
• Transparent: publishing minutes & evidence in real time
• Formal call for evidence, May to June 2014
  – 153 responses: 44% HEIs; 27% individuals; 18% learned societies; 7% providers; 2% mission groups; 2% other
• Stakeholder engagement
  – 30+ events, including 6 review workshops (e.g. on equality & diversity, and on arts & humanities). Invited fiercest critics!
  – Ongoing consultation & use of social media, e.g. #hefcemetrics
• In-depth literature review
• Quantitative correlation exercise relating REF outcomes to indicators of research
• Linkage to HEFCE’s evaluations of REF projects
35. Across the research community, the description, production and consumption of ‘metrics’ remains contested and open to misunderstandings.
36. Peer review, despite its flaws and limitations, continues to command widespread support across disciplines. Metrics should support, not supplant, expert judgement.
38. Indicators can only meet their potential if they are underpinned by an open and interoperable data infrastructure.
39. Our correlation analysis of the REF2014 results at output-by-author level has shown that individual metrics cannot provide a like-for-like replacement for REF peer review.
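To make the nature of that comparison concrete, here is a minimal sketch of the kind of output-level check involved, assuming a table that pairs each output's peer-review grade with a citation-based indicator; the column names and data are hypothetical and this is not the methodology of the review's supplementary report:

# Minimal sketch: rank correlation between a citation-based indicator and
# peer-review quality scores for the same outputs. Hypothetical data and
# column names; illustrative only.
import pandas as pd
from scipy.stats import spearmanr

outputs = pd.DataFrame({
    "ref_score": [4, 3, 4, 2, 3, 1, 4, 2],                            # peer-review grade (1* to 4*)
    "citation_indicator": [8.2, 1.5, 3.1, 0.4, 2.2, 0.1, 0.9, 1.8],   # e.g. field-normalised citations
})

rho, p_value = spearmanr(outputs["ref_score"], outputs["citation_indicator"])
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
# A weak correlation is the kind of result that argues against treating the
# indicator as a like-for-like substitute for peer review.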
40. Within the REF, it is not currently feasible to assess the quality of research outputs using quantitative indicators alone, or to replace narrative impact case studies and templates.
41. Responsible metrics
Responsible metrics can be understood in terms of:
• Robustness: basing metrics on the best possible data in terms of accuracy and scope;
• Humility: recognizing that quantitative evaluation should support – but not supplant – qualitative, expert assessment;
• Transparency: keeping data collection and analytical processes open and transparent, so that those being evaluated can test and verify the results;
• Diversity: accounting for variation by field, using a variety of indicators to reflect and support a plurality of research & researcher career paths;
• Reflexivity: recognizing the potential and systemic effects of indicators and updating them in response.
44. At an institutional level, HEI leaders should develop a clear statement of principles on their approach to research management and assessment, including the role of indicators.
46. HR managers and recruitment or promotion panels in HEIs should be explicit about the criteria used for academic appointment and promotion decisions.
47. Individual researchers should be mindful of the limitations of particular indicators in the way they present their own CVs and evaluate the work of colleagues.
48. Like HEIs, research funders should develop their own context-specific principles for the use of quantitative indicators in research assessment and management.
49. Data providers, analysts & producers of university rankings and league tables should strive for greater transparency and interoperability between different measurement systems.
50. Publishers should reduce emphasis on journal impact factors as a promotional tool, and only use them in the context of a variety of journal-based metrics that provide a richer view of performance.
52. There is a need for greater transparency and openness in research data infrastructure. Principles should be developed to support open, trustworthy research information management.
53. The UK research system should take full advantage of ORCID as its preferred system of unique identifiers. ORCID iDs should be mandatory for all researchers in the next REF.
54. The use of digital object identifiers (DOIs) should be extended to cover all research outputs.
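As a concrete illustration of how these two identifiers fit together, here is a minimal sketch that lists the DOIs attached to a researcher's public ORCID record. It assumes the ORCID public API (v3.0) and the requests library; the iD shown is ORCID's well-known demo record rather than a real researcher, and error handling is kept to a minimum:

# Minimal sketch: list the DOIs attached to a public ORCID record.
# Assumes the ORCID public API v3.0 and the `requests` library; the iD below
# is ORCID's demo record (Josiah Carberry), not a real researcher.
import requests

ORCID_ID = "0000-0002-1825-0097"
resp = requests.get(
    f"https://pub.orcid.org/v3.0/{ORCID_ID}/works",
    headers={"Accept": "application/json"},
    timeout=30,
)
resp.raise_for_status()

dois = set()
for group in resp.json().get("group", []):
    for summary in group.get("work-summary", []):
        for ext_id in (summary.get("external-ids") or {}).get("external-id", []):
            if ext_id.get("external-id-type") == "doi":
                dois.add(ext_id.get("external-id-value"))

print(f"{len(dois)} DOIs found for {ORCID_ID}")
for doi in sorted(dois):
    print(doi)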
55. HEFCE, funders, HEIs and Jisc should explore how to leverage data held in existing platforms to support the REF process, and vice versa.
56. In assessing impact, we recommend that HEFCE builds on the analysis of the impact case studies from REF2014 to develop clear guidelines for the use of quantitative indicators in future impact case studies.
57. In assessing the research environment in the REF, we recommend that the role of quantitative data is enhanced.
58. The community needs a mechanism to carry forward this agenda. We propose a Forum for Responsible Metrics, to bring together key players to work on data standards, openness, interoperability & transparency.
60. Other developments
• HE and Research Bill
• Stern review:
  – Expected July 2016
  – Extensive referencing of The Metric Tide in sector responses
• EC expert group on altmetrics
OUTLINE
Talk about work on metrics
Refresher of key findings and recommendations of The Metric Tide work
Discuss plans for metrics forum (in brief)
First some background
Stating the obvious..!
And publishing = esteem.
350 years of this
Publishing depends on peer review – an embedded but resource-intensive activity. A qualitative activity. At the heart of academic culture.
That’s not to say we don’t measure things
Researchers, HEIs, publishers and funders are all increasingly interested in quantifying activity around publishing and research:
Citations
Downloads
Usage
Impact
Grant income
PhD candidates, etc.
Publishers have embraced this
Whole industry here
The JIF in particular is widely trumpeted by publishers, despite its obvious flaws
Metrics providers are also in the HEI ranking game, with some HEIs basing their whole strategy around league tables
And governments are not immune – they need to measure and benchmark to demonstrate value for money, justify expenditure and attract further investment
The Elsevier/BIS report bases all performance measures on publishing metrics… showing a clear link back down to the qualitative business of publishing and peer review.
REF is another method of quantifying quality
Like many other metrics it also depends on expert review
But it is resource intensive as a result. And not embedded!
So…
….some argue that it is additional activity that can be replaced with metrics to save money.
Those in the business of providing metrics are particularly keen, especially publishers trying to diversify away from subscriptions in light of moves towards OA
All this is part of the motivation behind David Willetts’ request that HEFCE review the role of metrics in research assessment. Can we cut down on the additional work?
Set up in 2014
Published in July 2015
Just headlines!
Cultural challenge here – researchers don’t like being measured, especially by numbers-obsessed administrators and bureaucrats from HEFCE
Fears are quite real:
Quality may be lost in reductive number crunching
Metrics favour those who can play the game
Metrics punish rebels
Metrics are not suitable for A&H research
Researchers more comfortable with the term ‘indicators’
e.g. citation rings, salami slicing, focussing on sexy areas or measurable areas, not publishing controversial research
‘Basket of indicators’ may help to counteract gaming? And some of these issues are already upon us…
Problem that metrics are being created behind closed doors, with dodgy data and incompatible systems. Needs to be opened up.
Very weak correlation.
Very poor coverage across some disciplines and output types.
These metrics are insensitive and imprecise.
…until the issues are ironed out.
Recognise that issues can be solved with work but it will take years
Obvious trade-off with the cost of peer review – so we will keep needing to have this discussion when we do REFs in future
Not all the written evidence about the REF we received acknowledged the diversity of purposes of the REF, and there are clearly differences of opinion about the relative importance of the purposes. It is not surprising, therefore, that there are equally different views on how and whether quantitative indicators of research quality should feature in the assessment.
This slide captures some of the overall points and concerns gathered through the process of evidence gathering
Some technical
Some cultural
Especially the JIF and the h-index!!!
“An article is only as good as the journal it is published in” == bullshit, obviously
NB extensive buying up of infrastructure by private companies is leading to significant concern. (Expect Cameron will cover this!!)