Presentation from a University of York Library workshop on bibliometrics. The session covers how published research outputs are measured at the article, author, and journal levels, with discussion of the limitations of a bibliometric approach.
1. Who’s counting?
An introduction to
bibliometrics
Thom Blake and Lindsey Myers
Research Support Librarians, Library Research Support Team
Academic Year
2019-20
2. What are bibliometrics?
Oxford English Dictionary:
The statistical analysis of books, articles, or
other publications.
…in order to indicate (rather than evaluate) research quality by
measuring the related scholarly output.
3. Today’s workshop
Mathematical definitions
Articles; Authors; Journals
Author rankings
Alternative metrics
Uses for bibliometrics
Further help
4. Sources
Google Scholar
Scopus
Web of Science
Altmetric.com
Opportunities for hands-on practice
Questions welcome!
5. So: what counts?
“The impact of a piece of research is the
degree to which it has been useful to other
researchers and users” (Shadbolt et al. 2006,
p. 202).
How many times has it been cited?
More recently: page views, downloads, social
media mentions etc.
6. Pitfalls
Not all citations are the same
‘Attention’ has a social context
Disciplinary cultures vary
Vulnerable to gaming?
But subjective judgment isn’t perfect
either…
7. Exercise 1
Monitoring the number of times an
author has been cited is a reliable and
fair approach to evaluating the quality
of their research.
Discuss this statement with your neighbours.
(5 minutes)
12. Exercise 2
Using Scopus
Carry out a keyword search for articles related to
your research.
Identify the most highly cited articles and explore
the metrics. Are there any surprises?
Note the details of a highly-cited article, for use
later in this workshop.
(10 minutes)
16. Author-level indicators
Uses
On a CV
To report on a team or
department
For management decisions
To rank individuals
Limitations
Need to take account of
career length and pattern
Very little data available for
books
Disciplinary variation
Incomplete data
Collated data for a researcher over the course of their
career
18. Google Scholar profiles
Create your own profile, by selecting your articles from
Google Scholar results.
Register for an ORCID iD so that you have a unique researcher identifier.
19. Exercise 4
Google Scholar profiles
Browse Google Scholar for York authors, or
search for prominent authors in your discipline.
Identify an author with a Scholar profile, and
explore their personal metrics.
(10 minutes)
20. Journal-level indicators
Calculated by aggregating citation scores for all
the articles published in a specific journal,
e.g. Journal Impact Factor
Ranked lists for each discipline
Interpret with caution…
23. Where to publish?
Factors that might influence where you decide to
publish:
appropriateness of the journal for your research
status of the journal amongst peers
likelihood of being accepted for the journal
quality of the editorial process
speed of publication
openness of the journal ... etc.
Think.Check.Submit
https://thinkchecksubmit.org
24. In conclusion
1. Always compare like with like
discipline
career pattern
2. Be mindful of publishing
culture
books or journal articles?
turnaround
social media attention?
3. Don’t rely on a single
bibliometric tool
underlying metrics differ
4. Put the data in context
to support subjective
judgment
25. A final word from you
Monitoring the number of times an
author has been cited is a reliable and
fair approach to evaluating the quality
of their research.
Have your views changed?
26. Further help
Bibliometrics: a practical guide
subjectguides.york.ac.uk/bibliometrics
Citation analysis and bibliometrics
www.york.ac.uk/library/info-for/researchers/citation
Research Support Team
lib-research-support@york.ac.uk
For the strategic use of indicators at institutional or
department level contact the Research Strategy and
Policy Office
www.york.ac.uk/staff/research/about-re/contacts-rsp
Editor's notes
It could be anything: number of publications, word count, number of authors, citation frequency, etc.
The underlying maths can be very technical, but we won’t go there; we’ll just use the data other people have created
Scopus and WoS are paid-for; Google Scholar and Altmetric data are available free-of-charge.
They all have different strengths and weaknesses
The paid-for data is curated, and analytical tools are provided
Near-universal transition to digital publishing has enabled access to more instantaneous measures of use (or at least, attention)
A breakthrough dependent on earlier theoretical insights, or just a nod to someone’s methodology? Or even a refutation.
Already famous name? Published in a journal with a small readership? Written in a language other than English? etc
A science paper may have a bibliography running into hundreds, and immediate impact
Authors have incentives for self-citation or forming citation circles
There’s no ‘correct’ answer
What are your ‘gut’ feelings about whether bibliometrics are useful in your discipline?
When/if you are published, do you think you’ll want to follow your own metrics?
Who else might want to monitor them?
National and local conversations about good practice:
Choose the right metrics
Don’t make inappropriate comparisons
Don’t exclude qualitative assessment
Q: Why are there two different scores? A: WoS is a curated database, which means no false positives, no double-counting etc
How do you know what’s a ‘good’ score? Google Scholar doesn’t allow you to rank by Times Cited
Analysis from WoS.
Date of publication also has an impact on citation count: older articles have had a longer time to accrue citations.
Turnaround from ‘reading an interesting article’ to ‘publication of your own paper citing the original article’ may be several years
Scopus (owned by Elsevier) invented this metric
This article has been cited 64% more than its comparators
That puts it in the top 2% of comparable articles
Be aware that the data reflect only citations in other publications indexed by Scopus. Books, minority languages and non-academic publications are not well covered
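The "cited 64% more than its comparators" figure is a Field-Weighted Citation Impact: the ratio of a paper's actual citations to the citations expected for publications of the same age, type and subject field. A minimal sketch (the citation figures below are invented for illustration; Scopus computes the expected value from its own database):

```python
def fwci(actual_citations, expected_citations):
    """Field-Weighted Citation Impact: actual over field-expected citations."""
    return actual_citations / expected_citations

# A score of 1.64 means the article was cited 64% more than average
# for comparable publications (invented figures):
score = fwci(41, 25)
print(round(score, 2))  # → 1.64
```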
Evidence of impact outside the traditional ‘scholarly’ environment
Social media interest is much faster than citation
Several different providers: Altmetric.com is the most distinctive
Weighted by source publication, but not by age or discipline
Use to understand impact of research. Not appropriate for ranking (what does a high number of tweets actually indicate?)
Many publishers (incl. WRRO) harvest/purchase altmetric data to encourage reader interest
Install it on your browser (not IE) to gather altmetrics for any scholarly publication with a DOI
If no Altmetric donut present in WRRO record:
No DOI, or
Score = 0
Plot all papers, ranked by no. times cited
H-index = the largest value h such that the author has h papers each cited at least h times: an indicator of career impact
e.g. Callum Roberts has authored 72 papers over the course of his career which have been cited at least 72 times. 51 in the past 5 years!
Refinements to account for self-citation, no. co-authors etc.
Not a useful measure for early-career researchers (ECRs): 2 papers cited 50 times = h-index of 2.
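The h-index calculation described above can be sketched in a few lines (the citation counts would come from Scopus, Web of Science or Google Scholar; the figures here are invented):

```python
def h_index(citations):
    """Largest h such that h papers each have at least h citations."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # this paper still sits on or above the h line
        else:
            break
    return h

# An early-career researcher with two heavily cited papers still scores 2:
print(h_index([50, 50]))  # → 2
```

This also shows why the measure rewards sustained output: many moderately cited papers can outscore a couple of highly cited ones.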
Publishers like to boast about their high impact factor
When choosing where to submit, you should also take account of editorial culture, open access policy etc
Doesn’t account for variance (one highly-cited article can skew a journal’s score)
Impact factor should never be used as a proxy for article- or author-level metrics
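The Journal Impact Factor for year Y is the citations received in Y to items the journal published in Y-1 and Y-2, divided by the number of citable items published in those two years. A minimal sketch of both the calculation and the skew problem mentioned above (all figures invented):

```python
def impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
    """JIF: citations this year to the previous two years' output,
    per citable item published in those two years."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# One highly cited article can skew the score: 100 citations to a single
# paper plus 20 spread across the other 59 items still yields a JIF of 2.0,
# even though the typical article was barely cited.
print(impact_factor(100 + 20, 60))  # → 2.0
```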
An overview of a program widely used in structural chemistry
These measures are not static - trends over time are an indicator of whether a journal’s reputation/influence is growing or shrinking
But they cannot substitute for knowledge of the discipline