Journal-level metrics
Impact Factor Journals as per JCR, SNIP, SJR, IPP, CiteScore
1. Impact Factor journals as per Journal Citation Reports (JCR): SNIP, SJR, IPP, CiteScore
Dr. S. Ghosh
Associate Professor
Department of Library & Information Science, University of North Bengal, West Bengal 734013
2. Publish or Perish?
“Publish or perish” is an aphorism describing the pressure to publish academic work in order to succeed in an academic career. The pressure to publish has been cited as a cause of poor work being submitted to academic journals.
12/6/2020@sghoshnbu 2
3. The Harsh Consequences of “Publish or Perish”
The culture of “publish or perish” is clearly pervasive and appears to be here to stay. Calls for instant distribution and transparency of both authorship and peer review may help to address problems with research quality, but as long as researchers are judged by the publication venue of their research, the system will remain fundamentally broken.
4. Perspectives of impact
Academic impact (traditional metrics): Journal Impact Factor, citation counts.
Societal impact (alternative metrics, or “altmetrics”): download counts, page views, mentions in news reports, social media and blogs, reference manager readers, etc.
Altmetrics are more article-centric, as opposed to journal-centric.
6. Why metrics?
Quantification of research impact
A multidimensional array of stakeholders
Calculation of fuzzy concepts and associated activities
7. What are the different metrics?
Scholars have combined standard research
metrics, like scholarly output and citation
counts, into formulas to measure and assess
author and journal impact in new ways. Some
of these metrics include:
Journal Impact Factor
h-index
g-index
Eigenfactor score
Altmetric
8. Ways of Measuring Impact
Article Impact - citation count and analysis using Web of
Science and Google Scholar
Journal Impact - journal data and standard measures for
journals
Author Impact - common measures of author impact (h-index) and other metrics scholars might encounter
Altmetrics - what are altmetrics? Altmetric badges and
altmetrics tools
Book and Book Chapter Impact - book citation counts,
holdings, book reviews and other qualitative indicators
Maximize Impact - unique researcher identifiers and profiles,
academic communities, and other strategies to maximize
impact
9. Journal-level metrics
Metrics have become a fact of life in many, if not all, fields of research and scholarship. In an age of information abundance (often termed “information overload”), having shorthand signals for where in the ocean of published literature to focus our limited attention has become increasingly important.
Research metrics are sometimes controversial, especially when in popular usage they become proxies for multidimensional concepts such as research quality or impact. Each metric may offer a different emphasis based on its underlying data source, method of calculation, or context of use. For this reason, Elsevier promotes the responsible use of research metrics, encapsulated in two “golden rules”: always use both qualitative and quantitative input for decisions (i.e. expert opinion alongside metrics), and always use more than one research metric as the quantitative input. The second rule acknowledges that performance cannot be expressed by any single metric, and that all metrics have specific strengths and weaknesses. Using multiple complementary metrics therefore helps to provide a more complete picture and reflect different aspects of research productivity and impact in the final assessment. (Elsevier)
10. Journal Citation Reports™ (JCR)
Journal Citation Reports™ (JCR) provides you with the transparent, publisher-neutral data and statistics you need to make
confident decisions in today’s evolving scholarly publishing landscape, whether you’re submitting your first manuscript or
managing a portfolio of thousands of publications.
Quickly understand a journal’s role within and influence upon the global research community by exploring a rich array of
citation metrics, including the Journal Impact Factor™ (JIF), alongside descriptive data about a journal’s open access
content and contributing authors.
Web of Science does not depend on the Journal Impact Factor alone in assessing the usefulness of a journal, and neither
should anyone else. The Journal Impact Factor should not be used without careful attention to the many phenomena that
influence citation rates – for example, the average number of references cited in the average article. The Journal Impact
Factor should be used with informed peer review. In the case of academic evaluation for tenure, it is sometimes
inappropriate to use the impact of the source journal to estimate the expected frequency of a recently published article.
Again, the Journal Impact Factor should be used with informed peer review. Citation frequencies for individual articles are
quite varied.
Journal Citation Reports now includes more article-level data to provide a clearer understanding of the reciprocal
relationship between the article and the journal. This level of transparency allows you to not only see the data, but also see
through the data to a more nuanced consideration of journal value.
11. Journal Impact Factor (JIF)
Journal Impact Factor (JIF) is calculated by Clarivate Analytics as the sum of
citations received in the JCR year to a journal’s previous two years of
publications (linked to the journal, but not necessarily to specific publications),
divided by the number of “citable” publications in those two years. Owing to
the way citations are counted in the numerator and the subjectivity of
what constitutes a “citable item” in the denominator, JIF has received sustained
criticism for many years for its lack of transparency and reproducibility and the
potential for manipulation of the metric. Available for only 11,785 journals
(Science Citation Index Expanded plus Social Sciences Citation Index, as of
December 2019), JIF is based on an extract of Clarivate’s Web of Science
database and includes citations that could not be linked to specific articles in the
journal, so-called unlinked citations.
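As a sketch of the arithmetic described above (not Clarivate’s actual pipeline, and with made-up numbers), a two-year impact factor can be computed like this:

```python
def impact_factor(citations_to_prior_two_years, citable_items_prior_two_years):
    """Two-year Journal Impact Factor: citations received in the JCR year to
    the journal's previous two years of publications, divided by the number
    of citable items published in those two years."""
    return citations_to_prior_two_years / citable_items_prior_two_years

# Hypothetical journal: 1,200 citations in 2020 to its 2018-2019 content,
# which comprised 400 citable items.
print(round(impact_factor(1200, 400), 2))  # 3.0
```

The real figure differs because the numerator also counts unlinked citations, while the denominator depends on what Clarivate judges “citable”.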
12. Metrics in a nutshell (Impact Factor)
Impact Factor (source: Journal Citation Reports)
Take a two-year period and divide the number of times articles were cited by the number of articles that were published.
Example:
200 = the number of times articles published in 2008 and 2009 were cited by indexed journals during 2010.
73 = the total number of “citable items” published in 2008 and 2009.
200/73 = 2.74, the 2010 impact factor.
The impact factor reflects only how many citations a specific journal receives on average. A journal with a high impact factor has articles that are cited often.
13. Traditional metrics for journals
Impact Factor and citation counts were created to measure journals and journal articles, i.e. scholarly (journal) impact.
Initially created for librarians, then largely adopted by STEM fields.
Image from Journal Citation Reports (library database).
14. Source Normalized Impact per Paper (SNIP)
Source Normalized Impact per Paper (SNIP) is a sophisticated metric
that intrinsically accounts for field-specific differences in citation
practices. It does so by comparing each journal’s citations per
publication with the citation potential of its field, defined as the set of
publications citing that journal. SNIP therefore
measures contextual citation impact and enables direct comparison
of journals in different subject fields, since the value of a single
citation is greater for journals in fields where citations are less likely,
and vice versa. SNIP is calculated annually from Scopus data and is
freely available alongside CiteScore and SJR
at www.scopus.com/sources. Unlike the well-known journal impact
factor, SNIP corrects for differences in citation practices between
scientific fields, thereby allowing for more accurate between-field
comparisons of citation impact. The Centre for Science and Technology
Studies (CWTS) Journal Indicators also provides stability intervals
that indicate the reliability of a journal's SNIP value. SNIP was
created by Professor Henk F. Moed at CWTS, University of Leiden.
15. CiteScore metrics
CiteScore metrics are a suite of indicators calculated from data in Scopus, the world’s leading abstract
and citation database of peer-reviewed literature. CiteScore itself is the sum of citations received in a
given year to publications from the previous three years, divided by the number of publications in those
same three years. CiteScore is calculated for the current year on a monthly basis until it is fixed as a
permanent value in May of the following year, giving a real-time view of how the metric builds as
citations accrue. Once fixed, the other CiteScore metrics are computed to contextualise this score with
rankings and other indicators that allow comparison.
CiteScore metrics are:
Current: A monthly CiteScore Tracker keeps you up to date on the latest progression towards the next
annual value, making the next CiteScore more predictable.
Comprehensive: Based on Scopus, the leading scientific citation database.
Clear: Values are transparent and reproducible down to individual articles in Scopus.
The scores and underlying data for more than 25,000 active journals, book series and conference
proceedings are freely available at www.scopus.com/sources or via a widget (available on each
source page on Scopus.com) or the Scopus API.
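A minimal sketch of the three-year calculation described above, using hypothetical per-year counts (the real metric is computed by Scopus from its full citation index):

```python
def citescore(citations_by_pub_year, pubs_by_year, score_year):
    """CiteScore for score_year: citations received in score_year to
    publications from the previous three years, divided by the number
    of publications in that same three-year window."""
    window = range(score_year - 3, score_year)  # e.g. 2017-2019 for 2020
    cites = sum(citations_by_pub_year[y] for y in window)
    pubs = sum(pubs_by_year[y] for y in window)
    return cites / pubs

# Hypothetical journal: citations received in 2020, keyed by the cited
# item's publication year, and publication counts per year.
cites = {2017: 300, 2018: 400, 2019: 500}
pubs = {2017: 100, 2018: 150, 2019: 150}
print(citescore(cites, pubs, 2020))  # 3.0
```

The same windowed division, with the monthly numerator updated as citations accrue, is what the CiteScore Tracker reports before the value is fixed each May.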
16. SCImago Journal Rank (SJR)
SCImago Journal Rank (SJR) is based on the concept of a transfer of
prestige between journals via their citation links. Drawing on a similar
approach to the Google PageRank algorithm - which assumes that
important websites are linked to from other important websites - SJR
weights each incoming citation to a journal by the SJR of the citing
journal, with a citation from a high-SJR source counting for more than a
citation from a low-SJR source. Like CiteScore, SJR accounts for journal
size by averaging across recent publications and is calculated annually.
SJR is also powered by Scopus data and is freely available alongside
CiteScore at www.scopus.com/sources.
17. The Impact per Publication (IPP)
The impact per publication (IPP) is calculated as the number of citations given in
the present year to publications from the past three years, divided by the total
number of publications in the past three years. IPP is fairly similar to the
well-known journal impact factor. Like the journal impact factor, IPP does
not correct for differences in citation practices between scientific fields.
IPP was previously known as RIP (raw impact per publication).
18. Immediacy Index
The Immediacy Index measures how frequently the average article from a
journal is cited within the same year as publication. This number is useful
for evaluating journals that publish cutting-edge research.
Immediacy Index Numerator - Cites to recent items:
The numerator looks at citations in a particular JCR year to a journal's
content from the same year. For example, the 2015 Immediacy Index for a
journal would take into account 2015 citations to the journal's 2015 papers.
The numerator includes citations to anything published by the journal in that
year.
Immediacy Index Denominator - Number of recent items:
The denominator counts the citable items published in the journal in that same
year. Citable items include articles and reviews.
20. h-index and its variant, the h5-index
h-index (sources: Web of Science, Google Scholar, Scopus)
1) Create a list of all your publications, in descending order of the number of
times each was cited (you can get this information from any of the sources named
above). The first article should have the most citations. Go through and number
these.
2) Look down the list to find the point at which the number of times a publication
has been cited is equal to or larger than its line (or paper) number.
Example:
Paper number   Citations
1              13
2              7
3              4
h-index = 3
*please remember that many databases will give you this number; this is
only if you'd like to calculate it manually. You can also often find calculators
online.
The h-index focuses more specifically on the impact of a single scholar rather
than an entire journal. The higher the h-index, the more highly cited output a
researcher has.
Jorge E. Hirsch, an Argentine American professor of physics at the University of
California, San Diego, invented the h-index in 2005.
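The manual procedure on this slide translates directly into code. A minimal sketch, reproducing the slide’s example (in practice the databases named above compute this for you):

```python
def h_index(citation_counts):
    """h-index: the largest h such that h of the papers have at least h
    citations each. Mirrors the manual method: sort the counts in
    descending order, then walk down until the rank exceeds the count."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# The slide's example: papers cited 13, 7 and 4 times.
print(h_index([13, 7, 4]))  # 3
```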
21. G-index
g-index (source: Harzing's Publish or Perish)
Given a list of articles ranked in decreasing order of the number of citations
they received, the g-index is the largest number g such that the top g articles
together received at least g² citations.
The g-index can be thought of as a continuation of the h-index. The difference is
that this index puts more weight on highly cited articles. The g-index was created
because scholars noticed that the h-index ignores citations to each individual
article beyond what is needed to achieve a certain h-index. This number often
complements the h-index and isn't necessarily a replacement.
Leo Egghe of Hasselt University, Belgium, suggested the g-index in 2006.
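A minimal sketch of the g-index definition. Note this version caps g at the number of papers; some formulations allow a larger g by padding the list with uncited papers:

```python
def g_index(citation_counts):
    """g-index: the largest g such that the top g papers together
    received at least g**2 citations."""
    counts = sorted(citation_counts, reverse=True)
    running_total, g = 0, 0
    for rank, cites in enumerate(counts, start=1):
        running_total += cites
        if running_total >= rank * rank:
            g = rank
    return g

# Using the h-index slide's example counts (13, 7, 4):
# 13 >= 1, 13+7 = 20 >= 4, and 20+4 = 24 >= 9, so g = 3.
print(g_index([13, 7, 4]))  # 3
```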
23. Eigenfactor score
Eigenfactor score (source: Eigenfactor.org)
• The Eigenfactor score is calculated by
eigenfactor.org.
• However, their process is very similar to
calculating impact factor and they pull
their data from the JCR as well.
• The major difference is that the
Eigenfactor score deletes references
from one article in a journal to another
in the same journal.
• This eliminates the problem of self-
citing.
• The Eigenfactor score is also a five-year
calculation.
• More information can be found
through Journal Citation Reports.
A high Eigenfactor score signals that the journal is influential within the
citation network of its discipline (self-citations are excluded from the
calculation). It's useful to look at a scholar's h-index as well as the
Eigenfactor scores of the journals they publish in in order to get a broad
sense of their impact as a researcher.
Developed by Jevin West, Carl T. Bergstrom, Ted C. Bergstrom and Ben Althouse.
24. i10-index
The i10-index is used by Google
Scholar and indicates the
number of publications that have
been cited at least 10 times.
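This one is simple enough to state directly in code, here with a hypothetical author's citation counts:

```python
def i10_index(citation_counts):
    """i10-index (used by Google Scholar): the number of publications
    that have been cited at least 10 times."""
    return sum(1 for cites in citation_counts if cites >= 10)

# Hypothetical author with five papers: three reach the 10-citation bar.
print(i10_index([25, 13, 10, 9, 2]))  # 3
```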
25. Altmetrics
The term was coined in a tweet by Jason Priem.
The term "altmetrics" (alternative metrics) is used to describe
approaches to measure the impact of scholarship by using new
social media tools such as bookmarks, links, blog postings,
inclusion in citation management tools, mentions and tweets to
measure the importance of scholarly output.
Proponents of altmetrics believe that using altmetrics will help
measure the impact of an article in a more comprehensive and
objective way than was done with more traditional scholarly
impact measures such as journal impact factor. However, there
are limits to this approach and caution should be used to not rely
on any one particular measure in evaluating the importance of
scholarship.
26. “The Umbrella Classification of Non-Citation-Based Metrics”
“Alternative metrics”:
• new ways of measuring different, non-traditional
forms of impact.
• “alternative to only using citations”, not
“alternative to citations”.
• complementary to traditional citation-based
analysis.
Article-level metrics have come to refer to
any metrics (e.g., including altmetrics) that
surround a scholarly article.
27. An article-centric approach
Measure online attention surrounding journal articles (and datasets).
Collect and deliver article-level metrics to journal publishers.
30. How do we collect data for altmetrics?
Directly from the individual tools
From publishers (views, download data)
From (some) library databases
From scholarly networks
Through aggregating tools
Examples: SlideShare views, PLOS article metrics, Web of Science usage, ResearchGate metrics, Altmetric metrics
31. Altmetrics Measures
Usage : clicks, downloads, views; Social Media - likes, shares, or tweets;
Captures - bookmarks, favorites, followers; Mentions - blog posts,
reviews, comments, or ratings
Altmetrics are often used to measure the impact of gray literature or
materials that are not formally published, such as posters and working
papers. They can also be used to provide more information about the
reach of published articles and books.
It is unlikely that altmetrics will supplant traditional metrics as the
measure of research impact. However, altmetrics can demonstrate the
reach and interest in a topic from the public, practitioners, and policy
makers
Authors should refrain from judging the impact of a work based on the
altmetrics numbers. Digging into who is saying what about the work
may provide more reliable information about the quality and influence
of a work.
38. Strategies to Maximize Your Impact
Create Unique Researcher Identifiers
Create Researcher Profiles
Share Your Research Online
Take Steps to Broaden Your Impact
39. Take Steps to Broaden Your Impact
Contribute: Contribute to Wikipedia, either in a new entry or in the text and references of an existing entry.
Discuss: Discuss your research findings on a blog or through Twitter.
Link: Add a link to your most recent research in your email signature.
Publish: Publish in open access journals, or pay to have the work available open access in a subscription journal.
Craft: Craft a work's title and abstract carefully. Repeat keywords so the work is highly relevant in search engines.
Add: Add postprints/white papers/drafts of work to your institutional repository, DigitalCommons@EMU, or to a disciplinary repository.
40. Identity Exploration
Google Scholar Profile
A Google Scholar Profile tracks your publications listed in Google Scholar,
provides the number of citations and links to the items citing your work, and
calculates your h-index. (Note: You need to have a Gmail account to track
your profile. Once you are logged in to your Gmail account, click on "My
citations" to view and edit your profile.)
Impactstory
This web-based service collects metrics and displays them with a link that
can be added to CVs. Join free with an ORCID account.
Share Your Research Online
The process of writing for publication often creates several outputs in
addition to the final journal article, book, or book chapter. Consider posting
slides from presentations, brief videos of presentations, data sets, or other
materials online with a link to the official publication.
Postprints/White Papers/Drafts of work - DigitalCommons@EMU or
subject/disciplinary repositories.
Presentation Slides - SlideShare or Speaker Deck
Videos - Vimeo or YouTube
Data Sets - Dryad or figshare (figshare can handle other outputs as well)
Code & Software - GitHub
It’s easy to dismiss “publish or perish” as an old maxim that academics use to complain about their working conditions, but research has shown that the longer this culture of pressure persists, the greater the risk to academic research integrity. As the players in this publishing game start to suffer and the cracks begin to appear, we can see real consequences.
Focus has been shifting to metrics at the article level. Why should the value of a work be judged by the journal it has been published in?