   ’  

 1
,  2
1Centre for R&D Monitoring and Dept MSI, KU Leuven, Belgium
2Centre for Science and Technology Studies, Leiden University, The Netherlands
Introduction
In the last quarter of the 20th century, bibliometrics evolved from a
sub-discipline of library and information science to an instrument for
evaluation and benchmarking (G, 2006; W, 2013).
• As a consequence, several scientometric tools became used in a context for
which they were not designed (e.g., the JIF).
• Due to the dynamics in evaluation, the focus has shifted away from macro
studies towards meso and micro studies of both actors and topics.
• More recently, the evaluation of research teams and individual scientists
has become a central issue in services based on bibliometric data.
• The rise of social networking technologies, in which all types of activities are
measured and monitored, has promoted auto-evaluation with tools such as
Google Scholar, Publish or Perish and Scholarometer.
G  W, The dos and don’ts, Vienna, 2013 2/25
Introduction
There is no single typical individual-level bibliometrics, since the goals
differ: they range from the assessment of individual proposals or the oeuvre
of applicants, through intra-institutional research coordination, to the
comparative evaluation of individuals and the benchmarking of research teams.
As a consequence, common standards for all tasks at the individual level
do not (yet) exist.
☞ Each task and its concrete field of application require flexibility on the
part of bibliometricians, but also maximum precision and accuracy.
In the following we summarise some important guidelines for the use of
bibliometrics in the context of the evaluation of individual scientists,
leading to ten dos and ten don’ts in individual-level bibliometrics.
G  W, The dos and don’ts, Vienna, 2013 3/25
Ten things you must not do …
1. Don’t reduce individual performance to a single number
• Research performance is influenced by many factors such as age,
time window, position, research domain. Within the same scholarly
environment and position, interaction with colleagues, co-operation,
mobility and activity profiles might differ considerably.
• A single number (even if based on sound methods and correct data)
can certainly not suffice to reflect the complexity of research
activity, its background and its impact adequately.
• Using such numbers to score or benchmark researchers requires taking
the working context of the researcher into consideration.
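As a toy illustration of why one number cannot capture a research profile, consider the h-index (all data below invented): two very different citation records yield exactly the same value.

```python
# Illustrative sketch with hypothetical data: two researchers with the SAME
# h-index but very different publication and citation profiles.

def h_index(citations):
    """h = largest h such that h papers have at least h citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:
            h = rank
    return h

steady = [6, 6, 5, 5, 5, 4, 4]       # consistent mid-impact output (35 citations)
one_hit = [250, 5, 5, 5, 5, 1, 0]    # one landmark paper, little else (271 citations)

print(h_index(steady), h_index(one_hit))   # identical h-index
```

The single number hides everything that distinguishes the two profiles: total impact, its distribution, and the underlying working context.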
G  W, The dos and don’ts, Vienna, 2013 4/25
Ten things you must not do …
2. Don’t use IFs as measures of quality
• Once created to supplement ISI’s Science Citation Index, the IF
evolved into an evaluation tool and seems to have become the
“common currency of scientific quality” in research evaluation,
influencing scientists’ funding and careers (S, 2004).
• However, the Impact Factor is by no means a performance measure
of individual articles nor of the authors of these papers (e.g., S,
1989, 1997).
• Most recently, campaigns against the use of the IF in individual-level
research evaluation have emerged on the part of scientists (who feel
like victims of evaluation) and of bibliometricians themselves (e.g.,
B  HL, 2012; B, 2013).
◦ The San Francisco Declaration on Research Assessment (DORA) has
started an online campaign against the use of the IF for evaluation of
researchers and research groups.
G  W, The dos and don’ts, Vienna, 2013 5/25
Ten things you must not do …
3. Don’t apply (hidden) “bibliometric filters” for selection
• Weights, thresholds or filters are defined for in-house evaluation or
for preselecting material for external use.
Some examples:
◦ A minimum IF might be required for inclusion in official publication
lists.
◦ A minimum h-index is required for receiving a doctoral degree or for
considering a grant application.
◦ A certain number of citations is necessary for promotion or for
possible approval of applications.
This practice is sometimes questionable: If filters are set, they should
always support human judgement and not pre-empt it.
☞ Also, the psychological effect of using such filters should not be
underestimated.
G  W, The dos and don’ts, Vienna, 2013 6/25
Ten things you must not do …
4. Don’t apply arbitrary weights to co-authorship
A known issue in bibliometrics is how to properly credit authors for their
contribution to papers they have co-authored.
• There is no general solution for the problem.
• Only the authors themselves can judge their own contribution.
• In some cases, pre-set weights on the basis of the sequence of
co-authors are defined and applied as strict rules.
• The sequence of co-authors, as well as the special “function” of the
corresponding author, does not always reflect the amount of their real
contribution.
• Most algorithms are, in practice, rather arbitrary and at this level
possibly misleading.
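The arbitrariness can be made concrete. The sketch below (three common counting rules, applied to a hypothetical 4-author paper) shows the first author’s credit ranging from 0.25 to roughly 0.48 depending purely on the chosen formula.

```python
# Hypothetical illustration: three credit-assignment schemes for one
# 4-author paper. None of these weights reflects real contribution.

def equal_credit(n_authors):
    # fractional counting: every author gets the same share
    return [1 / n_authors] * n_authors

def arithmetic_credit(n_authors):
    # weight proportional to n - position + 1 (first author gets most)
    weights = [n_authors - i for i in range(n_authors)]
    total = sum(weights)
    return [w / total for w in weights]

def harmonic_credit(n_authors):
    # weight proportional to 1 / position
    weights = [1 / (i + 1) for i in range(n_authors)]
    total = sum(weights)
    return [w / total for w in weights]

for scheme in (equal_credit, arithmetic_credit, harmonic_credit):
    print(scheme.__name__, [round(w, 2) for w in scheme(4)])
```

Which scheme is “right” cannot be decided from the byline alone; only the authors themselves can judge their contributions.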
G  W, The dos and don’ts, Vienna, 2013 7/25
Ten things you must not do …
5. Don’t rank scientists according to one indicator
• It is legitimate to rank candidates who have been short-listed, e.g.,
for a job position, according to relevant criteria, but ranking should
not be merely based on bibliometrics.
• Internal or public ranking of research performance without any
particular practical goal (like a candidateship) is problematic.
• There are also ethical issues and possible repercussions of the
emerging “champions-league mentality” on scientists’ research
and communication behaviour (e.g., G  D, 2003).
• A further negative effect of ranking lists (as easily accessible and
ready-made data) is that they could be used for decision-making in
contexts other than those they were prepared for.
G  W, The dos and don’ts, Vienna, 2013 8/25
Ten things you must not do …
6. Don’t merge incommensurable measures
• This problematic practice often begins with output reporting by the
scientists themselves.
◦ Citation counts appearing in CVs or applications are sometimes based
on different sources (WoS, Scopus, Google Scholar).
• Incommensurable sources combined with inappropriate reference
standards make bibliometric indicators almost completely useless
(cf. W, 1993).
• Do not allow users to merge bibliometric results from different
sources without having checked their compatibility.
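One way to enforce such a check programmatically is to refuse aggregation across sources. A minimal sketch, with an invented record format:

```python
# Minimal sketch (hypothetical record format): refuse to aggregate citation
# counts unless every record comes from the same, explicitly named source.

records = [
    {"doi": "10.1000/a", "citations": 12, "source": "WoS"},
    {"doi": "10.1000/b", "citations": 30, "source": "Scopus"},
]

def total_citations(records):
    sources = {r["source"] for r in records}
    if len(sources) > 1:
        # mixed sources: counts are not comparable, so refuse to add them
        raise ValueError(f"incommensurable sources: {sorted(sources)}")
    return sum(r["citations"] for r in records)
```

Calling `total_citations(records)` on the mixed list above raises an error instead of silently producing a meaningless sum.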
G  W, The dos and don’ts, Vienna, 2013 9/25
Ten things you must not do …
7. Don’t use flawed statistics
• Thresholds and reference standards for the assignment to
performance classes are proven tools in bibliometrics (e.g., for
identifying industrious authors, or uncited and highly cited papers).
◦ This might even be more advantageous than using the original
observations.
• However, looking at the recent literature, one finds a plethora of
formulas for “improved” measures or composite indicators lacking
any serious mathematical background.
• Small datasets are typical of this aggregation level: this might
increase the bias or result in serious errors, and standard
(mathematical) statistical methods are often at or beyond their limit
here.
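A small worked example of the small-sample problem (invented citation counts): a single outlier paper dominates the mean citation rate of a small oeuvre, while a median-based class assignment barely moves.

```python
# Sketch with invented numbers: on a small oeuvre, one highly cited paper
# dominates the mean citation rate; the median is far less sensitive.

import statistics

small_oeuvre = [0, 1, 2, 2, 3, 4]
with_outlier = small_oeuvre + [400]

print(statistics.mean(small_oeuvre), statistics.mean(with_outlier))
print(statistics.median(small_oeuvre), statistics.median(with_outlier))
```

The mean jumps from 2 to almost 59 citations per paper, while the median stays at 2; with only a handful of observations, no amount of formula tweaking makes the mean a stable basis for comparison.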
G  W, The dos and don’ts, Vienna, 2013 10/25
Ten things you must not do …
8. Don’t blindly trust one-hit wonders
• Do not evaluate scientists on the basis of one top paper and do not
encourage scientists to prize visibility over targeting in their
publication strategy.
◦ Breakthroughs are often based on a single theoretical concept or way
of viewing the world. They may be published in a paper that then
attracts star attention.
◦ However, breakthroughs may also be based on a life-long piecing
together of evidence published in a series of moderately cited papers.
☞ Always weigh the importance of highly cited papers against the
value of a series of sustained publishing. Don’t look at top
performance only; consider the complete life’s work or the research
output created in the time windows under study.
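One simple diagnostic, sketched below with invented numbers, is the share of total citations contributed by the single most cited paper; a value close to 1 signals a one-hit profile that deserves qualitative scrutiny rather than blind trust.

```python
# Hypothetical sketch: contrast a researcher's single most cited paper with
# the sustained impact of the rest of the oeuvre in the studied window.

def top_paper_share(citations):
    """Fraction of all citations concentrated in the most cited paper."""
    total = sum(citations)
    return max(citations) / total if total else 0.0

one_hit = [300, 2, 1, 0, 0]          # nearly all impact in one paper
sustained = [40, 35, 30, 28, 25]     # impact spread over the oeuvre

print(round(top_paper_share(one_hit), 2))
print(round(top_paper_share(sustained), 2))
```

Such a ratio is only a pointer for further inspection, not a verdict: as the slide notes, a breakthrough may live in either kind of profile.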
G  W, The dos and don’ts, Vienna, 2013 11/25
Ten things you must not do …
9. Don’t compare apples and oranges
• Figures are always comparable. And contents?
• Normalisation might help make measures comparable but only like
with like.
• Research and communication in different domains are structured
differently. The analysis of research performance in the humanities,
mathematics and the life sciences needs different concepts and
approaches.
◦ Simply weighting publication types (monographs, articles, working
papers, etc.) and normalising citation rates will just cover up, but not
eliminate, the differences.
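The idea of comparing like with like can be sketched as a field normalisation: each paper’s citations are divided by an expected value for its own field and year. The baseline figures below are invented for illustration.

```python
# Sketch of field normalisation (invented reference values): each paper's
# citation count is divided by the mean citation rate of its own field and
# publication year, so that only like is compared with like.

field_baseline = {            # hypothetical expected citations per paper
    ("mathematics", 2010): 3.0,
    ("life sciences", 2010): 25.0,
}

def normalised_score(papers):
    """Mean of observed/expected ratios over a set of papers."""
    ratios = [c / field_baseline[(field, year)] for field, year, c in papers]
    return sum(ratios) / len(ratios)

mathematician = [("mathematics", 2010, 6)]    # 6 citations, field mean 3
biologist = [("life sciences", 2010, 50)]     # 50 citations, field mean 25

print(normalised_score(mathematician), normalised_score(biologist))
```

Both score 2.0 despite raw counts of 6 versus 50, but, as the slide warns, such normalisation only masks structural differences between domains; it does not remove them.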
G  W, The dos and don’ts, Vienna, 2013 12/25
Ten things you must not do …
10. Don’t allow deadlines and workload to compel you to drop
good practices
• Reviewers and users in research management are often overwhelmed
by the flood of submissions, applications and proposals, combined with
tight deadlines and a lack of personnel.
◦ Readily available data like IFs, gross citation counts and the h-index
are sometimes used to make decisions on proposals and candidates.
• Don’t give in to time pressure and heavy workload when you have
responsible tasks in research assessment and the careers of scientists
and the future of research teams are at stake, and don’t allow
tight deadlines to compel you to reduce evaluation to the use of
“handy” numbers.
G  W, The dos and don’ts, Vienna, 2013 13/25
Ten things you might do …
1. Individual-level bibliometrics is also statistics
• Basic counts (numbers of publications and citations) are important
measures in bibliometrics at the individual level.
• All statistics derived from these counts require a sufficiently large
publication output to allow valid conclusions.
• If this condition is met, standard bibliometric techniques can be applied,
but special caution is always called for at this level:
◦ A longer publication period might also cover different career
progression and activity dynamics in the academic life of scientists.
◦ Assessment, external benchmarking and comparisons require the use
of appropriate reference standards, notably in interdisciplinary
research or pluridisciplinary activities.
◦ Special attention should be paid to group authorship (group
composition and the contribution credit assigned to each author).
G  W, The dos and don’ts, Vienna, 2013 14/25
Ten things you might do …
2. Analyse collaboration profiles of researchers
• Bibliometricians might analyse a scientist’s position among
his/her collaborators and co-authors. In particular, the following
questions can be answered:
◦ Do authors prefer to work alone, work in stable teams, or collaborate
occasionally?
◦ Who are the collaborators, and are the scientists ‘junior’, ‘peer’
or ‘senior’ partners in these relationships?
• This might help recognise the scientist’s own role in his/her research
environment, but final conclusions should be drawn in combination
with “qualitative methods”.
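These questions can be approached with a simple profile computed from author lists. A sketch with hypothetical publication data:

```python
# Hypothetical sketch: derive a simple collaboration profile from a list of
# author lists, answering whether a focal author works alone, in a stable
# team, or with occasional collaborators.

from collections import Counter

def collaboration_profile(focal, papers):
    solo = sum(1 for authors in papers if authors == [focal])
    # count how often each co-author appears alongside the focal author
    coauthors = Counter(a for authors in papers for a in authors if a != focal)
    return {
        "papers": len(papers),
        "solo": solo,
        "top_collaborators": coauthors.most_common(2),
    }

papers = [["A"], ["A", "B"], ["A", "B", "C"], ["A", "B"], ["A", "D"]]
print(collaboration_profile("A", papers))
```

Here the recurring co-author B suggests a stable team, while C and D are occasional partners; as the slide stresses, such counts only support, and never replace, qualitative judgement about the scientist’s role.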
G  W, The dos and don’ts, Vienna, 2013 15/25
Ten things you might do …
3. Always combine quantitative and qualitative methods
At this level of aggregation, the combination of bibliometrics with
traditional qualitative methods is not only important but indispensable.
• On the one hand, bibliometrics can be used to supplement the
sometimes subjectively coloured qualitative methods by providing
“objective” figures to underpin and confirm arguments, or to make
the assessment more concrete.
• If discrepancies between the two methods are found, try to
investigate and understand what the possible reasons for the
different results could be.
☞ This might even enrich and improve the assessment.
G  W, The dos and don’ts, Vienna, 2013 16/25
Ten things you might do …
4. Use citation context analysis
The concept of “citation context” analysis was first introduced in 1973 by
M and later suggested for use in Hungary (B, 2006).
• Here citation context does not refer to the position where a citation is
placed in an article, or to its distance from other citations in the same
document. It refers to the textual and content-related environment of the
citation in question.
• The aim is to show that a research result is not only referred to but is
indeed used in colleagues’ research and/or scholarly discussed.
⇒ The context might be positive or negative.
“Citation context” represents an approach in between qualitative and
quantitative methods and can be used in the case of individual proposals
and applications.
G  W, The dos and don’ts, Vienna, 2013 17/25
Ten things you might do …
5. Analyse subject profiles
Many scientists do research in an interdisciplinary environment. Even
their reviewers might work in different panels. The situation is even
worse for “polydisciplinary” scientists.
In principle, three basic approaches are possible:
1. Considering all activities as one total activity and “defining” an
adequate topic for benchmarking
2. Splitting up the profile into its components (which might, of course,
overlap) for assessment
3. Neglecting activities outside the actual scope of assessment
Which of the above models should be applied depends on the task.
More research on these issues is urgently needed.
G  W, The dos and don’ts, Vienna, 2013 18/25
Ten things you might do …
6. Make an explicit choice for oeuvre or time-window analysis
The complete oeuvre of a scientist can serve as the basis of an individual
assessment. This option should preferably not be used in comparative
analysis.
• The reasons are differences in age, profile and position, and the
complexity of a scientist’s career.
Time-window analysis is better suited for comparison, provided, of course,
that like is compared with like and the publication periods and citation
windows conform.
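A sketch of such a time-window setup (hypothetical records): only papers from a fixed publication window are counted, and each paper’s citations are truncated to the same citation window, so that scientists can be compared on equal terms.

```python
# Sketch (hypothetical record format): count only papers from a fixed
# publication window, and only citations received within a fixed citation
# window after publication, so that like is compared with like.

def windowed_counts(papers, pub_start, pub_end, citation_window):
    """papers: list of (pub_year, {citing_year: count})."""
    selected = [p for p in papers if pub_start <= p[0] <= pub_end]
    total = 0
    for pub_year, cites_by_year in selected:
        total += sum(c for year, c in cites_by_year.items()
                     if pub_year <= year < pub_year + citation_window)
    return len(selected), total

papers = [
    (2008, {2008: 1, 2009: 4, 2012: 10}),   # the 2012 citations fall outside
    (2009, {2010: 3, 2011: 2}),
    (2013, {2013: 5}),                      # outside the publication window
]
print(windowed_counts(papers, 2008, 2010, citation_window=3))
```

Applying identical windows to every scientist under comparison is what makes the resulting counts commensurable in the first place.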
G  W, The dos and don’ts, Vienna, 2013 19/25
Ten things you might do …
7. Combine bibliometrics with career analysis
This applies to the assessment on the basis of a scientist’s oeuvre.
• Bibliometrics can be used to zoom in on a scientist’s career. Here the
evolution of publication activity, citation impact, mobility and
changing collaboration patterns can be monitored.
• It is not easy to quantify these observations, and the purpose is not to
build indicators for possible comparison, but to use bibliometric data
to depict, visually and numerically, important aspects of the progress
of a scientist’s career.
Some preliminary results have been published by Z  G (2012).
G  W, The dos and don’ts, Vienna, 2013 20/25
Ten things you might do …
8. Clean bibliographic data carefully and use external sources
Bibliometric data at this level are extremely sensitive. This implies that
the input data, too, must be absolutely clean and accurate.
• In order to achieve clean data, publication lists and CVs should be
used where possible. This is important for two reasons:
◦ External sources help improve the quality of the data.
◦ Responsibility is shared with the authors or institutes.
• If the assessment is not confidential, the researchers themselves might
be involved in the bibliometric exercise.
• Otherwise, scientists might be asked to provide data according to a
given standard protocol, which can and should be developed in
interaction between the user and bibliometricians.
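A minimal sketch of such a reconciliation step, using invented DOIs: the database download is checked against the author’s own publication list before any indicator is computed, and every discrepancy is surfaced for human follow-up.

```python
# Minimal sketch (hypothetical identifiers): verify a database download
# against the author's own publication list by DOI, flagging records that
# are missing on either side before any indicator is computed.

def reconcile(db_dois, cv_dois):
    db, cv = set(db_dois), set(cv_dois)
    return {
        "matched": sorted(db & cv),
        "missing_from_db": sorted(cv - db),     # candidates to retrieve
        "not_on_cv": sorted(db - cv),           # possible misattributions
    }

report = reconcile(
    db_dois=["10.1/x", "10.1/y", "10.1/z"],
    cv_dois=["10.1/x", "10.1/y", "10.1/w"],
)
print(report)
```

Each flagged record is then resolved with the author or institute, which is exactly how responsibility for data quality gets shared.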
G  W, The dos and don’ts, Vienna, 2013 21/25
Ten things you might do …
9. Even some “don’ts” are not taboo if properly applied
There is no reason to condemn the often incorrectly used Impact Factor
and h-index. They can provide supplementary information if they are
used in combination with qualitative methods and are not used as the
only decision criterion.
Example:
• Good practice (h-index as supporting argument):
“The exceptionally high h-index of the applicant confirms his/her
international standing attested to by our experts.”
• Questionable use (h-index as decision criterion):
“We are inclined to support this scientist because his/her h-index
distinctly exceeds that of all other applicants.”
G  W, The dos and don’ts, Vienna, 2013 22/25
Ten things you might do …
10. Help users to interpret and apply your results
At any level of aggregation, bibliometric methods should be
well documented. This applies above all to the level of individual scientists
and research teams.
• Bibliometricians should support users in a transparent manner to
guarantee replicability of bibliometric data.
• They should issue clear instructions concerning the use and
interpretation of their results.
• They should also stress the limitations of the validity of these results.
G  W, The dos and don’ts, Vienna, 2013 23/25
Conclusions
• The (added) value of, or damage done by, bibliometrics in individual-level
evaluation depends on how and in what context bibliometrics is
applied.
• In most situations, the context should determine which bibliometric
methods should be applied, and how.
• Soundness and validity of methods are all the more necessary at the
individual level, but not yet sufficient. Accuracy, reliability and
completeness of sources are an absolute imperative at this level.
• We recommend always using individual-level bibliometrics on the
basis of the particular research portfolio. The best way to do this
may be to design individual researcher profiles combining
bibliometrics with qualitative information about careers and
working contexts. The profile includes the research mission and
goals of the researcher.
G  W, The dos and don’ts, Vienna, 2013 24/25
Acknowledgement
The authors would like to thank I R and J G for
their contribution to the idea of a special session on this important issue
as well as the organisers of the ISSI 2013 conference for having given us
the opportunity to organise this session.
We also wish to thank L W and R C for their
useful comments.
Journal and author impact measures Assessing your impact (h-index and beyond)Aboul Ella Hassanien
 
Calais, Gerald j[1]. Teacher Education, www.nationalforum.com
Calais, Gerald j[1]. Teacher Education, www.nationalforum.comCalais, Gerald j[1]. Teacher Education, www.nationalforum.com
Calais, Gerald j[1]. Teacher Education, www.nationalforum.comWilliam Kritsonis
 
Calais, gerald j[1]. h index-nftej-v19-n3-2009
Calais, gerald j[1]. h index-nftej-v19-n3-2009Calais, gerald j[1]. h index-nftej-v19-n3-2009
Calais, gerald j[1]. h index-nftej-v19-n3-2009William Kritsonis
 
Scopus: Research Metrics and Indicators
Scopus: Research Metrics and Indicators Scopus: Research Metrics and Indicators
Scopus: Research Metrics and Indicators Michaela Kurschildgen
 
Research and Publication Ethics_Misconduct.pptx
Research and Publication Ethics_Misconduct.pptxResearch and Publication Ethics_Misconduct.pptx
Research and Publication Ethics_Misconduct.pptxVijayKumar17076
 
Unveiling the Ecosystem of Science: How can we characterize and assess divers...
Unveiling the Ecosystem of Science: How can we characterize and assess divers...Unveiling the Ecosystem of Science: How can we characterize and assess divers...
Unveiling the Ecosystem of Science: How can we characterize and assess divers...Nicolas Robinson-Garcia
 
Durham Part Time Distance Research Student 2019: Sample Library Slides
Durham Part Time Distance Research Student 2019: Sample Library SlidesDurham Part Time Distance Research Student 2019: Sample Library Slides
Durham Part Time Distance Research Student 2019: Sample Library SlidesJamie Bisset
 
Alternatives to Empirical Research (Dissertation by literature review) Dec 23...
Alternatives to Empirical Research (Dissertation by literature review) Dec 23...Alternatives to Empirical Research (Dissertation by literature review) Dec 23...
Alternatives to Empirical Research (Dissertation by literature review) Dec 23...hashem Al-Shamiri
 
2012.06.07 Maximising the Impact of Social Sciences Research
2012.06.07 Maximising the Impact of Social Sciences Research2012.06.07 Maximising the Impact of Social Sciences Research
2012.06.07 Maximising the Impact of Social Sciences ResearchNUI Galway
 
Can Bibliometric and Scientometric Measures be Used to Assess Research Quali...
Can Bibliometric and Scientometric Measures be Used to Assess Research Quali...Can Bibliometric and Scientometric Measures be Used to Assess Research Quali...
Can Bibliometric and Scientometric Measures be Used to Assess Research Quali...Yasar Tonta
 
DORA and the reinvention of research assessment
DORA and the reinvention of research assessmentDORA and the reinvention of research assessment
DORA and the reinvention of research assessmentMark Patterson
 
Quantitative research
Quantitative researchQuantitative research
Quantitative researchTooba Kanwal
 
TYPES_OFBUSINESSRESEARCH.pdf
TYPES_OFBUSINESSRESEARCH.pdfTYPES_OFBUSINESSRESEARCH.pdf
TYPES_OFBUSINESSRESEARCH.pdfJubilinAlbania
 
In metrics we trust?
In metrics we trust?In metrics we trust?
In metrics we trust?ORCID, Inc
 
Altmetrics: An Overview
Altmetrics: An OverviewAltmetrics: An Overview
Altmetrics: An OverviewPallab Pradhan
 
Develop three research questions on a topic for which you are
Develop three research questions on a topic for which you are Develop three research questions on a topic for which you are
Develop three research questions on a topic for which you are suzannewarch
 

Ähnlich wie The dos and don'ts in individudal level bibliometrics (20)

Journal and author impact measures Assessing your impact (h-index and beyond)
Journal and author impact measures Assessing your impact (h-index and beyond)Journal and author impact measures Assessing your impact (h-index and beyond)
Journal and author impact measures Assessing your impact (h-index and beyond)
 
Calais, Gerald j[1]. Teacher Education, www.nationalforum.com
Calais, Gerald j[1]. Teacher Education, www.nationalforum.comCalais, Gerald j[1]. Teacher Education, www.nationalforum.com
Calais, Gerald j[1]. Teacher Education, www.nationalforum.com
 
Calais, gerald j[1]. h index-nftej-v19-n3-2009
Calais, gerald j[1]. h index-nftej-v19-n3-2009Calais, gerald j[1]. h index-nftej-v19-n3-2009
Calais, gerald j[1]. h index-nftej-v19-n3-2009
 
Preparing research proposal icphi2013
Preparing research proposal icphi2013Preparing research proposal icphi2013
Preparing research proposal icphi2013
 
Scopus: Research Metrics and Indicators
Scopus: Research Metrics and Indicators Scopus: Research Metrics and Indicators
Scopus: Research Metrics and Indicators
 
Research and Publication Ethics_Misconduct.pptx
Research and Publication Ethics_Misconduct.pptxResearch and Publication Ethics_Misconduct.pptx
Research and Publication Ethics_Misconduct.pptx
 
Unveiling the Ecosystem of Science: How can we characterize and assess divers...
Unveiling the Ecosystem of Science: How can we characterize and assess divers...Unveiling the Ecosystem of Science: How can we characterize and assess divers...
Unveiling the Ecosystem of Science: How can we characterize and assess divers...
 
Anu digital research literacies
Anu digital research literaciesAnu digital research literacies
Anu digital research literacies
 
Durham Part Time Distance Research Student 2019: Sample Library Slides
Durham Part Time Distance Research Student 2019: Sample Library SlidesDurham Part Time Distance Research Student 2019: Sample Library Slides
Durham Part Time Distance Research Student 2019: Sample Library Slides
 
Lern, jan 2015, digital media slides
Lern, jan 2015, digital media slidesLern, jan 2015, digital media slides
Lern, jan 2015, digital media slides
 
Alternatives to Empirical Research (Dissertation by literature review) Dec 23...
Alternatives to Empirical Research (Dissertation by literature review) Dec 23...Alternatives to Empirical Research (Dissertation by literature review) Dec 23...
Alternatives to Empirical Research (Dissertation by literature review) Dec 23...
 
2012.06.07 Maximising the Impact of Social Sciences Research
2012.06.07 Maximising the Impact of Social Sciences Research2012.06.07 Maximising the Impact of Social Sciences Research
2012.06.07 Maximising the Impact of Social Sciences Research
 
Can Bibliometric and Scientometric Measures be Used to Assess Research Quali...
Can Bibliometric and Scientometric Measures be Used to Assess Research Quali...Can Bibliometric and Scientometric Measures be Used to Assess Research Quali...
Can Bibliometric and Scientometric Measures be Used to Assess Research Quali...
 
DORA and the reinvention of research assessment
DORA and the reinvention of research assessmentDORA and the reinvention of research assessment
DORA and the reinvention of research assessment
 
Quantitative research
Quantitative researchQuantitative research
Quantitative research
 
TYPES_OFBUSINESSRESEARCH.pdf
TYPES_OFBUSINESSRESEARCH.pdfTYPES_OFBUSINESSRESEARCH.pdf
TYPES_OFBUSINESSRESEARCH.pdf
 
In metrics we trust?
In metrics we trust?In metrics we trust?
In metrics we trust?
 
Altmetrics: An Overview
Altmetrics: An OverviewAltmetrics: An Overview
Altmetrics: An Overview
 
Health Evidence™ Quality Assessment Tool (Sample Answers - May 10, 2018 webinar)
Health Evidence™ Quality Assessment Tool (Sample Answers - May 10, 2018 webinar)Health Evidence™ Quality Assessment Tool (Sample Answers - May 10, 2018 webinar)
Health Evidence™ Quality Assessment Tool (Sample Answers - May 10, 2018 webinar)
 
Develop three research questions on a topic for which you are
Develop three research questions on a topic for which you are Develop three research questions on a topic for which you are
Develop three research questions on a topic for which you are
 

Kürzlich hochgeladen

Extensible Python: Robustness through Addition - PyCon 2024
Extensible Python: Robustness through Addition - PyCon 2024Extensible Python: Robustness through Addition - PyCon 2024
Extensible Python: Robustness through Addition - PyCon 2024Patrick Viafore
 
What's New in Teams Calling, Meetings and Devices April 2024
What's New in Teams Calling, Meetings and Devices April 2024What's New in Teams Calling, Meetings and Devices April 2024
What's New in Teams Calling, Meetings and Devices April 2024Stephanie Beckett
 
WebAssembly is Key to Better LLM Performance
WebAssembly is Key to Better LLM PerformanceWebAssembly is Key to Better LLM Performance
WebAssembly is Key to Better LLM PerformanceSamy Fodil
 
Behind the Scenes From the Manager's Chair: Decoding the Secrets of Successfu...
Behind the Scenes From the Manager's Chair: Decoding the Secrets of Successfu...Behind the Scenes From the Manager's Chair: Decoding the Secrets of Successfu...
Behind the Scenes From the Manager's Chair: Decoding the Secrets of Successfu...CzechDreamin
 
IESVE for Early Stage Design and Planning
IESVE for Early Stage Design and PlanningIESVE for Early Stage Design and Planning
IESVE for Early Stage Design and PlanningIES VE
 
Syngulon - Selection technology May 2024.pdf
Syngulon - Selection technology May 2024.pdfSyngulon - Selection technology May 2024.pdf
Syngulon - Selection technology May 2024.pdfSyngulon
 
Google I/O Extended 2024 Warsaw
Google I/O Extended 2024 WarsawGoogle I/O Extended 2024 Warsaw
Google I/O Extended 2024 WarsawGDSC PJATK
 
The UX of Automation by AJ King, Senior UX Researcher, Ocado
The UX of Automation by AJ King, Senior UX Researcher, OcadoThe UX of Automation by AJ King, Senior UX Researcher, Ocado
The UX of Automation by AJ King, Senior UX Researcher, OcadoUXDXConf
 
FDO for Camera, Sensor and Networking Device – Commercial Solutions from VinC...
FDO for Camera, Sensor and Networking Device – Commercial Solutions from VinC...FDO for Camera, Sensor and Networking Device – Commercial Solutions from VinC...
FDO for Camera, Sensor and Networking Device – Commercial Solutions from VinC...FIDO Alliance
 
AI revolution and Salesforce, Jiří Karpíšek
AI revolution and Salesforce, Jiří KarpíšekAI revolution and Salesforce, Jiří Karpíšek
AI revolution and Salesforce, Jiří KarpíšekCzechDreamin
 
Measures in SQL (a talk at SF Distributed Systems meetup, 2024-05-22)
Measures in SQL (a talk at SF Distributed Systems meetup, 2024-05-22)Measures in SQL (a talk at SF Distributed Systems meetup, 2024-05-22)
Measures in SQL (a talk at SF Distributed Systems meetup, 2024-05-22)Julian Hyde
 
Salesforce Adoption – Metrics, Methods, and Motivation, Antone Kom
Salesforce Adoption – Metrics, Methods, and Motivation, Antone KomSalesforce Adoption – Metrics, Methods, and Motivation, Antone Kom
Salesforce Adoption – Metrics, Methods, and Motivation, Antone KomCzechDreamin
 
The Metaverse: Are We There Yet?
The  Metaverse:    Are   We  There  Yet?The  Metaverse:    Are   We  There  Yet?
The Metaverse: Are We There Yet?Mark Billinghurst
 
Speed Wins: From Kafka to APIs in Minutes
Speed Wins: From Kafka to APIs in MinutesSpeed Wins: From Kafka to APIs in Minutes
Speed Wins: From Kafka to APIs in Minutesconfluent
 
AI presentation and introduction - Retrieval Augmented Generation RAG 101
AI presentation and introduction - Retrieval Augmented Generation RAG 101AI presentation and introduction - Retrieval Augmented Generation RAG 101
AI presentation and introduction - Retrieval Augmented Generation RAG 101vincent683379
 
Custom Approval Process: A New Perspective, Pavel Hrbacek & Anindya Halder
Custom Approval Process: A New Perspective, Pavel Hrbacek & Anindya HalderCustom Approval Process: A New Perspective, Pavel Hrbacek & Anindya Halder
Custom Approval Process: A New Perspective, Pavel Hrbacek & Anindya HalderCzechDreamin
 
Linux Foundation Edge _ Overview of FDO Software Components _ Randy at Intel.pdf
Linux Foundation Edge _ Overview of FDO Software Components _ Randy at Intel.pdfLinux Foundation Edge _ Overview of FDO Software Components _ Randy at Intel.pdf
Linux Foundation Edge _ Overview of FDO Software Components _ Randy at Intel.pdfFIDO Alliance
 
Oauth 2.0 Introduction and Flows with MuleSoft
Oauth 2.0 Introduction and Flows with MuleSoftOauth 2.0 Introduction and Flows with MuleSoft
Oauth 2.0 Introduction and Flows with MuleSoftshyamraj55
 
Structuring Teams and Portfolios for Success
Structuring Teams and Portfolios for SuccessStructuring Teams and Portfolios for Success
Structuring Teams and Portfolios for SuccessUXDXConf
 
Simplified FDO Manufacturing Flow with TPMs _ Liam at Infineon.pdf
Simplified FDO Manufacturing Flow with TPMs _ Liam at Infineon.pdfSimplified FDO Manufacturing Flow with TPMs _ Liam at Infineon.pdf
Simplified FDO Manufacturing Flow with TPMs _ Liam at Infineon.pdfFIDO Alliance
 

Kürzlich hochgeladen (20)

Extensible Python: Robustness through Addition - PyCon 2024
Extensible Python: Robustness through Addition - PyCon 2024Extensible Python: Robustness through Addition - PyCon 2024
Extensible Python: Robustness through Addition - PyCon 2024
 
What's New in Teams Calling, Meetings and Devices April 2024
What's New in Teams Calling, Meetings and Devices April 2024What's New in Teams Calling, Meetings and Devices April 2024
What's New in Teams Calling, Meetings and Devices April 2024
 
WebAssembly is Key to Better LLM Performance
WebAssembly is Key to Better LLM PerformanceWebAssembly is Key to Better LLM Performance
WebAssembly is Key to Better LLM Performance
 
Behind the Scenes From the Manager's Chair: Decoding the Secrets of Successfu...
Behind the Scenes From the Manager's Chair: Decoding the Secrets of Successfu...Behind the Scenes From the Manager's Chair: Decoding the Secrets of Successfu...
Behind the Scenes From the Manager's Chair: Decoding the Secrets of Successfu...
 
IESVE for Early Stage Design and Planning
IESVE for Early Stage Design and PlanningIESVE for Early Stage Design and Planning
IESVE for Early Stage Design and Planning
 
Syngulon - Selection technology May 2024.pdf
Syngulon - Selection technology May 2024.pdfSyngulon - Selection technology May 2024.pdf
Syngulon - Selection technology May 2024.pdf
 
Google I/O Extended 2024 Warsaw
Google I/O Extended 2024 WarsawGoogle I/O Extended 2024 Warsaw
Google I/O Extended 2024 Warsaw
 
The UX of Automation by AJ King, Senior UX Researcher, Ocado
The UX of Automation by AJ King, Senior UX Researcher, OcadoThe UX of Automation by AJ King, Senior UX Researcher, Ocado
The UX of Automation by AJ King, Senior UX Researcher, Ocado
 
FDO for Camera, Sensor and Networking Device – Commercial Solutions from VinC...
FDO for Camera, Sensor and Networking Device – Commercial Solutions from VinC...FDO for Camera, Sensor and Networking Device – Commercial Solutions from VinC...
FDO for Camera, Sensor and Networking Device – Commercial Solutions from VinC...
 
AI revolution and Salesforce, Jiří Karpíšek
AI revolution and Salesforce, Jiří KarpíšekAI revolution and Salesforce, Jiří Karpíšek
AI revolution and Salesforce, Jiří Karpíšek
 
Measures in SQL (a talk at SF Distributed Systems meetup, 2024-05-22)
Measures in SQL (a talk at SF Distributed Systems meetup, 2024-05-22)Measures in SQL (a talk at SF Distributed Systems meetup, 2024-05-22)
Measures in SQL (a talk at SF Distributed Systems meetup, 2024-05-22)
 
Salesforce Adoption – Metrics, Methods, and Motivation, Antone Kom
Salesforce Adoption – Metrics, Methods, and Motivation, Antone KomSalesforce Adoption – Metrics, Methods, and Motivation, Antone Kom
Salesforce Adoption – Metrics, Methods, and Motivation, Antone Kom
 
The Metaverse: Are We There Yet?
The  Metaverse:    Are   We  There  Yet?The  Metaverse:    Are   We  There  Yet?
The Metaverse: Are We There Yet?
 
Speed Wins: From Kafka to APIs in Minutes
Speed Wins: From Kafka to APIs in MinutesSpeed Wins: From Kafka to APIs in Minutes
Speed Wins: From Kafka to APIs in Minutes
 
AI presentation and introduction - Retrieval Augmented Generation RAG 101
AI presentation and introduction - Retrieval Augmented Generation RAG 101AI presentation and introduction - Retrieval Augmented Generation RAG 101
AI presentation and introduction - Retrieval Augmented Generation RAG 101
 
Custom Approval Process: A New Perspective, Pavel Hrbacek & Anindya Halder
Custom Approval Process: A New Perspective, Pavel Hrbacek & Anindya HalderCustom Approval Process: A New Perspective, Pavel Hrbacek & Anindya Halder
Custom Approval Process: A New Perspective, Pavel Hrbacek & Anindya Halder
 
Linux Foundation Edge _ Overview of FDO Software Components _ Randy at Intel.pdf
Linux Foundation Edge _ Overview of FDO Software Components _ Randy at Intel.pdfLinux Foundation Edge _ Overview of FDO Software Components _ Randy at Intel.pdf
Linux Foundation Edge _ Overview of FDO Software Components _ Randy at Intel.pdf
 
Oauth 2.0 Introduction and Flows with MuleSoft
Oauth 2.0 Introduction and Flows with MuleSoftOauth 2.0 Introduction and Flows with MuleSoft
Oauth 2.0 Introduction and Flows with MuleSoft
 
Structuring Teams and Portfolios for Success
Structuring Teams and Portfolios for SuccessStructuring Teams and Portfolios for Success
Structuring Teams and Portfolios for Success
 
Simplified FDO Manufacturing Flow with TPMs _ Liam at Infineon.pdf
Simplified FDO Manufacturing Flow with TPMs _ Liam at Infineon.pdfSimplified FDO Manufacturing Flow with TPMs _ Liam at Infineon.pdf
Simplified FDO Manufacturing Flow with TPMs _ Liam at Infineon.pdf
 

The dos and don'ts in individual-level bibliometrics

  • 1. The dos and don'ts in individual-level bibliometrics
Wolfgang Glänzel 1, Paul Wouters 2
1 Centre for R&D Monitoring and Dept MSI, KU Leuven, Belgium
2 Centre for Science and Technology Studies, Leiden University, The Netherlands
  • 2. Introduction
In the last quarter of the 20th century, bibliometrics evolved from a sub-discipline of library and information science into an instrument for evaluation and benchmarking (G, 2006; W, 2013).
• As a consequence, several scientometric tools came to be used in contexts for which they were not designed (e.g., the JIF).
• Due to the dynamics of evaluation, the focus has shifted away from macro studies towards meso and micro studies of both actors and topics.
• More recently, the evaluation of research teams and individual scientists has become a central issue in services based on bibliometric data.
• The rise of social networking technologies, in which all types of activities are measured and monitored, has promoted auto-evaluation with tools such as Google Scholar, Publish or Perish and Scholarometer.
Glänzel & Wouters, The dos and don'ts, Vienna, 2013 (slide 2/25)
  • 4. Introduction
There is not one typical individual-level bibliometrics, since the goals differ: they range from the assessment of individual proposals or an applicant's oeuvre, through intra-institutional research coordination, to the comparative evaluation of individuals and the benchmarking of research teams. As a consequence, common standards for all tasks at the individual level do not (yet) exist.
☞ Each task and each concrete field of application requires flexibility on the part of bibliometricians, but also maximum precision and accuracy.
In the following we summarise some important guidelines for the use of bibliometrics in the evaluation of individual scientists, leading to ten dos and ten don'ts in individual-level bibliometrics.
  • 6. Ten things you must not do …
1. Don't reduce individual performance to a single number
• Research performance is influenced by many factors such as age, time window, position and research domain. Even within the same scholarly environment and position, interaction with colleagues, co-operation, mobility and activity profiles may differ considerably.
• A single number (even if based on sound methods and correct data) certainly cannot suffice to adequately reflect the complexity of research activity, its background and its impact.
• Using such numbers to score or benchmark researchers requires taking the researcher's working context into consideration.
  • 7. Ten things you must not do …
2. Don't use IFs as measures of quality
• Once created to supplement ISI's Science Citation Index, the IF evolved into an evaluation tool and seems to have become the "common currency of scientific quality" in research evaluation, influencing scientists' funding and careers (S, 2004).
• However, the Impact Factor is by no means a performance measure of individual articles, nor of the authors of those papers (e.g., S, 1989, 1997).
• Most recently, campaigns against the use of the IF in individual-level research evaluation have emerged among scientists (who feel like victims of evaluation) and bibliometricians themselves (e.g., B & HL, 2012; B, 2013).
◦ The San Francisco Declaration on Research Assessment (DORA) has started an online campaign against the use of the IF for the evaluation of researchers and research groups.
  • 8. Ten things you must not do …
3. Don't apply (hidden) "bibliometric filters" for selection
• Weights, thresholds or filters are defined for in-house evaluation or for preselecting material for external use. Some examples:
◦ A minimum IF might be required for inclusion in official publication lists.
◦ A minimum h-index is required for receiving a doctoral degree or for considering a grant application.
◦ A certain number of citations is necessary for promotion or for possible approval of applications.
This practice is sometimes questionable: if filters are set, they should always support human judgement, not pre-empt it.
☞ The psychological effect of using such filters should also not be underestimated.
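To make the h-index filter on slide 8 concrete, here is a minimal Python sketch; the citation counts and the cutoff value are invented for illustration only, and the point is exactly the one the slide makes: such a mechanical filter pre-empts judgement rather than supporting it.

```python
# Minimal sketch of the h-index computation behind the kind of threshold
# filter the slide warns against; all numbers are invented for illustration.
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    h = 0
    for rank, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

applicants = {"A": [25, 8, 5, 3, 3, 1, 0], "B": [6, 6, 5, 5, 4, 4, 4]}
MIN_H = 4  # an arbitrary cutoff, exactly the practice the slide questions
shortlist = [name for name, cites in applicants.items() if h_index(cites) >= MIN_H]
print(shortlist)  # ['B'] -- applicant A's highly cited top paper counts for nothing
```

Note how applicant A (h = 3) is filtered out despite one far more visible paper, while B (h = 4) passes on a uniform mid-range record: the single number hides exactly the differences a human assessor would want to weigh.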
  • 9. Ten things you must not do …
4. Don't apply arbitrary weights to co-authorship
A known issue in bibliometrics is how to properly credit authors for their contribution to papers they have co-authored.
• There is no general solution to the problem.
• Only the authors themselves can judge their own contribution.
• In some cases, pre-set weights based on the sequence of co-authors are defined and applied as strict rules.
• Neither the sequence of co-authors nor the special "function" of the corresponding author always reflects the authors' actual contribution.
• Most algorithms are, in practice, rather arbitrary and, at this level, possibly misleading.
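The arbitrariness of pre-set co-authorship weights can be shown with a short Python sketch. Fractional, harmonic and geometric counting are all schemes proposed in the bibliometric literature; applied to the same four-author byline they assign visibly different credit shares, which is precisely why no fixed rule can stand in for the authors' own judgement.

```python
# Three credit-allocation schemes for an n-author paper, to show that the
# "right" share for a given byline position is a modelling choice, not a fact.
def fractional(n):
    """Equal share for every author."""
    return [1.0 / n] * n

def harmonic(n):
    """Share proportional to 1/rank, normalised to sum to 1."""
    denom = sum(1.0 / k for k in range(1, n + 1))
    return [(1.0 / k) / denom for k in range(1, n + 1)]

def geometric(n, ratio=0.5):
    """Each author gets `ratio` times the previous author's share."""
    weights = [ratio ** k for k in range(n)]
    total = sum(weights)
    return [w / total for w in weights]

n = 4
print([round(w, 2) for w in fractional(n)])  # [0.25, 0.25, 0.25, 0.25]
print([round(w, 2) for w in harmonic(n)])    # [0.48, 0.24, 0.16, 0.12]
print([round(w, 2) for w in geometric(n)])   # [0.53, 0.27, 0.13, 0.07]
```

The first author's credit ranges from 25% to over 50% depending on the scheme chosen, with nothing in the byline itself to decide between them.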
  • 10. Ten things you must not do …
5. Don't rank scientists according to one indicator
• It is legitimate to rank candidates who have been short-listed, e.g., for a job position, according to relevant criteria, but the ranking should not be based merely on bibliometrics.
• Internal or public ranking of research performance without a particular practical goal (such as a candidacy) is problematic.
• There are also ethical issues and possible repercussions of the emerging "champions-league mentality" on scientists' research and communication behaviour (e.g., G & D, 2003).
• A further negative effect of ranking lists (as easily accessible, ready-made data) is that they can be used for decision-making in contexts other than those they were prepared for.
  • 11. Ten things you must not do …
6. Don't merge incommensurable measures
• This problematic practice often begins with output reporting by the scientists themselves.
◦ Citation counts appearing in CVs or applications are sometimes based on different sources (WoS, Scopus, Google Scholar).
• The combination of incommensurable sources with inappropriate reference standards makes bibliometric indicators almost completely useless (cf. W, 1993).
• Do not allow users to merge bibliometric results from different sources without having checked their compatibility.
  • 12. Ten things you must not do …
7. Don't use flawed statistics
• Thresholds and reference standards for assignment to performance classes are proven tools in bibliometrics (e.g., for identifying industrious authors, or uncited and highly cited papers).
◦ This can even be more advantageous than using the original observations.
• However, the recent literature offers a plethora of formulas for "improved" measures or composite indicators that lack any serious mathematical background.
• Small datasets are typical of this aggregation level: this may increase bias or result in serious errors, and standard (mathematical) statistical methods are often at or beyond their limits here.
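A small Python example, with invented citation counts, illustrates why standard statistics strain at this aggregation level: citation distributions are highly skewed, so on the small samples typical of an individual oeuvre the arithmetic mean is dominated by a single paper.

```python
# Illustrative only: a six-paper oeuvre with one "hit" paper.
import statistics

citations = [0, 1, 1, 2, 3, 120]

mean = statistics.mean(citations)      # ~21.2, driven almost entirely by the outlier
median = statistics.median(citations)  # 1.5, closer to the typical paper
print(round(mean, 2), median)
```

A mean of about 21 citations per paper and a median of 1.5 describe the same six papers; with such small, skewed samples, any single summary statistic (and any formula built on it) is fragile.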
  • 13. Ten things you must not do …
8. Don't blindly trust one-hit wonders
• Do not evaluate scientists on the basis of one top paper, and do not encourage scientists to prize visibility over targeting in their publication strategy.
◦ Breakthroughs are often based on a single theoretical concept or way of viewing the world. They may be published in a paper that then attracts star attention.
◦ However, breakthroughs may also be based on a life-long piecing together of evidence published in a series of moderately cited papers.
☞ Always weigh the importance of highly cited papers against the value of sustained publishing. Don't look at top performance only; consider the complete life's work, or the research output created in the time window under study.
  • 15. Ten things you must not do …
9. Don't compare apples and oranges
• Figures are always comparable. But are their contents?
• Normalisation can help make measures comparable, but only like with like.
• Research and communication in different domains are structured differently. Analysing research performance in the humanities, mathematics and the life sciences requires different concepts and approaches.
◦ Simply weighting publication types (monographs, articles, working papers, etc.) and normalising citation rates will merely cover up, not eliminate, the differences.
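A minimal Python sketch shows what field normalisation of the "like with like" kind does: each paper's citation count is divided by the expected citation rate of its field. The baseline values below are invented for illustration, not real reference data.

```python
# Invented field baselines (expected citations per paper); real reference
# values depend on database, field classification, year and document type.
FIELD_BASELINE = {"mathematics": 2.1, "cell biology": 14.8}

def normalised_score(citations, field):
    """Observed citations divided by the field's expected citation rate."""
    return citations / FIELD_BASELINE[field]

# The same raw count of 10 citations means very different things:
print(round(normalised_score(10, "mathematics"), 2))   # 4.76: far above field average
print(round(normalised_score(10, "cell biology"), 2))  # 0.68: below field average
```

This repairs the raw-count comparison between the two fields, but, as the slide stresses, it cannot repair a comparison between activities that differ in kind: dividing a monograph-based humanities oeuvre by a journal-article baseline covers up the difference rather than eliminating it.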
  • 16. Ten things you must not do …
10. Don't allow deadlines and workload to compel you to drop good practices
• Reviewers and users in research management are often overwhelmed by the flood of submissions, applications and proposals, combined with tight deadlines and a lack of personnel.
◦ Readily available data such as IFs, gross citation counts and the h-index are sometimes used to make decisions on proposals and candidates.
• Don't give in to time pressure and heavy workload when you have responsible tasks in research assessment: the careers of scientists and the future of research teams are at stake. Don't allow tight deadlines to compel you to reduce evaluation to the use of "handy" numbers.
  • 17. Ten things you might do …
1. Individual-level bibliometrics is also statistics
• Basic counts (numbers of publications and citations) are important measures in bibliometrics at the individual level.
• All statistics derived from these counts require a sufficiently large publication output to allow valid conclusions.
• If this condition is met, standard bibliometric techniques can be applied, but special caution is always called for at this level:
◦ A longer publication period might also cover different stages of career progression and activity dynamics in the academic life of scientists.
◦ Assessment, external benchmarking and comparisons require appropriate reference standards, notably in interdisciplinary research or pluridisciplinary activities.
◦ Special attention should be paid to group authorship (group composition and the contribution credit assigned to each author).
Ten things you might do … 2. Analyse collaboration profiles of researchers
• Bibliometricians might analyse a scientist's position among his/her collaborators and co-authors. In particular, the following questions can be answered:
◦ Does the author preferably work alone, work in stable teams, or prefer occasional collaboration?
◦ Who are the collaborators, and is the scientist a 'junior', 'peer' or 'senior' partner in these relationships?
• This might help recognise the scientist's own role in his/her research environment, but final conclusions should be drawn in combination with "qualitative methods".
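A minimal sketch of such a collaboration profile from a publication list: count solo papers and rank recurring co-authors. The author names and paper lists below are hypothetical illustration data, and a real analysis would of course need author-name disambiguation first.

```python
from collections import Counter

def collaboration_profile(author, papers):
    """Summarise how often an author publishes alone and with whom.

    `papers` is a list of author lists, one per publication.
    """
    solo = sum(1 for p in papers if p == [author])
    coauthors = Counter(a for p in papers for a in p if a != author)
    return {"papers": len(papers), "solo": solo,
            "top_collaborators": coauthors.most_common(3)}

# Hypothetical publication list: each paper is its author list.
papers = [
    ["A. Smith"],
    ["A. Smith", "B. Jones"],
    ["A. Smith", "B. Jones", "C. Lee"],
    ["A. Smith", "B. Jones"],
]
profile = collaboration_profile("A. Smith", papers)
print(profile)  # one solo paper; "B. Jones" is a stable collaborator (3 joint papers)
```

Whether "B. Jones" is a junior, peer or senior partner cannot be read from the counts alone; that is exactly where the qualitative methods mentioned above come in.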
Ten things you might do … 3. Always combine quantitative and qualitative methods
At this level of aggregation, the combination of bibliometrics with traditional qualitative methods is not only important but indispensable.
• On the one hand, bibliometrics can supplement the sometimes subjectively coloured qualitative methods by providing "objective" figures to underpin or confirm arguments, or to make the assessment more concrete.
• If discrepancies between the two methods are found, try to investigate and understand the possible reasons for the different results.
☞ This might even enrich and improve the assessment.
Ten things you might do … 4. Use citation context analysis
The concept of "citation context" analysis was first introduced in 1973 by M and later suggested for use in Hungary (B, 2006).
• Here, citation context does not mean the position where a citation is placed in an article, or its distance from other citations in the same document. It means the textual and content-related environment of the citation in question.
• The aim is to show that a research result is not merely referred to but is indeed used in colleagues' research and/or discussed in a scholarly manner.
⇒ The context might be positive or negative.
"Citation context" represents an approach in between qualitative and quantitative methods and can be used in the case of individual proposals and applications.
Ten things you might do … 5. Analyse subject profiles
Many scientists do research in an interdisciplinary environment. Even their reviewers might work in different panels. The situation is even more difficult for "polydisciplinary" scientists. In principle, three basic approaches are possible:
1. Considering all activities as one total activity and "defining" an adequate topic for benchmarking
2. Splitting up the profile into its components (which might, of course, overlap) for assessment
3. Neglecting activities outside the actual scope of assessment
Which of the above models should be applied depends on the task. More research on these issues is urgently needed.
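The second approach, splitting a profile into possibly overlapping subject components, can be sketched as follows. The field labels and paper identifiers are hypothetical; in practice the labels would come from a classification scheme attached to the journals or papers.

```python
from collections import defaultdict

def split_profile(papers):
    """Split an oeuvre into (possibly overlapping) subject components."""
    components = defaultdict(list)
    for p in papers:
        for field in p["fields"]:  # a paper may carry several field labels
            components[field].append(p["id"])
    return dict(components)

# Hypothetical interdisciplinary oeuvre.
papers = [
    {"id": "p1", "fields": ["physics"]},
    {"id": "p2", "fields": ["physics", "chemistry"]},  # counted in both components
    {"id": "p3", "fields": ["chemistry"]},
]
print(split_profile(papers))
```

Each component can then be benchmarked against its own field's reference standards; note that the overlapping paper "p2" is deliberately counted in both components rather than forced into one.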
Ten things you might do … 6. Make an explicit choice between oeuvre and time-window analysis
The complete oeuvre of a scientist can serve as the basis of an individual assessment. This option should preferably not be used in comparative analysis.
• The reason is the differing age, profile and position of scientists, and the complexity of a scientist's career.
Time-window analysis is better suited for comparison, provided, of course, that like is compared with like and the publication periods and citation windows conform.
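A time-window analysis of the kind described above can be sketched like this: restrict the analysis to a fixed publication window and give every paper the same citation window. The oeuvre data below are hypothetical illustration values.

```python
def windowed_counts(papers, pub_start, pub_end, horizon):
    """Count publications in [pub_start, pub_end] and only those citations
    received within `horizon` years of each paper's publication year."""
    pubs, cites = 0, 0
    for p in papers:
        if pub_start <= p["year"] <= pub_end:
            pubs += 1
            cites += sum(1 for y in p["citing_years"] if y - p["year"] <= horizon)
    return pubs, cites

# Hypothetical oeuvre: publication year plus the years of each incoming citation.
oeuvre = [
    {"year": 2005, "citing_years": [2006, 2007, 2012]},  # 2012 falls outside the window
    {"year": 2009, "citing_years": [2010, 2011]},
    {"year": 1998, "citing_years": [1999]},              # outside the publication window
]
print(windowed_counts(oeuvre, 2004, 2010, 3))  # -> (2, 4)
```

Because both papers get the same three-year citation window, the counts of two researchers analysed this way are comparable in a sense that whole-oeuvre totals are not.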
Ten things you might do … 7. Combine bibliometrics with career analysis
This applies to assessment on the basis of a scientist's oeuvre.
• Bibliometrics can be used to zoom in on a scientist's career. Here the evolution of publication activity, citation impact, mobility and changing collaboration patterns can be monitored.
• It is not easy to quantify these observations, and the purpose is not to build indicators for possible comparison but to use bibliometric data to depict, visually and numerically, important aspects of the progress of a scientist's career.
Some preliminary results have been published by Z  G (2012).
Ten things you might do … 8. Clean bibliographic data carefully and use external sources
Bibliometric data at this level are extremely sensitive. This implies that the input data, too, must be absolutely clean and accurate.
• To achieve this, publication lists and CVs should be used where possible. This is important for two reasons:
◦ External sources help improve the quality of the data sources.
◦ Responsibility is shared with the authors or institutes.
• If the assessment is not confidential, the researchers themselves might be involved in the bibliometric exercise.
• Otherwise, scientists might be asked to provide data according to a given standard protocol, which can and should be developed in interaction between the user and bibliometricians.
Ten things you might do … 9. Even some "don'ts" are not taboo if properly applied
There is no reason to condemn the often incorrectly used Impact Factor and h-index. They can provide supplementary information if they are used in combination with qualitative methods and are not the only decision criterion. Example:
• Good practice (h-index as supporting argument): "The exceptionally high h-index of the applicant confirms his/her international standing attested to by our experts."
• Questionable use (h-index as decision criterion): "We are inclined to support this scientist because his/her h-index distinctly exceeds that of all other applicants."
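For reference, the h-index used in both quotations follows Hirsch's standard definition: the largest h such that h of the author's papers have at least h citations each. A short sketch (the citation counts are hypothetical):

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank  # the paper at this rank still has enough citations
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # -> 4: four papers have at least 4 citations
print(h_index([]))                # -> 0: no publications, no h-index
```

The computation also makes the limitation visible: the index discards most of the citation distribution, which is one reason it should remain a supporting argument rather than a decision criterion.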
Ten things you might do … 10. Help users to interpret and apply your results
At any level of aggregation, bibliometric methods should be well documented. This applies above all to the level of individual scientists and research teams.
• Bibliometricians should support users in a transparent manner to guarantee the replicability of bibliometric data.
• They should issue clear instructions concerning the use and interpretation of their results.
• They should also stress the limits of the validity of these results.
Conclusions
• The (added) value of, or damage done by, bibliometrics in individual-level evaluation depends on how and in what context bibliometrics is applied.
• In most situations, the context should determine which bibliometric methods should be applied, and how.
• Soundness and validity of methods are all the more necessary at the individual level, but not yet sufficient. Accuracy, reliability and completeness of the sources are an absolute imperative at this level.
• We recommend always using individual-level bibliometrics on the basis of the particular research portfolio. The best way to do this may be to design individual researcher profiles combining bibliometrics with qualitative information about careers and working contexts. Such a profile includes the research mission and goals of the researcher.
Acknowledgement
The authors would like to thank I R and J G for their contribution to the idea of a special session on this important issue, as well as the organisers of the ISSI 2013 conference for giving us the opportunity to organise this session. We also wish to thank L W and R C for their useful comments.