   ’  

 1
,  2
1Centre for R&D Monitoring and Dept MSI, KU Leuven, Belgium
2Centre for Science and Technology Studies, Leiden University, The Netherlands
 Introduction 
In the last quarter of the 20th century, bibliometrics evolved from a
sub-discipline of library and information science into an instrument for
evaluation and benchmarking (G, 2006; W, 2013).
• As a consequence, several scientometric tools came to be used in contexts
for which they were not designed (e.g., the JIF).
• Due to the dynamics of evaluation, the focus has shifted away from macro
studies towards meso and micro studies of both actors and topics.
• More recently, the evaluation of research teams and individual scientists
has become a central issue in services based on bibliometric data.
• The rise of social networking technologies in which all types of activities are
measured and monitored has promoted auto-evaluation with tools such as
Google Scholar, Publish or Perish, Scholarometer.
G  W, The dos and don’ts, Vienna, 2013 2/25
Introduction
There is no single typical individual-level bibliometrics, since the goals
differ: they range from the assessment of individual proposals or an
applicant’s oeuvre, through intra-institutional research coordination, to the
comparative evaluation of individuals and the benchmarking of research teams.
As a consequence, common standards for all tasks at the individual level
do not (yet) exist.
☞ Each task and concrete field of application requires flexibility on the
part of bibliometricians, but also maximum precision and accuracy.
In the following we summarise some important guidelines for the use of
bibliometrics in the evaluation of individual scientists, leading to ten dos
and ten don’ts of individual-level bibliometrics.
 Ten things you must not do … 
1. Don’t reduce individual performance to a single number
• Research performance is influenced by many factors such as age,
time window, position, research domain. Within the same scholarly
environment and position, interaction with colleagues, co-operation,
mobility and activity profiles might differ considerably.
• A single number (even if based on sound methods and correct data)
can certainly not suffice to reflect the complexity of research
activity, its background and its impact adequately.
• Using such numbers to score or benchmark researchers requires taking
the working context of the researcher into consideration.
Ten things you must not do …
2. Don’t use IFs as measures of quality
• Once created to supplement ISI’s Science Citation Index, the IF
evolved into an evaluation tool and seems to have become the
“common currency of scientific quality” in research evaluation,
influencing scientists’ funding and careers (S, 2004).
• However, the Impact Factor is by no means a performance measure
of individual articles, nor of the authors of those papers (e.g., S,
1989, 1997).
• Most recently, campaigns against the use of the IF in individual-level
research evaluation have emerged, both from scientists (who feel they
are victims of evaluation) and from bibliometricians themselves (e.g.,
B  HL, 2012; B, 2013).
◦ The San Francisco Declaration on Research Assessment (DORA) has
started an online campaign against the use of the IF for evaluation of
researchers and research groups.
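For reference, the two-year Journal Impact Factor that this criticism targets is a journal-level mean, not a paper-level measure:

```latex
\mathrm{JIF}(y) = \frac{C_{y}(y-1) + C_{y}(y-2)}{N(y-1) + N(y-2)}
```

where $C_{y}(t)$ is the number of citations received in year $y$ by items the journal published in year $t$, and $N(t)$ is the number of citable items published in year $t$. An average over a highly skewed citation distribution says little about any individual article in the journal.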
Ten things you must not do …
3. Don’t apply (hidden) “bibliometric filters” for selection
• Weights, thresholds or filters are defined for in-house evaluation or
for preselecting material for external use.
Some examples:
◦ A minimum IF might be required for inclusion in official publication
lists.
◦ A minimum h-index is required for receiving a doctoral degree or for
considering a grant application.
◦ A certain amount of citations is necessary for promotion or for
possible approval of applications.
This practice is sometimes questionable: If filters are set, they should
always support human judgement and not pre-empt it.
☞ Also, the psychological effect of using such filters should not be
underestimated.
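As an illustration of the "support, don't pre-empt" principle, a filter can be implemented so that it only flags cases for human review. The thresholds and record fields below are hypothetical examples, not recommendations:

```python
# Minimal sketch of a transparent "bibliometric filter" that flags rather
# than rejects. Thresholds and record fields are hypothetical examples.
MIN_H_INDEX = 10      # illustrative in-house threshold, not a recommendation
MIN_CITATIONS = 100

def flag_for_review(candidate: dict) -> list[str]:
    """Return human-readable flags; the decision stays with the reviewers."""
    flags = []
    if candidate.get("h_index", 0) < MIN_H_INDEX:
        flags.append("h-index below in-house threshold; needs qualitative review")
    if candidate.get("citations", 0) < MIN_CITATIONS:
        flags.append("citation count below threshold; check field and career age")
    return flags

print(flag_for_review({"h_index": 7, "citations": 250}))
```

A filter written this way leaves an audit trail and keeps the judgement with the evaluators.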
Ten things you must not do …
4. Don’t apply arbitrary weights to co-authorship
A known issue in bibliometrics is how to properly credit authors for their
contribution to papers they have co-authored.
• There is no general solution for the problem.
• Only the authors themselves can judge their own contribution.
• In some cases, pre-set weights on the basis of the sequence of
co-authors are defined and applied as strict rules.
• The sequence of co-authors, as well as the special “function” of the
corresponding author, does not always reflect the size of the real
contribution.
• Most algorithms are, in practice, rather arbitrary and at this level
possibly misleading.
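The arbitrariness can be made concrete: three common credit-assignment schemes applied to the same four-author paper already disagree on the first author's share (the scheme names are standard, the scenario is invented):

```python
# Three common credit-assignment schemes for an n-author paper.
# The point: the shares disagree, and none of them observes the
# authors' real contributions.
def equal_credit(n):                     # fractional counting
    return [1 / n] * n

def arithmetic_credit(n):                # linearly decreasing by rank
    total = n * (n + 1) / 2
    return [(n - i) / total for i in range(n)]

def harmonic_credit(n):                  # 1/rank, normalised to sum to 1
    s = sum(1 / (i + 1) for i in range(n))
    return [1 / ((i + 1) * s) for i in range(n)]

n = 4
print(equal_credit(n))       # every author: 0.25
print(arithmetic_credit(n))  # first author: 0.4
print(harmonic_credit(n))    # first author: 0.48
```

The same oeuvre can thus rank differently depending on an essentially arbitrary modelling choice.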
Ten things you must not do …
5. Don’t rank scientists according to one indicator
• It is legitimate to rank candidates who have been short-listed, e.g.,
for a job position, according to relevant criteria, but ranking should
not be merely based on bibliometrics.
• Internal or public ranking of research performance without any
particular practical goal (like a candidateship) is problematic.
• There are also ethical issues and possible repercussions of the
emerging “champions-league mentality” on scientists’ research and
communication behaviour (e.g., G  D, 2003).
• A further negative effect of ranking lists (as easily accessible and
ready-made data) is that they could be used for decision-making in
contexts other than those they were prepared for.
Ten things you must not do …
6. Don’t merge incommensurable measures
• This problematic practice often begins with output reporting by the
scientists themselves.
◦ Citation counts appearing in CVs or applications are sometimes based
on different sources (WoS, SCOPUS, Google Scholar).
• Incommensurable sources combined with inappropriate reference
standards make bibliometric indicators almost completely useless
(cf. W, 1993).
• Do not allow users to merge bibliometric results from different
sources without having checked their compatibility.
Ten things you must not do …
7. Don’t use flawed statistics
• Thresholds and reference standards for the assignment to
performance classes are proven tools in bibliometrics (e.g., for
identifying industrious authors, or uncited and highly cited papers).
◦ This might even be more advantageous than using the original
observations.
• However, looking at the recent literature one finds a plethora of
formulas for “improved” measures or composite indicators lacking
any serious mathematical background.
• Small datasets are typical of this aggregation level: this can increase
bias or lead to serious errors, and standard (mathematical) statistical
methods are often at or beyond their limits here.
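A minimal illustration of why standard summary statistics strain at this level: with a small, skewed (and here invented) citation record, the mean is dominated by a single paper.

```python
# Citation counts are skewed, and one hit paper dominates a small oeuvre.
# The numbers below are invented for illustration.
from statistics import mean, median

citations = [0, 1, 2, 3, 4, 190]   # hypothetical small publication record
print(mean(citations))    # 33.33... — driven almost entirely by one paper
print(median(citations))  # 2.5 — closer to the typical paper
```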
Ten things you must not do …
8. Don’t blindly trust one-hit wonders
• Do not evaluate scientists on the basis of one top paper and do not
encourage scientists to prize visibility over targeting in their
publication strategy.
◦ Breakthroughs are oen based on a single theoretical concept or way
of viewing the world. They may be published in a paper that then
aracts star aention.
◦ However, breakthroughs may also be based on a life-long piecing
together of evidence published in a series of moderately cited papers.
☞ Always weigh the importance of highly cited papers against the
value of sustained publishing. Don’t look at top performance only;
consider the complete life work or the research output created in the
time window under study.
Ten things you must not do …
9. Don’t compare apples and oranges
• Numbers are always comparable. But are the contents?
• Normalisation might help make measures comparable, but only like
with like.
• Research and communication in different domains are structured
differently. Analysing research performance in the humanities,
mathematics and the life sciences requires different concepts and
approaches.
◦ Simply weighting publication types (monographs, articles, working
papers, etc.) and normalising citation rates will only cover up, not
eliminate, the differences.
Ten things you must not do …
10. Don’t allow deadlines and workload to compel you to drop
good practices
• Reviewers and users in research management are often overwhelmed
by the flood of submissions, applications and proposals, combined
with tight deadlines and a lack of personnel.
◦ Readily available data like IFs, gross citation counts and the h-index
are sometimes used to make decisions on proposals and candidates.
• Don’t give in to time pressure and heavy workload when you have
responsible tasks in research assessment and the careers of scientists
and the future of research teams are at stake, and don’t allow tight
deadlines to compel you to reduce evaluation to the use of “handy”
numbers.
 Ten things you might do … 
1. Individual-level bibliometrics is also statistics
• Basic measures (number of publications/citations) are important
measures in bibliometrics at the individual level.
• All statistics derived from these counts require a sufficiently large
publication output to allow valid conclusions.
• If this is met, standard bibliometric techniques can be applied but
special caution is always called for at this level:
◦ A longer publication period might also cover different career
progression and activity dynamics in the academic life of scientists.
◦ Assessment, external benchmarking and comparisons require the use
of appropriate reference standards, notably in interdisciplinary
research or pluridisciplinary activities.
◦ Special aention should be paid to group authorship (group
composition and contribution credit assigned to the author).
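One widely used reference-standard technique is field normalisation. The sketch below follows the general MNCS idea (each paper's citations divided by the expected value for its field and publication year); the reference values are invented:

```python
# Sketch of a field-normalised citation score (MNCS-style): each paper's
# citation count is divided by the mean of its field/year reference set
# before averaging. Reference values here are invented.
def mncs(papers, field_means):
    """papers: list of (field, citation_count) pairs; field_means: expected
    citations per paper for that field and publication year."""
    ratios = [c / field_means[f] for f, c in papers]
    return sum(ratios) / len(ratios)

papers = [("math", 4), ("math", 2), ("biomed", 30)]
field_means = {"math": 2.0, "biomed": 30.0}
print(mncs(papers, field_means))  # 4/2, 2/2, 30/30 → mean = 4/3 ≈ 1.33
```

Note how the biomedical paper's much higher raw count contributes no more than the averagely cited maths paper once the field baseline is applied.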
Ten things you might do …
2. Analyse collaboration profiles of researchers
• Bibliometricians might analyse the scientist’s position among
his/her collaborators and co-authors. In particular, the following
questions can be answered:
◦ Do authors preferably work alone, work in stable teams, or prefer
occasional collaboration?
◦ Who are the collaborators, and is the scientist the ‘junior’, ‘peer’ or
‘senior’ partner in these relationships?
• This might help recognise the scientist’s own role in his/her research
environment, but final conclusions should be drawn in combination
with “qualitative methods”.
Ten things you might do …
3. Always combine quantitative and qualitative methods
At this level of aggregation, the combination of bibliometrics with
traditional qualitative methods is not only important but indispensable.
• Bibliometrics can supplement the sometimes subjectively coloured
qualitative methods by providing “objective” figures to underpin or
confirm arguments, or to make the assessment more concrete.
• If discrepancies between the two methods are found, try to
investigate and understand the possible reasons for the differing
results.
☞ This might even enrich and improve the assessment.
Ten things you might do …
4. Use citation context analysis
The concept of “citation context” analysis was first introduced in 1973 by
M and later suggested for use in Hungary (B, 2006).
• Here, citation context does not refer to the position where a citation
is placed in an article, or to its distance from other citations in the
same document. It refers to the textual and content environment of
the citation in question.
• The aim is to show that a research result is not only referred to but is
actually used in colleagues’ research and/or discussed in the scholarly
literature. ⇒ The context might be positive or negative.
“Citation context” represents an approach in-between qualitative and
quantitative methods and can be used in the case of individual proposals
and applications.
Ten things you might do …
5. Analyse subject profiles
Many scientists do research in an interdisciplinary environment. Even
their reviewers might work in different panels. The situation is even
worse for “polydisciplinary” scientists.
In principle, three basic approaches are possible.
1. Considering all activities as one total activity and “define” an
adequate topic for benchmarking
2. Spliing up the profile into its components (which might, of course,
overlap) for assessment
3. Neglecting activities outside the actual scope of assessment
It depends on the task, which of the above models should be applied.
More research on these issues is urgently needed.
Ten things you might do …
6. Make an explicit choice for oeuvre or time-window analysis
The complete oeuvre of a scientist can serve as the basis of the individual
assessment. This option should rather not be used in comparative
analysis.
• The reasons are differences in age, profile and position, and the
complexity of a scientist’s career.
Time-window analysis is more suited for comparison, provided, of course,
like is compared with like and the publication period and citation
windows conform.
Ten things you might do …
7. Combine bibliometrics with career analysis
This applies to the assessment on the basis of a scientist’s oeuvre.
• Bibliometrics can be used to zoom in on a scientist’s career. Here the
evolution of publication activity, citation impact, mobility and
changing collaboration patterns can be monitored.
• It is not easy to quantify the observations and the purpose is not to
build indicators for possible comparison but to use bibliometric data
to visually and numerically depict important aspects of the progress
of a scientist’s career.
 Some preliminary results have been published by Z  G (2012).
Ten things you might do …
8. Clean bibliographic data carefully and use external sources
Bibliometric data at this level are extremely sensitive. This implies that
the input data, too, must be absolutely clean and accurate.
• In order to achieve this, publication lists and CVs should be used
where possible. This is important for two reasons:
◦ External sources help improve the quality of data sources.
◦ Responsibility is shared with the authors or institutes.
• If the assessment is not confidential, researchers themselves might
be involved in the bibliometric exercise.
• Otherwise, scientists might be asked to provide data according to a
given standard protocol that can and should be developed in
interaction between the user and bibliometricians.
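One hypothetical cleaning step, sketched for illustration: normalising author-name variants before matching CV entries against a bibliographic database (the name and variants below are invented).

```python
# Hypothetical sketch of one cleaning step: collapsing author-name variants
# to a canonical form before record matching. Names are invented.
import unicodedata

def normalise_name(name: str) -> str:
    """Lowercase, strip accents and dots, collapse whitespace."""
    name = unicodedata.normalize("NFKD", name)
    name = "".join(ch for ch in name if not unicodedata.combining(ch))
    name = name.lower().replace(".", "").strip()
    return " ".join(name.split())

variants = ["Müller, A.", "MULLER, A", "muller, a."]
print({normalise_name(v) for v in variants})  # one canonical form remains
```

Real disambiguation is much harder (homonyms, name changes, transliterations), which is exactly why external sources and the researchers themselves should be involved.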
Ten things you might do …
9. Even some “don’ts” are not taboo if properly applied
There is no reason to condemn the often incorrectly used Impact Factor
and h-index. They can provide supplementary information if they are
used in combination with qualitative methods and are not the only
decision criterion.
Example:
• Good practice (h-index as supporting argument):
“The exceptionally high h-index of the applicant confirms his/her
international standing attested to by our experts.”
• Questionable use (h-index as decision criterion):
“We are inclined to support this scientist because his/her h-index
distinctly exceeds that of all other applicants.”
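The h-index itself is well defined (the largest h such that h of the author's papers have at least h citations each); the slide's point concerns its use, not its computation. A minimal sketch:

```python
# h-index: the largest h such that the author has h papers with at least
# h citations each. Sorting descending makes the condition a prefix test.
def h_index(citations):
    cites = sorted(citations, reverse=True)
    return sum(1 for i, c in enumerate(cites, start=1) if c >= i)

print(h_index([10, 8, 5, 4, 3]))   # → 4
print(h_index([25, 8, 5, 3, 3]))   # → 3
```

Note how the second (invented) record has far more total citations but a lower h-index: even this single number encodes one particular trade-off between productivity and impact.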
Ten things you might do …
10. Help users to interpret and apply your results
At any level of aggregation, bibliometric methods should be
well documented. This applies above all to the level of individual
scientists and research teams.
• Bibliometricians should support users in a transparent manner to
guarantee replicability of bibliometric data.
• They should issue clear instructions concerning the use and
interpretation of their results.
• They should also stress the limitations of the validity of these results.
 Conclusions 
• The (added) value of or damage by bibliometrics in individual-level
evaluation depends on how and in what context bibliometrics is
applied.
• In most situations, the context should determine which bibliometric
methods and how those should be applied.
• Soundness and validity of methods are all the more necessary at the
individual level, but not yet sufficient. Accuracy, reliability and
completeness of the sources are an absolute imperative at this level.
• We recommend always using individual-level bibliometrics on the
basis of the particular research portfolio. The best way to do this may
be to design individual researcher profiles combining bibliometrics
with qualitative information about careers and working contexts.
Such a profile includes the research mission and goals of the
researcher.
 Acknowledgement 
The authors would like to thank I R and J G for
their contribution to the idea of a special session on this important issue
as well as the organisers of the ISSI 2013 conference for having given us
the opportunity to organise this session.
We also wish to thank L W and R C for their
useful comments.
 
Collingwood Library 2010
Collingwood Library 2010Collingwood Library 2010
Collingwood Library 2010
 
raf sistemleri
raf sistemleriraf sistemleri
raf sistemleri
 
Billgatesppt 101108135150-phpapp02
Billgatesppt 101108135150-phpapp02Billgatesppt 101108135150-phpapp02
Billgatesppt 101108135150-phpapp02
 

Ähnlich wie The dos and don'ts in individudal level bibliometrics

Journal and author impact measures Assessing your impact (h-index and beyond)
Journal and author impact measures Assessing your impact (h-index and beyond)Journal and author impact measures Assessing your impact (h-index and beyond)
Journal and author impact measures Assessing your impact (h-index and beyond)Aboul Ella Hassanien
 
Calais, Gerald j[1]. Teacher Education, www.nationalforum.com
Calais, Gerald j[1]. Teacher Education, www.nationalforum.comCalais, Gerald j[1]. Teacher Education, www.nationalforum.com
Calais, Gerald j[1]. Teacher Education, www.nationalforum.comWilliam Kritsonis
 
Calais, gerald j[1]. h index-nftej-v19-n3-2009
Calais, gerald j[1]. h index-nftej-v19-n3-2009Calais, gerald j[1]. h index-nftej-v19-n3-2009
Calais, gerald j[1]. h index-nftej-v19-n3-2009William Kritsonis
 
Scopus: Research Metrics and Indicators
Scopus: Research Metrics and Indicators Scopus: Research Metrics and Indicators
Scopus: Research Metrics and Indicators Michaela Kurschildgen
 
Research and Publication Ethics_Misconduct.pptx
Research and Publication Ethics_Misconduct.pptxResearch and Publication Ethics_Misconduct.pptx
Research and Publication Ethics_Misconduct.pptxVijayKumar17076
 
Unveiling the Ecosystem of Science: How can we characterize and assess divers...
Unveiling the Ecosystem of Science: How can we characterize and assess divers...Unveiling the Ecosystem of Science: How can we characterize and assess divers...
Unveiling the Ecosystem of Science: How can we characterize and assess divers...Nicolas Robinson-Garcia
 
Durham Part Time Distance Research Student 2019: Sample Library Slides
Durham Part Time Distance Research Student 2019: Sample Library SlidesDurham Part Time Distance Research Student 2019: Sample Library Slides
Durham Part Time Distance Research Student 2019: Sample Library SlidesJamie Bisset
 
Alternatives to Empirical Research (Dissertation by literature review) Dec 23...
Alternatives to Empirical Research (Dissertation by literature review) Dec 23...Alternatives to Empirical Research (Dissertation by literature review) Dec 23...
Alternatives to Empirical Research (Dissertation by literature review) Dec 23...hashem Al-Shamiri
 
2012.06.07 Maximising the Impact of Social Sciences Research
2012.06.07 Maximising the Impact of Social Sciences Research2012.06.07 Maximising the Impact of Social Sciences Research
2012.06.07 Maximising the Impact of Social Sciences ResearchNUI Galway
 
Can Bibliometric and Scientometric Measures be Used to Assess Research Quali...
Can Bibliometric and Scientometric Measures be Used to Assess Research Quali...Can Bibliometric and Scientometric Measures be Used to Assess Research Quali...
Can Bibliometric and Scientometric Measures be Used to Assess Research Quali...Yasar Tonta
 
DORA and the reinvention of research assessment
DORA and the reinvention of research assessmentDORA and the reinvention of research assessment
DORA and the reinvention of research assessmentMark Patterson
 
Quantitative research
Quantitative researchQuantitative research
Quantitative researchTooba Kanwal
 
TYPES_OFBUSINESSRESEARCH.pdf
TYPES_OFBUSINESSRESEARCH.pdfTYPES_OFBUSINESSRESEARCH.pdf
TYPES_OFBUSINESSRESEARCH.pdfJubilinAlbania
 
In metrics we trust?
In metrics we trust?In metrics we trust?
In metrics we trust?ORCID, Inc
 
Altmetrics: An Overview
Altmetrics: An OverviewAltmetrics: An Overview
Altmetrics: An OverviewPallab Pradhan
 
Develop three research questions on a topic for which you are
Develop three research questions on a topic for which you are Develop three research questions on a topic for which you are
Develop three research questions on a topic for which you are suzannewarch
 

Ähnlich wie The dos and don'ts in individudal level bibliometrics (20)

Journal and author impact measures Assessing your impact (h-index and beyond)
Journal and author impact measures Assessing your impact (h-index and beyond)Journal and author impact measures Assessing your impact (h-index and beyond)
Journal and author impact measures Assessing your impact (h-index and beyond)
 
Calais, Gerald j[1]. Teacher Education, www.nationalforum.com
Calais, Gerald j[1]. Teacher Education, www.nationalforum.comCalais, Gerald j[1]. Teacher Education, www.nationalforum.com
Calais, Gerald j[1]. Teacher Education, www.nationalforum.com
 
Calais, gerald j[1]. h index-nftej-v19-n3-2009
Calais, gerald j[1]. h index-nftej-v19-n3-2009Calais, gerald j[1]. h index-nftej-v19-n3-2009
Calais, gerald j[1]. h index-nftej-v19-n3-2009
 
Preparing research proposal icphi2013
Preparing research proposal icphi2013Preparing research proposal icphi2013
Preparing research proposal icphi2013
 
Scopus: Research Metrics and Indicators
Scopus: Research Metrics and Indicators Scopus: Research Metrics and Indicators
Scopus: Research Metrics and Indicators
 
Research and Publication Ethics_Misconduct.pptx
Research and Publication Ethics_Misconduct.pptxResearch and Publication Ethics_Misconduct.pptx
Research and Publication Ethics_Misconduct.pptx
 
Unveiling the Ecosystem of Science: How can we characterize and assess divers...
Unveiling the Ecosystem of Science: How can we characterize and assess divers...Unveiling the Ecosystem of Science: How can we characterize and assess divers...
Unveiling the Ecosystem of Science: How can we characterize and assess divers...
 
Anu digital research literacies
Anu digital research literaciesAnu digital research literacies
Anu digital research literacies
 
Durham Part Time Distance Research Student 2019: Sample Library Slides
Durham Part Time Distance Research Student 2019: Sample Library SlidesDurham Part Time Distance Research Student 2019: Sample Library Slides
Durham Part Time Distance Research Student 2019: Sample Library Slides
 
Lern, jan 2015, digital media slides
Lern, jan 2015, digital media slidesLern, jan 2015, digital media slides
Lern, jan 2015, digital media slides
 
Alternatives to Empirical Research (Dissertation by literature review) Dec 23...
Alternatives to Empirical Research (Dissertation by literature review) Dec 23...Alternatives to Empirical Research (Dissertation by literature review) Dec 23...
Alternatives to Empirical Research (Dissertation by literature review) Dec 23...
 
2012.06.07 Maximising the Impact of Social Sciences Research
2012.06.07 Maximising the Impact of Social Sciences Research2012.06.07 Maximising the Impact of Social Sciences Research
2012.06.07 Maximising the Impact of Social Sciences Research
 
Can Bibliometric and Scientometric Measures be Used to Assess Research Quali...
Can Bibliometric and Scientometric Measures be Used to Assess Research Quali...Can Bibliometric and Scientometric Measures be Used to Assess Research Quali...
Can Bibliometric and Scientometric Measures be Used to Assess Research Quali...
 
DORA and the reinvention of research assessment
DORA and the reinvention of research assessmentDORA and the reinvention of research assessment
DORA and the reinvention of research assessment
 
Quantitative research
Quantitative researchQuantitative research
Quantitative research
 
TYPES_OFBUSINESSRESEARCH.pdf
TYPES_OFBUSINESSRESEARCH.pdfTYPES_OFBUSINESSRESEARCH.pdf
TYPES_OFBUSINESSRESEARCH.pdf
 
In metrics we trust?
In metrics we trust?In metrics we trust?
In metrics we trust?
 
Altmetrics: An Overview
Altmetrics: An OverviewAltmetrics: An Overview
Altmetrics: An Overview
 
Health Evidence™ Quality Assessment Tool (Sample Answers - May 10, 2018 webinar)
Health Evidence™ Quality Assessment Tool (Sample Answers - May 10, 2018 webinar)Health Evidence™ Quality Assessment Tool (Sample Answers - May 10, 2018 webinar)
Health Evidence™ Quality Assessment Tool (Sample Answers - May 10, 2018 webinar)
 
Develop three research questions on a topic for which you are
Develop three research questions on a topic for which you are Develop three research questions on a topic for which you are
Develop three research questions on a topic for which you are
 

Kürzlich hochgeladen

Install Stable Diffusion in windows machine
Install Stable Diffusion in windows machineInstall Stable Diffusion in windows machine
Install Stable Diffusion in windows machinePadma Pradeep
 
"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr Bagan"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr BaganFwdays
 
Commit 2024 - Secret Management made easy
Commit 2024 - Secret Management made easyCommit 2024 - Secret Management made easy
Commit 2024 - Secret Management made easyAlfredo García Lavilla
 
AI as an Interface for Commercial Buildings
AI as an Interface for Commercial BuildingsAI as an Interface for Commercial Buildings
AI as an Interface for Commercial BuildingsMemoori
 
Training state-of-the-art general text embedding
Training state-of-the-art general text embeddingTraining state-of-the-art general text embedding
Training state-of-the-art general text embeddingZilliz
 
WordPress Websites for Engineers: Elevate Your Brand
WordPress Websites for Engineers: Elevate Your BrandWordPress Websites for Engineers: Elevate Your Brand
WordPress Websites for Engineers: Elevate Your Brandgvaughan
 
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024BookNet Canada
 
Gen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdfGen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdfAddepto
 
My INSURER PTE LTD - Insurtech Innovation Award 2024
My INSURER PTE LTD - Insurtech Innovation Award 2024My INSURER PTE LTD - Insurtech Innovation Award 2024
My INSURER PTE LTD - Insurtech Innovation Award 2024The Digital Insurer
 
Powerpoint exploring the locations used in television show Time Clash
Powerpoint exploring the locations used in television show Time ClashPowerpoint exploring the locations used in television show Time Clash
Powerpoint exploring the locations used in television show Time Clashcharlottematthew16
 
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks..."LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...Fwdays
 
Search Engine Optimization SEO PDF for 2024.pdf
Search Engine Optimization SEO PDF for 2024.pdfSearch Engine Optimization SEO PDF for 2024.pdf
Search Engine Optimization SEO PDF for 2024.pdfRankYa
 
Beyond Boundaries: Leveraging No-Code Solutions for Industry Innovation
Beyond Boundaries: Leveraging No-Code Solutions for Industry InnovationBeyond Boundaries: Leveraging No-Code Solutions for Industry Innovation
Beyond Boundaries: Leveraging No-Code Solutions for Industry InnovationSafe Software
 
SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024Lorenzo Miniero
 
CloudStudio User manual (basic edition):
CloudStudio User manual (basic edition):CloudStudio User manual (basic edition):
CloudStudio User manual (basic edition):comworks
 
Streamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project SetupStreamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project SetupFlorian Wilhelm
 
Bun (KitWorks Team Study 노별마루 발표 2024.4.22)
Bun (KitWorks Team Study 노별마루 발표 2024.4.22)Bun (KitWorks Team Study 노별마루 발표 2024.4.22)
Bun (KitWorks Team Study 노별마루 발표 2024.4.22)Wonjun Hwang
 
Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!Commit University
 
Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Scott Keck-Warren
 
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
"Federated learning: out of reach no matter how close",Oleksandr LapshynFwdays
 

Kürzlich hochgeladen (20)

Install Stable Diffusion in windows machine
Install Stable Diffusion in windows machineInstall Stable Diffusion in windows machine
Install Stable Diffusion in windows machine
 
"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr Bagan"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr Bagan
 
Commit 2024 - Secret Management made easy
Commit 2024 - Secret Management made easyCommit 2024 - Secret Management made easy
Commit 2024 - Secret Management made easy
 
AI as an Interface for Commercial Buildings
AI as an Interface for Commercial BuildingsAI as an Interface for Commercial Buildings
AI as an Interface for Commercial Buildings
 
Training state-of-the-art general text embedding
Training state-of-the-art general text embeddingTraining state-of-the-art general text embedding
Training state-of-the-art general text embedding
 
WordPress Websites for Engineers: Elevate Your Brand
WordPress Websites for Engineers: Elevate Your BrandWordPress Websites for Engineers: Elevate Your Brand
WordPress Websites for Engineers: Elevate Your Brand
 
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
 
Gen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdfGen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdf
 
My INSURER PTE LTD - Insurtech Innovation Award 2024
My INSURER PTE LTD - Insurtech Innovation Award 2024My INSURER PTE LTD - Insurtech Innovation Award 2024
My INSURER PTE LTD - Insurtech Innovation Award 2024
 
Powerpoint exploring the locations used in television show Time Clash
Powerpoint exploring the locations used in television show Time ClashPowerpoint exploring the locations used in television show Time Clash
Powerpoint exploring the locations used in television show Time Clash
 
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks..."LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
 
Search Engine Optimization SEO PDF for 2024.pdf
Search Engine Optimization SEO PDF for 2024.pdfSearch Engine Optimization SEO PDF for 2024.pdf
Search Engine Optimization SEO PDF for 2024.pdf
 
Beyond Boundaries: Leveraging No-Code Solutions for Industry Innovation
Beyond Boundaries: Leveraging No-Code Solutions for Industry InnovationBeyond Boundaries: Leveraging No-Code Solutions for Industry Innovation
Beyond Boundaries: Leveraging No-Code Solutions for Industry Innovation
 
SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024
 
CloudStudio User manual (basic edition):
CloudStudio User manual (basic edition):CloudStudio User manual (basic edition):
CloudStudio User manual (basic edition):
 
Streamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project SetupStreamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project Setup
 
Bun (KitWorks Team Study 노별마루 발표 2024.4.22)
Bun (KitWorks Team Study 노별마루 발표 2024.4.22)Bun (KitWorks Team Study 노별마루 발표 2024.4.22)
Bun (KitWorks Team Study 노별마루 발표 2024.4.22)
 
Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!
 
Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024
 
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
 

The dos and don'ts in individual level bibliometrics

  • 1. The dos and don'ts in individual level bibliometrics. Wolfgang Glänzel 1, Paul Wouters 2. 1 Centre for R&D Monitoring and Dept MSI, KU Leuven, Belgium. 2 Centre for Science and Technology Studies, Leiden University, The Netherlands
  • 2. Introduction. In the last quarter of the 20th century, bibliometrics evolved from a sub-discipline of library and information science into an instrument for evaluation and benchmarking (Glänzel, 2006; Wouters, 2013). • As a consequence, several scientometric tools became used in a context for which they were not designed (e.g., the JIF). • Due to the dynamics in evaluation, the focus has shifted away from macro studies towards meso and micro studies of both actors and topics. • More recently, the evaluation of research teams and individual scientists has become a central issue in services based on bibliometric data. • The rise of social networking technologies, in which all types of activities are measured and monitored, has promoted auto-evaluation with tools such as Google Scholar, Publish or Perish and Scholarometer. Glänzel & Wouters, The dos and don'ts, Vienna, 2013
  • 4. Introduction. There is no single typical individual-level bibliometrics, since the goals differ: they range from the assessment of individual proposals or the oeuvre of applicants, through intra-institutional research coordination, to the comparative evaluation of individuals and the benchmarking of research teams. As a consequence, common standards for all tasks at the individual level do not (yet) exist. ☞ Each task and each concrete field of application requires flexibility on the part of bibliometricians, but also maximum precision and accuracy. In the following we summarise some important guidelines for the use of bibliometrics in the context of the evaluation of individual scientists, leading to ten dos and ten don'ts in individual-level bibliometrics.
  • 6. Ten things you must not do … 1. Don't reduce individual performance to a single number. • Research performance is influenced by many factors such as age, time window, position and research domain. Even within the same scholarly environment and position, interaction with colleagues, co-operation, mobility and activity profiles might differ considerably. • A single number (even if based on sound methods and correct data) cannot suffice to reflect the complexity of research activity, its background and its impact adequately. • Using numbers to score or benchmark researchers requires taking the researcher's working context into consideration.
  • 7. Ten things you must not do … 2. Don't use IFs as measures of quality. • Originally created to supplement ISI's Science Citation Index, the IF evolved into an evaluation tool and seems to have become the "common currency of scientific quality" in research evaluation, influencing scientists' funding and careers (S., 2004). • However, the Impact Factor is by no means a performance measure of individual articles, nor of the authors of these papers (e.g., Seglen, 1989, 1997). • Most recently, campaigns against the use of the IF in individual-level research evaluation have emerged, both from scientists (who feel they are victims of evaluation) and from bibliometricians themselves (e.g., B. & H.L., 2012; B., 2013). ◦ The San Francisco Declaration on Research Assessment (DORA) has started an online campaign against the use of the IF for the evaluation of researchers and research groups.
  • 8. Ten things you must not do … 3. Don't apply (hidden) "bibliometric filters" for selection. • Weights, thresholds or filters are sometimes defined for in-house evaluation or for preselecting material for external use. Some examples: ◦ A minimum IF might be required for inclusion in official publication lists. ◦ A minimum h-index is required for receiving a doctoral degree or for considering a grant application. ◦ A certain number of citations is necessary for promotion or for possible approval of applications. This practice is sometimes questionable: if filters are set, they should always support human judgement and not pre-empt it. ☞ The psychological effect of using such filters should also not be underestimated.
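To make concrete how crude an h-index filter is, here is a minimal sketch of the standard h-index computation (the citation counts below are invented for illustration): two very different publication records end up on the same side of the threshold.

```python
def h_index(citations):
    """Largest h such that the author has h papers with at least
    h citations each (Hirsch's definition)."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# An oeuvre with three highly cited papers ...
print(h_index([50, 40, 30, 2, 1]))  # -> 3
# ... and one with three modestly cited papers get the same score.
print(h_index([3, 3, 3]))           # -> 3
```

A filter such as "h ≥ 3" would treat these two records as equivalent, which is exactly why such thresholds should support human judgement rather than replace it.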
  • 9. Ten things you must not do … 4. Don't apply arbitrary weights to co-authorship. A known issue in bibliometrics is how to properly credit authors for their contribution to papers they have co-authored. • There is no general solution to the problem. • Only the authors themselves can judge their own contribution. • In some cases, pre-set weights based on the sequence of co-authors are defined and applied as strict rules. • The sequence of co-authors, as well as the special "function" of the corresponding author, does not always reflect the amount of their real contribution. • Most algorithms are, in practice, rather arbitrary and at this level possibly misleading.
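The arbitrariness can be made visible by computing one author's credit under three defensible counting schemes (full, fractional and harmonic counting are standard scheme names in the literature; the four-author paper is a made-up example):

```python
def full_credit(n):
    """Every one of the n co-authors gets full credit."""
    return [1.0] * n

def fractional_credit(n):
    """Credit split equally among the n co-authors."""
    return [1.0 / n] * n

def harmonic_credit(n):
    """Credit of the k-th listed author proportional to 1/k."""
    total = sum(1.0 / k for k in range(1, n + 1))
    return [(1.0 / k) / total for k in range(1, n + 1)]

# The first of four authors receives 1.0, 0.25 or 0.48 of the
# credit depending solely on which scheme is chosen.
for scheme in (full_credit, fractional_credit, harmonic_credit):
    print(scheme.__name__, round(scheme(4)[0], 2))
```

All three schemes are internally consistent, yet they rank the same contribution very differently; nothing in the byline itself tells us which one reflects the real division of labour.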
  • 10. Ten things you must not do … 5. Don't rank scientists according to one indicator. • It is legitimate to rank candidates who have been short-listed, e.g. for a job position, according to relevant criteria, but the ranking should not be based merely on bibliometrics. • Internal or public ranking of research performance without any particular practical goal (such as a candidateship) is problematic. • There are also ethical issues, and possible repercussions of the emerging "champions-league mentality" on scientists' research and communication behaviour (e.g., G. & D., 2003). • A further negative effect of ranking lists (as easily accessible and ready-made data) is that they could be used for decision-making in contexts other than those they were prepared for.
  • 11. Ten things you must not do … 6. Don't merge incommensurable measures. • This problematic practice often begins with output reporting by the scientists themselves. ◦ Citation counts appearing in CVs or applications are sometimes based on different sources (WoS, Scopus, Google Scholar). • Incommensurable sources combined with inappropriate reference standards make bibliometric indicators almost completely useless (cf. W., 1993). • Do not allow users to merge bibliometric results from different sources without having checked their compatibility.
  • 12. Ten things you must not do … 7. Don't use flawed statistics. • Thresholds and reference standards for the assignment to performance classes are proven tools in bibliometrics (e.g., for identifying industrious authors, or uncited and highly cited papers). ◦ This might even be more advantageous than using the original observations. • However, looking at the recent literature one finds a plethora of formulas for "improved" measures or composite indicators lacking any serious mathematical background. • Small datasets are typical of this aggregation level: this might increase bias or result in serious errors, and standard (mathematical) statistical methods are often at or beyond their limits here.
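One recurrent pitfall with small, skewed citation datasets can be shown in a few lines (the citation record is hypothetical): the arithmetic mean of a small record is dominated by a single outlier, while a robust statistic is not.

```python
import statistics

# Hypothetical record of one researcher: nine papers,
# one of them very highly cited.
citations = [0, 0, 1, 1, 2, 2, 3, 5, 120]

print(round(statistics.mean(citations), 1))  # -> 14.9
print(statistics.median(citations))          # -> 2
```

Reporting only the mean here would badly misrepresent the typical paper in this oeuvre, which is one concrete sense in which standard statistical summaries reach their limits at the individual level.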
  • 13. Ten things you must not do … 8. Don't blindly trust one-hit wonders. • Do not evaluate scientists on the basis of one top paper, and do not encourage scientists to prize visibility over targeting in their publication strategy. ◦ Breakthroughs are often based on a single theoretical concept or way of viewing the world. They may be published in a paper that then attracts star attention. ◦ However, breakthroughs may also be based on a life-long piecing together of evidence published in a series of moderately cited papers. ☞ Always weigh the importance of highly cited papers against the value of sustained publishing. Don't look at top performance only; consider the complete life work or the research output created in the time window under study.
  • 15. Ten things you must not do … 9. Don't compare apples and oranges. • Figures are always comparable. And contents? • Normalisation might help make measures comparable, but only like with like. • Research and communication in different domains are structured differently. The analysis of research performance in the humanities, mathematics and the life sciences needs different concepts and approaches. ◦ Simply weighting publication types (monographs, articles, working papers, etc.) and normalising citation rates will only cover up, not eliminate, the differences.
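What "like with like" means in practice can be sketched with a crude field normalisation in the style of a mean-normalised citation score (the baseline values and papers below are hypothetical):

```python
def normalized_scores(papers, baselines):
    """Divide each paper's citations by the mean citation rate of
    its own (field, year) reference set."""
    return [cites / baselines[(field, year)]
            for field, year, cites in papers]

# Hypothetical baselines: mean citations per paper in a field/year.
baselines = {("mathematics", 2010): 2.0, ("cell biology", 2010): 20.0}
papers = [("mathematics", 2010, 4), ("cell biology", 2010, 40)]

# Raw counts differ tenfold; normalised impact is identical.
print(normalized_scores(papers, baselines))  # -> [2.0, 2.0]
```

The comparison only becomes meaningful because each paper is measured against its own field's baseline; comparing the raw counts directly would merely restate the disciplinary difference in citation culture.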
  • 16. Ten things you must not do … 10. Don't allow deadlines and workload to compel you to drop good practices. • Reviewers and users in research management are often overwhelmed by the flood of submissions, applications and proposals, combined with tight deadlines and a lack of personnel. ◦ Readily available data such as IFs, gross citation counts and the h-index are sometimes used to make decisions on proposals and candidates. • Don't give in to time pressure and heavy workload when you have responsible tasks in research assessment and the careers of scientists and the future of research teams are at stake; don't allow tight deadlines to compel you to reduce evaluation to the use of "handy" numbers.
  • 17. Ten things you might do … 1. Individual-level bibliometrics is also statistics. • Basic counts (numbers of publications and citations) are important measures in bibliometrics at the individual level. • All statistics derived from these counts require a sufficiently large publication output to allow valid conclusions. • If this requirement is met, standard bibliometric techniques can be applied, but special caution is always called for at this level: ◦ A longer publication period might also cover different stages of career progression and activity dynamics in the academic life of scientists. ◦ Assessment, external benchmarking and comparisons require the use of appropriate reference standards, notably in interdisciplinary research or pluridisciplinary activities. ◦ Special attention should be paid to group authorship (group composition and the contribution credit assigned to each author).
Ten things you might do …

2. Analyse the collaboration profiles of researchers
• Bibliometricians might analyse a scientist's position among his/her collaborators and co-authors. In particular, the following questions can be answered.
  ◦ Do authors prefer to work alone, work in stable teams, or collaborate occasionally?
  ◦ Who are the collaborators, and are the scientists 'junior', 'peer' or 'senior' partners in these relationships?
• This might help recognise the scientist's own role in his/her research environment, but final conclusions should be drawn in combination with "qualitative methods".
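A minimal version of such a collaboration profile can be built from author lists alone: counting solo papers and co-author frequencies already separates stable teams from occasional collaboration. The names and papers below are hypothetical.

```python
# Sketch: a simple collaboration profile from hypothetical author lists.
from collections import Counter

focal = "A. Researcher"
papers = [
    ["A. Researcher"],                                  # solo paper
    ["A. Researcher", "B. Partner"],
    ["A. Researcher", "B. Partner", "C. Guest"],
    ["A. Researcher", "B. Partner"],
]

def collaboration_profile(focal, papers):
    """Count solo papers and co-author frequencies for one scientist."""
    solo = sum(1 for p in papers if p == [focal])
    coauthors = Counter(a for p in papers for a in p if a != focal)
    return solo, coauthors

solo, coauthors = collaboration_profile(focal, papers)
print(solo)                      # 1
print(coauthors.most_common(1))  # [('B. Partner', 3)]
```

Whether "B. Partner" is a junior, peer or senior partner is exactly the kind of question the counts cannot answer and qualitative methods must.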
Ten things you might do …

3. Always combine quantitative and qualitative methods
At this level of aggregation, the combination of bibliometrics with traditional qualitative methods is not only important but indispensable.
• On the one hand, bibliometrics can be used to supplement the sometimes subjectively coloured qualitative methods by providing "objective" figures to underpin or confirm arguments, or to make the assessment more concrete.
• If discrepancies between the two methods are found, try to investigate and understand the possible reasons for the differing results.
☞ This might even enrich and improve the assessment.
Ten things you might do …

4. Use citation context analysis
The concept of "citation context" analysis was first introduced in 1973 by M and later suggested for use in Hungary (B, 2006).
• Here citation context does not mean the position where a citation is placed in an article, or its distance from other citations in the same document. It means the textual and content-related environment of the citation in question.
• The aim is to show that a research result is not merely referred to but is indeed used in colleagues' research and/or scholarly discussed.
  ⇒ The context might be positive or negative.
"Citation context" represents an approach in between qualitative and quantitative methods and can be used in the case of individual proposals and applications.
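A first mechanical step towards citation context analysis is simply extracting the textual environment of a citation marker; the content-related judgement (positive, negative, substantive use) then remains a human task. The text and the marker format `[7]` below are hypothetical.

```python
# Sketch: pulling the textual environment of a citation marker out of a
# document, as a starting point for citation context analysis.
# The text and the '[7]' marker convention are hypothetical.
import re

text = ("Earlier estimates were too low. Smith's method [7] was applied "
        "here with good results. Other approaches exist.")

def citation_context(text, marker):
    """Return the sentence(s) containing a citation marker such as '[7]'."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    return [s for s in sentences if marker in s]

print(citation_context(text, "[7]"))
# ["Smith's method [7] was applied here with good results."]
```

The extracted sentences are only raw material: deciding whether the result is actually used, merely mentioned, or criticised is the qualitative half of the method.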
Ten things you might do …

5. Analyse subject profiles
Many scientists do research in an interdisciplinary environment. Even their reviewers might work in different panels. The situation is even worse for "polydisciplinary" scientists. In principle, three basic approaches are possible.
1. Considering all activities as one total activity and "defining" an adequate topic for benchmarking
2. Splitting the profile into its components (which might, of course, overlap) for assessment
3. Neglecting activities outside the actual scope of the assessment
Which of the above models should be applied depends on the task. More research on these issues is urgently needed.
Ten things you might do …

6. Make an explicit choice between oeuvre and time-window analysis
The complete oeuvre of a scientist can serve as the basis of an individual assessment. This option should rather not be used in comparative analysis.
• The reasons are differences in age, profile and position, and the complexity of a scientist's career.
Time-window analysis is better suited for comparison, provided, of course, that like is compared with like and the publication periods and citation windows conform.
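A time-window analysis as described above fixes both a publication period and a citation window per paper. The sketch below uses hypothetical records and a three-year citation window (publication year included) as an illustrative choice.

```python
# Sketch: time-window counting with a fixed citation window.
# Each record: (publication_year, {citing_year: citations}); data hypothetical.
papers = [
    (2008, {2008: 1, 2009: 3, 2010: 2, 2012: 5}),
    (2009, {2010: 2, 2011: 1}),
    (2011, {2012: 4}),
]

def window_counts(papers, pub_from, pub_to, cite_window=3):
    """Count papers published in [pub_from, pub_to] and the citations they
    received within `cite_window` years of publication (year 0 included)."""
    selected = [(y, c) for y, c in papers if pub_from <= y <= pub_to]
    cites = sum(n for y, c in selected
                for cy, n in c.items() if cy - y < cite_window)
    return len(selected), cites

print(window_counts(papers, 2008, 2009))  # (2, 9)
```

Because every paper gets the same citation window, two scientists' counts are comparable even when their careers started in different years, which is exactly what the oeuvre-based option cannot guarantee.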
Ten things you might do …

7. Combine bibliometrics with career analysis
This applies to assessment on the basis of a scientist's oeuvre.
• Bibliometrics can be used to zoom in on a scientist's career. Here the evolution of publication activity, citation impact, mobility and changing collaboration patterns can be monitored.
• It is not easy to quantify these observations, and the purpose is not to build indicators for possible comparison but to use bibliometric data to depict, visually and numerically, important aspects of the progress of a scientist's career.
Some preliminary results have been published by Z  G (2012).
Ten things you might do …

8. Clean bibliographic data carefully and use external sources
Bibliometric data at this level are extremely sensitive. This implies that the input data, too, must be absolutely clean and accurate.
• To achieve this cleanliness, publication lists and CVs should be used where possible. This is important for two reasons:
  ◦ External sources help improve the quality of the data sources.
  ◦ Responsibility is shared with the authors or institutes.
• If the assessment is not confidential, the researchers themselves might be involved in the bibliometric exercise.
• Otherwise, scientists might be asked to provide data according to a given standard protocol, which can and should be developed in interaction between the user and bibliometricians.
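One routine cleaning step along these lines is matching CV or publication-list entries against a bibliographic source via normalised titles, so that unmatched items can be checked by hand. The titles and the normalisation rule below are hypothetical simplifications.

```python
# Sketch: checking CV entries against a bibliographic source via
# normalised-title matching; titles and records are hypothetical.
import re

def normalise(title):
    """Lower-case a title and strip punctuation and surrounding whitespace."""
    return re.sub(r"[^a-z0-9 ]", "", title.lower()).strip()

cv_titles = ["A Study of Citation Impact.", "An Unindexed Report"]
database = {normalise(t) for t in ["A study of citation impact"]}

matched = [t for t in cv_titles if normalise(t) in database]
unmatched = [t for t in cv_titles if normalise(t) not in database]
print(matched)    # ['A Study of Citation Impact.']
print(unmatched)  # ['An Unindexed Report']
```

The unmatched list is where shared responsibility comes in: rather than silently dropping such items, they should go back to the author or institute for verification.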
Ten things you might do …

9. Even some "don'ts" are not taboo if properly applied
There is no reason to condemn the often incorrectly used Impact Factor and h-index. They can provide supplementary information if they are used in combination with qualitative methods and are not used as the only decision criterion. Example:
• Good practice (h-index as supporting argument): "The exceptionally high h-index of the applicant confirms his/her international standing, attested to by our experts."
• Questionable use (h-index as decision criterion): "We are inclined to support this scientist because his/her h-index distinctly exceeds that of all other applicants."
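For completeness, the h-index referred to above is simple to compute: it is the largest h such that h of the papers have at least h citations each. The citation lists below are hypothetical; in the spirit of this slide, the result should only ever serve as a supporting figure.

```python
# Sketch: the h-index from a list of citation counts (hypothetical data),
# intended only as a supporting figure, never as the sole criterion.
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(ranked, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4
print(h_index([25, 8, 5, 3, 3]))  # 3
```

The second example illustrates a known limitation: one highly cited paper (25 citations) barely moves the index, which is one reason it cannot stand alone as a decision criterion.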
Ten things you might do …

10. Help users to interpret and apply your results
At any level of aggregation, bibliometric methods should be well documented. This applies above all to the level of individual scientists and research teams.
• Bibliometricians should support users in a transparent manner to guarantee the replicability of bibliometric data.
• They should issue clear instructions concerning the use and interpretation of their results.
• They should also stress the limitations of the validity of these results.
 Conclusions 

• The (added) value of, or damage done by, bibliometrics in individual-level evaluation depends on how and in what context bibliometrics is applied.
• In most situations, the context should determine which bibliometric methods should be applied and how.
• Soundness and validity of methods are all the more necessary at the individual level, but not yet sufficient. Accuracy, reliability and completeness of sources are an absolute imperative at this level.
• We recommend always using individual-level bibliometrics on the basis of the particular research portfolio. The best way to do this may be to design individual researcher profiles combining bibliometrics with qualitative information about careers and working contexts. The profile includes the research mission and goals of the researcher.
 Acknowledgement 

The authors would like to thank I R and J G for their contribution to the idea of a special session on this important issue, as well as the organisers of the ISSI 2013 conference for having given us the opportunity to organise this session. We also wish to thank L W and R C for their useful comments.