3. Content
• Critical evaluation
• Critical reading
• The art and science of critical evaluation
• Internal and external validity of a study
• How to critically evaluate a scientific paper
• Abstract
• Keywords
• Introduction
5. CRITICAL EVALUATION
INTRODUCTION
'Evidence-based medicine' has become a catch-phrase in the last
two decades. It means that reading, interpreting, evaluating and
acting on published literature should become a routine part of
clinical practice.
The decisions that we take, the techniques that we employ and
the skills that we engage should all be supported by an
evidence base.
6. One should not accept the content of a scientific article at
face value, assuming that the journal in which it is published
has a very high reputation and that the authors are considered
pioneers in the area. It is essential to remain open-minded and
to judge, evaluate and assess a scientific article by engaging
in critical reading, which facilitates critical evaluation.
7. CRITICAL READING:
The considered and justified examination of what others have
written or said regarding the subject in question, and the
ability to recognize, analyse and evaluate the reasoning and
forms of argumentation in articles. This skill is called
'critical reasoning'.
9. WHAT IS CRITICAL READING?
Critical reading is:
1. One that goes beyond mere understanding to making a personal
response to what has been written.
2. One that relates the writing in hand to various other
writings and tries to compare, contrast and deal with
contradictions.
3. One that does not take what is written at face value.
4. One that views research reports as a contested terrain,
within which alternative views and positions may be taken.
10. The assumption that reading should be simple can be an
obstacle when we are reading a scientific journal.
If we don't engage with the material actively, we might
"get" what the author says, but we cannot evaluate
it. We may know what the writer thinks about a
subject, but we cannot reasonably defend agreeing
or disagreeing with his opinion or argument.
11. If we don't read critically, we can't say much
beyond what the text happens to say. Critical
reading should be followed immediately by a
critical reflective session.
Skimming, annotating, asking questions, contemplating and
writing an outline of the text are the different strategies
employed during critical reading.
13. Comprehensive evaluation of a scientific paper
consists of two components:-
1. Critical Appraisal
2. Critical Reflection
14. Critical Appraisal
It can be defined as the assessment of the scientific quality
of the paper, including its design, methods and analysis.
Critical appraisal in dentistry has been described as making
sense of the evidence and systematically considering its
validity, results and relevance to dentistry.
15. As a critical appraiser one would ask questions like,
1. Is the title relevant to the study conducted?
2. Is the method used appropriate for answering the research
question?
3. Is the design appropriate for the defined aim and objectives
of the study?
4. Are the statistical tests used appropriate and indicated?
16. Critical Reflection
It is to judge and discuss the implications of the study for
the world outside the study.
The extent to which the study findings are generalizable, and
the consequences of the study for the larger population outside
it, are the two aspects majorly covered under critical
reflection.
18. Validity of a study is divided and presented under
two headings.
1. Internal validity: Can I believe the results of the
study?
This involves 'critical appraisal' skills to assess the
methodological quality of the study.
19. 2. External validity: Assuming the result of a study is
internally valid, how true is it for the wider population
outside the study?
This involves the skills of reflecting on the importance and
practical relevance of the study.
20. Two points to be always kept in mind by a critical
evaluator
1. Results are always biased.
Every study is dirty to some extent; the perfect one does not
exist. The crucial task is to judge the effect any bias may
have on the results and its implications for the results.
21. 2. Statistics do not equal truth
They give us an estimate of how chance might affect study
results.
One should remember that a statistically significant difference
means simply and only that a result is unlikely to have arisen
from chance alone.
23. 1. PROLOGUE
• The presenter for the session should ideally
introduce the topic in the discipline to which
the selected article belongs.
• This should be brief and relevant. This is
done primarily to facilitate a movement from
"general" to "specific" instead of abruptly
beginning the session.
24. 2. ABOUT THE ARTICLE
Identifying or tracing the source of the article (paper).
25. 3. ABOUT THE JOURNAL
• What type of journal is it?
• Who is the publisher of the journal?
• Under what section of the journal is the article published?
• What is the ISSN number of the journal?
• What is the MeSH representation?
26. • Is it peer reviewed or not?
• What is the year of publication?
• What is the issue number?
• What is the volume number?
27. PEER REVIEW
• When an article is submitted by authors to the editor of a
journal, before it is accepted for inclusion in the journal, it
is subjected to a process of peer review.
• According to the conventional view, the editors have
knowledge and integrity and act as 'gatekeepers'.
28. 5. ABOUT THE AUTHORS:
Is there a mention of the authors' names?
Is there a mention of designation/institutional attachments?
Have they done work in the same area, or in different areas,
prior to the present one?
Are they familiar figures in the literature?
29. Are they pioneers in any specific area?
What is their track record? (A seasoned reader will know the
track record of many authors.)
Are they new or unknown? (like the unknown sculptors)
31. TITLE OF THE ARTICLE
Does it indicate the topic and focus of the study?
Does it indicate the research question?
Is the title meaningful and complete?
Does it reflect the aim and objectives of the study?
Does it include the important variables which are
intended to be measured?
32. • Does it give an idea of the study population and study
setting (site)?
• Does it give an idea about the design of the study?
• Does the title look catchy?
• Is it very short or overlong?
• Is it too general, too specific or over-specific?
34. An abstract is defined as "an abbreviated, accurate
representation of the contents of a document,
without added interpretation or criticism and
without distinction as to who wrote the abstract".
35. The function of an abstract is to summarize the
nature of the research project, its context, how it
was carried out and what its major findings were.
Ideally it should require no more than one page of
text, and will typically be restricted to 200 to 300
words or less.
36. TYPES OF ABSTRACTS
Structured abstract:
The content is presented under subheadings like
Aim and objectives, methods, results, conclusions
etc.
Unstructured abstract:
It is a text without any subheadings, but the matter may
implicitly follow the same pattern.
38. Informative
It is best for papers describing original research. It should
typically contain 100-250 words. It should ideally answer the
following issues:
Aims and objectives
Why was the research done?
Methods
What was done and how?
39. Results
What were the findings?
Conclusions
What do the findings mean?
If the paper is about a new method or apparatus:
a) What are its advantages?
b) How well does it work?
40. Indicative
It is used for long articles such as reviews, reviews of
reviews, meta-analyses (secondary research) etc.
It gives the reader a general idea of the contents of the paper
but little, if any, idea of specific methods or results.
41. Do's and Don'ts about the abstract:
Under no circumstances should the abstract contain any
information which is not in the following text. A short
abstract should be written in a single paragraph.
The sentences should be complete and follow logically.
Acronyms, abbreviations and symbols should be avoided; if used,
they should be defined at first mention. Tables, diagrams and
equations are not to be included in the abstract. Citations are
not to be included.
42. Difference between an abstract and a summary of an article:
A summary is not the same as an abstract. Strictly speaking, a
summary restates the main findings and conclusions of a paper
and is written for people who have already read the paper. It
is placed after the discussion section. An abstract is an
abbreviated version of the paper written for people who may
never read the complete version.
44. In the journal's contents list, the most important and
specific words are chosen for listing as keywords.
Keywords facilitate searching literature databases; the
keywords selected should facilitate data searches through
engines when 'explode' commands are given to the databases.
45. Everyone is facing the problem of how to deal with the
ever-increasing volume of literature.
Index Medicus and Excerpta Medica are index journals which can
be searched electronically using Medline (the National Library
of Medicine's bibliographical database) and MeSH terms (Medical
Subject Headings).
46. PUBMED, EMBASE, HEALTHSTAR, SSCI and SCI are other
databases.
'Boolean operators' can be used to focus the search: the AND,
OR, NOT and AND NOT operators.
The Cochrane Library is another computerized database that can
be searched for systematic reviews and clinical trials.
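The effect of the Boolean operators on a search can be sketched
with sets of article IDs; the search terms and IDs below are
invented purely for illustration.

```python
# Hypothetical article IDs returned by a database for three terms
caries = {101, 102, 103, 104}      # articles matching "dental caries"
fluoride = {103, 104, 105}         # articles matching "fluoride"
children = {102, 104, 106}         # articles matching "children"

# AND narrows the search: only articles matching both terms
print(sorted(caries & fluoride))               # [103, 104]
# OR widens the search: articles matching either term
print(sorted(caries | fluoride))               # [101, 102, 103, 104, 105]
# AND NOT excludes a term from the combined result
print(sorted((caries & fluoride) - children))  # [103]
```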
47. Critical appraisal
Questions related to the abstract and keywords:
1. Is it structured or unstructured?
2. Is the abstract informative, indicative or a combination of
informative and indicative?
3. Is it comprehensive in its contents (containing aim and
objectives, methods, results and conclusions)?
48. 4. Is it short, long or overlong?
5. Does it give the gist of the whole text?
6. Does the information given in the abstract match what is
present in the detailed text?
7. Does it contain any information which is not in the text?
49. 8. Are there acronyms, short forms or abbreviations which
are not defined?
9. Can it facilitate the reader in selecting a pertinent study?
50. IMRaD
It is the conventional format (modified from
Greenhalgh 1997) which is highly accepted as a
norm in the publication circle. It refers to the order
in which the contents of an article should be
presented.
51. I = INTRODUCTION (why was the research done?)
M = METHODS (how was the study done, and how were the results
analysed?)
R = RESULTS (what were the findings?)
D = DISCUSSION (what do the findings mean?)
Critical appraisal question:
Does the article follow the IMRaD format?
54. It should ideally introduce the literature to the reader.
This section answers the questions: why was the study done?
What forms the background for the study?
The introduction should have logically flowing sentences which
create a movement from general (background) to specific
(foreground).
55. The background provides the context of the study and the
foreground reveals the specific research.
The introduction should enable a reader to understand the
current status of knowledge in the respective area.
The introduction should cite a small number of important and
pertinent papers already published in the journals (literature
exploration).
56. These citations should be represented in the list of
references or bibliography at the end of the article. There
should be no citations which are not represented in the
references.
57. The last few lines of the introduction should mention the
research question, research hypothesis, aim and objectives of
the study, either explicitly or implicitly.
59. HOW TO BEGIN INTRODUCTION?
There are three standard methods of stock opening.
1. Seminar approach
2. Alarmist approach
3. Much discussion recently (MDR) approach.
60. The Vancouver Group states that the purpose of the
introduction is to summarize the rationale for the study or
observation, give only strictly pertinent references, and not
include data or conclusions from the work being reported. The
central part of the introduction should cover the relevant
research which forms the background to the study.
61. The introduction should normally lead towards an overview
of what the study will actually do and should conclude with a
statement of the hypothesis that the study actually intends to
test.
62. Critical appraisal
Questions related to the introduction
1. Is the introduction meaningful and concise?
2. Is it built on existing literature?
3. Has it adopted a specific approach, or is it written
according to a specific pattern?
4. Is it logically presented?
63. 5. Are there omissions of some important studies in the
citations?
6. Are the citations relevant and pertinent to the study being
reported?
7. Are these citations followed with correct references in the
list of references?
8. Has it presented the need for the study?
64. 9. Is there an implicit or explicit mention of the aim and
objectives?
10. Has it stated the research question or research hypothesis?
11. Has it succeeded in introducing the background of the study
subject to the reader?
67. This section answers the question: what was done? The
methodology section can be called the "brain" of the paper, for
it refers to the active area of operation of a study.
It should give readers clear and correct information about the
methods employed, how they were employed, and on whom and when
they were employed, including the statistical methods utilized
for analysis and the ethical guidelines which were followed.
68. General part of the methodology:
Context of the study
It is concerned with two issues:
1. Study setting: Where did the study take place?
2. Study population: On whom was the actual study carried out?
69. I. STUDY DESIGN:
A plan or scheme for setting up a study which will test the
stated hypothesis and address the broader issues raised by the
research question is called the study design.
70. 1. Is it interventional (experimental) or observational?
An interventional study is one where researchers deliberately
introduce or withdraw treatments or procedures as part of the
study.
An observational study is one where the researchers observe
what is happening as mute spectators, without intervening in
what is happening.
71. 2. The Time Frame
(Cross-sectional, Retrospective, Prospective)
a. Cross-sectional surveys provide a snapshot of the current
state of affairs. They are used for assessing the burden of
disease in a specific population or to assess health needs.
b. Retrospective studies (case-control studies) move backward
from effect to cause. Looking back at what has already happened
is the key.
72. c. Prospective studies (cohort studies) move forward from
cause to effect. They are an investigation into the future and
involve follow-up.
73. 3. Controlled or Uncontrolled
The control group is essentially the same as the study group in
all its characteristics except for the factor which is under
study. Positive controls and placebo controls are used.
74. 4. Randomized or non-randomized:
The key feature of a randomized intervention study is that the
allocation of a particular treatment to any person who
participates in the study is entirely random. The advantage is
that selection bias is eliminated and one can attribute the
results to only two things: the intervention or chance.
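A minimal sketch of random 1:1 allocation, with hypothetical
participant IDs (real trials use prepared randomization lists
and allocation concealment, which this sketch omits):

```python
import random

random.seed(42)  # fixed seed, reproducible for this illustration only

# 20 hypothetical participant IDs
participants = [f"P{i:02d}" for i in range(1, 21)]
shuffled = participants[:]
random.shuffle(shuffled)  # every ordering equally likely

# Simple unstratified 1:1 allocation after shuffling:
intervention = shuffled[:10]
control = shuffled[10:]
print(len(intervention), len(control))  # 10 10
```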
75. 5. Blinding or no blinding:
Blinding is done to eliminate the subjective bias which can
arise as a result of the participants, investigator and
statistician having prior knowledge of the group allocation,
the intention of the study and the agent used.
76. Critical appraisal
Questions related to study design:
1. Whether observational or interventional?
2. Whether cross-sectional, retrospective or prospective?
3. Whether controlled or uncontrolled?
77. 4. Whether randomized or non-randomized?
5. Whether blinded or non-blinded?
6. If blinded, is it single/double/triple blinded?
78. II. SAMPLING STRATEGY
It presents the sampling method, sampling frame, sample size
and the methods for assigning samples to conditions. The sample
size justification, or the way the authors arrived at a
specific sample size, plays a vital role in deciding the
validity of the study.
The inclusion and exclusion criteria used while selecting
candidates need a clear mention.
79. III. MEASUREMENT STRATEGIES AND MEASUREMENT INSTRUMENTS
What were the variables measured (primary and secondary), and
how were they measured? The parameters under measurement should
be defined, and the definition should be more practical than
theoretical.
'Standardization' of the measuring criteria can be useful and
can eliminate 'measurement bias'.
80. It can be done by using the standard indices and
instruction manuals of standard research organizations.
When more than one examiner is assigned the duty of
measurement, calibration of the examiners is a must, and the
details of the calibration techniques used to reduce
intra-examiner and inter-examiner variability using kappa
statistics require a clear mention in this section.
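Inter-examiner agreement beyond chance is what Cohen's kappa
measures. A minimal sketch, with hypothetical caries scores from
two examiners (the data are invented for illustration):

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa: agreement between two raters corrected
    for the agreement expected by chance alone."""
    n = len(ratings_a)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    # chance agreement from each rater's marginal frequencies
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical scores (1 = carious, 0 = sound) for ten teeth
examiner_1 = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
examiner_2 = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]
print(round(cohens_kappa(examiner_1, examiner_2), 2))  # 0.8
```

Here the raw agreement is 90%, but half of that would be
expected by chance, so kappa is 0.8.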
81. The sensitivity, reliability and specificity attained by
the measurement have to be mentioned. The above-said measures
can eliminate measurement bias and instrument bias to a great
extent.
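Sensitivity and specificity follow directly from a 2x2 table
against a gold standard; a sketch with hypothetical screening
counts:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = diseased subjects correctly detected;
    specificity = healthy subjects correctly cleared."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical screening test vs gold-standard diagnosis:
# 45 true positives, 5 false negatives, 90 true negatives,
# 10 false positives
sens, spec = sensitivity_specificity(tp=45, fn=5, tn=90, fp=10)
print(sens, spec)  # 0.9 0.9
```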
82. a. Defining Measures
Measures are of three types.
1. Baseline measures: characteristics measured at the start of
the study. They are required to study their influence on the
outcome and also to ensure comparability.
2. Process measures: the way in which the treatment or measures
are carried out in the study (e.g. number of visits, dosage of
drugs, types of specimens, at what intervals). They give an
idea about the rigour of the methods.
84. 3. Outcome measures: those occurrences which the study aims
to investigate (death, recovery, return to work, disability,
reduction in blood pressure etc.).
Important outcome measures in direct relation to the objectives
of the study are called 'primary outcomes'. Those which are
important but not the main focus of the study are called
'secondary outcomes'.
86. IV. The Experimental Design
This should be described in detail, so that the reader is able
to replicate the study. The recruitment, orientation and
assignment of subjects, followed by a record of drop-outs,
missing subjects and loss of compliance, should be noted down.
Statistical methods have been developed for doing drop-out
analysis and also for adjusting for the loss.
87. V. Statistical Analytical Procedures
The proposed strategy for quantifying, evaluating and analysing
the results should be presented along with the actual
statistical procedures employed. The significance levels and
confidence limits set for the study need an explicit mention in
this part. The specific statistical tests selected should be
clearly mentioned.
88. VI. Ethics
One should report that experiments on human subjects were done
in accordance with the ethical standards of the responsible
committee on human experimentation (institutional or regional).
89. VII. STATISTICS
a) Statistical methods should be described in enough detail.
b) The confidence limits set and the 'p' value should be
mentioned.
c) Details about randomization, blinding procedures and
complications of treatment should be given.
90. d) Losses to observation, such as drop-outs and
non-compliance, should be mentioned.
e) Unless required, complex statistical procedures should be
avoided.
f) If computer programmes such as SPSS or Epi Info are used for
analysis, a mention has to be made in this section.
91. VANCOUVER GROUP ON METHODS
a) Describe the selection of subjects (patients, laboratory
animals or controls).
b) Identify the age, sex and other important characteristics of
the subjects.
c) Identify the methods, apparatus (manufacturer's name and
address in parentheses) and procedures in sufficient detail to
allow replicability.
92. d) Give references to established methods, describe new or
substantially modified methods and give the reasons for using
them.
e) Reports on randomized controlled trials should present
information on all study elements, including the protocol
(study population, intervention, outcomes etc.), the assignment
of interventions (randomization, concealment of allocation) and
the methods of masking (blinding).
93. f) Authors of review articles should include a section on
the methods used for locating, selecting, extracting and
synthesizing data.
94. Critical appraisal
Questions related to methodology:
I. What and how of the study?
1. Is the methodology presented in a logical, clear and
meaningful manner? Is it replicable?
2. Is the design appropriate for the aim and objectives set by
the study?
3. Has the author mentioned the design of the study explicitly?
4. On whom was the study done?
5. How were the subjects selected?
95. 6. Has the study setting been mentioned by the authors?
7. Are the target population, sampling frame and study
population clearly defined?
8. Were the subjects studied in "real life" circumstances?
II. Was the design of the study sensible?
1. What specific intervention or other manoeuvre was being
considered, and what was it being compared with?
96. 2. What were the variables measured, and what were the
baseline values?
3. What were the outcomes measured, and how were they measured?
III. Was systematic bias avoided or minimized?
• In clinical trials
Was randomization done for allocation into the study and
control groups?
97. • In cohort studies
Was complex statistical adjustment made for baseline
differences in key variables between the exposure cohort and
the control cohort?
• In case-control studies
Was the diagnosis of "caseness" made based on clear criteria?
Was misclassification avoided?
98. IV. Was the assessment "blind"?
What type of blinding was done?
V. Were preliminary statistical questions dealt with?
1. Were the sample size and its derivation mentioned?
2. What was the power of the study?
3. Was the duration of follow-up justified?
4. What was the "drop-out rate" or "drop-out proportion"?
5. What statistical measures were taken to control the
"drop-out" effect? Have the authors made a mention of them?
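Question 1 above (sample size and its derivation) often reduces
to a standard formula. A sketch using the normal approximation
for comparing two means; the effect size, SD, alpha and power
below are hypothetical:

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_group(delta, sd, alpha=0.05, power=0.80):
    """Approximate n per group to detect a mean difference
    delta (common SD sd) with a two-sided test, using the
    standard normal approximation."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha 0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    return ceil(2 * ((z_alpha + z_beta) * sd / delta) ** 2)

# e.g. detecting a 5-unit difference when SD = 10:
print(sample_size_per_group(delta=5, sd=10))  # 63 per group
```

Halving the detectable difference roughly quadruples the
required sample, which is why the derivation deserves explicit
mention in a paper.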
99. VI. Advanced statistical questions
1. What sort of data have the authors obtained? (quantitative
or qualitative)
2. What is the nature of the observed data distribution?
(normal distribution or otherwise)
3. Have they used appropriate statistical tests? (parametric or
non-parametric)
4. If the authors have used complex and obscure statistical
analytical tests, have they mentioned why they were used? Have
they referenced them?
100. 5. Were the "outliers" mentioned and analysed with "common
sense" and suitable statistical adjustments?
6. Has correlation been distinguished from regression, and has
the correlation coefficient (r-value) been calculated and
interpreted correctly?
7. Have "p" values been calculated correctly and interpreted
appropriately?
8. Have confidence intervals been calculated, and do the
authors' conclusions reflect them?
101. 9. Have the authors expressed the effects of an
intervention in terms of the likely benefit or harm which an
individual patient or community can expect? (effect size)
VII. Standardization or calibration
1. Were the examiners calibrated?
2. Were the instruments used calibrated?
3. Were the methods standardized?
102. VIII. For descriptive surveys
1. Was a pilot study conducted to check the feasibility,
consistency and validity of the measuring instrument and the
methods?
2. How were the subjects selected?
a) Was it deliberate or opportunistic?
b) Did subjects volunteer?
3. What steps were taken to ensure that the data were reliable
and repeatable?
103. 4. What steps were taken to eliminate the effects of the
researcher or the research procedure on the responses of the
subjects?
5. How were the data gathered?
6. If questionnaires were used, what was the layout?
How many questions were there?
What were the types of questions?
Were they tailored to reach common people?
104. How was it administered?
Was it translated into the local language?
Was it checked for validity, reliability and consistency?
What statistical tests were done to check them?
How were the ethical implications managed?
107. The first few tables should provide subject
characteristics (descriptive statistics) and the later tables
should describe outcomes, as measurements of the dependent
variable reported by the study, and analytic results
(inferential statistics).
108. Thus, they may provide clues for future studies and
expansion into newer areas. Every table, chart or graph should
be numbered. In tables, the columns should have appropriate
headings. Every table should be titled.
Tables should be simple, because complex tables are difficult
to read, as most of us may have an inherent fear of numbers.
Graphs and charts should also be presented in a legible form
with particular details.
109. There are no specific rules about summarizing results,
except that we need to identify the important and relevant
ones. The description of the main results should be based on
the aim and objectives of the study.
110. Tables may be used to summarize information, usually in a
numerical format, and they indicate the relationships between
the different variables under consideration. Diagrams are also
useful for indicating relationships and structure: they can
convey ideas much more effectively than lengthier textual
explanations.
111. When to use these illustrations?
• Where the illustration replaces a substantial piece of text
(i.e. a paragraph or more), use it but do not keep the text as
well.
• Where the illustration serves to make a point which would be
difficult to make otherwise.
• An illustration should not be used if it is copyrighted and
appropriate permission has not been obtained.
• Don't use an illustration unless it is clear, unambiguous and
well reproduced.
112. THE VANCOUVER GROUP ON THE RESULTS
Present your results in a logical sequence in the text, tables
and illustrations. Do not repeat in the text all the data in
the tables or illustrations; emphasize or summarize only
important observations.
113. On statistics:
Put a general description of statistical methods in the methods
section. When data are summarized in the results section,
specify the statistical methods used to analyse them. Restrict
tables and graphs to those needed to explain the argument of
the paper and to assess its support; use graphs as an
alternative to tables when there are many entries.
114. On tables:
Number the tables consecutively in the order of their first
citation in the text and supply a brief title for each. Give
each column a short or abbreviated heading. Place explanatory
matter in footnotes, not in the heading. Identify statistical
measures of variation such as the standard deviation and the
standard error of the mean. Be sure that each table is cited in
the text.
115. Questions related to the results:
1. Are the results presented in a logical and comprehensible
manner?
2. Are the important results presented in both the tables and
the text?
3. Are the tables, charts and graphs numbered and titled
properly?
4. Are there tables showing descriptive as well as inferential
data?
116. 5. Wherever required, are there notes below the tables?
6. Are the tables simple, and is the alignment of information
properly done?
7. Do the data given in the text and tables match or tally with
each other?
8. Are the diagrams, graphs and charts judiciously used?
9. Are the results based on the aim and objectives of the
study?
119. The discussion is majorly about interpreting and
explaining the results obtained. The researcher attempts to
make sense of the findings. Inferences are drawn with respect
to the population, product and test materials which were used
in the study.
120. Structure of a discussion
A discussion should be constructed in a linear manner with
logical transitions and should have the following structure.
1. A summary of the principal results of the study.
2. A comparison of the study findings with those of previous
research. The reasons for discrepancies should be explained and
properly accounted for.
121. 3. An explanation of the problems and limitations of the
study.
4. Suggestions for future research, to remedy the limitations
and extend the generality of the findings.
122. The first sentence is relatively straightforward and
summarizes the main findings of the research: "In this study we
found that...". This is followed by a brief essay about their
implications. The discussion should emphasize how the study has
unravelled some important aspects related to the topic and how
it is different from or similar to other studies.
123. If the study has broken new ground in the area, this has
to be highlighted and explained in detail. The discussion is
often the place in an article where authors can give full rein
to their imagination. The language of speculation can be used
with a degree of control; hence it is the most creative part of
an article.
124. The limitations of the study, and the extent to which the
study conclusions can be generalized, need a clear description
in this section. The prospects for conducting new studies
should be discussed.
125. THE VANCOUVER GROUP ON THE DISCUSSION
Include in the discussion section the implications of the
findings and their limitations, including implications for
future research. Relate the observations to other relevant
studies.
126. Critical appraisal
Questions related to the discussion:
1. Is the discussion meaningful?
2. Does it highlight the important findings of the study?
3. Is there enough explanation of all significant findings?
4. Have the authors compared the current findings with those
already reported in the literature?
127. 5. Is the comparison logical and properly reasoned?
6. Are the implications of the study with respect to the
research field, practice, other populations and the general
population discussed?
7. What are the limitations of the study as presented in the
discussion?
8. Does the discussion include an element of imagination or
speculation?
130. At the end of the article, the researcher provides a
summary and interpretation of the study findings and attempts
to draw conclusions related to the original theory and research
question. The summary should provide a gist of what the study
was about, what was done and what was found.
131. It is important to arrive at our own conclusions after
critically reading the paper, irrespective of the author's
conclusions. The reasoning proposed by the author in reaching
the conclusions should be rigorously analysed and assessed for
its strength.
132. Any conclusion which extends beyond the framework of the
aim and objectives of the study, or beyond the results drawn by
the study, is called an "extended conclusion". Such a
conclusion is invalid, lacks evidence and is unsupported by
data.
133. Critical appraisal
Questions related to the conclusions:
1. Are the conclusions meaningful?
2. Are they supported by the data collected and the results
drawn (evidence-based or anecdotal)?
3. Are they based on the aim and objectives of the study?
4. Has the research question been answered?
5. Have they generated and presented some new hypotheses as
conclusions?
6. Have they made appropriate suggestions or recommendations?
136. References provide an opportunity for the reader to pursue
further reading and enable more learning.
Primary and secondary references:
Primary references are direct sources of the material which is
cited in the article.
Secondary references are indirect sources, because they would
have imported and cited that material from others' work.
137. References or bibliography?
A 'reference list' contains references to material directly
used in the research and preparation of the scientific article,
that is, the material cited in the text.
A 'bibliography' contains references to works for further
information or background reading, in addition to all the
references.
138. VANCOUVER GROUP ON REFERENCES
References should be numbered consecutively in the order in
which they are first mentioned in the text; references should
be identified in the text and tables. Avoid using abstracts as
references; references to papers accepted but not yet published
should be designated as 'in press' or 'forthcoming'.
139. Critical appraisal
Questions related to references:
1. Are there references for every citation made in the text,
tables, legends etc. of the article?
2. How many secondary references are present?
3. Are they accurate references?
4. Are there enough references to recent publications?
5. Have they been presented according to specific scientific
conventions?
140. 3. CRITICAL REFLECTION OF A SCIENTIFIC PAPER
Critical reflection is an integral step of critical evaluation.
It should follow soon after critical appraisal. It refers to
the external validity of the study. On critical appraisal, a
study may be found to be highly internally valid.
141. The very next question of the evaluation is: so what? (Is
it externally valid?)
1. Can the results and conclusions be extrapolated to other
populations and other contexts? (generalizability)
2. Can the conclusions be applied to routine practice?
(clinical practice significance)
142. 3. What is its utility for the population at large?
(public health significance)
4. How significantly can it change our concepts, ideas and
nature of practice? (concept tilting)
5. What are its implications at the economic level? (cost
benefit, economic viability, cost effectiveness)
144. Referencing Styles
There are various standard methods used for citing the source
of work. These methods are called referencing styles or
citation styles.
Some common and widely used citation styles are:
• Harvard
• Vancouver
• APA (American Psychological Association)
146. There are other styles that are less common but are still
required in some places:
• ACS (American Chemical Society)
• AGLC (Australian Guide to Legal Citation)
• AMA (American Medical Association)
• CSE/CBE (Council of Science Editors/Council of Biology
Editors)
• IEEE (Institute of Electrical and Electronics Engineers)
147. Checklist for Epidemiological Studies
Reporting guidelines are potent tools which help to improve the
accuracy, transparency and completeness of health research and
increase the value of published research.
161. References
• Nagesh L. A Handbook on Journal Club and Critical Evaluation.
• von Elm E, Altman DG, et al. The Strengthening the Reporting
of Observational Studies in Epidemiology (STROBE) Statement:
Guidelines for Reporting Observational Studies. UIJ. 2009
Apr;2(2). doi:10.1371/journal.pmed.0040296
• Moher D, Liberati A, Tetzlaff J, Altman DG, The PRISMA Group
(2009). Preferred Reporting Items for Systematic Reviews and
Meta-Analyses: The PRISMA Statement. PLoS Med 6(7): e1000097.
doi:10.1371/journal.pmed.1000097