2. WHAT INFORMATION WOULD I REQUIRE
TO ACCEPT THE CONCLUSION?
Fisher’s assertability question
3. KIPLING'S SIX HONEST SERVING MEN
• I keep six honest serving-men
• (They taught me all I knew);
• Their names are What and Why and When
• And How and Where and Who.
R. Kipling, 1902
4. THE PLAN
• Overview of article structure
• What is considered to be the gold standard of research?
• Hands-on evaluation of medical articles
5. SCIENTIFIC PAPER
The Council of Biology Editors defines a scientific paper as follows: an acceptable primary scientific publication must be the first disclosure containing sufficient information to enable peers:
• 1- to assess observations
• 2- to repeat the experiment
• 3- to evaluate the intellectual process
7. Why: study design
How: study methodology
Who: study population
What: intervention and outcomes
How many: statistics
What else?
8. TITLE
• Has to be informative, concise, and graceful!
• Attracts the reader!
• Tells what the study is about!
• Why was the study done?
• What gets studied is what gets funded!
9. AUTHORS
• Are they known in the field?
• What's their specialty?
• Citation index.
10. DATE OF SUBMISSION/ACCEPTANCE
• A long delay may indicate that the referees found serious issues in the initial version
11. ABSTRACT
• Why the study was done
• What was done
• What was found
• What was concluded
• It helps to answer Fisher’s assertability question
12. IT'S NOT HOW THE DATA WERE ANALYZED,
IT'S HOW THE DATA WERE COLLECTED
13. INTRODUCTION
• How important is the study, and what's new?
• Is there a clear statement justifying the study?
• Is there a clear statement of the study hypothesis?
14. WHY: STUDY QUESTION
• The study design, the population to be studied, and the methods to be used all depend on the purpose of the study
• Was the hypothesis stated in advance, or did it arise from the data?
• "Fishing expedition": exploring the data for associations, then reporting the significant ones!
15. Is It Efficacy Or Effectiveness?
• Therapeutic studies
Efficacy:
• whether the intervention can work; a very controlled population and experimental conditions (ideal); short-term goals
Effectiveness:
• whether the intervention will do more good than harm under normal clinical conditions; long-term goals
16. Bypass Surgery In Patients With Coronary Heart Disease
Efficacy:
• Patients with clearly documented coronary stenosis
• Increased myocardial flow or relief of symptoms
• Anyone allocated to treatment who did not receive it is not included in the study
Effectiveness:
• Policy: "intent-to-treat" principle
• Long-term survival, quality of life
• Anyone allocated who did not receive the surgery is included in the study
17. Nicotine patch therapy in adolescent smokers
Smith et al., 1996
What information would I require to accept the conclusion?
18. WHY
Is sufficient evidence presented to justify the study?
Is there a clear statement of the purpose of the study?
Is there a clear statement of the study hypothesis?
Is it clearly outlined whether the study addresses efficacy or effectiveness?
19. HOW: STUDY DESIGN
• Experimental
– Randomized
– Non-randomized
• Observational
– Comparison group? Yes: analytical (cohort, case-control, cross-sectional)
– Comparison group? No: descriptive
20. Comparison Group
Almost all studies have a comparison:
• Do left-handed subjects live longer than right-handed ones?
• Are women more likely to have periodontal disease than men?
• The comparison has to be fair
21. EXAMPLE 2: Cancer And Vitamin C
• An observational study of Vitamin C as a treatment for advanced cancer.
• For each patient, ten matched controls were selected with the same age, gender, cancer site, and histological tumor type.
• Patients receiving Vitamin C survived four times longer than the controls (p < 0.0001).
• Cameron and Pauling 1976
22. • Ten years later, the Mayo Clinic conducted a randomized experiment which showed no statistically significant effect of Vitamin C. Moertel 1989
• Why did the Cameron and Pauling study differ from the Mayo study?
• The treatment group represented patients newly diagnosed with terminal cancer. They received Vitamin C and were followed prospectively.
• The control group was selected from death certificate records; it represented a retrospective chart review.
23. Be Cautious When A Study Compares Prospective Data To Retrospective Data
24. Did The Authors Create The Groups?
• Experimental study
• Observational study
• Who did the choosing?
• The authors decided who got the intervention: experimental
• Patients/doctors decided, or groups were intact prior to the study: observational
25. EXAMPLE
121 children with moderate-to-severe asthma were
"randomly assigned to receive subcutaneous
injections of either a mixture of seven aeroallergen
extracts or a placebo.” Adkinson (1997),
an experimental design.
26. EXAMPLE
"80 severe recidivist alcoholics received acupuncture
either at points specific for the treatment of substance
abuse (treatment group) or at nonspecific points
(control group).” Bullock (1989),
Since the researchers controlled the nature of the
acupuncture, this is an experimental design.
27. EXAMPLE
33 health care workers who became seropositive to HIV
after percutaneous exposure to HIV-infected blood
were compared to 665 health care workers with similar
exposure who did not become seropositive. Cardo (1997)
Since the researchers did not control who became
seropositive, this is an observational study.
28. EXAMPLE
• 80,082 women between the ages of 34 and 59
years were followed for 14 years to look for
instances of non-fatal myocardial infarction or
death from coronary heart disease. These women
were divided into low, intermediate, and high
groups on the basis of their consumption of dietary
fat. Hu (1997),
• Since the women themselves controlled their diets,
rather than having a diet imposed on them by the
researchers, this represents an observational design
29. • Information from experimental designs is considered more authoritative than information from observational designs
30. HOW: STUDY DESIGN
What is the study design?
Was it randomized?
Was it blinded?
Was prognostic stratification used?
31. HOW: STUDY DESIGN
• Controlled trial
• Before-and-after
• Prospective analytic
• Cross-sectional
• Retrospective
• Case series
32. Was The Assignment Randomized?
• Randomization provides assurance that the two groups are comparable in every way except for the therapy received.
• Look for use of a random device, such as a coin flip or a table of random numbers.
• Be alert to "quasi-random allocation": patients allocated on the basis of a seemingly random process (birth date, chart number, etc.)
• If randomization was not followed: could any bias have occurred from the allocation of patients?
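A shuffle can stand in for the table of random numbers mentioned above. This is a minimal sketch, not a trial-grade allocator; the patient IDs and the `seed` parameter (used only to make the example reproducible) are hypothetical:

```python
import random

def randomize(patients, seed=None):
    """Randomly allocate subjects to two equal arms by shuffling.

    The shuffled order, not any patient characteristic, decides
    the arm -- the software equivalent of a table of random numbers.
    """
    rng = random.Random(seed)
    ids = list(patients)
    rng.shuffle(ids)
    half = len(ids) // 2
    return {"treatment": ids[:half], "control": ids[half:]}

# Hypothetical patient IDs 1..20; the fixed seed only makes the
# example reproducible.
arms = randomize(range(1, 21), seed=42)
```

Because every permutation is equally likely, no conscious or subconscious preference can steer a particular patient into a particular arm.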
33. Why Is Randomization Important?
• Groups are more comparable for known and unknown variables (measurable and unmeasurable)
• Eliminates selection bias
• Some statistical analyses presuppose randomization
• It's difficult to have blinding in a trial that is not based on random allocation
34. What Type Of Blinding Was Used?
• Knowledge of group membership, either before or during data collection, can bias the study
• At the start of the study, did the patients know which group they were going to be placed in?
• During the study, did the patients know which group they were in?
35. BLINDING
• Single blind
• Double blind
• Triple blind
• Surgical trials: blind at least whoever performs the outcome assessment.
• If the study was not blinded... how might the lack of blinding have affected the results?
36. BLINDING
• Studies without blinding show an average bias of 11-17% (Schulz 1996, Colditz 1989)
• Compared with a blinded study, an unblinded one overestimates the treatment effect by 11-17%
37. Why Is Blinding Important?
• Prevents bias from allocating patients to the experimental/control group, e.g., very sick patients.
• Minimizes differences in how patients are treated during care.
• Prevents losing patients from the trial.
• Prevents positive placebo effects of a treatment.
• Greater validity of results: more likely to report side effects.
• Minimizes expectation bias for subjective outcomes, e.g., pain.
38. WHO: STUDY POPULATION
• Is the population from which the sample was drawn clearly described?
• Did they represent the full spectrum of disease or a certain subset?
39. WHO: STUDY POPULATION
• Clear and replicable inclusion and exclusion criteria
• Did the criteria match the goal of the study?
• Who was excluded at the start of the study?
• Who dropped out during the study?
• Was there any effort to minimize dropout?
• Were the authors able to characterize the demographics of the dropouts?
41. WHAT: Intervention And Outcomes
• What is the intervention? Is it clearly stated?
• Were there enough subjects?
• Did the research have a narrow focus?
• Did the authors deviate from the plan?
• Did the authors discard outliers?
42. Were There Enough Subjects?
• A small sample size leads to a lack of power: a negative study
• In half of the articles reporting a non-significant difference between therapies, a 50% improvement in performance could easily have been missed
• Type II errors and small sample sizes are ubiquitous in the medical literature (Freiman et al.)
• Predetermine the needed sample size!
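Predetermining the sample size can be sketched with the standard normal-approximation formula for comparing two means. A minimal sketch, assuming a two-sided test and a standardized effect size d = delta/sigma (the function name and defaults are illustrative):

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate subjects needed per arm to compare two means
    (two-sided test) at a given standardized effect size d."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = z.inv_cdf(power)           # ~0.84 for 80% power
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# A moderate effect (d = 0.5) needs about 63 subjects per arm;
# halving the effect size roughly quadruples the requirement.
print(n_per_group(0.5))   # 63
print(n_per_group(0.25))  # 252
```

The quadrupling illustrates why underpowered "negative" studies are so common: detecting a subtle benefit demands far more subjects than detecting a dramatic one.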
43. Did The Research Have A Narrow Focus?
• A good research study has limited objectives that
are specified in advance. Failure to limit the scope
of a study leads to problems with multiple testing.
• A large number of comparisons limits the amount of
evidence that you can place on any single
conclusion.
• Fishing
44. • "If you torture your data long enough, it will confess to something."
45. Be aware of multiple comparison problems:
an increase in type I error
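The inflation of type I error follows from 1 - (1 - alpha)^k for k independent tests, each run at level alpha. A small sketch of that arithmetic:

```python
def familywise_error(n_tests, alpha=0.05):
    """Chance of at least one false positive when running n_tests
    independent tests, each at significance level alpha."""
    return 1 - (1 - alpha) ** n_tests

# One test keeps the 5% error rate; twenty independent tests push
# the chance of a spurious "significant" finding to about 64%.
print(round(familywise_error(1), 2))   # 0.05
print(round(familywise_error(20), 2))  # 0.64
```

This is why a "fishing expedition" across many outcomes will almost always land something with p < 0.05.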
46. Were Statistical Tests Applied Appropriately?
• Requires knowledge of biostatistics
• Greenhalgh T. Statistics for the non-statistician. Part I. British Medical Journal 1997
• Greenhalgh T. Statistics for the non-statistician. Part II: significant relations and pitfalls. British Medical Journal 1997
47. • Withdrawals: patients removed by the investigators
• Dropouts: patients leave the study of their own will
• Crossover: patients change arms of the study
• Poor compliers
• Intent to treat: subjects are analyzed according to the treatment to which they were randomized
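The intent-to-treat rule above can be sketched in a few lines. The trial records here are hypothetical, and the per-protocol function is included only for contrast; neither comes from the cited studies:

```python
# Hypothetical trial records: each subject carries the arm they
# were randomized to and the treatment they actually received.
subjects = [
    {"id": 1, "randomized": "drug", "received": "drug"},
    {"id": 2, "randomized": "drug", "received": "placebo"},  # crossover
    {"id": 3, "randomized": "placebo", "received": "placebo"},
    {"id": 4, "randomized": "placebo", "received": "drug"},  # crossover
]

def intent_to_treat(records):
    """Group subjects by the arm they were randomized to,
    regardless of the treatment actually received."""
    groups = {}
    for r in records:
        groups.setdefault(r["randomized"], []).append(r["id"])
    return groups

def per_protocol(records):
    """For contrast: keep only subjects who received the arm
    they were assigned to (discards crossovers)."""
    return [r["id"] for r in records if r["randomized"] == r["received"]]
```

Intent-to-treat keeps all four subjects and so preserves the comparability that randomization created; per-protocol discards the two crossovers and can reintroduce selection bias.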
48. SO WHAT
• If a difference was detected... is it clinically significant?
• "For a difference to be a difference, it has to make a difference."
49. SO WHAT
• Were the patients entered and analyzed sufficiently representative that the results can be generalized?
• Can the intervention as performed be generalized to other settings?
50. HOW
Study design
Allocation of subjects: randomized?
Control group
Blinding
WHO
Population clearly described
Inclusion and exclusion criteria
Volunteers
Sample size
WHAT
What intervention
Compliance
Dropouts
Narrow focus
Change of plan
Alternative hypotheses
51. Why: study design
How: study methodology
Who: study population
What: intervention and outcomes
How many: statistics
What else?
52. WHAT DOES IT TAKE TO CONVINCE
ME THAT THIS EVIDENCE IS TRUE?
54. How many: statistical significance and sample size
Was statistical significance considered?
Were the tests applied appropriately?
Did they consider the sample size prior to the start?
Was the study large enough to detect a difference?
55. REFERENCES
Critical Thinking: Understanding and Evaluating Dental Research. D. Brunette
http://www.childrensmercy.org/stats/journal/jour2003-07.htm
http://healtoronto.com/howto.html
Thank you
Editor's notes
It's not as important how data are analyzed... what is important is how data are collected.
As a critical appraiser you should consider the reasons for the study and determine whether there is enough evidence presented to justify it.
Randomization ensures that both measurable and unmeasurable factors are balanced out across both the standard and the new therapy, assuring a fair comparison. It also guarantees that no conscious or subconscious efforts were used to allocate subjects in a biased way.
Nevertheless, for any deviation or modification to the protocol, you can ask whether this change would have made sense to include in the protocol if it had been thought of before data collection began.