Selective Outcome Reporting
Kerry Dwan¹, Jamie Kirkham¹, Carrol Gamble¹, Paula R Williamson¹, Doug G Altman²
SMG Training Course 2010
¹University of Liverpool, UK; ²University of Oxford, UK
kerry.dwan@liverpool.ac.uk
Types of selective reporting
 outcomes
 subgroups
 adjusted versus unadjusted results
 prognostic or risk factors
 first period results in crossover trials
 PP rather than ITT
 complete case versus LOCF versus other
methods
Outcome Reporting Bias
 Definition: Selection of a subset of the original recorded
outcomes, on the basis of the results, for inclusion in
publication
 Statistically significant outcomes more likely to be fully
reported: OR 2.2 to 4.7 (Dwan et al, 2008)
 Potential threat to validity of systematic review / meta-
analysis. Potentially a missing data problem if measured
and analysed but not reported – similar impact to
publication bias i.e. non-publication of whole studies
Types of selective outcome reporting
 Selective reporting of the set of study outcomes
 Not all analysed outcomes are reported
 Selective reporting of a specific outcome
 Hutton and Williamson (2000)
 Selection from multiple time points
 Subscales
 Endpoint score versus change from baseline
 Continuous versus binary (choice of cut-offs)
 Different measures of same outcome, e.g. pain
 Incomplete reporting of a specific outcome
 e.g. “Not significant” or “p>0.05”
[Flow diagram: how outcome data go missing. From an approved application a trial may never start, be stopped early (e.g. after an interim analysis, or for other reasons such as poor recruitment) or be completed (not necessarily meeting target recruitment); a completed trial may be not submitted or submitted; a submitted report may be not accepted or published; a publication may be an abstract only or a full publication; a full publication may report some or all of the outcomes. Every branch short of full publication of all outcomes leads to missing outcome data.]
Impact of ORB
OR 1.55 (1.13, 2.14); OR 1.41 (1.04, 1.91)
Assessment within review
 Exclusion criteria should not include ‘did not report outcome data of interest’
 Check whether the number of eligible trials exceeds the number included in the meta-analysis or fully reported in the text
[Outcome matrix from an example review. Rows are the trials (author, date of publication): Anderson 1983, Brecher 1997, Cairo 2003a, Magrath 1973, Magrath 1976, Neequaye 1990, Olweny 1976, Olweny 1977, Patte 1991, Sullivan 1991, Ziegler 1971 and Ziegler 1972a. Columns are the review primary outcome (overall survival), the other review outcomes (event-free survival, overall remission rate, relapse rate, toxicity and adverse events, quality of life) and other trial outcomes (relapse site, time to relapse). Partial reporting (O) is noted for Cairo 2003a (log-rank test result only, number of events not specified), Patte 1991 (some description of remission rates), Ziegler 1972a (reported for a subgroup of patients only), and one outcome reported as a Kaplan-Meier plot only.]
ORBIT classification system
 Clear that the outcome was measured and analysed
A: States outcome analysed but only reported that result not significant (typically stating p-value > 0.05). Level of reporting: partial. Risk of bias: high.
B: States outcome analysed but only reported that result significant (typically stating p-value < 0.05). Level of reporting: partial. Risk of bias: none.
C: States outcome analysed but insufficient data presented to be included in meta-analysis or to be considered fully tabulated. Level of reporting: partial. Risk of bias: low.
D: States outcome analysed but no results reported. Level of reporting: none. Risk of bias: high.
ORBIT classification system
 Clear that the outcome was measured but not necessarily analysed
E: Clear that outcome was measured but not necessarily analysed. Level of reporting: none. Risk of bias: high.
F: Clear that outcome was measured but not necessarily analysed. Level of reporting: none. Risk of bias: low.
Examples
E : Outcome – Overall mortality: Trial reports on cause-specific mortality only.
F : Ongoing study – outcome being measured but no reason to suggest
outcome analysed at current time
ORBIT classification system
 Unclear whether the outcome was measured
G: Not mentioned but clinical judgment says likely to have been measured and analysed. Level of reporting: none. Risk of bias: high.
H: Not mentioned but clinical judgment says unlikely to have been measured. Level of reporting: none. Risk of bias: low.
Examples
G : Strong belief that the PO would have been measured, e.g. Overall
survival/Mortality in trials in Cancer/Aids patients
H : Follow-up appears to be too short to measure the PO, e.g. PO is live birth rate and
the trial reports only on pre-birth outcomes
ORBIT classification system
 Clear the outcome was not measured
I: Clear that outcome was not measured. Level of reporting: N/A. Risk of bias: none.
Examples
I : Outcome – Muscle Strength: “No measurements of muscle
strength were taken because the assessment of muscle strength with
hemiparetic subjects is very difficult”.
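To make the classifications easier to apply, here is a minimal Python sketch of how a reviewer might record ORBIT classifications in an outcome matrix and flag trials at high risk of bias for a given review outcome. The trial names, outcome labels and helper function are hypothetical illustrations, not part of the ORBIT system itself; the letter-to-risk mapping follows the tables above.

```python
# Minimal sketch: tabulating ORBIT classifications for a review outcome.
# The ORBIT letters and their risk levels follow the slides above; the
# example trials and outcomes are hypothetical.

ORBIT_RISK = {
    "A": "high", "B": "none", "C": "low", "D": "high",
    "E": "high", "F": "low",
    "G": "high", "H": "low",
    "I": "none",
}

FULLY_REPORTED = "full"  # outcome reported in full, so no ORBIT letter is needed

# trial -> {outcome: ORBIT classification or "full"}
outcome_matrix = {
    "Trial 1": {"overall survival": FULLY_REPORTED, "quality of life": "D"},
    "Trial 2": {"overall survival": "G", "quality of life": "H"},
    "Trial 3": {"overall survival": "A", "quality of life": FULLY_REPORTED},
}

def high_risk_trials(matrix, outcome):
    """Trials classified as high risk of ORB for the given review outcome."""
    return [
        trial for trial, outcomes in matrix.items()
        if ORBIT_RISK.get(outcomes.get(outcome, FULLY_REPORTED)) == "high"
    ]

print(high_risk_trials(outcome_matrix, "overall survival"))  # ['Trial 2', 'Trial 3']
```

In a real review the matrix would be built from the trial reports and protocols, one row per trial and one column per review outcome, as in the worked matrices elsewhere in these slides.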
Assessment for individual study
 Review trial report
 how likely to have been selectively not reported?
 methods section, results section
 incomplete reporting of outcomes
 related outcomes reported
(e.g. cause-specific and overall mortality)
 battery of tests usually taken together
(e.g. systolic and diastolic blood pressure)
 knowledge of area suggests it is likely
 Trial protocol – search PubMed and web
(www.who.int/trialsearch)
 Abstracts of presentations – mention outcomes
not reported in trial report?
Example
Review: Human Albumin (2002, Issue 1)
Outcome: death for subgroup hypoalbuminaemia
 18 (763 individuals) eligible, 16 (719 (94%)) included
 Pooled OR (95% CI): 1.51 (0.82, 2.77)
 Two trials with no data: no information in either report to indicate outcome
recorded; however, knowledge of the clinical area suggests these data would be collected
routinely
 Classification: G
 For one of the included studies, interim report (n=52) reported outcome
(significant difference) whereas full report (n=94) did not.
 Original MA included preliminary data.
ORBIT: key messages
 ORB suspected in at least one trial in 34% of 283 Cochrane
reviews
 42 significant meta-analyses
 8 (19%) would not have remained significant
 11 (26%) would have overestimated the treatment effect by > 20%
 Review primary outcome less likely to be prone to ORB than
other outcomes
 under-recognition of the problem
 Interviews with trialists: 29% trials displayed ORB
The new Cochrane “Risk of Bias” tool:
items to address
1. Sequence generation (randomization)
2. Allocation concealment
3. Blinding of participants, personnel and outcome assessors
4. Incomplete outcome data (attrition and
exclusions)
5. Selective outcome reporting
6. Other (including topic-specific, design-specific)
Two components
 Description of what happened, possibly including
‘done’, ‘probably done’, ‘probably not done’ or ‘not
done’ for some items
 Review authors’ judgement whether bias unlikely to
be introduced through this item (Yes, No, Unclear)
Yes = Low risk of bias
No = High risk of bias
Selective outcome reporting
 Are reports of the study free of suggestion of
selective outcome reporting?
Criteria for a judgment of ‘YES’
(low risk of bias)
Either:
 The study protocol is available and all of the study’s pre-specified (primary and secondary) outcomes that are of interest in the review have been reported in the pre-specified way; or
 The study protocol is not available but it is clear that the published reports include all of the study’s pre-specified outcomes and all expected outcomes that are of interest in the review (convincing text of this nature may be uncommon).
Criteria for a judgment of ‘NO’
(high risk of bias)
Any of the following:
 Not all of the study’s pre-specified primary outcomes have been reported;
 One or more primary outcomes are reported using measurements, analysis methods or subsets of the data that were not pre-specified;
 One or more reported primary outcomes were not pre-specified (unless clear justification for their reporting is provided, such as an unexpected adverse effect);
(A protocol is needed to assess the three points above.)
 One or more outcomes of interest in the review are reported incompletely so that they cannot be entered in a meta-analysis (ORBIT classifications A-D);
 The study report fails to include results for a key outcome that would be expected to have been reported for such a study (ORBIT classification G)
Criteria for a judgment of ‘UNCLEAR’
(uncertain risk of bias)
 Insufficient information to permit judgment of ‘Yes’ or ‘No’.
(It is likely that most trials will fall into this category)
‘Risk of bias’ assessment
Risk of bias table incorporated in RevMan 5
Entry: Adequate sequence generation? Judgement: Yes. Description: Quote: “patients were randomly allocated.” Comment: probably done, since earlier reports from the same investigators clearly describe use of random sequences (Cartwright 1980).
Entry: Allocation concealment? Judgement: No. Description: Quote: “...using a table of random numbers.” Comment: probably not done.
… (further entries omitted) …
Entry: Incomplete outcome data addressed? (Short-term outcomes: 2-6 wks) Judgement: No. Description: 4 weeks: 17/110 missing from intervention group (9 due to ‘lack of efficacy’); 7/113 missing from control group (2 due to ‘lack of efficacy’).
Entry: Incomplete outcome data addressed? (Longer-term outcomes: >6 wks) Judgement: No. Description: 12 weeks: 31/110 missing from intervention group; 18/113 missing from control group. Reasons differ across groups.
Entry: Free of selective reporting? Judgement: No. Description: Three rating scales for cognition listed in Methods, but only one reported.
Entry: Free of other bias? Judgement: No. Description: Trial stopped early due to apparent benefit.
General approach to meta-analysis
 Undertake meta-analysis with the assumption of non-
informative missing data.
 Undertake sensitivity analysis to assess robustness to
assumption of informative missing data.
 Is inference robust to this? If not, consider modelling approach
to assess impact under various realistic scenarios
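As a concrete starting point for the primary analysis that assumes non-informative missing data, here is a minimal fixed-effect inverse-variance pooling sketch in Python. It is illustrative only: the slides do not prescribe a particular meta-analysis model, and the log odds ratios and standard errors below are invented.

```python
import numpy as np
from scipy import stats

def fixed_effect_pool(estimates, std_errors):
    """Inverse-variance (fixed-effect) pooled estimate with a 95% CI."""
    est = np.asarray(estimates, dtype=float)
    se = np.asarray(std_errors, dtype=float)
    w = 1.0 / se**2                        # inverse-variance weights
    pooled = np.sum(w * est) / np.sum(w)
    pooled_se = np.sqrt(1.0 / np.sum(w))
    z = stats.norm.ppf(0.975)
    return pooled, pooled_se, (pooled - z * pooled_se, pooled + z * pooled_se)

# Hypothetical log odds ratios and standard errors from the trials that report the outcome
log_or = [0.35, 0.10, 0.52, 0.28]
se = [0.20, 0.15, 0.30, 0.25]
pooled, pooled_se, ci = fixed_effect_pool(log_or, se)
print(f"pooled log OR {pooled:.2f}, 95% CI ({ci[0]:.2f}, {ci[1]:.2f})")
```

The sensitivity analyses sketched under the Copas and Jackson and Moreno slides below then ask how far this estimate could move once informative missingness is allowed for.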
Methods to assess ORB for RCTs
 Enumeration (Williamson and Gamble, 2005)
 Bound for maximum bias (Copas and Jackson,
2004 and Williamson and Gamble, 2007)
 Parametric selection model (Jayasekara, 2009)
 Regression (Moreno, 2009)
Copas and Jackson bound for
maximum bias
 Selection model approach used to determine maximum bias when m trials are
missing which are highly suspected of ORB. This information is used to
assess the robustness to this form of bias.
 A ‘worst case’ sensitivity analysis
 Assumption that probability of reporting increases as standard error decreases
 The application of the Copas bias bound method initially adjusts for the
known unpublished outcomes, and then the effect of various
further unpublished trials can be assessed.
 C&J bound attractive due to ease of calculation
Copas and Jackson bound for
maximum bias
 Calculate the pooled estimate based on the n studies reporting data.
 Calculate the maximum bias bound, which is a ‘worst case’ scenario.
 Add the value of the bound to the pooled effect estimate so that the estimate moves closer to the null (see the sketch after this list).
 In these situations the adjustment is conservative; hence a meta-analysis found to be robust after this degree of adjustment can be considered robust to this form of bias.
 A simulation study showed the C&J adjustment works well for most cases investigated, under a variety of true suppression models, for both fixed and random effects.
 In situations where the treatment effect is small, trial sizes are small and/or variable, the number of studies with available data is small and the number with missing outcome data is large, the C&J adjustment was found to be less accurate.
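A hedged sketch of the adjustment step described above, reusing the pooled estimate from the fixed-effect sketch earlier. The maximum bias bound itself would come from the Copas and Jackson (2004) formula for the given numbers of reporting and missing studies; computing it is not shown here, so the bound value below is a made-up placeholder and only the “shift towards the null and re-check significance” step is illustrated.

```python
import numpy as np

def worst_case_adjust(pooled, pooled_se, bias_bound, z=1.96):
    """Shift the pooled estimate towards the null (0 on the log scale) by the
    maximum bias bound and report whether the adjusted CI still excludes 0."""
    adjusted = pooled - np.sign(pooled) * abs(bias_bound)   # move towards the null
    lo, hi = adjusted - z * pooled_se, adjusted + z * pooled_se
    robust = lo > 0 or hi < 0                               # CI still excludes the null?
    return adjusted, (lo, hi), robust

pooled, pooled_se = 0.24, 0.10   # rounded values from the fixed_effect_pool sketch above
adjusted, ci, robust = worst_case_adjust(pooled, pooled_se, bias_bound=0.12)  # hypothetical bound
print(f"adjusted log OR {adjusted:.2f}, 95% CI ({ci[0]:.2f}, {ci[1]:.2f}), robust: {robust}")
```

Here the adjusted interval crosses the null, so under that hypothetical bound the conclusion would not be considered robust to this form of bias.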
Moreno regression method
 Regression-based adjustment method
 Line of best fit
 Predict the adjusted pooled estimate for an ideal study of infinite size (se = 0)
 Quadratic version of Egger’s regression
 Assumption: linear trend between effect size and variance
 The Moreno approach would effectively adjust for both unpublished trials and unpublished outcomes (a sketch follows below)
 Appealing for its simplicity
 A reasonably high number of studies is needed
 Most meta-analyses include 5-10 studies, which is unlikely to be enough to give a reliable estimate using regression.
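For comparison, a minimal regression-based sketch in the spirit of the Moreno approach (not a reimplementation of Moreno et al. 2009): effect estimates are regressed on their variances with inverse-variance weights, and the fitted intercept is read off as the predicted effect for an ideal study of infinite size (se = 0). The standardised mean differences and standard errors below are invented.

```python
import numpy as np

def regression_adjusted_estimate(estimates, std_errors):
    """Weighted least squares of effect size on variance; the intercept is the
    predicted effect for a hypothetical study with variance 0 (infinite size)."""
    y = np.asarray(estimates, dtype=float)
    v = np.asarray(std_errors, dtype=float) ** 2
    w = 1.0 / v                                   # inverse-variance weights
    X = np.column_stack([np.ones_like(v), v])     # intercept + variance term
    # Solve the weighted normal equations (X' W X) beta = X' W y
    WX = X * w[:, None]
    beta = np.linalg.solve(X.T @ WX, X.T @ (w * y))
    return beta[0]                                # intercept = adjusted estimate

# Hypothetical standardised mean differences and their standard errors
smd = [0.55, 0.48, 0.40, 0.35, 0.30, 0.25]
se = [0.30, 0.25, 0.20, 0.15, 0.10, 0.08]
print(f"adjusted SMD at se = 0: {regression_adjusted_estimate(smd, se):.2f}")
```

With only a handful of studies the intercept is poorly determined, which is the point made in the final bullet above.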
Example of the sensitivity analyses
 From published studies only
 51 published studies
 Hedges’ g SMD 0.41 (0.37, 0.45)
 23 (31%) including 3449 participants not published
 Sensitivity analysis results
 Moreno: SMD 0.29 (0.23,0.35)
 Bias bound: SMD 0.33(0.29,0.38) n=50, m=23.
 It would take over 2000 studies to overturn the conclusion.
 Comparison to results presented to FDA
 74 studies
 SMD 0.31 (0.27,0.35)
Results – number of trials per
review
This is important: consideration needs to be given to the more common situation of fewer than 10 trials in a meta-analysis.
Conclusions
 Awareness of ORB is limited but the problem must receive as
much attention as between-study selection bias
 Reviewers must consider the amount of, and reasons for, data
potentially missing from a meta-analysis
 To boost confidence in the review, we recommend the
sensitivity of the conclusions to plausible biases should be
investigated
 If robustness is lacking, present and interpret correctly both
the original meta-analysis which assumes no selective
reporting and the sensitivity analysis, including a description
of the assumptions made regarding the nature of selection.
Solutions
 Trial level
(i) Education
(ii) Core outcome sets
(iii) Better reporting - CONSORT statement, submission of protocol with manuscript
(Lancet, BMJ, PLoS Med) and EQUATOR (http://www.equator-network.org/)
(iv) Reporting of legitimate outcome changes (Evans, 2007)
(v) RECs (substantial protocol amendments)
(vi) Trial and protocol registration
(vii) FDA legislation – outcome results to be made available
(viii) Need for comprehensive worldwide adoption
(ix) Funders (Guidelines)
 Review level
(i) Risk of bias assessment in Cochrane reviews
(ii) Individual patient data repository (feasibility project)
(iii) Core outcome sets
(iv) Statistical methods
ORBIT: key messages
 Systematic review primary outcome data
 missing in 25% eligible trials in Cochrane reviews
 missing in at least one trial in 55% reviews
 a wasted opportunity?
 Interviews with trialists about outcomes in protocol but not
trial report:
 outcomes not measured
 outcomes measured but not analysed
 general lack of clarity about importance and/or feasibility of data
collection for outcomes chosen
Core Outcome Measures in Effectiveness
Trials
Group exercise
Outcome Matrix
Table 1: Bisphosphonate therapy for osteogenesis imperfecta (inborn errors of metabolism)

Review primary outcomes: fracture reduction (as numbers and rates); change in bone mineral density as assessed by DEXA. Review secondary outcomes: change in biochemical markers of bone and mineral metabolism (e.g. bone alkaline phosphatase measurements); growth (z scores; vertebral heights); bone pain (as assessed by self-reported questionnaires of pain and analgesic use); quality of life (e.g. functional changes in mobility, strength, well-being and completion of activities of daily living (ADLs)); lung function (e.g. pulmonary function testing).

Adami 2003: fracture reduction full; bone mineral density full; biochemical markers partial (C); growth none (G); bone pain none (H); quality of life none (H); lung function none (H). Other trial outcomes: side effects, pQCT. Review assessment of risk of bias: none given. Our assessment: high risk.
Chevrel 2006: fracture reduction full; bone mineral density full; biochemical markers partial; growth none (G); bone pain full; quality of life full; lung function none (H). Other trial outcomes: adverse effects, calcium intake. Review assessment: none given. Our assessment: high risk.
DeMeglio 2006: fracture reduction full; bone mineral density full; biochemical markers full; growth partial (C); bone pain none (H); quality of life none (H); lung function none (H). Other trial outcomes: daily calcium intake, side effects. Review assessment: none given. Our assessment: low risk.
Gatti 2005: fracture reduction full; bone mineral density full; biochemical markers partial (C); growth full; bone pain none (H); quality of life none (H); lung function none (H). Other trial outcomes: routine serum biochemistry, side effects. Review assessment: none given. Our assessment: low risk.
Glorieux 2004 (abstract only): fracture reduction partial (A); bone mineral density partial (C); biochemical markers partial (C); growth none (G); bone pain partial (A); quality of life partial (A); lung function none (H). Other trial outcomes: side effects, cortical bone width. Review assessment: none given. Our assessment: high risk.
Letocha 2005: fracture reduction full; bone mineral density full; biochemical markers partial (A); growth full; bone pain full; quality of life full; lung function none (H). Other trial outcomes: none given. Review assessment: none given. Our assessment: high risk.
Sakkers 2004: fracture reduction full; bone mineral density full; biochemical markers partial (A); growth partial (A); bone pain none (H); quality of life full; lung function none (H). Other trial outcomes: side effects. Review assessment: none given. Our assessment: high risk.
Seikaly 2005: fracture reduction partial (C); bone mineral density full; biochemical markers partial (C); growth full; bone pain full; quality of life full; lung function none (H). Other trial outcomes: routine serum biochemistry, side effects, stool guaiac. Review assessment: none given. Our assessment: low risk.

Key: full = outcome reported in full; partial = partial reporting; none = no reporting; letters in brackets are ORBIT classifications.
Feedback
Discussion points
 What should be recommended when there are concerns about
ORB in a meta-analysis?
 When should reviewers consider a sensitivity analysis?
 What statistical methods should reviewers use in a sensitivity
analysis?
 One-stage (publication bias generally) or two-stage (consider
effect of ORB first then consider effect of unpublished studies)?
 Does the number of studies included affect this decision?
References – empirical evidence
1. Hahn S, Williamson PR and Hutton JL. Investigation of within-study selective
reporting in clinical research: follow-up of applications submitted to an LREC.
Journal of Evaluation in Clinical Practice 2002; 8.3 (August).
2. Chan A-W, Hróbjartsson A, Haahr M, Gøtzsche PC, Altman DG. Empirical evidence for selective reporting of outcomes in randomized trials: Comparison of protocols to publications. JAMA 2004; 291: 2457-2465.
3. Dundar Y, Dodd S, Williamson PR, Dickson R, Walley T. (2006) Case study of the
comparison of data from conference abstracts and full-text articles in health
technology assessment of rapidly evolving technologies- does it make a difference?
International Journal of Technology Assessment in Health Care 22(3): 288-94.
4. Chan AW, Krleza-Jeric K, Schmid I, Altman DG. Outcome reporting bias in
randomized trials funded by the Canadian Institutes of Health Research. CMAJ
2004;171 (7): 735-740.
5. Chan AW and Altman DG. Identifying outcome reporting bias in randomised trials
on Pubmed: review of publications and survey of authors. BMJ 2005; 330: 753-
758.
6. Dwan K, Altman DG, Arnaiz JA, Bloom J, Chan A-W, et al. (2008) Systematic
Review of the Empirical Evidence of Study Publication Bias and Outcome
Reporting Bias. PLoS ONE 3(8): e3081. doi:10.1371/journal.pone.0003081
References– methodological
1. Williamson PR, Gamble C, Altman DG and JL Hutton. (2005) Outcome selection bias in
meta-analysis. Statistical Methods in Medical Research, 14: 515-524.
2. Williamson PR and Gamble C. (2005) Identification and impact of outcome selection
bias in meta-analysis. Statistics in Medicine, 24: 1547-1561.
3. Hutton JL and Williamson PR. Bias in meta-analysis due to outcome variable selection
within studies. Applied Statistics 2000; 49: 359-370.
4. Hahn S, Williamson PR, Hutton JL, Garner P and Flynn EV. Assessing the potential for
bias in meta-analysis due to selective reporting of subgroup analyses within studies.
Statistics in Medicine 2000; 19: 3325-3336.
5. Evans S. When and how can endpoints be changed after initiation of a randomized
clinical trial? PLoS Clinical Trials 2007; e18
6. Sinha I, Jones L, Smyth RL, Williamson PR. A Systematic Review of Studies That Aim
to Determine Which Outcomes to Measure in Clinical Trials in Children. PLoS
Medicine 2008;Vol. 5, No. 4, e96
7. Dwan K, Gamble C, Williamson PR, Altman DG. Reporting of clinical trials: a review
of research funders' guidelines. Trials 2008, 9:66.
8. Kirkham JJ, Dwan K, Dodd S, Altman DG, Smyth R, Jacoby A, Gamble C, Williamson
PR. The impact of outcome reporting bias in randomised controlled trials on a cohort
of systematic reviews. BMJ 2010;340:c365
References
9. Dwan K, Gamble C, Kolamunnage-Dona R, Mohammed S, Powell C,
Williamson PR. Assessing the potential for outcome reporting bias in a review.
Submitted Trials 2009
10. Copas J, Jackson D. A bound for publication bias based on the fraction of unpublished
studies. Biometrics 2004; 60: 146 -153.
11. Williamson PR, Gamble C. Application and investigation of a bound for outcome
reporting bias. Trials 2007; 8(9).
12. Jayasekara, L and Copas J. Modelling the outcome reporting bias in meta-analysis.
Presented at MiM meeting (June 2006).
13. Moreno SG, Sutton AJ, Turner EH, Abrams KR, Cooper NJ, Palmer TM, Ades AE. Novel methods to deal with publication biases: secondary analysis of antidepressant trials in the FDA trial registry database and related journal publications. BMJ 2009;339:b2981