Laine Thomas presented information about how causal inference is being used to determine the costs and benefits of the two most common surgical treatments for women: hysterectomy and myomectomy.
2. CHAMP-HF REGISTRY
Greene, Stephen J., et al. "Titration of Medical Therapy for Heart Failure With Reduced Ejection Fraction." Journal of the American College of Cardiology 73.19 (2019): 2365-2383.
3. COMPARE-UF
Stewart, Elizabeth A., et al. "The Comparing Options for Management: PAtient-centered REsults for Uterine Fibroids (COMPARE-UF) registry: rationale and design." American Journal of Obstetrics and Gynecology 219.1 (2018): 95.e1.
4. TIME-DEPENDENT TREATMENTS
[Figure: follow-up timeline for a time-dependent treatment. Patients are followed from enrollment for up to 12 months; MSM treated status switches from 0 to 1 at PCSK9i initiation, with discontinuation and censoring marked.]
NEW USER DESIGN
BIG DATA
Thomas, L., Yang, S., Wojdyla, D., and Schaubel, D. "Matching with Time-dependent Treatments: A Review and Look Forward." (under review)
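As a rough illustration of the weighting idea behind marginal structural models for time-dependent treatments (a minimal sketch, not the method in the paper above; the simulated data and model are invented for illustration), stabilized inverse-probability-of-treatment weights can be built per time point and multiplied over follow-up:

```python
# Sketch: stabilized inverse-probability weights for a time-varying treatment,
# the weighting step of a marginal structural model (MSM). Data are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, T = 1000, 3

# Time-varying confounder L_t influences treatment A_t at each time point.
L = rng.normal(size=(n, T))
A = np.zeros((n, T), dtype=int)
for t in range(T):
    p = 1 / (1 + np.exp(-0.5 * L[:, t]))
    A[:, t] = rng.binomial(1, p)

# Stabilized weight: product over t of P(A_t) / P(A_t | L_t).
w = np.ones(n)
for t in range(T):
    denom_model = LogisticRegression().fit(L[:, [t]], A[:, t])
    p_denom = denom_model.predict_proba(L[:, [t]])[:, 1]
    p_num = A[:, t].mean()  # marginal treatment probability (numerator model)
    w *= np.where(A[:, t] == 1, p_num / p_denom, (1 - p_num) / (1 - p_denom))

print(w.mean())  # stabilized weights should average near 1
```

The weights would then be used in a weighted outcome model; a numerator model conditioning on baseline covariates is common in practice but omitted here for brevity.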
5. INTERNAL VALIDITY
CONVENIENCE SAMPLES
REAL WORLD DATA
How inclusive can we be without compromising validity?
Li, Fan, Laine E. Thomas, and Fan Li. "Addressing extreme propensity scores via the overlap weights." American Journal of Epidemiology 188.1 (2019): 250-257.
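The overlap-weighting idea in the paper above can be sketched in a few lines (simulated data; the simple logistic propensity model here is illustrative, not the paper's application): treated units are weighted by 1 - e(x) and controls by e(x), which smoothly down-weights extreme propensity scores instead of trimming them.

```python
# Sketch of overlap weighting, targeting the average treatment effect in the
# overlap population (ATO). Data are simulated with a known effect of 1.0.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
x = rng.normal(size=(n, 1))
z = rng.binomial(1, 1 / (1 + np.exp(-2 * x[:, 0])))  # treatment, confounded by x
y = 1.0 * z + 0.5 * x[:, 0] + rng.normal(size=n)     # true treatment effect = 1.0

e = LogisticRegression().fit(x, z).predict_proba(x)[:, 1]  # propensity score
w = np.where(z == 1, 1 - e, e)                             # overlap weights

# Weighted difference in means estimates the ATO.
ato = (np.average(y[z == 1], weights=w[z == 1])
       - np.average(y[z == 0], weights=w[z == 0]))
print(round(ato, 2))  # should land near the true effect of 1.0
```

Because the weights are bounded between 0 and 1, no unit receives an extreme weight even when e(x) is near 0 or 1, which is the point of the approach for convenience samples with poor overlap.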
6. TARGET TRIAL EMULATION
Comparison               Clinical trial HR   Observational *adjusted HR1   Observational *adjusted HR2
Warfarin on              ≈ 2.00              1.0 (CI: 0.8, 1.2)            1.5 (CI: 1.0, 2.2)
Statins on CV events*    ≈ 0.76              0.9 (CI: 0.6, 1.3)            0.7 (CI: 0.6, 0.8)
Same treatment, same outcome, same adjustment variables!
Farjat, AE, Virdone, S, Piccini, JP, Kakkar, AK, Pieper, KS, and Thomas, LE. "Importance of the Design of Observational Studies in Comparative Effectiveness Research: Lessons from GARFIELD-AF and ORBIT-AF registries." (in preparation)
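One concrete design step in target trial emulation is restricting to new (incident) users so that time zero aligns with treatment initiation, mirroring randomization. A hypothetical sketch (the table layout and column names patient_id, rx_date, and enroll_date are invented for illustration):

```python
# Sketch: build a new-user cohort by taking each patient's first prescription
# and keeping only those who initiated after enrollment. Data are made up.
import pandas as pd

rx = pd.DataFrame({
    "patient_id": [1, 1, 2, 3, 3],
    "rx_date": pd.to_datetime(
        ["2015-03-01", "2016-01-10", "2016-05-02", "2014-11-20", "2015-06-30"]),
})
enroll = pd.DataFrame({
    "patient_id": [1, 2, 3],
    "enroll_date": pd.to_datetime(["2015-06-01", "2016-01-01", "2015-01-01"]),
})

# First prescription per patient defines the candidate index date (time zero).
first_rx = rx.groupby("patient_id", as_index=False)["rx_date"].min()
cohort = first_rx.merge(enroll, on="patient_id")

# New users only: prevalent users (first rx before enrollment) are excluded,
# so follow-up starts at initiation, as it would at randomization in a trial.
new_users = cohort[cohort["rx_date"] > cohort["enroll_date"]]
print(new_users["patient_id"].tolist())  # only patient 2 initiated after enrollment
```

Getting this alignment wrong (e.g., including prevalent users) is one way two observational analyses of the same treatment, outcome, and adjustment variables can land on opposite sides of the trial result.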
7. RISKS IN MEDICAL RESEARCH
• Hidden purpose: causation
• Need to differentiate quality in observational research
• Limitations vs. fatal flaws
• Lipstick on a pig: fancy methods layered on bad design
8. OPPORTUNITIES
• Guidelines and tools like ROBINS-I (TRIPOD) help
• Establish fatal flaws and hold the line
• These need to be more accessible
• Tutorials on best practice help, but need to be really clear on scope and limits
• Engage with Pharma and the FDA
• Transparency!
9. TRANSPARENCY
• Methods won’t get used if they aren’t transparent
• Peer review needs to establish confidence that an analysis was done correctly and interpreted correctly under constraints (limited time and access)
• When we develop methods we need to help reviewers
• How can we reveal the building blocks to facilitate assessment?
• Interrogate assumptions/conditions
10. DIMENSIONS OF TRANSPARENCY
• What corresponding graphics and descriptive data would help reveal mistakes?
• Mis-coding, IVs among confounders
• How do we communicate generalizability?
• Was it clear to begin with? Did we alter it?
• Assumptions / Purpose
• “Accounting” for clusters / hospitals / sites
11. DESIGN VS. DISCLAIMER
Typical disclaimer: “As with all observational treatment comparisons, we cannot rule out the possibility that associations are biased by unmeasured confounding.”
• Appropriately cautious
• Should not eclipse the potential to do good causal inference in observational data
• Clinical trials have a lot of strengths beyond randomization
• Observational research can better emulate those strengths