Societal, policy, and regulatory implications of AI for healthcare and medicine
1. Societal, policy, and regulatory implications of AI for healthcare and medicine
Daniel Elton, Ph.D.
11-23-19
2. Who am I?
B.S. physics, 2010
Ph.D. in physics, 2016
Machine learning for molecular design (2016-2018) (check out my review article!)
Machine learning for medical images (Jan 2019 – present) at NIH
We are looking for summer interns and postbac fellows!
Clinical Center Summer Internship Program: June 15 - August 7, 2020
https://www.training.nih.gov/programs/sip
Post-bac IRTA Fellowship
https://www.training.nih.gov/programs/postbac_irta
Dr. Ronald M. Summers
3. What I work on
Plaque segmentation; spine segmentation & labeling
4. Eras of AI
~1980-2000, ~2000-2012, 2012-present
Slides adapted from Curtis Langlotz
5. 2017 – the year medical AI became better than doctors
Skin cancer classification (Stanford University), 129,450 training images:
Esteva et al. "Dermatologist-level classification of skin cancer with deep neural networks". Nature 542, 115-118 (2017).
Histopathology slide analysis (Verily Life Sciences, Google), 40,000,000+ training images:
Liu et al. "Detecting Cancer Metastases on Gigapixel Pathology Images". https://arxiv.org/abs/1703.02442 (2017).
(Figure labels: challenging benign cells vs. cancerous cells.)
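As a rough sketch of the kind of pipeline these studies use (fine-tuning a large pretrained convolutional network on labeled images), here is a minimal transfer-learning setup in PyTorch. The dataset folder, class count, and ResNet-18 backbone are placeholders for illustration, not the actual Stanford or Google data or architectures:

```python
# Minimal transfer-learning sketch: fine-tune a pretrained CNN on a labeled
# image folder. Hypothetical folder layout: skin_images/benign/*.jpg and
# skin_images/malignant/*.jpg; this is the general recipe, not either paper's model.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

dataset = datasets.ImageFolder("skin_images", transform=preprocess)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)     # new two-class head
model = model.to(device)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: last-batch loss {loss.item():.3f}")
```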
6. AI is at human level for many medical image diagnosis tasks
Summary of AI detection performance vs. medical professionals on out-of-sample data, across 14 diverse areas: ophthalmology, cancer, orthopaedics, respiratory disease, cardiology, gastroenterology, neurology, etc.
These head-to-head studies are very rare! Only 14 were found after screening down from 20,530 publications and hand-combing through 122.
Liu et al. "A comparison of deep learning performance against health-care professionals in detecting diseases from medical imaging: a systematic review and meta-analysis". The Lancet Digital Health 1(6), 2019.
7. Are we on the road to AGI?
https://openai.com/blog/ai-and-compute/
• These systems are all narrow AI.
• Models require millions of examples to learn, and massive compute.
• Models are extremely brittle and fail to generalize when the data distribution shifts.
• Progress has come largely from massive compute and big data rather than from advances in algorithms and theory.
• Therefore, none of these advances has moved us toward artificial general intelligence (AGI).
• We anthropomorphize these systems. They don't understand in the way doctors do; they don't work from what Deutsch calls "good explanations".
9. Doctors use System 1 for diagnosis!
Doctors are intrinsically limited by their biology.
Overconfidence & black/white thinking:
"Clinicians who were 'completely certain' of the diagnosis antemortem were wrong 40 percent of the time."
Availability bias/heuristic:
• There are roughly 10,000 diseases.
• Doctors cannot learn how to diagnose all of these! Many spend a lifetime just learning to diagnose one hard-to-detect disease (e.g., prostate cancer).
• Doctors' diagnoses are limited by what they know and what is foremost in mind, i.e., "available".
Can we use System 2 to correct for System 1? Kahneman is not optimistic.
10. Personalized medicine
• 3.2 billion DNA characters (base pairs)
• ~24,000 genes
• Including variants, ~50 million possibly relevant dimensions
Modeling in this many dimensions requires big data (the curse of dimensionality); a toy illustration follows below.
Whole-genome sequencing costs ~$500-$1,000, and will cost ~$100 in just a few years.
But can we convince people to contribute their genetic data and their electronic health records?
https://allofus.nih.gov/
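To see why so many dimensions push us toward very large datasets, here is a small, purely illustrative NumPy sketch of the curse of dimensionality; the dimensions and point counts are arbitrary, not genomic data:

```python
# As the number of dimensions grows, pairwise distances concentrate, so
# "nearest" and "farthest" neighbors become nearly indistinguishable -- one
# reason high-dimensional models need very large datasets. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n_points = 500

for dim in [2, 10, 100, 1_000, 10_000]:
    X = rng.random((n_points, dim))
    # distances from the first point to all the others
    d = np.linalg.norm(X[1:] - X[0], axis=1)
    contrast = (d.max() - d.min()) / d.min()
    print(f"dim={dim:>6}  relative distance contrast = {contrast:.3f}")
# The contrast shrinks toward 0 as the dimension grows, so distance-based
# modeling loses discriminative power unless the dataset grows enormously.
```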
11. Personalized medicine
Schork, N. J. "Personalized Medicine: Time for One-Person Trials". Nature 520(7549), 609-611 (2015).
Figure from Eric Topol, "Deep Medicine": top 10 highest-grossing drugs.
12. Open questions
• What new roles should doctors assume?
• How should we manage data?
• How should we regulate AI systems to make sure they are secure, robust, and free from bias?
13. What new roles should doctors assume?
"People should stop training radiologists right now."
– Geoffrey Hinton, one of the "godfathers" of deep learning, speaking in Toronto, Canada, 2016
14. What new roles should doctors assume?
(Figure: physiology, heart rate.)
• Currently doctors are overworked.
• Radiologists spend <20 min per case.
• High-resolution scans have hundreds of images; radiologists become fatigued quickly.
• Doctors & radiologists will be able to spend more time with patients, bringing the "human" back into medicine.
• Topol proposes merging the training of radiologists and histologists into a new role: the "Information Specialist".
15. How should we manage data?
Case study of government-run data standardization & centralization: Estonia
• 20,000,000 health documents
• The Estonian Genome Center has collected genome data from ~52,000 people
• Blockchain technology is said to be used in the Estonian national health information system to ensure data integrity
16. How should we manage data?
A decentralized approach: federated learning
AI models are moved between "data silos" and trained on each; a sketch of the core averaging step follows below.
Pros:
• Data stays local, "siloed" within hospitals.
• Some steps, like de-identification of scans, are not necessary.
Cons:
• It is possible for patient data to "leak" into models and for hackers to extract it.
• Requires a lot of logistics and a centralized authority to orchestrate.
Bonawitz et al. "Towards Federated Learning at Scale: System Design". arXiv:1902.01046 (2019).
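As a rough illustration of the idea (not the actual system from Bonawitz et al.), here is a minimal federated-averaging sketch in NumPy; the linear model and the three hypothetical hospital datasets are made up for the example:

```python
# Minimal federated averaging (FedAvg) sketch: only model weights travel
# between the server and the "hospitals"; the patient data never leaves a silo.
import numpy as np

rng = np.random.default_rng(42)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One hospital trains the shared model on its own data only."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # gradient of the MSE loss
        w -= lr * grad
    return w

# Three hypothetical silos with their own locally generated data.
true_w = np.array([1.5, -2.0])
silos = []
for _ in range(3):
    X = rng.normal(size=(200, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=200)
    silos.append((X, y))

# Federated rounds: each site updates locally, the server averages the results.
global_w = np.zeros(2)
for round_ in range(20):
    local_ws = [local_update(global_w, X, y) for X, y in silos]
    sizes = [len(y) for _, y in silos]
    global_w = np.average(local_ws, axis=0, weights=sizes)

print("federated estimate:", global_w, " true weights:", true_w)
```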
17. How do we make it safe?
Eykholt et al. "Robust Physical-World Attacks on Deep Learning Visual Classification". CVPR 2018. https://arxiv.org/abs/1707.08945
18. Adversarial attack on a medical AI
Finlayson et al. "Adversarial Attacks Against Medical Deep Learning Systems". 2019. https://arxiv.org/abs/1804.05296
19. Another example…
Finlayson et al. "Adversarial Attacks Against Medical Deep Learning Systems". 2019. https://arxiv.org/abs/1804.05296
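To show the mechanics behind such attacks, here is a minimal sketch of the fast gradient sign method (FGSM), one of the standard techniques this literature builds on. The network and "image" below are placeholders (an untrained torchvision ResNet-18 and a random tensor), not the medical models studied by Finlayson et al.:

```python
# Minimal FGSM adversarial-attack sketch in PyTorch: nudge the input a tiny
# amount in the direction that most increases the loss.
import torch
import torchvision.models as models

model = models.resnet18(weights=None).eval()   # untrained placeholder network

# Placeholder "image": a random tensor shaped like an ImageNet input.
x = torch.rand(1, 3, 224, 224, requires_grad=True)
label = torch.tensor([0])                      # hypothetical true class index

loss = torch.nn.functional.cross_entropy(model(x), label)
loss.backward()

epsilon = 0.01                                 # perturbation size
x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

with torch.no_grad():
    print("original prediction   :", model(x).argmax(dim=1).item())
    print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
# With a real trained model and a real image, a small epsilon is often enough
# to flip the predicted class while the change is imperceptible to a human.
```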
20. Bias
Meanings of the term "bias":
• Statistical bias: the "bias" part of the error term, arising because the fitted model is not the true model (see the sketch below).
• Biased training data.
• Target signal unintentionally leaking into the data.
• Social bias: when the ML system does things that are against our values. In social applications (granting parole, recommending products or job opportunities), ML can end up perpetuating existing inequities that we do not want perpetuated.
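A small, purely synthetic NumPy illustration of the first meaning (statistical bias from model misspecification); the quadratic data-generating process and noise level are invented for the example:

```python
# Statistical bias: the model class does not contain the true function, so a
# systematic error remains no matter how much data is available.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100_000)
y = x**2 + rng.normal(scale=0.05, size=x.size)   # true relationship is quadratic

# Misspecified model: a straight line (high bias).
a, b = np.polyfit(x, y, deg=1)
linear_mse = np.mean((a * x + b - y) ** 2)

# Correctly specified model: a quadratic (bias ~ 0, only noise remains).
coeffs = np.polyfit(x, y, deg=2)
quad_mse = np.mean((np.polyval(coeffs, x) - y) ** 2)

print(f"linear model MSE   : {linear_mse:.4f}  (stuck near the bias floor)")
print(f"quadratic model MSE: {quad_mse:.4f}  (close to the noise variance 0.0025)")
```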
21. Biased training data
In the 1990s, the Cost-Effective Health Care (CEHC) project funded a study to see if ML could predict the risk of death for patients with pneumonia. The most accurate model was a multitask neural net. The system was almost fielded, but the researchers felt it was risky to put a black-box model into production without knowing how it was working, so they also trained a rule-based learning system on the same data. It had lower accuracy, but was highly transparent. One rule it learned was:
HasAsthma(x) ⇒ LowerRisk(x)
The rule reflects the training data, not biology: pneumonia patients with asthma were routinely admitted to intensive care and treated aggressively, so their recorded outcomes were better; a model deployed to triage patients on that basis would have been dangerous.
Cooper et al. "Predicting dire outcomes of patients with community acquired pneumonia". Journal of Biomedical Informatics 38(5), 347-366, 2005.
Caruana et al. "Intelligible Models for HealthCare: Predicting Pneumonia Risk and Hospital 30-day Readmission". In Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2015.
https://vimeo.com/125940125
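A small synthetic simulation (invented numbers, not the CEHC data) of how a treatment policy hidden in historical records produces exactly this kind of misleading rule, even in a fully transparent model:

```python
# In the simulated history, asthmatics are sent straight to the ICU, so their
# observed mortality is lower; a model trained on outcomes alone then learns
# "asthma => lower risk". All numbers below are made up for illustration.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
n = 50_000

asthma = rng.random(n) < 0.1                  # 10% of patients have asthma
base_risk = 0.12                              # mortality without aggressive care
icu_care = asthma                             # policy: asthmatics go to the ICU
risk = np.where(icu_care, base_risk * 0.4, base_risk)   # aggressive care cuts risk
died = rng.random(n) < risk

# A transparent model trained on the historical outcomes.
tree = DecisionTreeClassifier(max_depth=1).fit(asthma.reshape(-1, 1), died)

p_asthma = tree.predict_proba([[1]])[0, 1]
p_no_asthma = tree.predict_proba([[0]])[0, 1]
print(f"predicted mortality with asthma   : {p_asthma:.3f}")
print(f"predicted mortality without asthma: {p_no_asthma:.3f}")
# The model faithfully reports the data: asthma looks protective because the
# records silently encode the ICU treatment policy, not the untreated risk.
```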
22. Questions
• What new roles should doctors assume?
• How should we manage data?
• How can we ensure that AI systems are secure, robust, and free from bias?
23. Bias
Kate Crawford (NIPS keynote, 2017) identifies two types of bias:
Harms of allocation
• Discrimination in products & services: mortgage approval, parole granting, insurance rates
Harms of representation
• More subtle
• Perpetuation of social inequalities and stereotypes we do not want perpetuated
• Misrepresentation of sensitive topics like personal and group identity
25. Bias
Examples of harms of allocation:
Datta, A., Tschantz, M. C., and Datta, A. "Automated Experiments on Ad Privacy Settings". Proceedings on Privacy Enhancing Technologies 2015(1): 92-112.
26. Three branches of machine learning (toy examples of each branch follow below)

Supervised learning: model y = f(x) to match data (x, y)
• Regression
• Classification
Civilian applications:
• Handwritten digit recognition
• Voice recognition
• Stock market prediction
• Natural language translation
• Computer vision for driverless cars
Possible Navy applications:
• Computer vision for drones and ships
• Identifying sonar signatures
• AI-assisted medical diagnoses in submarines
• Predicting fuel usage and other logistics
• Prediction of energetic molecule properties

Unsupervised learning:
• Clustering
• Data compression
• Learning distributions ("generative modeling")
Civilian applications:
• Online product recommendations
• Automatic identification of market segments and customer types
Possible Navy applications:
• Optimization of hull shapes and other components
• Generation of candidate energetic molecules

Reinforcement learning:
Civilian applications:
• Better robotic manipulators
• Roomba robotic vacuum
• AlphaGo Go program
• Systems that can learn Atari games
Possible Navy applications:
• Semi-autonomous ships / patrol boats
• Better guidance systems
• Generation of candidate energetic molecules
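As a compact illustration of the three branches (toy problems only, none of the applications listed above), here is a short Python sketch using NumPy and scikit-learn:

```python
# One tiny example per paradigm; all data below is synthetic and illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# --- Supervised learning: learn y = f(x) from labeled pairs (x, y) ----------
X = rng.normal(size=(500, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)          # labels are known
clf = LogisticRegression().fit(X, y)
print("supervised accuracy:", clf.score(X, y))

# --- Unsupervised learning: find structure in unlabeled data ----------------
blobs = np.vstack([rng.normal(loc=-3, size=(250, 2)),
                   rng.normal(loc=+3, size=(250, 2))])
kmeans = KMeans(n_clusters=2, n_init=10).fit(blobs)
print("unsupervised cluster centers:\n", kmeans.cluster_centers_)

# --- Reinforcement learning: learn actions from reward alone ----------------
# Tiny 1-D corridor: states 0..4, reward only at the right end; tabular
# Q-learning with a random exploration policy (off-policy learning).
n_states, n_actions = 5, 2                        # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.5, 0.9
for episode in range(200):
    s = 0
    for _ in range(100):                          # cap episode length
        a = int(rng.integers(n_actions))          # explore randomly
        s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s_next == n_states - 1 else 0.0
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next
        if s == n_states - 1:
            break
print("learned policy for non-terminal states (1 = move right):",
      Q[:-1].argmax(axis=1))
```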
Speaker notes
1. Hades, god of the underworld, kidnapped Persephone, goddess of spring, and later released her in return for marrying him. While in the underworld, she had to eat six pomegranate seeds, which compelled her to return every year. Whenever she does so, her mother Demeter, goddess of the harvest, becomes sad and cools the world so that nothing can grow. That myth, though false, is an explanation. It is also testable: if winter is caused by Demeter's sadness, then it must happen simultaneously everywhere on Earth.