On March 23, 2016, Prof. Henning Müller (HES-SO Valais-Wallis and Martinos Center) presented Medical image analysis and big data evaluation infrastructures at Stanford Medicine.
1. Medical image analysis and
big data evaluation infrastructures
Henning Müller
HES-SO &
Martinos Center
2. Overview
• Medical image analysis & retrieval projects
• 3D texture modeling
– Graph models for data analysis
• Big data/data science evaluation infrastructures
– ImageCLEF
– VISCERAL
– EaaS – Evaluation as a Service
• What comes next?
3. Henning Müller
• Studies in medical informatics in
Heidelberg, Germany
– Work in Portland, OR, USA
• PhD in image processing in Geneva,
focus on image analysis and retrieval
– Exchange at Monash Univ., Melbourne, Australia
• Prof. at Univ. of Geneva in medicine (2014)
– Medical image analysis and retrieval for decision
support
• Professor at the HES-SO Valais (2007)
– Head of the eHealth unit
• Sabbatical at the Martinos Center, Boston, MA
4. Medical imaging is big data!!
• Large amounts of imaging data are produced
• Imaging data are very complex
– And getting more complex
• Imaging is essential for
diagnosis and treatment
• Images out of their context
lose most of their meaning
– Clinical data are necessary
• Evidence-based medicine &
case-based reasoning
5. Medical image retrieval (history)
• MedGIFT project started in 2002
– Global image similarity
• Texture, grey levels
– Teaching files
– Linking text files and
image similarity
• Often data not available
– Medical data hard to get
– Images and text are
connected in cases
• Unrealistic expectations: high-quality retrieval vs. simple browsing
– Semantic gap
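The global grey-level similarity mentioned above can be sketched as follows. This is a minimal illustration (histogram intersection on normalized grey-level histograms), not the actual GIFT implementation; the images and values are made up:

```python
# Minimal sketch of global image similarity via grey-level histograms,
# in the spirit of early content-based retrieval (illustrative only).

def grey_histogram(image, bins=8, max_value=256):
    """Normalized grey-level histogram of a 2D image (list of rows)."""
    counts = [0] * bins
    total = 0
    for row in image:
        for v in row:
            counts[v * bins // max_value] += 1
            total += 1
    return [c / total for c in counts]

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1.0 means identical histograms."""
    return sum(min(a, b) for a, b in zip(h1, h2))

# Two tiny hypothetical images with similar grey-level distributions
query = [[10, 20], [30, 200]]
candidate = [[12, 25], [28, 210]]
sim = histogram_intersection(grey_histogram(query), grey_histogram(candidate))
```

Such global signatures are cheap to compute and compare, which is exactly why they hit the semantic gap: two images can share a histogram while showing very different anatomy.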
6. • Mixing multilingual data from many resources
and semantic information for medical retrieval
– LinkedLifeData.com
Allan Hanbury, Célia Boyer, Manfred Gschwandtner, Henning Müller, KHRESMOI: Towards
a Multi-Lingual Search and Access System for Biomedical Information, Med-e-Tel, pages
412-416, Luxembourg, 2011.
10. Texture analysis (2D->3D->4D)
• Describe various tissue types
– Brain, lung, …
– Concentration on 3D and 4D data
– Mainly texture descriptors
• Extract visual features/signatures
– Learned, so relation to deep learning
Adrien Depeursinge, Antonio Foncubierta–Rodriguez, Dimitri Van de Ville, and Henning
Müller, Three–dimensional solid texture analysis and retrieval: review and opportunities,
Medical Image Analysis, volume 18, number 1, pages 176-196, 2014.
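As a concrete (if simplified) example of the texture descriptors the slide refers to, here is a grey-level co-occurrence matrix (GLCM) with two classic summary features. This is a standard 2D descriptor used for illustration, not the Riesz-wavelet approach of the cited review; the tissue patch is synthetic:

```python
# Illustrative 2D texture descriptor: grey-level co-occurrence matrix (GLCM)
# at a fixed pixel offset, summarized by contrast and energy.

def glcm(image, levels, dx=1, dy=0):
    """Co-occurrence counts of grey-level pairs at offset (dx, dy)."""
    m = [[0] * levels for _ in range(levels)]
    h, w = len(image), len(image[0])
    for y in range(h):
        for x in range(w):
            nx, ny = x + dx, y + dy
            if 0 <= nx < w and 0 <= ny < h:
                m[image[y][x]][image[ny][nx]] += 1
    return m

def glcm_features(m):
    """Contrast (local grey-level variation) and energy (uniformity)."""
    n = len(m)
    total = sum(sum(row) for row in m) or 1
    contrast = sum((i - j) ** 2 * m[i][j] for i in range(n) for j in range(n)) / total
    energy = sum((m[i][j] / total) ** 2 for i in range(n) for j in range(n))
    return contrast, energy

# Synthetic 4x4 "tissue" patch with 4 grey levels
tissue = [[0, 0, 1, 1],
          [0, 0, 1, 1],
          [2, 2, 3, 3],
          [2, 2, 3, 3]]
contrast, energy = glcm_features(glcm(tissue, levels=4))
```

Extending such descriptors from 2D offsets to 3D and 4D neighbourhoods is precisely where the modeling complexity discussed in the following slides comes from.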
11. Database with CT image of
interstitial lung diseases
• 128 cases with CT image series and biopsy
confirmed diagnosis
• Manually annotated regions for tissue classes (1946)
– 6 of 13 tissue types have a larger number of examples
• 159 clinical parameters extracted (sparse)
– Smoking history, age, gender,
hematocrit, …
• Available after signing a
license agreement
12. Learned 3D signatures
• Learn combinations of Riesz wavelets as digital
signatures using SVMs (steerable filters)
– Create signatures to detect small local lesions
and visualize them
Adrien Depeursinge, Antonio Foncubierta–Rodriguez, Dimitri Van de Ville, and Henning
Müller, Rotation–covariant feature learning using steerable Riesz wavelets, IEEE
Transactions on Image Processing, volume 23, number 2, pages 898-908, 2014.
13. Learning Riesz in 3D
• Most medical tissues are naturally 3D
• But modeling gets much more complex
– Vertical planes
– 3-D checkerboard
– 3-D wiggled
checkerboard
15. Using graphs for lung data analysis
• Pulmonary hypertension and pulmonary embolism
– Dual energy CT (DECT)
• Based on a simple lung atlas
– Not based on lobes
• Analyzing relationships between the
lung areas and their perfusion
• Differences in statistical moments
for areas as features
– Can easily be combined with texture
• Good accuracy for PH (77%) and PE (79%)
– DECT important for PE
Yashin Dicente Cid, Henning Müller, Alexandra Platon, J.-P. Janssens, Frédéric Lador,
P.-A. Poletti, A. Depeursinge, A Lung Graph-Model for Pulmonary Hypertension and Pulmonary
Embolism Detection on DECT Images, submitted to MICCAI 2016, Athens, Greece, 2016.
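The graph idea above can be sketched in a few lines: each atlas region of the lung becomes a node described by statistical moments of its perfusion values, and edges carry differences between connected regions. This is a hedged illustration of the general approach, not the cited model; the region names and perfusion samples are invented:

```python
# Sketch of a lung graph: nodes = atlas regions with moment features,
# edges = perfusion differences between neighbouring regions.
import statistics

def region_features(values):
    """First statistical moments of the perfusion values in one region."""
    return {"mean": statistics.fmean(values),
            "std": statistics.pstdev(values)}

def edge_feature(f1, f2):
    """Edge weight: how differently two regions are perfused."""
    return abs(f1["mean"] - f2["mean"]) + abs(f1["std"] - f2["std"])

# Hypothetical perfusion samples for two neighbouring atlas regions
upper_left = [0.8, 0.9, 0.85, 0.8]
lower_left = [0.3, 0.2, 0.25, 0.3]

f_up = region_features(upper_left)
f_low = region_features(lower_left)
weight = edge_feature(f_up, f_low)  # large weight hints at a perfusion defect
```

Because the node and edge features are just fixed-length vectors, they combine naturally with texture features for a standard classifier, as the slide notes.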
16. Scientific challenges/crowdsourcing
• Most conferences now organize challenge
sessions (MICCAI, ISBI, Grand Challenges, …)
• Public administrations use it increasingly
– NCI uses it: Coding4Cancer
– http://www.challenge.gov/
• Commercial challenge platforms
– Kaggle, Topcoder, also Netflix challenge
• Open innovation in data science
• Amazon Mechanical Turk for small tasks,
including medical image analysis
– But also: https://www.cellslider.net/
17. • Benchmark on multimodal image retrieval
– Run since 2003, medical task since 2004
– Part of the Cross Language Evaluation Forum (CLEF)
• Many tasks related to image retrieval
– Image classification
– Image-based retrieval
– Case-based retrieval
– Compound figure separation
– Caption prediction
– …
• Many old databases remain available, imageclef.org
Henning Müller, Paul Clough, Thomas
Deselaers, Barbara Caputo, ImageCLEF –
Experimental evaluation of visual information
retrieval, Springer, 2010.
18. Challenges with challenges
• Difficult to distribute very big datasets
– Sending around hard disks? Risky, expensive
• Sharing confidential data
– Big data is impossible to anonymize automatically
• Quickly changing data sets
– Outdated by the time a test collection has been created
• Optimizations on the test data are possible
– Manual adaptations, etc.
– Often hard to fully reproduce results
• Groups without large computing are disadvantaged
23. [Architecture diagram: cloud-based evaluation infrastructure on Microsoft Azure — registration system, participant virtual machines, annotation management system with locally installed annotation clients for the annotators (radiologists), analysis system, and training/test data]
24. Silver corpus (example trachea)
• Executable code of all participants
– Run it on new data, do label fusion
– Example: four participant segmentations (Dice 0.85, 0.71, 0.84, 0.83)
fused into a silver-corpus segmentation with Dice 0.92
26. Evaluation as a Service (EaaS)
• Moving the algorithms to the data not vice versa
– Required when data are: very large, changing
quickly, confidential (medical, commercial, …)
• Different approaches
– Source code submission, APIs, VMs local or in the
cloud, Docker containers, specific frameworks
• Allows for continuous evaluation, component-based
evaluation, total reproducibility, updates, …
– Workshop March 2015 in Sierre on EaaS
– Workshop November 2015 in Boston on cloud-based
evaluation (http://www.martinos.org/cloudWorkshop/)
Allan Hanbury, Henning Müller, Georg Langs, Marc André Weber, Bjoern H. Menze, and Tomas
Salas Fernandez, Bringing the algorithms to the data: cloud–based benchmarking for medical
image analysis, CLEF conference, Springer Lecture Notes in Computer Science, 2012.
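The core EaaS contract — the algorithm travels to the data, and only aggregate scores travel back — can be sketched in a few lines. Everything here is hypothetical (names, data, metric); in practice the submission is a VM or Docker container, not a Python function:

```python
# Minimal sketch of Evaluation-as-a-Service: the organiser keeps the test
# data private; participants submit code and receive only an aggregate score.

# Private test set (feature vector, label) — never shown to participants
HIDDEN_TEST_SET = [([0.1, 0.2], 0), ([0.9, 0.8], 1), ([0.2, 0.1], 0)]

def evaluate_submission(predict):
    """Run a participant's predict() on the private test set; return accuracy."""
    correct = sum(predict(x) == y for x, y in HIDDEN_TEST_SET)
    return correct / len(HIDDEN_TEST_SET)

# A participant submits only code (in practice a container image):
def my_classifier(features):
    return 1 if sum(features) > 1.0 else 0

score = evaluate_submission(my_classifier)  # participant sees only this number
```

Because the raw data never leave the organiser's infrastructure, the same setup works for confidential, very large, or quickly changing datasets — the three cases the slide lists.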
27. Sharing images, research data
• Very important aspect of research is to have solid
methods, data, large if possible
– If the data are not available, results cannot be reproduced
– If the data are small, results may be meaningless
• Many multi-center projects spend most of their money on
data acquisition; when this is delayed, no time is left for analysis
– IRB takes long, sometimes restrictions are strange
• Research is international!
• NIH & NCI are great to push data availability
– But data can be made available in an unusable way
33. Business models for these links
• Manually annotate large data sets for challenges
– Data needs to be available in a secure space
• Have researchers work on data (on infrastructure)
– Deliver code in Docker containers
• Commercialize results and share benefits
34. • Part of QIN – Quantitative Imaging Network (NCI)
– Cloud-Based Image Biomarker Optimization Platform
• Create challenges for QIN to validate tools
• Use CodaLab to run project challenges
– Run code in containers (Docker), well integrated
– Share code blocks across teams, evaluate
combinations
34. CodaLab
• Open-source challenge platform supported by
Microsoft
– Integrated with the Azure cloud infrastructure
• Easy creation of new challenges
• Participant registration, leaderboard of results
37. Future of research infrastructures
• Much more centered around data!!
– Nature Scientific Data underlines the importance!
• Data need to be available but in a meaningful way
– Infrastructure needs to be available, along with a way
to evaluate on the data with specific tasks
• More work for data preparation but in line with IRB
– Analysis inside medical institutions
• Code will become even more portable
– Docker helps enormously and develops quickly
• Public private partnerships to be sustainable
• Total reproducibility, long term, sharing tools
• Much higher efficiency
38. Conclusions
• Medicine is digital medicine
– More data and more complex links (genes, visual,
signals, …)
• Medical data science requires new infrastructures
– Use routine data rather than manually extracted, curated
data; curate at large scale and accommodate errors
• Active learning and interactive data curation
– Use large data sets from data warehouses
– Keep data where they are produced
• More “local” computation, i.e. where the data are
– Secure aggregation of results
• Sharing infrastructures, data and more
39. Contact
• More information can be found at
– http://khresmoi.eu/
– http://visceral.eu/
– http://medgift.hevs.ch/
– http://publications.hevs.ch/
• Contact:
– Henning.mueller@hevs.ch