
How Artificial Intelligence Is Transforming Medicine (March 2019)

Lecture materials on medical artificial intelligence, presented in March 2019 at Seoul National University College of Medicine, Korea University Hospital, and elsewhere. Updated with recent research results.



  1. 1. Director, Digital Healthcare Institute; Visiting Professor, Department of Digital Health, Sungkyunkwan University. 최윤섭 (Yoon Sup Choi), PhD. How Artificial Intelligence Is Transforming Medicine
  2. 2. “It's in Apple's DNA that technology alone is not enough. It's technology married with liberal arts.”
  3. 3. The Convergence of IT, BT and Medicine
  4. 4. [Book cover] 『의료 인공지능』 (Medical Artificial Intelligence), by 최윤섭; cover design by 최승협; ISBN 979-11-86269-99-2; 20,000 won; billed as the first Korean book on the subject. Back-cover tagline: the present and future of medical AI, from future-medicine scholar Dr. 최윤섭; the current state of medical deep learning and IBM Watson; will AI replace doctors? Endorsements, translated: "Medical AI is driving innovation that will reshape a conservative medical system. Its rapid development and broad impact are hard for today's increasingly specialized medical professionals to grasp, and it is unclear where one should even begin studying. This book, which plainly explains the concepts and applications of medical AI and its relationship with physicians, will be a fine guide, and a particularly useful introduction for the medical students and young clinicians who will lead the future." ━ 서준범, Professor of Radiology, Asan Medical Center; Director, Medical Imaging AI Research Program. "Hardly anyone disputes that AI will greatly change the paradigm of medicine. But medicine poses many hard problems for AI, and the solutions vary enormously; the cure-all medical AI people commonly imagine does not exist. This book offers a balanced analysis of the development, use, and potential of diverse medical AI. I recommend it both to clinicians seeking to adopt AI and to AI researchers venturing into the unfamiliar territory of medicine." ━ 정지훈, Senior Lecturer, Media Communication, Kyung Hee Cyber University; physician. "As the professor responsible for basic medical education at Seoul National University College of Medicine, I feel keenly that a curriculum unchanged since industrialization cannot prepare medical students for a fast-changing AI era. This book carries the expert analysis and forward-looking perspective of Director 최윤섭, with whom I am pioneering AI education in medical school. I recommend it to medical students and professors preparing for an AI future, and to students and parents considering medical school." ━ 최형진, Professor, Department of Anatomy, Seoul National University College of Medicine; internist. "Extreme views and attitudes toward the introduction of medical AI currently coexist. Through rich examples and deep insight, this book provides a balanced view of the present and future of medical AI, and opens the forum for debate needed before AI can be adopted in earnest. Looking back ten years from now, when medical AI is commonplace, I expect we will find that this book served as a guide to that era." ━ 정규환, CTO, VUNO. "Medical AI demands a more fundamental understanding than AI in other fields, because it goes beyond substituting for human work to shift medicine's paradigm onto a data-driven footing. We therefore need a balanced understanding of AI and hard thinking about how it can help doctors and patients. That is why this book, which brings together the results of such efforts worldwide, is so welcome." ━ 백승욱, CEO, Lunit. "This book covers not only the latest developments in medical AI but also its significance, limits, outlook, and many questions worth thinking about. On contested issues the author argues his own view persuasively, grounded in clear evidence. I personally plan to use it as a graduate course textbook." ━ 신수용, Professor, Department of Digital Health, Sungkyunkwan University
  5. 5. Medical AI •Part 1: The Second Machine Age and medical AI •Part 2: The past and present of medical AI •Part 3: How should we meet the future?
  6. 6. Medical AI •Part 1: The Second Machine Age and medical AI •Part 2: The past and present of medical AI •Part 3: How should we meet the future?
  7. 7. Inevitable Tsunami of Change
  8. 8. Korean Society of Radiology, Spring Scientific Meeting, June 2017
  9. 9. Vinod Khosla. Founder and first CEO of Sun Microsystems; former partner at KPCB; CEO of Khosla Ventures; legendary venture capitalist in Silicon Valley
  10. 10. “Technology will replace 80% of doctors”
  11. 11. https://www.youtube.com/watch?time_continue=70&v=2HMPRXstSvQ “We should stop training radiologists right now. It is self-evident that within five years deep learning will outperform radiologists.” Hinton on Radiology
  12. 12. Luddites in the 1810s
  13. 13. and/or
  14. 14. •AP: robots write news articles in place of human reporters •Capable of producing 2,000 articles per second •Coverage expanded from the earnings of 300 companies to 3,000
  15. 15. • 1978 • As part of the obscure task of “discovery” — providing documents relevant to a lawsuit — the studios examined six million documents at a cost of more than $2.2 million, much of it to pay for a platoon of lawyers and paralegals who worked for months at high hourly rates. • 2011 • Now, thanks to advances in artificial intelligence, “e-discovery” software can analyze documents in a fraction of the time for a fraction of the cost. • In January, for example, Blackstone Discovery of Palo Alto, Calif., helped analyze 1.5 million documents for less than $100,000.
  16. 16. “At its height back in 2000, the U.S. cash equities trading desk at Goldman Sachs’s New York headquarters employed 600 traders, buying and selling stock on the orders of the investment bank’s large clients. Today there are just two equity traders left”
  17. 17. •Japan's Fukoku Mutual Life Insurance decided to lay off more than 30 claims assessors and hand the work to IBM Watson Explorer •Watson judges whether to pay out benefits based on medical records •The switch to AI is expected to raise productivity by 30% •ROI is expected within two years •Year 1: 140M yen •Year 2: 200M yen
  18. 18. No choice but to bring AI into medicine
  19. 19. Martin Duggan,“IBM Watson Health - Integrated Care & the Evolution to Cognitive Computing”
  20. 20. •Artificial Narrow Intelligence (ANI) • AI that excels at a specific task • chess, quiz shows, mail filtering, product recommendation, autonomous driving •Artificial General Intelligence (AGI) • human-level AI across all domains • reasoning, planning, problem solving, abstraction, learning complex concepts •Artificial Super Intelligence (ASI) • AI surpassing humans in every domain, from science and technology to social ability • “Any sufficiently advanced technology is indistinguishable from magic.” - Arthur C. Clarke
  21. 21. When will machines attain human-level intelligence? [Chart: cumulative share of respondents at the 10%, 50%, and 90% confidence levels, horizon 2010–2100] Surveys: Philosophy and Theory of AI (2011); Artificial General Intelligence (2012); Greek Association for Artificial Intelligence; survey of the 100 most frequently cited authors (2013); combined estimate. Source: Superintelligence, Nick Bostrom (2014)
  22. 22. Superintelligence: Science or Fiction? Panelists: Elon Musk (Tesla, SpaceX), Bart Selman (Cornell), Ray Kurzweil (Google), David Chalmers (NYU), Nick Bostrom (FHI), Demis Hassabis (DeepMind), Stuart Russell (Berkeley), Sam Harris, and Jaan Tallinn (CSER/FLI). January 6-8, 2017, Asilomar, CA https://brunch.co.kr/@kakao-it/49 https://www.youtube.com/watch?v=h0962biiZa4
  23. 23. Superintelligence: Science or Fiction? Panelists: Elon Musk (Tesla, SpaceX), Stuart Russell (Berkeley), Bart Selman (Cornell), Ray Kurzweil (Google), David Chalmers (NYU), Nick Bostrom (FHI), Demis Hassabis (DeepMind), Sam Harris, and Jaan Tallinn (CSER/FLI). January 6-8, 2017, Asilomar, CA. Q: Is the realm of superintelligence reachable? All nine panelists: YES. Q: Do you think an entity with superintelligence will actually appear? All nine: YES. Q: Do you hope the realization of superintelligence happens? Kurzweil, Bostrom, Hassabis: YES; Musk, Russell, Selman, Chalmers, Harris, Tallinn: Complicated. https://brunch.co.kr/@kakao-it/49 https://www.youtube.com/watch?v=h0962biiZa4
  24. 24. http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html
  25. 25. http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html
  26. 26. •Artificial Narrow Intelligence (ANI) • AI that excels at a specific task • chess, quiz shows, mail filtering, product recommendation, autonomous driving •Artificial General Intelligence (AGI) • human-level AI across all domains • reasoning, planning, problem solving, abstraction, learning complex concepts •Artificial Super Intelligence (ASI) • AI surpassing humans in every domain, from science and technology to social ability • “Any sufficiently advanced technology is indistinguishable from magic.” - Arthur C. Clarke
  27. 27. Medical AI •Part 1: The Second Machine Age and medical AI •Part 2: The past and present of medical AI •Part 3: How should we meet the future?
  28. 28. The three types of medical AI •Analysis of complex medical data to derive insights •Analysis and interpretation of medical imaging and pathology data •Monitoring of continuous data for prevention and prediction
  29. 29. The three types of medical AI •Analysis of complex medical data to derive insights •Analysis and interpretation of medical imaging and pathology data •Monitoring of continuous data for prevention and prediction
  30. 30. Jeopardy! In 2011, Watson faced two human champions in a quiz showdown and won decisively.
  31. 31. 600,000 pieces of medical evidence 2 million pages of text from 42 medical journals and clinical trials 69 guidelines, 61,540 clinical trials IBM Watson on Medicine Watson learned... + 1,500 lung cancer cases physician notes, lab results and clinical research + 14,700 hours of hands-on training
  32. 32. IBM Watson Health Chronicle, 2011–2018 [timeline figure; items grouped by sector]. Academia and medicine: Jeopardy! victory; collaborations with the New York MSK Cancer Center (lung cancer), MD Anderson (leukemia; pilot results presented at ASCO), the New York Genome Center (glioblastoma analysis), the Cleveland Clinic (cancer genome analysis), the Mayo Clinic (clinical-trial matching, with results later announced), and the Broad Institute (genomic analysis of anticancer drug resistance); WFO adoption at the University of Tokyo, Manipal Hospital in India (which also announced WFO accuracy results), Bumrungrad International Hospital in Thailand, Jupiter Medical Center, and Korean hospitals including Gachon University Gil Medical Center, Chonnam National University Hospital, Pusan National University Hospital, Chosun University Hospital, Konyang University Hospital, Daegu Catholic University Medical Center, and Daegu Dongsan Hospital; the first WFO paper; the MFDS (Korean FDA) AI guideline, from draft to final. Industry: launch of Watson Health; acquisitions of Phytel, Explorys, Merge Healthcare, and Truven Health; partnerships with J&J, Apple, and Medtronic (blood-glucose app demo, Sugar.IQ launch), Epic Systems and the Mayo Clinic (EHR analysis), the pharmaceutical company Teva, and Under Armour; Watson Fund investments in Welltok, Modernizing Medicine, and Pathway Genomics (OME closed alpha launch); creation of the Korea IBM Watson business unit and the Korean Watson consortium; sleep research via Apple ResearchKit.
  33. 33. IBM Watson Health Chronicle, 2011–2018 [timeline figure repeated; items grouped by sector]. Academia and medicine: Jeopardy! victory; collaborations with the New York MSK Cancer Center (lung cancer), MD Anderson (leukemia; pilot results presented at ASCO), the New York Genome Center (glioblastoma analysis), the Cleveland Clinic (cancer genome analysis), the Mayo Clinic (clinical-trial matching, with results later announced), and the Broad Institute (genomic analysis of anticancer drug resistance); WFO adoption at the University of Tokyo, Manipal Hospital in India (which also announced WFO accuracy results), Bumrungrad International Hospital in Thailand, Jupiter Medical Center, and Korean hospitals including Gachon University Gil Medical Center, Chonnam National University Hospital, Pusan National University Hospital, Chosun University Hospital, Konyang University Hospital, Daegu Catholic University Medical Center, and Daegu Dongsan Hospital; the first WFO paper; the MFDS (Korean FDA) AI guideline, from draft to final. Industry: launch of Watson Health; acquisitions of Phytel, Explorys, Merge Healthcare, and Truven Health; partnerships with J&J, Apple, and Medtronic (blood-glucose app demo, Sugar.IQ launch), Epic Systems and the Mayo Clinic (EHR analysis), the pharmaceutical company Teva, and Under Armour; Watson Fund investments in Welltok, Modernizing Medicine, and Pathway Genomics (OME closed alpha launch); creation of the Korea IBM Watson business unit and the Korean Watson consortium; sleep research via Apple ResearchKit.
  34. 34. Annals of Oncology (2016) 27 (suppl_9): ix179–ix180. 10.1093/annonc/mdw601. Validation study to assess performance of IBM cognitive computing system Watson for oncology with Manipal multidisciplinary tumour board for 1000 consecutive cases: An Indian experience. •Compared the concordance between physicians' recommendations and WFO's for 1,000 cancer patients at Manipal Hospital, India •Breast cancer 638, colon cancer 126, rectal cancer 124, lung cancer 112 •Physician-Watson concordance: recommended (50%), for consideration (28%), not recommended (17%) •5% of the physicians' treatment plans did not appear among Watson's recommendations •Concordance differed by cancer type •Rectal cancer (85%), lung cancer (17.8%) •Triple-negative breast cancer (67.9%), HER2-negative breast cancer (35%)
  35. 35. San Antonio Breast Cancer Symposium, December 6-10, 2016. Concordance of WFO (@T2) and MMDT (@T1* vs. T2**), N = 638 breast cancer cases:
  Time point | REC, n (%) | REC + FC, n (%)
  T1* | 296 (46) | 463 (73)
  T2** | 381 (60) | 574 (90)
  * T1: time of the original treatment decision by the MMDT in the past (last 1-3 years). ** T2: time (2016) of WFO's treatment advice and of the MMDT's treatment decision upon blinded re-review of non-concordant cases.
  36. 36. WFO in ASCO 2017 • Early experience with the IBM WFO cognitive computing system for lung and colorectal cancer treatment (Manipal Hospital) • Over the past 3 years: lung cancer (112), colon cancer (126), rectal cancer (124) • Lung cancer: localized 88.9%, metastatic 97.9% • Colon cancer: localized 85.5%, metastatic 76.6% • Rectal cancer: localized 96.8%, metastatic 80.6% Performance of WFO in India, 2017 ASCO Annual Meeting, J Clin Oncol 35, 2017 (suppl; abstr 8527)
  37. 37. WFO in ASCO 2017 •Results of applying Watson to colorectal and gastric cancer patients at Gachon University Gil Medical Center • Colorectal cancer patients (stage II-IV): 340 • Advanced gastric cancer patients: 185 (retrospective) • Concordance with physicians • Colorectal cancer: 73% • Among 250 patients who received adjuvant chemotherapy: 85% • Among 90 metastatic patients: 40% • Gastric cancer: 49% • Trastuzumab/FOLFOX is not reimbursed under the Korean national health insurance • S-1 (tegafur, gimeracil, and oteracil) + cisplatin is routine in Korea but not used in the US
  38. 38. ORIGINAL ARTICLE. Watson for Oncology and breast cancer treatment recommendations: agreement with an expert multidisciplinary tumor board. S. P. Somashekhar, M.-J. Sepúlveda, S. Puglielli, A. D. Norden, E. H. Shortliffe, C. Rohit Kumar, A. Rauthan, N. Arun Kumar, P. Patil, K. Rhee & Y. Ramya. Manipal Comprehensive Cancer Centre, Manipal Hospital, Bangalore, India; IBM Research (Retired), Yorktown Heights; Watson Health, IBM Corporation, Cambridge; Department of Surgical Oncology, College of Health Solutions, Arizona State University, Phoenix, USA. Correspondence: Prof. Sampige Prasannakumar Somashekhar, Manipal Comprehensive Cancer Centre, Manipal Hospital, Old Airport Road, Bangalore 560017, Karnataka, India. Tel: +91-9845712012; Fax: +91-80-2502-3759; E-mail: somashekhar.sp@manipalhospitals.com
Background: Breast cancer oncologists are challenged to personalize care with rapidly changing scientific evidence, drug approvals, and treatment guidelines. Artificial intelligence (AI) clinical decision-support systems (CDSSs) have the potential to help address this challenge. We report here the results of examining the level of agreement (concordance) between treatment recommendations made by the AI CDSS Watson for Oncology (WFO) and a multidisciplinary tumor board for breast cancer.
Patients and methods: Treatment recommendations were provided for 638 breast cancers between 2014 and 2016 at the Manipal Comprehensive Cancer Center, Bengaluru, India. WFO provided treatment recommendations for the identical cases in 2016. A blinded second review was carried out by the center's tumor board in 2016 for all cases in which there was not agreement, to account for treatments and guidelines not available before 2016. Treatment recommendations were considered concordant if the tumor board recommendations were designated 'recommended' or 'for consideration' by WFO.
Results: Treatment concordance between WFO and the multidisciplinary tumor board occurred in 93% of breast cancer cases. Subgroup analysis found that patients with stage I or IV disease were less likely to be concordant than patients with stage II or III disease. Increasing age was found to have a major impact on concordance. Concordance declined significantly (P = 0.02; P < 0.001) in all age groups compared with patients <45 years of age, except for the age group 55-64 years. Receptor status was not found to affect concordance.
Conclusion: Treatment recommendations made by WFO and the tumor board were highly concordant for breast cancer cases examined. Breast cancer stage and patient age had significant influence on concordance, while receptor status alone did not. This study demonstrates that the AI clinical decision-support system WFO may be a helpful tool for breast cancer treatment decision making, especially at centers where expert breast cancer resources are limited.
Key words: Watson for Oncology, artificial intelligence, cognitive clinical decision-support systems, breast cancer, concordance, multidisciplinary tumor board.
Introduction: Oncologists who treat breast cancer are challenged by a large and rapidly expanding knowledge base [1, 2]. As of October 2017, for example, there were 69 FDA-approved drugs for the treatment of breast cancer, not including combination treatment regimens [3]. The growth of massive genetic and clinical databases, along with computing systems to exploit them, will accelerate the speed of breast cancer treatment advances and shorten the cycle time for changes to breast cancer treatment guidelines [4, 5]. In addition, these information management challenges in cancer care are occurring in a practice environment where there is little time available for tracking and accessing relevant information at the point of care [6]. For example, a study that surveyed 1117 oncologists reported that on average 4.6 h per week were spent keeping …
© The Author(s) 2018. Published by Oxford University Press on behalf of the European Society for Medical Oncology. All rights reserved. Annals of Oncology 29: 418-423, 2018. doi:10.1093/annonc/mdx781. Published online 9 January 2018.
  39. 39. Somashekhar et al., "Watson for Oncology and breast cancer treatment recommendations: agreement with an expert multidisciplinary tumor board," Annals of Oncology 29: 418-423, 2018; results tables and figures:
  Table 2. MMDT and WFO recommendations after the initial and blinded second reviews (N = 638 breast cancer cases).
  Review | Recommended, n (%) | For consideration, n (%) | Concordant total, n (%) | Not recommended, n (%) | Not available, n (%) | Non-concordant total, n (%)
  Initial review (T1 MMDT vs. T2 WFO) | 296 (46) | 167 (26) | 463 (73) | 137 (21) | 38 (6) | 175 (27)
  Second review (T2 MMDT vs. T2 WFO) | 397 (62) | 194 (30) | 591 (93) | 36 (5) | 11 (2) | 47 (7)
  T1 MMDT: original MMDT recommendation from 2014 to 2016; T2 WFO: WFO advisor treatment recommendation in 2016; T2 MMDT: MMDT treatment recommendation in 2016; MMDT: Manipal multidisciplinary tumor board; WFO: Watson for Oncology.
  Figure 1. Treatment concordance between WFO and the MMDT, overall and by stage: overall (n=638) 93%; stage I (n=61) 80%; stage II (n=262) 97%; stage III (n=191) 95%; stage IV (n=124) 86%.
  Figure 2. Treatment concordance between WFO and the MMDT by stage and receptor status (HR+, HER2/neu+, and triple-negative; non-metastatic and metastatic): concordance ranged from 75% to 98% across the six subgroups.
  40. 40. Tentative conclusions •Concordance between Watson for Oncology and physicians: •differs by cancer type •differs by stage within the same cancer type •differs by hospital and country for the same cancer type •may change over time
  41. 41. Principles are needed •For which patients should we ask Watson's opinion? •How much should we trust Watson (per cancer type)? •Should Watson's recommendation be disclosed to the patient? •What should be done when Watson and the care team disagree? •Can Watson's use be reimbursed by insurance? These criteria can change the quality and outcomes of care, yet at present each hospital applies its own ad hoc standards.
  42. 42. Genomics, Oncology, Clinical Trial Matching. Watson Health's oncology clients span more than 35 hospital systems. “Empowering the Oncology Community for Cancer Care,” Andrew Norden, KOTRA Conference, March 2017, “The Future of Health is Cognitive”
  43. 43. IBM Watson Health: Watson for Clinical Trial Matching (CTM). Current challenges: searching across the eligibility criteria of clinical trials is time consuming and labor intensive; fewer than 5% of adult cancer patients participate in clinical trials (per the NCCN); 37% of sites fail to meet minimum enrollment targets, and 11% of sites fail to enroll a single patient. The Watson solution: uses structured and unstructured patient data to quickly check eligibility across relevant clinical trials; provides eligible trial considerations ranked by relevance; increases speed to qualify patients. Clinical investigators (opportunity): trials-to-patient feasibility analysis, identifying the sites with the most potential for enrollment, and optimizing inclusion/exclusion criteria in protocols, for faster recruitment and better-designed protocols. Point of care (offering): patient-to-trials matching, quickly finding the right trial among hundreds of open trials, improving care quality, consistency, and efficiency. Sources: NCCN; http://csdd.tufts.edu/files/uploads/02_-_jan_15,_2013_-_recruitment-retention.pdf
  44. 44. •Over 16 weeks, covered 2,620 lung and breast cancer patients at HOG (Highlands Oncology Group) •90 patients were screened against three Novartis breast cancer trial protocols •Clinical trial coordinator: 1 hour 50 minutes •Watson CTM: 24 minutes (a 78% time reduction) •Watson CTM automatically screened out the 94% of patients who did not meet the trial criteria
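The screening step described above can be sketched as a rule check over structured patient fields. This is a minimal illustration of automated eligibility screening under assumed data; the field names, criteria, and patients are hypothetical, not Watson CTM's actual data model:

```python
# Sketch of automated clinical-trial eligibility screening.
# All field names and criteria below are hypothetical illustrations.

def is_eligible(patient, trial):
    """True if the patient passes every inclusion rule
    and triggers no exclusion rule."""
    return (all(rule(patient) for rule in trial["inclusion"]) and
            not any(rule(patient) for rule in trial["exclusion"]))

breast_ca_trial = {
    "inclusion": [
        lambda p: p["diagnosis"] == "breast cancer",
        lambda p: p["stage"] in {"II", "III"},
        lambda p: p["ecog"] <= 1,          # performance status
    ],
    "exclusion": [
        lambda p: p["prior_chemo"],
    ],
}

patients = [
    {"id": 1, "diagnosis": "breast cancer", "stage": "II",  "ecog": 1, "prior_chemo": False},
    {"id": 2, "diagnosis": "breast cancer", "stage": "IV",  "ecog": 0, "prior_chemo": False},
    {"id": 3, "diagnosis": "lung cancer",   "stage": "III", "ecog": 1, "prior_chemo": True},
]

# Screen out ineligible patients automatically, keeping only candidates
# for coordinator review.
eligible = [p["id"] for p in patients if is_eligible(p, breast_ca_trial)]
print(eligible)  # [1]
```

In practice most of the work lies in extracting such structured fields from unstructured notes, which is the part the slide attributes to the system's use of both structured and unstructured patient data.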
  45. 45. •The Mayo Clinic announced that enrollment in its breast cancer drug trials increased by 80%
  46. 46. Watson Genomics Overview. Watson Genomics content: 20+ content sources, including medical articles (23 million), drug information, clinical trial information, and genomic information. Pipeline: case sequenced (VCF/MAF, Log2, DGE) → encryption → molecular profile analysis → pathway analysis → drug analysis service → analysis, reports, and visualizations
  47. 47. •In all 29 instances of retrospective analysis, WGA finds actionable insights and identifies potential drugs for consideration. •The automated generation of these insights is achieved by WGA in minutes.
  48. 48. Kazimierz O. Wrzeszczynski, PhD; Mayu O. Frank, NP, MS; Takahiko Koyama, PhD; Kahn Rhrissorrakrai, PhD; Nicolas Robine, PhD; Filippo Utro, PhD; Anne-Katrin Emde, PhD; Bo-Juen Chen, PhD; Kanika Arora, MS; Minita Shah, MS; Vladimir Vacic, PhD; Raquel Norel, PhD; Erhan Bilal, PhD; Ewa A. Bergmann, MSc; Julia L. Moore Vogel, PhD; Jeffrey N. Bruce, MD; Andrew B. Lassman, MD; Peter Canoll, MD, PhD; Christian Grommes, MD; Steve Harvey, BS; Laxmi Parida, PhD; Vanessa V. Michelini, BS; Michael C. Zody, PhD; Vaidehi Jobanputra, PhD; Ajay K. Royyuru, PhD; Robert B. Darnell, MD,
Comparing sequencing assays and human-machine analyses in actionable genomics for glioblastoma.
ABSTRACT. Objective: To analyze a glioblastoma tumor specimen with 3 different platforms and compare potentially actionable calls from each. Methods: Tumor DNA was analyzed by a commercial targeted panel. In addition, tumor-normal DNA was analyzed by whole-genome sequencing (WGS) and tumor RNA was analyzed by RNA sequencing (RNA-seq). The WGS and RNA-seq data were analyzed by a team of bioinformaticians and cancer oncologists, and separately by IBM Watson Genomic Analytics (WGA), an automated system for prioritizing somatic variants and identifying drugs. Results: More variants were identified by WGS/RNA analysis than by targeted panels. WGA completed a comparable analysis in a fraction of the time required by the human analysts. Conclusions: The development of an effective human-machine interface in the analysis of deep cancer genomic datasets may provide potentially clinically actionable calls for individual patients in a more timely and efficient manner than currently possible. ClinicalTrials.gov identifier: NCT02725684. Neurol Genet 2017;3:e164; doi: 10.1212/NXG.0000000000000164
GLOSSARY: CNV = copy number variant; EGFR = epidermal growth factor receptor; GATK = Genome Analysis Toolkit; GBM = glioblastoma; IRB = institutional review board; NLP = Natural Language Processing; NYGC = New York Genome Center; RNA-seq = RNA sequencing; SNV = single nucleotide variant; SV = structural variant; TCGA = The Cancer Genome Atlas; TPM = transcripts per million; VCF = variant call file; VUS = variants of uncertain significance; WGA = Watson Genomic Analytics; WGS = whole-genome sequencing.
The clinical application of next-generation sequencing technology to cancer diagnosis and treatment is in its early stages.1-3 An initial implementation of this technology has been in targeted panels, where subsets of cancer-relevant and/or highly actionable genes are scrutinized for potentially actionable mutations. This approach has been widely adopted, offering high redundancy of sequence coverage for the small number of sites of known clinical utility at relatively
  49. 49. Table 3. Variants identified as actionable by the three platforms (NYGC WGS/RNA-seq analysis, WGA, and FoundationOne), with candidate drugs where identified: CDKN2A and CDKN2B deletions (all three platforms; palbociclib, LY2835219, clinical trials); EGFR whole-arm gain (NYGC only; cetuximab); ERG missense P114Q (RI-EIP); FGFR3 missense L49V (NYGC; TK-1258); MET amplification (all three; INC280, crizotinib, cabozantinib); MET frame shift R755fs and exon skipping (NYGC only; INC280); NF1 deletion and nonsense R461* (MEK162, cobimetinib, trametinib, everolimus, temsirolimus, GDC-0994); PIK3R1 insertion R562_M563insI (BKM120, LY3023414); PTEN whole-arm loss (everolimus, AZD2014); STAG2 frame shift R1012fs (veliparib, olaparib, clinical trial); DNMT3A splice-site and TERT promoter variants. Six germline missense variants (ABL2, mTOR, NPM1, NTRK1, PTCH1, TSC1) were classified as VUS. •WGA analysis vastly accelerated the time to discovery of potentially actionable variants from the VCF files •WGA was able to provide reports of potentially clinically actionable insights within 10 minutes, while human analysis of this patient's VCF file took an estimated 160 hours of person-time
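The kind of lookup WGA automates, matching called variants against a curated knowledge base of actionable gene-drug associations, can be sketched as follows. The tiny knowledge base here is a hand-written illustration drawing a few pairs from Table 3, not WGA's actual content or data model:

```python
# Sketch of matching tumor variant calls to potentially actionable drugs.
# The knowledge base is a tiny illustrative subset, not WGA's real content.

ACTIONABLE = {
    ("CDKN2A", "deletion"):      ["palbociclib", "LY2835219"],
    ("MET",    "amplification"): ["crizotinib", "cabozantinib"],
    ("NF1",    "nonsense"):      ["MEK162", "trametinib"],
}

def match_drugs(variants):
    """Map each called (gene, variant-type) pair to candidate drugs,
    skipping variants with no knowledge-base entry (cf. VUS)."""
    report = {}
    for gene, vtype in variants:
        drugs = ACTIONABLE.get((gene, vtype))
        if drugs:
            report[f"{gene} {vtype}"] = drugs
    return report

# Hypothetical calls extracted from a VCF for one tumor specimen.
calls = [("CDKN2A", "deletion"), ("MET", "amplification"), ("FGFR3", "missense")]
print(match_drugs(calls))
# {'CDKN2A deletion': ['palbociclib', 'LY2835219'],
#  'MET amplification': ['crizotinib', 'cabozantinib']}
```

The speedup the slide reports (minutes versus an estimated 160 hours) comes largely from automating this prioritization and the literature lookup behind each entry, not from the table lookup itself.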
  50. 50. •Predicting whether a patient will have a first cardiovascular event within the next 10 years •Prospective cohort study of 378,256 UK patients •The first large-scale study to predict disease with machine learning from routine clinical data •Compared the accuracy of the established ACC/AHA guideline against four machine-learning algorithms •Random forest; logistic regression; gradient boosting; neural network
Stephen F. Weng et al., PLoS ONE 2017. Can machine-learning improve cardiovascular risk prediction using routine clinical data? The baseline ACC/AHA model had a sensitivity of 62.7% and PPV of 17.1%. The random forest algorithm resulted in a net increase of 191 correctly predicted CVD cases over the baseline model (sensitivity 65.3%, PPV 17.8%), while logistic regression resulted in a net increase of 324 (sensitivity 67.1%, PPV 18.3%). Gradient boosting machines and neural networks performed best, with a net increase of 354 (sensitivity 67.5%, PPV 18.4%) and 355 (sensitivity 67.5%, PPV 18.4%) correctly predicted CVD cases, respectively. The ACC/AHA baseline model correctly predicted 53,106 of 75,585 total non-cases, for a specificity of 70.3% and NPV of 95.1%.
Table 3 lists the top-10 risk factors for each algorithm, in descending order of coefficient effect size (ACC/AHA; logistic regression), weighting (neural networks), or selection frequency (random forest, gradient boosting machines), derived from a training cohort of 295,267 patients. Age, gender, ethnicity, smoking, HDL and total cholesterol, and the Townsend deprivation index recur across the models, alongside factors such as atrial fibrillation, HbA1c, triglycerides, chronic kidney disease, oral corticosteroid prescription, severe mental illness, COPD, rheumatoid arthritis, family history of premature CHD, and BMI.
•Only some of the ACC/AHA guideline's risk factors also appear in the machine-learning algorithms •Notably, diabetes was not selected by any of the four models •The algorithms selected new factors absent from existing risk tools: COPD, severe mental illness, and oral corticosteroid prescription, as well as biomarkers such as triglyceride level
  52. 52. Stephen F. Weng et al. PLoS One 2017. Can machine-learning improve cardiovascular risk prediction using routine clinical data? The net increase in non-cases correctly predicted compared to the baseline ACC/AHA model ranged from 191 non-cases for the random forest algorithm to 355 non-cases for the neural networks. Full details on classification analysis can be found in S2 Table. Discussion. Compared to an established AHA/ACC risk prediction algorithm, we found all machine-learning algorithms tested were better at identifying individuals who will develop CVD and those that will not. Unlike established approaches to risk prediction, the machine-learning methods used were not limited to a small set of risk factors, and incorporated more pre-existing
Table 4. Performance of the machine-learning (ML) algorithms predicting 10-year cardiovascular disease (CVD) risk derived from applying training algorithms on the validation cohort of 82,989 patients. Higher c-statistics results in better algorithm discrimination. The baseline (BL) ACC/AHA 10-year risk prediction algorithm is provided for comparative purposes.
Algorithms | AUC c-statistic | Standard Error* | 95% CI (LCL–UCL) | Absolute Change from Baseline
BL: ACC/AHA | 0.728 | 0.002 | 0.723–0.735 | —
ML: Random Forest | 0.745 | 0.003 | 0.739–0.750 | +1.7%
ML: Logistic Regression | 0.760 | 0.003 | 0.755–0.766 | +3.2%
ML: Gradient Boosting Machines | 0.761 | 0.002 | 0.755–0.766 | +3.3%
ML: Neural Networks | 0.764 | 0.002 | 0.759–0.769 | +3.6%
*Standard error estimated by jack-knife procedure [30]
https://doi.org/10.1371/journal.pone.0174944.t004
• All four machine-learning models were more accurate than the existing ACC/AHA guideline.
• Neural networks were the most accurate, with AUC = 0.764.
• "Had this model been used, an additional 355 cardiovascular events could have been correctly predicted, and potentially prevented."
• Accuracy could be improved further by applying deep learning.
• Additional risk factors, such as genetic information, could also be incorporated.
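The c-statistic in Table 4 is the AUC, which has a simple rank interpretation: the probability that a randomly chosen case is scored above a randomly chosen non-case. A self-contained sketch with toy risk scores (hypothetical, not the study's):

```python
def auc(labels, scores):
    """AUC via the Mann-Whitney formulation: P(score of a random positive
    exceeds score of a random negative), counting ties as half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

y = [1, 1, 1, 0, 0, 0, 0]                          # 1 = developed CVD
baseline = [0.9, 0.8, 0.3, 0.4, 0.2, 0.1, 0.05]    # toy baseline risk scores
improved = [0.9, 0.8, 0.6, 0.4, 0.2, 0.1, 0.05]    # ranks the third case higher
print(auc(y, baseline), auc(y, improved))
```

The improved model differs only in ranking one true case above all non-cases, which is exactly the kind of reordering that lifts the c-statistic.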
  53. 53. • In January 2018, Google published an AI that analyzes electronic medical records (EMR) to predict patient outcomes: • whether the patient will die during the hospital stay • whether the length of stay will be prolonged • whether the patient will be readmitted within 30 days of discharge • the diagnoses at discharge
 • Key feature of this study: scalability • Unlike previous studies, the EMR data were not selectively pre-processed; • the entire raw EMR was analyzed as a whole: UCSF and UCM (University of Chicago Medicine) • In particular, physicians' free-text notes, an unstructured data type, were also analyzed
  54. 54. ARTICLE OPEN Scalable and accurate deep learning with electronic health records Alvin Rajkomar 1,2 , Eyal Oren1 , Kai Chen1 , Andrew M. Dai1 , Nissan Hajaj1 , Michaela Hardt1 , Peter J. Liu1 , Xiaobing Liu1 , Jake Marcus1 , Mimi Sun1 , Patrik Sundberg1 , Hector Yee1 , Kun Zhang1 , Yi Zhang1 , Gerardo Flores1 , Gavin E. Duggan1 , Jamie Irvine1 , Quoc Le1 , Kurt Litsch1 , Alexander Mossin1 , Justin Tansuwan1 , De Wang1 , James Wexler1 , Jimbo Wilson1 , Dana Ludwig2 , Samuel L. Volchenboum3 , Katherine Chou1 , Michael Pearson1 , Srinivasan Madabushi1 , Nigam H. Shah4 , Atul J. Butte2 , Michael D. Howell1 , Claire Cui1 , Greg S. Corrado1 and Jeffrey Dean1 Predictive modeling with electronic health record (EHR) data is anticipated to drive personalized medicine and improve healthcare quality. Constructing predictive statistical models typically requires extraction of curated predictor variables from normalized EHR data, a labor-intensive process that discards the vast majority of information in each patient’s record. We propose a representation of patients’ entire raw EHR records based on the Fast Healthcare Interoperability Resources (FHIR) format. We demonstrate that deep learning methods using this representation are capable of accurately predicting multiple medical events from multiple centers without site-specific data harmonization. We validated our approach using de-identified EHR data from two US academic medical centers with 216,221 adult patients hospitalized for at least 24 h. In the sequential format we propose, this volume of EHR data unrolled into a total of 46,864,534,945 data points, including clinical notes. 
Deep learning models achieved high accuracy for tasks such as predicting: in-hospital mortality (area under the receiver operator curve [AUROC] across sites 0.93–0.94), 30-day unplanned readmission (AUROC 0.75–0.76), prolonged length of stay (AUROC 0.85–0.86), and all of a patient's final discharge diagnoses (frequency-weighted AUROC 0.90). These models outperformed traditional, clinically-used predictive models in all cases. We believe that this approach can be used to create accurate and scalable predictions for a variety of clinical scenarios. In a case study of a particular prediction, we demonstrate that neural networks can be used to identify relevant information from the patient's chart. npj Digital Medicine (2018) 1:18; doi:10.1038/s41746-018-0029-1 INTRODUCTION The promise of digital medicine stems in part from the hope that, by digitizing health data, we might more easily leverage computer information systems to understand and improve care. In fact, routinely collected patient healthcare data are now approaching the genomic scale in volume and complexity.1 Unfortunately, most of this information is not yet used in the sorts of predictive statistical models clinicians might use to improve care delivery. It is widely suspected that use of such efforts, if successful, could provide major benefits not only for patient safety and quality but also in reducing healthcare costs.2–6 In spite of the richness and potential of available data, scaling the development of predictive models is difficult because, for traditional predictive modeling techniques, each outcome to be predicted requires the creation of a custom dataset with specific variables.7 It is widely held that 80% of the effort in an analytic model is preprocessing, merging, customizing, and cleaning datasets. 
Traditional modeling approaches have dealt with this complexity simply by choosing a very limited number of commonly collected variables to consider.7 This is problematic because the resulting models may produce imprecise predictions: false-positive predictions can overwhelm physicians, nurses, and other providers with false alarms and concomitant alert fatigue,10 which the Joint Commission identified as a national patient safety priority in 2014.11 False-negative predictions can miss significant numbers of clinically important events, leading to poor clinical outcomes.11,12 Incorporating the entire EHR, including clinicians' free-text notes, offers some hope of overcoming these shortcomings but is unwieldy for most predictive modeling techniques. Recent developments in deep learning and artificial neural networks may allow us to address many of these challenges and unlock the information in the EHR. Deep learning emerged as the preferred machine learning approach in machine perception www.nature.com/npjdigitalmed
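The scalability claim rests on representing the whole record as one chronological sequence of events rather than hand-picked variables. A sketch of that flattening step, with invented events and an invented token scheme (the paper itself uses a FHIR-based representation):

```python
# Flatten heterogeneous, timestamped EHR events into one time-ordered token
# sequence per patient. Events and the "type:value" tokens are illustrative.
events = [
    {"t": 3, "type": "note",       "value": "pleural effusion"},
    {"t": 1, "type": "medication", "value": "vancomycin"},
    {"t": 2, "type": "lab",        "value": "WBC=14.2"},
]

def to_sequence(events):
    ordered = sorted(events, key=lambda e: e["t"])  # chronological order
    return [f'{e["type"]}:{e["value"]}' for e in ordered]

print(to_sequence(events))
```

Because no per-outcome variable selection happens here, the same sequence can feed models for mortality, readmission, length of stay, and discharge diagnoses alike.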
  55. 55. Figure 4: The patient record shows a woman with metastatic breast cancer with malignant pleural effusions and empyema. The patient timeline at the top of the figure contains circles for every time-step for which at least a single token exists for the patient, and the horizontal lines show the data-type. There is a close-up view of the most recent data-points immediately preceding a prediction made 24 hours after admission. We trained models for each data-type and highlighted in red the tokens which the models attended to – the non-highlighted text was not attended to but is shown for context. The models pick up features in the medications, nursing flowsheets, and clinical notes to make the prediction. • Using a time-aware neural network (TANN), • the parts of a metastatic breast cancer patient's EMR that the AI weighted most heavily were visualized; • the model indeed gave more weight to data closely associated with mortality risk. • Clinical notes: empyema, pleural effusions, and so on • Nursing records: administration of antibiotics such as vancomycin and metronidazole; high risk of pressure ulcers • 'PleurX', a brand of tube (catheter) inserted into the chest, was also identified as an important token
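The red highlighting in Figure 4 can be imitated with attention weights over note tokens: tokens whose softmax weight exceeds the uniform baseline get flagged. The logits below are invented; in the paper, such weights are learned by the model itself:

```python
import math

def softmax(xs):
    m = max(xs)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

tokens = ["pleural", "effusions", "empyema", "stable", "PleurX", "breakfast"]
logits = [2.1, 2.0, 2.6, 0.1, 1.9, -1.0]   # invented relevance scores

weights = softmax(logits)

# Highlight tokens weighted above the uniform level 1/len(tokens)
highlighted = [t for t, w in zip(tokens, weights) if w > 1 / len(tokens)]
print(highlighted)
```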
  56. 56. LETTERS https://doi.org/10.1038/s41591-018-0335-9 1 Guangzhou Women and Children's Medical Center, Guangzhou Medical University, Guangzhou, China. 2 Institute for Genomic Medicine, Institute of Engineering in Medicine, and Shiley Eye Institute, University of California, San Diego, La Jolla, CA, USA. 3 Hangzhou YITU Healthcare Technology Co. Ltd, Hangzhou, China. 4 Department of Thoracic Surgery/Oncology, First Affiliated Hospital of Guangzhou Medical University, China State Key Laboratory and National Clinical Research Center for Respiratory Disease, Guangzhou, China. 5 Guangzhou Kangrui Co. Ltd, Guangzhou, China. 6 Guangzhou Regenerative Medicine and Health Guangdong Laboratory, Guangzhou, China. 7 Veterans Administration Healthcare System, San Diego, CA, USA. 8 These authors contributed equally: Huiying Liang, Brian Tsui, Hao Ni, Carolina C. S. Valentim, Sally L. Baxter, Guangjian Liu. *e-mail: kang.zhang@gmail.com; xiahumin@hotmail.com Artificial intelligence (AI)-based methods have emerged as powerful tools to transform medical care. Although machine learning classifiers (MLCs) have already demonstrated strong performance in image-based diagnoses, analysis of diverse and massive electronic health record (EHR) data remains challenging. Here, we show that MLCs can query EHRs in a manner similar to the hypothetico-deductive reasoning used by physicians and unearth associations that previous statistical methods have not found. Our model applies an automated natural language processing system using deep learning techniques to extract clinically relevant information from EHRs. In total, 101.6 million data points from 1,362,559 pediatric patient visits presenting to a major referral center were analyzed to train and validate the framework. Our model demonstrates high diagnostic accuracy across multiple organ systems and is comparable to experienced pediatricians in diagnosing common childhood diseases. 
Our study provides a proof of concept for implementing an AI-based system as a means to aid physicians in tackling large amounts of data, augmenting diagnostic evaluations, and to provide clinical decision support in cases of diagnostic uncertainty or complexity. Although this impact may be most evident in areas where healthcare providers are in relative shortage, the benefits of such an AI system are likely to be universal. Medical information has become increasingly complex over time. The range of disease entities, diagnostic testing and biomarkers, and treatment modalities has increased exponentially in recent years. Subsequently, clinical decision-making has also become more complex and demands the synthesis of decisions from assessment of large volumes of data representing clinical information. In the current digital age, the electronic health record (EHR) represents a massive repository of electronic data points representing a diverse array of clinical information1–3 . Artificial intelligence (AI) methods have emerged as potentially powerful tools to mine EHR data to aid in disease diagnosis and management, mimicking and perhaps even augmenting the clinical decision-making of human physicians1 . To formulate a diagnosis for any given patient, physicians frequently use hypothetico-deductive reasoning. Starting with the chief complaint, the physician then asks appropriately targeted questions relating to that complaint. From this initial small feature set, the physician forms a differential diagnosis and decides what features (historical questions, physical exam findings, laboratory testing, and/or imaging studies) to obtain next in order to rule in or rule out the diagnoses in the differential diagnosis set. The most useful features are identified, such that when the probability of one of the diagnoses reaches a predetermined level of acceptability, the process is stopped, and the diagnosis is accepted. 
It may be possible to achieve an acceptable level of certainty of the diagnosis with only a few features without having to process the entire feature set. Therefore, the physician can be considered a classifier of sorts. In this study, we designed an AI-based system using machine learning to extract clinically relevant features from EHR notes to mimic the clinical reasoning of human physicians. In medicine, machine learning methods have already demonstrated strong performance in image-based diagnoses, notably in radiology2 , dermatology4 , and ophthalmology5–8 , but analysis of EHR data presents a number of difficult challenges. These challenges include the vast quantity of data, high dimensionality, data sparsity, and deviations Evaluation and accurate diagnoses of pediatric diseases using artificial intelligence Huiying Liang1,8 , Brian Y. Tsui 2,8 , Hao Ni3,8 , Carolina C. S. Valentim4,8 , Sally L. Baxter 2,8 , Guangjian Liu1,8 , Wenjia Cai 2 , Daniel S. Kermany1,2 , Xin Sun1 , Jiancong Chen2 , Liya He1 , Jie Zhu1 , Pin Tian2 , Hua Shao2 , Lianghong Zheng5,6 , Rui Hou5,6 , Sierra Hewett1,2 , Gen Li1,2 , Ping Liang3 , Xuan Zang3 , Zhiqi Zhang3 , Liyan Pan1 , Huimin Cai5,6 , Rujuan Ling1 , Shuhua Li1 , Yongwang Cui1 , Shusheng Tang1 , Hong Ye1 , Xiaoyan Huang1 , Waner He1 , Wenqing Liang1 , Qing Zhang1 , Jianmin Jiang1 , Wei Yu1 , Jianqun Gao1 , Wanxing Ou1 , Yingmin Deng1 , Qiaozhen Hou1 , Bei Wang1 , Cuichan Yao1 , Yan Liang1 , Shu Zhang1 , Yaou Duan2 , Runze Zhang2 , Sarah Gibson2 , Charlotte L. Zhang2 , Oulan Li2 , Edward D. 
Zhang2 , Gabriel Karin2 , Nathan Nguyen2 , Xiaokang Wu1,2 , Cindy Wen2 , Jie Xu2 , Wenqin Xu2 , Bochu Wang2 , Winston Wang2 , Jing Li1,2 , Bianca Pizzato2 , Caroline Bao2 , Daoman Xiang1 , Wanting He1,2 , Suiqin He2 , Yugui Zhou1,2 , Weldon Haw2,7 , Michael Goldbaum2 , Adriana Tremoulet2 , Chun-Nan Hsu 2 , Hannah Carter2 , Long Zhu3 , Kang Zhang 1,2,7 * and Huimin Xia 1 * NATURE MEDICINE | www.nature.com/naturemedicine Nat Med 2019 Feb • Analyzed 101.6 million EMR data points from 1.36 million pediatric patients • Deep learning-based natural language processing • Mimics physicians' hypothetico-deductive reasoning • An AI that diagnoses common diseases in pediatric patients
  57. 57. Nat Med 2019 Feb. For the categories of the information model (e.g., physical examination, laboratory testing, and PACS (picture archiving and communication systems) reports), the F1 scores exceeded 90% except in one instance, which was for categorical variables. Diagnoses were organized into a hierarchical decision tree, similar to how a human physician might evaluate a patient's features to achieve a diagnosis based on the same clinical data incorporated into the information model. (Fig. 2 diagram: respiratory, gastrointestinal, neuropsychiatric, genitourinary, and systemic generalized disease groups, each subdivided into specific diagnoses such as asthma, pneumonia, sinusitis, bronchiolitis, and encephalitis.) Fig. 2 | Hierarchy of the diagnostic framework in a large pediatric cohort. A hierarchical logistic regression classifier was used to establish a diagnostic system based on anatomic divisions. 
An organ-based approach was used, wherein diagnoses were first separated into broad organ systems, then subsequently divided into organ subsystems and/or into more specific diagnosis groups.
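The organ-based hierarchy in Fig. 2 can be sketched as classifiers chained from organ system down to a specific diagnosis. The rule functions below are hypothetical stand-ins for the per-node logistic regression models the paper trained:

```python
# Hypothetical two-level hierarchy: organ system first, then a diagnosis
# within that system. Real nodes would be trained classifiers, not rules.
def classify_organ_system(features):
    if "cough" in features or "rhinorrhea" in features:
        return "respiratory"
    return "systemic"

def classify_respiratory(features):
    if "wheezing" in features:
        return "asthma"
    return "acute upper respiratory infection"

def diagnose(features):
    system = classify_organ_system(features)
    if system == "respiratory":
        return system, classify_respiratory(features)
    return system, "unspecified"

print(diagnose({"cough", "wheezing"}))
print(diagnose({"fever"}))
```

The appeal of this design is that each node only has to separate a handful of sibling categories, mirroring how a clinician narrows a differential step by step.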
  58. 58. Nat Med 2019 Feb. The performance of our system was especially strong for the common conditions of acute upper respiratory infection and sinusitis, both of which were diagnosed with an accuracy of 0.95 between the machine-predicted diagnosis and the human physician-generated diagnosis. In contrast, dangerous conditions tend to be less common. The diagnostic hierarchy decision tree can be adjusted to what is most appropriate for the clinical situation. In terms of implementation, we foresee this type of AI-assisted diagnostic system being integrated into clinical practice in several ways. First, it could assist with triage procedures. 
For example, Table 2 | Illustration of diagnostic performance of our AI model and physicians
Disease conditions | Our model | Physician group 1 | Physician group 2 | Physician group 3 | Physician group 4 | Physician group 5
Asthma | 0.920 | 0.801 | 0.837 | 0.904 | 0.890 | 0.935
Encephalitis | 0.837 | 0.947 | 0.961 | 0.950 | 0.959 | 0.965
Gastrointestinal disease | 0.865 | 0.818 | 0.872 | 0.854 | 0.896 | 0.893
Group: 'Acute laryngitis' | 0.786 | 0.808 | 0.730 | 0.879 | 0.940 | 0.943
Group: 'Pneumonia' | 0.888 | 0.829 | 0.767 | 0.946 | 0.952 | 0.972
Group: 'Sinusitis' | 0.932 | 0.839 | 0.797 | 0.896 | 0.873 | 0.870
Lower respiratory | 0.803 | 0.803 | 0.815 | 0.910 | 0.903 | 0.935
Mouth-related diseases | 0.897 | 0.818 | 0.872 | 0.854 | 0.896 | 0.893
Neuropsychiatric disease | 0.895 | 0.925 | 0.963 | 0.960 | 0.962 | 0.906
Respiratory | 0.935 | 0.808 | 0.769 | 0.890 | 0.907 | 0.917
Systemic or generalized | 0.925 | 0.879 | 0.907 | 0.952 | 0.907 | 0.944
Upper respiratory | 0.929 | 0.817 | 0.754 | 0.884 | 0.916 | 0.916
Root | 0.889 | 0.843 | 0.863 | 0.908 | 0.903 | 0.912
Average F1 score | 0.885 | 0.841 | 0.839 | 0.907 | 0.915 | 0.923
We used the F1 score to evaluate the diagnosis performance across different groups (rows): our model, two junior physician groups (groups 1 and 2), and three senior physician groups (groups 3, 4, and 5) (see Methods section for description). We observed that our model performed better than the junior physician groups but slightly worse than the three experienced physician groups. Root is the first level of diagnosis classification.
• Across multiple organ systems, the model achieved
• higher accuracy than junior staff,
• but lower accuracy than senior staff.
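Every cell in Table 2 is an F1 score, the harmonic mean of precision and recall. A minimal sketch with made-up confusion counts (not the study's data):

```python
def f1(tp, fp, fn):
    """F1 = harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Made-up counts for one diagnosis group, for illustration only
print(round(f1(tp=92, fp=8, fn=8), 3))
```

Because F1 ignores true negatives, it suits diagnosis groups where most encounters belong to other categories.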
  59. 59. • Analysis of complex medical data and derivation of insights • Analysis and interpretation of medical imaging and pathology data • Monitoring of continuous data for prevention and prediction. The three types of medical artificial intelligence
  60. 60. Deep Learning http://theanalyticsstore.ie/deep-learning/
  61. 61. Artificial intelligence (expert systems, cybernetics, …) ⊃ machine learning (artificial neural networks, decision trees, support vector machines, …) ⊃ deep learning (convolutional neural networks (CNN), recurrent neural networks (RNN), …). The relationship between artificial intelligence and deep learning
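The "convolution" in convolutional neural networks is a sliding dot product; deep learning stacks many such filters and learns their values. A one-dimensional sketch with a fixed, hand-chosen filter standing in for a learned one:

```python
def conv1d(signal, kernel):
    """Slide the kernel over the signal and take dot products
    (cross-correlation, which is what deep learning frameworks
    call convolution)."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

edge_filter = [1, -1]   # responds to changes between neighboring values
print(conv1d([0, 0, 1, 1, 0], edge_filter))
```

A CNN simply learns many such kernels per layer, in two dimensions for images, instead of fixing them by hand.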
  62. 62. Facebook's DeepFace. Taigman, Y. et al. (2014). DeepFace: Closing the Gap to Human-Level Performance in Face Verification, CVPR'14. Figure 2. Outline of the DeepFace architecture. A front-end of a single convolution-pooling-convolution filtering on the rectified input, followed by three locally-connected layers and two fully-connected layers. Colors illustrate feature maps produced at each layer. The net includes more than 120 million parameters, where more than 95% come from the local and fully connected layers. very few parameters. These layers merely expand the input into a set of simple local features. The subsequent layers (L4, L5 and L6) are instead locally connected [13, 16], like a convolutional layer they apply a filter bank, but every location in the feature map learns a different set of filters. Since different regions of an aligned image have different local statistics, the spatial stationarity The goal of training is to maximize the probability of the correct class (face id). We achieve this by minimizing the cross-entropy loss for each training sample. If k is the index of the true label for a given input, the loss is: L = −log p_k. The loss is minimized over the parameters by computing the gradient of L w.r.t. the parameters and Human: 95% vs. DeepFace in Facebook: 97.35% Recognition Accuracy for Labeled Faces in the Wild (LFW) dataset (13,233 images, 5,749 people)
  63. 63. Google's FaceNet. Schroff, F. et al. (2015). FaceNet: A Unified Embedding for Face Recognition and Clustering. Human: 95% vs. FaceNet of Google: 99.63% Recognition Accuracy for Labeled Faces in the Wild (LFW) dataset (13,233 images, 5,749 people). Figure 6. LFW errors (false accepts and false rejects). This shows all pairs of images that were incorrectly classified on LFW. Only eight of the 13 errors shown here are actual errors; the other four are mislabeled in LFW. 5.7. Performance on Youtube Faces DB. We use the average similarity of all pairs of the first one hundred frames that our face detector detects in each video. This gives us a classification accuracy of 95.12%±0.39. Using the first one thousand frames results in 95.18%. Compared to [17] 91.4% who also evaluate one hundred frames per video we reduce the error rate by almost half. DeepId2+ [15] achieved 93.2% and our method reduces this error by 30%, comparable to our improvement on LFW. 5.8. Face Clustering. Our compact embedding lends itself to be used in order to cluster a user's personal photos into groups of people with the same identity. The constraints in assignment imposed by clustering faces, compared to the pure verification task, lead to truly amazing results. Figure 7 shows one cluster in a user's personal photo collection, generated using agglomerative clustering. It is a clear showcase of the incredible invariance to occlusion, lighting, pose and even age. Figure 7. Face Clustering. Shown is an exemplar cluster for one user. All these images in the user's personal photo collection were clustered together. 6. Summary. We provide a method to directly learn an embedding into an Euclidean space for face verification. This sets it apart from other methods [15, 17] who use the CNN bottleneck layer, or require additional post-processing such as concatenation of multiple models and PCA, as well as SVM classification. Our end-to-end training both simplifies the setup and shows that directly optimizing a loss relevant to the task at hand improves performance.
  64. 64. Baidu's face recognition AI Jingtuo Liu (2015). Targeting Ultimate Accuracy: Face Recognition via Deep Embedding. Recognition accuracy on the Labeled Faces in the Wild (LFW) dataset (13,233 images, 5,749 people): Human 95% vs. Baidu 99.77%. Pair-wise accuracy comparison (Table 3): IDL Ensemble Model 99.77%, IDL Single Model 99.68%, FaceNet 99.63%, DeepID3 99.53%, Face++ 99.50%, Facebook 98.37%, Learning from Scratch 97.73%, HighDimLBP 95.17%. • Of the 6,000 face pairs, Baidu's AI misjudged only 14. • On inspection, 5 of those 14 pairs turned out to have errors in the ground-truth labels — on those, the AI was actually correct (red box).
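The pair-verification arithmetic behind this slide can be sketched directly. A minimal illustration (not from the paper) of how LFW-style accuracy is computed from the error count, and how correcting the mislabeled pairs would change it:

```python
# LFW-style pair verification: each of the 6,000 pairs is labeled
# same/different, and accuracy is the fraction judged correctly.

def verification_accuracy(n_pairs: int, n_errors: int) -> float:
    """Fraction of pairs classified correctly."""
    return (n_pairs - n_errors) / n_pairs

# Baidu's reported result: 14 errors out of 6,000 pairs.
acc = verification_accuracy(6000, 14)
print(round(acc * 100, 2))  # 99.77

# If 5 of those 14 "errors" were in fact ground-truth labeling
# mistakes, the effective error count drops to 9.
acc_corrected = verification_accuracy(6000, 9)
print(round(acc_corrected * 100, 2))  # 99.85
```

This also shows why label noise matters at this performance level: a handful of mislabeled pairs shifts the reported accuracy by several hundredths of a percent.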
  65. 65. Topol, E. J. (2019). High-performance medicine: the convergence of human and artificial intelligence. Nature Medicine 25, 44–56. https://doi.org/10.1038/s41591-018-0300-7. This review gathers the evidence base for AI in medicine at three levels: for clinicians, predominantly via rapid, accurate image interpretation; for health systems, by improving workflow and potentially reducing medical errors; and for patients, by enabling them to process their own data to promote health. Deep neural networks can help interpret medical scans, pathology slides, skin lesions, retinal images, electrocardiograms, endoscopy, faces, and vital signs, with performance typically compared against physicians using the area under the ROC curve (AUC). In radiology, for example: a 121-layer CNN detecting pneumonia on over 112,000 chest X-rays reached an AUC of only 0.76; a Google algorithm making 14 diagnoses on the same image set scored AUCs from 0.63 (pneumonia) to 0.87 (heart enlargement or collapsed lung); a DNN detecting cancerous pulmonary nodules on chest X-rays from over 34,000 patients exceeded 17 of 18 radiologists; a DNN for wrist fractures raised emergency physicians' sensitivity from 81% to 92% and cut misinterpretation by 47%; and an algorithm analyzing over 37,000 head CT 3-D scans for 13 findings achieved an AUC of 0.73 while interpreting scans 150 times faster than radiologists (1.2 vs. 177 seconds). Table 1 (peer-reviewed publications of AI algorithms compared with doctors): Radiology/neurology — Titano et al. (CT head, acute neurological events), Arbabshirani et al. (CT head, brain hemorrhage), Chilamkurthy et al. (CT head, trauma), Nam et al. (CXR, metastatic lung nodules), Singh et al. (CXR, multiple findings), Lehman et al. (mammography, breast density), Lindsey et al. (wrist X-ray); Pathology — Ehteshami Bejnordi et al. (breast cancer), Coudray et al. (lung cancer), Capper et al. (brain tumors), Steiner et al. and Liu et al. (breast cancer metastases); Dermatology — Esteva et al. (skin cancers), Haenssle et al. (melanoma), Han et al. (skin lesions); Ophthalmology — Gulshan et al., Abramoff et al., and Kanagasingam et al. (diabetic retinopathy), Long et al. (congenital cataracts), De Fauw et al. (retinal diseases, OCT), Burlina et al. (macular degeneration), Brown et al. (retinopathy of prematurity), Kermany et al. (AMD and diabetic retinopathy); Gastroenterology — Mori et al. and Wang et al. (polyps at colonoscopy); Cardiology — Madani et al. and Zhang et al. (echocardiography).
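The review repeatedly scores algorithms against physicians by the AUC of the ROC curve. As a hedged illustration (the scores below are invented, and this is the plain Mann-Whitney rank formulation, not any specific paper's code), AUC can be computed as the probability that a randomly chosen positive case outscores a randomly chosen negative one:

```python
def roc_auc(scores_pos, scores_neg):
    """AUC = probability that a random positive scores higher than a
    random negative, with ties counting half (Mann-Whitney U form)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Illustrative model outputs for diseased vs. healthy cases.
pos = [0.9, 0.8, 0.75, 0.6]
neg = [0.7, 0.5, 0.4, 0.3]
print(roc_auc(pos, neg))  # 0.9375
```

An AUC of 0.5 means the model ranks cases no better than chance; 1.0 means every positive outscores every negative, which is why figures like the 0.76 and 0.73 cited above are described as far from optimal.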
  66. 66. Radiologist
  67. 67. • AI that reads hand X-ray images and calculates the patient's bone age • Conventionally, physicians read bone age by comparing the X-ray against standard reference images, e.g., with the Greulich-Pyle method • The AI instead finds sex- and age-specific patterns in a reference-standard image set, reports the similarity as probabilities, and provides a reference-image search • Can help physicians diagnose precocious puberty or growth delay
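The probability output described on this slide can be sketched as a softmax over discrete bone-age classes. Everything below — the class grid, the logit values, and the function names — is an illustrative assumption, not VUNO's actual model or API:

```python
# Hypothetical sketch: a classifier scores a hand X-ray against
# sex-specific bone-age classes, and a softmax turns those scores
# into similarity probabilities the physician can inspect.
import math

def softmax(logits):
    """Numerically stable softmax over a list of raw scores."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

# Illustrative logits for bone-age classes (in years) for one radiograph.
classes = [8.0, 9.0, 10.0, 11.0, 12.0]
logits = [0.2, 1.5, 3.0, 1.1, 0.1]
probs = softmax(logits)

# Report the top matches, as a reference-image search UI might.
ranked = sorted(zip(classes, probs), key=lambda t: -t[1])
for age, p in ranked[:3]:
    print(f"bone age {age:.1f} y: {p:.2f}")
```

The point of returning probabilities rather than a single age is visible here: the physician sees how confident the model is and which neighboring reference classes are also plausible.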
  68. 68. Press release: first approval of a domestically developed AI-based medical device in Korea — "AI technology used to read bone age." The Ministry of Food and Drug Safety (Commissioner Ryu Young-jin) announced that it has approved VUNO Med-BoneAge, medical image analysis software applying AI technology, developed by the Korean medical device company VUNO. The approved software analyzes X-ray images with AI, presents the patient's bone age, and helps physicians use that information to diagnose precocious puberty or growth delay. It automates what physicians previously did manually — comparing the patient's left-hand X-ray against reference-standard images — thereby shortening reading time. The product was selected for case-by-case support, from clinical trial design through approval, under the guideline for approval and review of medical devices applying big data and AI. In the analysis, the AI recognizes patterns in the acquired X-ray and expresses, as probabilities, its similarity to sex-specific bone-age models built from reference-standard images (separate male and female sets); the physician then combines the probability values with information such as hormone levels to diagnose precocious puberty or growth delay. Clinical trials compared the software's accuracy against bone ages determined by physicians, and the manufacturer designed the product to keep narrowing any gap with physicians by periodically updating the images from which the AI learns. Clinical trial approvals for AI-based medical devices to date include, besides VUNO Med-BoneAge, software that classifies cerebral infarction types from MRI and software that assists lung-nodule diagnosis on chest X-ray images. The MFDS added that it runs tailored support programs covering new medical device development from R&D through clinical trials and approval, and that it will continue to actively support the development of advanced medical devices.
  69. 69. Disclosure: I serve as an advisor to VUNO and hold an equity stake in the company.
  70. 70. Kim JR, Shim WH, Yoon HM, Hong SH, Lee JS, Cho YA, Kim S. Computerized Bone Age Estimation Using Deep Learning-Based Program: Evaluation of the Accuracy and Efficiency. AJR Am J Roentgenol. 2017 Dec;209(6):1374-1380. A Greulich-Pyle method-based deep-learning system estimated bone age from left-hand radiographs of 200 patients (3–17 years old) under three conditions: first-rank bone age (software only), computer-assisted bone age (two radiologists with software assistance), and Greulich-Pyle atlas-assisted bone age (two radiologists with the atlas only). Software-only first-rank bone ages showed a 69.5% concordance rate and significant correlation with the reference bone age (r = 0.992, p < 0.001); with software assistance, concordance rose from 63.0% to 72.5% for reviewer 1 and from 49.5% to 57.5% for reviewer 2, while reading times fell by 18.0% and 40.0%, respectively. • Number of patients: 200 • Reference: consensus of two experienced pediatric radiologists (18 and 4 years of experience) • Physician A: board-certified radiologist subspecialized in pediatric imaging (over 500 reading cases) • Physician B: second-year radiology resident (one day of training on the method plus 20 readings) • AI: VUNO's bone-age deep learning system
  71. 71. AI vs. physicians — accuracy (%): AI 69.5%, Physician A 63%, Physician B 49.5% (Physician A: radiology fellow, pediatric imaging subspecialty; Physician B: second-year radiology resident). AJR Am J Roentgenol. 2017 Dec;209(6):1374-1380. • Number of patients: 200 • Physician A: board-certified radiologist subspecialized in pediatric imaging (over 500 reading cases) • Physician B: second-year radiology resident (one day of training on the method plus 20 readings) • Reference: consensus of two experienced pediatric radiologists (18 and 4 years of experience) • AI: VUNO's bone-age deep learning. Synergy between human physicians and AI in bone-age reading. Digital Healthcare Institute Director, Yoon Sup Choi, PhD yoonsup.choi@gmail.com
  72. 72. AI vs. physicians, and physicians + AI — accuracy (%): AI 69.5%, Physician A 63%, Physician B 49.5%; with AI assistance, Physician A + AI 72.5%, Physician B + AI 57.5% (Physician A: radiology fellow, pediatric imaging subspecialty; Physician B: second-year radiology resident). AJR Am J Roentgenol. 2017 Dec;209(6):1374-1380. • Number of patients: 200 • Physician A: board-certified radiologist subspecialized in pediatric imaging (over 500 reading cases) • Physician B: second-year radiology resident (one day of training on the method plus 20 readings) • Reference: consensus of two experienced pediatric radiologists (18 and 4 years of experience) • AI: VUNO's bone-age deep learning. Synergy between human physicians and AI in bone-age reading. Digital Healthcare Institute Director, Yoon Sup Choi, PhD yoonsup.choi@gmail.com
  73. 73. Total reading time (min): Physician A 188 min without AI → 154 min with AI (saving 18% of time); Physician B 180 min → 108 min (saving 40% of time). Using AI for bone-age reading can also reduce reading time. • Number of patients: 200 • Physician A: board-certified radiologist subspecialized in pediatric imaging (over 500 reading cases) • Physician B: second-year radiology resident (one day of training on the method plus 20 readings) • Reference: consensus of two experienced pediatric radiologists (18 and 4 years of experience) • AI: VUNO's bone-age deep learning. AJR Am J Roentgenol. 2017 Dec;209(6):1374-1380. Digital Healthcare Institute Director, Yoon Sup Choi, PhD yoonsup.choi@gmail.com
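The efficiency figures on these last slides follow from simple percentage arithmetic. A small sketch using the study's reported numbers (the helper function name is illustrative):

```python
# Two efficiency metrics from the AJR bone-age study: percent reading
# time saved with AI assistance, and concordance-rate gains.

def percent_saved(t_without: float, t_with: float) -> float:
    """Percent reduction in reading time when AI assistance is used."""
    return (t_without - t_with) / t_without * 100

# Physician A: 188 min -> 154 min; Physician B: 180 min -> 108 min.
print(round(percent_saved(188, 154)))  # 18
print(round(percent_saved(180, 108)))  # 40

# Concordance-rate gains with AI assistance (percentage points).
print(72.5 - 63.0)  # 9.5
print(57.5 - 49.5)  # 8.0
```

Note that the less experienced reader (Physician B) gained the larger time saving, consistent with the slide's point that AI assistance helps both readers but in different proportions.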
