Artificial intelligence, machine learning, neural networks. We’re in the midst of a wave of excitement around AI unlike anything seen in decades. But previous periods of inflated expectations led to troughs of disappointment. This time is (mostly) different.
Applications of AI such as predictive analytics are already cutting costs and improving the reliability of industrial machinery. Pattern recognition can equal or exceed the ability of human experts in some domains. AI is becoming an increasingly important commercial technology. (Although it’s also easy to look at wins in specific domains and generalize to an overly optimistic view of AI writ large.)
In this session, Red Hat’s Technology Evangelist for Emerging Technology will examine the AI landscape, identifying the domains and approaches that have seen genuine advances and why. He’ll also discuss some of the specific ways in which both organizations and individuals are getting up to speed with AI today.
Given at BCBS NC Tech Summit, Raleigh, 2018
2. WHO AM I?
● Evangelist for emerging technologies
and practices at Red Hat
● Co-author of From Pots and Vats to
Programs and Apps
● Former IT industry analyst
● Former big system guy
● Website: http://www.bitmasons.com
3. CONVERGENCE OF PHYSICAL & DIGITAL
[Diagram: IoT, AI, and blockchain sit at the convergence of the consumerization of IT, the digitization of OT, and consumer IoT/mobility. Captions: “Software is eating the world”; “Data is the new oil.”]
4. AI TODAY: MOORE’S LAW + OPEN SOURCE
● Can collect, store, and process huge
quantities of data
● Massive distributed processing
capability/GPU/cloud
● Open source platforms, tools, and
development model
= Complex neural networks
8. PREHISTORY
● Philosophy: logic, methods of reasoning
● Mathematics: algorithms, probability
● Psychology: behaviorism, cognitive psychology
● Economics: utility/game/decision theory, operations research
● Linguistics: grammar, knowledge representation
● Control theory: objective functions, feedback loops
9. HISTORY (partially based on Russell & Norvig)
● 1956: Dartmouth Summer workshop
● 1952-1969: Early enthusiasm; Lisp, formal logic vs. working models
● 1966-1973: Reality sets in; lack of real-world context, computing power limits
● 1969-1979: Knowledge-based systems; expert systems, language
● Early-mid 1980s: Becomes an industry; ambitious goals, neural nets return
● Late 1980s: AI winter; collapse of the Lisp machine market, failures of expert systems, the Fifth Generation project, etc.
● 1995-: Intelligent agents; modular approaches, Internet cross-pollination
● 21st century: Data, data, data; machine learning, deep learning
10. RUSSELL & NORVIG
● Thinking Humanly: cognitive modeling, informed by neurophysiology
● Thinking Rationally: logicist tradition; intelligence based on logical relationships
● Acting Humanly: Turing Test; computer vision, robotics, language, reasoning
● Acting Rationally: rational agent approach; achieve the best or best expected outcome
18. HOW WE GOT HERE
● 1969 Perceptrons (Minsky/Papert): single-layer
networks can compute only simple (linearly separable) functions
● Backpropagation (Hinton and others)
provided way to train multi-level networks
(1986 based on earlier research)
● Became practical computationally and with
sufficient data ~2010s
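The jump described above can be made concrete with a minimal sketch (my illustration, not from the slides): a tiny two-layer network trained by backpropagation on XOR, the classic function a single-layer perceptron cannot learn.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# XOR is not linearly separable, so a single-layer perceptron cannot learn it.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two-layer network: 2 inputs -> 4 hidden units -> 1 output
W1 = rng.normal(0.0, 1.0, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0.0, 1.0, (4, 1)); b2 = np.zeros(1)

lr = 2.0
for _ in range(10_000):
    h = sigmoid(X @ W1 + b1)             # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)  # backpropagate the error...
    d_h = (d_out @ W2.T) * h * (1 - h)   # ...layer by layer (chain rule)
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

pred = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
print(pred.round().ravel())
```

The backward pass is the whole trick: the output-layer error is pushed through the weights to assign blame to hidden units, which is what Minsky and Papert’s single-layer analysis had no way to do.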
20. I’M SIMPLIFYING
Source: http://www.asimovinstitute.org/neural-network-zoo/
“One problem with drawing them as node maps: it
doesn’t really show how they’re used. For example,
variational autoencoders (VAE) may look just like
autoencoders (AE), but the training process is
actually quite different. The use-cases for trained
networks differ even more, because VAEs are
generators, where you insert noise to get a new
sample. AEs simply map whatever they get as
input to the closest training sample they
“remember”. I should add that this overview is in no
way clarifying how each of the different node types
work internally (but that’s a topic for another day).”
21. AMAZING STUFF SINCE ~2010
● Voice recognition: Siri, Alexa, Cortana, Google
● IBM Watson wins Jeopardy
● Google DeepMind's AlphaGo defeats Lee Sedol
4–1
● Libratus wins against four top players at no-limit
Texas hold 'em
● Autonomous driving research
● Ubiquitous bots
● Lots of unsexy predictive analytics, trading,
optimization, and analysis
22. “While HIMSS[18] doesn’t look exactly like
CES, it’s getting close – and it’s totally
unrecognizable from the HIMSS of 10
years ago. So it’s fun to imagine what
HIMSS28 will look like.”
Mimi Grant,
Adaptive Business Leaders (ABL) Organization
23. HEALTHCARE AREAS OF INVESTIGATION
● Drug discovery
● Treatment options based on current research
● Diagnosis (imaging, correlation)
● E.g. Early Colorectal Cancer Detected by Machine Learning Model Using
Gender, Age, and Complete Blood Count Data (Kaiser Permanente
Northwest, 2017)
28. GREAT FUZZY PATTERN RECOGNIZERS BUT...
● Almost all value in AI today is supervised learning (Andrew Ng)
● Fundamentally a statistical learning technique
● Dependent on huge training sets
● Learning model is effectively classical conditioning
● Sensitive to small changes
● No physical world context
● Difficult to explain results
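The “statistical learning” and “sensitive to small changes” points above can be shown with a deliberately tiny supervised classifier. This 1-nearest-neighbor sketch (my example, with made-up toy data) is pattern association over a training set, nothing more.

```python
import numpy as np

# Supervised learning reduced to its statistical core: 1-nearest-neighbor.
# The "model" is just the stored, labeled training set; a prediction is an
# association with past examples, with no physical-world context and no
# understanding of what the labels mean.
X_train = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 0.8]])
y_train = np.array(["cat", "cat", "dog", "dog"])

def predict(x):
    dists = np.linalg.norm(X_train - np.asarray(x), axis=1)
    return y_train[np.argmin(dists)]

print(predict([0.05, 0.10]))  # "cat": nearest labeled examples are cats
# Sensitivity to small changes: these two inputs differ by only 0.05 per
# feature, yet the predicted label flips.
print(predict([0.55, 0.45]))  # "dog"
print(predict([0.50, 0.40]))  # "cat"
```

Real deep networks generalize far better than this, but the dependence on the training set and the brittleness near decision boundaries are the same in kind.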
29. HOW DID YOU ARRIVE AT THAT ANSWER?
http://nautil.us/issue/40/learning/is-artificial-intelligence-permanently-inscrutable
30. CLINICAL AND PATIENT DECISION SUPPORT SOFTWARE
DRAFT GUIDANCE FOR INDUSTRY AND FDA STAFF
DECEMBER 2017
32. OTHER HEALTHCARE CHALLENGES
● Long-term ROIs
● “Healthcare data sucks.” (Dr. Mark Weisman)
● “Black and white ‘truth’ is rare in medicine.” (Dr. Lynda Chin)
● Privacy/sharing data
● Resistance to adoption
● Training algorithms require domain expertise
33. WHAT’S THE COMPUTATIONAL BASIS FOR:
● Learning concepts
● Judging similarity
● Inferring causal connections
● Forming perceptual representations
● Learning word meanings and syntactic principles in natural
language
● Predicting the future
● Developing physical world intuitions
?
36. HOW RED HAT IS OPTIMIZING
● Integration and access to specialized hardware resources such as
GPUs, FPGAs, and InfiniBand
● Specialized features such as exclusive cores, CPU pinning
strategies, hugepages, and NUMA
● Optimizing resource access and efficiency with robust
scheduling, prioritization, and preemption capabilities
● Performance benchmarking and tuning
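CPU pinning, one of the features listed above, can be exercised directly from user space on Linux. This sketch (my illustration of the underlying mechanism, not how Red Hat’s platforms implement exclusive cores) pins the current process to a single core via the kernel’s sched_setaffinity call.

```python
import os

# Restrict the current process to one CPU. Exclusive cores in a scheduler
# build on this same mechanism; hugepages and NUMA binding are configured
# separately (e.g. via hugetlbfs and mbind/numactl).
if hasattr(os, "sched_setaffinity"):          # Linux-only API
    allowed = os.sched_getaffinity(0)         # CPUs we may currently run on
    os.sched_setaffinity(0, {min(allowed)})   # pin to the lowest-numbered CPU
    print(os.sched_getaffinity(0))            # e.g. {0}
```

Pinning keeps a latency-sensitive workload’s caches warm and avoids cross-core migration, which is why schedulers expose it alongside NUMA-aware placement.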