If Artificial Intelligence (AI) is a black box, how can a human comprehend and trust the results of Machine Learning (ML) algorithms? Explainable AI (XAI) tries to shed light into that black box so humans can trust what is going on. Our speaker, Meg Dickey-Kurdziolek, is currently a UX Researcher for Google Cloud AI and Industry Solutions, where she focuses her research on Explainable AI and Model Understanding. Recording of the presentation: https://youtu.be/6N2DNN_HDWU
4. [Diagram: the Vertex AI platform. Applications: Vision and Video, Conversation, Language, Structured Data. Core: Notebooks, Data Labeling, Experiments, Metadata, AutoML, Training, Feature Store, Vizier (Optimization), Prediction, AI Accelerators, Hybrid AI, Deep Learning Env, Explainable AI, Pipelines, Continuous Monitoring.]
5. Human Factors of Explainable AI
Presentation Outline
01 "The Basics" - the basics of XAI: description, vocabulary, and prevailing techniques
02 Why is XAI important? - a discussion of why XAI is essential for the growth, adoption, and engineering of ML
03 What makes designing XAI hard? - a discussion of what makes designing effective XAI tools hard. In particular, we'll deep dive on the different audiences for ML technologies and how they interact with explanations.
04 We've actually been explaining complex things for a long time - we'll take a look at an analogy of explaining complex weather data to end-users
05 The UX of XAI - recommendations on how to think about and design XAI for your audience
06 Thank you! - a recap of what we talked about today and some resources for you if you want to learn more
7. What is Explainable AI?
Explainable AI is the endeavor to make an ML model more understandable to humans.
8. What do "transparent" and "opaque" mean?
One set of definitions for transparent and opaque:
● Transparent - a system that reveals its internal mechanisms.
● Opaque - a system that does not reveal its internal mechanisms.
From Interpretable Machine Learning: A Guide for Making Black Box Models Explainable by Christoph Molnar
9. What do "transparent" and "opaque" mean?
Another set of definitions for transparent and opaque:
● Transparent - a model is considered transparent if it is understandable by itself; a human can understand its function without any need for post-hoc explanation.
● Opaque - the opposite of a transparent model. Opaque models are not readily understood by humans; to be interpretable, they require post-hoc explanations.
From Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI by Alejandro Barredo Arrieta et al.
10. What makes a model transparent?
A (problematic) set of criteria for transparency:
➔ Simulatable - a person can contemplate the model and, "given enough scratch paper," could step through the procedure and arrive at the same prediction for a given input.
➔ Decomposable - each part of the model (each input, parameter, and calculation) admits an intuitive explanation.
➔ Algorithmically transparent - the training process used to develop the model is well understood.
From The Mythos of Model Interpretability by Zachary C. Lipton
[Speech bubble on the slide: "I could step through this DNN if I had enough scratch paper…"]
11. Models generally thought to be transparent:
● Linear/logistic regression
● Decision trees ← opaque if tree is complicated/very deep
● K-nearest neighbors
● Rule-based Learners
● General additive models
● Bayesian models
● Support vector machines ← opaque if data is messy/complicated
Models generally thought to be opaque:
● Tree ensembles ← transparent if trees are simple
● Deep Neural Networks (DNNs)
● Reinforcement Learners & Agents
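To make the transparent/opaque contrast concrete, here is a minimal sketch of what "simulatable" looks like in practice, assuming scikit-learn and its bundled iris dataset (illustrative choices, not from the talk): a shallow decision tree prints as rules a person could follow by hand, something a DNN cannot offer.

```python
# A minimal transparency sketch, assuming scikit-learn is installed.
# The iris dataset and shallow tree are illustrative, not from the talk.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
X, y = data.data, data.target

# A depth-3 tree is "simulatable": a person could follow the printed
# rules by hand and reproduce any prediction.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=data.feature_names))
```

The same inspection is impossible for a deep network: its millions of parameters defeat the "enough scratch paper" criterion in practice, which is what makes the criterion problematic.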
13. What do "interpretable" and "explainable" AI mean?
Definitions for Interpretability, Explainability, and Comprehensibility:
● Interpretability - a passive characteristic of an ML system. If an ML system is interpretable, then you are able to explain, or provide the meaning of, an ML process in human-understandable terms.
● Explainability - an action, procedure, or interface between humans and an ML system that makes it comprehensible to humans.
● Comprehensibility - the ability of a learning algorithm to represent its learned knowledge in a human-understandable fashion.
From Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI by Alejandro Barredo Arrieta et al.
14. XAI Techniques
● Explanation by simplification - provides explanation through rule extraction and distillation [e.g., Local Interpretable Model-Agnostic Explanations (LIME)]
● Feature relevance explanation - provides explanation by ranking or measuring the influence each feature has on a prediction output [e.g., Shapley values]
● Visual explanation - provides explanation through visual representation of predictions [e.g., Layer-wise Relevance Propagation (LRP)]
Image from Explaining Machine Learning Models: A Non-Technical Guide to Interpreting SHAP Analyses by Aidan Cooper
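As a concrete taste of feature-relevance explanation, here is a minimal sketch using the open-source shap package; the dataset and model are placeholder choices for illustration, not anything used in the talk.

```python
# A minimal feature-relevance sketch using the shap package (pip install shap).
# The dataset and model are illustrative placeholders.
import shap
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import RandomForestRegressor

data = fetch_california_housing()
X, y = data.data[:500], data.target[:500]  # small slice to keep it quick

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# Shapley values attribute each prediction to per-feature contributions.
explainer = shap.Explainer(model, X, feature_names=data.feature_names)
shap_values = explainer(X[:50])

# One local explanation: how each feature pushed this prediction up or down.
shap.plots.waterfall(shap_values[0])
```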
15. XAI Techniques
● Explanations by Concept - provides explanation through concepts, which can be user-defined (e.g., "stripes" or "spots" in image data) [e.g., Testing with Concept Activation Vectors (TCAV)]
● Explanations by Example - provides explanations by analogy, through surfacing proponents/opponents from the data [e.g., example-based explanations]
Image from Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV) by Been Kim et al.
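Here is a deliberately simplified sketch of the example-based idea, assuming scikit-learn. Real proponent/opponent methods are more involved (influence-based, for instance); nearest neighbors stands in only to convey explanation by analogy.

```python
# A simplified example-based explanation sketch (assuming scikit-learn).
# Real proponent/opponent methods use e.g. influence functions; nearest
# neighbors is used here only to convey explanation by analogy.
from sklearn.datasets import load_digits
from sklearn.neighbors import NearestNeighbors

X, y = load_digits(return_X_y=True)

# Index the training set so we can surface the examples most like a query.
nn = NearestNeighbors(n_neighbors=3).fit(X)

# "Explain" one prediction by showing similar training examples and labels.
dist, idx = nn.kneighbors(X[0:1])
for d, i in zip(dist[0], idx[0]):
    print(f"training example {i} (label {y[i]}) at distance {d:.1f}")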
17. Model Agnostic vs. Model Specific
Model Agnostic explanations can work with
any type of ML model.
Examples:
● Local Interpretable Model-Agnostic
Explanations (LIME)
● Shapley Values
● Example-Based Explanations
Model Specific explanation techniques only
work with a specific model type.
Examples:
● Simplified Tree Ensemble Learner (STEL)
● DeepLIFT
● Layer-wise Relevance Propagation (LRP)
● Testing with Concept Activation Vectors
(TCAV)
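To show what "model agnostic" means in practice, here is a minimal LIME sketch (assuming the lime package; the dataset and model are illustrative). LIME only calls the model's predict_proba, so any classifier could be swapped in underneath.

```python
# A minimal model-agnostic sketch using the lime package (pip install lime).
# LIME only touches predict_proba, so the classifier underneath could be
# a random forest, an SVM, or a deep net.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_wine
from sklearn.ensemble import RandomForestClassifier

data = load_wine()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)

# Explain one prediction by fitting a simple surrogate model locally.
exp = explainer.explain_instance(data.data[0], model.predict_proba, num_features=5)
print(exp.as_list())  # top features and their local weights
```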
18. Local, Cohort, and Global Explanations
XAI methods also provide explanations at different levels of granularity:
● Local explanations - provide an explanation for a single prediction
● Cohort explanations - provide an explanation for a cohort, or subset, of predictions
● Global explanations - provide an explanation for all predictions, or for the model's decision-making process itself
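A small sketch of the three granularities, assuming the shap_values object from the earlier feature-relevance sketch: the same per-prediction attributions can be read locally, averaged over a cohort, or aggregated globally.

```python
# Sketch: one set of Shapley values read at three granularities.
# Assumes the `shap_values` object from the feature-relevance sketch above
# (an Explanation of shape [rows, features]).
import numpy as np

values = shap_values.values  # per-row, per-feature attributions

# Local: why the model predicted what it did for a single row.
print("local (row 0):", values[0])

# Cohort: mean absolute attribution over a subset of rows.
print("cohort (rows 0-9):", np.abs(values[:10]).mean(axis=0))

# Global: mean absolute attribution over all explained rows approximates
# overall feature importance.
print("global:", np.abs(values).mean(axis=0))
```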
20. “The danger is in creating and
using decisions that are not
justifiable, legitimate, or that
simply do not allow obtaining
detailed explanations of their
behavior.”
(Arrieta et al., 2020)
21. Why is XAI important?
Explainability is important to the development, assessment, optimization, and troubleshooting of ML systems.
● Identifying and troubleshooting illegitimate conclusions
○ Deficiencies in the training data, and data "skews" or shifts, can result in illegitimate conclusions. Without knowing the "why" behind a prediction, these are difficult to diagnose.
● Feature engineering and data pipeline optimization
○ Removing features/data that are unnecessary for achieving the desired model performance
22. Why is XAI important?
Explainability is important to assessing fairness and addressing bias.
● Identifying bias in datasets/models
○ Models can arrive at unfair, discriminatory, or biased decisions. Without a means of understanding the underlying decision making, these issues are difficult to assess.
23. Why is XAI important?
Explainability is essential for end-user adoption and the ultimate utility of ML-driven applications.
● Trust and adoption
○ Humans are reluctant to adopt or trust technologies they do not understand.
● Utility requires understanding
○ In cases where humans use the technology to make critical decisions, they require explanations in order to effectively exercise their own judgment.
24. Local, Cohort, and Global explanations across the ML Lifecycle
Image from A Look Into Global, Cohort and Local Model Explainability by Aparna Dhinakaran
27. Why is XAI hard?
Explanations need to be usable for an intended audience. Depending on who the audience is, the explanation may need to account for different domain expertise, cognitive abilities, and contexts of use.
30. “One analogous case to explainable AI for human-to-human interaction is that of a forensic scientist explaining
forensic evidence to laypeople (e.g., members of a jury). Currently, there is a gap between the ways forensic
scientists report results and the understanding of those results by laypeople. Jackson et al. 2015 extensively studied the
types of evidence presented to juries and the ability for juries to understand that evidence. They found that most types
of explanations from forensic scientists are misleading or prone to confusion. Therefore, they do not meet our
internal criteria for being “meaningful.” A challenge for the field is learning how to improve explanations, and the
proposed solutions do not always have consistent outcomes.”
- Phillips et al. 2021, Four Principles of Explainable Artificial Intelligence (NIST)
31. Human Bias
● Anchoring Bias - relying too heavily on the first piece of information we are given about a topic. We interpret newer information from the reference point of our anchor, instead of seeing it objectively.
● Availability Bias - the tendency to believe that examples or cases that come readily to mind are more representative of a population than they actually are.
“When we become anchored to a
specific figure or plan of action, we
end up filtering all new information
through the framework we initially
drew up in our head, distorting our
perception. This makes us reluctant
to make significant changes to our
plans, even if the situation calls for
it.”
- Why we tend to rely heavily upon the first
piece of information we receive
32. Human Bias
● Confirmation Bias - seeking and favoring information that supports one's prior beliefs. Can result in unjustified trust and mistrust.
● Unjustified Trust/"Over-trust" - end-users may have a higher degree of trust than they should (or "over-trust") when explanations are presented in different formats.
“They found that participants
tended to place “unwarranted” faith
in numbers. For example, the AI
group participants often ascribed
more value to mathematical
representations than was justified,
while the non-AI group participants
believed the numbers signaled
intelligence — even if they couldn’t
understand the meaning.”
- Even experts are too quick to rely on AI
explanations
39. Meteorologist Interviews

dBZ    Rain Rate (in/hr)
65     16+
60     8.00
55     4.00
52     2.50
47     1.25
41     0.50
36     0.25
30     0.10
20     Trace
< 20   No rain

What does a quarter inch of rain per hour feel like?
"That's a solid rain. But not a downpour. You would want an umbrella, but you'd be okay if you needed to make a quick dash to your car or something."
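As an aside, the dBZ-to-rain-rate column above is consistent with the standard Marshall-Palmer Z-R relationship (Z = 200 * R^1.6, with R in mm/hr); the sketch below reproduces the table's values from that formula. This is an inference about how such tables are derived, not something stated in the talk.

```python
# Sketch: dBZ to rain rate, assuming the standard Marshall-Palmer
# Z-R relationship Z = 200 * R**1.6 (R in mm/hr). This is an inference
# about how the table above was derived, not from the talk itself.
MM_PER_INCH = 25.4

def dbz_to_in_per_hr(dbz: float) -> float:
    z = 10 ** (dbz / 10)              # dBZ is 10 * log10(Z)
    r_mm_hr = (z / 200) ** (1 / 1.6)  # invert Z = 200 * R**1.6
    return r_mm_hr / MM_PER_INCH

for dbz in (65, 60, 55, 52, 47, 41, 36, 30):
    print(f"{dbz} dBZ ~ {dbz_to_in_per_hr(dbz):.2f} in/hr")
```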
40. User Interviews
What do you think you'd experience in a rainstorm that looked like this?
"I think that if I was right in the middle of it, in that orange spot right there, I would not want to be outside. I bet it would be raining real heavy. Might flood the storm drains."
41. Lining up the expert and non-expert experience
[Figure: the dBZ-to-rain-rate table from slide 39, annotated to line up the meteorologist experience against the end-user experience. Callouts mark a big jump around ~35 dBZ and a big difference around ~55 dBZ.]
43. New radar palette is launched
[Images: the old palette vs. the new palette]
"Absolutely fantastic! I abandoned WU a while back because of the 'dramatic imagery' that didn't match reality on the ground / in the field; and so I am very happy that feedback was heard, that you studied the complaint and data, as well as communicated with pros, observers and end users. Time to bookmark and load the WU apps again; and test it out."
- User feedback on the Radar Palette Improvements blog post (2014)
45. “The property of ‘being an explanation’
is not a property of statements, it is an
interaction. What counts as an
explanation depends on what the user
needs, what knowledge the user
already has, and especially the user's
goals.”
(Hoffman et al., 2019)
46. Designing explanations to meet user goals
How can we help end-users meet their goals and make better decisions?
47. Designing explanations for better decision making
Designing Theory-Driven User-Centric Explainable AI (Wang et al., 2019)
48. Designing explanations for better decision making
Designing Theory-Driven User-Centric Explainable AI (Wang et al., 2019)
49. How can we build understanding through
interaction?
Designing explanations for interaction
51. Designing explanations for interaction
Intuitively Assessing ML Model Reliability through Example-Based Explanations and Editing Model Inputs (Suresh et al., 2022)
52. “Grounding interpretability in real examples,
facilitating comparison across them, and
visualizing class distributions can help users
grasp the model’s uncertainty and connect it to
relevant challenges of the task.
Moreover, by looking at and comparing real
examples, users can discover or ask questions
about limitations of the data — and doing so
does not damage trust, but can play an
important role in building it.”
(Suresh et al., 2022)
53. XAI = interaction; Interaction Design is a cycle
[Diagram: the interaction design cycle, repeating through Discover → Ideate → Create → Evaluate]
54. UX of XAI: User-centric evaluation of XAI methods
● Understandability - Does the XAI method provide explanations in human-readable terms, with sufficient detail to be understandable to the intended end-users?
● Satisfaction - Does the XAI method provide explanations such that users feel they understand the AI system and are satisfied?
● Utility - Does the XAI method provide explanations such that end-users can make decisions and take further action on the prediction?
● Trustworthiness - After interacting with the explanation, do users trust the AI model's prediction to an appropriate degree?
There are published "best practice" guides and measurement scales for all of these.
55. When should UX get involved in ML development?
[Figure: an ML project lifecycle diagram with several stages annotated "Here," "Here," "Here too," and "Here": UX should be involved at multiple points throughout, not at a single stage.]
Image from Organizing machine learning projects: project management guidelines by Jeremy Jordan
57. Resources
Learn more about XAI
● Explaining the Unexplainable in UXPA Magazine
● Introduction to Vertex Explainable AI
● AI Explanations Whitepaper
Sample Notebooks
● Tabular and Image Data Notebook examples
Using XAI in AutoML
● Explanations for AutoML Tables
● Explanations for AutoML Vision
Using XAI in BQML
● BigQuery Explainable AI
Vertex XAI Service Documentation
● Vertex Explainable AI
● Explainable AI SDK
Let's Talk!
● linkedin.com/in/mdickeykurdziolek/
● megdk@google.com