For GDG Southlake - Nov 30, 2022
Human Factors of Explainable AI
Meg Kurdziolek
megdk@google.com
A little about me and
what I work on
Hi, I’m Meg.
Sr. UX Researcher, Google
CAIIS (Cloud AI and Industry Solutions)
cloud.google.com/solutions/ai
Vertex AI
● Applications: Vision and Video, Conversation, Language, Structured Data
● Core: Notebooks, Data Labeling, Experiments, Metadata, AutoML, Training, Feature Store, Vizier (Optimization), Prediction, AI Accelerators, Hybrid AI, Deep Learning Env, Explainable AI, Pipelines, Continuous Monitoring
Human Factors of Explainable AI
Presentation Outline
01 “The Basics” - The basics of XAI: description, vocabulary, and prevailing techniques
02 Why is XAI important? - A discussion of why XAI is essential for the growth, adoption, and engineering of ML
03 What makes designing XAI hard? - A discussion of what makes designing effective XAI tools hard. In particular, we’ll take a deep dive into the different audiences for ML technologies and how they interact with explanations.
04 We’ve actually been explaining complex things for a long time - We’ll take a look at an analogy of explaining complex weather data to end-users
05 The UX of XAI - Recommendations on how to think about and design XAI for your audience
06 Thank you! - A recap of what we talked about today, and some resources if you want to learn more
“The Basics” of XAI
1
What is Explainable AI?
Explainable AI is the endeavor to make an ML model more understandable to humans.
What does transparent and opaque mean?
One set of definitions for transparent and opaque:
● Transparent - a system that reveals its internal mechanisms.
● Opaque - a system that does not reveal its internal mechanisms.
From Interpretable Machine Learning: A Guide for Making Black Box Models Explainable by Christoph Molnar
What does transparent and opaque mean?
Another set of definitions for transparent and opaque:
● Transparent - a model is considered transparent if it is understandable by itself. A model is transparent when a human can understand its function without any need for post-hoc explanation.
● Opaque - the opposite of a transparent model is an opaque model. Opaque models are not readily understood by humans; to be interpretable, they require post-hoc explanations.
From Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI by Alejandro Barredo Arrieta et al.
What makes a model transparent?
A (problematic) set of criteria for transparency:
➔ Simulatable - a person can contemplate the model and
“given enough scratch paper” could step through the
procedure and arrive at the same prediction for a given
input.
➔ Decomposable - each part of the model - each input,
parameter, and calculation - admits an intuitive
explanation.
➔ Algorithmically transparent - the training process used
to develop a model is well understood.
From The Mythos of Model Interpretability by Zachary C. Lipton
(Thought bubble: “I could step through this DNN if I had enough scratch paper…” - problematic.)
Models generally thought to be transparent:
● Linear/logistic regression
● Decision trees ← opaque if tree is complicated/very deep
● K-nearest neighbors
● Rule-based Learners
● General additive models
● Bayesian models
● Support vector machines ← opaque if data is messy/complicated
Models generally thought to be opaque:
● Tree ensembles ← transparent if trees are simple
● Deep Neural Networks (DNNs)
● Reinforcement Learners & Agents
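To make “simulatable” concrete, here is a minimal, hedged sketch (assuming scikit-learn and its built-in iris dataset; the example is illustrative, not from the talk): a depth-limited decision tree can be printed and stepped through by hand, which is exactly what a deep ensemble or DNN does not allow.

```python
# A minimal sketch of simulatability, assuming scikit-learn.
# A shallow decision tree can be followed "with enough scratch paper";
# the same is not true of a deep ensemble or a DNN.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
# Depth-2 tree: small enough for a person to step through by hand.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(iris.data, iris.target)

# Prints the full decision procedure as nested if/else rules.
print(export_text(tree, feature_names=iris.feature_names))
```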
The line between opaque and
transparent is blurred
What does interpretable and explainable AI mean?
Definitions for Interpretability, Explainability, and Comprehensibility:
● Interpretability - a passive characteristic of an ML system. If an ML system is interpretable, then you are able to explain, or provide the meaning of, an ML process in human-understandable terms.
● Explainability - an action, procedure, or interface between humans and an ML system that makes the system comprehensible to humans.
● Comprehensibility - the ability of a learning algorithm to represent its learned knowledge in a human-understandable fashion.
From Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI by Alejandro Barredo Arrieta et al.
XAI Techniques
● Explanation by simplification - provides explanation through rule extraction & distillation [e.g., Local Interpretable Model-Agnostic Explanations (LIME)]
● Feature relevance explanation - provides explanation through ranking or measuring the influence each feature has on a prediction output [e.g., Shapley Values]
● Visual explanation - provides explanation through visual representation of predictions [e.g., Layer-wise Relevance Propagation (LRP)]
Image from Explaining Machine Learning Models: A Non-Technical Guide to Interpreting SHAP Analyses by Aidan Cooper
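As a hedged illustration of a feature-relevance explanation, here is a minimal sketch using the open-source shap library (an implementation of Shapley-value attributions); the dataset and model are stand-ins, not from the talk.

```python
# A minimal sketch of a feature-relevance explanation via Shapley values,
# assuming `pip install shap scikit-learn`. Dataset/model are illustrative.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

# TreeExplainer computes Shapley-value attributions for tree models.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(data.data[:5])

# Each attribution says how much a feature pushed one prediction away
# from the base rate; ranking them yields the "feature relevance" view.
print(shap_values)
```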
XAI Techniques
● Explanations by Concept - provides explanation through concepts. Concepts could be user-defined (e.g., “stripes” or “spots” in image data) [e.g., Testing with Concept Activation Vectors (TCAV)]
● Explanations by Example - provides explanations by analogy through surfacing proponents/opponents in the data [e.g., Example-Based Explanations]
Image from Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV) by Been Kim
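The official TCAV implementations work against a trained network’s internals; purely as a hedged, conceptual sketch of the math (synthetic activations and gradients, not the tcav library API):

```python
# Conceptual TCAV sketch: a Concept Activation Vector (CAV) is the normal
# of a linear classifier separating "concept" activations from random
# ones; the TCAV score is the fraction of examples whose prediction
# increases in the concept direction. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Stand-ins for a layer's activations on concept images (e.g., stripes)
# vs. random images; in real TCAV these come from the trained model.
concept_acts = rng.normal(1.0, 1.0, size=(100, 64))
random_acts = rng.normal(0.0, 1.0, size=(100, 64))

X = np.vstack([concept_acts, random_acts])
y = np.array([1] * 100 + [0] * 100)

# The CAV is the direction separating concept from random activations.
cav = LogisticRegression(max_iter=1000).fit(X, y).coef_[0]

# Stand-ins for gradients of the class logit w.r.t. the activations.
grads = rng.normal(size=(50, 64))

# Fraction of examples whose prediction would increase if activations
# moved in the concept direction.
tcav_score = np.mean(grads @ cav > 0)
print(f"TCAV score: {tcav_score:.2f}")
```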
Example-Based Explanations
Image from Vertex AI Example-based Explanations improve ML via explainability on Google Cloud Blog
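A hedged sketch of the idea behind explanation-by-example: surface the training examples most similar to the input being explained. (Vertex AI’s feature searches learned embeddings with approximate nearest neighbors; this toy version, assuming scikit-learn, searches raw pixels instead.)

```python
# A minimal sketch of explanation-by-example, assuming scikit-learn:
# "here are the training examples most similar to the one being explained."
from sklearn.datasets import load_digits
from sklearn.neighbors import NearestNeighbors

X, y = load_digits(return_X_y=True)
index = NearestNeighbors(n_neighbors=3).fit(X)

# Explain the first digit by retrieving its 3 nearest training examples
# (the first hit is the query itself, since it is in the index).
distances, neighbor_ids = index.kneighbors(X[:1])
print("neighbor labels:", y[neighbor_ids[0]])
```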
Model Agnostic vs. Model Specific
Model Agnostic explanations can work with
any type of ML model.
Examples:
● Local Interpretable Model-Agnostic
Explanations (LIME)
● Shapley Values
● Example-Based Explanations
Model Specific explanation techniques only
work with a specific model type.
Examples:
● Simplified Tree Ensemble Learner (STEL)
● DeepLIFT
● Layer-wise Relevance Propagation (LRP)
● Testing with Concept Activation Vectors
(TCAV)
Local, Cohort, and Global explanations
XAI methods also provide explanations at different levels of granularity (see the sketch below):
● Local Explanations - provide an explanation for a single prediction
● Cohort Explanations - provide an explanation for a cohort or subset of predictions
● Global Explanations - provide an explanation for all predictions, or for the model decision-making process itself
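As a hedged sketch of these three granularities using Shapley-style attributions (shap and scikit-learn assumed; the dataset and the cohort condition are arbitrary stand-ins):

```python
# Local vs. cohort vs. global granularity from one set of attributions.
import numpy as np
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor

X, y = load_diabetes(return_X_y=True)
model = GradientBoostingRegressor(random_state=0).fit(X, y)
shap_values = shap.TreeExplainer(model).shap_values(X)  # (n_rows, n_features)

local = shap_values[0]                              # one prediction
cohort = np.abs(shap_values[y > 150]).mean(axis=0)  # a subset of predictions
global_imp = np.abs(shap_values).mean(axis=0)       # the model overall
print(local.shape, cohort.shape, global_imp.shape)
```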
Why is XAI Important?
2
“The danger is in creating and
using decisions that are not
justifiable, legitimate, or that
simply do not allow obtaining
detailed explanations of their
behavior.”
(Arrieta et al., 2020)
Why is XAI important?
Explainability is important to the development, assessment, optimization, and troubleshooting of ML systems.
● Identifying and troubleshooting illegitimate conclusions
○ Deficiencies in the training data, and data “skews” or shifts, can result in illegitimate conclusions. Without knowing the “why” behind a prediction, it is difficult to diagnose.
● Feature engineering and data pipeline optimization
○ Removing features/data that are unnecessary for achieving desired model performance
Why is XAI important?
Explainability is important to assessing fairness and addressing bias.
● Identifying bias in datasets/models
○ Models can arrive at unfair, discriminatory, or biased decisions. Without a means of understanding the underlying decision making, these issues are difficult to assess.
Why is XAI important?
Explainability is essential for end-user adoption and the ultimate utility of ML-driven applications.
● Trust and adoption
○ Humans are reluctant to adopt or trust technologies they do not understand.
● Utility requires understanding
○ In cases where humans use the technology to make critical decisions, they require explanations in order to effectively exercise their own judgment.
Local, Cohort, and Global explanations across the ML Lifecycle
Image from A Look Into Global, Cohort and Local Model Explainability by Aparna Dhinakaran
What makes designing
XAI hard?
3
Why is XAI hard?
Humans.
Explanations need to be usable for an intended audience. Depending on who the audience is, the explanation may need to account for different domain expertise, cognitive abilities, and contexts of use.
A prediction + explanation reaches many different audiences:
● Developers, Operators, and Engineers - experts on ML, NOT experts on the data domain
● Data Scientists / Model Builders - experts on ML, NOT experts on the data domain
● Domain experts - experts on the data domain, NOT experts on ML
● Lay-persons / Consumers - NOT experts on ML, NOT experts on the data domain
● Auditors / regulatory agencies
“One analogous case to explainable AI for human-to-human interaction is that of a forensic scientist explaining
forensic evidence to laypeople (e.g., members of a jury). Currently, there is a gap between the ways forensic
scientists report results and the understanding of those results by laypeople. Jackson et al. 2015 extensively studied the
types of evidence presented to juries and the ability for juries to understand that evidence. They found that most types
of explanations from forensic scientists are misleading or prone to confusion. Therefore, they do not meet our
internal criteria for being “meaningful.” A challenge for the field is learning how to improve explanations, and the
proposed solutions do not always have consistent outcomes.”
- Phillips et al. 2021, Four Principles of Explainable Artificial Intelligence (NIST)
Human Bias
● Anchoring Bias - relying too heavily on the
first piece of information we are given
about a topic. We interpret newer
information from the reference point of our
anchor, instead of seeing it objectively.
● Availability bias - tendency to believe that
examples or cases that come readily to
mind are more representative of a
population than they actually are.
“When we become anchored to a
specific figure or plan of action, we
end up filtering all new information
through the framework we initially
drew up in our head, distorting our
perception. This makes us reluctant
to make significant changes to our
plans, even if the situation calls for
it.”
- Why we tend to rely heavily upon the first
piece of information we receive
Human Bias
● Confirmation Bias - the tendency to seek and favor information that supports one’s prior beliefs. Can result in unjustified trust and mistrust.
● Unjustified Trust/“Over trust” -
end-users may have a higher degree of
trust than they should (or “over trust”)
when explanations are presented in
different formats.
“They found that participants
tended to place “unwarranted” faith
in numbers. For example, the AI
group participants often ascribed
more value to mathematical
representations than was justified,
while the non-AI group participants
believed the numbers signaled
intelligence — even if they couldn’t
understand the meaning.”
- Even experts are too quick to rely on AI
explanations
We’ve actually been
explaining complex
things for a long time
4
Let’s talk about the weather
Weather is an example of
just one of the many
complex systems we
explain and interpret
today.
We had a problem:
Weather Underground’s radar imagery felt inaccurate to users.
“Stop sensationalizing storms in your maps…”
- user feedback
Different Sites, Different Storms
Intellicast AccuWeather
NWS dBZ to Rain Rate
dBZ Rain Rate (in/hr)
65 16+
60 8.00
55 4.00
52 2.50
47 1.25
41 0.50
36 0.25
30 0.10
20 Trace
< 20 No rain
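For the curious, tables like this are rounded, operational versions of a radar Z-R relationship; a hedged sketch using the classic Marshall-Palmer form Z = 200·R^1.6 reproduces the NWS numbers closely (the function and loop below are illustrative, not part of the talk):

```python
def dbz_to_rain_rate(dbz: float, a: float = 200.0, b: float = 1.6) -> float:
    """Approximate rain rate (mm/hr) from reflectivity (dBZ), Marshall-Palmer Z-R."""
    z = 10 ** (dbz / 10)       # dBZ is 10*log10(Z)
    return (z / a) ** (1 / b)

# 41 dBZ -> ~0.52 in/hr and 52 dBZ -> ~2.55 in/hr, matching the table above.
for dbz in (20, 30, 41, 52, 65):
    mm_hr = dbz_to_rain_rate(dbz)
    print(f"{dbz} dBZ ~ {mm_hr:6.1f} mm/hr ({mm_hr / 25.4:5.2f} in/hr)")
```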
Meteorologist Interviews
(the NWS dBZ to rain-rate table, shown again)
What does a quarter inch of rain
per hour feel like?
“That’s a solid rain. But not a
downpour. You would want an
umbrella, but you’d be okay if you
needed to make a quick dash to
your car or something.”
User Interviews
What do you think you’d experience in a rainstorm that looked like this?
“I think that if I was right in the middle of it, in that orange spot right there, I would not want to be outside. I bet it would be raining real heavy. Might flood the storm drains.”
Lining up the expert and non-expert experience
(the NWS dBZ to rain-rate table, annotated)
Meteorologist experience: big jump at ~35 dBZ
End-user experience: big difference at ~55 dBZ
A new palette
Big Jump at
35 dBZ
Big Jump at
55 dBZ
New radar palette is launched
Old Palette
New Palette
“Absolutely fantastic! I
abandoned WU a while back
because of the ‘dramatic
imagery’ that didn't match reality
on the ground / in the field; and
so I am very happy that
feedback was heard, that you
studied the complaint and data,
as well as communicated with
pros, observers and end users.
Time to bookmark and load the
WU apps again; and test it out.”
- User feedback on Radar
Palette Improvements
blog post (2014)
The UX of XAI
5
“The property of ‘being an explanation’
is not a property of statements, it is an
interaction. What counts as an
explanation depends on what the user
needs, what knowledge the user
already has, and especially the user's
goals.”
(Hoffman et al., 2019)
Designing explanations to meet user goals
How can we help end-users meet their goals and make better decisions?
Designing explanations for better decision making
Designing Theory-Driven User-Centric Explainable AI (Wang et al., 2019)
Designing explanations for interaction
How can we build understanding through interaction?
Interaction Example: The What-If Tool
https://pair-code.github.io/what-if-tool/
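The What-If Tool itself is an interactive notebook/TensorBoard widget; as a hedged, dependency-light sketch of the same interaction pattern (scikit-learn stand-ins, not the WIT API), perturb one feature of a single input and watch the prediction move:

```python
# A minimal "what if" probe, assuming scikit-learn; model/data illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)).fit(X, y)

row = X[0].copy()
baseline = model.predict_proba([row])[0, 1]
for delta in (-2.0, 2.0):  # "what if feature 0 were a bit smaller/larger?"
    edited = row.copy()
    edited[0] += delta
    p = model.predict_proba([edited])[0, 1]
    print(f"feature[0] {delta:+.1f}: p(class=1) {baseline:.3f} -> {p:.3f}")
```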
Designing explanations for interaction
Intuitively Assessing ML Model Reliability through Example-Based Explanations and Editing Model Inputs (Suresh et al., 2022)
“Grounding interpretability in real examples,
facilitating comparison across them, and
visualizing class distributions can help users
grasp the model’s uncertainty and connect it to
relevant challenges of the task.
Moreover, by looking at and comparing real
examples, users can discover or ask questions
about limitations of the data — and doing so
does not damage trust, but can play an
important role in building it.”
(Suresh et al., 2022)
XAI = interaction; interaction design is a cycle:
Discover → Ideate → Create → Evaluate → (and back to Discover)
UX of XAI
User-centric evaluation of XAI methods (there are published “best practices” and measurement scales for all of these; a scoring sketch follows below):
● Understandability - Does the XAI method provide explanations in human-readable terms, with sufficient detail to be understandable to the intended end-users?
● Satisfaction - Does the XAI method provide explanations such that users feel that they understand the AI system and are satisfied?
● Utility - Does the XAI method provide explanations such that end-users can make decisions and take further action on the prediction?
● Trustworthiness - After interacting with the explanation, do users trust the AI model prediction to an appropriate degree?
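As a hedged sketch of what a “measurement scale” means in practice (the items and 1-5 range below are illustrative, not a validated instrument such as Hoffman et al.’s explanation-satisfaction scale):

```python
# Score hypothetical Likert responses to an explanation questionnaire.
from statistics import mean

responses = {
    "The explanation helps me understand how the model works.": 4,
    "The explanation has sufficient detail.": 3,
    "I can act on the model's prediction using this explanation.": 5,
    "I trust the model's prediction to an appropriate degree.": 4,
}
print(f"Mean item score: {mean(responses.values()):.2f} / 5")
```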
When should UX get involved in ML development?
(Diagram of an ML project lifecycle annotated “here,” “here,” “here too,” and “here” - i.e., at multiple points throughout the lifecycle.)
Image from Organizing machine learning projects: project management guidelines by Jeremy Jordan
Thank you!
Learn more about XAI
● Explaining the Unexplainable in UXPA
Magazine
● Introduction to Vertex Explainable AI
● AI Explanations Whitepaper
Resources
Sample Notebooks
● Tabular and Image Data Notebook examples
Using XAI in AutoML
● Explanations for AutoML Tables
● Explanations for AutoML Vision
Using XAI in BQML
● BigQuery Explainable AI
Vertex XAI Service Documentation
● Vertex Explainable AI
● Explainable AI SDK
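As a hedged sketch of what calling Vertex Explainable AI looks like from Python (the google-cloud-aiplatform SDK is assumed; the project, region, endpoint ID, and instance payload are placeholders, and the endpoint must already be deployed with an explanation spec):

```python
from google.cloud import aiplatform

# Placeholders: substitute a real project/region and an endpoint that
# was deployed with an explanation spec.
aiplatform.init(project="my-project", location="us-central1")
endpoint = aiplatform.Endpoint("1234567890")  # hypothetical endpoint ID

response = endpoint.explain(instances=[{"feature_a": 1.0, "feature_b": 2.0}])
for explanation in response.explanations:
    for attribution in explanation.attributions:
        # Per-feature contributions to this prediction.
        print(attribution.feature_attributions)
```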
Let’s Talk!
● linkedin.com/in/mdickeykurdziolek/
● megdk@google.com
Any Questions?
Thank you!