“I don’t trust AI”:
the role of Explainability in
Responsible AI
Overview and Examples
31st March 2021
Erika Agostinelli
IBM Data Scientist – Data Science & AI Elite
Agenda

Overview (~15 min)
• Context: Responsible AI
• Considerations
• Personas: explanations for whom?
• Direct interpretability vs post-hoc explanations
• Global vs local explanations
• Type of your data

Examples (~10 min)
Some open-source tools
• AIX360
• What If Tool
• Examples
• Loan application
Women in Data Science Bristol 2021 | Erika Agostinelli | The role of Explainability in Responsible AI
Responsible AI

“As AI advances, and humans and AI systems increasingly
work together, it is essential that we trust the output of these
systems to inform our decisions.
Alongside policy considerations and business efforts, science
has a central role to play: developing and applying tools to
wire AI systems for trust.”
https://www.research.ibm.com/artificial-intelligence/trusted-ai/

Pillars of trust: Fairness / Robustness / Explainability / Value Alignment / Transparency / Accountability
Personas
Explanation for whom?
👩🦰
🧓
🧑🦰
🧔
Group 1: AI system builders
Technical individuals (data scientists and developers) who build or deploy an AI system want to know if their system is working as expected, how to diagnose and improve it, and possibly how to gain insight from its decisions.

Group 2: End-user decision makers
People who use the recommendations of an AI system to make a decision (for example, physicians, loan officers, managers, judges, or social workers) desire explanations that can build their trust and confidence in the system’s recommendations and possibly provide them with additional insight to improve their future decisions and understanding of the phenomenon.

Group 3: Regulatory bodies
Government agencies, charged to protect the rights of their citizens, want to ensure that decisions are made in a safe and fair manner and that society is not negatively impacted by those decisions (for example, by a financial crisis).

Group 4: End consumers
People impacted by the recommendations made by an AI system (for example, patients, loan applicants, employees, arrested individuals, or at-risk children) desire explanations that can help them understand if they were treated fairly and what factor(s) could be changed to get a different result.
e.g. Data Scientist
“How can I improve the performance? Is
the model using the right data to predict the
result?”
e.g. Loan Officer
“How can I justify the predicted result? Would similar
applicants have received a similar result?”
e.g. Loan Applicants
“Why was my application rejected? What can I do to
get a loan next time?”
e.g. Bank Executives, Audit Agencies
“Does this model comply with the law?
Is this model fair?”
Loan Application Example
Interpretability vs Explainability
Different approaches
Directly Interpretable Approach
Research that explains the inner workings of an existing
or enhanced machine learning model directly, known as
a directly interpretable approach, providing a precise
description of how the model determined its decision.
We can see how the model “thinks”.
For example: a small decision tree.

Post-hoc Explanation Approach
Research, called post-hoc interpretation, that probes
an existing model with input values similar to the
actual inputs to understand which factors were crucial
in the model’s decision.
The approach is model-agnostic: we leverage the
model’s inputs and outputs to infer what is
happening within it.
By Dr. Cynthia Rudin
https://www.nature.com/articles/s42256-019-0048-x
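The model-agnostic idea can be sketched in a few lines: treat the model as a black box, nudge one input at a time, and record how much the output moves. This is a toy illustration with a hypothetical model and features, far simpler than the methods cited here:

```python
# Simplified post-hoc probing: estimate feature influence by perturbing
# one input at a time and measuring the change in the model's output.
# The "model" below is a hypothetical black box we can only call.

def black_box_model(features):
    # Stand-in for an opaque model: we only observe inputs and outputs.
    salary, debt, years_employed = features
    return (0.5 * (salary / 100_000)
            - 0.3 * (debt / 50_000)
            + 0.2 * (years_employed / 10))

def perturbation_importance(model, instance, delta=0.1):
    """Absolute output change when each feature is nudged up by `delta` (10%)."""
    base = model(instance)
    importances = []
    for i in range(len(instance)):
        perturbed = list(instance)
        perturbed[i] *= (1 + delta)
        importances.append(abs(model(perturbed) - base))
    return importances

applicant = [60_000, 20_000, 5]
print(perturbation_importance(black_box_model, applicant))
```

For this applicant the probe ranks salary as the most influential feature, without ever looking inside the model.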
Global vs Local
Model or Instance level approach
Global or Model-level Approach
An approach that describes the entire predictive model
to the user is called a global or model-level approach:
the user can understand how a prediction will be made
for any input.
An example would be a simple decision tree:
If “salary > $50K” and “outstanding debt < $10K”
then mortgage approved
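The rule above can be written out directly; the rule is the whole model, so its behaviour is transparent for every possible input (thresholds taken from the slide):

```python
def mortgage_approved(salary, outstanding_debt):
    # Global / model-level: this single rule IS the entire model,
    # so anyone can see how any input will be decided.
    return salary > 50_000 and outstanding_debt < 10_000

print(mortgage_approved(60_000, 5_000))   # True: both conditions hold
print(mortgage_approved(40_000, 5_000))   # False: salary too low
```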
Local or Instance-level Approach
An approach that provides an explanation for a
particular example is called a local or instance-level
explanation.
An example would be an explanation of the credit
rating for a particular applicant: it might provide the
factors that led to that decision, but it will not
describe the factors for any other applicant.
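In code, a local explanation might simply report which conditions a single applicant failed. A minimal sketch reusing the hypothetical salary/debt thresholds from the decision-tree example:

```python
def explain_instance(salary, outstanding_debt):
    # Local / instance-level: explain this one decision by listing the
    # conditions this particular applicant satisfied or failed.
    reasons = [
        ("salary > $50K", salary > 50_000),
        ("outstanding debt < $10K", outstanding_debt < 10_000),
    ]
    approved = all(passed for _, passed in reasons)
    failed = [cond for cond, passed in reasons if not passed]
    return approved, failed

approved, failed = explain_instance(40_000, 5_000)
print(approved, failed)  # rejected, and the failed salary condition is listed
```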
[Figure: scatter plots of datapoints contrasting a global explanation, which covers the whole model, with a local explanation, which covers a single instance.]
Type of Data
How to visualize your explanations
Tabular · Text · Images

Different types of data require different types of visualization.
The choice of how to visualize your results is crucial for your
persona: can your end user easily understand the results of your
explanations?
Open-Source Tools – Example in Action
A non-exhaustive list

AI Explainability 360 (AIX360)
This toolkit is an open-source library developed by IBM
Research to support interpretability and explainability
of datasets and machine learning models.
AI Explainability 360 is released as a Python package
that includes a comprehensive set of algorithms
covering different dimensions of explanation, along
with proxy explainability metrics.
pip install aix360
https://aix360.mybluemix.net/
What If Tool
This toolkit is an interactive visual interface
developed by Google Research and designed to help
visualize datasets and better understand the output
of models.
pip install witwidget
https://pair-code.github.io/what-if-tool/
AIX360
Taxonomy and guidance

AIX360 organises its algorithms along two dimensions: Local vs Global, and Directly Interpretable vs Post-hoc Explanation.

- One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques (2019)
AIX360 Example
Loan Application – HELOC Dataset
Data Scientist (BRCG / GLRM): must ensure the model works appropriately before deployment
Loan Officer (ProtoDash): needs to assess the model's prediction to make the final judgement
Loan Applicant (CEM): wants to understand the reason for the application result
Notebook Available
AIX360 Example – Loan Application
Directly Interpretable Models for Global Understanding
Data Scientist
A data scientist would ideally like to understand the behaviour of the model as a whole, not just
in specific instances (e.g. specific loan applicants). A global view of the model may uncover
problems with overfitting and poor generalization to other geographies before deployment.

Boolean Rule Column Generation (BRCG)
An example of a directly interpretable model, BRCG
yields a very simple set of rules with reasonable
accuracy.
Logistic Rule Regression (LogRR)
Part of the Generalised Linear Rule Models, it can
improve accuracy at the cost of a more complex but
still interpretable model.
Paper: Boolean Decision Rules via Column Generation
Paper: Generalized Linear Rule Models
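The kind of model BRCG produces, a small OR-of-ANDs rule set, can be sketched as plain code. The rules and thresholds below are made up for illustration, not learned from the actual HELOC data (the feature names only follow the HELOC dataset's naming style):

```python
# A Boolean rule set in disjunctive normal form (OR of ANDs), the model
# family BRCG produces. Each inner list is one AND-clause of conditions.
RULES = [
    # accept if the risk estimate is high AND there are no derogatory records
    [lambda a: a["ExternalRiskEstimate"] > 75,
     lambda a: a["NumTrades60Ever2DerogPubRec"] == 0],
    # ... or if the credit file is long-established
    [lambda a: a["MSinceOldestTradeOpen"] > 300],
]

def predict(applicant):
    # The prediction is True if ANY clause has ALL of its conditions met.
    return any(all(cond(applicant) for cond in clause) for clause in RULES)

applicant = {"ExternalRiskEstimate": 80,
             "NumTrades60Ever2DerogPubRec": 0,
             "MSinceOldestTradeOpen": 120}
print(predict(applicant))
```

Because the whole model is two human-readable clauses, it is globally interpretable: a data scientist can audit every possible decision path by reading the rules.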
👩🦰
AIX360 Example – Loan Application
Using Similar Examples to Inform a Loan Decision
Loan Officer
Using similar examples may help the loan officer understand the decision to accept or reject
an applicant's HELOC application in the context of other, similar applications.

ProtoDash
This method selects applications from the training
set that are similar in different ways to the user
application we want to explain, which makes it
different from traditional ‘distance’ methods
(Euclidean, cosine, etc.).
ProtoDash is able to provide a much more well-rounded
and comprehensive view of why the decision
for the applicant may be justifiable.
Paper: Efficient Data Representation by Selecting Prototypes with Importance Weights
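The intuition behind prototype selection can be sketched as a greedy loop that repeatedly picks the training example most similar to the application being explained. This is a deliberate simplification, not the actual ProtoDash objective (which optimizes prototype importance weights against the data distribution), and the feature vectors are made up:

```python
import math

def similarity(a, b):
    # RBF-kernel-style similarity between two numeric feature vectors.
    return math.exp(-sum((x - y) ** 2 for x, y in zip(a, b)) / 2.0)

def select_prototypes(query, training_set, k=2):
    # Greedy stand-in for prototype selection: repeatedly take the
    # training example most similar to the query among those not yet chosen.
    remaining = list(training_set)
    chosen = []
    for _ in range(min(k, len(remaining))):
        best = max(remaining, key=lambda ex: similarity(query, ex))
        chosen.append(best)
        remaining.remove(best)
    return chosen

applicants = [(0.8, 0.1), (0.2, 0.9), (0.75, 0.2), (0.1, 0.1)]
print(select_prototypes((0.8, 0.15), applicants))
```

A loan officer could then inspect the selected applications and their outcomes to judge whether the decision for the new applicant is consistent with precedent.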
🧑🦰
…
AIX360 Example – Loan Application
Contrastive Explanations for a Loan Applicant

Loan Applicant
The applicant would like to understand why they do not qualify for a line of credit and what
changes in their application would qualify them.

Contrastive Explanation Method (CEM)
Contrastive explanations provide information to
applicants about what minimal changes to their
profile would have changed the decision of the AI
model from reject to accept, or vice versa
(pertinent negatives).
It can also provide information on the minimal set
of factors that would suffice to maintain the original
decision (pertinent positives).
Paper: Explanations based on the Missing: Towards Contrastive Explanations with Pertinent Negatives
🧔
Pertinent Negative Example:
We observe that this loan application would have been accepted if
- the consolidated risk marker score (i.e. ExternalRiskEstimate) increased from 65 to 81,
- the loan application had been on file (i.e. AverageMInFile) for about 66 months, and
- the number of satisfactory trades (i.e. NumSatisfactoryTrades) increased to a little over 21.
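The pertinent-negative idea can be sketched with a brute-force search over a toy accept/reject rule: increase one feature at a time until the decision flips. The model and feature names here are hypothetical, and the real CEM uses an optimization rather than exhaustive search:

```python
def accepts(app):
    # Toy stand-in for the model's accept/reject decision.
    return app["risk_estimate"] >= 75 and app["months_on_file"] >= 48

def pertinent_negative(app, steps=50):
    """Smallest single-feature increase (in unit steps) that flips reject
    to accept; returns (feature, new_value) or None."""
    if accepts(app):
        return None  # already accepted; nothing to contrast
    for step in range(1, steps + 1):
        for feature in app:
            changed = dict(app)
            changed[feature] += step
            if accepts(changed):
                return feature, changed[feature]
    return None

applicant = {"risk_estimate": 72, "months_on_file": 60}
print(pertinent_negative(applicant))  # raising risk_estimate to 75 flips it
```

The returned change is exactly the kind of actionable feedback a rejected applicant is after: "what is the least I would need to improve?"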
What If Tool Example
US Census Model Comparison
https://colab.research.google.com/github/pair-code/what-if-tool/blob/master/WIT_Model_Comparison.ipynb#scrollTo=NUQVro76e38Q
Find a Counterfactual
In the What-If Tool, a counterfactual is the most similar datapoint with a different classification (for classification models), or with a difference in prediction greater than a specified threshold (for regression models).
Notebooks Available
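The What-If Tool's counterfactual definition for classification can be sketched directly: among datapoints whose predicted class differs, return the one closest to the selected point (hypothetical data; the tool itself computes and visualizes this interactively):

```python
def nearest_counterfactual(point, dataset, labels, point_label):
    # The counterfactual is the most similar datapoint (smallest squared
    # distance) whose predicted class differs from the selected point's.
    candidates = [(sum((a - b) ** 2 for a, b in zip(point, other)), other)
                  for other, label in zip(dataset, labels)
                  if label != point_label]
    return min(candidates)[1] if candidates else None

data = [(1.0, 2.0), (1.1, 2.1), (5.0, 5.0)]
labels = ["accept", "reject", "reject"]
print(nearest_counterfactual((1.0, 1.9), data, labels, "accept"))
```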
Other Resources
Useful Links
In addition to the links in the slides:
Websites-Articles
- https://www.research.ibm.com/artificial-intelligence/trusted-ai/
- Understanding how LIME explains predictions
- Explain Any Models with the SHAP Values — Use the KernelExplainer
- Interpretability part 3: opening the black box with LIME and SHAP
- AI Explainability 360 Documentation
- What if tool Documentation
- The Mathematics of Decision Trees, Random Forest and Feature Importance in Scikit-learn and Spark
- An Introduction to ProtoDash — An Algorithm to Better Understand Datasets and Machine Learning Models
Papers
- Questioning the AI: Informing Design Practices for Explainable AI User Experiences (2020)
- One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques (2019)
- Explaining explainable AI (2019)
https://www.linkedin.com/in/erikaagostinelli/
www.erikaagostinelli.com
Thank you!
Call Girls in Sarai Kale Khan Delhi 💯 Call Us 🔝9205541914 🔝( Delhi) Escorts S...
 
Log Analysis using OSSEC sasoasasasas.pptx
Log Analysis using OSSEC sasoasasasas.pptxLog Analysis using OSSEC sasoasasasas.pptx
Log Analysis using OSSEC sasoasasasas.pptx
 
CHEAP Call Girls in Saket (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICE
CHEAP Call Girls in Saket (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICECHEAP Call Girls in Saket (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICE
CHEAP Call Girls in Saket (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICE
 
Zuja dropshipping via API with DroFx.pptx
Zuja dropshipping via API with DroFx.pptxZuja dropshipping via API with DroFx.pptx
Zuja dropshipping via API with DroFx.pptx
 
Invezz.com - Grow your wealth with trading signals
Invezz.com - Grow your wealth with trading signalsInvezz.com - Grow your wealth with trading signals
Invezz.com - Grow your wealth with trading signals
 
Call me @ 9892124323 Cheap Rate Call Girls in Vashi with Real Photo 100% Secure
Call me @ 9892124323  Cheap Rate Call Girls in Vashi with Real Photo 100% SecureCall me @ 9892124323  Cheap Rate Call Girls in Vashi with Real Photo 100% Secure
Call me @ 9892124323 Cheap Rate Call Girls in Vashi with Real Photo 100% Secure
 
Edukaciniai dropshipping via API with DroFx
Edukaciniai dropshipping via API with DroFxEdukaciniai dropshipping via API with DroFx
Edukaciniai dropshipping via API with DroFx
 
Delhi 99530 vip 56974 Genuine Escort Service Call Girls in Kishangarh
Delhi 99530 vip 56974 Genuine Escort Service Call Girls in  KishangarhDelhi 99530 vip 56974 Genuine Escort Service Call Girls in  Kishangarh
Delhi 99530 vip 56974 Genuine Escort Service Call Girls in Kishangarh
 
Digital Advertising Lecture for Advanced Digital & Social Media Strategy at U...
Digital Advertising Lecture for Advanced Digital & Social Media Strategy at U...Digital Advertising Lecture for Advanced Digital & Social Media Strategy at U...
Digital Advertising Lecture for Advanced Digital & Social Media Strategy at U...
 
(NEHA) Call Girls Katra Call Now 8617697112 Katra Escorts 24x7
(NEHA) Call Girls Katra Call Now 8617697112 Katra Escorts 24x7(NEHA) Call Girls Katra Call Now 8617697112 Katra Escorts 24x7
(NEHA) Call Girls Katra Call Now 8617697112 Katra Escorts 24x7
 
Call Girls Hsr Layout Just Call 👗 7737669865 👗 Top Class Call Girl Service Ba...
Call Girls Hsr Layout Just Call 👗 7737669865 👗 Top Class Call Girl Service Ba...Call Girls Hsr Layout Just Call 👗 7737669865 👗 Top Class Call Girl Service Ba...
Call Girls Hsr Layout Just Call 👗 7737669865 👗 Top Class Call Girl Service Ba...
 
Vip Model Call Girls (Delhi) Karol Bagh 9711199171✔️Body to body massage wit...
Vip Model  Call Girls (Delhi) Karol Bagh 9711199171✔️Body to body massage wit...Vip Model  Call Girls (Delhi) Karol Bagh 9711199171✔️Body to body massage wit...
Vip Model Call Girls (Delhi) Karol Bagh 9711199171✔️Body to body massage wit...
 

"I don't trust AI": the role of explainability in responsible AI

  • 1. “I don’t trust AI”: the role of Explainability in Responsible AI Overview and Examples 31st March 2021 Erika Agostinelli IBM Data Scientist – Data Science & AI Elite
  • 2. Agenda 2 • Context: Responsible AI • Considerations • Personas: Explanations for whom? • Direct Interpretability vs Post-hoc explanations • Global vs Local explanations • Type of your data Some Open-Source tools • AIX360 • What if Tool • Examples • Loan Application Overview (~15min) Examples (~10min) Women in Data Science Bristol 2021 | Erika Agostinelli | The role of Explainability in Responsible AI
  • 3. Responsible AI 3 “As AI advances, and humans and AI systems increasingly work together, it is essential that we trust the output of these systems to inform our decisions. Alongside policy considerations and business efforts, science has a central role to play: developing and applying tools to wire AI systems for trust.” https://www.research.ibm.com/artificial-intelligence/trusted-ai/ Fairness / Robustness / Explainability / Value Alignment / Transparency / Accountability
  • 4. Personas Explanation for whom? 4 👩🦰 🧓 🧑🦰 🧔 Group 1: AI system builders Technical individuals (data scientists and developers) who build or deploy an AI system want to know whether their system is working as expected, how to diagnose and improve it, and possibly to gain insight from its decisions. Group 2: End-user decision makers People who use the recommendations of an AI system to make a decision (for example, physicians, loan officers, managers, judges, or social workers) desire explanations that can build their trust and confidence in the system’s recommendations and possibly provide them with additional insight to improve their future decisions and understanding of the phenomenon. Group 3: Regulatory bodies Government agencies, charged with protecting the rights of their citizens, want to ensure that decisions are made in a safe and fair manner and that society is not negatively impacted by those decisions (for example, by a financial crisis). Group 4: End consumers People impacted by the recommendations made by an AI system (for example, patients, loan applicants, employees, arrested individuals, or at-risk children) desire explanations that can help them understand whether they were treated fairly and what factor(s) could be changed to get a different result. e.g. Data Scientist: “How can I improve the performance? Is the model using the right data to predict the result?” e.g. Loan Officer: “How can I justify the predicted result? Would similar applicants have received a similar result?” e.g. Loan Applicant: “Why was my application rejected? What can I do to get a loan next time?” e.g. Bank Executives, Audit Agencies: “Does this model comply with the law? Is this model fair?” Loan Application Example
  • 5. Interpretability vs Explainability Different approaches 5 Directly Interpretable Approach Research that explains the inner workings of an existing or enhanced machine learning model directly, known as a directly interpretable approach, providing a precise description of how the model determined its decision. We can see how the model “thinks”: for example, a small decision tree. Post-hoc Explanation Approach Research, called post-hoc explanation, that probes an existing model with input values similar to the actual inputs to understand which factors were crucial in the model’s decision. The approach is model-agnostic: we leverage the model’s inputs and outputs to infer what is happening inside it. By Dr. Cynthia Rudin https://www.nature.com/articles/s42256-019-0048-x
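The model-agnostic, post-hoc idea on this slide can be sketched in a few lines: treat the model as a black box we can only call, and probe it with perturbed inputs to see which features move the decision. The scoring function and feature names below are hypothetical stand-ins, and this is a toy sensitivity check, not a production explainer such as LIME or SHAP.

```python
# A toy post-hoc probe: the "model" is a black box we can only call.
# (Hypothetical scoring function, standing in for any trained model.)
def black_box_model(salary, debt):
    return 1 if (salary > 50_000 and debt < 10_000) else 0  # 1 = approve

def probe_feature_effect(base_input, feature, delta):
    """Re-run the black box with one feature perturbed and report
    whether the decision changed (a crude local sensitivity check)."""
    perturbed = dict(base_input)
    perturbed[feature] += delta
    return black_box_model(**base_input), black_box_model(**perturbed)

applicant = {"salary": 52_000, "debt": 9_500}
# A small increase in debt flips the decision, so debt is crucial locally:
print(probe_feature_effect(applicant, "debt", 1_000))    # (1, 0)
# A small salary decrease does not flip it:
print(probe_feature_effect(applicant, "salary", -1_000)) # (1, 1)
```

Real post-hoc explainers do this far more systematically (sampling many perturbations and fitting a local surrogate), but the inputs-and-outputs-only principle is the same.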
  • 6. Global vs Local Model or Instance level approach 6 Global or Model-level Approach An approach that describes the entire predictive model to the user is called a global or model-level approach: the user can understand how any input will be decided. An example would be a simple decision tree: if “salary > $50K” and “outstanding debt < $10K”, then the mortgage is approved. Local or Instance-level Approach An approach that provides an explanation for a particular example is called a local or instance-level explanation. For example, an explanation for a particular applicant’s credit rating might provide the factors that led to that decision, but it will not describe the factors for any other applicant.
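The distinction above can be made concrete with the slide’s own rule (the thresholds are the illustrative ones from the slide, not learned values): the global view is the complete, readable rule; the local view reports which conditions held for one specific applicant.

```python
def mortgage_model(salary, outstanding_debt):
    # Global (model-level) view: the entire model is this one readable rule.
    return salary > 50_000 and outstanding_debt < 10_000

def local_explanation(salary, outstanding_debt):
    """Local (instance-level) view: which conditions held for THIS applicant."""
    return {
        "salary > $50K": salary > 50_000,
        "outstanding debt < $10K": outstanding_debt < 10_000,
    }

print(mortgage_model(salary=45_000, outstanding_debt=5_000))   # False (rejected)
print(local_explanation(salary=45_000, outstanding_debt=5_000))
# {'salary > $50K': False, 'outstanding debt < $10K': True}
```

For a model this small the global view is all you need; local explanations earn their keep when the model is too complex to read as a whole.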
  • 7. Type of Data How to visualize your explanations 7 Tabular Text Images Different types of data require different types of visualization. The choice of how to visualize your results will be crucial for your persona: can your end user easily understand the results of your explanations?
  • 8. Open-Source Tools – Examples in Action non-exhaustive list 8 AI Explainability 360 (AIX360) This toolkit is an open-source library developed by IBM Research in support of interpretability and explainability of datasets and machine learning models. AI Explainability 360 is released as a Python package that includes a comprehensive set of algorithms covering different dimensions of explanations, along with proxy explainability metrics. pip install aix360 https://aix360.mybluemix.net/ What-If Tool This toolkit is an interactive visual interface developed by Google Research, designed to help visualize datasets and better understand the output of models. pip install witwidget https://pair-code.github.io/what-if-tool/
  • 9. AIX360 Taxonomy and guidance 9 [Diagram: explainability methods organised along two axes: Local vs Global, and Directly Interpretable vs Post-hoc Explanation] One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques (2019)
  • 10. AIX360 Example Loan Application – HELOC Dataset 10 Data Scientist Must ensure the model works appropriately before deployment (BRCG / GLRM) Loan Officer Needs to assess the model’s prediction to make the final judgement (ProtoDash) Loan Applicant Wants to understand the reason for the application result (CEM) Notebook Available
  • 11. AIX360 Example – Loan Application Directly Interpretable Models for Global Understanding 11 Data Scientist 👩🦰 The data scientist would ideally like to understand the behaviour of the model as a whole, not just in specific instances (e.g. specific loan applicants). A global view of the model may uncover problems with overfitting and poor generalization to other geographies before deployment. Boolean Rule Column Generation (BRCG) An example of a directly interpretable model, BRCG yields a very simple set of rules with reasonable accuracy. Paper: Boolean Decision Rules via Column Generation Logistic Rule Regression (LogRR) Part of the Generalized Linear Rule Models family, it can improve accuracy at the cost of a more complex, but still interpretable, model. Paper: Generalized Linear Rule Models
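A BRCG-style output is essentially a small OR-of-ANDs (DNF) rule set, and that is what makes it globally readable. The sketch below shows what applying such a rule set as a model might look like; the rules and thresholds are invented for illustration, not learned from the HELOC data, and this is not the AIX360 API itself.

```python
# A hypothetical DNF rule set of the kind BRCG learns:
# predict "good risk" if ANY clause (an AND of conditions) is satisfied.
RULES = [
    [("ExternalRiskEstimate", ">", 75), ("NumSatisfactoryTrades", ">", 20)],
    [("AverageMInFile", ">", 84)],
]

def rule_set_predict(applicant):
    """Evaluate the DNF rule set: each clause is a conjunction of
    (feature, operator, threshold) conditions."""
    ops = {">": lambda a, b: a > b, "<": lambda a, b: a < b}
    for clause in RULES:
        if all(ops[op](applicant[feat], thr) for feat, op, thr in clause):
            return "good"
    return "bad"

applicant = {"ExternalRiskEstimate": 80, "NumSatisfactoryTrades": 25, "AverageMInFile": 40}
print(rule_set_predict(applicant))  # good
```

Because the whole model fits in a handful of readable clauses, a data scientist can audit every decision path before deployment, which is exactly the global-understanding use case on this slide.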
  • 12. AIX360 Example – Loan Application Using Similar Examples to Inform a Loan Decision 12 Loan Officer 🧑🦰 Using similar examples may help the loan officer understand, in the context of other similar applications, why an applicant’s HELOC application was accepted or rejected. ProtoDash The method selects applications from the training set that are similar in different ways to the user application we want to explain, which makes it different from traditional ‘distance’ methods (Euclidean, cosine, etc.). ProtoDash is able to provide a much more well-rounded and comprehensive view of why the decision for the applicant may be justifiable. Paper: Efficient Data Representation by Selecting Prototypes with Importance Weights
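The core idea of example-based explanation (surface training applications that represent the query well) can be illustrated with a deliberately simplified stand-in: rank training applications by similarity to the query and return the top few. The real ProtoDash optimises importance weights over a kernel to capture similarity "in different ways"; this toy version, with made-up feature values, omits all of that.

```python
def similarity(a, b):
    # Toy similarity: negative sum of absolute feature differences.
    return -sum(abs(a[k] - b[k]) for k in a)

def top_prototypes(query, training_set, m=2):
    """Return the m training examples most similar to the query.
    (A stand-in for prototype selection; NOT the ProtoDash algorithm.)"""
    return sorted(training_set, key=lambda ex: similarity(query, ex), reverse=True)[:m]

train = [
    {"ExternalRiskEstimate": 80, "NumSatisfactoryTrades": 25},
    {"ExternalRiskEstimate": 60, "NumSatisfactoryTrades": 10},
    {"ExternalRiskEstimate": 78, "NumSatisfactoryTrades": 22},
]
query = {"ExternalRiskEstimate": 79, "NumSatisfactoryTrades": 24}
print(top_prototypes(query, train))
```

A loan officer could then inspect how the prototype applications were decided, grounding the model's recommendation for the new applicant in concrete precedents.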
  • 13. AIX360 Example – Loan Application Explaining a Decision with Contrastive Explanations 13 Loan Applicant 🧔 The applicant would like to understand why they do not qualify for a line of credit and, if so, what changes to their application would qualify them. Contrastive Explanation Method (CEM) Contrastive explanations provide information to applicants about what minimal changes to their profile would have changed the decision of the AI model from reject to accept, or vice versa (pertinent negatives). It can also provide information on the minimal set of features that would still maintain the original decision (pertinent positives). Paper: Explanations based on the Missing: Towards Contrastive Explanations with Pertinent Negatives Pertinent Negative Example: We observe that this loan application would have been accepted if - the consolidated risk marker score (i.e. ExternalRiskEstimate) increased from 65 to 81, - the loan application had been on file (i.e. AverageMInFile) for about 66 months, and - the number of satisfactory trades (i.e. NumSatisfactoryTrades) increased to a little over 21.
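The pertinent-negative idea (what would need to change for the decision to flip) can be sketched with a toy search: starting from the rejected profile, raise features step by step until a hypothetical accept rule is satisfied, then report the deltas. The thresholds below are invented for illustration; CEM itself solves a constrained optimisation over a trained network rather than this greedy loop.

```python
def accept(profile):
    # Hypothetical accept rule, standing in for the trained model.
    return profile["ExternalRiskEstimate"] >= 81 and profile["NumSatisfactoryTrades"] >= 21

def pertinent_negative(profile, features, step=1, limit=100):
    """Toy search for changes flipping reject -> accept:
    raise the given features together, one step at a time,
    and return how far each had to move."""
    changed = dict(profile)
    for _ in range(limit):
        if accept(changed):
            return {f: changed[f] - profile[f] for f in features}
        for f in features:
            changed[f] += step
    return None  # no flip found within the step budget

applicant = {"ExternalRiskEstimate": 65, "NumSatisfactoryTrades": 18}
print(pertinent_negative(applicant, ["ExternalRiskEstimate", "NumSatisfactoryTrades"]))
# {'ExternalRiskEstimate': 16, 'NumSatisfactoryTrades': 16}
```

The returned deltas read directly as advice to the applicant ("raise your risk estimate by 16 points"), which is why this style of explanation suits end consumers.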
  • 14. What-If Tool Example US Census Model Comparison 14 https://colab.research.google.com/github/pair-code/what-if-tool/blob/master/WIT_Model_Comparison.ipynb#scrollTo=NUQVro76e38Q Find a Counterfactual In the What-If Tool, a counterfactual is the most similar datapoint with a different classification (for classification models), or with a difference in prediction greater than a specified threshold (for regression models). Notebooks Available
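The counterfactual definition above is easy to express directly for the classification case: among datapoints with a different label, pick the one closest to the selected point. A minimal sketch with a made-up census-style dataset (the feature names and records are hypothetical, and the L1 distance is one possible similarity choice, not necessarily the tool's):

```python
def nearest_counterfactual(point, dataset):
    """Return the most similar datapoint with a different label,
    mirroring the What-If Tool's counterfactual idea for classifiers."""
    def distance(a, b):
        return sum(abs(a["features"][k] - b["features"][k]) for k in a["features"])
    candidates = [d for d in dataset if d["label"] != point["label"]]
    return min(candidates, key=lambda d: distance(point, d))

data = [
    {"features": {"age": 40, "hours_per_week": 45}, "label": ">50K"},
    {"features": {"age": 38, "hours_per_week": 40}, "label": "<=50K"},
    {"features": {"age": 25, "hours_per_week": 20}, "label": "<=50K"},
]
query = {"features": {"age": 39, "hours_per_week": 44}, "label": ">50K"}
print(nearest_counterfactual(query, data))
# {'features': {'age': 38, 'hours_per_week': 40}, 'label': '<=50K'}
```

Seeing the nearest differently-labelled neighbour makes it immediately clear how little separates the two outcomes, which is what makes counterfactuals so persuasive in the tool's UI.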
  • 15. Other Resources Useful Links 15 In addition to the links in the slides: Websites and Articles - https://www.research.ibm.com/artificial-intelligence/trusted-ai/ - Understanding how LIME explains predictions - Explain Any Models with the SHAP Values — Use the KernelExplainer - Interpretability part 3: opening the black box with LIME and SHAP - AI Explainability 360 Documentation - What-If Tool Documentation - The Mathematics of Decision Trees, Random Forest and Feature Importance in Scikit-learn and Spark - An Introduction to ProtoDash — An Algorithm to Better Understand Datasets and Machine Learning Models Papers - Questioning the AI: Informing Design Practices for Explainable AI User Experiences (2020) - One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques (2019) - Explaining Explainable AI (2019)
  • 16. https://www.linkedin.com/in/erikaagostinelli/ www.erikaagostinelli.com Thank you!