Explainable AI – Making ML and DL models more interpretable
About Me
Aditya Bhattacharya
I am currently working as the Lead AI/ML Engineer at West Pharmaceutical
Services, where I lead and manage a global AI team and create AI products and
platforms at West. I am well seasoned in Data Science, Machine Learning, IoT and
Software Development, and I have established the AI Centre of Excellence and
worked towards democratizing AI practice for West Pharmaceuticals and Microsoft.
In the Data Science domain, Computer Vision, Time-Series Analysis, Natural
Language Processing and Speech Analysis are my forte.
Apart from my day job, I am an AI Researcher at an NGO called MUST Research,
and I am one of the faculty members for the MUST Research Academy: https://must.co.in/acad
Website : https://aditya-bhattacharya.net/
LinkedIn: https://www.linkedin.com/in/aditya-bhattacharya-b59155b6/
Key Topics
1. Necessity and Principles of Explainable AI
2. Model Agnostic XAI for ML models
3. Model Agnostic XAI for DL models
4. Popular frameworks for XAI
5. Research Questions to consider
Necessity and Principles of
Explainable AI
XAI rests on three complementary ideas:
• Traceable AI: trace a model's prediction from the underlying mathematical logic to the nature of the data.
• Reasonable AI: understand the reasoning behind each model prediction.
• Understandable AI: understand the model on which the AI's decision making is based.
Model Agnostic XAI methods fall into four broad groups:
• Knowledge Extraction: using surrogate models, such as linear models or decision trees, to explain complex models.
• Influence Methods: estimating the importance of the relevant features.
• Results Visualization: extracting statistical information from the inputs and the outputs.
• Example-Based Methods: selecting instances of the dataset that explain the behaviour of the model.
An iterative explainability workflow: start from the problem, the data and the audience; build the model and measure its predictive accuracy; apply post hoc analysis; and assess descriptive accuracy, iterating until the explanations are adequate.
Model Agnostic XAI for ML models
A black-box ML model produces predictions; an explainer built from surrogate models approximates the black box and explains those predictions.
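To make the surrogate idea concrete, here is a minimal global-surrogate sketch: a shallow decision tree is fitted to the black box's predictions (not the true labels) and then inspected as the explainer. The black-box model and dataset below are placeholders chosen only for illustration.

```python
# Global surrogate sketch: fit an interpretable decision tree to mimic the
# predictions of an already-trained black-box model. The random forest and
# dataset are stand-ins; any opaque model and feature matrix would do.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X, y = data.data, data.target

black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Train the surrogate on the black box's outputs, not the ground truth.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how closely the surrogate mimics the black box.
fidelity = accuracy_score(black_box.predict(X), surrogate.predict(X))
print(f"Surrogate fidelity: {fidelity:.2f}")
print(export_text(surrogate, feature_names=list(data.feature_names)))
```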
A loan application is scored by a predictive model, which returns the prediction "Deny Loan". A counterfactual generation algorithm then turns that decision into an actionable suggestion for the loan applicant: "Increase your salary by 50K and pay your credit card bills on time for the next 3 months".
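The counterfactual generation step can be illustrated with a deliberately naive sketch: greedily nudge the mutable features until the model's decision flips. This is only a toy version of the idea (real counterfactual methods also optimize for proximity, sparsity and plausibility); the model, feature layout and step sizes are assumptions.

```python
# Toy counterfactual search: perturb mutable features step by step until the
# classifier's decision flips from "deny" (0) to "approve" (1).
# `model`, the feature order and the step sizes are illustrative assumptions.
import numpy as np

def naive_counterfactual(model, x, steps, max_iters=50):
    """Greedily perturb one applicant `x` (1-D array) until the prediction flips."""
    cf = x.astype(float).copy()
    for _ in range(max_iters):
        if model.predict(cf.reshape(1, -1))[0] == 1:      # loan approved
            return cf
        # Try each allowed change and keep the one that raises the approval
        # probability the most.
        best, best_p = None, -1.0
        for i, step in steps.items():
            candidate = cf.copy()
            candidate[i] += step
            p = model.predict_proba(candidate.reshape(1, -1))[0, 1]
            if p > best_p:
                best, best_p = candidate, p
        cf = best
    return None  # no counterfactual found within the budget

# Example usage (features: [salary, missed_credit_card_payments]):
# suggestion = naive_counterfactual(clf, applicant, steps={0: 10_000, 1: -1})
```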
Model Agnostic XAI for DL models
Understanding the flow of information through the gradients between the layers of a Deep Neural Network model, using the following approaches:
1. Saliency Maps
2. Guided Backpropagation
3. Gradient Class Activation Methods
• Layer Grad-CAM
• Layer Conductance using Grad-CAM
• Layer Activation using Grad-CAM
(Figure: example attributions from Saliency Maps, Guided Backprop, Grad-CAM, Layer Conductance and Layer Activation.)
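As a concrete starting point, a vanilla saliency map only needs the gradient of the predicted class score with respect to the input pixels. The sketch below assumes a pretrained torchvision classifier and a preprocessed image tensor `img` of shape (1, 3, H, W); it is a minimal illustration, not the full Grad-CAM or layer-conductance pipeline (libraries such as Captum implement those).

```python
# Vanilla saliency map: gradient of the top class score w.r.t. the input image.
# Assumes a pretrained classifier and a normalized image tensor `img` (1, 3, H, W).
import torch
import torchvision.models as models

model = models.resnet18(pretrained=True).eval()

def saliency_map(model, img):
    img = img.clone().requires_grad_(True)
    scores = model(img)                       # (1, num_classes)
    top_score = scores[0, scores.argmax()]    # score of the predicted class
    top_score.backward()                      # d(score) / d(pixels)
    # Max over the colour channels gives a single-channel heatmap.
    return img.grad.detach().abs().max(dim=1)[0].squeeze(0)

# heatmap = saliency_map(model, img)   # (H, W) tensor; visualize with plt.imshow
```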
Can such explainability methods be applied to more complex models?
Image Captioning using an Attention-based Encoder-Decoder Architecture
[Kim et al., 2018]
Zebra (0.97): how important is the notion of "stripes" for this prediction?
Testing with Concept Activation Vectors (TCAV) is an interpretability method to understand what signals your neural network model uses for its predictions.
https://github.com/tensorflow/tcav
Pattern representation plays a key role in decision making for both images and text.
[Tan et al., 2019]
Interpretable Mimic Learning – compressing information from Deep Networks to a Shallow Network.
(Figure: the data and the deep model's predicted labels are fed to an explainer model that mimics the deep network.)
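A minimal sketch of the mimic-learning idea, assuming a trained deep model `deep_model` that exposes predicted probabilities, a feature matrix `X_train` and a list of `feature_names` (all placeholder names): an interpretable student, here gradient boosting regressed on the soft targets, learns to reproduce the deep network's behaviour and can then be inspected in its place.

```python
# Interpretable mimic learning sketch: distil the deep model's soft predictions
# into a shallow, inspectable student. `deep_model`, `X_train` and
# `feature_names` are assumed to exist; the choice of student is illustrative.
from sklearn.ensemble import GradientBoostingRegressor

# Soft labels: the deep network's predicted probability for the positive class.
soft_labels = deep_model.predict_proba(X_train)[:, 1]

student = GradientBoostingRegressor(max_depth=3, n_estimators=100)
student.fit(X_train, soft_labels)     # mimic the teacher, not the ground truth

# Inspect the student (feature importances, partial dependence, ...) as a proxy
# for the deep model's behaviour.
top = sorted(zip(student.feature_importances_, feature_names), reverse=True)[:5]
print(top)
```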
What features need to be changed and by how much to flip a model’s prediction?
[Goyal et al., 2019]
Popular frameworks for XAI
LIME
Local Interpretable Model-agnostic Explanations is an interpretability framework that works on structured data, text and image classifiers.
SHAP
SHAP (SHapley Additive exPlanations) is a game-theoretic approach to explain the output of any machine learning model.
ELI5
Explain like I am 5 is another popular framework that helps to debug machine learning classifiers and explain their predictions.
SKATER
Skater is a unified framework for XAI for all forms of models, both globally (inference on the basis of a complete data set) and locally (inference about an individual prediction).
TCAV
Testing with Concept Activation Vectors (TCAV) is a new interpretability method to understand what signals your neural network model uses for prediction.
• Behind the workings of LIME lies the assumption that every complex model is linear on a local scale. LIME tries
to fit a simple model around a single observation that will mimic how the global model behaves at that
locality.
• Create the perturbed data and predict the output on the perturbed data
• Create discretized features and find the Euclidean distance of perturbed data to the original observation
• Convert distance to similarity score and select the top n features for the model
• Create a linear model and explain the prediction
The lime package is on PyPI. `pip install lime`
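A minimal usage sketch for tabular data, assuming a trained scikit-learn style classifier `clf`, a training matrix `X_train`, and `feature_names` / `class_names` lists (all placeholder names):

```python
# LIME on tabular data: explain a single prediction of a black-box classifier.
# `clf`, `X_train`, `feature_names` and `class_names` are assumed placeholders.
import lime.lime_tabular

explainer = lime.lime_tabular.LimeTabularExplainer(
    X_train,
    feature_names=feature_names,
    class_names=class_names,
    mode="classification",
)

# LIME perturbs the instance, queries clf.predict_proba on the perturbations
# and fits a weighted local linear model around it.
exp = explainer.explain_instance(X_train[0], clf.predict_proba, num_features=5)
print(exp.as_list())       # top features with their local weights
# exp.show_in_notebook()   # richer visualization inside Jupyter
```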
There is a high-speed exact algorithm for tree ensemble methods (Tree SHAP arXiv paper). Fast C++
implementations are supported for XGBoost, LightGBM, CatBoost, and scikit-learn tree models!
• SHAP assigns each feature an importance
value for a particular prediction.
• Its novel components include: the
identification of a new class of additive
feature importance measures, and theoretical
results showing there is a unique solution in
this class with a set of desirable properties.
• Typically, SHAP values try to explain the
output of a model (function) as a sum of the
effects of each feature being introduced into
a conditional expectation. Importantly, for
non-linear functions the order in which
features are introduced matters.
SHAP can be installed from PyPI
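A minimal TreeExplainer sketch, assuming an already-trained tree ensemble (e.g., XGBoost or a scikit-learn forest) for a binary or regression task and a feature DataFrame `X`; the variable names are placeholders:

```python
# SHAP TreeExplainer: fast, exact Shapley values for tree ensembles.
# `model` and the feature DataFrame `X` are assumed to exist already.
import shap

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)     # per-sample, per-feature attributions

# Local explanation of the first prediction (call shap.initjs() in a notebook).
shap.force_plot(explainer.expected_value, shap_values[0], X.iloc[0])

# Global views: summary (beeswarm) and dependence plots.
shap.summary_plot(shap_values, X)
shap.dependence_plot(X.columns[0], shap_values, X)
```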
The following figure from the KDD '18 paper "Consistent Individualized Feature Attribution for Tree Ensembles" summarizes this nicely.
(Figure: SHAP Summary Plot and SHAP Dependence Plots.)
ELI5 is available from PyPI: `pip install eli5`. Check the docs for more.
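A small sketch of typical ELI5 usage, assuming a fitted scikit-learn classifier `clf`, a list of `feature_names` and a single row `x_row` to explain (placeholder names):

```python
# ELI5: inspect global weights / feature importances and one prediction.
# `clf`, `feature_names` and `x_row` are assumed placeholders.
import eli5

# Global view: model weights (linear models) or feature importances (trees).
print(eli5.format_as_text(eli5.explain_weights(clf, feature_names=feature_names)))

# Local view: contribution of each feature to a single prediction.
print(eli5.format_as_text(
    eli5.explain_prediction(clf, x_row, feature_names=feature_names)))

# In a Jupyter notebook, eli5.show_weights(clf) and eli5.show_prediction(clf, x_row)
# render the same information as HTML tables.
```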
SKATER provides a unified framework for both Global and Local Interpretation: Feature Importance, Partial Dependence Plots, and LIME integration for explainability.
Project Link:
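The same global views Skater offers (feature importance and partial dependence) can be sketched with scikit-learn's built-in inspection tools; this is not Skater's own API, just an illustration of the underlying idea, assuming a fitted `model` and data `X`, `y`:

```python
# Feature importance and partial dependence, illustrated with scikit-learn
# (not Skater's API). `model`, `X` and `y` are assumed to exist.
import matplotlib.pyplot as plt
from sklearn.inspection import PartialDependenceDisplay, permutation_importance

# Model-agnostic global feature importance via permutation.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"feature {i}: {result.importances_mean[i]:.4f}")

# Partial dependence of the prediction on the first two features.
PartialDependenceDisplay.from_estimator(model, X, features=[0, 1])
plt.show()
```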
Testing with Concept Activation Vectors (TCAV) is a new interpretability method to understand what signals your neural network model uses for prediction.
What's special about TCAV compared to other methods?
TCAV shows the importance of high-level concepts (e.g., color, gender, race) for a prediction class, which is how humans communicate!
TCAV gives an explanation that is generally true for a class of interest, beyond one image (a global explanation).
For example, for a given class, we can show how much race or gender was important for classifications in InceptionV3, even though neither race nor gender labels were part of the training input!
pip install tcav https://github.com/tensorflow/tcav
Concept Activation Vectors (CAVs) provide an interpretation of a neural net's internal state in terms of human-friendly concepts. TCAV uses directional derivatives to quantify the degree to which a user-defined concept matters to a classification result; for example, how sensitive a prediction of "zebra" is to the presence of stripes.
TCAV essentially learns 'concepts' from examples. For instance, TCAV needs a couple of examples of 'female', and something 'not female', to learn a "gender" concept. The goal of TCAV is to determine how much a concept (e.g., gender, race) was necessary for a prediction in a trained model, even if the concept was not part of the training.
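A conceptual sketch of the CAV step (not the tcav package's actual API): collect activations of one layer for concept examples and random counter-examples, train a linear classifier to separate them, take its normal vector as the CAV, and measure the directional derivative of the class logit along that vector. The layer choice and the helper functions `get_activations` / `grad_wrt_layer` are illustrative assumptions.

```python
# Conceptual CAV sketch (not the tcav library's API): learn a concept direction
# in activation space and measure how the class score changes along it.
# `get_activations`, `grad_wrt_layer`, `concept_imgs` and `random_imgs` are
# illustrative placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

# 1. Activations of an intermediate layer for concept vs. random examples.
acts_concept = get_activations(model, layer, concept_imgs)   # shape (n, d)
acts_random = get_activations(model, layer, random_imgs)     # shape (m, d)

X = np.vstack([acts_concept, acts_random])
y = np.array([1] * len(acts_concept) + [0] * len(acts_random))

# 2. The CAV is the normal of the linear boundary separating the two sets.
clf = LogisticRegression(max_iter=1000).fit(X, y)
cav = clf.coef_[0] / np.linalg.norm(clf.coef_[0])

# 3. Concept sensitivity: directional derivative of the class logit along the CAV;
#    grad_wrt_layer returns d(logit_target)/d(layer activations) for one input x.
sensitivity = float(np.dot(grad_wrt_layer(model, layer, x, target_class), cav))
# The TCAV score for a class is the fraction of its examples with positive sensitivity.
```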
Research question to consider …
All these frameworks are great and can bring interpretability to a great extent, but can non-expert consumers of AI models interpret these interpretability methods?
Summary
• Why is Explainable AI (XAI) important?
• Commonly used Model Agnostic XAI for ML models
• Commonly used Model Agnostic XAI for DL models
• Popular frameworks for XAI
• Can we evolve XAI and extend explainability to non-expert users?
Thank you
Aditya Bhattacharya
https://aditya-bhattacharya.net/
aditya.bhattacharya2016@gmail.com
https://www.linkedin.com/in/aditya-bhattacharya-b59155b6/