The document provides an overview of explainable AI (XAI). It discusses how XAI helps interpret machine learning models by explaining why and how they work. It outlines different categories of ML models and challenges around accuracy vs interpretability. The document then describes various XAI techniques like LIME, SHAP and global/local explanations that help address issues like trust, bias and fairness. Examples show how XAI tools can explain predictions for tasks like image classification.
An Introduction to XAI! Towards Trusting Your ML Models!
1. Explainable AI (XAI)
A quick guide to understanding how to interpret ML models
Mansour Saffar
ML Developer - AltaML Inc.
2. In the near future...
Image source:
https://www.scoopnest.com/user/NewYorker/684741246443778048-a-cartoon-by-paul-noth-find-more-cartoons-from-
this-week39s-issue-here
3. ML Model Categories
White Box Models
● Easy to interpret
● Sometimes cannot learn the patterns in the data well (low accuracy) due to their simplicity
Black Box Models
● Hard (or impossible) to interpret
● Usually more powerful and effective than white box models
○ e.g. neural networks
4. Accuracy vs Interpretability
Image source: https://towardsdatascience.com/interpretability-vs-accuracy-the-friction-that-defines-deep-learning-
dae16c84db5c
5. What is XAI?
Image source: https://github.com/LumousAI/Lantern
● Think of XAI as a lantern inside your ML model
● XAI helps you explain the results of your ML model
● You will know why and how your ML model works when you use XAI!
XAI Symbol!
Note: Actually, I made this symbol up :)
6. Some XAI Use Cases
Image source: https://blog.lifeextension.com/2019/01/machine-learning-and-medicine-is-ai.html
https://algorithmxlab.com/blog/the-big-problems-with-machine-learning-algorithms-in-finance/
https://www.forbes.com/sites/bernardmarr/2018/05/23/how-ai-and-machine-learning-are-transforming-law-firms-and-the-legal-
sector/#56d5191a32c3
https://datafloq.com/read/machine-learning-drive-autonomous-vehicles/3152
https://www.technologyreview.com/f/612915/chinas-military-is-rushing-to-use-artificial-intelligence/
Medicine Finance Legal
Autonomous Cars Military
8. Why do we need XAI?! (Medical Application)
Image sources: https://msaffarm.github.io/projects/ml-
mri/Sahebzamani.Saffar.HSZ.Jan.2019.pdf
https://artemisinccouk.wordpress.com/author/torak289/
Epilepsy Detection Model with Brain MRI Data
Brain MRI data → Complex ML model → Report: Patient is diagnosed with epilepsy with 85% confidence.
But why?! Can I trust this prediction?
9. Why do we need XAI?! (Finance Application)
Image sources:
https://artemisinccouk.wordpress.com/author/torak289/
https://www.finder.com/credit-report
Loan Model with Financial Records
Financial and demographic data → Complex ML model → Report: Customer is not eligible for the loan!
But why?!
10. XAI to Help With Legal Implications
● General Data Protection Regulation (GDPR)
● GDPR requires companies to provide explanations of their ML models to their customers.
● Legal implications of a wrong diagnosis in medical applications can be tough! (and of course the human cost is far worse!)
● How can the doctor trust the ML model predictions?
Can I trust the epilepsy model predictions?
What if the customer asks why he was rejected?
Image sources:
https://www.leaprate.com/financial-services/fines/occ-assesses-70-million-civil-money-penalty-
citibank/
https://venturebeat.com/2018/01/27/gdpr-a-playbook-for-compliance/
12. What Answers Does XAI Provide?
WHY
● Why did the model make that prediction?
HOW
● How can I correct an error?
WHEN
● When can I trust ML model predictions?
● When will the ML model fail to make the right prediction?
13. XAI Off vs XAI On!
Image source: Modified https://www.darpa.mil/attachments/XAIProgramUpdate.pdf
14. What Problems of ML can XAI Address?
TRUST
● Understand the decision-making process
● Make sure the model is looking at the right features
BIAS & FAIRNESS
● Understand which features are taken into account
● Detect biased patterns in the data
EXPLAINABILITY
● Explain the decision-making process
15. XAI Categories
Global Explanation
● Explain the overall behaviour of the ML model
Local Explanation
● Explain the prediction results for each desired instance
Model-Agnostic
● Does not care what the model is or how it works!
Model-Dependent
● Interpretations are based on the model's learning process
17. XAI Example 1 (Global Explanation)
● Let’s say you have a bike rental business and want to know how much the weather affects your business!
● Partial Dependence Plot (PDP) to the rescue!
● Shows how a change in one attribute (feature) changes your prediction
18. XAI Example 2 (Global Explanation)
● Let’s say you want to explain
the behaviour of a HUGE
random forest model
○ 10000 trees!
● Surrogate Model to the rescue
○ Train a simple, interpretable model (e.g. a decision tree) to mimic the forest’s predictions, then inspect that instead
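The surrogate idea fits in a few lines of scikit-learn. The dataset and model sizes here are made up for illustration; the key step is training the tree on the forest's predictions rather than the true labels:

```python
# Global surrogate sketch: fit a shallow decision tree to mimic a large
# random forest, then inspect the tree instead of the forest.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Train the surrogate on the forest's *predictions*, not the true labels
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, forest.predict(X))

# Fidelity: how often the shallow tree agrees with the forest
fidelity = accuracy_score(forest.predict(X), surrogate.predict(X))
print(f"surrogate fidelity: {fidelity:.2f}")
```

A high fidelity score means the depth-3 tree is a trustworthy stand-in for the 200-tree forest, and its handful of splits can be read directly as an explanation.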
19. XAI Example 3 (Local Explanation)
● Let’s say you trained an image classification model!
● How does the model know where the Labrador is?
● How does the model know there is a guitar in the picture?
Image sources:
https://github.com/marcotcr/lime
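LIME's core recipe — perturb the instance, query the black box, fit a locally weighted linear model — can be sketched from scratch on tabular data (this is an illustration of the idea, not the `lime` package's actual API; image explanation works the same way over superpixels):

```python
# From-scratch sketch of LIME's core idea on tabular data: perturb an
# instance, query the black box, fit a locally weighted linear model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import Ridge

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

x0 = X[0]                                    # the instance to explain
rng = np.random.default_rng(0)
Z = x0 + rng.normal(0, 0.5, size=(1000, 4))  # perturbed neighbours of x0
probs = black_box.predict_proba(Z)[:, 1]     # black-box outputs for them

# Weight each neighbour by its proximity to x0 (RBF kernel)
weights = np.exp(-np.sum((Z - x0) ** 2, axis=1) / 2.0)
local = Ridge(alpha=1.0).fit(Z, probs, sample_weight=weights)

# The linear coefficients approximate each feature's local effect
print("local feature effects:", np.round(local.coef_, 3))
```

The largest-magnitude coefficients point at the features that drove this one prediction — the tabular analogue of LIME highlighting the Labrador's pixels.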
21. Shapley Values
● Let’s share a ride to AltaML! Each person’s solo fare:
○ Mansour (Ma) => $40
○ Mohammad (Mo) => $5
○ Mina (Mi) => $25
Image sources:
https://www.barrieyellowtaxi.com/
22. Shapley Values
Coalition costs:
Ma => $40   Mo => $5   Mi => $25
(Ma, Mi) => $40   (Ma, Mo) => $40   (Mi, Mo) => $25
(Ma, Mi, Mo) => $40
Marginal contribution of each rider, by arrival order:
Order           Ma     Mo     Mi
(Ma, Mi, Mo)    40      0      0
(Ma, Mo, Mi)    40      0      0
(Mi, Ma, Mo)    15      0     25
(Mi, Mo, Ma)    15      0     25
(Mo, Mi, Ma)    15      5     20
(Mo, Ma, Mi)    35      5      0
Average      26.67   1.67  11.67
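The averages above can be reproduced exactly by enumerating all arrival orders, using the coalition costs from the slide:

```python
# Exact Shapley values for the ride-sharing example: average each rider's
# marginal contribution to the fare over all possible arrival orders.
from itertools import permutations

# Cost of each coalition's shared ride (from the slides)
cost = {
    frozenset(): 0,
    frozenset({"Ma"}): 40, frozenset({"Mo"}): 5, frozenset({"Mi"}): 25,
    frozenset({"Ma", "Mi"}): 40, frozenset({"Ma", "Mo"}): 40,
    frozenset({"Mi", "Mo"}): 25,
    frozenset({"Ma", "Mi", "Mo"}): 40,
}

players = ["Ma", "Mo", "Mi"]
shapley = {p: 0.0 for p in players}
orders = list(permutations(players))
for order in orders:
    seen = set()
    for p in order:
        # marginal cost this rider adds to the riders already in the cab
        shapley[p] += cost[frozenset(seen | {p})] - cost[frozenset(seen)]
        seen.add(p)
shapley = {p: total / len(orders) for p, total in shapley.items()}
print({p: round(v, 2) for p, v in shapley.items()})
# {'Ma': 26.67, 'Mo': 1.67, 'Mi': 11.67}
```

Note the three shares sum to $40, the full fare — the "efficiency" property that makes Shapley values attractive for attributing a model's prediction to its features.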
23. XAI Example 4 (Local Explanation)
● SHAP (SHapley Additive exPlanations)
● Its idea is based on Shapley values from game theory!
● Measures how much each feature contributes to the final prediction
● It’s like the ride-sharing example, but the features are the passengers and the model prediction is the amount paid!
24. XAI Toolsets and Libraries
● LIME
● SHAP
● Microsoft’s InterpretML
● H2O’s Driverless AI
● Google’s What-If Tool
● tf-explain (TensorBoard)
● IBM Watson
25. XAI Future
● Some models will be able to explain their own results! (think of it as saying ‘Analysis’ to robots in
Westworld)
● More interpretable models that you can interact with to modify (or improve) their results
● Debugging ML models will shift toward high-level (semantic) debugging, since you can tell which
parts of the model are not functioning properly!
● You might be able to inject your own knowledge into the model, since it is interpretable and you
know how it makes decisions!