Explainable AI makes algorithms transparent: their decisions can be interpreted, visualized, and explained, enabling fair, secure, and trustworthy AI applications.
2. Factors Driving the Rapid Advancement of AI
The third wave of AI:
• Symbolic AI – knowledge is represented as logic rules; no learning capability and poor handling of uncertainty.
• Statistical AI – statistical models for specific domains, trained on big data; no contextual capability and minimal explainability.
• Explainable AI – systems construct explanatory models, and learn and reason in new tasks and situations.
Key enablers: GPUs and on-chip neural networks, data availability, cloud infrastructure, and new algorithms.
3. There are two ways to provide explainable AI:
• Use machine learning approaches that are inherently explainable, such as decision trees, knowledge graphs, and similarity models.
• Develop new approaches to explain complicated neural networks.
What is XAI?
An AI system that explains its decision making is referred to as Explainable AI, or XAI. The goal of XAI is to provide verifiable explanations of how machine learning systems make decisions and to keep humans in the loop.
4. What is Explainable AI?
Black-box AI: data feeds a black-box model, which outputs an AI product with a decision or recommendation. This creates confusion with today's AI: "Why did you do that? Why did you not do that? When do you succeed or fail? How do I correct an error?"
Explainable AI: data feeds an explainable model, which outputs an AI product with a decision plus an explanation, and accepts feedback. Predictions are clear and transparent: "I understand why. I understand why not. I know why you succeed or fail. I understand, so I trust you."
5. Black-Box AI Creates Confusion and Doubt
A black-box AI model that takes in data and produces poor decisions raises questions across the organization:
• Business Owner: "Can I trust our AI decisions?"
• Customer Support: "How do I answer this customer complaint?"
• Data Scientists: "Is this the best model that can be built?"
• IT & Operations: "How do I monitor and debug this model?"
• End users: "Why am I getting this decision? How can I get a better decision?"
• Internal Audit, Regulators: "Are these AI system decisions fair?"
6. Why Do We Need It?
• Artificial intelligence is increasingly embedded in our everyday lives to assist humans in making decisions.
• These decisions range from trivial lifestyle choices to more complex ones such as loan approvals, investments, court decisions, and the selection of job candidates.
• Many AI algorithms are black boxes that are not transparent, which raises trust concerns. To trust these systems, humans want accountability and explanation.
7. Why Do We Need It?
• When the machine learning systems deployed around 2008 lived mostly within the products of tech-first companies (e.g., Google, YouTube), a false prediction merely resulted in a wrong recommendation to the application user.
• But when machine learning is deployed in industries such as the military, healthcare, or finance, a false prediction can lead to adverse consequences affecting many lives.
• Thus, we create AI systems that explain their decision making.
8. We are entering a new age of AI applications. Machine learning is the core technology, yet machine learning models are opaque, non-intuitive, and difficult for people to understand. Users of AI systems in DoD and non-DoD applications – transportation, security, medicine, finance, legal, and military – ask:
• Why did you do that?
• Why not something else?
• When do you succeed?
• When do you fail?
• When can I trust you?
• How do I correct an error?
9. Process of XAI
• The significant enabler of explainable AI is interpretability. It enables collaboration between human and artificial intelligence.
• Interpretability is the degree to which a human can understand the cause of a decision.
• It strengthens trust and transparency, explains decisions, fulfils regulatory requirements, and improves models.
• The stages of AI explainability are categorized into pre-modelling, explainable modelling, and post-modelling. They focus on explainability at the dataset stage and during model development.
10. "Explainability by Design" for AI Products
Explainable AI sits at the centre of a feedback loop spanning the model lifecycle: train, deploy, predict, monitor, and A/B test. It supports:
• Model diagnostics: root cause analytics, debugging, performance monitoring, fairness monitoring.
• Model evaluation: model comparison, cohort analysis, model debugging, model visualization.
• Compliance testing: QA, model launch signoff, model release management.
• Explainable decisions: API support.
11. Explainability Approaches
• The popular Local Interpretable Model-agnostic Explanations (LIME) approach provides an explanation for a single prediction of a model in terms of its input features, an explanation family, and an extraction mechanism.
• Post-hoc explainability approaches for an AI model create:
  • Individual prediction explanations using input features, influential concepts, or local decision rules.
  • Global prediction explanations using partial dependence plots, global feature importance, or global decision rules.
• The "build an interpretable model" approach creates inherently interpretable models:
  • Logistic regressions, decision trees, and generalized additive models (GAMs).
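The LIME idea above can be sketched in a few lines: perturb the instance, query the black box, weight samples by proximity, and fit an interpretable linear surrogate. This is a minimal NumPy sketch, not the real `lime` library; the `black_box` function and all parameter values are hypothetical.

```python
import numpy as np

# Hypothetical black-box model: we can only query its predictions.
def black_box(X):
    return X[:, 0] ** 2 + 3.0 * X[:, 1]

def lime_like_explain(instance, predict_fn, n_samples=500, width=0.5, seed=0):
    """Fit a locally weighted linear surrogate around `instance` and
    return its coefficients as per-feature local importances."""
    rng = np.random.default_rng(seed)
    # 1. Perturb the instance to sample its neighbourhood.
    X = instance + rng.normal(scale=width, size=(n_samples, instance.size))
    y = predict_fn(X)
    # 2. Weight each sample by its proximity to the instance (RBF kernel).
    d2 = ((X - instance) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2.0 * width ** 2))
    # 3. Weighted least-squares fit of an interpretable linear model.
    A = np.hstack([X, np.ones((n_samples, 1))])   # features + intercept
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return coef[:-1]                              # drop the intercept

instance = np.array([2.0, 1.0])
local_weights = lime_like_explain(instance, black_box)
# Near x = (2, 1) the model behaves like 4*x0 + 3*x1, and the surrogate
# coefficients recover approximately those local slopes.
```

The surrogate is faithful only near the chosen instance, which is exactly the "local" in LIME.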
12. Why Explainability: Improve the ML Model
Standard ML: data trains an ML model that emits predictions; quality is judged by generalization error alone.
Interpretable ML: data trains an ML model whose interpretability permits human inspection, which feeds model/data improvement back into training; quality is judged by generalization error plus human experience, yielding verified predictions.
13. Explanation Targets
• The target specifies the object of an explainability method, and it varies in type, scope, and complexity.
• The type of explanation target is often determined by the role-specific goals of end users.
• There are two types of targets, inside vs. outside, which can also be referred to as mechanistic vs. functional.
• AI experts require a mechanistic explanation of some component inside a model, e.g., to understand how the layers of a deep network respond to input data in order to debug or validate the model.
14. Explanation Targets
• In contrast, non-experts often require a functional explanation to understand how some output outside a model is produced.
• In addition, targets can vary in their scope. The outside-type targets are typically some form of model prediction; they can be either local or global explanations.
• The inside-type targets also vary depending on the architecture of the underlying model: they can be a single neuron or a layer in a neural network.
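The inside/outside distinction can be made concrete by recording intermediate activations during a forward pass. This toy two-layer network with made-up weights is only an illustration of what an inside-type target is; no real framework or model is assumed.

```python
import numpy as np

# Toy two-layer network with fixed, hypothetical weights.
W1 = np.array([[1.0, -1.0],
               [0.5,  2.0]])
W2 = np.array([[ 1.0],
               [-1.0]])

def forward(x, record):
    """Forward pass that records intermediate activations, so an
    inside-type target (a single neuron or a whole layer) can be
    inspected after the fact."""
    h = np.maximum(0.0, x @ W1)   # hidden layer with ReLU
    record["hidden"] = h          # inside-type target: a whole layer
    out = h @ W2
    record["output"] = out        # outside-type target: the prediction
    return out

record = {}
forward(np.array([[1.0, 2.0]]), record)
# record["hidden"][0, 1] is the activation of one specific neuron,
# the narrowest inside-type target.
```

Deep-learning frameworks expose the same idea through activation hooks rather than an explicit `record` dict.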
15. Explanation Drivers
• The most common type of driver is the set of input features to an AI model.
• Explaining an image classifier's predictions in terms of individual input pixels can result in explanations that are too noisy, too expensive to compute, and, more importantly, difficult to interpret.
• Alternatively, we can rely on a more interpretable representation of input features, known as super-pixels, in the case of image classifier predictions (as used by the LIME explainer).
• All factors that have an impact on the development of an AI model can be termed explanation drivers.
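To show why super-pixels shrink the explanation problem, here is a minimal sketch that coarsens an image into grid super-pixels by averaging. Real explainers such as LIME use proper segmentation algorithms (e.g., quickshift); the fixed grid here is a deliberate simplification.

```python
import numpy as np

def to_superpixels(img, block=4):
    """Coarsen a grayscale image into non-overlapping block x block
    super-pixels by averaging each block."""
    h, w = img.shape
    return img.reshape(h // block, block, w // block, block).mean(axis=(1, 3))

img = np.arange(64, dtype=float).reshape(8, 8)   # hypothetical 8x8 image
sp = to_superpixels(img, block=4)
# The 8x8 image becomes a 2x2 grid of super-pixels, so an explanation
# now has 4 drivers instead of 64 individual pixels.
```

Fewer, larger drivers make the resulting importance scores both cheaper to compute and easier for a human to read.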
16. Explanation Families
• A post-hoc explanation aims at communicating some information about how a target is caused by drivers for a given AI model.
• An explanation family must be chosen such that its information content is easily interpretable by the user.
• Importance scores – individual importance scores communicate the relative contribution made by each explanation driver to a given target.
• Decision rules – if-then rules where the outcome represents the prediction of an AI model and the condition is a simple function defined over input features.
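One common way to obtain importance scores of the kind described above is permutation importance: shuffle one feature at a time and measure how much the error grows. This is a minimal sketch; the `predict` function and the self-generated labels are hypothetical stand-ins for a real model and dataset.

```python
import numpy as np

# Hypothetical model: only feature 0 influences the prediction.
def predict(X):
    return 3.0 * X[:, 0]

def permutation_importance(predict_fn, X, y, seed=0):
    """Importance score per feature: the increase in mean squared error
    when that feature's column is shuffled, breaking its link to the
    target."""
    rng = np.random.default_rng(seed)
    base = np.mean((predict_fn(X) - y) ** 2)
    scores = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])   # destroy feature j's signal
        scores.append(np.mean((predict_fn(Xp) - y) ** 2) - base)
    return np.array(scores)

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 2))
y = predict(X)          # illustrative labels produced by the model itself
scores = permutation_importance(predict, X, y)
# scores[0] is large; scores[1] is 0 because feature 1 is never used.
```

Because the score is defined through model queries alone, the same routine works for any black-box predictor.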
17. Explanation Families
• Decision trees – unlike decision rules, they are structured as a graph where internal nodes represent conditional tests on input features and leaf nodes represent model outcomes. In a decision tree, each input example satisfies exactly one path from the root node to a leaf node.
• Dependency plots – they aim at communicating how a target's value varies as a given explanation driver's value varies; in other words, how a target's value depends on a driver's value.
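The dependency-plot family can be sketched as a partial dependence computation: sweep one driver over a grid while holding the empirical distribution of the other features fixed. The `predict` function below is a hypothetical fitted model used purely for illustration.

```python
import numpy as np

# Hypothetical fitted model: linear in feature 0, nonlinear in feature 1.
def predict(X):
    return 2.0 * X[:, 0] + np.sin(X[:, 1])

def partial_dependence(predict_fn, X, feature, grid):
    """Average prediction as `feature` is swept over `grid`, holding
    the empirical distribution of the other features fixed."""
    values = []
    for v in grid:
        Xv = X.copy()
        Xv[:, feature] = v           # force the driver to the grid value
        values.append(predict_fn(Xv).mean())
    return np.array(values)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
grid = np.linspace(-2.0, 2.0, 5)
pd0 = partial_dependence(predict, X, feature=0, grid=grid)
# Plotted against `grid`, pd0 is a straight line with slope 2, exposing
# the model's linear dependence on feature 0.
```

Plotting `pd0` against `grid` gives the dependency plot itself; the averaging step is what distinguishes it from simply evaluating the model at a few points.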
18. Conclusion
• To explain pre-developed AI models, multiple methods have been proposed.
• They vary in terms of their explanation target, explanation drivers, explanation family, and extraction mechanism.
• XAI is an active research area, with new, improved methods being developed continually.
• Such diversity of choices can make it challenging for XAI experts to adopt the most suitable approach for a given application.
• This challenge is addressed by presenting a snapshot of the most notable post-modelling explainability methods.
19. To assist you with our services,
please reach us at
hello@mitosistech.com
www.mitosistech.com
IND: +91-78240 35173
US: +1-(415) 251-2064