1. Explainability methods
Flanders AI Forum – 6 June 2023
Katrien Verbert
Augment/HCI – Department of Computer Science – KU Leuven
@katrien_v https://augment.cs.kuleuven.be
2. What is explainable AI?
And why does it matter?
Src: https://www.ai4science.caltech.edu/xai
3. Explainable Artificial Intelligence (XAI)
Narrow definition
Techniques and methods that
make a model’s decision
understandable by people
Broad definition
Everything that makes AI
understandable (e.g. also
including data, functions,
performance, etc.)
Src: Vera Liao
5. Bhattacharya, A. (2022). Applied Machine Learning Explainability Techniques: Make ML models explainable and trustworthy for practical applications using LIME, SHAP, and more. Packt Publishing Ltd.
7. Five ways XAI can benefit organisations
https://www.mckinsey.com/capabilities/quantumblack/our-insights/why-businesses-need-explainable-ai-and-how-to-deliver-it
Technologists
1. More efficiently monitor, maintain, and
improve AI systems
Business professionals
2. Trust AI outputs, so they increasingly adopt
AI tools
3. Apply knowledge about the why of an AI
prediction or recommendation to identify
effective interventions
4. Assess whether AI applications meet business
objectives
Legal and risk professionals
5. See whether technology and associated
workflows comply with applicable
regulations and are in line with customer
expectations
8. Stiglic, G., Kocbek, P., Fijacko, N., Zitnik, M., Verbert, K., & Cilar, L. (2020). Interpretability of machine learning-based prediction models in healthcare. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 10(5), e1379.
16. Counterfactual explanations
¤ A counterfactual explanation describes a causal situation in the
form: “If X had not occurred, Y would not have occurred”
¤ A counterfactual explanation of a prediction describes the smallest
change to the feature values that changes the prediction to a
predefined output.
¤ There are both model-agnostic and model-specific counterfactual
explanation methods
Example: You were denied a loan because your annual
income is €30,000. If your income had been €45,000, you
would have been offered a loan.
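The definition and loan example above can be sketched as a simple model-agnostic counterfactual search: try candidate values for one feature, nearest first, until the prediction flips. The toy loan_model, the debt feature, and the candidate income grid below are illustrative assumptions, not from the talk:

```python
# Minimal sketch of a model-agnostic counterfactual search.
# The loan model and candidate grid are illustrative assumptions.

def loan_model(income: float, debt: float) -> str:
    """Toy classifier: approve when income comfortably exceeds debt."""
    return "approved" if income - 0.5 * debt >= 40_000 else "denied"

def counterfactual(model, instance, feature, candidates, target):
    """Smallest change to one feature that flips the prediction to the
    target class, or None if no candidate works."""
    for value in sorted(candidates, key=lambda v: abs(v - instance[feature])):
        changed = dict(instance, **{feature: value})
        if model(**changed) == target:
            return changed
    return None

applicant = {"income": 30_000, "debt": 10_000}
cf = counterfactual(loan_model, applicant, "income",
                    candidates=range(30_000, 60_001, 1_000),
                    target="approved")
print(cf)  # → {'income': 45000, 'debt': 10000}
```

Because candidates are tried in order of distance from the original value, the first match is the minimal change, mirroring the €30,000 → €45,000 example above.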
17. Open Challenges in XAI
¤ Difficult to understand visualisations for non-expert users
¤ Lack of stakeholder participation
¤ Lack of actionable explanations
¤ Lack of contextual explanations
24. Explanations for end users
[Figure comparing three explanation designs: word cloud, feature importance, and feature importance + %]
Szymanski, M., Vanden Abeele, V., & Verbert, K. (2022). Explaining health recommendations to lay users: The dos and don'ts. APEx-UI, IUI 2022.
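The feature-importance scores shown in such interfaces can be produced in several ways; one common model-agnostic option is permutation importance. A minimal sketch, where the toy health-risk model and data are illustrative assumptions, not from the study:

```python
import random

# A minimal sketch of permutation feature importance, one common way
# to produce per-feature scores like those shown to end users.
# The toy model and data are illustrative assumptions.

def model(row):
    """Toy health-risk score: uses age and bmi, ignores height."""
    return 0.7 * row["age"] + 0.3 * row["bmi"]

def accuracy(rows, targets):
    """Fraction of predictions within 1.0 of the target value."""
    return sum(abs(model(r) - t) < 1.0 for r, t in zip(rows, targets)) / len(rows)

def permutation_importance(rows, targets, feature, seed=0):
    """Drop in accuracy after shuffling one feature's values."""
    rng = random.Random(seed)
    values = [r[feature] for r in rows]
    rng.shuffle(values)
    shuffled = [dict(r, **{feature: v}) for r, v in zip(rows, values)]
    return accuracy(rows, targets) - accuracy(shuffled, targets)

rows = [{"age": a, "bmi": 22.0, "height": 170} for a in range(20, 70, 5)]
targets = [model(r) for r in rows]  # perfect labels for the toy model
print(permutation_importance(rows, targets, "height"))  # 0.0: unused feature
print(permutation_importance(rows, targets, "age"))     # > 0 for most seeds
```

A feature whose shuffling barely hurts accuracy contributes little to the model, which is exactly the ranking a feature-importance display communicates.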
27. Results
¤ Hybrid explanations were rated as more useful than both
the textual and the visual explanations.
¤ Users with a higher need for cognition (NFC) tended to
score the hybrid explanations lower on trust, transparency
and usefulness compared to the unimodal explanations.
29. Actionable explanations
Charleer, S., Gutiérrez Hernández, F., & Verbert, K. (2019). Supporting job mediator and job seeker through an actionable dashboard. In Proceedings of the 24th ACM Conference on Intelligent User Interfaces (IUI 2019).
30. Model behaviour
Rojo, D., Htun, N. N., Parra, D., De Croon, R., & Verbert, K. (2021). AHMoSe: A knowledge-based visual
support system for selecting regression machine learning models. Computers and Electronics in
Agriculture, 187, 106183.
32. Data-centric explanations
Bhattacharya, A., Ooge, J., Stiglic, G., & Verbert, K. (2023, March). Directive Explanations for Monitoring the Risk of Diabetes Onset: Introducing Directive Data-Centric Explanations and Combinations to Support What-If Explorations. In Proceedings of the 28th International Conference on Intelligent User Interfaces (pp. 204-219).
33. Some take-away messages
¤ Involving end users is key to designing interfaces
tailored to the needs of non-expert users
¤ Distinguish actionable vs non-actionable parameters
¤ Data-centric explanations provide a powerful solution
¤ Interaction is a powerful means to support model understanding
¤ Need for personalisation and simplification