Research work presented at the Ontology Summit 2019 (http://ontologforum.org/index.php/ConferenceCall_2019_03_13) in the Narrative & Explanation sessions. An overview of how to automatically build explanations from knowledge graphs, with examples of applications.
1. BUILDING INTELLIGENT SYSTEMS
(THAT CAN EXPLAIN)
Ilaria Tiddi
Faculty of Computer Science & Faculty of Behavioural Sciences
Vrije Universiteit Amsterdam
@IlaTiddi
2. DISCLAIMER
This is not a presentation on eXplainable AI (XAI)
...but rather on systems using data to make sense of other data
3. ● Why
● What
● Which
● How
● Examples
● Lessons learnt
GENERATING EXPLANATIONS
4. Why do we need (systems generating) explanations?
● to learn new knowledge
● to find meaning (reconciling contradictions in our knowledge)
● to socially interact (creating a shared meaning with others)
● ...and because GDPR says so
Users have a “right to explanation”
for any decision made about them
EXPLANATIONS: WHY?
5. Different disciplines, common features [1]:
● Generation of coherence between old and new knowledge
● Same elements (theory, anterior, posterior, circumstances)
● Same processes (psychological, linguistic)
[1] Tiddi et al. (2015), An Ontology Design Pattern to Define Explanations, K-CAP2015.
[Timeline figure: theories of explanation over time: Plato & Aristotle (V-IV BC), the Determinists (XVII AC), Weber & Durkheim, Charles Peirce, Hempel & Oppenheim; dates shown: 1903, 1948, 1964, 2015.]
EXPLANATIONS: WHAT/1
7. Which types?
● factual : why specific ‘everyday’ events occur
● scientific : explaining general events (e.g. environmental phenomena)
● behavioural/reason : explaining behaviour and decisions (intentional)
Which processes?
● cognitive : determining the causes (explanans) of an event (explanandum) and
relating these to a particular context
● social : transferring knowledge between explainer and explainee
EXPLANATIONS: WHICH?
8. Which audience?
● engineers/scientists/experts
● end-users
Which characteristics?
● Transparency (traceability + verifiability)
● Intelligibility + clarity
EXPLANATIONS: WHICH?
Which language?
● Visual
● Written
● Spoken
9. Reuse!! Existing knowledge sources serve as background knowledge (the
“old”) to generate explanations (the “new”):
● Plenty of available sources (KGs, datahubs, open data...)
● Connected, centralised hubs
● Multi-domain, allowing serendipity
EXPLANATIONS: HOW?
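The reuse idea above can be sketched in a few lines of Python: background knowledge is a set of triples (playing the role of a knowledge graph fetched from DBpedia or a data hub), and a "new" observation, a group of entities, is explained by the "old" facts its members share. All names and triples below are hypothetical toy data, not a real query client.

```python
# Toy sketch of "reuse as background knowledge": triples stand in for a
# knowledge graph fetched from DBpedia or a data hub (all data hypothetical).
from collections import Counter

BACKGROUND = [
    ("Amsterdam", "country", "Netherlands"),
    ("Utrecht", "country", "Netherlands"),
    ("Amsterdam", "type", "City"),
    ("Utrecht", "type", "City"),
    ("Rembrandt", "type", "Painter"),
]

def shared_background(entities, triples):
    """(property, value) pairs that every entity in the group shares:
    the 'old' knowledge that makes sense of the 'new' grouping."""
    counts = Counter()
    for s, p, o in triples:
        if s in entities:
            counts[(p, o)] += 1
    return [pv for pv, n in counts.items() if n == len(entities)]

print(shared_background({"Amsterdam", "Utrecht"}, BACKGROUND))
# [('country', 'Netherlands'), ('type', 'City')]
```

Because hubs like the Linked Data Cloud are multi-domain, the same lookup can surface shared facts from an entirely different domain than the one the grouping came from, which is where the serendipity mentioned above comes in.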
11. [2] Tiddi (2016), Explaining Data Patterns using Knowledge from the Web of Data, Ph.D. thesis.
Demo: http://dedalo.kmi.open.ac.uk/
Explaining web searches
using the Linked Data Cloud
Why do people search for “A Song of Ice and
Fire” only in certain periods?
EXPLAINING DATA PATTERNS
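A heavily simplified sketch of the Dedalo-style idea: the real system iteratively traverses Linked Data paths, while here candidate explanations are flat (property, value) pairs ranked by F1 score against pattern membership. The weeks, events and triples are invented for illustration only.

```python
# Toy sketch inspired by Dedalo's inductive search (not the actual system:
# Dedalo traverses Linked Data paths iteratively). Candidate explanations
# are flat (property, value) pairs scored by F1 against pattern membership.

def f1(expl, pattern, triples):
    covered = {s for s, p, o in triples if (p, o) == expl}
    tp = len(covered & pattern)
    if tp == 0:
        return 0.0
    precision, recall = tp / len(covered), tp / len(pattern)
    return 2 * precision * recall / (precision + recall)

def best_explanation(pattern, triples):
    candidates = {(p, o) for _, p, o in triples}
    return max(candidates, key=lambda e: f1(e, pattern, triples))

# Hypothetical data: weeks with search peaks for "A Song of Ice and Fire"
TRIPLES = [
    ("week14", "hasEvent", "GoT_season_premiere"),
    ("week40", "hasEvent", "GoT_season_premiere"),
    ("week14", "season", "spring"),
    ("week02", "season", "winter"),
]
print(best_explanation({"week14", "week40"}, TRIPLES))
# ('hasEvent', 'GoT_season_premiere')
```

The F-measure rewards explanations that cover the pattern (recall) without also covering everything outside it (precision), which is why a season premiere beats a generic property like the season of the year.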
12.
Explaining eco-demographics
using the Linked Data Cloud
Why are women in the yellow countries less
educated?
EXPLAINING DATA PATTERNS
13. Explaining user online activities
with DBpedia, recommending
Open University courses
[3] http://afel-project.eu
EXPLAINING BEHAVIOURS
14. Using identity links to find:
● The NYT dataset is about places in
the US (trivial)
● The Reading Experience Dataset is
about poets/novelists who
committed suicide (less trivial)
[4] Tiddi (2014), Quantifying the bias in data links (EKAW2014)
[Diagram: datasets A and B connected by identity links (owl:sameAs, skos:exactMatch, ...); projection of B in A.]
EXPLAINING BIAS IN DATASETS
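The projection idea can be sketched as follows: follow identity links from dataset B into dataset A and summarise which property values the linked entities concentrate on (the "NYT is about US places" kind of finding). The links and triples below are toy data, not the actual datasets.

```python
# Toy sketch of projecting dataset B into dataset A via identity links
# (owl:sameAs, skos:exactMatch); all entities and triples are invented.
from collections import Counter

SAME_AS = {  # hypothetical identity links from B's entities to A's
    "b:doc1": "a:NewYork",
    "b:doc2": "a:Boston",
    "b:doc3": "a:Chicago",
}

A_TRIPLES = [
    ("a:NewYork", "country", "US"),
    ("a:Boston", "country", "US"),
    ("a:Chicago", "country", "US"),
    ("a:Paris", "country", "France"),
]

def project(links, a_triples, prop):
    """Distribution of `prop` values in A over the image of B's entities."""
    image = set(links.values())
    return Counter(o for s, p, o in a_triples if s in image and p == prop)

print(project(SAME_AS, A_TRIPLES, "country"))
# Counter({'US': 3})
```

A skewed distribution in the projection (here, everything landing on one country) is exactly the kind of bias signal the slide describes.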
15. Using open data (DBpedia,
MK:DataHub) to enhance
smart-city applications
[5] Tiddi et al. (2018), Allowing exploratory search from podcasts: the case of Secklow Sounds Radio (ISWC2018)
EXPLAINING RADIO CONTENTS
16. Semantic mapping with
DBpedia, ShapeNet and ConceptNet
EXPLAINING SCENES IN MOTION
[6] Chiatti et al., Task-agnostic, ShapeNet-based Object Recognition for Mobile Robots, DARLI-AP 2019 (EDBT/ICDT 2019)
17. Explaining and rebalancing
LSTM networks using linguistic
corpora (e.g. FrameNet)
[7] Mensio et al., Towards Explainable Language Understanding for Human Robot Interaction
EXPLAINING NEURAL ATTENTIONS
18. Cooperation Databank: 50
years of scientific studies on
human cooperation
Scholarly KGs (e.g. Scigraph) to
support systematic
reviews/meta-analyses
[8] https://amsterdamcooperationlab.com/databank/
EXPLAINING SCIENTIFIC RESEARCH
19. Bringing together social and
computer scientists
Reflect on the threats and
misuse of our technologies
[9] https://kmitd.github.io/recoding-black-mirror/
EXPLAINING ETHICS TO MACHINES?
20. Sharing and reuse are the key to explainable systems
● Lots of data to build the background knowledge
● Lots of theories (e.g. insights from the social/cognitive sciences [10])
(My) desiderata:
+ cross-disciplinary discussions
+ formalised common-sense knowledge (Web of entities, Web of actions)
+ links between data, allowing serendipitous knowledge discovery
SOME TAKEAWAYS
[10] Tim Miller (2018), Explanations in artificial intelligence: Insights from the social sciences, Artificial Intelligence.