1) The document discusses building intelligent systems that can explain themselves and their decisions.
2) It proposes reusing existing knowledge sources on the web as background knowledge for generating explanations.
3) Several examples are provided of different types of explanations that could be generated by systems, such as explaining behaviors, scenes, neural attentions, inconsistencies, and more.
1. BUILDING INTELLIGENT SYSTEMS
(THAT CAN EXPLAIN)
Ilaria Tiddi
KR&R group, Faculty of Computer Science, VU Amsterdam
Cooperation Lab, Faculty of Behavioural and Movement Sciences
VU Amsterdam
2. Disclaimer
This is NOT a presentation on eXplainable AI (XAI) ...
...but rather on systems making sense of complex data
…we can argue at Q&A time about whether they somewhat overlap
3. WHY DO WE NEED (SYSTEMS THAT) EXPLAIN?
● Learn new knowledge
● Find meaning: we reconcile the contradictions in our knowledge
● Socially interact: we create a shared meaning, we change/influence others’ beliefs
● ...and because the GDPR says so: users have a “right to explanation” for any decisions made about them
4. DEFINING EXPLANATIONS (ε)
Different disciplines, common features [1]:
● Generation of coherence between old and new knowledge
● Same elements (theory, anterior, posterior, circumstances)
● Same processes (psychological, linguistic)
[Timeline figure: Plato & Aristotle (5th–4th c. BC); the Determinists (17th c.); Charles Peirce (1903); Hempel & Oppenheim (1948); Weber & Durkheim (1964); 2015: ?]
[1] Tiddi et al. (2015), An Ontology Design Pattern to Define Explanations, K-CAP 2015.
6. RESEARCH GOAL: INTELLIGENT SYSTEMS THAT CAN EXPLAIN
Which types?
● factual ε: why specific ‘everyday’ events occur
● scientific ε: generalising scientific theories
● reason ε: explaining behaviour and decision making
Which processes?
1) cognitive: determining the causes (explanans) of an event (explanandum) and relating these to a particular context
2) social: transferring knowledge between explainer and explainee
7. RESEARCH GOAL: INTELLIGENT SYSTEMS THAT CAN EXPLAIN
Which audience?
● engineers/scientists/experts
● end-users
Which characteristics?
● Transparency (traceability + verifiability)
● Intelligibility + explainability
Which language?
● Visual
● Written
● Spoken
8. APPROACH: REUSE AVAILABLE KNOWLEDGE SOURCES
Existing knowledge sources can serve as the background (the “old”) knowledge to generate explanations:
● Plenty of available sources (not only RDF...)
● Connected, centralised hubs
● Multi-domain (serendipity!)
9. EXAMPLE: FACTUAL, WRITTEN ε
Generating explanations from the Web of Data [2]: why do people search for “A Song of Ice and Fire” only in certain periods?
[2] Tiddi (2016), Explaining Data Patterns using Knowledge from the Web of Data, Ph.D. thesis. Demo: http://dedalo.kmi.open.ac.uk/
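A minimal sketch of the kind of background-knowledge lookup Dedalo automates, assuming the SPARQLWrapper library and DBpedia as the Linked Data source; the query and property chosen below are illustrative, not the system’s actual pipeline:

```python
# Hypothetical sketch: pull background facts about the trend's topic
# from DBpedia, one of the LOD sources Dedalo traverses when searching
# for candidate explanations (e.g. a TV adaptation whose air dates
# coincide with the search peaks).
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setQuery("""
    PREFIX dbr: <http://dbpedia.org/resource/>
    PREFIX dct: <http://purl.org/dc/terms/>
    SELECT ?subject WHERE {
        dbr:A_Song_of_Ice_and_Fire dct:subject ?subject .
    } LIMIT 20
""")
sparql.setReturnFormat(JSON)

for row in sparql.query().convert()["results"]["bindings"]:
    # Each linked category is a candidate explanans, to be scored
    # against the shape of the search-volume time series.
    print(row["subject"]["value"])
```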
10. EXAMPLE: REASON, VISUAL ε
Explaining behaviours and making recommendations to self-learners using online resources [3]
[3] http://afel-project.eu
11. EXAMPLE: REASON, SPOKEN ε
Robots finding explanations for their behaviour in a smart-city datahub [4]
[4] http://sciroc.eu
[Shameless advert] 1st ERL Smart Cities Robotics Challenge, 16–22/09/2019, Milton Keynes, UK. No need to have a robot!!!
12. EXAMPLE: FACTUAL & REASON, SPOKEN ε
Explaining scenes in motion using ShapeNet [5] as background knowledge (and YOLO [6] for pre-processing)
[5] http://www.shapenet.org
[6] https://pjreddie.com/darknet/yolo/
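A minimal sketch of the grounding step, assuming YOLO has already produced (label, bounding box) detections; the mapping table and the explanation helper are hypothetical illustrations rather than the project’s code, with the synset ids taken from ShapeNetCore’s category identifiers:

```python
# Hypothetical sketch: ground YOLO detector labels in ShapeNet synsets
# so the robot can attach common-sense knowledge to objects on its map.
YOLO_TO_SHAPENET = {
    "chair": "03001627",   # ShapeNetCore synset id for 'chair'
    "laptop": "03642806",  # 'laptop, laptop computer'
}

def explain_detection(label: str, box: tuple) -> str:
    """Turn one (label, box) detection into a spoken explanation."""
    synset = YOLO_TO_SHAPENET.get(label)
    if synset is None:
        return f"I see an unknown object at {box}."
    # In the full system, the synset id indexes ShapeNet's rich
    # annotations (parts, typical sizes, poses) to classify the object
    # against the robot's sensory information.
    return (f"I see a {label} at {box}; ShapeNet category {synset} "
            f"tells me how such objects are typically shaped and used.")

print(explain_detection("chair", (120, 80, 60, 90)))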
13. EXAMPLE: REASON, VISUAL ε
Explaining neural attentions: a multi-layer LSTM network to understand NL robotic commands [8]
Avoiding training biases using linguistic corpora (FrameNet [7]) combined with domain-specific datasets
[7] https://framenet.icsi.berkeley.edu/fndrupal/
[8] Mensio et al. (2018), A Multi-layer LSTM-based Approach for Robot Command Interaction Modeling, Language and Robotics (LangRobo) workshop at IROS 2018.
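A minimal PyTorch sketch of the general idea, not the paper’s exact architecture: a two-layer LSTM command encoder with an attention layer whose per-token weights can be read off and inspected for training bias:

```python
import torch
import torch.nn as nn

class AttnLSTM(nn.Module):
    """Two-layer LSTM encoder with inspectable token-level attention."""
    def __init__(self, vocab_size, emb_dim=64, hid_dim=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hid_dim, num_layers=2, batch_first=True)
        self.attn = nn.Linear(hid_dim, 1)  # one score per time step

    def forward(self, tokens):                 # tokens: (batch, seq)
        h, _ = self.lstm(self.emb(tokens))     # h: (batch, seq, hid)
        weights = torch.softmax(self.attn(h).squeeze(-1), dim=-1)
        context = (weights.unsqueeze(-1) * h).sum(dim=1)
        return context, weights                # weights explain the focus

model = AttnLSTM(vocab_size=1000)
cmd = torch.randint(0, 1000, (1, 5))           # e.g. "go to the red door"
_, attn = model(cmd)
print(attn)  # which tokens the network attends to; skew hints at bias
```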
14. EXAMPLE: FACTUAL, WRITTEN ε
Explaining inconsistencies using an autonomous agent in a smart office [9]
Monitoring Health & Safety using SHACL-based model checking and behaviour trees
Centralised data integration, processing and reasoning
[9] Bastianelli et al. (2018), Meet HanS, the Health&Safety autonomous inspector, Posters & Demos track at ISWC 2018.
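A minimal sketch of SHACL-based checking, assuming the pySHACL library; the office vocabulary (ex:FireExtinguisher, ex:blockedBy) and the constraint are hypothetical stand-ins for HanS’s actual Health & Safety model:

```python
from rdflib import Graph
from pyshacl import validate

# Sensor observations integrated into the office's data graph.
data = Graph().parse(data="""
@prefix ex: <http://example.org/office#> .
ex:extinguisher1 a ex:FireExtinguisher ;
    ex:blockedBy ex:box3 .
""", format="turtle")

# A Health & Safety rule expressed as a SHACL shape.
shapes = Graph().parse(data="""
@prefix sh: <http://www.w3.org/ns/shacl#> .
@prefix ex: <http://example.org/office#> .
ex:ExtinguisherShape a sh:NodeShape ;
    sh:targetClass ex:FireExtinguisher ;
    sh:property [
        sh:path ex:blockedBy ;
        sh:maxCount 0 ;
        sh:message "A fire extinguisher must not be blocked." ;
    ] .
""", format="turtle")

conforms, _, report = validate(data, shacl_graph=shapes)
print(conforms)  # False -> an inconsistency to explain
print(report)    # written text the monitoring UI can display
```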
15. MACHINE EXPLANATIONS NEED MACHINE ETHICS
Re-coding Black Mirror [10] workshops: bringing social and computer scientists together to understand the threats of their own technologies, and to raise awareness of the explainability of their methods
An Ethics-by-Design methodology
[10] https://kmitd.github.io/recoding-black-mirror/
16. WHAT ABOUT SCIENTIFIC EXPLANATIONS?
Databank
● Collection of meta-analyses to study human cooperation
● 3.5k works on social dilemmas (= benefitting others vs. self-interest)
My goal
● Creation of a research platform generating explanations for human cooperation (+ search facilities)
● Generalising the methodology to the Life & Medical Sciences (long-term)
Start by asking the question: why do we need systems that explain? (“Systems that” is in brackets because it is the same reason why humans need explanations.)
The first question is what is meant by explanation. Looking at history (work done as part of my PhD; we use ε as the symbol for “the concept of explanation”).
There is no single definition; people have looked at it from the perspective of their own discipline (hence the colours).
Plato & Aristotle (connecting Forms and facts through logos vs. deducing the causes of why something happened); the Determinists (Descartes, Leibniz, Newton, Huygens...): a deductive process.
Peirce, Lectures on Pragmatism (explanation = deduction + induction).
Carl Hempel & Oppenheim: the Deductive-Nomological / statistical model of explanation. Weber & Durkheim (justifying social facts). I put myself there just in case.
Removing some doubts: this is how I intend it, but it is arguable. Interpretation is often used for explanation but, in my opinion, there is something like a subjective aspect added. People also talk about justification, and about “...ility” terms (a degree of).
Once we give some definitions, the goal is finding out how to design and implement systems that generate explanations (“that can explain”). A number of subquestions arise / a number of things are needed to build such systems: which types, which processes. Reason = intentional; factual/scientific = unintentional.
Processes = one cognitive and one social.
But also things like audience and language, as these can change the form in which the explanation is generated/expressed. Scientists or researchers might want traceability, as this guarantees transparency; end users might prefer simple explanations to complicated ones.
The approach I have been using is the reuse of external knowledge to bring in the background (the “old”) knowledge, and this is likely to be the main difference with XAI. Today billions of heterogeneous data sources exist (stored/real-time, personal/public terminals…). We produce them: smart cities, the LOD, Google just released...
I am just going through some examples of systems generating explanations…
Dedalo = the system I developed during my PhD… It used the LOD (the big cloud of bubbles on the previous slide) as background knowledge to explain Google Trends. Trends = how much a term is searched over time (10 years). We found trends with patterns (repeated peaks) and tried to explain why; explanations were presented to the user in natural language.
A project I was part of last year: we built a browser plugin to support “self-learners”, visually explaining their behaviours and recommending courses to improve. You can use it too!
A different example: a project started this year to organise a robotics competition in a smart city. MK (Milton Keynes) was part of a big-data infrastructure project (2014–2017); we built a Datahub, a large-scale infrastructure aggregating the city’s heterogeneous data.
Idea of SciRoc: robots will use the info in the Datahub for their tasks, and my research was about helping robots find explanations for their behaviours in the Datahub. No idea who’s going to do that now :)
Work of an RA who worked with me this summer on semantic mapping (introducing common-sense knowledge into a robot’s map).
Avoid time-expensive model training → use YOLO for segmentation.
Combining ShapeNet (a richly annotated, large-scale dataset of 3D shapes) with the robot’s sensory info to perform object classification.
Another RA worked on an LSTM to parse spoken robotic commands.
Analysis of the attention layers: semantic parsing on the small dataset is extremely biased by the small quantity of data; trying to use FrameNet to improve the model = generating explanations using FrameNet.
That (supposedly) goes on a monitoring UI for the security staff to go and repair the problem.
One final thing. The nice thing is the use of vignettes.
If you noticed that we were missing one type of ε, that is normal → I will do it here! First: the Databank, then we try to generalise.