Presentation of the paper at the workshop "HQA 2018: First International Workshop on Hybrid Question Answering with Structured and Unstructured Knowledge"
Part of WWW'18, April 23, 2018, Lyon, France
Multi-turn QA: A RNN Contextual Approach to Intent Classification for Goal-oriented Systems
1. Multi-turn QA: A RNN Contextual Approach to Intent Classification for Goal-oriented Systems
Martino Mensio
Giuseppe Rizzo
Maurizio Morisio
HQA 2018 @ WWW2018
23 April 2018
Lyon, FR
2. General idea
QA and multi-turn interactions:
- QA systems usually work only in single-turn mode
- goal-oriented systems support multiple turns via rule-based dialog management
Idea: provide a dynamic, context-based sentence classification:
- demonstrated in a goal-oriented system
- extensible to general QA systems
5. Background: QA agents
- complex interrogations over complex knowledge, e.g. “List the movies whose music composer’s honorary title is BAFTA Award for Best Film Music”
- one question → one answer
6. Background: Goal-oriented agents
- main focus: not only questions but also actions
- limited search capabilities: a fixed API set → a fixed set of intents
- multi-turn bidirectional QA: the agent can ask back for missing parameters
- KB content changes frequently
11. Idea
Extend [1] by:
- detecting changes of intent
- capturing dependencies between intents
- considering the agent's words
[1] Liu, B. and Lane, I. (2016). Attention-based recurrent neural network models for joint intent detection and
slot filling. Proceedings of The 17th Annual Meeting of the International Speech Communication Association.
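The core idea above (a recurrent state carries the interaction context across turns, so each sentence is classified in context rather than in isolation) can be sketched with a toy, untrained GRU. Everything here (the sizes, the random initialisation, the single softmax output layer) is illustrative and not the actual architecture of [1]:

```python
import numpy as np

rng = np.random.default_rng(0)

EMB, HID, N_INTENTS = 8, 16, 3  # toy sizes; a real model uses pretrained embeddings

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    """Minimal GRU cell: one step per dialogue turn."""
    def __init__(self, emb, hid):
        s = 1.0 / np.sqrt(hid)
        self.Wz = rng.uniform(-s, s, (hid, emb + hid))  # update gate weights
        self.Wr = rng.uniform(-s, s, (hid, emb + hid))  # reset gate weights
        self.Wh = rng.uniform(-s, s, (hid, emb + hid))  # candidate state weights

    def step(self, x, h):
        xh = np.concatenate([x, h])
        z = sigmoid(self.Wz @ xh)                       # update gate
        r = sigmoid(self.Wr @ xh)                       # reset gate
        h_tilde = np.tanh(self.Wh @ np.concatenate([x, r * h]))
        return (1 - z) * h + z * h_tilde

def classify_turns(turn_vectors, cell, W_out):
    """Run the GRU over the turn sequence; emit one intent distribution per turn."""
    h = np.zeros(HID)
    intents = []
    for x in turn_vectors:
        h = cell.step(x, h)                             # context carried across turns
        logits = W_out @ h
        p = np.exp(logits - logits.max())               # softmax over intents
        intents.append(p / p.sum())
    return intents

# toy dialogue: 4 turns (user and agent turns encoded the same way)
turns = [rng.normal(size=EMB) for _ in range(4)]
cell = GRUCell(EMB, HID)
W_out = rng.normal(size=(N_INTENTS, HID))
dists = classify_turns(turns, cell, W_out)
```

Because the hidden state `h` is threaded through all turns, a sentence like "3 pm" can be classified under the intent established earlier in the session, and a change of wording mid-session can move the state (and hence the predicted intent).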
15. Dataset
Key-Value Retrieval [2]:
- 3 intent types
- 15 slot types
- sessions composed of multiple turns
[2] Eric, M. and Manning, C. (2017). Key-value retrieval networks for task-oriented dialogue. SIGDIAL 2017: Session
on Natural Language Generation for Dialog Systems
             #dialogues   #user_turns   #intent_change
training          2,425         6,429            1,583
validation          302           820              189
test                304           790              217
preprocessing:
1. move the intent label from the session level to the individual sentences
2. concatenate all the sessions
3. for each driver sentence, collect the inputs and outputs and build the samples
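The three preprocessing steps can be sketched as follows. The session contents, intent names, and dictionary layout are illustrative and not the actual format of the Key-Value Retrieval corpus [2]:

```python
# toy sessions: each session has one intent and alternating driver/agent turns
sessions = [
    {"intent": "schedule",
     "turns": [("driver", "set a meeting"), ("agent", "what time?"), ("driver", "3 pm")]},
    {"intent": "weather",
     "turns": [("driver", "will it rain tomorrow?"), ("agent", "in which city?"), ("driver", "Lyon")]},
]

# 1. move the session-level intent onto every driver sentence
labelled = []
for s in sessions:
    for speaker, text in s["turns"]:
        labelled.append((speaker, text, s["intent"] if speaker == "driver" else None))

# 2. concatenate all sessions into one stream (the intent changes at session borders)
# 3. for each driver sentence, the input is the stream up to and including it
samples = []
for i, (speaker, text, intent) in enumerate(labelled):
    if speaker == "driver":
        history = [t for _, t, _ in labelled[: i + 1]]
        samples.append({"history": history, "label": intent})
```

Concatenating sessions (step 2) is what creates the intent-change points counted in the table above: at a session border, the label of the next driver sentence differs from the previous one.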
16. Embeddings
Distributional Semantics [2]: words used in similar contexts have similar meanings
[8]: precomputed vectors, 685k keys, 685k unique vectors
[2] Harris, Z. S. (1970). Distributional structure. In Papers in structural and transformational linguistics (pp.
775-794). Springer, Dordrecht.
[8] https://spacy.io/models/en
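A minimal sketch of how such a precomputed lookup table is used, with a toy three-word, three-dimensional table standing in for the 685k-key vectors of [8] (the words and values are invented for illustration):

```python
import numpy as np

# toy embedding table standing in for the precomputed vectors of [8]
vectors = {
    "rain":    np.array([0.9, 0.1, 0.0]),
    "sunny":   np.array([0.8, 0.2, 0.1]),
    "meeting": np.array([0.0, 0.9, 0.5]),
}
OOV = np.zeros(3)  # out-of-vocabulary words map to a zero vector in this sketch

def embed(sentence):
    """One precomputed vector per token (lower-cased lookup)."""
    return [vectors.get(tok.lower(), OOV) for tok in sentence.split()]

def cosine(a, b):
    """Similarity of two embeddings: close to 1 for words in similar contexts."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

vecs = embed("Will it rain")
# distributional semantics: "rain" and "sunny" appear in similar contexts,
# so their vectors are closer than those of "rain" and "meeting"
sim_weather = cosine(vectors["rain"], vectors["sunny"])
sim_other = cosine(vectors["rain"], vectors["meeting"])
```

This is why pretrained embeddings help the classifier: an intent cue the model never saw in training (e.g. "sunny") still lands near cues it did see (e.g. "rain").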
17. Results: multi-turn intent classification

#   intent   RNN    agent words            F1       epoch number
1   ✓        LSTM   ✓                      0.9987   7
2   ✓        LSTM   ✘                      0.9987   8
3   ✓        GRU    ✓                      0.9975   14
4   ✘               ✓                      0.9951   5
5   ✓        GRU    ✘                      0.9585   9
6   ✘ [1]           ✘                      0.8524   8
7   CRF on pretrained word embeddings      0.7049   100
8   CRF on words                           0.4976   100
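For reference, per-class F1 for a single-label intent classifier can be computed as below; the intent names and the toy gold/predicted labels are illustrative, not taken from the experiments:

```python
from collections import Counter

def f1_scores(gold, pred):
    """Per-class F1 for single-label intent predictions."""
    classes = sorted(set(gold) | set(pred))
    tp, fp, fn = Counter(), Counter(), Counter()
    for g, p in zip(gold, pred):
        if g == p:
            tp[g] += 1          # true positive for the gold class
        else:
            fp[p] += 1          # false positive for the predicted class
            fn[g] += 1          # false negative for the gold class
    out = {}
    for c in classes:
        prec = tp[c] / (tp[c] + fp[c]) if tp[c] + fp[c] else 0.0
        rec = tp[c] / (tp[c] + fn[c]) if tp[c] + fn[c] else 0.0
        out[c] = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return out

gold = ["schedule", "weather", "weather", "navigate"]
pred = ["schedule", "weather", "navigate", "navigate"]
scores = f1_scores(gold, pred)
```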
18. Conclusions
- understanding the user's sentences is a crucial task in QA
- the interaction context really matters for sentence classification
- future work:
  - entities
  - hyperparameter optimization
  - knowledge usage, not only classification
19. The context of interaction can help QA systems
https://www.slideshare.net/MartinoMensio
https://twitter.com/MartinoMensio