6. Overview
⢠Three different taxonomies based on
ď§ Problem setting â especially, based on
input/output format
ď§ Type of attention
ď§ Task/problem
8. AttentionWalks
⢠Input: Homogeneous graph
⢠Output: Node embedding
⢠Mechanism: Learn attention weights + Attention-guided walk
⢠DeepWalk + Attention
⢠Performance of DeepWalk is sensitive to context window size c
Sami Abu-El-Haija, Bryan Perozzi, Rami Al-Rfou, and Alex Alemi. 2017. Watch Your Step: Learning Graph Embeddings Through Attention. In arXiv:1710.09599v1.
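A minimal sketch of the idea, in my own notation (not taken from the slide): rather than fixing the window size c, Watch Your Step learns an attention distribution q over walk distances, so the expected co-occurrence statistics become an attention-weighted sum of powers of the transition matrix T:

  \mathbb{E}[D] \approx \sum_{k=1}^{C} q_k \, T^k, \qquad q = \mathrm{softmax}(s), \quad \sum_k q_k = 1

Here s are trainable logits and D is the node co-occurrence (context) matrix; q is learned jointly with the node embeddings.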
9. GAKE
⢠Input: Homogeneous graph(knowledge graph)
⢠Output: Node embedding
⢠Mechanism: Learn attention weights
⢠Node context ęłě° ě edge ě ëł´ë ěŹěŠ
⢠ěľě˘ node embedding ęłě°ě attention ěŹěŠ
Jun Feng, Minlie Huang, Yang Yang, and Xiaoyan Zhu. 2016. GAKE: Graph Aware Knowledge Embedding. In Proc. of COLING. 641–651.
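A hedged reading of the mechanism, in my own notation (details may differ from the paper): every subject s_i, which can be a node or an edge, is predicted from its graph context c(s_i), and the context representation is an attention-weighted combination of the embeddings of the subjects in that context, roughly

  p(s_i \mid c(s_i)) \propto \exp\!\Big( e_{s_i}^{\top} \sum_{s_j \in c(s_i)} a(s_j)\, e_{s_j} \Big)

with the attention weights a(s_j) learned to reflect each context subject's importance.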
10. GAT
⢠Input: Homogeneous graph
⢠Output: Node embedding
⢠Mechanism: Learn attention weights
⢠GCN(Graph-ConvNet) + Attention
⢠Using multi-head attention
Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. 2018. Graph Attention Networks. In Proc. of ICLR. 1–12.
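For reference, a single GAT head scores each neighbor with a shared weight vector a, normalizes the scores with a softmax, and aggregates; multi-head attention repeats this K times and concatenates (or averages) the results:

  e_{ij} = \mathrm{LeakyReLU}\big(a^{\top}[W h_i \,\|\, W h_j]\big), \qquad
  \alpha_{ij} = \frac{\exp(e_{ij})}{\sum_{k \in \mathcal{N}_i} \exp(e_{ik})}, \qquad
  h_i' = \sigma\Big(\sum_{j \in \mathcal{N}_i} \alpha_{ij} W h_j\Big)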
11. AGNN
⢠Input: Homogeneous graph
⢠Output: Node embedding
⢠Mechanism: Similarity-based attention
⢠GCN + Attention
⢠Similar to GAT, but using cosine similarity to calculate attention
⢠No multi-head attention
Kiran K. Thekumparampil, Chong Wang, Sewoong Oh, and Li-Jia Li. 2018. Attention-based Graph Neural Network for Semi-supervised Learning. In arXiv:1803.03735v1.
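As a reference for the similarity-based mechanism: instead of a learned scoring vector, AGNN weights neighbors by a scaled cosine similarity, roughly

  \alpha_{ij} = \mathrm{softmax}_{j \in \mathcal{N}(i) \cup \{i\}}\big(\beta \cdot \cos(h_i, h_j)\big), \qquad
  h_i' = \sum_{j} \alpha_{ij} h_j

where β is a single scalar learned per propagation layer.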
12. PRML
⢠Input: Homogeneous graph
⢠Output: Edge embedding
⢠Mechanism: Learn attention weights
⢠Path-based feature learning ě´íě
ď§ 1) ę° pathë§ë¤ attention(ě¤ě ë ¸ë ě ě )
ď§ 2) pathëźëŚŹ attentioně íľí´ ěľě˘ embedding ěěą
Zhou Zhao, Ben Gao, Vicent W. Zheng, Deng Cai, Xiaofei He, and Yueting Zhuang. 2017. Link Prediction via Ranking Metric Dual-level Attention Network Learning. In Proc. of IJCAI. 3525–3531.
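A hedged sketch of the dual-level idea, in my own notation: for a node pair (u, w), each connecting path p first gets a representation from node-level attention over the nodes on the path, and the pair's edge embedding is then a path-level attention over those path representations:

  h_p = \sum_{v \in p} \alpha_v x_v, \qquad
  z_{(u,w)} = \sum_{p \in P(u,w)} \beta_p h_p

with both sets of weights α and β normalized by a softmax.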
15. graph2seq
⢠Input: Homogeneous graph
⢠Output: Graph embedding
⢠Mechanism: Similarity-based attention
⢠In node embedding, they consider both forward/backward neighbor
⢠In attention, they use node embedding
Kun Xu, Lingfei Wu, Zhiguo Wang, Yansong Feng, Michael Witbrock, and Vadim Sheinin. 2018. Graph2Seq: Graph to Sequence Learning with Attention-based Neural Networks. In arXiv:1804.00823v3.
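A hedged sketch, in my own notation: after aggregating forward and backward neighborhoods into node embeddings z_v, the decoder attends over all node embeddings at every output step, in the usual similarity-based way:

  \alpha_{t,v} = \mathrm{softmax}_v\big(\mathrm{score}(s_t, z_v)\big), \qquad
  c_t = \sum_{v} \alpha_{t,v} z_v

where s_t is the decoder state and c_t the context vector used to produce the next output token.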
16. GAM
⢠Input: Homogeneous graph
⢠Output: Graph embedding
⢠Mechanism: Learn attention weights + Attention-guided walk
⢠Use RNN in attention-guided walk
⢠Use multi agent
John Boaz Lee, Ryan Rossi, and Xiangnan Kong. 2018. Graph Classification using Structural Attention. In Proc. of KDD. 1–9.
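A hedged sketch of the attention-guided walk, in my own notation (the exact parameterization differs in the paper): an RNN summarizes the walk taken so far, and an attention/step module uses that history to decide where in the neighborhood to move next; the agents' final hidden states are used for graph classification:

  h_t = \mathrm{RNN}(h_{t-1}, x_{v_t}), \qquad
  v_{t+1} \sim f_{\mathrm{step}}(h_t, \mathcal{N}(v_t))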
17. RNNSearch, Att-NMT
⢠Input: Path
⢠Output: Graph embedding
⢠Mechanism: Similarity-based attention
⢠Use hidden as embedding, calculate attention on every hidden, with
similarity between target hidden and hiddens
⢠In Att-NMT, there are local attention
Dzmitry Bahdanau, KyungHyun Cho, and Yoshua Bengio. 2015. Neural Machine Translation by Jointly Learning to Align and Translate. In Proc. of ICLR. 1–15.
Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective Approaches to Attention-based Neural Machine Translation. In Proc. of EMNLP. 1412–1421.
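For reference, the similarity-based attention these models introduce: at each decoder step, every encoder hidden state h_i is scored against the decoder state, the scores are normalized with a softmax, and the context vector is the weighted sum:

  e_{t,i} = \mathrm{score}(s_{t-1}, h_i), \qquad
  \alpha_{t,i} = \frac{\exp(e_{t,i})}{\sum_k \exp(e_{t,k})}, \qquad
  c_t = \sum_i \alpha_{t,i} h_i

Luong et al.'s local attention restricts this sum to a window around a predicted source position instead of attending over all positions.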
18. CCM
⢠Input: Homogeneous Graph
⢠Output: Graph embedding/Hybrid embedding
⢠Mechanism: Learn attention weights
⢠Get input sequence and knowledge graph
⢠Use two graph attention/one seq2seq attention
Hao Zhou, Tom Yang, Minlie Huang, Haizhou Zhao, Jingfang Xu, and Xiaoyan Zhu. 2018. Commonsense Knowledge Aware Conversation Generation with Graph Attention. In Proc. of IJCAI-ECAI. 1–7.
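A hedged sketch of the static graph attention, in my own notation (the paper also uses a dynamic graph attention during decoding): each retrieved knowledge graph g is encoded as an attention-weighted sum over its triples (h_n, r_n, t_n):

  g = \sum_{n} \alpha_n \,[h_n ; t_n], \qquad
  \alpha_n = \mathrm{softmax}_n\big(\mathrm{score}(h_n, r_n, t_n)\big)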
19. JointD/E+SATT
⢠Input: Homogeneous graph
⢠Output: Hybrid embedding
⢠Mechanism: Similarity-based attention
Xu Han, Zhiyuan Liu, and Maosong Sun. 2018. Neural Knowledge Acquisition via Mutual Attention Between Knowledge Graph and Text. In Proc. of AAAI. 1–8.
20. GRAM
⢠Input: Directed acyclic graph
⢠Output: Hybrid embedding
⢠Mechanism: Learn attention weights
⢠DAGě ancestor node뼟 모ë ěŹěŠí´ě attention
Edward Choi, Mohammad Taha Bahadori, Le Song, Walter F. Stewart, and Jimeng Sun. 2017. GRAM: Graph-based Attention Model for Healthcare Representation Learning. In Proc. of KDD. 787–795.
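For reference, GRAM's attention over ancestors (consistent with the bullet above): the final representation of a leaf medical code c_i is a convex combination of the basic embeddings of the code itself and all of its ancestors in the ontology DAG:

  g_i = \sum_{j \in \mathcal{A}(i) \cup \{i\}} \alpha_{ij}\, e_j, \qquad \sum_j \alpha_{ij} = 1

with the weights α_{ij} produced by a softmax over an MLP score of (e_i, e_j).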