2016tf study5
- Slide 14/23
Improved model 2
Hidden layer at time t
Encoder side
Context
Using an LSTM for both f and q gives the seq2seq model
Familiar from the TensorFlow tutorial
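The context vector on this slide is a weighted sum of the encoder hidden states, with weights given by a softmax over alignment scores. A minimal NumPy sketch, assuming simple dot-product scoring (the slides' model may instead use a Bahdanau-style MLP scorer):

```python
import numpy as np

def attention_context(decoder_state, encoder_states):
    """Context vector: softmax-weighted sum of encoder hidden states.

    decoder_state:  (H,)   current decoder hidden state
    encoder_states: (T, H) encoder hidden states for all T time steps
    """
    scores = encoder_states @ decoder_state      # (T,) dot-product alignment scores
    weights = np.exp(scores - scores.max())      # subtract max for numerical stability
    weights /= weights.sum()                     # softmax over encoder time steps
    return weights @ encoder_states              # (H,) context vector

# toy example: 4 encoder steps, hidden size 3
enc = np.random.randn(4, 3)
ctx = attention_context(np.random.randn(3), enc)
print(ctx.shape)  # (3,)
```

With a zero decoder state all scores are equal, so the context reduces to the plain mean of the encoder states; a trained decoder state skews the weights toward the relevant source positions.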
- Slide 20/23
Show, Attend and Tell: Neural Image Caption Generation with Visual Attention (arXiv:1502.03044v2)
- Slide 22/23
# rnn with attention
classifier = tf.contrib.learn.TensorFlowRNNClassifier(
    rnn_size=2,
    cell_type="lstm",
    n_classes=2,
    input_op_fn=rnn_input_fn,
    bidirectional=False,
    attn_length=2,
    attn_size=2,
    attn_vec_size=2,
    steps=100)
classifier.fit(data, labels)
- Slide 23/23
if attn_length is not None:
    fw_cell = contrib_rnn.AttentionCellWrapper(
        fw_cell, attn_length=attn_length, attn_size=attn_size,
        attn_vec_size=attn_vec_size, state_is_tuple=False)
    bw_cell = contrib_rnn.AttentionCellWrapper(
        bw_cell, attn_length=attn_length, attn_size=attn_size,
        attn_vec_size=attn_vec_size, state_is_tuple=False)
rnn_fw_cell = nn.rnn_cell.MultiRNNCell([fw_cell] * num_layers,
                                       state_is_tuple=False)