author     Lukasz Kaiser <lukaszkaiser@google.com>          2016-10-14 11:51:28 -0800
committer  TensorFlower Gardener <gardener@tensorflow.org>  2016-10-14 13:04:42 -0700
commit     3a8ec777219002edf05264730f06760d1c346d6f
tree       61d20de074a33db9abf160af9a6a34f435850d06
parent     68af6c9848defff9c9e77bf92302fd266f019cd5
Adding a comment to seq2seq to warn about the large size of the attention vectors
when no output_projection is specified (see #4938). Change: 136187392
-rw-r--r--  tensorflow/python/ops/seq2seq.py  3
1 file changed, 3 insertions, 0 deletions
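
For context (not part of the patch): a minimal sketch of how a caller might supply an output_projection so the attention vectors stay proportional to the cell size instead of num_decoder_symbols. It assumes the TF 0.x-era API exposed by this file at the time of the commit; the sizes and names cell_size, vocab_size, proj_w, and proj_b are illustrative, not from the change.

    # Hedged sketch, not part of the patch: passing output_projection keeps the
    # attention/output sizes proportional to the cell size rather than
    # num_decoder_symbols. Assumes the TF 0.x-era seq2seq API in this file.
    import tensorflow as tf
    from tensorflow.python.ops import rnn_cell, seq2seq

    cell_size = 256      # decoder cell size; attention vectors scale with this
    vocab_size = 40000   # num_decoder_symbols; often much larger than cell_size
    seq_len = 10

    cell = rnn_cell.GRUCell(cell_size)

    # Projection from cell outputs to vocabulary logits; supplying it lets the
    # attention decoder work at cell_size instead of vocab_size.
    proj_w = tf.get_variable("proj_w", [cell_size, vocab_size])
    proj_b = tf.get_variable("proj_b", [vocab_size])

    encoder_inputs = [tf.placeholder(tf.int32, [None]) for _ in range(seq_len)]
    decoder_inputs = [tf.placeholder(tf.int32, [None]) for _ in range(seq_len)]

    outputs, state = seq2seq.embedding_attention_seq2seq(
        encoder_inputs, decoder_inputs, cell,
        num_encoder_symbols=vocab_size,
        num_decoder_symbols=vocab_size,
        embedding_size=cell_size,
        output_projection=(proj_w, proj_b))  # avoids vocab_size-sized attention state
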
diff --git a/tensorflow/python/ops/seq2seq.py b/tensorflow/python/ops/seq2seq.py
index 7a4b547fac..9ec12583de 100644
--- a/tensorflow/python/ops/seq2seq.py
+++ b/tensorflow/python/ops/seq2seq.py
@@ -772,6 +772,9 @@ def embedding_attention_seq2seq(encoder_inputs,
input_size]). Then it runs attention decoder, initialized with the last
encoder state, on embedded decoder_inputs and attending to encoder outputs.
+ Warning: when output_projection is None, the size of the attention vectors
+ and variables will be made proportional to num_decoder_symbols, which can be large.
+
Args:
encoder_inputs: A list of 1D int32 Tensors of shape [batch_size].
decoder_inputs: A list of 1D int32 Tensors of shape [batch_size].