author     A. Unique TensorFlower <nobody@tensorflow.org>    2016-05-02 20:51:19 -0800
committer  TensorFlower Gardener <gardener@tensorflow.org>   2016-05-02 22:02:17 -0700
commit  a4b475cb320a69fd787803095aec1c514375a136 (patch)
tree    d61338d36fe4675706b80f39ae891925b4365788
parent  ac959c8a942b4f9202dbd6d8bffc6fcf7b096695 (diff)
Fix dynamic_rnn documentation for time_major.
From reading the code, it looks like time_major = True is more efficient.

Change: 121342997
-rw-r--r--  tensorflow/python/ops/rnn.py  2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tensorflow/python/ops/rnn.py b/tensorflow/python/ops/rnn.py
index 0c18703371..bc32285fa1 100644
--- a/tensorflow/python/ops/rnn.py
+++ b/tensorflow/python/ops/rnn.py
@@ -409,7 +409,7 @@ def dynamic_rnn(cell, inputs, sequence_length=None, initial_state=None,
time_major: The shape format of the `inputs` and `outputs` Tensors.
If true, these `Tensors` must be shaped `[max_time, batch_size, depth]`.
If false, these `Tensors` must be shaped `[batch_size, max_time, depth]`.
- Using time_major = False is a bit more efficient because it avoids
+ Using `time_major = True` is a bit more efficient because it avoids
transposes at the beginning and end of the RNN calculation. However,
most TensorFlow data is batch-major, so by default this function
accepts input and emits output in batch-major form.
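
For context, a minimal usage sketch of the time_major behaviour described in the docstring above (not part of this commit; the cell type, placeholder shapes, and unit counts are assumed purely for illustration, using the tf.nn API of this era):

    import tensorflow as tf

    # Hypothetical sizes for illustration only.
    max_time, batch_size, depth = 10, 32, 8

    cell = tf.nn.rnn_cell.BasicLSTMCell(num_units=16)

    # Time-major input: [max_time, batch_size, depth]. Passing
    # time_major=True avoids the transposes applied to batch-major input.
    inputs = tf.placeholder(tf.float32, shape=[max_time, batch_size, depth])
    outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32,
                                             time_major=True)
    # outputs is also time-major: [max_time, batch_size, 16].

With the default time_major=False, the same call would instead take and return tensors shaped [batch_size, max_time, depth], at the cost of the extra transposes the corrected docstring refers to.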