author      2016-05-02 20:51:19 -0800
committer   2016-05-02 22:02:17 -0700
commit      a4b475cb320a69fd787803095aec1c514375a136 (patch)
tree        d61338d36fe4675706b80f39ae891925b4365788
parent      ac959c8a942b4f9202dbd6d8bffc6fcf7b096695 (diff)
Fix dynamic_rnn documentation for time_major.
From reading the code, it looks like time_major = True is more efficient.
Change: 121342997
-rw-r--r--  tensorflow/python/ops/rnn.py  2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tensorflow/python/ops/rnn.py b/tensorflow/python/ops/rnn.py
index 0c18703371..bc32285fa1 100644
--- a/tensorflow/python/ops/rnn.py
+++ b/tensorflow/python/ops/rnn.py
@@ -409,7 +409,7 @@ def dynamic_rnn(cell, inputs, sequence_length=None, initial_state=None,
     time_major: The shape format of the `inputs` and `outputs` Tensors.
       If true, these `Tensors` must be shaped `[max_time, batch_size, depth]`.
       If false, these `Tensors` must be shaped `[batch_size, max_time, depth]`.
-      Using time_major = False is a bit more efficient because it avoids
+      Using `time_major = True` is a bit more efficient because it avoids
       transposes at the beginning and end of the RNN calculation.  However,
       most TensorFlow data is batch-major, so by default this function
       accepts input and emits output in batch-major form.
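The layout difference the corrected docstring describes can be sketched without TensorFlow itself: converting between batch-major `[batch_size, max_time, depth]` and time-major `[max_time, batch_size, depth]` is a transpose of the first two axes, which is exactly the work `dynamic_rnn` avoids when data is already time-major. The shapes below are arbitrary illustration values, not anything from the commit.

```python
import numpy as np

# Hypothetical batch of RNN inputs in batch-major form:
# [batch_size, max_time, depth]
batch_size, max_time, depth = 4, 7, 3
batch_major = np.zeros((batch_size, max_time, depth))

# The time-major layout [max_time, batch_size, depth] that dynamic_rnn
# expects when time_major=True is obtained by swapping the first two axes
# (the equivalent of tf.transpose(x, [1, 0, 2]) inside the RNN code):
time_major = np.transpose(batch_major, (1, 0, 2))

print(batch_major.shape)  # (4, 7, 3)
print(time_major.shape)   # (7, 4, 3)
```

With `time_major=False`, this transpose happens once on the way in and once on the way out of the calculation; feeding time-major data skips both, which is why the docstring now says `time_major = True` is slightly more efficient.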