author    Neal Wu <wun@google.com> 2018-01-08 14:59:16 -0800
committer TensorFlower Gardener <gardener@tensorflow.org> 2018-01-08 15:03:02 -0800
commit    0bc6ffadf19aa77afda55e163d2527ab66bdc4c1 (patch)
tree      9c3d2785e341dcdb22ea274075d7cf75f7dbbbd0
parent    fe09efbc7b58be26ac8037bb777053302f5130c2 (diff)
Very minor edits to performance_guide.md
PiperOrigin-RevId: 181223906
 tensorflow/docs_src/performance/performance_guide.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/tensorflow/docs_src/performance/performance_guide.md b/tensorflow/docs_src/performance/performance_guide.md
index 4850e62c12..10e7ad7ada 100644
--- a/tensorflow/docs_src/performance/performance_guide.md
+++ b/tensorflow/docs_src/performance/performance_guide.md
@@ -203,8 +203,8 @@ bn = tf.contrib.layers.batch_norm(input_layer, fused=True, data_format='NCHW')
### RNN Performance
-There are many ways to specify an RNN computation in Tensorflow and they have
-have trade-offs with respect to model flexibility and performance. The
+There are many ways to specify an RNN computation in TensorFlow and they have
+trade-offs with respect to model flexibility and performance. The
@{tf.nn.rnn_cell.BasicLSTMCell} should be considered a reference implementation
and used only as a last resort when no other options will work.
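As an illustration of the reference path the guide warns about, a minimal TF 1.x sketch (assumes TensorFlow 1.x, where `tf.nn.rnn_cell.BasicLSTMCell` and `tf.placeholder` exist; the shapes are made up for the example):

```python
import tensorflow as tf

# Reference implementation: simple, flexible, but the slowest option.
cell = tf.nn.rnn_cell.BasicLSTMCell(num_units=64)

# [batch, time, features] input; shapes here are illustrative only.
inputs = tf.placeholder(tf.float32, [None, 10, 32])
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
```

In practice the guide recommends preferring the fused or block-cell variants over this cell whenever they fit the model.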
@@ -230,7 +230,7 @@ If you need to run one step of the RNN at a time, as might be the case in
reinforcement learning with a recurrent policy, then you should use the
@{tf.contrib.rnn.LSTMBlockCell} with your own environment interaction loop
inside a @{tf.while_loop} construct. Running one step of the RNN at a time and
-returning to python is possible but it will be slower.
+returning to Python is possible, but it will be slower.
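The one-step-at-a-time pattern described above can be sketched as follows (assumes TensorFlow 1.x, where `tf.contrib.rnn.LSTMBlockCell` is available; the observation shape and unit count are illustrative):

```python
import tensorflow as tf

# Block cell: fused LSTM kernel, usable one step at a time.
cell = tf.contrib.rnn.LSTMBlockCell(num_units=64)

# One environment observation per step; [batch=1, features=32] is made up.
obs = tf.placeholder(tf.float32, [1, 32])
state = cell.zero_state(batch_size=1, dtype=tf.float32)

# A single RNN step; feed next_state back in on the following call.
output, next_state = cell(obs, state)
```

Running this step from a Python loop (feeding `next_state` back via `feed_dict`) works but pays a Python round trip per step; wrapping the environment interaction inside a `tf.while_loop`, as the guide suggests, keeps the whole loop on-graph and avoids that cost.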
On CPUs, mobile devices, and if @{tf.contrib.cudnn_rnn} is not available on
your GPU, the fastest and most memory efficient option is