From 0bc6ffadf19aa77afda55e163d2527ab66bdc4c1 Mon Sep 17 00:00:00 2001
From: Neal Wu
Date: Mon, 8 Jan 2018 14:59:16 -0800
Subject: Very minor edits to performance_guide.md

PiperOrigin-RevId: 181223906
---
 tensorflow/docs_src/performance/performance_guide.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/tensorflow/docs_src/performance/performance_guide.md b/tensorflow/docs_src/performance/performance_guide.md
index 4850e62c12..10e7ad7ada 100644
--- a/tensorflow/docs_src/performance/performance_guide.md
+++ b/tensorflow/docs_src/performance/performance_guide.md
@@ -203,8 +203,8 @@ bn = tf.contrib.layers.batch_norm(input_layer, fused=True, data_format='NCHW')
 
 ### RNN Performance
 
-There are many ways to specify an RNN computation in Tensorflow and they have
-have trade-offs with respect to model flexibility and performance. The
+There are many ways to specify an RNN computation in TensorFlow and they have
+trade-offs with respect to model flexibility and performance. The
 @{tf.nn.rnn_cell.BasicLSTMCell} should be considered a reference implementation
 and used only as a last resort when no other options will work.
 
@@ -230,7 +230,7 @@ If you need to run one step of the RNN at a time, as might be the case in
 reinforcement learning with a recurrent policy, then you should use the
 @{tf.contrib.rnn.LSTMBlockCell} with your own environment interaction loop
 inside a @{tf.while_loop} construct. Running one step of the RNN at a time and
-returning to python is possible but it will be slower.
+returning to Python is possible, but it will be slower.
 
 On CPUs, mobile devices, and if @{tf.contrib.cudnn_rnn} is not available on
 your GPU, the fastest and most memory efficient option is
-- 
cgit v1.2.3