Diffstat (limited to 'tensorflow/python/ops/control_flow_ops.py')
-rw-r--r--  tensorflow/python/ops/control_flow_ops.py  14 +++++++-------
1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/tensorflow/python/ops/control_flow_ops.py b/tensorflow/python/ops/control_flow_ops.py
index 5374817118..89de88a530 100644
--- a/tensorflow/python/ops/control_flow_ops.py
+++ b/tensorflow/python/ops/control_flow_ops.py
@@ -2671,8 +2671,8 @@ def while_loop(cond, body, loop_vars, shape_invariants=None,
   Note that `while_loop` calls `cond` and `body` *exactly once* (inside the
   call to `while_loop`, and not at all during `Session.run()`). `while_loop`
   stitches together the graph fragments created during the `cond` and `body`
-  calls with some additional graph nodes to make something the repeats
-  `body` until `cond` returns false.
+  calls with some additional graph nodes to create the graph flow that
+  repeats `body` until `cond` returns false.
 
   For correctness, `tf.while_loop()` strictly enforces shape invariants for
   the loop variables. A shape invariant is a (possibly partial) shape that
@@ -2708,11 +2708,11 @@ def while_loop(cond, body, loop_vars, shape_invariants=None,
   memory consumption and execution order. For correct programs, `while_loop`
   should return the same result for any parallel_iterations > 0.
 
-  For training, TensorFlow remembers the tensors that are produced in the
-  forward inference but needed in back propagation. These tensors can be a
-  main source of memory consumption and often cause OOM problems when training
-  on GPUs. When the flag swap_memory is true, we swap out these tensors from
-  GPU to CPU. This for example allows us to train RNN models with very long
+  For training, TensorFlow stores the tensors that are produced in the
+  forward inference and are needed in back propagation. These tensors are a
+  main source of memory consumption and often cause OOM errors when training
+  on GPUs. When the flag swap_memory is true, we swap out these tensors from
+  GPU to CPU. This for example allows us to train RNN models with very long
   sequences and large batches.
 
   Args:
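
For context, here is a minimal sketch of the documented behavior in TF1-style graph mode. The loop, the variable names (`i`, `acc`), and the toy accumulator are illustrative, not part of the patch; the sketch shows `cond` and `body` being traced exactly once to build the loop graph, a relaxed shape invariant for a loop variable whose shape changes across iterations, and the `parallel_iterations` and `swap_memory` flags discussed in the docstring.

    import tensorflow as tf

    i = tf.constant(0)
    acc = tf.zeros([0])  # accumulator whose leading dimension grows each iteration

    def cond(i, acc):
      return i < 5

    def body(i, acc):
      # Append one element per iteration; because the shape of `acc` changes,
      # a partial shape invariant must be supplied to while_loop below.
      nxt = tf.cast(tf.expand_dims(i, 0), tf.float32)
      return i + 1, tf.concat([acc, nxt], axis=0)

    i_final, acc_final = tf.while_loop(
        cond, body, [i, acc],
        shape_invariants=[i.get_shape(), tf.TensorShape([None])],
        parallel_iterations=10,  # data-independent iterations may overlap
        swap_memory=True)        # offload forward-pass tensors from GPU to CPU

    # `cond` and `body` have already been called exactly once by this point;
    # Session.run() only executes the stitched-together loop graph.
    with tf.Session() as sess:
      print(sess.run([i_final, acc_final]))  # [5, array([0., 1., 2., 3., 4.], ...)]

As the updated docstring notes, `swap_memory` only matters when gradients are computed through the loop: it is the forward-pass tensors kept for back propagation that get swapped to host memory, so a forward-only run like the one above is unaffected by the flag.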