Diffstat (limited to 'tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.stop_gradient.md')
-rw-r--r-- | tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.stop_gradient.md | 34 |
1 file changed, 34 insertions, 0 deletions
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.stop_gradient.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.stop_gradient.md
new file mode 100644
index 0000000000..53759f49ff
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.stop_gradient.md
@@ -0,0 +1,34 @@
+### `tf.stop_gradient(input, name=None)` {#stop_gradient}
+
+Stops gradient computation.
+
+When executed in a graph, this op outputs its input tensor as-is.
+
+When building ops to compute gradients, this op prevents the contribution of
+its inputs from being taken into account. Normally, the gradient generator
+adds ops to a graph to compute the derivatives of a specified 'loss' by
+recursively finding the inputs that contributed to its computation. If you
+insert this op in the graph, its inputs are masked from the gradient
+generator. They are not taken into account for computing gradients.
+
+This is useful any time you want to compute a value with TensorFlow but need
+to pretend that the value was a constant. Some examples include:
+
+* The *EM* algorithm where the *M-step* should not involve backpropagation
+  through the output of the *E-step*.
+* Contrastive divergence training of Boltzmann machines where, when
+  differentiating the energy function, the training must not backpropagate
+  through the graph that generated the samples from the model.
+* Adversarial training, where no backprop should happen through the
+  adversarial example generation process.
+
+##### Args:
+
+
+* <b>`input`</b>: A `Tensor`.
+* <b>`name`</b>: A name for the operation (optional).
+
+##### Returns:
+
+  A `Tensor`. Has the same type as `input`.
+
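The behaviour described in the added doc can be illustrated with a minimal sketch. This example is not part of the diff above; it assumes the graph-mode API of the same era (`tf.constant`, `tf.gradients`, `tf.Session`) and made-up variable names, and simply shows how an input wrapped in `tf.stop_gradient` is treated as a constant by the gradient generator.

```python
import tensorflow as tf

# Hypothetical example: y depends on x directly and through a branch whose
# gradient contribution we suppress with tf.stop_gradient.
x = tf.constant(3.0)
frozen = tf.stop_gradient(x * 2.0)   # evaluates to 6.0, but is treated as a constant
y = x * x + frozen * x               # dy/dx = 2*x + frozen = 12, not 6*x = 18

grad = tf.gradients(y, x)[0]

with tf.Session() as sess:
    print(sess.run([y, grad]))       # [27.0, 12.0]
```

Without the `tf.stop_gradient` call, the graph that produced `frozen` would also be differentiated and the gradient would be `6 * x = 18`; with it, the `x * 2.0` branch is masked from the gradient generator while its forward value is unchanged.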