author    | 2018-09-14 16:12:07 -0700
committer | 2018-09-14 16:19:01 -0700
commit    | bdca15c5e5c09e5c97f4357bd2a792da54746e94 (patch)
tree      | 4c37efac042edc98ff9f0683aeca68ca1912922a /tensorflow/python/training
parent    | 9eba75e54e87aa00efae482c69797794d7020950 (diff)
Fixed documentation of Optimizer.minimize() for eager mode to match behavior of Optimizer.compute_gradients().
PiperOrigin-RevId: 213060585
Diffstat (limited to 'tensorflow/python/training')
-rw-r--r-- | tensorflow/python/training/optimizer.py | 13
1 file changed, 6 insertions(+), 7 deletions(-)
diff --git a/tensorflow/python/training/optimizer.py b/tensorflow/python/training/optimizer.py
index 2304a461c1..699162b30c 100644
--- a/tensorflow/python/training/optimizer.py
+++ b/tensorflow/python/training/optimizer.py
@@ -385,13 +385,12 @@ class Optimizer(
 
     @compatibility(eager)
     When eager execution is enabled, `loss` should be a Python function that
-    takes elements of `var_list` as arguments and computes the value to be
-    minimized. If `var_list` is None, `loss` should take no arguments.
-    Minimization (and gradient computation) is done with respect to the
-    elements of `var_list` if not None, else with respect to any trainable
-    variables created during the execution of the `loss` function.
-    `gate_gradients`, `aggregation_method`, `colocate_gradients_with_ops` and
-    `grad_loss` are ignored when eager execution is enabled.
+    takes no arguments and computes the value to be minimized. Minimization (and
+    gradient computation) is done with respect to the elements of `var_list` if
+    not None, else with respect to any trainable variables created during the
+    execution of the `loss` function. `gate_gradients`, `aggregation_method`,
+    `colocate_gradients_with_ops` and `grad_loss` are ignored when eager
+    execution is enabled.
     @end_compatibility
     """
     grads_and_vars = self.compute_gradients(
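The corrected docstring states that in eager mode `loss` is a zero-argument callable: it closes over its variables rather than receiving them as arguments, and the optimizer computes gradients itself. A minimal sketch of that calling convention, in plain Python rather than TensorFlow (`minimize_step`, its dict-based variables, and the finite-difference gradient are illustrative stand-ins, not the real implementation):

```python
def minimize_step(loss_fn, var_list, lr=0.1, eps=1e-6):
    """One gradient-descent step using a forward-difference gradient.

    `loss_fn` takes no arguments, matching the corrected docstring;
    the variables it depends on are supplied separately via `var_list`.
    """
    base = loss_fn()
    for var in var_list:
        old = var["value"]
        var["value"] = old + eps
        grad = (loss_fn() - base) / eps   # numeric gradient w.r.t. this var
        var["value"] = old - lr * grad    # apply the update
    return base

# Usage: minimize f(w) = (w - 3)^2. The `loss` callable captures `w`
# by closure and takes no arguments, as the fixed documentation states.
w = {"value": 0.0}
loss = lambda: (w["value"] - 3.0) ** 2

for _ in range(200):
    minimize_step(loss, [w])

print(round(w["value"], 2))  # converges toward 3.0
```

The design point the docstring fix captures: because `loss` is evaluated inside the optimizer, the optimizer can observe which variables it touches, which is why minimization can fall back to "any trainable variables created during the execution of the `loss` function" when `var_list` is None.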