author A. Unique TensorFlower <gardener@tensorflow.org> 2016-09-16 21:05:06 -0800
committer TensorFlower Gardener <gardener@tensorflow.org> 2016-09-16 22:17:22 -0700
commit ecf7854845afffe318c885d79ea339c496ac553b (patch)
tree b04e0f847fc0bb75bb507580c52164e23a7f8638
parent ffd2b0cfa802c2113903b1d7d994a107171b896a (diff)
Update generated Python Op docs.
Change: 133459813
-rw-r--r-- tensorflow/g3doc/api_docs/python/contrib.layers.md | 20
-rw-r--r-- tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.layers.optimize_loss.md | 20
2 files changed, 36 insertions(+), 4 deletions(-)
diff --git a/tensorflow/g3doc/api_docs/python/contrib.layers.md b/tensorflow/g3doc/api_docs/python/contrib.layers.md
index 52bb907e30..670c122b2f 100644
--- a/tensorflow/g3doc/api_docs/python/contrib.layers.md
+++ b/tensorflow/g3doc/api_docs/python/contrib.layers.md
@@ -870,6 +870,21 @@ Optimize weights given a loss.
Given loss and parameters for optimizer, returns a training op.
+Various ways of passing an optimizer include:
+ - string, name of the optimizer like 'SGD', 'Adam'; see OPTIMIZER_CLS_NAMES
+ for the full list. E.g. `optimize_loss(..., optimizer='Adam')`.
+ - function, takes the learning rate `Tensor` as argument and must return an
+ `Optimizer` instance. E.g. `optimize_loss(...,
+ optimizer=lambda lr: tf.train.MomentumOptimizer(lr, momentum=0.5))`.
+ Alternatively, if `learning_rate` is `None`, the function takes no
+ arguments. E.g. `optimize_loss(..., learning_rate=None,
+ optimizer=lambda: tf.train.MomentumOptimizer(0.5, momentum=0.5))`.
+ - class, subclass of `Optimizer` whose only required constructor argument
+ is the learning rate, such as `AdamOptimizer` or `AdagradOptimizer`.
+ E.g. `optimize_loss(..., optimizer=tf.train.AdagradOptimizer)`.
+ - object, instance of a subclass of `Optimizer`.
+ E.g. `optimize_loss(..., optimizer=tf.train.AdagradOptimizer(0.5))`.
+
##### Args:
@@ -881,8 +896,9 @@ Given loss and parameters for optimizer, returns a training op.
'Adam', 'Adagrad'. Full list in OPTIMIZER_CLS_NAMES constant.
class should be sub-class of tf.Optimizer that implements
`compute_gradients` and `apply_gradients` functions.
- optimizer instance should be instantion of tf.Optimizer sub-class
- and have `compute_gradients` and `apply_gradients` functions.
+ optimizer instance should be an instance of a `tf.Optimizer`
+ sub-class and have `compute_gradients` and `apply_gradients`
+ functions.
* <b>`gradient_noise_scale`</b>: float or None, adds 0-mean normal noise scaled by this
value.
* <b>`gradient_multipliers`</b>: dict of variables or variable names to floats.
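
To make the four forms documented above concrete, here is a minimal sketch against the 2016-era `tf.contrib.layers` / `tf.train` API shown in this diff. The toy graph (`x`, `y`, `w`, the squared loss) and all hyperparameter values are hypothetical; only the `optimize_loss` calls mirror the documented forms.

```python
import tensorflow as tf

# Hypothetical toy graph so optimize_loss has something to minimize.
x = tf.placeholder(tf.float32, [None, 1])
y = tf.placeholder(tf.float32, [None, 1])
w = tf.get_variable("w", [1, 1])
loss = tf.reduce_mean(tf.square(tf.matmul(x, w) - y))
step = tf.contrib.framework.get_or_create_global_step()

# 1. string: looked up in OPTIMIZER_CLS_NAMES.
train_op = tf.contrib.layers.optimize_loss(
    loss, step, learning_rate=0.1, optimizer='Adam')

# 2. function: called with the learning-rate Tensor, returns an Optimizer.
train_op = tf.contrib.layers.optimize_loss(
    loss, step, learning_rate=0.1,
    optimizer=lambda lr: tf.train.MomentumOptimizer(lr, momentum=0.5))

# 3. class: instantiated internally with the learning rate.
train_op = tf.contrib.layers.optimize_loss(
    loss, step, learning_rate=0.1, optimizer=tf.train.AdagradOptimizer)

# 4. instance: carries its own learning rate, so learning_rate can be None.
train_op = tf.contrib.layers.optimize_loss(
    loss, step, learning_rate=None,
    optimizer=tf.train.AdagradOptimizer(0.5))
```

In practice a graph would build only one of these `train_op`s; the four calls are shown together purely for comparison.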
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.layers.optimize_loss.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.layers.optimize_loss.md
index 8a8490ea45..3213b3e8ff 100644
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.layers.optimize_loss.md
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.layers.optimize_loss.md
@@ -2,6 +2,21 @@
Given loss and parameters for optimizer, returns a training op.
+Various ways of passing an optimizer include:
+ - string, name of the optimizer like 'SGD', 'Adam'; see OPTIMIZER_CLS_NAMES
+ for the full list. E.g. `optimize_loss(..., optimizer='Adam')`.
+ - function, takes the learning rate `Tensor` as argument and must return an
+ `Optimizer` instance. E.g. `optimize_loss(...,
+ optimizer=lambda lr: tf.train.MomentumOptimizer(lr, momentum=0.5))`.
+ Alternatively, if `learning_rate` is `None`, the function takes no
+ arguments. E.g. `optimize_loss(..., learning_rate=None,
+ optimizer=lambda: tf.train.MomentumOptimizer(0.5, momentum=0.5))`.
+ - class, subclass of `Optimizer` whose only required constructor argument
+ is the learning rate, such as `AdamOptimizer` or `AdagradOptimizer`.
+ E.g. `optimize_loss(..., optimizer=tf.train.AdagradOptimizer)`.
+ - object, instance of a subclass of `Optimizer`.
+ E.g. `optimize_loss(..., optimizer=tf.train.AdagradOptimizer(0.5))`.
+
##### Args:
@@ -13,8 +28,9 @@ Given loss and parameters for optimizer, returns a training op.
'Adam', 'Adagrad'. Full list in OPTIMIZER_CLS_NAMES constant.
class should be sub-class of tf.Optimizer that implements
`compute_gradients` and `apply_gradients` functions.
- optimizer instance should be instantion of tf.Optimizer sub-class
- and have `compute_gradients` and `apply_gradients` functions.
+ optimizer instance should be an instance of a `tf.Optimizer`
+ sub-class and have `compute_gradients` and `apply_gradients`
+ functions.
* <b>`gradient_noise_scale`</b>: float or None, adds 0-mean normal noise scaled by this
value.
* <b>`gradient_multipliers`</b>: dict of variables or variable names to floats.
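
The `gradient_noise_scale` and `gradient_multipliers` arguments described above compose with any of the optimizer forms. A minimal sketch under the same API assumption; the variables `w` and `b` and the values 0.01 and 2.0 are hypothetical:

```python
import tensorflow as tf

# Hypothetical toy graph, as in the earlier sketch.
x = tf.placeholder(tf.float32, [None, 1])
y = tf.placeholder(tf.float32, [None, 1])
w = tf.get_variable("w", [1, 1])
b = tf.get_variable("b", [1], initializer=tf.constant_initializer(0.0))
loss = tf.reduce_mean(tf.square(tf.matmul(x, w) + b - y))
step = tf.contrib.framework.get_or_create_global_step()

train_op = tf.contrib.layers.optimize_loss(
    loss, step, learning_rate=0.1, optimizer='SGD',
    # Add 0-mean normal noise, scaled by 0.01, to every gradient.
    gradient_noise_scale=0.01,
    # Scale chosen gradients; keys may be variables or variable names.
    gradient_multipliers={b: 2.0})
```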