From 15b10c8cf339473200d6e1e368575ff374984e3f Mon Sep 17 00:00:00 2001
From: "A. Unique TensorFlower"
Date: Mon, 19 Dec 2016 15:52:56 -0800
Subject: Update generated Python Op docs.

Change: 142495971
---
 tensorflow/g3doc/api_docs/python/contrib.layers.md               | 9 ++++++---
 .../functions_and_classes/shard4/tf.contrib.layers.batch_norm.md | 9 ++++++---
 2 files changed, 12 insertions(+), 6 deletions(-)

diff --git a/tensorflow/g3doc/api_docs/python/contrib.layers.md b/tensorflow/g3doc/api_docs/python/contrib.layers.md
index a776cb197f..673536140a 100644
--- a/tensorflow/g3doc/api_docs/python/contrib.layers.md
+++ b/tensorflow/g3doc/api_docs/python/contrib.layers.md
@@ -79,9 +79,10 @@ can have speed penalty, specially in distributed settings.
     `data_format` is `NHWC` and the second dimension if `data_format` is
     `NCHW`.
 * `decay`: decay for the moving average. Reasonable values for `decay` are close
-    to 1.0, typically in the multiple-nines range: 0.999, 0.99, 0.9, etc. Lower
-    `decay` value (recommend trying `decay`=0.9) if model experiences reasonably
-    good training performance but poor validation and/or test performance.
+    to 1.0, typically in the multiple-nines range: 0.999, 0.99, 0.9, etc.
+    Lower `decay` value (recommend trying `decay`=0.9) if model experiences
+    reasonably good training performance but poor validation and/or test
+    performance. Try zero_debias_moving_mean=True for improved stability.
 * `center`: If True, subtract `beta`. If False, `beta` is ignored.
 * `scale`: If True, multiply by `gamma`. If False, `gamma` is
     not used. When the next layer is linear (also e.g. `nn.relu`), this can be
@@ -113,6 +114,8 @@ can have speed penalty, specially in distributed settings.
     example selection.)
 * `fused`: Use nn.fused_batch_norm if True, nn.batch_normalization otherwise.
 * `data_format`: A string. `NHWC` (default) and `NCHW` are supported.
+* `zero_debias_moving_mean`: Use zero_debias for moving_mean. It creates a new
+   pair of variables 'moving_mean/biased' and 'moving_mean/local_step'.
 * `scope`: Optional scope for `variable_scope`.
 
 ##### Returns:

diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.layers.batch_norm.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.layers.batch_norm.md
index 504157c51f..2b23d99de2 100644
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.layers.batch_norm.md
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.layers.batch_norm.md
@@ -29,9 +29,10 @@ can have speed penalty, specially in distributed settings.
     `data_format` is `NHWC` and the second dimension if `data_format` is
     `NCHW`.
 * `decay`: decay for the moving average. Reasonable values for `decay` are close
-    to 1.0, typically in the multiple-nines range: 0.999, 0.99, 0.9, etc. Lower
-    `decay` value (recommend trying `decay`=0.9) if model experiences reasonably
-    good training performance but poor validation and/or test performance.
+    to 1.0, typically in the multiple-nines range: 0.999, 0.99, 0.9, etc.
+    Lower `decay` value (recommend trying `decay`=0.9) if model experiences
+    reasonably good training performance but poor validation and/or test
+    performance. Try zero_debias_moving_mean=True for improved stability.
 * `center`: If True, subtract `beta`. If False, `beta` is ignored.
 * `scale`: If True, multiply by `gamma`. If False, `gamma` is
     not used. When the next layer is linear (also e.g. `nn.relu`), this can be
@@ -63,6 +64,8 @@ can have speed penalty, specially in distributed settings.
     example selection.)
 * `fused`: Use nn.fused_batch_norm if True, nn.batch_normalization otherwise.
 * `data_format`: A string. `NHWC` (default) and `NCHW` are supported.
+* `zero_debias_moving_mean`: Use zero_debias for moving_mean. It creates a new
+   pair of variables 'moving_mean/biased' and 'moving_mean/local_step'.
 * `scope`: Optional scope for `variable_scope`.
 
 ##### Returns:
-- 
cgit v1.2.3
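Below is a minimal, illustrative usage sketch (not part of the patch above) showing how the documented `decay` and `zero_debias_moving_mean` arguments to `tf.contrib.layers.batch_norm` would typically be wired up in TF 1.x-era graph code. The input placeholder shape, the preceding `conv2d` layer, the stand-in loss, and the optimizer/learning rate are assumptions made purely for the example.

```python
# Illustrative sketch only -- not from the patch. Assumes TF 1.x graph mode;
# the input shape, conv2d layer, loss, and learning rate are hypothetical.
import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 28, 28, 3])   # NHWC input (assumed shape)
net = tf.contrib.layers.conv2d(x, num_outputs=16, kernel_size=3,
                               activation_fn=None)  # linear layer before batch norm

# Lower decay (0.9) per the docs' advice when training looks good but
# validation/test lags; zero_debias_moving_mean=True adds the
# 'moving_mean/biased' and 'moving_mean/local_step' variables used to
# debias the moving-mean estimate.
net = tf.contrib.layers.batch_norm(
    net,
    decay=0.9,
    center=True,
    scale=False,                   # next op is nn.relu, so gamma can be disabled
    is_training=True,              # set False for eval/inference
    data_format='NHWC',
    zero_debias_moving_mean=True)
net = tf.nn.relu(net)

# batch_norm places its moving-average update ops in GraphKeys.UPDATE_OPS by
# default, so run them together with the train step.
loss = tf.reduce_mean(net)         # stand-in loss for the example
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
    train_op = tf.train.GradientDescentOptimizer(0.01).minimize(loss)
```

In practice the `is_training` flag is usually driven by a placeholder or a mode argument so the same graph can be reused at evaluation time, where the accumulated (and, with zero-debias, bias-corrected) moving statistics are used in place of the batch statistics.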