commit    ef5d941c164b22a9be47e4f5bd7c90ba7c83e984 (patch)
tree      c8ce2216cf4245930d72d716e7b1867559481f22
parent    0493565413a11b00b83b6f40990811a75499a5ba (diff)
author    2016-07-22 11:18:35 -0800
committer 2016-07-22 12:32:21 -0700

Update generated Python Op docs.

Change: 128198937

6 files changed, 67 insertions, 17 deletions
diff --git a/tensorflow/g3doc/api_docs/python/contrib.distributions.md b/tensorflow/g3doc/api_docs/python/contrib.distributions.md
index 1ad6445284..289dc49773 100644
--- a/tensorflow/g3doc/api_docs/python/contrib.distributions.md
+++ b/tensorflow/g3doc/api_docs/python/contrib.distributions.md
@@ -6320,9 +6320,34 @@ Boolean describing behavior on invalid input.
 
 - - -
 
-#### `tf.contrib.distributions.DirichletMultinomial.variance(name='variance')` {#DirichletMultinomial.variance}
+#### `tf.contrib.distributions.DirichletMultinomial.variance(name='mean')` {#DirichletMultinomial.variance}
 
-Variance of the distribution.
+Class variances for every batch member.
+
+The variance for each batch member is defined as the following:
+
+```
+Var(X_j) = n * alpha_j / alpha_0 * (1 - alpha_j / alpha_0) *
+  (n + alpha_0) / (1 + alpha_0)
+```
+
+where `alpha_0 = sum_j alpha_j`.
+
+The covariance between elements in a batch is defined as:
+
+```
+Cov(X_i, X_j) = -n * alpha_i * alpha_j / alpha_0 ** 2 *
+  (n + alpha_0) / (1 + alpha_0)
+```
+
+##### Args:
+
+
+* <b>`name`</b>: The name for this op.
+
+##### Returns:
+
+  A `Tensor` representing the variances for each batch member.
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.nn.moments.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.nn.moments.md
index 704bb5ba49..700d8fcff2 100644
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.nn.moments.md
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.nn.moments.md
@@ -21,8 +21,8 @@ When using these moments for batch normalization (see
 * <b>`shift`</b>: A `Tensor` containing the value by which to shift the data for
     numerical stability, or `None` if no shift is to be performed. A shift
     close to the true mean provides the most numerically stable results.
-* <b>`keep_dims`</b>: produce moments with the same dimensionality as the input.
 * <b>`name`</b>: Name used to scope the operations that compute the moments.
+* <b>`keep_dims`</b>: produce moments with the same dimensionality as the input.
 
 ##### Returns:
 
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.distributions.DirichletMultinomial.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.distributions.DirichletMultinomial.md
index f63ce5e9e6..f5a88c11dd 100644
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.distributions.DirichletMultinomial.md
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.distributions.DirichletMultinomial.md
@@ -397,8 +397,33 @@ Boolean describing behavior on invalid input.
 
 - - -
 
-#### `tf.contrib.distributions.DirichletMultinomial.variance(name='variance')` {#DirichletMultinomial.variance}
+#### `tf.contrib.distributions.DirichletMultinomial.variance(name='mean')` {#DirichletMultinomial.variance}
 
-Variance of the distribution.
+Class variances for every batch member.
+
+The variance for each batch member is defined as the following:
+
+```
+Var(X_j) = n * alpha_j / alpha_0 * (1 - alpha_j / alpha_0) *
+  (n + alpha_0) / (1 + alpha_0)
+```
+
+where `alpha_0 = sum_j alpha_j`.
+
+The covariance between elements in a batch is defined as:
+
+```
+Cov(X_i, X_j) = -n * alpha_i * alpha_j / alpha_0 ** 2 *
+  (n + alpha_0) / (1 + alpha_0)
+```
+
+##### Args:
+
+
+* <b>`name`</b>: The name for this op.
+
+##### Returns:
+
+  A `Tensor` representing the variances for each batch member.
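As a sanity check on the `Var(X_j)` formula added in the two hunks above, here is a small NumPy sketch (an illustrative helper written for this page, not part of TensorFlow) that evaluates it directly:

```python
import numpy as np

def dirichlet_multinomial_variance(n, alpha):
    """Per-class variance of a Dirichlet-multinomial, following the
    formula in the patched docs:
      Var(X_j) = n * p_j * (1 - p_j) * (n + alpha_0) / (1 + alpha_0)
    where p_j = alpha_j / alpha_0 and alpha_0 = sum_j alpha_j.
    Illustrative only; the actual op is
    tf.contrib.distributions.DirichletMultinomial.variance()."""
    alpha = np.asarray(alpha, dtype=float)
    alpha_0 = alpha.sum()
    p = alpha / alpha_0  # mean class probabilities
    return n * p * (1.0 - p) * (n + alpha_0) / (1.0 + alpha_0)

# With n = 1 the correction factor (n + alpha_0) / (1 + alpha_0) is 1,
# so the result reduces to the categorical variance p * (1 - p).
var = dirichlet_multinomial_variance(1, [1.0, 1.0])
```

For `n = 1` and `alpha = [1, 1]` this gives `0.25` per class, matching a fair coin flip, which is a quick way to confirm the expression is transcribed correctly.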
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.depthwise_conv2d.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.depthwise_conv2d.md
index 081897c19e..5fee10cf40 100644
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.depthwise_conv2d.md
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.depthwise_conv2d.md
@@ -27,7 +27,7 @@ same horizontal and vertical strides, `strides = [1, stride, stride, 1]`.
     `[filter_height, filter_width, in_channels, channel_multiplier]`.
 * <b>`strides`</b>: 1-D of size 4. The stride of the sliding window for each
     dimension of `input`.
-* <b>`padding`</b>: A string, either `'VALID'` or `'SAME'`. The padding algorithm.
+* <b>`padding`</b>: A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the [comment here](https://www.tensorflow.org/api_docs/python/nn.html#convolution)
 * <b>`name`</b>: A name for this operation (optional).
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.batch_normalization.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.batch_normalization.md
index eda1d7d053..1432f1ce2a 100644
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.batch_normalization.md
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.batch_normalization.md
@@ -4,9 +4,9 @@ Batch normalization.
 
 As described in http://arxiv.org/abs/1502.03167.
 Normalizes a tensor by `mean` and `variance`, and applies (optionally) a
-`scale` \\(\gamma\\) to it, as well as an `offset` \\(\beta\\):
+`scale` \\\\(\gamma\\\\) to it, as well as an `offset` \\\\(\\beta\\\\):
 
-\\(\frac{\gamma(x-\mu)}{\sigma}+\beta\\)
+\\\\(\\frac{\gamma(x-\mu)}{\sigma}+\\beta\\\\)
 
 `mean`, `variance`, `offset` and `scale` are all expected to be of one of two
 shapes:
@@ -33,9 +33,9 @@ shapes:
 
 * <b>`x`</b>: Input `Tensor` of arbitrary dimensionality.
 * <b>`mean`</b>: A mean `Tensor`.
 * <b>`variance`</b>: A variance `Tensor`.
-* <b>`offset`</b>: An offset `Tensor`, often denoted \\(\beta\\) in equations, or
+* <b>`offset`</b>: An offset `Tensor`, often denoted \\\\(\\beta\\\\) in equations, or
     None. If present, will be added to the normalized tensor.
-* <b>`scale`</b>: A scale `Tensor`, often denoted \\(\gamma\\) in equations, or
+* <b>`scale`</b>: A scale `Tensor`, often denoted \\\\(\gamma\\\\) in equations, or
     `None`. If present, the scale is applied to the normalized tensor.
 * <b>`variance_epsilon`</b>: A small float number to avoid dividing by 0.
 * <b>`name`</b>: A name for this operation (optional).
diff --git a/tensorflow/g3doc/api_docs/python/nn.md b/tensorflow/g3doc/api_docs/python/nn.md
index 69c7334b57..c0e5477f82 100644
--- a/tensorflow/g3doc/api_docs/python/nn.md
+++ b/tensorflow/g3doc/api_docs/python/nn.md
@@ -7,7 +7,7 @@ Note: Functions taking `Tensor` arguments can also take anything accepted by
 
 [TOC]
 
-## Activation Functions
+## Activation Functions.
 
 The activation ops provide different types of nonlinearities for use in neural
 networks. These include smooth nonlinearities (`sigmoid`, `tanh`, `elu`,
@@ -367,7 +367,7 @@ same horizontal and vertical strides, `strides = [1, stride, stride, 1]`.
     `[filter_height, filter_width, in_channels, channel_multiplier]`.
 * <b>`strides`</b>: 1-D of size 4. The stride of the sliding window for each
     dimension of `input`.
-* <b>`padding`</b>: A string, either `'VALID'` or `'SAME'`. The padding algorithm.
+* <b>`padding`</b>: A string, either `'VALID'` or `'SAME'`. The padding algorithm. See the [comment here](https://www.tensorflow.org/api_docs/python/nn.html#convolution)
 * <b>`name`</b>: A name for this operation (optional).
@@ -1058,8 +1058,8 @@ When using these moments for batch normalization (see
 * <b>`shift`</b>: A `Tensor` containing the value by which to shift the data for
     numerical stability, or `None` if no shift is to be performed.
     A shift close to the true mean provides the most numerically stable results.
-* <b>`keep_dims`</b>: produce moments with the same dimensionality as the input.
 * <b>`name`</b>: Name used to scope the operations that compute the moments.
+* <b>`keep_dims`</b>: produce moments with the same dimensionality as the input.
 
 ##### Returns:
 
@@ -2411,9 +2411,9 @@ Batch normalization.
 
 As described in http://arxiv.org/abs/1502.03167.
 Normalizes a tensor by `mean` and `variance`, and applies (optionally) a
-`scale` \\(\gamma\\) to it, as well as an `offset` \\(\beta\\):
+`scale` \\\\(\gamma\\\\) to it, as well as an `offset` \\\\(\\beta\\\\):
 
-\\(\frac{\gamma(x-\mu)}{\sigma}+\beta\\)
+\\\\(\\frac{\gamma(x-\mu)}{\sigma}+\\beta\\\\)
 
 `mean`, `variance`, `offset` and `scale` are all expected to be of one of two
 shapes:
@@ -2440,9 +2440,9 @@ shapes:
 * <b>`x`</b>: Input `Tensor` of arbitrary dimensionality.
 * <b>`mean`</b>: A mean `Tensor`.
 * <b>`variance`</b>: A variance `Tensor`.
-* <b>`offset`</b>: An offset `Tensor`, often denoted \\(\beta\\) in equations, or
+* <b>`offset`</b>: An offset `Tensor`, often denoted \\\\(\\beta\\\\) in equations, or
     None. If present, will be added to the normalized tensor.
-* <b>`scale`</b>: A scale `Tensor`, often denoted \\(\gamma\\) in equations, or
+* <b>`scale`</b>: A scale `Tensor`, often denoted \\\\(\gamma\\\\) in equations, or
     `None`. If present, the scale is applied to the normalized tensor.
 * <b>`variance_epsilon`</b>: A small float number to avoid dividing by 0.
 * <b>`name`</b>: A name for this operation (optional).
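The batch-normalization formula whose LaTeX escaping this commit touches, gamma * (x - mu) / sigma + beta, can be sketched in plain NumPy (an illustrative re-implementation written for this page; the real op is `tf.nn.batch_normalization`, which applies `variance_epsilon` under the square root exactly as below):

```python
import numpy as np

def batch_normalization(x, mean, variance, offset, scale, variance_epsilon):
    """NumPy sketch of the formula in the docs above:
    gamma * (x - mu) / sigma + beta, where sigma is
    sqrt(variance + variance_epsilon) for numerical stability.
    `offset` (beta) and `scale` (gamma) may be None, mirroring the
    optional arguments of tf.nn.batch_normalization."""
    inv_std = 1.0 / np.sqrt(variance + variance_epsilon)
    y = (x - mean) * inv_std
    if scale is not None:   # gamma
        y = y * scale
    if offset is not None:  # beta
        y = y + offset
    return y

# Normalize a small batch with its own per-column moments, as
# tf.nn.moments would compute them.
x = np.array([[1.0, 2.0],
              [3.0, 4.0]])
mean, variance = x.mean(axis=0), x.var(axis=0)
y = batch_normalization(x, mean, variance, offset=None, scale=None,
                        variance_epsilon=1e-3)
# Each column of y now has zero mean and (approximately) unit variance.
```

With `scale` and `offset` set to `None`, the result is the plain standardized tensor; supplying gamma and beta then rescales and shifts each feature, which is the learnable part of batch normalization.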