author    A. Unique TensorFlower <gardener@tensorflow.org>  2016-11-01 12:53:22 -0800
committer TensorFlower Gardener <gardener@tensorflow.org>   2016-11-01 14:17:38 -0700
commit    1dadfdd27650e21e0c679e615ddd377f380c574a (patch)
tree      d4cca1b0cfb53f1dd5966329ba339e380de498d4
parent    99f55f806f426a50c01dd06bd71a478009a84af2 (diff)
Update generated Python Op docs.
Change: 137866950
-rw-r--r--  tensorflow/g3doc/api_docs/python/contrib.distributions.md  117
1 file changed, 60 insertions(+), 57 deletions(-)
diff --git a/tensorflow/g3doc/api_docs/python/contrib.distributions.md b/tensorflow/g3doc/api_docs/python/contrib.distributions.md
index bc4a79cf85..a86285a019 100644
--- a/tensorflow/g3doc/api_docs/python/contrib.distributions.md
+++ b/tensorflow/g3doc/api_docs/python/contrib.distributions.md
@@ -17325,62 +17325,6 @@ Variance.
-- - -
-
-### `tf.contrib.distributions.matrix_diag_transform(matrix, transform=None, name=None)` {#matrix_diag_transform}
-
-Transform diagonal of [batch-]matrix, leave rest of matrix unchanged.
-
-Create a trainable covariance defined by a Cholesky factor:
-
-```python
-# Transform network layer into 2 x 2 array.
-matrix_values = tf.contrib.layers.fully_connected(activations, 4)
-matrix = tf.reshape(matrix_values, (batch_size, 2, 2))
-
-# Make the diagonal positive. If the upper triangle was zero, this would be a
-# valid Cholesky factor.
-chol = matrix_diag_transform(matrix, transform=tf.nn.softplus)
-
-# OperatorPDCholesky ignores the upper triangle.
-operator = OperatorPDCholesky(chol)
-```
-
-Example of heteroskedastic 2-D linear regression.
-
-```python
-# Get a trainable Cholesky factor.
-matrix_values = tf.contrib.layers.fully_connected(activations, 4)
-matrix = tf.reshape(matrix_values, (batch_size, 2, 2))
-chol = matrix_diag_transform(matrix, transform=tf.nn.softplus)
-
-# Get a trainable mean.
-mu = tf.contrib.layers.fully_connected(activations, 2)
-
-# This is a fully trainable multivariate normal!
-dist = tf.contrib.distributions.MultivariateNormalCholesky(mu, chol)
-
-# Standard log loss. Minimizing this will "train" mu and chol, and then dist
-# will be a distribution predicting labels as multivariate Gaussians.
-loss = -1 * tf.reduce_mean(dist.log_pdf(labels))
-```
-
-##### Args:
-
-
-* <b>`matrix`</b>: Rank `R` `Tensor`, `R >= 2`, where the last two dimensions are
- equal.
-* <b>`transform`</b>: Element-wise function mapping `Tensors` to `Tensors`,
-  applied to the diagonal of `matrix`. If `None`, `matrix` is returned
-  unchanged. Defaults to `None`.
-* <b>`name`</b>: A name to give created ops.
- Defaults to "matrix_diag_transform".
-
-##### Returns:
-
- A `Tensor` with same shape and `dtype` as `matrix`.
-
-
### Other multivariate distributions
@@ -20793,6 +20737,65 @@ Variance.
+### Multivariate Utilities
+
+- - -
+
+### `tf.contrib.distributions.matrix_diag_transform(matrix, transform=None, name=None)` {#matrix_diag_transform}
+
+Transform diagonal of [batch-]matrix, leave rest of matrix unchanged.
+
+Create a trainable covariance defined by a Cholesky factor:
+
+```python
+# Transform network layer into 2 x 2 array.
+matrix_values = tf.contrib.layers.fully_connected(activations, 4)
+matrix = tf.reshape(matrix_values, (batch_size, 2, 2))
+
+# Make the diagonal positive. If the upper triangle was zero, this would be a
+# valid Cholesky factor.
+chol = matrix_diag_transform(matrix, transform=tf.nn.softplus)
+
+# OperatorPDCholesky ignores the upper triangle.
+operator = OperatorPDCholesky(chol)
+```
+
+Example of heteroskedastic 2-D linear regression.
+
+```python
+# Get a trainable Cholesky factor.
+matrix_values = tf.contrib.layers.fully_connected(activations, 4)
+matrix = tf.reshape(matrix_values, (batch_size, 2, 2))
+chol = matrix_diag_transform(matrix, transform=tf.nn.softplus)
+
+# Get a trainable mean.
+mu = tf.contrib.layers.fully_connected(activations, 2)
+
+# This is a fully trainable multivariate normal!
+dist = tf.contrib.distributions.MultivariateNormalCholesky(mu, chol)
+
+# Standard log loss. Minimizing this will "train" mu and chol, and then dist
+# will be a distribution predicting labels as multivariate Gaussians.
+loss = -1 * tf.reduce_mean(dist.log_pdf(labels))
+```
+
+##### Args:
+
+
+* <b>`matrix`</b>: Rank `R` `Tensor`, `R >= 2`, where the last two dimensions are
+ equal.
+* <b>`transform`</b>: Element-wise function mapping `Tensors` to `Tensors`,
+  applied to the diagonal of `matrix`. If `None`, `matrix` is returned
+  unchanged. Defaults to `None`.
+* <b>`name`</b>: A name to give created ops.
+ Defaults to "matrix_diag_transform".
+
+##### Returns:
+
+ A `Tensor` with same shape and `dtype` as `matrix`.
+
+
+
## Transformed distributions
- - -
@@ -23052,7 +23055,7 @@ will broadcast in the case of multidimensional sets of parameters.
-## Kullback Leibler Divergence
+## Kullback-Leibler Divergence
- - -
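
For reference, here is a minimal, self-contained sketch of the `matrix_diag_transform` behavior documented in the diff above, using the contrib-era API it describes. The concrete matrix values are illustrative only and do not come from the commit.

```python
import tensorflow as tf

# A batch of two 2 x 2 matrices; some diagonal entries are negative,
# so the matrices are not yet valid Cholesky factors.
matrix = tf.constant([[[-1.0,  2.0],
                       [ 3.0, -4.0]],
                      [[ 5.0,  6.0],
                       [ 7.0,  8.0]]])

# softplus maps each diagonal entry to a strictly positive value;
# every off-diagonal entry passes through unchanged.
chol = tf.contrib.distributions.matrix_diag_transform(
    matrix, transform=tf.nn.softplus)

with tf.Session() as sess:
    # Diagonals become softplus(-1), softplus(-4), softplus(5), softplus(8).
    print(sess.run(chol))
```

A strictly positive diagonal (together with an ignored or zeroed upper triangle) is what makes the result a valid Cholesky factor, which is why `tf.nn.softplus` is the transform used throughout the examples in this doc.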