authorGravatar A. Unique TensorFlower <gardener@tensorflow.org>2016-09-16 08:20:58 -0800
committerGravatar TensorFlower Gardener <gardener@tensorflow.org>2016-09-16 09:33:19 -0700
commitd89eda8c1a3e94eb19f92155328765b26f15d315 (patch)
treefc8595cf3cb6049840bc44ac6ba49760ca24a95c
parent4caae5f4aac071cd6b138115ebc4ececb1685291 (diff)
Update generated Python Op docs.
Change: 133391887
-rw-r--r--tensorflow/g3doc/api_docs/python/contrib.metrics.md108
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.metrics.streaming_covariance.md53
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.metrics.streaming_pearson_correlation.md49
-rw-r--r--tensorflow/g3doc/api_docs/python/index.md2
4 files changed, 212 insertions, 0 deletions
diff --git a/tensorflow/g3doc/api_docs/python/contrib.metrics.md b/tensorflow/g3doc/api_docs/python/contrib.metrics.md
index cb2cc4fa72..6d684118ae 100644
--- a/tensorflow/g3doc/api_docs/python/contrib.metrics.md
+++ b/tensorflow/g3doc/api_docs/python/contrib.metrics.md
@@ -666,6 +666,114 @@ If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
- - -
+### `tf.contrib.metrics.streaming_covariance(predictions, labels, weights=None, metrics_collections=None, updates_collections=None, name=None)` {#streaming_covariance}
+
+Computes the unbiased sample covariance between `predictions` and `labels`.
+
+The `streaming_covariance` function creates four local variables,
+`comoment`, `mean_prediction`, `mean_label`, and `count`, which are used to
+compute the sample covariance between predictions and labels across multiple
+batches of data. The covariance is ultimately returned as an idempotent
+operation that simply divides `comoment` by `count` - 1. We use `count` - 1
+in order to get an unbiased estimate.
+
+The algorithm used for this online computation is described in
+https://en.wikipedia.org/wiki/Algorithms_for_calculating_variance.
+Specifically, the formula used to combine two sample comoments is
+`C_AB = C_A + C_B + (E[x_A] - E[x_B]) * (E[y_A] - E[y_B]) * n_A * n_B / n_AB`.
+The comoment for a single batch of data is simply
+`sum((x - E[x]) * (y - E[y]))`, optionally weighted.
+
+If `weights` is not None, then it is used to compute weighted comoments,
+means, and count. NOTE: these weights are treated as "frequency weights", as
+opposed to "reliability weights". See discussion of the difference on
+https://wikipedia.org/wiki/Weighted_arithmetic_mean#Weighted_sample_variance
+
+To facilitate the computation of covariance across multiple batches of data,
+the function creates an `update_op` operation, which updates underlying
+variables and returns the updated covariance.
+
+##### Args:
+
+
+* <b>`predictions`</b>: A `Tensor` of arbitrary size.
+* <b>`labels`</b>: A `Tensor` of the same size as `predictions`.
+* <b>`weights`</b>: An optional set of weights which indicates the frequency with which
+ an example is sampled. Must be broadcastable with `labels`.
+* <b>`metrics_collections`</b>: An optional list of collections that the metric
+ value variable should be added to.
+* <b>`updates_collections`</b>: An optional list of collections that the metric update
+ ops should be added to.
+* <b>`name`</b>: An optional variable_scope name.
+
+##### Returns:
+
+
+* <b>`covariance`</b>: A `Tensor` representing the current unbiased sample covariance,
+ `comoment` / (`count` - 1).
+* <b>`update_op`</b>: An operation that updates the local variables appropriately.
+
+##### Raises:
+
+
+* <b>`ValueError`</b>: If labels and predictions are of different sizes or if either
+ `metrics_collections` or `updates_collections` are not a list or tuple.
+
+
+- - -
+
+### `tf.contrib.metrics.streaming_pearson_correlation(predictions, labels, weights=None, metrics_collections=None, updates_collections=None, name=None)` {#streaming_pearson_correlation}
+
+Computes the Pearson correlation coefficient between `predictions` and `labels`.
+
+The `streaming_pearson_correlation` function delegates to
+`streaming_covariance` the tracking of three [co]variances:
+- `streaming_covariance(predictions, labels)`, i.e. covariance
+- `streaming_covariance(predictions, predictions)`, i.e. variance
+- `streaming_covariance(labels, labels)`, i.e. variance
+
+The product-moment correlation ultimately returned is an idempotent operation
+`cov(predictions, labels) / sqrt(var(predictions) * var(labels))`. To
+facilitate correlation computation across multiple batches, the function
+groups the `update_op`s of the underlying `streaming_covariance` calls and
+returns a single `update_op`.
+
+If `weights` is not None, then it is used to compute a weighted correlation.
+NOTE: these weights are treated as "frequency weights", as opposed to
+"reliability weights". See discussion of the difference on
+https://wikipedia.org/wiki/Weighted_arithmetic_mean#Weighted_sample_variance
+
+##### Args:
+
+
+* <b>`predictions`</b>: A `Tensor` of arbitrary size.
+* <b>`labels`</b>: A `Tensor` of the same size as `predictions`.
+* <b>`weights`</b>: An optional set of weights which indicates the frequency with which
+ an example is sampled. Must be broadcastable with `labels`.
+* <b>`metrics_collections`</b>: An optional list of collections that the metric
+ value variable should be added to.
+* <b>`updates_collections`</b>: An optional list of collections that the metric update
+ ops should be added to.
+* <b>`name`</b>: An optional variable_scope name.
+
+##### Returns:
+
+
+* <b>`pearson_r`</b>: A `Tensor` representing the current Pearson product-moment
+ correlation coefficient, the value of
+ `cov(predictions, labels) / sqrt(var(predictions) * var(labels))`.
+* <b>`update_op`</b>: An operation that updates the underlying variables appropriately.
+
+##### Raises:
+
+
+* <b>`ValueError`</b>: If `labels` and `predictions` are of different sizes, or if
+  either `metrics_collections` or `updates_collections` are not a list or
+  tuple.
+
+
+- - -
+
### `tf.contrib.metrics.streaming_mean_cosine_distance(predictions, labels, dim, weights=None, metrics_collections=None, updates_collections=None, name=None)` {#streaming_mean_cosine_distance}
Computes the cosine distance between the labels and predictions.
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.metrics.streaming_covariance.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.metrics.streaming_covariance.md
new file mode 100644
index 0000000000..60c6c238d4
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.metrics.streaming_covariance.md
@@ -0,0 +1,53 @@
+### `tf.contrib.metrics.streaming_covariance(predictions, labels, weights=None, metrics_collections=None, updates_collections=None, name=None)` {#streaming_covariance}
+
+Computes the unbiased sample covariance between `predictions` and `labels`.
+
+The `streaming_covariance` function creates four local variables,
+`comoment`, `mean_prediction`, `mean_label`, and `count`, which are used to
+compute the sample covariance between predictions and labels across multiple
+batches of data. The covariance is ultimately returned as an idempotent
+operation that simply divides `comoment` by `count` - 1. We use `count` - 1
+in order to get an unbiased estimate.
+
+The algorithm used for this online computation is described in
+https://en.wikipedia.org/wiki/Algorithms_for_calculating_variance.
+Specifically, the formula used to combine two sample comoments is
+`C_AB = C_A + C_B + (E[x_A] - E[x_B]) * (E[y_A] - E[y_B]) * n_A * n_B / n_AB`.
+The comoment for a single batch of data is simply
+`sum((x - E[x]) * (y - E[y]))`, optionally weighted.
+
+If `weights` is not None, then it is used to compute weighted comoments,
+means, and count. NOTE: these weights are treated as "frequency weights", as
+opposed to "reliability weights". See discussion of the difference on
+https://wikipedia.org/wiki/Weighted_arithmetic_mean#Weighted_sample_variance
+
+To facilitate the computation of covariance across multiple batches of data,
+the function creates an `update_op` operation, which updates underlying
+variables and returns the updated covariance.
+
+##### Args:
+
+
+* <b>`predictions`</b>: A `Tensor` of arbitrary size.
+* <b>`labels`</b>: A `Tensor` of the same size as `predictions`.
+* <b>`weights`</b>: An optional set of weights which indicates the frequency with which
+ an example is sampled. Must be broadcastable with `labels`.
+* <b>`metrics_collections`</b>: An optional list of collections that the metric
+ value variable should be added to.
+* <b>`updates_collections`</b>: An optional list of collections that the metric update
+ ops should be added to.
+* <b>`name`</b>: An optional variable_scope name.
+
+##### Returns:
+
+
+* <b>`covariance`</b>: A `Tensor` representing the current unbiased sample covariance,
+ `comoment` / (`count` - 1).
+* <b>`update_op`</b>: An operation that updates the local variables appropriately.
+
+##### Raises:
+
+
+* <b>`ValueError`</b>: If labels and predictions are of different sizes or if either
+ `metrics_collections` or `updates_collections` are not a list or tuple.
+
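The comoment-merging formula documented above can be sketched in plain Python. This is a hedged illustration of the math only, not the TensorFlow implementation; `merge_comoments` and `batch_comoment` are hypothetical helper names.

```python
def merge_comoments(c_a, mean_x_a, mean_y_a, n_a,
                    c_b, mean_x_b, mean_y_b, n_b):
    """Combine two partial comoments using
    C_AB = C_A + C_B + (E[x_A]-E[x_B]) * (E[y_A]-E[y_B]) * n_A*n_B/n_AB."""
    n_ab = n_a + n_b
    c_ab = (c_a + c_b
            + (mean_x_a - mean_x_b) * (mean_y_a - mean_y_b) * n_a * n_b / n_ab)
    mean_x = (n_a * mean_x_a + n_b * mean_x_b) / n_ab
    mean_y = (n_a * mean_y_a + n_b * mean_y_b) / n_ab
    return c_ab, mean_x, mean_y, n_ab

def batch_comoment(xs, ys):
    """Comoment of a single batch: sum((x - E[x]) * (y - E[y]))."""
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)), mx, my, len(xs)

# Stream two batches, then compare with the direct unbiased covariance.
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [2.0, 1.0, 4.0, 3.0, 6.0, 5.0]
state = batch_comoment(x[:3], y[:3])
state = merge_comoments(*state, *batch_comoment(x[3:], y[3:]))
c, _, _, n = state
streamed_cov = c / (n - 1)  # divide by count - 1 for the unbiased estimate

mx, my = sum(x) / len(x), sum(y) / len(y)
direct_cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
assert abs(streamed_cov - direct_cov) < 1e-9
```

Because the merge is exact (up to float rounding), batching order does not change the final covariance, which is what makes the streamed metric idempotent across `update_op` calls.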
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.metrics.streaming_pearson_correlation.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.metrics.streaming_pearson_correlation.md
new file mode 100644
index 0000000000..3c8a3a5756
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.metrics.streaming_pearson_correlation.md
@@ -0,0 +1,49 @@
+### `tf.contrib.metrics.streaming_pearson_correlation(predictions, labels, weights=None, metrics_collections=None, updates_collections=None, name=None)` {#streaming_pearson_correlation}
+
+Computes the Pearson correlation coefficient between `predictions` and `labels`.
+
+The `streaming_pearson_correlation` function delegates to
+`streaming_covariance` the tracking of three [co]variances:
+- `streaming_covariance(predictions, labels)`, i.e. covariance
+- `streaming_covariance(predictions, predictions)`, i.e. variance
+- `streaming_covariance(labels, labels)`, i.e. variance
+
+The product-moment correlation ultimately returned is an idempotent operation
+`cov(predictions, labels) / sqrt(var(predictions) * var(labels))`. To
+facilitate correlation computation across multiple batches, the function
+groups the `update_op`s of the underlying `streaming_covariance` calls and
+returns a single `update_op`.
+
+If `weights` is not None, then it is used to compute a weighted correlation.
+NOTE: these weights are treated as "frequency weights", as opposed to
+"reliability weights". See discussion of the difference on
+https://wikipedia.org/wiki/Weighted_arithmetic_mean#Weighted_sample_variance
+
+##### Args:
+
+
+* <b>`predictions`</b>: A `Tensor` of arbitrary size.
+* <b>`labels`</b>: A `Tensor` of the same size as `predictions`.
+* <b>`weights`</b>: An optional set of weights which indicates the frequency with which
+ an example is sampled. Must be broadcastable with `labels`.
+* <b>`metrics_collections`</b>: An optional list of collections that the metric
+ value variable should be added to.
+* <b>`updates_collections`</b>: An optional list of collections that the metric update
+ ops should be added to.
+* <b>`name`</b>: An optional variable_scope name.
+
+##### Returns:
+
+
+* <b>`pearson_r`</b>: A `Tensor` representing the current Pearson product-moment
+ correlation coefficient, the value of
+ `cov(predictions, labels) / sqrt(var(predictions) * var(labels))`.
+* <b>`update_op`</b>: An operation that updates the underlying variables appropriately.
+
+##### Raises:
+
+
+* <b>`ValueError`</b>: If `labels` and `predictions` are of different sizes, or if
+  either `metrics_collections` or `updates_collections` are not a list or
+  tuple.
+
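The relationship the correlation docs describe, `pearson_r = cov(predictions, labels) / sqrt(var(predictions) * var(labels))`, can be sketched in plain Python. This is a hedged illustration of the formula, not the TensorFlow implementation; `unbiased_cov` and `pearson_r` are hypothetical helper names, and variance is computed as the covariance of a sequence with itself, exactly as the three `streaming_covariance` calls above do.

```python
import math

def unbiased_cov(xs, ys):
    """Unbiased sample covariance: sum((x-E[x]) * (y-E[y])) / (n - 1)."""
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)

def pearson_r(xs, ys):
    # var(x) is just cov(x, x); likewise for var(y).
    return unbiased_cov(xs, ys) / math.sqrt(
        unbiased_cov(xs, xs) * unbiased_cov(ys, ys))

predictions = [1.0, 2.0, 3.0, 4.0]
labels = [2.0, 4.0, 6.0, 8.0]  # perfectly linear in predictions
print(pearson_r(predictions, labels))  # close to 1.0 for a perfect linear fit
```

Note that the `count - 1` denominators cancel in the ratio, so the correlation itself is the same whether biased or unbiased [co]variance estimates are used, as long as all three are computed consistently.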
diff --git a/tensorflow/g3doc/api_docs/python/index.md b/tensorflow/g3doc/api_docs/python/index.md
index ac5e052650..218f587167 100644
--- a/tensorflow/g3doc/api_docs/python/index.md
+++ b/tensorflow/g3doc/api_docs/python/index.md
@@ -929,12 +929,14 @@
* [`set_union`](../../api_docs/python/contrib.metrics.md#set_union)
* [`streaming_accuracy`](../../api_docs/python/contrib.metrics.md#streaming_accuracy)
* [`streaming_auc`](../../api_docs/python/contrib.metrics.md#streaming_auc)
+ * [`streaming_covariance`](../../api_docs/python/contrib.metrics.md#streaming_covariance)
* [`streaming_mean`](../../api_docs/python/contrib.metrics.md#streaming_mean)
* [`streaming_mean_absolute_error`](../../api_docs/python/contrib.metrics.md#streaming_mean_absolute_error)
* [`streaming_mean_cosine_distance`](../../api_docs/python/contrib.metrics.md#streaming_mean_cosine_distance)
* [`streaming_mean_iou`](../../api_docs/python/contrib.metrics.md#streaming_mean_iou)
* [`streaming_mean_relative_error`](../../api_docs/python/contrib.metrics.md#streaming_mean_relative_error)
* [`streaming_mean_squared_error`](../../api_docs/python/contrib.metrics.md#streaming_mean_squared_error)
+ * [`streaming_pearson_correlation`](../../api_docs/python/contrib.metrics.md#streaming_pearson_correlation)
* [`streaming_percentage_less`](../../api_docs/python/contrib.metrics.md#streaming_percentage_less)
* [`streaming_precision`](../../api_docs/python/contrib.metrics.md#streaming_precision)
* [`streaming_recall`](../../api_docs/python/contrib.metrics.md#streaming_recall)