path: root/tensorflow/g3doc/api_docs/python/contrib.metrics.md
Diffstat (limited to 'tensorflow/g3doc/api_docs/python/contrib.metrics.md')
-rw-r--r--  tensorflow/g3doc/api_docs/python/contrib.metrics.md | 92 +--
1 file changed, 2 insertions(+), 90 deletions(-)
diff --git a/tensorflow/g3doc/api_docs/python/contrib.metrics.md b/tensorflow/g3doc/api_docs/python/contrib.metrics.md
index f7f02dd7d1..f11fd9d193 100644
--- a/tensorflow/g3doc/api_docs/python/contrib.metrics.md
+++ b/tensorflow/g3doc/api_docs/python/contrib.metrics.md
@@ -3,90 +3,9 @@
# Metrics (contrib)
[TOC]
-##Ops for evaluation metrics and summary statistics.
+Ops for evaluation metrics and summary statistics.
-### API
-
-This module provides functions for computing streaming metrics: metrics computed
-on dynamically valued `Tensors`. Each metric declaration returns a
-"value_tensor", an idempotent operation that returns the current value of the
-metric, and an "update_op", an operation that accumulates the information
-from the current value of the `Tensors` being measured and then returns the
-value of the "value_tensor".
-
-To use any of these metrics, one need only declare the metric, call `update_op`
-repeatedly to accumulate data over the desired number of `Tensor` values (often
-each one is a single batch) and finally evaluate the value_tensor. For example,
-to use the `streaming_mean`:
-
-```python
-values = ...
-mean_value, update_op = tf.contrib.metrics.streaming_mean(values)
-sess.run(tf.local_variables_initializer())
-
-for i in range(number_of_batches):
- print('Mean after batch %d: %f' % (i, update_op.eval()))
-print('Final Mean: %f' % mean_value.eval())
-```
-
-Each metric function adds nodes to the graph that hold the state necessary to
-compute the value of the metric as well as a set of operations that actually
-perform the computation. Every metric evaluation is composed of three steps:
-
-* Initialization: initializing the metric state.
-* Aggregation: updating the values of the metric state.
-* Finalization: computing the final metric value.
-
-In the above example, calling `streaming_mean` creates a pair of state variables
-that will contain (1) the running sum and (2) the count of the number of samples
-in the sum. Because the streaming metrics use local variables,
-the Initialization stage is performed by running the op returned
-by `tf.local_variables_initializer()`. It sets the sum and count variables to
-zero.
-
-Next, Aggregation is performed by examining the current state of `values`
-and incrementing the state variables appropriately. This step is executed by
-running the `update_op` returned by the metric.
-
-Finally, Finalization is performed by evaluating the "value_tensor".
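The three-step lifecycle described above can be sketched in plain Python. This is illustrative only, not the TensorFlow implementation: the real metric keeps its state in TF local variables rather than Python attributes, and the class name here is hypothetical.

```python
# Minimal sketch of the streaming_mean lifecycle: Initialization,
# Aggregation, Finalization. Illustrative only; the real op stores
# this state in TensorFlow local variables.
class StreamingMean:
    def __init__(self):
        # Initialization: the two state variables the prose describes.
        self.total = 0.0   # (1) the running sum
        self.count = 0     # (2) the number of samples in the sum

    def update_op(self, values):
        # Aggregation: fold a new batch into the state, then return the
        # current metric value, mirroring the metric's update_op.
        self.total += sum(values)
        self.count += len(values)
        return self.value()

    def value(self):
        # Finalization: compute the metric from the accumulated state.
        return self.total / self.count if self.count else 0.0

m = StreamingMean()
print(m.update_op([1.0, 2.0, 3.0]))  # mean after batch 0: 2.0
print(m.update_op([5.0]))            # mean after batch 1: 2.75
print(m.value())                     # final mean: 2.75
```

Running `value()` repeatedly does not change the state (it is idempotent), while each `update_op` call both mutates the state and reports the current value, matching the value_tensor/update_op split above.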
-
-In practice, we commonly want to evaluate across many batches and multiple
-metrics. To do so, we need only run the metric computation operations multiple
-times:
-
-```python
-labels = ...
-predictions = ...
-accuracy, update_op_acc = tf.contrib.metrics.streaming_accuracy(
- labels, predictions)
-error, update_op_error = tf.contrib.metrics.streaming_mean_absolute_error(
- labels, predictions)
-
-sess.run(tf.local_variables_initializer())
-for batch in range(num_batches):
- sess.run([update_op_acc, update_op_error])
-
-accuracy, mean_absolute_error = sess.run([accuracy, error])
-```
-
-Note that when evaluating the same metric multiple times on different inputs,
-one must specify the scope of each metric to avoid accumulating the results
-together:
-
-```python
-labels = ...
-predictions0 = ...
-predictions1 = ...
-
-accuracy0, update_op0 = tf.contrib.metrics.streaming_accuracy(
-    labels, predictions0, name='preds0')
-accuracy1, update_op1 = tf.contrib.metrics.streaming_accuracy(
-    labels, predictions1, name='preds1')
-```
-
-Certain metrics, such as `streaming_mean` or `streaming_accuracy`, can be
-weighted via a `weights` argument. The `weights` tensor must be the same shape
-as the `labels` and `predictions` tensors, and the result is a weighted average
-of the metric.
-
-## Metric `Ops`
+See the @{$python/contrib.metrics} guide.
- - -
@@ -1696,7 +1615,6 @@ If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
-
- - -
### `tf.contrib.metrics.auc_using_histogram(boolean_labels, scores, score_range, nbins=100, collections=None, check_shape=True, name=None)` {#auc_using_histogram}
@@ -1738,7 +1656,6 @@ numbers of bins and comparing results.
* <b>`update_op`</b>: `Op`, when run, updates internal histograms.
-
- - -
### `tf.contrib.metrics.accuracy(predictions, labels, weights=None)` {#accuracy}
@@ -1765,7 +1682,6 @@ Computes the percentage of times that predictions matches labels.
if dtype is not bool, integer, or string.
-
- - -
### `tf.contrib.metrics.aggregate_metrics(*value_update_tuples)` {#aggregate_metrics}
@@ -1822,7 +1738,6 @@ and update ops when the list of metrics is long. For example:
names to update ops.
-
- - -
### `tf.contrib.metrics.confusion_matrix(labels, predictions, num_classes=None, dtype=tf.int32, name=None, weights=None)` {#confusion_matrix}
@@ -1830,9 +1745,6 @@ and update ops when the list of metrics is long. For example:
Deprecated. Use tf.confusion_matrix instead.
-
-## Set `Ops`
-
- - -
### `tf.contrib.metrics.set_difference(a, b, aminusb=True, validate_indices=True)` {#set_difference}