Diffstat (limited to 'tensorflow/g3doc/api_docs/python/math_ops.md')
-rw-r--r--  tensorflow/g3doc/api_docs/python/math_ops.md | 1883
 1 file changed, 1883 insertions(+), 0 deletions(-)
diff --git a/tensorflow/g3doc/api_docs/python/math_ops.md b/tensorflow/g3doc/api_docs/python/math_ops.md
new file mode 100644
index 0000000000..fb93c38311
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/math_ops.md
@@ -0,0 +1,1883 @@
+<!-- This file is machine generated: DO NOT EDIT! -->
+
+# Math
+<!-- TOC-BEGIN This section is generated by neural network: DO NOT EDIT! -->
+## Contents
+* [Arithmetic Operators](#AUTOGENERATED-arithmetic-operators)
+ * [tf.add(x, y, name=None)](#add)
+ * [tf.sub(x, y, name=None)](#sub)
+ * [tf.mul(x, y, name=None)](#mul)
+ * [tf.div(x, y, name=None)](#div)
+ * [tf.mod(x, y, name=None)](#mod)
+* [Basic Math Functions](#AUTOGENERATED-basic-math-functions)
+ * [tf.add_n(inputs, name=None)](#add_n)
+ * [tf.abs(x, name=None)](#abs)
+ * [tf.neg(x, name=None)](#neg)
+ * [tf.sign(x, name=None)](#sign)
+ * [tf.inv(x, name=None)](#inv)
+ * [tf.square(x, name=None)](#square)
+ * [tf.round(x, name=None)](#round)
+ * [tf.sqrt(x, name=None)](#sqrt)
+ * [tf.rsqrt(x, name=None)](#rsqrt)
+ * [tf.pow(x, y, name=None)](#pow)
+ * [tf.exp(x, name=None)](#exp)
+ * [tf.log(x, name=None)](#log)
+ * [tf.ceil(x, name=None)](#ceil)
+ * [tf.floor(x, name=None)](#floor)
+ * [tf.maximum(x, y, name=None)](#maximum)
+ * [tf.minimum(x, y, name=None)](#minimum)
+ * [tf.cos(x, name=None)](#cos)
+ * [tf.sin(x, name=None)](#sin)
+* [Matrix Math Functions](#AUTOGENERATED-matrix-math-functions)
+ * [tf.diag(diagonal, name=None)](#diag)
+ * [tf.transpose(a, perm=None, name='transpose')](#transpose)
+ * [tf.matmul(a, b, transpose_a=False, transpose_b=False, a_is_sparse=False, b_is_sparse=False, name=None)](#matmul)
+ * [tf.batch_matmul(x, y, adj_x=None, adj_y=None, name=None)](#batch_matmul)
+ * [tf.matrix_determinant(input, name=None)](#matrix_determinant)
+ * [tf.batch_matrix_determinant(input, name=None)](#batch_matrix_determinant)
+ * [tf.matrix_inverse(input, name=None)](#matrix_inverse)
+ * [tf.batch_matrix_inverse(input, name=None)](#batch_matrix_inverse)
+ * [tf.cholesky(input, name=None)](#cholesky)
+ * [tf.batch_cholesky(input, name=None)](#batch_cholesky)
+* [Complex Number Functions](#AUTOGENERATED-complex-number-functions)
+ * [tf.complex(real, imag, name=None)](#complex)
+ * [tf.complex_abs(x, name=None)](#complex_abs)
+ * [tf.conj(in_, name=None)](#conj)
+ * [tf.imag(in_, name=None)](#imag)
+ * [tf.real(in_, name=None)](#real)
+* [Reduction](#AUTOGENERATED-reduction)
+ * [tf.reduce_sum(input_tensor, reduction_indices=None, keep_dims=False, name=None)](#reduce_sum)
+ * [tf.reduce_prod(input_tensor, reduction_indices=None, keep_dims=False, name=None)](#reduce_prod)
+ * [tf.reduce_min(input_tensor, reduction_indices=None, keep_dims=False, name=None)](#reduce_min)
+ * [tf.reduce_max(input_tensor, reduction_indices=None, keep_dims=False, name=None)](#reduce_max)
+ * [tf.reduce_mean(input_tensor, reduction_indices=None, keep_dims=False, name=None)](#reduce_mean)
+ * [tf.reduce_all(input_tensor, reduction_indices=None, keep_dims=False, name=None)](#reduce_all)
+ * [tf.reduce_any(input_tensor, reduction_indices=None, keep_dims=False, name=None)](#reduce_any)
+ * [tf.accumulate_n(inputs, shape=None, tensor_dtype=None, name=None)](#accumulate_n)
+* [Segmentation](#AUTOGENERATED-segmentation)
+ * [tf.segment_sum(data, segment_ids, name=None)](#segment_sum)
+ * [tf.segment_prod(data, segment_ids, name=None)](#segment_prod)
+ * [tf.segment_min(data, segment_ids, name=None)](#segment_min)
+ * [tf.segment_max(data, segment_ids, name=None)](#segment_max)
+ * [tf.segment_mean(data, segment_ids, name=None)](#segment_mean)
+ * [tf.unsorted_segment_sum(data, segment_ids, num_segments, name=None)](#unsorted_segment_sum)
+ * [tf.sparse_segment_sum(data, indices, segment_ids, name=None)](#sparse_segment_sum)
+ * [tf.sparse_segment_mean(data, indices, segment_ids, name=None)](#sparse_segment_mean)
+* [Sequence Comparison and Indexing](#AUTOGENERATED-sequence-comparison-and-indexing)
+ * [tf.argmin(input, dimension, name=None)](#argmin)
+ * [tf.argmax(input, dimension, name=None)](#argmax)
+ * [tf.listdiff(x, y, name=None)](#listdiff)
+ * [tf.where(input, name=None)](#where)
+ * [tf.unique(x, name=None)](#unique)
+ * [tf.edit_distance(hypothesis, truth, normalize=True, name='edit_distance')](#edit_distance)
+ * [tf.invert_permutation(x, name=None)](#invert_permutation)
+
+
+<!-- TOC-END This section was generated by neural network, THANKS FOR READING! -->
+
+## Arithmetic Operators <div class="md-anchor" id="AUTOGENERATED-arithmetic-operators">{#AUTOGENERATED-arithmetic-operators}</div>
+
+TensorFlow provides several operations that you can use to add basic arithmetic
+operators to your graph.
+
+- - -
+
+### tf.add(x, y, name=None) <div class="md-anchor" id="add">{#add}</div>
+
+Returns x + y element-wise.
+
+*NOTE*: Add supports broadcasting. AddN does not.
+
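+For example, relying on the broadcasting noted above (an illustrative
+sketch; the result values are hand-worked, not taken from the op itself):
+
+```python
+x = tf.constant([[1, 2], [3, 4]])
+y = tf.constant([10, 20])
+# 'y' is broadcast against each row of 'x'.
+tf.add(x, y) ==> [[11, 22], [13, 24]]
+```
+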
+##### Args:
+
+
+* <b>x</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int8`, `int16`, `int32`, `complex64`, `int64`.
+* <b>y</b>: A `Tensor`. Must have the same type as `x`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `x`.
+
+
+- - -
+
+### tf.sub(x, y, name=None) <div class="md-anchor" id="sub">{#sub}</div>
+
+Returns x - y element-wise.
+
+##### Args:
+
+
+* <b>x</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `complex64`, `int64`.
+* <b>y</b>: A `Tensor`. Must have the same type as `x`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `x`.
+
+
+- - -
+
+### tf.mul(x, y, name=None) <div class="md-anchor" id="mul">{#mul}</div>
+
+Returns x * y element-wise.
+
+##### Args:
+
+
+* <b>x</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int8`, `int16`, `int32`, `complex64`, `int64`.
+* <b>y</b>: A `Tensor`. Must have the same type as `x`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `x`.
+
+
+- - -
+
+### tf.div(x, y, name=None) <div class="md-anchor" id="div">{#div}</div>
+
+Returns x / y element-wise.
+
+##### Args:
+
+
+* <b>x</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `complex64`, `int64`.
+* <b>y</b>: A `Tensor`. Must have the same type as `x`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `x`.
+
+
+- - -
+
+### tf.mod(x, y, name=None) <div class="md-anchor" id="mod">{#mod}</div>
+
+Returns element-wise remainder of division.
+
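+For example (illustrative; the remainders are hand-computed):
+
+```python
+# 'x' is [5, 7, 10]
+# 'y' is [3, 3, 3]
+tf.mod(x, y) ==> [2, 1, 1]
+```
+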
+##### Args:
+
+
+* <b>x</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`, `float32`, `float64`.
+* <b>y</b>: A `Tensor`. Must have the same type as `x`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `x`.
+
+
+
+## Basic Math Functions <div class="md-anchor" id="AUTOGENERATED-basic-math-functions">{#AUTOGENERATED-basic-math-functions}</div>
+
+TensorFlow provides several operations that you can use to add basic
+mathematical functions to your graph.
+
+- - -
+
+### tf.add_n(inputs, name=None) <div class="md-anchor" id="add_n">{#add_n}</div>
+
+Adds all input tensors element-wise.
+
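+For example (an illustrative sketch; the sums are hand-computed, and note
+that, unlike `tf.add`, no broadcasting is applied):
+
+```python
+a = tf.constant([1, 2])
+b = tf.constant([3, 4])
+# All inputs must have the same shape and type.
+tf.add_n([a, b, a]) ==> [5, 8]
+```
+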
+##### Args:
+
+
+* <b>inputs</b>: A list of at least 1 `Tensor` objects of the same type in: `float32`, `float64`, `int64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `qint8`, `quint8`, `qint32`.
+ Must all be the same size and shape.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `inputs`.
+
+
+- - -
+
+### tf.abs(x, name=None) <div class="md-anchor" id="abs">{#abs}</div>
+
+Computes the absolute value of a tensor.
+
+Given a tensor of real numbers `x`, this operation returns a tensor
+containing the absolute value of each element in `x`. For example, if x is
+an input element and y is an output element, this operation computes
+\\(y = |x|\\).
+
+See [`tf.complex_abs()`](#complex_abs) to compute the absolute value of a complex
+number.
+
+##### Args:
+
+
+* <b>x</b>: A `Tensor` of type `float`, `double`, `int32`, or `int64`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor` the same size and type as `x` with absolute values.
+
+
+- - -
+
+### tf.neg(x, name=None) <div class="md-anchor" id="neg">{#neg}</div>
+
+Computes numerical negative value element-wise.
+
+I.e., \\(y = -x\\).
+
+##### Args:
+
+
+* <b>x</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `complex64`, `int64`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `x`.
+
+
+- - -
+
+### tf.sign(x, name=None) <div class="md-anchor" id="sign">{#sign}</div>
+
+Returns an element-wise indication of the sign of a number.
+
+y = sign(x) = -1 if x < 0; 0 if x == 0; 1 if x > 0.
+
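+For example (illustrative; the outputs follow the rule above):
+
+```python
+# 'x' is [-3.5, 0.0, 7.2]
+tf.sign(x) ==> [-1.0, 0.0, 1.0]
+```
+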
+##### Args:
+
+
+* <b>x</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `x`.
+
+
+- - -
+
+### tf.inv(x, name=None) <div class="md-anchor" id="inv">{#inv}</div>
+
+Computes the reciprocal of x element-wise.
+
+I.e., \\(y = 1 / x\\).
+
+##### Args:
+
+
+* <b>x</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `complex64`, `int64`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `x`.
+
+
+- - -
+
+### tf.square(x, name=None) <div class="md-anchor" id="square">{#square}</div>
+
+Computes square of x element-wise.
+
+I.e., \\(y = x * x = x^2\\).
+
+##### Args:
+
+
+* <b>x</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `complex64`, `int64`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `x`.
+
+
+- - -
+
+### tf.round(x, name=None) <div class="md-anchor" id="round">{#round}</div>
+
+Rounds the values of a tensor to the nearest integer, element-wise.
+
+For example:
+
+```python
+# 'a' is [0.9, 2.5, 2.3, -4.4]
+tf.round(a) ==> [ 1.0, 3.0, 2.0, -4.0 ]
+```
+
+##### Args:
+
+
+* <b>x</b>: A `Tensor` of type `float` or `double`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor` of same shape and type as `x`.
+
+
+- - -
+
+### tf.sqrt(x, name=None) <div class="md-anchor" id="sqrt">{#sqrt}</div>
+
+Computes square root of x element-wise.
+
+I.e., \\(y = \sqrt{x} = x^{1/2}\\).
+
+##### Args:
+
+
+* <b>x</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `complex64`, `int64`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `x`.
+
+
+- - -
+
+### tf.rsqrt(x, name=None) <div class="md-anchor" id="rsqrt">{#rsqrt}</div>
+
+Computes reciprocal of square root of x element-wise.
+
+I.e., \\(y = 1 / \sqrt{x}\\).
+
+##### Args:
+
+
+* <b>x</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `complex64`, `int64`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `x`.
+
+
+- - -
+
+### tf.pow(x, y, name=None) <div class="md-anchor" id="pow">{#pow}</div>
+
+Computes the power of one value to another.
+
+Given a tensor `x` and a tensor `y`, this operation computes \\(x^y\\) for
+corresponding elements in `x` and `y`. For example:
+
+```
+# tensor 'x' is [[2, 2], [3, 3]]
+# tensor 'y' is [[8, 16], [2, 3]]
+tf.pow(x, y) ==> [[256, 65536], [9, 27]]
+```
+
+##### Args:
+
+
+* <b>x</b>: A `Tensor` of type `float`, `double`, `int32`, `complex64`, or `int64`.
+* <b>y</b>: A `Tensor` of type `float`, `double`, `int32`, `complex64`, or `int64`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`.
+
+
+- - -
+
+### tf.exp(x, name=None) <div class="md-anchor" id="exp">{#exp}</div>
+
+Computes exponential of x element-wise. \\(y = e^x\\).
+
+##### Args:
+
+
+* <b>x</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `complex64`, `int64`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `x`.
+
+
+- - -
+
+### tf.log(x, name=None) <div class="md-anchor" id="log">{#log}</div>
+
+Computes natural logarithm of x element-wise.
+
+I.e., \\(y = \log_e x\\).
+
+##### Args:
+
+
+* <b>x</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `complex64`, `int64`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `x`.
+
+
+- - -
+
+### tf.ceil(x, name=None) <div class="md-anchor" id="ceil">{#ceil}</div>
+
+Returns element-wise smallest integer not less than x.
+
+##### Args:
+
+
+* <b>x</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `x`.
+
+
+- - -
+
+### tf.floor(x, name=None) <div class="md-anchor" id="floor">{#floor}</div>
+
+Returns element-wise largest integer not greater than x.
+
+##### Args:
+
+
+* <b>x</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `x`.
+
+
+- - -
+
+### tf.maximum(x, y, name=None) <div class="md-anchor" id="maximum">{#maximum}</div>
+
+Returns the max of x and y (i.e. x > y ? x : y) element-wise. Supports broadcasting.
+
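+For example (illustrative; the scalar `3.0` is assumed to broadcast against
+every element of `x`, and the results are hand-computed):
+
+```python
+# 'x' is [[1., 4.], [9., 2.]]
+tf.maximum(x, 3.0) ==> [[3., 4.], [9., 3.]]
+```
+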
+##### Args:
+
+
+* <b>x</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`.
+* <b>y</b>: A `Tensor`. Must have the same type as `x`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `x`.
+
+
+- - -
+
+### tf.minimum(x, y, name=None) <div class="md-anchor" id="minimum">{#minimum}</div>
+
+Returns the min of x and y (i.e. x < y ? x : y) element-wise. Supports broadcasting.
+
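+For example (illustrative; the scalar `3.0` is assumed to broadcast against
+every element of `x`, and the results are hand-computed):
+
+```python
+# 'x' is [[1., 4.], [9., 2.]]
+tf.minimum(x, 3.0) ==> [[1., 3.], [3., 2.]]
+```
+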
+##### Args:
+
+
+* <b>x</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`.
+* <b>y</b>: A `Tensor`. Must have the same type as `x`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `x`.
+
+
+- - -
+
+### tf.cos(x, name=None) <div class="md-anchor" id="cos">{#cos}</div>
+
+Computes cos of x element-wise.
+
+##### Args:
+
+
+* <b>x</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `complex64`, `int64`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `x`.
+
+
+- - -
+
+### tf.sin(x, name=None) <div class="md-anchor" id="sin">{#sin}</div>
+
+Computes sin of x element-wise.
+
+##### Args:
+
+
+* <b>x</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `complex64`, `int64`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `x`.
+
+
+
+## Matrix Math Functions <div class="md-anchor" id="AUTOGENERATED-matrix-math-functions">{#AUTOGENERATED-matrix-math-functions}</div>
+
+TensorFlow provides several operations that you can use to add basic
+mathematical functions for matrices to your graph.
+
+- - -
+
+### tf.diag(diagonal, name=None) <div class="md-anchor" id="diag">{#diag}</div>
+
+Returns a diagonal tensor with given diagonal values.
+
+Given a `diagonal`, this operation returns a tensor with the `diagonal` and
+everything else padded with zeros. The diagonal is computed as follows:
+
+Assume `diagonal` has dimensions [D1,..., Dk], then the output is a tensor of
+rank 2k with dimensions [D1,..., Dk, D1,..., Dk] where:
+
+`output[i1,..., ik, i1,..., ik] = diagonal[i1, ..., ik]` and 0 everywhere else.
+
+For example:
+
+```prettyprint
+# 'diagonal' is [1, 2, 3, 4]
+tf.diag(diagonal) ==> [[1, 0, 0, 0]
+ [0, 2, 0, 0]
+ [0, 0, 3, 0]
+ [0, 0, 0, 4]]
+```
+
+##### Args:
+
+
+* <b>diagonal</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`.
+ Rank k tensor where k is at most 3.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `diagonal`.
+
+
+- - -
+
+### tf.transpose(a, perm=None, name='transpose') <div class="md-anchor" id="transpose">{#transpose}</div>
+
+Transposes `a`. Permutes the dimensions according to `perm`.
+
+The returned tensor's dimension i will correspond to the input dimension
+`perm[i]`. If `perm` is not given, it is set to (n-1...0), where n is
+the rank of the input tensor. Hence by default, this operation performs a
+regular matrix transpose on 2-D input Tensors.
+
+For example:
+
+```python
+# 'x' is [[1 2 3]
+# [4 5 6]]
+tf.transpose(x) ==> [[1 4]
+ [2 5]
+ [3 6]]
+
+# Equivalently
+tf.transpose(x, perm=[1, 0]) ==> [[1 4]
+ [2 5]
+ [3 6]]
+
+# 'perm' is more useful for n-dimensional tensors, for n > 2
+# 'x' is [[[1 2 3]
+# [4 5 6]]
+# [[7 8 9]
+# [10 11 12]]]
+# Take the transpose of the matrices in dimension-0
+tf.transpose(x, perm=[0, 2, 1]) ==> [[[1 4]
+ [2 5]
+ [3 6]]
+
+ [[7 10]
+ [8 11]
+ [9 12]]]
+```
+
+##### Args:
+
+
+* <b>a</b>: A `Tensor`.
+* <b>perm</b>: A permutation of the dimensions of `a`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A transposed `Tensor`.
+
+
+
+- - -
+
+### tf.matmul(a, b, transpose_a=False, transpose_b=False, a_is_sparse=False, b_is_sparse=False, name=None) <div class="md-anchor" id="matmul">{#matmul}</div>
+
+Multiplies matrix `a` by matrix `b`, producing `a` * `b`.
+
+The inputs must be two-dimensional matrices, with matching inner dimensions,
+possibly after transposition.
+
+Both matrices must be of the same type. The supported types are:
+`float`, `double`, `int32`, `complex64`.
+
+Either matrix can be transposed on the fly by setting the corresponding flag
+to `True`. This is `False` by default.
+
+If one or both of the matrices contain a lot of zeros, a more efficient
+multiplication algorithm can be used by setting the corresponding
+`a_is_sparse` or `b_is_sparse` flag to `True`. These are `False` by default.
+
+For example:
+
+```python
+# 2-D tensor `a`
+a = tf.constant([1, 2, 3, 4, 5, 6], shape=[2, 3]) => [[1. 2. 3.]
+ [4. 5. 6.]]
+# 2-D tensor `b`
+b = tf.constant([7, 8, 9, 10, 11, 12], shape=[3, 2]) => [[7. 8.]
+ [9. 10.]
+ [11. 12.]]
+c = tf.matmul(a, b) => [[58 64]
+ [139 154]]
+```
+
+##### Args:
+
+
+* <b>a</b>: `Tensor` of type `float`, `double`, `int32` or `complex64`.
+* <b>b</b>: `Tensor` with same type as `a`.
+* <b>transpose_a</b>: If `True`, `a` is transposed before multiplication.
+* <b>transpose_b</b>: If `True`, `b` is transposed before multiplication.
+* <b>a_is_sparse</b>: If `True`, `a` is treated as a sparse matrix.
+* <b>b_is_sparse</b>: If `True`, `b` is treated as a sparse matrix.
+* <b>name</b>: Name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor` of the same type as `a`.
+
+
+- - -
+
+### tf.batch_matmul(x, y, adj_x=None, adj_y=None, name=None) <div class="md-anchor" id="batch_matmul">{#batch_matmul}</div>
+
+Multiplies slices of two tensors in batches.
+
+Multiplies all slices of `Tensor` `x` and `y` (each slice can be
+viewed as an element of a batch), and arranges the individual results
+in a single output tensor of the same batch size. Each of the
+individual slices can optionally be adjointed (to adjoint a matrix
+means to transpose and conjugate it) before multiplication by setting
+the `adj_x` or `adj_y` flag to `True`, which are by default `False`.
+
+The input tensors `x` and `y` are 3-D or higher with shape `[..., r_x, c_x]`
+and `[..., r_y, c_y]`.
+
+The output tensor is 3-D or higher with shape `[..., r_o, c_o]`, where:
+
+ r_o = c_x if adj_x else r_x
+ c_o = r_y if adj_y else c_y
+
+It is computed as:
+
+ out[..., :, :] = matrix(x[..., :, :]) * matrix(y[..., :, :])
+
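+For example (an illustrative sketch; the batch slices and results are
+hand-worked, assuming the shape rules described above):
+
+```python
+# 'x' is a batch of two 2 x 2 matrices:
+#   x[0, :, :] = [[1. 2.]    x[1, :, :] = [[5. 6.]
+#                 [3. 4.]]                 [7. 8.]]
+x = tf.constant([1., 2., 3., 4., 5., 6., 7., 8.], shape=[2, 2, 2])
+# 'y' holds the identity matrix and twice the identity matrix.
+y = tf.constant([1., 0., 0., 1., 2., 0., 0., 2.], shape=[2, 2, 2])
+tf.batch_matmul(x, y) ==> [[[ 1.  2.]
+                            [ 3.  4.]]
+                           [[10. 12.]
+                            [14. 16.]]]
+```
+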
+##### Args:
+
+
+* <b>x</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `complex64`.
+ 3-D or higher with shape `[..., r_x, c_x]`.
+* <b>y</b>: A `Tensor`. Must have the same type as `x`.
+ 3-D or higher with shape `[..., r_y, c_y]`.
+* <b>adj_x</b>: An optional `bool`. Defaults to `False`.
+ If `True`, adjoint the slices of `x`. Defaults to `False`.
+* <b>adj_y</b>: An optional `bool`. Defaults to `False`.
+ If `True`, adjoint the slices of `y`. Defaults to `False`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `x`.
+ 3-D or higher with shape `[..., r_o, c_o]`
+
+
+
+- - -
+
+### tf.matrix_determinant(input, name=None) <div class="md-anchor" id="matrix_determinant">{#matrix_determinant}</div>
+
+Calculates the determinant of a square matrix.
+
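+For example (illustrative; the value is the hand-computed determinant):
+
+```python
+# 'a' is [[1., 2.],
+#         [3., 4.]]
+tf.matrix_determinant(a) ==> -2.0
+```
+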
+##### Args:
+
+
+* <b>input</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`.
+ A tensor of shape `[M, M]`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `input`.
+ A scalar, equal to the determinant of the input.
+
+
+- - -
+
+### tf.batch_matrix_determinant(input, name=None) <div class="md-anchor" id="batch_matrix_determinant">{#batch_matrix_determinant}</div>
+
+Calculates the determinants for a batch of square matrices.
+
+The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions
+form square matrices. The output is a 1-D tensor containing the determinants
+for all input submatrices `[..., :, :]`.
+
+##### Args:
+
+
+* <b>input</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`.
+ Shape is `[..., M, M]`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `input`. Shape is `[...]`.
+
+
+
+- - -
+
+### tf.matrix_inverse(input, name=None) <div class="md-anchor" id="matrix_inverse">{#matrix_inverse}</div>
+
+Calculates the inverse of a square invertible matrix. Checks for invertibility.
+
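+For example (illustrative; the inverse is hand-computed, and multiplying it
+by `a` gives the identity matrix):
+
+```python
+# 'a' is [[1., 2.],
+#         [3., 4.]]
+tf.matrix_inverse(a) ==> [[-2. ,  1. ]
+                          [ 1.5, -0.5]]
+```
+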
+##### Args:
+
+
+* <b>input</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`.
+ Shape is `[M, M]`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `input`.
+ Shape is `[M, M]` containing the matrix inverse of the input.
+
+
+- - -
+
+### tf.batch_matrix_inverse(input, name=None) <div class="md-anchor" id="batch_matrix_inverse">{#batch_matrix_inverse}</div>
+
+Calculates the inverse of square invertible matrices. Checks for invertibility.
+
+The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions
+form square matrices. The output is a tensor of the same shape as the input
+containing the inverse for all input submatrices `[..., :, :]`.
+
+##### Args:
+
+
+* <b>input</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`.
+ Shape is `[..., M, M]`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `input`. Shape is `[..., M, M]`.
+
+
+
+- - -
+
+### tf.cholesky(input, name=None) <div class="md-anchor" id="cholesky">{#cholesky}</div>
+
+Calculates the Cholesky decomposition of a square matrix.
+
+The input has to be symmetric and positive definite. Only the lower-triangular
+part of the input will be used for this operation. The upper-triangular part
+will not be read.
+
+The result is the lower-triangular matrix of the Cholesky decomposition of the
+input.
+
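+For example (an illustrative sketch; the factor is hand-computed for a small
+symmetric positive-definite matrix):
+
+```python
+# 'a' is [[4., 2.],
+#         [2., 5.]]
+l = tf.cholesky(a)
+# 'l' is lower triangular, and tf.matmul(l, l, transpose_b=True) recovers 'a'.
+l ==> [[2., 0.],
+       [1., 2.]]
+```
+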
+##### Args:
+
+
+* <b>input</b>: A `Tensor`. Must be one of the following types: `float64`, `float32`.
+ Shape is `[M, M]`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `input`. Shape is `[M, M]`.
+
+
+- - -
+
+### tf.batch_cholesky(input, name=None) <div class="md-anchor" id="batch_cholesky">{#batch_cholesky}</div>
+
+Calculates the Cholesky decomposition of a batch of square matrices.
+
+The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions
+form square matrices, with the same constraints as the single matrix Cholesky
+decomposition above. The output is a tensor of the same shape as the input
+containing the Cholesky decompositions for all input submatrices `[..., :, :]`.
+
+##### Args:
+
+
+* <b>input</b>: A `Tensor`. Must be one of the following types: `float64`, `float32`.
+ Shape is `[..., M, M]`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `input`. Shape is `[..., M, M]`.
+
+
+
+## Complex Number Functions <div class="md-anchor" id="AUTOGENERATED-complex-number-functions">{#AUTOGENERATED-complex-number-functions}</div>
+
+TensorFlow provides several operations that you can use to add complex number
+functions to your graph.
+
+- - -
+
+### tf.complex(real, imag, name=None) <div class="md-anchor" id="complex">{#complex}</div>
+
+Converts two real numbers to a complex number.
+
+Given a tensor `real` representing the real part of a complex number, and a
+tensor `imag` representing the imaginary part of a complex number, this
+operation computes complex numbers elementwise of the form \\(a + bj\\),
+where *a* represents the `real` part and *b* represents the `imag` part.
+
+The input tensors `real` and `imag` must be the same shape.
+
+For example:
+
+```
+# tensor 'real' is [2.25, 3.25]
+# tensor `imag` is [4.75, 5.75]
+tf.complex(real, imag) ==> [[2.25 + 4.75j], [3.25 + 5.75j]]
+```
+
+##### Args:
+
+
+* <b>real</b>: A `Tensor` of type `float`.
+* <b>imag</b>: A `Tensor` of type `float`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor` of type `complex64`.
+
+
+- - -
+
+### tf.complex_abs(x, name=None) <div class="md-anchor" id="complex_abs">{#complex_abs}</div>
+
+Computes the complex absolute value of a tensor.
+
+Given a tensor `x` of complex numbers, this operation returns a tensor of type
+`float` that is the absolute value of each element in `x`. All elements in `x`
+must be complex numbers of the form \\(a + bj\\). The absolute value is
+computed as \\( \sqrt{a^2 + b^2}\\).
+
+For example:
+
+```
+# tensor 'x' is [[-2.25 + 4.75j], [-3.25 + 5.75j]]
+tf.complex_abs(x) ==> [5.25594902, 6.60492229]
+```
+
+##### Args:
+
+
+* <b>x</b>: A `Tensor` of type `complex64`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor` of type `float32`.
+
+
+- - -
+
+### tf.conj(in_, name=None) <div class="md-anchor" id="conj">{#conj}</div>
+
+Returns the complex conjugate of a complex number.
+
+Given a tensor `in` of complex numbers, this operation returns a tensor of
+complex numbers that are the complex conjugate of each element in `in`. The
+complex numbers in `in` must be of the form \\(a + bj\\), where *a* is the real
+part and *b* is the imaginary part.
+
+The complex conjugate returned by this operation is of the form \\(a - bj\\).
+
+For example:
+
+```
+# tensor 'in' is [-2.25 + 4.75j, 3.25 + 5.75j]
+tf.conj(in) ==> [-2.25 - 4.75j, 3.25 - 5.75j]
+```
+
+##### Args:
+
+
+* <b>in_</b>: A `Tensor` of type `complex64`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor` of type `complex64`.
+
+
+- - -
+
+### tf.imag(in_, name=None) <div class="md-anchor" id="imag">{#imag}</div>
+
+Returns the imaginary part of a complex number.
+
+Given a tensor `in` of complex numbers, this operation returns a tensor of type
+`float` that is the imaginary part of each element in `in`. All elements in `in`
+must be complex numbers of the form \\(a + bj\\), where *a* is the real part
+and *b* is the imaginary part returned by this operation.
+
+For example:
+
+```
+# tensor 'in' is [-2.25 + 4.75j, 3.25 + 5.75j]
+tf.imag(in) ==> [4.75, 5.75]
+```
+
+##### Args:
+
+
+* <b>in_</b>: A `Tensor` of type `complex64`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor` of type `float32`.
+
+
+- - -
+
+### tf.real(in_, name=None) <div class="md-anchor" id="real">{#real}</div>
+
+Returns the real part of a complex number.
+
+Given a tensor `in` of complex numbers, this operation returns a tensor of type
+`float` that is the real part of each element in `in`. All elements in `in`
+must be complex numbers of the form \\(a + bj\\), where *a* is the real part
+returned by this operation and *b* is the imaginary part.
+
+For example:
+
+```
+# tensor 'in' is [-2.25 + 4.75j, 3.25 + 5.75j]
+tf.real(in) ==> [-2.25, 3.25]
+```
+
+##### Args:
+
+
+* <b>in_</b>: A `Tensor` of type `complex64`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor` of type `float32`.
+
+
+
+## Reduction <div class="md-anchor" id="AUTOGENERATED-reduction">{#AUTOGENERATED-reduction}</div>
+
+TensorFlow provides several operations that you can use to perform
+common math computations that reduce various dimensions of a tensor.
+
+- - -
+
+### tf.reduce_sum(input_tensor, reduction_indices=None, keep_dims=False, name=None) <div class="md-anchor" id="reduce_sum">{#reduce_sum}</div>
+
+Computes the sum of elements across dimensions of a tensor.
+
+Reduces `input_tensor` along the dimensions given in `reduction_indices`.
+Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each
+entry in `reduction_indices`. If `keep_dims` is true, the reduced dimensions
+are retained with length 1.
+
+If `reduction_indices` has no entries, all dimensions are reduced, and a
+tensor with a single element is returned.
+
+For example:
+
+```python
+# 'x' is [[1, 1, 1]
+#         [1, 1, 1]]
+tf.reduce_sum(x) ==> 6
+tf.reduce_sum(x, 0) ==> [2, 2, 2]
+tf.reduce_sum(x, 1) ==> [3, 3]
+tf.reduce_sum(x, 1, keep_dims=True) ==> [[3], [3]]
+tf.reduce_sum(x, [0, 1]) ==> 6
+```
+
+##### Args:
+
+
+* <b>input_tensor</b>: The tensor to reduce. Should have numeric type.
+* <b>reduction_indices</b>: The dimensions to reduce. If `None` (the default),
+ reduces all dimensions.
+* <b>keep_dims</b>: If true, retains reduced dimensions with length 1.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The reduced tensor.
+
+
+- - -
+
+### tf.reduce_prod(input_tensor, reduction_indices=None, keep_dims=False, name=None) <div class="md-anchor" id="reduce_prod">{#reduce_prod}</div>
+
+Computes the product of elements across dimensions of a tensor.
+
+Reduces `input_tensor` along the dimensions given in `reduction_indices`.
+Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each
+entry in `reduction_indices`. If `keep_dims` is true, the reduced dimensions
+are retained with length 1.
+
+If `reduction_indices` has no entries, all dimensions are reduced, and a
+tensor with a single element is returned.
+
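+For example, in the same style as `tf.reduce_sum` above (illustrative; the
+products are hand-computed):
+
+```python
+# 'x' is [[1, 2, 3]
+#         [4, 5, 6]]
+tf.reduce_prod(x) ==> 720
+tf.reduce_prod(x, 0) ==> [4, 10, 18]
+tf.reduce_prod(x, 1) ==> [6, 120]
+```
+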
+##### Args:
+
+
+* <b>input_tensor</b>: The tensor to reduce. Should have numeric type.
+* <b>reduction_indices</b>: The dimensions to reduce. If `None` (the default),
+ reduces all dimensions.
+* <b>keep_dims</b>: If true, retains reduced dimensions with length 1.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The reduced tensor.
+
+
+- - -
+
+### tf.reduce_min(input_tensor, reduction_indices=None, keep_dims=False, name=None) <div class="md-anchor" id="reduce_min">{#reduce_min}</div>
+
+Computes the minimum of elements across dimensions of a tensor.
+
+Reduces `input_tensor` along the dimensions given in `reduction_indices`.
+Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each
+entry in `reduction_indices`. If `keep_dims` is true, the reduced dimensions
+are retained with length 1.
+
+If `reduction_indices` has no entries, all dimensions are reduced, and a
+tensor with a single element is returned.
+
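+For example (illustrative; the minima are hand-computed):
+
+```python
+# 'x' is [[3, 1, 2]
+#         [4, 0, 5]]
+tf.reduce_min(x) ==> 0
+tf.reduce_min(x, 0) ==> [3, 0, 2]
+tf.reduce_min(x, 1) ==> [1, 0]
+```
+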
+##### Args:
+
+
+* <b>input_tensor</b>: The tensor to reduce. Should have numeric type.
+* <b>reduction_indices</b>: The dimensions to reduce. If `None` (the default),
+ reduces all dimensions.
+* <b>keep_dims</b>: If true, retains reduced dimensions with length 1.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The reduced tensor.
+
+
+- - -
+
+### tf.reduce_max(input_tensor, reduction_indices=None, keep_dims=False, name=None) <div class="md-anchor" id="reduce_max">{#reduce_max}</div>
+
+Computes the maximum of elements across dimensions of a tensor.
+
+Reduces `input_tensor` along the dimensions given in `reduction_indices`.
+Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each
+entry in `reduction_indices`. If `keep_dims` is true, the reduced dimensions
+are retained with length 1.
+
+If `reduction_indices` has no entries, all dimensions are reduced, and a
+tensor with a single element is returned.
+
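+For example (illustrative; the maxima are hand-computed):
+
+```python
+# 'x' is [[3, 1, 2]
+#         [4, 0, 5]]
+tf.reduce_max(x) ==> 5
+tf.reduce_max(x, 0) ==> [4, 1, 5]
+tf.reduce_max(x, 1, keep_dims=True) ==> [[3], [5]]
+```
+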
+##### Args:
+
+
+* <b>input_tensor</b>: The tensor to reduce. Should have numeric type.
+* <b>reduction_indices</b>: The dimensions to reduce. If `None` (the default),
+ reduces all dimensions.
+* <b>keep_dims</b>: If true, retains reduced dimensions with length 1.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The reduced tensor.
+
+
+- - -
+
+### tf.reduce_mean(input_tensor, reduction_indices=None, keep_dims=False, name=None) <div class="md-anchor" id="reduce_mean">{#reduce_mean}</div>
+
+Computes the mean of elements across dimensions of a tensor.
+
+Reduces `input_tensor` along the dimensions given in `reduction_indices`.
+Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each
+entry in `reduction_indices`. If `keep_dims` is true, the reduced dimensions
+are retained with length 1.
+
+If `reduction_indices` has no entries, all dimensions are reduced, and a
+tensor with a single element is returned.
+
+For example:
+
+```python
+# 'x' is [[1., 1.]
+#         [2., 2.]]
+tf.reduce_mean(x) ==> 1.5
+tf.reduce_mean(x, 0) ==> [1.5, 1.5]
+tf.reduce_mean(x, 1) ==> [1., 2.]
+```
+
+##### Args:
+
+
+* <b>input_tensor</b>: The tensor to reduce. Should have numeric type.
+* <b>reduction_indices</b>: The dimensions to reduce. If `None` (the default),
+ reduces all dimensions.
+* <b>keep_dims</b>: If true, retains reduced dimensions with length 1.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The reduced tensor.
+
+
+- - -
+
+### tf.reduce_all(input_tensor, reduction_indices=None, keep_dims=False, name=None) <div class="md-anchor" id="reduce_all">{#reduce_all}</div>
+
+Computes the "logical and" of elements across dimensions of a tensor.
+
+Reduces `input_tensor` along the dimensions given in `reduction_indices`.
+Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each
+entry in `reduction_indices`. If `keep_dims` is true, the reduced dimensions
+are retained with length 1.
+
+If `reduction_indices` has no entries, all dimensions are reduced, and a
+tensor with a single element is returned.
+
+For example:
+
+```python
+# 'x' is [[True, True]
+#         [False, False]]
+tf.reduce_all(x) ==> False
+tf.reduce_all(x, 0) ==> [False, False]
+tf.reduce_all(x, 1) ==> [True, False]
+```
+
+##### Args:
+
+
+* <b>input_tensor</b>: The boolean tensor to reduce.
+* <b>reduction_indices</b>: The dimensions to reduce. If `None` (the default),
+ reduces all dimensions.
+* <b>keep_dims</b>: If true, retains reduced dimensions with length 1.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The reduced tensor.
+
+
+- - -
+
+### tf.reduce_any(input_tensor, reduction_indices=None, keep_dims=False, name=None) <div class="md-anchor" id="reduce_any">{#reduce_any}</div>
+
+Computes the "logical or" of elements across dimensions of a tensor.
+
+Reduces `input_tensor` along the dimensions given in `reduction_indices`.
+Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each
+entry in `reduction_indices`. If `keep_dims` is true, the reduced dimensions
+are retained with length 1.
+
+If `reduction_indices` has no entries, all dimensions are reduced, and a
+tensor with a single element is returned.
+
+For example:
+
+```python
+# 'x' is [[True, True]
+#         [False, False]]
+tf.reduce_any(x) ==> True
+tf.reduce_any(x, 0) ==> [True, True]
+tf.reduce_any(x, 1) ==> [True, False]
+```
+
+##### Args:
+
+
+* <b>input_tensor</b>: The boolean tensor to reduce.
+* <b>reduction_indices</b>: The dimensions to reduce. If `None` (the default),
+ reduces all dimensions.
+* <b>keep_dims</b>: If true, retains reduced dimensions with length 1.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The reduced tensor.
+
+
+
+- - -
+
+### tf.accumulate_n(inputs, shape=None, tensor_dtype=None, name=None) <div class="md-anchor" id="accumulate_n">{#accumulate_n}</div>
+
+Returns the element-wise sum of a list of tensors.
+
+Optionally, pass `shape` and `tensor_dtype` for shape and type checking,
+otherwise, these are inferred.
+
+For example:
+
+```python
+# tensor 'a' is [[1, 2], [3, 4]]
+# tensor `b` is [[5, 0], [0, 6]]
+tf.accumulate_n([a, b, a]) ==> [[7, 4], [6, 14]]
+
+# Explicitly pass shape and type
+tf.accumulate_n([a, b, a], shape=[2, 2], tensor_dtype=tf.int32)
+ ==> [[7, 4], [6, 14]]
+```
+
+##### Args:
+
+
+* <b>inputs</b>: A list of `Tensor` objects, each with same shape and type.
+* <b>shape</b>: Shape of elements of `inputs`.
+* <b>tensor_dtype</b>: The type of `inputs`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor` of same shape and type as the elements of `inputs`.
+
+##### Raises:
+
+
+* <b>ValueError</b>: If `inputs` don't all have same shape and dtype or the shape
+ cannot be inferred.
+
+
+
+## Segmentation <div class="md-anchor" id="AUTOGENERATED-segmentation">{#AUTOGENERATED-segmentation}</div>
+
+TensorFlow provides several operations that you can use to perform common
+math computations on tensor segments.
+Here a segmentation is a partitioning of a tensor along
+the first dimension, i.e. it defines a mapping from the first dimension onto
+`segment_ids`. The `segment_ids` tensor should be the size of
+the first dimension, `d0`, with consecutive IDs in the range `0` to `k`,
+where `k<d0`.
+In particular, a segmentation of a matrix tensor is a mapping of rows to
+segments.
+
+For example:
+
+```python
+c = tf.constant([[1,2,3,4], [-1,-2,-3,-4], [5,6,7,8]])
+tf.segment_sum(c, tf.constant([0, 0, 1]))
+ ==> [[0 0 0 0]
+ [5 6 7 8]]
+```
+
+- - -
+
+### tf.segment_sum(data, segment_ids, name=None) <div class="md-anchor" id="segment_sum">{#segment_sum}</div>
+
+Computes the sum along segments of a tensor.
+
+Read [the section on Segmentation](../python/math_ops.md#AUTOGENERATED-segmentation)
+for an explanation of segments.
+
+Computes a tensor such that
+\\(output_i = \sum_j data_j\\) where sum is over `j` such
+that `segment_ids[j] == i`.
+
+<div style="width:70%; margin:auto; margin-bottom:10px; margin-top:20px;">
+<img style="width:100%" src="../images/SegmentSum.png" alt>
+</div>
+
+##### Args:
+
+
+* <b>data</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`.
+* <b>segment_ids</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
+ A 1-D tensor whose size is equal to the size of `data`'s
+ first dimension. Values should be sorted and can be repeated.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `data`.
+ Has same shape as data, except for dimension_0 which
+ has size `k`, the number of segments.
+
+
+- - -
+
+### tf.segment_prod(data, segment_ids, name=None) <div class="md-anchor" id="segment_prod">{#segment_prod}</div>
+
+Computes the product along segments of a tensor.
+
+Read [the section on Segmentation](../python/math_ops.md#AUTOGENERATED-segmentation)
+for an explanation of segments.
+
+Computes a tensor such that
+\\(output_i = \prod_j data_j\\) where the product is over `j` such
+that `segment_ids[j] == i`.
+
+<div style="width:70%; margin:auto; margin-bottom:10px; margin-top:20px;">
+<img style="width:100%" src="../images/SegmentProd.png" alt>
+</div>
+
+##### Args:
+
+
+* <b>data</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`.
+* <b>segment_ids</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
+ A 1-D tensor whose size is equal to the size of `data`'s
+ first dimension. Values should be sorted and can be repeated.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `data`.
+ Has same shape as data, except for dimension_0 which
+ has size `k`, the number of segments.
+
+
+- - -
+
+### tf.segment_min(data, segment_ids, name=None) <div class="md-anchor" id="segment_min">{#segment_min}</div>
+
+Computes the minimum along segments of a tensor.
+
+Read [the section on Segmentation](../python/math_ops.md#AUTOGENERATED-segmentation)
+for an explanation of segments.
+
+Computes a tensor such that
+\\(output_i = \min_j(data_j)\\) where `min` is over `j` such
+that `segment_ids[j] == i`.
+
+<div style="width:70%; margin:auto; margin-bottom:10px; margin-top:20px;">
+<img style="width:100%" src="../images/SegmentMin.png" alt>
+</div>
+
+##### Args:
+
+
+* <b>data</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`.
+* <b>segment_ids</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
+ A 1-D tensor whose size is equal to the size of `data`'s
+ first dimension. Values should be sorted and can be repeated.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `data`.
+ Has same shape as data, except for dimension_0 which
+ has size `k`, the number of segments.
+
+
+- - -
+
+### tf.segment_max(data, segment_ids, name=None) <div class="md-anchor" id="segment_max">{#segment_max}</div>
+
+Computes the maximum along segments of a tensor.
+
+Read [the section on Segmentation](../python/math_ops.md#AUTOGENERATED-segmentation)
+for an explanation of segments.
+
+Computes a tensor such that
+\\(output_i = \max_j(data_j)\\) where `max` is over `j` such
+that `segment_ids[j] == i`.
+
+<div style="width:70%; margin:auto; margin-bottom:10px; margin-top:20px;">
+<img style="width:100%" src="../images/SegmentMax.png" alt>
+</div>
+
+##### Args:
+
+
+* <b>data</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`.
+* <b>segment_ids</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
+ A 1-D tensor whose size is equal to the size of `data`'s
+ first dimension. Values should be sorted and can be repeated.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `data`.
+ Has same shape as data, except for dimension_0 which
+ has size `k`, the number of segments.
+
+
+- - -
+
+### tf.segment_mean(data, segment_ids, name=None) <div class="md-anchor" id="segment_mean">{#segment_mean}</div>
+
+Computes the mean along segments of a tensor.
+
+Read [the section on Segmentation](../python/math_ops.md#AUTOGENERATED-segmentation)
+for an explanation of segments.
+
+Computes a tensor such that
+\\(output_i = \frac{\sum_j data_j}{N}\\) where `mean` is
+over `j` such that `segment_ids[j] == i` and `N` is the total number of
+values summed.
+
+<div style="width:70%; margin:auto; margin-bottom:10px; margin-top:20px;">
+<img style="width:100%" src="../images/SegmentMean.png" alt>
+</div>
+
+##### Args:
+
+
+* <b>data</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`.
+* <b>segment_ids</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
+ A 1-D tensor whose size is equal to the size of `data`'s
+ first dimension. Values should be sorted and can be repeated.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `data`.
+ Has same shape as data, except for dimension_0 which
+ has size `k`, the number of segments.
+
+
+
+- - -
+
+### tf.unsorted_segment_sum(data, segment_ids, num_segments, name=None) <div class="md-anchor" id="unsorted_segment_sum">{#unsorted_segment_sum}</div>
+
+Computes the sum along segments of a tensor.
+
+Read [the section on Segmentation](../python/math_ops.md#AUTOGENERATED-segmentation)
+for an explanation of segments.
+
+Computes a tensor such that
+\\(output_i = \sum_j data_j\\) where sum is over `j` such
+that `segment_ids[j] == i`. Unlike `SegmentSum`, `segment_ids`
+need not be sorted and need not cover all values in the full
+range of valid values.
+
+If the sum is empty for a given segment ID `i`, `output[i] = 0`.
+
+`num_segments` should equal the number of distinct segment IDs.
+
+<div style="width:70%; margin:auto; margin-bottom:10px; margin-top:20px;">
+<img style="width:100%" src="../images/UnsortedSegmentSum.png" alt>
+</div>
+
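+For example, in the style of the segmentation example above (illustrative;
+the sums are hand-computed):
+
+```python
+c = tf.constant([[1,2,3,4], [-1,-2,-3,-4], [5,6,7,8]])
+
+# The IDs are unsorted: rows 0 and 2 go to segment 0, row 1 to segment 1.
+tf.unsorted_segment_sum(c, tf.constant([0, 1, 0]), 2)
+  ==> [[ 6  8 10 12]
+       [-1 -2 -3 -4]]
+```
+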
+##### Args:
+
+
+* <b>data</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`.
+* <b>segment_ids</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
+ A 1-D tensor whose size is equal to the size of `data`'s
+ first dimension.
+* <b>num_segments</b>: A `Tensor` of type `int32`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `data`.
+ Has same shape as data, except for dimension_0 which
+ has size `num_segments`.
+
+
+
+- - -
+
+### tf.sparse_segment_sum(data, indices, segment_ids, name=None) <div class="md-anchor" id="sparse_segment_sum">{#sparse_segment_sum}</div>
+
+Computes the sum along sparse segments of a tensor.
+
+Read [the section on Segmentation](../python/math_ops.md#AUTOGENERATED-segmentation)
+for an explanation of segments.
+
+Like `SegmentSum`, but `segment_ids` can have rank less than `data`'s first
+dimension, selecting a subset of dimension_0, specified by `indices`.
+
+For example:
+
+```prettyprint
+c = tf.constant([[1,2,3,4], [-1,-2,-3,-4], [5,6,7,8]])
+
+# Select two rows, one segment.
+tf.sparse_segment_sum(c, tf.constant([0, 1]), tf.constant([0, 0]))
+ ==> [[0 0 0 0]]
+
+# Select two rows, two segments.
+tf.sparse_segment_sum(c, tf.constant([0, 1]), tf.constant([0, 1]))
+ ==> [[ 1 2 3 4]
+ [-1 -2 -3 -4]]
+
+# Select all rows, two segments.
+tf.sparse_segment_sum(c, tf.constant([0, 1, 2]), tf.constant([0, 0, 1]))
+ ==> [[0 0 0 0]
+ [5 6 7 8]]
+
+# Which is equivalent to:
+tf.segment_sum(c, tf.constant([0, 0, 1]))
+```
+
+##### Args:
+
+
+* <b>data</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`.
+* <b>indices</b>: A `Tensor` of type `int32`.
+ A 1-D tensor. Has same rank as `segment_ids`.
+* <b>segment_ids</b>: A `Tensor` of type `int32`.
+ A 1-D tensor. Values should be sorted and can be repeated.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `data`.
+ Has same shape as data, except for dimension_0 which
+ has size `k`, the number of segments.
+
+
+- - -
+
+### tf.sparse_segment_mean(data, indices, segment_ids, name=None) <div class="md-anchor" id="sparse_segment_mean">{#sparse_segment_mean}</div>
+
+Computes the mean along sparse segments of a tensor.
+
+Read [the section on Segmentation](../python/math_ops.md#AUTOGENERATED-segmentation)
+for an explanation of segments.
+
+Like `SegmentMean`, but `segment_ids` can have rank less than `data`'s first
+dimension, selecting a subset of dimension_0, specified by `indices`.
+
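+For example, mirroring the `tf.sparse_segment_sum` example above
+(illustrative; the means are hand-computed):
+
+```python
+c = tf.constant([[1., 2., 3., 4.], [-1., -2., -3., -4.], [5., 6., 7., 8.]])
+
+# Select rows 0 and 2 and average them into a single segment.
+tf.sparse_segment_mean(c, tf.constant([0, 2]), tf.constant([0, 0]))
+  ==> [[3. 4. 5. 6.]]
+
+# Select all rows; rows 0 and 1 form segment 0, row 2 forms segment 1.
+tf.sparse_segment_mean(c, tf.constant([0, 1, 2]), tf.constant([0, 0, 1]))
+  ==> [[0. 0. 0. 0.]
+       [5. 6. 7. 8.]]
+```
+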
+##### Args:
+
+
+* <b>data</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`.
+* <b>indices</b>: A `Tensor` of type `int32`.
+ A 1-D tensor. Has same rank as `segment_ids`.
+* <b>segment_ids</b>: A `Tensor` of type `int32`.
+ A 1-D tensor. Values should be sorted and can be repeated.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `data`.
+ Has same shape as data, except for dimension_0 which
+ has size `k`, the number of segments.
+
+
+
+
+## Sequence Comparison and Indexing <div class="md-anchor" id="AUTOGENERATED-sequence-comparison-and-indexing">{#AUTOGENERATED-sequence-comparison-and-indexing}</div>
+
+TensorFlow provides several operations that you can use to add sequence
+comparison and index extraction to your graph. You can use these operations to
+determine sequence differences and determine the indexes of specific values in
+a tensor.
+
+- - -
+
+### tf.argmin(input, dimension, name=None) <div class="md-anchor" id="argmin">{#argmin}</div>
+
+Returns the index with the smallest value across dimensions of a tensor.
+
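+For example (illustrative; the index values are hand-computed):
+
+```python
+# 'input' is [[5, 2, 9]
+#             [3, 7, 1]]
+tf.argmin(input, 0) ==> [1, 0, 1]
+tf.argmin(input, 1) ==> [1, 2]
+```
+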
+##### Args:
+
+
+* <b>input</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `qint8`, `quint8`, `qint32`.
+* <b>dimension</b>: A `Tensor` of type `int32`.
+ int32, 0 <= dimension < rank(input). Describes which dimension
+ of the input Tensor to reduce across. For vectors, use dimension = 0.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor` of type `int64`.
+
+
+- - -
+
+### tf.argmax(input, dimension, name=None) <div class="md-anchor" id="argmax">{#argmax}</div>
+
+Returns the index with the largest value across dimensions of a tensor.
+
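+For example (illustrative; the index values are hand-computed):
+
+```python
+# 'input' is [[5, 2, 9]
+#             [3, 7, 1]]
+tf.argmax(input, 0) ==> [0, 1, 0]
+tf.argmax(input, 1) ==> [2, 1]
+```
+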
+##### Args:
+
+
+* <b>input</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `qint8`, `quint8`, `qint32`.
+* <b>dimension</b>: A `Tensor` of type `int32`.
+ int32, 0 <= dimension < rank(input). Describes which dimension
+ of the input Tensor to reduce across. For vectors, use dimension = 0.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor` of type `int64`.
+
+
+
+- - -
+
+### tf.listdiff(x, y, name=None) <div class="md-anchor" id="listdiff">{#listdiff}</div>
+
+Computes the difference between two lists of numbers.
+
+Given a list `x` and a list `y`, this operation returns a list `out` that
+represents all numbers that are in `x` but not in `y`. The returned list `out`
+is sorted in the same order that the numbers appear in `x` (duplicates are
+preserved). This operation also returns a list `idx` that represents the
+position of each `out` element in `x`. In other words:
+
+`out[i] = x[idx[i]] for i in [0, 1, ..., len(out) - 1]`
+
+For example, given this input:
+
+```prettyprint
+x = [1, 2, 3, 4, 5, 6]
+y = [1, 3, 5]
+```
+
+This operation would return:
+
+```prettyprint
+out ==> [2, 4, 6]
+idx ==> [1, 3, 5]
+```
+
+##### Args:
+
+
+* <b>x</b>: A `Tensor`. 1-D. Values to keep.
+* <b>y</b>: A `Tensor`. Must have the same type as `x`. 1-D. Values to remove.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A tuple of `Tensor` objects (out, idx).
+
+* <b>out</b>: A `Tensor`. Has the same type as `x`. 1-D. Values present in `x` but not in `y`.
+* <b>idx</b>: A `Tensor` of type `int32`. 1-D. Positions of `x` values preserved in `out`.
+
+
+- - -
+
+### tf.where(input, name=None) <div class="md-anchor" id="where">{#where}</div>
+
+Returns locations of true values in a boolean tensor.
+
+This operation returns the coordinates of true elements in `input`. The
+coordinates are returned in a 2-D tensor where the first dimension (rows)
+represents the number of true elements, and the second dimension (columns)
+represents the coordinates of the true elements. Keep in mind, the shape of
+the output tensor can vary depending on how many true values there are in
+`input`. Indices are output in row-major order.
+
+For example:
+
+```prettyprint
+# 'input' tensor is [[True, False]
+# [True, False]]
+# 'input' has two true values, so output has two coordinates.
+# 'input' has rank of 2, so coordinates have two indices.
+where(input) ==> [[0, 0],
+ [1, 0]]
+
+# `input` tensor is [[[True, False]
+# [True, False]]
+# [[False, True]
+# [False, True]]
+# [[False, False]
+# [False, True]]]
+# 'input' has 5 true values, so output has 5 coordinates.
+# 'input' has rank of 3, so coordinates have three indices.
+where(input) ==> [[0, 0, 0],
+ [0, 1, 0],
+ [1, 0, 1],
+ [1, 1, 1],
+ [2, 1, 1]]
+```
+
+##### Args:
+
+
+* <b>input</b>: A `Tensor` of type `bool`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor` of type `int64`.
+
+
+- - -
+
+### tf.unique(x, name=None) <div class="md-anchor" id="unique">{#unique}</div>
+
+Finds unique elements in a 1-D tensor.
+
+This operation returns a tensor `y` containing all of the unique elements of `x`
+sorted in the same order that they occur in `x`. This operation also returns a
+tensor `idx` the same size as `x` that contains the index of each value of `x`
+in the unique output `y`. In other words:
+
+`y[idx[i]] = x[i] for i in [0, 1, ..., len(x) - 1]`
+
+For example:
+
+```prettyprint
+# tensor 'x' is [1, 1, 2, 4, 4, 4, 7, 8, 8]
+y, idx = unique(x)
+y ==> [1, 2, 4, 7, 8]
+idx ==> [0, 0, 1, 2, 2, 2, 3, 4, 4]
+```
+
+##### Args:
+
+
+* <b>x</b>: A `Tensor`. 1-D.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A tuple of `Tensor` objects (y, idx).
+
+* <b>y</b>: A `Tensor`. Has the same type as `x`. 1-D.
+* <b>idx</b>: A `Tensor` of type `int32`. 1-D.
+
+
+
+- - -
+
+### tf.edit_distance(hypothesis, truth, normalize=True, name='edit_distance') <div class="md-anchor" id="edit_distance">{#edit_distance}</div>
+
+Computes the Levenshtein distance between sequences.
+
+This operation takes variable-length sequences (`hypothesis` and `truth`),
+each provided as a `SparseTensor`, and computes the Levenshtein distance.
+You can normalize the edit distance by length of `truth` by setting
+`normalize` to true.
+
+For example, given the following input:
+
+```python
+# 'hypothesis' is a tensor of shape `[2, 1]` with variable-length values:
+# (0,0) = ["a"]
+# (1,0) = ["b"]
+hypothesis = tf.SparseTensor(
+ [[0, 0, 0],
+ [1, 0, 0]],
+ ["a", "b"]
+ (2, 1, 1))
+
+# 'truth' is a tensor of shape `[2, 2]` with variable-length values:
+# (0,0) = []
+# (0,1) = ["a"]
+# (1,0) = ["b", "c"]
+# (1,1) = ["a"]
+truth = tf.SparseTensor(
+ [[0, 1, 0],
+ [1, 0, 0],
+ [1, 0, 1],
+ [1, 1, 0]],
+ ["a", "b", "c", "a"],
+ (2, 2, 2))
+
+normalize = True
+```
+
+This operation would return the following:
+
+```python
+# 'output' is a tensor of shape `[2, 2]` with edit distances normalized
+# by 'truth' lengths.
+output ==> [[inf, 1.0], # (0,0): no truth, (0,1): no hypothesis
+ [0.5, 1.0]] # (1,0): addition, (1,1): no hypothesis
+```
+
+##### Args:
+
+
+* <b>hypothesis</b>: A `SparseTensor` containing hypothesis sequences.
+* <b>truth</b>: A `SparseTensor` containing truth sequences.
+* <b>normalize</b>: A `bool`. If `True`, normalizes the Levenshtein distance by
+ length of `truth.`
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A dense `Tensor` with rank `R - 1`, where R is the rank of the
+ `SparseTensor` inputs `hypothesis` and `truth`.
+
+##### Raises:
+
+
+* <b>TypeError</b>: If either `hypothesis` or `truth` are not a `SparseTensor`.
+
+
+
+- - -
+
+### tf.invert_permutation(x, name=None) <div class="md-anchor" id="invert_permutation">{#invert_permutation}</div>
+
+Computes the inverse permutation of a tensor.
+
+This operation computes the inverse of an index permutation. It takes a 1-D
+integer tensor `x`, which represents the indices of a zero-based array, and
+swaps each value with its index position. In other words, for an output tensor
+`y` and an input tensor `x`, this operation computes the following:
+
+`y[x[i]] = i for i in [0, 1, ..., len(x) - 1]`
+
+The values of `x` must be a permutation of `0, 1, ..., len(x) - 1`; there can be no duplicate or negative values.
+
+For example:
+
+```prettyprint
+# tensor `x` is [3, 4, 0, 2, 1]
+invert_permutation(x) ==> [2, 4, 3, 0, 1]
+```
+
+##### Args:
+
+
+* <b>x</b>: A `Tensor` of type `int32`. 1-D.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor` of type `int32`. 1-D.
+
+