Diffstat (limited to 'tensorflow/g3doc/api_docs/python/framework.md')
-rw-r--r--  tensorflow/g3doc/api_docs/python/framework.md  487
1 file changed, 228 insertions(+), 259 deletions(-)
diff --git a/tensorflow/g3doc/api_docs/python/framework.md b/tensorflow/g3doc/api_docs/python/framework.md
index eab4ec0152..42b02852c7 100644
--- a/tensorflow/g3doc/api_docs/python/framework.md
+++ b/tensorflow/g3doc/api_docs/python/framework.md
@@ -1,46 +1,15 @@
<!-- This file is machine generated: DO NOT EDIT! -->
-# Building Graphs <a class="md-anchor" id="AUTOGENERATED-building-graphs"></a>
-<!-- TOC-BEGIN This section is generated by neural network: DO NOT EDIT! -->
-## Contents
-### [Building Graphs](#AUTOGENERATED-building-graphs)
-* [Core graph data structures](#AUTOGENERATED-core-graph-data-structures)
- * [`class tf.Graph`](#Graph)
- * [`class tf.Operation`](#Operation)
- * [`class tf.Tensor`](#Tensor)
-* [Tensor types](#AUTOGENERATED-tensor-types)
- * [`class tf.DType`](#DType)
- * [`tf.as_dtype(type_value)`](#as_dtype)
-* [Utility functions](#AUTOGENERATED-utility-functions)
- * [`tf.device(dev)`](#device)
- * [`tf.name_scope(name)`](#name_scope)
- * [`tf.control_dependencies(control_inputs)`](#control_dependencies)
- * [`tf.convert_to_tensor(value, dtype=None, name=None)`](#convert_to_tensor)
- * [`tf.get_default_graph()`](#get_default_graph)
- * [`tf.import_graph_def(graph_def, input_map=None, return_elements=None, name=None, op_dict=None)`](#import_graph_def)
-* [Graph collections](#AUTOGENERATED-graph-collections)
- * [`tf.add_to_collection(name, value)`](#add_to_collection)
- * [`tf.get_collection(key, scope=None)`](#get_collection)
- * [`class tf.GraphKeys`](#GraphKeys)
-* [Defining new operations](#AUTOGENERATED-defining-new-operations)
- * [`class tf.RegisterGradient`](#RegisterGradient)
- * [`tf.NoGradient(op_type)`](#NoGradient)
- * [`class tf.RegisterShape`](#RegisterShape)
- * [`class tf.TensorShape`](#TensorShape)
- * [`class tf.Dimension`](#Dimension)
- * [`tf.op_scope(values, name, default_name)`](#op_scope)
- * [`tf.get_seed(op_seed)`](#get_seed)
-
-
-<!-- TOC-END This section was generated by neural network, THANKS FOR READING! -->
+# Building Graphs
+[TOC]
Classes and functions for building TensorFlow graphs.
-## Core graph data structures <a class="md-anchor" id="AUTOGENERATED-core-graph-data-structures"></a>
+## Core graph data structures
- - -
-### `class tf.Graph` <a class="md-anchor" id="Graph"></a>
+### `class tf.Graph` {#Graph}
A TensorFlow computation, represented as a dataflow graph.
@@ -80,14 +49,14 @@ are not thread-safe.
- - -
-#### `tf.Graph.__init__()` <a class="md-anchor" id="Graph.__init__"></a>
+#### `tf.Graph.__init__()` {#Graph.__init__}
Creates a new, empty Graph.
- - -
-#### `tf.Graph.as_default()` <a class="md-anchor" id="Graph.as_default"></a>
+#### `tf.Graph.as_default()` {#Graph.as_default}
Returns a context manager that makes this `Graph` the default graph.
@@ -118,14 +87,14 @@ with tf.Graph().as_default() as g:
assert c.graph is g
```
-##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
+##### Returns:
A context manager for using this graph as the default graph.
- - -
-#### `tf.Graph.as_graph_def(from_version=None)` <a class="md-anchor" id="Graph.as_graph_def"></a>
+#### `tf.Graph.as_graph_def(from_version=None)` {#Graph.as_graph_def}
Returns a serialized `GraphDef` representation of this graph.
@@ -135,19 +104,19 @@ The serialized `GraphDef` can be imported into another `Graph`
This method is thread-safe.
-##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
+##### Args:
* <b>`from_version`</b>: Optional. If this is set, returns a `GraphDef`
containing only the nodes that were added to this graph since
its `version` property had the given value.
-##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
+##### Returns:
A [`GraphDef`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/framework/graph.proto)
protocol buffer.
-##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
+##### Raises:
* <b>`ValueError`</b>: If the graph_def would be too large.
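As a minimal sketch (assuming `tf.constant` adds a single `Const` node to the graph):
```python
import tensorflow as tf

g = tf.Graph()
with g.as_default():
  tf.constant(1.0, name="c")   # adds one "Const" node to g

graph_def = g.as_graph_def()   # a GraphDef protocol buffer
print len(graph_def.node)      # 1
```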
@@ -155,7 +124,7 @@ This method is thread-safe.
- - -
-#### `tf.Graph.finalize()` <a class="md-anchor" id="Graph.finalize"></a>
+#### `tf.Graph.finalize()` {#Graph.finalize}
Finalizes this graph, making it read-only.
@@ -167,14 +136,14 @@ when using a [`QueueRunner`](../../api_docs/python/train.md#QueueRunner).
- - -
-#### `tf.Graph.finalized` <a class="md-anchor" id="Graph.finalized"></a>
+#### `tf.Graph.finalized` {#Graph.finalized}
True if this graph has been finalized.
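A short sketch of finalizing a graph (the exact error raised by later mutation is version-dependent):
```python
import tensorflow as tf

g = tf.Graph()
with g.as_default():
  tf.constant(1.0)

g.finalize()
assert g.finalized
# Attempting to add further ops to `g` after this point raises an error.
```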
- - -
-#### `tf.Graph.control_dependencies(control_inputs)` <a class="md-anchor" id="Graph.control_dependencies"></a>
+#### `tf.Graph.control_dependencies(control_inputs)` {#Graph.control_dependencies}
Returns a context manager that specifies control dependencies.
@@ -222,19 +191,19 @@ def my_func(pred, tensor):
return tf.matmul(tensor, tensor)
```
-##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
+##### Args:
* <b>`control_inputs`</b>: A list of `Operation` or `Tensor` objects, which
must be executed or computed before running the operations
defined in the context.
-##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
+##### Returns:
A context manager that specifies control dependencies for all
operations constructed within the context.
-##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
+##### Raises:
* <b>`TypeError`</b>: If `control_inputs` is not a list of `Operation` or
@@ -243,7 +212,7 @@ def my_func(pred, tensor):
- - -
-#### `tf.Graph.device(device_name_or_function)` <a class="md-anchor" id="Graph.device"></a>
+#### `tf.Graph.device(device_name_or_function)` {#Graph.device}
Returns a context manager that specifies the default device to use.
@@ -281,13 +250,13 @@ with g.device(matmul_on_gpu):
# on CPU 0.
```
-##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
+##### Args:
* <b>`device_name_or_function`</b>: The device name or function to use in
the context.
-##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
+##### Returns:
A context manager that specifies the default device to use for newly
created ops.
@@ -295,7 +264,7 @@ with g.device(matmul_on_gpu):
- - -
-#### `tf.Graph.name_scope(name)` <a class="md-anchor" id="Graph.name_scope"></a>
+#### `tf.Graph.name_scope(name)` {#Graph.name_scope}
Returns a context manager that creates hierarchical names for operations.
@@ -365,12 +334,12 @@ with g.name_scope('my_layer') as scope:
```
-##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
+##### Args:
* <b>`name`</b>: A name for the scope.
-##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
+##### Returns:
A context manager that installs `name` as a new name scope.
@@ -386,11 +355,11 @@ may define additional collections by specifying a new name.
- - -
-#### `tf.Graph.add_to_collection(name, value)` <a class="md-anchor" id="Graph.add_to_collection"></a>
+#### `tf.Graph.add_to_collection(name, value)` {#Graph.add_to_collection}
Stores `value` in the collection with the given `name`.
-##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
+##### Args:
* <b>`name`</b>: The key for the collection. For example, the `GraphKeys` class
@@ -400,11 +369,11 @@ Stores `value` in the collection with the given `name`.
- - -
-#### `tf.Graph.get_collection(name, scope=None)` <a class="md-anchor" id="Graph.get_collection"></a>
+#### `tf.Graph.get_collection(name, scope=None)` {#Graph.get_collection}
Returns a list of values in the collection with the given `name`.
-##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
+##### Args:
* <b>`key`</b>: The key for the collection. For example, the `GraphKeys` class
@@ -412,7 +381,7 @@ Returns a list of values in the collection with the given `name`.
* <b>`scope`</b>: (Optional.) If supplied, the resulting list is filtered to include
only items whose name begins with this string.
-##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
+##### Returns:
The list of values in the collection with the given `name`, or
an empty list if no value has been added to that collection. The
@@ -423,7 +392,7 @@ Returns a list of values in the collection with the given `name`.
- - -
-#### `tf.Graph.as_graph_element(obj, allow_tensor=True, allow_operation=True)` <a class="md-anchor" id="Graph.as_graph_element"></a>
+#### `tf.Graph.as_graph_element(obj, allow_tensor=True, allow_operation=True)` {#Graph.as_graph_element}
Returns the object referred to by `obj`, as an `Operation` or `Tensor`.
@@ -436,7 +405,7 @@ Session API.
This method may be called concurrently from multiple threads.
-##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
+##### Args:
* <b>`obj`</b>: A `Tensor`, an `Operation`, or the name of a tensor or operation.
@@ -445,11 +414,11 @@ This method may be called concurrently from multiple threads.
* <b>`allow_tensor`</b>: If true, `obj` may refer to a `Tensor`.
* <b>`allow_operation`</b>: If true, `obj` may refer to an `Operation`.
-##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
+##### Returns:
The `Tensor` or `Operation` in the Graph corresponding to `obj`.
-##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
+##### Raises:
* <b>`TypeError`</b>: If `obj` is not a type we support attempting to convert
@@ -461,22 +430,22 @@ This method may be called concurrently from multiple threads.
- - -
-#### `tf.Graph.get_operation_by_name(name)` <a class="md-anchor" id="Graph.get_operation_by_name"></a>
+#### `tf.Graph.get_operation_by_name(name)` {#Graph.get_operation_by_name}
Returns the `Operation` with the given `name`.
This method may be called concurrently from multiple threads.
-##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
+##### Args:
* <b>`name`</b>: The name of the `Operation` to return.
-##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
+##### Returns:
The `Operation` with the given `name`.
-##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
+##### Raises:
* <b>`TypeError`</b>: If `name` is not a string.
@@ -485,22 +454,22 @@ This method may be called concurrently from multiple threads.
- - -
-#### `tf.Graph.get_tensor_by_name(name)` <a class="md-anchor" id="Graph.get_tensor_by_name"></a>
+#### `tf.Graph.get_tensor_by_name(name)` {#Graph.get_tensor_by_name}
Returns the `Tensor` with the given `name`.
This method may be called concurrently from multiple threads.
-##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
+##### Args:
* <b>`name`</b>: The name of the `Tensor` to return.
-##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
+##### Returns:
The `Tensor` with the given `name`.
-##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
+##### Raises:
* <b>`TypeError`</b>: If `name` is not a string.
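For illustration, both lookup methods on a one-op graph (the `:0` suffix names the op's first output):
```python
import tensorflow as tf

g = tf.Graph()
with g.as_default():
  c = tf.constant(1.0, name="c")

assert g.get_operation_by_name("c") is c.op
assert g.get_tensor_by_name("c:0") is c
```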
@@ -509,7 +478,7 @@ This method may be called concurrently from multiple threads.
- - -
-#### `tf.Graph.get_operations()` <a class="md-anchor" id="Graph.get_operations"></a>
+#### `tf.Graph.get_operations()` {#Graph.get_operations}
Return the list of operations in the graph.
@@ -519,7 +488,7 @@ list of operations known to the graph.
This method may be called concurrently from multiple threads.
-##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
+##### Returns:
A list of Operations.
@@ -527,24 +496,24 @@ This method may be called concurrently from multiple threads.
- - -
-#### `tf.Graph.get_default_device()` <a class="md-anchor" id="Graph.get_default_device"></a>
+#### `tf.Graph.get_default_device()` {#Graph.get_default_device}
Returns the default device.
-##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
+##### Returns:
A string.
- - -
-#### `tf.Graph.seed` <a class="md-anchor" id="Graph.seed"></a>
+#### `tf.Graph.seed` {#Graph.seed}
- - -
-#### `tf.Graph.unique_name(name)` <a class="md-anchor" id="Graph.unique_name"></a>
+#### `tf.Graph.unique_name(name)` {#Graph.unique_name}
Return a unique Operation name for "name".
@@ -557,12 +526,12 @@ to help identify Operations when debugging a Graph. Operation names
are displayed in error messages reported by the TensorFlow runtime,
and in various visualization tools such as TensorBoard.
-##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
+##### Args:
* <b>`name`</b>: The name for an `Operation`.
-##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
+##### Returns:
A string to be passed to `create_op()` that will be used
to name the operation being created.
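A rough sketch of how repeated requests for the same name are disambiguated (the exact suffixing scheme is an implementation detail):
```python
import tensorflow as tf

g = tf.Graph()
print g.unique_name("foo")   # e.g. "foo"
print g.unique_name("foo")   # e.g. a deduplicated name such as "foo_1"
```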
@@ -570,14 +539,14 @@ and in various visualization tools such as TensorBoard.
- - -
-#### `tf.Graph.version` <a class="md-anchor" id="Graph.version"></a>
+#### `tf.Graph.version` {#Graph.version}
Returns a version number that increases as ops are added to the graph.
- - -
-#### `tf.Graph.create_op(op_type, inputs, dtypes, input_types=None, name=None, attrs=None, op_def=None, compute_shapes=True)` <a class="md-anchor" id="Graph.create_op"></a>
+#### `tf.Graph.create_op(op_type, inputs, dtypes, input_types=None, name=None, attrs=None, op_def=None, compute_shapes=True)` {#Graph.create_op}
Creates an `Operation` in this graph.
@@ -586,7 +555,7 @@ programs will not call this method directly, and instead use the
Python op constructors, such as `tf.constant()`, which add ops to
the default graph.
-##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
+##### Args:
* <b>`op_type`</b>: The `Operation` type to create. This corresponds to the
@@ -607,19 +576,19 @@ the default graph.
* <b>`compute_shapes`</b>: (Optional.) If True, shape inference will be performed
to compute the shapes of the outputs.
-##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
+##### Raises:
* <b>`TypeError`</b>: if any of the inputs is not a `Tensor`.
-##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
+##### Returns:
An `Operation` object.
- - -
-#### `tf.Graph.gradient_override_map(op_type_map)` <a class="md-anchor" id="Graph.gradient_override_map"></a>
+#### `tf.Graph.gradient_override_map(op_type_map)` {#Graph.gradient_override_map}
EXPERIMENTAL: A context manager for overriding gradient functions.
@@ -641,18 +610,18 @@ with tf.Graph().as_default() as g:
# gradient of s_2.
```
-##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
+##### Args:
* <b>`op_type_map`</b>: A dictionary mapping op type strings to alternative op
type strings.
-##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
+##### Returns:
A context manager that sets the alternative op type to be used for one
or more ops created in that context.
-##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
+##### Raises:
* <b>`TypeError`</b>: If `op_type_map` is not a dictionary mapping strings to
@@ -662,7 +631,7 @@ with tf.Graph().as_default() as g:
- - -
-### `class tf.Operation` <a class="md-anchor" id="Operation"></a>
+### `class tf.Operation` {#Operation}
Represents a graph node that performs computation on tensors.
@@ -684,25 +653,25 @@ be executed by passing it to
- - -
-#### `tf.Operation.name` <a class="md-anchor" id="Operation.name"></a>
+#### `tf.Operation.name` {#Operation.name}
The full name of this operation.
- - -
-#### `tf.Operation.type` <a class="md-anchor" id="Operation.type"></a>
+#### `tf.Operation.type` {#Operation.type}
The type of the op (e.g. `"MatMul"`).
- - -
-#### `tf.Operation.inputs` <a class="md-anchor" id="Operation.inputs"></a>
+#### `tf.Operation.inputs` {#Operation.inputs}
The list of `Tensor` objects representing the data inputs of this op.
- - -
-#### `tf.Operation.control_inputs` <a class="md-anchor" id="Operation.control_inputs"></a>
+#### `tf.Operation.control_inputs` {#Operation.control_inputs}
The `Operation` objects on which this op has a control dependency.
@@ -712,37 +681,37 @@ mechanism can be used to run ops sequentially for performance
reasons, or to ensure that the side effects of an op are observed
in the correct order.
-##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
+##### Returns:
A list of `Operation` objects.
- - -
-#### `tf.Operation.outputs` <a class="md-anchor" id="Operation.outputs"></a>
+#### `tf.Operation.outputs` {#Operation.outputs}
The list of `Tensor` objects representing the outputs of this op.
- - -
-#### `tf.Operation.device` <a class="md-anchor" id="Operation.device"></a>
+#### `tf.Operation.device` {#Operation.device}
The name of the device to which this op has been assigned, if any.
-##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
+##### Returns:
The string name of the device to which this op has been
assigned, or None if it has not been assigned to a device.
- - -
-#### `tf.Operation.graph` <a class="md-anchor" id="Operation.graph"></a>
+#### `tf.Operation.graph` {#Operation.graph}
The `Graph` that contains this operation.
- - -
-#### `tf.Operation.run(feed_dict=None, session=None)` <a class="md-anchor" id="Operation.run"></a>
+#### `tf.Operation.run(feed_dict=None, session=None)` {#Operation.run}
Runs this operation in a `Session`.
@@ -753,7 +722,7 @@ produce the inputs needed for this operation.
launched in a session, and either a default session must be
available, or `session` must be specified explicitly.
-##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
+##### Args:
* <b>`feed_dict`</b>: A dictionary that maps `Tensor` objects to feed values.
@@ -766,20 +735,20 @@ available, or `session` must be specified explicitly.
- - -
-#### `tf.Operation.get_attr(name)` <a class="md-anchor" id="Operation.get_attr"></a>
+#### `tf.Operation.get_attr(name)` {#Operation.get_attr}
Returns the value of the attr of this op with the given `name`.
-##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
+##### Args:
* <b>`name`</b>: The name of the attr to fetch.
-##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
+##### Returns:
The value of the attr, as a Python object.
-##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
+##### Raises:
* <b>`ValueError`</b>: If this op does not have an attr with the given `name`.
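As a hedged sketch, a `MatMul` op carries boolean `transpose_a`/`transpose_b` attrs that can be read back:
```python
import tensorflow as tf

a = tf.constant([[1.0, 2.0]])
b = tf.constant([[3.0], [4.0]])
c = tf.matmul(a, b)
print c.op.get_attr("transpose_a")   # False
```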
@@ -787,15 +756,15 @@ Returns the value of the attr of this op with the given `name`.
- - -
-#### `tf.Operation.traceback` <a class="md-anchor" id="Operation.traceback"></a>
+#### `tf.Operation.traceback` {#Operation.traceback}
Returns the call stack from when this operation was constructed.
-#### Other Methods <a class="md-anchor" id="AUTOGENERATED-other-methods"></a>
+#### Other Methods
- - -
-#### `tf.Operation.__init__(node_def, g, inputs=None, output_types=None, control_inputs=None, input_types=None, original_op=None, op_def=None)` <a class="md-anchor" id="Operation.__init__"></a>
+#### `tf.Operation.__init__(node_def, g, inputs=None, output_types=None, control_inputs=None, input_types=None, original_op=None, op_def=None)` {#Operation.__init__}
Creates an `Operation`.
@@ -805,7 +774,7 @@ regular expression:
[A-Za-z0-9.][A-Za-z0-9_.\-/]*
-##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
+##### Args:
* <b>`node_def`</b>: graph_pb2.NodeDef. NodeDef for the Operation.
@@ -829,7 +798,7 @@ regular expression:
* <b>`op_def`</b>: Optional. The op_def_pb2.OpDef proto that describes the
op type that this Operation represents.
-##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
+##### Raises:
* <b>`TypeError`</b>: if control inputs are not Operations or Tensors,
@@ -842,11 +811,11 @@ regular expression:
- - -
-#### `tf.Operation.node_def` <a class="md-anchor" id="Operation.node_def"></a>
+#### `tf.Operation.node_def` {#Operation.node_def}
Returns a serialized `NodeDef` representation of this operation.
-##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
+##### Returns:
A
[`NodeDef`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/framework/graph.proto)
@@ -854,11 +823,11 @@ Returns a serialized `NodeDef` representation of this operation.
- - -
-#### `tf.Operation.op_def` <a class="md-anchor" id="Operation.op_def"></a>
+#### `tf.Operation.op_def` {#Operation.op_def}
Returns the `OpDef` proto that represents the type of this op.
-##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
+##### Returns:
An
[`OpDef`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/framework/op_def.proto)
@@ -866,7 +835,7 @@ Returns the `OpDef` proto that represents the type of this op.
- - -
-#### `tf.Operation.values()` <a class="md-anchor" id="Operation.values"></a>
+#### `tf.Operation.values()` {#Operation.values}
DEPRECATED: Use outputs.
@@ -874,7 +843,7 @@ DEPRECATED: Use outputs.
- - -
-### `class tf.Tensor` <a class="md-anchor" id="Tensor"></a>
+### `class tf.Tensor` {#Tensor}
Represents a value produced by an `Operation`.
@@ -915,41 +884,41 @@ result = sess.run(e)
- - -
-#### `tf.Tensor.dtype` <a class="md-anchor" id="Tensor.dtype"></a>
+#### `tf.Tensor.dtype` {#Tensor.dtype}
The `DType` of elements in this tensor.
- - -
-#### `tf.Tensor.name` <a class="md-anchor" id="Tensor.name"></a>
+#### `tf.Tensor.name` {#Tensor.name}
The string name of this tensor.
- - -
-#### `tf.Tensor.value_index` <a class="md-anchor" id="Tensor.value_index"></a>
+#### `tf.Tensor.value_index` {#Tensor.value_index}
The index of this tensor in the outputs of its `Operation`.
- - -
-#### `tf.Tensor.graph` <a class="md-anchor" id="Tensor.graph"></a>
+#### `tf.Tensor.graph` {#Tensor.graph}
The `Graph` that contains this tensor.
- - -
-#### `tf.Tensor.op` <a class="md-anchor" id="Tensor.op"></a>
+#### `tf.Tensor.op` {#Tensor.op}
The `Operation` that produces this tensor as an output.
- - -
-#### `tf.Tensor.consumers()` <a class="md-anchor" id="Tensor.consumers"></a>
+#### `tf.Tensor.consumers()` {#Tensor.consumers}
Returns a list of `Operation`s that consume this tensor.
-##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
+##### Returns:
A list of `Operation`s.
@@ -957,7 +926,7 @@ Returns a list of `Operation`s that consume this tensor.
- - -
-#### `tf.Tensor.eval(feed_dict=None, session=None)` <a class="md-anchor" id="Tensor.eval"></a>
+#### `tf.Tensor.eval(feed_dict=None, session=None)` {#Tensor.eval}
Evaluates this tensor in a `Session`.
@@ -969,7 +938,7 @@ tensor.
launched in a session, and either a default session must be
available, or `session` must be specified explicitly.
-##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
+##### Args:
* <b>`feed_dict`</b>: A dictionary that maps `Tensor` objects to feed values.
@@ -978,7 +947,7 @@ available, or `session` must be specified explicitly.
* <b>`session`</b>: (Optional.) The `Session` to be used to evaluate this tensor. If
none, the default session will be used.
-##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
+##### Returns:
A numpy array corresponding to the value of this tensor.
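A minimal sketch, assuming a session installed as the default via its context manager:
```python
import tensorflow as tf

c = tf.constant([1.0, 2.0])
d = c * 2.0
with tf.Session():
  print d.eval()   # the doubled values, [2., 4.]
```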
@@ -986,7 +955,7 @@ available, or `session` must be specified explicitly.
- - -
-#### `tf.Tensor.get_shape()` <a class="md-anchor" id="Tensor.get_shape"></a>
+#### `tf.Tensor.get_shape()` {#Tensor.get_shape}
Returns the `TensorShape` that represents the shape of this tensor.
@@ -1026,14 +995,14 @@ the caller has additional information about the values of these
dimensions, `Tensor.set_shape()` can be used to augment the
inferred shape.
-##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
+##### Returns:
A `TensorShape` representing the shape of this tensor.
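For illustration (a sketch; the printed form of a `TensorShape` varies, so `as_list()` is used here):
```python
import tensorflow as tf

c = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
print c.get_shape().as_list()   # [2, 3], known statically without running the graph
```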
- - -
-#### `tf.Tensor.set_shape(shape)` <a class="md-anchor" id="Tensor.set_shape"></a>
+#### `tf.Tensor.set_shape(shape)` {#Tensor.set_shape}
Updates the shape of this tensor.
@@ -1058,12 +1027,12 @@ print image.get_shape()
==> TensorShape([Dimension(28), Dimension(28), Dimension(3)])
```
-##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
+##### Args:
* <b>`shape`</b>: A `TensorShape` representing the shape of this tensor.
-##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
+##### Raises:
* <b>`ValueError`</b>: If `shape` is not compatible with the current shape of
@@ -1071,14 +1040,14 @@ print image.get_shape()
-#### Other Methods <a class="md-anchor" id="AUTOGENERATED-other-methods"></a>
+#### Other Methods
- - -
-#### `tf.Tensor.__init__(op, value_index, dtype)` <a class="md-anchor" id="Tensor.__init__"></a>
+#### `tf.Tensor.__init__(op, value_index, dtype)` {#Tensor.__init__}
Creates a new `Tensor`.
-##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
+##### Args:
* <b>`op`</b>: An `Operation`. `Operation` that computes this tensor.
@@ -1086,7 +1055,7 @@ Creates a new `Tensor`.
this tensor.
* <b>`dtype`</b>: A `types.DType`. Type of data stored in this tensor.
-##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
+##### Raises:
* <b>`TypeError`</b>: If the op is not an `Operation`.
@@ -1094,17 +1063,17 @@ Creates a new `Tensor`.
- - -
-#### `tf.Tensor.device` <a class="md-anchor" id="Tensor.device"></a>
+#### `tf.Tensor.device` {#Tensor.device}
The name of the device on which this tensor will be produced, or None.
-## Tensor types <a class="md-anchor" id="AUTOGENERATED-tensor-types"></a>
+## Tensor types
- - -
-### `class tf.DType` <a class="md-anchor" id="DType"></a>
+### `class tf.DType` {#DType}
Represents the type of the elements in a `Tensor`.
@@ -1136,7 +1105,7 @@ names to a `DType` object.
- - -
-#### `tf.DType.is_compatible_with(other)` <a class="md-anchor" id="DType.is_compatible_with"></a>
+#### `tf.DType.is_compatible_with(other)` {#DType.is_compatible_with}
Returns True if the `other` DType will be converted to this DType.
@@ -1149,12 +1118,12 @@ DType(T).as_ref.is_compatible_with(DType(T)) == False
DType(T).as_ref.is_compatible_with(DType(T).as_ref) == True
```
-##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
+##### Args:
* <b>`other`</b>: A `DType` (or object that may be converted to a `DType`).
-##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
+##### Returns:
True if a Tensor of the `other` `DType` will be implicitly converted to
this `DType`.
@@ -1162,58 +1131,58 @@ DType(T).as_ref.is_compatible_with(DType(T).as_ref) == True
- - -
-#### `tf.DType.name` <a class="md-anchor" id="DType.name"></a>
+#### `tf.DType.name` {#DType.name}
Returns the string name for this `DType`.
- - -
-#### `tf.DType.base_dtype` <a class="md-anchor" id="DType.base_dtype"></a>
+#### `tf.DType.base_dtype` {#DType.base_dtype}
Returns a non-reference `DType` based on this `DType`.
- - -
-#### `tf.DType.is_ref_dtype` <a class="md-anchor" id="DType.is_ref_dtype"></a>
+#### `tf.DType.is_ref_dtype` {#DType.is_ref_dtype}
Returns `True` if this `DType` represents a reference type.
- - -
-#### `tf.DType.as_ref` <a class="md-anchor" id="DType.as_ref"></a>
+#### `tf.DType.as_ref` {#DType.as_ref}
Returns a reference `DType` based on this `DType`.
- - -
-#### `tf.DType.is_integer` <a class="md-anchor" id="DType.is_integer"></a>
+#### `tf.DType.is_integer` {#DType.is_integer}
Returns whether this is a (non-quantized) integer type.
- - -
-#### `tf.DType.is_quantized` <a class="md-anchor" id="DType.is_quantized"></a>
+#### `tf.DType.is_quantized` {#DType.is_quantized}
Returns whether this is a quantized data type.
- - -
-#### `tf.DType.as_numpy_dtype` <a class="md-anchor" id="DType.as_numpy_dtype"></a>
+#### `tf.DType.as_numpy_dtype` {#DType.as_numpy_dtype}
Returns a `numpy.dtype` based on this `DType`.
- - -
-#### `tf.DType.as_datatype_enum` <a class="md-anchor" id="DType.as_datatype_enum"></a>
+#### `tf.DType.as_datatype_enum` {#DType.as_datatype_enum}
Returns a `types_pb2.DataType` enum value based on this `DType`.
-#### Other Methods <a class="md-anchor" id="AUTOGENERATED-other-methods"></a>
+#### Other Methods
- - -
-#### `tf.DType.__init__(type_enum)` <a class="md-anchor" id="DType.__init__"></a>
+#### `tf.DType.__init__(type_enum)` {#DType.__init__}
Creates a new `DataType`.
@@ -1221,12 +1190,12 @@ NOTE(mrry): In normal circumstances, you should not need to
construct a DataType object directly. Instead, use the
types.as_dtype() function.
-##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
+##### Args:
* <b>`type_enum`</b>: A `types_pb2.DataType` enum value.
-##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
+##### Raises:
* <b>`TypeError`</b>: If `type_enum` is not a valid `types_pb2.DataType` value.
@@ -1234,28 +1203,28 @@ types.as_dtype() function.
- - -
-#### `tf.DType.is_floating` <a class="md-anchor" id="DType.is_floating"></a>
+#### `tf.DType.is_floating` {#DType.is_floating}
Returns whether this is a (real) floating point type.
- - -
-#### `tf.DType.max` <a class="md-anchor" id="DType.max"></a>
+#### `tf.DType.max` {#DType.max}
Returns the maximum representable value in this data type.
-##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
+##### Raises:
* <b>`TypeError`</b>: if this is a non-numeric, unordered, or quantized type.
- - -
-#### `tf.DType.min` <a class="md-anchor" id="DType.min"></a>
+#### `tf.DType.min` {#DType.min}
Returns the minimum representable value in this data type.
-##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
+##### Raises:
* <b>`TypeError`</b>: if this is a non-numeric, unordered, or quantized type.
@@ -1263,11 +1232,11 @@ Returns the minimum representable value in this data type.
- - -
-### `tf.as_dtype(type_value)` <a class="md-anchor" id="as_dtype"></a>
+### `tf.as_dtype(type_value)` {#as_dtype}
Converts the given `type_value` to a `DType`.
-##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
+##### Args:
* <b>`type_value`</b>: A value that can be converted to a `tf.DType`
@@ -1275,22 +1244,22 @@ Converts the given `type_value` to a `DType`.
[`DataType` enum](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/framework/types.proto),
a string type name, or a `numpy.dtype`.
-##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
+##### Returns:
A `DType` corresponding to `type_value`.
-##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
+##### Raises:
* <b>`TypeError`</b>: If `type_value` cannot be converted to a `DType`.
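A brief sketch of the accepted inputs (assuming the standard `tf.float32`/`tf.int64` type objects):
```python
import numpy as np
import tensorflow as tf

assert tf.as_dtype("float32") == tf.float32   # from a string type name
assert tf.as_dtype(np.int64) == tf.int64      # from a numpy dtype
assert tf.as_dtype(tf.float32) == tf.float32  # a DType passes through unchanged
```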
-## Utility functions <a class="md-anchor" id="AUTOGENERATED-utility-functions"></a>
+## Utility functions
- - -
-### `tf.device(dev)` <a class="md-anchor" id="device"></a>
+### `tf.device(dev)` {#device}
Wrapper for `Graph.device()` using the default graph.
@@ -1298,13 +1267,13 @@ See
[`Graph.name_scope()`](../../api_docs/python/framework.md#Graph.name_scope)
for more details.
-##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
+##### Args:
* <b>`device_name_or_function`</b>: The device name or function to use in
the context.
-##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
+##### Returns:
A context manager that specifies the default device to use for newly
created ops.
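A minimal sketch, assuming a `/cpu:0` device is available:
```python
import tensorflow as tf

with tf.device("/cpu:0"):
  a = tf.constant(1.0)   # requests placement on CPU 0
b = tf.constant(2.0)     # placed by the default device policy
```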
@@ -1312,7 +1281,7 @@ for more details.
- - -
-### `tf.name_scope(name)` <a class="md-anchor" id="name_scope"></a>
+### `tf.name_scope(name)` {#name_scope}
Wrapper for `Graph.name_scope()` using the default graph.
@@ -1320,12 +1289,12 @@ See
[`Graph.name_scope()`](../../api_docs/python/framework.md#Graph.name_scope)
for more details.
-##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
+##### Args:
* <b>`name`</b>: A name for the scope.
-##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
+##### Returns:
A context manager that installs `name` as a new name scope in the
default graph.
@@ -1333,21 +1302,21 @@ for more details.
- - -
-### `tf.control_dependencies(control_inputs)` <a class="md-anchor" id="control_dependencies"></a>
+### `tf.control_dependencies(control_inputs)` {#control_dependencies}
Wrapper for `Graph.control_dependencies()` using the default graph.
See [`Graph.control_dependencies()`](../../api_docs/python/framework.md#Graph.control_dependencies)
for more details.
-##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
+##### Args:
* <b>`control_inputs`</b>: A list of `Operation` or `Tensor` objects, which
must be executed or computed before running the operations
defined in the context.
-##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
+##### Returns:
A context manager that specifies control dependencies for all
operations constructed within the context.
@@ -1355,7 +1324,7 @@ for more details.
- - -
-### `tf.convert_to_tensor(value, dtype=None, name=None)` <a class="md-anchor" id="convert_to_tensor"></a>
+### `tf.convert_to_tensor(value, dtype=None, name=None)` {#convert_to_tensor}
Converts the given `value` to a `Tensor`.
@@ -1383,7 +1352,7 @@ constructors apply this function to each of their Tensor-valued
inputs, which allows those ops to accept numpy arrays, Python lists,
and scalars in addition to `Tensor` objects.
-##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
+##### Args:
* <b>`value`</b>: An object whose type has a registered `Tensor` conversion function.
@@ -1391,11 +1360,11 @@ and scalars in addition to `Tensor` objects.
type is inferred from the type of `value`.
* <b>`name`</b>: Optional name to use if a new `Tensor` is created.
-##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
+##### Returns:
A `Tensor` based on `value`.
-##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
+##### Raises:
* <b>`TypeError`</b>: If no conversion function is registered for `value`.
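A short sketch of the three common input kinds:
```python
import numpy as np
import tensorflow as tf

t1 = tf.convert_to_tensor([[1.0, 2.0], [3.0, 4.0]])  # from a Python list
t2 = tf.convert_to_tensor(np.zeros((2, 2)))          # from a numpy array
t3 = tf.convert_to_tensor(t1)                        # an existing Tensor passes through
```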
@@ -1404,7 +1373,7 @@ and scalars in addition to `Tensor` objects.
- - -
-### `tf.get_default_graph()` <a class="md-anchor" id="get_default_graph"></a>
+### `tf.get_default_graph()` {#get_default_graph}
Returns the default graph for the current thread.
@@ -1417,14 +1386,14 @@ create a new thread, and wish to use the default graph in that
thread, you must explicitly add a `with g.as_default():` in that
thread's function.
-##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
+##### Returns:
The default `Graph` being used in the current thread.
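For illustration:
```python
import tensorflow as tf

c = tf.constant(4.0)
assert c.graph is tf.get_default_graph()
```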
- - -
-### `tf.import_graph_def(graph_def, input_map=None, return_elements=None, name=None, op_dict=None)` <a class="md-anchor" id="import_graph_def"></a>
+### `tf.import_graph_def(graph_def, input_map=None, return_elements=None, name=None, op_dict=None)` {#import_graph_def}
Imports the TensorFlow graph in `graph_def` into the Python `Graph`.
@@ -1435,7 +1404,7 @@ protocol buffer, and extract individual objects in the `GraphDef` as
[`Graph.as_graph_def()`](#Graph.as_graph_def) for a way to create a
`GraphDef` proto.
-##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
+##### Args:
* <b>`graph_def`</b>: A `GraphDef` proto containing operations to be imported into
@@ -1452,12 +1421,12 @@ protocol buffer, and extract individual objects in the `GraphDef` as
Must contain an `OpDef` proto for each op type named in `graph_def`.
If omitted, uses the `OpDef` protos registered in the global registry.
-##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
+##### Returns:
A list of `Operation` and/or `Tensor` objects from the imported graph,
corresponding to the names in `return_elements`.
-##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
+##### Raises:
* <b>`TypeError`</b>: If `graph_def` is not a `GraphDef` proto,
@@ -1469,18 +1438,18 @@ protocol buffer, and extract individual objects in the `GraphDef` as
-## Graph collections <a class="md-anchor" id="AUTOGENERATED-graph-collections"></a>
+## Graph collections
- - -
-### `tf.add_to_collection(name, value)` <a class="md-anchor" id="add_to_collection"></a>
+### `tf.add_to_collection(name, value)` {#add_to_collection}
Wrapper for `Graph.add_to_collection()` using the default graph.
See [`Graph.add_to_collection()`](../../api_docs/python/framework.md#Graph.add_to_collection)
for more details.
-##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
+##### Args:
* <b>`name`</b>: The key for the collection. For example, the `GraphKeys` class
@@ -1490,14 +1459,14 @@ for more details.
- - -
-### `tf.get_collection(key, scope=None)` <a class="md-anchor" id="get_collection"></a>
+### `tf.get_collection(key, scope=None)` {#get_collection}
Wrapper for `Graph.get_collection()` using the default graph.
See [`Graph.get_collection()`](../../api_docs/python/framework.md#Graph.get_collection)
for more details.
-##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
+##### Args:
* <b>`key`</b>: The key for the collection. For example, the `GraphKeys` class
@@ -1505,7 +1474,7 @@ for more details.
* <b>`scope`</b>: (Optional.) If supplied, the resulting list is filtered to include
only items whose name begins with this string.
-##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
+##### Returns:
The list of values in the collection with the given `name`, or
an empty list if no value has been added to that collection. The
@@ -1515,7 +1484,7 @@ for more details.
- - -
-### `class tf.GraphKeys` <a class="md-anchor" id="GraphKeys"></a>
+### `class tf.GraphKeys` {#GraphKeys}
Standard names to use for graph collections.
@@ -1546,11 +1515,11 @@ The following standard keys are defined:
for more details.
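As a sketch of the collections API (assuming the `VARIABLES` key provided by this version of `GraphKeys`):
```python
import tensorflow as tf

w = tf.Variable(tf.zeros([10]), name="weights")
tf.add_to_collection("my_losses", tf.reduce_sum(w))

print tf.get_collection("my_losses")             # the tensor added above
print tf.get_collection(tf.GraphKeys.VARIABLES)  # variables created so far, including `w`
```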
-## Defining new operations <a class="md-anchor" id="AUTOGENERATED-defining-new-operations"></a>
+## Defining new operations
- - -
-### `class tf.RegisterGradient` <a class="md-anchor" id="RegisterGradient"></a>
+### `class tf.RegisterGradient` {#RegisterGradient}
A decorator for registering the gradient function for an op type.
@@ -1577,11 +1546,11 @@ that defines the operation.
- - -
-#### `tf.RegisterGradient.__init__(op_type)` <a class="md-anchor" id="RegisterGradient.__init__"></a>
+#### `tf.RegisterGradient.__init__(op_type)` {#RegisterGradient.__init__}
Creates a new decorator with `op_type` as the Operation type.
-##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
+##### Args:
* <b>`op_type`</b>: The string type of an operation. This corresponds to the
@@ -1591,7 +1560,7 @@ Creates a new decorator with `op_type` as the Operation type.
- - -
-### `tf.NoGradient(op_type)` <a class="md-anchor" id="NoGradient"></a>
+### `tf.NoGradient(op_type)` {#NoGradient}
Specifies that ops of type `op_type` do not have a defined gradient.
@@ -1603,13 +1572,13 @@ example:
tf.NoGradient("Size")
```
-##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
+##### Args:
* <b>`op_type`</b>: The string type of an operation. This corresponds to the
`OpDef.name` field for the proto that defines the operation.
-##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
+##### Raises:
* <b>`TypeError`</b>: If `op_type` is not a string.
@@ -1617,7 +1586,7 @@ tf.NoGradient("Size")
- - -
-### `class tf.RegisterShape` <a class="md-anchor" id="RegisterShape"></a>
+### `class tf.RegisterShape` {#RegisterShape}
A decorator for registering the shape function for an op type.
@@ -1641,7 +1610,7 @@ operation. This corresponds to the `OpDef.name` field for the proto
that defines the operation.
- - -
-#### `tf.RegisterShape.__init__(op_type)` <a class="md-anchor" id="RegisterShape.__init__"></a>
+#### `tf.RegisterShape.__init__(op_type)` {#RegisterShape.__init__}
Saves the "op_type" as the Operation type.
@@ -1649,7 +1618,7 @@ Saves the "op_type" as the Operation type.
- - -
-### `class tf.TensorShape` <a class="md-anchor" id="TensorShape"></a>
+### `class tf.TensorShape` {#TensorShape}
Represents the shape of a `Tensor`.
@@ -1672,24 +1641,24 @@ explicitly using [`Tensor.set_shape()`](../../api_docs/python/framework.md#Tenso
- - -
-#### `tf.TensorShape.merge_with(other)` <a class="md-anchor" id="TensorShape.merge_with"></a>
+#### `tf.TensorShape.merge_with(other)` {#TensorShape.merge_with}
Returns a `TensorShape` combining the information in `self` and `other`.
The dimensions in `self` and `other` are merged elementwise,
according to the rules defined for `Dimension.merge_with()`.
-##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
+##### Args:
* <b>`other`</b>: Another `TensorShape`.
-##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
+##### Returns:
A `TensorShape` containing the combined information of `self` and
`other`.
-##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
+##### Raises:
* <b>`ValueError`</b>: If `self` and `other` are not compatible.
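A sketch of elementwise merging, where unknown dimensions defer to known ones:
```python
import tensorflow as tf

s1 = tf.TensorShape([None, 28, 28])
s2 = tf.TensorShape([100, None, 28])
print s1.merge_with(s2).as_list()   # [100, 28, 28]

# Incompatible known dimensions raise ValueError:
# tf.TensorShape([32]).merge_with(tf.TensorShape([64]))
```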
@@ -1697,7 +1666,7 @@ according to the rules defined for `Dimension.merge_with()`.
- - -
-#### `tf.TensorShape.concatenate(other)` <a class="md-anchor" id="TensorShape.concatenate"></a>
+#### `tf.TensorShape.concatenate(other)` {#TensorShape.concatenate}
Returns the concatenation of the dimension in `self` and `other`.
@@ -1706,12 +1675,12 @@ concatenation will discard information about the other shape. In
future, we might support concatenation that preserves this
information for use with slicing.
-##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
+##### Args:
* <b>`other`</b>: Another `TensorShape`.
-##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
+##### Returns:
A `TensorShape` whose dimensions are the concatenation of the
dimensions in `self` and `other`.
@@ -1720,26 +1689,26 @@ information for use with slicing.
- - -
-#### `tf.TensorShape.ndims` <a class="md-anchor" id="TensorShape.ndims"></a>
+#### `tf.TensorShape.ndims` {#TensorShape.ndims}
Returns the rank of this shape, or None if it is unspecified.
- - -
-#### `tf.TensorShape.dims` <a class="md-anchor" id="TensorShape.dims"></a>
+#### `tf.TensorShape.dims` {#TensorShape.dims}
Returns a list of Dimensions, or None if the shape is unspecified.
- - -
-#### `tf.TensorShape.as_list()` <a class="md-anchor" id="TensorShape.as_list"></a>
+#### `tf.TensorShape.as_list()` {#TensorShape.as_list}
Returns a list of integers or None for each dimension.
- - -
-#### `tf.TensorShape.is_compatible_with(other)` <a class="md-anchor" id="TensorShape.is_compatible_with"></a>
+#### `tf.TensorShape.is_compatible_with(other)` {#TensorShape.is_compatible_with}
Returns True iff `self` is compatible with `other`.
@@ -1771,19 +1740,19 @@ TensorShape(None), and TensorShape(None) is compatible with
TensorShape([4, 4]), but TensorShape([32, 784]) is not compatible with
TensorShape([4, 4]).
-##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
+##### Args:
* <b>`other`</b>: Another TensorShape.
-##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
+##### Returns:
True iff `self` is compatible with `other`.
- - -
-#### `tf.TensorShape.is_fully_defined()` <a class="md-anchor" id="TensorShape.is_fully_defined"></a>
+#### `tf.TensorShape.is_fully_defined()` {#TensorShape.is_fully_defined}
Returns True iff `self` is fully defined in every dimension.
@@ -1791,23 +1760,23 @@ Returns True iff `self` is fully defined in every dimension.
- - -
-#### `tf.TensorShape.with_rank(rank)` <a class="md-anchor" id="TensorShape.with_rank"></a>
+#### `tf.TensorShape.with_rank(rank)` {#TensorShape.with_rank}
Returns a shape based on `self` with the given rank.
This method promotes a completely unknown shape to one with a
known rank.
-##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
+##### Args:
* <b>`rank`</b>: An integer.
-##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
+##### Returns:
A shape that is at least as specific as `self` with the given rank.
-##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
+##### Raises:
* <b>`ValueError`</b>: If `self` does not represent a shape with the given `rank`.
@@ -1815,21 +1784,21 @@ known rank.
- - -
-#### `tf.TensorShape.with_rank_at_least(rank)` <a class="md-anchor" id="TensorShape.with_rank_at_least"></a>
+#### `tf.TensorShape.with_rank_at_least(rank)` {#TensorShape.with_rank_at_least}
Returns a shape based on `self` with at least the given rank.
-##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
+##### Args:
* <b>`rank`</b>: An integer.
-##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
+##### Returns:
A shape that is at least as specific as `self` with at least the given
rank.
-##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
+##### Raises:
* <b>`ValueError`</b>: If `self` does not represent a shape with at least the given
@@ -1838,21 +1807,21 @@ Returns a shape based on `self` with at least the given rank.
- - -
-#### `tf.TensorShape.with_rank_at_most(rank)` <a class="md-anchor" id="TensorShape.with_rank_at_most"></a>
+#### `tf.TensorShape.with_rank_at_most(rank)` {#TensorShape.with_rank_at_most}
Returns a shape based on `self` with at most the given rank.
-##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
+##### Args:
* <b>`rank`</b>: An integer.
-##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
+##### Returns:
A shape that is at least as specific as `self` with at most the given
rank.
-##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
+##### Raises:
* <b>`ValueError`</b>: If `self` does not represent a shape with at most the given
@@ -1862,16 +1831,16 @@ Returns a shape based on `self` with at most the given rank.
- - -
-#### `tf.TensorShape.assert_has_rank(rank)` <a class="md-anchor" id="TensorShape.assert_has_rank"></a>
+#### `tf.TensorShape.assert_has_rank(rank)` {#TensorShape.assert_has_rank}
Raises an exception if `self` is not compatible with the given `rank`.
-##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
+##### Args:
* <b>`rank`</b>: An integer.
-##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
+##### Raises:
* <b>`ValueError`</b>: If `self` does not represent a shape with the given `rank`.
@@ -1879,16 +1848,16 @@ Raises an exception if `self` is not compatible with the given `rank`.
- - -
-#### `tf.TensorShape.assert_same_rank(other)` <a class="md-anchor" id="TensorShape.assert_same_rank"></a>
+#### `tf.TensorShape.assert_same_rank(other)` {#TensorShape.assert_same_rank}
Raises an exception if `self` and `other` do not have compatible ranks.
-##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
+##### Args:
* <b>`other`</b>: Another `TensorShape`.
-##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
+##### Raises:
* <b>`ValueError`</b>: If `self` and `other` do not represent shapes with the
@@ -1897,19 +1866,19 @@ Raises an exception if `self` and `other` do not have compatible ranks.
- - -
-#### `tf.TensorShape.assert_is_compatible_with(other)` <a class="md-anchor" id="TensorShape.assert_is_compatible_with"></a>
+#### `tf.TensorShape.assert_is_compatible_with(other)` {#TensorShape.assert_is_compatible_with}
Raises exception if `self` and `other` do not represent the same shape.
This method can be used to assert that there exists a shape that both
`self` and `other` represent.
-##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
+##### Args:
* <b>`other`</b>: Another TensorShape.
-##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
+##### Raises:
* <b>`ValueError`</b>: If `self` and `other` do not represent the same shape.
@@ -1917,25 +1886,25 @@ This method can be used to assert that there exists a shape that both
- - -
-#### `tf.TensorShape.assert_is_fully_defined()` <a class="md-anchor" id="TensorShape.assert_is_fully_defined"></a>
+#### `tf.TensorShape.assert_is_fully_defined()` {#TensorShape.assert_is_fully_defined}
Raises an exception if `self` is not fully defined in every dimension.
-##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
+##### Raises:
* <b>`ValueError`</b>: If `self` does not have a known value for every dimension.
-#### Other Methods <a class="md-anchor" id="AUTOGENERATED-other-methods"></a>
+#### Other Methods
- - -
-#### `tf.TensorShape.__init__(dims)` <a class="md-anchor" id="TensorShape.__init__"></a>
+#### `tf.TensorShape.__init__(dims)` {#TensorShape.__init__}
Creates a new TensorShape with the given dimensions.
-##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
+##### Args:
* <b>`dims`</b>: A list of Dimensions, or None if the shape is unspecified.
@@ -1944,14 +1913,14 @@ Creates a new TensorShape with the given dimensions.
- - -
-#### `tf.TensorShape.as_dimension_list()` <a class="md-anchor" id="TensorShape.as_dimension_list"></a>
+#### `tf.TensorShape.as_dimension_list()` {#TensorShape.as_dimension_list}
DEPRECATED: use as_list().
- - -
-#### `tf.TensorShape.num_elements()` <a class="md-anchor" id="TensorShape.num_elements"></a>
+#### `tf.TensorShape.num_elements()` {#TensorShape.num_elements}
Returns the total number of elements, or None for incomplete shapes.
@@ -1959,28 +1928,28 @@ Returns the total number of elements, or none for incomplete shapes.
- - -
-### `class tf.Dimension` <a class="md-anchor" id="Dimension"></a>
+### `class tf.Dimension` {#Dimension}
Represents the value of one dimension in a TensorShape.
- - -
-#### `tf.Dimension.__init__(value)` <a class="md-anchor" id="Dimension.__init__"></a>
+#### `tf.Dimension.__init__(value)` {#Dimension.__init__}
Creates a new Dimension with the given value.
- - -
-#### `tf.Dimension.assert_is_compatible_with(other)` <a class="md-anchor" id="Dimension.assert_is_compatible_with"></a>
+#### `tf.Dimension.assert_is_compatible_with(other)` {#Dimension.assert_is_compatible_with}
Raises an exception if `other` is not compatible with this Dimension.
-##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
+##### Args:
* <b>`other`</b>: Another Dimension.
-##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
+##### Raises:
* <b>`ValueError`</b>: If `self` and `other` are not compatible (see
@@ -1989,26 +1958,26 @@ Raises an exception if `other` is not compatible with this Dimension.
- - -
-#### `tf.Dimension.is_compatible_with(other)` <a class="md-anchor" id="Dimension.is_compatible_with"></a>
+#### `tf.Dimension.is_compatible_with(other)` {#Dimension.is_compatible_with}
Returns true if `other` is compatible with this Dimension.
Two known Dimensions are compatible if they have the same value.
An unknown Dimension is compatible with all other Dimensions.
-##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
+##### Args:
* <b>`other`</b>: Another Dimension.
-##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
+##### Returns:
True if this Dimension and `other` are compatible.
- - -
-#### `tf.Dimension.merge_with(other)` <a class="md-anchor" id="Dimension.merge_with"></a>
+#### `tf.Dimension.merge_with(other)` {#Dimension.merge_with}
Returns a Dimension that combines the information in `self` and `other`.
@@ -2020,17 +1989,17 @@ Dimensions are combined as follows:
Dimension(None).merge_with(Dimension(None)) == Dimension(None)
Dimension(n) .merge_with(Dimension(m)) raises ValueError for n != m
-##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
+##### Args:
* <b>`other`</b>: Another Dimension.
-##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
+##### Returns:
A Dimension containing the combined information of `self` and
`other`.
-##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
+##### Raises:
* <b>`ValueError`</b>: If `self` and `other` are not compatible (see
@@ -2039,14 +2008,14 @@ Dimensions are combined as follows:
- - -
-#### `tf.Dimension.value` <a class="md-anchor" id="Dimension.value"></a>
+#### `tf.Dimension.value` {#Dimension.value}
The value of this dimension, or None if it is unknown.
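A brief sketch of how unknown and known dimensions combine:
```python
import tensorflow as tf

d_unknown = tf.Dimension(None)
d32 = tf.Dimension(32)

print d_unknown.merge_with(d32).value     # 32
print d32.is_compatible_with(d_unknown)   # True
print d_unknown.value                     # None
```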
- - -
-### `tf.op_scope(values, name, default_name)` <a class="md-anchor" id="op_scope"></a>
+### `tf.op_scope(values, name, default_name)` {#op_scope}
Returns a context manager for use when defining a Python op.
@@ -2066,21 +2035,21 @@ def my_op(a, b, c, name=None):
return foo_op(..., name=scope)
```
-##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
+##### Args:
* <b>`values`</b>: The list of `Tensor` arguments that are passed to the op function.
* <b>`name`</b>: The name argument that is passed to the op function.
* <b>`default_name`</b>: The default name to use if the `name` argument is `None`.
-##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
+##### Returns:
A context manager for use in defining a Python op.
- - -
-### `tf.get_seed(op_seed)` <a class="md-anchor" id="get_seed"></a>
+### `tf.get_seed(op_seed)` {#get_seed}
Returns the local seeds an operation should use given an op-specific seed.
@@ -2092,12 +2061,12 @@ graph, or for only specific operations.
For details on how the graph-level seed interacts with op seeds, see
[`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed).
-##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
+##### Args:
* <b>`op_seed`</b>: integer.
-##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
+##### Returns:
A tuple of two integers that should be used for the local seed of this
operation.
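As a rough sketch (the exact values returned depend on whether graph-level and op-level seeds are set):
```python
import tensorflow as tf

tf.set_random_seed(1234)               # graph-level seed
graph_seed, op_seed = tf.get_seed(42)  # op-specific seed of 42
print graph_seed, op_seed              # a deterministic pair derived from both seeds
```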