author     A. Unique TensorFlower <gardener@tensorflow.org>  2017-02-13 17:12:21 -0800
committer  TensorFlower Gardener <gardener@tensorflow.org>   2017-02-13 17:50:38 -0800
commit     f9e99518804b50d335321bff67e6001da8f1f4f2 (patch)
tree       6b8ace460b5b13e4e5958370e7db4206e42bba6d
parent     3a87b7c77434434b541e7ecdf8381925c79ebf73 (diff)
Update generated Python Op docs.
Change: 147414471
-rw-r--r--  tensorflow/g3doc/api_docs/python/client.md  586
-rw-r--r--  tensorflow/g3doc/api_docs/python/contrib.copy_graph.md  4
-rw-r--r--  tensorflow/g3doc/api_docs/python/contrib.crf.md  4
-rw-r--r--  tensorflow/g3doc/api_docs/python/contrib.distributions.bijector.md  17
-rw-r--r--  tensorflow/g3doc/api_docs/python/contrib.distributions.md  34
-rw-r--r--  tensorflow/g3doc/api_docs/python/contrib.ffmpeg.md  18
-rw-r--r--  tensorflow/g3doc/api_docs/python/contrib.framework.md  7
-rw-r--r--  tensorflow/g3doc/api_docs/python/contrib.graph_editor.md  97
-rw-r--r--  tensorflow/g3doc/api_docs/python/contrib.integrate.md  37
-rw-r--r--  tensorflow/g3doc/api_docs/python/contrib.layers.md  39
-rw-r--r--  tensorflow/g3doc/api_docs/python/contrib.learn.md  17
-rw-r--r--  tensorflow/g3doc/api_docs/python/contrib.learn.monitors.md  57
-rw-r--r--  tensorflow/g3doc/api_docs/python/contrib.training.md  38
-rw-r--r--  tensorflow/g3doc/api_docs/python/contrib.util.md  4
-rw-r--r--  tensorflow/g3doc/api_docs/python/framework.md  1001
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.PriorityQueue.from_list.md  21
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.TensorArray.md  221
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.TensorShape.md  322
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.FIFOQueue.md  260
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.Tensor.md  339
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.AdadeltaOptimizer.md  155
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.MonitoredSession.md  2
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.MonitoredTrainingSession.md  4
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.DType.md  167
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.FIFOQueue.from_list.md  21
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.InteractiveSession.md  279
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.PriorityQueue.md  260
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.SparseTensor.md  163
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.AdagradOptimizer.md  155
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.train.SingularMonitoredSession.md  4
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.OpError.md  53
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.RandomShuffleQueue.md  260
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.train.FtrlOptimizer.md  155
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.PaddingFIFOQueue.md  260
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.QueueBase.md  216
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.train.AdamOptimizer.md  155
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.train.Coordinator.md  4
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.Operation.md  199
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.PaddingFIFOQueue.from_list.md  21
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.python_io.TFRecordWriter.md  37
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.summary.FileWriter.md  112
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.IndexedSlices.md  37
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.RandomShuffleQueue.from_list.md  21
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.Session.md  342
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.train.AdagradDAOptimizer.md  155
-rw-r--r--  tensorflow/g3doc/api_docs/python/index.md  8
-rw-r--r--  tensorflow/g3doc/api_docs/python/io_ops.md  1228
-rw-r--r--  tensorflow/g3doc/api_docs/python/python_io.md  61
-rw-r--r--  tensorflow/g3doc/api_docs/python/sparse_ops.md  163
-rw-r--r--  tensorflow/g3doc/api_docs/python/state_ops.md  37
-rw-r--r--  tensorflow/g3doc/api_docs/python/string_ops.md  17
-rw-r--r--  tensorflow/g3doc/api_docs/python/summary.md  121
-rw-r--r--  tensorflow/g3doc/api_docs/python/tensor_array_ops.md  225
-rw-r--r--  tensorflow/g3doc/api_docs/python/test.md  34
-rw-r--r--  tensorflow/g3doc/api_docs/python/tf_debug.md  34
-rw-r--r--  tensorflow/g3doc/api_docs/python/train.md  860
56 files changed, 6510 insertions, 2638 deletions
diff --git a/tensorflow/g3doc/api_docs/python/client.md b/tensorflow/g3doc/api_docs/python/client.md
index fbd1bf5808..19c5b269d5 100644
--- a/tensorflow/g3doc/api_docs/python/client.md
+++ b/tensorflow/g3doc/api_docs/python/client.md
@@ -64,6 +64,26 @@ create a session as follows:
sess = tf.Session(config=tf.ConfigProto(allow_soft_placement=True,
log_device_placement=True))
```
+- - -
+
+#### `tf.Session.__del__()` {#Session.__del__}
+
+
+
+
+- - -
+
+#### `tf.Session.__enter__()` {#Session.__enter__}
+
+
+
+
+- - -
+
+#### `tf.Session.__exit__(exec_type, exec_value, exec_tb)` {#Session.__exit__}
+
+
+
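+These context-manager methods let a `Session` be used directly in a `with`
+block: `__enter__` installs the session as the default and returns it, and
+`__exit__` closes it when the block ends, even on an exception. A minimal
+sketch (not part of the generated docs):
+
+```python
+with tf.Session() as sess:
+    # The session is the default inside the block and is closed on exit.
+    print(sess.run(tf.constant(3.0)))
+```
+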
- - -
@@ -93,6 +113,207 @@ the session constructor.
- - -
+#### `tf.Session.as_default()` {#Session.as_default}
+
+Returns a context manager that makes this object the default session.
+
+Use with the `with` keyword to specify that calls to
+[`Operation.run()`](../../api_docs/python/framework.md#Operation.run) or
+[`Tensor.eval()`](../../api_docs/python/framework.md#Tensor.eval) should be
+executed in this session.
+
+```python
+c = tf.constant(...)
+sess = tf.Session()
+
+with sess.as_default():
+ assert tf.get_default_session() is sess
+ print(c.eval())
+```
+
+To get the current default session, use
+[`tf.get_default_session()`](#get_default_session).
+
+
+*N.B.* The `as_default` context manager *does not* close the
+session when you exit the context, and you must close the session
+explicitly.
+
+```python
+c = tf.constant(...)
+sess = tf.Session()
+with sess.as_default():
+ print(c.eval())
+# ...
+with sess.as_default():
+ print(c.eval())
+
+sess.close()
+```
+
+Alternatively, you can use `with tf.Session():` to create a
+session that is automatically closed on exiting the context,
+including when an uncaught exception is raised.
+
+*N.B.* The default graph is a property of the current thread. If you
+create a new thread, and wish to use the default session in that
+thread, you must explicitly add a `with sess.as_default():` in that
+thread's function.
+
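+A hedged sketch of that pattern (not part of the generated docs; the thread
+simply evaluates a constant):
+
+```python
+import threading
+
+c = tf.constant(42.0)
+sess = tf.Session()
+
+def worker():
+    # The default session is per-thread, so enter it explicitly here.
+    with sess.as_default():
+        print(c.eval())
+
+t = threading.Thread(target=worker)
+t.start()
+t.join()
+```
+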
+##### Returns:
+
+ A context manager using this session as the default session.
+
+
+- - -
+
+#### `tf.Session.close()` {#Session.close}
+
+Closes this session.
+
+Calling this method frees all resources associated with the session.
+
+##### Raises:
+
+ tf.errors.OpError: Or one of its subclasses if an error occurs while
+ closing the TensorFlow session.
+
+
+- - -
+
+#### `tf.Session.graph` {#Session.graph}
+
+The graph that was launched in this session.
+
+
+- - -
+
+#### `tf.Session.graph_def` {#Session.graph_def}
+
+A serializable version of the underlying TensorFlow graph.
+
+##### Returns:
+
+ A graph_pb2.GraphDef proto containing nodes for all of the Operations in
+ the underlying TensorFlow graph.
+
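+For example, the proto can be inspected directly (a small sketch, assuming a
+fresh graph containing a single constant):
+
+```python
+tf.constant(1.0, name='one')
+sess = tf.Session()
+# graph_def reflects the launched graph; here it contains one Const node.
+print([node.name for node in sess.graph_def.node])
+```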
+
+- - -
+
+#### `tf.Session.partial_run(handle, fetches, feed_dict=None)` {#Session.partial_run}
+
+Continues the execution with more feeds and fetches.
+
+This is EXPERIMENTAL and subject to change.
+
+To use partial execution, a user first calls `partial_run_setup()` and
+then a sequence of `partial_run()`. `partial_run_setup` specifies the
+list of feeds and fetches that will be used in the subsequent
+`partial_run` calls.
+
+The optional `feed_dict` argument allows the caller to override
+the value of tensors in the graph. See run() for more information.
+
+Below is a simple example:
+
+```python
+a = array_ops.placeholder(dtypes.float32, shape=[])
+b = array_ops.placeholder(dtypes.float32, shape=[])
+c = array_ops.placeholder(dtypes.float32, shape=[])
+r1 = math_ops.add(a, b)
+r2 = math_ops.multiply(r1, c)
+
+h = sess.partial_run_setup([r1, r2], [a, b, c])
+res = sess.partial_run(h, r1, feed_dict={a: 1, b: 2})
+res = sess.partial_run(h, r2, feed_dict={c: res})
+```
+
+##### Args:
+
+
+* <b>`handle`</b>: A handle for a sequence of partial runs.
+* <b>`fetches`</b>: A single graph element, a list of graph elements,
+ or a dictionary whose values are graph elements or lists of graph
+ elements (see documentation for `run`).
+* <b>`feed_dict`</b>: A dictionary that maps graph elements to values
+ (described above).
+
+##### Returns:
+
+ Either a single value if `fetches` is a single graph element, or
+ a list of values if `fetches` is a list, or a dictionary with the
+ same keys as `fetches` if that is a dictionary
+ (see documentation for `run`).
+
+##### Raises:
+
+ tf.errors.OpError: Or one of its subclasses on error.
+
+
+- - -
+
+#### `tf.Session.partial_run_setup(fetches, feeds=None)` {#Session.partial_run_setup}
+
+Sets up a graph with feeds and fetches for partial run.
+
+This is EXPERIMENTAL and subject to change.
+
+Note that, unlike `run`, `feeds` only specifies the graph elements.
+The tensors will be supplied by the subsequent `partial_run` calls.
+
+##### Args:
+
+
+* <b>`fetches`</b>: A single graph element, or a list of graph elements.
+* <b>`feeds`</b>: A single graph element, or a list of graph elements.
+
+##### Returns:
+
+ A handle for partial run.
+
+##### Raises:
+
+
+* <b>`RuntimeError`</b>: If this `Session` is in an invalid state (e.g. has been
+ closed).
+* <b>`TypeError`</b>: If `fetches` or `feed_dict` keys are of an inappropriate type.
+ tf.errors.OpError: Or one of its subclasses if a TensorFlow error happens.
+
+
+- - -
+
+#### `tf.Session.reset(target, containers=None, config=None)` {#Session.reset}
+
+Resets resource containers on `target`, and closes all connected sessions.
+
+A resource container is distributed across all workers in the
+same cluster as `target`. When a resource container on `target`
+is reset, resources associated with that container will be cleared.
+In particular, all Variables in the container will become undefined:
+they lose their values and shapes.
+
+NOTE:
+(i) reset() is currently only implemented for distributed sessions.
+(ii) Any sessions on the master named by `target` will be closed.
+
+If no resource containers are provided, all containers are reset.
+
+##### Args:
+
+
+* <b>`target`</b>: The execution engine to connect to.
+* <b>`containers`</b>: A list of resource container name strings, or `None`
+ if all the containers are to be reset.
+* <b>`config`</b>: (Optional.) Protocol buffer with configuration options.
+
+##### Raises:
+
+ tf.errors.OpError: Or one of its subclasses if an error occurs while
+ resetting containers.
+
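+A hedged sketch (the `grpc://...` target and container name below are purely
+illustrative; `reset` only applies to distributed sessions):
+
+```python
+# Clear the variables held in the "experiment" container on that cluster,
+# closing any sessions connected to the master in the process.
+tf.Session.reset("grpc://worker0:2222", containers=["experiment"])
+```
+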
+
+- - -
+
#### `tf.Session.run(fetches, feed_dict=None, options=None, run_metadata=None)` {#Session.run}
Runs operations and evaluates tensors in `fetches`.
@@ -204,30 +425,85 @@ collected into this argument and passed back.
- - -
-#### `tf.Session.close()` {#Session.close}
+#### `tf.Session.sess_str` {#Session.sess_str}
-Closes this session.
-Calling this method frees all resources associated with the session.
-##### Raises:
- tf.errors.OpError: Or one of its subclasses if an error occurs while
- closing the TensorFlow session.
+- - -
+
+### `class tf.InteractiveSession` {#InteractiveSession}
+
+A TensorFlow `Session` for use in interactive contexts, such as a shell.
+
+The only difference with a regular `Session` is that an `InteractiveSession`
+installs itself as the default session on construction.
+The methods [`Tensor.eval()`](../../api_docs/python/framework.md#Tensor.eval)
+and [`Operation.run()`](../../api_docs/python/framework.md#Operation.run)
+will use that session to run ops.
+
+This is convenient in interactive shells and [IPython
+notebooks](http://ipython.org), as it avoids having to pass an explicit
+`Session` object to run ops.
+
+For example:
+
+```python
+sess = tf.InteractiveSession()
+a = tf.constant(5.0)
+b = tf.constant(6.0)
+c = a * b
+# We can just use 'c.eval()' without passing 'sess'
+print(c.eval())
+sess.close()
+```
+Note that a regular session installs itself as the default session when it
+is created in a `with` statement. The common usage in non-interactive
+programs is to follow that pattern:
+```python
+a = tf.constant(5.0)
+b = tf.constant(6.0)
+c = a * b
+with tf.Session():
+ # We can also use 'c.eval()' here.
+ print(c.eval())
+```
- - -
-#### `tf.Session.graph` {#Session.graph}
+#### `tf.InteractiveSession.__del__()` {#InteractiveSession.__del__}
-The graph that was launched in this session.
- - -
-#### `tf.Session.as_default()` {#Session.as_default}
+#### `tf.InteractiveSession.__init__(target='', graph=None, config=None)` {#InteractiveSession.__init__}
+
+Creates a new interactive TensorFlow session.
+
+If no `graph` argument is specified when constructing the session,
+the default graph will be launched in the session. If you are
+using more than one graph (created with `tf.Graph()` in the same
+process), you will have to use different sessions for each graph,
+but each graph can be used in multiple sessions. In this case, it
+is often clearer to pass the graph to be launched explicitly to
+the session constructor.
+
+##### Args:
+
+
+* <b>`target`</b>: (Optional.) The execution engine to connect to.
+ Defaults to using an in-process engine.
+* <b>`graph`</b>: (Optional.) The `Graph` to be launched (described above).
+* <b>`config`</b>: (Optional) `ConfigProto` proto used to configure the session.
+
+
+- - -
+
+#### `tf.InteractiveSession.as_default()` {#InteractiveSession.as_default}
Returns a context manager that makes this object the default session.
@@ -279,125 +555,230 @@ thread's function.
A context manager using this session as the default session.
+- - -
+
+#### `tf.InteractiveSession.close()` {#InteractiveSession.close}
+
+Closes an `InteractiveSession`.
+
- - -
-#### `tf.Session.reset(target, containers=None, config=None)` {#Session.reset}
+#### `tf.InteractiveSession.graph` {#InteractiveSession.graph}
-Resets resource containers on `target`, and close all connected sessions.
+The graph that was launched in this session.
-A resource container is distributed across all workers in the
-same cluster as `target`. When a resource container on `target`
-is reset, resources associated with that container will be cleared.
-In particular, all Variables in the container will become undefined:
-they lose their values and shapes.
-NOTE:
-(i) reset() is currently only implemented for distributed sessions.
-(ii) Any sessions on the master named by `target` will be closed.
+- - -
-If no resource containers are provided, all containers are reset.
+#### `tf.InteractiveSession.graph_def` {#InteractiveSession.graph_def}
+
+A serializable version of the underlying TensorFlow graph.
+
+##### Returns:
+
+ A graph_pb2.GraphDef proto containing nodes for all of the Operations in
+ the underlying TensorFlow graph.
+
+
+- - -
+
+#### `tf.InteractiveSession.partial_run(handle, fetches, feed_dict=None)` {#InteractiveSession.partial_run}
+
+Continues the execution with more feeds and fetches.
+
+This is EXPERIMENTAL and subject to change.
+
+To use partial execution, a user first calls `partial_run_setup()` and
+then a sequence of `partial_run()`. `partial_run_setup` specifies the
+list of feeds and fetches that will be used in the subsequent
+`partial_run` calls.
+
+The optional `feed_dict` argument allows the caller to override
+the value of tensors in the graph. See run() for more information.
+
+Below is a simple example:
+
+```python
+a = array_ops.placeholder(dtypes.float32, shape=[])
+b = array_ops.placeholder(dtypes.float32, shape=[])
+c = array_ops.placeholder(dtypes.float32, shape=[])
+r1 = math_ops.add(a, b)
+r2 = math_ops.multiply(r1, c)
+
+h = sess.partial_run_setup([r1, r2], [a, b, c])
+res = sess.partial_run(h, r1, feed_dict={a: 1, b: 2})
+res = sess.partial_run(h, r2, feed_dict={c: res})
+```
##### Args:
-* <b>`target`</b>: The execution engine to connect to.
-* <b>`containers`</b>: A list of resource container name strings, or `None` if all of
- all the containers are to be reset.
-* <b>`config`</b>: (Optional.) Protocol buffer with configuration options.
+* <b>`handle`</b>: A handle for a sequence of partial runs.
+* <b>`fetches`</b>: A single graph element, a list of graph elements,
+ or a dictionary whose values are graph elements or lists of graph
+ elements (see documentation for `run`).
+* <b>`feed_dict`</b>: A dictionary that maps graph elements to values
+ (described above).
-##### Raises:
+##### Returns:
- tf.errors.OpError: Or one of its subclasses if an error occurs while
- resetting containers.
+ Either a single value if `fetches` is a single graph element, or
+ a list of values if `fetches` is a list, or a dictionary with the
+ same keys as `fetches` if that is a dictionary
+ (see documentation for `run`).
+
+##### Raises:
+ tf.errors.OpError: Or one of its subclasses on error.
-#### Other Methods
- - -
-#### `tf.Session.__enter__()` {#Session.__enter__}
+#### `tf.InteractiveSession.partial_run_setup(fetches, feeds=None)` {#InteractiveSession.partial_run_setup}
+Sets up a graph with feeds and fetches for partial run.
+This is EXPERIMENTAL and subject to change.
+Note that, unlike `run`, `feeds` only specifies the graph elements.
+The tensors will be supplied by the subsequent `partial_run` calls.
-- - -
+##### Args:
-#### `tf.Session.__exit__(exec_type, exec_value, exec_tb)` {#Session.__exit__}
+* <b>`fetches`</b>: A single graph element, or a list of graph elements.
+* <b>`feeds`</b>: A single graph element, or a list of graph elements.
+
+##### Returns:
+ A handle for partial run.
+##### Raises:
+
+
+* <b>`RuntimeError`</b>: If this `Session` is in an invalid state (e.g. has been
+ closed).
+* <b>`TypeError`</b>: If `fetches` or `feed_dict` keys are of an inappropriate type.
+ tf.errors.OpError: Or one of its subclasses if a TensorFlow error happens.
- - -
-### `class tf.InteractiveSession` {#InteractiveSession}
+#### `tf.InteractiveSession.run(fetches, feed_dict=None, options=None, run_metadata=None)` {#InteractiveSession.run}
-A TensorFlow `Session` for use in interactive contexts, such as a shell.
+Runs operations and evaluates tensors in `fetches`.
-The only difference with a regular `Session` is that an `InteractiveSession`
-installs itself as the default session on construction.
-The methods [`Tensor.eval()`](../../api_docs/python/framework.md#Tensor.eval)
-and [`Operation.run()`](../../api_docs/python/framework.md#Operation.run)
-will use that session to run ops.
+This method runs one "step" of TensorFlow computation, by
+running the necessary graph fragment to execute every `Operation`
+and evaluate every `Tensor` in `fetches`, substituting the values in
+`feed_dict` for the corresponding input values.
-This is convenient in interactive shells and [IPython
-notebooks](http://ipython.org), as it avoids having to pass an explicit
-`Session` object to run ops.
+The `fetches` argument may be a single graph element, or an arbitrarily
+nested list, tuple, namedtuple, dict, or OrderedDict containing graph
+elements at its leaves. A graph element can be one of the following types:
-For example:
+* An [`Operation`](../../api_docs/python/framework.md#Operation).
+ The corresponding fetched value will be `None`.
+* A [`Tensor`](../../api_docs/python/framework.md#Tensor).
+ The corresponding fetched value will be a numpy ndarray containing the
+ value of that tensor.
+* A [`SparseTensor`](../../api_docs/python/sparse_ops.md#SparseTensor).
+ The corresponding fetched value will be a
+ [`SparseTensorValue`](../../api_docs/python/sparse_ops.md#SparseTensorValue)
+ containing the value of that sparse tensor.
+* A `get_tensor_handle` op. The corresponding fetched value will be a
+ numpy ndarray containing the handle of that tensor.
+* A `string` which is the name of a tensor or operation in the graph.
-```python
-sess = tf.InteractiveSession()
-a = tf.constant(5.0)
-b = tf.constant(6.0)
-c = a * b
-# We can just use 'c.eval()' without passing 'sess'
-print(c.eval())
-sess.close()
-```
+The value returned by `run()` has the same shape as the `fetches` argument,
+where the leaves are replaced by the corresponding values returned by
+TensorFlow.
-Note that a regular session installs itself as the default session when it
-is created in a `with` statement. The common usage in non-interactive
-programs is to follow that pattern:
+Example:
```python
-a = tf.constant(5.0)
-b = tf.constant(6.0)
-c = a * b
-with tf.Session():
- # We can also use 'c.eval()' here.
- print(c.eval())
+ a = tf.constant([10, 20])
+ b = tf.constant([1.0, 2.0])
+ # 'fetches' can be a singleton
+ v = session.run(a)
+ # v is the numpy array [10, 20]
+ # 'fetches' can be a list.
+ v = session.run([a, b])
+ # v is a Python list with 2 numpy arrays: the numpy array [10, 20] and the
+ # 1-D array [1.0, 2.0]
+ # 'fetches' can be arbitrary lists, tuples, namedtuple, dicts:
+ MyData = collections.namedtuple('MyData', ['a', 'b'])
+ v = session.run({'k1': MyData(a, b), 'k2': [b, a]})
+ # v is a dict with
+ # v['k1'] is a MyData namedtuple with 'a' the numpy array [10, 20] and
+ # 'b' the numpy array [1.0, 2.0]
+ # v['k2'] is a list with the numpy array [1.0, 2.0] and the numpy array
+ # [10, 20].
```
-- - -
+The optional `feed_dict` argument allows the caller to override
+the value of tensors in the graph. Each key in `feed_dict` can be
+one of the following types:
-#### `tf.InteractiveSession.__init__(target='', graph=None, config=None)` {#InteractiveSession.__init__}
+* If the key is a [`Tensor`](../../api_docs/python/framework.md#Tensor), the
+ value may be a Python scalar, string, list, or numpy ndarray
+ that can be converted to the same `dtype` as that
+ tensor. Additionally, if the key is a
+ [placeholder](../../api_docs/python/io_ops.md#placeholder), the shape of
+ the value will be checked for compatibility with the placeholder.
+* If the key is a
+ [`SparseTensor`](../../api_docs/python/sparse_ops.md#SparseTensor),
+ the value should be a
+ [`SparseTensorValue`](../../api_docs/python/sparse_ops.md#SparseTensorValue).
+* If the key is a nested tuple of `Tensor`s or `SparseTensor`s, the value
+ should be a nested tuple with the same structure that maps to their
+ corresponding values as above.
-Creates a new interactive TensorFlow session.
+Each value in `feed_dict` must be convertible to a numpy array of the dtype
+of the corresponding key.
-If no `graph` argument is specified when constructing the session,
-the default graph will be launched in the session. If you are
-using more than one graph (created with `tf.Graph()` in the same
-process, you will have to use different sessions for each graph,
-but each graph can be used in multiple sessions. In this case, it
-is often clearer to pass the graph to be launched explicitly to
-the session constructor.
+The optional `options` argument expects a [`RunOptions`] proto. The options
+allow controlling the behavior of this particular step (e.g. turning tracing
+on).
+
+The optional `run_metadata` argument expects a [`RunMetadata`] proto. When
+appropriate, the non-Tensor output of this step will be collected there. For
+example, when users turn on tracing in `options`, the profiled info will be
+collected into this argument and passed back.
##### Args:
-* <b>`target`</b>: (Optional.) The execution engine to connect to.
- Defaults to using an in-process engine.
-* <b>`graph`</b>: (Optional.) The `Graph` to be launched (described above).
-* <b>`config`</b>: (Optional) `ConfigProto` proto used to configure the session.
+* <b>`fetches`</b>: A single graph element, a list of graph elements,
+ or a dictionary whose values are graph elements or lists of graph
+ elements (described above).
+* <b>`feed_dict`</b>: A dictionary that maps graph elements to values
+ (described above).
+* <b>`options`</b>: A [`RunOptions`] protocol buffer
+* <b>`run_metadata`</b>: A [`RunMetadata`] protocol buffer
+
+##### Returns:
+
+ Either a single value if `fetches` is a single graph element, or
+ a list of values if `fetches` is a list, or a dictionary with the
+ same keys as `fetches` if that is a dictionary (described above).
+
+##### Raises:
+
+
+* <b>`RuntimeError`</b>: If this `Session` is in an invalid state (e.g. has been
+ closed).
+* <b>`TypeError`</b>: If `fetches` or `feed_dict` keys are of an inappropriate type.
+* <b>`ValueError`</b>: If `fetches` or `feed_dict` keys are invalid or refer to a
+ `Tensor` that doesn't exist.
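+
+As a small illustration of feeding a placeholder (a sketch, not from the
+generated docs; the placeholder shape is arbitrary):
+
+```python
+x = tf.placeholder(tf.float32, shape=[None, 3])
+y = x * 2.0
+sess = tf.InteractiveSession()
+print(sess.run(y, feed_dict={x: [[1.0, 2.0, 3.0]]}))  # [[2. 4. 6.]]
+sess.close()
+```
+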
- - -
-#### `tf.InteractiveSession.close()` {#InteractiveSession.close}
+#### `tf.InteractiveSession.sess_str` {#InteractiveSession.sess_str}
+
-Closes an `InteractiveSession`.
@@ -432,34 +813,6 @@ A generic error that is raised when TensorFlow execution fails.
Whenever possible, the session will raise a more specific subclass
of `OpError` from the `tf.errors` module.
-
-- - -
-
-#### `tf.OpError.op` {#OpError.op}
-
-The operation that failed, if known.
-
-*N.B.* If the failed op was synthesized at runtime, e.g. a `Send`
-or `Recv` op, there will be no corresponding
-[`Operation`](../../api_docs/python/framework.md#Operation)
-object. In that case, this will return `None`, and you should
-instead use the [`OpError.node_def`](#OpError.node_def) to
-discover information about the op.
-
-##### Returns:
-
- The `Operation` that failed, or None.
-
-
-- - -
-
-#### `tf.OpError.node_def` {#OpError.node_def}
-
-The `NodeDef` proto representing the op that failed.
-
-
-
-#### Other Methods
- - -
#### `tf.OpError.__init__(node_def, op, message, error_code)` {#OpError.__init__}
@@ -497,6 +850,31 @@ The integer error code that describes the error.
The error message that describes the error.
+- - -
+
+#### `tf.OpError.node_def` {#OpError.node_def}
+
+The `NodeDef` proto representing the op that failed.
+
+
+- - -
+
+#### `tf.OpError.op` {#OpError.op}
+
+The operation that failed, if known.
+
+*N.B.* If the failed op was synthesized at runtime, e.g. a `Send`
+or `Recv` op, there will be no corresponding
+[`Operation`](../../api_docs/python/framework.md#Operation)
+object. In that case, this will return `None`, and you should
+instead use the [`OpError.node_def`](#OpError.node_def) to
+discover information about the op.
+
+##### Returns:
+
+ The `Operation` that failed, or None.
+
+
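+A hedged sketch of handling an `OpError` (the failure below, reading an
+uninitialized variable, is only illustrative):
+
+```python
+v = tf.Variable(1.0)
+sess = tf.Session()
+try:
+    sess.run(v)  # fails because the variable was never initialized
+except tf.errors.OpError as e:
+    print(e.message)
+    print(e.op.name if e.op is not None else e.node_def)
+```
+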
- - -
diff --git a/tensorflow/g3doc/api_docs/python/contrib.copy_graph.md b/tensorflow/g3doc/api_docs/python/contrib.copy_graph.md
index 2fc64e3071..90c16ce140 100644
--- a/tensorflow/g3doc/api_docs/python/contrib.copy_graph.md
+++ b/tensorflow/g3doc/api_docs/python/contrib.copy_graph.md
@@ -3,7 +3,9 @@
# Copying Graph Elements (contrib)
[TOC]
-Functions for copying elements from one graph to another.
+Functions to copy elements between graphs.
+
+See the @{$python/contrib.copy_graph} guide.
## Other Functions and Classes
- - -
diff --git a/tensorflow/g3doc/api_docs/python/contrib.crf.md b/tensorflow/g3doc/api_docs/python/contrib.crf.md
index 27ae98cb7b..8966bcb38d 100644
--- a/tensorflow/g3doc/api_docs/python/contrib.crf.md
+++ b/tensorflow/g3doc/api_docs/python/contrib.crf.md
@@ -3,9 +3,7 @@
# CRF (contrib)
[TOC]
-Linear-chain CRF layer.
-
-## This package provides functions for building a linear-chain CRF layer.
+Linear-chain CRF layer. See the @{$python/contrib.crf} guide.
- - -
diff --git a/tensorflow/g3doc/api_docs/python/contrib.distributions.bijector.md b/tensorflow/g3doc/api_docs/python/contrib.distributions.bijector.md
index eef1e88ac5..e66fd67d50 100644
--- a/tensorflow/g3doc/api_docs/python/contrib.distributions.bijector.md
+++ b/tensorflow/g3doc/api_docs/python/contrib.distributions.bijector.md
@@ -3,22 +3,7 @@
# Random variable transformations (contrib)
[TOC]
-Bijector Ops.
-
-An API for invertible, differentiable transformations of random variables.
-
-## Background
-
-Differentiable, bijective transformations of continuous random variables alter
-the calculations made in the cumulative/probability distribution functions and
-sample function. This module provides a standard interface for making these
-manipulations.
-
-For more details and examples, see the `Bijector` docstring.
-
-To apply a `Bijector`, use `distributions.TransformedDistribution`.
-
-## Bijectors
+Bijector Ops. See the @{$python/contrib.distributions.bijector} guide.
- - -
diff --git a/tensorflow/g3doc/api_docs/python/contrib.distributions.md b/tensorflow/g3doc/api_docs/python/contrib.distributions.md
index 85a18c8c50..b3a7d661db 100644
--- a/tensorflow/g3doc/api_docs/python/contrib.distributions.md
+++ b/tensorflow/g3doc/api_docs/python/contrib.distributions.md
@@ -5,12 +5,7 @@
Classes representing statistical distributions and ops for working with them.
-## Classes for statistical distributions.
-
-Classes that represent batches of statistical distributions. Each class is
-initialized with parameters that define the distributions.
-
-## Base classes
+See the @{$python/contrib.distributions} guide.
- - -
@@ -760,8 +755,6 @@ denotes expectation, and `Var.shape = batch_shape + event_shape`.
-## Univariate (scalar) distributions
-
- - -
### `class tf.contrib.distributions.Binomial` {#Binomial}
@@ -14933,10 +14926,6 @@ denotes expectation, and `Var.shape = batch_shape + event_shape`.
-## Multivariate distributions
-
-### Multivariate normal
-
- - -
### `class tf.contrib.distributions.MultivariateNormalDiag` {#MultivariateNormalDiag}
@@ -17890,8 +17879,6 @@ denotes expectation, and `Var.shape = batch_shape + event_shape`.
-### Other multivariate distributions
-
- - -
### `class tf.contrib.distributions.Dirichlet` {#Dirichlet}
@@ -21365,8 +21352,6 @@ denotes expectation, and `Var.shape = batch_shape + event_shape`.
-### Multivariate Utilities
-
- - -
### `tf.contrib.distributions.matrix_diag_transform(matrix, transform=None, name=None)` {#matrix_diag_transform}
@@ -21424,8 +21409,6 @@ loss = -1 * tf.reduce_mean(dist.log_prob(labels))
-## Transformed distributions
-
- - -
### `class tf.contrib.distributions.TransformedDistribution` {#TransformedDistribution}
@@ -22887,8 +22870,6 @@ denotes expectation, and `Var.shape = batch_shape + event_shape`.
-## Mixture Models
-
- - -
### `class tf.contrib.distributions.Mixture` {#Mixture}
@@ -23554,13 +23535,6 @@ denotes expectation, and `Var.shape = batch_shape + event_shape`.
-## Posterior inference with conjugate priors.
-
-Functions that transform conjugate prior/likelihood pairs to distributions
-representing the posterior or posterior predictive.
-
-## Normal likelihood with conjugate prior.
-
- - -
### `tf.contrib.distributions.normal_conjugates_known_scale_posterior(prior, scale, s, n)` {#normal_conjugates_known_scale_posterior}
@@ -23671,8 +23645,6 @@ will broadcast in the case of multidimensional sets of parameters.
-## Kullback-Leibler Divergence
-
- - -
### `tf.contrib.distributions.kl(dist_a, dist_b, allow_nan=False, name=None)` {#kl}
@@ -23762,8 +23734,6 @@ Initialize the KL registrar.
-## Utilities
-
- - -
### `tf.contrib.distributions.softplus_inverse(x, name=None)` {#softplus_inverse}
@@ -23788,8 +23758,6 @@ softplus_inverse = log(exp(x) - 1.)
-## Relaxed Discrete Distributions
-
- - -
### `class tf.contrib.distributions.ExpRelaxedOneHotCategorical` {#ExpRelaxedOneHotCategorical}
diff --git a/tensorflow/g3doc/api_docs/python/contrib.ffmpeg.md b/tensorflow/g3doc/api_docs/python/contrib.ffmpeg.md
index 572b7ccc1a..e420e4687f 100644
--- a/tensorflow/g3doc/api_docs/python/contrib.ffmpeg.md
+++ b/tensorflow/g3doc/api_docs/python/contrib.ffmpeg.md
@@ -3,23 +3,7 @@
# FFmpeg (contrib)
[TOC]
-## Encoding and decoding audio using FFmpeg
-
-TensorFlow provides Ops to decode and encode audio files using the
-[FFmpeg](https://www.ffmpeg.org/) library. FFmpeg must be
-locally [installed](https://ffmpeg.org/download.html) for these Ops to succeed.
-
-Example:
-
-```python
-from tensorflow.contrib import ffmpeg
-
-audio_binary = tf.read_file('song.mp3')
-waveform = ffmpeg.decode_audio(
- audio_binary, file_format='mp3', samples_per_second=44100, channel_count=2)
-uncompressed_binary = ffmpeg.encode_audio(
- waveform, file_format='wav', samples_per_second=44100)
-```
+Working with audio using FFmpeg. See the @{$python/contrib.ffmpeg} guide.
- - -
diff --git a/tensorflow/g3doc/api_docs/python/contrib.framework.md b/tensorflow/g3doc/api_docs/python/contrib.framework.md
index cdef1616b2..5b00ee590a 100644
--- a/tensorflow/g3doc/api_docs/python/contrib.framework.md
+++ b/tensorflow/g3doc/api_docs/python/contrib.framework.md
@@ -3,7 +3,7 @@
# Framework (contrib)
[TOC]
-Framework utilities.
+Framework utilities. See the @{$python/contrib.framework} guide.
- - -
@@ -298,7 +298,6 @@ Assert tensors are the same shape, from the same graph.
-## Deprecation
- - -
### `tf.contrib.framework.deprecated(date, instructions)` {#deprecated}
@@ -419,7 +418,6 @@ prepended to the rest of the docstring.
-## Arg_Scope
- - -
### `tf.contrib.framework.arg_scope(list_ops_or_scope, **kwargs)` {#arg_scope}
@@ -498,7 +496,6 @@ Returns the list kwargs that arg_scope can set for a func.
-## Variables
- - -
### `tf.contrib.framework.add_model_variable(var)` {#add_model_variable}
@@ -1077,8 +1074,6 @@ save memory during initialization.
-## Checkpoint utilities
-
- - -
### `tf.contrib.framework.load_checkpoint(filepattern)` {#load_checkpoint}
diff --git a/tensorflow/g3doc/api_docs/python/contrib.graph_editor.md b/tensorflow/g3doc/api_docs/python/contrib.graph_editor.md
index a612eda9d2..700af31086 100644
--- a/tensorflow/g3doc/api_docs/python/contrib.graph_editor.md
+++ b/tensorflow/g3doc/api_docs/python/contrib.graph_editor.md
@@ -3,102 +3,7 @@
# Graph Editor (contrib)
[TOC]
-TensorFlow Graph Editor.
-
-The TensorFlow Graph Editor library allows for modification of an existing
-`tf.Graph` instance in-place.
-
-The author's github username is [purpledog](https://github.com/purpledog).
-
-## Library overview
-
-Appending new nodes is the only graph editing operation allowed by the
-TensorFlow core library. The Graph Editor library is an attempt to allow for
-other kinds of editing operations, namely, *rerouting* and *transforming*.
-
-* *rerouting* is a local operation consisting in re-plugging existing tensors
- (the edges of the graph). Operations (the nodes) are not modified by this
- operation. For example, rerouting can be used to insert an operation adding
- noise in place of an existing tensor.
-* *transforming* is a global operation consisting in transforming a graph into
- another. By default, a transformation is a simple copy but it can be
- customized to achieved other goals. For instance, a graph can be transformed
- into another one in which noise is added after all the operations of a
- specific type.
-
-**Important: modifying a graph in-place with the Graph Editor must be done
-`offline`, that is, without any active sessions.**
-
-Of course new operations can be appended online but Graph Editor specific
-operations like rerouting and transforming can currently only be done offline.
-
-Here is an example of what you **cannot** do:
-
-* Build a graph.
-* Create a session and run the graph.
-* Modify the graph with the Graph Editor.
-* Re-run the graph with the `same` previously created session.
-
-To edit an already running graph, follow these steps:
-
-* Build a graph.
-* Create a session and run the graph.
-* Save the graph state and terminate the session
-* Modify the graph with the Graph Editor.
-* create a new session and restore the graph state
-* Re-run the graph with the newly created session.
-
-Note that this procedure is very costly because a new session must be created
-after any modifications. Among other things, it takes time because the entire
-graph state must be saved and restored again.
-
-## Sub-graph
-
-Most of the functions in the Graph Editor library operate on *sub-graph*.
-More precisely, they take as input arguments instances of the SubGraphView class
-(or anything which can be converted to it). Doing so allows the same function
-to transparently operate on single operations as well as sub-graph of any size.
-
-A subgraph can be created in several ways:
-
-* using a list of ops:
-
-```python
-my_sgv = ge.sgv(ops)
-```
-
-* from a name scope:
-
-```python
-my_sgv = ge.sgv_scope("foo/bar", graph=tf.get_default_graph())
-```
-
-* using regular expression:
-
-```python
-my_sgv = ge.sgv("foo/.*/.*read$", graph=tf.get_default_graph())
-```
-
-Note that the Graph Editor is meant to manipulate several graphs at the same
-time, typically during transform or copy operation. For that reason,
-to avoid any confusion, the default graph is never used and the graph on
-which to operate must always be given explicitly. This is the reason why
-*`graph=tf.get_default_graph()`* is used in the code snippets above.
-
-## Modules overview
-
-* util: utility functions.
-* select: various selection methods of TensorFlow tensors and operations.
-* match: TensorFlow graph matching. Think of this as regular expressions for
- graphs (but not quite yet).
-* reroute: various ways of rerouting tensors to different consuming ops like
- *swap* or *reroute_a2b*.
-* subgraph: the SubGraphView class, which enables subgraph manipulations in a
- TensorFlow `tf.Graph`.
-* edit: various editing functions operating on subgraphs like *detach*,
- *connect* or *bypass*.
-* transform: the Transformer class, which enables transforming
- (or simply copying) a subgraph into another one.
+TensorFlow Graph Editor. See the @{$python/contrib.graph_editor} guide.
## Other Functions and Classes
- - -
diff --git a/tensorflow/g3doc/api_docs/python/contrib.integrate.md b/tensorflow/g3doc/api_docs/python/contrib.integrate.md
index edccf69cb4..5a05662c1f 100644
--- a/tensorflow/g3doc/api_docs/python/contrib.integrate.md
+++ b/tensorflow/g3doc/api_docs/python/contrib.integrate.md
@@ -3,42 +3,7 @@
# Integrate (contrib)
[TOC]
-Integration and ODE solvers for TensorFlow.
-
-## Example: Lorenz attractor
-
-We can use `odeint` to solve the
-[Lorentz system](https://en.wikipedia.org/wiki/Lorenz_system) of ordinary
-differential equations, a prototypical example of chaotic dynamics:
-
-```python
-rho = 28.0
-sigma = 10.0
-beta = 8.0/3.0
-
-def lorenz_equation(state, t):
- x, y, z = tf.unstack(state)
- dx = sigma * (y - x)
- dy = x * (rho - z) - y
- dz = x * y - beta * z
- return tf.stack([dx, dy, dz])
-
-init_state = tf.constant([0, 2, 20], dtype=tf.float64)
-t = np.linspace(0, 50, num=5000)
-tensor_state, tensor_info = tf.contrib.integrate.odeint(
- lorenz_equation, init_state, t, full_output=True)
-
-sess = tf.Session()
-state, info = sess.run([tensor_state, tensor_info])
-x, y, z = state.T
-plt.plot(x, z)
-```
-
-<div style="width:70%; margin:auto; margin-bottom:10px; margin-top:20px;">
-<img style="width:100%" src="../../images/lorenz_attractor.png" alt>
-</div>
-
-## Ops
+Integration and ODE solvers. See the @{$python/contrib.integrate} guide.
- - -
diff --git a/tensorflow/g3doc/api_docs/python/contrib.layers.md b/tensorflow/g3doc/api_docs/python/contrib.layers.md
index 12370048d8..910cab1cc7 100644
--- a/tensorflow/g3doc/api_docs/python/contrib.layers.md
+++ b/tensorflow/g3doc/api_docs/python/contrib.layers.md
@@ -5,11 +5,7 @@
Ops for building neural network layers, regularizers, summaries, etc.
-## Higher level ops for building neural network layers.
-
-This package provides several ops that take care of creating variables that are
-used internally in a consistent way and provide the building blocks for many
-common machine learning algorithms.
+See the @{$python/contrib.layers} guide.
- - -
@@ -1015,18 +1011,6 @@ Typical use case would be reusing embeddings between an encoder and decoder.
-Aliases for fully_connected which set a default activation function are
-available: `relu`, `relu6` and `linear`.
-
-`stack` operation is also available. It builds a stack of layers by applying
-a layer repeatedly.
-
-## Regularizers
-
-Regularization can help prevent overfitting. These have the signature
-`fn(weights)`. The loss is typically added to
-`tf.GraphKeys.REGULARIZATION_LOSSES`.
-
- - -
### `tf.contrib.layers.apply_regularization(regularizer, weights_list=None)` {#apply_regularization}
@@ -1125,11 +1109,6 @@ Returns a function that applies the sum of multiple regularizers.
-## Initializers
-
-Initializers are used to initialize variables with sensible values given their
-size, data type, and purpose.
-
- - -
### `tf.contrib.layers.xavier_initializer(uniform=True, seed=None, dtype=tf.float32)` {#xavier_initializer}
@@ -1252,10 +1231,6 @@ by reaching the final layer. This initializer use the following formula:
-## Optimization
-
-Optimize weights given a loss.
-
- - -
### `tf.contrib.layers.optimize_loss(loss, global_step, learning_rate, optimizer, gradient_noise_scale=None, gradient_multipliers=None, clip_gradients=None, learning_rate_decay_fn=None, update_ops=None, variables=None, name=None, summaries=None, colocate_gradients_with_ops=False)` {#optimize_loss}
@@ -1343,10 +1318,6 @@ Various ways of passing optimizers, include:
-## Summaries
-
-Helper functions to summarize specific variables or ops.
-
- - -
### `tf.contrib.layers.summarize_activation(op)` {#summarize_activation}
@@ -1402,10 +1373,6 @@ Summarize a graph collection of tensors, possibly filtered by name.
-The layers module defines convenience functions `summarize_variables`,
-`summarize_weights` and `summarize_biases`, which set the `collection` argument
-of `summarize_collection` to `VARIABLES`, `WEIGHTS` and `BIASES`, respectively.
-
- - -
### `tf.contrib.layers.summarize_activations(name_filter=None, summarizer=summarize_activation)` {#summarize_activations}
@@ -1414,10 +1381,6 @@ Summarize activations, using `summarize_activation` to summarize.
-## Feature columns
-
-Feature columns provide a mechanism to map data to a model.
-
- - -
### `tf.contrib.layers.bucketized_column(source_column, boundaries)` {#bucketized_column}
diff --git a/tensorflow/g3doc/api_docs/python/contrib.learn.md b/tensorflow/g3doc/api_docs/python/contrib.learn.md
index 3af0320800..12e5bd6da0 100644
--- a/tensorflow/g3doc/api_docs/python/contrib.learn.md
+++ b/tensorflow/g3doc/api_docs/python/contrib.learn.md
@@ -3,11 +3,7 @@
# Learn (contrib)
[TOC]
-High level API for learning with TensorFlow.
-
-## Estimators
-
-Train and evaluate TensorFlow models.
+High level API for learning. See the @{$python/contrib.learn} guide.
- - -
@@ -4334,7 +4330,6 @@ Example:
-## Distributed training utilities
- - -
### `class tf.contrib.learn.Experiment` {#Experiment}
@@ -4670,10 +4665,6 @@ Alias for field number 0
-## Graph actions
-
-Perform various training, evaluation, and inference actions on a graph.
-
- - -
### `class tf.train.NanLossDuringTrainingError` {#NanLossDuringTrainingError}
@@ -5065,10 +5056,6 @@ program is terminated with exit code 1.
-## Input processing
-
-Queue and read batched input data.
-
- - -
### `tf.contrib.learn.extract_dask_data(data)` {#extract_dask_data}
@@ -5359,8 +5346,6 @@ See more detailed description in `read_examples`.
-Export utilities
-
- - -
### `class tf.contrib.learn.InputFnOps` {#InputFnOps}
diff --git a/tensorflow/g3doc/api_docs/python/contrib.learn.monitors.md b/tensorflow/g3doc/api_docs/python/contrib.learn.monitors.md
index dae7162a0d..58b2758c36 100644
--- a/tensorflow/g3doc/api_docs/python/contrib.learn.monitors.md
+++ b/tensorflow/g3doc/api_docs/python/contrib.learn.monitors.md
@@ -3,62 +3,9 @@
# Monitors (contrib)
[TOC]
-Monitors allow user instrumentation of the training process.
+Monitors instrument the training process.
-Monitors are useful to track training, report progress, request early
-stopping and more. Monitors use the observer pattern and notify at the following
-points:
-
-* when training begins
-* before a training step
-* after a training step
-* when training ends
-
-Monitors are not intended to be reusable.
-
-There are a few pre-defined monitors:
-
-* `CaptureVariable`: saves a variable's values
-* `GraphDump`: intended for debug only - saves all tensor values
-* `PrintTensor`: outputs one or more tensor values to log
-* `SummarySaver`: saves summaries to a summary writer
-* `ValidationMonitor`: runs model validation, by periodically calculating eval
- metrics on a separate data set; supports optional early stopping
-
-For more specific needs, you can create custom monitors by extending one of the
-following classes:
-
-* `BaseMonitor`: the base class for all monitors
-* `EveryN`: triggers a callback every N training steps
-
-Example:
-
-```python
- class ExampleMonitor(monitors.BaseMonitor):
- def __init__(self):
- print 'Init'
-
- def begin(self, max_steps):
- print 'Starting run. Will train until step %d.' % max_steps
-
- def end(self):
- print 'Completed run.'
-
- def step_begin(self, step):
- print 'About to run step %d...' % step
- return ['loss_1:0']
-
- def step_end(self, step, outputs):
- print 'Done running step %d. The value of "loss" tensor: %s' % (
- step, outputs['loss_1:0'])
-
- linear_regressor = LinearRegressor()
- example_monitor = ExampleMonitor()
- linear_regressor.fit(
- x, y, steps=2, batch_size=1, monitors=[example_monitor])
-```
-
-## Ops
+See the @{$python/contrib.learn.monitors} guide.
- - -
diff --git a/tensorflow/g3doc/api_docs/python/contrib.training.md b/tensorflow/g3doc/api_docs/python/contrib.training.md
index 1804d5339b..88ab5e6e23 100644
--- a/tensorflow/g3doc/api_docs/python/contrib.training.md
+++ b/tensorflow/g3doc/api_docs/python/contrib.training.md
@@ -3,15 +3,7 @@
# Training (contrib)
[TOC]
-Training and input utilities.
-
-## Splitting sequence inputs into minibatches with state saving
-
-Use [`SequenceQueueingStateSaver`](#SequenceQueueingStateSaver) or
-its wrapper [`batch_sequences_with_states`](#batch_sequences_with_states) if
-you have input data with a dynamic primary time / frame count axis which
-you'd like to convert into fixed size segments during minibatching, and would
-like to store state in the forward direction across segments of an example.
+Training and input utilities. See the @{$python/contrib.training} guide.
- - -
@@ -728,25 +720,6 @@ It should be run in a separate thread via e.g. a `QueueRunner`.
-
-
-## Online data resampling
-
-To resample data with replacement on a per-example basis, use
-['rejection_sample'](#rejection_sample) or
-['resample_at_rate'](#resample_at_rate). For `rejection_sample`, provide
-a boolean Tensor describing whether to accept or reject. Resulting batch sizes
-are always the same. For `resample_at_rate`, provide the desired rate for each
-example. Resulting batch sizes may vary. If you wish to specify relative
-rates, rather than absolute ones, use ['weighted_resample'](#weighted_resample)
-(which also returns the actual resampling rate used for each output example).
-
-Use ['stratified_sample'](#stratified_sample) to resample without replacement
-from the data to achieve a desired mix of class proportions that the Tensorflow
-graph sees. For instance, if you have a binary classification dataset that is
-99.9% class 1, a common approach is to resample from the data so that the data
-is more balanced.
-
- - -
### `tf.contrib.training.rejection_sample(tensors, accept_prob_fn, batch_size, queue_threads=1, enqueue_many=False, prebatch_capacity=16, prebatch_threads=1, runtime_checks=False, name=None)` {#rejection_sample}
@@ -935,15 +908,6 @@ rate of selection across all inputs (and many invocations!) is
A tensor containing the effective resampling rate used for each output.
-
-## Bucketing
-
-Use ['bucket'](#bucket) or
-['bucket_by_sequence_length'](#bucket_by_sequence_length) to stratify
-minibatches into groups ("buckets"). Use `bucket_by_sequence_length`
-with the argument `dynamic_pad=True` to receive minibatches of similarly
-sized sequences for efficient training via `dynamic_rnn`.
-
- - -
### `tf.contrib.training.bucket(tensors, which_bucket, batch_size, num_buckets, num_threads=1, capacity=32, shapes=None, dynamic_pad=False, allow_smaller_final_batch=False, keep_input=True, shared_name=None, name=None)` {#bucket}
diff --git a/tensorflow/g3doc/api_docs/python/contrib.util.md b/tensorflow/g3doc/api_docs/python/contrib.util.md
index 103b2088d7..a5a22eb27d 100644
--- a/tensorflow/g3doc/api_docs/python/contrib.util.md
+++ b/tensorflow/g3doc/api_docs/python/contrib.util.md
@@ -3,9 +3,7 @@
# Utilities (contrib)
[TOC]
-Utilities for dealing with Tensors.
-
-## Miscellaneous Utility Functions
+Utilities for dealing with Tensors. See the @{$python/contrib.util} guide.
- - -
diff --git a/tensorflow/g3doc/api_docs/python/framework.md b/tensorflow/g3doc/api_docs/python/framework.md
index 3256c7014b..c3362bd254 100644
--- a/tensorflow/g3doc/api_docs/python/framework.md
+++ b/tensorflow/g3doc/api_docs/python/framework.md
@@ -918,127 +918,6 @@ After the graph has been launched in a session, an `Operation` can
be executed by passing it to
[`Session.run()`](../../api_docs/python/client.md#Session.run).
`op.run()` is a shortcut for calling `tf.get_default_session().run(op)`.
-
-- - -
-
-#### `tf.Operation.name` {#Operation.name}
-
-The full name of this operation.
-
-
-- - -
-
-#### `tf.Operation.type` {#Operation.type}
-
-The type of the op (e.g. `"MatMul"`).
-
-
-- - -
-
-#### `tf.Operation.inputs` {#Operation.inputs}
-
-The list of `Tensor` objects representing the data inputs of this op.
-
-
-- - -
-
-#### `tf.Operation.control_inputs` {#Operation.control_inputs}
-
-The `Operation` objects on which this op has a control dependency.
-
-Before this op is executed, TensorFlow will ensure that the
-operations in `self.control_inputs` have finished executing. This
-mechanism can be used to run ops sequentially for performance
-reasons, or to ensure that the side effects of an op are observed
-in the correct order.
-
-##### Returns:
-
- A list of `Operation` objects.
-
-
-- - -
-
-#### `tf.Operation.outputs` {#Operation.outputs}
-
-The list of `Tensor` objects representing the outputs of this op.
-
-
-- - -
-
-#### `tf.Operation.device` {#Operation.device}
-
-The name of the device to which this op has been assigned, if any.
-
-##### Returns:
-
- The string name of the device to which this op has been
- assigned, or an empty string if it has not been assigned to a
- device.
-
-
-- - -
-
-#### `tf.Operation.graph` {#Operation.graph}
-
-The `Graph` that contains this operation.
-
-
-
-- - -
-
-#### `tf.Operation.run(feed_dict=None, session=None)` {#Operation.run}
-
-Runs this operation in a `Session`.
-
-Calling this method will execute all preceding operations that
-produce the inputs needed for this operation.
-
-*N.B.* Before invoking `Operation.run()`, its graph must have been
-launched in a session, and either a default session must be
-available, or `session` must be specified explicitly.
-
-##### Args:
-
-
-* <b>`feed_dict`</b>: A dictionary that maps `Tensor` objects to feed values.
- See [`Session.run()`](../../api_docs/python/client.md#Session.run)
- for a description of the valid feed values.
-* <b>`session`</b>: (Optional.) The `Session` to be used to run to this operation. If
- none, the default session will be used.
-
-
-
-- - -
-
-#### `tf.Operation.get_attr(name)` {#Operation.get_attr}
-
-Returns the value of the attr of this op with the given `name`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name of the attr to fetch.
-
-##### Returns:
-
- The value of the attr, as a Python object.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If this op does not have an attr with the given `name`.
-
-
-- - -
-
-#### `tf.Operation.traceback` {#Operation.traceback}
-
-Returns the call stack from when this operation was constructed.
-
-
-
-#### Other Methods
- - -
#### `tf.Operation.__init__(node_def, g, inputs=None, output_types=None, control_inputs=None, input_types=None, original_op=None, op_def=None)` {#Operation.__init__}
@@ -1109,137 +988,119 @@ Returns the list of colocation groups of the op.
- - -
-#### `tf.Operation.node_def` {#Operation.node_def}
+#### `tf.Operation.control_inputs` {#Operation.control_inputs}
-Returns a serialized `NodeDef` representation of this operation.
+The `Operation` objects on which this op has a control dependency.
+
+Before this op is executed, TensorFlow will ensure that the
+operations in `self.control_inputs` have finished executing. This
+mechanism can be used to run ops sequentially for performance
+reasons, or to ensure that the side effects of an op are observed
+in the correct order.
##### Returns:
- A
- [`NodeDef`](https://www.tensorflow.org/code/tensorflow/core/framework/node_def.proto)
- protocol buffer.
+ A list of `Operation` objects.
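+
+A minimal sketch of control dependencies populating `control_inputs` (the op
+names are illustrative):
+
+```python
+a = tf.constant(1.0, name='a')
+with tf.control_dependencies([a.op]):
+    b = tf.identity(tf.constant(2.0), name='b')
+print(b.op.control_inputs)  # includes the 'a' op
+```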
- - -
-#### `tf.Operation.op_def` {#Operation.op_def}
+#### `tf.Operation.device` {#Operation.device}
-Returns the `OpDef` proto that represents the type of this op.
+The name of the device to which this op has been assigned, if any.
##### Returns:
- An
- [`OpDef`](https://www.tensorflow.org/code/tensorflow/core/framework/op_def.proto)
- protocol buffer.
+ The string name of the device to which this op has been
+ assigned, or an empty string if it has not been assigned to a
+ device.
- - -
-#### `tf.Operation.values()` {#Operation.values}
+#### `tf.Operation.get_attr(name)` {#Operation.get_attr}
-DEPRECATED: Use outputs.
+Returns the value of the attr of this op with the given `name`.
+##### Args:
-- - -
+* <b>`name`</b>: The name of the attr to fetch.
-### `class tf.Tensor` {#Tensor}
+##### Returns:
-Represents one of the outputs of an `Operation`.
+ The value of the attr, as a Python object.
-A `Tensor` is a symbolic handle to one of the outputs of an
-`Operation`. It does not hold the values of that operation's output,
-but instead provides a means of computing those values in a
-TensorFlow [`Session`](../../api_docs/python/client.md#Session).
+##### Raises:
-This class has two primary purposes:
-1. A `Tensor` can be passed as an input to another `Operation`.
- This builds a dataflow connection between operations, which
- enables TensorFlow to execute an entire `Graph` that represents a
- large, multi-step computation.
+* <b>`ValueError`</b>: If this op does not have an attr with the given `name`.
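+
+For instance (a small sketch; the `Const` op used here carries a `dtype` attr):
+
+```python
+c = tf.constant(1, dtype=tf.int32)
+print(c.op.get_attr('dtype'))  # <dtype: 'int32'>
+```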
-2. After the graph has been launched in a session, the value of the
- `Tensor` can be computed by passing it to
- [`Session.run()`](../../api_docs/python/client.md#Session.run).
- `t.eval()` is a shortcut for calling
- `tf.get_default_session().run(t)`.
-In the following example, `c`, `d`, and `e` are symbolic `Tensor`
-objects, whereas `result` is a numpy array that stores a concrete
-value:
+- - -
-```python
-# Build a dataflow graph.
-c = tf.constant([[1.0, 2.0], [3.0, 4.0]])
-d = tf.constant([[1.0, 1.0], [0.0, 1.0]])
-e = tf.matmul(c, d)
+#### `tf.Operation.graph` {#Operation.graph}
-# Construct a `Session` to execute the graph.
-sess = tf.Session()
+The `Graph` that contains this operation.
-# Execute the graph and store the value that `e` represents in `result`.
-result = sess.run(e)
-```
- - -
-#### `tf.Tensor.dtype` {#Tensor.dtype}
+#### `tf.Operation.inputs` {#Operation.inputs}
-The `DType` of elements in this tensor.
+The list of `Tensor` objects representing the data inputs of this op.
- - -
-#### `tf.Tensor.name` {#Tensor.name}
+#### `tf.Operation.name` {#Operation.name}
-The string name of this tensor.
+The full name of this operation.
- - -
-#### `tf.Tensor.value_index` {#Tensor.value_index}
-
-The index of this tensor in the outputs of its `Operation`.
-
+#### `tf.Operation.node_def` {#Operation.node_def}
-- - -
+Returns a serialized `NodeDef` representation of this operation.
-#### `tf.Tensor.graph` {#Tensor.graph}
+##### Returns:
-The `Graph` that contains this tensor.
+ A
+ [`NodeDef`](https://www.tensorflow.org/code/tensorflow/core/framework/node_def.proto)
+ protocol buffer.
- - -
-#### `tf.Tensor.op` {#Tensor.op}
-
-The `Operation` that produces this tensor as an output.
+#### `tf.Operation.op_def` {#Operation.op_def}
+Returns the `OpDef` proto that represents the type of this op.
-- - -
+##### Returns:
-#### `tf.Tensor.consumers()` {#Tensor.consumers}
+ An
+ [`OpDef`](https://www.tensorflow.org/code/tensorflow/core/framework/op_def.proto)
+ protocol buffer.
-Returns a list of `Operation`s that consume this tensor.
-##### Returns:
+- - -
- A list of `Operation`s.
+#### `tf.Operation.outputs` {#Operation.outputs}
+The list of `Tensor` objects representing the outputs of this op.
- - -
-#### `tf.Tensor.eval(feed_dict=None, session=None)` {#Tensor.eval}
+#### `tf.Operation.run(feed_dict=None, session=None)` {#Operation.run}
-Evaluates this tensor in a `Session`.
+Runs this operation in a `Session`.
Calling this method will execute all preceding operations that
-produce the inputs needed for the operation that produces this
-tensor.
+produce the inputs needed for this operation.
-*N.B.* Before invoking `Tensor.eval()`, its graph must have been
+*N.B.* Before invoking `Operation.run()`, its graph must have been
launched in a session, and either a default session must be
available, or `session` must be specified explicitly.
@@ -1247,112 +1108,74 @@ available, or `session` must be specified explicitly.
* <b>`feed_dict`</b>: A dictionary that maps `Tensor` objects to feed values.
- See [`Session.run()`](../../api_docs/python/client.md#Session.run) for a
- description of the valid feed values.
-* <b>`session`</b>: (Optional.) The `Session` to be used to evaluate this tensor. If
+ See [`Session.run()`](../../api_docs/python/client.md#Session.run)
+ for a description of the valid feed values.
+* <b>`session`</b>: (Optional.) The `Session` to be used to run this operation. If
none, the default session will be used.
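+
+A minimal sketch (the variable and initializer names are illustrative) of
+running an op with a side effect through the default session:
+
+```python
+import tensorflow as tf
+
+v = tf.Variable(0, name="v")
+init_op = tf.global_variables_initializer()
+
+with tf.Session():
+    init_op.run()      # same as tf.get_default_session().run(init_op)
+    print(v.eval())    # ==> 0
+```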
-##### Returns:
-
- A numpy array corresponding to the value of this tensor.
-
-
- - -
-#### `tf.Tensor.get_shape()` {#Tensor.get_shape}
+#### `tf.Operation.traceback` {#Operation.traceback}
-Alias of Tensor.shape.
+Returns the call stack from when this operation was constructed.
- - -
-#### `tf.Tensor.shape` {#Tensor.shape}
-
-Returns the `TensorShape` that represents the shape of this tensor.
-
-The shape is computed using shape inference functions that are
-registered in the Op for each `Operation`. See
-[`TensorShape`](../../api_docs/python/framework.md#TensorShape)
-for more details of what a shape represents.
-
-The inferred shape of a tensor is used to provide shape
-information without having to launch the graph in a session. This
-can be used for debugging, and providing early error messages. For
-example:
+#### `tf.Operation.type` {#Operation.type}
-```python
-c = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
+The type of the op (e.g. `"MatMul"`).
-print(c.shape)
-==> TensorShape([Dimension(2), Dimension(3)])
-d = tf.constant([[1.0, 0.0], [0.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
+- - -
-print(d.shape)
-==> TensorShape([Dimension(4), Dimension(2)])
+#### `tf.Operation.values()` {#Operation.values}
-# Raises a ValueError, because `c` and `d` do not have compatible
-# inner dimensions.
-e = tf.matmul(c, d)
+DEPRECATED: Use outputs.
-f = tf.matmul(c, d, transpose_a=True, transpose_b=True)
-print(f.shape)
-==> TensorShape([Dimension(3), Dimension(4)])
-```
-In some cases, the inferred shape may have unknown dimensions. If
-the caller has additional information about the values of these
-dimensions, `Tensor.set_shape()` can be used to augment the
-inferred shape.
+- - -
-##### Returns:
+### `class tf.Tensor` {#Tensor}
- A `TensorShape` representing the shape of this tensor.
+Represents one of the outputs of an `Operation`.
+A `Tensor` is a symbolic handle to one of the outputs of an
+`Operation`. It does not hold the values of that operation's output,
+but instead provides a means of computing those values in a
+TensorFlow [`Session`](../../api_docs/python/client.md#Session).
-- - -
+This class has two primary purposes:
-#### `tf.Tensor.set_shape(shape)` {#Tensor.set_shape}
+1. A `Tensor` can be passed as an input to another `Operation`.
+ This builds a dataflow connection between operations, which
+ enables TensorFlow to execute an entire `Graph` that represents a
+ large, multi-step computation.
-Updates the shape of this tensor.
+2. After the graph has been launched in a session, the value of the
+ `Tensor` can be computed by passing it to
+ [`Session.run()`](../../api_docs/python/client.md#Session.run).
+ `t.eval()` is a shortcut for calling
+ `tf.get_default_session().run(t)`.
-This method can be called multiple times, and will merge the given
-`shape` with the current shape of this tensor. It can be used to
-provide additional information about the shape of this tensor that
-cannot be inferred from the graph alone. For example, this can be used
-to provide additional information about the shapes of images:
+In the following example, `c`, `d`, and `e` are symbolic `Tensor`
+objects, whereas `result` is a numpy array that stores a concrete
+value:
```python
-_, image_data = tf.TFRecordReader(...).read(...)
-image = tf.image.decode_png(image_data, channels=3)
+# Build a dataflow graph.
+c = tf.constant([[1.0, 2.0], [3.0, 4.0]])
+d = tf.constant([[1.0, 1.0], [0.0, 1.0]])
+e = tf.matmul(c, d)
-# The height and width dimensions of `image` are data dependent, and
-# cannot be computed without executing the op.
-print(image.shape)
-==> TensorShape([Dimension(None), Dimension(None), Dimension(3)])
+# Construct a `Session` to execute the graph.
+sess = tf.Session()
-# We know that each image in this dataset is 28 x 28 pixels.
-image.set_shape([28, 28, 3])
-print(image.shape)
-==> TensorShape([Dimension(28), Dimension(28), Dimension(3)])
+# Execute the graph and store the value that `e` represents in `result`.
+result = sess.run(e)
```
-
-##### Args:
-
-
-* <b>`shape`</b>: A `TensorShape` representing the shape of this tensor.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `shape` is not compatible with the current shape of
- this tensor.
-
-
-
-#### Other Methods
- - -
#### `tf.Tensor.__abs__(x, name=None)` {#Tensor.__abs__}
@@ -2085,180 +1908,216 @@ x ^ y = (x | y) & ~(x & y).
- - -
-#### `tf.Tensor.device` {#Tensor.device}
-
-The name of the device on which this tensor will be produced, or None.
+#### `tf.Tensor.consumers()` {#Tensor.consumers}
+Returns a list of `Operation`s that consume this tensor.
+##### Returns:
+ A list of `Operation`s.
-## Tensor types
- - -
-### `class tf.DType` {#DType}
+#### `tf.Tensor.device` {#Tensor.device}
-Represents the type of the elements in a `Tensor`.
+The name of the device on which this tensor will be produced, or None.
-The following `DType` objects are defined:
-* `tf.float16`: 16-bit half-precision floating-point.
-* `tf.float32`: 32-bit single-precision floating-point.
-* `tf.float64`: 64-bit double-precision floating-point.
-* `tf.bfloat16`: 16-bit truncated floating-point.
-* `tf.complex64`: 64-bit single-precision complex.
-* `tf.complex128`: 128-bit double-precision complex.
-* `tf.int8`: 8-bit signed integer.
-* `tf.uint8`: 8-bit unsigned integer.
-* `tf.uint16`: 16-bit unsigned integer.
-* `tf.int16`: 16-bit signed integer.
-* `tf.int32`: 32-bit signed integer.
-* `tf.int64`: 64-bit signed integer.
-* `tf.bool`: Boolean.
-* `tf.string`: String.
-* `tf.qint8`: Quantized 8-bit signed integer.
-* `tf.quint8`: Quantized 8-bit unsigned integer.
-* `tf.qint16`: Quantized 16-bit signed integer.
-* `tf.quint16`: Quantized 16-bit unsigned integer.
-* `tf.qint32`: Quantized 32-bit signed integer.
-* `tf.resource`: Handle to a mutable resource.
+- - -
-In addition, variants of these types with the `_ref` suffix are
-defined for reference-typed tensors.
+#### `tf.Tensor.dtype` {#Tensor.dtype}
+
+The `DType` of elements in this tensor.
-The `tf.as_dtype()` function converts numpy types and string type
-names to a `DType` object.
- - -
-#### `tf.DType.is_compatible_with(other)` {#DType.is_compatible_with}
+#### `tf.Tensor.eval(feed_dict=None, session=None)` {#Tensor.eval}
-Returns True if the `other` DType will be converted to this DType.
+Evaluates this tensor in a `Session`.
-The conversion rules are as follows:
+Calling this method will execute all preceding operations that
+produce the inputs needed for the operation that produces this
+tensor.
-```python
-DType(T) .is_compatible_with(DType(T)) == True
-DType(T) .is_compatible_with(DType(T).as_ref) == True
-DType(T).as_ref.is_compatible_with(DType(T)) == False
-DType(T).as_ref.is_compatible_with(DType(T).as_ref) == True
-```
+*N.B.* Before invoking `Tensor.eval()`, its graph must have been
+launched in a session, and either a default session must be
+available, or `session` must be specified explicitly.
##### Args:
-* <b>`other`</b>: A `DType` (or object that may be converted to a `DType`).
+* <b>`feed_dict`</b>: A dictionary that maps `Tensor` objects to feed values.
+ See [`Session.run()`](../../api_docs/python/client.md#Session.run) for a
+ description of the valid feed values.
+* <b>`session`</b>: (Optional.) The `Session` to be used to evaluate this tensor. If
+ none, the default session will be used.
##### Returns:
- True if a Tensor of the `other` `DType` will be implicitly converted to
- this `DType`.
+ A numpy array corresponding to the value of this tensor.
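+
+For instance, a small sketch (the placeholder name `x` is illustrative) of
+evaluating a tensor with a feed:
+
+```python
+import tensorflow as tf
+
+x = tf.placeholder(tf.float32)
+y = x * 2.0
+
+with tf.Session():                      # installs a default session
+    print(y.eval(feed_dict={x: 3.0}))   # ==> 6.0
+```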
- - -
-#### `tf.DType.name` {#DType.name}
+#### `tf.Tensor.get_shape()` {#Tensor.get_shape}
-Returns the string name for this `DType`.
+Alias of Tensor.shape.
- - -
-#### `tf.DType.base_dtype` {#DType.base_dtype}
+#### `tf.Tensor.graph` {#Tensor.graph}
-Returns a non-reference `DType` based on this `DType`.
+The `Graph` that contains this tensor.
- - -
-#### `tf.DType.real_dtype` {#DType.real_dtype}
+#### `tf.Tensor.name` {#Tensor.name}
-Returns the dtype correspond to this dtype's real part.
+The string name of this tensor.
- - -
-#### `tf.DType.is_bool` {#DType.is_bool}
+#### `tf.Tensor.op` {#Tensor.op}
-Returns whether this is a boolean data type
+The `Operation` that produces this tensor as an output.
- - -
-#### `tf.DType.is_floating` {#DType.is_floating}
+#### `tf.Tensor.set_shape(shape)` {#Tensor.set_shape}
-Returns whether this is a (non-quantized, real) floating point type.
+Updates the shape of this tensor.
+This method can be called multiple times, and will merge the given
+`shape` with the current shape of this tensor. It can be used to
+provide additional information about the shape of this tensor that
+cannot be inferred from the graph alone. For example, this can be used
+to provide additional information about the shapes of images:
-- - -
+```python
+_, image_data = tf.TFRecordReader(...).read(...)
+image = tf.image.decode_png(image_data, channels=3)
-#### `tf.DType.is_complex` {#DType.is_complex}
+# The height and width dimensions of `image` are data dependent, and
+# cannot be computed without executing the op.
+print(image.shape)
+==> TensorShape([Dimension(None), Dimension(None), Dimension(3)])
-Returns whether this is a complex floating point type.
+# We know that each image in this dataset is 28 x 28 pixels.
+image.set_shape([28, 28, 3])
+print(image.shape)
+==> TensorShape([Dimension(28), Dimension(28), Dimension(3)])
+```
+##### Args:
-- - -
-#### `tf.DType.is_integer` {#DType.is_integer}
+* <b>`shape`</b>: A `TensorShape` representing the shape of this tensor.
-Returns whether this is a (non-quantized) integer type.
+##### Raises:
+
+
+* <b>`ValueError`</b>: If `shape` is not compatible with the current shape of
+ this tensor.
- - -
-#### `tf.DType.is_quantized` {#DType.is_quantized}
+#### `tf.Tensor.shape` {#Tensor.shape}
-Returns whether this is a quantized data type.
+Returns the `TensorShape` that represents the shape of this tensor.
+The shape is computed using shape inference functions that are
+registered in the Op for each `Operation`. See
+[`TensorShape`](../../api_docs/python/framework.md#TensorShape)
+for more details of what a shape represents.
-- - -
+The inferred shape of a tensor is used to provide shape
+information without having to launch the graph in a session. This
+can be used for debugging, and providing early error messages. For
+example:
-#### `tf.DType.is_unsigned` {#DType.is_unsigned}
+```python
+c = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
-Returns whether this type is unsigned.
+print(c.shape)
+==> TensorShape([Dimension(2), Dimension(3)])
-Non-numeric, unordered, and quantized types are not considered unsigned, and
-this function returns `False`.
+d = tf.constant([[1.0, 0.0], [0.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
-##### Returns:
+print(d.shape)
+==> TensorShape([Dimension(4), Dimension(2)])
- Whether a `DType` is unsigned.
+# Raises a ValueError, because `c` and `d` do not have compatible
+# inner dimensions.
+e = tf.matmul(c, d)
+f = tf.matmul(c, d, transpose_a=True, transpose_b=True)
+print(f.shape)
+==> TensorShape([Dimension(3), Dimension(4)])
+```
-- - -
+In some cases, the inferred shape may have unknown dimensions. If
+the caller has additional information about the values of these
+dimensions, `Tensor.set_shape()` can be used to augment the
+inferred shape.
-#### `tf.DType.as_numpy_dtype` {#DType.as_numpy_dtype}
+##### Returns:
-Returns a `numpy.dtype` based on this `DType`.
+ A `TensorShape` representing the shape of this tensor.
- - -
-#### `tf.DType.as_datatype_enum` {#DType.as_datatype_enum}
+#### `tf.Tensor.value_index` {#Tensor.value_index}
-Returns a `types_pb2.DataType` enum value based on this `DType`.
+The index of this tensor in the outputs of its `Operation`.
-- - -
-#### `tf.DType.limits` {#DType.limits}
+## Tensor types
-Return intensity limits, i.e. (min, max) tuple, of the dtype.
+- - -
-##### Args:
+### `class tf.DType` {#DType}
- clip_negative : bool, optional
- If True, clip the negative range (i.e. return 0 for min intensity)
- even if the image dtype allows negative values.
-Returns
- min, max : tuple
- Lower and upper intensity limits.
+Represents the type of the elements in a `Tensor`.
+The following `DType` objects are defined:
+* `tf.float16`: 16-bit half-precision floating-point.
+* `tf.float32`: 32-bit single-precision floating-point.
+* `tf.float64`: 64-bit double-precision floating-point.
+* `tf.bfloat16`: 16-bit truncated floating-point.
+* `tf.complex64`: 64-bit single-precision complex.
+* `tf.complex128`: 128-bit double-precision complex.
+* `tf.int8`: 8-bit signed integer.
+* `tf.uint8`: 8-bit unsigned integer.
+* `tf.uint16`: 16-bit unsigned integer.
+* `tf.int16`: 16-bit signed integer.
+* `tf.int32`: 32-bit signed integer.
+* `tf.int64`: 64-bit signed integer.
+* `tf.bool`: Boolean.
+* `tf.string`: String.
+* `tf.qint8`: Quantized 8-bit signed integer.
+* `tf.quint8`: Quantized 8-bit unsigned integer.
+* `tf.qint16`: Quantized 16-bit signed integer.
+* `tf.quint16`: Quantized 16-bit unsigned integer.
+* `tf.qint32`: Quantized 32-bit signed integer.
+* `tf.resource`: Handle to a mutable resource.
-#### Other Methods
+In addition, variants of these types with the `_ref` suffix are
+defined for reference-typed tensors.
+
+The `tf.as_dtype()` function converts numpy types and string type
+names to a `DType` object.
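+
+For example (a hedged sketch; the printed forms may vary slightly by version):
+
+```python
+import numpy as np
+import tensorflow as tf
+
+print(tf.as_dtype(np.float32))               # ==> <dtype: 'float32'>
+print(tf.as_dtype("int64"))                  # ==> <dtype: 'int64'>
+print(tf.as_dtype("float32") == tf.float32)  # ==> True
+```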
- - -
#### `tf.DType.__eq__(other)` {#DType.__eq__}
@@ -2317,6 +2176,81 @@ Returns True iff self != other.
- - -
+#### `tf.DType.as_datatype_enum` {#DType.as_datatype_enum}
+
+Returns a `types_pb2.DataType` enum value based on this `DType`.
+
+
+- - -
+
+#### `tf.DType.as_numpy_dtype` {#DType.as_numpy_dtype}
+
+Returns a `numpy.dtype` based on this `DType`.
+
+
+- - -
+
+#### `tf.DType.base_dtype` {#DType.base_dtype}
+
+Returns a non-reference `DType` based on this `DType`.
+
+
+- - -
+
+#### `tf.DType.is_bool` {#DType.is_bool}
+
+Returns whether this is a boolean data type.
+
+
+- - -
+
+#### `tf.DType.is_compatible_with(other)` {#DType.is_compatible_with}
+
+Returns True if the `other` DType will be converted to this DType.
+
+The conversion rules are as follows:
+
+```python
+DType(T).is_compatible_with(DType(T)) == True
+DType(T).is_compatible_with(DType(T).as_ref) == True
+DType(T).as_ref.is_compatible_with(DType(T)) == False
+DType(T).as_ref.is_compatible_with(DType(T).as_ref) == True
+```
+
+##### Args:
+
+
+* <b>`other`</b>: A `DType` (or object that may be converted to a `DType`).
+
+##### Returns:
+
+ True if a Tensor of the `other` `DType` will be implicitly converted to
+ this `DType`.
+
+
+- - -
+
+#### `tf.DType.is_complex` {#DType.is_complex}
+
+Returns whether this is a complex floating point type.
+
+
+- - -
+
+#### `tf.DType.is_floating` {#DType.is_floating}
+
+Returns whether this is a (non-quantized, real) floating point type.
+
+
+- - -
+
+#### `tf.DType.is_integer` {#DType.is_integer}
+
+Returns whether this is a (non-quantized) integer type.
+
+
+- - -
+
#### `tf.DType.is_numpy_compatible` {#DType.is_numpy_compatible}
@@ -2324,6 +2258,43 @@ Returns True iff self != other.
- - -
+#### `tf.DType.is_quantized` {#DType.is_quantized}
+
+Returns whether this is a quantized data type.
+
+
+- - -
+
+#### `tf.DType.is_unsigned` {#DType.is_unsigned}
+
+Returns whether this type is unsigned.
+
+Non-numeric, unordered, and quantized types are not considered unsigned, and
+this function returns `False`.
+
+##### Returns:
+
+ Whether a `DType` is unsigned.
+
+
+- - -
+
+#### `tf.DType.limits` {#DType.limits}
+
+Return intensity limits, i.e. (min, max) tuple, of the dtype.
+
+##### Args:
+
+
+* <b>`clip_negative`</b>: bool, optional. If True, clip the negative range
+  (i.e. return 0 for min intensity) even if the image dtype allows
+  negative values.
+
+##### Returns:
+
+  min, max: tuple. Lower and upper intensity limits.
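+
+A hedged sketch (`limits` is exposed as a property, so the default
+`clip_negative=True` applies; the printed value is the expected result for
+`tf.uint8`):
+
+```python
+import tensorflow as tf
+
+print(tf.uint8.limits)   # expected ==> (0, 255)
+```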
+
+
+- - -
+
#### `tf.DType.max` {#DType.max}
Returns the maximum representable value in this data type.
@@ -2348,6 +2319,20 @@ Returns the minimum representable value in this data type.
- - -
+#### `tf.DType.name` {#DType.name}
+
+Returns the string name for this `DType`.
+
+
+- - -
+
+#### `tf.DType.real_dtype` {#DType.real_dtype}
+
+Returns the dtype corresponding to this dtype's real part.
+
+
+- - -
+
#### `tf.DType.size` {#DType.size}
@@ -3001,230 +2986,175 @@ C++`](../../how_tos/adding_an_op/index.md#shape-functions-in-c) for
details of shape functions and how to register them. Alternatively,
the shape may be set explicitly using
[`Tensor.set_shape()`](../../api_docs/python/framework.md#Tensor.set_shape).
-
- - -
-#### `tf.TensorShape.merge_with(other)` {#TensorShape.merge_with}
-
-Returns a `TensorShape` combining the information in `self` and `other`.
-
-The dimensions in `self` and `other` are merged elementwise,
-according to the rules defined for `Dimension.merge_with()`.
-
-##### Args:
-
-
-* <b>`other`</b>: Another `TensorShape`.
+#### `tf.TensorShape.__bool__()` {#TensorShape.__bool__}
-##### Returns:
+Returns True if this shape contains non-zero information.
- A `TensorShape` containing the combined information of `self` and
- `other`.
-##### Raises:
+- - -
+#### `tf.TensorShape.__eq__(other)` {#TensorShape.__eq__}
-* <b>`ValueError`</b>: If `self` and `other` are not compatible.
+Returns True if `self` is equivalent to `other`.
- - -
-#### `tf.TensorShape.concatenate(other)` {#TensorShape.concatenate}
-
-Returns the concatenation of the dimension in `self` and `other`.
+#### `tf.TensorShape.__getitem__(key)` {#TensorShape.__getitem__}
-*N.B.* If either `self` or `other` is completely unknown,
-concatenation will discard information about the other shape. In
-future, we might support concatenation that preserves this
-information for use with slicing.
+Returns the value of a dimension or a shape, depending on the key.
##### Args:
-* <b>`other`</b>: Another `TensorShape`.
+* <b>`key`</b>: If `key` is an integer, returns the dimension at that index;
+ otherwise if `key` is a slice, returns a TensorShape whose
+ dimensions are those selected by the slice from `self`.
##### Returns:
- A `TensorShape` whose dimensions are the concatenation of the
- dimensions in `self` and `other`.
-
-
+ A dimension if `key` is an integer, or a `TensorShape` if `key` is a
+ slice.
-- - -
+##### Raises:
-#### `tf.TensorShape.ndims` {#TensorShape.ndims}
-Returns the rank of this shape, or None if it is unspecified.
+* <b>`ValueError`</b>: If `key` is a slice, and any of its elements are negative, or
+ if `self` is completely unknown and the step is set.
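+
+For example (a small sketch; `repr` is used so the `Dimension`/`TensorShape`
+forms shown are the expected output):
+
+```python
+import tensorflow as tf
+
+s = tf.TensorShape([2, 3, 4])
+print(repr(s[1]))    # ==> Dimension(3)
+print(repr(s[1:]))   # ==> TensorShape([Dimension(3), Dimension(4)])
+```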
- - -
-#### `tf.TensorShape.dims` {#TensorShape.dims}
-
-Returns a list of Dimensions, or None if the shape is unspecified.
-
-
-- - -
+#### `tf.TensorShape.__init__(dims)` {#TensorShape.__init__}
-#### `tf.TensorShape.as_list()` {#TensorShape.as_list}
+Creates a new TensorShape with the given dimensions.
-Returns a list of integers or `None` for each dimension.
+##### Args:
-##### Returns:
- A list of integers or `None` for each dimension.
+* <b>`dims`</b>: A list of Dimensions, or None if the shape is unspecified.
+* <b>`DEPRECATED`</b>: A single integer is treated as a singleton list.
##### Raises:
-* <b>`ValueError`</b>: If `self` is an unknown shape with an unknown rank.
+* <b>`TypeError`</b>: If dims cannot be converted to a list of dimensions.
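+
+For example (a minimal sketch; the printed forms are the expected `str` output):
+
+```python
+import tensorflow as tf
+
+print(tf.TensorShape([16, None, 3]))   # ==> (16, ?, 3)
+print(tf.TensorShape(None).ndims)      # ==> None  (completely unknown shape)
+```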
- - -
-#### `tf.TensorShape.as_proto()` {#TensorShape.as_proto}
+#### `tf.TensorShape.__iter__()` {#TensorShape.__iter__}
-Returns this shape as a `TensorShapeProto`.
+Returns `self.dims` if the rank is known, otherwise raises ValueError.
- - -
-#### `tf.TensorShape.is_compatible_with(other)` {#TensorShape.is_compatible_with}
+#### `tf.TensorShape.__len__()` {#TensorShape.__len__}
-Returns True iff `self` is compatible with `other`.
+Returns the rank of this shape, or raises ValueError if unspecified.
-Two possibly-partially-defined shapes are compatible if there
-exists a fully-defined shape that both shapes can represent. Thus,
-compatibility allows the shape inference code to reason about
-partially-defined shapes. For example:
-* TensorShape(None) is compatible with all shapes.
+- - -
-* TensorShape([None, None]) is compatible with all two-dimensional
- shapes, such as TensorShape([32, 784]), and also TensorShape(None). It is
- not compatible with, for example, TensorShape([None]) or
- TensorShape([None, None, None]).
+#### `tf.TensorShape.__ne__(other)` {#TensorShape.__ne__}
-* TensorShape([32, None]) is compatible with all two-dimensional shapes
- with size 32 in the 0th dimension, and also TensorShape([None, None])
- and TensorShape(None). It is not compatible with, for example,
- TensorShape([32]), TensorShape([32, None, 1]) or TensorShape([64, None]).
+Returns True if `self` is known to be different from `other`.
-* TensorShape([32, 784]) is compatible with itself, and also
- TensorShape([32, None]), TensorShape([None, 784]), TensorShape([None,
- None]) and TensorShape(None). It is not compatible with, for example,
- TensorShape([32, 1, 784]) or TensorShape([None]).
-The compatibility relation is reflexive and symmetric, but not
-transitive. For example, TensorShape([32, 784]) is compatible with
-TensorShape(None), and TensorShape(None) is compatible with
-TensorShape([4, 4]), but TensorShape([32, 784]) is not compatible with
-TensorShape([4, 4]).
+- - -
-##### Args:
+#### `tf.TensorShape.__nonzero__()` {#TensorShape.__nonzero__}
+Returns True if this shape contains non-zero information.
-* <b>`other`</b>: Another TensorShape.
-##### Returns:
+- - -
+
+#### `tf.TensorShape.__repr__()` {#TensorShape.__repr__}
+
- True iff `self` is compatible with `other`.
- - -
-#### `tf.TensorShape.is_fully_defined()` {#TensorShape.is_fully_defined}
+#### `tf.TensorShape.__str__()` {#TensorShape.__str__}
-Returns True iff `self` is fully defined in every dimension.
- - -
-#### `tf.TensorShape.with_rank(rank)` {#TensorShape.with_rank}
+#### `tf.TensorShape.as_list()` {#TensorShape.as_list}
-Returns a shape based on `self` with the given rank.
+Returns a list of integers or `None` for each dimension.
-This method promotes a completely unknown shape to one with a
-known rank.
+##### Returns:
-##### Args:
+ A list of integers or `None` for each dimension.
+##### Raises:
-* <b>`rank`</b>: An integer.
-##### Returns:
+* <b>`ValueError`</b>: If `self` is an unknown shape with an unknown rank.
- A shape that is at least as specific as `self` with the given rank.
-##### Raises:
+- - -
+#### `tf.TensorShape.as_proto()` {#TensorShape.as_proto}
-* <b>`ValueError`</b>: If `self` does not represent a shape with the given `rank`.
+Returns this shape as a `TensorShapeProto`.
- - -
-#### `tf.TensorShape.with_rank_at_least(rank)` {#TensorShape.with_rank_at_least}
+#### `tf.TensorShape.assert_has_rank(rank)` {#TensorShape.assert_has_rank}
-Returns a shape based on `self` with at least the given rank.
+Raises an exception if `self` is not compatible with the given `rank`.
##### Args:
* <b>`rank`</b>: An integer.
-##### Returns:
-
- A shape that is at least as specific as `self` with at least the given
- rank.
-
##### Raises:
-* <b>`ValueError`</b>: If `self` does not represent a shape with at least the given
- `rank`.
+* <b>`ValueError`</b>: If `self` does not represent a shape with the given `rank`.
- - -
-#### `tf.TensorShape.with_rank_at_most(rank)` {#TensorShape.with_rank_at_most}
-
-Returns a shape based on `self` with at most the given rank.
+#### `tf.TensorShape.assert_is_compatible_with(other)` {#TensorShape.assert_is_compatible_with}
-##### Args:
+Raises exception if `self` and `other` do not represent the same shape.
+This method can be used to assert that there exists a shape that both
+`self` and `other` represent.
-* <b>`rank`</b>: An integer.
+##### Args:
-##### Returns:
- A shape that is at least as specific as `self` with at most the given
- rank.
+* <b>`other`</b>: Another TensorShape.
##### Raises:
-* <b>`ValueError`</b>: If `self` does not represent a shape with at most the given
- `rank`.
-
+* <b>`ValueError`</b>: If `self` and `other` do not represent the same shape.
- - -
-#### `tf.TensorShape.assert_has_rank(rank)` {#TensorShape.assert_has_rank}
-
-Raises an exception if `self` is not compatible with the given `rank`.
-
-##### Args:
-
+#### `tf.TensorShape.assert_is_fully_defined()` {#TensorShape.assert_is_fully_defined}
-* <b>`rank`</b>: An integer.
+Raises an exception if `self` is not fully defined in every dimension.
##### Raises:
-* <b>`ValueError`</b>: If `self` does not represent a shape with the given `rank`.
+* <b>`ValueError`</b>: If `self` does not have a known value for every dimension.
- - -
@@ -3247,142 +3177,191 @@ Raises an exception if `self` and `other` do not have compatible ranks.
- - -
-#### `tf.TensorShape.assert_is_compatible_with(other)` {#TensorShape.assert_is_compatible_with}
+#### `tf.TensorShape.concatenate(other)` {#TensorShape.concatenate}
-Raises exception if `self` and `other` do not represent the same shape.
+Returns the concatenation of the dimension in `self` and `other`.
-This method can be used to assert that there exists a shape that both
-`self` and `other` represent.
+*N.B.* If either `self` or `other` is completely unknown,
+concatenation will discard information about the other shape. In
+future, we might support concatenation that preserves this
+information for use with slicing.
##### Args:
-* <b>`other`</b>: Another TensorShape.
+* <b>`other`</b>: Another `TensorShape`.
-##### Raises:
+##### Returns:
+ A `TensorShape` whose dimensions are the concatenation of the
+ dimensions in `self` and `other`.
-* <b>`ValueError`</b>: If `self` and `other` do not represent the same shape.
+
+- - -
+
+#### `tf.TensorShape.dims` {#TensorShape.dims}
+
+Returns a list of Dimensions, or None if the shape is unspecified.
- - -
-#### `tf.TensorShape.assert_is_fully_defined()` {#TensorShape.assert_is_fully_defined}
+#### `tf.TensorShape.is_compatible_with(other)` {#TensorShape.is_compatible_with}
-Raises an exception if `self` is not fully defined in every dimension.
+Returns True iff `self` is compatible with `other`.
-##### Raises:
+Two possibly-partially-defined shapes are compatible if there
+exists a fully-defined shape that both shapes can represent. Thus,
+compatibility allows the shape inference code to reason about
+partially-defined shapes. For example:
+* TensorShape(None) is compatible with all shapes.
-* <b>`ValueError`</b>: If `self` does not have a known value for every dimension.
+* TensorShape([None, None]) is compatible with all two-dimensional
+ shapes, such as TensorShape([32, 784]), and also TensorShape(None). It is
+ not compatible with, for example, TensorShape([None]) or
+ TensorShape([None, None, None]).
+* TensorShape([32, None]) is compatible with all two-dimensional shapes
+ with size 32 in the 0th dimension, and also TensorShape([None, None])
+ and TensorShape(None). It is not compatible with, for example,
+ TensorShape([32]), TensorShape([32, None, 1]) or TensorShape([64, None]).
+* TensorShape([32, 784]) is compatible with itself, and also
+ TensorShape([32, None]), TensorShape([None, 784]), TensorShape([None,
+ None]) and TensorShape(None). It is not compatible with, for example,
+ TensorShape([32, 1, 784]) or TensorShape([None]).
-#### Other Methods
-- - -
+The compatibility relation is reflexive and symmetric, but not
+transitive. For example, TensorShape([32, 784]) is compatible with
+TensorShape(None), and TensorShape(None) is compatible with
+TensorShape([4, 4]), but TensorShape([32, 784]) is not compatible with
+TensorShape([4, 4]).
-#### `tf.TensorShape.__bool__()` {#TensorShape.__bool__}
+##### Args:
-Returns True if this shape contains non-zero information.
+
+* <b>`other`</b>: Another TensorShape.
+
+##### Returns:
+
+ True iff `self` is compatible with `other`.
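+
+For example, a minimal sketch:
+
+```python
+import tensorflow as tf
+
+a = tf.TensorShape([32, None])
+print(a.is_compatible_with(tf.TensorShape([32, 784])))   # ==> True
+print(a.is_compatible_with(tf.TensorShape([64, 784])))   # ==> False
+print(a.is_compatible_with(tf.TensorShape(None)))        # ==> True
+```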
- - -
-#### `tf.TensorShape.__eq__(other)` {#TensorShape.__eq__}
+#### `tf.TensorShape.is_fully_defined()` {#TensorShape.is_fully_defined}
-Returns True if `self` is equivalent to `other`.
+Returns True iff `self` is fully defined in every dimension.
- - -
-#### `tf.TensorShape.__getitem__(key)` {#TensorShape.__getitem__}
+#### `tf.TensorShape.merge_with(other)` {#TensorShape.merge_with}
-Returns the value of a dimension or a shape, depending on the key.
+Returns a `TensorShape` combining the information in `self` and `other`.
+
+The dimensions in `self` and `other` are merged elementwise,
+according to the rules defined for `Dimension.merge_with()`.
##### Args:
-* <b>`key`</b>: If `key` is an integer, returns the dimension at that index;
- otherwise if `key` is a slice, returns a TensorShape whose
- dimensions are those selected by the slice from `self`.
+* <b>`other`</b>: Another `TensorShape`.
##### Returns:
- A dimension if `key` is an integer, or a `TensorShape` if `key` is a
- slice.
+ A `TensorShape` containing the combined information of `self` and
+ `other`.
##### Raises:
-* <b>`ValueError`</b>: If `key` is a slice, and any of its elements are negative, or
- if `self` is completely unknown and the step is set.
+* <b>`ValueError`</b>: If `self` and `other` are not compatible.
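+
+For example (a small sketch; the printed shape is the expected `str` form):
+
+```python
+import tensorflow as tf
+
+a = tf.TensorShape([None, 784])
+b = tf.TensorShape([32, None])
+print(a.merge_with(b))    # ==> (32, 784)
+# a.merge_with(tf.TensorShape([None, 100])) would raise ValueError
+```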
- - -
-#### `tf.TensorShape.__init__(dims)` {#TensorShape.__init__}
-
-Creates a new TensorShape with the given dimensions.
-
-##### Args:
+#### `tf.TensorShape.ndims` {#TensorShape.ndims}
+Returns the rank of this shape, or None if it is unspecified.
-* <b>`dims`</b>: A list of Dimensions, or None if the shape is unspecified.
-* <b>`DEPRECATED`</b>: A single integer is treated as a singleton list.
-##### Raises:
+- - -
+#### `tf.TensorShape.num_elements()` {#TensorShape.num_elements}
-* <b>`TypeError`</b>: If dims cannot be converted to a list of dimensions.
+Returns the total number of elements, or `None` for incomplete shapes.
- - -
-#### `tf.TensorShape.__iter__()` {#TensorShape.__iter__}
+#### `tf.TensorShape.with_rank(rank)` {#TensorShape.with_rank}
-Returns `self.dims` if the rank is known, otherwise raises ValueError.
+Returns a shape based on `self` with the given rank.
+This method promotes a completely unknown shape to one with a
+known rank.
-- - -
+##### Args:
-#### `tf.TensorShape.__len__()` {#TensorShape.__len__}
-Returns the rank of this shape, or raises ValueError if unspecified.
+* <b>`rank`</b>: An integer.
+##### Returns:
-- - -
+ A shape that is at least as specific as `self` with the given rank.
-#### `tf.TensorShape.__ne__(other)` {#TensorShape.__ne__}
+##### Raises:
-Returns True if `self` is known to be different from `other`.
+
+* <b>`ValueError`</b>: If `self` does not represent a shape with the given `rank`.
- - -
-#### `tf.TensorShape.__nonzero__()` {#TensorShape.__nonzero__}
+#### `tf.TensorShape.with_rank_at_least(rank)` {#TensorShape.with_rank_at_least}
-Returns True if this shape contains non-zero information.
+Returns a shape based on `self` with at least the given rank.
+##### Args:
-- - -
-#### `tf.TensorShape.__repr__()` {#TensorShape.__repr__}
+* <b>`rank`</b>: An integer.
+
+##### Returns:
+
+ A shape that is at least as specific as `self` with at least the given
+ rank.
+
+##### Raises:
+* <b>`ValueError`</b>: If `self` does not represent a shape with at least the given
+ `rank`.
- - -
-#### `tf.TensorShape.__str__()` {#TensorShape.__str__}
+#### `tf.TensorShape.with_rank_at_most(rank)` {#TensorShape.with_rank_at_most}
+Returns a shape based on `self` with at most the given rank.
+##### Args:
-- - -
+* <b>`rank`</b>: An integer.
-#### `tf.TensorShape.num_elements()` {#TensorShape.num_elements}
+##### Returns:
-Returns the total number of elements, or none for incomplete shapes.
+ A shape that is at least as specific as `self` with at most the given
+ rank.
+
+##### Raises:
+
+
+* <b>`ValueError`</b>: If `self` does not represent a shape with at most the given
+ `rank`.
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.PriorityQueue.from_list.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.PriorityQueue.from_list.md
new file mode 100644
index 0000000000..1fbd1a6f03
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.PriorityQueue.from_list.md
@@ -0,0 +1,21 @@
+#### `tf.PriorityQueue.from_list(index, queues)` {#PriorityQueue.from_list}
+
+Create a queue using the queue reference from `queues[index]`.
+
+##### Args:
+
+
+* <b>`index`</b>: An integer scalar tensor that determines the input that gets
+ selected.
+* <b>`queues`</b>: A list of `QueueBase` objects.
+
+##### Returns:
+
+ A `QueueBase` object.
+
+##### Raises:
+
+
+* <b>`TypeError`</b>: When `queues` is not a list of `QueueBase` objects,
+ or when the data types of `queues` are not all the same.
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.TensorArray.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.TensorArray.md
index a10d61aedc..240628114e 100644
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.TensorArray.md
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.TensorArray.md
@@ -3,44 +3,89 @@ Class wrapping dynamic-sized, per-time-step, write-once Tensor arrays.
This class is meant to be used with dynamic iteration primitives such as
`while_loop` and `map_fn`. It supports gradient back-propagation via special
"flow" control flow dependencies.
-
- - -
-#### `tf.TensorArray.handle` {#TensorArray.handle}
+#### `tf.TensorArray.__init__(dtype, size=None, dynamic_size=None, clear_after_read=None, tensor_array_name=None, handle=None, flow=None, infer_shape=True, element_shape=None, name=None)` {#TensorArray.__init__}
-The reference to the TensorArray.
+Construct a new TensorArray or wrap an existing TensorArray handle.
+A note about the parameter `name`:
-- - -
+The name of the `TensorArray` (even if passed in) is uniquified: each time
+a new `TensorArray` is created at runtime it is assigned its own name for
+the duration of the run. This avoids name collisions if a `TensorArray`
+is created within a `while_loop`.
-#### `tf.TensorArray.flow` {#TensorArray.flow}
+##### Args:
-The flow `Tensor` forcing ops leading to this TensorArray state.
+* <b>`dtype`</b>: (required) data type of the TensorArray.
+* <b>`size`</b>: (optional) int32 scalar `Tensor`: the size of the TensorArray.
+ Required if handle is not provided.
+* <b>`dynamic_size`</b>: (optional) Python bool: If true, writes to the TensorArray
+ can grow the TensorArray past its initial size. Default: False.
+* <b>`clear_after_read`</b>: Boolean (optional, default: True). If True, clear
+ TensorArray values after reading them. This disables read-many
+ semantics, but allows early release of memory.
+* <b>`tensor_array_name`</b>: (optional) Python string: the name of the TensorArray.
+ This is used when creating the TensorArray handle. If this value is
+ set, handle should be None.
+* <b>`handle`</b>: (optional) A `Tensor` handle to an existing TensorArray. If this
+ is set, tensor_array_name should be None.
+* <b>`flow`</b>: (optional) A float `Tensor` scalar coming from an existing
+ `TensorArray.flow`.
+* <b>`infer_shape`</b>: (optional, default: True) If True, shape inference
+ is enabled. In this case, all elements must have the same shape.
+* <b>`element_shape`</b>: (optional, default: None) A `TensorShape` object specifying
+ the shape constraints of each of the elements of the TensorArray.
+ Need not be fully defined.
+* <b>`name`</b>: A name for the operation (optional).
-- - -
+##### Raises:
-#### `tf.TensorArray.dtype` {#TensorArray.dtype}
-The data type of this TensorArray.
+* <b>`ValueError`</b>: if both handle and tensor_array_name are provided.
+* <b>`TypeError`</b>: if handle is provided but is not a Tensor.
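+
+A minimal usage sketch (the size and values are illustrative), writing three
+scalars and stacking them back into one tensor:
+
+```python
+import tensorflow as tf
+
+ta = tf.TensorArray(dtype=tf.float32, size=3)
+ta = ta.write(0, 10.0)    # each write returns a new TensorArray with updated flow
+ta = ta.write(1, 20.0)
+ta = ta.write(2, 30.0)
+stacked = ta.stack()
+
+with tf.Session() as sess:
+    print(sess.run(stacked))   # expected ==> [ 10.  20.  30.]
+```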
+- - -
+
+#### `tf.TensorArray.close(name=None)` {#TensorArray.close}
+
+Close the current TensorArray.
+
- - -
-#### `tf.TensorArray.read(index, name=None)` {#TensorArray.read}
+#### `tf.TensorArray.concat(name=None)` {#TensorArray.concat}
-Read the value at location `index` in the TensorArray.
+Return the values in the TensorArray as a concatenated `Tensor`.
+
+All of the values must have been written, their ranks must match, and
+their shapes must all match for all dimensions except the first.
##### Args:
-* <b>`index`</b>: 0-D. int32 tensor with the index to read from.
* <b>`name`</b>: A name for the operation (optional).
##### Returns:
- The tensor at index `index`.
+ All the tensors in the TensorArray concatenated into one tensor.
+
+
+- - -
+
+#### `tf.TensorArray.dtype` {#TensorArray.dtype}
+
+The data type of this TensorArray.
+
+
+- - -
+
+#### `tf.TensorArray.flow` {#TensorArray.flow}
+
+The flow `Tensor` forcing ops leading to this TensorArray state.
- - -
@@ -66,65 +111,46 @@ must all match.
- - -
-#### `tf.TensorArray.stack(name=None)` {#TensorArray.stack}
-
-Return the values in the TensorArray as a stacked `Tensor`.
-
-All of the values must have been written and their shapes must all match.
-If input shapes have rank-`R`, then output shape will have rank-`(R+1)`.
-
-##### Args:
+#### `tf.TensorArray.grad(source, flow=None, name=None)` {#TensorArray.grad}
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- All the tensors in the TensorArray stacked into one tensor.
- - -
-#### `tf.TensorArray.concat(name=None)` {#TensorArray.concat}
+#### `tf.TensorArray.handle` {#TensorArray.handle}
-Return the values in the TensorArray as a concatenated `Tensor`.
+The reference to the TensorArray.
-All of the values must have been written, their ranks must match, and
-and their shapes must all match for all dimensions except the first.
-##### Args:
+- - -
+#### `tf.TensorArray.identity()` {#TensorArray.identity}
-* <b>`name`</b>: A name for the operation (optional).
+Returns a TensorArray with the same content and properties.
##### Returns:
- All the tensors in the TensorArray concatenated into one tensor.
-
+ A new TensorArray object with flow that ensures the control dependencies
+ from the contexts will become control dependencies for writes, reads, etc.
+ Use this object for all subsequent operations.
- - -
-#### `tf.TensorArray.write(index, value, name=None)` {#TensorArray.write}
+#### `tf.TensorArray.read(index, name=None)` {#TensorArray.read}
-Write `value` into index `index` of the TensorArray.
+Read the value at location `index` in the TensorArray.
##### Args:
-* <b>`index`</b>: 0-D. int32 scalar with the index to write to.
-* <b>`value`</b>: N-D. Tensor of type `dtype`. The Tensor to write to this index.
+* <b>`index`</b>: 0-D. int32 tensor with the index to read from.
* <b>`name`</b>: A name for the operation (optional).
##### Returns:
- A new TensorArray object with flow that ensures the write occurs.
- Use this object all for subsequent operations.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if there are more writers than specified.
+ The tensor at index `index`.
- - -
@@ -154,28 +180,9 @@ Scatter the values of a `Tensor` in specific indices of a `TensorArray`.
- - -
-#### `tf.TensorArray.unstack(value, name=None)` {#TensorArray.unstack}
-
-Unstack the values of a `Tensor` in the TensorArray.
-
-If input value shapes have rank-`R`, then the output TensorArray will
-contain elements whose shapes are rank-`(R-1)`.
-
-##### Args:
-
-
-* <b>`value`</b>: (N+1)-D. Tensor of type `dtype`. The Tensor to unstack.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A new TensorArray object with flow that ensures the unstack occurs.
- Use this object all for subsequent operations.
-
-##### Raises:
-
+#### `tf.TensorArray.size(name=None)` {#TensorArray.size}
-* <b>`ValueError`</b>: if the shape inference fails.
+Return the size of the TensorArray.
- - -
@@ -203,86 +210,72 @@ Split the values of a `Tensor` into the TensorArray.
* <b>`ValueError`</b>: if the shape inference fails.
-
- - -
-#### `tf.TensorArray.identity()` {#TensorArray.identity}
-
-Returns a TensorArray with the same content and properties.
-
-##### Returns:
-
- A new TensorArray object with flow that ensures the control dependencies
- from the contexts will become control dependencies for writes, reads, etc.
- Use this object all for subsequent operations.
+#### `tf.TensorArray.stack(name=None)` {#TensorArray.stack}
+Return the values in the TensorArray as a stacked `Tensor`.
+All of the values must have been written and their shapes must all match.
+If input shapes have rank-`R`, then output shape will have rank-`(R+1)`.
-- - -
+##### Args:
-#### `tf.TensorArray.grad(source, flow=None, name=None)` {#TensorArray.grad}
+* <b>`name`</b>: A name for the operation (optional).
+##### Returns:
+ All the tensors in the TensorArray stacked into one tensor.
-#### Other Methods
- - -
-#### `tf.TensorArray.__init__(dtype, size=None, dynamic_size=None, clear_after_read=None, tensor_array_name=None, handle=None, flow=None, infer_shape=True, element_shape=None, name=None)` {#TensorArray.__init__}
-
-Construct a new TensorArray or wrap an existing TensorArray handle.
+#### `tf.TensorArray.unstack(value, name=None)` {#TensorArray.unstack}
-A note about the parameter `name`:
+Unstack the values of a `Tensor` in the TensorArray.
-The name of the `TensorArray` (even if passed in) is uniquified: each time
-a new `TensorArray` is created at runtime it is assigned its own name for
-the duration of the run. This avoids name collisions if a `TensorArray`
-is created within a `while_loop`.
+If input value shapes have rank-`R`, then the output TensorArray will
+contain elements whose shapes are rank-`(R-1)`.
##### Args:
-* <b>`dtype`</b>: (required) data type of the TensorArray.
-* <b>`size`</b>: (optional) int32 scalar `Tensor`: the size of the TensorArray.
- Required if handle is not provided.
-* <b>`dynamic_size`</b>: (optional) Python bool: If true, writes to the TensorArray
- can grow the TensorArray past its initial size. Default: False.
-* <b>`clear_after_read`</b>: Boolean (optional, default: True). If True, clear
- TensorArray values after reading them. This disables read-many
- semantics, but allows early release of memory.
-* <b>`tensor_array_name`</b>: (optional) Python string: the name of the TensorArray.
- This is used when creating the TensorArray handle. If this value is
- set, handle should be None.
-* <b>`handle`</b>: (optional) A `Tensor` handle to an existing TensorArray. If this
- is set, tensor_array_name should be None.
-* <b>`flow`</b>: (optional) A float `Tensor` scalar coming from an existing
- `TensorArray.flow`.
-* <b>`infer_shape`</b>: (optional, default: True) If True, shape inference
- is enabled. In this case, all elements must have the same shape.
-* <b>`element_shape`</b>: (optional, default: None) A `TensorShape` object specifying
- the shape constraints of each of the elements of the TensorArray.
- Need not be fully defined.
+* <b>`value`</b>: (N+1)-D. Tensor of type `dtype`. The Tensor to unstack.
* <b>`name`</b>: A name for the operation (optional).
+##### Returns:
+
+ A new TensorArray object with flow that ensures the unstack occurs.
+ Use this object for all subsequent operations.
+
##### Raises:
-* <b>`ValueError`</b>: if both handle and tensor_array_name are provided.
-* <b>`TypeError`</b>: if handle is provided but is not a Tensor.
+* <b>`ValueError`</b>: if the shape inference fails.
- - -
-#### `tf.TensorArray.close(name=None)` {#TensorArray.close}
+#### `tf.TensorArray.write(index, value, name=None)` {#TensorArray.write}
-Close the current TensorArray.
+Write `value` into index `index` of the TensorArray.
+##### Args:
-- - -
-#### `tf.TensorArray.size(name=None)` {#TensorArray.size}
+* <b>`index`</b>: 0-D. int32 scalar with the index to write to.
+* <b>`value`</b>: N-D. Tensor of type `dtype`. The Tensor to write to this index.
+* <b>`name`</b>: A name for the operation (optional).
-Return the size of the TensorArray.
+##### Returns:
+
+ A new TensorArray object with flow that ensures the write occurs.
+ Use this object for all subsequent operations.
+
+##### Raises:
+
+
+* <b>`ValueError`</b>: if there are more writers than specified.
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.TensorShape.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.TensorShape.md
index 29f3c6b26a..0ff6c80e23 100644
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.TensorShape.md
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.TensorShape.md
@@ -17,230 +17,175 @@ C++`](../../how_tos/adding_an_op/index.md#shape-functions-in-c) for
details of shape functions and how to register them. Alternatively,
the shape may be set explicitly using
[`Tensor.set_shape()`](../../api_docs/python/framework.md#Tensor.set_shape).
-
- - -
-#### `tf.TensorShape.merge_with(other)` {#TensorShape.merge_with}
-
-Returns a `TensorShape` combining the information in `self` and `other`.
-
-The dimensions in `self` and `other` are merged elementwise,
-according to the rules defined for `Dimension.merge_with()`.
-
-##### Args:
-
-
-* <b>`other`</b>: Another `TensorShape`.
+#### `tf.TensorShape.__bool__()` {#TensorShape.__bool__}
-##### Returns:
+Returns True if this shape contains non-zero information.
- A `TensorShape` containing the combined information of `self` and
- `other`.
-##### Raises:
+- - -
+#### `tf.TensorShape.__eq__(other)` {#TensorShape.__eq__}
-* <b>`ValueError`</b>: If `self` and `other` are not compatible.
+Returns True if `self` is equivalent to `other`.
- - -
-#### `tf.TensorShape.concatenate(other)` {#TensorShape.concatenate}
-
-Returns the concatenation of the dimension in `self` and `other`.
+#### `tf.TensorShape.__getitem__(key)` {#TensorShape.__getitem__}
-*N.B.* If either `self` or `other` is completely unknown,
-concatenation will discard information about the other shape. In
-future, we might support concatenation that preserves this
-information for use with slicing.
+Returns the value of a dimension or a shape, depending on the key.
##### Args:
-* <b>`other`</b>: Another `TensorShape`.
+* <b>`key`</b>: If `key` is an integer, returns the dimension at that index;
+ otherwise if `key` is a slice, returns a TensorShape whose
+ dimensions are those selected by the slice from `self`.
##### Returns:
- A `TensorShape` whose dimensions are the concatenation of the
- dimensions in `self` and `other`.
-
-
+ A dimension if `key` is an integer, or a `TensorShape` if `key` is a
+ slice.
-- - -
+##### Raises:
-#### `tf.TensorShape.ndims` {#TensorShape.ndims}
-Returns the rank of this shape, or None if it is unspecified.
+* <b>`ValueError`</b>: If `key` is a slice, and any of its elements are negative, or
+ if `self` is completely unknown and the step is set.
- - -
-#### `tf.TensorShape.dims` {#TensorShape.dims}
-
-Returns a list of Dimensions, or None if the shape is unspecified.
-
-
-- - -
+#### `tf.TensorShape.__init__(dims)` {#TensorShape.__init__}
-#### `tf.TensorShape.as_list()` {#TensorShape.as_list}
+Creates a new TensorShape with the given dimensions.
-Returns a list of integers or `None` for each dimension.
+##### Args:
-##### Returns:
- A list of integers or `None` for each dimension.
+* <b>`dims`</b>: A list of Dimensions, or None if the shape is unspecified.
+* <b>`DEPRECATED`</b>: A single integer is treated as a singleton list.
##### Raises:
-* <b>`ValueError`</b>: If `self` is an unknown shape with an unknown rank.
+* <b>`TypeError`</b>: If dims cannot be converted to a list of dimensions.
- - -
-#### `tf.TensorShape.as_proto()` {#TensorShape.as_proto}
+#### `tf.TensorShape.__iter__()` {#TensorShape.__iter__}
-Returns this shape as a `TensorShapeProto`.
+Returns `self.dims` if the rank is known, otherwise raises ValueError.
- - -
-#### `tf.TensorShape.is_compatible_with(other)` {#TensorShape.is_compatible_with}
+#### `tf.TensorShape.__len__()` {#TensorShape.__len__}
-Returns True iff `self` is compatible with `other`.
+Returns the rank of this shape, or raises ValueError if unspecified.
-Two possibly-partially-defined shapes are compatible if there
-exists a fully-defined shape that both shapes can represent. Thus,
-compatibility allows the shape inference code to reason about
-partially-defined shapes. For example:
-* TensorShape(None) is compatible with all shapes.
+- - -
-* TensorShape([None, None]) is compatible with all two-dimensional
- shapes, such as TensorShape([32, 784]), and also TensorShape(None). It is
- not compatible with, for example, TensorShape([None]) or
- TensorShape([None, None, None]).
+#### `tf.TensorShape.__ne__(other)` {#TensorShape.__ne__}
-* TensorShape([32, None]) is compatible with all two-dimensional shapes
- with size 32 in the 0th dimension, and also TensorShape([None, None])
- and TensorShape(None). It is not compatible with, for example,
- TensorShape([32]), TensorShape([32, None, 1]) or TensorShape([64, None]).
+Returns True if `self` is known to be different from `other`.
-* TensorShape([32, 784]) is compatible with itself, and also
- TensorShape([32, None]), TensorShape([None, 784]), TensorShape([None,
- None]) and TensorShape(None). It is not compatible with, for example,
- TensorShape([32, 1, 784]) or TensorShape([None]).
-The compatibility relation is reflexive and symmetric, but not
-transitive. For example, TensorShape([32, 784]) is compatible with
-TensorShape(None), and TensorShape(None) is compatible with
-TensorShape([4, 4]), but TensorShape([32, 784]) is not compatible with
-TensorShape([4, 4]).
+- - -
-##### Args:
+#### `tf.TensorShape.__nonzero__()` {#TensorShape.__nonzero__}
+
+Returns True if this shape contains non-zero information.
-* <b>`other`</b>: Another TensorShape.
+- - -
+
+#### `tf.TensorShape.__repr__()` {#TensorShape.__repr__}
-##### Returns:
- True iff `self` is compatible with `other`.
- - -
-#### `tf.TensorShape.is_fully_defined()` {#TensorShape.is_fully_defined}
+#### `tf.TensorShape.__str__()` {#TensorShape.__str__}
-Returns True iff `self` is fully defined in every dimension.
- - -
-#### `tf.TensorShape.with_rank(rank)` {#TensorShape.with_rank}
+#### `tf.TensorShape.as_list()` {#TensorShape.as_list}
-Returns a shape based on `self` with the given rank.
+Returns a list of integers or `None` for each dimension.
-This method promotes a completely unknown shape to one with a
-known rank.
+##### Returns:
-##### Args:
+ A list of integers or `None` for each dimension.
+##### Raises:
-* <b>`rank`</b>: An integer.
-##### Returns:
+* <b>`ValueError`</b>: If `self` is an unknown shape with an unknown rank.
- A shape that is at least as specific as `self` with the given rank.
-##### Raises:
+- - -
+#### `tf.TensorShape.as_proto()` {#TensorShape.as_proto}
-* <b>`ValueError`</b>: If `self` does not represent a shape with the given `rank`.
+Returns this shape as a `TensorShapeProto`.
- - -
-#### `tf.TensorShape.with_rank_at_least(rank)` {#TensorShape.with_rank_at_least}
+#### `tf.TensorShape.assert_has_rank(rank)` {#TensorShape.assert_has_rank}
-Returns a shape based on `self` with at least the given rank.
+Raises an exception if `self` is not compatible with the given `rank`.
##### Args:
* <b>`rank`</b>: An integer.
-##### Returns:
-
- A shape that is at least as specific as `self` with at least the given
- rank.
-
##### Raises:
-* <b>`ValueError`</b>: If `self` does not represent a shape with at least the given
- `rank`.
+* <b>`ValueError`</b>: If `self` does not represent a shape with the given `rank`.
- - -
-#### `tf.TensorShape.with_rank_at_most(rank)` {#TensorShape.with_rank_at_most}
-
-Returns a shape based on `self` with at most the given rank.
+#### `tf.TensorShape.assert_is_compatible_with(other)` {#TensorShape.assert_is_compatible_with}
-##### Args:
+Raises exception if `self` and `other` do not represent the same shape.
+This method can be used to assert that there exists a shape that both
+`self` and `other` represent.
-* <b>`rank`</b>: An integer.
+##### Args:
-##### Returns:
- A shape that is at least as specific as `self` with at most the given
- rank.
+* <b>`other`</b>: Another TensorShape.
##### Raises:
-* <b>`ValueError`</b>: If `self` does not represent a shape with at most the given
- `rank`.
-
+* <b>`ValueError`</b>: If `self` and `other` do not represent the same shape.
- - -
-#### `tf.TensorShape.assert_has_rank(rank)` {#TensorShape.assert_has_rank}
-
-Raises an exception if `self` is not compatible with the given `rank`.
-
-##### Args:
-
+#### `tf.TensorShape.assert_is_fully_defined()` {#TensorShape.assert_is_fully_defined}
-* <b>`rank`</b>: An integer.
+Raises an exception if `self` is not fully defined in every dimension.
##### Raises:
-* <b>`ValueError`</b>: If `self` does not represent a shape with the given `rank`.
+* <b>`ValueError`</b>: If `self` does not have a known value for every dimension.
- - -
@@ -263,141 +208,190 @@ Raises an exception if `self` and `other` do not have compatible ranks.
- - -
-#### `tf.TensorShape.assert_is_compatible_with(other)` {#TensorShape.assert_is_compatible_with}
+#### `tf.TensorShape.concatenate(other)` {#TensorShape.concatenate}
-Raises exception if `self` and `other` do not represent the same shape.
+Returns the concatenation of the dimension in `self` and `other`.
-This method can be used to assert that there exists a shape that both
-`self` and `other` represent.
+*N.B.* If either `self` or `other` is completely unknown,
+concatenation will discard information about the other shape. In
+future, we might support concatenation that preserves this
+information for use with slicing.
##### Args:
-* <b>`other`</b>: Another TensorShape.
+* <b>`other`</b>: Another `TensorShape`.
-##### Raises:
+##### Returns:
+
+ A `TensorShape` whose dimensions are the concatenation of the
+ dimensions in `self` and `other`.
-* <b>`ValueError`</b>: If `self` and `other` do not represent the same shape.
+- - -
+
+#### `tf.TensorShape.dims` {#TensorShape.dims}
+
+Returns a list of Dimensions, or None if the shape is unspecified.
- - -
-#### `tf.TensorShape.assert_is_fully_defined()` {#TensorShape.assert_is_fully_defined}
+#### `tf.TensorShape.is_compatible_with(other)` {#TensorShape.is_compatible_with}
-Raises an exception if `self` is not fully defined in every dimension.
+Returns True iff `self` is compatible with `other`.
-##### Raises:
+Two possibly-partially-defined shapes are compatible if there
+exists a fully-defined shape that both shapes can represent. Thus,
+compatibility allows the shape inference code to reason about
+partially-defined shapes. For example:
+* TensorShape(None) is compatible with all shapes.
-* <b>`ValueError`</b>: If `self` does not have a known value for every dimension.
+* TensorShape([None, None]) is compatible with all two-dimensional
+ shapes, such as TensorShape([32, 784]), and also TensorShape(None). It is
+ not compatible with, for example, TensorShape([None]) or
+ TensorShape([None, None, None]).
+* TensorShape([32, None]) is compatible with all two-dimensional shapes
+ with size 32 in the 0th dimension, and also TensorShape([None, None])
+ and TensorShape(None). It is not compatible with, for example,
+ TensorShape([32]), TensorShape([32, None, 1]) or TensorShape([64, None]).
+* TensorShape([32, 784]) is compatible with itself, and also
+ TensorShape([32, None]), TensorShape([None, 784]), TensorShape([None,
+ None]) and TensorShape(None). It is not compatible with, for example,
+ TensorShape([32, 1, 784]) or TensorShape([None]).
-#### Other Methods
-- - -
+The compatibility relation is reflexive and symmetric, but not
+transitive. For example, TensorShape([32, 784]) is compatible with
+TensorShape(None), and TensorShape(None) is compatible with
+TensorShape([4, 4]), but TensorShape([32, 784]) is not compatible with
+TensorShape([4, 4]).
-#### `tf.TensorShape.__bool__()` {#TensorShape.__bool__}
+##### Args:
-Returns True if this shape contains non-zero information.
+
+* <b>`other`</b>: Another TensorShape.
+
+##### Returns:
+
+ True iff `self` is compatible with `other`.
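+
+For example (a minimal sketch of the relation described above; the shapes are
+illustrative):
+
+```python
+import tensorflow as tf
+
+partial = tf.TensorShape([None, 784])
+full = tf.TensorShape([32, 784])
+
+print(partial.is_compatible_with(full))                    # True
+print(full.is_compatible_with(tf.TensorShape(None)))       # True
+print(full.is_compatible_with(tf.TensorShape([64, 784])))  # False
+```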
- - -
-#### `tf.TensorShape.__eq__(other)` {#TensorShape.__eq__}
+#### `tf.TensorShape.is_fully_defined()` {#TensorShape.is_fully_defined}
-Returns True if `self` is equivalent to `other`.
+Returns True iff `self` is fully defined in every dimension.
- - -
-#### `tf.TensorShape.__getitem__(key)` {#TensorShape.__getitem__}
+#### `tf.TensorShape.merge_with(other)` {#TensorShape.merge_with}
-Returns the value of a dimension or a shape, depending on the key.
+Returns a `TensorShape` combining the information in `self` and `other`.
+
+The dimensions in `self` and `other` are merged elementwise,
+according to the rules defined for `Dimension.merge_with()`.
##### Args:
-* <b>`key`</b>: If `key` is an integer, returns the dimension at that index;
- otherwise if `key` is a slice, returns a TensorShape whose
- dimensions are those selected by the slice from `self`.
+* <b>`other`</b>: Another `TensorShape`.
##### Returns:
- A dimension if `key` is an integer, or a `TensorShape` if `key` is a
- slice.
+ A `TensorShape` containing the combined information of `self` and
+ `other`.
##### Raises:
-* <b>`ValueError`</b>: If `key` is a slice, and any of its elements are negative, or
- if `self` is completely unknown and the step is set.
+* <b>`ValueError`</b>: If `self` and `other` are not compatible.
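+
+For example (a minimal sketch; the shapes are illustrative):
+
+```python
+import tensorflow as tf
+
+a = tf.TensorShape([None, 784])
+b = tf.TensorShape([32, None])
+
+print(a.merge_with(b))   # the merged shape is (32, 784)
+
+# Incompatible shapes raise ValueError, e.g.:
+# tf.TensorShape([32, 784]).merge_with(tf.TensorShape([64, 784]))
+```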
- - -
-#### `tf.TensorShape.__init__(dims)` {#TensorShape.__init__}
-
-Creates a new TensorShape with the given dimensions.
-
-##### Args:
+#### `tf.TensorShape.ndims` {#TensorShape.ndims}
+Returns the rank of this shape, or None if it is unspecified.
-* <b>`dims`</b>: A list of Dimensions, or None if the shape is unspecified.
-* <b>`DEPRECATED`</b>: A single integer is treated as a singleton list.
-##### Raises:
+- - -
+#### `tf.TensorShape.num_elements()` {#TensorShape.num_elements}
-* <b>`TypeError`</b>: If dims cannot be converted to a list of dimensions.
+Returns the total number of elements, or `None` for incomplete shapes.
- - -
-#### `tf.TensorShape.__iter__()` {#TensorShape.__iter__}
+#### `tf.TensorShape.with_rank(rank)` {#TensorShape.with_rank}
-Returns `self.dims` if the rank is known, otherwise raises ValueError.
+Returns a shape based on `self` with the given rank.
+This method promotes a completely unknown shape to one with a
+known rank.
-- - -
+##### Args:
-#### `tf.TensorShape.__len__()` {#TensorShape.__len__}
-Returns the rank of this shape, or raises ValueError if unspecified.
+* <b>`rank`</b>: An integer.
+##### Returns:
-- - -
+ A shape that is at least as specific as `self` with the given rank.
-#### `tf.TensorShape.__ne__(other)` {#TensorShape.__ne__}
+##### Raises:
-Returns True if `self` is known to be different from `other`.
+
+* <b>`ValueError`</b>: If `self` does not represent a shape with the given `rank`.
- - -
-#### `tf.TensorShape.__nonzero__()` {#TensorShape.__nonzero__}
+#### `tf.TensorShape.with_rank_at_least(rank)` {#TensorShape.with_rank_at_least}
-Returns True if this shape contains non-zero information.
+Returns a shape based on `self` with at least the given rank.
+##### Args:
-- - -
-#### `tf.TensorShape.__repr__()` {#TensorShape.__repr__}
+* <b>`rank`</b>: An integer.
+
+##### Returns:
+
+ A shape that is at least as specific as `self` with at least the given
+ rank.
+
+##### Raises:
+* <b>`ValueError`</b>: If `self` does not represent a shape with at least the given
+ `rank`.
- - -
-#### `tf.TensorShape.__str__()` {#TensorShape.__str__}
+#### `tf.TensorShape.with_rank_at_most(rank)` {#TensorShape.with_rank_at_most}
+Returns a shape based on `self` with at most the given rank.
+##### Args:
-- - -
+* <b>`rank`</b>: An integer.
-#### `tf.TensorShape.num_elements()` {#TensorShape.num_elements}
+##### Returns:
-Returns the total number of elements, or none for incomplete shapes.
+ A shape that is at least as specific as `self` with at most the given
+ rank.
+
+##### Raises:
+
+
+* <b>`ValueError`</b>: If `self` does not represent a shape with at most the given
+ `rank`.
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.FIFOQueue.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.FIFOQueue.md
index b60559e41f..1d271e6eab 100644
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.FIFOQueue.md
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.FIFOQueue.md
@@ -2,7 +2,6 @@ A queue implementation that dequeues elements in first-in first-out order.
See [`tf.QueueBase`](#QueueBase) for a description of the methods on
this class.
-
- - -
#### `tf.FIFOQueue.__init__(capacity, dtypes, shapes=None, names=None, shared_name=None, name='fifo_queue')` {#FIFOQueue.__init__}
@@ -39,3 +38,262 @@ but the use of `dequeue_many` is disallowed.
* <b>`name`</b>: Optional name for the queue operation.
+- - -
+
+#### `tf.FIFOQueue.close(cancel_pending_enqueues=False, name=None)` {#FIFOQueue.close}
+
+Closes this queue.
+
+This operation signals that no more elements will be enqueued in
+the given queue. Subsequent `enqueue` and `enqueue_many`
+operations will fail. Subsequent `dequeue` and `dequeue_many`
+operations will continue to succeed if sufficient elements remain
+in the queue. Subsequent `dequeue` and `dequeue_many` operations
+that would block will fail immediately.
+
+If `cancel_pending_enqueues` is `True`, all pending requests will also
+be cancelled.
+
+##### Args:
+
+
+* <b>`cancel_pending_enqueues`</b>: (Optional.) A boolean, defaulting to
+ `False` (described above).
+* <b>`name`</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The operation that closes the queue.
+
+
+- - -
+
+#### `tf.FIFOQueue.dequeue(name=None)` {#FIFOQueue.dequeue}
+
+Dequeues one element from this queue.
+
+If the queue is empty when this operation executes, it will block
+until there is an element to dequeue.
+
+At runtime, this operation may raise an error if the queue is
+[closed](#QueueBase.close) before or during its execution. If the
+queue is closed, the queue is empty, and there are no pending
+enqueue operations that can fulfill this request,
+`tf.errors.OutOfRangeError` will be raised. If the session is
+[closed](../../api_docs/python/client.md#Session.close),
+`tf.errors.CancelledError` will be raised.
+
+##### Args:
+
+
+* <b>`name`</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The tuple of tensors that was dequeued.
+
+
+- - -
+
+#### `tf.FIFOQueue.dequeue_many(n, name=None)` {#FIFOQueue.dequeue_many}
+
+Dequeues and concatenates `n` elements from this queue.
+
+This operation concatenates queue-element component tensors along
+the 0th dimension to make a single component tensor. All of the
+components in the dequeued tuple will have size `n` in the 0th dimension.
+
+If the queue is closed and fewer than `n` elements are left, then an
+`OutOfRange` exception is raised.
+
+At runtime, this operation may raise an error if the queue is
+[closed](#QueueBase.close) before or during its execution. If the
+queue is closed, the queue contains fewer than `n` elements, and
+there are no pending enqueue operations that can fulfill this
+request, `tf.errors.OutOfRangeError` will be raised. If the
+session is [closed](../../api_docs/python/client.md#Session.close),
+`tf.errors.CancelledError` will be raised.
+
+##### Args:
+
+
+* <b>`n`</b>: A scalar `Tensor` containing the number of elements to dequeue.
+* <b>`name`</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The tuple of concatenated tensors that was dequeued.
+
+
+- - -
+
+#### `tf.FIFOQueue.dequeue_up_to(n, name=None)` {#FIFOQueue.dequeue_up_to}
+
+Dequeues and concatenates `n` elements from this queue.
+
+**Note** This operation is not supported by all queues. If a queue does not
+support DequeueUpTo, then a `tf.errors.UnimplementedError` is raised.
+
+This operation concatenates queue-element component tensors along
+the 0th dimension to make a single component tensor. If the queue
+has not been closed, all of the components in the dequeued tuple
+will have size `n` in the 0th dimension.
+
+If the queue is closed and there are more than `0` but fewer than
+`n` elements remaining, then instead of raising a
+`tf.errors.OutOfRangeError` like [`dequeue_many`](#QueueBase.dequeue_many),
+fewer than `n` elements are returned immediately. If the queue is
+closed and there are `0` elements left in the queue, then a
+`tf.errors.OutOfRangeError` is raised just like in `dequeue_many`.
+Otherwise the behavior is identical to `dequeue_many`.
+
+##### Args:
+
+
+* <b>`n`</b>: A scalar `Tensor` containing the number of elements to dequeue.
+* <b>`name`</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The tuple of concatenated tensors that was dequeued.
+
+
+- - -
+
+#### `tf.FIFOQueue.dtypes` {#FIFOQueue.dtypes}
+
+The list of dtypes for each component of a queue element.
+
+
+- - -
+
+#### `tf.FIFOQueue.enqueue(vals, name=None)` {#FIFOQueue.enqueue}
+
+Enqueues one element to this queue.
+
+If the queue is full when this operation executes, it will block
+until the element has been enqueued.
+
+At runtime, this operation may raise an error if the queue is
+[closed](#QueueBase.close) before or during its execution. If the
+queue is closed before this operation runs,
+`tf.errors.CancelledError` will be raised. If this operation is
+blocked, and either (i) the queue is closed by a close operation
+with `cancel_pending_enqueues=True`, or (ii) the session is
+[closed](../../api_docs/python/client.md#Session.close),
+`tf.errors.CancelledError` will be raised.
+
+##### Args:
+
+
+* <b>`vals`</b>: A tensor, a list or tuple of tensors, or a dictionary containing
+ the values to enqueue.
+* <b>`name`</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The operation that enqueues a new tuple of tensors to the queue.
+
+
+- - -
+
+#### `tf.FIFOQueue.enqueue_many(vals, name=None)` {#FIFOQueue.enqueue_many}
+
+Enqueues zero or more elements to this queue.
+
+This operation slices each component tensor along the 0th dimension to
+make multiple queue elements. All of the tensors in `vals` must have the
+same size in the 0th dimension.
+
+If the queue is full when this operation executes, it will block
+until all of the elements have been enqueued.
+
+At runtime, this operation may raise an error if the queue is
+[closed](#QueueBase.close) before or during its execution. If the
+queue is closed before this operation runs,
+`tf.errors.CancelledError` will be raised. If this operation is
+blocked, and either (i) the queue is closed by a close operation
+with `cancel_pending_enqueues=True`, or (ii) the session is
+[closed](../../api_docs/python/client.md#Session.close),
+`tf.errors.CancelledError` will be raised.
+
+##### Args:
+
+
+* <b>`vals`</b>: A tensor, a list or tuple of tensors, or a dictionary
+ from which the queue elements are taken.
+* <b>`name`</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The operation that enqueues a batch of tuples of tensors to the queue.
+
+
+- - -
+
+#### `tf.FIFOQueue.from_list(index, queues)` {#FIFOQueue.from_list}
+
+Create a queue using the queue reference from `queues[index]`.
+
+##### Args:
+
+
+* <b>`index`</b>: An integer scalar tensor that determines the input that gets
+ selected.
+* <b>`queues`</b>: A list of `QueueBase` objects.
+
+##### Returns:
+
+ A `QueueBase` object.
+
+##### Raises:
+
+
+* <b>`TypeError`</b>: When `queues` is not a list of `QueueBase` objects,
+ or when the data types of `queues` are not all the same.
+
+
+- - -
+
+#### `tf.FIFOQueue.name` {#FIFOQueue.name}
+
+The name of the underlying queue.
+
+
+- - -
+
+#### `tf.FIFOQueue.names` {#FIFOQueue.names}
+
+The list of names for each component of a queue element.
+
+
+- - -
+
+#### `tf.FIFOQueue.queue_ref` {#FIFOQueue.queue_ref}
+
+The underlying queue reference.
+
+
+- - -
+
+#### `tf.FIFOQueue.shapes` {#FIFOQueue.shapes}
+
+The list of shapes for each component of a queue element.
+
+
+- - -
+
+#### `tf.FIFOQueue.size(name=None)` {#FIFOQueue.size}
+
+Compute the number of elements in this queue.
+
+##### Args:
+
+
+* <b>`name`</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A scalar tensor containing the number of elements in this queue.
+
+
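+- - -
+
+The following minimal sketch ties the methods above together; the capacity,
+dtypes, and values are illustrative:
+
+```python
+import tensorflow as tf
+
+q = tf.FIFOQueue(capacity=10, dtypes=[tf.int32], shapes=[[]])
+enqueue_op = q.enqueue_many([[1, 2, 3]])
+
+with tf.Session() as sess:
+    sess.run(enqueue_op)
+    print(sess.run(q.size()))            # 3
+    print(sess.run(q.dequeue()))         # 1 (first-in first-out)
+    sess.run(q.close())
+    print(sess.run(q.dequeue_many(2)))   # [2 3]: still succeeds after close
+```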
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.Tensor.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.Tensor.md
index 41b72011ed..abab577434 100644
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.Tensor.md
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.Tensor.md
@@ -34,178 +34,6 @@ sess = tf.Session()
# Execute the graph and store the value that `e` represents in `result`.
result = sess.run(e)
```
-
-- - -
-
-#### `tf.Tensor.dtype` {#Tensor.dtype}
-
-The `DType` of elements in this tensor.
-
-
-- - -
-
-#### `tf.Tensor.name` {#Tensor.name}
-
-The string name of this tensor.
-
-
-- - -
-
-#### `tf.Tensor.value_index` {#Tensor.value_index}
-
-The index of this tensor in the outputs of its `Operation`.
-
-
-- - -
-
-#### `tf.Tensor.graph` {#Tensor.graph}
-
-The `Graph` that contains this tensor.
-
-
-- - -
-
-#### `tf.Tensor.op` {#Tensor.op}
-
-The `Operation` that produces this tensor as an output.
-
-
-- - -
-
-#### `tf.Tensor.consumers()` {#Tensor.consumers}
-
-Returns a list of `Operation`s that consume this tensor.
-
-##### Returns:
-
- A list of `Operation`s.
-
-
-
-- - -
-
-#### `tf.Tensor.eval(feed_dict=None, session=None)` {#Tensor.eval}
-
-Evaluates this tensor in a `Session`.
-
-Calling this method will execute all preceding operations that
-produce the inputs needed for the operation that produces this
-tensor.
-
-*N.B.* Before invoking `Tensor.eval()`, its graph must have been
-launched in a session, and either a default session must be
-available, or `session` must be specified explicitly.
-
-##### Args:
-
-
-* <b>`feed_dict`</b>: A dictionary that maps `Tensor` objects to feed values.
- See [`Session.run()`](../../api_docs/python/client.md#Session.run) for a
- description of the valid feed values.
-* <b>`session`</b>: (Optional.) The `Session` to be used to evaluate this tensor. If
- none, the default session will be used.
-
-##### Returns:
-
- A numpy array corresponding to the value of this tensor.
-
-
-
-- - -
-
-#### `tf.Tensor.get_shape()` {#Tensor.get_shape}
-
-Alias of Tensor.shape.
-
-
-- - -
-
-#### `tf.Tensor.shape` {#Tensor.shape}
-
-Returns the `TensorShape` that represents the shape of this tensor.
-
-The shape is computed using shape inference functions that are
-registered in the Op for each `Operation`. See
-[`TensorShape`](../../api_docs/python/framework.md#TensorShape)
-for more details of what a shape represents.
-
-The inferred shape of a tensor is used to provide shape
-information without having to launch the graph in a session. This
-can be used for debugging, and providing early error messages. For
-example:
-
-```python
-c = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
-
-print(c.shape)
-==> TensorShape([Dimension(2), Dimension(3)])
-
-d = tf.constant([[1.0, 0.0], [0.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
-
-print(d.shape)
-==> TensorShape([Dimension(4), Dimension(2)])
-
-# Raises a ValueError, because `c` and `d` do not have compatible
-# inner dimensions.
-e = tf.matmul(c, d)
-
-f = tf.matmul(c, d, transpose_a=True, transpose_b=True)
-
-print(f.shape)
-==> TensorShape([Dimension(3), Dimension(4)])
-```
-
-In some cases, the inferred shape may have unknown dimensions. If
-the caller has additional information about the values of these
-dimensions, `Tensor.set_shape()` can be used to augment the
-inferred shape.
-
-##### Returns:
-
- A `TensorShape` representing the shape of this tensor.
-
-
-- - -
-
-#### `tf.Tensor.set_shape(shape)` {#Tensor.set_shape}
-
-Updates the shape of this tensor.
-
-This method can be called multiple times, and will merge the given
-`shape` with the current shape of this tensor. It can be used to
-provide additional information about the shape of this tensor that
-cannot be inferred from the graph alone. For example, this can be used
-to provide additional information about the shapes of images:
-
-```python
-_, image_data = tf.TFRecordReader(...).read(...)
-image = tf.image.decode_png(image_data, channels=3)
-
-# The height and width dimensions of `image` are data dependent, and
-# cannot be computed without executing the op.
-print(image.shape)
-==> TensorShape([Dimension(None), Dimension(None), Dimension(3)])
-
-# We know that each image in this dataset is 28 x 28 pixels.
-image.set_shape([28, 28, 3])
-print(image.shape)
-==> TensorShape([Dimension(28), Dimension(28), Dimension(3)])
-```
-
-##### Args:
-
-
-* <b>`shape`</b>: A `TensorShape` representing the shape of this tensor.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `shape` is not compatible with the current shape of
- this tensor.
-
-
-
-#### Other Methods
- - -
#### `tf.Tensor.__abs__(x, name=None)` {#Tensor.__abs__}
@@ -938,8 +766,175 @@ x ^ y = (x | y) & ~(x & y).
- - -
+#### `tf.Tensor.consumers()` {#Tensor.consumers}
+
+Returns a list of `Operation`s that consume this tensor.
+
+##### Returns:
+
+ A list of `Operation`s.
+
+
+- - -
+
#### `tf.Tensor.device` {#Tensor.device}
The name of the device on which this tensor will be produced, or None.
+- - -
+
+#### `tf.Tensor.dtype` {#Tensor.dtype}
+
+The `DType` of elements in this tensor.
+
+
+- - -
+
+#### `tf.Tensor.eval(feed_dict=None, session=None)` {#Tensor.eval}
+
+Evaluates this tensor in a `Session`.
+
+Calling this method will execute all preceding operations that
+produce the inputs needed for the operation that produces this
+tensor.
+
+*N.B.* Before invoking `Tensor.eval()`, its graph must have been
+launched in a session, and either a default session must be
+available, or `session` must be specified explicitly.
+
+##### Args:
+
+
+* <b>`feed_dict`</b>: A dictionary that maps `Tensor` objects to feed values.
+ See [`Session.run()`](../../api_docs/python/client.md#Session.run) for a
+ description of the valid feed values.
+* <b>`session`</b>: (Optional.) The `Session` to be used to evaluate this tensor. If
+ none, the default session will be used.
+
+##### Returns:
+
+ A numpy array corresponding to the value of this tensor.
+
+
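+
+For example (a minimal sketch; the constant values are illustrative):
+
+```python
+import tensorflow as tf
+
+c = tf.constant([1.0, 2.0])
+d = c * 2.0
+
+sess = tf.Session()
+print(d.eval(session=sess))   # pass the session explicitly; evaluates to [2.0, 4.0]
+
+with sess.as_default():
+    print(d.eval())           # or rely on the default session
+sess.close()
+```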
+- - -
+
+#### `tf.Tensor.get_shape()` {#Tensor.get_shape}
+
+Alias of Tensor.shape.
+
+
+- - -
+
+#### `tf.Tensor.graph` {#Tensor.graph}
+
+The `Graph` that contains this tensor.
+
+
+- - -
+
+#### `tf.Tensor.name` {#Tensor.name}
+
+The string name of this tensor.
+
+
+- - -
+
+#### `tf.Tensor.op` {#Tensor.op}
+
+The `Operation` that produces this tensor as an output.
+
+
+- - -
+
+#### `tf.Tensor.set_shape(shape)` {#Tensor.set_shape}
+
+Updates the shape of this tensor.
+
+This method can be called multiple times, and will merge the given
+`shape` with the current shape of this tensor. It can be used to
+provide additional information about the shape of this tensor that
+cannot be inferred from the graph alone. For example, this can be used
+to provide additional information about the shapes of images:
+
+```python
+_, image_data = tf.TFRecordReader(...).read(...)
+image = tf.image.decode_png(image_data, channels=3)
+
+# The height and width dimensions of `image` are data dependent, and
+# cannot be computed without executing the op.
+print(image.shape)
+==> TensorShape([Dimension(None), Dimension(None), Dimension(3)])
+
+# We know that each image in this dataset is 28 x 28 pixels.
+image.set_shape([28, 28, 3])
+print(image.shape)
+==> TensorShape([Dimension(28), Dimension(28), Dimension(3)])
+```
+
+##### Args:
+
+
+* <b>`shape`</b>: A `TensorShape` representing the shape of this tensor.
+
+##### Raises:
+
+
+* <b>`ValueError`</b>: If `shape` is not compatible with the current shape of
+ this tensor.
+
+
+- - -
+
+#### `tf.Tensor.shape` {#Tensor.shape}
+
+Returns the `TensorShape` that represents the shape of this tensor.
+
+The shape is computed using shape inference functions that are
+registered in the Op for each `Operation`. See
+[`TensorShape`](../../api_docs/python/framework.md#TensorShape)
+for more details of what a shape represents.
+
+The inferred shape of a tensor is used to provide shape
+information without having to launch the graph in a session. This
+can be used for debugging, and providing early error messages. For
+example:
+
+```python
+c = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
+
+print(c.shape)
+==> TensorShape([Dimension(2), Dimension(3)])
+
+d = tf.constant([[1.0, 0.0], [0.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
+
+print(d.shape)
+==> TensorShape([Dimension(4), Dimension(2)])
+
+# Raises a ValueError, because `c` and `d` do not have compatible
+# inner dimensions.
+e = tf.matmul(c, d)
+
+f = tf.matmul(c, d, transpose_a=True, transpose_b=True)
+
+print(f.shape)
+==> TensorShape([Dimension(3), Dimension(4)])
+```
+
+In some cases, the inferred shape may have unknown dimensions. If
+the caller has additional information about the values of these
+dimensions, `Tensor.set_shape()` can be used to augment the
+inferred shape.
+
+##### Returns:
+
+ A `TensorShape` representing the shape of this tensor.
+
+
+- - -
+
+#### `tf.Tensor.value_index` {#Tensor.value_index}
+
+The index of this tensor in the outputs of its `Operation`.
+
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.AdadeltaOptimizer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.AdadeltaOptimizer.md
index 4503f54b74..08d37ac815 100644
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.AdadeltaOptimizer.md
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.AdadeltaOptimizer.md
@@ -2,7 +2,6 @@ Optimizer that implements the Adadelta algorithm.
See [M. D. Zeiler](http://arxiv.org/abs/1212.5701)
([pdf](http://arxiv.org/pdf/1212.5701v1.pdf))
-
- - -
#### `tf.train.AdadeltaOptimizer.__init__(learning_rate=0.001, rho=0.95, epsilon=1e-08, use_locking=False, name='Adadelta')` {#AdadeltaOptimizer.__init__}
@@ -21,3 +20,157 @@ Construct a new Adadelta optimizer.
gradients. Defaults to "Adadelta".
+- - -
+
+#### `tf.train.AdadeltaOptimizer.apply_gradients(grads_and_vars, global_step=None, name=None)` {#AdadeltaOptimizer.apply_gradients}
+
+Apply gradients to variables.
+
+This is the second part of `minimize()`. It returns an `Operation` that
+applies gradients.
+
+##### Args:
+
+
+* <b>`grads_and_vars`</b>: List of (gradient, variable) pairs as returned by
+ `compute_gradients()`.
+* <b>`global_step`</b>: Optional `Variable` to increment by one after the
+ variables have been updated.
+* <b>`name`</b>: Optional name for the returned operation. Defaults to the
+ name passed to the `Optimizer` constructor.
+
+##### Returns:
+
+ An `Operation` that applies the specified gradients. If `global_step`
+ was not None, that operation also increments `global_step`.
+
+##### Raises:
+
+
+* <b>`TypeError`</b>: If `grads_and_vars` is malformed.
+* <b>`ValueError`</b>: If none of the variables have gradients.
+
+
+- - -
+
+#### `tf.train.AdadeltaOptimizer.compute_gradients(loss, var_list=None, gate_gradients=1, aggregation_method=None, colocate_gradients_with_ops=False, grad_loss=None)` {#AdadeltaOptimizer.compute_gradients}
+
+Compute gradients of `loss` for the variables in `var_list`.
+
+This is the first part of `minimize()`. It returns a list
+of (gradient, variable) pairs where "gradient" is the gradient
+for "variable". Note that "gradient" can be a `Tensor`, an
+`IndexedSlices`, or `None` if there is no gradient for the
+given variable.
+
+##### Args:
+
+
+* <b>`loss`</b>: A Tensor containing the value to minimize.
+* <b>`var_list`</b>: Optional list of `tf.Variable` to update to minimize
+ `loss`. Defaults to the list of variables collected in the graph
+    under the key `GraphKeys.TRAINABLE_VARIABLES`.
+* <b>`gate_gradients`</b>: How to gate the computation of gradients. Can be
+ `GATE_NONE`, `GATE_OP`, or `GATE_GRAPH`.
+* <b>`aggregation_method`</b>: Specifies the method used to combine gradient terms.
+ Valid values are defined in the class `AggregationMethod`.
+* <b>`colocate_gradients_with_ops`</b>: If True, try colocating gradients with
+ the corresponding op.
+* <b>`grad_loss`</b>: Optional. A `Tensor` holding the gradient computed for `loss`.
+
+##### Returns:
+
+ A list of (gradient, variable) pairs. Variable is always present, but
+ gradient can be `None`.
+
+##### Raises:
+
+
+* <b>`TypeError`</b>: If `var_list` contains anything other than `Variable` objects.
+* <b>`ValueError`</b>: If some arguments are invalid.
+
+
+- - -
+
+#### `tf.train.AdadeltaOptimizer.get_name()` {#AdadeltaOptimizer.get_name}
+
+
+
+
+- - -
+
+#### `tf.train.AdadeltaOptimizer.get_slot(var, name)` {#AdadeltaOptimizer.get_slot}
+
+Return a slot named `name` created for `var` by the Optimizer.
+
+Some `Optimizer` subclasses use additional variables. For example
+`Momentum` and `Adagrad` use variables to accumulate updates. This method
+gives access to these `Variable` objects if for some reason you need them.
+
+Use `get_slot_names()` to get the list of slot names created by the
+`Optimizer`.
+
+##### Args:
+
+
+* <b>`var`</b>: A variable passed to `minimize()` or `apply_gradients()`.
+* <b>`name`</b>: A string.
+
+##### Returns:
+
+ The `Variable` for the slot if it was created, `None` otherwise.
+
+
+- - -
+
+#### `tf.train.AdadeltaOptimizer.get_slot_names()` {#AdadeltaOptimizer.get_slot_names}
+
+Return a list of the names of slots created by the `Optimizer`.
+
+See `get_slot()`.
+
+##### Returns:
+
+ A list of strings.
+
+
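+
+For example (a minimal sketch; the variable and loss are stand-ins, and the
+slot names shown in the comments are those Adadelta typically creates):
+
+```python
+import tensorflow as tf
+
+w = tf.Variable([1.0, 2.0])
+loss = tf.reduce_sum(tf.square(w))
+opt = tf.train.AdadeltaOptimizer(learning_rate=0.1)
+train_op = opt.minimize(loss)
+
+print(opt.get_slot_names())        # e.g. ['accum', 'accum_update']
+accum = opt.get_slot(w, 'accum')   # the accumulator `Variable` for `w`, or None
+```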
+- - -
+
+#### `tf.train.AdadeltaOptimizer.minimize(loss, global_step=None, var_list=None, gate_gradients=1, aggregation_method=None, colocate_gradients_with_ops=False, name=None, grad_loss=None)` {#AdadeltaOptimizer.minimize}
+
+Add operations to minimize `loss` by updating `var_list`.
+
+This method simply combines calls to `compute_gradients()` and
+`apply_gradients()`. If you want to process the gradients before applying
+them, call `compute_gradients()` and `apply_gradients()` explicitly instead
+of using this function.
+
+##### Args:
+
+
+* <b>`loss`</b>: A `Tensor` containing the value to minimize.
+* <b>`global_step`</b>: Optional `Variable` to increment by one after the
+ variables have been updated.
+* <b>`var_list`</b>: Optional list of `Variable` objects to update to minimize
+ `loss`. Defaults to the list of variables collected in the graph
+ under the key `GraphKeys.TRAINABLE_VARIABLES`.
+* <b>`gate_gradients`</b>: How to gate the computation of gradients. Can be
+ `GATE_NONE`, `GATE_OP`, or `GATE_GRAPH`.
+* <b>`aggregation_method`</b>: Specifies the method used to combine gradient terms.
+ Valid values are defined in the class `AggregationMethod`.
+* <b>`colocate_gradients_with_ops`</b>: If True, try colocating gradients with
+ the corresponding op.
+* <b>`name`</b>: Optional name for the returned operation.
+* <b>`grad_loss`</b>: Optional. A `Tensor` holding the gradient computed for `loss`.
+
+##### Returns:
+
+ An Operation that updates the variables in `var_list`. If `global_step`
+ was not `None`, that operation also increments `global_step`.
+
+##### Raises:
+
+
+* <b>`ValueError`</b>: If some of the variables are not `Variable` objects.
+
+
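+- - -
+
+The two-step form mentioned under `minimize()` looks roughly as follows; the
+model and the gradient-clipping step are illustrative:
+
+```python
+import tensorflow as tf
+
+w = tf.Variable([1.0, 2.0])
+loss = tf.reduce_sum(tf.square(w))
+opt = tf.train.AdadeltaOptimizer(learning_rate=0.1)
+
+# Process the gradients (here: clip them) before applying.
+grads_and_vars = opt.compute_gradients(loss)
+clipped = [(tf.clip_by_norm(g, 1.0), v) for g, v in grads_and_vars]
+train_op = opt.apply_gradients(clipped)
+
+with tf.Session() as sess:
+    sess.run(tf.global_variables_initializer())
+    sess.run(train_op)
+```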
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.MonitoredSession.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.MonitoredSession.md
index aeaf209fe9..0e96f64ff4 100644
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.MonitoredSession.md
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.MonitoredSession.md
@@ -79,7 +79,7 @@ Returns:
- - -
-#### `tf.train.MonitoredSession.__init__(session_creator=None, hooks=None)` {#MonitoredSession.__init__}
+#### `tf.train.MonitoredSession.__init__(session_creator=None, hooks=None, stop_grace_period_secs=120)` {#MonitoredSession.__init__}
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.MonitoredTrainingSession.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.MonitoredTrainingSession.md
index 254e28a70a..d84ddbe277 100644
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.MonitoredTrainingSession.md
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.MonitoredTrainingSession.md
@@ -1,4 +1,4 @@
-### `tf.train.MonitoredTrainingSession(master='', is_chief=True, checkpoint_dir=None, scaffold=None, hooks=None, chief_only_hooks=None, save_checkpoint_secs=600, save_summaries_steps=100, save_summaries_secs=None, config=None)` {#MonitoredTrainingSession}
+### `tf.train.MonitoredTrainingSession(master='', is_chief=True, checkpoint_dir=None, scaffold=None, hooks=None, chief_only_hooks=None, save_checkpoint_secs=600, save_summaries_steps=100, save_summaries_secs=None, config=None, stop_grace_period_secs=120)` {#MonitoredTrainingSession}
Creates a `MonitoredSession` for training.
@@ -35,6 +35,8 @@ inialize/restore.
isn't used.
* <b>`config`</b>: an instance of `tf.ConfigProto` proto used to configure the session.
It's the `config` argument of constructor of `tf.Session`.
+* <b>`stop_grace_period_secs`</b>: Number of seconds given to threads to stop after
+ `close()` has been called.
##### Returns:
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.DType.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.DType.md
index 1bd817c962..7035798b17 100644
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.DType.md
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.DType.md
@@ -28,198 +28,179 @@ defined for reference-typed tensors.
The `tf.as_dtype()` function converts numpy types and string type
names to a `DType` object.
-
- - -
-#### `tf.DType.is_compatible_with(other)` {#DType.is_compatible_with}
-
-Returns True if the `other` DType will be converted to this DType.
-
-The conversion rules are as follows:
+#### `tf.DType.__eq__(other)` {#DType.__eq__}
-```python
-DType(T) .is_compatible_with(DType(T)) == True
-DType(T) .is_compatible_with(DType(T).as_ref) == True
-DType(T).as_ref.is_compatible_with(DType(T)) == False
-DType(T).as_ref.is_compatible_with(DType(T).as_ref) == True
-```
+Returns True iff this DType refers to the same type as `other`.
-##### Args:
+- - -
-* <b>`other`</b>: A `DType` (or object that may be converted to a `DType`).
+#### `tf.DType.__hash__()` {#DType.__hash__}
-##### Returns:
- True if a Tensor of the `other` `DType` will be implicitly converted to
- this `DType`.
- - -
-#### `tf.DType.name` {#DType.name}
+#### `tf.DType.__init__(type_enum)` {#DType.__init__}
-Returns the string name for this `DType`.
+Creates a new `DataType`.
+NOTE(mrry): In normal circumstances, you should not need to
+construct a `DataType` object directly. Instead, use the
+`tf.as_dtype()` function.
-- - -
+##### Args:
-#### `tf.DType.base_dtype` {#DType.base_dtype}
-
-Returns a non-reference `DType` based on this `DType`.
+* <b>`type_enum`</b>: A `types_pb2.DataType` enum value.
-- - -
+##### Raises:
-#### `tf.DType.real_dtype` {#DType.real_dtype}
-Returns the dtype correspond to this dtype's real part.
+* <b>`TypeError`</b>: If `type_enum` is not a valid `types_pb2.DataType`.
- - -
-#### `tf.DType.is_bool` {#DType.is_bool}
+#### `tf.DType.__ne__(other)` {#DType.__ne__}
-Returns whether this is a boolean data type
+Returns True iff self != other.
- - -
-#### `tf.DType.is_floating` {#DType.is_floating}
-
-Returns whether this is a (non-quantized, real) floating point type.
-
-
-- - -
+#### `tf.DType.__repr__()` {#DType.__repr__}
-#### `tf.DType.is_complex` {#DType.is_complex}
-Returns whether this is a complex floating point type.
- - -
-#### `tf.DType.is_integer` {#DType.is_integer}
+#### `tf.DType.__str__()` {#DType.__str__}
+
-Returns whether this is a (non-quantized) integer type.
- - -
-#### `tf.DType.is_quantized` {#DType.is_quantized}
+#### `tf.DType.as_datatype_enum` {#DType.as_datatype_enum}
-Returns whether this is a quantized data type.
+Returns a `types_pb2.DataType` enum value based on this `DType`.
- - -
-#### `tf.DType.is_unsigned` {#DType.is_unsigned}
+#### `tf.DType.as_numpy_dtype` {#DType.as_numpy_dtype}
-Returns whether this type is unsigned.
+Returns a `numpy.dtype` based on this `DType`.
-Non-numeric, unordered, and quantized types are not considered unsigned, and
-this function returns `False`.
-##### Returns:
+- - -
- Whether a `DType` is unsigned.
+#### `tf.DType.base_dtype` {#DType.base_dtype}
+Returns a non-reference `DType` based on this `DType`.
- - -
-#### `tf.DType.as_numpy_dtype` {#DType.as_numpy_dtype}
+#### `tf.DType.is_bool` {#DType.is_bool}
-Returns a `numpy.dtype` based on this `DType`.
+Returns whether this is a boolean data type.
- - -
-#### `tf.DType.as_datatype_enum` {#DType.as_datatype_enum}
-
-Returns a `types_pb2.DataType` enum value based on this `DType`.
+#### `tf.DType.is_compatible_with(other)` {#DType.is_compatible_with}
+Returns True if the `other` DType will be converted to this DType.
+The conversion rules are as follows:
-- - -
+```python
+DType(T) .is_compatible_with(DType(T)) == True
+DType(T) .is_compatible_with(DType(T).as_ref) == True
+DType(T).as_ref.is_compatible_with(DType(T)) == False
+DType(T).as_ref.is_compatible_with(DType(T).as_ref) == True
+```
-#### `tf.DType.limits` {#DType.limits}
+##### Args:
-Return intensity limits, i.e. (min, max) tuple, of the dtype.
-##### Args:
+* <b>`other`</b>: A `DType` (or object that may be converted to a `DType`).
- clip_negative : bool, optional
- If True, clip the negative range (i.e. return 0 for min intensity)
- even if the image dtype allows negative values.
-Returns
- min, max : tuple
- Lower and upper intensity limits.
+##### Returns:
+ True if a Tensor of the `other` `DType` will be implicitly converted to
+ this `DType`.
-#### Other Methods
- - -
-#### `tf.DType.__eq__(other)` {#DType.__eq__}
+#### `tf.DType.is_complex` {#DType.is_complex}
-Returns True iff this DType refers to the same type as `other`.
+Returns whether this is a complex floating point type.
- - -
-#### `tf.DType.__hash__()` {#DType.__hash__}
-
+#### `tf.DType.is_floating` {#DType.is_floating}
+Returns whether this is a (non-quantized, real) floating point type.
- - -
-#### `tf.DType.__init__(type_enum)` {#DType.__init__}
-
-Creates a new `DataType`.
-
-NOTE(mrry): In normal circumstances, you should not need to
-construct a `DataType` object directly. Instead, use the
-`tf.as_dtype()` function.
+#### `tf.DType.is_integer` {#DType.is_integer}
-##### Args:
+Returns whether this is a (non-quantized) integer type.
-* <b>`type_enum`</b>: A `types_pb2.DataType` enum value.
+- - -
-##### Raises:
+#### `tf.DType.is_numpy_compatible` {#DType.is_numpy_compatible}
-* <b>`TypeError`</b>: If `type_enum` is not a value `types_pb2.DataType`.
- - -
-#### `tf.DType.__ne__(other)` {#DType.__ne__}
+#### `tf.DType.is_quantized` {#DType.is_quantized}
-Returns True iff self != other.
+Returns whether this is a quantized data type.
- - -
-#### `tf.DType.__repr__()` {#DType.__repr__}
-
-
-
+#### `tf.DType.is_unsigned` {#DType.is_unsigned}
-- - -
+Returns whether this type is unsigned.
-#### `tf.DType.__str__()` {#DType.__str__}
+Non-numeric, unordered, and quantized types are not considered unsigned, and
+this function returns `False`.
+##### Returns:
+ Whether a `DType` is unsigned.
- - -
-#### `tf.DType.is_numpy_compatible` {#DType.is_numpy_compatible}
+#### `tf.DType.limits` {#DType.limits}
+Return intensity limits, i.e. (min, max) tuple, of the dtype.
+##### Args:
+
+
+* <b>`clip_negative`</b>: bool, optional. If True, clip the negative range (i.e.
+    return 0 for min intensity) even if the image dtype allows negative values.
+
+##### Returns:
+
+  A `(min, max)` tuple of the lower and upper intensity limits.
- - -
@@ -248,6 +229,20 @@ Returns the minimum representable value in this data type.
- - -
+#### `tf.DType.name` {#DType.name}
+
+Returns the string name for this `DType`.
+
+
+- - -
+
+#### `tf.DType.real_dtype` {#DType.real_dtype}
+
+Returns the dtype corresponding to this dtype's real part.
+
+
+- - -
+
#### `tf.DType.size` {#DType.size}
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.FIFOQueue.from_list.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.FIFOQueue.from_list.md
new file mode 100644
index 0000000000..f27017af74
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.FIFOQueue.from_list.md
@@ -0,0 +1,21 @@
+#### `tf.FIFOQueue.from_list(index, queues)` {#FIFOQueue.from_list}
+
+Create a queue using the queue reference from `queues[index]`.
+
+##### Args:
+
+
+* <b>`index`</b>: An integer scalar tensor that determines the input that gets
+ selected.
+* <b>`queues`</b>: A list of `QueueBase` objects.
+
+##### Returns:
+
+ A `QueueBase` object.
+
+##### Raises:
+
+
+* <b>`TypeError`</b>: When `queues` is not a list of `QueueBase` objects,
+ or when the data types of `queues` are not all the same.
+
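+
+A minimal sketch of selecting between two queues at run time; the queue
+contents and the `which` placeholder are illustrative:
+
+```python
+import tensorflow as tf
+
+q0 = tf.FIFOQueue(10, [tf.int32], shapes=[[]])
+q1 = tf.FIFOQueue(10, [tf.int32], shapes=[[]])
+which = tf.placeholder(tf.int32, shape=[])
+q = tf.FIFOQueue.from_list(which, [q0, q1])
+
+with tf.Session() as sess:
+    sess.run([q0.enqueue(1), q1.enqueue(2)])
+    print(sess.run(q.dequeue(), feed_dict={which: 1}))   # 2, taken from q1
+```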
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.InteractiveSession.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.InteractiveSession.md
index 308a0a80b4..5c0c5892bd 100644
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.InteractiveSession.md
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.InteractiveSession.md
@@ -34,6 +34,12 @@ with tf.Session():
# We can also use 'c.eval()' here.
print(c.eval())
```
+- - -
+
+#### `tf.InteractiveSession.__del__()` {#InteractiveSession.__del__}
+
+
+
- - -
@@ -60,8 +66,281 @@ the session constructor.
- - -
+#### `tf.InteractiveSession.as_default()` {#InteractiveSession.as_default}
+
+Returns a context manager that makes this object the default session.
+
+Use with the `with` keyword to specify that calls to
+[`Operation.run()`](../../api_docs/python/framework.md#Operation.run) or
+[`Tensor.eval()`](../../api_docs/python/framework.md#Tensor.eval) should be
+executed in this session.
+
+```python
+c = tf.constant(..)
+sess = tf.Session()
+
+with sess.as_default():
+ assert tf.get_default_session() is sess
+ print(c.eval())
+```
+
+To get the current default session, use
+[`tf.get_default_session()`](#get_default_session).
+
+
+*N.B.* The `as_default` context manager *does not* close the
+session when you exit the context, and you must close the session
+explicitly.
+
+```python
+c = tf.constant(...)
+sess = tf.Session()
+with sess.as_default():
+ print(c.eval())
+# ...
+with sess.as_default():
+ print(c.eval())
+
+sess.close()
+```
+
+Alternatively, you can use `with tf.Session():` to create a
+session that is automatically closed on exiting the context,
+including when an uncaught exception is raised.
+
+*N.B.* The default session is a property of the current thread. If you
+create a new thread, and wish to use the default session in that
+thread, you must explicitly add a `with sess.as_default():` in that
+thread's function.
+
+##### Returns:
+
+ A context manager using this session as the default session.
+
+
+- - -
+
#### `tf.InteractiveSession.close()` {#InteractiveSession.close}
Closes an `InteractiveSession`.
+- - -
+
+#### `tf.InteractiveSession.graph` {#InteractiveSession.graph}
+
+The graph that was launched in this session.
+
+
+- - -
+
+#### `tf.InteractiveSession.graph_def` {#InteractiveSession.graph_def}
+
+A serializable version of the underlying TensorFlow graph.
+
+##### Returns:
+
+ A graph_pb2.GraphDef proto containing nodes for all of the Operations in
+ the underlying TensorFlow graph.
+
+
+- - -
+
+#### `tf.InteractiveSession.partial_run(handle, fetches, feed_dict=None)` {#InteractiveSession.partial_run}
+
+Continues the execution with more feeds and fetches.
+
+This is EXPERIMENTAL and subject to change.
+
+To use partial execution, a user first calls `partial_run_setup()` and
+then a sequence of `partial_run()`. `partial_run_setup` specifies the
+list of feeds and fetches that will be used in the subsequent
+`partial_run` calls.
+
+The optional `feed_dict` argument allows the caller to override
+the value of tensors in the graph. See run() for more information.
+
+Below is a simple example:
+
+```python
+a = array_ops.placeholder(dtypes.float32, shape=[])
+b = array_ops.placeholder(dtypes.float32, shape=[])
+c = array_ops.placeholder(dtypes.float32, shape=[])
+r1 = math_ops.add(a, b)
+r2 = math_ops.multiply(r1, c)
+
+h = sess.partial_run_setup([r1, r2], [a, b, c])
+res = sess.partial_run(h, r1, feed_dict={a: 1, b: 2})
+res = sess.partial_run(h, r2, feed_dict={c: res})
+```
+
+##### Args:
+
+
+* <b>`handle`</b>: A handle for a sequence of partial runs.
+* <b>`fetches`</b>: A single graph element, a list of graph elements,
+ or a dictionary whose values are graph elements or lists of graph
+ elements (see documentation for `run`).
+* <b>`feed_dict`</b>: A dictionary that maps graph elements to values
+ (described above).
+
+##### Returns:
+
+ Either a single value if `fetches` is a single graph element, or
+ a list of values if `fetches` is a list, or a dictionary with the
+ same keys as `fetches` if that is a dictionary
+ (see documentation for `run`).
+
+##### Raises:
+
+ tf.errors.OpError: Or one of its subclasses on error.
+
+
+- - -
+
+#### `tf.InteractiveSession.partial_run_setup(fetches, feeds=None)` {#InteractiveSession.partial_run_setup}
+
+Sets up a graph with feeds and fetches for partial run.
+
+This is EXPERIMENTAL and subject to change.
+
+Note that, in contrast to `run`, `feeds` only specifies the graph elements;
+their values will be supplied by the subsequent `partial_run` calls.
+
+##### Args:
+
+
+* <b>`fetches`</b>: A single graph element, or a list of graph elements.
+* <b>`feeds`</b>: A single graph element, or a list of graph elements.
+
+##### Returns:
+
+ A handle for partial run.
+
+##### Raises:
+
+
+* <b>`RuntimeError`</b>: If this `Session` is in an invalid state (e.g. has been
+ closed).
+* <b>`TypeError`</b>: If `fetches` or `feeds` are of an inappropriate type.
+ tf.errors.OpError: Or one of its subclasses if a TensorFlow error happens.
+
+
+- - -
+
+#### `tf.InteractiveSession.run(fetches, feed_dict=None, options=None, run_metadata=None)` {#InteractiveSession.run}
+
+Runs operations and evaluates tensors in `fetches`.
+
+This method runs one "step" of TensorFlow computation, by
+running the necessary graph fragment to execute every `Operation`
+and evaluate every `Tensor` in `fetches`, substituting the values in
+`feed_dict` for the corresponding input values.
+
+The `fetches` argument may be a single graph element, or an arbitrarily
+nested list, tuple, namedtuple, dict, or OrderedDict containing graph
+elements at its leaves. A graph element can be one of the following types:
+
+* An [`Operation`](../../api_docs/python/framework.md#Operation).
+ The corresponding fetched value will be `None`.
+* A [`Tensor`](../../api_docs/python/framework.md#Tensor).
+ The corresponding fetched value will be a numpy ndarray containing the
+ value of that tensor.
+* A [`SparseTensor`](../../api_docs/python/sparse_ops.md#SparseTensor).
+ The corresponding fetched value will be a
+ [`SparseTensorValue`](../../api_docs/python/sparse_ops.md#SparseTensorValue)
+ containing the value of that sparse tensor.
+* A `get_tensor_handle` op. The corresponding fetched value will be a
+ numpy ndarray containing the handle of that tensor.
+* A `string` which is the name of a tensor or operation in the graph.
+
+The value returned by `run()` has the same shape as the `fetches` argument,
+where the leaves are replaced by the corresponding values returned by
+TensorFlow.
+
+Example:
+
+```python
+ a = tf.constant([10, 20])
+ b = tf.constant([1.0, 2.0])
+ # 'fetches' can be a singleton
+ v = session.run(a)
+ # v is the numpy array [10, 20]
+ # 'fetches' can be a list.
+ v = session.run([a, b])
+  # v is a Python list with 2 numpy arrays: the numpy array [10, 20] and the
+ # 1-D array [1.0, 2.0]
+ # 'fetches' can be arbitrary lists, tuples, namedtuple, dicts:
+ MyData = collections.namedtuple('MyData', ['a', 'b'])
+ v = session.run({'k1': MyData(a, b), 'k2': [b, a]})
+ # v is a dict with
+ # v['k1'] is a MyData namedtuple with 'a' the numpy array [10, 20] and
+ # 'b' the numpy array [1.0, 2.0]
+ # v['k2'] is a list with the numpy array [1.0, 2.0] and the numpy array
+ # [10, 20].
+```
+
+The optional `feed_dict` argument allows the caller to override
+the value of tensors in the graph. Each key in `feed_dict` can be
+one of the following types:
+
+* If the key is a [`Tensor`](../../api_docs/python/framework.md#Tensor), the
+ value may be a Python scalar, string, list, or numpy ndarray
+ that can be converted to the same `dtype` as that
+ tensor. Additionally, if the key is a
+ [placeholder](../../api_docs/python/io_ops.md#placeholder), the shape of
+ the value will be checked for compatibility with the placeholder.
+* If the key is a
+ [`SparseTensor`](../../api_docs/python/sparse_ops.md#SparseTensor),
+ the value should be a
+ [`SparseTensorValue`](../../api_docs/python/sparse_ops.md#SparseTensorValue).
+* If the key is a nested tuple of `Tensor`s or `SparseTensor`s, the value
+ should be a nested tuple with the same structure that maps to their
+ corresponding values as above.
+
+Each value in `feed_dict` must be convertible to a numpy array of the dtype
+of the corresponding key.
+
+The optional `options` argument expects a [`RunOptions`] proto. The options
+allow controlling the behavior of this particular step (e.g. turning tracing
+on).
+
+The optional `run_metadata` argument expects a [`RunMetadata`] proto. When
+appropriate, the non-Tensor output of this step will be collected there. For
+example, when users turn on tracing in `options`, the profiled info will be
+collected into this argument and passed back.
+
+##### Args:
+
+
+* <b>`fetches`</b>: A single graph element, a list of graph elements,
+ or a dictionary whose values are graph elements or lists of graph
+ elements (described above).
+* <b>`feed_dict`</b>: A dictionary that maps graph elements to values
+ (described above).
+* <b>`options`</b>: A [`RunOptions`] protocol buffer
+* <b>`run_metadata`</b>: A [`RunMetadata`] protocol buffer
+
+##### Returns:
+
+ Either a single value if `fetches` is a single graph element, or
+ a list of values if `fetches` is a list, or a dictionary with the
+ same keys as `fetches` if that is a dictionary (described above).
+
+##### Raises:
+
+
+* <b>`RuntimeError`</b>: If this `Session` is in an invalid state (e.g. has been
+ closed).
+* <b>`TypeError`</b>: If `fetches` or `feed_dict` keys are of an inappropriate type.
+* <b>`ValueError`</b>: If `fetches` or `feed_dict` keys are invalid or refer to a
+ `Tensor` that doesn't exist.
+
+
+- - -
+
+#### `tf.InteractiveSession.sess_str` {#InteractiveSession.sess_str}
+
+
+
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.PriorityQueue.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.PriorityQueue.md
index 527a306c95..6154301c4e 100644
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.PriorityQueue.md
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.PriorityQueue.md
@@ -2,7 +2,6 @@ A queue implementation that dequeues elements in prioritized order.
See [`tf.QueueBase`](#QueueBase) for a description of the methods on
this class.
-
- - -
#### `tf.PriorityQueue.__init__(capacity, types, shapes=None, names=None, shared_name=None, name='priority_queue')` {#PriorityQueue.__init__}
@@ -45,3 +44,262 @@ an int64 scalar (for `enqueue`) or an int64 vector (for `enqueue_many`).
* <b>`name`</b>: Optional name for the queue operation.
+- - -
+
+#### `tf.PriorityQueue.close(cancel_pending_enqueues=False, name=None)` {#PriorityQueue.close}
+
+Closes this queue.
+
+This operation signals that no more elements will be enqueued in
+the given queue. Subsequent `enqueue` and `enqueue_many`
+operations will fail. Subsequent `dequeue` and `dequeue_many`
+operations will continue to succeed if sufficient elements remain
+in the queue. Subsequent `dequeue` and `dequeue_many` operations
+that would block will fail immediately.
+
+If `cancel_pending_enqueues` is `True`, all pending requests will also
+be cancelled.
+
+##### Args:
+
+
+* <b>`cancel_pending_enqueues`</b>: (Optional.) A boolean, defaulting to
+ `False` (described above).
+* <b>`name`</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The operation that closes the queue.
+
+
+- - -
+
+#### `tf.PriorityQueue.dequeue(name=None)` {#PriorityQueue.dequeue}
+
+Dequeues one element from this queue.
+
+If the queue is empty when this operation executes, it will block
+until there is an element to dequeue.
+
+At runtime, this operation may raise an error if the queue is
+[closed](#QueueBase.close) before or during its execution. If the
+queue is closed, the queue is empty, and there are no pending
+enqueue operations that can fulfill this request,
+`tf.errors.OutOfRangeError` will be raised. If the session is
+[closed](../../api_docs/python/client.md#Session.close),
+`tf.errors.CancelledError` will be raised.
+
+##### Args:
+
+
+* <b>`name`</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The tuple of tensors that was dequeued.
+
+
+- - -
+
+#### `tf.PriorityQueue.dequeue_many(n, name=None)` {#PriorityQueue.dequeue_many}
+
+Dequeues and concatenates `n` elements from this queue.
+
+This operation concatenates queue-element component tensors along
+the 0th dimension to make a single component tensor. All of the
+components in the dequeued tuple will have size `n` in the 0th dimension.
+
+If the queue is closed and fewer than `n` elements are left, then an
+`OutOfRange` exception is raised.
+
+At runtime, this operation may raise an error if the queue is
+[closed](#QueueBase.close) before or during its execution. If the
+queue is closed, the queue contains fewer than `n` elements, and
+there are no pending enqueue operations that can fulfill this
+request, `tf.errors.OutOfRangeError` will be raised. If the
+session is [closed](../../api_docs/python/client.md#Session.close),
+`tf.errors.CancelledError` will be raised.
+
+##### Args:
+
+
+* <b>`n`</b>: A scalar `Tensor` containing the number of elements to dequeue.
+* <b>`name`</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The tuple of concatenated tensors that was dequeued.
+
+
+- - -
+
+#### `tf.PriorityQueue.dequeue_up_to(n, name=None)` {#PriorityQueue.dequeue_up_to}
+
+Dequeues and concatenates `n` elements from this queue.
+
+**Note** This operation is not supported by all queues. If a queue does not
+support DequeueUpTo, then a `tf.errors.UnimplementedError` is raised.
+
+This operation concatenates queue-element component tensors along
+the 0th dimension to make a single component tensor. If the queue
+has not been closed, all of the components in the dequeued tuple
+will have size `n` in the 0th dimension.
+
+If the queue is closed and there are more than `0` but fewer than
+`n` elements remaining, then instead of raising a
+`tf.errors.OutOfRangeError` like [`dequeue_many`](#QueueBase.dequeue_many),
+fewer than `n` elements are returned immediately. If the queue is
+closed and there are `0` elements left in the queue, then a
+`tf.errors.OutOfRangeError` is raised just like in `dequeue_many`.
+Otherwise the behavior is identical to `dequeue_many`.
+
+##### Args:
+
+
+* <b>`n`</b>: A scalar `Tensor` containing the number of elements to dequeue.
+* <b>`name`</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The tuple of concatenated tensors that was dequeued.
+
+
+- - -
+
+#### `tf.PriorityQueue.dtypes` {#PriorityQueue.dtypes}
+
+The list of dtypes for each component of a queue element.
+
+
+- - -
+
+#### `tf.PriorityQueue.enqueue(vals, name=None)` {#PriorityQueue.enqueue}
+
+Enqueues one element to this queue.
+
+If the queue is full when this operation executes, it will block
+until the element has been enqueued.
+
+At runtime, this operation may raise an error if the queue is
+[closed](#QueueBase.close) before or during its execution. If the
+queue is closed before this operation runs,
+`tf.errors.CancelledError` will be raised. If this operation is
+blocked, and either (i) the queue is closed by a close operation
+with `cancel_pending_enqueues=True`, or (ii) the session is
+[closed](../../api_docs/python/client.md#Session.close),
+`tf.errors.CancelledError` will be raised.
+
+##### Args:
+
+
+* <b>`vals`</b>: A tensor, a list or tuple of tensors, or a dictionary containing
+ the values to enqueue.
+* <b>`name`</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The operation that enqueues a new tuple of tensors to the queue.
+
+
+- - -
+
+#### `tf.PriorityQueue.enqueue_many(vals, name=None)` {#PriorityQueue.enqueue_many}
+
+Enqueues zero or more elements to this queue.
+
+This operation slices each component tensor along the 0th dimension to
+make multiple queue elements. All of the tensors in `vals` must have the
+same size in the 0th dimension.
+
+If the queue is full when this operation executes, it will block
+until all of the elements have been enqueued.
+
+At runtime, this operation may raise an error if the queue is
+[closed](#QueueBase.close) before or during its execution. If the
+queue is closed before this operation runs,
+`tf.errors.CancelledError` will be raised. If this operation is
+blocked, and either (i) the queue is closed by a close operation
+with `cancel_pending_enqueues=True`, or (ii) the session is
+[closed](../../api_docs/python/client.md#Session.close),
+`tf.errors.CancelledError` will be raised.
+
+##### Args:
+
+
+* <b>`vals`</b>: A tensor, a list or tuple of tensors, or a dictionary
+ from which the queue elements are taken.
+* <b>`name`</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The operation that enqueues a batch of tuples of tensors to the queue.
+
+
+- - -
+
+#### `tf.PriorityQueue.from_list(index, queues)` {#PriorityQueue.from_list}
+
+Create a queue using the queue reference from `queues[index]`.
+
+##### Args:
+
+
+* <b>`index`</b>: An integer scalar tensor that determines the input that gets
+ selected.
+* <b>`queues`</b>: A list of `QueueBase` objects.
+
+##### Returns:
+
+ A `QueueBase` object.
+
+##### Raises:
+
+
+* <b>`TypeError`</b>: When `queues` is not a list of `QueueBase` objects,
+ or when the data types of `queues` are not all the same.
+
+
+- - -
+
+#### `tf.PriorityQueue.name` {#PriorityQueue.name}
+
+The name of the underlying queue.
+
+
+- - -
+
+#### `tf.PriorityQueue.names` {#PriorityQueue.names}
+
+The list of names for each component of a queue element.
+
+
+- - -
+
+#### `tf.PriorityQueue.queue_ref` {#PriorityQueue.queue_ref}
+
+The underlying queue reference.
+
+
+- - -
+
+#### `tf.PriorityQueue.shapes` {#PriorityQueue.shapes}
+
+The list of shapes for each component of a queue element.
+
+
+- - -
+
+#### `tf.PriorityQueue.size(name=None)` {#PriorityQueue.size}
+
+Compute the number of elements in this queue.
+
+##### Args:
+
+
+* <b>`name`</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A scalar tensor containing the number of elements in this queue.
+
+
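+- - -
+
+A minimal sketch of prioritized dequeueing; the first component of each
+enqueued element is the int64 priority described above, and the values are
+illustrative:
+
+```python
+import tensorflow as tf
+
+q = tf.PriorityQueue(capacity=10, types=[tf.string], shapes=[[]])
+
+with tf.Session() as sess:
+    sess.run(q.enqueue((tf.constant(2, tf.int64), 'low')))
+    sess.run(q.enqueue((tf.constant(1, tf.int64), 'high')))
+    # Elements come out in ascending priority order:
+    print(sess.run(q.dequeue()))   # priority 1, value b'high'
+    print(sess.run(q.dequeue()))   # priority 2, value b'low'
+```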
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.SparseTensor.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.SparseTensor.md
index 747652514c..31c5d725b2 100644
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.SparseTensor.md
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.SparseTensor.md
@@ -55,89 +55,6 @@ represents the dense tensor
[0, 0, 2, 0]
[0, 0, 0, 0]]
```
-
-- - -
-
-#### `tf.SparseTensor.__init__(indices, values, dense_shape)` {#SparseTensor.__init__}
-
-Creates a `SparseTensor`.
-
-##### Args:
-
-
-* <b>`indices`</b>: A 2-D int64 tensor of shape `[N, ndims]`.
-* <b>`values`</b>: A 1-D tensor of any type and shape `[N]`.
-* <b>`dense_shape`</b>: A 1-D int64 tensor of shape `[ndims]`.
-
-##### Returns:
-
- A `SparseTensor`.
-
-
-- - -
-
-#### `tf.SparseTensor.get_shape()` {#SparseTensor.get_shape}
-
-Get the `TensorShape` representing the shape of the dense tensor.
-
-##### Returns:
-
- A `TensorShape` object.
-
-
-- - -
-
-#### `tf.SparseTensor.indices` {#SparseTensor.indices}
-
-The indices of non-zero values in the represented dense tensor.
-
-##### Returns:
-
- A 2-D Tensor of int64 with dense_shape `[N, ndims]`, where `N` is the
- number of non-zero values in the tensor, and `ndims` is the rank.
-
-
-- - -
-
-#### `tf.SparseTensor.values` {#SparseTensor.values}
-
-The non-zero values in the represented dense tensor.
-
-##### Returns:
-
- A 1-D Tensor of any data type.
-
-
-- - -
-
-#### `tf.SparseTensor.dense_shape` {#SparseTensor.dense_shape}
-
-A 1-D Tensor of int64 representing the shape of the dense tensor.
-
-
-- - -
-
-#### `tf.SparseTensor.dtype` {#SparseTensor.dtype}
-
-The `DType` of elements in this tensor.
-
-
-- - -
-
-#### `tf.SparseTensor.op` {#SparseTensor.op}
-
-The `Operation` that produces `values` as an output.
-
-
-- - -
-
-#### `tf.SparseTensor.graph` {#SparseTensor.graph}
-
-The `Graph` that contains the index, value, and dense_shape tensors.
-
-
-
-#### Other Methods
- - -
#### `tf.SparseTensor.__div__(sp_x, y)` {#SparseTensor.__div__}
@@ -169,6 +86,24 @@ the other direction.
- - -
+#### `tf.SparseTensor.__init__(indices, values, dense_shape)` {#SparseTensor.__init__}
+
+Creates a `SparseTensor`.
+
+##### Args:
+
+
+* <b>`indices`</b>: A 2-D int64 tensor of shape `[N, ndims]`.
+* <b>`values`</b>: A 1-D tensor of any type and shape `[N]`.
+* <b>`dense_shape`</b>: A 1-D int64 tensor of shape `[ndims]`.
+
+##### Returns:
+
+ A `SparseTensor`.
+
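+For example, a sketch of constructing the 3x4 sparse tensor shown at the top
+of this page and densifying it (graph-mode session assumed):
+
+```python
+import tensorflow as tf
+
+# Nonzero entries 1 at [0, 0] and 2 at [1, 2] of a 3x4 dense tensor.
+st = tf.SparseTensor(indices=[[0, 0], [1, 2]],
+                     values=[1, 2],
+                     dense_shape=[3, 4])
+
+with tf.Session() as sess:
+    print(sess.run(tf.sparse_tensor_to_dense(st)))
+```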
+
+- - -
+
#### `tf.SparseTensor.__mul__(sp_x, y)` {#SparseTensor.__mul__}
Component-wise multiplies a SparseTensor by a dense Tensor.
@@ -216,6 +151,20 @@ Internal helper function for 'sp_t / dense_t'.
- - -
+#### `tf.SparseTensor.dense_shape` {#SparseTensor.dense_shape}
+
+A 1-D Tensor of int64 representing the shape of the dense tensor.
+
+
+- - -
+
+#### `tf.SparseTensor.dtype` {#SparseTensor.dtype}
+
+The `DType` of elements in this tensor.
+
+
+- - -
+
#### `tf.SparseTensor.eval(feed_dict=None, session=None)` {#SparseTensor.eval}
Evaluates this sparse tensor in a `Session`.
@@ -249,3 +198,51 @@ available, or `session` must be specified explicitly.
+- - -
+
+#### `tf.SparseTensor.get_shape()` {#SparseTensor.get_shape}
+
+Get the `TensorShape` representing the shape of the dense tensor.
+
+##### Returns:
+
+ A `TensorShape` object.
+
+
+- - -
+
+#### `tf.SparseTensor.graph` {#SparseTensor.graph}
+
+The `Graph` that contains the index, value, and dense_shape tensors.
+
+
+- - -
+
+#### `tf.SparseTensor.indices` {#SparseTensor.indices}
+
+The indices of non-zero values in the represented dense tensor.
+
+##### Returns:
+
+ A 2-D Tensor of int64 with dense_shape `[N, ndims]`, where `N` is the
+ number of non-zero values in the tensor, and `ndims` is the rank.
+
+
+- - -
+
+#### `tf.SparseTensor.op` {#SparseTensor.op}
+
+The `Operation` that produces `values` as an output.
+
+
+- - -
+
+#### `tf.SparseTensor.values` {#SparseTensor.values}
+
+The non-zero values in the represented dense tensor.
+
+##### Returns:
+
+ A 1-D Tensor of any data type.
+
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.AdagradOptimizer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.AdagradOptimizer.md
index 4b8b08edae..75ed61cc9a 100644
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.AdagradOptimizer.md
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.AdagradOptimizer.md
@@ -3,7 +3,6 @@ Optimizer that implements the Adagrad algorithm.
See this [paper](http://www.jmlr.org/papers/volume12/duchi11a/duchi11a.pdf)
or this
[intro](http://cs.stanford.edu/~ppasupat/a9online/uploads/proximal_notes.pdf).
-
- - -
#### `tf.train.AdagradOptimizer.__init__(learning_rate, initial_accumulator_value=0.1, use_locking=False, name='Adagrad')` {#AdagradOptimizer.__init__}
@@ -26,3 +25,157 @@ Construct a new Adagrad optimizer.
* <b>`ValueError`</b>: If the `initial_accumulator_value` is invalid.
+- - -
+
+#### `tf.train.AdagradOptimizer.apply_gradients(grads_and_vars, global_step=None, name=None)` {#AdagradOptimizer.apply_gradients}
+
+Apply gradients to variables.
+
+This is the second part of `minimize()`. It returns an `Operation` that
+applies gradients.
+
+##### Args:
+
+
+* <b>`grads_and_vars`</b>: List of (gradient, variable) pairs as returned by
+ `compute_gradients()`.
+* <b>`global_step`</b>: Optional `Variable` to increment by one after the
+ variables have been updated.
+* <b>`name`</b>: Optional name for the returned operation. Defaults to the
+ name passed to the `Optimizer` constructor.
+
+##### Returns:
+
+ An `Operation` that applies the specified gradients. If `global_step`
+ was not None, that operation also increments `global_step`.
+
+##### Raises:
+
+
+* <b>`TypeError`</b>: If `grads_and_vars` is malformed.
+* <b>`ValueError`</b>: If none of the variables have gradients.
+
+
+- - -
+
+#### `tf.train.AdagradOptimizer.compute_gradients(loss, var_list=None, gate_gradients=1, aggregation_method=None, colocate_gradients_with_ops=False, grad_loss=None)` {#AdagradOptimizer.compute_gradients}
+
+Compute gradients of `loss` for the variables in `var_list`.
+
+This is the first part of `minimize()`. It returns a list
+of (gradient, variable) pairs where "gradient" is the gradient
+for "variable". Note that "gradient" can be a `Tensor`, an
+`IndexedSlices`, or `None` if there is no gradient for the
+given variable.
+
+##### Args:
+
+
+* <b>`loss`</b>: A Tensor containing the value to minimize.
+* <b>`var_list`</b>: Optional list of `tf.Variable` to update to minimize
+ `loss`. Defaults to the list of variables collected in the graph
+  under the key `GraphKeys.TRAINABLE_VARIABLES`.
+* <b>`gate_gradients`</b>: How to gate the computation of gradients. Can be
+ `GATE_NONE`, `GATE_OP`, or `GATE_GRAPH`.
+* <b>`aggregation_method`</b>: Specifies the method used to combine gradient terms.
+ Valid values are defined in the class `AggregationMethod`.
+* <b>`colocate_gradients_with_ops`</b>: If True, try colocating gradients with
+ the corresponding op.
+* <b>`grad_loss`</b>: Optional. A `Tensor` holding the gradient computed for `loss`.
+
+##### Returns:
+
+ A list of (gradient, variable) pairs. Variable is always present, but
+ gradient can be `None`.
+
+##### Raises:
+
+
+* <b>`TypeError`</b>: If `var_list` contains anything other than `Variable` objects.
+* <b>`ValueError`</b>: If some arguments are invalid.
+
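+A minimal sketch of the two-step pattern these methods enable (gradient
+clipping between the two calls), assuming a scalar `loss` tensor and a
+`global_step` variable are already defined; the clipping threshold is
+arbitrary:
+
+```python
+opt = tf.train.AdagradOptimizer(learning_rate=0.1)
+grads_and_vars = opt.compute_gradients(loss)
+
+# Process the gradients before applying them, e.g. clip each by norm.
+clipped = [(tf.clip_by_norm(g, 5.0), v)
+           for g, v in grads_and_vars if g is not None]
+train_op = opt.apply_gradients(clipped, global_step=global_step)
+```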
+
+- - -
+
+#### `tf.train.AdagradOptimizer.get_name()` {#AdagradOptimizer.get_name}
+
+
+
+
+- - -
+
+#### `tf.train.AdagradOptimizer.get_slot(var, name)` {#AdagradOptimizer.get_slot}
+
+Return a slot named `name` created for `var` by the Optimizer.
+
+Some `Optimizer` subclasses use additional variables. For example
+`Momentum` and `Adagrad` use variables to accumulate updates. This method
+gives access to these `Variable` objects if for some reason you need them.
+
+Use `get_slot_names()` to get the list of slot names created by the
+`Optimizer`.
+
+##### Args:
+
+
+* <b>`var`</b>: A variable passed to `minimize()` or `apply_gradients()`.
+* <b>`name`</b>: A string.
+
+##### Returns:
+
+ The `Variable` for the slot if it was created, `None` otherwise.
+
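+As an illustration (a sketch; the slot name `"accumulator"` is assumed here,
+so check `get_slot_names()` rather than hard-coding it):
+
+```python
+# `opt` is an AdagradOptimizer and `var` a Variable passed to minimize().
+print(opt.get_slot_names())             # e.g. ['accumulator']
+accum = opt.get_slot(var, "accumulator")
+if accum is not None:
+    print(accum)                        # the per-variable accumulator Variable
+```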
+
+- - -
+
+#### `tf.train.AdagradOptimizer.get_slot_names()` {#AdagradOptimizer.get_slot_names}
+
+Return a list of the names of slots created by the `Optimizer`.
+
+See `get_slot()`.
+
+##### Returns:
+
+ A list of strings.
+
+
+- - -
+
+#### `tf.train.AdagradOptimizer.minimize(loss, global_step=None, var_list=None, gate_gradients=1, aggregation_method=None, colocate_gradients_with_ops=False, name=None, grad_loss=None)` {#AdagradOptimizer.minimize}
+
+Add operations to minimize `loss` by updating `var_list`.
+
+This method simply combines calls to `compute_gradients()` and
+`apply_gradients()`. If you want to process the gradients before applying
+them, call `compute_gradients()` and `apply_gradients()` explicitly instead
+of using this function.
+
+##### Args:
+
+
+* <b>`loss`</b>: A `Tensor` containing the value to minimize.
+* <b>`global_step`</b>: Optional `Variable` to increment by one after the
+ variables have been updated.
+* <b>`var_list`</b>: Optional list of `Variable` objects to update to minimize
+ `loss`. Defaults to the list of variables collected in the graph
+ under the key `GraphKeys.TRAINABLE_VARIABLES`.
+* <b>`gate_gradients`</b>: How to gate the computation of gradients. Can be
+ `GATE_NONE`, `GATE_OP`, or `GATE_GRAPH`.
+* <b>`aggregation_method`</b>: Specifies the method used to combine gradient terms.
+ Valid values are defined in the class `AggregationMethod`.
+* <b>`colocate_gradients_with_ops`</b>: If True, try colocating gradients with
+ the corresponding op.
+* <b>`name`</b>: Optional name for the returned operation.
+* <b>`grad_loss`</b>: Optional. A `Tensor` holding the gradient computed for `loss`.
+
+##### Returns:
+
+ An Operation that updates the variables in `var_list`. If `global_step`
+ was not `None`, that operation also increments `global_step`.
+
+##### Raises:
+
+
+* <b>`ValueError`</b>: If some of the variables are not `Variable` objects.
+
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.train.SingularMonitoredSession.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.train.SingularMonitoredSession.md
index d858e813e3..09271a91a1 100644
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.train.SingularMonitoredSession.md
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.train.SingularMonitoredSession.md
@@ -62,7 +62,7 @@ Exit: At the `close()`, the hooked session does following things in order:
- - -
-#### `tf.train.SingularMonitoredSession.__init__(hooks=None, scaffold=None, master='', config=None, checkpoint_dir=None)` {#SingularMonitoredSession.__init__}
+#### `tf.train.SingularMonitoredSession.__init__(hooks=None, scaffold=None, master='', config=None, checkpoint_dir=None, stop_grace_period_secs=120)` {#SingularMonitoredSession.__init__}
Creates a SingularMonitoredSession.
@@ -76,6 +76,8 @@ Creates a SingularMonitoredSession.
* <b>`config`</b>: `ConfigProto` proto used to configure the session.
* <b>`checkpoint_dir`</b>: A string. Optional path to a directory where to restore
variables.
+* <b>`stop_grace_period_secs`</b>: Number of seconds given to threads to stop after
+ `close()` has been called.
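+
+For example, a sketch of passing the new argument (the checkpoint directory
+and `train_op` below are placeholders):
+
+```python
+with tf.train.SingularMonitoredSession(
+    checkpoint_dir="/tmp/train",
+    stop_grace_period_secs=30) as sess:
+  while not sess.should_stop():
+    sess.run(train_op)
+```
+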
- - -
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.OpError.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.OpError.md
index 0323a56f66..650139bf1e 100644
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.OpError.md
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.OpError.md
@@ -2,34 +2,6 @@ A generic error that is raised when TensorFlow execution fails.
Whenever possible, the session will raise a more specific subclass
of `OpError` from the `tf.errors` module.
-
-- - -
-
-#### `tf.OpError.op` {#OpError.op}
-
-The operation that failed, if known.
-
-*N.B.* If the failed op was synthesized at runtime, e.g. a `Send`
-or `Recv` op, there will be no corresponding
-[`Operation`](../../api_docs/python/framework.md#Operation)
-object. In that case, this will return `None`, and you should
-instead use the [`OpError.node_def`](#OpError.node_def) to
-discover information about the op.
-
-##### Returns:
-
- The `Operation` that failed, or None.
-
-
-- - -
-
-#### `tf.OpError.node_def` {#OpError.node_def}
-
-The `NodeDef` proto representing the op that failed.
-
-
-
-#### Other Methods
- - -
#### `tf.OpError.__init__(node_def, op, message, error_code)` {#OpError.__init__}
@@ -67,3 +39,28 @@ The integer error code that describes the error.
The error message that describes the error.
+- - -
+
+#### `tf.OpError.node_def` {#OpError.node_def}
+
+The `NodeDef` proto representing the op that failed.
+
+
+- - -
+
+#### `tf.OpError.op` {#OpError.op}
+
+The operation that failed, if known.
+
+*N.B.* If the failed op was synthesized at runtime, e.g. a `Send`
+or `Recv` op, there will be no corresponding
+[`Operation`](../../api_docs/python/framework.md#Operation)
+object. In that case, this will return `None`, and you should
+instead use the [`OpError.node_def`](#OpError.node_def) to
+discover information about the op.
+
+##### Returns:
+
+ The `Operation` that failed, or None.
+
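+A sketch of inspecting these properties when an op fails at run time (the
+`dequeue_op` here is an illustrative placeholder for any op that can fail,
+e.g. dequeueing from a closed, empty queue):
+
+```python
+try:
+    sess.run(dequeue_op)
+except tf.errors.OpError as e:
+    if e.op is not None:
+        print("failed op:", e.op.name)
+    else:
+        # Synthesized ops have no Operation object; fall back to the NodeDef.
+        print("failed node:", e.node_def.name)
+    print(e.message)
+```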
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.RandomShuffleQueue.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.RandomShuffleQueue.md
index cd617e7578..04cf93cec1 100644
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.RandomShuffleQueue.md
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.RandomShuffleQueue.md
@@ -2,7 +2,6 @@ A queue implementation that dequeues elements in a random order.
See [`tf.QueueBase`](#QueueBase) for a description of the methods on
this class.
-
- - -
#### `tf.RandomShuffleQueue.__init__(capacity, min_after_dequeue, dtypes, shapes=None, names=None, seed=None, shared_name=None, name='random_shuffle_queue')` {#RandomShuffleQueue.__init__}
@@ -52,3 +51,262 @@ queue has been closed.
* <b>`name`</b>: Optional name for the queue operation.
+- - -
+
+#### `tf.RandomShuffleQueue.close(cancel_pending_enqueues=False, name=None)` {#RandomShuffleQueue.close}
+
+Closes this queue.
+
+This operation signals that no more elements will be enqueued in
+the given queue. Subsequent `enqueue` and `enqueue_many`
+operations will fail. Subsequent `dequeue` and `dequeue_many`
+operations will continue to succeed if sufficient elements remain
+in the queue. Subsequent `dequeue` and `dequeue_many` operations
+that would block will fail immediately.
+
+If `cancel_pending_enqueues` is `True`, all pending requests will also
+be cancelled.
+
+##### Args:
+
+
+* <b>`cancel_pending_enqueues`</b>: (Optional.) A boolean, defaulting to
+ `False` (described above).
+* <b>`name`</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The operation that closes the queue.
+
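+A sketch of a typical producer-side shutdown (graph-mode session assumed;
+the sizes and dtypes are arbitrary):
+
+```python
+q = tf.RandomShuffleQueue(capacity=100, min_after_dequeue=10,
+                          dtypes=[tf.float32], shapes=[[]])
+enqueue_op = q.enqueue_many(tf.random_uniform([50]))
+close_op = q.close()
+dequeue_op = q.dequeue()
+
+with tf.Session() as sess:
+    sess.run(enqueue_op)
+    sess.run(close_op)
+    # Dequeues keep succeeding until the queue drains, then raise
+    # tf.errors.OutOfRangeError.
+    print(sess.run(dequeue_op))
+```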
+
+- - -
+
+#### `tf.RandomShuffleQueue.dequeue(name=None)` {#RandomShuffleQueue.dequeue}
+
+Dequeues one element from this queue.
+
+If the queue is empty when this operation executes, it will block
+until there is an element to dequeue.
+
+At runtime, this operation may raise an error if the queue is
+[closed](#QueueBase.close) before or during its execution. If the
+queue is closed, the queue is empty, and there are no pending
+enqueue operations that can fulfill this request,
+`tf.errors.OutOfRangeError` will be raised. If the session is
+[closed](../../api_docs/python/client.md#Session.close),
+`tf.errors.CancelledError` will be raised.
+
+##### Args:
+
+
+* <b>`name`</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The tuple of tensors that was dequeued.
+
+
+- - -
+
+#### `tf.RandomShuffleQueue.dequeue_many(n, name=None)` {#RandomShuffleQueue.dequeue_many}
+
+Dequeues and concatenates `n` elements from this queue.
+
+This operation concatenates queue-element component tensors along
+the 0th dimension to make a single component tensor. All of the
+components in the dequeued tuple will have size `n` in the 0th dimension.
+
+If the queue is closed and there are fewer than `n` elements left, then an
+`OutOfRange` exception is raised.
+
+At runtime, this operation may raise an error if the queue is
+[closed](#QueueBase.close) before or during its execution. If the
+queue is closed, the queue contains fewer than `n` elements, and
+there are no pending enqueue operations that can fulfill this
+request, `tf.errors.OutOfRangeError` will be raised. If the
+session is [closed](../../api_docs/python/client.md#Session.close),
+`tf.errors.CancelledError` will be raised.
+
+##### Args:
+
+
+* <b>`n`</b>: A scalar `Tensor` containing the number of elements to dequeue.
+* <b>`name`</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The tuple of concatenated tensors that was dequeued.
+
+
+- - -
+
+#### `tf.RandomShuffleQueue.dequeue_up_to(n, name=None)` {#RandomShuffleQueue.dequeue_up_to}
+
+Dequeues and concatenates `n` elements from this queue.
+
+**Note**: This operation is not supported by all queues.  If a queue does not
+support DequeueUpTo, then a `tf.errors.UnimplementedError` is raised.
+
+This operation concatenates queue-element component tensors along
+the 0th dimension to make a single component tensor. If the queue
+has not been closed, all of the components in the dequeued tuple
+will have size `n` in the 0th dimension.
+
+If the queue is closed and there are more than `0` but fewer than
+`n` elements remaining, then instead of raising a
+`tf.errors.OutOfRangeError` like [`dequeue_many`](#QueueBase.dequeue_many),
+fewer than `n` elements are returned immediately.  If the queue is
+closed and there are `0` elements left in the queue, then a
+`tf.errors.OutOfRangeError` is raised just like in `dequeue_many`.
+Otherwise the behavior is identical to `dequeue_many`.
+
+##### Args:
+
+
+* <b>`n`</b>: A scalar `Tensor` containing the number of elements to dequeue.
+* <b>`name`</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The tuple of concatenated tensors that was dequeued.
+
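+For instance, a sketch of draining a closed queue in batches with
+`dequeue_up_to` (the element count and batch size are arbitrary):
+
+```python
+q = tf.RandomShuffleQueue(capacity=100, min_after_dequeue=0,
+                          dtypes=[tf.int32], shapes=[[]])
+enqueue_op = q.enqueue_many(tf.range(7))
+batch = q.dequeue_up_to(4)
+
+with tf.Session() as sess:
+    sess.run(enqueue_op)
+    sess.run(q.close())
+    print(sess.run(batch))   # 4 elements
+    print(sess.run(batch))   # the remaining 3 elements, no error raised
+    # A third run would raise tf.errors.OutOfRangeError.
+```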
+
+- - -
+
+#### `tf.RandomShuffleQueue.dtypes` {#RandomShuffleQueue.dtypes}
+
+The list of dtypes for each component of a queue element.
+
+
+- - -
+
+#### `tf.RandomShuffleQueue.enqueue(vals, name=None)` {#RandomShuffleQueue.enqueue}
+
+Enqueues one element to this queue.
+
+If the queue is full when this operation executes, it will block
+until the element has been enqueued.
+
+At runtime, this operation may raise an error if the queue is
+[closed](#QueueBase.close) before or during its execution. If the
+queue is closed before this operation runs,
+`tf.errors.CancelledError` will be raised. If this operation is
+blocked, and either (i) the queue is closed by a close operation
+with `cancel_pending_enqueues=True`, or (ii) the session is
+[closed](../../api_docs/python/client.md#Session.close),
+`tf.errors.CancelledError` will be raised.
+
+##### Args:
+
+
+* <b>`vals`</b>: A tensor, a list or tuple of tensors, or a dictionary containing
+ the values to enqueue.
+* <b>`name`</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The operation that enqueues a new tuple of tensors to the queue.
+
+
+- - -
+
+#### `tf.RandomShuffleQueue.enqueue_many(vals, name=None)` {#RandomShuffleQueue.enqueue_many}
+
+Enqueues zero or more elements to this queue.
+
+This operation slices each component tensor along the 0th dimension to
+make multiple queue elements. All of the tensors in `vals` must have the
+same size in the 0th dimension.
+
+If the queue is full when this operation executes, it will block
+until all of the elements have been enqueued.
+
+At runtime, this operation may raise an error if the queue is
+[closed](#QueueBase.close) before or during its execution. If the
+queue is closed before this operation runs,
+`tf.errors.CancelledError` will be raised. If this operation is
+blocked, and either (i) the queue is closed by a close operation
+with `cancel_pending_enqueues=True`, or (ii) the session is
+[closed](../../api_docs/python/client.md#Session.close),
+`tf.errors.CancelledError` will be raised.
+
+##### Args:
+
+
+* <b>`vals`</b>: A tensor, a list or tuple of tensors, or a dictionary
+ from which the queue elements are taken.
+* <b>`name`</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The operation that enqueues a batch of tuples of tensors to the queue.
+
+
+- - -
+
+#### `tf.RandomShuffleQueue.from_list(index, queues)` {#RandomShuffleQueue.from_list}
+
+Create a queue using the queue reference from `queues[index]`.
+
+##### Args:
+
+
+* <b>`index`</b>: An integer scalar tensor that determines the input that gets
+ selected.
+* <b>`queues`</b>: A list of `QueueBase` objects.
+
+##### Returns:
+
+ A `QueueBase` object.
+
+##### Raises:
+
+
+* <b>`TypeError`</b>: When `queues` is not a list of `QueueBase` objects,
+ or when the data types of `queues` are not all the same.
+
+
+- - -
+
+#### `tf.RandomShuffleQueue.name` {#RandomShuffleQueue.name}
+
+The name of the underlying queue.
+
+
+- - -
+
+#### `tf.RandomShuffleQueue.names` {#RandomShuffleQueue.names}
+
+The list of names for each component of a queue element.
+
+
+- - -
+
+#### `tf.RandomShuffleQueue.queue_ref` {#RandomShuffleQueue.queue_ref}
+
+The underlying queue reference.
+
+
+- - -
+
+#### `tf.RandomShuffleQueue.shapes` {#RandomShuffleQueue.shapes}
+
+The list of shapes for each component of a queue element.
+
+
+- - -
+
+#### `tf.RandomShuffleQueue.size(name=None)` {#RandomShuffleQueue.size}
+
+Compute the number of elements in this queue.
+
+##### Args:
+
+
+* <b>`name`</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A scalar tensor containing the number of elements in this queue.
+
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.train.FtrlOptimizer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.train.FtrlOptimizer.md
index 4fe719ee6b..c1b1755ed8 100644
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.train.FtrlOptimizer.md
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.train.FtrlOptimizer.md
@@ -2,7 +2,6 @@ Optimizer that implements the FTRL algorithm.
See this [paper](
https://www.eecs.tufts.edu/~dsculley/papers/ad-click-prediction.pdf).
-
- - -
#### `tf.train.FtrlOptimizer.__init__(learning_rate, learning_rate_power=-0.5, initial_accumulator_value=0.1, l1_regularization_strength=0.0, l2_regularization_strength=0.0, use_locking=False, name='Ftrl')` {#FtrlOptimizer.__init__}
@@ -30,3 +29,157 @@ Construct a new FTRL optimizer.
* <b>`ValueError`</b>: If one of the arguments is invalid.
+- - -
+
+#### `tf.train.FtrlOptimizer.apply_gradients(grads_and_vars, global_step=None, name=None)` {#FtrlOptimizer.apply_gradients}
+
+Apply gradients to variables.
+
+This is the second part of `minimize()`. It returns an `Operation` that
+applies gradients.
+
+##### Args:
+
+
+* <b>`grads_and_vars`</b>: List of (gradient, variable) pairs as returned by
+ `compute_gradients()`.
+* <b>`global_step`</b>: Optional `Variable` to increment by one after the
+ variables have been updated.
+* <b>`name`</b>: Optional name for the returned operation. Defaults to the
+ name passed to the `Optimizer` constructor.
+
+##### Returns:
+
+ An `Operation` that applies the specified gradients. If `global_step`
+ was not None, that operation also increments `global_step`.
+
+##### Raises:
+
+
+* <b>`TypeError`</b>: If `grads_and_vars` is malformed.
+* <b>`ValueError`</b>: If none of the variables have gradients.
+
+
+- - -
+
+#### `tf.train.FtrlOptimizer.compute_gradients(loss, var_list=None, gate_gradients=1, aggregation_method=None, colocate_gradients_with_ops=False, grad_loss=None)` {#FtrlOptimizer.compute_gradients}
+
+Compute gradients of `loss` for the variables in `var_list`.
+
+This is the first part of `minimize()`. It returns a list
+of (gradient, variable) pairs where "gradient" is the gradient
+for "variable". Note that "gradient" can be a `Tensor`, an
+`IndexedSlices`, or `None` if there is no gradient for the
+given variable.
+
+##### Args:
+
+
+* <b>`loss`</b>: A Tensor containing the value to minimize.
+* <b>`var_list`</b>: Optional list of `tf.Variable` to update to minimize
+ `loss`. Defaults to the list of variables collected in the graph
+  under the key `GraphKeys.TRAINABLE_VARIABLES`.
+* <b>`gate_gradients`</b>: How to gate the computation of gradients. Can be
+ `GATE_NONE`, `GATE_OP`, or `GATE_GRAPH`.
+* <b>`aggregation_method`</b>: Specifies the method used to combine gradient terms.
+ Valid values are defined in the class `AggregationMethod`.
+* <b>`colocate_gradients_with_ops`</b>: If True, try colocating gradients with
+ the corresponding op.
+* <b>`grad_loss`</b>: Optional. A `Tensor` holding the gradient computed for `loss`.
+
+##### Returns:
+
+ A list of (gradient, variable) pairs. Variable is always present, but
+ gradient can be `None`.
+
+##### Raises:
+
+
+* <b>`TypeError`</b>: If `var_list` contains anything other than `Variable` objects.
+* <b>`ValueError`</b>: If some arguments are invalid.
+
+
+- - -
+
+#### `tf.train.FtrlOptimizer.get_name()` {#FtrlOptimizer.get_name}
+
+
+
+
+- - -
+
+#### `tf.train.FtrlOptimizer.get_slot(var, name)` {#FtrlOptimizer.get_slot}
+
+Return a slot named `name` created for `var` by the Optimizer.
+
+Some `Optimizer` subclasses use additional variables. For example
+`Momentum` and `Adagrad` use variables to accumulate updates. This method
+gives access to these `Variable` objects if for some reason you need them.
+
+Use `get_slot_names()` to get the list of slot names created by the
+`Optimizer`.
+
+##### Args:
+
+
+* <b>`var`</b>: A variable passed to `minimize()` or `apply_gradients()`.
+* <b>`name`</b>: A string.
+
+##### Returns:
+
+ The `Variable` for the slot if it was created, `None` otherwise.
+
+
+- - -
+
+#### `tf.train.FtrlOptimizer.get_slot_names()` {#FtrlOptimizer.get_slot_names}
+
+Return a list of the names of slots created by the `Optimizer`.
+
+See `get_slot()`.
+
+##### Returns:
+
+ A list of strings.
+
+
+- - -
+
+#### `tf.train.FtrlOptimizer.minimize(loss, global_step=None, var_list=None, gate_gradients=1, aggregation_method=None, colocate_gradients_with_ops=False, name=None, grad_loss=None)` {#FtrlOptimizer.minimize}
+
+Add operations to minimize `loss` by updating `var_list`.
+
+This method simply combines calls to `compute_gradients()` and
+`apply_gradients()`. If you want to process the gradients before applying
+them, call `compute_gradients()` and `apply_gradients()` explicitly instead
+of using this function.
+
+##### Args:
+
+
+* <b>`loss`</b>: A `Tensor` containing the value to minimize.
+* <b>`global_step`</b>: Optional `Variable` to increment by one after the
+ variables have been updated.
+* <b>`var_list`</b>: Optional list of `Variable` objects to update to minimize
+ `loss`. Defaults to the list of variables collected in the graph
+ under the key `GraphKeys.TRAINABLE_VARIABLES`.
+* <b>`gate_gradients`</b>: How to gate the computation of gradients. Can be
+ `GATE_NONE`, `GATE_OP`, or `GATE_GRAPH`.
+* <b>`aggregation_method`</b>: Specifies the method used to combine gradient terms.
+ Valid values are defined in the class `AggregationMethod`.
+* <b>`colocate_gradients_with_ops`</b>: If True, try colocating gradients with
+ the corresponding op.
+* <b>`name`</b>: Optional name for the returned operation.
+* <b>`grad_loss`</b>: Optional. A `Tensor` holding the gradient computed for `loss`.
+
+##### Returns:
+
+ An Operation that updates the variables in `var_list`. If `global_step`
+ was not `None`, that operation also increments `global_step`.
+
+##### Raises:
+
+
+* <b>`ValueError`</b>: If some of the variables are not `Variable` objects.
+
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.PaddingFIFOQueue.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.PaddingFIFOQueue.md
index 66270e6fc7..eaf4408c9f 100644
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.PaddingFIFOQueue.md
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.PaddingFIFOQueue.md
@@ -5,7 +5,6 @@ supporting `dequeue_many`. See the constructor for more details.
See [`tf.QueueBase`](#QueueBase) for a description of the methods on
this class.
-
- - -
#### `tf.PaddingFIFOQueue.__init__(capacity, dtypes, shapes, names=None, shared_name=None, name='padding_fifo_queue')` {#PaddingFIFOQueue.__init__}
@@ -53,3 +52,262 @@ shape of all elements in the given batch.
dtypes and names do not match.
+- - -
+
+#### `tf.PaddingFIFOQueue.close(cancel_pending_enqueues=False, name=None)` {#PaddingFIFOQueue.close}
+
+Closes this queue.
+
+This operation signals that no more elements will be enqueued in
+the given queue. Subsequent `enqueue` and `enqueue_many`
+operations will fail. Subsequent `dequeue` and `dequeue_many`
+operations will continue to succeed if sufficient elements remain
+in the queue. Subsequent `dequeue` and `dequeue_many` operations
+that would block will fail immediately.
+
+If `cancel_pending_enqueues` is `True`, all pending requests will also
+be cancelled.
+
+##### Args:
+
+
+* <b>`cancel_pending_enqueues`</b>: (Optional.) A boolean, defaulting to
+ `False` (described above).
+* <b>`name`</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The operation that closes the queue.
+
+
+- - -
+
+#### `tf.PaddingFIFOQueue.dequeue(name=None)` {#PaddingFIFOQueue.dequeue}
+
+Dequeues one element from this queue.
+
+If the queue is empty when this operation executes, it will block
+until there is an element to dequeue.
+
+At runtime, this operation may raise an error if the queue is
+[closed](#QueueBase.close) before or during its execution. If the
+queue is closed, the queue is empty, and there are no pending
+enqueue operations that can fulfill this request,
+`tf.errors.OutOfRangeError` will be raised. If the session is
+[closed](../../api_docs/python/client.md#Session.close),
+`tf.errors.CancelledError` will be raised.
+
+##### Args:
+
+
+* <b>`name`</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The tuple of tensors that was dequeued.
+
+
+- - -
+
+#### `tf.PaddingFIFOQueue.dequeue_many(n, name=None)` {#PaddingFIFOQueue.dequeue_many}
+
+Dequeues and concatenates `n` elements from this queue.
+
+This operation concatenates queue-element component tensors along
+the 0th dimension to make a single component tensor. All of the
+components in the dequeued tuple will have size `n` in the 0th dimension.
+
+If the queue is closed and there are fewer than `n` elements left, then an
+`OutOfRange` exception is raised.
+
+At runtime, this operation may raise an error if the queue is
+[closed](#QueueBase.close) before or during its execution. If the
+queue is closed, the queue contains fewer than `n` elements, and
+there are no pending enqueue operations that can fulfill this
+request, `tf.errors.OutOfRangeError` will be raised. If the
+session is [closed](../../api_docs/python/client.md#Session.close),
+`tf.errors.CancelledError` will be raised.
+
+##### Args:
+
+
+* <b>`n`</b>: A scalar `Tensor` containing the number of elements to dequeue.
+* <b>`name`</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The tuple of concatenated tensors that was dequeued.
+
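+A sketch of how `dequeue_many` pads variable-length elements into one batch
+for this queue type (the partial shape `[None]` is what allows elements of
+different lengths; the values are arbitrary):
+
+```python
+q = tf.PaddingFIFOQueue(capacity=10, dtypes=[tf.int32], shapes=[[None]])
+enq_a = q.enqueue([tf.constant([1, 2])])
+enq_b = q.enqueue([tf.constant([3, 4, 5])])
+batch = q.dequeue_many(2)
+
+with tf.Session() as sess:
+    sess.run(enq_a)
+    sess.run(enq_b)
+    # Shorter rows are zero-padded to the longest row in the batch:
+    # [[1 2 0]
+    #  [3 4 5]]
+    print(sess.run(batch))
+```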
+
+- - -
+
+#### `tf.PaddingFIFOQueue.dequeue_up_to(n, name=None)` {#PaddingFIFOQueue.dequeue_up_to}
+
+Dequeues and concatenates `n` elements from this queue.
+
+**Note**: This operation is not supported by all queues.  If a queue does not
+support DequeueUpTo, then a `tf.errors.UnimplementedError` is raised.
+
+This operation concatenates queue-element component tensors along
+the 0th dimension to make a single component tensor. If the queue
+has not been closed, all of the components in the dequeued tuple
+will have size `n` in the 0th dimension.
+
+If the queue is closed and there are more than `0` but fewer than
+`n` elements remaining, then instead of raising a
+`tf.errors.OutOfRangeError` like [`dequeue_many`](#QueueBase.dequeue_many),
+fewer than `n` elements are returned immediately.  If the queue is
+closed and there are `0` elements left in the queue, then a
+`tf.errors.OutOfRangeError` is raised just like in `dequeue_many`.
+Otherwise the behavior is identical to `dequeue_many`.
+
+##### Args:
+
+
+* <b>`n`</b>: A scalar `Tensor` containing the number of elements to dequeue.
+* <b>`name`</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The tuple of concatenated tensors that was dequeued.
+
+
+- - -
+
+#### `tf.PaddingFIFOQueue.dtypes` {#PaddingFIFOQueue.dtypes}
+
+The list of dtypes for each component of a queue element.
+
+
+- - -
+
+#### `tf.PaddingFIFOQueue.enqueue(vals, name=None)` {#PaddingFIFOQueue.enqueue}
+
+Enqueues one element to this queue.
+
+If the queue is full when this operation executes, it will block
+until the element has been enqueued.
+
+At runtime, this operation may raise an error if the queue is
+[closed](#QueueBase.close) before or during its execution. If the
+queue is closed before this operation runs,
+`tf.errors.CancelledError` will be raised. If this operation is
+blocked, and either (i) the queue is closed by a close operation
+with `cancel_pending_enqueues=True`, or (ii) the session is
+[closed](../../api_docs/python/client.md#Session.close),
+`tf.errors.CancelledError` will be raised.
+
+##### Args:
+
+
+* <b>`vals`</b>: A tensor, a list or tuple of tensors, or a dictionary containing
+ the values to enqueue.
+* <b>`name`</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The operation that enqueues a new tuple of tensors to the queue.
+
+
+- - -
+
+#### `tf.PaddingFIFOQueue.enqueue_many(vals, name=None)` {#PaddingFIFOQueue.enqueue_many}
+
+Enqueues zero or more elements to this queue.
+
+This operation slices each component tensor along the 0th dimension to
+make multiple queue elements. All of the tensors in `vals` must have the
+same size in the 0th dimension.
+
+If the queue is full when this operation executes, it will block
+until all of the elements have been enqueued.
+
+At runtime, this operation may raise an error if the queue is
+[closed](#QueueBase.close) before or during its execution. If the
+queue is closed before this operation runs,
+`tf.errors.CancelledError` will be raised. If this operation is
+blocked, and either (i) the queue is closed by a close operation
+with `cancel_pending_enqueues=True`, or (ii) the session is
+[closed](../../api_docs/python/client.md#Session.close),
+`tf.errors.CancelledError` will be raised.
+
+##### Args:
+
+
+* <b>`vals`</b>: A tensor, a list or tuple of tensors, or a dictionary
+ from which the queue elements are taken.
+* <b>`name`</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The operation that enqueues a batch of tuples of tensors to the queue.
+
+
+- - -
+
+#### `tf.PaddingFIFOQueue.from_list(index, queues)` {#PaddingFIFOQueue.from_list}
+
+Create a queue using the queue reference from `queues[index]`.
+
+##### Args:
+
+
+* <b>`index`</b>: An integer scalar tensor that determines the input that gets
+ selected.
+* <b>`queues`</b>: A list of `QueueBase` objects.
+
+##### Returns:
+
+ A `QueueBase` object.
+
+##### Raises:
+
+
+* <b>`TypeError`</b>: When `queues` is not a list of `QueueBase` objects,
+ or when the data types of `queues` are not all the same.
+
+
+- - -
+
+#### `tf.PaddingFIFOQueue.name` {#PaddingFIFOQueue.name}
+
+The name of the underlying queue.
+
+
+- - -
+
+#### `tf.PaddingFIFOQueue.names` {#PaddingFIFOQueue.names}
+
+The list of names for each component of a queue element.
+
+
+- - -
+
+#### `tf.PaddingFIFOQueue.queue_ref` {#PaddingFIFOQueue.queue_ref}
+
+The underlying queue reference.
+
+
+- - -
+
+#### `tf.PaddingFIFOQueue.shapes` {#PaddingFIFOQueue.shapes}
+
+The list of shapes for each component of a queue element.
+
+
+- - -
+
+#### `tf.PaddingFIFOQueue.size(name=None)` {#PaddingFIFOQueue.size}
+
+Compute the number of elements in this queue.
+
+##### Args:
+
+
+* <b>`name`</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A scalar tensor containing the number of elements in this queue.
+
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.QueueBase.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.QueueBase.md
index 855ca42ee1..941f8f5dec 100644
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.QueueBase.md
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.QueueBase.md
@@ -14,70 +14,62 @@ See [`tf.FIFOQueue`](#FIFOQueue) and
[`tf.RandomShuffleQueue`](#RandomShuffleQueue) for concrete
implementations of this class, and instructions on how to create
them.
-
- - -
-#### `tf.QueueBase.enqueue(vals, name=None)` {#QueueBase.enqueue}
-
-Enqueues one element to this queue.
+#### `tf.QueueBase.__init__(dtypes, shapes, names, queue_ref)` {#QueueBase.__init__}
-If the queue is full when this operation executes, it will block
-until the element has been enqueued.
+Constructs a queue object from a queue reference.
-At runtime, this operation may raise an error if the queue is
-[closed](#QueueBase.close) before or during its execution. If the
-queue is closed before this operation runs,
-`tf.errors.CancelledError` will be raised. If this operation is
-blocked, and either (i) the queue is closed by a close operation
-with `cancel_pending_enqueues=True`, or (ii) the session is
-[closed](../../api_docs/python/client.md#Session.close),
-`tf.errors.CancelledError` will be raised.
+The two optional lists, `shapes` and `names`, must be of the same length
+as `dtypes` if provided. The values at a given index `i` indicate the
+shape and name to use for the corresponding queue component in `dtypes`.
##### Args:
-* <b>`vals`</b>: A tensor, a list or tuple of tensors, or a dictionary containing
- the values to enqueue.
-* <b>`name`</b>: A name for the operation (optional).
+* <b>`dtypes`</b>: A list of types. The length of dtypes must equal the number
+ of tensors in each element.
+* <b>`shapes`</b>: Constraints on the shapes of tensors in an element:
+ A list of shape tuples or None. This list is the same length
+    as dtypes.  If the shape of any tensor in the element is constrained,
+    all must be; shapes can be None if the shapes should not be constrained.
+* <b>`names`</b>: Optional list of names. If provided, the `enqueue()` and
+ `dequeue()` methods will use dictionaries with these names as keys.
+ Must be None or a list or tuple of the same length as `dtypes`.
+* <b>`queue_ref`</b>: The queue reference, i.e. the output of the queue op.
-##### Returns:
+##### Raises:
- The operation that enqueues a new tuple of tensors to the queue.
+* <b>`ValueError`</b>: If one of the arguments is invalid.
-- - -
-#### `tf.QueueBase.enqueue_many(vals, name=None)` {#QueueBase.enqueue_many}
+- - -
-Enqueues zero or more elements to this queue.
+#### `tf.QueueBase.close(cancel_pending_enqueues=False, name=None)` {#QueueBase.close}
-This operation slices each component tensor along the 0th dimension to
-make multiple queue elements. All of the tensors in `vals` must have the
-same size in the 0th dimension.
+Closes this queue.
-If the queue is full when this operation executes, it will block
-until all of the elements have been enqueued.
+This operation signals that no more elements will be enqueued in
+the given queue. Subsequent `enqueue` and `enqueue_many`
+operations will fail. Subsequent `dequeue` and `dequeue_many`
+operations will continue to succeed if sufficient elements remain
+in the queue. Subsequent `dequeue` and `dequeue_many` operations
+that would block will fail immediately.
-At runtime, this operation may raise an error if the queue is
-[closed](#QueueBase.close) before or during its execution. If the
-queue is closed before this operation runs,
-`tf.errors.CancelledError` will be raised. If this operation is
-blocked, and either (i) the queue is closed by a close operation
-with `cancel_pending_enqueues=True`, or (ii) the session is
-[closed](../../api_docs/python/client.md#Session.close),
-`tf.errors.CancelledError` will be raised.
+If `cancel_pending_enqueues` is `True`, all pending requests will also
+be cancelled.
##### Args:
-* <b>`vals`</b>: A tensor, a list or tuple of tensors, or a dictionary
- from which the queue elements are taken.
+* <b>`cancel_pending_enqueues`</b>: (Optional.) A boolean, defaulting to
+ `False` (described above).
* <b>`name`</b>: A name for the operation (optional).
##### Returns:
- The operation that enqueues a batch of tuples of tensors to the queue.
-
+ The operation that closes the queue.
- - -
@@ -139,122 +131,108 @@ session is [closed](../../api_docs/python/client.md#Session.close),
The tuple of concatenated tensors that was dequeued.
-
- - -
-#### `tf.QueueBase.size(name=None)` {#QueueBase.size}
+#### `tf.QueueBase.dequeue_up_to(n, name=None)` {#QueueBase.dequeue_up_to}
-Compute the number of elements in this queue.
+Dequeues and concatenates `n` elements from this queue.
+
+**Note**: This operation is not supported by all queues.  If a queue does not
+support DequeueUpTo, then a `tf.errors.UnimplementedError` is raised.
+
+This operation concatenates queue-element component tensors along
+the 0th dimension to make a single component tensor. If the queue
+has not been closed, all of the components in the dequeued tuple
+will have size `n` in the 0th dimension.
+
+If the queue is closed and there are more than `0` but fewer than
+`n` elements remaining, then instead of raising a
+`tf.errors.OutOfRangeError` like [`dequeue_many`](#QueueBase.dequeue_many),
+fewer than `n` elements are returned immediately.  If the queue is
+closed and there are `0` elements left in the queue, then a
+`tf.errors.OutOfRangeError` is raised just like in `dequeue_many`.
+Otherwise the behavior is identical to `dequeue_many`.
##### Args:
+* <b>`n`</b>: A scalar `Tensor` containing the number of elements to dequeue.
* <b>`name`</b>: A name for the operation (optional).
##### Returns:
- A scalar tensor containing the number of elements in this queue.
-
+ The tuple of concatenated tensors that was dequeued.
- - -
-#### `tf.QueueBase.close(cancel_pending_enqueues=False, name=None)` {#QueueBase.close}
-
-Closes this queue.
-
-This operation signals that no more elements will be enqueued in
-the given queue. Subsequent `enqueue` and `enqueue_many`
-operations will fail. Subsequent `dequeue` and `dequeue_many`
-operations will continue to succeed if sufficient elements remain
-in the queue. Subsequent `dequeue` and `dequeue_many` operations
-that would block will fail immediately.
-
-If `cancel_pending_enqueues` is `True`, all pending requests will also
-be cancelled.
-
-##### Args:
-
-
-* <b>`cancel_pending_enqueues`</b>: (Optional.) A boolean, defaulting to
- `False` (described above).
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The operation that closes the queue.
+#### `tf.QueueBase.dtypes` {#QueueBase.dtypes}
+The list of dtypes for each component of a queue element.
-#### Other Methods
- - -
-#### `tf.QueueBase.__init__(dtypes, shapes, names, queue_ref)` {#QueueBase.__init__}
+#### `tf.QueueBase.enqueue(vals, name=None)` {#QueueBase.enqueue}
-Constructs a queue object from a queue reference.
+Enqueues one element to this queue.
-The two optional lists, `shapes` and `names`, must be of the same length
-as `dtypes` if provided. The values at a given index `i` indicate the
-shape and name to use for the corresponding queue component in `dtypes`.
+If the queue is full when this operation executes, it will block
+until the element has been enqueued.
-##### Args:
+At runtime, this operation may raise an error if the queue is
+[closed](#QueueBase.close) before or during its execution. If the
+queue is closed before this operation runs,
+`tf.errors.CancelledError` will be raised. If this operation is
+blocked, and either (i) the queue is closed by a close operation
+with `cancel_pending_enqueues=True`, or (ii) the session is
+[closed](../../api_docs/python/client.md#Session.close),
+`tf.errors.CancelledError` will be raised.
+##### Args:
-* <b>`dtypes`</b>: A list of types. The length of dtypes must equal the number
- of tensors in each element.
-* <b>`shapes`</b>: Constraints on the shapes of tensors in an element:
- A list of shape tuples or None. This list is the same length
- as dtypes. If the shape of any tensors in the element are constrained,
- all must be; shapes can be None if the shapes should not be constrained.
-* <b>`names`</b>: Optional list of names. If provided, the `enqueue()` and
- `dequeue()` methods will use dictionaries with these names as keys.
- Must be None or a list or tuple of the same length as `dtypes`.
-* <b>`queue_ref`</b>: The queue reference, i.e. the output of the queue op.
-##### Raises:
+* <b>`vals`</b>: A tensor, a list or tuple of tensors, or a dictionary containing
+ the values to enqueue.
+* <b>`name`</b>: A name for the operation (optional).
+##### Returns:
-* <b>`ValueError`</b>: If one of the arguments is invalid.
+ The operation that enqueues a new tuple of tensors to the queue.
- - -
-#### `tf.QueueBase.dequeue_up_to(n, name=None)` {#QueueBase.dequeue_up_to}
+#### `tf.QueueBase.enqueue_many(vals, name=None)` {#QueueBase.enqueue_many}
-Dequeues and concatenates `n` elements from this queue.
+Enqueues zero or more elements to this queue.
-**Note** This operation is not supported by all queues. If a queue does not
-support DequeueUpTo, then a `tf.errors.UnimplementedError` is raised.
+This operation slices each component tensor along the 0th dimension to
+make multiple queue elements. All of the tensors in `vals` must have the
+same size in the 0th dimension.
-This operation concatenates queue-element component tensors along
-the 0th dimension to make a single component tensor. If the queue
-has not been closed, all of the components in the dequeued tuple
-will have size `n` in the 0th dimension.
+If the queue is full when this operation executes, it will block
+until all of the elements have been enqueued.
-If the queue is closed and there are more than `0` but fewer than
-`n` elements remaining, then instead of raising a
-`tf.errors.OutOfRangeError` like [`dequeue_many`](#QueueBase.dequeue_many),
-less than `n` elements are returned immediately. If the queue is
-closed and there are `0` elements left in the queue, then a
-`tf.errors.OutOfRangeError` is raised just like in `dequeue_many`.
-Otherwise the behavior is identical to `dequeue_many`.
+At runtime, this operation may raise an error if the queue is
+[closed](#QueueBase.close) before or during its execution. If the
+queue is closed before this operation runs,
+`tf.errors.CancelledError` will be raised. If this operation is
+blocked, and either (i) the queue is closed by a close operation
+with `cancel_pending_enqueues=True`, or (ii) the session is
+[closed](../../api_docs/python/client.md#Session.close),
+`tf.errors.CancelledError` will be raised.
##### Args:
-* <b>`n`</b>: A scalar `Tensor` containing the number of elements to dequeue.
+* <b>`vals`</b>: A tensor, a list or tuple of tensors, or a dictionary
+ from which the queue elements are taken.
* <b>`name`</b>: A name for the operation (optional).
##### Returns:
- The tuple of concatenated tensors that was dequeued.
-
-
-- - -
-
-#### `tf.QueueBase.dtypes` {#QueueBase.dtypes}
-
-The list of dtypes for each component of a queue element.
+ The operation that enqueues a batch of tuples of tensors to the queue.
- - -
@@ -309,3 +287,19 @@ The underlying queue reference.
The list of shapes for each component of a queue element.
+- - -
+
+#### `tf.QueueBase.size(name=None)` {#QueueBase.size}
+
+Compute the number of elements in this queue.
+
+##### Args:
+
+
+* <b>`name`</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A scalar tensor containing the number of elements in this queue.
+
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.train.AdamOptimizer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.train.AdamOptimizer.md
index 97f8b825e3..04d2ec6d0b 100644
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.train.AdamOptimizer.md
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.train.AdamOptimizer.md
@@ -2,7 +2,6 @@ Optimizer that implements the Adam algorithm.
See [Kingma et. al., 2014](http://arxiv.org/abs/1412.6980)
([pdf](http://arxiv.org/pdf/1412.6980.pdf)).
-
- - -
#### `tf.train.AdamOptimizer.__init__(learning_rate=0.001, beta1=0.9, beta2=0.999, epsilon=1e-08, use_locking=False, name='Adam')` {#AdamOptimizer.__init__}
@@ -51,3 +50,157 @@ will not update in iterations g is zero.
Defaults to "Adam".
+- - -
+
+#### `tf.train.AdamOptimizer.apply_gradients(grads_and_vars, global_step=None, name=None)` {#AdamOptimizer.apply_gradients}
+
+Apply gradients to variables.
+
+This is the second part of `minimize()`. It returns an `Operation` that
+applies gradients.
+
+##### Args:
+
+
+* <b>`grads_and_vars`</b>: List of (gradient, variable) pairs as returned by
+ `compute_gradients()`.
+* <b>`global_step`</b>: Optional `Variable` to increment by one after the
+ variables have been updated.
+* <b>`name`</b>: Optional name for the returned operation. Defaults to the
+ name passed to the `Optimizer` constructor.
+
+##### Returns:
+
+ An `Operation` that applies the specified gradients. If `global_step`
+ was not None, that operation also increments `global_step`.
+
+##### Raises:
+
+
+* <b>`TypeError`</b>: If `grads_and_vars` is malformed.
+* <b>`ValueError`</b>: If none of the variables have gradients.
+
+
+- - -
+
+#### `tf.train.AdamOptimizer.compute_gradients(loss, var_list=None, gate_gradients=1, aggregation_method=None, colocate_gradients_with_ops=False, grad_loss=None)` {#AdamOptimizer.compute_gradients}
+
+Compute gradients of `loss` for the variables in `var_list`.
+
+This is the first part of `minimize()`. It returns a list
+of (gradient, variable) pairs where "gradient" is the gradient
+for "variable". Note that "gradient" can be a `Tensor`, an
+`IndexedSlices`, or `None` if there is no gradient for the
+given variable.
+
+##### Args:
+
+
+* <b>`loss`</b>: A Tensor containing the value to minimize.
+* <b>`var_list`</b>: Optional list of `tf.Variable` to update to minimize
+ `loss`. Defaults to the list of variables collected in the graph
+  under the key `GraphKeys.TRAINABLE_VARIABLES`.
+* <b>`gate_gradients`</b>: How to gate the computation of gradients. Can be
+ `GATE_NONE`, `GATE_OP`, or `GATE_GRAPH`.
+* <b>`aggregation_method`</b>: Specifies the method used to combine gradient terms.
+ Valid values are defined in the class `AggregationMethod`.
+* <b>`colocate_gradients_with_ops`</b>: If True, try colocating gradients with
+ the corresponding op.
+* <b>`grad_loss`</b>: Optional. A `Tensor` holding the gradient computed for `loss`.
+
+##### Returns:
+
+ A list of (gradient, variable) pairs. Variable is always present, but
+ gradient can be `None`.
+
+##### Raises:
+
+
+* <b>`TypeError`</b>: If `var_list` contains anything other than `Variable` objects.
+* <b>`ValueError`</b>: If some arguments are invalid.
+
+
+- - -
+
+#### `tf.train.AdamOptimizer.get_name()` {#AdamOptimizer.get_name}
+
+
+
+
+- - -
+
+#### `tf.train.AdamOptimizer.get_slot(var, name)` {#AdamOptimizer.get_slot}
+
+Return a slot named `name` created for `var` by the Optimizer.
+
+Some `Optimizer` subclasses use additional variables. For example
+`Momentum` and `Adagrad` use variables to accumulate updates. This method
+gives access to these `Variable` objects if for some reason you need them.
+
+Use `get_slot_names()` to get the list of slot names created by the
+`Optimizer`.
+
+##### Args:
+
+
+* <b>`var`</b>: A variable passed to `minimize()` or `apply_gradients()`.
+* <b>`name`</b>: A string.
+
+##### Returns:
+
+ The `Variable` for the slot if it was created, `None` otherwise.
+
+
+- - -
+
+#### `tf.train.AdamOptimizer.get_slot_names()` {#AdamOptimizer.get_slot_names}
+
+Return a list of the names of slots created by the `Optimizer`.
+
+See `get_slot()`.
+
+##### Returns:
+
+ A list of strings.
+
+
+- - -
+
+#### `tf.train.AdamOptimizer.minimize(loss, global_step=None, var_list=None, gate_gradients=1, aggregation_method=None, colocate_gradients_with_ops=False, name=None, grad_loss=None)` {#AdamOptimizer.minimize}
+
+Add operations to minimize `loss` by updating `var_list`.
+
+This method simply combines calls to `compute_gradients()` and
+`apply_gradients()`. If you want to process the gradients before applying
+them, call `compute_gradients()` and `apply_gradients()` explicitly instead
+of using this function.
+
+##### Args:
+
+
+* <b>`loss`</b>: A `Tensor` containing the value to minimize.
+* <b>`global_step`</b>: Optional `Variable` to increment by one after the
+ variables have been updated.
+* <b>`var_list`</b>: Optional list of `Variable` objects to update to minimize
+ `loss`. Defaults to the list of variables collected in the graph
+ under the key `GraphKeys.TRAINABLE_VARIABLES`.
+* <b>`gate_gradients`</b>: How to gate the computation of gradients. Can be
+ `GATE_NONE`, `GATE_OP`, or `GATE_GRAPH`.
+* <b>`aggregation_method`</b>: Specifies the method used to combine gradient terms.
+ Valid values are defined in the class `AggregationMethod`.
+* <b>`colocate_gradients_with_ops`</b>: If True, try colocating gradients with
+ the corresponding op.
+* <b>`name`</b>: Optional name for the returned operation.
+* <b>`grad_loss`</b>: Optional. A `Tensor` holding the gradient computed for `loss`.
+
+##### Returns:
+
+ An Operation that updates the variables in `var_list`. If `global_step`
+ was not `None`, that operation also increments `global_step`.
+
+##### Raises:
+
+
+* <b>`ValueError`</b>: If some of the variables are not `Variable` objects.
+
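+A minimal end-to-end sketch of `minimize` in a training loop (the model and
+data below are toy placeholders):
+
+```python
+x = tf.placeholder(tf.float32, shape=[None, 1])
+y = tf.placeholder(tf.float32, shape=[None, 1])
+w = tf.Variable(tf.zeros([1, 1]))
+loss = tf.reduce_mean(tf.square(tf.matmul(x, w) - y))
+
+global_step = tf.Variable(0, trainable=False)
+train_op = tf.train.AdamOptimizer(learning_rate=0.001).minimize(
+    loss, global_step=global_step)
+
+with tf.Session() as sess:
+    sess.run(tf.global_variables_initializer())
+    for _ in range(100):
+        sess.run(train_op, feed_dict={x: [[1.0]], y: [[2.0]]})
+```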
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.train.Coordinator.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.train.Coordinator.md
index d43253eb6e..25a4025fc9 100644
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.train.Coordinator.md
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.train.Coordinator.md
@@ -120,7 +120,7 @@ After this is called, calls to `should_stop()` will return `False`.
- - -
-#### `tf.train.Coordinator.join(threads=None, stop_grace_period_secs=120)` {#Coordinator.join}
+#### `tf.train.Coordinator.join(threads=None, stop_grace_period_secs=120, ignore_live_threads=False)` {#Coordinator.join}
Wait for threads to terminate.
@@ -145,6 +145,8 @@ that `RuntimeError`.
addition to the registered threads.
* <b>`stop_grace_period_secs`</b>: Number of seconds given to threads to stop after
`request_stop()` has been called.
+* <b>`ignore_live_threads`</b>: If `False`, raises an error if any of the threads are
+ still alive after `stop_grace_period_secs`.
##### Raises:
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.Operation.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.Operation.md
index 5bad8aaaab..08323b592f 100644
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.Operation.md
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.Operation.md
@@ -15,26 +15,72 @@ After the graph has been launched in a session, an `Operation` can
be executed by passing it to
[`Session.run()`](../../api_docs/python/client.md#Session.run).
`op.run()` is a shortcut for calling `tf.get_default_session().run(op)`.
+- - -
+
+#### `tf.Operation.__init__(node_def, g, inputs=None, output_types=None, control_inputs=None, input_types=None, original_op=None, op_def=None)` {#Operation.__init__}
+
+Creates an `Operation`.
+
+NOTE: This constructor validates the name of the `Operation` (passed
+as `node_def.name`). Valid `Operation` names match the following
+regular expression:
+
+ [A-Za-z0-9.][A-Za-z0-9_.\\-/]*
+
+##### Args:
+
+
+* <b>`node_def`</b>: `node_def_pb2.NodeDef`. `NodeDef` for the `Operation`.
+ Used for attributes of `node_def_pb2.NodeDef`, typically `name`,
+ `op`, and `device`. The `input` attribute is irrelevant here
+ as it will be computed when generating the model.
+* <b>`g`</b>: `Graph`. The parent graph.
+* <b>`inputs`</b>: list of `Tensor` objects. The inputs to this `Operation`.
+* <b>`output_types`</b>: list of `DType` objects. List of the types of the
+ `Tensors` computed by this operation. The length of this list indicates
+ the number of output endpoints of the `Operation`.
+* <b>`control_inputs`</b>: list of operations or tensors from which to have a
+ control dependency.
+* <b>`input_types`</b>: List of `DType` objects representing the
+ types of the tensors accepted by the `Operation`. By default
+ uses `[x.dtype.base_dtype for x in inputs]`. Operations that expect
+ reference-typed inputs must specify these explicitly.
+* <b>`original_op`</b>: Optional. Used to associate the new `Operation` with an
+ existing `Operation` (for example, a replica with the op that was
+ replicated).
+* <b>`op_def`</b>: Optional. The `op_def_pb2.OpDef` proto that describes the
+ op type that this `Operation` represents.
+
+##### Raises:
+
+
+* <b>`TypeError`</b>: if control inputs are not Operations or Tensors,
+ or if `node_def` is not a `NodeDef`,
+ or if `g` is not a `Graph`,
+ or if `inputs` are not tensors,
+ or if `inputs` and `input_types` are incompatible.
+* <b>`ValueError`</b>: if the `node_def` name is not valid.
+
- - -
-#### `tf.Operation.name` {#Operation.name}
+#### `tf.Operation.__repr__()` {#Operation.__repr__}
+
-The full name of this operation.
- - -
-#### `tf.Operation.type` {#Operation.type}
+#### `tf.Operation.__str__()` {#Operation.__str__}
+
-The type of the op (e.g. `"MatMul"`).
- - -
-#### `tf.Operation.inputs` {#Operation.inputs}
+#### `tf.Operation.colocation_groups()` {#Operation.colocation_groups}
-The list of `Tensor` objects representing the data inputs of this op.
+Returns the list of colocation groups of the op.
- - -
@@ -56,13 +102,6 @@ in the correct order.
- - -
-#### `tf.Operation.outputs` {#Operation.outputs}
-
-The list of `Tensor` objects representing the outputs of this op.
-
-
-- - -
-
#### `tf.Operation.device` {#Operation.device}
The name of the device to which this op has been assigned, if any.
@@ -76,38 +115,6 @@ The name of the device to which this op has been assigned, if any.
- - -
-#### `tf.Operation.graph` {#Operation.graph}
-
-The `Graph` that contains this operation.
-
-
-
-- - -
-
-#### `tf.Operation.run(feed_dict=None, session=None)` {#Operation.run}
-
-Runs this operation in a `Session`.
-
-Calling this method will execute all preceding operations that
-produce the inputs needed for this operation.
-
-*N.B.* Before invoking `Operation.run()`, its graph must have been
-launched in a session, and either a default session must be
-available, or `session` must be specified explicitly.
-
-##### Args:
-
-
-* <b>`feed_dict`</b>: A dictionary that maps `Tensor` objects to feed values.
- See [`Session.run()`](../../api_docs/python/client.md#Session.run)
- for a description of the valid feed values.
-* <b>`session`</b>: (Optional.) The `Session` to be used to run to this operation. If
- none, the default session will be used.
-
-
-
-- - -
-
#### `tf.Operation.get_attr(name)` {#Operation.get_attr}
Returns the value of the attr of this op with the given `name`.
@@ -129,105 +136,93 @@ Returns the value of the attr of this op with the given `name`.
- - -
-#### `tf.Operation.traceback` {#Operation.traceback}
-
-Returns the call stack from when this operation was constructed.
+#### `tf.Operation.graph` {#Operation.graph}
+The `Graph` that contains this operation.
-#### Other Methods
- - -
-#### `tf.Operation.__init__(node_def, g, inputs=None, output_types=None, control_inputs=None, input_types=None, original_op=None, op_def=None)` {#Operation.__init__}
+#### `tf.Operation.inputs` {#Operation.inputs}
-Creates an `Operation`.
+The list of `Tensor` objects representing the data inputs of this op.
-NOTE: This constructor validates the name of the `Operation` (passed
-as `node_def.name`). Valid `Operation` names match the following
-regular expression:
- [A-Za-z0-9.][A-Za-z0-9_.\\-/]*
+- - -
-##### Args:
+#### `tf.Operation.name` {#Operation.name}
+The full name of this operation.
-* <b>`node_def`</b>: `node_def_pb2.NodeDef`. `NodeDef` for the `Operation`.
- Used for attributes of `node_def_pb2.NodeDef`, typically `name`,
- `op`, and `device`. The `input` attribute is irrelevant here
- as it will be computed when generating the model.
-* <b>`g`</b>: `Graph`. The parent graph.
-* <b>`inputs`</b>: list of `Tensor` objects. The inputs to this `Operation`.
-* <b>`output_types`</b>: list of `DType` objects. List of the types of the
- `Tensors` computed by this operation. The length of this list indicates
- the number of output endpoints of the `Operation`.
-* <b>`control_inputs`</b>: list of operations or tensors from which to have a
- control dependency.
-* <b>`input_types`</b>: List of `DType` objects representing the
- types of the tensors accepted by the `Operation`. By default
- uses `[x.dtype.base_dtype for x in inputs]`. Operations that expect
- reference-typed inputs must specify these explicitly.
-* <b>`original_op`</b>: Optional. Used to associate the new `Operation` with an
- existing `Operation` (for example, a replica with the op that was
- replicated).
-* <b>`op_def`</b>: Optional. The `op_def_pb2.OpDef` proto that describes the
- op type that this `Operation` represents.
-##### Raises:
+- - -
+#### `tf.Operation.node_def` {#Operation.node_def}
-* <b>`TypeError`</b>: if control inputs are not Operations or Tensors,
- or if `node_def` is not a `NodeDef`,
- or if `g` is not a `Graph`,
- or if `inputs` are not tensors,
- or if `inputs` and `input_types` are incompatible.
-* <b>`ValueError`</b>: if the `node_def` name is not valid.
+Returns a serialized `NodeDef` representation of this operation.
+
+##### Returns:
+
+ A
+ [`NodeDef`](https://www.tensorflow.org/code/tensorflow/core/framework/node_def.proto)
+ protocol buffer.
- - -
-#### `tf.Operation.__repr__()` {#Operation.__repr__}
+#### `tf.Operation.op_def` {#Operation.op_def}
+Returns the `OpDef` proto that represents the type of this op.
+##### Returns:
+ An
+ [`OpDef`](https://www.tensorflow.org/code/tensorflow/core/framework/op_def.proto)
+ protocol buffer.
-- - -
-#### `tf.Operation.__str__()` {#Operation.__str__}
+- - -
+#### `tf.Operation.outputs` {#Operation.outputs}
+The list of `Tensor` objects representing the outputs of this op.
- - -
-#### `tf.Operation.colocation_groups()` {#Operation.colocation_groups}
-
-Returns the list of colocation groups of the op.
+#### `tf.Operation.run(feed_dict=None, session=None)` {#Operation.run}
+Runs this operation in a `Session`.
-- - -
+Calling this method will execute all preceding operations that
+produce the inputs needed for this operation.
-#### `tf.Operation.node_def` {#Operation.node_def}
+*N.B.* Before invoking `Operation.run()`, its graph must have been
+launched in a session, and either a default session must be
+available, or `session` must be specified explicitly.
-Returns a serialized `NodeDef` representation of this operation.
+##### Args:
-##### Returns:
- A
- [`NodeDef`](https://www.tensorflow.org/code/tensorflow/core/framework/node_def.proto)
- protocol buffer.
+* <b>`feed_dict`</b>: A dictionary that maps `Tensor` objects to feed values.
+ See [`Session.run()`](../../api_docs/python/client.md#Session.run)
+ for a description of the valid feed values.
+* <b>`session`</b>: (Optional.) The `Session` to be used to run this operation. If
+ none, the default session will be used.
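+
+A minimal sketch of the shortcut described above (assuming the variables of the
+default graph have already been defined):
+
+```python
+init = tf.global_variables_initializer()
+
+with tf.Session() as sess:
+    # Inside the block this session is the default session, so
+    # init.run() is equivalent to sess.run(init).
+    init.run()
+```
+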
- - -
-#### `tf.Operation.op_def` {#Operation.op_def}
+#### `tf.Operation.traceback` {#Operation.traceback}
-Returns the `OpDef` proto that represents the type of this op.
+Returns the call stack from when this operation was constructed.
-##### Returns:
- An
- [`OpDef`](https://www.tensorflow.org/code/tensorflow/core/framework/op_def.proto)
- protocol buffer.
+- - -
+
+#### `tf.Operation.type` {#Operation.type}
+
+The type of the op (e.g. `"MatMul"`).
- - -
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.PaddingFIFOQueue.from_list.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.PaddingFIFOQueue.from_list.md
new file mode 100644
index 0000000000..105b0fd4c6
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.PaddingFIFOQueue.from_list.md
@@ -0,0 +1,21 @@
+#### `tf.PaddingFIFOQueue.from_list(index, queues)` {#PaddingFIFOQueue.from_list}
+
+Create a queue using the queue reference from `queues[index]`.
+
+##### Args:
+
+
+* <b>`index`</b>: An integer scalar tensor that determines the input that gets
+ selected.
+* <b>`queues`</b>: A list of `QueueBase` objects.
+
+##### Returns:
+
+ A `QueueBase` object.
+
+##### Raises:
+
+
+* <b>`TypeError`</b>: When `queues` is not a list of `QueueBase` objects,
+ or when the data types of `queues` are not all the same.
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.python_io.TFRecordWriter.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.python_io.TFRecordWriter.md
index 36c524310f..41cbdda85c 100644
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.python_io.TFRecordWriter.md
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.python_io.TFRecordWriter.md
@@ -2,6 +2,19 @@ A class to write records to a TFRecords file.
This class implements `__enter__` and `__exit__`, and can be used
in `with` blocks like a normal file.
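+
+A minimal sketch of the `with`-block usage (the path and the `example` proto
+below are placeholders):
+
+```python
+example = tf.train.Example()  # placeholder: populate its features as needed
+
+with tf.python_io.TFRecordWriter("/tmp/data.tfrecords") as writer:
+    writer.write(example.SerializeToString())
+# The writer is closed automatically when the block exits.
+```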
+- - -
+
+#### `tf.python_io.TFRecordWriter.__enter__()` {#TFRecordWriter.__enter__}
+
+Enter a `with` block.
+
+
+- - -
+
+#### `tf.python_io.TFRecordWriter.__exit__(unused_type, unused_value, unused_traceback)` {#TFRecordWriter.__exit__}
+
+Exit a `with` block, closing the file.
+
- - -
@@ -23,36 +36,20 @@ Opens file `path` and creates a `TFRecordWriter` writing to it.
- - -
-#### `tf.python_io.TFRecordWriter.write(record)` {#TFRecordWriter.write}
-
-Write a string record to the file.
-
-##### Args:
-
-
-* <b>`record`</b>: str
-
-
-- - -
-
#### `tf.python_io.TFRecordWriter.close()` {#TFRecordWriter.close}
Close the file.
-
-#### Other Methods
- - -
-#### `tf.python_io.TFRecordWriter.__enter__()` {#TFRecordWriter.__enter__}
-
-Enter a `with` block.
+#### `tf.python_io.TFRecordWriter.write(record)` {#TFRecordWriter.write}
+Write a string record to the file.
-- - -
+##### Args:
-#### `tf.python_io.TFRecordWriter.__exit__(unused_type, unused_value, unused_traceback)` {#TFRecordWriter.__exit__}
-Exit a `with` block, closing the file.
+* <b>`record`</b>: str
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.summary.FileWriter.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.summary.FileWriter.md
index 6e80a4a562..526e408fba 100644
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.summary.FileWriter.md
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.summary.FileWriter.md
@@ -5,7 +5,6 @@ given directory and add summaries and events to it. The class updates the
file contents asynchronously. This allows a training program to call methods
to add data to the file directly from the training loop, without slowing down
training.
-
- - -
#### `tf.summary.FileWriter.__init__(logdir, graph=None, max_queue=10, flush_secs=120, graph_def=None)` {#FileWriter.__init__}
@@ -51,81 +50,62 @@ the event file:
* <b>`graph_def`</b>: DEPRECATED: Use the `graph` argument instead.
-
- - -
-#### `tf.summary.FileWriter.add_summary(summary, global_step=None)` {#FileWriter.add_summary}
-
-Adds a `Summary` protocol buffer to the event file.
-
-This method wraps the provided summary in an `Event` protocol buffer
-and adds it to the event file.
+#### `tf.summary.FileWriter.add_event(event)` {#FileWriter.add_event}
-You can pass the result of evaluating any summary op, using
-[`Session.run()`](client.md#Session.run) or
-[`Tensor.eval()`](framework.md#Tensor.eval), to this
-function. Alternatively, you can pass a `tf.Summary` protocol
-buffer that you populate with your own data. The latter is
-commonly done to report evaluation results in event files.
+Adds an event to the event file.
##### Args:
-* <b>`summary`</b>: A `Summary` protocol buffer, optionally serialized as a string.
-* <b>`global_step`</b>: Number. Optional global step value to record with the
- summary.
+* <b>`event`</b>: An `Event` protocol buffer.
- - -
-#### `tf.summary.FileWriter.add_session_log(session_log, global_step=None)` {#FileWriter.add_session_log}
+#### `tf.summary.FileWriter.add_graph(graph, global_step=None, graph_def=None)` {#FileWriter.add_graph}
-Adds a `SessionLog` protocol buffer to the event file.
+Adds a `Graph` to the event file.
-This method wraps the provided session in an `Event` protocol buffer
-and adds it to the event file.
+The graph described by the protocol buffer will be displayed by
+TensorBoard. Most users pass a graph in the constructor instead.
##### Args:
-* <b>`session_log`</b>: A `SessionLog` protocol buffer.
-* <b>`global_step`</b>: Number. Optional global step value to record with the
- summary.
-
-
-- - -
-
-#### `tf.summary.FileWriter.add_event(event)` {#FileWriter.add_event}
-
-Adds an event to the event file.
+* <b>`graph`</b>: A `Graph` object, such as `sess.graph`.
+* <b>`global_step`</b>: Number. Optional global step counter to record with the
+ graph.
+* <b>`graph_def`</b>: DEPRECATED. Use the `graph` parameter instead.
-##### Args:
+##### Raises:
-* <b>`event`</b>: An `Event` protocol buffer.
+* <b>`ValueError`</b>: If both graph and graph_def are passed to the method.
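+
+A small sketch of adding the graph after construction (most users simply pass
+`graph=sess.graph` to the constructor instead, as noted above):
+
+```python
+writer = tf.summary.FileWriter("/tmp/logs")
+
+with tf.Session() as sess:
+    writer.add_graph(sess.graph)
+```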
- - -
-#### `tf.summary.FileWriter.add_graph(graph, global_step=None, graph_def=None)` {#FileWriter.add_graph}
+#### `tf.summary.FileWriter.add_meta_graph(meta_graph_def, global_step=None)` {#FileWriter.add_meta_graph}
-Adds a `Graph` to the event file.
+Adds a `MetaGraphDef` to the event file.
-The graph described by the protocol buffer will be displayed by
-TensorBoard. Most users pass a graph in the constructor instead.
+The `MetaGraphDef` allows running the given graph via
+`saver.import_meta_graph()`.
##### Args:
-* <b>`graph`</b>: A `Graph` object, such as `sess.graph`.
+* <b>`meta_graph_def`</b>: A `MetaGraphDef` object, often returned by
+ `saver.export_meta_graph()`.
* <b>`global_step`</b>: Number. Optional global step counter to record with the
graph.
-* <b>`graph_def`</b>: DEPRECATED. Use the `graph` parameter instead.
##### Raises:
-* <b>`ValueError`</b>: If both graph and graph_def are passed to the method.
+* <b>`TypeError`</b>: If `meta_graph_def` is not an instance of `MetaGraphDef`.
- - -
@@ -150,20 +130,43 @@ Adds a metadata information for a single session.run() call.
- - -
-#### `tf.summary.FileWriter.get_logdir()` {#FileWriter.get_logdir}
+#### `tf.summary.FileWriter.add_session_log(session_log, global_step=None)` {#FileWriter.add_session_log}
-Returns the directory where event file will be written.
+Adds a `SessionLog` protocol buffer to the event file.
+This method wraps the provided session in an `Event` protocol buffer
+and adds it to the event file.
+
+##### Args:
+
+
+* <b>`session_log`</b>: A `SessionLog` protocol buffer.
+* <b>`global_step`</b>: Number. Optional global step value to record with the
+ summary.
- - -
-#### `tf.summary.FileWriter.flush()` {#FileWriter.flush}
+#### `tf.summary.FileWriter.add_summary(summary, global_step=None)` {#FileWriter.add_summary}
-Flushes the event file to disk.
+Adds a `Summary` protocol buffer to the event file.
-Call this method to make sure that all pending events have been written to
-disk.
+This method wraps the provided summary in an `Event` protocol buffer
+and adds it to the event file.
+
+You can pass the result of evaluating any summary op, using
+[`Session.run()`](client.md#Session.run) or
+[`Tensor.eval()`](framework.md#Tensor.eval), to this
+function. Alternatively, you can pass a `tf.Summary` protocol
+buffer that you populate with your own data. The latter is
+commonly done to report evaluation results in event files.
+
+##### Args:
+
+
+* <b>`summary`</b>: A `Summary` protocol buffer, optionally serialized as a string.
+* <b>`global_step`</b>: Number. Optional global step value to record with the
+ summary.
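+
+A hedged sketch of the workflow above, assuming `sess`, a feed dict `feed`, a
+step counter `step`, and some summary ops already exist:
+
+```python
+merged = tf.summary.merge_all()
+writer = tf.summary.FileWriter("/tmp/logs", sess.graph)
+
+summary_str = sess.run(merged, feed_dict=feed)
+writer.add_summary(summary_str, global_step=step)
+```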
- - -
@@ -175,8 +178,23 @@ Flushes the event file to disk and close the file.
Call this method when you do not need the summary writer anymore.
+- - -
+
+#### `tf.summary.FileWriter.flush()` {#FileWriter.flush}
+
+Flushes the event file to disk.
+
+Call this method to make sure that all pending events have been written to
+disk.
+
+
+- - -
+
+#### `tf.summary.FileWriter.get_logdir()` {#FileWriter.get_logdir}
+
+Returns the directory where the event file will be written.
+
-#### Other Methods
- - -
#### `tf.summary.FileWriter.reopen()` {#FileWriter.reopen}
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.IndexedSlices.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.IndexedSlices.md
index 6ca64ebf1b..cb674c3ea8 100644
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.IndexedSlices.md
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.IndexedSlices.md
@@ -23,7 +23,6 @@ gradients for operations that have sparse gradients
Contrast this representation with
[`SparseTensor`](../../api_docs/python/sparse_ops.md#SparseTensor),
which uses multi-dimensional indices and scalar values.
-
- - -
#### `tf.IndexedSlices.__init__(values, indices, dense_shape=None)` {#IndexedSlices.__init__}
@@ -31,19 +30,18 @@ which uses multi-dimensional indices and scalar values.
Creates an `IndexedSlices`.
-
- - -
-#### `tf.IndexedSlices.values` {#IndexedSlices.values}
+#### `tf.IndexedSlices.__neg__()` {#IndexedSlices.__neg__}
+
-A `Tensor` containing the values of the slices.
- - -
-#### `tf.IndexedSlices.indices` {#IndexedSlices.indices}
+#### `tf.IndexedSlices.__str__()` {#IndexedSlices.__str__}
+
-A 1-D `Tensor` containing the indices of the slices.
- - -
@@ -53,12 +51,11 @@ A 1-D `Tensor` containing the indices of the slices.
A 1-D `Tensor` containing the shape of the corresponding dense tensor.
-
- - -
-#### `tf.IndexedSlices.name` {#IndexedSlices.name}
+#### `tf.IndexedSlices.device` {#IndexedSlices.device}
-The name of this `IndexedSlices`.
+The name of the device on which `values` will be produced, or `None`.
- - -
@@ -70,38 +67,36 @@ The `DType` of elements in this tensor.
- - -
-#### `tf.IndexedSlices.device` {#IndexedSlices.device}
+#### `tf.IndexedSlices.graph` {#IndexedSlices.graph}
-The name of the device on which `values` will be produced, or `None`.
+The `Graph` that contains the values, indices, and shape tensors.
- - -
-#### `tf.IndexedSlices.op` {#IndexedSlices.op}
-
-The `Operation` that produces `values` as an output.
+#### `tf.IndexedSlices.indices` {#IndexedSlices.indices}
+A 1-D `Tensor` containing the indices of the slices.
-#### Other Methods
- - -
-#### `tf.IndexedSlices.__neg__()` {#IndexedSlices.__neg__}
-
+#### `tf.IndexedSlices.name` {#IndexedSlices.name}
+The name of this `IndexedSlices`.
- - -
-#### `tf.IndexedSlices.__str__()` {#IndexedSlices.__str__}
-
+#### `tf.IndexedSlices.op` {#IndexedSlices.op}
+The `Operation` that produces `values` as an output.
- - -
-#### `tf.IndexedSlices.graph` {#IndexedSlices.graph}
+#### `tf.IndexedSlices.values` {#IndexedSlices.values}
-The `Graph` that contains the values, indices, and shape tensors.
+A `Tensor` containing the values of the slices.
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.RandomShuffleQueue.from_list.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.RandomShuffleQueue.from_list.md
new file mode 100644
index 0000000000..546ee36157
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.RandomShuffleQueue.from_list.md
@@ -0,0 +1,21 @@
+#### `tf.RandomShuffleQueue.from_list(index, queues)` {#RandomShuffleQueue.from_list}
+
+Create a queue using the queue reference from `queues[index]`.
+
+##### Args:
+
+
+* <b>`index`</b>: An integer scalar tensor that determines the input that gets
+ selected.
+* <b>`queues`</b>: A list of `QueueBase` objects.
+
+##### Returns:
+
+ A `QueueBase` object.
+
+##### Raises:
+
+
+* <b>`TypeError`</b>: When `queues` is not a list of `QueueBase` objects,
+ or when the data types of `queues` are not all the same.
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.Session.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.Session.md
index 1c183cb120..92766465b2 100644
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.Session.md
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.Session.md
@@ -48,6 +48,26 @@ create a session as follows:
sess = tf.Session(config=tf.ConfigProto(allow_soft_placement=True,
log_device_placement=True))
```
+- - -
+
+#### `tf.Session.__del__()` {#Session.__del__}
+
+
+
+
+- - -
+
+#### `tf.Session.__enter__()` {#Session.__enter__}
+
+
+
+
+- - -
+
+#### `tf.Session.__exit__(exec_type, exec_value, exec_tb)` {#Session.__exit__}
+
+
+
- - -
@@ -77,6 +97,207 @@ the session constructor.
- - -
+#### `tf.Session.as_default()` {#Session.as_default}
+
+Returns a context manager that makes this object the default session.
+
+Use with the `with` keyword to specify that calls to
+[`Operation.run()`](../../api_docs/python/framework.md#Operation.run) or
+[`Tensor.eval()`](../../api_docs/python/framework.md#Tensor.eval) should be
+executed in this session.
+
+```python
+c = tf.constant(...)
+sess = tf.Session()
+
+with sess.as_default():
+ assert tf.get_default_session() is sess
+ print(c.eval())
+```
+
+To get the current default session, use
+[`tf.get_default_session()`](#get_default_session).
+
+
+*N.B.* The `as_default` context manager *does not* close the
+session when you exit the context, and you must close the session
+explicitly.
+
+```python
+c = tf.constant(...)
+sess = tf.Session()
+with sess.as_default():
+ print(c.eval())
+# ...
+with sess.as_default():
+ print(c.eval())
+
+sess.close()
+```
+
+Alternatively, you can use `with tf.Session():` to create a
+session that is automatically closed on exiting the context,
+including when an uncaught exception is raised.
+
+*N.B.* The default graph is a property of the current thread. If you
+create a new thread, and wish to use the default session in that
+thread, you must explicitly add a `with sess.as_default():` in that
+thread's function.
+
+##### Returns:
+
+ A context manager using this session as the default session.
+
+
+- - -
+
+#### `tf.Session.close()` {#Session.close}
+
+Closes this session.
+
+Calling this method frees all resources associated with the session.
+
+##### Raises:
+
+ tf.errors.OpError: Or one of its subclasses if an error occurs while
+ closing the TensorFlow session.
+
+
+- - -
+
+#### `tf.Session.graph` {#Session.graph}
+
+The graph that was launched in this session.
+
+
+- - -
+
+#### `tf.Session.graph_def` {#Session.graph_def}
+
+A serializable version of the underlying TensorFlow graph.
+
+##### Returns:
+
+ A graph_pb2.GraphDef proto containing nodes for all of the Operations in
+ the underlying TensorFlow graph.
+
+
+- - -
+
+#### `tf.Session.partial_run(handle, fetches, feed_dict=None)` {#Session.partial_run}
+
+Continues the execution with more feeds and fetches.
+
+This is EXPERIMENTAL and subject to change.
+
+To use partial execution, a user first calls `partial_run_setup()` and
+then a sequence of `partial_run()`. `partial_run_setup` specifies the
+list of feeds and fetches that will be used in the subsequent
+`partial_run` calls.
+
+The optional `feed_dict` argument allows the caller to override
+the value of tensors in the graph. See run() for more information.
+
+Below is a simple example:
+
+```python
+a = array_ops.placeholder(dtypes.float32, shape=[])
+b = array_ops.placeholder(dtypes.float32, shape=[])
+c = array_ops.placeholder(dtypes.float32, shape=[])
+r1 = math_ops.add(a, b)
+r2 = math_ops.multiply(r1, c)
+
+h = sess.partial_run_setup([r1, r2], [a, b, c])
+res = sess.partial_run(h, r1, feed_dict={a: 1, b: 2})
+res = sess.partial_run(h, r2, feed_dict={c: res})
+```
+
+##### Args:
+
+
+* <b>`handle`</b>: A handle for a sequence of partial runs.
+* <b>`fetches`</b>: A single graph element, a list of graph elements,
+ or a dictionary whose values are graph elements or lists of graph
+ elements (see documentation for `run`).
+* <b>`feed_dict`</b>: A dictionary that maps graph elements to values
+ (described above).
+
+##### Returns:
+
+ Either a single value if `fetches` is a single graph element, or
+ a list of values if `fetches` is a list, or a dictionary with the
+ same keys as `fetches` if that is a dictionary
+ (see documentation for `run`).
+
+##### Raises:
+
+ tf.errors.OpError: Or one of its subclasses on error.
+
+
+- - -
+
+#### `tf.Session.partial_run_setup(fetches, feeds=None)` {#Session.partial_run_setup}
+
+Sets up a graph with feeds and fetches for partial run.
+
+This is EXPERIMENTAL and subject to change.
+
+Note that contrary to `run`, `feeds` only specifies the graph elements.
+The tensors will be supplied by the subsequent `partial_run` calls.
+
+##### Args:
+
+
+* <b>`fetches`</b>: A single graph element, or a list of graph elements.
+* <b>`feeds`</b>: A single graph element, or a list of graph elements.
+
+##### Returns:
+
+ A handle for partial run.
+
+##### Raises:
+
+
+* <b>`RuntimeError`</b>: If this `Session` is in an invalid state (e.g. has been
+ closed).
+* <b>`TypeError`</b>: If `fetches` or `feed_dict` keys are of an inappropriate type.
+ tf.errors.OpError: Or one of its subclasses if a TensorFlow error happens.
+
+
+- - -
+
+#### `tf.Session.reset(target, containers=None, config=None)` {#Session.reset}
+
+Resets resource containers on `target`, and closes all connected sessions.
+
+A resource container is distributed across all workers in the
+same cluster as `target`. When a resource container on `target`
+is reset, resources associated with that container will be cleared.
+In particular, all Variables in the container will become undefined:
+they lose their values and shapes.
+
+NOTE:
+(i) reset() is currently only implemented for distributed sessions.
+(ii) Any sessions on the master named by `target` will be closed.
+
+If no resource containers are provided, all containers are reset.
+
+##### Args:
+
+
+* <b>`target`</b>: The execution engine to connect to.
+* <b>`containers`</b>: A list of resource container name strings, or `None` if all
+    of the containers are to be reset.
+* <b>`config`</b>: (Optional.) Protocol buffer with configuration options.
+
+##### Raises:
+
+ tf.errors.OpError: Or one of its subclasses if an error occurs while
+ resetting containers.
+
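+A minimal sketch (the gRPC address below is a placeholder for a real worker
+target):
+
+```python
+# Clear every resource container on the given distributed target.
+tf.Session.reset("grpc://worker0.example.com:2222")
+```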
+
+- - -
+
#### `tf.Session.run(fetches, feed_dict=None, options=None, run_metadata=None)` {#Session.run}
Runs operations and evaluates tensors in `fetches`.
@@ -188,126 +409,7 @@ collected into this argument and passed back.
- - -
-#### `tf.Session.close()` {#Session.close}
-
-Closes this session.
-
-Calling this method frees all resources associated with the session.
-
-##### Raises:
-
- tf.errors.OpError: Or one of its subclasses if an error occurs while
- closing the TensorFlow session.
-
-
-
-- - -
-
-#### `tf.Session.graph` {#Session.graph}
-
-The graph that was launched in this session.
-
-
-
-- - -
-
-#### `tf.Session.as_default()` {#Session.as_default}
-
-Returns a context manager that makes this object the default session.
-
-Use with the `with` keyword to specify that calls to
-[`Operation.run()`](../../api_docs/python/framework.md#Operation.run) or
-[`Tensor.eval()`](../../api_docs/python/framework.md#Tensor.eval) should be
-executed in this session.
-
-```python
-c = tf.constant(..)
-sess = tf.Session()
-
-with sess.as_default():
- assert tf.get_default_session() is sess
- print(c.eval())
-```
-
-To get the current default session, use
-[`tf.get_default_session()`](#get_default_session).
-
-
-*N.B.* The `as_default` context manager *does not* close the
-session when you exit the context, and you must close the session
-explicitly.
-
-```python
-c = tf.constant(...)
-sess = tf.Session()
-with sess.as_default():
- print(c.eval())
-# ...
-with sess.as_default():
- print(c.eval())
-
-sess.close()
-```
-
-Alternatively, you can use `with tf.Session():` to create a
-session that is automatically closed on exiting the context,
-including when an uncaught exception is raised.
-
-*N.B.* The default graph is a property of the current thread. If you
-create a new thread, and wish to use the default session in that
-thread, you must explicitly add a `with sess.as_default():` in that
-thread's function.
-
-##### Returns:
-
- A context manager using this session as the default session.
-
-
-
-- - -
-
-#### `tf.Session.reset(target, containers=None, config=None)` {#Session.reset}
-
-Resets resource containers on `target`, and close all connected sessions.
-
-A resource container is distributed across all workers in the
-same cluster as `target`. When a resource container on `target`
-is reset, resources associated with that container will be cleared.
-In particular, all Variables in the container will become undefined:
-they lose their values and shapes.
-
-NOTE:
-(i) reset() is currently only implemented for distributed sessions.
-(ii) Any sessions on the master named by `target` will be closed.
-
-If no resource containers are provided, all containers are reset.
-
-##### Args:
-
-
-* <b>`target`</b>: The execution engine to connect to.
-* <b>`containers`</b>: A list of resource container name strings, or `None` if all of
- all the containers are to be reset.
-* <b>`config`</b>: (Optional.) Protocol buffer with configuration options.
-
-##### Raises:
-
- tf.errors.OpError: Or one of its subclasses if an error occurs while
- resetting containers.
-
-
-
-#### Other Methods
-- - -
-
-#### `tf.Session.__enter__()` {#Session.__enter__}
-
-
-
-
-- - -
-
-#### `tf.Session.__exit__(exec_type, exec_value, exec_tb)` {#Session.__exit__}
+#### `tf.Session.sess_str` {#Session.sess_str}
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.train.AdagradDAOptimizer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.train.AdagradDAOptimizer.md
index 5847c37ac9..f25ccc018a 100644
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.train.AdagradDAOptimizer.md
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.train.AdagradDAOptimizer.md
@@ -10,7 +10,6 @@ AdagradDA is typically used when there is a need for large sparsity in the
trained model. This optimizer only guarantees sparsity for linear models. Be
careful when using AdagradDA for deep networks as it will require careful
initialization of the gradient accumulators for it to train.
-
- - -
#### `tf.train.AdagradDAOptimizer.__init__(learning_rate, global_step, initial_gradient_squared_accumulator_value=0.1, l1_regularization_strength=0.0, l2_regularization_strength=0.0, use_locking=False, name='AdagradDA')` {#AdagradDAOptimizer.__init__}
@@ -39,3 +38,157 @@ Construct a new AdagradDA optimizer.
invalid.
+- - -
+
+#### `tf.train.AdagradDAOptimizer.apply_gradients(grads_and_vars, global_step=None, name=None)` {#AdagradDAOptimizer.apply_gradients}
+
+Apply gradients to variables.
+
+This is the second part of `minimize()`. It returns an `Operation` that
+applies gradients.
+
+##### Args:
+
+
+* <b>`grads_and_vars`</b>: List of (gradient, variable) pairs as returned by
+ `compute_gradients()`.
+* <b>`global_step`</b>: Optional `Variable` to increment by one after the
+ variables have been updated.
+* <b>`name`</b>: Optional name for the returned operation. Defaults to the
+ name passed to the `Optimizer` constructor.
+
+##### Returns:
+
+ An `Operation` that applies the specified gradients. If `global_step`
+ was not None, that operation also increments `global_step`.
+
+##### Raises:
+
+
+* <b>`TypeError`</b>: If `grads_and_vars` is malformed.
+* <b>`ValueError`</b>: If none of the variables have gradients.
+
+
+- - -
+
+#### `tf.train.AdagradDAOptimizer.compute_gradients(loss, var_list=None, gate_gradients=1, aggregation_method=None, colocate_gradients_with_ops=False, grad_loss=None)` {#AdagradDAOptimizer.compute_gradients}
+
+Compute gradients of `loss` for the variables in `var_list`.
+
+This is the first part of `minimize()`. It returns a list
+of (gradient, variable) pairs where "gradient" is the gradient
+for "variable". Note that "gradient" can be a `Tensor`, an
+`IndexedSlices`, or `None` if there is no gradient for the
+given variable.
+
+##### Args:
+
+
+* <b>`loss`</b>: A Tensor containing the value to minimize.
+* <b>`var_list`</b>: Optional list of `tf.Variable` to update to minimize
+ `loss`. Defaults to the list of variables collected in the graph
+    under the key `GraphKeys.TRAINABLE_VARIABLES`.
+* <b>`gate_gradients`</b>: How to gate the computation of gradients. Can be
+ `GATE_NONE`, `GATE_OP`, or `GATE_GRAPH`.
+* <b>`aggregation_method`</b>: Specifies the method used to combine gradient terms.
+ Valid values are defined in the class `AggregationMethod`.
+* <b>`colocate_gradients_with_ops`</b>: If True, try colocating gradients with
+ the corresponding op.
+* <b>`grad_loss`</b>: Optional. A `Tensor` holding the gradient computed for `loss`.
+
+##### Returns:
+
+ A list of (gradient, variable) pairs. Variable is always present, but
+ gradient can be `None`.
+
+##### Raises:
+
+
+* <b>`TypeError`</b>: If `var_list` contains anything other than `Variable` objects.
+* <b>`ValueError`</b>: If some arguments are invalid.
+
+
+- - -
+
+#### `tf.train.AdagradDAOptimizer.get_name()` {#AdagradDAOptimizer.get_name}
+
+
+
+
+- - -
+
+#### `tf.train.AdagradDAOptimizer.get_slot(var, name)` {#AdagradDAOptimizer.get_slot}
+
+Return a slot named `name` created for `var` by the Optimizer.
+
+Some `Optimizer` subclasses use additional variables. For example
+`Momentum` and `Adagrad` use variables to accumulate updates. This method
+gives access to these `Variable` objects if for some reason you need them.
+
+Use `get_slot_names()` to get the list of slot names created by the
+`Optimizer`.
+
+##### Args:
+
+
+* <b>`var`</b>: A variable passed to `minimize()` or `apply_gradients()`.
+* <b>`name`</b>: A string.
+
+##### Returns:
+
+ The `Variable` for the slot if it was created, `None` otherwise.
+
+
+- - -
+
+#### `tf.train.AdagradDAOptimizer.get_slot_names()` {#AdagradDAOptimizer.get_slot_names}
+
+Return a list of the names of slots created by the `Optimizer`.
+
+See `get_slot()`.
+
+##### Returns:
+
+ A list of strings.
+
+
+- - -
+
+#### `tf.train.AdagradDAOptimizer.minimize(loss, global_step=None, var_list=None, gate_gradients=1, aggregation_method=None, colocate_gradients_with_ops=False, name=None, grad_loss=None)` {#AdagradDAOptimizer.minimize}
+
+Add operations to minimize `loss` by updating `var_list`.
+
+This method simply combines calls to `compute_gradients()` and
+`apply_gradients()`. If you want to process the gradients before applying
+them, call `compute_gradients()` and `apply_gradients()` explicitly instead
+of using this function, as sketched at the end of this section.
+
+##### Args:
+
+
+* <b>`loss`</b>: A `Tensor` containing the value to minimize.
+* <b>`global_step`</b>: Optional `Variable` to increment by one after the
+ variables have been updated.
+* <b>`var_list`</b>: Optional list of `Variable` objects to update to minimize
+ `loss`. Defaults to the list of variables collected in the graph
+ under the key `GraphKeys.TRAINABLE_VARIABLES`.
+* <b>`gate_gradients`</b>: How to gate the computation of gradients. Can be
+ `GATE_NONE`, `GATE_OP`, or `GATE_GRAPH`.
+* <b>`aggregation_method`</b>: Specifies the method used to combine gradient terms.
+ Valid values are defined in the class `AggregationMethod`.
+* <b>`colocate_gradients_with_ops`</b>: If True, try colocating gradients with
+ the corresponding op.
+* <b>`name`</b>: Optional name for the returned operation.
+* <b>`grad_loss`</b>: Optional. A `Tensor` holding the gradient computed for `loss`.
+
+##### Returns:
+
+ An Operation that updates the variables in `var_list`. If `global_step`
+ was not `None`, that operation also increments `global_step`.
+
+##### Raises:
+
+
+* <b>`ValueError`</b>: If some of the variables are not `Variable` objects.
+
+
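+- - -
+
+A hedged sketch of the two-step form mentioned under `minimize()` above, useful
+when the gradients need processing (here, clipping) before they are applied;
+`loss` and `global_step` are assumed to already exist:
+
+```python
+opt = tf.train.AdagradDAOptimizer(learning_rate=0.01, global_step=global_step)
+
+grads_and_vars = opt.compute_gradients(loss)
+clipped = [(tf.clip_by_value(g, -1.0, 1.0), v)
+           for g, v in grads_and_vars if g is not None]
+train_op = opt.apply_gradients(clipped, global_step=global_step)
+```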
diff --git a/tensorflow/g3doc/api_docs/python/index.md b/tensorflow/g3doc/api_docs/python/index.md
index 9bb1ff184e..0de79c8474 100644
--- a/tensorflow/g3doc/api_docs/python/index.md
+++ b/tensorflow/g3doc/api_docs/python/index.md
@@ -31,7 +31,6 @@
* [`register_tensor_conversion_function`](../../api_docs/python/framework.md#register_tensor_conversion_function)
* [`RegisterGradient`](../../api_docs/python/framework.md#RegisterGradient)
* [`reset_default_graph`](../../api_docs/python/framework.md#reset_default_graph)
- * [`shape`](../../api_docs/python/framework.md#shape)
* [`Tensor`](../../api_docs/python/framework.md#Tensor)
* [`TensorShape`](../../api_docs/python/framework.md#TensorShape)
@@ -363,13 +362,7 @@
* [`scan`](../../api_docs/python/functional_ops.md#scan)
* **[TensorArray Operations](../../api_docs/python/tensor_array_ops.md)**:
- * [`concat`](../../api_docs/python/tensor_array_ops.md#concat)
- * [`gather`](../../api_docs/python/tensor_array_ops.md#gather)
- * [`identity`](../../api_docs/python/tensor_array_ops.md#identity)
- * [`split`](../../api_docs/python/tensor_array_ops.md#split)
- * [`stack`](../../api_docs/python/tensor_array_ops.md#stack)
* [`TensorArray`](../../api_docs/python/tensor_array_ops.md#TensorArray)
- * [`unstack`](../../api_docs/python/tensor_array_ops.md#unstack)
* **[Tensor Handle Operations](../../api_docs/python/session_ops.md)**:
* [`delete_session_tensor`](../../api_docs/python/session_ops.md#delete_session_tensor)
@@ -478,7 +471,6 @@
* [`ReaderBase`](../../api_docs/python/io_ops.md#ReaderBase)
* [`shuffle_batch`](../../api_docs/python/io_ops.md#shuffle_batch)
* [`shuffle_batch_join`](../../api_docs/python/io_ops.md#shuffle_batch_join)
- * [`size`](../../api_docs/python/io_ops.md#size)
* [`slice_input_producer`](../../api_docs/python/io_ops.md#slice_input_producer)
* [`sparse_placeholder`](../../api_docs/python/io_ops.md#sparse_placeholder)
* [`SparseConditionalAccumulator`](../../api_docs/python/io_ops.md#SparseConditionalAccumulator)
diff --git a/tensorflow/g3doc/api_docs/python/io_ops.md b/tensorflow/g3doc/api_docs/python/io_ops.md
index 458480c876..db2abc44b3 100644
--- a/tensorflow/g3doc/api_docs/python/io_ops.md
+++ b/tensorflow/g3doc/api_docs/python/io_ops.md
@@ -1818,6 +1818,162 @@ See [`tf.FIFOQueue`](#FIFOQueue) and
[`tf.RandomShuffleQueue`](#RandomShuffleQueue) for concrete
implementations of this class, and instructions on how to create
them.
+- - -
+
+#### `tf.QueueBase.__init__(dtypes, shapes, names, queue_ref)` {#QueueBase.__init__}
+
+Constructs a queue object from a queue reference.
+
+The two optional lists, `shapes` and `names`, must be of the same length
+as `dtypes` if provided. The values at a given index `i` indicate the
+shape and name to use for the corresponding queue component in `dtypes`.
+
+##### Args:
+
+
+* <b>`dtypes`</b>: A list of types. The length of dtypes must equal the number
+ of tensors in each element.
+* <b>`shapes`</b>: Constraints on the shapes of tensors in an element:
+ A list of shape tuples or None. This list is the same length
+    as dtypes. If the shapes of any tensors in the element are constrained,
+ all must be; shapes can be None if the shapes should not be constrained.
+* <b>`names`</b>: Optional list of names. If provided, the `enqueue()` and
+ `dequeue()` methods will use dictionaries with these names as keys.
+ Must be None or a list or tuple of the same length as `dtypes`.
+* <b>`queue_ref`</b>: The queue reference, i.e. the output of the queue op.
+
+##### Raises:
+
+
+* <b>`ValueError`</b>: If one of the arguments is invalid.
+
+
+- - -
+
+#### `tf.QueueBase.close(cancel_pending_enqueues=False, name=None)` {#QueueBase.close}
+
+Closes this queue.
+
+This operation signals that no more elements will be enqueued in
+the given queue. Subsequent `enqueue` and `enqueue_many`
+operations will fail. Subsequent `dequeue` and `dequeue_many`
+operations will continue to succeed if sufficient elements remain
+in the queue. Subsequent `dequeue` and `dequeue_many` operations
+that would block will fail immediately.
+
+If `cancel_pending_enqueues` is `True`, all pending requests will also
+be cancelled.
+
+##### Args:
+
+
+* <b>`cancel_pending_enqueues`</b>: (Optional.) A boolean, defaulting to
+ `False` (described above).
+* <b>`name`</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The operation that closes the queue.
+
+
+- - -
+
+#### `tf.QueueBase.dequeue(name=None)` {#QueueBase.dequeue}
+
+Dequeues one element from this queue.
+
+If the queue is empty when this operation executes, it will block
+until there is an element to dequeue.
+
+At runtime, this operation may raise an error if the queue is
+[closed](#QueueBase.close) before or during its execution. If the
+queue is closed, the queue is empty, and there are no pending
+enqueue operations that can fulfill this request,
+`tf.errors.OutOfRangeError` will be raised. If the session is
+[closed](../../api_docs/python/client.md#Session.close),
+`tf.errors.CancelledError` will be raised.
+
+##### Args:
+
+
+* <b>`name`</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The tuple of tensors that was dequeued.
+
+
+- - -
+
+#### `tf.QueueBase.dequeue_many(n, name=None)` {#QueueBase.dequeue_many}
+
+Dequeues and concatenates `n` elements from this queue.
+
+This operation concatenates queue-element component tensors along
+the 0th dimension to make a single component tensor. All of the
+components in the dequeued tuple will have size `n` in the 0th dimension.
+
+If the queue is closed and there are fewer than `n` elements left, then an
+`OutOfRange` exception is raised.
+
+At runtime, this operation may raise an error if the queue is
+[closed](#QueueBase.close) before or during its execution. If the
+queue is closed, the queue contains fewer than `n` elements, and
+there are no pending enqueue operations that can fulfill this
+request, `tf.errors.OutOfRangeError` will be raised. If the
+session is [closed](../../api_docs/python/client.md#Session.close),
+`tf.errors.CancelledError` will be raised.
+
+##### Args:
+
+
+* <b>`n`</b>: A scalar `Tensor` containing the number of elements to dequeue.
+* <b>`name`</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The tuple of concatenated tensors that was dequeued.
+
+
+- - -
+
+#### `tf.QueueBase.dequeue_up_to(n, name=None)` {#QueueBase.dequeue_up_to}
+
+Dequeues and concatenates `n` elements from this queue.
+
+**Note** This operation is not supported by all queues. If a queue does not
+support DequeueUpTo, then a `tf.errors.UnimplementedError` is raised.
+
+This operation concatenates queue-element component tensors along
+the 0th dimension to make a single component tensor. If the queue
+has not been closed, all of the components in the dequeued tuple
+will have size `n` in the 0th dimension.
+
+If the queue is closed and there are more than `0` but fewer than
+`n` elements remaining, then instead of raising a
+`tf.errors.OutOfRangeError` like [`dequeue_many`](#QueueBase.dequeue_many),
+fewer than `n` elements are returned immediately. If the queue is
+closed and there are `0` elements left in the queue, then a
+`tf.errors.OutOfRangeError` is raised just like in `dequeue_many`.
+Otherwise the behavior is identical to `dequeue_many`.
+
+##### Args:
+
+
+* <b>`n`</b>: A scalar `Tensor` containing the number of elements to dequeue.
+* <b>`name`</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The tuple of concatenated tensors that was dequeued.
+
+
+- - -
+
+#### `tf.QueueBase.dtypes` {#QueueBase.dtypes}
+
+The list of dtypes for each component of a queue element.
+
- - -
@@ -1883,65 +2039,56 @@ with `cancel_pending_enqueues=True`, or (ii) the session is
The operation that enqueues a batch of tuples of tensors to the queue.
-
- - -
-#### `tf.QueueBase.dequeue(name=None)` {#QueueBase.dequeue}
-
-Dequeues one element from this queue.
-
-If the queue is empty when this operation executes, it will block
-until there is an element to dequeue.
+#### `tf.QueueBase.from_list(index, queues)` {#QueueBase.from_list}
-At runtime, this operation may raise an error if the queue is
-[closed](#QueueBase.close) before or during its execution. If the
-queue is closed, the queue is empty, and there are no pending
-enqueue operations that can fulfill this request,
-`tf.errors.OutOfRangeError` will be raised. If the session is
-[closed](../../api_docs/python/client.md#Session.close),
-`tf.errors.CancelledError` will be raised.
+Create a queue using the queue reference from `queues[index]`.
##### Args:
-* <b>`name`</b>: A name for the operation (optional).
+* <b>`index`</b>: An integer scalar tensor that determines the input that gets
+ selected.
+* <b>`queues`</b>: A list of `QueueBase` objects.
##### Returns:
- The tuple of tensors that was dequeued.
+ A `QueueBase` object.
+
+##### Raises:
+
+
+* <b>`TypeError`</b>: When `queues` is not a list of `QueueBase` objects,
+ or when the data types of `queues` are not all the same.
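+
+A small sketch of routing between two queues with a scalar index tensor (the
+names are illustrative only):
+
+```python
+q0 = tf.FIFOQueue(10, [tf.float32])
+q1 = tf.FIFOQueue(10, [tf.float32])
+
+index = tf.placeholder(tf.int32, shape=[])
+selected = tf.QueueBase.from_list(index, [q0, q1])
+value = selected.dequeue()  # dequeues from q0 or q1 depending on `index`
+```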
- - -
-#### `tf.QueueBase.dequeue_many(n, name=None)` {#QueueBase.dequeue_many}
+#### `tf.QueueBase.name` {#QueueBase.name}
-Dequeues and concatenates `n` elements from this queue.
+The name of the underlying queue.
-This operation concatenates queue-element component tensors along
-the 0th dimension to make a single component tensor. All of the
-components in the dequeued tuple will have size `n` in the 0th dimension.
-If the queue is closed and there are less than `n` elements left, then an
-`OutOfRange` exception is raised.
+- - -
-At runtime, this operation may raise an error if the queue is
-[closed](#QueueBase.close) before or during its execution. If the
-queue is closed, the queue contains fewer than `n` elements, and
-there are no pending enqueue operations that can fulfill this
-request, `tf.errors.OutOfRangeError` will be raised. If the
-session is [closed](../../api_docs/python/client.md#Session.close),
-`tf.errors.CancelledError` will be raised.
+#### `tf.QueueBase.names` {#QueueBase.names}
-##### Args:
+The list of names for each component of a queue element.
-* <b>`n`</b>: A scalar `Tensor` containing the number of elements to dequeue.
-* <b>`name`</b>: A name for the operation (optional).
+- - -
-##### Returns:
+#### `tf.QueueBase.queue_ref` {#QueueBase.queue_ref}
+
+The underlying queue reference.
- The tuple of concatenated tensors that was dequeued.
+- - -
+
+#### `tf.QueueBase.shapes` {#QueueBase.shapes}
+
+The list of shapes for each component of a queue element.
- - -
@@ -1963,7 +2110,51 @@ Compute the number of elements in this queue.
- - -
-#### `tf.QueueBase.close(cancel_pending_enqueues=False, name=None)` {#QueueBase.close}
+### `class tf.FIFOQueue` {#FIFOQueue}
+
+A queue implementation that dequeues elements in first-in first-out order.
+
+See [`tf.QueueBase`](#QueueBase) for a description of the methods on
+this class.
+- - -
+
+#### `tf.FIFOQueue.__init__(capacity, dtypes, shapes=None, names=None, shared_name=None, name='fifo_queue')` {#FIFOQueue.__init__}
+
+Creates a queue that dequeues elements in a first-in first-out order.
+
+A `FIFOQueue` has bounded capacity; supports multiple concurrent
+producers and consumers; and provides exactly-once delivery.
+
+A `FIFOQueue` holds a list of up to `capacity` elements. Each
+element is a fixed-length tuple of tensors whose dtypes are
+described by `dtypes`, and whose shapes are optionally described
+by the `shapes` argument.
+
+If the `shapes` argument is specified, each component of a queue
+element must have the respective fixed shape. If it is
+unspecified, different queue elements may have different shapes,
+but the use of `dequeue_many` is disallowed.
+
+##### Args:
+
+
+* <b>`capacity`</b>: An integer. The upper bound on the number of elements
+ that may be stored in this queue.
+* <b>`dtypes`</b>: A list of `DType` objects. The length of `dtypes` must equal
+ the number of tensors in each queue element.
+* <b>`shapes`</b>: (Optional.) A list of fully-defined `TensorShape` objects
+ with the same length as `dtypes`, or `None`.
+* <b>`names`</b>: (Optional.) A list of strings naming the components in the queue
+ with the same length as `dtypes`, or `None`. If specified the dequeue
+ methods return a dictionary with the names as keys.
+* <b>`shared_name`</b>: (Optional.) If non-empty, this queue will be shared under
+ the given name across multiple sessions.
+* <b>`name`</b>: Optional name for the queue operation.
+
+
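+A runnable sketch of the first-in first-out behavior described above:
+
+```python
+q = tf.FIFOQueue(capacity=3, dtypes=[tf.float32], shapes=[[]])
+enqueue_op = q.enqueue_many(([1.0, 2.0, 3.0],))
+dequeue_op = q.dequeue()
+
+with tf.Session() as sess:
+    sess.run(enqueue_op)
+    print(sess.run(dequeue_op))  # 1.0 -- first in, first out
+```
+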
+- - -
+
+#### `tf.FIFOQueue.close(cancel_pending_enqueues=False, name=None)` {#FIFOQueue.close}
Closes this queue.
@@ -1989,41 +2180,68 @@ be cancelled.
The operation that closes the queue.
-
-#### Other Methods
- - -
-#### `tf.QueueBase.__init__(dtypes, shapes, names, queue_ref)` {#QueueBase.__init__}
+#### `tf.FIFOQueue.dequeue(name=None)` {#FIFOQueue.dequeue}
-Constructs a queue object from a queue reference.
+Dequeues one element from this queue.
-The two optional lists, `shapes` and `names`, must be of the same length
-as `dtypes` if provided. The values at a given index `i` indicate the
-shape and name to use for the corresponding queue component in `dtypes`.
+If the queue is empty when this operation executes, it will block
+until there is an element to dequeue.
+
+At runtime, this operation may raise an error if the queue is
+[closed](#QueueBase.close) before or during its execution. If the
+queue is closed, the queue is empty, and there are no pending
+enqueue operations that can fulfill this request,
+`tf.errors.OutOfRangeError` will be raised. If the session is
+[closed](../../api_docs/python/client.md#Session.close),
+`tf.errors.CancelledError` will be raised.
##### Args:
-* <b>`dtypes`</b>: A list of types. The length of dtypes must equal the number
- of tensors in each element.
-* <b>`shapes`</b>: Constraints on the shapes of tensors in an element:
- A list of shape tuples or None. This list is the same length
- as dtypes. If the shape of any tensors in the element are constrained,
- all must be; shapes can be None if the shapes should not be constrained.
-* <b>`names`</b>: Optional list of names. If provided, the `enqueue()` and
- `dequeue()` methods will use dictionaries with these names as keys.
- Must be None or a list or tuple of the same length as `dtypes`.
-* <b>`queue_ref`</b>: The queue reference, i.e. the output of the queue op.
+* <b>`name`</b>: A name for the operation (optional).
-##### Raises:
+##### Returns:
+ The tuple of tensors that was dequeued.
-* <b>`ValueError`</b>: If one of the arguments is invalid.
+
+- - -
+
+#### `tf.FIFOQueue.dequeue_many(n, name=None)` {#FIFOQueue.dequeue_many}
+
+Dequeues and concatenates `n` elements from this queue.
+
+This operation concatenates queue-element component tensors along
+the 0th dimension to make a single component tensor. All of the
+components in the dequeued tuple will have size `n` in the 0th dimension.
+
+If the queue is closed and there are fewer than `n` elements left, then an
+`OutOfRange` exception is raised.
+
+At runtime, this operation may raise an error if the queue is
+[closed](#QueueBase.close) before or during its execution. If the
+queue is closed, the queue contains fewer than `n` elements, and
+there are no pending enqueue operations that can fulfill this
+request, `tf.errors.OutOfRangeError` will be raised. If the
+session is [closed](../../api_docs/python/client.md#Session.close),
+`tf.errors.CancelledError` will be raised.
+
+##### Args:
+
+
+* <b>`n`</b>: A scalar `Tensor` containing the number of elements to dequeue.
+* <b>`name`</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The tuple of concatenated tensors that was dequeued.
- - -
-#### `tf.QueueBase.dequeue_up_to(n, name=None)` {#QueueBase.dequeue_up_to}
+#### `tf.FIFOQueue.dequeue_up_to(n, name=None)` {#FIFOQueue.dequeue_up_to}
Dequeues and concatenates `n` elements from this queue.
@@ -2056,14 +2274,78 @@ Otherwise the behavior is identical to `dequeue_many`.
- - -
-#### `tf.QueueBase.dtypes` {#QueueBase.dtypes}
+#### `tf.FIFOQueue.dtypes` {#FIFOQueue.dtypes}
The list of dtypes for each component of a queue element.
- - -
-#### `tf.QueueBase.from_list(index, queues)` {#QueueBase.from_list}
+#### `tf.FIFOQueue.enqueue(vals, name=None)` {#FIFOQueue.enqueue}
+
+Enqueues one element to this queue.
+
+If the queue is full when this operation executes, it will block
+until the element has been enqueued.
+
+At runtime, this operation may raise an error if the queue is
+[closed](#QueueBase.close) before or during its execution. If the
+queue is closed before this operation runs,
+`tf.errors.CancelledError` will be raised. If this operation is
+blocked, and either (i) the queue is closed by a close operation
+with `cancel_pending_enqueues=True`, or (ii) the session is
+[closed](../../api_docs/python/client.md#Session.close),
+`tf.errors.CancelledError` will be raised.
+
+##### Args:
+
+
+* <b>`vals`</b>: A tensor, a list or tuple of tensors, or a dictionary containing
+ the values to enqueue.
+* <b>`name`</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The operation that enqueues a new tuple of tensors to the queue.
+
+
+- - -
+
+#### `tf.FIFOQueue.enqueue_many(vals, name=None)` {#FIFOQueue.enqueue_many}
+
+Enqueues zero or more elements to this queue.
+
+This operation slices each component tensor along the 0th dimension to
+make multiple queue elements. All of the tensors in `vals` must have the
+same size in the 0th dimension.
+
+If the queue is full when this operation executes, it will block
+until all of the elements have been enqueued.
+
+At runtime, this operation may raise an error if the queue is
+[closed](#QueueBase.close) before or during its execution. If the
+queue is closed before this operation runs,
+`tf.errors.CancelledError` will be raised. If this operation is
+blocked, and either (i) the queue is closed by a close operation
+with `cancel_pending_enqueues=True`, or (ii) the session is
+[closed](../../api_docs/python/client.md#Session.close),
+`tf.errors.CancelledError` will be raised.
+
+##### Args:
+
+
+* <b>`vals`</b>: A tensor, a list or tuple of tensors, or a dictionary
+ from which the queue elements are taken.
+* <b>`name`</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The operation that enqueues a batch of tuples of tensors to the queue.
+
+
+- - -
+
+#### `tf.FIFOQueue.from_list(index, queues)` {#FIFOQueue.from_list}
Create a queue using the queue reference from `queues[index]`.
@@ -2087,76 +2369,46 @@ Create a queue using the queue reference from `queues[index]`.
- - -
-#### `tf.QueueBase.name` {#QueueBase.name}
+#### `tf.FIFOQueue.name` {#FIFOQueue.name}
The name of the underlying queue.
- - -
-#### `tf.QueueBase.names` {#QueueBase.names}
+#### `tf.FIFOQueue.names` {#FIFOQueue.names}
The list of names for each component of a queue element.
- - -
-#### `tf.QueueBase.queue_ref` {#QueueBase.queue_ref}
+#### `tf.FIFOQueue.queue_ref` {#FIFOQueue.queue_ref}
The underlying queue reference.
- - -
-#### `tf.QueueBase.shapes` {#QueueBase.shapes}
+#### `tf.FIFOQueue.shapes` {#FIFOQueue.shapes}
The list of shapes for each component of a queue element.
-
-- - -
-
-### `class tf.FIFOQueue` {#FIFOQueue}
-
-A queue implementation that dequeues elements in first-in first-out order.
-
-See [`tf.QueueBase`](#QueueBase) for a description of the methods on
-this class.
-
- - -
-#### `tf.FIFOQueue.__init__(capacity, dtypes, shapes=None, names=None, shared_name=None, name='fifo_queue')` {#FIFOQueue.__init__}
+#### `tf.FIFOQueue.size(name=None)` {#FIFOQueue.size}
-Creates a queue that dequeues elements in a first-in first-out order.
-
-A `FIFOQueue` has bounded capacity; supports multiple concurrent
-producers and consumers; and provides exactly-once delivery.
+Compute the number of elements in this queue.
-A `FIFOQueue` holds a list of up to `capacity` elements. Each
-element is a fixed-length tuple of tensors whose dtypes are
-described by `dtypes`, and whose shapes are optionally described
-by the `shapes` argument.
+##### Args:
-If the `shapes` argument is specified, each component of a queue
-element must have the respective fixed shape. If it is
-unspecified, different queue elements may have different shapes,
-but the use of `dequeue_many` is disallowed.
-##### Args:
+* <b>`name`</b>: A name for the operation (optional).
+##### Returns:
-* <b>`capacity`</b>: An integer. The upper bound on the number of elements
- that may be stored in this queue.
-* <b>`dtypes`</b>: A list of `DType` objects. The length of `dtypes` must equal
- the number of tensors in each queue element.
-* <b>`shapes`</b>: (Optional.) A list of fully-defined `TensorShape` objects
- with the same length as `dtypes`, or `None`.
-* <b>`names`</b>: (Optional.) A list of string naming the components in the queue
- with the same length as `dtypes`, or `None`. If specified the dequeue
- methods return a dictionary with the names as keys.
-* <b>`shared_name`</b>: (Optional.) If non-empty, this queue will be shared under
- the given name across multiple sessions.
-* <b>`name`</b>: Optional name for the queue operation.
+ A scalar tensor containing the number of elements in this queue.
@@ -2171,7 +2423,6 @@ supporting `dequeue_many`. See the constructor for more details.
See [`tf.QueueBase`](#QueueBase) for a description of the methods on
this class.
-
- - -
#### `tf.PaddingFIFOQueue.__init__(capacity, dtypes, shapes, names=None, shared_name=None, name='padding_fifo_queue')` {#PaddingFIFOQueue.__init__}
@@ -2219,6 +2470,265 @@ shape of all elements in the given batch.
dtypes and names do not match.
+- - -
+
+#### `tf.PaddingFIFOQueue.close(cancel_pending_enqueues=False, name=None)` {#PaddingFIFOQueue.close}
+
+Closes this queue.
+
+This operation signals that no more elements will be enqueued in
+the given queue. Subsequent `enqueue` and `enqueue_many`
+operations will fail. Subsequent `dequeue` and `dequeue_many`
+operations will continue to succeed if sufficient elements remain
+in the queue. Subsequent `dequeue` and `dequeue_many` operations
+that would block will fail immediately.
+
+If `cancel_pending_enqueues` is `True`, all pending requests will also
+be cancelled.
+
+##### Args:
+
+
+* <b>`cancel_pending_enqueues`</b>: (Optional.) A boolean, defaulting to
+ `False` (described above).
+* <b>`name`</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The operation that closes the queue.
+
+
+- - -
+
+#### `tf.PaddingFIFOQueue.dequeue(name=None)` {#PaddingFIFOQueue.dequeue}
+
+Dequeues one element from this queue.
+
+If the queue is empty when this operation executes, it will block
+until there is an element to dequeue.
+
+At runtime, this operation may raise an error if the queue is
+[closed](#QueueBase.close) before or during its execution. If the
+queue is closed, the queue is empty, and there are no pending
+enqueue operations that can fulfill this request,
+`tf.errors.OutOfRangeError` will be raised. If the session is
+[closed](../../api_docs/python/client.md#Session.close),
+`tf.errors.CancelledError` will be raised.
+
+##### Args:
+
+
+* <b>`name`</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The tuple of tensors that was dequeued.
+
+
+- - -
+
+#### `tf.PaddingFIFOQueue.dequeue_many(n, name=None)` {#PaddingFIFOQueue.dequeue_many}
+
+Dequeues and concatenates `n` elements from this queue.
+
+This operation concatenates queue-element component tensors along
+the 0th dimension to make a single component tensor. All of the
+components in the dequeued tuple will have size `n` in the 0th dimension.
+
+If the queue is closed and there are fewer than `n` elements left, then an
+`OutOfRange` exception is raised.
+
+At runtime, this operation may raise an error if the queue is
+[closed](#QueueBase.close) before or during its execution. If the
+queue is closed, the queue contains fewer than `n` elements, and
+there are no pending enqueue operations that can fulfill this
+request, `tf.errors.OutOfRangeError` will be raised. If the
+session is [closed](../../api_docs/python/client.md#Session.close),
+`tf.errors.CancelledError` will be raised.
+
+##### Args:
+
+
+* <b>`n`</b>: A scalar `Tensor` containing the number of elements to dequeue.
+* <b>`name`</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The tuple of concatenated tensors that was dequeued.
+
+
+- - -
+
+#### `tf.PaddingFIFOQueue.dequeue_up_to(n, name=None)` {#PaddingFIFOQueue.dequeue_up_to}
+
+Dequeues and concatenates `n` elements from this queue.
+
+**Note:** This operation is not supported by all queues. If a queue does not
+support DequeueUpTo, then a `tf.errors.UnimplementedError` is raised.
+
+This operation concatenates queue-element component tensors along
+the 0th dimension to make a single component tensor. If the queue
+has not been closed, all of the components in the dequeued tuple
+will have size `n` in the 0th dimension.
+
+If the queue is closed and there are more than `0` but fewer than
+`n` elements remaining, then instead of raising a
+`tf.errors.OutOfRangeError` like [`dequeue_many`](#QueueBase.dequeue_many),
+fewer than `n` elements are returned immediately. If the queue is
+closed and there are `0` elements left in the queue, then a
+`tf.errors.OutOfRangeError` is raised just like in `dequeue_many`.
+Otherwise the behavior is identical to `dequeue_many`.
+
+##### Args:
+
+
+* <b>`n`</b>: A scalar `Tensor` containing the number of elements to dequeue.
+* <b>`name`</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The tuple of concatenated tensors that was dequeued.
+
+
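+For illustration, a minimal sketch of the partial-batch behaviour described
+above (the capacity and values are arbitrary, and it assumes the runtime
+supports `DequeueUpTo` for this queue type):
+
+```python
+import tensorflow as tf
+
+q = tf.PaddingFIFOQueue(capacity=10, dtypes=[tf.int32], shapes=[[]])
+enqueue_op = q.enqueue_many(([1, 2, 3, 4, 5],))
+close_op = q.close()
+batch = q.dequeue_up_to(4)
+
+with tf.Session() as sess:
+    sess.run(enqueue_op)
+    sess.run(close_op)
+    print(sess.run(batch))  # [1 2 3 4]
+    print(sess.run(batch))  # [5] -- the closed queue returns the remainder
+```
+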
+- - -
+
+#### `tf.PaddingFIFOQueue.dtypes` {#PaddingFIFOQueue.dtypes}
+
+The list of dtypes for each component of a queue element.
+
+
+- - -
+
+#### `tf.PaddingFIFOQueue.enqueue(vals, name=None)` {#PaddingFIFOQueue.enqueue}
+
+Enqueues one element to this queue.
+
+If the queue is full when this operation executes, it will block
+until the element has been enqueued.
+
+At runtime, this operation may raise an error if the queue is
+[closed](#QueueBase.close) before or during its execution. If the
+queue is closed before this operation runs,
+`tf.errors.CancelledError` will be raised. If this operation is
+blocked, and either (i) the queue is closed by a close operation
+with `cancel_pending_enqueues=True`, or (ii) the session is
+[closed](../../api_docs/python/client.md#Session.close),
+`tf.errors.CancelledError` will be raised.
+
+##### Args:
+
+
+* <b>`vals`</b>: A tensor, a list or tuple of tensors, or a dictionary containing
+ the values to enqueue.
+* <b>`name`</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The operation that enqueues a new tuple of tensors to the queue.
+
+
+- - -
+
+#### `tf.PaddingFIFOQueue.enqueue_many(vals, name=None)` {#PaddingFIFOQueue.enqueue_many}
+
+Enqueues zero or more elements to this queue.
+
+This operation slices each component tensor along the 0th dimension to
+make multiple queue elements. All of the tensors in `vals` must have the
+same size in the 0th dimension.
+
+If the queue is full when this operation executes, it will block
+until all of the elements have been enqueued.
+
+At runtime, this operation may raise an error if the queue is
+[closed](#QueueBase.close) before or during its execution. If the
+queue is closed before this operation runs,
+`tf.errors.CancelledError` will be raised. If this operation is
+blocked, and either (i) the queue is closed by a close operation
+with `cancel_pending_enqueues=True`, or (ii) the session is
+[closed](../../api_docs/python/client.md#Session.close),
+`tf.errors.CancelledError` will be raised.
+
+##### Args:
+
+
+* <b>`vals`</b>: A tensor, a list or tuple of tensors, or a dictionary
+ from which the queue elements are taken.
+* <b>`name`</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The operation that enqueues a batch of tuples of tensors to the queue.
+
+
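+For illustration, a minimal sketch of the slicing behaviour described above
+(the shapes and values are arbitrary):
+
+```python
+import tensorflow as tf
+
+# A [3, 2] tensor is sliced along dimension 0 into three queue elements
+# of shape [2].
+q = tf.PaddingFIFOQueue(capacity=10, dtypes=[tf.float32], shapes=[[2]])
+enqueue_op = q.enqueue_many(([[1., 2.], [3., 4.], [5., 6.]],))
+one_element = q.dequeue()
+
+with tf.Session() as sess:
+    sess.run(enqueue_op)
+    print(sess.run(one_element))  # [1. 2.]
+```
+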
+- - -
+
+#### `tf.PaddingFIFOQueue.from_list(index, queues)` {#PaddingFIFOQueue.from_list}
+
+Create a queue using the queue reference from `queues[index]`.
+
+##### Args:
+
+
+* <b>`index`</b>: An integer scalar tensor that determines the input that gets
+ selected.
+* <b>`queues`</b>: A list of `QueueBase` objects.
+
+##### Returns:
+
+ A `QueueBase` object.
+
+##### Raises:
+
+
+* <b>`TypeError`</b>: When `queues` is not a list of `QueueBase` objects,
+ or when the data types of `queues` are not all the same.
+
+
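+As a hedged sketch only, selecting between two compatible queues at run
+time (the queues, placeholder, and values below are hypothetical):
+
+```python
+import tensorflow as tf
+
+q0 = tf.PaddingFIFOQueue(10, dtypes=[tf.int32], shapes=[[]])
+q1 = tf.PaddingFIFOQueue(10, dtypes=[tf.int32], shapes=[[]])
+# `which` selects q0 or q1 when the dequeue actually runs.
+which = tf.placeholder(tf.int32, shape=[])
+selected = tf.QueueBase.from_list(which, [q0, q1])
+value = selected.dequeue()
+
+with tf.Session() as sess:
+    sess.run(q0.enqueue((7,)))
+    print(sess.run(value, feed_dict={which: 0}))  # 7
+```
+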
+- - -
+
+#### `tf.PaddingFIFOQueue.name` {#PaddingFIFOQueue.name}
+
+The name of the underlying queue.
+
+
+- - -
+
+#### `tf.PaddingFIFOQueue.names` {#PaddingFIFOQueue.names}
+
+The list of names for each component of a queue element.
+
+
+- - -
+
+#### `tf.PaddingFIFOQueue.queue_ref` {#PaddingFIFOQueue.queue_ref}
+
+The underlying queue reference.
+
+
+- - -
+
+#### `tf.PaddingFIFOQueue.shapes` {#PaddingFIFOQueue.shapes}
+
+The list of shapes for each component of a queue element.
+
+
+- - -
+
+#### `tf.PaddingFIFOQueue.size(name=None)` {#PaddingFIFOQueue.size}
+
+Compute the number of elements in this queue.
+
+##### Args:
+
+
+* <b>`name`</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A scalar tensor containing the number of elements in this queue.
+
+
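+For illustration, a minimal sketch of the padding behaviour that is specific
+to `PaddingFIFOQueue` (the values are arbitrary): variable-length elements
+are zero-padded up to the longest element in the dequeued batch.
+
+```python
+import tensorflow as tf
+
+q = tf.PaddingFIFOQueue(capacity=10, dtypes=[tf.int32], shapes=[[None]])
+# Elements of length 2 and 3; dequeue_many pads the batch to length 3.
+enq_a = q.enqueue(([1, 2],))
+enq_b = q.enqueue(([3, 4, 5],))
+batch = q.dequeue_many(2)
+
+with tf.Session() as sess:
+    sess.run(enq_a)
+    sess.run(enq_b)
+    print(sess.run(batch))  # [[1 2 0]
+                            #  [3 4 5]]
+```
+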
- - -
@@ -2228,7 +2738,6 @@ A queue implementation that dequeues elements in a random order.
See [`tf.QueueBase`](#QueueBase) for a description of the methods on
this class.
-
- - -
#### `tf.RandomShuffleQueue.__init__(capacity, min_after_dequeue, dtypes, shapes=None, names=None, seed=None, shared_name=None, name='random_shuffle_queue')` {#RandomShuffleQueue.__init__}
@@ -2278,6 +2787,265 @@ queue has been closed.
* <b>`name`</b>: Optional name for the queue operation.
+- - -
+
+#### `tf.RandomShuffleQueue.close(cancel_pending_enqueues=False, name=None)` {#RandomShuffleQueue.close}
+
+Closes this queue.
+
+This operation signals that no more elements will be enqueued in
+the given queue. Subsequent `enqueue` and `enqueue_many`
+operations will fail. Subsequent `dequeue` and `dequeue_many`
+operations will continue to succeed if sufficient elements remain
+in the queue. Subsequent `dequeue` and `dequeue_many` operations
+that would block will fail immediately.
+
+If `cancel_pending_enqueues` is `True`, all pending requests will also
+be cancelled.
+
+##### Args:
+
+
+* <b>`cancel_pending_enqueues`</b>: (Optional.) A boolean, defaulting to
+ `False` (described above).
+* <b>`name`</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The operation that closes the queue.
+
+
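+For illustration only, a minimal sketch of closing a queue (the values are
+arbitrary): remaining elements can still be dequeued, but further enqueues
+fail.
+
+```python
+import tensorflow as tf
+
+q = tf.RandomShuffleQueue(capacity=10, min_after_dequeue=0,
+                          dtypes=[tf.int32], shapes=[[]])
+enqueue_op = q.enqueue_many(([1, 2, 3],))
+close_op = q.close(cancel_pending_enqueues=True)
+dequeue_op = q.dequeue()
+
+with tf.Session() as sess:
+    sess.run(enqueue_op)
+    sess.run(close_op)
+    print(sess.run(dequeue_op))  # one of the remaining elements
+    # A further sess.run(enqueue_op) would raise tf.errors.CancelledError.
+```
+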
+- - -
+
+#### `tf.RandomShuffleQueue.dequeue(name=None)` {#RandomShuffleQueue.dequeue}
+
+Dequeues one element from this queue.
+
+If the queue is empty when this operation executes, it will block
+until there is an element to dequeue.
+
+At runtime, this operation may raise an error if the queue is
+[closed](#QueueBase.close) before or during its execution. If the
+queue is closed, the queue is empty, and there are no pending
+enqueue operations that can fulfill this request,
+`tf.errors.OutOfRangeError` will be raised. If the session is
+[closed](../../api_docs/python/client.md#Session.close),
+`tf.errors.CancelledError` will be raised.
+
+##### Args:
+
+
+* <b>`name`</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The tuple of tensors that was dequeued.
+
+
+- - -
+
+#### `tf.RandomShuffleQueue.dequeue_many(n, name=None)` {#RandomShuffleQueue.dequeue_many}
+
+Dequeues and concatenates `n` elements from this queue.
+
+This operation concatenates queue-element component tensors along
+the 0th dimension to make a single component tensor. All of the
+components in the dequeued tuple will have size `n` in the 0th dimension.
+
+If the queue is closed and there are fewer than `n` elements left, then an
+`OutOfRange` exception is raised.
+
+At runtime, this operation may raise an error if the queue is
+[closed](#QueueBase.close) before or during its execution. If the
+queue is closed, the queue contains fewer than `n` elements, and
+there are no pending enqueue operations that can fulfill this
+request, `tf.errors.OutOfRangeError` will be raised. If the
+session is [closed](../../api_docs/python/client.md#Session.close),
+`tf.errors.CancelledError` will be raised.
+
+##### Args:
+
+
+* <b>`n`</b>: A scalar `Tensor` containing the number of elements to dequeue.
+* <b>`name`</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The tuple of concatenated tensors that was dequeued.
+
+
+- - -
+
+#### `tf.RandomShuffleQueue.dequeue_up_to(n, name=None)` {#RandomShuffleQueue.dequeue_up_to}
+
+Dequeues and concatenates `n` elements from this queue.
+
+**Note:** This operation is not supported by all queues. If a queue does not
+support DequeueUpTo, then a `tf.errors.UnimplementedError` is raised.
+
+This operation concatenates queue-element component tensors along
+the 0th dimension to make a single component tensor. If the queue
+has not been closed, all of the components in the dequeued tuple
+will have size `n` in the 0th dimension.
+
+If the queue is closed and there are more than `0` but fewer than
+`n` elements remaining, then instead of raising a
+`tf.errors.OutOfRangeError` like [`dequeue_many`](#QueueBase.dequeue_many),
+fewer than `n` elements are returned immediately. If the queue is
+closed and there are `0` elements left in the queue, then a
+`tf.errors.OutOfRangeError` is raised just like in `dequeue_many`.
+Otherwise the behavior is identical to `dequeue_many`.
+
+##### Args:
+
+
+* <b>`n`</b>: A scalar `Tensor` containing the number of elements to dequeue.
+* <b>`name`</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The tuple of concatenated tensors that was dequeued.
+
+
+- - -
+
+#### `tf.RandomShuffleQueue.dtypes` {#RandomShuffleQueue.dtypes}
+
+The list of dtypes for each component of a queue element.
+
+
+- - -
+
+#### `tf.RandomShuffleQueue.enqueue(vals, name=None)` {#RandomShuffleQueue.enqueue}
+
+Enqueues one element to this queue.
+
+If the queue is full when this operation executes, it will block
+until the element has been enqueued.
+
+At runtime, this operation may raise an error if the queue is
+[closed](#QueueBase.close) before or during its execution. If the
+queue is closed before this operation runs,
+`tf.errors.CancelledError` will be raised. If this operation is
+blocked, and either (i) the queue is closed by a close operation
+with `cancel_pending_enqueues=True`, or (ii) the session is
+[closed](../../api_docs/python/client.md#Session.close),
+`tf.errors.CancelledError` will be raised.
+
+##### Args:
+
+
+* <b>`vals`</b>: A tensor, a list or tuple of tensors, or a dictionary containing
+ the values to enqueue.
+* <b>`name`</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The operation that enqueues a new tuple of tensors to the queue.
+
+
+- - -
+
+#### `tf.RandomShuffleQueue.enqueue_many(vals, name=None)` {#RandomShuffleQueue.enqueue_many}
+
+Enqueues zero or more elements to this queue.
+
+This operation slices each component tensor along the 0th dimension to
+make multiple queue elements. All of the tensors in `vals` must have the
+same size in the 0th dimension.
+
+If the queue is full when this operation executes, it will block
+until all of the elements have been enqueued.
+
+At runtime, this operation may raise an error if the queue is
+[closed](#QueueBase.close) before or during its execution. If the
+queue is closed before this operation runs,
+`tf.errors.CancelledError` will be raised. If this operation is
+blocked, and either (i) the queue is closed by a close operation
+with `cancel_pending_enqueues=True`, or (ii) the session is
+[closed](../../api_docs/python/client.md#Session.close),
+`tf.errors.CancelledError` will be raised.
+
+##### Args:
+
+
+* <b>`vals`</b>: A tensor, a list or tuple of tensors, or a dictionary
+ from which the queue elements are taken.
+* <b>`name`</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The operation that enqueues a batch of tuples of tensors to the queue.
+
+
+- - -
+
+#### `tf.RandomShuffleQueue.from_list(index, queues)` {#RandomShuffleQueue.from_list}
+
+Create a queue using the queue reference from `queues[index]`.
+
+##### Args:
+
+
+* <b>`index`</b>: An integer scalar tensor that determines the input that gets
+ selected.
+* <b>`queues`</b>: A list of `QueueBase` objects.
+
+##### Returns:
+
+ A `QueueBase` object.
+
+##### Raises:
+
+
+* <b>`TypeError`</b>: When `queues` is not a list of `QueueBase` objects,
+ or when the data types of `queues` are not all the same.
+
+
+- - -
+
+#### `tf.RandomShuffleQueue.name` {#RandomShuffleQueue.name}
+
+The name of the underlying queue.
+
+
+- - -
+
+#### `tf.RandomShuffleQueue.names` {#RandomShuffleQueue.names}
+
+The list of names for each component of a queue element.
+
+
+- - -
+
+#### `tf.RandomShuffleQueue.queue_ref` {#RandomShuffleQueue.queue_ref}
+
+The underlying queue reference.
+
+
+- - -
+
+#### `tf.RandomShuffleQueue.shapes` {#RandomShuffleQueue.shapes}
+
+The list of shapes for each component of a queue element.
+
+
+- - -
+
+#### `tf.RandomShuffleQueue.size(name=None)` {#RandomShuffleQueue.size}
+
+Compute the number of elements in this queue.
+
+##### Args:
+
+
+* <b>`name`</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A scalar tensor containing the number of elements in this queue.
+
+
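+For illustration, a minimal sketch of the random dequeue order (the
+capacity, `min_after_dequeue`, seed, and values are arbitrary):
+
+```python
+import tensorflow as tf
+
+q = tf.RandomShuffleQueue(capacity=100, min_after_dequeue=10,
+                          dtypes=[tf.int32], shapes=[[]], seed=123)
+enqueue_op = q.enqueue_many((list(range(20)),))
+sample = q.dequeue()
+
+with tf.Session() as sess:
+    sess.run(enqueue_op)
+    # Dequeuing 5 of 20 elements keeps at least min_after_dequeue buffered,
+    # so none of these calls block.
+    print([sess.run(sample) for _ in range(5)])  # e.g. [13, 2, 17, 0, 9]
+```
+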
- - -
@@ -2287,7 +3055,6 @@ A queue implementation that dequeues elements in prioritized order.
See [`tf.QueueBase`](#QueueBase) for a description of the methods on
this class.
-
- - -
#### `tf.PriorityQueue.__init__(capacity, types, shapes=None, names=None, shared_name=None, name='priority_queue')` {#PriorityQueue.__init__}
@@ -2330,6 +3097,265 @@ an int64 scalar (for `enqueue`) or an int64 vector (for `enqueue_many`).
* <b>`name`</b>: Optional name for the queue operation.
+- - -
+
+#### `tf.PriorityQueue.close(cancel_pending_enqueues=False, name=None)` {#PriorityQueue.close}
+
+Closes this queue.
+
+This operation signals that no more elements will be enqueued in
+the given queue. Subsequent `enqueue` and `enqueue_many`
+operations will fail. Subsequent `dequeue` and `dequeue_many`
+operations will continue to succeed if sufficient elements remain
+in the queue. Subsequent `dequeue` and `dequeue_many` operations
+that would block will fail immediately.
+
+If `cancel_pending_enqueues` is `True`, all pending requests will also
+be cancelled.
+
+##### Args:
+
+
+* <b>`cancel_pending_enqueues`</b>: (Optional.) A boolean, defaulting to
+ `False` (described above).
+* <b>`name`</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The operation that closes the queue.
+
+
+- - -
+
+#### `tf.PriorityQueue.dequeue(name=None)` {#PriorityQueue.dequeue}
+
+Dequeues one element from this queue.
+
+If the queue is empty when this operation executes, it will block
+until there is an element to dequeue.
+
+At runtime, this operation may raise an error if the queue is
+[closed](#QueueBase.close) before or during its execution. If the
+queue is closed, the queue is empty, and there are no pending
+enqueue operations that can fulfill this request,
+`tf.errors.OutOfRangeError` will be raised. If the session is
+[closed](../../api_docs/python/client.md#Session.close),
+`tf.errors.CancelledError` will be raised.
+
+##### Args:
+
+
+* <b>`name`</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The tuple of tensors that was dequeued.
+
+
+- - -
+
+#### `tf.PriorityQueue.dequeue_many(n, name=None)` {#PriorityQueue.dequeue_many}
+
+Dequeues and concatenates `n` elements from this queue.
+
+This operation concatenates queue-element component tensors along
+the 0th dimension to make a single component tensor. All of the
+components in the dequeued tuple will have size `n` in the 0th dimension.
+
+If the queue is closed and there are fewer than `n` elements left, then an
+`OutOfRange` exception is raised.
+
+At runtime, this operation may raise an error if the queue is
+[closed](#QueueBase.close) before or during its execution. If the
+queue is closed, the queue contains fewer than `n` elements, and
+there are no pending enqueue operations that can fulfill this
+request, `tf.errors.OutOfRangeError` will be raised. If the
+session is [closed](../../api_docs/python/client.md#Session.close),
+`tf.errors.CancelledError` will be raised.
+
+##### Args:
+
+
+* <b>`n`</b>: A scalar `Tensor` containing the number of elements to dequeue.
+* <b>`name`</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The tuple of concatenated tensors that was dequeued.
+
+
+- - -
+
+#### `tf.PriorityQueue.dequeue_up_to(n, name=None)` {#PriorityQueue.dequeue_up_to}
+
+Dequeues and concatenates `n` elements from this queue.
+
+**Note:** This operation is not supported by all queues. If a queue does not
+support DequeueUpTo, then a `tf.errors.UnimplementedError` is raised.
+
+This operation concatenates queue-element component tensors along
+the 0th dimension to make a single component tensor. If the queue
+has not been closed, all of the components in the dequeued tuple
+will have size `n` in the 0th dimension.
+
+If the queue is closed and there are more than `0` but fewer than
+`n` elements remaining, then instead of raising a
+`tf.errors.OutOfRangeError` like [`dequeue_many`](#QueueBase.dequeue_many),
+fewer than `n` elements are returned immediately. If the queue is
+closed and there are `0` elements left in the queue, then a
+`tf.errors.OutOfRangeError` is raised just like in `dequeue_many`.
+Otherwise the behavior is identical to `dequeue_many`.
+
+##### Args:
+
+
+* <b>`n`</b>: A scalar `Tensor` containing the number of elements to dequeue.
+* <b>`name`</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The tuple of concatenated tensors that was dequeued.
+
+
+- - -
+
+#### `tf.PriorityQueue.dtypes` {#PriorityQueue.dtypes}
+
+The list of dtypes for each component of a queue element.
+
+
+- - -
+
+#### `tf.PriorityQueue.enqueue(vals, name=None)` {#PriorityQueue.enqueue}
+
+Enqueues one element to this queue.
+
+If the queue is full when this operation executes, it will block
+until the element has been enqueued.
+
+At runtime, this operation may raise an error if the queue is
+[closed](#QueueBase.close) before or during its execution. If the
+queue is closed before this operation runs,
+`tf.errors.CancelledError` will be raised. If this operation is
+blocked, and either (i) the queue is closed by a close operation
+with `cancel_pending_enqueues=True`, or (ii) the session is
+[closed](../../api_docs/python/client.md#Session.close),
+`tf.errors.CancelledError` will be raised.
+
+##### Args:
+
+
+* <b>`vals`</b>: A tensor, a list or tuple of tensors, or a dictionary containing
+ the values to enqueue.
+* <b>`name`</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The operation that enqueues a new tuple of tensors to the queue.
+
+
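+As a hedged sketch, enqueuing with an explicit int64 priority as the first
+element component (the priorities and strings are arbitrary; elements with
+numerically smaller priorities are assumed to be dequeued first):
+
+```python
+import tensorflow as tf
+
+q = tf.PriorityQueue(capacity=10, types=[tf.string], shapes=[[]])
+# The first component of each enqueued tuple is the int64 priority.
+enq_low = q.enqueue((tf.constant(2, tf.int64), "later"))
+enq_high = q.enqueue((tf.constant(1, tf.int64), "sooner"))
+dequeue_op = q.dequeue()
+
+with tf.Session() as sess:
+    sess.run(enq_low)
+    sess.run(enq_high)
+    print(sess.run(dequeue_op))  # [1, b'sooner'] -- smallest priority first
+```
+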
+- - -
+
+#### `tf.PriorityQueue.enqueue_many(vals, name=None)` {#PriorityQueue.enqueue_many}
+
+Enqueues zero or more elements to this queue.
+
+This operation slices each component tensor along the 0th dimension to
+make multiple queue elements. All of the tensors in `vals` must have the
+same size in the 0th dimension.
+
+If the queue is full when this operation executes, it will block
+until all of the elements have been enqueued.
+
+At runtime, this operation may raise an error if the queue is
+[closed](#QueueBase.close) before or during its execution. If the
+queue is closed before this operation runs,
+`tf.errors.CancelledError` will be raised. If this operation is
+blocked, and either (i) the queue is closed by a close operation
+with `cancel_pending_enqueues=True`, or (ii) the session is
+[closed](../../api_docs/python/client.md#Session.close),
+`tf.errors.CancelledError` will be raised.
+
+##### Args:
+
+
+* <b>`vals`</b>: A tensor, a list or tuple of tensors, or a dictionary
+ from which the queue elements are taken.
+* <b>`name`</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The operation that enqueues a batch of tuples of tensors to the queue.
+
+
+- - -
+
+#### `tf.PriorityQueue.from_list(index, queues)` {#PriorityQueue.from_list}
+
+Create a queue using the queue reference from `queues[index]`.
+
+##### Args:
+
+
+* <b>`index`</b>: An integer scalar tensor that determines the input that gets
+ selected.
+* <b>`queues`</b>: A list of `QueueBase` objects.
+
+##### Returns:
+
+ A `QueueBase` object.
+
+##### Raises:
+
+
+* <b>`TypeError`</b>: When `queues` is not a list of `QueueBase` objects,
+ or when the data types of `queues` are not all the same.
+
+
+- - -
+
+#### `tf.PriorityQueue.name` {#PriorityQueue.name}
+
+The name of the underlying queue.
+
+
+- - -
+
+#### `tf.PriorityQueue.names` {#PriorityQueue.names}
+
+The list of names for each component of a queue element.
+
+
+- - -
+
+#### `tf.PriorityQueue.queue_ref` {#PriorityQueue.queue_ref}
+
+The underlying queue reference.
+
+
+- - -
+
+#### `tf.PriorityQueue.shapes` {#PriorityQueue.shapes}
+
+The list of shapes for each component of a queue element.
+
+
+- - -
+
+#### `tf.PriorityQueue.size(name=None)` {#PriorityQueue.size}
+
+Compute the number of elements in this queue.
+
+##### Args:
+
+
+* <b>`name`</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A scalar tensor containing the number of elements in this queue.
+
+
- - -
diff --git a/tensorflow/g3doc/api_docs/python/python_io.md b/tensorflow/g3doc/api_docs/python/python_io.md
index d9dd38bcd6..c41fe3ada0 100644
--- a/tensorflow/g3doc/api_docs/python/python_io.md
+++ b/tensorflow/g3doc/api_docs/python/python_io.md
@@ -3,11 +3,9 @@
# Data IO (Python functions)
[TOC]
-## Data IO (Python Functions)
+Python functions for directly manipulating TFRecord-formatted files.
-A TFRecords file represents a sequence of (binary) strings. The format is not
-random access, so it is suitable for streaming large amounts of data but not
-suitable if fast sharding or other non-sequential access is desired.
+See the @{$python/python_io} guide.
- - -
@@ -17,6 +15,19 @@ A class to write records to a TFRecords file.
This class implements `__enter__` and `__exit__`, and can be used
in `with` blocks like a normal file.
+- - -
+
+#### `tf.python_io.TFRecordWriter.__enter__()` {#TFRecordWriter.__enter__}
+
+Enter a `with` block.
+
+
+- - -
+
+#### `tf.python_io.TFRecordWriter.__exit__(unused_type, unused_value, unused_traceback)` {#TFRecordWriter.__exit__}
+
+Exit a `with` block, closing the file.
+
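+For illustration, a minimal sketch of the context-manager usage (the output
+path and record contents are hypothetical):
+
+```python
+import tensorflow as tf
+
+# The file is closed automatically when the `with` block exits.
+with tf.python_io.TFRecordWriter("/tmp/example.tfrecords") as writer:
+    for record in [b"first record", b"second record"]:
+        writer.write(record)
+```
+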
- - -
@@ -38,37 +49,21 @@ Opens file `path` and creates a `TFRecordWriter` writing to it.
- - -
-#### `tf.python_io.TFRecordWriter.write(record)` {#TFRecordWriter.write}
-
-Write a string record to the file.
-
-##### Args:
-
-
-* <b>`record`</b>: str
-
-
-- - -
-
#### `tf.python_io.TFRecordWriter.close()` {#TFRecordWriter.close}
Close the file.
-
-#### Other Methods
- - -
-#### `tf.python_io.TFRecordWriter.__enter__()` {#TFRecordWriter.__enter__}
+#### `tf.python_io.TFRecordWriter.write(record)` {#TFRecordWriter.write}
-Enter a `with` block.
+Write a string record to the file.
+##### Args:
-- - -
-#### `tf.python_io.TFRecordWriter.__exit__(unused_type, unused_value, unused_traceback)` {#TFRecordWriter.__exit__}
-
-Exit a `with` block, closing the file.
+* <b>`record`</b>: str
@@ -120,21 +115,3 @@ Options used for manipulating TFRecord files.
-
-- - -
-
-### TFRecords Format Details
-
-A TFRecords file contains a sequence of strings with CRC hashes. Each record
-has the format
-
- uint64 length
- uint32 masked_crc32_of_length
- byte data[length]
- uint32 masked_crc32_of_data
-
-and the records are concatenated together to produce the file. The CRC32s
-are [described here](https://en.wikipedia.org/wiki/Cyclic_redundancy_check),
-and the mask of a CRC is
-
- masked_crc = ((crc >> 15) | (crc << 17)) + 0xa282ead8ul
diff --git a/tensorflow/g3doc/api_docs/python/sparse_ops.md b/tensorflow/g3doc/api_docs/python/sparse_ops.md
index 3222677be4..47771a24fa 100644
--- a/tensorflow/g3doc/api_docs/python/sparse_ops.md
+++ b/tensorflow/g3doc/api_docs/python/sparse_ops.md
@@ -75,89 +75,6 @@ represents the dense tensor
[0, 0, 2, 0]
[0, 0, 0, 0]]
```
-
-- - -
-
-#### `tf.SparseTensor.__init__(indices, values, dense_shape)` {#SparseTensor.__init__}
-
-Creates a `SparseTensor`.
-
-##### Args:
-
-
-* <b>`indices`</b>: A 2-D int64 tensor of shape `[N, ndims]`.
-* <b>`values`</b>: A 1-D tensor of any type and shape `[N]`.
-* <b>`dense_shape`</b>: A 1-D int64 tensor of shape `[ndims]`.
-
-##### Returns:
-
- A `SparseTensor`.
-
-
-- - -
-
-#### `tf.SparseTensor.get_shape()` {#SparseTensor.get_shape}
-
-Get the `TensorShape` representing the shape of the dense tensor.
-
-##### Returns:
-
- A `TensorShape` object.
-
-
-- - -
-
-#### `tf.SparseTensor.indices` {#SparseTensor.indices}
-
-The indices of non-zero values in the represented dense tensor.
-
-##### Returns:
-
- A 2-D Tensor of int64 with dense_shape `[N, ndims]`, where `N` is the
- number of non-zero values in the tensor, and `ndims` is the rank.
-
-
-- - -
-
-#### `tf.SparseTensor.values` {#SparseTensor.values}
-
-The non-zero values in the represented dense tensor.
-
-##### Returns:
-
- A 1-D Tensor of any data type.
-
-
-- - -
-
-#### `tf.SparseTensor.dense_shape` {#SparseTensor.dense_shape}
-
-A 1-D Tensor of int64 representing the shape of the dense tensor.
-
-
-- - -
-
-#### `tf.SparseTensor.dtype` {#SparseTensor.dtype}
-
-The `DType` of elements in this tensor.
-
-
-- - -
-
-#### `tf.SparseTensor.op` {#SparseTensor.op}
-
-The `Operation` that produces `values` as an output.
-
-
-- - -
-
-#### `tf.SparseTensor.graph` {#SparseTensor.graph}
-
-The `Graph` that contains the index, value, and dense_shape tensors.
-
-
-
-#### Other Methods
- - -
#### `tf.SparseTensor.__div__(sp_x, y)` {#SparseTensor.__div__}
@@ -189,6 +106,24 @@ the other direction.
- - -
+#### `tf.SparseTensor.__init__(indices, values, dense_shape)` {#SparseTensor.__init__}
+
+Creates a `SparseTensor`.
+
+##### Args:
+
+
+* <b>`indices`</b>: A 2-D int64 tensor of shape `[N, ndims]`.
+* <b>`values`</b>: A 1-D tensor of any type and shape `[N]`.
+* <b>`dense_shape`</b>: A 1-D int64 tensor of shape `[ndims]`.
+
+##### Returns:
+
+ A `SparseTensor`.
+
+
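+For illustration, a minimal sketch constructing the sparse representation of
+a small dense tensor (the indices, values, and shape are arbitrary):
+
+```python
+import tensorflow as tf
+
+sp = tf.SparseTensor(indices=[[0, 0], [1, 2]],
+                     values=[1.0, 2.0],
+                     dense_shape=[3, 4])
+
+with tf.Session() as sess:
+    # Densify to show which dense entries the SparseTensor describes.
+    print(sess.run(tf.sparse_tensor_to_dense(sp)))
+    # [[ 1.  0.  0.  0.]
+    #  [ 0.  0.  2.  0.]
+    #  [ 0.  0.  0.  0.]]
+```
+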
+- - -
+
#### `tf.SparseTensor.__mul__(sp_x, y)` {#SparseTensor.__mul__}
Component-wise multiplies a SparseTensor by a dense Tensor.
@@ -236,6 +171,20 @@ Internal helper function for 'sp_t / dense_t'.
- - -
+#### `tf.SparseTensor.dense_shape` {#SparseTensor.dense_shape}
+
+A 1-D Tensor of int64 representing the shape of the dense tensor.
+
+
+- - -
+
+#### `tf.SparseTensor.dtype` {#SparseTensor.dtype}
+
+The `DType` of elements in this tensor.
+
+
+- - -
+
#### `tf.SparseTensor.eval(feed_dict=None, session=None)` {#SparseTensor.eval}
Evaluates this sparse tensor in a `Session`.
@@ -269,6 +218,54 @@ available, or `session` must be specified explicitly.
+- - -
+
+#### `tf.SparseTensor.get_shape()` {#SparseTensor.get_shape}
+
+Get the `TensorShape` representing the shape of the dense tensor.
+
+##### Returns:
+
+ A `TensorShape` object.
+
+
+- - -
+
+#### `tf.SparseTensor.graph` {#SparseTensor.graph}
+
+The `Graph` that contains the index, value, and dense_shape tensors.
+
+
+- - -
+
+#### `tf.SparseTensor.indices` {#SparseTensor.indices}
+
+The indices of non-zero values in the represented dense tensor.
+
+##### Returns:
+
+ A 2-D Tensor of int64 with dense_shape `[N, ndims]`, where `N` is the
+ number of non-zero values in the tensor, and `ndims` is the rank.
+
+
+- - -
+
+#### `tf.SparseTensor.op` {#SparseTensor.op}
+
+The `Operation` that produces `values` as an output.
+
+
+- - -
+
+#### `tf.SparseTensor.values` {#SparseTensor.values}
+
+The non-zero values in the represented dense tensor.
+
+##### Returns:
+
+ A 1-D Tensor of any data type.
+
+
- - -
diff --git a/tensorflow/g3doc/api_docs/python/state_ops.md b/tensorflow/g3doc/api_docs/python/state_ops.md
index 0a4227e6e2..cc3cdd33a5 100644
--- a/tensorflow/g3doc/api_docs/python/state_ops.md
+++ b/tensorflow/g3doc/api_docs/python/state_ops.md
@@ -3421,7 +3421,6 @@ gradients for operations that have sparse gradients
Contrast this representation with
[`SparseTensor`](../../api_docs/python/sparse_ops.md#SparseTensor),
which uses multi-dimensional indices and scalar values.
-
- - -
#### `tf.IndexedSlices.__init__(values, indices, dense_shape=None)` {#IndexedSlices.__init__}
@@ -3429,19 +3428,18 @@ which uses multi-dimensional indices and scalar values.
Creates an `IndexedSlices`.
-
- - -
-#### `tf.IndexedSlices.values` {#IndexedSlices.values}
+#### `tf.IndexedSlices.__neg__()` {#IndexedSlices.__neg__}
+
-A `Tensor` containing the values of the slices.
- - -
-#### `tf.IndexedSlices.indices` {#IndexedSlices.indices}
+#### `tf.IndexedSlices.__str__()` {#IndexedSlices.__str__}
+
-A 1-D `Tensor` containing the indices of the slices.
- - -
@@ -3451,12 +3449,11 @@ A 1-D `Tensor` containing the indices of the slices.
A 1-D `Tensor` containing the shape of the corresponding dense tensor.
-
- - -
-#### `tf.IndexedSlices.name` {#IndexedSlices.name}
+#### `tf.IndexedSlices.device` {#IndexedSlices.device}
-The name of this `IndexedSlices`.
+The name of the device on which `values` will be produced, or `None`.
- - -
@@ -3468,39 +3465,37 @@ The `DType` of elements in this tensor.
- - -
-#### `tf.IndexedSlices.device` {#IndexedSlices.device}
+#### `tf.IndexedSlices.graph` {#IndexedSlices.graph}
-The name of the device on which `values` will be produced, or `None`.
+The `Graph` that contains the values, indices, and shape tensors.
- - -
-#### `tf.IndexedSlices.op` {#IndexedSlices.op}
-
-The `Operation` that produces `values` as an output.
+#### `tf.IndexedSlices.indices` {#IndexedSlices.indices}
+A 1-D `Tensor` containing the indices of the slices.
-#### Other Methods
- - -
-#### `tf.IndexedSlices.__neg__()` {#IndexedSlices.__neg__}
-
+#### `tf.IndexedSlices.name` {#IndexedSlices.name}
+The name of this `IndexedSlices`.
- - -
-#### `tf.IndexedSlices.__str__()` {#IndexedSlices.__str__}
-
+#### `tf.IndexedSlices.op` {#IndexedSlices.op}
+The `Operation` that produces `values` as an output.
- - -
-#### `tf.IndexedSlices.graph` {#IndexedSlices.graph}
+#### `tf.IndexedSlices.values` {#IndexedSlices.values}
-The `Graph` that contains the values, indices, and shape tensors.
+A `Tensor` containing the values of the slices.
diff --git a/tensorflow/g3doc/api_docs/python/string_ops.md b/tensorflow/g3doc/api_docs/python/string_ops.md
index a53210c46d..ad170934a3 100644
--- a/tensorflow/g3doc/api_docs/python/string_ops.md
+++ b/tensorflow/g3doc/api_docs/python/string_ops.md
@@ -7,10 +7,9 @@ Note: Functions taking `Tensor` arguments can also take anything accepted by
[TOC]
-## Hashing
+Operations for working with string Tensors.
-String hashing ops take a string input tensor and map each element to an
-integer.
+See the @{$python/string_ops} guide.
- - -
@@ -97,12 +96,6 @@ This functionality will be deprecated and it's recommended to use
A Tensor of the same shape as the input `string_tensor`.
-
-## Joining
-
-String joining ops concatenate elements of input string tensors to produce a new
-string tensor.
-
- - -
### `tf.reduce_join(inputs, axis=None, keep_dims=False, separator='', name=None, reduction_indices=None)` {#reduce_join}
@@ -176,9 +169,6 @@ with the given separator (default is an empty separator).
A `Tensor` of type `string`.
-
-## Splitting
-
- - -
### `tf.string_split(source, delimiter=' ')` {#string_split}
output = [b'hir', b'ee', b'n']
A `Tensor` of type `string`. Tensor of substrings
-
-## Conversion
-
- - -
### `tf.as_string(input, precision=None, scientific=None, shortest=None, width=None, fill=None, name=None)` {#as_string}
diff --git a/tensorflow/g3doc/api_docs/python/summary.md b/tensorflow/g3doc/api_docs/python/summary.md
index 8d344036db..42c946c971 100644
--- a/tensorflow/g3doc/api_docs/python/summary.md
+++ b/tensorflow/g3doc/api_docs/python/summary.md
@@ -3,9 +3,10 @@
# Summary Operations
[TOC]
-## Generation of summaries.
+Tensor summaries for exporting information about a model.
+
+See the @{$python/summary} guide.
-### Class for writing Summaries
- - -
### `class tf.summary.FileWriter` {#FileWriter}
@@ -17,7 +18,6 @@ given directory and add summaries and events to it. The class updates the
file contents asynchronously. This allows a training program to call methods
to add data to the file directly from the training loop, without slowing down
training.
-
- - -
#### `tf.summary.FileWriter.__init__(logdir, graph=None, max_queue=10, flush_secs=120, graph_def=None)` {#FileWriter.__init__}
@@ -63,81 +63,62 @@ the event file:
* <b>`graph_def`</b>: DEPRECATED: Use the `graph` argument instead.
-
- - -
-#### `tf.summary.FileWriter.add_summary(summary, global_step=None)` {#FileWriter.add_summary}
-
-Adds a `Summary` protocol buffer to the event file.
-
-This method wraps the provided summary in an `Event` protocol buffer
-and adds it to the event file.
+#### `tf.summary.FileWriter.add_event(event)` {#FileWriter.add_event}
-You can pass the result of evaluating any summary op, using
-[`Session.run()`](client.md#Session.run) or
-[`Tensor.eval()`](framework.md#Tensor.eval), to this
-function. Alternatively, you can pass a `tf.Summary` protocol
-buffer that you populate with your own data. The latter is
-commonly done to report evaluation results in event files.
+Adds an event to the event file.
##### Args:
-* <b>`summary`</b>: A `Summary` protocol buffer, optionally serialized as a string.
-* <b>`global_step`</b>: Number. Optional global step value to record with the
- summary.
+* <b>`event`</b>: An `Event` protocol buffer.
- - -
-#### `tf.summary.FileWriter.add_session_log(session_log, global_step=None)` {#FileWriter.add_session_log}
+#### `tf.summary.FileWriter.add_graph(graph, global_step=None, graph_def=None)` {#FileWriter.add_graph}
-Adds a `SessionLog` protocol buffer to the event file.
+Adds a `Graph` to the event file.
-This method wraps the provided session in an `Event` protocol buffer
-and adds it to the event file.
+The graph described by the protocol buffer will be displayed by
+TensorBoard. Most users pass a graph in the constructor instead.
##### Args:
-* <b>`session_log`</b>: A `SessionLog` protocol buffer.
-* <b>`global_step`</b>: Number. Optional global step value to record with the
- summary.
-
-
-- - -
-
-#### `tf.summary.FileWriter.add_event(event)` {#FileWriter.add_event}
-
-Adds an event to the event file.
+* <b>`graph`</b>: A `Graph` object, such as `sess.graph`.
+* <b>`global_step`</b>: Number. Optional global step counter to record with the
+ graph.
+* <b>`graph_def`</b>: DEPRECATED. Use the `graph` parameter instead.
-##### Args:
+##### Raises:
-* <b>`event`</b>: An `Event` protocol buffer.
+* <b>`ValueError`</b>: If both graph and graph_def are passed to the method.
- - -
-#### `tf.summary.FileWriter.add_graph(graph, global_step=None, graph_def=None)` {#FileWriter.add_graph}
+#### `tf.summary.FileWriter.add_meta_graph(meta_graph_def, global_step=None)` {#FileWriter.add_meta_graph}
-Adds a `Graph` to the event file.
+Adds a `MetaGraphDef` to the event file.
-The graph described by the protocol buffer will be displayed by
-TensorBoard. Most users pass a graph in the constructor instead.
+The `MetaGraphDef` allows running the given graph via
+`saver.import_meta_graph()`.
##### Args:
-* <b>`graph`</b>: A `Graph` object, such as `sess.graph`.
+* <b>`meta_graph_def`</b>: A `MetaGraphDef` object, often as returned by
+ `saver.export_meta_graph()`.
* <b>`global_step`</b>: Number. Optional global step counter to record with the
graph.
-* <b>`graph_def`</b>: DEPRECATED. Use the `graph` parameter instead.
##### Raises:
-* <b>`ValueError`</b>: If both graph and graph_def are passed to the method.
+* <b>`TypeError`</b>: If `meta_graph_def` is not an instance of `MetaGraphDef`.
- - -
@@ -162,20 +143,43 @@ Adds a metadata information for a single session.run() call.
- - -
-#### `tf.summary.FileWriter.get_logdir()` {#FileWriter.get_logdir}
+#### `tf.summary.FileWriter.add_session_log(session_log, global_step=None)` {#FileWriter.add_session_log}
-Returns the directory where event file will be written.
+Adds a `SessionLog` protocol buffer to the event file.
+
+This method wraps the provided session in an `Event` protocol buffer
+and adds it to the event file.
+##### Args:
+
+
+* <b>`session_log`</b>: A `SessionLog` protocol buffer.
+* <b>`global_step`</b>: Number. Optional global step value to record with the
+ summary.
- - -
-#### `tf.summary.FileWriter.flush()` {#FileWriter.flush}
+#### `tf.summary.FileWriter.add_summary(summary, global_step=None)` {#FileWriter.add_summary}
-Flushes the event file to disk.
+Adds a `Summary` protocol buffer to the event file.
-Call this method to make sure that all pending events have been written to
-disk.
+This method wraps the provided summary in an `Event` protocol buffer
+and adds it to the event file.
+
+You can pass the result of evaluating any summary op, using
+[`Session.run()`](client.md#Session.run) or
+[`Tensor.eval()`](framework.md#Tensor.eval), to this
+function. Alternatively, you can pass a `tf.Summary` protocol
+buffer that you populate with your own data. The latter is
+commonly done to report evaluation results in event files.
+
+##### Args:
+
+
+* <b>`summary`</b>: A `Summary` protocol buffer, optionally serialized as a string.
+* <b>`global_step`</b>: Number. Optional global step value to record with the
+ summary.
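+
+For illustration, a minimal sketch of logging an evaluated summary (the log
+directory, the scalar being summarized, and the step counter are
+hypothetical stand-ins):
+
+```python
+import tensorflow as tf
+
+loss = tf.constant(0.5)            # stand-in for a real loss tensor
+tf.summary.scalar("loss", loss)
+merged = tf.summary.merge_all()
+
+with tf.Session() as sess:
+    writer = tf.summary.FileWriter("/tmp/logs", sess.graph)
+    for step in range(3):
+        # Evaluate the summary op and record it at the current step.
+        summary = sess.run(merged)
+        writer.add_summary(summary, global_step=step)
+    writer.close()
+```
+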
- - -
@@ -187,8 +191,23 @@ Flushes the event file to disk and close the file.
Call this method when you do not need the summary writer anymore.
+- - -
+
+#### `tf.summary.FileWriter.flush()` {#FileWriter.flush}
+
+Flushes the event file to disk.
+
+Call this method to make sure that all pending events have been written to
+disk.
+
+
+- - -
+
+#### `tf.summary.FileWriter.get_logdir()` {#FileWriter.get_logdir}
+
+Returns the directory where the event file will be written.
+
-#### Other Methods
- - -
#### `tf.summary.FileWriter.reopen()` {#FileWriter.reopen}
@@ -233,8 +252,6 @@ Returns the FileWriter for the specified directory.
-
-### Summary Ops
- - -
### `tf.summary.tensor_summary(name, tensor, summary_description=None, collections=None)` {#tensor_summary}
@@ -452,8 +469,6 @@ Merges all summaries collected in the default graph.
buffer resulting from the merging.
-
-## Utilities
- - -
### `tf.summary.get_summary_description(node_def)` {#get_summary_description}
diff --git a/tensorflow/g3doc/api_docs/python/tensor_array_ops.md b/tensorflow/g3doc/api_docs/python/tensor_array_ops.md
index f8e76333b7..b605a3a199 100644
--- a/tensorflow/g3doc/api_docs/python/tensor_array_ops.md
+++ b/tensorflow/g3doc/api_docs/python/tensor_array_ops.md
@@ -7,9 +7,7 @@ Note: Functions taking `Tensor` arguments can also take anything accepted by
[TOC]
-TensorArray operations.
-
-## Classes containing dynamically sized arrays of Tensors.
+TensorArray: a dynamically sized array of Tensors.
- - -
@@ -20,44 +18,89 @@ Class wrapping dynamic-sized, per-time-step, write-once Tensor arrays.
This class is meant to be used with dynamic iteration primitives such as
`while_loop` and `map_fn`. It supports gradient back-propagation via special
"flow" control flow dependencies.
-
- - -
-#### `tf.TensorArray.handle` {#TensorArray.handle}
+#### `tf.TensorArray.__init__(dtype, size=None, dynamic_size=None, clear_after_read=None, tensor_array_name=None, handle=None, flow=None, infer_shape=True, element_shape=None, name=None)` {#TensorArray.__init__}
-The reference to the TensorArray.
+Construct a new TensorArray or wrap an existing TensorArray handle.
+A note about the parameter `name`:
-- - -
+The name of the `TensorArray` (even if passed in) is uniquified: each time
+a new `TensorArray` is created at runtime it is assigned its own name for
+the duration of the run. This avoids name collisions if a `TensorArray`
+is created within a `while_loop`.
-#### `tf.TensorArray.flow` {#TensorArray.flow}
+##### Args:
-The flow `Tensor` forcing ops leading to this TensorArray state.
+* <b>`dtype`</b>: (required) data type of the TensorArray.
+* <b>`size`</b>: (optional) int32 scalar `Tensor`: the size of the TensorArray.
+ Required if handle is not provided.
+* <b>`dynamic_size`</b>: (optional) Python bool: If true, writes to the TensorArray
+ can grow the TensorArray past its initial size. Default: False.
+* <b>`clear_after_read`</b>: Boolean (optional, default: True). If True, clear
+ TensorArray values after reading them. This disables read-many
+ semantics, but allows early release of memory.
+* <b>`tensor_array_name`</b>: (optional) Python string: the name of the TensorArray.
+ This is used when creating the TensorArray handle. If this value is
+ set, handle should be None.
+* <b>`handle`</b>: (optional) A `Tensor` handle to an existing TensorArray. If this
+ is set, tensor_array_name should be None.
+* <b>`flow`</b>: (optional) A float `Tensor` scalar coming from an existing
+ `TensorArray.flow`.
+* <b>`infer_shape`</b>: (optional, default: True) If True, shape inference
+ is enabled. In this case, all elements must have the same shape.
+* <b>`element_shape`</b>: (optional, default: None) A `TensorShape` object specifying
+ the shape constraints of each of the elements of the TensorArray.
+ Need not be fully defined.
+* <b>`name`</b>: A name for the operation (optional).
+
+##### Raises:
-- - -
-#### `tf.TensorArray.dtype` {#TensorArray.dtype}
+* <b>`ValueError`</b>: if both handle and tensor_array_name are provided.
+* <b>`TypeError`</b>: if handle is provided but is not a Tensor.
-The data type of this TensorArray.
+- - -
+
+#### `tf.TensorArray.close(name=None)` {#TensorArray.close}
+
+Close the current TensorArray.
- - -
-#### `tf.TensorArray.read(index, name=None)` {#TensorArray.read}
+#### `tf.TensorArray.concat(name=None)` {#TensorArray.concat}
-Read the value at location `index` in the TensorArray.
+Return the values in the TensorArray as a concatenated `Tensor`.
+
+All of the values must have been written, their ranks must match, and
+their shapes must all match for all dimensions except the first.
##### Args:
-* <b>`index`</b>: 0-D. int32 tensor with the index to read from.
* <b>`name`</b>: A name for the operation (optional).
##### Returns:
- The tensor at index `index`.
+ All the tensors in the TensorArray concatenated into one tensor.
+
+
+- - -
+
+#### `tf.TensorArray.dtype` {#TensorArray.dtype}
+
+The data type of this TensorArray.
+
+
+- - -
+
+#### `tf.TensorArray.flow` {#TensorArray.flow}
+
+The flow `Tensor` forcing ops leading to this TensorArray state.
- - -
@@ -83,65 +126,46 @@ must all match.
- - -
-#### `tf.TensorArray.stack(name=None)` {#TensorArray.stack}
-
-Return the values in the TensorArray as a stacked `Tensor`.
-
-All of the values must have been written and their shapes must all match.
-If input shapes have rank-`R`, then output shape will have rank-`(R+1)`.
-
-##### Args:
+#### `tf.TensorArray.grad(source, flow=None, name=None)` {#TensorArray.grad}
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- All the tensors in the TensorArray stacked into one tensor.
- - -
-#### `tf.TensorArray.concat(name=None)` {#TensorArray.concat}
+#### `tf.TensorArray.handle` {#TensorArray.handle}
-Return the values in the TensorArray as a concatenated `Tensor`.
+The reference to the TensorArray.
-All of the values must have been written, their ranks must match, and
-and their shapes must all match for all dimensions except the first.
-##### Args:
+- - -
+#### `tf.TensorArray.identity()` {#TensorArray.identity}
-* <b>`name`</b>: A name for the operation (optional).
+Returns a TensorArray with the same content and properties.
##### Returns:
- All the tensors in the TensorArray concatenated into one tensor.
-
+ A new TensorArray object with flow that ensures the control dependencies
+ from the contexts will become control dependencies for writes, reads, etc.
+ Use this object for all subsequent operations.
- - -
-#### `tf.TensorArray.write(index, value, name=None)` {#TensorArray.write}
+#### `tf.TensorArray.read(index, name=None)` {#TensorArray.read}
-Write `value` into index `index` of the TensorArray.
+Read the value at location `index` in the TensorArray.
##### Args:
-* <b>`index`</b>: 0-D. int32 scalar with the index to write to.
-* <b>`value`</b>: N-D. Tensor of type `dtype`. The Tensor to write to this index.
+* <b>`index`</b>: 0-D. int32 tensor with the index to read from.
* <b>`name`</b>: A name for the operation (optional).
##### Returns:
- A new TensorArray object with flow that ensures the write occurs.
- Use this object all for subsequent operations.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if there are more writers than specified.
+ The tensor at index `index`.
- - -
@@ -171,28 +195,9 @@ Scatter the values of a `Tensor` in specific indices of a `TensorArray`.
- - -
-#### `tf.TensorArray.unstack(value, name=None)` {#TensorArray.unstack}
-
-Unstack the values of a `Tensor` in the TensorArray.
-
-If input value shapes have rank-`R`, then the output TensorArray will
-contain elements whose shapes are rank-`(R-1)`.
-
-##### Args:
-
-
-* <b>`value`</b>: (N+1)-D. Tensor of type `dtype`. The Tensor to unstack.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A new TensorArray object with flow that ensures the unstack occurs.
- Use this object all for subsequent operations.
-
-##### Raises:
-
+#### `tf.TensorArray.size(name=None)` {#TensorArray.size}
-* <b>`ValueError`</b>: if the shape inference fails.
+Return the size of the TensorArray.
- - -
@@ -220,87 +225,73 @@ Split the values of a `Tensor` into the TensorArray.
* <b>`ValueError`</b>: if the shape inference fails.
-
- - -
-#### `tf.TensorArray.identity()` {#TensorArray.identity}
-
-Returns a TensorArray with the same content and properties.
-
-##### Returns:
-
- A new TensorArray object with flow that ensures the control dependencies
- from the contexts will become control dependencies for writes, reads, etc.
- Use this object all for subsequent operations.
+#### `tf.TensorArray.stack(name=None)` {#TensorArray.stack}
+Return the values in the TensorArray as a stacked `Tensor`.
+All of the values must have been written and their shapes must all match.
+If input shapes have rank-`R`, then output shape will have rank-`(R+1)`.
-- - -
+##### Args:
-#### `tf.TensorArray.grad(source, flow=None, name=None)` {#TensorArray.grad}
+* <b>`name`</b>: A name for the operation (optional).
+##### Returns:
+ All the tensors in the TensorArray stacked into one tensor.
-#### Other Methods
- - -
-#### `tf.TensorArray.__init__(dtype, size=None, dynamic_size=None, clear_after_read=None, tensor_array_name=None, handle=None, flow=None, infer_shape=True, element_shape=None, name=None)` {#TensorArray.__init__}
-
-Construct a new TensorArray or wrap an existing TensorArray handle.
+#### `tf.TensorArray.unstack(value, name=None)` {#TensorArray.unstack}
-A note about the parameter `name`:
+Unstack the values of a `Tensor` in the TensorArray.
-The name of the `TensorArray` (even if passed in) is uniquified: each time
-a new `TensorArray` is created at runtime it is assigned its own name for
-the duration of the run. This avoids name collisions if a `TensorArray`
-is created within a `while_loop`.
+If input value shapes have rank-`R`, then the output TensorArray will
+contain elements whose shapes are rank-`(R-1)`.
##### Args:
-* <b>`dtype`</b>: (required) data type of the TensorArray.
-* <b>`size`</b>: (optional) int32 scalar `Tensor`: the size of the TensorArray.
- Required if handle is not provided.
-* <b>`dynamic_size`</b>: (optional) Python bool: If true, writes to the TensorArray
- can grow the TensorArray past its initial size. Default: False.
-* <b>`clear_after_read`</b>: Boolean (optional, default: True). If True, clear
- TensorArray values after reading them. This disables read-many
- semantics, but allows early release of memory.
-* <b>`tensor_array_name`</b>: (optional) Python string: the name of the TensorArray.
- This is used when creating the TensorArray handle. If this value is
- set, handle should be None.
-* <b>`handle`</b>: (optional) A `Tensor` handle to an existing TensorArray. If this
- is set, tensor_array_name should be None.
-* <b>`flow`</b>: (optional) A float `Tensor` scalar coming from an existing
- `TensorArray.flow`.
-* <b>`infer_shape`</b>: (optional, default: True) If True, shape inference
- is enabled. In this case, all elements must have the same shape.
-* <b>`element_shape`</b>: (optional, default: None) A `TensorShape` object specifying
- the shape constraints of each of the elements of the TensorArray.
- Need not be fully defined.
+* <b>`value`</b>: (N+1)-D. Tensor of type `dtype`. The Tensor to unstack.
* <b>`name`</b>: A name for the operation (optional).
+##### Returns:
+
+ A new TensorArray object with flow that ensures the unstack occurs.
+ Use this object for all subsequent operations.
+
##### Raises:
-* <b>`ValueError`</b>: if both handle and tensor_array_name are provided.
-* <b>`TypeError`</b>: if handle is provided but is not a Tensor.
+* <b>`ValueError`</b>: if the shape inference fails.
- - -
-#### `tf.TensorArray.close(name=None)` {#TensorArray.close}
+#### `tf.TensorArray.write(index, value, name=None)` {#TensorArray.write}
-Close the current TensorArray.
+Write `value` into index `index` of the TensorArray.
+##### Args:
-- - -
-#### `tf.TensorArray.size(name=None)` {#TensorArray.size}
+* <b>`index`</b>: 0-D. int32 scalar with the index to write to.
+* <b>`value`</b>: N-D. Tensor of type `dtype`. The Tensor to write to this index.
+* <b>`name`</b>: A name for the operation (optional).
-Return the size of the TensorArray.
+##### Returns:
+
+ A new TensorArray object with flow that ensures the write occurs.
+ Use this object for all subsequent operations.
+
+##### Raises:
+
+
+* <b>`ValueError`</b>: if there are more writers than specified.
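+
+For illustration, a minimal sketch of the write-then-use-the-returned-object
+pattern described above (the size and values are arbitrary):
+
+```python
+import tensorflow as tf
+
+ta = tf.TensorArray(dtype=tf.float32, size=3)
+# Each write returns a new TensorArray whose flow must be used afterwards,
+# so the result is rebound to `ta` at every step.
+ta = ta.write(0, 10.0)
+ta = ta.write(1, 20.0)
+ta = ta.write(2, 30.0)
+stacked = ta.stack()
+
+with tf.Session() as sess:
+    print(sess.run(stacked))  # [10. 20. 30.]
+```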
diff --git a/tensorflow/g3doc/api_docs/python/test.md b/tensorflow/g3doc/api_docs/python/test.md
index d86e7379df..bd27fece09 100644
--- a/tensorflow/g3doc/api_docs/python/test.md
+++ b/tensorflow/g3doc/api_docs/python/test.md
@@ -3,29 +3,7 @@
# Testing
[TOC]
-## Unit tests
-
-TensorFlow provides a convenience class inheriting from `unittest.TestCase`
-which adds methods relevant to TensorFlow tests. Here is an example:
-
-```python
- import tensorflow as tf
-
-
- class SquareTest(tf.test.TestCase):
-
- def testSquare(self):
- with self.test_session():
- x = tf.square([2, 3])
- self.assertAllEqual(x.eval(), [4, 9])
-
-
- if __name__ == '__main__':
- tf.test.main()
-```
-
-`tf.test.TestCase` inherits from `unittest.TestCase` but adds a few additional
-methods. We will document these methods soon.
+Testing. See the @{$python/test} guide.
- - -
@@ -1432,9 +1410,6 @@ Creates an absolute test srcdir path given a relative path.
An absolute path to the linked in runfiles.
-
-## Utilities
-
- - -
### `tf.test.assert_equal_graph_def(actual, expected, checkpoint_v2=False)` {#assert_equal_graph_def}
@@ -1503,13 +1478,6 @@ Returns whether TensorFlow can access a GPU.
Returns the name of a GPU device if available or the empty string.
-
-## Gradient checking
-
-[`compute_gradient`](#compute_gradient) and
-[`compute_gradient_error`](#compute_gradient_error) perform numerical
-differentiation of graphs for comparison against registered analytic gradients.
-
- - -
### `tf.test.compute_gradient(x, x_shape, y, y_shape, x_init_value=None, delta=0.001, init_targets=None, extra_feed_dict=None)` {#compute_gradient}
diff --git a/tensorflow/g3doc/api_docs/python/tf_debug.md b/tensorflow/g3doc/api_docs/python/tf_debug.md
index 3e0cc273bf..38a082d408 100644
--- a/tensorflow/g3doc/api_docs/python/tf_debug.md
+++ b/tensorflow/g3doc/api_docs/python/tf_debug.md
@@ -5,10 +5,7 @@
Public Python API of TensorFlow Debugger (tfdbg).
-## Functions for adding debug watches
-
-These functions help you modify `RunOptions` to specify which `Tensor`s are to
-be watched when the TensorFlow graph is executed at runtime.
+See the @{$python/tfdbg} guide.
- - -
@@ -103,13 +100,6 @@ N.B.: Under certain circumstances, not all specified `Tensor`s will be
watch.
-
-
-## Classes for debug-dump data and directories
-
-These classes allow you to load and inspect tensor values dumped from
-TensorFlow graphs during runtime.
-
- - -
### `class tf_debug.DebugTensorDatum` {#DebugTensorDatum}
@@ -814,10 +804,6 @@ Get all `DebugTensorDatum` instances corresponding to a debug watch key.
-
-
-## Functions for loading debug-dump data
-
- - -
### `tf_debug.load_tensor_from_event_file(event_file_path)` {#load_tensor_from_event_file}
@@ -840,13 +826,6 @@ protobuf contains a `Tensor` value.
`None`.
-
-
-## Tensor-value predicates
-
-Built-in tensor-filter predicates to support conditional breakpoint between
-runs. See `DebugDumpDir.find()` for more details.
-
- - -
### `tf_debug.has_inf_or_nan(datum, tensor)` {#has_inf_or_nan}
@@ -870,17 +849,6 @@ The signature of this function follows the requirement of the method
(`bool`) True if and only if tensor consists of any nan or inf values.
-
-
-## Session wrapper class and `SessionRunHook` implementations
-
-These classes allow you to
-
-* wrap aroundTensorFlow `Session` objects to debug plain TensorFlow models
- (see `DumpingDebugWrapperSession` and `LocalCLIDebugWrapperSession`), or
-* generate `SessionRunHook` objects to debug `tf.contrib.learn` models (see
- `DumpingDebugHook` and `LocalCLIDebugHook`).
-
- - -
### `class tf_debug.DumpingDebugHook` {#DumpingDebugHook}
diff --git a/tensorflow/g3doc/api_docs/python/train.md b/tensorflow/g3doc/api_docs/python/train.md
index dc88d0e413..06f0087ac2 100644
--- a/tensorflow/g3doc/api_docs/python/train.md
+++ b/tensorflow/g3doc/api_docs/python/train.md
@@ -3,16 +3,7 @@
# Training
[TOC]
-This library provides a set of classes and functions that helps train models.
-
-## Optimizers
-
-The Optimizer base class provides methods to compute gradients for a loss and
-apply gradients to variables. A collection of subclasses implement classic
-optimization algorithms such as GradientDescent and Adagrad.
-
-You never instantiate the Optimizer class itself, but instead instantiate one
-of the subclasses.
+Support for training models. See the @{$python/train} guide.
- - -
@@ -284,7 +275,6 @@ Use `get_slot_names()` to get the list of slot names created by the
-
- - -
### `class tf.train.GradientDescentOptimizer` {#GradientDescentOptimizer}
@@ -316,7 +306,6 @@ Optimizer that implements the Adadelta algorithm.
See [M. D. Zeiler](http://arxiv.org/abs/1212.5701)
([pdf](http://arxiv.org/pdf/1212.5701v1.pdf))
-
- - -
#### `tf.train.AdadeltaOptimizer.__init__(learning_rate=0.001, rho=0.95, epsilon=1e-08, use_locking=False, name='Adadelta')` {#AdadeltaOptimizer.__init__}
@@ -335,6 +324,160 @@ Construct a new Adadelta optimizer.
gradients. Defaults to "Adadelta".
+- - -
+
+#### `tf.train.AdadeltaOptimizer.apply_gradients(grads_and_vars, global_step=None, name=None)` {#AdadeltaOptimizer.apply_gradients}
+
+Apply gradients to variables.
+
+This is the second part of `minimize()`. It returns an `Operation` that
+applies gradients.
+
+##### Args:
+
+
+* <b>`grads_and_vars`</b>: List of (gradient, variable) pairs as returned by
+ `compute_gradients()`.
+* <b>`global_step`</b>: Optional `Variable` to increment by one after the
+ variables have been updated.
+* <b>`name`</b>: Optional name for the returned operation. Defaults to the
+ name passed to the `Optimizer` constructor.
+
+##### Returns:
+
+ An `Operation` that applies the specified gradients. If `global_step`
+ was not None, that operation also increments `global_step`.
+
+##### Raises:
+
+
+* <b>`TypeError`</b>: If `grads_and_vars` is malformed.
+* <b>`ValueError`</b>: If none of the variables have gradients.
+
+
+- - -
+
+#### `tf.train.AdadeltaOptimizer.compute_gradients(loss, var_list=None, gate_gradients=1, aggregation_method=None, colocate_gradients_with_ops=False, grad_loss=None)` {#AdadeltaOptimizer.compute_gradients}
+
+Compute gradients of `loss` for the variables in `var_list`.
+
+This is the first part of `minimize()`. It returns a list
+of (gradient, variable) pairs where "gradient" is the gradient
+for "variable". Note that "gradient" can be a `Tensor`, an
+`IndexedSlices`, or `None` if there is no gradient for the
+given variable.
+
+##### Args:
+
+
+* <b>`loss`</b>: A Tensor containing the value to minimize.
+* <b>`var_list`</b>: Optional list of `tf.Variable` to update to minimize
+ `loss`. Defaults to the list of variables collected in the graph
+    under the key `GraphKeys.TRAINABLE_VARIABLES`.
+* <b>`gate_gradients`</b>: How to gate the computation of gradients. Can be
+ `GATE_NONE`, `GATE_OP`, or `GATE_GRAPH`.
+* <b>`aggregation_method`</b>: Specifies the method used to combine gradient terms.
+ Valid values are defined in the class `AggregationMethod`.
+* <b>`colocate_gradients_with_ops`</b>: If True, try colocating gradients with
+ the corresponding op.
+* <b>`grad_loss`</b>: Optional. A `Tensor` holding the gradient computed for `loss`.
+
+##### Returns:
+
+ A list of (gradient, variable) pairs. Variable is always present, but
+ gradient can be `None`.
+
+##### Raises:
+
+
+* <b>`TypeError`</b>: If `var_list` contains anything other than `Variable` objects.
+* <b>`ValueError`</b>: If some arguments are invalid.
+
+
+- - -
+
+#### `tf.train.AdadeltaOptimizer.get_name()` {#AdadeltaOptimizer.get_name}
+
+
+
+
+- - -
+
+#### `tf.train.AdadeltaOptimizer.get_slot(var, name)` {#AdadeltaOptimizer.get_slot}
+
+Return a slot named `name` created for `var` by the Optimizer.
+
+Some `Optimizer` subclasses use additional variables. For example
+`Momentum` and `Adagrad` use variables to accumulate updates. This method
+gives access to these `Variable` objects if for some reason you need them.
+
+Use `get_slot_names()` to get the list of slot names created by the
+`Optimizer`.
+
+##### Args:
+
+
+* <b>`var`</b>: A variable passed to `minimize()` or `apply_gradients()`.
+* <b>`name`</b>: A string.
+
+##### Returns:
+
+ The `Variable` for the slot if it was created, `None` otherwise.
+
+
+- - -
+
+#### `tf.train.AdadeltaOptimizer.get_slot_names()` {#AdadeltaOptimizer.get_slot_names}
+
+Return a list of the names of slots created by the `Optimizer`.
+
+See `get_slot()`.
+
+##### Returns:
+
+ A list of strings.
+
+
+- - -
+
+#### `tf.train.AdadeltaOptimizer.minimize(loss, global_step=None, var_list=None, gate_gradients=1, aggregation_method=None, colocate_gradients_with_ops=False, name=None, grad_loss=None)` {#AdadeltaOptimizer.minimize}
+
+Add operations to minimize `loss` by updating `var_list`.
+
+This method simply combines calls to `compute_gradients()` and
+`apply_gradients()`. If you want to process the gradients before applying
+them, call `compute_gradients()` and `apply_gradients()` explicitly instead
+of using this function.
+
+##### Args:
+
+
+* <b>`loss`</b>: A `Tensor` containing the value to minimize.
+* <b>`global_step`</b>: Optional `Variable` to increment by one after the
+ variables have been updated.
+* <b>`var_list`</b>: Optional list of `Variable` objects to update to minimize
+ `loss`. Defaults to the list of variables collected in the graph
+ under the key `GraphKeys.TRAINABLE_VARIABLES`.
+* <b>`gate_gradients`</b>: How to gate the computation of gradients. Can be
+ `GATE_NONE`, `GATE_OP`, or `GATE_GRAPH`.
+* <b>`aggregation_method`</b>: Specifies the method used to combine gradient terms.
+ Valid values are defined in the class `AggregationMethod`.
+* <b>`colocate_gradients_with_ops`</b>: If True, try colocating gradients with
+ the corresponding op.
+* <b>`name`</b>: Optional name for the returned operation.
+* <b>`grad_loss`</b>: Optional. A `Tensor` holding the gradient computed for `loss`.
+
+##### Returns:
+
+ An Operation that updates the variables in `var_list`. If `global_step`
+ was not `None`, that operation also increments `global_step`.
+
+##### Raises:
+
+
+* <b>`ValueError`</b>: If some of the variables are not `Variable` objects.
+
+
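+A minimal, illustrative sketch of the usual workflow with this optimizer
+follows; `loss` stands in for a scalar loss tensor defined elsewhere in the
+graph, and the hyperparameter values are placeholders rather than
+recommendations:
+
+```python
+import tensorflow as tf
+
+# Assumes `loss` is a scalar Tensor built from trainable variables.
+global_step = tf.Variable(0, trainable=False, name='global_step')
+optimizer = tf.train.AdadeltaOptimizer(learning_rate=0.001, rho=0.95)
+
+# minimize() combines compute_gradients() and apply_gradients(); the
+# returned op also increments `global_step` each time it runs.
+train_op = optimizer.minimize(loss, global_step=global_step)
+
+with tf.Session() as sess:
+  sess.run(tf.global_variables_initializer())
+  sess.run(train_op)
+```
+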
- - -
@@ -345,7 +488,6 @@ Optimizer that implements the Adagrad algorithm.
See this [paper](http://www.jmlr.org/papers/volume12/duchi11a/duchi11a.pdf)
or this
[intro](http://cs.stanford.edu/~ppasupat/a9online/uploads/proximal_notes.pdf).
-
- - -
#### `tf.train.AdagradOptimizer.__init__(learning_rate, initial_accumulator_value=0.1, use_locking=False, name='Adagrad')` {#AdagradOptimizer.__init__}
@@ -368,6 +510,160 @@ Construct a new Adagrad optimizer.
* <b>`ValueError`</b>: If the `initial_accumulator_value` is invalid.
+- - -
+
+#### `tf.train.AdagradOptimizer.apply_gradients(grads_and_vars, global_step=None, name=None)` {#AdagradOptimizer.apply_gradients}
+
+Apply gradients to variables.
+
+This is the second part of `minimize()`. It returns an `Operation` that
+applies gradients.
+
+##### Args:
+
+
+* <b>`grads_and_vars`</b>: List of (gradient, variable) pairs as returned by
+ `compute_gradients()`.
+* <b>`global_step`</b>: Optional `Variable` to increment by one after the
+ variables have been updated.
+* <b>`name`</b>: Optional name for the returned operation. Defaults to the
+ name passed to the `Optimizer` constructor.
+
+##### Returns:
+
+ An `Operation` that applies the specified gradients. If `global_step`
+ was not None, that operation also increments `global_step`.
+
+##### Raises:
+
+
+* <b>`TypeError`</b>: If `grads_and_vars` is malformed.
+* <b>`ValueError`</b>: If none of the variables have gradients.
+
+
+- - -
+
+#### `tf.train.AdagradOptimizer.compute_gradients(loss, var_list=None, gate_gradients=1, aggregation_method=None, colocate_gradients_with_ops=False, grad_loss=None)` {#AdagradOptimizer.compute_gradients}
+
+Compute gradients of `loss` for the variables in `var_list`.
+
+This is the first part of `minimize()`. It returns a list
+of (gradient, variable) pairs where "gradient" is the gradient
+for "variable". Note that "gradient" can be a `Tensor`, an
+`IndexedSlices`, or `None` if there is no gradient for the
+given variable.
+
+##### Args:
+
+
+* <b>`loss`</b>: A Tensor containing the value to minimize.
+* <b>`var_list`</b>: Optional list of `tf.Variable` to update to minimize
+ `loss`. Defaults to the list of variables collected in the graph
+    under the key `GraphKeys.TRAINABLE_VARIABLES`.
+* <b>`gate_gradients`</b>: How to gate the computation of gradients. Can be
+ `GATE_NONE`, `GATE_OP`, or `GATE_GRAPH`.
+* <b>`aggregation_method`</b>: Specifies the method used to combine gradient terms.
+ Valid values are defined in the class `AggregationMethod`.
+* <b>`colocate_gradients_with_ops`</b>: If True, try colocating gradients with
+ the corresponding op.
+* <b>`grad_loss`</b>: Optional. A `Tensor` holding the gradient computed for `loss`.
+
+##### Returns:
+
+ A list of (gradient, variable) pairs. Variable is always present, but
+ gradient can be `None`.
+
+##### Raises:
+
+
+* <b>`TypeError`</b>: If `var_list` contains anything other than `Variable` objects.
+* <b>`ValueError`</b>: If some arguments are invalid.
+
+
+- - -
+
+#### `tf.train.AdagradOptimizer.get_name()` {#AdagradOptimizer.get_name}
+
+
+
+
+- - -
+
+#### `tf.train.AdagradOptimizer.get_slot(var, name)` {#AdagradOptimizer.get_slot}
+
+Return a slot named `name` created for `var` by the Optimizer.
+
+Some `Optimizer` subclasses use additional variables. For example
+`Momentum` and `Adagrad` use variables to accumulate updates. This method
+gives access to these `Variable` objects if for some reason you need them.
+
+Use `get_slot_names()` to get the list of slot names created by the
+`Optimizer`.
+
+##### Args:
+
+
+* <b>`var`</b>: A variable passed to `minimize()` or `apply_gradients()`.
+* <b>`name`</b>: A string.
+
+##### Returns:
+
+ The `Variable` for the slot if it was created, `None` otherwise.
+
+
+- - -
+
+#### `tf.train.AdagradOptimizer.get_slot_names()` {#AdagradOptimizer.get_slot_names}
+
+Return a list of the names of slots created by the `Optimizer`.
+
+See `get_slot()`.
+
+##### Returns:
+
+ A list of strings.
+
+
+- - -
+
+#### `tf.train.AdagradOptimizer.minimize(loss, global_step=None, var_list=None, gate_gradients=1, aggregation_method=None, colocate_gradients_with_ops=False, name=None, grad_loss=None)` {#AdagradOptimizer.minimize}
+
+Add operations to minimize `loss` by updating `var_list`.
+
+This method simply combines calls to `compute_gradients()` and
+`apply_gradients()`. If you want to process the gradients before applying
+them, call `compute_gradients()` and `apply_gradients()` explicitly instead
+of using this function.
+
+##### Args:
+
+
+* <b>`loss`</b>: A `Tensor` containing the value to minimize.
+* <b>`global_step`</b>: Optional `Variable` to increment by one after the
+ variables have been updated.
+* <b>`var_list`</b>: Optional list of `Variable` objects to update to minimize
+ `loss`. Defaults to the list of variables collected in the graph
+ under the key `GraphKeys.TRAINABLE_VARIABLES`.
+* <b>`gate_gradients`</b>: How to gate the computation of gradients. Can be
+ `GATE_NONE`, `GATE_OP`, or `GATE_GRAPH`.
+* <b>`aggregation_method`</b>: Specifies the method used to combine gradient terms.
+ Valid values are defined in the class `AggregationMethod`.
+* <b>`colocate_gradients_with_ops`</b>: If True, try colocating gradients with
+ the corresponding op.
+* <b>`name`</b>: Optional name for the returned operation.
+* <b>`grad_loss`</b>: Optional. A `Tensor` holding the gradient computed for `loss`.
+
+##### Returns:
+
+ An Operation that updates the variables in `var_list`. If `global_step`
+ was not `None`, that operation also increments `global_step`.
+
+##### Raises:
+
+
+* <b>`ValueError`</b>: If some of the variables are not `Variable` objects.
+
+
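+As a rough sketch of how the slot accessors above can be used, the snippet
+below inspects the accumulator slots this optimizer creates (slot names come
+from `get_slot_names()`; for Adagrad the accumulator slot is typically named
+`'accumulator'`). `loss` is assumed to be defined elsewhere:
+
+```python
+import tensorflow as tf
+
+optimizer = tf.train.AdagradOptimizer(learning_rate=0.1)
+# Slot variables are created as a side effect of minimize()/apply_gradients().
+train_op = optimizer.minimize(loss)
+
+print(optimizer.get_slot_names())
+for var in tf.trainable_variables():
+  acc = optimizer.get_slot(var, 'accumulator')
+  if acc is not None:
+    print(var.op.name, '->', acc.op.name)
+```
+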
- - -
@@ -385,7 +681,6 @@ AdagradDA is typically used when there is a need for large sparsity in the
trained model. This optimizer only guarantees sparsity for linear models. Be
careful when using AdagradDA for deep networks as it will require careful
initialization of the gradient accumulators for it to train.
-
- - -
#### `tf.train.AdagradDAOptimizer.__init__(learning_rate, global_step, initial_gradient_squared_accumulator_value=0.1, l1_regularization_strength=0.0, l2_regularization_strength=0.0, use_locking=False, name='AdagradDA')` {#AdagradDAOptimizer.__init__}
@@ -414,6 +709,160 @@ Construct a new AdagradDA optimizer.
invalid.
+- - -
+
+#### `tf.train.AdagradDAOptimizer.apply_gradients(grads_and_vars, global_step=None, name=None)` {#AdagradDAOptimizer.apply_gradients}
+
+Apply gradients to variables.
+
+This is the second part of `minimize()`. It returns an `Operation` that
+applies gradients.
+
+##### Args:
+
+
+* <b>`grads_and_vars`</b>: List of (gradient, variable) pairs as returned by
+ `compute_gradients()`.
+* <b>`global_step`</b>: Optional `Variable` to increment by one after the
+ variables have been updated.
+* <b>`name`</b>: Optional name for the returned operation. Defaults to the
+ name passed to the `Optimizer` constructor.
+
+##### Returns:
+
+ An `Operation` that applies the specified gradients. If `global_step`
+ was not None, that operation also increments `global_step`.
+
+##### Raises:
+
+
+* <b>`TypeError`</b>: If `grads_and_vars` is malformed.
+* <b>`ValueError`</b>: If none of the variables have gradients.
+
+
+- - -
+
+#### `tf.train.AdagradDAOptimizer.compute_gradients(loss, var_list=None, gate_gradients=1, aggregation_method=None, colocate_gradients_with_ops=False, grad_loss=None)` {#AdagradDAOptimizer.compute_gradients}
+
+Compute gradients of `loss` for the variables in `var_list`.
+
+This is the first part of `minimize()`. It returns a list
+of (gradient, variable) pairs where "gradient" is the gradient
+for "variable". Note that "gradient" can be a `Tensor`, an
+`IndexedSlices`, or `None` if there is no gradient for the
+given variable.
+
+##### Args:
+
+
+* <b>`loss`</b>: A Tensor containing the value to minimize.
+* <b>`var_list`</b>: Optional list of `tf.Variable` to update to minimize
+ `loss`. Defaults to the list of variables collected in the graph
+    under the key `GraphKeys.TRAINABLE_VARIABLES`.
+* <b>`gate_gradients`</b>: How to gate the computation of gradients. Can be
+ `GATE_NONE`, `GATE_OP`, or `GATE_GRAPH`.
+* <b>`aggregation_method`</b>: Specifies the method used to combine gradient terms.
+ Valid values are defined in the class `AggregationMethod`.
+* <b>`colocate_gradients_with_ops`</b>: If True, try colocating gradients with
+ the corresponding op.
+* <b>`grad_loss`</b>: Optional. A `Tensor` holding the gradient computed for `loss`.
+
+##### Returns:
+
+ A list of (gradient, variable) pairs. Variable is always present, but
+ gradient can be `None`.
+
+##### Raises:
+
+
+* <b>`TypeError`</b>: If `var_list` contains anything other than `Variable` objects.
+* <b>`ValueError`</b>: If some arguments are invalid.
+
+
+- - -
+
+#### `tf.train.AdagradDAOptimizer.get_name()` {#AdagradDAOptimizer.get_name}
+
+
+
+
+- - -
+
+#### `tf.train.AdagradDAOptimizer.get_slot(var, name)` {#AdagradDAOptimizer.get_slot}
+
+Return a slot named `name` created for `var` by the Optimizer.
+
+Some `Optimizer` subclasses use additional variables. For example
+`Momentum` and `Adagrad` use variables to accumulate updates. This method
+gives access to these `Variable` objects if for some reason you need them.
+
+Use `get_slot_names()` to get the list of slot names created by the
+`Optimizer`.
+
+##### Args:
+
+
+* <b>`var`</b>: A variable passed to `minimize()` or `apply_gradients()`.
+* <b>`name`</b>: A string.
+
+##### Returns:
+
+ The `Variable` for the slot if it was created, `None` otherwise.
+
+
+- - -
+
+#### `tf.train.AdagradDAOptimizer.get_slot_names()` {#AdagradDAOptimizer.get_slot_names}
+
+Return a list of the names of slots created by the `Optimizer`.
+
+See `get_slot()`.
+
+##### Returns:
+
+ A list of strings.
+
+
+- - -
+
+#### `tf.train.AdagradDAOptimizer.minimize(loss, global_step=None, var_list=None, gate_gradients=1, aggregation_method=None, colocate_gradients_with_ops=False, name=None, grad_loss=None)` {#AdagradDAOptimizer.minimize}
+
+Add operations to minimize `loss` by updating `var_list`.
+
+This method simply combines calls to `compute_gradients()` and
+`apply_gradients()`. If you want to process the gradients before applying
+them, call `compute_gradients()` and `apply_gradients()` explicitly instead
+of using this function.
+
+##### Args:
+
+
+* <b>`loss`</b>: A `Tensor` containing the value to minimize.
+* <b>`global_step`</b>: Optional `Variable` to increment by one after the
+ variables have been updated.
+* <b>`var_list`</b>: Optional list of `Variable` objects to update to minimize
+ `loss`. Defaults to the list of variables collected in the graph
+ under the key `GraphKeys.TRAINABLE_VARIABLES`.
+* <b>`gate_gradients`</b>: How to gate the computation of gradients. Can be
+ `GATE_NONE`, `GATE_OP`, or `GATE_GRAPH`.
+* <b>`aggregation_method`</b>: Specifies the method used to combine gradient terms.
+ Valid values are defined in the class `AggregationMethod`.
+* <b>`colocate_gradients_with_ops`</b>: If True, try colocating gradients with
+ the corresponding op.
+* <b>`name`</b>: Optional name for the returned operation.
+* <b>`grad_loss`</b>: Optional. A `Tensor` holding the gradient computed for `loss`.
+
+##### Returns:
+
+ An Operation that updates the variables in `var_list`. If `global_step`
+ was not `None`, that operation also increments `global_step`.
+
+##### Raises:
+
+
+* <b>`ValueError`</b>: If some of the variables are not `Variable` objects.
+
+
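+A hedged sketch of driving AdagradDA end to end; note that, unlike most
+optimizers, its constructor also takes the `global_step` variable. `loss` and
+the regularization strength are illustrative assumptions:
+
+```python
+import tensorflow as tf
+
+global_step = tf.Variable(0, trainable=False, name='global_step')
+optimizer = tf.train.AdagradDAOptimizer(
+    learning_rate=0.05,
+    global_step=global_step,
+    l1_regularization_strength=0.01)
+train_op = optimizer.minimize(loss, global_step=global_step)
+```
+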
- - -
@@ -449,7 +898,6 @@ Optimizer that implements the Adam algorithm.
See [Kingma et. al., 2014](http://arxiv.org/abs/1412.6980)
([pdf](http://arxiv.org/pdf/1412.6980.pdf)).
-
- - -
#### `tf.train.AdamOptimizer.__init__(learning_rate=0.001, beta1=0.9, beta2=0.999, epsilon=1e-08, use_locking=False, name='Adam')` {#AdamOptimizer.__init__}
@@ -498,6 +946,160 @@ will not update in iterations g is zero.
Defaults to "Adam".
+- - -
+
+#### `tf.train.AdamOptimizer.apply_gradients(grads_and_vars, global_step=None, name=None)` {#AdamOptimizer.apply_gradients}
+
+Apply gradients to variables.
+
+This is the second part of `minimize()`. It returns an `Operation` that
+applies gradients.
+
+##### Args:
+
+
+* <b>`grads_and_vars`</b>: List of (gradient, variable) pairs as returned by
+ `compute_gradients()`.
+* <b>`global_step`</b>: Optional `Variable` to increment by one after the
+ variables have been updated.
+* <b>`name`</b>: Optional name for the returned operation. Defaults to the
+ name passed to the `Optimizer` constructor.
+
+##### Returns:
+
+ An `Operation` that applies the specified gradients. If `global_step`
+ was not None, that operation also increments `global_step`.
+
+##### Raises:
+
+
+* <b>`TypeError`</b>: If `grads_and_vars` is malformed.
+* <b>`ValueError`</b>: If none of the variables have gradients.
+
+
+- - -
+
+#### `tf.train.AdamOptimizer.compute_gradients(loss, var_list=None, gate_gradients=1, aggregation_method=None, colocate_gradients_with_ops=False, grad_loss=None)` {#AdamOptimizer.compute_gradients}
+
+Compute gradients of `loss` for the variables in `var_list`.
+
+This is the first part of `minimize()`. It returns a list
+of (gradient, variable) pairs where "gradient" is the gradient
+for "variable". Note that "gradient" can be a `Tensor`, an
+`IndexedSlices`, or `None` if there is no gradient for the
+given variable.
+
+##### Args:
+
+
+* <b>`loss`</b>: A Tensor containing the value to minimize.
+* <b>`var_list`</b>: Optional list of `tf.Variable` to update to minimize
+ `loss`. Defaults to the list of variables collected in the graph
+    under the key `GraphKeys.TRAINABLE_VARIABLES`.
+* <b>`gate_gradients`</b>: How to gate the computation of gradients. Can be
+ `GATE_NONE`, `GATE_OP`, or `GATE_GRAPH`.
+* <b>`aggregation_method`</b>: Specifies the method used to combine gradient terms.
+ Valid values are defined in the class `AggregationMethod`.
+* <b>`colocate_gradients_with_ops`</b>: If True, try colocating gradients with
+ the corresponding op.
+* <b>`grad_loss`</b>: Optional. A `Tensor` holding the gradient computed for `loss`.
+
+##### Returns:
+
+ A list of (gradient, variable) pairs. Variable is always present, but
+ gradient can be `None`.
+
+##### Raises:
+
+
+* <b>`TypeError`</b>: If `var_list` contains anything other than `Variable` objects.
+* <b>`ValueError`</b>: If some arguments are invalid.
+
+
+- - -
+
+#### `tf.train.AdamOptimizer.get_name()` {#AdamOptimizer.get_name}
+
+
+
+
+- - -
+
+#### `tf.train.AdamOptimizer.get_slot(var, name)` {#AdamOptimizer.get_slot}
+
+Return a slot named `name` created for `var` by the Optimizer.
+
+Some `Optimizer` subclasses use additional variables. For example
+`Momentum` and `Adagrad` use variables to accumulate updates. This method
+gives access to these `Variable` objects if for some reason you need them.
+
+Use `get_slot_names()` to get the list of slot names created by the
+`Optimizer`.
+
+##### Args:
+
+
+* <b>`var`</b>: A variable passed to `minimize()` or `apply_gradients()`.
+* <b>`name`</b>: A string.
+
+##### Returns:
+
+ The `Variable` for the slot if it was created, `None` otherwise.
+
+
+- - -
+
+#### `tf.train.AdamOptimizer.get_slot_names()` {#AdamOptimizer.get_slot_names}
+
+Return a list of the names of slots created by the `Optimizer`.
+
+See `get_slot()`.
+
+##### Returns:
+
+ A list of strings.
+
+
+- - -
+
+#### `tf.train.AdamOptimizer.minimize(loss, global_step=None, var_list=None, gate_gradients=1, aggregation_method=None, colocate_gradients_with_ops=False, name=None, grad_loss=None)` {#AdamOptimizer.minimize}
+
+Add operations to minimize `loss` by updating `var_list`.
+
+This method simply combines calls to `compute_gradients()` and
+`apply_gradients()`. If you want to process the gradients before applying
+them, call `compute_gradients()` and `apply_gradients()` explicitly instead
+of using this function.
+
+##### Args:
+
+
+* <b>`loss`</b>: A `Tensor` containing the value to minimize.
+* <b>`global_step`</b>: Optional `Variable` to increment by one after the
+ variables have been updated.
+* <b>`var_list`</b>: Optional list of `Variable` objects to update to minimize
+ `loss`. Defaults to the list of variables collected in the graph
+ under the key `GraphKeys.TRAINABLE_VARIABLES`.
+* <b>`gate_gradients`</b>: How to gate the computation of gradients. Can be
+ `GATE_NONE`, `GATE_OP`, or `GATE_GRAPH`.
+* <b>`aggregation_method`</b>: Specifies the method used to combine gradient terms.
+ Valid values are defined in the class `AggregationMethod`.
+* <b>`colocate_gradients_with_ops`</b>: If True, try colocating gradients with
+ the corresponding op.
+* <b>`name`</b>: Optional name for the returned operation.
+* <b>`grad_loss`</b>: Optional. A `Tensor` holding the gradient computed for `loss`.
+
+##### Returns:
+
+ An Operation that updates the variables in `var_list`. If `global_step`
+ was not `None`, that operation also increments `global_step`.
+
+##### Raises:
+
+
+* <b>`ValueError`</b>: If some of the variables are not `Variable` objects.
+
+
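+When gradients need to be transformed before they are applied, the two-step
+form sketched below can replace `minimize()`. The clipping range is an
+arbitrary example, `loss` is assumed to exist, and the sketch assumes dense
+(non-`IndexedSlices`) gradients:
+
+```python
+import tensorflow as tf
+
+optimizer = tf.train.AdamOptimizer(learning_rate=0.001)
+
+# Step 1: build (gradient, variable) pairs.
+grads_and_vars = optimizer.compute_gradients(loss)
+
+# Step 2: transform the gradients (here: value clipping), then apply them.
+capped = [(tf.clip_by_value(g, -1.0, 1.0), v)
+          for g, v in grads_and_vars if g is not None]
+train_op = optimizer.apply_gradients(capped)
+```
+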
- - -
@@ -507,7 +1109,6 @@ Optimizer that implements the FTRL algorithm.
See this [paper](
https://www.eecs.tufts.edu/~dsculley/papers/ad-click-prediction.pdf).
-
- - -
#### `tf.train.FtrlOptimizer.__init__(learning_rate, learning_rate_power=-0.5, initial_accumulator_value=0.1, l1_regularization_strength=0.0, l2_regularization_strength=0.0, use_locking=False, name='Ftrl')` {#FtrlOptimizer.__init__}
@@ -535,6 +1136,160 @@ Construct a new FTRL optimizer.
* <b>`ValueError`</b>: If one of the arguments is invalid.
+- - -
+
+#### `tf.train.FtrlOptimizer.apply_gradients(grads_and_vars, global_step=None, name=None)` {#FtrlOptimizer.apply_gradients}
+
+Apply gradients to variables.
+
+This is the second part of `minimize()`. It returns an `Operation` that
+applies gradients.
+
+##### Args:
+
+
+* <b>`grads_and_vars`</b>: List of (gradient, variable) pairs as returned by
+ `compute_gradients()`.
+* <b>`global_step`</b>: Optional `Variable` to increment by one after the
+ variables have been updated.
+* <b>`name`</b>: Optional name for the returned operation. Defaults to the
+ name passed to the `Optimizer` constructor.
+
+##### Returns:
+
+ An `Operation` that applies the specified gradients. If `global_step`
+ was not None, that operation also increments `global_step`.
+
+##### Raises:
+
+
+* <b>`TypeError`</b>: If `grads_and_vars` is malformed.
+* <b>`ValueError`</b>: If none of the variables have gradients.
+
+
+- - -
+
+#### `tf.train.FtrlOptimizer.compute_gradients(loss, var_list=None, gate_gradients=1, aggregation_method=None, colocate_gradients_with_ops=False, grad_loss=None)` {#FtrlOptimizer.compute_gradients}
+
+Compute gradients of `loss` for the variables in `var_list`.
+
+This is the first part of `minimize()`. It returns a list
+of (gradient, variable) pairs where "gradient" is the gradient
+for "variable". Note that "gradient" can be a `Tensor`, an
+`IndexedSlices`, or `None` if there is no gradient for the
+given variable.
+
+##### Args:
+
+
+* <b>`loss`</b>: A Tensor containing the value to minimize.
+* <b>`var_list`</b>: Optional list of `tf.Variable` to update to minimize
+ `loss`. Defaults to the list of variables collected in the graph
+    under the key `GraphKeys.TRAINABLE_VARIABLES`.
+* <b>`gate_gradients`</b>: How to gate the computation of gradients. Can be
+ `GATE_NONE`, `GATE_OP`, or `GATE_GRAPH`.
+* <b>`aggregation_method`</b>: Specifies the method used to combine gradient terms.
+ Valid values are defined in the class `AggregationMethod`.
+* <b>`colocate_gradients_with_ops`</b>: If True, try colocating gradients with
+ the corresponding op.
+* <b>`grad_loss`</b>: Optional. A `Tensor` holding the gradient computed for `loss`.
+
+##### Returns:
+
+ A list of (gradient, variable) pairs. Variable is always present, but
+ gradient can be `None`.
+
+##### Raises:
+
+
+* <b>`TypeError`</b>: If `var_list` contains anything other than `Variable` objects.
+* <b>`ValueError`</b>: If some arguments are invalid.
+
+
+- - -
+
+#### `tf.train.FtrlOptimizer.get_name()` {#FtrlOptimizer.get_name}
+
+
+
+
+- - -
+
+#### `tf.train.FtrlOptimizer.get_slot(var, name)` {#FtrlOptimizer.get_slot}
+
+Return a slot named `name` created for `var` by the Optimizer.
+
+Some `Optimizer` subclasses use additional variables. For example
+`Momentum` and `Adagrad` use variables to accumulate updates. This method
+gives access to these `Variable` objects if for some reason you need them.
+
+Use `get_slot_names()` to get the list of slot names created by the
+`Optimizer`.
+
+##### Args:
+
+
+* <b>`var`</b>: A variable passed to `minimize()` or `apply_gradients()`.
+* <b>`name`</b>: A string.
+
+##### Returns:
+
+ The `Variable` for the slot if it was created, `None` otherwise.
+
+
+- - -
+
+#### `tf.train.FtrlOptimizer.get_slot_names()` {#FtrlOptimizer.get_slot_names}
+
+Return a list of the names of slots created by the `Optimizer`.
+
+See `get_slot()`.
+
+##### Returns:
+
+ A list of strings.
+
+
+- - -
+
+#### `tf.train.FtrlOptimizer.minimize(loss, global_step=None, var_list=None, gate_gradients=1, aggregation_method=None, colocate_gradients_with_ops=False, name=None, grad_loss=None)` {#FtrlOptimizer.minimize}
+
+Add operations to minimize `loss` by updating `var_list`.
+
+This method simply combines calls to `compute_gradients()` and
+`apply_gradients()`. If you want to process the gradients before applying
+them, call `compute_gradients()` and `apply_gradients()` explicitly instead
+of using this function.
+
+##### Args:
+
+
+* <b>`loss`</b>: A `Tensor` containing the value to minimize.
+* <b>`global_step`</b>: Optional `Variable` to increment by one after the
+ variables have been updated.
+* <b>`var_list`</b>: Optional list of `Variable` objects to update to minimize
+ `loss`. Defaults to the list of variables collected in the graph
+ under the key `GraphKeys.TRAINABLE_VARIABLES`.
+* <b>`gate_gradients`</b>: How to gate the computation of gradients. Can be
+ `GATE_NONE`, `GATE_OP`, or `GATE_GRAPH`.
+* <b>`aggregation_method`</b>: Specifies the method used to combine gradient terms.
+ Valid values are defined in the class `AggregationMethod`.
+* <b>`colocate_gradients_with_ops`</b>: If True, try colocating gradients with
+ the corresponding op.
+* <b>`name`</b>: Optional name for the returned operation.
+* <b>`grad_loss`</b>: Optional. A `Tensor` holding the gradient computed for `loss`.
+
+##### Returns:
+
+ An Operation that updates the variables in `var_list`. If `global_step`
+ was not `None`, that operation also increments `global_step`.
+
+##### Raises:
+
+
+* <b>`ValueError`</b>: If some of the variables are not `Variable` objects.
+
+
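+A minimal sketch of constructing this optimizer with L1/L2 regularization;
+the strengths shown are arbitrary and `loss` is assumed to be defined
+elsewhere:
+
+```python
+import tensorflow as tf
+
+optimizer = tf.train.FtrlOptimizer(
+    learning_rate=0.1,
+    l1_regularization_strength=0.001,
+    l2_regularization_strength=0.001)
+train_op = optimizer.minimize(loss)
+```
+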
- - -
@@ -635,15 +1390,6 @@ will not update in iterations g is zero.
-
-## Gradient Computation
-
-TensorFlow provides functions to compute the derivatives for a given
-TensorFlow computation graph, adding operations to the graph. The
-optimizer classes automatically compute derivatives on your graph, but
-creators of new Optimizers or expert users can call the lower-level
-functions below.
-
- - -
### `tf.gradients(ys, xs, grad_ys=None, name='gradients', colocate_gradients_with_ops=False, gate_gradients=False, aggregation_method=None)` {#gradients}
@@ -710,7 +1456,6 @@ be used to combine gradients in the graph:
gradients must be ready before any aggregation is performed.
* `DEFAULT`: The system-chosen default aggregation method.
-
- - -
### `tf.stop_gradient(input, name=None)` {#stop_gradient}
@@ -748,7 +1493,6 @@ to pretend that the value was a constant. Some examples include:
A `Tensor`. Has the same type as `input`.
-
- - -
### `tf.hessians(ys, xs, name='hessians', colocate_gradients_with_ops=False, gate_gradients=False, aggregation_method=None)` {#hessians}
@@ -788,15 +1532,6 @@ tensor (see https://en.wikipedia.org/wiki/Hessian_matrix for more details).
this function only supports one-dimensional `x` in `xs`.
-
-
-## Gradient Clipping
-
-TensorFlow provides several operations that you can use to add clipping
-functions to your graph. You can use these functions to perform general data
-clipping, but they're particularly useful for handling exploding or vanishing
-gradients.
-
- - -
### `tf.clip_by_value(t, clip_value_min, clip_value_max, name=None)` {#clip_by_value}
@@ -976,8 +1711,6 @@ Any entries in `t_list` that are of type None are ignored.
* <b>`TypeError`</b>: If `t_list` is not a sequence.
-
-## Decaying the learning rate
- - -
### `tf.train.exponential_decay(learning_rate, global_step, decay_steps, decay_rate, staircase=False, name=None)` {#exponential_decay}
@@ -1284,13 +2017,6 @@ learning_step = (
* <b>`ValueError`</b>: if `global_step` is not supplied.
-
-## Moving Averages
-
-Some training algorithms, such as GradientDescent and Momentum often benefit
-from maintaining a moving average of variables during optimization. Using the
-moving averages for evaluations often improve results significantly.
-
- - -
### `class tf.train.ExponentialMovingAverage` {#ExponentialMovingAverage}
@@ -1528,14 +2254,6 @@ Below is an example of such mapping:
-
-## Coordinator and QueueRunner
-
-See [Threading and Queues](../../how_tos/threading_and_queues/index.md)
-for how to use threads and queues. For documentation on the Queue API,
-see [Queues](../../api_docs/python/io_ops.md#queues).
-
-
- - -
### `class tf.train.Coordinator` {#Coordinator}
@@ -1662,7 +2380,7 @@ After this is called, calls to `should_stop()` will return `False`.
- - -
-#### `tf.train.Coordinator.join(threads=None, stop_grace_period_secs=120)` {#Coordinator.join}
+#### `tf.train.Coordinator.join(threads=None, stop_grace_period_secs=120, ignore_live_threads=False)` {#Coordinator.join}
Wait for threads to terminate.
@@ -1687,6 +2405,8 @@ that `RuntimeError`.
addition to the registered threads.
* <b>`stop_grace_period_secs`</b>: Number of seconds given to threads to stop after
`request_stop()` has been called.
+* <b>`ignore_live_threads`</b>: If `False`, raises an error if any of the threads are
+ still alive after `stop_grace_period_secs`.
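+
+A hedged sketch of the new argument in use; `sess` and the queue runners are
+assumed to exist already:
+
+```python
+coord = tf.train.Coordinator()
+threads = tf.train.start_queue_runners(sess=sess, coord=coord)
+# ... training loop ...
+coord.request_stop()
+# Tolerate threads still alive after the grace period instead of raising
+# RuntimeError.
+coord.join(threads, stop_grace_period_secs=10, ignore_live_threads=True)
+```
+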
##### Raises:
@@ -2260,12 +2980,6 @@ the list of all threads.
A list of threads.
-
-## Distributed execution
-
-See [Distributed TensorFlow](../../how_tos/distributed/index.md) for
-more information about how to configure a distributed TensorFlow program.
-
- - -
### `class tf.train.Server` {#Server}
@@ -3761,7 +4475,7 @@ with tf.device(tf.train.replica_device_setter(cluster=cluster_spec)):
- - -
-### `tf.train.MonitoredTrainingSession(master='', is_chief=True, checkpoint_dir=None, scaffold=None, hooks=None, chief_only_hooks=None, save_checkpoint_secs=600, save_summaries_steps=100, save_summaries_secs=None, config=None)` {#MonitoredTrainingSession}
+### `tf.train.MonitoredTrainingSession(master='', is_chief=True, checkpoint_dir=None, scaffold=None, hooks=None, chief_only_hooks=None, save_checkpoint_secs=600, save_summaries_steps=100, save_summaries_secs=None, config=None, stop_grace_period_secs=120)` {#MonitoredTrainingSession}
Creates a `MonitoredSession` for training.
@@ -3798,6 +4512,8 @@ inialize/restore.
isn't used.
* <b>`config`</b>: an instance of `tf.ConfigProto` proto used to configure the session.
It's the `config` argument of constructor of `tf.Session`.
+* <b>`stop_grace_period_secs`</b>: Number of seconds given to threads to stop after
+ `close()` has been called.
##### Returns:
@@ -3889,7 +4605,7 @@ Returns:
- - -
-#### `tf.train.MonitoredSession.__init__(session_creator=None, hooks=None)` {#MonitoredSession.__init__}
+#### `tf.train.MonitoredSession.__init__(session_creator=None, hooks=None, stop_grace_period_secs=120)` {#MonitoredSession.__init__}
@@ -4005,7 +4721,7 @@ Exit: At the `close()`, the hooked session does following things in order:
- - -
-#### `tf.train.SingularMonitoredSession.__init__(hooks=None, scaffold=None, master='', config=None, checkpoint_dir=None)` {#SingularMonitoredSession.__init__}
+#### `tf.train.SingularMonitoredSession.__init__(hooks=None, scaffold=None, master='', config=None, checkpoint_dir=None, stop_grace_period_secs=120)` {#SingularMonitoredSession.__init__}
Creates a SingularMonitoredSession.
@@ -4019,6 +4735,8 @@ Creates a SingularMonitoredSession.
* <b>`config`</b>: `ConfigProto` proto used to configure the session.
* <b>`checkpoint_dir`</b>: A string. Optional path to a directory where to restore
variables.
+* <b>`stop_grace_period_secs`</b>: Number of seconds given to threads to stop after
+ `close()` has been called.
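+
+A rough sketch of passing the new argument; the hook, checkpoint path, and
+`train_op` are illustrative assumptions (`StopAtStepHook` also assumes a
+global step tensor exists in the graph):
+
+```python
+hook = tf.train.StopAtStepHook(last_step=1000)
+with tf.train.SingularMonitoredSession(
+    hooks=[hook],
+    checkpoint_dir='/tmp/my_model',
+    stop_grace_period_secs=30) as sess:
+  while not sess.should_stop():
+    sess.run(train_op)
+```
+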
- - -
@@ -4292,13 +5010,6 @@ Initializes a worker session creator.
-
-## Reading Summaries from Event Files
-
-See [Summaries and
-TensorBoard](../../how_tos/summaries_and_tensorboard/index.md) for an
-overview of summaries, event files, and visualization in TensorBoard.
-
- - -
### `tf.train.summary_iterator(path)` {#summary_iterator}
@@ -4344,11 +5055,6 @@ for more information about their attributes.
`Event` protocol buffers.
-
-## Training Hooks
-
-Hooks are tools that run in the process of training/evaluation of the model.
-
- - -
### `class tf.train.SessionRunHook` {#SessionRunHook}
@@ -4653,7 +5359,6 @@ Alias for field number 2
-
- - -
### `class tf.train.LoggingTensorHook` {#LoggingTensorHook}
@@ -5476,9 +6181,6 @@ such as saving a last checkpoint.
-
-## Training Utilities
-
- - -
### `tf.train.global_step(sess, global_step_tensor)` {#global_step}