Diffstat (limited to 'tensorflow/g3doc/api_docs/python/functions_and_classes/shard3')
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.DeviceSpec.from_string.md | 18
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.Graph.md | 783
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.GraphKeys.md | 36
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.RandomShuffleQueue.md | 54
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.RegisterGradient.md | 36
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.Tensor.md | 228
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.VarLenFeature.md | 11
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.WholeFileReader.md | 148
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.abs.md | 22
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.add_to_collection.md | 14
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.as_dtype.md | 21
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.assert_integer.md | 30
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.assert_non_negative.md | 34
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.batch_cholesky.md | 20
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.batch_cholesky_solve.md | 35
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.batch_ifft3d.md | 18
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.batch_matrix_diag.md | 42
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.batch_self_adjoint_eig.md | 22
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.ceil.md | 14
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.constant_initializer.md | 20
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.distributions.ContinuousDistribution.md | 153
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.distributions.DirichletMultinomial.md | 185
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.distributions.StudentT.md | 245
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.distributions.Uniform.md | 216
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.layers.sum_regularizer.md | 14
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.layers.summarize_tensors.md | 4
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.learn.RunConfig.md | 47
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.learn.TensorFlowEstimator.md | 295
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.metrics.streaming_recall_at_k.md | 52
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.metrics.streaming_root_mean_squared_error.md | 48
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.metrics.streaming_sparse_precision_at_k.md | 60
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.control_dependencies.md | 20
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.decode_csv.md | 26
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.decode_raw.md | 23
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.errors.CancelledError.md | 17
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.errors.UnimplementedError.md | 15
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.expand_dims.md | 50
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.gather.md | 35
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.get_collection.md | 25
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.image.random_brightness.md | 25
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.image.resize_bilinear.md | 24
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.image.resize_images.md | 43
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.invert_permutation.md | 30
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.listdiff.md | 40
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.mod.md (renamed from tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.mul.md) | 6
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.compute_accidental_hits.md | 45
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.embedding_lookup_sparse.md | 66
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.l2_normalize.md | 24
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.log_softmax.md | 19
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.normalize_moments.md | 20
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.separable_conv2d.md | 40
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.sparse_softmax_cross_entropy_with_logits.md | 38
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.not_equal.md | 15
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.one_hot.md | 129
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.pow.md | 24
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.python_io.TFRecordWriter.md | 41
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.reduce_prod.md (renamed from tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.reduce_mean.md) | 14
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.reshape.md | 72
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.reverse.md | 61
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.round.md | 21
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.scalar_mul.md | 23
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.scan.md | 44
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.segment_max.md | 30
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.slice.md | 47
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.sparse_merge.md | 73
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.sparse_segment_mean.md | 27
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.sparse_segment_sqrt_n_grad.md | 24
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.squared_difference.md (renamed from tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.rsqrt.md) | 7
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.squeeze.md | 38
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.string_to_hash_bucket_strong.md | 30
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.test.assert_equal_graph_def.md | 20
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.test.compute_gradient.md | 40
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.AdagradOptimizer.md | 26
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.SessionManager.md | 187
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.latest_checkpoint.md | 16
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.shuffle_batch.md | 74
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.start_queue_runners.md | 24
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.tuple.md | 36
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.unique.md | 33
79 files changed, 1915 insertions, 2817 deletions
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.DeviceSpec.from_string.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.DeviceSpec.from_string.md
deleted file mode 100644
index 5cbba0ada6..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.DeviceSpec.from_string.md
+++ /dev/null
@@ -1,18 +0,0 @@
-#### `tf.DeviceSpec.from_string(spec)` {#DeviceSpec.from_string}
-
-Construct a `DeviceSpec` from a string.
-
-##### Args:
-
-
-* <b>`spec`</b>: a string of the form
- /job:<name>/replica:<id>/task:<id>/device:CPU:<id>
- or
- /job:<name>/replica:<id>/task:<id>/device:GPU:<id>
-  as CPU and GPU are mutually exclusive.
- All entries are optional.
-
-##### Returns:
-
- A DeviceSpec.
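
Editor's note: the deleted page shows no usage, so here is a minimal sketch against the present-day `tf.DeviceSpec` API (the attribute names below come from the current API, not from this page):

```python
import tensorflow as tf

# Every component of the spec string is optional; a full one looks like this.
spec = tf.DeviceSpec.from_string("/job:worker/replica:0/task:1/device:GPU:2")

# The parsed components are exposed as attributes on the returned DeviceSpec.
assert spec.job == "worker"
assert spec.replica == 0
assert spec.task == 1
assert spec.device_type == "GPU"
assert spec.device_index == 2
```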
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.Graph.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.Graph.md
deleted file mode 100644
index 762a117664..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.Graph.md
+++ /dev/null
@@ -1,783 +0,0 @@
-A TensorFlow computation, represented as a dataflow graph.
-
-A `Graph` contains a set of
-[`Operation`](../../api_docs/python/framework.md#Operation) objects,
-which represent units of computation; and
-[`Tensor`](../../api_docs/python/framework.md#Tensor) objects, which represent
-the units of data that flow between operations.
-
-A default `Graph` is always registered, and accessible by calling
-[`tf.get_default_graph()`](../../api_docs/python/framework.md#get_default_graph).
-To add an operation to the default graph, simply call one of the functions
-that defines a new `Operation`:
-
-```
-c = tf.constant(4.0)
-assert c.graph is tf.get_default_graph()
-```
-
-Another typical usage involves the
-[`Graph.as_default()`](../../api_docs/python/framework.md#Graph.as_default)
-context manager, which overrides the current default graph for the
-lifetime of the context:
-
-```python
-g = tf.Graph()
-with g.as_default():
- # Define operations and tensors in `g`.
- c = tf.constant(30.0)
- assert c.graph is g
-```
-
-Important note: This class *is not* thread-safe for graph construction. All
-operations should be created from a single thread, or external
-synchronization must be provided. Unless otherwise specified, all methods
-are not thread-safe.
-
-- - -
-
-#### `tf.Graph.__init__()` {#Graph.__init__}
-
-Creates a new, empty Graph.
-
-
-- - -
-
-#### `tf.Graph.as_default()` {#Graph.as_default}
-
-Returns a context manager that makes this `Graph` the default graph.
-
-This method should be used if you want to create multiple graphs
-in the same process. For convenience, a global default graph is
-provided, and all ops will be added to this graph if you do not
-create a new graph explicitly. Use this method with the `with` keyword
-to specify that ops created within the scope of a block should be
-added to this graph.
-
-The default graph is a property of the current thread. If you
-create a new thread, and wish to use the default graph in that
-thread, you must explicitly add a `with g.as_default():` in that
-thread's function.
-
-The following code examples are equivalent:
-
-```python
-# 1. Using Graph.as_default():
-g = tf.Graph()
-with g.as_default():
- c = tf.constant(5.0)
- assert c.graph is g
-
-# 2. Constructing and making default:
-with tf.Graph().as_default() as g:
- c = tf.constant(5.0)
- assert c.graph is g
-```
-
-##### Returns:
-
- A context manager for using this graph as the default graph.
-
-
-- - -
-
-#### `tf.Graph.as_graph_def(from_version=None, add_shapes=False)` {#Graph.as_graph_def}
-
-Returns a serialized `GraphDef` representation of this graph.
-
-The serialized `GraphDef` can be imported into another `Graph`
-(using [`import_graph_def()`](#import_graph_def)) or used with the
-[C++ Session API](../../api_docs/cc/index.md).
-
-This method is thread-safe.
-
-##### Args:
-
-
-* <b>`from_version`</b>: Optional. If this is set, returns a `GraphDef`
- containing only the nodes that were added to this graph since
- its `version` property had the given value.
-* <b>`add_shapes`</b>: If true, adds an "_output_shapes" list attr to each
- node with the inferred shapes of each of its outputs.
-
-##### Returns:
-
- A [`GraphDef`](https://www.tensorflow.org/code/tensorflow/core/framework/graph.proto)
- protocol buffer.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If the `graph_def` would be too large.
-
-
-- - -
-
-#### `tf.Graph.finalize()` {#Graph.finalize}
-
-Finalizes this graph, making it read-only.
-
-After calling `g.finalize()`, no new operations can be added to
-`g`. This method is used to ensure that no operations are added
-to a graph when it is shared between multiple threads, for example
-when using a [`QueueRunner`](../../api_docs/python/train.md#QueueRunner).
-
-
-- - -
-
-#### `tf.Graph.finalized` {#Graph.finalized}
-
-True if this graph has been finalized.
-
-
-
-- - -
-
-#### `tf.Graph.control_dependencies(control_inputs)` {#Graph.control_dependencies}
-
-Returns a context manager that specifies control dependencies.
-
-Use with the `with` keyword to specify that all operations constructed
-within the context should have control dependencies on
-`control_inputs`. For example:
-
-```python
-with g.control_dependencies([a, b, c]):
- # `d` and `e` will only run after `a`, `b`, and `c` have executed.
- d = ...
- e = ...
-```
-
-Multiple calls to `control_dependencies()` can be nested, and in
-that case a new `Operation` will have control dependencies on the union
-of `control_inputs` from all active contexts.
-
-```python
-with g.control_dependencies([a, b]):
- # Ops constructed here run after `a` and `b`.
- with g.control_dependencies([c, d]):
- # Ops constructed here run after `a`, `b`, `c`, and `d`.
-```
-
-You can pass None to clear the control dependencies:
-
-```python
-with g.control_dependencies([a, b]):
- # Ops constructed here run after `a` and `b`.
- with g.control_dependencies(None):
- # Ops constructed here run normally, not waiting for either `a` or `b`.
- with g.control_dependencies([c, d]):
- # Ops constructed here run after `c` and `d`, also not waiting
- # for either `a` or `b`.
-```
-
-*N.B.* The control dependencies context applies *only* to ops that
-are constructed within the context. Merely using an op or tensor
-in the context does not add a control dependency. The following
-example illustrates this point:
-
-```python
-# WRONG
-def my_func(pred, tensor):
- t = tf.matmul(tensor, tensor)
- with tf.control_dependencies([pred]):
- # The matmul op is created outside the context, so no control
- # dependency will be added.
- return t
-
-# RIGHT
-def my_func(pred, tensor):
- with tf.control_dependencies([pred]):
- # The matmul op is created in the context, so a control dependency
- # will be added.
- return tf.matmul(tensor, tensor)
-```
-
-##### Args:
-
-
-* <b>`control_inputs`</b>: A list of `Operation` or `Tensor` objects which
- must be executed or computed before running the operations
- defined in the context. Can also be `None` to clear the control
- dependencies.
-
-##### Returns:
-
- A context manager that specifies control dependencies for all
- operations constructed within the context.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `control_inputs` is not a list of `Operation` or
- `Tensor` objects.
-
-
-- - -
-
-#### `tf.Graph.device(device_name_or_function)` {#Graph.device}
-
-Returns a context manager that specifies the default device to use.
-
-The `device_name_or_function` argument may either be a device name
-string, a device function, or None:
-
-* If it is a device name string, all operations constructed in
- this context will be assigned to the device with that name, unless
- overridden by a nested `device()` context.
-* If it is a function, it will be treated as a function from
- Operation objects to device name strings, and invoked each time
- a new Operation is created. The Operation will be assigned to
- the device with the returned name.
-* If it is None, all `device()` invocations from the enclosing context
- will be ignored.
-
-For information about the valid syntax of device name strings, see
-the documentation in
-[`DeviceNameUtils`](https://www.tensorflow.org/code/tensorflow/core/util/device_name_utils.h).
-
-For example:
-
-```python
-with g.device('/gpu:0'):
- # All operations constructed in this context will be placed
- # on GPU 0.
- with g.device(None):
- # All operations constructed in this context will have no
- # assigned device.
-
-# Defines a function from `Operation` to device string.
-def matmul_on_gpu(n):
- if n.type == "MatMul":
- return "/gpu:0"
- else:
- return "/cpu:0"
-
-with g.device(matmul_on_gpu):
- # All operations of type "MatMul" constructed in this context
- # will be placed on GPU 0; all other operations will be placed
- # on CPU 0.
-```
-
-**N.B.** The device scope may be overridden by op wrappers or
-other library code. For example, a variable assignment op
-`v.assign()` must be colocated with the `tf.Variable` `v`, and
-incompatible device scopes will be ignored.
-
-##### Args:
-
-
-* <b>`device_name_or_function`</b>: The device name or function to use in
- the context.
-
-##### Returns:
-
- A context manager that specifies the default device to use for newly
- created ops.
-
-
-- - -
-
-#### `tf.Graph.name_scope(name)` {#Graph.name_scope}
-
-Returns a context manager that creates hierarchical names for operations.
-
-A graph maintains a stack of name scopes. A `with name_scope(...):`
-statement pushes a new name onto the stack for the lifetime of the context.
-
-The `name` argument will be interpreted as follows:
-
-* A string (not ending with '/') will create a new name scope, in which
- `name` is appended to the prefix of all operations created in the
- context. If `name` has been used before, it will be made unique by
- calling `self.unique_name(name)`.
-* A scope previously captured from a `with g.name_scope(...) as
- scope:` statement will be treated as an "absolute" name scope, which
- makes it possible to re-enter existing scopes.
-* A value of `None` or the empty string will reset the current name scope
- to the top-level (empty) name scope.
-
-For example:
-
-```python
-with tf.Graph().as_default() as g:
- c = tf.constant(5.0, name="c")
- assert c.op.name == "c"
- c_1 = tf.constant(6.0, name="c")
- assert c_1.op.name == "c_1"
-
- # Creates a scope called "nested"
- with g.name_scope("nested") as scope:
- nested_c = tf.constant(10.0, name="c")
- assert nested_c.op.name == "nested/c"
-
- # Creates a nested scope called "inner".
- with g.name_scope("inner"):
- nested_inner_c = tf.constant(20.0, name="c")
- assert nested_inner_c.op.name == "nested/inner/c"
-
- # Create a nested scope called "inner_1".
- with g.name_scope("inner"):
- nested_inner_1_c = tf.constant(30.0, name="c")
- assert nested_inner_1_c.op.name == "nested/inner_1/c"
-
- # Treats `scope` as an absolute name scope, and
- # switches to the "nested/" scope.
- with g.name_scope(scope):
- nested_d = tf.constant(40.0, name="d")
- assert nested_d.op.name == "nested/d"
-
- with g.name_scope(""):
- e = tf.constant(50.0, name="e")
- assert e.op.name == "e"
-```
-
-The name of the scope itself can be captured by `with
-g.name_scope(...) as scope:`, which stores the name of the scope
-in the variable `scope`. This value can be used to name an
-operation that represents the overall result of executing the ops
-in a scope. For example:
-
-```python
-inputs = tf.constant(...)
-with g.name_scope('my_layer') as scope:
- weights = tf.Variable(..., name="weights")
- biases = tf.Variable(..., name="biases")
- affine = tf.matmul(inputs, weights) + biases
- output = tf.nn.relu(affine, name=scope)
-```
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the scope.
-
-##### Returns:
-
- A context manager that installs `name` as a new name scope.
-
-
-
-A `Graph` instance supports an arbitrary number of "collections"
-that are identified by name. For convenience when building a large
-graph, collections can store groups of related objects: for
-example, the `tf.Variable` uses a collection (named
-[`tf.GraphKeys.VARIABLES`](../../api_docs/python/framework.md#GraphKeys)) for
-all variables that are created during the construction of a graph. The caller
-may define additional collections by specifying a new name.
-
-- - -
-
-#### `tf.Graph.add_to_collection(name, value)` {#Graph.add_to_collection}
-
-Stores `value` in the collection with the given `name`.
-
-Note that collections are not sets, so it is possible to add a value to
-a collection several times.
-
-##### Args:
-
-
-* <b>`name`</b>: The key for the collection. The `GraphKeys` class
- contains many standard names for collections.
-* <b>`value`</b>: The value to add to the collection.
-
-
-- - -
-
-#### `tf.Graph.add_to_collections(names, value)` {#Graph.add_to_collections}
-
-Stores `value` in the collections given by `names`.
-
-Note that collections are not sets, so it is possible to add a value to
-a collection several times. This function makes sure that duplicates in
-`names` are ignored, but it will not check for pre-existing membership of
-`value` in any of the collections in `names`.
-
-`names` can be any iterable, but if `names` is a string, it is treated as a
-single collection name.
-
-##### Args:
-
-
-* <b>`names`</b>: The keys for the collections to add to. The `GraphKeys` class
- contains many standard names for collections.
-* <b>`value`</b>: The value to add to the collections.
-
-
-- - -
-
-#### `tf.Graph.get_collection(name, scope=None)` {#Graph.get_collection}
-
-Returns a list of values in the collection with the given `name`.
-
-This is different from `get_collection_ref()`, which always returns the
-actual collection list if it exists: this method instead returns a new
-copy of the list each time it is called.
-
-##### Args:
-
-
-* <b>`name`</b>: The key for the collection. For example, the `GraphKeys` class
- contains many standard names for collections.
-* <b>`scope`</b>: (Optional.) If supplied, the resulting list is filtered to include
- only items whose `name` attribute matches using `re.match`. Items
- without a `name` attribute are never returned if a scope is supplied and
-  the choice of `re.match` means that a `scope` without special tokens
- filters by prefix.
-
-##### Returns:
-
- The list of values in the collection with the given `name`, or
- an empty list if no value has been added to that collection. The
- list contains the values in the order under which they were
- collected.
-
-
-- - -
-
-#### `tf.Graph.get_collection_ref(name)` {#Graph.get_collection_ref}
-
-Returns a list of values in the collection with the given `name`.
-
-If the collection exists, this returns the list itself, which can
-be modified in place to change the collection. If the collection does
-not exist, it is created as an empty list and the list is returned.
-
-This is different from `get_collection()` which always returns a copy of
-the collection list if it exists and never creates an empty collection.
-
-##### Args:
-
-
-* <b>`name`</b>: The key for the collection. For example, the `GraphKeys` class
- contains many standard names for collections.
-
-##### Returns:
-
- The list of values in the collection with the given `name`, or an empty
- list if no value has been added to that collection.
-
-
-
-- - -
-
-#### `tf.Graph.as_graph_element(obj, allow_tensor=True, allow_operation=True)` {#Graph.as_graph_element}
-
-Returns the object referred to by `obj`, as an `Operation` or `Tensor`.
-
-This function validates that `obj` represents an element of this
-graph, and gives an informative error message if it is not.
-
-This function is the canonical way to get/validate an object of
-one of the allowed types from an external argument reference in the
-Session API.
-
-This method may be called concurrently from multiple threads.
-
-##### Args:
-
-
-* <b>`obj`</b>: A `Tensor`, an `Operation`, or the name of a tensor or operation.
- Can also be any object with an `_as_graph_element()` method that returns
- a value of one of these types.
-* <b>`allow_tensor`</b>: If true, `obj` may refer to a `Tensor`.
-* <b>`allow_operation`</b>: If true, `obj` may refer to an `Operation`.
-
-##### Returns:
-
- The `Tensor` or `Operation` in the Graph corresponding to `obj`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `obj` is not one of the types we support
-  attempting to convert to a graph element.
-* <b>`ValueError`</b>: If `obj` is of an appropriate type but invalid. For
- example, an invalid string.
-* <b>`KeyError`</b>: If `obj` is not an object in the graph.
-
-
-- - -
-
-#### `tf.Graph.get_operation_by_name(name)` {#Graph.get_operation_by_name}
-
-Returns the `Operation` with the given `name`.
-
-This method may be called concurrently from multiple threads.
-
-##### Args:
-
-
-* <b>`name`</b>: The name of the `Operation` to return.
-
-##### Returns:
-
- The `Operation` with the given `name`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `name` is not a string.
-* <b>`KeyError`</b>: If `name` does not correspond to an operation in this graph.
-
-
-- - -
-
-#### `tf.Graph.get_tensor_by_name(name)` {#Graph.get_tensor_by_name}
-
-Returns the `Tensor` with the given `name`.
-
-This method may be called concurrently from multiple threads.
-
-##### Args:
-
-
-* <b>`name`</b>: The name of the `Tensor` to return.
-
-##### Returns:
-
- The `Tensor` with the given `name`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `name` is not a string.
-* <b>`KeyError`</b>: If `name` does not correspond to a tensor in this graph.
-
-
-- - -
-
-#### `tf.Graph.get_operations()` {#Graph.get_operations}
-
-Return the list of operations in the graph.
-
-You can modify the operations in place, but modifications
-to the list, such as inserts/deletes, have no effect on the
-list of operations known to the graph.
-
-This method may be called concurrently from multiple threads.
-
-##### Returns:
-
- A list of Operations.
-
-
-
-- - -
-
-#### `tf.Graph.seed` {#Graph.seed}
-
-The graph-level random seed of this graph.
-
-
-- - -
-
-#### `tf.Graph.unique_name(name, mark_as_used=True)` {#Graph.unique_name}
-
-Return a unique operation name for `name`.
-
-Note: You rarely need to call `unique_name()` directly. Most of
-the time you just need to create `with g.name_scope()` blocks to
-generate structured names.
-
-`unique_name` is used to generate structured names, separated by
-`"/"`, to help identify operations when debugging a graph.
-Operation names are displayed in error messages reported by the
-TensorFlow runtime, and in various visualization tools such as
-TensorBoard.
-
-If `mark_as_used` is set to `True`, which is the default, a new
-unique name is created and marked as in use. If it's set to `False`,
-the unique name is returned without actually being marked as used.
-This is useful when the caller simply wants to know what the name
-to be created will be.
-
-##### Args:
-
-
-* <b>`name`</b>: The name for an operation.
-* <b>`mark_as_used`</b>: Whether to mark this name as being used.
-
-##### Returns:
-
- A string to be passed to `create_op()` that will be used
- to name the operation being created.
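
Editor's note: although, per the note above, you rarely call `unique_name()` directly, a quick sketch of the `mark_as_used` behavior (against the current `Graph.unique_name` method; not part of the original page):

```python
import tensorflow as tf

g = tf.Graph()

# mark_as_used=False only peeks at the next name; it does not reserve it.
peek = g.unique_name("op", mark_as_used=False)
first = g.unique_name("op")
second = g.unique_name("op")

assert peek == first == "op"
assert second == "op_1"
```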
-
-
-- - -
-
-#### `tf.Graph.version` {#Graph.version}
-
-Returns a version number that increases as ops are added to the graph.
-
-Note that this is unrelated to the
-[GraphDef version](#Graph.graph_def_version).
-
-
-- - -
-
-#### `tf.Graph.graph_def_versions` {#Graph.graph_def_versions}
-
-The GraphDef version information of this graph.
-
-For details on the meaning of each version, see
-[`GraphDef`](https://www.tensorflow.org/code/tensorflow/core/framework/graph.proto).
-
-##### Returns:
-
- A `VersionDef`.
-
-
-
-- - -
-
-#### `tf.Graph.create_op(op_type, inputs, dtypes, input_types=None, name=None, attrs=None, op_def=None, compute_shapes=True, compute_device=True)` {#Graph.create_op}
-
-Creates an `Operation` in this graph.
-
-This is a low-level interface for creating an `Operation`. Most
-programs will not call this method directly, and instead use the
-Python op constructors, such as `tf.constant()`, which add ops to
-the default graph.
-
-##### Args:
-
-
-* <b>`op_type`</b>: The `Operation` type to create. This corresponds to the
- `OpDef.name` field for the proto that defines the operation.
-* <b>`inputs`</b>: A list of `Tensor` objects that will be inputs to the `Operation`.
-* <b>`dtypes`</b>: A list of `DType` objects that will be the types of the tensors
- that the operation produces.
-* <b>`input_types`</b>: (Optional.) A list of `DType`s that will be the types of
- the tensors that the operation consumes. By default, uses the base
- `DType` of each input in `inputs`. Operations that expect
- reference-typed inputs must specify `input_types` explicitly.
-* <b>`name`</b>: (Optional.) A string name for the operation. If not specified, a
- name is generated based on `op_type`.
-* <b>`attrs`</b>: (Optional.) A dictionary where the key is the attribute name (a
- string) and the value is the respective `attr` attribute of the
- `NodeDef` proto that will represent the operation (an `AttrValue`
- proto).
-* <b>`op_def`</b>: (Optional.) The `OpDef` proto that describes the `op_type` that
- the operation will have.
-* <b>`compute_shapes`</b>: (Optional.) If True, shape inference will be performed
- to compute the shapes of the outputs.
-* <b>`compute_device`</b>: (Optional.) If True, device functions will be executed
- to compute the device property of the Operation.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if any of the inputs is not a `Tensor`.
-* <b>`ValueError`</b>: if colocation conflicts with existing device assignment.
-
-##### Returns:
-
- An `Operation` object.
-
-
-- - -
-
-#### `tf.Graph.gradient_override_map(op_type_map)` {#Graph.gradient_override_map}
-
-EXPERIMENTAL: A context manager for overriding gradient functions.
-
-This context manager can be used to override the gradient function
-that will be used for ops within the scope of the context.
-
-For example:
-
-```python
-@tf.RegisterGradient("CustomSquare")
-def _custom_square_grad(op, grad):
- # ...
-
-with tf.Graph().as_default() as g:
- c = tf.constant(5.0)
- s_1 = tf.square(c) # Uses the default gradient for tf.square.
- with g.gradient_override_map({"Square": "CustomSquare"}):
-    s_2 = tf.square(c)  # Uses _custom_square_grad to compute the
-                        # gradient of s_2.
-```
-
-##### Args:
-
-
-* <b>`op_type_map`</b>: A dictionary mapping op type strings to alternative op
- type strings.
-
-##### Returns:
-
- A context manager that sets the alternative op type to be used for one
- or more ops created in that context.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `op_type_map` is not a dictionary mapping strings to
- strings.
-
-
-
-#### Other Methods
-- - -
-
-#### `tf.Graph.colocate_with(op, ignore_existing=False)` {#Graph.colocate_with}
-
-Returns a context manager that specifies an op to colocate with.
-
-Note: this function is not for public use, only for internal libraries.
-
-For example:
-
-```python
-a = tf.Variable([1.0])
-with g.colocate_with(a):
- b = tf.constant(1.0)
- c = tf.add(a, b)
-```
-
-`b` and `c` will always be colocated with `a`, no matter where `a`
-is eventually placed.
-
-##### Args:
-
-
-* <b>`op`</b>: The op to colocate all created ops with.
-* <b>`ignore_existing`</b>: If true, only applies colocation of this op within
- the context, rather than applying all colocation properties
- on the stack.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if op is None.
-
-##### Yields:
-
- A context manager that specifies the op with which to colocate
- newly created ops.
-
-
-- - -
-
-#### `tf.Graph.get_all_collection_keys()` {#Graph.get_all_collection_keys}
-
-Returns a list of collections used in this graph.
-
-
-- - -
-
-#### `tf.Graph.is_feedable(tensor)` {#Graph.is_feedable}
-
-Returns `True` if and only if `tensor` is feedable.
-
-
-- - -
-
-#### `tf.Graph.prevent_feeding(tensor)` {#Graph.prevent_feeding}
-
-Marks the given `tensor` as unfeedable in this graph.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.GraphKeys.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.GraphKeys.md
deleted file mode 100644
index 1d656f4018..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.GraphKeys.md
+++ /dev/null
@@ -1,36 +0,0 @@
-Standard names to use for graph collections.
-
-The standard library uses various well-known names to collect and
-retrieve values associated with a graph. For example, the
-`tf.Optimizer` subclasses default to optimizing the variables
-collected under `tf.GraphKeys.TRAINABLE_VARIABLES` if none is
-specified, but it is also possible to pass an explicit list of
-variables.
-
-The following standard keys are defined:
-
-* `VARIABLES`: the `Variable` objects that comprise a model, and
- must be saved and restored together. See
- [`tf.all_variables()`](../../api_docs/python/state_ops.md#all_variables)
- for more details.
-* `TRAINABLE_VARIABLES`: the subset of `Variable` objects that will
- be trained by an optimizer. See
- [`tf.trainable_variables()`](../../api_docs/python/state_ops.md#trainable_variables)
- for more details.
-* `SUMMARIES`: the summary `Tensor` objects that have been created in the
- graph. See
- [`tf.merge_all_summaries()`](../../api_docs/python/train.md#merge_all_summaries)
- for more details.
-* `QUEUE_RUNNERS`: the `QueueRunner` objects that are used to
- produce input for a computation. See
- [`tf.start_queue_runners()`](../../api_docs/python/train.md#start_queue_runners)
- for more details.
-* `MOVING_AVERAGE_VARIABLES`: the subset of `Variable` objects that will also
- keep moving averages. See
- [`tf.moving_average_variables()`](../../api_docs/python/state_ops.md#moving_average_variables)
- for more details.
-* `REGULARIZATION_LOSSES`: regularization losses collected during graph
- construction.
-* `WEIGHTS`: weights inside neural network layers
-* `BIASES`: biases inside neural network layers
-* `ACTIVATIONS`: activations of neural network layers
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.RandomShuffleQueue.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.RandomShuffleQueue.md
deleted file mode 100644
index cd617e7578..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.RandomShuffleQueue.md
+++ /dev/null
@@ -1,54 +0,0 @@
-A queue implementation that dequeues elements in a random order.
-
-See [`tf.QueueBase`](#QueueBase) for a description of the methods on
-this class.
-
-- - -
-
-#### `tf.RandomShuffleQueue.__init__(capacity, min_after_dequeue, dtypes, shapes=None, names=None, seed=None, shared_name=None, name='random_shuffle_queue')` {#RandomShuffleQueue.__init__}
-
-Create a queue that dequeues elements in a random order.
-
-A `RandomShuffleQueue` has bounded capacity; supports multiple
-concurrent producers and consumers; and provides exactly-once
-delivery.
-
-A `RandomShuffleQueue` holds a list of up to `capacity`
-elements. Each element is a fixed-length tuple of tensors whose
-dtypes are described by `dtypes`, and whose shapes are optionally
-described by the `shapes` argument.
-
-If the `shapes` argument is specified, each component of a queue
-element must have the respective fixed shape. If it is
-unspecified, different queue elements may have different shapes,
-but the use of `dequeue_many` is disallowed.
-
-The `min_after_dequeue` argument allows the caller to specify a
-minimum number of elements that will remain in the queue after a
-`dequeue` or `dequeue_many` operation completes, to ensure a
-minimum level of mixing of elements. This invariant is maintained
-by blocking those operations until sufficient elements have been
-enqueued. The `min_after_dequeue` argument is ignored after the
-queue has been closed.
-
-##### Args:
-
-
-* <b>`capacity`</b>: An integer. The upper bound on the number of elements
- that may be stored in this queue.
-* <b>`min_after_dequeue`</b>: An integer (described above).
-* <b>`dtypes`</b>: A list of `DType` objects. The length of `dtypes` must equal
- the number of tensors in each queue element.
-* <b>`shapes`</b>: (Optional.) A list of fully-defined `TensorShape` objects
- with the same length as `dtypes`, or `None`.
-* <b>`names`</b>: (Optional.) A list of strings naming the components in the queue
- with the same length as `dtypes`, or `None`. If specified the dequeue
- methods return a dictionary with the names as keys.
-* <b>`seed`</b>: A Python integer. Used to create a random seed. See
- [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed)
- for behavior.
-* <b>`shared_name`</b>: (Optional.) If non-empty, this queue will be shared under
- the given name across multiple sessions.
-* <b>`name`</b>: Optional name for the queue operation.
-
-
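The `min_after_dequeue` blocking behavior described above can be modeled in a few lines of plain Python. This is an illustrative sketch of the ordering semantics only, not TensorFlow code; the class name `ShuffleQueueSketch` is made up, and where the real queue blocks, this toy model raises instead:

```python
import random

class ShuffleQueueSketch(object):
    """Toy model of RandomShuffleQueue ordering semantics (illustrative only).

    Dequeue removes a random element, but refuses (where the real op would
    block) while removing it would leave fewer than `min_after_dequeue`
    elements. After close(), min_after_dequeue is ignored.
    """

    def __init__(self, capacity, min_after_dequeue, seed=None):
        self.capacity = capacity
        self.min_after_dequeue = min_after_dequeue
        self.closed = False
        self._items = []
        self._rng = random.Random(seed)

    def enqueue(self, item):
        if len(self._items) >= self.capacity:
            raise RuntimeError("queue full")  # the real op would block here
        self._items.append(item)

    def close(self):
        self.closed = True

    def dequeue(self):
        min_remaining = 0 if self.closed else self.min_after_dequeue
        if len(self._items) <= min_remaining:
            raise RuntimeError("below min_after_dequeue")  # real op blocks
        # Remove a uniformly random element: this is the "shuffle".
        return self._items.pop(self._rng.randrange(len(self._items)))
```

The invariant is visible directly: with `min_after_dequeue=2`, only three of five enqueued elements can be dequeued until the queue is closed.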
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.RegisterGradient.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.RegisterGradient.md
new file mode 100644
index 0000000000..736bd5b4af
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.RegisterGradient.md
@@ -0,0 +1,36 @@
+A decorator for registering the gradient function for an op type.
+
+This decorator is only used when defining a new op type. For an op
+with `m` inputs and `n` outputs, the gradient function is a function
+that takes the original `Operation` and `n` `Tensor` objects
+(representing the gradients with respect to each output of the op),
+and returns `m` `Tensor` objects (representing the partial gradients
+with respect to each input of the op).
+
+For example, assuming that operations of type `"Sub"` take two
+inputs `x` and `y`, and return a single output `x - y`, the
+following gradient function would be registered:
+
+```python
+@tf.RegisterGradient("Sub")
+def _sub_grad(unused_op, grad):
+ return grad, tf.neg(grad)
+```
+
+The decorator argument `op_type` is the string type of an
+operation. This corresponds to the `OpDef.name` field for the proto
+that defines the operation.
+
+- - -
+
+#### `tf.RegisterGradient.__init__(op_type)` {#RegisterGradient.__init__}
+
+Creates a new decorator with `op_type` as the Operation type.
+
+##### Args:
+
+
+* <b>`op_type`</b>: The string type of an operation. This corresponds to the
+ `OpDef.name` field for the proto that defines the operation.
+
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.Tensor.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.Tensor.md
new file mode 100644
index 0000000000..73af134a7a
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.Tensor.md
@@ -0,0 +1,228 @@
+Represents a value produced by an `Operation`.
+
+A `Tensor` is a symbolic handle to one of the outputs of an
+`Operation`. It does not hold the values of that operation's output,
+but instead provides a means of computing those values in a
+TensorFlow [`Session`](../../api_docs/python/client.md#Session).
+
+This class has two primary purposes:
+
+1. A `Tensor` can be passed as an input to another `Operation`.
+ This builds a dataflow connection between operations, which
+ enables TensorFlow to execute an entire `Graph` that represents a
+ large, multi-step computation.
+
+2. After the graph has been launched in a session, the value of the
+ `Tensor` can be computed by passing it to
+ [`Session.run()`](../../api_docs/python/client.md#Session.run).
+ `t.eval()` is a shortcut for calling
+ `tf.get_default_session().run(t)`.
+
+In the following example, `c`, `d`, and `e` are symbolic `Tensor`
+objects, whereas `result` is a numpy array that stores a concrete
+value:
+
+```python
+# Build a dataflow graph.
+c = tf.constant([[1.0, 2.0], [3.0, 4.0]])
+d = tf.constant([[1.0, 1.0], [0.0, 1.0]])
+e = tf.matmul(c, d)
+
+# Construct a `Session` to execute the graph.
+sess = tf.Session()
+
+# Execute the graph and store the value that `e` represents in `result`.
+result = sess.run(e)
+```
+
+- - -
+
+#### `tf.Tensor.dtype` {#Tensor.dtype}
+
+The `DType` of elements in this tensor.
+
+
+- - -
+
+#### `tf.Tensor.name` {#Tensor.name}
+
+The string name of this tensor.
+
+
+- - -
+
+#### `tf.Tensor.value_index` {#Tensor.value_index}
+
+The index of this tensor in the outputs of its `Operation`.
+
+
+- - -
+
+#### `tf.Tensor.graph` {#Tensor.graph}
+
+The `Graph` that contains this tensor.
+
+
+- - -
+
+#### `tf.Tensor.op` {#Tensor.op}
+
+The `Operation` that produces this tensor as an output.
+
+
+- - -
+
+#### `tf.Tensor.consumers()` {#Tensor.consumers}
+
+Returns a list of `Operation`s that consume this tensor.
+
+##### Returns:
+
+ A list of `Operation`s.
+
+
+
+- - -
+
+#### `tf.Tensor.eval(feed_dict=None, session=None)` {#Tensor.eval}
+
+Evaluates this tensor in a `Session`.
+
+Calling this method will execute all preceding operations that
+produce the inputs needed for the operation that produces this
+tensor.
+
+*N.B.* Before invoking `Tensor.eval()`, its graph must have been
+launched in a session, and either a default session must be
+available, or `session` must be specified explicitly.
+
+##### Args:
+
+
+* <b>`feed_dict`</b>: A dictionary that maps `Tensor` objects to feed values.
+ See [`Session.run()`](../../api_docs/python/client.md#Session.run) for a
+ description of the valid feed values.
+* <b>`session`</b>: (Optional.) The `Session` to be used to evaluate this tensor. If
+ none, the default session will be used.
+
+##### Returns:
+
+ A numpy array corresponding to the value of this tensor.
+
+
+
+- - -
+
+#### `tf.Tensor.get_shape()` {#Tensor.get_shape}
+
+Returns the `TensorShape` that represents the shape of this tensor.
+
+The shape is computed using shape inference functions that are
+registered for each `Operation` type using `tf.RegisterShape`.
+See [`TensorShape`](../../api_docs/python/framework.md#TensorShape) for more
+details of what a shape represents.
+
+The inferred shape of a tensor is used to provide shape
+information without having to launch the graph in a session. This
+can be used for debugging, and providing early error messages. For
+example:
+
+```python
+c = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
+
+print(c.get_shape())
+==> TensorShape([Dimension(2), Dimension(3)])
+
+d = tf.constant([[1.0, 0.0], [0.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
+
+print(d.get_shape())
+==> TensorShape([Dimension(4), Dimension(2)])
+
+# Raises a ValueError, because `c` and `d` do not have compatible
+# inner dimensions.
+e = tf.matmul(c, d)
+
+f = tf.matmul(c, d, transpose_a=True, transpose_b=True)
+
+print(f.get_shape())
+==> TensorShape([Dimension(3), Dimension(4)])
+```
+
+In some cases, the inferred shape may have unknown dimensions. If
+the caller has additional information about the values of these
+dimensions, `Tensor.set_shape()` can be used to augment the
+inferred shape.
+
+##### Returns:
+
+ A `TensorShape` representing the shape of this tensor.
+
+
+- - -
+
+#### `tf.Tensor.set_shape(shape)` {#Tensor.set_shape}
+
+Updates the shape of this tensor.
+
+This method can be called multiple times, and will merge the given
+`shape` with the current shape of this tensor. It can be used to
+provide additional information about the shape of this tensor that
+cannot be inferred from the graph alone. For example, this can be used
+to provide additional information about the shapes of images:
+
+```python
+_, image_data = tf.TFRecordReader(...).read(...)
+image = tf.image.decode_png(image_data, channels=3)
+
+# The height and width dimensions of `image` are data dependent, and
+# cannot be computed without executing the op.
+print(image.get_shape())
+==> TensorShape([Dimension(None), Dimension(None), Dimension(3)])
+
+# We know that each image in this dataset is 28 x 28 pixels.
+image.set_shape([28, 28, 3])
+print(image.get_shape())
+==> TensorShape([Dimension(28), Dimension(28), Dimension(3)])
+```
+
+##### Args:
+
+
+* <b>`shape`</b>: A `TensorShape` representing the shape of this tensor.
+
+##### Raises:
+
+
+* <b>`ValueError`</b>: If `shape` is not compatible with the current shape of
+ this tensor.
+
+
+
+#### Other Methods
+- - -
+
+#### `tf.Tensor.__init__(op, value_index, dtype)` {#Tensor.__init__}
+
+Creates a new `Tensor`.
+
+##### Args:
+
+
+* <b>`op`</b>: An `Operation`. `Operation` that computes this tensor.
+* <b>`value_index`</b>: An `int`. Index of the operation's endpoint that produces
+ this tensor.
+* <b>`dtype`</b>: A `DType`. Type of elements stored in this tensor.
+
+##### Raises:
+
+
+* <b>`TypeError`</b>: If the op is not an `Operation`.
+
+
+- - -
+
+#### `tf.Tensor.device` {#Tensor.device}
+
+The name of the device on which this tensor will be produced, or None.
+
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.VarLenFeature.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.VarLenFeature.md
new file mode 100644
index 0000000000..a7b49bfcd6
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.VarLenFeature.md
@@ -0,0 +1,11 @@
+Configuration for parsing a variable-length input feature.
+
+Fields:
+ dtype: Data type of input.
+- - -
+
+#### `tf.VarLenFeature.dtype` {#VarLenFeature.dtype}
+
+Alias for field number 0
+
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.WholeFileReader.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.WholeFileReader.md
deleted file mode 100644
index e168cabc9e..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.WholeFileReader.md
+++ /dev/null
@@ -1,148 +0,0 @@
-A Reader that outputs the entire contents of a file as a value.
-
-To use, enqueue filenames in a Queue. The output of Read will
-be a filename (key) and the contents of that file (value).
-
-See ReaderBase for supported methods.
-- - -
-
-#### `tf.WholeFileReader.__init__(name=None)` {#WholeFileReader.__init__}
-
-Create a WholeFileReader.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-
-
-- - -
-
-#### `tf.WholeFileReader.num_records_produced(name=None)` {#WholeFileReader.num_records_produced}
-
-Returns the number of records this reader has produced.
-
-This is the same as the number of Read executions that have
-succeeded.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- An int64 Tensor.
-
-
-- - -
-
-#### `tf.WholeFileReader.num_work_units_completed(name=None)` {#WholeFileReader.num_work_units_completed}
-
-Returns the number of work units this reader has finished processing.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- An int64 Tensor.
-
-
-- - -
-
-#### `tf.WholeFileReader.read(queue, name=None)` {#WholeFileReader.read}
-
-Returns the next record (key, value pair) produced by a reader.
-
-Will dequeue a work unit from queue if necessary (e.g. when the
-Reader needs to start reading from a new file since it has
-finished with the previous file).
-
-##### Args:
-
-
-* <b>`queue`</b>: A Queue or a mutable string Tensor representing a handle
- to a Queue, with string work items.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A tuple of Tensors (key, value).
-
-* <b>`key`</b>: A string scalar Tensor.
-* <b>`value`</b>: A string scalar Tensor.
-
-
-- - -
-
-#### `tf.WholeFileReader.reader_ref` {#WholeFileReader.reader_ref}
-
-Op that implements the reader.
-
-
-- - -
-
-#### `tf.WholeFileReader.reset(name=None)` {#WholeFileReader.reset}
-
-Restore a reader to its initial clean state.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The created Operation.
-
-
-- - -
-
-#### `tf.WholeFileReader.restore_state(state, name=None)` {#WholeFileReader.restore_state}
-
-Restore a reader to a previously saved state.
-
-Not all Readers support being restored, so this can produce an
-Unimplemented error.
-
-##### Args:
-
-
-* <b>`state`</b>: A string Tensor.
- Result of a SerializeState of a Reader with matching type.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The created Operation.
-
-
-- - -
-
-#### `tf.WholeFileReader.serialize_state(name=None)` {#WholeFileReader.serialize_state}
-
-Produce a string tensor that encodes the state of a reader.
-
-Not all Readers support being serialized, so this can produce an
-Unimplemented error.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A string Tensor.
-
-
-- - -
-
-#### `tf.WholeFileReader.supports_serialize` {#WholeFileReader.supports_serialize}
-
-Whether the Reader implementation can serialize its state.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.abs.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.abs.md
new file mode 100644
index 0000000000..63a0b4c954
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.abs.md
@@ -0,0 +1,22 @@
+### `tf.abs(x, name=None)` {#abs}
+
+Computes the absolute value of a tensor.
+
+Given a tensor of real numbers `x`, this operation returns a tensor
+containing the absolute value of each element in `x`. For example, if x is
+an input element and y is an output element, this operation computes
+\\(y = |x|\\).
+
+See [`tf.complex_abs()`](#tf_complex_abs) to compute the absolute value of a complex
+number.
+
+##### Args:
+
+
+* <b>`x`</b>: A `Tensor` of type `float`, `double`, `int32`, or `int64`.
+* <b>`name`</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor` the same size and type as `x` with absolute values.
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.add_to_collection.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.add_to_collection.md
deleted file mode 100644
index 1d8d752917..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.add_to_collection.md
+++ /dev/null
@@ -1,14 +0,0 @@
-### `tf.add_to_collection(name, value)` {#add_to_collection}
-
-Wrapper for `Graph.add_to_collection()` using the default graph.
-
-See [`Graph.add_to_collection()`](../../api_docs/python/framework.md#Graph.add_to_collection)
-for more details.
-
-##### Args:
-
-
-* <b>`name`</b>: The key for the collection. For example, the `GraphKeys` class
- contains many standard names for collections.
-* <b>`value`</b>: The value to add to the collection.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.as_dtype.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.as_dtype.md
deleted file mode 100644
index 50a048aacb..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.as_dtype.md
+++ /dev/null
@@ -1,21 +0,0 @@
-### `tf.as_dtype(type_value)` {#as_dtype}
-
-Converts the given `type_value` to a `DType`.
-
-##### Args:
-
-
-* <b>`type_value`</b>: A value that can be converted to a `tf.DType`
- object. This may currently be a `tf.DType` object, a
- [`DataType` enum](https://www.tensorflow.org/code/tensorflow/core/framework/types.proto),
- a string type name, or a `numpy.dtype`.
-
-##### Returns:
-
- A `DType` corresponding to `type_value`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `type_value` cannot be converted to a `DType`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.assert_integer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.assert_integer.md
deleted file mode 100644
index c75ba58765..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.assert_integer.md
+++ /dev/null
@@ -1,30 +0,0 @@
-### `tf.assert_integer(x, data=None, summarize=None, name=None)` {#assert_integer}
-
-Assert that `x` is of integer dtype.
-
-Example of adding a dependency to an operation:
-
-```python
-with tf.control_dependencies([tf.assert_integer(x)]):
- output = tf.reduce_sum(x)
-```
-
-Example of adding dependency to the tensor being checked:
-
-```python
-x = tf.with_dependencies([tf.assert_integer(x)], x)
-```
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor` whose basetype is integer and is not quantized.
-* <b>`data`</b>: The tensors to print out if the condition is False. Defaults to
- error message and first few entries of `x`.
-* <b>`summarize`</b>: Print this many entries of each tensor.
-* <b>`name`</b>: A name for this operation (optional). Defaults to "assert_integer".
-
-##### Returns:
-
-  Op that raises `InvalidArgumentError` if `x` is not of integer dtype.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.assert_non_negative.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.assert_non_negative.md
new file mode 100644
index 0000000000..47f07a698a
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.assert_non_negative.md
@@ -0,0 +1,34 @@
+### `tf.assert_non_negative(x, data=None, summarize=None, name=None)` {#assert_non_negative}
+
+Assert the condition `x >= 0` holds element-wise.
+
+Example of adding a dependency to an operation:
+
+```python
+with tf.control_dependencies([tf.assert_non_negative(x)]):
+ output = tf.reduce_sum(x)
+```
+
+Example of adding dependency to the tensor being checked:
+
+```python
+x = tf.with_dependencies([tf.assert_non_negative(x)], x)
+```
+
+Non-negative means, for every element `x[i]` of `x`, we have `x[i] >= 0`.
+If `x` is empty this is trivially satisfied.
+
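The element-wise condition, including the empty-input case, can be stated as a pure-Python check. This is an illustrative analogue only, not the op's implementation; `check_non_negative` is a made-up name:

```python
def check_non_negative(values):
    """Raise unless every element satisfies v >= 0 (illustrative sketch)."""
    if not all(v >= 0 for v in values):
        raise ValueError("assertion x >= 0 failed")
    return values

check_non_negative([0, 1.5, 2])  # passes
check_non_negative([])           # empty input is trivially non-negative
```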
+##### Args:
+
+
+* <b>`x`</b>: Numeric `Tensor`.
+* <b>`data`</b>: The tensors to print out if the condition is False. Defaults to
+ error message and first few entries of `x`.
+* <b>`summarize`</b>: Print this many entries of each tensor.
+* <b>`name`</b>: A name for this operation (optional).
+ Defaults to "assert_non_negative".
+
+##### Returns:
+
+ Op raising `InvalidArgumentError` unless `x` is all non-negative.
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.batch_cholesky.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.batch_cholesky.md
new file mode 100644
index 0000000000..487680f50b
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.batch_cholesky.md
@@ -0,0 +1,20 @@
+### `tf.batch_cholesky(input, name=None)` {#batch_cholesky}
+
+Calculates the Cholesky decomposition of a batch of square matrices.
+
+The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions
+form square matrices, with the same constraints as the single matrix Cholesky
+decomposition above. The output is a tensor of the same shape as the input
+containing the Cholesky decompositions for all input submatrices `[..., :, :]`.
+
+##### Args:
+
+
+* <b>`input`</b>: A `Tensor`. Must be one of the following types: `float64`, `float32`.
+ Shape is `[..., M, M]`.
+* <b>`name`</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `input`. Shape is `[..., M, M]`.
+
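The batched op is simply the single-matrix decomposition mapped over the leading dimensions. A minimal pure-Python sketch (illustrative only, using nested lists rather than tensors; `cholesky` and `batch_cholesky` here are stand-in names):

```python
import math

def cholesky(a):
    """Lower-triangular Cholesky factor L of one square matrix, A = L L^T."""
    n = len(a)
    l = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(l[i][k] * l[j][k] for k in range(j))
            if i == j:
                l[i][j] = math.sqrt(a[i][i] - s)
            else:
                l[i][j] = (a[i][j] - s) / l[j][j]
    return l

def batch_cholesky(batch):
    # Apply the single-matrix factorization to each [M, M] submatrix.
    return [cholesky(m) for m in batch]
```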
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.batch_cholesky_solve.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.batch_cholesky_solve.md
deleted file mode 100644
index 25fcc5c908..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.batch_cholesky_solve.md
+++ /dev/null
@@ -1,35 +0,0 @@
-### `tf.batch_cholesky_solve(chol, rhs, name=None)` {#batch_cholesky_solve}
-
-Solve batches of linear eqns `A X = RHS`, given Cholesky factorizations.
-
-```python
-# Solve one linear system (K = 1) for every member of the length 10 batch.
-A = ... # shape 10 x 2 x 2
-RHS = ... # shape 10 x 2 x 1
-chol = tf.batch_cholesky(A) # shape 10 x 2 x 2
-X = tf.batch_cholesky_solve(chol, RHS) # shape 10 x 2 x 1
-# tf.matmul(A, X) ~ RHS
-X[3, :, 0] # Solution to the linear system A[3, :, :] x = RHS[3, :, 0]
-
-# Solve five linear systems (K = 5) for every member of the length 10 batch.
-A = ... # shape 10 x 2 x 2
-RHS = ... # shape 10 x 2 x 5
-...
-X[3, :, 2] # Solution to the linear system A[3, :, :] x = RHS[3, :, 2]
-```
-
-##### Args:
-
-
-* <b>`chol`</b>: A `Tensor`. Must be `float32` or `float64`, shape is `[..., M, M]`.
- Cholesky factorization of `A`, e.g. `chol = tf.batch_cholesky(A)`.
- For that reason, only the lower triangular parts (including the diagonal)
- of the last two dimensions of `chol` are used. The strictly upper part is
- assumed to be zero and not accessed.
-* <b>`rhs`</b>: A `Tensor`, same type as `chol`, shape is `[..., M, K]`.
-* <b>`name`</b>: A name to give this `Op`. Defaults to `batch_cholesky_solve`.
-
-##### Returns:
-
- Solution to `A x = rhs`, shape `[..., M, K]`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.batch_ifft3d.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.batch_ifft3d.md
deleted file mode 100644
index 1173a17d6d..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.batch_ifft3d.md
+++ /dev/null
@@ -1,18 +0,0 @@
-### `tf.batch_ifft3d(input, name=None)` {#batch_ifft3d}
-
-Compute the inverse 3-dimensional discrete Fourier Transform over the
-inner-most 3 dimensions of `input`.
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor` of type `complex64`. A complex64 tensor.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `complex64`.
- A complex64 tensor of the same shape as `input`. The inner-most 3
- dimensions of `input` are replaced with their inverse 3D Fourier Transform.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.batch_matrix_diag.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.batch_matrix_diag.md
deleted file mode 100644
index 6e5458ba6c..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.batch_matrix_diag.md
+++ /dev/null
@@ -1,42 +0,0 @@
-### `tf.batch_matrix_diag(diagonal, name=None)` {#batch_matrix_diag}
-
-Returns a batched diagonal tensor with given batched diagonal values.
-
-Given a `diagonal`, this operation returns a tensor with the `diagonal` and
-everything else padded with zeros. The diagonal is computed as follows:
-
-Assume `diagonal` has `k` dimensions `[I, J, K, ..., N]`, then the output is a
-tensor of rank `k+1` with dimensions `[I, J, K, ..., N, N]` where:
-
-`output[i, j, k, ..., m, n] = 1{m=n} * diagonal[i, j, k, ..., n]`.
-
-For example:
-
-```prettyprint
-# 'diagonal' is [[1, 2, 3, 4], [5, 6, 7, 8]]
-
-and diagonal.shape = (2, 4)
-
-tf.batch_matrix_diag(diagonal) ==> [[[1, 0, 0, 0]
- [0, 2, 0, 0]
- [0, 0, 3, 0]
- [0, 0, 0, 4]],
- [[5, 0, 0, 0]
- [0, 6, 0, 0]
- [0, 0, 7, 0]
- [0, 0, 0, 8]]]
-
-which has shape (2, 4, 4)
-```
-
-##### Args:
-
-
-* <b>`diagonal`</b>: A `Tensor`. Rank `k`, where `k >= 1`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `diagonal`.
- Rank `k+1`, with `output.shape = diagonal.shape + [diagonal.shape[-1]]`.
-
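The defining formula `output[..., m, n] = 1{m=n} * diagonal[..., n]` can be sketched in plain Python by recursing over the batch dimensions. This is an illustrative nested-list model, not the op's implementation:

```python
def batch_matrix_diag(diagonal):
    """output[..., m, n] = diagonal[..., n] if m == n else 0 (sketch)."""
    if diagonal and isinstance(diagonal[0], list):
        # Still inside the batch dimensions: recurse element-wise.
        return [batch_matrix_diag(d) for d in diagonal]
    # Base case: a rank-1 diagonal becomes an N x N matrix.
    n = len(diagonal)
    return [[diagonal[i] if i == j else 0 for j in range(n)]
            for i in range(n)]
```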
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.batch_self_adjoint_eig.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.batch_self_adjoint_eig.md
new file mode 100644
index 0000000000..19d6c5319f
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.batch_self_adjoint_eig.md
@@ -0,0 +1,22 @@
+### `tf.batch_self_adjoint_eig(input, name=None)` {#batch_self_adjoint_eig}
+
+Calculates the Eigen Decomposition of a batch of square self-adjoint matrices.
+
+The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions
+form square matrices, with the same constraints as the single matrix
+SelfAdjointEig.
+
+The result is a `[..., M+1, M]` matrix with `[..., 0, :]` containing the
+eigenvalues, and subsequent `[..., 1:, :]` containing the eigenvectors.
+
+##### Args:
+
+
+* <b>`input`</b>: A `Tensor`. Must be one of the following types: `float64`, `float32`.
+ Shape is `[..., M, M]`.
+* <b>`name`</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `input`. Shape is `[..., M+1, M]`.
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.ceil.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.ceil.md
deleted file mode 100644
index 34e4a7feed..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.ceil.md
+++ /dev/null
@@ -1,14 +0,0 @@
-### `tf.ceil(x, name=None)` {#ceil}
-
-Returns element-wise smallest integer not less than x.
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
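The "smallest integer not less than x" definition matters most for negative inputs, as a plain-Python analogue (illustrative only) shows:

```python
import math

# Element-wise ceiling: note math.ceil(-1.7) is -1, not -2.
xs = [-1.7, -0.2, 0.2, 1.7]
ceiled = [math.ceil(x) for x in xs]
```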
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.constant_initializer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.constant_initializer.md
new file mode 100644
index 0000000000..4ac524d708
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.constant_initializer.md
@@ -0,0 +1,20 @@
+### `tf.constant_initializer(value=0.0, dtype=tf.float32)` {#constant_initializer}
+
+Returns an initializer that generates tensors with a single value.
+
+##### Args:
+
+
+* <b>`value`</b>: A Python scalar. All elements of the initialized variable
+ will be set to this value.
+* <b>`dtype`</b>: The data type. Only floating point types are supported.
+
+##### Returns:
+
+ An initializer that generates tensors with a single value.
+
+##### Raises:
+
+
+* <b>`ValueError`</b>: if `dtype` is not a floating point type.
+
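The initializer pattern itself (a factory returning a function that fills any requested shape with one value) can be sketched in plain Python. This is an illustrative model, not TensorFlow's implementation; `constant_initializer_sketch` is a made-up name and nested lists stand in for tensors:

```python
def constant_initializer_sketch(value=0.0):
    """Return a function that builds a constant-filled value for any shape."""
    def _initializer(shape):
        if len(shape) == 0:
            return value  # scalar case
        # Peel off one dimension and recurse for the rest.
        return [_initializer(shape[1:]) for _ in range(shape[0])]
    return _initializer

init = constant_initializer_sketch(0.5)
filled = init([2, 3])  # a 2 x 3 nested list, every entry 0.5
```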
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.distributions.ContinuousDistribution.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.distributions.ContinuousDistribution.md
deleted file mode 100644
index e474870cd4..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.distributions.ContinuousDistribution.md
+++ /dev/null
@@ -1,153 +0,0 @@
-Base class for continuous probability distributions.
-
-`ContinuousDistribution` defines the API for the likelihood functions `pdf`
-and `log_pdf` of continuous probability distributions, and a property
-`is_reparameterized` (returning `True` or `False`) which describes
-whether the samples of this distribution are calculated in a differentiable
-way from a non-parameterized distribution. For example, the `Normal`
-distribution with parameters `mu` and `sigma` is reparameterized as
-
-```Normal(mu, sigma) = sigma * Normal(0, 1) + mu```
-
-Subclasses must override `pdf` and `log_pdf` but one can call this base
-class's implementation. They must also override the `is_reparameterized`
-property.
-
-See `BaseDistribution` for more information on the API for probability
-distributions.
-- - -
-
-#### `tf.contrib.distributions.ContinuousDistribution.batch_shape(name=None)` {#ContinuousDistribution.batch_shape}
-
-Batch dimensions of this instance as a 1-D int32 `Tensor`.
-
-The product of the dimensions of the `batch_shape` is the number of
-independent distributions of this kind the instance represents.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
- `Tensor` `batch_shape`
-
-
-- - -
-
-#### `tf.contrib.distributions.ContinuousDistribution.cdf(value, name='cdf')` {#ContinuousDistribution.cdf}
-
-Cumulative distribution function.
-
-
-- - -
-
-#### `tf.contrib.distributions.ContinuousDistribution.dtype` {#ContinuousDistribution.dtype}
-
-dtype of samples from this distribution.
-
-
-- - -
-
-#### `tf.contrib.distributions.ContinuousDistribution.entropy(name=None)` {#ContinuousDistribution.entropy}
-
-Entropy of the distribution in nats.
-
-
-- - -
-
-#### `tf.contrib.distributions.ContinuousDistribution.event_shape(name=None)` {#ContinuousDistribution.event_shape}
-
-Shape of a sample from a single distribution as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
- `Tensor` `event_shape`
-
-
-- - -
-
-#### `tf.contrib.distributions.ContinuousDistribution.get_batch_shape()` {#ContinuousDistribution.get_batch_shape}
-
-`TensorShape` available at graph construction time.
-
-Same meaning as `batch_shape`. May be only partially defined.
-
-
-- - -
-
-#### `tf.contrib.distributions.ContinuousDistribution.get_event_shape()` {#ContinuousDistribution.get_event_shape}
-
-`TensorShape` available at graph construction time.
-
-Same meaning as `event_shape`. May be only partially defined.
-
-
-- - -
-
-#### `tf.contrib.distributions.ContinuousDistribution.is_reparameterized` {#ContinuousDistribution.is_reparameterized}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.ContinuousDistribution.log_cdf(value, name='log_cdf')` {#ContinuousDistribution.log_cdf}
-
-Log CDF.
-
-
-- - -
-
-#### `tf.contrib.distributions.ContinuousDistribution.log_pdf(value, name='log_pdf')` {#ContinuousDistribution.log_pdf}
-
-Log of the probability density function.
-
-
-- - -
-
-#### `tf.contrib.distributions.ContinuousDistribution.mean` {#ContinuousDistribution.mean}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.ContinuousDistribution.name` {#ContinuousDistribution.name}
-
-Name to prepend to all ops.
-
-
-- - -
-
-#### `tf.contrib.distributions.ContinuousDistribution.pdf(value, name='pdf')` {#ContinuousDistribution.pdf}
-
-Probability density function.
-
-
-- - -
-
-#### `tf.contrib.distributions.ContinuousDistribution.sample(n, seed=None, name=None)` {#ContinuousDistribution.sample}
-
-Generate `n` samples.
-
-##### Args:
-
-
-* <b>`n`</b>: scalar. Number of samples to draw from each distribution.
-* <b>`seed`</b>: Python integer seed for RNG
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` of shape `(n,) + self.batch_shape + self.event_shape`
- with values of type `self.dtype`.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.distributions.DirichletMultinomial.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.distributions.DirichletMultinomial.md
new file mode 100644
index 0000000000..1d8cb6a6dd
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.distributions.DirichletMultinomial.md
@@ -0,0 +1,185 @@
+DirichletMultinomial mixture distribution.
+
+This distribution is parameterized by a vector `alpha` of concentration
+parameters for `k` classes.
+
+#### Mathematical details
+
+The Dirichlet Multinomial is a distribution over k-class count data, meaning
+for each k-tuple of non-negative integer `counts = [c_1,...,c_k]`, we have a
+probability of these draws being made from the distribution. The distribution
+has hyperparameters `alpha = (alpha_1,...,alpha_k)`, and probability mass
+function (pmf):
+
+```pmf(counts) = C! / (c_1!...c_k!) * Beta(alpha + c) / Beta(alpha)```
+
+where above `C = sum_j c_j`, `C!` is `C` factorial, and
+`Beta(x) = prod_j Gamma(x_j) / Gamma(sum_j x_j)` is the multivariate beta
+function.
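
As a plain-Python illustration of the formula above (a hedged sketch using `math.lgamma`, not the TensorFlow implementation; `log_beta` and `dirichlet_multinomial_pmf` are hypothetical helper names), the pmf can be evaluated in log space for numerical stability:

```python
from math import lgamma, exp

def log_beta(x):
    # Multivariate beta: Beta(x) = prod_j Gamma(x_j) / Gamma(sum_j x_j).
    return sum(lgamma(v) for v in x) - lgamma(sum(x))

def dirichlet_multinomial_pmf(counts, alpha):
    # pmf(counts) = C! / (c_1! ... c_k!) * Beta(alpha + c) / Beta(alpha),
    # computed in log space and exponentiated at the end.
    log_coef = lgamma(sum(counts) + 1) - sum(lgamma(c + 1) for c in counts)
    shifted = [a + c for a, c in zip(alpha, counts)]
    return exp(log_coef + log_beta(shifted) - log_beta(alpha))

# With alpha = [1, 2, 3] and counts = [0, 2, 0] this gives 1/7.
print(dirichlet_multinomial_pmf([0, 2, 0], [1.0, 2.0, 3.0]))
```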
+
+This is a mixture distribution in that `N` samples can be produced by:
+ 1. Choose class probabilities `p = (p_1,...,p_k) ~ Dir(alpha)`
+ 2. Draw integers `m = (m_1,...,m_k) ~ Multinomial(p, N)`
+
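The two-step sampling scheme above can be sketched in plain Python (a hedged illustration drawing the Dirichlet via normalized Gamma variates with `random.gammavariate`; not the TensorFlow sampler):

```python
import random

def sample_dirichlet_multinomial(alpha, n, rng=None):
    rng = rng or random.Random(0)
    # Step 1: p ~ Dirichlet(alpha), as Gamma(alpha_j, 1) draws normalized to sum 1.
    gammas = [rng.gammavariate(a, 1.0) for a in alpha]
    total = sum(gammas)
    p = [g / total for g in gammas]
    # Step 2: m ~ Multinomial(p, n): n categorical draws, tallied per class.
    counts = [0] * len(alpha)
    for _ in range(n):
        u, acc = rng.random(), 0.0
        for j, pj in enumerate(p):
            acc += pj
            if u < acc:
                counts[j] += 1
                break
        else:
            counts[-1] += 1  # guard against floating-point round-off
    return counts

print(sample_dirichlet_multinomial([1.0, 2.0, 3.0], 10))
```
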
+This class provides methods to create indexed batches of Dirichlet
+Multinomial distributions. If the provided `alpha` is rank 2 or higher, for
+every fixed set of leading dimensions, the last dimension represents one
+single Dirichlet Multinomial distribution. When calling distribution
+functions (e.g. `dist.pdf(counts)`), `alpha` and `counts` are broadcast to the
+same shape (if possible). In all cases, the last dimension of alpha/counts
+represents single Dirichlet Multinomial distributions.
+
+#### Examples
+
+```python
+alpha = [1, 2, 3]
+dist = DirichletMultinomial(alpha)
+```
+
+This creates a 3-class distribution in which the 3rd class is the most likely
+to be drawn.
+The distribution functions can be evaluated on counts.
+
+```python
+# counts same shape as alpha.
+counts = [0, 2, 0]
+dist.pdf(counts) # Shape []
+
+# alpha will be broadcast to [[1, 2, 3], [1, 2, 3]] to match counts.
+counts = [[11, 22, 33], [44, 55, 66]]
+dist.pdf(counts) # Shape [2]
+
+# alpha will be broadcast to shape [5, 7, 3] to match counts.
+counts = [[...]] # Shape [5, 7, 3]
+dist.pdf(counts) # Shape [5, 7]
+```
+
+The following creates a 2-batch of 3-class distributions.
+
+```python
+alpha = [[1, 2, 3], [4, 5, 6]] # Shape [2, 3]
+dist = DirichletMultinomial(alpha)
+
+# counts will be broadcast to [[11, 22, 33], [11, 22, 33]] to match alpha.
+counts = [11, 22, 33]
+dist.pdf(counts) # Shape [2]
+```
+- - -
+
+#### `tf.contrib.distributions.DirichletMultinomial.__init__(alpha)` {#DirichletMultinomial.__init__}
+
+Initialize a batch of DirichletMultinomial distributions.
+
+##### Args:
+
+
+* <b>`alpha`</b>: Shape `[N1,..., Nn, k]` positive `float` or `double` tensor with
+ `n >= 0`. Defines this as a batch of `N1 x ... x Nn` different `k`
+ class Dirichlet multinomial distributions.
+
+
+* <b>`Examples`</b>:
+
+```python
+# Define 1-batch of 2-class Dirichlet multinomial distribution,
+# also known as a beta-binomial.
+dist = DirichletMultinomial([1.1, 2.0])
+
+# Define a 2-batch of 3-class distributions.
+dist = DirichletMultinomial([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
+```
+
+
+- - -
+
+#### `tf.contrib.distributions.DirichletMultinomial.alpha` {#DirichletMultinomial.alpha}
+
+Parameters defining this distribution.
+
+
+- - -
+
+#### `tf.contrib.distributions.DirichletMultinomial.cdf(x)` {#DirichletMultinomial.cdf}
+
+
+
+
+- - -
+
+#### `tf.contrib.distributions.DirichletMultinomial.dtype` {#DirichletMultinomial.dtype}
+
+
+
+
+- - -
+
+#### `tf.contrib.distributions.DirichletMultinomial.log_cdf(x)` {#DirichletMultinomial.log_cdf}
+
+
+
+
+- - -
+
+#### `tf.contrib.distributions.DirichletMultinomial.log_pmf(counts, name=None)` {#DirichletMultinomial.log_pmf}
+
+`Log(P[counts])`, computed for every batch member.
+
+For each batch of counts `[c_1,...,c_k]`, `P[counts]` is the probability
+that after sampling `sum_j c_j` draws from this Dirichlet Multinomial
+distribution, the number of draws falling in class `j` is `c_j`. Note that
+different sequences of draws can result in the same counts, thus the
+probability includes a combinatorial coefficient.
+
+##### Args:
+
+
+* <b>`counts`</b>: Non-negative `float`, `double`, or `int` tensor whose shape can
+ be broadcast with `self.alpha`. For fixed leading dimensions, the last
+ dimension represents counts for the corresponding Dirichlet Multinomial
+ distribution in `self.alpha`.
+* <b>`name`</b>: Name to give this Op, defaults to "log_pmf".
+
+##### Returns:
+
+ Log probabilities for each record, shape `[N1,...,Nn]`.
+
+
+- - -
+
+#### `tf.contrib.distributions.DirichletMultinomial.mean` {#DirichletMultinomial.mean}
+
+Class means for every batch member.
+
+
+- - -
+
+#### `tf.contrib.distributions.DirichletMultinomial.num_classes` {#DirichletMultinomial.num_classes}
+
+Tensor providing number of classes in each batch member.
+
+
+- - -
+
+#### `tf.contrib.distributions.DirichletMultinomial.pmf(counts, name=None)` {#DirichletMultinomial.pmf}
+
+`P[counts]`, computed for every batch member.
+
+For each batch of counts `[c_1,...,c_k]`, `P[counts]` is the probability
+that after sampling `sum_j c_j` draws from this Dirichlet Multinomial
+distribution, the number of draws falling in class `j` is `c_j`. Note that
+different sequences of draws can result in the same counts, thus the
+probability includes a combinatorial coefficient.
+
+##### Args:
+
+
+* <b>`counts`</b>: Non-negative `float`, `double`, or `int` tensor whose shape can
+ be broadcast with `self.alpha`. For fixed leading dimensions, the last
+ dimension represents counts for the corresponding Dirichlet Multinomial
+ distribution in `self.alpha`.
+* <b>`name`</b>: Name to give this Op, defaults to "pmf".
+
+##### Returns:
+
+ Probabilities for each record, shape `[N1,...,Nn]`.
+
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.distributions.StudentT.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.distributions.StudentT.md
new file mode 100644
index 0000000000..816e5d5a83
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.distributions.StudentT.md
@@ -0,0 +1,245 @@
+Student's t distribution with degree-of-freedom parameter df.
+
+#### Mathematical details
+
+The PDF of this distribution is:
+
+`f(t) = gamma((df+1)/2)/sqrt(df*pi)/gamma(df/2)*(1+t^2/df)^(-(df+1)/2)`
+
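As a hedged, plain-Python check of this formula (`student_t_pdf` is a hypothetical helper built on `math.lgamma`, not part of the TensorFlow API):

```python
from math import lgamma, log, exp, pi

def student_t_pdf(t, df):
    # f(t) = gamma((df+1)/2) / (sqrt(df*pi) * gamma(df/2)) * (1 + t^2/df)^(-(df+1)/2)
    log_norm = lgamma((df + 1.0) / 2.0) - 0.5 * log(df * pi) - lgamma(df / 2.0)
    return exp(log_norm - (df + 1.0) / 2.0 * log(1.0 + t * t / df))

# For df = 1 this reduces to the Cauchy density 1 / (pi * (1 + t^2)).
print(student_t_pdf(0.5, 1.0))
```
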
+#### Examples
+
+Examples of initialization of one or a batch of distributions.
+
+```python
+# Define a single scalar Student t distribution.
+single_dist = tf.contrib.distributions.StudentT(df=3)
+
+# Evaluate the pdf at 1, returning a scalar Tensor.
+single_dist.pdf(1.)
+
+# Define a batch of two scalar valued Student t's.
+# The first has degrees of freedom 2, mean 1, and scale 11.
+# The second has degrees of freedom 3, mean 2, and scale 22.
+multi_dist = tf.contrib.distributions.StudentT(df=[2, 3],
+ mu=[1, 2.],
+ sigma=[11, 22.])
+
+# Evaluate the pdf of the first distribution on 0, and the second on 1.5,
+# returning a length two tensor.
+multi_dist.pdf([0, 1.5])
+
+# Get 3 samples, returning a 3 x 2 tensor.
+multi_dist.sample(3)
+```
+
+Arguments are broadcast when possible.
+
+```python
+# Define a batch of two Student's t distributions.
+# Both have df 2 and mean 1, but different scales.
+dist = tf.contrib.distributions.StudentT(df=2, mu=1, sigma=[11, 22.])
+
+# Evaluate the pdf of both distributions on the same point, 3.0,
+# returning a length 2 tensor.
+dist.pdf(3.0)
+```
+- - -
+
+#### `tf.contrib.distributions.StudentT.__init__(df, mu, sigma, name='StudentT')` {#StudentT.__init__}
+
+Construct Student's t distributions.
+
+The distributions have degree of freedom `df`, mean `mu`, and scale `sigma`.
+
+The parameters `df`, `mu`, and `sigma` must be shaped in a way that supports
+broadcasting (e.g. `df + mu + sigma` is a valid operation).
+
+##### Args:
+
+
+* <b>`df`</b>: `float` or `double` tensor, the degrees of freedom of the
+ distribution(s). `df` must contain only positive values.
+* <b>`mu`</b>: `float` or `double` tensor, the means of the distribution(s).
+* <b>`sigma`</b>: `float` or `double` tensor, the scaling factor for the
+ distribution(s). `sigma` must contain only positive values.
+ Note that `sigma` is not the standard deviation of this distribution.
+* <b>`name`</b>: The name to give Ops created by the initializer.
+
+##### Raises:
+
+
+* <b>`TypeError`</b>: if mu and sigma are different dtypes.
+
+
+- - -
+
+#### `tf.contrib.distributions.StudentT.batch_shape(name='batch_shape')` {#StudentT.batch_shape}
+
+
+
+
+- - -
+
+#### `tf.contrib.distributions.StudentT.cdf(value, name='cdf')` {#StudentT.cdf}
+
+Cumulative distribution function.
+
+
+- - -
+
+#### `tf.contrib.distributions.StudentT.df` {#StudentT.df}
+
+Degrees of freedom in these Student's t distribution(s).
+
+
+- - -
+
+#### `tf.contrib.distributions.StudentT.dtype` {#StudentT.dtype}
+
+
+
+
+- - -
+
+#### `tf.contrib.distributions.StudentT.entropy(name='entropy')` {#StudentT.entropy}
+
+The entropy of Student t distribution(s).
+
+##### Args:
+
+
+* <b>`name`</b>: The name to give this op.
+
+##### Returns:
+
+
+* <b>`entropy`</b>: tensor of dtype `dtype`, the entropy.
+
+
+- - -
+
+#### `tf.contrib.distributions.StudentT.event_shape(name='event_shape')` {#StudentT.event_shape}
+
+
+
+
+- - -
+
+#### `tf.contrib.distributions.StudentT.get_batch_shape()` {#StudentT.get_batch_shape}
+
+
+
+
+- - -
+
+#### `tf.contrib.distributions.StudentT.get_event_shape()` {#StudentT.get_event_shape}
+
+
+
+
+- - -
+
+#### `tf.contrib.distributions.StudentT.is_reparameterized` {#StudentT.is_reparameterized}
+
+
+
+
+- - -
+
+#### `tf.contrib.distributions.StudentT.log_cdf(value, name='log_cdf')` {#StudentT.log_cdf}
+
+Log CDF.
+
+
+- - -
+
+#### `tf.contrib.distributions.StudentT.log_pdf(x, name='log_pdf')` {#StudentT.log_pdf}
+
+Log pdf of observations in `x` under these Student's t-distribution(s).
+
+##### Args:
+
+
+* <b>`x`</b>: tensor of dtype `dtype`, must be broadcastable with `mu` and `df`.
+* <b>`name`</b>: The name to give this op.
+
+##### Returns:
+
+
+* <b>`log_pdf`</b>: tensor of dtype `dtype`, the log-PDFs of `x`.
+
+
+- - -
+
+#### `tf.contrib.distributions.StudentT.mean` {#StudentT.mean}
+
+
+
+
+- - -
+
+#### `tf.contrib.distributions.StudentT.mu` {#StudentT.mu}
+
+Locations of these Student's t distribution(s).
+
+
+- - -
+
+#### `tf.contrib.distributions.StudentT.name` {#StudentT.name}
+
+
+
+
+- - -
+
+#### `tf.contrib.distributions.StudentT.pdf(x, name='pdf')` {#StudentT.pdf}
+
+The PDF of observations in `x` under these Student's t distribution(s).
+
+##### Args:
+
+
+* <b>`x`</b>: tensor of dtype `dtype`, must be broadcastable with `df`, `mu`, and
+ `sigma`.
+* <b>`name`</b>: The name to give this op.
+
+##### Returns:
+
+
+* <b>`pdf`</b>: tensor of dtype `dtype`, the pdf values of `x`.
+
+
+- - -
+
+#### `tf.contrib.distributions.StudentT.sample(n, seed=None, name='sample')` {#StudentT.sample}
+
+Sample `n` observations from the Student t Distributions.
+
+##### Args:
+
+
+* <b>`n`</b>: `Scalar`, type int32, the number of observations to sample.
+* <b>`seed`</b>: Python integer, the random seed.
+* <b>`name`</b>: The name to give this op.
+
+##### Returns:
+
+
+* <b>`samples`</b>: a `Tensor` of shape `(n,) + self.batch_shape + self.event_shape`
+ with values of type `self.dtype`.
+
+
+- - -
+
+#### `tf.contrib.distributions.StudentT.sigma` {#StudentT.sigma}
+
+Scaling factors of these Student's t distribution(s).
+
+
+- - -
+
+#### `tf.contrib.distributions.StudentT.variance` {#StudentT.variance}
+
+
+
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.distributions.Uniform.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.distributions.Uniform.md
deleted file mode 100644
index ad6008c9f6..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.distributions.Uniform.md
+++ /dev/null
@@ -1,216 +0,0 @@
-Uniform distribution with `a` and `b` parameters.
-
-The PDF of this distribution is constant between [`a`, `b`], and 0 elsewhere.
-- - -
-
-#### `tf.contrib.distributions.Uniform.__init__(a=0.0, b=1.0, name='Uniform')` {#Uniform.__init__}
-
-Construct Uniform distributions with `a` and `b`.
-
-The parameters `a` and `b` must be shaped in a way that supports
-broadcasting (e.g. `b - a` is a valid operation).
-
-Here are examples without broadcasting:
-
-```python
-# Without broadcasting
-u1 = Uniform(3.0, 4.0) # a single uniform distribution [3, 4]
-u2 = Uniform([1.0, 2.0], [3.0, 4.0]) # 2 distributions [1, 3], [2, 4]
-u3 = Uniform([[1.0, 2.0],
- [3.0, 4.0]],
- [[1.5, 2.5],
- [3.5, 4.5]]) # 4 distributions
-```
-
-And with broadcasting:
-
-```python
-u1 = Uniform(3.0, [5.0, 6.0, 7.0]) # 3 distributions
-```
-
-##### Args:
-
-
-* <b>`a`</b>: `float` or `double` tensor, the minimum endpoint.
-* <b>`b`</b>: `float` or `double` tensor, the maximum endpoint. Must be > `a`.
-* <b>`name`</b>: The name to prefix Ops created by this distribution class.
-
-##### Raises:
-
-
-* <b>`InvalidArgumentError`</b>: if `a >= b`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Uniform.a` {#Uniform.a}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.Uniform.b` {#Uniform.b}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.Uniform.batch_shape(name='batch_shape')` {#Uniform.batch_shape}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.Uniform.cdf(x, name='cdf')` {#Uniform.cdf}
-
-CDF of observations in `x` under these Uniform distribution(s).
-
-##### Args:
-
-
-* <b>`x`</b>: tensor of dtype `dtype`, must be broadcastable with `a` and `b`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`cdf`</b>: tensor of dtype `dtype`, the CDFs of `x`. If `x` is `nan`, will
- return `nan`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Uniform.dtype` {#Uniform.dtype}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.Uniform.entropy(name='entropy')` {#Uniform.entropy}
-
-The entropy of Uniform distribution(s).
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`entropy`</b>: tensor of dtype `dtype`, the entropy.
-
-
-- - -
-
-#### `tf.contrib.distributions.Uniform.event_shape(name='event_shape')` {#Uniform.event_shape}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.Uniform.get_batch_shape()` {#Uniform.get_batch_shape}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.Uniform.get_event_shape()` {#Uniform.get_event_shape}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.Uniform.is_reparameterized` {#Uniform.is_reparameterized}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.Uniform.log_cdf(x, name='log_cdf')` {#Uniform.log_cdf}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.Uniform.log_pdf(x, name='log_pdf')` {#Uniform.log_pdf}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.Uniform.mean` {#Uniform.mean}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.Uniform.name` {#Uniform.name}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.Uniform.pdf(x, name='pdf')` {#Uniform.pdf}
-
-The PDF of observations in `x` under these Uniform distribution(s).
-
-##### Args:
-
-
-* <b>`x`</b>: tensor of dtype `dtype`, must be broadcastable with `a` and `b`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`pdf`</b>: tensor of dtype `dtype`, the pdf values of `x`. If `x` is `nan`, will
- return `nan`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Uniform.range` {#Uniform.range}
-
-`b - a`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Uniform.sample(n, seed=None, name='sample')` {#Uniform.sample}
-
-Sample `n` observations from the Uniform Distributions.
-
-##### Args:
-
-
-* <b>`n`</b>: `Scalar`, type int32, the number of observations to sample.
-* <b>`seed`</b>: Python integer, the random seed.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` of shape `(n,) + self.batch_shape + self.event_shape`
- with values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Uniform.variance` {#Uniform.variance}
-
-
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.layers.sum_regularizer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.layers.sum_regularizer.md
deleted file mode 100644
index ee05583b04..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.layers.sum_regularizer.md
+++ /dev/null
@@ -1,14 +0,0 @@
-### `tf.contrib.layers.sum_regularizer(regularizer_list)` {#sum_regularizer}
-
-Returns a function that applies the sum of multiple regularizers.
-
-##### Args:
-
-
-* <b>`regularizer_list`</b>: A list of regularizers to apply.
-
-##### Returns:
-
- A function with signature `sum_reg(weights, name=None)` that applies the
- sum of all the input regularizers.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.layers.summarize_tensors.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.layers.summarize_tensors.md
new file mode 100644
index 0000000000..608999b437
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.layers.summarize_tensors.md
@@ -0,0 +1,4 @@
+### `tf.contrib.layers.summarize_tensors(tensors, summarizer=summarize_tensor)` {#summarize_tensors}
+
+Summarize a set of tensors.
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.learn.RunConfig.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.learn.RunConfig.md
new file mode 100644
index 0000000000..ffdf8703c0
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.learn.RunConfig.md
@@ -0,0 +1,47 @@
+This class specifies the configurations for a run.
+
+Parameters:
+ execution_mode: Runners use this flag to execute different tasks, like
+ training vs evaluation. 'all' (the default) executes both training and
+ eval.
+ master: TensorFlow master. Empty string (the default) for local.
+ task: Task id of the replica running the training (default: 0).
+ num_ps_replicas: Number of parameter server tasks to use (default: 0).
+ training_worker_session_startup_stagger_secs: Seconds to sleep between the
+ startup of each worker task session (default: 5).
+ training_worker_max_startup_secs: Max seconds to wait before starting any
+ worker (default: 60).
+ eval_delay_secs: Number of seconds between the beginning of each eval run.
+ If one run takes more than this amount of time, the next run will start
+ immediately once that run completes (default 60).
+ eval_steps: Number of steps to run in each eval (default: 100).
+ num_cores: Number of cores to be used (default: 4).
+ verbose: Controls the verbosity, possible values:
+ 0: the algorithm and debug information is muted.
+ 1: trainer prints the progress.
+ 2: log device placement is printed.
+ gpu_memory_fraction: Fraction of GPU memory used by the process on
+ each GPU uniformly on the same machine.
+ tf_random_seed: Random seed for TensorFlow initializers.
+ Setting this value allows consistency between reruns.
+ keep_checkpoint_max: The maximum number of recent checkpoint files to keep.
+ As new files are created, older files are deleted.
+ If None or 0, all checkpoint files are kept.
+    Defaults to 5 (that is, the 5 most recent checkpoint files are kept).
+ keep_checkpoint_every_n_hours: Number of hours between each checkpoint
+ to be saved. The default value of 10,000 hours effectively disables
+ the feature.
+
+Attributes:
+ tf_master: Tensorflow master.
+ tf_config: Tensorflow Session Config proto.
+ tf_random_seed: Tensorflow random seed.
+ keep_checkpoint_max: Maximum number of checkpoints to keep.
+ keep_checkpoint_every_n_hours: Number of hours between each checkpoint.
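The `eval_delay_secs` behavior described above can be sketched as a simple scheduling loop (an illustrative sketch with an injected clock and sleep for testability, not the actual runner implementation):

```python
import time

def run_periodic_evals(run_eval, eval_delay_secs, num_evals,
                       clock=time.monotonic, sleep=time.sleep):
    # Begin each eval eval_delay_secs after the previous one began;
    # if an eval overruns that budget, start the next one immediately.
    for _ in range(num_evals):
        started = clock()
        run_eval()
        remaining = eval_delay_secs - (clock() - started)
        if remaining > 0:
            sleep(remaining)
```
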
+- - -
+
+#### `tf.contrib.learn.RunConfig.__init__(execution_mode='all', master='', task=0, num_ps_replicas=0, training_worker_session_startup_stagger_secs=5, training_worker_max_startup_secs=60, eval_delay_secs=60, eval_steps=100, num_cores=4, verbose=1, gpu_memory_fraction=1, tf_random_seed=42, keep_checkpoint_max=5, keep_checkpoint_every_n_hours=10000)` {#RunConfig.__init__}
+
+
+
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.learn.TensorFlowEstimator.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.learn.TensorFlowEstimator.md
new file mode 100644
index 0000000000..c3270290b9
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.learn.TensorFlowEstimator.md
@@ -0,0 +1,295 @@
+Base class for all TensorFlow estimators.
+
+Parameters:
+ model_fn: Model function, that takes input X, y tensors and outputs
+ prediction and loss tensors.
+ n_classes: Number of classes in the target.
+ batch_size: Mini batch size.
+ steps: Number of steps to run over data.
+ optimizer: Optimizer name (or class), for example "SGD", "Adam",
+ "Adagrad".
+  learning_rate: If this is a constant float value, no decay function is
+    used. Alternatively, a customized decay function can be passed that
+    accepts global_step as a parameter and returns a Tensor,
+    e.g. an exponential decay function:
+      def exp_decay(global_step):
+        return tf.train.exponential_decay(
+            learning_rate=0.1, global_step=global_step,
+            decay_steps=2, decay_rate=0.001)
+ clip_gradients: Clip norm of the gradients to this value to stop
+ gradient explosion.
+ class_weight: None or list of n_classes floats. Weight associated with
+ classes for loss computation. If not given, all classes are supposed to
+ have weight one.
+  continue_training: when continue_training is True, an already initialized
+    model will be trained further on every call of fit.
+ config: RunConfig object that controls the configurations of the
+ session, e.g. num_cores, gpu_memory_fraction, etc.
+ verbose: Controls the verbosity, possible values:
+ 0: the algorithm and debug information is muted.
+ 1: trainer prints the progress.
+ 2: log device placement is printed.
+- - -
+
+#### `tf.contrib.learn.TensorFlowEstimator.__init__(model_fn, n_classes, batch_size=32, steps=200, optimizer='Adagrad', learning_rate=0.1, clip_gradients=5.0, class_weight=None, continue_training=False, config=None, verbose=1)` {#TensorFlowEstimator.__init__}
+
+
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowEstimator.evaluate(x=None, y=None, input_fn=None, steps=None)` {#TensorFlowEstimator.evaluate}
+
+See base class.
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowEstimator.fit(x, y, steps=None, monitors=None, logdir=None)` {#TensorFlowEstimator.fit}
+
+Builds a neural network model using the provided `model_fn` and training
+data X and y.
+
+Note: the first call constructs the graph and initializes the
+variables. Subsequent calls continue training the same model.
+This logic follows the partial_fit() interface in scikit-learn.
+
+To restart learning, create a new estimator.
+
+##### Args:
+
+
+* <b>`x`</b>: matrix or tensor of shape [n_samples, n_features...]. Can be an
+    iterator that returns arrays of features. The training input
+    samples for fitting the model.
+
+* <b>`y`</b>: vector or matrix [n_samples] or [n_samples, n_outputs]. Can be an
+    iterator that returns arrays of targets. The training target values
+    (class labels in classification, real numbers in regression).
+
+* <b>`steps`</b>: int, number of steps to train.
+ If None or 0, train for `self.steps`.
+* <b>`monitors`</b>: List of `BaseMonitor` objects to print training progress and
+ invoke early stopping.
+* <b>`logdir`</b>: the directory to save the log file that can be used for
+ optional visualization.
+
+##### Returns:
+
+ Returns self.
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowEstimator.get_params(deep=True)` {#TensorFlowEstimator.get_params}
+
+Get parameters for this estimator.
+
+##### Args:
+
+
+* <b>`deep`</b>: boolean, optional
+ If True, will return the parameters for this estimator and
+ contained subobjects that are estimators.
+
+##### Returns:
+
+ params : mapping of string to any
+ Parameter names mapped to their values.
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowEstimator.get_tensor(name)` {#TensorFlowEstimator.get_tensor}
+
+Returns tensor by name.
+
+##### Args:
+
+
+* <b>`name`</b>: string, name of the tensor.
+
+##### Returns:
+
+ Tensor.
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowEstimator.get_tensor_value(name)` {#TensorFlowEstimator.get_tensor_value}
+
+Returns the value of the tensor given by name.
+
+##### Args:
+
+
+* <b>`name`</b>: string, name of the tensor.
+
+##### Returns:
+
+ Numpy array - value of the tensor.
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowEstimator.get_variable_names()` {#TensorFlowEstimator.get_variable_names}
+
+Returns list of all variable names in this model.
+
+##### Returns:
+
+ List of names.
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowEstimator.model_dir` {#TensorFlowEstimator.model_dir}
+
+
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowEstimator.partial_fit(x, y)` {#TensorFlowEstimator.partial_fit}
+
+Incremental fit on a batch of samples.
+
+This method is expected to be called several times consecutively
+on different or the same chunks of the dataset. It can be used to
+implement either iterative or out-of-core/online training.
+
+This is especially useful when the whole dataset is too big to
+fit in memory at once, or when the model takes a long time to
+converge and you want to split training into subparts.
+
+##### Args:
+
+
+* <b>`x`</b>: matrix or tensor of shape [n_samples, n_features...]. Can be an
+    iterator that returns arrays of features. The training input
+    samples for fitting the model.
+
+* <b>`y`</b>: vector or matrix [n_samples] or [n_samples, n_outputs]. Can be an
+    iterator that returns arrays of targets. The training target values
+    (class labels in classification, real numbers in regression).
+
+##### Returns:
+
+ Returns self.
+
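A hedged usage sketch of out-of-core training with partial_fit (the driver function and the chunk iterator are illustrative names, not part of the API):

```python
def train_out_of_core(estimator, chunk_iter):
    # Feed the dataset chunk by chunk; each partial_fit call continues
    # from the previous parameters instead of reinitializing the model.
    for x_chunk, y_chunk in chunk_iter:
        estimator.partial_fit(x_chunk, y_chunk)
    return estimator
```
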
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowEstimator.predict(x, axis=1, batch_size=None)` {#TensorFlowEstimator.predict}
+
+Predict class labels or regression values for X.
+
+For a classification model, the predicted class for each sample in X is
+returned. For a regression model, the predicted value based on X is
+returned.
+
+##### Args:
+
+
+* <b>`x`</b>: array-like matrix, [n_samples, n_features...] or iterator.
+* <b>`axis`</b>: Which axis to argmax for classification.
+ By default axis 1 (next after batch) is used.
+ Use 2 for sequence predictions.
+* <b>`batch_size`</b>: If test set is too big, use batch size to split
+ it into mini batches. By default the batch_size member
+ variable is used.
+
+##### Returns:
+
+
+* <b>`y`</b>: array of shape [n_samples]. The predicted classes or predicted
+ value.
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowEstimator.predict_proba(x, batch_size=None)` {#TensorFlowEstimator.predict_proba}
+
+Predict class probability of the input samples X.
+
+##### Args:
+
+
+* <b>`x`</b>: array-like matrix, [n_samples, n_features...] or iterator.
+* <b>`batch_size`</b>: If test set is too big, use batch size to split
+ it into mini batches. By default the batch_size member variable is used.
+
+##### Returns:
+
+
+* <b>`y`</b>: array of shape [n_samples, n_classes]. The predicted
+ probabilities for each class.
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowEstimator.restore(cls, path, config=None)` {#TensorFlowEstimator.restore}
+
+Restores the model from the given path.
+
+##### Args:
+
+
+* <b>`path`</b>: Path to the checkpoints and other model information.
+* <b>`config`</b>: RunConfig object that controls the configurations of the session,
+ e.g. num_cores, gpu_memory_fraction, etc. This is allowed to be
+ reconfigured.
+
+##### Returns:
+
+ Estimator, object of the subclass of TensorFlowEstimator.
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowEstimator.save(path)` {#TensorFlowEstimator.save}
+
+Saves checkpoints and graph to given path.
+
+##### Args:
+
+
+* <b>`path`</b>: Folder to save model to.
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowEstimator.set_params(**params)` {#TensorFlowEstimator.set_params}
+
+Set the parameters of this estimator.
+
+The method works on simple estimators as well as on nested objects
+(such as pipelines). The former have parameters of the form
+``<component>__<parameter>`` so that it's possible to update each
+component of a nested object.
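
The `<component>__<parameter>` addressing scheme can be sketched as follows (an illustrative sketch, not the scikit-learn or TensorFlow implementation):

```python
def set_nested_params(obj, **params):
    # A key such as "component__parameter" sets `parameter` on the
    # sub-object stored in the attribute `component`; plain keys set
    # attributes on obj directly.
    for key, value in params.items():
        if "__" in key:
            component, sub_key = key.split("__", 1)
            set_nested_params(getattr(obj, component), **{sub_key: value})
        else:
            setattr(obj, key, value)
    return obj
```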
+
+##### Returns:
+
+ self
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowEstimator.train(input_fn, steps, monitors=None)` {#TensorFlowEstimator.train}
+
+Trains a model given input builder function.
+
+##### Args:
+
+
+* <b>`input_fn`</b>: Input builder function, returns tuple of dicts or
+ dict and Tensor.
+* <b>`steps`</b>: number of steps to train model for.
+* <b>`monitors`</b>: List of `BaseMonitor` subclass instances. Used for callbacks
+ inside the training loop.
+
+##### Returns:
+
+ Returns self.
+
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.metrics.streaming_recall_at_k.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.metrics.streaming_recall_at_k.md
new file mode 100644
index 0000000000..dd03b95b69
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.metrics.streaming_recall_at_k.md
@@ -0,0 +1,52 @@
+### `tf.contrib.metrics.streaming_recall_at_k(predictions, labels, k, ignore_mask=None, metrics_collections=None, updates_collections=None, name=None)` {#streaming_recall_at_k}
+
+Computes the recall@k of the predictions with respect to dense labels.
+
+The `streaming_recall_at_k` function creates two local variables, `total` and
+`count`, that are used to compute the recall@k frequency. This frequency is
+ultimately returned as `recall_at_<k>`: an idempotent operation that simply
+divides `total` by `count`. To facilitate the estimation of recall@k over a
+stream of data, the function utilizes two operations. First, an `in_top_k`
+operation computes a tensor with shape [batch_size] whose elements indicate
+whether or not the corresponding label is in the top `k` predictions of the
+`predictions` `Tensor`. Second, an `update_op` operation whose behavior is
+dependent on the value of `ignore_mask`. If `ignore_mask` is None, then
+`update_op` increments `total` with the number of elements of `in_top_k` that
+are set to `True` and increments `count` with the batch size. If `ignore_mask`
+is not `None`, then `update_op` increments `total` with the number of elements
+in `in_top_k` that are `True` whose corresponding element in `ignore_mask` is
+`False`. In addition to performing the updates, `update_op` also returns the
+recall value.
+
+##### Args:
+
+
+* <b>`predictions`</b>: A floating point tensor of dimension [batch_size, num_classes].
+* <b>`labels`</b>: A tensor of dimension [batch_size] whose type is `int32` or
+  `int64`.
+* <b>`k`</b>: The number of top elements to look at for computing recall.
+* <b>`ignore_mask`</b>: An optional, binary tensor whose size matches `labels`. If an
+  element of `ignore_mask` is `True`, the corresponding prediction and label
+  pair is ignored. Otherwise, the pair is used to compute the metric.
+* <b>`metrics_collections`</b>: An optional list of collections that `recall_at_k`
+ should be added to.
+* <b>`updates_collections`</b>: An optional list of collections `update_op` should be
+ added to.
+* <b>`name`</b>: An optional variable_op_scope name.
+
+##### Returns:
+
+
+* <b>`recall_at_k`</b>: A tensor representing the recall@k, the fraction of labels
+ which fall into the top `k` predictions.
+* <b>`update_op`</b>: An operation that increments the `total` and `count` variables
+ appropriately and whose value matches `recall_at_k`.
+
+##### Raises:
+
+
+* <b>`ValueError`</b>: If the dimensions of `predictions` and `labels` don't match or
+ if `ignore_mask` is not `None` and its shape doesn't match `predictions`
+ or if either `metrics_collections` or `updates_collections` are not a list
+ or tuple.
+
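The bookkeeping described above (for the `ignore_mask=None` case) can be sketched in plain Python. This is an illustrative re-implementation of the streaming semantics, not the TensorFlow op; the class and function names are made up for the example:

```python
def in_top_k(predictions, labels, k):
    """For each row, True iff labels[i] is among the k largest predictions[i]."""
    hits = []
    for row, label in zip(predictions, labels):
        top_k = sorted(range(len(row)), key=lambda j: row[j], reverse=True)[:k]
        hits.append(label in top_k)
    return hits

class StreamingRecallAtK:
    """Accumulates `total` and `count` across batches, as described above."""
    def __init__(self, k):
        self.k, self.total, self.count = k, 0, 0

    def update(self, predictions, labels):
        hits = in_top_k(predictions, labels, self.k)
        self.total += sum(hits)           # number of True elements of in_top_k
        self.count += len(labels)         # batch size
        return self.total / self.count    # update_op also returns the recall

metric = StreamingRecallAtK(k=2)
recall = metric.update([[0.1, 0.8, 0.1], [0.6, 0.3, 0.1]], [1, 2])
```

Here the first label (1) is in the top 2 predictions of its row and the second label (2) is not, so the running recall is 0.5.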
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.metrics.streaming_root_mean_squared_error.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.metrics.streaming_root_mean_squared_error.md
deleted file mode 100644
index 85319f44dd..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.metrics.streaming_root_mean_squared_error.md
+++ /dev/null
@@ -1,48 +0,0 @@
-### `tf.contrib.metrics.streaming_root_mean_squared_error(predictions, labels, weights=None, metrics_collections=None, updates_collections=None, name=None)` {#streaming_root_mean_squared_error}
-
-Computes the root mean squared error between the labels and predictions.
-
-The `streaming_root_mean_squared_error` function creates two local variables,
-`total` and `count` that are used to compute the root mean squared error.
-This average is ultimately returned as `root_mean_squared_error`: an
-idempotent operation that takes the square root of the division of `total`
-by `count`. To facilitate the estimation of the root mean squared error over a
-stream of data, the function utilizes two operations. First, a `squared_error`
-operation computes the element-wise square of the difference between
-`predictions` and `labels`. Second, an `update_op` operation whose behavior is
-dependent on the value of `weights`. If `weights` is None, then `update_op`
-increments `total` with the reduced sum of `squared_error` and increments
-`count` with the number of elements in `squared_error`. If `weights` is not
-`None`, then `update_op` increments `total` with the reduced sum of the
-product of `weights` and `squared_error` and increments `count` with the
-reduced sum of `weights`. In addition to performing the updates, `update_op`
-also returns the `root_mean_squared_error` value.
-
-##### Args:
-
-
-* <b>`predictions`</b>: A `Tensor` of arbitrary shape.
-* <b>`labels`</b>: A `Tensor` of the same shape as `predictions`.
-* <b>`weights`</b>: An optional set of weights of the same shape as `predictions`. If
- `weights` is not None, the function computes a weighted mean.
-* <b>`metrics_collections`</b>: An optional list of collections that
- `root_mean_squared_error` should be added to.
-* <b>`updates_collections`</b>: An optional list of collections that `update_op` should
- be added to.
-* <b>`name`</b>: An optional variable_op_scope name.
-
-##### Returns:
-
-
-* <b>`root_mean_squared_error`</b>: A tensor representing the current root mean
-  squared error: the square root of `total` divided by `count`.
-* <b>`update_op`</b>: An operation that increments the `total` and `count` variables
- appropriately and whose value matches `root_mean_squared_error`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `weights` is not `None` and its shape doesn't match
- `predictions` or if either `metrics_collections` or `updates_collections`
- are not a list or tuple.
-
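The unweighted update rule described above can be sketched in plain Python (an illustrative re-implementation of the streaming semantics, with invented names, not the TensorFlow op):

```python
import math

class StreamingRMSE:
    """`total` accumulates squared errors, `count` accumulates element
    counts; the metric value is sqrt(total / count)."""
    def __init__(self):
        self.total, self.count = 0.0, 0

    def update(self, predictions, labels):
        squared_error = [(p - l) ** 2 for p, l in zip(predictions, labels)]
        self.total += sum(squared_error)
        self.count += len(squared_error)
        # update_op also returns the root_mean_squared_error value
        return math.sqrt(self.total / self.count)

rmse = StreamingRMSE()
r = rmse.update([1.0, 2.0], [1.0, 4.0])   # squared errors 0 and 4
```

After one batch, `total` is 4 and `count` is 2, so the running RMSE is sqrt(2).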
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.metrics.streaming_sparse_precision_at_k.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.metrics.streaming_sparse_precision_at_k.md
deleted file mode 100644
index ad24dd742a..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.metrics.streaming_sparse_precision_at_k.md
+++ /dev/null
@@ -1,60 +0,0 @@
-### `tf.contrib.metrics.streaming_sparse_precision_at_k(predictions, labels, k, class_id=None, ignore_mask=None, metrics_collections=None, updates_collections=None, name=None)` {#streaming_sparse_precision_at_k}
-
-Computes precision@k of the predictions with respect to sparse labels.
-
-If `class_id` is specified, we calculate precision by considering only the
- entries in the batch for which `class_id` is in the top-k highest
- `predictions`, and computing the fraction of them for which `class_id` is
- indeed a correct label.
-If `class_id` is not specified, we'll calculate precision as how often on
- average a class among the top-k classes with the highest predicted values
- of a batch entry is correct and can be found in the label for that entry.
-
-`streaming_sparse_precision_at_k` creates two local variables,
-`true_positive_at_<k>` and `false_positive_at_<k>`, that are used to compute
-the precision@k frequency. This frequency is ultimately returned as
-`precision_at_<k>`: an idempotent operation that simply divides
-`true_positive_at_<k>` by the total (`true_positive_at_<k>` + `false_positive_at_<k>`). To
-facilitate the estimation of precision@k over a stream of data, the function
-utilizes three steps.
-* A `top_k` operation computes a tensor whose elements indicate the top `k`
- predictions of the `predictions` `Tensor`.
-* Set operations are applied to `top_k` and `labels` to calculate true
- positives and false positives.
-* An `update_op` operation increments `true_positive_at_<k>` and
-  `false_positive_at_<k>`. It also returns the precision value.
-
-##### Args:
-
-
-* <b>`predictions`</b>: Float `Tensor` with shape [D1, ... DN, num_classes] where
- N >= 1. Commonly, N=1 and predictions has shape [batch size, num_classes].
- The final dimension contains the logit values for each class. [D1, ... DN]
- must match `labels`.
-* <b>`labels`</b>: `int64` `Tensor` or `SparseTensor` with shape
- [D1, ... DN, num_labels], where N >= 1 and num_labels is the number of
- target classes for the associated prediction. Commonly, N=1 and `labels`
- has shape [batch_size, num_labels]. [D1, ... DN] must match
- `predictions_idx`. Values should be in range [0, num_classes], where
- num_classes is the last dimension of `predictions`.
-* <b>`k`</b>: Integer, k for @k metric.
-* <b>`class_id`</b>: Integer class ID for which we want binary metrics. This should be
- in range [0, num_classes], where num_classes is the last dimension of
- `predictions`.
-* <b>`ignore_mask`</b>: An optional, binary tensor whose shape is broadcastable to
-  the first [D1, ... DN] dimensions of `predictions_idx` and `labels`.
-* <b>`metrics_collections`</b>: An optional list of collections that values should
- be added to.
-* <b>`updates_collections`</b>: An optional list of collections that updates should
- be added to.
-* <b>`name`</b>: Name of new update operation, and namespace for other dependant ops.
-
-##### Returns:
-
-
-* <b>`precision`</b>: Scalar `float64` `Tensor` with the value of `true_positives`
- divided by the sum of `true_positives` and `false_positives`.
-* <b>`update_op`</b>: `Operation` that increments `true_positives` and
- `false_positives` variables appropriately, and whose value matches
- `precision`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.control_dependencies.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.control_dependencies.md
new file mode 100644
index 0000000000..070f8788e5
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.control_dependencies.md
@@ -0,0 +1,20 @@
+### `tf.control_dependencies(control_inputs)` {#control_dependencies}
+
+Wrapper for `Graph.control_dependencies()` using the default graph.
+
+See [`Graph.control_dependencies()`](../../api_docs/python/framework.md#Graph.control_dependencies)
+for more details.
+
+##### Args:
+
+
+* <b>`control_inputs`</b>: A list of `Operation` or `Tensor` objects which
+ must be executed or computed before running the operations
+ defined in the context. Can also be `None` to clear the control
+ dependencies.
+
+##### Returns:
+
+ A context manager that specifies control dependencies for all
+ operations constructed within the context.
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.decode_csv.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.decode_csv.md
deleted file mode 100644
index f2ebf6945b..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.decode_csv.md
+++ /dev/null
@@ -1,26 +0,0 @@
-### `tf.decode_csv(records, record_defaults, field_delim=None, name=None)` {#decode_csv}
-
-Convert CSV records to tensors. Each column maps to one tensor.
-
-RFC 4180 format is expected for the CSV records.
-(https://tools.ietf.org/html/rfc4180)
-Note that we allow leading and trailing spaces with int or float fields.
-
-##### Args:
-
-
-* <b>`records`</b>: A `Tensor` of type `string`.
- Each string is a record/row in the csv and all records should have
- the same format.
-* <b>`record_defaults`</b>: A list of `Tensor` objects with types from: `float32`, `int32`, `int64`, `string`.
- One tensor per column of the input record, with either a
- scalar default value for that column or empty if the column is required.
-* <b>`field_delim`</b>: An optional `string`. Defaults to `","`.
- delimiter to separate fields in a record.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A list of `Tensor` objects. Has the same type as `record_defaults`.
- Each tensor will have the same shape as records.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.decode_raw.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.decode_raw.md
new file mode 100644
index 0000000000..125c15d9a8
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.decode_raw.md
@@ -0,0 +1,23 @@
+### `tf.decode_raw(bytes, out_type, little_endian=None, name=None)` {#decode_raw}
+
+Reinterpret the bytes of a string as a vector of numbers.
+
+##### Args:
+
+
+* <b>`bytes`</b>: A `Tensor` of type `string`.
+ All the elements must have the same length.
+* <b>`out_type`</b>: A `tf.DType` from: `tf.float32, tf.float64, tf.int32, tf.uint8, tf.int16, tf.int8, tf.int64`.
+* <b>`little_endian`</b>: An optional `bool`. Defaults to `True`.
+ Whether the input `bytes` are in little-endian order.
+ Ignored for `out_type` values that are stored in a single byte like
+ `uint8`.
+* <b>`name`</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor` of type `out_type`.
+ A Tensor with one more dimension than the input `bytes`. The
+ added dimension will have size equal to the length of the elements
+ of `bytes` divided by the number of bytes to represent `out_type`.
+
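The reinterpretation described above can be mimicked in plain Python with the standard `struct` module. This is only an illustration of the semantics (the function name and format-character parameter are invented), not the TensorFlow kernel:

```python
import struct

def decode_raw(byte_strings, fmt_char, little_endian=True):
    """Reinterpret each fixed-length byte string as a vector of numbers.
    fmt_char is a struct format code, e.g. 'i' for int32, 'f' for float32."""
    prefix = '<' if little_endian else '>'
    item_size = struct.calcsize(fmt_char)
    out = []
    for b in byte_strings:
        # output length = len(bytes) / number of bytes per out_type element
        n = len(b) // item_size
        out.append(list(struct.unpack(prefix + fmt_char * n, b)))
    return out

vals = decode_raw([b'\x01\x00\x00\x00\x02\x00\x00\x00'], 'i')
```

One input string of 8 bytes decoded as int32 yields one row of two values, mirroring the added dimension described above.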
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.errors.CancelledError.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.errors.CancelledError.md
deleted file mode 100644
index cf20c0e2e3..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.errors.CancelledError.md
+++ /dev/null
@@ -1,17 +0,0 @@
-Raised when an operation or step is cancelled.
-
-For example, a long-running operation (e.g.
-[`queue.enqueue()`](../../api_docs/python/io_ops.md#QueueBase.enqueue)) may be
-cancelled by running another operation (e.g.
-[`queue.close(cancel_pending_enqueues=True)`](../../api_docs/python/io_ops.md#QueueBase.close)),
-or by [closing the session](../../api_docs/python/client.md#Session.close).
-A step that is running such a long-running operation will fail by raising
-`CancelledError`.
-
-- - -
-
-#### `tf.errors.CancelledError.__init__(node_def, op, message)` {#CancelledError.__init__}
-
-Creates a `CancelledError`.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.errors.UnimplementedError.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.errors.UnimplementedError.md
new file mode 100644
index 0000000000..945daa1a22
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.errors.UnimplementedError.md
@@ -0,0 +1,15 @@
+Raised when an operation has not been implemented.
+
+Some operations may raise this error when passed otherwise-valid
+arguments that they do not currently support. For example, running
+the [`tf.nn.max_pool()`](../../api_docs/python/nn.md#max_pool) operation
+would raise this error if pooling was requested on the batch dimension,
+because this is not yet supported.
+
+- - -
+
+#### `tf.errors.UnimplementedError.__init__(node_def, op, message)` {#UnimplementedError.__init__}
+
+Creates an `UnimplementedError`.
+
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.expand_dims.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.expand_dims.md
deleted file mode 100644
index a188cda506..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.expand_dims.md
+++ /dev/null
@@ -1,50 +0,0 @@
-### `tf.expand_dims(input, dim, name=None)` {#expand_dims}
-
-Inserts a dimension of 1 into a tensor's shape.
-
-Given a tensor `input`, this operation inserts a dimension of 1 at the
-dimension index `dim` of `input`'s shape. The dimension index `dim` starts at
-zero; if you specify a negative number for `dim` it is counted backward from
-the end.
-
-This operation is useful if you want to add a batch dimension to a single
-element. For example, if you have a single image of shape `[height, width,
-channels]`, you can make it a batch of 1 image with `expand_dims(image, 0)`,
-which will make the shape `[1, height, width, channels]`.
-
-Other examples:
-
-```prettyprint
-# 't' is a tensor of shape [2]
-shape(expand_dims(t, 0)) ==> [1, 2]
-shape(expand_dims(t, 1)) ==> [2, 1]
-shape(expand_dims(t, -1)) ==> [2, 1]
-
-# 't2' is a tensor of shape [2, 3, 5]
-shape(expand_dims(t2, 0)) ==> [1, 2, 3, 5]
-shape(expand_dims(t2, 2)) ==> [2, 3, 1, 5]
-shape(expand_dims(t2, 3)) ==> [2, 3, 5, 1]
-```
-
-This operation requires that:
-
-`-1-input.dims() <= dim <= input.dims()`
-
-This operation is related to `squeeze()`, which removes dimensions of
-size 1.
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`.
-* <b>`dim`</b>: A `Tensor` of type `int32`.
- 0-D (scalar). Specifies the dimension index at which to
- expand the shape of `input`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `input`.
- Contains the same data as `input`, but its shape has an additional
- dimension of size 1 added.
-
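The shape rule above (`-1-input.dims() <= dim <= input.dims()`, with negative `dim` counted from the end) can be expressed as a small pure-Python helper. This is an illustration of the documented behavior, not TensorFlow code:

```python
def expand_dims_shape(shape, dim):
    """Shape of expand_dims(t, dim) for a tensor of the given shape."""
    ndims = len(shape)
    assert -1 - ndims <= dim <= ndims
    if dim < 0:
        dim += ndims + 1   # negative dim counts backward from the end
    return shape[:dim] + [1] + shape[dim:]

expand_dims_shape([2, 3, 5], 2)   # inserts the new size-1 dimension at index 2
```

This reproduces the examples above, e.g. `expand_dims_shape([2], -1)` gives `[2, 1]`.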
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.gather.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.gather.md
deleted file mode 100644
index f3ae59bbb6..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.gather.md
+++ /dev/null
@@ -1,35 +0,0 @@
-### `tf.gather(params, indices, validate_indices=None, name=None)` {#gather}
-
-Gather slices from `params` according to `indices`.
-
-`indices` must be an integer tensor of any dimension (usually 0-D or 1-D).
-Produces an output tensor with shape `indices.shape + params.shape[1:]` where:
-
- # Scalar indices
- output[:, ..., :] = params[indices, :, ... :]
-
- # Vector indices
- output[i, :, ..., :] = params[indices[i], :, ... :]
-
- # Higher rank indices
- output[i, ..., j, :, ... :] = params[indices[i, ..., j], :, ..., :]
-
-If `indices` is a permutation and `len(indices) == params.shape[0]` then
-this operation will permute `params` accordingly.
-
-<div style="width:70%; margin:auto; margin-bottom:10px; margin-top:20px;">
-<img style="width:100%" src="../../images/Gather.png" alt>
-</div>
-
-##### Args:
-
-
-* <b>`params`</b>: A `Tensor`.
-* <b>`indices`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
-* <b>`validate_indices`</b>: An optional `bool`. Defaults to `True`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `params`.
-
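The indexing rules above can be sketched with nested Python lists; scalar indices select a slice directly, and higher-rank indices apply the same rule recursively. An illustrative sketch, not the TensorFlow op:

```python
def gather(params, indices):
    """output[i, ...] = params[indices[i, ...]], per the rules above."""
    if isinstance(indices, int):            # scalar indices
        return params[indices]
    return [gather(params, i) for i in indices]   # vector / higher rank

rows = [[10, 11], [20, 21], [30, 31]]
gather(rows, [2, 0])   # picks row 2 then row 0
```

With a permutation of length `len(params)`, this reorders `params` exactly as the note above describes.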
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.get_collection.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.get_collection.md
new file mode 100644
index 0000000000..fc0044b490
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.get_collection.md
@@ -0,0 +1,25 @@
+### `tf.get_collection(key, scope=None)` {#get_collection}
+
+Wrapper for `Graph.get_collection()` using the default graph.
+
+See [`Graph.get_collection()`](../../api_docs/python/framework.md#Graph.get_collection)
+for more details.
+
+##### Args:
+
+
+* <b>`key`</b>: The key for the collection. For example, the `GraphKeys` class
+ contains many standard names for collections.
+* <b>`scope`</b>: (Optional.) If supplied, the resulting list is filtered to include
+  only items whose `name` attribute matches using `re.match`. Items
+  without a `name` attribute are never returned if a scope is supplied, and
+  the choice of `re.match` means that a `scope` without special tokens
+  filters by prefix.
+
+##### Returns:
+
+  The list of values in the collection with the given `key`, or
+  an empty list if no value has been added to that collection. The
+  list contains the values in the order in which they were
+  collected.
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.image.random_brightness.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.image.random_brightness.md
new file mode 100644
index 0000000000..6c773b6985
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.image.random_brightness.md
@@ -0,0 +1,25 @@
+### `tf.image.random_brightness(image, max_delta, seed=None)` {#random_brightness}
+
+Adjust the brightness of images by a random factor.
+
+Equivalent to `adjust_brightness()` using a `delta` randomly picked in the
+interval `[-max_delta, max_delta)`.
+
+##### Args:
+
+
+* <b>`image`</b>: An image.
+* <b>`max_delta`</b>: float, must be non-negative.
+* <b>`seed`</b>: A Python integer. Used to create a random seed. See
+ [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed)
+ for behavior.
+
+##### Returns:
+
+ The brightness-adjusted image.
+
+##### Raises:
+
+
+* <b>`ValueError`</b>: if `max_delta` is negative.
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.image.resize_bilinear.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.image.resize_bilinear.md
new file mode 100644
index 0000000000..a9580ca199
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.image.resize_bilinear.md
@@ -0,0 +1,24 @@
+### `tf.image.resize_bilinear(images, size, align_corners=None, name=None)` {#resize_bilinear}
+
+Resize `images` to `size` using bilinear interpolation.
+
+Input images can be of different types but output images are always float.
+
+##### Args:
+
+
+* <b>`images`</b>: A `Tensor`. Must be one of the following types: `uint8`, `int8`, `int16`, `int32`, `int64`, `half`, `float32`, `float64`.
+ 4-D with shape `[batch, height, width, channels]`.
+* <b>`size`</b>: A 1-D int32 Tensor of 2 elements: `new_height, new_width`. The
+ new size for the images.
+* <b>`align_corners`</b>: An optional `bool`. Defaults to `False`.
+  If true, rescale input by (new_height - 1) / (height - 1), which
+  exactly aligns the 4 corners of images and resized images. If false,
+  rescale by new_height / height. The width dimension is treated similarly.
+* <b>`name`</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor` of type `float32`. 4-D with shape
+ `[batch, new_height, new_width, channels]`.
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.image.resize_images.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.image.resize_images.md
new file mode 100644
index 0000000000..d010cac831
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.image.resize_images.md
@@ -0,0 +1,43 @@
+### `tf.image.resize_images(images, new_height, new_width, method=0, align_corners=False)` {#resize_images}
+
+Resize `images` to `new_width`, `new_height` using the specified `method`.
+
+Resized images will be distorted if their original aspect ratio is not
+the same as `new_width`, `new_height`. To avoid distortions see
+[`resize_image_with_crop_or_pad`](#resize_image_with_crop_or_pad).
+
+`method` can be one of:
+
+* <b>`ResizeMethod.BILINEAR`</b>: [Bilinear interpolation](https://en.wikipedia.org/wiki/Bilinear_interpolation)
+* <b>`ResizeMethod.NEAREST_NEIGHBOR`</b>: [Nearest neighbor interpolation](https://en.wikipedia.org/wiki/Nearest-neighbor_interpolation)
+* <b>`ResizeMethod.BICUBIC`</b>: [Bicubic interpolation](https://en.wikipedia.org/wiki/Bicubic_interpolation)
+* <b>`ResizeMethod.AREA`</b>: Area interpolation.
+
+##### Args:
+
+
+* <b>`images`</b>: 4-D Tensor of shape `[batch, height, width, channels]` or
+ 3-D Tensor of shape `[height, width, channels]`.
+* <b>`new_height`</b>: integer.
+* <b>`new_width`</b>: integer.
+* <b>`method`</b>: ResizeMethod. Defaults to `ResizeMethod.BILINEAR`.
+* <b>`align_corners`</b>: bool. If true, exactly align all 4 corners of the input and
+  output. Defaults to `False`.
+
+##### Raises:
+
+
+* <b>`ValueError`</b>: if the shape of `images` is incompatible with the
+  shape arguments to this function.
+* <b>`ValueError`</b>: if an unsupported resize method is specified.
+
+##### Returns:
+
+ If `images` was 4-D, a 4-D float Tensor of shape
+ `[batch, new_height, new_width, channels]`.
+ If `images` was 3-D, a 3-D float Tensor of shape
+ `[new_height, new_width, channels]`.
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.invert_permutation.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.invert_permutation.md
deleted file mode 100644
index b12cc7e94c..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.invert_permutation.md
+++ /dev/null
@@ -1,30 +0,0 @@
-### `tf.invert_permutation(x, name=None)` {#invert_permutation}
-
-Computes the inverse permutation of a tensor.
-
-This operation computes the inverse of an index permutation. It takes a 1-D
-integer tensor `x`, which represents the indices of a zero-based array, and
-swaps each value with its index position. In other words, for an output tensor
-`y` and an input tensor `x`, this operation computes the following:
-
-`y[x[i]] = i for i in [0, 1, ..., len(x) - 1]`
-
-The values must include 0. There can be no duplicate values or negative values.
-
-For example:
-
-```prettyprint
-# tensor `x` is [3, 4, 0, 2, 1]
-invert_permutation(x) ==> [2, 4, 3, 0, 1]
-```
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor` of type `int32`. 1-D.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `int32`. 1-D.
-
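The defining equation `y[x[i]] = i` translates directly into a few lines of plain Python (an illustration of the semantics, not the TensorFlow kernel):

```python
def invert_permutation(x):
    """Compute y such that y[x[i]] = i for every i, per the formula above."""
    y = [0] * len(x)
    for i, v in enumerate(x):
        y[v] = i
    return y

invert_permutation([3, 4, 0, 2, 1])   # the example from the docstring above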
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.listdiff.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.listdiff.md
deleted file mode 100644
index 1f04bd8d9e..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.listdiff.md
+++ /dev/null
@@ -1,40 +0,0 @@
-### `tf.listdiff(x, y, name=None)` {#listdiff}
-
-Computes the difference between two lists of numbers or strings.
-
-Given a list `x` and a list `y`, this operation returns a list `out` that
-represents all values that are in `x` but not in `y`. The returned list `out`
-is sorted in the same order that the numbers appear in `x` (duplicates are
-preserved). This operation also returns a list `idx` that represents the
-position of each `out` element in `x`. In other words:
-
-`out[i] = x[idx[i]] for i in [0, 1, ..., len(out) - 1]`
-
-For example, given this input:
-
-```prettyprint
-x = [1, 2, 3, 4, 5, 6]
-y = [1, 3, 5]
-```
-
-This operation would return:
-
-```prettyprint
-out ==> [2, 4, 6]
-idx ==> [1, 3, 5]
-```
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. 1-D. Values to keep.
-* <b>`y`</b>: A `Tensor`. Must have the same type as `x`. 1-D. Values to remove.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A tuple of `Tensor` objects (out, idx).
-
-* <b>`out`</b>: A `Tensor`. Has the same type as `x`. 1-D. Values present in `x` but not in `y`.
-* <b>`idx`</b>: A `Tensor` of type `int32`. 1-D. Positions of `x` values preserved in `out`.
-
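The `out`/`idx` relationship above (`out[i] = x[idx[i]]`, order and duplicates preserved) can be sketched in plain Python; this is an illustration of the semantics, not the TensorFlow op:

```python
def listdiff(x, y):
    """Return (out, idx): values in x but not in y, and their positions in x."""
    y_set = set(y)
    out, idx = [], []
    for i, v in enumerate(x):
        if v not in y_set:
            out.append(v)
            idx.append(i)
    return out, idx

listdiff([1, 2, 3, 4, 5, 6], [1, 3, 5])   # the example from the docstring above
```

This reproduces the documented example: `out` is `[2, 4, 6]` and `idx` is `[1, 3, 5]`.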
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.mul.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.mod.md
index 3d6fa56864..5bfe1058a7 100644
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.mul.md
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.mod.md
@@ -1,11 +1,11 @@
-### `tf.mul(x, y, name=None)` {#mul}
+### `tf.mod(x, y, name=None)` {#mod}
-Returns x * y element-wise.
+Returns element-wise remainder of division.
##### Args:
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `uint8`, `int8`, `int16`, `int32`, `int64`, `complex64`, `complex128`.
+* <b>`x`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`, `float32`, `float64`.
* <b>`y`</b>: A `Tensor`. Must have the same type as `x`.
* <b>`name`</b>: A name for the operation (optional).
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.compute_accidental_hits.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.compute_accidental_hits.md
deleted file mode 100644
index 9d5bb30303..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.compute_accidental_hits.md
+++ /dev/null
@@ -1,45 +0,0 @@
-### `tf.nn.compute_accidental_hits(true_classes, sampled_candidates, num_true, seed=None, name=None)` {#compute_accidental_hits}
-
-Compute the position ids in `sampled_candidates` matching `true_classes`.
-
-In Candidate Sampling, this operation facilitates virtually removing
-sampled classes which happen to match target classes. This is done
-in Sampled Softmax and Sampled Logistic.
-
-See our [Candidate Sampling Algorithms
-Reference](http://www.tensorflow.org/extras/candidate_sampling.pdf).
-
-We presuppose that the `sampled_candidates` are unique.
-
-We call it an 'accidental hit' when one of the target classes
-matches one of the sampled classes. This operation reports
-accidental hits as triples `(index, id, weight)`, where `index`
-represents the row number in `true_classes`, `id` represents the
-position in `sampled_candidates`, and `weight` is `-FLOAT_MAX`.
-
-The result of this op should be passed through a `sparse_to_dense`
-operation, then added to the logits of the sampled classes. This
-removes the contradictory effect of accidentally sampling the true
-target classes as noise classes for the same example.
-
-##### Args:
-
-
-* <b>`true_classes`</b>: A `Tensor` of type `int64` and shape `[batch_size,
- num_true]`. The target classes.
-* <b>`sampled_candidates`</b>: A tensor of type `int64` and shape `[num_sampled]`.
- The sampled_candidates output of CandidateSampler.
-* <b>`num_true`</b>: An `int`. The number of target classes per training example.
-* <b>`seed`</b>: An `int`. An operation-specific seed. Default is 0.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
-
-* <b>`indices`</b>: A `Tensor` of type `int32` and shape `[num_accidental_hits]`.
- Values indicate rows in `true_classes`.
-* <b>`ids`</b>: A `Tensor` of type `int64` and shape `[num_accidental_hits]`.
- Values indicate positions in `sampled_candidates`.
-* <b>`weights`</b>: A `Tensor` of type `float` and shape `[num_accidental_hits]`.
- Each value is `-FLOAT_MAX`.
-
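The triple-reporting behavior described above can be sketched in plain Python, assuming unique `sampled_candidates` as the doc requires. This is an illustration of the semantics (the `FLOAT_MAX` constant here is just float32 max), not the TensorFlow op:

```python
FLOAT_MAX = 3.4028235e38   # approximate float32 max, for illustration

def compute_accidental_hits(true_classes, sampled_candidates):
    """Report (index, id, weight) triples: index is the row in true_classes,
    id is the matching position in sampled_candidates, weight is -FLOAT_MAX."""
    pos = {c: j for j, c in enumerate(sampled_candidates)}  # assumed unique
    hits = []
    for i, row in enumerate(true_classes):
        for c in row:
            if c in pos:
                hits.append((i, pos[c], -FLOAT_MAX))
    return hits

hits = compute_accidental_hits([[1, 5], [2, 3]], [3, 7, 1])
```

Here class 1 in row 0 matches sampled position 2, and class 3 in row 1 matches sampled position 0.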
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.embedding_lookup_sparse.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.embedding_lookup_sparse.md
deleted file mode 100644
index 03997f7813..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.embedding_lookup_sparse.md
+++ /dev/null
@@ -1,66 +0,0 @@
-### `tf.nn.embedding_lookup_sparse(params, sp_ids, sp_weights, partition_strategy='mod', name=None, combiner='mean')` {#embedding_lookup_sparse}
-
-Computes embeddings for the given ids and weights.
-
-This op assumes that there is at least one id for each row in the dense tensor
-represented by sp_ids (i.e. there are no rows with empty features), and that
-all the indices of sp_ids are in canonical row-major order.
-
-It also assumes that all id values lie in the range [0, p0), where p0
-is the sum of the size of params along dimension 0.
-
-##### Args:
-
-
-* <b>`params`</b>: A single tensor representing the complete embedding tensor,
- or a list of P tensors all of same shape except for the first dimension,
- representing sharded embedding tensors.
-* <b>`sp_ids`</b>: N x M SparseTensor of int64 ids (typically from FeatureValueToId),
- where N is typically batch size and M is arbitrary.
-* <b>`sp_weights`</b>: either a SparseTensor of float / double weights, or None to
- indicate all weights should be taken to be 1. If specified, sp_weights
- must have exactly the same shape and indices as sp_ids.
-* <b>`partition_strategy`</b>: A string specifying the partitioning strategy, relevant
- if `len(params) > 1`. Currently `"div"` and `"mod"` are supported. Default
- is `"mod"`. See `tf.nn.embedding_lookup` for more details.
-* <b>`name`</b>: Optional name for the op.
-* <b>`combiner`</b>: A string specifying the reduction op. Currently "mean", "sqrtn"
- and "sum" are supported.
- "sum" computes the weighted sum of the embedding results for each row.
- "mean" is the weighted sum divided by the total weight.
- "sqrtn" is the weighted sum divided by the square root of the sum of the
- squares of the weights.
-
-##### Returns:
-
- A dense tensor representing the combined embeddings for the
- sparse ids. For each row in the dense tensor represented by sp_ids, the op
- looks up the embeddings for all ids in that row, multiplies them by the
- corresponding weight, and combines these embeddings as specified.
-
- In other words, if
- shape(combined params) = [p0, p1, ..., pm]
- and
- shape(sp_ids) = shape(sp_weights) = [d0, d1, ..., dn]
- then
- shape(output) = [d0, d1, ..., dn-1, p1, ..., pm].
-
- For instance, if params is a 10x20 matrix, and sp_ids / sp_weights are
-
- [0, 0]: id 1, weight 2.0
- [0, 1]: id 3, weight 0.5
- [1, 0]: id 0, weight 1.0
- [2, 3]: id 1, weight 3.0
-
- with combiner="mean", then the output will be a 3x20 matrix where
- output[0, :] = (params[1, :] * 2.0 + params[3, :] * 0.5) / (2.0 + 0.5)
- output[1, :] = params[0, :] * 1.0
- output[2, :] = params[1, :] * 3.0
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If sp_ids is not a SparseTensor, or if sp_weights is neither
- None nor SparseTensor.
-* <b>`ValueError`</b>: If combiner is not one of {"mean", "sqrtn", "sum"}.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.l2_normalize.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.l2_normalize.md
new file mode 100644
index 0000000000..fdcdd71e20
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.l2_normalize.md
@@ -0,0 +1,24 @@
+### `tf.nn.l2_normalize(x, dim, epsilon=1e-12, name=None)` {#l2_normalize}
+
+Normalizes along dimension `dim` using an L2 norm.
+
+For a 1-D tensor with `dim = 0`, computes
+
+ output = x / sqrt(max(sum(x**2), epsilon))
+
+For `x` with more dimensions, independently normalizes each 1-D slice along
+dimension `dim`.
+
+##### Args:
+
+
+* <b>`x`</b>: A `Tensor`.
+* <b>`dim`</b>: Dimension along which to normalize.
+* <b>`epsilon`</b>: A lower bound value for the norm. Will use `sqrt(epsilon)` as the
+ divisor if `norm < sqrt(epsilon)`.
+* <b>`name`</b>: A name for this operation (optional).
+
+##### Returns:
+
+ A `Tensor` with the same shape as `x`.
+
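The formula above can be sketched in NumPy. This is a hedged illustration of the documented semantics (including the `epsilon` lower bound on the squared norm), not the TF kernel itself:

```python
import numpy as np

def l2_normalize(x, dim, epsilon=1e-12):
    """Normalize x along axis `dim` with an L2 norm, per the formula above."""
    x = np.asarray(x, dtype=np.float64)
    square_sum = np.sum(np.square(x), axis=dim, keepdims=True)
    # Clamp the squared norm from below so all-zero slices divide by
    # sqrt(epsilon) instead of zero.
    norm = np.sqrt(np.maximum(square_sum, epsilon))
    return x / norm

v = l2_normalize([3.0, 4.0], dim=0)  # norm is 5, so v is [0.6, 0.8]
```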
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.log_softmax.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.log_softmax.md
deleted file mode 100644
index 18e1f96590..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.log_softmax.md
+++ /dev/null
@@ -1,19 +0,0 @@
-### `tf.nn.log_softmax(logits, name=None)` {#log_softmax}
-
-Computes log softmax activations.
-
-For each batch `i` and class `j` we have
-
- logsoftmax[i, j] = logits[i, j] - log(sum(exp(logits[i])))
-
-##### Args:
-
-
-* <b>`logits`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`.
- 2-D with shape `[batch_size, num_classes]`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `logits`. Same shape as `logits`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.normalize_moments.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.normalize_moments.md
new file mode 100644
index 0000000000..d7a6b9cab4
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.normalize_moments.md
@@ -0,0 +1,20 @@
+### `tf.nn.normalize_moments(counts, mean_ss, variance_ss, shift, name=None)` {#normalize_moments}
+
+Calculate the mean and variance based on the sufficient statistics.
+
+##### Args:
+
+
+* <b>`counts`</b>: A `Tensor` containing the total count of the data (one value).
+* <b>`mean_ss`</b>: A `Tensor` containing the mean sufficient statistics: the (possibly
+ shifted) sum of the elements to average over.
+* <b>`variance_ss`</b>: A `Tensor` containing the variance sufficient statistics: the
+ (possibly shifted) squared sum of the data to compute the variance over.
+* <b>`shift`</b>: A `Tensor` containing the value by which the data is shifted for
+ numerical stability, or `None` if no shift was performed.
+* <b>`name`</b>: Name used to scope the operations that compute the moments.
+
+##### Returns:
+
+ Two `Tensor` objects: `mean` and `variance`.
+
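The relationship between the shifted sufficient statistics and the returned moments can be sketched as follows. This is an assumption-laden NumPy illustration of the arithmetic described above, not the TF implementation:

```python
import numpy as np

def normalize_moments(counts, mean_ss, variance_ss, shift=None):
    """Recover mean and (population) variance from shifted sufficient stats."""
    shifted_mean = mean_ss / counts
    # Variance is invariant under the shift; undo the shift only for the mean.
    variance = variance_ss / counts - shifted_mean ** 2
    mean = shifted_mean if shift is None else shifted_mean + shift
    return mean, variance

data = np.array([1.0, 2.0, 3.0, 4.0])
shift = 2.0
m, v = normalize_moments(
    counts=data.size,
    mean_ss=np.sum(data - shift),
    variance_ss=np.sum((data - shift) ** 2),
    shift=shift,
)
# m is 2.5 and v is 1.25, matching np.mean(data) and np.var(data)
```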
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.separable_conv2d.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.separable_conv2d.md
deleted file mode 100644
index f4be03303f..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.separable_conv2d.md
+++ /dev/null
@@ -1,40 +0,0 @@
-### `tf.nn.separable_conv2d(input, depthwise_filter, pointwise_filter, strides, padding, name=None)` {#separable_conv2d}
-
-2-D convolution with separable filters.
-
-Performs a depthwise convolution that acts separately on channels followed by
-a pointwise convolution that mixes channels. Note that this is separability
-between dimensions `[1, 2]` and `3`, not spatial separability between
-dimensions `1` and `2`.
-
-In detail,
-
-    output[b, i, j, k] = sum_{di, dj, q, r}
- input[b, strides[1] * i + di, strides[2] * j + dj, q] *
- depthwise_filter[di, dj, q, r] *
- pointwise_filter[0, 0, q * channel_multiplier + r, k]
-
-`strides` controls the strides for the depthwise convolution only, since
-the pointwise convolution has implicit strides of `[1, 1, 1, 1]`. Must have
-`strides[0] = strides[3] = 1`. For the most common case of the same
-horizontal and vertical strides, `strides = [1, stride, stride, 1]`.
-
-##### Args:
-
-
-* <b>`input`</b>: 4-D `Tensor` with shape `[batch, in_height, in_width, in_channels]`.
-* <b>`depthwise_filter`</b>: 4-D `Tensor` with shape
- `[filter_height, filter_width, in_channels, channel_multiplier]`.
- Contains `in_channels` convolutional filters of depth 1.
-* <b>`pointwise_filter`</b>: 4-D `Tensor` with shape
- `[1, 1, channel_multiplier * in_channels, out_channels]`. Pointwise
- filter to mix channels after `depthwise_filter` has convolved spatially.
-* <b>`strides`</b>: 1-D of size 4. The strides for the depthwise convolution for
- each dimension of `input`.
-* <b>`padding`</b>: A string, either `'VALID'` or `'SAME'`. The padding algorithm.
-* <b>`name`</b>: A name for this operation (optional).
-
-##### Returns:
-
- A 4-D `Tensor` of shape `[batch, out_height, out_width, out_channels]`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.sparse_softmax_cross_entropy_with_logits.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.sparse_softmax_cross_entropy_with_logits.md
new file mode 100644
index 0000000000..6d53d84c5b
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.sparse_softmax_cross_entropy_with_logits.md
@@ -0,0 +1,38 @@
+### `tf.nn.sparse_softmax_cross_entropy_with_logits(logits, labels, name=None)` {#sparse_softmax_cross_entropy_with_logits}
+
+Computes sparse softmax cross entropy between `logits` and `labels`.
+
+Measures the probability error in discrete classification tasks in which the
+classes are mutually exclusive (each entry is in exactly one class). For
+example, each CIFAR-10 image is labeled with one and only one label: an image
+can be a dog or a truck, but not both.
+
+**NOTE:** For this operation, the probability of a given label is considered
+exclusive. That is, soft classes are not allowed, and the `labels` vector
+must provide a single specific index for the true class for each row of
+`logits` (each minibatch entry). For soft softmax classification with
+a probability distribution for each entry, see
+`softmax_cross_entropy_with_logits`.
+
+**WARNING:** This op expects unscaled logits, since it performs a softmax
+on `logits` internally for efficiency. Do not call this op with the
+output of `softmax`, as it will produce incorrect results.
+
+`logits` must have the shape `[batch_size, num_classes]`
+and dtype `float32` or `float64`.
+
+`labels` must have the shape `[batch_size]` and dtype `int32` or `int64`.
+
+##### Args:
+
+
+* <b>`logits`</b>: Unscaled log probabilities.
+* <b>`labels`</b>: Each entry `labels[i]` must be an index in `[0, num_classes)`. Other
+ values will result in a loss of 0, but incorrect gradient computations.
+* <b>`name`</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A 1-D `Tensor` of length `batch_size` of the same type as `logits` with the
+ softmax cross entropy loss.
+
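The loss described above can be sketched in NumPy, including the max-subtraction that makes the internal softmax numerically stable. A sketch of the math only, not the fused TF op:

```python
import numpy as np

def sparse_softmax_xent(logits, labels):
    """Per-row loss -log(softmax(logits)[label]), computed stably."""
    logits = np.asarray(logits, dtype=np.float64)
    shifted = logits - logits.max(axis=1, keepdims=True)  # stability shift
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    # Pick out the log-probability of the true class for each row.
    return -log_probs[np.arange(len(labels)), labels]

loss = sparse_softmax_xent([[2.0, 1.0, 0.1]], [0])  # about 0.417
```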
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.not_equal.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.not_equal.md
new file mode 100644
index 0000000000..9c18792223
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.not_equal.md
@@ -0,0 +1,15 @@
+### `tf.not_equal(x, y, name=None)` {#not_equal}
+
+Returns the truth value of (x != y) element-wise.
+
+##### Args:
+
+
+* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `uint8`, `int8`, `int16`, `int32`, `int64`, `complex64`, `quint8`, `qint8`, `qint32`, `string`, `bool`, `complex128`.
+* <b>`y`</b>: A `Tensor`. Must have the same type as `x`.
+* <b>`name`</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor` of type `bool`.
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.one_hot.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.one_hot.md
deleted file mode 100644
index eebb6ab643..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.one_hot.md
+++ /dev/null
@@ -1,129 +0,0 @@
-### `tf.one_hot(indices, depth, on_value=None, off_value=None, axis=None, dtype=None, name=None)` {#one_hot}
-
-Returns a one-hot tensor.
-
-The locations represented by indices in `indices` take value `on_value`,
-while all other locations take value `off_value`.
-
-`on_value` and `off_value` must have matching data types. If `dtype` is also
-provided, they must be the same data type as specified by `dtype`.
-
-If `on_value` is not provided, it will default to the value `1` with type
-`dtype`
-
-If `off_value` is not provided, it will default to the value `0` with type
-`dtype`
-
-If the input `indices` is rank `N`, the output will have rank `N+1`. The
-new axis is created at dimension `axis` (default: the new axis is appended
-at the end).
-
-If `indices` is a scalar the output shape will be a vector of length `depth`
-
-If `indices` is a vector of length `features`, the output shape will be:
-```
- features x depth if axis == -1
- depth x features if axis == 0
-```
-
-If `indices` is a matrix (batch) with shape `[batch, features]`, the output
-shape will be:
-```
- batch x features x depth if axis == -1
- batch x depth x features if axis == 1
- depth x batch x features if axis == 0
-```
-
-If `dtype` is not provided, it will attempt to assume the data type of
-`on_value` or `off_value`, if one or both are passed in. If none of
-`on_value`, `off_value`, or `dtype` are provided, `dtype` will default to the
-value `tf.float32`
-
-Note: If a non-numeric data type output is desired (tf.string, tf.bool, etc.),
-both `on_value` and `off_value` _must_ be provided to `one_hot`
-
-Examples
-=========
-
-Suppose that
-
-```
- indices = [0, 2, -1, 1]
- depth = 3
- on_value = 5.0
- off_value = 0.0
- axis = -1
-```
-
-Then output is `[4 x 3]`:
-
-```
- output =
- [5.0 0.0 0.0] // one_hot(0)
- [0.0 0.0 5.0] // one_hot(2)
- [0.0 0.0 0.0] // one_hot(-1)
- [0.0 5.0 0.0] // one_hot(1)
-```
-
-Suppose that
-
-```
- indices = [[0, 2], [1, -1]]
- depth = 3
- on_value = 1.0
- off_value = 0.0
- axis = -1
-```
-
-Then output is `[2 x 2 x 3]`:
-
-```
- output =
- [
- [1.0, 0.0, 0.0] // one_hot(0)
- [0.0, 0.0, 1.0] // one_hot(2)
- ][
- [0.0, 1.0, 0.0] // one_hot(1)
- [0.0, 0.0, 0.0] // one_hot(-1)
- ]
-```
-
-Using default values for `on_value` and `off_value`:
-
-```
- indices = [0, 1, 2]
- depth = 3
-```
-
-The output will be
-
-```
- output =
- [[1., 0., 0.],
- [0., 1., 0.],
- [0., 0., 1.]]
-```
-
-##### Args:
-
-
-* <b>`indices`</b>: A `Tensor` of indices.
-* <b>`depth`</b>: A scalar defining the depth of the one hot dimension.
-* <b>`on_value`</b>: A scalar defining the value to fill in output when `indices[j]
- = i`. (default: 1)
-* <b>`off_value`</b>: A scalar defining the value to fill in output when `indices[j]
- != i`. (default: 0)
-* <b>`axis`</b>: The axis to fill (default: -1, a new inner-most axis).
-* <b>`dtype`</b>: The data type of the output tensor.
-
-##### Returns:
-
-
-* <b>`output`</b>: The one-hot tensor.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If dtype of either `on_value` or `off_value` don't match `dtype`
-* <b>`TypeError`</b>: If dtype of `on_value` and `off_value` don't match one another
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.pow.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.pow.md
new file mode 100644
index 0000000000..8588b72fb8
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.pow.md
@@ -0,0 +1,24 @@
+### `tf.pow(x, y, name=None)` {#pow}
+
+Computes the power of one value to another.
+
+Given a tensor `x` and a tensor `y`, this operation computes \\(x^y\\) for
+corresponding elements in `x` and `y`. For example:
+
+```
+# tensor 'x' is [[2, 2], [3, 3]]
+# tensor 'y' is [[8, 16], [2, 3]]
+tf.pow(x, y) ==> [[256, 65536], [9, 27]]
+```
+
+##### Args:
+
+
+* <b>`x`</b>: A `Tensor` of type `float`, `double`, `int32`, `complex64`, or `int64`.
+* <b>`y`</b>: A `Tensor` of type `float`, `double`, `int32`, `complex64`, or `int64`.
+* <b>`name`</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`.
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.python_io.TFRecordWriter.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.python_io.TFRecordWriter.md
new file mode 100644
index 0000000000..4a67724209
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.python_io.TFRecordWriter.md
@@ -0,0 +1,41 @@
+A class to write records to a TFRecords file.
+
+This class implements `__enter__` and `__exit__`, and can be used
+in `with` blocks like a normal file.
+
+- - -
+
+#### `tf.python_io.TFRecordWriter.__init__(path)` {#TFRecordWriter.__init__}
+
+Opens file `path` and creates a `TFRecordWriter` writing to it.
+
+##### Args:
+
+
+* <b>`path`</b>: The path to the TFRecords file.
+
+##### Raises:
+
+
+* <b>`IOError`</b>: If `path` cannot be opened for writing.
+
+
+- - -
+
+#### `tf.python_io.TFRecordWriter.write(record)` {#TFRecordWriter.write}
+
+Write a string record to the file.
+
+##### Args:
+
+
+* <b>`record`</b>: str
+
+
+- - -
+
+#### `tf.python_io.TFRecordWriter.close()` {#TFRecordWriter.close}
+
+Close the file.
+
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.reduce_mean.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.reduce_prod.md
index af446b6c53..a87daa33fb 100644
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.reduce_mean.md
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.reduce_prod.md
@@ -1,6 +1,6 @@
-### `tf.reduce_mean(input_tensor, reduction_indices=None, keep_dims=False, name=None)` {#reduce_mean}
+### `tf.reduce_prod(input_tensor, reduction_indices=None, keep_dims=False, name=None)` {#reduce_prod}
-Computes the mean of elements across dimensions of a tensor.
+Computes the product of elements across dimensions of a tensor.
Reduces `input_tensor` along the dimensions given in `reduction_indices`.
Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each
@@ -10,16 +10,6 @@ are retained with length 1.
If `reduction_indices` has no entries, all dimensions are reduced, and a
tensor with a single element is returned.
-For example:
-
-```python
-# 'x' is [[1., 1.]
-# [2., 2.]]
-tf.reduce_mean(x) ==> 1.5
-tf.reduce_mean(x, 0) ==> [1.5, 1.5]
-tf.reduce_mean(x, 1) ==> [1., 2.]
-```
-
##### Args:
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.reshape.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.reshape.md
new file mode 100644
index 0000000000..057b29e91f
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.reshape.md
@@ -0,0 +1,72 @@
+### `tf.reshape(tensor, shape, name=None)` {#reshape}
+
+Reshapes a tensor.
+
+Given `tensor`, this operation returns a tensor that has the same values
+as `tensor` with shape `shape`.
+
+If one component of `shape` is the special value -1, the size of that dimension
+is computed so that the total size remains constant. In particular, a `shape`
+of `[-1]` flattens into 1-D. At most one component of `shape` can be -1.
+
+If `shape` is 1-D or higher, then the operation returns a tensor with shape
+`shape` filled with the values of `tensor`. In this case, the number of elements
+implied by `shape` must be the same as the number of elements in `tensor`.
+
+For example:
+
+```prettyprint
+# tensor 't' is [1, 2, 3, 4, 5, 6, 7, 8, 9]
+# tensor 't' has shape [9]
+reshape(t, [3, 3]) ==> [[1, 2, 3],
+ [4, 5, 6],
+ [7, 8, 9]]
+
+# tensor 't' is [[[1, 1], [2, 2]],
+# [[3, 3], [4, 4]]]
+# tensor 't' has shape [2, 2, 2]
+reshape(t, [2, 4]) ==> [[1, 1, 2, 2],
+ [3, 3, 4, 4]]
+
+# tensor 't' is [[[1, 1, 1],
+# [2, 2, 2]],
+# [[3, 3, 3],
+# [4, 4, 4]],
+# [[5, 5, 5],
+# [6, 6, 6]]]
+# tensor 't' has shape [3, 2, 3]
+# pass '[-1]' to flatten 't'
+reshape(t, [-1]) ==> [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5, 5, 6, 6, 6]
+
+# -1 can also be used to infer the shape
+
+# -1 is inferred to be 9:
+reshape(t, [2, -1]) ==> [[1, 1, 1, 2, 2, 2, 3, 3, 3],
+ [4, 4, 4, 5, 5, 5, 6, 6, 6]]
+# -1 is inferred to be 2:
+reshape(t, [-1, 9]) ==> [[1, 1, 1, 2, 2, 2, 3, 3, 3],
+ [4, 4, 4, 5, 5, 5, 6, 6, 6]]
+# -1 is inferred to be 3:
+reshape(t, [ 2, -1, 3]) ==> [[[1, 1, 1],
+ [2, 2, 2],
+ [3, 3, 3]],
+ [[4, 4, 4],
+ [5, 5, 5],
+ [6, 6, 6]]]
+
+# tensor 't' is [7]
+# shape `[]` reshapes to a scalar
+reshape(t, []) ==> 7
+```
+
+##### Args:
+
+
+* <b>`tensor`</b>: A `Tensor`.
+* <b>`shape`</b>: A `Tensor` of type `int32`. Defines the shape of the output tensor.
+* <b>`name`</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `tensor`.
+
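NumPy's `reshape` follows the same rules described above, including inference of a single `-1` component, so the examples can be reproduced directly:

```python
import numpy as np

t = np.arange(1, 10)      # values 1..9, shape [9]
a = t.reshape(3, 3)       # explicit target shape
b = t.reshape(-1, 3)      # -1 is inferred to be 3
flat = a.reshape(-1)      # [-1] flattens back to 1-D
```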
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.reverse.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.reverse.md
new file mode 100644
index 0000000000..e316d5faae
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.reverse.md
@@ -0,0 +1,61 @@
+### `tf.reverse(tensor, dims, name=None)` {#reverse}
+
+Reverses specific dimensions of a tensor.
+
+Given a `tensor`, and a `bool` tensor `dims` representing the dimensions
+of `tensor`, this operation reverses each dimension i of `tensor` where
+`dims[i]` is `True`.
+
+`tensor` can have up to 8 dimensions. The number of dimensions
+of `tensor` must equal the number of elements in `dims`. In other words:
+
+`rank(tensor) = size(dims)`
+
+For example:
+
+```prettyprint
+# tensor 't' is [[[[ 0, 1, 2, 3],
+# [ 4, 5, 6, 7],
+# [ 8, 9, 10, 11]],
+# [[12, 13, 14, 15],
+# [16, 17, 18, 19],
+# [20, 21, 22, 23]]]]
+# tensor 't' shape is [1, 2, 3, 4]
+
+# 'dims' is [False, False, False, True]
+reverse(t, dims) ==> [[[[ 3, 2, 1, 0],
+ [ 7, 6, 5, 4],
+ [ 11, 10, 9, 8]],
+ [[15, 14, 13, 12],
+ [19, 18, 17, 16],
+ [23, 22, 21, 20]]]]
+
+# 'dims' is [False, True, False, False]
+reverse(t, dims) ==> [[[[12, 13, 14, 15],
+ [16, 17, 18, 19],
+                      [20, 21, 22, 23]],
+ [[ 0, 1, 2, 3],
+ [ 4, 5, 6, 7],
+ [ 8, 9, 10, 11]]]]
+
+# 'dims' is [False, False, True, False]
+reverse(t, dims) ==> [[[[8, 9, 10, 11],
+ [4, 5, 6, 7],
+                        [0, 1, 2, 3]],
+ [[20, 21, 22, 23],
+ [16, 17, 18, 19],
+ [12, 13, 14, 15]]]]
+```
+
+##### Args:
+
+
+* <b>`tensor`</b>: A `Tensor`. Must be one of the following types: `uint8`, `int8`, `int32`, `bool`, `float32`, `float64`.
+ Up to 8-D.
+* <b>`dims`</b>: A `Tensor` of type `bool`. 1-D. The dimensions to reverse.
+* <b>`name`</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `tensor`. The same shape as `tensor`.
+
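The boolean-mask semantics of `dims` can be sketched with NumPy's reversed slices. An illustration of the behavior documented above, not the TF kernel:

```python
import numpy as np

def reverse(tensor, dims):
    """Reverse each axis i of `tensor` where dims[i] is True."""
    tensor = np.asarray(tensor)
    # Build an index with a reversed slice for each flipped axis.
    index = tuple(slice(None, None, -1) if flip else slice(None)
                  for flip in dims)
    return tensor[index]

t = np.arange(24).reshape(1, 2, 3, 4)       # same values as the example above
r = reverse(t, [False, False, False, True]) # reverse the last dimension
```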
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.round.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.round.md
deleted file mode 100644
index 8d2ce32921..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.round.md
+++ /dev/null
@@ -1,21 +0,0 @@
-### `tf.round(x, name=None)` {#round}
-
-Rounds the values of a tensor to the nearest integer, element-wise.
-
-For example:
-
-```python
-# 'a' is [0.9, 2.5, 2.3, -4.4]
-tf.round(a) ==> [ 1.0, 3.0, 2.0, -4.0 ]
-```
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor` of type `float` or `double`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of same shape and type as `x`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.scalar_mul.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.scalar_mul.md
deleted file mode 100644
index 5af291597d..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.scalar_mul.md
+++ /dev/null
@@ -1,23 +0,0 @@
-### `tf.scalar_mul(scalar, x)` {#scalar_mul}
-
-Multiplies a scalar times a `Tensor` or `IndexedSlices` object.
-
-Intended for use in gradient code which might deal with `IndexedSlices`
-objects, which are easy to multiply by a scalar but more expensive to
-multiply with arbitrary tensors.
-
-##### Args:
-
-
-* <b>`scalar`</b>: A 0-D scalar `Tensor`. Must have known shape.
-* <b>`x`</b>: A `Tensor` or `IndexedSlices` to be scaled.
-
-##### Returns:
-
- `scalar * x` of the same type (`Tensor` or `IndexedSlices`) as `x`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if scalar is not a 0-D `scalar`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.scan.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.scan.md
deleted file mode 100644
index 6ea0ac677b..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.scan.md
+++ /dev/null
@@ -1,44 +0,0 @@
-### `tf.scan(fn, elems, initializer=None, parallel_iterations=10, back_prop=True, swap_memory=False, name=None)` {#scan}
-
-scan on the list of tensors unpacked from `elems` on dimension 0.
-
-This scan operator repeatedly applies the callable `fn` to a sequence
-of elements from first to last. The elements are made of the tensors
-unpacked from `elems` on dimension 0. The callable fn takes two tensors as
-arguments. The first argument is the accumulated value computed from the
-preceding invocation of fn. If `initializer` is None, `elems` must contain
-at least one element, and its first element is used as the initializer.
-
-Suppose that `elems` is unpacked into `values`, a list of tensors. The shape
-of the result tensor is `[len(values)] + fn(initializer, values[0]).shape`.
-
-##### Args:
-
-
-* <b>`fn`</b>: The callable to be performed.
-* <b>`elems`</b>: A tensor to be unpacked on dimension 0.
-* <b>`initializer`</b>: (optional) The initial value for the accumulator.
-* <b>`parallel_iterations`</b>: (optional) The number of iterations allowed to run
- in parallel.
-* <b>`back_prop`</b>: (optional) True enables back propagation.
-* <b>`swap_memory`</b>: (optional) True enables GPU-CPU memory swapping.
-* <b>`name`</b>: (optional) Name prefix for the returned tensors.
-
-##### Returns:
-
- A tensor that packs the results of applying `fn` to the list of tensors
- unpacked from `elems`, from first to last.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `fn` is not callable.
-
-##### Example:
-
- ```python
- elems = [1, 2, 3, 4, 5, 6]
- sum = scan(lambda a, x: a + x, elems)
- # sum == [1, 3, 6, 10, 15, 21]
- ```
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.segment_max.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.segment_max.md
deleted file mode 100644
index c9d7a28900..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.segment_max.md
+++ /dev/null
@@ -1,30 +0,0 @@
-### `tf.segment_max(data, segment_ids, name=None)` {#segment_max}
-
-Computes the maximum along segments of a tensor.
-
-Read [the section on Segmentation](../../api_docs/python/math_ops.md#segmentation)
-for an explanation of segments.
-
-Computes a tensor such that
-\\(output_i = \max_j(data_j)\\) where `max` is over `j` such
-that `segment_ids[j] == i`.
-
-<div style="width:70%; margin:auto; margin-bottom:10px; margin-top:20px;">
-<img style="width:100%" src="../../images/SegmentMax.png" alt>
-</div>
-
-##### Args:
-
-
-* <b>`data`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`.
-* <b>`segment_ids`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
- A 1-D tensor whose rank is equal to the rank of `data`'s
- first dimension. Values should be sorted and can be repeated.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `data`.
- Has same shape as data, except for dimension 0 which
- has size `k`, the number of segments.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.slice.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.slice.md
new file mode 100644
index 0000000000..6da47df0b0
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.slice.md
@@ -0,0 +1,47 @@
+### `tf.slice(input_, begin, size, name=None)` {#slice}
+
+Extracts a slice from a tensor.
+
+This operation extracts a slice of size `size` from a tensor `input` starting
+at the location specified by `begin`. The slice `size` is represented as a
+tensor shape, where `size[i]` is the number of elements of the 'i'th dimension
+of `input` that you want to slice. The starting location (`begin`) for the
+slice is represented as an offset in each dimension of `input`. In other
+words, `begin[i]` is the offset into the 'i'th dimension of `input` that you
+want to slice from.
+
+`begin` is zero-based; `size` is one-based. If `size[i]` is -1,
+all remaining elements in dimension i are included in the
+slice. In other words, this is equivalent to setting:
+
+`size[i] = input.dim_size(i) - begin[i]`
+
+This operation requires that:
+
+`0 <= begin[i] <= begin[i] + size[i] <= Di for i in [0, n]`
+
+For example:
+
+```
+# 'input' is [[[1, 1, 1], [2, 2, 2]],
+# [[3, 3, 3], [4, 4, 4]],
+# [[5, 5, 5], [6, 6, 6]]]
+tf.slice(input, [1, 0, 0], [1, 1, 3]) ==> [[[3, 3, 3]]]
+tf.slice(input, [1, 0, 0], [1, 2, 3]) ==> [[[3, 3, 3],
+ [4, 4, 4]]]
+tf.slice(input, [1, 0, 0], [2, 1, 3]) ==> [[[3, 3, 3]],
+ [[5, 5, 5]]]
+```
+
+##### Args:
+
+
+* <b>`input_`</b>: A `Tensor`.
+* <b>`begin`</b>: An `int32` or `int64` `Tensor`.
+* <b>`size`</b>: An `int32` or `int64` `Tensor`.
+* <b>`name`</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor` the same type as `input`.
+
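The `begin`/`size` convention, including `size[i] = -1` meaning "take the rest of dimension i", can be sketched on top of ordinary NumPy slicing. A sketch of the semantics described above, not the TF op:

```python
import numpy as np

def tf_style_slice(input_, begin, size):
    """Extract a slice of `size` starting at `begin`; size[i] == -1 keeps the rest."""
    idx = []
    for b, s, dim in zip(begin, size, input_.shape):
        end = dim if s == -1 else b + s   # -1 extends to the end of the axis
        idx.append(slice(b, end))
    return input_[tuple(idx)]

x = np.array([[[1, 1, 1], [2, 2, 2]],
              [[3, 3, 3], [4, 4, 4]],
              [[5, 5, 5], [6, 6, 6]]])
out = tf_style_slice(x, [1, 0, 0], [1, 1, 3])  # [[[3, 3, 3]]]
```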
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.sparse_merge.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.sparse_merge.md
deleted file mode 100644
index 38742123d6..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.sparse_merge.md
+++ /dev/null
@@ -1,73 +0,0 @@
-### `tf.sparse_merge(sp_ids, sp_values, vocab_size, name=None)` {#sparse_merge}
-
-Combines a batch of feature ids and values into a single `SparseTensor`.
-
-The most common use case for this function occurs when feature ids and
-their corresponding values are stored in `Example` protos on disk.
-`parse_example` will return a batch of ids and a batch of values, and this
-function joins them into a single logical `SparseTensor` for use in
-functions such as `sparse_tensor_dense_matmul`, `sparse_to_dense`, etc.
-
-The `SparseTensor` returned by this function has the following properties:
-
- - `indices` is equivalent to `sp_ids.indices` with the last
- dimension discarded and replaced with `sp_ids.values`.
- - `values` is simply `sp_values.values`.
- - If `sp_ids.shape = [D0, D1, ..., Dn, K]`, then
- `output.shape = [D0, D1, ..., Dn, vocab_size]`.
-
-For example, consider the following feature vectors:
-
- vector1 = [-3, 0, 0, 0, 0, 0]
- vector2 = [ 0, 1, 0, 4, 1, 0]
- vector3 = [ 5, 0, 0, 9, 0, 0]
-
-These might be stored sparsely in the following Example protos by storing
-only the feature ids (column number if the vectors are treated as a matrix)
-of the non-zero elements and the corresponding values:
-
- examples = [Example(features={
- "ids": Feature(int64_list=Int64List(value=[0])),
- "values": Feature(float_list=FloatList(value=[-3]))}),
- Example(features={
- "ids": Feature(int64_list=Int64List(value=[1, 4, 3])),
- "values": Feature(float_list=FloatList(value=[1, 1, 4]))}),
- Example(features={
- "ids": Feature(int64_list=Int64List(value=[0, 3])),
- "values": Feature(float_list=FloatList(value=[5, 9]))})]
-
-The result of calling parse_example on these examples will produce a
-dictionary with entries for "ids" and "values". Passing those two objects
-to this function along with vocab_size=6, will produce a `SparseTensor` that
-sparsely represents all three instances. Namely, the `indices` property will
-contain the coordinates of the non-zero entries in the feature matrix (the
-first dimension is the row number in the matrix, i.e., the index within the
-batch, and the second dimension is the column number, i.e., the feature id);
-`values` will contain the actual values. `shape` will be the shape of the
-original matrix, i.e., (3, 6). For our example above, the output will be
-equal to:
-
- SparseTensor(indices=[[0, 0], [1, 1], [1, 3], [1, 4], [2, 0], [2, 3]],
- values=[-3, 1, 4, 1, 5, 9],
- shape=[3, 6])
-
-##### Args:
-
-
-* <b>`sp_ids`</b>: A `SparseTensor` with `values` property of type `int32`
- or `int64`.
-* <b>`sp_values`</b>: A `SparseTensor` of any type.
-* <b>`vocab_size`</b>: A scalar `int64` Tensor (or Python int) containing the new size
- of the last dimension, `all(0 <= sp_ids.values < vocab_size)`.
-* <b>`name`</b>: A name prefix for the returned tensors (optional)
-
-##### Returns:
-
- A `SparseTensor` compactly representing a batch of feature ids and values,
- useful for passing to functions that expect such a `SparseTensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `sp_ids` or `sp_values` are not a `SparseTensor`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.sparse_segment_mean.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.sparse_segment_mean.md
deleted file mode 100644
index d95830b8a9..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.sparse_segment_mean.md
+++ /dev/null
@@ -1,27 +0,0 @@
-### `tf.sparse_segment_mean(data, indices, segment_ids, name=None)` {#sparse_segment_mean}
-
-Computes the mean along sparse segments of a tensor.
-
-Read [the section on
-Segmentation](../../api_docs/python/math_ops.md#segmentation) for an explanation
-of segments.
-
-Like `SegmentMean`, but `segment_ids` can have rank less than `data`'s first
-dimension, selecting a subset of dimension 0, specified by `indices`.
-
-##### Args:
-
-
-* <b>`data`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`.
-* <b>`indices`</b>: A `Tensor` of type `int32`.
- A 1-D tensor. Has same rank as `segment_ids`.
-* <b>`segment_ids`</b>: A `Tensor` of type `int32`.
- A 1-D tensor. Values should be sorted and can be repeated.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `data`.
- Has same shape as data, except for dimension 0 which
- has size `k`, the number of segments.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.sparse_segment_sqrt_n_grad.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.sparse_segment_sqrt_n_grad.md
new file mode 100644
index 0000000000..2a2e0c9e33
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.sparse_segment_sqrt_n_grad.md
@@ -0,0 +1,24 @@
+### `tf.sparse_segment_sqrt_n_grad(grad, indices, segment_ids, output_dim0, name=None)` {#sparse_segment_sqrt_n_grad}
+
+Computes gradients for SparseSegmentSqrtN.
+
+Returns tensor "output" with same shape as grad, except for dimension 0 whose
+value is output_dim0.
+
+##### Args:
+
+
+* <b>`grad`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`.
+ gradient propagated to the SparseSegmentSqrtN op.
+* <b>`indices`</b>: A `Tensor` of type `int32`.
+ indices passed to the corresponding SparseSegmentSqrtN op.
+* <b>`segment_ids`</b>: A `Tensor` of type `int32`.
+ segment_ids passed to the corresponding SparseSegmentSqrtN op.
+* <b>`output_dim0`</b>: A `Tensor` of type `int32`.
+ dimension 0 of "data" passed to SparseSegmentSqrtN op.
+* <b>`name`</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `grad`.
+
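For context on what this gradient belongs to, here is a NumPy sketch of the corresponding forward op, `SparseSegmentSqrtN` (segment sums scaled by `1/sqrt(N)`); the helper name and 2-D `data` assumption are illustrative, not the TensorFlow API:

```python
import numpy as np

def sparse_segment_sqrt_n(data, indices, segment_ids):
    # Gather the selected rows, sum each segment, then scale the segment
    # sum by 1/sqrt(N), where N is the number of contributing rows.
    selected = data[np.asarray(indices)]
    num_segments = int(max(segment_ids)) + 1
    out = np.zeros((num_segments, data.shape[1]), dtype=data.dtype)
    counts = np.zeros(num_segments)
    for row, seg in zip(selected, segment_ids):
        out[seg] += row
        counts[seg] += 1
    return out / np.sqrt(counts)[:, None]
```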
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.rsqrt.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.squared_difference.md
index 5e8b1bc917..d6bb175669 100644
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.rsqrt.md
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.squared_difference.md
@@ -1,13 +1,12 @@
-### `tf.rsqrt(x, name=None)` {#rsqrt}
+### `tf.squared_difference(x, y, name=None)` {#squared_difference}
 
-Computes reciprocal of square root of x element-wise.
-
-I.e., \\(y = 1 / \sqrt{x}\\).
+Returns (x - y)(x - y) element-wise.
 
 ##### Args:
 
 
 * <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`.
+* <b>`y`</b>: A `Tensor`. Must have the same type as `x`.
 * <b>`name`</b>: A name for the operation (optional).
 
 ##### Returns:
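The op is a simple elementwise computation; a NumPy sketch with the same semantics (illustrative, not the TensorFlow kernel) is:

```python
import numpy as np

def squared_difference(x, y):
    # Elementwise (x - y) * (x - y); broadcasting follows NumPy rules.
    d = np.asarray(x) - np.asarray(y)
    return d * d
```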
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.squeeze.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.squeeze.md
deleted file mode 100644
index e76c02e115..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.squeeze.md
+++ /dev/null
@@ -1,38 +0,0 @@
-### `tf.squeeze(input, squeeze_dims=None, name=None)` {#squeeze}
-
-Removes dimensions of size 1 from the shape of a tensor.
-
-Given a tensor `input`, this operation returns a tensor of the same type with
-all dimensions of size 1 removed. If you don't want to remove all size 1
-dimensions, you can remove specific size 1 dimensions by specifying
-`squeeze_dims`.
-
-For example:
-
-```prettyprint
-# 't' is a tensor of shape [1, 2, 1, 3, 1, 1]
-shape(squeeze(t)) ==> [2, 3]
-```
-
-Or, to remove specific size 1 dimensions:
-
-```prettyprint
-# 't' is a tensor of shape [1, 2, 1, 3, 1, 1]
-shape(squeeze(t, [2, 4])) ==> [1, 2, 3, 1]
-```
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`. The `input` to squeeze.
-* <b>`squeeze_dims`</b>: An optional list of `ints`. Defaults to `[]`.
- If specified, only squeezes the dimensions listed. The dimension
- index starts at 0. It is an error to squeeze a dimension that is not 1.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `input`.
- Contains the same data as `input`, but has one or more dimensions of
- size 1 removed.
-
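NumPy's `np.squeeze` has matching semantics, which makes the examples above easy to check outside a TensorFlow session:

```python
import numpy as np

# 't' is a tensor of shape [1, 2, 1, 3, 1, 1]
t = np.zeros((1, 2, 1, 3, 1, 1))

# Squeeze all size-1 dimensions.
assert np.squeeze(t).shape == (2, 3)

# Squeeze only dimensions 2 and 4.
assert np.squeeze(t, axis=(2, 4)).shape == (1, 2, 3, 1)
```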
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.string_to_hash_bucket_strong.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.string_to_hash_bucket_strong.md
new file mode 100644
index 0000000000..67cf3b6fd9
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.string_to_hash_bucket_strong.md
@@ -0,0 +1,30 @@
+### `tf.string_to_hash_bucket_strong(input, num_buckets, key, name=None)` {#string_to_hash_bucket_strong}
+
+Converts each string in the input Tensor to its hash mod a number of buckets.
+
+The hash function is deterministic on the content of the string within the
+process. The hash function is a keyed hash function, where attribute `key`
+defines the key of the hash function. `key` is an array of 2 elements.
+
+A strong hash is important when inputs may be malicious, e.g. URLs with
+additional components. Adversaries could try to make their inputs hash to the
+same bucket for a denial-of-service attack or to skew the results. A strong
+hash prevents this by making it difficult, if not infeasible, to compute inputs
+that hash to the same bucket. This comes at a cost of roughly 4x higher compute
+time than tf.string_to_hash_bucket_fast.
+
+##### Args:
+
+
+* <b>`input`</b>: A `Tensor` of type `string`. The strings to assign a hash bucket.
+* <b>`num_buckets`</b>: An `int` that is `>= 1`. The number of buckets.
+* <b>`key`</b>: A list of `ints`.
+ The key for the keyed hash function passed as a list of two uint64
+ elements.
+* <b>`name`</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor` of type `int64`.
+ A Tensor of the same shape as the input `string_tensor`.
+
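A plain-Python sketch of the keyed-hash-mod-buckets idea, using BLAKE2b as the keyed hash. Note this is only an analogy: TensorFlow's kernel uses a different keyed hash function, so the bucket values will not match the real op.

```python
import hashlib

def string_to_hash_bucket_strong(strings, num_buckets, key):
    # Pack the two uint64 key elements into the key of a keyed hash,
    # hash each string, and reduce the digest modulo num_buckets.
    key_bytes = key[0].to_bytes(8, "little") + key[1].to_bytes(8, "little")
    buckets = []
    for s in strings:
        digest = hashlib.blake2b(s.encode("utf-8"), key=key_bytes,
                                 digest_size=8).digest()
        buckets.append(int.from_bytes(digest, "little") % num_buckets)
    return buckets
```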
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.test.assert_equal_graph_def.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.test.assert_equal_graph_def.md
new file mode 100644
index 0000000000..653236cf9f
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.test.assert_equal_graph_def.md
@@ -0,0 +1,20 @@
+### `tf.test.assert_equal_graph_def(actual, expected)` {#assert_equal_graph_def}
+
+Asserts that two `GraphDef`s are (mostly) the same.
+
+Compares two `GraphDef` protos for equality, ignoring versions and ordering of
+nodes, attrs, and control inputs. Node names are used to match up nodes
+between the graphs, so the naming of nodes must be consistent.
+
+##### Args:
+
+
+* <b>`actual`</b>: The `GraphDef` we have.
+* <b>`expected`</b>: The `GraphDef` we expected.
+
+##### Raises:
+
+
+* <b>`AssertionError`</b>: If the `GraphDef`s do not match.
+* <b>`TypeError`</b>: If either argument is not a `GraphDef`.
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.test.compute_gradient.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.test.compute_gradient.md
deleted file mode 100644
index 19b302d466..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.test.compute_gradient.md
+++ /dev/null
@@ -1,40 +0,0 @@
-### `tf.test.compute_gradient(x, x_shape, y, y_shape, x_init_value=None, delta=0.001, init_targets=None)` {#compute_gradient}
-
-Computes and returns the theoretical and numerical Jacobian.
-
-If `x` or `y` is complex, the Jacobian will still be real but the
-corresponding Jacobian dimension(s) will be twice as large. This is required
-even if both input and output are complex since TensorFlow graphs are not
-necessarily holomorphic, and may have gradients not expressible as complex
-numbers. For example, if `x` is complex with shape `[m]` and `y` is complex
-with shape `[n]`, each Jacobian `J` will have shape `[m * 2, n * 2]` with
-
- J[:m, :n] = d(Re y)/d(Re x)
- J[:m, n:] = d(Im y)/d(Re x)
- J[m:, :n] = d(Re y)/d(Im x)
- J[m:, n:] = d(Im y)/d(Im x)
-
-##### Args:
-
-
-* <b>`x`</b>: a tensor or list of tensors
-* <b>`x_shape`</b>: the dimensions of x as a tuple or an array of ints. If x is a list,
- then this is the list of shapes.
-
-* <b>`y`</b>: a tensor
-* <b>`y_shape`</b>: the dimensions of y as a tuple or an array of ints.
-* <b>`x_init_value`</b>: (optional) a numpy array of the same shape as "x"
- representing the initial value of x. If x is a list, this should be a list
- of numpy arrays. If this is none, the function will pick a random tensor
- as the initial value.
-* <b>`delta`</b>: (optional) the amount of perturbation.
-* <b>`init_targets`</b>: list of targets to run to initialize model params.
- TODO(mrry): remove this argument.
-
-##### Returns:
-
- Two 2-d numpy arrays representing the theoretical and numerical
- Jacobian for dy/dx. Each has "x_size" rows and "y_size" columns
- where "x_size" is the number of elements in x and "y_size" is the
- number of elements in y. If x is a list, returns a list of two numpy arrays.
-
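The numerical half of the comparison is just finite differences; a minimal NumPy sketch of a central-difference Jacobian (simplified to a single real-valued input, unlike the full API) is:

```python
import numpy as np

def numeric_jacobian(f, x, delta=1e-3):
    # Central differences: jac[i, j] approximates d f(x)[j] / d x[i],
    # matching the "x_size rows, y_size columns" layout described above.
    x = np.asarray(x, dtype=float)
    y = np.asarray(f(x))
    jac = np.zeros((x.size, y.size))
    for i in range(x.size):
        xp, xm = x.copy(), x.copy()
        xp.flat[i] += delta
        xm.flat[i] -= delta
        jac[i] = (np.asarray(f(xp)) - np.asarray(f(xm))).ravel() / (2 * delta)
    return jac
```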
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.AdagradOptimizer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.AdagradOptimizer.md
new file mode 100644
index 0000000000..35e416386e
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.AdagradOptimizer.md
@@ -0,0 +1,26 @@
+Optimizer that implements the Adagrad algorithm.
+
+See this [paper](http://www.jmlr.org/papers/volume12/duchi11a/duchi11a.pdf).
+
+- - -
+
+#### `tf.train.AdagradOptimizer.__init__(learning_rate, initial_accumulator_value=0.1, use_locking=False, name='Adagrad')` {#AdagradOptimizer.__init__}
+
+Construct a new Adagrad optimizer.
+
+##### Args:
+
+
+* <b>`learning_rate`</b>: A `Tensor` or a floating point value. The learning rate.
+* <b>`initial_accumulator_value`</b>: A floating point value.
+ Starting value for the accumulators, must be positive.
+* <b>`use_locking`</b>: If `True` use locks for update operations.
+* <b>`name`</b>: Optional name prefix for the operations created when applying
+ gradients. Defaults to "Adagrad".
+
+##### Raises:
+
+
+* <b>`ValueError`</b>: If the `initial_accumulator_value` is invalid.
+
+
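The per-parameter update rule behind the optimizer can be sketched in NumPy (the helper name is illustrative; TensorFlow applies this inside `apply_gradients`):

```python
import numpy as np

def adagrad_step(var, grad, accum, learning_rate):
    # accum_t = accum_{t-1} + grad^2
    # var_t   = var_{t-1} - learning_rate * grad / sqrt(accum_t)
    accum = accum + grad * grad
    var = var - learning_rate * grad / np.sqrt(accum)
    return var, accum
```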
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.SessionManager.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.SessionManager.md
deleted file mode 100644
index 8bebb8bd29..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.SessionManager.md
+++ /dev/null
@@ -1,187 +0,0 @@
-Training helper that restores from checkpoint and creates session.
-
-This class is a small wrapper that takes care of session creation and
-checkpoint recovery. It also provides functions to facilitate
-coordination among multiple training threads or processes.
-
-* Checkpointing trained variables as the training progresses.
-* Initializing variables on startup, restoring them from the most recent
- checkpoint after a crash, or waiting for checkpoints to become available.
-
-### Usage:
-
-```python
-with tf.Graph().as_default():
- ...add operations to the graph...
- # Create a SessionManager that will checkpoint the model in '/tmp/mydir'.
- sm = SessionManager()
- sess = sm.prepare_session(master, init_op, saver, checkpoint_dir)
- # Use the session to train the graph.
- while True:
- sess.run(<my_train_op>)
-```
-
-`prepare_session()` initializes or restores a model. It requires `init_op`
-and `saver` as arguments.
-
-A second process could wait for the model to be ready by doing the following:
-
-```python
-with tf.Graph().as_default():
- ...add operations to the graph...
- # Create a SessionManager that will wait for the model to become ready.
- sm = SessionManager()
- sess = sm.wait_for_session(master)
- # Use the session to train the graph.
- while True:
- sess.run(<my_train_op>)
-```
-
-`wait_for_session()` waits for a model to be initialized by other processes.
-- - -
-
-#### `tf.train.SessionManager.__init__(local_init_op=None, ready_op=None, graph=None, recovery_wait_secs=30)` {#SessionManager.__init__}
-
-Creates a SessionManager.
-
-The `local_init_op` is an `Operation` that is always run after a new
-session is created. If `None`, this step is skipped.
-
-The `ready_op` is an `Operation` used to check if the model is ready. The
-model is considered ready if that operation returns an empty string tensor.
-If the operation returns a non-empty string tensor, the elements are
-concatenated and used to indicate to the user why the model is not ready.
-
-If `ready_op` is `None`, the model is not checked for readiness.
-
-`recovery_wait_secs` is the number of seconds between checks that
-the model is ready. It is used by processes to wait for a model to
-be initialized or restored. Defaults to 30 seconds.
-
-##### Args:
-
-
-* <b>`local_init_op`</b>: An `Operation` run immediately after session creation.
- Usually used to initialize tables and local variables.
-* <b>`ready_op`</b>: An `Operation` to check if the model is initialized.
-* <b>`graph`</b>: The `Graph` that the model will use.
-* <b>`recovery_wait_secs`</b>: Seconds between checks for the model to be ready.
-
-
-- - -
-
-#### `tf.train.SessionManager.prepare_session(master, init_op=None, saver=None, checkpoint_dir=None, wait_for_checkpoint=False, max_wait_secs=7200, config=None, init_feed_dict=None, init_fn=None)` {#SessionManager.prepare_session}
-
-Creates a `Session`. Makes sure the model is ready to be used.
-
-Creates a `Session` on 'master'. If a `saver` object is passed in, and
-`checkpoint_dir` points to a directory containing valid checkpoint
-files, then it will try to recover the model from checkpoint. If
-no checkpoint files are available, and `wait_for_checkpoint` is
-`True`, then the process will check every `recovery_wait_secs`,
-up to `max_wait_secs`, for recovery to succeed.
-
-If the model cannot be recovered successfully then it is initialized by
-either running the provided `init_op`, or calling the provided `init_fn`.
-It is an error if the model cannot be recovered and neither an `init_op`
-nor an `init_fn` is passed.
-
-This is a convenient function for the following, with a few error checks
-added:
-
-```python
-sess, initialized = self.recover_session(master)
-if not initialized:
- if init_op:
- sess.run(init_op, feed_dict=init_feed_dict)
- if init_fn:
- init_fn(sess)
-return sess
-```
-
-##### Args:
-
-
-* <b>`master`</b>: `String` representation of the TensorFlow master to use.
-* <b>`init_op`</b>: Optional `Operation` used to initialize the model.
-* <b>`saver`</b>: A `Saver` object used to restore a model.
-* <b>`checkpoint_dir`</b>: Path to the checkpoint files.
-* <b>`wait_for_checkpoint`</b>: Whether to wait for checkpoint to become available.
-* <b>`max_wait_secs`</b>: Maximum time to wait for checkpoints to become available.
-* <b>`config`</b>: Optional `ConfigProto` proto used to configure the session.
-* <b>`init_feed_dict`</b>: Optional dictionary that maps `Tensor` objects to feed
- values. This feed dictionary is passed to the session `run()` call when
- running the init op.
-* <b>`init_fn`</b>: Optional callable used to initialize the model. Called after the
- optional `init_op` is called. The callable must accept one argument,
- the session being initialized.
-
-##### Returns:
-
- A `Session` object that can be used to drive the model.
-
-##### Raises:
-
-
-* <b>`RuntimeError`</b>: If the model cannot be initialized or recovered.
-
-
-- - -
-
-#### `tf.train.SessionManager.recover_session(master, saver=None, checkpoint_dir=None, wait_for_checkpoint=False, max_wait_secs=7200, config=None)` {#SessionManager.recover_session}
-
-Creates a `Session`, recovering if possible.
-
-Creates a new session on 'master'. If the session is not initialized
-and can be recovered from a checkpoint, recover it.
-
-##### Args:
-
-
-* <b>`master`</b>: `String` representation of the TensorFlow master to use.
-* <b>`saver`</b>: A `Saver` object used to restore a model.
-* <b>`checkpoint_dir`</b>: Path to the checkpoint files.
-* <b>`wait_for_checkpoint`</b>: Whether to wait for checkpoint to become available.
-* <b>`max_wait_secs`</b>: Maximum time to wait for checkpoints to become available.
-* <b>`config`</b>: Optional `ConfigProto` proto used to configure the session.
-
-##### Returns:
-
- A pair (sess, initialized) where 'initialized' is `True` if
- the session could be recovered, `False` otherwise.
-
-
-- - -
-
-#### `tf.train.SessionManager.wait_for_session(master, config=None, max_wait_secs=inf)` {#SessionManager.wait_for_session}
-
-Creates a new `Session` and waits for model to be ready.
-
-Creates a new `Session` on 'master'. Waits for the model to be
-initialized or recovered from a checkpoint. It is expected that
-another thread or process will make the model ready. This method is
-intended to be used by threads/processes that participate in a
-distributed training configuration, where a different thread/process
-is responsible for initializing or recovering the model being trained.
-
-NB: The amount of time this method waits for the session is bounded
-by max_wait_secs. By default, this function will wait indefinitely.
-
-##### Args:
-
-
-* <b>`master`</b>: `String` representation of the TensorFlow master to use.
-* <b>`config`</b>: Optional ConfigProto proto used to configure the session.
-* <b>`max_wait_secs`</b>: Maximum time to wait for the session to become available.
-
-##### Returns:
-
- A `Session`. May be None if the operation exceeds the timeout
- specified by config.operation_timeout_in_ms.
-
-##### Raises:
-
- tf.DeadlineExceededError: if the session is not available after
- max_wait_secs.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.latest_checkpoint.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.latest_checkpoint.md
deleted file mode 100644
index b1fc87cdd7..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.latest_checkpoint.md
+++ /dev/null
@@ -1,16 +0,0 @@
-### `tf.train.latest_checkpoint(checkpoint_dir, latest_filename=None)` {#latest_checkpoint}
-
-Finds the filename of latest saved checkpoint file.
-
-##### Args:
-
-
-* <b>`checkpoint_dir`</b>: Directory where the variables were saved.
-* <b>`latest_filename`</b>: Optional name for the protocol buffer file that
- contains the list of most recent checkpoint filenames.
- See the corresponding argument to `Saver.save()`.
-
-##### Returns:
-
- The full path to the latest checkpoint or `None` if no checkpoint was found.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.shuffle_batch.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.shuffle_batch.md
deleted file mode 100644
index bf2591801b..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.shuffle_batch.md
+++ /dev/null
@@ -1,74 +0,0 @@
-### `tf.train.shuffle_batch(tensors, batch_size, capacity, min_after_dequeue, num_threads=1, seed=None, enqueue_many=False, shapes=None, shared_name=None, name=None)` {#shuffle_batch}
-
-Creates batches by randomly shuffling tensors.
-
-This function adds the following to the current `Graph`:
-
-* A shuffling queue into which tensors from `tensors` are enqueued.
-* A `dequeue_many` operation to create batches from the queue.
-* A `QueueRunner` to `QUEUE_RUNNER` collection, to enqueue the tensors
- from `tensors`.
-
-If `enqueue_many` is `False`, `tensors` is assumed to represent a
-single example. An input tensor with shape `[x, y, z]` will be output
-as a tensor with shape `[batch_size, x, y, z]`.
-
-If `enqueue_many` is `True`, `tensors` is assumed to represent a
-batch of examples, where the first dimension is indexed by example,
-and all members of `tensors` should have the same size in the
-first dimension. If an input tensor has shape `[*, x, y, z]`, the
-output will have shape `[batch_size, x, y, z]`.
-
-The `capacity` argument controls how long the prefetching is allowed to
-grow the queues.
-
-The returned operation is a dequeue operation and will throw
-`tf.errors.OutOfRangeError` if the input queue is exhausted. If this
-operation is feeding another input queue, its queue runner will catch
-this exception, however, if this operation is used in your main thread
-you are responsible for catching this yourself.
-
-For example:
-
-```python
-# Creates batches of 32 images and 32 labels.
-image_batch, label_batch = tf.train.shuffle_batch(
- [single_image, single_label],
- batch_size=32,
- num_threads=4,
- capacity=50000,
- min_after_dequeue=10000)
-```
-
-*N.B.:* You must ensure that either (i) the `shapes` argument is
-passed, or (ii) all of the tensors in `tensors` must have
-fully-defined shapes. `ValueError` will be raised if neither of
-these conditions holds.
-
-##### Args:
-
-
-* <b>`tensors`</b>: The list or dictionary of tensors to enqueue.
-* <b>`batch_size`</b>: The new batch size pulled from the queue.
-* <b>`capacity`</b>: An integer. The maximum number of elements in the queue.
-* <b>`min_after_dequeue`</b>: Minimum number of elements in the queue after a
- dequeue, used to ensure a level of mixing of elements.
-* <b>`num_threads`</b>: The number of threads enqueuing `tensors`.
-* <b>`seed`</b>: Seed for the random shuffling within the queue.
-* <b>`enqueue_many`</b>: Whether each tensor in `tensors` is a single example.
-* <b>`shapes`</b>: (Optional) The shapes for each example. Defaults to the
- inferred shapes for `tensors`.
-* <b>`shared_name`</b>: (Optional) If set, this queue will be shared under the given
- name across multiple sessions.
-* <b>`name`</b>: (Optional) A name for the operations.
-
-##### Returns:
-
- A list or dictionary of tensors with the types as `tensors`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If the `shapes` are not specified, and cannot be
- inferred from the elements of `tensors`.
-
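The shuffling-queue behavior can be sketched in pure Python as a generator (an analogy, not the threaded queue-runner machinery): fill a buffer up to `capacity`, then form each batch by popping random elements, dropping trailing examples that cannot fill a whole batch.

```python
import random

def shuffle_batch(stream, batch_size, capacity, min_after_dequeue, seed=None):
    # Sketch of the shuffling-queue semantics. For steady-state mixing it
    # assumes capacity - batch_size >= min_after_dequeue, since the buffer
    # is refilled to capacity before each batch is drawn.
    rng = random.Random(seed)
    buf, it, exhausted = [], iter(stream), False
    while True:
        while not exhausted and len(buf) < capacity:
            try:
                buf.append(next(it))
            except StopIteration:
                exhausted = True
        if len(buf) < batch_size:
            break  # drop the too-small final batch
        yield [buf.pop(rng.randrange(len(buf))) for _ in range(batch_size)]
```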
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.start_queue_runners.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.start_queue_runners.md
deleted file mode 100644
index 21ac6efee8..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.start_queue_runners.md
+++ /dev/null
@@ -1,24 +0,0 @@
-### `tf.train.start_queue_runners(sess=None, coord=None, daemon=True, start=True, collection='queue_runners')` {#start_queue_runners}
-
-Starts all queue runners collected in the graph.
-
-This is a companion method to `add_queue_runner()`. It just starts
-threads for all queue runners collected in the graph. It returns
-the list of all threads.
-
-##### Args:
-
-
-* <b>`sess`</b>: `Session` used to run the queue ops. Defaults to the
- default session.
-* <b>`coord`</b>: Optional `Coordinator` for coordinating the started threads.
-* <b>`daemon`</b>: Whether the threads should be marked as `daemons`, meaning
- they don't block program exit.
-* <b>`start`</b>: Set to `False` to only create the threads, not start them.
-* <b>`collection`</b>: A `GraphKey` specifying the graph collection to
- get the queue runners from. Defaults to `GraphKeys.QUEUE_RUNNERS`.
-
-##### Returns:
-
- A list of threads.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.tuple.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.tuple.md
new file mode 100644
index 0000000000..503a98d625
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.tuple.md
@@ -0,0 +1,36 @@
+### `tf.tuple(tensors, name=None, control_inputs=None)` {#tuple}
+
+Group tensors together.
+
+This creates a tuple of tensors with the same values as the `tensors`
+argument, except that the value of each tensor is only returned after the
+values of all tensors have been computed.
+
+`control_inputs` contains additional ops that have to finish before this op
+finishes, but whose outputs are not returned.
+
+This can be used as a "join" mechanism for parallel computations: all the
+argument tensors can be computed in parallel, but the values of any tensor
+returned by `tuple` are only available after all the parallel computations
+are done.
+
+See also `group` and `with_dependencies`.
+
+##### Args:
+
+
+* <b>`tensors`</b>: A list of `Tensor`s or `IndexedSlices`, some entries can be `None`.
+* <b>`name`</b>: (optional) A name to use as a `name_scope` for the operation.
+* <b>`control_inputs`</b>: List of additional ops to finish before returning.
+
+##### Returns:
+
+ Same as `tensors`.
+
+##### Raises:
+
+
+* <b>`ValueError`</b>: If `tensors` does not contain any `Tensor` or `IndexedSlices`.
+* <b>`TypeError`</b>: If `control_inputs` is not a list of `Operation` or `Tensor`
+ objects.
+
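As a plain-Python analogy for the "join" behavior (threads standing in for the dataflow graph; the helper name is illustrative), every input runs to completion before any value is returned:

```python
from concurrent.futures import ThreadPoolExecutor

def tuple_join(thunks):
    # Submit every thunk, then block until all have finished before
    # returning any value -- a "join" over the parallel computations.
    with ThreadPoolExecutor() as executor:
        futures = [executor.submit(thunk) for thunk in thunks]
        return [future.result() for future in futures]
```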
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.unique.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.unique.md
new file mode 100644
index 0000000000..0929f57b0f
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.unique.md
@@ -0,0 +1,33 @@
+### `tf.unique(x, name=None)` {#unique}
+
+Finds unique elements in a 1-D tensor.
+
+This operation returns a tensor `y` containing all of the unique elements of `x`
+sorted in the same order that they occur in `x`. This operation also returns a
+tensor `idx` the same size as `x` that contains the index of each value of `x`
+in the unique output `y`. In other words:
+
+`y[idx[i]] = x[i] for i in [0, 1, ..., len(x) - 1]`
+
+For example:
+
+```prettyprint
+# tensor 'x' is [1, 1, 2, 4, 4, 4, 7, 8, 8]
+y, idx = unique(x)
+y ==> [1, 2, 4, 7, 8]
+idx ==> [0, 0, 1, 2, 2, 2, 3, 4, 4]
+```
+
+##### Args:
+
+
+* <b>`x`</b>: A `Tensor`. 1-D.
+* <b>`name`</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A tuple of `Tensor` objects (y, idx).
+
+* <b>`y`</b>: A `Tensor`. Has the same type as `x`. 1-D.
+* <b>`idx`</b>: A `Tensor` of type `int32`. 1-D.
+