Diffstat (limited to 'tensorflow/g3doc/api_docs')
-rw-r--r--  tensorflow/g3doc/api_docs/cc/ClassEnv.md               146
-rw-r--r--  tensorflow/g3doc/api_docs/cc/ClassEnvWrapper.md        143
-rw-r--r--  tensorflow/g3doc/api_docs/cc/ClassRandomAccessFile.md   38
-rw-r--r--  tensorflow/g3doc/api_docs/cc/ClassSession.md            88
-rw-r--r--  tensorflow/g3doc/api_docs/cc/ClassStatus.md            107
-rw-r--r--  tensorflow/g3doc/api_docs/cc/ClassTensor.md            361
-rw-r--r--  tensorflow/g3doc/api_docs/cc/ClassTensorBuffer.md       52
-rw-r--r--  tensorflow/g3doc/api_docs/cc/ClassTensorShape.md       196
-rw-r--r--  tensorflow/g3doc/api_docs/cc/ClassTensorShapeIter.md    45
-rw-r--r--  tensorflow/g3doc/api_docs/cc/ClassTensorShapeUtils.md   81
-rw-r--r--  tensorflow/g3doc/api_docs/cc/ClassThread.md             25
-rw-r--r--  tensorflow/g3doc/api_docs/cc/ClassWritableFile.md       52
-rw-r--r--  tensorflow/g3doc/api_docs/cc/StructSessionOptions.md    49
-rw-r--r--  tensorflow/g3doc/api_docs/cc/StructState.md             24
-rw-r--r--  tensorflow/g3doc/api_docs/cc/StructTensorShapeDim.md    24
-rw-r--r--  tensorflow/g3doc/api_docs/cc/StructThreadOptions.md     26
-rw-r--r--  tensorflow/g3doc/api_docs/cc/index.md                   75
-rw-r--r--  tensorflow/g3doc/api_docs/index.md                      15
-rw-r--r--  tensorflow/g3doc/api_docs/python/array_ops.md         1025
-rw-r--r--  tensorflow/g3doc/api_docs/python/client.md             638
-rw-r--r--  tensorflow/g3doc/api_docs/python/constant_op.md        565
-rw-r--r--  tensorflow/g3doc/api_docs/python/control_flow_ops.md   590
-rw-r--r--  tensorflow/g3doc/api_docs/python/framework.md         2079
-rw-r--r--  tensorflow/g3doc/api_docs/python/image.md              857
-rw-r--r--  tensorflow/g3doc/api_docs/python/index.md              352
-rw-r--r--  tensorflow/g3doc/api_docs/python/io_ops.md            1956
-rw-r--r--  tensorflow/g3doc/api_docs/python/math_ops.md          1883
-rw-r--r--  tensorflow/g3doc/api_docs/python/nn.md                1306
-rw-r--r--  tensorflow/g3doc/api_docs/python/ops.md                 10
-rw-r--r--  tensorflow/g3doc/api_docs/python/python_io.md          104
-rw-r--r--  tensorflow/g3doc/api_docs/python/sparse_ops.md         502
-rw-r--r--  tensorflow/g3doc/api_docs/python/state_ops.md         1383
-rw-r--r--  tensorflow/g3doc/api_docs/python/train.md             1825
33 files changed, 16622 insertions, 0 deletions
diff --git a/tensorflow/g3doc/api_docs/cc/ClassEnv.md b/tensorflow/g3doc/api_docs/cc/ClassEnv.md
new file mode 100644
index 0000000000..0fdb3d32c7
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/cc/ClassEnv.md
@@ -0,0 +1,146 @@
+#Class tensorflow::Env
+
+An interface used by the tensorflow implementation to access operating system functionality such as the filesystem.
+
+Callers may wish to provide a custom Env object to get fine-grained control.
+
+All Env implementations are safe for concurrent access from multiple threads without any external synchronization.
+
+##Member Summary
+
+* [tensorflow::Env::Env](#tensorflow_Env_Env)
+* [virtual tensorflow::Env::~Env](#virtual_tensorflow_Env_Env)
+* [virtual Status tensorflow::Env::NewRandomAccessFile](#virtual_Status_tensorflow_Env_NewRandomAccessFile)
+ * Creates a brand new random access read-only file with the specified name.
+* [virtual Status tensorflow::Env::NewWritableFile](#virtual_Status_tensorflow_Env_NewWritableFile)
+ * Creates an object that writes to a new file with the specified name.
+* [virtual Status tensorflow::Env::NewAppendableFile](#virtual_Status_tensorflow_Env_NewAppendableFile)
+ * Creates an object that either appends to an existing file, or writes to a new file (if the file does not exist to begin with).
+* [virtual bool tensorflow::Env::FileExists](#virtual_bool_tensorflow_Env_FileExists)
+ * Returns true iff the named file exists.
+* [virtual Status tensorflow::Env::GetChildren](#virtual_Status_tensorflow_Env_GetChildren)
+ * Stores in *result the names of the children of the specified directory. The names are relative to "dir".
+* [virtual Status tensorflow::Env::DeleteFile](#virtual_Status_tensorflow_Env_DeleteFile)
+ * Deletes the named file.
+* [virtual Status tensorflow::Env::CreateDir](#virtual_Status_tensorflow_Env_CreateDir)
+ * Creates the specified directory.
+* [virtual Status tensorflow::Env::DeleteDir](#virtual_Status_tensorflow_Env_DeleteDir)
+ * Deletes the specified directory.
+* [virtual Status tensorflow::Env::GetFileSize](#virtual_Status_tensorflow_Env_GetFileSize)
+ * Stores the size of fname in *file_size.
+* [virtual Status tensorflow::Env::RenameFile](#virtual_Status_tensorflow_Env_RenameFile)
+ * Renames file src to target. If target already exists, it will be replaced.
+* [virtual uint64 tensorflow::Env::NowMicros](#virtual_uint64_tensorflow_Env_NowMicros)
+ * Returns the number of micro-seconds since some fixed point in time. Only useful for computing deltas of time.
+* [virtual void tensorflow::Env::SleepForMicroseconds](#virtual_void_tensorflow_Env_SleepForMicroseconds)
+ * Sleeps/delays the thread for the prescribed number of micro-seconds.
+* [virtual Thread* tensorflow::Env::StartThread](#virtual_Thread_tensorflow_Env_StartThread)
+ * Returns a new thread that is running fn() and is identified (for debugging/performance-analysis) by "name".
+* [static Env* tensorflow::Env::Default](#static_Env_tensorflow_Env_Default)
+ * Returns a default environment suitable for the current operating system.
+
+##Member Details
+
+#### tensorflow::Env::Env() {#tensorflow_Env_Env}
+
+
+
+
+
+#### virtual tensorflow::Env::~Env() {#virtual_tensorflow_Env_Env}
+
+
+
+
+
+#### virtual Status tensorflow::Env::NewRandomAccessFile(const string &fname, RandomAccessFile **result)=0 {#virtual_Status_tensorflow_Env_NewRandomAccessFile}
+
+Creates a brand new random access read-only file with the specified name.
+
+On success, stores a pointer to the new file in *result and returns OK. On failure stores NULL in *result and returns non-OK. If the file does not exist, returns a non-OK status.
+
+The returned file may be concurrently accessed by multiple threads.
+
+#### virtual Status tensorflow::Env::NewWritableFile(const string &fname, WritableFile **result)=0 {#virtual_Status_tensorflow_Env_NewWritableFile}
+
+Creates an object that writes to a new file with the specified name.
+
+Deletes any existing file with the same name and creates a new file. On success, stores a pointer to the new file in *result and returns OK. On failure stores NULL in *result and returns non-OK.
+
+The returned file will only be accessed by one thread at a time.
+
+#### virtual Status tensorflow::Env::NewAppendableFile(const string &fname, WritableFile **result)=0 {#virtual_Status_tensorflow_Env_NewAppendableFile}
+
+Creates an object that either appends to an existing file, or writes to a new file (if the file does not exist to begin with).
+
+On success, stores a pointer to the new file in *result and returns OK. On failure stores NULL in *result and returns non-OK.
+
+The returned file will only be accessed by one thread at a time.
+
+#### virtual bool tensorflow::Env::FileExists(const string &fname)=0 {#virtual_bool_tensorflow_Env_FileExists}
+
+Returns true iff the named file exists.
+
+
+
+#### virtual Status tensorflow::Env::GetChildren(const string &dir, std::vector< string > *result)=0 {#virtual_Status_tensorflow_Env_GetChildren}
+
+Stores in *result the names of the children of the specified directory. The names are relative to "dir".
+
+Original contents of *result are dropped.
+
+#### virtual Status tensorflow::Env::DeleteFile(const string &fname)=0 {#virtual_Status_tensorflow_Env_DeleteFile}
+
+Deletes the named file.
+
+
+
+#### virtual Status tensorflow::Env::CreateDir(const string &dirname)=0 {#virtual_Status_tensorflow_Env_CreateDir}
+
+Creates the specified directory.
+
+
+
+#### virtual Status tensorflow::Env::DeleteDir(const string &dirname)=0 {#virtual_Status_tensorflow_Env_DeleteDir}
+
+Deletes the specified directory.
+
+
+
+#### virtual Status tensorflow::Env::GetFileSize(const string &fname, uint64 *file_size)=0 {#virtual_Status_tensorflow_Env_GetFileSize}
+
+Stores the size of fname in *file_size.
+
+
+
+#### virtual Status tensorflow::Env::RenameFile(const string &src, const string &target)=0 {#virtual_Status_tensorflow_Env_RenameFile}
+
+Renames file src to target. If target already exists, it will be replaced.
+
+
+
+#### virtual uint64 tensorflow::Env::NowMicros()=0 {#virtual_uint64_tensorflow_Env_NowMicros}
+
+Returns the number of micro-seconds since some fixed point in time. Only useful for computing deltas of time.
+
+
+
+#### virtual void tensorflow::Env::SleepForMicroseconds(int micros)=0 {#virtual_void_tensorflow_Env_SleepForMicroseconds}
+
+Sleeps/delays the thread for the prescribed number of micro-seconds.
+
+
+
+#### virtual Thread* tensorflow::Env::StartThread(const ThreadOptions &thread_options, const string &name, std::function< void()> fn) TF_MUST_USE_RESULT=0 {#virtual_Thread_tensorflow_Env_StartThread}
+
+Returns a new thread that is running fn() and is identified (for debugging/performance-analysis) by "name".
+
+Caller takes ownership of the result and must delete it eventually (the deletion will block until fn() stops running).
+
+#### static Env* tensorflow::Env::Default() {#static_Env_tensorflow_Env_Default}
+
+Returns a default environment suitable for the current operating system.
+
+Sophisticated users may wish to provide their own Env implementation instead of relying on this default environment.
+
+The result of Default() belongs to this library and must never be deleted.
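+
+A minimal usage sketch combining Default() , NowMicros() and StartThread() (the thread name and lambda body are illustrative):
+
+    tensorflow::Env* env = tensorflow::Env::Default();
+    tensorflow::uint64 start = env->NowMicros();
+    std::unique_ptr<tensorflow::Thread> thread(env->StartThread(
+        tensorflow::ThreadOptions(), "worker", []() { /* do work */ }));
+    thread.reset();  // Deleting the Thread blocks until the lambda returns.
+    tensorflow::uint64 elapsed_micros = env->NowMicros() - start;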
diff --git a/tensorflow/g3doc/api_docs/cc/ClassEnvWrapper.md b/tensorflow/g3doc/api_docs/cc/ClassEnvWrapper.md
new file mode 100644
index 0000000000..2c6af82113
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/cc/ClassEnvWrapper.md
@@ -0,0 +1,143 @@
+#Class tensorflow::EnvWrapper
+
+An implementation of Env that forwards all calls to another Env .
+
+May be useful to clients who wish to override just part of the functionality of another Env .
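+
+For example, a sketch of a wrapper that overrides only NowMicros() and forwards every other call to the wrapped Env (the class name and fixed clock value are illustrative):
+
+    class FixedClockEnv : public tensorflow::EnvWrapper {
+     public:
+      explicit FixedClockEnv(tensorflow::Env* base)
+          : tensorflow::EnvWrapper(base) {}
+      tensorflow::uint64 NowMicros() override { return 1000000; }
+    };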
+
+##Member Summary
+
+* [tensorflow::EnvWrapper::EnvWrapper](#tensorflow_EnvWrapper_EnvWrapper)
+ * Initializes an EnvWrapper that delegates all calls to *t.
+* [virtual tensorflow::EnvWrapper::~EnvWrapper](#virtual_tensorflow_EnvWrapper_EnvWrapper)
+* [Env* tensorflow::EnvWrapper::target](#Env_tensorflow_EnvWrapper_target)
+ * Returns the target to which this Env forwards all calls.
+* [Status tensorflow::EnvWrapper::NewRandomAccessFile](#Status_tensorflow_EnvWrapper_NewRandomAccessFile)
+ * Creates a brand new random access read-only file with the specified name.
+* [Status tensorflow::EnvWrapper::NewWritableFile](#Status_tensorflow_EnvWrapper_NewWritableFile)
+ * Creates an object that writes to a new file with the specified name.
+* [Status tensorflow::EnvWrapper::NewAppendableFile](#Status_tensorflow_EnvWrapper_NewAppendableFile)
+ * Creates an object that either appends to an existing file, or writes to a new file (if the file does not exist to begin with).
+* [bool tensorflow::EnvWrapper::FileExists](#bool_tensorflow_EnvWrapper_FileExists)
+ * Returns true iff the named file exists.
+* [Status tensorflow::EnvWrapper::GetChildren](#Status_tensorflow_EnvWrapper_GetChildren)
+ * Stores in *result the names of the children of the specified directory. The names are relative to "dir".
+* [Status tensorflow::EnvWrapper::DeleteFile](#Status_tensorflow_EnvWrapper_DeleteFile)
+ * Deletes the named file.
+* [Status tensorflow::EnvWrapper::CreateDir](#Status_tensorflow_EnvWrapper_CreateDir)
+ * Creates the specified directory.
+* [Status tensorflow::EnvWrapper::DeleteDir](#Status_tensorflow_EnvWrapper_DeleteDir)
+ * Deletes the specified directory.
+* [Status tensorflow::EnvWrapper::GetFileSize](#Status_tensorflow_EnvWrapper_GetFileSize)
+ * Stores the size of fname in *file_size.
+* [Status tensorflow::EnvWrapper::RenameFile](#Status_tensorflow_EnvWrapper_RenameFile)
+ * Renames file src to target. If target already exists, it will be replaced.
+* [uint64 tensorflow::EnvWrapper::NowMicros](#uint64_tensorflow_EnvWrapper_NowMicros)
+ * Returns the number of micro-seconds since some fixed point in time. Only useful for computing deltas of time.
+* [void tensorflow::EnvWrapper::SleepForMicroseconds](#void_tensorflow_EnvWrapper_SleepForMicroseconds)
+ * Sleeps/delays the thread for the prescribed number of micro-seconds.
+* [Thread* tensorflow::EnvWrapper::StartThread](#Thread_tensorflow_EnvWrapper_StartThread)
+ * Returns a new thread that is running fn() and is identified (for debugging/performance-analysis) by "name".
+
+##Member Details
+
+#### tensorflow::EnvWrapper::EnvWrapper(Env *t) {#tensorflow_EnvWrapper_EnvWrapper}
+
+Initializes an EnvWrapper that delegates all calls to *t.
+
+
+
+#### virtual tensorflow::EnvWrapper::~EnvWrapper() {#virtual_tensorflow_EnvWrapper_EnvWrapper}
+
+
+
+
+
+#### Env* tensorflow::EnvWrapper::target() const {#Env_tensorflow_EnvWrapper_target}
+
+Returns the target to which this Env forwards all calls.
+
+
+
+#### Status tensorflow::EnvWrapper::NewRandomAccessFile(const string &f, RandomAccessFile **r) override {#Status_tensorflow_EnvWrapper_NewRandomAccessFile}
+
+Creates a brand new random access read-only file with the specified name.
+
+On success, stores a pointer to the new file in *result and returns OK. On failure stores NULL in *result and returns non-OK. If the file does not exist, returns a non-OK status.
+
+The returned file may be concurrently accessed by multiple threads.
+
+#### Status tensorflow::EnvWrapper::NewWritableFile(const string &f, WritableFile **r) override {#Status_tensorflow_EnvWrapper_NewWritableFile}
+
+Creates an object that writes to a new file with the specified name.
+
+Deletes any existing file with the same name and creates a new file. On success, stores a pointer to the new file in *result and returns OK. On failure stores NULL in *result and returns non-OK.
+
+The returned file will only be accessed by one thread at a time.
+
+#### Status tensorflow::EnvWrapper::NewAppendableFile(const string &f, WritableFile **r) override {#Status_tensorflow_EnvWrapper_NewAppendableFile}
+
+Creates an object that either appends to an existing file, or writes to a new file (if the file does not exist to begin with).
+
+On success, stores a pointer to the new file in *result and returns OK. On failure stores NULL in *result and returns non-OK.
+
+The returned file will only be accessed by one thread at a time.
+
+#### bool tensorflow::EnvWrapper::FileExists(const string &f) override {#bool_tensorflow_EnvWrapper_FileExists}
+
+Returns true iff the named file exists.
+
+
+
+#### Status tensorflow::EnvWrapper::GetChildren(const string &dir, std::vector< string > *r) override {#Status_tensorflow_EnvWrapper_GetChildren}
+
+Stores in *result the names of the children of the specified directory. The names are relative to "dir".
+
+Original contents of *result are dropped.
+
+#### Status tensorflow::EnvWrapper::DeleteFile(const string &f) override {#Status_tensorflow_EnvWrapper_DeleteFile}
+
+Deletes the named file.
+
+
+
+#### Status tensorflow::EnvWrapper::CreateDir(const string &d) override {#Status_tensorflow_EnvWrapper_CreateDir}
+
+Creates the specified directory.
+
+
+
+#### Status tensorflow::EnvWrapper::DeleteDir(const string &d) override {#Status_tensorflow_EnvWrapper_DeleteDir}
+
+Deletes the specified directory.
+
+
+
+#### Status tensorflow::EnvWrapper::GetFileSize(const string &f, uint64 *s) override {#Status_tensorflow_EnvWrapper_GetFileSize}
+
+Stores the size of fname in *file_size.
+
+
+
+#### Status tensorflow::EnvWrapper::RenameFile(const string &s, const string &t) override {#Status_tensorflow_EnvWrapper_RenameFile}
+
+Renames file src to target. If target already exists, it will be replaced.
+
+
+
+#### uint64 tensorflow::EnvWrapper::NowMicros() override {#uint64_tensorflow_EnvWrapper_NowMicros}
+
+Returns the number of micro-seconds since some fixed point in time. Only useful for computing deltas of time.
+
+
+
+#### void tensorflow::EnvWrapper::SleepForMicroseconds(int micros) override {#void_tensorflow_EnvWrapper_SleepForMicroseconds}
+
+Sleeps/delays the thread for the prescribed number of micro-seconds.
+
+
+
+#### Thread* tensorflow::EnvWrapper::StartThread(const ThreadOptions &thread_options, const string &name, std::function< void()> fn) override {#Thread_tensorflow_EnvWrapper_StartThread}
+
+Returns a new thread that is running fn() and is identified (for debugging/performance-analysis) by "name".
+
+Caller takes ownership of the result and must delete it eventually (the deletion will block until fn() stops running).
diff --git a/tensorflow/g3doc/api_docs/cc/ClassRandomAccessFile.md b/tensorflow/g3doc/api_docs/cc/ClassRandomAccessFile.md
new file mode 100644
index 0000000000..3538c2ca11
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/cc/ClassRandomAccessFile.md
@@ -0,0 +1,38 @@
+#Class tensorflow::RandomAccessFile
+
+A file abstraction for randomly reading the contents of a file.
+
+
+
+##Member Summary
+
+* [tensorflow::RandomAccessFile::RandomAccessFile](#tensorflow_RandomAccessFile_RandomAccessFile)
+* [virtual tensorflow::RandomAccessFile::~RandomAccessFile](#virtual_tensorflow_RandomAccessFile_RandomAccessFile)
+* [virtual Status tensorflow::RandomAccessFile::Read](#virtual_Status_tensorflow_RandomAccessFile_Read)
+ * Reads up to "n" bytes from the file starting at "offset".
+
+##Member Details
+
+#### tensorflow::RandomAccessFile::RandomAccessFile() {#tensorflow_RandomAccessFile_RandomAccessFile}
+
+
+
+
+
+#### virtual tensorflow::RandomAccessFile::~RandomAccessFile() {#virtual_tensorflow_RandomAccessFile_RandomAccessFile}
+
+
+
+
+
+#### virtual Status tensorflow::RandomAccessFile::Read(uint64 offset, size_t n, StringPiece *result, char *scratch) const =0 {#virtual_Status_tensorflow_RandomAccessFile_Read}
+
+Reads up to "n" bytes from the file starting at "offset".
+
+"scratch[0..n-1]" may be written by this routine. Sets "*result" to the data that was read (including if fewer than "n" bytes were successfully read). May set "*result" to point at data in "scratch[0..n-1]", so "scratch[0..n-1]" must be live when "*result" is used.
+
+On OK returned status: "n" bytes have been stored in "*result". On non-OK returned status: [0..n] bytes have been stored in "*result".
+
+Returns OUT_OF_RANGE if fewer than n bytes were stored in "*result" because of EOF.
+
+Safe for concurrent use by multiple threads.
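+
+A reading sketch, assuming "file" was obtained from Env::NewRandomAccessFile (the buffer size is illustrative and ConsumeBytes is a hypothetical caller-side helper):
+
+    char scratch[4096];
+    tensorflow::StringPiece data;
+    tensorflow::Status s = file->Read(0, sizeof(scratch), &data, scratch);
+    if (s.ok() || s.code() == tensorflow::error::OUT_OF_RANGE) {
+      // "data" may point into "scratch", so consume it while scratch is live.
+      ConsumeBytes(data);
+    }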
diff --git a/tensorflow/g3doc/api_docs/cc/ClassSession.md b/tensorflow/g3doc/api_docs/cc/ClassSession.md
new file mode 100644
index 0000000000..f2f9d8f762
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/cc/ClassSession.md
@@ -0,0 +1,88 @@
+#Class tensorflow::Session
+
+A Session instance lets a caller drive a TensorFlow graph computation.
+
+When a Session is created with a given target, a new Session object is bound to the universe of resources specified by that target. Those resources are available to this session to perform computation described in the GraphDef. After extending the session with a graph, the caller uses the Run() API to perform the computation and potentially fetch outputs as Tensors.
+
+Example:
+
+    tensorflow::GraphDef graph;
+    // ... Create or load graph into "graph".
+
+    // This example uses the default options which connect
+    // to a local runtime.
+    tensorflow::SessionOptions options;
+    std::unique_ptr<tensorflow::Session>
+        session(tensorflow::NewSession(options));
+
+    // Create the session with this graph.
+    tensorflow::Status s = session->Create(graph);
+    if (!s.ok()) { ... }
+
+    // Run the graph and fetch the first output of the "output"
+    // operation, and also run to but do not return anything
+    // for the "update_state" operation.
+    std::vector<tensorflow::Tensor> outputs;
+    s = session->Run({}, {"output:0"}, {"update_state"}, &outputs);
+    if (!s.ok()) { ... }
+
+    // Map the output as a flattened float tensor, and do something
+    // with it.
+    auto output_tensor = outputs[0].flat<float>();
+    if (output_tensor(0) > 0.5) { ... }
+
+    // Close the session to release the resources associated with
+    // this session.
+    session->Close();
+
+A Session allows concurrent calls to Run() , though a Session must be created / extended by a single thread.
+
+Close() must be called by only one thread, and it must only be called after all other calls to Run() have returned.
+
+##Member Summary
+
+* [virtual Status tensorflow::Session::Create](#virtual_Status_tensorflow_Session_Create)
+ * Create the graph to be used for the session.
+* [virtual Status tensorflow::Session::Extend](#virtual_Status_tensorflow_Session_Extend)
+ * Adds operations to the graph that is already registered with the Session .
+* [virtual Status tensorflow::Session::Run](#virtual_Status_tensorflow_Session_Run)
+ * Runs the graph with the provided input tensors and fills 'outputs' for the endpoints specified in 'output_tensor_names'. Runs to but does not return Tensors for the nodes in 'target_node_names'.
+* [virtual Status tensorflow::Session::Close](#virtual_Status_tensorflow_Session_Close)
+ * Closes this session.
+* [virtual tensorflow::Session::~Session](#virtual_tensorflow_Session_Session)
+
+##Member Details
+
+#### virtual Status tensorflow::Session::Create(const GraphDef &graph)=0 {#virtual_Status_tensorflow_Session_Create}
+
+Create the graph to be used for the session.
+
+Returns an error if this session has already been created with a graph. To re-use the session with a different graph, the caller must Close() the session first.
+
+#### virtual Status tensorflow::Session::Extend(const GraphDef &graph)=0 {#virtual_Status_tensorflow_Session_Extend}
+
+Adds operations to the graph that is already registered with the Session .
+
+The names of new operations in "graph" must not exist in the graph that is already registered.
+
+#### virtual Status tensorflow::Session::Run(const std::vector< std::pair< string, Tensor > > &inputs, const std::vector< string > &output_tensor_names, const std::vector< string > &target_node_names, std::vector< Tensor > *outputs)=0 {#virtual_Status_tensorflow_Session_Run}
+
+Runs the graph with the provided input tensors and fills 'outputs' for the endpoints specified in 'output_tensor_names'. Runs to but does not return Tensors for the nodes in 'target_node_names'.
+
+The order of tensors in 'outputs' will match the order provided by 'output_tensor_names'.
+
+If Run returns OK(), then outputs->size() will be equal to output_tensor_names.size(). If Run does not return OK(), the state of outputs is undefined.
+
+REQUIRES: The name of each Tensor of the input or output must match a "Tensor endpoint" in the GraphDef passed to Create() .
+
+REQUIRES: outputs is not nullptr if output_tensor_names is non-empty.
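+
+For example, a sketch that feeds one input tensor and fetches one output, assuming "session" was created as in the example above (the endpoint names "x" and "y:0" are hypothetical):
+
+    tensorflow::Tensor x(tensorflow::DT_FLOAT, tensorflow::TensorShape({2}));
+    x.vec<float>()(0) = 1.0f;
+    x.vec<float>()(1) = 2.0f;
+    std::vector<tensorflow::Tensor> outputs;
+    tensorflow::Status s = session->Run({{"x", x}}, {"y:0"}, {}, &outputs);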
+
+#### virtual Status tensorflow::Session::Close()=0 {#virtual_Status_tensorflow_Session_Close}
+
+Closes this session.
+
+Closing a session releases the resources used by this session on the TensorFlow runtime (specified during session creation by the 'SessionOptions::target' field).
+
+#### virtual tensorflow::Session::~Session() {#virtual_tensorflow_Session_Session}
+
+
+
+
diff --git a/tensorflow/g3doc/api_docs/cc/ClassStatus.md b/tensorflow/g3doc/api_docs/cc/ClassStatus.md
new file mode 100644
index 0000000000..d5ef48b14d
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/cc/ClassStatus.md
@@ -0,0 +1,107 @@
+#Class tensorflow::Status
+
+
+
+
+
+##Member Summary
+
+* [tensorflow::Status::Status](#tensorflow_Status_Status)
+ * Create a success status.
+* [tensorflow::Status::~Status](#tensorflow_Status_Status)
+* [tensorflow::Status::Status](#tensorflow_Status_Status)
+ * Create a status with the specified error code and msg as a human-readable string containing more detailed information.
+* [tensorflow::Status::Status](#tensorflow_Status_Status)
+ * Copy the specified status.
+* [void tensorflow::Status::operator=](#void_tensorflow_Status_operator_)
+* [bool tensorflow::Status::ok](#bool_tensorflow_Status_ok)
+ * Returns true iff the status indicates success.
+* [tensorflow::error::Code tensorflow::Status::code](#tensorflow_error_Code_tensorflow_Status_code)
+* [const string& tensorflow::Status::error_message](#const_string_amp_tensorflow_Status_error_message)
+* [bool tensorflow::Status::operator==](#bool_tensorflow_Status_operator_)
+* [bool tensorflow::Status::operator!=](#bool_tensorflow_Status_operator_)
+* [void tensorflow::Status::Update](#void_tensorflow_Status_Update)
+ * If "ok()", stores "new_status" into *this. If "!ok()", preserves the current status, but may augment with additional information about "new_status".
+* [string tensorflow::Status::ToString](#string_tensorflow_Status_ToString)
+ * Return a string representation of this status suitable for printing. Returns the string "OK" for success.
+* [static Status tensorflow::Status::OK](#static_Status_tensorflow_Status_OK)
+
+##Member Details
+
+#### tensorflow::Status::Status() {#tensorflow_Status_Status}
+
+Create a success status.
+
+
+
+#### tensorflow::Status::~Status() {#tensorflow_Status_Status}
+
+
+
+
+
+#### tensorflow::Status::Status(tensorflow::error::Code code, tensorflow::StringPiece msg) {#tensorflow_Status_Status}
+
+Create a status with the specified error code and msg as a human-readable string containing more detailed information.
+
+
+
+#### tensorflow::Status::Status(const Status &s) {#tensorflow_Status_Status}
+
+Copy the specified status.
+
+
+
+#### void tensorflow::Status::operator=(const Status &s) {#void_tensorflow_Status_operator_}
+
+
+
+
+
+#### bool tensorflow::Status::ok() const {#bool_tensorflow_Status_ok}
+
+Returns true iff the status indicates success.
+
+
+
+#### tensorflow::error::Code tensorflow::Status::code() const {#tensorflow_error_Code_tensorflow_Status_code}
+
+
+
+
+
+#### const string& tensorflow::Status::error_message() const {#const_string_amp_tensorflow_Status_error_message}
+
+
+
+
+
+#### bool tensorflow::Status::operator==(const Status &x) const {#bool_tensorflow_Status_operator_}
+
+
+
+
+
+#### bool tensorflow::Status::operator!=(const Status &x) const {#bool_tensorflow_Status_operator_}
+
+
+
+
+
+#### void tensorflow::Status::Update(const Status &new_status) {#void_tensorflow_Status_Update}
+
+If "ok()", stores "new_status" into *this. If "!ok()", preserves the current status, but may augment with additional information about "new_status".
+
+A convenient way of keeping track of the first error encountered. Instead of:
+
+    if (overall_status.ok()) overall_status = new_status;
+
+use:
+
+    overall_status.Update(new_status);
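+
+For instance, retaining the first failure across several operations (ValidateFile and "filenames" are hypothetical):
+
+    tensorflow::Status overall_status;
+    for (const std::string& fname : filenames) {
+      overall_status.Update(ValidateFile(fname));
+    }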
+
+#### string tensorflow::Status::ToString() const {#string_tensorflow_Status_ToString}
+
+Return a string representation of this status suitable for printing. Returns the string "OK" for success.
+
+
+
+#### static Status tensorflow::Status::OK() {#static_Status_tensorflow_Status_OK}
+
+
+
+
diff --git a/tensorflow/g3doc/api_docs/cc/ClassTensor.md b/tensorflow/g3doc/api_docs/cc/ClassTensor.md
new file mode 100644
index 0000000000..7ecc7688f3
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/cc/ClassTensor.md
@@ -0,0 +1,361 @@
+#Class tensorflow::Tensor
+
+Represents an n-dimensional array of values.
+
+
+
+##Member Summary
+
+* [tensorflow::Tensor::Tensor](#tensorflow_Tensor_Tensor)
+ * Default Tensor constructor. Creates a 1-dimension, 0-element float tensor.
+* [tensorflow::Tensor::Tensor](#tensorflow_Tensor_Tensor)
+ * Creates a Tensor of the given datatype and shape.
+* [tensorflow::Tensor::Tensor](#tensorflow_Tensor_Tensor)
+ * Creates a tensor with the input datatype and shape, using the allocator 'a' to allocate the underlying buffer.
+* [tensorflow::Tensor::Tensor](#tensorflow_Tensor_Tensor)
+ * Creates an uninitialized Tensor of the given data type.
+* [tensorflow::Tensor::Tensor](#tensorflow_Tensor_Tensor)
+  * Copy constructor.
+* [tensorflow::Tensor::~Tensor](#tensorflow_Tensor_Tensor)
+* [DataType tensorflow::Tensor::dtype](#DataType_tensorflow_Tensor_dtype)
+ * Returns the data type.
+* [const TensorShape& tensorflow::Tensor::shape](#const_TensorShape_amp_tensorflow_Tensor_shape)
+ * Returns the shape of the tensor.
+* [int tensorflow::Tensor::dims](#int_tensorflow_Tensor_dims)
+ * Convenience accessor for the tensor shape.
+* [int64 tensorflow::Tensor::dim_size](#int64_tensorflow_Tensor_dim_size)
+ * Convenience accessor for the tensor shape.
+* [int64 tensorflow::Tensor::NumElements](#int64_tensorflow_Tensor_NumElements)
+ * Convenience accessor for the tensor shape.
+* [bool tensorflow::Tensor::IsSameSize](#bool_tensorflow_Tensor_IsSameSize)
+* [bool tensorflow::Tensor::IsInitialized](#bool_tensorflow_Tensor_IsInitialized)
+ * Has this Tensor been initialized?
+* [size_t tensorflow::Tensor::TotalBytes](#size_t_tensorflow_Tensor_TotalBytes)
+ * Returns the estimated memory usage of this tensor.
+* [Tensor& tensorflow::Tensor::operator=](#Tensor_amp_tensorflow_Tensor_operator_)
+ * Assign operator. This tensor shares other's underlying storage.
+* [bool tensorflow::Tensor::CopyFrom](#bool_tensorflow_Tensor_CopyFrom)
+ * Copy the other tensor into this tensor and reshape it.
+* [Tensor tensorflow::Tensor::Slice](#Tensor_tensorflow_Tensor_Slice)
+ * Slice this tensor along the 1st dimension.
+* [bool tensorflow::Tensor::FromProto](#bool_tensorflow_Tensor_FromProto)
+ * Parse "other' and construct the tensor.
+* [bool tensorflow::Tensor::FromProto](#bool_tensorflow_Tensor_FromProto)
+* [void tensorflow::Tensor::AsProtoField](#void_tensorflow_Tensor_AsProtoField)
+ * Fills in "proto" with "*this" tensor's content.
+* [void tensorflow::Tensor::AsProtoTensorContent](#void_tensorflow_Tensor_AsProtoTensorContent)
+* [TTypes<T>::Vec tensorflow::Tensor::vec](#TTypes_lt_T_gt_Vec_tensorflow_Tensor_vec)
+ * Return the Tensor data as an Eigen::Tensor with the type and sizes of this Tensor .
+* [TTypes<T>::Matrix tensorflow::Tensor::matrix](#TTypes_lt_T_gt_Matrix_tensorflow_Tensor_matrix)
+* [TTypes< T, NDIMS >::Tensor tensorflow::Tensor::tensor](#TTypes_lt_T_NDIMS_gt_Tensor_tensorflow_Tensor_tensor)
+* [TTypes<T>::Flat tensorflow::Tensor::flat](#TTypes_lt_T_gt_Flat_tensorflow_Tensor_flat)
+ * Return the Tensor data as an Eigen::Tensor of the data type and a specified shape.
+* [TTypes<T>::UnalignedFlat tensorflow::Tensor::unaligned_flat](#TTypes_lt_T_gt_UnalignedFlat_tensorflow_Tensor_unaligned_flat)
+* [TTypes<T>::Matrix tensorflow::Tensor::flat_inner_dims](#TTypes_lt_T_gt_Matrix_tensorflow_Tensor_flat_inner_dims)
+* [TTypes<T>::Matrix tensorflow::Tensor::flat_outer_dims](#TTypes_lt_T_gt_Matrix_tensorflow_Tensor_flat_outer_dims)
+* [TTypes< T, NDIMS >::Tensor tensorflow::Tensor::shaped](#TTypes_lt_T_NDIMS_gt_Tensor_tensorflow_Tensor_shaped)
+* [TTypes< T, NDIMS >::UnalignedTensor tensorflow::Tensor::unaligned_shaped](#TTypes_lt_T_NDIMS_gt_UnalignedTensor_tensorflow_Tensor_unaligned_shaped)
+* [TTypes< T >::Scalar tensorflow::Tensor::scalar](#TTypes_lt_T_gt_Scalar_tensorflow_Tensor_scalar)
+ * Return the Tensor data as a Tensor Map of fixed size 1: TensorMap<TensorFixedSize<T, 1>>.
+* [TTypes<T>::ConstVec tensorflow::Tensor::vec](#TTypes_lt_T_gt_ConstVec_tensorflow_Tensor_vec)
+ * Const versions of all the methods above.
+* [TTypes<T>::ConstMatrix tensorflow::Tensor::matrix](#TTypes_lt_T_gt_ConstMatrix_tensorflow_Tensor_matrix)
+* [TTypes< T, NDIMS >::ConstTensor tensorflow::Tensor::tensor](#TTypes_lt_T_NDIMS_gt_ConstTensor_tensorflow_Tensor_tensor)
+* [TTypes<T>::ConstFlat tensorflow::Tensor::flat](#TTypes_lt_T_gt_ConstFlat_tensorflow_Tensor_flat)
+* [TTypes<T>::ConstUnalignedFlat tensorflow::Tensor::unaligned_flat](#TTypes_lt_T_gt_ConstUnalignedFlat_tensorflow_Tensor_unaligned_flat)
+* [TTypes<T>::ConstMatrix tensorflow::Tensor::flat_inner_dims](#TTypes_lt_T_gt_ConstMatrix_tensorflow_Tensor_flat_inner_dims)
+* [TTypes<T>::ConstMatrix tensorflow::Tensor::flat_outer_dims](#TTypes_lt_T_gt_ConstMatrix_tensorflow_Tensor_flat_outer_dims)
+* [TTypes< T, NDIMS >::ConstTensor tensorflow::Tensor::shaped](#TTypes_lt_T_NDIMS_gt_ConstTensor_tensorflow_Tensor_shaped)
+* [TTypes< T, NDIMS >::ConstUnalignedTensor tensorflow::Tensor::unaligned_shaped](#TTypes_lt_T_NDIMS_gt_ConstUnalignedTensor_tensorflow_Tensor_unaligned_shaped)
+* [TTypes< T >::ConstScalar tensorflow::Tensor::scalar](#TTypes_lt_T_gt_ConstScalar_tensorflow_Tensor_scalar)
+* [string tensorflow::Tensor::SummarizeValue](#string_tensorflow_Tensor_SummarizeValue)
+ * Render the first max_entries values in *this into a string.
+* [string tensorflow::Tensor::DebugString](#string_tensorflow_Tensor_DebugString)
+ * A human-readable summary of the Tensor suitable for debugging.
+* [void tensorflow::Tensor::FillDescription](#void_tensorflow_Tensor_FillDescription)
+* [StringPiece tensorflow::Tensor::tensor_data](#StringPiece_tensorflow_Tensor_tensor_data)
+ * Returns a StringPiece mapping the current tensor's buffer.
+
+##Member Details
+
+#### tensorflow::Tensor::Tensor() {#tensorflow_Tensor_Tensor}
+
+Default Tensor constructor. Creates a 1-dimension, 0-element float tensor.
+
+
+
+#### tensorflow::Tensor::Tensor(DataType type, const TensorShape &shape) {#tensorflow_Tensor_Tensor}
+
+Creates a Tensor of the given datatype and shape.
+
+The underlying buffer is allocated using a CPUAllocator.
+
+#### tensorflow::Tensor::Tensor(Allocator *a, DataType type, const TensorShape &shape) {#tensorflow_Tensor_Tensor}
+
+Creates a tensor with the input datatype and shape, using the allocator 'a' to allocate the underlying buffer.
+
+'a' must outlive the lifetime of this Tensor .
+
+#### tensorflow::Tensor::Tensor(DataType type) {#tensorflow_Tensor_Tensor}
+
+Creates an uninitialized Tensor of the given data type.
+
+
+
+#### tensorflow::Tensor::Tensor(const Tensor &other) {#tensorflow_Tensor_Tensor}
+
+Copy constructor.
+
+#### tensorflow::Tensor::~Tensor() {#tensorflow_Tensor_Tensor}
+
+
+
+
+#### DataType tensorflow::Tensor::dtype() const {#DataType_tensorflow_Tensor_dtype}
+
+Returns the data type.
+
+
+
+#### const TensorShape& tensorflow::Tensor::shape() const {#const_TensorShape_amp_tensorflow_Tensor_shape}
+
+Returns the shape of the tensor.
+
+
+
+#### int tensorflow::Tensor::dims() const {#int_tensorflow_Tensor_dims}
+
+Convenience accessor for the tensor shape.
+
+For all shape accessors, see comments for relevant methods of TensorShape in tensor_shape.h .
+
+#### int64 tensorflow::Tensor::dim_size(int d) const {#int64_tensorflow_Tensor_dim_size}
+
+Convenience accessor for the tensor shape.
+
+
+
+#### int64 tensorflow::Tensor::NumElements() const {#int64_tensorflow_Tensor_NumElements}
+
+Convenience accessor for the tensor shape.
+
+
+
+#### bool tensorflow::Tensor::IsSameSize(const Tensor &b) const {#bool_tensorflow_Tensor_IsSameSize}
+
+
+
+
+
+#### bool tensorflow::Tensor::IsInitialized() const {#bool_tensorflow_Tensor_IsInitialized}
+
+Has this Tensor been initialized?
+
+
+
+#### size_t tensorflow::Tensor::TotalBytes() const {#size_t_tensorflow_Tensor_TotalBytes}
+
+Returns the estimated memory usage of this tensor.
+
+
+
+#### Tensor& tensorflow::Tensor::operator=(const Tensor &other) {#Tensor_amp_tensorflow_Tensor_operator_}
+
+Assign operator. This tensor shares other's underlying storage.
+
+
+
+#### bool tensorflow::Tensor::CopyFrom(const Tensor &other, const TensorShape &shape) TF_MUST_USE_RESULT {#bool_tensorflow_Tensor_CopyFrom}
+
+Copy the other tensor into this tensor and reshape it.
+
+This tensor shares other's underlying storage. Returns true iff other.shape() has the same number of elements as the given "shape".
+
+#### Tensor tensorflow::Tensor::Slice(int64 dim0_start, int64 dim0_limit) const {#Tensor_tensorflow_Tensor_Slice}
+
+Slice this tensor along the 1st dimension.
+
+I.e., the returned tensor satisfies returned[i, ...] == this[dim0_start + i, ...]. The returned tensor shares the underlying tensor buffer with this tensor.
+
+NOTE: The returned tensor may not satisfy the same alignment requirements as this tensor, depending on the shape. The caller must check the returned tensor's alignment before calling certain methods that have alignment requirements (e.g., flat() , tensor()).
+
+REQUIRES: dims() >= 1 REQUIRES: 0 <= dim0_start <= dim0_limit <= dim_size(0)
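+
+For example, taking two rows of a 10 x 5 matrix (the shape is illustrative):
+
+    tensorflow::Tensor t(tensorflow::DT_FLOAT, tensorflow::TensorShape({10, 5}));
+    tensorflow::Tensor rows = t.Slice(2, 4);  // Shape {2, 5}; shares t's buffer.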
+
+#### bool tensorflow::Tensor::FromProto(const TensorProto &other) TF_MUST_USE_RESULT {#bool_tensorflow_Tensor_FromProto}
+
+Parse "other' and construct the tensor.
+
+Returns true iff the parsing succeeds. If the parsing fails, the state of "*this" is unchanged.
+
+#### bool tensorflow::Tensor::FromProto(Allocator *a, const TensorProto &other) TF_MUST_USE_RESULT {#bool_tensorflow_Tensor_FromProto}
+
+
+
+
+
+#### void tensorflow::Tensor::AsProtoField(TensorProto *proto) const {#void_tensorflow_Tensor_AsProtoField}
+
+Fills in "proto" with "*this" tensor's content.
+
+AsProtoField() fills in the repeated field for proto.dtype(), while AsProtoTensorContent() encodes the content in proto.tensor_content() in a compact form.
+
+#### void tensorflow::Tensor::AsProtoTensorContent(TensorProto *proto) const {#void_tensorflow_Tensor_AsProtoTensorContent}
+
+
+
+
+
+#### TTypes<T>::Vec tensorflow::Tensor::vec() {#TTypes_lt_T_gt_Vec_tensorflow_Tensor_vec}
+
+Return the Tensor data as an Eigen::Tensor with the type and sizes of this Tensor .
+
+Use these methods when you know the data type and the number of dimensions of the Tensor and you want an Eigen::Tensor automatically sized to the Tensor sizes. The implementation CHECK-fails if either the type or the sizes do not match.
+
+Example:
+
+    typedef float T;
+    Tensor my_mat(...built with Shape{rows: 3, cols: 5}...);
+    auto mat = my_mat.matrix<T>();    // 2D Eigen::Tensor, 3 x 5.
+    auto mat = my_mat.tensor<T, 2>(); // 2D Eigen::Tensor, 3 x 5.
+    auto vec = my_mat.vec<T>();       // CHECK fails as my_mat is 2D.
+    auto vec = my_mat.tensor<T, 3>(); // CHECK fails as my_mat is 2D.
+    auto mat = my_mat.matrix<int32>();// CHECK fails as type mismatch.
+
+#### TTypes<T>::Matrix tensorflow::Tensor::matrix() {#TTypes_lt_T_gt_Matrix_tensorflow_Tensor_matrix}
+
+
+
+
+
+#### TTypes< T, NDIMS >::Tensor tensorflow::Tensor::tensor() {#TTypes_lt_T_NDIMS_gt_Tensor_tensorflow_Tensor_tensor}
+
+
+
+
+
+#### TTypes<T>::Flat tensorflow::Tensor::flat() {#TTypes_lt_T_gt_Flat_tensorflow_Tensor_flat}
+
+Return the Tensor data as an Eigen::Tensor of the data type and a specified shape.
+
+These methods allow you to access the data with the dimensions and sizes of your choice. You do not need to know the number of dimensions of the Tensor to call them. However, they CHECK that the type matches and that the requested dimensions create an Eigen::Tensor with the same number of elements as the Tensor .
+
+Example:
+
+    typedef float T;
+    Tensor my_ten(...built with Shape{planes: 4, rows: 3, cols: 5}...);
+    // 1D Eigen::Tensor, size 60:
+    auto flat = my_ten.flat<T>();
+    // 2D Eigen::Tensor 12 x 5:
+    auto inner = my_ten.flat_inner_dims<T>();
+    // 2D Eigen::Tensor 4 x 15:
+    auto outer = my_ten.shaped<T, 2>({4, 15});
+    // CHECK fails, bad num elements:
+    auto outer = my_ten.shaped<T, 2>({4, 8});
+    // 3D Eigen::Tensor 6 x 5 x 2:
+    auto weird = my_ten.shaped<T, 3>({6, 5, 2});
+    // CHECK fails, type mismatch:
+    auto bad = my_ten.flat<int32>();
+
+#### TTypes<T>::UnalignedFlat tensorflow::Tensor::unaligned_flat() {#TTypes_lt_T_gt_UnalignedFlat_tensorflow_Tensor_unaligned_flat}
+
+
+
+
+
+#### TTypes<T>::Matrix tensorflow::Tensor::flat_inner_dims() {#TTypes_lt_T_gt_Matrix_tensorflow_Tensor_flat_inner_dims}
+
+
+
+Returns the data as an Eigen::Tensor with 2 dimensions, collapsing all Tensor dimensions but the last one into the first dimension of the result.
+
+#### TTypes<T>::Matrix tensorflow::Tensor::flat_outer_dims() {#TTypes_lt_T_gt_Matrix_tensorflow_Tensor_flat_outer_dims}
+
+
+
+Returns the data as an Eigen::Tensor with 2 dimensions, collapsing all Tensor dimensions but the first one into the last dimension of the result.
+
+#### TTypes< T, NDIMS >::Tensor tensorflow::Tensor::shaped(gtl::ArraySlice< int64 > new_sizes) {#TTypes_lt_T_NDIMS_gt_Tensor_tensorflow_Tensor_shaped}
+
+
+
+
+
+#### TTypes< T, NDIMS >::UnalignedTensor tensorflow::Tensor::unaligned_shaped(gtl::ArraySlice< int64 > new_sizes) {#TTypes_lt_T_NDIMS_gt_UnalignedTensor_tensorflow_Tensor_unaligned_shaped}
+
+
+
+
+
+#### TTypes< T >::Scalar tensorflow::Tensor::scalar() {#TTypes_lt_T_gt_Scalar_tensorflow_Tensor_scalar}
+
+Return the Tensor data as a Tensor Map of fixed size 1: TensorMap<TensorFixedSize<T, 1>>.
+
+Using scalar() allows the compiler to perform optimizations as the size of the tensor is known at compile time.
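+
+For example (a sketch; the value is illustrative):
+
+    tensorflow::Tensor loss(tensorflow::DT_FLOAT, tensorflow::TensorShape());
+    loss.scalar<float>()() = 0.5f;
+    float v = loss.scalar<float>()();  // v == 0.5f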
+
+#### TTypes<T>::ConstVec tensorflow::Tensor::vec() const {#TTypes_lt_T_gt_ConstVec_tensorflow_Tensor_vec}
+
+Const versions of all the methods above.
+
+
+
+#### TTypes<T>::ConstMatrix tensorflow::Tensor::matrix() const {#TTypes_lt_T_gt_ConstMatrix_tensorflow_Tensor_matrix}
+
+
+
+
+
+#### TTypes< T, NDIMS >::ConstTensor tensorflow::Tensor::tensor() const {#TTypes_lt_T_NDIMS_gt_ConstTensor_tensorflow_Tensor_tensor}
+
+
+
+
+
+#### TTypes<T>::ConstFlat tensorflow::Tensor::flat() const {#TTypes_lt_T_gt_ConstFlat_tensorflow_Tensor_flat}
+
+
+
+
+
+#### TTypes<T>::ConstUnalignedFlat tensorflow::Tensor::unaligned_flat() const {#TTypes_lt_T_gt_ConstUnalignedFlat_tensorflow_Tensor_unaligned_flat}
+
+
+
+
+
+#### TTypes<T>::ConstMatrix tensorflow::Tensor::flat_inner_dims() const {#TTypes_lt_T_gt_ConstMatrix_tensorflow_Tensor_flat_inner_dims}
+
+
+
+
+
+#### TTypes<T>::ConstMatrix tensorflow::Tensor::flat_outer_dims() const {#TTypes_lt_T_gt_ConstMatrix_tensorflow_Tensor_flat_outer_dims}
+
+
+
+
+
+#### TTypes< T, NDIMS >::ConstTensor tensorflow::Tensor::shaped(gtl::ArraySlice< int64 > new_sizes) const {#TTypes_lt_T_NDIMS_gt_ConstTensor_tensorflow_Tensor_shaped}
+
+
+
+
+
+#### TTypes< T, NDIMS >::ConstUnalignedTensor tensorflow::Tensor::unaligned_shaped(gtl::ArraySlice< int64 > new_sizes) const {#TTypes_lt_T_NDIMS_gt_ConstUnalignedTensor_tensorflow_Tensor_unaligned_shaped}
+
+
+
+
+
+#### TTypes< T >::ConstScalar tensorflow::Tensor::scalar() const {#TTypes_lt_T_gt_ConstScalar_tensorflow_Tensor_scalar}
+
+
+
+
+
+#### string tensorflow::Tensor::SummarizeValue(int64 max_entries) const {#string_tensorflow_Tensor_SummarizeValue}
+
+Render the first max_entries values in *this into a string.
+
+
+
+#### string tensorflow::Tensor::DebugString() const {#string_tensorflow_Tensor_DebugString}
+
+A human-readable summary of the Tensor suitable for debugging.
+
+
+
+#### void tensorflow::Tensor::FillDescription(TensorDescription *description) const {#void_tensorflow_Tensor_FillDescription}
+
+
+
+Fill in the TensorDescription proto with metadata about the Tensor that is useful for monitoring and debugging.
+
+#### StringPiece tensorflow::Tensor::tensor_data() const {#StringPiece_tensorflow_Tensor_tensor_data}
+
+Returns a StringPiece mapping the current tensor's buffer.
+
+The returned StringPiece may point to memory location on devices that the CPU cannot address directly.
+
+NOTE: The underlying Tensor buffer is refcounted, so the lifetime of the contents mapped by the StringPiece matches the lifetime of the buffer; callers should arrange to make sure the buffer does not get destroyed while the StringPiece is still used.
+
+REQUIRES: DataTypeCanUseMemcpy( dtype() ).
diff --git a/tensorflow/g3doc/api_docs/cc/ClassTensorBuffer.md b/tensorflow/g3doc/api_docs/cc/ClassTensorBuffer.md
new file mode 100644
index 0000000000..9f2c6a23be
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/cc/ClassTensorBuffer.md
@@ -0,0 +1,52 @@
+#Class tensorflow::TensorBuffer
+
+
+
+
+
+##Member Summary
+
+* [tensorflow::TensorBuffer::~TensorBuffer](#tensorflow_TensorBuffer_TensorBuffer)
+* [virtual void* tensorflow::TensorBuffer::data](#virtual_void_tensorflow_TensorBuffer_data)
+* [virtual size_t tensorflow::TensorBuffer::size](#virtual_size_t_tensorflow_TensorBuffer_size)
+* [virtual TensorBuffer* tensorflow::TensorBuffer::root_buffer](#virtual_TensorBuffer_tensorflow_TensorBuffer_root_buffer)
+* [virtual void tensorflow::TensorBuffer::FillAllocationDescription](#virtual_void_tensorflow_TensorBuffer_FillAllocationDescription)
+* [T* tensorflow::TensorBuffer::base](#T_tensorflow_TensorBuffer_base)
+
+##Member Details
+
+#### tensorflow::TensorBuffer::~TensorBuffer() override {#tensorflow_TensorBuffer_TensorBuffer}
+
+
+
+
+
+#### virtual void* tensorflow::TensorBuffer::data() const =0 {#virtual_void_tensorflow_TensorBuffer_data}
+
+
+
+
+
+#### virtual size_t tensorflow::TensorBuffer::size() const =0 {#virtual_size_t_tensorflow_TensorBuffer_size}
+
+
+
+
+
+#### virtual TensorBuffer* tensorflow::TensorBuffer::root_buffer()=0 {#virtual_TensorBuffer_tensorflow_TensorBuffer_root_buffer}
+
+
+
+
+
+#### virtual void tensorflow::TensorBuffer::FillAllocationDescription(AllocationDescription *proto) const =0 {#virtual_void_tensorflow_TensorBuffer_FillAllocationDescription}
+
+
+
+
+
+#### T* tensorflow::TensorBuffer::base() const {#T_tensorflow_TensorBuffer_base}
+
+
+
+
diff --git a/tensorflow/g3doc/api_docs/cc/ClassTensorShape.md b/tensorflow/g3doc/api_docs/cc/ClassTensorShape.md
new file mode 100644
index 0000000000..47a105a76e
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/cc/ClassTensorShape.md
@@ -0,0 +1,196 @@
+#Class tensorflow::TensorShape
+
+Manages the dimensions of a Tensor and their sizes.
+
+
+
+##Member Summary
+
+* [tensorflow::TensorShape::TensorShape](#tensorflow_TensorShape_TensorShape)
+ * Construct a TensorShape from the provided sizes. REQUIRES: dim_sizes[i] >= 0.
+* [tensorflow::TensorShape::TensorShape](#tensorflow_TensorShape_TensorShape)
+* [tensorflow::TensorShape::TensorShape](#tensorflow_TensorShape_TensorShape)
+ * REQUIRES: IsValid(proto)
+* [tensorflow::TensorShape::TensorShape](#tensorflow_TensorShape_TensorShape)
+* [void tensorflow::TensorShape::Clear](#void_tensorflow_TensorShape_Clear)
+ * Clear a tensor shape.
+* [void tensorflow::TensorShape::AddDim](#void_tensorflow_TensorShape_AddDim)
+ * Add a dimension to the end ("inner-most"). REQUIRES: size >= 0.
+* [void tensorflow::TensorShape::AppendShape](#void_tensorflow_TensorShape_AppendShape)
+ * Appends all the dimensions from shape.
+* [void tensorflow::TensorShape::InsertDim](#void_tensorflow_TensorShape_InsertDim)
+ * Insert a dimension somewhere in the TensorShape . REQUIRES: "0 <= d <= dims()" REQUIRES: size >= 0.
+* [void tensorflow::TensorShape::set_dim](#void_tensorflow_TensorShape_set_dim)
+ * Modifies the size of the dimension 'd' to be 'size' REQUIRES: "0 <= d < dims()" REQUIRES: size >= 0.
+* [void tensorflow::TensorShape::RemoveDim](#void_tensorflow_TensorShape_RemoveDim)
+ * Removes dimension 'd' from the TensorShape . REQUIRES: "0 <= d < dims()".
+* [int tensorflow::TensorShape::dims](#int_tensorflow_TensorShape_dims)
+ * Return the number of dimensions in the tensor.
+* [int64 tensorflow::TensorShape::dim_size](#int64_tensorflow_TensorShape_dim_size)
+ * Returns the number of elements in dimension "d". REQUIRES: "0 <= d < dims()".
+* [gtl::ArraySlice<int64> tensorflow::TensorShape::dim_sizes](#gtl_ArraySlice_lt_int64_gt_tensorflow_TensorShape_dim_sizes)
+ * Returns sizes of all dimensions.
+* [int64 tensorflow::TensorShape::num_elements](#int64_tensorflow_TensorShape_num_elements)
+ * Returns the number of elements in the tensor.
+* [bool tensorflow::TensorShape::IsSameSize](#bool_tensorflow_TensorShape_IsSameSize)
+ * Returns true if *this and b have the same sizes. Ignores dimension names.
+* [bool tensorflow::TensorShape::operator==](#bool_tensorflow_TensorShape_operator_)
+* [void tensorflow::TensorShape::AsProto](#void_tensorflow_TensorShape_AsProto)
+ * Fill *proto from *this.
+* [Eigen::DSizes< Eigen::DenseIndex, NDIMS > tensorflow::TensorShape::AsEigenDSizes](#Eigen_DSizes_lt_Eigen_DenseIndex_NDIMS_gt_tensorflow_TensorShape_AsEigenDSizes)
+ * Fill *dsizes from *this.
+* [Eigen::DSizes< Eigen::DenseIndex, NDIMS > tensorflow::TensorShape::AsEigenDSizesWithPadding](#Eigen_DSizes_lt_Eigen_DenseIndex_NDIMS_gt_tensorflow_TensorShape_AsEigenDSizesWithPadding)
+* [TensorShapeIter tensorflow::TensorShape::begin](#TensorShapeIter_tensorflow_TensorShape_begin)
+ * For iterating through the dimensions.
+* [TensorShapeIter tensorflow::TensorShape::end](#TensorShapeIter_tensorflow_TensorShape_end)
+* [string tensorflow::TensorShape::DebugString](#string_tensorflow_TensorShape_DebugString)
+ * For error messages.
+* [string tensorflow::TensorShape::ShortDebugString](#string_tensorflow_TensorShape_ShortDebugString)
+* [static bool tensorflow::TensorShape::IsValid](#static_bool_tensorflow_TensorShape_IsValid)
+ * Returns true iff "proto" is a valid tensor shape.
+
+##Member Details
+
+#### tensorflow::TensorShape::TensorShape(gtl::ArraySlice< int64 > dim_sizes) {#tensorflow_TensorShape_TensorShape}
+
+Construct a TensorShape from the provided sizes. REQUIRES: dim_sizes[i] >= 0.
+
+
+
+#### tensorflow::TensorShape::TensorShape(std::initializer_list< int64 > dim_sizes) {#tensorflow_TensorShape_TensorShape}
+
+
+
+
+
+#### tensorflow::TensorShape::TensorShape(const TensorShapeProto &proto) {#tensorflow_TensorShape_TensorShape}
+
+REQUIRES: IsValid(proto)
+
+
+
+#### tensorflow::TensorShape::TensorShape() {#tensorflow_TensorShape_TensorShape}
+
+
+
+Create a tensor shape with no dimensions and one element, which you can then call AddDim() on.
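+
+For example, building a shape incrementally (the sizes are illustrative):
+
+    tensorflow::TensorShape shape;  // 0 dimensions, 1 element.
+    shape.AddDim(3);
+    shape.AddDim(5);  // Now {3, 5}: 2 dimensions, 15 elements.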
+
+#### void tensorflow::TensorShape::Clear() {#void_tensorflow_TensorShape_Clear}
+
+Clear a tensor shape.
+
+
+
+#### void tensorflow::TensorShape::AddDim(int64 size) {#void_tensorflow_TensorShape_AddDim}
+
+Add a dimension to the end ("inner-most"). REQUIRES: size >= 0.
+
+
+
+#### void tensorflow::TensorShape::AppendShape(const TensorShape &shape) {#void_tensorflow_TensorShape_AppendShape}
+
+Appends all the dimensions from shape.
+
+
+
+#### void tensorflow::TensorShape::InsertDim(int d, int64 size) {#void_tensorflow_TensorShape_InsertDim}
+
+Insert a dimension somewhere in the TensorShape . REQUIRES: "0 <= d <= dims()" REQUIRES: size >= 0.
+
+
+
+#### void tensorflow::TensorShape::set_dim(int d, int64 size) {#void_tensorflow_TensorShape_set_dim}
+
+Modifies the size of the dimension 'd' to be 'size' REQUIRES: "0 <= d < dims()" REQUIRES: size >= 0.
+
+
+
+#### void tensorflow::TensorShape::RemoveDim(int d) {#void_tensorflow_TensorShape_RemoveDim}
+
+Removes dimension 'd' from the TensorShape . REQUIRES: "0 <= d < dims()".
+
+
+
+#### int tensorflow::TensorShape::dims() const {#int_tensorflow_TensorShape_dims}
+
+Return the number of dimensions in the tensor.
+
+
+
+#### int64 tensorflow::TensorShape::dim_size(int d) const {#int64_tensorflow_TensorShape_dim_size}
+
+Returns the number of elements in dimension "d". REQUIRES: "0 <= d < dims()".
+
+
+
+#### gtl::ArraySlice<int64> tensorflow::TensorShape::dim_sizes() const {#gtl_ArraySlice_lt_int64_gt_tensorflow_TensorShape_dim_sizes}
+
+Returns sizes of all dimensions.
+
+
+
+#### int64 tensorflow::TensorShape::num_elements() const {#int64_tensorflow_TensorShape_num_elements}
+
+Returns the number of elements in the tensor.
+
+We use int64 and not size_t to be compatible with Eigen::Tensor , which uses ptrdiff_t.
+
+#### bool tensorflow::TensorShape::IsSameSize(const TensorShape &b) const {#bool_tensorflow_TensorShape_IsSameSize}
+
+Returns true if *this and b have the same sizes. Ignores dimension names.
+
+
+
+#### bool tensorflow::TensorShape::operator==(const TensorShape &b) const {#bool_tensorflow_TensorShape_operator_}
+
+
+
+
+
+#### void tensorflow::TensorShape::AsProto(TensorShapeProto *proto) const {#void_tensorflow_TensorShape_AsProto}
+
+Fill *proto from *this.
+
+
+
+#### Eigen::DSizes< Eigen::DenseIndex, NDIMS > tensorflow::TensorShape::AsEigenDSizes() const {#Eigen_DSizes_lt_Eigen_DenseIndex_NDIMS_gt_tensorflow_TensorShape_AsEigenDSizes}
+
+Fill *dsizes from *this.
+
+
+
+#### Eigen::DSizes< Eigen::DenseIndex, NDIMS > tensorflow::TensorShape::AsEigenDSizesWithPadding() const {#Eigen_DSizes_lt_Eigen_DenseIndex_NDIMS_gt_tensorflow_TensorShape_AsEigenDSizesWithPadding}
+
+
+
+Same as AsEigenDSizes() but allows for NDIMS > dims(), in which case we pad the rest of the sizes with 1.
+
+#### TensorShapeIter tensorflow::TensorShape::begin() const {#TensorShapeIter_tensorflow_TensorShape_begin}
+
+For iterating through the dimensions.
+
+
+
+#### TensorShapeIter tensorflow::TensorShape::end() const {#TensorShapeIter_tensorflow_TensorShape_end}
+
+
+
+
+
+#### string tensorflow::TensorShape::DebugString() const {#string_tensorflow_TensorShape_DebugString}
+
+For error messages.
+
+
+
+#### string tensorflow::TensorShape::ShortDebugString() const {#string_tensorflow_TensorShape_ShortDebugString}
+
+
+
+
+
+#### static bool tensorflow::TensorShape::IsValid(const TensorShapeProto &proto) {#static_bool_tensorflow_TensorShape_IsValid}
+
+Returns true iff "proto" is a valid tensor shape.
+
+
diff --git a/tensorflow/g3doc/api_docs/cc/ClassTensorShapeIter.md b/tensorflow/g3doc/api_docs/cc/ClassTensorShapeIter.md
new file mode 100644
index 0000000000..2f198168a2
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/cc/ClassTensorShapeIter.md
@@ -0,0 +1,45 @@
+#Class tensorflow::TensorShapeIter
+
+
+
+
+
+##Member Summary
+
+* [tensorflow::TensorShapeIter::TensorShapeIter](#tensorflow_TensorShapeIter_TensorShapeIter)
+* [bool tensorflow::TensorShapeIter::operator==](#bool_tensorflow_TensorShapeIter_operator_)
+* [bool tensorflow::TensorShapeIter::operator!=](#bool_tensorflow_TensorShapeIter_operator_)
+* [void tensorflow::TensorShapeIter::operator++](#void_tensorflow_TensorShapeIter_operator_)
+* [TensorShapeDim tensorflow::TensorShapeIter::operator*](#TensorShapeDim_tensorflow_TensorShapeIter_operator_)
+
+##Member Details
+
+#### tensorflow::TensorShapeIter::TensorShapeIter(const TensorShape *shape, int d) {#tensorflow_TensorShapeIter_TensorShapeIter}
+
+
+
+
+
+#### bool tensorflow::TensorShapeIter::operator==(const TensorShapeIter &rhs) {#bool_tensorflow_TensorShapeIter_operator_}
+
+
+
+
+
+#### bool tensorflow::TensorShapeIter::operator!=(const TensorShapeIter &rhs) {#bool_tensorflow_TensorShapeIter_operator_}
+
+
+
+
+
+#### void tensorflow::TensorShapeIter::operator++() {#void_tensorflow_TensorShapeIter_operator_}
+
+
+
+
+
+#### TensorShapeDim tensorflow::TensorShapeIter::operator*() {#TensorShapeDim_tensorflow_TensorShapeIter_operator_}
+
+
+
+
diff --git a/tensorflow/g3doc/api_docs/cc/ClassTensorShapeUtils.md b/tensorflow/g3doc/api_docs/cc/ClassTensorShapeUtils.md
new file mode 100644
index 0000000000..7b81eb62a8
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/cc/ClassTensorShapeUtils.md
@@ -0,0 +1,81 @@
+#Class tensorflow::TensorShapeUtils
+
+Static helper routines for TensorShape . Includes a few common predicates on a tensor shape.
+
+
+
+##Member Summary
+
+* [static bool tensorflow::TensorShapeUtils::IsScalar](#static_bool_tensorflow_TensorShapeUtils_IsScalar)
+* [static bool tensorflow::TensorShapeUtils::IsVector](#static_bool_tensorflow_TensorShapeUtils_IsVector)
+* [static bool tensorflow::TensorShapeUtils::IsLegacyScalar](#static_bool_tensorflow_TensorShapeUtils_IsLegacyScalar)
+* [static bool tensorflow::TensorShapeUtils::IsLegacyVector](#static_bool_tensorflow_TensorShapeUtils_IsLegacyVector)
+* [static bool tensorflow::TensorShapeUtils::IsVectorOrHigher](#static_bool_tensorflow_TensorShapeUtils_IsVectorOrHigher)
+* [static bool tensorflow::TensorShapeUtils::IsMatrix](#static_bool_tensorflow_TensorShapeUtils_IsMatrix)
+* [static bool tensorflow::TensorShapeUtils::IsMatrixOrHigher](#static_bool_tensorflow_TensorShapeUtils_IsMatrixOrHigher)
+* [static TensorShape tensorflow::TensorShapeUtils::MakeShape](#static_TensorShape_tensorflow_TensorShapeUtils_MakeShape)
+ * Returns a TensorShape whose dimensions are dims[0], dims[1], ..., dims[n-1].
+* [static string tensorflow::TensorShapeUtils::ShapeListString](#static_string_tensorflow_TensorShapeUtils_ShapeListString)
+* [static bool tensorflow::TensorShapeUtils::StartsWith](#static_bool_tensorflow_TensorShapeUtils_StartsWith)
+
+##Member Details
+
+#### static bool tensorflow::TensorShapeUtils::IsScalar(const TensorShape &shape) {#static_bool_tensorflow_TensorShapeUtils_IsScalar}
+
+
+
+
+
+#### static bool tensorflow::TensorShapeUtils::IsVector(const TensorShape &shape) {#static_bool_tensorflow_TensorShapeUtils_IsVector}
+
+
+
+
+
+#### static bool tensorflow::TensorShapeUtils::IsLegacyScalar(const TensorShape &shape) {#static_bool_tensorflow_TensorShapeUtils_IsLegacyScalar}
+
+
+
+
+
+#### static bool tensorflow::TensorShapeUtils::IsLegacyVector(const TensorShape &shape) {#static_bool_tensorflow_TensorShapeUtils_IsLegacyVector}
+
+
+
+
+
+#### static bool tensorflow::TensorShapeUtils::IsVectorOrHigher(const TensorShape &shape) {#static_bool_tensorflow_TensorShapeUtils_IsVectorOrHigher}
+
+
+
+
+
+#### static bool tensorflow::TensorShapeUtils::IsMatrix(const TensorShape &shape) {#static_bool_tensorflow_TensorShapeUtils_IsMatrix}
+
+
+
+
+
+#### static bool tensorflow::TensorShapeUtils::IsMatrixOrHigher(const TensorShape &shape) {#static_bool_tensorflow_TensorShapeUtils_IsMatrixOrHigher}
+
+
+
+
+
+#### static TensorShape tensorflow::TensorShapeUtils::MakeShape(const T *dims, int n) {#static_TensorShape_tensorflow_TensorShapeUtils_MakeShape}
+
+Returns a TensorShape whose dimensions are dims[0], dims[1], ..., dims[n-1].
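+
+For example (the values are illustrative):
+
+    tensorflow::int64 dims[] = {2, 3, 4};
+    tensorflow::TensorShape shape =
+        tensorflow::TensorShapeUtils::MakeShape(dims, 3);  // Shape {2, 3, 4}.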
+
+
+
+#### static string tensorflow::TensorShapeUtils::ShapeListString(const gtl::ArraySlice< TensorShape > &shapes) {#static_string_tensorflow_TensorShapeUtils_ShapeListString}
+
+
+
+
+
+#### static bool tensorflow::TensorShapeUtils::StartsWith(const TensorShape &shape0, const TensorShape &shape1) {#static_bool_tensorflow_TensorShapeUtils_StartsWith}
+
+
+
+
diff --git a/tensorflow/g3doc/api_docs/cc/ClassThread.md b/tensorflow/g3doc/api_docs/cc/ClassThread.md
new file mode 100644
index 0000000000..32bb286206
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/cc/ClassThread.md
@@ -0,0 +1,25 @@
+#Class tensorflow::Thread
+
+
+
+
+
+##Member Summary
+
+* [tensorflow::Thread::Thread](#tensorflow_Thread_Thread)
+* [virtual tensorflow::Thread::~Thread](#virtual_tensorflow_Thread_Thread)
+ * Blocks until the thread of control stops running.
+
+##Member Details
+
+#### tensorflow::Thread::Thread() {#tensorflow_Thread_Thread}
+
+
+
+
+
+#### virtual tensorflow::Thread::~Thread() {#virtual_tensorflow_Thread_Thread}
+
+Blocks until the thread of control stops running.
+
+
diff --git a/tensorflow/g3doc/api_docs/cc/ClassWritableFile.md b/tensorflow/g3doc/api_docs/cc/ClassWritableFile.md
new file mode 100644
index 0000000000..e1b2132b4f
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/cc/ClassWritableFile.md
@@ -0,0 +1,52 @@
+#Class tensorflow::WritableFile
+
+A file abstraction for sequential writing.
+
+The implementation must provide buffering since callers may append small fragments at a time to the file.
+
+##Member Summary
+
+* [tensorflow::WritableFile::WritableFile](#tensorflow_WritableFile_WritableFile)
+* [virtual tensorflow::WritableFile::~WritableFile](#virtual_tensorflow_WritableFile_WritableFile)
+* [virtual Status tensorflow::WritableFile::Append](#virtual_Status_tensorflow_WritableFile_Append)
+* [virtual Status tensorflow::WritableFile::Close](#virtual_Status_tensorflow_WritableFile_Close)
+* [virtual Status tensorflow::WritableFile::Flush](#virtual_Status_tensorflow_WritableFile_Flush)
+* [virtual Status tensorflow::WritableFile::Sync](#virtual_Status_tensorflow_WritableFile_Sync)
+
+##Member Details
+
+#### tensorflow::WritableFile::WritableFile() {#tensorflow_WritableFile_WritableFile}
+
+
+
+
+
+#### virtual tensorflow::WritableFile::~WritableFile() {#virtual_tensorflow_WritableFile_WritableFile}
+
+
+
+
+
+#### virtual Status tensorflow::WritableFile::Append(const StringPiece &data)=0 {#virtual_Status_tensorflow_WritableFile_Append}
+
+
+
+
+
+#### virtual Status tensorflow::WritableFile::Close()=0 {#virtual_Status_tensorflow_WritableFile_Close}
+
+
+
+
+
+#### virtual Status tensorflow::WritableFile::Flush()=0 {#virtual_Status_tensorflow_WritableFile_Flush}
+
+
+
+
+
+#### virtual Status tensorflow::WritableFile::Sync()=0 {#virtual_Status_tensorflow_WritableFile_Sync}
+
+
+
+
diff --git a/tensorflow/g3doc/api_docs/cc/StructSessionOptions.md b/tensorflow/g3doc/api_docs/cc/StructSessionOptions.md
new file mode 100644
index 0000000000..99044997c9
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/cc/StructSessionOptions.md
@@ -0,0 +1,49 @@
+#Struct tensorflow::SessionOptions
+
+Configuration information for a Session.
+
+
+
+##Member Summary
+
+* [Env* tensorflow::SessionOptions::env](#Env_tensorflow_SessionOptions_env)
+ * The environment to use.
+* [string tensorflow::SessionOptions::target](#string_tensorflow_SessionOptions_target)
+ * The TensorFlow runtime to connect to.
+* [ConfigProto tensorflow::SessionOptions::config](#ConfigProto_tensorflow_SessionOptions_config)
+ * Configuration options.
+* [tensorflow::SessionOptions::SessionOptions](#tensorflow_SessionOptions_SessionOptions)
+
+##Member Details
+
+#### Env* tensorflow::SessionOptions::env {#Env_tensorflow_SessionOptions_env}
+
+The environment to use.
+
+
+
+#### string tensorflow::SessionOptions::target {#string_tensorflow_SessionOptions_target}
+
+The TensorFlow runtime to connect to.
+
+If 'target' is empty or unspecified, the local TensorFlow runtime implementation will be used. Otherwise, the TensorFlow engine defined by 'target' will be used to perform all computations.
+
+"target" can be either a single entry or a comma separated list of entries. Each entry is a resolvable address of the following format: local ip:port host:port ... other system-specific formats to identify tasks and jobs ...
+
+NOTE: at the moment 'local' maps to an in-process service-based runtime.
+
+Upon creation, a single session affines itself to one of the remote processes, with possible load balancing choices when the "target" resolves to a list of possible processes.
+
+If the session disconnects from the remote process during its lifetime, session calls may fail immediately.
+
+#### ConfigProto tensorflow::SessionOptions::config {#ConfigProto_tensorflow_SessionOptions_config}
+
+Configuration options.
+
+
+
+#### tensorflow::SessionOptions::SessionOptions() {#tensorflow_SessionOptions_SessionOptions}
+
+
+
+
diff --git a/tensorflow/g3doc/api_docs/cc/StructState.md b/tensorflow/g3doc/api_docs/cc/StructState.md
new file mode 100644
index 0000000000..d031b50370
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/cc/StructState.md
@@ -0,0 +1,24 @@
+#Struct tensorflow::Status::State
+
+
+
+
+
+##Member Summary
+
+* [tensorflow::error::Code tensorflow::Status::State::code](#tensorflow_error_Code_tensorflow_Status_State_code)
+* [string tensorflow::Status::State::msg](#string_tensorflow_Status_State_msg)
+
+##Member Details
+
+#### tensorflow::error::Code tensorflow::Status::State::code {#tensorflow_error_Code_tensorflow_Status_State_code}
+
+
+
+
+
+#### string tensorflow::Status::State::msg {#string_tensorflow_Status_State_msg}
+
+
+
+
diff --git a/tensorflow/g3doc/api_docs/cc/StructTensorShapeDim.md b/tensorflow/g3doc/api_docs/cc/StructTensorShapeDim.md
new file mode 100644
index 0000000000..711743ac85
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/cc/StructTensorShapeDim.md
@@ -0,0 +1,24 @@
+#Struct tensorflow::TensorShapeDim
+
+
+
+
+
+##Member Summary
+
+* [int tensorflow::TensorShapeDim::size](#int_tensorflow_TensorShapeDim_size)
+* [tensorflow::TensorShapeDim::TensorShapeDim](#tensorflow_TensorShapeDim_TensorShapeDim)
+
+##Member Details
+
+#### int tensorflow::TensorShapeDim::size {#int_tensorflow_TensorShapeDim_size}
+
+
+
+
+
+#### tensorflow::TensorShapeDim::TensorShapeDim(int64 s) {#tensorflow_TensorShapeDim_TensorShapeDim}
+
+
+
+
diff --git a/tensorflow/g3doc/api_docs/cc/StructThreadOptions.md b/tensorflow/g3doc/api_docs/cc/StructThreadOptions.md
new file mode 100644
index 0000000000..b568855d6e
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/cc/StructThreadOptions.md
@@ -0,0 +1,26 @@
+#Struct tensorflow::ThreadOptions
+
+Options to configure a Thread.
+
+Note that the options are all hints, and the underlying implementation may choose to ignore them.
+
+##Member Summary
+
+* [size_t tensorflow::ThreadOptions::stack_size](#size_t_tensorflow_ThreadOptions_stack_size)
+ * Thread stack size to use (in bytes).
+* [size_t tensorflow::ThreadOptions::guard_size](#size_t_tensorflow_ThreadOptions_guard_size)
+ * Guard area size to use near thread stacks (in bytes).
+
+##Member Details
+
+#### size_t tensorflow::ThreadOptions::stack_size {#size_t_tensorflow_ThreadOptions_stack_size}
+
+Thread stack size to use (in bytes).
+
+
+
+#### size_t tensorflow::ThreadOptions::guard_size {#size_t_tensorflow_ThreadOptions_guard_size}
+
+Guard area size to use near thread stacks (in bytes).
+
+
diff --git a/tensorflow/g3doc/api_docs/cc/index.md b/tensorflow/g3doc/api_docs/cc/index.md
new file mode 100644
index 0000000000..82aafc7486
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/cc/index.md
@@ -0,0 +1,75 @@
+# TensorFlow C++ Session API reference documentation
+
+TensorFlow's public C++ API includes only the API for executing graphs, as of
+version 0.5. To control the execution of a graph from C++:
+
+1. Build the computation graph using the [Python API](../python/).
+1. Use [tf.train.write_graph()](../python/train.md#write_graph) to
+write the graph to a file.
+1. Load the graph using the C++ Session API. For example:
+
+ ```c++
+ // Reads a model graph definition from disk, and creates a session object you
+ // can use to run it.
+ Status LoadGraph(string graph_file_name, Session** session) {
+ GraphDef graph_def;
+ TF_RETURN_IF_ERROR(
+ ReadBinaryProto(Env::Default(), graph_file_name, &graph_def));
+ TF_RETURN_IF_ERROR(NewSession(SessionOptions(), session));
+ TF_RETURN_IF_ERROR((*session)->Create(graph_def));
+ return Status::OK();
+ }
+ ```
+
+1. Run the graph with a call to `session->Run()`.
+
+
+##Classes
+
+* [tensorflow::Env](ClassEnv.md)
+* [tensorflow::EnvWrapper](ClassEnvWrapper.md)
+* [tensorflow::RandomAccessFile](ClassRandomAccessFile.md)
+* [tensorflow::Session](ClassSession.md)
+* [tensorflow::Status](ClassStatus.md)
+* [tensorflow::Tensor](ClassTensor.md)
+* [tensorflow::TensorBuffer](ClassTensorBuffer.md)
+* [tensorflow::TensorShape](ClassTensorShape.md)
+* [tensorflow::TensorShapeIter](ClassTensorShapeIter.md)
+* [tensorflow::TensorShapeUtils](ClassTensorShapeUtils.md)
+* [tensorflow::Thread](ClassThread.md)
+* [tensorflow::WritableFile](ClassWritableFile.md)
+
+##Structs
+
+* [tensorflow::SessionOptions](StructSessionOptions.md)
+* [tensorflow::Status::State](StructState.md)
+* [tensorflow::TensorShapeDim](StructTensorShapeDim.md)
+* [tensorflow::ThreadOptions](StructThreadOptions.md)
+
+
+<div class='sections-order' style="display: none;">
+<!-- ClassEnv.md -->
+<!-- ClassEnvWrapper.md -->
+<!-- ClassRandomAccessFile.md -->
+<!-- ClassSession.md -->
+<!-- ClassStatus.md -->
+<!-- ClassTensor.md -->
+<!-- ClassTensorBuffer.md -->
+<!-- ClassTensorShape.md -->
+<!-- ClassTensorShapeIter.md -->
+<!-- ClassTensorShapeUtils.md -->
+<!-- ClassThread.md -->
+<!-- ClassWritableFile.md -->
+<!-- StructSessionOptions.md -->
+<!-- StructState.md -->
+<!-- StructTensorShapeDim.md -->
+<!-- StructThreadOptions.md -->
+</div>
+
+
+
+
+
+
diff --git a/tensorflow/g3doc/api_docs/index.md b/tensorflow/g3doc/api_docs/index.md
new file mode 100644
index 0000000000..7234bf45a8
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/index.md
@@ -0,0 +1,15 @@
+# Overview
+
+TensorFlow has APIs available in several languages both for constructing and
+executing a TensorFlow graph. The Python API is at present the most complete
+and the easiest to use, but the C++ API may offer some performance advantages
+in graph execution, and supports deployment to small devices such as Android.
+
+Over time, we hope that the TensorFlow community will develop front ends for
+languages like Go, Java, JavaScript, Lua, R, and perhaps others. With SWIG, it's
+relatively easy to contribute a TensorFlow interface to your favorite language.
+
+Note: Many practical aspects of usage are covered in the Mechanics tab, and
+some additional documentation not specific to any particular language API is
+available in the Resources tab.
+
diff --git a/tensorflow/g3doc/api_docs/python/array_ops.md b/tensorflow/g3doc/api_docs/python/array_ops.md
new file mode 100644
index 0000000000..eecb442f1c
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/array_ops.md
@@ -0,0 +1,1025 @@
+<!-- This file is machine generated: DO NOT EDIT! -->
+
+# Tensor Transformations
+<!-- TOC-BEGIN This section is generated by neural network: DO NOT EDIT! -->
+## Contents
+* [Casting](#AUTOGENERATED-casting)
+ * [tf.string_to_number(string_tensor, out_type=None, name=None)](#string_to_number)
+ * [tf.to_double(x, name='ToDouble')](#to_double)
+ * [tf.to_float(x, name='ToFloat')](#to_float)
+ * [tf.to_bfloat16(x, name='ToBFloat16')](#to_bfloat16)
+ * [tf.to_int32(x, name='ToInt32')](#to_int32)
+ * [tf.to_int64(x, name='ToInt64')](#to_int64)
+ * [tf.cast(x, dtype, name=None)](#cast)
+* [Shapes and Shaping](#AUTOGENERATED-shapes-and-shaping)
+ * [tf.shape(input, name=None)](#shape)
+ * [tf.size(input, name=None)](#size)
+ * [tf.rank(input, name=None)](#rank)
+ * [tf.reshape(tensor, shape, name=None)](#reshape)
+ * [tf.squeeze(input, squeeze_dims=None, name=None)](#squeeze)
+ * [tf.expand_dims(input, dim, name=None)](#expand_dims)
+* [Slicing and Joining](#AUTOGENERATED-slicing-and-joining)
+ * [tf.slice(input_, begin, size, name=None)](#slice)
+ * [tf.split(split_dim, num_split, value, name='split')](#split)
+ * [tf.tile(input, multiples, name=None)](#tile)
+ * [tf.pad(input, paddings, name=None)](#pad)
+ * [tf.concat(concat_dim, values, name='concat')](#concat)
+ * [tf.pack(values, name='pack')](#pack)
+ * [tf.unpack(value, num=None, name='unpack')](#unpack)
+ * [tf.reverse_sequence(input, seq_lengths, seq_dim, name=None)](#reverse_sequence)
+ * [tf.reverse(tensor, dims, name=None)](#reverse)
+ * [tf.transpose(a, perm=None, name='transpose')](#transpose)
+ * [tf.gather(params, indices, name=None)](#gather)
+ * [tf.dynamic_partition(data, partitions, num_partitions, name=None)](#dynamic_partition)
+ * [tf.dynamic_stitch(indices, data, name=None)](#dynamic_stitch)
+
+
+<!-- TOC-END This section was generated by neural network, THANKS FOR READING! -->
+
+## Casting <div class="md-anchor" id="AUTOGENERATED-casting">{#AUTOGENERATED-casting}</div>
+
+TensorFlow provides several operations that you can use to cast tensor data
+types in your graph.
+
+- - -
+
+### tf.string_to_number(string_tensor, out_type=None, name=None) <div class="md-anchor" id="string_to_number">{#string_to_number}</div>
+
+Converts each string in the input Tensor to the specified numeric type.
+
+(Note that int32 overflow results in an error while float overflow
+results in a rounded value.)
+
+##### Args:
+
+
+* <b>string_tensor</b>: A `Tensor` of type `string`.
+* <b>out_type</b>: An optional `tf.DType` from: `tf.float32, tf.int32`. Defaults to `tf.float32`.
+ The numeric type to interpret each string in string_tensor as.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor` of type `out_type`.
+ A Tensor of the same shape as the input string_tensor.
+
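+For example, a minimal sketch of parsing a string tensor (the values here
+are illustrative):
+
+```python
+import tensorflow as tf
+
+strings = tf.constant(["1.5", "2.25", "-3.0"])
+numbers = tf.string_to_number(strings)  # out_type defaults to tf.float32
+
+with tf.Session() as sess:
+  print sess.run(numbers)  # prints the parsed values 1.5, 2.25, -3.0
+```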
+
+- - -
+
+### tf.to_double(x, name='ToDouble') <div class="md-anchor" id="to_double">{#to_double}</div>
+
+Casts a tensor to type `float64`.
+
+##### Args:
+
+
+* <b>x</b>: A `Tensor` or `SparseTensor`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor` or `SparseTensor` with same shape as `x` with type `float64`.
+
+##### Raises:
+
+
+* <b>TypeError</b>: If `x` cannot be cast to `float64`.
+
+
+- - -
+
+### tf.to_float(x, name='ToFloat') <div class="md-anchor" id="to_float">{#to_float}</div>
+
+Casts a tensor to type `float32`.
+
+##### Args:
+
+
+* <b>x</b>: A `Tensor` or `SparseTensor`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor` or `SparseTensor` with same shape as `x` with type `float32`.
+
+##### Raises:
+
+
+* <b>TypeError</b>: If `x` cannot be cast to `float32`.
+
+
+- - -
+
+### tf.to_bfloat16(x, name='ToBFloat16') <div class="md-anchor" id="to_bfloat16">{#to_bfloat16}</div>
+
+Casts a tensor to type `bfloat16`.
+
+##### Args:
+
+
+* <b>x</b>: A `Tensor` or `SparseTensor`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor` or `SparseTensor` with same shape as `x` with type `bfloat16`.
+
+##### Raises:
+
+
+* <b>TypeError</b>: If `x` cannot be cast to `bfloat16`.
+
+
+- - -
+
+### tf.to_int32(x, name='ToInt32') <div class="md-anchor" id="to_int32">{#to_int32}</div>
+
+Casts a tensor to type `int32`.
+
+##### Args:
+
+
+* <b>x</b>: A `Tensor` or `SparseTensor`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor` or `SparseTensor` with same shape as `x` with type `int32`.
+
+##### Raises:
+
+
+* <b>TypeError</b>: If `x` cannot be cast to `int32`.
+
+
+- - -
+
+### tf.to_int64(x, name='ToInt64') <div class="md-anchor" id="to_int64">{#to_int64}</div>
+
+Casts a tensor to type `int64`.
+
+##### Args:
+
+
+* <b>x</b>: A `Tensor` or `SparseTensor`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor` or `SparseTensor` with same shape as `x` with type `int64`.
+
+##### Raises:
+
+
+* <b>TypeError</b>: If `x` cannot be cast to `int64`.
+
+
+- - -
+
+### tf.cast(x, dtype, name=None) <div class="md-anchor" id="cast">{#cast}</div>
+
+Casts a tensor to a new type.
+
+The operation casts `x` (in case of `Tensor`) or `x.values`
+(in case of `SparseTensor`) to `dtype`.
+
+For example:
+
+```python
+# tensor `a` is [1.8, 2.2], dtype=tf.float32
+tf.cast(a, tf.int32) ==> [1, 2] # dtype=tf.int32
+```
+
+##### Args:
+
+
+* <b>x</b>: A `Tensor` or `SparseTensor`.
+* <b>dtype</b>: The destination type.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor` or `SparseTensor` with same shape as `x`.
+
+##### Raises:
+
+
+* <b>TypeError</b>: If `x` cannot be cast to the `dtype`.
+
+
+
+## Shapes and Shaping <div class="md-anchor" id="AUTOGENERATED-shapes-and-shaping">{#AUTOGENERATED-shapes-and-shaping}</div>
+
+TensorFlow provides several operations that you can use to determine the shape
+of a tensor and change the shape of a tensor.
+
+- - -
+
+### tf.shape(input, name=None) <div class="md-anchor" id="shape">{#shape}</div>
+
+Returns the shape of a tensor.
+
+This operation returns a 1-D integer tensor representing the shape of `input`.
+
+For example:
+
+```prettyprint
+# 't' is [[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]]
+shape(t) ==> [2, 2, 3]
+```
+
+##### Args:
+
+
+* <b>input</b>: A `Tensor`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor` of type `int32`.
+
+
+- - -
+
+### tf.size(input, name=None) <div class="md-anchor" id="size">{#size}</div>
+
+Returns the size of a tensor.
+
+This operation returns an integer representing the number of elements in
+`input`.
+
+For example:
+
+```prettyprint
+# 't' is [[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]]
+size(t) ==> 12
+```
+
+##### Args:
+
+
+* <b>input</b>: A `Tensor`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor` of type `int32`.
+
+
+- - -
+
+### tf.rank(input, name=None) <div class="md-anchor" id="rank">{#rank}</div>
+
+Returns the rank of a tensor.
+
+This operation returns an integer representing the rank of `input`.
+
+For example:
+
+```prettyprint
+# 't' is [[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]]
+# shape of tensor 't' is [2, 2, 3]
+rank(t) ==> 3
+```
+
+**Note**: The rank of a tensor is not the same as the rank of a matrix. The rank
+of a tensor is the number of indices required to uniquely select each element
+of the tensor. Rank is also known as "order", "degree", or "ndims."
+
+##### Args:
+
+
+* <b>input</b>: A `Tensor`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor` of type `int32`.
+
+
+- - -
+
+### tf.reshape(tensor, shape, name=None) <div class="md-anchor" id="reshape">{#reshape}</div>
+
+Reshapes a tensor.
+
+Given `tensor`, this operation returns a tensor that has the same values
+as `tensor` with shape `shape`.
+
+If `shape` is the special value `[-1]`, then `tensor` is flattened and the
+operation outputs a 1-D tensor with all elements of `tensor`.
+
+If `shape` is 1-D or higher, then the operation returns a tensor with shape
+`shape` filled with the values of `tensor`. In this case, the number of elements
+implied by `shape` must be the same as the number of elements in `tensor`.
+
+For example:
+
+```prettyprint
+# tensor 't' is [1, 2, 3, 4, 5, 6, 7, 8, 9]
+# tensor 't' has shape [9]
+reshape(t, [3, 3]) ==> [[1, 2, 3]
+ [4, 5, 6]
+ [7, 8, 9]]
+
+# tensor 't' is [[[1, 1], [2, 2]]
+# [[3, 3], [4, 4]]]
+# tensor 't' has shape [2, 2, 2]
+reshape(t, [2, 4]) ==> [[1, 1, 2, 2]
+ [3, 3, 4, 4]]
+
+# tensor 't' is [[[1, 1, 1],
+# [2, 2, 2]],
+# [[3, 3, 3],
+# [4, 4, 4]],
+# [[5, 5, 5],
+# [6, 6, 6]]]
+# tensor 't' has shape [3, 2, 3]
+# pass '[-1]' to flatten 't'
+reshape(t, [-1]) ==> [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5, 5, 6, 6, 6]
+```
+
+##### Args:
+
+
+* <b>tensor</b>: A `Tensor`.
+* <b>shape</b>: A `Tensor` of type `int32`. Defines the shape of the output tensor.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `tensor`.
+
+
+- - -
+
+### tf.squeeze(input, squeeze_dims=None, name=None) <div class="md-anchor" id="squeeze">{#squeeze}</div>
+
+Removes dimensions of size 1 from the shape of a tensor.
+
+Given a tensor `input`, this operation returns a tensor of the same type with
+all dimensions of size 1 removed. If you don't want to remove all size 1
+dimensions, you can remove specific size 1 dimensions by specifying
+`squeeze_dims`.
+
+For example:
+
+```prettyprint
+# 't' is a tensor of shape [1, 2, 1, 3, 1, 1]
+shape(squeeze(t)) ==> [2, 3]
+```
+
+Or, to remove specific size 1 dimensions:
+
+```prettyprint
+# 't' is a tensor of shape [1, 2, 1, 3, 1, 1]
+shape(squeeze(t, [2, 4])) ==> [1, 2, 3, 1]
+```
+
+##### Args:
+
+
+* <b>input</b>: A `Tensor`. The `input` to squeeze.
+* <b>squeeze_dims</b>: An optional list of `ints`. Defaults to `[]`.
+ If specified, only squeezes the dimensions listed. The dimension
+ index starts at 0. It is an error to squeeze a dimension that is not 1.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `input`.
+ Contains the same data as `input`, but has one or more dimensions of
+ size 1 removed.
+
+
+- - -
+
+### tf.expand_dims(input, dim, name=None) <div class="md-anchor" id="expand_dims">{#expand_dims}</div>
+
+Inserts a dimension of 1 into a tensor's shape.
+
+Given a tensor `input`, this operation inserts a dimension of 1 at the
+dimension index `dim` of `input`'s shape. The dimension index `dim` starts at
+zero; if you specify a negative number for `dim` it is counted backward from
+the end.
+
+This operation is useful if you want to add a batch dimension to a single
+element. For example, if you have a single image of shape `[height, width,
+channels]`, you can make it a batch of 1 image with `expand_dims(image, 0)`,
+which will make the shape `[1, height, width, channels]`.
+
+Other examples:
+
+```prettyprint
+# 't' is a tensor of shape [2]
+shape(expand_dims(t, 0)) ==> [1, 2]
+shape(expand_dims(t, 1)) ==> [2, 1]
+shape(expand_dims(t, -1)) ==> [2, 1]
+
+# 't2' is a tensor of shape [2, 3, 5]
+shape(expand_dims(t2, 0)) ==> [1, 2, 3, 5]
+shape(expand_dims(t2, 2)) ==> [2, 3, 1, 5]
+shape(expand_dims(t2, 3)) ==> [2, 3, 5, 1]
+```
+
+This operation requires that:
+
+`-1-input.dims() <= dim <= input.dims()`
+
+This operation is related to `squeeze()`, which removes dimensions of
+size 1.
+
+##### Args:
+
+
+* <b>input</b>: A `Tensor`.
+* <b>dim</b>: A `Tensor` of type `int32`.
+ 0-D (scalar). Specifies the dimension index at which to
+ expand the shape of `input`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `input`.
+ Contains the same data as `input`, but its shape has an additional
+ dimension of size 1 added.
+
+
+
+## Slicing and Joining <div class="md-anchor" id="AUTOGENERATED-slicing-and-joining">{#AUTOGENERATED-slicing-and-joining}</div>
+
+TensorFlow provides several operations to slice or extract parts of a tensor,
+or join multiple tensors together.
+
+- - -
+
+### tf.slice(input_, begin, size, name=None) <div class="md-anchor" id="slice">{#slice}</div>
+
+Extracts a slice from a tensor.
+
+This operation extracts a slice of size `size` from a tensor `input` starting
+at the location specified by `begin`. The slice `size` is represented as a
+tensor shape, where `size[i]` is the number of elements of the 'i'th dimension
+of `input` that you want to slice. The starting location (`begin`) for the
+slice is represented as an offset in each dimension of `input`. In other
+words, `begin[i]` is the offset into the 'i'th dimension of `input` that you
+want to slice from.
+
+`begin` is zero-based; `size` is one-based. If `size[i]` is -1,
+all remaining elements in dimension i are included in the
+slice. In other words, this is equivalent to setting:
+
+`size[i] = input.dim_size(i) - begin[i]`
+
+This operation requires that:
+
+`0 <= begin[i] <= begin[i] + size[i] <= Di for i in [0, n]`
+
+For example:
+
+```
+# 'input' is [[[1, 1, 1], [2, 2, 2]],
+# [[3, 3, 3], [4, 4, 4]],
+# [[5, 5, 5], [6, 6, 6]]]
+tf.slice(input, [1, 0, 0], [1, 1, 3]) ==> [[[3, 3, 3]]]
+tf.slice(input, [1, 0, 0], [1, 2, 3]) ==> [[[3, 3, 3],
+ [4, 4, 4]]]
+tf.slice(input, [1, 0, 0], [2, 1, 3]) ==> [[[3, 3, 3]],
+ [[5, 5, 5]]]
+```
+
+##### Args:
+
+
+* <b>input_</b>: A `Tensor`.
+* <b>begin</b>: An `int32` or `int64` `Tensor`.
+* <b>size</b>: An `int32` or `int64` `Tensor`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor` of the same type as `input_`.
+
+
+- - -
+
+### tf.split(split_dim, num_split, value, name='split') <div class="md-anchor" id="split">{#split}</div>
+
+Splits a tensor into `num_split` tensors along one dimension.
+
+Splits `value` along dimension `split_dim` into `num_split` smaller tensors.
+Requires that `num_split` evenly divide `value.shape[split_dim]`.
+
+For example:
+
+```python
+# 'value' is a tensor with shape [5, 30]
+# Split 'value' into 3 tensors along dimension 1
+split0, split1, split2 = tf.split(1, 3, value)
+tf.shape(split0) ==> [5, 10]
+```
+
+##### Args:
+
+
+* <b>split_dim</b>: A 0-D `int32` `Tensor`. The dimension along which to split.
+ Must be in the range `[0, rank(value))`.
+* <b>num_split</b>: A 0-D `int32` `Tensor`. The number of ways to split.
+* <b>value</b>: The `Tensor` to split.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ `num_split` `Tensor` objects resulting from splitting `value`.
+
+
+- - -
+
+### tf.tile(input, multiples, name=None) <div class="md-anchor" id="tile">{#tile}</div>
+
+Constructs a tensor by tiling a given tensor.
+
+This operation creates a new tensor by replicating `input` `multiples` times.
+The output tensor's i'th dimension has `input.dims(i) * multiples[i]` elements,
+and the values of `input` are replicated `multiples[i]` times along the 'i'th
+dimension. For example, tiling `[a b c d]` by `[2]` produces
+`[a b c d a b c d]`.
+
+##### Args:
+
+
+* <b>input</b>: A `Tensor`. 1-D or higher.
+* <b>multiples</b>: A `Tensor` of type `int32`.
+ 1-D. Length must be the same as the number of dimensions in `input`
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `input`.
+
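+For example, a short sketch of 2-D tiling (the values are illustrative):
+
+```python
+import tensorflow as tf
+
+t = tf.constant([[1, 2], [3, 4]])
+tiled = tf.tile(t, [2, 3])  # repeat rows twice and columns three times
+
+with tf.Session() as sess:
+  print sess.run(tiled)
+  # ==> [[1 2 1 2 1 2]
+  #      [3 4 3 4 3 4]
+  #      [1 2 1 2 1 2]
+  #      [3 4 3 4 3 4]]
+```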
+
+- - -
+
+### tf.pad(input, paddings, name=None) <div class="md-anchor" id="pad">{#pad}</div>
+
+Pads a tensor with zeros.
+
+This operation pads a `input` with zeros according to the `paddings` you
+specify. `paddings` is an integer tensor with shape `[Dn, 2]`, where n is the
+rank of `input`. For each dimension D of `input`, `paddings[D, 0]` indicates
+how many zeros to add before the contents of `input` in that dimension, and
+`paddings[D, 1]` indicates how many zeros to add after the contents of `input`
+in that dimension.
+
+The padded size of each dimension D of the output is:
+
+`paddings(D, 0) + input.dim_size(D) + paddings(D, 1)`
+
+For example:
+
+```prettyprint
+# 't' is [[1, 1], [2, 2]]
+# 'paddings' is [[1, 1], [2, 2]]
+# rank of 't' is 2
+pad(t, paddings) ==> [[0, 0, 0, 0, 0, 0]
+ [0, 0, 1, 1, 0, 0]
+ [0, 0, 2, 2, 0, 0]
+ [0, 0, 0, 0, 0, 0]]
+```
+
+##### Args:
+
+
+* <b>input</b>: A `Tensor`.
+* <b>paddings</b>: A `Tensor` of type `int32`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `input`.
+
+
+- - -
+
+### tf.concat(concat_dim, values, name='concat') <div class="md-anchor" id="concat">{#concat}</div>
+
+Concatenates tensors along one dimension.
+
+Concatenates the list of tensors `values` along dimension `concat_dim`. If
+`values[i].shape = [D0, D1, ... Dconcat_dim(i), ...Dn]`, the concatenated
+result has shape
+
+ [D0, D1, ... Rconcat_dim, ...Dn]
+
+where
+
+ Rconcat_dim = sum(Dconcat_dim(i))
+
+That is, the data from the input tensors is joined along the `concat_dim`
+dimension.
+
+The number of dimensions of the input tensors must match, and all dimensions
+except `concat_dim` must be equal.
+
+For example:
+
+```python
+t1 = [[1, 2, 3], [4, 5, 6]]
+t2 = [[7, 8, 9], [10, 11, 12]]
+tf.concat(0, [t1, t2]) ==> [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]]
+tf.concat(1, [t1, t2]) ==> [[1, 2, 3, 7, 8, 9], [4, 5, 6, 10, 11, 12]]
+
+# tensor t3 with shape [2, 3]
+# tensor t4 with shape [2, 3]
+tf.shape(tf.concat(0, [t3, t4])) ==> [4, 3]
+tf.shape(tf.concat(1, [t3, t4])) ==> [2, 6]
+```
+
+##### Args:
+
+
+* <b>concat_dim</b>: 0-D `int32` `Tensor`. Dimension along which to concatenate.
+* <b>values</b>: A list of `Tensor` objects or a single `Tensor`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor` resulting from concatenation of the input tensors.
+
+
+- - -
+
+### tf.pack(values, name='pack') <div class="md-anchor" id="pack">{#pack}</div>
+
+Packs a list of rank-`R` tensors into one rank-`(R+1)` tensor.
+
+Packs tensors in `values` into a tensor with rank one higher than each tensor
+in `values` and shape `[len(values)] + values[0].shape`. The output satisfies
+`output[i, ...] = values[i][...]`.
+
+This is the opposite of unpack. The numpy equivalent is
+
+ tf.pack([x, y, z]) = np.asarray([x, y, z])
+
+##### Args:
+
+
+* <b>values</b>: A list of `Tensor` objects with the same shape and type.
+* <b>name</b>: A name for this operation (optional).
+
+##### Returns:
+
+
+* <b>output</b>: A packed `Tensor` with the same type as `values`.
+
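+For example, a brief sketch (the tensors are illustrative):
+
+```python
+import tensorflow as tf
+
+x = tf.constant([1, 4])
+y = tf.constant([2, 5])
+z = tf.constant([3, 6])
+packed = tf.pack([x, y, z])  # shape [3, 2]
+
+with tf.Session() as sess:
+  print sess.run(packed)  # ==> [[1 4] [2 5] [3 6]]
+```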
+
+- - -
+
+### tf.unpack(value, num=None, name='unpack') <div class="md-anchor" id="unpack">{#unpack}</div>
+
+Unpacks the outer dimension of a rank-`R` tensor into rank-`(R-1)` tensors.
+
+Unpacks `num` tensors from `value` along the first dimension.
+If `num` is not specified (the default), it is inferred from `value`'s shape.
+If `value.shape[0]` is not known, `ValueError` is raised.
+
+The ith tensor in `output` is the slice `value[i, ...]`. Each tensor in
+`output` has shape `value.shape[1:]`.
+
+This is the opposite of pack. The numpy equivalent is
+
+ tf.unpack(x, n) = list(x)
+
+##### Args:
+
+
+* <b>value</b>: A rank `R > 0` `Tensor` to be unpacked.
+* <b>num</b>: An `int`. The first dimension of value. Automatically inferred if
+ `None` (the default).
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The list of `Tensor` objects unpacked from `value`.
+
+##### Raises:
+
+
+* <b>ValueError</b>: If `num` is unspecified and cannot be inferred.
+
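+For example, a small sketch mirroring the `pack` example above:
+
+```python
+import tensorflow as tf
+
+packed = tf.constant([[1, 4], [2, 5], [3, 6]])
+x, y, z = tf.unpack(packed)  # three tensors, each of shape [2]
+
+with tf.Session() as sess:
+  print sess.run(x)  # ==> [1 4]
+```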
+
+- - -
+
+### tf.reverse_sequence(input, seq_lengths, seq_dim, name=None) <div class="md-anchor" id="reverse_sequence">{#reverse_sequence}</div>
+
+Reverses variable length slices in dimension `seq_dim`.
+
+This op first slices `input` along the first dimension, and for each slice `i`,
+reverses the first `seq_lengths[i]` elements along the dimension `seq_dim`.
+
+The elements of `seq_lengths` must obey `seq_lengths[i] < input.dims(seq_dim)`,
+and `seq_lengths` must be a vector of length `input.dims(0)`.
+
+The output slice `i` along dimension 0 is then given by input slice `i`, with
+the first `seq_lengths[i]` slices along dimension `seq_dim` reversed.
+
+For example:
+
+```prettyprint
+# Given this:
+seq_dim = 1
+input.dims = (4, ...)
+seq_lengths = [7, 2, 3, 5]
+
+# then slices of input are reversed on seq_dim, but only up to seq_lengths:
+output[0, 0:7, :, ...] = input[0, 7:0:-1, :, ...]
+output[1, 0:2, :, ...] = input[1, 2:0:-1, :, ...]
+output[2, 0:3, :, ...] = input[2, 3:0:-1, :, ...]
+output[3, 0:5, :, ...] = input[3, 5:0:-1, :, ...]
+
+# while entries past seq_lengths are copied through:
+output[0, 7:, :, ...] = input[0, 7:, :, ...]
+output[1, 2:, :, ...] = input[1, 2:, :, ...]
+output[2, 3:, :, ...] = input[2, 3:, :, ...]
+output[3, 5:, :, ...] = input[3, 5:, :, ...]
+```
+
+##### Args:
+
+
+* <b>input</b>: A `Tensor`. The input to reverse.
+* <b>seq_lengths</b>: A `Tensor` of type `int64`.
+ 1-D with length `input.dims(0)` and
+ `max(seq_lengths) < input.dims(seq_dim)`
+* <b>seq_dim</b>: An `int`. The dimension which is partially reversed.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `input`.
+ The partially reversed input. It has the same shape as `input`.
+
+
+- - -
+
+### tf.reverse(tensor, dims, name=None) <div class="md-anchor" id="reverse">{#reverse}</div>
+
+Reverses specific dimensions of a tensor.
+
+Given a `tensor`, and a `bool` tensor `dims` representing the dimensions
+of `tensor`, this operation reverses each dimension i of `tensor` where
+`dims[i]` is `True`.
+
+`tensor` can have up to 8 dimensions. The number of dimensions
+of `tensor` must equal the number of elements in `dims`. In other words:
+
+`rank(tensor) = size(dims)`
+
+For example:
+
+```prettyprint
+# tensor 't' is [[[[ 0, 1, 2, 3],
+# [ 4, 5, 6, 7],
+# [ 8, 9, 10, 11]],
+# [[12, 13, 14, 15],
+# [16, 17, 18, 19],
+# [20, 21, 22, 23]]]]
+# tensor 't' shape is [1, 2, 3, 4]
+
+# 'dims' is [False, False, False, True]
+reverse(t, dims) ==> [[[[ 3, 2, 1, 0],
+ [ 7, 6, 5, 4],
+ [ 11, 10, 9, 8]],
+ [[15, 14, 13, 12],
+ [19, 18, 17, 16],
+ [23, 22, 21, 20]]]]
+
+# 'dims' is [False, True, False, False]
+reverse(t, dims) ==> [[[[12, 13, 14, 15],
+ [16, 17, 18, 19],
+ [20, 21, 22, 23]],
+ [[ 0, 1, 2, 3],
+ [ 4, 5, 6, 7],
+ [ 8, 9, 10, 11]]]]
+
+# 'dims' is [False, False, True, False]
+reverse(t, dims) ==> [[[[8, 9, 10, 11],
+ [4, 5, 6, 7],
+ [0, 1, 2, 3]],
+ [[20, 21, 22, 23],
+ [16, 17, 18, 19],
+ [12, 13, 14, 15]]]]
+```
+
+##### Args:
+
+
+* <b>tensor</b>: A `Tensor`. Must be one of the following types: `uint8`, `int8`, `int32`, `bool`, `float32`, `float64`.
+ Up to 8-D.
+* <b>dims</b>: A `Tensor` of type `bool`. 1-D. The dimensions to reverse.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `tensor`. The same shape as `tensor`.
+
+
+- - -
+
+### tf.transpose(a, perm=None, name='transpose') <div class="md-anchor" id="transpose">{#transpose}</div>
+
+Transposes `a`. Permutes the dimensions according to `perm`.
+
+The returned tensor's dimension i will correspond to the input dimension
+`perm[i]`. If `perm` is not given, it is set to (n-1...0), where n is
+the rank of the input tensor. Hence by default, this operation performs a
+regular matrix transpose on 2-D input Tensors.
+
+For example:
+
+```python
+# 'x' is [[1 2 3]
+# [4 5 6]]
+tf.transpose(x) ==> [[1 4]
+ [2 5]
+ [3 6]]
+
+# Equivalently
+tf.transpose(x, perm=[1, 0]) ==> [[1 4]
+ [2 5]
+ [3 6]]
+
+# 'perm' is more useful for n-dimensional tensors, for n > 2
+# 'x' is [[[1 2 3]
+# [4 5 6]]
+# [[7 8 9]
+# [10 11 12]]]
+# Take the transpose of the matrices in dimension-0
+tf.transpose(x, perm=[0, 2, 1]) ==> [[[1 4]
+ [2 5]
+ [3 6]]
+
+ [[7 10]
+ [8 11]
+ [9 12]]]
+```
+
+##### Args:
+
+
+* <b>a</b>: A `Tensor`.
+* <b>perm</b>: A permutation of the dimensions of `a`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A transposed `Tensor`.
+
+
+- - -
+
+### tf.gather(params, indices, name=None) <div class="md-anchor" id="gather">{#gather}</div>
+
+Gathers slices from `params` according to `indices`.
+
+`indices` must be an integer tensor of any dimension (usually 0-D or 1-D).
+Produces an output tensor with shape `indices.shape + params.shape[1:]` where:
+
+ # Scalar indices
+ output[:, ..., :] = params[indices, :, ... :]
+
+ # Vector indices
+ output[i, :, ..., :] = params[indices[i], :, ... :]
+
+ # Higher rank indices
+ output[i, ..., j, :, ... :] = params[indices[i, ..., j], :, ..., :]
+
+If `indices` is a permutation and `len(indices) == params.shape[0]` then
+this operation will permute `params` accordingly.
+
+<div style="width:70%; margin:auto; margin-bottom:10px; margin-top:20px;">
+<img style="width:100%" src="../images/Gather.png" alt>
+</div>
+
+##### Args:
+
+
+* <b>params</b>: A `Tensor`.
+* <b>indices</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `params`.
+
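+For example, a small sketch with vector indices (the values are illustrative):
+
+```python
+import tensorflow as tf
+
+params = tf.constant([10, 20, 30, 40])
+indices = tf.constant([3, 0, 1])
+
+with tf.Session() as sess:
+  print sess.run(tf.gather(params, indices))  # ==> [40 10 20]
+```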
+
+- - -
+
+### tf.dynamic_partition(data, partitions, num_partitions, name=None) <div class="md-anchor" id="dynamic_partition">{#dynamic_partition}</div>
+
+Partitions `data` into `num_partitions` tensors using indices from `partitions`.
+
+For each index tuple `js` of size `partitions.ndim`, the slice `data[js, ...]`
+becomes part of `outputs[partitions[js]]`. The slices with `partitions[js] = i`
+are placed in `outputs[i]` in lexicographic order of `js`, and the first
+dimension of `outputs[i]` is the number of entries in `partitions` equal to `i`.
+In detail,
+
+ outputs[i].shape = [sum(partitions == i)] + data.shape[partitions.ndim:]
+
+ outputs[i] = pack([data[js, ...] for js if partitions[js] == i])
+
+`data.shape` must start with `partitions.shape`.
+
+For example:
+
+ # Scalar partitions
+ partitions = 1
+ num_partitions = 2
+ data = [10, 20]
+ outputs[0] = [] # Empty with shape [0, 2]
+ outputs[1] = [[10, 20]]
+
+ # Vector partitions
+ partitions = [0, 0, 1, 1, 0]
+ num_partitions = 2
+ data = [10, 20, 30, 40, 50]
+ outputs[0] = [10, 20, 50]
+ outputs[1] = [30, 40]
+
+<div style="width:70%; margin:auto; margin-bottom:10px; margin-top:20px;">
+<img style="width:100%" src="../images/DynamicPartition.png" alt>
+</div>
+
+##### Args:
+
+
+* <b>data</b>: A `Tensor`.
+* <b>partitions</b>: A `Tensor` of type `int32`.
+ Any shape. Indices in the range `[0, num_partitions)`.
+* <b>num_partitions</b>: An `int` that is `>= 1`.
+ The number of partitions to output.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A list of `num_partitions` `Tensor` objects of the same type as data.
+
+
+- - -
+
+### tf.dynamic_stitch(indices, data, name=None) <div class="md-anchor" id="dynamic_stitch">{#dynamic_stitch}</div>
+
+Interleaves the values from the `data` tensors into a single tensor.
+
+Builds a merged tensor such that
+
+ merged[indices[m][i, ..., j], ...] = data[m][i, ..., j, ...]
+
+For example, if each `indices[m]` is scalar or vector, we have
+
+ # Scalar indices
+ merged[indices[m], ...] = data[m][...]
+
+ # Vector indices
+ merged[indices[m][i], ...] = data[m][i, ...]
+
+Each `data[i].shape` must start with the corresponding `indices[i].shape`,
+and the rest of `data[i].shape` must be constant w.r.t. `i`. That is, we
+must have `data[i].shape = indices[i].shape + constant`. In terms of this
+`constant`, the output shape is
+
+ merged.shape = [max(indices) + 1] + constant
+
+Values are merged in order, so if an index appears in both `indices[m][i]` and
+`indices[n][j]` for `(m,i) < (n,j)` the slice `data[n][j]` will appear in the
+merged result.
+
+For example:
+
+ indices[0] = 6
+ indices[1] = [4, 1]
+ indices[2] = [[5, 2], [0, 3]]
+ data[0] = [61, 62]
+ data[1] = [[41, 42], [11, 12]]
+ data[2] = [[[51, 52], [21, 22]], [[1, 2], [31, 32]]]
+ merged = [[1, 2], [11, 12], [21, 22], [31, 32], [41, 42],
+ [51, 52], [61, 62]]
+
+<div style="width:70%; margin:auto; margin-bottom:10px; margin-top:20px;">
+<img style="width:100%" src="../images/DynamicStitch.png" alt>
+</div>
+
+##### Args:
+
+
+* <b>indices</b>: A list of at least 2 `Tensor` objects of type `int32`.
+* <b>data</b>: A list of `Tensor` objects, all of the same type, with the same number of elements as `indices`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `data`.
+
+
diff --git a/tensorflow/g3doc/api_docs/python/client.md b/tensorflow/g3doc/api_docs/python/client.md
new file mode 100644
index 0000000000..b37057e4b5
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/client.md
@@ -0,0 +1,638 @@
+<!-- This file is machine generated: DO NOT EDIT! -->
+
+# Running Graphs
+<!-- TOC-BEGIN This section is generated by neural network: DO NOT EDIT! -->
+## Contents
+* [Session management](#AUTOGENERATED-session-management)
+ * [class tf.Session](#Session)
+ * [tf.get_default_session()](#get_default_session)
+* [Error classes](#AUTOGENERATED-error-classes)
+ * [class tf.OpError](#OpError)
+ * [class tf.errors.CancelledError](#CancelledError)
+ * [class tf.errors.UnknownError](#UnknownError)
+ * [class tf.errors.InvalidArgumentError](#InvalidArgumentError)
+ * [class tf.errors.DeadlineExceededError](#DeadlineExceededError)
+ * [class tf.errors.NotFoundError](#NotFoundError)
+ * [class tf.errors.AlreadyExistsError](#AlreadyExistsError)
+ * [class tf.errors.PermissionDeniedError](#PermissionDeniedError)
+ * [class tf.errors.UnauthenticatedError](#UnauthenticatedError)
+ * [class tf.errors.ResourceExhaustedError](#ResourceExhaustedError)
+ * [class tf.errors.FailedPreconditionError](#FailedPreconditionError)
+ * [class tf.errors.AbortedError](#AbortedError)
+ * [class tf.errors.OutOfRangeError](#OutOfRangeError)
+ * [class tf.errors.UnimplementedError](#UnimplementedError)
+ * [class tf.errors.InternalError](#InternalError)
+ * [class tf.errors.UnavailableError](#UnavailableError)
+ * [class tf.errors.DataLossError](#DataLossError)
+
+
+<!-- TOC-END This section was generated by neural network, THANKS FOR READING! -->
+
+This library contains classes for launching graphs and executing operations.
+
+The [basic usage](../../get_started/index.md#basic-usage) guide has
+examples of how a graph is launched in a [`tf.Session`](#Session).
+
+## Session management <div class="md-anchor" id="AUTOGENERATED-session-management">{#AUTOGENERATED-session-management}</div>
+
+- - -
+
+### class tf.Session <div class="md-anchor" id="Session">{#Session}</div>
+
+A class for running TensorFlow operations.
+
+A `Session` object encapsulates the environment in which `Operation`
+objects are executed, and `Tensor` objects are evaluated. For
+example:
+
+```python
+# Build a graph.
+a = tf.constant(5.0)
+b = tf.constant(6.0)
+c = a * b
+
+# Launch the graph in a session.
+sess = tf.Session()
+
+# Evaluate the tensor `c`.
+print sess.run(c)
+```
+
+A session may own resources, such as
+[variables](state_ops.md#Variable), [queues](io_ops.md#QueueBase),
+and [readers](io_ops.md#ReaderBase). It is important to release
+these resources when they are no longer required. To do this, either
+invoke the [`close()`](#Session.close) method on the session, or use
+the session as a context manager. The following two examples are
+equivalent:
+
+```python
+# Using the `close()` method.
+sess = tf.Session()
+sess.run(...)
+sess.close()
+
+# Using the context manager.
+with tf.Session() as sess:
+ sess.run(...)
+```
+
+The [`ConfigProto`]
+(https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/framework/config.proto)
+protocol buffer exposes various configuration options for a
+session. For example, to create a session that uses soft constraints
+for device placement, and log the resulting placement decisions,
+create a session as follows:
+
+```python
+# Launch the graph in a session that allows soft device placement and
+# logs the placement decisions.
+sess = tf.Session(config=tf.ConfigProto(allow_soft_placement=True,
+ log_device_placement=True))
+```
+
+- - -
+
+#### tf.Session.__init__(target='', graph=None, config=None) {#Session.__init__}
+
+Creates a new TensorFlow session.
+
+If no `graph` argument is specified when constructing the session,
+the default graph will be launched in the session. If you are
+using more than one graph (created with `tf.Graph()`) in the same
+process, you will have to use different sessions for each graph,
+but each graph can be used in multiple sessions. In this case, it
+is often clearer to pass the graph to be launched explicitly to
+the session constructor.
+
+##### Args:
+
+
+* <b>target</b>: (Optional.) The execution engine to connect to.
+ Defaults to using an in-process engine. At present, no value
+ other than the empty string is supported.
+* <b>graph</b>: (Optional.) The `Graph` to be launched (described above).
+* <b>config</b>: (Optional.) A [`ConfigProto`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/framework/config.proto)
+ protocol buffer with configuration options for the session.
+
+
+- - -
+
+#### tf.Session.run(fetches, feed_dict=None) {#Session.run}
+
+Runs the operations and evaluates the tensors in `fetches`.
+
+This method runs one "step" of TensorFlow computation, by
+running the necessary graph fragment to execute every `Operation`
+and evaluate every `Tensor` in `fetches`, substituting the values in
+`feed_dict` for the corresponding input values.
+
+The `fetches` argument may be a list of graph elements or a single
+graph element, and these determine the return value of this
+method. A graph element can be one of the following types:
+
+* If the *i*th element of `fetches` is an
+ [`Operation`](framework.md#Operation), the *i*th return value
+ will be `None`.
+* If the *i*th element of `fetches` is a
+ [`Tensor`](framework.md#Tensor), the *i*th return value will
+ be a numpy ndarray containing the value of that tensor.
+* If the *i*th element of `fetches` is a
+ [`SparseTensor`](sparse_ops.md#SparseTensor), the *i*th
+ return value will be a
+ [`SparseTensorValue`](sparse_ops.md#SparseTensorValue)
+ containing the value of that sparse tensor.
+
+The optional `feed_dict` argument allows the caller to override
+the value of tensors in the graph. Each key in `feed_dict` can be
+one of the following types:
+
+* If the key is a [`Tensor`](framework.md#Tensor), the
+ value may be a Python scalar, string, list, or numpy ndarray
+ that can be converted to the same `dtype` as that
+ tensor. Additionally, if the key is a
+ [placeholder](io_ops.md#placeholder), the shape of the value
+ will be checked for compatibility with the placeholder.
+* If the key is a [`SparseTensor`](sparse_ops.md#SparseTensor),
+ the value should be a
+ [`SparseTensorValue`](sparse_ops.md#SparseTensorValue).
+
+##### Args:
+
+
+* <b>fetches</b>: A single graph element, or a list of graph elements
+ (described above).
+* <b>feed_dict</b>: A dictionary that maps graph elements to values
+ (described above).
+
+##### Returns:
+
+ Either a single value if `fetches` is a single graph element, or
+ a list of values if `fetches` is a list (described above).
+
+##### Raises:
+
+
+* <b>RuntimeError</b>: If this `Session` is in an invalid state (e.g. has been
+ closed).
+* <b>TypeError</b>: If `fetches` or `feed_dict` keys are of an inappropriate type.
+* <b>ValueError</b>: If `fetches` or `feed_dict` keys are invalid or refer to a
+ `Tensor` that doesn't exist.
+
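+For example, a minimal sketch of fetching one tensor while feeding a
+placeholder (the graph is illustrative):
+
+```python
+import tensorflow as tf
+
+x = tf.placeholder(tf.float32, shape=[2])
+y = x * 2.0
+
+with tf.Session() as sess:
+  # Feed a value for `x` and fetch the resulting value of `y`.
+  print sess.run(y, feed_dict={x: [1.0, 3.0]})  # ==> [ 2.  6.]
+```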
+
+- - -
+
+#### tf.Session.close() {#Session.close}
+
+Closes this session.
+
+Calling this method frees all resources associated with the session.
+
+##### Raises:
+
+
+* <b>RuntimeError</b>: If an error occurs while closing the session.
+
+
+
+- - -
+
+#### tf.Session.graph {#Session.graph}
+
+The graph that was launched in this session.
+
+
+- - -
+
+#### tf.Session.as_default() {#Session.as_default}
+
+Returns a context manager that makes this object the default session.
+
+Use with the `with` keyword to specify that calls to
+[`Operation.run()`](framework.md#Operation.run) or
+[`Tensor.run()`](framework.md#Tensor.run) should be executed in
+this session.
+
+```python
+c = tf.constant(...)
+sess = tf.Session()
+
+with sess.as_default():
+ assert tf.get_default_session() is sess
+ print c.eval()
+```
+
+To get the current default session, use
+[`tf.get_default_session()`](#get_default_session).
+
+
+*N.B.* The `as_default` context manager *does not* close the
+session when you exit the context, and you must close the session
+explicitly.
+
+```python
+c = tf.constant(...)
+sess = tf.Session()
+with sess.as_default():
+ print c.eval()
+# ...
+with sess.as_default():
+ print c.eval()
+
+sess.close()
+```
+
+Alternatively, you can use `with tf.Session():` to create a
+session that is automatically closed on exiting the context,
+including when an uncaught exception is raised.
+
+*N.B.* The default graph is a property of the current thread. If you
+create a new thread, and wish to use the default session in that
+thread, you must explicitly add a `with sess.as_default():` in that
+thread's function.
+
+##### Returns:
+
+ A context manager using this session as the default session.
+
+
+
+
+- - -
+
+### tf.get_default_session() <div class="md-anchor" id="get_default_session">{#get_default_session}</div>
+
+Returns the default session for the current thread.
+
+The returned `Session` will be the innermost session on which a
+`Session` or `Session.as_default()` context has been entered.
+
+*N.B.* The default session is a property of the current thread. If you
+create a new thread, and wish to use the default session in that
+thread, you must explicitly add a `with sess.as_default():` in that
+thread's function.
+
+##### Returns:
+
+ The default `Session` being used in the current thread.
+
+
+
+## Error classes <div class="md-anchor" id="AUTOGENERATED-error-classes">{#AUTOGENERATED-error-classes}</div>
+
+- - -
+
+### class tf.OpError <div class="md-anchor" id="OpError">{#OpError}</div>
+
+A generic error that is raised when TensorFlow execution fails.
+
+Whenever possible, the session will raise a more specific subclass
+of `OpError` from the `tf.errors` module.
+
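+For example, a hedged sketch of catching a specific subclass (the failing
+graph here is illustrative):
+
+```python
+import tensorflow as tf
+
+x = tf.placeholder(tf.float32)
+y = x + 1.0
+
+with tf.Session() as sess:
+  try:
+    sess.run(y)  # fails: no value was fed for the placeholder `x`
+  except tf.errors.InvalidArgumentError as e:
+    print e.message
+```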
+- - -
+
+#### tf.OpError.op {#OpError.op}
+
+The operation that failed, if known.
+
+*N.B.* If the failed op was synthesized at runtime, e.g. a `Send`
+or `Recv` op, there will be no corresponding
+[`Operation`](framework.md#Operation) object. In that case, this
+will return `None`, and you should instead use the
+[`node_def`](#OpError.node_def) to discover information about the op.
+
+##### Returns:
+
+ The `Operation` that failed, or None.
+
+- - -
+
+#### tf.OpError.node_def {#OpError.node_def}
+
+The `NodeDef` proto representing the op that failed.
+
+
+#### Other Methods
+- - -
+
+#### tf.OpError.__init__(node_def, op, message, error_code) {#OpError.__init__}
+
+Creates a new OpError indicating that a particular op failed.
+
+##### Args:
+
+
+* <b>node_def</b>: The graph_pb2.NodeDef proto representing the op that failed.
+* <b>op</b>: The ops.Operation that failed, if known; otherwise None.
+* <b>message</b>: The message string describing the failure.
+* <b>error_code</b>: The error_codes_pb2.Code describing the error.
+
+
+- - -
+
+#### tf.OpError.error_code {#OpError.error_code}
+
+The integer error code that describes the error.
+
+- - -
+
+#### tf.OpError.message {#OpError.message}
+
+The error message that describes the error.
+
+
+- - -
+
+### class tf.errors.CancelledError <div class="md-anchor" id="CancelledError">{#CancelledError}</div>
+
+Raised when an operation or step is cancelled.
+
+For example, a long-running operation (e.g.
+[`queue.enqueue()`](io_ops.md#QueueBase.enqueue)) may be cancelled by
+running another operation (e.g.
+[`queue.close(cancel_pending_enqueues=True)`](io_ops.md#QueueBase.close)),
+or by [closing the session](client.md#Session.close). A step that is
+running such a long-running operation will fail by raising `CancelledError`.
+
+- - -
+
+#### tf.errors.CancelledError.__init__(node_def, op, message) {#CancelledError.__init__}
+
+Creates a `CancelledError`.
+
+
+
+- - -
+
+### class tf.errors.UnknownError <div class="md-anchor" id="UnknownError">{#UnknownError}</div>
+
+Unknown error.
+
+An example of where this error may be returned is if a Status value
+received from another address space belongs to an error-space that
+is not known to this address space. Also errors raised by APIs that
+do not return enough error information may be converted to this
+error.
+
+- - -
+
+#### tf.errors.UnknownError.__init__(node_def, op, message, error_code=2) {#UnknownError.__init__}
+
+Creates an `UnknownError`.
+
+
+
+- - -
+
+### class tf.errors.InvalidArgumentError <div class="md-anchor" id="InvalidArgumentError">{#InvalidArgumentError}</div>
+
+Raised when an operation receives an invalid argument.
+
+This may occur, for example, if an operation receives an input
+tensor that has an invalid value or shape. For example, the
+[`tf.matmul()`](math_ops.md#matmul) op will raise this error if it
+receives an input that is not a matrix, and the
+[`tf.reshape()`](array_ops.md#reshape) op will raise this error if
+the new shape does not match the number of elements in the input
+tensor.
+
+- - -
+
+#### tf.errors.InvalidArgumentError.__init__(node_def, op, message) {#InvalidArgumentError.__init__}
+
+Creates an `InvalidArgumentError`.
+
+
+
+- - -
+
+### class tf.errors.DeadlineExceededError <div class="md-anchor" id="DeadlineExceededError">{#DeadlineExceededError}</div>
+
+Raised when a deadline expires before an operation could complete.
+
+This exception is not currently used.
+
+- - -
+
+#### tf.errors.DeadlineExceededError.__init__(node_def, op, message) {#DeadlineExceededError.__init__}
+
+Creates a `DeadlineExceededError`.
+
+
+
+- - -
+
+### class tf.errors.NotFoundError <div class="md-anchor" id="NotFoundError">{#NotFoundError}</div>
+
+Raised when a requested entity (e.g., a file or directory) was not found.
+
+For example, running the
+[`tf.WholeFileReader.read()`](io_ops.md#WholeFileReader) operation
+could raise `NotFoundError` if it receives the name of a file that
+does not exist.
+
+- - -
+
+#### tf.errors.NotFoundError.__init__(node_def, op, message) {#NotFoundError.__init__}
+
+Creates a `NotFoundError`.
+
+
+
+- - -
+
+### class tf.errors.AlreadyExistsError <div class="md-anchor" id="AlreadyExistsError">{#AlreadyExistsError}</div>
+
+Raised when an entity that we attempted to create already exists.
+
+For example, running an operation that saves a file
+(e.g. [`tf.train.Saver.save()`](train.md#Saver.save)) could
+potentially raise this exception if an explicit filename for an
+existing file was passed.
+
+- - -
+
+#### tf.errors.AlreadyExistsError.__init__(node_def, op, message) {#AlreadyExistsError.__init__}
+
+Creates an `AlreadyExistsError`.
+
+
+
+- - -
+
+### class tf.errors.PermissionDeniedError <div class="md-anchor" id="PermissionDeniedError">{#PermissionDeniedError}</div>
+
+Raised when the caller does not have permission to run an operation.
+
+For example, running the
+[`tf.WholeFileReader.read()`](io_ops.md#WholeFileReader) operation
+could raise `PermissionDeniedError` if it receives the name of a
+file for which the user does not have the read file permission.
+
+- - -
+
+#### tf.errors.PermissionDeniedError.__init__(node_def, op, message) {#PermissionDeniedError.__init__}
+
+Creates a `PermissionDeniedError`.
+
+
+
+- - -
+
+### class tf.errors.UnauthenticatedError <div class="md-anchor" id="UnauthenticatedError">{#UnauthenticatedError}</div>
+
+The request does not have valid authentication credentials.
+
+This exception is not currently used.
+
+- - -
+
+#### tf.errors.UnauthenticatedError.__init__(node_def, op, message) {#UnauthenticatedError.__init__}
+
+Creates an `UnauthenticatedError`.
+
+
+
+- - -
+
+### class tf.errors.ResourceExhaustedError <div class="md-anchor" id="ResourceExhaustedError">{#ResourceExhaustedError}</div>
+
+Some resource has been exhausted.
+
+For example, this error might be raised if a per-user quota is
+exhausted, or perhaps the entire file system is out of space.
+
+- - -
+
+#### tf.errors.ResourceExhaustedError.__init__(node_def, op, message) {#ResourceExhaustedError.__init__}
+
+Creates a `ResourceExhaustedError`.
+
+
+
+- - -
+
+### class tf.errors.FailedPreconditionError <div class="md-anchor" id="FailedPreconditionError">{#FailedPreconditionError}</div>
+
+Operation was rejected because the system is not in a state to execute it.
+
+This exception is most commonly raised when running an operation
+that reads a [`tf.Variable`](state_ops.md#Variable) before it has
+been initialized.
+
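+For example, a short sketch of the uninitialized-variable case (the
+variable is illustrative):
+
+```python
+import tensorflow as tf
+
+v = tf.Variable(tf.zeros([2]))
+
+with tf.Session() as sess:
+  try:
+    sess.run(v)  # raises FailedPreconditionError: `v` is uninitialized
+  except tf.errors.FailedPreconditionError:
+    sess.run(tf.initialize_all_variables())
+    print sess.run(v)  # ==> [ 0.  0.]
+```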
+- - -
+
+#### tf.errors.FailedPreconditionError.__init__(node_def, op, message) {#FailedPreconditionError.__init__}
+
+Creates a `FailedPreconditionError`.
+
+
+
+- - -
+
+### class tf.errors.AbortedError <div class="md-anchor" id="AbortedError">{#AbortedError}</div>
+
+The operation was aborted, typically due to a concurrent action.
+
+For example, running a [`queue.enqueue()`](io_ops.md#QueueBase.enqueue)
+operation may raise `AbortedError` if a
+[`queue.close()`](io_ops.md#QueueBase.close) operation previously ran.
+
+- - -
+
+#### tf.errors.AbortedError.__init__(node_def, op, message) {#AbortedError.__init__}
+
+Creates an `AbortedError`.
+
+
+
+- - -
+
+### class tf.errors.OutOfRangeError <div class="md-anchor" id="OutOfRangeError">{#OutOfRangeError}</div>
+
+Raised when an operation executed past the valid range.
+
+This exception is raised in "end-of-file" conditions, such as when a
+[`queue.dequeue()`](io_ops.md#QueueBase.dequeue) operation is
+blocked on an empty queue, and a
+[`queue.close()`](io_ops.md#QueueBase.close) operation executes.
+
+- - -
+
+#### tf.errors.OutOfRangeError.__init__(node_def, op, message) {#OutOfRangeError.__init__}
+
+Creates an `OutOfRangeError`.
+
+
+
+- - -
+
+### class tf.errors.UnimplementedError <div class="md-anchor" id="UnimplementedError">{#UnimplementedError}</div>
+
+Raised when an operation has not been implemented.
+
+Some operations may raise this error when passed otherwise-valid
+arguments that they do not currently support. For example, running
+the [`tf.nn.max_pool()`](nn.md#max_pool) operation would raise this
+error if pooling was requested on the batch dimension, because this
+is not yet supported.
+
+- - -
+
+#### tf.errors.UnimplementedError.__init__(node_def, op, message) {#UnimplementedError.__init__}
+
+Creates an `UnimplementedError`.
+
+
+
+- - -
+
+### class tf.errors.InternalError <div class="md-anchor" id="InternalError">{#InternalError}</div>
+
+Raised when the system experiences an internal error.
+
+This exception is raised when some invariant expected by the runtime
+has been broken. Catching this exception is not recommended.
+
+- - -
+
+#### tf.errors.InternalError.__init__(node_def, op, message) {#InternalError.__init__}
+
+Creates an `InternalError`.
+
+
+
+- - -
+
+### class tf.errors.UnavailableError <div class="md-anchor" id="UnavailableError">{#UnavailableError}</div>
+
+Raised when the runtime is currently unavailable.
+
+This exception is not currently used.
+
+- - -
+
+#### tf.errors.UnavailableError.__init__(node_def, op, message) {#UnavailableError.__init__}
+
+Creates an `UnavailableError`.
+
+
+
+- - -
+
+### class tf.errors.DataLossError <div class="md-anchor" id="DataLossError">{#DataLossError}</div>
+
+Raised when unrecoverable data loss or corruption is encountered.
+
+For example, this may be raised by running a
+[`tf.WholeFileReader.read()`](io_ops.md#WholeFileReader) operation,
+if the file is truncated while it is being read.
+
+- - -
+
+#### tf.errors.DataLossError.__init__(node_def, op, message) {#DataLossError.__init__}
+
+Creates a `DataLossError`.
+
+
+
diff --git a/tensorflow/g3doc/api_docs/python/constant_op.md b/tensorflow/g3doc/api_docs/python/constant_op.md
new file mode 100644
index 0000000000..34d2b511ab
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/constant_op.md
@@ -0,0 +1,565 @@
+<!-- This file is machine generated: DO NOT EDIT! -->
+
+# Constants, Sequences, and Random Values
+<!-- TOC-BEGIN This section is generated by neural network: DO NOT EDIT! -->
+## Contents
+* [Constant Value Tensors](#AUTOGENERATED-constant-value-tensors)
+ * [tf.zeros(shape, dtype=tf.float32, name=None)](#zeros)
+ * [tf.zeros_like(tensor, dtype=None, name=None)](#zeros_like)
+ * [tf.ones(shape, dtype=tf.float32, name=None)](#ones)
+ * [tf.ones_like(tensor, dtype=None, name=None)](#ones_like)
+ * [tf.fill(dims, value, name=None)](#fill)
+ * [tf.constant(value, dtype=None, shape=None, name='Const')](#constant)
+* [Sequences](#AUTOGENERATED-sequences)
+ * [tf.linspace(start, stop, num, name=None)](#linspace)
+ * [tf.range(start, limit, delta=1, name='range')](#range)
+* [Random Tensors](#AUTOGENERATED-random-tensors)
+ * [Examples:](#AUTOGENERATED-examples-)
+ * [tf.random_normal(shape, mean=0.0, stddev=1.0, dtype=tf.float32, seed=None, name=None)](#random_normal)
+ * [tf.truncated_normal(shape, mean=0.0, stddev=1.0, dtype=tf.float32, seed=None, name=None)](#truncated_normal)
+ * [tf.random_uniform(shape, minval=0.0, maxval=1.0, dtype=tf.float32, seed=None, name=None)](#random_uniform)
+ * [tf.random_shuffle(value, seed=None, name=None)](#random_shuffle)
+ * [tf.set_random_seed(seed)](#set_random_seed)
+
+
+<!-- TOC-END This section was generated by neural network, THANKS FOR READING! -->
+
+## Constant Value Tensors <div class="md-anchor" id="AUTOGENERATED-constant-value-tensors">{#AUTOGENERATED-constant-value-tensors}</div>
+
+TensorFlow provides several operations that you can use to generate constants.
+
+- - -
+
+### tf.zeros(shape, dtype=tf.float32, name=None) <div class="md-anchor" id="zeros">{#zeros}</div>
+
+Creates a tensor with all elements set to zero.
+
+This operation returns a tensor of type `dtype` with shape `shape` and
+all elements set to zero.
+
+For example:
+
+```python
+tf.zeros([3, 4], tf.int32) ==> [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
+```
+
+##### Args:
+
+
+* <b>shape</b>: Either a list of integers, or a 1-D `Tensor` of type `int32`.
+* <b>dtype</b>: The type of an element in the resulting `Tensor`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor` with all elements set to zero.
+
+
+- - -
+
+### tf.zeros_like(tensor, dtype=None, name=None) <div class="md-anchor" id="zeros_like">{#zeros_like}</div>
+
+Creates a tensor with all elements set to zero.
+
+Given a single tensor (`tensor`), this operation returns a tensor of the
+same type and shape as `tensor` with all elements set to zero. Optionally,
+you can use `dtype` to specify a new type for the returned tensor.
+
+For example:
+
+```python
+# 'tensor' is [[1, 2, 3], [4, 5, 6]]
+tf.zeros_like(tensor) ==> [[0, 0, 0], [0, 0, 0]]
+```
+
+##### Args:
+
+
+* <b>tensor</b>: A `Tensor`.
+* <b>dtype</b>: A type for the returned `Tensor`. Must be `float32`, `float64`,
+  `int8`, `int16`, `int32`, `int64`, `uint8`, or `complex64`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor` with all elements set to zero.
+
+
+
+- - -
+
+### tf.ones(shape, dtype=tf.float32, name=None) <div class="md-anchor" id="ones">{#ones}</div>
+
+Creates a tensor with all elements set to 1.
+
+This operation returns a tensor of type `dtype` with shape `shape` and all
+elements set to 1.
+
+For example:
+
+```python
+tf.ones([2, 3], tf.int32) ==> [[1, 1, 1], [1, 1, 1]]
+```
+
+##### Args:
+
+
+* <b>shape</b>: Either a list of integers, or a 1-D `Tensor` of type `int32`.
+* <b>dtype</b>: The type of an element in the resulting `Tensor`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor` with all elements set to 1.
+
+
+- - -
+
+### tf.ones_like(tensor, dtype=None, name=None) <div class="md-anchor" id="ones_like">{#ones_like}</div>
+
+Creates a tensor with all elements set to 1.
+
+Given a single tensor (`tensor`), this operation returns a tensor of the same
+type and shape as `tensor` with all elements set to 1. Optionally, you can
+specify a new type (`dtype`) for the returned tensor.
+
+For example:
+
+```python
+# 'tensor' is [[1, 2, 3], [4, 5, 6]]
+tf.ones_like(tensor) ==> [[1, 1, 1], [1, 1, 1]]
+```
+
+##### Args:
+
+
+* <b>tensor</b>: A `Tensor`.
+* <b>dtype</b>: A type for the returned `Tensor`. Must be `float32`, `float64`,
+  `int8`, `int16`, `int32`, `int64`, `uint8`, or `complex64`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor` with all elements set to 1.
+
+
+
+- - -
+
+### tf.fill(dims, value, name=None) <div class="md-anchor" id="fill">{#fill}</div>
+
+Creates a tensor filled with a scalar value.
+
+This operation creates a tensor of shape `dims` and fills it with `value`.
+
+For example:
+
+```prettyprint
+# output tensor shape needs to be [2, 3]
+# so 'dims' is [2, 3]
+fill(dims, 9) ==> [[9, 9, 9]
+ [9, 9, 9]]
+```
+
+##### Args:
+
+
+* <b>dims</b>: A `Tensor` of type `int32`.
+ 1-D. Represents the shape of the output tensor.
+* <b>value</b>: A `Tensor`. 0-D (scalar). Value to fill the returned tensor.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `value`.
+
+
+
+- - -
+
+### tf.constant(value, dtype=None, shape=None, name='Const') <div class="md-anchor" id="constant">{#constant}</div>
+
+Creates a constant tensor.
+
+The resulting tensor is populated with values of type `dtype`, as
+specified by arguments `value` and (optionally) `shape` (see examples
+below).
+
+The argument `value` can be a constant value, or a list of values of type
+`dtype`. If `value` is a list, then the length of the list must be less
+than or equal to the number of elements implied by the `shape` argument (if
+specified). In the case where the list length is less than the number of
+elements specified by `shape`, the last element in the list will be used
+to fill the remaining entries.
+
+The argument `shape` is optional. If present, it specifies the dimensions
+of the resulting tensor. If not present, then the tensor is a scalar (0-D)
+if `value` is a scalar, or 1-D otherwise.
+
+If the argument `dtype` is not specified, then the type is inferred from
+the type of `value`.
+
+For example:
+
+```python
+# Constant 1-D Tensor populated with value list.
+tensor = tf.constant([1, 2, 3, 4, 5, 6, 7]) ==> [1 2 3 4 5 6 7]
+
+# Constant 2-D tensor populated with scalar value -1.
+tensor = tf.constant(-1.0, shape=[2, 3]) ==> [[-1. -1. -1.]
+                                              [-1. -1. -1.]]
+```
+
+##### Args:
+
+
+* <b>value</b>: A constant value (or list) of output type `dtype`.
+* <b>dtype</b>: The type of the elements of the resulting tensor.
+* <b>shape</b>: Optional dimensions of resulting tensor.
+* <b>name</b>: Optional name for the tensor.
+
+##### Returns:
+
+ A Constant Tensor.
+
+
+
+## Sequences <div class="md-anchor" id="AUTOGENERATED-sequences">{#AUTOGENERATED-sequences}</div>
+
+- - -
+
+### tf.linspace(start, stop, num, name=None) <div class="md-anchor" id="linspace">{#linspace}</div>
+
+Generates values in an interval.
+
+A sequence of `num` evenly-spaced values is generated beginning at `start`.
+If `num > 1`, the values in the sequence increase by
+`(stop - start) / (num - 1)`, so that the last one is exactly `stop`.
+
+For example:
+
+```
+tf.linspace(10.0, 12.0, 3, name="linspace") ==> [ 10.0 11.0 12.0]
+```
+
+##### Args:
+
+
+* <b>start</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`.
+ First entry in the range.
+* <b>stop</b>: A `Tensor`. Must have the same type as `start`.
+ Last entry in the range.
+* <b>num</b>: A `Tensor` of type `int32`. Number of values to generate.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `start`. 1-D. The generated values.
+
+
+
+- - -
+
+### tf.range(start, limit, delta=1, name='range') <div class="md-anchor" id="range">{#range}</div>
+
+Creates a sequence of integers.
+
+This operation creates a sequence of integers that begins at `start` and
+extends by increments of `delta` up to but not including `limit`.
+
+For example:
+
+```
+# 'start' is 3
+# 'limit' is 18
+# 'delta' is 3
+tf.range(start, limit, delta) ==> [3, 6, 9, 12, 15]
+```
+
+##### Args:
+
+
+* <b>start</b>: A 0-D (scalar) `Tensor` of type `int32`. First entry in sequence.
+* <b>limit</b>: A 0-D (scalar) `Tensor` of type `int32`. Upper limit of sequence,
+  exclusive.
+* <b>delta</b>: A 0-D `Tensor` (scalar) of type `int32`. Optional. Default is 1.
+ Number that increments `start`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+  A 1-D `int32` `Tensor`.
+
+
+
+## Random Tensors <div class="md-anchor" id="AUTOGENERATED-random-tensors">{#AUTOGENERATED-random-tensors}</div>
+
+TensorFlow has several ops that create random tensors with different
+distributions. The random ops are stateful, and create new random values each
+time they are evaluated.
+
+The `seed` keyword argument in these functions acts in conjunction with
+the graph-level random seed. Changing either the graph-level seed using
+[`set_random_seed`](constant_op.md#set_random_seed) or the op-level seed
+will change the underlying seed of these operations. Setting neither the
+graph-level nor the op-level seed results in a random seed for all operations.
+See [`set_random_seed`](constant_op.md#set_random_seed) for details on the
+interaction between operation-level and graph-level random seeds.
+
+### Examples: <div class="md-anchor" id="AUTOGENERATED-examples-">{#AUTOGENERATED-examples-}</div>
+
+```python
+# Create a tensor of shape [2, 3] consisting of random normal values, with mean
+# -1 and standard deviation 4.
+norm = tf.random_normal([2, 3], mean=-1, stddev=4)
+
+# Shuffle the first dimension of a tensor
+c = tf.constant([[1, 2], [3, 4], [5, 6]])
+shuff = tf.random_shuffle(c)
+
+# Each time we run these ops, different results are generated
+sess = tf.Session()
+print sess.run(norm)
+print sess.run(norm)
+
+# Set an op-level seed to generate repeatable sequences across sessions.
+norm = tf.random_normal([2, 3], seed=1234)
+sess = tf.Session()
+print sess.run(norm)
+print sess.run(norm)
+```
+
+Another common use of random values is the initialization of variables. Also see
+the [Variables How To](../../how_tos/variables/index.md).
+
+```python
+# Use random uniform values in [0, 1) as the initializer for a variable of shape
+# [2, 3]. The default type is float32.
+var = tf.Variable(tf.random_uniform([2, 3]), name="var")
+init = tf.initialize_all_variables()
+
+sess = tf.Session()
+sess.run(init)
+print sess.run(var)
+```
+
+- - -
+
+### tf.random_normal(shape, mean=0.0, stddev=1.0, dtype=tf.float32, seed=None, name=None) <div class="md-anchor" id="random_normal">{#random_normal}</div>
+
+Outputs random values from a normal distribution.
+
+##### Args:
+
+
+* <b>shape</b>: A 1-D integer Tensor or Python array. The shape of the output tensor.
+* <b>mean</b>: A 0-D Tensor or Python value of type `dtype`. The mean of the normal
+ distribution.
+* <b>stddev</b>: A 0-D Tensor or Python value of type `dtype`. The standard deviation
+ of the normal distribution.
+* <b>dtype</b>: The type of the output.
+* <b>seed</b>: A Python integer. Used to create a random seed for the distribution.
+ See [`set_random_seed`](constant_op.md#set_random_seed) for behavior.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A tensor of the specified shape filled with random normal values.
+
+
+- - -
+
+### tf.truncated_normal(shape, mean=0.0, stddev=1.0, dtype=tf.float32, seed=None, name=None) <div class="md-anchor" id="truncated_normal">{#truncated_normal}</div>
+
+Outputs random values from a truncated normal distribution.
+
+The generated values follow a normal distribution with specified mean and
+standard deviation, except that values whose magnitude is more than 2 standard
+deviations from the mean are dropped and re-picked.
+
+##### Args:
+
+
+* <b>shape</b>: A 1-D integer Tensor or Python array. The shape of the output tensor.
+* <b>mean</b>: A 0-D Tensor or Python value of type `dtype`. The mean of the
+ truncated normal distribution.
+* <b>stddev</b>: A 0-D Tensor or Python value of type `dtype`. The standard deviation
+ of the truncated normal distribution.
+* <b>dtype</b>: The type of the output.
+* <b>seed</b>: A Python integer. Used to create a random seed for the distribution.
+ See [`set_random_seed`](constant_op.md#set_random_seed) for behavior.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A tensor of the specified shape filled with random truncated normal values.
+
+
+- - -
+
+### tf.random_uniform(shape, minval=0.0, maxval=1.0, dtype=tf.float32, seed=None, name=None) <div class="md-anchor" id="random_uniform">{#random_uniform}</div>
+
+Outputs random values from a uniform distribution.
+
+The generated values follow a uniform distribution in the range
+`[minval, maxval)`. The lower bound `minval` is included in the range, while
+the upper bound `maxval` is excluded.
+
+##### Args:
+
+
+* <b>shape</b>: A 1-D integer Tensor or Python array. The shape of the output tensor.
+* <b>minval</b>: A 0-D Tensor or Python value of type `dtype`. The lower bound on the
+ range of random values to generate.
+* <b>maxval</b>: A 0-D Tensor or Python value of type `dtype`. The upper bound on
+ the range of random values to generate.
+* <b>dtype</b>: The type of the output.
+* <b>seed</b>: A Python integer. Used to create a random seed for the distribution.
+ See [`set_random_seed`](constant_op.md#set_random_seed) for behavior.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A tensor of the specified shape filled with random uniform values.
+
+
+- - -
+
+### tf.random_shuffle(value, seed=None, name=None) <div class="md-anchor" id="random_shuffle">{#random_shuffle}</div>
+
+Randomly shuffles a tensor along its first dimension.
+
+The tensor is shuffled along dimension 0, such that each `value[j]` is mapped
+to one and only one `output[i]`. For example, a mapping that might occur for a
+3x2 tensor is:
+
+```python
+[[1, 2], [[5, 6],
+ [3, 4], ==> [1, 2],
+ [5, 6]] [3, 4]]
+```
+
+##### Args:
+
+
+* <b>value</b>: A Tensor to be shuffled.
+* <b>seed</b>: A Python integer. Used to create a random seed for the distribution.
+ See [`set_random_seed`](constant_op.md#set_random_seed) for behavior.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A tensor of same shape and type as `value`, shuffled along its first
+ dimension.
+
+
+- - -
+
+### tf.set_random_seed(seed) <div class="md-anchor" id="set_random_seed">{#set_random_seed}</div>
+
+Sets the graph-level random seed.
+
+Operations that rely on a random seed actually derive it from two seeds:
+the graph-level and operation-level seeds. This sets the graph-level seed.
+
+Its interaction with operation-level seeds is as follows:
+
+ 1. If neither the graph-level nor the operation seed is set:
+ A random seed is used for this op.
+ 2. If the graph-level seed is set, but the operation seed is not:
+ The system deterministically picks an operation seed in conjunction
+ with the graph-level seed so that it gets a unique random sequence.
+ 3. If the graph-level seed is not set, but the operation seed is set:
+ A default graph-level seed and the specified operation seed are used to
+ determine the random sequence.
+ 4. If both the graph-level and the operation seed are set:
+ Both seeds are used in conjunction to determine the random sequence.
+
+To illustrate the user-visible effects, consider these examples:
+
+To generate different sequences across sessions, set neither
+graph-level nor op-level seeds:
+
+```python
+a = tf.random_uniform([1])
+b = tf.random_normal([1])
+
+print "Session 1"
+with tf.Session() as sess1:
+ print sess1.run(a) # generates 'A1'
+ print sess1.run(a) # generates 'A2'
+ print sess1.run(b) # generates 'B1'
+ print sess1.run(b) # generates 'B2'
+
+print "Session 2"
+with tf.Session() as sess2:
+ print sess2.run(a) # generates 'A3'
+ print sess2.run(a) # generates 'A4'
+ print sess2.run(b) # generates 'B3'
+ print sess2.run(b) # generates 'B4'
+```
+
+To generate the same repeatable sequence for an op across sessions, set the
+seed for the op:
+
+```python
+a = tf.random_uniform([1], seed=1)
+b = tf.random_normal([1])
+
+# Repeatedly running this block with the same graph will generate the same
+# sequence of values for 'a', but different sequences of values for 'b'.
+print "Session 1"
+with tf.Session() as sess1:
+ print sess1.run(a) # generates 'A1'
+ print sess1.run(a) # generates 'A2'
+ print sess1.run(b) # generates 'B1'
+ print sess1.run(b) # generates 'B2'
+
+print "Session 2"
+with tf.Session() as sess2:
+ print sess2.run(a) # generates 'A1'
+ print sess2.run(a) # generates 'A2'
+ print sess2.run(b) # generates 'B3'
+ print sess2.run(b) # generates 'B4'
+```
+
+To make the random sequences generated by all ops be repeatable across
+sessions, set a graph-level seed:
+
+```python
+tf.set_random_seed(1234)
+a = tf.random_uniform([1])
+b = tf.random_normal([1])
+
+# Repeatedly running this block with the same graph will generate the same
+# sequences of 'a' and 'b'.
+print "Session 1"
+with tf.Session() as sess1:
+ print sess1.run(a) # generates 'A1'
+ print sess1.run(a) # generates 'A2'
+ print sess1.run(b) # generates 'B1'
+ print sess1.run(b) # generates 'B2'
+
+print "Session 2"
+with tf.Session() as sess2:
+ print sess2.run(a) # generates 'A1'
+ print sess2.run(a) # generates 'A2'
+ print sess2.run(b) # generates 'B1'
+ print sess2.run(b) # generates 'B2'
+```
+
+##### Args:
+
+
+* <b>seed</b>: An integer.
+
+
diff --git a/tensorflow/g3doc/api_docs/python/control_flow_ops.md b/tensorflow/g3doc/api_docs/python/control_flow_ops.md
new file mode 100644
index 0000000000..ad4321f01b
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/control_flow_ops.md
@@ -0,0 +1,590 @@
+<!-- This file is machine generated: DO NOT EDIT! -->
+
+# Control Flow
+<!-- TOC-BEGIN This section is generated by neural network: DO NOT EDIT! -->
+## Contents
+* [Control Flow Operations](#AUTOGENERATED-control-flow-operations)
+ * [tf.identity(input, name=None)](#identity)
+ * [tf.tuple(tensors, name=None, control_inputs=None)](#tuple)
+ * [tf.group(*inputs, **kwargs)](#group)
+ * [tf.no_op(name=None)](#no_op)
+ * [tf.count_up_to(ref, limit, name=None)](#count_up_to)
+* [Logical Operators](#AUTOGENERATED-logical-operators)
+ * [tf.logical_and(x, y, name=None)](#logical_and)
+ * [tf.logical_not(x, name=None)](#logical_not)
+ * [tf.logical_or(x, y, name=None)](#logical_or)
+ * [tf.logical_xor(x, y, name='LogicalXor')](#logical_xor)
+* [Comparison Operators](#AUTOGENERATED-comparison-operators)
+ * [tf.equal(x, y, name=None)](#equal)
+ * [tf.not_equal(x, y, name=None)](#not_equal)
+ * [tf.less(x, y, name=None)](#less)
+ * [tf.less_equal(x, y, name=None)](#less_equal)
+ * [tf.greater(x, y, name=None)](#greater)
+ * [tf.greater_equal(x, y, name=None)](#greater_equal)
+ * [tf.select(condition, t, e, name=None)](#select)
+ * [tf.where(input, name=None)](#where)
+* [Debugging Operations](#AUTOGENERATED-debugging-operations)
+ * [tf.is_finite(x, name=None)](#is_finite)
+ * [tf.is_inf(x, name=None)](#is_inf)
+ * [tf.is_nan(x, name=None)](#is_nan)
+ * [tf.verify_tensor_all_finite(t, msg, name=None)](#verify_tensor_all_finite)
+ * [tf.check_numerics(tensor, message, name=None)](#check_numerics)
+ * [tf.add_check_numerics_ops()](#add_check_numerics_ops)
+ * [tf.Assert(condition, data, summarize=None, name=None)](#Assert)
+ * [tf.Print(input_, data, message=None, first_n=None, summarize=None, name=None)](#Print)
+
+
+<!-- TOC-END This section was generated by neural network, THANKS FOR READING! -->
+
+## Control Flow Operations <div class="md-anchor" id="AUTOGENERATED-control-flow-operations">{#AUTOGENERATED-control-flow-operations}</div>
+
+TensorFlow provides several operations and classes that you can use to control
+the execution of operations and add conditional dependencies to your graph.
+
+- - -
+
+### tf.identity(input, name=None) <div class="md-anchor" id="identity">{#identity}</div>
+
+Return a tensor with the same shape and contents as the input tensor or value.
+
+##### Args:
+
+
+* <b>input</b>: A `Tensor`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `input`.
+
+
+- - -
+
+### tf.tuple(tensors, name=None, control_inputs=None) <div class="md-anchor" id="tuple">{#tuple}</div>
+
+Group tensors together.
+
+This creates a tuple of tensors with the same values as the `tensors`
+argument, except that the value of each tensor is only returned after the
+values of all tensors have been computed.
+
+`control_inputs` contains additional ops that have to finish before this op
+finishes, but whose outputs are not returned.
+
+This can be used as a "join" mechanism for parallel computations: all the
+argument tensors can be computed in parallel, but the values of any tensor
+returned by `tuple` are only available after all the parallel computations
+are done.
+
+See also `group` and `with_dependencies`.
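+
+For example (a minimal sketch; the tensor names are illustrative):
+
+```python
+a = tf.constant(1.0)
+b = tf.constant(2.0)
+# Neither output is returned until both `a` and `b` have been computed.
+a_out, b_out = tf.tuple([a, b])
+```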
+
+##### Args:
+
+
+* <b>tensors</b>: A list of `Tensor`s or `IndexedSlices`, some entries can be `None`.
+* <b>name</b>: (optional) A name to use as a `name_scope` for the operation.
+* <b>control_inputs</b>: List of additional ops to finish before returning.
+
+##### Returns:
+
+ Same as `tensors`.
+
+##### Raises:
+
+
+* <b>ValueError</b>: If `tensors` does not contain any `Tensor` or `IndexedSlices`.
+
+
+- - -
+
+### tf.group(*inputs, **kwargs) <div class="md-anchor" id="group">{#group}</div>
+
+Create an op that groups multiple operations.
+
+When this op finishes, all ops in `inputs` have finished. This op has no
+output.
+
+See also `tuple` and `with_dependencies`.
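+
+For example (a sketch; `var_a` and `var_b` are assumed to be existing
+variables):
+
+```python
+update_a = tf.assign(var_a, 1.0)
+update_b = tf.assign(var_b, 2.0)
+# Running `updates` runs both assignments; the op itself has no output.
+updates = tf.group(update_a, update_b)
+```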
+
+##### Args:
+
+
+* <b>*inputs</b>: One or more tensors to group.
+* <b>**kwargs</b>: Optional parameters to pass when constructing the NodeDef.
+* <b>name</b>: A name for this operation (optional).
+
+##### Returns:
+
+ An Operation that executes all its inputs.
+
+##### Raises:
+
+
+* <b>ValueError</b>: If an unknown keyword argument is provided, or if there are
+ no inputs.
+
+
+- - -
+
+### tf.no_op(name=None) <div class="md-anchor" id="no_op">{#no_op}</div>
+
+Does nothing. Only useful as a placeholder for control edges.
+
+##### Args:
+
+
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The created Operation.
+
+
+- - -
+
+### tf.count_up_to(ref, limit, name=None) <div class="md-anchor" id="count_up_to">{#count_up_to}</div>
+
+Increments `ref` until it reaches `limit`.
+
+This operation outputs `ref` after the update is done. This makes it
+easier to chain operations that need to use the updated value.
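+
+For example (a minimal sketch using the session API shown elsewhere in
+these docs):
+
+```python
+v = tf.Variable(0)  # Must be an `int32` or `int64` scalar variable.
+c = tf.count_up_to(v, 3)
+sess = tf.Session()
+sess.run(tf.initialize_all_variables())
+print sess.run(c)  # prints 0; `v` is now 1.
+print sess.run(c)  # prints 1; `v` is now 2.
+```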
+
+##### Args:
+
+
+* <b>ref</b>: A mutable `Tensor`. Must be one of the following types: `int32`, `int64`.
+ Should be from a scalar `Variable` node.
+* <b>limit</b>: An `int`.
+ If incrementing ref would bring it above limit, instead generates an
+ 'OutOfRange' error.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `ref`.
+ A copy of the input before increment. If nothing else modifies the
+ input, the values produced will all be distinct.
+
+
+
+## Logical Operators <div class="md-anchor" id="AUTOGENERATED-logical-operators">{#AUTOGENERATED-logical-operators}</div>
+
+TensorFlow provides several operations that you can use to add logical operators
+to your graph.
+
+- - -
+
+### tf.logical_and(x, y, name=None) <div class="md-anchor" id="logical_and">{#logical_and}</div>
+
+Returns the truth value of x AND y element-wise.
+
+##### Args:
+
+
+* <b>x</b>: A `Tensor` of type `bool`.
+* <b>y</b>: A `Tensor` of type `bool`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor` of type `bool`.
+
+
+- - -
+
+### tf.logical_not(x, name=None) <div class="md-anchor" id="logical_not">{#logical_not}</div>
+
+Returns the truth value of NOT x element-wise.
+
+##### Args:
+
+
+* <b>x</b>: A `Tensor` of type `bool`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor` of type `bool`.
+
+
+- - -
+
+### tf.logical_or(x, y, name=None) <div class="md-anchor" id="logical_or">{#logical_or}</div>
+
+Returns the truth value of x OR y element-wise.
+
+##### Args:
+
+
+* <b>x</b>: A `Tensor` of type `bool`.
+* <b>y</b>: A `Tensor` of type `bool`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor` of type `bool`.
+
+
+- - -
+
+### tf.logical_xor(x, y, name='LogicalXor') <div class="md-anchor" id="logical_xor">{#logical_xor}</div>
+
+Returns the truth value of x XOR y element-wise: x ^ y = (x | y) & ~(x & y).
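+
+For example (expected values shown as comments):
+
+```python
+x = tf.constant([True, True, False, False])
+y = tf.constant([True, False, True, False])
+z = tf.logical_xor(x, y)  # ==> [False, True, True, False]
+```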
+
+
+
+## Comparison Operators <div class="md-anchor" id="AUTOGENERATED-comparison-operators">{#AUTOGENERATED-comparison-operators}</div>
+
+TensorFlow provides several operations that you can use to add comparison
+operators to your graph.
+
+- - -
+
+### tf.equal(x, y, name=None) <div class="md-anchor" id="equal">{#equal}</div>
+
+Returns the truth value of (x == y) element-wise.
+
+##### Args:
+
+
+* <b>x</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `complex64`, `quint8`, `qint8`, `qint32`.
+* <b>y</b>: A `Tensor`. Must have the same type as `x`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor` of type `bool`.
+
+
+- - -
+
+### tf.not_equal(x, y, name=None) <div class="md-anchor" id="not_equal">{#not_equal}</div>
+
+Returns the truth value of (x != y) element-wise.
+
+##### Args:
+
+
+* <b>x</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `complex64`, `quint8`, `qint8`, `qint32`.
+* <b>y</b>: A `Tensor`. Must have the same type as `x`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor` of type `bool`.
+
+
+- - -
+
+### tf.less(x, y, name=None) <div class="md-anchor" id="less">{#less}</div>
+
+Returns the truth value of (x < y) element-wise.
+
+##### Args:
+
+
+* <b>x</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`.
+* <b>y</b>: A `Tensor`. Must have the same type as `x`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor` of type `bool`.
+
+
+- - -
+
+### tf.less_equal(x, y, name=None) <div class="md-anchor" id="less_equal">{#less_equal}</div>
+
+Returns the truth value of (x <= y) element-wise.
+
+##### Args:
+
+
+* <b>x</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`.
+* <b>y</b>: A `Tensor`. Must have the same type as `x`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor` of type `bool`.
+
+
+- - -
+
+### tf.greater(x, y, name=None) <div class="md-anchor" id="greater">{#greater}</div>
+
+Returns the truth value of (x > y) element-wise.
+
+##### Args:
+
+
+* <b>x</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`.
+* <b>y</b>: A `Tensor`. Must have the same type as `x`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor` of type `bool`.
+
+
+- - -
+
+### tf.greater_equal(x, y, name=None) <div class="md-anchor" id="greater_equal">{#greater_equal}</div>
+
+Returns the truth value of (x >= y) element-wise.
+
+##### Args:
+
+
+* <b>x</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`.
+* <b>y</b>: A `Tensor`. Must have the same type as `x`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor` of type `bool`.
+
+
+- - -
+
+### tf.select(condition, t, e, name=None) <div class="md-anchor" id="select">{#select}</div>
+
+Selects elements from `t` or `e`, depending on `condition`.
+
+The `condition`, `t`, and `e` tensors must all have the same shape,
+and the output will also have that shape. The `condition` tensor acts
+as an element-wise mask that chooses, based on the value at each
+element, whether the corresponding element in the output should be
+taken from `t` (if true) or `e` (if false).
+
+For example:
+
+```prettyprint
+# 'condition' tensor is [[True, False]
+# [True, False]]
+# 't' is [[1, 1],
+# [1, 1]]
+# 'e' is [[2, 2],
+# [2, 2]]
+select(condition, t, e) ==> [[1, 2],
+ [1, 2]]
+```
+
+##### Args:
+
+
+* <b>condition</b>: A `Tensor` of type `bool`.
+* <b>t</b>: A `Tensor` with the same shape as `condition`.
+* <b>e</b>: A `Tensor` with the same type and shape as `t`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor` with the same type and shape as `t` and `e`.
+
+
+- - -
+
+### tf.where(input, name=None) <div class="md-anchor" id="where">{#where}</div>
+
+Returns locations of true values in a boolean tensor.
+
+This operation returns the coordinates of true elements in `input`. The
+coordinates are returned in a 2-D tensor where the first dimension (rows)
+represents the number of true elements, and the second dimension (columns)
+represents the coordinates of the true elements. Keep in mind, the shape of
+the output tensor can vary depending on how many true values there are in
+`input`. Indices are output in row-major order.
+
+For example:
+
+```prettyprint
+# 'input' tensor is [[True, False]
+# [True, False]]
+# 'input' has two true values, so output has two coordinates.
+# 'input' has rank of 2, so coordinates have two indices.
+where(input) ==> [[0, 0],
+ [1, 0]]
+
+# `input` tensor is [[[True, False]
+# [True, False]]
+# [[False, True]
+# [False, True]]
+# [[False, False]
+# [False, True]]]
+# 'input' has 5 true values, so output has 5 coordinates.
+# 'input' has rank of 3, so coordinates have three indices.
+where(input) ==> [[0, 0, 0],
+ [0, 1, 0],
+ [1, 0, 1],
+ [1, 1, 1],
+ [2, 1, 1]]
+```
+
+##### Args:
+
+
+* <b>input</b>: A `Tensor` of type `bool`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor` of type `int64`.
+
+
+
+## Debugging Operations <div class="md-anchor" id="AUTOGENERATED-debugging-operations">{#AUTOGENERATED-debugging-operations}</div>
+
+TensorFlow provides several operations that you can use to validate values and
+debug your graph.
+
+- - -
+
+### tf.is_finite(x, name=None) <div class="md-anchor" id="is_finite">{#is_finite}</div>
+
+Returns which elements of x are finite.
+
+##### Args:
+
+
+* <b>x</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor` of type `bool`.
+
+
+- - -
+
+### tf.is_inf(x, name=None) <div class="md-anchor" id="is_inf">{#is_inf}</div>
+
+Returns which elements of x are Inf.
+
+##### Args:
+
+
+* <b>x</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor` of type `bool`.
+
+
+- - -
+
+### tf.is_nan(x, name=None) <div class="md-anchor" id="is_nan">{#is_nan}</div>
+
+Returns which elements of x are NaN.
+
+##### Args:
+
+
+* <b>x</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor` of type `bool`.
+
+
+- - -
+
+### tf.verify_tensor_all_finite(t, msg, name=None) <div class="md-anchor" id="verify_tensor_all_finite">{#verify_tensor_all_finite}</div>
+
+Assert that the tensor does not contain any NaN's or Inf's.
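+
+For example (a sketch; the check only runs when the returned tensor is
+evaluated, so downstream ops should use it instead of `t`):
+
+```python
+x = tf.placeholder(tf.float32)
+x_checked = tf.verify_tensor_all_finite(x, "x contains NaN or Inf")
+y = x_checked * 2.0  # Evaluating `y` also runs the finiteness check.
+```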
+
+##### Args:
+
+
+* <b>t</b>: Tensor to check.
+* <b>msg</b>: Message to log on failure.
+* <b>name</b>: A name for this operation (optional).
+
+##### Returns:
+
+ Same tensor as `t`.
+
+
+- - -
+
+### tf.check_numerics(tensor, message, name=None) <div class="md-anchor" id="check_numerics">{#check_numerics}</div>
+
+Checks a tensor for NaN and Inf values.
+
+When run, reports an `InvalidArgument` error if `tensor` has any values
+that are not a number (NaN) or infinity (Inf). Otherwise, passes `tensor` as-is.
+
+##### Args:
+
+
+* <b>tensor</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`.
+* <b>message</b>: A `string`. Prefix of the error message.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `tensor`.
+
+
+- - -
+
+### tf.add_check_numerics_ops() <div class="md-anchor" id="add_check_numerics_ops">{#add_check_numerics_ops}</div>
+
+Connects a `check_numerics` op to every floating point tensor.
+
+`check_numerics` operations themselves are added for each `float` or `double`
+tensor in the graph. For all ops in the graph, the `check_numerics` op for
+all of its (`float` or `double`) inputs is guaranteed to run before the
+`check_numerics` op on any of its outputs.
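+
+A typical use (a sketch; `train_op` is an assumed training step created
+elsewhere) is to run the returned op alongside each training step:
+
+```python
+check_op = tf.add_check_numerics_ops()
+sess.run([train_op, check_op])  # Fails fast if any value becomes NaN or Inf.
+```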
+
+##### Returns:
+
+ A `group` op depending on all `check_numerics` ops added.
+
+
+- - -
+
+### tf.Assert(condition, data, summarize=None, name=None) <div class="md-anchor" id="Assert">{#Assert}</div>
+
+Asserts that the given condition is true.
+
+If `condition` evaluates to false, prints the list of tensors in `data`.
+`summarize` determines how many entries of the tensors to print.
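+
+For example (a sketch; the assertion only executes when an op that depends
+on it is run):
+
+```python
+x = tf.placeholder(tf.float32)
+assert_op = tf.Assert(tf.reduce_all(x > 0), [x], summarize=4)
+with tf.control_dependencies([assert_op]):
+  y = tf.log(x)  # Evaluating `y` also checks that all of `x` is positive.
+```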
+
+##### Args:
+
+
+* <b>condition</b>: The condition to evaluate.
+* <b>data</b>: The tensors to print out when condition is false.
+* <b>summarize</b>: Print this many entries of each tensor.
+* <b>name</b>: A name for this operation (optional).
+
+
+- - -
+
+### tf.Print(input_, data, message=None, first_n=None, summarize=None, name=None) <div class="md-anchor" id="Print">{#Print}</div>
+
+Prints a list of tensors.
+
+This is an identity op with the side effect of printing `data` when
+evaluating.
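+
+For example (a sketch; the message is printed each time the returned
+tensor is evaluated):
+
+```python
+x = tf.constant([1.0, 2.0, 3.0])
+x = tf.Print(x, [x], message="Value of x: ")
+y = x * 2.0  # Evaluating `y` prints the contents of `x` as a side effect.
+```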
+
+##### Args:
+
+
+* <b>input_</b>: A tensor passed through this op.
+* <b>data</b>: A list of tensors to print out when op is evaluated.
+* <b>message</b>: A string, prefix of the error message.
+* <b>first_n</b>: Only log the first `first_n` evaluations. Negative numbers
+  always log; this is the default.
+* <b>summarize</b>: Only print this many entries of each tensor.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ Same tensor as `input_`.
+
+
diff --git a/tensorflow/g3doc/api_docs/python/framework.md b/tensorflow/g3doc/api_docs/python/framework.md
new file mode 100644
index 0000000000..e28daaa77a
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/framework.md
@@ -0,0 +1,2079 @@
+<!-- This file is machine generated: DO NOT EDIT! -->
+
+# Building Graphs
+<!-- TOC-BEGIN This section is generated by neural network: DO NOT EDIT! -->
+## Contents
+* [Core graph data structures](#AUTOGENERATED-core-graph-data-structures)
+ * [class tf.Graph](#Graph)
+ * [class tf.Operation](#Operation)
+ * [class tf.Tensor](#Tensor)
+* [Tensor types](#AUTOGENERATED-tensor-types)
+ * [class tf.DType](#DType)
+ * [tf.as_dtype(type_value)](#as_dtype)
+* [Utility functions](#AUTOGENERATED-utility-functions)
+ * [tf.device(dev)](#device)
+ * [tf.name_scope(name)](#name_scope)
+ * [tf.control_dependencies(control_inputs)](#control_dependencies)
+ * [tf.convert_to_tensor(value, dtype=None, name=None)](#convert_to_tensor)
+ * [tf.get_default_graph()](#get_default_graph)
+ * [tf.import_graph_def(graph_def, input_map=None, return_elements=None, name=None, op_dict=None)](#import_graph_def)
+* [Graph collections](#AUTOGENERATED-graph-collections)
+ * [tf.add_to_collection(name, value)](#add_to_collection)
+ * [tf.get_collection(key, scope=None)](#get_collection)
+ * [class tf.GraphKeys](#GraphKeys)
+* [Defining new operations](#AUTOGENERATED-defining-new-operations)
+ * [class tf.RegisterGradient](#RegisterGradient)
+ * [tf.NoGradient(op_type)](#NoGradient)
+ * [class tf.RegisterShape](#RegisterShape)
+ * [class tf.TensorShape](#TensorShape)
+ * [class tf.Dimension](#Dimension)
+ * [tf.op_scope(*args, **kwds)](#op_scope)
+ * [tf.get_seed(op_seed)](#get_seed)
+
+
+<!-- TOC-END This section was generated by neural network, THANKS FOR READING! -->
+
+Import names from the framework library.
+
+## Core graph data structures <div class="md-anchor" id="AUTOGENERATED-core-graph-data-structures">{#AUTOGENERATED-core-graph-data-structures}</div>
+
+- - -
+
+### class tf.Graph <div class="md-anchor" id="Graph">{#Graph}</div>
+
+A TensorFlow computation, represented as a dataflow graph.
+
+A `Graph` contains a set of [`Operation`](framework.md#Operation) objects,
+which represent units of computation; and [`Tensor`](framework.md#Tensor)
+objects, which represent the units of data that flow between operations.
+
+A default `Graph` is always registered, and accessible by calling
+[`tf.get_default_graph()`](framework.md#get_default_graph). To add an
+operation to the default graph, simply call one of the functions that defines
+a new `Operation`:
+
+```
+c = tf.constant(4.0)
+assert c.graph is tf.get_default_graph()
+```
+
+Another typical usage involves the
+[`Graph.as_default()`](framework.md#Graph.as_default)
+context manager, which overrides the current default graph for the
+lifetime of the context:
+
+```python
+g = tf.Graph()
+with g.as_default():
+ # Define operations and tensors in `g`.
+ c = tf.constant(30.0)
+ assert c.graph is g
+```
+
+Important note: This class *is not* thread-safe for graph construction. All
+operations should be created from a single thread, or external
+synchronization must be provided. Unless otherwise specified, all methods
+are not thread-safe.
+
+- - -
+
+#### tf.Graph.__init__() {#Graph.__init__}
+
+Creates a new, empty Graph.
+
+
+- - -
+
+#### tf.Graph.as_default() {#Graph.as_default}
+
+Returns a context manager that makes this `Graph` the default graph.
+
+This method should be used if you want to create multiple graphs
+in the same process. For convenience, a global default graph is
+provided, and all ops will be added to this graph if you do not
+create a new graph explicitly. Use this method with the `with` keyword
+to specify that ops created within the scope of a block should be
+added to this graph.
+
+The default graph is a property of the current thread. If you
+create a new thread, and wish to use the default graph in that
+thread, you must explicitly add a `with g.as_default():` in that
+thread's function.
+
+The following code examples are equivalent:
+
+```python
+# 1. Using Graph.as_default():
+g = tf.Graph()
+with g.as_default():
+ c = tf.constant(5.0)
+ assert c.graph is g
+
+# 2. Constructing and making default:
+with tf.Graph().as_default() as g:
+ c = tf.constant(5.0)
+ assert c.graph is g
+```
+
+##### Returns:
+
+ A context manager for using this graph as the default graph.
+
+
+- - -
+
+#### tf.Graph.as_graph_def(from_version=None) {#Graph.as_graph_def}
+
+Returns a serialized `GraphDef` representation of this graph.
+
+This method is thread-safe.
+
+##### Args:
+
+
+* <b>from_version</b>: Optional. If this is set, returns a `GraphDef`
+ containing only the nodes that were added to this graph since
+ its `version` property had the given value.
+
+##### Returns:
+
+ A
+ [`GraphDef`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/framework/graph.proto)
+ protocol buffer.
+
+
+- - -
+
+#### tf.Graph.finalize() {#Graph.finalize}
+
+Finalizes this graph, making it read-only.
+
+After calling `g.finalize()`, no new operations can be added to
+`g`. This method is used to ensure that no operations are added
+to a graph when it is shared between multiple threads, for example
+when using a [`QueueRunner`](train.md#QueueRunner).
+
+
+- - -
+
+#### tf.Graph.finalized {#Graph.finalized}
+
+True if this graph has been finalized.
+
+
+- - -
+
+#### tf.Graph.control_dependencies(control_inputs) {#Graph.control_dependencies}
+
+Returns a context manager that specifies control dependencies.
+
+Use with the `with` keyword to specify that all operations constructed
+within the context should have control dependencies on
+`control_inputs`. For example:
+
+```python
+with g.control_dependencies([a, b, c]):
+ # `d` and `e` will only run after `a`, `b`, and `c` have executed.
+ d = ...
+ e = ...
+```
+
+Multiple calls to `control_dependencies()` can be nested, and in
+that case a new `Operation` will have control dependencies on the union
+of `control_inputs` from all active contexts.
+
+```python
+with g.control_dependencies([a, b]):
+ # Ops declared here run after `a` and `b`.
+ with g.control_dependencies([c, d]):
+ # Ops declared here run after `a`, `b`, `c`, and `d`.
+```
+
+*N.B.* The control dependencies context applies *only* to ops that
+are constructed within the context. Merely using an op or tensor
+in the context does not add a control dependency. The following
+example illustrates this point:
+
+```python
+# WRONG
+def my_func(pred, tensor):
+ t = tf.matmul(tensor, tensor)
+ with tf.control_dependencies([pred]):
+ # The matmul op is created outside the context, so no control
+ # dependency will be added.
+ return t
+
+# RIGHT
+def my_func(pred, tensor):
+ with tf.control_dependencies([pred]):
+ # The matmul op is created in the context, so a control dependency
+ # will be added.
+ return tf.matmul(tensor, tensor)
+```
+
+##### Args:
+
+
+* <b>control_inputs</b>: A list of `Operation` or `Tensor` objects, which
+ must be executed or computed before running the operations
+ defined in the context.
+
+##### Returns:
+
+ A context manager that specifies control dependencies for all
+ operations constructed within the context.
+
+##### Raises:
+
+
+* <b>TypeError</b>: If `control_inputs` is not a list of `Operation` or
+ `Tensor` objects.
+
+
+- - -
+
+#### tf.Graph.device(*args, **kwds) {#Graph.device}
+
+Returns a context manager that specifies the default device to use.
+
+The `device_name_or_function` argument may either be a device name
+string, a device function, or None:
+
+* If it is a device name string, all operations constructed in
+ this context will be assigned to the device with that name.
+* If it is a function, it will be treated as a function from
+ Operation objects to device name strings, and invoked each time
+ a new Operation is created. The Operation will be assigned to
+ the device with the returned name.
+* If it is None, the default device will be cleared.
+
+For example:
+
+```python
+with g.device('/gpu:0'):
+ # All operations constructed in this context will be placed
+ # on GPU 0.
+ with g.device(None):
+ # All operations constructed in this context will have no
+ # assigned device.
+
+# Defines a function from `Operation` to device string.
+def matmul_on_gpu(n):
+ if n.type == "MatMul":
+ return "/gpu:0"
+ else:
+ return "/cpu:0"
+
+with g.device(matmul_on_gpu):
+ # All operations of type "MatMul" constructed in this context
+ # will be placed on GPU 0; all other operations will be placed
+ # on CPU 0.
+```
+
+##### Args:
+
+
+* <b>device_name_or_function</b>: The device name or function to use in
+ the context.
+
+##### Returns:
+
+ A context manager that specifies the default device to use for newly
+ created ops.
+
+
+- - -
+
+#### tf.Graph.name_scope(*args, **kwds) {#Graph.name_scope}
+
+Returns a context manager that creates hierarchical names for operations.
+
+A graph maintains a stack of name scopes. A `with name_scope(...):`
+statement pushes a new name onto the stack for the lifetime of the context.
+
+The `name` argument will be interpreted as follows:
+
+* A string (not ending with '/') will create a new name scope, in which
+ `name` is appended to the prefix of all operations created in the
+ context. If `name` has been used before, it will be made unique by
+ calling `self.unique_name(name)`.
+* A scope previously captured from a `with g.name_scope(...) as
+ scope:` statement will be treated as an "absolute" name scope, which
+ makes it possible to re-enter existing scopes.
+* A value of `None` or the empty string will reset the current name scope
+ to the top-level (empty) name scope.
+
+For example:
+
+```python
+with tf.Graph().as_default() as g:
+ c = tf.constant(5.0, name="c")
+  assert c.name == "c"
+ c_1 = tf.constant(6.0, name="c")
+ assert c_1.name == "c_1"
+
+ # Creates a scope called "nested"
+ with g.name_scope("nested") as scope:
+ nested_c = tf.constant(10.0, name="c")
+ assert nested_c.name == "nested/c"
+
+ # Creates a nested scope called "inner".
+ with g.name_scope("inner"):
+ nested_inner_c = tf.constant(20.0, name="c")
+ assert nested_inner_c.name == "nested/inner/c"
+
+  # Creates a nested scope called "inner_1".
+ with g.name_scope("inner"):
+ nested_inner_1_c = tf.constant(30.0, name="c")
+ assert nested_inner_1_c.name == "nested/inner_1/c"
+
+ # Treats `scope` as an absolute name scope, and
+ # switches to the "nested/" scope.
+ with g.name_scope(scope):
+ nested_d = tf.constant(40.0, name="d")
+ assert nested_d.name == "nested/d"
+
+ with g.name_scope(""):
+ e = tf.constant(50.0, name="e")
+ assert e.name == "e"
+```
+
+The name of the scope itself can be captured by `with
+g.name_scope(...) as scope:`, which stores the name of the scope
+in the variable `scope`. This value can be used to name an
+operation that represents the overall result of executing the ops
+in a scope. For example:
+
+```python
+inputs = tf.constant(...)
+with g.name_scope('my_layer') as scope:
+ weights = tf.Variable(..., name="weights")
+ biases = tf.Variable(..., name="biases")
+ affine = tf.matmul(inputs, weights) + biases
+ output = tf.nn.relu(affine, name=scope)
+```
+
+
+##### Args:
+
+
+* <b>name</b>: A name for the scope.
+
+##### Returns:
+
+ A context manager that installs `name` as a new name scope.
+
+
+
+A `Graph` instance supports an arbitrary number of "collections"
+that are identified by name. For convenience when building a large
+graph, collections can store groups of related objects: for
+example, `tf.Variable` uses a collection (named
+[`tf.GraphKeys.VARIABLES`](framework.md#GraphKeys)) for all variables that are
+created during the construction of a graph. The caller may define
+additional collections by specifying a new name.
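+
+For example (a minimal sketch using the default graph; the collection name
+is illustrative):
+
+```python
+g = tf.get_default_graph()
+g.add_to_collection("my_losses", tf.constant(1.0))
+g.add_to_collection("my_losses", tf.constant(2.0))
+losses = g.get_collection("my_losses")  # A list of the two constants.
+```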
+
+- - -
+
+#### tf.Graph.add_to_collection(name, value) {#Graph.add_to_collection}
+
+Stores `value` in the collection with the given `name`.
+
+##### Args:
+
+
+* <b>name</b>: The key for the collection. For example, the `GraphKeys` class
+ contains many standard names for collections.
+* <b>value</b>: The value to add to the collection.
+
+
+- - -
+
+#### tf.Graph.get_collection(name, scope=None) {#Graph.get_collection}
+
+Returns a list of values in the collection with the given `name`.
+
+##### Args:
+
+
+* <b>name</b>: The key for the collection. For example, the `GraphKeys` class
+ contains many standard names for collections.
+* <b>scope</b>: (Optional.) If supplied, the resulting list is filtered to include
+ only items whose name begins with this string.
+
+##### Returns:
+
+ The list of values in the collection with the given `name`, or
+ an empty list if no value has been added to that collection. The
+ list contains the values in the order under which they were
+ collected.
+
+
+
+- - -
+
+#### tf.Graph.as_graph_element(obj, allow_tensor=True, allow_operation=True) {#Graph.as_graph_element}
+
+Returns the object referred to by `obj`, as an `Operation` or `Tensor`.
+
+This function validates that `obj` represents an element of this
+graph, and gives an informative error message if it is not.
+
+This function is the canonical way to get/validate an object of
+one of the allowed types from an external argument reference in the
+Session API.
+
+This method may be called concurrently from multiple threads.
+
+##### Args:
+
+
+* <b>obj</b>: A `Tensor`, an `Operation`, or the name of a tensor or operation.
+ Can also be any object with an `_as_graph_element()` method that returns
+ a value of one of these types.
+* <b>allow_tensor</b>: If true, `obj` may refer to a `Tensor`.
+* <b>allow_operation</b>: If true, `obj` may refer to an `Operation`.
+
+##### Returns:
+
+ The `Tensor` or `Operation` in the Graph corresponding to `obj`.
+
+##### Raises:
+
+
+* <b>TypeError</b>: If `obj` is not a type that can be converted to an
+  `Operation` or `Tensor`.
+* <b>ValueError</b>: If `obj` is of an appropriate type but invalid. For
+ example, an invalid string.
+* <b>KeyError</b>: If `obj` is not an object in the graph.
+
+
+- - -
+
+#### tf.Graph.get_operation_by_name(name) {#Graph.get_operation_by_name}
+
+Returns the `Operation` with the given `name`.
+
+This method may be called concurrently from multiple threads.
+
+##### Args:
+
+
+* <b>name</b>: The name of the `Operation` to return.
+
+##### Returns:
+
+ The `Operation` with the given `name`.
+
+##### Raises:
+
+
+* <b>TypeError</b>: If `name` is not a string.
+* <b>KeyError</b>: If `name` does not correspond to an operation in this graph.
+
+
+- - -
+
+#### tf.Graph.get_tensor_by_name(name) {#Graph.get_tensor_by_name}
+
+Returns the `Tensor` with the given `name`.
+
+This method may be called concurrently from multiple threads.
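+
+For example (a sketch; note that tensor names have the form
+`"<op_name>:<output_index>"`):
+
+```python
+c = tf.constant(5.0, name="c")
+g = tf.get_default_graph()
+assert g.get_tensor_by_name("c:0") is c
+```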
+
+##### Args:
+
+
+* <b>name</b>: The name of the `Tensor` to return.
+
+##### Returns:
+
+ The `Tensor` with the given `name`.
+
+##### Raises:
+
+
+* <b>TypeError</b>: If `name` is not a string.
+* <b>KeyError</b>: If `name` does not correspond to a tensor in this graph.
+
+
+- - -
+
+#### tf.Graph.get_operations() {#Graph.get_operations}
+
+Return the list of operations in the graph.
+
+You can modify the operations in place, but modifications
+to the list such as inserts/deletes have no effect on the
+list of operations known to the graph.
+
+This method may be called concurrently from multiple threads.
+
+##### Returns:
+
+ A list of Operations.
+
+
+
+- - -
+
+#### tf.Graph.get_default_device() {#Graph.get_default_device}
+
+Returns the default device.
+
+##### Returns:
+
+ A string.
+
+
+- - -
+
+#### tf.Graph.seed {#Graph.seed}
+
+
+
+- - -
+
+#### tf.Graph.unique_name(name) {#Graph.unique_name}
+
+Return a unique Operation name for `name`.
+
+Note: You rarely need to call `unique_name()` directly. Most of the time you
+just need to create `with g.name_scope()` blocks to generate structured
+names.
+
+`unique_name` is used to generate structured names, separated by "/",
+to help identify Operations when debugging a Graph. Operation names
+are displayed in error messages reported by the TensorFlow runtime,
+and in various visualization tools such as TensorBoard.
+
+##### Args:
+
+
+* <b>name</b>: The name for an `Operation`.
+
+##### Returns:
+
+ A string to be passed to `create_op()` that will be used
+ to name the operation being created.
+
+
+- - -
+
+#### tf.Graph.version {#Graph.version}
+
+Returns a version number that increases as ops are added to the graph.
+
+
+- - -
+
+#### tf.Graph.create_op(op_type, inputs, dtypes, input_types=None, name=None, attrs=None, op_def=None, compute_shapes=True) {#Graph.create_op}
+
+Creates an `Operation` in this graph.
+
+This is a low-level interface for creating an `Operation`. Most
+programs will not call this method directly, and instead use the
+Python op constructors, such as `tf.constant()`, which add ops to
+the default graph.
+
+##### Args:
+
+
+* <b>op_type</b>: The `Operation` type to create. This corresponds to the
+ `OpDef.name` field for the proto that defines the operation.
+* <b>inputs</b>: A list of `Tensor` objects that will be inputs to the `Operation`.
+* <b>dtypes</b>: A list of `DType` objects that will be the types of the tensors
+ that the operation produces.
+* <b>input_types</b>: (Optional.) A list of `DType`s that will be the types of
+ the tensors that the operation consumes. By default, uses the base
+ `DType` of each input in `inputs`. Operations that expect
+ reference-typed inputs must specify `input_types` explicitly.
+* <b>name</b>: (Optional.) A string name for the operation. If not specified, a
+ name is generated based on `op_type`.
+* <b>attrs</b>: (Optional.) A list of `AttrValue` protos for the `attr` field of
+ the `NodeDef` proto that will represent the operation.
+* <b>op_def</b>: (Optional.) The `OpDef` proto that describes the `op_type` that
+ the operation will have.
+* <b>compute_shapes</b>: (Optional.) If True, shape inference will be performed
+ to compute the shapes of the outputs.
+
+##### Raises:
+
+
+* <b>TypeError</b>: if any of the inputs is not a `Tensor`.
+
+##### Returns:
+
+ An `Operation` object.
+
+
+- - -
+
+#### tf.Graph.gradient_override_map(*args, **kwds) {#Graph.gradient_override_map}
+
+EXPERIMENTAL: A context manager for overriding gradient functions.
+
+This context manager can be used to override the gradient function
+that will be used for ops within the scope of the context.
+
+For example:
+
+```python
+@tf.RegisterGradient("CustomSquare")
+def _custom_square_grad(op, grad):
+ # ...
+
+with tf.Graph().as_default() as g:
+ c = tf.constant(5.0)
+ s_1 = tf.square(c) # Uses the default gradient for tf.square.
+ with g.gradient_override_map({"Square": "CustomSquare"}):
+    s_2 = tf.square(c)  # Uses _custom_square_grad to compute the
+                        # gradient of s_2.
+```
+
+##### Args:
+
+
+* <b>op_type_map</b>: A dictionary mapping op type strings to alternative op
+ type strings.
+
+##### Returns:
+
+ A context manager that sets the alternative op type to be used for one
+ or more ops created in that context.
+
+##### Raises:
+
+
+* <b>TypeError</b>: If `op_type_map` is not a dictionary mapping strings to
+ strings.
+
+
+
+- - -
+
+### class tf.Operation <div class="md-anchor" id="Operation">{#Operation}</div>
+
+Represents a graph node that performs computation on tensors.
+
+An `Operation` is a node in a TensorFlow `Graph` that takes zero or
+more `Tensor` objects as input, and produces zero or more `Tensor`
+objects as output. Objects of type `Operation` are created by
+calling a Python op constructor (such as [`tf.matmul()`](math_ops.md#matmul))
+or [`Graph.create_op()`](framework.md#Graph.create_op).
+
+For example `c = tf.matmul(a, b)` creates an `Operation` of type
+"MatMul" that takes tensors `a` and `b` as input, and produces `c`
+as output.
+
+After the graph has been launched in a session, an `Operation` can
+be executed by passing it to [`Session.run()`](client.md#Session.run).
+`op.run()` is a shortcut for calling `tf.get_default_session().run(op)`.
+
+- - -
+
+#### tf.Operation.name {#Operation.name}
+
+The full name of this operation.
+
+- - -
+
+#### tf.Operation.type {#Operation.type}
+
+The type of the op (e.g. `"MatMul"`).
+
+- - -
+
+#### tf.Operation.inputs {#Operation.inputs}
+
+The list of `Tensor` objects representing the data inputs of this op.
+
+- - -
+
+#### tf.Operation.control_inputs {#Operation.control_inputs}
+
+The `Operation` objects on which this op has a control dependency.
+
+Before this op is executed, TensorFlow will ensure that the
+operations in `self.control_inputs` have finished executing. This
+mechanism can be used to run ops sequentially for performance
+reasons, or to ensure that the side effects of an op are observed
+in the correct order.
+
+##### Returns:
+
+ A list of `Operation` objects.
+
+- - -
+
+#### tf.Operation.outputs {#Operation.outputs}
+
+The list of `Tensor` objects representing the outputs of this op.
+
+- - -
+
+#### tf.Operation.device {#Operation.device}
+
+The name of the device to which this op has been assigned, if any.
+
+##### Returns:
+
+ The string name of the device to which this op has been
+ assigned, or None if it has not been assigned to a device.
+
+- - -
+
+#### tf.Operation.graph {#Operation.graph}
+
+The `Graph` that contains this operation.
+
+
+- - -
+
+#### tf.Operation.run(feed_dict=None, session=None) {#Operation.run}
+
+Runs this operation in a `Session`.
+
+Calling this method will execute all preceding operations that
+produce the inputs needed for this operation.
+
+*N.B.* Before invoking `Operation.run()`, its graph must have been
+launched in a session, and either a default session must be
+available, or `session` must be specified explicitly.
+
+##### Args:
+
+
+* <b>feed_dict</b>: A dictionary that maps `Tensor` objects to feed values.
+ See [`Session.run()`](client.md#Session.run) for a description of the
+ valid feed values.
+* <b>session</b>: (Optional.) The `Session` to be used to run this operation. If
+ none, the default session will be used.
+
+
+
+- - -
+
+#### tf.Operation.get_attr(name) {#Operation.get_attr}
+
+Returns the value of the attr of this op with the given `name`.
+
+##### Args:
+
+
+* <b>name</b>: The name of the attr to fetch.
+
+##### Returns:
+
+ The value of the attr, as a Python object.
+
+##### Raises:
+
+
+* <b>ValueError</b>: If this op does not have an attr with the given `name`.
+
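+For example, a short sketch that reads the `transpose_a` attr defined by
+the `MatMul` op:
+
+```python
+a = tf.constant([[1.0, 2.0]])
+b = tf.constant([[3.0], [4.0]])
+c = tf.matmul(a, b)
+print c.op.get_attr("transpose_a")  # ==> False
+```
+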
+
+- - -
+
+#### tf.Operation.traceback {#Operation.traceback}
+
+Returns the call stack from when this operation was constructed.
+
+
+#### Other Methods
+- - -
+
+#### tf.Operation.__init__(node_def, g, inputs=None, output_types=None, control_inputs=None, input_types=None, original_op=None, op_def=None) {#Operation.__init__}
+
+Creates an `Operation`.
+
+NOTE: This constructor validates the name of the Operation (passed
+as "node_def.name"). Valid Operation names match the following
+regular expression:
+
+ [A-Za-z0-9.][A-Za-z0-9_.\-/]*
+
+##### Args:
+
+
+* <b>node_def</b>: graph_pb2.NodeDef. NodeDef for the Operation.
+ Used for attributes of graph_pb2.NodeDef, typically "name",
+ "op", and "device". The "input" attribute is irrelevant here
+ as it will be computed when generating the model.
+* <b>g</b>: Graph. The parent graph.
+* <b>inputs</b>: list of Tensor objects. The inputs to this Operation.
+* <b>output_types</b>: list of types_pb2.DataType. List of the types of the
+ Tensors computed by this operation. The length of this list indicates
+ the number of output endpoints of the Operation.
+* <b>control_inputs</b>: list of operations or tensors from which to have a
+ control dependency.
+* <b>input_types</b>: List of types_pb2.DataType representing the
+ types of the Tensors accepted by the Operation. By default
+ uses [x.dtype.base_dtype for x in inputs]. Operations that expect
+ reference-typed inputs must specify these explicitly.
+* <b>original_op</b>: Optional. Used to associate the new Operation with an
+ existing Operation (for example, a replica with the op that was
+ replicated).
+* <b>op_def</b>: Optional. The op_def_pb2.OpDef proto that describes the
+ op type that this Operation represents.
+
+##### Raises:
+
+
+* <b>TypeError</b>: if control inputs are not Operations or Tensors,
+ or if node_def is not a NodeDef,
+ or if g is not a Graph,
+ or if inputs are not Tensors,
+ or if inputs and input_types are incompatible.
+* <b>ValueError</b>: if the node_def name is not valid.
+
+
+- - -
+
+#### tf.Operation.node_def {#Operation.node_def}
+
+Returns a serialized `NodeDef` representation of this operation.
+
+##### Returns:
+
+ A
+ [`NodeDef`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/framework/graph.proto)
+ protocol buffer.
+
+- - -
+
+#### tf.Operation.op_def {#Operation.op_def}
+
+Returns the `OpDef` proto that represents the type of this op.
+
+##### Returns:
+
+ An
+ [`OpDef`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/framework/op_def.proto)
+ protocol buffer.
+
+- - -
+
+#### tf.Operation.values() {#Operation.values}
+
+DEPRECATED: Use outputs.
+
+
+
+- - -
+
+### class tf.Tensor <div class="md-anchor" id="Tensor">{#Tensor}</div>
+
+Represents a value produced by an `Operation`.
+
+A `Tensor` is a symbolic handle to one of the outputs of an
+`Operation`. It does not hold the values of that operation's output,
+but instead provides a means of computing those values in a
+TensorFlow [`Session`](client.md#Session).
+
+This class has two primary purposes:
+
+1. A `Tensor` can be passed as an input to another `Operation`.
+ This builds a dataflow connection between operations, which
+ enables TensorFlow to execute an entire `Graph` that represents a
+ large, multi-step computation.
+
+2. After the graph has been launched in a session, the value of the
+ `Tensor` can be computed by passing it to
+ [`Session.run()`](client.md#Session.run).
+ `t.eval()` is a shortcut for calling
+ `tf.get_default_session().run(t)`.
+
+In the following example, `c`, `d`, and `e` are symbolic `Tensor`
+objects, whereas `result` is a numpy array that stores a concrete
+value:
+
+```python
+# Build a dataflow graph.
+c = tf.constant([[1.0, 2.0], [3.0, 4.0]])
+d = tf.constant([[1.0, 1.0], [0.0, 1.0]])
+e = tf.matmul(c, d)
+
+# Construct a `Session` to execute the graph.
+sess = tf.Session()
+
+# Execute the graph and store the value that `e` represents in `result`.
+result = sess.run(e)
+```
+
+- - -
+
+#### tf.Tensor.dtype {#Tensor.dtype}
+
+The `DType` of elements in this tensor.
+
+- - -
+
+#### tf.Tensor.name {#Tensor.name}
+
+The string name of this tensor.
+
+- - -
+
+#### tf.Tensor.value_index {#Tensor.value_index}
+
+The index of this tensor in the outputs of its `Operation`.
+
+- - -
+
+#### tf.Tensor.graph {#Tensor.graph}
+
+The `Graph` that contains this tensor.
+
+- - -
+
+#### tf.Tensor.op {#Tensor.op}
+
+The `Operation` that produces this tensor as an output.
+
+- - -
+
+#### tf.Tensor.consumers() {#Tensor.consumers}
+
+Returns a list of `Operation`s that consume this tensor.
+
+##### Returns:
+
+ A list of `Operation`s.
+
+
+
+- - -
+
+#### tf.Tensor.eval(feed_dict=None, session=None) {#Tensor.eval}
+
+Evaluates this tensor in a `Session`.
+
+Calling this method will execute all preceding operations that
+produce the inputs needed for the operation that produces this
+tensor.
+
+*N.B.* Before invoking `Tensor.eval()`, its graph must have been
+launched in a session, and either a default session must be
+available, or `session` must be specified explicitly.
+
+##### Args:
+
+
+* <b>feed_dict</b>: A dictionary that maps `Tensor` objects to feed values.
+ See [`Session.run()`](client.md#Session.run) for a description of
+ the valid feed values.
+* <b>session</b>: (Optional.) The `Session` to be used to evaluate this tensor. If
+ none, the default session will be used.
+
+##### Returns:
+
+ A numpy array corresponding to the value of this tensor.
+
+
+
+- - -
+
+#### tf.Tensor.get_shape() {#Tensor.get_shape}
+
+Returns the `TensorShape` that represents the shape of this tensor.
+
+The shape is computed using shape inference functions that are
+registered for each `Operation` type using `tf.RegisterShape`.
+See [`TensorShape`](framework.md#TensorShape) for more details of what a shape
+represents.
+
+The inferred shape of a tensor is used to provide shape
+information without having to launch the graph in a session. This
+can be used for debugging, and providing early error messages. For
+example:
+
+```python
+c = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
+
+print c.get_shape()
+==> TensorShape([Dimension(2), Dimension(3)])
+
+d = tf.constant([[1.0, 0.0], [0.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
+
+print d.get_shape()
+==> TensorShape([Dimension(4), Dimension(2)])
+
+# Raises a ValueError, because `c` and `d` do not have compatible
+# inner dimensions.
+e = tf.matmul(c, d)
+
+f = tf.matmul(c, d, transpose_a=True, transpose_b=True)
+
+print f.get_shape()
+==> TensorShape([Dimension(3), Dimension(4)])
+```
+
+In some cases, the inferred shape may have unknown dimensions. If
+the caller has additional information about the values of these
+dimensions, `Tensor.set_shape()` can be used to augment the
+inferred shape.
+
+##### Returns:
+
+ A `TensorShape` representing the shape of this tensor.
+
+
+- - -
+
+#### tf.Tensor.set_shape(shape) {#Tensor.set_shape}
+
+Updates the shape of this tensor.
+
+This method can be called multiple times, and will merge the given
+`shape` with the current shape of this tensor. It can be used to
+provide additional information about the shape of this tensor that
+cannot be inferred from the graph alone. For example, this can be used
+to provide additional information about the shapes of images:
+
+```python
+_, image_data = tf.TFRecordReader(...).read(...)
+image = tf.image.decode_png(image_data, channels=3)
+
+# The height and width dimensions of `image` are data dependent, and
+# cannot be computed without executing the op.
+print image.get_shape()
+==> TensorShape([Dimension(None), Dimension(None), Dimension(3)])
+
+# We know that each image in this dataset is 28 x 28 pixels.
+image.set_shape([28, 28, 3])
+print image.get_shape()
+==> TensorShape([Dimension(28), Dimension(28), Dimension(3)])
+```
+
+##### Args:
+
+
+* <b>shape</b>: A `TensorShape` representing the shape of this tensor.
+
+##### Raises:
+
+
+* <b>ValueError</b>: If `shape` is not compatible with the current shape of
+ this tensor.
+
+
+
+#### Other Methods
+- - -
+
+#### tf.Tensor.__init__(op, value_index, dtype) {#Tensor.__init__}
+
+Creates a new `Tensor`.
+
+##### Args:
+
+
+* <b>op</b>: An `Operation`. `Operation` that computes this tensor.
+* <b>value_index</b>: An `int`. Index of the operation's endpoint that produces
+ this tensor.
+* <b>dtype</b>: A `types.DType`. Type of data stored in this tensor.
+
+##### Raises:
+
+
+* <b>TypeError</b>: If the op is not an `Operation`.
+
+
+- - -
+
+#### tf.Tensor.device {#Tensor.device}
+
+The name of the device on which this tensor will be produced, or None.
+
+
+
+## Tensor types <div class="md-anchor" id="AUTOGENERATED-tensor-types">{#AUTOGENERATED-tensor-types}</div>
+
+- - -
+
+### class tf.DType <div class="md-anchor" id="DType">{#DType}</div>
+
+Represents the type of the elements in a `Tensor`.
+
+The following `DType` objects are defined:
+
+* `tf.float32`: 32-bit single-precision floating-point.
+* `tf.float64`: 64-bit double-precision floating-point.
+* `tf.bfloat16`: 16-bit truncated floating-point.
+* `tf.complex64`: 64-bit single-precision complex.
+
+* `tf.int8`: 8-bit signed integer.
+* `tf.uint8`: 8-bit unsigned integer.
+* `tf.int32`: 32-bit signed integer.
+* `tf.int64`: 64-bit signed integer.
+
+* `tf.bool`: Boolean.
+
+* `tf.string`: String.
+
+* `tf.qint8`: Quantized 8-bit signed integer.
+* `tf.quint8`: Quantized 8-bit unsigned integer.
+* `tf.qint32`: Quantized 32-bit signed integer.
+
+In addition, variants of these types with the `_ref` suffix are
+defined for reference-typed tensors.
+
+The `tf.as_dtype()` function converts numpy types and string type
+names to a `DType` object.
+
+- - -
+
+#### tf.DType.is_compatible_with(other) {#DType.is_compatible_with}
+
+Returns True if the `other` DType will be converted to this DType.
+
+The conversion rules are as follows:
+
+```
+DType(T) .is_compatible_with(DType(T)) == True
+DType(T) .is_compatible_with(DType(T).as_ref) == True
+DType(T).as_ref.is_compatible_with(DType(T)) == False
+DType(T).as_ref.is_compatible_with(DType(T).as_ref) == True
+```
+
+##### Args:
+
+
+* <b>other</b>: A `DType` (or object that may be converted to a `DType`).
+
+##### Returns:
+
+ True if a Tensor of the `other` `DType` will be implicitly converted to
+ this `DType`.
+
+
+- - -
+
+#### tf.DType.name {#DType.name}
+
+Returns the string name for this `DType`.
+
+- - -
+
+#### tf.DType.base_dtype {#DType.base_dtype}
+
+Returns a non-reference `DType` based on this `DType`.
+
+- - -
+
+#### tf.DType.is_ref_dtype {#DType.is_ref_dtype}
+
+Returns `True` if this `DType` represents a reference type.
+
+- - -
+
+#### tf.DType.as_ref {#DType.as_ref}
+
+Returns a reference `DType` based on this `DType`.
+
+- - -
+
+#### tf.DType.is_integer {#DType.is_integer}
+
+Returns whether this is a (non-quantized) integer type.
+
+- - -
+
+#### tf.DType.is_quantized {#DType.is_quantized}
+
+Returns whether this is a quantized data type.
+
+
+- - -
+
+#### tf.DType.as_numpy_dtype {#DType.as_numpy_dtype}
+
+Returns a `numpy.dtype` based on this `DType`.
+
+- - -
+
+#### tf.DType.as_datatype_enum {#DType.as_datatype_enum}
+
+Returns a `types_pb2.DataType` enum value based on this `DType`.
+
+
+#### Other Methods
+- - -
+
+#### tf.DType.__init__(type_enum) {#DType.__init__}
+
+Creates a new `DataType`.
+
+NOTE(mrry): In normal circumstances, you should not need to
+construct a DataType object directly. Instead, use the
+types.as_dtype() function.
+
+##### Args:
+
+
+* <b>type_enum</b>: A `types_pb2.DataType` enum value.
+
+##### Raises:
+
+
+* <b>TypeError</b>: If `type_enum` is not a valid `types_pb2.DataType` value.
+
+
+- - -
+
+#### tf.DType.max {#DType.max}
+
+Returns the maximum representable value in this data type.
+
+##### Raises:
+
+
+* <b>TypeError</b>: if this is a non-numeric, unordered, or quantized type.
+
+- - -
+
+#### tf.DType.min {#DType.min}
+
+Returns the minimum representable value in this data type.
+
+##### Raises:
+
+
+* <b>TypeError</b>: if this is a non-numeric, unordered, or quantized type.
+
+
+- - -
+
+### tf.as_dtype(type_value) <div class="md-anchor" id="as_dtype">{#as_dtype}</div>
+
+Converts the given `type_value` to a `DType`.
+
+##### Args:
+
+
+* <b>type_value</b>: A value that can be converted to a `tf.DType`
+ object. This may currently be a `tf.DType` object, a
+ [`DataType` enum](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/framework/types.proto),
+ a string type name, or a `numpy.dtype`.
+
+##### Returns:
+
+ A `DType` corresponding to `type_value`.
+
+##### Raises:
+
+
+* <b>TypeError</b>: If `type_value` cannot be converted to a `DType`.
+
+
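+For example, a minimal sketch of the accepted input kinds:
+
+```python
+import numpy as np
+
+assert tf.as_dtype(tf.float32) == tf.float32   # Already a `DType`.
+assert tf.as_dtype("float32") == tf.float32    # A string type name.
+assert tf.as_dtype(np.float32) == tf.float32   # A `numpy.dtype`.
+```
+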
+
+## Utility functions <div class="md-anchor" id="AUTOGENERATED-utility-functions">{#AUTOGENERATED-utility-functions}</div>
+
+- - -
+
+### tf.device(dev) <div class="md-anchor" id="device">{#device}</div>
+
+Wrapper for `Graph.device()` using the default graph.
+
+See [`Graph.device()`](framework.md#Graph.device) for more details.
+
+##### Args:
+
+
+* <b>device_name_or_function</b>: The device name or function to use in
+ the context.
+
+##### Returns:
+
+ A context manager that specifies the default device to use for newly
+ created ops.
+
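+For example, a sketch (the device name `"/cpu:0"` is illustrative; any
+valid device name may be used):
+
+```python
+with tf.device("/cpu:0"):
+  # Ops constructed here are assigned to the first CPU device.
+  a = tf.constant(1.0)
+b = tf.constant(2.0)  # Constructed outside the scope; uses the default device.
+```
+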
+
+- - -
+
+### tf.name_scope(name) <div class="md-anchor" id="name_scope">{#name_scope}</div>
+
+Wrapper for `Graph.name_scope()` using the default graph.
+
+See [`Graph.name_scope()`](framework.md#Graph.name_scope) for more details.
+
+##### Args:
+
+
+* <b>name</b>: A name for the scope.
+
+##### Returns:
+
+ A context manager that installs `name` as a new name scope in the
+ default graph.
+
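+For example, a minimal sketch:
+
+```python
+with tf.name_scope("layer1"):
+  a = tf.constant(1.0, name="a")
+
+print a.name  # ==> "layer1/a:0"
+```
+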
+
+- - -
+
+### tf.control_dependencies(control_inputs) <div class="md-anchor" id="control_dependencies">{#control_dependencies}</div>
+
+Wrapper for `Graph.control_dependencies()` using the default graph.
+
+See [`Graph.control_dependencies()`](framework.md#Graph.control_dependencies)
+for more details.
+
+##### Args:
+
+
+* <b>control_inputs</b>: A list of `Operation` or `Tensor` objects, which
+ must be executed or computed before running the operations
+ defined in the context.
+
+##### Returns:
+
+ A context manager that specifies control dependencies for all
+ operations constructed within the context.
+
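+For example, a sketch that forces `b` to wait for `a` (the names here are
+illustrative):
+
+```python
+a = tf.constant(1.0, name="a")
+with tf.control_dependencies([a]):
+  # `b` will only run after `a` has been computed.
+  b = tf.constant(2.0, name="b")
+
+print [op.name for op in b.op.control_inputs]  # ==> ['a']
+```
+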
+
+- - -
+
+### tf.convert_to_tensor(value, dtype=None, name=None) <div class="md-anchor" id="convert_to_tensor">{#convert_to_tensor}</div>
+
+Converts the given `value` to a `Tensor`.
+
+This function converts Python objects of various types to `Tensor`
+objects. It accepts `Tensor` objects, numpy arrays, Python lists,
+and Python scalars. For example:
+
+```python
+import numpy as np
+array = np.random.rand(32, 100, 100)
+
+def my_func(arg):
+ arg = tf.convert_to_tensor(arg, dtype=tf.float32)
+ return tf.matmul(arg, arg) + arg
+
+# The following calls are equivalent.
+value_1 = my_func(tf.constant([[1.0, 2.0], [3.0, 4.0]]))
+value_2 = my_func([[1.0, 2.0], [3.0, 4.0]])
+value_3 = my_func(np.array([[1.0, 2.0], [3.0, 4.0]], dtype=np.float32))
+```
+
+This function can be useful when composing a new operation in Python
+(such as `my_func` in the example above). All standard Python op
+constructors apply this function to each of their Tensor-valued
+inputs, which allows those ops to accept numpy arrays, Python lists,
+and scalars in addition to `Tensor` objects.
+
+##### Args:
+
+
+* <b>value</b>: An object whose type has a registered `Tensor` conversion function.
+* <b>dtype</b>: Optional element type for the returned tensor. If missing, the
+ type is inferred from the type of `value`.
+* <b>name</b>: Optional name to use if a new `Tensor` is created.
+
+##### Returns:
+
+ A `Tensor` based on `value`.
+
+##### Raises:
+
+
+* <b>TypeError</b>: If no conversion function is registered for `value`.
+* <b>RuntimeError</b>: If a registered conversion function returns an invalid value.
+
+
+- - -
+
+### tf.get_default_graph() <div class="md-anchor" id="get_default_graph">{#get_default_graph}</div>
+
+Returns the default graph for the current thread.
+
+The returned graph will be the innermost graph on which a
+`Graph.as_default()` context has been entered, or a global default
+graph if none has been explicitly created.
+
+*N.B.* The default graph is a property of the current thread. If you
+create a new thread, and wish to use the default graph in that
+thread, you must explicitly add a `with g.as_default():` in that
+thread's function.
+
+##### Returns:
+
+ The default `Graph` being used in the current thread.
+
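+For example:
+
+```python
+g = tf.Graph()
+with g.as_default():
+  assert tf.get_default_graph() is g
+
+# Outside the context, the global default graph is current again.
+assert tf.get_default_graph() is not g
+```
+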
+
+- - -
+
+### tf.import_graph_def(graph_def, input_map=None, return_elements=None, name=None, op_dict=None) <div class="md-anchor" id="import_graph_def">{#import_graph_def}</div>
+
+Imports the TensorFlow graph in `graph_def` into the Python `Graph`.
+
+This function provides a way to import a serialized TensorFlow
+[`GraphDef`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/framework/graph.proto)
+protocol buffer, and extract individual objects in the `GraphDef` as
+[`Tensor`](#Tensor) and [`Operation`](#Operation) objects. See
+[`Graph.as_graph_def()`](#Graph.as_graph_def) for a way to create a
+`GraphDef` proto.
+
+##### Args:
+
+
+* <b>graph_def</b>: A `GraphDef` proto containing operations to be imported into
+ the default graph.
+* <b>input_map</b>: A dictionary mapping input names (as strings) in `graph_def`
+ to `Tensor` objects. The values of the named input tensors in the
+ imported graph will be re-mapped to the respective `Tensor` values.
+* <b>return_elements</b>: A list of strings containing operation names in
+ `graph_def` that will be returned as `Operation` objects; and/or
+ tensor names in `graph_def` that will be returned as `Tensor` objects.
+* <b>name</b>: (Optional.) A prefix that will be prepended to the names in
+ `graph_def`. Defaults to `"import"`.
+* <b>op_dict</b>: (Optional.) A dictionary mapping op type names to `OpDef` protos.
+ Must contain an `OpDef` proto for each op type named in `graph_def`.
+ If omitted, uses the `OpDef` protos registered in the global registry.
+
+##### Returns:
+
+ A list of `Operation` and/or `Tensor` objects from the imported graph,
+  corresponding to the names in `return_elements`.
+
+##### Raises:
+
+
+* <b>TypeError</b>: If `graph_def` is not a `GraphDef` proto,
+  `input_map` is not a dictionary mapping strings to `Tensor` objects,
+ or `return_elements` is not a list of strings.
+* <b>ValueError</b>: If `input_map`, or `return_elements` contains names that
+ do not appear in `graph_def`, or `graph_def` is not well-formed (e.g.
+ it refers to an unknown tensor).
+
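+For example, a round-trip sketch (the tensor name `"c"` is illustrative):
+
+```python
+# Build a graph and serialize it to a `GraphDef` proto.
+with tf.Graph().as_default() as g_1:
+  c = tf.constant(5.0, name="c")
+  graph_def = g_1.as_graph_def()
+
+# Import the `GraphDef` into a fresh graph, retrieving the tensor "c:0".
+with tf.Graph().as_default():
+  c_imported, = tf.import_graph_def(graph_def, return_elements=["c:0"])
+  print c_imported.name  # ==> "import/c:0"
+```
+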
+
+
+## Graph collections <div class="md-anchor" id="AUTOGENERATED-graph-collections">{#AUTOGENERATED-graph-collections}</div>
+
+- - -
+
+### tf.add_to_collection(name, value) <div class="md-anchor" id="add_to_collection">{#add_to_collection}</div>
+
+Wrapper for `Graph.add_to_collection()` using the default graph.
+
+See [`Graph.add_to_collection()`](framework.md#Graph.add_to_collection)
+for more details.
+
+##### Args:
+
+
+* <b>name</b>: The key for the collection. For example, the `GraphKeys` class
+ contains many standard names for collections.
+* <b>value</b>: The value to add to the collection.
+
+
+- - -
+
+### tf.get_collection(key, scope=None) <div class="md-anchor" id="get_collection">{#get_collection}</div>
+
+Wrapper for `Graph.get_collection()` using the default graph.
+
+See [`Graph.get_collection()`](framework.md#Graph.get_collection)
+for more details.
+
+##### Args:
+
+
+* <b>key</b>: The key for the collection. For example, the `GraphKeys` class
+ contains many standard names for collections.
+* <b>scope</b>: (Optional.) If supplied, the resulting list is filtered to include
+ only items whose name begins with this string.
+
+##### Returns:
+
+ The list of values in the collection with the given `name`, or
+ an empty list if no value has been added to that collection. The
+ list contains the values in the order under which they were
+ collected.
+
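+For example, a minimal sketch (the collection name is illustrative):
+
+```python
+c = tf.constant(4.0)
+tf.add_to_collection("my_collection", c)
+assert tf.get_collection("my_collection") == [c]
+```
+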
+
+- - -
+
+### class tf.GraphKeys <div class="md-anchor" id="GraphKeys">{#GraphKeys}</div>
+
+Standard names to use for graph collections.
+
+The standard library uses various well-known names to collect and
+retrieve values associated with a graph. For example, the
+`tf.Optimizer` subclasses default to optimizing the variables
+collected under `tf.GraphKeys.TRAINABLE_VARIABLES` if none is
+specified, but it is also possible to pass an explicit list of
+variables.
+
+The following standard keys are defined:
+
+* `VARIABLES`: the `Variable` objects that comprise a model, and
+ must be saved and restored together. See
+ [`tf.all_variables()`](state_ops.md#all_variables) for more details.
+* `TRAINABLE_VARIABLES`: the subset of `Variable` objects that will
+ be trained by an optimizer. See
+ [`tf.trainable_variables()`](state_ops.md#trainable_variables)
+ for more details.
+* `SUMMARIES`: the summary `Tensor` objects that have been created
+ in the graph. See [`tf.merge_all_summaries()`](train.md#merge_all_summaries)
+ for more details.
+* `QUEUE_RUNNERS`: the `QueueRunner` objects that are used to
+ produce input for a computation. See
+ [`tf.start_queue_runners()`](train.md#start_queue_runners) for more details.
+
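+For example, a sketch of looking up the trainable variables by their
+standard key:
+
+```python
+v = tf.Variable(tf.zeros([10]), name="v")  # Trainable by default.
+trainable = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES)
+assert v in trainable
+```
+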
+
+## Defining new operations <div class="md-anchor" id="AUTOGENERATED-defining-new-operations">{#AUTOGENERATED-defining-new-operations}</div>
+
+- - -
+
+### class tf.RegisterGradient <div class="md-anchor" id="RegisterGradient">{#RegisterGradient}</div>
+
+A decorator for registering the gradient function for an op type.
+
+This decorator is only used when defining a new op type. For an op
+with `m` inputs and `n` outputs, the gradient function is a function
+that takes the original `Operation` and `n` `Tensor` objects
+(representing the gradients with respect to each output of the op),
+and returns `m` `Tensor` objects (representing the partial gradients
+with respect to each input of the op).
+
+For example, assuming that operations of type `"Sub"` take two
+inputs `x` and `y`, and return a single output `x - y`, the
+following gradient function would be registered:
+
+```python
+@tf.RegisterGradient("Sub")
+def _sub_grad(unused_op, grad):
+  return grad, tf.neg(grad)
+```
+
+The decorator argument `op_type` is the string type of an
+operation. This corresponds to the `OpDef.name` field for the proto
+that defines the operation.
+
+- - -
+
+#### tf.RegisterGradient.__init__(op_type) {#RegisterGradient.__init__}
+
+Creates a new decorator with `op_type` as the Operation type.
+
+##### Args:
+
+
+* <b>op_type</b>: The string type of an operation. This corresponds to the
+ `OpDef.name` field for the proto that defines the operation.
+
+
+
+- - -
+
+### tf.NoGradient(op_type) <div class="md-anchor" id="NoGradient">{#NoGradient}</div>
+
+Specifies that ops of type `op_type` do not have a defined gradient.
+
+This function is only used when defining a new op type. It may be
+used for ops such as `tf.size()` that are not differentiable. For
+example:
+
+```python
+tf.NoGradient("Size")
+```
+
+##### Args:
+
+
+* <b>op_type</b>: The string type of an operation. This corresponds to the
+ `OpDef.name` field for the proto that defines the operation.
+
+##### Raises:
+
+
+* <b>TypeError</b>: If `op_type` is not a string.
+
+
+- - -
+
+### class tf.RegisterShape <div class="md-anchor" id="RegisterShape">{#RegisterShape}</div>
+
+A decorator for registering the shape function for an op type.
+
+This decorator is only used when defining a new op type. A shape
+function is a function from an `Operation` object to a list of
+`TensorShape` objects, with one `TensorShape` for each output of the
+operation.
+
+For example, assuming that operations of type `"Sub"` take two
+inputs `x` and `y`, and return a single output `x - y`, all with the
+same shape, the following shape function would be registered:
+
+```python
+@tf.RegisterShape("Sub")
+def _sub_shape(op):
+ return [op.inputs[0].get_shape().merge_with(op.inputs[1].get_shape())]
+```
+
+The decorator argument `op_type` is the string type of an
+operation. This corresponds to the `OpDef.name` field for the proto
+that defines the operation.
+- - -
+
+#### tf.RegisterShape.__init__(op_type) {#RegisterShape.__init__}
+
+Saves the "op_type" as the Operation type.
+
+
+
+- - -
+
+### class tf.TensorShape <div class="md-anchor" id="TensorShape">{#TensorShape}</div>
+
+Represents the shape of a `Tensor`.
+
+A `TensorShape` represents a possibly-partial shape specification for a
+`Tensor`. It may be one of the following:
+
+* *Fully-known shape:* has a known number of dimensions and a known size
+ for each dimension.
+* *Partially-known shape:* has a known number of dimensions, and an unknown
+  size for one or more dimensions.
+* *Unknown shape:* has an unknown number of dimensions, and an unknown
+ size in all dimensions.
+
+If a tensor is produced by an operation of type `"Foo"`, its shape
+may be inferred if there is a registered shape function for
+`"Foo"`. See [`tf.RegisterShape()`](framework.md#RegisterShape)
+for details of shape
+functions and how to register them. Alternatively, the shape may be set
+explicitly using [`Tensor.set_shape()`](framework.md#Tensor.set_shape).
+
+- - -
+
+#### tf.TensorShape.merge_with(other) {#TensorShape.merge_with}
+
+Returns a `TensorShape` combining the information in `self` and `other`.
+
+The dimensions in `self` and `other` are merged elementwise,
+according to the rules defined for `Dimension.merge_with()`.
+
+##### Args:
+
+
+* <b>other</b>: Another `TensorShape`.
+
+##### Returns:
+
+ A `TensorShape` containing the combined information of `self` and
+ `other`.
+
+##### Raises:
+
+
+* <b>ValueError</b>: If `self` and `other` are not compatible.
+
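+For example:
+
+```python
+s_1 = tf.TensorShape([None, 784])
+s_2 = tf.TensorShape([32, None])
+print s_1.merge_with(s_2)  # ==> TensorShape([Dimension(32), Dimension(784)])
+```
+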
+
+- - -
+
+#### tf.TensorShape.concatenate(other) {#TensorShape.concatenate}
+
+Returns the concatenation of the dimensions in `self` and `other`.
+
+*N.B.* If either `self` or `other` is completely unknown,
+concatenation will discard information about the other shape. In
+future, we might support concatenation that preserves this
+information for use with slicing.
+
+##### Args:
+
+
+* <b>other</b>: Another `TensorShape`.
+
+##### Returns:
+
+ A `TensorShape` whose dimensions are the concatenation of the
+ dimensions in `self` and `other`.
+
+
+
+- - -
+
+#### tf.TensorShape.ndims {#TensorShape.ndims}
+
+Returns the rank of this shape, or None if it is unspecified.
+
+- - -
+
+#### tf.TensorShape.dims {#TensorShape.dims}
+
+Returns a list of Dimensions, or None if the shape is unspecified.
+
+- - -
+
+#### tf.TensorShape.as_list() {#TensorShape.as_list}
+
+Returns a list of integers or None for each dimension.
+
+
+- - -
+
+#### tf.TensorShape.is_compatible_with(other) {#TensorShape.is_compatible_with}
+
+Returns True iff `self` is compatible with `other`.
+
+Two possibly-partially-defined shapes are compatible if there
+exists a fully-defined shape that both shapes can represent. Thus,
+compatibility allows the shape inference code to reason about
+partially-defined shapes. For example:
+
+* TensorShape(None) is compatible with all shapes.
+
+* TensorShape([None, None]) is compatible with all two-dimensional
+ shapes, such as TensorShape([32, 784]), and also TensorShape(None). It is
+ not compatible with, for example, TensorShape([None]) or
+ TensorShape([None, None, None]).
+
+* TensorShape([32, None]) is compatible with all two-dimensional shapes
+ with size 32 in the 0th dimension, and also TensorShape([None, None])
+ and TensorShape(None). It is not compatible with, for example,
+ TensorShape([32]), TensorShape([32, None, 1]) or TensorShape([64, None]).
+
+* TensorShape([32, 784]) is compatible with itself, and also
+ TensorShape([32, None]), TensorShape([None, 784]), TensorShape([None,
+ None]) and TensorShape(None). It is not compatible with, for example,
+ TensorShape([32, 1, 784]) or TensorShape([None]).
+
+The compatibility relation is reflexive and symmetric, but not
+transitive. For example, TensorShape([32, 784]) is compatible with
+TensorShape(None), and TensorShape(None) is compatible with
+TensorShape([4, 4]), but TensorShape([32, 784]) is not compatible with
+TensorShape([4, 4]).
+
+##### Args:
+
+
+* <b>other</b>: Another TensorShape.
+
+##### Returns:
+
+ True iff `self` is compatible with `other`.
+
+
+- - -
+
+#### tf.TensorShape.is_fully_defined() {#TensorShape.is_fully_defined}
+
+Returns True iff `self` is fully defined in every dimension.
+
+
+
+- - -
+
+#### tf.TensorShape.with_rank(rank) {#TensorShape.with_rank}
+
+Returns a shape based on `self` with the given rank.
+
+This method promotes a completely unknown shape to one with a
+known rank.
+
+##### Args:
+
+
+* <b>rank</b>: An integer.
+
+##### Returns:
+
+ A shape that is at least as specific as `self` with the given rank.
+
+##### Raises:
+
+
+* <b>ValueError</b>: If `self` does not represent a shape with the given `rank`.
+
+
+- - -
+
+#### tf.TensorShape.with_rank_at_least(rank) {#TensorShape.with_rank_at_least}
+
+Returns a shape based on `self` with at least the given rank.
+
+##### Args:
+
+
+* <b>rank</b>: An integer.
+
+##### Returns:
+
+ A shape that is at least as specific as `self` with at least the given
+ rank.
+
+##### Raises:
+
+
+* <b>ValueError</b>: If `self` does not represent a shape with at least the given
+ `rank`.
+
+
+- - -
+
+#### tf.TensorShape.with_rank_at_most(rank) {#TensorShape.with_rank_at_most}
+
+Returns a shape based on `self` with at most the given rank.
+
+##### Args:
+
+
+* <b>rank</b>: An integer.
+
+##### Returns:
+
+ A shape that is at least as specific as `self` with at most the given
+ rank.
+
+##### Raises:
+
+
+* <b>ValueError</b>: If `self` does not represent a shape with at most the given
+ `rank`.
+
+
+
+- - -
+
+#### tf.TensorShape.assert_has_rank(rank) {#TensorShape.assert_has_rank}
+
+Raises an exception if `self` is not compatible with the given `rank`.
+
+##### Args:
+
+
+* <b>rank</b>: An integer.
+
+##### Raises:
+
+
+* <b>ValueError</b>: If `self` does not represent a shape with the given `rank`.
+
+
+- - -
+
+#### tf.TensorShape.assert_same_rank(other) {#TensorShape.assert_same_rank}
+
+Raises an exception if `self` and `other` do not have compatible ranks.
+
+##### Args:
+
+
+* <b>other</b>: Another `TensorShape`.
+
+##### Raises:
+
+
+* <b>ValueError</b>: If `self` and `other` do not represent shapes with the
+ same rank.
+
+
+- - -
+
+#### tf.TensorShape.assert_is_compatible_with(other) {#TensorShape.assert_is_compatible_with}
+
+Raises exception if `self` and `other` do not represent the same shape.
+
+This method can be used to assert that there exists a shape that both
+`self` and `other` represent.
+
+##### Args:
+
+
+* <b>other</b>: Another TensorShape.
+
+##### Raises:
+
+
+* <b>ValueError</b>: If `self` and `other` do not represent the same shape.
+
+
+- - -
+
+#### tf.TensorShape.assert_is_fully_defined() {#TensorShape.assert_is_fully_defined}
+
+Raises an exception if `self` is not fully defined in every dimension.
+
+##### Raises:
+
+
+* <b>ValueError</b>: If `self` does not have a known value for every dimension.
+
+
+
+#### Other Methods
+- - -
+
+#### tf.TensorShape.__init__(dims) {#TensorShape.__init__}
+
+Creates a new TensorShape with the given dimensions.
+
+##### Args:
+
+
+* <b>dims</b>: A list of Dimensions, or None if the shape is unspecified.
+* <b>DEPRECATED</b>: A single integer is treated as a singleton list.
+
+
+- - -
+
+#### tf.TensorShape.as_dimension_list() {#TensorShape.as_dimension_list}
+
+DEPRECATED: use as_list().
+
+
+- - -
+
+#### tf.TensorShape.num_elements() {#TensorShape.num_elements}
+
+Returns the total number of elements, or `None` for incomplete shapes.
+
+
+
+- - -
+
+### class tf.Dimension <div class="md-anchor" id="Dimension">{#Dimension}</div>
+
+Represents the value of one dimension in a TensorShape.
+- - -
+
+#### tf.Dimension.__init__(value) {#Dimension.__init__}
+
+Creates a new Dimension with the given value.
+
+
+- - -
+
+#### tf.Dimension.assert_is_compatible_with(other) {#Dimension.assert_is_compatible_with}
+
+Raises an exception if `other` is not compatible with this Dimension.
+
+##### Args:
+
+
+* <b>other</b>: Another Dimension.
+
+##### Raises:
+
+
+* <b>ValueError</b>: If `self` and `other` are not compatible (see
+ is_compatible_with).
+
+
+- - -
+
+#### tf.Dimension.is_compatible_with(other) {#Dimension.is_compatible_with}
+
+Returns true if `other` is compatible with this Dimension.
+
+Two known Dimensions are compatible if they have the same value.
+An unknown Dimension is compatible with all other Dimensions.
+
+##### Args:
+
+
+* <b>other</b>: Another Dimension.
+
+##### Returns:
+
+ True if this Dimension and `other` are compatible.
+
+
+- - -
+
+#### tf.Dimension.merge_with(other) {#Dimension.merge_with}
+
+Returns a Dimension that combines the information in `self` and `other`.
+
+Dimensions are combined as follows:
+
+ Dimension(n) .merge_with(Dimension(n)) == Dimension(n)
+ Dimension(n) .merge_with(Dimension(None)) == Dimension(n)
+ Dimension(None).merge_with(Dimension(n)) == Dimension(n)
+ Dimension(None).merge_with(Dimension(None)) == Dimension(None)
+ Dimension(n) .merge_with(Dimension(m)) raises ValueError for n != m
+
+##### Args:
+
+
+* <b>other</b>: Another Dimension.
+
+##### Returns:
+
+ A Dimension containing the combined information of `self` and
+ `other`.
+
+##### Raises:
+
+
+* <b>ValueError</b>: If `self` and `other` are not compatible (see
+ is_compatible_with).
+
+
+- - -
+
+#### tf.Dimension.value {#Dimension.value}
+
+The value of this dimension, or None if it is unknown.
+
+
+- - -
+
+### tf.op_scope(*args, **kwds) <div class="md-anchor" id="op_scope">{#op_scope}</div>
+
+Returns a context manager for use when defining a Python op.
+
+This context manager validates that the given `values` are from the
+same graph, ensures that that graph is the default graph, and pushes a
+name scope.
+
+For example, to define a new Python op called `my_op`:
+
+```python
+def my_op(a, b, c, name=None):
+ with tf.op_scope([a, b, c], name, "MyOp") as scope:
+ a = tf.convert_to_tensor(a, name="a")
+ b = tf.convert_to_tensor(b, name="b")
+ c = tf.convert_to_tensor(c, name="c")
+ # Define some computation that uses `a`, `b`, and `c`.
+ return foo_op(..., name=scope)
+```
+
+##### Args:
+
+
+* <b>values</b>: The list of `Tensor` arguments that are passed to the op function.
+* <b>name</b>: The name argument that is passed to the op function.
+* <b>default_name</b>: The default name to use if the `name` argument is `None`.
+
+##### Returns:
+
+ A context manager for use in defining a Python op.
+
+
+- - -
+
+### tf.get_seed(op_seed) <div class="md-anchor" id="get_seed">{#get_seed}</div>
+
+Returns the local seeds an operation should use given an op-specific seed.
+
+Given an operation-specific seed, `op_seed`, this helper function returns two
+seeds derived from the graph-level and op-level seeds. Many random operations
+internally use the two seeds to allow the user to change the seed globally for
+a graph, or for only specific operations.
+
+For details on how the graph-level seed interacts with op seeds, see
+[`set_random_seed`](constant_op.md#set_random_seed).
+
+##### Args:
+
+
+* <b>op_seed</b>: integer.
+
+##### Returns:
+
+ A tuple of two integers that should be used for the local seed of this
+ operation.
+
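+For example, a sketch (the printed values are illustrative; the exact
+derivation of the returned pair is an implementation detail):
+
+```python
+tf.set_random_seed(1234)   # Set the graph-level seed.
+seeds = tf.get_seed(42)    # Derive the pair of local seeds for one op.
+print seeds                # e.g. ==> (1234, 42)
+```
+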
+
diff --git a/tensorflow/g3doc/api_docs/python/image.md b/tensorflow/g3doc/api_docs/python/image.md
new file mode 100644
index 0000000000..6b3d6c3ca7
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/image.md
@@ -0,0 +1,857 @@
+<!-- This file is machine generated: DO NOT EDIT! -->
+
+# Images
+<!-- TOC-BEGIN This section is generated by neural network: DO NOT EDIT! -->
+## Contents
+* [Encoding and Decoding.](#AUTOGENERATED-encoding-and-decoding.)
+ * [tf.image.decode_jpeg(contents, channels=None, ratio=None, fancy_upscaling=None, try_recover_truncated=None, acceptable_fraction=None, name=None)](#decode_jpeg)
+ * [tf.image.encode_jpeg(image, format=None, quality=None, progressive=None, optimize_size=None, chroma_downsampling=None, density_unit=None, x_density=None, y_density=None, xmp_metadata=None, name=None)](#encode_jpeg)
+ * [tf.image.decode_png(contents, channels=None, name=None)](#decode_png)
+ * [tf.image.encode_png(image, compression=None, name=None)](#encode_png)
+* [Resizing.](#AUTOGENERATED-resizing.)
+ * [tf.image.resize_images(images, new_height, new_width, method=0)](#resize_images)
+ * [tf.image.resize_area(images, size, name=None)](#resize_area)
+ * [tf.image.resize_bicubic(images, size, name=None)](#resize_bicubic)
+ * [tf.image.resize_bilinear(images, size, name=None)](#resize_bilinear)
+ * [tf.image.resize_nearest_neighbor(images, size, name=None)](#resize_nearest_neighbor)
+* [Cropping.](#AUTOGENERATED-cropping.)
+ * [tf.image.resize_image_with_crop_or_pad(image, target_height, target_width)](#resize_image_with_crop_or_pad)
+ * [tf.image.pad_to_bounding_box(image, offset_height, offset_width, target_height, target_width)](#pad_to_bounding_box)
+ * [tf.image.crop_to_bounding_box(image, offset_height, offset_width, target_height, target_width)](#crop_to_bounding_box)
+ * [tf.image.random_crop(image, size, seed=None, name=None)](#random_crop)
+ * [tf.image.extract_glimpse(input, size, offsets, centered=None, normalized=None, uniform_noise=None, name=None)](#extract_glimpse)
+* [Flipping and Transposing.](#AUTOGENERATED-flipping-and-transposing.)
+ * [tf.image.flip_up_down(image)](#flip_up_down)
+ * [tf.image.random_flip_up_down(image, seed=None)](#random_flip_up_down)
+ * [tf.image.flip_left_right(image)](#flip_left_right)
+ * [tf.image.random_flip_left_right(image, seed=None)](#random_flip_left_right)
+ * [tf.image.transpose_image(image)](#transpose_image)
+* [Image Adjustments.](#AUTOGENERATED-image-adjustments.)
+ * [tf.image.adjust_brightness(image, delta, min_value=None, max_value=None)](#adjust_brightness)
+ * [tf.image.random_brightness(image, max_delta, seed=None)](#random_brightness)
+ * [tf.image.adjust_contrast(images, contrast_factor, min_value=None, max_value=None)](#adjust_contrast)
+ * [tf.image.random_contrast(image, lower, upper, seed=None)](#random_contrast)
+ * [tf.image.per_image_whitening(image)](#per_image_whitening)
+
+
+<!-- TOC-END This section was generated by neural network, THANKS FOR READING! -->
+
+## Encoding and Decoding. <div class="md-anchor" id="AUTOGENERATED-encoding-and-decoding.">{#AUTOGENERATED-encoding-and-decoding.}</div>
+
+TensorFlow provides Ops to decode and encode JPEG and PNG formats. Encoded
+images are represented by scalar string Tensors, decoded images by 3-D uint8
+tensors of shape `[height, width, channels]`.
+
+The encode and decode Ops apply to one image at a time. Their input and output
+are all of variable size. If you need fixed size images, pass the output of
+the decode Ops to one of the cropping and resizing Ops.
+
+Note: The PNG encode and decode Ops support RGBA, but the conversion Ops
+presently only support RGB, HSV, and grayscale.
+
+- - -
+
+### tf.image.decode_jpeg(contents, channels=None, ratio=None, fancy_upscaling=None, try_recover_truncated=None, acceptable_fraction=None, name=None) <div class="md-anchor" id="decode_jpeg">{#decode_jpeg}</div>
+
+Decode a JPEG-encoded image to a uint8 tensor.
+
+The attr `channels` indicates the desired number of color channels for the
+decoded image.
+
+Accepted values are:
+
+* 0: Use the number of channels in the JPEG-encoded image.
+* 1: output a grayscale image.
+* 3: output an RGB image.
+
+If needed, the JPEG-encoded image is transformed to match the requested number
+of color channels.
+
+The attr `ratio` allows downscaling the image by an integer factor during
+decoding. Allowed values are: 1, 2, 4, and 8. This is much faster than
+downscaling the image later.
+
+##### Args:
+
+
+* <b>contents</b>: A `Tensor` of type `string`. 0-D. The JPEG-encoded image.
+* <b>channels</b>: An optional `int`. Defaults to `0`.
+ Number of color channels for the decoded image.
+* <b>ratio</b>: An optional `int`. Defaults to `1`. Downscaling ratio.
+* <b>fancy_upscaling</b>: An optional `bool`. Defaults to `True`.
+ If true use a slower but nicer upscaling of the
+ chroma planes (yuv420/422 only).
+* <b>try_recover_truncated</b>: An optional `bool`. Defaults to `False`.
+ If true try to recover an image from truncated input.
+* <b>acceptable_fraction</b>: An optional `float`. Defaults to `1`.
+ The minimum required fraction of lines before a truncated
+ input is accepted.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+  A `Tensor` of type `uint8`. 3-D with shape `[height, width, channels]`.
+
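+For example, a sketch in which the JPEG bytes are fed through a placeholder:
+
+```python
+jpeg_bytes = tf.placeholder(tf.string)
+image = tf.image.decode_jpeg(jpeg_bytes, channels=3, ratio=2)
+# `image` is a uint8 tensor of shape [height / 2, width / 2, 3].
+```
+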
+
+- - -
+
+### tf.image.encode_jpeg(image, format=None, quality=None, progressive=None, optimize_size=None, chroma_downsampling=None, density_unit=None, x_density=None, y_density=None, xmp_metadata=None, name=None) <div class="md-anchor" id="encode_jpeg">{#encode_jpeg}</div>
+
+JPEG-encode an image.
+
+`image` is a 3-D uint8 Tensor of shape `[height, width, channels]`.
+
+The attr `format` can be used to override the color format of the encoded
+output. Values can be:
+
+* `''`: Use a default format based on the number of channels in the image.
+* `grayscale`: Output a grayscale JPEG image. The `channels` dimension
+ of `image` must be 1.
+* `rgb`: Output an RGB JPEG image. The `channels` dimension
+ of `image` must be 3.
+
+If `format` is not specified or is the empty string, a default format is picked
+based on the number of channels in `image`:
+
+* 1: Output a grayscale image.
+* 3: Output an RGB image.
+
+##### Args:
+
+
+* <b>image</b>: A `Tensor` of type `uint8`.
+ 3-D with shape `[height, width, channels]`.
+* <b>format</b>: An optional `string` from: `"", "grayscale", "rgb"`. Defaults to `""`.
+ Per pixel image format.
+* <b>quality</b>: An optional `int`. Defaults to `95`.
+ Quality of the compression from 0 to 100 (higher is better and slower).
+* <b>progressive</b>: An optional `bool`. Defaults to `False`.
+ If True, create a JPEG that loads progressively (coarse to fine).
+* <b>optimize_size</b>: An optional `bool`. Defaults to `False`.
+ If True, spend CPU/RAM to reduce size with no quality change.
+* <b>chroma_downsampling</b>: An optional `bool`. Defaults to `True`.
+ See http://en.wikipedia.org/wiki/Chroma_subsampling.
+* <b>density_unit</b>: An optional `string` from: `"in", "cm"`. Defaults to `"in"`.
+ Unit used to specify `x_density` and `y_density`:
+ pixels per inch (`'in'`) or centimeter (`'cm'`).
+* <b>x_density</b>: An optional `int`. Defaults to `300`.
+ Horizontal pixels per density unit.
+* <b>y_density</b>: An optional `int`. Defaults to `300`.
+ Vertical pixels per density unit.
+* <b>xmp_metadata</b>: An optional `string`. Defaults to `""`.
+ If not empty, embed this XMP metadata in the image header.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor` of type `string`. 0-D. JPEG-encoded image.
+
+
+
+- - -
+
+### tf.image.decode_png(contents, channels=None, name=None) <div class="md-anchor" id="decode_png">{#decode_png}</div>
+
+Decode a PNG-encoded image to a uint8 tensor.
+
+The attr `channels` indicates the desired number of color channels for the
+decoded image.
+
+Accepted values are:
+
+* 0: Use the number of channels in the PNG-encoded image.
+* 1: output a grayscale image.
+* 3: output an RGB image.
+* 4: output an RGBA image.
+
+If needed, the PNG-encoded image is transformed to match the requested number
+of color channels.
+
+##### Args:
+
+
+* <b>contents</b>: A `Tensor` of type `string`. 0-D. The PNG-encoded image.
+* <b>channels</b>: An optional `int`. Defaults to `0`.
+ Number of color channels for the decoded image.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor` of type `uint8`. 3-D with shape `[height, width, channels]`.
+
+
+- - -
+
+### tf.image.encode_png(image, compression=None, name=None) <div class="md-anchor" id="encode_png">{#encode_png}</div>
+
+PNG-encode an image.
+
+`image` is a 3-D uint8 Tensor of shape `[height, width, channels]` where
+`channels` is:
+
+* 1: for grayscale.
+* 3: for RGB.
+* 4: for RGBA.
+
+The ZLIB compression level, `compression`, can be -1 for the PNG-encoder
+default or a value from 0 to 9. 9 is the highest compression level, generating
+the smallest output, but is slower.
+
+##### Args:
+
+
+* <b>image</b>: A `Tensor` of type `uint8`.
+ 3-D with shape `[height, width, channels]`.
+* <b>compression</b>: An optional `int`. Defaults to `-1`. Compression level.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor` of type `string`. 0-D. PNG-encoded image.
+
+
+
+## Resizing. <div class="md-anchor" id="AUTOGENERATED-resizing.">{#AUTOGENERATED-resizing.}</div>
+
+The resizing Ops accept input images as tensors of several types. They always
+output resized images as float32 tensors, with the exception of
+[resize_nearest_neighbor](#resize_nearest_neighbor), which preserves the type
+of its input.
+
+The convenience function [resize_images()](#resize_images) supports both 4-D
+and 3-D tensors as input and output. 4-D tensors are for batches of images,
+3-D tensors for individual images.
+
+Other resizing Ops only support 4-D batches of images as input:
+[resize_area](#resize_area), [resize_bicubic](#resize_bicubic),
+[resize_bilinear](#resize_bilinear),
+[resize_nearest_neighbor](#resize_nearest_neighbor).
+
+Example:
+
+```python
+# Decode a JPEG image and resize it to 299 by 299 using the default
+# bilinear method. `resize_images()` accepts the 3-D image directly;
+# `resize_bilinear()` would require a 4-D batch.
+image = tf.image.decode_jpeg(...)
+resized_image = tf.image.resize_images(image, 299, 299)
+```
+
+
+- - -
+
+### tf.image.resize_images(images, new_height, new_width, method=0) <div class="md-anchor" id="resize_images">{#resize_images}</div>
+
+Resize `images` to `new_width`, `new_height` using the specified `method`.
+
+Resized images will be distorted if their original aspect ratio is not
+the same as `new_width`, `new_height`. To avoid distortions see
+[resize_image_with_crop_or_pad](#resize_image_with_crop_or_pad).
+
+`method` can be one of:
+
+* <b>ResizeMethod.BILINEAR</b>: [Bilinear interpolation.]
+ (https://en.wikipedia.org/wiki/Bilinear_interpolation)
+* <b>ResizeMethod.NEAREST_NEIGHBOR</b>: [Nearest neighbor interpolation.]
+ (https://en.wikipedia.org/wiki/Nearest-neighbor_interpolation)
+* <b>ResizeMethod.BICUBIC</b>: [Bicubic interpolation.]
+ (https://en.wikipedia.org/wiki/Bicubic_interpolation)
+* <b>ResizeMethod.AREA</b>: Area interpolation.
+
+##### Args:
+
+
+* <b>images</b>: 4-D Tensor of shape `[batch, height, width, channels]` or
+ 3-D Tensor of shape `[height, width, channels]`.
+* <b>new_height</b>: integer.
+* <b>new_width</b>: integer.
+* <b>method</b>: ResizeMethod. Defaults to `ResizeMethod.BILINEAR`.
+
+##### Raises:
+
+
+* <b>ValueError</b>: if the shape of `images` is incompatible with the
+  shape arguments to this function.
+* <b>ValueError</b>: if an unsupported resize method is specified.
+
+##### Returns:
+
+ If `images` was 4-D, a 4-D float Tensor of shape
+ `[batch, new_height, new_width, channels]`.
+ If `images` was 3-D, a 3-D float Tensor of shape
+ `[new_height, new_width, channels]`.
+
+
+
+- - -
+
+### tf.image.resize_area(images, size, name=None) <div class="md-anchor" id="resize_area">{#resize_area}</div>
+
+Resize `images` to `size` using area interpolation.
+
+Input images can be of different types but output images are always float.
+
+##### Args:
+
+
+* <b>images</b>: A `Tensor`. Must be one of the following types: `uint8`, `int8`, `int32`, `float32`, `float64`.
+ 4-D with shape `[batch, height, width, channels]`.
+* <b>size</b>: A 1-D int32 Tensor of 2 elements: `new_height, new_width`. The
+ new size for the images.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor` of type `float32`. 4-D with shape
+ `[batch, new_height, new_width, channels]`.
+
+
+- - -
+
+### tf.image.resize_bicubic(images, size, name=None) <div class="md-anchor" id="resize_bicubic">{#resize_bicubic}</div>
+
+Resize `images` to `size` using bicubic interpolation.
+
+Input images can be of different types but output images are always float.
+
+##### Args:
+
+
+* <b>images</b>: A `Tensor`. Must be one of the following types: `uint8`, `int8`, `int32`, `float32`, `float64`.
+ 4-D with shape `[batch, height, width, channels]`.
+* <b>size</b>: A 1-D int32 Tensor of 2 elements: `new_height, new_width`. The
+ new size for the images.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor` of type `float32`. 4-D with shape
+ `[batch, new_height, new_width, channels]`.
+
+
+- - -
+
+### tf.image.resize_bilinear(images, size, name=None) <div class="md-anchor" id="resize_bilinear">{#resize_bilinear}</div>
+
+Resize `images` to `size` using bilinear interpolation.
+
+Input images can be of different types but output images are always float.
+
+##### Args:
+
+
+* <b>images</b>: A `Tensor`. Must be one of the following types: `uint8`, `int8`, `int32`, `float32`, `float64`.
+ 4-D with shape `[batch, height, width, channels]`.
+* <b>size</b>: A 1-D int32 Tensor of 2 elements: `new_height, new_width`. The
+ new size for the images.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor` of type `float32`. 4-D with shape
+ `[batch, new_height, new_width, channels]`.
+
+
+- - -
+
+### tf.image.resize_nearest_neighbor(images, size, name=None) <div class="md-anchor" id="resize_nearest_neighbor">{#resize_nearest_neighbor}</div>
+
+Resize `images` to `size` using nearest neighbor interpolation.
+
+Input images can be of different types but output images are always float.
+
+##### Args:
+
+
+* <b>images</b>: A `Tensor`. Must be one of the following types: `uint8`, `int8`, `int32`, `float32`, `float64`.
+ 4-D with shape `[batch, height, width, channels]`.
+* <b>size</b>: A 1-D int32 Tensor of 2 elements: `new_height, new_width`. The
+ new size for the images.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `images`. 4-D with shape
+ `[batch, new_height, new_width, channels]`.
+
+
+
+
+## Cropping. <div class="md-anchor" id="AUTOGENERATED-cropping.">{#AUTOGENERATED-cropping.}</div>
+
+- - -
+
+### tf.image.resize_image_with_crop_or_pad(image, target_height, target_width) <div class="md-anchor" id="resize_image_with_crop_or_pad">{#resize_image_with_crop_or_pad}</div>
+
+Crops and/or pads an image to a target width and height.
+
+Resizes an image to a target width and height by either centrally
+cropping the image or padding it evenly with zeros.
+
+If `width` or `height` is greater than the specified `target_width` or
+`target_height` respectively, this op centrally crops along that dimension.
+If `width` or `height` is smaller than the specified `target_width` or
+`target_height` respectively, this op centrally pads with 0 along that
+dimension.
+
+##### Args:
+
+
+* <b>image</b>: 3-D tensor of shape [height, width, channels]
+* <b>target_height</b>: Target height.
+* <b>target_width</b>: Target width.
+
+##### Raises:
+
+
+* <b>ValueError</b>: if `target_height` or `target_width` are zero or negative.
+
+##### Returns:
+
+ Cropped and/or padded image of shape
+ `[target_height, target_width, channels]`
+
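+For example, a sketch that produces fixed-size square images:
+
+```python
+jpeg_bytes = tf.placeholder(tf.string)
+image = tf.image.decode_jpeg(jpeg_bytes, channels=3)
+square = tf.image.resize_image_with_crop_or_pad(image, 299, 299)
+# `square` has shape [299, 299, 3]: larger inputs are centrally cropped,
+# smaller ones are centrally zero-padded.
+```
+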
+
+
+- - -
+
+### tf.image.pad_to_bounding_box(image, offset_height, offset_width, target_height, target_width) <div class="md-anchor" id="pad_to_bounding_box">{#pad_to_bounding_box}</div>
+
+Pad `image` with zeros to the specified `target_height` and `target_width`.
+
+Adds `offset_height` rows of zeros on top, `offset_width` columns of
+zeros on the left, and then pads the image on the bottom and right
+with zeros until it has dimensions `target_height`, `target_width`.
+
+This op does nothing if `offset_*` is zero and the image already has size
+`target_height` by `target_width`.
+
+##### Args:
+
+
+* <b>image</b>: 3-D tensor with shape `[height, width, channels]`
+* <b>offset_height</b>: Number of rows of zeros to add on top.
+* <b>offset_width</b>: Number of columns of zeros to add on the left.
+* <b>target_height</b>: Height of output image.
+* <b>target_width</b>: Width of output image.
+
+##### Returns:
+
+ 3-D tensor of shape `[target_height, target_width, channels]`
+
+##### Raises:
+
+
+* <b>ValueError</b>: If the shape of `image` is incompatible with the `offset_*` or
+ `target_*` arguments
+
+
+- - -
+
+### tf.image.crop_to_bounding_box(image, offset_height, offset_width, target_height, target_width) <div class="md-anchor" id="crop_to_bounding_box">{#crop_to_bounding_box}</div>
+
+Crops an image to a specified bounding box.
+
+This op cuts a rectangular part out of `image`. The top-left corner of the
+returned image is at `offset_height, offset_width` in `image`, and its
+lower-right corner is at
+`offset_height + target_height, offset_width + target_width`.
+
+##### Args:
+
+
+* <b>image</b>: 3-D tensor with shape `[height, width, channels]`
+* <b>offset_height</b>: Vertical coordinate of the top-left corner of the result in
+ the input.
+* <b>offset_width</b>: Horizontal coordinate of the top-left corner of the result in
+ the input.
+* <b>target_height</b>: Height of the result.
+* <b>target_width</b>: Width of the result.
+
+##### Returns:
+
+ 3-D tensor of image with shape `[target_height, target_width, channels]`
+
+##### Raises:
+
+
+* <b>ValueError</b>: If the shape of `image` is incompatible with the `offset_*` or
+ `target_*` arguments
+
+
+- - -
+
+### tf.image.random_crop(image, size, seed=None, name=None) <div class="md-anchor" id="random_crop">{#random_crop}</div>
+
+Randomly crops `image` to size `[target_height, target_width]`.
+
+The offset of the output within `image` is uniformly random. `image` always
+fully contains the result.
+
+##### Args:
+
+
+* <b>image</b>: 3-D tensor of shape `[height, width, channels]`
+* <b>size</b>: 1-D tensor with two elements, specifying target `[height, width]`
+* <b>seed</b>: A Python integer. Used to create a random seed.
+ See [`set_random_seed`](constant_op.md#set_random_seed) for behavior.
+* <b>name</b>: A name for this operation (optional).
+
+##### Returns:
+
+ A cropped 3-D tensor of shape `[target_height, target_width, channels]`.
+
+
+- - -
+
+### tf.image.extract_glimpse(input, size, offsets, centered=None, normalized=None, uniform_noise=None, name=None) <div class="md-anchor" id="extract_glimpse">{#extract_glimpse}</div>
+
+Extracts a glimpse from the input tensor.
+
+Returns a set of windows called glimpses extracted at location `offsets`
+from the input tensor. If a window only partially overlaps the input, the
+non-overlapping areas are filled with random noise.
+
+The result is a 4-D tensor of shape `[batch_size, glimpse_height,
+glimpse_width, channels]`. The channels and batch dimensions are the same as those
+of the input tensor. The height and width of the output windows are
+specified in the `size` parameter.
+
+The arguments `centered` and `normalized` control how the windows are built:
+* If the coordinates are normalized but not centered, 0.0 and 1.0
+ correspond to the minimum and maximum of each height and width dimension.
+* If the coordinates are both normalized and centered, they range from -1.0 to
+ 1.0. The coordinates (-1.0, -1.0) correspond to the upper left corner, the
+ lower right corner is located at (1.0, 1.0) and the center is at (0, 0).
+* If the coordinates are not normalized, they are interpreted as numbers of
+  pixels.
+
+##### Args:
+
+
+* <b>input</b>: A `Tensor` of type `float32`.
+ A 4-D float tensor of shape `[batch_size, height, width, channels]`.
+* <b>size</b>: A `Tensor` of type `int32`.
+  A 1-D tensor of 2 elements containing the size of the glimpses to extract.
+  The glimpse height must be specified first, followed by the glimpse width.
+* <b>offsets</b>: A `Tensor` of type `float32`.
+  A 2-D tensor of shape `[batch_size, 2]` containing the x, y
+  locations of the center of each window.
+* <b>centered</b>: An optional `bool`. Defaults to `True`.
+  Indicates whether the offset coordinates are centered relative to
+  the image, in which case the (0, 0) offset is relative to the center of the
+  input images. If false, the (0, 0) offset corresponds to the upper left
+  corner of the input images.
+* <b>normalized</b>: An optional `bool`. Defaults to `True`.
+  Indicates whether the offset coordinates are normalized.
+* <b>uniform_noise</b>: An optional `bool`. Defaults to `True`.
+  Indicates whether the noise should be generated using a uniform
+  distribution or a Gaussian distribution.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor` of type `float32`.
+ A tensor representing the glimpses `[batch_size, glimpse_height,
+ glimpse_width, channels]`.
+
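+For example, a minimal sketch using the default `centered` and `normalized`
+settings, so the `(0.0, 0.0)` offset selects the image center (values are
+illustrative):
+
+```python
+import tensorflow as tf
+
+images = tf.ones([1, 16, 16, 1])     # a batch with one 16x16 image
+offsets = tf.constant([[0.0, 0.0]])  # one offset per image in the batch
+glimpses = tf.image.extract_glimpse(images, [5, 5], offsets)
+
+with tf.Session() as sess:
+  print sess.run(glimpses).shape  # (1, 5, 5, 1)
+```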
+
+
+## Flipping and Transposing. <div class="md-anchor" id="AUTOGENERATED-flipping-and-transposing.">{#AUTOGENERATED-flipping-and-transposing.}</div>
+
+- - -
+
+### tf.image.flip_up_down(image) <div class="md-anchor" id="flip_up_down">{#flip_up_down}</div>
+
+Flip an image vertically (upside down).
+
+Outputs the contents of `image` flipped along the first dimension, which is
+`height`.
+
+See also `reverse()`.
+
+##### Args:
+
+
+* <b>image</b>: A 3-D tensor of shape `[height, width, channels]`.
+
+##### Returns:
+
+ A 3-D tensor of the same type and shape as `image`.
+
+##### Raises:
+
+
+* <b>ValueError</b>: if the shape of `image` is not supported.
+
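+For example, a small sketch with illustrative values:
+
+```python
+import tensorflow as tf
+
+image = tf.constant([[[1.0], [2.0]],
+                     [[3.0], [4.0]]])  # 2x2, one channel
+flipped = tf.image.flip_up_down(image)
+
+with tf.Session() as sess:
+  print sess.run(flipped)[:, :, 0]
+  # [[ 3.  4.]
+  #  [ 1.  2.]]
+```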
+
+- - -
+
+### tf.image.random_flip_up_down(image, seed=None) <div class="md-anchor" id="random_flip_up_down">{#random_flip_up_down}</div>
+
+Randomly flips an image vertically (upside down).
+
+With a 1 in 2 chance, outputs the contents of `image` flipped along the first
+dimension, which is `height`. Otherwise, outputs the image as-is.
+
+##### Args:
+
+
+* <b>image</b>: A 3-D tensor of shape `[height, width, channels]`.
+* <b>seed</b>: A Python integer. Used to create a random seed.
+ See [`set_random_seed`](constant_op.md#set_random_seed) for behavior.
+
+##### Returns:
+
+ A 3-D tensor of the same type and shape as `image`.
+
+##### Raises:
+
+
+* <b>ValueError</b>: if the shape of `image` is not supported.
+
+
+
+- - -
+
+### tf.image.flip_left_right(image) <div class="md-anchor" id="flip_left_right">{#flip_left_right}</div>
+
+Flip an image horizontally (left to right).
+
+Outputs the contents of `image` flipped along the second dimension, which is
+`width`.
+
+See also `reverse()`.
+
+##### Args:
+
+
+* <b>image</b>: A 3-D tensor of shape `[height, width, channels]`.
+
+##### Returns:
+
+ A 3-D tensor of the same type and shape as `image`.
+
+##### Raises:
+
+
+* <b>ValueError</b>: if the shape of `image` is not supported.
+
+
+- - -
+
+### tf.image.random_flip_left_right(image, seed=None) <div class="md-anchor" id="random_flip_left_right">{#random_flip_left_right}</div>
+
+Randomly flip an image horizontally (left to right).
+
+With a 1 in 2 chance, outputs the contents of `image` flipped along the
+second dimension, which is `width`. Otherwise, outputs the image as-is.
+
+##### Args:
+
+
+* <b>image</b>: A 3-D tensor of shape `[height, width, channels]`.
+* <b>seed</b>: A Python integer. Used to create a random seed.
+ See [`set_random_seed`](constant_op.md#set_random_seed) for behavior.
+
+##### Returns:
+
+ A 3-D tensor of the same type and shape as `image`.
+
+##### Raises:
+
+
+* <b>ValueError</b>: if the shape of `image` is not supported.
+
+
+
+- - -
+
+### tf.image.transpose_image(image) <div class="md-anchor" id="transpose_image">{#transpose_image}</div>
+
+Transpose an image by swapping the first and second dimensions.
+
+See also `transpose()`.
+
+##### Args:
+
+
+* <b>image</b>: 3-D tensor of shape `[height, width, channels]`
+
+##### Returns:
+
+ A 3-D tensor of shape `[width, height, channels]`
+
+##### Raises:
+
+
+* <b>ValueError</b>: if the shape of `image` is not supported.
+
+
+
+## Image Adjustments. <div class="md-anchor" id="AUTOGENERATED-image-adjustments.">{#AUTOGENERATED-image-adjustments.}</div>
+
+TensorFlow provides functions to adjust images in various ways: brightness,
+contrast, hue, and saturation. Each adjustment can be done with predefined
+parameters or with random parameters picked from predefined intervals. Random
+adjustments are often useful to expand a training set and reduce overfitting.
+
+- - -
+
+### tf.image.adjust_brightness(image, delta, min_value=None, max_value=None) <div class="md-anchor" id="adjust_brightness">{#adjust_brightness}</div>
+
+Adjust the brightness of RGB or Grayscale images.
+
+The value `delta` is added to all components of the tensor `image`. `image`
+and `delta` are cast to `float` before adding, and the resulting values are
+clamped to `[min_value, max_value]`. Finally, the result is cast back to
+`image.dtype`.
+
+If `min_value` or `max_value` is not given, it is set to the minimum or
+maximum allowed value for `image.dtype`, respectively.
+
+##### Args:
+
+
+* <b>image</b>: A tensor.
+* <b>delta</b>: A scalar. Amount to add to the pixel values.
+* <b>min_value</b>: Minimum value for output.
+* <b>max_value</b>: Maximum value for output.
+
+##### Returns:
+
+ A tensor of the same shape and type as `image`.
+
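+For example, a small sketch with illustrative `uint8` values:
+
+```python
+import tensorflow as tf
+
+image = tf.constant([[[100], [200]]], dtype=tf.uint8)  # a 1x2 image
+brighter = tf.image.adjust_brightness(image, 60)
+
+with tf.Session() as sess:
+  print sess.run(brighter)[0, :, 0]  # [160 255]: 260 clamps to 255
+```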
+
+- - -
+
+### tf.image.random_brightness(image, max_delta, seed=None) <div class="md-anchor" id="random_brightness">{#random_brightness}</div>
+
+Adjust the brightness of images by a random factor.
+
+Equivalent to `adjust_brightness()` using a `delta` randomly picked in the
+interval `[-max_delta, max_delta)`.
+
+Note that `delta` is picked as a float. Because, for integer-type images,
+the brightness-adjusted result is rounded before casting, integer images may
+have modifications in the range `[-max_delta, max_delta]`.
+
+##### Args:
+
+
+* <b>image</b>: 3-D tensor of shape `[height, width, channels]`.
+* <b>max_delta</b>: float, must be non-negative.
+* <b>seed</b>: A Python integer. Used to create a random seed.
+ See [`set_random_seed`](constant_op.md#set_random_seed) for behavior.
+
+##### Returns:
+
+ 3-D tensor of images of shape `[height, width, channels]`
+
+##### Raises:
+
+
+* <b>ValueError</b>: if `max_delta` is negative.
+
+
+
+- - -
+
+### tf.image.adjust_contrast(images, contrast_factor, min_value=None, max_value=None) <div class="md-anchor" id="adjust_contrast">{#adjust_contrast}</div>
+
+Adjust contrast of RGB or grayscale images.
+
+`images` is a tensor of at least 3 dimensions. The last 3 dimensions are
+interpreted as `[height, width, channels]`. The other dimensions only
+represent a collection of images, such as `[batch, height, width, channels]`.
+
+Contrast is adjusted independently for each channel of each image.
+
+For each channel, this Op first computes the mean of the image pixels in the
+channel and then adjusts each component `x` of each pixel to
+`(x - mean) * contrast_factor + mean`.
+
+The adjusted values are then clipped to fit in the `[min_value, max_value]`
+interval. If `min_value` or `max_value` is not given, it is replaced with the
+minimum or maximum value for the data type of `images`, respectively.
+
+The contrast-adjusted image is always computed as `float`, and it is
+cast back to its original type after clipping.
+
+##### Args:
+
+
+* <b>images</b>: Images to adjust. At least 3-D.
+* <b>contrast_factor</b>: A float multiplier for adjusting contrast.
+* <b>min_value</b>: Minimum value for clipping the adjusted pixels.
+* <b>max_value</b>: Maximum value for clipping the adjusted pixels.
+
+##### Returns:
+
+  The contrast-adjusted image or images.
+
+##### Raises:
+
+
+* <b>ValueError</b>: if the arguments are invalid.
+
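+For example, a small sketch of the formula above (illustrative values):
+
+```python
+import tensorflow as tf
+
+# The channel mean is 2.0, so each component moves to (x - 2.0) * 2.0 + 2.0.
+image = tf.constant([[[1.0], [3.0]]])
+adjusted = tf.image.adjust_contrast(image, 2.0)
+
+with tf.Session() as sess:
+  print sess.run(adjusted)[0, :, 0]  # [ 0.  4.]
+```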
+
+- - -
+
+### tf.image.random_contrast(image, lower, upper, seed=None) <div class="md-anchor" id="random_contrast">{#random_contrast}</div>
+
+Adjust the contrast of an image by a random factor.
+
+Equivalent to `adjust_contrast()` but uses a `contrast_factor` randomly
+picked in the interval `[lower, upper]`.
+
+##### Args:
+
+
+* <b>image</b>: 3-D tensor of shape `[height, width, channels]`.
+* <b>lower</b>: float. Lower bound for the random contrast factor.
+* <b>upper</b>: float. Upper bound for the random contrast factor.
+* <b>seed</b>: A Python integer. Used to create a random seed.
+ See [`set_random_seed`](constant_op.md#set_random_seed) for behavior.
+
+##### Returns:
+
+ 3-D tensor of shape `[height, width, channels]`.
+
+##### Raises:
+
+
+* <b>ValueError</b>: if `upper <= lower` or if `lower < 0`.
+
+
+
+- - -
+
+### tf.image.per_image_whitening(image) <div class="md-anchor" id="per_image_whitening">{#per_image_whitening}</div>
+
+Linearly scales `image` to have zero mean and unit variance.
+
+This op computes `(x - mean) / adjusted_stddev`, where `mean` is the average
+of all values in `image`, and
+`adjusted_stddev = max(stddev, 1.0/sqrt(image.NumElements()))`.
+
+`stddev` is the standard deviation of all values in `image`. It is capped
+away from zero to protect against division by 0 when handling uniform images.
+
+Note that this implementation is limited:
+* It only whitens based on the statistics of an individual image.
+* It does not take into account the covariance structure.
+
+##### Args:
+
+
+* <b>image</b>: 3-D tensor of shape `[height, width, channels]`.
+
+##### Returns:
+
+ The whitened image with same shape as `image`.
+
+##### Raises:
+
+
+* <b>ValueError</b>: if the shape of `image` is incompatible with this function.
+
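+For example, a quick sketch that checks the output statistics (the input
+values are random and illustrative):
+
+```python
+import numpy as np
+import tensorflow as tf
+
+image = tf.constant(np.random.rand(4, 4, 3), dtype=tf.float32)
+whitened = tf.image.per_image_whitening(image)
+
+with tf.Session() as sess:
+  out = sess.run(whitened)
+  print out.mean(), out.std()  # approximately 0.0 and 1.0
+```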
+
diff --git a/tensorflow/g3doc/api_docs/python/index.md b/tensorflow/g3doc/api_docs/python/index.md
new file mode 100644
index 0000000000..72c0a401ef
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/index.md
@@ -0,0 +1,352 @@
+<!-- This file is machine generated: DO NOT EDIT! -->
+
+# TensorFlow Python reference documentation
+
+* <b>[Building Graphs](framework.md)</b>: [class DType](framework.md#DType),
+ [class Dimension](framework.md#Dimension),
+ [class Graph](framework.md#Graph),
+ [class GraphKeys](framework.md#GraphKeys),
+ [NoGradient](framework.md#NoGradient),
+ [class Operation](framework.md#Operation),
+ [class RegisterGradient](framework.md#RegisterGradient),
+ [class RegisterShape](framework.md#RegisterShape),
+ [class Tensor](framework.md#Tensor),
+ [class TensorShape](framework.md#TensorShape),
+ [add_to_collection](framework.md#add_to_collection),
+ [as_dtype](framework.md#as_dtype),
+ [control_dependencies](framework.md#control_dependencies),
+ [convert_to_tensor](framework.md#convert_to_tensor),
+ [device](framework.md#device),
+ [get_collection](framework.md#get_collection),
+ [get_default_graph](framework.md#get_default_graph),
+ [get_seed](framework.md#get_seed),
+ [import_graph_def](framework.md#import_graph_def),
+ [name_scope](framework.md#name_scope),
+ [op_scope](framework.md#op_scope)
+
+* <b>[Constants, Sequences, and Random Values](constant_op.md)</b>: [constant](constant_op.md#constant),
+ [fill](constant_op.md#fill),
+ [linspace](constant_op.md#linspace),
+ [ones](constant_op.md#ones),
+ [ones_like](constant_op.md#ones_like),
+ [random_normal](constant_op.md#random_normal),
+ [random_shuffle](constant_op.md#random_shuffle),
+ [random_uniform](constant_op.md#random_uniform),
+ [range](constant_op.md#range),
+ [set_random_seed](constant_op.md#set_random_seed),
+ [truncated_normal](constant_op.md#truncated_normal),
+ [zeros](constant_op.md#zeros),
+ [zeros_like](constant_op.md#zeros_like)
+
+* <b>[Variables](state_ops.md)</b>: [class IndexedSlices](state_ops.md#IndexedSlices),
+ [class Saver](state_ops.md#Saver),
+ [class Variable](state_ops.md#Variable),
+ [all_variables](state_ops.md#all_variables),
+ [assert_variables_initialized](state_ops.md#assert_variables_initialized),
+ [assign](state_ops.md#assign),
+ [assign_add](state_ops.md#assign_add),
+ [assign_sub](state_ops.md#assign_sub),
+ [constant_initializer](state_ops.md#constant_initializer),
+ [count_up_to](state_ops.md#count_up_to),
+ [device](state_ops.md#device),
+ [get_checkpoint_state](state_ops.md#get_checkpoint_state),
+ [get_variable](state_ops.md#get_variable),
+ [get_variable_scope](state_ops.md#get_variable_scope),
+ [initialize_all_variables](state_ops.md#initialize_all_variables),
+ [initialize_variables](state_ops.md#initialize_variables),
+ [latest_checkpoint](state_ops.md#latest_checkpoint),
+ [random_normal_initializer](state_ops.md#random_normal_initializer),
+ [random_uniform_initializer](state_ops.md#random_uniform_initializer),
+ [scatter_add](state_ops.md#scatter_add),
+ [scatter_sub](state_ops.md#scatter_sub),
+ [scatter_update](state_ops.md#scatter_update),
+ [sparse_mask](state_ops.md#sparse_mask),
+ [trainable_variables](state_ops.md#trainable_variables),
+ [truncated_normal_initializer](state_ops.md#truncated_normal_initializer),
+ [uniform_unit_scaling_initializer](state_ops.md#uniform_unit_scaling_initializer),
+ [update_checkpoint_state](state_ops.md#update_checkpoint_state),
+ [variable_scope](state_ops.md#variable_scope),
+ [zeros_initializer](state_ops.md#zeros_initializer)
+
+* <b>[Tensor Transformations](array_ops.md)</b>: [cast](array_ops.md#cast),
+ [concat](array_ops.md#concat),
+ [dynamic_partition](array_ops.md#dynamic_partition),
+ [dynamic_stitch](array_ops.md#dynamic_stitch),
+ [expand_dims](array_ops.md#expand_dims),
+ [gather](array_ops.md#gather),
+ [pack](array_ops.md#pack),
+ [pad](array_ops.md#pad),
+ [rank](array_ops.md#rank),
+ [reshape](array_ops.md#reshape),
+ [reverse](array_ops.md#reverse),
+ [reverse_sequence](array_ops.md#reverse_sequence),
+ [shape](array_ops.md#shape),
+ [size](array_ops.md#size),
+ [slice](array_ops.md#slice),
+ [split](array_ops.md#split),
+ [squeeze](array_ops.md#squeeze),
+ [string_to_number](array_ops.md#string_to_number),
+ [tile](array_ops.md#tile),
+ [to_bfloat16](array_ops.md#to_bfloat16),
+ [to_double](array_ops.md#to_double),
+ [to_float](array_ops.md#to_float),
+ [to_int32](array_ops.md#to_int32),
+ [to_int64](array_ops.md#to_int64),
+ [transpose](array_ops.md#transpose),
+ [unpack](array_ops.md#unpack)
+
+* <b>[Math](math_ops.md)</b>: [abs](math_ops.md#abs),
+ [accumulate_n](math_ops.md#accumulate_n),
+ [add](math_ops.md#add),
+ [add_n](math_ops.md#add_n),
+ [argmax](math_ops.md#argmax),
+ [argmin](math_ops.md#argmin),
+ [batch_cholesky](math_ops.md#batch_cholesky),
+ [batch_matmul](math_ops.md#batch_matmul),
+ [batch_matrix_determinant](math_ops.md#batch_matrix_determinant),
+ [batch_matrix_inverse](math_ops.md#batch_matrix_inverse),
+ [ceil](math_ops.md#ceil),
+ [cholesky](math_ops.md#cholesky),
+ [complex](math_ops.md#complex),
+ [complex_abs](math_ops.md#complex_abs),
+ [conj](math_ops.md#conj),
+ [cos](math_ops.md#cos),
+ [diag](math_ops.md#diag),
+ [div](math_ops.md#div),
+ [edit_distance](math_ops.md#edit_distance),
+ [exp](math_ops.md#exp),
+ [floor](math_ops.md#floor),
+ [imag](math_ops.md#imag),
+ [inv](math_ops.md#inv),
+ [invert_permutation](math_ops.md#invert_permutation),
+ [listdiff](math_ops.md#listdiff),
+ [log](math_ops.md#log),
+ [matmul](math_ops.md#matmul),
+ [matrix_determinant](math_ops.md#matrix_determinant),
+ [matrix_inverse](math_ops.md#matrix_inverse),
+ [maximum](math_ops.md#maximum),
+ [minimum](math_ops.md#minimum),
+ [mod](math_ops.md#mod),
+ [mul](math_ops.md#mul),
+ [neg](math_ops.md#neg),
+ [pow](math_ops.md#pow),
+ [real](math_ops.md#real),
+ [reduce_all](math_ops.md#reduce_all),
+ [reduce_any](math_ops.md#reduce_any),
+ [reduce_max](math_ops.md#reduce_max),
+ [reduce_mean](math_ops.md#reduce_mean),
+ [reduce_min](math_ops.md#reduce_min),
+ [reduce_prod](math_ops.md#reduce_prod),
+ [reduce_sum](math_ops.md#reduce_sum),
+ [round](math_ops.md#round),
+ [rsqrt](math_ops.md#rsqrt),
+ [segment_max](math_ops.md#segment_max),
+ [segment_mean](math_ops.md#segment_mean),
+ [segment_min](math_ops.md#segment_min),
+ [segment_prod](math_ops.md#segment_prod),
+ [segment_sum](math_ops.md#segment_sum),
+ [sign](math_ops.md#sign),
+ [sin](math_ops.md#sin),
+ [sparse_segment_mean](math_ops.md#sparse_segment_mean),
+ [sparse_segment_sum](math_ops.md#sparse_segment_sum),
+ [sqrt](math_ops.md#sqrt),
+ [square](math_ops.md#square),
+ [sub](math_ops.md#sub),
+ [transpose](math_ops.md#transpose),
+ [unique](math_ops.md#unique),
+ [unsorted_segment_sum](math_ops.md#unsorted_segment_sum),
+ [where](math_ops.md#where)
+
+* <b>[Control Flow](control_flow_ops.md)</b>: [Assert](control_flow_ops.md#Assert),
+ [Print](control_flow_ops.md#Print),
+ [add_check_numerics_ops](control_flow_ops.md#add_check_numerics_ops),
+ [check_numerics](control_flow_ops.md#check_numerics),
+ [count_up_to](control_flow_ops.md#count_up_to),
+ [equal](control_flow_ops.md#equal),
+ [greater](control_flow_ops.md#greater),
+ [greater_equal](control_flow_ops.md#greater_equal),
+ [group](control_flow_ops.md#group),
+ [identity](control_flow_ops.md#identity),
+ [is_finite](control_flow_ops.md#is_finite),
+ [is_inf](control_flow_ops.md#is_inf),
+ [is_nan](control_flow_ops.md#is_nan),
+ [less](control_flow_ops.md#less),
+ [less_equal](control_flow_ops.md#less_equal),
+ [logical_and](control_flow_ops.md#logical_and),
+ [logical_not](control_flow_ops.md#logical_not),
+ [logical_or](control_flow_ops.md#logical_or),
+ [logical_xor](control_flow_ops.md#logical_xor),
+ [no_op](control_flow_ops.md#no_op),
+ [not_equal](control_flow_ops.md#not_equal),
+ [select](control_flow_ops.md#select),
+ [tuple](control_flow_ops.md#tuple),
+ [verify_tensor_all_finite](control_flow_ops.md#verify_tensor_all_finite),
+ [where](control_flow_ops.md#where)
+
+* <b>[Images](image.md)</b>: [adjust_brightness](image.md#adjust_brightness),
+ [adjust_contrast](image.md#adjust_contrast),
+ [crop_to_bounding_box](image.md#crop_to_bounding_box),
+ [decode_jpeg](image.md#decode_jpeg),
+ [decode_png](image.md#decode_png),
+ [encode_jpeg](image.md#encode_jpeg),
+ [encode_png](image.md#encode_png),
+ [extract_glimpse](image.md#extract_glimpse),
+ [flip_left_right](image.md#flip_left_right),
+ [flip_up_down](image.md#flip_up_down),
+ [pad_to_bounding_box](image.md#pad_to_bounding_box),
+ [per_image_whitening](image.md#per_image_whitening),
+ [random_brightness](image.md#random_brightness),
+ [random_contrast](image.md#random_contrast),
+ [random_crop](image.md#random_crop),
+ [random_flip_left_right](image.md#random_flip_left_right),
+ [random_flip_up_down](image.md#random_flip_up_down),
+ [resize_area](image.md#resize_area),
+ [resize_bicubic](image.md#resize_bicubic),
+ [resize_bilinear](image.md#resize_bilinear),
+ [resize_image_with_crop_or_pad](image.md#resize_image_with_crop_or_pad),
+ [resize_images](image.md#resize_images),
+ [resize_nearest_neighbor](image.md#resize_nearest_neighbor),
+ [transpose_image](image.md#transpose_image)
+
+* <b>[Sparse Tensors](sparse_ops.md)</b>: [class SparseTensor](sparse_ops.md#SparseTensor),
+ [class SparseTensorValue](sparse_ops.md#SparseTensorValue),
+ [shape](sparse_ops.md#shape),
+ [sparse_concat](sparse_ops.md#sparse_concat),
+ [sparse_fill_empty_rows](sparse_ops.md#sparse_fill_empty_rows),
+ [sparse_reorder](sparse_ops.md#sparse_reorder),
+ [sparse_retain](sparse_ops.md#sparse_retain),
+ [sparse_tensor_to_dense](sparse_ops.md#sparse_tensor_to_dense),
+ [sparse_to_dense](sparse_ops.md#sparse_to_dense),
+ [sparse_to_indicator](sparse_ops.md#sparse_to_indicator)
+
+* <b>[Inputs and Readers](io_ops.md)</b>: [class FIFOQueue](io_ops.md#FIFOQueue),
+ [class FixedLengthRecordReader](io_ops.md#FixedLengthRecordReader),
+ [class IdentityReader](io_ops.md#IdentityReader),
+ [class QueueBase](io_ops.md#QueueBase),
+ [class RandomShuffleQueue](io_ops.md#RandomShuffleQueue),
+ [class ReaderBase](io_ops.md#ReaderBase),
+ [class TFRecordReader](io_ops.md#TFRecordReader),
+ [class TextLineReader](io_ops.md#TextLineReader),
+ [class WholeFileReader](io_ops.md#WholeFileReader),
+ [batch](io_ops.md#batch),
+ [batch_join](io_ops.md#batch_join),
+ [decode_csv](io_ops.md#decode_csv),
+ [decode_raw](io_ops.md#decode_raw),
+ [limit_epochs](io_ops.md#limit_epochs),
+ [match_filenames_once](io_ops.md#match_filenames_once),
+ [matching_files](io_ops.md#matching_files),
+ [parse_example](io_ops.md#parse_example),
+ [parse_single_example](io_ops.md#parse_single_example),
+ [placeholder](io_ops.md#placeholder),
+ [range_input_producer](io_ops.md#range_input_producer),
+ [read_file](io_ops.md#read_file),
+ [shuffle_batch](io_ops.md#shuffle_batch),
+ [shuffle_batch_join](io_ops.md#shuffle_batch_join),
+ [size](io_ops.md#size),
+ [slice_input_producer](io_ops.md#slice_input_producer),
+ [string_input_producer](io_ops.md#string_input_producer)
+
+* <b>[Data IO (Python functions)](python_io.md)</b>: [class TFRecordWriter](python_io.md#TFRecordWriter),
+ [tf_record_iterator](python_io.md#tf_record_iterator)
+
+* <b>[Neural Network](nn.md)</b>: [avg_pool](nn.md#avg_pool),
+ [bias_add](nn.md#bias_add),
+ [compute_accidental_hits](nn.md#compute_accidental_hits),
+ [conv2d](nn.md#conv2d),
+ [depthwise_conv2d](nn.md#depthwise_conv2d),
+ [dropout](nn.md#dropout),
+ [embedding_lookup](nn.md#embedding_lookup),
+ [embedding_lookup_sparse](nn.md#embedding_lookup_sparse),
+ [fixed_unigram_candidate_sampler](nn.md#fixed_unigram_candidate_sampler),
+ [in_top_k](nn.md#in_top_k),
+ [l2_loss](nn.md#l2_loss),
+ [l2_normalize](nn.md#l2_normalize),
+ [learned_unigram_candidate_sampler](nn.md#learned_unigram_candidate_sampler),
+ [local_response_normalization](nn.md#local_response_normalization),
+ [log_uniform_candidate_sampler](nn.md#log_uniform_candidate_sampler),
+ [max_pool](nn.md#max_pool),
+ [max_pool_with_argmax](nn.md#max_pool_with_argmax),
+ [moments](nn.md#moments),
+ [nce_loss](nn.md#nce_loss),
+ [relu](nn.md#relu),
+ [relu6](nn.md#relu6),
+ [sampled_softmax_loss](nn.md#sampled_softmax_loss),
+ [separable_conv2d](nn.md#separable_conv2d),
+ [sigmoid](nn.md#sigmoid),
+ [sigmoid_cross_entropy_with_logits](nn.md#sigmoid_cross_entropy_with_logits),
+ [softmax](nn.md#softmax),
+ [softmax_cross_entropy_with_logits](nn.md#softmax_cross_entropy_with_logits),
+ [softplus](nn.md#softplus),
+ [tanh](nn.md#tanh),
+ [top_k](nn.md#top_k),
+ [uniform_candidate_sampler](nn.md#uniform_candidate_sampler)
+
+* <b>[Running Graphs](client.md)</b>: [class AbortedError](client.md#AbortedError),
+ [class AlreadyExistsError](client.md#AlreadyExistsError),
+ [class CancelledError](client.md#CancelledError),
+ [class DataLossError](client.md#DataLossError),
+ [class DeadlineExceededError](client.md#DeadlineExceededError),
+ [class FailedPreconditionError](client.md#FailedPreconditionError),
+ [class InternalError](client.md#InternalError),
+ [class InvalidArgumentError](client.md#InvalidArgumentError),
+ [class NotFoundError](client.md#NotFoundError),
+ [class OpError](client.md#OpError),
+ [class OutOfRangeError](client.md#OutOfRangeError),
+ [class PermissionDeniedError](client.md#PermissionDeniedError),
+ [class ResourceExhaustedError](client.md#ResourceExhaustedError),
+ [class Session](client.md#Session),
+ [class UnauthenticatedError](client.md#UnauthenticatedError),
+ [class UnavailableError](client.md#UnavailableError),
+ [class UnimplementedError](client.md#UnimplementedError),
+ [class UnknownError](client.md#UnknownError),
+ [get_default_session](client.md#get_default_session)
+
+* <b>[Training](train.md)</b>: [class AdagradOptimizer](train.md#AdagradOptimizer),
+ [class AdamOptimizer](train.md#AdamOptimizer),
+ [class AggregationMethod](train.md#AggregationMethod),
+ [class Coordinator](train.md#Coordinator),
+ [class ExponentialMovingAverage](train.md#ExponentialMovingAverage),
+ [class FtrlOptimizer](train.md#FtrlOptimizer),
+ [class GradientDescentOptimizer](train.md#GradientDescentOptimizer),
+ [class MomentumOptimizer](train.md#MomentumOptimizer),
+ [class Optimizer](train.md#Optimizer),
+ [class QueueRunner](train.md#QueueRunner),
+ [class RMSPropOptimizer](train.md#RMSPropOptimizer),
+ [class SummaryWriter](train.md#SummaryWriter),
+ [add_queue_runner](train.md#add_queue_runner),
+ [clip_by_average_norm](train.md#clip_by_average_norm),
+ [clip_by_global_norm](train.md#clip_by_global_norm),
+ [clip_by_norm](train.md#clip_by_norm),
+ [clip_by_value](train.md#clip_by_value),
+ [exponential_decay](train.md#exponential_decay),
+ [global_norm](train.md#global_norm),
+ [global_step](train.md#global_step),
+ [gradients](train.md#gradients),
+ [histogram_summary](train.md#histogram_summary),
+ [image_summary](train.md#image_summary),
+ [merge_all_summaries](train.md#merge_all_summaries),
+ [merge_summary](train.md#merge_summary),
+ [scalar_summary](train.md#scalar_summary),
+ [start_queue_runners](train.md#start_queue_runners),
+ [stop_gradient](train.md#stop_gradient),
+ [summary_iterator](train.md#summary_iterator),
+ [write_graph](train.md#write_graph),
+ [zero_fraction](train.md#zero_fraction)
+
+<div class="sections-order" style="display: none;">
+<!--
+<!-- framework.md -->
+<!-- constant_op.md -->
+<!-- state_ops.md -->
+<!-- array_ops.md -->
+<!-- math_ops.md -->
+<!-- control_flow_ops.md -->
+<!-- image.md -->
+<!-- sparse_ops.md -->
+<!-- io_ops.md -->
+<!-- python_io.md -->
+<!-- nn.md -->
+<!-- client.md -->
+<!-- train.md -->
+-->
+</div>
diff --git a/tensorflow/g3doc/api_docs/python/io_ops.md b/tensorflow/g3doc/api_docs/python/io_ops.md
new file mode 100644
index 0000000000..ab8c4aa146
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/io_ops.md
@@ -0,0 +1,1956 @@
+<!-- This file is machine generated: DO NOT EDIT! -->
+
+# Inputs and Readers
+<!-- TOC-BEGIN This section is generated by neural network: DO NOT EDIT! -->
+## Contents
+* [Placeholders](#AUTOGENERATED-placeholders)
+ * [tf.placeholder(dtype, shape=None, name=None)](#placeholder)
+* [Readers](#AUTOGENERATED-readers)
+ * [class tf.ReaderBase](#ReaderBase)
+ * [class tf.TextLineReader](#TextLineReader)
+ * [class tf.WholeFileReader](#WholeFileReader)
+ * [class tf.IdentityReader](#IdentityReader)
+ * [class tf.TFRecordReader](#TFRecordReader)
+ * [class tf.FixedLengthRecordReader](#FixedLengthRecordReader)
+* [Converting](#AUTOGENERATED-converting)
+ * [tf.decode_csv(records, record_defaults, field_delim=None, name=None)](#decode_csv)
+ * [tf.decode_raw(bytes, out_type, little_endian=None, name=None)](#decode_raw)
+ * [tf.parse_example(serialized, names=None, sparse_keys=None, sparse_types=None, dense_keys=None, dense_types=None, dense_defaults=None, dense_shapes=None, name='ParseExample')](#parse_example)
+ * [tf.parse_single_example(serialized, names=None, sparse_keys=None, sparse_types=None, dense_keys=None, dense_types=None, dense_defaults=None, dense_shapes=None, name='ParseSingleExample')](#parse_single_example)
+* [Queues](#AUTOGENERATED-queues)
+ * [class tf.QueueBase](#QueueBase)
+ * [class tf.FIFOQueue](#FIFOQueue)
+ * [class tf.RandomShuffleQueue](#RandomShuffleQueue)
+* [Dealing with the filesystem](#AUTOGENERATED-dealing-with-the-filesystem)
+ * [tf.matching_files(pattern, name=None)](#matching_files)
+ * [tf.read_file(filename, name=None)](#read_file)
+* [Input pipeline](#AUTOGENERATED-input-pipeline)
+ * [Beginning of an input pipeline](#AUTOGENERATED-beginning-of-an-input-pipeline)
+ * [tf.train.match_filenames_once(pattern, name=None)](#match_filenames_once)
+ * [tf.train.limit_epochs(tensor, num_epochs=None, name=None)](#limit_epochs)
+ * [tf.train.range_input_producer(limit, num_epochs=None, shuffle=True, seed=None, capacity=32, name=None)](#range_input_producer)
+ * [tf.train.slice_input_producer(tensor_list, num_epochs=None, shuffle=True, seed=None, capacity=32, name=None)](#slice_input_producer)
+ * [tf.train.string_input_producer(string_tensor, num_epochs=None, shuffle=True, seed=None, capacity=32, name=None)](#string_input_producer)
+ * [Batching at the end of an input pipeline](#AUTOGENERATED-batching-at-the-end-of-an-input-pipeline)
+ * [tf.train.batch(tensor_list, batch_size, num_threads=1, capacity=32, enqueue_many=False, shapes=None, name=None)](#batch)
+ * [tf.train.batch_join(tensor_list_list, batch_size, capacity=32, enqueue_many=False, shapes=None, name=None)](#batch_join)
+ * [tf.train.shuffle_batch(tensor_list, batch_size, capacity, min_after_dequeue, num_threads=1, seed=None, enqueue_many=False, shapes=None, name=None)](#shuffle_batch)
+ * [tf.train.shuffle_batch_join(tensor_list_list, batch_size, capacity, min_after_dequeue, seed=None, enqueue_many=False, shapes=None, name=None)](#shuffle_batch_join)
+
+
+<!-- TOC-END This section was generated by neural network, THANKS FOR READING! -->
+
+## Placeholders <div class="md-anchor" id="AUTOGENERATED-placeholders">{#AUTOGENERATED-placeholders}</div>
+
+TensorFlow provides a placeholder operation that must be fed with data
+on execution. For more info, see the section on [Feeding
+data](../../how_tos/reading_data/index.md#feeding).
+
+- - -
+
+### tf.placeholder(dtype, shape=None, name=None) <div class="md-anchor" id="placeholder">{#placeholder}</div>
+
+Inserts a placeholder for a tensor that will be always fed.
+
+**Important**: This tensor will produce an error if evaluated. Its value must
+be fed using the `feed_dict` optional argument to `Session.run()`,
+`Tensor.eval()`, or `Operation.run()`.
+
+For example:
+
+```python
+import numpy as np
+import tensorflow as tf
+
+x = tf.placeholder(tf.float32, shape=(1024, 1024))
+y = tf.matmul(x, x)
+
+with tf.Session() as sess:
+ print sess.run(y) # ERROR: will fail because x was not fed.
+
+ rand_array = np.random.rand(1024, 1024)
+ print sess.run(y, feed_dict={x: rand_array}) # Will succeed.
+```
+
+##### Args:
+
+
+* <b>dtype</b>: The type of elements in the tensor to be fed.
+* <b>shape</b>: The shape of the tensor to be fed (optional). If the shape is not
+ specified, you can feed a tensor of any shape.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor` that may be used as a handle for feeding a value, but not
+ evaluated directly.
+
+
+
+## Readers <div class="md-anchor" id="AUTOGENERATED-readers">{#AUTOGENERATED-readers}</div>
+
+TensorFlow provides a set of Reader classes for reading data formats.
+For more information on inputs and readers, see [Reading
+data](../../how_tos/reading_data/index.md).
+
+- - -
+
+### class tf.ReaderBase <div class="md-anchor" id="ReaderBase">{#ReaderBase}</div>
+
+Base class for different Reader types that produce a record every step.
+
+Conceptually, Readers convert string 'work units' into records (key,
+value pairs). Typically the 'work units' are filenames and the
+records are extracted from the contents of those files. We want a
+single record produced per step, but a work unit can correspond to
+many records.
+
+Therefore we introduce some decoupling using a queue. The queue
+contains the work units and the Reader dequeues from the queue when
+it is asked to produce a record (via `Read()`) but has finished the
+last work unit.
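+
+For example, a minimal pipeline sketch using a `TextLineReader` (the file
+names here are hypothetical):
+
+```python
+import tensorflow as tf
+
+# The queue holds the string 'work units' (file names).
+filename_queue = tf.train.string_input_producer(["file0.csv", "file1.csv"])
+reader = tf.TextLineReader()
+key, value = reader.read(filename_queue)  # one record per Read
+
+with tf.Session() as sess:
+  coord = tf.train.Coordinator()
+  threads = tf.train.start_queue_runners(sess=sess, coord=coord)
+  print sess.run([key, value])  # e.g. 'file0.csv:1' and the first line
+  coord.request_stop()
+  coord.join(threads)
+```
+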
+- - -
+
+#### tf.ReaderBase.__init__(reader_ref, supports_serialize=False) {#ReaderBase.__init__}
+
+Creates a new ReaderBase.
+
+##### Args:
+
+
+* <b>reader_ref</b>: The operation that implements the reader.
+* <b>supports_serialize</b>: True if the reader implementation can
+ serialize its state.
+
+
+- - -
+
+#### tf.ReaderBase.num_records_produced(name=None) {#ReaderBase.num_records_produced}
+
+Returns the number of records this reader has produced.
+
+This is the same as the number of Read executions that have
+succeeded.
+
+##### Args:
+
+
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ An int64 Tensor.
+
+
+- - -
+
+#### tf.ReaderBase.num_work_units_completed(name=None) {#ReaderBase.num_work_units_completed}
+
+Returns the number of work units this reader has finished processing.
+
+##### Args:
+
+
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ An int64 Tensor.
+
+
+- - -
+
+#### tf.ReaderBase.read(queue, name=None) {#ReaderBase.read}
+
+Returns the next record (key, value pair) produced by a reader.
+
+Will dequeue a work unit from queue if necessary (e.g. when the
+Reader needs to start reading from a new file since it has
+finished with the previous file).
+
+##### Args:
+
+
+* <b>queue</b>: A Queue or a mutable string Tensor representing a handle
+ to a Queue, with string work items.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A tuple of Tensors (key, value).
+
+* <b>key</b>: A string scalar Tensor.
+* <b>value</b>: A string scalar Tensor.
+
+
+- - -
+
+#### tf.ReaderBase.reader_ref {#ReaderBase.reader_ref}
+
+Op that implements the reader.
+
+- - -
+
+#### tf.ReaderBase.reset(name=None) {#ReaderBase.reset}
+
+Restore a reader to its initial clean state.
+
+##### Args:
+
+
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The created Operation.
+
+
+- - -
+
+#### tf.ReaderBase.restore_state(state, name=None) {#ReaderBase.restore_state}
+
+Restore a reader to a previously saved state.
+
+Not all Readers support being restored, so this can produce an
+Unimplemented error.
+
+##### Args:
+
+
+* <b>state</b>: A string Tensor.
+ Result of a SerializeState of a Reader with matching type.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The created Operation.
+
+
+- - -
+
+#### tf.ReaderBase.serialize_state(name=None) {#ReaderBase.serialize_state}
+
+Produce a string tensor that encodes the state of a reader.
+
+Not all Readers support being serialized, so this can produce an
+Unimplemented error.
+
+##### Args:
+
+
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A string Tensor.
+
+
+- - -
+
+#### tf.ReaderBase.supports_serialize {#ReaderBase.supports_serialize}
+
+Whether the Reader implementation can serialize its state.
+
+
+- - -
+
+### class tf.TextLineReader <div class="md-anchor" id="TextLineReader">{#TextLineReader}</div>
+
+A Reader that outputs the lines of a file delimited by newlines.
+
+Newlines are stripped from the output.
+See ReaderBase for supported methods.
+- - -
+
+#### tf.TextLineReader.__init__(skip_header_lines=None, name=None) {#TextLineReader.__init__}
+
+Create a TextLineReader.
+
+##### Args:
+
+
+* <b>skip_header_lines</b>: An optional int. Defaults to 0. Number of lines
+ to skip from the beginning of every file.
+* <b>name</b>: A name for the operation (optional).
+
+
+- - -
+
+#### tf.TextLineReader.num_records_produced(name=None) {#TextLineReader.num_records_produced}
+
+Returns the number of records this reader has produced.
+
+This is the same as the number of Read executions that have
+succeeded.
+
+##### Args:
+
+
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ An int64 Tensor.
+
+
+- - -
+
+#### tf.TextLineReader.num_work_units_completed(name=None) {#TextLineReader.num_work_units_completed}
+
+Returns the number of work units this reader has finished processing.
+
+##### Args:
+
+
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ An int64 Tensor.
+
+
+- - -
+
+#### tf.TextLineReader.read(queue, name=None) {#TextLineReader.read}
+
+Returns the next record (key, value pair) produced by a reader.
+
+Will dequeue a work unit from queue if necessary (e.g. when the
+Reader needs to start reading from a new file since it has
+finished with the previous file).
+
+##### Args:
+
+
+* <b>queue</b>: A Queue or a mutable string Tensor representing a handle
+ to a Queue, with string work items.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A tuple of Tensors (key, value).
+
+* <b>key</b>: A string scalar Tensor.
+* <b>value</b>: A string scalar Tensor.
+
+
+- - -
+
+#### tf.TextLineReader.reader_ref {#TextLineReader.reader_ref}
+
+Op that implements the reader.
+
+- - -
+
+#### tf.TextLineReader.reset(name=None) {#TextLineReader.reset}
+
+Restore a reader to its initial clean state.
+
+##### Args:
+
+
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The created Operation.
+
+
+- - -
+
+#### tf.TextLineReader.restore_state(state, name=None) {#TextLineReader.restore_state}
+
+Restore a reader to a previously saved state.
+
+Not all Readers support being restored, so this can produce an
+Unimplemented error.
+
+##### Args:
+
+
+* <b>state</b>: A string Tensor.
+ Result of a SerializeState of a Reader with matching type.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The created Operation.
+
+
+- - -
+
+#### tf.TextLineReader.serialize_state(name=None) {#TextLineReader.serialize_state}
+
+Produce a string tensor that encodes the state of a reader.
+
+Not all Readers support being serialized, so this can produce an
+Unimplemented error.
+
+##### Args:
+
+
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A string Tensor.
+
+
+- - -
+
+#### tf.TextLineReader.supports_serialize {#TextLineReader.supports_serialize}
+
+Whether the Reader implementation can serialize its state.
+
+
+- - -
+
+### class tf.WholeFileReader <div class="md-anchor" id="WholeFileReader">{#WholeFileReader}</div>
+
+A Reader that outputs the entire contents of a file as a value.
+
+To use, enqueue filenames in a Queue. The output of Read will
+be a filename (key) and the contents of that file (value).
+
+See ReaderBase for supported methods.
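+
+For example, a minimal sketch (the glob pattern is hypothetical):
+
+```python
+import tensorflow as tf
+
+filename_queue = tf.train.string_input_producer(
+    tf.train.match_filenames_once("images/*.jpg"))
+reader = tf.WholeFileReader()
+key, contents = reader.read(filename_queue)  # file name and raw file bytes
+image = tf.image.decode_jpeg(contents)
+```
+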
+- - -
+
+#### tf.WholeFileReader.__init__(name=None) {#WholeFileReader.__init__}
+
+Create a WholeFileReader.
+
+##### Args:
+
+
+* <b>name</b>: A name for the operation (optional).
+
+
+- - -
+
+#### tf.WholeFileReader.num_records_produced(name=None) {#WholeFileReader.num_records_produced}
+
+Returns the number of records this reader has produced.
+
+This is the same as the number of Read executions that have
+succeeded.
+
+##### Args:
+
+
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ An int64 Tensor.
+
+
+- - -
+
+#### tf.WholeFileReader.num_work_units_completed(name=None) {#WholeFileReader.num_work_units_completed}
+
+Returns the number of work units this reader has finished processing.
+
+##### Args:
+
+
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ An int64 Tensor.
+
+
+- - -
+
+#### tf.WholeFileReader.read(queue, name=None) {#WholeFileReader.read}
+
+Returns the next record (key, value pair) produced by a reader.
+
+Will dequeue a work unit from queue if necessary (e.g. when the
+Reader needs to start reading from a new file since it has
+finished with the previous file).
+
+##### Args:
+
+
+* <b>queue</b>: A Queue or a mutable string Tensor representing a handle
+ to a Queue, with string work items.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A tuple of Tensors (key, value).
+
+* <b>key</b>: A string scalar Tensor.
+* <b>value</b>: A string scalar Tensor.
+
+
+- - -
+
+#### tf.WholeFileReader.reader_ref {#WholeFileReader.reader_ref}
+
+Op that implements the reader.
+
+- - -
+
+#### tf.WholeFileReader.reset(name=None) {#WholeFileReader.reset}
+
+Restore a reader to its initial clean state.
+
+##### Args:
+
+
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The created Operation.
+
+
+- - -
+
+#### tf.WholeFileReader.restore_state(state, name=None) {#WholeFileReader.restore_state}
+
+Restore a reader to a previously saved state.
+
+Not all Readers support being restored, so this can produce an
+Unimplemented error.
+
+##### Args:
+
+
+* <b>state</b>: A string Tensor.
+ Result of a SerializeState of a Reader with matching type.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The created Operation.
+
+
+- - -
+
+#### tf.WholeFileReader.serialize_state(name=None) {#WholeFileReader.serialize_state}
+
+Produce a string tensor that encodes the state of a reader.
+
+Not all Readers support being serialized, so this can produce an
+Unimplemented error.
+
+##### Args:
+
+
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A string Tensor.
+
+
+- - -
+
+#### tf.WholeFileReader.supports_serialize {#WholeFileReader.supports_serialize}
+
+Whether the Reader implementation can serialize its state.
+
+
+- - -
+
+### class tf.IdentityReader <div class="md-anchor" id="IdentityReader">{#IdentityReader}</div>
+
+A Reader that outputs the queued work as both the key and value.
+
+To use, enqueue strings in a Queue. Read will take the front
+work string and output (work, work).
+
+See ReaderBase for supported methods.
+- - -
+
+#### tf.IdentityReader.__init__(name=None) {#IdentityReader.__init__}
+
+Create an IdentityReader.
+
+##### Args:
+
+
+* <b>name</b>: A name for the operation (optional).
+
+
+- - -
+
+#### tf.IdentityReader.num_records_produced(name=None) {#IdentityReader.num_records_produced}
+
+Returns the number of records this reader has produced.
+
+This is the same as the number of Read executions that have
+succeeded.
+
+##### Args:
+
+
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ An int64 Tensor.
+
+
+- - -
+
+#### tf.IdentityReader.num_work_units_completed(name=None) {#IdentityReader.num_work_units_completed}
+
+Returns the number of work units this reader has finished processing.
+
+##### Args:
+
+
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ An int64 Tensor.
+
+
+- - -
+
+#### tf.IdentityReader.read(queue, name=None) {#IdentityReader.read}
+
+Returns the next record (key, value pair) produced by a reader.
+
+Will dequeue a work unit from queue if necessary (e.g. when the
+Reader needs to start reading from a new file since it has
+finished with the previous file).
+
+##### Args:
+
+
+* <b>queue</b>: A Queue or a mutable string Tensor representing a handle
+ to a Queue, with string work items.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A tuple of Tensors (key, value).
+
+* <b>key</b>: A string scalar Tensor.
+* <b>value</b>: A string scalar Tensor.
+
+
+- - -
+
+#### tf.IdentityReader.reader_ref {#IdentityReader.reader_ref}
+
+Op that implements the reader.
+
+- - -
+
+#### tf.IdentityReader.reset(name=None) {#IdentityReader.reset}
+
+Restore a reader to its initial clean state.
+
+##### Args:
+
+
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The created Operation.
+
+
+- - -
+
+#### tf.IdentityReader.restore_state(state, name=None) {#IdentityReader.restore_state}
+
+Restore a reader to a previously saved state.
+
+Not all Readers support being restored, so this can produce an
+Unimplemented error.
+
+##### Args:
+
+
+* <b>state</b>: A string Tensor.
+ Result of a SerializeState of a Reader with matching type.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The created Operation.
+
+
+- - -
+
+#### tf.IdentityReader.serialize_state(name=None) {#IdentityReader.serialize_state}
+
+Produce a string tensor that encodes the state of a reader.
+
+Not all Readers support being serialized, so this can produce an
+Unimplemented error.
+
+##### Args:
+
+
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A string Tensor.
+
+
+- - -
+
+#### tf.IdentityReader.supports_serialize {#IdentityReader.supports_serialize}
+
+Whether the Reader implementation can serialize its state.
+
+
+- - -
+
+### class tf.TFRecordReader <div class="md-anchor" id="TFRecordReader">{#TFRecordReader}</div>
+
+A Reader that outputs the records from a TFRecords file.
+
+See ReaderBase for supported methods.
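+
+For example, a minimal sketch (the file name and feature key are
+hypothetical):
+
+```python
+import tensorflow as tf
+
+filename_queue = tf.train.string_input_producer(["data.tfrecords"])
+reader = tf.TFRecordReader()
+_, serialized = reader.read(filename_queue)
+example = tf.parse_single_example(serialized,
+                                  dense_keys=["age"],
+                                  dense_types=[tf.int64],
+                                  dense_shapes=[(1,)])
+```
+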
+- - -
+
+#### tf.TFRecordReader.__init__(name=None) {#TFRecordReader.__init__}
+
+Create a TFRecordReader.
+
+##### Args:
+
+
+* <b>name</b>: A name for the operation (optional).
+
+
+- - -
+
+#### tf.TFRecordReader.num_records_produced(name=None) {#TFRecordReader.num_records_produced}
+
+Returns the number of records this reader has produced.
+
+This is the same as the number of Read executions that have
+succeeded.
+
+##### Args:
+
+
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ An int64 Tensor.
+
+
+- - -
+
+#### tf.TFRecordReader.num_work_units_completed(name=None) {#TFRecordReader.num_work_units_completed}
+
+Returns the number of work units this reader has finished processing.
+
+##### Args:
+
+
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ An int64 Tensor.
+
+
+- - -
+
+#### tf.TFRecordReader.read(queue, name=None) {#TFRecordReader.read}
+
+Returns the next record (key, value pair) produced by a reader.
+
+Will dequeue a work unit from queue if necessary (e.g. when the
+Reader needs to start reading from a new file since it has
+finished with the previous file).
+
+##### Args:
+
+
+* <b>queue</b>: A Queue or a mutable string Tensor representing a handle
+ to a Queue, with string work items.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A tuple of Tensors (key, value).
+
+* <b>key</b>: A string scalar Tensor.
+* <b>value</b>: A string scalar Tensor.
+
+
+- - -
+
+#### tf.TFRecordReader.reader_ref {#TFRecordReader.reader_ref}
+
+Op that implements the reader.
+
+- - -
+
+#### tf.TFRecordReader.reset(name=None) {#TFRecordReader.reset}
+
+Restore a reader to its initial clean state.
+
+##### Args:
+
+
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The created Operation.
+
+
+- - -
+
+#### tf.TFRecordReader.restore_state(state, name=None) {#TFRecordReader.restore_state}
+
+Restore a reader to a previously saved state.
+
+Not all Readers support being restored, so this can produce an
+Unimplemented error.
+
+##### Args:
+
+
+* <b>state</b>: A string Tensor.
+ Result of a SerializeState of a Reader with matching type.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The created Operation.
+
+
+- - -
+
+#### tf.TFRecordReader.serialize_state(name=None) {#TFRecordReader.serialize_state}
+
+Produce a string tensor that encodes the state of a reader.
+
+Not all Readers support being serialized, so this can produce an
+Unimplemented error.
+
+##### Args:
+
+
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A string Tensor.
+
+
+- - -
+
+#### tf.TFRecordReader.supports_serialize {#TFRecordReader.supports_serialize}
+
+Whether the Reader implementation can serialize its state.
+
+
+- - -
+
+### class tf.FixedLengthRecordReader <div class="md-anchor" id="FixedLengthRecordReader">{#FixedLengthRecordReader}</div>
+
+A Reader that outputs fixed-length records from a file.
+
+See ReaderBase for supported methods.
+- - -
+
+#### tf.FixedLengthRecordReader.__init__(record_bytes, header_bytes=None, footer_bytes=None, name=None) {#FixedLengthRecordReader.__init__}
+
+Create a FixedLengthRecordReader.
+
+##### Args:
+
+
+* <b>record_bytes</b>: An int.
+* <b>header_bytes</b>: An optional int. Defaults to 0.
+* <b>footer_bytes</b>: An optional int. Defaults to 0.
+* <b>name</b>: A name for the operation (optional).
+
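+For example, a minimal sketch assuming CIFAR-10-style records of one label
+byte followed by a 32x32x3 image (the file name is hypothetical):
+
+```python
+import tensorflow as tf
+
+filename_queue = tf.train.string_input_producer(["data.bin"])
+reader = tf.FixedLengthRecordReader(record_bytes=1 + 32 * 32 * 3)
+key, value = reader.read(filename_queue)
+record = tf.decode_raw(value, tf.uint8)  # label byte followed by pixels
+```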
+
+- - -
+
+#### tf.FixedLengthRecordReader.num_records_produced(name=None) {#FixedLengthRecordReader.num_records_produced}
+
+Returns the number of records this reader has produced.
+
+This is the same as the number of Read executions that have
+succeeded.
+
+##### Args:
+
+
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ An int64 Tensor.
+
+
+- - -
+
+#### tf.FixedLengthRecordReader.num_work_units_completed(name=None) {#FixedLengthRecordReader.num_work_units_completed}
+
+Returns the number of work units this reader has finished processing.
+
+##### Args:
+
+
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ An int64 Tensor.
+
+
+- - -
+
+#### tf.FixedLengthRecordReader.read(queue, name=None) {#FixedLengthRecordReader.read}
+
+Returns the next record (key, value pair) produced by a reader.
+
+Will dequeue a work unit from queue if necessary (e.g. when the
+Reader needs to start reading from a new file since it has
+finished with the previous file).
+
+##### Args:
+
+
+* <b>queue</b>: A Queue or a mutable string Tensor representing a handle
+ to a Queue, with string work items.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A tuple of Tensors (key, value).
+
+* <b>key</b>: A string scalar Tensor.
+* <b>value</b>: A string scalar Tensor.
+
+
+- - -
+
+#### tf.FixedLengthRecordReader.reader_ref {#FixedLengthRecordReader.reader_ref}
+
+Op that implements the reader.
+
+- - -
+
+#### tf.FixedLengthRecordReader.reset(name=None) {#FixedLengthRecordReader.reset}
+
+Restore a reader to its initial clean state.
+
+##### Args:
+
+
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The created Operation.
+
+
+- - -
+
+#### tf.FixedLengthRecordReader.restore_state(state, name=None) {#FixedLengthRecordReader.restore_state}
+
+Restore a reader to a previously saved state.
+
+Not all Readers support being restored, so this can produce an
+Unimplemented error.
+
+##### Args:
+
+
+* <b>state</b>: A string Tensor.
+ Result of a SerializeState of a Reader with matching type.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The created Operation.
+
+
+- - -
+
+#### tf.FixedLengthRecordReader.serialize_state(name=None) {#FixedLengthRecordReader.serialize_state}
+
+Produce a string tensor that encodes the state of a reader.
+
+Not all Readers support being serialized, so this can produce an
+Unimplemented error.
+
+##### Args:
+
+
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A string Tensor.
+
+
+- - -
+
+#### tf.FixedLengthRecordReader.supports_serialize {#FixedLengthRecordReader.supports_serialize}
+
+Whether the Reader implementation can serialize its state.
+
+
+
+## Converting <div class="md-anchor" id="AUTOGENERATED-converting">{#AUTOGENERATED-converting}</div>
+
+TensorFlow provides several operations that you can use to convert various data
+formats into tensors.
+
+- - -
+
+### tf.decode_csv(records, record_defaults, field_delim=None, name=None) <div class="md-anchor" id="decode_csv">{#decode_csv}</div>
+
+Convert CSV records to tensors. Each column maps to one tensor.
+
+RFC 4180 format is expected for the CSV records.
+(https://tools.ietf.org/html/rfc4180)
+Note that we allow leading and trailing spaces in int or float fields.
+
+##### Args:
+
+
+* <b>records</b>: A `Tensor` of type `string`.
+ Each string is a record/row in the csv and all records should have
+ the same format.
+* <b>record_defaults</b>: A list of `Tensor` objects with types from: `float32`, `int32`, `int64`, `string`.
+ One tensor per column of the input record, with either a
+ scalar default value for that column or empty if the column is required.
+* <b>field_delim</b>: An optional `string`. Defaults to `","`.
+ delimiter to separate fields in a record.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A list of `Tensor` objects. Has the same type as `record_defaults`.
+ Each tensor will have the same shape as records.
+
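+For example, a small sketch with one int, one float, and one string column
+(illustrative values):
+
+```python
+import tensorflow as tf
+
+line = tf.constant("7, 3.5,hello")  # leading spaces in numbers are allowed
+# One default per column; an empty default makes that column required.
+record_defaults = [[0], [0.0], [""]]
+a, b, c = tf.decode_csv(line, record_defaults)
+
+with tf.Session() as sess:
+  print sess.run([a, b, c])  # [7, 3.5, 'hello']
+```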
+
+- - -
+
+### tf.decode_raw(bytes, out_type, little_endian=None, name=None) <div class="md-anchor" id="decode_raw">{#decode_raw}</div>
+
+Reinterpret the bytes of a string as a vector of numbers.
+
+##### Args:
+
+
+* <b>bytes</b>: A `Tensor` of type `string`.
+ All the elements must have the same length.
+* <b>out_type</b>: A `tf.DType` from: `tf.float32, tf.float64, tf.int32, tf.uint8, tf.int16, tf.int8, tf.int64`.
+* <b>little_endian</b>: An optional `bool`. Defaults to `True`.
+ Whether the input bytes are in little-endian order.
+ Ignored for out_types that are stored in a single byte like uint8.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor` of type `out_type`.
+ A Tensor with one more dimension than the input bytes. The
+ added dimension will have size equal to the length of the elements
+ of bytes divided by the number of bytes to represent out_type.
+
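+For example, a small sketch (illustrative bytes):
+
+```python
+import tensorflow as tf
+
+raw = tf.constant("\x01\x02\x03\x04")
+nums = tf.decode_raw(raw, tf.uint8)  # reinterpret the four bytes
+
+with tf.Session() as sess:
+  print sess.run(nums)  # [1 2 3 4]
+```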
+
+- - -
+
+### tf.parse_example(serialized, names=None, sparse_keys=None, sparse_types=None, dense_keys=None, dense_types=None, dense_defaults=None, dense_shapes=None, name='ParseExample') <div class="md-anchor" id="parse_example">{#parse_example}</div>
+
+Parse Example protos.
+
+##### Args:
+
+
+* <b>serialized</b>: string vector, a batch of binary serialized Example protos.
+* <b>names</b>: A string vector, the names of the serialized protos.
+ "names" may contain, e.g., table key (descriptive) names for the
+ corresponding serialized protos. These are purely useful for debugging
+ purposes, and the presence of values here has no effect on the output.
+ "names" may be an empty vector, if no names are available.
+ If non-empty, this vector must be the same length as "serialized".
+* <b>sparse_keys</b>: A string list of keys in the Examples' features.
+ These keys are associated with sparse values.
+* <b>sparse_types</b>: A list of DTypes.
+ This list's length must match that of sparse_keys. Currently
+ parse_example supports tf.float32 (FloatList), tf.int64 (Int64List),
+ and tf.string (BytesList).
+* <b>dense_keys</b>: A string list of keys in the Examples' features.
+ These keys are associated with dense values.
+* <b>dense_types</b>: A list of DTypes.
+ This list's length must match that of dense_keys. Currently
+ parse_example supports tf.float32 (FloatList), tf.int64 (Int64List),
+ and tf.string (BytesList).
+* <b>dense_defaults</b>: A dict of {key:Tensor} (some may be missing).
+ The keys of the dict must match the dense_keys of the feature.
+ If a key is not present in this dictionary, the corresponding dense
+ Feature is required in all elements of serialized.
+* <b>dense_shapes</b>: A list of tuples.
+ Entries provide the shape of data in each dense Feature in features.
+ The length of dense_shapes must be the same as the length of dense_keys.
+ The number of elements in the Feature corresponding to dense_key[j]
+ must always have np.prod(dense_shapes[j]) entries.
+  If dense_shapes[j] == (D0, D1, ..., DN) then the shape of the output
+  Tensor dense_values[j] will be (|serialized|, D0, D1, ..., DN):
+ The dense outputs are just the inputs row-stacked by batch.
+* <b>name</b>: (Optional) Name of Op in the graph.
+
+##### Returns:
+
+ A dictionary mapping keys to Tensors and SparseTensors.
+
+ The key dense_keys[j] is mapped to a tensor of type dense_types[j] and
+ of shape (serialized.size(),) + dense_shapes[j] (i.e., the dense outputs are
+ inputs, reshaped in row-major format and then row-stacked by batch).
+
+ The key sparse_keys[j] is mapped to a SparseTensor of type sparse_types[j].
+ The SparseTensor represents a ragged matrix. Its indices are [batch, index]
+  where "batch" is the batch entry the value is from, and "index" is the
+ value's index in the list of values associated with that feature
+ and example. For example, if one expects a tf.float32 sparse feature "ft"
+ and three serialized examples are provided:
+
+ serialized = [
+
+* <b>features</b>:
+ { feature: [ key: { "ft" value: float_list: { value: [1.0, 2.0] } } ] },
+* <b>features</b>:
+ { feature: [] },
+* <b>features</b>:
+ { feature: [ key: { "ft" value: float_list: { value: [3.0] } } ] }
+ ]
+
+ then the output will look like:
+
+ {"ft": SparseTensor(indices=[[0, 0], [0, 1], [2, 0]],
+ values=[1.0, 2.0, 3.0],
+ shape=(3, 2)) }
+
+##### Raises:
+
+
+* <b>ValueError</b>: If sparse and dense keys intersect, or input lengths do not
+ match up for sparse_* (similarly for dense_*).
+* <b>TypeError</b>: If an input is malformed.
+
+Example input, format, and output: Just Sparse Inputs
+================================================
+
+Given two brain.Example input protos:
+
+
+* <b>serialized</b>: // serialized versions of the protos below
+ [features: {
+
+* <b>feature</b>: { key: "kw" value: { bytes_list: { value: [ "knit", "big" ] } } }
+* <b>feature</b>: { key: "gps" value: { float_list: { value: [] } } }
+ },
+* <b>features</b>: {
+* <b>feature</b>: { key: "kw" value: { bytes_list: { value: [ "emmy" ] } } }
+* <b>feature</b>: { key: "dank" value: { int64_list: { value: [ 42 ] } } }
+* <b>feature</b>: { key: "gps" value: { } }
+ }]
+
+* <b>names</b>: ["input0", "input1"],
+* <b>sparse_keys</b>: ["kw", "dank", "gps"]
+* <b>sparse_types</b>: [DT_STRING, DT_INT64, DT_FLOAT]
+
+Then the expected output is a dictionary:
+    {
+      "kw": SparseTensor(
+          indices=[[0, 0], [0, 1], [1, 0]],
+          values=["knit", "big", "emmy"],
+          shape=[2, 2]),
+      "dank": SparseTensor(
+          indices=[[1, 0]],
+          values=[42],
+          shape=[2, 1]),
+      "gps": SparseTensor(
+          indices=[],
+          values=[],
+          shape=[2, 0]),
+    }
+
+
+Example input, format, and output: Dense Inputs (without defaults)
+==================================================================
+
+Given two brain.Example input protos:
+
+
+    serialized = [  // serialized versions of the protos below
+      features: {
+        feature: { key: "age" value: { int64_list: { value: [ 0 ] } } }
+        feature: { key: "gender" value: { bytes_list: { value: [ "f" ] } } }
+      },
+      features: {
+        feature: { key: "age" value: { int64_list: { value: [] } } }
+        feature: { key: "gender" value: { bytes_list: { value: [ "f" ] } } }
+      }]
+
+    names = ["input0", "input1"]
+    dense_keys = np.array(["age", "gender"])
+    dense_types = [tf.int64, tf.string]
+    dense_defaults = {
+      "age": -1  # defaults to -1 if missing
+                 # "gender" has no specified default so it's required
+    }
+    dense_shapes = [(1,), (1,)]  # age, gender
+
+Then the expected output is a dictionary:
+    {
+      "age": [[0], [-1]],
+      "gender": [["f"], ["f"]],
+    }
+
+
+Example input, format, and output: Dense Inputs (with defaults)
+===============================================================
+
+Given two brain.Example input protos:
+
+
+    serialized = [  // serialized versions of the protos below
+      features: {
+        feature: { key: "weight" value: { float_list: { value: [ 1.0 ] } } }
+      },
+      features: {
+        feature: { key: "label" value: { float_list: { value: [ -1.0, 0.0 ] } } }
+      }]
+
+    names = ["input0", "input1"]
+    dense_keys = np.array(["label", "weight"])
+    dense_defaults = {
+      "label": [1.0, 2.0],  # float (default: vector)
+      "weight": 5.0         # float (default: scalar, 5.0)
+    }
+    dense_shapes = [(2,), (1,)]  # label, weight
+
+Then the expected output is a dictionary:
+    {
+      "label": [[1.0, 2.0], [-1.0, 0.0]],
+      "weight": [[1.0], [5.0]],
+    }
+
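+Putting the pieces together, here is a minimal sketch of calling
+`tf.parse_example` from Python; the keys, dtypes, defaults, and shapes below
+are illustrative, not prescribed:
+
+```python
+import tensorflow as tf
+
+# `serialized` would normally come from a reader op; a placeholder stands in.
+serialized = tf.placeholder(tf.string, shape=[None])
+
+parsed = tf.parse_example(
+    serialized,
+    sparse_keys=["kw"], sparse_types=[tf.string],
+    dense_keys=["age"], dense_types=[tf.int64],
+    dense_defaults={"age": -1}, dense_shapes=[(1,)])
+
+kw = parsed["kw"]    # a SparseTensor of strings
+age = parsed["age"]  # a dense int64 Tensor of shape (batch_size, 1)
+```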
+
+- - -
+
+### tf.parse_single_example(serialized, names=None, sparse_keys=None, sparse_types=None, dense_keys=None, dense_types=None, dense_defaults=None, dense_shapes=None, name='ParseSingleExample') <div class="md-anchor" id="parse_single_example">{#parse_single_example}</div>
+
+Identical to parse_example but for scalar serialized and names.
+
+##### Args:
+
+
+* <b>serialized</b>: A scalar string, a single serialized Example.
+ See parse_example documentation for more details.
+* <b>names</b>: (Optional) A scalar string, the associated name.
+ See parse_example documentation for more details.
+* <b>sparse_keys</b>: See parse_example documentation for more details.
+* <b>sparse_types</b>: See parse_example documentation for more details.
+* <b>dense_keys</b>: See parse_example documentation for more details.
+* <b>dense_types</b>: See parse_example documentation for more details.
+* <b>dense_defaults</b>: See parse_example documentation for more details.
+* <b>dense_shapes</b>: See parse_example documentation for more details.
+* <b>name</b>: Optional op name.
+
+##### Returns:
+
+ A dictionary mapping keys to Tensors and SparseTensors.
+
+ For dense tensors, the Tensor is identical to the output of parse_example,
+ except it is one less dimension (the first, batch, dimension is removed).
+
+ For SparseTensors:
+ The first (batch) column of the indices matrix is removed
+ (it is now a column vector).
+ The values vector is unchanged.
+ The first (batch_size) entry of the shape vector is removed
+ (it is now a single element vector).
+
+##### Raises:
+
+
+* <b>ValueError</b>: if "scalar" or "names" have known shapes, and are not scalars.
+
+
+
+## Queues <div class="md-anchor" id="AUTOGENERATED-queues">{#AUTOGENERATED-queues}</div>
+
+TensorFlow provides several implementations of 'Queues', which are
+structures within the TensorFlow computation graph to stage pipelines
+of tensors together. The following describe the basic Queue interface
+and some implementations. To see an example use, see [Threading and
+Queues](../../how_tos/threading_and_queues/index.md).
+
+- - -
+
+### class tf.QueueBase <div class="md-anchor" id="QueueBase">{#QueueBase}</div>
+
+Base class for queue implementations.
+
+A queue is a TensorFlow data structure that stores tensors across
+multiple steps, and exposes operations that enqueue and dequeue
+tensors.
+
+Each queue element is a tuple of one or more tensors, where each
+tuple component has a static dtype, and may have a static shape. The
+queue implementations support versions of enqueue and dequeue that
+handle single elements, and versions that enqueue and dequeue a
+batch of elements at once.
+
+See [`tf.FIFOQueue`](#FIFOQueue) and
+[`tf.RandomShuffleQueue`](#RandomShuffleQueue) for concrete
+implementations of this class, and instructions on how to create
+them.
+
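+As a concrete illustration, here is a minimal sketch of a single-element
+enqueue/dequeue round trip through a `FIFOQueue` (the value is arbitrary):
+
+```python
+import tensorflow as tf
+
+q = tf.FIFOQueue(capacity=3, dtypes=[tf.float32])
+enqueue_op = q.enqueue([tf.constant(1.0)])
+value = q.dequeue()
+
+with tf.Session() as sess:
+  sess.run(enqueue_op)     # adds one element to the queue
+  print(sess.run(value))   # ==> 1.0
+```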
+- - -
+
+#### tf.QueueBase.enqueue(vals, name=None) {#QueueBase.enqueue}
+
+Enqueues one element to this queue.
+
+If the queue is full when this operation executes, it will block
+until the element has been enqueued.
+
+##### Args:
+
+
+* <b>vals</b>: The tuple of `Tensor` objects to be enqueued.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The operation that enqueues a new tuple of tensors to the queue.
+
+
+- - -
+
+#### tf.QueueBase.enqueue_many(vals, name=None) {#QueueBase.enqueue_many}
+
+Enqueues zero or more elements to this queue.
+
+This operation slices each component tensor along the 0th dimension to
+make multiple queue elements. All of the tensors in `vals` must have the
+same size in the 0th dimension.
+
+If the queue is full when this operation executes, it will block
+until all of the elements have been enqueued.
+
+##### Args:
+
+
+* <b>vals</b>: The tensor or tuple of tensors from which the queue elements
+ are taken.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The operation that enqueues a batch of tuples of tensors to the queue.
+
+
+
+- - -
+
+#### tf.QueueBase.dequeue(name=None) {#QueueBase.dequeue}
+
+Dequeues one element from this queue.
+
+If the queue is empty when this operation executes, it will block
+until there is an element to dequeue.
+
+##### Args:
+
+
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The tuple of tensors that was dequeued.
+
+
+- - -
+
+#### tf.QueueBase.dequeue_many(n, name=None) {#QueueBase.dequeue_many}
+
+Dequeues and concatenates `n` elements from this queue.
+
+This operation concatenates queue-element component tensors along
+the 0th dimension to make a single component tensor. All of the
+components in the dequeued tuple will have size `n` in the 0th dimension.
+
+If the queue contains fewer than `n` elements when this operation
+executes, it will block until `n` elements have been dequeued.
+
+##### Args:
+
+
+* <b>n</b>: A scalar `Tensor` containing the number of elements to dequeue.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The tuple of concatenated tensors that was dequeued.
+
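+A small sketch of how `enqueue_many` slices and `dequeue_many` concatenates
+along the 0th dimension; the values and shapes here are illustrative:
+
+```python
+import tensorflow as tf
+
+# Scalar elements; fixed shapes are required for dequeue_many.
+q = tf.FIFOQueue(capacity=10, dtypes=[tf.int32], shapes=[()])
+enq = q.enqueue_many([[1, 2, 3, 4]])  # slices into four scalar elements
+deq = q.dequeue_many(2)               # concatenates two elements: shape [2]
+
+with tf.Session() as sess:
+  sess.run(enq)
+  print(sess.run(deq))  # ==> [1 2]
+```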
+
+
+- - -
+
+#### tf.QueueBase.size(name=None) {#QueueBase.size}
+
+Computes the number of elements in this queue.
+
+##### Args:
+
+
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A scalar tensor containing the number of elements in this queue.
+
+
+
+- - -
+
+#### tf.QueueBase.close(cancel_pending_enqueues=False, name=None) {#QueueBase.close}
+
+Closes this queue.
+
+This operation signals that no more elements will be enqueued in
+the given queue. Subsequent `enqueue` and `enqueue_many`
+operations will fail. Subsequent `dequeue` and `dequeue_many`
+operations will continue to succeed if sufficient elements remain
+in the queue. Subsequent `dequeue` and `dequeue_many` operations
+that would block will fail immediately.
+
+If `cancel_pending_enqueues` is `True`, all pending requests will also
+be cancelled.
+
+##### Args:
+
+
+* <b>cancel_pending_enqueues</b>: (Optional.) A boolean, defaulting to
+ `False` (described above).
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The operation that closes the queue.
+
+
+
+#### Other Methods
+- - -
+
+#### tf.QueueBase.__init__(dtypes, shapes, queue_ref) {#QueueBase.__init__}
+
+Constructs a queue object from a queue reference.
+
+##### Args:
+
+
+* <b>dtypes</b>: A list of types. The length of dtypes must equal the number
+ of tensors in each element.
+* <b>shapes</b>: Constraints on the shapes of tensors in an element:
+ A list of shape tuples or None. This list is the same length
+  as dtypes. If the shape of any tensor in the element is constrained,
+  all must be; shapes can be None if the shapes should not be constrained.
+* <b>queue_ref</b>: The queue reference, i.e. the output of the queue op.
+
+
+- - -
+
+#### tf.QueueBase.dtypes {#QueueBase.dtypes}
+
+The list of dtypes for each component of a queue element.
+
+- - -
+
+#### tf.QueueBase.name {#QueueBase.name}
+
+The name of the underlying queue.
+
+- - -
+
+#### tf.QueueBase.queue_ref {#QueueBase.queue_ref}
+
+The underlying queue reference.
+
+
+- - -
+
+### class tf.FIFOQueue <div class="md-anchor" id="FIFOQueue">{#FIFOQueue}</div>
+
+A queue implementation that dequeues elements in first-in-first out order.
+
+See [`tf.QueueBase`](#QueueBase) for a description of the methods on
+this class.
+
+- - -
+
+#### tf.FIFOQueue.__init__(capacity, dtypes, shapes=None, shared_name=None, name='fifo_queue') {#FIFOQueue.__init__}
+
+Creates a queue that dequeues elements in a first-in first-out order.
+
+A `FIFOQueue` has bounded capacity; supports multiple concurrent
+producers and consumers; and provides exactly-once delivery.
+
+A `FIFOQueue` holds a list of up to `capacity` elements. Each
+element is a fixed-length tuple of tensors whose dtypes are
+described by `dtypes`, and whose shapes are optionally described
+by the `shapes` argument.
+
+If the `shapes` argument is specified, each component of a queue
+element must have the respective fixed shape. If it is
+unspecified, different queue elements may have different shapes,
+but the use of `dequeue_many` is disallowed.
+
+##### Args:
+
+
+* <b>capacity</b>: An integer. The upper bound on the number of elements
+ that may be stored in this queue.
+* <b>dtypes</b>: A list of `DType` objects. The length of `dtypes` must equal
+ the number of tensors in each queue element.
+* <b>shapes</b>: (Optional.) A list of fully-defined `TensorShape` objects,
+ with the same length as `dtypes` or `None`.
+* <b>shared_name</b>: (Optional.) If non-empty, this queue will be shared under
+ the given name across multiple sessions.
+* <b>name</b>: Optional name for the queue operation.
+
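+For instance, a minimal sketch of constructing a two-component queue with
+fixed shapes, so that `dequeue_many` is allowed (the component meanings and
+sizes are illustrative):
+
+```python
+import tensorflow as tf
+
+# Each element is a (label, image) pair; fixed shapes permit dequeue_many.
+q = tf.FIFOQueue(capacity=100,
+                 dtypes=[tf.int32, tf.float32],
+                 shapes=[(), (28, 28)])
+```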
+
+
+- - -
+
+### class tf.RandomShuffleQueue <div class="md-anchor" id="RandomShuffleQueue">{#RandomShuffleQueue}</div>
+
+A queue implementation that dequeues elements in a random order.
+
+See [`tf.QueueBase`](#QueueBase) for a description of the methods on
+this class.
+
+- - -
+
+#### tf.RandomShuffleQueue.__init__(capacity, min_after_dequeue, dtypes, shapes=None, seed=None, shared_name=None, name='random_shuffle_queue') {#RandomShuffleQueue.__init__}
+
+Create a queue that dequeues elements in a random order.
+
+A `RandomShuffleQueue` has bounded capacity; supports multiple
+concurrent producers and consumers; and provides exactly-once
+delivery.
+
+A `RandomShuffleQueue` holds a list of up to `capacity`
+elements. Each element is a fixed-length tuple of tensors whose
+dtypes are described by `dtypes`, and whose shapes are optionally
+described by the `shapes` argument.
+
+If the `shapes` argument is specified, each component of a queue
+element must have the respective fixed shape. If it is
+unspecified, different queue elements may have different shapes,
+but the use of `dequeue_many` is disallowed.
+
+The `min_after_dequeue` argument allows the caller to specify a
+minimum number of elements that will remain in the queue after a
+`dequeue` or `dequeue_many` operation completes, to ensure a
+minimum level of mixing of elements. This invariant is maintained
+by blocking those operations until sufficient elements have been
+enqueued. The `min_after_dequeue` argument is ignored after the
+queue has been closed.
+
+##### Args:
+
+
+* <b>capacity</b>: An integer. The upper bound on the number of elements
+ that may be stored in this queue.
+* <b>min_after_dequeue</b>: An integer (described above).
+* <b>dtypes</b>: A list of `DType` objects. The length of `dtypes` must equal
+ the number of tensors in each queue element.
+* <b>shapes</b>: (Optional.) A list of fully-defined `TensorShape` objects,
+ with the same length as `dtypes` or `None`.
+* <b>seed</b>: A Python integer. Used to create a random seed.
+ See [`set_random_seed`](constant_op.md#set_random_seed) for behavior.
+* <b>shared_name</b>: (Optional.) If non-empty, this queue will be shared under
+ the given name across multiple sessions.
+* <b>name</b>: Optional name for the queue operation.
+
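+A minimal construction sketch; the capacity and `min_after_dequeue` values
+below are illustrative, chosen only to show their relationship:
+
+```python
+import tensorflow as tf
+
+# Keep at least 1000 elements buffered after each dequeue to ensure mixing;
+# `capacity` bounds how many elements (and how much memory) the queue holds.
+q = tf.RandomShuffleQueue(capacity=5000, min_after_dequeue=1000,
+                          dtypes=[tf.string], seed=42)
+```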
+
+
+
+## Dealing with the filesystem <div class="md-anchor" id="AUTOGENERATED-dealing-with-the-filesystem">{#AUTOGENERATED-dealing-with-the-filesystem}</div>
+
+- - -
+
+### tf.matching_files(pattern, name=None) <div class="md-anchor" id="matching_files">{#matching_files}</div>
+
+Returns the set of files matching a pattern.
+
+Note that this routine only supports wildcard characters in the
+basename portion of the pattern, not in the directory portion.
+
+##### Args:
+
+
+* <b>pattern</b>: A `Tensor` of type `string`. A (scalar) shell wildcard pattern.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor` of type `string`. A vector of matching filenames.
+
+
+- - -
+
+### tf.read_file(filename, name=None) <div class="md-anchor" id="read_file">{#read_file}</div>
+
+Reads and outputs the entire contents of the input filename.
+
+##### Args:
+
+
+* <b>filename</b>: A `Tensor` of type `string`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor` of type `string`.
+
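+A minimal sketch combining the two ops; the glob pattern and file path are
+hypothetical:
+
+```python
+import tensorflow as tf
+
+filenames = tf.matching_files("data/*.txt")  # wildcard in basename only
+contents = tf.read_file("data/hello.txt")    # entire file as one string
+
+with tf.Session() as sess:
+  print(sess.run(filenames))
+  print(sess.run(contents))
+```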
+
+
+## Input pipeline <div class="md-anchor" id="AUTOGENERATED-input-pipeline">{#AUTOGENERATED-input-pipeline}</div>
+
+TensorFlow functions for setting up an input-prefetching pipeline.
+Please see the [reading data how-to](../../how_tos/reading_data.md)
+for context.
+
+### Beginning of an input pipeline <div class="md-anchor" id="AUTOGENERATED-beginning-of-an-input-pipeline">{#AUTOGENERATED-beginning-of-an-input-pipeline}</div>
+
+The "producer" functions add a queue to the graph and a corresponding
+`QueueRunner` for running the subgraph that fills that queue.
+
+- - -
+
+### tf.train.match_filenames_once(pattern, name=None) <div class="md-anchor" id="match_filenames_once">{#match_filenames_once}</div>
+
+Saves the list of files matching pattern, so it is only computed once.
+
+##### Args:
+
+
+* <b>pattern</b>: A file pattern (glob).
+* <b>name</b>: A name for the operations (optional).
+
+##### Returns:
+
+ A variable that is initialized to the list of files matching pattern.
+
+
+- - -
+
+### tf.train.limit_epochs(tensor, num_epochs=None, name=None) <div class="md-anchor" id="limit_epochs">{#limit_epochs}</div>
+
+Returns tensor num_epochs times and then raises an OutOfRange error.
+
+##### Args:
+
+
+* <b>tensor</b>: Any Tensor.
+* <b>num_epochs</b>: An integer (optional). If specified, limits the number
+ of steps the output tensor may be evaluated.
+* <b>name</b>: A name for the operations (optional).
+
+##### Returns:
+
+  The input tensor. After it has been evaluated `num_epochs` times, an
+  OutOfRange error is raised.
+
+
+- - -
+
+### tf.train.range_input_producer(limit, num_epochs=None, shuffle=True, seed=None, capacity=32, name=None) <div class="md-anchor" id="range_input_producer">{#range_input_producer}</div>
+
+Produces the integers from 0 to limit-1 in a queue.
+
+##### Args:
+
+
+* <b>limit</b>: An int32 scalar tensor.
+* <b>num_epochs</b>: An integer (optional). If specified, `range_input_producer`
+ produces each integer `num_epochs` times before generating an
+ OutOfRange error. If not specified, `range_input_producer` can cycle
+ through the integers an unlimited number of times.
+* <b>shuffle</b>: Boolean. If true, the integers are randomly shuffled within each
+ epoch.
+* <b>seed</b>: An integer (optional). Seed used if shuffle == True.
+* <b>capacity</b>: An integer. Sets the queue capacity.
+* <b>name</b>: A name for the operations (optional).
+
+##### Returns:
+
+ A Queue with the output integers. A QueueRunner for the Queue
+ is added to the current Graph's QUEUE_RUNNER collection.
+
+
+- - -
+
+### tf.train.slice_input_producer(tensor_list, num_epochs=None, shuffle=True, seed=None, capacity=32, name=None) <div class="md-anchor" id="slice_input_producer">{#slice_input_producer}</div>
+
+Produces a slice of each Tensor in tensor_list.
+
+Implemented using a Queue -- a QueueRunner for the Queue
+is added to the current Graph's QUEUE_RUNNER collection.
+
+##### Args:
+
+
+* <b>tensor_list</b>: A list of Tensors. Every Tensor in tensor_list must
+ have the same size in the first dimension.
+* <b>num_epochs</b>: An integer (optional). If specified, `slice_input_producer`
+ produces each slice `num_epochs` times before generating
+ an OutOfRange error. If not specified, `slice_input_producer` can cycle
+ through the slices an unlimited number of times.
+* <b>shuffle</b>: Boolean. If true, the slices are randomly shuffled within
+  each epoch.
+* <b>seed</b>: An integer (optional). Seed used if shuffle == True.
+* <b>capacity</b>: An integer. Sets the queue capacity.
+* <b>name</b>: A name for the operations (optional).
+
+##### Returns:
+
+ A list of tensors, one for each element of tensor_list. If the tensor
+ in tensor_list has shape [N, a, b, .., z], then the corresponding output
+ tensor will have shape [a, b, ..., z].
+
+
+- - -
+
+### tf.train.string_input_producer(string_tensor, num_epochs=None, shuffle=True, seed=None, capacity=32, name=None) <div class="md-anchor" id="string_input_producer">{#string_input_producer}</div>
+
+Outputs strings (e.g. filenames) to a queue for an input pipeline.
+
+##### Args:
+
+
+* <b>string_tensor</b>: A 1-D string tensor with the strings to produce.
+* <b>num_epochs</b>: An integer (optional). If specified, `string_input_producer`
+ produces each string from `string_tensor` `num_epochs` times before
+ generating an OutOfRange error. If not specified, `string_input_producer`
+ can cycle through the strings in `string_tensor` an unlimited number of
+ times.
+* <b>shuffle</b>: Boolean. If true, the strings are randomly shuffled within each
+ epoch.
+* <b>seed</b>: An integer (optional). Seed used if shuffle == True.
+* <b>capacity</b>: An integer. Sets the queue capacity.
+* <b>name</b>: A name for the operations (optional).
+
+##### Returns:
+
+ A queue with the output strings. A QueueRunner for the Queue
+ is added to the current Graph's QUEUE_RUNNER collection.
+
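+A minimal sketch of the typical use as a filename queue feeding a reader;
+the file names are hypothetical, and actually running the pipeline also
+requires starting the queue runners (see the how-to linked above):
+
+```python
+import tensorflow as tf
+
+# Each file name is produced num_epochs times, shuffled within each epoch.
+filename_queue = tf.train.string_input_producer(
+    ["file0.tfrecords", "file1.tfrecords"], num_epochs=5, shuffle=True)
+
+reader = tf.TFRecordReader()
+key, serialized_example = reader.read(filename_queue)
+```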
+
+
+### Batching at the end of an input pipeline <div class="md-anchor" id="AUTOGENERATED-batching-at-the-end-of-an-input-pipeline">{#AUTOGENERATED-batching-at-the-end-of-an-input-pipeline}</div>
+
+These functions add a queue to the graph to assemble a batch of examples, with
+possible shuffling. They also add a `QueueRunner` for running the subgraph
+that fills that queue.
+
+Use [batch](#batch) or [batch_join](#batch_join) for batching examples that have
+already been well shuffled. Use [shuffle_batch](#shuffle_batch) or
+[shuffle_batch_join](#shuffle_batch_join) for examples that
+would benefit from additional shuffling.
+
+Use [batch](#batch) or [shuffle_batch](#shuffle_batch) if you want a
+single thread producing examples to batch, or if you have a
+single subgraph producing examples but you want to run it in N threads
+(where you increase N until it can keep the queue full). Use
+[batch_join](#batch_join) or [shuffle_batch_join](#shuffle_batch_join)
+if you have N different subgraphs producing examples to batch and you
+want them run by N threads.
+
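+A minimal sketch contrasting the two choices; `tf.random_uniform` stands in
+for whatever subgraph actually produces one example per evaluation:
+
+```python
+import tensorflow as tf
+
+# Stand-in for a subgraph that produces one example per evaluation.
+example = tf.random_uniform([128])
+
+# One producing subgraph, replicated across 4 enqueuing threads:
+batched = tf.train.batch([example], batch_size=32, num_threads=4)
+
+# Four distinct producing subgraphs, one enqueuing thread each:
+examples_list = [[tf.random_uniform([128])] for _ in range(4)]
+batched_join = tf.train.batch_join(examples_list, batch_size=32)
+```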
+- - -
+
+### tf.train.batch(tensor_list, batch_size, num_threads=1, capacity=32, enqueue_many=False, shapes=None, name=None) <div class="md-anchor" id="batch">{#batch}</div>
+
+Run tensor_list to fill a queue to create batches.
+
+Implemented using a queue -- a QueueRunner for the queue
+is added to the current Graph's QUEUE_RUNNER collection.
+
+##### Args:
+
+
+* <b>tensor_list</b>: The list of tensors to enqueue.
+* <b>batch_size</b>: The new batch size pulled from the queue.
+* <b>num_threads</b>: The number of threads enqueuing tensor_list.
+* <b>capacity</b>: Maximum number of elements in the queue; controls how
+  far ahead prefetching is allowed to get, and memory usage.
+* <b>enqueue_many</b>: If False, tensor_list is assumed to represent a
+ single example. If True, tensor_list is assumed to represent
+ a batch of examples, where the first dimension is indexed by
+ example, and all members of tensor_list should have the same
+ size in the first dimension.
+* <b>shapes</b>: Optional. The shapes for each example. Defaults to the
+ inferred shapes for tensor_list (leaving off the first dimension
+ if enqueue_many is True).
+* <b>name</b>: A name for the operations (optional).
+
+##### Returns:
+
+ A list of tensors with the same number and types as tensor_list.
+ If enqueue_many is false, then an input tensor with shape
+ `[x, y, z]` will be output as a tensor with shape
+ `[batch_size, x, y, z]`. If enqueue_many is True, and an
+  input tensor has shape `[*, x, y, z]`, the output will have
+ shape `[batch_size, x, y, z]`.
+
+
+- - -
+
+### tf.train.batch_join(tensor_list_list, batch_size, capacity=32, enqueue_many=False, shapes=None, name=None) <div class="md-anchor" id="batch_join">{#batch_join}</div>
+
+Run a list of tensors to fill a queue to create batches of examples.
+
+This version enqueues a different list of tensors in different threads.
+Implemented using a queue -- a QueueRunner for the queue
+is added to the current Graph's QUEUE_RUNNER collection.
+
+##### Args:
+
+
+* <b>tensor_list_list</b>: A list of tuples of tensors to enqueue.
+ len(tensor_list_list) threads will be started, with the i-th
+  thread enqueuing the tensors from tensor_list_list[i].
+  tensor_list_list[i1][j] must match tensor_list_list[i2][j] in type and
+ shape (except in the first dimension if enqueue_many is true).
+* <b>batch_size</b>: The new batch size pulled from the queue.
+* <b>capacity</b>: Maximum number of elements in the queue; controls how
+  far ahead prefetching is allowed to get, and memory usage.
+* <b>enqueue_many</b>: If False, each tensor_list_list[i] is assumed to
+ represent a single example. If True, tensor_list_list[i] is
+ assumed to represent a batch of examples, where the first
+ dimension is indexed by example, and all members of
+ tensor_list_list[i] should have the same size in the first
+ dimension.
+* <b>shapes</b>: Optional. The shapes for each example. Defaults to the
+ inferred shapes for tensor_list_list[i] (which must match, after
+ leaving off the first dimension if enqueue_many is True).
+* <b>name</b>: A name for the operations (optional).
+
+##### Returns:
+
+ A list of tensors with the same number and types as
+ tensor_list_list[i]. If enqueue_many is false, then an input
+ tensor with shape `[x, y, z]` will be output as a tensor with
+ shape `[batch_size, x, y, z]`. If enqueue_many is True, and an
+  input tensor has shape `[*, x, y, z]`, the output will have
+ shape `[batch_size, x, y, z]`.
+
+
+- - -
+
+### tf.train.shuffle_batch(tensor_list, batch_size, capacity, min_after_dequeue, num_threads=1, seed=None, enqueue_many=False, shapes=None, name=None) <div class="md-anchor" id="shuffle_batch">{#shuffle_batch}</div>
+
+Create batches by randomly shuffling tensors.
+
+This adds:
+
+* a shuffling queue into which tensors from tensor_list are enqueued.
+* a dequeue many operation to create batches from the queue,
+* and a QueueRunner is added to the current Graph's QUEUE_RUNNER collection,
+ to enqueue the tensors from tensor_list.
+
+##### Args:
+
+
+* <b>tensor_list</b>: The list of tensors to enqueue.
+* <b>batch_size</b>: The new batch size pulled from the queue.
+* <b>capacity</b>: Maximum number of elements in the queue; controls how
+  far ahead prefetching is allowed to get, and memory usage.
+* <b>min_after_dequeue</b>: Minimum number of elements in the queue after a
+ dequeue, used to ensure a level of mixing of elements.
+* <b>num_threads</b>: The number of threads enqueuing tensor_list.
+* <b>seed</b>: Seed for the random shuffling within the queue.
+* <b>enqueue_many</b>: If False, tensor_list is assumed to represent a
+ single example. If True, tensor_list is assumed to represent
+ a batch of examples, where the first dimension is indexed by
+ example, and all members of tensor_list should have the same
+ size in the first dimension.
+* <b>shapes</b>: Optional. The shapes for each example. Defaults to the
+ inferred shapes for tensor_list (leaving off the first dimension
+ if enqueue_many is True).
+* <b>name</b>: A name for the operations (optional).
+
+##### Returns:
+
+ A list of tensors with the same number and types as tensor_list.
+ If enqueue_many is false, then an input tensor with shape
+ `[x, y, z]` will be output as a tensor with shape
+ `[batch_size, x, y, z]`. If enqueue_many is True, and an
+  input tensor has shape `[*, x, y, z]`, the output will have
+ shape `[batch_size, x, y, z]`.
+
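+A minimal usage sketch; the stand-in producers and all numbers below are
+illustrative (in practice `image` and `label` come from a reader subgraph):
+
+```python
+import tensorflow as tf
+
+# Stand-ins for a decoded (image, label) pair from a reader subgraph.
+image = tf.random_uniform([28, 28])
+label = tf.constant(0)
+
+images, labels = tf.train.shuffle_batch(
+    [image, label], batch_size=32,
+    capacity=2000, min_after_dequeue=1000)
+```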
+
+- - -
+
+### tf.train.shuffle_batch_join(tensor_list_list, batch_size, capacity, min_after_dequeue, seed=None, enqueue_many=False, shapes=None, name=None) <div class="md-anchor" id="shuffle_batch_join">{#shuffle_batch_join}</div>
+
+Create batches by randomly shuffling tensors.
+
+This version enqueues a different list of tensors in different threads.
+It adds:
+
+* a shuffling queue into which tensors from tensor_list_list are enqueued.
+* a dequeue many operation to create batches from the queue,
+* and a QueueRunner is added to the current Graph's QUEUE_RUNNER collection,
+ to enqueue the tensors from tensor_list_list.
+
+##### Args:
+
+
+* <b>tensor_list_list</b>: A list of tuples of tensors to enqueue.
+ len(tensor_list_list) threads will be started, with the i-th
+  thread enqueuing the tensors from tensor_list_list[i].
+  tensor_list_list[i1][j] must match tensor_list_list[i2][j] in type and
+ shape (except in the first dimension if enqueue_many is true).
+* <b>batch_size</b>: The new batch size pulled from the queue.
+* <b>capacity</b>: Maximum number of elements in the queue; controls how
+  far ahead prefetching is allowed to get, and memory usage.
+* <b>min_after_dequeue</b>: Minimum number of elements in the queue after a
+ dequeue, used to ensure a level of mixing of elements.
+* <b>seed</b>: Seed for the random shuffling within the queue.
+* <b>enqueue_many</b>: If False, each tensor_list_list[i] is assumed to
+ represent a single example. If True, tensor_list_list[i] is
+ assumed to represent a batch of examples, where the first
+ dimension is indexed by example, and all members of
+ tensor_list_list[i] should have the same size in the first
+ dimension.
+* <b>shapes</b>: Optional. The shapes for each example. Defaults to the
+ inferred shapes for tensor_list_list[i] (which must match, after
+ leaving off the first dimension if enqueue_many is True).
+* <b>name</b>: A name for the operations (optional).
+
+##### Returns:
+
+ A list of tensors with the same number and types as
+ tensor_list_list[i]. If enqueue_many is false, then an input
+ tensor with shape `[x, y, z]` will be output as a tensor with
+ shape `[batch_size, x, y, z]`. If enqueue_many is True, and an
+  input tensor has shape `[*, x, y, z]`, the output will have
+ shape `[batch_size, x, y, z]`.
+
+
diff --git a/tensorflow/g3doc/api_docs/python/math_ops.md b/tensorflow/g3doc/api_docs/python/math_ops.md
new file mode 100644
index 0000000000..fb93c38311
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/math_ops.md
@@ -0,0 +1,1883 @@
+<!-- This file is machine generated: DO NOT EDIT! -->
+
+# Math
+<!-- TOC-BEGIN This section is generated by neural network: DO NOT EDIT! -->
+## Contents
+* [Arithmetic Operators](#AUTOGENERATED-arithmetic-operators)
+ * [tf.add(x, y, name=None)](#add)
+ * [tf.sub(x, y, name=None)](#sub)
+ * [tf.mul(x, y, name=None)](#mul)
+ * [tf.div(x, y, name=None)](#div)
+ * [tf.mod(x, y, name=None)](#mod)
+* [Basic Math Functions](#AUTOGENERATED-basic-math-functions)
+ * [tf.add_n(inputs, name=None)](#add_n)
+ * [tf.abs(x, name=None)](#abs)
+ * [tf.neg(x, name=None)](#neg)
+ * [tf.sign(x, name=None)](#sign)
+ * [tf.inv(x, name=None)](#inv)
+ * [tf.square(x, name=None)](#square)
+ * [tf.round(x, name=None)](#round)
+ * [tf.sqrt(x, name=None)](#sqrt)
+ * [tf.rsqrt(x, name=None)](#rsqrt)
+ * [tf.pow(x, y, name=None)](#pow)
+ * [tf.exp(x, name=None)](#exp)
+ * [tf.log(x, name=None)](#log)
+ * [tf.ceil(x, name=None)](#ceil)
+ * [tf.floor(x, name=None)](#floor)
+ * [tf.maximum(x, y, name=None)](#maximum)
+ * [tf.minimum(x, y, name=None)](#minimum)
+ * [tf.cos(x, name=None)](#cos)
+ * [tf.sin(x, name=None)](#sin)
+* [Matrix Math Functions](#AUTOGENERATED-matrix-math-functions)
+ * [tf.diag(diagonal, name=None)](#diag)
+ * [tf.transpose(a, perm=None, name='transpose')](#transpose)
+ * [tf.matmul(a, b, transpose_a=False, transpose_b=False, a_is_sparse=False, b_is_sparse=False, name=None)](#matmul)
+ * [tf.batch_matmul(x, y, adj_x=None, adj_y=None, name=None)](#batch_matmul)
+ * [tf.matrix_determinant(input, name=None)](#matrix_determinant)
+ * [tf.batch_matrix_determinant(input, name=None)](#batch_matrix_determinant)
+ * [tf.matrix_inverse(input, name=None)](#matrix_inverse)
+ * [tf.batch_matrix_inverse(input, name=None)](#batch_matrix_inverse)
+ * [tf.cholesky(input, name=None)](#cholesky)
+ * [tf.batch_cholesky(input, name=None)](#batch_cholesky)
+* [Complex Number Functions](#AUTOGENERATED-complex-number-functions)
+ * [tf.complex(real, imag, name=None)](#complex)
+ * [tf.complex_abs(x, name=None)](#complex_abs)
+ * [tf.conj(in_, name=None)](#conj)
+ * [tf.imag(in_, name=None)](#imag)
+ * [tf.real(in_, name=None)](#real)
+* [Reduction](#AUTOGENERATED-reduction)
+ * [tf.reduce_sum(input_tensor, reduction_indices=None, keep_dims=False, name=None)](#reduce_sum)
+ * [tf.reduce_prod(input_tensor, reduction_indices=None, keep_dims=False, name=None)](#reduce_prod)
+ * [tf.reduce_min(input_tensor, reduction_indices=None, keep_dims=False, name=None)](#reduce_min)
+ * [tf.reduce_max(input_tensor, reduction_indices=None, keep_dims=False, name=None)](#reduce_max)
+ * [tf.reduce_mean(input_tensor, reduction_indices=None, keep_dims=False, name=None)](#reduce_mean)
+ * [tf.reduce_all(input_tensor, reduction_indices=None, keep_dims=False, name=None)](#reduce_all)
+ * [tf.reduce_any(input_tensor, reduction_indices=None, keep_dims=False, name=None)](#reduce_any)
+ * [tf.accumulate_n(inputs, shape=None, tensor_dtype=None, name=None)](#accumulate_n)
+* [Segmentation](#AUTOGENERATED-segmentation)
+ * [tf.segment_sum(data, segment_ids, name=None)](#segment_sum)
+ * [tf.segment_prod(data, segment_ids, name=None)](#segment_prod)
+ * [tf.segment_min(data, segment_ids, name=None)](#segment_min)
+ * [tf.segment_max(data, segment_ids, name=None)](#segment_max)
+ * [tf.segment_mean(data, segment_ids, name=None)](#segment_mean)
+ * [tf.unsorted_segment_sum(data, segment_ids, num_segments, name=None)](#unsorted_segment_sum)
+ * [tf.sparse_segment_sum(data, indices, segment_ids, name=None)](#sparse_segment_sum)
+ * [tf.sparse_segment_mean(data, indices, segment_ids, name=None)](#sparse_segment_mean)
+* [Sequence Comparison and Indexing](#AUTOGENERATED-sequence-comparison-and-indexing)
+ * [tf.argmin(input, dimension, name=None)](#argmin)
+ * [tf.argmax(input, dimension, name=None)](#argmax)
+ * [tf.listdiff(x, y, name=None)](#listdiff)
+ * [tf.where(input, name=None)](#where)
+ * [tf.unique(x, name=None)](#unique)
+ * [tf.edit_distance(hypothesis, truth, normalize=True, name='edit_distance')](#edit_distance)
+ * [tf.invert_permutation(x, name=None)](#invert_permutation)
+
+
+<!-- TOC-END This section was generated by neural network, THANKS FOR READING! -->
+
+## Arithmetic Operators <div class="md-anchor" id="AUTOGENERATED-arithmetic-operators">{#AUTOGENERATED-arithmetic-operators}</div>
+
+TensorFlow provides several operations that you can use to add basic arithmetic
+operators to your graph.
+
+- - -
+
+### tf.add(x, y, name=None) <div class="md-anchor" id="add">{#add}</div>
+
+Returns x + y element-wise.
+
+*NOTE*: Add supports broadcasting. AddN does not.
+
+##### Args:
+
+
+* <b>x</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int8`, `int16`, `int32`, `complex64`, `int64`.
+* <b>y</b>: A `Tensor`. Must have the same type as `x`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `x`.
+
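+Since `Add` broadcasts, a scalar or lower-rank tensor can be combined with a
+higher-rank one; a small sketch with illustrative values:
+
+```python
+import tensorflow as tf
+
+x = tf.constant([[1, 2], [3, 4]])
+y = tf.constant(10)  # the scalar broadcasts against the 2x2 matrix
+
+with tf.Session() as sess:
+  print(sess.run(tf.add(x, y)))  # ==> [[11 12] [13 14]]
+```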
+
+- - -
+
+### tf.sub(x, y, name=None) <div class="md-anchor" id="sub">{#sub}</div>
+
+Returns x - y element-wise.
+
+##### Args:
+
+
+* <b>x</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `complex64`, `int64`.
+* <b>y</b>: A `Tensor`. Must have the same type as `x`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `x`.
+
+
+- - -
+
+### tf.mul(x, y, name=None) <div class="md-anchor" id="mul">{#mul}</div>
+
+Returns x * y element-wise.
+
+##### Args:
+
+
+* <b>x</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int8`, `int16`, `int32`, `complex64`, `int64`.
+* <b>y</b>: A `Tensor`. Must have the same type as `x`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `x`.
+
+
+- - -
+
+### tf.div(x, y, name=None) <div class="md-anchor" id="div">{#div}</div>
+
+Returns x / y element-wise.
+
+##### Args:
+
+
+* <b>x</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `complex64`, `int64`.
+* <b>y</b>: A `Tensor`. Must have the same type as `x`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `x`.
+
+
+- - -
+
+### tf.mod(x, y, name=None) <div class="md-anchor" id="mod">{#mod}</div>
+
+Returns element-wise remainder of division.
+
+##### Args:
+
+
+* <b>x</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`, `float32`, `float64`.
+* <b>y</b>: A `Tensor`. Must have the same type as `x`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `x`.
+
+
+
+## Basic Math Functions <div class="md-anchor" id="AUTOGENERATED-basic-math-functions">{#AUTOGENERATED-basic-math-functions}</div>
+
+TensorFlow provides several operations that you can use to add basic
+mathematical functions to your graph.
+
+- - -
+
+### tf.add_n(inputs, name=None) <div class="md-anchor" id="add_n">{#add_n}</div>
+
+Adds all input tensors element-wise.
+
+##### Args:
+
+
+* <b>inputs</b>: A list of at least 1 `Tensor` objects of the same type in: `float32`, `float64`, `int64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `qint8`, `quint8`, `qint32`.
+ Must all be the same size and shape.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `inputs`.
+
+
+- - -
+
+### tf.abs(x, name=None) <div class="md-anchor" id="abs">{#abs}</div>
+
+Computes the absolute value of a tensor.
+
+Given a tensor of real numbers `x`, this operation returns a tensor
+containing the absolute value of each element in `x`. For example, if x is
+an input element and y is an output element, this operation computes
+\\(y = |x|\\).
+
+See [`tf.complex_abs()`](#tf_complex_abs) to compute the absolute value of a complex
+number.
+
+##### Args:
+
+
+* <b>x</b>: A `Tensor` of type `float`, `double`, `int32`, or `int64`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor` the same size and type as `x` with absolute values.
+
+
+- - -
+
+### tf.neg(x, name=None) <div class="md-anchor" id="neg">{#neg}</div>
+
+Computes numerical negative value element-wise.
+
+I.e., \\(y = -x\\).
+
+##### Args:
+
+
+* <b>x</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `complex64`, `int64`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `x`.
+
+
+- - -
+
+### tf.sign(x, name=None) <div class="md-anchor" id="sign">{#sign}</div>
+
+Returns an element-wise indication of the sign of a number.
+
+y = sign(x) = -1 if x < 0; 0 if x == 0; 1 if x > 0.
+
+##### Args:
+
+
+* <b>x</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `x`.
+
+
+- - -
+
+### tf.inv(x, name=None) <div class="md-anchor" id="inv">{#inv}</div>
+
+Computes the reciprocal of x element-wise.
+
+I.e., \\(y = 1 / x\\).
+
+##### Args:
+
+
+* <b>x</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `complex64`, `int64`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `x`.
+
+
+- - -
+
+### tf.square(x, name=None) <div class="md-anchor" id="square">{#square}</div>
+
+Computes square of x element-wise.
+
+I.e., \\(y = x * x = x^2\\).
+
+##### Args:
+
+
+* <b>x</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `complex64`, `int64`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `x`.
+
+
+- - -
+
+### tf.round(x, name=None) <div class="md-anchor" id="round">{#round}</div>
+
+Rounds the values of a tensor to the nearest integer, element-wise.
+
+For example:
+
+```python
+# 'a' is [0.9, 2.5, 2.3, -4.4]
+tf.round(a) ==> [ 1.0, 3.0, 2.0, -4.0 ]
+```
+
+##### Args:
+
+
+* <b>x</b>: A `Tensor` of type `float` or `double`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor` of same shape and type as `x`.
+
+
+- - -
+
+### tf.sqrt(x, name=None) <div class="md-anchor" id="sqrt">{#sqrt}</div>
+
+Computes square root of x element-wise.
+
+I.e., \\(y = \sqrt{x} = x^{1/2}\\).
+
+##### Args:
+
+
+* <b>x</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `complex64`, `int64`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `x`.
+
+
+- - -
+
+### tf.rsqrt(x, name=None) <div class="md-anchor" id="rsqrt">{#rsqrt}</div>
+
+Computes reciprocal of square root of x element-wise.
+
+I.e., \\(y = 1 / \sqrt{x}\\).
+
+##### Args:
+
+
+* <b>x</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `complex64`, `int64`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `x`.
+
+
+- - -
+
+### tf.pow(x, y, name=None) <div class="md-anchor" id="pow">{#pow}</div>
+
+Computes the power of one value to another.
+
+Given a tensor `x` and a tensor `y`, this operation computes \\(x^y\\) for
+corresponding elements in `x` and `y`. For example:
+
+```
+# tensor 'x' is [[2, 2], [3, 3]]
+# tensor 'y' is [[8, 16], [2, 3]]
+tf.pow(x, y) ==> [[256, 65536], [9, 27]]
+```
+
+##### Args:
+
+
+* <b>x</b>: A `Tensor` of type `float`, `double`, `int32`, `complex64`, or `int64`.
+* <b>y</b>: A `Tensor` of type `float`, `double`, `int32`, `complex64`, or `int64`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`.
+
+
+- - -
+
+### tf.exp(x, name=None) <div class="md-anchor" id="exp">{#exp}</div>
+
+Computes exponential of x element-wise. \\(y = e^x\\).
+
+##### Args:
+
+
+* <b>x</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `complex64`, `int64`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `x`.
+
+
+- - -
+
+### tf.log(x, name=None) <div class="md-anchor" id="log">{#log}</div>
+
+Computes natural logarithm of x element-wise.
+
+I.e., \\(y = \log_e x\\).
+
+##### Args:
+
+
+* <b>x</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `complex64`, `int64`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `x`.
+
+
+- - -
+
+### tf.ceil(x, name=None) <div class="md-anchor" id="ceil">{#ceil}</div>
+
+Returns element-wise smallest integer not less than x.
+
+##### Args:
+
+
+* <b>x</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `x`.
+
+
+- - -
+
+### tf.floor(x, name=None) <div class="md-anchor" id="floor">{#floor}</div>
+
+Returns element-wise largest integer not greater than x.
+
+##### Args:
+
+
+* <b>x</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `x`.
+
+
+- - -
+
+### tf.maximum(x, y, name=None) <div class="md-anchor" id="maximum">{#maximum}</div>
+
+Returns the max of x and y (i.e. x > y ? x : y) element-wise; supports broadcasting.
+
+##### Args:
+
+
+* <b>x</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`.
+* <b>y</b>: A `Tensor`. Must have the same type as `x`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `x`.
+
+
+- - -
+
+### tf.minimum(x, y, name=None) <div class="md-anchor" id="minimum">{#minimum}</div>
+
+Returns the min of x and y (i.e. x < y ? x : y) element-wise; supports broadcasting.
+
+##### Args:
+
+
+* <b>x</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`.
+* <b>y</b>: A `Tensor`. Must have the same type as `x`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `x`.
+
+
+- - -
+
+### tf.cos(x, name=None) <div class="md-anchor" id="cos">{#cos}</div>
+
+Computes cos of x element-wise.
+
+##### Args:
+
+
+* <b>x</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `complex64`, `int64`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `x`.
+
+
+- - -
+
+### tf.sin(x, name=None) <div class="md-anchor" id="sin">{#sin}</div>
+
+Computes sin of x element-wise.
+
+##### Args:
+
+
+* <b>x</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `complex64`, `int64`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `x`.
+
+
+
+## Matrix Math Functions <div class="md-anchor" id="AUTOGENERATED-matrix-math-functions">{#AUTOGENERATED-matrix-math-functions}</div>
+
+TensorFlow provides several operations that you can use to add basic
+mathematical functions for matrices to your graph.
+
+- - -
+
+### tf.diag(diagonal, name=None) <div class="md-anchor" id="diag">{#diag}</div>
+
+Returns a diagonal tensor with given diagonal values.
+
+Given a `diagonal`, this operation returns a tensor with the `diagonal` and
+everything else padded with zeros. The diagonal is computed as follows:
+
+Assume `diagonal` has dimensions [D1,..., Dk], then the output is a tensor of
+rank 2k with dimensions [D1,..., Dk, D1,..., Dk] where:
+
+`output[i1,..., ik, i1,..., ik] = diagonal[i1, ..., ik]` and 0 everywhere else.
+
+For example:
+
+```prettyprint
+# 'diagonal' is [1, 2, 3, 4]
+tf.diag(diagonal) ==> [[1, 0, 0, 0]
+ [0, 2, 0, 0]
+ [0, 0, 3, 0]
+ [0, 0, 0, 4]]
+```
+
+##### Args:
+
+
+* <b>diagonal</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`.
+ Rank k tensor where k is at most 3.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `diagonal`.
+
+
+- - -
+
+### tf.transpose(a, perm=None, name='transpose') <div class="md-anchor" id="transpose">{#transpose}</div>
+
+Transposes `a`. Permutes the dimensions according to `perm`.
+
+The returned tensor's dimension i will correspond to the input dimension
+`perm[i]`. If `perm` is not given, it is set to (n-1...0), where n is
+the rank of the input tensor. Hence by default, this operation performs a
+regular matrix transpose on 2-D input Tensors.
+
+For example:
+
+```python
+# 'x' is [[1 2 3]
+# [4 5 6]]
+tf.transpose(x) ==> [[1 4]
+ [2 5]
+ [3 6]]
+
+# Equivalently
+tf.transpose(x, perm=[1, 0]) ==> [[1 4]
+                                  [2 5]
+                                  [3 6]]
+
+# 'perm' is more useful for n-dimensional tensors, for n > 2
+# 'x' is [[[1 2 3]
+# [4 5 6]]
+# [[7 8 9]
+# [10 11 12]]]
+# Take the transpose of the matrices in dimension-0
+tf.transpose(x, perm=[0, 2, 1]) ==> [[[1 4]
+ [2 5]
+ [3 6]]
+
+ [[7 10]
+ [8 11]
+ [9 12]]]
+```
+
+##### Args:
+
+
+* <b>a</b>: A `Tensor`.
+* <b>perm</b>: A permutation of the dimensions of `a`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A transposed `Tensor`.
+
+
+
+- - -
+
+### tf.matmul(a, b, transpose_a=False, transpose_b=False, a_is_sparse=False, b_is_sparse=False, name=None) <div class="md-anchor" id="matmul">{#matmul}</div>
+
+Multiplies matrix `a` by matrix `b`, producing `a` * `b`.
+
+The inputs must be two-dimensional matrices, with matching inner dimensions,
+possibly after transposition.
+
+Both matrices must be of the same type. The supported types are:
+`float`, `double`, `int32`, `complex64`.
+
+Either matrix can be transposed on the fly by setting the corresponding flag
+to `True`. This is `False` by default.
+
+If one or both of the matrices contain a lot of zeros, a more efficient
+multiplication algorithm can be used by setting the corresponding
+`a_is_sparse` or `b_is_sparse` flag to `True`. These are `False` by default.
+
+For example:
+
+```python
+# 2-D tensor `a`
+a = tf.constant([1, 2, 3, 4, 5, 6], shape=[2, 3]) => [[1. 2. 3.]
+ [4. 5. 6.]]
+# 2-D tensor `b`
+b = tf.constant([7, 8, 9, 10, 11, 12], shape=[3, 2]) => [[7. 8.]
+ [9. 10.]
+ [11. 12.]]
+c = tf.matmul(a, b) => [[58 64]
+ [139 154]]
+```
+
+##### Args:
+
+
+* <b>a</b>: `Tensor` of type `float`, `double`, `int32` or `complex64`.
+* <b>b</b>: `Tensor` with same type as `a`.
+* <b>transpose_a</b>: If `True`, `a` is transposed before multiplication.
+* <b>transpose_b</b>: If `True`, `b` is transposed before multiplication.
+* <b>a_is_sparse</b>: If `True`, `a` is treated as a sparse matrix.
+* <b>b_is_sparse</b>: If `True`, `b` is treated as a sparse matrix.
+* <b>name</b>: Name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor` of the same type as `a`.
+
+
+- - -
+
+### tf.batch_matmul(x, y, adj_x=None, adj_y=None, name=None) <div class="md-anchor" id="batch_matmul">{#batch_matmul}</div>
+
+Multiplies slices of two tensors in batches.
+
+Multiplies all slices of `Tensor` `x` and `y` (each slice can be
+viewed as an element of a batch), and arranges the individual results
+in a single output tensor of the same batch size. Each of the
+individual slices can optionally be adjointed (to adjoint a matrix
+means to transpose and conjugate it) before multiplication by setting
+the `adj_x` or `adj_y` flag to `True`, which are by default `False`.
+
+The input tensors `x` and `y` are 3-D or higher with shape `[..., r_x, c_x]`
+and `[..., r_y, c_y]`.
+
+The output tensor is 3-D or higher with shape `[..., r_o, c_o]`, where:
+
+ r_o = c_x if adj_x else r_x
+ c_o = r_y if adj_y else c_y
+
+It is computed as:
+
+ out[..., :, :] = matrix(x[..., :, :]) * matrix(y[..., :, :])
+
+##### Args:
+
+
+* <b>x</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `complex64`.
+ 3-D or higher with shape `[..., r_x, c_x]`.
+* <b>y</b>: A `Tensor`. Must have the same type as `x`.
+ 3-D or higher with shape `[..., r_y, c_y]`.
+* <b>adj_x</b>: An optional `bool`. Defaults to `False`.
+ If `True`, adjoint the slices of `x`. Defaults to `False`.
+* <b>adj_y</b>: An optional `bool`. Defaults to `False`.
+ If `True`, adjoint the slices of `y`. Defaults to `False`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `x`.
+ 3-D or higher with shape `[..., r_o, c_o]`
+
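+A small shape-only sketch of the batching behavior (the batch size and
+matrix dimensions are illustrative):
+
+```python
+import tensorflow as tf
+
+x = tf.ones([10, 2, 3])    # a batch of ten 2x3 matrices
+y = tf.ones([10, 3, 4])    # a batch of ten 3x4 matrices
+z = tf.batch_matmul(x, y)  # shape [10, 2, 4]; each slice is one 2x3 * 3x4 product
+```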
+
+
+- - -
+
+### tf.matrix_determinant(input, name=None) <div class="md-anchor" id="matrix_determinant">{#matrix_determinant}</div>
+
+Calculates the determinant of a square matrix.
+
+##### Args:
+
+
+* <b>input</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`.
+ A tensor of shape `[M, M]`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `input`.
+ A scalar, equal to the determinant of the input.
+
+
+- - -
+
+### tf.batch_matrix_determinant(input, name=None) <div class="md-anchor" id="batch_matrix_determinant">{#batch_matrix_determinant}</div>
+
+Calculates the determinants for a batch of square matrices.
+
+The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions
+form square matrices. The output is a 1-D tensor containing the determinants
+for all input submatrices `[..., :, :]`.
+
+##### Args:
+
+
+* <b>input</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`.
+ Shape is `[..., M, M]`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `input`. Shape is `[...]`.
+
+
+
+- - -
+
+### tf.matrix_inverse(input, name=None) <div class="md-anchor" id="matrix_inverse">{#matrix_inverse}</div>
+
+Calculates the inverse of a square invertible matrix. Checks for invertibility.
+
+##### Args:
+
+
+* <b>input</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`.
+ Shape is `[M, M]`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `input`.
+ Shape is `[M, M]` containing the matrix inverse of the input.
+
+
+- - -
+
+### tf.batch_matrix_inverse(input, name=None) <div class="md-anchor" id="batch_matrix_inverse">{#batch_matrix_inverse}</div>
+
+Calculates the inverse of square invertible matrices. Checks for invertibility.
+
+The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions
+form square matrices. The output is a tensor of the same shape as the input
+containing the inverse for all input submatrices `[..., :, :]`.
+
+##### Args:
+
+
+* <b>input</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`.
+ Shape is `[..., M, M]`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `input`. Shape is `[..., M, M]`.
+
+
+
+- - -
+
+### tf.cholesky(input, name=None) <div class="md-anchor" id="cholesky">{#cholesky}</div>
+
+Calculates the Cholesky decomposition of a square matrix.
+
+The input has to be symmetric and positive definite. Only the lower-triangular
+part of the input will be used for this operation. The upper-triangular part
+will not be read.
+
+The result is the lower-triangular matrix of the Cholesky decomposition of the
+input.
+
+##### Args:
+
+
+* <b>input</b>: A `Tensor`. Must be one of the following types: `float64`, `float32`.
+ Shape is `[M, M]`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `input`. Shape is `[M, M]`.
+
+
+- - -
+
+### tf.batch_cholesky(input, name=None) <div class="md-anchor" id="batch_cholesky">{#batch_cholesky}</div>
+
+Calculates the Cholesky decomposition of a batch of square matrices.
+
+The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions
+form square matrices, with the same constraints as the single matrix Cholesky
+decomposition above. The output is a tensor of the same shape as the input
+containing the Cholesky decompositions for all input submatrices `[..., :, :]`.
+
+##### Args:
+
+
+* <b>input</b>: A `Tensor`. Must be one of the following types: `float64`, `float32`.
+ Shape is `[..., M, M]`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `input`. Shape is `[..., M, M]`.
+
+
+
+## Complex Number Functions <div class="md-anchor" id="AUTOGENERATED-complex-number-functions">{#AUTOGENERATED-complex-number-functions}</div>
+
+TensorFlow provides several operations that you can use to add complex number
+functions to your graph.
+
+- - -
+
+### tf.complex(real, imag, name=None) <div class="md-anchor" id="complex">{#complex}</div>
+
+Converts two real numbers to a complex number.
+
+Given a tensor `real` representing the real part of a complex number, and a
+tensor `imag` representing the imaginary part of a complex number, this
+operation computes complex numbers elementwise of the form \\(a + bj\\),
+where *a* represents the `real` part and *b* represents the `imag` part.
+
+The input tensors `real` and `imag` must be the same shape.
+
+For example:
+
+```
+# tensor 'real' is [2.25, 3.25]
+# tensor `imag` is [4.75, 5.75]
+tf.complex(real, imag) ==> [[2.25 + 4.75j], [3.25 + 5.75j]]
+```
+
+##### Args:
+
+
+* <b>real</b>: A `Tensor` of type `float`.
+* <b>imag</b>: A `Tensor` of type `float`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor` of type `complex64`.
+
+
+- - -
+
+### tf.complex_abs(x, name=None) <div class="md-anchor" id="complex_abs">{#complex_abs}</div>
+
+Computes the complex absolute value of a tensor.
+
+Given a tensor `x` of complex numbers, this operation returns a tensor of type
+`float` that is the absolute value of each element in `x`. All elements in `x`
+must be complex numbers of the form \\(a + bj\\). The absolute value is
+computed as \\( \sqrt{a^2 + b^2}\\).
+
+For example:
+
+```
+# tensor 'x' is [[-2.25 + 4.75j], [-3.25 + 5.75j]]
+tf.complex_abs(x) ==> [5.25594902, 6.60492229]
+```
+
+##### Args:
+
+
+* <b>x</b>: A `Tensor` of type `complex64`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor` of type `float32`.
+
+
+- - -
+
+### tf.conj(in_, name=None) <div class="md-anchor" id="conj">{#conj}</div>
+
+Returns the complex conjugate of a complex number.
+
+Given a tensor `in` of complex numbers, this operation returns a tensor of
+complex numbers that are the complex conjugate of each element in `in`. The
+complex numbers in `in` must be of the form \\(a + bj\\), where *a* is the real
+part and *b* is the imaginary part.
+
+The complex conjugate returned by this operation is of the form \\(a - bj\\).
+
+For example:
+
+```
+# tensor 'in' is [-2.25 + 4.75j, 3.25 + 5.75j]
+tf.conj(in) ==> [-2.25 - 4.75j, 3.25 - 5.75j]
+```
+
+##### Args:
+
+
+* <b>in_</b>: A `Tensor` of type `complex64`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor` of type `complex64`.
+
+
+- - -
+
+### tf.imag(in_, name=None) <div class="md-anchor" id="imag">{#imag}</div>
+
+Returns the imaginary part of a complex number.
+
+Given a tensor `in` of complex numbers, this operation returns a tensor of type
+`float` that is the imaginary part of each element in `in`. All elements in `in`
+must be complex numbers of the form \\(a + bj\\), where *a* is the real part
+and *b* is the imaginary part returned by this operation.
+
+For example:
+
+```
+# tensor 'in' is [-2.25 + 4.75j, 3.25 + 5.75j]
+tf.imag(in) ==> [4.75, 5.75]
+```
+
+##### Args:
+
+
+* <b>in_</b>: A `Tensor` of type `complex64`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor` of type `float32`.
+
+
+- - -
+
+### tf.real(in_, name=None) <div class="md-anchor" id="real">{#real}</div>
+
+Returns the real part of a complex number.
+
+Given a tensor `in` of complex numbers, this operation returns a tensor of type
+`float` that is the real part of each element in `in`. All elements in `in`
+must be complex numbers of the form \\(a + bj\\), where *a* is the real part
+returned by this operation and *b* is the imaginary part.
+
+For example:
+
+```
+# tensor 'in' is [-2.25 + 4.75j, 3.25 + 5.75j]
+tf.real(in) ==> [-2.25, 3.25]
+```
+
+##### Args:
+
+
+* <b>in_</b>: A `Tensor` of type `complex64`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor` of type `float32`.
+
+
+
+## Reduction <div class="md-anchor" id="AUTOGENERATED-reduction">{#AUTOGENERATED-reduction}</div>
+
+TensorFlow provides several operations that you can use to perform
+common math computations that reduce various dimensions of a tensor.
+
+- - -
+
+### tf.reduce_sum(input_tensor, reduction_indices=None, keep_dims=False, name=None) <div class="md-anchor" id="reduce_sum">{#reduce_sum}</div>
+
+Computes the sum of elements across dimensions of a tensor.
+
+Reduces `input_tensor` along the dimensions given in `reduction_indices`.
+Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each
+entry in `reduction_indices`. If `keep_dims` is true, the reduced dimensions
+are retained with length 1.
+
+If `reduction_indices` has no entries, all dimensions are reduced, and a
+tensor with a single element is returned.
+
+For example:
+
+```python
+# 'x' is [[1, 1, 1]
+#         [1, 1, 1]]
+tf.reduce_sum(x) ==> 6
+tf.reduce_sum(x, 0) ==> [2, 2, 2]
+tf.reduce_sum(x, 1) ==> [3, 3]
+tf.reduce_sum(x, 1, keep_dims=True) ==> [[3], [3]]
+tf.reduce_sum(x, [0, 1]) ==> 6
+```
+
+##### Args:
+
+
+* <b>input_tensor</b>: The tensor to reduce. Should have numeric type.
+* <b>reduction_indices</b>: The dimensions to reduce. If `None` (the default),
+ reduces all dimensions.
+* <b>keep_dims</b>: If true, retains reduced dimensions with length 1.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The reduced tensor.
+
+
+- - -
+
+### tf.reduce_prod(input_tensor, reduction_indices=None, keep_dims=False, name=None) <div class="md-anchor" id="reduce_prod">{#reduce_prod}</div>
+
+Computes the product of elements across dimensions of a tensor.
+
+Reduces `input_tensor` along the dimensions given in `reduction_indices`.
+Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each
+entry in `reduction_indices`. If `keep_dims` is true, the reduced dimensions
+are retained with length 1.
+
+If `reduction_indices` has no entries, all dimensions are reduced, and a
+tensor with a single element is returned.
+
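+For example (a hand-worked sketch mirroring the `reduce_sum` example above;
+outputs computed by hand):
+
+```python
+# 'x' is [[1, 2, 3]
+#         [4, 5, 6]]
+tf.reduce_prod(x) ==> 720
+tf.reduce_prod(x, 0) ==> [4, 10, 18]
+tf.reduce_prod(x, 1) ==> [6, 120]
+```
+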
+##### Args:
+
+
+* <b>input_tensor</b>: The tensor to reduce. Should have numeric type.
+* <b>reduction_indices</b>: The dimensions to reduce. If `None` (the default),
+ reduces all dimensions.
+* <b>keep_dims</b>: If true, retains reduced dimensions with length 1.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The reduced tensor.
+
+
+- - -
+
+### tf.reduce_min(input_tensor, reduction_indices=None, keep_dims=False, name=None) <div class="md-anchor" id="reduce_min">{#reduce_min}</div>
+
+Computes the minimum of elements across dimensions of a tensor.
+
+Reduces `input_tensor` along the dimensions given in `reduction_indices`.
+Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each
+entry in `reduction_indices`. If `keep_dims` is true, the reduced dimensions
+are retained with length 1.
+
+If `reduction_indices` has no entries, all dimensions are reduced, and a
+tensor with a single element is returned.
+
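+For example (hand-computed):
+
+```python
+# 'x' is [[3, 1, 2]
+#         [6, 5, 4]]
+tf.reduce_min(x) ==> 1
+tf.reduce_min(x, 0) ==> [3, 1, 2]
+tf.reduce_min(x, 1) ==> [1, 4]
+```
+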
+##### Args:
+
+
+* <b>input_tensor</b>: The tensor to reduce. Should have numeric type.
+* <b>reduction_indices</b>: The dimensions to reduce. If `None` (the default),
+ reduces all dimensions.
+* <b>keep_dims</b>: If true, retains reduced dimensions with length 1.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The reduced tensor.
+
+
+- - -
+
+### tf.reduce_max(input_tensor, reduction_indices=None, keep_dims=False, name=None) <div class="md-anchor" id="reduce_max">{#reduce_max}</div>
+
+Computes the maximum of elements across dimensions of a tensor.
+
+Reduces `input_tensor` along the dimensions given in `reduction_indices`.
+Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each
+entry in `reduction_indices`. If `keep_dims` is true, the reduced dimensions
+are retained with length 1.
+
+If `reduction_indices` has no entries, all dimensions are reduced, and a
+tensor with a single element is returned.
+
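+For example (hand-computed, using the same 'x' as the `reduce_min` example):
+
+```python
+# 'x' is [[3, 1, 2]
+#         [6, 5, 4]]
+tf.reduce_max(x) ==> 6
+tf.reduce_max(x, 0) ==> [6, 5, 4]
+tf.reduce_max(x, 1) ==> [3, 6]
+```
+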
+##### Args:
+
+
+* <b>input_tensor</b>: The tensor to reduce. Should have numeric type.
+* <b>reduction_indices</b>: The dimensions to reduce. If `None` (the default),
+ reduces all dimensions.
+* <b>keep_dims</b>: If true, retains reduced dimensions with length 1.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The reduced tensor.
+
+
+- - -
+
+### tf.reduce_mean(input_tensor, reduction_indices=None, keep_dims=False, name=None) <div class="md-anchor" id="reduce_mean">{#reduce_mean}</div>
+
+Computes the mean of elements across dimensions of a tensor.
+
+Reduces `input_tensor` along the dimensions given in `reduction_indices`.
+Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each
+entry in `reduction_indices`. If `keep_dims` is true, the reduced dimensions
+are retained with length 1.
+
+If `reduction_indices` has no entries, all dimensions are reduced, and a
+tensor with a single element is returned.
+
+For example:
+
+```python
+# 'x' is [[1., 1.]
+#         [2., 2.]]
+tf.reduce_mean(x) ==> 1.5
+tf.reduce_mean(x, 0) ==> [1.5, 1.5]
+tf.reduce_mean(x, 1) ==> [1., 2.]
+```
+
+##### Args:
+
+
+* <b>input_tensor</b>: The tensor to reduce. Should have numeric type.
+* <b>reduction_indices</b>: The dimensions to reduce. If `None` (the default),
+ reduces all dimensions.
+* <b>keep_dims</b>: If true, retains reduced dimensions with length 1.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The reduced tensor.
+
+
+- - -
+
+### tf.reduce_all(input_tensor, reduction_indices=None, keep_dims=False, name=None) <div class="md-anchor" id="reduce_all">{#reduce_all}</div>
+
+Computes the "logical and" of elements across dimensions of a tensor.
+
+Reduces `input_tensor` along the dimensions given in `reduction_indices`.
+Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each
+entry in `reduction_indices`. If `keep_dims` is true, the reduced dimensions
+are retained with length 1.
+
+If `reduction_indices` has no entries, all dimensions are reduced, and a
+tensor with a single element is returned.
+
+For example:
+
+```python
+# 'x' is [[True, True]
+#         [False, False]]
+tf.reduce_all(x) ==> False
+tf.reduce_all(x, 0) ==> [False, False]
+tf.reduce_all(x, 1) ==> [True, False]
+```
+
+##### Args:
+
+
+* <b>input_tensor</b>: The boolean tensor to reduce.
+* <b>reduction_indices</b>: The dimensions to reduce. If `None` (the default),
+ reduces all dimensions.
+* <b>keep_dims</b>: If true, retains reduced dimensions with length 1.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The reduced tensor.
+
+
+- - -
+
+### tf.reduce_any(input_tensor, reduction_indices=None, keep_dims=False, name=None) <div class="md-anchor" id="reduce_any">{#reduce_any}</div>
+
+Computes the "logical or" of elements across dimensions of a tensor.
+
+Reduces `input_tensor` along the dimensions given in `reduction_indices`.
+Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each
+entry in `reduction_indices`. If `keep_dims` is true, the reduced dimensions
+are retained with length 1.
+
+If `reduction_indices` has no entries, all dimensions are reduced, and a
+tensor with a single element is returned.
+
+For example:
+
+```python
+# 'x' is [[True, True]
+#         [False, False]]
+tf.reduce_any(x) ==> True
+tf.reduce_any(x, 0) ==> [True, True]
+tf.reduce_any(x, 1) ==> [True, False]
+```
+
+##### Args:
+
+
+* <b>input_tensor</b>: The boolean tensor to reduce.
+* <b>reduction_indices</b>: The dimensions to reduce. If `None` (the default),
+ reduces all dimensions.
+* <b>keep_dims</b>: If true, retains reduced dimensions with length 1.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The reduced tensor.
+
+
+
+- - -
+
+### tf.accumulate_n(inputs, shape=None, tensor_dtype=None, name=None) <div class="md-anchor" id="accumulate_n">{#accumulate_n}</div>
+
+Returns the element-wise sum of a list of tensors.
+
+Optionally, pass `shape` and `tensor_dtype` for shape and type checking,
+otherwise, these are inferred.
+
+For example:
+
+```python
+# tensor 'a' is [[1, 2], [3, 4]]
+# tensor 'b' is [[5, 0], [0, 6]]
+tf.accumulate_n([a, b, a]) ==> [[7, 4], [6, 14]]
+
+# Explicitly pass shape and type
+tf.accumulate_n([a, b, a], shape=[2, 2], tensor_dtype=tf.int32)
+ ==> [[7, 4], [6, 14]]
+```
+
+##### Args:
+
+
+* <b>inputs</b>: A list of `Tensor` objects, each with same shape and type.
+* <b>shape</b>: Shape of elements of `inputs`.
+* <b>tensor_dtype</b>: The type of `inputs`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor` of same shape and type as the elements of `inputs`.
+
+##### Raises:
+
+
+* <b>ValueError</b>: If `inputs` don't all have same shape and dtype or the shape
+ cannot be inferred.
+
+
+
+## Segmentation <div class="md-anchor" id="AUTOGENERATED-segmentation">{#AUTOGENERATED-segmentation}</div>
+
+TensorFlow provides several operations that you can use to perform common
+math computations on tensor segments.
+Here a segmentation is a partitioning of a tensor along
+its first dimension, i.e. a mapping of the first dimension onto segment IDs
+given by the `segment_ids` tensor. `segment_ids` should have the size of
+the first dimension, `d0`, and contain consecutive IDs in the range `0` to `k`,
+where `k<d0`.
+In particular, a segmentation of a matrix tensor is a mapping of rows to
+segments.
+
+For example:
+
+```python
+c = tf.constant([[1,2,3,4], [-1,-2,-3,-4], [5,6,7,8]])
+tf.segment_sum(c, tf.constant([0, 0, 1]))
+ ==> [[0 0 0 0]
+ [5 6 7 8]]
+```
+
+- - -
+
+### tf.segment_sum(data, segment_ids, name=None) <div class="md-anchor" id="segment_sum">{#segment_sum}</div>
+
+Computes the sum along segments of a tensor.
+
+Read [the section on Segmentation](../python/math_ops.md#segmentation)
+for an explanation of segments.
+
+Computes a tensor such that
+\\(output_i = \sum_j data_j\\) where sum is over `j` such
+that `segment_ids[j] == i`.
+
+<div style="width:70%; margin:auto; margin-bottom:10px; margin-top:20px;">
+<img style="width:100%" src="../images/SegmentSum.png" alt>
+</div>
+
+##### Args:
+
+
+* <b>data</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`.
+* <b>segment_ids</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
+  A 1-D tensor whose size is equal to the size of `data`'s
+ first dimension. Values should be sorted and can be repeated.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `data`.
+ Has same shape as data, except for dimension_0 which
+ has size `k`, the number of segments.
+
+
+- - -
+
+### tf.segment_prod(data, segment_ids, name=None) <div class="md-anchor" id="segment_prod">{#segment_prod}</div>
+
+Computes the product along segments of a tensor.
+
+Read [the section on Segmentation](../python/math_ops.md#segmentation)
+for an explanation of segments.
+
+Computes a tensor such that
+\\(output_i = \prod_j data_j\\) where the product is over `j` such
+that `segment_ids[j] == i`.
+
+<div style="width:70%; margin:auto; margin-bottom:10px; margin-top:20px;">
+<img style="width:100%" src="../images/SegmentProd.png" alt>
+</div>
+
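+For example (a hand-worked sketch in the style of the segmentation example
+above):
+
+```python
+c = tf.constant([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
+tf.segment_prod(c, tf.constant([0, 0, 1]))
+  ==> [[ 4 10 18]
+       [ 7  8  9]]
+```
+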
+##### Args:
+
+
+* <b>data</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`.
+* <b>segment_ids</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
+  A 1-D tensor whose size is equal to the size of `data`'s
+ first dimension. Values should be sorted and can be repeated.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `data`.
+ Has same shape as data, except for dimension_0 which
+ has size `k`, the number of segments.
+
+
+- - -
+
+### tf.segment_min(data, segment_ids, name=None) <div class="md-anchor" id="segment_min">{#segment_min}</div>
+
+Computes the minimum along segments of a tensor.
+
+Read [the section on Segmentation](../python/math_ops.md#segmentation)
+for an explanation of segments.
+
+Computes a tensor such that
+\\(output_i = \min_j(data_j)\\) where `min` is over `j` such
+that `segment_ids[j] == i`.
+
+<div style="width:70%; margin:auto; margin-bottom:10px; margin-top:20px;">
+<img style="width:100%" src="../images/SegmentMin.png" alt>
+</div>
+
+##### Args:
+
+
+* <b>data</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`.
+* <b>segment_ids</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
+  A 1-D tensor whose size is equal to the size of `data`'s
+ first dimension. Values should be sorted and can be repeated.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `data`.
+ Has same shape as data, except for dimension_0 which
+ has size `k`, the number of segments.
+
+
+- - -
+
+### tf.segment_max(data, segment_ids, name=None) <div class="md-anchor" id="segment_max">{#segment_max}</div>
+
+Computes the maximum along segments of a tensor.
+
+Read [the section on Segmentation](../python/math_ops.md#segmentation)
+for an explanation of segments.
+
+Computes a tensor such that
+\\(output_i = \max_j(data_j)\\) where `max` is over `j` such
+that `segment_ids[j] == i`.
+
+<div style="width:70%; margin:auto; margin-bottom:10px; margin-top:20px;">
+<img style="width:100%" src="../images/SegmentMax.png" alt>
+</div>
+
+##### Args:
+
+
+* <b>data</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`.
+* <b>segment_ids</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
+  A 1-D tensor whose size is equal to the size of `data`'s
+ first dimension. Values should be sorted and can be repeated.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `data`.
+ Has same shape as data, except for dimension_0 which
+ has size `k`, the number of segments.
+
+
+- - -
+
+### tf.segment_mean(data, segment_ids, name=None) <div class="md-anchor" id="segment_mean">{#segment_mean}</div>
+
+Computes the mean along segments of a tensor.
+
+Read [the section on Segmentation](../python/math_ops.md#segmentation)
+for an explanation of segments.
+
+Computes a tensor such that
+\\(output_i = \frac{\sum_j data_j}{N}\\) where the sum is
+over `j` such that `segment_ids[j] == i` and `N` is the total number of
+values being averaged.
+
+<div style="width:70%; margin:auto; margin-bottom:10px; margin-top:20px;">
+<img style="width:100%" src="../images/SegmentMean.png" alt>
+</div>
+
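+For example (hand-computed):
+
+```python
+c = tf.constant([[1., 2.], [3., 4.], [5., 6.]])
+tf.segment_mean(c, tf.constant([0, 0, 1]))
+  ==> [[2. 3.]
+       [5. 6.]]
+```
+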
+##### Args:
+
+
+* <b>data</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`.
+* <b>segment_ids</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
+  A 1-D tensor whose size is equal to the size of `data`'s
+ first dimension. Values should be sorted and can be repeated.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `data`.
+ Has same shape as data, except for dimension_0 which
+ has size `k`, the number of segments.
+
+
+
+- - -
+
+### tf.unsorted_segment_sum(data, segment_ids, num_segments, name=None) <div class="md-anchor" id="unsorted_segment_sum">{#unsorted_segment_sum}</div>
+
+Computes the sum along segments of a tensor.
+
+Read [the section on Segmentation](../python/math_ops.md#segmentation)
+for an explanation of segments.
+
+Computes a tensor such that
+\\(output_i = \sum_j data_j\\) where sum is over `j` such
+that `segment_ids[j] == i`. Unlike `SegmentSum`, `segment_ids`
+need not be sorted and need not cover all values in the full
+range of valid values.
+
+If the sum is empty for a given segment ID `i`, `output[i] = 0`.
+
+`num_segments` should equal the number of distinct segment IDs.
+
+<div style="width:70%; margin:auto; margin-bottom:10px; margin-top:20px;">
+<img style="width:100%" src="../images/UnsortedSegmentSum.png" alt>
+</div>
+
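+For example (hand-computed; note that the segment IDs are not sorted):
+
+```python
+c = tf.constant([[1, 2], [3, 4], [5, 6]])
+tf.unsorted_segment_sum(c, tf.constant([1, 0, 1]), num_segments=2)
+  ==> [[3 4]
+       [6 8]]
+```
+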
+##### Args:
+
+
+* <b>data</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`.
+* <b>segment_ids</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
+  A 1-D tensor whose size is equal to the size of `data`'s
+ first dimension.
+* <b>num_segments</b>: A `Tensor` of type `int32`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `data`.
+ Has same shape as data, except for dimension_0 which
+ has size `num_segments`.
+
+
+
+- - -
+
+### tf.sparse_segment_sum(data, indices, segment_ids, name=None) <div class="md-anchor" id="sparse_segment_sum">{#sparse_segment_sum}</div>
+
+Computes the sum along sparse segments of a tensor.
+
+Read [the section on Segmentation](../python/math_ops.md#segmentation)
+for an explanation of segments.
+
+Like `SegmentSum`, but `segment_ids` can have rank less than `data`'s first
+dimension, selecting a subset of dimension_0, specified by `indices`.
+
+For example:
+
+```prettyprint
+c = tf.constant([[1,2,3,4], [-1,-2,-3,-4], [5,6,7,8]])
+
+# Select two rows, one segment.
+tf.sparse_segment_sum(c, tf.constant([0, 1]), tf.constant([0, 0]))
+ ==> [[0 0 0 0]]
+
+# Select two rows, two segments.
+tf.sparse_segment_sum(c, tf.constant([0, 1]), tf.constant([0, 1]))
+ ==> [[ 1 2 3 4]
+ [-1 -2 -3 -4]]
+
+# Select all rows, two segments.
+tf.sparse_segment_sum(c, tf.constant([0, 1, 2]), tf.constant([0, 0, 1]))
+ ==> [[0 0 0 0]
+ [5 6 7 8]]
+
+# Which is equivalent to:
+tf.segment_sum(c, tf.constant([0, 0, 1]))
+```
+
+##### Args:
+
+
+* <b>data</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`.
+* <b>indices</b>: A `Tensor` of type `int32`.
+ A 1-D tensor. Has same rank as `segment_ids`.
+* <b>segment_ids</b>: A `Tensor` of type `int32`.
+ A 1-D tensor. Values should be sorted and can be repeated.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `data`.
+ Has same shape as data, except for dimension_0 which
+ has size `k`, the number of segments.
+
+
+- - -
+
+### tf.sparse_segment_mean(data, indices, segment_ids, name=None) <div class="md-anchor" id="sparse_segment_mean">{#sparse_segment_mean}</div>
+
+Computes the mean along sparse segments of a tensor.
+
+Read [the section on Segmentation](../python/math_ops.md#segmentation)
+for an explanation of segments.
+
+Like `SegmentMean`, but `segment_ids` can have rank less than `data`'s first
+dimension, selecting a subset of dimension_0, specified by `indices`.
+
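+For example (a hand-worked sketch):
+
+```prettyprint
+c = tf.constant([[1., 2.], [3., 4.], [5., 6.]])
+
+# Average rows 0 and 2 into a single segment.
+tf.sparse_segment_mean(c, tf.constant([0, 2]), tf.constant([0, 0]))
+  ==> [[3. 4.]]
+```
+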
+##### Args:
+
+
+* <b>data</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`.
+* <b>indices</b>: A `Tensor` of type `int32`.
+ A 1-D tensor. Has same rank as `segment_ids`.
+* <b>segment_ids</b>: A `Tensor` of type `int32`.
+ A 1-D tensor. Values should be sorted and can be repeated.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `data`.
+ Has same shape as data, except for dimension_0 which
+ has size `k`, the number of segments.
+
+
+
+
+## Sequence Comparison and Indexing <div class="md-anchor" id="AUTOGENERATED-sequence-comparison-and-indexing">{#AUTOGENERATED-sequence-comparison-and-indexing}</div>
+
+TensorFlow provides several operations that you can use to add sequence
+comparison and index extraction to your graph. You can use these operations to
+determine sequence differences and to find the indices of specific values in
+a tensor.
+
+- - -
+
+### tf.argmin(input, dimension, name=None) <div class="md-anchor" id="argmin">{#argmin}</div>
+
+Returns the index with the smallest value across dimensions of a tensor.
+
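+For example (hand-computed):
+
+```python
+# 'input' is [[4, 1, 9]
+#             [2, 8, 3]]
+tf.argmin(input, 0) ==> [1, 0, 1]
+tf.argmin(input, 1) ==> [1, 0]
+```
+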
+##### Args:
+
+
+* <b>input</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `qint8`, `quint8`, `qint32`.
+* <b>dimension</b>: A `Tensor` of type `int32`.
+ int32, 0 <= dimension < rank(input). Describes which dimension
+ of the input Tensor to reduce across. For vectors, use dimension = 0.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor` of type `int64`.
+
+
+- - -
+
+### tf.argmax(input, dimension, name=None) <div class="md-anchor" id="argmax">{#argmax}</div>
+
+Returns the index with the largest value across dimensions of a tensor.
+
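+For example (hand-computed, using the same 'input' as the `argmin` example):
+
+```python
+# 'input' is [[4, 1, 9]
+#             [2, 8, 3]]
+tf.argmax(input, 0) ==> [0, 1, 0]
+tf.argmax(input, 1) ==> [2, 1]
+```
+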
+##### Args:
+
+
+* <b>input</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `qint8`, `quint8`, `qint32`.
+* <b>dimension</b>: A `Tensor` of type `int32`.
+ int32, 0 <= dimension < rank(input). Describes which dimension
+ of the input Tensor to reduce across. For vectors, use dimension = 0.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor` of type `int64`.
+
+
+
+- - -
+
+### tf.listdiff(x, y, name=None) <div class="md-anchor" id="listdiff">{#listdiff}</div>
+
+Computes the difference between two lists of numbers.
+
+Given a list `x` and a list `y`, this operation returns a list `out` that
+represents all numbers that are in `x` but not in `y`. The returned list `out`
+is sorted in the same order that the numbers appear in `x` (duplicates are
+preserved). This operation also returns a list `idx` that represents the
+position of each `out` element in `x`. In other words:
+
+`out[i] = x[idx[i]] for i in [0, 1, ..., len(out) - 1]`
+
+For example, given this input:
+
+```prettyprint
+x = [1, 2, 3, 4, 5, 6]
+y = [1, 3, 5]
+```
+
+This operation would return:
+
+```prettyprint
+out ==> [2, 4, 6]
+idx ==> [1, 3, 5]
+```
+
+##### Args:
+
+
+* <b>x</b>: A `Tensor`. 1-D. Values to keep.
+* <b>y</b>: A `Tensor`. Must have the same type as `x`. 1-D. Values to remove.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A tuple of `Tensor` objects (out, idx).
+
+* <b>out</b>: A `Tensor`. Has the same type as `x`. 1-D. Values present in `x` but not in `y`.
+* <b>idx</b>: A `Tensor` of type `int32`. 1-D. Positions of `x` values preserved in `out`.
+
+
+- - -
+
+### tf.where(input, name=None) <div class="md-anchor" id="where">{#where}</div>
+
+Returns locations of true values in a boolean tensor.
+
+This operation returns the coordinates of true elements in `input`. The
+coordinates are returned in a 2-D tensor where the first dimension (rows)
+represents the number of true elements, and the second dimension (columns)
+represents the coordinates of the true elements. Keep in mind, the shape of
+the output tensor can vary depending on how many true values there are in
+`input`. Indices are output in row-major order.
+
+For example:
+
+```prettyprint
+# 'input' tensor is [[True, False]
+# [True, False]]
+# 'input' has two true values, so output has two coordinates.
+# 'input' has rank of 2, so coordinates have two indices.
+where(input) ==> [[0, 0],
+ [1, 0]]
+
+# `input` tensor is [[[True, False]
+# [True, False]]
+# [[False, True]
+# [False, True]]
+# [[False, False]
+# [False, True]]]
+# 'input' has 5 true values, so output has 5 coordinates.
+# 'input' has rank of 3, so coordinates have three indices.
+where(input) ==> [[0, 0, 0],
+ [0, 1, 0],
+ [1, 0, 1],
+ [1, 1, 1],
+ [2, 1, 1]]
+```
+
+##### Args:
+
+
+* <b>input</b>: A `Tensor` of type `bool`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor` of type `int64`.
+
+
+- - -
+
+### tf.unique(x, name=None) <div class="md-anchor" id="unique">{#unique}</div>
+
+Finds unique elements in a 1-D tensor.
+
+This operation returns a tensor `y` containing all of the unique elements of `x`
+sorted in the same order that they occur in `x`. This operation also returns a
+tensor `idx` the same size as `x` that contains the index of each value of `x`
+in the unique output `y`. In other words:
+
+`y[idx[i]] = x[i] for i in [0, 1, ..., len(x) - 1]`
+
+For example:
+
+```prettyprint
+# tensor 'x' is [1, 1, 2, 4, 4, 4, 7, 8, 8]
+y, idx = unique(x)
+y ==> [1, 2, 4, 7, 8]
+idx ==> [0, 0, 1, 2, 2, 2, 3, 4, 4]
+```
+
+##### Args:
+
+
+* <b>x</b>: A `Tensor`. 1-D.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A tuple of `Tensor` objects (y, idx).
+
+* <b>y</b>: A `Tensor`. Has the same type as `x`. 1-D.
+* <b>idx</b>: A `Tensor` of type `int32`. 1-D.
+
+
+
+- - -
+
+### tf.edit_distance(hypothesis, truth, normalize=True, name='edit_distance') <div class="md-anchor" id="edit_distance">{#edit_distance}</div>
+
+Computes the Levenshtein distance between sequences.
+
+This operation takes variable-length sequences (`hypothesis` and `truth`),
+each provided as a `SparseTensor`, and computes the Levenshtein distance.
+You can normalize the edit distance by length of `truth` by setting
+`normalize` to true.
+
+For example, given the following input:
+
+```python
+# 'hypothesis' is a tensor of shape `[2, 1]` with variable-length values:
+# (0,0) = ["a"]
+# (1,0) = ["b"]
+hypothesis = tf.SparseTensor(
+ [[0, 0, 0],
+ [1, 0, 0]],
+ ["a", "b"]
+ (2, 1, 1))
+
+# 'truth' is a tensor of shape `[2, 2]` with variable-length values:
+# (0,0) = []
+# (0,1) = ["a"]
+# (1,0) = ["b", "c"]
+# (1,1) = ["a"]
+truth = tf.SparseTensor(
+ [[0, 1, 0],
+ [1, 0, 0],
+ [1, 0, 1],
+     [1, 1, 0]],
+ ["a", "b", "c", "a"],
+ (2, 2, 2))
+
+normalize = True
+```
+
+This operation would return the following:
+
+```python
+# 'output' is a tensor of shape `[2, 2]` with edit distances normalized
+# by 'truth' lengths.
+output ==> [[inf, 1.0], # (0,0): no truth, (0,1): no hypothesis
+ [0.5, 1.0]] # (1,0): addition, (1,1): no hypothesis
+```
+
+##### Args:
+
+
+* <b>hypothesis</b>: A `SparseTensor` containing hypothesis sequences.
+* <b>truth</b>: A `SparseTensor` containing truth sequences.
+* <b>normalize</b>: A `bool`. If `True`, normalizes the Levenshtein distance by
+  the length of `truth`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A dense `Tensor` with rank `R - 1`, where R is the rank of the
+ `SparseTensor` inputs `hypothesis` and `truth`.
+
+##### Raises:
+
+
+* <b>TypeError</b>: If either `hypothesis` or `truth` are not a `SparseTensor`.
+
+
+
+- - -
+
+### tf.invert_permutation(x, name=None) <div class="md-anchor" id="invert_permutation">{#invert_permutation}</div>
+
+Computes the inverse permutation of a tensor.
+
+This operation computes the inverse of an index permutation. It takes a 1-D
+integer tensor `x`, which represents the indices of a zero-based array, and
+swaps each value with its index position. In other words, for an output tensor
+`y` and an input tensor `x`, this operation computes the following:
+
+`y[x[i]] = i for i in [0, 1, ..., len(x) - 1]`
+
+The values must include 0. There can be no duplicate values or negative values.
+
+For example:
+
+```prettyprint
+# tensor `x` is [3, 4, 0, 2, 1]
+invert_permutation(x) ==> [2, 4, 3, 0, 1]
+```
+
+##### Args:
+
+
+* <b>x</b>: A `Tensor` of type `int32`. 1-D.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor` of type `int32`. 1-D.
+
+
diff --git a/tensorflow/g3doc/api_docs/python/nn.md b/tensorflow/g3doc/api_docs/python/nn.md
new file mode 100644
index 0000000000..91fab34255
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/nn.md
@@ -0,0 +1,1306 @@
+<!-- This file is machine generated: DO NOT EDIT! -->
+
+# Neural Network
+<!-- TOC-BEGIN This section is generated by neural network: DO NOT EDIT! -->
+## Contents
+* [Activation Functions](#AUTOGENERATED-activation-functions)
+ * [tf.nn.relu(features, name=None)](#relu)
+ * [tf.nn.relu6(features, name=None)](#relu6)
+ * [tf.nn.softplus(features, name=None)](#softplus)
+ * [tf.nn.dropout(x, keep_prob, noise_shape=None, seed=None, name=None)](#dropout)
+ * [tf.nn.bias_add(value, bias, name=None)](#bias_add)
+ * [tf.sigmoid(x, name=None)](#sigmoid)
+ * [tf.tanh(x, name=None)](#tanh)
+* [Convolution](#AUTOGENERATED-convolution)
+ * [tf.nn.conv2d(input, filter, strides, padding, use_cudnn_on_gpu=None, name=None)](#conv2d)
+ * [tf.nn.depthwise_conv2d(input, filter, strides, padding, name=None)](#depthwise_conv2d)
+ * [tf.nn.separable_conv2d(input, depthwise_filter, pointwise_filter, strides, padding, name=None)](#separable_conv2d)
+* [Pooling](#AUTOGENERATED-pooling)
+ * [tf.nn.avg_pool(value, ksize, strides, padding, name=None)](#avg_pool)
+ * [tf.nn.max_pool(value, ksize, strides, padding, name=None)](#max_pool)
+ * [tf.nn.max_pool_with_argmax(input, ksize, strides, padding, Targmax=None, name=None)](#max_pool_with_argmax)
+* [Normalization](#AUTOGENERATED-normalization)
+ * [tf.nn.l2_normalize(x, dim, epsilon=1e-12, name=None)](#l2_normalize)
+ * [tf.nn.local_response_normalization(input, depth_radius=None, bias=None, alpha=None, beta=None, name=None)](#local_response_normalization)
+ * [tf.nn.moments(x, axes, name=None)](#moments)
+* [Losses](#AUTOGENERATED-losses)
+ * [tf.nn.l2_loss(t, name=None)](#l2_loss)
+* [Classification](#AUTOGENERATED-classification)
+ * [tf.nn.sigmoid_cross_entropy_with_logits(logits, targets, name=None)](#sigmoid_cross_entropy_with_logits)
+ * [tf.nn.softmax(logits, name=None)](#softmax)
+ * [tf.nn.softmax_cross_entropy_with_logits(logits, labels, name=None)](#softmax_cross_entropy_with_logits)
+* [Embeddings](#AUTOGENERATED-embeddings)
+ * [tf.nn.embedding_lookup(params, ids, name=None)](#embedding_lookup)
+ * [tf.nn.embedding_lookup_sparse(params, sp_ids, sp_weights, name=None, combiner='mean')](#embedding_lookup_sparse)
+* [Evaluation](#AUTOGENERATED-evaluation)
+ * [tf.nn.top_k(input, k, name=None)](#top_k)
+ * [tf.nn.in_top_k(predictions, targets, k, name=None)](#in_top_k)
+* [Candidate Sampling](#AUTOGENERATED-candidate-sampling)
+ * [Sampled Loss Functions](#AUTOGENERATED-sampled-loss-functions)
+ * [tf.nn.nce_loss(weights, biases, inputs, labels, num_sampled, num_classes, num_true=1, sampled_values=None, remove_accidental_hits=False, name='nce_loss')](#nce_loss)
+ * [tf.nn.sampled_softmax_loss(weights, biases, inputs, labels, num_sampled, num_classes, num_true=1, sampled_values=None, remove_accidental_hits=True, name='sampled_softmax_loss')](#sampled_softmax_loss)
+ * [Candidate Samplers](#AUTOGENERATED-candidate-samplers)
+ * [tf.nn.uniform_candidate_sampler(true_classes, num_true, num_sampled, unique, range_max, seed=None, name=None)](#uniform_candidate_sampler)
+ * [tf.nn.log_uniform_candidate_sampler(true_classes, num_true, num_sampled, unique, range_max, seed=None, name=None)](#log_uniform_candidate_sampler)
+ * [tf.nn.learned_unigram_candidate_sampler(true_classes, num_true, num_sampled, unique, range_max, seed=None, name=None)](#learned_unigram_candidate_sampler)
+ * [tf.nn.fixed_unigram_candidate_sampler(true_classes, num_true, num_sampled, unique, range_max, vocab_file='', distortion=0.0, num_reserved_ids=0, num_shards=1, shard=0, unigrams=[], seed=None, name=None)](#fixed_unigram_candidate_sampler)
+ * [Miscellaneous candidate sampling utilities](#AUTOGENERATED-miscellaneous-candidate-sampling-utilities)
+ * [tf.nn.compute_accidental_hits(true_classes, sampled_candidates, num_true, seed=None, name=None)](#compute_accidental_hits)
+
+
+<!-- TOC-END This section was generated by neural network, THANKS FOR READING! -->
+
+## Activation Functions <div class="md-anchor" id="AUTOGENERATED-activation-functions">{#AUTOGENERATED-activation-functions}</div>
+
+The activation ops provide different types of nonlinearities for use in
+neural networks. These include smooth nonlinearities (`sigmoid`,
+`tanh`, and `softplus`), continuous but not everywhere differentiable
+functions (`relu`, `relu6`, and `relu_x`), and random regularization
+(`dropout`).
+
+All activation ops apply componentwise, and produce a tensor of the same
+shape as the input tensor.
+
+- - -
+
+### tf.nn.relu(features, name=None) <div class="md-anchor" id="relu">{#relu}</div>
+
+Computes rectified linear: `max(features, 0)`.
+
+##### Args:
+
+
+* <b>features</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `features`.
+
+
+- - -
+
+### tf.nn.relu6(features, name=None) <div class="md-anchor" id="relu6">{#relu6}</div>
+
+Computes Rectified Linear 6: `min(max(features, 0), 6)`.
+
+##### Args:
+
+
+* <b>features</b>: A `Tensor` with type `float`, `double`, `int32`, `int64`, `uint8`,
+ `int16`, or `int8`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor` with the same type as `features`.
+
+
+- - -
+
+### tf.nn.softplus(features, name=None) <div class="md-anchor" id="softplus">{#softplus}</div>
+
+Computes softplus: `log(exp(features) + 1)`.
+
+##### Args:
+
+
+* <b>features</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `features`.
+
+
+- - -
+
+### tf.nn.dropout(x, keep_prob, noise_shape=None, seed=None, name=None) <div class="md-anchor" id="dropout">{#dropout}</div>
+
+Computes dropout.
+
+With probability `keep_prob`, outputs the input element scaled up by
+`1 / keep_prob`; otherwise outputs `0`. The scaling is such that the expected
+sum is unchanged.
+
+By default, each element is kept or dropped independently. If `noise_shape`
+is specified, it must be
+[broadcastable](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
+to the shape of `x`, and only dimensions with `noise_shape[i] == x.shape[i]`
+will make independent decisions. For example, if `x.shape = [b, x, y, c]` and
+`noise_shape = [b, 1, 1, c]`, each batch and channel component will be
+kept independently and each row and column will be kept or not kept together.
+
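+For example (an illustrative sketch; which elements are dropped is random):
+
+```python
+x = tf.ones([2, 4])
+y = tf.nn.dropout(x, keep_prob=0.5)
+# Each element of y is independently either 0.0 (dropped) or
+# 2.0 (= 1 / 0.5), so the expected value of every element stays 1.0.
+```
+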
+##### Args:
+
+
+* <b>x</b>: A tensor.
+* <b>keep_prob</b>: Float probability that each element is kept.
+* <b>noise_shape</b>: Shape for randomly generated keep/drop flags.
+* <b>seed</b>: A Python integer. Used to create a random seed.
+ See [`set_random_seed`](constant_op.md#set_random_seed) for behavior.
+* <b>name</b>: A name for this operation (optional).
+
+##### Returns:
+
+ A Tensor of the same shape of `x`.
+
+##### Raises:
+
+
+* <b>ValueError</b>: If `keep_prob` is not in `(0, 1]`.
+
+
+- - -
+
+### tf.nn.bias_add(value, bias, name=None) <div class="md-anchor" id="bias_add">{#bias_add}</div>
+
+Adds `bias` to `value`.
+
+This is (mostly) a special case of `tf.add` where `bias` is restricted to 1-D.
+Broadcasting is supported, so `value` may have any number of dimensions.
+Unlike `tf.add`, the type of `bias` is allowed to differ from `value` in the
+case where both types are quantized.
+
+##### Args:
+
+
+* <b>value</b>: A `Tensor` with type `float`, `double`, `int64`, `int32`, `uint8`,
+ `int16`, `int8`, or `complex64`.
+* <b>bias</b>: A 1-D `Tensor` with size matching the last dimension of `value`.
+ Must be the same type as `value` unless `value` is a quantized type,
+ in which case a different quantized type may be used.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor` with the same type as `value`.
+
+
+- - -
+
+### tf.sigmoid(x, name=None) <div class="md-anchor" id="sigmoid">{#sigmoid}</div>
+
+Computes sigmoid of `x` element-wise.
+
+Specifically, `y = 1 / (1 + exp(-x))`.
+
+##### Args:
+
+
+* <b>x</b>: A Tensor with type `float`, `double`, `int32`, `complex64`, `int64`,
+ or `qint32`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+  A Tensor with the same type as `x` if `x.dtype != qint32`;
+  otherwise the return type is `quint8`.
+
+
+- - -
+
+### tf.tanh(x, name=None) <div class="md-anchor" id="tanh">{#tanh}</div>
+
+Computes hyperbolic tangent of `x` element-wise.
+
+##### Args:
+
+
+* <b>x</b>: A Tensor with type `float`, `double`, `int32`, `complex64`, `int64`,
+ or `qint32`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+  A Tensor with the same type as `x` if `x.dtype != qint32`;
+  otherwise the return type is `quint8`.
+
+
+
+## Convolution <div class="md-anchor" id="AUTOGENERATED-convolution">{#AUTOGENERATED-convolution}</div>
+
+The convolution ops sweep a 2-D filter over a batch of images, applying the
+filter to each window of each image of the appropriate size. The different
+ops trade off between generic vs. specific filters:
+
+* `conv2d`: Arbitrary filters that can mix channels together.
+* `depthwise_conv2d`: Filters that operate on each channel independently.
+* `separable_conv2d`: A depthwise spatial filter followed by a pointwise filter.
+
+Note that although these ops are called "convolution", they are strictly
+speaking "cross-correlation" since the filter is combined with an input window
+without reversing the filter. For details, see [the properties of
+cross-correlation](https://en.wikipedia.org/wiki/Cross-correlation#Properties).
+
+The filter is applied to image patches of the same size as the filter and
+strided according to the `strides` argument. `strides = [1, 1, 1, 1]` applies
+the filter to a patch at every offset, `strides = [1, 2, 2, 1]` applies the
+filter to every other image patch in each dimension, etc.
+
+Ignoring channels for the moment, the spatial semantics of the convolution ops
+are as follows. If the 4-D `input` has shape
+`[batch, in_height, in_width, ...]` and the 4-D `filter` has shape
+`[filter_height, filter_width, ...]`, then
+
+ output.shape = [batch,
+ (in_height - filter_height + 1) / strides[1],
+ (in_width - filter_width + 1) / strides[2],
+ ...]
+
+ output[b, i, j, :] =
+ sum_{di, dj} input[b, strides[1] * i + di, strides[2] * j + dj, ...] *
+ filter[di, dj, ...]
+
+Since `input` is 4-D, each `input[b, i, j, :]` is a vector. For `conv2d`, these
+vectors are multiplied by the `filter[di, dj, :, :]` matrices to produce new
+vectors. For `depthwise_conv2d`, each scalar component `input[b, i, j, k]`
+is multiplied by a vector `filter[di, dj, k]`, and all the vectors are
+concatenated.
+
+In the formula for `output.shape`, the output size depends on `padding`:
+
+* `padding = 'VALID'`: Only full-size windows are considered, so the division
+  above rounds up: `out_height = ceil((in_height - filter_height + 1) / strides[1])`.
+* `padding = 'SAME'`: The input is zero-padded so that partial windows are
+  included, giving `out_height = ceil(in_height / strides[1])` (and likewise
+  for width).
+
+- - -
+
+### tf.nn.conv2d(input, filter, strides, padding, use_cudnn_on_gpu=None, name=None) <div class="md-anchor" id="conv2d">{#conv2d}</div>
+
+Computes a 2-D convolution given 4-D `input` and `filter` tensors.
+
+Given an input tensor of shape `[batch, in_height, in_width, in_channels]`
+and a filter / kernel tensor of shape
+`[filter_height, filter_width, in_channels, out_channels]`, this op
+performs the following:
+
+1. Flattens the filter to a 2-D matrix with shape
+ `[filter_height * filter_width * in_channels, output_channels]`.
+2. Extracts image patches from the input tensor to form a *virtual*
+ tensor of shape `[batch, out_height, out_width,
+ filter_height * filter_width * in_channels]`.
+3. For each patch, right-multiplies the filter matrix and the image patch
+ vector.
+
+In detail,
+
+ output[b, i, j, k] =
+ sum_{di, dj, q} input[b, strides[1] * i + di, strides[2] * j + dj, q] *
+ filter[di, dj, q, k]
+
+Must have `strides[0] = strides[3] = 1`. For the most common case of the same
+horizontal and vertical strides, `strides = [1, stride, stride, 1]`.
+
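+For example (a minimal sketch with hypothetical shapes; the output values
+follow from the formula above):
+
+```python
+# One 5x5 single-channel image and one 3x3 filter of ones.
+images = tf.ones([1, 5, 5, 1])
+kernel = tf.ones([3, 3, 1, 1])
+out = tf.nn.conv2d(images, kernel, strides=[1, 1, 1, 1], padding='VALID')
+# out has shape [1, 3, 3, 1]; every entry is 9.0, the sum of a 3x3 window.
+```
+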
+##### Args:
+
+
+* <b>input</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`.
+* <b>filter</b>: A `Tensor`. Must have the same type as `input`.
+* <b>strides</b>: A list of `ints`.
+ 1-D of length 4. The stride of the sliding window for each dimension
+ of `input`.
+* <b>padding</b>: A `string` from: `"SAME", "VALID"`.
+ The type of padding algorithm to use.
+* <b>use_cudnn_on_gpu</b>: An optional `bool`. Defaults to `True`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `input`.
+
+
+- - -
+
+### tf.nn.depthwise_conv2d(input, filter, strides, padding, name=None) <div class="md-anchor" id="depthwise_conv2d">{#depthwise_conv2d}</div>
+
+Depthwise 2-D convolution.
+
+Given an input tensor of shape `[batch, in_height, in_width, in_channels]`
+and a filter tensor of shape
+`[filter_height, filter_width, in_channels, channel_multiplier]`
+containing `in_channels` convolutional filters of depth 1, `depthwise_conv2d`
+applies a different filter to each input channel (expanding from 1 channel
+to `channel_multiplier` channels for each), then concatenates the results
+together. The output has `in_channels * channel_multiplier` channels.
+
+In detail,
+
+ output[b, i, j, k * channel_multiplier + q] =
+ sum_{di, dj} input[b, strides[1] * i + di, strides[2] * j + dj, k] *
+ filter[di, dj, k, q]
+
+Must have `strides[0] = strides[3] = 1`. For the most common case of the
+same horizontal and vertical strides, `strides = [1, stride, stride, 1]`.
+
+##### Args:
+
+
+* <b>input</b>: 4-D with shape `[batch, in_height, in_width, in_channels]`.
+* <b>filter</b>: 4-D with shape
+ `[filter_height, filter_width, in_channels, channel_multiplier]`.
+* <b>strides</b>: 1-D of size 4. The stride of the sliding window for each
+ dimension of `input`.
+* <b>padding</b>: A string, either `'VALID'` or `'SAME'`. The padding algorithm.
+* <b>name</b>: A name for this operation (optional).
+
+##### Returns:
+
+ A 4-D `Tensor` of shape
+ `[batch, out_height, out_width, in_channels * channel_multiplier].`
+
+
+- - -
+
+### tf.nn.separable_conv2d(input, depthwise_filter, pointwise_filter, strides, padding, name=None) <div class="md-anchor" id="separable_conv2d">{#separable_conv2d}</div>
+
+2-D convolution with separable filters.
+
+Performs a depthwise convolution that acts separately on channels followed by
+a pointwise convolution that mixes channels. Note that this is separability
+between dimensions `[1, 2]` and `3`, not spatial separability between
+dimensions `1` and `2`.
+
+In detail,
+
+    output[b, i, j, k] = sum_{di, dj, q, r}
+ input[b, strides[1] * i + di, strides[2] * j + dj, q] *
+ depthwise_filter[di, dj, q, r] *
+ pointwise_filter[0, 0, q * channel_multiplier + r, k]
+
+`strides` controls the strides for the depthwise convolution only, since
+the pointwise convolution has implicit strides of `[1, 1, 1, 1]`. Must have
+`strides[0] = strides[3] = 1`. For the most common case of the same
+horizontal and vertical strides, `strides = [1, stride, stride, 1]`.
+
+##### Args:
+
+
+* <b>input</b>: 4-D `Tensor` with shape `[batch, in_height, in_width, in_channels]`.
+* <b>depthwise_filter</b>: 4-D `Tensor` with shape
+ `[filter_height, filter_width, in_channels, channel_multiplier]`.
+ Contains `in_channels` convolutional filters of depth 1.
+* <b>pointwise_filter</b>: 4-D `Tensor` with shape
+ `[1, 1, channel_multiplier * in_channels, out_channels]`. Pointwise
+ filter to mix channels after `depthwise_filter` has convolved spatially.
+* <b>strides</b>: 1-D of size 4. The strides for the depthwise convolution for
+ each dimension of `input`.
+* <b>padding</b>: A string, either `'VALID'` or `'SAME'`. The padding algorithm.
+* <b>name</b>: A name for this operation (optional).
+
+##### Returns:
+
+ A 4-D `Tensor` of shape `[batch, out_height, out_width, out_channels]`.
+
+
+
+## Pooling <div class="md-anchor" id="AUTOGENERATED-pooling">{#AUTOGENERATED-pooling}</div>
+
+The pooling ops sweep a rectangular window over the input tensor, computing a
+reduction operation for each window (average, max, or max with argmax). Each
+pooling op uses rectangular windows of size `ksize` separated by offset
+`strides`. For example, if `strides` is all ones, every window is used; if
+`strides` is all twos, every other window is used in each dimension; and so on.
+
+In detail, the output is
+
+ output[i] = reduce(value[strides * i:strides * i + ksize])
+
+for each tuple of indices `i`. The output shape is
+
+ output.shape = (value.shape - ksize + 1) / strides
+
+where the output size depends on `padding`:
+
+* `padding = 'VALID'`: Only full-size windows are considered, so the division
+  above rounds up.
+* `padding = 'SAME'`: The input is padded so that partial windows are included,
+  giving `output.shape = ceil(value.shape / strides)`.
+
+- - -
+
+### tf.nn.avg_pool(value, ksize, strides, padding, name=None) <div class="md-anchor" id="avg_pool">{#avg_pool}</div>
+
+Performs the average pooling on the input.
+
+Each entry in `output` is the mean of the corresponding size `ksize`
+window in `value`.
+
+##### Args:
+
+
+* <b>value</b>: A 4-D `Tensor` of shape `[batch, height, width, channels]` and type
+ `float32`, `float64`, `qint8`, `quint8`, or `qint32`.
+* <b>ksize</b>: A list of ints that has length >= 4.
+ The size of the window for each dimension of the input tensor.
+* <b>strides</b>: A list of ints that has length >= 4.
+ The stride of the sliding window for each dimension of the
+ input tensor.
+* <b>padding</b>: A string, either `'VALID'` or `'SAME'`. The padding algorithm.
+* <b>name</b>: Optional name for the operation.
+
+##### Returns:
+
+ A `Tensor` with the same type as `value`. The average pooled output tensor.
+
+
+- - -
+
+### tf.nn.max_pool(value, ksize, strides, padding, name=None) <div class="md-anchor" id="max_pool">{#max_pool}</div>
+
+Performs the max pooling on the input.
+
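+For example (hand-computed):
+
+```python
+# A single 2x2 single-channel image.
+x = tf.reshape(tf.constant([1., 2., 3., 4.]), [1, 2, 2, 1])
+tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
+# ==> a [1, 1, 1, 1] tensor containing 4.0
+```
+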
+##### Args:
+
+
+* <b>value</b>: A 4-D `Tensor` with shape `[batch, height, width, channels]` and
+ type `float32`, `float64`, `qint8`, `quint8`, `qint32`.
+* <b>ksize</b>: A list of ints that has length >= 4. The size of the window for
+ each dimension of the input tensor.
+* <b>strides</b>: A list of ints that has length >= 4. The stride of the sliding
+ window for each dimension of the input tensor.
+* <b>padding</b>: A string, either `'VALID'` or `'SAME'`. The padding algorithm.
+* <b>name</b>: Optional name for the operation.
+
+##### Returns:
+
+ A `Tensor` with the same type as `value`. The max pooled output tensor.
+
+
+- - -
+
+### tf.nn.max_pool_with_argmax(input, ksize, strides, padding, Targmax=None, name=None) <div class="md-anchor" id="max_pool_with_argmax">{#max_pool_with_argmax}</div>
+
+Performs max pooling on the input and outputs both max values and indices.
+
+The indices in `argmax` are flattened, so that a maximum value at position
+`[b, y, x, c]` becomes flattened index
+`((b * height + y) * width + x) * channels + c`.
+
+##### Args:
+
+
+* <b>input</b>: A `Tensor` of type `float32`.
+ 4-D with shape `[batch, height, width, channels]`. Input to pool over.
+* <b>ksize</b>: A list of `ints` that has length `>= 4`.
+ The size of the window for each dimension of the input tensor.
+* <b>strides</b>: A list of `ints` that has length `>= 4`.
+ The stride of the sliding window for each dimension of the
+ input tensor.
+* <b>padding</b>: A `string` from: `"SAME", "VALID"`.
+ The type of padding algorithm to use.
+* <b>Targmax</b>: An optional `tf.DType` from: `tf.int32, tf.int64`. Defaults to `tf.int64`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A tuple of `Tensor` objects (output, argmax).
+
+* <b>output</b>: A `Tensor` of type `float32`. The max pooled output tensor.
+* <b>argmax</b>: A `Tensor` of type `Targmax`. 4-D. The flattened indices of the max values chosen for each output.
+
+
+
+## Normalization <div class="md-anchor" id="AUTOGENERATED-normalization">{#AUTOGENERATED-normalization}</div>
+
+Normalization is useful to prevent neurons from saturating when inputs may
+have varying scale, and to aid generalization.
+
+- - -
+
+### tf.nn.l2_normalize(x, dim, epsilon=1e-12, name=None) <div class="md-anchor" id="l2_normalize">{#l2_normalize}</div>
+
+Normalizes along dimension `dim` using an L2 norm.
+
+For a 1-D tensor with `dim = 0`, computes
+
+ output = x / sqrt(max(sum(x**2), epsilon))
+
+For `x` with more dimensions, independently normalizes each 1-D slice along
+dimension `dim`.
+
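+For example (hand-computed):
+
+```python
+x = tf.constant([3., 4.])
+tf.nn.l2_normalize(x, 0) ==> [0.6, 0.8]  # x / sqrt(3**2 + 4**2)
+```
+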
+##### Args:
+
+
+* <b>x</b>: A `Tensor`.
+* <b>dim</b>: Dimension along which to normalize.
+* <b>epsilon</b>: A lower bound value for the norm. Will use `sqrt(epsilon)` as the
+ divisor if `norm < sqrt(epsilon)`.
+* <b>name</b>: A name for this operation (optional).
+
+##### Returns:
+
+ A `Tensor` with the same shape as `x`.
+
+
+- - -
+
+### tf.nn.local_response_normalization(input, depth_radius=None, bias=None, alpha=None, beta=None, name=None) <div class="md-anchor" id="local_response_normalization">{#local_response_normalization}</div>
+
+Local Response Normalization.
+
+The 4-D `input` tensor is treated as a 3-D array of 1-D vectors (along the last
+dimension), and each vector is normalized independently. Within a given vector,
+each component is divided by the weighted, squared sum of inputs within
+`depth_radius`. In detail,
+
+ sqr_sum[a, b, c, d] =
+ sum(input[a, b, c, d - depth_radius : d + depth_radius + 1] ** 2)
+ output = input / (bias + alpha * sqr_sum ** beta)
+
+For details, see [Krizhevsky et al., ImageNet classification with deep
+convolutional neural networks (NIPS 2012)](http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks).
+
+##### Args:
+
+
+* <b>input</b>: A `Tensor` of type `float32`. 4-D.
+* <b>depth_radius</b>: An optional `int`. Defaults to `5`.
+ 0-D. Half-width of the 1-D normalization window.
+* <b>bias</b>: An optional `float`. Defaults to `1`.
+ An offset (usually positive to avoid dividing by 0).
+* <b>alpha</b>: An optional `float`. Defaults to `1`.
+ A scale factor, usually positive.
+* <b>beta</b>: An optional `float`. Defaults to `0.5`. An exponent.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor` of type `float32`.
+
+
+- - -
+
+### tf.nn.moments(x, axes, name=None) <div class="md-anchor" id="moments">{#moments}</div>
+
+Calculates the mean and variance of `x`.
+
+The mean and variance are calculated by aggregating the contents of `x`
+across `axes`. If `x` is 1-D and `axes = [0]` this is just the mean
+and variance of a vector.
+
+For so-called "global normalization" needed for convolutional filters pass
+`axes=[0, 1, 2]` (batch, height, width). For batch normalization pass
+`axes=[0]` (batch).
+
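+For example (a minimal sketch; values computed by hand):
+
+```python
+x = tf.constant([1., 2., 3., 4.])
+mean, variance = tf.nn.moments(x, axes=[0])
+# mean ==> 2.5, variance ==> 1.25 (the population variance)
+```
+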
+##### Args:
+
+
+* <b>x</b>: A `Tensor`.
+* <b>axes</b>: array of ints. Axes along which to compute mean and
+ variance.
+* <b>name</b>: Name used to scope the operations that compute the moments.
+
+##### Returns:
+
+ Two `Tensors`: `mean` and `variance`.
+
+
+
+## Losses <div class="md-anchor" id="AUTOGENERATED-losses">{#AUTOGENERATED-losses}</div>
+
+The loss ops measure error between two tensors, or between a tensor and zero.
+These can be used for measuring accuracy of a network in a regression task
+or for regularization purposes (weight decay).
+
+- - -
+
+### tf.nn.l2_loss(t, name=None) <div class="md-anchor" id="l2_loss">{#l2_loss}</div>
+
+L2 Loss.
+
+Computes half the L2 norm of a tensor without the `sqrt`:
+
+ output = sum(t ** 2) / 2
+
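+For example (hand-computed):
+
+```python
+t = tf.constant([1., 2., 3.])
+tf.nn.l2_loss(t) ==> 7.0  # (1 + 4 + 9) / 2
+```
+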
+##### Args:
+
+
+* <b>t</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `qint8`, `quint8`, `qint32`.
+ Typically 2-D, but may have any dimensions.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `t`. 0-D.
+
+
+
+## Classification <div class="md-anchor" id="AUTOGENERATED-classification">{#AUTOGENERATED-classification}</div>
+
+TensorFlow provides several operations that help you perform classification.
+
+- - -
+
+### tf.nn.sigmoid_cross_entropy_with_logits(logits, targets, name=None) <div class="md-anchor" id="sigmoid_cross_entropy_with_logits">{#sigmoid_cross_entropy_with_logits}</div>
+
+Computes sigmoid cross entropy given `logits`.
+
+Measures the probability error in discrete classification tasks in which each
+class is independent and not mutually exclusive. For instance, one could
+perform multilabel classification where a picture can contain both an elephant
+and a dog at the same time.
+
+For brevity, let `x = logits`, `z = targets`. The logistic loss is
+
+ x - x * z + log(1 + exp(-x))
+
+To ensure stability and avoid overflow, the implementation uses
+
+ max(x, 0) - x * z + log(1 + exp(-abs(x)))
+
+`logits` and `targets` must have the same type and shape.
+
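+For example (hand-computed from the formula above):
+
+```python
+logits = tf.constant([0.0])
+targets = tf.constant([1.0])
+tf.nn.sigmoid_cross_entropy_with_logits(logits, targets)
+  ==> [0.6931472]  # x - x * z + log(1 + exp(-x)) = log(2) at x=0, z=1
+```
+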
+##### Args:
+
+
+* <b>logits</b>: A `Tensor` of type `float32` or `float64`.
+* <b>targets</b>: A `Tensor` of the same type and shape as `logits`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor` of the same shape as `logits` with the componentwise
+ logistic losses.
+
+
+- - -
+
+### tf.nn.softmax(logits, name=None) <div class="md-anchor" id="softmax">{#softmax}</div>
+
+Computes softmax activations.
+
+For each batch `i` and class `j` we have
+
+ softmax[i, j] = exp(logits[i, j]) / sum(exp(logits[i]))
+
+##### Args:
+
+
+* <b>logits</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`.
+ 2-D with shape `[batch_size, num_classes]`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `logits`. Same shape as `logits`.
+
+
+- - -
+
+### tf.nn.softmax_cross_entropy_with_logits(logits, labels, name=None) <div class="md-anchor" id="softmax_cross_entropy_with_logits">{#softmax_cross_entropy_with_logits}</div>
+
+Computes softmax cross entropy between `logits` and `labels`.
+
+Measures the probability error in discrete classification tasks in which the
+classes are mutually exclusive (each entry is in exactly one class). For
+example, each CIFAR-10 image is labeled with one and only one label: an image
+can be a dog or a truck, but not both.
+
+**WARNING:** This op expects unscaled logits, since it performs a `softmax`
+on `logits` internally for efficiency. Do not call this op with the
+output of `softmax`, as it will produce incorrect results.
+
+`logits` and `labels` must have the same shape `[batch_size, num_classes]`
+and the same dtype (either `float32` or `float64`).
+
+##### Args:
+
+
+* <b>logits</b>: Unscaled log probabilities.
+* <b>labels</b>: Each row `labels[i]` must be a valid probability distribution.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A 1-D `Tensor` of length `batch_size` of the same type as `logits` with the
+ softmax cross entropy loss.
+
+
+
+## Embeddings <div class="md-anchor" id="AUTOGENERATED-embeddings">{#AUTOGENERATED-embeddings}</div>
+
+TensorFlow provides several operations that help you compute embeddings.
+
+- - -
+
+### tf.nn.embedding_lookup(params, ids, name=None) <div class="md-anchor" id="embedding_lookup">{#embedding_lookup}</div>
+
+Returns a tensor of embedding values by looking up `ids` in `params`.
+
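+For example (a minimal sketch with a single, unsharded params tensor):
+
+```python
+params = tf.constant([[0., 0.], [1., 1.], [2., 2.]])
+ids = tf.constant([2, 0])
+tf.nn.embedding_lookup(params, ids) ==> [[2., 2.], [0., 0.]]
+```
+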
+##### Args:
+
+
+* <b>params</b>: List of tensors of the same shape. A single tensor is
+ treated as a singleton list.
+* <b>ids</b>: Tensor of integers containing the ids to be looked up in
+ 'params'. Let P be len(params). If P > 1, then the ids are
+ partitioned by id % P, and we do separate lookups in params[p]
+ for 0 <= p < P, and then stitch the results back together into
+ a single result tensor.
+* <b>name</b>: Optional name for the op.
+
+##### Returns:
+
+  A tensor of shape `ids.shape + params[0].shape[1:]` containing the
+  values `params[i % P][i // P]` for each `i` in `ids`.
+
+##### Raises:
+
+
+* <b>ValueError</b>: if some parameters are invalid.
+
+
+- - -
+
+### tf.nn.embedding_lookup_sparse(params, sp_ids, sp_weights, name=None, combiner='mean') <div class="md-anchor" id="embedding_lookup_sparse">{#embedding_lookup_sparse}</div>
+
+Computes embeddings for the given ids and weights.
+
+This op assumes that there is at least one id for each row in the dense tensor
+represented by sp_ids (i.e. there are no rows with empty features), and that
+all the indices of sp_ids are in canonical row-major order.
+
+It also assumes that all id values lie in the range [0, p0), where p0
+is the sum of the size of params along dimension 0.
+
+##### Args:
+
+
+* <b>params</b>: A single tensor representing the complete embedding tensor,
+ or a list of P tensors all of same shape except for the first dimension,
+ representing sharded embedding tensors. In the latter case, the ids are
+ partitioned by id % P, and we do separate lookups in params[p] for
+ 0 <= p < P, and then stitch the results back together into a single
+ result tensor. The first dimension is allowed to vary as the vocab
+ size is not necessarily a multiple of P.
+* <b>sp_ids</b>: N x M SparseTensor of int64 ids (typically from FeatureValueToId),
+ where N is typically batch size and M is arbitrary.
+* <b>sp_weights</b>: either a SparseTensor of float / double weights, or None to
+ indicate all weights should be taken to be 1. If specified, sp_weights
+ must have exactly the same shape and indices as sp_ids.
+* <b>name</b>: Optional name for the op.
+* <b>combiner</b>: A string specifying the reduction op. Currently "mean" and "sum"
+ are supported.
+ "sum" computes the weighted sum of the embedding results for each row.
+ "mean" is the weighted sum divided by the total weight.
+
+##### Returns:
+
+ A dense tensor representing the combined embeddings for the
+ sparse ids. For each row in the dense tensor represented by sp_ids, the op
+ looks up the embeddings for all ids in that row, multiplies them by the
+ corresponding weight, and combines these embeddings as specified.
+
+ In other words, if
+ shape(combined params) = [p0, p1, ..., pm]
+ and
+ shape(sp_ids) = shape(sp_weights) = [d0, d1, ..., dn]
+ then
+ shape(output) = [d0, d1, ..., dn-1, p1, ..., pm].
+
+ For instance, if params is a 10x20 matrix, and sp_ids / sp_weights are
+
+ [0, 0]: id 1, weight 2.0
+ [0, 1]: id 3, weight 0.5
+ [1, 0]: id 0, weight 1.0
+ [2, 3]: id 1, weight 3.0
+
+ with combiner="mean", then the output will be a 3x20 matrix where
+ output[0, :] = (params[1, :] * 2.0 + params[3, :] * 0.5) / (2.0 + 0.5)
+ output[1, :] = params[0, :] * 1.0
+ output[2, :] = params[1, :] * 3.0
+
+##### Raises:
+
+
+* <b>TypeError</b>: If sp_ids is not a SparseTensor, or if sp_weights is neither
+ None nor SparseTensor.
+* <b>ValueError</b>: If combiner is not one of {"mean", "sum"}.
+
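+The following minimal sketch uses made-up values: a 3 x 2 embedding table,
+two rows of ids, and uniform weights via `sp_weights=None`:
+
+```python
+import tensorflow as tf
+
+params = tf.constant([[1.0, 2.0],
+                      [3.0, 4.0],
+                      [5.0, 6.0]])
+# Row 0 holds ids 0 and 2; row 1 holds id 1.
+sp_ids = tf.SparseTensor(indices=tf.constant([[0, 0], [0, 1], [1, 0]],
+                                             dtype=tf.int64),
+                         values=tf.constant([0, 2, 1], dtype=tf.int64),
+                         shape=[2, 2])
+
+# With sp_weights=None every id gets weight 1, so "mean" averages rows.
+combined = tf.nn.embedding_lookup_sparse(params, sp_ids, None,
+                                         combiner="mean")
+```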
+
+
+## Evaluation <div class="md-anchor" id="AUTOGENERATED-evaluation">{#AUTOGENERATED-evaluation}</div>
+
+The evaluation ops are useful for measuring the performance of a network.
+Since they are nondifferentiable, they are typically used at evaluation time.
+
+- - -
+
+### tf.nn.top_k(input, k, name=None) <div class="md-anchor" id="top_k">{#top_k}</div>
+
+Returns the values and indices of the k largest elements for each row.
+
+\\(values_{i, j}\\) represents the j-th largest element in \\(input_i\\).
+
+\\(indices_{i, j}\\) gives the column index of the corresponding element,
+such that \\(input_{i, indices_{i, j}} = values_{i, j}\\). If two
+elements are equal, the lower-index element appears first.
+
+##### Args:
+
+
+* <b>input</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`.
+ A batch_size x classes tensor
+* <b>k</b>: An `int` that is `>= 1`.
+ Number of top elements to look for within each row
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A tuple of `Tensor` objects (values, indices).
+
+* <b>values</b>: A `Tensor`. Has the same type as `input`. A batch_size x k tensor with the k largest elements for each row,
+ sorted in descending order
+* <b>indices</b>: A `Tensor` of type `int32`. A batch_size x k tensor with the index of each value within each row
+
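+For example, with `k=2` (values made up for illustration):
+
+```python
+import tensorflow as tf
+
+scores = tf.constant([[1.0, 5.0, 3.0, 2.0],
+                      [4.0, 1.0, 9.0, 7.0]])
+values, indices = tf.nn.top_k(scores, 2)
+
+with tf.Session() as sess:
+  v, i = sess.run([values, indices])
+  # v == [[5.0, 3.0], [9.0, 7.0]];  i == [[1, 2], [2, 3]]
+```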
+
+- - -
+
+### tf.nn.in_top_k(predictions, targets, k, name=None) <div class="md-anchor" id="in_top_k">{#in_top_k}</div>
+
+Says whether the targets are in the top K predictions.
+
+This outputs a `batch_size` bool array: entry `out[i]` is `True` if the
+prediction for the target class is among the top `k` predictions among
+all predictions for example `i`. Note that the behavior of `InTopK` differs
+from the `TopK` op in its handling of ties; if multiple classes have the
+same prediction value and straddle the top-`k` boundary, all of those
+classes are considered to be in the top `k`.
+
+More formally, let
+
+ \\(predictions_i\\) be the predictions for all classes for example i,
+ \\(targets_i\\) be the target class for example i,
+ \\(out_i\\) be the output for example i,
+
+$$out_i = predictions_{i, targets_i} \in TopKIncludingTies(predictions_i)$$
+
+##### Args:
+
+
+* <b>predictions</b>: A `Tensor` of type `float32`. A batch_size x classes tensor
+* <b>targets</b>: A `Tensor` of type `int32`. A batch_size vector of class ids
+* <b>k</b>: An `int`. Number of top elements to look at for computing precision
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor` of type `bool`. Computed Precision at k as a bool Tensor
+
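+A small sketch illustrating the tie behavior described above (values are
+made up):
+
+```python
+import tensorflow as tf
+
+predictions = tf.constant([[0.1, 0.8, 0.1],
+                           [0.3, 0.3, 0.4]])
+targets = tf.constant([1, 0])
+
+correct = tf.nn.in_top_k(predictions, targets, 2)
+
+with tf.Session() as sess:
+  # [True, True]: class 1 is the top prediction for example 0, and the
+  # tied 0.3 values straddling the top-2 boundary both count for example 1.
+  print sess.run(correct)
+```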
+
+
+## Candidate Sampling <div class="md-anchor" id="AUTOGENERATED-candidate-sampling">{#AUTOGENERATED-candidate-sampling}</div>
+
+Do you want to train a multiclass or multilabel model with thousands
+or millions of output classes (for example, a language model with a
+large vocabulary)? Training with a full Softmax is slow in this case,
+since all of the classes are evaluated for every training example.
+Candidate Sampling training algorithms can speed up your step times by
+only considering a small randomly-chosen subset of contrastive classes
+(called candidates) for each batch of training examples.
+
+See our [Candidate Sampling Algorithms Reference](http://www.tensorflow.org/extras/candidate_sampling.pdf).
+
+### Sampled Loss Functions <div class="md-anchor" id="AUTOGENERATED-sampled-loss-functions">{#AUTOGENERATED-sampled-loss-functions}</div>
+
+TensorFlow provides the following sampled loss functions for faster training.
+
+- - -
+
+### tf.nn.nce_loss(weights, biases, inputs, labels, num_sampled, num_classes, num_true=1, sampled_values=None, remove_accidental_hits=False, name='nce_loss') <div class="md-anchor" id="nce_loss">{#nce_loss}</div>
+
+Computes and returns the noise-contrastive estimation training loss.
+
+See [Noise-contrastive estimation: A new estimation principle for
+unnormalized statistical models](http://www.jmlr.org/proceedings/papers/v9/gutmann10a/gutmann10a.pdf).
+Also see our [Candidate Sampling Algorithms Reference](http://www.tensorflow.org/extras/candidate_sampling.pdf).
+
+Note: In the case where num_true > 1, we assign to each target class
+the target probability 1 / num_true so that the target probabilities
+sum to 1 per-example.
+
+Note: It would be useful to allow a variable number of target classes per
+example. We hope to provide this functionality in a future release.
+For now, if you have a variable number of target classes, you can pad them
+out to a constant number by either repeating them or by padding
+with an otherwise unused class.
+
+##### Args:
+
+
+* <b>weights</b>: A `Tensor` of shape [num_classes, dim]. The class embeddings.
+* <b>biases</b>: A `Tensor` of shape [num_classes]. The class biases.
+* <b>inputs</b>: A `Tensor` of shape [batch_size, dim]. The forward
+ activations of the input network.
+* <b>labels</b>: A `Tensor` of type `int64` and shape `[batch_size,
+ num_true]`. The target classes.
+* <b>num_sampled</b>: An `int`. The number of classes to randomly sample per batch.
+* <b>num_classes</b>: An `int`. The number of possible classes.
+* <b>num_true</b>: An `int`. The number of target classes per training example.
+* <b>sampled_values</b>: a tuple of `(sampled_candidates, true_expected_count,
+ sampled_expected_count)` returned by a *_candidate_sampler function.
+ (if None, we default to LogUniformCandidateSampler)
+* <b>remove_accidental_hits</b>: A `bool`. Whether to remove "accidental hits"
+ where a sampled class equals one of the target classes. If set to
+ `True`, this is a "Sampled Logistic" loss instead of NCE, and we are
+ learning to generate log-odds instead of log probabilities. See
+  our [Candidate Sampling Algorithms Reference](http://www.tensorflow.org/extras/candidate_sampling.pdf).
+ Default is False.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A batch_size 1-D tensor of per-example NCE losses.
+
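+A minimal training-graph sketch; the sizes below are arbitrary and the
+placeholders stand in for a real input pipeline:
+
+```python
+import tensorflow as tf
+
+num_classes, dim, batch_size = 1000, 64, 32
+
+weights = tf.Variable(tf.truncated_normal([num_classes, dim]))
+biases = tf.Variable(tf.zeros([num_classes]))
+inputs = tf.placeholder(tf.float32, shape=[batch_size, dim])
+labels = tf.placeholder(tf.int64, shape=[batch_size, 1])  # num_true = 1
+
+# Mean per-example NCE loss over 10 sampled noise classes per batch.
+loss = tf.reduce_mean(
+    tf.nn.nce_loss(weights, biases, inputs, labels,
+                   num_sampled=10, num_classes=num_classes))
+```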
+
+- - -
+
+### tf.nn.sampled_softmax_loss(weights, biases, inputs, labels, num_sampled, num_classes, num_true=1, sampled_values=None, remove_accidental_hits=True, name='sampled_softmax_loss') <div class="md-anchor" id="sampled_softmax_loss">{#sampled_softmax_loss}</div>
+
+Computes and returns the sampled softmax training loss.
+
+This is a faster way to train a softmax classifier over a huge number of
+classes.
+
+This operation is for training only. It is generally an underestimate of
+the full softmax loss.
+
+At inference time, you can compute full softmax probabilities with the
+expression `tf.nn.softmax(tf.matmul(inputs, tf.transpose(weights)) + biases)`.
+
+See our [Candidate Sampling Algorithms Reference](http://www.tensorflow.org/extras/candidate_sampling.pdf).
+
+Also see Section 3 of http://arxiv.org/abs/1412.2007 for the math.
+
+##### Args:
+
+
+* <b>weights</b>: A `Tensor` of shape [num_classes, dim]. The class embeddings.
+* <b>biases</b>: A `Tensor` of shape [num_classes]. The class biases.
+* <b>inputs</b>: A `Tensor` of shape [batch_size, dim]. The forward
+ activations of the input network.
+* <b>labels</b>: A `Tensor` of type `int64` and shape `[batch_size,
+ num_true]`. The target classes. Note that this format differs from
+ the `labels` argument of `nn.softmax_cross_entropy_with_logits`.
+* <b>num_sampled</b>: An `int`. The number of classes to randomly sample per batch.
+* <b>num_classes</b>: An `int`. The number of possible classes.
+* <b>num_true</b>: An `int`. The number of target classes per training example.
+* <b>sampled_values</b>: a tuple of `(sampled_candidates, true_expected_count,
+ sampled_expected_count)` returned by a *_candidate_sampler function.
+ (if None, we default to LogUniformCandidateSampler)
+* <b>remove_accidental_hits</b>: A `bool`. Whether to remove "accidental hits"
+ where a sampled class equals one of the target classes. Default is
+ True.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A batch_size 1-D tensor of per-example sampled softmax losses.
+
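+A sketch of the train/inference split described above, with arbitrary
+sizes and placeholder inputs:
+
+```python
+import tensorflow as tf
+
+num_classes, dim, batch_size = 1000, 64, 32
+
+weights = tf.Variable(tf.truncated_normal([num_classes, dim]))
+biases = tf.Variable(tf.zeros([num_classes]))
+inputs = tf.placeholder(tf.float32, shape=[batch_size, dim])
+labels = tf.placeholder(tf.int64, shape=[batch_size, 1])
+
+# Training: sampled softmax over 10 randomly drawn classes per batch.
+train_loss = tf.reduce_mean(
+    tf.nn.sampled_softmax_loss(weights, biases, inputs, labels,
+                               num_sampled=10, num_classes=num_classes))
+
+# Inference: full softmax. `weights` is [num_classes, dim], hence the
+# transpose before the matmul.
+probs = tf.nn.softmax(tf.matmul(inputs, weights, transpose_b=True) + biases)
+```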
+
+
+### Candidate Samplers <div class="md-anchor" id="AUTOGENERATED-candidate-samplers">{#AUTOGENERATED-candidate-samplers}</div>
+
+TensorFlow provides the following samplers for randomly sampling candidate
+classes when using one of the sampled loss functions above.
+
+- - -
+
+### tf.nn.uniform_candidate_sampler(true_classes, num_true, num_sampled, unique, range_max, seed=None, name=None) <div class="md-anchor" id="uniform_candidate_sampler">{#uniform_candidate_sampler}</div>
+
+Samples a set of classes using a uniform base distribution.
+
+This operation randomly samples a tensor of sampled classes
+(`sampled_candidates`) from the range of integers `[0, range_max)`.
+
+The elements of `sampled_candidates` are drawn without replacement
+(if `unique=True`) or with replacement (if `unique=False`) from
+the base distribution.
+
+The base distribution for this operation is the uniform distribution
+over the range of integers `[0, range_max)`.
+
+In addition, this operation returns tensors `true_expected_count`
+and `sampled_expected_count` representing the number of times each
+of the target classes (`true_classes`) and the sampled
+classes (`sampled_candidates`) is expected to occur in an average
+tensor of sampled classes. These values correspond to `Q(y|x)`
+defined in [this
+document](http://www.tensorflow.org/extras/candidate_sampling.pdf).
+If `unique=True`, then these are post-rejection probabilities and we
+compute them approximately.
+
+##### Args:
+
+
+* <b>true_classes</b>: A `Tensor` of type `int64` and shape `[batch_size,
+ num_true]`. The target classes.
+* <b>num_true</b>: An `int`. The number of target classes per training example.
+* <b>num_sampled</b>: An `int`. The number of classes to randomly sample per batch.
+* <b>unique</b>: A `bool`. Determines whether all sampled classes in a batch are
+ unique.
+* <b>range_max</b>: An `int`. The number of possible classes.
+* <b>seed</b>: An `int`. An operation-specific seed. Default is 0.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+
+* <b>sampled_candidates</b>: A tensor of type `int64` and shape `[num_sampled]`.
+ The sampled classes.
+* <b>true_expected_count</b>: A tensor of type `float`. Same shape as
+ `true_classes`. The expected counts under the sampling distribution
+ of each of `true_classes`.
+* <b>sampled_expected_count</b>: A tensor of type `float`. Same shape as
+ `sampled_candidates`. The expected counts under the sampling distribution
+ of each of `sampled_candidates`.
+
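+For example (the sizes are arbitrary):
+
+```python
+import tensorflow as tf
+
+# A batch of 3 examples, one target class each (num_true = 1).
+true_classes = tf.constant([[1], [7], [3]], dtype=tf.int64)
+
+sampled, true_expected, sampled_expected = tf.nn.uniform_candidate_sampler(
+    true_classes, num_true=1, num_sampled=5, unique=True, range_max=10)
+```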
+
+- - -
+
+### tf.nn.log_uniform_candidate_sampler(true_classes, num_true, num_sampled, unique, range_max, seed=None, name=None) <div class="md-anchor" id="log_uniform_candidate_sampler">{#log_uniform_candidate_sampler}</div>
+
+Samples a set of classes using a log-uniform (Zipfian) base distribution.
+
+This operation randomly samples a tensor of sampled classes
+(`sampled_candidates`) from the range of integers `[0, range_max)`.
+
+The elements of `sampled_candidates` are drawn without replacement
+(if `unique=True`) or with replacement (if `unique=False`) from
+the base distribution.
+
+The base distribution for this operation is an approximately log-uniform
+or Zipfian distribution:
+
+`P(class) = (log(class + 2) - log(class + 1)) / log(range_max + 1)`
+
+This sampler is useful when the target classes approximately follow such
+a distribution - for example, if the classes represent words in a lexicon
+sorted in decreasing order of frequency. If your classes are not ordered by
+decreasing frequency, do not use this op.
+
+In addition, this operation returns tensors `true_expected_count`
+and `sampled_expected_count` representing the number of times each
+of the target classes (`true_classes`) and the sampled
+classes (`sampled_candidates`) is expected to occur in an average
+tensor of sampled classes. These values correspond to `Q(y|x)`
+defined in [this
+document](http://www.tensorflow.org/extras/candidate_sampling.pdf).
+If `unique=True`, then these are post-rejection probabilities and we
+compute them approximately.
+
+##### Args:
+
+
+* <b>true_classes</b>: A `Tensor` of type `int64` and shape `[batch_size,
+ num_true]`. The target classes.
+* <b>num_true</b>: An `int`. The number of target classes per training example.
+* <b>num_sampled</b>: An `int`. The number of classes to randomly sample per batch.
+* <b>unique</b>: A `bool`. Determines whether all sampled classes in a batch are
+ unique.
+* <b>range_max</b>: An `int`. The number of possible classes.
+* <b>seed</b>: An `int`. An operation-specific seed. Default is 0.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+
+* <b>sampled_candidates</b>: A tensor of type `int64` and shape `[num_sampled]`.
+ The sampled classes.
+* <b>true_expected_count</b>: A tensor of type `float`. Same shape as
+ `true_classes`. The expected counts under the sampling distribution
+ of each of `true_classes`.
+* <b>sampled_expected_count</b>: A tensor of type `float`. Same shape as
+ `sampled_candidates`. The expected counts under the sampling distribution
+ of each of `sampled_candidates`.
+
+
+- - -
+
+### tf.nn.learned_unigram_candidate_sampler(true_classes, num_true, num_sampled, unique, range_max, seed=None, name=None) <div class="md-anchor" id="learned_unigram_candidate_sampler">{#learned_unigram_candidate_sampler}</div>
+
+Samples a set of classes from a distribution learned during training.
+
+This operation randomly samples a tensor of sampled classes
+(`sampled_candidates`) from the range of integers `[0, range_max)`.
+
+The elements of `sampled_candidates` are drawn without replacement
+(if `unique=True`) or with replacement (if `unique=False`) from
+the base distribution.
+
+The base distribution for this operation is constructed on the fly
+during training. It is a unigram distribution over the target
+classes seen so far during training. Every integer in `[0, range_max)`
+begins with a weight of 1, and is incremented by 1 each time it is
+seen as a target class. The base distribution is not saved to checkpoints,
+so it is reset when the model is reloaded.
+
+In addition, this operation returns tensors `true_expected_count`
+and `sampled_expected_count` representing the number of times each
+of the target classes (`true_classes`) and the sampled
+classes (`sampled_candidates`) is expected to occur in an average
+tensor of sampled classes. These values correspond to `Q(y|x)`
+defined in [this
+document](http://www.tensorflow.org/extras/candidate_sampling.pdf).
+If `unique=True`, then these are post-rejection probabilities and we
+compute them approximately.
+
+##### Args:
+
+
+* <b>true_classes</b>: A `Tensor` of type `int64` and shape `[batch_size,
+ num_true]`. The target classes.
+* <b>num_true</b>: An `int`. The number of target classes per training example.
+* <b>num_sampled</b>: An `int`. The number of classes to randomly sample per batch.
+* <b>unique</b>: A `bool`. Determines whether all sampled classes in a batch are
+ unique.
+* <b>range_max</b>: An `int`. The number of possible classes.
+* <b>seed</b>: An `int`. An operation-specific seed. Default is 0.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+
+* <b>sampled_candidates</b>: A tensor of type `int64` and shape `[num_sampled]`.
+ The sampled classes.
+* <b>true_expected_count</b>: A tensor of type `float`. Same shape as
+ `true_classes`. The expected counts under the sampling distribution
+ of each of `true_classes`.
+* <b>sampled_expected_count</b>: A tensor of type `float`. Same shape as
+ `sampled_candidates`. The expected counts under the sampling distribution
+ of each of `sampled_candidates`.
+
+
+- - -
+
+### tf.nn.fixed_unigram_candidate_sampler(true_classes, num_true, num_sampled, unique, range_max, vocab_file='', distortion=0.0, num_reserved_ids=0, num_shards=1, shard=0, unigrams=[], seed=None, name=None) <div class="md-anchor" id="fixed_unigram_candidate_sampler">{#fixed_unigram_candidate_sampler}</div>
+
+Samples a set of classes using the provided (fixed) base distribution.
+
+This operation randomly samples a tensor of sampled classes
+(`sampled_candidates`) from the range of integers `[0, range_max)`.
+
+The elements of `sampled_candidates` are drawn without replacement
+(if `unique=True`) or with replacement (if `unique=False`) from
+the base distribution.
+
+The base distribution is read from a file or passed in as an
+in-memory array. There is also an option to skew the distribution by
+applying a distortion power to the weights.
+
+In addition, this operation returns tensors `true_expected_count`
+and `sampled_expected_count` representing the number of times each
+of the target classes (`true_classes`) and the sampled
+classes (`sampled_candidates`) is expected to occur in an average
+tensor of sampled classes. These values correspond to `Q(y|x)`
+defined in [this
+document](http://www.tensorflow.org/extras/candidate_sampling.pdf).
+If `unique=True`, then these are post-rejection probabilities and we
+compute them approximately.
+
+##### Args:
+
+
+* <b>true_classes</b>: A `Tensor` of type `int64` and shape `[batch_size,
+ num_true]`. The target classes.
+* <b>num_true</b>: An `int`. The number of target classes per training example.
+* <b>num_sampled</b>: An `int`. The number of classes to randomly sample per batch.
+* <b>unique</b>: A `bool`. Determines whether all sampled classes in a batch are
+ unique.
+* <b>range_max</b>: An `int`. The number of possible classes.
+* <b>vocab_file</b>: Each valid line in this file (which should have a CSV-like
+ format) corresponds to a valid word ID. IDs are in sequential order,
+ starting from num_reserved_ids. The last entry in each line is expected
+ to be a value corresponding to the count or relative probability. Exactly
+ one of `vocab_file` and `unigrams` needs to be passed to this operation.
+* <b>distortion</b>: The distortion is used to skew the unigram probability
+ distribution. Each weight is first raised to the distortion's power
+ before adding to the internal unigram distribution. As a result,
+ `distortion = 1.0` gives regular unigram sampling (as defined by the vocab
+ file), and `distortion = 0.0` gives a uniform distribution.
+* <b>num_reserved_ids</b>: Optionally some reserved IDs can be added in the range
+  `[0, num_reserved_ids)` by the users. One use case is that a special
+ unknown word token is used as ID 0. These IDs will have a sampling
+ probability of 0.
+* <b>num_shards</b>: A sampler can be used to sample from a subset of the original
+ range in order to speed up the whole computation through parallelism. This
+ parameter (together with `shard`) indicates the number of partitions that
+ are being used in the overall computation.
+* <b>shard</b>: A sampler can be used to sample from a subset of the original range
+ in order to speed up the whole computation through parallelism. This
+ parameter (together with `num_shards`) indicates the particular partition
+ number of the operation, when partitioning is being used.
+* <b>unigrams</b>: A list of unigram counts or probabilities, one per ID in
+ sequential order. Exactly one of `vocab_file` and `unigrams` should be
+ passed to this operation.
+* <b>seed</b>: An `int`. An operation-specific seed. Default is 0.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+
+* <b>sampled_candidates</b>: A tensor of type `int64` and shape `[num_sampled]`.
+ The sampled classes.
+* <b>true_expected_count</b>: A tensor of type `float`. Same shape as
+ `true_classes`. The expected counts under the sampling distribution
+ of each of `true_classes`.
+* <b>sampled_expected_count</b>: A tensor of type `float`. Same shape as
+ `sampled_candidates`. The expected counts under the sampling distribution
+ of each of `sampled_candidates`.
+
+
+
+### Miscellaneous candidate sampling utilities <div class="md-anchor" id="AUTOGENERATED-miscellaneous-candidate-sampling-utilities">{#AUTOGENERATED-miscellaneous-candidate-sampling-utilities}</div>
+
+- - -
+
+### tf.nn.compute_accidental_hits(true_classes, sampled_candidates, num_true, seed=None, name=None) <div class="md-anchor" id="compute_accidental_hits">{#compute_accidental_hits}</div>
+
+Compute the ids of positions in sampled_candidates matching true_classes.
+
+In Candidate Sampling, this operation facilitates virtually removing
+sampled classes which happen to match target classes. This is done
+in Sampled Softmax and Sampled Logistic.
+
+See our [Candidate Sampling Algorithms
+Reference](http://www.tensorflow.org/extras/candidate_sampling.pdf).
+
+We presuppose that the `sampled_candidates` are unique.
+
+We call it an 'accidental hit' when one of the target classes
+matches one of the sampled classes. This operation reports
+accidental hits as triples `(index, id, weight)`, where `index`
+represents the row number in `true_classes`, `id` represents the
+position in `sampled_candidates`, and `weight` is `-FLOAT_MAX`.
+
+The result of this op should be passed through a `sparse_to_dense`
+operation, then added to the logits of the sampled classes. This
+removes the contradictory effect of accidentally sampling the true
+target classes as noise classes for the same example.
+
+##### Args:
+
+
+* <b>true_classes</b>: A `Tensor` of type `int64` and shape `[batch_size,
+ num_true]`. The target classes.
+* <b>sampled_candidates</b>: A tensor of type `int64` and shape `[num_sampled]`.
+ The sampled_candidates output of CandidateSampler.
+* <b>num_true</b>: An `int`. The number of target classes per training example.
+* <b>seed</b>: An `int`. An operation-specific seed. Default is 0.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+
+* <b>indices</b>: A `Tensor` of type `int32` and shape `[num_accidental_hits]`.
+ Values indicate rows in `true_classes`.
+* <b>ids</b>: A `Tensor` of type `int64` and shape `[num_accidental_hits]`.
+ Values indicate positions in `sampled_candidates`.
+* <b>weights</b>: A `Tensor` of type `float` and shape `[num_accidental_hits]`.
+ Each value is `-FLOAT_MAX`.
+
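+A rough sketch of the pattern described above, building the
+`[batch_size, num_sampled]` mask with `sparse_to_dense`; the shapes and the
+choice of sampler are illustrative assumptions:
+
+```python
+import tensorflow as tf
+
+batch_size, num_sampled = 2, 5
+true_classes = tf.constant([[1], [7]], dtype=tf.int64)
+sampled, _, _ = tf.nn.uniform_candidate_sampler(
+    true_classes, num_true=1, num_sampled=num_sampled,
+    unique=True, range_max=10)
+
+indices, ids, weights = tf.nn.compute_accidental_hits(
+    true_classes, sampled, num_true=1)
+
+# Scatter -FLOAT_MAX into the (row, sampled-position) hit locations;
+# adding this mask to the sampled logits suppresses accidental hits.
+hit_coords = tf.concat(1, [tf.expand_dims(indices, 1),
+                           tf.cast(tf.expand_dims(ids, 1), tf.int32)])
+mask = tf.sparse_to_dense(hit_coords, [batch_size, num_sampled],
+                          weights, 0.0)
+```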
+
diff --git a/tensorflow/g3doc/api_docs/python/ops.md b/tensorflow/g3doc/api_docs/python/ops.md
new file mode 100644
index 0000000000..bb7d6e70e2
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/ops.md
@@ -0,0 +1,10 @@
+<!-- This file is machine generated: DO NOT EDIT! -->
+
+# Leftovers, should be empty and removed
+<!-- TOC-BEGIN This section is generated by neural network: DO NOT EDIT! -->
+## Contents
+
+
+<!-- TOC-END This section was generated by neural network, THANKS FOR READING! -->
+
+
diff --git a/tensorflow/g3doc/api_docs/python/python_io.md b/tensorflow/g3doc/api_docs/python/python_io.md
new file mode 100644
index 0000000000..7ad4b65bd0
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/python_io.md
@@ -0,0 +1,104 @@
+<!-- This file is machine generated: DO NOT EDIT! -->
+
+# Data IO (Python functions)
+<!-- TOC-BEGIN This section is generated by neural network: DO NOT EDIT! -->
+## Contents
+* [Data IO (Python Functions)](#AUTOGENERATED-data-io--python-functions-)
+ * [class tf.python_io.TFRecordWriter](#TFRecordWriter)
+ * [tf.python_io.tf_record_iterator(path)](#tf_record_iterator)
+ * [TFRecords Format Details](#AUTOGENERATED-tfrecords-format-details)
+
+
+<!-- TOC-END This section was generated by neural network, THANKS FOR READING! -->
+
+## Data IO (Python Functions) <div class="md-anchor" id="AUTOGENERATED-data-io--python-functions-">{#AUTOGENERATED-data-io--python-functions-}</div>
+
+A TFRecords file represents a sequence of (binary) strings. The format is not
+random access, so it is suitable for streaming large amounts of data but not
+suitable if fast sharding or other non-sequential access is desired.
+
+- - -
+
+### class tf.python_io.TFRecordWriter <div class="md-anchor" id="TFRecordWriter">{#TFRecordWriter}</div>
+
+A class to write records to a TFRecords file.
+
+This class implements `__enter__` and `__exit__`, and can be used
+in `with` blocks like a normal file.
+
+- - -
+
+#### tf.python_io.TFRecordWriter.__init__(path) {#TFRecordWriter.__init__}
+
+Opens file `path` and creates a `TFRecordWriter` writing to it.
+
+##### Args:
+
+
+* <b>path</b>: The path to the TFRecords file.
+
+##### Raises:
+
+
+* <b>IOError</b>: If `path` cannot be opened for writing.
+
+
+- - -
+
+#### tf.python_io.TFRecordWriter.write(record) {#TFRecordWriter.write}
+
+Write a string record to the file.
+
+##### Args:
+
+
+* <b>record</b>: str
+
+
+- - -
+
+#### tf.python_io.TFRecordWriter.close() {#TFRecordWriter.close}
+
+Close the file.
+
+
+
+- - -
+
+### tf.python_io.tf_record_iterator(path) <div class="md-anchor" id="tf_record_iterator">{#tf_record_iterator}</div>
+
+An iterator that reads the records from a TFRecords file.
+
+##### Args:
+
+
+* <b>path</b>: The path to the TFRecords file.
+
+##### Yields:
+
+ Strings.
+
+##### Raises:
+
+
+* <b>IOError</b>: If `path` cannot be opened for reading.
+
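+Together, the writer and iterator give a simple round trip (the path below
+is a hypothetical example):
+
+```python
+import tensorflow as tf
+
+path = "/tmp/example.tfrecords"
+
+# TFRecordWriter implements the context-manager protocol.
+with tf.python_io.TFRecordWriter(path) as writer:
+  for record in ["first", "second", "third"]:
+    writer.write(record)
+
+# The records come back in the order they were written.
+for record in tf.python_io.tf_record_iterator(path):
+  print record
+```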
+
+
+- - -
+
+### TFRecords Format Details <div class="md-anchor" id="AUTOGENERATED-tfrecords-format-details">{#AUTOGENERATED-tfrecords-format-details}</div>
+
+A TFRecords file contains a sequence of strings with CRC hashes. Each record
+has the format
+
+ uint64 length
+ uint32 masked_crc32_of_length
+ byte data[length]
+ uint32 masked_crc32_of_data
+
+and the records are concatenated together to produce the file. The CRC32s
+are [described here](https://en.wikipedia.org/wiki/Cyclic_redundancy_check),
+and the mask of a CRC is
+
+ masked_crc = ((crc >> 15) | (crc << 17)) + 0xa282ead8ul
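+
+For reference, a pure-Python sketch of the masking step (the CRC itself,
+a CRC32C in TensorFlow's implementation, is not computed here):
+
+```python
+def masked_crc32(crc):
+  # Rotate the 32-bit CRC right by 15 bits, then add the mask constant,
+  # keeping everything modulo 2**32.
+  rotated = ((crc >> 15) | (crc << 17)) & 0xFFFFFFFF
+  return (rotated + 0xa282ead8) & 0xFFFFFFFF
+```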
diff --git a/tensorflow/g3doc/api_docs/python/sparse_ops.md b/tensorflow/g3doc/api_docs/python/sparse_ops.md
new file mode 100644
index 0000000000..7e9ab0775f
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/sparse_ops.md
@@ -0,0 +1,502 @@
+<!-- This file is machine generated: DO NOT EDIT! -->
+
+# Sparse Tensors
+<!-- TOC-BEGIN This section is generated by neural network: DO NOT EDIT! -->
+## Contents
+* [Sparse Tensor Representation.](#AUTOGENERATED-sparse-tensor-representation.)
+ * [class tf.SparseTensor](#SparseTensor)
+ * [class tf.SparseTensorValue](#SparseTensorValue)
+* [Sparse to Dense Conversion.](#AUTOGENERATED-sparse-to-dense-conversion.)
+ * [tf.sparse_to_dense(sparse_indices, output_shape, sparse_values, default_value, name=None)](#sparse_to_dense)
+ * [tf.sparse_tensor_to_dense(sp_input, default_value, name=None)](#sparse_tensor_to_dense)
+ * [tf.sparse_to_indicator(sp_input, vocab_size, name=None)](#sparse_to_indicator)
+* [Manipulation.](#AUTOGENERATED-manipulation.)
+ * [tf.sparse_concat(concat_dim, sp_inputs, name=None)](#sparse_concat)
+ * [tf.sparse_reorder(sp_input, name=None)](#sparse_reorder)
+ * [tf.sparse_retain(sp_input, to_retain)](#sparse_retain)
+ * [tf.sparse_fill_empty_rows(sp_input, default_value, name=None)](#sparse_fill_empty_rows)
+
+
+<!-- TOC-END This section was generated by neural network, THANKS FOR READING! -->
+
+## Sparse Tensor Representation. <div class="md-anchor" id="AUTOGENERATED-sparse-tensor-representation.">{#AUTOGENERATED-sparse-tensor-representation.}</div>
+
+TensorFlow supports a `SparseTensor` representation for data that is sparse
+in multiple dimensions. Contrast this representation with `IndexedSlices`,
+which is efficient for representing tensors that are sparse in their first
+dimension, and dense along all other dimensions.
+
+- - -
+
+### class tf.SparseTensor <div class="md-anchor" id="SparseTensor">{#SparseTensor}</div>
+
+Represents a sparse tensor.
+
+TensorFlow represents a sparse tensor as three separate dense tensors:
+`indices`, `values`, and `dense_shape`. In Python, the three tensors are
+collected into a `SparseTensor` class for ease of use. If you have separate
+`indices`, `values`, and `dense_shape` tensors, wrap them in a `SparseTensor`
+object before passing to the Ops below.
+
+Concretely, the sparse tensor `SparseTensor(indices, values, dense_shape)` is
+
+* `indices`: A 2-D int64 tensor of shape `[N, ndims]`.
+* `values`: A 1-D tensor of any type and shape `[N]`.
+* `dense_shape`: A 1-D int64 tensor of shape `[ndims]`.
+
+where `N` and `ndims` are the number of values and the number of
+dimensions in the `SparseTensor`, respectively.
+
+The corresponding dense tensor satisfies
+
+```python
+dense.shape = dense_shape
+dense[tuple(indices[i])] = values[i]
+```
+
+By convention, `indices` should be sorted in row-major order (or equivalently
+lexicographic order on the tuples `indices[i]`). This is not enforced when
+`SparseTensor` objects are constructed, but most Ops assume correct ordering.
+If the ordering is wrong, it can be fixed by calling `sparse_reorder` on the
+misordered `SparseTensor`.
+
+Example: The sparse tensor
+
+```python
+ SparseTensor(values=[1, 2], indices=[[0, 0], [1, 2]], shape=[3, 4])
+```
+
+represents the dense tensor
+
+```python
+ [[1, 0, 0, 0]
+ [0, 0, 2, 0]
+ [0, 0, 0, 0]]
+```
+
+- - -
+
+#### tf.SparseTensor.__init__(indices, values, shape) {#SparseTensor.__init__}
+
+Creates a `SparseTensor`.
+
+##### Args:
+
+
+* <b>indices</b>: A 2-D int64 tensor of shape `[N, ndims]`.
+* <b>values</b>: A 1-D tensor of any type and shape `[N]`.
+* <b>shape</b>: A 1-D int64 tensor of shape `[ndims]`.
+
+##### Returns:
+
+ A `SparseTensor`
+
+
+- - -
+
+#### tf.SparseTensor.indices {#SparseTensor.indices}
+
+The indices of non-zero values in the represented dense tensor.
+
+##### Returns:
+
+ A 2-D Tensor of int64 with shape `[N, ndims]`, where `N` is the
+ number of non-zero values in the tensor, and `ndims` is the rank.
+
+- - -
+
+#### tf.SparseTensor.values {#SparseTensor.values}
+
+The non-zero values in the represented dense tensor.
+
+##### Returns:
+
+ A 1-D Tensor of any data type.
+
+- - -
+
+#### tf.SparseTensor.dtype {#SparseTensor.dtype}
+
+The `DType` of elements in this tensor.
+
+- - -
+
+#### tf.SparseTensor.shape {#SparseTensor.shape}
+
+A 1-D Tensor of int64 representing the shape of the dense tensor.
+
+- - -
+
+#### tf.SparseTensor.graph {#SparseTensor.graph}
+
+The `Graph` that contains the index, value, and shape tensors.
+
+
+- - -
+
+### class tf.SparseTensorValue <div class="md-anchor" id="SparseTensorValue">{#SparseTensorValue}</div>
+
+SparseTensorValue(indices, values, shape)
+- - -
+
+#### tf.SparseTensorValue.indices {#SparseTensorValue.indices}
+
+Alias for field number 0
+
+- - -
+
+#### tf.SparseTensorValue.shape {#SparseTensorValue.shape}
+
+Alias for field number 2
+
+- - -
+
+#### tf.SparseTensorValue.values {#SparseTensorValue.values}
+
+Alias for field number 1
+
+
+
+## Sparse to Dense Conversion. <div class="md-anchor" id="AUTOGENERATED-sparse-to-dense-conversion.">{#AUTOGENERATED-sparse-to-dense-conversion.}</div>
+
+- - -
+
+### tf.sparse_to_dense(sparse_indices, output_shape, sparse_values, default_value, name=None) <div class="md-anchor" id="sparse_to_dense">{#sparse_to_dense}</div>
+
+Converts a sparse representation into a dense tensor.
+
+Builds an array `dense` with shape `output_shape` such that
+
+```prettyprint
+# If sparse_indices is scalar
+dense[i] = (i == sparse_indices ? sparse_values : default_value)
+
+# If sparse_indices is a vector, then for each i
+dense[sparse_indices[i]] = sparse_values[i]
+
+# If sparse_indices is an n by d matrix, then for each i in [0, n)
+dense[sparse_indices[i][0], ..., sparse_indices[i][d-1]] = sparse_values[i]
+```
+
+All other values in `dense` are set to `default_value`. If `sparse_values` is a
+scalar, all sparse indices are set to this single value.
+
+##### Args:
+
+
+* <b>sparse_indices</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
+ 0-D, 1-D, or 2-D. `sparse_indices[i]` contains the complete
+ index where `sparse_values[i]` will be placed.
+* <b>output_shape</b>: A `Tensor`. Must have the same type as `sparse_indices`.
+ 1-D. Shape of the dense output tensor.
+* <b>sparse_values</b>: A `Tensor`.
+ 1-D. Values corresponding to each row of `sparse_indices`,
+ or a scalar value to be used for all sparse indices.
+* <b>default_value</b>: A `Tensor`. Must have the same type as `sparse_values`.
+ Scalar value to set for indices not specified in
+ `sparse_indices`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `sparse_values`.
+ Dense output tensor of shape `output_shape`.
+
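+For example, scattering two values into a length-5 vector:
+
+```python
+import tensorflow as tf
+
+# Place 7 at index 1 and 9 at index 3; everything else gets the default 0.
+dense = tf.sparse_to_dense([1, 3], [5], [7, 9], 0)
+
+with tf.Session() as sess:
+  print sess.run(dense)  # [0 7 0 9 0]
+```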
+
+- - -
+
+### tf.sparse_tensor_to_dense(sp_input, default_value, name=None) <div class="md-anchor" id="sparse_tensor_to_dense">{#sparse_tensor_to_dense}</div>
+
+Converts a `SparseTensor` into a dense tensor.
+
+This op is a convenience wrapper around `sparse_to_dense` for `SparseTensor`s.
+
+For example, if `sp_input` has shape `[3, 5]` and non-empty string values:
+
+ [0, 1]: a
+ [0, 3]: b
+ [2, 0]: c
+
+and `default_value` is `x`, then the output will be a dense `[3, 5]`
+string tensor with values:
+
+ [[x a x b x]
+ [x x x x x]
+ [c x x x x]]
+
+##### Args:
+
+
+* <b>sp_input</b>: The input `SparseTensor`.
+* <b>default_value</b>: Scalar value to set for indices not specified in
+ `sp_input`.
+* <b>name</b>: A name prefix for the returned tensors (optional).
+
+##### Returns:
+
+ A dense tensor with shape `sp_input.shape` and values specified by
+ the non-empty values in `sp_input`. Indices not in `sp_input` are assigned
+ `default_value`.
+
+##### Raises:
+
+
+* <b>TypeError</b>: If `sp_input` is not a `SparseTensor`.
+
+
+- - -
+
+### tf.sparse_to_indicator(sp_input, vocab_size, name=None) <div class="md-anchor" id="sparse_to_indicator">{#sparse_to_indicator}</div>
+
+Converts a `SparseTensor` of ids into a dense bool indicator tensor.
+
+The last dimension of `sp_input` is discarded and replaced with the values of
+`sp_input`. If `sp_input.shape = [D0, D1, ..., Dn, K]`, then
+`output.shape = [D0, D1, ..., Dn, vocab_size]`, where
+
+ output[d_0, d_1, ..., d_n, sp_input[d_0, d_1, ..., d_n, k]] = True
+
+and False elsewhere in `output`.
+
+For example, if `sp_input.shape = [2, 3, 4]` with non-empty values:
+
+ [0, 0, 0]: 0
+ [0, 1, 0]: 10
+ [1, 0, 3]: 103
+ [1, 1, 2]: 112
+ [1, 1, 3]: 113
+ [1, 2, 1]: 121
+
+and `vocab_size = 200`, then the output will be a `[2, 3, 200]` dense bool
+tensor with False everywhere except at positions
+
+ (0, 0, 0), (0, 1, 10), (1, 0, 103), (1, 1, 112), (1, 1, 113), (1, 2, 121).
+
+This op is useful for converting `SparseTensor`s into dense formats for
+compatibility with ops that expect dense tensors.
+
+The input `SparseTensor` must be in row-major order.
+
+##### Args:
+
+
+* <b>sp_input</b>: A `SparseTensor` of type `int32` or `int64`.
+* <b>vocab_size</b>: The new size of the last dimension, with
+ `all(0 <= sp_input.values < vocab_size)`.
+* <b>name</b>: A name prefix for the returned tensors (optional)
+
+##### Returns:
+
+ A dense bool indicator tensor representing the indices with specified value.
+
+##### Raises:
+
+
+* <b>TypeError</b>: If `sp_input` is not a `SparseTensor`.
+
+
+
+## Manipulation. <div class="md-anchor" id="AUTOGENERATED-manipulation.">{#AUTOGENERATED-manipulation.}</div>
+
+- - -
+
+### tf.sparse_concat(concat_dim, sp_inputs, name=None) <div class="md-anchor" id="sparse_concat">{#sparse_concat}</div>
+
+Concatenates a list of `SparseTensor` along the specified dimension.
+
+Concatenation is with respect to the dense versions of each sparse input.
+It is assumed that each input is a `SparseTensor` whose elements are ordered
+along increasing dimension number.
+
+All inputs' shapes must match, except for the concat dimension. The
+`indices`, `values`, and `shapes` lists must have the same length.
+
+The output shape is identical to the inputs', except along the concat
+dimension, where it is the sum of the inputs' sizes along that dimension.
+
+The output elements will be re-sorted to preserve the sort order along
+increasing dimension number.
+
+This op runs in `O(M log M)` time, where `M` is the total number of non-empty
+values across all inputs. This is due to the need for an internal sort in
+order to concatenate efficiently across an arbitrary dimension.
+
+For example, if `concat_dim = 1` and the inputs are
+
+ sp_inputs[0]: shape = [2, 3]
+ [0, 2]: "a"
+ [1, 0]: "b"
+ [1, 1]: "c"
+
+ sp_inputs[1]: shape = [2, 4]
+ [0, 1]: "d"
+ [0, 2]: "e"
+
+then the output will be
+
+ shape = [2, 7]
+ [0, 2]: "a"
+ [0, 4]: "d"
+ [0, 5]: "e"
+ [1, 0]: "b"
+ [1, 1]: "c"
+
+Graphically this is equivalent to doing
+
+ [ a] concat [ d e ] = [ a d e ]
+ [b c ] [ ] [b c ]
+
+##### Args:
+
+
+* <b>concat_dim</b>: Dimension to concatenate along.
+* <b>sp_inputs</b>: List of `SparseTensor` to concatenate.
+* <b>name</b>: A name prefix for the returned tensors (optional).
+
+##### Returns:
+
+ A `SparseTensor` with the concatenated output.
+
+##### Raises:
+
+
+* <b>TypeError</b>: If `sp_inputs` is not a list of `SparseTensor`.
+
+
+- - -
+
+### tf.sparse_reorder(sp_input, name=None) <div class="md-anchor" id="sparse_reorder">{#sparse_reorder}</div>
+
+Reorders a `SparseTensor` into the canonical, row-major ordering.
+
+Note that by convention, all sparse ops preserve the canonical ordering
+along increasing dimension number. The only time ordering can be violated
+is during manual manipulation of the indices and values to add entries.
+
+Reordering does not affect the shape of the `SparseTensor`.
+
+For example, if sp_input has shape `[4, 5]` and `indices` / `values`:
+
+ [0, 3]: b
+ [0, 1]: a
+ [3, 1]: d
+ [2, 0]: c
+
+then the output will be a `SparseTensor` of shape `[4, 5]` and
+`indices` / `values`:
+
+ [0, 1]: a
+ [0, 3]: b
+ [2, 0]: c
+ [3, 1]: d
+
+##### Args:
+
+
+* <b>sp_input</b>: The input `SparseTensor`.
+* <b>name</b>: A name prefix for the returned tensors (optional)
+
+##### Returns:
+
+ A `SparseTensor` with the same shape and non-empty values, but in
+ canonical ordering.
+
+##### Raises:
+
+
+* <b>TypeError</b>: If `sp_input` is not a `SparseTensor`.
+
+
+- - -
+
+### tf.sparse_retain(sp_input, to_retain) <div class="md-anchor" id="sparse_retain">{#sparse_retain}</div>
+
+Retains specified non-empty values within a `SparseTensor`.
+
+For example, if `sp_input` has shape `[4, 5]` and 4 non-empty string values:
+
+ [0, 1]: a
+ [0, 3]: b
+ [2, 0]: c
+ [3, 1]: d
+
+and `to_retain = [True, False, False, True]`, then the output will
+be a `SparseTensor` of shape `[4, 5]` with 2 non-empty values:
+
+ [0, 1]: a
+ [3, 1]: d
+
+##### Args:
+
+
+* <b>sp_input</b>: The input `SparseTensor` with `N` non-empty elements.
+* <b>to_retain</b>: A bool vector of length `N` with `M` true values.
+
+##### Returns:
+
+ A `SparseTensor` with the same shape as the input and `M` non-empty
+ elements corresponding to the true positions in `to_retain`.
+
+##### Raises:
+
+
+* <b>TypeError</b>: If `sp_input` is not a `SparseTensor`.
+
+
+- - -
+
+### tf.sparse_fill_empty_rows(sp_input, default_value, name=None) <div class="md-anchor" id="sparse_fill_empty_rows">{#sparse_fill_empty_rows}</div>
+
+Fills empty rows in the input 2-D `SparseTensor` with a default value.
+
+This op adds entries with the specified `default_value` at index
+`[row, 0]` for any row in the input that does not already have a value.
+
+For example, suppose `sp_input` has shape `[5, 6]` and non-empty values:
+
+ [0, 1]: a
+ [0, 3]: b
+ [2, 0]: c
+ [3, 1]: d
+
+Rows 1 and 4 are empty, so the output will be of shape `[5, 6]` with values:
+
+ [0, 1]: a
+ [0, 3]: b
+ [1, 0]: default_value
+ [2, 0]: c
+ [3, 1]: d
+ [4, 0]: default_value
+
+Note that the input may have empty columns at the end, with no effect on
+this op.
+
+The output `SparseTensor` will be in row-major order and will have the
+same shape as the input.
+
+This op also returns an indicator vector such that
+
+ empty_row_indicator[i] = True iff row i was an empty row.
+
+##### Args:
+
+
+* <b>sp_input</b>: A `SparseTensor` with shape `[N, M]`.
+* <b>default_value</b>: The value to fill for empty rows, with the same type as
+  `sp_input`.
+* <b>name</b>: A name prefix for the returned tensors (optional)
+
+##### Returns:
+
+
+* <b>sp_ordered_output</b>: A `SparseTensor` with shape `[N, M]`, and with all empty
+ rows filled in with `default_value`.
+* <b>empty_row_indicator</b>: A bool vector of length `N` indicating whether each
+ input row was empty.
+
+##### Raises:
+
+
+* <b>TypeError</b>: If `sp_input` is not a `SparseTensor`.
+
+
diff --git a/tensorflow/g3doc/api_docs/python/state_ops.md b/tensorflow/g3doc/api_docs/python/state_ops.md
new file mode 100644
index 0000000000..70d912178b
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/state_ops.md
@@ -0,0 +1,1383 @@
+<!-- This file is machine generated: DO NOT EDIT! -->
+
+# Variables
+<!-- TOC-BEGIN This section is generated by neural network: DO NOT EDIT! -->
+## Contents
+* [Variables](#AUTOGENERATED-variables)
+ * [class tf.Variable](#Variable)
+* [Variable helper functions](#AUTOGENERATED-variable-helper-functions)
+ * [tf.all_variables()](#all_variables)
+ * [tf.trainable_variables()](#trainable_variables)
+ * [tf.initialize_all_variables()](#initialize_all_variables)
+ * [tf.initialize_variables(var_list, name='init')](#initialize_variables)
+ * [tf.assert_variables_initialized(var_list=None)](#assert_variables_initialized)
+* [Saving and Restoring Variables.](#AUTOGENERATED-saving-and-restoring-variables.)
+ * [class tf.train.Saver](#Saver)
+ * [tf.train.latest_checkpoint(checkpoint_dir, latest_filename=None)](#latest_checkpoint)
+ * [tf.train.get_checkpoint_state(checkpoint_dir, latest_filename=None)](#get_checkpoint_state)
+ * [tf.train.update_checkpoint_state(save_dir, model_checkpoint_path, all_model_checkpoint_paths=None, latest_filename=None)](#update_checkpoint_state)
+* [Sharing Variables](#AUTOGENERATED-sharing-variables)
+ * [tf.get_variable(name, shape=None, dtype=tf.float32, initializer=None, trainable=True, collections=None)](#get_variable)
+ * [tf.get_variable_scope()](#get_variable_scope)
+ * [tf.variable_scope(*args, **kwds)](#variable_scope)
+ * [tf.constant_initializer(value=0.0)](#constant_initializer)
+ * [tf.random_normal_initializer(mean=0.0, stddev=1.0, seed=None)](#random_normal_initializer)
+ * [tf.truncated_normal_initializer(mean=0.0, stddev=1.0, seed=None)](#truncated_normal_initializer)
+ * [tf.random_uniform_initializer(minval=0.0, maxval=1.0, seed=None)](#random_uniform_initializer)
+ * [tf.uniform_unit_scaling_initializer(factor=1.0, seed=None)](#uniform_unit_scaling_initializer)
+ * [tf.zeros_initializer(shape, dtype=tf.float32)](#zeros_initializer)
+* [Sparse Variable Updates](#AUTOGENERATED-sparse-variable-updates)
+ * [tf.scatter_update(ref, indices, updates, use_locking=None, name=None)](#scatter_update)
+ * [tf.scatter_add(ref, indices, updates, use_locking=None, name=None)](#scatter_add)
+ * [tf.scatter_sub(ref, indices, updates, use_locking=None, name=None)](#scatter_sub)
+ * [tf.sparse_mask(a, mask_indices, name=None)](#sparse_mask)
+ * [class tf.IndexedSlices](#IndexedSlices)
+
+
+<!-- TOC-END This section was generated by neural network, THANKS FOR READING! -->
+
+## Variables <div class="md-anchor" id="AUTOGENERATED-variables">{#AUTOGENERATED-variables}</div>
+
+- - -
+
+### class tf.Variable <div class="md-anchor" id="Variable">{#Variable}</div>
+
+See the [Variables How To](../../how_tos/variables/index.md) for a high
+level overview.
+
+A variable maintains state in the graph across calls to `run()`. You add a
+variable to the graph by constructing an instance of the class `Variable`.
+
+The `Variable()` constructor requires an initial value for the variable,
+which can be a `Tensor` of any type and shape. The initial value defines the
+type and shape of the variable. After construction, the type and shape of
+the variable are fixed. The value can be changed using one of the assign
+methods.
+
+If you want to change the shape of a variable later you have to use an
+`assign` Op with `validate_shape=False`.
+
+Just like any `Tensor`, variables created with `Variable()` can be used as
+inputs for other Ops in the graph. Additionally, all the operators
+overloaded for the `Tensor` class are carried over to variables, so you can
+also add nodes to the graph by just doing arithmetic on variables.
+
+```python
+import tensorflow as tf
+
+# Create a variable.
+w = tf.Variable(<initial-value>, name=<optional-name>)
+
+# Use the variable in the graph like any Tensor.
+y = tf.matmul(w, ...another variable or tensor...)
+
+# The overloaded operators are available too.
+z = tf.sigmoid(w + b)
+
+# Assign a new value to the variable with `assign()` or a related method.
+w.assign(w + 1.0)
+w.assign_add(1.0)
+```
+
+When you launch the graph, variables have to be explicitly initialized before
+you can run Ops that use their value. You can initialize a variable by
+running its *initializer op*, restoring the variable from a save file, or
+simply running an `assign` Op that assigns a value to the variable. In fact,
+the variable *initializer op* is just an `assign` Op that assigns the
+variable's initial value to the variable itself.
+
+```python
+# Launch the graph in a session.
+with tf.Session() as sess:
+ # Run the variable initializer.
+ sess.run(w.initializer)
+ # ...you now can run ops that use the value of 'w'...
+```
+
+The most common initialization pattern is to use the convenience function
+`initialize_all_variables()` to add an Op to the graph that initializes
+all the variables. You then run that Op after launching the graph.
+
+```python
+# Add an Op to initialize all variables.
+init_op = tf.initialize_all_variables()
+
+# Launch the graph in a session.
+with tf.Session() as sess:
+ # Run the Op that initializes all variables.
+ sess.run(init_op)
+ # ...you can now run any Op that uses variable values...
+```
+
+If you need to create a variable with an initial value dependent on another
+variable, use the other variable's `initialized_value()`. This ensures that
+variables are initialized in the right order.
+
+All variables are automatically collected in the graph where they are
+created. By default, the constructor adds the new variable to the graph
+collection `GraphKeys.VARIABLES`. The convenience function
+`all_variables()` returns the contents of that collection.
+
+When building a machine learning model it is often convenient to distinguish
+between variables holding the trainable model parameters and other variables
+such as a `global step` variable used to count training steps. To make this
+easier, the variable constructor supports a `trainable=<bool>` parameter. If
+`True`, the new variable is also added to the graph collection
+`GraphKeys.TRAINABLE_VARIABLES`. The convenience function
+`trainable_variables()` returns the contents of this collection. The
+various `Optimizer` classes use this collection as the default list of
+variables to optimize.
+
+
+Creating a variable.
+
+- - -
+
+#### tf.Variable.__init__(initial_value, trainable=True, collections=None, validate_shape=True, name=None) {#Variable.__init__}
+
+Creates a new variable with value `initial_value`.
+
+The new variable is added to the graph collections listed in `collections`,
+which defaults to `[GraphKeys.VARIABLES]`.
+
+If `trainable` is `True` the variable is also added to the graph collection
+`GraphKeys.TRAINABLE_VARIABLES`.
+
+This constructor creates both a `variable` Op and an `assign` Op to set the
+variable to its initial value.
+
+##### Args:
+
+
+* <b>initial_value</b>: A `Tensor`, or Python object convertible to a `Tensor`.
+ The initial value for the Variable. Must have a shape specified unless
+ `validate_shape` is set to False.
+* <b>trainable</b>: If `True`, the default, also adds the variable to the graph
+ collection `GraphKeys.TRAINABLE_VARIABLES`. This collection is used as
+ the default list of variables to use by the `Optimizer` classes.
+* <b>collections</b>: List of graph collections keys. The new variable is added to
+ these collections. Defaults to `[GraphKeys.VARIABLES]`.
+* <b>validate_shape</b>: If `False`, allows the variable to be initialized with a
+ value of unknown shape. If `True`, the default, the shape of
+ `initial_value` must be known.
+* <b>name</b>: Optional name for the variable. Defaults to `'Variable'` and gets
+ uniquified automatically.
+
+##### Returns:
+
+ A Variable.
+
+##### Raises:
+
+
+* <b>ValueError</b>: If the initial value does not have a shape and
+ `validate_shape` is `True`.
+
+
+- - -
+
+#### tf.Variable.initialized_value() {#Variable.initialized_value}
+
+Returns the value of the initialized variable.
+
+You should use this instead of the variable itself to initialize another
+variable with a value that depends on the value of this variable.
+
+```python
+# Initialize 'v' with a random tensor.
+v = tf.Variable(tf.truncated_normal([10, 40]))
+# Use `initialized_value` to guarantee that `v` has been
+# initialized before its value is used to initialize `w`.
+# The random values are picked only once.
+w = tf.Variable(v.initialized_value() * 2.0)
+```
+
+##### Returns:
+
+ A `Tensor` holding the value of this variable after its initializer
+ has run.
+
+
+
+Changing a variable value.
+
+- - -
+
+#### tf.Variable.assign(value, use_locking=False) {#Variable.assign}
+
+Assigns a new value to the variable.
+
+This is essentially a shortcut for `assign(self, value)`.
+
+##### Args:
+
+
+* <b>value</b>: A `Tensor`. The new value for this variable.
+* <b>use_locking</b>: If `True`, use locking during the assignment.
+
+##### Returns:
+
+ A `Tensor` that will hold the new value of this variable after
+ the assignment has completed.
+
+
+- - -
+
+#### tf.Variable.assign_add(delta, use_locking=False) {#Variable.assign_add}
+
+Adds a value to this variable.
+
+This is essentially a shortcut for `assign_add(self, delta)`.
+
+##### Args:
+
+
+* <b>delta</b>: A `Tensor`. The value to add to this variable.
+* <b>use_locking</b>: If `True`, use locking during the operation.
+
+##### Returns:
+
+ A `Tensor` that will hold the new value of this variable after
+ the addition has completed.
+
+
+- - -
+
+#### tf.Variable.assign_sub(delta, use_locking=False) {#Variable.assign_sub}
+
+Subtracts a value from this variable.
+
+This is essentially a shortcut for `assign_sub(self, delta)`.
+
+##### Args:
+
+
+* <b>delta</b>: A `Tensor`. The value to subtract from this variable.
+* <b>use_locking</b>: If `True`, use locking during the operation.
+
+##### Returns:
+
+ A `Tensor` that will hold the new value of this variable after
+ the subtraction has completed.
+
+
+- - -
+
+#### tf.Variable.scatter_sub(sparse_delta, use_locking=False) {#Variable.scatter_sub}
+
+Subtracts `IndexedSlices` from this variable.
+
+This is essentially a shortcut for `scatter_sub(self, sparse_delta.indices,
+sparse_delta.values)`.
+
+##### Args:
+
+
+* <b>sparse_delta</b>: `IndexedSlices` to be subtracted from this variable.
+* <b>use_locking</b>: If `True`, use locking during the operation.
+
+##### Returns:
+
+ A `Tensor` that will hold the new value of this variable after
+ the scattered subtraction has completed.
+
+##### Raises:
+
+
+* <b>ValueError</b>: if `sparse_delta` is not an `IndexedSlices`.
+
+
+- - -
+
+#### tf.Variable.count_up_to(limit) {#Variable.count_up_to}
+
+Increments this variable until it reaches `limit`.
+
+When that Op is run it tries to increment the variable by `1`. If
+incrementing the variable would bring it above `limit` then the Op raises
+the exception `OutOfRangeError`.
+
+If no error is raised, the Op outputs the value of the variable before
+the increment.
+
+This is essentially a shortcut for `count_up_to(self, limit)`.
+
+##### Args:
+
+
+* <b>limit</b>: value at which incrementing the variable raises an error.
+
+##### Returns:
+
+ A `Tensor` that will hold the variable value before the increment. If no
+ other Op modifies this variable, the values produced will all be
+ distinct.
+
+
+
+- - -
+
+#### tf.Variable.eval(session=None) {#Variable.eval}
+
+In a session, computes and returns the value of this variable.
+
+This is not a graph construction method, it does not add ops to the graph.
+
+This convenience method requires a session where the graph containing this
+variable has been launched. If no session is passed, the default session is
+used. See the [Session class](../client.md#Session) for more information on
+launching a graph and on sessions.
+
+```python
+v = tf.Variable([1, 2])
+init = tf.initialize_all_variables()
+
+with tf.Session() as sess:
+ sess.run(init)
+ # Usage passing the session explicitly.
+ print v.eval(sess)
+ # Usage with the default session. The 'with' block
+ # above makes 'sess' the default session.
+ print v.eval()
+```
+
+##### Args:
+
+
+* <b>session</b>: The session to use to evaluate this variable. If
+ none, the default session is used.
+
+##### Returns:
+
+ A numpy `ndarray` with a copy of the value of this variable.
+
+
+
+Properties.
+
+- - -
+
+#### tf.Variable.name {#Variable.name}
+
+The name of this variable.
+
+- - -
+
+#### tf.Variable.dtype {#Variable.dtype}
+
+The `DType` of this variable.
+
+- - -
+
+#### tf.Variable.get_shape() {#Variable.get_shape}
+
+The `TensorShape` of this variable.
+
+##### Returns:
+
+ A `TensorShape`.
+
+
+- - -
+
+#### tf.Variable.device {#Variable.device}
+
+The device of this variable.
+
+- - -
+
+#### tf.Variable.initializer {#Variable.initializer}
+
+The initializer operation for this variable.
+
+- - -
+
+#### tf.Variable.graph {#Variable.graph}
+
+The `Graph` of this variable.
+
+- - -
+
+#### tf.Variable.op {#Variable.op}
+
+The `Operation` of this variable.
+
+
+
+## Variable helper functions <div class="md-anchor" id="AUTOGENERATED-variable-helper-functions">{#AUTOGENERATED-variable-helper-functions}</div>
+
+TensorFlow provides a set of functions to help manage the set of variables
+collected in the graph.
+
+- - -
+
+### tf.all_variables() <div class="md-anchor" id="all_variables">{#all_variables}</div>
+
+Returns all variables collected in the graph.
+
+The `Variable()` constructor automatically adds new variables to the graph
+collection `GraphKeys.VARIABLES`. This convenience function returns the
+contents of that collection.
+
+##### Returns:
+
+ A list of `Variable` objects.
+
+
+- - -
+
+### tf.trainable_variables() <div class="md-anchor" id="trainable_variables">{#trainable_variables}</div>
+
+Returns all variables created with `trainable=True`.
+
+When passed `trainable=True`, the `Variable()` constructor automatically
+adds new variables to the graph collection
+`GraphKeys.TRAINABLE_VARIABLES`. This convenience function returns the
+contents of that collection.
+
+##### Returns:
+
+ A list of Variable objects.
+
+
+
+- - -
+
+### tf.initialize_all_variables() <div class="md-anchor" id="initialize_all_variables">{#initialize_all_variables}</div>
+
+Returns an Op that initializes all variables.
+
+This is just a shortcut for `initialize_variables(all_variables())`.
+
+##### Returns:
+
+ An Op that initializes all variables in the graph.
+
+
+- - -
+
+### tf.initialize_variables(var_list, name='init') <div class="md-anchor" id="initialize_variables">{#initialize_variables}</div>
+
+Returns an Op that initializes a list of variables.
+
+After you launch the graph in a session, you can run the returned Op to
+initialize all the variables in `var_list`. This Op runs all the
+initializers of the variables in `var_list` in parallel.
+
+Calling `initialize_variables()` is equivalent to passing the list of
+initializers to `Group()`.
+
+If `var_list` is empty, however, the function still returns an Op that can
+be run. That Op just has no effect.
+
+##### Args:
+
+
+* <b>var_list</b>: List of `Variable` objects to initialize.
+* <b>name</b>: Optional name for the returned operation.
+
+##### Returns:
+
+  An Op that runs the initializers of all the specified variables.
+
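+For example, to initialize only a subset of the graph's variables:
+
+```python
+import tensorflow as tf
+
+a = tf.Variable(tf.zeros([2]), name="a")
+b = tf.Variable(tf.ones([2]), name="b")
+c = tf.Variable(tf.zeros([2]), name="c")
+
+# Runs the initializers of `a` and `b` in parallel; `c` is untouched.
+init_ab = tf.initialize_variables([a, b], name="init_ab")
+
+with tf.Session() as sess:
+  sess.run(init_ab)
+```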
+
+- - -
+
+### tf.assert_variables_initialized(var_list=None) <div class="md-anchor" id="assert_variables_initialized">{#assert_variables_initialized}</div>
+
+Returns an Op to check if variables are initialized.
+
+When run, the returned Op will raise the exception `FailedPreconditionError`
+if any of the variables has not yet been initialized.
+
+Note: This function is implemented by trying to fetch the values of the
+variables. If one of the variables is not initialized a message may be
+logged by the C++ runtime. This is expected.
+
+##### Args:
+
+
+* <b>var_list</b>: List of `Variable` objects to check. Defaults to the
+  value of `all_variables()`.
+
+##### Returns:
+
+ An Op, or None if there are no variables.
+
+
+
+## Saving and Restoring Variables. <div class="md-anchor" id="AUTOGENERATED-saving-and-restoring-variables.">{#AUTOGENERATED-saving-and-restoring-variables.}</div>
+
+- - -
+
+### class tf.train.Saver <div class="md-anchor" id="Saver">{#Saver}</div>
+
+Saves and restores variables.
+
+See [Variables](../../how_tos/variables/index.md)
+for an overview of variables, saving and restoring.
+
+The `Saver` class adds ops to save and restore variables to and from
+*checkpoints*. It also provides convenience methods to run these ops.
+
+Checkpoints are binary files in a proprietary format which map variable names
+to tensor values. The best way to examine the contents of a checkpoint is to
+load it using a `Saver`.
+
+Savers can automatically number checkpoint filenames with a provided counter.
+This lets you keep multiple checkpoints at different steps while training a
+model. For example you can number the checkpoint filenames with the training
+step number. To avoid filling up disks, savers manage checkpoint files
+automatically. For example, they can keep only the N most recent files, or
+one checkpoint for every N hours of training.
+
+You number checkpoint filenames by passing a value to the optional
+`global_step` argument to `save()`:
+
+```python
+saver.save(sess, 'my-model', global_step=0) ==> filename: 'my-model-0'
+...
+saver.save(sess, 'my-model', global_step=1000) ==> filename: 'my-model-1000'
+```
+
+Additionally, optional arguments to the `Saver()` constructor let you control
+the proliferation of checkpoint files on disk:
+
+* `max_to_keep` indicates the maximum number of recent checkpoint files to
+ keep. As new files are created, older files are deleted. If None or 0,
+ all checkpoint files are kept. Defaults to 5 (that is, the 5 most recent
+  checkpoint files are kept).
+
+* `keep_checkpoint_every_n_hours`: In addition to keeping the most recent
+ `max_to_keep` checkpoint files, you might want to keep one checkpoint file
+ for every N hours of training. This can be useful if you want to later
+ analyze how a model progressed during a long training session. For
+ example, passing `keep_checkpoint_every_n_hours=2` ensures that you keep
+ one checkpoint file for every 2 hours of training. The default value of
+ 10,000 hours effectively disables the feature.
+
+Note that you still have to call the `save()` method to save the model.
+Passing these arguments to the constructor will not save variables
+automatically for you.
+
+A training program that saves regularly looks like:
+
+```python
+...
+# Create a saver.
+saver = tf.train.Saver(...variables...)
+# Launch the graph and train, saving the model every 1,000 steps.
+sess = tf.Session()
+for step in xrange(1000000):
+ sess.run(..training_op..)
+ if step % 1000 == 0:
+ # Append the step number to the checkpoint name:
+ saver.save(sess, 'my-model', global_step=step)
+```
+
+In addition to checkpoint files, savers keep a protocol buffer on disk with
+the list of recent checkpoints. This is used to manage numbered checkpoint
+files and by `latest_checkpoint()`, which makes it easy to discover the path
+to the most recent checkpoint. That protocol buffer is stored in a file named
+'checkpoint' next to the checkpoint files.
+
+If you create several savers, you can specify a different filename for the
+protocol buffer file in the call to `save()`.
+
+- - -
+
+#### tf.train.Saver.__init__(var_list=None, reshape=False, sharded=False, max_to_keep=5, keep_checkpoint_every_n_hours=10000.0, name=None, restore_sequentially=False, saver_def=None, builder=None) {#Saver.__init__}
+
+Creates a `Saver`.
+
+The constructor adds ops to save and restore variables.
+
+`var_list` specifies the variables that will be saved and restored. It can
+be passed as a `dict` or a list:
+
+* A `dict` of names to variables: The keys are the names that will be
+ used to save or restore the variables in the checkpoint files.
+* A list of variables: The variables will be keyed with their op name in
+ the checkpoint files.
+
+For example:
+
+```python
+v1 = tf.Variable(..., name='v1')
+v2 = tf.Variable(..., name='v2')
+
+# Pass the variables as a dict:
+saver = tf.train.Saver({'v1': v1, 'v2': v2})
+
+# Or pass them as a list.
+saver = tf.train.Saver([v1, v2])
+# Passing a list is equivalent to passing a dict with the variable op names
+# as keys:
+saver = tf.train.Saver({v.op.name: v for v in [v1, v2]})
+```
+
+The optional `reshape` argument, if True, allows restoring a variable from
+a save file where the variable had a different shape, but the same number
+of elements and type. This is useful if you have reshaped a variable and
+want to reload it from an older checkpoint.
+
+The optional `sharded` argument, if True, instructs the saver to shard
+checkpoints per device.
+
+##### Args:
+
+
+* <b>var_list</b>: A list of Variables or a dictionary mapping names to
+ Variables. If None, defaults to the list of all variables.
+* <b>reshape</b>: If True, allows restoring parameters from a checkpoint
+ where the variables have a different shape.
+* <b>sharded</b>: If True, shard the checkpoints, one per device.
+* <b>max_to_keep</b>: Maximum number of recent checkpoints to keep.
+  Defaults to 5.
+* <b>keep_checkpoint_every_n_hours</b>: How often to keep checkpoints.
+ Defaults to 10,000 hours.
+* <b>name</b>: string. Optional name to use as a prefix when adding operations.
+* <b>restore_sequentially</b>: A Bool, which if true, causes restore of different
+ variables to happen sequentially within each device. This can lower
+ memory usage when restoring very large models.
+* <b>saver_def</b>: Optional SaverDef proto to use instead of running the builder.
+ This is only useful for specialty code that wants to recreate a Saver
+ object for a previously built Graph that had a Saver. The saver_def
+ proto should be the one returned by the as_saver_def() call of the
+ Saver that was created for that Graph.
+* <b>builder</b>: Optional SaverBuilder to use if a saver_def was not provided.
+ Defaults to BaseSaverBuilder().
+
+##### Raises:
+
+
+* <b>TypeError</b>: If `var_list` is invalid.
+* <b>ValueError</b>: If any of the keys or values in `var_list` is not unique.
+
+
+- - -
+
+#### tf.train.Saver.save(sess, save_path, global_step=None, latest_filename=None) {#Saver.save}
+
+Saves variables.
+
+This method runs the ops added by the constructor for saving variables.
+It requires a session in which the graph was launched. The variables to
+save must also have been initialized.
+
+The method returns the path of the newly created checkpoint file. This
+path can be passed directly to a call to `restore()`.
+
+##### Args:
+
+
+* <b>sess</b>: A Session to use to save the variables.
+* <b>save_path</b>: string. Path to the checkpoint filename. If the saver is
+ `sharded`, this is the prefix of the sharded checkpoint filename.
+* <b>global_step</b>: If provided the global step number is appended to
+ `save_path` to create the checkpoint filename. The optional argument
+ can be a Tensor, a Tensor name or an integer.
+* <b>latest_filename</b>: Optional name for the protocol buffer file that will
+  contain the list of most recent checkpoint filenames. That file,
+ kept in the same directory as the checkpoint files, is automatically
+ managed by the saver to keep track of recent checkpoints. Defaults to
+ 'checkpoint'.
+
+##### Returns:
+
+ A string: path at which the variables were saved. If the saver is
+ sharded, this string ends with: '-?????-of-nnnnn' where 'nnnnn'
+ is the number of shards created.
+
+##### Raises:
+
+
+* <b>TypeError</b>: If `sess` is not a Session.
+
+
+- - -
+
+#### tf.train.Saver.restore(sess, save_path) {#Saver.restore}
+
+Restores previously saved variables.
+
+This method runs the ops added by the constructor for restoring variables.
+It requires a session in which the graph was launched. The variables to
+restore do not have to have been initialized, as restoring is itself a way
+to initialize variables.
+
+The `save_path` argument is typically a value previously returned from a
+`save()` call, or a call to `latest_checkpoint()`.
+
+##### Args:
+
+
+* <b>sess</b>: A Session to use to restore the parameters.
+* <b>save_path</b>: Path where parameters were previously saved.
+
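+
+For example, a minimal save-and-restore sketch (the variable definition and
+the checkpoint path are illustrative):
+
+```python
+v = tf.Variable(tf.zeros([10]), name='v')
+saver = tf.train.Saver([v])
+
+sess = tf.Session()
+sess.run(tf.initialize_all_variables())
+# save() returns the path of the checkpoint it wrote.
+save_path = saver.save(sess, '/tmp/my-model')
+
+# Later, in a session where the same graph was launched, restore the
+# variables. No prior initialization is needed: restoring is itself a
+# way to initialize variables.
+saver.restore(sess, save_path)
+```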
+
+
+Other utility methods.
+
+- - -
+
+#### tf.train.Saver.last_checkpoints {#Saver.last_checkpoints}
+
+List of not-yet-deleted checkpoint filenames.
+
+You can pass any of the returned values to `restore()`.
+
+##### Returns:
+
+ A list of checkpoint filenames, sorted from oldest to newest.
+
+- - -
+
+#### tf.train.Saver.set_last_checkpoints(last_checkpoints) {#Saver.set_last_checkpoints}
+
+Sets the list of not-yet-deleted checkpoint filenames.
+
+##### Args:
+
+
+* <b>last_checkpoints</b>: a list of checkpoint filenames.
+
+##### Raises:
+
+
+* <b>AssertionError</b>: if the list of checkpoint filenames has already been set.
+
+
+- - -
+
+#### tf.train.Saver.as_saver_def() {#Saver.as_saver_def}
+
+Generates a `SaverDef` representation of this saver.
+
+##### Returns:
+
+ A `SaverDef` proto.
+
+
+
+
+- - -
+
+### tf.train.latest_checkpoint(checkpoint_dir, latest_filename=None) <div class="md-anchor" id="latest_checkpoint">{#latest_checkpoint}</div>
+
+Finds the filename of latest saved checkpoint file.
+
+##### Args:
+
+
+* <b>checkpoint_dir</b>: Directory where the variables were saved.
+* <b>latest_filename</b>: Optional name for the protocol buffer file that
+ contains the list of most recent checkpoint filenames.
+ See the corresponding argument to `Saver.save()`.
+
+##### Returns:
+
+ The full path to the latest checkpoint or None if no checkpoint was found.
+
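+
+For example, a sketch of resuming from the most recent checkpoint, assuming
+`saver` and `sess` were created as in the `Saver` examples above (the
+directory path is illustrative):
+
+```python
+ckpt_path = tf.train.latest_checkpoint('/tmp/train_dir')
+if ckpt_path is not None:
+  saver.restore(sess, ckpt_path)
+```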
+
+
+- - -
+
+### tf.train.get_checkpoint_state(checkpoint_dir, latest_filename=None) <div class="md-anchor" id="get_checkpoint_state">{#get_checkpoint_state}</div>
+
+Returns CheckpointState proto from the "checkpoint" file.
+
+If the "checkpoint" file contains a valid CheckpointState
+proto, returns it.
+
+##### Args:
+
+
+* <b>checkpoint_dir</b>: The directory of checkpoints.
+* <b>latest_filename</b>: Optional name of the checkpoint file. Defaults to
+  'checkpoint'.
+
+##### Returns:
+
+ A CheckpointState if the state was available, None
+ otherwise.
+
+
+- - -
+
+### tf.train.update_checkpoint_state(save_dir, model_checkpoint_path, all_model_checkpoint_paths=None, latest_filename=None) <div class="md-anchor" id="update_checkpoint_state">{#update_checkpoint_state}</div>
+
+Updates the content of the 'checkpoint' file.
+
+This updates the checkpoint file containing a CheckpointState
+proto.
+
+##### Args:
+
+
+* <b>save_dir</b>: Directory where the model was saved.
+* <b>model_checkpoint_path</b>: The checkpoint file.
+* <b>all_model_checkpoint_paths</b>: list of strings. Paths to all not-yet-deleted
+ checkpoints, sorted from oldest to newest. If this is a non-empty list,
+ the last element must be equal to model_checkpoint_path. These paths
+ are also saved in the CheckpointState proto.
+* <b>latest_filename</b>: Optional name of the checkpoint file. Defaults to
+  'checkpoint'.
+
+##### Raises:
+
+
+* <b>RuntimeError</b>: If the save paths conflict.
+
+
+
+## Sharing Variables <div class="md-anchor" id="AUTOGENERATED-sharing-variables">{#AUTOGENERATED-sharing-variables}</div>
+
+TensorFlow provides several classes and operations that you can use to
+create variables contingent on certain conditions.
+
+- - -
+
+### tf.get_variable(name, shape=None, dtype=tf.float32, initializer=None, trainable=True, collections=None) <div class="md-anchor" id="get_variable">{#get_variable}</div>
+
+Gets an existing variable with these parameters or creates a new one.
+
+This function prefixes the name with the current variable scope
+and performs reuse checks. See the
+[Variable Scope How To](../../how_tos/variable_scope/index.md)
+for an extensive description of how reusing works. Here is a basic example:
+
+```python
+with tf.variable_scope("foo"):
+  v = tf.get_variable("v", [1])  # v.name == "foo/v:0"
+  w = tf.get_variable("w", [1])  # w.name == "foo/w:0"
+with tf.variable_scope("foo", reuse=True):
+  v1 = tf.get_variable("v")  # The same as v above.
+```
+
+If initializer is `None` (the default), the default initializer passed in
+the constructor is used. If that one is `None` too, a
+`UniformUnitScalingInitializer` will be used.
+
+##### Args:
+
+
+* <b>name</b>: the name of the new or existing variable.
+* <b>shape</b>: shape of the new or existing variable.
+* <b>dtype</b>: type of the new or existing variable (defaults to `DT_FLOAT`).
+* <b>initializer</b>: initializer for the variable if one is created.
+* <b>trainable</b>: If `True` also add the variable to the graph collection
+ `GraphKeys.TRAINABLE_VARIABLES` (see variables.Variable).
+* <b>collections</b>: List of graph collections keys to add the Variable to.
+ Defaults to `[GraphKeys.VARIABLES]` (see variables.Variable).
+
+##### Returns:
+
+ The created or existing variable.
+
+##### Raises:
+
+
+* <b>ValueError</b>: when creating a new variable and shape is not declared,
+ or when violating reuse during variable creation. Reuse is set inside
+ `variable_scope`.
+
+
+- - -
+
+### tf.get_variable_scope() <div class="md-anchor" id="get_variable_scope">{#get_variable_scope}</div>
+
+Returns the current variable scope.
+
+
+- - -
+
+### tf.variable_scope(*args, **kwds) <div class="md-anchor" id="variable_scope">{#variable_scope}</div>
+
+Returns a context for variable scope.
+
+Variable scope allows you to create new variables and to share already created
+ones, while providing checks against accidental creation or sharing. For
+details, see the [Variable Scope How To](../../how_tos/variable_scope/index.md);
+here we present only a few basic examples.
+
+Simple example of how to create a new variable:
+
+```python
+with tf.variable_scope("foo"):
+ with tf.variable_scope("bar"):
+ v = tf.get_variable("v", [1])
+ assert v.name == "foo/bar/v:0"
+```
+
+Basic example of sharing a variable:
+
+```python
+with tf.variable_scope("foo"):
+  v = tf.get_variable("v", [1])
+with tf.variable_scope("foo", reuse=True):
+ v1 = tf.get_variable("v", [1])
+assert v1 == v
+```
+
+Sharing a variable by capturing a scope and setting reuse:
+
+```python
+with tf.variable_scope("foo") as scope:
+  v = tf.get_variable("v", [1])
+ scope.reuse_variables()
+ v1 = tf.get_variable("v", [1])
+assert v1 == v
+```
+
+To prevent accidental sharing of variables, we raise an exception when
+getting an existing variable in a non-reusing scope.
+
+```python
+with tf.variable_scope("foo") as scope:
+  v = tf.get_variable("v", [1])
+ v1 = tf.get_variable("v", [1])
+ # Raises ValueError("... v already exists ...").
+```
+
+Similarly, we raise an exception when trying to get a variable that
+does not exist in reuse mode.
+
+```python
+with tf.variable_scope("foo", reuse=True):
+  v = tf.get_variable("v", [1])
+  # Raises ValueError("... v does not exist ...").
+```
+
+Note that the `reuse` flag is inherited: if we open a reusing scope,
+then all its sub-scopes become reusing as well.
+
+##### Args:
+
+
+* <b>name_or_scope</b>: `string` or `VariableScope`: the scope to open.
+* <b>reuse</b>: `True` or `None`; if `True`, we go into reuse mode for this scope as
+ well as all sub-scopes; if `None`, we just inherit the parent scope reuse.
+* <b>initializer</b>: default initializer for variables within this scope.
+
+##### Yields:
+
+  A scope that can be captured and reused.
+
+##### Raises:
+
+
+* <b>ValueError</b>: when trying to reuse a variable in a creating scope, or to
+  create a variable in a reusing scope, or if `reuse` is not `None` or `True`.
+* <b>TypeError</b>: when the types of some arguments are not appropriate.
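+
+For example, a sketch of setting a default initializer for all variables
+created in a scope (the constant value is illustrative):
+
+```python
+with tf.variable_scope("foo", initializer=tf.constant_initializer(0.4)):
+  v = tf.get_variable("v", [1])  # Initialized with the scope's default.
+```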
+
+
+
+- - -
+
+### tf.constant_initializer(value=0.0) <div class="md-anchor" id="constant_initializer">{#constant_initializer}</div>
+
+Returns an initializer that generates Tensors with a single value.
+
+##### Args:
+
+
+* <b>value</b>: A Python scalar. All elements of the initialized variable
+ will be set to this value.
+
+##### Returns:
+
+ An initializer that generates Tensors with a single value.
+
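+For example, a sketch of pairing this initializer with `get_variable()` (the
+shape and value are illustrative):
+
+```python
+init = tf.constant_initializer(3.0)
+v = tf.get_variable("v", shape=[2, 2], initializer=init)
+# After initialization, every element of v equals 3.0.
+```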
+
+- - -
+
+### tf.random_normal_initializer(mean=0.0, stddev=1.0, seed=None) <div class="md-anchor" id="random_normal_initializer">{#random_normal_initializer}</div>
+
+Returns an initializer that generates Tensors with a normal distribution.
+
+##### Args:
+
+
+* <b>mean</b>: a python scalar or a scalar tensor. Mean of the random values
+ to generate.
+* <b>stddev</b>: a python scalar or a scalar tensor. Standard deviation of the
+ random values to generate.
+* <b>seed</b>: A Python integer. Used to create random seeds.
+ See [`set_random_seed`](constant_op.md#set_random_seed) for behavior.
+
+##### Returns:
+
+ An initializer that generates Tensors with a normal distribution.
+
+
+- - -
+
+### tf.truncated_normal_initializer(mean=0.0, stddev=1.0, seed=None) <div class="md-anchor" id="truncated_normal_initializer">{#truncated_normal_initializer}</div>
+
+Returns an initializer that generates a truncated normal distribution.
+
+These values are similar to values from a random_normal_initializer
+except that values more than two standard deviations from the mean
+are discarded and re-drawn. This is the recommended initializer for
+neural network weights and filters.
+
+##### Args:
+
+
+* <b>mean</b>: a python scalar or a scalar tensor. Mean of the random values
+ to generate.
+* <b>stddev</b>: a python scalar or a scalar tensor. Standard deviation of the
+ random values to generate.
+* <b>seed</b>: A Python integer. Used to create random seeds.
+ See [`set_random_seed`](constant_op.md#set_random_seed) for behavior.
+
+##### Returns:
+
+ An initializer that generates Tensors with a truncated normal
+ distribution.
+
+
+- - -
+
+### tf.random_uniform_initializer(minval=0.0, maxval=1.0, seed=None) <div class="md-anchor" id="random_uniform_initializer">{#random_uniform_initializer}</div>
+
+Returns an initializer that generates Tensors with a uniform distribution.
+
+##### Args:
+
+
+* <b>minval</b>: a python scalar or a scalar tensor. lower bound of the range
+ of random values to generate.
+* <b>maxval</b>: a python scalar or a scalar tensor. upper bound of the range
+ of random values to generate.
+* <b>seed</b>: A Python integer. Used to create random seeds.
+ See [`set_random_seed`](constant_op.md#set_random_seed) for behavior.
+
+##### Returns:
+
+ An initializer that generates Tensors with a uniform distribution.
+
+
+- - -
+
+### tf.uniform_unit_scaling_initializer(factor=1.0, seed=None) <div class="md-anchor" id="uniform_unit_scaling_initializer">{#uniform_unit_scaling_initializer}</div>
+
+Returns an initializer that generates tensors without scaling variance.
+
+When initializing a deep network, it is in principle advantageous to keep
+the scale of the input variance constant, so it does not explode or diminish
+by reaching the final layer. If the input is `x` and the operation is `x * W`,
+and we want to initialize `W` uniformly at random, we need to pick `W` from
+
+ [-sqrt(3) / sqrt(dim), sqrt(3) / sqrt(dim)]
+
+to keep the scale intact, where `dim = W.shape[0]` (the size of the input).
+A similar calculation for convolutional networks gives an analogous result
+with `dim` equal to the product of the first 3 dimensions. When
+nonlinearities are present, we need to multiply this by a constant `factor`.
+See <https://arxiv.org/pdf/1412.6558v3.pdf> for deeper motivation, experiments
+and the calculation of constants. In section 2.3 there, the constants were
+numerically computed: for a linear layer it's 1.0, relu: ~1.43, tanh: ~1.15.
+
+##### Args:
+
+
+* <b>factor</b>: Float. A multiplicative factor by which the values will be scaled.
+* <b>seed</b>: A Python integer. Used to create random seeds.
+ See [`set_random_seed`](constant_op.md#set_random_seed) for behavior.
+
+##### Returns:
+
+ An initializer that generates tensors with unit variance.
+
+
+- - -
+
+### tf.zeros_initializer(shape, dtype=tf.float32) <div class="md-anchor" id="zeros_initializer">{#zeros_initializer}</div>
+
+An adaptor for zeros() to match the Initializer spec.
+
+
+
+## Sparse Variable Updates <div class="md-anchor" id="AUTOGENERATED-sparse-variable-updates">{#AUTOGENERATED-sparse-variable-updates}</div>
+
+The sparse update ops modify a subset of the entries in a dense `Variable`,
+either overwriting the entries or adding / subtracting a delta. These are
+useful for training embedding models and similar lookup-based networks, since
+only a small subset of embedding vectors change in any given step.
+
+Since a sparse update of a large tensor may be generated automatically during
+gradient computation (as in the gradient of [`tf.gather`](array_ops.md#gather)),
+an [`IndexedSlices`](#IndexedSlices) class is provided that encapsulates a set
+of sparse indices and values. `IndexedSlices` objects are detected and handled
+automatically by the optimizers in most cases.
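+
+For example, a minimal sketch of a sparse overwrite (the shapes and values
+are illustrative):
+
+```python
+var = tf.Variable(tf.zeros([5, 2]))
+# Overwrite rows 1 and 3 of `var` with new values.
+update_op = tf.scatter_update(var, [1, 3], [[1.0, 1.0], [2.0, 2.0]])
+```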
+
+- - -
+
+### tf.scatter_update(ref, indices, updates, use_locking=None, name=None) <div class="md-anchor" id="scatter_update">{#scatter_update}</div>
+
+Applies sparse updates to a variable reference.
+
+This operation computes
+
+ # Scalar indices
+ ref[indices, ...] = updates[...]
+
+ # Vector indices (for each i)
+ ref[indices[i], ...] = updates[i, ...]
+
+ # High rank indices (for each i, ..., j)
+ ref[indices[i, ..., j], ...] = updates[i, ..., j, ...]
+
+This operation outputs `ref` after the update is done.
+This makes it easier to chain operations that need to use the reset value.
+
+If `indices` contains duplicate entries, lexicographically later entries
+override earlier entries.
+
+Requires `updates.shape = indices.shape + ref.shape[1:]`.
+
+<div style="width:70%; margin:auto; margin-bottom:10px; margin-top:20px;">
+<img style="width:100%" src="../images/ScatterUpdate.png" alt>
+</div>
+
+##### Args:
+
+
+* <b>ref</b>: A mutable `Tensor`. Should be from a `Variable` node.
+* <b>indices</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
+ A tensor of indices into the first dimension of `ref`.
+* <b>updates</b>: A `Tensor`. Must have the same type as `ref`.
+ A tensor of updated values to store in `ref`.
+* <b>use_locking</b>: An optional `bool`. Defaults to `True`.
+ If True, the assignment will be protected by a lock;
+ otherwise the behavior is undefined, but may exhibit less contention.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ Same as `ref`. Returned as a convenience for operations that want
+ to use the updated values after the update is done.
+
+
+- - -
+
+### tf.scatter_add(ref, indices, updates, use_locking=None, name=None) <div class="md-anchor" id="scatter_add">{#scatter_add}</div>
+
+Adds sparse updates to a variable reference.
+
+This operation computes
+
+ # Scalar indices
+ ref[indices, ...] += updates[...]
+
+ # Vector indices (for each i)
+ ref[indices[i], ...] += updates[i, ...]
+
+ # High rank indices (for each i, ..., j)
+ ref[indices[i, ..., j], ...] += updates[i, ..., j, ...]
+
+This operation outputs `ref` after the update is done.
+This makes it easier to chain operations that need to use the reset value.
+
+Duplicate entries are handled correctly: if multiple `indices` reference
+the same location, their contributions add.
+
+Requires `updates.shape = indices.shape + ref.shape[1:]`.
+
+<div style="width:70%; margin:auto; margin-bottom:10px; margin-top:20px;">
+<img style="width:100%" src="../images/ScatterAdd.png" alt>
+</div>
+
+##### Args:
+
+
+* <b>ref</b>: A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `qint8`, `quint8`, `qint32`.
+ Should be from a `Variable` node.
+* <b>indices</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
+ A tensor of indices into the first dimension of `ref`.
+* <b>updates</b>: A `Tensor`. Must have the same type as `ref`.
+ A tensor of updated values to add to `ref`.
+* <b>use_locking</b>: An optional `bool`. Defaults to `False`.
+ If True, the addition will be protected by a lock;
+ otherwise the behavior is undefined, but may exhibit less contention.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ Same as `ref`. Returned as a convenience for operations that want
+ to use the updated values after the update is done.
+
+
+- - -
+
+### tf.scatter_sub(ref, indices, updates, use_locking=None, name=None) <div class="md-anchor" id="scatter_sub">{#scatter_sub}</div>
+
+Subtracts sparse updates to a variable reference.
+
+ # Scalar indices
+ ref[indices, ...] -= updates[...]
+
+ # Vector indices (for each i)
+ ref[indices[i], ...] -= updates[i, ...]
+
+ # High rank indices (for each i, ..., j)
+ ref[indices[i, ..., j], ...] -= updates[i, ..., j, ...]
+
+This operation outputs `ref` after the update is done.
+This makes it easier to chain operations that need to use the reset value.
+
+Duplicate entries are handled correctly: if multiple `indices` reference
+the same location, their (negated) contributions add.
+
+Requires `updates.shape = indices.shape + ref.shape[1:]`.
+
+<div style="width:70%; margin:auto; margin-bottom:10px; margin-top:20px;">
+<img style="width:100%" src="../images/ScatterSub.png" alt>
+</div>
+
+##### Args:
+
+
+* <b>ref</b>: A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `qint8`, `quint8`, `qint32`.
+ Should be from a `Variable` node.
+* <b>indices</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
+ A tensor of indices into the first dimension of `ref`.
+* <b>updates</b>: A `Tensor`. Must have the same type as `ref`.
+ A tensor of updated values to subtract from `ref`.
+* <b>use_locking</b>: An optional `bool`. Defaults to `False`.
+ If True, the subtraction will be protected by a lock;
+ otherwise the behavior is undefined, but may exhibit less contention.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ Same as `ref`. Returned as a convenience for operations that want
+ to use the updated values after the update is done.
+
+
+- - -
+
+### tf.sparse_mask(a, mask_indices, name=None) <div class="md-anchor" id="sparse_mask">{#sparse_mask}</div>
+
+Masks elements of `IndexedSlices`.
+
+Given an `IndexedSlices` instance `a`, returns another `IndexedSlices` that
+contains a subset of the slices of `a`. Only the slices at indices specified
+in `mask_indices` are returned.
+
+This is useful when you need to extract a subset of slices in an
+`IndexedSlices` object.
+
+For example:
+
+```python
+# `a` contains slices at indices [12, 26, 37, 45] from a large tensor
+# with shape [1000, 10]
+a.indices => [12, 26, 37, 45]
+tf.shape(a.values) => [4, 10]
+
+# `b` will be the subset of `a` slices at its second and third indices, so
+# we want to mask off its first and last indices (which are at absolute
+# indices 12, 45)
+b = tf.sparse_mask(a, [12, 45])
+
+b.indices => [26, 37]
+tf.shape(b.values) => [2, 10]
+
+```
+
+##### Args:
+
+
+* <b>a</b>: An `IndexedSlices` instance.
+* <b>mask_indices</b>: Indices of elements to mask.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The masked `IndexedSlices` instance.
+
+
+- - -
+
+### class tf.IndexedSlices <div class="md-anchor" id="IndexedSlices">{#IndexedSlices}</div>
+
+A sparse representation of a set of tensor slices at given indices.
+
+This class is a simple wrapper for a pair of `Tensor` objects:
+
+* `values`: A `Tensor` of any dtype with shape `[D0, D1, ..., Dn]`.
+* `indices`: A 1-D integer `Tensor` with shape `[D0]`.
+
+An `IndexedSlices` is typically used to represent a subset of a larger
+tensor `dense` of shape `[LARGE0, D1, .. , DN]` where `LARGE0 >> D0`.
+The values in `indices` are the indices in the first dimension of
+the slices that have been extracted from the larger tensor.
+
+The dense tensor `dense` represented by an `IndexedSlices` `slices` has
+
+```python
+dense[slices.indices[i], :, :, :, ...] = slices.values[i, :, :, :, ...]
+```
+
+The `IndexedSlices` class is used principally in the definition of
+gradients for operations that have sparse gradients
+(e.g. [`tf.gather`](array_ops.md#gather)).
+
+Contrast this representation with
+[`SparseTensor`](sparse_ops.md#SparseTensor),
+which uses multi-dimensional indices and scalar values.
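+
+For example, a sketch of constructing an `IndexedSlices` by hand (the shapes
+and values are illustrative; in practice these objects usually come from the
+gradient machinery rather than from direct construction):
+
+```python
+values = tf.constant([[1.0, 2.0], [3.0, 4.0]])  # Two slices of width 2.
+indices = tf.constant([0, 3])                   # Their rows in the dense tensor.
+slices = tf.IndexedSlices(values, indices, dense_shape=tf.constant([10, 2]))
+```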
+
+- - -
+
+#### tf.IndexedSlices.__init__(values, indices, dense_shape=None) {#IndexedSlices.__init__}
+
+Creates an `IndexedSlices`.
+
+
+
+- - -
+
+#### tf.IndexedSlices.values {#IndexedSlices.values}
+
+A `Tensor` containing the values of the slices.
+
+- - -
+
+#### tf.IndexedSlices.indices {#IndexedSlices.indices}
+
+A 1-D `Tensor` containing the indices of the slices.
+
+- - -
+
+#### tf.IndexedSlices.dense_shape {#IndexedSlices.dense_shape}
+
+A 1-D `Tensor` containing the shape of the corresponding dense tensor.
+
+
+- - -
+
+#### tf.IndexedSlices.name {#IndexedSlices.name}
+
+The name of this `IndexedSlices`.
+
+- - -
+
+#### tf.IndexedSlices.dtype {#IndexedSlices.dtype}
+
+The `DType` of elements in this tensor.
+
+- - -
+
+#### tf.IndexedSlices.device {#IndexedSlices.device}
+
+The name of the device on which `values` will be produced, or `None`.
+
+- - -
+
+#### tf.IndexedSlices.op {#IndexedSlices.op}
+
+The `Operation` that produces `values` as an output.
+
+
diff --git a/tensorflow/g3doc/api_docs/python/train.md b/tensorflow/g3doc/api_docs/python/train.md
new file mode 100644
index 0000000000..0c88968c5d
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/train.md
@@ -0,0 +1,1825 @@
+<!-- This file is machine generated: DO NOT EDIT! -->
+
+# Training
+<!-- TOC-BEGIN This section is generated by neural network: DO NOT EDIT! -->
+## Contents
+* [Optimizers.](#AUTOGENERATED-optimizers.)
+ * [class tf.train.Optimizer](#Optimizer)
+ * [Usage](#AUTOGENERATED-usage)
+ * [Processing gradients before applying them.](#AUTOGENERATED-processing-gradients-before-applying-them.)
+ * [Gating Gradients](#AUTOGENERATED-gating-gradients)
+ * [Slots](#AUTOGENERATED-slots)
+ * [class tf.train.GradientDescentOptimizer](#GradientDescentOptimizer)
+ * [class tf.train.AdagradOptimizer](#AdagradOptimizer)
+ * [class tf.train.MomentumOptimizer](#MomentumOptimizer)
+ * [class tf.train.AdamOptimizer](#AdamOptimizer)
+ * [class tf.train.FtrlOptimizer](#FtrlOptimizer)
+ * [class tf.train.RMSPropOptimizer](#RMSPropOptimizer)
+* [Gradient Computation.](#AUTOGENERATED-gradient-computation.)
+ * [tf.gradients(ys, xs, grad_ys=None, name='gradients', colocate_gradients_with_ops=False, gate_gradients=False, aggregation_method=None)](#gradients)
+ * [class tf.AggregationMethod](#AggregationMethod)
+ * [tf.stop_gradient(input, name=None)](#stop_gradient)
+* [Gradient Clipping](#AUTOGENERATED-gradient-clipping)
+ * [tf.clip_by_value(t, clip_value_min, clip_value_max, name=None)](#clip_by_value)
+ * [tf.clip_by_norm(t, clip_norm, name=None)](#clip_by_norm)
+ * [tf.clip_by_average_norm(t, clip_norm, name=None)](#clip_by_average_norm)
+ * [tf.clip_by_global_norm(t_list, clip_norm, use_norm=None, name=None)](#clip_by_global_norm)
+ * [tf.global_norm(t_list, name=None)](#global_norm)
+* [Decaying the learning rate.](#AUTOGENERATED-decaying-the-learning-rate.)
+ * [tf.train.exponential_decay(learning_rate, global_step, decay_steps, decay_rate, staircase=False, name=None)](#exponential_decay)
+* [Moving Averages.](#AUTOGENERATED-moving-averages.)
+ * [class tf.train.ExponentialMovingAverage](#ExponentialMovingAverage)
+* [Coordinator and QueueRunner.](#AUTOGENERATED-coordinator-and-queuerunner.)
+ * [class tf.train.Coordinator](#Coordinator)
+ * [class tf.train.QueueRunner](#QueueRunner)
+ * [tf.train.add_queue_runner(qr, collection='queue_runners')](#add_queue_runner)
+ * [tf.train.start_queue_runners(sess=None, coord=None, daemon=True, start=True, collection='queue_runners')](#start_queue_runners)
+* [Summary Operations.](#AUTOGENERATED-summary-operations.)
+ * [tf.scalar_summary(tags, values, collections=None, name=None)](#scalar_summary)
+ * [tf.image_summary(tag, tensor, max_images=None, collections=None, name=None)](#image_summary)
+ * [tf.histogram_summary(tag, values, collections=None, name=None)](#histogram_summary)
+ * [tf.nn.zero_fraction(value, name=None)](#zero_fraction)
+ * [tf.merge_summary(inputs, collections=None, name=None)](#merge_summary)
+ * [tf.merge_all_summaries(key='summaries')](#merge_all_summaries)
+* [Adding Summaries to Event Files.](#AUTOGENERATED-adding-summaries-to-event-files.)
+ * [class tf.train.SummaryWriter](#SummaryWriter)
+ * [tf.train.summary_iterator(path)](#summary_iterator)
+* [Training utilities.](#AUTOGENERATED-training-utilities.)
+ * [tf.train.global_step(sess, global_step_tensor)](#global_step)
+ * [tf.train.write_graph(graph_def, logdir, name, as_text=True)](#write_graph)
+
+
+<!-- TOC-END This section was generated by neural network, THANKS FOR READING! -->
+
+This library provides a set of classes and functions that helps train models.
+
+## Optimizers. <div class="md-anchor" id="AUTOGENERATED-optimizers.">{#AUTOGENERATED-optimizers.}</div>
+
+The Optimizer base class provides methods to compute gradients for a loss and
+apply gradients to variables. A collection of subclasses implement classic
+optimization algorithms such as GradientDescent and Adagrad.
+
+You never instantiate the Optimizer class itself, but instead instantiate one
+of the subclasses.
+
+- - -
+
+### class tf.train.Optimizer <div class="md-anchor" id="Optimizer">{#Optimizer}</div>
+
+Base class for optimizers.
+
+This class defines the API to add Ops to train a model. You never use this
+class directly, but instead instantiate one of its subclasses such as
+`GradientDescentOptimizer`, `AdagradOptimizer`, or `MomentumOptimizer`.
+
+### Usage <div class="md-anchor" id="AUTOGENERATED-usage">{#AUTOGENERATED-usage}</div>
+
+```
+# Create an optimizer with the desired parameters.
+opt = GradientDescentOptimizer(learning_rate=0.1)
+# Add Ops to the graph to minimize a cost by updating a list of variables.
+# "cost" is a Tensor, and the list of variables contains variables.Variable
+# objects.
+opt_op = opt.minimize(cost, <list of variables>)
+```
+
+In the training program you will just have to run the returned Op.
+
+```
+# Execute opt_op to do one step of training:
+opt_op.run()
+```
+
+### Processing gradients before applying them. <div class="md-anchor" id="AUTOGENERATED-processing-gradients-before-applying-them.">{#AUTOGENERATED-processing-gradients-before-applying-them.}</div>
+
+Calling `minimize()` takes care of both computing the gradients and
+applying them to the variables. If you want to process the gradients
+before applying them you can instead use the optimizer in three steps:
+
+1. Compute the gradients with `compute_gradients()`.
+2. Process the gradients as you wish.
+3. Apply the processed gradients with `apply_gradients()`.
+
+Example:
+
+```
+# Create an optimizer.
+opt = GradientDescentOptimizer(learning_rate=0.1)
+
+# Compute the gradients for a list of variables.
+grads_and_vars = opt.compute_gradients(loss, <list of variables>)
+
+# grads_and_vars is a list of tuples (gradient, variable). Do whatever you
+# need to the 'gradient' part, for example cap them, etc.
+capped_grads_and_vars = [(MyCapper(gv[0]), gv[1]) for gv in grads_and_vars]
+
+# Ask the optimizer to apply the capped gradients.
+opt.apply_gradients(capped_grads_and_vars)
+```
+
+- - -
+
+#### tf.train.Optimizer.__init__(use_locking, name) {#Optimizer.__init__}
+
+Create a new Optimizer.
+
+This must be called by the constructors of subclasses.
+
+##### Args:
+
+
+* <b>use_locking</b>: Bool. If True, use locks to prevent concurrent updates
+  to variables.
+* <b>name</b>: A non-empty string. The name to use for accumulators created
+ for the optimizer.
+
+##### Raises:
+
+
+* <b>ValueError</b>: if name is malformed.
+
+
+
+- - -
+
+#### tf.train.Optimizer.minimize(loss, global_step=None, var_list=None, gate_gradients=1, name=None) {#Optimizer.minimize}
+
+Add operations to minimize 'loss' by updating 'var_list'.
+
+This method simply combines calls to compute_gradients() and
+apply_gradients(). If you want to process the gradients before applying them,
+call compute_gradients() and apply_gradients() explicitly instead of using
+this function.
+
+##### Args:
+
+
+* <b>loss</b>: A Tensor containing the value to minimize.
+* <b>global_step</b>: Optional Variable to increment by one after the
+ variables have been updated.
+* <b>var_list</b>: Optional list of variables.Variable to update to minimize
+ 'loss'. Defaults to the list of variables collected in the graph
+ under the key GraphKeys.TRAINABLE_VARIABLES.
+* <b>gate_gradients</b>: How to gate the computation of gradients. Can be
+ GATE_NONE, GATE_OP, or GATE_GRAPH.
+* <b>name</b>: Optional name for the returned operation.
+
+##### Returns:
+
+ An Operation that updates the variables in 'var_list'. If 'global_step'
+ was not None, that operation also increments global_step.
+
+##### Raises:
+
+
+* <b>ValueError</b>: if some of the variables are not variables.Variable objects.
+
+
+- - -
+
+#### tf.train.Optimizer.compute_gradients(loss, var_list=None, gate_gradients=1) {#Optimizer.compute_gradients}
+
+Compute gradients of "loss" for the variables in "var_list".
+
+This is the first part of minimize(). It returns a list
+of (gradient, variable) pairs where "gradient" is the gradient
+for "variable". Note that "gradient" can be a Tensor, a
+IndexedSlices, or None if there is no gradient for the
+given variable.
+
+##### Args:
+
+
+* <b>loss</b>: A Tensor containing the value to minimize.
+* <b>var_list</b>: Optional list of variables.Variable to update to minimize
+ "loss". Defaults to the list of variables collected in the graph
+  under the key GraphKeys.TRAINABLE_VARIABLES.
+* <b>gate_gradients</b>: How to gate the computation of gradients. Can be
+ GATE_NONE, GATE_OP, or GATE_GRAPH.
+
+##### Returns:
+
+ A list of (gradient, variable) pairs.
+
+##### Raises:
+
+
+* <b>TypeError</b>: If var_list contains anything other than variables.Variable objects.
+* <b>ValueError</b>: If some arguments are invalid.
+
+
+- - -
+
+#### tf.train.Optimizer.apply_gradients(grads_and_vars, global_step=None, name=None) {#Optimizer.apply_gradients}
+
+Apply gradients to variables.
+
+This is the second part of minimize(). It returns an Operation that
+applies gradients.
+
+##### Args:
+
+
+* <b>grads_and_vars</b>: List of (gradient, variable) pairs as returned by
+ compute_gradients().
+* <b>global_step</b>: Optional Variable to increment by one after the
+ variables have been updated.
+* <b>name</b>: Optional name for the returned operation. Defaults to the
+  name passed to the Optimizer constructor.
+
+##### Returns:
+
+ An Operation that applies the specified gradients. If 'global_step'
+ was not None, that operation also increments global_step.
+
+##### Raises:
+
+
+* <b>TypeError</b>: if grads_and_vars is malformed.
+
+
+
+### Gating Gradients <div class="md-anchor" id="AUTOGENERATED-gating-gradients">{#AUTOGENERATED-gating-gradients}</div>
+
+Both `minimize()` and `compute_gradients()` accept a `gate_gradients` argument
+that controls the degree of parallelism during the application of the
+gradients.
+
+The possible values are: `GATE_NONE`, `GATE_OP`, and `GATE_GRAPH`.
+
+<b>GATE_NONE</b>: Compute and apply gradients in parallel. This provides the
+maximum parallelism in execution, at the cost of some non-reproducibility in
+the results. For example, the two gradients of MatMul depend on the input
+values: with `GATE_NONE` one of the gradients could be applied to one of the
+inputs _before_ the other gradient is computed, resulting in non-reproducible
+results.
+
+<b>GATE_OP</b>: For each Op, make sure all gradients are computed before they
+are used. This prevents race conditions for Ops that generate gradients for
+multiple inputs where the gradients depend on the inputs.
+
+<b>GATE_GRAPH</b>: Make sure all gradients for all variables are computed
+before any one of them is used. This provides the least parallelism but can
+be useful if you want to process all gradients before applying any of them.
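+
+For example, a sketch of requesting full gating (assuming `cost` is a loss
+`Tensor` defined elsewhere, and that the `GATE_*` constants are exposed as
+attributes of the `Optimizer` class):
+
+```python
+opt = tf.train.GradientDescentOptimizer(learning_rate=0.1)
+# Compute every gradient before applying any of them.
+opt_op = opt.minimize(cost, gate_gradients=tf.train.Optimizer.GATE_GRAPH)
+```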
+
+### Slots <div class="md-anchor" id="AUTOGENERATED-slots">{#AUTOGENERATED-slots}</div>
+
+Some optimizer subclasses, such as `MomentumOptimizer` and `AdagradOptimizer`,
+allocate and manage additional variables associated with the variables to
+train. These are called <i>Slots</i>. Slots have names and you can ask the
+optimizer for the names of the slots that it uses. Once you have a slot name
+you can ask the optimizer for the variable it created to hold the slot value.
+
+This can be useful if you want to debug a training algorithm, report stats
+about the slots, etc.
+
+- - -
+
+#### tf.train.Optimizer.get_slot_names() {#Optimizer.get_slot_names}
+
+Return a list of the names of slots created by the Optimizer.
+
+See get_slot().
+
+##### Returns:
+
+ A list of strings.
+
+
+- - -
+
+#### tf.train.Optimizer.get_slot(var, name) {#Optimizer.get_slot}
+
+Return a slot named "name" created for "var" by the Optimizer.
+
+Some Optimizer subclasses use additional variables. For example
+Momentum and Adagrad use variables to accumulate updates. This method
+gives access to these Variables if for some reason you need them.
+
+Use get_slot_names() to get the list of slot names created by the Optimizer.
+
+##### Args:
+
+
+* <b>var</b>: A variable passed to minimize() or apply_gradients().
+* <b>name</b>: A string.
+
+##### Returns:
+
+ The Variable for the slot if it was created, None otherwise.
+
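+
+For example, a sketch of inspecting an optimizer's slots (`cost` and `v` are
+assumed to be a loss `Tensor` and a `Variable` defined elsewhere; the slot
+name 'momentum' is what `MomentumOptimizer` is expected to use, and should be
+confirmed with `get_slot_names()`):
+
+```python
+opt = tf.train.MomentumOptimizer(learning_rate=0.1, momentum=0.9)
+opt_op = opt.minimize(cost, var_list=[v])
+
+print opt.get_slot_names()           # e.g. ['momentum']
+accum = opt.get_slot(v, 'momentum')  # The Variable holding v's accumulator.
+```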
+
+
+
+- - -
+
+### class tf.train.GradientDescentOptimizer <div class="md-anchor" id="GradientDescentOptimizer">{#GradientDescentOptimizer}</div>
+
+Optimizer that implements the gradient descent algorithm.
+
+- - -
+
+#### tf.train.GradientDescentOptimizer.__init__(learning_rate, use_locking=False, name='GradientDescent') {#GradientDescentOptimizer.__init__}
+
+Construct a new gradient descent optimizer.
+
+##### Args:
+
+
+* <b>learning_rate</b>: A Tensor or a floating point value. The learning
+ rate to use.
+* <b>use_locking</b>: If True use locks for update operations.
+* <b>name</b>: Optional name prefix for the operations created when applying
+ gradients. Defaults to "GradientDescent".
+
+
+
+- - -
+
+### class tf.train.AdagradOptimizer <div class="md-anchor" id="AdagradOptimizer">{#AdagradOptimizer}</div>
+
+Optimizer that implements the Adagrad algorithm.
+
+- - -
+
+#### tf.train.AdagradOptimizer.__init__(learning_rate, initial_accumulator_value=0.1, use_locking=False, name='Adagrad') {#AdagradOptimizer.__init__}
+
+Construct a new Adagrad optimizer.
+
+##### Args:
+
+
+* <b>learning_rate</b>: A `Tensor` or a floating point value. The learning rate.
+* <b>initial_accumulator_value</b>: A floating point value.
+ Starting value for the accumulators, must be positive.
+* <b>use_locking</b>: If `True` use locks for update operations.
+* <b>name</b>: Optional name prefix for the operations created when applying
+ gradients. Defaults to "Adagrad".
+
+##### Raises:
+
+
+* <b>ValueError</b>: If the initial_accumulator_value is invalid.
+
+
+
+- - -
+
+### class tf.train.MomentumOptimizer <div class="md-anchor" id="MomentumOptimizer">{#MomentumOptimizer}</div>
+
+Optimizer that implements the Momentum algorithm.
+
+- - -
+
+#### tf.train.MomentumOptimizer.__init__(learning_rate, momentum, use_locking=False, name='Momentum') {#MomentumOptimizer.__init__}
+
+Construct a new Momentum optimizer.
+
+##### Args:
+
+
+* <b>learning_rate</b>: A `Tensor` or a floating point value. The learning rate.
+* <b>momentum</b>: A `Tensor` or a floating point value. The momentum.
+* <b>use_locking</b>: If `True` use locks for update operations.
+* <b>name</b>: Optional name prefix for the operations created when applying
+ gradients. Defaults to "Momentum".
+
+
+
+- - -
+
+### class tf.train.AdamOptimizer <div class="md-anchor" id="AdamOptimizer">{#AdamOptimizer}</div>
+
+Optimizer that implements the Adam algorithm.
+
+- - -
+
+#### tf.train.AdamOptimizer.__init__(learning_rate=0.001, beta1=0.9, beta2=0.999, epsilon=1e-08, use_locking=False, name='Adam') {#AdamOptimizer.__init__}
+
+Construct a new Adam optimizer.
+
+Implementation is based on: http://arxiv.org/pdf/1412.6980v7.pdf
+
+Initialization:
+
+```
+m_0 <- 0 (Initialize initial 1st moment vector)
+v_0 <- 0 (Initialize initial 2nd moment vector)
+t <- 0 (Initialize timestep)
+```
+
+The update rule for `variable` with gradient `g` uses an optimization
+described at the end of Section 2 of the paper:
+
+```
+t <- t + 1
+lr_t <- learning_rate * sqrt(1 - beta2^t) / (1 - beta1^t)
+
+m_t <- beta1 * m_{t-1} + (1 - beta1) * g
+v_t <- beta2 * v_{t-1} + (1 - beta2) * g * g
+variable <- variable - lr_t * m_t / (sqrt(v_t) + epsilon)
+```
+
+The default value of 1e-8 for epsilon might not be a good default in
+general. For example, when training an Inception network on ImageNet a
+current good choice is 1.0 or 0.1.
+
+##### Args:
+
+
+* <b>learning_rate</b>: A Tensor or a floating point value. The learning rate.
+* <b>beta1</b>: A float value or a constant float tensor.
+ The exponential decay rate for the 1st moment estimates.
+* <b>beta2</b>: A float value or a constant float tensor.
+  The exponential decay rate for the 2nd moment estimates.
+* <b>epsilon</b>: A small constant for numerical stability.
+* <b>use_locking</b>: If True use locks for update operations.
+* <b>name</b>: Optional name for the operations created when applying gradients.
+ Defaults to "Adam".
+
+
+
+- - -
+
+### class tf.train.FtrlOptimizer <div class="md-anchor" id="FtrlOptimizer">{#FtrlOptimizer}</div>
+
+Optimizer that implements the FTRL algorithm.
+
+- - -
+
+#### tf.train.FtrlOptimizer.__init__(learning_rate, learning_rate_power=-0.5, initial_accumulator_value=0.1, l1_regularization_strength=0.0, l2_regularization_strength=0.0, use_locking=False, name='Ftrl') {#FtrlOptimizer.__init__}
+
+Construct a new FTRL optimizer.
+
+The Ftrl-proximal algorithm, short for Follow-the-Regularized-Leader,
+is described in the paper [Ad Click Prediction: a View from the Trenches](
+https://www.eecs.tufts.edu/~dsculley/papers/ad-click-prediction.pdf).
+
+It can give a good performance vs. sparsity tradeoff.
+
+Ftrl-proximal uses its own global base learning rate and can behave like
+Adagrad with `learning_rate_power=-0.5`, or like gradient descent with
+`learning_rate_power=0.0`.
+
+The effective learning rate is adjusted per parameter, relative to this
+base learning rate as:
+
+```
+effective_learning_rate_i = (learning_rate /
+ pow(k + summed_squared_gradients_for_i, learning_rate_power));
+```
+
+where k is the small constant `initial_accumulator_value`.
+
+Note that the actual regularization coefficient of `|w|^2` in the objective
+function is `1 / lambda_2` when you specify `l2 = lambda_2` as an argument to
+this function.
+
+##### Args:
+
+
+* <b>learning_rate</b>: A float value or a constant float `Tensor`.
+* <b>learning_rate_power</b>: A float value, must be less than or equal to zero.
+* <b>initial_accumulator_value</b>: The starting value for accumulators.
+ Only positive values are allowed.
+* <b>l1_regularization_strength</b>: A float value, must be greater than or
+ equal to zero.
+* <b>l2_regularization_strength</b>: A float value, must be greater than or
+ equal to zero.
+* <b>use_locking</b>: If `True` use locks for update operations.
+* <b>name</b>: Optional name prefix for the operations created when applying
+ gradients. Defaults to "Ftrl".
+
+##### Raises:
+
+
+* <b>ValueError</b>: if one of the arguments is invalid.
+
+
+
+- - -
+
+### class tf.train.RMSPropOptimizer <div class="md-anchor" id="RMSPropOptimizer">{#RMSPropOptimizer}</div>
+
+Optimizer that implements the RMSProp algorithm.
+
+- - -
+
+#### tf.train.RMSPropOptimizer.__init__(learning_rate, decay, momentum=0.0, epsilon=1e-10, use_locking=False, name='RMSProp') {#RMSPropOptimizer.__init__}
+
+Construct a new RMSProp optimizer.
+
+##### Args:
+
+
+* <b>learning_rate</b>: A Tensor or a floating point value. The learning rate.
+* <b>decay</b>: discounting factor for the history/coming gradient
+* <b>momentum</b>: a scalar tensor.
+* <b>epsilon</b>: small value to avoid zero denominator.
+* <b>use_locking</b>: If True use locks for update operations.
+* <b>name</b>: Optional name prefix for the operations created when applying
+ gradients. Defaults to "RMSProp".
+
+
+
+
+## Gradient Computation. <div class="md-anchor" id="AUTOGENERATED-gradient-computation.">{#AUTOGENERATED-gradient-computation.}</div>
+
+TensorFlow provides functions to compute the derivatives for a given
+TensorFlow computation graph, adding operations to the graph. The
+optimizer classes automatically compute derivatives on your graph, but
+creators of new Optimizers or expert users can call the lower-level
+functions below.
+
+- - -
+
+### tf.gradients(ys, xs, grad_ys=None, name='gradients', colocate_gradients_with_ops=False, gate_gradients=False, aggregation_method=None) <div class="md-anchor" id="gradients">{#gradients}</div>
+
+Constructs symbolic partial derivatives of `ys` w.r.t. x in `xs`.
+
+`ys` and `xs` are each a `Tensor` or a list of tensors. `grad_ys`
+is a list of `Tensor`, holding the gradients received by the
+`ys`. The list must be the same length as `ys`.
+
+`gradients()` adds ops to the graph to output the partial
+derivatives of `ys` with respect to `xs`. It returns a list of
+`Tensor` of length `len(xs)` where each tensor is the `sum(dy/dx)`
+for y in `ys`.
+
+`grad_ys` is a list of tensors of the same length as `ys` that holds
+the initial gradients for each y in `ys`. When `grad_ys` is None,
+we fill in a tensor of '1's of the shape of y for each y in `ys`. A
+user can provide their own initial `grad_ys` to compute the
+derivatives using a different initial gradient for each y (e.g., if
+one wanted to weight the gradient differently for each value in
+each y).
+
+##### Args:
+
+
+* <b>ys</b>: A `Tensor` or list of tensors to be differentiated.
+* <b>xs</b>: A `Tensor` or list of tensors to be used for differentiation.
+* <b>grad_ys</b>: Optional. A `Tensor` or list of tensors the same size as
+ `ys` and holding the gradients computed for each y in `ys`.
+* <b>name</b>: Optional name to use for grouping all the gradient ops together.
+  Defaults to 'gradients'.
+* <b>colocate_gradients_with_ops</b>: If True, try colocating gradients with
+ the corresponding op.
+* <b>gate_gradients</b>: If True, add a tuple around the gradients returned
+  for each operation. This avoids some race conditions.
+* <b>aggregation_method</b>: Specifies the method used to combine gradient terms.
+ Accepted values are constants defined in the class `AggregationMethod`.
+
+##### Returns:
+
+ A list of `sum(dy/dx)` for each x in `xs`.
+
+##### Raises:
+
+
+* <b>LookupError</b>: if one of the operations between `x` and `y` does not
+ have a registered gradient function.
+* <b>ValueError</b>: if the arguments are invalid.
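+
+For example, a minimal sketch differentiating `y = x * x` (the values are
+illustrative):
+
+```python
+x = tf.constant(3.0)
+y = x * x
+grads = tf.gradients(y, [x])  # A list with one Tensor: dy/dx = 2 * x.
+```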
+
+
+- - -
+
+### class tf.AggregationMethod <div class="md-anchor" id="AggregationMethod">{#AggregationMethod}</div>
+
+A class listing aggregation methods used to combine gradients.
+
+Computing partial derivatives can require aggregating gradient
+contributions. This class lists the various methods that can
+be used to combine gradients in the graph:
+
+* `ADD_N`: All of the gradient terms are summed as part of one
+ operation using the "AddN" op. It has the property that all
+ gradients must be ready before any aggregation is performed.
+* `DEFAULT`: The system-chosen default aggregation method.
+
+
+- - -
+
+### tf.stop_gradient(input, name=None) <div class="md-anchor" id="stop_gradient">{#stop_gradient}</div>
+
+Stops gradient computation.
+
+When executed in a graph, this op outputs its input tensor as-is.
+
+When building ops to compute gradients, this op prevents the contribution of
+its inputs to be taken into account. Normally, the gradient generator adds ops
+to a graph to compute the derivatives of a specified 'loss' by recursively
+finding out inputs that contributed to its computation. If you insert this op
+in the graph, its inputs are masked from the gradient generator. They are not
+taken into account for computing gradients.
+
+This is useful any time you want to compute a value with TensorFlow but need
+to pretend that the value was a constant. Some examples include:
+
+* The *EM* algorithm where the *M-step* should not involve backpropagation
+ through the output of the *E-step*.
+* Contrastive divergence training of Boltzmann machines where, when
+ differentiating the energy function, the training must not backpropagate
+ through the graph that generated the samples from the model.
+* Adversarial training, where no backprop should happen through the adversarial
+ example generation process.
+
+##### Args:
+
+
+* <b>input</b>: A `Tensor`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A `Tensor`. Has the same type as `input`.
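+
+For example, a sketch of holding part of a computation fixed during backprop
+(`predictions` and `computed_targets` stand for tensors defined elsewhere):
+
+```python
+targets = tf.stop_gradient(computed_targets)
+loss = tf.reduce_sum(tf.square(predictions - targets))
+# Gradients of `loss` do not flow back into `computed_targets`.
+```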
+
+
+
+
+## Gradient Clipping <div class="md-anchor" id="AUTOGENERATED-gradient-clipping">{#AUTOGENERATED-gradient-clipping}</div>
+
+TensorFlow provides several operations that you can use to add clipping
+functions to your graph. You can use these functions to perform general data
+clipping, but they're particularly useful for handling exploding or vanishing
+gradients.
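+
+For example, a sketch of clipping gradients by their global norm before
+applying them (`loss` and the clipping threshold are illustrative):
+
+```python
+opt = tf.train.GradientDescentOptimizer(learning_rate=0.1)
+grads_and_vars = opt.compute_gradients(loss)
+grads = [gv[0] for gv in grads_and_vars]
+variables = [gv[1] for gv in grads_and_vars]
+clipped, global_norm = tf.clip_by_global_norm(grads, clip_norm=5.0)
+train_op = opt.apply_gradients(zip(clipped, variables))
+```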
+
+- - -
+
+### tf.clip_by_value(t, clip_value_min, clip_value_max, name=None) <div class="md-anchor" id="clip_by_value">{#clip_by_value}</div>
+
+Clips tensor values to a specified min and max.
+
+Given a tensor `t`, this operation returns a tensor of the same type and
+shape as `t` with its values clipped to `clip_value_min` and `clip_value_max`.
+Any values less than `clip_value_min` are set to `clip_value_min`. Any values
+greater than `clip_value_max` are set to `clip_value_max`.
+
+##### Args:
+
+
+* <b>t</b>: A `Tensor`.
+* <b>clip_value_min</b>: A 0-D (scalar) `Tensor`. The minimum value to clip by.
+* <b>clip_value_max</b>: A 0-D (scalar) `Tensor`. The maximum value to clip by.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A clipped `Tensor`.
+
+
+- - -
+
+### tf.clip_by_norm(t, clip_norm, name=None) <div class="md-anchor" id="clip_by_norm">{#clip_by_norm}</div>
+
+Clips tensor values to a maximum L2-norm.
+
+Given a tensor `t`, and a maximum clip value `clip_norm`, this operation
+normalizes `t` so that its L2-norm is less than or equal to `clip_norm`.
+Specifically, if the L2-norm is already less than or equal to `clip_norm`,
+then `t` is not modified. If the L2-norm is greater than `clip_norm`, then
+this operation returns a tensor of the same type and shape as `t` with its
+values set to:
+
+`t * clip_norm / l2norm(t)`
+
+In this case, the L2-norm of the output tensor is `clip_norm`.
+
+This operation is typically used to clip gradients before applying them with
+an optimizer.
+
+##### Args:
+
+
+* <b>t</b>: A `Tensor`.
+* <b>clip_norm</b>: A 0-D (scalar) `Tensor` > 0. A maximum clipping value.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A clipped `Tensor`.
+
+
+- - -
+
+### tf.clip_by_average_norm(t, clip_norm, name=None) <div class="md-anchor" id="clip_by_average_norm">{#clip_by_average_norm}</div>
+
+Clips tensor values to a maximum average L2-norm.
+
+Given a tensor `t`, and a maximum clip value `clip_norm`, this operation
+normalizes `t` so that its average L2-norm is less than or equal to
+`clip_norm`. Specifically, if the average L2-norm is already less than or
+equal to `clip_norm`, then `t` is not modified. If the average L2-norm is
+greater than `clip_norm`, then this operation returns a tensor of the same
+type and shape as `t` with its values set to:
+
+`t * clip_norm / l2norm_avg(t)`
+
+In this case, the average L2-norm of the output tensor is `clip_norm`.
+
+This operation is typically used to clip gradients before applying them with
+an optimizer.
+
+##### Args:
+
+
+* <b>t</b>: A `Tensor`.
+* <b>clip_norm</b>: A 0-D (scalar) `Tensor` > 0. A maximum clipping value.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A clipped `Tensor`.
+
+
+- - -
+
+### tf.clip_by_global_norm(t_list, clip_norm, use_norm=None, name=None) <div class="md-anchor" id="clip_by_global_norm">{#clip_by_global_norm}</div>
+
+Clips values of multiple tensors by the ratio of the sum of their norms.
+
+Given a tuple or list of tensors `t_list`, and a clipping ratio `clip_norm`,
+this operation returns a list of clipped tensors `list_clipped`
+and the global norm (`global_norm`) of all tensors in `t_list`. Optionally,
+if you've already computed the global norm for `t_list`, you can specify
+the global norm with `use_norm`.
+
+To perform the clipping, the values `t_list[i]` are set to:
+
+`t_list[i] * clip_norm / max(global_norm, clip_norm)`
+
+where:
+
+`global_norm = sqrt(sum([l2norm(t)**2 for t in t_list]))`
+
+If `clip_norm > global_norm` then the entries in `t_list` remain as they are,
+otherwise they're all shrunk by the global ratio.
+
+Any entries of `t_list` that are `None` are ignored.
+
+This is the correct way to perform gradient clipping (for example, see
+R. Pascanu, T. Mikolov, and Y. Bengio, "On the difficulty of training
+Recurrent Neural Networks". http://arxiv.org/abs/1211.5063)
+
+However, it is slower than `clip_by_norm()` because all the parameters must be
+ready before the clipping operation can be performed.
+
+##### Args:
+
+
+* <b>t_list</b>: A tuple or list of mixed `Tensors`, `IndexedSlices`, or None.
+* <b>clip_norm</b>: A 0-D (scalar) `Tensor` > 0. The clipping ratio.
+* <b>use_norm</b>: A 0-D (scalar) `Tensor` of type `float` (optional). The global
+ norm to use. If not provided, `global_norm()` is used to compute the norm.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+
+* <b>list_clipped</b>: A list of `Tensors` of the same type as `t_list`.
+* <b>global_norm</b>: A 0-D (scalar) `Tensor` representing the global norm.
+
+##### Raises:
+
+
+* <b>TypeError</b>: If `t_list` is not a sequence.
+
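+A common pattern is to clip between `compute_gradients()` and
+`apply_gradients()`. This is a sketch, not the only way to wire it up;
+`loss` is assumed to be a scalar loss tensor defined elsewhere:
+
+```python
+opt = tf.train.GradientDescentOptimizer(learning_rate=0.1)
+grads_and_vars = opt.compute_gradients(loss)
+grads, tvars = zip(*grads_and_vars)
+# Rescale all gradients together if their global norm exceeds 5.0.
+clipped_grads, global_norm = tf.clip_by_global_norm(grads, clip_norm=5.0)
+train_op = opt.apply_gradients(zip(clipped_grads, tvars))
+```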
+
+- - -
+
+### tf.global_norm(t_list, name=None) <div class="md-anchor" id="global_norm">{#global_norm}</div>
+
+Computes the global norm of multiple tensors.
+
+Given a tuple or list of tensors `t_list`, this operation returns the
+global norm of the elements in all tensors in `t_list`. The global norm is
+computed as:
+
+`global_norm = sqrt(sum([l2norm(t)**2 for t in t_list]))`
+
+Any entries in `t_list` that are `None` are ignored.
+
+##### Args:
+
+
+* <b>t_list</b>: A tuple or list of mixed `Tensors`, `IndexedSlices`, or None.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A 0-D (scalar) `Tensor` of type `float`.
+
+##### Raises:
+
+
+* <b>TypeError</b>: If `t_list` is not a sequence.
+
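+A quick worked sketch (values are illustrative): for tensors `[3.0, 4.0]`
+and `[12.0]` the individual L2-norms are 5 and 12, so the global norm is
+`sqrt(5**2 + 12**2) = 13`:
+
+```python
+import tensorflow as tf
+
+gn = tf.global_norm([tf.constant([3.0, 4.0]), tf.constant([12.0])])
+with tf.Session() as sess:
+  print(sess.run(gn))  # 13.0
+```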
+
+
+## Decaying the learning rate. <div class="md-anchor" id="AUTOGENERATED-decaying-the-learning-rate.">{#AUTOGENERATED-decaying-the-learning-rate.}</div>
+- - -
+
+### tf.train.exponential_decay(learning_rate, global_step, decay_steps, decay_rate, staircase=False, name=None) <div class="md-anchor" id="exponential_decay">{#exponential_decay}</div>
+
+Applies exponential decay to the learning rate.
+
+When training a model, it is often recommended to lower the learning rate as
+the training progresses. This function applies an exponential decay function
+to a provided initial learning rate. It requires a `global_step` value to
+compute the decayed learning rate. You can just pass a TensorFlow variable
+that you increment at each training step.
+
+The function returns the decayed learning rate. It is computed as:
+
+```python
+decayed_learning_rate = learning_rate *
+ decay_rate ^ (global_step / decay_steps)
+```
+
+If the argument `staircase` is `True`, then `global_step / decay_steps` is an
+integer division and the decayed learning rate follows a staircase function.
+
+Example: decay every 100000 steps with a base of 0.96:
+
+```python
+...
+global_step = tf.Variable(0, trainable=False)
+starter_learning_rate = 0.1
+learning_rate = tf.train.exponential_decay(starter_learning_rate, global_step,
+                                           100000, 0.96, staircase=True)
+optimizer = tf.train.GradientDescentOptimizer(learning_rate)
+# Passing global_step to minimize() will increment it at each step.
+optimizer.minimize(...my loss..., global_step=global_step)
+```
+
+##### Args:
+
+
+* <b>learning_rate</b>: A scalar `float32` or `float64` `Tensor` or a
+ Python number. The initial learning rate.
+* <b>global_step</b>: A scalar `int32` or `int64` `Tensor` or a Python number.
+ Global step to use for the decay computation. Must not be negative.
+* <b>decay_steps</b>: A scalar `int32` or `int64` `Tensor` or a Python number.
+ Must be positive. See the decay computation above.
+* <b>decay_rate</b>: A scalar `float32` or `float64` `Tensor` or a
+ Python number. The decay rate.
+* <b>staircase</b>: Boolean. If `True`, decay the learning rate at discrete intervals.
+* <b>name</b>: String. Optional name of the operation. Defaults to 'ExponentialDecay'.
+
+##### Returns:
+
+ A scalar `Tensor` of the same type as `learning_rate`. The decayed
+ learning rate.
+
+
+
+## Moving Averages. <div class="md-anchor" id="AUTOGENERATED-moving-averages.">{#AUTOGENERATED-moving-averages.}</div>
+
+Some training algorithms, such as GradientDescent and Momentum, often benefit
+from maintaining a moving average of variables during optimization. Using the
+moving averages for evaluations often improves results significantly.
+
+- - -
+
+### class tf.train.ExponentialMovingAverage <div class="md-anchor" id="ExponentialMovingAverage">{#ExponentialMovingAverage}</div>
+
+Maintains moving averages of variables by employing an exponential decay.
+
+When training a model, it is often beneficial to maintain moving averages of
+the trained parameters. Evaluations that use averaged parameters sometimes
+produce significantly better results than the final trained values.
+
+The `apply()` method adds shadow copies of trained variables and adds ops that
+maintain a moving average of the trained variables in their shadow copies.
+It is used when building the training model. The ops that maintain moving
+averages are typically run after each training step.
+The `average()` and `average_name()` methods give access to the shadow
+variables and their names. They are useful when building an evaluation
+model, or when restoring a model from a checkpoint file. They help use the
+moving averages in place of the last trained values for evaluations.
+
+The moving averages are computed using exponential decay. You specify the
+decay value when creating the `ExponentialMovingAverage` object. The shadow
+variables are initialized with the same initial values as the trained
+variables. When you run the ops to maintain the moving averages, each
+shadow variable is updated with the formula:
+
+ `shadow_variable -= (1 - decay) * (shadow_variable - variable)`
+
+This is mathematically equivalent to the classic formula below, but the use
+of an `assign_sub` op (the `"-="` in the formula) allows concurrent lockless
+updates to the variables:
+
+ `shadow_variable = decay * shadow_variable + (1 - decay) * variable`
+
+Reasonable values for `decay` are close to 1.0, typically in the
+multiple-nines range: 0.999, 0.9999, etc.
+
+Example usage when creating a training model:
+
+```python
+# Create variables.
+var0 = tf.Variable(...)
+var1 = tf.Variable(...)
+# ... use the variables to build a training model...
+...
+# Create an op that applies the optimizer. This is what we usually
+# would use as a training op.
+opt_op = opt.minimize(my_loss, [var0, var1])
+
+# Create an ExponentialMovingAverage object
+ema = tf.train.ExponentialMovingAverage(decay=0.9999)
+
+# Create the shadow variables, and add ops to maintain moving averages
+# of var0 and var1.
+maintain_averages_op = ema.apply([var0, var1])
+
+# Create an op that will update the moving averages after each training
+# step. This is what we will use in place of the usual training op.
+with tf.control_dependencies([opt_op]):
+ training_op = tf.group(maintain_averages_op)
+
+...train the model by running training_op...
+```
+
+There are two ways to use the moving averages for evaluations:
+
+* Build a model that uses the shadow variables instead of the variables.
+ For this, use the `average()` method which returns the shadow variable
+ for a given variable.
+* Build a model normally but load the checkpoint files to evaluate by using
+ the shadow variable names. For this use the `average_name()` method. See
+ the [Saver class](train.md#Saver) for more information on restoring saved
+ variables.
+
+Example of restoring the shadow variable values:
+
+```python
+# Create a Saver that loads variables from their saved shadow values.
+shadow_var0_name = ema.average_name(var0)
+shadow_var1_name = ema.average_name(var1)
+saver = tf.train.Saver({shadow_var0_name: var0, shadow_var1_name: var1})
+saver.restore(...checkpoint filename...)
+# var0 and var1 now hold the moving average values
+```
+
+- - -
+
+#### tf.train.ExponentialMovingAverage.__init__(decay, num_updates=None, name='ExponentialMovingAverage') {#ExponentialMovingAverage.__init__}
+
+Creates a new ExponentialMovingAverage object.
+
+The `apply()` method has to be called to create shadow variables and add
+ops to maintain moving averages.
+
+The optional `num_updates` parameter allows one to tweak the decay rate
+dynamically. It is typical to pass the count of training steps, usually
+kept in a variable that is incremented at each step, in which case the
+decay rate is lower at the start of training. This makes moving averages
+move faster. If passed, the actual decay rate used is:
+
+ `min(decay, (1 + num_updates) / (10 + num_updates))`
+
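+For example, with `decay=0.9999` and `num_updates=100`, the actual decay rate
+is `min(0.9999, 101/110)`, about 0.918, so the averages track the variables
+much more closely early in training.
+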
+##### Args:
+
+
+* <b>decay</b>: Float. The decay to use.
+* <b>num_updates</b>: Optional count of number of updates applied to variables.
+* <b>name</b>: String. Optional prefix name to use for the name of ops added in
+  `apply()`.
+
+
+- - -
+
+#### tf.train.ExponentialMovingAverage.apply(var_list=None) {#ExponentialMovingAverage.apply}
+
+Maintains moving averages of variables.
+
+`var_list` must be a list of `Variable` or `Tensor` objects. This method
+creates shadow variables for all elements of `var_list`. Shadow variables
+for `Variable` objects are initialized to the variable's initial value.
+For `Tensor` objects, the shadow variables are initialized to 0.
+
+Shadow variables are created with `trainable=False` and added to the
+`GraphKeys.ALL_VARIABLES` collection. They will be returned by calls to
+`tf.all_variables()`.
+
+Returns an op that updates all shadow variables as described above.
+
+Note that `apply()` can be called multiple times with different lists of
+variables.
+
+##### Args:
+
+
+* <b>var_list</b>: A list of Variable or Tensor objects. The variables
+ and Tensors must be of types float32 or float64.
+
+##### Returns:
+
+ An Operation that updates the moving averages.
+
+##### Raises:
+
+
+* <b>TypeError</b>: If the arguments are not all float32 or float64.
+* <b>ValueError</b>: If the moving average of one of the variables is already
+ being computed.
+
+
+- - -
+
+#### tf.train.ExponentialMovingAverage.average_name(var) {#ExponentialMovingAverage.average_name}
+
+Returns the name of the `Variable` holding the average for `var`.
+
+The typical scenario for `ExponentialMovingAverage` is to compute moving
+averages of variables during training, and restore the variables from the
+computed moving averages during evaluations.
+
+To restore variables, you have to know the name of the shadow variables.
+That name and the original variable can then be passed to a `Saver()` object
+to restore the variable from the moving average value with:
+ `saver = tf.train.Saver({ema.average_name(var): var})`
+
+`average_name()` can be called whether or not `apply()` has been called.
+
+##### Args:
+
+
+* <b>var</b>: A `Variable` object.
+
+##### Returns:
+
+ A string: the name of the variable that will be used or was used
+  by the `ExponentialMovingAverage` class to hold the moving average of
+ `var`.
+
+
+- - -
+
+#### tf.train.ExponentialMovingAverage.average(var) {#ExponentialMovingAverage.average}
+
+Returns the `Variable` holding the average of `var`.
+
+##### Args:
+
+
+* <b>var</b>: A `Variable` object.
+
+##### Returns:
+
+ A `Variable` object or `None` if the moving average of `var`
+  is not maintained.
+
+
+
+
+## Coordinator and QueueRunner. <div class="md-anchor" id="AUTOGENERATED-coordinator-and-queuerunner.">{#AUTOGENERATED-coordinator-and-queuerunner.}</div>
+
+See [Threading and Queues](../../how_tos/threading_and_queues/index.md)
+for how to use threads and queues. For documentation on the Queue API,
+see [Queues](../../api_docs/python/io_ops.md#queues).
+
+- - -
+
+### class tf.train.Coordinator <div class="md-anchor" id="Coordinator">{#Coordinator}</div>
+
+A coordinator for threads.
+
+This class implements a simple mechanism to coordinate the termination of a
+set of threads.
+
+#### Usage:
+
+```python
+# Create a coordinator.
+coord = tf.train.Coordinator()
+# Start a number of threads, passing the coordinator to each of them.
+...start thread 1...(coord, ...)
+...start thread N...(coord, ...)
+# Wait for all the threads to terminate.
+coord.join(threads)
+```
+
+Any of the threads can call `coord.request_stop()` to ask for all the threads
+to stop. To cooperate with the requests, each thread must check for
+`coord.should_stop()` on a regular basis. `coord.should_stop()` returns
+`True` as soon as `coord.request_stop()` has been called.
+
+A typical thread running with a Coordinator will do something like:
+
+```python
+while not coord.should_stop():
+ ...do some work...
+```
+
+#### Exception handling:
+
+A thread can report an exception to the Coordinator as part of the
+`request_stop()` call. The exception will be re-raised from the
+`coord.join()` call.
+
+Thread code:
+
+```python
+try:
+ while not coord.should_stop():
+ ...do some work...
+except Exception as e:
+ coord.request_stop(e)
+```
+
+Main code:
+
+```python
+try:
+ ...
+  coord = tf.train.Coordinator()
+ # Start a number of threads, passing the coordinator to each of them.
+ ...start thread 1...(coord, ...)
+ ...start thread N...(coord, ...)
+ # Wait for all the threads to terminate.
+ coord.join(threads)
+except Exception as e:
+ ...exception that was passed to coord.request_stop()
+```
+
+#### Grace period for stopping:
+
+After a thread has called `coord.request_stop()`, the other threads have a
+fixed time to stop. This is called the 'stop grace period' and defaults to 2
+minutes. If any of the threads is still alive after the grace period expires,
+`coord.join()` raises a `RuntimeError` reporting the laggards.
+
+```python
+try:
+ ...
+ coord = Coordinator()
+ # Start a number of threads, passing the coordinator to each of them.
+ ...start thread 1...(coord, ...)
+ ...start thread N...(coord, ...)
+ # Wait for all the threads to terminate, give them 10s grace period
+ coord.join(threads, stop_grace_period_secs=10)
+except RuntimeError:
+ ...one of the threads took more than 10s to stop after request_stop()
+ ...was called.
+except Exception:
+ ...exception that was passed to coord.request_stop()
+```
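+
+A minimal self-contained sketch of this pattern, using plain Python threads
+(the worker logic is illustrative):
+
+```python
+import threading
+import time
+import tensorflow as tf
+
+def worker(coord):
+  while not coord.should_stop():
+    time.sleep(0.1)  # ...do some work...
+
+coord = tf.train.Coordinator()
+threads = [threading.Thread(target=worker, args=(coord,)) for _ in range(4)]
+for t in threads:
+  t.start()
+time.sleep(1.0)       # let the workers run for a while
+coord.request_stop()  # ask all workers to stop
+coord.join(threads)   # wait for them to terminate
+```
+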
+- - -
+
+#### tf.train.Coordinator.__init__() {#Coordinator.__init__}
+
+Create a new Coordinator.
+
+
+- - -
+
+#### tf.train.Coordinator.join(threads, stop_grace_period_secs=120) {#Coordinator.join}
+
+Wait for threads to terminate.
+
+Blocks until all 'threads' have terminated or request_stop() is called.
+
+After the threads stop, if an 'exc_info' was passed to request_stop, that
+exception is re-raised.
+
+Grace period handling: When request_stop() is called, threads are given
+'stop_grace_period_secs' seconds to terminate. If any of them is still
+alive after that period expires, a RuntimeError is raised. Note that if
+an 'exc_info' was passed to request_stop() then it is raised instead of
+that RuntimeError.
+
+##### Args:
+
+
+* <b>threads</b>: List of `threading.Thread` objects. The started threads to join.
+* <b>stop_grace_period_secs</b>: Number of seconds given to threads to stop after
+ request_stop() has been called.
+
+##### Raises:
+
+
+* <b>RuntimeError</b>: If any thread is still alive after request_stop()
+ is called and the grace period expires.
+
+
+- - -
+
+#### tf.train.Coordinator.request_stop(ex=None) {#Coordinator.request_stop}
+
+Request that the threads stop.
+
+After this is called, calls to should_stop() will return True.
+
+##### Args:
+
+
+* <b>ex</b>: Optional Exception, or Python 'exc_info' tuple as returned by
+ sys.exc_info(). If this is the first call to request_stop() the
+ corresponding exception is recorded and re-raised from join().
+
+
+- - -
+
+#### tf.train.Coordinator.should_stop() {#Coordinator.should_stop}
+
+Check if stop was requested.
+
+##### Returns:
+
+ True if a stop was requested.
+
+
+- - -
+
+#### tf.train.Coordinator.wait_for_stop(timeout=None) {#Coordinator.wait_for_stop}
+
+Wait till the Coordinator is told to stop.
+
+##### Args:
+
+
+* <b>timeout</b>: float. Sleep for up to that many seconds waiting for
+ should_stop() to become True.
+
+##### Returns:
+
+  True if the Coordinator is told to stop, False if the timeout expired.
+
+
+
+- - -
+
+### class tf.train.QueueRunner <div class="md-anchor" id="QueueRunner">{#QueueRunner}</div>
+
+Holds a list of enqueue operations for a queue, each to be run in a thread.
+
+Queues are a convenient TensorFlow mechanism to compute tensors
+asynchronously using multiple threads. For example, in the canonical 'Input
+Reader' setup one set of threads generates filenames in a queue; a second set
+of threads reads records from the files, processes them, and enqueues tensors
+on a second queue; a third set of threads dequeues these input records to
+construct batches and runs them through training operations.
+
+There are several delicate issues when running multiple threads that way:
+closing the queues in sequence as the input is exhausted, correctly catching
+and reporting exceptions, etc.
+
+The `QueueRunner`, combined with the `Coordinator`, helps handle these issues.
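+
+A minimal sketch of the combined pattern (the queue contents are
+illustrative):
+
+```python
+import tensorflow as tf
+
+# A queue fed by four parallel enqueue threads.
+queue = tf.FIFOQueue(capacity=100, dtypes=[tf.float32])
+enqueue_op = queue.enqueue([tf.random_uniform([])])
+dequeued = queue.dequeue()
+qr = tf.train.QueueRunner(queue, [enqueue_op] * 4)
+
+sess = tf.Session()
+coord = tf.train.Coordinator()
+threads = qr.create_threads(sess, coord=coord, start=True)
+for step in range(10):
+  print(sess.run(dequeued))
+coord.request_stop()
+coord.join(threads)
+```
+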
+- - -
+
+#### tf.train.QueueRunner.__init__(queue, enqueue_ops) {#QueueRunner.__init__}
+
+Create a QueueRunner.
+
+On construction the `QueueRunner` adds an op to close the queue. That op
+will be run if the enqueue ops raise exceptions.
+
+When you later call the `create_threads()` method, the `QueueRunner` will
+create one thread for each op in `enqueue_ops`. Each thread will run its
+enqueue op in parallel with the other threads. The enqueue ops do not have
+to all be the same op, but it is expected that they all enqueue tensors in
+`queue`.
+
+##### Args:
+
+
+* <b>queue</b>: A `Queue`.
+* <b>enqueue_ops</b>: List of enqueue ops to run in threads later.
+
+
+- - -
+
+#### tf.train.QueueRunner.create_threads(sess, coord=None, daemon=False, start=False) {#QueueRunner.create_threads}
+
+Create threads to run the enqueue ops.
+
+This method requires a session in which the graph was launched. It creates
+a list of threads, optionally starting them. There is one thread for each
+op passed in `enqueue_ops`.
+
+The `coord` argument is an optional coordinator, that the threads will use
+to terminate together and report exceptions. If a coordinator is given,
+this method starts an additional thread to close the queue when the
+coordinator requests a stop.
+
+This method may be called again as long as all threads from a previous call
+have stopped.
+
+##### Args:
+
+
+* <b>sess</b>: A `Session`.
+* <b>coord</b>: Optional `Coordinator` object for reporting errors and checking
+ stop conditions.
+* <b>daemon</b>: Boolean. If `True` make the threads daemon threads.
+* <b>start</b>: Boolean. If `True` starts the threads. If `False` the
+ caller must call the `start()` method of the returned threads.
+
+##### Returns:
+
+ A list of threads.
+
+##### Raises:
+
+
+* <b>RuntimeError</b>: If threads from a previous call to `create_threads()` are
+ still running.
+
+
+- - -
+
+#### tf.train.QueueRunner.exceptions_raised {#QueueRunner.exceptions_raised}
+
+Exceptions raised but not handled by the `QueueRunner` threads.
+
+Exceptions raised in queue runner threads are handled in one of two ways
+depending on whether or not a `Coordinator` was passed to
+`create_threads()`:
+
+* With a `Coordinator`, exceptions are reported to the coordinator and
+ forgotten by the `QueueRunner`.
+* Without a `Coordinator`, exceptions are captured by the `QueueRunner` and
+ made available in this `exceptions_raised` property.
+
+##### Returns:
+
+ A list of Python `Exception` objects. The list is empty if no exception
+ was captured. (No exceptions are captured when using a Coordinator.)
+
+
+- - -
+
+### tf.train.add_queue_runner(qr, collection='queue_runners') <div class="md-anchor" id="add_queue_runner">{#add_queue_runner}</div>
+
+Adds a `QueueRunner` to a collection in the graph.
+
+When building a complex model that uses many queues, it is often difficult to
+gather all the queue runners that need to be run. This convenience function
+allows you to add a queue runner to a well-known collection in the graph.
+
+The companion method `start_queue_runners()` can be used to start threads for
+all the collected queue runners.
+
+##### Args:
+
+
+* <b>qr</b>: A `QueueRunner`.
+* <b>collection</b>: A `GraphKey` specifying the graph collection to add
+ the queue runner to. Defaults to `GraphKeys.QUEUE_RUNNERS`.
+
+
+- - -
+
+### tf.train.start_queue_runners(sess=None, coord=None, daemon=True, start=True, collection='queue_runners') <div class="md-anchor" id="start_queue_runners">{#start_queue_runners}</div>
+
+Starts all queue runners collected in the graph.
+
+This is a companion method to `add_queue_runner()`. It just starts
+threads for all queue runners collected in the graph. It returns
+the list of all threads.
+
+##### Args:
+
+
+* <b>sess</b>: `Session` used to run the queue ops. Defaults to the
+ default session.
+* <b>coord</b>: Optional `Coordinator` for coordinating the started threads.
+* <b>daemon</b>: Whether the threads should be marked as `daemons`, meaning
+ they don't block program exit.
+* <b>start</b>: Set to `False` to only create the threads, not start them.
+* <b>collection</b>: A `GraphKey` specifying the graph collection to
+ get the queue runners from. Defaults to `GraphKeys.QUEUE_RUNNERS`.
+
+##### Returns:
+
+ A list of threads.
+
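+A typical main loop built on these helpers; this sketch assumes queue runners
+were registered with `add_queue_runner()` and that `train_op` is defined
+elsewhere:
+
+```python
+sess = tf.Session()
+coord = tf.train.Coordinator()
+threads = tf.train.start_queue_runners(sess=sess, coord=coord)
+for step in range(1000):
+  if coord.should_stop():
+    break
+  sess.run(train_op)
+coord.request_stop()
+coord.join(threads)
+```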
+
+
+## Summary Operations. <div class="md-anchor" id="AUTOGENERATED-summary-operations.">{#AUTOGENERATED-summary-operations.}</div>
+
+The following ops output
+[`Summary`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/framework/summary.proto)
+protocol buffers as serialized string tensors.
+
+You can fetch the output of a summary op in a session, and pass it to a
+[SummaryWriter](train.md#SummaryWriter) to append it to an event file. You can
+then use TensorBoard to visualize the contents of the event files. See
+[TensorBoard and Summaries](../../how_tos/summaries_and_tensorboard/index.md)
+for more details.
+
+- - -
+
+### tf.scalar_summary(tags, values, collections=None, name=None) <div class="md-anchor" id="scalar_summary">{#scalar_summary}</div>
+
+Outputs a `Summary` protocol buffer with scalar values.
+
+The input `tags` and `values` must have the same shape. The generated
+summary has a summary value for each tag-value pair in `tags` and `values`.
+
+##### Args:
+
+
+* <b>tags</b>: A 1-D `string` `Tensor`. Tags for the summaries.
+* <b>values</b>: A 1-D `float32` or `float64` Tensor. Values for the summaries.
+* <b>collections</b>: Optional list of graph collections keys. The new summary op is
+ added to these collections. Defaults to `[GraphKeys.SUMMARIES]`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A scalar `Tensor` of type `string`. The serialized `Summary` protocol
+ buffer.
+
+
+- - -
+
+### tf.image_summary(tag, tensor, max_images=None, collections=None, name=None) <div class="md-anchor" id="image_summary">{#image_summary}</div>
+
+Outputs a `Summary` protocol buffer with images.
+
+The summary has up to `max_images` summary values containing images. The
+images are built from `tensor` which must be 4-D with shape `[batch_size,
+height, width, channels]` and where `channels` can be:
+
+* 1: `tensor` is interpreted as Grayscale.
+* 3: `tensor` is interpreted as RGB.
+* 4: `tensor` is interpreted as RGBA.
+
+The images have the same number of channels as the input tensor. Their values
+are normalized, one image at a time, to fit in the range `[0, 255]`. The
+op uses two different normalization algorithms:
+
+* If the input values are all positive, they are rescaled so the largest one
+ is 255.
+
+* If any input value is negative, the values are shifted so input value 0.0
+ is at 127. They are then rescaled so that either the smallest value is 0,
+ or the largest one is 255.
+
+The `tag` argument is a scalar `Tensor` of type `string`. It is used to
+build the `tag` of the summary values:
+
+* If `max_images` is 1, the summary value tag is '*tag*/image'.
+* If `max_images` is greater than 1, the summary value tags are
+ generated sequentially as '*tag*/image/0', '*tag*/image/1', etc.
+
+##### Args:
+
+
+* <b>tag</b>: A scalar `Tensor` of type `string`. Used to build the `tag`
+ of the summary values.
+* <b>tensor</b>: A 4-D `float32` `Tensor` of shape `[batch_size, height, width,
+ channels]` where `channels` is 1, 3, or 4.
+* <b>max_images</b>: Max number of batch elements to generate images for.
+* <b>collections</b>: Optional list of ops.GraphKeys. The collections to add the
+ summary to. Defaults to [ops.GraphKeys.SUMMARIES]
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A scalar `Tensor` of type `string`. The serialized `Summary` protocol
+ buffer.
+
+
+- - -
+
+### tf.histogram_summary(tag, values, collections=None, name=None) <div class="md-anchor" id="histogram_summary">{#histogram_summary}</div>
+
+Outputs a `Summary` protocol buffer with a histogram.
+
+The generated
+[`Summary`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/framework/summary.proto)
+has one summary value containing a histogram for `values`.
+
+This op reports an `OutOfRange` error if any value is not finite.
+
+##### Args:
+
+
+* <b>tag</b>: A `string` `Tensor`. 0-D. Tag to use for the summary value.
+* <b>values</b>: A `float32` `Tensor`. Any shape. Values to use to build the
+ histogram.
+* <b>collections</b>: Optional list of graph collections keys. The new summary op is
+ added to these collections. Defaults to `[GraphKeys.SUMMARIES]`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A scalar `Tensor` of type `string`. The serialized `Summary` protocol
+ buffer.
+
+
+- - -
+
+### tf.nn.zero_fraction(value, name=None) <div class="md-anchor" id="zero_fraction">{#zero_fraction}</div>
+
+Returns the fraction of zeros in `value`.
+
+If `value` is empty, the result is `nan`.
+
+This is useful in summaries to measure and report sparsity. For example,
+
+    z = tf.nn.relu(...)
+    summ = tf.scalar_summary('sparsity', tf.nn.zero_fraction(z))
+
+##### Args:
+
+
+* <b>value</b>: A tensor of numeric type.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ The fraction of zeros in `value`, with type `float32`.
+
+
+
+- - -
+
+### tf.merge_summary(inputs, collections=None, name=None) <div class="md-anchor" id="merge_summary">{#merge_summary}</div>
+
+Merges summaries.
+
+This op creates a
+[`Summary`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/framework/summary.proto)
+protocol buffer that contains the union of all the values in the input
+summaries.
+
+When the Op is run, it reports an `InvalidArgument` error if multiple values
+in the summaries to merge use the same tag.
+
+##### Args:
+
+
+* <b>inputs</b>: A list of `string` `Tensor` objects containing serialized `Summary`
+ protocol buffers.
+* <b>collections</b>: Optional list of graph collections keys. The new summary op is
+ added to these collections. Defaults to `[GraphKeys.SUMMARIES]`.
+* <b>name</b>: A name for the operation (optional).
+
+##### Returns:
+
+ A scalar `Tensor` of type `string`. The serialized `Summary` protocol
+ buffer resulting from the merging.
+
+
+- - -
+
+### tf.merge_all_summaries(key='summaries') <div class="md-anchor" id="merge_all_summaries">{#merge_all_summaries}</div>
+
+Merges all summaries collected in the default graph.
+
+##### Args:
+
+
+* <b>key</b>: `GraphKey` used to collect the summaries. Defaults to
+ `GraphKeys.SUMMARIES`.
+
+##### Returns:
+
+ If no summaries were collected, returns None. Otherwise returns a scalar
+  `Tensor` of type `string` containing the serialized `Summary` protocol
+ buffer resulting from the merging.
+
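+A typical workflow ties these ops together. In this sketch, `loss`,
+`weights`, `sess`, and `step` are assumed to exist elsewhere:
+
+```python
+tf.scalar_summary('loss', loss)
+tf.histogram_summary('weights', weights)
+merged = tf.merge_all_summaries()
+
+writer = tf.train.SummaryWriter('/tmp/logs', sess.graph_def)
+summary_str = sess.run(merged)
+writer.add_summary(summary_str, global_step=step)
+```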
+
+
+## Adding Summaries to Event Files. <div class="md-anchor" id="AUTOGENERATED-adding-summaries-to-event-files.">{#AUTOGENERATED-adding-summaries-to-event-files.}</div>
+
+See [Summaries and
+TensorBoard](../../how_tos/summaries_and_tensorboard/index.md) for an
+overview of summaries, event files, and visualization in TensorBoard.
+
+- - -
+
+### class tf.train.SummaryWriter <div class="md-anchor" id="SummaryWriter">{#SummaryWriter}</div>
+
+Writes `Summary` protocol buffers to event files.
+
+The `SummaryWriter` class provides a mechanism to create an event file in a
+given directory and add summaries and events to it. The class updates the
+file contents asynchronously. This allows a training program to call methods
+to add data to the file directly from the training loop, without slowing down
+training.
+
+- - -
+
+#### tf.train.SummaryWriter.__init__(logdir, graph_def=None, max_queue=10, flush_secs=120) {#SummaryWriter.__init__}
+
+Creates a `SummaryWriter` and an event file.
+
+On construction the summary writer creates a new event file in `logdir`.
+This event file will contain `Event` protocol buffers constructed when you
+call one of the following functions: `add_summary()`, `add_event()`, or
+`add_graph()`.
+
+If you pass a `graph_def` protocol buffer to the constructor it is added to
+the event file. (This is equivalent to calling `add_graph()` later).
+
+TensorBoard will pick up the graph from the file and display it graphically so
+you can interactively explore the graph you built. You will usually pass
+the graph from the session in which you launched it:
+
+```python
+...create a graph...
+# Launch the graph in a session.
+sess = tf.Session()
+# Create a summary writer, add the 'graph_def' to the event file.
+writer = tf.train.SummaryWriter(<some-directory>, sess.graph_def)
+```
+
+The other arguments to the constructor control the asynchronous writes to
+the event file:
+
+* `flush_secs`: How often, in seconds, to flush the added summaries
+ and events to disk.
+* `max_queue`: Maximum number of summaries or events pending to be
+  written to disk before one of the 'add' calls blocks.
+
+##### Args:
+
+
+* <b>logdir</b>: A string. Directory where event file will be written.
+* <b>graph_def</b>: A `GraphDef` protocol buffer.
+* <b>max_queue</b>: Integer. Size of the queue for pending events and summaries.
+* <b>flush_secs</b>: Number. How often, in seconds, to flush the
+ pending events and summaries to disk.
+
+
+
+- - -
+
+#### tf.train.SummaryWriter.add_summary(summary, global_step=None) {#SummaryWriter.add_summary}
+
+Adds a `Summary` protocol buffer to the event file.
+
+This method wraps the provided summary in an `Event` protocol buffer
+and adds it to the event file.
+
+You can pass the output of any summary op, as-is, to this function. You
+can also pass a `Summary` protocol buffer that you manufacture with your
+own data. This is commonly done to report evaluation results in event
+files.
+
+##### Args:
+
+
+* <b>summary</b>: A `Summary` protocol buffer, optionally serialized as a string.
+* <b>global_step</b>: Number. Optional global step value to record with the
+ summary.
+
+
+- - -
+
+#### tf.train.SummaryWriter.add_event(event) {#SummaryWriter.add_event}
+
+Adds an event to the event file.
+
+##### Args:
+
+
+* <b>event</b>: An `Event` protocol buffer.
+
+
+- - -
+
+#### tf.train.SummaryWriter.add_graph(graph_def, global_step=None) {#SummaryWriter.add_graph}
+
+Adds a `GraphDef` protocol buffer to the event file.
+
+The graph described by the protocol buffer will be displayed by
+TensorBoard. Most users pass a graph in the constructor instead.
+
+##### Args:
+
+
+* <b>graph_def</b>: A `GraphDef` protocol buffer.
+* <b>global_step</b>: Number. Optional global step counter to record with the
+ graph.
+
+
+
+- - -
+
+#### tf.train.SummaryWriter.flush() {#SummaryWriter.flush}
+
+Flushes the event file to disk.
+
+Call this method to make sure that all pending events have been written to
+disk.
+
+
+- - -
+
+#### tf.train.SummaryWriter.close() {#SummaryWriter.close}
+
+Flushes the event file to disk and closes the file.
+
+Call this method when you do not need the summary writer anymore.
+
+
+
+- - -
+
+### tf.train.summary_iterator(path) <div class="md-anchor" id="summary_iterator">{#summary_iterator}</div>
+
+An iterator for reading `Event` protocol buffers from an event file.
+
+You can use this function to read events written to an event file. It returns
+a Python iterator that yields `Event` protocol buffers.
+
+Example: Print the contents of an events file.
+
+```python
+for e in tf.train.summary_iterator(path to events file):
+ print e
+```
+
+Example: Print selected summary values.
+
+```python
+# This example supposes that the events file contains summaries with a
+# summary value tag 'loss'. These could have been added by calling
+# `add_summary()`, passing the output of a scalar summary op created
+# with: `tf.scalar_summary(['loss'], loss_tensor)`.
+for e in tf.train.summary_iterator(path to events file):
+ for v in e.summary.value:
+ if v.tag == 'loss':
+ print v.simple_value
+```
+
+See the protocol buffer definitions of
+[Event](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/util/event.proto)
+and
+[Summary](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/framework/summary.proto)
+for more information about their attributes.
+
+##### Args:
+
+
+* <b>path</b>: The path to an event file created by a `SummaryWriter`.
+
+##### Yields:
+
+ `Event` protocol buffers.
+
+
+
+## Training utilities. <div class="md-anchor" id="AUTOGENERATED-training-utilities.">{#AUTOGENERATED-training-utilities.}</div>
+
+- - -
+
+### tf.train.global_step(sess, global_step_tensor) <div class="md-anchor" id="global_step">{#global_step}</div>
+
+Small helper to get the global step.
+
+```python
+# Creates a variable to hold the global_step.
+global_step_tensor = tf.Variable(10, trainable=False, name='global_step')
+# Creates a session.
+sess = tf.Session()
+# Initializes the variable.
+sess.run(global_step_tensor.initializer)
+print 'global_step:', tf.train.global_step(sess, global_step_tensor)
+
+global_step: 10
+```
+
+##### Args:
+
+
+* <b>sess</b>: A TensorFlow `Session` object.
+* <b>global_step_tensor</b>: `Tensor` or the `name` of the operation that contains
+ the global step.
+
+##### Returns:
+
+ The global step value.
+
+
+- - -
+
+### tf.train.write_graph(graph_def, logdir, name, as_text=True) <div class="md-anchor" id="write_graph">{#write_graph}</div>
+
+Writes a graph proto on disk.
+
+The graph is written as a binary proto unless `as_text` is `True`.
+
+```python
+v = tf.Variable(0, name='my_variable')
+sess = tf.Session()
+tf.train.write_graph(sess.graph_def, '/tmp/my-model', 'train.pbtxt')
+```
+
+##### Args:
+
+
+* <b>graph_def</b>: A `GraphDef` protocol buffer.
+* <b>logdir</b>: Directory where to write the graph.
+* <b>name</b>: Filename for the graph.
+* <b>as_text</b>: If `True`, writes the graph as an ASCII proto.
+
+