author    Vijay Vasudevan <vrv@google.com>  2015-11-07 13:58:24 -0800
committer Vijay Vasudevan <vrv@google.com>  2015-11-07 13:58:24 -0800
commit fddaed524622417900d745fe8f115562c55ac49a (patch)
tree   cabb2fc16540a27748b60329195966d535f48837
parent 7de9099a739c9dc62b1ca55c1eeef90acbfa7be9 (diff)
TensorFlow: Upstream commits to git.

Changes:
- More documentation edits; fixes to anchors, fixes to MathJax, new images, etc.
- Add rnn models to the pip install package.

Base CL: 107312343
-rw-r--r--  README.md | 4
-rw-r--r--  tensorflow/g3doc/api_docs/cc/ClassEnv.md | 38
-rw-r--r--  tensorflow/g3doc/api_docs/cc/ClassEnvWrapper.md | 38
-rw-r--r--  tensorflow/g3doc/api_docs/cc/ClassRandomAccessFile.md | 12
-rw-r--r--  tensorflow/g3doc/api_docs/cc/ClassSession.md | 16
-rw-r--r--  tensorflow/g3doc/api_docs/cc/ClassStatus.md | 32
-rw-r--r--  tensorflow/g3doc/api_docs/cc/ClassTensor.md | 96
-rw-r--r--  tensorflow/g3doc/api_docs/cc/ClassTensorBuffer.md | 18
-rw-r--r--  tensorflow/g3doc/api_docs/cc/ClassTensorShape.md | 54
-rw-r--r--  tensorflow/g3doc/api_docs/cc/ClassTensorShapeIter.md | 16
-rw-r--r--  tensorflow/g3doc/api_docs/cc/ClassTensorShapeUtils.md | 26
-rw-r--r--  tensorflow/g3doc/api_docs/cc/ClassThread.md | 10
-rw-r--r--  tensorflow/g3doc/api_docs/cc/ClassWritableFile.md | 18
-rw-r--r--  tensorflow/g3doc/api_docs/cc/StructSessionOptions.md | 14
-rw-r--r--  tensorflow/g3doc/api_docs/cc/StructState.md | 10
-rw-r--r--  tensorflow/g3doc/api_docs/cc/StructTensorShapeDim.md | 10
-rw-r--r--  tensorflow/g3doc/api_docs/cc/StructThreadOptions.md | 10
-rw-r--r--  tensorflow/g3doc/api_docs/cc/index.md | 6
-rw-r--r--  tensorflow/g3doc/api_docs/index.md | 2
-rw-r--r--  tensorflow/g3doc/api_docs/python/array_ops.md | 179
-rw-r--r--  tensorflow/g3doc/api_docs/python/client.md | 125
-rw-r--r--  tensorflow/g3doc/api_docs/python/constant_op.md | 87
-rw-r--r--  tensorflow/g3doc/api_docs/python/control_flow_ops.md | 157
-rw-r--r--  tensorflow/g3doc/api_docs/python/framework.md | 451
-rw-r--r--  tensorflow/g3doc/api_docs/python/image.md | 183
-rw-r--r--  tensorflow/g3doc/api_docs/python/index.md | 2
-rw-r--r--  tensorflow/g3doc/api_docs/python/io_ops.md | 459
-rw-r--r--  tensorflow/g3doc/api_docs/python/math_ops.md | 387
-rw-r--r--  tensorflow/g3doc/api_docs/python/nn.md | 211
-rw-r--r--  tensorflow/g3doc/api_docs/python/ops.md | 3
-rw-r--r--  tensorflow/g3doc/api_docs/python/python_io.md | 29
-rw-r--r--  tensorflow/g3doc/api_docs/python/sparse_ops.md | 93
-rw-r--r--  tensorflow/g3doc/api_docs/python/state_ops.md | 249
-rw-r--r--  tensorflow/g3doc/api_docs/python/train.md | 325
-rw-r--r--  tensorflow/g3doc/get_started/basic_usage.md | 20
-rw-r--r--  tensorflow/g3doc/get_started/index.md | 4
-rw-r--r--  tensorflow/g3doc/get_started/os_setup.md | 50
-rw-r--r--  tensorflow/g3doc/how_tos/adding_an_op/index.md | 55
-rw-r--r--  tensorflow/g3doc/how_tos/graph_viz/index.md | 6
-rw-r--r--  tensorflow/g3doc/how_tos/index.md | 22
-rw-r--r--  tensorflow/g3doc/how_tos/new_data_formats/index.md | 7
-rw-r--r--  tensorflow/g3doc/how_tos/reading_data/index.md | 40
-rw-r--r--  tensorflow/g3doc/how_tos/summaries_and_tensorboard/index.md | 6
-rw-r--r--  tensorflow/g3doc/how_tos/threading_and_queues/index.md | 34
-rw-r--r--  tensorflow/g3doc/how_tos/using_gpu/index.md | 12
-rw-r--r--  tensorflow/g3doc/how_tos/variable_scope/index.md | 20
-rw-r--r--  tensorflow/g3doc/how_tos/variables/index.md | 20
-rw-r--r--  tensorflow/g3doc/index.md | 6
-rw-r--r--  tensorflow/g3doc/resources/bib.md | 2
-rw-r--r--  tensorflow/g3doc/resources/dims_types.md | 8
-rw-r--r--  tensorflow/g3doc/resources/faq.md | 61
-rw-r--r--  tensorflow/g3doc/resources/glossary.md | 2
-rw-r--r--  tensorflow/g3doc/resources/index.md | 16
-rw-r--r--  tensorflow/g3doc/resources/uses.md | 2
-rw-r--r--  tensorflow/g3doc/tutorials/deep_cnn/index.md | 36
-rw-r--r--  tensorflow/g3doc/tutorials/index.md | 26
-rwxr-xr-x  tensorflow/g3doc/tutorials/mandelbrot/index.md | 2
-rw-r--r--  tensorflow/g3doc/tutorials/mnist/beginners/index.md | 36
-rw-r--r--  tensorflow/g3doc/tutorials/mnist/download/index.md | 12
-rw-r--r--  tensorflow/g3doc/tutorials/mnist/pros/index.md | 40
-rw-r--r--  tensorflow/g3doc/tutorials/mnist/tf/index.md | 40
-rwxr-xr-x  tensorflow/g3doc/tutorials/pdes/index.md | 10
-rw-r--r--  tensorflow/g3doc/tutorials/recurrent/index.md | 26
-rw-r--r--  tensorflow/g3doc/tutorials/seq2seq/index.md | 17
-rw-r--r--  tensorflow/g3doc/tutorials/word2vec/index.md | 58
-rw-r--r--  tensorflow/models/rnn/BUILD | 12
-rw-r--r-- [-rwxr-xr-x]  tensorflow/models/rnn/__init__.py | 12
-rw-r--r--  tensorflow/tools/pip_package/BUILD | 1
68 files changed, 2076 insertions(+), 2015 deletions(-)
diff --git a/README.md b/README.md
index 102f3742e8..575a305bc4 100644
--- a/README.md
+++ b/README.md
@@ -19,8 +19,8 @@ changes to TensorFlow through
**We use [github issues](https://github.com/tensorflow/tensorflow/issues) for
tracking requests and bugs, but please see
-[Community](resources/index.md#community) for general questions and
-discussion.**
+[Community](tensorflow/g3doc/resources/index.md#community) for general questions
+and discussion.**
# Download and Setup
diff --git a/tensorflow/g3doc/api_docs/cc/ClassEnv.md b/tensorflow/g3doc/api_docs/cc/ClassEnv.md
index 0fdb3d32c7..039087e703 100644
--- a/tensorflow/g3doc/api_docs/cc/ClassEnv.md
+++ b/tensorflow/g3doc/api_docs/cc/ClassEnv.md
@@ -1,4 +1,4 @@
-#Class tensorflow::Env
+#Class tensorflow::Env <a class="md-anchor" id="AUTOGENERATED-class-tensorflow--env"></a>
An interface used by the tensorflow implementation to access operating system functionality like the filesystem etc.
@@ -6,7 +6,7 @@ Callers may wish to provide a custom Env object to get fine grain control.
All Env implementations are safe for concurrent access from multiple threads without any external synchronization.
-##Member Summary
+##Member Summary <a class="md-anchor" id="AUTOGENERATED-member-summary"></a>
* [tensorflow::Env::Env](#tensorflow_Env_Env)
* [virtual tensorflow::Env::~Env](#virtual_tensorflow_Env_Env)
@@ -39,21 +39,21 @@ All Env implementations are safe for concurrent access from multiple threads wit
* [static Env* tensorflow::Env::Default](#static_Env_tensorflow_Env_Default)
* Returns a default environment suitable for the current operating system.
-##Member Details
+##Member Details <a class="md-anchor" id="AUTOGENERATED-member-details"></a>
-#### tensorflow::Env::Env() {#tensorflow_Env_Env}
+#### tensorflow::Env::Env() <a class="md-anchor" id="tensorflow_Env_Env"></a>
-#### virtual tensorflow::Env::~Env() {#virtual_tensorflow_Env_Env}
+#### virtual tensorflow::Env::~Env() <a class="md-anchor" id="virtual_tensorflow_Env_Env"></a>
-#### virtual Status tensorflow::Env::NewRandomAccessFile(const string &amp;fname, RandomAccessFile **result)=0 {#virtual_Status_tensorflow_Env_NewRandomAccessFile}
+#### virtual Status tensorflow::Env::NewRandomAccessFile(const string &amp;fname, RandomAccessFile **result)=0 <a class="md-anchor" id="virtual_Status_tensorflow_Env_NewRandomAccessFile"></a>
Creates a brand new random access read-only file with the specified name.
@@ -61,7 +61,7 @@ On success, stores a pointer to the new file in *result and returns OK. On failu
The returned file may be concurrently accessed by multiple threads.
-#### virtual Status tensorflow::Env::NewWritableFile(const string &amp;fname, WritableFile **result)=0 {#virtual_Status_tensorflow_Env_NewWritableFile}
+#### virtual Status tensorflow::Env::NewWritableFile(const string &amp;fname, WritableFile **result)=0 <a class="md-anchor" id="virtual_Status_tensorflow_Env_NewWritableFile"></a>
Creates an object that writes to a new file with the specified name.
@@ -69,7 +69,7 @@ Deletes any existing file with the same name and creates a new file. On success,
The returned file will only be accessed by one thread at a time.
-#### virtual Status tensorflow::Env::NewAppendableFile(const string &amp;fname, WritableFile **result)=0 {#virtual_Status_tensorflow_Env_NewAppendableFile}
+#### virtual Status tensorflow::Env::NewAppendableFile(const string &amp;fname, WritableFile **result)=0 <a class="md-anchor" id="virtual_Status_tensorflow_Env_NewAppendableFile"></a>
Creates an object that either appends to an existing file, or writes to a new file (if the file does not exist to begin with).
@@ -77,67 +77,67 @@ On success, stores a pointer to the new file in *result and returns OK. On failu
The returned file will only be accessed by one thread at a time.
-#### virtual bool tensorflow::Env::FileExists(const string &amp;fname)=0 {#virtual_bool_tensorflow_Env_FileExists}
+#### virtual bool tensorflow::Env::FileExists(const string &amp;fname)=0 <a class="md-anchor" id="virtual_bool_tensorflow_Env_FileExists"></a>
Returns true iff the named file exists.
-#### virtual Status tensorflow::Env::GetChildren(const string &amp;dir, std::vector&lt; string &gt; *result)=0 {#virtual_Status_tensorflow_Env_GetChildren}
+#### virtual Status tensorflow::Env::GetChildren(const string &amp;dir, std::vector&lt; string &gt; *result)=0 <a class="md-anchor" id="virtual_Status_tensorflow_Env_GetChildren"></a>
Stores in *result the names of the children of the specified directory. The names are relative to &quot;dir&quot;.
Original contents of *results are dropped.
-#### virtual Status tensorflow::Env::DeleteFile(const string &amp;fname)=0 {#virtual_Status_tensorflow_Env_DeleteFile}
+#### virtual Status tensorflow::Env::DeleteFile(const string &amp;fname)=0 <a class="md-anchor" id="virtual_Status_tensorflow_Env_DeleteFile"></a>
Deletes the named file.
-#### virtual Status tensorflow::Env::CreateDir(const string &amp;dirname)=0 {#virtual_Status_tensorflow_Env_CreateDir}
+#### virtual Status tensorflow::Env::CreateDir(const string &amp;dirname)=0 <a class="md-anchor" id="virtual_Status_tensorflow_Env_CreateDir"></a>
Creates the specified directory.
-#### virtual Status tensorflow::Env::DeleteDir(const string &amp;dirname)=0 {#virtual_Status_tensorflow_Env_DeleteDir}
+#### virtual Status tensorflow::Env::DeleteDir(const string &amp;dirname)=0 <a class="md-anchor" id="virtual_Status_tensorflow_Env_DeleteDir"></a>
Deletes the specified directory.
-#### virtual Status tensorflow::Env::GetFileSize(const string &amp;fname, uint64 *file_size)=0 {#virtual_Status_tensorflow_Env_GetFileSize}
+#### virtual Status tensorflow::Env::GetFileSize(const string &amp;fname, uint64 *file_size)=0 <a class="md-anchor" id="virtual_Status_tensorflow_Env_GetFileSize"></a>
Stores the size of fname in *file_size.
-#### virtual Status tensorflow::Env::RenameFile(const string &amp;src, const string &amp;target)=0 {#virtual_Status_tensorflow_Env_RenameFile}
+#### virtual Status tensorflow::Env::RenameFile(const string &amp;src, const string &amp;target)=0 <a class="md-anchor" id="virtual_Status_tensorflow_Env_RenameFile"></a>
Renames file src to target. If target already exists, it will be replaced.
-#### virtual uint64 tensorflow::Env::NowMicros()=0 {#virtual_uint64_tensorflow_Env_NowMicros}
+#### virtual uint64 tensorflow::Env::NowMicros()=0 <a class="md-anchor" id="virtual_uint64_tensorflow_Env_NowMicros"></a>
Returns the number of micro-seconds since some fixed point in time. Only useful for computing deltas of time.
-#### virtual void tensorflow::Env::SleepForMicroseconds(int micros)=0 {#virtual_void_tensorflow_Env_SleepForMicroseconds}
+#### virtual void tensorflow::Env::SleepForMicroseconds(int micros)=0 <a class="md-anchor" id="virtual_void_tensorflow_Env_SleepForMicroseconds"></a>
Sleeps/delays the thread for the prescribed number of micro-seconds.
-#### virtual Thread* tensorflow::Env::StartThread(const ThreadOptions &amp;thread_options, const string &amp;name, std::function&lt; void()&gt; fn) TF_MUST_USE_RESULT=0 {#virtual_Thread_tensorflow_Env_StartThread}
+#### virtual Thread* tensorflow::Env::StartThread(const ThreadOptions &amp;thread_options, const string &amp;name, std::function&lt; void()&gt; fn) TF_MUST_USE_RESULT=0 <a class="md-anchor" id="virtual_Thread_tensorflow_Env_StartThread"></a>
Returns a new thread that is running fn() and is identified (for debugging/performance-analysis) by &quot;name&quot;.
Caller takes ownership of the result and must delete it eventually (the deletion will block until fn() stops running).
-#### static Env* tensorflow::Env::Default() {#static_Env_tensorflow_Env_Default}
+#### static Env* tensorflow::Env::Default() <a class="md-anchor" id="static_Env_tensorflow_Env_Default"></a>
Returns a default environment suitable for the current operating system.
diff --git a/tensorflow/g3doc/api_docs/cc/ClassEnvWrapper.md b/tensorflow/g3doc/api_docs/cc/ClassEnvWrapper.md
index 2c6af82113..58e1059886 100644
--- a/tensorflow/g3doc/api_docs/cc/ClassEnvWrapper.md
+++ b/tensorflow/g3doc/api_docs/cc/ClassEnvWrapper.md
@@ -1,10 +1,10 @@
-#Class tensorflow::EnvWrapper
+#Class tensorflow::EnvWrapper <a class="md-anchor" id="AUTOGENERATED-class-tensorflow--envwrapper"></a>
An implementation of Env that forwards all calls to another Env .
May be useful to clients who wish to override just part of the functionality of another Env .
-##Member Summary
+##Member Summary <a class="md-anchor" id="AUTOGENERATED-member-summary"></a>
* [tensorflow::EnvWrapper::EnvWrapper](#tensorflow_EnvWrapper_EnvWrapper)
* Initializes an EnvWrapper that delegates all calls to *t.
@@ -38,27 +38,27 @@ May be useful to clients who wish to override just part of the functionality of
* [Thread* tensorflow::EnvWrapper::StartThread](#Thread_tensorflow_EnvWrapper_StartThread)
* Returns a new thread that is running fn() and is identified (for debugging/performance-analysis) by &quot;name&quot;.
-##Member Details
+##Member Details <a class="md-anchor" id="AUTOGENERATED-member-details"></a>
-#### tensorflow::EnvWrapper::EnvWrapper(Env *t) {#tensorflow_EnvWrapper_EnvWrapper}
+#### tensorflow::EnvWrapper::EnvWrapper(Env *t) <a class="md-anchor" id="tensorflow_EnvWrapper_EnvWrapper"></a>
Initializes an EnvWrapper that delegates all calls to *t.
-#### virtual tensorflow::EnvWrapper::~EnvWrapper() {#virtual_tensorflow_EnvWrapper_EnvWrapper}
+#### virtual tensorflow::EnvWrapper::~EnvWrapper() <a class="md-anchor" id="virtual_tensorflow_EnvWrapper_EnvWrapper"></a>
-#### Env* tensorflow::EnvWrapper::target() const {#Env_tensorflow_EnvWrapper_target}
+#### Env* tensorflow::EnvWrapper::target() const <a class="md-anchor" id="Env_tensorflow_EnvWrapper_target"></a>
Returns the target to which this Env forwards all calls.
-#### Status tensorflow::EnvWrapper::NewRandomAccessFile(const string &amp;f, RandomAccessFile **r) override {#Status_tensorflow_EnvWrapper_NewRandomAccessFile}
+#### Status tensorflow::EnvWrapper::NewRandomAccessFile(const string &amp;f, RandomAccessFile **r) override <a class="md-anchor" id="Status_tensorflow_EnvWrapper_NewRandomAccessFile"></a>
Creates a brand new random access read-only file with the specified name.
@@ -66,7 +66,7 @@ On success, stores a pointer to the new file in *result and returns OK. On failu
The returned file may be concurrently accessed by multiple threads.
-#### Status tensorflow::EnvWrapper::NewWritableFile(const string &amp;f, WritableFile **r) override {#Status_tensorflow_EnvWrapper_NewWritableFile}
+#### Status tensorflow::EnvWrapper::NewWritableFile(const string &amp;f, WritableFile **r) override <a class="md-anchor" id="Status_tensorflow_EnvWrapper_NewWritableFile"></a>
Creates an object that writes to a new file with the specified name.
@@ -74,7 +74,7 @@ Deletes any existing file with the same name and creates a new file. On success,
The returned file will only be accessed by one thread at a time.
-#### Status tensorflow::EnvWrapper::NewAppendableFile(const string &amp;f, WritableFile **r) override {#Status_tensorflow_EnvWrapper_NewAppendableFile}
+#### Status tensorflow::EnvWrapper::NewAppendableFile(const string &amp;f, WritableFile **r) override <a class="md-anchor" id="Status_tensorflow_EnvWrapper_NewAppendableFile"></a>
Creates an object that either appends to an existing file, or writes to a new file (if the file does not exist to begin with).
@@ -82,61 +82,61 @@ On success, stores a pointer to the new file in *result and returns OK. On failu
The returned file will only be accessed by one thread at a time.
-#### bool tensorflow::EnvWrapper::FileExists(const string &amp;f) override {#bool_tensorflow_EnvWrapper_FileExists}
+#### bool tensorflow::EnvWrapper::FileExists(const string &amp;f) override <a class="md-anchor" id="bool_tensorflow_EnvWrapper_FileExists"></a>
Returns true iff the named file exists.
-#### Status tensorflow::EnvWrapper::GetChildren(const string &amp;dir, std::vector&lt; string &gt; *r) override {#Status_tensorflow_EnvWrapper_GetChildren}
+#### Status tensorflow::EnvWrapper::GetChildren(const string &amp;dir, std::vector&lt; string &gt; *r) override <a class="md-anchor" id="Status_tensorflow_EnvWrapper_GetChildren"></a>
Stores in *result the names of the children of the specified directory. The names are relative to &quot;dir&quot;.
Original contents of *results are dropped.
-#### Status tensorflow::EnvWrapper::DeleteFile(const string &amp;f) override {#Status_tensorflow_EnvWrapper_DeleteFile}
+#### Status tensorflow::EnvWrapper::DeleteFile(const string &amp;f) override <a class="md-anchor" id="Status_tensorflow_EnvWrapper_DeleteFile"></a>
Deletes the named file.
-#### Status tensorflow::EnvWrapper::CreateDir(const string &amp;d) override {#Status_tensorflow_EnvWrapper_CreateDir}
+#### Status tensorflow::EnvWrapper::CreateDir(const string &amp;d) override <a class="md-anchor" id="Status_tensorflow_EnvWrapper_CreateDir"></a>
Creates the specified directory.
-#### Status tensorflow::EnvWrapper::DeleteDir(const string &amp;d) override {#Status_tensorflow_EnvWrapper_DeleteDir}
+#### Status tensorflow::EnvWrapper::DeleteDir(const string &amp;d) override <a class="md-anchor" id="Status_tensorflow_EnvWrapper_DeleteDir"></a>
Deletes the specified directory.
-#### Status tensorflow::EnvWrapper::GetFileSize(const string &amp;f, uint64 *s) override {#Status_tensorflow_EnvWrapper_GetFileSize}
+#### Status tensorflow::EnvWrapper::GetFileSize(const string &amp;f, uint64 *s) override <a class="md-anchor" id="Status_tensorflow_EnvWrapper_GetFileSize"></a>
Stores the size of fname in *file_size.
-#### Status tensorflow::EnvWrapper::RenameFile(const string &amp;s, const string &amp;t) override {#Status_tensorflow_EnvWrapper_RenameFile}
+#### Status tensorflow::EnvWrapper::RenameFile(const string &amp;s, const string &amp;t) override <a class="md-anchor" id="Status_tensorflow_EnvWrapper_RenameFile"></a>
Renames file src to target. If target already exists, it will be replaced.
-#### uint64 tensorflow::EnvWrapper::NowMicros() override {#uint64_tensorflow_EnvWrapper_NowMicros}
+#### uint64 tensorflow::EnvWrapper::NowMicros() override <a class="md-anchor" id="uint64_tensorflow_EnvWrapper_NowMicros"></a>
Returns the number of micro-seconds since some fixed point in time. Only useful for computing deltas of time.
-#### void tensorflow::EnvWrapper::SleepForMicroseconds(int micros) override {#void_tensorflow_EnvWrapper_SleepForMicroseconds}
+#### void tensorflow::EnvWrapper::SleepForMicroseconds(int micros) override <a class="md-anchor" id="void_tensorflow_EnvWrapper_SleepForMicroseconds"></a>
Sleeps/delays the thread for the prescribed number of micro-seconds.
-#### Thread* tensorflow::EnvWrapper::StartThread(const ThreadOptions &amp;thread_options, const string &amp;name, std::function&lt; void()&gt; fn) override {#Thread_tensorflow_EnvWrapper_StartThread}
+#### Thread* tensorflow::EnvWrapper::StartThread(const ThreadOptions &amp;thread_options, const string &amp;name, std::function&lt; void()&gt; fn) override <a class="md-anchor" id="Thread_tensorflow_EnvWrapper_StartThread"></a>
Returns a new thread that is running fn() and is identified (for debugging/performance-analysis) by &quot;name&quot;.
diff --git a/tensorflow/g3doc/api_docs/cc/ClassRandomAccessFile.md b/tensorflow/g3doc/api_docs/cc/ClassRandomAccessFile.md
index 3538c2ca11..b3647db7c7 100644
--- a/tensorflow/g3doc/api_docs/cc/ClassRandomAccessFile.md
+++ b/tensorflow/g3doc/api_docs/cc/ClassRandomAccessFile.md
@@ -1,31 +1,31 @@
-#Class tensorflow::RandomAccessFile
+#Class tensorflow::RandomAccessFile <a class="md-anchor" id="AUTOGENERATED-class-tensorflow--randomaccessfile"></a>
A file abstraction for randomly reading the contents of a file.
-##Member Summary
+##Member Summary <a class="md-anchor" id="AUTOGENERATED-member-summary"></a>
* [tensorflow::RandomAccessFile::RandomAccessFile](#tensorflow_RandomAccessFile_RandomAccessFile)
* [virtual tensorflow::RandomAccessFile::~RandomAccessFile](#virtual_tensorflow_RandomAccessFile_RandomAccessFile)
* [virtual Status tensorflow::RandomAccessFile::Read](#virtual_Status_tensorflow_RandomAccessFile_Read)
* Reads up to &quot;n&quot; bytes from the file starting at &quot;offset&quot;.
-##Member Details
+##Member Details <a class="md-anchor" id="AUTOGENERATED-member-details"></a>
-#### tensorflow::RandomAccessFile::RandomAccessFile() {#tensorflow_RandomAccessFile_RandomAccessFile}
+#### tensorflow::RandomAccessFile::RandomAccessFile() <a class="md-anchor" id="tensorflow_RandomAccessFile_RandomAccessFile"></a>
-#### virtual tensorflow::RandomAccessFile::~RandomAccessFile() {#virtual_tensorflow_RandomAccessFile_RandomAccessFile}
+#### virtual tensorflow::RandomAccessFile::~RandomAccessFile() <a class="md-anchor" id="virtual_tensorflow_RandomAccessFile_RandomAccessFile"></a>
-#### virtual Status tensorflow::RandomAccessFile::Read(uint64 offset, size_t n, StringPiece *result, char *scratch) const =0 {#virtual_Status_tensorflow_RandomAccessFile_Read}
+#### virtual Status tensorflow::RandomAccessFile::Read(uint64 offset, size_t n, StringPiece *result, char *scratch) const =0 <a class="md-anchor" id="virtual_Status_tensorflow_RandomAccessFile_Read"></a>
Reads up to &quot;n&quot; bytes from the file starting at &quot;offset&quot;.
diff --git a/tensorflow/g3doc/api_docs/cc/ClassSession.md b/tensorflow/g3doc/api_docs/cc/ClassSession.md
index f2f9d8f762..21e99a8332 100644
--- a/tensorflow/g3doc/api_docs/cc/ClassSession.md
+++ b/tensorflow/g3doc/api_docs/cc/ClassSession.md
@@ -1,4 +1,4 @@
-#Class tensorflow::Session
+#Class tensorflow::Session <a class="md-anchor" id="AUTOGENERATED-class-tensorflow--session"></a>
A Session instance lets a caller drive a TensorFlow graph computation.
@@ -37,7 +37,7 @@ A Session allows concurrent calls to Run() , though a Session must be created /
Only one thread must call Close() , and Close() must only be called after all other calls to Run() have returned.
-##Member Summary
+##Member Summary <a class="md-anchor" id="AUTOGENERATED-member-summary"></a>
* [virtual Status tensorflow::Session::Create](#virtual_Status_tensorflow_Session_Create)
* Create the graph to be used for the session.
@@ -49,21 +49,21 @@ Only one thread must call Close() , and Close() must only be called after all ot
* Closes this session.
* [virtual tensorflow::Session::~Session](#virtual_tensorflow_Session_Session)
-##Member Details
+##Member Details <a class="md-anchor" id="AUTOGENERATED-member-details"></a>
-#### virtual Status tensorflow::Session::Create(const GraphDef &amp;graph)=0 {#virtual_Status_tensorflow_Session_Create}
+#### virtual Status tensorflow::Session::Create(const GraphDef &amp;graph)=0 <a class="md-anchor" id="virtual_Status_tensorflow_Session_Create"></a>
Create the graph to be used for the session.
Returns an error if this session has already been created with a graph. To re-use the session with a different graph, the caller must Close() the session first.
-#### virtual Status tensorflow::Session::Extend(const GraphDef &amp;graph)=0 {#virtual_Status_tensorflow_Session_Extend}
+#### virtual Status tensorflow::Session::Extend(const GraphDef &amp;graph)=0 <a class="md-anchor" id="virtual_Status_tensorflow_Session_Extend"></a>
Adds operations to the graph that is already registered with the Session .
The names of new operations in &quot;graph&quot; must not exist in the graph that is already registered.
-#### virtual Status tensorflow::Session::Run(const std::vector&lt; std::pair&lt; string, Tensor &gt; &gt; &amp;inputs, const std::vector&lt; string &gt; &amp;output_tensor_names, const std::vector&lt; string &gt; &amp;target_node_names, std::vector&lt; Tensor &gt; *outputs)=0 {#virtual_Status_tensorflow_Session_Run}
+#### virtual Status tensorflow::Session::Run(const std::vector&lt; std::pair&lt; string, Tensor &gt; &gt; &amp;inputs, const std::vector&lt; string &gt; &amp;output_tensor_names, const std::vector&lt; string &gt; &amp;target_node_names, std::vector&lt; Tensor &gt; *outputs)=0 <a class="md-anchor" id="virtual_Status_tensorflow_Session_Run"></a>
Runs the graph with the provided input tensors and fills &apos;outputs&apos; for the endpoints specified in &apos;output_tensor_names&apos;. Runs to but does not return Tensors for the nodes in &apos;target_node_names&apos;.
@@ -75,13 +75,13 @@ REQUIRES: The name of each Tensor of the input or output must match a &quot;Tens
REQUIRES: outputs is not nullptr if output_tensor_names is non-empty.
-#### virtual Status tensorflow::Session::Close()=0 {#virtual_Status_tensorflow_Session_Close}
+#### virtual Status tensorflow::Session::Close()=0 <a class="md-anchor" id="virtual_Status_tensorflow_Session_Close"></a>
Closes this session.
Closing a session releases the resources used by this session on the TensorFlow runtime (specified during session creation by the &apos; SessionOptions::target &apos; field).
-#### virtual tensorflow::Session::~Session() {#virtual_tensorflow_Session_Session}
+#### virtual tensorflow::Session::~Session() <a class="md-anchor" id="virtual_tensorflow_Session_Session"></a>
diff --git a/tensorflow/g3doc/api_docs/cc/ClassStatus.md b/tensorflow/g3doc/api_docs/cc/ClassStatus.md
index d5ef48b14d..2082930df0 100644
--- a/tensorflow/g3doc/api_docs/cc/ClassStatus.md
+++ b/tensorflow/g3doc/api_docs/cc/ClassStatus.md
@@ -1,10 +1,10 @@
-#Class tensorflow::Status
+#Class tensorflow::Status <a class="md-anchor" id="AUTOGENERATED-class-tensorflow--status"></a>
-##Member Summary
+##Member Summary <a class="md-anchor" id="AUTOGENERATED-member-summary"></a>
* [tensorflow::Status::Status](#tensorflow_Status_Status)
* Create a success status.
@@ -26,81 +26,81 @@
* Return a string representation of this status suitable for printing. Returns the string &quot;OK&quot; for success.
* [static Status tensorflow::Status::OK](#static_Status_tensorflow_Status_OK)
-##Member Details
+##Member Details <a class="md-anchor" id="AUTOGENERATED-member-details"></a>
-#### tensorflow::Status::Status() {#tensorflow_Status_Status}
+#### tensorflow::Status::Status() <a class="md-anchor" id="tensorflow_Status_Status"></a>
Create a success status.
-#### tensorflow::Status::~Status() {#tensorflow_Status_Status}
+#### tensorflow::Status::~Status() <a class="md-anchor" id="tensorflow_Status_Status"></a>
-#### tensorflow::Status::Status(tensorflow::error::Code code, tensorflow::StringPiece msg) {#tensorflow_Status_Status}
+#### tensorflow::Status::Status(tensorflow::error::Code code, tensorflow::StringPiece msg) <a class="md-anchor" id="tensorflow_Status_Status"></a>
Create a status with the specified error code and msg as a human-readable string containing more detailed information.
-#### tensorflow::Status::Status(const Status &amp;s) {#tensorflow_Status_Status}
+#### tensorflow::Status::Status(const Status &amp;s) <a class="md-anchor" id="tensorflow_Status_Status"></a>
Copy the specified status.
-#### void tensorflow::Status::operator=(const Status &amp;s) {#void_tensorflow_Status_operator_}
+#### void tensorflow::Status::operator=(const Status &amp;s) <a class="md-anchor" id="void_tensorflow_Status_operator_"></a>
-#### bool tensorflow::Status::ok() const {#bool_tensorflow_Status_ok}
+#### bool tensorflow::Status::ok() const <a class="md-anchor" id="bool_tensorflow_Status_ok"></a>
Returns true iff the status indicates success.
-#### tensorflow::error::Code tensorflow::Status::code() const {#tensorflow_error_Code_tensorflow_Status_code}
+#### tensorflow::error::Code tensorflow::Status::code() const <a class="md-anchor" id="tensorflow_error_Code_tensorflow_Status_code"></a>
-#### const string&amp; tensorflow::Status::error_message() const {#const_string_amp_tensorflow_Status_error_message}
+#### const string&amp; tensorflow::Status::error_message() const <a class="md-anchor" id="const_string_amp_tensorflow_Status_error_message"></a>
-#### bool tensorflow::Status::operator==(const Status &amp;x) const {#bool_tensorflow_Status_operator_}
+#### bool tensorflow::Status::operator==(const Status &amp;x) const <a class="md-anchor" id="bool_tensorflow_Status_operator_"></a>
-#### bool tensorflow::Status::operator!=(const Status &amp;x) const {#bool_tensorflow_Status_operator_}
+#### bool tensorflow::Status::operator!=(const Status &amp;x) const <a class="md-anchor" id="bool_tensorflow_Status_operator_"></a>
-#### void tensorflow::Status::Update(const Status &amp;new_status) {#void_tensorflow_Status_Update}
+#### void tensorflow::Status::Update(const Status &amp;new_status) <a class="md-anchor" id="void_tensorflow_Status_Update"></a>
If &quot;ok()&quot;, stores &quot;new_status&quot; into *this. If &quot;!ok()&quot;, preserves the current status, but may augment with additional information about &quot;new_status&quot;.
Convenient way of keeping track of the first error encountered. Instead of: if (overall_status.ok()) overall_status = new_status Use: overall_status.Update(new_status);
-#### string tensorflow::Status::ToString() const {#string_tensorflow_Status_ToString}
+#### string tensorflow::Status::ToString() const <a class="md-anchor" id="string_tensorflow_Status_ToString"></a>
Return a string representation of this status suitable for printing. Returns the string &quot;OK&quot; for success.
-#### static Status tensorflow::Status::OK() {#static_Status_tensorflow_Status_OK}
+#### static Status tensorflow::Status::OK() <a class="md-anchor" id="static_Status_tensorflow_Status_OK"></a>
diff --git a/tensorflow/g3doc/api_docs/cc/ClassTensor.md b/tensorflow/g3doc/api_docs/cc/ClassTensor.md
index 7ecc7688f3..37d52bfacf 100644
--- a/tensorflow/g3doc/api_docs/cc/ClassTensor.md
+++ b/tensorflow/g3doc/api_docs/cc/ClassTensor.md
@@ -1,10 +1,10 @@
-#Class tensorflow::Tensor
+#Class tensorflow::Tensor <a class="md-anchor" id="AUTOGENERATED-class-tensorflow--tensor"></a>
Represents an n-dimensional array of values.
-##Member Summary
+##Member Summary <a class="md-anchor" id="AUTOGENERATED-member-summary"></a>
* [tensorflow::Tensor::Tensor](#tensorflow_Tensor_Tensor)
* Default Tensor constructor. Creates a 1-dimension, 0-element float tensor.
@@ -76,105 +76,105 @@ Represents an n-dimensional array of values.
* [StringPiece tensorflow::Tensor::tensor_data](#StringPiece_tensorflow_Tensor_tensor_data)
* Returns a StringPiece mapping the current tensor&apos;s buffer.
-##Member Details
+##Member Details <a class="md-anchor" id="AUTOGENERATED-member-details"></a>
-#### tensorflow::Tensor::Tensor() {#tensorflow_Tensor_Tensor}
+#### tensorflow::Tensor::Tensor() <a class="md-anchor" id="tensorflow_Tensor_Tensor"></a>
Default Tensor constructor. Creates a 1-dimension, 0-element float tensor.
-#### tensorflow::Tensor::Tensor(DataType type, const TensorShape &amp;shape) {#tensorflow_Tensor_Tensor}
+#### tensorflow::Tensor::Tensor(DataType type, const TensorShape &amp;shape) <a class="md-anchor" id="tensorflow_Tensor_Tensor"></a>
Creates a Tensor of the given datatype and shape.
The underlying buffer is allocated using a CPUAllocator.
-#### tensorflow::Tensor::Tensor(Allocator *a, DataType type, const TensorShape &amp;shape) {#tensorflow_Tensor_Tensor}
+#### tensorflow::Tensor::Tensor(Allocator *a, DataType type, const TensorShape &amp;shape) <a class="md-anchor" id="tensorflow_Tensor_Tensor"></a>
Creates a tensor with the input datatype and shape, using the allocator &apos;a&apos; to allocate the underlying buffer.
&apos;a&apos; must outlive the lifetime of this Tensor .
-#### tensorflow::Tensor::Tensor(DataType type) {#tensorflow_Tensor_Tensor}
+#### tensorflow::Tensor::Tensor(DataType type) <a class="md-anchor" id="tensorflow_Tensor_Tensor"></a>
Creates an uninitialized Tensor of the given data type.
-#### tensorflow::Tensor::Tensor(const Tensor &amp;other) {#tensorflow_Tensor_Tensor}
+#### tensorflow::Tensor::Tensor(const Tensor &amp;other) <a class="md-anchor" id="tensorflow_Tensor_Tensor"></a>
Copy constructor.

-#### tensorflow::Tensor::~Tensor() {#tensorflow_Tensor_Tensor}
+#### tensorflow::Tensor::~Tensor() <a class="md-anchor" id="tensorflow_Tensor_Tensor"></a>
-#### DataType tensorflow::Tensor::dtype() const {#DataType_tensorflow_Tensor_dtype}
+#### DataType tensorflow::Tensor::dtype() const <a class="md-anchor" id="DataType_tensorflow_Tensor_dtype"></a>
Returns the data type.
-#### const TensorShape&amp; tensorflow::Tensor::shape() const {#const_TensorShape_amp_tensorflow_Tensor_shape}
+#### const TensorShape&amp; tensorflow::Tensor::shape() const <a class="md-anchor" id="const_TensorShape_amp_tensorflow_Tensor_shape"></a>
Returns the shape of the tensor.
-#### int tensorflow::Tensor::dims() const {#int_tensorflow_Tensor_dims}
+#### int tensorflow::Tensor::dims() const <a class="md-anchor" id="int_tensorflow_Tensor_dims"></a>
Convenience accessor for the tensor shape.
For all shape accessors, see comments for relevant methods of TensorShape in tensor_shape.h.
-#### int64 tensorflow::Tensor::dim_size(int d) const {#int64_tensorflow_Tensor_dim_size}
+#### int64 tensorflow::Tensor::dim_size(int d) const <a class="md-anchor" id="int64_tensorflow_Tensor_dim_size"></a>
Convenience accessor for the tensor shape.
-#### int64 tensorflow::Tensor::NumElements() const {#int64_tensorflow_Tensor_NumElements}
+#### int64 tensorflow::Tensor::NumElements() const <a class="md-anchor" id="int64_tensorflow_Tensor_NumElements"></a>
Convenience accessor for the tensor shape.
-#### bool tensorflow::Tensor::IsSameSize(const Tensor &amp;b) const {#bool_tensorflow_Tensor_IsSameSize}
+#### bool tensorflow::Tensor::IsSameSize(const Tensor &amp;b) const <a class="md-anchor" id="bool_tensorflow_Tensor_IsSameSize"></a>
-#### bool tensorflow::Tensor::IsInitialized() const {#bool_tensorflow_Tensor_IsInitialized}
+#### bool tensorflow::Tensor::IsInitialized() const <a class="md-anchor" id="bool_tensorflow_Tensor_IsInitialized"></a>
Has this Tensor been initialized?
-#### size_t tensorflow::Tensor::TotalBytes() const {#size_t_tensorflow_Tensor_TotalBytes}
+#### size_t tensorflow::Tensor::TotalBytes() const <a class="md-anchor" id="size_t_tensorflow_Tensor_TotalBytes"></a>
Returns the estimated memory usage of this tensor.
-#### Tensor&amp; tensorflow::Tensor::operator=(const Tensor &amp;other) {#Tensor_amp_tensorflow_Tensor_operator_}
+#### Tensor&amp; tensorflow::Tensor::operator=(const Tensor &amp;other) <a class="md-anchor" id="Tensor_amp_tensorflow_Tensor_operator_"></a>
Assignment operator. This tensor shares other&apos;s underlying storage.
-#### bool tensorflow::Tensor::CopyFrom(const Tensor &amp;other, const TensorShape &amp;shape) TF_MUST_USE_RESULT {#bool_tensorflow_Tensor_CopyFrom}
+#### bool tensorflow::Tensor::CopyFrom(const Tensor &amp;other, const TensorShape &amp;shape) TF_MUST_USE_RESULT <a class="md-anchor" id="bool_tensorflow_Tensor_CopyFrom"></a>
Copy the other tensor into this tensor and reshape it.
This tensor shares other&apos;s underlying storage. Returns true iff other.shape() has the same number of elements as the given &quot;shape&quot;.
-#### Tensor tensorflow::Tensor::Slice(int64 dim0_start, int64 dim0_limit) const {#Tensor_tensorflow_Tensor_Slice}
+#### Tensor tensorflow::Tensor::Slice(int64 dim0_start, int64 dim0_limit) const <a class="md-anchor" id="Tensor_tensorflow_Tensor_Slice"></a>
Slice this tensor along the 1st dimension.
@@ -184,31 +184,31 @@ NOTE: The returned tensor may not satisfies the same alignment requirement as th
REQUIRES: dims() &gt;= 1 REQUIRES: 0 &lt;= dim0_start &lt;= dim0_limit &lt;= dim_size(0)
-#### bool tensorflow::Tensor::FromProto(const TensorProto &amp;other) TF_MUST_USE_RESULT {#bool_tensorflow_Tensor_FromProto}
+#### bool tensorflow::Tensor::FromProto(const TensorProto &amp;other) TF_MUST_USE_RESULT <a class="md-anchor" id="bool_tensorflow_Tensor_FromProto"></a>
Parse &quot;other&quot; and construct the tensor.
Returns true iff the parsing succeeds. If the parsing fails, the state of &quot;*this&quot; is unchanged.
-#### bool tensorflow::Tensor::FromProto(Allocator *a, const TensorProto &amp;other) TF_MUST_USE_RESULT {#bool_tensorflow_Tensor_FromProto}
+#### bool tensorflow::Tensor::FromProto(Allocator *a, const TensorProto &amp;other) TF_MUST_USE_RESULT <a class="md-anchor" id="bool_tensorflow_Tensor_FromProto"></a>
-#### void tensorflow::Tensor::AsProtoField(TensorProto *proto) const {#void_tensorflow_Tensor_AsProtoField}
+#### void tensorflow::Tensor::AsProtoField(TensorProto *proto) const <a class="md-anchor" id="void_tensorflow_Tensor_AsProtoField"></a>
Fills in &quot;proto&quot; with &quot;*this&quot; tensor&apos;s content.
AsProtoField() fills in the repeated field for proto.dtype(), while AsProtoTensorContent() encodes the content in proto.tensor_content() in a compact form.
-#### void tensorflow::Tensor::AsProtoTensorContent(TensorProto *proto) const {#void_tensorflow_Tensor_AsProtoTensorContent}
+#### void tensorflow::Tensor::AsProtoTensorContent(TensorProto *proto) const <a class="md-anchor" id="void_tensorflow_Tensor_AsProtoTensorContent"></a>
-#### TTypes&lt;T&gt;::Vec tensorflow::Tensor::vec() {#TTypes_lt_T_gt_Vec_tensorflow_Tensor_vec}
+#### TTypes&lt;T&gt;::Vec tensorflow::Tensor::vec() <a class="md-anchor" id="TTypes_lt_T_gt_Vec_tensorflow_Tensor_vec"></a>
Return the Tensor data as an Eigen::Tensor with the type and sizes of this Tensor .
@@ -216,19 +216,19 @@ Use these methods when you know the data type and the number of dimensions of th
Example:

    typedef float T;
    Tensor my_mat(...built with Shape{rows: 3, cols: 5}...);
    auto mat = my_mat.matrix<T>();     // 2D Eigen::Tensor, 3 x 5.
    auto mat = my_mat.tensor<T, 2>();  // 2D Eigen::Tensor, 3 x 5.
    auto vec = my_mat.vec<T>();        // CHECK fails as my_mat is 2D.
    auto vec = my_mat.tensor<T, 3>();  // CHECK fails as my_mat is 2D.
    auto mat = my_mat.matrix<int32>(); // CHECK fails as type mismatch.
-#### TTypes&lt;T&gt;::Matrix tensorflow::Tensor::matrix() {#TTypes_lt_T_gt_Matrix_tensorflow_Tensor_matrix}
+#### TTypes&lt;T&gt;::Matrix tensorflow::Tensor::matrix() <a class="md-anchor" id="TTypes_lt_T_gt_Matrix_tensorflow_Tensor_matrix"></a>
-#### TTypes&lt; T, NDIMS &gt;::Tensor tensorflow::Tensor::tensor() {#TTypes_lt_T_NDIMS_gt_Tensor_tensorflow_Tensor_tensor}
+#### TTypes&lt; T, NDIMS &gt;::Tensor tensorflow::Tensor::tensor() <a class="md-anchor" id="TTypes_lt_T_NDIMS_gt_Tensor_tensorflow_Tensor_tensor"></a>
-#### TTypes&lt;T&gt;::Flat tensorflow::Tensor::flat() {#TTypes_lt_T_gt_Flat_tensorflow_Tensor_flat}
+#### TTypes&lt;T&gt;::Flat tensorflow::Tensor::flat() <a class="md-anchor" id="TTypes_lt_T_gt_Flat_tensorflow_Tensor_flat"></a>
Return the Tensor data as an Eigen::Tensor of the data type and a specified shape.
@@ -236,121 +236,121 @@ These methods allow you to access the data with the dimensions and sizes of your
Example:

    typedef float T;
    Tensor my_ten(...built with Shape{planes: 4, rows: 3, cols: 5}...);
    // 1D Eigen::Tensor, size 60:
    auto flat = my_ten.flat<T>();
    // 2D Eigen::Tensor 12 x 5:
    auto inner = my_ten.flat_inner_dims<T>();
    // 2D Eigen::Tensor 4 x 15:
    auto outer = my_ten.shaped<T, 2>({4, 15});
    // CHECK fails, bad num elements:
    auto outer = my_ten.shaped<T, 2>({4, 8});
    // 3D Eigen::Tensor 6 x 5 x 2:
    auto weird = my_ten.shaped<T, 3>({6, 5, 2});
    // CHECK fails, type mismatch:
    auto bad = my_ten.flat<int32>();
-#### TTypes&lt;T&gt;::UnalignedFlat tensorflow::Tensor::unaligned_flat() {#TTypes_lt_T_gt_UnalignedFlat_tensorflow_Tensor_unaligned_flat}
+#### TTypes&lt;T&gt;::UnalignedFlat tensorflow::Tensor::unaligned_flat() <a class="md-anchor" id="TTypes_lt_T_gt_UnalignedFlat_tensorflow_Tensor_unaligned_flat"></a>
-#### TTypes&lt;T&gt;::Matrix tensorflow::Tensor::flat_inner_dims() {#TTypes_lt_T_gt_Matrix_tensorflow_Tensor_flat_inner_dims}
+#### TTypes&lt;T&gt;::Matrix tensorflow::Tensor::flat_inner_dims() <a class="md-anchor" id="TTypes_lt_T_gt_Matrix_tensorflow_Tensor_flat_inner_dims"></a>
Returns the data as an Eigen::Tensor with 2 dimensions, collapsing all Tensor dimensions but the last one into the first dimension of the result.
-#### TTypes&lt;T&gt;::Matrix tensorflow::Tensor::flat_outer_dims() {#TTypes_lt_T_gt_Matrix_tensorflow_Tensor_flat_outer_dims}
+#### TTypes&lt;T&gt;::Matrix tensorflow::Tensor::flat_outer_dims() <a class="md-anchor" id="TTypes_lt_T_gt_Matrix_tensorflow_Tensor_flat_outer_dims"></a>
Returns the data as an Eigen::Tensor with 2 dimensions, collapsing all Tensor dimensions but the first one into the last dimension of the result.
-#### TTypes&lt; T, NDIMS &gt;::Tensor tensorflow::Tensor::shaped(gtl::ArraySlice&lt; int64 &gt; new_sizes) {#TTypes_lt_T_NDIMS_gt_Tensor_tensorflow_Tensor_shaped}
+#### TTypes&lt; T, NDIMS &gt;::Tensor tensorflow::Tensor::shaped(gtl::ArraySlice&lt; int64 &gt; new_sizes) <a class="md-anchor" id="TTypes_lt_T_NDIMS_gt_Tensor_tensorflow_Tensor_shaped"></a>
-#### TTypes&lt; T, NDIMS &gt;::UnalignedTensor tensorflow::Tensor::unaligned_shaped(gtl::ArraySlice&lt; int64 &gt; new_sizes) {#TTypes_lt_T_NDIMS_gt_UnalignedTensor_tensorflow_Tensor_unaligned_shaped}
+#### TTypes&lt; T, NDIMS &gt;::UnalignedTensor tensorflow::Tensor::unaligned_shaped(gtl::ArraySlice&lt; int64 &gt; new_sizes) <a class="md-anchor" id="TTypes_lt_T_NDIMS_gt_UnalignedTensor_tensorflow_Tensor_unaligned_shaped"></a>
-#### TTypes&lt; T &gt;::Scalar tensorflow::Tensor::scalar() {#TTypes_lt_T_gt_Scalar_tensorflow_Tensor_scalar}
+#### TTypes&lt; T &gt;::Scalar tensorflow::Tensor::scalar() <a class="md-anchor" id="TTypes_lt_T_gt_Scalar_tensorflow_Tensor_scalar"></a>
Return the Tensor data as a Tensor Map of fixed size 1: TensorMap&lt;TensorFixedSize&lt;T, 1&gt;&gt;.
Using scalar() allows the compiler to perform optimizations as the size of the tensor is known at compile time.
-#### TTypes&lt;T&gt;::ConstVec tensorflow::Tensor::vec() const {#TTypes_lt_T_gt_ConstVec_tensorflow_Tensor_vec}
+#### TTypes&lt;T&gt;::ConstVec tensorflow::Tensor::vec() const <a class="md-anchor" id="TTypes_lt_T_gt_ConstVec_tensorflow_Tensor_vec"></a>
Const versions of all the methods above.
-#### TTypes&lt;T&gt;::ConstMatrix tensorflow::Tensor::matrix() const {#TTypes_lt_T_gt_ConstMatrix_tensorflow_Tensor_matrix}
+#### TTypes&lt;T&gt;::ConstMatrix tensorflow::Tensor::matrix() const <a class="md-anchor" id="TTypes_lt_T_gt_ConstMatrix_tensorflow_Tensor_matrix"></a>
-#### TTypes&lt; T, NDIMS &gt;::ConstTensor tensorflow::Tensor::tensor() const {#TTypes_lt_T_NDIMS_gt_ConstTensor_tensorflow_Tensor_tensor}
+#### TTypes&lt; T, NDIMS &gt;::ConstTensor tensorflow::Tensor::tensor() const <a class="md-anchor" id="TTypes_lt_T_NDIMS_gt_ConstTensor_tensorflow_Tensor_tensor"></a>
-#### TTypes&lt;T&gt;::ConstFlat tensorflow::Tensor::flat() const {#TTypes_lt_T_gt_ConstFlat_tensorflow_Tensor_flat}
+#### TTypes&lt;T&gt;::ConstFlat tensorflow::Tensor::flat() const <a class="md-anchor" id="TTypes_lt_T_gt_ConstFlat_tensorflow_Tensor_flat"></a>
-#### TTypes&lt;T&gt;::ConstUnalignedFlat tensorflow::Tensor::unaligned_flat() const {#TTypes_lt_T_gt_ConstUnalignedFlat_tensorflow_Tensor_unaligned_flat}
+#### TTypes&lt;T&gt;::ConstUnalignedFlat tensorflow::Tensor::unaligned_flat() const <a class="md-anchor" id="TTypes_lt_T_gt_ConstUnalignedFlat_tensorflow_Tensor_unaligned_flat"></a>
-#### TTypes&lt;T&gt;::ConstMatrix tensorflow::Tensor::flat_inner_dims() const {#TTypes_lt_T_gt_ConstMatrix_tensorflow_Tensor_flat_inner_dims}
+#### TTypes&lt;T&gt;::ConstMatrix tensorflow::Tensor::flat_inner_dims() const <a class="md-anchor" id="TTypes_lt_T_gt_ConstMatrix_tensorflow_Tensor_flat_inner_dims"></a>
-#### TTypes&lt;T&gt;::ConstMatrix tensorflow::Tensor::flat_outer_dims() const {#TTypes_lt_T_gt_ConstMatrix_tensorflow_Tensor_flat_outer_dims}
+#### TTypes&lt;T&gt;::ConstMatrix tensorflow::Tensor::flat_outer_dims() const <a class="md-anchor" id="TTypes_lt_T_gt_ConstMatrix_tensorflow_Tensor_flat_outer_dims"></a>
-#### TTypes&lt; T, NDIMS &gt;::ConstTensor tensorflow::Tensor::shaped(gtl::ArraySlice&lt; int64 &gt; new_sizes) const {#TTypes_lt_T_NDIMS_gt_ConstTensor_tensorflow_Tensor_shaped}
+#### TTypes&lt; T, NDIMS &gt;::ConstTensor tensorflow::Tensor::shaped(gtl::ArraySlice&lt; int64 &gt; new_sizes) const <a class="md-anchor" id="TTypes_lt_T_NDIMS_gt_ConstTensor_tensorflow_Tensor_shaped"></a>
-#### TTypes&lt; T, NDIMS &gt;::ConstUnalignedTensor tensorflow::Tensor::unaligned_shaped(gtl::ArraySlice&lt; int64 &gt; new_sizes) const {#TTypes_lt_T_NDIMS_gt_ConstUnalignedTensor_tensorflow_Tensor_unaligned_shaped}
+#### TTypes&lt; T, NDIMS &gt;::ConstUnalignedTensor tensorflow::Tensor::unaligned_shaped(gtl::ArraySlice&lt; int64 &gt; new_sizes) const <a class="md-anchor" id="TTypes_lt_T_NDIMS_gt_ConstUnalignedTensor_tensorflow_Tensor_unaligned_shaped"></a>
-#### TTypes&lt; T &gt;::ConstScalar tensorflow::Tensor::scalar() const {#TTypes_lt_T_gt_ConstScalar_tensorflow_Tensor_scalar}
+#### TTypes&lt; T &gt;::ConstScalar tensorflow::Tensor::scalar() const <a class="md-anchor" id="TTypes_lt_T_gt_ConstScalar_tensorflow_Tensor_scalar"></a>
-#### string tensorflow::Tensor::SummarizeValue(int64 max_entries) const {#string_tensorflow_Tensor_SummarizeValue}
+#### string tensorflow::Tensor::SummarizeValue(int64 max_entries) const <a class="md-anchor" id="string_tensorflow_Tensor_SummarizeValue"></a>
Render the first max_entries values in *this into a string.
-#### string tensorflow::Tensor::DebugString() const {#string_tensorflow_Tensor_DebugString}
+#### string tensorflow::Tensor::DebugString() const <a class="md-anchor" id="string_tensorflow_Tensor_DebugString"></a>
A human-readable summary of the Tensor suitable for debugging.
-#### void tensorflow::Tensor::FillDescription(TensorDescription *description) const {#void_tensorflow_Tensor_FillDescription}
+#### void tensorflow::Tensor::FillDescription(TensorDescription *description) const <a class="md-anchor" id="void_tensorflow_Tensor_FillDescription"></a>
Fill in the TensorDescription proto with metadata about the Tensor that is useful for monitoring and debugging.
-#### StringPiece tensorflow::Tensor::tensor_data() const {#StringPiece_tensorflow_Tensor_tensor_data}
+#### StringPiece tensorflow::Tensor::tensor_data() const <a class="md-anchor" id="StringPiece_tensorflow_Tensor_tensor_data"></a>
Returns a StringPiece mapping the current tensor&apos;s buffer.
diff --git a/tensorflow/g3doc/api_docs/cc/ClassTensorBuffer.md b/tensorflow/g3doc/api_docs/cc/ClassTensorBuffer.md
index 9f2c6a23be..e6a76083dc 100644
--- a/tensorflow/g3doc/api_docs/cc/ClassTensorBuffer.md
+++ b/tensorflow/g3doc/api_docs/cc/ClassTensorBuffer.md
@@ -1,10 +1,10 @@
-#Class tensorflow::TensorBuffer
+#Class tensorflow::TensorBuffer <a class="md-anchor" id="AUTOGENERATED-class-tensorflow--tensorbuffer"></a>
-##Member Summary
+##Member Summary <a class="md-anchor" id="AUTOGENERATED-member-summary"></a>
* [tensorflow::TensorBuffer::~TensorBuffer](#tensorflow_TensorBuffer_TensorBuffer)
* [virtual void* tensorflow::TensorBuffer::data](#virtual_void_tensorflow_TensorBuffer_data)
@@ -13,39 +13,39 @@
* [virtual void tensorflow::TensorBuffer::FillAllocationDescription](#virtual_void_tensorflow_TensorBuffer_FillAllocationDescription)
* [T* tensorflow::TensorBuffer::base](#T_tensorflow_TensorBuffer_base)
-##Member Details
+##Member Details <a class="md-anchor" id="AUTOGENERATED-member-details"></a>
-#### tensorflow::TensorBuffer::~TensorBuffer() override {#tensorflow_TensorBuffer_TensorBuffer}
+#### tensorflow::TensorBuffer::~TensorBuffer() override <a class="md-anchor" id="tensorflow_TensorBuffer_TensorBuffer"></a>
-#### virtual void* tensorflow::TensorBuffer::data() const =0 {#virtual_void_tensorflow_TensorBuffer_data}
+#### virtual void* tensorflow::TensorBuffer::data() const =0 <a class="md-anchor" id="virtual_void_tensorflow_TensorBuffer_data"></a>
-#### virtual size_t tensorflow::TensorBuffer::size() const =0 {#virtual_size_t_tensorflow_TensorBuffer_size}
+#### virtual size_t tensorflow::TensorBuffer::size() const =0 <a class="md-anchor" id="virtual_size_t_tensorflow_TensorBuffer_size"></a>
-#### virtual TensorBuffer* tensorflow::TensorBuffer::root_buffer()=0 {#virtual_TensorBuffer_tensorflow_TensorBuffer_root_buffer}
+#### virtual TensorBuffer* tensorflow::TensorBuffer::root_buffer()=0 <a class="md-anchor" id="virtual_TensorBuffer_tensorflow_TensorBuffer_root_buffer"></a>
-#### virtual void tensorflow::TensorBuffer::FillAllocationDescription(AllocationDescription *proto) const =0 {#virtual_void_tensorflow_TensorBuffer_FillAllocationDescription}
+#### virtual void tensorflow::TensorBuffer::FillAllocationDescription(AllocationDescription *proto) const =0 <a class="md-anchor" id="virtual_void_tensorflow_TensorBuffer_FillAllocationDescription"></a>
-#### T* tensorflow::TensorBuffer::base() const {#T_tensorflow_TensorBuffer_base}
+#### T* tensorflow::TensorBuffer::base() const <a class="md-anchor" id="T_tensorflow_TensorBuffer_base"></a>
diff --git a/tensorflow/g3doc/api_docs/cc/ClassTensorShape.md b/tensorflow/g3doc/api_docs/cc/ClassTensorShape.md
index 47a105a76e..c2318c0dac 100644
--- a/tensorflow/g3doc/api_docs/cc/ClassTensorShape.md
+++ b/tensorflow/g3doc/api_docs/cc/ClassTensorShape.md
@@ -1,10 +1,10 @@
-#Class tensorflow::TensorShape
+#Class tensorflow::TensorShape <a class="md-anchor" id="AUTOGENERATED-class-tensorflow--tensorshape"></a>
Manages the dimensions of a Tensor and their sizes.
-##Member Summary
+##Member Summary <a class="md-anchor" id="AUTOGENERATED-member-summary"></a>
* [tensorflow::TensorShape::TensorShape](#tensorflow_TensorShape_TensorShape)
  * Construct a TensorShape from the provided sizes. REQUIRES: dim_sizes[i] &gt;= 0.
@@ -49,147 +49,147 @@ Manages the dimensions of a Tensor and their sizes.
* [static bool tensorflow::TensorShape::IsValid](#static_bool_tensorflow_TensorShape_IsValid)
* Returns true iff &quot;proto&quot; is a valid tensor shape.
-##Member Details
+##Member Details <a class="md-anchor" id="AUTOGENERATED-member-details"></a>
-#### tensorflow::TensorShape::TensorShape(gtl::ArraySlice&lt; int64 &gt; dim_sizes) {#tensorflow_TensorShape_TensorShape}
+#### tensorflow::TensorShape::TensorShape(gtl::ArraySlice&lt; int64 &gt; dim_sizes) <a class="md-anchor" id="tensorflow_TensorShape_TensorShape"></a>
Construct a TensorShape from the provided sizes. REQUIRES: dim_sizes[i] &gt;= 0.
-#### tensorflow::TensorShape::TensorShape(std::initializer_list&lt; int64 &gt; dim_sizes) {#tensorflow_TensorShape_TensorShape}
+#### tensorflow::TensorShape::TensorShape(std::initializer_list&lt; int64 &gt; dim_sizes) <a class="md-anchor" id="tensorflow_TensorShape_TensorShape"></a>
-#### tensorflow::TensorShape::TensorShape(const TensorShapeProto &amp;proto) {#tensorflow_TensorShape_TensorShape}
+#### tensorflow::TensorShape::TensorShape(const TensorShapeProto &amp;proto) <a class="md-anchor" id="tensorflow_TensorShape_TensorShape"></a>
REQUIRES: IsValid(proto)
-#### tensorflow::TensorShape::TensorShape() {#tensorflow_TensorShape_TensorShape}
+#### tensorflow::TensorShape::TensorShape() <a class="md-anchor" id="tensorflow_TensorShape_TensorShape"></a>
Create a tensor shape with no dimensions and one element, which you can then call AddDim() on.
-#### void tensorflow::TensorShape::Clear() {#void_tensorflow_TensorShape_Clear}
+#### void tensorflow::TensorShape::Clear() <a class="md-anchor" id="void_tensorflow_TensorShape_Clear"></a>
Clear a tensor shape.
-#### void tensorflow::TensorShape::AddDim(int64 size) {#void_tensorflow_TensorShape_AddDim}
+#### void tensorflow::TensorShape::AddDim(int64 size) <a class="md-anchor" id="void_tensorflow_TensorShape_AddDim"></a>
Add a dimension to the end (&quot;inner-most&quot;). REQUIRES: size &gt;= 0.
-#### void tensorflow::TensorShape::AppendShape(const TensorShape &amp;shape) {#void_tensorflow_TensorShape_AppendShape}
+#### void tensorflow::TensorShape::AppendShape(const TensorShape &amp;shape) <a class="md-anchor" id="void_tensorflow_TensorShape_AppendShape"></a>
Appends all the dimensions from shape.
-#### void tensorflow::TensorShape::InsertDim(int d, int64 size) {#void_tensorflow_TensorShape_InsertDim}
+#### void tensorflow::TensorShape::InsertDim(int d, int64 size) <a class="md-anchor" id="void_tensorflow_TensorShape_InsertDim"></a>
Insert a dimension somewhere in the TensorShape. REQUIRES: &quot;0 &lt;= d &lt;= dims()&quot; REQUIRES: size &gt;= 0.
-#### void tensorflow::TensorShape::set_dim(int d, int64 size) {#void_tensorflow_TensorShape_set_dim}
+#### void tensorflow::TensorShape::set_dim(int d, int64 size) <a class="md-anchor" id="void_tensorflow_TensorShape_set_dim"></a>
Modifies the size of the dimension &apos;d&apos; to be &apos;size&apos;. REQUIRES: &quot;0 &lt;= d &lt; dims()&quot; REQUIRES: size &gt;= 0.
-#### void tensorflow::TensorShape::RemoveDim(int d) {#void_tensorflow_TensorShape_RemoveDim}
+#### void tensorflow::TensorShape::RemoveDim(int d) <a class="md-anchor" id="void_tensorflow_TensorShape_RemoveDim"></a>
Removes dimension &apos;d&apos; from the TensorShape. REQUIRES: &quot;0 &lt;= d &lt; dims()&quot;.
-#### int tensorflow::TensorShape::dims() const {#int_tensorflow_TensorShape_dims}
+#### int tensorflow::TensorShape::dims() const <a class="md-anchor" id="int_tensorflow_TensorShape_dims"></a>
Return the number of dimensions in the tensor.
-#### int64 tensorflow::TensorShape::dim_size(int d) const {#int64_tensorflow_TensorShape_dim_size}
+#### int64 tensorflow::TensorShape::dim_size(int d) const <a class="md-anchor" id="int64_tensorflow_TensorShape_dim_size"></a>
Returns the number of elements in dimension &quot;d&quot;. REQUIRES: &quot;0 &lt;= d &lt; dims()&quot;.
-#### gtl::ArraySlice&lt;int64&gt; tensorflow::TensorShape::dim_sizes() const {#gtl_ArraySlice_lt_int64_gt_tensorflow_TensorShape_dim_sizes}
+#### gtl::ArraySlice&lt;int64&gt; tensorflow::TensorShape::dim_sizes() const <a class="md-anchor" id="gtl_ArraySlice_lt_int64_gt_tensorflow_TensorShape_dim_sizes"></a>
Returns sizes of all dimensions.
-#### int64 tensorflow::TensorShape::num_elements() const {#int64_tensorflow_TensorShape_num_elements}
+#### int64 tensorflow::TensorShape::num_elements() const <a class="md-anchor" id="int64_tensorflow_TensorShape_num_elements"></a>
Returns the number of elements in the tensor.
We use int64 and not size_t to be compatible with Eigen::Tensor which uses ptrdiff_t.
-#### bool tensorflow::TensorShape::IsSameSize(const TensorShape &amp;b) const {#bool_tensorflow_TensorShape_IsSameSize}
+#### bool tensorflow::TensorShape::IsSameSize(const TensorShape &amp;b) const <a class="md-anchor" id="bool_tensorflow_TensorShape_IsSameSize"></a>
Returns true if *this and b have the same sizes. Ignores dimension names.
-#### bool tensorflow::TensorShape::operator==(const TensorShape &amp;b) const {#bool_tensorflow_TensorShape_operator_}
+#### bool tensorflow::TensorShape::operator==(const TensorShape &amp;b) const <a class="md-anchor" id="bool_tensorflow_TensorShape_operator_"></a>
-#### void tensorflow::TensorShape::AsProto(TensorShapeProto *proto) const {#void_tensorflow_TensorShape_AsProto}
+#### void tensorflow::TensorShape::AsProto(TensorShapeProto *proto) const <a class="md-anchor" id="void_tensorflow_TensorShape_AsProto"></a>
Fill *proto from *this.
-#### Eigen::DSizes&lt; Eigen::DenseIndex, NDIMS &gt; tensorflow::TensorShape::AsEigenDSizes() const {#Eigen_DSizes_lt_Eigen_DenseIndex_NDIMS_gt_tensorflow_TensorShape_AsEigenDSizes}
+#### Eigen::DSizes&lt; Eigen::DenseIndex, NDIMS &gt; tensorflow::TensorShape::AsEigenDSizes() const <a class="md-anchor" id="Eigen_DSizes_lt_Eigen_DenseIndex_NDIMS_gt_tensorflow_TensorShape_AsEigenDSizes"></a>
Fill *dsizes from *this.
-#### Eigen::DSizes&lt; Eigen::DenseIndex, NDIMS &gt; tensorflow::TensorShape::AsEigenDSizesWithPadding() const {#Eigen_DSizes_lt_Eigen_DenseIndex_NDIMS_gt_tensorflow_TensorShape_AsEigenDSizesWithPadding}
+#### Eigen::DSizes&lt; Eigen::DenseIndex, NDIMS &gt; tensorflow::TensorShape::AsEigenDSizesWithPadding() const <a class="md-anchor" id="Eigen_DSizes_lt_Eigen_DenseIndex_NDIMS_gt_tensorflow_TensorShape_AsEigenDSizesWithPadding"></a>
Same as AsEigenDSizes() but allows for NDIMS &gt; dims() in which case we pad the rest of the sizes with 1.
-#### TensorShapeIter tensorflow::TensorShape::begin() const {#TensorShapeIter_tensorflow_TensorShape_begin}
+#### TensorShapeIter tensorflow::TensorShape::begin() const <a class="md-anchor" id="TensorShapeIter_tensorflow_TensorShape_begin"></a>
For iterating through the dimensions.
-#### TensorShapeIter tensorflow::TensorShape::end() const {#TensorShapeIter_tensorflow_TensorShape_end}
+#### TensorShapeIter tensorflow::TensorShape::end() const <a class="md-anchor" id="TensorShapeIter_tensorflow_TensorShape_end"></a>
-#### string tensorflow::TensorShape::DebugString() const {#string_tensorflow_TensorShape_DebugString}
+#### string tensorflow::TensorShape::DebugString() const <a class="md-anchor" id="string_tensorflow_TensorShape_DebugString"></a>
For error messages.
-#### string tensorflow::TensorShape::ShortDebugString() const {#string_tensorflow_TensorShape_ShortDebugString}
+#### string tensorflow::TensorShape::ShortDebugString() const <a class="md-anchor" id="string_tensorflow_TensorShape_ShortDebugString"></a>
-#### static bool tensorflow::TensorShape::IsValid(const TensorShapeProto &amp;proto) {#static_bool_tensorflow_TensorShape_IsValid}
+#### static bool tensorflow::TensorShape::IsValid(const TensorShapeProto &amp;proto) <a class="md-anchor" id="static_bool_tensorflow_TensorShape_IsValid"></a>
Returns true iff &quot;proto&quot; is a valid tensor shape.
diff --git a/tensorflow/g3doc/api_docs/cc/ClassTensorShapeIter.md b/tensorflow/g3doc/api_docs/cc/ClassTensorShapeIter.md
index 2f198168a2..4789df2e0e 100644
--- a/tensorflow/g3doc/api_docs/cc/ClassTensorShapeIter.md
+++ b/tensorflow/g3doc/api_docs/cc/ClassTensorShapeIter.md
@@ -1,10 +1,10 @@
-#Class tensorflow::TensorShapeIter
+#Class tensorflow::TensorShapeIter <a class="md-anchor" id="AUTOGENERATED-class-tensorflow--tensorshapeiter"></a>
-##Member Summary
+##Member Summary <a class="md-anchor" id="AUTOGENERATED-member-summary"></a>
* [tensorflow::TensorShapeIter::TensorShapeIter](#tensorflow_TensorShapeIter_TensorShapeIter)
* [bool tensorflow::TensorShapeIter::operator==](#bool_tensorflow_TensorShapeIter_operator_)
@@ -12,33 +12,33 @@
* [void tensorflow::TensorShapeIter::operator++](#void_tensorflow_TensorShapeIter_operator_)
* [TensorShapeDim tensorflow::TensorShapeIter::operator*](#TensorShapeDim_tensorflow_TensorShapeIter_operator_)
-##Member Details
+##Member Details <a class="md-anchor" id="AUTOGENERATED-member-details"></a>
-#### tensorflow::TensorShapeIter::TensorShapeIter(const TensorShape *shape, int d) {#tensorflow_TensorShapeIter_TensorShapeIter}
+#### tensorflow::TensorShapeIter::TensorShapeIter(const TensorShape *shape, int d) <a class="md-anchor" id="tensorflow_TensorShapeIter_TensorShapeIter"></a>
-#### bool tensorflow::TensorShapeIter::operator==(const TensorShapeIter &amp;rhs) {#bool_tensorflow_TensorShapeIter_operator_}
+#### bool tensorflow::TensorShapeIter::operator==(const TensorShapeIter &amp;rhs) <a class="md-anchor" id="bool_tensorflow_TensorShapeIter_operator_"></a>
-#### bool tensorflow::TensorShapeIter::operator!=(const TensorShapeIter &amp;rhs) {#bool_tensorflow_TensorShapeIter_operator_}
+#### bool tensorflow::TensorShapeIter::operator!=(const TensorShapeIter &amp;rhs) <a class="md-anchor" id="bool_tensorflow_TensorShapeIter_operator_"></a>
-#### void tensorflow::TensorShapeIter::operator++() {#void_tensorflow_TensorShapeIter_operator_}
+#### void tensorflow::TensorShapeIter::operator++() <a class="md-anchor" id="void_tensorflow_TensorShapeIter_operator_"></a>
-#### TensorShapeDim tensorflow::TensorShapeIter::operator*() {#TensorShapeDim_tensorflow_TensorShapeIter_operator_}
+#### TensorShapeDim tensorflow::TensorShapeIter::operator*() <a class="md-anchor" id="TensorShapeDim_tensorflow_TensorShapeIter_operator_"></a>
diff --git a/tensorflow/g3doc/api_docs/cc/ClassTensorShapeUtils.md b/tensorflow/g3doc/api_docs/cc/ClassTensorShapeUtils.md
index 7b81eb62a8..2221ebdd91 100644
--- a/tensorflow/g3doc/api_docs/cc/ClassTensorShapeUtils.md
+++ b/tensorflow/g3doc/api_docs/cc/ClassTensorShapeUtils.md
@@ -1,10 +1,10 @@
-#Class tensorflow::TensorShapeUtils
+#Class tensorflow::TensorShapeUtils <a class="md-anchor" id="AUTOGENERATED-class-tensorflow--tensorshapeutils"></a>
Static helper routines for TensorShape. Includes a few common predicates on a tensor shape.
-##Member Summary
+##Member Summary <a class="md-anchor" id="AUTOGENERATED-member-summary"></a>
* [static bool tensorflow::TensorShapeUtils::IsScalar](#static_bool_tensorflow_TensorShapeUtils_IsScalar)
* [static bool tensorflow::TensorShapeUtils::IsVector](#static_bool_tensorflow_TensorShapeUtils_IsVector)
@@ -18,63 +18,63 @@ Static helper routines for TensorShape . Includes a few common predicates on a t
* [static string tensorflow::TensorShapeUtils::ShapeListString](#static_string_tensorflow_TensorShapeUtils_ShapeListString)
* [static bool tensorflow::TensorShapeUtils::StartsWith](#static_bool_tensorflow_TensorShapeUtils_StartsWith)
-##Member Details
+##Member Details <a class="md-anchor" id="AUTOGENERATED-member-details"></a>
-#### static bool tensorflow::TensorShapeUtils::IsScalar(const TensorShape &amp;shape) {#static_bool_tensorflow_TensorShapeUtils_IsScalar}
+#### static bool tensorflow::TensorShapeUtils::IsScalar(const TensorShape &amp;shape) <a class="md-anchor" id="static_bool_tensorflow_TensorShapeUtils_IsScalar"></a>
-#### static bool tensorflow::TensorShapeUtils::IsVector(const TensorShape &amp;shape) {#static_bool_tensorflow_TensorShapeUtils_IsVector}
+#### static bool tensorflow::TensorShapeUtils::IsVector(const TensorShape &amp;shape) <a class="md-anchor" id="static_bool_tensorflow_TensorShapeUtils_IsVector"></a>
-#### static bool tensorflow::TensorShapeUtils::IsLegacyScalar(const TensorShape &amp;shape) {#static_bool_tensorflow_TensorShapeUtils_IsLegacyScalar}
+#### static bool tensorflow::TensorShapeUtils::IsLegacyScalar(const TensorShape &amp;shape) <a class="md-anchor" id="static_bool_tensorflow_TensorShapeUtils_IsLegacyScalar"></a>
-#### static bool tensorflow::TensorShapeUtils::IsLegacyVector(const TensorShape &amp;shape) {#static_bool_tensorflow_TensorShapeUtils_IsLegacyVector}
+#### static bool tensorflow::TensorShapeUtils::IsLegacyVector(const TensorShape &amp;shape) <a class="md-anchor" id="static_bool_tensorflow_TensorShapeUtils_IsLegacyVector"></a>
-#### static bool tensorflow::TensorShapeUtils::IsVectorOrHigher(const TensorShape &amp;shape) {#static_bool_tensorflow_TensorShapeUtils_IsVectorOrHigher}
+#### static bool tensorflow::TensorShapeUtils::IsVectorOrHigher(const TensorShape &amp;shape) <a class="md-anchor" id="static_bool_tensorflow_TensorShapeUtils_IsVectorOrHigher"></a>
-#### static bool tensorflow::TensorShapeUtils::IsMatrix(const TensorShape &amp;shape) {#static_bool_tensorflow_TensorShapeUtils_IsMatrix}
+#### static bool tensorflow::TensorShapeUtils::IsMatrix(const TensorShape &amp;shape) <a class="md-anchor" id="static_bool_tensorflow_TensorShapeUtils_IsMatrix"></a>
-#### static bool tensorflow::TensorShapeUtils::IsMatrixOrHigher(const TensorShape &amp;shape) {#static_bool_tensorflow_TensorShapeUtils_IsMatrixOrHigher}
+#### static bool tensorflow::TensorShapeUtils::IsMatrixOrHigher(const TensorShape &amp;shape) <a class="md-anchor" id="static_bool_tensorflow_TensorShapeUtils_IsMatrixOrHigher"></a>
-#### static TensorShape tensorflow::TensorShapeUtils::MakeShape(const T *dims, int n) {#static_TensorShape_tensorflow_TensorShapeUtils_MakeShape}
+#### static TensorShape tensorflow::TensorShapeUtils::MakeShape(const T *dims, int n) <a class="md-anchor" id="static_TensorShape_tensorflow_TensorShapeUtils_MakeShape"></a>
Returns a TensorShape whose dimensions are dims[0], dims[1], ..., dims[n-1].
-#### static string tensorflow::TensorShapeUtils::ShapeListString(const gtl::ArraySlice&lt; TensorShape &gt; &amp;shapes) {#static_string_tensorflow_TensorShapeUtils_ShapeListString}
+#### static string tensorflow::TensorShapeUtils::ShapeListString(const gtl::ArraySlice&lt; TensorShape &gt; &amp;shapes) <a class="md-anchor" id="static_string_tensorflow_TensorShapeUtils_ShapeListString"></a>
-#### static bool tensorflow::TensorShapeUtils::StartsWith(const TensorShape &amp;shape0, const TensorShape &amp;shape1) {#static_bool_tensorflow_TensorShapeUtils_StartsWith}
+#### static bool tensorflow::TensorShapeUtils::StartsWith(const TensorShape &amp;shape0, const TensorShape &amp;shape1) <a class="md-anchor" id="static_bool_tensorflow_TensorShapeUtils_StartsWith"></a>
diff --git a/tensorflow/g3doc/api_docs/cc/ClassThread.md b/tensorflow/g3doc/api_docs/cc/ClassThread.md
index 32bb286206..9ae21780df 100644
--- a/tensorflow/g3doc/api_docs/cc/ClassThread.md
+++ b/tensorflow/g3doc/api_docs/cc/ClassThread.md
@@ -1,24 +1,24 @@
-#Class tensorflow::Thread
+#Class tensorflow::Thread <a class="md-anchor" id="AUTOGENERATED-class-tensorflow--thread"></a>
-##Member Summary
+##Member Summary <a class="md-anchor" id="AUTOGENERATED-member-summary"></a>
* [tensorflow::Thread::Thread](#tensorflow_Thread_Thread)
* [virtual tensorflow::Thread::~Thread](#virtual_tensorflow_Thread_Thread)
* Blocks until the thread of control stops running.
-##Member Details
+##Member Details <a class="md-anchor" id="AUTOGENERATED-member-details"></a>
-#### tensorflow::Thread::Thread() {#tensorflow_Thread_Thread}
+#### tensorflow::Thread::Thread() <a class="md-anchor" id="tensorflow_Thread_Thread"></a>
-#### virtual tensorflow::Thread::~Thread() {#virtual_tensorflow_Thread_Thread}
+#### virtual tensorflow::Thread::~Thread() <a class="md-anchor" id="virtual_tensorflow_Thread_Thread"></a>
Blocks until the thread of control stops running.
diff --git a/tensorflow/g3doc/api_docs/cc/ClassWritableFile.md b/tensorflow/g3doc/api_docs/cc/ClassWritableFile.md
index e1b2132b4f..b9923cfe56 100644
--- a/tensorflow/g3doc/api_docs/cc/ClassWritableFile.md
+++ b/tensorflow/g3doc/api_docs/cc/ClassWritableFile.md
@@ -1,10 +1,10 @@
-#Class tensorflow::WritableFile
+#Class tensorflow::WritableFile <a class="md-anchor" id="AUTOGENERATED-class-tensorflow--writablefile"></a>
A file abstraction for sequential writing.
The implementation must provide buffering since callers may append small fragments at a time to the file.
-##Member Summary
+##Member Summary <a class="md-anchor" id="AUTOGENERATED-member-summary"></a>
* [tensorflow::WritableFile::WritableFile](#tensorflow_WritableFile_WritableFile)
* [virtual tensorflow::WritableFile::~WritableFile](#virtual_tensorflow_WritableFile_WritableFile)
@@ -13,39 +13,39 @@ The implementation must provide buffering since callers may append small fragmen
* [virtual Status tensorflow::WritableFile::Flush](#virtual_Status_tensorflow_WritableFile_Flush)
* [virtual Status tensorflow::WritableFile::Sync](#virtual_Status_tensorflow_WritableFile_Sync)
-##Member Details
+##Member Details <a class="md-anchor" id="AUTOGENERATED-member-details"></a>
-#### tensorflow::WritableFile::WritableFile() {#tensorflow_WritableFile_WritableFile}
+#### tensorflow::WritableFile::WritableFile() <a class="md-anchor" id="tensorflow_WritableFile_WritableFile"></a>
-#### virtual tensorflow::WritableFile::~WritableFile() {#virtual_tensorflow_WritableFile_WritableFile}
+#### virtual tensorflow::WritableFile::~WritableFile() <a class="md-anchor" id="virtual_tensorflow_WritableFile_WritableFile"></a>
-#### virtual Status tensorflow::WritableFile::Append(const StringPiece &amp;data)=0 {#virtual_Status_tensorflow_WritableFile_Append}
+#### virtual Status tensorflow::WritableFile::Append(const StringPiece &amp;data)=0 <a class="md-anchor" id="virtual_Status_tensorflow_WritableFile_Append"></a>
-#### virtual Status tensorflow::WritableFile::Close()=0 {#virtual_Status_tensorflow_WritableFile_Close}
+#### virtual Status tensorflow::WritableFile::Close()=0 <a class="md-anchor" id="virtual_Status_tensorflow_WritableFile_Close"></a>
-#### virtual Status tensorflow::WritableFile::Flush()=0 {#virtual_Status_tensorflow_WritableFile_Flush}
+#### virtual Status tensorflow::WritableFile::Flush()=0 <a class="md-anchor" id="virtual_Status_tensorflow_WritableFile_Flush"></a>
-#### virtual Status tensorflow::WritableFile::Sync()=0 {#virtual_Status_tensorflow_WritableFile_Sync}
+#### virtual Status tensorflow::WritableFile::Sync()=0 <a class="md-anchor" id="virtual_Status_tensorflow_WritableFile_Sync"></a>
diff --git a/tensorflow/g3doc/api_docs/cc/StructSessionOptions.md b/tensorflow/g3doc/api_docs/cc/StructSessionOptions.md
index 99044997c9..12f4ed9101 100644
--- a/tensorflow/g3doc/api_docs/cc/StructSessionOptions.md
+++ b/tensorflow/g3doc/api_docs/cc/StructSessionOptions.md
@@ -1,10 +1,10 @@
-#Struct tensorflow::SessionOptions
+#Struct tensorflow::SessionOptions <a class="md-anchor" id="AUTOGENERATED-struct-tensorflow--sessionoptions"></a>
Configuration information for a Session.
-##Member Summary
+##Member Summary <a class="md-anchor" id="AUTOGENERATED-member-summary"></a>
* [Env* tensorflow::SessionOptions::env](#Env_tensorflow_SessionOptions_env)
* The environment to use.
@@ -14,15 +14,15 @@ Configuration information for a Session .
* Configuration options.
* [tensorflow::SessionOptions::SessionOptions](#tensorflow_SessionOptions_SessionOptions)
-##Member Details
+##Member Details <a class="md-anchor" id="AUTOGENERATED-member-details"></a>
-#### Env* tensorflow::SessionOptions::env {#Env_tensorflow_SessionOptions_env}
+#### Env* tensorflow::SessionOptions::env <a class="md-anchor" id="Env_tensorflow_SessionOptions_env"></a>
The environment to use.
-#### string tensorflow::SessionOptions::target {#string_tensorflow_SessionOptions_target}
+#### string tensorflow::SessionOptions::target <a class="md-anchor" id="string_tensorflow_SessionOptions_target"></a>
The TensorFlow runtime to connect to.
@@ -36,13 +36,13 @@ Upon creation, a single session affines itself to one of the remote processes, w
If the session disconnects from the remote process during its lifetime, session calls may fail immediately.
-#### ConfigProto tensorflow::SessionOptions::config {#ConfigProto_tensorflow_SessionOptions_config}
+#### ConfigProto tensorflow::SessionOptions::config <a class="md-anchor" id="ConfigProto_tensorflow_SessionOptions_config"></a>
Configuration options.
-#### tensorflow::SessionOptions::SessionOptions() {#tensorflow_SessionOptions_SessionOptions}
+#### tensorflow::SessionOptions::SessionOptions() <a class="md-anchor" id="tensorflow_SessionOptions_SessionOptions"></a>
diff --git a/tensorflow/g3doc/api_docs/cc/StructState.md b/tensorflow/g3doc/api_docs/cc/StructState.md
index d031b50370..5772ef46c9 100644
--- a/tensorflow/g3doc/api_docs/cc/StructState.md
+++ b/tensorflow/g3doc/api_docs/cc/StructState.md
@@ -1,23 +1,23 @@
-#Struct tensorflow::Status::State
+#Struct tensorflow::Status::State <a class="md-anchor" id="AUTOGENERATED-struct-tensorflow--status--state"></a>
-##Member Summary
+##Member Summary <a class="md-anchor" id="AUTOGENERATED-member-summary"></a>
* [tensorflow::error::Code tensorflow::Status::State::code](#tensorflow_error_Code_tensorflow_Status_State_code)
* [string tensorflow::Status::State::msg](#string_tensorflow_Status_State_msg)
-##Member Details
+##Member Details <a class="md-anchor" id="AUTOGENERATED-member-details"></a>
-#### tensorflow::error::Code tensorflow::Status::State::code {#tensorflow_error_Code_tensorflow_Status_State_code}
+#### tensorflow::error::Code tensorflow::Status::State::code <a class="md-anchor" id="tensorflow_error_Code_tensorflow_Status_State_code"></a>
-#### string tensorflow::Status::State::msg {#string_tensorflow_Status_State_msg}
+#### string tensorflow::Status::State::msg <a class="md-anchor" id="string_tensorflow_Status_State_msg"></a>
diff --git a/tensorflow/g3doc/api_docs/cc/StructTensorShapeDim.md b/tensorflow/g3doc/api_docs/cc/StructTensorShapeDim.md
index 711743ac85..f177d3c8a8 100644
--- a/tensorflow/g3doc/api_docs/cc/StructTensorShapeDim.md
+++ b/tensorflow/g3doc/api_docs/cc/StructTensorShapeDim.md
@@ -1,23 +1,23 @@
-#Struct tensorflow::TensorShapeDim
+#Struct tensorflow::TensorShapeDim <a class="md-anchor" id="AUTOGENERATED-struct-tensorflow--tensorshapedim"></a>
-##Member Summary
+##Member Summary <a class="md-anchor" id="AUTOGENERATED-member-summary"></a>
* [int tensorflow::TensorShapeDim::size](#int_tensorflow_TensorShapeDim_size)
* [tensorflow::TensorShapeDim::TensorShapeDim](#tensorflow_TensorShapeDim_TensorShapeDim)
-##Member Details
+##Member Details <a class="md-anchor" id="AUTOGENERATED-member-details"></a>
-#### int tensorflow::TensorShapeDim::size {#int_tensorflow_TensorShapeDim_size}
+#### int tensorflow::TensorShapeDim::size <a class="md-anchor" id="int_tensorflow_TensorShapeDim_size"></a>
-#### tensorflow::TensorShapeDim::TensorShapeDim(int64 s) {#tensorflow_TensorShapeDim_TensorShapeDim}
+#### tensorflow::TensorShapeDim::TensorShapeDim(int64 s) <a class="md-anchor" id="tensorflow_TensorShapeDim_TensorShapeDim"></a>
diff --git a/tensorflow/g3doc/api_docs/cc/StructThreadOptions.md b/tensorflow/g3doc/api_docs/cc/StructThreadOptions.md
index b568855d6e..20dacebab2 100644
--- a/tensorflow/g3doc/api_docs/cc/StructThreadOptions.md
+++ b/tensorflow/g3doc/api_docs/cc/StructThreadOptions.md
@@ -1,25 +1,25 @@
-#Struct tensorflow::ThreadOptions
+#Struct tensorflow::ThreadOptions <a class="md-anchor" id="AUTOGENERATED-struct-tensorflow--threadoptions"></a>
Options to configure a Thread.
Note that the options are all hints, and the underlying implementation may choose to ignore them.
-##Member Summary
+##Member Summary <a class="md-anchor" id="AUTOGENERATED-member-summary"></a>
* [size_t tensorflow::ThreadOptions::stack_size](#size_t_tensorflow_ThreadOptions_stack_size)
* Thread stack size to use (in bytes).
* [size_t tensorflow::ThreadOptions::guard_size](#size_t_tensorflow_ThreadOptions_guard_size)
* Guard area size to use near thread stacks (in bytes).
-##Member Details
+##Member Details <a class="md-anchor" id="AUTOGENERATED-member-details"></a>
-#### size_t tensorflow::ThreadOptions::stack_size {#size_t_tensorflow_ThreadOptions_stack_size}
+#### size_t tensorflow::ThreadOptions::stack_size <a class="md-anchor" id="size_t_tensorflow_ThreadOptions_stack_size"></a>
Thread stack size to use (in bytes).
-#### size_t tensorflow::ThreadOptions::guard_size {#size_t_tensorflow_ThreadOptions_guard_size}
+#### size_t tensorflow::ThreadOptions::guard_size <a class="md-anchor" id="size_t_tensorflow_ThreadOptions_guard_size"></a>
Guard area size to use near thread stacks (in bytes).
diff --git a/tensorflow/g3doc/api_docs/cc/index.md b/tensorflow/g3doc/api_docs/cc/index.md
index 82aafc7486..9a3a75534b 100644
--- a/tensorflow/g3doc/api_docs/cc/index.md
+++ b/tensorflow/g3doc/api_docs/cc/index.md
@@ -1,4 +1,4 @@
-# TensorFlow C++ Session API reference documentation
+# TensorFlow C++ Session API reference documentation <a class="md-anchor" id="AUTOGENERATED-tensorflow-c---session-api-reference-documentation"></a>
TensorFlow's public C++ API includes only the API for executing graphs, as of
version 0.5. To control the execution of a graph from C++:
@@ -24,7 +24,7 @@ write the graph to a file.
1. Run the graph with a call to `session->Run()`
-##Classes
+##Classes <a class="md-anchor" id="AUTOGENERATED-classes"></a>
* [tensorflow::Env](ClassEnv.md)
* [tensorflow::EnvWrapper](ClassEnvWrapper.md)
@@ -39,7 +39,7 @@ write the graph to a file.
* [tensorflow::Thread](ClassThread.md)
* [tensorflow::WritableFile](ClassWritableFile.md)
-##Structs
+##Structs <a class="md-anchor" id="AUTOGENERATED-structs"></a>
* [tensorflow::SessionOptions](StructSessionOptions.md)
* [tensorflow::Status::State](StructState.md)
diff --git a/tensorflow/g3doc/api_docs/index.md b/tensorflow/g3doc/api_docs/index.md
index a4a5b50b79..863cf3f87f 100644
--- a/tensorflow/g3doc/api_docs/index.md
+++ b/tensorflow/g3doc/api_docs/index.md
@@ -1,4 +1,4 @@
-# Overview
+# Overview <a class="md-anchor" id="AUTOGENERATED-overview"></a>
TensorFlow has APIs available in several languages both for constructing and
executing a TensorFlow graph. The Python API is at present the most complete
diff --git a/tensorflow/g3doc/api_docs/python/array_ops.md b/tensorflow/g3doc/api_docs/python/array_ops.md
index 282e0c6b60..9d68da7caa 100644
--- a/tensorflow/g3doc/api_docs/python/array_ops.md
+++ b/tensorflow/g3doc/api_docs/python/array_ops.md
@@ -1,12 +1,13 @@
<!-- This file is machine generated: DO NOT EDIT! -->
-# Tensor Transformations
+# Tensor Transformations <a class="md-anchor" id="AUTOGENERATED-tensor-transformations"></a>
Note: Functions taking `Tensor` arguments can also take anything
accepted by [`tf.convert_to_tensor`](framework.md#convert_to_tensor).
<!-- TOC-BEGIN This section is generated by neural network: DO NOT EDIT! -->
## Contents
+### [Tensor Transformations](#AUTOGENERATED-tensor-transformations)
* [Casting](#AUTOGENERATED-casting)
* [tf.string_to_number(string_tensor, out_type=None, name=None)](#string_to_number)
* [tf.to_double(x, name='ToDouble')](#to_double)
@@ -40,21 +41,21 @@ accepted by [`tf.convert_to_tensor`](framework.md#convert_to_tensor).
<!-- TOC-END This section was generated by neural network, THANKS FOR READING! -->
-## Casting <div class="md-anchor" id="AUTOGENERATED-casting">{#AUTOGENERATED-casting}</div>
+## Casting <a class="md-anchor" id="AUTOGENERATED-casting"></a>
TensorFlow provides several operations that you can use to cast tensor data
types in your graph.
- - -
-### tf.string_to_number(string_tensor, out_type=None, name=None) <div class="md-anchor" id="string_to_number">{#string_to_number}</div>
+### tf.string_to_number(string_tensor, out_type=None, name=None) <a class="md-anchor" id="string_to_number"></a>
Converts each string in the input Tensor to the specified numeric type.
(Note that int32 overflow results in an error while float overflow
results in a rounded value.)
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>string_tensor</b>: A `Tensor` of type `string`.
@@ -62,7 +63,7 @@ results in a rounded value.)
The numeric type to interpret each string in string_tensor as.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor` of type `out_type`.
A Tensor of the same shape as the input string_tensor.
@@ -70,21 +71,21 @@ results in a rounded value.)
- - -
-### tf.to_double(x, name='ToDouble') <div class="md-anchor" id="to_double">{#to_double}</div>
+### tf.to_double(x, name='ToDouble') <a class="md-anchor" id="to_double"></a>
Casts a tensor to type `float64`.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>x</b>: A `Tensor` or `SparseTensor`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor` or `SparseTensor` with same shape as `x` with type `float64`.
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>TypeError</b>: If `x` cannot be cast to `float64`.
@@ -92,21 +93,21 @@ Casts a tensor to type `float64`.
- - -
-### tf.to_float(x, name='ToFloat') <div class="md-anchor" id="to_float">{#to_float}</div>
+### tf.to_float(x, name='ToFloat') <a class="md-anchor" id="to_float"></a>
Casts a tensor to type `float32`.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>x</b>: A `Tensor` or `SparseTensor`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor` or `SparseTensor` with same shape as `x` with type `float32`.
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>TypeError</b>: If `x` cannot be cast to `float32`.
@@ -114,21 +115,21 @@ Casts a tensor to type `float32`.
- - -
-### tf.to_bfloat16(x, name='ToBFloat16') <div class="md-anchor" id="to_bfloat16">{#to_bfloat16}</div>
+### tf.to_bfloat16(x, name='ToBFloat16') <a class="md-anchor" id="to_bfloat16"></a>
Casts a tensor to type `bfloat16`.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>x</b>: A `Tensor` or `SparseTensor`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor` or `SparseTensor` with same shape as `x` with type `bfloat16`.
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>TypeError</b>: If `x` cannot be cast to `bfloat16`.
@@ -136,21 +137,21 @@ Casts a tensor to type `bfloat16`.
- - -
-### tf.to_int32(x, name='ToInt32') <div class="md-anchor" id="to_int32">{#to_int32}</div>
+### tf.to_int32(x, name='ToInt32') <a class="md-anchor" id="to_int32"></a>
Casts a tensor to type `int32`.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>x</b>: A `Tensor` or `SparseTensor`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor` or `SparseTensor` with same shape as `x` with type `int32`.
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>TypeError</b>: If `x` cannot be cast to `int32`.
@@ -158,21 +159,21 @@ Casts a tensor to type `int32`.
- - -
-### tf.to_int64(x, name='ToInt64') <div class="md-anchor" id="to_int64">{#to_int64}</div>
+### tf.to_int64(x, name='ToInt64') <a class="md-anchor" id="to_int64"></a>
Casts a tensor to type `int64`.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>x</b>: A `Tensor` or `SparseTensor`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor` or `SparseTensor` with same shape as `x` with type `int64`.
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>TypeError</b>: If `x` cannot be cast to `int64`.
@@ -180,7 +181,7 @@ Casts a tensor to type `int64`.
- - -
-### tf.cast(x, dtype, name=None) <div class="md-anchor" id="cast">{#cast}</div>
+### tf.cast(x, dtype, name=None) <a class="md-anchor" id="cast"></a>
Casts a tensor to a new type.
@@ -194,32 +195,32 @@ For example:
tf.cast(a, tf.int32) ==> [1, 2] # dtype=tf.int32
```
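As a rough NumPy analogy (not the TensorFlow implementation), the truncating cast above behaves like `ndarray.astype`:

```python
import numpy as np

a = np.array([1.8, 2.2], dtype=np.float32)
b = a.astype(np.int32)  # fractional part is truncated, as in the tf.cast example
print(b.tolist())  # [1, 2]
```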
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>x</b>: A `Tensor` or `SparseTensor`.
* <b>dtype</b>: The destination type.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor` or `SparseTensor` with same shape as `x`.
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>TypeError</b>: If `x` cannot be cast to the `dtype`.
-## Shapes and Shaping <div class="md-anchor" id="AUTOGENERATED-shapes-and-shaping">{#AUTOGENERATED-shapes-and-shaping}</div>
+## Shapes and Shaping <a class="md-anchor" id="AUTOGENERATED-shapes-and-shaping"></a>
TensorFlow provides several operations that you can use to determine the shape
of a tensor and change the shape of a tensor.
- - -
-### tf.shape(input, name=None) <div class="md-anchor" id="shape">{#shape}</div>
+### tf.shape(input, name=None) <a class="md-anchor" id="shape"></a>
Returns the shape of a tensor.
@@ -232,20 +233,20 @@ For example:
shape(t) ==> [2, 2, 3]
```
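The same shape query can be sketched in NumPy (an analogy only, not the op itself):

```python
import numpy as np

t = np.array([[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]])
print(list(t.shape))  # [2, 2, 3], matching shape(t) above
```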
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>input</b>: A `Tensor`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor` of type `int32`.
- - -
-### tf.size(input, name=None) <div class="md-anchor" id="size">{#size}</div>
+### tf.size(input, name=None) <a class="md-anchor" id="size"></a>
Returns the size of a tensor.
@@ -259,20 +260,20 @@ For example:
size(t) ==> 12
```
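As an illustrative NumPy analogy, the size is simply the element count:

```python
import numpy as np

t = np.array([[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]])
print(t.size)  # 12, matching size(t) above
```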
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>input</b>: A `Tensor`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor` of type `int32`.
- - -
-### tf.rank(input, name=None) <div class="md-anchor" id="rank">{#rank}</div>
+### tf.rank(input, name=None) <a class="md-anchor" id="rank"></a>
Returns the rank of a tensor.
@@ -290,20 +291,20 @@ rank(t) ==> 3
of a tensor is the number of indices required to uniquely select each element
of the tensor. Rank is also known as "order", "degree", or "ndims."
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>input</b>: A `Tensor`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor` of type `int32`.
- - -
-### tf.reshape(tensor, shape, name=None) <div class="md-anchor" id="reshape">{#reshape}</div>
+### tf.reshape(tensor, shape, name=None) <a class="md-anchor" id="reshape"></a>
Reshapes a tensor.
@@ -343,21 +344,21 @@ reshape(t, [2, 4]) ==> [[1, 1, 2, 2]
reshape(t, [-1]) ==> [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5, 5, 6, 6, 6]
```
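The reshape semantics above, including the `-1` wildcard that infers one dimension, mirror NumPy's `reshape` (an analogy, not the TensorFlow kernel):

```python
import numpy as np

t = np.arange(1, 10)        # [1..9]
m = t.reshape(3, 3)         # explicit target shape
flat = m.reshape(-1)        # -1 infers the remaining dimension
print(m.tolist())           # [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
print(flat.tolist())        # [1, 2, 3, 4, 5, 6, 7, 8, 9]
```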
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>tensor</b>: A `Tensor`.
* <b>shape</b>: A `Tensor` of type `int32`. Defines the shape of the output tensor.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor`. Has the same type as `tensor`.
- - -
-### tf.squeeze(input, squeeze_dims=None, name=None) <div class="md-anchor" id="squeeze">{#squeeze}</div>
+### tf.squeeze(input, squeeze_dims=None, name=None) <a class="md-anchor" id="squeeze"></a>
Removes dimensions of size 1 from the shape of a tensor.
@@ -380,7 +381,7 @@ Or, to remove specific size 1 dimensions:
shape(squeeze(t, [2, 4])) ==> [1, 2, 3, 1]
```
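A NumPy sketch of the same behavior (illustrative only), using the shape from the example above:

```python
import numpy as np

t = np.zeros((1, 2, 1, 3, 1, 1))
print(np.squeeze(t).shape)               # (2, 3): all size-1 dims removed
print(np.squeeze(t, axis=(2, 4)).shape)  # (1, 2, 3, 1): only dims 2 and 4 removed
```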
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>input</b>: A `Tensor`. The `input` to squeeze.
@@ -389,7 +390,7 @@ shape(squeeze(t, [2, 4])) ==> [1, 2, 3, 1]
index starts at 0. It is an error to squeeze a dimension that is not 1.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor`. Has the same type as `input`.
Contains the same data as `input`, but has one or more dimensions of
@@ -398,7 +399,7 @@ shape(squeeze(t, [2, 4])) ==> [1, 2, 3, 1]
- - -
-### tf.expand_dims(input, dim, name=None) <div class="md-anchor" id="expand_dims">{#expand_dims}</div>
+### tf.expand_dims(input, dim, name=None) <a class="md-anchor" id="expand_dims"></a>
Inserts a dimension of 1 into a tensor's shape.
@@ -433,7 +434,7 @@ This operation requires that:
This operation is related to `squeeze()`, which removes dimensions of
size 1.
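The relationship to `squeeze()` can be sketched with NumPy's `expand_dims` (an analogy, not the TensorFlow op):

```python
import numpy as np

t = np.zeros((2,))
print(np.expand_dims(t, 0).shape)  # (1, 2): new leading dimension
print(np.expand_dims(t, 1).shape)  # (2, 1): new trailing dimension
```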
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>input</b>: A `Tensor`.
@@ -442,7 +443,7 @@ size 1.
expand the shape of `input`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor`. Has the same type as `input`.
Contains the same data as `input`, but its shape has an additional
@@ -450,14 +451,14 @@ size 1.
-## Slicing and Joining <div class="md-anchor" id="AUTOGENERATED-slicing-and-joining">{#AUTOGENERATED-slicing-and-joining}</div>
+## Slicing and Joining <a class="md-anchor" id="AUTOGENERATED-slicing-and-joining"></a>
TensorFlow provides several operations to slice or extract parts of a tensor,
or join multiple tensors together.
- - -
-### tf.slice(input_, begin, size, name=None) <div class="md-anchor" id="slice">{#slice}</div>
+### tf.slice(input_, begin, size, name=None) <a class="md-anchor" id="slice"></a>
Extracts a slice from a tensor.
@@ -492,7 +493,7 @@ tf.slice(input, [1, 0, 0], [2, 1, 3]) ==> [[[3, 3, 3]],
[[5, 5, 5]]]
```
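The begin/size convention above corresponds to ordinary NumPy basic slicing, where each slice runs from `begin[i]` to `begin[i] + size[i]` (illustration only):

```python
import numpy as np

t = np.arange(3 * 2 * 3).reshape(3, 2, 3)
s = t[1:3, 0:1, 0:3]  # begin = [1, 0, 0], size = [2, 1, 3]
print(s.shape)  # (2, 1, 3)
```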
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>input_</b>: A `Tensor`.
@@ -500,14 +501,14 @@ tf.slice(input, [1, 0, 0], [2, 1, 3]) ==> [[[3, 3, 3]],
* <b>size</b>: An `int32` or `int64` `Tensor`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor` of the same type as `input`.
- - -
-### tf.split(split_dim, num_split, value, name='split') <div class="md-anchor" id="split">{#split}</div>
+### tf.split(split_dim, num_split, value, name='split') <a class="md-anchor" id="split"></a>
Splits a tensor into `num_split` tensors along one dimension.
@@ -523,7 +524,7 @@ split0, split1, split2 = tf.split(1, 3, value)
tf.shape(split0) ==> [5, 10]
```
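A NumPy analogy for the example above (not the TensorFlow implementation):

```python
import numpy as np

value = np.zeros((5, 30))
parts = np.split(value, 3, axis=1)  # three equal pieces along dimension 1
print([p.shape for p in parts])  # [(5, 10), (5, 10), (5, 10)]
```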
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>split_dim</b>: A 0-D `int32` `Tensor`. The dimension along which to split.
@@ -532,14 +533,14 @@ tf.shape(split0) ==> [5, 10]
* <b>value</b>: The `Tensor` to split.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
`num_split` `Tensor` objects resulting from splitting `value`.
- - -
-### tf.tile(input, multiples, name=None) <div class="md-anchor" id="tile">{#tile}</div>
+### tf.tile(input, multiples, name=None) <a class="md-anchor" id="tile"></a>
Constructs a tensor by tiling a given tensor.
@@ -549,7 +550,7 @@ and the values of `input` are replicated `multiples[i]` times along the 'i'th
dimension. For example, tiling `[a b c d]` by `[2]` produces
`[a b c d a b c d]`.
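The `[a b c d]` example can be reproduced with NumPy's `tile` (an analogy only):

```python
import numpy as np

t = np.array(['a', 'b', 'c', 'd'])
tiled = np.tile(t, 2)  # replicate twice along the single dimension
print(tiled.tolist())  # ['a', 'b', 'c', 'd', 'a', 'b', 'c', 'd']
```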
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>input</b>: A `Tensor`. 1-D or higher.
@@ -557,14 +558,14 @@ dimension. For example, tiling `[a b c d]` by `[2]` produces
1-D. Length must be the same as the number of dimensions in `input`
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor`. Has the same type as `input`.
- - -
-### tf.pad(input, paddings, name=None) <div class="md-anchor" id="pad">{#pad}</div>
+### tf.pad(input, paddings, name=None) <a class="md-anchor" id="pad"></a>
Pads a tensor with zeros.
@@ -592,21 +593,21 @@ pad(t, paddings) ==> [[0, 0, 0, 0, 0]
[0, 0, 0, 0, 0]]
```
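The `paddings` argument's per-dimension `[before, after]` pairs behave like NumPy's `np.pad` with zero-valued constant padding (an illustration, not the op itself):

```python
import numpy as np

t = np.array([[1, 1], [2, 2]])
padded = np.pad(t, [(1, 1), (2, 2)], mode='constant')  # zeros around each dim
print(padded.shape)  # (4, 6): 2+1+1 rows, 2+2+2 columns
```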
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>input</b>: A `Tensor`.
* <b>paddings</b>: A `Tensor` of type `int32`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor`. Has the same type as `input`.
- - -
-### tf.concat(concat_dim, values, name='concat') <div class="md-anchor" id="concat">{#concat}</div>
+### tf.concat(concat_dim, values, name='concat') <a class="md-anchor" id="concat"></a>
Concatenates tensors along one dimension.
@@ -640,21 +641,21 @@ tf.shape(tf.concat(0, [t3, t4])) ==> [4, 3]
tf.shape(tf.concat(1, [t3, t4])) ==> [2, 6]
```
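A NumPy sketch of the shape arithmetic above (analogy only):

```python
import numpy as np

t3 = np.zeros((2, 3))
t4 = np.zeros((2, 3))
print(np.concatenate([t3, t4], axis=0).shape)  # (4, 3)
print(np.concatenate([t3, t4], axis=1).shape)  # (2, 6)
```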
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>concat_dim</b>: 0-D `int32` `Tensor`. Dimension along which to concatenate.
* <b>values</b>: A list of `Tensor` objects or a single `Tensor`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor` resulting from concatenation of the input tensors.
- - -
-### tf.pack(values, name='pack') <div class="md-anchor" id="pack">{#pack}</div>
+### tf.pack(values, name='pack') <a class="md-anchor" id="pack"></a>
Packs a list of rank-`R` tensors into one rank-`(R+1)` tensor.
@@ -666,13 +667,13 @@ This is the opposite of unpack. The numpy equivalent is
tf.pack([x, y, z]) = np.asarray([x, y, z])
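Using the stated numpy equivalent, the rank bump from `R` to `R+1` looks like this (illustrative values):

```python
import numpy as np

x = np.array([1, 4])
y = np.array([2, 5])
z = np.array([3, 6])
packed = np.asarray([x, y, z])  # three rank-1 tensors -> one rank-2 tensor
print(packed.shape)  # (3, 2)
```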
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>values</b>: A list of `Tensor` objects with the same shape and type.
* <b>name</b>: A name for this operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
* <b>output</b>: A packed `Tensor` with the same type as `values`.
@@ -680,7 +681,7 @@ This is the opposite of unpack. The numpy equivalent is
- - -
-### tf.unpack(value, num=None, name='unpack') <div class="md-anchor" id="unpack">{#unpack}</div>
+### tf.unpack(value, num=None, name='unpack') <a class="md-anchor" id="unpack"></a>
Unpacks the outer dimension of a rank-`R` tensor into rank-`(R-1)` tensors.
@@ -695,7 +696,7 @@ This is the opposite of pack. The numpy equivalent is
tf.unpack(x, n) = list(x)
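Using that numpy equivalent, unpacking splits off the outer dimension (illustrative values):

```python
import numpy as np

x = np.array([[1, 2], [3, 4], [5, 6]])
pieces = list(x)  # three rank-1 tensors from one rank-2 tensor
print(len(pieces), pieces[0].tolist())  # 3 [1, 2]
```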
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>value</b>: A rank `R > 0` `Tensor` to be unpacked.
@@ -703,11 +704,11 @@ This is the opposite of pack. The numpy equivalent is
`None` (the default).
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
The list of `Tensor` objects unpacked from `value`.
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>ValueError</b>: If `num` is unspecified and cannot be inferred.
@@ -715,7 +716,7 @@ This is the opposite of pack. The numpy equivalent is
- - -
-### tf.reverse_sequence(input, seq_lengths, seq_dim, name=None) <div class="md-anchor" id="reverse_sequence">{#reverse_sequence}</div>
+### tf.reverse_sequence(input, seq_lengths, seq_dim, name=None) <a class="md-anchor" id="reverse_sequence"></a>
Reverses variable length slices in dimension `seq_dim`.
@@ -749,7 +750,7 @@ output[2, 3:, :, ...] = input[2, 3:, :, ...]
output[3, 2:, :, ...] = input[3, 2:, :, ...]
```
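A hypothetical pure-Python sketch of the semantics above, for the common case where `seq_dim` is the second dimension of a list of rows (`reverse_sequence` here is an illustration, not the TensorFlow kernel):

```python
def reverse_sequence(rows, seq_lengths):
    # Reverse the first seq_lengths[i] entries of each row; leave the rest.
    return [row[:n][::-1] + row[n:] for row, n in zip(rows, seq_lengths)]

print(reverse_sequence([[1, 2, 3, 4]], [3]))  # [[3, 2, 1, 4]]
```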
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>input</b>: A `Tensor`. The input to reverse.
@@ -759,7 +760,7 @@ output[3, 2:, :, ...] = input[3, 2:, :, ...]
* <b>seq_dim</b>: An `int`. The dimension which is partially reversed.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor`. Has the same type as `input`.
The partially reversed input. It has the same shape as `input`.
@@ -767,7 +768,7 @@ output[3, 2:, :, ...] = input[3, 2:, :, ...]
- - -
-### tf.reverse(tensor, dims, name=None) <div class="md-anchor" id="reverse">{#reverse}</div>
+### tf.reverse(tensor, dims, name=None) <a class="md-anchor" id="reverse"></a>
Reverses specific dimensions of a tensor.
@@ -816,7 +817,7 @@ reverse(t, dims) ==> [[[[8, 9, 10, 11],
[12, 13, 14, 15]]]]
```
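The boolean `dims` mask can be sketched with NumPy's `flip` (a hypothetical helper, not the TensorFlow op):

```python
import numpy as np

def reverse(t, dims):
    # Flip each axis i for which dims[i] is True.
    for axis, flip in enumerate(dims):
        if flip:
            t = np.flip(t, axis=axis)
    return t

print(reverse(np.array([[1, 2], [3, 4]]), [False, True]).tolist())  # [[2, 1], [4, 3]]
```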
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>tensor</b>: A `Tensor`. Must be one of the following types: `uint8`, `int8`, `int32`, `bool`, `float32`, `float64`.
@@ -824,14 +825,14 @@ reverse(t, dims) ==> [[[[8, 9, 10, 11],
* <b>dims</b>: A `Tensor` of type `bool`. 1-D. The dimensions to reverse.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor`. Has the same type as `tensor`. The same shape as `tensor`.
- - -
-### tf.transpose(a, perm=None, name='transpose') <div class="md-anchor" id="transpose">{#transpose}</div>
+### tf.transpose(a, perm=None, name='transpose') <a class="md-anchor" id="transpose"></a>
Transposes `a`. Permutes the dimensions according to `perm`.
@@ -869,21 +870,21 @@ tf.transpose(b, perm=[0, 2, 1]) ==> [[[1 4]
[9 12]]]
```
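The `perm` argument matches NumPy's `axes` parameter (an analogy only):

```python
import numpy as np

b = np.arange(1, 13).reshape(2, 2, 3)
print(np.transpose(b, axes=(0, 2, 1)).shape)  # (2, 3, 2): dims 1 and 2 swapped
```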
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>a</b>: A `Tensor`.
* <b>perm</b>: A permutation of the dimensions of `a`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A transposed `Tensor`.
- - -
-### tf.gather(params, indices, name=None) <div class="md-anchor" id="gather">{#gather}</div>
+### tf.gather(params, indices, name=None) <a class="md-anchor" id="gather"></a>
Gather slices from `params` according to `indices`.
@@ -906,21 +907,21 @@ this operation will permute `params` accordingly.
<img style="width:100%" src="../images/Gather.png" alt>
</div>
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>params</b>: A `Tensor`.
* <b>indices</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor`. Has the same type as `params`.
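The rule `output[i] = params[indices[i]]` can be sketched in plain Python for 1-D indices (tf.gather also accepts higher-rank indices, producing output of shape `indices.shape + params.shape[1:]`):

```python
def gather(params, indices):
    # output[i] = params[indices[i]]; indices may repeat or reorder entries.
    return [params[i] for i in indices]

params = ['a', 'b', 'c', 'd']
print(gather(params, [3, 0, 0, 2]))  # ['d', 'a', 'a', 'c']
```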
- - -
-### tf.dynamic_partition(data, partitions, num_partitions, name=None) <div class="md-anchor" id="dynamic_partition">{#dynamic_partition}</div>
+### tf.dynamic_partition(data, partitions, num_partitions, name=None) <a class="md-anchor" id="dynamic_partition"></a>
Partitions `data` into `num_partitions` tensors using indices from `partitions`.
@@ -956,7 +957,7 @@ For example:
<img style="width:100%" src="../images/DynamicPartition.png" alt>
</div>
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>data</b>: A `Tensor`.
@@ -966,14 +967,14 @@ For example:
The number of partitions to output.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A list of `num_partitions` `Tensor` objects of the same type as `data`.
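The routing rule — element `i` of `data` goes to output list `partitions[i]`, preserving order — is easy to sketch in plain Python:

```python
def dynamic_partition(data, partitions, num_partitions):
    # outputs[p] collects data[i] (in original order) for every i
    # with partitions[i] == p.
    outputs = [[] for _ in range(num_partitions)]
    for value, p in zip(data, partitions):
        outputs[p].append(value)
    return outputs

print(dynamic_partition([10, 20, 30, 40, 50], [0, 0, 1, 1, 0], 2))
# [[10, 20, 50], [30, 40]]
```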
- - -
-### tf.dynamic_stitch(indices, data, name=None) <div class="md-anchor" id="dynamic_stitch">{#dynamic_stitch}</div>
+### tf.dynamic_stitch(indices, data, name=None) <a class="md-anchor" id="dynamic_stitch"></a>
Interleaves the values from the `data` tensors into a single tensor.
@@ -1015,14 +1016,14 @@ For example:
<img style="width:100%" src="../images/DynamicStitch.png" alt>
</div>
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>indices</b>: A list of at least 2 `Tensor` objects of type `int32`.
* <b>data</b>: A list with the same number of `Tensor` objects as `indices` of `Tensor` objects of the same type.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor`. Has the same type as `data`.
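dynamic_stitch is the inverse of dynamic_partition: `merged[indices[m][i]] = data[m][i]`. A flat-list sketch of that merge rule:

```python
def dynamic_stitch(indices, data):
    # merged[indices[m][i]] = data[m][i]; later tensors win on collisions.
    size = max(i for idx in indices for i in idx) + 1
    merged = [None] * size
    for idx, dat in zip(indices, data):
        for i, value in zip(idx, dat):
            merged[i] = value
    return merged

print(dynamic_stitch([[0, 2], [1, 3]], [['a', 'c'], ['b', 'd']]))
# ['a', 'b', 'c', 'd']
```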
diff --git a/tensorflow/g3doc/api_docs/python/client.md b/tensorflow/g3doc/api_docs/python/client.md
index 8db13549b3..3da41016a1 100644
--- a/tensorflow/g3doc/api_docs/python/client.md
+++ b/tensorflow/g3doc/api_docs/python/client.md
@@ -1,8 +1,9 @@
<!-- This file is machine generated: DO NOT EDIT! -->
-# Running Graphs
+# Running Graphs <a class="md-anchor" id="AUTOGENERATED-running-graphs"></a>
<!-- TOC-BEGIN This section is generated by neural network: DO NOT EDIT! -->
## Contents
+### [Running Graphs](#AUTOGENERATED-running-graphs)
* [Session management](#AUTOGENERATED-session-management)
* [class tf.Session](#Session)
* [class tf.InteractiveSession](#InteractiveSession)
@@ -34,11 +35,11 @@ This library contains classes for launching graphs and executing operations.
The [basic usage](../../get_started/index.md#basic-usage) guide has
examples of how a graph is launched in a [`tf.Session`](#Session).
-## Session management <div class="md-anchor" id="AUTOGENERATED-session-management">{#AUTOGENERATED-session-management}</div>
+## Session management <a class="md-anchor" id="AUTOGENERATED-session-management"></a>
- - -
-### class tf.Session <div class="md-anchor" id="Session">{#Session}</div>
+### class tf.Session <a class="md-anchor" id="Session"></a>
A class for running TensorFlow operations.
@@ -94,7 +95,7 @@ sess = tf.Session(config=tf.ConfigProto(allow_soft_placement=True,
- - -
-#### tf.Session.__init__(target='', graph=None, config=None) {#Session.__init__}
+#### tf.Session.__init__(target='', graph=None, config=None) <a class="md-anchor" id="Session.__init__"></a>
Creates a new TensorFlow session.
@@ -106,7 +107,7 @@ but each graph can be used in multiple sessions. In this case, it
is often clearer to pass the graph to be launched explicitly to
the session constructor.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>target</b>: (Optional.) The execution engine to connect to.
@@ -119,7 +120,7 @@ the session constructor.
- - -
-#### tf.Session.run(fetches, feed_dict=None) {#Session.run}
+#### tf.Session.run(fetches, feed_dict=None) <a class="md-anchor" id="Session.run"></a>
Runs the operations and evaluates the tensors in `fetches`.
@@ -158,7 +159,7 @@ one of the following types:
the value should be a
[`SparseTensorValue`](sparse_ops.md#SparseTensorValue).
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>fetches</b>: A single graph element, or a list of graph elements
@@ -166,12 +167,12 @@ one of the following types:
* <b>feed_dict</b>: A dictionary that maps graph elements to values
(described above).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
Either a single value if `fetches` is a single graph element, or
a list of values if `fetches` is a list (described above).
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>RuntimeError</b>: If this `Session` is in an invalid state (e.g. has been
@@ -183,13 +184,13 @@ one of the following types:
- - -
-#### tf.Session.close() {#Session.close}
+#### tf.Session.close() <a class="md-anchor" id="Session.close"></a>
Closes this session.
Calling this method frees all resources associated with the session.
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>RuntimeError</b>: If an error occurs while closing the session.
@@ -198,14 +199,14 @@ Calling this method frees all resources associated with the session.
- - -
-#### tf.Session.graph {#Session.graph}
+#### tf.Session.graph <a class="md-anchor" id="Session.graph"></a>
The graph that was launched in this session.
- - -
-#### tf.Session.as_default() {#Session.as_default}
+#### tf.Session.as_default() <a class="md-anchor" id="Session.as_default"></a>
Returns a context manager that makes this object the default session.
@@ -252,7 +253,7 @@ create a new thread, and wish to use the default session in that
thread, you must explicitly add a `with sess.as_default():` in that
thread's function.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A context manager using this session as the default session.
@@ -260,7 +261,7 @@ thread's function.
- - -
-### class tf.InteractiveSession <div class="md-anchor" id="InteractiveSession">{#InteractiveSession}</div>
+### class tf.InteractiveSession <a class="md-anchor" id="InteractiveSession"></a>
A TensorFlow `Session` for use in interactive contexts, such as a shell.
@@ -301,7 +302,7 @@ with tf.Session():
- - -
-#### tf.InteractiveSession.__init__(target='', graph=None) {#InteractiveSession.__init__}
+#### tf.InteractiveSession.__init__(target='', graph=None) <a class="md-anchor" id="InteractiveSession.__init__"></a>
Creates a new interactive TensorFlow session.
@@ -313,7 +314,7 @@ but each graph can be used in multiple sessions. In this case, it
is often clearer to pass the graph to be launched explicitly to
the session constructor.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>target</b>: (Optional.) The execution engine to connect to.
@@ -324,7 +325,7 @@ the session constructor.
- - -
-#### tf.InteractiveSession.close() {#InteractiveSession.close}
+#### tf.InteractiveSession.close() <a class="md-anchor" id="InteractiveSession.close"></a>
Closes an `InteractiveSession`.
@@ -333,7 +334,7 @@ Closes an `InteractiveSession`.
- - -
-### tf.get_default_session() <div class="md-anchor" id="get_default_session">{#get_default_session}</div>
+### tf.get_default_session() <a class="md-anchor" id="get_default_session"></a>
Returns the default session for the current thread.
@@ -345,17 +346,17 @@ create a new thread, and wish to use the default session in that
thread, you must explicitly add a `with sess.as_default():` in that
thread's function.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
The default `Session` being used in the current thread.
-## Error classes <div class="md-anchor" id="AUTOGENERATED-error-classes">{#AUTOGENERATED-error-classes}</div>
+## Error classes <a class="md-anchor" id="AUTOGENERATED-error-classes"></a>
- - -
-### class tf.OpError <div class="md-anchor" id="OpError">{#OpError}</div>
+### class tf.OpError <a class="md-anchor" id="OpError"></a>
A generic error that is raised when TensorFlow execution fails.
@@ -364,7 +365,7 @@ of `OpError` from the `tf.errors` module.
- - -
-#### tf.OpError.op {#OpError.op}
+#### tf.OpError.op <a class="md-anchor" id="OpError.op"></a>
The operation that failed, if known.
@@ -375,25 +376,25 @@ will return `None`, and you should instead use the
[`OpError.node_def`](#OpError.node_def) to discover information about the
op.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
The `Operation` that failed, or None.
- - -
-#### tf.OpError.node_def {#OpError.node_def}
+#### tf.OpError.node_def <a class="md-anchor" id="OpError.node_def"></a>
The `NodeDef` proto representing the op that failed.
-#### Other Methods
+#### Other Methods <a class="md-anchor" id="AUTOGENERATED-other-methods"></a>
- - -
-#### tf.OpError.__init__(node_def, op, message, error_code) {#OpError.__init__}
+#### tf.OpError.__init__(node_def, op, message, error_code) <a class="md-anchor" id="OpError.__init__"></a>
Creates a new OpError indicating that a particular op failed.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>node_def</b>: The graph_pb2.NodeDef proto representing the op that failed.
@@ -404,20 +405,20 @@ Creates a new OpError indicating that a particular op failed.
- - -
-#### tf.OpError.error_code {#OpError.error_code}
+#### tf.OpError.error_code <a class="md-anchor" id="OpError.error_code"></a>
The integer error code that describes the error.
- - -
-#### tf.OpError.message {#OpError.message}
+#### tf.OpError.message <a class="md-anchor" id="OpError.message"></a>
The error message that describes the error.
- - -
-### class tf.errors.CancelledError <div class="md-anchor" id="CancelledError">{#CancelledError}</div>
+### class tf.errors.CancelledError <a class="md-anchor" id="CancelledError"></a>
Raised when an operation or step is cancelled.
@@ -430,7 +431,7 @@ running such a long-running operation will fail by raising `CancelledError`.
- - -
-#### tf.errors.CancelledError.__init__(node_def, op, message) {#CancelledError.__init__}
+#### tf.errors.CancelledError.__init__(node_def, op, message) <a class="md-anchor" id="CancelledError.__init__"></a>
Creates a `CancelledError`.
@@ -438,7 +439,7 @@ Creates a `CancelledError`.
- - -
-### class tf.errors.UnknownError <div class="md-anchor" id="UnknownError">{#UnknownError}</div>
+### class tf.errors.UnknownError <a class="md-anchor" id="UnknownError"></a>
Unknown error.
@@ -450,7 +451,7 @@ error.
- - -
-#### tf.errors.UnknownError.__init__(node_def, op, message, error_code=2) {#UnknownError.__init__}
+#### tf.errors.UnknownError.__init__(node_def, op, message, error_code=2) <a class="md-anchor" id="UnknownError.__init__"></a>
Creates an `UnknownError`.
@@ -458,7 +459,7 @@ Creates an `UnknownError`.
- - -
-### class tf.errors.InvalidArgumentError <div class="md-anchor" id="InvalidArgumentError">{#InvalidArgumentError}</div>
+### class tf.errors.InvalidArgumentError <a class="md-anchor" id="InvalidArgumentError"></a>
Raised when an operation receives an invalid argument.
@@ -472,7 +473,7 @@ tensor.
- - -
-#### tf.errors.InvalidArgumentError.__init__(node_def, op, message) {#InvalidArgumentError.__init__}
+#### tf.errors.InvalidArgumentError.__init__(node_def, op, message) <a class="md-anchor" id="InvalidArgumentError.__init__"></a>
Creates an `InvalidArgumentError`.
@@ -480,7 +481,7 @@ Creates an `InvalidArgumentError`.
- - -
-### class tf.errors.DeadlineExceededError <div class="md-anchor" id="DeadlineExceededError">{#DeadlineExceededError}</div>
+### class tf.errors.DeadlineExceededError <a class="md-anchor" id="DeadlineExceededError"></a>
Raised when a deadline expires before an operation could complete.
@@ -488,7 +489,7 @@ This exception is not currently used.
- - -
-#### tf.errors.DeadlineExceededError.__init__(node_def, op, message) {#DeadlineExceededError.__init__}
+#### tf.errors.DeadlineExceededError.__init__(node_def, op, message) <a class="md-anchor" id="DeadlineExceededError.__init__"></a>
Creates a `DeadlineExceededError`.
@@ -496,7 +497,7 @@ Creates a `DeadlineExceededError`.
- - -
-### class tf.errors.NotFoundError <div class="md-anchor" id="NotFoundError">{#NotFoundError}</div>
+### class tf.errors.NotFoundError <a class="md-anchor" id="NotFoundError"></a>
Raised when a requested entity (e.g., a file or directory) was not found.
@@ -507,7 +508,7 @@ does not exist.
- - -
-#### tf.errors.NotFoundError.__init__(node_def, op, message) {#NotFoundError.__init__}
+#### tf.errors.NotFoundError.__init__(node_def, op, message) <a class="md-anchor" id="NotFoundError.__init__"></a>
Creates a `NotFoundError`.
@@ -515,7 +516,7 @@ Creates a `NotFoundError`.
- - -
-### class tf.errors.AlreadyExistsError <div class="md-anchor" id="AlreadyExistsError">{#AlreadyExistsError}</div>
+### class tf.errors.AlreadyExistsError <a class="md-anchor" id="AlreadyExistsError"></a>
Raised when an entity that we attempted to create already exists.
@@ -526,7 +527,7 @@ existing file was passed.
- - -
-#### tf.errors.AlreadyExistsError.__init__(node_def, op, message) {#AlreadyExistsError.__init__}
+#### tf.errors.AlreadyExistsError.__init__(node_def, op, message) <a class="md-anchor" id="AlreadyExistsError.__init__"></a>
Creates an `AlreadyExistsError`.
@@ -534,7 +535,7 @@ Creates an `AlreadyExistsError`.
- - -
-### class tf.errors.PermissionDeniedError <div class="md-anchor" id="PermissionDeniedError">{#PermissionDeniedError}</div>
+### class tf.errors.PermissionDeniedError <a class="md-anchor" id="PermissionDeniedError"></a>
Raised when the caller does not have permission to run an operation.
@@ -545,7 +546,7 @@ file for which the user does not have the read file permission.
- - -
-#### tf.errors.PermissionDeniedError.__init__(node_def, op, message) {#PermissionDeniedError.__init__}
+#### tf.errors.PermissionDeniedError.__init__(node_def, op, message) <a class="md-anchor" id="PermissionDeniedError.__init__"></a>
Creates a `PermissionDeniedError`.
@@ -553,7 +554,7 @@ Creates a `PermissionDeniedError`.
- - -
-### class tf.errors.UnauthenticatedError <div class="md-anchor" id="UnauthenticatedError">{#UnauthenticatedError}</div>
+### class tf.errors.UnauthenticatedError <a class="md-anchor" id="UnauthenticatedError"></a>
The request does not have valid authentication credentials.
@@ -561,7 +562,7 @@ This exception is not currently used.
- - -
-#### tf.errors.UnauthenticatedError.__init__(node_def, op, message) {#UnauthenticatedError.__init__}
+#### tf.errors.UnauthenticatedError.__init__(node_def, op, message) <a class="md-anchor" id="UnauthenticatedError.__init__"></a>
Creates an `UnauthenticatedError`.
@@ -569,7 +570,7 @@ Creates an `UnauthenticatedError`.
- - -
-### class tf.errors.ResourceExhaustedError <div class="md-anchor" id="ResourceExhaustedError">{#ResourceExhaustedError}</div>
+### class tf.errors.ResourceExhaustedError <a class="md-anchor" id="ResourceExhaustedError"></a>
Some resource has been exhausted.
@@ -578,7 +579,7 @@ exhausted, or perhaps the entire file system is out of space.
- - -
-#### tf.errors.ResourceExhaustedError.__init__(node_def, op, message) {#ResourceExhaustedError.__init__}
+#### tf.errors.ResourceExhaustedError.__init__(node_def, op, message) <a class="md-anchor" id="ResourceExhaustedError.__init__"></a>
Creates a `ResourceExhaustedError`.
@@ -586,7 +587,7 @@ Creates a `ResourceExhaustedError`.
- - -
-### class tf.errors.FailedPreconditionError <div class="md-anchor" id="FailedPreconditionError">{#FailedPreconditionError}</div>
+### class tf.errors.FailedPreconditionError <a class="md-anchor" id="FailedPreconditionError"></a>
Operation was rejected because the system is not in a state to execute it.
@@ -596,7 +597,7 @@ been initialized.
- - -
-#### tf.errors.FailedPreconditionError.__init__(node_def, op, message) {#FailedPreconditionError.__init__}
+#### tf.errors.FailedPreconditionError.__init__(node_def, op, message) <a class="md-anchor" id="FailedPreconditionError.__init__"></a>
Creates a `FailedPreconditionError`.
@@ -604,7 +605,7 @@ Creates a `FailedPreconditionError`.
- - -
-### class tf.errors.AbortedError <div class="md-anchor" id="AbortedError">{#AbortedError}</div>
+### class tf.errors.AbortedError <a class="md-anchor" id="AbortedError"></a>
The operation was aborted, typically due to a concurrent action.
@@ -614,7 +615,7 @@ operation may raise `AbortedError` if a
- - -
-#### tf.errors.AbortedError.__init__(node_def, op, message) {#AbortedError.__init__}
+#### tf.errors.AbortedError.__init__(node_def, op, message) <a class="md-anchor" id="AbortedError.__init__"></a>
Creates an `AbortedError`.
@@ -622,7 +623,7 @@ Creates an `AbortedError`.
- - -
-### class tf.errors.OutOfRangeError <div class="md-anchor" id="OutOfRangeError">{#OutOfRangeError}</div>
+### class tf.errors.OutOfRangeError <a class="md-anchor" id="OutOfRangeError"></a>
Raised when an operation iterates past the valid range.
@@ -633,7 +634,7 @@ blocked on an empty queue, and a
- - -
-#### tf.errors.OutOfRangeError.__init__(node_def, op, message) {#OutOfRangeError.__init__}
+#### tf.errors.OutOfRangeError.__init__(node_def, op, message) <a class="md-anchor" id="OutOfRangeError.__init__"></a>
Creates an `OutOfRangeError`.
@@ -641,7 +642,7 @@ Creates an `OutOfRangeError`.
- - -
-### class tf.errors.UnimplementedError <div class="md-anchor" id="UnimplementedError">{#UnimplementedError}</div>
+### class tf.errors.UnimplementedError <a class="md-anchor" id="UnimplementedError"></a>
Raised when an operation has not been implemented.
@@ -653,7 +654,7 @@ is not yet supported.
- - -
-#### tf.errors.UnimplementedError.__init__(node_def, op, message) {#UnimplementedError.__init__}
+#### tf.errors.UnimplementedError.__init__(node_def, op, message) <a class="md-anchor" id="UnimplementedError.__init__"></a>
Creates an `UnimplementedError`.
@@ -661,7 +662,7 @@ Creates an `UnimplementedError`.
- - -
-### class tf.errors.InternalError <div class="md-anchor" id="InternalError">{#InternalError}</div>
+### class tf.errors.InternalError <a class="md-anchor" id="InternalError"></a>
Raised when the system experiences an internal error.
@@ -670,7 +671,7 @@ has been broken. Catching this exception is not recommended.
- - -
-#### tf.errors.InternalError.__init__(node_def, op, message) {#InternalError.__init__}
+#### tf.errors.InternalError.__init__(node_def, op, message) <a class="md-anchor" id="InternalError.__init__"></a>
Creates an `InternalError`.
@@ -678,7 +679,7 @@ Creates an `InternalError`.
- - -
-### class tf.errors.UnavailableError <div class="md-anchor" id="UnavailableError">{#UnavailableError}</div>
+### class tf.errors.UnavailableError <a class="md-anchor" id="UnavailableError"></a>
Raised when the runtime is currently unavailable.
@@ -686,7 +687,7 @@ This exception is not currently used.
- - -
-#### tf.errors.UnavailableError.__init__(node_def, op, message) {#UnavailableError.__init__}
+#### tf.errors.UnavailableError.__init__(node_def, op, message) <a class="md-anchor" id="UnavailableError.__init__"></a>
Creates an `UnavailableError`.
@@ -694,7 +695,7 @@ Creates an `UnavailableError`.
- - -
-### class tf.errors.DataLossError <div class="md-anchor" id="DataLossError">{#DataLossError}</div>
+### class tf.errors.DataLossError <a class="md-anchor" id="DataLossError"></a>
Raised when unrecoverable data loss or corruption is encountered.
@@ -704,7 +705,7 @@ if the file is truncated while it is being read.
- - -
-#### tf.errors.DataLossError.__init__(node_def, op, message) {#DataLossError.__init__}
+#### tf.errors.DataLossError.__init__(node_def, op, message) <a class="md-anchor" id="DataLossError.__init__"></a>
Creates a `DataLossError`.
diff --git a/tensorflow/g3doc/api_docs/python/constant_op.md b/tensorflow/g3doc/api_docs/python/constant_op.md
index ab19844ab8..b1a53b1a0c 100644
--- a/tensorflow/g3doc/api_docs/python/constant_op.md
+++ b/tensorflow/g3doc/api_docs/python/constant_op.md
@@ -1,12 +1,13 @@
<!-- This file is machine generated: DO NOT EDIT! -->
-# Constants, Sequences, and Random Values
+# Constants, Sequences, and Random Values <a class="md-anchor" id="AUTOGENERATED-constants--sequences--and-random-values"></a>
Note: Functions taking `Tensor` arguments can also take anything
accepted by [`tf.convert_to_tensor`](framework.md#convert_to_tensor).
<!-- TOC-BEGIN This section is generated by neural network: DO NOT EDIT! -->
## Contents
+### [Constants, Sequences, and Random Values](#AUTOGENERATED-constants--sequences--and-random-values)
* [Constant Value Tensors](#AUTOGENERATED-constant-value-tensors)
* [tf.zeros(shape, dtype=tf.float32, name=None)](#zeros)
* [tf.zeros_like(tensor, dtype=None, name=None)](#zeros_like)
@@ -28,13 +29,13 @@ accepted by [`tf.convert_to_tensor`](framework.md#convert_to_tensor).
<!-- TOC-END This section was generated by neural network, THANKS FOR READING! -->
-## Constant Value Tensors <div class="md-anchor" id="AUTOGENERATED-constant-value-tensors">{#AUTOGENERATED-constant-value-tensors}</div>
+## Constant Value Tensors <a class="md-anchor" id="AUTOGENERATED-constant-value-tensors"></a>
TensorFlow provides several operations that you can use to generate constants.
- - -
-### tf.zeros(shape, dtype=tf.float32, name=None) <div class="md-anchor" id="zeros">{#zeros}</div>
+### tf.zeros(shape, dtype=tf.float32, name=None) <a class="md-anchor" id="zeros"></a>
Creates a tensor with all elements set to zero.
@@ -47,21 +48,21 @@ For example:
tf.zeros([3, 4], int32) ==> [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
```
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>shape</b>: Either a list of integers, or a 1-D `Tensor` of type `int32`.
* <b>dtype</b>: The type of an element in the resulting `Tensor`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor` with all elements set to zero.
- - -
-### tf.zeros_like(tensor, dtype=None, name=None) <div class="md-anchor" id="zeros_like">{#zeros_like}</div>
+### tf.zeros_like(tensor, dtype=None, name=None) <a class="md-anchor" id="zeros_like"></a>
Creates a tensor with all elements set to zero.
@@ -76,7 +77,7 @@ For example:
tf.zeros_like(tensor) ==> [[0, 0, 0], [0, 0, 0]]
```
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>tensor</b>: A `Tensor`.
@@ -85,7 +86,7 @@ tf.zeros_like(tensor) ==> [[0, 0, 0], [0, 0, 0]]
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor` with all elements set to zero.
@@ -93,7 +94,7 @@ tf.zeros_like(tensor) ==> [[0, 0, 0], [0, 0, 0]]
- - -
-### tf.ones(shape, dtype=tf.float32, name=None) <div class="md-anchor" id="ones">{#ones}</div>
+### tf.ones(shape, dtype=tf.float32, name=None) <a class="md-anchor" id="ones"></a>
Creates a tensor with all elements set to 1.
@@ -106,21 +107,21 @@ For example:
tf.ones([2, 3], int32) ==> [[1, 1, 1], [1, 1, 1]]
```
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>shape</b>: Either a list of integers, or a 1-D `Tensor` of type `int32`.
* <b>dtype</b>: The type of an element in the resulting `Tensor`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor` with all elements set to 1.
- - -
-### tf.ones_like(tensor, dtype=None, name=None) <div class="md-anchor" id="ones_like">{#ones_like}</div>
+### tf.ones_like(tensor, dtype=None, name=None) <a class="md-anchor" id="ones_like"></a>
Creates a tensor with all elements set to 1.
@@ -135,7 +136,7 @@ For example:
tf.ones_like(tensor) ==> [[1, 1, 1], [1, 1, 1]]
```
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>tensor</b>: A `Tensor`.
@@ -144,7 +145,7 @@ tf.ones_like(tensor) ==> [[1, 1, 1], [1, 1, 1]]
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor` with all elements set to 1.
@@ -152,7 +153,7 @@ tf.ones_like(tensor) ==> [[1, 1, 1], [1, 1, 1]]
- - -
-### tf.fill(dims, value, name=None) <div class="md-anchor" id="fill">{#fill}</div>
+### tf.fill(dims, value, name=None) <a class="md-anchor" id="fill"></a>
Creates a tensor filled with a scalar value.
@@ -167,7 +168,7 @@ fill(dims, 9) ==> [[9, 9, 9]
[9, 9, 9]]
```
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>dims</b>: A `Tensor` of type `int32`.
@@ -175,7 +176,7 @@ fill(dims, 9) ==> [[9, 9, 9]
* <b>value</b>: A `Tensor`. 0-D (scalar). Value to fill the returned tensor.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor`. Has the same type as `value`.
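A recursive pure-Python sketch of the fill rule (a nested list of shape `dims` where every element equals `value`):

```python
def fill(dims, value):
    # An empty dims list means a 0-D (scalar) result.
    if not dims:
        return value
    return [fill(dims[1:], value) for _ in range(dims[0])]

print(fill([2, 3], 9))  # [[9, 9, 9], [9, 9, 9]]
```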
@@ -183,7 +184,7 @@ fill(dims, 9) ==> [[9, 9, 9]
- - -
-### tf.constant(value, dtype=None, shape=None, name='Const') <div class="md-anchor" id="constant">{#constant}</div>
+### tf.constant(value, dtype=None, shape=None, name='Const') <a class="md-anchor" id="constant"></a>
Creates a constant tensor.
@@ -216,7 +217,7 @@ Creates a constant tensor.
[-1. -1. -1.]]
```
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>value</b>: A constant value (or list) of output type `dtype`.
@@ -230,17 +231,17 @@ Creates a constant tensor.
* <b>name</b>: Optional name for the tensor.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A Constant Tensor.
-## Sequences <div class="md-anchor" id="AUTOGENERATED-sequences">{#AUTOGENERATED-sequences}</div>
+## Sequences <a class="md-anchor" id="AUTOGENERATED-sequences"></a>
- - -
-### tf.linspace(start, stop, num, name=None) <div class="md-anchor" id="linspace">{#linspace}</div>
+### tf.linspace(start, stop, num, name=None) <a class="md-anchor" id="linspace"></a>
Generates values in an interval.
@@ -254,7 +255,7 @@ For example:
tf.linspace(10.0, 12.0, 3, name="linspace") => [ 10.0 11.0 12.0]
```
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>start</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`.
@@ -264,7 +265,7 @@ tf.linspace(10.0, 12.0, 3, name="linspace") => [ 10.0 11.0 12.0]
* <b>num</b>: A `Tensor` of type `int32`. Number of values to generate.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor`. Has the same type as `start`. 1-D. The generated values.
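The spacing rule — `num` evenly spaced values with both endpoints included — can be sketched in plain Python:

```python
def linspace(start, stop, num):
    # Spacing is (stop - start) / (num - 1), so both endpoints appear.
    if num == 1:
        return [start]
    step = (stop - start) / (num - 1)
    return [start + i * step for i in range(num)]

print(linspace(10.0, 12.0, 3))  # [10.0, 11.0, 12.0]
```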
@@ -272,7 +273,7 @@ tf.linspace(10.0, 12.0, 3, name="linspace") => [ 10.0 11.0 12.0]
- - -
-### tf.range(start, limit, delta=1, name='range') <div class="md-anchor" id="range">{#range}</div>
+### tf.range(start, limit, delta=1, name='range') <a class="md-anchor" id="range"></a>
Creates a sequence of integers.
@@ -288,7 +289,7 @@ For example:
tf.range(start, limit, delta) ==> [3, 6, 9, 12, 15]
```
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>start</b>: A 0-D (scalar) of type `int32`. First entry in sequence.
@@ -298,13 +299,13 @@ tf.range(start, limit, delta) ==> [3, 6, 9, 12, 15]
Number that increments `start`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A 1-D `int32` `Tensor`.
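The semantics match Python's built-in `range`: start at `start`, step by `delta`, and stop before `limit` (the doc's example uses `start=3`, `limit=18`, `delta=3`):

```python
# Values begin at start, increment by delta, and exclude limit itself.
print(list(range(3, 18, 3)))  # [3, 6, 9, 12, 15]
```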
-## Random Tensors <div class="md-anchor" id="AUTOGENERATED-random-tensors">{#AUTOGENERATED-random-tensors}</div>
+## Random Tensors <a class="md-anchor" id="AUTOGENERATED-random-tensors"></a>
TensorFlow has several ops that create random tensors with different
distributions. The random ops are stateful, and create new random values each
@@ -318,7 +319,7 @@ nor op-level seed, results in a random seed for all operations.
See [`set_random_seed`](constant_op.md#set_random_seed) for details on the
interaction between operation-level and graph-level random seeds.
-### Examples: <div class="md-anchor" id="AUTOGENERATED-examples-">{#AUTOGENERATED-examples-}</div>
+### Examples: <a class="md-anchor" id="AUTOGENERATED-examples-"></a>
```python
# Create a tensor of shape [2, 3] consisting of random normal values, with mean
@@ -358,11 +359,11 @@ print sess.run(var)
- - -
-### tf.random_normal(shape, mean=0.0, stddev=1.0, dtype=tf.float32, seed=None, name=None) <div class="md-anchor" id="random_normal">{#random_normal}</div>
+### tf.random_normal(shape, mean=0.0, stddev=1.0, dtype=tf.float32, seed=None, name=None) <a class="md-anchor" id="random_normal"></a>
Outputs random values from a normal distribution.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>shape</b>: A 1-D integer Tensor or Python array. The shape of the output tensor.
@@ -375,14 +376,14 @@ Outputs random values from a normal distribution.
See [`set_random_seed`](constant_op.md#set_random_seed) for behavior.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A tensor of the specified shape filled with random normal values.
- - -
-### tf.truncated_normal(shape, mean=0.0, stddev=1.0, dtype=tf.float32, seed=None, name=None) <div class="md-anchor" id="truncated_normal">{#truncated_normal}</div>
+### tf.truncated_normal(shape, mean=0.0, stddev=1.0, dtype=tf.float32, seed=None, name=None) <a class="md-anchor" id="truncated_normal"></a>
Outputs random values from a truncated normal distribution.
@@ -390,7 +391,7 @@ The generated values follow a normal distribution with specified mean and
standard deviation, except that values whose magnitude is more than 2 standard
deviations from the mean are dropped and re-picked.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>shape</b>: A 1-D integer Tensor or Python array. The shape of the output tensor.
@@ -403,14 +404,14 @@ deviations from the mean are dropped and re-picked.
See [`set_random_seed`](constant_op.md#set_random_seed) for behavior.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A tensor of the specified shape filled with random truncated normal values.
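The drop-and-re-pick rule described above is rejection sampling; a scalar sketch in plain Python (illustrative only, not TensorFlow's generator):

```python
import random

def truncated_normal(mean=0.0, stddev=1.0):
    # Redraw until the sample lies within 2 standard deviations of the
    # mean, matching the doc's drop-and-re-pick rule.
    while True:
        x = random.gauss(mean, stddev)
        if abs(x - mean) <= 2 * stddev:
            return x

samples = [truncated_normal(0.0, 1.0) for _ in range(1000)]
assert all(abs(s) <= 2.0 for s in samples)
```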
- - -
-### tf.random_uniform(shape, minval=0.0, maxval=1.0, dtype=tf.float32, seed=None, name=None) <div class="md-anchor" id="random_uniform">{#random_uniform}</div>
+### tf.random_uniform(shape, minval=0.0, maxval=1.0, dtype=tf.float32, seed=None, name=None) <a class="md-anchor" id="random_uniform"></a>
Outputs random values from a uniform distribution.
@@ -418,7 +419,7 @@ The generated values follow a uniform distribution in the range
`[minval, maxval)`. The lower bound `minval` is included in the range, while
the upper bound `maxval` is excluded.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>shape</b>: A 1-D integer Tensor or Python array. The shape of the output tensor.
@@ -431,14 +432,14 @@ the upper bound `maxval` is excluded.
See [`set_random_seed`](constant_op.md#set_random_seed) for behavior.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A tensor of the specified shape filled with random uniform values.
- - -
-### tf.random_shuffle(value, seed=None, name=None) <div class="md-anchor" id="random_shuffle">{#random_shuffle}</div>
+### tf.random_shuffle(value, seed=None, name=None) <a class="md-anchor" id="random_shuffle"></a>
Randomly shuffles a tensor along its first dimension.
@@ -452,7 +453,7 @@ to one and only one `output[i]`. For example, a mapping that might occur for a
[5, 6]] [3, 4]]
```
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>value</b>: A Tensor to be shuffled.
@@ -460,7 +461,7 @@ to one and only one `output[i]`. For example, a mapping that might occur for a
See [`set_random_seed`](constant_op.md#set_random_seed) for behavior.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A tensor of same shape and type as `value`, shuffled along its first
dimension.
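The first-dimension-only shuffle can be sketched in NumPy: rows move as units, and their contents are untouched.

```python
import numpy as np

x = np.array([[1, 2], [3, 4], [5, 6]])
perm = np.random.permutation(x.shape[0])  # permute indices of dim 0 only
shuffled = x[perm]

# same shape, and every original row still appears exactly once
assert shuffled.shape == x.shape
assert sorted(map(tuple, shuffled.tolist())) == sorted(map(tuple, x.tolist()))
```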
@@ -468,7 +469,7 @@ to one and only one `output[i]`. For example, a mapping that might occur for a
- - -
-### tf.set_random_seed(seed) <div class="md-anchor" id="set_random_seed">{#set_random_seed}</div>
+### tf.set_random_seed(seed) <a class="md-anchor" id="set_random_seed"></a>
Sets the graph-level random seed.
@@ -561,7 +562,7 @@ with tf.Session() as sess2:
print sess2.run(b) # generates 'B2'
```
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>seed</b>: integer.
diff --git a/tensorflow/g3doc/api_docs/python/control_flow_ops.md b/tensorflow/g3doc/api_docs/python/control_flow_ops.md
index 4d96984f59..f3245e6957 100644
--- a/tensorflow/g3doc/api_docs/python/control_flow_ops.md
+++ b/tensorflow/g3doc/api_docs/python/control_flow_ops.md
@@ -1,12 +1,13 @@
<!-- This file is machine generated: DO NOT EDIT! -->
-# Control Flow
+# Control Flow <a class="md-anchor" id="AUTOGENERATED-control-flow"></a>
Note: Functions taking `Tensor` arguments can also take anything
accepted by [`tf.convert_to_tensor`](framework.md#convert_to_tensor).
<!-- TOC-BEGIN This section is generated by neural network: DO NOT EDIT! -->
## Contents
+### [Control Flow](#AUTOGENERATED-control-flow)
* [Control Flow Operations](#AUTOGENERATED-control-flow-operations)
* [tf.identity(input, name=None)](#identity)
* [tf.tuple(tensors, name=None, control_inputs=None)](#tuple)
@@ -40,31 +41,31 @@ accepted by [`tf.convert_to_tensor`](framework.md#convert_to_tensor).
<!-- TOC-END This section was generated by neural network, THANKS FOR READING! -->
-## Control Flow Operations <div class="md-anchor" id="AUTOGENERATED-control-flow-operations">{#AUTOGENERATED-control-flow-operations}</div>
+## Control Flow Operations <a class="md-anchor" id="AUTOGENERATED-control-flow-operations"></a>
TensorFlow provides several operations and classes that you can use to control
the execution of operations and add conditional dependencies to your graph.
- - -
-### tf.identity(input, name=None) <div class="md-anchor" id="identity">{#identity}</div>
+### tf.identity(input, name=None) <a class="md-anchor" id="identity"></a>
Return a tensor with the same shape and contents as the input tensor or value.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>input</b>: A `Tensor`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor`. Has the same type as `input`.
- - -
-### tf.tuple(tensors, name=None, control_inputs=None) <div class="md-anchor" id="tuple">{#tuple}</div>
+### tf.tuple(tensors, name=None, control_inputs=None) <a class="md-anchor" id="tuple"></a>
Group tensors together.
@@ -82,18 +83,18 @@ are done.
See also `group` and `with_dependencies`.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>tensors</b>: A list of `Tensor`s or `IndexedSlices`, some entries can be `None`.
* <b>name</b>: (optional) A name to use as a `name_scope` for the operation.
* <b>control_inputs</b>: List of additional ops to finish before returning.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
Same as `tensors`.
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>ValueError</b>: If `tensors` does not contain any `Tensor` or `IndexedSlices`.
@@ -101,7 +102,7 @@ See also `group` and `with_dependencies`.
- - -
-### tf.group(*inputs, **kwargs) <div class="md-anchor" id="group">{#group}</div>
+### tf.group(*inputs, **kwargs) <a class="md-anchor" id="group"></a>
Create an op that groups multiple operations.
@@ -110,18 +111,18 @@ output.
See also `tuple` and `with_dependencies`.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>*inputs</b>: One or more tensors to group.
* <b>**kwargs</b>: Optional parameters to pass when constructing the NodeDef.
* <b>name</b>: A name for this operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
An Operation that executes all its inputs.
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>ValueError</b>: If an unknown keyword argument is provided, or if there are
@@ -130,30 +131,30 @@ See also `tuple` and `with_dependencies`.
- - -
-### tf.no_op(name=None) <div class="md-anchor" id="no_op">{#no_op}</div>
+### tf.no_op(name=None) <a class="md-anchor" id="no_op"></a>
Does nothing. Only useful as a placeholder for control edges.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
The created Operation.
- - -
-### tf.count_up_to(ref, limit, name=None) <div class="md-anchor" id="count_up_to">{#count_up_to}</div>
+### tf.count_up_to(ref, limit, name=None) <a class="md-anchor" id="count_up_to"></a>
Increments 'ref' until it reaches 'limit'.
This operation outputs "ref" after the update is done. This makes it
easier to chain operations that need to use the updated value.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>ref</b>: A mutable `Tensor`. Must be one of the following types: `int32`, `int64`.
@@ -163,7 +164,7 @@ easier to chain operations that need to use the updated value.
'OutOfRange' error.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor`. Has the same type as `ref`.
A copy of the input before increment. If nothing else modifies the
@@ -171,188 +172,188 @@ easier to chain operations that need to use the updated value.
-## Logical Operators <div class="md-anchor" id="AUTOGENERATED-logical-operators">{#AUTOGENERATED-logical-operators}</div>
+## Logical Operators <a class="md-anchor" id="AUTOGENERATED-logical-operators"></a>
TensorFlow provides several operations that you can use to add logical operators
to your graph.
- - -
-### tf.logical_and(x, y, name=None) <div class="md-anchor" id="logical_and">{#logical_and}</div>
+### tf.logical_and(x, y, name=None) <a class="md-anchor" id="logical_and"></a>
Returns the truth value of x AND y element-wise.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>x</b>: A `Tensor` of type `bool`.
* <b>y</b>: A `Tensor` of type `bool`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor` of type `bool`.
- - -
-### tf.logical_not(x, name=None) <div class="md-anchor" id="logical_not">{#logical_not}</div>
+### tf.logical_not(x, name=None) <a class="md-anchor" id="logical_not"></a>
Returns the truth value of NOT x element-wise.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>x</b>: A `Tensor` of type `bool`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor` of type `bool`.
- - -
-### tf.logical_or(x, y, name=None) <div class="md-anchor" id="logical_or">{#logical_or}</div>
+### tf.logical_or(x, y, name=None) <a class="md-anchor" id="logical_or"></a>
Returns the truth value of x OR y element-wise.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>x</b>: A `Tensor` of type `bool`.
* <b>y</b>: A `Tensor` of type `bool`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor` of type `bool`.
- - -
-### tf.logical_xor(x, y, name='LogicalXor') <div class="md-anchor" id="logical_xor">{#logical_xor}</div>
+### tf.logical_xor(x, y, name='LogicalXor') <a class="md-anchor" id="logical_xor"></a>
x ^ y = (x | y) & ~(x & y).
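The identity above can be checked directly with boolean arrays; a NumPy sketch of how XOR is composed from AND, OR, and NOT:

```python
import numpy as np

x = np.array([False, False, True, True])
y = np.array([False, True, False, True])

# x ^ y = (x | y) & ~(x & y)
xor = np.logical_and(np.logical_or(x, y),
                     np.logical_not(np.logical_and(x, y)))

assert list(xor) == [False, True, True, False]
```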
-## Comparison Operators <div class="md-anchor" id="AUTOGENERATED-comparison-operators">{#AUTOGENERATED-comparison-operators}</div>
+## Comparison Operators <a class="md-anchor" id="AUTOGENERATED-comparison-operators"></a>
TensorFlow provides several operations that you can use to add comparison
operators to your graph.
- - -
-### tf.equal(x, y, name=None) <div class="md-anchor" id="equal">{#equal}</div>
+### tf.equal(x, y, name=None) <a class="md-anchor" id="equal"></a>
Returns the truth value of (x == y) element-wise.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>x</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `complex64`, `quint8`, `qint8`, `qint32`.
* <b>y</b>: A `Tensor`. Must have the same type as `x`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor` of type `bool`.
- - -
-### tf.not_equal(x, y, name=None) <div class="md-anchor" id="not_equal">{#not_equal}</div>
+### tf.not_equal(x, y, name=None) <a class="md-anchor" id="not_equal"></a>
Returns the truth value of (x != y) element-wise.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>x</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `complex64`, `quint8`, `qint8`, `qint32`.
* <b>y</b>: A `Tensor`. Must have the same type as `x`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor` of type `bool`.
- - -
-### tf.less(x, y, name=None) <div class="md-anchor" id="less">{#less}</div>
+### tf.less(x, y, name=None) <a class="md-anchor" id="less"></a>
Returns the truth value of (x < y) element-wise.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>x</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`.
* <b>y</b>: A `Tensor`. Must have the same type as `x`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor` of type `bool`.
- - -
-### tf.less_equal(x, y, name=None) <div class="md-anchor" id="less_equal">{#less_equal}</div>
+### tf.less_equal(x, y, name=None) <a class="md-anchor" id="less_equal"></a>
Returns the truth value of (x <= y) element-wise.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>x</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`.
* <b>y</b>: A `Tensor`. Must have the same type as `x`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor` of type `bool`.
- - -
-### tf.greater(x, y, name=None) <div class="md-anchor" id="greater">{#greater}</div>
+### tf.greater(x, y, name=None) <a class="md-anchor" id="greater"></a>
Returns the truth value of (x > y) element-wise.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>x</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`.
* <b>y</b>: A `Tensor`. Must have the same type as `x`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor` of type `bool`.
- - -
-### tf.greater_equal(x, y, name=None) <div class="md-anchor" id="greater_equal">{#greater_equal}</div>
+### tf.greater_equal(x, y, name=None) <a class="md-anchor" id="greater_equal"></a>
Returns the truth value of (x >= y) element-wise.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>x</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`.
* <b>y</b>: A `Tensor`. Must have the same type as `x`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor` of type `bool`.
- - -
-### tf.select(condition, t, e, name=None) <div class="md-anchor" id="select">{#select}</div>
+### tf.select(condition, t, e, name=None) <a class="md-anchor" id="select"></a>
Selects elements from `t` or `e`, depending on `condition`.
@@ -375,7 +376,7 @@ select(condition, t, e) ==> [[1, 2],
[1, 2]]
```
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>condition</b>: A `Tensor` of type `bool`.
@@ -383,14 +384,14 @@ select(condition, t, e) ==> [[1, 2],
* <b>e</b>: A `Tensor` with the same type and shape as `t`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor` with the same type and shape as `t` and `e`.
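A one-dimensional NumPy sketch of the element-wise case (`tf.select` additionally supports selecting whole rows when `condition` is a vector and `t`, `e` are matrices, as in the example above):

```python
import numpy as np

condition = np.array([True, False, True])
t = np.array([10, 20, 30])
e = np.array([1, 2, 3])

# picks t[i] where condition[i] is True, else e[i]
result = np.where(condition, t, e)
assert list(result) == [10, 2, 30]
```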
- - -
-### tf.where(input, name=None) <div class="md-anchor" id="where">{#where}</div>
+### tf.where(input, name=None) <a class="md-anchor" id="where"></a>
Returns locations of true values in a boolean tensor.
@@ -426,116 +427,116 @@ where(input) ==> [[0, 0, 0],
[2, 1, 1]]
```
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>input</b>: A `Tensor` of type `bool`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor` of type `int64`.
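The coordinate output can be sketched with NumPy's `argwhere`, which likewise returns one row of integer coordinates per true element, in row-major order:

```python
import numpy as np

inp = np.array([[True, False],
                [False, True]])

coords = np.argwhere(inp)  # one [row, col] pair per True entry
assert coords.tolist() == [[0, 0], [1, 1]]
```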
-## Debugging Operations <div class="md-anchor" id="AUTOGENERATED-debugging-operations">{#AUTOGENERATED-debugging-operations}</div>
+## Debugging Operations <a class="md-anchor" id="AUTOGENERATED-debugging-operations"></a>
TensorFlow provides several operations that you can use to validate values and
debug your graph.
- - -
-### tf.is_finite(x, name=None) <div class="md-anchor" id="is_finite">{#is_finite}</div>
+### tf.is_finite(x, name=None) <a class="md-anchor" id="is_finite"></a>
Returns which elements of x are finite.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>x</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor` of type `bool`.
- - -
-### tf.is_inf(x, name=None) <div class="md-anchor" id="is_inf">{#is_inf}</div>
+### tf.is_inf(x, name=None) <a class="md-anchor" id="is_inf"></a>
Returns which elements of x are Inf.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>x</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor` of type `bool`.
- - -
-### tf.is_nan(x, name=None) <div class="md-anchor" id="is_nan">{#is_nan}</div>
+### tf.is_nan(x, name=None) <a class="md-anchor" id="is_nan"></a>
Returns which elements of x are NaN.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>x</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor` of type `bool`.
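The three element-wise predicates partition float values the same way NumPy's do, which makes their semantics easy to sketch side by side:

```python
import numpy as np

x = np.array([1.0, np.inf, -np.inf, np.nan])

assert list(np.isfinite(x)) == [True, False, False, False]
assert list(np.isinf(x)) == [False, True, True, False]
assert list(np.isnan(x)) == [False, False, False, True]
```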
- - -
-### tf.verify_tensor_all_finite(t, msg, name=None) <div class="md-anchor" id="verify_tensor_all_finite">{#verify_tensor_all_finite}</div>
+### tf.verify_tensor_all_finite(t, msg, name=None) <a class="md-anchor" id="verify_tensor_all_finite"></a>
Assert that the tensor does not contain any NaN's or Inf's.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>t</b>: Tensor to check.
* <b>msg</b>: Message to log on failure.
* <b>name</b>: A name for this operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
Same tensor as `t`.
- - -
-### tf.check_numerics(tensor, message, name=None) <div class="md-anchor" id="check_numerics">{#check_numerics}</div>
+### tf.check_numerics(tensor, message, name=None) <a class="md-anchor" id="check_numerics"></a>
Checks a tensor for NaN and Inf values.
When run, reports an `InvalidArgument` error if `tensor` has any values
that are not a number (NaN) or infinity (Inf). Otherwise, passes `tensor` as-is.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>tensor</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`.
* <b>message</b>: A `string`. Prefix of the error message.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor`. Has the same type as `tensor`.
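The pass-through-or-fail contract can be sketched in plain NumPy (the real op reports an `InvalidArgument` error at graph run time; this sketch raises a Python `ValueError` instead, and `check_numerics` here is an illustrative helper):

```python
import numpy as np

def check_numerics(tensor, message):
    """Return tensor unchanged, unless it contains NaN or Inf values."""
    if not np.isfinite(tensor).all():
        raise ValueError(message + ": tensor had NaN or Inf values")
    return tensor

ok = check_numerics(np.array([1.0, 2.0]), "layer1")
assert ok.tolist() == [1.0, 2.0]

try:
    check_numerics(np.array([1.0, np.nan]), "layer1")
    raised = False
except ValueError:
    raised = True
assert raised
```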
- - -
-### tf.add_check_numerics_ops() <div class="md-anchor" id="add_check_numerics_ops">{#add_check_numerics_ops}</div>
+### tf.add_check_numerics_ops() <a class="md-anchor" id="add_check_numerics_ops"></a>
Connect a check_numerics to every floating point tensor.
@@ -544,21 +545,21 @@ tensor in the graph. For all ops in the graph, the `check_numerics` op for
all of its (`float` or `double`) inputs is guaranteed to run before the
`check_numerics` op on any of its outputs.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `group` op depending on all `check_numerics` ops added.
- - -
-### tf.Assert(condition, data, summarize=None, name=None) <div class="md-anchor" id="Assert">{#Assert}</div>
+### tf.Assert(condition, data, summarize=None, name=None) <a class="md-anchor" id="Assert"></a>
Asserts that the given condition is true.
If `condition` evaluates to false, print the list of tensors in `data`.
`summarize` determines how many entries of the tensors to print.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>condition</b>: The condition to evaluate.
@@ -569,14 +570,14 @@ If `condition` evaluates to false, print the list of tensors in `data`.
- - -
-### tf.Print(input_, data, message=None, first_n=None, summarize=None, name=None) <div class="md-anchor" id="Print">{#Print}</div>
+### tf.Print(input_, data, message=None, first_n=None, summarize=None, name=None) <a class="md-anchor" id="Print"></a>
Prints a list of tensors.
This is an identity op with the side effect of printing `data` when
evaluating.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>input_</b>: A tensor passed through this op.
@@ -587,7 +588,7 @@ evaluating.
* <b>summarize</b>: Only print this many entries of each tensor.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
Same tensor as `input_`.
diff --git a/tensorflow/g3doc/api_docs/python/framework.md b/tensorflow/g3doc/api_docs/python/framework.md
index 1fc659ef0b..4107a8459e 100644
--- a/tensorflow/g3doc/api_docs/python/framework.md
+++ b/tensorflow/g3doc/api_docs/python/framework.md
@@ -1,8 +1,9 @@
<!-- This file is machine generated: DO NOT EDIT! -->
-# Building Graphs
+# Building Graphs <a class="md-anchor" id="AUTOGENERATED-building-graphs"></a>
<!-- TOC-BEGIN This section is generated by neural network: DO NOT EDIT! -->
## Contents
+### [Building Graphs](#AUTOGENERATED-building-graphs)
* [Core graph data structures](#AUTOGENERATED-core-graph-data-structures)
* [class tf.Graph](#Graph)
* [class tf.Operation](#Operation)
@@ -35,11 +36,11 @@
Classes and functions for building TensorFlow graphs.
-## Core graph data structures <div class="md-anchor" id="AUTOGENERATED-core-graph-data-structures">{#AUTOGENERATED-core-graph-data-structures}</div>
+## Core graph data structures <a class="md-anchor" id="AUTOGENERATED-core-graph-data-structures"></a>
- - -
-### class tf.Graph <div class="md-anchor" id="Graph">{#Graph}</div>
+### class tf.Graph <a class="md-anchor" id="Graph"></a>
A TensorFlow computation, represented as a dataflow graph.
@@ -77,14 +78,14 @@ are not thread-safe.
- - -
-#### tf.Graph.__init__() {#Graph.__init__}
+#### tf.Graph.__init__() <a class="md-anchor" id="Graph.__init__"></a>
Creates a new, empty Graph.
- - -
-#### tf.Graph.as_default() {#Graph.as_default}
+#### tf.Graph.as_default() <a class="md-anchor" id="Graph.as_default"></a>
Returns a context manager that makes this `Graph` the default graph.
@@ -115,14 +116,14 @@ with tf.Graph().as_default() as g:
assert c.graph is g
```
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A context manager for using this graph as the default graph.
- - -
-#### tf.Graph.as_graph_def(from_version=None) {#Graph.as_graph_def}
+#### tf.Graph.as_graph_def(from_version=None) <a class="md-anchor" id="Graph.as_graph_def"></a>
Returns a serialized `GraphDef` representation of this graph.
@@ -132,14 +133,14 @@ The serialized `GraphDef` can be imported into another `Graph`
This method is thread-safe.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>from_version</b>: Optional. If this is set, returns a `GraphDef`
containing only the nodes that were added to this graph since
its `version` property had the given value.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A [`GraphDef`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/framework/graph.proto)
protocol buffer.
@@ -147,7 +148,7 @@ This method is thread-safe.
- - -
-#### tf.Graph.finalize() {#Graph.finalize}
+#### tf.Graph.finalize() <a class="md-anchor" id="Graph.finalize"></a>
Finalizes this graph, making it read-only.
@@ -159,14 +160,14 @@ when using a [`QueueRunner`](train.md#QueueRunner).
- - -
-#### tf.Graph.finalized {#Graph.finalized}
+#### tf.Graph.finalized <a class="md-anchor" id="Graph.finalized"></a>
True if this graph has been finalized.
- - -
-#### tf.Graph.control_dependencies(control_inputs) {#Graph.control_dependencies}
+#### tf.Graph.control_dependencies(control_inputs) <a class="md-anchor" id="Graph.control_dependencies"></a>
Returns a context manager that specifies control dependencies.
@@ -214,19 +215,19 @@ def my_func(pred, tensor):
return tf.matmul(tensor, tensor)
```
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>control_inputs</b>: A list of `Operation` or `Tensor` objects, which
must be executed or computed before running the operations
defined in the context.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A context manager that specifies control dependencies for all
operations constructed within the context.
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>TypeError</b>: If `control_inputs` is not a list of `Operation` or
@@ -235,7 +236,7 @@ def my_func(pred, tensor):
- - -
-#### tf.Graph.device(device_name_or_function) {#Graph.device}
+#### tf.Graph.device(device_name_or_function) <a class="md-anchor" id="Graph.device"></a>
Returns a context manager that specifies the default device to use.
@@ -273,13 +274,13 @@ with g.device(matmul_on_gpu):
# on CPU 0.
```
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>device_name_or_function</b>: The device name or function to use in
the context.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A context manager that specifies the default device to use for newly
created ops.
@@ -287,7 +288,7 @@ with g.device(matmul_on_gpu):
- - -
-#### tf.Graph.name_scope(name) {#Graph.name_scope}
+#### tf.Graph.name_scope(name) <a class="md-anchor" id="Graph.name_scope"></a>
Returns a context manager that creates hierarchical names for operations.
@@ -357,12 +358,12 @@ with g.name_scope('my_layer') as scope:
```
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>name</b>: A name for the scope.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A context manager that installs `name` as a new name scope.
@@ -378,11 +379,11 @@ additional collections by specifying a new name.
- - -
-#### tf.Graph.add_to_collection(name, value) {#Graph.add_to_collection}
+#### tf.Graph.add_to_collection(name, value) <a class="md-anchor" id="Graph.add_to_collection"></a>
Stores `value` in the collection with the given `name`.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>name</b>: The key for the collection. For example, the `GraphKeys` class
@@ -392,11 +393,11 @@ Stores `value` in the collection with the given `name`.
- - -
-#### tf.Graph.get_collection(name, scope=None) {#Graph.get_collection}
+#### tf.Graph.get_collection(name, scope=None) <a class="md-anchor" id="Graph.get_collection"></a>
Returns a list of values in the collection with the given `name`.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>name</b>: The key for the collection. For example, the `GraphKeys` class
@@ -404,7 +405,7 @@ Returns a list of values in the collection with the given `name`.
* <b>scope</b>: (Optional.) If supplied, the resulting list is filtered to include
only items whose name begins with this string.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
The list of values in the collection with the given `name`, or
an empty list if no value has been added to that collection. The
@@ -415,7 +416,7 @@ Returns a list of values in the collection with the given `name`.
- - -
-#### tf.Graph.as_graph_element(obj, allow_tensor=True, allow_operation=True) {#Graph.as_graph_element}
+#### tf.Graph.as_graph_element(obj, allow_tensor=True, allow_operation=True) <a class="md-anchor" id="Graph.as_graph_element"></a>
Returns the object referred to by `obj`, as an `Operation` or `Tensor`.
@@ -428,7 +429,7 @@ Session API.
This method may be called concurrently from multiple threads.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>obj</b>: A `Tensor`, an `Operation`, or the name of a tensor or operation.
@@ -437,11 +438,11 @@ This method may be called concurrently from multiple threads.
* <b>allow_tensor</b>: If true, `obj` may refer to a `Tensor`.
* <b>allow_operation</b>: If true, `obj` may refer to an `Operation`.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
The `Tensor` or `Operation` in the Graph corresponding to `obj`.
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>TypeError</b>: If `obj` is not a type we support attempting to convert
@@ -453,22 +454,22 @@ This method may be called concurrently from multiple threads.
- - -
-#### tf.Graph.get_operation_by_name(name) {#Graph.get_operation_by_name}
+#### tf.Graph.get_operation_by_name(name) <a class="md-anchor" id="Graph.get_operation_by_name"></a>
Returns the `Operation` with the given `name`.
This method may be called concurrently from multiple threads.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>name</b>: The name of the `Operation` to return.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
The `Operation` with the given `name`.
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>TypeError</b>: If `name` is not a string.
@@ -477,22 +478,22 @@ This method may be called concurrently from multiple threads.
- - -
-#### tf.Graph.get_tensor_by_name(name) {#Graph.get_tensor_by_name}
+#### tf.Graph.get_tensor_by_name(name) <a class="md-anchor" id="Graph.get_tensor_by_name"></a>
Returns the `Tensor` with the given `name`.
This method may be called concurrently from multiple threads.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>name</b>: The name of the `Tensor` to return.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
The `Tensor` with the given `name`.
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>TypeError</b>: If `name` is not a string.
@@ -501,7 +502,7 @@ This method may be called concurrently from multiple threads.
- - -
-#### tf.Graph.get_operations() {#Graph.get_operations}
+#### tf.Graph.get_operations() <a class="md-anchor" id="Graph.get_operations"></a>
Return the list of operations in the graph.
@@ -511,7 +512,7 @@ list of operations known to the graph.
This method may be called concurrently from multiple threads.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A list of Operations.
@@ -519,24 +520,24 @@ This method may be called concurrently from multiple threads.
- - -
-#### tf.Graph.get_default_device() {#Graph.get_default_device}
+#### tf.Graph.get_default_device() <a class="md-anchor" id="Graph.get_default_device"></a>
Returns the default device.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A string.
- - -
-#### tf.Graph.seed {#Graph.seed}
+#### tf.Graph.seed <a class="md-anchor" id="Graph.seed"></a>
- - -
-#### tf.Graph.unique_name(name) {#Graph.unique_name}
+#### tf.Graph.unique_name(name) <a class="md-anchor" id="Graph.unique_name"></a>
Return a unique Operation name for "name".
@@ -549,12 +550,12 @@ to help identify Operations when debugging a Graph. Operation names
are displayed in error messages reported by the TensorFlow runtime,
and in various visualization tools such as TensorBoard.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>name</b>: The name for an `Operation`.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A string to be passed to `create_op()` that will be used
to name the operation being created.
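The uniquifying scheme can be sketched with a counter per base name — first use keeps the name, later uses get a numeric suffix (a simplified sketch; the real method also handles name scopes):

```python
def make_unique_namer():
    counts = {}
    def unique_name(name):
        n = counts.get(name, 0)
        counts[name] = n + 1
        # first occurrence is unchanged; repeats get "_1", "_2", ...
        return name if n == 0 else "%s_%d" % (name, n)
    return unique_name

unique = make_unique_namer()
assert unique("MatMul") == "MatMul"
assert unique("MatMul") == "MatMul_1"
assert unique("MatMul") == "MatMul_2"
assert unique("Add") == "Add"
```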
@@ -562,14 +563,14 @@ and in various visualization tools such as TensorBoard.
- - -
-#### tf.Graph.version {#Graph.version}
+#### tf.Graph.version <a class="md-anchor" id="Graph.version"></a>
Returns a version number that increases as ops are added to the graph.
- - -
-#### tf.Graph.create_op(op_type, inputs, dtypes, input_types=None, name=None, attrs=None, op_def=None, compute_shapes=True) {#Graph.create_op}
+#### tf.Graph.create_op(op_type, inputs, dtypes, input_types=None, name=None, attrs=None, op_def=None, compute_shapes=True) <a class="md-anchor" id="Graph.create_op"></a>
Creates an `Operation` in this graph.
@@ -578,7 +579,7 @@ programs will not call this method directly, and instead use the
Python op constructors, such as `tf.constant()`, which add ops to
the default graph.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>op_type</b>: The `Operation` type to create. This corresponds to the
@@ -599,19 +600,19 @@ the default graph.
* <b>compute_shapes</b>: (Optional.) If True, shape inference will be performed
to compute the shapes of the outputs.
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>TypeError</b>: if any of the inputs is not a `Tensor`.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
An `Operation` object.
- - -
-#### tf.Graph.gradient_override_map(op_type_map) {#Graph.gradient_override_map}
+#### tf.Graph.gradient_override_map(op_type_map) <a class="md-anchor" id="Graph.gradient_override_map"></a>
EXPERIMENTAL: A context manager for overriding gradient functions.
@@ -633,18 +634,18 @@ with tf.Graph().as_default() as g:
# gradient of s_2.
```
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>op_type_map</b>: A dictionary mapping op type strings to alternative op
type strings.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A context manager that sets the alternative op type to be used for one
or more ops created in that context.
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>TypeError</b>: If `op_type_map` is not a dictionary mapping strings to
@@ -654,7 +655,7 @@ with tf.Graph().as_default() as g:
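The override map described above behaves like a temporary, reversible patch on a gradient-function registry. The following is a minimal pure-Python sketch of that mechanism, not TensorFlow's actual implementation; the names `_gradient_registry` and `lookup_gradient` are hypothetical:

```python
from contextlib import contextmanager

# Hypothetical registry mapping op type names to gradient functions.
_gradient_registry = {"Square": "builtin_square_grad"}

class Graph:
    def __init__(self):
        self._gradient_override_map = {}

    @contextmanager
    def gradient_override_map(self, op_type_map):
        if not isinstance(op_type_map, dict):
            raise TypeError("op_type_map must be a dictionary")
        saved = dict(self._gradient_override_map)
        self._gradient_override_map.update(op_type_map)
        try:
            yield
        finally:
            # Restoring the saved map undoes the override on exit.
            self._gradient_override_map = saved

    def lookup_gradient(self, op_type):
        # Overrides only affect lookups made inside the context.
        mapped = self._gradient_override_map.get(op_type, op_type)
        return _gradient_registry.get(mapped)

g = Graph()
with g.gradient_override_map({"Square": "CustomSquare"}):
    inside = g.lookup_gradient("Square")   # resolved via "CustomSquare"
outside = g.lookup_gradient("Square")      # resolved via "Square" again
```

The key design point is that the override is scoped: leaving the `with` block restores the previous mapping, so ops created elsewhere are unaffected.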
- - -
-### class tf.Operation <div class="md-anchor" id="Operation">{#Operation}</div>
+### class tf.Operation <a class="md-anchor" id="Operation"></a>
Represents a graph node that performs computation on tensors.
@@ -674,25 +675,25 @@ be executed by passing it to [`Session.run()`](client.md#Session.run).
- - -
-#### tf.Operation.name {#Operation.name}
+#### tf.Operation.name <a class="md-anchor" id="Operation.name"></a>
The full name of this operation.
- - -
-#### tf.Operation.type {#Operation.type}
+#### tf.Operation.type <a class="md-anchor" id="Operation.type"></a>
The type of the op (e.g. `"MatMul"`).
- - -
-#### tf.Operation.inputs {#Operation.inputs}
+#### tf.Operation.inputs <a class="md-anchor" id="Operation.inputs"></a>
The list of `Tensor` objects representing the data inputs of this op.
- - -
-#### tf.Operation.control_inputs {#Operation.control_inputs}
+#### tf.Operation.control_inputs <a class="md-anchor" id="Operation.control_inputs"></a>
The `Operation` objects on which this op has a control dependency.
@@ -702,37 +703,37 @@ mechanism can be used to run ops sequentially for performance
reasons, or to ensure that the side effects of an op are observed
in the correct order.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A list of `Operation` objects.
- - -
-#### tf.Operation.outputs {#Operation.outputs}
+#### tf.Operation.outputs <a class="md-anchor" id="Operation.outputs"></a>
The list of `Tensor` objects representing the outputs of this op.
- - -
-#### tf.Operation.device {#Operation.device}
+#### tf.Operation.device <a class="md-anchor" id="Operation.device"></a>
The name of the device to which this op has been assigned, if any.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
The string name of the device to which this op has been
assigned, or None if it has not been assigned to a device.
- - -
-#### tf.Operation.graph {#Operation.graph}
+#### tf.Operation.graph <a class="md-anchor" id="Operation.graph"></a>
The `Graph` that contains this operation.
- - -
-#### tf.Operation.run(feed_dict=None, session=None) {#Operation.run}
+#### tf.Operation.run(feed_dict=None, session=None) <a class="md-anchor" id="Operation.run"></a>
Runs this operation in a `Session`.
@@ -743,7 +744,7 @@ produce the inputs needed for this operation.
launched in a session, and either a default session must be
available, or `session` must be specified explicitly.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>feed_dict</b>: A dictionary that maps `Tensor` objects to feed values.
@@ -756,20 +757,20 @@ available, or `session` must be specified explicitly.
- - -
-#### tf.Operation.get_attr(name) {#Operation.get_attr}
+#### tf.Operation.get_attr(name) <a class="md-anchor" id="Operation.get_attr"></a>
Returns the value of the attr of this op with the given `name`.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>name</b>: The name of the attr to fetch.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
The value of the attr, as a Python object.
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>ValueError</b>: If this op does not have an attr with the given `name`.
@@ -777,15 +778,15 @@ Returns the value of the attr of this op with the given `name`.
- - -
-#### tf.Operation.traceback {#Operation.traceback}
+#### tf.Operation.traceback <a class="md-anchor" id="Operation.traceback"></a>
Returns the call stack from when this operation was constructed.
-#### Other Methods
+#### Other Methods <a class="md-anchor" id="AUTOGENERATED-other-methods"></a>
- - -
-#### tf.Operation.__init__(node_def, g, inputs=None, output_types=None, control_inputs=None, input_types=None, original_op=None, op_def=None) {#Operation.__init__}
+#### tf.Operation.__init__(node_def, g, inputs=None, output_types=None, control_inputs=None, input_types=None, original_op=None, op_def=None) <a class="md-anchor" id="Operation.__init__"></a>
Creates an `Operation`.
@@ -795,7 +796,7 @@ regular expression:
[A-Za-z0-9.][A-Za-z0-9_.\-/]*
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>node_def</b>: graph_pb2.NodeDef. NodeDef for the Operation.
@@ -819,7 +820,7 @@ regular expression:
* <b>op_def</b>: Optional. The op_def_pb2.OpDef proto that describes the
op type that this Operation represents.
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>TypeError</b>: if control inputs are not Operations or Tensors,
@@ -832,11 +833,11 @@ regular expression:
- - -
-#### tf.Operation.node_def {#Operation.node_def}
+#### tf.Operation.node_def <a class="md-anchor" id="Operation.node_def"></a>
Returns a serialized `NodeDef` representation of this operation.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A
[`NodeDef`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/framework/graph.proto)
@@ -844,11 +845,11 @@ Returns a serialized `NodeDef` representation of this operation.
- - -
-#### tf.Operation.op_def {#Operation.op_def}
+#### tf.Operation.op_def <a class="md-anchor" id="Operation.op_def"></a>
Returns the `OpDef` proto that represents the type of this op.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
An
[`OpDef`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/framework/op_def.proto)
@@ -856,7 +857,7 @@ Returns the `OpDef` proto that represents the type of this op.
- - -
-#### tf.Operation.values() {#Operation.values}
+#### tf.Operation.values() <a class="md-anchor" id="Operation.values"></a>
DEPRECATED: Use outputs.
@@ -864,7 +865,7 @@ DEPRECATED: Use outputs.
- - -
-### class tf.Tensor <div class="md-anchor" id="Tensor">{#Tensor}</div>
+### class tf.Tensor <a class="md-anchor" id="Tensor"></a>
Represents a value produced by an `Operation`.
@@ -905,41 +906,41 @@ result = sess.run(e)
- - -
-#### tf.Tensor.dtype {#Tensor.dtype}
+#### tf.Tensor.dtype <a class="md-anchor" id="Tensor.dtype"></a>
The `DType` of elements in this tensor.
- - -
-#### tf.Tensor.name {#Tensor.name}
+#### tf.Tensor.name <a class="md-anchor" id="Tensor.name"></a>
The string name of this tensor.
- - -
-#### tf.Tensor.value_index {#Tensor.value_index}
+#### tf.Tensor.value_index <a class="md-anchor" id="Tensor.value_index"></a>
The index of this tensor in the outputs of its `Operation`.
- - -
-#### tf.Tensor.graph {#Tensor.graph}
+#### tf.Tensor.graph <a class="md-anchor" id="Tensor.graph"></a>
The `Graph` that contains this tensor.
- - -
-#### tf.Tensor.op {#Tensor.op}
+#### tf.Tensor.op <a class="md-anchor" id="Tensor.op"></a>
The `Operation` that produces this tensor as an output.
- - -
-#### tf.Tensor.consumers() {#Tensor.consumers}
+#### tf.Tensor.consumers() <a class="md-anchor" id="Tensor.consumers"></a>
Returns a list of `Operation`s that consume this tensor.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A list of `Operation`s.
@@ -947,7 +948,7 @@ Returns a list of `Operation`s that consume this tensor.
- - -
-#### tf.Tensor.eval(feed_dict=None, session=None) {#Tensor.eval}
+#### tf.Tensor.eval(feed_dict=None, session=None) <a class="md-anchor" id="Tensor.eval"></a>
Evaluates this tensor in a `Session`.
@@ -959,7 +960,7 @@ tensor.
launched in a session, and either a default session must be
available, or `session` must be specified explicitly.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>feed_dict</b>: A dictionary that maps `Tensor` objects to feed values.
@@ -968,7 +969,7 @@ available, or `session` must be specified explicitly.
* <b>session</b>: (Optional.) The `Session` to be used to evaluate this tensor. If
none, the default session will be used.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A numpy array corresponding to the value of this tensor.
@@ -976,7 +977,7 @@ available, or `session` must be specified explicitly.
- - -
-#### tf.Tensor.get_shape() {#Tensor.get_shape}
+#### tf.Tensor.get_shape() <a class="md-anchor" id="Tensor.get_shape"></a>
Returns the `TensorShape` that represents the shape of this tensor.
@@ -1016,14 +1017,14 @@ the caller has additional information about the values of these
dimensions, `Tensor.set_shape()` can be used to augment the
inferred shape.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `TensorShape` representing the shape of this tensor.
- - -
-#### tf.Tensor.set_shape(shape) {#Tensor.set_shape}
+#### tf.Tensor.set_shape(shape) <a class="md-anchor" id="Tensor.set_shape"></a>
Updates the shape of this tensor.
@@ -1048,12 +1049,12 @@ print image.get_shape()
==> TensorShape([Dimension(28), Dimension(28), Dimension(3)])
```
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>shape</b>: A `TensorShape` representing the shape of this tensor.
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>ValueError</b>: If `shape` is not compatible with the current shape of
@@ -1061,14 +1062,14 @@ print image.get_shape()
-#### Other Methods
+#### Other Methods <a class="md-anchor" id="AUTOGENERATED-other-methods"></a>
- - -
-#### tf.Tensor.__init__(op, value_index, dtype) {#Tensor.__init__}
+#### tf.Tensor.__init__(op, value_index, dtype) <a class="md-anchor" id="Tensor.__init__"></a>
Creates a new `Tensor`.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>op</b>: An `Operation`. `Operation` that computes this tensor.
@@ -1076,7 +1077,7 @@ Creates a new `Tensor`.
this tensor.
* <b>dtype</b>: A `types.DType`. Type of data stored in this tensor.
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>TypeError</b>: If the op is not an `Operation`.
@@ -1084,17 +1085,17 @@ Creates a new `Tensor`.
- - -
-#### tf.Tensor.device {#Tensor.device}
+#### tf.Tensor.device <a class="md-anchor" id="Tensor.device"></a>
The name of the device on which this tensor will be produced, or None.
-## Tensor types <div class="md-anchor" id="AUTOGENERATED-tensor-types">{#AUTOGENERATED-tensor-types}</div>
+## Tensor types <a class="md-anchor" id="AUTOGENERATED-tensor-types"></a>
- - -
-### class tf.DType <div class="md-anchor" id="DType">{#DType}</div>
+### class tf.DType <a class="md-anchor" id="DType"></a>
Represents the type of the elements in a `Tensor`.
@@ -1126,7 +1127,7 @@ names to a `DType` object.
- - -
-#### tf.DType.is_compatible_with(other) {#DType.is_compatible_with}
+#### tf.DType.is_compatible_with(other) <a class="md-anchor" id="DType.is_compatible_with"></a>
Returns True if the `other` DType will be converted to this DType.
@@ -1139,12 +1140,12 @@ DType(T).as_ref.is_compatible_with(DType(T)) == False
DType(T).as_ref.is_compatible_with(DType(T).as_ref) == True
```
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>other</b>: A `DType` (or object that may be converted to a `DType`).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
True if a Tensor of the `other` `DType` will be implicitly converted to
this `DType`.
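The asymmetric ref/non-ref rules above can be captured in a small pure-Python sketch. This is an illustration of the compatibility rule only, not TensorFlow's `DType` class:

```python
class DType:
    def __init__(self, name, is_ref=False):
        self.name = name
        self.is_ref_dtype = is_ref

    @property
    def as_ref(self):
        return DType(self.name, is_ref=True)

    @property
    def base_dtype(self):
        return DType(self.name, is_ref=False)

    def is_compatible_with(self, other):
        # A ref dtype only accepts another ref dtype of the same base type;
        # a non-ref dtype accepts both ref and non-ref of the same base type.
        if self.is_ref_dtype and not other.is_ref_dtype:
            return False
        return self.name == other.name

float32 = DType("float32")
assert float32.is_compatible_with(float32.as_ref)
assert not float32.as_ref.is_compatible_with(float32)
```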
@@ -1152,58 +1153,58 @@ DType(T).as_ref.is_compatible_with(DType(T).as_ref) == True
- - -
-#### tf.DType.name {#DType.name}
+#### tf.DType.name <a class="md-anchor" id="DType.name"></a>
Returns the string name for this `DType`.
- - -
-#### tf.DType.base_dtype {#DType.base_dtype}
+#### tf.DType.base_dtype <a class="md-anchor" id="DType.base_dtype"></a>
Returns a non-reference `DType` based on this `DType`.
- - -
-#### tf.DType.is_ref_dtype {#DType.is_ref_dtype}
+#### tf.DType.is_ref_dtype <a class="md-anchor" id="DType.is_ref_dtype"></a>
Returns `True` if this `DType` represents a reference type.
- - -
-#### tf.DType.as_ref {#DType.as_ref}
+#### tf.DType.as_ref <a class="md-anchor" id="DType.as_ref"></a>
Returns a reference `DType` based on this `DType`.
- - -
-#### tf.DType.is_integer {#DType.is_integer}
+#### tf.DType.is_integer <a class="md-anchor" id="DType.is_integer"></a>
Returns whether this is a (non-quantized) integer type.
- - -
-#### tf.DType.is_quantized {#DType.is_quantized}
+#### tf.DType.is_quantized <a class="md-anchor" id="DType.is_quantized"></a>
Returns whether this is a quantized data type.
- - -
-#### tf.DType.as_numpy_dtype {#DType.as_numpy_dtype}
+#### tf.DType.as_numpy_dtype <a class="md-anchor" id="DType.as_numpy_dtype"></a>
Returns a `numpy.dtype` based on this `DType`.
- - -
-#### tf.DType.as_datatype_enum {#DType.as_datatype_enum}
+#### tf.DType.as_datatype_enum <a class="md-anchor" id="DType.as_datatype_enum"></a>
Returns a `types_pb2.DataType` enum value based on this `DType`.
-#### Other Methods
+#### Other Methods <a class="md-anchor" id="AUTOGENERATED-other-methods"></a>
- - -
-#### tf.DType.__init__(type_enum) {#DType.__init__}
+#### tf.DType.__init__(type_enum) <a class="md-anchor" id="DType.__init__"></a>
Creates a new `DataType`.
@@ -1211,12 +1212,12 @@ NOTE(mrry): In normal circumstances, you should not need to
construct a DataType object directly. Instead, use the
types.as_dtype() function.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>type_enum</b>: A `types_pb2.DataType` enum value.
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>TypeError</b>: If `type_enum` is not a valid `types_pb2.DataType` value.
@@ -1224,22 +1225,22 @@ types.as_dtype() function.
- - -
-#### tf.DType.max {#DType.max}
+#### tf.DType.max <a class="md-anchor" id="DType.max"></a>
Returns the maximum representable value in this data type.
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>TypeError</b>: if this is a non-numeric, unordered, or quantized type.
- - -
-#### tf.DType.min {#DType.min}
+#### tf.DType.min <a class="md-anchor" id="DType.min"></a>
Returns the minimum representable value in this data type.
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>TypeError</b>: if this is a non-numeric, unordered, or quantized type.
@@ -1247,11 +1248,11 @@ Returns the minimum representable value in this data type.
- - -
-### tf.as_dtype(type_value) <div class="md-anchor" id="as_dtype">{#as_dtype}</div>
+### tf.as_dtype(type_value) <a class="md-anchor" id="as_dtype"></a>
Converts the given `type_value` to a `DType`.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>type_value</b>: A value that can be converted to a `tf.DType`
@@ -1259,34 +1260,34 @@ Converts the given `type_value` to a `DType`.
[`DataType` enum](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/framework/types.proto),
a string type name, or a `numpy.dtype`.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `DType` corresponding to `type_value`.
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>TypeError</b>: If `type_value` cannot be converted to a `DType`.
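Conceptually, `tf.as_dtype()` is a lookup over several kinds of keys (names, Python types, enum values). A minimal sketch of that dispatch, with hypothetical tables standing in for the real registries:

```python
# Hypothetical lookup tables; the real function also accepts DType objects,
# numpy dtypes, and DataType enum values.
_NAME_TO_DTYPE = {"float32": "DT_FLOAT", "int32": "DT_INT32", "string": "DT_STRING"}
_PYTHON_TO_NAME = {float: "float32", int: "int32", str: "string"}

def as_dtype(type_value):
    # String names resolve directly.
    if isinstance(type_value, str):
        if type_value in _NAME_TO_DTYPE:
            return type_value
        raise TypeError("Cannot convert %r to a DType" % (type_value,))
    # Python types resolve through a type table.
    try:
        return _PYTHON_TO_NAME[type_value]
    except (KeyError, TypeError):
        raise TypeError("Cannot convert %r to a DType" % (type_value,))
```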
-## Utility functions <div class="md-anchor" id="AUTOGENERATED-utility-functions">{#AUTOGENERATED-utility-functions}</div>
+## Utility functions <a class="md-anchor" id="AUTOGENERATED-utility-functions"></a>
- - -
-### tf.device(dev) <div class="md-anchor" id="device">{#device}</div>
+### tf.device(dev) <a class="md-anchor" id="device"></a>
Wrapper for `Graph.device()` using the default graph.
See [`Graph.device()`](framework.md#Graph.device) for more details.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>device_name_or_function</b>: The device name or function to use in
the context.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A context manager that specifies the default device to use for newly
created ops.
@@ -1294,18 +1295,18 @@ See [`Graph.name_scope()`](framework.md#Graph.name_scope) for more details.
- - -
-### tf.name_scope(name) <div class="md-anchor" id="name_scope">{#name_scope}</div>
+### tf.name_scope(name) <a class="md-anchor" id="name_scope"></a>
Wrapper for `Graph.name_scope()` using the default graph.
See [`Graph.name_scope()`](framework.md#Graph.name_scope) for more details.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>name</b>: A name for the scope.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A context manager that installs `name` as a new name scope in the
default graph.
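Name scopes nest by pushing onto a per-graph prefix. A minimal pure-Python sketch of that behavior (not TensorFlow's implementation):

```python
from contextlib import contextmanager

class Graph:
    def __init__(self):
        self._name_stack = ""

    @contextmanager
    def name_scope(self, name):
        old = self._name_stack
        # Nested scopes join with "/" to form hierarchical op names.
        self._name_stack = (old + "/" + name) if old else name
        try:
            yield self._name_stack + "/"
        finally:
            self._name_stack = old

g = Graph()
with g.name_scope("layer1") as outer:
    with g.name_scope("weights") as inner:
        pass
# outer == "layer1/", inner == "layer1/weights/"
```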
@@ -1313,21 +1314,21 @@ See [`Graph.name_scope()`](framework.md#Graph.name_scope) for more details.
- - -
-### tf.control_dependencies(control_inputs) <div class="md-anchor" id="control_dependencies">{#control_dependencies}</div>
+### tf.control_dependencies(control_inputs) <a class="md-anchor" id="control_dependencies"></a>
Wrapper for `Graph.control_dependencies()` using the default graph.
See [`Graph.control_dependencies()`](framework.md#Graph.control_dependencies)
for more details.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>control_inputs</b>: A list of `Operation` or `Tensor` objects, which
must be executed or computed before running the operations
defined in the context.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A context manager that specifies control dependencies for all
operations constructed within the context.
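Control dependencies accumulate across nested contexts and are attached to every op created inside them. A minimal sketch of that bookkeeping, with a hypothetical `create_op` that returns a plain dict:

```python
from contextlib import contextmanager

class Graph:
    def __init__(self):
        self._control_stack = []

    @contextmanager
    def control_dependencies(self, control_inputs):
        self._control_stack.append(list(control_inputs))
        try:
            yield
        finally:
            self._control_stack.pop()

    def create_op(self, name):
        # Ops created inside the context pick up the pending control
        # inputs from all enclosing contexts.
        deps = [op for group in self._control_stack for op in group]
        return {"name": name, "control_inputs": deps}

g = Graph()
a = g.create_op("a")
with g.control_dependencies([a["name"]]):
    b = g.create_op("b")
```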
@@ -1335,7 +1336,7 @@ for more details.
- - -
-### tf.convert_to_tensor(value, dtype=None, name=None) <div class="md-anchor" id="convert_to_tensor">{#convert_to_tensor}</div>
+### tf.convert_to_tensor(value, dtype=None, name=None) <a class="md-anchor" id="convert_to_tensor"></a>
Converts the given `value` to a `Tensor`.
@@ -1363,7 +1364,7 @@ constructors apply this function to each of their Tensor-valued
inputs, which allows those ops to accept numpy arrays, Python lists,
and scalars in addition to `Tensor` objects.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>value</b>: An object whose type has a registered `Tensor` conversion function.
@@ -1371,11 +1372,11 @@ and scalars in addition to `Tensor` objects.
type is inferred from the type of `value`.
* <b>name</b>: Optional name to use if a new `Tensor` is created.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor` based on `value`.
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>TypeError</b>: If no conversion function is registered for `value`.
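The conversion-function registry described above can be sketched in a few lines of pure Python. The registration helper and the dict "tensor" here are hypothetical stand-ins, not TensorFlow's API:

```python
_conversion_registry = []

def register_tensor_conversion_function(base_type, fn):
    _conversion_registry.append((base_type, fn))

def convert_to_tensor(value, dtype=None, name=None):
    # Try each registered conversion function in order.
    for base_type, fn in _conversion_registry:
        if isinstance(value, base_type):
            return fn(value, dtype=dtype, name=name)
    raise TypeError("No conversion function registered for %r" % (value,))

# Register a conversion for plain Python lists and numbers.
register_tensor_conversion_function(
    (list, int, float),
    lambda v, dtype=None, name=None: {"value": v, "dtype": dtype, "name": name})

t = convert_to_tensor([1.0, 2.0], dtype="float32")
```

This is why op constructors can accept lists and scalars in addition to `Tensor` objects: they funnel every input through the same registry.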
@@ -1384,7 +1385,7 @@ and scalars in addition to `Tensor` objects.
- - -
-### tf.get_default_graph() <div class="md-anchor" id="get_default_graph">{#get_default_graph}</div>
+### tf.get_default_graph() <a class="md-anchor" id="get_default_graph"></a>
Returns the default graph for the current thread.
@@ -1397,14 +1398,14 @@ create a new thread, and wish to use the default graph in that
thread, you must explicitly add a `with g.as_default():` in that
thread's function.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
The default `Graph` being used in the current thread.
- - -
-### tf.import_graph_def(graph_def, input_map=None, return_elements=None, name=None, op_dict=None) <div class="md-anchor" id="import_graph_def">{#import_graph_def}</div>
+### tf.import_graph_def(graph_def, input_map=None, return_elements=None, name=None, op_dict=None) <a class="md-anchor" id="import_graph_def"></a>
Imports the TensorFlow graph in `graph_def` into the Python `Graph`.
@@ -1415,7 +1416,7 @@ protocol buffer, and extract individual objects in the `GraphDef` as
[`Graph.as_graph_def()`](#Graph.as_graph_def) for a way to create a
`GraphDef` proto.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>graph_def</b>: A `GraphDef` proto containing operations to be imported into
@@ -1432,12 +1433,12 @@ protocol buffer, and extract individual objects in the `GraphDef` as
Must contain an `OpDef` proto for each op type named in `graph_def`.
If omitted, uses the `OpDef` protos registered in the global registry.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A list of `Operation` and/or `Tensor` objects from the imported graph,
corresponding to the names in `return_elements`.
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>TypeError</b>: If `graph_def` is not a `GraphDef` proto,
@@ -1449,18 +1450,18 @@ protocol buffer, and extract individual objects in the `GraphDef` as
-## Graph collections <div class="md-anchor" id="AUTOGENERATED-graph-collections">{#AUTOGENERATED-graph-collections}</div>
+## Graph collections <a class="md-anchor" id="AUTOGENERATED-graph-collections"></a>
- - -
-### tf.add_to_collection(name, value) <div class="md-anchor" id="add_to_collection">{#add_to_collection}</div>
+### tf.add_to_collection(name, value) <a class="md-anchor" id="add_to_collection"></a>
Wrapper for `Graph.add_to_collection()` using the default graph.
See [`Graph.add_to_collection()`](framework.md#Graph.add_to_collection)
for more details.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>name</b>: The key for the collection. For example, the `GraphKeys` class
@@ -1470,14 +1471,14 @@ for more details.
- - -
-### tf.get_collection(key, scope=None) <div class="md-anchor" id="get_collection">{#get_collection}</div>
+### tf.get_collection(key, scope=None) <a class="md-anchor" id="get_collection"></a>
Wrapper for `Graph.get_collection()` using the default graph.
See [`Graph.get_collection()`](framework.md#Graph.get_collection)
for more details.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>key</b>: The key for the collection. For example, the `GraphKeys` class
@@ -1485,7 +1486,7 @@ for more details.
* <b>scope</b>: (Optional.) If supplied, the resulting list is filtered to include
only items whose name begins with this string.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
The list of values in the collection with the given `name`, or
an empty list if no value has been added to that collection. The
@@ -1495,7 +1496,7 @@ for more details.
- - -
-### class tf.GraphKeys <div class="md-anchor" id="GraphKeys">{#GraphKeys}</div>
+### class tf.GraphKeys <a class="md-anchor" id="GraphKeys"></a>
Standard names to use for graph collections.
@@ -1523,11 +1524,11 @@ The following standard keys are defined:
[`tf.start_queue_runners()`](train.md#start_queue_runners) for more details.
-## Defining new operations <div class="md-anchor" id="AUTOGENERATED-defining-new-operations">{#AUTOGENERATED-defining-new-operations}</div>
+## Defining new operations <a class="md-anchor" id="AUTOGENERATED-defining-new-operations"></a>
- - -
-### class tf.RegisterGradient <div class="md-anchor" id="RegisterGradient">{#RegisterGradient}</div>
+### class tf.RegisterGradient <a class="md-anchor" id="RegisterGradient"></a>
A decorator for registering the gradient function for an op type.
@@ -1554,11 +1555,11 @@ that defines the operation.
- - -
-#### tf.RegisterGradient.__init__(op_type) {#RegisterGradient.__init__}
+#### tf.RegisterGradient.__init__(op_type) <a class="md-anchor" id="RegisterGradient.__init__"></a>
Creates a new decorator with `op_type` as the Operation type.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>op_type</b>: The string type of an operation. This corresponds to the
@@ -1568,7 +1569,7 @@ Creates a new decorator with `op_type` as the Operation type.
- - -
-### tf.NoGradient(op_type) <div class="md-anchor" id="NoGradient">{#NoGradient}</div>
+### tf.NoGradient(op_type) <a class="md-anchor" id="NoGradient"></a>
Specifies that ops of type `op_type` do not have a defined gradient.
@@ -1580,13 +1581,13 @@ example:
tf.NoGradient("Size")
```
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>op_type</b>: The string type of an operation. This corresponds to the
`OpDef.name` field for the proto that defines the operation.
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>TypeError</b>: If `op_type` is not a string.
@@ -1594,7 +1595,7 @@ tf.NoGradient("Size")
- - -
-### class tf.RegisterShape <div class="md-anchor" id="RegisterShape">{#RegisterShape}</div>
+### class tf.RegisterShape <a class="md-anchor" id="RegisterShape"></a>
A decorator for registering the shape function for an op type.
@@ -1618,7 +1619,7 @@ operation. This corresponds to the `OpDef.name` field for the proto
that defines the operation.
- - -
-#### tf.RegisterShape.__init__(op_type) {#RegisterShape.__init__}
+#### tf.RegisterShape.__init__(op_type) <a class="md-anchor" id="RegisterShape.__init__"></a>
Saves `op_type` as the Operation type.
@@ -1626,7 +1627,7 @@ Saves the "op_type" as the Operation type.
- - -
-### class tf.TensorShape <div class="md-anchor" id="TensorShape">{#TensorShape}</div>
+### class tf.TensorShape <a class="md-anchor" id="TensorShape"></a>
Represents the shape of a `Tensor`.
@@ -1649,24 +1650,24 @@ explicitly using [`Tensor.set_shape()`](framework.md#Tensor.set_shape).
- - -
-#### tf.TensorShape.merge_with(other) {#TensorShape.merge_with}
+#### tf.TensorShape.merge_with(other) <a class="md-anchor" id="TensorShape.merge_with"></a>
Returns a `TensorShape` combining the information in `self` and `other`.
The dimensions in `self` and `other` are merged elementwise,
according to the rules defined for `Dimension.merge_with()`.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>other</b>: Another `TensorShape`.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `TensorShape` containing the combined information of `self` and
`other`.
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>ValueError</b>: If `self` and `other` are not compatible.
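The elementwise merge rule can be sketched with lists, where `None` stands for an unknown dimension. This is a simplified illustration for shapes of known rank, not the `TensorShape` implementation:

```python
def merge_dim(a, b):
    # None means an unknown dimension; known sizes must agree.
    if a is None:
        return b
    if b is None or a == b:
        return a
    raise ValueError("Dimensions %r and %r are not compatible" % (a, b))

def merge_with(self_dims, other_dims):
    # Elementwise merge, mirroring TensorShape.merge_with for known ranks.
    if len(self_dims) != len(other_dims):
        raise ValueError("Shapes have different ranks")
    return [merge_dim(a, b) for a, b in zip(self_dims, other_dims)]

merged = merge_with([None, 28, 3], [28, None, 3])  # -> [28, 28, 3]
```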
@@ -1674,7 +1675,7 @@ according to the rules defined for `Dimension.merge_with()`.
- - -
-#### tf.TensorShape.concatenate(other) {#TensorShape.concatenate}
+#### tf.TensorShape.concatenate(other) <a class="md-anchor" id="TensorShape.concatenate"></a>
Returns the concatenation of the dimensions in `self` and `other`.
@@ -1683,12 +1684,12 @@ concatenation will discard information about the other shape. In
future, we might support concatenation that preserves this
information for use with slicing.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>other</b>: Another `TensorShape`.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `TensorShape` whose dimensions are the concatenation of the
dimensions in `self` and `other`.
@@ -1697,26 +1698,26 @@ information for use with slicing.
- - -
-#### tf.TensorShape.ndims {#TensorShape.ndims}
+#### tf.TensorShape.ndims <a class="md-anchor" id="TensorShape.ndims"></a>
Returns the rank of this shape, or None if it is unspecified.
- - -
-#### tf.TensorShape.dims {#TensorShape.dims}
+#### tf.TensorShape.dims <a class="md-anchor" id="TensorShape.dims"></a>
Returns a list of Dimensions, or None if the shape is unspecified.
- - -
-#### tf.TensorShape.as_list() {#TensorShape.as_list}
+#### tf.TensorShape.as_list() <a class="md-anchor" id="TensorShape.as_list"></a>
Returns a list of integers or None for each dimension.
- - -
-#### tf.TensorShape.is_compatible_with(other) {#TensorShape.is_compatible_with}
+#### tf.TensorShape.is_compatible_with(other) <a class="md-anchor" id="TensorShape.is_compatible_with"></a>
Returns True iff `self` is compatible with `other`.
@@ -1748,19 +1749,19 @@ TensorShape(None), and TensorShape(None) is compatible with
TensorShape([4, 4]), but TensorShape([32, 784]) is not compatible with
TensorShape([4, 4]).
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>other</b>: Another TensorShape.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
True iff `self` is compatible with `other`.
- - -
-#### tf.TensorShape.is_fully_defined() {#TensorShape.is_fully_defined}
+#### tf.TensorShape.is_fully_defined() <a class="md-anchor" id="TensorShape.is_fully_defined"></a>
Returns True iff `self` is fully defined in every dimension.
@@ -1768,23 +1769,23 @@ Returns True iff `self` is fully defined in every dimension.
- - -
-#### tf.TensorShape.with_rank(rank) {#TensorShape.with_rank}
+#### tf.TensorShape.with_rank(rank) <a class="md-anchor" id="TensorShape.with_rank"></a>
Returns a shape based on `self` with the given rank.
This method promotes a completely unknown shape to one with a
known rank.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>rank</b>: An integer.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A shape that is at least as specific as `self` with the given rank.
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>ValueError</b>: If `self` does not represent a shape with the given `rank`.
@@ -1792,21 +1793,21 @@ known rank.
- - -
-#### tf.TensorShape.with_rank_at_least(rank) {#TensorShape.with_rank_at_least}
+#### tf.TensorShape.with_rank_at_least(rank) <a class="md-anchor" id="TensorShape.with_rank_at_least"></a>
Returns a shape based on `self` with at least the given rank.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>rank</b>: An integer.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A shape that is at least as specific as `self` with at least the given
rank.
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>ValueError</b>: If `self` does not represent a shape with at least the given
@@ -1815,21 +1816,21 @@ Returns a shape based on `self` with at least the given rank.
- - -
-#### tf.TensorShape.with_rank_at_most(rank) {#TensorShape.with_rank_at_most}
+#### tf.TensorShape.with_rank_at_most(rank) <a class="md-anchor" id="TensorShape.with_rank_at_most"></a>
Returns a shape based on `self` with at most the given rank.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>rank</b>: An integer.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A shape that is at least as specific as `self` with at most the given
rank.
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>ValueError</b>: If `self` does not represent a shape with at most the given
@@ -1839,16 +1840,16 @@ Returns a shape based on `self` with at most the given rank.
- - -
-#### tf.TensorShape.assert_has_rank(rank) {#TensorShape.assert_has_rank}
+#### tf.TensorShape.assert_has_rank(rank) <a class="md-anchor" id="TensorShape.assert_has_rank"></a>
Raises an exception if `self` is not compatible with the given `rank`.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>rank</b>: An integer.
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>ValueError</b>: If `self` does not represent a shape with the given `rank`.
@@ -1856,16 +1857,16 @@ Raises an exception if `self` is not compatible with the given `rank`.
- - -
-#### tf.TensorShape.assert_same_rank(other) {#TensorShape.assert_same_rank}
+#### tf.TensorShape.assert_same_rank(other) <a class="md-anchor" id="TensorShape.assert_same_rank"></a>
Raises an exception if `self` and `other` do not have compatible ranks.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>other</b>: Another `TensorShape`.
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>ValueError</b>: If `self` and `other` do not represent shapes with the
@@ -1874,19 +1875,19 @@ Raises an exception if `self` and `other` do not have compatible ranks.
- - -
-#### tf.TensorShape.assert_is_compatible_with(other) {#TensorShape.assert_is_compatible_with}
+#### tf.TensorShape.assert_is_compatible_with(other) <a class="md-anchor" id="TensorShape.assert_is_compatible_with"></a>
Raises exception if `self` and `other` do not represent the same shape.
This method can be used to assert that there exists a shape that both
`self` and `other` represent.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>other</b>: Another TensorShape.
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>ValueError</b>: If `self` and `other` do not represent the same shape.
@@ -1894,25 +1895,25 @@ This method can be used to assert that there exists a shape that both
- - -
-#### tf.TensorShape.assert_is_fully_defined() {#TensorShape.assert_is_fully_defined}
+#### tf.TensorShape.assert_is_fully_defined() <a class="md-anchor" id="TensorShape.assert_is_fully_defined"></a>
Raises an exception if `self` is not fully defined in every dimension.
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>ValueError</b>: If `self` does not have a known value for every dimension.
-#### Other Methods
+#### Other Methods <a class="md-anchor" id="AUTOGENERATED-other-methods"></a>
- - -
-#### tf.TensorShape.__init__(dims) {#TensorShape.__init__}
+#### tf.TensorShape.__init__(dims) <a class="md-anchor" id="TensorShape.__init__"></a>
Creates a new TensorShape with the given dimensions.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>dims</b>: A list of Dimensions, or None if the shape is unspecified.
@@ -1921,14 +1922,14 @@ Creates a new TensorShape with the given dimensions.
- - -
-#### tf.TensorShape.as_dimension_list() {#TensorShape.as_dimension_list}
+#### tf.TensorShape.as_dimension_list() <a class="md-anchor" id="TensorShape.as_dimension_list"></a>
DEPRECATED: use as_list().
- - -
-#### tf.TensorShape.num_elements() {#TensorShape.num_elements}
+#### tf.TensorShape.num_elements() <a class="md-anchor" id="TensorShape.num_elements"></a>
Returns the total number of elements, or `None` for incomplete shapes.
@@ -1936,28 +1937,28 @@ Returns the total number of elements, or none for incomplete shapes.
- - -
-### class tf.Dimension <div class="md-anchor" id="Dimension">{#Dimension}</div>
+### class tf.Dimension <a class="md-anchor" id="Dimension"></a>
Represents the value of one dimension in a TensorShape.
- - -
-#### tf.Dimension.__init__(value) {#Dimension.__init__}
+#### tf.Dimension.__init__(value) <a class="md-anchor" id="Dimension.__init__"></a>
Creates a new Dimension with the given value.
- - -
-#### tf.Dimension.assert_is_compatible_with(other) {#Dimension.assert_is_compatible_with}
+#### tf.Dimension.assert_is_compatible_with(other) <a class="md-anchor" id="Dimension.assert_is_compatible_with"></a>
Raises an exception if `other` is not compatible with this Dimension.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>other</b>: Another Dimension.
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>ValueError</b>: If `self` and `other` are not compatible (see
@@ -1966,26 +1967,26 @@ Raises an exception if `other` is not compatible with this Dimension.
- - -
-#### tf.Dimension.is_compatible_with(other) {#Dimension.is_compatible_with}
+#### tf.Dimension.is_compatible_with(other) <a class="md-anchor" id="Dimension.is_compatible_with"></a>
Returns true if `other` is compatible with this Dimension.
Two known Dimensions are compatible if they have the same value.
An unknown Dimension is compatible with all other Dimensions.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>other</b>: Another Dimension.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
True if this Dimension and `other` are compatible.
- - -
-#### tf.Dimension.merge_with(other) {#Dimension.merge_with}
+#### tf.Dimension.merge_with(other) <a class="md-anchor" id="Dimension.merge_with"></a>
Returns a Dimension that combines the information in `self` and `other`.
@@ -1997,17 +1998,17 @@ Dimensions are combined as follows:
Dimension(None).merge_with(Dimension(None)) == Dimension(None)
Dimension(n) .merge_with(Dimension(m)) raises ValueError for n != m
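The merge rules above can be sketched in plain Python. This is a hypothetical `merge_dims` helper illustrating the documented semantics, not the actual implementation; `None` stands in for an unknown `Dimension`:

```python
def merge_dims(a, b):
    """Combine two dimension values per the TensorShape merge rules."""
    # An unknown dimension (None) defers to the other operand.
    if a is None:
        return b
    if b is None:
        return a
    # Two known dimensions must agree, otherwise the merge fails.
    if a != b:
        raise ValueError("incompatible dimensions: %r vs %r" % (a, b))
    return a
```

For example, `merge_dims(None, 3)` yields `3`, while `merge_dims(2, 3)` raises `ValueError`, mirroring the `Dimension(n).merge_with(Dimension(m))` cases listed above.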
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>other</b>: Another Dimension.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A Dimension containing the combined information of `self` and
`other`.
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>ValueError</b>: If `self` and `other` are not compatible (see
@@ -2016,14 +2017,14 @@ Dimensions are combined as follows:
- - -
-#### tf.Dimension.value {#Dimension.value}
+#### tf.Dimension.value <a class="md-anchor" id="Dimension.value"></a>
The value of this dimension, or None if it is unknown.
- - -
-### tf.op_scope(values, name, default_name) <div class="md-anchor" id="op_scope">{#op_scope}</div>
+### tf.op_scope(values, name, default_name) <a class="md-anchor" id="op_scope"></a>
Returns a context manager for use when defining a Python op.
@@ -2043,21 +2044,21 @@ def my_op(a, b, c, name=None):
return foo_op(..., name=scope)
```
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>values</b>: The list of `Tensor` arguments that are passed to the op function.
* <b>name</b>: The name argument that is passed to the op function.
* <b>default_name</b>: The default name to use if the `name` argument is `None`.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A context manager for use in defining a Python op.
- - -
-### tf.get_seed(op_seed) <div class="md-anchor" id="get_seed">{#get_seed}</div>
+### tf.get_seed(op_seed) <a class="md-anchor" id="get_seed"></a>
Returns the local seeds an operation should use given an op-specific seed.
@@ -2069,12 +2070,12 @@ graph, or for only specific operations.
For details on how the graph-level seed interacts with op seeds, see
[`set_random_seed`](constant_op.md#set_random_seed).
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>op_seed</b>: integer.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A tuple of two integers that should be used for the local seed of this
operation.
diff --git a/tensorflow/g3doc/api_docs/python/image.md b/tensorflow/g3doc/api_docs/python/image.md
index 0baa28ffae..735ceaf0dd 100644
--- a/tensorflow/g3doc/api_docs/python/image.md
+++ b/tensorflow/g3doc/api_docs/python/image.md
@@ -1,12 +1,13 @@
<!-- This file is machine generated: DO NOT EDIT! -->
-# Images
+# Images <a class="md-anchor" id="AUTOGENERATED-images"></a>
Note: Functions taking `Tensor` arguments can also take anything
accepted by [`tf.convert_to_tensor`](framework.md#convert_to_tensor).
<!-- TOC-BEGIN This section is generated by neural network: DO NOT EDIT! -->
## Contents
+### [Images](#AUTOGENERATED-images)
* [Encoding and Decoding](#AUTOGENERATED-encoding-and-decoding)
* [tf.image.decode_jpeg(contents, channels=None, ratio=None, fancy_upscaling=None, try_recover_truncated=None, acceptable_fraction=None, name=None)](#decode_jpeg)
* [tf.image.encode_jpeg(image, format=None, quality=None, progressive=None, optimize_size=None, chroma_downsampling=None, density_unit=None, x_density=None, y_density=None, xmp_metadata=None, name=None)](#encode_jpeg)
@@ -40,7 +41,7 @@ accepted by [`tf.convert_to_tensor`](framework.md#convert_to_tensor).
<!-- TOC-END This section was generated by neural network, THANKS FOR READING! -->
-## Encoding and Decoding <div class="md-anchor" id="AUTOGENERATED-encoding-and-decoding">{#AUTOGENERATED-encoding-and-decoding}</div>
+## Encoding and Decoding <a class="md-anchor" id="AUTOGENERATED-encoding-and-decoding"></a>
TensorFlow provides Ops to decode and encode JPEG and PNG formats. Encoded
images are represented by scalar string Tensors, decoded images by 3-D uint8
@@ -55,7 +56,7 @@ presently only support RGB, HSV, and GrayScale.
- - -
-### tf.image.decode_jpeg(contents, channels=None, ratio=None, fancy_upscaling=None, try_recover_truncated=None, acceptable_fraction=None, name=None) <div class="md-anchor" id="decode_jpeg">{#decode_jpeg}</div>
+### tf.image.decode_jpeg(contents, channels=None, ratio=None, fancy_upscaling=None, try_recover_truncated=None, acceptable_fraction=None, name=None) <a class="md-anchor" id="decode_jpeg"></a>
Decode a JPEG-encoded image to a uint8 tensor.
@@ -75,7 +76,7 @@ The attr `ratio` allows downscaling the image by an integer factor during
decoding. Allowed values are: 1, 2, 4, and 8. This is much faster than
downscaling the image later.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>contents</b>: A `Tensor` of type `string`. 0-D. The JPEG-encoded image.
@@ -92,14 +93,14 @@ downscaling the image later.
input is accepted.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
  A `Tensor` of type `uint8`. 3-D with shape `[height, width, channels]`.
- - -
-### tf.image.encode_jpeg(image, format=None, quality=None, progressive=None, optimize_size=None, chroma_downsampling=None, density_unit=None, x_density=None, y_density=None, xmp_metadata=None, name=None) <div class="md-anchor" id="encode_jpeg">{#encode_jpeg}</div>
+### tf.image.encode_jpeg(image, format=None, quality=None, progressive=None, optimize_size=None, chroma_downsampling=None, density_unit=None, x_density=None, y_density=None, xmp_metadata=None, name=None) <a class="md-anchor" id="encode_jpeg"></a>
JPEG-encode an image.
@@ -120,7 +121,7 @@ in function of the number of channels in `image`:
* 1: Output a grayscale image.
* 3: Output an RGB image.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>image</b>: A `Tensor` of type `uint8`.
@@ -146,7 +147,7 @@ in function of the number of channels in `image`:
If not empty, embed this XMP metadata in the image header.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor` of type `string`. 0-D. JPEG-encoded image.
@@ -154,7 +155,7 @@ in function of the number of channels in `image`:
- - -
-### tf.image.decode_png(contents, channels=None, name=None) <div class="md-anchor" id="decode_png">{#decode_png}</div>
+### tf.image.decode_png(contents, channels=None, name=None) <a class="md-anchor" id="decode_png"></a>
Decode a PNG-encoded image to a uint8 tensor.
@@ -171,7 +172,7 @@ Accepted values are:
If needed, the PNG-encoded image is transformed to match the requested number
of color channels.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>contents</b>: A `Tensor` of type `string`. 0-D. The PNG-encoded image.
@@ -179,14 +180,14 @@ of color channels.
Number of color channels for the decoded image.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor` of type `uint8`. 3-D with shape `[height, width, channels]`.
- - -
-### tf.image.encode_png(image, compression=None, name=None) <div class="md-anchor" id="encode_png">{#encode_png}</div>
+### tf.image.encode_png(image, compression=None, name=None) <a class="md-anchor" id="encode_png"></a>
PNG-encode an image.
@@ -201,7 +202,7 @@ The ZLIB compression level, `compression`, can be -1 for the PNG-encoder
default or a value from 0 to 9. 9 is the highest compression level, generating
the smallest output, but is slower.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>image</b>: A `Tensor` of type `uint8`.
@@ -209,13 +210,13 @@ the smallest output, but is slower.
* <b>compression</b>: An optional `int`. Defaults to `-1`. Compression level.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor` of type `string`. 0-D. PNG-encoded image.
-## Resizing <div class="md-anchor" id="AUTOGENERATED-resizing">{#AUTOGENERATED-resizing}</div>
+## Resizing <a class="md-anchor" id="AUTOGENERATED-resizing"></a>
The resizing Ops accept input images as tensors of several types. They always
output resized images as float32 tensors.
@@ -243,7 +244,7 @@ images from the Queue.</i>
- - -
-### tf.image.resize_images(images, new_height, new_width, method=0) <div class="md-anchor" id="resize_images">{#resize_images}</div>
+### tf.image.resize_images(images, new_height, new_width, method=0) <a class="md-anchor" id="resize_images"></a>
Resize `images` to `new_height`, `new_width` using the specified `method`.
@@ -261,7 +262,7 @@ the same as `new_width`, `new_height`. To avoid distortions see
(https://en.wikipedia.org/wiki/Bicubic_interpolation)
* <b>ResizeMethod.AREA</b>: Area interpolation.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>images</b>: 4-D Tensor of shape `[batch, height, width, channels]` or
@@ -270,14 +271,14 @@ the same as `new_width`, `new_height`. To avoid distortions see
* <b>new_width</b>: integer.
* <b>method</b>: ResizeMethod. Defaults to `ResizeMethod.BILINEAR`.
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>ValueError</b>: if the shape of `images` is incompatible with the
shape arguments to this function
* <b>ValueError</b>: if an unsupported resize method is specified.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
If `images` was 4-D, a 4-D float Tensor of shape
`[batch, new_height, new_width, channels]`.
@@ -288,13 +289,13 @@ the same as `new_width`, `new_height`. To avoid distortions see
- - -
-### tf.image.resize_area(images, size, name=None) <div class="md-anchor" id="resize_area">{#resize_area}</div>
+### tf.image.resize_area(images, size, name=None) <a class="md-anchor" id="resize_area"></a>
Resize `images` to `size` using area interpolation.
Input images can be of different types but output images are always float.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>images</b>: A `Tensor`. Must be one of the following types: `uint8`, `int8`, `int32`, `float32`, `float64`.
@@ -303,7 +304,7 @@ Input images can be of different types but output images are always float.
new size for the images.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor` of type `float32`. 4-D with shape
`[batch, new_height, new_width, channels]`.
@@ -311,13 +312,13 @@ Input images can be of different types but output images are always float.
- - -
-### tf.image.resize_bicubic(images, size, name=None) <div class="md-anchor" id="resize_bicubic">{#resize_bicubic}</div>
+### tf.image.resize_bicubic(images, size, name=None) <a class="md-anchor" id="resize_bicubic"></a>
Resize `images` to `size` using bicubic interpolation.
Input images can be of different types but output images are always float.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>images</b>: A `Tensor`. Must be one of the following types: `uint8`, `int8`, `int32`, `float32`, `float64`.
@@ -326,7 +327,7 @@ Input images can be of different types but output images are always float.
new size for the images.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor` of type `float32`. 4-D with shape
`[batch, new_height, new_width, channels]`.
@@ -334,13 +335,13 @@ Input images can be of different types but output images are always float.
- - -
-### tf.image.resize_bilinear(images, size, name=None) <div class="md-anchor" id="resize_bilinear">{#resize_bilinear}</div>
+### tf.image.resize_bilinear(images, size, name=None) <a class="md-anchor" id="resize_bilinear"></a>
Resize `images` to `size` using bilinear interpolation.
Input images can be of different types but output images are always float.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>images</b>: A `Tensor`. Must be one of the following types: `uint8`, `int8`, `int32`, `float32`, `float64`.
@@ -349,7 +350,7 @@ Input images can be of different types but output images are always float.
new size for the images.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor` of type `float32`. 4-D with shape
`[batch, new_height, new_width, channels]`.
@@ -357,13 +358,13 @@ Input images can be of different types but output images are always float.
- - -
-### tf.image.resize_nearest_neighbor(images, size, name=None) <div class="md-anchor" id="resize_nearest_neighbor">{#resize_nearest_neighbor}</div>
+### tf.image.resize_nearest_neighbor(images, size, name=None) <a class="md-anchor" id="resize_nearest_neighbor"></a>
Resize `images` to `size` using nearest neighbor interpolation.
Input images can be of different types; the output has the same type as the input.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>images</b>: A `Tensor`. Must be one of the following types: `uint8`, `int8`, `int32`, `float32`, `float64`.
@@ -372,7 +373,7 @@ Input images can be of different types but output images are always float.
new size for the images.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor`. Has the same type as `images`. 4-D with shape
`[batch, new_height, new_width, channels]`.
@@ -380,11 +381,11 @@ Input images can be of different types but output images are always float.
-## Cropping <div class="md-anchor" id="AUTOGENERATED-cropping">{#AUTOGENERATED-cropping}</div>
+## Cropping <a class="md-anchor" id="AUTOGENERATED-cropping"></a>
- - -
-### tf.image.resize_image_with_crop_or_pad(image, target_height, target_width) <div class="md-anchor" id="resize_image_with_crop_or_pad">{#resize_image_with_crop_or_pad}</div>
+### tf.image.resize_image_with_crop_or_pad(image, target_height, target_width) <a class="md-anchor" id="resize_image_with_crop_or_pad"></a>
Crops and/or pads an image to a target width and height.
@@ -397,19 +398,19 @@ If `width` or `height` is smaller than the specified `target_width` or
`target_height` respectively, this op centrally pads with 0 along that
dimension.
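The central crop-or-pad rule can be sketched per dimension. This hypothetical `crop_or_pad_1d` helper (not the real op, which works on tensors) returns how much to crop from, or pad before, one dimension:

```python
def crop_or_pad_1d(length, target):
    """Return (crop_offset, pad_before) for centrally fitting `length` into `target`."""
    if target <= 0:
        raise ValueError("target must be positive")
    if length > target:
        # Image is too large along this dimension: crop centrally.
        return (length - target) // 2, 0
    # Image is too small (or equal): pad centrally with zeros.
    return 0, (target - length) // 2
```

So a width of 10 fitted into a target of 4 crops 3 pixels from the left edge, while a width of 4 fitted into 10 pads 3 zero pixels before the image.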
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>image</b>: 3-D tensor of shape [height, width, channels]
* <b>target_height</b>: Target height.
* <b>target_width</b>: Target width.
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>ValueError</b>: if `target_height` or `target_width` are zero or negative.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
Cropped and/or padded image of shape
`[target_height, target_width, channels]`
@@ -418,7 +419,7 @@ dimension.
- - -
-### tf.image.pad_to_bounding_box(image, offset_height, offset_width, target_height, target_width) <div class="md-anchor" id="pad_to_bounding_box">{#pad_to_bounding_box}</div>
+### tf.image.pad_to_bounding_box(image, offset_height, offset_width, target_height, target_width) <a class="md-anchor" id="pad_to_bounding_box"></a>
Pad `image` with zeros to the specified `height` and `width`.
@@ -429,7 +430,7 @@ with zeros until it has dimensions `target_height`, `target_width`.
This op does nothing if `offset_*` is zero and the image already has size
`target_height` by `target_width`.
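The padding behavior can be sketched on a plain 2-D grid (a hypothetical pure-Python stand-in for the op, using nested lists for a single-channel image):

```python
def pad_to_bounding_box(image, offset_height, offset_width,
                        target_height, target_width):
    """Place `image` at the given offsets inside a zero-filled target grid."""
    height, width = len(image), len(image[0])
    if offset_height + height > target_height or offset_width + width > target_width:
        raise ValueError("padded image does not fit in the target shape")
    out = [[0] * target_width for _ in range(target_height)]
    for r in range(height):
        for c in range(width):
            out[offset_height + r][offset_width + c] = image[r][c]
    return out
```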
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>image</b>: 3-D tensor with shape `[height, width, channels]`
@@ -438,11 +439,11 @@ This op does nothing if `offset_*` is zero and the image already has size
* <b>target_height</b>: Height of output image.
* <b>target_width</b>: Width of output image.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
3-D tensor of shape `[target_height, target_width, channels]`
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>ValueError</b>: If the shape of `image` is incompatible with the `offset_*` or
@@ -451,7 +452,7 @@ This op does nothing if `offset_*` is zero and the image already has size
- - -
-### tf.image.crop_to_bounding_box(image, offset_height, offset_width, target_height, target_width) <div class="md-anchor" id="crop_to_bounding_box">{#crop_to_bounding_box}</div>
+### tf.image.crop_to_bounding_box(image, offset_height, offset_width, target_height, target_width) <a class="md-anchor" id="crop_to_bounding_box"></a>
Crops an image to a specified bounding box.
@@ -460,7 +461,7 @@ returned image is at `offset_height, offset_width` in `image`, and its
lower-right corner is at
`offset_height + target_height, offset_width + target_width`.
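The bounding-box crop amounts to simple slicing. A minimal pure-Python sketch on a nested-list image (not the tensor implementation):

```python
def crop_to_bounding_box(image, offset_height, offset_width,
                         target_height, target_width):
    """Slice out the box whose upper-left corner is (offset_height, offset_width)."""
    return [row[offset_width:offset_width + target_width]
            for row in image[offset_height:offset_height + target_height]]
```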
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>image</b>: 3-D tensor with shape `[height, width, channels]`
@@ -471,11 +472,11 @@ lower-right corner is at
* <b>target_height</b>: Height of the result.
* <b>target_width</b>: Width of the result.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
3-D tensor of image with shape `[target_height, target_width, channels]`
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>ValueError</b>: If the shape of `image` is incompatible with the `offset_*` or
@@ -484,14 +485,14 @@ lower-right corner is at
- - -
-### tf.image.random_crop(image, size, seed=None, name=None) <div class="md-anchor" id="random_crop">{#random_crop}</div>
+### tf.image.random_crop(image, size, seed=None, name=None) <a class="md-anchor" id="random_crop"></a>
Randomly crops `image` to size `[target_height, target_width]`.
The offset of the output within `image` is uniformly random. `image` always
fully contains the result.
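The "uniform offset, fully contained" constraint can be sketched as follows; `random_crop_offsets` is a hypothetical helper showing how the valid offset range is derived, not the actual kernel:

```python
import random

def random_crop_offsets(height, width, target_height, target_width, seed=None):
    """Pick a uniformly random upper-left corner such that the crop fits."""
    rng = random.Random(seed)
    # The largest valid offsets keep the crop fully inside the image.
    return (rng.randint(0, height - target_height),
            rng.randint(0, width - target_width))
```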
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>image</b>: 3-D tensor of shape `[height, width, channels]`
@@ -500,14 +501,14 @@ fully contains the result.
See [`set_random_seed`](constant_op.md#set_random_seed) for behavior.
* <b>name</b>: A name for this operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A cropped 3-D tensor of shape `[target_height, target_width, channels]`.
- - -
-### tf.image.extract_glimpse(input, size, offsets, centered=None, normalized=None, uniform_noise=None, name=None) <div class="md-anchor" id="extract_glimpse">{#extract_glimpse}</div>
+### tf.image.extract_glimpse(input, size, offsets, centered=None, normalized=None, uniform_noise=None, name=None) <a class="md-anchor" id="extract_glimpse"></a>
Extracts a glimpse from the input tensor.
@@ -528,7 +529,7 @@ The argument `normalized` and `centered` controls how the windows are built:
lower right corner is located at (1.0, 1.0) and the center is at (0, 0).
* If the coordinates are not normalized they are interpreted as numbers of pixels.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>input</b>: A `Tensor` of type `float32`.
@@ -551,7 +552,7 @@ The argument `normalized` and `centered` controls how the windows are built:
    uniform distribution or a Gaussian distribution.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor` of type `float32`.
A tensor representing the glimpses `[batch_size, glimpse_height,
@@ -559,11 +560,11 @@ The argument `normalized` and `centered` controls how the windows are built:
-## Flipping and Transposing <div class="md-anchor" id="AUTOGENERATED-flipping-and-transposing">{#AUTOGENERATED-flipping-and-transposing}</div>
+## Flipping and Transposing <a class="md-anchor" id="AUTOGENERATED-flipping-and-transposing"></a>
- - -
-### tf.image.flip_up_down(image) <div class="md-anchor" id="flip_up_down">{#flip_up_down}</div>
+### tf.image.flip_up_down(image) <a class="md-anchor" id="flip_up_down"></a>
Flip an image vertically (upside down).
@@ -572,16 +573,16 @@ Outputs the contents of `image` flipped along the first dimension, which is
See also `reverse()`.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>image</b>: A 3-D tensor of shape `[height, width, channels]`.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A 3-D tensor of the same type and shape as `image`.
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>ValueError</b>: if the shape of `image` is not supported.
@@ -589,25 +590,25 @@ See also `reverse()`.
- - -
-### tf.image.random_flip_up_down(image, seed=None) <div class="md-anchor" id="random_flip_up_down">{#random_flip_up_down}</div>
+### tf.image.random_flip_up_down(image, seed=None) <a class="md-anchor" id="random_flip_up_down"></a>
Randomly flips an image vertically (upside down).
With a 1 in 2 chance, outputs the contents of `image` flipped along the first
dimension, which is `height`. Otherwise, outputs the image as-is.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>image</b>: A 3-D tensor of shape `[height, width, channels]`.
* <b>seed</b>: A Python integer. Used to create a random seed.
See [`set_random_seed`](constant_op.md#set_random_seed) for behavior.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A 3-D tensor of the same type and shape as `image`.
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>ValueError</b>: if the shape of `image` is not supported.
@@ -616,7 +617,7 @@ dimension, which is `height`. Otherwise output the image as-is.
- - -
-### tf.image.flip_left_right(image) <div class="md-anchor" id="flip_left_right">{#flip_left_right}</div>
+### tf.image.flip_left_right(image) <a class="md-anchor" id="flip_left_right"></a>
Flip an image horizontally (left to right).
@@ -625,16 +626,16 @@ Outputs the contents of `image` flipped along the second dimension, which is
See also `reverse()`.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>image</b>: A 3-D tensor of shape `[height, width, channels]`.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A 3-D tensor of the same type and shape as `image`.
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>ValueError</b>: if the shape of `image` is not supported.
@@ -642,25 +643,25 @@ See also `reverse()`.
- - -
-### tf.image.random_flip_left_right(image, seed=None) <div class="md-anchor" id="random_flip_left_right">{#random_flip_left_right}</div>
+### tf.image.random_flip_left_right(image, seed=None) <a class="md-anchor" id="random_flip_left_right"></a>
Randomly flip an image horizontally (left to right).
With a 1 in 2 chance, outputs the contents of `image` flipped along the
second dimension, which is `width`. Otherwise, outputs the image as-is.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>image</b>: A 3-D tensor of shape `[height, width, channels]`.
* <b>seed</b>: A Python integer. Used to create a random seed.
See [`set_random_seed`](constant_op.md#set_random_seed) for behavior.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A 3-D tensor of the same type and shape as `image`.
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>ValueError</b>: if the shape of `image` is not supported.
@@ -669,29 +670,29 @@ second dimension, which is `width`. Otherwise output the image as-is.
- - -
-### tf.image.transpose_image(image) <div class="md-anchor" id="transpose_image">{#transpose_image}</div>
+### tf.image.transpose_image(image) <a class="md-anchor" id="transpose_image"></a>
Transpose an image by swapping the first and second dimension.
See also `transpose()`.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>image</b>: 3-D tensor of shape `[height, width, channels]`
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A 3-D tensor of shape `[width, height, channels]`
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>ValueError</b>: if the shape of `image` is not supported.
-## Image Adjustments <div class="md-anchor" id="AUTOGENERATED-image-adjustments">{#AUTOGENERATED-image-adjustments}</div>
+## Image Adjustments <a class="md-anchor" id="AUTOGENERATED-image-adjustments"></a>
TensorFlow provides functions to adjust images in various ways: brightness,
contrast, hue, and saturation. Each adjustment can be done with predefined
@@ -700,7 +701,7 @@ adjustments are often useful to expand a training set and reduce overfitting.
- - -
-### tf.image.adjust_brightness(image, delta, min_value=None, max_value=None) <div class="md-anchor" id="adjust_brightness">{#adjust_brightness}</div>
+### tf.image.adjust_brightness(image, delta, min_value=None, max_value=None) <a class="md-anchor" id="adjust_brightness"></a>
Adjust the brightness of RGB or Grayscale images.
@@ -712,7 +713,7 @@ clamped to `[min_value, max_value]`. Finally, the result is cast back to
If `min_value` or `max_value` are not given, they are set to the minimum and
maximum allowed values for `image.dtype` respectively.
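The add-then-clamp-then-cast sequence can be sketched on a flat list of `uint8`-style pixel values (a hypothetical helper illustrating the documented steps, not the tensor op):

```python
def adjust_brightness(pixels, delta, min_value=0, max_value=255):
    """Add `delta` in float, clamp to [min_value, max_value], cast back to int."""
    return [int(min(max(float(p) + delta, min_value), max_value))
            for p in pixels]
```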
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>image</b>: A tensor.
@@ -720,14 +721,14 @@ maximum allowed values for `image.dtype` respectively.
* <b>min_value</b>: Minimum value for output.
* <b>max_value</b>: Maximum value for output.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A tensor of the same shape and type as `image`.
- - -
-### tf.image.random_brightness(image, max_delta, seed=None) <div class="md-anchor" id="random_brightness">{#random_brightness}</div>
+### tf.image.random_brightness(image, max_delta, seed=None) <a class="md-anchor" id="random_brightness"></a>
Adjust the brightness of images by a random factor.
@@ -738,7 +739,7 @@ Note that `delta` is picked as a float. Because for integer type images,
the brightness adjusted result is rounded before casting, integer images may
have modifications in the range `[-max_delta,max_delta]`.
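The random delta selection can be sketched as a uniform float draw (a hypothetical `random_brightness_delta` helper, not the actual op):

```python
import random

def random_brightness_delta(max_delta, seed=None):
    """Pick the brightness delta uniformly from [-max_delta, max_delta]."""
    if max_delta < 0:
        raise ValueError("max_delta must be non-negative")
    return random.Random(seed).uniform(-max_delta, max_delta)
```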
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>image</b>: 3-D tensor of shape `[height, width, channels]`.
@@ -746,11 +747,11 @@ have modifications in the range `[-max_delta,max_delta]`.
* <b>seed</b>: A Python integer. Used to create a random seed.
See [`set_random_seed`](constant_op.md#set_random_seed) for behavior.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
3-D tensor of images of shape `[height, width, channels]`
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>ValueError</b>: if max_delta is negative.
@@ -759,7 +760,7 @@ have modifications in the range `[-max_delta,max_delta]`.
- - -
-### tf.image.adjust_contrast(images, contrast_factor, min_value=None, max_value=None) <div class="md-anchor" id="adjust_contrast">{#adjust_contrast}</div>
+### tf.image.adjust_contrast(images, contrast_factor, min_value=None, max_value=None) <a class="md-anchor" id="adjust_contrast"></a>
Adjust contrast of RGB or grayscale images.
@@ -780,7 +781,7 @@ minimum and maximum values for the data type of `images` respectively.
The contrast-adjusted image is always computed as `float`, and it is
cast back to its original type after clipping.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>images</b>: Images to adjust. At least 3-D.
@@ -788,11 +789,11 @@ cast back to its original type after clipping.
* <b>min_value</b>: Minimum value for clipping the adjusted pixels.
* <b>max_value</b>: Maximum value for clipping the adjusted pixels.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
The contrast-adjusted image or images.
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>ValueError</b>: if the arguments are invalid.
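A minimal pure-Python sketch of the adjustment described above, assuming the usual per-channel formula `(x - channel_mean) * contrast_factor + channel_mean` followed by clipping to `[min_value, max_value]`. It handles a single `[height][width][channels]` nested-list image for illustration only; the real op works on batched tensors in float.

```python
def adjust_contrast(images, contrast_factor, min_value=None, max_value=None):
    """Sketch: scale each channel's deviation from its mean, then clip."""
    h, w, c = len(images), len(images[0]), len(images[0][0])
    # per-channel mean over all pixels
    means = [sum(images[y][x][ch] for y in range(h) for x in range(w)) / (h * w)
             for ch in range(c)]
    lo = min_value if min_value is not None else float("-inf")
    hi = max_value if max_value is not None else float("inf")
    return [[[min(max((images[y][x][ch] - means[ch]) * contrast_factor
                      + means[ch], lo), hi)
              for ch in range(c)] for x in range(w)] for y in range(h)]

# doubling contrast pushes values away from the channel mean (0.5 here),
# and clipping then pins them to the allowed range
adjust_contrast([[[0.0], [1.0]]], 2.0, min_value=0.0, max_value=1.0)
```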
@@ -800,14 +801,14 @@ cast back to its original type after clipping.
- - -
-### tf.image.random_contrast(image, lower, upper, seed=None) <div class="md-anchor" id="random_contrast">{#random_contrast}</div>
+### tf.image.random_contrast(image, lower, upper, seed=None) <a class="md-anchor" id="random_contrast"></a>
Adjust the contrast of an image by a random factor.
Equivalent to `adjust_contrast()` but uses a `contrast_factor` randomly
picked in the interval `[lower, upper]`.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>image</b>: 3-D tensor of shape `[height, width, channels]`.
@@ -816,11 +817,11 @@ picked in the interval `[lower, upper]`.
* <b>seed</b>: A Python integer. Used to create a random seed.
See [`set_random_seed`](constant_op.md#set_random_seed) for behavior.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
3-D tensor of shape `[height, width, channels]`.
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>ValueError</b>: if `upper <= lower` or if `lower < 0`.
@@ -829,7 +830,7 @@ picked in the interval `[lower, upper]`.
- - -
-### tf.image.per_image_whitening(image) <div class="md-anchor" id="per_image_whitening">{#per_image_whitening}</div>
+### tf.image.per_image_whitening(image) <a class="md-anchor" id="per_image_whitening"></a>
Linearly scales `image` to have zero mean and unit norm.
@@ -844,16 +845,16 @@ Note that this implementation is limited:
* It only whitens based on the statistics of an individual image.
* It does not take into account the covariance structure.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>image</b>: 3-D tensor of shape `[height, width, channels]`.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
The whitened image with same shape as `image`.
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>ValueError</b>: if the shape of 'image' is incompatible with this function.
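The scaling above can be sketched in pure Python. The `adjusted_stddev = max(stddev, 1/sqrt(num_elements))` floor, which keeps uniform images from dividing by zero, is taken from the op's documented behavior; the nested-list layout and exact arithmetic here are illustrative assumptions, not the actual kernel.

```python
import math

def per_image_whitening(image):
    """Sketch: (x - mean) / adjusted_stddev over all values in the image,
    with adjusted_stddev = max(stddev, 1/sqrt(num_elements))."""
    flat = [p for row in image for px in row for p in px]
    n = len(flat)
    mean = sum(flat) / n
    var = sum((p - mean) ** 2 for p in flat) / n
    adjusted_stddev = max(math.sqrt(var), 1.0 / math.sqrt(n))
    return [[[(p - mean) / adjusted_stddev for p in px] for px in row]
            for row in image]

# a constant image whitens to all zeros instead of raising a
# division-by-zero error, thanks to the stddev floor
per_image_whitening([[[5.0]]])
```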
diff --git a/tensorflow/g3doc/api_docs/python/index.md b/tensorflow/g3doc/api_docs/python/index.md
index f905cd0990..c9b245f836 100644
--- a/tensorflow/g3doc/api_docs/python/index.md
+++ b/tensorflow/g3doc/api_docs/python/index.md
@@ -1,6 +1,6 @@
<!-- This file is machine generated: DO NOT EDIT! -->
-# TensorFlow Python reference documentation
+# TensorFlow Python reference documentation <a class="md-anchor" id="AUTOGENERATED-tensorflow-python-reference-documentation"></a>
* **[Building Graphs](framework.md)**:
* [`add_to_collection`](framework.md#add_to_collection)
diff --git a/tensorflow/g3doc/api_docs/python/io_ops.md b/tensorflow/g3doc/api_docs/python/io_ops.md
index 888c499237..5fb838d925 100644
--- a/tensorflow/g3doc/api_docs/python/io_ops.md
+++ b/tensorflow/g3doc/api_docs/python/io_ops.md
@@ -1,12 +1,13 @@
<!-- This file is machine generated: DO NOT EDIT! -->
-# Inputs and Readers
+# Inputs and Readers <a class="md-anchor" id="AUTOGENERATED-inputs-and-readers"></a>
Note: Functions taking `Tensor` arguments can also take anything
accepted by [`tf.convert_to_tensor`](framework.md#convert_to_tensor).
<!-- TOC-BEGIN This section is generated by neural network: DO NOT EDIT! -->
## Contents
+### [Inputs and Readers](#AUTOGENERATED-inputs-and-readers)
* [Placeholders](#AUTOGENERATED-placeholders)
* [tf.placeholder(dtype, shape=None, name=None)](#placeholder)
* [Readers](#AUTOGENERATED-readers)
@@ -45,7 +46,7 @@ accepted by [`tf.convert_to_tensor`](framework.md#convert_to_tensor).
<!-- TOC-END This section was generated by neural network, THANKS FOR READING! -->
-## Placeholders <div class="md-anchor" id="AUTOGENERATED-placeholders">{#AUTOGENERATED-placeholders}</div>
+## Placeholders <a class="md-anchor" id="AUTOGENERATED-placeholders"></a>
TensorFlow provides a placeholder operation that must be fed with data
on execution. For more info, see the section on [Feeding
@@ -53,7 +54,7 @@ data](../../how_tos/reading_data/index.md#feeding).
- - -
-### tf.placeholder(dtype, shape=None, name=None) <div class="md-anchor" id="placeholder">{#placeholder}</div>
+### tf.placeholder(dtype, shape=None, name=None) <a class="md-anchor" id="placeholder"></a>
Inserts a placeholder for a tensor that will be always fed.
@@ -74,7 +75,7 @@ with tf.Session() as sess:
print sess.run(y, feed_dict={x: rand_array}) # Will succeed.
```
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>dtype</b>: The type of elements in the tensor to be fed.
@@ -82,14 +83,14 @@ with tf.Session() as sess:
specified, you can feed a tensor of any shape.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor` that may be used as a handle for feeding a value, but not
evaluated directly.
-## Readers <div class="md-anchor" id="AUTOGENERATED-readers">{#AUTOGENERATED-readers}</div>
+## Readers <a class="md-anchor" id="AUTOGENERATED-readers"></a>
TensorFlow provides a set of Reader classes for reading data formats.
For more information on inputs and readers, see [Reading
@@ -97,7 +98,7 @@ data](../../how_tos/reading_data/index.md).
- - -
-### class tf.ReaderBase <div class="md-anchor" id="ReaderBase">{#ReaderBase}</div>
+### class tf.ReaderBase <a class="md-anchor" id="ReaderBase"></a>
Base class for different Reader types that produce a record every step.
@@ -113,11 +114,11 @@ it is asked to produce a record (via Read()) but it has finished the
last work unit.
- - -
-#### tf.ReaderBase.__init__(reader_ref, supports_serialize=False) {#ReaderBase.__init__}
+#### tf.ReaderBase.__init__(reader_ref, supports_serialize=False) <a class="md-anchor" id="ReaderBase.__init__"></a>
Creates a new ReaderBase.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>reader_ref</b>: The operation that implements the reader.
@@ -127,42 +128,42 @@ Creates a new ReaderBase.
- - -
-#### tf.ReaderBase.num_records_produced(name=None) {#ReaderBase.num_records_produced}
+#### tf.ReaderBase.num_records_produced(name=None) <a class="md-anchor" id="ReaderBase.num_records_produced"></a>
Returns the number of records this reader has produced.
This is the same as the number of Read executions that have
succeeded.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
An int64 Tensor.
- - -
-#### tf.ReaderBase.num_work_units_completed(name=None) {#ReaderBase.num_work_units_completed}
+#### tf.ReaderBase.num_work_units_completed(name=None) <a class="md-anchor" id="ReaderBase.num_work_units_completed"></a>
Returns the number of work units this reader has finished processing.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
An int64 Tensor.
- - -
-#### tf.ReaderBase.read(queue, name=None) {#ReaderBase.read}
+#### tf.ReaderBase.read(queue, name=None) <a class="md-anchor" id="ReaderBase.read"></a>
Returns the next record (key, value pair) produced by a reader.
@@ -170,14 +171,14 @@ Will dequeue a work unit from queue if necessary (e.g. when the
Reader needs to start reading from a new file since it has
finished with the previous file).
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>queue</b>: A Queue or a mutable string Tensor representing a handle
to a Queue, with string work items.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A tuple of Tensors (key, value).
@@ -187,76 +188,76 @@ finished with the previous file).
- - -
-#### tf.ReaderBase.reader_ref {#ReaderBase.reader_ref}
+#### tf.ReaderBase.reader_ref <a class="md-anchor" id="ReaderBase.reader_ref"></a>
Op that implements the reader.
- - -
-#### tf.ReaderBase.reset(name=None) {#ReaderBase.reset}
+#### tf.ReaderBase.reset(name=None) <a class="md-anchor" id="ReaderBase.reset"></a>
Restore a reader to its initial clean state.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
The created Operation.
- - -
-#### tf.ReaderBase.restore_state(state, name=None) {#ReaderBase.restore_state}
+#### tf.ReaderBase.restore_state(state, name=None) <a class="md-anchor" id="ReaderBase.restore_state"></a>
Restore a reader to a previously saved state.
Not all Readers support being restored, so this can produce an
Unimplemented error.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>state</b>: A string Tensor.
Result of a SerializeState of a Reader with matching type.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
The created Operation.
- - -
-#### tf.ReaderBase.serialize_state(name=None) {#ReaderBase.serialize_state}
+#### tf.ReaderBase.serialize_state(name=None) <a class="md-anchor" id="ReaderBase.serialize_state"></a>
Produce a string tensor that encodes the state of a reader.
Not all Readers support being serialized, so this can produce an
Unimplemented error.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A string Tensor.
- - -
-#### tf.ReaderBase.supports_serialize {#ReaderBase.supports_serialize}
+#### tf.ReaderBase.supports_serialize <a class="md-anchor" id="ReaderBase.supports_serialize"></a>
Whether the Reader implementation can serialize its state.
- - -
-### class tf.TextLineReader <div class="md-anchor" id="TextLineReader">{#TextLineReader}</div>
+### class tf.TextLineReader <a class="md-anchor" id="TextLineReader"></a>
A Reader that outputs the lines of a file delimited by newlines.
@@ -264,11 +265,11 @@ Newlines are stripped from the output.
See ReaderBase for supported methods.
- - -
-#### tf.TextLineReader.__init__(skip_header_lines=None, name=None) {#TextLineReader.__init__}
+#### tf.TextLineReader.__init__(skip_header_lines=None, name=None) <a class="md-anchor" id="TextLineReader.__init__"></a>
Create a TextLineReader.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>skip_header_lines</b>: An optional int. Defaults to 0. Number of lines
@@ -278,42 +279,42 @@ Create a TextLineReader.
- - -
-#### tf.TextLineReader.num_records_produced(name=None) {#TextLineReader.num_records_produced}
+#### tf.TextLineReader.num_records_produced(name=None) <a class="md-anchor" id="TextLineReader.num_records_produced"></a>
Returns the number of records this reader has produced.
This is the same as the number of Read executions that have
succeeded.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
An int64 Tensor.
- - -
-#### tf.TextLineReader.num_work_units_completed(name=None) {#TextLineReader.num_work_units_completed}
+#### tf.TextLineReader.num_work_units_completed(name=None) <a class="md-anchor" id="TextLineReader.num_work_units_completed"></a>
Returns the number of work units this reader has finished processing.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
An int64 Tensor.
- - -
-#### tf.TextLineReader.read(queue, name=None) {#TextLineReader.read}
+#### tf.TextLineReader.read(queue, name=None) <a class="md-anchor" id="TextLineReader.read"></a>
Returns the next record (key, value pair) produced by a reader.
@@ -321,14 +322,14 @@ Will dequeue a work unit from queue if necessary (e.g. when the
Reader needs to start reading from a new file since it has
finished with the previous file).
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>queue</b>: A Queue or a mutable string Tensor representing a handle
to a Queue, with string work items.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A tuple of Tensors (key, value).
@@ -338,76 +339,76 @@ finished with the previous file).
- - -
-#### tf.TextLineReader.reader_ref {#TextLineReader.reader_ref}
+#### tf.TextLineReader.reader_ref <a class="md-anchor" id="TextLineReader.reader_ref"></a>
Op that implements the reader.
- - -
-#### tf.TextLineReader.reset(name=None) {#TextLineReader.reset}
+#### tf.TextLineReader.reset(name=None) <a class="md-anchor" id="TextLineReader.reset"></a>
Restore a reader to its initial clean state.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
The created Operation.
- - -
-#### tf.TextLineReader.restore_state(state, name=None) {#TextLineReader.restore_state}
+#### tf.TextLineReader.restore_state(state, name=None) <a class="md-anchor" id="TextLineReader.restore_state"></a>
Restore a reader to a previously saved state.
Not all Readers support being restored, so this can produce an
Unimplemented error.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>state</b>: A string Tensor.
Result of a SerializeState of a Reader with matching type.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
The created Operation.
- - -
-#### tf.TextLineReader.serialize_state(name=None) {#TextLineReader.serialize_state}
+#### tf.TextLineReader.serialize_state(name=None) <a class="md-anchor" id="TextLineReader.serialize_state"></a>
Produce a string tensor that encodes the state of a reader.
Not all Readers support being serialized, so this can produce an
Unimplemented error.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A string Tensor.
- - -
-#### tf.TextLineReader.supports_serialize {#TextLineReader.supports_serialize}
+#### tf.TextLineReader.supports_serialize <a class="md-anchor" id="TextLineReader.supports_serialize"></a>
Whether the Reader implementation can serialize its state.
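The record stream a `TextLineReader` produces can be mimicked without TensorFlow or queues. This sketch yields one `(key, value)` record per newline-delimited line with the newline stripped, honoring `skip_header_lines`; the `filename:line_number` key format is an illustrative assumption, not the documented contract.

```python
import os
import tempfile

def text_line_records(filename, skip_header_lines=0):
    """Sketch: yield (key, value) pairs the way a line-oriented
    reader would -- one record per line, newline stripped."""
    with open(filename) as f:
        for lineno, line in enumerate(f, start=1):
            if lineno <= skip_header_lines:
                continue
            # key format "filename:line" is illustrative only
            yield ("%s:%d" % (filename, lineno), line.rstrip("\n"))

# usage: write a small file, then read records past the header
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "w") as f:
    f.write("header\na\nb\n")
records = list(text_line_records(path, skip_header_lines=1))
os.remove(path)
# records holds ("...:2", "a") and ("...:3", "b")
```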
- - -
-### class tf.WholeFileReader <div class="md-anchor" id="WholeFileReader">{#WholeFileReader}</div>
+### class tf.WholeFileReader <a class="md-anchor" id="WholeFileReader"></a>
A Reader that outputs the entire contents of a file as a value.
@@ -417,11 +418,11 @@ be a filename (key) and the contents of that file (value).
See ReaderBase for supported methods.
- - -
-#### tf.WholeFileReader.__init__(name=None) {#WholeFileReader.__init__}
+#### tf.WholeFileReader.__init__(name=None) <a class="md-anchor" id="WholeFileReader.__init__"></a>
Create a WholeFileReader.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>name</b>: A name for the operation (optional).
@@ -429,42 +430,42 @@ Create a WholeFileReader.
- - -
-#### tf.WholeFileReader.num_records_produced(name=None) {#WholeFileReader.num_records_produced}
+#### tf.WholeFileReader.num_records_produced(name=None) <a class="md-anchor" id="WholeFileReader.num_records_produced"></a>
Returns the number of records this reader has produced.
This is the same as the number of Read executions that have
succeeded.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
An int64 Tensor.
- - -
-#### tf.WholeFileReader.num_work_units_completed(name=None) {#WholeFileReader.num_work_units_completed}
+#### tf.WholeFileReader.num_work_units_completed(name=None) <a class="md-anchor" id="WholeFileReader.num_work_units_completed"></a>
Returns the number of work units this reader has finished processing.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
An int64 Tensor.
- - -
-#### tf.WholeFileReader.read(queue, name=None) {#WholeFileReader.read}
+#### tf.WholeFileReader.read(queue, name=None) <a class="md-anchor" id="WholeFileReader.read"></a>
Returns the next record (key, value pair) produced by a reader.
@@ -472,14 +473,14 @@ Will dequeue a work unit from queue if necessary (e.g. when the
Reader needs to start reading from a new file since it has
finished with the previous file).
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>queue</b>: A Queue or a mutable string Tensor representing a handle
to a Queue, with string work items.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A tuple of Tensors (key, value).
@@ -489,76 +490,76 @@ finished with the previous file).
- - -
-#### tf.WholeFileReader.reader_ref {#WholeFileReader.reader_ref}
+#### tf.WholeFileReader.reader_ref <a class="md-anchor" id="WholeFileReader.reader_ref"></a>
Op that implements the reader.
- - -
-#### tf.WholeFileReader.reset(name=None) {#WholeFileReader.reset}
+#### tf.WholeFileReader.reset(name=None) <a class="md-anchor" id="WholeFileReader.reset"></a>
Restore a reader to its initial clean state.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
The created Operation.
- - -
-#### tf.WholeFileReader.restore_state(state, name=None) {#WholeFileReader.restore_state}
+#### tf.WholeFileReader.restore_state(state, name=None) <a class="md-anchor" id="WholeFileReader.restore_state"></a>
Restore a reader to a previously saved state.
Not all Readers support being restored, so this can produce an
Unimplemented error.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>state</b>: A string Tensor.
Result of a SerializeState of a Reader with matching type.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
The created Operation.
- - -
-#### tf.WholeFileReader.serialize_state(name=None) {#WholeFileReader.serialize_state}
+#### tf.WholeFileReader.serialize_state(name=None) <a class="md-anchor" id="WholeFileReader.serialize_state"></a>
Produce a string tensor that encodes the state of a reader.
Not all Readers support being serialized, so this can produce an
Unimplemented error.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A string Tensor.
- - -
-#### tf.WholeFileReader.supports_serialize {#WholeFileReader.supports_serialize}
+#### tf.WholeFileReader.supports_serialize <a class="md-anchor" id="WholeFileReader.supports_serialize"></a>
Whether the Reader implementation can serialize its state.
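A `WholeFileReader` record is simple enough to sketch directly: one `(key, value)` pair per file, where the key is the filename and the value is the entire file contents. This pure-Python stand-in reads a single file eagerly instead of dequeuing filenames from a queue, which is an intentional simplification.

```python
import os
import tempfile

def whole_file_record(filename):
    """Sketch: emit one (key, value) record for a whole file --
    key is the filename, value is the full contents."""
    with open(filename, "rb") as f:
        return (filename, f.read())

# usage: the value round-trips the bytes written to the file
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"pixel data")
key, value = whole_file_record(path)
os.remove(path)
```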
- - -
-### class tf.IdentityReader <div class="md-anchor" id="IdentityReader">{#IdentityReader}</div>
+### class tf.IdentityReader <a class="md-anchor" id="IdentityReader"></a>
A Reader that outputs the queued work as both the key and value.
@@ -568,11 +569,11 @@ work string and output (work, work).
See ReaderBase for supported methods.
- - -
-#### tf.IdentityReader.__init__(name=None) {#IdentityReader.__init__}
+#### tf.IdentityReader.__init__(name=None) <a class="md-anchor" id="IdentityReader.__init__"></a>
Create an IdentityReader.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>name</b>: A name for the operation (optional).
@@ -580,42 +581,42 @@ Create a IdentityReader.
- - -
-#### tf.IdentityReader.num_records_produced(name=None) {#IdentityReader.num_records_produced}
+#### tf.IdentityReader.num_records_produced(name=None) <a class="md-anchor" id="IdentityReader.num_records_produced"></a>
Returns the number of records this reader has produced.
This is the same as the number of Read executions that have
succeeded.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
An int64 Tensor.
- - -
-#### tf.IdentityReader.num_work_units_completed(name=None) {#IdentityReader.num_work_units_completed}
+#### tf.IdentityReader.num_work_units_completed(name=None) <a class="md-anchor" id="IdentityReader.num_work_units_completed"></a>
Returns the number of work units this reader has finished processing.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
An int64 Tensor.
- - -
-#### tf.IdentityReader.read(queue, name=None) {#IdentityReader.read}
+#### tf.IdentityReader.read(queue, name=None) <a class="md-anchor" id="IdentityReader.read"></a>
Returns the next record (key, value pair) produced by a reader.
@@ -623,14 +624,14 @@ Will dequeue a work unit from queue if necessary (e.g. when the
Reader needs to start reading from a new file since it has
finished with the previous file).
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>queue</b>: A Queue or a mutable string Tensor representing a handle
to a Queue, with string work items.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A tuple of Tensors (key, value).
@@ -640,87 +641,87 @@ finished with the previous file).
- - -
-#### tf.IdentityReader.reader_ref {#IdentityReader.reader_ref}
+#### tf.IdentityReader.reader_ref <a class="md-anchor" id="IdentityReader.reader_ref"></a>
Op that implements the reader.
- - -
-#### tf.IdentityReader.reset(name=None) {#IdentityReader.reset}
+#### tf.IdentityReader.reset(name=None) <a class="md-anchor" id="IdentityReader.reset"></a>
Restore a reader to its initial clean state.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
The created Operation.
- - -
-#### tf.IdentityReader.restore_state(state, name=None) {#IdentityReader.restore_state}
+#### tf.IdentityReader.restore_state(state, name=None) <a class="md-anchor" id="IdentityReader.restore_state"></a>
Restore a reader to a previously saved state.
Not all Readers support being restored, so this can produce an
Unimplemented error.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>state</b>: A string Tensor.
Result of a SerializeState of a Reader with matching type.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
The created Operation.
- - -
-#### tf.IdentityReader.serialize_state(name=None) {#IdentityReader.serialize_state}
+#### tf.IdentityReader.serialize_state(name=None) <a class="md-anchor" id="IdentityReader.serialize_state"></a>
Produce a string tensor that encodes the state of a reader.
Not all Readers support being serialized, so this can produce an
Unimplemented error.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A string Tensor.
- - -
-#### tf.IdentityReader.supports_serialize {#IdentityReader.supports_serialize}
+#### tf.IdentityReader.supports_serialize <a class="md-anchor" id="IdentityReader.supports_serialize"></a>
Whether the Reader implementation can serialize its state.
- - -
-### class tf.TFRecordReader <div class="md-anchor" id="TFRecordReader">{#TFRecordReader}</div>
+### class tf.TFRecordReader <a class="md-anchor" id="TFRecordReader"></a>
A Reader that outputs the records from a TFRecords file.
See ReaderBase for supported methods.
- - -
-#### tf.TFRecordReader.__init__(name=None) {#TFRecordReader.__init__}
+#### tf.TFRecordReader.__init__(name=None) <a class="md-anchor" id="TFRecordReader.__init__"></a>
Create a TFRecordReader.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>name</b>: A name for the operation (optional).
@@ -728,42 +729,42 @@ Create a TFRecordReader.
- - -
-#### tf.TFRecordReader.num_records_produced(name=None) {#TFRecordReader.num_records_produced}
+#### tf.TFRecordReader.num_records_produced(name=None) <a class="md-anchor" id="TFRecordReader.num_records_produced"></a>
Returns the number of records this reader has produced.
This is the same as the number of Read executions that have
succeeded.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
An int64 Tensor.
- - -
-#### tf.TFRecordReader.num_work_units_completed(name=None) {#TFRecordReader.num_work_units_completed}
+#### tf.TFRecordReader.num_work_units_completed(name=None) <a class="md-anchor" id="TFRecordReader.num_work_units_completed"></a>
Returns the number of work units this reader has finished processing.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
An int64 Tensor.
- - -
-#### tf.TFRecordReader.read(queue, name=None) {#TFRecordReader.read}
+#### tf.TFRecordReader.read(queue, name=None) <a class="md-anchor" id="TFRecordReader.read"></a>
Returns the next record (key, value pair) produced by a reader.
@@ -771,14 +772,14 @@ Will dequeue a work unit from queue if necessary (e.g. when the
Reader needs to start reading from a new file since it has
finished with the previous file).
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>queue</b>: A Queue or a mutable string Tensor representing a handle
to a Queue, with string work items.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A tuple of Tensors (key, value).
@@ -788,87 +789,87 @@ finished with the previous file).
- - -
-#### tf.TFRecordReader.reader_ref {#TFRecordReader.reader_ref}
+#### tf.TFRecordReader.reader_ref <a class="md-anchor" id="TFRecordReader.reader_ref"></a>
Op that implements the reader.
- - -
-#### tf.TFRecordReader.reset(name=None) {#TFRecordReader.reset}
+#### tf.TFRecordReader.reset(name=None) <a class="md-anchor" id="TFRecordReader.reset"></a>
Restore a reader to its initial clean state.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
The created Operation.
- - -
-#### tf.TFRecordReader.restore_state(state, name=None) {#TFRecordReader.restore_state}
+#### tf.TFRecordReader.restore_state(state, name=None) <a class="md-anchor" id="TFRecordReader.restore_state"></a>
Restore a reader to a previously saved state.
Not all Readers support being restored, so this can produce an
Unimplemented error.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>state</b>: A string Tensor.
Result of a SerializeState of a Reader with matching type.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
The created Operation.
- - -
-#### tf.TFRecordReader.serialize_state(name=None) {#TFRecordReader.serialize_state}
+#### tf.TFRecordReader.serialize_state(name=None) <a class="md-anchor" id="TFRecordReader.serialize_state"></a>
Produce a string tensor that encodes the state of a reader.
Not all Readers support being serialized, so this can produce an
Unimplemented error.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A string Tensor.
- - -
-#### tf.TFRecordReader.supports_serialize {#TFRecordReader.supports_serialize}
+#### tf.TFRecordReader.supports_serialize <a class="md-anchor" id="TFRecordReader.supports_serialize"></a>
Whether the Reader implementation can serialize its state.
- - -
-### class tf.FixedLengthRecordReader <div class="md-anchor" id="FixedLengthRecordReader">{#FixedLengthRecordReader}</div>
+### class tf.FixedLengthRecordReader <a class="md-anchor" id="FixedLengthRecordReader"></a>
A Reader that outputs fixed-length records from a file.
See ReaderBase for supported methods.
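Fixed-length record extraction is plain byte slicing: skip `header_bytes`, drop `footer_bytes`, and cut the remainder into `record_bytes`-sized chunks. This sketch operates on an in-memory byte string rather than a queue of filenames, and treating the `None` defaults as 0 is an assumption for illustration.

```python
def fixed_length_records(data, record_bytes, header_bytes=0, footer_bytes=0):
    """Sketch: split a byte string into fixed-size records, skipping
    an optional header and footer; a trailing partial record is dropped."""
    end = len(data) - footer_bytes
    body = data[header_bytes:end]
    usable = len(body) - len(body) % record_bytes
    return [body[i:i + record_bytes] for i in range(0, usable, record_bytes)]

# usage: 2-byte header "HH" and 2-byte footer "FF" are stripped,
# leaving three 2-byte records
fixed_length_records(b"HHaabbccFF", record_bytes=2,
                     header_bytes=2, footer_bytes=2)
```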
- - -
-#### tf.FixedLengthRecordReader.__init__(record_bytes, header_bytes=None, footer_bytes=None, name=None) {#FixedLengthRecordReader.__init__}
+#### tf.FixedLengthRecordReader.__init__(record_bytes, header_bytes=None, footer_bytes=None, name=None) <a class="md-anchor" id="FixedLengthRecordReader.__init__"></a>
Create a FixedLengthRecordReader.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>record_bytes</b>: An int.
@@ -879,42 +880,42 @@ Create a FixedLengthRecordReader.
- - -
-#### tf.FixedLengthRecordReader.num_records_produced(name=None) {#FixedLengthRecordReader.num_records_produced}
+#### tf.FixedLengthRecordReader.num_records_produced(name=None) <a class="md-anchor" id="FixedLengthRecordReader.num_records_produced"></a>
Returns the number of records this reader has produced.
This is the same as the number of Read executions that have
succeeded.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
An int64 Tensor.
- - -
-#### tf.FixedLengthRecordReader.num_work_units_completed(name=None) {#FixedLengthRecordReader.num_work_units_completed}
+#### tf.FixedLengthRecordReader.num_work_units_completed(name=None) <a class="md-anchor" id="FixedLengthRecordReader.num_work_units_completed"></a>
Returns the number of work units this reader has finished processing.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
An int64 Tensor.
- - -
-#### tf.FixedLengthRecordReader.read(queue, name=None) {#FixedLengthRecordReader.read}
+#### tf.FixedLengthRecordReader.read(queue, name=None) <a class="md-anchor" id="FixedLengthRecordReader.read"></a>
Returns the next record (key, value pair) produced by a reader.
@@ -922,14 +923,14 @@ Will dequeue a work unit from queue if necessary (e.g. when the
Reader needs to start reading from a new file since it has
finished with the previous file).
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>queue</b>: A Queue or a mutable string Tensor representing a handle
to a Queue, with string work items.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A tuple of Tensors (key, value).
@@ -939,82 +940,82 @@ finished with the previous file).
- - -
-#### tf.FixedLengthRecordReader.reader_ref {#FixedLengthRecordReader.reader_ref}
+#### tf.FixedLengthRecordReader.reader_ref <a class="md-anchor" id="FixedLengthRecordReader.reader_ref"></a>
Op that implements the reader.
- - -
-#### tf.FixedLengthRecordReader.reset(name=None) {#FixedLengthRecordReader.reset}
+#### tf.FixedLengthRecordReader.reset(name=None) <a class="md-anchor" id="FixedLengthRecordReader.reset"></a>
Restore a reader to its initial clean state.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
The created Operation.
- - -
-#### tf.FixedLengthRecordReader.restore_state(state, name=None) {#FixedLengthRecordReader.restore_state}
+#### tf.FixedLengthRecordReader.restore_state(state, name=None) <a class="md-anchor" id="FixedLengthRecordReader.restore_state"></a>
Restore a reader to a previously saved state.
Not all Readers support being restored, so this can produce an
Unimplemented error.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>state</b>: A string Tensor.
Result of a SerializeState of a Reader with matching type.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
The created Operation.
- - -
-#### tf.FixedLengthRecordReader.serialize_state(name=None) {#FixedLengthRecordReader.serialize_state}
+#### tf.FixedLengthRecordReader.serialize_state(name=None) <a class="md-anchor" id="FixedLengthRecordReader.serialize_state"></a>
Produce a string tensor that encodes the state of a reader.
Not all Readers support being serialized, so this can produce an
Unimplemented error.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A string Tensor.
- - -
-#### tf.FixedLengthRecordReader.supports_serialize {#FixedLengthRecordReader.supports_serialize}
+#### tf.FixedLengthRecordReader.supports_serialize <a class="md-anchor" id="FixedLengthRecordReader.supports_serialize"></a>
Whether the Reader implementation can serialize its state.
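Before moving on to the conversion ops, the per-file logic of a fixed-length reader can be sketched in plain Python. This is a conceptual analogue, not the TensorFlow implementation: `header_bytes` and `footer_bytes` are skipped, and the remaining bytes are emitted in `record_bytes`-sized chunks keyed by record index.

```python
def read_fixed_length_records(data, record_bytes, header_bytes=0, footer_bytes=0):
    # Skip the header and footer, then slice the body into fixed-size records.
    body = data[header_bytes:len(data) - footer_bytes]
    usable = len(body) - len(body) % record_bytes  # drop any trailing partial record
    for offset in range(0, usable, record_bytes):
        yield offset // record_bytes, body[offset:offset + record_bytes]

records = list(read_fixed_length_records(b"HHabcdefFF", record_bytes=3,
                                         header_bytes=2, footer_bytes=2))
# records == [(0, b"abc"), (1, b"def")]
```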
-## Converting <div class="md-anchor" id="AUTOGENERATED-converting">{#AUTOGENERATED-converting}</div>
+## Converting <a class="md-anchor" id="AUTOGENERATED-converting"></a>
TensorFlow provides several operations that you can use to convert various data
formats into tensors.
- - -
-### tf.decode_csv(records, record_defaults, field_delim=None, name=None) <div class="md-anchor" id="decode_csv">{#decode_csv}</div>
+### tf.decode_csv(records, record_defaults, field_delim=None, name=None) <a class="md-anchor" id="decode_csv"></a>
Convert CSV records to tensors. Each column maps to one tensor.
@@ -1022,7 +1023,7 @@ RFC 4180 format is expected for the CSV records.
(https://tools.ietf.org/html/rfc4180)
Note that leading and trailing spaces are allowed in int or float fields.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>records</b>: A `Tensor` of type `string`.
@@ -1035,7 +1036,7 @@ Note that we allow leading and trailing spaces with int or float field.
delimiter to separate fields in a record.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A list of `Tensor` objects. Has the same type as `record_defaults`.
Each tensor will have the same shape as records.
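The column-wise behavior above can be illustrated with a small stdlib sketch (a hypothetical helper, not the TensorFlow kernel): each entry of `record_defaults` supplies both the fallback value for an empty field and the column's type.

```python
import csv

def decode_csv_columns(records, record_defaults, field_delim=","):
    # One output list per column; record_defaults supplies both the default
    # value for empty fields and the column's type (mirroring how
    # tf.decode_csv infers output dtypes from record_defaults).
    columns = [[] for _ in record_defaults]
    for row in csv.reader(records, delimiter=field_delim):
        for i, (field, default) in enumerate(zip(row, record_defaults)):
            if field == "":
                columns[i].append(default[0])
            elif isinstance(default[0], str):
                columns[i].append(field)
            else:
                # Leading/trailing spaces are tolerated in numeric fields.
                columns[i].append(type(default[0])(field.strip()))
    return columns

decode_csv_columns(["1, 2.5,a", ",3.0,b"], [[0], [0.0], ["?"]])
# -> [[1, 0], [2.5, 3.0], ["a", "b"]]
```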
@@ -1043,11 +1044,11 @@ Note that we allow leading and trailing spaces with int or float field.
- - -
-### tf.decode_raw(bytes, out_type, little_endian=None, name=None) <div class="md-anchor" id="decode_raw">{#decode_raw}</div>
+### tf.decode_raw(bytes, out_type, little_endian=None, name=None) <a class="md-anchor" id="decode_raw"></a>
Reinterpret the bytes of a string as a vector of numbers.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>bytes</b>: A `Tensor` of type `string`.
@@ -1058,7 +1059,7 @@ Reinterpret the bytes of a string as a vector of numbers.
Ignored for out_types that are stored in a single byte like uint8.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor` of type `out_type`.
A Tensor with one more dimension than the input bytes. The
@@ -1069,7 +1070,7 @@ Reinterpret the bytes of a string as a vector of numbers.
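The byte-reinterpretation that `decode_raw` performs can be sketched with the stdlib `array` module (a conceptual analogue only; the real op works on string tensors and TF dtypes):

```python
import array
import sys

def decode_raw_bytes(raw, typecode, little_endian=True):
    # typecode is an array-module code ('b', 'h', 'i', 'f', ...) standing in
    # for out_type; bytes are swapped when the requested endianness differs
    # from the host's (a no-op for single-byte types such as uint8).
    vec = array.array(typecode)
    vec.frombytes(raw)
    if vec.itemsize > 1 and little_endian != (sys.byteorder == "little"):
        vec.byteswap()
    return list(vec)

decode_raw_bytes(b"\x01\x00\x02\x00", "h")  # -> [1, 2]
```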
- - -
-### Example protocol buffer <div class="md-anchor" id="AUTOGENERATED-example-protocol-buffer">{#AUTOGENERATED-example-protocol-buffer}</div>
+### Example protocol buffer <a class="md-anchor" id="AUTOGENERATED-example-protocol-buffer"></a>
TensorFlow's [recommended format for training
examples](../../how_tos/reading_data/index.md#standard-tensorflow-format)
@@ -1080,7 +1081,7 @@ here](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/ex
- - -
-### tf.parse_example(serialized, names=None, sparse_keys=None, sparse_types=None, dense_keys=None, dense_types=None, dense_defaults=None, dense_shapes=None, name='ParseExample') <div class="md-anchor" id="parse_example">{#parse_example}</div>
+### tf.parse_example(serialized, names=None, sparse_keys=None, sparse_types=None, dense_keys=None, dense_types=None, dense_defaults=None, dense_shapes=None, name='ParseExample') <a class="md-anchor" id="parse_example"></a>
Parses `Example` protos.
@@ -1219,7 +1220,7 @@ And the expected output is:
}
```
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>serialized</b>: A list of strings, a batch of binary serialized `Example`
@@ -1241,11 +1242,11 @@ And the expected output is:
The shape of the data for each dense feature referenced by `dense_keys`.
* <b>name</b>: A name for this operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `dict` mapping keys to `Tensor`s and `SparseTensor`s.
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>ValueError</b>: If sparse and dense key sets intersect, or input lengths do not
@@ -1254,7 +1255,7 @@ And the expected output is:
- - -
-### tf.parse_single_example(serialized, names=None, sparse_keys=None, sparse_types=None, dense_keys=None, dense_types=None, dense_defaults=None, dense_shapes=None, name='ParseSingleExample') <div class="md-anchor" id="parse_single_example">{#parse_single_example}</div>
+### tf.parse_single_example(serialized, names=None, sparse_keys=None, sparse_types=None, dense_keys=None, dense_types=None, dense_defaults=None, dense_shapes=None, name='ParseSingleExample') <a class="md-anchor" id="parse_single_example"></a>
Parses a single `Example` proto.
@@ -1271,7 +1272,7 @@ single element vector).
See also `parse_example`.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>serialized</b>: A scalar string, a single serialized Example.
@@ -1286,18 +1287,18 @@ See also `parse_example`.
* <b>dense_shapes</b>: See parse_example documentation for more details.
* <b>name</b>: A name for this operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A dictionary mapping keys to Tensors and SparseTensors.
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>ValueError</b>: if "scalar" or "names" have known shapes, and are not scalars.
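The dense-feature contract can be pictured with a minimal dictionary-based sketch (a hypothetical helper, not the parsing op itself): a missing dense key falls back to its entry in `dense_defaults`, and a key with neither a value nor a default is an error.

```python
def parse_dense_features(features, dense_keys, dense_defaults=None):
    # features plays the role of a decoded Example proto's feature map.
    dense_defaults = dense_defaults or {}
    out = {}
    for key in dense_keys:
        if key in features:
            out[key] = features[key]
        elif key in dense_defaults:
            out[key] = dense_defaults[key]
        else:
            raise ValueError("dense feature %r is required but missing" % key)
    return out

parse_dense_features({"age": 29}, ["age", "zip"], {"zip": "00000"})
# -> {"age": 29, "zip": "00000"}
```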
-## Queues <div class="md-anchor" id="AUTOGENERATED-queues">{#AUTOGENERATED-queues}</div>
+## Queues <a class="md-anchor" id="AUTOGENERATED-queues"></a>
TensorFlow provides several implementations of 'Queues', which are
structures within the TensorFlow computation graph to stage pipelines
@@ -1307,7 +1308,7 @@ Queues](../../how_tos/threading_and_queues/index.md).
- - -
-### class tf.QueueBase <div class="md-anchor" id="QueueBase">{#QueueBase}</div>
+### class tf.QueueBase <a class="md-anchor" id="QueueBase"></a>
Base class for queue implementations.
@@ -1328,27 +1329,27 @@ them.
- - -
-#### tf.QueueBase.enqueue(vals, name=None) {#QueueBase.enqueue}
+#### tf.QueueBase.enqueue(vals, name=None) <a class="md-anchor" id="QueueBase.enqueue"></a>
Enqueues one element to this queue.
If the queue is full when this operation executes, it will block
until the element has been enqueued.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>vals</b>: The tuple of `Tensor` objects to be enqueued.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
The operation that enqueues a new tuple of tensors to the queue.
- - -
-#### tf.QueueBase.enqueue_many(vals, name=None) {#QueueBase.enqueue_many}
+#### tf.QueueBase.enqueue_many(vals, name=None) <a class="md-anchor" id="QueueBase.enqueue_many"></a>
Enqueues zero or more elements to this queue.
@@ -1359,14 +1360,14 @@ same size in the 0th dimension.
If the queue is full when this operation executes, it will block
until all of the elements have been enqueued.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>vals</b>: The tensor or tuple of tensors from which the queue elements
are taken.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
The operation that enqueues a batch of tuples of tensors to the queue.
@@ -1374,26 +1375,26 @@ until all of the elements have been enqueued.
- - -
-#### tf.QueueBase.dequeue(name=None) {#QueueBase.dequeue}
+#### tf.QueueBase.dequeue(name=None) <a class="md-anchor" id="QueueBase.dequeue"></a>
Dequeues one element from this queue.
If the queue is empty when this operation executes, it will block
until there is an element to dequeue.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
The tuple of tensors that was dequeued.
- - -
-#### tf.QueueBase.dequeue_many(n, name=None) {#QueueBase.dequeue_many}
+#### tf.QueueBase.dequeue_many(n, name=None) <a class="md-anchor" id="QueueBase.dequeue_many"></a>
Dequeues and concatenates `n` elements from this queue.
@@ -1404,13 +1405,13 @@ components in the dequeued tuple will have size `n` in the 0th dimension.
If the queue contains fewer than `n` elements when this operation
executes, it will block until `n` elements have been dequeued.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>n</b>: A scalar `Tensor` containing the number of elements to dequeue.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
The tuple of concatenated tensors that was dequeued.
@@ -1418,16 +1419,16 @@ executes, it will block until `n` elements have been dequeued.
- - -
-#### tf.QueueBase.size(name=None) {#QueueBase.size}
+#### tf.QueueBase.size(name=None) <a class="md-anchor" id="QueueBase.size"></a>
Compute the number of elements in this queue.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A scalar tensor containing the number of elements in this queue.
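The blocking contract described for `enqueue`, `dequeue`, and `size` matches the behavior of Python's standard-library `queue.Queue`, which can serve as a mental model (no TensorFlow graph involved):

```python
import queue
import threading

q = queue.Queue(maxsize=2)   # a capacity-2 queue
q.put("a")                   # enqueue
q.put("b")                   # queue is now full; another put() would block

t = threading.Thread(target=lambda: q.put("c"))  # blocks until room appears
t.start()
item = q.get()               # dequeue -> "a" (first-in, first-out)
t.join()
print(item, q.qsize())       # a 2
```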
@@ -1435,7 +1436,7 @@ Compute the number of elements in this queue.
- - -
-#### tf.QueueBase.close(cancel_pending_enqueues=False, name=None) {#QueueBase.close}
+#### tf.QueueBase.close(cancel_pending_enqueues=False, name=None) <a class="md-anchor" id="QueueBase.close"></a>
Closes this queue.
@@ -1449,27 +1450,27 @@ that would block will fail immediately.
If `cancel_pending_enqueues` is `True`, all pending requests will also
be cancelled.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>cancel_pending_enqueues</b>: (Optional.) A boolean, defaulting to
`False` (described above).
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
The operation that closes the queue.
-#### Other Methods
+#### Other Methods <a class="md-anchor" id="AUTOGENERATED-other-methods"></a>
- - -
-#### tf.QueueBase.__init__(dtypes, shapes, queue_ref) {#QueueBase.__init__}
+#### tf.QueueBase.__init__(dtypes, shapes, queue_ref) <a class="md-anchor" id="QueueBase.__init__"></a>
Constructs a queue object from a queue reference.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>dtypes</b>: A list of types. The length of dtypes must equal the number
@@ -1483,26 +1484,26 @@ Constructs a queue object from a queue reference.
- - -
-#### tf.QueueBase.dtypes {#QueueBase.dtypes}
+#### tf.QueueBase.dtypes <a class="md-anchor" id="QueueBase.dtypes"></a>
The list of dtypes for each component of a queue element.
- - -
-#### tf.QueueBase.name {#QueueBase.name}
+#### tf.QueueBase.name <a class="md-anchor" id="QueueBase.name"></a>
The name of the underlying queue.
- - -
-#### tf.QueueBase.queue_ref {#QueueBase.queue_ref}
+#### tf.QueueBase.queue_ref <a class="md-anchor" id="QueueBase.queue_ref"></a>
The underlying queue reference.
- - -
-### class tf.FIFOQueue <div class="md-anchor" id="FIFOQueue">{#FIFOQueue}</div>
+### class tf.FIFOQueue <a class="md-anchor" id="FIFOQueue"></a>
A queue implementation that dequeues elements in first-in, first-out order.
@@ -1511,7 +1512,7 @@ this class.
- - -
-#### tf.FIFOQueue.__init__(capacity, dtypes, shapes=None, shared_name=None, name='fifo_queue') {#FIFOQueue.__init__}
+#### tf.FIFOQueue.__init__(capacity, dtypes, shapes=None, shared_name=None, name='fifo_queue') <a class="md-anchor" id="FIFOQueue.__init__"></a>
Creates a queue that dequeues elements in a first-in first-out order.
@@ -1528,7 +1529,7 @@ element must have the respective fixed shape. If it is
unspecified, different queue elements may have different shapes,
but the use of `dequeue_many` is disallowed.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>capacity</b>: An integer. The upper bound on the number of elements
@@ -1545,7 +1546,7 @@ but the use of `dequeue_many` is disallowed.
- - -
-### class tf.RandomShuffleQueue <div class="md-anchor" id="RandomShuffleQueue">{#RandomShuffleQueue}</div>
+### class tf.RandomShuffleQueue <a class="md-anchor" id="RandomShuffleQueue"></a>
A queue implementation that dequeues elements in a random order.
@@ -1554,7 +1555,7 @@ this class.
- - -
-#### tf.RandomShuffleQueue.__init__(capacity, min_after_dequeue, dtypes, shapes=None, seed=None, shared_name=None, name='random_shuffle_queue') {#RandomShuffleQueue.__init__}
+#### tf.RandomShuffleQueue.__init__(capacity, min_after_dequeue, dtypes, shapes=None, seed=None, shared_name=None, name='random_shuffle_queue') <a class="md-anchor" id="RandomShuffleQueue.__init__"></a>
Create a queue that dequeues elements in a random order.
@@ -1580,7 +1581,7 @@ by blocking those operations until sufficient elements have been
enqueued. The `min_after_dequeue` argument is ignored after the
queue has been closed.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>capacity</b>: An integer. The upper bound on the number of elements
@@ -1599,81 +1600,81 @@ queue has been closed.
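The role of `min_after_dequeue` can be illustrated with a toy in-memory pool (a conceptual analogue, not the queue op): elements leave in random order, and a dequeue that would drain the pool below `min_after_dequeue` is refused, so there is always a population left to randomize over.

```python
import random

class ShufflePool:
    def __init__(self, min_after_dequeue, seed=None):
        self.min_after_dequeue = min_after_dequeue
        self.items = []
        self.rng = random.Random(seed)

    def enqueue(self, x):
        self.items.append(x)

    def dequeue(self):
        # Refuse to drop below min_after_dequeue, mirroring how the real
        # queue blocks dequeues instead of under-filling its buffer.
        if len(self.items) <= self.min_after_dequeue:
            raise RuntimeError("would drop below min_after_dequeue")
        return self.items.pop(self.rng.randrange(len(self.items)))

pool = ShufflePool(min_after_dequeue=2, seed=42)
for x in range(5):
    pool.enqueue(x)
drawn = [pool.dequeue() for _ in range(3)]   # three draws leave exactly 2 behind
```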
-## Dealing with the filesystem <div class="md-anchor" id="AUTOGENERATED-dealing-with-the-filesystem">{#AUTOGENERATED-dealing-with-the-filesystem}</div>
+## Dealing with the filesystem <a class="md-anchor" id="AUTOGENERATED-dealing-with-the-filesystem"></a>
- - -
-### tf.matching_files(pattern, name=None) <div class="md-anchor" id="matching_files">{#matching_files}</div>
+### tf.matching_files(pattern, name=None) <a class="md-anchor" id="matching_files"></a>
Returns the set of files matching a pattern.
Note that this routine only supports wildcard characters in the
basename portion of the pattern, not in the directory portion.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>pattern</b>: A `Tensor` of type `string`. A (scalar) shell wildcard pattern.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor` of type `string`. A vector of matching filenames.
- - -
-### tf.read_file(filename, name=None) <div class="md-anchor" id="read_file">{#read_file}</div>
+### tf.read_file(filename, name=None) <a class="md-anchor" id="read_file"></a>
Reads and outputs the entire contents of the input filename.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>filename</b>: A `Tensor` of type `string`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor` of type `string`.
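Both filesystem ops have close stdlib counterparts, sketched below under the same restriction noted above: wildcards are honored only in the basename, and the directory portion is taken literally. (These are conceptual analogues, not the graph ops.)

```python
import fnmatch
import os

def matching_files(pattern):
    # Wildcards apply only to the basename; the directory part is literal.
    dirname, base = os.path.split(pattern)
    return sorted(os.path.join(dirname, name)
                  for name in os.listdir(dirname or ".")
                  if fnmatch.fnmatch(name, base))

def read_file(filename):
    # Return the entire contents of the file as a byte string.
    with open(filename, "rb") as f:
        return f.read()
```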
-## Input pipeline <div class="md-anchor" id="AUTOGENERATED-input-pipeline">{#AUTOGENERATED-input-pipeline}</div>
+## Input pipeline <a class="md-anchor" id="AUTOGENERATED-input-pipeline"></a>
TensorFlow functions for setting up an input-prefetching pipeline.
Please see the [reading data how-to](../../how_tos/reading_data/index.md)
for context.
-### Beginning of an input pipeline <div class="md-anchor" id="AUTOGENERATED-beginning-of-an-input-pipeline">{#AUTOGENERATED-beginning-of-an-input-pipeline}</div>
+### Beginning of an input pipeline <a class="md-anchor" id="AUTOGENERATED-beginning-of-an-input-pipeline"></a>
The "producer" functions add a queue to the graph and a corresponding
`QueueRunner` for running the subgraph that fills that queue.
- - -
-### tf.train.match_filenames_once(pattern, name=None) <div class="md-anchor" id="match_filenames_once">{#match_filenames_once}</div>
+### tf.train.match_filenames_once(pattern, name=None) <a class="md-anchor" id="match_filenames_once"></a>
Save the list of files matching pattern, so it is only computed once.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>pattern</b>: A file pattern (glob).
* <b>name</b>: A name for the operations (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A variable that is initialized to the list of files matching pattern.
- - -
-### tf.train.limit_epochs(tensor, num_epochs=None, name=None) <div class="md-anchor" id="limit_epochs">{#limit_epochs}</div>
+### tf.train.limit_epochs(tensor, num_epochs=None, name=None) <a class="md-anchor" id="limit_epochs"></a>
Returns tensor num_epochs times and then raises an OutOfRange error.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>tensor</b>: Any Tensor.
@@ -1681,18 +1682,18 @@ Returns tensor num_epochs times and then raises an OutOfRange error.
of steps the output tensor may be evaluated.
* <b>name</b>: A name for the operations (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
tensor or OutOfRange.
- - -
-### tf.train.range_input_producer(limit, num_epochs=None, shuffle=True, seed=None, capacity=32, name=None) <div class="md-anchor" id="range_input_producer">{#range_input_producer}</div>
+### tf.train.range_input_producer(limit, num_epochs=None, shuffle=True, seed=None, capacity=32, name=None) <a class="md-anchor" id="range_input_producer"></a>
Produces the integers from 0 to limit-1 in a queue.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>limit</b>: An int32 scalar tensor.
@@ -1706,7 +1707,7 @@ Produces the integers from 0 to limit-1 in a queue.
* <b>capacity</b>: An integer. Sets the queue capacity.
* <b>name</b>: A name for the operations (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A Queue with the output integers. A QueueRunner for the Queue
is added to the current Graph's QUEUE_RUNNER collection.
@@ -1714,14 +1715,14 @@ Produces the integers from 0 to limit-1 in a queue.
- - -
-### tf.train.slice_input_producer(tensor_list, num_epochs=None, shuffle=True, seed=None, capacity=32, name=None) <div class="md-anchor" id="slice_input_producer">{#slice_input_producer}</div>
+### tf.train.slice_input_producer(tensor_list, num_epochs=None, shuffle=True, seed=None, capacity=32, name=None) <a class="md-anchor" id="slice_input_producer"></a>
Produces a slice of each Tensor in tensor_list.
Implemented using a Queue -- a QueueRunner for the Queue
is added to the current Graph's QUEUE_RUNNER collection.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>tensor_list</b>: A list of Tensors. Every Tensor in tensor_list must
@@ -1734,7 +1735,7 @@ is added to the current Graph's QUEUE_RUNNER collection.
* <b>capacity</b>: An integer. Sets the queue capacity.
* <b>name</b>: A name for the operations (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A list of tensors, one for each element of tensor_list. If the tensor
in tensor_list has shape [N, a, b, .., z], then the corresponding output
@@ -1743,11 +1744,11 @@ is added to the current Graph's QUEUE_RUNNER collection.
- - -
-### tf.train.string_input_producer(string_tensor, num_epochs=None, shuffle=True, seed=None, capacity=32, name=None) <div class="md-anchor" id="string_input_producer">{#string_input_producer}</div>
+### tf.train.string_input_producer(string_tensor, num_epochs=None, shuffle=True, seed=None, capacity=32, name=None) <a class="md-anchor" id="string_input_producer"></a>
Output strings (e.g. filenames) to a queue for an input pipeline.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>string_tensor</b>: A 1-D string tensor with the strings to produce.
@@ -1762,14 +1763,14 @@ Output strings (e.g. filenames) to a queue for an input pipeline.
* <b>capacity</b>: An integer. Sets the queue capacity.
* <b>name</b>: A name for the operations (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A queue with the output strings. A QueueRunner for the Queue
is added to the current Graph's QUEUE_RUNNER collection.
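Stripped of queues and threads, the producer functions amount to an epoch-limited, optionally shuffled stream. A generator sketch (a hypothetical analogue, not the TF op):

```python
import random

def string_producer(strings, num_epochs=None, shuffle=True, seed=None):
    # Emit the strings for num_epochs epochs (forever if None), reshuffling
    # each epoch when shuffle is True. Generator exhaustion plays the role
    # of the OutOfRange error.
    rng = random.Random(seed)
    epoch = 0
    while num_epochs is None or epoch < num_epochs:
        order = list(strings)
        if shuffle:
            rng.shuffle(order)
        yield from order
        epoch += 1

list(string_producer(["f1", "f2"], num_epochs=2, shuffle=False))
# -> ["f1", "f2", "f1", "f2"]
```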
-### Batching at the end of an input pipeline <div class="md-anchor" id="AUTOGENERATED-batching-at-the-end-of-an-input-pipeline">{#AUTOGENERATED-batching-at-the-end-of-an-input-pipeline}</div>
+### Batching at the end of an input pipeline <a class="md-anchor" id="AUTOGENERATED-batching-at-the-end-of-an-input-pipeline"></a>
These functions add a queue to the graph to assemble a batch of examples, with
possible shuffling. They also add a `QueueRunner` for running the subgraph
@@ -1790,14 +1791,14 @@ want them run by N threads.
- - -
-### tf.train.batch(tensor_list, batch_size, num_threads=1, capacity=32, enqueue_many=False, shapes=None, name=None) <div class="md-anchor" id="batch">{#batch}</div>
+### tf.train.batch(tensor_list, batch_size, num_threads=1, capacity=32, enqueue_many=False, shapes=None, name=None) <a class="md-anchor" id="batch"></a>
Run tensor_list to fill a queue to create batches.
Implemented using a queue -- a QueueRunner for the queue
is added to the current Graph's QUEUE_RUNNER collection.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>tensor_list</b>: The list of tensors to enqueue.
@@ -1816,7 +1817,7 @@ is added to the current Graph's QUEUE_RUNNER collection.
if enqueue_many is True).
* <b>name</b>: A name for the operations (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A list of tensors with the same number and types as tensor_list.
If enqueue_many is false, then an input tensor with shape
@@ -1828,7 +1829,7 @@ is added to the current Graph's QUEUE_RUNNER collection.
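The batching itself, with queues and threads stripped away, is just grouping a stream into fixed-size chunks. A minimal sketch (not the TF op):

```python
def batch(examples, batch_size):
    # Group a stream of examples into fixed-size batches. A trailing
    # partial batch is held back, much as the real op blocks until a
    # full batch is available.
    buf = []
    for x in examples:
        buf.append(x)
        if len(buf) == batch_size:
            yield buf
            buf = []

list(batch(range(7), 3))  # -> [[0, 1, 2], [3, 4, 5]]
```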
- - -
-### tf.train.batch_join(tensor_list_list, batch_size, capacity=32, enqueue_many=False, shapes=None, name=None) <div class="md-anchor" id="batch_join">{#batch_join}</div>
+### tf.train.batch_join(tensor_list_list, batch_size, capacity=32, enqueue_many=False, shapes=None, name=None) <a class="md-anchor" id="batch_join"></a>
Run a list of tensors to fill a queue to create batches of examples.
@@ -1855,7 +1856,7 @@ will have shape `[batch_size] + x.shape[1:]`.
The `capacity` argument controls how long the prefetching
is allowed to grow the queues.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>tensor_list_list</b>: A list of tuples of tensors to enqueue.
@@ -1867,7 +1868,7 @@ is allowed to grow the queues.
inferred shapes for `tensor_list_list[i]`.
* <b>name</b>: A name for the operations (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A list of tensors with the same number and types as
`tensor_list_list[i]`.
@@ -1875,7 +1876,7 @@ is allowed to grow the queues.
- - -
-### tf.train.shuffle_batch(tensor_list, batch_size, capacity, min_after_dequeue, num_threads=1, seed=None, enqueue_many=False, shapes=None, name=None) <div class="md-anchor" id="shuffle_batch">{#shuffle_batch}</div>
+### tf.train.shuffle_batch(tensor_list, batch_size, capacity, min_after_dequeue, num_threads=1, seed=None, enqueue_many=False, shapes=None, name=None) <a class="md-anchor" id="shuffle_batch"></a>
Create batches by randomly shuffling tensors.
@@ -1886,7 +1887,7 @@ This adds:
* and a QueueRunner is added to the current Graph's QUEUE_RUNNER collection,
to enqueue the tensors from tensor_list.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>tensor_list</b>: The list of tensors to enqueue.
@@ -1908,7 +1909,7 @@ This adds:
if enqueue_many is True).
* <b>name</b>: A name for the operations (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A list of tensors with the same number and types as tensor_list.
If enqueue_many is false, then an input tensor with shape
@@ -1920,7 +1921,7 @@ This adds:
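The mixing strategy behind shuffled batching can be sketched without queues or threads (a conceptual analogue, not the TF op): pool incoming examples until at least `min_after_dequeue + batch_size` are buffered (capped at `capacity`), then draw each batch element uniformly from the pool.

```python
import random

def shuffle_batch(examples, batch_size, capacity, min_after_dequeue, seed=None):
    rng = random.Random(seed)
    threshold = min(capacity, min_after_dequeue + batch_size)
    pool = []
    for x in examples:
        pool.append(x)
        if len(pool) >= threshold:
            # Draw the batch uniformly at random from the buffered pool.
            yield [pool.pop(rng.randrange(len(pool))) for _ in range(batch_size)]
```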
- - -
-### tf.train.shuffle_batch_join(tensor_list_list, batch_size, capacity, min_after_dequeue, seed=None, enqueue_many=False, shapes=None, name=None) <div class="md-anchor" id="shuffle_batch_join">{#shuffle_batch_join}</div>
+### tf.train.shuffle_batch_join(tensor_list_list, batch_size, capacity, min_after_dequeue, seed=None, enqueue_many=False, shapes=None, name=None) <a class="md-anchor" id="shuffle_batch_join"></a>
Create batches by randomly shuffling tensors.
@@ -1932,7 +1933,7 @@ It adds:
* and a QueueRunner is added to the current Graph's QUEUE_RUNNER collection,
to enqueue the tensors from tensor_list_list.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>tensor_list_list</b>: A list of tuples of tensors to enqueue.
@@ -1958,7 +1959,7 @@ It adds:
leaving off the first dimension if enqueue_many is `True`).
* <b>name</b>: A name for the operations (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A list of tensors with the same number and types as
tensor_list_list[i]. If enqueue_many is false, then an input
diff --git a/tensorflow/g3doc/api_docs/python/math_ops.md b/tensorflow/g3doc/api_docs/python/math_ops.md
index 53f7c59df7..3ccf56443f 100644
--- a/tensorflow/g3doc/api_docs/python/math_ops.md
+++ b/tensorflow/g3doc/api_docs/python/math_ops.md
@@ -1,12 +1,13 @@
<!-- This file is machine generated: DO NOT EDIT! -->
-# Math
+# Math <a class="md-anchor" id="AUTOGENERATED-math"></a>
Note: Functions taking `Tensor` arguments can also take anything
accepted by [`tf.convert_to_tensor`](framework.md#convert_to_tensor).
<!-- TOC-BEGIN This section is generated by neural network: DO NOT EDIT! -->
## Contents
+### [Math](#AUTOGENERATED-math)
* [Arithmetic Operators](#AUTOGENERATED-arithmetic-operators)
* [tf.add(x, y, name=None)](#add)
* [tf.sub(x, y, name=None)](#sub)
@@ -79,130 +80,130 @@ accepted by [`tf.convert_to_tensor`](framework.md#convert_to_tensor).
<!-- TOC-END This section was generated by neural network, THANKS FOR READING! -->
-## Arithmetic Operators <div class="md-anchor" id="AUTOGENERATED-arithmetic-operators">{#AUTOGENERATED-arithmetic-operators}</div>
+## Arithmetic Operators <a class="md-anchor" id="AUTOGENERATED-arithmetic-operators"></a>
TensorFlow provides several operations that you can use to add basic arithmetic
operators to your graph.
- - -
-### tf.add(x, y, name=None) <div class="md-anchor" id="add">{#add}</div>
+### tf.add(x, y, name=None) <a class="md-anchor" id="add"></a>
Returns x + y element-wise.
*NOTE*: Add supports broadcasting. AddN does not.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>x</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int8`, `int16`, `int32`, `complex64`, `int64`.
* <b>y</b>: A `Tensor`. Must have the same type as `x`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor`. Has the same type as `x`.
- - -
-### tf.sub(x, y, name=None) <div class="md-anchor" id="sub">{#sub}</div>
+### tf.sub(x, y, name=None) <a class="md-anchor" id="sub"></a>
Returns x - y element-wise.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>x</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `complex64`, `int64`.
* <b>y</b>: A `Tensor`. Must have the same type as `x`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor`. Has the same type as `x`.
- - -
-### tf.mul(x, y, name=None) <div class="md-anchor" id="mul">{#mul}</div>
+### tf.mul(x, y, name=None) <a class="md-anchor" id="mul"></a>
Returns x * y element-wise.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>x</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int8`, `int16`, `int32`, `complex64`, `int64`.
* <b>y</b>: A `Tensor`. Must have the same type as `x`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor`. Has the same type as `x`.
- - -
-### tf.div(x, y, name=None) <div class="md-anchor" id="div">{#div}</div>
+### tf.div(x, y, name=None) <a class="md-anchor" id="div"></a>
Returns x / y element-wise.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>x</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `complex64`, `int64`.
* <b>y</b>: A `Tensor`. Must have the same type as `x`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor`. Has the same type as `x`.
- - -
-### tf.mod(x, y, name=None) <div class="md-anchor" id="mod">{#mod}</div>
+### tf.mod(x, y, name=None) <a class="md-anchor" id="mod"></a>
Returns element-wise remainder of division.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>x</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`, `float32`, `float64`.
* <b>y</b>: A `Tensor`. Must have the same type as `x`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor`. Has the same type as `x`.
-## Basic Math Functions <div class="md-anchor" id="AUTOGENERATED-basic-math-functions">{#AUTOGENERATED-basic-math-functions}</div>
+## Basic Math Functions <a class="md-anchor" id="AUTOGENERATED-basic-math-functions"></a>
TensorFlow provides several operations that you can use to add basic
mathematical functions to your graph.
- - -
-### tf.add_n(inputs, name=None) <div class="md-anchor" id="add_n">{#add_n}</div>
+### tf.add_n(inputs, name=None) <a class="md-anchor" id="add_n"></a>
Adds all input tensors element-wise.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>inputs</b>: A list of at least 1 `Tensor` objects of the same type in: `float32`, `float64`, `int64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `qint8`, `quint8`, `qint32`.
Must all be the same size and shape.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor`. Has the same type as `inputs`.
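As a rough sketch of these semantics (NumPy stands in for TensorFlow here; the arrays are illustrative, not from the docs above), `tf.add_n` is an element-wise sum over a list of same-shaped tensors:

```python
import numpy as np

# add_n sums a list of same-shaped arrays element-wise; the NumPy
# equivalent is summing the stacked arrays along a new leading axis.
def add_n(inputs):
    assert all(a.shape == inputs[0].shape for a in inputs)
    return np.sum(np.stack(inputs), axis=0)

a = np.array([[1, 2], [3, 4]])
b = np.array([[5, 6], [7, 8]])
result = add_n([a, b, a])  # a contributes twice
```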
- - -
-### tf.abs(x, name=None) <div class="md-anchor" id="abs">{#abs}</div>
+### tf.abs(x, name=None) <a class="md-anchor" id="abs"></a>
Computes the absolute value of a tensor.
@@ -214,96 +215,96 @@ an input element and y is an output element, this operation computes
See [`tf.complex_abs()`](#complex_abs) to compute the absolute value of a complex
number.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>x</b>: A `Tensor` of type `float`, `double`, `int32`, or `int64`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor` the same size and type as `x` with absolute values.
- - -
-### tf.neg(x, name=None) <div class="md-anchor" id="neg">{#neg}</div>
+### tf.neg(x, name=None) <a class="md-anchor" id="neg"></a>
Computes numerical negative value element-wise.
I.e., \\(y = -x\\).
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>x</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `complex64`, `int64`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor`. Has the same type as `x`.
- - -
-### tf.sign(x, name=None) <div class="md-anchor" id="sign">{#sign}</div>
+### tf.sign(x, name=None) <a class="md-anchor" id="sign"></a>
Returns an element-wise indication of the sign of a number.
y = sign(x) = -1 if x < 0; 0 if x == 0; 1 if x > 0.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>x</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor`. Has the same type as `x`.
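The piecewise rule above can be checked quickly with NumPy, whose `np.sign` implements the same definition for real inputs (this is a sanity check, not the TensorFlow implementation):

```python
import numpy as np

# sign(x) = -1 if x < 0; 0 if x == 0; 1 if x > 0
x = np.array([-3.0, 0.0, 2.5])
y = np.sign(x)
```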
- - -
-### tf.inv(x, name=None) <div class="md-anchor" id="inv">{#inv}</div>
+### tf.inv(x, name=None) <a class="md-anchor" id="inv"></a>
Computes the reciprocal of x element-wise.
I.e., \\(y = 1 / x\\).
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>x</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `complex64`, `int64`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor`. Has the same type as `x`.
- - -
-### tf.square(x, name=None) <div class="md-anchor" id="square">{#square}</div>
+### tf.square(x, name=None) <a class="md-anchor" id="square"></a>
Computes square of x element-wise.
I.e., \\(y = x * x = x^2\\).
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>x</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `complex64`, `int64`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor`. Has the same type as `x`.
- - -
-### tf.round(x, name=None) <div class="md-anchor" id="round">{#round}</div>
+### tf.round(x, name=None) <a class="md-anchor" id="round"></a>
Rounds the values of a tensor to the nearest integer, element-wise.
@@ -314,58 +315,58 @@ For example:
tf.round(a) ==> [ 1.0, 3.0, 2.0, -4.0 ]
```
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>x</b>: A `Tensor` of type `float` or `double`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor` of same shape and type as `x`.
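Note that the example output above maps 2.5 to 3.0, i.e. halves round away from zero, which differs from Python's and NumPy's default round-half-to-even. A sketch of that rounding rule (the input values here are assumptions chosen to be consistent with the shown output, since the diff elides them):

```python
import math

# Round-half-away-from-zero, matching the documented example
# (2.5 -> 3.0), unlike round()'s round-half-to-even behavior.
def round_half_away(x):
    return math.copysign(math.floor(abs(x) + 0.5), x)

vals = [0.9, 2.5, 2.3, -4.4]
rounded = [round_half_away(v) for v in vals]
```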
- - -
-### tf.sqrt(x, name=None) <div class="md-anchor" id="sqrt">{#sqrt}</div>
+### tf.sqrt(x, name=None) <a class="md-anchor" id="sqrt"></a>
Computes square root of x element-wise.
I.e., \\(y = \sqrt{x} = x^{1/2}\\).
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>x</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `complex64`, `int64`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor`. Has the same type as `x`.
- - -
-### tf.rsqrt(x, name=None) <div class="md-anchor" id="rsqrt">{#rsqrt}</div>
+### tf.rsqrt(x, name=None) <a class="md-anchor" id="rsqrt"></a>
Computes reciprocal of square root of x element-wise.
I.e., \\(y = 1 / \sqrt{x}\\).
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>x</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `complex64`, `int64`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor`. Has the same type as `x`.
- - -
-### tf.pow(x, y, name=None) <div class="md-anchor" id="pow">{#pow}</div>
+### tf.pow(x, y, name=None) <a class="md-anchor" id="pow"></a>
Computes the power of one value to another.
@@ -378,167 +379,167 @@ corresponding elements in `x` and `y`. For example:
tf.pow(x, y) ==> [[256, 65536], [9, 27]]
```
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>x</b>: A `Tensor` of type `float`, `double`, `int32`, `complex64`, or `int64`.
* <b>y</b>: A `Tensor` of type `float`, `double`, `int32`, `complex64`, or `int64`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor`.
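The documented result `[[256, 65536], [9, 27]]` is reproducible element-wise; the inputs below are reconstructed assumptions consistent with that output (the diff elides the original example inputs), with NumPy standing in for TensorFlow:

```python
import numpy as np

# Element-wise power: out[i, j] = x[i, j] ** y[i, j]
x = np.array([[2, 2], [3, 3]])
y = np.array([[8, 16], [2, 3]])
out = np.power(x, y)
```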
- - -
-### tf.exp(x, name=None) <div class="md-anchor" id="exp">{#exp}</div>
+### tf.exp(x, name=None) <a class="md-anchor" id="exp"></a>
Computes exponential of x element-wise. \\(y = e^x\\).
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>x</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `complex64`, `int64`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor`. Has the same type as `x`.
- - -
-### tf.log(x, name=None) <div class="md-anchor" id="log">{#log}</div>
+### tf.log(x, name=None) <a class="md-anchor" id="log"></a>
Computes natural logarithm of x element-wise.
I.e., \\(y = \log_e x\\).
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>x</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `complex64`, `int64`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor`. Has the same type as `x`.
- - -
-### tf.ceil(x, name=None) <div class="md-anchor" id="ceil">{#ceil}</div>
+### tf.ceil(x, name=None) <a class="md-anchor" id="ceil"></a>
Returns element-wise smallest integer not less than x.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>x</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor`. Has the same type as `x`.
- - -
-### tf.floor(x, name=None) <div class="md-anchor" id="floor">{#floor}</div>
+### tf.floor(x, name=None) <a class="md-anchor" id="floor"></a>
Returns element-wise largest integer not greater than x.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>x</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor`. Has the same type as `x`.
- - -
-### tf.maximum(x, y, name=None) <div class="md-anchor" id="maximum">{#maximum}</div>
+### tf.maximum(x, y, name=None) <a class="md-anchor" id="maximum"></a>
Returns the max of x and y (i.e. x > y ? x : y) element-wise; supports broadcasting.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>x</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`.
* <b>y</b>: A `Tensor`. Must have the same type as `x`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor`. Has the same type as `x`.
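Broadcasting here means a scalar or lower-rank operand is expanded against the other operand's shape before the element-wise comparison. A NumPy sketch (values are illustrative assumptions; `np.maximum` follows the same broadcasting rules):

```python
import numpy as np

# A scalar second operand broadcasts across the whole array;
# maximum(x, 0) is the familiar "clamp negatives to zero" pattern.
x = np.array([1.0, -2.0, 3.0])
clamped = np.maximum(x, 0.0)
```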
- - -
-### tf.minimum(x, y, name=None) <div class="md-anchor" id="minimum">{#minimum}</div>
+### tf.minimum(x, y, name=None) <a class="md-anchor" id="minimum"></a>
Returns the min of x and y (i.e. x < y ? x : y) element-wise; supports broadcasting.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>x</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`.
* <b>y</b>: A `Tensor`. Must have the same type as `x`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor`. Has the same type as `x`.
- - -
-### tf.cos(x, name=None) <div class="md-anchor" id="cos">{#cos}</div>
+### tf.cos(x, name=None) <a class="md-anchor" id="cos"></a>
Computes cos of x element-wise.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>x</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `complex64`, `int64`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor`. Has the same type as `x`.
- - -
-### tf.sin(x, name=None) <div class="md-anchor" id="sin">{#sin}</div>
+### tf.sin(x, name=None) <a class="md-anchor" id="sin"></a>
Computes sin of x element-wise.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>x</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `complex64`, `int64`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor`. Has the same type as `x`.
-## Matrix Math Functions <div class="md-anchor" id="AUTOGENERATED-matrix-math-functions">{#AUTOGENERATED-matrix-math-functions}</div>
+## Matrix Math Functions <a class="md-anchor" id="AUTOGENERATED-matrix-math-functions"></a>
TensorFlow provides several operations that you can use to add basic
mathematical functions for matrices to your graph.
- - -
-### tf.diag(diagonal, name=None) <div class="md-anchor" id="diag">{#diag}</div>
+### tf.diag(diagonal, name=None) <a class="md-anchor" id="diag"></a>
Returns a diagonal tensor with given diagonal values.
@@ -560,21 +561,21 @@ tf.diag(diagonal) ==> [[1, 0, 0, 0]
[0, 0, 0, 4]]
```
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>diagonal</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`.
Rank k tensor where k is at most 3.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor`. Has the same type as `diagonal`.
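For a rank-1 input this matches `np.diag` (a sketch only — `tf.diag` generalizes to rank-k inputs with rank-2k outputs, which `np.diag` does not cover):

```python
import numpy as np

# The rank-1 input becomes the main diagonal of a square matrix
# of zeros, matching the documented 4x4 example.
d = np.array([1, 2, 3, 4])
m = np.diag(d)
```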
- - -
-### tf.transpose(a, perm=None, name='transpose') <div class="md-anchor" id="transpose">{#transpose}</div>
+### tf.transpose(a, perm=None, name='transpose') <a class="md-anchor" id="transpose"></a>
Transposes `a`. Permutes the dimensions according to `perm`.
@@ -612,14 +613,14 @@ tf.transpose(b, perm=[0, 2, 1]) ==> [[[1 4]
[9 12]]]
```
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>a</b>: A `Tensor`.
* <b>perm</b>: A permutation of the dimensions of `a`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A transposed `Tensor`.
@@ -627,7 +628,7 @@ tf.transpose(b, perm=[0, 2, 1]) ==> [[[1 4]
- - -
-### tf.matmul(a, b, transpose_a=False, transpose_b=False, a_is_sparse=False, b_is_sparse=False, name=None) <div class="md-anchor" id="matmul">{#matmul}</div>
+### tf.matmul(a, b, transpose_a=False, transpose_b=False, a_is_sparse=False, b_is_sparse=False, name=None) <a class="md-anchor" id="matmul"></a>
Multiplies matrix `a` by matrix `b`, producing `a` * `b`.
@@ -658,7 +659,7 @@ c = tf.matmul(a, b) => [[58 64]
[139 154]]
```
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>a</b>: `Tensor` of type `float`, `double`, `int32` or `complex64`.
@@ -669,14 +670,14 @@ c = tf.matmul(a, b) => [[58 64]
* <b>b_is_sparse</b>: If `True`, `b` is treated as a sparse matrix.
* <b>name</b>: Name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor` of the same type as `a`.
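The documented result `[[58 64] [139 154]]` follows from a standard 2-D matrix product; the inputs below are reconstructed assumptions consistent with that output (the diff elides the original example inputs):

```python
import numpy as np

# c[i, j] = sum_k a[i, k] * b[k, j]; shapes (2,3) @ (3,2) -> (2,2)
a = np.array([[1, 2, 3], [4, 5, 6]])
b = np.array([[7, 8], [9, 10], [11, 12]])
c = a @ b
```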
- - -
-### tf.batch_matmul(x, y, adj_x=None, adj_y=None, name=None) <div class="md-anchor" id="batch_matmul">{#batch_matmul}</div>
+### tf.batch_matmul(x, y, adj_x=None, adj_y=None, name=None) <a class="md-anchor" id="batch_matmul"></a>
Multiplies slices of two tensors in batches.
@@ -699,7 +700,7 @@ It is computed as:
out[..., :, :] = matrix(x[..., :, :]) * matrix(y[..., :, :])
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>x</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `complex64`.
@@ -712,7 +713,7 @@ It is computed as:
If `True`, adjoint the slices of `y`. Defaults to `False`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor`. Has the same type as `x`.
3-D or higher with shape `[..., r_o, c_o]`
@@ -721,18 +722,18 @@ It is computed as:
- - -
-### tf.matrix_determinant(input, name=None) <div class="md-anchor" id="matrix_determinant">{#matrix_determinant}</div>
+### tf.matrix_determinant(input, name=None) <a class="md-anchor" id="matrix_determinant"></a>
Calculates the determinant of a square matrix.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>input</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`.
A tensor of shape `[M, M]`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor`. Has the same type as `input`.
A scalar, equal to the determinant of the input.
@@ -740,7 +741,7 @@ Calculates the determinant of a square matrix.
- - -
-### tf.batch_matrix_determinant(input, name=None) <div class="md-anchor" id="batch_matrix_determinant">{#batch_matrix_determinant}</div>
+### tf.batch_matrix_determinant(input, name=None) <a class="md-anchor" id="batch_matrix_determinant"></a>
Calculates the determinants for a batch of square matrices.
@@ -748,14 +749,14 @@ The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions
form square matrices. The output is a 1-D tensor containing the determinants
for all input submatrices `[..., :, :]`.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>input</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`.
Shape is `[..., M, M]`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor`. Has the same type as `input`. Shape is `[...]`.
@@ -763,18 +764,18 @@ for all input submatrices `[..., :, :]`.
- - -
-### tf.matrix_inverse(input, name=None) <div class="md-anchor" id="matrix_inverse">{#matrix_inverse}</div>
+### tf.matrix_inverse(input, name=None) <a class="md-anchor" id="matrix_inverse"></a>
Calculates the inverse of a square invertible matrix. Checks for invertibility.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>input</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`.
Shape is `[M, M]`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor`. Has the same type as `input`.
Shape is `[M, M]` containing the matrix inverse of the input.
@@ -782,7 +783,7 @@ Calculates the inverse of a square invertible matrix. Checks for invertibility.
- - -
-### tf.batch_matrix_inverse(input, name=None) <div class="md-anchor" id="batch_matrix_inverse">{#batch_matrix_inverse}</div>
+### tf.batch_matrix_inverse(input, name=None) <a class="md-anchor" id="batch_matrix_inverse"></a>
Calculates the inverse of square invertible matrices. Checks for invertibility.
@@ -790,14 +791,14 @@ The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions
form square matrices. The output is a tensor of the same shape as the input
containing the inverse for all input submatrices `[..., :, :]`.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>input</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`.
Shape is `[..., M, M]`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor`. Has the same type as `input`. Shape is `[..., M, M]`.
@@ -805,7 +806,7 @@ containing the inverse for all input submatrices `[..., :, :]`.
- - -
-### tf.cholesky(input, name=None) <div class="md-anchor" id="cholesky">{#cholesky}</div>
+### tf.cholesky(input, name=None) <a class="md-anchor" id="cholesky"></a>
Calculates the Cholesky decomposition of a square matrix.
@@ -816,21 +817,21 @@ will not be read.
The result is the lower-triangular matrix of the Cholesky decomposition of the
input.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>input</b>: A `Tensor`. Must be one of the following types: `float64`, `float32`.
Shape is `[M, M]`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor`. Has the same type as `input`. Shape is `[M, M]`.
- - -
-### tf.batch_cholesky(input, name=None) <div class="md-anchor" id="batch_cholesky">{#batch_cholesky}</div>
+### tf.batch_cholesky(input, name=None) <a class="md-anchor" id="batch_cholesky"></a>
Calculates the Cholesky decomposition of a batch of square matrices.
@@ -839,27 +840,27 @@ form square matrices, with the same constraints as the single matrix Cholesky
decomposition above. The output is a tensor of the same shape as the input
containing the Cholesky decompositions for all input submatrices `[..., :, :]`.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>input</b>: A `Tensor`. Must be one of the following types: `float64`, `float32`.
Shape is `[..., M, M]`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor`. Has the same type as `input`. Shape is `[..., M, M]`.
-## Complex Number Functions <div class="md-anchor" id="AUTOGENERATED-complex-number-functions">{#AUTOGENERATED-complex-number-functions}</div>
+## Complex Number Functions <a class="md-anchor" id="AUTOGENERATED-complex-number-functions"></a>
TensorFlow provides several operations that you can use to add complex number
functions to your graph.
- - -
-### tf.complex(real, imag, name=None) <div class="md-anchor" id="complex">{#complex}</div>
+### tf.complex(real, imag, name=None) <a class="md-anchor" id="complex"></a>
Converts two real numbers to a complex number.
@@ -878,21 +879,21 @@ For example:
tf.complex(real, imag) ==> [[2.25 + 4.74j], [3.25 + 5.75j]]
```
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>real</b>: A `Tensor` of type `float`.
* <b>imag</b>: A `Tensor` of type `float`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor` of type `complex64`.
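In NumPy terms the operation is simply `real + 1j * imag`, pairing the two inputs element-wise (values below are illustrative; NumPy defaults to `complex128` where TensorFlow produces `complex64`):

```python
import numpy as np

# Pair a real part and an imaginary part element-wise into
# complex numbers.
real = np.array([2.25, 3.25])
imag = np.array([4.75, 5.75])
z = real + 1j * imag
```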
- - -
-### tf.complex_abs(x, name=None) <div class="md-anchor" id="complex_abs">{#complex_abs}</div>
+### tf.complex_abs(x, name=None) <a class="md-anchor" id="complex_abs"></a>
Computes the complex absolute value of a tensor.
@@ -908,20 +909,20 @@ For example:
tf.complex_abs(x) ==> [5.25594902, 6.60492229]
```
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>x</b>: A `Tensor` of type `complex64`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor` of type `float32`.
- - -
-### tf.conj(in_, name=None) <div class="md-anchor" id="conj">{#conj}</div>
+### tf.conj(in_, name=None) <a class="md-anchor" id="conj"></a>
Returns the complex conjugate of a complex number.
@@ -939,20 +940,20 @@ For example:
tf.conj(in) ==> [-2.25 - 4.75j, 3.25 - 5.75j]
```
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>in_</b>: A `Tensor` of type `complex64`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor` of type `complex64`.
- - -
-### tf.imag(in_, name=None) <div class="md-anchor" id="imag">{#imag}</div>
+### tf.imag(in_, name=None) <a class="md-anchor" id="imag"></a>
Returns the imaginary part of a complex number.
@@ -968,20 +969,20 @@ For example:
tf.imag(in) ==> [4.75, 5.75]
```
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>in_</b>: A `Tensor` of type `complex64`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor` of type `float32`.
- - -
-### tf.real(in_, name=None) <div class="md-anchor" id="real">{#real}</div>
+### tf.real(in_, name=None) <a class="md-anchor" id="real"></a>
Returns the real part of a complex number.
@@ -997,26 +998,26 @@ For example:
tf.real(in) ==> [-2.25, 3.25]
```
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>in_</b>: A `Tensor` of type `complex64`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor` of type `float32`.
-## Reduction <div class="md-anchor" id="AUTOGENERATED-reduction">{#AUTOGENERATED-reduction}</div>
+## Reduction <a class="md-anchor" id="AUTOGENERATED-reduction"></a>
TensorFlow provides several operations that you can use to perform
common math computations that reduce various dimensions of a tensor.
- - -
-### tf.reduce_sum(input_tensor, reduction_indices=None, keep_dims=False, name=None) <div class="md-anchor" id="reduce_sum">{#reduce_sum}</div>
+### tf.reduce_sum(input_tensor, reduction_indices=None, keep_dims=False, name=None) <a class="md-anchor" id="reduce_sum"></a>
Computes the sum of elements across dimensions of a tensor.
@@ -1040,7 +1041,7 @@ tf.reduce_sum(x, 1, keep_dims=True) ==> [[3], [3]]
tf.reduce_sum(x, [0, 1]) ==> 6
```
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>input_tensor</b>: The tensor to reduce. Should have numeric type.
@@ -1049,14 +1050,14 @@ tf.reduce_sum(x, [0, 1]) ==> 6
* <b>keep_dims</b>: If true, retains reduced dimensions with length 1.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
The reduced tensor.
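The example above maps cleanly onto NumPy, where `reduction_indices` corresponds to `axis` and `keep_dims` to `keepdims` (a semantic sketch, not the TensorFlow implementation):

```python
import numpy as np

# x is [[1, 1, 1], [1, 1, 1]], matching the documented example.
x = np.ones((2, 3), dtype=np.int32)
total = x.sum()                         # all dimensions reduced
rows = x.sum(axis=1, keepdims=True)     # reduced dim retained as length 1
cols = x.sum(axis=0)
```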
- - -
-### tf.reduce_prod(input_tensor, reduction_indices=None, keep_dims=False, name=None) <div class="md-anchor" id="reduce_prod">{#reduce_prod}</div>
+### tf.reduce_prod(input_tensor, reduction_indices=None, keep_dims=False, name=None) <a class="md-anchor" id="reduce_prod"></a>
Computes the product of elements across dimensions of a tensor.
@@ -1068,7 +1069,7 @@ are retained with length 1.
If `reduction_indices` has no entries, all dimensions are reduced, and a
tensor with a single element is returned.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>input_tensor</b>: The tensor to reduce. Should have numeric type.
@@ -1077,14 +1078,14 @@ tensor with a single element is returned.
* <b>keep_dims</b>: If true, retains reduced dimensions with length 1.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
The reduced tensor.
- - -
-### tf.reduce_min(input_tensor, reduction_indices=None, keep_dims=False, name=None) <div class="md-anchor" id="reduce_min">{#reduce_min}</div>
+### tf.reduce_min(input_tensor, reduction_indices=None, keep_dims=False, name=None) <a class="md-anchor" id="reduce_min"></a>
Computes the minimum of elements across dimensions of a tensor.
@@ -1096,7 +1097,7 @@ are retained with length 1.
If `reduction_indices` has no entries, all dimensions are reduced, and a
tensor with a single element is returned.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>input_tensor</b>: The tensor to reduce. Should have numeric type.
@@ -1105,14 +1106,14 @@ tensor with a single element is returned.
* <b>keep_dims</b>: If true, retains reduced dimensions with length 1.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
The reduced tensor.
- - -
-### tf.reduce_max(input_tensor, reduction_indices=None, keep_dims=False, name=None) <div class="md-anchor" id="reduce_max">{#reduce_max}</div>
+### tf.reduce_max(input_tensor, reduction_indices=None, keep_dims=False, name=None) <a class="md-anchor" id="reduce_max"></a>
Computes the maximum of elements across dimensions of a tensor.
@@ -1124,7 +1125,7 @@ are retained with length 1.
If `reduction_indices` has no entries, all dimensions are reduced, and a
tensor with a single element is returned.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>input_tensor</b>: The tensor to reduce. Should have numeric type.
@@ -1133,14 +1134,14 @@ tensor with a single element is returned.
* <b>keep_dims</b>: If true, retains reduced dimensions with length 1.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
The reduced tensor.
- - -
-### tf.reduce_mean(input_tensor, reduction_indices=None, keep_dims=False, name=None) <div class="md-anchor" id="reduce_mean">{#reduce_mean}</div>
+### tf.reduce_mean(input_tensor, reduction_indices=None, keep_dims=False, name=None) <a class="md-anchor" id="reduce_mean"></a>
Computes the mean of elements across dimensions of a tensor.
@@ -1162,7 +1163,7 @@ tf.reduce_mean(x, 0) ==> [1.5, 1.5]
tf.reduce_mean(x, 1) ==> [1., 2.]
```
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>input_tensor</b>: The tensor to reduce. Should have numeric type.
@@ -1171,14 +1172,14 @@ tf.reduce_mean(x, 1) ==> [1., 2.]
* <b>keep_dims</b>: If true, retains reduced dimensions with length 1.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
The reduced tensor.
- - -
-### tf.reduce_all(input_tensor, reduction_indices=None, keep_dims=False, name=None) <div class="md-anchor" id="reduce_all">{#reduce_all}</div>
+### tf.reduce_all(input_tensor, reduction_indices=None, keep_dims=False, name=None) <a class="md-anchor" id="reduce_all"></a>
Computes the "logical and" of elements across dimensions of a tensor.
@@ -1200,7 +1201,7 @@ tf.reduce_all(x, 0) ==> [False, False]
tf.reduce_all(x, 1) ==> [True, False]
```
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>input_tensor</b>: The boolean tensor to reduce.
@@ -1209,14 +1210,14 @@ tf.reduce_all(x, 1) ==> [True, False]
* <b>keep_dims</b>: If true, retains reduced dimensions with length 1.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
The reduced tensor.
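`np.all` with an `axis` argument behaves the same way as the "logical and" reduction described above, which reproduces the documented example:

```python
import numpy as np

# x is [[True, True], [False, False]], as in the example above.
x = np.array([[True, True], [False, False]])
per_row = np.all(x, axis=1)
per_col = np.all(x, axis=0)
```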
- - -
-### tf.reduce_any(input_tensor, reduction_indices=None, keep_dims=False, name=None) <div class="md-anchor" id="reduce_any">{#reduce_any}</div>
+### tf.reduce_any(input_tensor, reduction_indices=None, keep_dims=False, name=None) <a class="md-anchor" id="reduce_any"></a>
Computes the "logical or" of elements across dimensions of a tensor.
@@ -1238,7 +1239,7 @@ tf.reduce_any(x, 0) ==> [True, True]
tf.reduce_any(x, 1) ==> [True, False]
```
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>input_tensor</b>: The boolean tensor to reduce.
@@ -1247,7 +1248,7 @@ tf.reduce_any(x, 1) ==> [True, False]
* <b>keep_dims</b>: If true, retains reduced dimensions with length 1.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
The reduced tensor.
@@ -1255,7 +1256,7 @@ tf.reduce_any(x, 1) ==> [True, False]
- - -
-### tf.accumulate_n(inputs, shape=None, tensor_dtype=None, name=None) <div class="md-anchor" id="accumulate_n">{#accumulate_n}</div>
+### tf.accumulate_n(inputs, shape=None, tensor_dtype=None, name=None) <a class="md-anchor" id="accumulate_n"></a>
Returns the element-wise sum of a list of tensors.
@@ -1274,7 +1275,7 @@ tf.accumulate_n([a, b, a], shape=[2, 2], tensor_dtype=tf.int32)
==> [[7, 4], [6, 14]]
```
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>inputs</b>: A list of `Tensor` objects, each with same shape and type.
@@ -1282,11 +1283,11 @@ tf.accumulate_n([a, b, a], shape=[2, 2], tensor_dtype=tf.int32)
* <b>tensor_dtype</b>: The type of `inputs`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor` of same shape and type as the elements of `inputs`.
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>ValueError</b>: If `inputs` don't all have same shape and dtype or the shape
@@ -1294,7 +1295,7 @@ tf.accumulate_n([a, b, a], shape=[2, 2], tensor_dtype=tf.int32)
-## Segmentation <div class="md-anchor" id="AUTOGENERATED-segmentation">{#AUTOGENERATED-segmentation}</div>
+## Segmentation <a class="md-anchor" id="AUTOGENERATED-segmentation"></a>
TensorFlow provides several operations that you can use to perform common
math computations on tensor segments.
@@ -1317,7 +1318,7 @@ tf.segment_sum(c, tf.constant([0, 0, 1]))
- - -
-### tf.segment_sum(data, segment_ids, name=None) <div class="md-anchor" id="segment_sum">{#segment_sum}</div>
+### tf.segment_sum(data, segment_ids, name=None) <a class="md-anchor" id="segment_sum"></a>
Computes the sum along segments of a tensor.
@@ -1332,7 +1333,7 @@ that `segment_ids[j] == i`.
<img style="width:100%" src="../images/SegmentSum.png" alt>
</div>
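The segment-sum semantics described above (`output[i]` is the sum of all `data[j]` with `segment_ids[j] == i`) can be sketched in pure Python. This is an illustrative sketch of the semantics only, not the TensorFlow implementation; it handles 1-D `data` and assumes `segment_ids` is sorted, as the op requires.

```python
# Sketch of segment_sum semantics: output[i] = sum of data[j] where
# segment_ids[j] == i.  Illustrative only; segment_ids assumed sorted.
def segment_sum(data, segment_ids):
    num_segments = segment_ids[-1] + 1
    out = [0] * num_segments
    for value, seg in zip(data, segment_ids):
        out[seg] += value
    return out

print(segment_sum([1, 2, 3, 4], [0, 0, 1, 2]))  # [3, 3, 4]
```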
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>data</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`.
@@ -1341,7 +1342,7 @@ that `segment_ids[j] == i`.
first dimension. Values should be sorted and can be repeated.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor`. Has the same type as `data`.
Has same shape as data, except for dimension_0 which
@@ -1350,7 +1351,7 @@ that `segment_ids[j] == i`.
- - -
-### tf.segment_prod(data, segment_ids, name=None) <div class="md-anchor" id="segment_prod">{#segment_prod}</div>
+### tf.segment_prod(data, segment_ids, name=None) <a class="md-anchor" id="segment_prod"></a>
Computes the product along segments of a tensor.
@@ -1365,7 +1366,7 @@ that `segment_ids[j] == i`.
<img style="width:100%" src="../images/SegmentProd.png" alt>
</div>
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>data</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`.
@@ -1374,7 +1375,7 @@ that `segment_ids[j] == i`.
first dimension. Values should be sorted and can be repeated.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor`. Has the same type as `data`.
Has same shape as data, except for dimension_0 which
@@ -1383,7 +1384,7 @@ that `segment_ids[j] == i`.
- - -
-### tf.segment_min(data, segment_ids, name=None) <div class="md-anchor" id="segment_min">{#segment_min}</div>
+### tf.segment_min(data, segment_ids, name=None) <a class="md-anchor" id="segment_min"></a>
Computes the minimum along segments of a tensor.
@@ -1398,7 +1399,7 @@ that `segment_ids[j] == i`.
<img style="width:100%" src="../images/SegmentMin.png" alt>
</div>
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>data</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`.
@@ -1407,7 +1408,7 @@ that `segment_ids[j] == i`.
first dimension. Values should be sorted and can be repeated.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor`. Has the same type as `data`.
Has same shape as data, except for dimension_0 which
@@ -1416,7 +1417,7 @@ that `segment_ids[j] == i`.
- - -
-### tf.segment_max(data, segment_ids, name=None) <div class="md-anchor" id="segment_max">{#segment_max}</div>
+### tf.segment_max(data, segment_ids, name=None) <a class="md-anchor" id="segment_max"></a>
Computes the maximum along segments of a tensor.
@@ -1431,7 +1432,7 @@ that `segment_ids[j] == i`.
<img style="width:100%" src="../images/SegmentMax.png" alt>
</div>
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>data</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`.
@@ -1440,7 +1441,7 @@ that `segment_ids[j] == i`.
first dimension. Values should be sorted and can be repeated.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor`. Has the same type as `data`.
Has same shape as data, except for dimension_0 which
@@ -1449,7 +1450,7 @@ that `segment_ids[j] == i`.
- - -
-### tf.segment_mean(data, segment_ids, name=None) <div class="md-anchor" id="segment_mean">{#segment_mean}</div>
+### tf.segment_mean(data, segment_ids, name=None) <a class="md-anchor" id="segment_mean"></a>
Computes the mean along segments of a tensor.
@@ -1465,7 +1466,7 @@ values summed.
<img style="width:100%" src="../images/SegmentMean.png" alt>
</div>
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>data</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`.
@@ -1474,7 +1475,7 @@ values summed.
first dimension. Values should be sorted and can be repeated.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor`. Has the same type as `data`.
Has same shape as data, except for dimension_0 which
@@ -1484,7 +1485,7 @@ values summed.
- - -
-### tf.unsorted_segment_sum(data, segment_ids, num_segments, name=None) <div class="md-anchor" id="unsorted_segment_sum">{#unsorted_segment_sum}</div>
+### tf.unsorted_segment_sum(data, segment_ids, num_segments, name=None) <a class="md-anchor" id="unsorted_segment_sum"></a>
Computes the sum along segments of a tensor.
@@ -1505,7 +1506,7 @@ If the sum is empty for a given segment ID `i`, `output[i] = 0`.
<img style="width:100%" src="../images/UnsortedSegmentSum.png" alt>
</div>
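The unsorted variant differs from `segment_sum` in two ways visible above: `segment_ids` need not be sorted, and a segment ID with no entries produces `output[i] = 0`. A pure-Python sketch of those semantics (illustrative only, not the real op):

```python
# Sketch of unsorted_segment_sum semantics: ids may appear in any order,
# and empty segments yield 0.  Illustrative only.
def unsorted_segment_sum(data, segment_ids, num_segments):
    out = [0] * num_segments
    for value, seg in zip(data, segment_ids):
        out[seg] += value
    return out

print(unsorted_segment_sum([1, 2, 3], [2, 0, 2], 4))  # [2, 0, 4, 0]
```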
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>data</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`.
@@ -1515,7 +1516,7 @@ If the sum is empty for a given segment ID `i`, `output[i] = 0`.
* <b>num_segments</b>: A `Tensor` of type `int32`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor`. Has the same type as `data`.
Has same shape as data, except for dimension_0 which
@@ -1525,7 +1526,7 @@ If the sum is empty for a given segment ID `i`, `output[i] = 0`.
- - -
-### tf.sparse_segment_sum(data, indices, segment_ids, name=None) <div class="md-anchor" id="sparse_segment_sum">{#sparse_segment_sum}</div>
+### tf.sparse_segment_sum(data, indices, segment_ids, name=None) <a class="md-anchor" id="sparse_segment_sum"></a>
Computes the sum along sparse segments of a tensor.
@@ -1558,7 +1559,7 @@ tf.sparse_segment_sum(c, tf.constant([0, 1, 2]), tf.constant([0, 0, 1]))
tf.segment_sum(c, tf.constant([0, 0, 1]))
```
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>data</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`.
@@ -1568,7 +1569,7 @@ tf.segment_sum(c, tf.constant([0, 0, 1]))
A 1-D tensor. Values should be sorted and can be repeated.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor`. Has the same type as `data`.
Has same shape as data, except for dimension_0 which
@@ -1577,7 +1578,7 @@ tf.segment_sum(c, tf.constant([0, 0, 1]))
- - -
-### tf.sparse_segment_mean(data, indices, segment_ids, name=None) <div class="md-anchor" id="sparse_segment_mean">{#sparse_segment_mean}</div>
+### tf.sparse_segment_mean(data, indices, segment_ids, name=None) <a class="md-anchor" id="sparse_segment_mean"></a>
Computes the mean along sparse segments of a tensor.
@@ -1587,7 +1588,7 @@ for an explanation of segments.
Like `SegmentMean`, but `segment_ids` can have rank less than `data`'s first
dimension, selecting a subset of dimension_0, specified by `indices`.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>data</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`.
@@ -1597,7 +1598,7 @@ dimension, selecting a subset of dimension_0, specified by `indices`.
A 1-D tensor. Values should be sorted and can be repeated.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor`. Has the same type as `data`.
Has same shape as data, except for dimension_0 which
@@ -1606,7 +1607,7 @@ dimension, selecting a subset of dimension_0, specified by `indices`.
-## Sequence Comparison and Indexing <div class="md-anchor" id="AUTOGENERATED-sequence-comparison-and-indexing">{#AUTOGENERATED-sequence-comparison-and-indexing}</div>
+## Sequence Comparison and Indexing <a class="md-anchor" id="AUTOGENERATED-sequence-comparison-and-indexing"></a>
TensorFlow provides several operations that you can use to add sequence
comparison and index extraction to your graph. You can use these operations to
@@ -1615,11 +1616,11 @@ a tensor.
- - -
-### tf.argmin(input, dimension, name=None) <div class="md-anchor" id="argmin">{#argmin}</div>
+### tf.argmin(input, dimension, name=None) <a class="md-anchor" id="argmin"></a>
Returns the index with the smallest value across dimensions of a tensor.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>input</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `qint8`, `quint8`, `qint32`.
@@ -1628,18 +1629,18 @@ Returns the index with the smallest value across dimensions of a tensor.
of the input Tensor to reduce across. For vectors, use dimension = 0.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor` of type `int64`.
- - -
-### tf.argmax(input, dimension, name=None) <div class="md-anchor" id="argmax">{#argmax}</div>
+### tf.argmax(input, dimension, name=None) <a class="md-anchor" id="argmax"></a>
Returns the index with the largest value across dimensions of a tensor.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>input</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `qint8`, `quint8`, `qint32`.
@@ -1648,7 +1649,7 @@ Returns the index with the largest value across dimensions of a tensor.
of the input Tensor to reduce across. For vectors, use dimension = 0.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor` of type `int64`.
@@ -1656,7 +1657,7 @@ Returns the index with the largest value across dimensions of a tensor.
- - -
-### tf.listdiff(x, y, name=None) <div class="md-anchor" id="listdiff">{#listdiff}</div>
+### tf.listdiff(x, y, name=None) <a class="md-anchor" id="listdiff"></a>
Computes the difference between two lists of numbers.
@@ -1682,14 +1683,14 @@ out ==> [2, 4, 6]
idx ==> [1, 3, 5]
```
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>x</b>: A `Tensor`. 1-D. Values to keep.
* <b>y</b>: A `Tensor`. Must have the same type as `x`. 1-D. Values to remove.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A tuple of `Tensor` objects (out, idx).
@@ -1699,7 +1700,7 @@ idx ==> [1, 3, 5]
- - -
-### tf.where(input, name=None) <div class="md-anchor" id="where">{#where}</div>
+### tf.where(input, name=None) <a class="md-anchor" id="where"></a>
Returns locations of true values in a boolean tensor.
@@ -1735,20 +1736,20 @@ where(input) ==> [[0, 0, 0],
[2, 1, 1]]
```
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>input</b>: A `Tensor` of type `bool`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor` of type `int64`.
- - -
-### tf.unique(x, name=None) <div class="md-anchor" id="unique">{#unique}</div>
+### tf.unique(x, name=None) <a class="md-anchor" id="unique"></a>
Finds unique elements in a 1-D tensor.
@@ -1768,13 +1769,13 @@ y ==> [1, 2, 4, 7, 8]
idx ==> [0, 0, 1, 2, 2, 2, 3, 4, 4]
```
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>x</b>: A `Tensor`. 1-D.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A tuple of `Tensor` objects (y, idx).
@@ -1785,7 +1786,7 @@ idx ==> [0, 0, 1, 2, 2, 2, 3, 4, 4]
- - -
-### tf.edit_distance(hypothesis, truth, normalize=True, name='edit_distance') <div class="md-anchor" id="edit_distance">{#edit_distance}</div>
+### tf.edit_distance(hypothesis, truth, normalize=True, name='edit_distance') <a class="md-anchor" id="edit_distance"></a>
Computes the Levenshtein distance between sequences.
@@ -1831,7 +1832,7 @@ output ==> [[inf, 1.0], # (0,0): no truth, (0,1): no hypothesis
[0.5, 1.0]] # (1,0): addition, (1,1): no hypothesis
```
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>hypothesis</b>: A `SparseTensor` containing hypothesis sequences.
@@ -1840,12 +1841,12 @@ output ==> [[inf, 1.0], # (0,0): no truth, (0,1): no hypothesis
length of `truth.`
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A dense `Tensor` with rank `R - 1`, where R is the rank of the
`SparseTensor` inputs `hypothesis` and `truth`.
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>TypeError</b>: If either `hypothesis` or `truth` are not a `SparseTensor`.
@@ -1854,7 +1855,7 @@ output ==> [[inf, 1.0], # (0,0): no truth, (0,1): no hypothesis
- - -
-### tf.invert_permutation(x, name=None) <div class="md-anchor" id="invert_permutation">{#invert_permutation}</div>
+### tf.invert_permutation(x, name=None) <a class="md-anchor" id="invert_permutation"></a>
Computes the inverse permutation of a tensor.
@@ -1874,13 +1875,13 @@ For example:
invert_permutation(x) ==> [2, 4, 3, 0, 1]
```
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>x</b>: A `Tensor` of type `int32`. 1-D.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor` of type `int32`. 1-D.
diff --git a/tensorflow/g3doc/api_docs/python/nn.md b/tensorflow/g3doc/api_docs/python/nn.md
index b129506107..11c820847d 100644
--- a/tensorflow/g3doc/api_docs/python/nn.md
+++ b/tensorflow/g3doc/api_docs/python/nn.md
@@ -1,12 +1,13 @@
<!-- This file is machine generated: DO NOT EDIT! -->
-# Neural Network
+# Neural Network <a class="md-anchor" id="AUTOGENERATED-neural-network"></a>
Note: Functions taking `Tensor` arguments can also take anything
accepted by [`tf.convert_to_tensor`](framework.md#convert_to_tensor).
<!-- TOC-BEGIN This section is generated by neural network: DO NOT EDIT! -->
## Contents
+### [Neural Network](#AUTOGENERATED-neural-network)
* [Activation Functions](#AUTOGENERATED-activation-functions)
* [tf.nn.relu(features, name=None)](#relu)
* [tf.nn.relu6(features, name=None)](#relu6)
@@ -53,7 +54,7 @@ accepted by [`tf.convert_to_tensor`](framework.md#convert_to_tensor).
<!-- TOC-END This section was generated by neural network, THANKS FOR READING! -->
-## Activation Functions <div class="md-anchor" id="AUTOGENERATED-activation-functions">{#AUTOGENERATED-activation-functions}</div>
+## Activation Functions <a class="md-anchor" id="AUTOGENERATED-activation-functions"></a>
The activation ops provide different types of nonlinearities for use in
neural networks. These include smooth nonlinearities (`sigmoid`,
@@ -66,59 +67,59 @@ shape as the input tensor.
- - -
-### tf.nn.relu(features, name=None) <div class="md-anchor" id="relu">{#relu}</div>
+### tf.nn.relu(features, name=None) <a class="md-anchor" id="relu"></a>
Computes rectified linear: `max(features, 0)`.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>features</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor`. Has the same type as `features`.
- - -
-### tf.nn.relu6(features, name=None) <div class="md-anchor" id="relu6">{#relu6}</div>
+### tf.nn.relu6(features, name=None) <a class="md-anchor" id="relu6"></a>
Computes Rectified Linear 6: `min(max(features, 0), 6)`.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>features</b>: A `Tensor` with type `float`, `double`, `int32`, `int64`, `uint8`,
`int16`, or `int8`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor` with the same type as `features`.
- - -
-### tf.nn.softplus(features, name=None) <div class="md-anchor" id="softplus">{#softplus}</div>
+### tf.nn.softplus(features, name=None) <a class="md-anchor" id="softplus"></a>
Computes softplus: `log(exp(features) + 1)`.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>features</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor`. Has the same type as `features`.
- - -
-### tf.nn.dropout(x, keep_prob, noise_shape=None, seed=None, name=None) <div class="md-anchor" id="dropout">{#dropout}</div>
+### tf.nn.dropout(x, keep_prob, noise_shape=None, seed=None, name=None) <a class="md-anchor" id="dropout"></a>
Computes dropout.
@@ -134,7 +135,7 @@ will make independent decisions. For example, if `shape(x) = [k, l, m, n]`
and `noise_shape = [k, 1, 1, n]`, each batch and channel component will be
kept independently and each row and column will be kept or not kept together.
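The keep-and-scale behavior can be sketched in pure Python: each element survives with probability `keep_prob` and is scaled up by `1 / keep_prob`, so the expected sum is unchanged. This is an illustrative sketch of the elementwise case only (no `noise_shape` broadcasting), not the TensorFlow implementation.

```python
import random

# Sketch of dropout semantics: keep each element with probability
# keep_prob, scaling survivors by 1/keep_prob.  Illustrative only.
def dropout(x, keep_prob, seed=None):
    rng = random.Random(seed)
    return [v / keep_prob if rng.random() < keep_prob else 0.0 for v in x]

out = dropout([1.0, 2.0, 3.0, 4.0], keep_prob=0.5, seed=0)
# Every surviving element is scaled by 1/keep_prob = 2; the rest are 0.
```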
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>x</b>: A tensor.
@@ -145,11 +146,11 @@ kept independently and each row and column will be kept or not kept together.
See [`set_random_seed`](constant_op.md#set_random_seed) for behavior.
* <b>name</b>: A name for this operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A Tensor of the same shape of `x`.
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>ValueError</b>: If `keep_prob` is not in `(0, 1]`.
@@ -157,7 +158,7 @@ kept independently and each row and column will be kept or not kept together.
- - -
-### tf.nn.bias_add(value, bias, name=None) <div class="md-anchor" id="bias_add">{#bias_add}</div>
+### tf.nn.bias_add(value, bias, name=None) <a class="md-anchor" id="bias_add"></a>
Adds `bias` to `value`.
@@ -166,7 +167,7 @@ Broadcasting is supported, so `value` may have any number of dimensions.
Unlike `tf.add`, the type of `bias` is allowed to differ from `value` in the
case where both types are quantized.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>value</b>: A `Tensor` with type `float`, `double`, `int64`, `int32`, `uint8`,
@@ -176,27 +177,27 @@ case where both types are quantized.
in which case a different quantized type may be used.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor` with the same type as `value`.
- - -
-### tf.sigmoid(x, name=None) <div class="md-anchor" id="sigmoid">{#sigmoid}</div>
+### tf.sigmoid(x, name=None) <a class="md-anchor" id="sigmoid"></a>
Computes sigmoid of `x` element-wise.
Specifically, `y = 1 / (1 + exp(-x))`.
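The formula above, written out as a pure-Python sketch:

```python
import math

# y = 1 / (1 + exp(-x)), applied to a single element.
def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

print(sigmoid(0.0))  # 0.5
```

A useful sanity check on the formula is the symmetry `sigmoid(x) + sigmoid(-x) == 1`.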
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>x</b>: A Tensor with type `float`, `double`, `int32`, `complex64`, `int64`,
or `qint32`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A Tensor with the same type as `x` if `x.dtype != qint32`
otherwise the return type is `quint8`.
@@ -204,25 +205,25 @@ Specifically, `y = 1 / (1 + exp(-x))`.
- - -
-### tf.tanh(x, name=None) <div class="md-anchor" id="tanh">{#tanh}</div>
+### tf.tanh(x, name=None) <a class="md-anchor" id="tanh"></a>
Computes hyperbolic tangent of `x` element-wise.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>x</b>: A Tensor with type `float`, `double`, `int32`, `complex64`, `int64`,
or `qint32`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A Tensor with the same type as `x` if `x.dtype != qint32` otherwise
the return type is `quint8`.
-## Convolution <div class="md-anchor" id="AUTOGENERATED-convolution">{#AUTOGENERATED-convolution}</div>
+## Convolution <a class="md-anchor" id="AUTOGENERATED-convolution"></a>
The convolution ops sweep a 2-D filter over a batch of images, applying the
filter to each window of each image of the appropriate size. The different
@@ -269,7 +270,7 @@ In the formula for `shape(output)`, the rounding direction depends on padding:
- - -
-### tf.nn.conv2d(input, filter, strides, padding, use_cudnn_on_gpu=None, name=None) <div class="md-anchor" id="conv2d">{#conv2d}</div>
+### tf.nn.conv2d(input, filter, strides, padding, use_cudnn_on_gpu=None, name=None) <a class="md-anchor" id="conv2d"></a>
Computes a 2-D convolution given 4-D `input` and `filter` tensors.
@@ -295,7 +296,7 @@ In detail,
Must have `strides[0] = strides[3] = 1`. For the most common case of the same
horizontal and vertical strides, `strides = [1, stride, stride, 1]`.
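The sliding-window computation can be sketched for the simplest case: a single image, a single channel, stride 1, and `'VALID'` padding. This is an illustrative sketch of the window arithmetic only; the real op also handles batches, multiple channels, larger strides, and `'SAME'` padding.

```python
# Single-channel, stride-1, VALID-padding 2-D convolution sketch:
# each output element is the elementwise product-sum of one window.
def conv2d_valid(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    oh = len(image) - kh + 1
    ow = len(image[0]) - kw + 1
    return [[sum(image[i + di][j + dj] * kernel[di][dj]
                 for di in range(kh) for dj in range(kw))
             for j in range(ow)]
            for i in range(oh)]

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
box = [[1, 1],
       [1, 1]]  # 2x2 box filter: each output is a 2x2 window sum
print(conv2d_valid(img, box))  # [[12, 16], [24, 28]]
```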
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>input</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`.
@@ -308,14 +309,14 @@ horizontal and vertices strides, `strides = [1, stride, stride, 1]`.
* <b>use_cudnn_on_gpu</b>: An optional `bool`. Defaults to `True`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor`. Has the same type as `input`.
- - -
-### tf.nn.depthwise_conv2d(input, filter, strides, padding, name=None) <div class="md-anchor" id="depthwise_conv2d">{#depthwise_conv2d}</div>
+### tf.nn.depthwise_conv2d(input, filter, strides, padding, name=None) <a class="md-anchor" id="depthwise_conv2d"></a>
Depthwise 2-D convolution.
@@ -336,7 +337,7 @@ In detail,
Must have `strides[0] = strides[3] = 1`. For the most common case of the
same horizontal and vertical strides, `strides = [1, stride, stride, 1]`.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>input</b>: 4-D with shape `[batch, in_height, in_width, in_channels]`.
@@ -347,7 +348,7 @@ same horizontal and vertical strides, `strides = [1, stride, stride, 1]`.
* <b>padding</b>: A string, either `'VALID'` or `'SAME'`. The padding algorithm.
* <b>name</b>: A name for this operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A 4-D `Tensor` of shape
`[batch, out_height, out_width, in_channels * channel_multiplier].`
@@ -355,7 +356,7 @@ same horizontal and vertical strides, `strides = [1, stride, stride, 1]`.
- - -
-### tf.nn.separable_conv2d(input, depthwise_filter, pointwise_filter, strides, padding, name=None) <div class="md-anchor" id="separable_conv2d">{#separable_conv2d}</div>
+### tf.nn.separable_conv2d(input, depthwise_filter, pointwise_filter, strides, padding, name=None) <a class="md-anchor" id="separable_conv2d"></a>
2-D convolution with separable filters.
@@ -376,7 +377,7 @@ the pointwise convolution has implicit strides of `[1, 1, 1, 1]`. Must have
`strides[0] = strides[3] = 1`. For the most common case of the same
horizontal and vertical strides, `strides = [1, stride, stride, 1]`.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>input</b>: 4-D `Tensor` with shape `[batch, in_height, in_width, in_channels]`.
@@ -391,13 +392,13 @@ horizontal and vertical strides, `strides = [1, stride, stride, 1]`.
* <b>padding</b>: A string, either `'VALID'` or `'SAME'`. The padding algorithm.
* <b>name</b>: A name for this operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A 4-D `Tensor` of shape `[batch, out_height, out_width, out_channels]`.
-## Pooling <div class="md-anchor" id="AUTOGENERATED-pooling">{#AUTOGENERATED-pooling}</div>
+## Pooling <a class="md-anchor" id="AUTOGENERATED-pooling"></a>
The pooling ops sweep a rectangular window over the input tensor, computing a
reduction operation for each window (average, max, or max with argmax). Each
@@ -420,14 +421,14 @@ where the rounding direction depends on padding:
- - -
-### tf.nn.avg_pool(value, ksize, strides, padding, name=None) <div class="md-anchor" id="avg_pool">{#avg_pool}</div>
+### tf.nn.avg_pool(value, ksize, strides, padding, name=None) <a class="md-anchor" id="avg_pool"></a>
Performs the average pooling on the input.
Each entry in `output` is the mean of the corresponding size `ksize`
window in `value`.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>value</b>: A 4-D `Tensor` of shape `[batch, height, width, channels]` and type
@@ -440,18 +441,18 @@ window in `value`.
* <b>padding</b>: A string, either `'VALID'` or `'SAME'`. The padding algorithm.
* <b>name</b>: Optional name for the operation.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor` with the same type as `value`. The average pooled output tensor.
- - -
-### tf.nn.max_pool(value, ksize, strides, padding, name=None) <div class="md-anchor" id="max_pool">{#max_pool}</div>
+### tf.nn.max_pool(value, ksize, strides, padding, name=None) <a class="md-anchor" id="max_pool"></a>
Performs the max pooling on the input.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>value</b>: A 4-D `Tensor` with shape `[batch, height, width, channels]` and
@@ -463,14 +464,14 @@ Performs the max pooling on the input.
* <b>padding</b>: A string, either `'VALID'` or `'SAME'`. The padding algorithm.
* <b>name</b>: Optional name for the operation.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor` with the same type as `value`. The max pooled output tensor.
- - -
-### tf.nn.max_pool_with_argmax(input, ksize, strides, padding, Targmax=None, name=None) <div class="md-anchor" id="max_pool_with_argmax">{#max_pool_with_argmax}</div>
+### tf.nn.max_pool_with_argmax(input, ksize, strides, padding, Targmax=None, name=None) <a class="md-anchor" id="max_pool_with_argmax"></a>
Performs max pooling on the input and outputs both max values and indices.
@@ -478,7 +479,7 @@ The indices in `argmax` are flattened, so that a maximum value at position
`[b, y, x, c]` becomes flattened index
`((b * height + y) * width + x) * channels + c`.
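The flattening formula above can be checked directly. Here, `height`, `width`, and `channels` refer to the shape of the pooled input, as in the formula; this is a sketch of the index arithmetic, not of the pooling op itself.

```python
# Position [b, y, x, c] in a [batch, height, width, channels] tensor
# maps to the flat index ((b * height + y) * width + x) * channels + c.
def flatten_index(b, y, x, c, height, width, channels):
    return ((b * height + y) * width + x) * channels + c

# For a [2, 3, 4, 5] tensor, the last position [1, 2, 3, 4] maps to 119,
# the final flat index of the 120-element tensor.
print(flatten_index(1, 2, 3, 4, height=3, width=4, channels=5))  # 119
```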
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>input</b>: A `Tensor` of type `float32`.
@@ -493,7 +494,7 @@ The indices in `argmax` are flattened, so that a maximum value at position
* <b>Targmax</b>: An optional `tf.DType` from: `tf.int32, tf.int64`. Defaults to `tf.int64`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A tuple of `Tensor` objects (output, argmax).
@@ -502,14 +503,14 @@ The indices in `argmax` are flattened, so that a maximum value at position
-## Normalization <div class="md-anchor" id="AUTOGENERATED-normalization">{#AUTOGENERATED-normalization}</div>
+## Normalization <a class="md-anchor" id="AUTOGENERATED-normalization"></a>
Normalization is useful to prevent neurons from saturating when inputs may
have varying scale, and to aid generalization.
- - -
-### tf.nn.l2_normalize(x, dim, epsilon=1e-12, name=None) <div class="md-anchor" id="l2_normalize">{#l2_normalize}</div>
+### tf.nn.l2_normalize(x, dim, epsilon=1e-12, name=None) <a class="md-anchor" id="l2_normalize"></a>
Normalizes along dimension `dim` using an L2 norm.
@@ -520,7 +521,7 @@ For a 1-D tensor with `dim = 0`, computes
For `x` with more dimensions, independently normalizes each 1-D slice along
dimension `dim`.
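For the 1-D case described above, the computation is `x / sqrt(max(sum(x**2), epsilon))`. A pure-Python sketch of that case (illustrative only; the real op normalizes each 1-D slice of a higher-rank tensor):

```python
import math

# 1-D L2 normalization sketch: divide by the L2 norm, with epsilon as
# a lower bound on the squared norm to avoid division by ~0.
def l2_normalize(x, epsilon=1e-12):
    norm = math.sqrt(max(sum(v * v for v in x), epsilon))
    return [v / norm for v in x]

print(l2_normalize([3.0, 4.0]))  # [0.6, 0.8]
```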
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>x</b>: A `Tensor`.
@@ -529,14 +530,14 @@ dimension `dim`.
divisor if `norm < sqrt(epsilon)`.
* <b>name</b>: A name for this operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor` with the same shape as `x`.
- - -
-### tf.nn.local_response_normalization(input, depth_radius=None, bias=None, alpha=None, beta=None, name=None) <div class="md-anchor" id="local_response_normalization">{#local_response_normalization}</div>
+### tf.nn.local_response_normalization(input, depth_radius=None, bias=None, alpha=None, beta=None, name=None) <a class="md-anchor" id="local_response_normalization"></a>
Local Response Normalization.
@@ -553,7 +554,7 @@ For details, see [Krizhevsky et al., ImageNet classification with deep
convolutional neural networks (NIPS 2012)]
(http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks).
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>input</b>: A `Tensor` of type `float32`. 4-D.
@@ -566,14 +567,14 @@ convolutional neural networks (NIPS 2012)]
* <b>beta</b>: An optional `float`. Defaults to `0.5`. An exponent.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor` of type `float32`.
- - -
-### tf.nn.moments(x, axes, name=None) <div class="md-anchor" id="moments">{#moments}</div>
+### tf.nn.moments(x, axes, name=None) <a class="md-anchor" id="moments"></a>
Calculate the mean and variance of `x`.
@@ -585,7 +586,7 @@ For so-called "global normalization" needed for convolutional filters pass
`axes=[0, 1, 2]` (batch, height, width). For batch normalization pass
`axes=[0]` (batch).
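For a 1-D input with `axes=[0]`, the moments are the ordinary mean and population variance. A pure-Python sketch of that case (illustrative only):

```python
# Mean and population variance over a single axis, as in the
# batch-normalization case axes=[0].
def moments(x):
    mean = sum(x) / len(x)
    variance = sum((v - mean) ** 2 for v in x) / len(x)
    return mean, variance

print(moments([1.0, 2.0, 3.0, 4.0]))  # (2.5, 1.25)
```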
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>x</b>: A `Tensor`.
@@ -593,13 +594,13 @@ For so-called "global normalization" needed for convolutional filters pass
variance.
* <b>name</b>: Name used to scope the operations that compute the moments.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
Two `Tensors`: `mean` and `variance`.
-## Losses <div class="md-anchor" id="AUTOGENERATED-losses">{#AUTOGENERATED-losses}</div>
+## Losses <a class="md-anchor" id="AUTOGENERATED-losses"></a>
The loss ops measure error between two tensors, or between a tensor and zero.
These can be used for measuring accuracy of a network in a regression task
@@ -607,7 +608,7 @@ or for regularization purposes (weight decay).
- - -
-### tf.nn.l2_loss(t, name=None) <div class="md-anchor" id="l2_loss">{#l2_loss}</div>
+### tf.nn.l2_loss(t, name=None) <a class="md-anchor" id="l2_loss"></a>
L2 Loss.
@@ -615,26 +616,26 @@ Computes half the L2 norm of a tensor without the `sqrt`:
output = sum(t ** 2) / 2
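The formula above, spelled out in pure Python for a 1-D tensor:

```python
# Half the squared L2 norm: sum(t ** 2) / 2.
def l2_loss(t):
    return sum(v * v for v in t) / 2.0

print(l2_loss([1.0, 2.0, 3.0]))  # 7.0
```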
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>t</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `qint8`, `quint8`, `qint32`.
Typically 2-D, but may have any dimensions.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor`. Has the same type as `t`. 0-D.
-## Classification <div class="md-anchor" id="AUTOGENERATED-classification">{#AUTOGENERATED-classification}</div>
+## Classification <a class="md-anchor" id="AUTOGENERATED-classification"></a>
TensorFlow provides several operations that help you perform classification.
- - -
-### tf.nn.sigmoid_cross_entropy_with_logits(logits, targets, name=None) <div class="md-anchor" id="sigmoid_cross_entropy_with_logits">{#sigmoid_cross_entropy_with_logits}</div>
+### tf.nn.sigmoid_cross_entropy_with_logits(logits, targets, name=None) <a class="md-anchor" id="sigmoid_cross_entropy_with_logits"></a>
Computes sigmoid cross entropy given `logits`.
@@ -653,14 +654,14 @@ To ensure stability and avoid overflow, the implementation uses
`logits` and `targets` must have the same type and shape.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>logits</b>: A `Tensor` of type `float32` or `float64`.
* <b>targets</b>: A `Tensor` of the same type and shape as `logits`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor` of the same shape as `logits` with the componentwise
logistic losses.
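The stable formulation mentioned above can be sketched in NumPy (a sketch of the math, not the TensorFlow kernel):

```python
import numpy as np

def sigmoid_cross_entropy_with_logits(logits, targets):
    # Stable form: max(x, 0) - x * z + log(1 + exp(-|x|)).
    # Algebraically equal to -z*log(sigmoid(x)) - (1-z)*log(1-sigmoid(x)),
    # but does not overflow for large |x|.
    x, z = logits, targets
    return np.maximum(x, 0.0) - x * z + np.log1p(np.exp(-np.abs(x)))
```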
@@ -668,7 +669,7 @@ To ensure stability and avoid overflow, the implementation uses
- - -
-### tf.nn.softmax(logits, name=None) <div class="md-anchor" id="softmax">{#softmax}</div>
+### tf.nn.softmax(logits, name=None) <a class="md-anchor" id="softmax"></a>
Computes softmax activations.
@@ -676,21 +677,21 @@ For each batch `i` and class `j` we have
softmax[i, j] = exp(logits[i, j]) / sum(exp(logits[i]))
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>logits</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`.
2-D with shape `[batch_size, num_classes]`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor`. Has the same type as `logits`. Same shape as `logits`.
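The formula `softmax[i, j] = exp(logits[i, j]) / sum(exp(logits[i]))` can be sketched in NumPy; subtracting the row max first is a standard stability trick that leaves the result unchanged (illustrative sketch, not the TensorFlow kernel):

```python
import numpy as np

def softmax(logits):
    # Subtract the per-row max before exponentiating so exp() cannot
    # overflow; the constant cancels in the ratio.
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

probs = softmax(np.array([[1.0, 2.0, 3.0]]))
```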
- - -
-### tf.nn.softmax_cross_entropy_with_logits(logits, labels, name=None) <div class="md-anchor" id="softmax_cross_entropy_with_logits">{#softmax_cross_entropy_with_logits}</div>
+### tf.nn.softmax_cross_entropy_with_logits(logits, labels, name=None) <a class="md-anchor" id="softmax_cross_entropy_with_logits"></a>
Computes softmax cross entropy between `logits` and `labels`.
@@ -706,28 +707,28 @@ output of `softmax`, as it will produce incorrect results.
`logits` and `labels` must have the same shape `[batch_size, num_classes]`
and the same dtype (either `float32` or `float64`).
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>logits</b>: Unscaled log probabilities.
* <b>labels</b>: Each row `labels[i]` must be a valid probability distribution.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A 1-D `Tensor` of length `batch_size` of the same type as `logits` with the
softmax cross entropy loss.
-## Embeddings <div class="md-anchor" id="AUTOGENERATED-embeddings">{#AUTOGENERATED-embeddings}</div>
+## Embeddings <a class="md-anchor" id="AUTOGENERATED-embeddings"></a>
TensorFlow provides library support for looking up values in embedding
tensors.
- - -
-### tf.nn.embedding_lookup(params, ids, name=None) <div class="md-anchor" id="embedding_lookup">{#embedding_lookup}</div>
+### tf.nn.embedding_lookup(params, ids, name=None) <a class="md-anchor" id="embedding_lookup"></a>
Looks up `ids` in a list of embedding tensors.
@@ -743,7 +744,7 @@ then used to look up the slice `params[p][id // len(params), ...]`.
The results of the lookup are then concatenated into a dense
tensor. The returned tensor has shape `shape(ids) + shape(params)[1:]`.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>params</b>: A list of tensors with the same shape and type.
@@ -751,25 +752,25 @@ tensor. The returned tensor has shape `shape(ids) + shape(params)[1:]`.
up in `params`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor` with the same type as the tensors in `params`.
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>ValueError</b>: If `params` is empty.
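For the simplest case of a single `params` tensor the lookup reduces to plain gathering; a NumPy sketch (the multi-partition `id % len(params)` strategy described above is omitted here):

```python
import numpy as np

# 4 embeddings of dimension 3 in a single partition.
params = np.arange(12.0).reshape(4, 3)
ids = np.array([[0, 2], [3, 1]])

# Gather rows: the result has shape shape(ids) + shape(params)[1:].
result = params[ids]   # shape (2, 2, 3)
```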
-## Evaluation <div class="md-anchor" id="AUTOGENERATED-evaluation">{#AUTOGENERATED-evaluation}</div>
+## Evaluation <a class="md-anchor" id="AUTOGENERATED-evaluation"></a>
The evaluation ops are useful for measuring the performance of a network.
Since they are nondifferentiable, they are typically used at evaluation time.
- - -
-### tf.nn.top_k(input, k, name=None) <div class="md-anchor" id="top_k">{#top_k}</div>
+### tf.nn.top_k(input, k, name=None) <a class="md-anchor" id="top_k"></a>
Returns the values and indices of the k largest elements for each row.
@@ -779,7 +780,7 @@ Returns the values and indices of the k largest elements for each row.
such that \\(input_{i, indices_{i, j}} = values_{i, j}\\). If two
elements are equal, the lower-index element appears first.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>input</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`.
@@ -788,7 +789,7 @@ elements are equal, the lower-index element appears first.
Number of top elements to look for within each row
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A tuple of `Tensor` objects (values, indices).
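The documented semantics, including the lower-index-first tie rule, can be sketched with a stable argsort in NumPy (illustrative only):

```python
import numpy as np

def top_k(inp, k):
    # Stable sort on negated values gives a descending order in which
    # equal elements keep their original (lower index first) order.
    indices = np.argsort(-inp, axis=1, kind="stable")[:, :k]
    values = np.take_along_axis(inp, indices, axis=1)
    return values, indices
```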
@@ -799,7 +800,7 @@ elements are equal, the lower-index element appears first.
- - -
-### tf.nn.in_top_k(predictions, targets, k, name=None) <div class="md-anchor" id="in_top_k">{#in_top_k}</div>
+### tf.nn.in_top_k(predictions, targets, k, name=None) <a class="md-anchor" id="in_top_k"></a>
Says whether the targets are in the top K predictions.
@@ -818,7 +819,7 @@ More formally, let
$$out_i = predictions_{i, targets_i} \in TopKIncludingTies(predictions_i)$$
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>predictions</b>: A `Tensor` of type `float32`. A batch_size x classes tensor
@@ -826,13 +827,13 @@ $$out_i = predictions_{i, targets_i} \in TopKIncludingTies(predictions_i)$$
* <b>k</b>: An `int`. Number of top elements to look at for computing precision
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
  A `Tensor` of type `bool`. Computed precision at `k` as a bool `Tensor`.
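The "TopKIncludingTies" definition above can be sketched as: the target is in the top `k` iff fewer than `k` classes score strictly higher than it (a NumPy sketch, not the TensorFlow kernel):

```python
import numpy as np

def in_top_k(predictions, targets, k):
    # Score of the target class for each row.
    target_scores = predictions[np.arange(len(targets)), targets]
    # Count classes that score strictly higher; ties with the k-th
    # score therefore count as "in", matching TopKIncludingTies.
    higher = (predictions > target_scores[:, None]).sum(axis=1)
    return higher < k
```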
-## Candidate Sampling <div class="md-anchor" id="AUTOGENERATED-candidate-sampling">{#AUTOGENERATED-candidate-sampling}</div>
+## Candidate Sampling <a class="md-anchor" id="AUTOGENERATED-candidate-sampling"></a>
Do you want to train a multiclass or multilabel model with thousands
or millions of output classes (for example, a language model with a
@@ -845,13 +846,13 @@ only considering a small randomly-chosen subset of contrastive classes
See our [Candidate Sampling Algorithms Reference]
(../../extras/candidate_sampling.pdf)
-### Sampled Loss Functions <div class="md-anchor" id="AUTOGENERATED-sampled-loss-functions">{#AUTOGENERATED-sampled-loss-functions}</div>
+### Sampled Loss Functions <a class="md-anchor" id="AUTOGENERATED-sampled-loss-functions"></a>
TensorFlow provides the following sampled loss functions for faster training.
- - -
-### tf.nn.nce_loss(weights, biases, inputs, labels, num_sampled, num_classes, num_true=1, sampled_values=None, remove_accidental_hits=False, name='nce_loss') <div class="md-anchor" id="nce_loss">{#nce_loss}</div>
+### tf.nn.nce_loss(weights, biases, inputs, labels, num_sampled, num_classes, num_true=1, sampled_values=None, remove_accidental_hits=False, name='nce_loss') <a class="md-anchor" id="nce_loss"></a>
Computes and returns the noise-contrastive estimation training loss.
@@ -871,7 +872,7 @@ For now, if you have a variable number of target classes, you can pad them
out to a constant number by either repeating them or by padding
with an otherwise unused class.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>weights</b>: A `Tensor` of shape [num_classes, dim]. The class embeddings.
@@ -895,14 +896,14 @@ with an otherwise unused class.
Default is False.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A batch_size 1-D tensor of per-example NCE losses.
- - -
-### tf.nn.sampled_softmax_loss(weights, biases, inputs, labels, num_sampled, num_classes, num_true=1, sampled_values=None, remove_accidental_hits=True, name='sampled_softmax_loss') <div class="md-anchor" id="sampled_softmax_loss">{#sampled_softmax_loss}</div>
+### tf.nn.sampled_softmax_loss(weights, biases, inputs, labels, num_sampled, num_classes, num_true=1, sampled_values=None, remove_accidental_hits=True, name='sampled_softmax_loss') <a class="md-anchor" id="sampled_softmax_loss"></a>
Computes and returns the sampled softmax training loss.
@@ -920,7 +921,7 @@ See our [Candidate Sampling Algorithms Reference]
Also see Section 3 of http://arxiv.org/abs/1412.2007 for the math.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>weights</b>: A `Tensor` of shape [num_classes, dim]. The class embeddings.
@@ -941,20 +942,20 @@ Also see Section 3 of http://arxiv.org/abs/1412.2007 for the math.
True.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A batch_size 1-D tensor of per-example sampled softmax losses.
-### Candidate Samplers <div class="md-anchor" id="AUTOGENERATED-candidate-samplers">{#AUTOGENERATED-candidate-samplers}</div>
+### Candidate Samplers <a class="md-anchor" id="AUTOGENERATED-candidate-samplers"></a>
TensorFlow provides the following samplers for randomly sampling candidate
classes when using one of the sampled loss functions above.
- - -
-### tf.nn.uniform_candidate_sampler(true_classes, num_true, num_sampled, unique, range_max, seed=None, name=None) <div class="md-anchor" id="uniform_candidate_sampler">{#uniform_candidate_sampler}</div>
+### tf.nn.uniform_candidate_sampler(true_classes, num_true, num_sampled, unique, range_max, seed=None, name=None) <a class="md-anchor" id="uniform_candidate_sampler"></a>
Samples a set of classes using a uniform base distribution.
@@ -978,7 +979,7 @@ document](http://www.tensorflow.org/extras/candidate_sampling.pdf).
If `unique=True`, then these are post-rejection probabilities and we
compute them approximately.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>true_classes</b>: A `Tensor` of type `int64` and shape `[batch_size,
@@ -991,7 +992,7 @@ compute them approximately.
* <b>seed</b>: An `int`. An operation-specific seed. Default is 0.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
* <b>sampled_candidates</b>: A tensor of type `int64` and shape `[num_sampled]`.
@@ -1006,7 +1007,7 @@ compute them approximately.
- - -
-### tf.nn.log_uniform_candidate_sampler(true_classes, num_true, num_sampled, unique, range_max, seed=None, name=None) <div class="md-anchor" id="log_uniform_candidate_sampler">{#log_uniform_candidate_sampler}</div>
+### tf.nn.log_uniform_candidate_sampler(true_classes, num_true, num_sampled, unique, range_max, seed=None, name=None) <a class="md-anchor" id="log_uniform_candidate_sampler"></a>
Samples a set of classes using a log-uniform (Zipfian) base distribution.
@@ -1037,7 +1038,7 @@ document](http://www.tensorflow.org/extras/candidate_sampling.pdf).
If `unique=True`, then these are post-rejection probabilities and we
compute them approximately.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>true_classes</b>: A `Tensor` of type `int64` and shape `[batch_size,
@@ -1050,7 +1051,7 @@ compute them approximately.
* <b>seed</b>: An `int`. An operation-specific seed. Default is 0.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
* <b>sampled_candidates</b>: A tensor of type `int64` and shape `[num_sampled]`.
@@ -1065,7 +1066,7 @@ compute them approximately.
- - -
-### tf.nn.learned_unigram_candidate_sampler(true_classes, num_true, num_sampled, unique, range_max, seed=None, name=None) <div class="md-anchor" id="learned_unigram_candidate_sampler">{#learned_unigram_candidate_sampler}</div>
+### tf.nn.learned_unigram_candidate_sampler(true_classes, num_true, num_sampled, unique, range_max, seed=None, name=None) <a class="md-anchor" id="learned_unigram_candidate_sampler"></a>
Samples a set of classes from a distribution learned during training.
@@ -1093,7 +1094,7 @@ document](http://www.tensorflow.org/extras/candidate_sampling.pdf).
If `unique=True`, then these are post-rejection probabilities and we
compute them approximately.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>true_classes</b>: A `Tensor` of type `int64` and shape `[batch_size,
@@ -1106,7 +1107,7 @@ compute them approximately.
* <b>seed</b>: An `int`. An operation-specific seed. Default is 0.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
* <b>sampled_candidates</b>: A tensor of type `int64` and shape `[num_sampled]`.
@@ -1121,7 +1122,7 @@ compute them approximately.
- - -
-### tf.nn.fixed_unigram_candidate_sampler(true_classes, num_true, num_sampled, unique, range_max, vocab_file='', distortion=0.0, num_reserved_ids=0, num_shards=1, shard=0, unigrams=[], seed=None, name=None) <div class="md-anchor" id="fixed_unigram_candidate_sampler">{#fixed_unigram_candidate_sampler}</div>
+### tf.nn.fixed_unigram_candidate_sampler(true_classes, num_true, num_sampled, unique, range_max, vocab_file='', distortion=0.0, num_reserved_ids=0, num_shards=1, shard=0, unigrams=[], seed=None, name=None) <a class="md-anchor" id="fixed_unigram_candidate_sampler"></a>
Samples a set of classes using the provided (fixed) base distribution.
@@ -1146,7 +1147,7 @@ document](http://www.tensorflow.org/extras/candidate_sampling.pdf).
If `unique=True`, then these are post-rejection probabilities and we
compute them approximately.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>true_classes</b>: A `Tensor` of type `int64` and shape `[batch_size,
@@ -1184,7 +1185,7 @@ compute them approximately.
* <b>seed</b>: An `int`. An operation-specific seed. Default is 0.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
* <b>sampled_candidates</b>: A tensor of type `int64` and shape `[num_sampled]`.
@@ -1198,11 +1199,11 @@ compute them approximately.
-### Miscellaneous candidate sampling utilities <div class="md-anchor" id="AUTOGENERATED-miscellaneous-candidate-sampling-utilities">{#AUTOGENERATED-miscellaneous-candidate-sampling-utilities}</div>
+### Miscellaneous candidate sampling utilities <a class="md-anchor" id="AUTOGENERATED-miscellaneous-candidate-sampling-utilities"></a>
- - -
-### tf.nn.compute_accidental_hits(true_classes, sampled_candidates, num_true, seed=None, name=None) <div class="md-anchor" id="compute_accidental_hits">{#compute_accidental_hits}</div>
+### tf.nn.compute_accidental_hits(true_classes, sampled_candidates, num_true, seed=None, name=None) <a class="md-anchor" id="compute_accidental_hits"></a>
Computes the ids of positions in `sampled_candidates` that match `true_classes`.
@@ -1226,7 +1227,7 @@ operation, then added to the logits of the sampled classes. This
removes the contradictory effect of accidentally sampling the true
target classes as noise classes for the same example.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>true_classes</b>: A `Tensor` of type `int64` and shape `[batch_size,
@@ -1237,7 +1238,7 @@ target classes as noise classes for the same example.
* <b>seed</b>: An `int`. An operation-specific seed. Default is 0.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
* <b>indices</b>: A `Tensor` of type `int32` and shape `[num_accidental_hits]`.
diff --git a/tensorflow/g3doc/api_docs/python/ops.md b/tensorflow/g3doc/api_docs/python/ops.md
index bb7d6e70e2..0206f315f3 100644
--- a/tensorflow/g3doc/api_docs/python/ops.md
+++ b/tensorflow/g3doc/api_docs/python/ops.md
@@ -1,8 +1,9 @@
<!-- This file is machine generated: DO NOT EDIT! -->
-# Leftovers, should be empty and removed
+# Leftovers, should be empty and removed <a class="md-anchor" id="AUTOGENERATED-leftovers--should-be-empty-and-removed"></a>
<!-- TOC-BEGIN This section is generated by a tool: DO NOT EDIT! -->
## Contents
+### [Leftovers, should be empty and removed](#AUTOGENERATED-leftovers--should-be-empty-and-removed)
<!-- TOC-END This section was generated by a tool: DO NOT EDIT! -->
diff --git a/tensorflow/g3doc/api_docs/python/python_io.md b/tensorflow/g3doc/api_docs/python/python_io.md
index 7ad4b65bd0..df3c325454 100644
--- a/tensorflow/g3doc/api_docs/python/python_io.md
+++ b/tensorflow/g3doc/api_docs/python/python_io.md
@@ -1,8 +1,9 @@
<!-- This file is machine generated: DO NOT EDIT! -->
-# Data IO (Python functions)
+# Data IO (Python functions) <a class="md-anchor" id="AUTOGENERATED-data-io--python-functions-"></a>
<!-- TOC-BEGIN This section is generated by a tool: DO NOT EDIT! -->
## Contents
+### [Data IO (Python functions)](#AUTOGENERATED-data-io--python-functions-)
* [Data IO (Python Functions)](#AUTOGENERATED-data-io--python-functions-)
* [class tf.python_io.TFRecordWriter](#TFRecordWriter)
* [tf.python_io.tf_record_iterator(path)](#tf_record_iterator)
@@ -11,7 +12,7 @@
<!-- TOC-END This section was generated by a tool: DO NOT EDIT! -->
-## Data IO (Python Functions) <div class="md-anchor" id="AUTOGENERATED-data-io--python-functions-">{#AUTOGENERATED-data-io--python-functions-}</div>
+## Data IO (Python Functions) <a class="md-anchor" id="AUTOGENERATED-data-io--python-functions-"></a>
A TFRecords file represents a sequence of (binary) strings. The format is not
random access, so it is suitable for streaming large amounts of data but not
@@ -19,7 +20,7 @@ suitable if fast sharding or other non-sequential access is desired.
- - -
-### class tf.python_io.TFRecordWriter <div class="md-anchor" id="TFRecordWriter">{#TFRecordWriter}</div>
+### class tf.python_io.TFRecordWriter <a class="md-anchor" id="TFRecordWriter"></a>
A class to write records to a TFRecords file.
@@ -28,16 +29,16 @@ in `with` blocks like a normal file.
- - -
-#### tf.python_io.TFRecordWriter.__init__(path) {#TFRecordWriter.__init__}
+#### tf.python_io.TFRecordWriter.__init__(path) <a class="md-anchor" id="TFRecordWriter.__init__"></a>
Opens file `path` and creates a `TFRecordWriter` writing to it.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>path</b>: The path to the TFRecords file.
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>IOError</b>: If `path` cannot be opened for writing.
@@ -45,11 +46,11 @@ Opens file `path` and creates a `TFRecordWriter` writing to it.
- - -
-#### tf.python_io.TFRecordWriter.write(record) {#TFRecordWriter.write}
+#### tf.python_io.TFRecordWriter.write(record) <a class="md-anchor" id="TFRecordWriter.write"></a>
Write a string record to the file.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>record</b>: The string record to write.
@@ -57,7 +58,7 @@ Write a string record to the file.
- - -
-#### tf.python_io.TFRecordWriter.close() {#TFRecordWriter.close}
+#### tf.python_io.TFRecordWriter.close() <a class="md-anchor" id="TFRecordWriter.close"></a>
Close the file.
@@ -65,20 +66,20 @@ Close the file.
- - -
-### tf.python_io.tf_record_iterator(path) <div class="md-anchor" id="tf_record_iterator">{#tf_record_iterator}</div>
+### tf.python_io.tf_record_iterator(path) <a class="md-anchor" id="tf_record_iterator"></a>
An iterator that reads records from a TFRecords file.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>path</b>: The path to the TFRecords file.
-##### Yields:
+##### Yields: <a class="md-anchor" id="AUTOGENERATED-yields-"></a>
Strings.
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>IOError</b>: If `path` cannot be opened for reading.
@@ -87,7 +88,7 @@ An iterator that reads records from a TFRecords file.
- - -
-### TFRecords Format Details <div class="md-anchor" id="AUTOGENERATED-tfrecords-format-details">{#AUTOGENERATED-tfrecords-format-details}</div>
+### TFRecords Format Details <a class="md-anchor" id="AUTOGENERATED-tfrecords-format-details"></a>
A TFRecords file contains a sequence of strings with CRC hashes. Each record
has the format
diff --git a/tensorflow/g3doc/api_docs/python/sparse_ops.md b/tensorflow/g3doc/api_docs/python/sparse_ops.md
index 1d30f81e9a..d19c37e30c 100644
--- a/tensorflow/g3doc/api_docs/python/sparse_ops.md
+++ b/tensorflow/g3doc/api_docs/python/sparse_ops.md
@@ -1,12 +1,13 @@
<!-- This file is machine generated: DO NOT EDIT! -->
-# Sparse Tensors
+# Sparse Tensors <a class="md-anchor" id="AUTOGENERATED-sparse-tensors"></a>
Note: Functions taking `Tensor` arguments can also take anything
accepted by [`tf.convert_to_tensor`](framework.md#convert_to_tensor).
<!-- TOC-BEGIN This section is generated by a tool: DO NOT EDIT! -->
## Contents
+### [Sparse Tensors](#AUTOGENERATED-sparse-tensors)
* [Sparse Tensor Representation](#AUTOGENERATED-sparse-tensor-representation)
* [class tf.SparseTensor](#SparseTensor)
* [class tf.SparseTensorValue](#SparseTensorValue)
@@ -23,7 +24,7 @@ accepted by [`tf.convert_to_tensor`](framework.md#convert_to_tensor).
<!-- TOC-END This section was generated by a tool: DO NOT EDIT! -->
-## Sparse Tensor Representation <div class="md-anchor" id="AUTOGENERATED-sparse-tensor-representation">{#AUTOGENERATED-sparse-tensor-representation}</div>
+## Sparse Tensor Representation <a class="md-anchor" id="AUTOGENERATED-sparse-tensor-representation"></a>
TensorFlow supports a `SparseTensor` representation for data that is sparse
in multiple dimensions. Contrast this representation with `IndexedSlices`,
@@ -32,7 +33,7 @@ dimension, and dense along all other dimensions.
- - -
-### class tf.SparseTensor <div class="md-anchor" id="SparseTensor">{#SparseTensor}</div>
+### class tf.SparseTensor <a class="md-anchor" id="SparseTensor"></a>
Represents a sparse tensor.
@@ -80,92 +81,92 @@ represents the dense tensor
- - -
-#### tf.SparseTensor.__init__(indices, values, shape) {#SparseTensor.__init__}
+#### tf.SparseTensor.__init__(indices, values, shape) <a class="md-anchor" id="SparseTensor.__init__"></a>
Creates a `SparseTensor`.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>indices</b>: A 2-D int64 tensor of shape `[N, ndims]`.
* <b>values</b>: A 1-D tensor of any type and shape `[N]`.
* <b>shape</b>: A 1-D int64 tensor of shape `[ndims]`.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `SparseTensor`
- - -
-#### tf.SparseTensor.indices {#SparseTensor.indices}
+#### tf.SparseTensor.indices <a class="md-anchor" id="SparseTensor.indices"></a>
The indices of non-zero values in the represented dense tensor.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A 2-D Tensor of int64 with shape `[N, ndims]`, where `N` is the
number of non-zero values in the tensor, and `ndims` is the rank.
- - -
-#### tf.SparseTensor.values {#SparseTensor.values}
+#### tf.SparseTensor.values <a class="md-anchor" id="SparseTensor.values"></a>
The non-zero values in the represented dense tensor.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A 1-D Tensor of any data type.
- - -
-#### tf.SparseTensor.dtype {#SparseTensor.dtype}
+#### tf.SparseTensor.dtype <a class="md-anchor" id="SparseTensor.dtype"></a>
The `DType` of elements in this tensor.
- - -
-#### tf.SparseTensor.shape {#SparseTensor.shape}
+#### tf.SparseTensor.shape <a class="md-anchor" id="SparseTensor.shape"></a>
A 1-D Tensor of int64 representing the shape of the dense tensor.
- - -
-#### tf.SparseTensor.graph {#SparseTensor.graph}
+#### tf.SparseTensor.graph <a class="md-anchor" id="SparseTensor.graph"></a>
The `Graph` that contains the index, value, and shape tensors.
- - -
-### class tf.SparseTensorValue <div class="md-anchor" id="SparseTensorValue">{#SparseTensorValue}</div>
+### class tf.SparseTensorValue <a class="md-anchor" id="SparseTensorValue"></a>
SparseTensorValue(indices, values, shape)
- - -
-#### tf.SparseTensorValue.indices {#SparseTensorValue.indices}
+#### tf.SparseTensorValue.indices <a class="md-anchor" id="SparseTensorValue.indices"></a>
Alias for field number 0
- - -
-#### tf.SparseTensorValue.shape {#SparseTensorValue.shape}
+#### tf.SparseTensorValue.shape <a class="md-anchor" id="SparseTensorValue.shape"></a>
Alias for field number 2
- - -
-#### tf.SparseTensorValue.values {#SparseTensorValue.values}
+#### tf.SparseTensorValue.values <a class="md-anchor" id="SparseTensorValue.values"></a>
Alias for field number 1
-## Sparse to Dense Conversion <div class="md-anchor" id="AUTOGENERATED-sparse-to-dense-conversion">{#AUTOGENERATED-sparse-to-dense-conversion}</div>
+## Sparse to Dense Conversion <a class="md-anchor" id="AUTOGENERATED-sparse-to-dense-conversion"></a>
- - -
-### tf.sparse_to_dense(sparse_indices, output_shape, sparse_values, default_value, name=None) <div class="md-anchor" id="sparse_to_dense">{#sparse_to_dense}</div>
+### tf.sparse_to_dense(sparse_indices, output_shape, sparse_values, default_value, name=None) <a class="md-anchor" id="sparse_to_dense"></a>
Converts a sparse representation into a dense tensor.
@@ -185,7 +186,7 @@ dense[sparse_indices[i][0], ..., sparse_indices[i][d-1]] = sparse_values[i]
All other values in `dense` are set to `default_value`. If `sparse_values` is a
scalar, all sparse indices are set to this single value.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>sparse_indices</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
@@ -201,7 +202,7 @@ scalar, all sparse indices are set to this single value.
`sparse_indices`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor`. Has the same type as `sparse_values`.
Dense output tensor of shape `output_shape`.
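The scatter rule `dense[sparse_indices[i][0], ..., sparse_indices[i][d-1]] = sparse_values[i]` can be sketched in NumPy (an illustrative sketch; the keyword-style `default_value` here is for brevity, not the op's exact signature):

```python
import numpy as np

def sparse_to_dense(sparse_indices, output_shape, sparse_values,
                    default_value=0):
    # Start from a tensor filled with default_value, then scatter
    # sparse_values at the given multi-dimensional indices.
    dense = np.full(output_shape, default_value)
    idx = np.asarray(sparse_indices)
    dense[tuple(idx.T)] = sparse_values
    return dense

d = sparse_to_dense([[0, 0], [1, 2]], (2, 3), [5, 7], default_value=0)
```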
@@ -209,7 +210,7 @@ scalar, all sparse indices are set to this single value.
- - -
-### tf.sparse_tensor_to_dense(sp_input, default_value, name=None) <div class="md-anchor" id="sparse_tensor_to_dense">{#sparse_tensor_to_dense}</div>
+### tf.sparse_tensor_to_dense(sp_input, default_value, name=None) <a class="md-anchor" id="sparse_tensor_to_dense"></a>
Converts a `SparseTensor` into a dense tensor.
@@ -228,7 +229,7 @@ string tensor with values:
[x x x x x]
[c x x x x]]
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>sp_input</b>: The input `SparseTensor`.
@@ -236,13 +237,13 @@ string tensor with values:
`sp_input`.
* <b>name</b>: A name prefix for the returned tensors (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A dense tensor with shape `sp_input.shape` and values specified by
the non-empty values in `sp_input`. Indices not in `sp_input` are assigned
`default_value`.
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>TypeError</b>: If `sp_input` is not a `SparseTensor`.
@@ -250,7 +251,7 @@ string tensor with values:
- - -
-### tf.sparse_to_indicator(sp_input, vocab_size, name=None) <div class="md-anchor" id="sparse_to_indicator">{#sparse_to_indicator}</div>
+### tf.sparse_to_indicator(sp_input, vocab_size, name=None) <a class="md-anchor" id="sparse_to_indicator"></a>
Converts a `SparseTensor` of ids into a dense bool indicator tensor.
@@ -281,7 +282,7 @@ compatibility with ops that expect dense tensors.
The input `SparseTensor` must be in row-major order.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>sp_input</b>: A `SparseTensor` of type `int32` or `int64`.
@@ -289,22 +290,22 @@ The input `SparseTensor` must be in row-major order.
`all(0 <= sp_input.values < vocab_size)`.
* <b>name</b>: A name prefix for the returned tensors (optional)
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A dense bool indicator tensor representing the indices with specified value.
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>TypeError</b>: If `sp_input` is not a `SparseTensor`.
-## Manipulation <div class="md-anchor" id="AUTOGENERATED-manipulation">{#AUTOGENERATED-manipulation}</div>
+## Manipulation <a class="md-anchor" id="AUTOGENERATED-manipulation"></a>
- - -
-### tf.sparse_concat(concat_dim, sp_inputs, name=None) <div class="md-anchor" id="sparse_concat">{#sparse_concat}</div>
+### tf.sparse_concat(concat_dim, sp_inputs, name=None) <a class="md-anchor" id="sparse_concat"></a>
Concatenates a list of `SparseTensor` along the specified dimension.
@@ -350,18 +351,18 @@ Graphically this is equivalent to doing
[ a] concat [ d e ] = [ a d e ]
[b c ] [ ] [b c ]
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>concat_dim</b>: Dimension to concatenate along.
* <b>sp_inputs</b>: List of `SparseTensor` to concatenate.
* <b>name</b>: A name prefix for the returned tensors (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `SparseTensor` with the concatenated output.
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>TypeError</b>: If `sp_inputs` is not a list of `SparseTensor`.
@@ -369,7 +370,7 @@ Graphically this is equivalent to doing
- - -
-### tf.sparse_reorder(sp_input, name=None) <div class="md-anchor" id="sparse_reorder">{#sparse_reorder}</div>
+### tf.sparse_reorder(sp_input, name=None) <a class="md-anchor" id="sparse_reorder"></a>
Reorders a `SparseTensor` into the canonical, row-major ordering.
@@ -394,18 +395,18 @@ then the output will be a `SparseTensor` of shape `[4, 5]` and
[2, 0]: c
[3, 1]: d
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>sp_input</b>: The input `SparseTensor`.
* <b>name</b>: A name prefix for the returned tensors (optional)
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `SparseTensor` with the same shape and non-empty values, but in
canonical ordering.
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>TypeError</b>: If `sp_input` is not a `SparseTensor`.
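The canonical row-major reordering amounts to a lexicographic sort of the index rows, with values carried along; a NumPy sketch over the raw `(indices, values)` components (illustrative only):

```python
import numpy as np

def sparse_reorder(indices, values):
    # np.lexsort treats the LAST key as primary, so reverse the index
    # columns to sort by row first, then column: row-major order.
    order = np.lexsort(indices.T[::-1])
    return indices[order], values[order]
```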
@@ -413,7 +414,7 @@ then the output will be a `SparseTensor` of shape `[4, 5]` and
- - -
-### tf.sparse_retain(sp_input, to_retain) <div class="md-anchor" id="sparse_retain">{#sparse_retain}</div>
+### tf.sparse_retain(sp_input, to_retain) <a class="md-anchor" id="sparse_retain"></a>
Retains specified non-empty values within a `SparseTensor`.
@@ -430,18 +431,18 @@ be a `SparseTensor` of shape `[4, 5]` with 2 non-empty values:
[0, 1]: a
[3, 1]: d
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>sp_input</b>: The input `SparseTensor` with `N` non-empty elements.
* <b>to_retain</b>: A bool vector of length `N` with `M` true values.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `SparseTensor` with the same shape as the input and `M` non-empty
elements corresponding to the true positions in `to_retain`.
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>TypeError</b>: If `sp_input` is not a `SparseTensor`.
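Over the raw components, retaining is just boolean masking of the `N` non-empty entries while the dense shape stays the same; a NumPy sketch (illustrative only):

```python
import numpy as np

def sparse_retain(indices, values, to_retain):
    # Keep only the entries flagged True in the length-N bool vector.
    mask = np.asarray(to_retain)
    return indices[mask], values[mask]
```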
@@ -449,7 +450,7 @@ be a `SparseTensor` of shape `[4, 5]` with 2 non-empty values:
- - -
-### tf.sparse_fill_empty_rows(sp_input, default_value, name=None) <div class="md-anchor" id="sparse_fill_empty_rows">{#sparse_fill_empty_rows}</div>
+### tf.sparse_fill_empty_rows(sp_input, default_value, name=None) <a class="md-anchor" id="sparse_fill_empty_rows"></a>
Fills empty rows in the input 2-D `SparseTensor` with a default value.
@@ -482,7 +483,7 @@ This op also returns an indicator vector such that
empty_row_indicator[i] = True iff row i was an empty row.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>sp_input</b>: A `SparseTensor` with shape `[N, M]`.
@@ -490,7 +491,7 @@ This op also returns an indicator vector such that
`sp_input`.
*  <b>name</b>: A name prefix for the returned tensors (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
* <b>sp_ordered_output</b>: A `SparseTensor` with shape `[N, M]`, and with all empty
@@ -498,7 +499,7 @@ This op also returns an indicator vector such that
* <b>empty_row_indicator</b>: A bool vector of length `N` indicating whether each
input row was empty.
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>TypeError</b>: If `sp_input` is not a `SparseTensor`.
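As a rough sketch of the fill-and-indicate behavior, the plain-Python function below (using parallel index/value lists rather than a real `SparseTensor`) adds a `[row, 0]: default_value` entry for each empty row and returns the indicator vector.

```python
# A plain-Python sketch of tf.sparse_fill_empty_rows on an [N, M]
# sparse matrix: empty rows gain one entry at column 0 holding
# default_value, and empty_row_indicator marks which rows were empty.
def fill_empty_rows(indices, values, num_rows, default_value):
    occupied = {row for row, _ in indices}
    empty_row_indicator = [r not in occupied for r in range(num_rows)]
    for r in range(num_rows):
        if r not in occupied:
            indices.append([r, 0])
            values.append(default_value)
    # The op's output is in canonical row-major order, so re-sort.
    order = sorted(range(len(indices)), key=lambda i: indices[i])
    return ([indices[i] for i in order], [values[i] for i in order],
            empty_row_indicator)

idx, vals, ind = fill_empty_rows([[0, 1], [0, 3], [2, 0], [3, 1]],
                                 ["a", "b", "c", "d"], 5, "x")
print(idx, vals, ind)
# [[0, 1], [0, 3], [1, 0], [2, 0], [3, 1], [4, 0]]
# ['a', 'b', 'x', 'c', 'd', 'x']  [False, True, False, False, True]
```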
diff --git a/tensorflow/g3doc/api_docs/python/state_ops.md b/tensorflow/g3doc/api_docs/python/state_ops.md
index 70685a65bc..f18de539cb 100644
--- a/tensorflow/g3doc/api_docs/python/state_ops.md
+++ b/tensorflow/g3doc/api_docs/python/state_ops.md
@@ -1,12 +1,13 @@
<!-- This file is machine generated: DO NOT EDIT! -->
-# Variables
+# Variables <a class="md-anchor" id="AUTOGENERATED-variables"></a>
Note: Functions taking `Tensor` arguments can also take anything
accepted by [`tf.convert_to_tensor`](framework.md#convert_to_tensor).
<!-- TOC-BEGIN This section is generated by neural network: DO NOT EDIT! -->
## Contents
+### [Variables](#AUTOGENERATED-variables)
* [Variables](#AUTOGENERATED-variables)
* [class tf.Variable](#Variable)
* [Variable helper functions](#AUTOGENERATED-variable-helper-functions)
@@ -40,11 +41,11 @@ accepted by [`tf.convert_to_tensor`](framework.md#convert_to_tensor).
<!-- TOC-END This section was generated by neural network, THANKS FOR READING! -->
-## Variables <div class="md-anchor" id="AUTOGENERATED-variables">{#AUTOGENERATED-variables}</div>
+## Variables <a class="md-anchor" id="AUTOGENERATED-variables"></a>
- - -
-### class tf.Variable <div class="md-anchor" id="Variable">{#Variable}</div>
+### class tf.Variable <a class="md-anchor" id="Variable"></a>
See the [Variables How To](../../how_tos/variables/index.md) for a high
level overview.
@@ -137,7 +138,7 @@ Creating a variable.
- - -
-#### tf.Variable.__init__(initial_value, trainable=True, collections=None, validate_shape=True, name=None) {#Variable.__init__}
+#### tf.Variable.__init__(initial_value, trainable=True, collections=None, validate_shape=True, name=None) <a class="md-anchor" id="Variable.__init__"></a>
Creates a new variable with value `initial_value`.
@@ -150,7 +151,7 @@ If `trainable` is `True` the variable is also added to the graph collection
This constructor creates both a `variable` Op and an `assign` Op to set the
variable to its initial value.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>initial_value</b>: A `Tensor`, or Python object convertible to a `Tensor`.
@@ -167,11 +168,11 @@ variable to its initial value.
* <b>name</b>: Optional name for the variable. Defaults to `'Variable'` and gets
uniquified automatically.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A Variable.
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>ValueError</b>: If the initial value does not have a shape and
@@ -180,7 +181,7 @@ variable to its initial value.
- - -
-#### tf.Variable.initialized_value() {#Variable.initialized_value}
+#### tf.Variable.initialized_value() <a class="md-anchor" id="Variable.initialized_value"></a>
Returns the value of the initialized variable.
@@ -196,7 +197,7 @@ v = tf.Variable(tf.truncated_normal([10, 40]))
w = tf.Variable(v.initialized_value() * 2.0)
```
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor` holding the value of this variable after its initializer
has run.
@@ -207,19 +208,19 @@ Changing a variable value.
- - -
-#### tf.Variable.assign(value, use_locking=False) {#Variable.assign}
+#### tf.Variable.assign(value, use_locking=False) <a class="md-anchor" id="Variable.assign"></a>
Assigns a new value to the variable.
This is essentially a shortcut for `assign(self, value)`.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>value</b>: A `Tensor`. The new value for this variable.
* <b>use_locking</b>: If `True`, use locking during the assignment.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor` that will hold the new value of this variable after
the assignment has completed.
@@ -227,19 +228,19 @@ This is essentially a shortcut for `assign(self, value)`.
- - -
-#### tf.Variable.assign_add(delta, use_locking=False) {#Variable.assign_add}
+#### tf.Variable.assign_add(delta, use_locking=False) <a class="md-anchor" id="Variable.assign_add"></a>
Adds a value to this variable.
This is essentially a shortcut for `assign_add(self, delta)`.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>delta</b>: A `Tensor`. The value to add to this variable.
* <b>use_locking</b>: If `True`, use locking during the operation.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor` that will hold the new value of this variable after
the addition has completed.
@@ -247,19 +248,19 @@ Adds a value to this variable.
- - -
-#### tf.Variable.assign_sub(delta, use_locking=False) {#Variable.assign_sub}
+#### tf.Variable.assign_sub(delta, use_locking=False) <a class="md-anchor" id="Variable.assign_sub"></a>
Subtracts a value from this variable.
This is essentially a shortcut for `assign_sub(self, delta)`.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>delta</b>: A `Tensor`. The value to subtract from this variable.
* <b>use_locking</b>: If `True`, use locking during the operation.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor` that will hold the new value of this variable after
the subtraction has completed.
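The three shortcut methods above are related in a simple way; this plain-Python mock (not the real `tf.Variable`, whose methods return a `Tensor` that yields the post-assignment value when run in a session) shows that relationship.

```python
# A plain-Python mock of the assign/assign_add/assign_sub shortcuts.
# The real ops additionally take use_locking and run in a graph.
class MockVariable:
    def __init__(self, value):
        self.value = value

    def assign(self, value):
        self.value = value
        return self.value  # the new value, like the op's output Tensor

    def assign_add(self, delta):
        return self.assign(self.value + delta)

    def assign_sub(self, delta):
        return self.assign(self.value - delta)

v = MockVariable(10.0)
print(v.assign_add(2.5))  # 12.5
print(v.assign_sub(5.0))  # 7.5
```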
@@ -267,25 +268,25 @@ This is essentially a shortcut for `assign_sub(self, delta)`.
- - -
-#### tf.Variable.scatter_sub(sparse_delta, use_locking=False) {#Variable.scatter_sub}
+#### tf.Variable.scatter_sub(sparse_delta, use_locking=False) <a class="md-anchor" id="Variable.scatter_sub"></a>
Subtracts `IndexedSlices` from this variable.
This is essentially a shortcut for `scatter_sub(self, sparse_delta.indices,
sparse_delta.values)`.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>sparse_delta</b>: `IndexedSlices` to be subtracted from this variable.
* <b>use_locking</b>: If `True`, use locking during the operation.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor` that will hold the new value of this variable after
the scattered subtraction has completed.
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>ValueError</b>: if `sparse_delta` is not an `IndexedSlices`.
@@ -293,7 +294,7 @@ sparse_delta.values)`.
- - -
-#### tf.Variable.count_up_to(limit) {#Variable.count_up_to}
+#### tf.Variable.count_up_to(limit) <a class="md-anchor" id="Variable.count_up_to"></a>
Increments this variable until it reaches `limit`.
@@ -306,12 +307,12 @@ the increment.
This is essentially a shortcut for `count_up_to(self, limit)`.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>limit</b>: value at which incrementing the variable raises an error.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor` that will hold the variable value before the increment. If no
other Op modifies this variable, the values produced will all be
@@ -321,7 +322,7 @@ This is essentially a shortcut for `count_up_to(self, limit)`.
- - -
-#### tf.Variable.eval(session=None) {#Variable.eval}
+#### tf.Variable.eval(session=None) <a class="md-anchor" id="Variable.eval"></a>
In a session, computes and returns the value of this variable.
@@ -345,13 +346,13 @@ with tf.Session() as sess:
print v.eval()
```
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>session</b>: The session to use to evaluate this variable. If
none, the default session is used.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A numpy `ndarray` with a copy of the value of this variable.
@@ -361,61 +362,61 @@ Properties.
- - -
-#### tf.Variable.name {#Variable.name}
+#### tf.Variable.name <a class="md-anchor" id="Variable.name"></a>
The name of this variable.
- - -
-#### tf.Variable.dtype {#Variable.dtype}
+#### tf.Variable.dtype <a class="md-anchor" id="Variable.dtype"></a>
The `DType` of this variable.
- - -
-#### tf.Variable.get_shape() {#Variable.get_shape}
+#### tf.Variable.get_shape() <a class="md-anchor" id="Variable.get_shape"></a>
The `TensorShape` of this variable.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `TensorShape`.
- - -
-#### tf.Variable.device {#Variable.device}
+#### tf.Variable.device <a class="md-anchor" id="Variable.device"></a>
The device of this variable.
- - -
-#### tf.Variable.initializer {#Variable.initializer}
+#### tf.Variable.initializer <a class="md-anchor" id="Variable.initializer"></a>
The initializer operation for this variable.
- - -
-#### tf.Variable.graph {#Variable.graph}
+#### tf.Variable.graph <a class="md-anchor" id="Variable.graph"></a>
The `Graph` of this variable.
- - -
-#### tf.Variable.op {#Variable.op}
+#### tf.Variable.op <a class="md-anchor" id="Variable.op"></a>
The `Operation` of this variable.
-## Variable helper functions <div class="md-anchor" id="AUTOGENERATED-variable-helper-functions">{#AUTOGENERATED-variable-helper-functions}</div>
+## Variable helper functions <a class="md-anchor" id="AUTOGENERATED-variable-helper-functions"></a>
TensorFlow provides a set of functions to help manage the set of variables
collected in the graph.
- - -
-### tf.all_variables() <div class="md-anchor" id="all_variables">{#all_variables}</div>
+### tf.all_variables() <a class="md-anchor" id="all_variables"></a>
Returns all variables collected in the graph.
@@ -423,14 +424,14 @@ The `Variable()` constructor automatically adds new variables to the graph
collection `GraphKeys.VARIABLES`. This convenience function returns the
contents of that collection.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A list of `Variable` objects.
- - -
-### tf.trainable_variables() <div class="md-anchor" id="trainable_variables">{#trainable_variables}</div>
+### tf.trainable_variables() <a class="md-anchor" id="trainable_variables"></a>
Returns all variables created with `trainable=True`.
@@ -439,7 +440,7 @@ adds new variables to the graph collection
`GraphKeys.TRAINABLE_VARIABLES`. This convenience function returns the
contents of that collection.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A list of Variable objects.
@@ -447,20 +448,20 @@ contents of that collection.
- - -
-### tf.initialize_all_variables() <div class="md-anchor" id="initialize_all_variables">{#initialize_all_variables}</div>
+### tf.initialize_all_variables() <a class="md-anchor" id="initialize_all_variables"></a>
Returns an Op that initializes all variables.
This is just a shortcut for `initialize_variables(all_variables())`.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
An Op that initializes all variables in the graph.
- - -
-### tf.initialize_variables(var_list, name='init') <div class="md-anchor" id="initialize_variables">{#initialize_variables}</div>
+### tf.initialize_variables(var_list, name='init') <a class="md-anchor" id="initialize_variables"></a>
Returns an Op that initializes a list of variables.
@@ -474,20 +475,20 @@ initializers to `Group()`.
If `var_list` is empty, however, the function still returns an Op that can
be run. That Op just has no effect.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>var_list</b>: List of `Variable` objects to initialize.
* <b>name</b>: Optional name for the returned operation.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
  An Op that runs the initializers of all the specified variables.
- - -
-### tf.assert_variables_initialized(var_list=None) <div class="md-anchor" id="assert_variables_initialized">{#assert_variables_initialized}</div>
+### tf.assert_variables_initialized(var_list=None) <a class="md-anchor" id="assert_variables_initialized"></a>
Returns an Op to check if variables are initialized.
@@ -498,23 +499,23 @@ Note: This function is implemented by trying to fetch the values of the
variables. If one of the variables is not initialized a message may be
logged by the C++ runtime. This is expected.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>var_list</b>: List of `Variable` objects to check. Defaults to the
    value of `all_variables()`.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
An Op, or None if there are no variables.
-## Saving and Restoring Variables <div class="md-anchor" id="AUTOGENERATED-saving-and-restoring-variables">{#AUTOGENERATED-saving-and-restoring-variables}</div>
+## Saving and Restoring Variables <a class="md-anchor" id="AUTOGENERATED-saving-and-restoring-variables"></a>
- - -
-### class tf.train.Saver <div class="md-anchor" id="Saver">{#Saver}</div>
+### class tf.train.Saver <a class="md-anchor" id="Saver"></a>
Saves and restores variables.
@@ -590,7 +591,7 @@ protocol buffer file in the call to `save()`.
- - -
-#### tf.train.Saver.__init__(var_list=None, reshape=False, sharded=False, max_to_keep=5, keep_checkpoint_every_n_hours=10000.0, name=None, restore_sequentially=False, saver_def=None, builder=None) {#Saver.__init__}
+#### tf.train.Saver.__init__(var_list=None, reshape=False, sharded=False, max_to_keep=5, keep_checkpoint_every_n_hours=10000.0, name=None, restore_sequentially=False, saver_def=None, builder=None) <a class="md-anchor" id="Saver.__init__"></a>
Creates a `Saver`.
@@ -628,7 +629,7 @@ want to reload it from an older checkpoint.
The optional `sharded` argument, if True, instructs the saver to shard
checkpoints per device.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>var_list</b>: A list of Variables or a dictionary mapping names to
@@ -652,7 +653,7 @@ checkpoints per device.
* <b>builder</b>: Optional SaverBuilder to use if a saver_def was not provided.
Defaults to BaseSaverBuilder().
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>TypeError</b>: If `var_list` is invalid.
@@ -661,7 +662,7 @@ checkpoints per device.
- - -
-#### tf.train.Saver.save(sess, save_path, global_step=None, latest_filename=None) {#Saver.save}
+#### tf.train.Saver.save(sess, save_path, global_step=None, latest_filename=None) <a class="md-anchor" id="Saver.save"></a>
Saves variables.
@@ -672,7 +673,7 @@ save must also have been initialized.
The method returns the path of the newly created checkpoint file. This
path can be passed directly to a call to `restore()`.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
*  <b>sess</b>: A Session to use to save the variables.
@@ -687,13 +688,13 @@ path can be passed directly to a call to `restore()`.
managed by the saver to keep track of recent checkpoints. Defaults to
'checkpoint'.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A string: path at which the variables were saved. If the saver is
sharded, this string ends with: '-?????-of-nnnnn' where 'nnnnn'
is the number of shards created.
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>TypeError</b>: If `sess` is not a Session.
@@ -701,7 +702,7 @@ path can be passed directly to a call to `restore()`.
- - -
-#### tf.train.Saver.restore(sess, save_path) {#Saver.restore}
+#### tf.train.Saver.restore(sess, save_path) <a class="md-anchor" id="Saver.restore"></a>
Restores previously saved variables.
@@ -713,7 +714,7 @@ to initialize variables.
The `save_path` argument is typically a value previously returned from a
`save()` call, or a call to `latest_checkpoint()`.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>sess</b>: A Session to use to restore the parameters.
@@ -725,28 +726,28 @@ Other utility methods.
- - -
-#### tf.train.Saver.last_checkpoints {#Saver.last_checkpoints}
+#### tf.train.Saver.last_checkpoints <a class="md-anchor" id="Saver.last_checkpoints"></a>
List of not-yet-deleted checkpoint filenames.
You can pass any of the returned values to `restore()`.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A list of checkpoint filenames, sorted from oldest to newest.
- - -
-#### tf.train.Saver.set_last_checkpoints(last_checkpoints) {#Saver.set_last_checkpoints}
+#### tf.train.Saver.set_last_checkpoints(last_checkpoints) <a class="md-anchor" id="Saver.set_last_checkpoints"></a>
Sets the list of not-yet-deleted checkpoint filenames.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>last_checkpoints</b>: a list of checkpoint filenames.
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>AssertionError</b>: if the list of checkpoint filenames has already been set.
@@ -754,11 +755,11 @@ Sets the list of not-yet-deleted checkpoint filenames.
- - -
-#### tf.train.Saver.as_saver_def() {#Saver.as_saver_def}
+#### tf.train.Saver.as_saver_def() <a class="md-anchor" id="Saver.as_saver_def"></a>
Generates a `SaverDef` representation of this saver.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `SaverDef` proto.
@@ -767,11 +768,11 @@ Generates a `SaverDef` representation of this saver.
- - -
-### tf.train.latest_checkpoint(checkpoint_dir, latest_filename=None) <div class="md-anchor" id="latest_checkpoint">{#latest_checkpoint}</div>
+### tf.train.latest_checkpoint(checkpoint_dir, latest_filename=None) <a class="md-anchor" id="latest_checkpoint"></a>
Finds the filename of the latest saved checkpoint file.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>checkpoint_dir</b>: Directory where the variables were saved.
@@ -779,7 +780,7 @@ Finds the filename of latest saved checkpoint file.
contains the list of most recent checkpoint filenames.
See the corresponding argument to `Saver.save()`.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
The full path to the latest checkpoint or None if no checkpoint was found.
@@ -787,21 +788,21 @@ Finds the filename of latest saved checkpoint file.
- - -
-### tf.train.get_checkpoint_state(checkpoint_dir, latest_filename=None) <div class="md-anchor" id="get_checkpoint_state">{#get_checkpoint_state}</div>
+### tf.train.get_checkpoint_state(checkpoint_dir, latest_filename=None) <a class="md-anchor" id="get_checkpoint_state"></a>
Returns CheckpointState proto from the "checkpoint" file.
If the "checkpoint" file contains a valid CheckpointState
proto, returns it.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>checkpoint_dir</b>: The directory of checkpoints.
*  <b>latest_filename</b>: Optional name of the checkpoint file. Defaults to
'checkpoint'.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A CheckpointState if the state was available, None
otherwise.
@@ -809,14 +810,14 @@ proto, returns it.
- - -
-### tf.train.update_checkpoint_state(save_dir, model_checkpoint_path, all_model_checkpoint_paths=None, latest_filename=None) <div class="md-anchor" id="update_checkpoint_state">{#update_checkpoint_state}</div>
+### tf.train.update_checkpoint_state(save_dir, model_checkpoint_path, all_model_checkpoint_paths=None, latest_filename=None) <a class="md-anchor" id="update_checkpoint_state"></a>
Updates the content of the 'checkpoint' file.
This updates the checkpoint file containing a CheckpointState
proto.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>save_dir</b>: Directory where the model was saved.
@@ -828,21 +829,21 @@ proto.
*  <b>latest_filename</b>: Optional name of the checkpoint file. Defaults to
'checkpoint'.
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>RuntimeError</b>: If the save paths conflict.
-## Sharing Variables <div class="md-anchor" id="AUTOGENERATED-sharing-variables">{#AUTOGENERATED-sharing-variables}</div>
+## Sharing Variables <a class="md-anchor" id="AUTOGENERATED-sharing-variables"></a>
TensorFlow provides several classes and operations that you can use to
create variables contingent on certain conditions.
- - -
-### tf.get_variable(name, shape=None, dtype=tf.float32, initializer=None, trainable=True, collections=None) <div class="md-anchor" id="get_variable">{#get_variable}</div>
+### tf.get_variable(name, shape=None, dtype=tf.float32, initializer=None, trainable=True, collections=None) <a class="md-anchor" id="get_variable"></a>
Gets an existing variable with these parameters or creates a new one.
@@ -863,7 +864,7 @@ If initializer is `None` (the default), the default initializer passed in
the constructor is used. If that one is `None` too, a
`UniformUnitScalingInitializer` will be used.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>name</b>: the name of the new or existing variable.
@@ -875,11 +876,11 @@ the constructor is used. If that one is `None` too, a
* <b>collections</b>: List of graph collections keys to add the Variable to.
Defaults to `[GraphKeys.VARIABLES]` (see variables.Variable).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
The created or existing variable.
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>ValueError</b>: when creating a new variable and shape is not declared,
@@ -889,14 +890,14 @@ the constructor is used. If that one is `None` too, a
- - -
-### tf.get_variable_scope() <div class="md-anchor" id="get_variable_scope">{#get_variable_scope}</div>
+### tf.get_variable_scope() <a class="md-anchor" id="get_variable_scope"></a>
Returns the current variable scope.
- - -
-### tf.variable_scope(name_or_scope, reuse=None, initializer=None) <div class="md-anchor" id="variable_scope">{#variable_scope}</div>
+### tf.variable_scope(name_or_scope, reuse=None, initializer=None) <a class="md-anchor" id="variable_scope"></a>
Returns a context for variable scope.
@@ -956,7 +957,7 @@ with tf.variable_scope("foo", reuse=True):
Note that the `reuse` flag is inherited: if we open a reusing scope,
then all its sub-scopes become reusing as well.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>name_or_scope</b>: `string` or `VariableScope`: the scope to open.
@@ -964,11 +965,11 @@ then all its sub-scopes become reusing as well.
well as all sub-scopes; if `None`, we just inherit the parent scope reuse.
* <b>initializer</b>: default initializer for variables within this scope.
-##### Yields:
+##### Yields: <a class="md-anchor" id="AUTOGENERATED-yields-"></a>
  A scope that can be captured and reused.
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>ValueError</b>: when trying to reuse within a create scope, or create within
@@ -979,28 +980,28 @@ then all its sub-scopes become reusing as well.
- - -
-### tf.constant_initializer(value=0.0) <div class="md-anchor" id="constant_initializer">{#constant_initializer}</div>
+### tf.constant_initializer(value=0.0) <a class="md-anchor" id="constant_initializer"></a>
Returns an initializer that generates Tensors with a single value.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>value</b>: A Python scalar. All elements of the initialized variable
will be set to this value.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
An initializer that generates Tensors with a single value.
- - -
-### tf.random_normal_initializer(mean=0.0, stddev=1.0, seed=None) <div class="md-anchor" id="random_normal_initializer">{#random_normal_initializer}</div>
+### tf.random_normal_initializer(mean=0.0, stddev=1.0, seed=None) <a class="md-anchor" id="random_normal_initializer"></a>
Returns an initializer that generates Tensors with a normal distribution.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>mean</b>: a python scalar or a scalar tensor. Mean of the random values
@@ -1010,14 +1011,14 @@ Returns an initializer that generates Tensors with a normal distribution.
* <b>seed</b>: A Python integer. Used to create random seeds.
See [`set_random_seed`](constant_op.md#set_random_seed) for behavior.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
An initializer that generates Tensors with a normal distribution.
- - -
-### tf.truncated_normal_initializer(mean=0.0, stddev=1.0, seed=None) <div class="md-anchor" id="truncated_normal_initializer">{#truncated_normal_initializer}</div>
+### tf.truncated_normal_initializer(mean=0.0, stddev=1.0, seed=None) <a class="md-anchor" id="truncated_normal_initializer"></a>
Returns an initializer that generates a truncated normal distribution.
@@ -1026,7 +1027,7 @@ except that values more than two standard deviations from the mean
are discarded and re-drawn. This is the recommended initializer for
neural network weights and filters.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>mean</b>: a python scalar or a scalar tensor. Mean of the random values
@@ -1036,7 +1037,7 @@ neural network weights and filters.
* <b>seed</b>: A Python integer. Used to create random seeds.
See [`set_random_seed`](constant_op.md#set_random_seed) for behavior.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
An initializer that generates Tensors with a truncated normal
distribution.
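The discard-and-redraw rule described above is ordinary rejection sampling; this sketch (not TensorFlow's implementation) shows it with Python's standard `random` module.

```python
import random

# A sketch of the truncated-normal rule: draw from N(mean, stddev)
# and re-draw any sample more than two standard deviations from the
# mean. Not TensorFlow's implementation, just the documented behavior.
def truncated_normal(mean=0.0, stddev=1.0, seed=None):
    rng = random.Random(seed)
    while True:
        x = rng.gauss(mean, stddev)
        if abs(x - mean) <= 2.0 * stddev:
            return x

samples = [truncated_normal(0.0, 1.0, seed=i) for i in range(1000)]
print(max(abs(s) for s in samples))  # never exceeds 2.0
```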
@@ -1044,11 +1045,11 @@ neural network weights and filters.
- - -
-### tf.random_uniform_initializer(minval=0.0, maxval=1.0, seed=None) <div class="md-anchor" id="random_uniform_initializer">{#random_uniform_initializer}</div>
+### tf.random_uniform_initializer(minval=0.0, maxval=1.0, seed=None) <a class="md-anchor" id="random_uniform_initializer"></a>
Returns an initializer that generates Tensors with a uniform distribution.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>minval</b>: a python scalar or a scalar tensor. lower bound of the range
@@ -1058,14 +1059,14 @@ Returns an initializer that generates Tensors with a uniform distribution.
* <b>seed</b>: A Python integer. Used to create random seeds.
See [`set_random_seed`](constant_op.md#set_random_seed) for behavior.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
An initializer that generates Tensors with a uniform distribution.
- - -
-### tf.uniform_unit_scaling_initializer(factor=1.0, seed=None) <div class="md-anchor" id="uniform_unit_scaling_initializer">{#uniform_unit_scaling_initializer}</div>
+### tf.uniform_unit_scaling_initializer(factor=1.0, seed=None) <a class="md-anchor" id="uniform_unit_scaling_initializer"></a>
Returns an initializer that generates tensors without scaling variance.
@@ -1084,27 +1085,27 @@ See <https://arxiv.org/pdf/1412.6558v3.pdf> for deeper motivation, experiments
and the calculation of constants. In section 2.3 there, the constants were
numerically computed: for a linear layer it's 1.0, relu: ~1.43, tanh: ~1.15.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>factor</b>: Float. A multiplicative factor by which the values will be scaled.
* <b>seed</b>: A Python integer. Used to create random seeds.
See [`set_random_seed`](constant_op.md#set_random_seed) for behavior.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
An initializer that generates tensors with unit variance.
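The variance-preserving idea can be sketched numerically, under the assumption (the bound below is the author's reading, not taken from this document) that the initializer draws uniformly from `[-limit, limit]` with `limit = sqrt(3 / input_dim) * factor`, so that `Var(U(-a, a)) = a**2 / 3 = factor**2 / input_dim` and summing over `input_dim` inputs gives an output variance of about `factor**2`.

```python
import math
import random

# Hypothetical sketch: uniform bound that keeps a dense layer's
# output variance near factor**2 (limit formula is an assumption).
def unit_scaling_limit(input_dim, factor=1.0):
    return math.sqrt(3.0 / input_dim) * factor

def init_weights(input_dim, output_dim, factor=1.0, seed=None):
    rng = random.Random(seed)
    limit = unit_scaling_limit(input_dim, factor)
    return [[rng.uniform(-limit, limit) for _ in range(output_dim)]
            for _ in range(input_dim)]

print(unit_scaling_limit(100))  # sqrt(3/100) ~= 0.1732
```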
- - -
-### tf.zeros_initializer(shape, dtype=tf.float32) <div class="md-anchor" id="zeros_initializer">{#zeros_initializer}</div>
+### tf.zeros_initializer(shape, dtype=tf.float32) <a class="md-anchor" id="zeros_initializer"></a>
An adaptor for zeros() to match the Initializer spec.
-## Sparse Variable Updates <div class="md-anchor" id="AUTOGENERATED-sparse-variable-updates">{#AUTOGENERATED-sparse-variable-updates}</div>
+## Sparse Variable Updates <a class="md-anchor" id="AUTOGENERATED-sparse-variable-updates"></a>
The sparse update ops modify a subset of the entries in a dense `Variable`,
either overwriting the entries or adding / subtracting a delta. These are
@@ -1119,7 +1120,7 @@ automatically by the optimizers in most cases.
- - -
-### tf.scatter_update(ref, indices, updates, use_locking=None, name=None) <div class="md-anchor" id="scatter_update">{#scatter_update}</div>
+### tf.scatter_update(ref, indices, updates, use_locking=None, name=None) <a class="md-anchor" id="scatter_update"></a>
Applies sparse updates to a variable reference.
@@ -1146,7 +1147,7 @@ Requires `updates.shape = indices.shape + ref.shape[1:]`.
<img style="width:100%" src="../images/ScatterUpdate.png" alt>
</div>
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>ref</b>: A mutable `Tensor`. Should be from a `Variable` node.
@@ -1159,7 +1160,7 @@ Requires `updates.shape = indices.shape + ref.shape[1:]`.
otherwise the behavior is undefined, but may exhibit less contention.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
Same as `ref`. Returned as a convenience for operations that want
to use the updated values after the update is done.
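For 1-D `indices`, the update rule above reduces to `ref[indices[i]] = updates[i]`; this plain-Python sketch (lists standing in for a mutable `Tensor`) shows the semantics, with the add/sub variants differing only in using `+=` / `-=` instead of assignment.

```python
# A plain-Python sketch of scatter_update for 1-D indices:
# ref[indices[i]] = updates[i]. Duplicate indices are resolved in
# an unspecified order in the real op.
def scatter_update(ref, indices, updates):
    for i, idx in enumerate(indices):
        ref[idx] = updates[i]
    return ref  # returned as a convenience, like the op's output

ref = [[1, 1], [2, 2], [3, 3]]
print(scatter_update(ref, [0, 2], [[9, 9], [7, 7]]))
# [[9, 9], [2, 2], [7, 7]]
```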
@@ -1167,7 +1168,7 @@ Requires `updates.shape = indices.shape + ref.shape[1:]`.
- - -
-### tf.scatter_add(ref, indices, updates, use_locking=None, name=None) <div class="md-anchor" id="scatter_add">{#scatter_add}</div>
+### tf.scatter_add(ref, indices, updates, use_locking=None, name=None) <a class="md-anchor" id="scatter_add"></a>
Adds sparse updates to a variable reference.
@@ -1194,7 +1195,7 @@ Requires `updates.shape = indices.shape + ref.shape[1:]`.
<img style="width:100%" src="../images/ScatterAdd.png" alt>
</div>
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>ref</b>: A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `qint8`, `quint8`, `qint32`.
@@ -1208,7 +1209,7 @@ Requires `updates.shape = indices.shape + ref.shape[1:]`.
otherwise the behavior is undefined, but may exhibit less contention.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
Same as `ref`. Returned as a convenience for operations that want
to use the updated values after the update is done.
@@ -1216,7 +1217,7 @@ Requires `updates.shape = indices.shape + ref.shape[1:]`.
- - -
-### tf.scatter_sub(ref, indices, updates, use_locking=None, name=None) <div class="md-anchor" id="scatter_sub">{#scatter_sub}</div>
+### tf.scatter_sub(ref, indices, updates, use_locking=None, name=None) <a class="md-anchor" id="scatter_sub"></a>
Subtracts sparse updates from a variable reference.
@@ -1241,7 +1242,7 @@ Requires `updates.shape = indices.shape + ref.shape[1:]`.
<img style="width:100%" src="../images/ScatterSub.png" alt>
</div>
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>ref</b>: A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `qint8`, `quint8`, `qint32`.
@@ -1255,7 +1256,7 @@ Requires `updates.shape = indices.shape + ref.shape[1:]`.
otherwise the behavior is undefined, but may exhibit less contention.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
Same as `ref`. Returned as a convenience for operations that want
to use the updated values after the update is done.
@@ -1263,7 +1264,7 @@ Requires `updates.shape = indices.shape + ref.shape[1:]`.
- - -
-### tf.sparse_mask(a, mask_indices, name=None) <div class="md-anchor" id="sparse_mask">{#sparse_mask}</div>
+### tf.sparse_mask(a, mask_indices, name=None) <a class="md-anchor" id="sparse_mask"></a>
Masks elements of `IndexedSlices`.
@@ -1292,20 +1293,20 @@ tf.shape(b.values) => [2, 10]
```
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* `a`: An `IndexedSlices` instance.
* `mask_indices`: Indices of elements to mask.
* `name`: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
The masked `IndexedSlices` instance.
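A list-based sketch of the masking semantics (a hypothetical helper, not the TensorFlow implementation): slices whose index appears in `mask_indices` are dropped, and the remaining indices and values stay paired:

```python
def sparse_mask(indices, values, mask_indices):
    # Keep only the slices whose index is NOT in mask_indices.
    masked = set(mask_indices)
    kept = [(i, v) for i, v in zip(indices, values) if i not in masked]
    return [i for i, _ in kept], [v for _, v in kept]

idx, vals = sparse_mask([12, 26, 37, 45], ['a', 'b', 'c', 'd'], [12, 45])
# idx == [26, 37]; the two masked slices are gone
```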
- - -
-### class tf.IndexedSlices <div class="md-anchor" id="IndexedSlices">{#IndexedSlices}</div>
+### class tf.IndexedSlices <a class="md-anchor" id="IndexedSlices"></a>
A sparse representation of a set of tensor slices at given indices.
@@ -1335,7 +1336,7 @@ which uses multi-dimensional indices and scalar values.
- - -
-#### tf.IndexedSlices.__init__(values, indices, dense_shape=None) {#IndexedSlices.__init__}
+#### tf.IndexedSlices.__init__(values, indices, dense_shape=None) <a class="md-anchor" id="IndexedSlices.__init__"></a>
Creates an `IndexedSlices`.
@@ -1343,44 +1344,44 @@ Creates an `IndexedSlices`.
- - -
-#### tf.IndexedSlices.values {#IndexedSlices.values}
+#### tf.IndexedSlices.values <a class="md-anchor" id="IndexedSlices.values"></a>
A `Tensor` containing the values of the slices.
- - -
-#### tf.IndexedSlices.indices {#IndexedSlices.indices}
+#### tf.IndexedSlices.indices <a class="md-anchor" id="IndexedSlices.indices"></a>
A 1-D `Tensor` containing the indices of the slices.
- - -
-#### tf.IndexedSlices.dense_shape {#IndexedSlices.dense_shape}
+#### tf.IndexedSlices.dense_shape <a class="md-anchor" id="IndexedSlices.dense_shape"></a>
A 1-D `Tensor` containing the shape of the corresponding dense tensor.
- - -
-#### tf.IndexedSlices.name {#IndexedSlices.name}
+#### tf.IndexedSlices.name <a class="md-anchor" id="IndexedSlices.name"></a>
The name of this `IndexedSlices`.
- - -
-#### tf.IndexedSlices.dtype {#IndexedSlices.dtype}
+#### tf.IndexedSlices.dtype <a class="md-anchor" id="IndexedSlices.dtype"></a>
The `DType` of elements in this tensor.
- - -
-#### tf.IndexedSlices.device {#IndexedSlices.device}
+#### tf.IndexedSlices.device <a class="md-anchor" id="IndexedSlices.device"></a>
The name of the device on which `values` will be produced, or `None`.
- - -
-#### tf.IndexedSlices.op {#IndexedSlices.op}
+#### tf.IndexedSlices.op <a class="md-anchor" id="IndexedSlices.op"></a>
The `Operation` that produces `values` as an output.
diff --git a/tensorflow/g3doc/api_docs/python/train.md b/tensorflow/g3doc/api_docs/python/train.md
index 554d25698a..3327aa20b4 100644
--- a/tensorflow/g3doc/api_docs/python/train.md
+++ b/tensorflow/g3doc/api_docs/python/train.md
@@ -1,8 +1,9 @@
<!-- This file is machine generated: DO NOT EDIT! -->
-# Training
+# Training <a class="md-anchor" id="AUTOGENERATED-training"></a>
<!-- TOC-BEGIN This section is generated by neural network: DO NOT EDIT! -->
## Contents
+### [Training](#AUTOGENERATED-training)
* [Optimizers](#AUTOGENERATED-optimizers)
* [class tf.train.Optimizer](#Optimizer)
* [Usage](#AUTOGENERATED-usage)
@@ -53,7 +54,7 @@
This library provides a set of classes and functions that help train models.
-## Optimizers <div class="md-anchor" id="AUTOGENERATED-optimizers">{#AUTOGENERATED-optimizers}</div>
+## Optimizers <a class="md-anchor" id="AUTOGENERATED-optimizers"></a>
The Optimizer base class provides methods to compute gradients for a loss and
apply gradients to variables. A collection of subclasses implement classic
@@ -64,7 +65,7 @@ of the subclasses.
- - -
-### class tf.train.Optimizer <div class="md-anchor" id="Optimizer">{#Optimizer}</div>
+### class tf.train.Optimizer <a class="md-anchor" id="Optimizer"></a>
Base class for optimizers.
@@ -72,7 +73,7 @@ This class defines the API to add Ops to train a model. You never use this
class directly, but instead instantiate one of its subclasses such as
`GradientDescentOptimizer`, `AdagradOptimizer`, or `MomentumOptimizer`.
-### Usage <div class="md-anchor" id="AUTOGENERATED-usage">{#AUTOGENERATED-usage}</div>
+### Usage <a class="md-anchor" id="AUTOGENERATED-usage"></a>
```
# Create an optimizer with the desired parameters.
@@ -90,7 +91,7 @@ In the training program you will just have to run the returned Op.
opt_op.run()
```
-### Processing gradients before applying them. <div class="md-anchor" id="AUTOGENERATED-processing-gradients-before-applying-them.">{#AUTOGENERATED-processing-gradients-before-applying-them.}</div>
+### Processing gradients before applying them. <a class="md-anchor" id="AUTOGENERATED-processing-gradients-before-applying-them."></a>
Calling `minimize()` takes care of both computing the gradients and
applying them to the variables. If you want to process the gradients
@@ -119,13 +120,13 @@ opt.apply_gradients(capped_grads_and_vars)
- - -
-#### tf.train.Optimizer.__init__(use_locking, name) {#Optimizer.__init__}
+#### tf.train.Optimizer.__init__(use_locking, name) <a class="md-anchor" id="Optimizer.__init__"></a>
Create a new Optimizer.
This must be called by the constructors of subclasses.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>use_locking</b>: Bool. If True apply use locks to prevent concurrent updates
@@ -133,7 +134,7 @@ This must be called by the constructors of subclasses.
* <b>name</b>: A non-empty string. The name to use for accumulators created
for the optimizer.
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>ValueError</b>: if name is malformed.
@@ -142,7 +143,7 @@ This must be called by the constructors of subclasses.
- - -
-#### tf.train.Optimizer.minimize(loss, global_step=None, var_list=None, gate_gradients=1, name=None) {#Optimizer.minimize}
+#### tf.train.Optimizer.minimize(loss, global_step=None, var_list=None, gate_gradients=1, name=None) <a class="md-anchor" id="Optimizer.minimize"></a>
Add operations to minimize 'loss' by updating 'var_list'.
@@ -151,7 +152,7 @@ apply_gradients(). If you want to process the gradient before applying them
call compute_gradients() and apply_gradients() explicitly instead of using
this function.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>loss</b>: A Tensor containing the value to minimize.
@@ -164,12 +165,12 @@ this function.
GATE_NONE, GATE_OP, or GATE_GRAPH.
* <b>name</b>: Optional name for the returned operation.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
An Operation that updates the variables in 'var_list'. If 'global_step'
was not None, that operation also increments global_step.
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>ValueError</b>: if some of the variables are not variables.Variable objects.
@@ -177,7 +178,7 @@ this function.
- - -
-#### tf.train.Optimizer.compute_gradients(loss, var_list=None, gate_gradients=1) {#Optimizer.compute_gradients}
+#### tf.train.Optimizer.compute_gradients(loss, var_list=None, gate_gradients=1) <a class="md-anchor" id="Optimizer.compute_gradients"></a>
Compute gradients of "loss" for the variables in "var_list".
@@ -187,7 +188,7 @@ for "variable". Note that "gradient" can be a Tensor, a
IndexedSlices, or None if there is no gradient for the
given variable.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>loss</b>: A Tensor containing the value to minimize.
@@ -197,11 +198,11 @@ given variable.
* <b>gate_gradients</b>: How to gate the computation of gradients. Can be
GATE_NONE, GATE_OP, or GATE_GRAPH.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A list of (gradient, variable) pairs.
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>TypeError</b>: If var_list contains anything else than variables.Variable.
@@ -210,14 +211,14 @@ given variable.
- - -
-#### tf.train.Optimizer.apply_gradients(grads_and_vars, global_step=None, name=None) {#Optimizer.apply_gradients}
+#### tf.train.Optimizer.apply_gradients(grads_and_vars, global_step=None, name=None) <a class="md-anchor" id="Optimizer.apply_gradients"></a>
Apply gradients to variables.
This is the second part of minimize(). It returns an Operation that
applies gradients.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>grads_and_vars</b>: List of (gradient, variable) pairs as returned by
@@ -227,19 +228,19 @@ applies gradients.
* <b>name</b>: Optional name for the returned operation. Default to the
name passed to the Optimizer constructor.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
An Operation that applies the specified gradients. If 'global_step'
was not None, that operation also increments global_step.
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>TypeError</b>: if grads_and_vars is malformed.
-### Gating Gradients <div class="md-anchor" id="AUTOGENERATED-gating-gradients">{#AUTOGENERATED-gating-gradients}</div>
+### Gating Gradients <a class="md-anchor" id="AUTOGENERATED-gating-gradients"></a>
Both `minimize()` and `compute_gradients()` accept a `gate_gradients` argument
that controls the degree of parallelism during the application of the
@@ -262,7 +263,7 @@ multiple inputs where the gradients depend on the inputs.
before any one of them is used. This provides the least parallelism but can
be useful if you want to process all gradients before applying any of them.
-### Slots <div class="md-anchor" id="AUTOGENERATED-slots">{#AUTOGENERATED-slots}</div>
+### Slots <a class="md-anchor" id="AUTOGENERATED-slots"></a>
Some optimizer subclasses, such as `MomentumOptimizer` and `AdagradOptimizer`,
allocate and manage additional variables associated with the variables to
@@ -275,20 +276,20 @@ about the slots, etc.
- - -
-#### tf.train.Optimizer.get_slot_names() {#Optimizer.get_slot_names}
+#### tf.train.Optimizer.get_slot_names() <a class="md-anchor" id="Optimizer.get_slot_names"></a>
Return a list of the names of slots created by the Optimizer.
See get_slot().
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A list of strings.
- - -
-#### tf.train.Optimizer.get_slot(var, name) {#Optimizer.get_slot}
+#### tf.train.Optimizer.get_slot(var, name) <a class="md-anchor" id="Optimizer.get_slot"></a>
Return a slot named "name" created for "var" by the Optimizer.
@@ -298,13 +299,13 @@ gives access to these Variables if for some reason you need them.
Use get_slot_names() to get the list of slot names created by the Optimizer.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>var</b>: A variable passed to minimize() or apply_gradients().
* <b>name</b>: A string.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
The Variable for the slot if it was created, None otherwise.
@@ -313,17 +314,17 @@ Use get_slot_names() to get the list of slot names created by the Optimizer.
- - -
-### class tf.train.GradientDescentOptimizer <div class="md-anchor" id="GradientDescentOptimizer">{#GradientDescentOptimizer}</div>
+### class tf.train.GradientDescentOptimizer <a class="md-anchor" id="GradientDescentOptimizer"></a>
Optimizer that implements the gradient descent algorithm.
- - -
-#### tf.train.GradientDescentOptimizer.__init__(learning_rate, use_locking=False, name='GradientDescent') {#GradientDescentOptimizer.__init__}
+#### tf.train.GradientDescentOptimizer.__init__(learning_rate, use_locking=False, name='GradientDescent') <a class="md-anchor" id="GradientDescentOptimizer.__init__"></a>
Construct a new gradient descent optimizer.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>learning_rate</b>: A Tensor or a floating point value. The learning
@@ -336,17 +337,17 @@ Construct a new gradient descent optimizer.
- - -
-### class tf.train.AdagradOptimizer <div class="md-anchor" id="AdagradOptimizer">{#AdagradOptimizer}</div>
+### class tf.train.AdagradOptimizer <a class="md-anchor" id="AdagradOptimizer"></a>
Optimizer that implements the Adagrad algorithm.
- - -
-#### tf.train.AdagradOptimizer.__init__(learning_rate, initial_accumulator_value=0.1, use_locking=False, name='Adagrad') {#AdagradOptimizer.__init__}
+#### tf.train.AdagradOptimizer.__init__(learning_rate, initial_accumulator_value=0.1, use_locking=False, name='Adagrad') <a class="md-anchor" id="AdagradOptimizer.__init__"></a>
Construct a new Adagrad optimizer.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>learning_rate</b>: A `Tensor` or a floating point value. The learning rate.
@@ -356,7 +357,7 @@ Construct a new Adagrad optimizer.
* <b>name</b>: Optional name prefix for the operations created when applying
gradients. Defaults to "Adagrad".
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>ValueError</b>: If the initial_accumulator_value is invalid.
@@ -365,17 +366,17 @@ Construct a new Adagrad optimizer.
- - -
-### class tf.train.MomentumOptimizer <div class="md-anchor" id="MomentumOptimizer">{#MomentumOptimizer}</div>
+### class tf.train.MomentumOptimizer <a class="md-anchor" id="MomentumOptimizer"></a>
Optimizer that implements the Momentum algorithm.
- - -
-#### tf.train.MomentumOptimizer.__init__(learning_rate, momentum, use_locking=False, name='Momentum') {#MomentumOptimizer.__init__}
+#### tf.train.MomentumOptimizer.__init__(learning_rate, momentum, use_locking=False, name='Momentum') <a class="md-anchor" id="MomentumOptimizer.__init__"></a>
Construct a new Momentum optimizer.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>learning_rate</b>: A `Tensor` or a floating point value. The learning rate.
@@ -388,13 +389,13 @@ Construct a new Momentum optimizer.
- - -
-### class tf.train.AdamOptimizer <div class="md-anchor" id="AdamOptimizer">{#AdamOptimizer}</div>
+### class tf.train.AdamOptimizer <a class="md-anchor" id="AdamOptimizer"></a>
Optimizer that implements the Adam algorithm.
- - -
-#### tf.train.AdamOptimizer.__init__(learning_rate=0.001, beta1=0.9, beta2=0.999, epsilon=1e-08, use_locking=False, name='Adam') {#AdamOptimizer.__init__}
+#### tf.train.AdamOptimizer.__init__(learning_rate=0.001, beta1=0.9, beta2=0.999, epsilon=1e-08, use_locking=False, name='Adam') <a class="md-anchor" id="AdamOptimizer.__init__"></a>
Construct a new Adam optimizer.
@@ -424,7 +425,7 @@ The default value of 1e-8 for epsilon might not be a good default in
general. For example, when training an Inception network on ImageNet a
current good choice is 1.0 or 0.1.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>learning_rate</b>: A Tensor or a floating point value. The learning rate.
@@ -441,13 +442,13 @@ current good choice is 1.0 or 0.1.
- - -
-### class tf.train.FtrlOptimizer <div class="md-anchor" id="FtrlOptimizer">{#FtrlOptimizer}</div>
+### class tf.train.FtrlOptimizer <a class="md-anchor" id="FtrlOptimizer"></a>
Optimizer that implements the FTRL algorithm.
- - -
-#### tf.train.FtrlOptimizer.__init__(learning_rate, learning_rate_power=-0.5, initial_accumulator_value=0.1, l1_regularization_strength=0.0, l2_regularization_strength=0.0, use_locking=False, name='Ftrl') {#FtrlOptimizer.__init__}
+#### tf.train.FtrlOptimizer.__init__(learning_rate, learning_rate_power=-0.5, initial_accumulator_value=0.1, l1_regularization_strength=0.0, l2_regularization_strength=0.0, use_locking=False, name='Ftrl') <a class="md-anchor" id="FtrlOptimizer.__init__"></a>
Construct a new FTRL optimizer.
@@ -475,7 +476,7 @@ Note that the real regularization coefficient of `|w|^2` for objective
function is `1 / lambda_2` if specifying `l2 = lambda_2` as argument when
using this function.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>learning_rate</b>: A float value or a constant float `Tensor`.
@@ -490,7 +491,7 @@ using this function.
* <b>name</b>: Optional name prefix for the operations created when applying
gradients. Defaults to "Ftrl".
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>ValueError</b>: if one of the arguments is invalid.
@@ -499,17 +500,17 @@ using this function.
- - -
-### class tf.train.RMSPropOptimizer <div class="md-anchor" id="RMSPropOptimizer">{#RMSPropOptimizer}</div>
+### class tf.train.RMSPropOptimizer <a class="md-anchor" id="RMSPropOptimizer"></a>
Optimizer that implements the RMSProp algorithm.
- - -
-#### tf.train.RMSPropOptimizer.__init__(learning_rate, decay, momentum=0.0, epsilon=1e-10, use_locking=False, name='RMSProp') {#RMSPropOptimizer.__init__}
+#### tf.train.RMSPropOptimizer.__init__(learning_rate, decay, momentum=0.0, epsilon=1e-10, use_locking=False, name='RMSProp') <a class="md-anchor" id="RMSPropOptimizer.__init__"></a>
Construct a new RMSProp optimizer.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>learning_rate</b>: A Tensor or a floating point value. The learning rate.
@@ -523,7 +524,7 @@ Construct a new RMSProp optimizer.
-## Gradient Computation <div class="md-anchor" id="AUTOGENERATED-gradient-computation">{#AUTOGENERATED-gradient-computation}</div>
+## Gradient Computation <a class="md-anchor" id="AUTOGENERATED-gradient-computation"></a>
TensorFlow provides functions to compute the derivatives for a given
TensorFlow computation graph, adding operations to the graph. The
@@ -533,7 +534,7 @@ functions below.
- - -
-### tf.gradients(ys, xs, grad_ys=None, name='gradients', colocate_gradients_with_ops=False, gate_gradients=False, aggregation_method=None) <div class="md-anchor" id="gradients">{#gradients}</div>
+### tf.gradients(ys, xs, grad_ys=None, name='gradients', colocate_gradients_with_ops=False, gate_gradients=False, aggregation_method=None) <a class="md-anchor" id="gradients"></a>
Constructs symbolic partial derivatives of `ys` w.r.t. x in `xs`.
@@ -554,7 +555,7 @@ derivatives using a different initial gradient for each y (e.g., if
one wanted to weight the gradient differently for each value in
each y).
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>ys</b>: A `Tensor` or list of tensors to be differentiated.
@@ -570,11 +571,11 @@ each y).
* <b>aggregation_method</b>: Specifies the method used to combine gradient terms.
Accepted values are constants defined in the class `AggregationMethod`.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A list of `sum(dy/dx)` for each x in `xs`.
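A back-of-the-envelope way to picture what that return value means (a numeric sketch, not the symbolic graph construction TensorFlow performs): for each `x`, the result is the sum of `dy/dx` over all `ys`, which a central finite difference can approximate:

```python
def numeric_grad(f, x, eps=1e-6):
    # Central finite difference of a scalar function f at x.
    return (f(x + eps) - f(x - eps)) / (2 * eps)

# Two "ys" depending on one "x": y1 = x**2 and y2 = 3*x.
# gradients([y1, y2], x) conceptually yields sum(dy/dx) = 2*x + 3.
g = numeric_grad(lambda x: x ** 2 + 3 * x, 2.0)
# g is approximately 2*2 + 3 = 7
```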
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>LookupError</b>: if one of the operations between `x` and `y` does not
@@ -584,7 +585,7 @@ each y).
- - -
-### class tf.AggregationMethod <div class="md-anchor" id="AggregationMethod">{#AggregationMethod}</div>
+### class tf.AggregationMethod <a class="md-anchor" id="AggregationMethod"></a>
A class listing aggregation methods used to combine gradients.
@@ -600,7 +601,7 @@ be used to combine gradients in the graph:
- - -
-### tf.stop_gradient(input, name=None) <div class="md-anchor" id="stop_gradient">{#stop_gradient}</div>
+### tf.stop_gradient(input, name=None) <a class="md-anchor" id="stop_gradient"></a>
Stops gradient computation.
@@ -624,20 +625,20 @@ to pretend that the value was a constant. Some examples include:
* Adversarial training, where no backprop should happen through the adversarial
example generation process.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>input</b>: A `Tensor`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Tensor`. Has the same type as `input`.
-## Gradient Clipping <div class="md-anchor" id="AUTOGENERATED-gradient-clipping">{#AUTOGENERATED-gradient-clipping}</div>
+## Gradient Clipping <a class="md-anchor" id="AUTOGENERATED-gradient-clipping"></a>
TensorFlow provides several operations that you can use to add clipping
functions to your graph. You can use these functions to perform general data
@@ -646,7 +647,7 @@ gradients.
- - -
-### tf.clip_by_value(t, clip_value_min, clip_value_max, name=None) <div class="md-anchor" id="clip_by_value">{#clip_by_value}</div>
+### tf.clip_by_value(t, clip_value_min, clip_value_max, name=None) <a class="md-anchor" id="clip_by_value"></a>
Clips tensor values to a specified min and max.
@@ -655,7 +656,7 @@ shape as `t` with its values clipped to `clip_value_min` and `clip_value_max`.
Any values less than `clip_value_min` are set to `clip_value_min`. Any values
greater than `clip_value_max` are set to `clip_value_max`.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>t</b>: A `Tensor`.
@@ -663,14 +664,14 @@ greater than `clip_value_max` are set to `clip_value_max`.
* <b>clip_value_max</b>: A 0-D (scalar) `Tensor`. The maximum value to clip by.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A clipped `Tensor`.
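Element-wise, the operation described above amounts to the following (a minimal pure-Python sketch over a flat list, not the tensor implementation):

```python
def clip_by_value(t, clip_value_min, clip_value_max):
    # Values below the min are raised to it; values above the max
    # are lowered to it; everything else passes through unchanged.
    return [min(max(x, clip_value_min), clip_value_max) for x in t]

clipped = clip_by_value([-2.0, 0.5, 3.0], 0.0, 1.0)
# clipped == [0.0, 0.5, 1.0]
```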
- - -
-### tf.clip_by_norm(t, clip_norm, name=None) <div class="md-anchor" id="clip_by_norm">{#clip_by_norm}</div>
+### tf.clip_by_norm(t, clip_norm, name=None) <a class="md-anchor" id="clip_by_norm"></a>
Clips tensor values to a maximum L2-norm.
@@ -688,21 +689,21 @@ In this case, the L2-norm of the output tensor is `clip_norm`.
This operation is typically used to clip gradients before applying them with
an optimizer.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>t</b>: A `Tensor`.
* <b>clip_norm</b>: A 0-D (scalar) `Tensor` > 0. A maximum clipping value.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A clipped `Tensor`.
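The norm clipping above rescales the whole tensor by `clip_norm / l2norm(t)` only when its L2-norm exceeds `clip_norm`; a minimal sketch over a flat list:

```python
import math

def clip_by_norm(t, clip_norm):
    # Scale t down so its L2-norm is at most clip_norm; leave it
    # unchanged when the norm is already within bounds.
    l2norm = math.sqrt(sum(x * x for x in t))
    if l2norm <= clip_norm:
        return t
    return [x * clip_norm / l2norm for x in t]

clipped = clip_by_norm([3.0, 4.0], 1.0)  # original norm is 5.0
# the resulting vector has L2-norm 1.0 and the same direction
```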
- - -
-### tf.clip_by_average_norm(t, clip_norm, name=None) <div class="md-anchor" id="clip_by_average_norm">{#clip_by_average_norm}</div>
+### tf.clip_by_average_norm(t, clip_norm, name=None) <a class="md-anchor" id="clip_by_average_norm"></a>
Clips tensor values to a maximum average L2-norm.
@@ -720,21 +721,21 @@ In this case, the average L2-norm of the output tensor is `clip_norm`.
This operation is typically used to clip gradients before applying them with
an optimizer.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>t</b>: A `Tensor`.
* <b>clip_norm</b>: A 0-D (scalar) `Tensor` > 0. A maximum clipping value.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A clipped `Tensor`.
- - -
-### tf.clip_by_global_norm(t_list, clip_norm, use_norm=None, name=None) <div class="md-anchor" id="clip_by_global_norm">{#clip_by_global_norm}</div>
+### tf.clip_by_global_norm(t_list, clip_norm, use_norm=None, name=None) <a class="md-anchor" id="clip_by_global_norm"></a>
Clips values of multiple tensors by the ratio of the sum of their norms.
@@ -764,7 +765,7 @@ Recurrent Neural Networks". http://arxiv.org/abs/1211.5063)
However, it is slower than `clip_by_norm()` because all the parameters must be
ready before the clipping operation can be performed.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>t_list</b>: A tuple or list of mixed `Tensors`, `IndexedSlices`, or None.
@@ -773,13 +774,13 @@ ready before the clipping operation can be performed.
norm to use. If not provided, `global_norm()` is used to compute the norm.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
* <b>list_clipped</b>: A list of `Tensors` of the same type as `t_list`.
* <b>global_norm</b>: A 0-D (scalar) `Tensor` representing the global norm.
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>TypeError</b>: If `t_list` is not a sequence.
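Unlike per-tensor `clip_by_norm()`, every tensor here shares one scale factor, `clip_norm / max(global_norm, clip_norm)`, so their relative magnitudes are preserved. A minimal sketch of that scaling (lists of floats standing in for tensors):

```python
import math

def clip_by_global_norm(t_list, clip_norm):
    # One shared scale factor keeps the ratios between tensors intact.
    global_norm = math.sqrt(sum(x * x for t in t_list for x in t))
    scale = clip_norm / max(global_norm, clip_norm)
    return [[x * scale for x in t] for t in t_list], global_norm

clipped, gnorm = clip_by_global_norm([[3.0], [4.0]], 1.0)
# gnorm == 5.0; both entries are scaled by the same factor 0.2
```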
@@ -787,7 +788,7 @@ ready before the clipping operation can be performed.
- - -
-### tf.global_norm(t_list, name=None) <div class="md-anchor" id="global_norm">{#global_norm}</div>
+### tf.global_norm(t_list, name=None) <a class="md-anchor" id="global_norm"></a>
Computes the global norm of multiple tensors.
@@ -799,27 +800,27 @@ computed as:
Any entries in `t_list` that are of type None are ignored.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>t_list</b>: A tuple or list of mixed `Tensors`, `IndexedSlices`, or None.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A 0-D (scalar) `Tensor` of type `float`.
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>TypeError</b>: If `t_list` is not a sequence.
-## Decaying the learning rate <div class="md-anchor" id="AUTOGENERATED-decaying-the-learning-rate">{#AUTOGENERATED-decaying-the-learning-rate}</div>
+## Decaying the learning rate <a class="md-anchor" id="AUTOGENERATED-decaying-the-learning-rate"></a>
- - -
-### tf.train.exponential_decay(learning_rate, global_step, decay_steps, decay_rate, staircase=False, name=None) <div class="md-anchor" id="exponential_decay">{#exponential_decay}</div>
+### tf.train.exponential_decay(learning_rate, global_step, decay_steps, decay_rate, staircase=False, name=None) <a class="md-anchor" id="exponential_decay"></a>
Applies exponential decay to the learning rate.
@@ -852,7 +853,7 @@ optimizer = tf.GradientDescent(learning_rate)
optimizer.minimize(...my loss..., global_step=global_step)
```
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>learning_rate</b>: A scalar `float32` or `float64` `Tensor` or a
@@ -866,14 +867,14 @@ optimizer.minimize(...my loss..., global_step=global_step)
* <b>staircase</b>: Boolean. If `True`, decay the learning rate at discrete intervals.
* <b>name</b>: string. Optional name of the operation. Defaults to 'ExponentialDecay'.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A scalar `Tensor` of the same type as `learning_rate`. The decayed
learning rate.
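The decay schedule can be computed by hand (a plain-Python sketch of the documented formula `decayed_lr = learning_rate * decay_rate ^ (global_step / decay_steps)`, with `staircase` switching to an integer exponent):

```python
def exponential_decay(learning_rate, global_step, decay_steps,
                      decay_rate, staircase=False):
    # staircase=True truncates the exponent to an integer, so the
    # rate drops in discrete steps instead of decaying continuously.
    if staircase:
        exponent = global_step // decay_steps
    else:
        exponent = global_step / float(decay_steps)
    return learning_rate * decay_rate ** exponent

lr = exponential_decay(0.1, global_step=100000, decay_steps=100000,
                       decay_rate=0.96)
# after one full decay period the rate is 0.1 * 0.96
```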
-## Moving Averages <div class="md-anchor" id="AUTOGENERATED-moving-averages">{#AUTOGENERATED-moving-averages}</div>
+## Moving Averages <a class="md-anchor" id="AUTOGENERATED-moving-averages"></a>
Some training algorithms, such as GradientDescent and Momentum, often benefit
from maintaining a moving average of variables during optimization. Using the
@@ -881,7 +882,7 @@ moving averages for evaluations often improve results significantly.
- - -
-### class tf.train.ExponentialMovingAverage <div class="md-anchor" id="ExponentialMovingAverage">{#ExponentialMovingAverage}</div>
+### class tf.train.ExponentialMovingAverage <a class="md-anchor" id="ExponentialMovingAverage"></a>
Maintains moving averages of variables by employing an exponential decay.
@@ -965,7 +966,7 @@ saver.restore(...checkpoint filename...)
- - -
-#### tf.train.ExponentialMovingAverage.__init__(decay, num_updates=None, name='ExponentialMovingAverage') {#ExponentialMovingAverage.__init__}
+#### tf.train.ExponentialMovingAverage.__init__(decay, num_updates=None, name='ExponentialMovingAverage') <a class="md-anchor" id="ExponentialMovingAverage.__init__"></a>
Creates a new ExponentialMovingAverage object.
@@ -980,7 +981,7 @@ move faster. If passed, the actual decay rate used is:
`min(decay, (1 + num_updates) / (10 + num_updates))`
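Numerically, the formula above makes the effective decay small early in training, so the shadow variables track recent values closely at first. A sketch of that decay choice plus one shadow-variable update (`shadow = decay * shadow + (1 - decay) * value`, a standard EMA step used here as an illustration):

```python
def ema_decay(decay, num_updates=None):
    # Effective decay when num_updates is passed, per the formula above.
    if num_updates is None:
        return decay
    return min(decay, (1.0 + num_updates) / (10.0 + num_updates))

def ema_update(shadow, value, decay):
    # One exponential-moving-average step: the shadow variable moves
    # toward the current value by a fraction (1 - decay).
    return decay * shadow + (1.0 - decay) * value

d = ema_decay(0.999, num_updates=0)  # 1/10 early in training
s = ema_update(0.0, 5.0, d)          # shadow pulled strongly toward 5.0
```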
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>decay</b>: Float. The decay to use.
@@ -991,7 +992,7 @@ move faster. If passed, the actual decay rate used is:
- - -
-#### tf.train.ExponentialMovingAverage.apply(var_list=None) {#ExponentialMovingAverage.apply}
+#### tf.train.ExponentialMovingAverage.apply(var_list=None) <a class="md-anchor" id="ExponentialMovingAverage.apply"></a>
Maintains moving averages of variables.
@@ -1009,17 +1010,17 @@ Returns an op that updates all shadow variables as described above.
Note that `apply()` can be called multiple times with different lists of
variables.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>var_list</b>: A list of Variable or Tensor objects. The variables
and Tensors must be of types float32 or float64.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
An Operation that updates the moving averages.
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>TypeError</b>: If the arguments are not all float32 or float64.
@@ -1029,7 +1030,7 @@ variables.
- - -
-#### tf.train.ExponentialMovingAverage.average_name(var) {#ExponentialMovingAverage.average_name}
+#### tf.train.ExponentialMovingAverage.average_name(var) <a class="md-anchor" id="ExponentialMovingAverage.average_name"></a>
Returns the name of the `Variable` holding the average for `var`.
@@ -1044,12 +1045,12 @@ to restore the variable from the moving average value with:
`average_name()` can be called whether or not `apply()` has been called.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>var</b>: A `Variable` object.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A string: the name of the variable that will be used or was used
by the `ExponentialMovingAverage` class to hold the moving average of
@@ -1058,16 +1059,16 @@ to restore the variable from the moving average value with:
- - -
-#### tf.train.ExponentialMovingAverage.average(var) {#ExponentialMovingAverage.average}
+#### tf.train.ExponentialMovingAverage.average(var) <a class="md-anchor" id="ExponentialMovingAverage.average"></a>
Returns the `Variable` holding the average of `var`.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>var</b>: A `Variable` object.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A `Variable` object or `None` if the moving average of `var`
is not maintained.
@@ -1075,7 +1076,7 @@ Returns the `Variable` holding the average of `var`.
-## Coordinator and QueueRunner <div class="md-anchor" id="AUTOGENERATED-coordinator-and-queuerunner">{#AUTOGENERATED-coordinator-and-queuerunner}</div>
+## Coordinator and QueueRunner <a class="md-anchor" id="AUTOGENERATED-coordinator-and-queuerunner"></a>
See [Threading and Queues](../../how_tos/threading_and_queues/index.md)
for how to use threads and queues. For documentation on the Queue API,
@@ -1083,14 +1084,14 @@ see [Queues](../../api_docs/python/io_ops.md#queues).
- - -
-### class tf.train.Coordinator <div class="md-anchor" id="Coordinator">{#Coordinator}</div>
+### class tf.train.Coordinator <a class="md-anchor" id="Coordinator"></a>
A coordinator for threads.
This class implements a simple mechanism to coordinate the termination of a
set of threads.
-#### Usage:
+#### Usage: <a class="md-anchor" id="AUTOGENERATED-usage-"></a>
```python
# Create a coordinator.
@@ -1114,7 +1115,7 @@ while not coord.should_stop():
...do some work...
```
-#### Exception handling:
+#### Exception handling: <a class="md-anchor" id="AUTOGENERATED-exception-handling-"></a>
A thread can report an exception to the Coordinator as part of the
`should_stop()` call. The exception will be re-raised from the
@@ -1145,7 +1146,7 @@ except Exception, e:
...exception that was passed to coord.request_stop()
```
-#### Grace period for stopping:
+#### Grace period for stopping: <a class="md-anchor" id="AUTOGENERATED-grace-period-for-stopping-"></a>
After a thread has called `coord.request_stop()`, the other threads have a
fixed time to stop; this is called the 'stop grace period' and defaults to 2
@@ -1169,14 +1170,14 @@ except Exception:
```
- - -
-#### tf.train.Coordinator.__init__() {#Coordinator.__init__}
+#### tf.train.Coordinator.__init__() <a class="md-anchor" id="Coordinator.__init__"></a>
Create a new Coordinator.
- - -
-#### tf.train.Coordinator.join(threads, stop_grace_period_secs=120) {#Coordinator.join}
+#### tf.train.Coordinator.join(threads, stop_grace_period_secs=120) <a class="md-anchor" id="Coordinator.join"></a>
Wait for threads to terminate.
@@ -1191,14 +1192,14 @@ alive after that period expires, a RuntimeError is raised. Note that if
an 'exc_info' was passed to request_stop() then it is raised instead of
that RuntimeError.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>threads</b>: List of `threading.Thread` objects. The started threads to join.
* <b>stop_grace_period_secs</b>: Number of seconds given to threads to stop after
request_stop() has been called.
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>RuntimeError</b>: If any thread is still alive after request_stop()
@@ -1207,13 +1208,13 @@ that RuntimeError.
- - -
-#### tf.train.Coordinator.request_stop(ex=None) {#Coordinator.request_stop}
+#### tf.train.Coordinator.request_stop(ex=None) <a class="md-anchor" id="Coordinator.request_stop"></a>
Request that the threads stop.
After this is called, calls to should_stop() will return True.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>ex</b>: Optional Exception, or Python 'exc_info' tuple as returned by
@@ -1223,28 +1224,28 @@ After this is called, calls to should_stop() will return True.
- - -
-#### tf.train.Coordinator.should_stop() {#Coordinator.should_stop}
+#### tf.train.Coordinator.should_stop() <a class="md-anchor" id="Coordinator.should_stop"></a>
Check if stop was requested.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
True if a stop was requested.
- - -
-#### tf.train.Coordinator.wait_for_stop(timeout=None) {#Coordinator.wait_for_stop}
+#### tf.train.Coordinator.wait_for_stop(timeout=None) <a class="md-anchor" id="Coordinator.wait_for_stop"></a>
Wait until the Coordinator is told to stop.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>timeout</b>: float. Sleep for up to that many seconds waiting for
should_stop() to become True.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
True if the Coordinator is told to stop, False if the timeout expired.
@@ -1252,7 +1253,7 @@ Wait till the Coordinator is told to stop.
- - -
-### class tf.train.QueueRunner <div class="md-anchor" id="QueueRunner">{#QueueRunner}</div>
+### class tf.train.QueueRunner <a class="md-anchor" id="QueueRunner"></a>
Holds a list of enqueue operations for a queue, each to be run in a thread.
@@ -1270,7 +1271,7 @@ and reporting exceptions, etc.
The `QueueRunner`, combined with the `Coordinator`, helps handle these issues.
- - -
-#### tf.train.QueueRunner.__init__(queue, enqueue_ops) {#QueueRunner.__init__}
+#### tf.train.QueueRunner.__init__(queue, enqueue_ops) <a class="md-anchor" id="QueueRunner.__init__"></a>
Create a QueueRunner.
@@ -1283,7 +1284,7 @@ enqueue op in parallel with the other threads. The enqueue ops do not have
to all be the same op, but it is expected that they all enqueue tensors in
`queue`.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>queue</b>: A `Queue`.
@@ -1292,7 +1293,7 @@ to all be the same op, but it is expected that they all enqueue tensors in
- - -
-#### tf.train.QueueRunner.create_threads(sess, coord=None, daemon=False, start=False) {#QueueRunner.create_threads}
+#### tf.train.QueueRunner.create_threads(sess, coord=None, daemon=False, start=False) <a class="md-anchor" id="QueueRunner.create_threads"></a>
Create threads to run the enqueue ops.
@@ -1308,7 +1309,7 @@ coordinator requests a stop.
This method may be called again as long as all threads from a previous call
have stopped.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>sess</b>: A `Session`.
@@ -1318,11 +1319,11 @@ have stopped.
* <b>start</b>: Boolean. If `True` starts the threads. If `False` the
caller must call the `start()` method of the returned threads.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A list of threads.
-##### Raises:
+##### Raises: <a class="md-anchor" id="AUTOGENERATED-raises-"></a>
* <b>RuntimeError</b>: If threads from a previous call to `create_threads()` are
@@ -1331,7 +1332,7 @@ have stopped.
- - -
-#### tf.train.QueueRunner.exceptions_raised {#QueueRunner.exceptions_raised}
+#### tf.train.QueueRunner.exceptions_raised <a class="md-anchor" id="QueueRunner.exceptions_raised"></a>
Exceptions raised but not handled by the `QueueRunner` threads.
@@ -1344,7 +1345,7 @@ depending on whether or not a `Coordinator` was passed to
* Without a `Coordinator`, exceptions are captured by the `QueueRunner` and
made available in this `exceptions_raised` property.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A list of Python `Exception` objects. The list is empty if no exception
was captured. (No exceptions are captured when using a Coordinator.)
@@ -1352,7 +1353,7 @@ depending on whether or not a `Coordinator` was passed to
- - -
-### tf.train.add_queue_runner(qr, collection='queue_runners') <div class="md-anchor" id="add_queue_runner">{#add_queue_runner}</div>
+### tf.train.add_queue_runner(qr, collection='queue_runners') <a class="md-anchor" id="add_queue_runner"></a>
Adds a `QueueRunner` to a collection in the graph.
@@ -1363,7 +1364,7 @@ allows you to add a queue runner to a well known collection in the graph.
The companion method `start_queue_runners()` can be used to start threads for
all the collected queue runners.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>qr</b>: A `QueueRunner`.
@@ -1373,7 +1374,7 @@ all the collected queue runners.
- - -
-### tf.train.start_queue_runners(sess=None, coord=None, daemon=True, start=True, collection='queue_runners') <div class="md-anchor" id="start_queue_runners">{#start_queue_runners}</div>
+### tf.train.start_queue_runners(sess=None, coord=None, daemon=True, start=True, collection='queue_runners') <a class="md-anchor" id="start_queue_runners"></a>
Starts all queue runners collected in the graph.
@@ -1381,7 +1382,7 @@ This is a companion method to `add_queue_runner()`. It just starts
threads for all queue runners collected in the graph. It returns
the list of all threads.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>sess</b>: `Session` used to run the queue ops. Defaults to the
@@ -1393,13 +1394,13 @@ the list of all threads.
* <b>collection</b>: A `GraphKey` specifying the graph collection to
get the queue runners from. Defaults to `GraphKeys.QUEUE_RUNNERS`.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A list of threads.
-## Summary Operations <div class="md-anchor" id="AUTOGENERATED-summary-operations">{#AUTOGENERATED-summary-operations}</div>
+## Summary Operations <a class="md-anchor" id="AUTOGENERATED-summary-operations"></a>
The following ops output
[`Summary`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/framework/summary.proto)
@@ -1417,14 +1418,14 @@ details.
- - -
-### tf.scalar_summary(tags, values, collections=None, name=None) <div class="md-anchor" id="scalar_summary">{#scalar_summary}</div>
+### tf.scalar_summary(tags, values, collections=None, name=None) <a class="md-anchor" id="scalar_summary"></a>
Outputs a `Summary` protocol buffer with scalar values.
The input `tags` and `values` must have the same shape. The generated
summary has a summary value for each tag-value pair in `tags` and `values`.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>tags</b>: A 1-D `string` `Tensor`. Tags for the summaries.
@@ -1433,7 +1434,7 @@ summary has a summary value for each tag-value pair in `tags` and `values`.
added to these collections. Defaults to `[GraphKeys.SUMMARIES]`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A scalar `Tensor` of type `string`. The serialized `Summary` protocol
buffer.
@@ -1441,7 +1442,7 @@ summary has a summary value for each tag-value pair in `tags` and `values`.
- - -
-### tf.image_summary(tag, tensor, max_images=None, collections=None, name=None) <div class="md-anchor" id="image_summary">{#image_summary}</div>
+### tf.image_summary(tag, tensor, max_images=None, collections=None, name=None) <a class="md-anchor" id="image_summary"></a>
Outputs a `Summary` protocol buffer with images.
@@ -1471,7 +1472,7 @@ build the `tag` of the summary values:
* If `max_images` is greater than 1, the summary value tags are
generated sequentially as '*tag*/image/0', '*tag*/image/1', etc.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>tag</b>: A scalar `Tensor` of type `string`. Used to build the `tag`
@@ -1483,7 +1484,7 @@ build the `tag` of the summary values:
summary to. Defaults to `[ops.GraphKeys.SUMMARIES]`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A scalar `Tensor` of type `string`. The serialized `Summary` protocol
buffer.
@@ -1491,7 +1492,7 @@ build the `tag` of the summary values:
- - -
-### tf.histogram_summary(tag, values, collections=None, name=None) <div class="md-anchor" id="histogram_summary">{#histogram_summary}</div>
+### tf.histogram_summary(tag, values, collections=None, name=None) <a class="md-anchor" id="histogram_summary"></a>
Outputs a `Summary` protocol buffer with a histogram.
@@ -1501,7 +1502,7 @@ has one summary value containing a histogram for `values`.
This op reports an `OutOfRange` error if any value is not finite.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>tag</b>: A `string` `Tensor`. 0-D. Tag to use for the summary value.
@@ -1511,7 +1512,7 @@ This op reports an `OutOfRange` error if any value is not finite.
added to these collections. Defaults to `[GraphKeys.SUMMARIES]`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A scalar `Tensor` of type `string`. The serialized `Summary` protocol
buffer.
@@ -1519,7 +1520,7 @@ This op reports an `OutOfRange` error if any value is not finite.
- - -
-### tf.nn.zero_fraction(value, name=None) <div class="md-anchor" id="zero_fraction">{#zero_fraction}</div>
+### tf.nn.zero_fraction(value, name=None) <a class="md-anchor" id="zero_fraction"></a>
Returns the fraction of zeros in `value`.
@@ -1530,13 +1531,13 @@ This is useful in summaries to measure and report sparsity. For example,
z = tf.nn.relu(...)
summ = tf.scalar_summary('sparsity', tf.nn.zero_fraction(z))
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>value</b>: A tensor of numeric type.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
The fraction of zeros in `value`, with type `float32`.
@@ -1544,7 +1545,7 @@ This is useful in summaries to measure and report sparsity. For example,
- - -
-### tf.merge_summary(inputs, collections=None, name=None) <div class="md-anchor" id="merge_summary">{#merge_summary}</div>
+### tf.merge_summary(inputs, collections=None, name=None) <a class="md-anchor" id="merge_summary"></a>
Merges summaries.
@@ -1556,7 +1557,7 @@ summaries.
When the Op is run, it reports an `InvalidArgument` error if multiple values
in the summaries to merge use the same tag.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>inputs</b>: A list of `string` `Tensor` objects containing serialized `Summary`
@@ -1565,7 +1566,7 @@ in the summaries to merge use the same tag.
added to these collections. Defaults to `[GraphKeys.SUMMARIES]`.
* <b>name</b>: A name for the operation (optional).
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
A scalar `Tensor` of type `string`. The serialized `Summary` protocol
buffer resulting from the merging.
@@ -1573,17 +1574,17 @@ in the summaries to merge use the same tag.
- - -
-### tf.merge_all_summaries(key='summaries') <div class="md-anchor" id="merge_all_summaries">{#merge_all_summaries}</div>
+### tf.merge_all_summaries(key='summaries') <a class="md-anchor" id="merge_all_summaries"></a>
Merges all summaries collected in the default graph.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>key</b>: `GraphKey` used to collect the summaries. Defaults to
`GraphKeys.SUMMARIES`.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
If no summaries were collected, returns None. Otherwise returns a scalar
`Tensor` of type `string` containing the serialized `Summary` protocol
@@ -1591,7 +1592,7 @@ Merges all summaries collected in the default graph.
-## Adding Summaries to Event Files <div class="md-anchor" id="AUTOGENERATED-adding-summaries-to-event-files">{#AUTOGENERATED-adding-summaries-to-event-files}</div>
+## Adding Summaries to Event Files <a class="md-anchor" id="AUTOGENERATED-adding-summaries-to-event-files"></a>
See [Summaries and
TensorBoard](../../how_tos/summaries_and_tensorboard/index.md) for an
@@ -1599,7 +1600,7 @@ overview of summaries, event files, and visualization in TensorBoard.
- - -
-### class tf.train.SummaryWriter <div class="md-anchor" id="SummaryWriter">{#SummaryWriter}</div>
+### class tf.train.SummaryWriter <a class="md-anchor" id="SummaryWriter"></a>
Writes `Summary` protocol buffers to event files.
@@ -1611,7 +1612,7 @@ training.
- - -
-#### tf.train.SummaryWriter.__init__(logdir, graph_def=None, max_queue=10, flush_secs=120) {#SummaryWriter.__init__}
+#### tf.train.SummaryWriter.__init__(logdir, graph_def=None, max_queue=10, flush_secs=120) <a class="md-anchor" id="SummaryWriter.__init__"></a>
Creates a `SummaryWriter` and an event file.
@@ -1643,7 +1644,7 @@ the event file:
* `max_queue`: Maximum number of summaries or events pending to be
written to disk before one of the 'add' calls blocks.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>logdir</b>: A string. Directory where event file will be written.
@@ -1656,7 +1657,7 @@ the event file:
- - -
-#### tf.train.SummaryWriter.add_summary(summary, global_step=None) {#SummaryWriter.add_summary}
+#### tf.train.SummaryWriter.add_summary(summary, global_step=None) <a class="md-anchor" id="SummaryWriter.add_summary"></a>
Adds a `Summary` protocol buffer to the event file.
@@ -1668,7 +1669,7 @@ can also pass a `Summary` protocol buffer that you manufacture with your
own data. This is commonly done to report evaluation results in event
files.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>summary</b>: A `Summary` protocol buffer, optionally serialized as a string.
@@ -1678,11 +1679,11 @@ files.
- - -
-#### tf.train.SummaryWriter.add_event(event) {#SummaryWriter.add_event}
+#### tf.train.SummaryWriter.add_event(event) <a class="md-anchor" id="SummaryWriter.add_event"></a>
Adds an event to the event file.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>event</b>: An `Event` protocol buffer.
@@ -1690,14 +1691,14 @@ Adds an event to the event file.
- - -
-#### tf.train.SummaryWriter.add_graph(graph_def, global_step=None) {#SummaryWriter.add_graph}
+#### tf.train.SummaryWriter.add_graph(graph_def, global_step=None) <a class="md-anchor" id="SummaryWriter.add_graph"></a>
Adds a `GraphDef` protocol buffer to the event file.
The graph described by the protocol buffer will be displayed by
TensorBoard. Most users pass a graph in the constructor instead.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>graph_def</b>: A `GraphDef` protocol buffer.
@@ -1708,7 +1709,7 @@ TensorBoard. Most users pass a graph in the constructor instead.
- - -
-#### tf.train.SummaryWriter.flush() {#SummaryWriter.flush}
+#### tf.train.SummaryWriter.flush() <a class="md-anchor" id="SummaryWriter.flush"></a>
Flushes the event file to disk.
@@ -1718,7 +1719,7 @@ disk.
- - -
-#### tf.train.SummaryWriter.close() {#SummaryWriter.close}
+#### tf.train.SummaryWriter.close() <a class="md-anchor" id="SummaryWriter.close"></a>
Flushes the event file to disk and closes the file.
@@ -1728,7 +1729,7 @@ Call this method when you do not need the summary writer anymore.
- - -
-### tf.train.summary_iterator(path) <div class="md-anchor" id="summary_iterator">{#summary_iterator}</div>
+### tf.train.summary_iterator(path) <a class="md-anchor" id="summary_iterator"></a>
An iterator for reading `Event` protocol buffers from an event file.
@@ -1761,22 +1762,22 @@ and
[Summary](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/framework/summary.proto)
for more information about their attributes.
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>path</b>: The path to an event file created by a `SummaryWriter`.
-##### Yields:
+##### Yields: <a class="md-anchor" id="AUTOGENERATED-yields-"></a>
`Event` protocol buffers.
-## Training utilities <div class="md-anchor" id="AUTOGENERATED-training-utilities">{#AUTOGENERATED-training-utilities}</div>
+## Training utilities <a class="md-anchor" id="AUTOGENERATED-training-utilities"></a>
- - -
-### tf.train.global_step(sess, global_step_tensor) <div class="md-anchor" id="global_step">{#global_step}</div>
+### tf.train.global_step(sess, global_step_tensor) <a class="md-anchor" id="global_step"></a>
Small helper to get the global step.
@@ -1792,21 +1793,21 @@ print 'global_step:', tf.train.global_step(sess, global_step_tensor)
global_step: 10
```
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>sess</b>: A TensorFlow `Session` object.
* <b>global_step_tensor</b>: `Tensor` or the `name` of the operation that contains
the global step.
-##### Returns:
+##### Returns: <a class="md-anchor" id="AUTOGENERATED-returns-"></a>
The global step value.
- - -
-### tf.train.write_graph(graph_def, logdir, name, as_text=True) <div class="md-anchor" id="write_graph">{#write_graph}</div>
+### tf.train.write_graph(graph_def, logdir, name, as_text=True) <a class="md-anchor" id="write_graph"></a>
Writes a graph proto to disk.
@@ -1818,7 +1819,7 @@ sess = tf.Session()
tf.train.write_graph(sess.graph_def, '/tmp/my-model', 'train.pbtxt')
```
-##### Args:
+##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>
* <b>graph_def</b>: A `GraphDef` protocol buffer.
diff --git a/tensorflow/g3doc/get_started/basic_usage.md b/tensorflow/g3doc/get_started/basic_usage.md
index 359798d22b..a41ec36d56 100644
--- a/tensorflow/g3doc/get_started/basic_usage.md
+++ b/tensorflow/g3doc/get_started/basic_usage.md
@@ -1,4 +1,4 @@
-# Basic Usage
+# Basic Usage <a class="md-anchor" id="AUTOGENERATED-basic-usage"></a>
To use TensorFlow you need to understand how TensorFlow:
@@ -8,7 +8,7 @@ To use TensorFlow you need to understand how TensorFlow:
* Maintains state with `Variables`.
* Uses feeds and fetches to get data into and out of arbitrary operations.
-## Overview
+## Overview <a class="md-anchor" id="AUTOGENERATED-overview"></a>
TensorFlow is a programming system in which you represent computations as
graphs. Nodes in the graph are called *ops* (short for operations). An op
@@ -24,7 +24,7 @@ methods return tensors produced by ops as [numpy](http://www.numpy.org)
`ndarray` objects in Python, and as `tensorflow::Tensor` instances in C and
C++.
-## The computation graph
+## The computation graph <a class="md-anchor" id="AUTOGENERATED-the-computation-graph"></a>
TensorFlow programs are usually structured into a construction phase, which
assembles a graph, and an execution phase that uses a session to execute ops in
@@ -40,7 +40,7 @@ of helper functions not available in the C and C++ libraries.
The session libraries have equivalent functionalities for the three languages.
-### Building the graph
+### Building the graph <a class="md-anchor" id="AUTOGENERATED-building-the-graph"></a>
To build a graph, start with ops that do not need any input (source ops), such as
`Constant`, and pass their output to other ops that do computation.
@@ -77,7 +77,7 @@ The default graph now has three nodes: two `constant()` ops and one `matmul()`
op. To actually multiply the matrices and get the result of the multiplication,
you must launch the graph in a session.
-### Launching the graph in a session
+### Launching the graph in a session <a class="md-anchor" id="AUTOGENERATED-launching-the-graph-in-a-session"></a>
Launching follows construction. To launch a graph, create a `Session` object.
Without arguments the session constructor launches the default graph.
@@ -146,7 +146,7 @@ Devices are specified with strings. The currently supported devices are:
See [Using GPUs](../how_tos/using_gpu/index.md) for more information about GPUs
and TensorFlow.
-## Interactive Usage
+## Interactive Usage <a class="md-anchor" id="AUTOGENERATED-interactive-usage"></a>
The Python examples in the documentation launch the graph with a
[`Session`](../api_docs/python/client.md#Session) and use the
@@ -177,7 +177,7 @@ print sub.eval()
# ==> [-2. -1.]
```
-## Tensors
+## Tensors <a class="md-anchor" id="AUTOGENERATED-tensors"></a>
TensorFlow programs use a tensor data structure to represent all data -- only
tensors are passed between operations in the computation graph. You can think
static type, a rank, and a shape. To learn more about how TensorFlow handles
these concepts, see the [Rank, Shape, and Type](../resources/dims_types.md)
reference.
-## Variables
+## Variables <a class="md-anchor" id="AUTOGENERATED-variables"></a>
Variables maintain state across executions of the graph. The following example
shows a variable serving as a simple counter. See
@@ -235,7 +235,7 @@ Variables. For example, you would store the weights for a neural network as a
tensor in a Variable. During training you update this tensor by running a
training graph repeatedly.
-## Fetches
+## Fetches <a class="md-anchor" id="AUTOGENERATED-fetches"></a>
To fetch the outputs of operations, execute the graph with a `run()` call on
the `Session` object and pass in the tensors to retrieve. In the previous
@@ -260,7 +260,7 @@ with tf.Session():
All the ops needed to produce the values of the requested tensors are run once
(not once per requested tensor).
-## Feeds
+## Feeds <a class="md-anchor" id="AUTOGENERATED-feeds"></a>
The examples above introduce tensors into the computation graph by storing them
in `Constants` and `Variables`. TensorFlow also provides a feed mechanism for
diff --git a/tensorflow/g3doc/get_started/index.md b/tensorflow/g3doc/get_started/index.md
index bc48f11c18..f0222e818d 100644
--- a/tensorflow/g3doc/get_started/index.md
+++ b/tensorflow/g3doc/get_started/index.md
@@ -1,4 +1,4 @@
-# Introduction
+# Introduction <a class="md-anchor" id="AUTOGENERATED-introduction"></a>
Let's get you up and running with TensorFlow!
@@ -67,7 +67,7 @@ these and charge ahead. Don't worry, you'll still get to see MNIST -- we'll
also use MNIST as an example in our technical tutorial where we elaborate on
TensorFlow features.
-## Recommended Next Steps:
+## Recommended Next Steps: <a class="md-anchor" id="AUTOGENERATED-recommended-next-steps-"></a>
* [Download and Setup](os_setup.md)
* [Basic Usage](basic_usage.md)
* [TensorFlow Mechanics 101](../tutorials/mnist/tf/index.md)
diff --git a/tensorflow/g3doc/get_started/os_setup.md b/tensorflow/g3doc/get_started/os_setup.md
index bae56f8179..f6b6bb4015 100644
--- a/tensorflow/g3doc/get_started/os_setup.md
+++ b/tensorflow/g3doc/get_started/os_setup.md
@@ -1,8 +1,8 @@
-# Download and Setup
+# Download and Setup <a class="md-anchor" id="AUTOGENERATED-download-and-setup"></a>
-## Binary Installation
+## Binary Installation <a class="md-anchor" id="AUTOGENERATED-binary-installation"></a>
-### Ubuntu/Linux
+### Ubuntu/Linux <a class="md-anchor" id="AUTOGENERATED-ubuntu-linux"></a>
Make sure you have [pip](https://pypi.python.org/pypi/pip) installed:
@@ -20,7 +20,7 @@ $ sudo pip install https://storage.googleapis.com/tensorflow/linux/cpu/tensorflo
$ sudo pip install https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow-0.5.0-cp27-none-linux_x86_64.whl
```
-### Mac OS X
+### Mac OS X <a class="md-anchor" id="AUTOGENERATED-mac-os-x"></a>
Make sure you have [pip](https://pypi.python.org/pypi/pip) installed:
@@ -36,7 +36,7 @@ Install TensorFlow (only CPU binary version is currently available).
$ sudo pip install https://storage.googleapis.com/tensorflow/mac/tensorflow-0.5.0-py2-none-any.whl
```
-## Docker-based installation
+## Docker-based installation <a class="md-anchor" id="AUTOGENERATED-docker-based-installation"></a>
We also support running TensorFlow via [Docker](http://docker.com/), which lets
you avoid worrying about setting up dependencies.
@@ -51,7 +51,7 @@ $ docker run -it b.gcr.io/tensorflow/tensorflow
This will start a container with TensorFlow and all its dependencies already
installed.
-### Additional images
+### Additional images <a class="md-anchor" id="AUTOGENERATED-additional-images"></a>
The default Docker image above contains just a minimal set of libraries for
getting up and running with TensorFlow. We also have several other containers,
@@ -62,7 +62,7 @@ which you can use in the `docker run` command above:
makes it easy to experiment directly with the source, without needing to
install any of the dependencies described above.
-## Try your first TensorFlow program
+## Try your first TensorFlow program <a class="md-anchor" id="AUTOGENERATED-try-your-first-tensorflow-program"></a>
```sh
$ python
@@ -88,10 +88,9 @@ ImportError: libcudart.so.7.0: cannot open shared object file: No such file or d
you most likely need to set your `LD_LIBRARY_PATH` to point to the location of
your CUDA libraries.
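For example (assuming the toolkit lives under `/usr/local/cuda`; adjust the path for your installation):

```sh
# Hypothetical path -- point this at wherever the CUDA libraries
# actually live on your machine.
export LD_LIBRARY_PATH="/usr/local/cuda/lib64:$LD_LIBRARY_PATH"
```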
-<a name="source"></a>
-## Installing from sources
+## Installing from sources <a class="md-anchor" id="source"></a>
-### Clone the TensorFlow repository
+### Clone the TensorFlow repository <a class="md-anchor" id="AUTOGENERATED-clone-the-tensorflow-repository"></a>
```sh
$ git clone --recurse-submodules https://tensorflow.googlesource.com/tensorflow
@@ -100,9 +99,9 @@ $ git clone --recurse-submodules https://tensorflow.googlesource.com/tensorflow
`--recurse-submodules` is required to fetch the protobuf library that TensorFlow
depends on.
-### Installation for Linux
+### Installation for Linux <a class="md-anchor" id="AUTOGENERATED-installation-for-linux"></a>
-#### Install Bazel
+#### Install Bazel <a class="md-anchor" id="AUTOGENERATED-install-bazel"></a>
Follow instructions [here](http://bazel.io/docs/install.html) to install the
@@ -121,13 +120,13 @@ TensorFlow. `HEAD` may be unstable.
Add the executable `output/bazel` to your `$PATH` environment variable.
-#### Install other dependencies
+#### Install other dependencies <a class="md-anchor" id="AUTOGENERATED-install-other-dependencies"></a>
```sh
$ sudo apt-get install python-numpy swig python-dev
```
-#### Optional: Install CUDA (GPUs on Linux)
+#### Optional: Install CUDA (GPUs on Linux) <a class="md-anchor" id="AUTOGENERATED-optional--install-cuda--gpus-on-linux-"></a>
In order to build TensorFlow with GPU support, both Cuda Toolkit 7.0 and CUDNN
6.5 V2 from NVIDIA need to be installed.
@@ -139,13 +138,13 @@ TensorFlow GPU support requires having a GPU card with NVidia Compute Capability
* NVidia K20
* NVidia K40
-##### Download and install Cuda Toolkit 7.0
+##### Download and install Cuda Toolkit 7.0 <a class="md-anchor" id="AUTOGENERATED-download-and-install-cuda-toolkit-7.0"></a>
https://developer.nvidia.com/cuda-toolkit-70
Install the toolkit into e.g. `/usr/local/cuda`.
-##### Download and install CUDNN Toolkit 6.5
+##### Download and install CUDNN Toolkit 6.5 <a class="md-anchor" id="AUTOGENERATED-download-and-install-cudnn-toolkit-6.5"></a>
https://developer.nvidia.com/rdp/cudnn-archive
@@ -158,7 +157,7 @@ sudo cp cudnn-6.5-linux-x64-v2/cudnn.h /usr/local/cuda/include
sudo cp cudnn-6.5-linux-x64-v2/libcudnn* /usr/local/cuda/lib64
```
-##### Configure TensorFlow's canonical view of Cuda libraries
+##### Configure TensorFlow's canonical view of Cuda libraries <a class="md-anchor" id="AUTOGENERATED-configure-tensorflow-s-canonical-view-of-cuda-libraries"></a>
From the root of your source tree, run:
``` bash
@@ -183,7 +182,7 @@ This creates a canonical set of symbolic links to the Cuda libraries on your sys
Every time you change the Cuda library paths you need to run this step again before
you invoke the bazel build command.
-##### Build your target with GPU support.
+##### Build your target with GPU support. <a class="md-anchor" id="AUTOGENERATED-build-your-target-with-gpu-support."></a>
From the root of your source tree, run:
```sh
@@ -199,7 +198,7 @@ $ bazel-bin/tensorflow/cc/tutorials_example_trainer --use_gpu
Note that "--config=cuda" is needed to enable GPU support.
-##### Known issues
+##### Known issues <a class="md-anchor" id="AUTOGENERATED-known-issues"></a>
* Although it is possible to build both Cuda and non-Cuda configs under the same
source tree, we recommend running "bazel clean" when switching between these two
@@ -210,31 +209,30 @@ will fail with a clear error message. In the future, we might consider making
this more convenient by including the configure step in our build process,
given the necessary Bazel feature support.
-### Installation for Mac OS X
+### Installation for Mac OS X <a class="md-anchor" id="AUTOGENERATED-installation-for-mac-os-x"></a>
Mac OS X needs the same set of dependencies as Linux, but the process of
installing them is different. Here is a set of useful links to help with
installing the dependencies on Mac OS X:
-#### Bazel
+#### Bazel <a class="md-anchor" id="AUTOGENERATED-bazel"></a>
Look for installation instructions for Mac OS X on
[this](http://bazel.io/docs/install.html) page.
-#### SWIG
+#### SWIG <a class="md-anchor" id="AUTOGENERATED-swig"></a>
[Mac OS X installation](http://www.swig.org/Doc3.0/Preface.html#Preface_osx_installation).
Note: You need to install
[PCRE](ftp://ftp.csx.cam.ac.uk/pub/software/programming/pcre/) and *NOT* PCRE2.
-#### Numpy
+#### Numpy <a class="md-anchor" id="AUTOGENERATED-numpy"></a>
Follow installation instructions [here](http://docs.scipy.org/doc/numpy/user/install.html).
-<a name="create-pip"></a>
-### Create the pip package and install
+### Create the pip package and install <a class="md-anchor" id="create-pip"></a>
```sh
$ bazel build -c opt //tensorflow/tools/pip_package:build_pip_package
@@ -245,7 +243,7 @@ $ bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
$ pip install /tmp/tensorflow_pkg/tensorflow-0.5.0-cp27-none-linux_x86_64.whl
```
-## Train your first TensorFlow neural net model
+## Train your first TensorFlow neural net model <a class="md-anchor" id="AUTOGENERATED-train-your-first-tensorflow-neural-net-model"></a>
From the root of your source tree, run:
diff --git a/tensorflow/g3doc/how_tos/adding_an_op/index.md b/tensorflow/g3doc/how_tos/adding_an_op/index.md
index ee1f05029b..403629b602 100644
--- a/tensorflow/g3doc/how_tos/adding_an_op/index.md
+++ b/tensorflow/g3doc/how_tos/adding_an_op/index.md
@@ -1,4 +1,4 @@
-# Adding a New Op
+# Adding a New Op <a class="md-anchor" id="AUTOGENERATED-adding-a-new-op"></a>
PREREQUISITES:
@@ -27,28 +27,28 @@ to:
<!-- TOC-BEGIN This section is generated by neural network: DO NOT EDIT! -->
## Contents
-* [Define the Op's interface](#AUTOGENERATED-define-the-op-s-interface)
+### [Adding a New Op](#AUTOGENERATED-adding-a-new-op)
+* [Define the Op's interface](#define_interface)
* [Implement the kernel for the Op](#AUTOGENERATED-implement-the-kernel-for-the-op)
* [Generate the client wrapper](#AUTOGENERATED-generate-the-client-wrapper)
* [The Python Op wrapper](#AUTOGENERATED-the-python-op-wrapper)
* [The C++ Op wrapper](#AUTOGENERATED-the-c---op-wrapper)
* [Verify it works](#AUTOGENERATED-verify-it-works)
-* [Validation](#AUTOGENERATED-validation)
+* [Validation](#validation)
* [Op registration](#AUTOGENERATED-op-registration)
* [Attrs](#AUTOGENERATED-attrs)
* [Attr types](#AUTOGENERATED-attr-types)
- * [Polymorphism](#AUTOGENERATED-polymorphism)
+ * [Polymorphism](#polymorphism)
* [Inputs and Outputs](#AUTOGENERATED-inputs-and-outputs)
* [Backwards compatibility](#AUTOGENERATED-backwards-compatibility)
-* [GPU Support](#AUTOGENERATED-gpu-support)
+* [GPU Support](#mult-archs)
* [Implement the gradient in Python](#AUTOGENERATED-implement-the-gradient-in-python)
* [Implement a shape function in Python](#AUTOGENERATED-implement-a-shape-function-in-python)
<!-- TOC-END This section was generated by neural network, THANKS FOR READING! -->
-<a name="define_interface"></a>
-## Define the Op's interface <div class="md-anchor" id="AUTOGENERATED-define-the-op-s-interface">{#AUTOGENERATED-define-the-op-s-interface}</div>
+## Define the Op's interface <a class="md-anchor" id="define_interface"></a>
You define the interface of an Op by registering it with the TensorFlow system.
In the registration, you specify the name of your Op, its inputs (types and
@@ -74,7 +74,7 @@ outputs a tensor `zeroed` of 32-bit integers.
> A note on naming: The name of the Op should be unique and CamelCase. Names
> starting with an underscore (`_`) are reserved for internal use.
-## Implement the kernel for the Op <div class="md-anchor" id="AUTOGENERATED-implement-the-kernel-for-the-op">{#AUTOGENERATED-implement-the-kernel-for-the-op}</div>
+## Implement the kernel for the Op <a class="md-anchor" id="AUTOGENERATED-implement-the-kernel-for-the-op"></a>
After you define the interface, provide one or more implementations of the Op.
To create one of these kernels, create a class that extends `OpKernel` and
@@ -132,8 +132,8 @@ Once you
[build and reinstall TensorFlow](../../get_started/os_setup.md#create-pip), the
TensorFlow system can reference and use the Op when requested.
-## Generate the client wrapper <div class="md-anchor" id="AUTOGENERATED-generate-the-client-wrapper">{#AUTOGENERATED-generate-the-client-wrapper}</div>
-### The Python Op wrapper <div class="md-anchor" id="AUTOGENERATED-the-python-op-wrapper">{#AUTOGENERATED-the-python-op-wrapper}</div>
+## Generate the client wrapper <a class="md-anchor" id="AUTOGENERATED-generate-the-client-wrapper"></a>
+### The Python Op wrapper <a class="md-anchor" id="AUTOGENERATED-the-python-op-wrapper"></a>
Python op wrappers are created automatically in
`bazel-genfiles/tensorflow/python/ops/gen_user_ops.py` for all ops placed in the
@@ -172,7 +172,7 @@ def my_fact():
return gen_user_ops._fact()
```
-### The C++ Op wrapper <div class="md-anchor" id="AUTOGENERATED-the-c---op-wrapper">{#AUTOGENERATED-the-c---op-wrapper}</div>
+### The C++ Op wrapper <a class="md-anchor" id="AUTOGENERATED-the-c---op-wrapper"></a>
C++ op wrappers are created automatically for all ops placed in the
[`tensorflow/core/user_ops`][user_ops] directory, when you build TensorFlow. For
@@ -187,7 +187,7 @@ statement
#include "tensorflow/cc/ops/user_ops.h"
```
-## Verify it works <div class="md-anchor" id="AUTOGENERATED-verify-it-works">{#AUTOGENERATED-verify-it-works}</div>
+## Verify it works <a class="md-anchor" id="AUTOGENERATED-verify-it-works"></a>
A good way to verify that you've successfully implemented your Op is to write a
test for it. Create the file
@@ -211,7 +211,7 @@ Then run your test:
$ bazel test tensorflow/python:zero_out_op_test
```
-## Validation <div class="md-anchor" id="AUTOGENERATED-validation">{#AUTOGENERATED-validation}</div>
+## Validation <a class="md-anchor" id="validation"></a>
The example above assumed that the Op applied to a tensor of any shape. What
if it only applied to vectors? That means adding a check to the above OpKernel
@@ -249,9 +249,9 @@ function is an error, and if so return it, use
[`OP_REQUIRES_OK`][validation-macros]. Both of these macros return from the
function on error.
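The shape check described above can be sketched in plain Python (a hypothetical analogue for illustration, not the C++ kernel API; it assumes the running ZeroOut example copies the first element and zeroes the rest):

```python
# Hypothetical plain-Python analogue of the OP_REQUIRES-style validation
# described above: reject any input that is not a 1-D vector.
def zero_out(values):
    # Mirrors: OP_REQUIRES(context, TensorShapeUtils::IsVector(shape), ...)
    if not (isinstance(values, list) and
            all(not isinstance(v, list) for v in values)):
        raise ValueError("ZeroOut expects a 1-D vector.")
    # Keep the first element and zero the rest.
    return values[:1] + [0] * (len(values) - 1)

print(zero_out([3, 1, 4]))  # [3, 0, 0]
```

Like the macros, the check fires before any computation happens, so a malformed input never reaches the kernel body.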
-## Op registration <div class="md-anchor" id="AUTOGENERATED-op-registration">{#AUTOGENERATED-op-registration}</div>
+## Op registration <a class="md-anchor" id="AUTOGENERATED-op-registration"></a>
-### Attrs <div class="md-anchor" id="AUTOGENERATED-attrs">{#AUTOGENERATED-attrs}</div>
+### Attrs <a class="md-anchor" id="AUTOGENERATED-attrs"></a>
Ops can have attrs, whose values are set when the Op is added to a graph. These
are used to configure the Op, and their values can be accessed both within the
@@ -329,7 +329,7 @@ which can then be used in the `Compute` method:
> .Output("zeroed: int32");
> </pre></code>
-### Attr types <div class="md-anchor" id="AUTOGENERATED-attr-types">{#AUTOGENERATED-attr-types}</div>
+### Attr types <a class="md-anchor" id="AUTOGENERATED-attr-types"></a>
The following types are supported in an attr:
@@ -345,7 +345,7 @@ The following types are supported in an attr:
See also: [op_def_builder.cc:FinalizeAttr][FinalizeAttr] for a definitive list.
-#### Default values & constraints
+#### Default values & constraints <a class="md-anchor" id="AUTOGENERATED-default-values---constraints"></a>
Attrs may have default values, and some types of attrs can have constraints. To
define an attr with constraints, you can use the following `<attr-type-expr>`s:
@@ -446,8 +446,8 @@ REGISTER_OP("AttrDefaultExampleForAllTypes")
Note in particular that the values of type `type` use [the `DT_*` names
for the types](../../resources/dims_types.md#data-types).
-### Polymorphism <div class="md-anchor" id="AUTOGENERATED-polymorphism">{#AUTOGENERATED-polymorphism}</div>
-#### Type Polymorphism
+### Polymorphism <a class="md-anchor" id="polymorphism"></a>
+#### Type Polymorphism <a class="md-anchor" id="type-polymorphism"></a>
For ops that can take different types as input or produce different output
types, you can specify [an attr](#attrs) in
@@ -467,8 +467,7 @@ REGISTER\_OP("ZeroOut")
Your Op registration now specifies that the input's type must be `float`, or
`int32`, and that its output will be the same type, since both have type `T`.
-<a name="naming"></a>
-> A note on naming: Inputs, outputs, and attrs generally should be
+> A note on naming:{#naming} Inputs, outputs, and attrs generally should be
> given snake_case names. The one exception is attrs that are used as the type
> of an input or in the type of an input. Those attrs can be inferred when the
> op is added to the graph and so don't appear in the op's function. For
@@ -676,8 +675,7 @@ TF_CALL_REAL_NUMBER_TYPES(REGISTER_KERNEL);
#undef REGISTER_KERNEL
```
-<a name="list-input-output"></a>
-#### List Inputs and Outputs
+#### List Inputs and Outputs <a class="md-anchor" id="list-input-output"></a>
In addition to being able to accept or produce different types, ops can consume
or produce a variable number of tensors.
@@ -752,7 +750,7 @@ REGISTER_OP("MinimumLengthPolymorphicListExample")
.Output("out: T");
```
-### Inputs and Outputs <div class="md-anchor" id="AUTOGENERATED-inputs-and-outputs">{#AUTOGENERATED-inputs-and-outputs}</div>
+### Inputs and Outputs <a class="md-anchor" id="AUTOGENERATED-inputs-and-outputs"></a>
To summarize the above, an Op registration can have multiple inputs and outputs:
@@ -853,7 +851,7 @@ expressions:
For more details, see
[`tensorflow/core/framework/op_def_builder.h`][op_def_builder].
-### Backwards compatibility <div class="md-anchor" id="AUTOGENERATED-backwards-compatibility">{#AUTOGENERATED-backwards-compatibility}</div>
+### Backwards compatibility <a class="md-anchor" id="AUTOGENERATED-backwards-compatibility"></a>
In general, changes to specifications must be backwards-compatible: changing the
specification of an Op must not break prior serialized GraphDefs constructed
@@ -897,8 +895,7 @@ There are several ways to preserve backwards-compatibility.
If you cannot make your change to an operation backwards compatible, then
create a new operation with a new name with the new semantics.
-<a name="mult-archs"></a>
-## GPU Support <div class="md-anchor" id="AUTOGENERATED-gpu-support">{#AUTOGENERATED-gpu-support}</div>
+## GPU Support <a class="md-anchor" id="mult-archs"></a>
You can implement different OpKernels and register one for CPU and another for
GPU, just like you can [register kernels for different types](#polymorphism).
@@ -926,11 +923,11 @@ kept on the CPU, add a `HostMemory()` call to the kernel registration, e.g.:
PadOp<GPUDevice, T>)
```
-## Implement the gradient in Python <div class="md-anchor" id="AUTOGENERATED-implement-the-gradient-in-python">{#AUTOGENERATED-implement-the-gradient-in-python}</div>
+## Implement the gradient in Python <a class="md-anchor" id="AUTOGENERATED-implement-the-gradient-in-python"></a>
[TODO]:# (Write this!)
-## Implement a shape function in Python <div class="md-anchor" id="AUTOGENERATED-implement-a-shape-function-in-python">{#AUTOGENERATED-implement-a-shape-function-in-python}</div>
+## Implement a shape function in Python <a class="md-anchor" id="AUTOGENERATED-implement-a-shape-function-in-python"></a>
The TensorFlow Python API has a feature called "shape inference" that provides
information about the shapes of tensors without having to execute the
diff --git a/tensorflow/g3doc/how_tos/graph_viz/index.md b/tensorflow/g3doc/how_tos/graph_viz/index.md
index 7e3e3fde60..81c4a9f247 100644
--- a/tensorflow/g3doc/how_tos/graph_viz/index.md
+++ b/tensorflow/g3doc/how_tos/graph_viz/index.md
@@ -1,4 +1,4 @@
-# TensorBoard: Graph Visualization
+# TensorBoard: Graph Visualization <a class="md-anchor" id="AUTOGENERATED-tensorboard--graph-visualization"></a>
TensorFlow computation graphs are powerful but complicated. The graph visualization can help you understand and debug them. Here's an example of the visualization at work.
@@ -7,7 +7,7 @@ TensorFlow computation graphs are powerful but complicated. The graph visualizat
To see your own graph, run TensorBoard, pointing it to the log directory of the job; click the graph tab in the top pane and select the appropriate run using the menu at the upper-left corner. For in-depth information on how to run TensorBoard and make sure you are logging all the necessary information, see [Summaries and TensorBoard](../summaries_and_tensorboard/index.md).
-## Name scoping and nodes
+## Name scoping and nodes <a class="md-anchor" id="AUTOGENERATED-name-scoping-and-nodes"></a>
Typical TensorFlow graphs can have many thousands of nodes--far too many to see easily all at once, or even to lay out using standard graph tools. To simplify, variable names can be scoped, and the visualization uses this information to define a hierarchy on the nodes in the graph; by default, only the top of this hierarchy is shown. Here is an example that defines three operations under the `hidden` name scope using [`tf.name_scope()`](https://tensorflow.org/api_docs/python/framework.html?cl=head#name_scope):
@@ -136,7 +136,7 @@ Symbol | Meaning
![Control dependency edge](./control_edge.png "Control dependency edge") | Edge showing the control dependency between operations.
![Reference edge](./reference_edge.png "Reference edge") | A reference edge showing that the outgoing operation node can mutate the incoming tensor.
-## Interaction
+## Interaction <a class="md-anchor" id="AUTOGENERATED-interaction"></a>
Navigate the graph by panning and zooming. Click and drag to pan, and use a
scroll gesture to zoom. Double-click on a node, or click on its `+` button, to
diff --git a/tensorflow/g3doc/how_tos/index.md b/tensorflow/g3doc/how_tos/index.md
index d04ee2a2db..cb902e4dda 100644
--- a/tensorflow/g3doc/how_tos/index.md
+++ b/tensorflow/g3doc/how_tos/index.md
@@ -1,7 +1,7 @@
-# Overview
+# Overview <a class="md-anchor" id="AUTOGENERATED-overview"></a>
-## Variables: Creation, Initializing, Saving, and Restoring
+## Variables: Creation, Initializing, Saving, and Restoring <a class="md-anchor" id="AUTOGENERATED-variables--creation--initializing--saving--and-restoring"></a>
TensorFlow Variables are in-memory buffers containing tensors. Learn how to
use them to hold and update model parameters during training.
@@ -9,7 +9,7 @@ use them to hold and update model parameters during training.
[View Tutorial](variables/index.md)
-## TensorFlow Mechanics 101
+## TensorFlow Mechanics 101 <a class="md-anchor" id="AUTOGENERATED-tensorflow-mechanics-101"></a>
A step-by-step walk through of the details of using TensorFlow infrastructure
to train models at scale, using MNIST handwritten digit recognition as a toy
@@ -18,7 +18,7 @@ example.
[View Tutorial](../tutorials/mnist/tf/index.md)
-## TensorBoard: Visualizing Learning
+## TensorBoard: Visualizing Learning <a class="md-anchor" id="AUTOGENERATED-tensorboard--visualizing-learning"></a>
TensorBoard is a useful tool for visualizing the training and evaluation of
your model(s). This tutorial describes how to build and run TensorBoard as well
@@ -28,7 +28,7 @@ TensorBoard uses for display.
[View Tutorial](summaries_and_tensorboard/index.md)
-## TensorBoard: Graph Visualization
+## TensorBoard: Graph Visualization <a class="md-anchor" id="AUTOGENERATED-tensorboard--graph-visualization"></a>
This tutorial describes how to use the graph visualizer in TensorBoard to help
you understand the dataflow graph and debug it.
@@ -36,7 +36,7 @@ you understand the dataflow graph and debug it.
[View Tutorial](graph_viz/index.md)
-## Reading Data
+## Reading Data <a class="md-anchor" id="AUTOGENERATED-reading-data"></a>
This tutorial describes the three main methods of getting data into your
TensorFlow program: Feeding, Reading and Preloading.
@@ -44,7 +44,7 @@ TensorFlow program: Feeding, Reading and Preloading.
[View Tutorial](reading_data/index.md)
-## Threading and Queues
+## Threading and Queues <a class="md-anchor" id="AUTOGENERATED-threading-and-queues"></a>
This tutorial describes the various constructs implemented by TensorFlow
to facilitate asynchronous and concurrent training.
@@ -52,7 +52,7 @@ to facilitate asynchronous and concurrent training.
[View Tutorial](threading_and_queues/index.md)
-## Adding a New Op
+## Adding a New Op <a class="md-anchor" id="AUTOGENERATED-adding-a-new-op"></a>
TensorFlow already has a large suite of node operations from which you can
compose in your graph, but here are the details of how to add your own custom Op.
@@ -60,7 +60,7 @@ compose in your graph, but here are the details of how to add you own custom Op.
[View Tutorial](adding_an_op/index.md)
-## Custom Data Readers
+## Custom Data Readers <a class="md-anchor" id="AUTOGENERATED-custom-data-readers"></a>
If you have a sizable custom data set, you may want to consider extending
TensorFlow to read your data directly in its native format. Here's how.
@@ -68,14 +68,14 @@ TensorFlow to read your data directly in it's native format. Here's how.
[View Tutorial](new_data_formats/index.md)
-## Using GPUs
+## Using GPUs <a class="md-anchor" id="AUTOGENERATED-using-gpus"></a>
This tutorial describes how to construct and execute models on GPU(s).
[View Tutorial](using_gpu/index.md)
-## Sharing Variables
+## Sharing Variables <a class="md-anchor" id="AUTOGENERATED-sharing-variables"></a>
When deploying large models on multiple GPUs, or when unrolling complex LSTMs
or RNNs, it is often necessary to access the same Variable objects from
diff --git a/tensorflow/g3doc/how_tos/new_data_formats/index.md b/tensorflow/g3doc/how_tos/new_data_formats/index.md
index a8fa7c42d4..f48b2464a0 100644
--- a/tensorflow/g3doc/how_tos/new_data_formats/index.md
+++ b/tensorflow/g3doc/how_tos/new_data_formats/index.md
@@ -1,4 +1,4 @@
-# Custom Data Readers
+# Custom Data Readers <a class="md-anchor" id="AUTOGENERATED-custom-data-readers"></a>
PREREQUISITES:
@@ -22,13 +22,14 @@ followed by
<!-- TOC-BEGIN This section is generated by neural network: DO NOT EDIT! -->
## Contents
+### [Custom Data Readers](#AUTOGENERATED-custom-data-readers)
* [Writing a Reader for a file format](#AUTOGENERATED-writing-a-reader-for-a-file-format)
* [Writing an Op for a record format](#AUTOGENERATED-writing-an-op-for-a-record-format)
<!-- TOC-END This section was generated by neural network, THANKS FOR READING! -->
-## Writing a Reader for a file format <div class="md-anchor" id="AUTOGENERATED-writing-a-reader-for-a-file-format">{#AUTOGENERATED-writing-a-reader-for-a-file-format}</div>
+## Writing a Reader for a file format <a class="md-anchor" id="AUTOGENERATED-writing-a-reader-for-a-file-format"></a>
A `Reader` is something that reads records from a file. There are some examples
of Reader Ops already built into TensorFlow:
@@ -195,7 +196,7 @@ ops.RegisterShape("SomeReader")(common_shapes.scalar_shape)
You can see some examples in
[`tensorflow/python/ops/io_ops.py`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/python/ops/io_ops.py).
-## Writing an Op for a record format <div class="md-anchor" id="AUTOGENERATED-writing-an-op-for-a-record-format">{#AUTOGENERATED-writing-an-op-for-a-record-format}</div>
+## Writing an Op for a record format <a class="md-anchor" id="AUTOGENERATED-writing-an-op-for-a-record-format"></a>
Generally this is an ordinary op that takes a scalar string record as input, so
follow [the instructions to add an Op](../adding_an_op/index.md). You may
diff --git a/tensorflow/g3doc/how_tos/reading_data/index.md b/tensorflow/g3doc/how_tos/reading_data/index.md
index b37d3042e7..945af144ca 100644
--- a/tensorflow/g3doc/how_tos/reading_data/index.md
+++ b/tensorflow/g3doc/how_tos/reading_data/index.md
@@ -1,4 +1,4 @@
-# Reading data
+# Reading data <a class="md-anchor" id="AUTOGENERATED-reading-data"></a>
There are three main methods of getting data into a TensorFlow program:
@@ -10,13 +10,14 @@ There are three main methods of getting data into a TensorFlow program:
<!-- TOC-BEGIN This section is generated by neural network: DO NOT EDIT! -->
## Contents
-* [Feeding](#AUTOGENERATED-feeding)
+### [Reading data](#AUTOGENERATED-reading-data)
+* [Feeding](#Feeding)
* [Reading from files](#AUTOGENERATED-reading-from-files)
* [Filenames, shuffling, and epoch limits](#AUTOGENERATED-filenames--shuffling--and-epoch-limits)
* [File formats](#AUTOGENERATED-file-formats)
* [Preprocessing](#AUTOGENERATED-preprocessing)
* [Batching](#AUTOGENERATED-batching)
- * [Creating threads to prefetch using `QueueRunner` objects](#AUTOGENERATED-creating-threads-to-prefetch-using--queuerunner--objects)
+ * [Creating threads to prefetch using `QueueRunner` objects](#QueueRunner)
* [Filtering records or producing multiple examples per record](#AUTOGENERATED-filtering-records-or-producing-multiple-examples-per-record)
* [Sparse input data](#AUTOGENERATED-sparse-input-data)
* [Preloaded data](#AUTOGENERATED-preloaded-data)
@@ -25,7 +26,7 @@ There are three main methods of getting data into a TensorFlow program:
<!-- TOC-END This section was generated by neural network, THANKS FOR READING! -->
-## Feeding <div class="md-anchor" id="AUTOGENERATED-feeding">{#AUTOGENERATED-feeding}</div>
+## Feeding <a class="md-anchor" id="Feeding"></a>
TensorFlow's feed mechanism lets you inject data into any Tensor in a
computation graph. A python computation can thus feed data directly into the
@@ -53,7 +54,7 @@ in
[tensorflow/g3doc/tutorials/mnist/fully_connected_feed.py](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/g3doc/tutorials/mnist/fully_connected_feed.py),
and is described in the [MNIST tutorial](../../tutorials/mnist/tf/index.md).
-## Reading from files <div class="md-anchor" id="AUTOGENERATED-reading-from-files">{#AUTOGENERATED-reading-from-files}</div>
+## Reading from files <a class="md-anchor" id="AUTOGENERATED-reading-from-files"></a>
A typical pipeline for reading records from files has the following stages:
@@ -66,7 +67,7 @@ A typical pipeline for reading records from files has the following stages:
7. *Optional* preprocessing
8. Example queue
-### Filenames, shuffling, and epoch limits <div class="md-anchor" id="AUTOGENERATED-filenames--shuffling--and-epoch-limits">{#AUTOGENERATED-filenames--shuffling--and-epoch-limits}</div>
+### Filenames, shuffling, and epoch limits <a class="md-anchor" id="AUTOGENERATED-filenames--shuffling--and-epoch-limits"></a>
For the list of filenames, use either a constant string Tensor (like
`["file0", "file1"]` or `[("file%d" % i) for i in range(2)]`) or the
@@ -88,7 +89,7 @@ The queue runner works in a thread separate from the reader that pulls
filenames from the queue, so the shuffling and enqueuing process does not
block the reader.
-### File formats <div class="md-anchor" id="AUTOGENERATED-file-formats">{#AUTOGENERATED-file-formats}</div>
+### File formats <a class="md-anchor" id="AUTOGENERATED-file-formats"></a>
Select the reader that matches your input file format and pass the filename
queue to the reader's read method. The read method outputs a key identifying
@@ -96,7 +97,7 @@ the file and record (useful for debugging if you have some weird records), and
a scalar string value. Use one (or more) of the decoder and conversion ops to
decode this string into the tensors that make up an example.
-#### CSV files
+#### CSV files <a class="md-anchor" id="AUTOGENERATED-csv-files"></a>
To read text files in [comma-separated value (CSV)
format](https://tools.ietf.org/html/rfc4180), use a
@@ -138,7 +139,7 @@ You must call `tf.train.start_queue_runners()` to populate the queue before
you call `run()` or `eval()` to execute the `read()`. Otherwise `read()` will
block while it waits for filenames from the queue.
-#### Fixed length records
+#### Fixed length records <a class="md-anchor" id="AUTOGENERATED-fixed-length-records"></a>
To read binary files in which each record is a fixed number of bytes, use
[tf.FixedLengthRecordReader](../../api_docs/python/io_ops.md#FixedLengthRecordReader)
@@ -154,7 +155,7 @@ needed. For CIFAR-10, you can see how to do the reading and decoding in
and described in
[this tutorial](../../tutorials/deep_cnn/index.md#prepare-the-data).
-#### Standard TensorFlow format
+#### Standard TensorFlow format <a class="md-anchor" id="AUTOGENERATED-standard-tensorflow-format"></a>
Another approach is to convert whatever data you have into a supported format.
This approach makes it easier to mix and match data sets and network
@@ -180,7 +181,7 @@ found in
[tensorflow/g3doc/how_tos/reading_data/fully_connected_reader.py](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/g3doc/how_tos/reading_data/fully_connected_reader.py),
which you can compare with the `fully_connected_feed` version.
-### Preprocessing <div class="md-anchor" id="AUTOGENERATED-preprocessing">{#AUTOGENERATED-preprocessing}</div>
+### Preprocessing <a class="md-anchor" id="AUTOGENERATED-preprocessing"></a>
You can then do any preprocessing of these examples you want. This would be any
processing that doesn't depend on trainable parameters. Examples include
@@ -189,7 +190,7 @@ etc. See
[tensorflow/models/image/cifar10/cifar10.py](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/models/image/cifar10/cifar10.py)
for an example.
-### Batching <div class="md-anchor" id="AUTOGENERATED-batching">{#AUTOGENERATED-batching}</div>
+### Batching <a class="md-anchor" id="AUTOGENERATED-batching"></a>
At the end of the pipeline we use another queue to batch together examples for
training, evaluation, or inference. For this we use a queue that randomizes the
@@ -267,8 +268,7 @@ summary to the graph that indicates how full the example queue is. If you have
enough reading threads, that summary will stay above zero. You can
[view your summaries as training progresses using TensorBoard](../summaries_and_tensorboard/index.md).
-<a name="QueueRunner"></a>
-### Creating threads to prefetch using `QueueRunner` objects <div class="md-anchor" id="AUTOGENERATED-creating-threads-to-prefetch-using--queuerunner--objects">{#AUTOGENERATED-creating-threads-to-prefetch-using--queuerunner--objects}</div>
+### Creating threads to prefetch using `QueueRunner` objects <a class="md-anchor" id="QueueRunner"></a>
The short version: many of the `tf.train` functions listed above add
[`QueueRunner`](../../api_docs/python/train.md#QueueRunner) objects to your
@@ -312,7 +312,7 @@ coord.join(threads)
sess.close()
```
-#### Aside: What is happening here?
+#### Aside: What is happening here? <a class="md-anchor" id="AUTOGENERATED-aside--what-is-happening-here-"></a>
First we create the graph. It will have a few pipeline stages that are
connected by queues. The first stage will generate filenames to read and enqueue
@@ -357,7 +357,7 @@ exception).
For more about threading, queues, QueueRunners, and Coordinators
[see here](../threading_and_queues/index.md).
-#### Aside: How clean shut-down when limiting epochs works
+#### Aside: How clean shut-down when limiting epochs works <a class="md-anchor" id="AUTOGENERATED-aside--how-clean-shut-down-when-limiting-epochs-works"></a>
Imagine you have a model that has set a limit on the number of epochs to train
on. That means that the thread generating filenames will only run that many
@@ -400,7 +400,7 @@ errors and exiting. Once all the training threads are done,
[tf.train.Coordinator.join()](../../api_docs/python/train.md#Coordinator.join)
will return and you can exit cleanly.
-### Filtering records or producing multiple examples per record <div class="md-anchor" id="AUTOGENERATED-filtering-records-or-producing-multiple-examples-per-record">{#AUTOGENERATED-filtering-records-or-producing-multiple-examples-per-record}</div>
+### Filtering records or producing multiple examples per record <a class="md-anchor" id="AUTOGENERATED-filtering-records-or-producing-multiple-examples-per-record"></a>
Instead of examples with shapes `[x, y, z]`, you will produce a batch of
examples with shape `[batch, x, y, z]`. The batch size can be 0 if you want to
@@ -409,14 +409,14 @@ are producing multiple examples per record. Then simply set `enqueue_many=True`
when calling one of the batching functions (such as `shuffle_batch` or
`shuffle_batch_join`).
-### Sparse input data <div class="md-anchor" id="AUTOGENERATED-sparse-input-data">{#AUTOGENERATED-sparse-input-data}</div>
+### Sparse input data <a class="md-anchor" id="AUTOGENERATED-sparse-input-data"></a>
SparseTensors don't play well with queues. If you use SparseTensors you have
to decode the string records using
[tf.parse_example](../../api_docs/python/io_ops.md#parse_example) **after**
batching (instead of using `tf.parse_single_example` before batching).
-## Preloaded data <div class="md-anchor" id="AUTOGENERATED-preloaded-data">{#AUTOGENERATED-preloaded-data}</div>
+## Preloaded data <a class="md-anchor" id="AUTOGENERATED-preloaded-data"></a>
This is only used for small data sets that can be loaded entirely in memory.
There are two approaches:
@@ -475,7 +475,7 @@ An MNIST example that preloads the data using constants can be found in
You can compare these with the `fully_connected_feed` and
`fully_connected_reader` versions above.
-## Multiple input pipelines <div class="md-anchor" id="AUTOGENERATED-multiple-input-pipelines">{#AUTOGENERATED-multiple-input-pipelines}</div>
+## Multiple input pipelines <a class="md-anchor" id="AUTOGENERATED-multiple-input-pipelines"></a>
Commonly you will want to train on one dataset and evaluate (or "eval") on
another. One way to do this is to actually have two separate processes:
diff --git a/tensorflow/g3doc/how_tos/summaries_and_tensorboard/index.md b/tensorflow/g3doc/how_tos/summaries_and_tensorboard/index.md
index eb22df184d..cf06cf70fc 100644
--- a/tensorflow/g3doc/how_tos/summaries_and_tensorboard/index.md
+++ b/tensorflow/g3doc/how_tos/summaries_and_tensorboard/index.md
@@ -1,4 +1,4 @@
-# TensorBoard: Visualizing Learning
+# TensorBoard: Visualizing Learning <a class="md-anchor" id="AUTOGENERATED-tensorboard--visualizing-learning"></a>
The computations you'll use TensorBoard for - like training a massive
deep neural network - can be complex and confusing. To make it easier to
@@ -14,7 +14,7 @@ TensorBoard](/tensorboard/cifar.html).
-## Serializing the data
+## Serializing the data <a class="md-anchor" id="AUTOGENERATED-serializing-the-data"></a>
TensorBoard operates by reading TensorFlow events files, which contain summary
data that you can generate when running TensorFlow. Here's the general
@@ -82,7 +82,7 @@ while training:
You're now all set to visualize this data using TensorBoard.
-## Launching TensorBoard
+## Launching TensorBoard <a class="md-anchor" id="AUTOGENERATED-launching-tensorboard"></a>
To run TensorBoard, use the command
`python tensorflow/tensorboard/tensorboard.py --logdir=path/to/logs`, where
diff --git a/tensorflow/g3doc/how_tos/threading_and_queues/index.md b/tensorflow/g3doc/how_tos/threading_and_queues/index.md
index c472de18c5..bb0d08f53f 100644
--- a/tensorflow/g3doc/how_tos/threading_and_queues/index.md
+++ b/tensorflow/g3doc/how_tos/threading_and_queues/index.md
@@ -1,4 +1,30 @@
-# Threading and Queues
+# Threading and Queues <a class="md-anchor" id="AUTOGENERATED-threading-and-queues"></a>
+
+Queues are a powerful mechanism for asynchronous computation using TensorFlow.
+
+Like everything in TensorFlow, a queue is a node in a TensorFlow graph. It's a
+stateful node, like a variable: other nodes can modify its content. In
+particular, nodes can enqueue new items into the queue, or dequeue existing
+items from the queue.
+
+To get a feel for queues, let's consider a simple example. We will create a
+"first in, first out" queue (`FIFOQueue`) and fill it with zeros.
+Then we'll construct a graph
+that takes an item off the queue, adds one to that item, and puts it back on the
+end of the queue. Slowly, the numbers on the queue increase.
+
+<div style="width:70%; margin:auto; margin-bottom:10px; margin-top:20px;">
+<img style="width:100%" src="IncremeterFifoQueue.gif">
+</div>
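The animation above can be imitated in plain Python (this is only an analogy using `collections.deque`, not the TensorFlow queue API): start with zeros, then repeatedly dequeue an item, add one, and enqueue the result.

```python
from collections import deque

# Plain-Python sketch of the FIFOQueue increment example above.
q = deque([0, 0, 0])

for _ in range(6):  # two full passes over a three-element queue
    item = q.popleft()   # "Dequeue" the front item
    q.append(item + 1)   # "Enqueue" the incremented value

print(list(q))  # every element has been incremented twice: [2, 2, 2]
```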
+
+`Enqueue`, `EnqueueMany`, and `Dequeue` are special nodes. They take a pointer
+to the queue instead of a normal value, allowing them to change it. We recommend
+you think of these as being like methods of the queue. In fact, in the Python
+API, they are methods of the queue object (e.g. `q.enqueue(...)`).
+
+Now that you have a bit of a feel for queues, let's dive into the details...
+
+## Queue Use Overview <a class="md-anchor" id="AUTOGENERATED-queue-use-overview"></a>
Queues, such as `FIFOQueue` and `RandomShuffleQueue`, are important TensorFlow
objects for computing tensors asynchronously in a graph.
@@ -28,7 +54,7 @@ stop together and report exceptions to a program that waits for them to stop.
The `QueueRunner` class is used to create a number of threads cooperating to
enqueue tensors in the same queue.
-## Coordinator
+## Coordinator <a class="md-anchor" id="AUTOGENERATED-coordinator"></a>
The Coordinator class helps multiple threads stop together.
@@ -70,7 +96,7 @@ Obviously, the coordinator can manage threads doing very different things.
They don't have to be all the same as in the example above. The coordinator
also has support to capture and report exceptions. See the [Coordinator class](../../api_docs/python/train.md#Coordinator) documentation for more details.
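The stop-together pattern can be sketched with the standard library (a minimal `threading.Event`-based analogue, not the `tf.train.Coordinator` API itself): any thread can request a stop, every other thread notices and exits its loop, and the main thread joins them all.

```python
import threading

# Minimal sketch of coordinator-style shutdown: workers loop until a
# shared stop event is set; any worker may set it for everyone.
stop_event = threading.Event()
counts = []

def worker(limit):
    i = 0
    while not stop_event.is_set():
        i += 1
        if i >= limit:
            stop_event.set()  # ask every other thread to stop too
    counts.append(i)  # record how far this thread got

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()  # wait for all threads to stop together

print(len(counts))  # 4: every thread exited cleanly
```

The real Coordinator adds exception capture and re-raising on top of this basic request-stop/join cycle.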
-## QueueRunner
+## QueueRunner <a class="md-anchor" id="AUTOGENERATED-queuerunner"></a>
The `QueueRunner` class creates a number of threads that repeatedly run an
enqueue op. These threads can use a coordinator to stop together. In
@@ -119,7 +145,7 @@ coord.request_stop()
coord.join(threads)
```
-## Handling Exceptions
+## Handling Exceptions <a class="md-anchor" id="AUTOGENERATED-handling-exceptions"></a>
Threads started by queue runners do more than just run the enqueue ops. They
also catch and handle exceptions generated by queues, including
diff --git a/tensorflow/g3doc/how_tos/using_gpu/index.md b/tensorflow/g3doc/how_tos/using_gpu/index.md
index c0bdc5a7cb..8a46c30c95 100644
--- a/tensorflow/g3doc/how_tos/using_gpu/index.md
+++ b/tensorflow/g3doc/how_tos/using_gpu/index.md
@@ -1,6 +1,6 @@
-# Using GPUs
+# Using GPUs <a class="md-anchor" id="AUTOGENERATED-using-gpus"></a>
-## Supported devices
+## Supported devices <a class="md-anchor" id="AUTOGENERATED-supported-devices"></a>
On a typical system, there are multiple computing devices. In TensorFlow, the
supported device types are `CPU` and `GPU`. They are represented as
@@ -16,7 +16,7 @@ a device. For example, `matmul` has both CPU and GPU kernels. On a
system with devices `cpu:0` and `gpu:0`, `gpu:0` will be selected to run
`matmul`.
-## Logging Device placement
+## Logging Device placement <a class="md-anchor" id="AUTOGENERATED-logging-device-placement"></a>
To find out which devices your operations and tensors are assigned to, create
the session with `log_device_placement` configuration option set to `True`.
@@ -46,7 +46,7 @@ MatMul: /job:localhost/replica:0/task:0/gpu:0
```
-## Manual device placement
+## Manual device placement <a class="md-anchor" id="AUTOGENERATED-manual-device-placement"></a>
If you would like a particular operation to run on a device of your
choice instead of what's automatically selected for you, you can use
@@ -78,7 +78,7 @@ MatMul: /job:localhost/replica:0/task:0/gpu:0
[ 49. 64.]]
```
-## Using a single GPU on a multi-GPU system
+## Using a single GPU on a multi-GPU system <a class="md-anchor" id="AUTOGENERATED-using-a-single-gpu-on-a-multi-gpu-system"></a>
If you have more than one GPU in your system, the GPU with the lowest ID will be
selected by default. If you would like to run on a different GPU, you will need
@@ -125,7 +125,7 @@ sess = tf.Session(config=tf.ConfigProto(
print sess.run(c)
```
-## Using multiple GPUs
+## Using multiple GPUs <a class="md-anchor" id="AUTOGENERATED-using-multiple-gpus"></a>
If you would like to run TensorFlow on multiple GPUs, you can construct your
model in a multi-tower fashion where each tower is assigned to a different GPU.
diff --git a/tensorflow/g3doc/how_tos/variable_scope/index.md b/tensorflow/g3doc/how_tos/variable_scope/index.md
index f9221b207b..fd48a398d4 100644
--- a/tensorflow/g3doc/how_tos/variable_scope/index.md
+++ b/tensorflow/g3doc/how_tos/variable_scope/index.md
@@ -1,4 +1,4 @@
-# Sharing Variables
+# Sharing Variables <a class="md-anchor" id="AUTOGENERATED-sharing-variables"></a>
You can create, initialize, save and load single variables
in the way described in the [Variables HowTo](../variables/index.md).
@@ -7,7 +7,7 @@ variables and you might want to initialize all of them in one place.
This tutorial shows how this can be done using `tf.variable_scope()` and
the `tf.get_variable()`.
-## The Problem
+## The Problem <a class="md-anchor" id="AUTOGENERATED-the-problem"></a>
Imagine you create a simple model for image filters, similar to our
[Convolutional Neural Networks Tutorial](../../tutorials/deep_cnn/index.md)
@@ -88,7 +88,7 @@ For a lighter solution, not involving classes, TensorFlow provides
a *Variable Scope* mechanism that allows you to easily share named variables
while constructing a graph.
-## Variable Scope Example
+## Variable Scope Example <a class="md-anchor" id="AUTOGENERATED-variable-scope-example"></a>
The Variable Scope mechanism in TensorFlow consists of two main functions:
@@ -162,9 +162,9 @@ with tf.variable_scope("image_filters") as scope:
This is a good way to share variables, lightweight and safe.
-## How Does Variable Scope Work?
+## How Does Variable Scope Work? <a class="md-anchor" id="AUTOGENERATED-how-does-variable-scope-work-"></a>
-### Understanding `tf.get_variable()`
+### Understanding `tf.get_variable()` <a class="md-anchor" id="AUTOGENERATED-understanding--tf.get_variable---"></a>
To understand variable scope it is necessary to first
fully understand how `tf.get_variable()` works.
@@ -210,7 +210,7 @@ with tf.variable_scope("foo", reuse=True):
assert v1 == v
```
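The create-or-reuse rule can be mimicked with a toy registry in plain Python; this is only an analogy for how `tf.get_variable()` chooses between creating and looking up a variable, not the real mechanism:

```python
# Toy sketch of tf.get_variable()'s create-or-reuse rule, modeled
# with a plain dict keyed by full variable name.
_store = {}

def get_variable(name, initial=None, reuse=False):
    if reuse:
        return _store[name]              # reuse: must already exist
    if name in _store:
        raise ValueError("Variable %s already exists." % name)
    _store[name] = {"name": name, "value": initial}  # create
    return _store[name]

v = get_variable("foo/v", initial=1)
v1 = get_variable("foo/v", reuse=True)
print(v1 is v)  # -> True: reuse returns the very same object
```

Creating `"foo/v"` a second time without `reuse` raises an error, mirroring the behavior described above.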
-### Basics of `tf.variable_scope()`
+### Basics of `tf.variable_scope()` <a class="md-anchor" id="AUTOGENERATED-basics-of--tf.variable_scope---"></a>
Knowing how `tf.get_variable()` works makes it easy to understand variable
scope. The primary function of variable scope is to carry a name that will
@@ -268,7 +268,7 @@ with tf.variable_scope("root"):
assert tf.get_variable_scope().reuse == False
```
-### Capturing variable scope
+### Capturing variable scope <a class="md-anchor" id="AUTOGENERATED-capturing-variable-scope"></a>
In all examples presented above, we shared parameters only because their
names agreed, that is, because we opened a reusing variable scope with
@@ -303,7 +303,7 @@ with tf.variable_scope("bar")
assert foo_scope2.name == "foo" # Not changed.
```
-### Initializers in variable scope
+### Initializers in variable scope <a class="md-anchor" id="AUTOGENERATED-initializers-in-variable-scope"></a>
Using `tf.get_variable()` allows you to write functions that create or reuse
variables and can be transparently called from outside. But what if we wanted
@@ -329,7 +329,7 @@ with tf.variable_scope("foo", initializer=tf.constant_initializer(0.4)):
assert v.eval() == 0.2 # Changed default initializer.
```
-### Names of ops in `tf.variable_scope()`
+### Names of ops in `tf.variable_scope()` <a class="md-anchor" id="AUTOGENERATED-names-of-ops-in--tf.variable_scope---"></a>
We discussed how `tf.variable_scope` governs the names of variables.
But how does it influence the names of other ops in the scope?
@@ -359,7 +359,7 @@ When opening a variable scope using a captured object instead of a string,
we do not alter the current name scope for ops.
-## Examples of Use
+## Examples of Use <a class="md-anchor" id="AUTOGENERATED-examples-of-use"></a>
Here are pointers to a few files that make use of variable scope.
In particular, it is heavily used for recurrent neural networks
diff --git a/tensorflow/g3doc/how_tos/variables/index.md b/tensorflow/g3doc/how_tos/variables/index.md
index 26b19b3ae1..23fa8d71f3 100644
--- a/tensorflow/g3doc/how_tos/variables/index.md
+++ b/tensorflow/g3doc/how_tos/variables/index.md
@@ -1,4 +1,4 @@
-# Variables: Creation, Initialization, Saving, and Loading
+# Variables: Creation, Initialization, Saving, and Loading <a class="md-anchor" id="AUTOGENERATED-variables--creation--initialization--saving--and-loading"></a>
When you train a model, you use [Variables](../../api_docs/python/state_ops.md)
to hold and update parameters. Variables are in-memory buffers containing
@@ -13,7 +13,7 @@ their reference manual for a complete description of their API:
* The `Saver` class [tf.train.Saver](../../api_docs/python/state_ops.md#Saver).
-## Creation
+## Creation <a class="md-anchor" id="AUTOGENERATED-creation"></a>
When you create a [Variable](../../api_docs/python/state_ops.md) you pass a
`Tensor` as its initial value to the `Variable()` constructor. TensorFlow
@@ -43,7 +43,7 @@ Calling `tf.Variable()` adds a few Ops to the graph:
The value returned by `tf.Variable()` is an instance of the Python class
`tf.Variable`.
-## Initialization
+## Initialization <a class="md-anchor" id="AUTOGENERATED-initialization"></a>
Variable initializers must be run explicitly before other Ops in your model can
be run. The easiest way to do that is to add an Op that runs all the variable
@@ -74,7 +74,7 @@ with tf.Session() as sess:
...
```
-### Initialization from another Variable
+### Initialization from another Variable <a class="md-anchor" id="AUTOGENERATED-initialization-from-another-variable"></a>
You sometimes need to initialize a variable from the initial value of another
variable. As the Op added by `tf.initialize_all_variables()` initializes all
@@ -96,7 +96,7 @@ w2 = tf.Variable(weights.initialized_value(), name="w2")
w_twice = tf.Variable(weights.initialized_value() * 0.2, name="w_twice")
```
-### Custom Initialization
+### Custom Initialization <a class="md-anchor" id="AUTOGENERATED-custom-initialization"></a>
The convenience function `tf.initialize_all_variables()` adds an Op to
initialize *all variables* in the model. You can also pass it an explicit list
@@ -104,14 +104,14 @@ of variables to initialize. See the
[Variables Documentation](../../api_docs/python/state_ops.md) for more options,
including checking if variables are initialized.
-## Saving and Restoring
+## Saving and Restoring <a class="md-anchor" id="AUTOGENERATED-saving-and-restoring"></a>
The easiest way to save and restore a model is to use a `tf.train.Saver`
object. The constructor adds `save` and `restore` Ops to the graph for all, or
a specified list, of variables. The saver object provides methods to run these
Ops, specifying paths for the checkpoint files to write to or read from.
-### Checkpoint Files
+### Checkpoint Files <a class="md-anchor" id="AUTOGENERATED-checkpoint-files"></a>
Variables are saved in binary files that, roughly, contain a map from variable
names to tensors.
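As an analogy only (TensorFlow's checkpoint format is its own binary format, not a Python pickle), the name-to-value map can be pictured like this:

```python
# Plain-Python picture of a checkpoint: a map from variable names to
# values, written to disk and read back. This is just the idea, not
# the real TensorFlow checkpoint format.
import os
import pickle
import tempfile

variables = {"w1": [1.0, 2.0], "w2": [3.0]}

path = os.path.join(tempfile.mkdtemp(), "model.ckpt")
with open(path, "wb") as f:
    pickle.dump(variables, f)        # "save": write name -> value map
with open(path, "rb") as f:
    restored = pickle.load(f)        # "restore": read it back

print(restored == variables)  # -> True
```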
@@ -120,7 +120,7 @@ When you create a `Saver` object, you can optionally chose names for the
variables in the checkpoint files. By default, it uses the names passed to the
`tf.Variable()` call.
-### Saving Variables
+### Saving Variables <a class="md-anchor" id="AUTOGENERATED-saving-variables"></a>
Create a `Saver` with `tf.train.Saver()` to manage all variables in
the model.
@@ -147,7 +147,7 @@ with tf.Session() as sess:
print "Model saved in file: ", save_path
```
-### Restoring Variables
+### Restoring Variables <a class="md-anchor" id="AUTOGENERATED-restoring-variables"></a>
The same `Saver` object is used to restore variables. Note that when you
restore variables from a file you do not have to initialize them beforehand.
@@ -170,7 +170,7 @@ with tf.Session() as sess:
...
```
-### Chosing which Variables to Save and Restore
+### Choosing which Variables to Save and Restore <a class="md-anchor" id="AUTOGENERATED-chosing-which-variables-to-save-and-restore"></a>
If you do not pass any argument to `tf.train.Saver()` the saver
handles all variables. Each one of them is saved under the name that was
diff --git a/tensorflow/g3doc/index.md b/tensorflow/g3doc/index.md
index dabc083ca8..f213dc4073 100644
--- a/tensorflow/g3doc/index.md
+++ b/tensorflow/g3doc/index.md
@@ -1,8 +1,8 @@
-# TensorFlow
+# TensorFlow <a class="md-anchor" id="AUTOGENERATED-tensorflow"></a>
<!-- Note: This file is ignored in building the external site tensorflow.org -->
-## Introduction
+## Introduction <a class="md-anchor" id="AUTOGENERATED-introduction"></a>
TensorFlow&#8482; is an open source software library for numerical computation
using data flow graphs. Nodes in the graph represent mathematical operations,
@@ -16,6 +16,6 @@ neural networks research. The system is general enough to be applicable in a
wide variety of other domains as well. The following documents show you how
to set up and use the TensorFlow system.
-## Table of Contents
+## Table of Contents <a class="md-anchor" id="AUTOGENERATED-table-of-contents"></a>
<!--#include virtual="sitemap.md" -->
diff --git a/tensorflow/g3doc/resources/bib.md b/tensorflow/g3doc/resources/bib.md
index 94a005e6f4..2d022a09fa 100644
--- a/tensorflow/g3doc/resources/bib.md
+++ b/tensorflow/g3doc/resources/bib.md
@@ -1,4 +1,4 @@
-# BibTex Citation
+# BibTex Citation <a class="md-anchor" id="AUTOGENERATED-bibtex-citation"></a>
```
@misc{tensorflow2015-whitepaper,
title={{TensorFlow}: Large-Scale Machine Learning on Heterogeneous Systems},
diff --git a/tensorflow/g3doc/resources/dims_types.md b/tensorflow/g3doc/resources/dims_types.md
index eebd80efaa..928dfb2b8e 100644
--- a/tensorflow/g3doc/resources/dims_types.md
+++ b/tensorflow/g3doc/resources/dims_types.md
@@ -1,11 +1,11 @@
-# Tensor Ranks, Shapes, and Types
+# Tensor Ranks, Shapes, and Types <a class="md-anchor" id="AUTOGENERATED-tensor-ranks--shapes--and-types"></a>
TensorFlow programs use a tensor data structure to represent all data. You can
think of a TensorFlow tensor as an n-dimensional array or list.
A tensor has a static type and dynamic dimensions. Only tensors may be passed
between nodes in the computation graph.
-## Rank
+## Rank <a class="md-anchor" id="AUTOGENERATED-rank"></a>
In the TensorFlow system, tensors are described by a unit of dimensionality
known as *rank*. Tensor rank is not the same as matrix rank. Tensor rank
@@ -28,7 +28,7 @@ Rank | Math entity | Python example
3 | 3-Tensor (cube of numbers) | `t = [[[2], [4], [6]], [[8], [10], [12]], [[14], [16], [18]]]`
n | n-Tensor (you get the idea) | `....`
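The notion of rank in the table can be illustrated with a small helper over nested Python lists (assuming non-empty lists of uniform nesting depth):

```python
# Rank = number of dimensions. Each level of (non-empty, uniform)
# list nesting adds one dimension; anything else counts as a scalar.
def rank(t):
    if not isinstance(t, list):
        return 0
    return 1 + rank(t[0])

s = 483                                  # rank 0: scalar
v = [1.1, 2.2, 3.3]                      # rank 1: vector
m = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]    # rank 2: matrix
print(rank(s), rank(v), rank(m))  # -> 0 1 2
```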
-## Shape
+## Shape <a class="md-anchor" id="AUTOGENERATED-shape"></a>
The TensorFlow documentation uses three notational conventions to describe
tensor dimensionality: rank, shape, and dimension number. The following table
@@ -45,7 +45,7 @@ n | [D0, D1, ... Dn] | n-D | A tensor with shape [D0, D1, ... Dn].
Shapes can be represented via Python lists / tuples of ints, or with the
[`TensorShape` class](../api_docs/python/framework.md#TensorShape).
-## Data types
+## Data types <a class="md-anchor" id="AUTOGENERATED-data-types"></a>
In addition to dimensionality, Tensors have a data type. You can assign any one
of the following data types to a tensor:
diff --git a/tensorflow/g3doc/resources/faq.md b/tensorflow/g3doc/resources/faq.md
index a2b9a58e08..949806acee 100644
--- a/tensorflow/g3doc/resources/faq.md
+++ b/tensorflow/g3doc/resources/faq.md
@@ -1,4 +1,4 @@
-# Frequently Asked Questions
+# Frequently Asked Questions <a class="md-anchor" id="AUTOGENERATED-frequently-asked-questions"></a>
This document provides answers to some of the frequently asked questions about
TensorFlow. If you have a question that is not covered here, you might find an
@@ -6,6 +6,7 @@ answer on one of the TensorFlow [community resources](index.md).
<!-- TOC-BEGIN This section is generated by neural network: DO NOT EDIT! -->
## Contents
+### [Frequently Asked Questions](#AUTOGENERATED-frequently-asked-questions)
* [Building a TensorFlow graph](#AUTOGENERATED-building-a-tensorflow-graph)
* [Running a TensorFlow computation](#AUTOGENERATED-running-a-tensorflow-computation)
* [Variables](#AUTOGENERATED-variables)
@@ -17,12 +18,12 @@ answer on one of the TensorFlow [community resources](index.md).
<!-- TOC-END This section was generated by neural network, THANKS FOR READING! -->
-## Building a TensorFlow graph <div class="md-anchor" id="AUTOGENERATED-building-a-tensorflow-graph">{#AUTOGENERATED-building-a-tensorflow-graph}</div>
+## Building a TensorFlow graph <a class="md-anchor" id="AUTOGENERATED-building-a-tensorflow-graph"></a>
See also the
[API documentation on building graphs](../api_docs/python/framework.md).
-#### Why does `c = tf.matmul(a, b)` not execute the matrix multiplication immediately?
+#### Why does `c = tf.matmul(a, b)` not execute the matrix multiplication immediately? <a class="md-anchor" id="AUTOGENERATED-why-does--c---tf.matmul-a--b---not-execute-the-matrix-multiplication-immediately-"></a>
In the TensorFlow Python API, `a`, `b`, and `c` are
[`Tensor`](../api_docs/python/framework.md#Tensor) objects. A `Tensor` object is
@@ -35,12 +36,12 @@ a dataflow graph. You then offload the computation of the entire dataflow graph
whole computation much more efficiently than executing the operations
one-by-one.
-#### How are devices named?
+#### How are devices named? <a class="md-anchor" id="AUTOGENERATED-how-are-devices-named-"></a>
The supported device names are `"/device:CPU:0"` (or `"/cpu:0"`) for the CPU
device, and `"/device:GPU:i"` (or `"/gpu:i"`) for the *i*th GPU device.
-#### How do I place operations on a particular device?
+#### How do I place operations on a particular device? <a class="md-anchor" id="AUTOGENERATED-how-do-i-place-operations-on-a-particular-device-"></a>
To place a group of operations on a device, create them within a
[`with tf.device(name):`](../api_docs/python/framework.md#device) context. See
@@ -50,17 +51,17 @@ TensorFlow assigns operations to devices, and the
[CIFAR-10 tutorial](../tutorials/deep_cnn/index.md) for an example model that
uses multiple GPUs.
-#### What are the different types of tensors that are available?
+#### What are the different types of tensors that are available? <a class="md-anchor" id="AUTOGENERATED-what-are-the-different-types-of-tensors-that-are-available-"></a>
TensorFlow supports a variety of different data types and tensor shapes. See the
[ranks, shapes, and types reference](dims_types.md) for more details.
-## Running a TensorFlow computation <div class="md-anchor" id="AUTOGENERATED-running-a-tensorflow-computation">{#AUTOGENERATED-running-a-tensorflow-computation}</div>
+## Running a TensorFlow computation <a class="md-anchor" id="AUTOGENERATED-running-a-tensorflow-computation"></a>
See also the
[API documentation on running graphs](../api_docs/python/client.md).
-#### What's the deal with feeding and placeholders?
+#### What's the deal with feeding and placeholders? <a class="md-anchor" id="AUTOGENERATED-what-s-the-deal-with-feeding-and-placeholders-"></a>
Feeding is a mechanism in the TensorFlow Session API that allows you to
substitute different values for one or more tensors at run time. The `feed_dict`
@@ -76,7 +77,7 @@ optionally allows you to constrain their shape as well. See the
example of how placeholders and feeding can be used to provide the training data
for a neural network.
-#### What is the difference between `Session.run()` and `Tensor.eval()`?
+#### What is the difference between `Session.run()` and `Tensor.eval()`? <a class="md-anchor" id="AUTOGENERATED-what-is-the-difference-between--session.run----and--tensor.eval----"></a>
If `t` is a [`Tensor`](../api_docs/python/framework.md#Tensor) object,
[`t.eval()`](../api_docs/python/framework.md#Tensor.eval) is shorthand for
@@ -103,7 +104,7 @@ the `with` block. The context manager approach can lead to more concise code for
simple use cases (like unit tests); if your code deals with multiple graphs and
sessions, it may be more straightforward to make explicit calls to `Session.run()`.
-#### Do Sessions have a lifetime? What about intermediate tensors?
+#### Do Sessions have a lifetime? What about intermediate tensors? <a class="md-anchor" id="AUTOGENERATED-do-sessions-have-a-lifetime--what-about-intermediate-tensors-"></a>
Sessions can own resources, such as
[variables](../api_docs/python/state_ops.md#Variable),
@@ -117,13 +118,13 @@ The intermediate tensors that are created as part of a call to
[`Session.run()`](../api_docs/python/client.md) will be freed at or before the
end of the call.
-#### Can I run distributed training on multiple computers?
+#### Can I run distributed training on multiple computers? <a class="md-anchor" id="AUTOGENERATED-can-i-run-distributed-training-on-multiple-computers-"></a>
The initial open-source release of TensorFlow supports multiple devices (CPUs
and GPUs) in a single computer. We are working on a distributed version as well:
if you are interested, please let us know so we can prioritize accordingly.
-#### Does the runtime parallelize parts of graph execution?
+#### Does the runtime parallelize parts of graph execution? <a class="md-anchor" id="AUTOGENERATED-does-the-runtime-parallelize-parts-of-graph-execution-"></a>
The TensorFlow runtime parallelizes graph execution across many different
dimensions:
@@ -138,7 +139,7 @@ dimensions:
enables the runtime to get higher throughput, if a single step does not use
all of the resources in your computer.
-#### Which client languages are supported in TensorFlow?
+#### Which client languages are supported in TensorFlow? <a class="md-anchor" id="AUTOGENERATED-which-client-languages-are-supported-in-tensorflow-"></a>
TensorFlow is designed to support multiple client languages. Currently, the
best-supported client language is [Python](../api_docs/python/index.md). The
@@ -152,7 +153,7 @@ interest. TensorFlow has a
that makes it easy to build a client in many different languages. We invite
contributions of new language bindings.
-#### Does TensorFlow make use of all the devices (GPUs and CPUs) available on my machine?
+#### Does TensorFlow make use of all the devices (GPUs and CPUs) available on my machine? <a class="md-anchor" id="AUTOGENERATED-does-tensorflow-make-use-of-all-the-devices--gpus-and-cpus--available-on-my-machine-"></a>
TensorFlow supports multiple GPUs and CPUs. See the how-to documentation on
[using GPUs with TensorFlow](../how_tos/using_gpu/index.md) for details of how
@@ -163,7 +164,7 @@ uses multiple GPUs.
Note that TensorFlow only uses GPU devices with a compute capability greater
than 3.5.
-#### Why does `Session.run()` hang when using a reader or a queue?
+#### Why does `Session.run()` hang when using a reader or a queue? <a class="md-anchor" id="AUTOGENERATED-why-does--session.run----hang-when-using-a-reader-or-a-queue-"></a>
The [reader](../api_docs/python/io_ops.md#ReaderBase) and
[queue](../api_docs/python/io_ops.md#QueueBase) classes provide special operations that
@@ -175,20 +176,20 @@ for
[using `QueueRunner` objects to drive queues and readers](../how_tos/reading_data/index.md#QueueRunners)
for more information on how to use them.
-## Variables <div class="md-anchor" id="AUTOGENERATED-variables">{#AUTOGENERATED-variables}</div>
+## Variables <a class="md-anchor" id="AUTOGENERATED-variables"></a>
See also the how-to documentation on [variables](../how_tos/variables/index.md)
and [variable scopes](../how_tos/variable_scope/index.md), and
[the API documentation for variables](../api_docs/python/state_ops.md).
-#### What is the lifetime of a variable?
+#### What is the lifetime of a variable? <a class="md-anchor" id="AUTOGENERATED-what-is-the-lifetime-of-a-variable-"></a>
A variable is created when you first run the
[`tf.Variable.initializer`](../api_docs/python/state_ops.md#Variable.initializer)
operation for that variable in a session. It is destroyed when that
[`session is closed`](../api_docs/python/client.md#Session.close).
-#### How do variables behave when they are concurrently accessed?
+#### How do variables behave when they are concurrently accessed? <a class="md-anchor" id="AUTOGENERATED-how-do-variables-behave-when-they-are-concurrently-accessed-"></a>
Variables allow concurrent read and write operations. The value read from a
variable may change if it is concurrently updated. By default, concurrent assignment
@@ -196,12 +197,12 @@ operations to a variable are allowed to run with no mutual exclusion. To acquire
a lock when assigning to a variable, pass `use_locking=True` to
[`Variable.assign()`](../api_docs/python/state_ops.md#Variable.assign).
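The effect of locking a read-modify-write cycle can be shown with plain Python threads; this is an analogy for what `use_locking=True` buys you, not TensorFlow code:

```python
# Why a lock matters for concurrent read-modify-write: with the lock,
# no increment is ever lost, so the final count is exact.
import threading

value = 0
lock = threading.Lock()

def assign_add(n):
    global value
    for _ in range(n):
        with lock:          # serialize the read-modify-write
            value += 1

threads = [threading.Thread(target=assign_add, args=(10000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(value)  # -> 40000: every increment survives
```

Without the lock, interleaved reads and writes could silently drop updates, which is exactly the hazard the passage above describes.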
-## Tensor shapes <div class="md-anchor" id="AUTOGENERATED-tensor-shapes">{#AUTOGENERATED-tensor-shapes}</div>
+## Tensor shapes <a class="md-anchor" id="AUTOGENERATED-tensor-shapes"></a>
See also the
[`TensorShape` API documentation](../api_docs/python/framework.md#TensorShape).
-#### How can I determine the shape of a tensor in Python?
+#### How can I determine the shape of a tensor in Python? <a class="md-anchor" id="AUTOGENERATED-how-can-i-determine-the-shape-of-a-tensor-in-python-"></a>
In TensorFlow, a tensor has both a static (inferred) shape and a dynamic (true)
shape. The static shape can be read using the
@@ -212,7 +213,7 @@ tensor, and may be
shape is not fully defined, the dynamic shape of a `Tensor` `t` can be
determined by evaluating [`tf.shape(t)`](../api_docs/python/array_ops.md#shape).
-#### What is the difference between `x.set_shape()` and `x = tf.reshape(x)`?
+#### What is the difference between `x.set_shape()` and `x = tf.reshape(x)`? <a class="md-anchor" id="AUTOGENERATED-what-is-the-difference-between--x.set_shape----and--x---tf.reshape-x---"></a>
The [`tf.Tensor.set_shape()`](../api_docs/python/framework.md) method updates
the static shape of a `Tensor` object, and it is typically used to provide
@@ -222,7 +223,7 @@ change the dynamic shape of the tensor.
The [`tf.reshape()`](../api_docs/python/array_ops.md#reshape) operation creates
a new tensor with a different dynamic shape.
-#### How do I build a graph that works with variable batch sizes?
+#### How do I build a graph that works with variable batch sizes? <a class="md-anchor" id="AUTOGENERATED-how-do-i-build-a-graph-that-works-with-variable-batch-sizes-"></a>
It is often useful to build a graph that works with variable batch sizes, for
example so that the same code can be used for (mini-)batch training, and
@@ -248,24 +249,24 @@ to encode the batch size as a Python constant, but instead to use a symbolic
[`tf.placeholder(..., shape=[None, ...])`](../api_docs/python/io_ops.md#placeholder). The
`None` element of the shape corresponds to a variable-sized dimension.
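The same pattern in plain Python: derive the batch size from the input instead of baking it in as a constant (a conceptual sketch, not TensorFlow code):

```python
# Write the computation against len(batch) rather than a hard-coded
# batch size, so any batch size works, including a batch of one.
def mean_per_feature(batch):
    # batch: list of equal-length rows; batch size is len(batch).
    n = len(batch)
    return [sum(col) / n for col in zip(*batch)]

print(mean_per_feature([[1.0, 2.0], [3.0, 4.0]]))  # -> [2.0, 3.0]
print(mean_per_feature([[1.0, 2.0]]))              # -> [1.0, 2.0]
```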
-## TensorBoard <div class="md-anchor" id="AUTOGENERATED-tensorboard">{#AUTOGENERATED-tensorboard}</div>
+## TensorBoard <a class="md-anchor" id="AUTOGENERATED-tensorboard"></a>
See also the
[how-to documentation on TensorBoard](../how_tos/graph_viz/index.md).
-#### What is the simplest way to send data to tensorboard? # TODO(danmane)
+#### What is the simplest way to send data to tensorboard? # TODO(danmane) <a class="md-anchor" id="AUTOGENERATED-what-is-the-simplest-way-to-send-data-to-tensorboard----todo-danmane-"></a>
Add summary_ops to your TensorFlow graph, and use a SummaryWriter to write all
of these summaries to a log directory. Then, start up TensorBoard using
<SOME_COMMAND> and pass the --logdir flag so that it points to your
log directory. For more details, see <YET_UNWRITTEN_TENSORBOARD_TUTORIAL>.
-## Extending TensorFlow <div class="md-anchor" id="AUTOGENERATED-extending-tensorflow">{#AUTOGENERATED-extending-tensorflow}</div>
+## Extending TensorFlow <a class="md-anchor" id="AUTOGENERATED-extending-tensorflow"></a>
See also the how-to documentation for
[adding a new operation to TensorFlow](../how_tos/adding_an_op/index.md).
-#### My data is in a custom format. How do I read it using TensorFlow?
+#### My data is in a custom format. How do I read it using TensorFlow? <a class="md-anchor" id="AUTOGENERATED-my-data-is-in-a-custom-format.-how-do-i-read-it-using-tensorflow-"></a>
There are two main options for dealing with data in a custom format.
@@ -283,7 +284,7 @@ data format. The
[guide to handling new data formats](../how_tos/new_data_formats/index.md) has
more information about the steps for doing this.
-#### How do I define an operation that takes a variable number of inputs?
+#### How do I define an operation that takes a variable number of inputs? <a class="md-anchor" id="AUTOGENERATED-how-do-i-define-an-operation-that-takes-a-variable-number-of-inputs-"></a>
The TensorFlow op registration mechanism allows you to define inputs that are a
single tensor, a list of tensors with the same type (for example when adding
@@ -293,15 +294,15 @@ how-to documentation for
[adding an op with a list of inputs or outputs](../how_tos/adding_an_op/index.md#list-input-output)
for more details of how to define these different input types.
-## Miscellaneous <div class="md-anchor" id="AUTOGENERATED-miscellaneous">{#AUTOGENERATED-miscellaneous}</div>
+## Miscellaneous <a class="md-anchor" id="AUTOGENERATED-miscellaneous"></a>
-#### Does TensorFlow work with Python 3?
+#### Does TensorFlow work with Python 3? <a class="md-anchor" id="AUTOGENERATED-does-tensorflow-work-with-python-3-"></a>
We have only tested TensorFlow using Python 2.7. We are aware of some changes
that will be required for Python 3 compatibility, and welcome contributions
towards this effort.
-#### What is TensorFlow's coding style convention?
+#### What is TensorFlow's coding style convention? <a class="md-anchor" id="AUTOGENERATED-what-is-tensorflow-s-coding-style-convention-"></a>
The TensorFlow Python API adheres to the
[PEP8](https://www.python.org/dev/peps/pep-0008/) conventions.<sup>*</sup> In
diff --git a/tensorflow/g3doc/resources/glossary.md b/tensorflow/g3doc/resources/glossary.md
index 2e7823952f..e344d21a0c 100644
--- a/tensorflow/g3doc/resources/glossary.md
+++ b/tensorflow/g3doc/resources/glossary.md
@@ -1,4 +1,4 @@
-# Glossary
+# Glossary <a class="md-anchor" id="AUTOGENERATED-glossary"></a>
TODO(someone): Fix several broken links in Glossary
diff --git a/tensorflow/g3doc/resources/index.md b/tensorflow/g3doc/resources/index.md
index 0519c97b53..3cfeb87e92 100644
--- a/tensorflow/g3doc/resources/index.md
+++ b/tensorflow/g3doc/resources/index.md
@@ -1,14 +1,14 @@
-# Additional Resources
+# Additional Resources <a class="md-anchor" id="AUTOGENERATED-additional-resources"></a>
-## TensorFlow WhitePaper
+## TensorFlow WhitePaper <a class="md-anchor" id="AUTOGENERATED-tensorflow-whitepaper"></a>
Additional details about the TensorFlow programming model and the underlying
implementation can be found in our white paper:
* [TensorFlow: Large-scale machine learning on heterogeneous systems](../extras/tensorflow-whitepaper2015.pdf)
-### Citation
+### Citation <a class="md-anchor" id="AUTOGENERATED-citation"></a>
If you use TensorFlow in your research and would like to cite the TensorFlow
system, we suggest you cite the paper above.
@@ -16,20 +16,20 @@ You can use this [BibTeX entry](bib.md). As the project progresses, we
may update the suggested citation with new papers.
-## Community
+## Community <a class="md-anchor" id="AUTOGENERATED-community"></a>
-### Discuss
+### Discuss <a class="md-anchor" id="AUTOGENERATED-discuss"></a>
* GitHub: <https://github.com/tensorflow/tensorflow>
* Stack Overflow: <https://stackoverflow.com/questions/tagged/tensorflow>
* [TensorFlow discuss mailing list](
- https://groups.google.com/forum/#!forum/tensorflow-discuss)
+ https://groups.google.com/a/tensorflow.org/forum/#!forum/discuss)
-### Report Issues
+### Report Issues <a class="md-anchor" id="AUTOGENERATED-report-issues"></a>
* [TensorFlow issues](https://github.com/tensorflow/tensorflow/issues)
-### Development
+### Development <a class="md-anchor" id="AUTOGENERATED-development"></a>
* If you are interested in contributing to TensorFlow please
[review the contributing guide](
diff --git a/tensorflow/g3doc/resources/uses.md b/tensorflow/g3doc/resources/uses.md
index cc212886c5..fa67a58163 100644
--- a/tensorflow/g3doc/resources/uses.md
+++ b/tensorflow/g3doc/resources/uses.md
@@ -1,4 +1,4 @@
-# Example Uses
+# Example Uses <a class="md-anchor" id="AUTOGENERATED-example-uses"></a>
This page describes some of the current uses of the TensorFlow system.
diff --git a/tensorflow/g3doc/tutorials/deep_cnn/index.md b/tensorflow/g3doc/tutorials/deep_cnn/index.md
index 906093009e..be23e7ccaa 100644
--- a/tensorflow/g3doc/tutorials/deep_cnn/index.md
+++ b/tensorflow/g3doc/tutorials/deep_cnn/index.md
@@ -1,9 +1,9 @@
-# Convolutional Neural Networks
+# Convolutional Neural Networks <a class="md-anchor" id="AUTOGENERATED-convolutional-neural-networks"></a>
**NOTE:** This tutorial is intended for *advanced* users of TensorFlow
and assumes expertise and experience in machine learning.
-## Overview
+## Overview <a class="md-anchor" id="AUTOGENERATED-overview"></a>
CIFAR-10 classification is a common benchmark problem in machine learning. The
problem is to classify RGB 32x32 pixel images across 10 categories:
@@ -15,7 +15,7 @@ For more details refer to the [CIFAR-10 page](http://www.cs.toronto.edu/~kriz/ci
and a [Tech Report](http://www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdf)
by Alex Krizhevsky.
-### Goals
+### Goals <a class="md-anchor" id="AUTOGENERATED-goals"></a>
The goal of this tutorial is to build a relatively small convolutional neural
network (CNN) for recognizing images. In the process this tutorial:
@@ -29,7 +29,7 @@ exercise much of TensorFlow's ability to scale to large models. At the same
time, the model is small enough to train fast in order to test new ideas and
experiments.
-### Highlights of the Tutorial
+### Highlights of the Tutorial <a class="md-anchor" id="AUTOGENERATED-highlights-of-the-tutorial"></a>
The CIFAR-10 tutorial demonstrates several important constructs for
designing larger and more sophisticated models in TensorFlow:
@@ -60,7 +60,7 @@ We also provide a multi-GPU version of the model which demonstrates:
We hope that this tutorial provides a launch point for building larger CNNs for
vision tasks on TensorFlow.
-### Model Architecture
+### Model Architecture <a class="md-anchor" id="AUTOGENERATED-model-architecture"></a>
The model in this CIFAR-10 tutorial is a multi-layer architecture consisting of
alternating convolutions and nonlinearities. These layers are followed by fully
@@ -74,7 +74,7 @@ of training time on a GPU. Please see [below](#evaluating-a-model) and the code
for details. It consists of 1,068,298 learnable parameters and requires about
19.5M multiply-add operations to compute inference on a single image.
-## Code Organization
+## Code Organization <a class="md-anchor" id="AUTOGENERATED-code-organization"></a>
The code for this tutorial resides in
[`tensorflow/models/image/cifar10/`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/models/image/cifar10/).
@@ -88,7 +88,7 @@ File | Purpose
[`cifar10_eval.py`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/models/image/cifar10/cifar10_eval.py) | Evaluates the predictive performance of a CIFAR-10 model.
-## CIFAR-10 Model
+## CIFAR-10 Model <a class="md-anchor" id="AUTOGENERATED-cifar-10-model"></a>
The CIFAR-10 network is largely contained in
[`cifar10.py`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/models/image/cifar10/cifar10.py).
@@ -105,7 +105,7 @@ adds operations that perform inference, i.e. classification, on supplied images.
add operations that compute the loss,
gradients, variable updates and visualization summaries.
-### Model Inputs
+### Model Inputs <a class="md-anchor" id="AUTOGENERATED-model-inputs"></a>
The input part of the model is built by the functions `inputs()` and
`distorted_inputs()` which read images from the CIFAR-10 binary data files.
@@ -143,7 +143,7 @@ processing time. To prevent these operations from slowing down training, we run
them inside 16 separate threads which continuously fill a TensorFlow
[queue](../../api_docs/python/io_ops.md#shuffle_batch).
-### Model Prediction
+### Model Prediction <a class="md-anchor" id="AUTOGENERATED-model-prediction"></a>
The prediction part of the model is constructed by the `inference()` function
which adds operations to compute the *logits* of the predictions. That part of
@@ -181,7 +181,7 @@ the CIFAR-10 model specified in
layers are locally connected and not fully connected. Try editing the
architecture to exactly replicate that fully connected model.
-### Model Training
+### Model Training <a class="md-anchor" id="AUTOGENERATED-model-training"></a>
The usual method for training a network to perform N-way classification is
[multinomial logistic regression](https://en.wikipedia.org/wiki/Multinomial_logistic_regression),
@@ -199,7 +199,7 @@ loss and all these weight decay terms, as returned by the `loss()` function.
We visualize it in TensorBoard with a [scalar_summary](../../api_docs/python/train.md?#scalar_summary):
![CIFAR-10 Loss](./cifar_loss.png "CIFAR-10 Total Loss")
-###### [View this TensorBoard live! (Chrome/FF)](/tensorboard/cifar.html)
+###### [View this TensorBoard live! (Chrome/FF)](/tensorboard/cifar.html) <a class="md-anchor" id="AUTOGENERATED--view-this-tensorboard-live---chrome-ff----tensorboard-cifar.html-"></a>
We train the model using standard
[gradient descent](https://en.wikipedia.org/wiki/Gradient_descent)
@@ -209,7 +209,7 @@ with a learning rate that
over time.
![CIFAR-10 Learning Rate Decay](./cifar_lr_decay.png "CIFAR-10 Learning Rate Decay")
-###### [View this TensorBoard live! (Chrome/FF)](/tensorboard/cifar.html)
+###### [View this TensorBoard live! (Chrome/FF)](/tensorboard/cifar.html) <a class="md-anchor" id="AUTOGENERATED--view-this-tensorboard-live---chrome-ff----tensorboard-cifar.html-"></a>
The `train()` function adds the operations needed to minimize the objective by
calculating the gradient and updating the learned variables (see
@@ -217,7 +217,7 @@ calculating the gradient and updating the learned variables (see
for details). It returns an operation that executes all of the calculations
needed to train and update the model for one batch of images.
-## Launching and Training the Model
+## Launching and Training the Model <a class="md-anchor" id="AUTOGENERATED-launching-and-training-the-model"></a>
We have built the model; let's now launch it and run the training operation with
the script `cifar10_train.py`.
@@ -302,7 +302,7 @@ values. See how the scripts use
[ExponentialMovingAverage](../../api_docs/python/train.md#ExponentialMovingAverage)
for this purpose.
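The update behind such a moving average is simple; here is a minimal sketch (assuming the standard `shadow = decay * shadow + (1 - decay) * value` rule, not the tutorial's actual code):

```python
def moving_average(values, decay=0.999):
    """Apply shadow = decay * shadow + (1 - decay) * value over a sequence."""
    shadow = values[0]
    for v in values[1:]:
        shadow = decay * shadow + (1 - decay) * v
    return shadow

# With decay close to 1, the average changes slowly and smooths out noise.
smoothed = moving_average([1.0, 3.0, 2.0], decay=0.9)
```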
-## Evaluating a Model
+## Evaluating a Model <a class="md-anchor" id="AUTOGENERATED-evaluating-a-model"></a>
Let us now evaluate how well the trained model performs on a hold-out data set.
The model is evaluated by the script `cifar10_eval.py`. It constructs the model
@@ -346,7 +346,7 @@ the averaged parameters for the model and verify that the predictive performance
drops.
-## Training a Model Using Multiple GPU Cards
+## Training a Model Using Multiple GPU Cards <a class="md-anchor" id="AUTOGENERATED-training-a-model-using-multiple-gpu-cards"></a>
Modern workstations may contain multiple GPUs for scientific computation.
TensorFlow can leverage this environment to run the training operation
@@ -390,7 +390,7 @@ The GPUs are synchronized in operation. All gradients are accumulated from
the GPUs and averaged (see green box). The model parameters are updated with
the gradients averaged across all model replicas.
-### Placing Variables and Operations on Devices
+### Placing Variables and Operations on Devices <a class="md-anchor" id="AUTOGENERATED-placing-variables-and-operations-on-devices"></a>
Placing operations and variables on devices requires some special
abstractions.
@@ -414,7 +414,7 @@ All variables are pinned to the CPU and accessed via
in order to share them in a multi-GPU version.
See how-to on [Sharing Variables](../../how_tos/variable_scope/index.md).
-### Launching and Training the Model on Multiple GPU cards
+### Launching and Training the Model on Multiple GPU cards <a class="md-anchor" id="AUTOGENERATED-launching-and-training-the-model-on-multiple-gpu-cards"></a>
If you have several GPU cards installed on your machine, you can use them to
train the model faster with the `cifar10_multi_gpu_train.py` script. It is a
@@ -446,7 +446,7 @@ you ask for more.
run on a batch size of 128. Try running `cifar10_multi_gpu_train.py` on 2 GPUs
with a batch size of 64 and compare the training speed.
-## Next Steps
+## Next Steps <a class="md-anchor" id="AUTOGENERATED-next-steps"></a>
[Congratulations!](https://www.youtube.com/watch?v=9bZkp7q19f0) You have
completed the CIFAR-10 tutorial.
diff --git a/tensorflow/g3doc/tutorials/index.md b/tensorflow/g3doc/tutorials/index.md
index 202b87c73c..4ee9ad0497 100644
--- a/tensorflow/g3doc/tutorials/index.md
+++ b/tensorflow/g3doc/tutorials/index.md
@@ -1,7 +1,7 @@
-# Overview
+# Overview <a class="md-anchor" id="AUTOGENERATED-overview"></a>
-## MNIST For ML Beginners
+## MNIST For ML Beginners <a class="md-anchor" id="AUTOGENERATED-mnist-for-ml-beginners"></a>
If you're new to machine learning, we recommend starting here. You'll learn
about a classic problem, handwritten digit classification (MNIST), and get a
@@ -10,7 +10,7 @@ gentle introduction to multiclass classification.
[View Tutorial](mnist/beginners/index.md)
-## Deep MNIST for Experts
+## Deep MNIST for Experts <a class="md-anchor" id="AUTOGENERATED-deep-mnist-for-experts"></a>
If you're already familiar with other deep learning software packages, and are
already familiar with MNIST, this tutorial will give you a very brief primer on
@@ -19,7 +19,7 @@ TensorFlow.
[View Tutorial](mnist/pros/index.md)
-## TensorFlow Mechanics 101
+## TensorFlow Mechanics 101 <a class="md-anchor" id="AUTOGENERATED-tensorflow-mechanics-101"></a>
This is a technical tutorial, where we walk you through the details of using
TensorFlow infrastructure to train models at scale. We again use MNIST as the
@@ -28,7 +28,7 @@ example.
[View Tutorial](mnist/tf/index.md)
-## Convolutional Neural Networks
+## Convolutional Neural Networks <a class="md-anchor" id="AUTOGENERATED-convolutional-neural-networks"></a>
An introduction to convolutional neural networks using the CIFAR-10 data set.
Convolutional neural nets are particularly tailored to images, since they
@@ -38,7 +38,7 @@ representations of visual content.
[View Tutorial](deep_cnn/index.md)
-## Vector Representations of Words
+## Vector Representations of Words <a class="md-anchor" id="AUTOGENERATED-vector-representations-of-words"></a>
This tutorial motivates why it is useful to learn to represent words as vectors
(called *word embeddings*). It introduces the word2vec model as an efficient
@@ -49,7 +49,7 @@ embeddings).
[View Tutorial](word2vec/index.md)
-## Recurrent Neural Networks
+## Recurrent Neural Networks <a class="md-anchor" id="AUTOGENERATED-recurrent-neural-networks"></a>
An introduction to RNNs, wherein we train an LSTM network to predict the next
word in an English sentence. (A task sometimes called language modeling.)
@@ -57,7 +57,7 @@ word in an English sentence. (A task sometimes called language modeling.)
[View Tutorial](recurrent/index.md)
-## Sequence-to-Sequence Models
+## Sequence-to-Sequence Models <a class="md-anchor" id="AUTOGENERATED-sequence-to-sequence-models"></a>
A follow-on to the RNN tutorial, where we assemble a sequence-to-sequence model
for machine translation. You will learn to build your own English-to-French
@@ -66,7 +66,7 @@ translator, entirely machine learned, end-to-end.
[View Tutorial](seq2seq/index.md)
-## Mandelbrot Set
+## Mandelbrot Set <a class="md-anchor" id="AUTOGENERATED-mandelbrot-set"></a>
TensorFlow can be used for computation that has nothing to do with machine
learning. Here's a naive implementation of Mandelbrot set visualization.
@@ -74,7 +74,7 @@ learning. Here's a naive implementation of Mandelbrot set visualization.
[View Tutorial](mandelbrot/index.md)
-## Partial Differential Equations
+## Partial Differential Equations <a class="md-anchor" id="AUTOGENERATED-partial-differential-equations"></a>
As another example of non-machine learning computation, we offer an example of
a naive PDE simulation of raindrops landing on a pond.
@@ -82,7 +82,7 @@ a naive PDE simulation of raindrops landing on a pond.
[View Tutorial](pdes/index.md)
-## MNIST Data Download
+## MNIST Data Download <a class="md-anchor" id="AUTOGENERATED-mnist-data-download"></a>
Details about downloading the MNIST handwritten digits data set. Exciting
stuff.
@@ -90,7 +90,7 @@ stuff.
[View Tutorial](mnist/download/index.md)
-## Visual Object Recognition
+## Visual Object Recognition <a class="md-anchor" id="AUTOGENERATED-visual-object-recognition"></a>
We will be releasing our state-of-the-art Inception object recognition model,
complete and already trained.
@@ -98,7 +98,7 @@ complete and already trained.
COMING SOON
-## Deep Dream Visual Hallucinations
+## Deep Dream Visual Hallucinations <a class="md-anchor" id="AUTOGENERATED-deep-dream-visual-hallucinations"></a>
Building on the Inception recognition model, we will release a TensorFlow
version of the [Deep Dream](https://github.com/google/deepdream) neural network
diff --git a/tensorflow/g3doc/tutorials/mandelbrot/index.md b/tensorflow/g3doc/tutorials/mandelbrot/index.md
index b3d5a185f9..fa06e6b882 100755
--- a/tensorflow/g3doc/tutorials/mandelbrot/index.md
+++ b/tensorflow/g3doc/tutorials/mandelbrot/index.md
@@ -1,4 +1,4 @@
-# Mandelbrot Set
+# Mandelbrot Set <a class="md-anchor" id="AUTOGENERATED-mandelbrot-set"></a>
```
#Import libraries for simulation
diff --git a/tensorflow/g3doc/tutorials/mnist/beginners/index.md b/tensorflow/g3doc/tutorials/mnist/beginners/index.md
index fff7484959..eddd4f324a 100644
--- a/tensorflow/g3doc/tutorials/mnist/beginners/index.md
+++ b/tensorflow/g3doc/tutorials/mnist/beginners/index.md
@@ -1,4 +1,4 @@
-# MNIST For ML Beginners
+# MNIST For ML Beginners <a class="md-anchor" id="AUTOGENERATED-mnist-for-ml-beginners"></a>
*This tutorial is intended for readers who are new to both machine learning and
TensorFlow. If you already
@@ -31,7 +31,7 @@ important to understand the ideas behind it: both how TensorFlow works and the
core machine learning concepts. Because of this, we are going to very carefully
work through the code.
-## The MNIST Data
+## The MNIST Data <a class="md-anchor" id="AUTOGENERATED-the-mnist-data"></a>
The MNIST data is hosted on
[Yann LeCun's website](http://yann.lecun.com/exdb/mnist/).
@@ -88,9 +88,9 @@ The corresponding labels in MNIST are numbers between 0 and 9, describing
which digit a given image is of.
For the purposes of this tutorial, we're going to want our labels as
as "one-hot vectors". A one-hot vector is a vector which is 0 in most
-dimensions, and 1 in a single dimension. In this case, the $$n$$th digit will be
-represented as a vector which is 1 in the $$n$$th dimensions. For example, 0
-would be $$[1,0,0,0,0,0,0,0,0,0,0]$$.
+dimensions, and 1 in a single dimension. In this case, the \(n\)th digit will be
+represented as a vector which is 1 in the \(n\)th dimension. For example, 0
+would be \([1,0,0,0,0,0,0,0,0,0]\).
Consequently, `mnist.train.labels` is a
`[60000, 10]` array of floats.
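As a quick sketch of the encoding in plain NumPy (a hypothetical helper, not part of the tutorial code):

```python
import numpy as np

def one_hot(digit, num_classes=10):
    """Return a vector that is 1 at position `digit` and 0 everywhere else."""
    vec = np.zeros(num_classes)
    vec[digit] = 1.0
    return vec

# The digit 0 becomes [1, 0, 0, 0, 0, 0, 0, 0, 0, 0].
label = one_hot(0)
```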
@@ -100,7 +100,7 @@ Consequently, `mnist.train.labels` is a
We're now ready to actually make our model!
-## Softmax Regressions
+## Softmax Regressions <a class="md-anchor" id="AUTOGENERATED-softmax-regressions"></a>
We know that every image in MNIST is a digit, whether it's a zero or a nine. We
want to be able to look at an image and give probabilities for it being each
@@ -131,14 +131,14 @@ weights.
We also add some extra evidence called a bias. Basically, we want to be able
to say that some things are more likely independent of the input. The result is
-that the evidence for a class $$i$$ given an input $$x$$ is:
+that the evidence for a class \(i\) given an input \(x\) is:
$$\text{evidence}_i = \sum_j W_{i,~ j} x_j + b_i$$
-where $$W_i$$ is the weights and $$b_i$$ is the bias for class $$i$$, and $$j$$
-is an index for summing over the pixels in our input image $$x$$. We then
+where \(W_i\) is the weights and \(b_i\) is the bias for class \(i\), and \(j\)
+is an index for summing over the pixels in our input image \(x\). We then
convert the evidence tallies into our predicted probabilities
-$$y$$ using the "softmax" function:
+\(y\) using the "softmax" function:
$$y = \text{softmax}(\text{evidence})$$
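To make the softmax step concrete, here is a minimal NumPy sketch (an illustration, not the tutorial's TensorFlow code):

```python
import numpy as np

def softmax(evidence):
    """Exponentiate each evidence score and normalize so the outputs sum to 1."""
    exps = np.exp(evidence - np.max(evidence))  # subtract max for numerical stability
    return exps / exps.sum()

probs = softmax(np.array([2.0, 1.0, 0.1]))  # larger evidence -> larger probability
```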
@@ -168,8 +168,8 @@ on it in Michael Nielsen's book, complete with an interactive visualization.)
You can picture our softmax regression as looking something like the following,
-although with a lot more $$x$$s. For each output, we compute a weighted sum of
-the $$x$$s, add a bias, and then apply softmax.
+although with a lot more \(x\)s. For each output, we compute a weighted sum of
+the \(x\)s, add a bias, and then apply softmax.
<div style="width:55%; margin:auto; margin-bottom:10px; margin-top:20px;">
<img style="width:100%" src="img/softmax-regression-scalargraph.png">
@@ -194,7 +194,7 @@ More compactly, we can just write:
$$y = \text{softmax}(Wx + b)$$
-## Implementing the Regression
+## Implementing the Regression <a class="md-anchor" id="AUTOGENERATED-implementing-the-regression"></a>
To do efficient numerical computing in Python, we typically use libraries like
@@ -261,7 +261,7 @@ y = tf.nn.softmax(tf.matmul(x,W) + b)
```
First, we multiply `x` by `W` with the expression `tf.matmul(x,W)`. This is
-flipped from when we multiplied them in our equation, where we had $$Wx$$, as a
+flipped from when we multiplied them in our equation, where we had \(Wx\), as a
small trick
to deal with `x` being a 2D tensor with multiple inputs. We then add `b`, and
finally apply `tf.nn.softmax`.
@@ -274,7 +274,7 @@ simulations. And once defined, our model can be run on different devices:
your computer's CPU, GPUs, and even phones!
-## Training
+## Training <a class="md-anchor" id="AUTOGENERATED-training"></a>
In order to train our model, we need to define what it means for the model to
be good. Well, actually, in machine learning we typically define what it means
@@ -288,7 +288,7 @@ from gambling to machine learning. It's defined:
$$H_{y'}(y) = -\sum_i y'_i \log(y_i)$$
-Where $$y$$ is our predicted probability distribution, and $$y'$$ is the true
+Where \(y\) is our predicted probability distribution, and \(y'\) is the true
distribution (the one-hot vector we'll input). In some rough sense, the
cross-entropy is measuring how inefficient our predictions are for describing
the truth. Going into more detail about cross-entropy is beyond the scope of
@@ -302,7 +302,7 @@ the correct answers:
y_ = tf.placeholder("float", [None,10])
```
-Then we can implement the cross-entropy, $$-\sum y'\log(y)$$:
+Then we can implement the cross-entropy, \(-\sum y'\log(y)\):
```python
cross_entropy = -tf.reduce_sum(y_*tf.log(y))
@@ -378,7 +378,7 @@ every time. Doing this is cheap and has much of the same benefit.
-## Evaluating Our Model
+## Evaluating Our Model <a class="md-anchor" id="AUTOGENERATED-evaluating-our-model"></a>
How well does our model do?
diff --git a/tensorflow/g3doc/tutorials/mnist/download/index.md b/tensorflow/g3doc/tutorials/mnist/download/index.md
index df6245df78..e985a2204d 100644
--- a/tensorflow/g3doc/tutorials/mnist/download/index.md
+++ b/tensorflow/g3doc/tutorials/mnist/download/index.md
@@ -1,11 +1,11 @@
-# MNIST Data Download
+# MNIST Data Download <a class="md-anchor" id="AUTOGENERATED-mnist-data-download"></a>
Code: [tensorflow/g3doc/tutorials/mnist/](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/g3doc/tutorials/mnist/)
The goal of this tutorial is to show how to download the dataset files required
for handwritten digit classification using the (classic) MNIST data set.
-## Tutorial Files
+## Tutorial Files <a class="md-anchor" id="AUTOGENERATED-tutorial-files"></a>
This tutorial references the following files:
@@ -13,7 +13,7 @@ File | Purpose
--- | ---
[`input_data.py`](../input_data.py) | The code to download the MNIST dataset for training and evaluation.
-## Prepare the Data
+## Prepare the Data <a class="md-anchor" id="AUTOGENERATED-prepare-the-data"></a>
MNIST is a classic problem in machine learning. The problem is to look at
greyscale 28x28 pixel images of handwritten digits and determine which digit
@@ -24,7 +24,7 @@ the image represents, for all the digits from zero to nine.
For more information, refer to [Yann LeCun's MNIST page](http://yann.lecun.com/exdb/mnist/)
or [Chris Olah's visualizations of MNIST](http://colah.github.io/posts/2014-10-Visualizing-MNIST/).
-### Download
+### Download <a class="md-anchor" id="AUTOGENERATED-download"></a>
[Yann LeCun's MNIST page](http://yann.lecun.com/exdb/mnist/)
also hosts the training and test data for download.
@@ -42,7 +42,7 @@ files are downloaded into a local data folder for training.
The folder name is specified in a flag variable at the top of the
`fully_connected_feed.py` file and may be changed to fit your needs.
-### Unpack and Reshape
+### Unpack and Reshape <a class="md-anchor" id="AUTOGENERATED-unpack-and-reshape"></a>
The files themselves are not in any standard image format and are manually
unpacked (following the instructions available at the website) by the
@@ -64,7 +64,7 @@ The label data is extracted into a 1d tensor of: `[image index]`
with the class identifier for each example as the value. For the training set
labels, this would then be of shape `[55000]`.
-### DataSet Object
+### DataSet Object <a class="md-anchor" id="AUTOGENERATED-dataset-object"></a>
The underlying code will download, unpack, and reshape images and labels for
the following datasets:
diff --git a/tensorflow/g3doc/tutorials/mnist/pros/index.md b/tensorflow/g3doc/tutorials/mnist/pros/index.md
index 34853ccf66..15892a957d 100644
--- a/tensorflow/g3doc/tutorials/mnist/pros/index.md
+++ b/tensorflow/g3doc/tutorials/mnist/pros/index.md
@@ -1,4 +1,4 @@
-# Deep MNIST for Experts
+# Deep MNIST for Experts <a class="md-anchor" id="AUTOGENERATED-deep-mnist-for-experts"></a>
TensorFlow is a powerful library for doing large-scale numerical computation.
One of the tasks at which it excels is implementing and training deep neural
@@ -11,12 +11,12 @@ dataset. If you don't have
a background with them, check out the
[introduction for beginners](../beginners/index.md).*
-## Setup
+## Setup <a class="md-anchor" id="AUTOGENERATED-setup"></a>
Before we create our model, we will first load the MNIST dataset, and start a
TensorFlow session.
-### Load MNIST Data
+### Load MNIST Data <a class="md-anchor" id="AUTOGENERATED-load-mnist-data"></a>
For your convenience, we've included [a script](../input_data.py) which
automatically downloads and imports the MNIST dataset. It will create a
@@ -32,7 +32,7 @@ testing sets as NumPy arrays.
It also provides a function for iterating through data minibatches, which we
will use below.
-### Start TensorFlow InteractiveSession
+### Start TensorFlow InteractiveSession <a class="md-anchor" id="AUTOGENERATED-start-tensorflow-interactivesession"></a>
TensorFlow relies on a highly efficient C++ backend to do its computation. The
connection to this backend is called a session. The common usage for TensorFlow
@@ -55,7 +55,7 @@ import tensorflow as tf
sess = tf.InteractiveSession()
```
-#### Computation Graph
+#### Computation Graph <a class="md-anchor" id="AUTOGENERATED-computation-graph"></a>
To do efficient numerical computing in Python, we typically use libraries like
NumPy that do expensive operations such as matrix multiplication outside Python,
@@ -80,13 +80,13 @@ section of
[Basic Usage](../../../get_started/basic_usage.md)
for more detail.
-## Build a Softmax Regression Model
+## Build a Softmax Regression Model <a class="md-anchor" id="AUTOGENERATED-build-a-softmax-regression-model"></a>
In this section we will build a softmax regression model with a single linear
layer. In the next section, we will extend this to the case of softmax
regression with a multilayer convolutional network.
-### Placeholders
+### Placeholders <a class="md-anchor" id="AUTOGENERATED-placeholders"></a>
We start building the computation graph by creating nodes for the
input images and target output classes.
@@ -110,7 +110,7 @@ which digit class the corresponding MNIST image belongs to.
The `shape` argument to `placeholder` is optional, but it allows TensorFlow
to automatically catch bugs stemming from inconsistent tensor shapes.
-### Variables
+### Variables <a class="md-anchor" id="AUTOGENERATED-variables"></a>
We now define the weights `W` and biases `b` for our model. We could imagine treating
these like additional inputs, but TensorFlow has an even better way to handle
@@ -139,7 +139,7 @@ done for all `Variables` at once.
sess.run(tf.initialize_all_variables())
```
-### Predicted Class and Cost Function
+### Predicted Class and Cost Function <a class="md-anchor" id="AUTOGENERATED-predicted-class-and-cost-function"></a>
We can now implement our regression model. It only takes one line!
We multiply the vectorized input images `x` by the weight matrix `W`, add
@@ -161,7 +161,7 @@ cross_entropy = -tf.reduce_sum(y_*tf.log(y))
Note that `tf.reduce_sum` sums across all images in the minibatch, as well as
all classes. We are computing the cross entropy for the entire minibatch.
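The effect of `tf.reduce_sum` here can be illustrated in plain NumPy (hypothetical numbers, not the tutorial's code):

```python
import numpy as np

# A toy minibatch: 2 examples, 3 classes, one-hot true labels.
y_true = np.array([[0.0, 1.0, 0.0],
                   [1.0, 0.0, 0.0]])
y_pred = np.array([[0.05, 0.90, 0.05],
                   [0.80, 0.10, 0.10]])

# -sum y' * log(y), summed over every image and every class in the minibatch.
cross_entropy = -np.sum(y_true * np.log(y_pred))
```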
-## Train the Model
+## Train the Model <a class="md-anchor" id="AUTOGENERATED-train-the-model"></a>
Now that we have defined our model and training cost function, it is
straightforward to train using TensorFlow.
@@ -198,7 +198,7 @@ Each training iteration we load 50 training examples. We then run the
Note that you can replace any tensor in your computation graph using `feed_dict`
-- it's not restricted to just `placeholder`s.
-### Evaluate the Model
+### Evaluate the Model <a class="md-anchor" id="AUTOGENERATED-evaluate-the-model"></a>
How well did our model do?
@@ -228,14 +228,14 @@ Finally, we can evaluate our accuracy on the test data. This should be about
print accuracy.eval(feed_dict={x: mnist.test.images, y_: mnist.test.labels})
```
-## Build a Multilayer Convolutional Network
+## Build a Multilayer Convolutional Network <a class="md-anchor" id="AUTOGENERATED-build-a-multilayer-convolutional-network"></a>
Getting 91% accuracy on MNIST is bad. It's almost embarrassingly bad. In this
section, we'll fix that, jumping from a very simple model to something moderately
sophisticated: a small convolutional neural network. This will get us to around
99.2% accuracy -- not state of the art, but respectable.
-### Weight Initialization
+### Weight Initialization <a class="md-anchor" id="AUTOGENERATED-weight-initialization"></a>
To create this model, we're going to need to create a lot of weights and biases.
One should generally initialize weights with a small amount of noise for
@@ -254,7 +254,7 @@ def bias_variable(shape):
return tf.Variable(initial)
```
-### Convolution and Pooling
+### Convolution and Pooling <a class="md-anchor" id="AUTOGENERATED-convolution-and-pooling"></a>
TensorFlow also gives us a lot of flexibility in convolution and pooling
operations. How do we handle the boundaries? What is our stride size?
@@ -273,7 +273,7 @@ def max_pool_2x2(x):
strides=[1, 2, 2, 1], padding='SAME')
```
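With `'SAME'` padding and stride 2, each pooling step halves the spatial size, rounding up (this sketch assumes the usual ceil(input / stride) rule for `'SAME'` padding):

```python
import math

def same_output_size(size, stride):
    """Spatial output size under 'SAME' padding: ceil(input / stride)."""
    return math.ceil(size / stride)

# Two rounds of 2x2 max pooling shrink a 28x28 MNIST image to 7x7.
after_pool1 = same_output_size(28, 2)           # 14
after_pool2 = same_output_size(after_pool1, 2)  # 7
```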
-### First Convolutional Layer
+### First Convolutional Layer <a class="md-anchor" id="AUTOGENERATED-first-convolutional-layer"></a>
We can now implement our first layer. It will consist of convolution, followed
by max pooling. The convolution will compute 32 features for each 5x5 patch.
@@ -303,7 +303,7 @@ h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
h_pool1 = max_pool_2x2(h_conv1)
```
-### Second Convolutional Layer
+### Second Convolutional Layer <a class="md-anchor" id="AUTOGENERATED-second-convolutional-layer"></a>
In order to build a deep network, we stack several layers of this type. The
second layer will have 64 features for each 5x5 patch.
@@ -316,7 +316,7 @@ h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
h_pool2 = max_pool_2x2(h_conv2)
```
-### Densely Connected Layer
+### Densely Connected Layer <a class="md-anchor" id="AUTOGENERATED-densely-connected-layer"></a>
Now that the image size has been reduced to 7x7, we add a fully-connected layer
with 1024 neurons to allow processing on the entire image. We reshape the tensor
@@ -331,7 +331,7 @@ h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*64])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)
```
-#### Dropout
+#### Dropout <a class="md-anchor" id="AUTOGENERATED-dropout"></a>
To reduce overfitting, we will apply dropout before the readout layer.
We create a `placeholder` for the probability that a neuron's output is kept
@@ -345,7 +345,7 @@ keep_prob = tf.placeholder("float")
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)
```
-### Readout Layer
+### Readout Layer <a class="md-anchor" id="AUTOGENERATED-readout-layer"></a>
Finally, we add a softmax layer, just like for the one layer softmax regression
above.
@@ -357,7 +357,7 @@ b_fc2 = bias_variable([10])
y_conv=tf.nn.softmax(tf.matmul(h_fc1_drop, W_fc2) + b_fc2)
```
-### Train and Evaluate the Model
+### Train and Evaluate the Model <a class="md-anchor" id="AUTOGENERATED-train-and-evaluate-the-model"></a>
How well does this model do?
To train and evaluate it we will use code that is nearly identical to that for
diff --git a/tensorflow/g3doc/tutorials/mnist/tf/index.md b/tensorflow/g3doc/tutorials/mnist/tf/index.md
index 5ce996af12..c1fc07e373 100644
--- a/tensorflow/g3doc/tutorials/mnist/tf/index.md
+++ b/tensorflow/g3doc/tutorials/mnist/tf/index.md
@@ -1,4 +1,4 @@
-# TensorFlow Mechanics 101
+# TensorFlow Mechanics 101 <a class="md-anchor" id="AUTOGENERATED-tensorflow-mechanics-101"></a>
Code: [tensorflow/g3doc/tutorials/mnist/](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/g3doc/tutorials/mnist/)
@@ -12,7 +12,7 @@ These tutorials are not intended for teaching Machine Learning in general.
Please ensure you have followed the instructions to [`Install TensorFlow`](../../../get_started/os_setup.md).
-## Tutorial Files
+## Tutorial Files <a class="md-anchor" id="AUTOGENERATED-tutorial-files"></a>
This tutorial references the following files:
@@ -25,7 +25,7 @@ Simply run the `fully_connected_feed.py` file directly to start training:
`python fully_connected_feed.py`
-## Prepare the Data
+## Prepare the Data <a class="md-anchor" id="AUTOGENERATED-prepare-the-data"></a>
MNIST is a classic problem in machine learning. The problem is to look at
greyscale 28x28 pixel images of handwritten digits and determine which digit
@@ -36,7 +36,7 @@ the image represents, for all the digits from zero to nine.
For more information, refer to [Yann LeCun's MNIST page](http://yann.lecun.com/exdb/mnist/)
or [Chris Olah's visualizations of MNIST](http://colah.github.io/posts/2014-10-Visualizing-MNIST/).
-### Download
+### Download <a class="md-anchor" id="AUTOGENERATED-download"></a>
At the top of the `run_training()` method, the `input_data.read_data_sets()`
function will ensure that the correct data has been downloaded to your local
@@ -59,7 +59,7 @@ Dataset | Purpose
For more information about the data, please read the [`Download`](../download/index.md)
tutorial.
-### Inputs and Placeholders
+### Inputs and Placeholders <a class="md-anchor" id="AUTOGENERATED-inputs-and-placeholders"></a>
The `placeholder_inputs()` function creates two [`tf.placeholder`](../../../api_docs/python/io_ops.md#placeholder)
ops that define the shape of the inputs, including the `batch_size`, to the
@@ -76,7 +76,7 @@ sliced to fit the `batch_size` for each step, matched with these placeholder
ops, and then passed into the `sess.run()` function using the `feed_dict`
parameter.
-## Build the Graph
+## Build the Graph <a class="md-anchor" id="AUTOGENERATED-build-the-graph"></a>
After creating placeholders for the data, the graph is built from the
`mnist.py` file according to a 3-stage pattern: `inference()`, `loss()`, and
@@ -93,7 +93,7 @@ and apply gradients.
<img style="width:100%" src="./mnist_subgraph.png">
</div>
-### Inference
+### Inference <a class="md-anchor" id="AUTOGENERATED-inference"></a>
The `inference()` function builds the graph as far as needed to
return the tensor that would contain the output predictions.
@@ -162,7 +162,7 @@ logits = tf.matmul(hidden2, weights) + biases
Finally, the `logits` tensor that will contain the output is returned.
-### Loss
+### Loss <a class="md-anchor" id="AUTOGENERATED-loss"></a>
The `loss()` function further builds the graph by adding the required loss
ops.
@@ -205,7 +205,7 @@ And the tensor that will then contain the loss value is returned.
> given what is actually true. For more information, read the blog post Visual
> Information Theory (http://colah.github.io/posts/2015-09-Visual-Information/)
-### Training
+### Training <a class="md-anchor" id="AUTOGENERATED-training"></a>
The `training()` function adds the operations needed to minimize the loss via
gradient descent.
@@ -241,12 +241,12 @@ train_op = optimizer.minimize(loss, global_step=global_step)
The tensor containing the outputs of the training op is returned.
-## Train the Model
+## Train the Model <a class="md-anchor" id="AUTOGENERATED-train-the-model"></a>
Once the graph is built, it can be iteratively trained and evaluated in a loop
controlled by the user code in `fully_connected_feed.py`.
-### The Graph
+### The Graph <a class="md-anchor" id="AUTOGENERATED-the-graph"></a>
At the top of the `run_training()` function is a Python `with` command that
indicates all of the built ops are to be associated with the default
@@ -263,7 +263,7 @@ Most TensorFlow uses will only need to rely on the single default graph.
More complicated uses with multiple graphs are possible, but beyond the scope of
this simple tutorial.
-### The Session
+### The Session <a class="md-anchor" id="AUTOGENERATED-the-session"></a>
Once all of the build preparation has been completed and all of the necessary
ops generated, a [`tf.Session`](../../../api_docs/python/client.md#Session)
@@ -297,7 +297,7 @@ op is a [`tf.group`](../../../api_docs/python/control_flow_ops.md#group)
that contains only the initializers for the variables. None of the rest of the
graph is run here; that happens in the training loop below.
-### Train Loop
+### Train Loop <a class="md-anchor" id="AUTOGENERATED-train-loop"></a>
After initializing the variables with the session, training may begin.
@@ -312,7 +312,7 @@ for step in xrange(max_steps):
However, this tutorial is slightly more complicated in that it must also slice
up the input data for each step to match the previously generated placeholders.
-#### Feed the Graph
+#### Feed the Graph <a class="md-anchor" id="AUTOGENERATED-feed-the-graph"></a>
For each step, the code will generate a feed dictionary that will contain the
set of examples on which to train for the step, keyed by the placeholder
@@ -339,7 +339,7 @@ feed_dict = {
This is passed into the `sess.run()` function's `feed_dict` parameter to provide
the input examples for this step of training.
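The slicing itself amounts to stepping a window over the dataset and keying the slices by the placeholder ops. A minimal sketch (the placeholder names here are hypothetical stand-ins for the real ops):

```python
def fill_feed_dict(images, labels, batch_size, step):
    """Slice out the next batch_size examples, keyed by placeholder."""
    start = (step * batch_size) % len(images)
    return {
        'images_placeholder': images[start:start + batch_size],
        'labels_placeholder': labels[start:start + batch_size],
    }
```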
-#### Check the Status
+#### Check the Status <a class="md-anchor" id="AUTOGENERATED-check-the-status"></a>
The code specifies two op-tensors in its run call: `[train_op, loss]`:
@@ -369,7 +369,7 @@ if step % 100 == 0:
print 'Step %d: loss = %.2f (%.3f sec)' % (step, loss_value, duration)
```
-#### Visualize the Status
+#### Visualize the Status <a class="md-anchor" id="AUTOGENERATED-visualize-the-status"></a>
In order to emit the events files used by [TensorBoard](../../../how_tos/summaries_and_tensorboard/index.md),
all of the summaries (in this case, only one) are collected into a single op
@@ -404,7 +404,7 @@ folder to display the values from the summaries.
**NOTE**: For more info about how to build and run TensorBoard, please see the accompanying tutorial [TensorBoard: Visualizing Your Training](../../../how_tos/summaries_and_tensorboard/index.md).
-#### Save a Checkpoint
+#### Save a Checkpoint <a class="md-anchor" id="AUTOGENERATED-save-a-checkpoint"></a>
In order to emit a checkpoint file that may be used to later restore a model
for further training or evaluation, we instantiate a
@@ -430,7 +430,7 @@ method to reload the model parameters.
saver.restore(sess, FLAGS.train_dir)
```
-## Evaluate the Model
+## Evaluate the Model <a class="md-anchor" id="AUTOGENERATED-evaluate-the-model"></a>
Every thousand steps, the code will attempt to evaluate the model against both
the training and test datasets. The `do_eval()` function is called thrice, for
@@ -462,7 +462,7 @@ do_eval(sess,
> the sake of a simple little MNIST problem, however, we evaluate against all of
> the data.
-### Build the Eval Graph
+### Build the Eval Graph <a class="md-anchor" id="AUTOGENERATED-build-the-eval-graph"></a>
Before opening the default Graph, the test data should have been fetched by
calling the `get_data(train=False)` function with the parameter set to grab
@@ -489,7 +489,7 @@ of K to 1 to only consider a prediction correct if it is for the true label.
eval_correct = tf.nn.in_top_k(logits, labels, 1)
```
-### Eval Output
+### Eval Output <a class="md-anchor" id="AUTOGENERATED-eval-output"></a>
One can then create a loop for filling a `feed_dict` and calling `sess.run()`
against the `eval_correct` op to evaluate the model on the given dataset.
diff --git a/tensorflow/g3doc/tutorials/pdes/index.md b/tensorflow/g3doc/tutorials/pdes/index.md
index 26f36d5536..a7c84ebd63 100755
--- a/tensorflow/g3doc/tutorials/pdes/index.md
+++ b/tensorflow/g3doc/tutorials/pdes/index.md
@@ -1,6 +1,6 @@
-# Partial Differential Equations
+# Partial Differential Equations <a class="md-anchor" id="AUTOGENERATED-partial-differential-equations"></a>
-## Basic Setup
+## Basic Setup <a class="md-anchor" id="AUTOGENERATED-basic-setup"></a>
```
@@ -30,7 +30,7 @@ def DisplayArray(a, fmt='jpeg', rng=[0,1]):
sess = tf.InteractiveSession()
```
-## Computational Convenience Functions
+## Computational Convenience Functions <a class="md-anchor" id="AUTOGENERATED-computational-convenience-functions"></a>
```
@@ -54,7 +54,7 @@ def laplace(x):
return simple_conv(x, laplace_k)
```
-## Define the PDE
+## Define the PDE <a class="md-anchor" id="AUTOGENERATED-define-the-pde"></a>
```
@@ -103,7 +103,7 @@ step = tf.group(
Ut.assign(Ut_))
```
-## Run The Simulation
+## Run The Simulation <a class="md-anchor" id="AUTOGENERATED-run-the-simulation"></a>
```
diff --git a/tensorflow/g3doc/tutorials/recurrent/index.md b/tensorflow/g3doc/tutorials/recurrent/index.md
index 29d058cd5d..c2ae1afb70 100644
--- a/tensorflow/g3doc/tutorials/recurrent/index.md
+++ b/tensorflow/g3doc/tutorials/recurrent/index.md
@@ -1,12 +1,12 @@
-# Recurrent Neural Networks
+# Recurrent Neural Networks <a class="md-anchor" id="AUTOGENERATED-recurrent-neural-networks"></a>
-## Introduction
+## Introduction <a class="md-anchor" id="AUTOGENERATED-introduction"></a>
Take a look at [this great article]
(http://colah.github.io/posts/2015-08-Understanding-LSTMs/)
for an introduction to recurrent neural networks and LSTMs in particular.
-## Language Modeling
+## Language Modeling <a class="md-anchor" id="AUTOGENERATED-language-modeling"></a>
In this tutorial we will show how to train a recurrent neural network on
a challenging task of language modeling. The goal of the problem is to fit a
@@ -24,7 +24,7 @@ For the purpose of this tutorial, we will reproduce the results from
[Zaremba et al., 2014] (http://arxiv.org/abs/1409.2329), which achieves very
good results on the PTB dataset.
-## Tutorial Files
+## Tutorial Files <a class="md-anchor" id="AUTOGENERATED-tutorial-files"></a>
This tutorial references the following files from `models/rnn/ptb`:
@@ -33,7 +33,7 @@ File | Purpose
`ptb_word_lm.py` | The code to train a language model on the PTB dataset.
`reader.py` | The code to read the dataset.
-## Download and Prepare the Data
+## Download and Prepare the Data <a class="md-anchor" id="AUTOGENERATED-download-and-prepare-the-data"></a>
The data required for this tutorial is in the data/ directory of the
PTB dataset from Tomas Mikolov's webpage:
@@ -44,9 +44,9 @@ including the end-of-sentence marker and a special symbol (\<unk\>) for rare
words. We convert all of them in `reader.py` to unique integer identifiers
to make it easy for the neural network to process.
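The word-to-id conversion can be sketched as building a vocabulary ordered by frequency and then looking each word up. This is a simplified stand-in for what `reader.py` does, not its actual code:

```python
import collections

def build_vocab(words):
    """Map each word to a unique integer id, most frequent first."""
    counts = collections.Counter(words)
    ordered = sorted(counts.items(), key=lambda wc: (-wc[1], wc[0]))
    return {word: i for i, (word, _) in enumerate(ordered)}

def words_to_ids(words, vocab):
    return [vocab[w] for w in words]
```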
-## The Model
+## The Model <a class="md-anchor" id="AUTOGENERATED-the-model"></a>
-### LSTM
+### LSTM <a class="md-anchor" id="AUTOGENERATED-lstm"></a>
The core of the model consists of an LSTM cell that processes one word at a
time and computes probabilities of the possible continuations of the sentence.
@@ -72,7 +72,7 @@ for current_batch_of_words in words_in_dataset:
loss += loss_function(probabilities, target_words)
```
-### Truncated Backpropagation
+### Truncated Backpropagation <a class="md-anchor" id="AUTOGENERATED-truncated-backpropagation"></a>
In order to make the learning process tractable, it is a common practice to
truncate the gradients for backpropagation to a fixed number (`num_steps`)
@@ -114,7 +114,7 @@ for current_batch_of_words in words_in_dataset:
total_loss += current_loss
```
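The unrolling described above amounts to cutting the word sequence into fixed-length windows of `num_steps` words; gradients stop at each window boundary while the LSTM state is carried across. A minimal sketch of the slicing (incomplete trailing words are dropped here for simplicity):

```python
def truncated_windows(word_ids, num_steps):
    """Split a long sequence into fixed-length unrolls for truncated BPTT."""
    return [word_ids[i:i + num_steps]
            for i in range(0, len(word_ids) - num_steps + 1, num_steps)]
```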
-### Inputs
+### Inputs <a class="md-anchor" id="AUTOGENERATED-inputs"></a>
The word IDs will be embedded into a dense representation (see the
[Vectors Representations Tutorial](../word2vec/index.md)) before feeding to
@@ -129,7 +129,7 @@ word_embeddings = tf.nn.embedding_lookup(embedding_matrix, word_ids)
The embedding matrix will be initialized randomly and the model will learn to
differentiate the meaning of words just by looking at the data.
-### Loss Fuction
+### Loss Function <a class="md-anchor" id="AUTOGENERATED-loss-function"></a>
We want to minimize the average negative log probability of the target words:
@@ -145,7 +145,7 @@ $$e^{-\frac{1}{N}\sum_{i=1}^{N} \ln p_{\text{target}_i}} = e^{\text{loss}} $$
and we will monitor its value throughout the training process.
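Perplexity, as defined by the formula above, is just the exponential of the average negative log probability assigned to the target words:

```python
import math

def perplexity(target_probs):
    """exp of the average negative log probability of the target words."""
    avg_neg_log_prob = -sum(math.log(p) for p in target_probs) / len(target_probs)
    return math.exp(avg_neg_log_prob)
```

A model that assigns probability 1/4 to every target word has perplexity 4, i.e. it is as uncertain as a uniform choice among four words.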
-### Stacking multiple LSTMs
+### Stacking multiple LSTMs <a class="md-anchor" id="AUTOGENERATED-stacking-multiple-lstms"></a>
To give the model more expressive power, we can add multiple layers of LSTMs
to process the data. The output of the first layer will become the input of
@@ -168,7 +168,7 @@ for i in range(len(num_steps)):
final_state = state
```
-## Compile and Run the Code
+## Compile and Run the Code <a class="md-anchor" id="AUTOGENERATED-compile-and-run-the-code"></a>
First, the library needs to be built. To compile it on CPU:
@@ -198,7 +198,7 @@ The larger the model, the better results it should get. The `small` model should
be able to reach perplexity below 120 on the test set and the `large` one below
80, though it might take several hours to train.
-## What Next?
+## What Next? <a class="md-anchor" id="AUTOGENERATED-what-next-"></a>
There are several tricks that we haven't mentioned that make the model better,
including:
diff --git a/tensorflow/g3doc/tutorials/seq2seq/index.md b/tensorflow/g3doc/tutorials/seq2seq/index.md
index b91688691d..ee9808a5dd 100644
--- a/tensorflow/g3doc/tutorials/seq2seq/index.md
+++ b/tensorflow/g3doc/tutorials/seq2seq/index.md
@@ -1,4 +1,4 @@
-# Sequence-to-Sequence Models
+# Sequence-to-Sequence Models <a class="md-anchor" id="AUTOGENERATED-sequence-to-sequence-models"></a>
Recurrent neural networks can learn to model language, as already discussed
in the [RNN Tutorial](../recurrent/index.md)
@@ -32,7 +32,7 @@ File | What's in it?
`translate/translate.py` | Binary that trains and runs the translation model.
-## Sequence-to-Sequence Basics
+## Sequence-to-Sequence Basics <a class="md-anchor" id="AUTOGENERATED-sequence-to-sequence-basics"></a>
A basic sequence-to-sequence model, as introduced in
[Cho et al., 2014](http://arxiv.org/pdf/1406.1078v3.pdf),
@@ -64,7 +64,7 @@ attention mechanism in the decoder looks like this.
<img style="width:100%" src="attention_seq2seq.png" />
</div>
-## TensorFlow seq2seq Library
+## TensorFlow seq2seq Library <a class="md-anchor" id="AUTOGENERATED-tensorflow-seq2seq-library"></a>
As you can see above, there are many different sequence-to-sequence
models. Each of these models can use different RNN cells, but all
@@ -141,14 +141,14 @@ more sequence-to-sequence models in `seq2seq.py`, take a look there. They all
have similar interfaces, so we will not describe them in detail. We will use
`embedding_attention_seq2seq` for our translation model below.
-## Neural Translation Model
+## Neural Translation Model <a class="md-anchor" id="AUTOGENERATED-neural-translation-model"></a>
While the core of the sequence-to-sequence model is constructed by
the functions in `models/rnn/seq2seq.py`, there are still a few tricks
that are worth mentioning that are used in our translation model in
`models/rnn/translate/seq2seq_model.py`.
-### Sampled softmax and output projection
+### Sampled softmax and output projection <a class="md-anchor" id="AUTOGENERATED-sampled-softmax-and-output-projection"></a>
For one, as already mentioned above, we want to use sampled softmax to
handle large output vocabulary. To decode from it, we need to keep track
@@ -184,7 +184,7 @@ if output_projection is not None:
output_projection[1] for ...]
```
-### Bucketing and padding
+### Bucketing and padding <a class="md-anchor" id="AUTOGENERATED-bucketing-and-padding"></a>
In addition to sampled softmax, our translation model also makes use
of *bucketing*, which is a method to efficiently handle sentences of
@@ -230,8 +230,7 @@ with encoder inputs representing `[PAD PAD "." "go" "I"]` and decoder
inputs `[GO "Je" "vais" "." EOS PAD PAD PAD PAD PAD]`.
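The bucketing and padding steps above can be sketched as follows: pick the smallest bucket that fits, reverse and pad the encoder input, and wrap the decoder input in `GO`/`EOS` plus padding. This is a simplified illustration, not the actual `seq2seq_model.py` code:

```python
def bucket_and_pad(encoder_in, decoder_in, buckets,
                   PAD='PAD', GO='GO', EOS='EOS'):
    """Pick the smallest (enc, dec) bucket that fits, then pad both sides."""
    for enc_size, dec_size in buckets:
        if len(encoder_in) <= enc_size and len(decoder_in) + 2 <= dec_size:
            # Encoder input is reversed and front-padded.
            enc = [PAD] * (enc_size - len(encoder_in)) + list(reversed(encoder_in))
            # Decoder input gets GO, EOS, and trailing padding.
            dec = [GO] + decoder_in + [EOS]
            dec += [PAD] * (dec_size - len(dec))
            return enc, dec
    raise ValueError('no bucket fits this sentence pair')
```

Running it on the "I go." / "Je vais." example with a (5, 10) bucket reproduces the padded sequences shown above.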
-<a name="run_it"></a>
-## Let's Run It
+## Let's Run It <a class="md-anchor" id="run_it"></a>
To train the model described above, we need a large English-French corpus.
We will use the *10^9-French-English corpus* from the
@@ -305,7 +304,7 @@ Reading model parameters from /tmp/translate.ckpt-340000
Qui est le président des États-Unis ?
```
-## What Next?
+## What Next? <a class="md-anchor" id="AUTOGENERATED-what-next-"></a>
The example above shows how you can build your own English-to-French
translator, end-to-end. Run it and see how the model performs for yourself.
diff --git a/tensorflow/g3doc/tutorials/word2vec/index.md b/tensorflow/g3doc/tutorials/word2vec/index.md
index 290ff3627f..c9b66cab88 100644
--- a/tensorflow/g3doc/tutorials/word2vec/index.md
+++ b/tensorflow/g3doc/tutorials/word2vec/index.md
@@ -1,11 +1,11 @@
-# Vector Representations of Words
+# Vector Representations of Words <a class="md-anchor" id="AUTOGENERATED-vector-representations-of-words"></a>
In this tutorial we look at the word2vec model by
[Mikolov et al.](http://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf).
This model is used for learning vector representations of words, called *word
embeddings*.
-## Highlights
+## Highlights <a class="md-anchor" id="AUTOGENERATED-highlights"></a>
This tutorial is meant to highlight the interesting, substantive parts of
building a word2vec model in TensorFlow.
@@ -32,7 +32,7 @@ But first, let's look at why we would want to learn word embeddings in the first
place. Feel free to skip this section if you're an Embedding Pro and you'd just
like to get your hands dirty with the details.
-## Motivation: Why Learn Word Embeddings?
+## Motivation: Why Learn Word Embeddings? <a class="md-anchor" id="AUTOGENERATED-motivation--why-learn-word-embeddings-"></a>
Image and audio processing systems work with rich, high-dimensional datasets
encoded as vectors of the individual raw pixel-intensities for image data, or
@@ -90,12 +90,12 @@ pair as a new observation, and this tends to do better when we have larger
datasets. We will focus on the skip-gram model in the rest of this tutorial.
-## Scaling up with Noise-Contrastive Training
+## Scaling up with Noise-Contrastive Training <a class="md-anchor" id="AUTOGENERATED-scaling-up-with-noise-contrastive-training"></a>
Neural probabilistic language models are traditionally trained using the
[maximum likelihood](https://en.wikipedia.org/wiki/Maximum_likelihood) (ML)
-principle to maximize the probability of the next word $$w_t$$ (for 'target)
-given the previous words $$h$$ (for 'history') in terms of a
+principle to maximize the probability of the next word \(w_t\) (for 'target')
+given the previous words \(h\) (for 'history') in terms of a
[*softmax* function](https://en.wikipedia.org/wiki/Softmax_function),
$$
@@ -106,8 +106,8 @@ P(w_t | h) &= \text{softmax}(\exp \{ \text{score}(w_t, h) \}) \\
\end{align}
$$
-where $$\text{score}(w_t, h)$$ computes the compatibility of word $$w_t$$ with
-the context $$h$$ (a dot product is commonly used). We train this model by
+where \(\text{score}(w_t, h)\) computes the compatibility of word \(w_t\) with
+the context \(h\) (a dot product is commonly used). We train this model by
maximizing its log-likelihood on the training set, i.e. by maximizing
$$
@@ -120,8 +120,8 @@ $$
This yields a properly normalized probabilistic model for language modeling.
However this is very expensive, because we need to compute and normalize each
-probability using the score for all other $$V$$ words $$w'$$ in the current
-context $$h$$, *at every training step*.
+probability using the score for all other \(V\) words \(w'\) in the current
+context \(h\), *at every training step*.
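The expense is visible in a direct sketch of the softmax: normalizing requires one exponentiated score per vocabulary word, so every training step touches all \(V\) words (toy scores here, no real model):

```python
import math

def softmax_probability(scores, target_index):
    """P(w_t | h) under a full softmax over per-word scores."""
    exps = [math.exp(s) for s in scores]  # one term per vocabulary word
    return exps[target_index] / sum(exps)
```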
<div style="width:60%; margin:auto; margin-bottom:10px; margin-top:20px;">
<img style="width:100%" src="img/softmax-nplm.png" alt>
@@ -130,7 +130,7 @@ context $$h$$, *at every training step*.
On the other hand, for feature learning in word2vec we do not need a full
probabilistic model. The CBOW and skip-gram models are instead trained using a
binary classification objective (logistic regression) to discriminate the real
-target words $$w_t$$ from $$k$$ imaginary (noise) words $$\tilde w$$, in the
+target words \(w_t\) from \(k\) imaginary (noise) words \(\tilde w\), in the
same context. We illustrate this below for a CBOW model. For skip-gram the
direction is simply inverted.
@@ -144,10 +144,10 @@ $$J_\text{NEG} = \log Q_\theta(D=1 |w_t, h) +
k \mathop{\mathbb{E}}_{\tilde w \sim P_\text{noise}}
\left[ \log Q_\theta(D = 0 |\tilde w, h) \right]$$,
-where $$Q_\theta(D=1 | w, h)$$ is the binary logistic regression probability
-under the model of seeing the word $$w$$ in the context $$h$$ in the dataset
-$$D$$, calculated in terms of the learned embedding vectors $$\theta$$. In
-practice we approximate the expectation by drawing $$k$$ constrastive words
+where \(Q_\theta(D=1 | w, h)\) is the binary logistic regression probability
+under the model of seeing the word \(w\) in the context \(h\) in the dataset
+\(D\), calculated in terms of the learned embedding vectors \(\theta\). In
+practice we approximate the expectation by drawing \(k\) contrastive words
from the noise distribution (i.e. we compute a
[Monte Carlo average](https://en.wikipedia.org/wiki/Monte_Carlo_integration)).
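The objective above can be sketched directly: the log probability of the real word being classified as real, plus the log probabilities of the sampled noise words being classified as fake, where the classifier probability is a logistic function of the compatibility score (toy scalar scores, not the real model):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def j_neg(score_target, noise_scores):
    """Negative-sampling objective for one (target, context) pair."""
    obj = math.log(sigmoid(score_target))          # real word labeled D=1
    obj += sum(math.log(1.0 - sigmoid(s))          # noise words labeled D=0
               for s in noise_scores)
    return obj
```

Note the cost of evaluating this scales with the number of noise words \(k\), not the vocabulary size \(V\).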
@@ -159,14 +159,14 @@ and there is good mathematical motivation for using this loss function:
The updates it proposes approximate the updates of the softmax function in the
limit. But computationally it is especially appealing because computing the
loss function now scales only with the number of *noise words* that we
-select ($$k$$), and not *all words* in the vocabulary ($$V$$). This makes it
+select (\(k\)), and not *all words* in the vocabulary (\(V\)). This makes it
much faster to train. We will actually make use of the very similar
[noise-contrastive estimation (NCE)](http://papers.nips.cc/paper/5165-learning-word-embeddings-efficiently-with-noise-contrastive-estimation.pdf)
loss, for which TensorFlow has a handy helper function `tf.nn.nce_loss()`.
Let's get an intuitive feel for how this would work in practice!
-## The Skip-gram Model
+## The Skip-gram Model <a class="md-anchor" id="AUTOGENERATED-the-skip-gram-model"></a>
As an example, let's consider the dataset
@@ -198,21 +198,21 @@ dataset, but we typically optimize this with
where typically `16 <= batch_size <= 512`). So let's look at one step of
this process.
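Generating the (input, output) pairs for skip-gram can be sketched as sliding a window over the text and pairing each center word with its neighbors (a toy helper of our own, not the tutorial's batch generator):

```python
def skipgram_pairs(words, window=1):
    """(center, neighbor) pairs: predict each context word from the center."""
    pairs = []
    for i, center in enumerate(words):
        for j in range(max(0, i - window), min(len(words), i + window + 1)):
            if j != i:
                pairs.append((center, words[j]))
    return pairs
```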
-Let's imagine at training step $$t$$ we observe the first training case above,
+Let's imagine at training step \(t\) we observe the first training case above,
where the goal is to predict `the` from `quick`. We select `num_noise` number
of noisy (contrastive) examples by drawing from some noise distribution,
-typically the unigram distribution, $$P(w)$$. For simplicity let's say
+typically the unigram distribution, \(P(w)\). For simplicity let's say
`num_noise=1` and we select `sheep` as a noisy example. Next we compute the
loss for this pair of observed and noisy examples, i.e. the objective at time
-step $$t$$ becomes
+step \(t\) becomes
$$J^{(t)}_\text{NEG} = \log Q_\theta(D=1 | \text{the, quick}) +
\log(Q_\theta(D=0 | \text{sheep, quick}))$$.
-The goal is to make an update to the embedding parameters $$\theta$$ to improve
+The goal is to make an update to the embedding parameters \(\theta\) to improve
(in this case, maximize) this objective function. We do this by deriving the
-gradient of the loss with respect to the embedding parameters $$\theta$$, i.e.
-$$\frac{\partial}{\partial \theta} J_\text{NEG}$$ (luckily TensorFlow provides
+gradient of the loss with respect to the embedding parameters \(\theta\), i.e.
+\(\frac{\partial}{\partial \theta} J_\text{NEG}\) (luckily TensorFlow provides
easy helper functions for doing this!). We then perform an update to the
embeddings by taking a small step in the direction of the gradient. When this
process is repeated over the entire training set, this has the effect of
@@ -243,7 +243,7 @@ NLP prediction tasks, such as part-of-speech tagging or named entity recognition
But for now, let's just use them to draw pretty pictures!
-## Building the Graph
+## Building the Graph <a class="md-anchor" id="AUTOGENERATED-building-the-graph"></a>
This is all about embeddings, so let's define our embedding matrix.
This is just a big random matrix to start. We'll initialize the values to be
@@ -307,7 +307,7 @@ gradient descent, and TensorFlow has handy helpers to make this easy.
optimizer = tf.train.GradientDescentOptimizer(learning_rate=1.0).minimize(loss)
```
-## Training the Model
+## Training the Model <a class="md-anchor" id="AUTOGENERATED-training-the-model"></a>
Training the model is then as simple as using a `feed_dict` to push data into
the placeholders and calling `session.run` with this new data in a loop.
@@ -321,7 +321,7 @@ for inputs, labels in generate_batch(...):
See the full example code in
[tensorflow/g3doc/tutorials/word2vec/word2vec_basic.py](./word2vec_basic.py).
-## Visualizing the Learned Embeddings
+## Visualizing the Learned Embeddings <a class="md-anchor" id="AUTOGENERATED-visualizing-the-learned-embeddings"></a>
After training has finished we can visualize the learned embeddings using
t-SNE.
@@ -335,7 +335,7 @@ other. For a more heavyweight implementation of word2vec that showcases more of
the advanced features of TensorFlow, see the implementation in
[tensorflow/models/embedding/word2vec.py](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/models/embedding/word2vec.py).
-## Evaluating Embeddings: Analogical Reasoning
+## Evaluating Embeddings: Analogical Reasoning <a class="md-anchor" id="AUTOGENERATED-evaluating-embeddings--analogical-reasoning"></a>
Embeddings are useful for a wide variety of prediction tasks in NLP. Short of
training a full-blown part-of-speech model or named-entity model, one simple way
@@ -356,7 +356,7 @@ very large dataset, carefully tuning the hyperparameters and making use of
tricks like subsampling the data, which is out of the scope of this tutorial.
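The analogy evaluation itself reduces to vector arithmetic plus a nearest-neighbor search: answer "a is to b as c is to ?" by finding the word whose embedding is closest to vec(b) - vec(a) + vec(c). A minimal sketch with a hand-built toy embedding table:

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def analogy(embeddings, a, b, c):
    """Answer 'a is to b as c is to ?' by nearest neighbor to b - a + c."""
    query = [vb - va + vc for va, vb, vc in
             zip(embeddings[a], embeddings[b], embeddings[c])]
    candidates = (w for w in embeddings if w not in {a, b, c})
    return max(candidates, key=lambda w: cosine(query, embeddings[w]))
```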
-## Optimizing the Implementation
+## Optimizing the Implementation <a class="md-anchor" id="AUTOGENERATED-optimizing-the-implementation"></a>
Our vanilla implementation showcases the flexibility of TensorFlow. For
example, changing the training objective is as simple as swapping out the call
@@ -386,7 +386,7 @@ example of this for the Skip-Gram case
Feel free to benchmark these against each other to measure performance
improvements at each stage.
-## Conclusion
+## Conclusion <a class="md-anchor" id="AUTOGENERATED-conclusion"></a>
In this tutorial we covered the word2vec model, a computationally efficient
model for learning word embeddings. We motivated why embeddings are useful,
diff --git a/tensorflow/models/rnn/BUILD b/tensorflow/models/rnn/BUILD
index a88d48fd42..3e5e6b37ca 100644
--- a/tensorflow/models/rnn/BUILD
+++ b/tensorflow/models/rnn/BUILD
@@ -51,6 +51,18 @@ py_test(
)
py_library(
+ name = "package",
+ srcs = [
+ "__init__.py",
+ ],
+ deps = [
+ ":rnn",
+ ":rnn_cell",
+ ":seq2seq",
+ ],
+)
+
+py_library(
name = "rnn",
srcs = [
"rnn.py",
diff --git a/tensorflow/models/rnn/__init__.py b/tensorflow/models/rnn/__init__.py
index e69de29bb2..475b6f8d31 100755..100644
--- a/tensorflow/models/rnn/__init__.py
+++ b/tensorflow/models/rnn/__init__.py
@@ -0,0 +1,12 @@
+"""Libraries to build Recurrent Neural Networks.
+
+This file helps simplify the import process:
+
+import tensorflow.python.platform
+from tensorflow.models.rnn import rnn
+from tensorflow.models.rnn import rnn_cell
+...
+"""
+from tensorflow.models.rnn import rnn
+from tensorflow.models.rnn import rnn_cell
+from tensorflow.models.rnn import seq2seq
diff --git a/tensorflow/tools/pip_package/BUILD b/tensorflow/tools/pip_package/BUILD
index 1302553e8d..b9a50e4288 100644
--- a/tensorflow/tools/pip_package/BUILD
+++ b/tensorflow/tools/pip_package/BUILD
@@ -22,6 +22,7 @@ sh_binary(
"//tensorflow/models/image/cifar10:cifar10_train",
"//tensorflow/models/image/cifar10:cifar10_multi_gpu_train",
"//tensorflow/models/image/mnist:convolutional",
+ "//tensorflow/models/rnn:package",
"//tensorflow/tensorboard",
],
)