-rw-r--r--  README.md                                          1
-rw-r--r--  tensorflow/g3doc/api_docs/python/client.md        76
-rw-r--r--  tensorflow/g3doc/api_docs/python/framework.md     36
-rw-r--r--  tensorflow/g3doc/api_docs/python/io_ops.md        36
-rw-r--r--  tensorflow/g3doc/api_docs/python/python_io.md      4
-rw-r--r--  tensorflow/g3doc/api_docs/python/sparse_ops.md     8
-rw-r--r--  tensorflow/g3doc/api_docs/python/state_ops.md     12
-rw-r--r--  tensorflow/g3doc/api_docs/python/train.md         48
-rw-r--r--  tensorflow/g3doc/get_started/os_setup.md          23
-rw-r--r--  tensorflow/g3doc/how_tos/adding_an_op/index.md    15
-rw-r--r--  tensorflow/g3doc/how_tos/new_data_formats/index.md  32
-rw-r--r--  tensorflow/g3doc/how_tos/reading_data/index.md    72
-rw-r--r--  tensorflow/g3doc/tutorials/deep_cnn/index.md      14
-rw-r--r--  tensorflow/g3doc/tutorials/mnist/beginners/index.md  6
-rw-r--r--  tensorflow/g3doc/tutorials/mnist/tf/index.md      39
-rw-r--r--  tensorflow/g3doc/tutorials/recurrent/index.md      4
-rw-r--r--  tensorflow/g3doc/tutorials/word2vec/index.md      14
-rw-r--r--  tensorflow/python/framework/docs.py                2
-rw-r--r--  tensorflow/tools/docker/Dockerfile.gpu             8
-rw-r--r--  tensorflow/tools/docker/Dockerfile.gpu_base       34
20 files changed, 270 insertions, 214 deletions
diff --git a/README.md b/README.md
index e9fc94c6ec..1a53403d85 100644
--- a/README.md
+++ b/README.md
@@ -75,3 +75,4 @@ Hello, TensorFlow!
##For more information
* [TensorFlow website](http://tensorflow.org)
+* [TensorFlow whitepaper](http://download.tensorflow.org/paper/whitepaper2015.pdf)
diff --git a/tensorflow/g3doc/api_docs/python/client.md b/tensorflow/g3doc/api_docs/python/client.md
index 6be112738d..7f781f7106 100644
--- a/tensorflow/g3doc/api_docs/python/client.md
+++ b/tensorflow/g3doc/api_docs/python/client.md
@@ -5,27 +5,27 @@
## Contents
### [Running Graphs](#AUTOGENERATED-running-graphs)
* [Session management](#AUTOGENERATED-session-management)
- * [class tf.Session](#Session)
- * [class tf.InteractiveSession](#InteractiveSession)
+ * [`class tf.Session`](#Session)
+ * [`class tf.InteractiveSession`](#InteractiveSession)
* [`tf.get_default_session()`](#get_default_session)
* [Error classes](#AUTOGENERATED-error-classes)
- * [class tf.OpError](#OpError)
- * [class tf.errors.CancelledError](#CancelledError)
- * [class tf.errors.UnknownError](#UnknownError)
- * [class tf.errors.InvalidArgumentError](#InvalidArgumentError)
- * [class tf.errors.DeadlineExceededError](#DeadlineExceededError)
- * [class tf.errors.NotFoundError](#NotFoundError)
- * [class tf.errors.AlreadyExistsError](#AlreadyExistsError)
- * [class tf.errors.PermissionDeniedError](#PermissionDeniedError)
- * [class tf.errors.UnauthenticatedError](#UnauthenticatedError)
- * [class tf.errors.ResourceExhaustedError](#ResourceExhaustedError)
- * [class tf.errors.FailedPreconditionError](#FailedPreconditionError)
- * [class tf.errors.AbortedError](#AbortedError)
- * [class tf.errors.OutOfRangeError](#OutOfRangeError)
- * [class tf.errors.UnimplementedError](#UnimplementedError)
- * [class tf.errors.InternalError](#InternalError)
- * [class tf.errors.UnavailableError](#UnavailableError)
- * [class tf.errors.DataLossError](#DataLossError)
+ * [`class tf.OpError`](#OpError)
+ * [`class tf.errors.CancelledError`](#CancelledError)
+ * [`class tf.errors.UnknownError`](#UnknownError)
+ * [`class tf.errors.InvalidArgumentError`](#InvalidArgumentError)
+ * [`class tf.errors.DeadlineExceededError`](#DeadlineExceededError)
+ * [`class tf.errors.NotFoundError`](#NotFoundError)
+ * [`class tf.errors.AlreadyExistsError`](#AlreadyExistsError)
+ * [`class tf.errors.PermissionDeniedError`](#PermissionDeniedError)
+ * [`class tf.errors.UnauthenticatedError`](#UnauthenticatedError)
+ * [`class tf.errors.ResourceExhaustedError`](#ResourceExhaustedError)
+ * [`class tf.errors.FailedPreconditionError`](#FailedPreconditionError)
+ * [`class tf.errors.AbortedError`](#AbortedError)
+ * [`class tf.errors.OutOfRangeError`](#OutOfRangeError)
+ * [`class tf.errors.UnimplementedError`](#UnimplementedError)
+ * [`class tf.errors.InternalError`](#InternalError)
+ * [`class tf.errors.UnavailableError`](#UnavailableError)
+ * [`class tf.errors.DataLossError`](#DataLossError)
<!-- TOC-END This section was generated by neural network, THANKS FOR READING! -->
@@ -39,7 +39,7 @@ examples of how a graph is launched in a [`tf.Session`](#Session).
- - -
-### class tf.Session <a class="md-anchor" id="Session"></a>
+### `class tf.Session` <a class="md-anchor" id="Session"></a>
A class for running TensorFlow operations.
@@ -262,7 +262,7 @@ thread's function.
- - -
-### class tf.InteractiveSession <a class="md-anchor" id="InteractiveSession"></a>
+### `class tf.InteractiveSession` <a class="md-anchor" id="InteractiveSession"></a>
A TensorFlow `Session` for use in interactive contexts, such as a shell.
@@ -357,7 +357,7 @@ thread's function.
- - -
-### class tf.OpError <a class="md-anchor" id="OpError"></a>
+### `class tf.OpError` <a class="md-anchor" id="OpError"></a>
A generic error that is raised when TensorFlow execution fails.
@@ -419,7 +419,7 @@ The error message that describes the error.
- - -
-### class tf.errors.CancelledError <a class="md-anchor" id="CancelledError"></a>
+### `class tf.errors.CancelledError` <a class="md-anchor" id="CancelledError"></a>
Raised when an operation or step is cancelled.
@@ -441,7 +441,7 @@ Creates a `CancelledError`.
- - -
-### class tf.errors.UnknownError <a class="md-anchor" id="UnknownError"></a>
+### `class tf.errors.UnknownError` <a class="md-anchor" id="UnknownError"></a>
Unknown error.
@@ -461,7 +461,7 @@ Creates an `UnknownError`.
- - -
-### class tf.errors.InvalidArgumentError <a class="md-anchor" id="InvalidArgumentError"></a>
+### `class tf.errors.InvalidArgumentError` <a class="md-anchor" id="InvalidArgumentError"></a>
Raised when an operation receives an invalid argument.
@@ -483,7 +483,7 @@ Creates an `InvalidArgumentError`.
- - -
-### class tf.errors.DeadlineExceededError <a class="md-anchor" id="DeadlineExceededError"></a>
+### `class tf.errors.DeadlineExceededError` <a class="md-anchor" id="DeadlineExceededError"></a>
Raised when a deadline expires before an operation could complete.
@@ -499,7 +499,7 @@ Creates a `DeadlineExceededError`.
- - -
-### class tf.errors.NotFoundError <a class="md-anchor" id="NotFoundError"></a>
+### `class tf.errors.NotFoundError` <a class="md-anchor" id="NotFoundError"></a>
Raised when a requested entity (e.g., a file or directory) was not found.
@@ -518,7 +518,7 @@ Creates a `NotFoundError`.
- - -
-### class tf.errors.AlreadyExistsError <a class="md-anchor" id="AlreadyExistsError"></a>
+### `class tf.errors.AlreadyExistsError` <a class="md-anchor" id="AlreadyExistsError"></a>
Raised when an entity that we attempted to create already exists.
@@ -537,7 +537,7 @@ Creates an `AlreadyExistsError`.
- - -
-### class tf.errors.PermissionDeniedError <a class="md-anchor" id="PermissionDeniedError"></a>
+### `class tf.errors.PermissionDeniedError` <a class="md-anchor" id="PermissionDeniedError"></a>
Raised when the caller does not have permission to run an operation.
@@ -556,7 +556,7 @@ Creates a `PermissionDeniedError`.
- - -
-### class tf.errors.UnauthenticatedError <a class="md-anchor" id="UnauthenticatedError"></a>
+### `class tf.errors.UnauthenticatedError` <a class="md-anchor" id="UnauthenticatedError"></a>
The request does not have valid authentication credentials.
@@ -572,7 +572,7 @@ Creates an `UnauthenticatedError`.
- - -
-### class tf.errors.ResourceExhaustedError <a class="md-anchor" id="ResourceExhaustedError"></a>
+### `class tf.errors.ResourceExhaustedError` <a class="md-anchor" id="ResourceExhaustedError"></a>
Some resource has been exhausted.
@@ -589,7 +589,7 @@ Creates a `ResourceExhaustedError`.
- - -
-### class tf.errors.FailedPreconditionError <a class="md-anchor" id="FailedPreconditionError"></a>
+### `class tf.errors.FailedPreconditionError` <a class="md-anchor" id="FailedPreconditionError"></a>
Operation was rejected because the system is not in a state to execute it.
@@ -607,7 +607,7 @@ Creates a `FailedPreconditionError`.
- - -
-### class tf.errors.AbortedError <a class="md-anchor" id="AbortedError"></a>
+### `class tf.errors.AbortedError` <a class="md-anchor" id="AbortedError"></a>
The operation was aborted, typically due to a concurrent action.
@@ -627,7 +627,7 @@ Creates an `AbortedError`.
- - -
-### class tf.errors.OutOfRangeError <a class="md-anchor" id="OutOfRangeError"></a>
+### `class tf.errors.OutOfRangeError` <a class="md-anchor" id="OutOfRangeError"></a>
Raised when an operation executed past the valid range.
@@ -647,7 +647,7 @@ Creates an `OutOfRangeError`.
- - -
-### class tf.errors.UnimplementedError <a class="md-anchor" id="UnimplementedError"></a>
+### `class tf.errors.UnimplementedError` <a class="md-anchor" id="UnimplementedError"></a>
Raised when an operation has not been implemented.
@@ -667,7 +667,7 @@ Creates an `UnimplementedError`.
- - -
-### class tf.errors.InternalError <a class="md-anchor" id="InternalError"></a>
+### `class tf.errors.InternalError` <a class="md-anchor" id="InternalError"></a>
Raised when the system experiences an internal error.
@@ -684,7 +684,7 @@ Creates an `InternalError`.
- - -
-### class tf.errors.UnavailableError <a class="md-anchor" id="UnavailableError"></a>
+### `class tf.errors.UnavailableError` <a class="md-anchor" id="UnavailableError"></a>
Raised when the runtime is currently unavailable.
@@ -700,7 +700,7 @@ Creates an `UnavailableError`.
- - -
-### class tf.errors.DataLossError <a class="md-anchor" id="DataLossError"></a>
+### `class tf.errors.DataLossError` <a class="md-anchor" id="DataLossError"></a>
Raised when unrecoverable data loss or corruption is encountered.
diff --git a/tensorflow/g3doc/api_docs/python/framework.md b/tensorflow/g3doc/api_docs/python/framework.md
index 9af0191ad7..a62ef9e711 100644
--- a/tensorflow/g3doc/api_docs/python/framework.md
+++ b/tensorflow/g3doc/api_docs/python/framework.md
@@ -5,11 +5,11 @@
## Contents
### [Building Graphs](#AUTOGENERATED-building-graphs)
* [Core graph data structures](#AUTOGENERATED-core-graph-data-structures)
- * [class tf.Graph](#Graph)
- * [class tf.Operation](#Operation)
- * [class tf.Tensor](#Tensor)
+ * [`class tf.Graph`](#Graph)
+ * [`class tf.Operation`](#Operation)
+ * [`class tf.Tensor`](#Tensor)
* [Tensor types](#AUTOGENERATED-tensor-types)
- * [class tf.DType](#DType)
+ * [`class tf.DType`](#DType)
* [`tf.as_dtype(type_value)`](#as_dtype)
* [Utility functions](#AUTOGENERATED-utility-functions)
* [`tf.device(dev)`](#device)
@@ -21,13 +21,13 @@
* [Graph collections](#AUTOGENERATED-graph-collections)
* [`tf.add_to_collection(name, value)`](#add_to_collection)
* [`tf.get_collection(key, scope=None)`](#get_collection)
- * [class tf.GraphKeys](#GraphKeys)
+ * [`class tf.GraphKeys`](#GraphKeys)
* [Defining new operations](#AUTOGENERATED-defining-new-operations)
- * [class tf.RegisterGradient](#RegisterGradient)
+ * [`class tf.RegisterGradient`](#RegisterGradient)
* [`tf.NoGradient(op_type)`](#NoGradient)
- * [class tf.RegisterShape](#RegisterShape)
- * [class tf.TensorShape](#TensorShape)
- * [class tf.Dimension](#Dimension)
+ * [`class tf.RegisterShape`](#RegisterShape)
+ * [`class tf.TensorShape`](#TensorShape)
+ * [`class tf.Dimension`](#Dimension)
* [`tf.op_scope(values, name, default_name)`](#op_scope)
* [`tf.get_seed(op_seed)`](#get_seed)
@@ -40,7 +40,7 @@ Classes and functions for building TensorFlow graphs.
- - -
-### class tf.Graph <a class="md-anchor" id="Graph"></a>
+### `class tf.Graph` <a class="md-anchor" id="Graph"></a>
A TensorFlow computation, represented as a dataflow graph.
@@ -657,7 +657,7 @@ with tf.Graph().as_default() as g:
- - -
-### class tf.Operation <a class="md-anchor" id="Operation"></a>
+### `class tf.Operation` <a class="md-anchor" id="Operation"></a>
Represents a graph node that performs computation on tensors.
@@ -869,7 +869,7 @@ DEPRECATED: Use outputs.
- - -
-### class tf.Tensor <a class="md-anchor" id="Tensor"></a>
+### `class tf.Tensor` <a class="md-anchor" id="Tensor"></a>
Represents a value produced by an `Operation`.
@@ -1099,7 +1099,7 @@ The name of the device on which this tensor will be produced, or None.
- - -
-### class tf.DType <a class="md-anchor" id="DType"></a>
+### `class tf.DType` <a class="md-anchor" id="DType"></a>
Represents the type of the elements in a `Tensor`.
@@ -1504,7 +1504,7 @@ for more details.
- - -
-### class tf.GraphKeys <a class="md-anchor" id="GraphKeys"></a>
+### `class tf.GraphKeys` <a class="md-anchor" id="GraphKeys"></a>
Standard names to use for graph collections.
@@ -1539,7 +1539,7 @@ The following standard keys are defined:
- - -
-### class tf.RegisterGradient <a class="md-anchor" id="RegisterGradient"></a>
+### `class tf.RegisterGradient` <a class="md-anchor" id="RegisterGradient"></a>
A decorator for registering the gradient function for an op type.
@@ -1606,7 +1606,7 @@ tf.NoGradient("Size")
- - -
-### class tf.RegisterShape <a class="md-anchor" id="RegisterShape"></a>
+### `class tf.RegisterShape` <a class="md-anchor" id="RegisterShape"></a>
A decorator for registering the shape function for an op type.
@@ -1638,7 +1638,7 @@ Saves the "op_type" as the Operation type.
- - -
-### class tf.TensorShape <a class="md-anchor" id="TensorShape"></a>
+### `class tf.TensorShape` <a class="md-anchor" id="TensorShape"></a>
Represents the shape of a `Tensor`.
@@ -1948,7 +1948,7 @@ Returns the total number of elements, or none for incomplete shapes.
- - -
-### class tf.Dimension <a class="md-anchor" id="Dimension"></a>
+### `class tf.Dimension` <a class="md-anchor" id="Dimension"></a>
Represents the value of one dimension in a TensorShape.
- - -
diff --git a/tensorflow/g3doc/api_docs/python/io_ops.md b/tensorflow/g3doc/api_docs/python/io_ops.md
index 297024a6eb..9426cce788 100644
--- a/tensorflow/g3doc/api_docs/python/io_ops.md
+++ b/tensorflow/g3doc/api_docs/python/io_ops.md
@@ -11,12 +11,12 @@ Note: Functions taking `Tensor` arguments can also take anything accepted by
* [Placeholders](#AUTOGENERATED-placeholders)
* [`tf.placeholder(dtype, shape=None, name=None)`](#placeholder)
* [Readers](#AUTOGENERATED-readers)
- * [class tf.ReaderBase](#ReaderBase)
- * [class tf.TextLineReader](#TextLineReader)
- * [class tf.WholeFileReader](#WholeFileReader)
- * [class tf.IdentityReader](#IdentityReader)
- * [class tf.TFRecordReader](#TFRecordReader)
- * [class tf.FixedLengthRecordReader](#FixedLengthRecordReader)
+ * [`class tf.ReaderBase`](#ReaderBase)
+ * [`class tf.TextLineReader`](#TextLineReader)
+ * [`class tf.WholeFileReader`](#WholeFileReader)
+ * [`class tf.IdentityReader`](#IdentityReader)
+ * [`class tf.TFRecordReader`](#TFRecordReader)
+ * [`class tf.FixedLengthRecordReader`](#FixedLengthRecordReader)
* [Converting](#AUTOGENERATED-converting)
* [`tf.decode_csv(records, record_defaults, field_delim=None, name=None)`](#decode_csv)
* [`tf.decode_raw(bytes, out_type, little_endian=None, name=None)`](#decode_raw)
@@ -24,9 +24,9 @@ Note: Functions taking `Tensor` arguments can also take anything accepted by
* [`tf.parse_example(serialized, names=None, sparse_keys=None, sparse_types=None, dense_keys=None, dense_types=None, dense_defaults=None, dense_shapes=None, name='ParseExample')`](#parse_example)
* [`tf.parse_single_example(serialized, names=None, sparse_keys=None, sparse_types=None, dense_keys=None, dense_types=None, dense_defaults=None, dense_shapes=None, name='ParseSingleExample')`](#parse_single_example)
* [Queues](#AUTOGENERATED-queues)
- * [class tf.QueueBase](#QueueBase)
- * [class tf.FIFOQueue](#FIFOQueue)
- * [class tf.RandomShuffleQueue](#RandomShuffleQueue)
+ * [`class tf.QueueBase`](#QueueBase)
+ * [`class tf.FIFOQueue`](#FIFOQueue)
+ * [`class tf.RandomShuffleQueue`](#RandomShuffleQueue)
* [Dealing with the filesystem](#AUTOGENERATED-dealing-with-the-filesystem)
* [`tf.matching_files(pattern, name=None)`](#matching_files)
* [`tf.read_file(filename, name=None)`](#read_file)
@@ -98,7 +98,7 @@ data](../../how_tos/reading_data/index.md).
- - -
-### class tf.ReaderBase <a class="md-anchor" id="ReaderBase"></a>
+### `class tf.ReaderBase` <a class="md-anchor" id="ReaderBase"></a>
Base class for different Reader types that produce a record every step.
@@ -257,7 +257,7 @@ Whether the Reader implementation can serialize its state.
- - -
-### class tf.TextLineReader <a class="md-anchor" id="TextLineReader"></a>
+### `class tf.TextLineReader` <a class="md-anchor" id="TextLineReader"></a>
A Reader that outputs the lines of a file delimited by newlines.
@@ -408,7 +408,7 @@ Whether the Reader implementation can serialize its state.
- - -
-### class tf.WholeFileReader <a class="md-anchor" id="WholeFileReader"></a>
+### `class tf.WholeFileReader` <a class="md-anchor" id="WholeFileReader"></a>
A Reader that outputs the entire contents of a file as a value.
@@ -559,7 +559,7 @@ Whether the Reader implementation can serialize its state.
- - -
-### class tf.IdentityReader <a class="md-anchor" id="IdentityReader"></a>
+### `class tf.IdentityReader` <a class="md-anchor" id="IdentityReader"></a>
A Reader that outputs the queued work as both the key and value.
@@ -710,7 +710,7 @@ Whether the Reader implementation can serialize its state.
- - -
-### class tf.TFRecordReader <a class="md-anchor" id="TFRecordReader"></a>
+### `class tf.TFRecordReader` <a class="md-anchor" id="TFRecordReader"></a>
A Reader that outputs the records from a TFRecords file.
@@ -858,7 +858,7 @@ Whether the Reader implementation can serialize its state.
- - -
-### class tf.FixedLengthRecordReader <a class="md-anchor" id="FixedLengthRecordReader"></a>
+### `class tf.FixedLengthRecordReader` <a class="md-anchor" id="FixedLengthRecordReader"></a>
A Reader that outputs fixed-length records from a file.
@@ -1308,7 +1308,7 @@ Queues](../../how_tos/threading_and_queues/index.md).
- - -
-### class tf.QueueBase <a class="md-anchor" id="QueueBase"></a>
+### `class tf.QueueBase` <a class="md-anchor" id="QueueBase"></a>
Base class for queue implementations.
@@ -1503,7 +1503,7 @@ The underlying queue reference.
- - -
-### class tf.FIFOQueue <a class="md-anchor" id="FIFOQueue"></a>
+### `class tf.FIFOQueue` <a class="md-anchor" id="FIFOQueue"></a>
A queue implementation that dequeues elements in first-in-first-out order.
@@ -1546,7 +1546,7 @@ but the use of `dequeue_many` is disallowed.
- - -
-### class tf.RandomShuffleQueue <a class="md-anchor" id="RandomShuffleQueue"></a>
+### `class tf.RandomShuffleQueue` <a class="md-anchor" id="RandomShuffleQueue"></a>
A queue implementation that dequeues elements in a random order.
diff --git a/tensorflow/g3doc/api_docs/python/python_io.md b/tensorflow/g3doc/api_docs/python/python_io.md
index 822532450b..d349c0a106 100644
--- a/tensorflow/g3doc/api_docs/python/python_io.md
+++ b/tensorflow/g3doc/api_docs/python/python_io.md
@@ -5,7 +5,7 @@
## Contents
### [Data IO (Python functions)](#AUTOGENERATED-data-io--python-functions-)
* [Data IO (Python Functions)](#AUTOGENERATED-data-io--python-functions-)
- * [class tf.python_io.TFRecordWriter](#TFRecordWriter)
+ * [`class tf.python_io.TFRecordWriter`](#TFRecordWriter)
* [`tf.python_io.tf_record_iterator(path)`](#tf_record_iterator)
* [TFRecords Format Details](#AUTOGENERATED-tfrecords-format-details)
@@ -20,7 +20,7 @@ suitable if fast sharding or other non-sequential access is desired.
- - -
-### class tf.python_io.TFRecordWriter <a class="md-anchor" id="TFRecordWriter"></a>
+### `class tf.python_io.TFRecordWriter` <a class="md-anchor" id="TFRecordWriter"></a>
A class to write records to a TFRecords file.
diff --git a/tensorflow/g3doc/api_docs/python/sparse_ops.md b/tensorflow/g3doc/api_docs/python/sparse_ops.md
index aed6169dbf..8c83897031 100644
--- a/tensorflow/g3doc/api_docs/python/sparse_ops.md
+++ b/tensorflow/g3doc/api_docs/python/sparse_ops.md
@@ -9,8 +9,8 @@ Note: Functions taking `Tensor` arguments can also take anything accepted by
## Contents
### [Sparse Tensors](#AUTOGENERATED-sparse-tensors)
* [Sparse Tensor Representation](#AUTOGENERATED-sparse-tensor-representation)
- * [class tf.SparseTensor](#SparseTensor)
- * [class tf.SparseTensorValue](#SparseTensorValue)
+ * [`class tf.SparseTensor`](#SparseTensor)
+ * [`class tf.SparseTensorValue`](#SparseTensorValue)
* [Sparse to Dense Conversion](#AUTOGENERATED-sparse-to-dense-conversion)
* [`tf.sparse_to_dense(sparse_indices, output_shape, sparse_values, default_value, name=None)`](#sparse_to_dense)
* [`tf.sparse_tensor_to_dense(sp_input, default_value, name=None)`](#sparse_tensor_to_dense)
@@ -33,7 +33,7 @@ dimension, and dense along all other dimensions.
- - -
-### class tf.SparseTensor <a class="md-anchor" id="SparseTensor"></a>
+### `class tf.SparseTensor` <a class="md-anchor" id="SparseTensor"></a>
Represents a sparse tensor.
@@ -139,7 +139,7 @@ The `Graph` that contains the index, value, and shape tensors.
- - -
-### class tf.SparseTensorValue <a class="md-anchor" id="SparseTensorValue"></a>
+### `class tf.SparseTensorValue` <a class="md-anchor" id="SparseTensorValue"></a>
SparseTensorValue(indices, values, shape)
- - -
diff --git a/tensorflow/g3doc/api_docs/python/state_ops.md b/tensorflow/g3doc/api_docs/python/state_ops.md
index bf9bfc0875..4cc67541a8 100644
--- a/tensorflow/g3doc/api_docs/python/state_ops.md
+++ b/tensorflow/g3doc/api_docs/python/state_ops.md
@@ -9,7 +9,7 @@ Note: Functions taking `Tensor` arguments can also take anything accepted by
## Contents
### [Variables](#AUTOGENERATED-variables)
* [Variables](#AUTOGENERATED-variables)
- * [class tf.Variable](#Variable)
+ * [`class tf.Variable`](#Variable)
* [Variable helper functions](#AUTOGENERATED-variable-helper-functions)
* [`tf.all_variables()`](#all_variables)
* [`tf.trainable_variables()`](#trainable_variables)
@@ -17,7 +17,7 @@ Note: Functions taking `Tensor` arguments can also take anything accepted by
* [`tf.initialize_variables(var_list, name='init')`](#initialize_variables)
* [`tf.assert_variables_initialized(var_list=None)`](#assert_variables_initialized)
* [Saving and Restoring Variables](#AUTOGENERATED-saving-and-restoring-variables)
- * [class tf.train.Saver](#Saver)
+ * [`class tf.train.Saver`](#Saver)
* [`tf.train.latest_checkpoint(checkpoint_dir, latest_filename=None)`](#latest_checkpoint)
* [`tf.train.get_checkpoint_state(checkpoint_dir, latest_filename=None)`](#get_checkpoint_state)
* [`tf.train.update_checkpoint_state(save_dir, model_checkpoint_path, all_model_checkpoint_paths=None, latest_filename=None)`](#update_checkpoint_state)
@@ -36,7 +36,7 @@ Note: Functions taking `Tensor` arguments can also take anything accepted by
* [`tf.scatter_add(ref, indices, updates, use_locking=None, name=None)`](#scatter_add)
* [`tf.scatter_sub(ref, indices, updates, use_locking=None, name=None)`](#scatter_sub)
* [`tf.sparse_mask(a, mask_indices, name=None)`](#sparse_mask)
- * [class tf.IndexedSlices](#IndexedSlices)
+ * [`class tf.IndexedSlices`](#IndexedSlices)
<!-- TOC-END This section was generated by neural network, THANKS FOR READING! -->
@@ -45,7 +45,7 @@ Note: Functions taking `Tensor` arguments can also take anything accepted by
- - -
-### class tf.Variable <a class="md-anchor" id="Variable"></a>
+### `class tf.Variable` <a class="md-anchor" id="Variable"></a>
See the [Variables How To](../../how_tos/variables/index.md) for a high
level overview.
@@ -515,7 +515,7 @@ logged by the C++ runtime. This is expected.
- - -
-### class tf.train.Saver <a class="md-anchor" id="Saver"></a>
+### `class tf.train.Saver` <a class="md-anchor" id="Saver"></a>
Saves and restores variables.
@@ -1311,7 +1311,7 @@ tf.shape(b.values) => [2, 10]
- - -
-### class tf.IndexedSlices <a class="md-anchor" id="IndexedSlices"></a>
+### `class tf.IndexedSlices` <a class="md-anchor" id="IndexedSlices"></a>
A sparse representation of a set of tensor slices at given indices.
diff --git a/tensorflow/g3doc/api_docs/python/train.md b/tensorflow/g3doc/api_docs/python/train.md
index 93912c0feb..69f58ee2eb 100644
--- a/tensorflow/g3doc/api_docs/python/train.md
+++ b/tensorflow/g3doc/api_docs/python/train.md
@@ -5,20 +5,20 @@
## Contents
### [Training](#AUTOGENERATED-training)
* [Optimizers](#AUTOGENERATED-optimizers)
- * [class tf.train.Optimizer](#Optimizer)
+ * [`class tf.train.Optimizer`](#Optimizer)
* [Usage](#AUTOGENERATED-usage)
* [Processing gradients before applying them.](#AUTOGENERATED-processing-gradients-before-applying-them.)
* [Gating Gradients](#AUTOGENERATED-gating-gradients)
* [Slots](#AUTOGENERATED-slots)
- * [class tf.train.GradientDescentOptimizer](#GradientDescentOptimizer)
- * [class tf.train.AdagradOptimizer](#AdagradOptimizer)
- * [class tf.train.MomentumOptimizer](#MomentumOptimizer)
- * [class tf.train.AdamOptimizer](#AdamOptimizer)
- * [class tf.train.FtrlOptimizer](#FtrlOptimizer)
- * [class tf.train.RMSPropOptimizer](#RMSPropOptimizer)
+ * [`class tf.train.GradientDescentOptimizer`](#GradientDescentOptimizer)
+ * [`class tf.train.AdagradOptimizer`](#AdagradOptimizer)
+ * [`class tf.train.MomentumOptimizer`](#MomentumOptimizer)
+ * [`class tf.train.AdamOptimizer`](#AdamOptimizer)
+ * [`class tf.train.FtrlOptimizer`](#FtrlOptimizer)
+ * [`class tf.train.RMSPropOptimizer`](#RMSPropOptimizer)
* [Gradient Computation](#AUTOGENERATED-gradient-computation)
* [`tf.gradients(ys, xs, grad_ys=None, name='gradients', colocate_gradients_with_ops=False, gate_gradients=False, aggregation_method=None)`](#gradients)
- * [class tf.AggregationMethod](#AggregationMethod)
+ * [`class tf.AggregationMethod`](#AggregationMethod)
* [`tf.stop_gradient(input, name=None)`](#stop_gradient)
* [Gradient Clipping](#AUTOGENERATED-gradient-clipping)
* [`tf.clip_by_value(t, clip_value_min, clip_value_max, name=None)`](#clip_by_value)
@@ -29,10 +29,10 @@
* [Decaying the learning rate](#AUTOGENERATED-decaying-the-learning-rate)
* [`tf.train.exponential_decay(learning_rate, global_step, decay_steps, decay_rate, staircase=False, name=None)`](#exponential_decay)
* [Moving Averages](#AUTOGENERATED-moving-averages)
- * [class tf.train.ExponentialMovingAverage](#ExponentialMovingAverage)
+ * [`class tf.train.ExponentialMovingAverage`](#ExponentialMovingAverage)
* [Coordinator and QueueRunner](#AUTOGENERATED-coordinator-and-queuerunner)
- * [class tf.train.Coordinator](#Coordinator)
- * [class tf.train.QueueRunner](#QueueRunner)
+ * [`class tf.train.Coordinator`](#Coordinator)
+ * [`class tf.train.QueueRunner`](#QueueRunner)
* [`tf.train.add_queue_runner(qr, collection='queue_runners')`](#add_queue_runner)
* [`tf.train.start_queue_runners(sess=None, coord=None, daemon=True, start=True, collection='queue_runners')`](#start_queue_runners)
* [Summary Operations](#AUTOGENERATED-summary-operations)
@@ -43,7 +43,7 @@
* [`tf.merge_summary(inputs, collections=None, name=None)`](#merge_summary)
* [`tf.merge_all_summaries(key='summaries')`](#merge_all_summaries)
* [Adding Summaries to Event Files](#AUTOGENERATED-adding-summaries-to-event-files)
- * [class tf.train.SummaryWriter](#SummaryWriter)
+ * [`class tf.train.SummaryWriter`](#SummaryWriter)
* [`tf.train.summary_iterator(path)`](#summary_iterator)
* [Training utilities](#AUTOGENERATED-training-utilities)
* [`tf.train.global_step(sess, global_step_tensor)`](#global_step)
@@ -65,7 +65,7 @@ of the subclasses.
- - -
-### class tf.train.Optimizer <a class="md-anchor" id="Optimizer"></a>
+### `class tf.train.Optimizer` <a class="md-anchor" id="Optimizer"></a>
Base class for optimizers.
@@ -314,7 +314,7 @@ Use get_slot_names() to get the list of slot names created by the Optimizer.
- - -
-### class tf.train.GradientDescentOptimizer <a class="md-anchor" id="GradientDescentOptimizer"></a>
+### `class tf.train.GradientDescentOptimizer` <a class="md-anchor" id="GradientDescentOptimizer"></a>
Optimizer that implements the gradient descent algorithm.
@@ -337,7 +337,7 @@ Construct a new gradient descent optimizer.
- - -
-### class tf.train.AdagradOptimizer <a class="md-anchor" id="AdagradOptimizer"></a>
+### `class tf.train.AdagradOptimizer` <a class="md-anchor" id="AdagradOptimizer"></a>
Optimizer that implements the Adagrad algorithm.
@@ -366,7 +366,7 @@ Construct a new Adagrad optimizer.
- - -
-### class tf.train.MomentumOptimizer <a class="md-anchor" id="MomentumOptimizer"></a>
+### `class tf.train.MomentumOptimizer` <a class="md-anchor" id="MomentumOptimizer"></a>
Optimizer that implements the Momentum algorithm.
@@ -389,7 +389,7 @@ Construct a new Momentum optimizer.
- - -
-### class tf.train.AdamOptimizer <a class="md-anchor" id="AdamOptimizer"></a>
+### `class tf.train.AdamOptimizer` <a class="md-anchor" id="AdamOptimizer"></a>
Optimizer that implements the Adam algorithm.
@@ -442,7 +442,7 @@ current good choice is 1.0 or 0.1.
- - -
-### class tf.train.FtrlOptimizer <a class="md-anchor" id="FtrlOptimizer"></a>
+### `class tf.train.FtrlOptimizer` <a class="md-anchor" id="FtrlOptimizer"></a>
Optimizer that implements the FTRL algorithm.
@@ -500,7 +500,7 @@ using this function.
- - -
-### class tf.train.RMSPropOptimizer <a class="md-anchor" id="RMSPropOptimizer"></a>
+### `class tf.train.RMSPropOptimizer` <a class="md-anchor" id="RMSPropOptimizer"></a>
Optimizer that implements the RMSProp algorithm.
@@ -585,7 +585,7 @@ each y).
- - -
-### class tf.AggregationMethod <a class="md-anchor" id="AggregationMethod"></a>
+### `class tf.AggregationMethod` <a class="md-anchor" id="AggregationMethod"></a>
A class listing aggregation methods used to combine gradients.
@@ -882,7 +882,7 @@ moving averages for evaluations often improve results significantly.
- - -
-### class tf.train.ExponentialMovingAverage <a class="md-anchor" id="ExponentialMovingAverage"></a>
+### `class tf.train.ExponentialMovingAverage` <a class="md-anchor" id="ExponentialMovingAverage"></a>
Maintains moving averages of variables by employing an exponential decay.
@@ -1084,7 +1084,7 @@ see [Queues](../../api_docs/python/io_ops.md#queues).
- - -
-### class tf.train.Coordinator <a class="md-anchor" id="Coordinator"></a>
+### `class tf.train.Coordinator` <a class="md-anchor" id="Coordinator"></a>
A coordinator for threads.
@@ -1253,7 +1253,7 @@ Wait till the Coordinator is told to stop.
- - -
-### class tf.train.QueueRunner <a class="md-anchor" id="QueueRunner"></a>
+### `class tf.train.QueueRunner` <a class="md-anchor" id="QueueRunner"></a>
Holds a list of enqueue operations for a queue, each to be run in a thread.
@@ -1600,7 +1600,7 @@ overview of summaries, event files, and visualization in TensorBoard.
- - -
-### class tf.train.SummaryWriter <a class="md-anchor" id="SummaryWriter"></a>
+### `class tf.train.SummaryWriter` <a class="md-anchor" id="SummaryWriter"></a>
Writes `Summary` protocol buffers to event files.
diff --git a/tensorflow/g3doc/get_started/os_setup.md b/tensorflow/g3doc/get_started/os_setup.md
index 06b89e3a2e..f12d189f41 100644
--- a/tensorflow/g3doc/get_started/os_setup.md
+++ b/tensorflow/g3doc/get_started/os_setup.md
@@ -365,15 +365,24 @@ If you encounter:
ImportError: No module named copyreg
```
-Solution: TensorFlow depends on protobuf which require six-1.10.0. Apple's
-default python environment has six-1.4.1 and may be difficult to upgrade.
-So we recommend either installing a separate copy of python via homebrew:
+Solution: TensorFlow depends on protobuf, which requires `six-1.10.0`. Apple's
+default python environment has `six-1.4.1` and may be difficult to upgrade.
+There are several ways to fix this:
-```bash
-brew install python
-```
+1. Upgrade the system-wide copy of `six`:
+
+ ```bash
+ sudo easy_install -U six
+ ```
+
+2. Install a separate copy of python via homebrew:
+
+ ```bash
+ brew install python
+ ```
-or building / using TensorFlow within `virtualenv` as described above.
+3. Build or use TensorFlow
+ [within `virtualenv`](#virtualenv_install).
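
All three fixes above hinge on which Python environment is actually active. A minimal diagnostic sketch to confirm the interpreter in use and the copy of `six` it would import (the `six` import is guarded, since it may not be installed at all):

```python
import sys

# Which interpreter is running? Apple's system Python lives under
# /usr/bin or /System/Library; a Homebrew or virtualenv copy lives elsewhere.
print("executable:", sys.executable)
print("version:    %d.%d.%d" % sys.version_info[:3])

# Which copy of `six` (if any) would this interpreter import?
try:
    import six
    print("six %s from %s" % (six.__version__, six.__file__))
except ImportError:
    print("six: not installed for this interpreter")
```

If `executable` points at the system Python and `six` reports a version below 1.10.0, one of the fixes above is needed.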
diff --git a/tensorflow/g3doc/how_tos/adding_an_op/index.md b/tensorflow/g3doc/how_tos/adding_an_op/index.md
index a9fdf747d1..9f3b3985f1 100644
--- a/tensorflow/g3doc/how_tos/adding_an_op/index.md
+++ b/tensorflow/g3doc/how_tos/adding_an_op/index.md
@@ -228,14 +228,15 @@ implementation.
This asserts that the input is a vector, and returns having set the
`InvalidArgument` status if it isn't. The
-[OP_REQUIRES macro][validation-macros] takes three arguments:
+[`OP_REQUIRES` macro][validation-macros] takes three arguments:
* The `context`, which can either be an `OpKernelContext` or
`OpKernelConstruction` pointer (see
[`tensorflow/core/framework/op_kernel.h`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/framework/op_kernel.h)),
for its `SetStatus()` method.
* The condition. For example, there are functions for validating the shape
- of a tensor in [`tensorflow/core/public/tensor_shape.h`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/public/tensor_shape.h)
+ of a tensor in
+ [`tensorflow/core/public/tensor_shape.h`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/public/tensor_shape.h)
* The error itself, which is represented by a `Status` object, see
[`tensorflow/core/public/status.h`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/public/status.h). A
`Status` has both a type (frequently `InvalidArgument`, but see the list of
@@ -347,7 +348,7 @@ The following types are supported in an attr:
* `list(<type>)`: A list of `<type>`, where `<type>` is one of the above types.
Note that `list(list(<type>))` is invalid.
-See also: [op_def_builder.cc:FinalizeAttr][FinalizeAttr] for a definitive list.
+See also: [`op_def_builder.cc:FinalizeAttr`][FinalizeAttr] for a definitive list.
#### Default values & constraints <a class="md-anchor" id="AUTOGENERATED-default-values---constraints"></a>
@@ -904,7 +905,7 @@ create a new operation with a new name with the new semantics.
You can implement different OpKernels and register one for CPU and another for
GPU, just like you can [register kernels for different types](#Polymorphism).
There are several examples of kernels with GPU support in
-[tensorflow/core/kernels/](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/kernels/).
+[`tensorflow/core/kernels/`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/kernels/).
Notice some kernels have a CPU version in a `.cc` file, a GPU version in a file
ending in `_gpu.cu.cc`, and some code shared in common in a `.h` file.
@@ -1018,7 +1019,7 @@ returns a list of
output of the op). To register a shape function, apply the
[`tf.RegisterShape` decorator](../../api_docs/python/framework.md#RegisterShape)
to a shape function. For example, the
-[ZeroOut op defined above](#define_interface) would have a shape function like
+[`ZeroOut` op defined above](#define_interface) would have a shape function like
the following:
```python
@@ -1033,7 +1034,7 @@ def _zero_out_shape(op):
```
A shape function can also constrain the shape of an input. For the version of
-[ZeroOut with a vector shape constraint](#Validation), the shape function
+[`ZeroOut` with a vector shape constraint](#Validation), the shape function
would be as follows:
```python
@@ -1066,7 +1067,7 @@ def _int_list_input_example_shape(op):
Since shape inference is an optional feature, and the shapes of tensors may vary
dynamically, shape functions must be robust to incomplete shape information for
-any of the inputs. The [`merge_with()`](../../api_docs/python/framework.md)
+any of the inputs. The [`merge_with`](../../api_docs/python/framework.md)
method allows the caller to assert that two shapes are the same, even if either
or both of them do not have complete information. Shape functions are defined
for all of the
diff --git a/tensorflow/g3doc/how_tos/new_data_formats/index.md b/tensorflow/g3doc/how_tos/new_data_formats/index.md
index 2c019c79ec..676f78eae9 100644
--- a/tensorflow/g3doc/how_tos/new_data_formats/index.md
+++ b/tensorflow/g3doc/how_tos/new_data_formats/index.md
@@ -35,14 +35,14 @@ A `Reader` is something that reads records from a file. There are some examples
of Reader Ops already built into TensorFlow:
* [`tf.TFRecordReader`](../../api_docs/python/io_ops.md#TFRecordReader)
- ([source in kernels/tf_record_reader_op.cc](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/kernels/tf_record_reader_op.cc))
+ ([source in `kernels/tf_record_reader_op.cc`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/kernels/tf_record_reader_op.cc))
* [`tf.FixedLengthRecordReader`](../../api_docs/python/io_ops.md#FixedLengthRecordReader)
- ([source in kernels/fixed_length_record_reader_op.cc](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/kernels/fixed_length_record_reader_op.cc))
+ ([source in `kernels/fixed_length_record_reader_op.cc`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/kernels/fixed_length_record_reader_op.cc))
* [`tf.TextLineReader`](../../api_docs/python/io_ops.md#TextLineReader)
- ([source in kernels/text_line_reader_op.cc](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/kernels/text_line_reader_op.cc))
+ ([source in `kernels/text_line_reader_op.cc`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/kernels/text_line_reader_op.cc))
You can see these all expose the same interface; the only differences
-are in their constructors. The most important method is `read()`.
+are in their constructors. The most important method is `read`.
It takes a queue argument, which is where it gets filenames to
read from whenever it needs one (e.g. when the `read` op first runs, or
the previous `read` reads the last record from a file). It produces
@@ -59,7 +59,7 @@ To create a new reader called `SomeReader`, you will need to:
You can put all the C++ code in a file in
`tensorflow/core/user_ops/some_reader_op.cc`. The code to read a file will live
in a descendant of the C++ `ReaderBase` class, which is defined in
-[tensorflow/core/kernels/reader_base.h](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/kernels/reader_base.h).
+[`tensorflow/core/kernels/reader_base.h`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/kernels/reader_base.h).
You will need to implement the following methods:
* `OnWorkStartedLocked`: open the next file
@@ -73,13 +73,13 @@ have to worry about thread safety (though that only protects the members of the
class, not global state).
For `OnWorkStartedLocked`, the name of the file to open is the value returned by
-the `current_work()` method. `ReadLocked()` has this signature:
+the `current_work()` method. `ReadLocked` has this signature:
```c++
Status ReadLocked(string* key, string* value, bool* produced, bool* at_end)
```
-If `ReadLocked()` successfully reads a record from the file, it should fill in:
+If `ReadLocked` successfully reads a record from the file, it should fill in:
* `*key`: with an identifier for the record that a human could use to find
this record again. You can include the filename from `current_work()`,
@@ -90,7 +90,7 @@ If `ReadLocked()` successfully reads a record from the file, it should fill in:
If you hit the end of a file (EOF), set `*at_end` to `true`. In either case,
return `Status::OK()`. If there is an error, simply return it using one of the
helper functions from
-[tensorflow/core/lib/core/errors.h](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/lib/core/errors.h)
+[`tensorflow/core/lib/core/errors.h`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/lib/core/errors.h)
without modifying any arguments.
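As a rough illustration of this contract (the real implementation is C++; the class and file names here are hypothetical), a `ReadLocked`-style line reader could behave like the following plain-Python sketch, returning the equivalents of the four output arguments:

```python
# Plain-Python sketch of the ReadLocked contract for a line-oriented reader.
# Returns (key, value, produced, at_end), mirroring the C++ output arguments.
class LineReaderSketch:
    def __init__(self, filename, lines):
        self.filename = filename          # stands in for current_work()
        self.lines = iter(enumerate(lines))

    def read_locked(self):
        try:
            index, line = next(self.lines)
        except StopIteration:
            # End of file: set *at_end, produce nothing, report success.
            return None, None, False, True
        key = "%s:%d" % (self.filename, index)  # filename plus position
        return key, line, True, False           # *produced = true
```

For example, `read_locked()` on a two-line file yields two `(key, value, True, False)` tuples and then a tuple with `at_end` set.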
Next you will create the actual Reader op. It will help if you are familiar
@@ -100,13 +100,13 @@ are:
* Registering the op.
* Define and register an `OpKernel`.
-To register the op, you will use a `REGISTER_OP()` call defined in
-[tensorflow/core/framework/op.h](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/framework/op.h).
+To register the op, you will use a `REGISTER_OP` call defined in
+[`tensorflow/core/framework/op.h`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/framework/op.h).
Reader ops never take any input and always have a single output with type
`Ref(string)`. They should always call `SetIsStateful()`, and have a string
`container` and `shared_name` attrs. You may optionally define additional attrs
-for configuration or include documentation in a `Doc()`. For examples, see
-[tensorflow/core/ops/io_ops.cc](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/ops/io_ops.cc),
+for configuration or include documentation in a `Doc`. For examples, see
+[`tensorflow/core/ops/io_ops.cc`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/ops/io_ops.cc),
e.g.:
```c++
@@ -125,8 +125,8 @@ A Reader that outputs the lines of a file delimited by '\n'.
To define an `OpKernel`, Readers can use the shortcut of descending from
`ReaderOpKernel`, defined in
-[tensorflow/core/framework/reader_op_kernel.h](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/framework/reader_op_kernel.h),
-and implement a constructor that calls `SetReaderFactory()`. After defining
+[`tensorflow/core/framework/reader_op_kernel.h`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/framework/reader_op_kernel.h),
+and implement a constructor that calls `SetReaderFactory`. After defining
your class, you will need to register it using `REGISTER_KERNEL_BUILDER(...)`.
An example with no attrs:
@@ -174,7 +174,7 @@ REGISTER_KERNEL_BUILDER(Name("TextLineReader").Device(DEVICE_CPU),
The last step is to add the Python wrapper. You will import
`tensorflow.python.ops.io_ops` in
-[tensorflow/python/user_ops/user_ops.py](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/python/user_ops/user_ops.py)
+[`tensorflow/python/user_ops/user_ops.py`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/python/user_ops/user_ops.py)
and add a descendant of [`io_ops.ReaderBase`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/python/ops/io_ops.py).
```python
@@ -214,7 +214,7 @@ Examples of Ops useful for decoding records:
Note that it can be useful to use multiple Ops to decode a particular record
format. For example, you may have an image saved as a string in
-[a tf.train.Example protocol buffer](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/example/example.proto).
+[a `tf.train.Example` protocol buffer](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/example/example.proto).
Depending on the format of that image, you might take the corresponding output
from a
[`tf.parse_single_example`](../../api_docs/python/io_ops.md#parse_single_example)
diff --git a/tensorflow/g3doc/how_tos/reading_data/index.md b/tensorflow/g3doc/how_tos/reading_data/index.md
index a304e1d669..64209b8bd0 100644
--- a/tensorflow/g3doc/how_tos/reading_data/index.md
+++ b/tensorflow/g3doc/how_tos/reading_data/index.md
@@ -51,7 +51,7 @@ it is executed without a feed, so you won't forget to feed it.
An example using `placeholder` and feeding to train on MNIST data can be found
in
-[tensorflow/g3doc/tutorials/mnist/fully_connected_feed.py](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/g3doc/tutorials/mnist/fully_connected_feed.py),
+[`tensorflow/g3doc/tutorials/mnist/fully_connected_feed.py`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/g3doc/tutorials/mnist/fully_connected_feed.py),
and is described in the [MNIST tutorial](../../tutorials/mnist/tf/index.md).
## Reading from files <a class="md-anchor" id="AUTOGENERATED-reading-from-files"></a>
@@ -71,10 +71,10 @@ A typical pipeline for reading records from files has the following stages:
For the list of filenames, use either a constant string Tensor (like
`["file0", "file1"]` or `[("file%d" % i) for i in range(2)]`) or the
-[tf.train.match_filenames_once
+[`tf.train.match_filenames_once`
function](../../api_docs/python/io_ops.md#match_filenames_once).
-Pass the list of filenames to the [tf.train.string_input_producer
+Pass the list of filenames to the [`tf.train.string_input_producer`
function](../../api_docs/python/io_ops.md#string_input_producer).
`string_input_producer` creates a FIFO queue for holding the filenames until
the reader needs them.
@@ -101,8 +101,8 @@ decode this string into the tensors that make up an example.
To read text files in [comma-separated value (CSV)
format](https://tools.ietf.org/html/rfc4180), use a
-[TextLineReader](../../api_docs/python/io_ops.md#TextLineReader) with the
-[decode_csv](../../api_docs/python/io_ops.md#decode_csv) operation. For example:
+[`TextLineReader`](../../api_docs/python/io_ops.md#TextLineReader) with the
+[`decode_csv`](../../api_docs/python/io_ops.md#decode_csv) operation. For example:
```python
filename_queue = tf.train.string_input_producer(["file0.csv", "file1.csv"])
@@ -130,20 +130,20 @@ with tf.Session() as sess:
coord.join(threads)
```
-Each execution of `read()` reads a single line from the file. The
-`decode_csv()` op then parses the result into a list of tensors. The
+Each execution of `read` reads a single line from the file. The
+`decode_csv` op then parses the result into a list of tensors. The
`record_defaults` argument determines the type of the resulting tensors and
sets the default value to use if a value is missing in the input string.
-You must call `tf.train.start_queue_runners()` to populate the queue before
-you call `run()` or `eval()` to execute the `read()`. Otherwise `read()` will
+You must call `tf.train.start_queue_runners` to populate the queue before
+you call `run` or `eval` to execute the `read`. Otherwise `read` will
block while it waits for filenames from the queue.
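As a rough sketch of the behavior described above (plain Python, not the TensorFlow op; the function name is made up for illustration), `decode_csv`-style parsing uses `record_defaults` both to pick each field's type and to fill in missing values:

```python
# Sketch of decode_csv-style parsing in plain Python (not the TF op).
# record_defaults supplies both the type and the fallback for missing fields.
def decode_csv_line(line, record_defaults):
    fields = line.rstrip("\n").split(",")
    out = []
    for raw, default in zip(fields, record_defaults):
        if raw == "":
            out.append(default)             # missing value: use the default
        else:
            out.append(type(default)(raw))  # cast to the default's type
    return out
```

For example, `decode_csv_line("1,,3.5", [0, 0, 3.0])` fills the empty middle field with its default and casts the others, giving `[1, 0, 3.5]`.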
#### Fixed length records <a class="md-anchor" id="AUTOGENERATED-fixed-length-records"></a>
To read binary files in which each record is a fixed number of bytes, use
-[tf.FixedLengthRecordReader](../../api_docs/python/io_ops.md#FixedLengthRecordReader)
-with the [tf.decode_raw](../../api_docs/python/io_ops.md#decode_raw) operation.
+[`tf.FixedLengthRecordReader`](../../api_docs/python/io_ops.md#FixedLengthRecordReader)
+with the [`tf.decode_raw`](../../api_docs/python/io_ops.md#decode_raw) operation.
The `decode_raw` op converts from a string to a uint8 tensor.
For example, [the CIFAR-10 dataset](http://www.cs.toronto.edu/~kriz/cifar.html)
@@ -151,7 +151,7 @@ uses a file format where each record is represented using a fixed number of
bytes: 1 byte for the label followed by 3072 bytes of image data. Once you have
a uint8 tensor, standard operations can slice out each piece and reformat as
needed. For CIFAR-10, you can see how to do the reading and decoding in
-[tensorflow/models/image/cifar10/cifar10_input.py](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/models/image/cifar10/cifar10_input.py)
+[`tensorflow/models/image/cifar10/cifar10_input.py`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/models/image/cifar10/cifar10_input.py)
and described in
[this tutorial](../../tutorials/deep_cnn/index.md#prepare-the-data).
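The slicing described above can be sketched in plain Python (a minimal illustration of the record layout, not the TensorFlow reader): each CIFAR-10-style record is one label byte followed by 3072 image bytes.

```python
# Sketch (plain Python) of slicing fixed-length CIFAR-10-style records:
# each record is 1 label byte followed by 32*32*3 = 3072 image bytes.
LABEL_BYTES = 1
IMAGE_BYTES = 32 * 32 * 3  # 3072
RECORD_BYTES = LABEL_BYTES + IMAGE_BYTES

def parse_records(data):
    records = []
    for offset in range(0, len(data), RECORD_BYTES):
        record = data[offset:offset + RECORD_BYTES]
        label = record[0]                 # first byte is the class label
        image = record[LABEL_BYTES:]      # remaining bytes are pixel data
        records.append((label, image))
    return records
```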
@@ -161,24 +161,24 @@ Another approach is to convert whatever data you have into a supported format.
This approach makes it easier to mix and match data sets and network
architectures. The recommended format for TensorFlow is a TFRecords file
containing
-[tf.train.Example protocol buffers](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/example/example.proto)
+[`tf.train.Example` protocol buffers](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/example/example.proto)
(which contain
[`Features`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/example/feature.proto)
as a field). You write a little program that gets your data, stuffs it in an
`Example` protocol buffer, serializes the protocol buffer to a string, and then
writes the string to a TFRecords file using the
-[tf.python_io.TFRecordWriter class](../../api_docs/python/python_io.md#TFRecordWriter).
+[`tf.python_io.TFRecordWriter` class](../../api_docs/python/python_io.md#TFRecordWriter).
For example,
-[tensorflow/g3doc/how_tos/reading_data/convert_to_records.py](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/g3doc/how_tos/reading_data/convert_to_records.py)
+[`tensorflow/g3doc/how_tos/reading_data/convert_to_records.py`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/g3doc/how_tos/reading_data/convert_to_records.py)
converts MNIST data to this format.
To read a file of TFRecords, use
-[tf.TFRecordReader](../../api_docs/python/io_ops.md#TFRecordReader) with
-the [tf.parse_single_example](../../api_docs/python/io_ops.md#parse_single_example)
+[`tf.TFRecordReader`](../../api_docs/python/io_ops.md#TFRecordReader) with
+the [`tf.parse_single_example`](../../api_docs/python/io_ops.md#parse_single_example)
decoder. The `parse_single_example` op decodes the example protocol buffers into
tensors. An MNIST example using the data produced by `convert_to_records` can be
found in
-[tensorflow/g3doc/how_tos/reading_data/fully_connected_reader.py](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/g3doc/how_tos/reading_data/fully_connected_reader.py),
+[`tensorflow/g3doc/how_tos/reading_data/fully_connected_reader.py`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/g3doc/how_tos/reading_data/fully_connected_reader.py),
which you can compare with the `fully_connected_feed` version.
### Preprocessing <a class="md-anchor" id="AUTOGENERATED-preprocessing"></a>
@@ -187,7 +187,7 @@ You can then do any preprocessing of these examples you want. This would be any
processing that doesn't depend on trainable parameters. Examples include
normalization of your data, picking a random slice, adding noise or distortions,
etc. See
-[tensorflow/models/image/cifar10/cifar10.py](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/models/image/cifar10/cifar10.py)
+[`tensorflow/models/image/cifar10/cifar10.py`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/models/image/cifar10/cifar10.py)
for an example.
### Batching <a class="md-anchor" id="AUTOGENERATED-batching"></a>
@@ -195,7 +195,7 @@ for an example.
At the end of the pipeline we use another queue to batch together examples for
training, evaluation, or inference. For this we use a queue that randomizes the
order of examples, using the
-[tf.train.shuffle_batch function](../../api_docs/python/io_ops.md#shuffle_batch).
+[`tf.train.shuffle_batch` function](../../api_docs/python/io_ops.md#shuffle_batch).
Example:
@@ -227,7 +227,7 @@ def input_pipeline(filenames, batch_size, num_epochs=None):
If you need more parallelism or shuffling of examples between files, use
multiple reader instances using the
-[tf.train.shuffle_batch_join function](../../api_docs/python/io_ops.md#shuffle_batch_join).
+[`tf.train.shuffle_batch_join` function](../../api_docs/python/io_ops.md#shuffle_batch_join).
For example:
```
@@ -253,7 +253,7 @@ epoch until all the files from the epoch have been started. (It is also usually
sufficient to have a single thread filling the filename queue.)
An alternative is to use a single reader via the
-[tf.train.shuffle_batch function](../../api_docs/python/io_ops.md#shuffle_batch)
+[`tf.train.shuffle_batch` function](../../api_docs/python/io_ops.md#shuffle_batch)
with `num_threads` bigger than 1. This will make it read from a single file at
the same time (but faster than with 1 thread), instead of N files at once.
This can be important:
@@ -273,11 +273,11 @@ enough reading threads, that summary will stay above zero. You can
The short version: many of the `tf.train` functions listed above add
[`QueueRunner`](../../api_docs/python/train.md#QueueRunner) objects to your
graph. These require that you call
-[tf.train.start_queue_runners](../../api_docs/python/train.md#start_queue_runners)
+[`tf.train.start_queue_runners`](../../api_docs/python/train.md#start_queue_runners)
before running any training or inference steps, or it will hang forever. This
will start threads that run the input pipeline, filling the example queue so
that the dequeue to get the examples will succeed. This is best combined with a
-[tf.train.Coordinator](../../api_docs/python/train.md#Coordinator) to cleanly
+[`tf.train.Coordinator`](../../api_docs/python/train.md#Coordinator) to cleanly
shut down these threads when there are errors. If you set a limit on the number
of epochs, that will use an epoch counter that will need to be initialized. The
recommended code pattern combining these is:
@@ -330,13 +330,13 @@ queue.
</div>
The helpers in `tf.train` that create these queues and enqueuing operations add
-a [tf.train.QueueRunner docs](../../api_docs/python/train.md#QueueRunner) to the
+a [`tf.train.QueueRunner`](../../api_docs/python/train.md#QueueRunner) to the
graph using the
-[tf.train.add_queue_runner](../../api_docs/python/train.md#add_queue_runner)
+[`tf.train.add_queue_runner`](../../api_docs/python/train.md#add_queue_runner)
function. Each `QueueRunner` is responsible for one stage, and holds the list of
enqueue operations that need to be run in threads. Once the graph is
constructed, the
-[tf.train.start_queue_runners](../../api_docs/python/train.md#start_queue_runners)
+[`tf.train.start_queue_runners`](../../api_docs/python/train.md#start_queue_runners)
function asks each QueueRunner in the graph to start its threads running the
enqueuing operations.
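The idea above can be sketched with plain Python threading and a standard-library queue (an illustration of the pattern only; the class and function names here are invented, not the TensorFlow API): each runner owns a queue and the enqueue callables that feed it, and starting it spawns one thread per callable.

```python
import queue
import threading

# Sketch of the QueueRunner idea: each runner holds a queue plus the
# enqueue callables that feed it, and starts one thread per callable.
class QueueRunnerSketch:
    def __init__(self, q, enqueue_fns):
        self.q = q
        self.enqueue_fns = enqueue_fns

    def start_threads(self):
        threads = []
        for fn in self.enqueue_fns:
            t = threading.Thread(target=fn, args=(self.q,))
            t.start()
            threads.append(t)
        return threads

def enqueue_filenames(q):
    # Stand-in for a filename-producing enqueue op.
    for name in ["file0.csv", "file1.csv"]:
        q.put(name)
```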
@@ -348,7 +348,7 @@ is the TensorFlow equivalent of "end of file" (EOF) -- this means the epoch
limit has been reached and no more examples are available.
The last ingredient is the
-[Coordinator](../../api_docs/python/train.md#Coordinator). This is responsible
+[`Coordinator`](../../api_docs/python/train.md#Coordinator). This is responsible
for letting all the threads know if anything has signalled a shut down. Most
commonly this would be because an exception was raised, for example one of the
threads got an error when running some operation (or an ordinary Python
@@ -383,21 +383,21 @@ associated with a single QueueRunner. If this isn't the last thread in the
QueueRunner, the `OutOfRange` error just causes the one thread to exit. This
allows the other threads, which are still finishing up their last file, to
proceed until they finish as well. (Assuming you are using a
-[tf.train.Coordinator](../../api_docs/python/train.md#Coordinator),
+[`tf.train.Coordinator`](../../api_docs/python/train.md#Coordinator),
other types of errors will cause all the threads to stop.) Once all the reader
threads hit the `OutOfRange` error, only then does the next queue, the example
queue, get closed.
Again, the example queue will have some elements queued, so training will
continue until those are exhausted. If the example queue is a
-[RandomShuffleQueue](../../api_docs/python/io_ops.md#RandomShuffleQueue), say
+[`RandomShuffleQueue`](../../api_docs/python/io_ops.md#RandomShuffleQueue), say
because you are using `shuffle_batch` or `shuffle_batch_join`, it normally will
avoid ever having fewer than its `min_after_dequeue` attr elements
buffered. However, once the queue is closed that restriction will be lifted and
the queue will eventually empty. At that point the actual training threads,
when they try to dequeue from the example queue, will start getting `OutOfRange`
errors and exiting. Once all the training threads are done,
-[tf.train.Coordinator.join()](../../api_docs/python/train.md#Coordinator.join)
+[`tf.train.Coordinator.join`](../../api_docs/python/train.md#Coordinator.join)
will return and you can exit cleanly.
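The coordinated-shutdown behavior described above can be sketched with plain Python threading (an analogue of the pattern under stated assumptions, not the TensorFlow class): workers poll `should_stop()`, `request_stop()` lets them wind down, and `join()` waits for them.

```python
import threading

# Minimal sketch of the Coordinator idea using plain Python threading:
# workers poll should_stop(); request_stop() tells them to wind down;
# join() waits until every thread has exited.
class CoordinatorSketch:
    def __init__(self):
        self._stop_event = threading.Event()

    def should_stop(self):
        return self._stop_event.is_set()

    def request_stop(self):
        self._stop_event.set()

    def join(self, threads):
        for t in threads:
            t.join()

def worker(coord, results):
    while not coord.should_stop():
        results.append(1)  # stand-in for a training step
```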
### Filtering records or producing multiple examples per record <a class="md-anchor" id="AUTOGENERATED-filtering-records-or-producing-multiple-examples-per-record"></a>
@@ -413,7 +413,7 @@ when calling one of the batching functions (such as `shuffle_batch` or
SparseTensors don't play well with queues. If you use SparseTensors you have
to decode the string records using
-[tf.parse_example](../../api_docs/python/io_ops.md#parse_example) **after**
+[`tf.parse_example`](../../api_docs/python/io_ops.md#parse_example) **after**
batching (instead of using `tf.parse_single_example` before batching).
## Preloaded data <a class="md-anchor" id="AUTOGENERATED-preloaded-data"></a>
@@ -461,17 +461,17 @@ update it when training. Setting `collections=[]` keeps the variable out of the
`GraphKeys.VARIABLES` collection used for saving and restoring checkpoints.
Either way,
-[tf.train.slice_input_producer function](../../api_docs/python/io_ops.md#slice_input_producer)
+[`tf.train.slice_input_producer` function](../../api_docs/python/io_ops.md#slice_input_producer)
can be used to produce a slice at a time. This shuffles the examples across an
entire epoch, so further shuffling when batching is undesirable. So instead of
using the `shuffle_batch` functions, we use the plain
-[tf.train.batch function](../../api_docs/python/io_ops.md#batch). To use
+[`tf.train.batch` function](../../api_docs/python/io_ops.md#batch). To use
multiple preprocessing threads, set the `num_threads` parameter to a number
bigger than 1.
An MNIST example that preloads the data using constants can be found in
-[tensorflow/g3doc/how_tos/reading_data/fully_connected_preloaded.py](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/g3doc/how_tos/reading_data/fully_connected_preloaded.py), and one that preloads the data using variables can be found in
-[tensorflow/g3doc/how_tos/reading_data/fully_connected_preloaded_var.py](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/g3doc/how_tos/reading_data/fully_connected_preloaded_var.py),
+[`tensorflow/g3doc/how_tos/reading_data/fully_connected_preloaded.py`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/g3doc/how_tos/reading_data/fully_connected_preloaded.py), and one that preloads the data using variables can be found in
+[`tensorflow/g3doc/how_tos/reading_data/fully_connected_preloaded_var.py`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/g3doc/how_tos/reading_data/fully_connected_preloaded_var.py).
You can compare these with the `fully_connected_feed` and
`fully_connected_reader` versions above.
diff --git a/tensorflow/g3doc/tutorials/deep_cnn/index.md b/tensorflow/g3doc/tutorials/deep_cnn/index.md
index 26b2906e0a..323ad54bdc 100644
--- a/tensorflow/g3doc/tutorials/deep_cnn/index.md
+++ b/tensorflow/g3doc/tutorials/deep_cnn/index.md
@@ -111,7 +111,7 @@ The input part of the model is built by the functions `inputs()` and
`distorted_inputs()` which read images from the CIFAR-10 binary data files.
These files contain fixed byte length records, so we use
[`tf.FixedLengthRecordReader`](../../api_docs/python/io_ops.md#FixedLengthRecordReader).
-See [`Reading Data`](../../how_tos/reading_data/index.md#reading-from-files) to
+See [Reading Data](../../how_tos/reading_data/index.md#reading-from-files) to
learn more about how the `Reader` class works.
The images are processed as follows:
@@ -128,7 +128,7 @@ artificially increase the data set size:
* Randomly distort the [image brightness](../../api_docs/python/image.md#random_brightness).
* Randomly distort the [image contrast](../../api_docs/python/image.md#tf_image_random_contrast).
-Please see the [`Images`](../../api_docs/python/image.md) page for the list of
+Please see the [Images](../../api_docs/python/image.md) page for the list of
available distortions. We also attach an
[`image_summary`](../../api_docs/python/train.md#image_summary) to the images
so that we may visualize them in TensorBoard. This is a good practice to verify
@@ -224,7 +224,7 @@ the script `cifar10_train.py`.
python cifar10_train.py
```
-**NOTE:** The first time your run any target in the CIFAR-10 tutorial,
+**NOTE:** The first time you run any target in the CIFAR-10 tutorial,
the CIFAR-10 dataset is automatically downloaded. The data set is ~160MB
so you may want to grab a quick cup of coffee for your first run.
@@ -297,7 +297,7 @@ interesting to track over time. However, the loss exhibits a considerable amount
of noise due to the small batch size employed by training. In practice we find
it extremely useful to visualize their moving averages in addition to their raw
values. See how the scripts use
-[ExponentialMovingAverage](../../api_docs/python/train.md#ExponentialMovingAverage)
+[`ExponentialMovingAverage`](../../api_docs/python/train.md#ExponentialMovingAverage)
for this purpose.
## Evaluating a Model <a class="md-anchor" id="evaluating-a-model"></a>
@@ -381,7 +381,7 @@ of data across the GPUs.
This setup requires that all GPUs share the model parameters. A well-known
fact is that transferring data to and from GPUs is quite slow. For this
reason, we decide to store and update all model parameters on the CPU (see
-green box). A fresh set of model parameters are transferred to the GPU
+green box). A fresh set of model parameters is transferred to the GPU
when a new batch of data is processed by all GPUs.
The GPUs are synchronized in operation. All gradients are accumulated from
@@ -395,7 +395,7 @@ abstractions.
The first abstraction we require is a function for computing inference and
gradients for a single model replica. In the code we term this abstraction
-a *tower*. We must set two attributes for each tower:
+a "tower". We must set two attributes for each tower:
* A unique name for all operations within a tower.
[`tf.name_scope()`](../../api_docs/python/framework.md#name_scope) provides
@@ -446,7 +446,7 @@ with a batch size of 64 and compare the training speed.
## Next Steps <a class="md-anchor" id="AUTOGENERATED-next-steps"></a>
-[Congratulations!](https://www.youtube.com/watch?v=9bZkp7q19f0). You have
+[Congratulations!](https://www.youtube.com/watch?v=9bZkp7q19f0) You have
completed the CIFAR-10 tutorial.
If you are now interested in developing and training your own image
diff --git a/tensorflow/g3doc/tutorials/mnist/beginners/index.md b/tensorflow/g3doc/tutorials/mnist/beginners/index.md
index bcf3460c7a..f53531537b 100644
--- a/tensorflow/g3doc/tutorials/mnist/beginners/index.md
+++ b/tensorflow/g3doc/tutorials/mnist/beginners/index.md
@@ -136,9 +136,9 @@ that the evidence for a class \\(i\\) given an input \\(x\\) is:
$$\text{evidence}_i = \sum_j W_{i,~ j} x_j + b_i$$
-where \\(W_i\\) is the weights and \\(b_i\\) is the bias for class \\(i\\), and \\(j\\)
-is an index for summing over the pixels in our input image \\(x\\). We then
-convert the evidence tallies into our predicted probabilities
+where \\(W\_i\\) is the weights and \\(b\_i\\) is the bias for class \\(i\\),
+and \\(j\\) is an index for summing over the pixels in our input image \\(x\\).
+We then convert the evidence tallies into our predicted probabilities
\\(y\\) using the "softmax" function:
$$y = \text{softmax}(\text{evidence})$$
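Numerically, the softmax conversion above can be sketched with the standard library alone (a minimal illustration, not the TensorFlow op); subtracting the maximum first keeps `exp()` numerically stable without changing the result:

```python
import math

# Sketch of the softmax conversion from evidence to probabilities.
# Subtracting the max before exponentiating avoids overflow and does
# not change the ratios, so the output is identical.
def softmax(evidence):
    m = max(evidence)
    exps = [math.exp(e - m) for e in evidence]
    total = sum(exps)
    return [e / total for e in exps]
```

For example, `softmax([1.0, 2.0, 3.0])` returns three probabilities that sum to 1, with the largest evidence receiving the largest probability.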
diff --git a/tensorflow/g3doc/tutorials/mnist/tf/index.md b/tensorflow/g3doc/tutorials/mnist/tf/index.md
index 124fb8c0c2..98578acce8 100644
--- a/tensorflow/g3doc/tutorials/mnist/tf/index.md
+++ b/tensorflow/g3doc/tutorials/mnist/tf/index.md
@@ -10,7 +10,7 @@ TensorFlow.
These tutorials are not intended for teaching Machine Learning in general.
-Please ensure you have followed the instructions to [`Install TensorFlow`](../../../get_started/os_setup.md).
+Please ensure you have followed the instructions to [install TensorFlow](../../../get_started/os_setup.md).
## Tutorial Files <a class="md-anchor" id="AUTOGENERATED-tutorial-files"></a>
@@ -19,7 +19,7 @@ This tutorial references the following files:
File | Purpose
--- | ---
[`mnist.py`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/g3doc/tutorials/mnist/mnist.py) | The code to build a fully-connected MNIST model.
-[`fully_connected_feed.py`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/g3doc/tutorials/mnist/fully_connected_feed.py) | The main code, to train the built MNIST model against the downloaded dataset using a feed dictionary.
+[`fully_connected_feed.py`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/g3doc/tutorials/mnist/fully_connected_feed.py) | The main code to train the built MNIST model against the downloaded dataset using a feed dictionary.
Simply run the `fully_connected_feed.py` file directly to start training:
@@ -56,7 +56,7 @@ Dataset | Purpose
`data_sets.validation` | 5000 images and labels, for iterative validation of training accuracy.
`data_sets.test` | 10000 images and labels, for final testing of trained accuracy.
-For more information about the data, please read the [`Download`](../../../tutorials/mnist/download/index.md)
+For more information about the data, please read the [Download](../../../tutorials/mnist/download/index.md)
tutorial.
### Inputs and Placeholders <a class="md-anchor" id="AUTOGENERATED-inputs-and-placeholders"></a>
@@ -129,7 +129,7 @@ Each variable is given initializer ops as part of their construction.
In this most common case, the weights are initialized with the
[`tf.truncated_normal`](../../../api_docs/python/constant_op.md#truncated_normal)
-and given their shape of a 2d tensor with
+and given their shape of a 2-D tensor with
the first dim representing the number of units in the layer from which the
weights connect and the second dim representing the number of
units in the layer to which the weights connect. For the first layer, named
@@ -167,7 +167,7 @@ Finally, the `logits` tensor that will contain the output is returned.
The `loss()` function further builds the graph by adding the required loss
ops.
-First, the values from the label_placeholder are encoded as a tensor of 1-hot
+First, the values from the `labels_placeholder` are encoded as a tensor of 1-hot
values. For example, if the class identifier is '3', the value is converted to:
<br>`[0, 0, 0, 1, 0, 0, 0, 0, 0, 0]`
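The 1-hot encoding described above is easy to sketch in plain Python; the `one_hot` helper below is a hypothetical stand-in for illustration, not the tutorial's graph-op implementation:

```python
def one_hot(label, num_classes=10):
    # Build a 1-hot vector: 1.0 at the label index, 0.0 elsewhere.
    vec = [0.0] * num_classes
    vec[label] = 1.0
    return vec

# Class identifier '3' becomes [0, 0, 0, 1, 0, 0, 0, 0, 0, 0].
encoded = one_hot(3)
```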
@@ -283,7 +283,8 @@ The empty parameter to session indicates that this code will attach to
(or create if not yet created) the default local session.
Immediately after creating the session, all of the `tf.Variable`
-instances are initialized by calling `sess.run()` on their initialization op.
+instances are initialized by calling [`sess.run()`](../../../api_docs/python/client.md#Session.run)
+on their initialization op.
```python
init = tf.initialize_all_variables()
@@ -295,7 +296,7 @@ method will run the complete subset of the graph that
corresponds to the op(s) passed as parameters. In this first call, the `init`
op is a [`tf.group`](../../../api_docs/python/control_flow_ops.md#group)
that contains only the initializers for the variables. None of the rest of the
-graph is run here, that happens in the training loop below.
+graph is run here; that happens in the training loop below.
### Train Loop <a class="md-anchor" id="AUTOGENERATED-train-loop"></a>
@@ -306,7 +307,7 @@ can do useful training is:
```python
for step in xrange(max_steps):
- sess.run([train_op])
+ sess.run(train_op)
```
However, this tutorial is slightly more complicated in that it must also slice
@@ -341,7 +342,7 @@ the input examples for this step of training.
#### Check the Status <a class="md-anchor" id="AUTOGENERATED-check-the-status"></a>
-The code specifies two op-tensors in its run call: `[train_op, loss]`:
+The code specifies two values to fetch in its run call: `[train_op, loss]`.
```python
for step in xrange(FLAGS.max_steps):
@@ -352,13 +353,13 @@ for step in xrange(FLAGS.max_steps):
feed_dict=feed_dict)
```
-Because there are two tensors passed as parameters, the return from
-`sess.run()` is a tuple with two items. The returned items are themselves
-tensors, filled with the values of the passed op-tensors during this step of
-training.
-
-The value of the `train_op` is actually `None` and, thus, discarded. But the
-value of the `loss` tensor may become NaN if the model diverges during training.
+Because there are two values to fetch, `sess.run()` returns a tuple with two
+items. Each `Tensor` in the list of values to fetch corresponds to a numpy
+array in the returned tuple, filled with the value of that tensor during this
+step of training. Since `train_op` is an `Operation` with no output value, the
+corresponding element in the returned tuple is `None` and, thus,
+discarded. However, the value of the `loss` tensor may become NaN if the model
+diverges during training, so we capture this value for logging.
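The fetch semantics described above can be mimicked with a small stand-in for `sess.run()`; `run_stub` and its fake values below are invented purely to illustrate the shape of the returned tuple:

```python
import math

def run_stub(fetches):
    # Stand-in for sess.run([train_op, loss]): an Operation with no
    # output yields None, a Tensor yields its numeric value. The
    # values here are fabricated for illustration.
    fake_values = {"train_op": None, "loss": 2.3}
    return tuple(fake_values[f] for f in fetches)

# The train_op slot is None and discarded; the loss value is kept
# so it can be checked for divergence (NaN) and logged.
_, loss_value = run_stub(["train_op", "loss"])
assert not math.isnan(loss_value)
```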
Assuming that the training runs fine without NaNs, the training loop also
prints a simple status text every 100 steps to let the user know the state of
@@ -379,9 +380,9 @@ during the graph building phase.
summary_op = tf.merge_all_summaries()
```
-And then after the Session is generated, a [`tf.train.SummaryWriter`](../../../api_docs/python/train.md#SummaryWriter)
-may be instantiated to output into the given directory the events files,
-containing the Graph itself and the values of the summaries.
+And then after the session is created, a [`tf.train.SummaryWriter`](../../../api_docs/python/train.md#SummaryWriter)
+may be instantiated to write the events files, which
+contain both the graph itself and the values of the summaries.
```python
summary_writer = tf.train.SummaryWriter(FLAGS.train_dir,
diff --git a/tensorflow/g3doc/tutorials/recurrent/index.md b/tensorflow/g3doc/tutorials/recurrent/index.md
index ac594d7c2b..d1be50e00f 100644
--- a/tensorflow/g3doc/tutorials/recurrent/index.md
+++ b/tensorflow/g3doc/tutorials/recurrent/index.md
@@ -117,7 +117,7 @@ for current_batch_of_words in words_in_dataset:
### Inputs <a class="md-anchor" id="AUTOGENERATED-inputs"></a>
The word IDs will be embedded into a dense representation (see the
-[Vectors Representations Tutorial](../../tutorials/word2vec/index.md)) before feeding to
+[Vector Representations Tutorial](../../tutorials/word2vec/index.md)) before feeding to
the LSTM. This allows the model to efficiently represent the knowledge about
particular words. It is also easy to write:
@@ -151,7 +151,7 @@ To give the model more expressive power, we can add multiple layers of LSTMs
to process the data. The output of the first layer will become the input of
the second and so on.
-We have a class called `MultiRNNCell` that makes the implementation seemless:
+We have a class called `MultiRNNCell` that makes the implementation seamless:
```python
lstm = rnn_cell.BasicLSTMCell(lstm_size)
diff --git a/tensorflow/g3doc/tutorials/word2vec/index.md b/tensorflow/g3doc/tutorials/word2vec/index.md
index 0296a9bc86..a046d70e19 100644
--- a/tensorflow/g3doc/tutorials/word2vec/index.md
+++ b/tensorflow/g3doc/tutorials/word2vec/index.md
@@ -1,9 +1,9 @@
# Vector Representations of Words <a class="md-anchor" id="AUTOGENERATED-vector-representations-of-words"></a>
In this tutorial we look at the word2vec model by
-[Mikolov et al.](http://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf).
-This model is used for learning vector representations of words, called *word
-embeddings*.
+[Mikolov et al.](http://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf).
+This model is used for learning vector representations of words, called "word
+embeddings".
## Highlights <a class="md-anchor" id="AUTOGENERATED-highlights"></a>
@@ -142,7 +142,7 @@ Mathematically, the objective (for each example) is to maximize
$$J_\text{NEG} = \log Q_\theta(D=1 |w_t, h) +
k \mathop{\mathbb{E}}_{\tilde w \sim P_\text{noise}}
- \left[ \log Q_\theta(D = 0 |\tilde w, h) \right]$$,
+ \left[ \log Q_\theta(D = 0 |\tilde w, h) \right]$$
where \\(Q_\theta(D=1 | w, h)\\) is the binary logistic regression probability
under the model of seeing the word \\(w\\) in the context \\(h\\) in the dataset
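Under these definitions the per-example objective can be evaluated numerically in a few lines. The scores `s_pos` and `s_neg` below are hypothetical logits, and the expectation over the noise distribution is approximated by the mean over the sampled noise words:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical logits: s_pos scores the true (word, context) pair,
# s_neg scores the k sampled noise words. Values are illustrative only.
s_pos = 1.2
s_neg = [-0.5, 0.3, -1.0]
k = len(s_neg)

# J_NEG = log Q(D=1|w_t,h) + k * E[log Q(D=0|w~,h)], where
# Q(D=1|w,h) = sigmoid(score) and Q(D=0|w,h) = 1 - sigmoid(score);
# the expectation is approximated by the mean over the k noise words.
j_neg = math.log(sigmoid(s_pos)) + k * (
    sum(math.log(1.0 - sigmoid(s)) for s in s_neg) / k)
```

Maximizing this objective pushes `s_pos` up and the noise scores down, which is exactly what the NCE loss op does over batches.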
@@ -300,7 +300,7 @@ loss = tf.reduce_mean(
Now that we have a loss node, we need to add the nodes required to compute
gradients and update the parameters, etc. For this we will use stochastic
-gradient descent, and TensorFlow has handy helpers to make this easy.
+gradient descent, and TensorFlow has handy helpers to make this easy as well.
```python
# We use the SGD optimizer.
@@ -310,7 +310,9 @@ optimizer = tf.train.GradientDescentOptimizer(learning_rate=1.0).minimize(loss)
## Training the Model <a class="md-anchor" id="AUTOGENERATED-training-the-model"></a>
Training the model is then as simple as using a `feed_dict` to push data into
-the placeholders and calling `session.run` with this new data in a loop.
+the placeholders and calling
+[`session.run`](../../api_docs/python/client.md#Session.run) with this new data
+in a loop.
```python
for inputs, labels in generate_batch(...):
diff --git a/tensorflow/python/framework/docs.py b/tensorflow/python/framework/docs.py
index cd6ecfb04a..7e9770683c 100644
--- a/tensorflow/python/framework/docs.py
+++ b/tensorflow/python/framework/docs.py
@@ -369,7 +369,7 @@ class Library(Document):
elif inspect.isclass(member):
print >>f, "- - -"
print >>f, ""
- print >>f, "### class %s {#%s}" % (
+ print >>f, "### `class %s` {#%s}" % (
name, _get_anchor(self._module_to_name, name))
print >>f, ""
self._write_class_markdown_to_file(f, name, member)
diff --git a/tensorflow/tools/docker/Dockerfile.gpu b/tensorflow/tools/docker/Dockerfile.gpu
new file mode 100644
index 0000000000..cca9e2ccfa
--- /dev/null
+++ b/tensorflow/tools/docker/Dockerfile.gpu
@@ -0,0 +1,8 @@
+FROM b.gcr.io/tensorflow-testing/tensorflow-gpu-flat
+
+MAINTAINER Craig Citro <craigcitro@google.com>
+
+WORKDIR /root
+EXPOSE 6006
+EXPOSE 8888
+RUN ["/bin/bash"]
diff --git a/tensorflow/tools/docker/Dockerfile.gpu_base b/tensorflow/tools/docker/Dockerfile.gpu_base
new file mode 100644
index 0000000000..05f9fd6309
--- /dev/null
+++ b/tensorflow/tools/docker/Dockerfile.gpu_base
@@ -0,0 +1,34 @@
+FROM b.gcr.io/tensorflow-testing/tensorflow-full
+
+MAINTAINER Craig Citro <craigcitro@google.com>
+
+# Set up CUDA variables and symlinks
+COPY cuda /usr/local/cuda
+ENV CUDA_PATH /usr/local/cuda
+ENV LD_LIBRARY_PATH /usr/local/cuda/lib64
+
+RUN echo "CUDA_PATH=/usr/local/cuda" >>~/.bash_profile
+RUN echo "LD_LIBRARY_PATH=/usr/local/cuda/lib64" >>~/.bash_profile
+
+# Set up to build TensorFlow with GPU support.
+WORKDIR /tensorflow
+
+# Configure the build for our CUDA configuration.
+ENV CUDA_TOOLKIT_PATH /usr/local/cuda
+ENV CUDNN_INSTALL_PATH /usr/local/cuda
+ENV TF_NEED_CUDA 1
+RUN ./configure
+
+# Build TensorFlow with GPU support and install the resulting pip package.
+RUN bazel clean && \
+ bazel build -c opt --config=cuda tensorflow/tools/pip_package:build_pip_package 2>&1 | tee -a /tmp/bazel.log && \
+ rm -rf /tmp/pip && \
+ bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/pip && \
+ pip install /tmp/pip/tensorflow-*.whl && \
+ bazel clean
+
+RUN rm -rf /usr/local/cuda && \
+ rm -rf /usr/share/nvidia && \
+ rm -rf /root/.cache/
+
+RUN ["/bin/bash"]