author Derek Murray <mrry@google.com> 2017-02-06 15:21:54 -0800
committer TensorFlower Gardener <gardener@tensorflow.org> 2017-02-06 15:47:20 -0800
commit 44b11c4e9f0d778c5547b8b3d6b772b0fbed8079 (patch)
tree 72bf1714ee19291979c93ec9196b299833baa2b7 /tensorflow/g3doc
parent ec843c549280444fa747627720eac42112225261 (diff)
[Docs] Fix a few incorrect names and missing links in queues HOWTO.
Change: 146718366
Diffstat (limited to 'tensorflow/g3doc')
-rw-r--r--  tensorflow/g3doc/how_tos/threading_and_queues/index.md | 37
1 file changed, 20 insertions(+), 17 deletions(-)
diff --git a/tensorflow/g3doc/how_tos/threading_and_queues/index.md b/tensorflow/g3doc/how_tos/threading_and_queues/index.md
index 639ad116c9..3b2c8adcbc 100644
--- a/tensorflow/g3doc/how_tos/threading_and_queues/index.md
+++ b/tensorflow/g3doc/how_tos/threading_and_queues/index.md
@@ -28,10 +28,12 @@ creating these operations.
Now that you have a bit of a feel for queues, let's dive into the details...
-## Queue use overview
+## Queue usage overview
-Queues, such as `FIFOQueue` and `RandomShuffleQueue`, are important TensorFlow
-objects for computing tensors asynchronously in a graph.
+Queues, such as [`FIFOQueue`](../../api_docs/python/io_ops.md#FIFOQueue)
+and [`RandomShuffleQueue`](../../api_docs/python/io_ops.md#RandomShuffleQueue),
+are important TensorFlow objects for computing tensors asynchronously in a
+graph.
For example, a typical input architecture is to use a `RandomShuffleQueue` to
prepare inputs for training a model:
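The input architecture described above (threads enqueue preprocessed examples, the training loop dequeues them) can be sketched in plain Python with the stdlib `queue` module. This is an illustrative analogue only, not TensorFlow code: a real `RandomShuffleQueue` lives in the graph, is driven by enqueue ops, and shuffles for you.

```python
import queue
import threading

# Bounded queue, playing the role of a capacity-limited input queue.
example_queue = queue.Queue(maxsize=100)

def producer(n):
    # Stands in for a thread running an enqueue op on preprocessed examples.
    for i in range(n):
        example_queue.put(("example", i))

# Several producer threads fill the queue concurrently.
threads = [threading.Thread(target=producer, args=(5,)) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# The "training" side dequeues a batch of examples.
batch = [example_queue.get() for _ in range(15)]
print(len(batch))  # 15
```

The bounded `maxsize` mirrors the backpressure a capacity-limited TensorFlow queue provides: producers block when the queue is full rather than racing ahead of the consumer.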
@@ -51,8 +53,8 @@ threads must be able to stop together, exceptions must be caught and
reported, and queues must be properly closed when stopping.
TensorFlow provides two classes to help:
-[tf.Coordinator](../../api_docs/python/train.md#Coordinator) and
-[tf.QueueRunner](../../api_docs/python/train.md#QueueRunner). These two classes
+[`tf.train.Coordinator`](../../api_docs/python/train.md#Coordinator) and
+[`tf.train.QueueRunner`](../../api_docs/python/train.md#QueueRunner). These two classes
are designed to be used together. The `Coordinator` class helps multiple threads
stop together and report exceptions to a program that waits for them to stop.
The `QueueRunner` class is used to create a number of threads cooperating to
@@ -60,13 +62,13 @@ enqueue tensors in the same queue.
## Coordinator
-The Coordinator class helps multiple threads stop together.
+The `Coordinator` class helps multiple threads stop together.
Its key methods are:
-* `should_stop()`: returns True if the threads should stop.
-* `request_stop(<exception>)`: requests that threads should stop.
-* `join(<list of threads>)`: waits until the specified threads have stopped.
+* [`should_stop()`](../../api_docs/python/train.md#Coordinator.should_stop): returns True if the threads should stop.
+* [`request_stop(exception)`](../../api_docs/python/train.md#Coordinator.request_stop): requests that threads should stop.
+* [`join(thread_list)`](../../api_docs/python/train.md#Coordinator.join): waits until the specified threads have stopped.
You first create a `Coordinator` object, and then create a number of threads
that use the coordinator. The threads typically run loops that stop when
@@ -85,20 +87,21 @@ def MyLoop(coord):
if ...some condition...:
coord.request_stop()
-# Main code: create a coordinator.
-coord = Coordinator()
+# Main thread: create a coordinator.
+coord = tf.train.Coordinator()
# Create 10 threads that run 'MyLoop()'
threads = [threading.Thread(target=MyLoop, args=(coord,)) for i in xrange(10)]
# Start the threads and wait for all of them to stop.
-for t in threads: t.start()
+for t in threads:
+ t.start()
coord.join(threads)
```
Obviously, the coordinator can manage threads doing very different things.
They don't have to be all the same as in the example above. The coordinator
-also has support to capture and report exceptions. See the [Coordinator class](../../api_docs/python/train.md#Coordinator) documentation for more details.
+also has support to capture and report exceptions. See the [`tf.train.Coordinator`](../../api_docs/python/train.md#Coordinator) documentation for more details.
## QueueRunner
@@ -109,7 +112,7 @@ queue if an exception is reported to the coordinator.
You can use a queue runner to implement the architecture described above.
-First build a graph that uses a `Queue` for input examples. Add ops that
+First build a graph that uses a TensorFlow queue (e.g. a `tf.RandomShuffleQueue`) for input examples. Add ops that
process examples and enqueue them in the queue. Add training ops that start by
dequeueing from the queue.
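The enqueue-threads-plus-dequeueing-trainer shape can also be sketched without TensorFlow: long-running enqueue loops feed a bounded queue until asked to stop, while the main loop dequeues. Everything here (the `stop` event, the timeouts) is an illustrative assumption; a real `QueueRunner` wires enqueue ops to a coordinator for you.

```python
import queue
import threading

stop = threading.Event()
q = queue.Queue(maxsize=10)

def enqueue_loop():
    # Keep feeding the queue until a stop is requested.
    while not stop.is_set():
        try:
            # Block only briefly so the loop can re-check the stop flag.
            q.put("example", timeout=0.1)
        except queue.Full:
            pass

enqueue_threads = [threading.Thread(target=enqueue_loop) for _ in range(2)]
for t in enqueue_threads:
    t.start()

# "Training" loop: run a fixed number of dequeues, then shut down.
batch = [q.get() for _ in range(20)]
stop.set()
for t in enqueue_threads:
    t.join()
print(len(batch))  # 20
```

The short `put` timeout matters: a producer blocked forever on a full queue would never observe the stop flag, which is exactly the kind of shutdown hazard the coordinator/queue-runner pairing exists to handle.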
@@ -152,8 +155,8 @@ coord.join(enqueue_threads)
## Handling exceptions
Threads started by queue runners do more than just run the enqueue ops. They
-also catch and handle exceptions generated by queues, including
-`OutOfRangeError` which is used to report that a queue was closed.
+also catch and handle exceptions generated by queues, including the
+`tf.errors.OutOfRangeError` exception, which is used to report that a queue was closed.
A training program that uses a coordinator must similarly catch and report
exceptions in its main loop.
@@ -170,7 +173,7 @@ except Exception, e:
# Report exceptions to the coordinator.
coord.request_stop(e)
finally:
- # Terminate as usual. It is innocuous to request stop twice.
+ # Terminate as usual. It is safe to call `coord.request_stop()` twice.
coord.request_stop()
coord.join(threads)
```
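The try/except/finally shape this hunk touches can be illustrated with a small stand-in coordinator (the `MiniCoord` class is a toy, not the TensorFlow one): the main loop reports any exception via `request_stop(e)`, and the second `request_stop()` in `finally` is harmless because it just sets an already-set flag.

```python
import threading

class MiniCoord:
    """Toy coordinator that records the first reported exception."""

    def __init__(self):
        self._stop = threading.Event()
        self.exc = None

    def request_stop(self, ex=None):
        if ex is not None and self.exc is None:
            self.exc = ex      # remember the first reported exception
        self._stop.set()       # setting an already-set Event is a no-op

coord = MiniCoord()
try:
    raise RuntimeError("training step failed")
except Exception as e:
    coord.request_stop(e)      # report the exception to the coordinator
finally:
    coord.request_stop()       # safe to call twice, as the doc notes

print(type(coord.exc).__name__)  # RuntimeError
```

Keeping `request_stop()` idempotent is what makes the `finally` clause safe to write unconditionally, whether or not an exception was already reported.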