author    EFanZh <efanzh@gmail.com> 2018-09-28 15:20:26 +0800
committer EFanZh <efanzh@gmail.com> 2018-09-28 15:20:26 +0800
commit    d0690d46466bf0393ad65544d1e8c55e948df133
tree      d0ccbd2dbf7c3752e991881a93684403bdd39f13
parent    6ebe9baae06c06d0a70a424a55c78f5af07b49f7
Fix some documentation errors
Diffstat (limited to 'tensorflow/contrib/distribute')
-rw-r--r--  tensorflow/contrib/distribute/python/mirrored_strategy.py | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/tensorflow/contrib/distribute/python/mirrored_strategy.py b/tensorflow/contrib/distribute/python/mirrored_strategy.py
index 504f45a695..c0861da567 100644
--- a/tensorflow/contrib/distribute/python/mirrored_strategy.py
+++ b/tensorflow/contrib/distribute/python/mirrored_strategy.py
@@ -318,12 +318,13 @@ class MirroredStrategy(distribute_lib.DistributionStrategy):
[TensorFlow's documentation](https://www.tensorflow.org/deploy/distributed).
The distribution strategy inherits these concepts as well and in addition to
that we also clarify several more concepts:
- * **In-graph replication**: the `client` creates a single `tf.Graph` that
+
+ * **In-graph replication**: the `client` creates a single `tf.Graph` that
specifies tasks for devices on all workers. The `client` then creates a
client session which will talk to the `master` service of a `worker`. Then
the `master` will partition the graph and distribute the work to all
participating workers.
- * **Worker**: A `worker` is a TensorFlow `task` that usually maps to one
+ * **Worker**: A `worker` is a TensorFlow `task` that usually maps to one
physical machine. We will have multiple `worker`s with different `task`
index. They all do similar things except for one worker checkpointing model
variables, writing summaries, etc. in addition to its ordinary work.
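
For context, the docstring being fixed describes the multi-worker concepts behind
MirroredStrategy. Below is a minimal sketch of how the contrib-era
tf.contrib.distribute.MirroredStrategy was typically driven through the Estimator
API at the time of this commit (TF 1.x); the model_fn here is a hypothetical
stand-in for illustration, and num_gpus follows the contrib-era constructor.

    import tensorflow as tf

    # Hypothetical model_fn for illustration; any standard Estimator
    # model_fn with a loss and train_op works.
    def model_fn(features, labels, mode):
        logits = tf.layers.dense(features["x"], 10)
        loss = tf.losses.sparse_softmax_cross_entropy(labels=labels,
                                                      logits=logits)
        train_op = tf.train.GradientDescentOptimizer(0.1).minimize(
            loss, global_step=tf.train.get_or_create_global_step())
        return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)

    # In-graph replication across local GPUs: one client graph, with
    # variables mirrored onto each device.
    strategy = tf.contrib.distribute.MirroredStrategy(num_gpus=2)
    config = tf.estimator.RunConfig(train_distribute=strategy)
    estimator = tf.estimator.Estimator(model_fn=model_fn, config=config)

This single-machine path is the simplest use of the strategy; the in-graph
replication and worker concepts described in the patched docstring apply when
the same strategy coordinates tasks across multiple workers.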