Diffstat (limited to 'tensorflow/core/distributed_runtime/README.md')

    -rw-r--r--  tensorflow/core/distributed_runtime/README.md | 2 +-
    1 file changed, 1 insertion(+), 1 deletion(-)
    diff --git a/tensorflow/core/distributed_runtime/README.md b/tensorflow/core/distributed_runtime/README.md
    index 4d2a18ed33..918af2d2ba 100644
    --- a/tensorflow/core/distributed_runtime/README.md
    +++ b/tensorflow/core/distributed_runtime/README.md
    @@ -127,7 +127,7 @@ replicated model. Possible approaches include:
     
     *   As above, but where the gradients from all workers are averaged. See the
         [CIFAR-10 multi-GPU trainer](https://www.tensorflow.org/code/tensorflow/models/image/cifar10/cifar10_multi_gpu_train.py)
    -    for an example of this form of replication. The implements *synchronous* training
    +    for an example of this form of replication. This implements *synchronous* training
     
     *   The "distributed trainer" approach uses multiple graphs—one per worker—where
         each graph contains one set of parameters (pinned to
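The README text being corrected describes synchronous replicated training, where every worker computes a gradient on its own data shard and the averaged gradient drives a single shared update. A minimal sketch of that idea, using plain Python rather than TensorFlow's actual trainer (the names `grad_fn`, `synchronous_step`, and the linear-model loss are illustrative assumptions, not from the source):

```python
def grad_fn(w, batch):
    # Gradient of mean squared error for a 1-D linear model y = w * x,
    # averaged over the examples in this worker's batch.
    return sum(2 * (w * x - y) * x for x, y in batch) / len(batch)

def synchronous_step(w, worker_batches, lr=0.1):
    # Each "worker" computes a gradient on its own shard of the data...
    grads = [grad_fn(w, batch) for batch in worker_batches]
    # ...then the gradients are averaged and applied as one update,
    # so all replicas stay in lockstep (synchronous training).
    avg_grad = sum(grads) / len(grads)
    return w - lr * avg_grad

if __name__ == "__main__":
    # Two workers, each holding a shard of data generated from y = 3 * x.
    shards = [[(1.0, 3.0), (2.0, 6.0)], [(3.0, 9.0), (4.0, 12.0)]]
    w = 0.0
    for _ in range(100):
        w = synchronous_step(w, shards)
    print(round(w, 3))  # converges toward the true weight 3.0
```

Because every replica sees the same averaged gradient, the parameters remain identical across workers after each step; the asynchronous variants the README contrasts this with let workers apply their gradients independently.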