author    Neal Wu <wun@google.com>  2017-11-10 19:16:53 -0800
committer TensorFlower Gardener <gardener@tensorflow.org>  2017-11-10 19:23:53 -0800
commit 03c150d3d3cd00c8fbcb7e84b5fa1db08256ed3c
tree   82f1059ab3c5f31408969018499943df0151fd81
parent 973987fbe2d9448e15f6efa613aae090703457e0
Minor cleanup of links in the performance guides
PiperOrigin-RevId: 175368372
-rw-r--r--  tensorflow/docs_src/performance/performance_guide.md   2
-rw-r--r--  tensorflow/docs_src/performance/performance_models.md  6
2 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/tensorflow/docs_src/performance/performance_guide.md b/tensorflow/docs_src/performance/performance_guide.md
index da556bd848..17f71a6d77 100644
--- a/tensorflow/docs_src/performance/performance_guide.md
+++ b/tensorflow/docs_src/performance/performance_guide.md
@@ -127,7 +127,7 @@ Reading large numbers of small files significantly impacts I/O performance.
One approach to get maximum I/O throughput is to preprocess input data into
larger (~100MB) `TFRecord` files. For smaller data sets (200MB-1GB), the best
approach is often to load the entire data set into memory. The document
-[Downloading and converting to TFRecord format](https://github.com/tensorflow/models/tree/master/research/slim#Data)
+[Downloading and converting to TFRecord format](https://github.com/tensorflow/models/tree/master/research/slim#downloading-and-converting-to-tfrecord-format)
includes information and scripts for creating `TFRecords` and this
[script](https://github.com/tensorflow/models/tree/master/tutorials/image/cifar10_estimator/generate_cifar10_tfrecords.py)
converts the CIFAR-10 data set into `TFRecords`.
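
The hunk above points at instructions for converting a data set into `TFRecord` files. As a rough sketch of what such a conversion script does, the snippet below serializes a hypothetical in-memory image/label data set with the TF 1.x-era `tf.python_io.TFRecordWriter`; the file name, array shapes, and feature keys are illustrative, not taken from the linked scripts.

```python
import numpy as np
import tensorflow as tf

def _bytes_feature(value):
    # Wrap a raw byte string in a tf.train.Feature proto.
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))

def _int64_feature(value):
    # Wrap a single integer in a tf.train.Feature proto.
    return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))

# Hypothetical in-memory data set: 100 32x32 RGB images with integer labels.
images = np.random.randint(0, 256, size=(100, 32, 32, 3), dtype=np.uint8)
labels = np.random.randint(0, 10, size=(100,))

with tf.python_io.TFRecordWriter('train.tfrecords') as writer:
    for image, label in zip(images, labels):
        # One tf.train.Example per image/label pair, appended to the file.
        example = tf.train.Example(features=tf.train.Features(feature={
            'image': _bytes_feature(image.tobytes()),
            'label': _int64_feature(int(label)),
        }))
        writer.write(example.SerializeToString())
```

Packing many small examples into a few large files this way is what lets the input pipeline issue large sequential reads instead of many small ones, which is the point the guide is making.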
diff --git a/tensorflow/docs_src/performance/performance_models.md b/tensorflow/docs_src/performance/performance_models.md
index fcda19e74c..359b0e904d 100644
--- a/tensorflow/docs_src/performance/performance_models.md
+++ b/tensorflow/docs_src/performance/performance_models.md
@@ -29,8 +29,8 @@ implementation is made up of 3 stages:
The dominant part of each stage is executed in parallel with the other stages
using `data_flow_ops.StagingArea`. `StagingArea` is a queue-like operator
-similar to @{tf.FIFOQueue}. The difference is that `StagingArea` does not
-guarantee FIFO ordering, but offers simpler functionality and can be executed
+similar to @{tf.FIFOQueue}. The difference is that `StagingArea` does not
+guarantee FIFO ordering, but offers simpler functionality and can be executed
on both CPU and GPU in parallel with other stages. Breaking the input pipeline
into 3 stages that operate independently in parallel is scalable and takes full
advantage of large multi-core environments. The rest of this section details
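
Since the paragraph touched above carries the core idea of the pipeline, a minimal sketch may help: the snippet below stages one batch ahead using `data_flow_ops.StagingArea`, the operator named in the text. The batch source, shapes, and step count are made up for illustration.

```python
import tensorflow as tf
from tensorflow.python.ops import data_flow_ops

# A stand-in for one stage of the input pipeline: here, just a random batch.
batch = tf.random_uniform([128, 224, 224, 3])

# StagingArea buffers tensors between pipeline stages. Unlike tf.FIFOQueue
# it does not guarantee FIFO ordering, but it can be placed on CPU or GPU.
staging_area = data_flow_ops.StagingArea(
    dtypes=[tf.float32], shapes=[[128, 224, 224, 3]])
put_op = staging_area.put([batch])
get_op = staging_area.get()

with tf.Session() as sess:
    sess.run(put_op)  # Warm up: stage the first batch.
    for _ in range(10):
        # Consume one staged batch while simultaneously staging the next,
        # so adjacent pipeline stages overlap.
        fetched = sess.run([get_op, put_op])[0]
```

Because `get` blocks until something has been staged, the single warm-up `put` is what allows every later step to overlap one `get` with one `put`.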
@@ -344,7 +344,7 @@ executing the main script
`alexnet`.
* **`num_gpus`**: Number of GPUs to use.
* **`data_dir`**: Path to data to process. If not set, synthetic data is used.
- To use Imagenet data use these
+ To use ImageNet data use these
[instructions](https://github.com/tensorflow/models/tree/master/research/inception#getting-started)
as a starting point.
* **`batch_size`**: Batch size for each GPU.
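
Putting the flags above together, a hypothetical invocation might look like the following. It assumes the main script referenced at the top of this hunk is `tf_cnn_benchmarks.py` from the tensorflow/benchmarks repository; the batch size and data path are placeholders.

```
python tf_cnn_benchmarks.py --model=alexnet --num_gpus=2 \
    --batch_size=64 --data_dir=/path/to/imagenet_tfrecords
```

As the flag descriptions above note, dropping `--data_dir` makes the script fall back to synthetic data, which is useful for isolating compute performance from input-pipeline performance.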