Diffstat (limited to 'tensorflow/docs_src/performance')
-rw-r--r-- tensorflow/docs_src/performance/performance_guide.md | 2 +-
-rw-r--r-- tensorflow/docs_src/performance/quantization.md      | 8 +-------
2 files changed, 2 insertions(+), 8 deletions(-)
diff --git a/tensorflow/docs_src/performance/performance_guide.md b/tensorflow/docs_src/performance/performance_guide.md
index a5508ac23e..9ac60024a1 100644
--- a/tensorflow/docs_src/performance/performance_guide.md
+++ b/tensorflow/docs_src/performance/performance_guide.md
@@ -104,7 +104,7 @@ with tf.device('/cpu:0'):
Under some circumstances, both the CPU and GPU can be starved for data by the
I/O system. If you are using many small files to form your input data set, you
may be limited by the speed of your filesystem. If your training loop runs
-faster when using SSDs vs HDDs for storing your input data, you could could be
+faster when using SSDs vs HDDs for storing your input data, you could be
I/O bottlenecked.
If this is the case, you should pre-process your input data, creating a few
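
Note (not part of this commit): the paragraph above recommends consolidating many small files into a few large TFRecord files and reading them with an efficient input pipeline. The sketch below illustrates one way to do that with the TF 1.x `tf.data` API; the file pattern `/tmp/train-*.tfrecord`, the feature names `image_raw`/`label`, and the batch size are placeholder assumptions, not values from the docs.

```python
# Minimal sketch of a TFRecord input pipeline with prefetching.
# Assumes /tmp/train-*.tfrecord files exist and each example holds a
# JPEG-encoded image string plus an int64 label (hypothetical schema).
import tensorflow as tf

def parse_example(serialized):
    # Feature spec must match how the TFRecords were actually written.
    features = tf.parse_single_example(
        serialized,
        features={
            "image_raw": tf.FixedLenFeature([], tf.string),
            "label": tf.FixedLenFeature([], tf.int64),
        })
    image = tf.image.decode_jpeg(features["image_raw"], channels=3)
    image = tf.image.resize_images(image, [299, 299])
    return image, features["label"]

filenames = tf.gfile.Glob("/tmp/train-*.tfrecord")
dataset = (tf.data.TFRecordDataset(filenames)
           .map(parse_example, num_parallel_calls=4)
           .batch(32)
           .prefetch(1))  # overlap input preprocessing with training steps

images, labels = dataset.make_one_shot_iterator().get_next()
```

Prefetching at the end of the pipeline lets the CPU prepare the next batch while the GPU consumes the current one, which is what relieves the I/O bottleneck described above.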
diff --git a/tensorflow/docs_src/performance/quantization.md b/tensorflow/docs_src/performance/quantization.md
index a37748d0c9..d050fc5c56 100644
--- a/tensorflow/docs_src/performance/quantization.md
+++ b/tensorflow/docs_src/performance/quantization.md
@@ -93,7 +93,7 @@ curl http://download.tensorflow.org/models/image/imagenet/inception-2015-12-05.t
tar xzf /tmp/inceptionv3.tgz -C /tmp/
bazel build tensorflow/tools/graph_transforms:transform_graph
bazel-bin/tensorflow/tools/graph_transforms/transform_graph \
- --in_graph=/tmp/classify_image_graph_def.pb \
+ --inputs="Mul" --in_graph=/tmp/classify_image_graph_def.pb \
--outputs="softmax" --out_graph=/tmp/quantized_graph.pb \
--transforms='add_default_attributes strip_unused_nodes(type=float, shape="1,299,299,3")
remove_nodes(op=Identity, op=CheckNumerics) fold_constants(ignore_errors=true)
@@ -108,12 +108,6 @@ versus 91MB). You can still run this model using exactly the same inputs and
outputs though, and you should get equivalent results. Here's an example:
```sh
-# Note: You need to add the dependencies of the quantization operation to the
-# cc_binary in the BUILD file of the label_image program:
-#
-# //tensorflow/contrib/quantization:cc_ops
-# //tensorflow/contrib/quantization/kernels:quantized_ops
-
bazel build tensorflow/examples/label_image:label_image
bazel-bin/tensorflow/examples/label_image/label_image \
--image=<input-image> \