Diffstat (limited to 'tensorflow/docs_src/performance')
-rw-r--r--  tensorflow/docs_src/performance/quantization.md | 4 ++--
-rw-r--r--  tensorflow/docs_src/performance/xla/index.md    | 2 +-
2 files changed, 3 insertions, 3 deletions
diff --git a/tensorflow/docs_src/performance/quantization.md b/tensorflow/docs_src/performance/quantization.md
index 878371f674..86d2b92494 100644
--- a/tensorflow/docs_src/performance/quantization.md
+++ b/tensorflow/docs_src/performance/quantization.md
@@ -82,7 +82,7 @@ them directly is very convenient.
## How Can You Quantize Your Models?
-TensorFlow has production-grade support for eight-bit calculations built it. It
+TensorFlow has production-grade support for eight-bit calculations built in. It
also has a process for converting many models trained in floating-point over to
equivalent graphs using quantized calculations for inference. For example,
here's how you can translate the latest GoogLeNet model into a version that uses
@@ -153,7 +153,7 @@ bit.
The min and max operations actually look at the values in the input float
tensor, and then feed them into the Dequantize operation that converts the
-tensor into eight-bits. There's more details on how the quantized representation
+tensor into eight-bits. There are more details on how the quantized representation
works later on.
Once the individual operations have been converted, the next stage is to remove
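Editor's note: to make the quantization hunks above concrete, here is a minimal NumPy sketch of the min/max linear scheme they describe, where the observed float range is mapped onto eight bits and then recovered. The `quantize`/`dequantize` helpers are illustrative stand-ins, not TensorFlow's actual Quantize/Dequantize kernels.

```python
import numpy as np

def quantize(tensor):
    # Observe the tensor's actual min/max range, as the min and max ops in
    # the rewritten graph do, then map the floats linearly onto 0..255.
    lo, hi = float(tensor.min()), float(tensor.max())
    scale = (hi - lo) / 255.0 or 1.0
    quantized = np.round((tensor - lo) / scale).astype(np.uint8)
    return quantized, lo, hi

def dequantize(quantized, lo, hi):
    # Invert the mapping: recover approximate floats from the eight-bit values.
    scale = (hi - lo) / 255.0 or 1.0
    return quantized.astype(np.float32) * scale + lo

x = np.array([-1.5, 0.0, 0.4, 2.5], dtype=np.float32)
q, lo, hi = quantize(x)
print(q, dequantize(q, lo, hi))  # round-trips to within one step (~0.016 here)
```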
diff --git a/tensorflow/docs_src/performance/xla/index.md b/tensorflow/docs_src/performance/xla/index.md
index 222b2ba887..9c23e79845 100644
--- a/tensorflow/docs_src/performance/xla/index.md
+++ b/tensorflow/docs_src/performance/xla/index.md
@@ -10,7 +10,7 @@ XLA (Accelerated Linear Algebra) is a domain-specific compiler for linear
algebra that optimizes TensorFlow computations. The results are improvements in
speed, memory usage, and portability on server and mobile platforms. Initially,
most users will not see large benefits from XLA, but are welcome to experiment
-by using XLA via @{$jit$just-in-time (JIT) compilaton} or @{$tfcompile$ahead-of-time (AOT) compilation}. Developers targeting new hardware accelerators are
+by using XLA via @{$jit$just-in-time (JIT) compilation} or @{$tfcompile$ahead-of-time (AOT) compilation}. Developers targeting new hardware accelerators are
especially encouraged to try out XLA.
The XLA framework is experimental and in active development. In particular,
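Editor's note: for readers following the @{$jit} link the second hunk corrects, here is a sketch of enabling session-wide JIT compilation in TF 1.x (the era of this doc), where `ON_1` turns on XLA for eligible ops; the exact API may vary across versions.

```python
import tensorflow as tf  # TF 1.x

# Enable session-wide XLA JIT: eligible ops are clustered and compiled
# together instead of being executed one kernel at a time.
config = tf.ConfigProto()
config.graph_options.optimizer_options.global_jit_level = (
    tf.OptimizerOptions.ON_1)

x = tf.placeholder(tf.float32, shape=[None, 3])
y = tf.reduce_sum(tf.nn.relu(x * 2.0 + 1.0))

with tf.Session(config=config) as sess:
    print(sess.run(y, feed_dict={x: [[1.0, -2.0, 3.0]]}))  # prints 10.0
```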