Diffstat (limited to 'tensorflow/docs_src/performance/xla/index.md')
 tensorflow/docs_src/performance/xla/index.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/tensorflow/docs_src/performance/xla/index.md b/tensorflow/docs_src/performance/xla/index.md
index d2c1843327..19045b45d9 100644
--- a/tensorflow/docs_src/performance/xla/index.md
+++ b/tensorflow/docs_src/performance/xla/index.md
@@ -65,13 +65,13 @@ The following diagram shows the compilation process in XLA:
<img src="https://www.tensorflow.org/images/how-does-xla-work.png">
</div>
-XLA comes with several optimizations and analyses that are target-independent,
+XLA comes with several optimizations and analyzes that are target-independent,
such as [CSE](https://en.wikipedia.org/wiki/Common_subexpression_elimination),
target-independent operation fusion, and buffer analysis for allocating runtime
memory for the computation.
After the target-independent step, XLA sends the HLO computation to a backend.
-The backend can perform further HLO-level analyses and optimizations, this time
+The backend can perform further HLO-level analyzes and optimizations, this time
with target specific information and needs in mind. For example, the XLA GPU
backend may perform operation fusion beneficial specifically for the GPU
programming model and determine how to partition the computation into streams.
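For readers who want to see the compilation pipeline described in this hunk exercised end to end, below is a minimal, illustrative sketch of invoking XLA from Python. It uses the `tf.function(jit_compile=True)` API from TensorFlow 2.x, which postdates the TensorFlow 1.x docs tree this diff touches, so treat it purely as an example of a computation that XLA would fuse and compile, not as the API the original page documented. The function and variable names are chosen for illustration only.

```python
import tensorflow as tf  # assumes a TensorFlow 2.x build with XLA support

# jit_compile=True routes this function through XLA: the HLO graph passes
# through target-independent optimizations (CSE, operation fusion, buffer
# analysis) and then the backend-specific passes described in the text above.
@tf.function(jit_compile=True)
def dense_relu(x, w, b):
    # A matmul followed by elementwise ops is a typical fusion candidate.
    return tf.nn.relu(tf.matmul(x, w) + b)

x = tf.random.normal([8, 16])
w = tf.random.normal([16, 4])
b = tf.zeros([4])
print(dense_relu(x, w, b).shape)  # (8, 4)
```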