Diffstat (limited to 'tensorflow/docs_src/performance/index.md')
 tensorflow/docs_src/performance/index.md | 22 +++++++++++-----------
 1 file changed, 11 insertions(+), 11 deletions(-)
diff --git a/tensorflow/docs_src/performance/index.md b/tensorflow/docs_src/performance/index.md
index 131d28fa3e..a0f26a8c3a 100644
--- a/tensorflow/docs_src/performance/index.md
+++ b/tensorflow/docs_src/performance/index.md
@@ -7,18 +7,18 @@ details on the high level APIs to use along with best practices to build
and train high performance models, and quantize models for the least latency
and highest throughput for inference.
- * @{$performance_guide$Performance Guide} contains a collection of best
+ * [Performance Guide](../performance/performance_guide.md) contains a collection of best
practices for optimizing your TensorFlow code.
- * @{$datasets_performance$Data input pipeline guide} describes the tf.data
+ * [Data input pipeline guide](../performance/datasets_performance.md) describes the tf.data
API for building efficient data input pipelines for TensorFlow.
- * @{$performance/benchmarks$Benchmarks} contains a collection of
+ * [Benchmarks](../performance/benchmarks.md) contains a collection of
benchmark results for a variety of hardware configurations.
* For improving inference efficiency on mobile and
embedded hardware, see
- @{$quantization$How to Quantize Neural Networks with TensorFlow}, which
+ [How to Quantize Neural Networks with TensorFlow](../performance/quantization.md), which
explains how to use quantization to reduce model size, both in storage
and at runtime.
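
As a companion to the tf.data item in the hunk above, here is a minimal sketch of the kind of input pipeline the Data input pipeline guide describes, written against the TF 1.x API this doc set targets. The file name, the `parse_example` feature spec, and the tuning values (`num_parallel_calls=4`, batch size 32, prefetch depth 1) are illustrative assumptions, not part of the patch.

```python
import tensorflow as tf

# Hypothetical parser: decode one serialized tf.Example into (image, label).
def parse_example(serialized):
    features = tf.parse_single_example(
        serialized,
        features={
            "image": tf.FixedLenFeature([784], tf.float32),
            "label": tf.FixedLenFeature([], tf.int64),
        })
    return features["image"], features["label"]

# Build the pipeline: read -> parse in parallel -> batch -> prefetch.
filenames = ["train-00000.tfrecord"]  # placeholder path
dataset = tf.data.TFRecordDataset(filenames)
dataset = dataset.map(parse_example, num_parallel_calls=4)
dataset = dataset.batch(32)
dataset = dataset.prefetch(1)  # overlap input preprocessing with training

iterator = dataset.make_one_shot_iterator()
images, labels = iterator.get_next()
```

The `prefetch(1)` at the end is the key performance idea from that guide: it decouples the producer (input pipeline) from the consumer (training step) so the two can overlap.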
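For the quantization item, one way to preview 8-bit quantization effects in a TF 1.x graph is the fake-quantization op sketched below; this is a hedged illustration rather than the graph-transform workflow the linked guide walks through. The tensor shape and the [-6, 6] range are assumed for the example.

```python
import tensorflow as tf

# Hypothetical activation tensor; values assumed to lie in [-6, 6].
acts = tf.random_normal([8, 256])

# Simulate 8-bit quantization in the graph: values are rounded to one of
# 2**8 levels between min and max, previewing the accuracy impact of
# running with quantized weights/activations.
quantized = tf.fake_quant_with_min_max_args(
    acts, min=-6.0, max=6.0, num_bits=8)

with tf.Session() as sess:
    print(sess.run(quantized)[0, :4])
```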
@@ -31,20 +31,20 @@ XLA (Accelerated Linear Algebra) is an experimental compiler for linear
algebra that optimizes TensorFlow computations. The following guides explore
XLA:
- * @{$xla$XLA Overview}, which introduces XLA.
- * @{$broadcasting$Broadcasting Semantics}, which describes XLA's
+ * [XLA Overview](../performance/xla/index.md), which introduces XLA.
+ * [Broadcasting Semantics](../performance/xla/broadcasting.md), which describes XLA's
broadcasting semantics.
- * @{$developing_new_backend$Developing a new back end for XLA}, which
+ * [Developing a new back end for XLA](../performance/xla/developing_new_backend.md), which
explains how to re-target TensorFlow in order to optimize the performance
of the computational graph for particular hardware.
- * @{$jit$Using JIT Compilation}, which describes the XLA JIT compiler that
+ * [Using JIT Compilation](../performance/xla/jit.md), which describes the XLA JIT compiler that
compiles and runs parts of TensorFlow graphs via XLA in order to optimize
performance.
- * @{$operation_semantics$Operation Semantics}, which is a reference manual
+ * [Operation Semantics](../performance/xla/operation_semantics.md), which is a reference manual
describing the semantics of operations in the `ComputationBuilder`
interface.
- * @{$shapes$Shapes and Layout}, which details the `Shape` protocol buffer.
- * @{$tfcompile$Using AOT compilation}, which explains `tfcompile`, a
+ * [Shapes and Layout](../performance/xla/shapes.md), which details the `Shape` protocol buffer.
+ * [Using AOT compilation](../performance/xla/tfcompile.md), which explains `tfcompile`, a
standalone tool that compiles TensorFlow graphs into executable code in
order to optimize performance.
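
For the Using JIT Compilation item above, a minimal sketch of enabling the XLA JIT at the session level in TF 1.x. The toy matmul graph is a placeholder, and the example assumes a TensorFlow build with XLA support.

```python
import tensorflow as tf

# Turn on XLA auto-clustering for the whole session (TF 1.x style):
# eligible subgraphs are compiled and fused by XLA.
config = tf.ConfigProto()
config.graph_options.optimizer_options.global_jit_level = (
    tf.OptimizerOptions.ON_1)

# Toy graph: matmul + relu is a typical candidate for XLA fusion.
x = tf.placeholder(tf.float32, shape=[None, 1024])
w = tf.Variable(tf.random_normal([1024, 1024]))
y = tf.nn.relu(tf.matmul(x, w))

with tf.Session(config=config) as sess:
    sess.run(tf.global_variables_initializer())
    result = sess.run(y, feed_dict={x: [[0.0] * 1024]})
```

Session-level JIT compiles whole clusters automatically; the linked guide also covers narrower scoping when only part of a graph should go through XLA.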