author    Mark Daoust <markdaoust@google.com>  2018-08-16 10:51:38 -0700
committer TensorFlower Gardener <gardener@tensorflow.org>  2018-08-16 10:57:49 -0700
commit    b2a496a2a13d02d6208a369df36b036a8e1a236b (patch)
tree      06f308d0ca4e6f3d111882669582d4ada682b3a9 /tensorflow/docs_src
parent    8a91a018a4ff3539d87cb4284359902dd0dcaf2d (diff)
Remove magic links from docs.
I patched the doc generator to generate markdown links, ran the doc converter, and copied the output into docs_src.

PiperOrigin-RevId: 209010351
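The converter itself is not included in this patch; it rewrites the old `@{$symbol#anchor$Link text}` magic-link syntax into plain markdown links such as `[Link text](relative/path.md#anchor)`. A minimal sketch of that rewrite is shown below, assuming a regex-based substitution; the `DOC_PATHS`/`DOC_TITLES` lookup tables and the `to_markdown` helper are illustrative assumptions, not the actual doc-converter code.

```python
import re

# Hypothetical sketch of the @{$...} -> markdown rewrite this commit applies.
# The real converter resolves each symbol to a relative path and a default title;
# here both come from small hand-written lookup tables (for illustration only).
DOC_PATHS = {"uses": "../about/uses.md"}
DOC_TITLES = {"uses": "TensorFlow in Use"}

# Matches @{$symbol}, @{$symbol$Title}, and @{$symbol#anchor$Title}.
MAGIC_LINK = re.compile(r"@\{\$([\w/]+)(?:#([\w-]+))?(?:\$([^}]+))?\}")

def to_markdown(match):
    symbol, anchor, title = match.groups()
    path = DOC_PATHS.get(symbol, f"../{symbol}.md")  # fall back to a guessed relative path
    if anchor:
        path += f"#{anchor}"
    text = title or DOC_TITLES.get(symbol, symbol)   # fall back to the symbol name
    return f"[{text}]({path})"

line = " * @{$uses$TensorFlow in Use}, which provides a link to our model zoo and"
print(MAGIC_LINK.sub(to_markdown, line))
# -> " * [TensorFlow in Use](../about/uses.md), which provides a link to our model zoo and"
```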
Diffstat (limited to 'tensorflow/docs_src')
-rw-r--r--  tensorflow/docs_src/about/index.md | 6
-rw-r--r--  tensorflow/docs_src/api_guides/python/client.md | 2
-rw-r--r--  tensorflow/docs_src/api_guides/python/constant_op.md | 2
-rw-r--r--  tensorflow/docs_src/api_guides/python/input_dataset.md | 2
-rw-r--r--  tensorflow/docs_src/api_guides/python/io_ops.md | 10
-rw-r--r--  tensorflow/docs_src/api_guides/python/meta_graph.md | 2
-rw-r--r--  tensorflow/docs_src/api_guides/python/reading_data.md | 24
-rw-r--r--  tensorflow/docs_src/api_guides/python/regression_examples.md | 2
-rw-r--r--  tensorflow/docs_src/api_guides/python/summary.md | 2
-rw-r--r--  tensorflow/docs_src/api_guides/python/threading_and_queues.md | 2
-rw-r--r--  tensorflow/docs_src/api_guides/python/train.md | 8
-rw-r--r--  tensorflow/docs_src/community/contributing.md | 6
-rw-r--r--  tensorflow/docs_src/community/index.md | 6
-rw-r--r--  tensorflow/docs_src/community/style_guide.md | 2
-rw-r--r--  tensorflow/docs_src/deploy/distributed.md | 2
-rw-r--r--  tensorflow/docs_src/deploy/hadoop.md | 4
-rw-r--r--  tensorflow/docs_src/deploy/index.md | 6
-rw-r--r--  tensorflow/docs_src/deploy/s3.md | 2
-rw-r--r--  tensorflow/docs_src/extend/add_filesys.md | 2
-rw-r--r--  tensorflow/docs_src/extend/adding_an_op.md | 10
-rw-r--r--  tensorflow/docs_src/extend/architecture.md | 8
-rw-r--r--  tensorflow/docs_src/extend/index.md | 12
-rw-r--r--  tensorflow/docs_src/extend/language_bindings.md | 2
-rw-r--r--  tensorflow/docs_src/extend/new_data_formats.md | 10
-rw-r--r--  tensorflow/docs_src/guide/checkpoints.md | 8
-rw-r--r--  tensorflow/docs_src/guide/custom_estimators.md | 14
-rw-r--r--  tensorflow/docs_src/guide/datasets.md | 2
-rw-r--r--  tensorflow/docs_src/guide/datasets_for_estimators.md | 14
-rw-r--r--  tensorflow/docs_src/guide/debugger.md | 2
-rw-r--r--  tensorflow/docs_src/guide/eager.md | 2
-rw-r--r--  tensorflow/docs_src/guide/embedding.md | 2
-rw-r--r--  tensorflow/docs_src/guide/estimators.md | 4
-rw-r--r--  tensorflow/docs_src/guide/faq.md | 38
-rw-r--r--  tensorflow/docs_src/guide/feature_columns.md | 6
-rw-r--r--  tensorflow/docs_src/guide/graph_viz.md | 4
-rw-r--r--  tensorflow/docs_src/guide/graphs.md | 8
-rw-r--r--  tensorflow/docs_src/guide/index.md | 46
-rw-r--r--  tensorflow/docs_src/guide/low_level_intro.md | 18
-rw-r--r--  tensorflow/docs_src/guide/premade_estimators.md | 18
-rw-r--r--  tensorflow/docs_src/guide/saved_model.md | 10
-rw-r--r--  tensorflow/docs_src/guide/summaries_and_tensorboard.md | 8
-rw-r--r--  tensorflow/docs_src/guide/tensors.md | 2
-rw-r--r--  tensorflow/docs_src/guide/using_gpu.md | 2
-rw-r--r--  tensorflow/docs_src/guide/using_tpu.md | 16
-rw-r--r--  tensorflow/docs_src/guide/version_compat.md | 4
-rw-r--r--  tensorflow/docs_src/install/index.md | 18
-rw-r--r--  tensorflow/docs_src/install/install_c.md | 4
-rw-r--r--  tensorflow/docs_src/install/install_go.md | 4
-rw-r--r--  tensorflow/docs_src/install/install_java.md | 6
-rw-r--r--  tensorflow/docs_src/install/install_linux.md | 2
-rw-r--r--  tensorflow/docs_src/performance/index.md | 22
-rw-r--r--  tensorflow/docs_src/performance/performance_guide.md | 16
-rw-r--r--  tensorflow/docs_src/performance/performance_models.md | 2
-rw-r--r--  tensorflow/docs_src/performance/quantization.md | 2
-rw-r--r--  tensorflow/docs_src/performance/xla/index.md | 10
-rw-r--r--  tensorflow/docs_src/performance/xla/operation_semantics.md | 8
-rw-r--r--  tensorflow/docs_src/performance/xla/tfcompile.md | 4
-rw-r--r--  tensorflow/docs_src/tutorials/estimators/cnn.md | 16
-rw-r--r--  tensorflow/docs_src/tutorials/images/deep_cnn.md | 20
-rw-r--r--  tensorflow/docs_src/tutorials/images/image_recognition.md | 4
-rw-r--r--  tensorflow/docs_src/tutorials/representation/kernel_methods.md | 4
-rw-r--r--  tensorflow/docs_src/tutorials/representation/linear.md | 4
-rw-r--r--  tensorflow/docs_src/tutorials/representation/word2vec.md | 4
-rw-r--r--  tensorflow/docs_src/tutorials/sequences/recurrent.md | 2
-rw-r--r--  tensorflow/docs_src/tutorials/sequences/recurrent_quickdraw.md | 8
65 files changed, 261 insertions(+), 261 deletions(-)
diff --git a/tensorflow/docs_src/about/index.md b/tensorflow/docs_src/about/index.md
index dc1e9af876..c3c13ff329 100644
--- a/tensorflow/docs_src/about/index.md
+++ b/tensorflow/docs_src/about/index.md
@@ -3,9 +3,9 @@
This section provides a few documents about TensorFlow itself,
including the following:
- * @{$uses$TensorFlow in Use}, which provides a link to our model zoo and
+ * [TensorFlow in Use](../about/uses.md), which provides a link to our model zoo and
lists some popular ways that TensorFlow is being used.
- * @{$bib$TensorFlow White Papers}, which provides abstracts of white papers
+ * [TensorFlow White Papers](../about/bib.md), which provides abstracts of white papers
about TensorFlow.
- * @{$attribution$Attribution}, which specifies how to attribute and refer
+ * [Attribution](../about/attribution.md), which specifies how to attribute and refer
to TensorFlow.
diff --git a/tensorflow/docs_src/api_guides/python/client.md b/tensorflow/docs_src/api_guides/python/client.md
index 56367e6671..fdd48e66dc 100644
--- a/tensorflow/docs_src/api_guides/python/client.md
+++ b/tensorflow/docs_src/api_guides/python/client.md
@@ -3,7 +3,7 @@
This library contains classes for launching graphs and executing operations.
-@{$guide/low_level_intro$This guide} has examples of how a graph
+[This guide](../../guide/low_level_intro.md) has examples of how a graph
is launched in a `tf.Session`.
## Session management
diff --git a/tensorflow/docs_src/api_guides/python/constant_op.md b/tensorflow/docs_src/api_guides/python/constant_op.md
index 498ec3db5d..9ba95b0f55 100644
--- a/tensorflow/docs_src/api_guides/python/constant_op.md
+++ b/tensorflow/docs_src/api_guides/python/constant_op.md
@@ -64,7 +64,7 @@ print(sess.run(norm))
```
Another common use of random values is the initialization of variables. Also see
-the @{$variables$Variables How To}.
+the [Variables How To](../../guide/variables.md).
```python
# Use random uniform values in [0, 1) as the initializer for a variable of shape
diff --git a/tensorflow/docs_src/api_guides/python/input_dataset.md b/tensorflow/docs_src/api_guides/python/input_dataset.md
index ab572e53d4..911a76c2df 100644
--- a/tensorflow/docs_src/api_guides/python/input_dataset.md
+++ b/tensorflow/docs_src/api_guides/python/input_dataset.md
@@ -2,7 +2,7 @@
[TOC]
`tf.data.Dataset` allows you to build complex input pipelines. See the
-@{$guide/datasets} for an in-depth explanation of how to use this API.
+[Importing Data](../../guide/datasets.md) for an in-depth explanation of how to use this API.
## Reader classes
diff --git a/tensorflow/docs_src/api_guides/python/io_ops.md b/tensorflow/docs_src/api_guides/python/io_ops.md
index ab3c70daa0..d7ce6fdfde 100644
--- a/tensorflow/docs_src/api_guides/python/io_ops.md
+++ b/tensorflow/docs_src/api_guides/python/io_ops.md
@@ -8,7 +8,7 @@ Note: Functions taking `Tensor` arguments can also take anything accepted by
## Placeholders
TensorFlow provides a placeholder operation that must be fed with data
-on execution. For more info, see the section on @{$reading_data#Feeding$Feeding data}.
+on execution. For more info, see the section on [Feeding data](../../api_guides/python/reading_data.md#Feeding).
* `tf.placeholder`
* `tf.placeholder_with_default`
@@ -21,7 +21,7 @@ there is a convenience function:
## Readers
TensorFlow provides a set of Reader classes for reading data formats.
-For more information on inputs and readers, see @{$reading_data$Reading data}.
+For more information on inputs and readers, see [Reading data](../../api_guides/python/reading_data.md).
* `tf.ReaderBase`
* `tf.TextLineReader`
@@ -42,7 +42,7 @@ formats into tensors.
### Example protocol buffer
-TensorFlow's @{$reading_data#standard_tensorflow_format$recommended format for training examples}
+TensorFlow's [recommended format for training examples](../../api_guides/python/reading_data.md#standard_tensorflow_format)
is serialized `Example` protocol buffers, [described
here](https://www.tensorflow.org/code/tensorflow/core/example/example.proto).
They contain `Features`, [described
@@ -62,7 +62,7 @@ here](https://www.tensorflow.org/code/tensorflow/core/example/feature.proto).
TensorFlow provides several implementations of 'Queues', which are
structures within the TensorFlow computation graph to stage pipelines
of tensors together. The following describe the basic Queue interface
-and some implementations. To see an example use, see @{$threading_and_queues$Threading and Queues}.
+and some implementations. To see an example use, see [Threading and Queues](../../api_guides/python/threading_and_queues.md).
* `tf.QueueBase`
* `tf.FIFOQueue`
@@ -85,7 +85,7 @@ and some implementations. To see an example use, see @{$threading_and_queues$Th
## Input pipeline
TensorFlow functions for setting up an input-prefetching pipeline.
-Please see the @{$reading_data$reading data how-to}
+Please see the [reading data how-to](../../api_guides/python/reading_data.md)
for context.
### Beginning of an input pipeline
diff --git a/tensorflow/docs_src/api_guides/python/meta_graph.md b/tensorflow/docs_src/api_guides/python/meta_graph.md
index 7dbd9a56f4..5e8a8b4d0f 100644
--- a/tensorflow/docs_src/api_guides/python/meta_graph.md
+++ b/tensorflow/docs_src/api_guides/python/meta_graph.md
@@ -23,7 +23,7 @@ protocol buffer. It contains the following fields:
* [`SaverDef`](https://www.tensorflow.org/code/tensorflow/core/protobuf/saver.proto) for the saver.
* [`CollectionDef`](https://www.tensorflow.org/code/tensorflow/core/protobuf/meta_graph.proto)
map that further describes additional components of the model such as
-@{$python/state_ops$`Variables`},
+[`Variables`](../../api_guides/python/state_ops.md),
`tf.train.QueueRunner`, etc.
In order for a Python object to be serialized
diff --git a/tensorflow/docs_src/api_guides/python/reading_data.md b/tensorflow/docs_src/api_guides/python/reading_data.md
index 78c36d965c..9f555ee85d 100644
--- a/tensorflow/docs_src/api_guides/python/reading_data.md
+++ b/tensorflow/docs_src/api_guides/python/reading_data.md
@@ -1,7 +1,7 @@
# Reading data
Note: The preferred way to feed data into a tensorflow program is using the
-@{$datasets$`tf.data` API}.
+[`tf.data` API](../../guide/datasets.md).
There are four methods of getting data into a TensorFlow program:
@@ -16,7 +16,7 @@ There are four methods of getting data into a TensorFlow program:
## `tf.data` API
-See the @{$guide/datasets} for an in-depth explanation of `tf.data.Dataset`.
+See the [Importing Data](../../guide/datasets.md) for an in-depth explanation of `tf.data.Dataset`.
The `tf.data` API enables you to extract and preprocess data
from different input/file formats, and apply transformations such as batching,
shuffling, and mapping functions over the dataset. This is an improved version
@@ -56,8 +56,8 @@ in
## `QueueRunner`
Warning: This section discusses implementing input pipelines using the
-queue-based APIs which can be cleanly replaced by the @{$datasets$`tf.data`
-API}.
+queue-based APIs which can be cleanly replaced by the [`tf.data`
+API](../../guide/datasets.md).
A typical queue-based pipeline for reading records from files has the following stages:
@@ -154,14 +154,14 @@ a uint8 tensor, standard operations can slice out each piece and reformat as
needed. For CIFAR-10, you can see how to do the reading and decoding in
[`tensorflow_models/tutorials/image/cifar10/cifar10_input.py`](https://github.com/tensorflow/models/tree/master/tutorials/image/cifar10/cifar10_input.py)
and described in
-@{$deep_cnn#prepare-the-data$this tutorial}.
+[this tutorial](../../tutorials/images/deep_cnn.md#prepare-the-data).
#### Standard TensorFlow format
Another approach is to convert whatever data you have into a supported format.
This approach makes it easier to mix and match data sets and network
architectures. The recommended format for TensorFlow is a
-@{$python/python_io#tfrecords_format_details$TFRecords file}
+[TFRecords file](../../api_guides/python/python_io.md#tfrecords_format_details)
containing
[`tf.train.Example` protocol buffers](https://www.tensorflow.org/code/tensorflow/core/example/example.proto)
(which contain
@@ -279,7 +279,7 @@ This can be important:
How many threads do you need? the `tf.train.shuffle_batch*` functions add a
summary to the graph that indicates how full the example queue is. If you have
enough reading threads, that summary will stay above zero. You can
-@{$summaries_and_tensorboard$view your summaries as training progresses using TensorBoard}.
+[view your summaries as training progresses using TensorBoard](../../guide/summaries_and_tensorboard.md).
### Creating threads to prefetch using `QueueRunner` objects
@@ -368,7 +368,7 @@ threads got an error when running some operation (or an ordinary Python
exception).
For more about threading, queues, QueueRunners, and Coordinators
-@{$threading_and_queues$see here}.
+[see here](../../api_guides/python/threading_and_queues.md).
#### Aside: How clean shut-down when limiting epochs works
@@ -501,18 +501,18 @@ sessions, maybe in separate processes:
model that reads validation input data.
This is what is done `tf.estimator` and manually in
-@{$deep_cnn#save-and-restore-checkpoints$the example CIFAR-10 model}.
+[the example CIFAR-10 model](../../tutorials/images/deep_cnn.md#save-and-restore-checkpoints).
This has a couple of benefits:
* The eval is performed on a single snapshot of the trained variables.
* You can perform the eval even after training has completed and exited.
You can have the train and eval in the same graph in the same process, and share
-their trained variables or layers. See @{$variables$the shared variables tutorial}.
+their trained variables or layers. See [the shared variables tutorial](../../guide/variables.md).
To support the single-graph approach
-@{$guide/datasets$`tf.data`} also supplies
-@{$guide/datasets#creating_an_iterator$advanced iterator types} that
+[`tf.data`](../../guide/datasets.md) also supplies
+[advanced iterator types](../../guide/datasets.md#creating_an_iterator) that
that allow the user to change the input pipeline without rebuilding the graph or
session.
diff --git a/tensorflow/docs_src/api_guides/python/regression_examples.md b/tensorflow/docs_src/api_guides/python/regression_examples.md
index f8abbf0f97..d67f38f57a 100644
--- a/tensorflow/docs_src/api_guides/python/regression_examples.md
+++ b/tensorflow/docs_src/api_guides/python/regression_examples.md
@@ -66,7 +66,7 @@ watch the following video:
<a name="running"></a>
## Running the examples
-You must @{$install$install TensorFlow} prior to running these examples.
+You must [install TensorFlow](../../install/index.md) prior to running these examples.
Depending on the way you've installed TensorFlow, you might also
need to activate your TensorFlow environment. Then, do the following:
diff --git a/tensorflow/docs_src/api_guides/python/summary.md b/tensorflow/docs_src/api_guides/python/summary.md
index e290703b7d..fc45e7b4c3 100644
--- a/tensorflow/docs_src/api_guides/python/summary.md
+++ b/tensorflow/docs_src/api_guides/python/summary.md
@@ -2,7 +2,7 @@
[TOC]
Summaries provide a way to export condensed information about a model, which is
-then accessible in tools such as @{$summaries_and_tensorboard$TensorBoard}.
+then accessible in tools such as [TensorBoard](../../guide/summaries_and_tensorboard.md).
## Generation of Summaries
diff --git a/tensorflow/docs_src/api_guides/python/threading_and_queues.md b/tensorflow/docs_src/api_guides/python/threading_and_queues.md
index 48f0778b73..e00f17f955 100644
--- a/tensorflow/docs_src/api_guides/python/threading_and_queues.md
+++ b/tensorflow/docs_src/api_guides/python/threading_and_queues.md
@@ -3,7 +3,7 @@
Note: In versions of TensorFlow before 1.2, we recommended using multi-threaded,
queue-based input pipelines for performance. Beginning with TensorFlow 1.4,
however, we recommend using the `tf.data` module instead. (See
-@{$datasets$Datasets} for details. In TensorFlow 1.2 and 1.3, the module was
+[Datasets](../../guide/datasets.md) for details. In TensorFlow 1.2 and 1.3, the module was
called `tf.contrib.data`.) The `tf.data` module offers an easier-to-use
interface for constructing efficient input pipelines. Furthermore, we've stopped
developing the old multi-threaded, queue-based input pipelines. We've retained
diff --git a/tensorflow/docs_src/api_guides/python/train.md b/tensorflow/docs_src/api_guides/python/train.md
index a118123665..4b4c6a4fe3 100644
--- a/tensorflow/docs_src/api_guides/python/train.md
+++ b/tensorflow/docs_src/api_guides/python/train.md
@@ -74,9 +74,9 @@ moving averages for evaluations often improve results significantly.
## Coordinator and QueueRunner
-See @{$threading_and_queues$Threading and Queues}
+See [Threading and Queues](../../api_guides/python/threading_and_queues.md)
for how to use threads and queues. For documentation on the Queue API,
-see @{$python/io_ops#queues$Queues}.
+see [Queues](../../api_guides/python/io_ops.md#queues).
* `tf.train.Coordinator`
@@ -87,7 +87,7 @@ see @{$python/io_ops#queues$Queues}.
## Distributed execution
-See @{$distributed$Distributed TensorFlow} for
+See [Distributed TensorFlow](../../deploy/distributed.md) for
more information about how to configure a distributed TensorFlow program.
* `tf.train.Server`
@@ -105,7 +105,7 @@ more information about how to configure a distributed TensorFlow program.
## Reading Summaries from Event Files
-See @{$summaries_and_tensorboard$Summaries and TensorBoard} for an
+See [Summaries and TensorBoard](../../guide/summaries_and_tensorboard.md) for an
overview of summaries, event files, and visualization in TensorBoard.
* `tf.train.summary_iterator`
diff --git a/tensorflow/docs_src/community/contributing.md b/tensorflow/docs_src/community/contributing.md
index afbb8bbdd0..ece4a7c70b 100644
--- a/tensorflow/docs_src/community/contributing.md
+++ b/tensorflow/docs_src/community/contributing.md
@@ -25,12 +25,12 @@ guidelines](https://github.com/tensorflow/tensorflow/blob/master/CONTRIBUTING.md
[developers@tensorflow.org](https://groups.google.com/a/tensorflow.org/d/forum/developers)
mailing list, to coordinate and discuss with others contributing to TensorFlow.
-* For coding style conventions, read the @{$style_guide$TensorFlow Style Guide}.
+* For coding style conventions, read the [TensorFlow Style Guide](../community/style_guide.md).
-* Finally, review @{$documentation$Writing TensorFlow Documentation}, which
+* Finally, review [Writing TensorFlow Documentation](../community/documentation.md), which
explains documentation conventions.
-You may also wish to review our guide to @{$benchmarks$defining and running benchmarks}.
+You may also wish to review our guide to [defining and running benchmarks](../community/benchmarks.md).
## Special Interest Groups
diff --git a/tensorflow/docs_src/community/index.md b/tensorflow/docs_src/community/index.md
index 865a203bf8..1a30be32a5 100644
--- a/tensorflow/docs_src/community/index.md
+++ b/tensorflow/docs_src/community/index.md
@@ -40,7 +40,7 @@ We recommend that you join this list if you depend on TensorFlow in any way.
### Development Roadmap
-The @{$roadmap$Roadmap} summarizes plans for upcoming additions to TensorFlow.
+The [Roadmap](../community/roadmap.md) summarizes plans for upcoming additions to TensorFlow.
### Social Media
@@ -70,12 +70,12 @@ the [TensorFlow discuss mailing
list](https://groups.google.com/a/tensorflow.org/d/forum/discuss).
A number of other mailing lists exist, focused on different project areas, which
-can be found at @{$lists$TensorFlow Mailing Lists}.
+can be found at [TensorFlow Mailing Lists](../community/lists.md).
### User Groups
To meet with like-minded people local to you, check out the many
-@{$groups$TensorFlow user groups} around the world.
+[TensorFlow user groups](../community/groups.md) around the world.
## Contributing To TensorFlow
diff --git a/tensorflow/docs_src/community/style_guide.md b/tensorflow/docs_src/community/style_guide.md
index daf0d2fdc0..c78da20edd 100644
--- a/tensorflow/docs_src/community/style_guide.md
+++ b/tensorflow/docs_src/community/style_guide.md
@@ -88,7 +88,7 @@ creates a part of the graph and returns output tensors.
* Operations should contain an extensive Python comment with Args and Returns
declarations that explain both the type and meaning of each value. Possible
shapes, dtypes, or ranks should be specified in the description.
- @{$documentation$See documentation details}
+ [See documentation details](../community/documentation.md)
* For increased usability include an example of usage with inputs / outputs
of the op in Example section.
diff --git a/tensorflow/docs_src/deploy/distributed.md b/tensorflow/docs_src/deploy/distributed.md
index 6a760f53c8..2fba36cfa7 100644
--- a/tensorflow/docs_src/deploy/distributed.md
+++ b/tensorflow/docs_src/deploy/distributed.md
@@ -2,7 +2,7 @@
This document shows how to create a cluster of TensorFlow servers, and how to
distribute a computation graph across that cluster. We assume that you are
-familiar with the @{$guide/low_level_intro$basic concepts} of
+familiar with the [basic concepts](../guide/low_level_intro.md) of
writing low level TensorFlow programs.
## Hello distributed TensorFlow!
diff --git a/tensorflow/docs_src/deploy/hadoop.md b/tensorflow/docs_src/deploy/hadoop.md
index c4471562b9..b0d416df2e 100644
--- a/tensorflow/docs_src/deploy/hadoop.md
+++ b/tensorflow/docs_src/deploy/hadoop.md
@@ -6,7 +6,7 @@ at the moment.
## HDFS
-We assume that you are familiar with @{$reading_data$reading data}.
+We assume that you are familiar with [reading data](../api_guides/python/reading_data.md).
To use HDFS with TensorFlow, change the file paths you use to read and write
data to an HDFS path. For example:
@@ -61,5 +61,5 @@ be set:
export KRB5CCNAME=/tmp/krb5cc_10002
```
-If you are running @{$distributed$Distributed TensorFlow}, then all
+If you are running [Distributed TensorFlow](../deploy/distributed.md), then all
workers must have the environment variables set and Hadoop installed.
diff --git a/tensorflow/docs_src/deploy/index.md b/tensorflow/docs_src/deploy/index.md
index 3322004189..08b28de639 100644
--- a/tensorflow/docs_src/deploy/index.md
+++ b/tensorflow/docs_src/deploy/index.md
@@ -3,11 +3,11 @@
This section focuses on deploying real-world models. It contains
the following documents:
- * @{$distributed$Distributed TensorFlow}, which explains how to create
+ * [Distributed TensorFlow](../deploy/distributed.md), which explains how to create
a cluster of TensorFlow servers.
- * @{$hadoop$How to run TensorFlow on Hadoop}, which has a highly
+ * [How to run TensorFlow on Hadoop](../deploy/hadoop.md), which has a highly
self-explanatory title.
- * @{$s3$How to run TensorFlow with the S3 filesystem}, which explains how
+ * [How to run TensorFlow with the S3 filesystem](../deploy/s3.md), which explains how
to run TensorFlow with the S3 file system.
* The entire document set for [TensorFlow serving](/serving), an open-source,
flexible, high-performance serving system for machine-learned models
diff --git a/tensorflow/docs_src/deploy/s3.md b/tensorflow/docs_src/deploy/s3.md
index 079c796aa7..b4a759d687 100644
--- a/tensorflow/docs_src/deploy/s3.md
+++ b/tensorflow/docs_src/deploy/s3.md
@@ -64,7 +64,7 @@ You should see output similar to this:
### Reading Data
-When @{$reading_data$reading data}, change the file paths you use to read and write
+When [reading data](../api_guides/python/reading_data.md), change the file paths you use to read and write
data to an S3 path. For example:
```python
diff --git a/tensorflow/docs_src/extend/add_filesys.md b/tensorflow/docs_src/extend/add_filesys.md
index bc0f662f0c..5f8ac64d25 100644
--- a/tensorflow/docs_src/extend/add_filesys.md
+++ b/tensorflow/docs_src/extend/add_filesys.md
@@ -225,7 +225,7 @@ it will use the `FooBarFileSystem` implementation.
Next, you must build a shared object containing this implementation. An example
of doing so using bazel's `cc_binary` rule can be found
[here](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/BUILD#L244),
-but you may use any build system to do so. See the section on @{$adding_an_op#build_the_op_library$building the op library} for similar
+but you may use any build system to do so. See the section on [building the op library](../extend/adding_an_op.md#build_the_op_library) for similar
instructions.
The result of building this target is a `.so` shared object file.
diff --git a/tensorflow/docs_src/extend/adding_an_op.md b/tensorflow/docs_src/extend/adding_an_op.md
index fbf5c0b90d..cc25ab9b45 100644
--- a/tensorflow/docs_src/extend/adding_an_op.md
+++ b/tensorflow/docs_src/extend/adding_an_op.md
@@ -56,8 +56,8 @@ PREREQUISITES:
* Some familiarity with C++.
* Must have installed the
- @{$install$TensorFlow binary}, or must have
- @{$install_sources$downloaded TensorFlow source},
+ [TensorFlow binary](../install/index.md), or must have
+ [downloaded TensorFlow source](../install/install_sources.md),
and be able to build it.
[TOC]
@@ -1140,7 +1140,7 @@ In general, changes to existing, checked-in specifications must be
backwards-compatible: changing the specification of an op must not break prior
serialized `GraphDef` protocol buffers constructed from older specifications.
The details of `GraphDef` compatibility are
-@{$version_compat#compatibility_of_graphs_and_checkpoints$described here}.
+[described here](../guide/version_compat.md#compatibility_of_graphs_and_checkpoints).
There are several ways to preserve backwards-compatibility.
@@ -1190,7 +1190,7 @@ callers. The Python API may be kept compatible by careful changes in a
hand-written Python wrapper, by keeping the old signature except possibly adding
new optional arguments to the end. Generally incompatible changes may only be
made when TensorFlow's changes major versions, and must conform to the
-@{$version_compat#compatibility_of_graphs_and_checkpoints$`GraphDef` version semantics}.
+[`GraphDef` version semantics](../guide/version_compat.md#compatibility_of_graphs_and_checkpoints).
### GPU Support
@@ -1262,7 +1262,7 @@ For example, add `-L /usr/local/cuda-8.0/lib64/` if your CUDA is installed in
Given a graph of ops, TensorFlow uses automatic differentiation
(backpropagation) to add new ops representing gradients with respect to the
existing ops (see
-@{$python/train#gradient_computation$Gradient Computation}).
+[Gradient Computation](../api_guides/python/train.md#gradient_computation)).
To make automatic differentiation work for new ops, you must register a gradient
function which computes gradients with respect to the ops' inputs given
gradients with respect to the ops' outputs.
diff --git a/tensorflow/docs_src/extend/architecture.md b/tensorflow/docs_src/extend/architecture.md
index 83d70c9468..eb33336bee 100644
--- a/tensorflow/docs_src/extend/architecture.md
+++ b/tensorflow/docs_src/extend/architecture.md
@@ -7,8 +7,8 @@ learning models and system-level optimizations.
This document describes the system architecture that makes this
combination of scale and flexibility possible. It assumes that you have basic familiarity
with TensorFlow programming concepts such as the computation graph, operations,
-and sessions. See @{$guide/low_level_intro$this document} for an introduction to
-these topics. Some familiarity with @{$distributed$distributed TensorFlow}
+and sessions. See [this document](../guide/low_level_intro.md) for an introduction to
+these topics. Some familiarity with [distributed TensorFlow](../deploy/distributed.md)
will also be helpful.
This document is for developers who want to extend TensorFlow in some way not
@@ -199,7 +199,7 @@ Many of the operation kernels are implemented using Eigen::Tensor, which uses
C++ templates to generate efficient parallel code for multicore CPUs and GPUs;
however, we liberally use libraries like cuDNN where a more efficient kernel
implementation is possible. We have also implemented
-@{$quantization$quantization}, which enables
+[quantization](../performance/quantization.md), which enables
faster inference in environments such as mobile devices and high-throughput
datacenter applications, and use the
[gemmlowp](https://github.com/google/gemmlowp) low-precision matrix library to
@@ -209,7 +209,7 @@ If it is difficult or inefficient to represent a subcomputation as a composition
of operations, users can register additional kernels that provide an efficient
implementation written in C++. For example, we recommend registering your own
fused kernels for some performance critical operations, such as the ReLU and
-Sigmoid activation functions and their corresponding gradients. The @{$xla$XLA Compiler} has an
+Sigmoid activation functions and their corresponding gradients. The [XLA Compiler](../performance/xla/index.md) has an
experimental implementation of automatic kernel fusion.
### Code
diff --git a/tensorflow/docs_src/extend/index.md b/tensorflow/docs_src/extend/index.md
index 0e4bfd1dc4..bbf4a8139b 100644
--- a/tensorflow/docs_src/extend/index.md
+++ b/tensorflow/docs_src/extend/index.md
@@ -3,16 +3,16 @@
This section explains how developers can add functionality to TensorFlow's
capabilities. Begin by reading the following architectural overview:
- * @{$architecture$TensorFlow Architecture}
+ * [TensorFlow Architecture](../extend/architecture.md)
The following guides explain how to extend particular aspects of
TensorFlow:
- * @{$adding_an_op$Adding a New Op}, which explains how to create your own
+ * [Adding a New Op](../extend/adding_an_op.md), which explains how to create your own
operations.
- * @{$add_filesys$Adding a Custom Filesystem Plugin}, which explains how to
+ * [Adding a Custom Filesystem Plugin](../extend/add_filesys.md), which explains how to
add support for your own shared or distributed filesystem.
- * @{$new_data_formats$Custom Data Readers}, which details how to add support
+ * [Custom Data Readers](../extend/new_data_formats.md), which details how to add support
for your own file and record formats.
Python is currently the only language supported by TensorFlow's API stability
@@ -24,11 +24,11 @@ plus community support for [Haskell](https://github.com/tensorflow/haskell) and
develop TensorFlow features in a language other than these languages, read the
following guide:
- * @{$language_bindings$TensorFlow in Other Languages}
+ * [TensorFlow in Other Languages](../extend/language_bindings.md)
To create tools compatible with TensorFlow's model format, read the following
guide:
- * @{$tool_developers$A Tool Developer's Guide to TensorFlow Model Files}
+ * [A Tool Developer's Guide to TensorFlow Model Files](../extend/tool_developers/index.md)
diff --git a/tensorflow/docs_src/extend/language_bindings.md b/tensorflow/docs_src/extend/language_bindings.md
index 9a968d365b..4727eabdc1 100644
--- a/tensorflow/docs_src/extend/language_bindings.md
+++ b/tensorflow/docs_src/extend/language_bindings.md
@@ -125,7 +125,7 @@ The `OpDef` specifies the following:
instead of CamelCase for the op's function name.
- A list of inputs and outputs. The types for these may be polymorphic by
referencing attributes, as described in the inputs and outputs section of
- @{$adding_an_op$Adding an op}.
+ [Adding an op](../extend/adding_an_op.md).
- A list of attributes, along with their default values (if any). Note that
some of these will be inferred (if they are determined by an input), some
will be optional (if they have a default), and some will be required (no
diff --git a/tensorflow/docs_src/extend/new_data_formats.md b/tensorflow/docs_src/extend/new_data_formats.md
index 47a8344b70..7ca50c9c76 100644
--- a/tensorflow/docs_src/extend/new_data_formats.md
+++ b/tensorflow/docs_src/extend/new_data_formats.md
@@ -4,7 +4,7 @@ PREREQUISITES:
* Some familiarity with C++.
* Must have
- @{$install_sources$downloaded TensorFlow source}, and be
+ [downloaded TensorFlow source](../install/install_sources.md), and be
able to build it.
We divide the task of supporting a file format into two pieces:
@@ -67,7 +67,7 @@ need to:
You can put all the C++ code in a single file, such as
`my_reader_dataset_op.cc`. It will help if you are
-familiar with @{$adding_an_op$the adding an op how-to}. The following skeleton
+familiar with [the adding an op how-to](../extend/adding_an_op.md). The following skeleton
can be used as a starting point for your implementation:
```c++
@@ -227,8 +227,8 @@ REGISTER_KERNEL_BUILDER(Name("MyReaderDataset").Device(tensorflow::DEVICE_CPU),
```
The last step is to build the C++ code and add a Python wrapper. The easiest way
-to do this is by @{$adding_an_op#build_the_op_library$compiling a dynamic
-library} (e.g. called `"my_reader_dataset_op.so"`), and adding a Python class
+to do this is by [compiling a dynamic
+library](../extend/adding_an_op.md#build_the_op_library) (e.g. called `"my_reader_dataset_op.so"`), and adding a Python class
that subclasses `tf.data.Dataset` to wrap it. An example Python program is
given here:
@@ -285,7 +285,7 @@ You can see some examples of `Dataset` wrapper classes in
## Writing an Op for a record format
Generally this is an ordinary op that takes a scalar string record as input, and
-so follow @{$adding_an_op$the instructions to add an Op}.
+so follow [the instructions to add an Op](../extend/adding_an_op.md).
You may optionally take a scalar string key as input, and include that in error
messages reporting improperly formatted data. That way users can more easily
track down where the bad data came from.
diff --git a/tensorflow/docs_src/guide/checkpoints.md b/tensorflow/docs_src/guide/checkpoints.md
index e1add29852..3c92cbbd40 100644
--- a/tensorflow/docs_src/guide/checkpoints.md
+++ b/tensorflow/docs_src/guide/checkpoints.md
@@ -9,13 +9,13 @@ Estimators. TensorFlow provides two model formats:
the model.
This document focuses on checkpoints. For details on `SavedModel`, see the
-@{$saved_model$Saving and Restoring} guide.
+[Saving and Restoring](../guide/saved_model.md) guide.
## Sample code
This document relies on the same
-[Iris classification example](https://github.com/tensorflow/models/blob/master/samples/core/get_started/premade_estimator.py) detailed in @{$premade_estimators$Getting Started with TensorFlow}.
+[Iris classification example](https://github.com/tensorflow/models/blob/master/samples/core/get_started/premade_estimator.py) detailed in [Getting Started with TensorFlow](../guide/premade_estimators.md).
To download and access the example, invoke the following two commands:
```shell
@@ -160,7 +160,7 @@ checkpoint to the `model_dir`. Each subsequent call to the Estimator's
1. The Estimator builds the model's
[graph](https://developers.google.com/machine-learning/glossary/#graph)
by running the `model_fn()`. (For details on the `model_fn()`, see
- @{$custom_estimators$Creating Custom Estimators.})
+ [Creating Custom Estimators.](../guide/custom_estimators.md))
2. The Estimator initializes the weights of the new model from the data
stored in the most recent checkpoint.
@@ -231,7 +231,7 @@ This separation will keep your checkpoints recoverable.
Checkpoints provide an easy automatic mechanism for saving and restoring
models created by Estimators.
-See the @{$saved_model$Saving and Restoring} guide for details about:
+See the [Saving and Restoring](../guide/saved_model.md) guide for details about:
* Saving and restoring models using low-level TensorFlow APIs.
* Exporting and importing models in the SavedModel format, which is a
diff --git a/tensorflow/docs_src/guide/custom_estimators.md b/tensorflow/docs_src/guide/custom_estimators.md
index 199a0e93de..913a35920f 100644
--- a/tensorflow/docs_src/guide/custom_estimators.md
+++ b/tensorflow/docs_src/guide/custom_estimators.md
@@ -5,7 +5,7 @@ This document introduces custom Estimators. In particular, this document
demonstrates how to create a custom `tf.estimator.Estimator` that
mimics the behavior of the pre-made Estimator
`tf.estimator.DNNClassifier` in solving the Iris problem. See
-the @{$premade_estimators$Pre-Made Estimators chapter} for details
+the [Pre-Made Estimators chapter](../guide/premade_estimators.md) for details
on the Iris problem.
To download and access the example code invoke the following two commands:
@@ -84,7 +84,7 @@ and a logits output layer.
## Write an Input function
Our custom Estimator implementation uses the same input function as our
-@{$premade_estimators$pre-made Estimator implementation}, from
+[pre-made Estimator implementation](../guide/premade_estimators.md), from
[`iris_data.py`](https://github.com/tensorflow/models/blob/master/samples/core/get_started/iris_data.py).
Namely:
@@ -106,8 +106,8 @@ This input function builds an input pipeline that yields batches of
## Create feature columns
-As detailed in the @{$premade_estimators$Premade Estimators} and
-@{$feature_columns$Feature Columns} chapters, you must define
+As detailed in the [Premade Estimators](../guide/premade_estimators.md) and
+[Feature Columns](../guide/feature_columns.md) chapters, you must define
your model's feature columns to specify how the model should use each feature.
Whether working with pre-made Estimators or custom Estimators, you define
feature columns in the same fashion.
@@ -145,7 +145,7 @@ to the constructor are in turn passed on to the `model_fn`. In
[`custom_estimator.py`](https://github.com/tensorflow/models/blob/master/samples/core/get_started/custom_estimator.py)
the following lines create the estimator and set the params to configure the
model. This configuration step is similar to how we configured the `tf.estimator.DNNClassifier` in
-@{$premade_estimators}.
+[Premade Estimators](../guide/premade_estimators.md).
```python
classifier = tf.estimator.Estimator(
@@ -489,7 +489,7 @@ configure your Estimator without modifying the code in the `model_fn`.
The rest of the code to train, evaluate, and generate predictions using our
Estimator is the same as in the
-@{$premade_estimators$Premade Estimators} chapter. For
+[Premade Estimators](../guide/premade_estimators.md) chapter. For
example, the following line will train the model:
```python
@@ -597,6 +597,6 @@ For more details, be sure to check out:
which contains more curated examples using custom estimators.
* This [TensorBoard video](https://youtu.be/eBbEDRsCmv4), which introduces
TensorBoard.
-* The @{$low_level_intro$Low Level Introduction}, which demonstrates
+* The [Low Level Introduction](../guide/low_level_intro.md), which demonstrates
how to experiment directly with TensorFlow's low level APIs, making debugging
easier.
diff --git a/tensorflow/docs_src/guide/datasets.md b/tensorflow/docs_src/guide/datasets.md
index bb18e8b79c..bf77550f6a 100644
--- a/tensorflow/docs_src/guide/datasets.md
+++ b/tensorflow/docs_src/guide/datasets.md
@@ -335,7 +335,7 @@ restore the current state of the iterator (and, effectively, the whole input
pipeline). A saveable object thus created can be added to `tf.train.Saver`
variables list or the `tf.GraphKeys.SAVEABLE_OBJECTS` collection for saving and
restoring in the same manner as a `tf.Variable`. Refer to
-@{$saved_model$Saving and Restoring} for details on how to save and restore
+[Saving and Restoring](../guide/saved_model.md) for details on how to save and restore
variables.
```python
diff --git a/tensorflow/docs_src/guide/datasets_for_estimators.md b/tensorflow/docs_src/guide/datasets_for_estimators.md
index 969ea579f7..09a3830ca9 100644
--- a/tensorflow/docs_src/guide/datasets_for_estimators.md
+++ b/tensorflow/docs_src/guide/datasets_for_estimators.md
@@ -14,7 +14,7 @@ introduces the API by walking through two simple examples:
Taking slices from an array is the simplest way to get started with `tf.data`.
-The @{$premade_estimators$Premade Estimators} chapter describes
+The [Premade Estimators](../guide/premade_estimators.md) chapter describes
the following `train_input_fn`, from
[`iris_data.py`](https://github.com/tensorflow/models/blob/master/samples/core/get_started/iris_data.py),
to pipe the data into the Estimator:
@@ -91,8 +91,8 @@ print(mnist_ds)
```
This will print the following line, showing the
-@{$guide/tensors#shapes$shapes} and
-@{$guide/tensors#data_types$types} of the items in
+[shapes](../guide/tensors.md#shapes) and
+[types](../guide/tensors.md#data_types) of the items in
the dataset. Note that a `Dataset` does not know how many items it contains.
``` None
@@ -128,7 +128,7 @@ print(dataset)
Here we see that when a `Dataset` contains structured elements, the `shapes`
and `types` of the `Dataset` take on the same structure. This dataset contains
-dictionaries of @{$guide/tensors#rank$scalars}, all of type
+dictionaries of [scalars](../guide/tensors.md#rank), all of type
`tf.float64`.
The first line of the iris `train_input_fn` uses the same functionality, but
@@ -377,11 +377,11 @@ Now you have the basic idea of how to efficiently load data into an
Estimator. Consider the following documents next:
-* @{$custom_estimators}, which demonstrates how to build your own
+* [Creating Custom Estimators](../guide/custom_estimators.md), which demonstrates how to build your own
custom `Estimator` model.
-* The @{$low_level_intro#datasets$Low Level Introduction}, which demonstrates
+* The [Low Level Introduction](../guide/low_level_intro.md#datasets), which demonstrates
how to experiment directly with `tf.data.Datasets` using TensorFlow's low
level APIs.
-* @{$guide/datasets} which goes into great detail about additional
+* [Importing Data](../guide/datasets.md) which goes into great detail about additional
functionality of `Datasets`.
diff --git a/tensorflow/docs_src/guide/debugger.md b/tensorflow/docs_src/guide/debugger.md
index 4c4a04a88a..5af27471a2 100644
--- a/tensorflow/docs_src/guide/debugger.md
+++ b/tensorflow/docs_src/guide/debugger.md
@@ -95,7 +95,7 @@ intermediate tensors (tensors that are neither inputs or outputs of the
`Session.run()` call, but are in the path leading from the inputs to the
outputs). This filter is for `nan`s and `inf`s is a common enough use case that
we ship it with the
-@{$python/tfdbg#Classes_for_debug_dump_data_and_directories$`debug_data`}
+[`debug_data`](../api_guides/python/tfdbg.md#Classes_for_debug_dump_data_and_directories)
module.
Note: You can also write your own custom filters. See `tfdbg.DebugDumpDir.find`
diff --git a/tensorflow/docs_src/guide/eager.md b/tensorflow/docs_src/guide/eager.md
index e47a8b599c..3b5797a638 100644
--- a/tensorflow/docs_src/guide/eager.md
+++ b/tensorflow/docs_src/guide/eager.md
@@ -558,7 +558,7 @@ m.result() # => 5.5
#### Summaries and TensorBoard
-@{$summaries_and_tensorboard$TensorBoard} is a visualization tool for
+[TensorBoard](../guide/summaries_and_tensorboard.md) is a visualization tool for
understanding, debugging and optimizing the model training process. It uses
summary events that are written while executing the program.
diff --git a/tensorflow/docs_src/guide/embedding.md b/tensorflow/docs_src/guide/embedding.md
index 8a98367dfb..6007e6847b 100644
--- a/tensorflow/docs_src/guide/embedding.md
+++ b/tensorflow/docs_src/guide/embedding.md
@@ -78,7 +78,7 @@ Embeddings can be trained in many network types, and with various loss
functions and data sets. For example, one could use a recurrent neural network
to predict the next word from the previous one given a large corpus of
sentences, or one could train two networks to do multi-lingual translation.
-These methods are described in the @{$word2vec$Vector Representations of Words}
+These methods are described in the [Vector Representations of Words](../tutorials/representation/word2vec.md)
tutorial.
## Visualizing Embeddings
diff --git a/tensorflow/docs_src/guide/estimators.md b/tensorflow/docs_src/guide/estimators.md
index 7b54e3de29..3903bfd126 100644
--- a/tensorflow/docs_src/guide/estimators.md
+++ b/tensorflow/docs_src/guide/estimators.md
@@ -84,7 +84,7 @@ of the following four steps:
... # manipulate dataset, extracting the feature dict and the label
return feature_dict, label
- (See @{$guide/datasets} for full details.)
+ (See [Importing Data](../guide/datasets.md) for full details.)
2. **Define the feature columns.** Each `tf.feature_column`
identifies a feature name, its type, and any input pre-processing.
@@ -136,7 +136,7 @@ The heart of every Estimator--whether pre-made or custom--is its
evaluation, and prediction. When you are using a pre-made Estimator,
someone else has already implemented the model function. When relying
on a custom Estimator, you must write the model function yourself. A
-@{$custom_estimators$companion document}
+[companion document](../guide/custom_estimators.md)
explains how to write the model function.
diff --git a/tensorflow/docs_src/guide/faq.md b/tensorflow/docs_src/guide/faq.md
index 8370097560..a02635ebba 100644
--- a/tensorflow/docs_src/guide/faq.md
+++ b/tensorflow/docs_src/guide/faq.md
@@ -2,7 +2,7 @@
This document provides answers to some of the frequently asked questions about
TensorFlow. If you have a question that is not covered here, you might find an
-answer on one of the TensorFlow @{$about$community resources}.
+answer on one of the TensorFlow [community resources](../about/index.md).
[TOC]
@@ -11,7 +11,7 @@ answer on one of the TensorFlow @{$about$community resources}.
#### Can I run distributed training on multiple computers?
Yes! TensorFlow gained
-@{$distributed$support for distributed computation} in
+[support for distributed computation](../deploy/distributed.md) in
version 0.8. TensorFlow now supports multiple devices (CPUs and GPUs) in one or
more computers.
@@ -23,7 +23,7 @@ As of the 0.6.0 release timeframe (Early December 2015), we do support Python
## Building a TensorFlow graph
See also the
-@{$python/framework$API documentation on building graphs}.
+[API documentation on building graphs](../api_guides/python/framework.md).
#### Why does `c = tf.matmul(a, b)` not execute the matrix multiplication immediately?
@@ -48,16 +48,16 @@ device, and `"/device:GPU:i"` (or `"/gpu:i"`) for the *i*th GPU device.
To place a group of operations on a device, create them within a
`tf.device` context. See
the how-to documentation on
-@{$using_gpu$using GPUs with TensorFlow} for details of how
+[using GPUs with TensorFlow](../guide/using_gpu.md) for details of how
TensorFlow assigns operations to devices, and the
-@{$deep_cnn$CIFAR-10 tutorial} for an example model that
+[CIFAR-10 tutorial](../tutorials/images/deep_cnn.md) for an example model that
uses multiple GPUs.
## Running a TensorFlow computation
See also the
-@{$python/client$API documentation on running graphs}.
+[API documentation on running graphs](../api_guides/python/client.md).
#### What's the deal with feeding and placeholders?
@@ -106,7 +106,7 @@ a significant amount of memory, and can be released when the session is closed b
`tf.Session.close`.
The intermediate tensors that are created as part of a call to
-@{$python/client$`Session.run()`} will be freed at or before the
+[`Session.run()`](../api_guides/python/client.md) will be freed at or before the
end of the call.
#### Does the runtime parallelize parts of graph execution?
@@ -118,7 +118,7 @@ dimensions:
CPU, or multiple threads in a GPU.
* Independent nodes in a TensorFlow graph can run in parallel on multiple
devices, which makes it possible to speed up
- @{$deep_cnn$CIFAR-10 training using multiple GPUs}.
+ [CIFAR-10 training using multiple GPUs](../tutorials/images/deep_cnn.md).
* The Session API allows multiple concurrent steps (i.e. calls to
`tf.Session.run` in parallel). This
enables the runtime to get higher throughput, if a single step does not use
@@ -141,9 +141,9 @@ Bindings for various other languages (such as [C#](https://github.com/migueldeic
#### Does TensorFlow make use of all the devices (GPUs and CPUs) available on my machine?
TensorFlow supports multiple GPUs and CPUs. See the how-to documentation on
-@{$using_gpu$using GPUs with TensorFlow} for details of how
+[using GPUs with TensorFlow](../guide/using_gpu.md) for details of how
TensorFlow assigns operations to devices, and the
-@{$deep_cnn$CIFAR-10 tutorial} for an example model that
+[CIFAR-10 tutorial](../tutorials/images/deep_cnn.md) for an example model that
uses multiple GPUs.
Note that TensorFlow only uses GPU devices with a compute capability greater
@@ -155,16 +155,16 @@ The `tf.ReaderBase` and
`tf.QueueBase` classes provide special operations that
can *block* until input (or free space in a bounded queue) becomes
available. These operations allow you to build sophisticated
-@{$reading_data$input pipelines}, at the cost of making the
+[input pipelines](../api_guides/python/reading_data.md), at the cost of making the
TensorFlow computation somewhat more complicated. See the how-to documentation
for
-@{$reading_data#creating_threads_to_prefetch_using_queuerunner_objects$using `QueueRunner` objects to drive queues and readers}
+[using `QueueRunner` objects to drive queues and readers](../api_guides/python/reading_data.md#creating_threads_to_prefetch_using_queuerunner_objects)
for more information on how to use them.
## Variables
-See also the how-to documentation on @{$variables$variables} and
-@{$python/state_ops$the API documentation for variables}.
+See also the how-to documentation on [variables](../guide/variables.md) and
+[the API documentation for variables](../api_guides/python/state_ops.md).
#### What is the lifetime of a variable?
@@ -231,7 +231,7 @@ to encode the batch size as a Python constant, but instead to use a symbolic
#### How can I visualize a TensorFlow graph?
-See the @{$graph_viz$graph visualization tutorial}.
+See the [graph visualization tutorial](../guide/graph_viz.md).
#### What is the simplest way to send data to TensorBoard?
@@ -241,7 +241,7 @@ these summaries to a log directory. Then, start TensorBoard using
python tensorflow/tensorboard/tensorboard.py --logdir=path/to/log-directory
For more details, see the
-@{$summaries_and_tensorboard$Summaries and TensorBoard tutorial}.
+[Summaries and TensorBoard tutorial](../guide/summaries_and_tensorboard.md).
#### Every time I launch TensorBoard, I get a network security popup!
@@ -251,7 +251,7 @@ the flag --host=localhost. This should quiet any security warnings.
## Extending TensorFlow
See the how-to documentation for
-@{$adding_an_op$adding a new operation to TensorFlow}.
+[adding a new operation to TensorFlow](../extend/adding_an_op.md).
#### My data is in a custom format. How do I read it using TensorFlow?
@@ -273,8 +273,8 @@ consider converting it, offline, to a format that is easily parsable, such
as `tf.python_io.TFRecordWriter` format.
The most efficient method to customize the parsing behavior is to
-@{$adding_an_op$add a new op written in C++} that parses your
-data format. The @{$new_data_formats$guide to handling new data formats} has
+[add a new op written in C++](../extend/adding_an_op.md) that parses your
+data format. The [guide to handling new data formats](../extend/new_data_formats.md) has
more information about the steps for doing this.
diff --git a/tensorflow/docs_src/guide/feature_columns.md b/tensorflow/docs_src/guide/feature_columns.md
index b189c4334e..3ad41855e4 100644
--- a/tensorflow/docs_src/guide/feature_columns.md
+++ b/tensorflow/docs_src/guide/feature_columns.md
@@ -5,7 +5,7 @@ intermediaries between raw data and Estimators. Feature columns are very rich,
enabling you to transform a diverse range of raw data into formats that
Estimators can use, allowing easy experimentation.
-In @{$premade_estimators$Premade Estimators}, we used the premade
+In [Premade Estimators](../guide/premade_estimators.md), we used the premade
Estimator, `tf.estimator.DNNClassifier` to train a model to
predict different types of Iris flowers from four input features. That example
created only numerical feature columns (of type
@@ -534,7 +534,7 @@ embedding_column = tf.feature_column.embedding_column(
dimension=embedding_dimensions)
```
-@{$guide/embedding$Embeddings} is a significant topic within machine
+[Embeddings](../guide/embedding.md) is a significant topic within machine
learning. This information was just to get you started using them as feature
columns.
@@ -559,7 +559,7 @@ As the following list indicates, not all Estimators permit all types of
For more examples on feature columns, view the following:
-* The @{$low_level_intro#feature_columns$Low Level Introduction} demonstrates how
+* The [Low Level Introduction](../guide/low_level_intro.md#feature_columns) demonstrates how
experiment directly with `feature_columns` using TensorFlow's low level APIs.
* The [Estimator wide and deep learning tutorial](https://github.com/tensorflow/models/tree/master/official/wide_deep)
solves a binary classification problem using `feature_columns` on a variety of
diff --git a/tensorflow/docs_src/guide/graph_viz.md b/tensorflow/docs_src/guide/graph_viz.md
index 97b0e2d4de..23f722bbe7 100644
--- a/tensorflow/docs_src/guide/graph_viz.md
+++ b/tensorflow/docs_src/guide/graph_viz.md
@@ -5,7 +5,7 @@ TensorFlow computation graphs are powerful but complicated. The graph visualizat
![Visualization of a TensorFlow graph](https://www.tensorflow.org/images/graph_vis_animation.gif "Visualization of a TensorFlow graph")
*Visualization of a TensorFlow graph.*
-To see your own graph, run TensorBoard pointing it to the log directory of the job, click on the graph tab on the top pane and select the appropriate run using the menu at the upper left corner. For in depth information on how to run TensorBoard and make sure you are logging all the necessary information, see @{$summaries_and_tensorboard$TensorBoard: Visualizing Learning}.
+To see your own graph, run TensorBoard pointing it to the log directory of the job, click on the graph tab on the top pane and select the appropriate run using the menu at the upper left corner. For in depth information on how to run TensorBoard and make sure you are logging all the necessary information, see [TensorBoard: Visualizing Learning](../guide/summaries_and_tensorboard.md).
## Name scoping and nodes
@@ -251,7 +251,7 @@ is a snippet from the train and test section of a modification of the
[Estimators MNIST tutorial](../tutorials/estimators/cnn.md), in which we have
recorded summaries and
runtime statistics. See the
-@{$summaries_and_tensorboard#serializing-the-data$Summaries Tutorial}
+[Summaries Tutorial](../guide/summaries_and_tensorboard.md#serializing-the-data)
for details on how to record summaries.
Full source is [here](https://www.tensorflow.org/code/tensorflow/examples/tutorials/mnist/mnist_with_summaries.py).
diff --git a/tensorflow/docs_src/guide/graphs.md b/tensorflow/docs_src/guide/graphs.md
index 2bb44fbb32..c70479dba2 100644
--- a/tensorflow/docs_src/guide/graphs.md
+++ b/tensorflow/docs_src/guide/graphs.md
@@ -38,13 +38,13 @@ programs:
machines. TensorFlow inserts the necessary communication and coordination
between devices.
-* **Compilation.** TensorFlow's @{$performance/xla$XLA compiler} can
+* **Compilation.** TensorFlow's [XLA compiler](../performance/xla/index.md) can
use the information in your dataflow graph to generate faster code, for
example, by fusing together adjacent operations.
* **Portability.** The dataflow graph is a language-independent representation
of the code in your model. You can build a dataflow graph in Python, store it
- in a @{$saved_model$SavedModel}, and restore it in a C++ program for
+ in a [SavedModel](../guide/saved_model.md), and restore it in a C++ program for
low-latency inference.
@@ -93,7 +93,7 @@ to all API functions in the same context. For example:
stored value. The `tf.Variable` object also has methods such as
`tf.Variable.assign` and `tf.Variable.assign_add` that
create `tf.Operation` objects that, when executed, update the stored value.
- (See @{$guide/variables} for more information about variables.)
+ (See [Variables](../guide/variables.md) for more information about variables.)
* Calling `tf.train.Optimizer.minimize` will add operations and tensors to the
default graph that calculates gradients, and return a `tf.Operation` that,
@@ -210,7 +210,7 @@ with tf.device("/device:GPU:0"):
# Operations created in this context will be pinned to the GPU.
result = tf.matmul(weights, img)
```
-If you are deploying TensorFlow in a @{$distributed$typical distributed configuration},
+If you are deploying TensorFlow in a [typical distributed configuration](../deploy/distributed.md),
you might specify the job name and task ID to place variables on
a task in the parameter server job (`"/job:ps"`), and the other operations on
task in the worker job (`"/job:worker"`):
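A rough sketch of the placement just described; the single ps and worker task indices here are assumptions for illustration:

```python
import tensorflow as tf

# Variables live on a parameter-server task; compute runs on a worker task.
with tf.device("/job:ps/task:0"):
  weights = tf.Variable(tf.random_normal([784, 100]), name="weights")
  biases = tf.Variable(tf.zeros([100]), name="biases")

with tf.device("/job:worker/task:0"):
  images = tf.placeholder(tf.float32, shape=[None, 784])
  layer = tf.nn.relu(tf.matmul(images, weights) + biases)
```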
diff --git a/tensorflow/docs_src/guide/index.md b/tensorflow/docs_src/guide/index.md
index 1c920e7d70..50499582cc 100644
--- a/tensorflow/docs_src/guide/index.md
+++ b/tensorflow/docs_src/guide/index.md
@@ -5,38 +5,38 @@ works. The units are as follows:
## High Level APIs
- * @{$guide/keras}, TensorFlow's high-level API for building and
+ * [Keras](../guide/keras.md), TensorFlow's high-level API for building and
training deep learning models.
- * @{$guide/eager}, an API for writing TensorFlow code
+ * [Eager Execution](../guide/eager.md), an API for writing TensorFlow code
imperatively, like you would use Numpy.
- * @{$guide/datasets}, easy input pipelines to bring your data into
+ * [Importing Data](../guide/datasets.md), easy input pipelines to bring your data into
your TensorFlow program.
- * @{$guide/estimators}, a high-level API that provides
+ * [Estimators](../guide/estimators.md), a high-level API that provides
fully-packaged models ready for large-scale training and production.
## Estimators
-* @{$premade_estimators}, the basics of premade Estimators.
-* @{$checkpoints}, save training progress and resume where you left off.
-* @{$feature_columns}, handle a variety of input data types without changes to the model.
-* @{$datasets_for_estimators}, use `tf.data` to input data.
-* @{$custom_estimators}, write your own Estimator.
+* [Premade Estimators](../guide/premade_estimators.md), the basics of premade Estimators.
+* [Checkpoints](../guide/checkpoints.md), save training progress and resume where you left off.
+* [Feature Columns](../guide/feature_columns.md), handle a variety of input data types without changes to the model.
+* [Datasets for Estimators](../guide/datasets_for_estimators.md), use `tf.data` to input data.
+* [Creating Custom Estimators](../guide/custom_estimators.md), write your own Estimator.
## Accelerators
- * @{$using_gpu} explains how TensorFlow assigns operations to
+ * [Using GPUs](../guide/using_gpu.md) explains how TensorFlow assigns operations to
devices and how you can change the arrangement manually.
- * @{$using_tpu} explains how to modify `Estimator` programs to run on a TPU.
+ * [Using TPUs](../guide/using_tpu.md) explains how to modify `Estimator` programs to run on a TPU.
## Low Level APIs
- * @{$guide/low_level_intro}, which introduces the
+ * [Introduction](../guide/low_level_intro.md), which introduces the
basics of how you can use TensorFlow outside of the high Level APIs.
- * @{$guide/tensors}, which explains how to create,
+ * [Tensors](../guide/tensors.md), which explains how to create,
manipulate, and access Tensors--the fundamental object in TensorFlow.
- * @{$guide/variables}, which details how
+ * [Variables](../guide/variables.md), which details how
to represent shared, persistent state in your program.
- * @{$guide/graphs}, which explains:
+ * [Graphs and Sessions](../guide/graphs.md), which explains:
* dataflow graphs, which are TensorFlow's representation of computations
as dependencies between operations.
* sessions, which are TensorFlow's mechanism for running dataflow graphs
@@ -46,19 +46,19 @@ works. The units are as follows:
such as Estimators or Keras, the high-level API creates and manages
graphs and sessions for you, but understanding graphs and sessions
can still be helpful.
- * @{$guide/saved_model}, which
+ * [Save and Restore](../guide/saved_model.md), which
explains how to save and restore variables and models.
## ML Concepts
- * @{$guide/embedding}, which introduces the concept
+ * [Embeddings](../guide/embedding.md), which introduces the concept
of embeddings, provides a simple example of training an embedding in
TensorFlow, and explains how to view embeddings with the TensorBoard
Embedding Projector.
## Debugging
- * @{$guide/debugger}, which
+ * [TensorFlow Debugger](../guide/debugger.md), which
explains how to use the TensorFlow debugger (tfdbg).
## TensorBoard
@@ -66,17 +66,17 @@ works. The units are as follows:
TensorBoard is a utility to visualize different aspects of machine learning.
The following guides explain how to use TensorBoard:
- * @{$guide/summaries_and_tensorboard},
+ * [TensorBoard: Visualizing Learning](../guide/summaries_and_tensorboard.md),
which introduces TensorBoard.
- * @{$guide/graph_viz}, which
+ * [TensorBoard: Graph Visualization](../guide/graph_viz.md), which
explains how to visualize the computational graph.
- * @{$guide/tensorboard_histograms} which demonstrates the how to
+ * [TensorBoard Histogram Dashboard](../guide/tensorboard_histograms.md), which demonstrates how to
use TensorBoard's histogram dashboard.
## Misc
- * @{$guide/version_compat},
+ * [TensorFlow Version Compatibility](../guide/version_compat.md),
which explains backward compatibility guarantees and non-guarantees.
- * @{$guide/faq}, which contains frequently asked
+ * [Frequently Asked Questions](../guide/faq.md), which contains frequently asked
questions about TensorFlow.
diff --git a/tensorflow/docs_src/guide/low_level_intro.md b/tensorflow/docs_src/guide/low_level_intro.md
index dc6cb9ee0d..d002f8af0b 100644
--- a/tensorflow/docs_src/guide/low_level_intro.md
+++ b/tensorflow/docs_src/guide/low_level_intro.md
@@ -9,7 +9,7 @@ This guide gets you started programming in the low-level TensorFlow APIs
* Use high level components ([datasets](#datasets), [layers](#layers), and
[feature_columns](#feature_columns)) in this low level environment.
* Build your own training loop, instead of using the one
- @{$premade_estimators$provided by Estimators}.
+ [provided by Estimators](../guide/premade_estimators.md).
We recommend using the higher level APIs to build models when possible.
Knowing TensorFlow Core is valuable for the following reasons:
@@ -21,7 +21,7 @@ Knowing TensorFlow Core is valuable for the following reasons:
## Setup
-Before using this guide, @{$install$install TensorFlow}.
+Before using this guide, [install TensorFlow](../install/index.md).
To get the most out of this guide, you should know the following:
@@ -145,7 +145,7 @@ browser, and you should see a graph similar to the following:
![TensorBoard screenshot](https://www.tensorflow.org/images/getting_started_add.png)
-For more about TensorBoard's graph visualization tools see @{$graph_viz}.
+For more about TensorBoard's graph visualization tools, see [TensorBoard: Graph Visualization](../guide/graph_viz.md).
### Session
@@ -303,7 +303,7 @@ while True:
break
```
-For more details on Datasets and Iterators see: @{$guide/datasets}.
+For more details on Datasets and Iterators, see [Importing Data](../guide/datasets.md).
## Layers
@@ -398,7 +398,7 @@ and layer reuse impossible.
The easiest way to experiment with feature columns is using the
`tf.feature_column.input_layer` function. This function only accepts
-@{$feature_columns$dense columns} as inputs, so to view the result
+[dense columns](../guide/feature_columns.md) as inputs, so to view the result
of a categorical column you must wrap it in an
`tf.feature_column.indicator_column`. For example:
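A minimal sketch of that wrapping, with a made-up feature name and vocabulary:

```python
import tensorflow as tf

features = {"color": [["red"], ["blue"]]}
color = tf.feature_column.categorical_column_with_vocabulary_list(
    "color", vocabulary_list=["red", "green", "blue"])
# input_layer only accepts dense columns, so wrap the categorical column first.
inputs = tf.feature_column.input_layer(
    features, [tf.feature_column.indicator_column(color)])

with tf.Session() as sess:
  # The vocabulary is backed by a lookup table, which also needs initializing.
  sess.run((tf.global_variables_initializer(), tf.tables_initializer()))
  print(sess.run(inputs))
```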
@@ -589,7 +589,7 @@ print(sess.run(y_pred))
To learn more about building models with TensorFlow consider the following:
-* @{$custom_estimators$Custom Estimators}, to learn how to build
+* [Custom Estimators](../guide/custom_estimators.md), to learn how to build
customized models with TensorFlow. Your knowledge of TensorFlow Core will
help you understand and debug your own models.
@@ -597,8 +597,8 @@ If you want to learn more about the inner workings of TensorFlow consider the
following documents, which go into more depth on many of the topics discussed
here:
-* @{$graphs}
-* @{$tensors}
-* @{$variables}
+* [Graphs and Sessions](../guide/graphs.md)
+* [Tensors](../guide/tensors.md)
+* [Variables](../guide/variables.md)
diff --git a/tensorflow/docs_src/guide/premade_estimators.md b/tensorflow/docs_src/guide/premade_estimators.md
index dc38f0c1d3..a1703058c3 100644
--- a/tensorflow/docs_src/guide/premade_estimators.md
+++ b/tensorflow/docs_src/guide/premade_estimators.md
@@ -8,7 +8,7 @@ how to solve the Iris classification problem in TensorFlow.
Prior to using the sample code in this document, you'll need to do the
following:
-* @{$install$Install TensorFlow}.
+* [Install TensorFlow](../install/index.md).
* If you installed TensorFlow with virtualenv or Anaconda, activate your
TensorFlow environment.
* Install or upgrade pandas by issuing the following command:
@@ -78,10 +78,10 @@ provides a programming stack consisting of multiple API layers:
We strongly recommend writing TensorFlow programs with the following APIs:
-* @{$guide/estimators$Estimators}, which represent a complete model.
+* [Estimators](../guide/estimators.md), which represent a complete model.
The Estimator API provides methods to train the model, to judge the model's
accuracy, and to generate predictions.
-* @{$guide/datasets_for_estimators}, which build a data input
+* [Datasets for Estimators](../guide/datasets_for_estimators.md), which build a data input
pipeline. The Dataset API has methods to load and manipulate data, and feed
it into your model. The Dataset API meshes well with the Estimators API.
@@ -173,14 +173,14 @@ example is an Iris Versicolor.
An Estimator is TensorFlow's high-level representation of a complete model. It
handles the details of initialization, logging, saving and restoring, and many
other features so you can concentrate on your model. For more details see
-@{$guide/estimators}.
+[Estimators](../guide/estimators.md).
An Estimator is any class derived from `tf.estimator.Estimator`. TensorFlow
provides a collection of
`tf.estimator`
(for example, `LinearRegressor`) to implement common ML algorithms. Beyond
those, you may write your own
-@{$custom_estimators$custom Estimators}.
+[custom Estimators](../guide/custom_estimators.md).
We recommend using pre-made Estimators when just getting started.
To write a TensorFlow program based on pre-made Estimators, you must perform the
@@ -287,7 +287,7 @@ for key in train_x.keys():
```
Feature columns can be far more sophisticated than those we're showing here. We
-detail feature columns @{$feature_columns$later on} in our Getting
+detail feature columns [later on](../guide/feature_columns.md) in our Getting
Started guide.
Now that we have the description of how we want the model to represent the raw
@@ -423,8 +423,8 @@ Pre-made Estimators are an effective way to quickly create standard models.
Now that you've gotten started writing TensorFlow programs, consider the
following material:
-* @{$checkpoints$Checkpoints} to learn how to save and restore models.
-* @{$guide/datasets_for_estimators} to learn more about importing
+* [Checkpoints](../guide/checkpoints.md) to learn how to save and restore models.
+* [Datasets for Estimators](../guide/datasets_for_estimators.md) to learn more about importing
data into your model.
-* @{$custom_estimators$Creating Custom Estimators} to learn how to
+* [Creating Custom Estimators](../guide/custom_estimators.md) to learn how to
write your own Estimator, customized for a particular problem.
diff --git a/tensorflow/docs_src/guide/saved_model.md b/tensorflow/docs_src/guide/saved_model.md
index c260da7966..6c967fd882 100644
--- a/tensorflow/docs_src/guide/saved_model.md
+++ b/tensorflow/docs_src/guide/saved_model.md
@@ -7,7 +7,7 @@ automatically save and restore variables in the `model_dir`.
## Save and restore variables
-TensorFlow @{$variables} are the best way to represent shared, persistent state
+TensorFlow [Variables](../guide/variables.md) are the best way to represent shared, persistent state
manipulated by your program. The `tf.train.Saver` constructor adds `save` and
`restore` ops to the graph for all, or a specified list, of the variables in the
graph. The `Saver` object provides methods to run these ops, specifying paths
@@ -274,7 +274,7 @@ Ops has not changed.
The `tf.saved_model.builder.SavedModelBuilder` class allows
users to control whether default-valued attributes must be stripped from the
-@{$extend/tool_developers#nodes$`NodeDefs`}
+[`NodeDefs`](../extend/tool_developers/index.md#nodes)
while adding a meta graph to the SavedModel bundle. Both
`tf.saved_model.builder.SavedModelBuilder.add_meta_graph_and_variables`
and `tf.saved_model.builder.SavedModelBuilder.add_meta_graph`
@@ -413,7 +413,7 @@ SavedModel format. This section explains how to:
### Prepare serving inputs
-During training, an @{$premade_estimators#input_fn$`input_fn()`} ingests data
+During training, an [`input_fn()`](../guide/premade_estimators.md#input_fn) ingests data
and prepares it for use by the model. At serving time, similarly, a
`serving_input_receiver_fn()` accepts inference requests and prepares them for
the model. This function has the following purposes:
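A minimal sketch of such a function, assuming requests arrive as serialized `tf.Example` protos and using a made-up feature spec with a single float feature `"x"`:

```python
import tensorflow as tf

def serving_input_receiver_fn():
  # Placeholder for serialized tf.Example protos in incoming requests.
  serialized = tf.placeholder(dtype=tf.string, shape=[None],
                              name="input_example_tensor")
  receiver_tensors = {"examples": serialized}
  # Parse the protos into the feature tensors the model expects.
  features = tf.parse_example(serialized,
                              {"x": tf.FixedLenFeature([4], tf.float32)})
  return tf.estimator.export.ServingInputReceiver(features, receiver_tensors)
```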
@@ -616,7 +616,7 @@ result = stub.Classify(request, 10.0) # 10 secs timeout
The returned result in this example is a `ClassificationResponse` protocol
buffer.
-This is a skeletal example; please see the @{$deploy$Tensorflow Serving}
+This is a skeletal example; please see the [TensorFlow Serving](../deploy/index.md)
documentation and [examples](https://github.com/tensorflow/serving/tree/master/tensorflow_serving/example)
for more details.
@@ -647,7 +647,7 @@ You can use the SavedModel Command Line Interface (CLI) to inspect and
execute a SavedModel.
For example, you can use the CLI to inspect the model's `SignatureDef`s.
The CLI enables you to quickly confirm that the input
-@{$tensors$Tensor dtype and shape} match the model. Moreover, if you
+[Tensor dtype and shape](../guide/tensors.md) match the model. Moreover, if you
want to test your model, you can use the CLI to do a sanity check by
passing in sample inputs in various formats (for example, Python
expressions) and then fetching the output.
diff --git a/tensorflow/docs_src/guide/summaries_and_tensorboard.md b/tensorflow/docs_src/guide/summaries_and_tensorboard.md
index 6177c3393b..788c556b9d 100644
--- a/tensorflow/docs_src/guide/summaries_and_tensorboard.md
+++ b/tensorflow/docs_src/guide/summaries_and_tensorboard.md
@@ -36,7 +36,7 @@ lifecycle for summary data within TensorBoard.
First, create the TensorFlow graph that you'd like to collect summary
data from, and decide which nodes you would like to annotate with
-@{$python/summary$summary operations}.
+[summary operations](../api_guides/python/summary.md).
For example, suppose you are training a convolutional neural network for
recognizing MNIST digits. You'd like to record how the learning rate
@@ -53,7 +53,7 @@ this data by attaching
the gradient outputs and to the variable that holds your weights, respectively.
For details on all of the summary operations available, check out the docs on
-@{$python/summary$summary operations}.
+[summary operations](../api_guides/python/summary.md).
Operations in TensorFlow don't do anything until you run them, or an op that
depends on their output. And the summary nodes that we've just created are
@@ -74,7 +74,7 @@ Also, the `FileWriter` can optionally take a `Graph` in its constructor.
If it receives a `Graph` object, then TensorBoard will visualize your graph
along with tensor shape information. This will give you a much better sense of
what flows through the graph: see
-@{$graph_viz#tensor-shape-information$Tensor shape information}.
+[Tensor shape information](../guide/graph_viz.md#tensor-shape-information).
Now that you've modified your graph and have a `FileWriter`, you're ready to
start running your network! If you want, you could run the merged summary op
@@ -219,7 +219,7 @@ When looking at TensorBoard, you will see the navigation tabs in the top right
corner. Each tab represents a set of serialized data that can be visualized.
For in depth information on how to use the *graph* tab to visualize your graph,
-see @{$graph_viz$TensorBoard: Graph Visualization}.
+see [TensorBoard: Graph Visualization](../guide/graph_viz.md).
For more usage information on TensorBoard in general, see the
[TensorBoard GitHub](https://github.com/tensorflow/tensorboard).
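A minimal sketch of the summary workflow described above: one scalar summary plus the graph, written to an arbitrary example log directory that TensorBoard can then read:

```python
import tensorflow as tf

loss = tf.placeholder(tf.float32, name="loss")
tf.summary.scalar("loss", loss)
merged = tf.summary.merge_all()

with tf.Session() as sess:
  # Passing sess.graph lets TensorBoard show the graph with shape information.
  writer = tf.summary.FileWriter("/tmp/example_logs", sess.graph)
  for step in range(3):
    summary = sess.run(merged, feed_dict={loss: 1.0 / (step + 1)})
    writer.add_summary(summary, step)
  writer.close()
```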
diff --git a/tensorflow/docs_src/guide/tensors.md b/tensorflow/docs_src/guide/tensors.md
index 6b5a110a1c..4f0ddb21b5 100644
--- a/tensorflow/docs_src/guide/tensors.md
+++ b/tensorflow/docs_src/guide/tensors.md
@@ -298,7 +298,7 @@ to call `tf.train.start_queue_runners` before evaluating any `tf.Tensor`s.
## Printing Tensors
For debugging purposes you might want to print the value of a `tf.Tensor`. While
- @{$debugger$tfdbg} provides advanced debugging support, TensorFlow also has an
+ [tfdbg](../guide/debugger.md) provides advanced debugging support, TensorFlow also has an
operation to directly print the value of a `tf.Tensor`.
Note that you rarely want to use the following pattern when printing a
diff --git a/tensorflow/docs_src/guide/using_gpu.md b/tensorflow/docs_src/guide/using_gpu.md
index c0218fd12e..8cb9b354c7 100644
--- a/tensorflow/docs_src/guide/using_gpu.md
+++ b/tensorflow/docs_src/guide/using_gpu.md
@@ -211,5 +211,5 @@ AddN: /job:localhost/replica:0/task:0/cpu:0
[ 98. 128.]]
```
-The @{$deep_cnn$cifar10 tutorial} is a good example
+The [cifar10 tutorial](../tutorials/images/deep_cnn.md) is a good example
demonstrating how to do training with multiple GPUs.
diff --git a/tensorflow/docs_src/guide/using_tpu.md b/tensorflow/docs_src/guide/using_tpu.md
index 90a663b75e..59b34e19e0 100644
--- a/tensorflow/docs_src/guide/using_tpu.md
+++ b/tensorflow/docs_src/guide/using_tpu.md
@@ -22,8 +22,8 @@ Standard `Estimators` can drive models on CPU and GPUs. You must use
`tf.contrib.tpu.TPUEstimator` to drive a model on TPUs.
Refer to TensorFlow's Getting Started section for an introduction to the basics
-of using a @{$premade_estimators$pre-made `Estimator`}, and
-@{$custom_estimators$custom `Estimator`s}.
+of using a [pre-made `Estimator`](../guide/premade_estimators.md), and
+[custom `Estimator`s](../guide/custom_estimators.md).
The `TPUEstimator` class differs somewhat from the `Estimator` class.
@@ -171,9 +171,9 @@ This section details the changes you must make to the model function
During regular usage TensorFlow attempts to determine the shapes of each
`tf.Tensor` during graph construction. During execution any unknown shape
dimensions are determined dynamically,
-see @{$guide/tensors#shape$Tensor Shapes} for more details.
+see [Tensor Shapes](../guide/tensors.md#shape) for more details.
-To run on Cloud TPUs TensorFlow models are compiled using @{$xla$XLA}.
+To run on Cloud TPUs, TensorFlow models are compiled using [XLA](../performance/xla/index.md).
XLA uses a similar system for determining shapes at compile time. XLA requires
that all tensor dimensions be statically defined at compile time. All shapes
must evaluate to a constant, and not depend on external data, or stateful
@@ -184,7 +184,7 @@ operations like variables or a random number generator.
Remove any use of `tf.summary` from your model.
-@{$summaries_and_tensorboard$TensorBoard summaries} are a great way see inside
+[TensorBoard summaries](../guide/summaries_and_tensorboard.md) are a great way to see inside
your model. A minimal set of basic summaries are automatically recorded by the
`TPUEstimator`, to `event` files in the `model_dir`. Custom summaries, however,
are currently unsupported when training on a Cloud TPU. So while the
@@ -343,7 +343,7 @@ weight when creating your `tf.metrics`.
Efficient use of the `tf.data.Dataset` API is critical when using a Cloud
TPU, as it is impossible to use the Cloud TPU's unless you can feed it data
-quickly enough. See @{$datasets_performance} for details on dataset performance.
+quickly enough. See [Input Pipeline Performance Guide](../performance/datasets_performance.md) for details on dataset performance.
For all but the simplest experimentation (using
`tf.data.Dataset.from_tensor_slices` or other in-graph data) you will need to
@@ -361,7 +361,7 @@ Small datasets can be loaded entirely into memory using
`tf.data.Dataset.cache`.
Regardless of the data format used, it is strongly recommended that you
-@{$performance_guide#use_large_files$use large files}, on the order of
+[use large files](../performance/performance_guide.md#use_large_files), on the order of
100MB. This is especially important in this networked setting as the overhead
of opening a file is significantly higher.
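A minimal sketch of reading a small number of large shards; the GCS file pattern is a made-up example:

```python
import tensorflow as tf

# Interleave reads across a few large TFRecord shards and prefetch so the
# accelerator is never waiting on file opens.
files = tf.data.Dataset.list_files("gs://your-bucket/train-*.tfrecord")
dataset = files.interleave(tf.data.TFRecordDataset, cycle_length=8)
dataset = dataset.prefetch(buffer_size=1024)
```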
@@ -391,5 +391,5 @@ to make a Cloud TPU compatible model are the example models published in:
For more information about tuning TensorFlow code for performance see:
- * The @{$performance$Performance Section.}
+ * The [Performance Section](../performance/index.md).
diff --git a/tensorflow/docs_src/guide/version_compat.md b/tensorflow/docs_src/guide/version_compat.md
index 29ac066e6f..b6d509196a 100644
--- a/tensorflow/docs_src/guide/version_compat.md
+++ b/tensorflow/docs_src/guide/version_compat.md
@@ -75,7 +75,7 @@ backward incompatible ways between minor releases. These include:
* **Other languages**: TensorFlow APIs in languages other than Python and C,
such as:
- - @{$cc/guide$C++} (exposed through header files in
+ - [C++](../api_guides/cc/guide.md) (exposed through header files in
[`tensorflow/cc`](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/cc)).
- [Java](../api_docs/java/reference/org/tensorflow/package-summary),
- [Go](https://godoc.org/github.com/tensorflow/tensorflow/tensorflow/go)
@@ -98,7 +98,7 @@ backward incompatible ways between minor releases. These include:
accuracy for the overall system.
* **Random numbers:** The specific random numbers computed by the
- @{$python/constant_op#Random_Tensors$random ops} may change at any time.
+ [random ops](../api_guides/python/constant_op.md#Random_Tensors) may change at any time.
Users should rely only on approximately correct distributions and
statistical strength, not the specific bits computed. However, we will make
changes to random bits rarely (or perhaps never) for patch releases. We
diff --git a/tensorflow/docs_src/install/index.md b/tensorflow/docs_src/install/index.md
index 55481cc400..76e590e1e1 100644
--- a/tensorflow/docs_src/install/index.md
+++ b/tensorflow/docs_src/install/index.md
@@ -17,23 +17,23 @@ systems listed above.
The following guides explain how to install a version of TensorFlow
that enables you to write applications in Python:
- * @{$install_linux$Install TensorFlow on Ubuntu}
- * @{$install_mac$Install TensorFlow on macOS}
- * @{$install_windows$Install TensorFlow on Windows}
- * @{$install_raspbian$Install TensorFlow on a Raspberry Pi}
- * @{$install_sources$Install TensorFlow from source code}
+ * [Install TensorFlow on Ubuntu](../install/install_linux.md)
+ * [Install TensorFlow on macOS](../install/install_mac.md)
+ * [Install TensorFlow on Windows](../install/install_windows.md)
+ * [Install TensorFlow on a Raspberry Pi](../install/install_raspbian.md)
+ * [Install TensorFlow from source code](../install/install_sources.md)
Many aspects of the Python TensorFlow API changed from version 0.n to 1.0.
The following guide explains how to migrate older TensorFlow applications
to Version 1.0:
- * @{$migration$Transition to TensorFlow 1.0}
+ * [Transition to TensorFlow 1.0](../install/migration.md)
The following guides explain how to install TensorFlow libraries for use in
other programming languages. These APIs are aimed at deploying TensorFlow
models in applications and are not as extensive as the Python APIs.
- * @{$install_java$Install TensorFlow for Java}
- * @{$install_c$Install TensorFlow for C}
- * @{$install_go$Install TensorFlow for Go}
+ * [Install TensorFlow for Java](../install/install_java.md)
+ * [Install TensorFlow for C](../install/install_c.md)
+ * [Install TensorFlow for Go](../install/install_go.md)
diff --git a/tensorflow/docs_src/install/install_c.md b/tensorflow/docs_src/install/install_c.md
index 4a63f11fca..084634bc9c 100644
--- a/tensorflow/docs_src/install/install_c.md
+++ b/tensorflow/docs_src/install/install_c.md
@@ -28,8 +28,8 @@ enable TensorFlow for C:
entitled "Determine which TensorFlow to install" in one of the
following guides:
- * @{$install_linux#determine_which_tensorflow_to_install$Installing TensorFlow on Linux}
- * @{$install_mac#determine_which_tensorflow_to_install$Installing TensorFlow on macOS}
+ * [Installing TensorFlow on Linux](../install/install_linux.md#determine_which_tensorflow_to_install)
+ * [Installing TensorFlow on macOS](../install/install_mac.md#determine_which_tensorflow_to_install)
2. Download and extract the TensorFlow C library into `/usr/local/lib` by
invoking the following shell commands:
diff --git a/tensorflow/docs_src/install/install_go.md b/tensorflow/docs_src/install/install_go.md
index f0f8436777..0c604d7713 100644
--- a/tensorflow/docs_src/install/install_go.md
+++ b/tensorflow/docs_src/install/install_go.md
@@ -29,8 +29,8 @@ steps to install this library and enable TensorFlow for Go:
the help of GPU(s). To help you decide, read the section entitled
"Determine which TensorFlow to install" in one of the following guides:
- * @{$install_linux#determine_which_tensorflow_to_install$Installing TensorFlow on Linux}
- * @{$install_mac#determine_which_tensorflow_to_install$Installing TensorFlow on macOS}
+ * [Installing TensorFlow on Linux](../install/install_linux.md#determine_which_tensorflow_to_install)
+ * [Installing TensorFlow on macOS](../install/install_mac.md#determine_which_tensorflow_to_install)
2. Download and extract the TensorFlow C library into `/usr/local/lib` by
invoking the following shell commands:
diff --git a/tensorflow/docs_src/install/install_java.md b/tensorflow/docs_src/install/install_java.md
index c131a2ea76..c411cb78fe 100644
--- a/tensorflow/docs_src/install/install_java.md
+++ b/tensorflow/docs_src/install/install_java.md
@@ -135,7 +135,7 @@ instead:
GPU acceleration is available via Maven only for Linux and only if your system
meets the
-@{$install_linux#determine_which_tensorflow_to_install$requirements for GPU}.
+[requirements for GPU](../install/install_linux.md#determine_which_tensorflow_to_install).
## Using TensorFlow with JDK
@@ -155,8 +155,8 @@ Take the following steps to install TensorFlow for Java on Linux or macOS:
the help of GPU(s). To help you decide, read the section entitled
"Determine which TensorFlow to install" in one of the following guides:
- * @{$install_linux#determine_which_tensorflow_to_install$Installing TensorFlow on Linux}
- * @{$install_mac#determine_which_tensorflow_to_install$Installing TensorFlow on macOS}
+ * [Installing TensorFlow on Linux](../install/install_linux.md#determine_which_tensorflow_to_install)
+ * [Installing TensorFlow on macOS](../install/install_mac.md#determine_which_tensorflow_to_install)
3. Download and extract the appropriate Java Native Interface (JNI)
file for your operating system and processor support by running the
diff --git a/tensorflow/docs_src/install/install_linux.md b/tensorflow/docs_src/install/install_linux.md
index 0febdee99f..5fcfa4b988 100644
--- a/tensorflow/docs_src/install/install_linux.md
+++ b/tensorflow/docs_src/install/install_linux.md
@@ -520,7 +520,7 @@ The following NVIDIA® <i>software</i> must be installed on your system:
To use a GPU with CUDA Compute Capability 3.0, or different versions of the
preceding NVIDIA libraries see
-@{$install_sources$installing TensorFlow from Sources}. If using Ubuntu 16.04
+[installing TensorFlow from Sources](../install/install_sources.md). If using Ubuntu 16.04
and possibly other Debian based linux distros, `apt-get` can be used with the
NVIDIA repository to simplify installation.
diff --git a/tensorflow/docs_src/performance/index.md b/tensorflow/docs_src/performance/index.md
index 131d28fa3e..a0f26a8c3a 100644
--- a/tensorflow/docs_src/performance/index.md
+++ b/tensorflow/docs_src/performance/index.md
@@ -7,18 +7,18 @@ details on the high level APIs to use along with best practices to build
and train high performance models, and quantize models for the least latency
and highest throughput for inference.
- * @{$performance_guide$Performance Guide} contains a collection of best
+ * [Performance Guide](../performance/performance_guide.md) contains a collection of best
practices for optimizing your TensorFlow code.
- * @{$datasets_performance$Data input pipeline guide} describes the tf.data
+ * [Data input pipeline guide](../performance/datasets_performance.md) describes the tf.data
API for building efficient data input pipelines for TensorFlow.
- * @{$performance/benchmarks$Benchmarks} contains a collection of
+ * [Benchmarks](../performance/benchmarks.md) contains a collection of
benchmark results for a variety of hardware configurations.
* For improving inference efficiency on mobile and
embedded hardware, see
- @{$quantization$How to Quantize Neural Networks with TensorFlow}, which
+ [How to Quantize Neural Networks with TensorFlow](../performance/quantization.md), which
explains how to use quantization to reduce model size, both in storage
and at runtime.
@@ -31,20 +31,20 @@ XLA (Accelerated Linear Algebra) is an experimental compiler for linear
algebra that optimizes TensorFlow computations. The following guides explore
XLA:
- * @{$xla$XLA Overview}, which introduces XLA.
- * @{$broadcasting$Broadcasting Semantics}, which describes XLA's
+ * [XLA Overview](../performance/xla/index.md), which introduces XLA.
+ * [Broadcasting Semantics](../performance/xla/broadcasting.md), which describes XLA's
broadcasting semantics.
- * @{$developing_new_backend$Developing a new back end for XLA}, which
+ * [Developing a new back end for XLA](../performance/xla/developing_new_backend.md), which
explains how to re-target TensorFlow in order to optimize the performance
of the computational graph for particular hardware.
- * @{$jit$Using JIT Compilation}, which describes the XLA JIT compiler that
+ * [Using JIT Compilation](../performance/xla/jit.md), which describes the XLA JIT compiler that
compiles and runs parts of TensorFlow graphs via XLA in order to optimize
performance.
- * @{$operation_semantics$Operation Semantics}, which is a reference manual
+ * [Operation Semantics](../performance/xla/operation_semantics.md), which is a reference manual
describing the semantics of operations in the `ComputationBuilder`
interface.
- * @{$shapes$Shapes and Layout}, which details the `Shape` protocol buffer.
- * @{$tfcompile$Using AOT compilation}, which explains `tfcompile`, a
+ * [Shapes and Layout](../performance/xla/shapes.md), which details the `Shape` protocol buffer.
+ * [Using AOT compilation](../performance/xla/tfcompile.md), which explains `tfcompile`, a
standalone tool that compiles TensorFlow graphs into executable code in
order to optimize performance.
diff --git a/tensorflow/docs_src/performance/performance_guide.md b/tensorflow/docs_src/performance/performance_guide.md
index df70309568..9ea1d6a705 100644
--- a/tensorflow/docs_src/performance/performance_guide.md
+++ b/tensorflow/docs_src/performance/performance_guide.md
@@ -41,7 +41,7 @@ approaches to identifying issues:
utilization is not approaching 80-100%, then the input pipeline may be the
bottleneck.
* Generate a timeline and look for large blocks of white space (waiting). An
- example of generating a timeline exists as part of the @{$jit$XLA JIT}
+ example of generating a timeline exists as part of the [XLA JIT](../performance/xla/jit.md)
tutorial.
* Check CPU usage. It is possible to have an optimized input pipeline and lack
the CPU cycles to process the pipeline.
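A minimal sketch of capturing such a timeline for a single step, written as a Chrome trace that can be inspected for large blocks of waiting:

```python
import tensorflow as tf
from tensorflow.python.client import timeline

a = tf.random_normal([1000, 1000])
b = tf.matmul(a, a)

with tf.Session() as sess:
  run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
  run_metadata = tf.RunMetadata()
  sess.run(b, options=run_options, run_metadata=run_metadata)
  trace = timeline.Timeline(run_metadata.step_stats)
  with open("/tmp/timeline.json", "w") as f:
    f.write(trace.generate_chrome_trace_format())
```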
@@ -68,7 +68,7 @@ the CPU.
#### Using the tf.data API
-The @{$datasets$tf.data API} is replacing `queue_runner` as the recommended API
+The [tf.data API](../guide/datasets.md) is replacing `queue_runner` as the recommended API
for building input pipelines. This
[ResNet example](https://github.com/tensorflow/models/tree/master/tutorials/image/cifar10_estimator/cifar10_main.py)
([arXiv:1512.03385](https://arxiv.org/abs/1512.03385))
@@ -78,7 +78,7 @@ training CIFAR-10 illustrates the use of the `tf.data` API along with
The `tf.data` API utilizes C++ multi-threading and has a much lower overhead
than the Python-based `queue_runner` that is limited by Python's multi-threading
performance. A detailed performance guide for the `tf.data` API can be found
-@{$datasets_performance$here}.
+in the [Input Pipeline Performance Guide](../performance/datasets_performance.md).
While feeding data using a `feed_dict` offers a high level of flexibility, in
general `feed_dict` does not provide a scalable solution. If only a single GPU
@@ -174,7 +174,7 @@ faster using `NHWC` than the normally most efficient `NCHW`.
### Common fused Ops
Fused Ops combine multiple operations into a single kernel for improved
-performance. There are many fused Ops within TensorFlow and @{$xla$XLA} will
+performance. There are many fused Ops within TensorFlow and [XLA](../performance/xla/index.md) will
create fused Ops when possible to automatically improve performance. Collected
below are select fused Ops that can greatly improve performance and may be
overlooked.
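For instance, fused batch normalization is one commonly overlooked fused Op; a minimal sketch, with an assumed input shape:

```python
import tensorflow as tf

images = tf.placeholder(tf.float32, shape=[None, 32, 32, 3])
# fused=True requests the single fused batch-norm kernel.
normalized = tf.layers.batch_normalization(images, fused=True, training=True)
```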
@@ -257,7 +257,7 @@ the CPU in use. Speedups for training and inference on CPU are documented below
in [Comparing compiler optimizations](#comparing-compiler-optimizations).
To install the most optimized version of TensorFlow,
-@{$install_sources$build and install} from source. If there is a need to build
+[build and install](../install/install_sources.md) from source. If there is a need to build
TensorFlow on a platform that has different hardware than the target, then
cross-compile with the highest optimizations for the target platform. The
following command is an example of using `bazel` to compile for a specific
@@ -298,7 +298,7 @@ each of the towers. How each tower gets the updated variables and how the
gradients are applied has an impact on the performance, scaling, and convergence
of the model. The rest of this section provides an overview of variable
placement and the towering of a model on multiple GPUs.
-@{$performance_models$High-Performance Models} gets into more details regarding
+[High-Performance Models](../performance/performance_models.md) gets into more details regarding
more complex methods that can be used to share and update variables between
towers.
@@ -307,7 +307,7 @@ and even how the hardware has been configured. An example of this, is that two
systems can be built with NVIDIA Tesla P100s but one may be using PCIe and the
other [NVLink](http://www.nvidia.com/object/nvlink.html). In that scenario, the
optimal solution for each system may be different. For real world examples, read
-the @{$performance/benchmarks$benchmark} page which details the settings that
+the [benchmark](../performance/benchmarks.md) page which details the settings that
were optimal for a variety of platforms. Below is a summary of what was learned
from benchmarking various platforms and configurations:
@@ -433,7 +433,7 @@ scenarios.
## Optimizing for CPU
CPUs, which includes Intel® Xeon Phi™, achieve optimal performance when
-TensorFlow is @{$install_sources$built from source} with all of the instructions
+TensorFlow is [built from source](../install/install_sources.md) with all of the instructions
supported by the target CPU.
Beyond using the latest instruction sets, Intel® has added support for the
diff --git a/tensorflow/docs_src/performance/performance_models.md b/tensorflow/docs_src/performance/performance_models.md
index 66bf684d5b..151c0b2946 100644
--- a/tensorflow/docs_src/performance/performance_models.md
+++ b/tensorflow/docs_src/performance/performance_models.md
@@ -9,7 +9,7 @@ incorporated into high-level APIs.
## Input Pipeline
-The @{$performance_guide$Performance Guide} explains how to identify possible
+The [Performance Guide](../performance/performance_guide.md) explains how to identify possible
input pipeline issues and best practices. We found that using `tf.FIFOQueue`
and `tf.train.queue_runner` could not saturate multiple current generation GPUs
when using large inputs and processing with higher samples per second, such
diff --git a/tensorflow/docs_src/performance/quantization.md b/tensorflow/docs_src/performance/quantization.md
index 4499f5715c..3326d82964 100644
--- a/tensorflow/docs_src/performance/quantization.md
+++ b/tensorflow/docs_src/performance/quantization.md
@@ -80,7 +80,7 @@ need for a separate calibration step.
TensorFlow can train models with quantization in the loop. Because training
requires small gradient adjustments, floating point values are still used. To
keep models as floating point while adding the quantization error in the training
-loop, @{$array_ops#Fake_quantization$fake quantization} nodes simulate the
+loop, [fake quantization](../api_guides/python/array_ops.md#Fake_quantization) nodes simulate the
effect of quantization in the forward and backward passes.
Since it's difficult to add these fake quantization operations to all the
diff --git a/tensorflow/docs_src/performance/xla/index.md b/tensorflow/docs_src/performance/xla/index.md
index 8f5de83ea6..770737c34c 100644
--- a/tensorflow/docs_src/performance/xla/index.md
+++ b/tensorflow/docs_src/performance/xla/index.md
@@ -14,7 +14,7 @@ XLA (Accelerated Linear Algebra) is a domain-specific compiler for linear
algebra that optimizes TensorFlow computations. The results are improvements in
speed, memory usage, and portability on server and mobile platforms. Initially,
most users will not see large benefits from XLA, but are welcome to experiment
-by using XLA via @{$jit$just-in-time (JIT) compilation} or @{$tfcompile$ahead-of-time (AOT) compilation}. Developers targeting new hardware accelerators are
+by using XLA via [just-in-time (JIT) compilation](../../performance/xla/jit.md) or [ahead-of-time (AOT) compilation](../../performance/xla/tfcompile.md). Developers targeting new hardware accelerators are
especially encouraged to try out XLA.
The XLA framework is experimental and in active development. In particular,
@@ -54,13 +54,13 @@ We had several objectives for XLA to work with TensorFlow:
The input language to XLA is called "HLO IR", or just HLO (High Level
Optimizer). The semantics of HLO are described on the
-@{$operation_semantics$Operation Semantics} page. It
+[Operation Semantics](../../performance/xla/operation_semantics.md) page. It
is most convenient to think of HLO as a [compiler
IR](https://en.wikipedia.org/wiki/Intermediate_representation).
XLA takes graphs ("computations") defined in HLO and compiles them into machine
instructions for various architectures. XLA is modular in the sense that it is
-easy to slot in an alternative backend to @{$developing_new_backend$target some novel HW architecture}. The CPU backend for x64 and ARM64 as
+easy to slot in an alternative backend to [target some novel HW architecture](../../performance/xla/developing_new_backend.md). The CPU backend for x64 and ARM64 as
well as the NVIDIA GPU backend are in the TensorFlow source tree.
The following diagram shows the compilation process in XLA:
@@ -94,5 +94,5 @@ CPU backend supports multiple CPU ISAs.
## Supported Platforms
-XLA currently supports @{$jit$JIT compilation} on x86-64 and NVIDIA GPUs; and
-@{$tfcompile$AOT compilation} for x86-64 and ARM.
+XLA currently supports [JIT compilation](../../performance/xla/jit.md) on x86-64 and NVIDIA GPUs; and
+[AOT compilation](../../performance/xla/tfcompile.md) for x86-64 and ARM.
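As a minimal sketch, the JIT path mentioned above can be enabled for a whole session through the session configuration:

```python
import tensorflow as tf

# Turn on the XLA JIT for every compilable op in this session.
config = tf.ConfigProto()
config.graph_options.optimizer_options.global_jit_level = tf.OptimizerOptions.ON_1

with tf.Session(config=config) as sess:
  x = tf.random_normal([1024, 1024])
  print(sess.run(tf.reduce_sum(tf.matmul(x, x))))
```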
diff --git a/tensorflow/docs_src/performance/xla/operation_semantics.md b/tensorflow/docs_src/performance/xla/operation_semantics.md
index 8c9d26fcbb..16dd3c5bf3 100644
--- a/tensorflow/docs_src/performance/xla/operation_semantics.md
+++ b/tensorflow/docs_src/performance/xla/operation_semantics.md
@@ -1028,7 +1028,7 @@ Arguments | Type | Semantics
`rhs` | `XlaOp` | right-hand-side operand: array of type T
The arguments' shapes have to be either similar or compatible. See the
-@{$broadcasting$broadcasting} documentation about what it means for shapes to
+[broadcasting](../../performance/xla/broadcasting.md) documentation about what it means for shapes to
be compatible. The result of an operation has a shape which is the result of
broadcasting the two input arrays. In this variant, operations between arrays of
different ranks are *not* supported, unless one of the operands is a scalar.
@@ -1052,7 +1052,7 @@ the dimensions of the higher-rank shape. The unmapped dimensions of the expanded
shape are filled with dimensions of size one. Degenerate-dimension broadcasting
then broadcasts the shapes along these degenerate dimensions to equalize the
shapes of both operands. The semantics are described in detail on the
-@{$broadcasting$broadcasting page}.
+[broadcasting page](../../performance/xla/broadcasting.md).
## Element-wise comparison operations
@@ -1075,7 +1075,7 @@ Arguments | Type | Semantics
`rhs` | `XlaOp` | right-hand-side operand: array of type T
The arguments' shapes have to be either similar or compatible. See the
-@{$broadcasting$broadcasting} documentation about what it means for shapes to
+[broadcasting](../../performance/xla/broadcasting.md) documentation about what it means for shapes to
be compatible. The result of an operation has a shape which is the result of
broadcasting the two input arrays with the element type `PRED`. In this variant,
operations between arrays of different ranks are *not* supported, unless one of
@@ -1092,7 +1092,7 @@ matrix to a vector).
The additional `broadcast_dimensions` operand is a slice of integers specifying
the dimensions to use for broadcasting the operands. The semantics are described
-in detail on the @{$broadcasting$broadcasting page}.
+in detail on the [broadcasting page](../../performance/xla/broadcasting.md).
## Element-wise unary functions
diff --git a/tensorflow/docs_src/performance/xla/tfcompile.md b/tensorflow/docs_src/performance/xla/tfcompile.md
index e4b803164f..2e0f3774c4 100644
--- a/tensorflow/docs_src/performance/xla/tfcompile.md
+++ b/tensorflow/docs_src/performance/xla/tfcompile.md
@@ -17,7 +17,7 @@ kernels that are actually used in the computation.
The compiler is built on top of the XLA framework. The code bridging TensorFlow
to the XLA framework resides under
[tensorflow/compiler](https://www.tensorflow.org/code/tensorflow/compiler/),
-which also includes support for @{$jit$just-in-time (JIT) compilation} of
+which also includes support for [just-in-time (JIT) compilation](../../performance/xla/jit.md) of
TensorFlow graphs.
## What does tfcompile do?
@@ -116,7 +116,7 @@ tf_library(
> [make_test_graphs.py]("https://www.tensorflow.org/code/tensorflow/compiler/aot/tests/make_test_graphs.py")
> and specify the output location with the --out_dir flag.
-Typical graphs contain @{$python/state_ops$`Variables`}
+Typical graphs contain [`Variables`](../../api_guides/python/state_ops.md)
representing the weights that are learned via training, but `tfcompile` cannot
compile a subgraph that contain `Variables`. The
[freeze_graph.py](https://www.tensorflow.org/code/tensorflow/python/tools/freeze_graph.py)
diff --git a/tensorflow/docs_src/tutorials/estimators/cnn.md b/tensorflow/docs_src/tutorials/estimators/cnn.md
index 100f501cc2..2fd69f50a0 100644
--- a/tensorflow/docs_src/tutorials/estimators/cnn.md
+++ b/tensorflow/docs_src/tutorials/estimators/cnn.md
@@ -190,7 +190,7 @@ def cnn_model_fn(features, labels, mode):
The following sections (with headings corresponding to each code block above)
dive deeper into the `tf.layers` code used to create each layer, as well as how
to calculate loss, configure the training op, and generate predictions. If
-you're already experienced with CNNs and @{$custom_estimators$TensorFlow `Estimator`s},
+you're already experienced with CNNs and [TensorFlow `Estimator`s](../../guide/custom_estimators.md),
and find the above code intuitive, you may want to skim these sections or just
skip ahead to ["Training and Evaluating the CNN MNIST Classifier"](#train_eval_mnist).
@@ -501,8 +501,8 @@ if mode == tf.estimator.ModeKeys.TRAIN:
```
> Note: For a more in-depth look at configuring training ops for Estimator model
-> functions, see @{$custom_estimators#defining-the-training-op-for-the-model$"Defining the training op for the model"}
-> in the @{$custom_estimators$"Creating Estimations in tf.estimator"} tutorial.
+> functions, see ["Defining the training op for the model"](../../guide/custom_estimators.md#defining-the-training-op-for-the-model)
+> in the ["Creating Estimations in tf.estimator"](../../guide/custom_estimators.md) tutorial.
### Add evaluation metrics
@@ -567,7 +567,7 @@ be saved (here, we specify the temp directory `/tmp/mnist_convnet_model`, but
feel free to change to another directory of your choice).
> Note: For an in-depth walkthrough of the TensorFlow `Estimator` API, see the
-> tutorial @{$custom_estimators$"Creating Estimators in tf.estimator."}
+> tutorial ["Creating Estimators in tf.estimator."](../../guide/custom_estimators.md)
### Set Up a Logging Hook {#set_up_a_logging_hook}
@@ -593,8 +593,8 @@ operation earlier when we generated the probabilities in `cnn_model_fn`.
> Note: If you don't explicitly assign a name to an operation via the `name`
> argument, TensorFlow will assign a default name. A couple easy ways to
> discover the names applied to operations are to visualize your graph on
-> @{$graph_viz$TensorBoard}) or to enable the
-> @{$guide/debugger$TensorFlow Debugger (tfdbg)}.
+> [TensorBoard](../../guide/graph_viz.md) or to enable the
+> [TensorFlow Debugger (tfdbg)](../../guide/debugger.md).
Next, we create the `LoggingTensorHook`, passing `tensors_to_log` to the
`tensors` argument. We set `every_n_iter=50`, which specifies that probabilities
@@ -686,9 +686,9 @@ Here, we've achieved an accuracy of 97.3% on our test data set.
To learn more about TensorFlow Estimators and CNNs in TensorFlow, see the
following resources:
-* @{$custom_estimators$Creating Estimators in tf.estimator}
+* [Creating Estimators in tf.estimator](../../guide/custom_estimators.md)
provides an introduction to the TensorFlow Estimator API. It walks through
configuring an Estimator, writing a model function, calculating loss, and
defining a training op.
-* @{$deep_cnn} walks through how to build a MNIST CNN classification model
+* [Advanced Convolutional Neural Networks](../../tutorials/images/deep_cnn.md) walks through how to build an MNIST CNN classification model
*without estimators* using lower-level TensorFlow operations.
diff --git a/tensorflow/docs_src/tutorials/images/deep_cnn.md b/tensorflow/docs_src/tutorials/images/deep_cnn.md
index 42ad484bbf..00996b82e6 100644
--- a/tensorflow/docs_src/tutorials/images/deep_cnn.md
+++ b/tensorflow/docs_src/tutorials/images/deep_cnn.md
@@ -40,7 +40,7 @@ designing larger and more sophisticated models in TensorFlow:
and `tf.nn.local_response_normalization`
(Chapter 3.3 in
[AlexNet paper](https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf)).
-* @{$summaries_and_tensorboard$Visualization}
+* [Visualization](../../guide/summaries_and_tensorboard.md)
of network activities during training, including input images,
losses and distributions of activations and gradients.
* Routines for calculating the
@@ -114,7 +114,7 @@ The input part of the model is built by the functions `inputs()` and
`distorted_inputs()` which read images from the CIFAR-10 binary data files.
These files contain fixed byte length records, so we use
`tf.FixedLengthRecordReader`.
-See @{$reading_data#reading-from-files$Reading Data} to
+See [Reading Data](../../api_guides/python/reading_data.md#reading-from-files) to
learn more about how the `Reader` class works.
The images are processed as follows:
@@ -131,10 +131,10 @@ artificially increase the data set size:
* Randomly distort the `tf.image.random_brightness`.
* Randomly distort the `tf.image.random_contrast`.
-Please see the @{$python/image$Images} page for the list of
+Please see the [Images](../../api_guides/python/image.md) page for the list of
available distortions. We also attach an
`tf.summary.image` to the images
-so that we may visualize them in @{$summaries_and_tensorboard$TensorBoard}.
+so that we may visualize them in [TensorBoard](../../guide/summaries_and_tensorboard.md).
This is a good practice to verify that inputs are built correctly.
<div style="width:50%; margin:auto; margin-bottom:10px; margin-top:20px;">
@@ -160,8 +160,8 @@ Layer Name | Description
`conv2` | `tf.nn.conv2d` and `tf.nn.relu` activation.
`norm2` | `tf.nn.local_response_normalization`.
`pool2` | `tf.nn.max_pool`.
-`local3` | @{$python/nn$fully connected layer with rectified linear activation}.
-`local4` | @{$python/nn$fully connected layer with rectified linear activation}.
+`local3` | [fully connected layer with rectified linear activation](../../api_guides/python/nn.md).
+`local4` | [fully connected layer with rectified linear activation](../../api_guides/python/nn.md).
`softmax_linear` | linear transformation to produce logits.
Here is a graph generated from TensorBoard describing the inference operation:
@@ -205,7 +205,7 @@ We visualize it in TensorBoard with a `tf.summary.scalar`:
We train the model using standard
[gradient descent](https://en.wikipedia.org/wiki/Gradient_descent)
-algorithm (see @{$python/train$Training} for other methods)
+algorithm (see [Training](../../api_guides/python/train.md) for other methods)
with a learning rate that
`tf.train.exponential_decay`
over time.
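A minimal sketch of that decaying learning-rate schedule; the initial rate and decay settings are assumptions:

```python
import tensorflow as tf

global_step = tf.train.get_or_create_global_step()
learning_rate = tf.train.exponential_decay(
    learning_rate=0.1, global_step=global_step,
    decay_steps=1000, decay_rate=0.96, staircase=True)
tf.summary.scalar("learning_rate", learning_rate)
optimizer = tf.train.GradientDescentOptimizer(learning_rate)
```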
@@ -265,7 +265,7 @@ in `cifar10_input.py`.
`cifar10_train.py` periodically uses a `tf.train.Saver` to save
all model parameters in
-@{$guide/saved_model$checkpoint files}
+[checkpoint files](../../guide/saved_model.md)
but it does *not* evaluate the model. The checkpoint file
will be used by `cifar10_eval.py` to measure the predictive
performance (see [Evaluating a Model](#evaluating-a-model) below).
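A minimal sketch of saving model parameters with a `Saver`, using an arbitrary checkpoint path:

```python
import tensorflow as tf

weights = tf.Variable(tf.zeros([10]), name="weights")
saver = tf.train.Saver()

with tf.Session() as sess:
  sess.run(tf.global_variables_initializer())
  save_path = saver.save(sess, "/tmp/cifar10_train/model.ckpt", global_step=0)
  print("Model parameters saved to", save_path)
```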
@@ -282,7 +282,7 @@ how the model is training. We want more insight into the model during training:
* Are the gradients, activations and weights reasonable?
* What is the learning rate currently at?
-@{$summaries_and_tensorboard$TensorBoard} provides this
+[TensorBoard](../../guide/summaries_and_tensorboard.md) provides this
functionality, displaying data exported periodically from `cifar10_train.py` via
a
`tf.summary.FileWriter`.
@@ -413,7 +413,7 @@ scope indicating that they should be run on the first GPU.
All variables are pinned to the CPU and accessed via
`tf.get_variable`
in order to share them in a multi-GPU version.
-See how-to on @{$variables$Sharing Variables}.
+See the how-to on [Sharing Variables](../../guide/variables.md).
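A minimal sketch of that sharing pattern: variables are created on the CPU and later towers reuse them through `tf.get_variable` (shapes here are assumptions):

```python
import tensorflow as tf

def variable_on_cpu(name, shape):
  with tf.device("/cpu:0"):
    return tf.get_variable(name, shape, initializer=tf.zeros_initializer())

with tf.variable_scope("conv1") as scope:
  kernel = variable_on_cpu("weights", [5, 5, 3, 64])
  scope.reuse_variables()
  shared_kernel = tf.get_variable("weights")  # same underlying variable
```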
### Launching and Training the Model on Multiple GPU cards
diff --git a/tensorflow/docs_src/tutorials/images/image_recognition.md b/tensorflow/docs_src/tutorials/images/image_recognition.md
index 83a8d97cf0..52913b2082 100644
--- a/tensorflow/docs_src/tutorials/images/image_recognition.md
+++ b/tensorflow/docs_src/tutorials/images/image_recognition.md
@@ -106,7 +106,7 @@ curl -L "https://storage.googleapis.com/download.tensorflow.org/models/inception
Next, we need to compile the C++ binary that includes the code to load and run the graph.
If you've followed
-@{$install_sources$the instructions to download the source installation of TensorFlow}
+[the instructions to download the source installation of TensorFlow](../../install/install_sources.md)
for your platform, you should be able to build the example by
running this command from your shell terminal:
@@ -448,7 +448,7 @@ and Michael Nielsen's book has a
covering them.
To find out more about implementing convolutional neural networks, you can jump
-to the TensorFlow @{$deep_cnn$deep convolutional networks tutorial},
+to the TensorFlow [deep convolutional networks tutorial](../../tutorials/images/deep_cnn.md),
or start a bit more gently with our [Estimator MNIST tutorial](../estimators/cnn.md).
Finally, if you want to get up to speed on research in this area, you can
read the recent work of all the papers referenced in this tutorial.
diff --git a/tensorflow/docs_src/tutorials/representation/kernel_methods.md b/tensorflow/docs_src/tutorials/representation/kernel_methods.md
index 71e87f4d3e..67adc4951c 100644
--- a/tensorflow/docs_src/tutorials/representation/kernel_methods.md
+++ b/tensorflow/docs_src/tutorials/representation/kernel_methods.md
@@ -2,7 +2,7 @@
Note: This document uses a deprecated version of `tf.estimator`,
`tf.contrib.learn.Estimator`, which has a different interface. It also uses
-other `contrib` methods whose @{$version_compat#not_covered$API may not be stable}.
+other `contrib` methods whose [API may not be stable](../../guide/version_compat.md#not_covered).
In this tutorial, we demonstrate how combining (explicit) kernel methods with
linear models can drastically increase the latters' quality of predictions
@@ -52,7 +52,7 @@ In order to feed data to a `tf.contrib.learn Estimator`, it is helpful to conver
it to Tensors. For this, we will use an `input function` which adds Ops to the
TensorFlow graph that, when executed, create mini-batches of Tensors to be used
downstream. For more background on input functions, check
-@{$premade_estimators#create_input_functions$this section on input functions}.
+[this section on input functions](../../guide/premade_estimators.md#create_input_functions).
In this example, we will use the `tf.train.shuffle_batch` Op which, besides
converting numpy arrays to Tensors, allows us to specify the batch_size and
whether to randomize the input every time the input_fn Ops are executed
diff --git a/tensorflow/docs_src/tutorials/representation/linear.md b/tensorflow/docs_src/tutorials/representation/linear.md
index 014409c617..4f0e67f08e 100644
--- a/tensorflow/docs_src/tutorials/representation/linear.md
+++ b/tensorflow/docs_src/tutorials/representation/linear.md
@@ -18,7 +18,7 @@ tutorial walks through the code in greater detail.
To understand this overview it will help to have some familiarity
with basic machine learning concepts, and also with
-@{$premade_estimators$Estimators}.
+[Estimators](../../guide/premade_estimators.md).
[TOC]
@@ -175,7 +175,7 @@ the data itself. You provide the data through an input function.
The input function must return a dictionary of tensors. Each key corresponds to
the name of a `FeatureColumn`. Each key's value is a tensor containing the
values of that feature for all data instances. See
-@{$premade_estimators#input_fn} for a
+[Premade Estimators](../../guide/premade_estimators.md#input_fn) for a
more comprehensive look at input functions, and `input_fn` in the
[wide and deep learning tutorial](https://github.com/tensorflow/models/tree/master/official/wide_deep)
for an example implementation of an input function.
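A minimal sketch of such an input function, with made-up feature names and values keyed the same way as the corresponding `FeatureColumn`s:

```python
import tensorflow as tf

def input_fn():
  features = {
      "age": tf.constant([[25.0], [40.0], [62.0]]),
      "education_num": tf.constant([[9.0], [13.0], [16.0]]),
  }
  labels = tf.constant([[0], [1], [1]])
  return features, labels
```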
diff --git a/tensorflow/docs_src/tutorials/representation/word2vec.md b/tensorflow/docs_src/tutorials/representation/word2vec.md
index 7964650e19..df0d3176b6 100644
--- a/tensorflow/docs_src/tutorials/representation/word2vec.md
+++ b/tensorflow/docs_src/tutorials/representation/word2vec.md
@@ -383,13 +383,13 @@ compromised speed because we use Python for reading and feeding data items --
each of which require very little work on the TensorFlow back-end. If you find
your model is seriously bottlenecked on input data, you may want to implement a
custom data reader for your problem, as described in
-@{$new_data_formats$New Data Formats}. For the case of Skip-Gram
+[New Data Formats](../../extend/new_data_formats.md). For the case of Skip-Gram
modeling, we've actually already done this for you as an example in
[models/tutorials/embedding/word2vec.py](https://github.com/tensorflow/models/tree/master/tutorials/embedding/word2vec.py).
If your model is no longer I/O bound but you want still more performance, you
can take things further by writing your own TensorFlow Ops, as described in
-@{$adding_an_op$Adding a New Op}. Again we've provided an
+[Adding a New Op](../../extend/adding_an_op.md). Again we've provided an
example of this for the Skip-Gram case
[models/tutorials/embedding/word2vec_optimized.py](https://github.com/tensorflow/models/tree/master/tutorials/embedding/word2vec_optimized.py).
Feel free to benchmark these against each other to measure performance
diff --git a/tensorflow/docs_src/tutorials/sequences/recurrent.md b/tensorflow/docs_src/tutorials/sequences/recurrent.md
index 10d60f7966..39ad441381 100644
--- a/tensorflow/docs_src/tutorials/sequences/recurrent.md
+++ b/tensorflow/docs_src/tutorials/sequences/recurrent.md
@@ -138,7 +138,7 @@ for current_batch_of_words in words_in_dataset:
### Inputs
The word IDs will be embedded into a dense representation (see the
-@{$word2vec$Vector Representations Tutorial}) before feeding to
+[Vector Representations Tutorial](../../tutorials/representation/word2vec.md)) before feeding to
the LSTM. This allows the model to efficiently represent the knowledge about
particular words. It is also easy to write:
diff --git a/tensorflow/docs_src/tutorials/sequences/recurrent_quickdraw.md b/tensorflow/docs_src/tutorials/sequences/recurrent_quickdraw.md
index 37bce5b76d..2c537c60a1 100644
--- a/tensorflow/docs_src/tutorials/sequences/recurrent_quickdraw.md
+++ b/tensorflow/docs_src/tutorials/sequences/recurrent_quickdraw.md
@@ -32,7 +32,7 @@ drawings in 345 categories.
To try the code for this tutorial:
-1. @{$install$Install TensorFlow} if you haven't already.
+1. [Install TensorFlow](../../install/index.md) if you haven't already.
1. Download the [tutorial code]
(https://github.com/tensorflow/models/tree/master/tutorials/rnn/quickdraw/train_model.py).
1. [Download the data](#download-the-data) in `TFRecord` format from
@@ -108,7 +108,7 @@ This download will take a while and download a bit more than 23GB of data.
### Optional: Converting the data
To convert the `ndjson` files to
-@{$python/python_io#TFRecords_Format_Details$TFRecord} files containing
+[TFRecord](../../api_guides/python/python_io.md#TFRecords_Format_Details) files containing
[`tf.train.Example`](https://www.tensorflow.org/code/tensorflow/core/example/example.proto)
protos run the following command.
@@ -118,7 +118,7 @@ protos run the following command.
```
This will store the data in 10 shards of
-@{$python/python_io#TFRecords_Format_Details$TFRecord} files with 10000 items
+[TFRecord](../../api_guides/python/python_io.md#TFRecords_Format_Details) files with 10000 items
per class for the training data and 1000 items per class as eval data.
This conversion process is described in more detail in the following.
@@ -220,7 +220,7 @@ length 2.
### Defining the model
To define the model we create a new `Estimator`. If you want to read more about
-estimators, we recommend @{$custom_estimators$this tutorial}.
+estimators, we recommend [this tutorial](../../guide/custom_estimators.md).
To build the model, we: