Diffstat (limited to 'tensorflow/docs_src')
-rw-r--r-- tensorflow/docs_src/BUILD | 14
-rw-r--r-- tensorflow/docs_src/api_guides/cc/guide.md | 18
-rw-r--r-- tensorflow/docs_src/api_guides/python/array_ops.md | 120
-rw-r--r-- tensorflow/docs_src/api_guides/python/check_ops.md | 34
-rw-r--r-- tensorflow/docs_src/api_guides/python/client.md | 48
-rw-r--r-- tensorflow/docs_src/api_guides/python/constant_op.md | 38
-rw-r--r-- tensorflow/docs_src/api_guides/python/contrib.crf.md | 14
-rw-r--r-- tensorflow/docs_src/api_guides/python/contrib.ffmpeg.md | 4
-rw-r--r-- tensorflow/docs_src/api_guides/python/contrib.framework.md | 94
-rw-r--r-- tensorflow/docs_src/api_guides/python/contrib.graph_editor.md | 114
-rw-r--r-- tensorflow/docs_src/api_guides/python/contrib.integrate.md | 2
-rw-r--r-- tensorflow/docs_src/api_guides/python/contrib.layers.md | 118
-rw-r--r-- tensorflow/docs_src/api_guides/python/contrib.learn.md | 76
-rw-r--r-- tensorflow/docs_src/api_guides/python/contrib.linalg.md | 16
-rw-r--r-- tensorflow/docs_src/api_guides/python/contrib.losses.md | 30
-rw-r--r-- tensorflow/docs_src/api_guides/python/contrib.metrics.md | 84
-rw-r--r-- tensorflow/docs_src/api_guides/python/contrib.rnn.md | 60
-rw-r--r-- tensorflow/docs_src/api_guides/python/contrib.seq2seq.md | 32
-rw-r--r-- tensorflow/docs_src/api_guides/python/contrib.signal.md | 16
-rw-r--r-- tensorflow/docs_src/api_guides/python/contrib.staging.md | 2
-rw-r--r-- tensorflow/docs_src/api_guides/python/contrib.training.md | 34
-rw-r--r-- tensorflow/docs_src/api_guides/python/contrib.util.md | 10
-rw-r--r-- tensorflow/docs_src/api_guides/python/control_flow_ops.md | 56
-rw-r--r-- tensorflow/docs_src/api_guides/python/framework.md | 58
-rw-r--r-- tensorflow/docs_src/api_guides/python/functional_ops.md | 10
-rw-r--r-- tensorflow/docs_src/api_guides/python/image.md | 98
-rw-r--r-- tensorflow/docs_src/api_guides/python/input_dataset.md | 96
-rw-r--r-- tensorflow/docs_src/api_guides/python/io_ops.md | 100
-rw-r--r-- tensorflow/docs_src/api_guides/python/math_ops.md | 230
-rw-r--r-- tensorflow/docs_src/api_guides/python/meta_graph.md | 10
-rw-r--r-- tensorflow/docs_src/api_guides/python/nn.md | 156
-rw-r--r-- tensorflow/docs_src/api_guides/python/python_io.md | 8
-rw-r--r-- tensorflow/docs_src/api_guides/python/reading_data.md | 58
-rw-r--r-- tensorflow/docs_src/api_guides/python/regression_examples.md | 12
-rw-r--r-- tensorflow/docs_src/api_guides/python/session_ops.md | 8
-rw-r--r-- tensorflow/docs_src/api_guides/python/sparse_ops.md | 44
-rw-r--r-- tensorflow/docs_src/api_guides/python/spectral_ops.md | 30
-rw-r--r-- tensorflow/docs_src/api_guides/python/state_ops.md | 122
-rw-r--r-- tensorflow/docs_src/api_guides/python/string_ops.md | 28
-rw-r--r-- tensorflow/docs_src/api_guides/python/summary.md | 20
-rw-r--r-- tensorflow/docs_src/api_guides/python/test.md | 20
-rw-r--r-- tensorflow/docs_src/api_guides/python/tfdbg.md | 22
-rw-r--r-- tensorflow/docs_src/api_guides/python/threading_and_queues.md | 36
-rw-r--r-- tensorflow/docs_src/api_guides/python/train.md | 138
-rw-r--r-- tensorflow/docs_src/community/index.md | 2
-rw-r--r-- tensorflow/docs_src/community/lists.md | 2
-rw-r--r-- tensorflow/docs_src/community/style_guide.md | 58
-rw-r--r-- tensorflow/docs_src/deploy/distributed.md | 20
-rw-r--r-- tensorflow/docs_src/deploy/s3.md | 2
-rw-r--r-- tensorflow/docs_src/extend/adding_an_op.md | 27
-rw-r--r-- tensorflow/docs_src/extend/architecture.md | 4
-rw-r--r-- tensorflow/docs_src/extend/index.md | 2
-rw-r--r-- tensorflow/docs_src/extend/new_data_formats.md | 33
-rw-r--r-- tensorflow/docs_src/guide/checkpoints.md | 2
-rw-r--r-- tensorflow/docs_src/guide/custom_estimators.md | 54
-rw-r--r-- tensorflow/docs_src/guide/datasets.md | 24
-rw-r--r-- tensorflow/docs_src/guide/datasets_for_estimators.md | 26
-rw-r--r-- tensorflow/docs_src/guide/debugger.md | 26
-rw-r--r-- tensorflow/docs_src/guide/eager.md | 15
-rw-r--r-- tensorflow/docs_src/guide/estimators.md | 23
-rw-r--r-- tensorflow/docs_src/guide/faq.md | 71
-rw-r--r-- tensorflow/docs_src/guide/feature_columns.md | 36
-rw-r--r-- tensorflow/docs_src/guide/graph_viz.md | 2
-rw-r--r-- tensorflow/docs_src/guide/graphs.md | 204
-rw-r--r-- tensorflow/docs_src/guide/index.md | 5
-rw-r--r-- tensorflow/docs_src/guide/leftnav_files | 2
-rw-r--r-- tensorflow/docs_src/guide/low_level_intro.md | 46
-rw-r--r-- tensorflow/docs_src/guide/premade_estimators.md | 14
-rw-r--r-- tensorflow/docs_src/guide/saved_model.md | 60
-rw-r--r-- tensorflow/docs_src/guide/summaries_and_tensorboard.md | 8
-rw-r--r-- tensorflow/docs_src/guide/tensors.md | 2
-rw-r--r-- tensorflow/docs_src/guide/using_gpu.md | 2
-rw-r--r-- tensorflow/docs_src/guide/using_tpu.md | 32
-rw-r--r-- tensorflow/docs_src/guide/variables.md | 4
-rw-r--r-- tensorflow/docs_src/guide/version_compat.md | 11
-rw-r--r-- tensorflow/docs_src/install/install_c.md | 2
-rw-r--r-- tensorflow/docs_src/install/install_go.md | 4
-rw-r--r-- tensorflow/docs_src/install/install_java.md | 22
-rw-r--r-- tensorflow/docs_src/install/install_linux.md | 18
-rw-r--r-- tensorflow/docs_src/install/install_mac.md | 10
-rw-r--r-- tensorflow/docs_src/install/install_raspbian.md | 6
-rw-r--r-- tensorflow/docs_src/install/install_sources.md | 15
-rw-r--r-- tensorflow/docs_src/performance/datasets_performance.md | 22
-rw-r--r-- tensorflow/docs_src/performance/performance_guide.md | 42
-rw-r--r-- tensorflow/docs_src/performance/performance_models.md | 18
-rw-r--r-- tensorflow/docs_src/performance/quantization.md | 2
-rw-r--r-- tensorflow/docs_src/performance/xla/jit.md | 12
-rw-r--r-- tensorflow/docs_src/performance/xla/operation_semantics.md | 307
-rw-r--r-- tensorflow/docs_src/performance/xla/tfcompile.md | 5
-rw-r--r-- tensorflow/docs_src/tutorials/_toc.yaml | 21
-rw-r--r-- tensorflow/docs_src/tutorials/estimators/cnn.md | 16
-rw-r--r-- tensorflow/docs_src/tutorials/images/deep_cnn.md | 72
-rw-r--r-- tensorflow/docs_src/tutorials/images/image_recognition.md | 2
-rw-r--r-- tensorflow/docs_src/tutorials/representation/kernel_methods.md | 11
-rw-r--r-- tensorflow/docs_src/tutorials/representation/linear.md | 2
-rw-r--r-- tensorflow/docs_src/tutorials/representation/word2vec.md | 2
96 files changed, 2007 insertions(+), 1761 deletions(-)
diff --git a/tensorflow/docs_src/BUILD b/tensorflow/docs_src/BUILD
new file mode 100644
index 0000000000..34bf7b6a11
--- /dev/null
+++ b/tensorflow/docs_src/BUILD
@@ -0,0 +1,14 @@
+# Files used to generate TensorFlow docs.
+
+licenses(["notice"]) # Apache 2.0
+
+package(
+ default_visibility = ["//tensorflow:internal"],
+)
+
+exports_files(["LICENSE"])
+
+filegroup(
+ name = "docs_src",
+ data = glob(["**/*.md"]),
+)
diff --git a/tensorflow/docs_src/api_guides/cc/guide.md b/tensorflow/docs_src/api_guides/cc/guide.md
index 4e51ada58a..2cd645afa7 100644
--- a/tensorflow/docs_src/api_guides/cc/guide.md
+++ b/tensorflow/docs_src/api_guides/cc/guide.md
@@ -7,6 +7,12 @@ You should, as a result, be sure you are following the
[`master` version of this doc](https://www.tensorflow.org/versions/master/api_guides/cc/guide),
in case there have been any changes.
+Note: The C++ API is designed to work only with TensorFlow's `bazel build`.
+If you need a stand-alone option, use the [C API](../../install/install_c.md).
+See [these instructions](https://docs.bazel.build/versions/master/external.html)
+for details on how to include TensorFlow as a subproject (instead of building
+your project from inside TensorFlow, as in this example).
+
[TOC]
TensorFlow's C++ API provides mechanisms for constructing and executing a data
@@ -92,7 +98,7 @@ We will delve into the details of each below.
### Scope
-@{tensorflow::Scope} is the main data structure that holds the current state
+`tensorflow::Scope` is the main data structure that holds the current state
of graph construction. A `Scope` acts as a handle to the graph being
constructed, as well as storing TensorFlow operation properties. The `Scope`
object is the first argument to operation constructors, and operations that use
@@ -102,7 +108,7 @@ explained further below.
Create a new `Scope` object by calling `Scope::NewRootScope`. This creates
some resources such as a graph to which operations are added. It also creates a
-@{tensorflow::Status} object which will be used to indicate errors encountered
+`tensorflow::Status` object which will be used to indicate errors encountered
when constructing operations. The `Scope` class has value semantics; thus, a
`Scope` object can be freely copied and passed around.
@@ -121,7 +127,7 @@ Here are some of the properties controlled by a `Scope` object:
* Device placement for an operation
* Kernel attribute for an operation
-Please refer to @{tensorflow::Scope} for the complete list of member functions
+Please refer to `tensorflow::Scope` for the complete list of member functions
that let you create child scopes with new properties.
### Operation Constructors
@@ -213,7 +219,7 @@ auto c = Concat(scope, s, 0);
You may pass many different types of C++ values directly to tensor
constants. You may explicitly create a tensor constant by calling the
-@{tensorflow::ops::Const} function from various kinds of C++ values. For
+`tensorflow::ops::Const` function from various kinds of C++ values. For
example:
* Scalars
@@ -257,7 +263,7 @@ auto y = Add(scope, {1, 2, 3, 4}, 10);
## Graph Execution
When executing a graph, you will need a session. The C++ API provides a
-@{tensorflow::ClientSession} class that will execute ops created by the
+`tensorflow::ClientSession` class that will execute ops created by the
operation constructors. TensorFlow will automatically determine which parts of
the graph need to be executed, and what values need feeding. For example:
@@ -291,5 +297,5 @@ session.Run({ {a, { {1, 2}, {3, 4} } } }, {c}, &outputs);
// outputs[0] == [4 5; 6 7]
```
-Please see the @{tensorflow::Tensor} documentation for more information on how
+Please see the `tensorflow::Tensor` documentation for more information on how
to use the execution output.
diff --git a/tensorflow/docs_src/api_guides/python/array_ops.md b/tensorflow/docs_src/api_guides/python/array_ops.md
index a34f01f073..ddeea80c56 100644
--- a/tensorflow/docs_src/api_guides/python/array_ops.md
+++ b/tensorflow/docs_src/api_guides/python/array_ops.md
@@ -1,7 +1,7 @@
# Tensor Transformations
Note: Functions taking `Tensor` arguments can also take anything accepted by
-@{tf.convert_to_tensor}.
+`tf.convert_to_tensor`.
[TOC]
@@ -10,78 +10,78 @@ Note: Functions taking `Tensor` arguments can also take anything accepted by
TensorFlow provides several operations that you can use to cast tensor data
types in your graph.
-* @{tf.string_to_number}
-* @{tf.to_double}
-* @{tf.to_float}
-* @{tf.to_bfloat16}
-* @{tf.to_int32}
-* @{tf.to_int64}
-* @{tf.cast}
-* @{tf.bitcast}
-* @{tf.saturate_cast}
+* `tf.string_to_number`
+* `tf.to_double`
+* `tf.to_float`
+* `tf.to_bfloat16`
+* `tf.to_int32`
+* `tf.to_int64`
+* `tf.cast`
+* `tf.bitcast`
+* `tf.saturate_cast`
## Shapes and Shaping
TensorFlow provides several operations that you can use to determine the shape
of a tensor and change the shape of a tensor.
-* @{tf.broadcast_dynamic_shape}
-* @{tf.broadcast_static_shape}
-* @{tf.shape}
-* @{tf.shape_n}
-* @{tf.size}
-* @{tf.rank}
-* @{tf.reshape}
-* @{tf.squeeze}
-* @{tf.expand_dims}
-* @{tf.meshgrid}
+* `tf.broadcast_dynamic_shape`
+* `tf.broadcast_static_shape`
+* `tf.shape`
+* `tf.shape_n`
+* `tf.size`
+* `tf.rank`
+* `tf.reshape`
+* `tf.squeeze`
+* `tf.expand_dims`
+* `tf.meshgrid`
## Slicing and Joining
TensorFlow provides several operations to slice or extract parts of a tensor,
or join multiple tensors together.
-* @{tf.slice}
-* @{tf.strided_slice}
-* @{tf.split}
-* @{tf.tile}
-* @{tf.pad}
-* @{tf.concat}
-* @{tf.stack}
-* @{tf.parallel_stack}
-* @{tf.unstack}
-* @{tf.reverse_sequence}
-* @{tf.reverse}
-* @{tf.reverse_v2}
-* @{tf.transpose}
-* @{tf.extract_image_patches}
-* @{tf.space_to_batch_nd}
-* @{tf.space_to_batch}
-* @{tf.required_space_to_batch_paddings}
-* @{tf.batch_to_space_nd}
-* @{tf.batch_to_space}
-* @{tf.space_to_depth}
-* @{tf.depth_to_space}
-* @{tf.gather}
-* @{tf.gather_nd}
-* @{tf.unique_with_counts}
-* @{tf.scatter_nd}
-* @{tf.dynamic_partition}
-* @{tf.dynamic_stitch}
-* @{tf.boolean_mask}
-* @{tf.one_hot}
-* @{tf.sequence_mask}
-* @{tf.dequantize}
-* @{tf.quantize_v2}
-* @{tf.quantized_concat}
-* @{tf.setdiff1d}
+* `tf.slice`
+* `tf.strided_slice`
+* `tf.split`
+* `tf.tile`
+* `tf.pad`
+* `tf.concat`
+* `tf.stack`
+* `tf.parallel_stack`
+* `tf.unstack`
+* `tf.reverse_sequence`
+* `tf.reverse`
+* `tf.reverse_v2`
+* `tf.transpose`
+* `tf.extract_image_patches`
+* `tf.space_to_batch_nd`
+* `tf.space_to_batch`
+* `tf.required_space_to_batch_paddings`
+* `tf.batch_to_space_nd`
+* `tf.batch_to_space`
+* `tf.space_to_depth`
+* `tf.depth_to_space`
+* `tf.gather`
+* `tf.gather_nd`
+* `tf.unique_with_counts`
+* `tf.scatter_nd`
+* `tf.dynamic_partition`
+* `tf.dynamic_stitch`
+* `tf.boolean_mask`
+* `tf.one_hot`
+* `tf.sequence_mask`
+* `tf.dequantize`
+* `tf.quantize_v2`
+* `tf.quantized_concat`
+* `tf.setdiff1d`
## Fake quantization
Operations used to help train for better quantization accuracy.
-* @{tf.fake_quant_with_min_max_args}
-* @{tf.fake_quant_with_min_max_args_gradient}
-* @{tf.fake_quant_with_min_max_vars}
-* @{tf.fake_quant_with_min_max_vars_gradient}
-* @{tf.fake_quant_with_min_max_vars_per_channel}
-* @{tf.fake_quant_with_min_max_vars_per_channel_gradient}
+* `tf.fake_quant_with_min_max_args`
+* `tf.fake_quant_with_min_max_args_gradient`
+* `tf.fake_quant_with_min_max_vars`
+* `tf.fake_quant_with_min_max_vars_gradient`
+* `tf.fake_quant_with_min_max_vars_per_channel`
+* `tf.fake_quant_with_min_max_vars_per_channel_gradient`
diff --git a/tensorflow/docs_src/api_guides/python/check_ops.md b/tensorflow/docs_src/api_guides/python/check_ops.md
index 6f8a18af42..b52fdaa3ab 100644
--- a/tensorflow/docs_src/api_guides/python/check_ops.md
+++ b/tensorflow/docs_src/api_guides/python/check_ops.md
@@ -1,19 +1,19 @@
# Asserts and boolean checks
-* @{tf.assert_negative}
-* @{tf.assert_positive}
-* @{tf.assert_proper_iterable}
-* @{tf.assert_non_negative}
-* @{tf.assert_non_positive}
-* @{tf.assert_equal}
-* @{tf.assert_integer}
-* @{tf.assert_less}
-* @{tf.assert_less_equal}
-* @{tf.assert_greater}
-* @{tf.assert_greater_equal}
-* @{tf.assert_rank}
-* @{tf.assert_rank_at_least}
-* @{tf.assert_type}
-* @{tf.is_non_decreasing}
-* @{tf.is_numeric_tensor}
-* @{tf.is_strictly_increasing}
+* `tf.assert_negative`
+* `tf.assert_positive`
+* `tf.assert_proper_iterable`
+* `tf.assert_non_negative`
+* `tf.assert_non_positive`
+* `tf.assert_equal`
+* `tf.assert_integer`
+* `tf.assert_less`
+* `tf.assert_less_equal`
+* `tf.assert_greater`
+* `tf.assert_greater_equal`
+* `tf.assert_rank`
+* `tf.assert_rank_at_least`
+* `tf.assert_type`
+* `tf.is_non_decreasing`
+* `tf.is_numeric_tensor`
+* `tf.is_strictly_increasing`
diff --git a/tensorflow/docs_src/api_guides/python/client.md b/tensorflow/docs_src/api_guides/python/client.md
index 27fc8610bf..56367e6671 100644
--- a/tensorflow/docs_src/api_guides/python/client.md
+++ b/tensorflow/docs_src/api_guides/python/client.md
@@ -4,33 +4,33 @@
This library contains classes for launching graphs and executing operations.
@{$guide/low_level_intro$This guide} has examples of how a graph
-is launched in a @{tf.Session}.
+is launched in a `tf.Session`.
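
As a minimal sketch of that flow (the constant values here are illustrative,
not from the guide): graph construction and graph execution are separate
steps, and `tf.Session` is where execution happens.

```python
import tensorflow as tf

# Build a tiny graph; nothing runs at construction time.
a = tf.constant(2.0)
b = tf.constant(3.0)
total = a + b

# Launch the graph in a session; sess.run() triggers execution.
with tf.Session() as sess:
    print(sess.run(total))  # 5.0
```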
## Session management
-* @{tf.Session}
-* @{tf.InteractiveSession}
-* @{tf.get_default_session}
+* `tf.Session`
+* `tf.InteractiveSession`
+* `tf.get_default_session`
## Error classes and convenience functions
-* @{tf.OpError}
-* @{tf.errors.CancelledError}
-* @{tf.errors.UnknownError}
-* @{tf.errors.InvalidArgumentError}
-* @{tf.errors.DeadlineExceededError}
-* @{tf.errors.NotFoundError}
-* @{tf.errors.AlreadyExistsError}
-* @{tf.errors.PermissionDeniedError}
-* @{tf.errors.UnauthenticatedError}
-* @{tf.errors.ResourceExhaustedError}
-* @{tf.errors.FailedPreconditionError}
-* @{tf.errors.AbortedError}
-* @{tf.errors.OutOfRangeError}
-* @{tf.errors.UnimplementedError}
-* @{tf.errors.InternalError}
-* @{tf.errors.UnavailableError}
-* @{tf.errors.DataLossError}
-* @{tf.errors.exception_type_from_error_code}
-* @{tf.errors.error_code_from_exception_type}
-* @{tf.errors.raise_exception_on_not_ok_status}
+* `tf.OpError`
+* `tf.errors.CancelledError`
+* `tf.errors.UnknownError`
+* `tf.errors.InvalidArgumentError`
+* `tf.errors.DeadlineExceededError`
+* `tf.errors.NotFoundError`
+* `tf.errors.AlreadyExistsError`
+* `tf.errors.PermissionDeniedError`
+* `tf.errors.UnauthenticatedError`
+* `tf.errors.ResourceExhaustedError`
+* `tf.errors.FailedPreconditionError`
+* `tf.errors.AbortedError`
+* `tf.errors.OutOfRangeError`
+* `tf.errors.UnimplementedError`
+* `tf.errors.InternalError`
+* `tf.errors.UnavailableError`
+* `tf.errors.DataLossError`
+* `tf.errors.exception_type_from_error_code`
+* `tf.errors.error_code_from_exception_type`
+* `tf.errors.raise_exception_on_not_ok_status`
diff --git a/tensorflow/docs_src/api_guides/python/constant_op.md b/tensorflow/docs_src/api_guides/python/constant_op.md
index db3410ce22..498ec3db5d 100644
--- a/tensorflow/docs_src/api_guides/python/constant_op.md
+++ b/tensorflow/docs_src/api_guides/python/constant_op.md
@@ -1,7 +1,7 @@
# Constants, Sequences, and Random Values
Note: Functions taking `Tensor` arguments can also take anything accepted by
-@{tf.convert_to_tensor}.
+`tf.convert_to_tensor`.
[TOC]
@@ -9,17 +9,17 @@ Note: Functions taking `Tensor` arguments can also take anything accepted by
TensorFlow provides several operations that you can use to generate constants.
-* @{tf.zeros}
-* @{tf.zeros_like}
-* @{tf.ones}
-* @{tf.ones_like}
-* @{tf.fill}
-* @{tf.constant}
+* `tf.zeros`
+* `tf.zeros_like`
+* `tf.ones`
+* `tf.ones_like`
+* `tf.fill`
+* `tf.constant`
## Sequences
-* @{tf.linspace}
-* @{tf.range}
+* `tf.linspace`
+* `tf.range`
## Random Tensors
@@ -29,11 +29,11 @@ time they are evaluated.
The `seed` keyword argument in these functions acts in conjunction with
the graph-level random seed. Changing either the graph-level seed using
-@{tf.set_random_seed} or the
+`tf.set_random_seed` or the
op-level seed will change the underlying seed of these operations. Setting
neither graph-level nor op-level seed results in a random seed for all
operations.
-See @{tf.set_random_seed}
+See `tf.set_random_seed`
for details on the interaction between operation-level and graph-level random
seeds.
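
A sketch of this interaction (the seed values are arbitrary): with the
graph-level seed set, random ops repeat the same value sequence in every new
session, whether or not they also carry an op-level seed.

```python
import tensorflow as tf

tf.set_random_seed(1234)            # graph-level seed
a = tf.random_uniform([1])          # derives its seed from the graph seed
b = tf.random_normal([1], seed=42)  # op-level seed pins this op directly

with tf.Session() as sess1:
    print(sess1.run(a), sess1.run(b))
with tf.Session() as sess2:
    print(sess2.run(a), sess2.run(b))  # repeats the values seen in sess1
```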
@@ -77,11 +77,11 @@ sess.run(init)
print(sess.run(var))
```
-* @{tf.random_normal}
-* @{tf.truncated_normal}
-* @{tf.random_uniform}
-* @{tf.random_shuffle}
-* @{tf.random_crop}
-* @{tf.multinomial}
-* @{tf.random_gamma}
-* @{tf.set_random_seed}
+* `tf.random_normal`
+* `tf.truncated_normal`
+* `tf.random_uniform`
+* `tf.random_shuffle`
+* `tf.random_crop`
+* `tf.multinomial`
+* `tf.random_gamma`
+* `tf.set_random_seed`
diff --git a/tensorflow/docs_src/api_guides/python/contrib.crf.md b/tensorflow/docs_src/api_guides/python/contrib.crf.md
index 428383fd41..a544f136b3 100644
--- a/tensorflow/docs_src/api_guides/python/contrib.crf.md
+++ b/tensorflow/docs_src/api_guides/python/contrib.crf.md
@@ -2,10 +2,10 @@
Linear-chain CRF layer.
-* @{tf.contrib.crf.crf_sequence_score}
-* @{tf.contrib.crf.crf_log_norm}
-* @{tf.contrib.crf.crf_log_likelihood}
-* @{tf.contrib.crf.crf_unary_score}
-* @{tf.contrib.crf.crf_binary_score}
-* @{tf.contrib.crf.CrfForwardRnnCell}
-* @{tf.contrib.crf.viterbi_decode}
+* `tf.contrib.crf.crf_sequence_score`
+* `tf.contrib.crf.crf_log_norm`
+* `tf.contrib.crf.crf_log_likelihood`
+* `tf.contrib.crf.crf_unary_score`
+* `tf.contrib.crf.crf_binary_score`
+* `tf.contrib.crf.CrfForwardRnnCell`
+* `tf.contrib.crf.viterbi_decode`
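
A minimal training-loss sketch for this layer (all shapes below are
hypothetical): `crf_log_likelihood` consumes unary scores and gold tags, and
also returns the transition matrix that `viterbi_decode` uses at inference
time.

```python
import tensorflow as tf

# Hypothetical shapes: batch of 8 sequences, 20 steps, 5 tags.
unary_scores = tf.placeholder(tf.float32, [8, 20, 5])
gold_tags = tf.placeholder(tf.int32, [8, 20])
lengths = tf.placeholder(tf.int32, [8])

# Per-example log-likelihood plus the learned transition parameters.
log_likelihood, transition_params = tf.contrib.crf.crf_log_likelihood(
    unary_scores, gold_tags, lengths)
loss = tf.reduce_mean(-log_likelihood)  # minimize negative log-likelihood
```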
diff --git a/tensorflow/docs_src/api_guides/python/contrib.ffmpeg.md b/tensorflow/docs_src/api_guides/python/contrib.ffmpeg.md
index 27948689c5..7df7547131 100644
--- a/tensorflow/docs_src/api_guides/python/contrib.ffmpeg.md
+++ b/tensorflow/docs_src/api_guides/python/contrib.ffmpeg.md
@@ -19,5 +19,5 @@ uncompressed_binary = ffmpeg.encode_audio(
waveform, file_format='wav', samples_per_second=44100)
```
-* @{tf.contrib.ffmpeg.decode_audio}
-* @{tf.contrib.ffmpeg.encode_audio}
+* `tf.contrib.ffmpeg.decode_audio`
+* `tf.contrib.ffmpeg.encode_audio`
diff --git a/tensorflow/docs_src/api_guides/python/contrib.framework.md b/tensorflow/docs_src/api_guides/python/contrib.framework.md
index 6b4ce3a14d..00fb8b0ac3 100644
--- a/tensorflow/docs_src/api_guides/python/contrib.framework.md
+++ b/tensorflow/docs_src/api_guides/python/contrib.framework.md
@@ -3,62 +3,62 @@
Framework utilities.
-* @{tf.contrib.framework.assert_same_float_dtype}
-* @{tf.contrib.framework.assert_scalar}
-* @{tf.contrib.framework.assert_scalar_int}
-* @{tf.convert_to_tensor_or_sparse_tensor}
-* @{tf.contrib.framework.get_graph_from_inputs}
-* @{tf.is_numeric_tensor}
-* @{tf.is_non_decreasing}
-* @{tf.is_strictly_increasing}
-* @{tf.contrib.framework.is_tensor}
-* @{tf.contrib.framework.reduce_sum_n}
-* @{tf.contrib.framework.remove_squeezable_dimensions}
-* @{tf.contrib.framework.with_shape}
-* @{tf.contrib.framework.with_same_shape}
+* `tf.contrib.framework.assert_same_float_dtype`
+* `tf.contrib.framework.assert_scalar`
+* `tf.contrib.framework.assert_scalar_int`
+* `tf.convert_to_tensor_or_sparse_tensor`
+* `tf.contrib.framework.get_graph_from_inputs`
+* `tf.is_numeric_tensor`
+* `tf.is_non_decreasing`
+* `tf.is_strictly_increasing`
+* `tf.contrib.framework.is_tensor`
+* `tf.contrib.framework.reduce_sum_n`
+* `tf.contrib.framework.remove_squeezable_dimensions`
+* `tf.contrib.framework.with_shape`
+* `tf.contrib.framework.with_same_shape`
## Deprecation
-* @{tf.contrib.framework.deprecated}
-* @{tf.contrib.framework.deprecated_args}
-* @{tf.contrib.framework.deprecated_arg_values}
+* `tf.contrib.framework.deprecated`
+* `tf.contrib.framework.deprecated_args`
+* `tf.contrib.framework.deprecated_arg_values`
## Arg_Scope
-* @{tf.contrib.framework.arg_scope}
-* @{tf.contrib.framework.add_arg_scope}
-* @{tf.contrib.framework.has_arg_scope}
-* @{tf.contrib.framework.arg_scoped_arguments}
+* `tf.contrib.framework.arg_scope`
+* `tf.contrib.framework.add_arg_scope`
+* `tf.contrib.framework.has_arg_scope`
+* `tf.contrib.framework.arg_scoped_arguments`
## Variables
-* @{tf.contrib.framework.add_model_variable}
-* @{tf.train.assert_global_step}
-* @{tf.contrib.framework.assert_or_get_global_step}
-* @{tf.contrib.framework.assign_from_checkpoint}
-* @{tf.contrib.framework.assign_from_checkpoint_fn}
-* @{tf.contrib.framework.assign_from_values}
-* @{tf.contrib.framework.assign_from_values_fn}
-* @{tf.contrib.framework.create_global_step}
-* @{tf.contrib.framework.filter_variables}
-* @{tf.train.get_global_step}
-* @{tf.contrib.framework.get_or_create_global_step}
-* @{tf.contrib.framework.get_local_variables}
-* @{tf.contrib.framework.get_model_variables}
-* @{tf.contrib.framework.get_unique_variable}
-* @{tf.contrib.framework.get_variables_by_name}
-* @{tf.contrib.framework.get_variables_by_suffix}
-* @{tf.contrib.framework.get_variables_to_restore}
-* @{tf.contrib.framework.get_variables}
-* @{tf.contrib.framework.local_variable}
-* @{tf.contrib.framework.model_variable}
-* @{tf.contrib.framework.variable}
-* @{tf.contrib.framework.VariableDeviceChooser}
-* @{tf.contrib.framework.zero_initializer}
+* `tf.contrib.framework.add_model_variable`
+* `tf.train.assert_global_step`
+* `tf.contrib.framework.assert_or_get_global_step`
+* `tf.contrib.framework.assign_from_checkpoint`
+* `tf.contrib.framework.assign_from_checkpoint_fn`
+* `tf.contrib.framework.assign_from_values`
+* `tf.contrib.framework.assign_from_values_fn`
+* `tf.contrib.framework.create_global_step`
+* `tf.contrib.framework.filter_variables`
+* `tf.train.get_global_step`
+* `tf.contrib.framework.get_or_create_global_step`
+* `tf.contrib.framework.get_local_variables`
+* `tf.contrib.framework.get_model_variables`
+* `tf.contrib.framework.get_unique_variable`
+* `tf.contrib.framework.get_variables_by_name`
+* `tf.contrib.framework.get_variables_by_suffix`
+* `tf.contrib.framework.get_variables_to_restore`
+* `tf.contrib.framework.get_variables`
+* `tf.contrib.framework.local_variable`
+* `tf.contrib.framework.model_variable`
+* `tf.contrib.framework.variable`
+* `tf.contrib.framework.VariableDeviceChooser`
+* `tf.contrib.framework.zero_initializer`
## Checkpoint utilities
-* @{tf.contrib.framework.load_checkpoint}
-* @{tf.contrib.framework.list_variables}
-* @{tf.contrib.framework.load_variable}
-* @{tf.contrib.framework.init_from_checkpoint}
+* `tf.contrib.framework.load_checkpoint`
+* `tf.contrib.framework.list_variables`
+* `tf.contrib.framework.load_variable`
+* `tf.contrib.framework.init_from_checkpoint`
diff --git a/tensorflow/docs_src/api_guides/python/contrib.graph_editor.md b/tensorflow/docs_src/api_guides/python/contrib.graph_editor.md
index 20fe88a799..8ce49b952b 100644
--- a/tensorflow/docs_src/api_guides/python/contrib.graph_editor.md
+++ b/tensorflow/docs_src/api_guides/python/contrib.graph_editor.md
@@ -100,78 +100,78 @@ which to operate must always be given explicitly. This is the reason why
## Module: util
-* @{tf.contrib.graph_editor.make_list_of_op}
-* @{tf.contrib.graph_editor.get_tensors}
-* @{tf.contrib.graph_editor.make_list_of_t}
-* @{tf.contrib.graph_editor.get_generating_ops}
-* @{tf.contrib.graph_editor.get_consuming_ops}
-* @{tf.contrib.graph_editor.ControlOutputs}
-* @{tf.contrib.graph_editor.placeholder_name}
-* @{tf.contrib.graph_editor.make_placeholder_from_tensor}
-* @{tf.contrib.graph_editor.make_placeholder_from_dtype_and_shape}
+* `tf.contrib.graph_editor.make_list_of_op`
+* `tf.contrib.graph_editor.get_tensors`
+* `tf.contrib.graph_editor.make_list_of_t`
+* `tf.contrib.graph_editor.get_generating_ops`
+* `tf.contrib.graph_editor.get_consuming_ops`
+* `tf.contrib.graph_editor.ControlOutputs`
+* `tf.contrib.graph_editor.placeholder_name`
+* `tf.contrib.graph_editor.make_placeholder_from_tensor`
+* `tf.contrib.graph_editor.make_placeholder_from_dtype_and_shape`
## Module: select
-* @{tf.contrib.graph_editor.filter_ts}
-* @{tf.contrib.graph_editor.filter_ts_from_regex}
-* @{tf.contrib.graph_editor.filter_ops}
-* @{tf.contrib.graph_editor.filter_ops_from_regex}
-* @{tf.contrib.graph_editor.get_name_scope_ops}
-* @{tf.contrib.graph_editor.check_cios}
-* @{tf.contrib.graph_editor.get_ops_ios}
-* @{tf.contrib.graph_editor.compute_boundary_ts}
-* @{tf.contrib.graph_editor.get_within_boundary_ops}
-* @{tf.contrib.graph_editor.get_forward_walk_ops}
-* @{tf.contrib.graph_editor.get_backward_walk_ops}
-* @{tf.contrib.graph_editor.get_walks_intersection_ops}
-* @{tf.contrib.graph_editor.get_walks_union_ops}
-* @{tf.contrib.graph_editor.select_ops}
-* @{tf.contrib.graph_editor.select_ts}
-* @{tf.contrib.graph_editor.select_ops_and_ts}
+* `tf.contrib.graph_editor.filter_ts`
+* `tf.contrib.graph_editor.filter_ts_from_regex`
+* `tf.contrib.graph_editor.filter_ops`
+* `tf.contrib.graph_editor.filter_ops_from_regex`
+* `tf.contrib.graph_editor.get_name_scope_ops`
+* `tf.contrib.graph_editor.check_cios`
+* `tf.contrib.graph_editor.get_ops_ios`
+* `tf.contrib.graph_editor.compute_boundary_ts`
+* `tf.contrib.graph_editor.get_within_boundary_ops`
+* `tf.contrib.graph_editor.get_forward_walk_ops`
+* `tf.contrib.graph_editor.get_backward_walk_ops`
+* `tf.contrib.graph_editor.get_walks_intersection_ops`
+* `tf.contrib.graph_editor.get_walks_union_ops`
+* `tf.contrib.graph_editor.select_ops`
+* `tf.contrib.graph_editor.select_ts`
+* `tf.contrib.graph_editor.select_ops_and_ts`
## Module: subgraph
-* @{tf.contrib.graph_editor.SubGraphView}
-* @{tf.contrib.graph_editor.make_view}
-* @{tf.contrib.graph_editor.make_view_from_scope}
+* `tf.contrib.graph_editor.SubGraphView`
+* `tf.contrib.graph_editor.make_view`
+* `tf.contrib.graph_editor.make_view_from_scope`
## Module: reroute
-* @{tf.contrib.graph_editor.swap_ts}
-* @{tf.contrib.graph_editor.reroute_ts}
-* @{tf.contrib.graph_editor.swap_inputs}
-* @{tf.contrib.graph_editor.reroute_inputs}
-* @{tf.contrib.graph_editor.swap_outputs}
-* @{tf.contrib.graph_editor.reroute_outputs}
-* @{tf.contrib.graph_editor.swap_ios}
-* @{tf.contrib.graph_editor.reroute_ios}
-* @{tf.contrib.graph_editor.remove_control_inputs}
-* @{tf.contrib.graph_editor.add_control_inputs}
+* `tf.contrib.graph_editor.swap_ts`
+* `tf.contrib.graph_editor.reroute_ts`
+* `tf.contrib.graph_editor.swap_inputs`
+* `tf.contrib.graph_editor.reroute_inputs`
+* `tf.contrib.graph_editor.swap_outputs`
+* `tf.contrib.graph_editor.reroute_outputs`
+* `tf.contrib.graph_editor.swap_ios`
+* `tf.contrib.graph_editor.reroute_ios`
+* `tf.contrib.graph_editor.remove_control_inputs`
+* `tf.contrib.graph_editor.add_control_inputs`
## Module: edit
-* @{tf.contrib.graph_editor.detach_control_inputs}
-* @{tf.contrib.graph_editor.detach_control_outputs}
-* @{tf.contrib.graph_editor.detach_inputs}
-* @{tf.contrib.graph_editor.detach_outputs}
-* @{tf.contrib.graph_editor.detach}
-* @{tf.contrib.graph_editor.connect}
-* @{tf.contrib.graph_editor.bypass}
+* `tf.contrib.graph_editor.detach_control_inputs`
+* `tf.contrib.graph_editor.detach_control_outputs`
+* `tf.contrib.graph_editor.detach_inputs`
+* `tf.contrib.graph_editor.detach_outputs`
+* `tf.contrib.graph_editor.detach`
+* `tf.contrib.graph_editor.connect`
+* `tf.contrib.graph_editor.bypass`
## Module: transform
-* @{tf.contrib.graph_editor.replace_t_with_placeholder_handler}
-* @{tf.contrib.graph_editor.keep_t_if_possible_handler}
-* @{tf.contrib.graph_editor.assign_renamed_collections_handler}
-* @{tf.contrib.graph_editor.transform_op_if_inside_handler}
-* @{tf.contrib.graph_editor.copy_op_handler}
-* @{tf.contrib.graph_editor.Transformer}
-* @{tf.contrib.graph_editor.copy}
-* @{tf.contrib.graph_editor.copy_with_input_replacements}
-* @{tf.contrib.graph_editor.graph_replace}
+* `tf.contrib.graph_editor.replace_t_with_placeholder_handler`
+* `tf.contrib.graph_editor.keep_t_if_possible_handler`
+* `tf.contrib.graph_editor.assign_renamed_collections_handler`
+* `tf.contrib.graph_editor.transform_op_if_inside_handler`
+* `tf.contrib.graph_editor.copy_op_handler`
+* `tf.contrib.graph_editor.Transformer`
+* `tf.contrib.graph_editor.copy`
+* `tf.contrib.graph_editor.copy_with_input_replacements`
+* `tf.contrib.graph_editor.graph_replace`
## Useful aliases
-* @{tf.contrib.graph_editor.ph}
-* @{tf.contrib.graph_editor.sgv}
-* @{tf.contrib.graph_editor.sgv_scope}
+* `tf.contrib.graph_editor.ph`
+* `tf.contrib.graph_editor.sgv`
+* `tf.contrib.graph_editor.sgv_scope`
diff --git a/tensorflow/docs_src/api_guides/python/contrib.integrate.md b/tensorflow/docs_src/api_guides/python/contrib.integrate.md
index e95b5a2e68..a70d202ab5 100644
--- a/tensorflow/docs_src/api_guides/python/contrib.integrate.md
+++ b/tensorflow/docs_src/api_guides/python/contrib.integrate.md
@@ -38,4 +38,4 @@ plt.plot(x, z)
## Ops
-* @{tf.contrib.integrate.odeint}
+* `tf.contrib.integrate.odeint`
diff --git a/tensorflow/docs_src/api_guides/python/contrib.layers.md b/tensorflow/docs_src/api_guides/python/contrib.layers.md
index b85db4b96f..4c176a129c 100644
--- a/tensorflow/docs_src/api_guides/python/contrib.layers.md
+++ b/tensorflow/docs_src/api_guides/python/contrib.layers.md
@@ -9,29 +9,29 @@ This package provides several ops that take care of creating variables that are
used internally in a consistent way and provide the building blocks for many
common machine learning algorithms.
-* @{tf.contrib.layers.avg_pool2d}
-* @{tf.contrib.layers.batch_norm}
-* @{tf.contrib.layers.convolution2d}
-* @{tf.contrib.layers.conv2d_in_plane}
-* @{tf.contrib.layers.convolution2d_in_plane}
-* @{tf.nn.conv2d_transpose}
-* @{tf.contrib.layers.convolution2d_transpose}
-* @{tf.nn.dropout}
-* @{tf.contrib.layers.flatten}
-* @{tf.contrib.layers.fully_connected}
-* @{tf.contrib.layers.layer_norm}
-* @{tf.contrib.layers.max_pool2d}
-* @{tf.contrib.layers.one_hot_encoding}
-* @{tf.nn.relu}
-* @{tf.nn.relu6}
-* @{tf.contrib.layers.repeat}
-* @{tf.contrib.layers.safe_embedding_lookup_sparse}
-* @{tf.nn.separable_conv2d}
-* @{tf.contrib.layers.separable_convolution2d}
-* @{tf.nn.softmax}
-* @{tf.stack}
-* @{tf.contrib.layers.unit_norm}
-* @{tf.contrib.layers.embed_sequence}
+* `tf.contrib.layers.avg_pool2d`
+* `tf.contrib.layers.batch_norm`
+* `tf.contrib.layers.convolution2d`
+* `tf.contrib.layers.conv2d_in_plane`
+* `tf.contrib.layers.convolution2d_in_plane`
+* `tf.nn.conv2d_transpose`
+* `tf.contrib.layers.convolution2d_transpose`
+* `tf.nn.dropout`
+* `tf.contrib.layers.flatten`
+* `tf.contrib.layers.fully_connected`
+* `tf.contrib.layers.layer_norm`
+* `tf.contrib.layers.max_pool2d`
+* `tf.contrib.layers.one_hot_encoding`
+* `tf.nn.relu`
+* `tf.nn.relu6`
+* `tf.contrib.layers.repeat`
+* `tf.contrib.layers.safe_embedding_lookup_sparse`
+* `tf.nn.separable_conv2d`
+* `tf.contrib.layers.separable_convolution2d`
+* `tf.nn.softmax`
+* `tf.stack`
+* `tf.contrib.layers.unit_norm`
+* `tf.contrib.layers.embed_sequence`
Aliases for `fully_connected` that set a default activation function are
available: `relu`, `relu6`, and `linear`.
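
A minimal sketch of the most commonly used of these ops (layer sizes are
illustrative): `fully_connected` applies `tf.nn.relu` by default, so an
output layer typically passes `activation_fn=None`.

```python
import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 784])
h = tf.contrib.layers.fully_connected(x, 256)  # ReLU activation by default
logits = tf.contrib.layers.fully_connected(h, 10, activation_fn=None)
```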
@@ -45,65 +45,65 @@ Regularization can help prevent overfitting. These have the signature
`fn(weights)`. The loss is typically added to
`tf.GraphKeys.REGULARIZATION_LOSSES`.
-* @{tf.contrib.layers.apply_regularization}
-* @{tf.contrib.layers.l1_regularizer}
-* @{tf.contrib.layers.l2_regularizer}
-* @{tf.contrib.layers.sum_regularizer}
+* `tf.contrib.layers.apply_regularization`
+* `tf.contrib.layers.l1_regularizer`
+* `tf.contrib.layers.l2_regularizer`
+* `tf.contrib.layers.sum_regularizer`
## Initializers
Initializers are used to initialize variables with sensible values given their
size, data type, and purpose.
-* @{tf.contrib.layers.xavier_initializer}
-* @{tf.contrib.layers.xavier_initializer_conv2d}
-* @{tf.contrib.layers.variance_scaling_initializer}
+* `tf.contrib.layers.xavier_initializer`
+* `tf.contrib.layers.xavier_initializer_conv2d`
+* `tf.contrib.layers.variance_scaling_initializer`
## Optimization
Optimize weights given a loss.
-* @{tf.contrib.layers.optimize_loss}
+* `tf.contrib.layers.optimize_loss`
## Summaries
Helper functions to summarize specific variables or ops.
-* @{tf.contrib.layers.summarize_activation}
-* @{tf.contrib.layers.summarize_tensor}
-* @{tf.contrib.layers.summarize_tensors}
-* @{tf.contrib.layers.summarize_collection}
+* `tf.contrib.layers.summarize_activation`
+* `tf.contrib.layers.summarize_tensor`
+* `tf.contrib.layers.summarize_tensors`
+* `tf.contrib.layers.summarize_collection`
The layers module defines convenience functions `summarize_variables`,
`summarize_weights` and `summarize_biases`, which set the `collection` argument
of `summarize_collection` to `VARIABLES`, `WEIGHTS` and `BIASES`, respectively.
-* @{tf.contrib.layers.summarize_activations}
+* `tf.contrib.layers.summarize_activations`
## Feature columns
Feature columns provide a mechanism to map data to a model.
-* @{tf.contrib.layers.bucketized_column}
-* @{tf.contrib.layers.check_feature_columns}
-* @{tf.contrib.layers.create_feature_spec_for_parsing}
-* @{tf.contrib.layers.crossed_column}
-* @{tf.contrib.layers.embedding_column}
-* @{tf.contrib.layers.scattered_embedding_column}
-* @{tf.contrib.layers.input_from_feature_columns}
-* @{tf.contrib.layers.joint_weighted_sum_from_feature_columns}
-* @{tf.contrib.layers.make_place_holder_tensors_for_base_features}
-* @{tf.contrib.layers.multi_class_target}
-* @{tf.contrib.layers.one_hot_column}
-* @{tf.contrib.layers.parse_feature_columns_from_examples}
-* @{tf.contrib.layers.parse_feature_columns_from_sequence_examples}
-* @{tf.contrib.layers.real_valued_column}
-* @{tf.contrib.layers.shared_embedding_columns}
-* @{tf.contrib.layers.sparse_column_with_hash_bucket}
-* @{tf.contrib.layers.sparse_column_with_integerized_feature}
-* @{tf.contrib.layers.sparse_column_with_keys}
-* @{tf.contrib.layers.sparse_column_with_vocabulary_file}
-* @{tf.contrib.layers.weighted_sparse_column}
-* @{tf.contrib.layers.weighted_sum_from_feature_columns}
-* @{tf.contrib.layers.infer_real_valued_columns}
-* @{tf.contrib.layers.sequence_input_from_feature_columns}
+* `tf.contrib.layers.bucketized_column`
+* `tf.contrib.layers.check_feature_columns`
+* `tf.contrib.layers.create_feature_spec_for_parsing`
+* `tf.contrib.layers.crossed_column`
+* `tf.contrib.layers.embedding_column`
+* `tf.contrib.layers.scattered_embedding_column`
+* `tf.contrib.layers.input_from_feature_columns`
+* `tf.contrib.layers.joint_weighted_sum_from_feature_columns`
+* `tf.contrib.layers.make_place_holder_tensors_for_base_features`
+* `tf.contrib.layers.multi_class_target`
+* `tf.contrib.layers.one_hot_column`
+* `tf.contrib.layers.parse_feature_columns_from_examples`
+* `tf.contrib.layers.parse_feature_columns_from_sequence_examples`
+* `tf.contrib.layers.real_valued_column`
+* `tf.contrib.layers.shared_embedding_columns`
+* `tf.contrib.layers.sparse_column_with_hash_bucket`
+* `tf.contrib.layers.sparse_column_with_integerized_feature`
+* `tf.contrib.layers.sparse_column_with_keys`
+* `tf.contrib.layers.sparse_column_with_vocabulary_file`
+* `tf.contrib.layers.weighted_sparse_column`
+* `tf.contrib.layers.weighted_sum_from_feature_columns`
+* `tf.contrib.layers.infer_real_valued_columns`
+* `tf.contrib.layers.sequence_input_from_feature_columns`
diff --git a/tensorflow/docs_src/api_guides/python/contrib.learn.md b/tensorflow/docs_src/api_guides/python/contrib.learn.md
index 03838dc5ae..635849ead5 100644
--- a/tensorflow/docs_src/api_guides/python/contrib.learn.md
+++ b/tensorflow/docs_src/api_guides/python/contrib.learn.md
@@ -7,57 +7,57 @@ High level API for learning with TensorFlow.
Train and evaluate TensorFlow models.
-* @{tf.contrib.learn.BaseEstimator}
-* @{tf.contrib.learn.Estimator}
-* @{tf.contrib.learn.Trainable}
-* @{tf.contrib.learn.Evaluable}
-* @{tf.contrib.learn.KMeansClustering}
-* @{tf.contrib.learn.ModeKeys}
-* @{tf.contrib.learn.ModelFnOps}
-* @{tf.contrib.learn.MetricSpec}
-* @{tf.contrib.learn.PredictionKey}
-* @{tf.contrib.learn.DNNClassifier}
-* @{tf.contrib.learn.DNNRegressor}
-* @{tf.contrib.learn.DNNLinearCombinedRegressor}
-* @{tf.contrib.learn.DNNLinearCombinedClassifier}
-* @{tf.contrib.learn.LinearClassifier}
-* @{tf.contrib.learn.LinearRegressor}
-* @{tf.contrib.learn.LogisticRegressor}
+* `tf.contrib.learn.BaseEstimator`
+* `tf.contrib.learn.Estimator`
+* `tf.contrib.learn.Trainable`
+* `tf.contrib.learn.Evaluable`
+* `tf.contrib.learn.KMeansClustering`
+* `tf.contrib.learn.ModeKeys`
+* `tf.contrib.learn.ModelFnOps`
+* `tf.contrib.learn.MetricSpec`
+* `tf.contrib.learn.PredictionKey`
+* `tf.contrib.learn.DNNClassifier`
+* `tf.contrib.learn.DNNRegressor`
+* `tf.contrib.learn.DNNLinearCombinedRegressor`
+* `tf.contrib.learn.DNNLinearCombinedClassifier`
+* `tf.contrib.learn.LinearClassifier`
+* `tf.contrib.learn.LinearRegressor`
+* `tf.contrib.learn.LogisticRegressor`
## Distributed training utilities
-* @{tf.contrib.learn.Experiment}
-* @{tf.contrib.learn.ExportStrategy}
-* @{tf.contrib.learn.TaskType}
+* `tf.contrib.learn.Experiment`
+* `tf.contrib.learn.ExportStrategy`
+* `tf.contrib.learn.TaskType`
## Graph actions
Perform various training, evaluation, and inference actions on a graph.
-* @{tf.train.NanLossDuringTrainingError}
-* @{tf.contrib.learn.RunConfig}
-* @{tf.contrib.learn.evaluate}
-* @{tf.contrib.learn.infer}
-* @{tf.contrib.learn.run_feeds}
-* @{tf.contrib.learn.run_n}
-* @{tf.contrib.learn.train}
+* `tf.train.NanLossDuringTrainingError`
+* `tf.contrib.learn.RunConfig`
+* `tf.contrib.learn.evaluate`
+* `tf.contrib.learn.infer`
+* `tf.contrib.learn.run_feeds`
+* `tf.contrib.learn.run_n`
+* `tf.contrib.learn.train`
## Input processing
Queue and read batched input data.
-* @{tf.contrib.learn.extract_dask_data}
-* @{tf.contrib.learn.extract_dask_labels}
-* @{tf.contrib.learn.extract_pandas_data}
-* @{tf.contrib.learn.extract_pandas_labels}
-* @{tf.contrib.learn.extract_pandas_matrix}
-* @{tf.contrib.learn.infer_real_valued_columns_from_input}
-* @{tf.contrib.learn.infer_real_valued_columns_from_input_fn}
-* @{tf.contrib.learn.read_batch_examples}
-* @{tf.contrib.learn.read_batch_features}
-* @{tf.contrib.learn.read_batch_record_features}
+* `tf.contrib.learn.extract_dask_data`
+* `tf.contrib.learn.extract_dask_labels`
+* `tf.contrib.learn.extract_pandas_data`
+* `tf.contrib.learn.extract_pandas_labels`
+* `tf.contrib.learn.extract_pandas_matrix`
+* `tf.contrib.learn.infer_real_valued_columns_from_input`
+* `tf.contrib.learn.infer_real_valued_columns_from_input_fn`
+* `tf.contrib.learn.read_batch_examples`
+* `tf.contrib.learn.read_batch_features`
+* `tf.contrib.learn.read_batch_record_features`
## Export utilities
-* @{tf.contrib.learn.build_parsing_serving_input_fn}
-* @{tf.contrib.learn.ProblemType}
+* `tf.contrib.learn.build_parsing_serving_input_fn`
+* `tf.contrib.learn.ProblemType`
diff --git a/tensorflow/docs_src/api_guides/python/contrib.linalg.md b/tensorflow/docs_src/api_guides/python/contrib.linalg.md
index c0cb2b195c..3055449dc2 100644
--- a/tensorflow/docs_src/api_guides/python/contrib.linalg.md
+++ b/tensorflow/docs_src/api_guides/python/contrib.linalg.md
@@ -14,17 +14,17 @@ Subclasses of `LinearOperator` provide access to common methods on a
### Base class
-* @{tf.contrib.linalg.LinearOperator}
+* `tf.contrib.linalg.LinearOperator`
### Individual operators
-* @{tf.contrib.linalg.LinearOperatorDiag}
-* @{tf.contrib.linalg.LinearOperatorIdentity}
-* @{tf.contrib.linalg.LinearOperatorScaledIdentity}
-* @{tf.contrib.linalg.LinearOperatorFullMatrix}
-* @{tf.contrib.linalg.LinearOperatorLowerTriangular}
-* @{tf.contrib.linalg.LinearOperatorLowRankUpdate}
+* `tf.contrib.linalg.LinearOperatorDiag`
+* `tf.contrib.linalg.LinearOperatorIdentity`
+* `tf.contrib.linalg.LinearOperatorScaledIdentity`
+* `tf.contrib.linalg.LinearOperatorFullMatrix`
+* `tf.contrib.linalg.LinearOperatorLowerTriangular`
+* `tf.contrib.linalg.LinearOperatorLowRankUpdate`
### Transformations and Combinations of operators
-* @{tf.contrib.linalg.LinearOperatorComposition}
+* `tf.contrib.linalg.LinearOperatorComposition`
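
As a sketch of the common methods mentioned above (values are illustrative):
a diagonal operator behaves like a dense matrix without ever materializing
it, and exploits its structure when computing.

```python
import tensorflow as tf

operator = tf.contrib.linalg.LinearOperatorDiag([1.0, 2.0])
x = tf.constant([[1.0], [3.0]])
y = operator.matmul(x)       # acts like tf.matmul(tf.diag([1., 2.]), x)
d = operator.determinant()   # 2.0, computed from the diagonal entries
```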
diff --git a/tensorflow/docs_src/api_guides/python/contrib.losses.md b/tensorflow/docs_src/api_guides/python/contrib.losses.md
index 8b7442216c..8787454af6 100644
--- a/tensorflow/docs_src/api_guides/python/contrib.losses.md
+++ b/tensorflow/docs_src/api_guides/python/contrib.losses.md
@@ -2,7 +2,7 @@
## Deprecated
-This module is deprecated. Instructions for updating: Use @{tf.losses} instead.
+This module is deprecated. Instructions for updating: Use `tf.losses` instead.
## Loss operations for use in neural networks.
@@ -107,19 +107,19 @@ weighted average over the individual prediction errors:
loss = tf.contrib.losses.mean_squared_error(predictions, depths, weight)
```
-* @{tf.contrib.losses.absolute_difference}
-* @{tf.contrib.losses.add_loss}
-* @{tf.contrib.losses.hinge_loss}
-* @{tf.contrib.losses.compute_weighted_loss}
-* @{tf.contrib.losses.cosine_distance}
-* @{tf.contrib.losses.get_losses}
-* @{tf.contrib.losses.get_regularization_losses}
-* @{tf.contrib.losses.get_total_loss}
-* @{tf.contrib.losses.log_loss}
-* @{tf.contrib.losses.mean_pairwise_squared_error}
-* @{tf.contrib.losses.mean_squared_error}
-* @{tf.contrib.losses.sigmoid_cross_entropy}
-* @{tf.contrib.losses.softmax_cross_entropy}
-* @{tf.contrib.losses.sparse_softmax_cross_entropy}
+* `tf.contrib.losses.absolute_difference`
+* `tf.contrib.losses.add_loss`
+* `tf.contrib.losses.hinge_loss`
+* `tf.contrib.losses.compute_weighted_loss`
+* `tf.contrib.losses.cosine_distance`
+* `tf.contrib.losses.get_losses`
+* `tf.contrib.losses.get_regularization_losses`
+* `tf.contrib.losses.get_total_loss`
+* `tf.contrib.losses.log_loss`
+* `tf.contrib.losses.mean_pairwise_squared_error`
+* `tf.contrib.losses.mean_squared_error`
+* `tf.contrib.losses.sigmoid_cross_entropy`
+* `tf.contrib.losses.softmax_cross_entropy`
+* `tf.contrib.losses.sparse_softmax_cross_entropy`
diff --git a/tensorflow/docs_src/api_guides/python/contrib.metrics.md b/tensorflow/docs_src/api_guides/python/contrib.metrics.md
index 1eb9cf417a..de6346ca80 100644
--- a/tensorflow/docs_src/api_guides/python/contrib.metrics.md
+++ b/tensorflow/docs_src/api_guides/python/contrib.metrics.md
@@ -86,48 +86,48 @@ labels and predictions tensors and results in a weighted average of the metric.
## Metric `Ops`
-* @{tf.contrib.metrics.streaming_accuracy}
-* @{tf.contrib.metrics.streaming_mean}
-* @{tf.contrib.metrics.streaming_recall}
-* @{tf.contrib.metrics.streaming_recall_at_thresholds}
-* @{tf.contrib.metrics.streaming_precision}
-* @{tf.contrib.metrics.streaming_precision_at_thresholds}
-* @{tf.contrib.metrics.streaming_auc}
-* @{tf.contrib.metrics.streaming_recall_at_k}
-* @{tf.contrib.metrics.streaming_mean_absolute_error}
-* @{tf.contrib.metrics.streaming_mean_iou}
-* @{tf.contrib.metrics.streaming_mean_relative_error}
-* @{tf.contrib.metrics.streaming_mean_squared_error}
-* @{tf.contrib.metrics.streaming_mean_tensor}
-* @{tf.contrib.metrics.streaming_root_mean_squared_error}
-* @{tf.contrib.metrics.streaming_covariance}
-* @{tf.contrib.metrics.streaming_pearson_correlation}
-* @{tf.contrib.metrics.streaming_mean_cosine_distance}
-* @{tf.contrib.metrics.streaming_percentage_less}
-* @{tf.contrib.metrics.streaming_sensitivity_at_specificity}
-* @{tf.contrib.metrics.streaming_sparse_average_precision_at_k}
-* @{tf.contrib.metrics.streaming_sparse_precision_at_k}
-* @{tf.contrib.metrics.streaming_sparse_precision_at_top_k}
-* @{tf.contrib.metrics.streaming_sparse_recall_at_k}
-* @{tf.contrib.metrics.streaming_specificity_at_sensitivity}
-* @{tf.contrib.metrics.streaming_concat}
-* @{tf.contrib.metrics.streaming_false_negatives}
-* @{tf.contrib.metrics.streaming_false_negatives_at_thresholds}
-* @{tf.contrib.metrics.streaming_false_positives}
-* @{tf.contrib.metrics.streaming_false_positives_at_thresholds}
-* @{tf.contrib.metrics.streaming_true_negatives}
-* @{tf.contrib.metrics.streaming_true_negatives_at_thresholds}
-* @{tf.contrib.metrics.streaming_true_positives}
-* @{tf.contrib.metrics.streaming_true_positives_at_thresholds}
-* @{tf.contrib.metrics.auc_using_histogram}
-* @{tf.contrib.metrics.accuracy}
-* @{tf.contrib.metrics.aggregate_metrics}
-* @{tf.contrib.metrics.aggregate_metric_map}
-* @{tf.contrib.metrics.confusion_matrix}
+* `tf.contrib.metrics.streaming_accuracy`
+* `tf.contrib.metrics.streaming_mean`
+* `tf.contrib.metrics.streaming_recall`
+* `tf.contrib.metrics.streaming_recall_at_thresholds`
+* `tf.contrib.metrics.streaming_precision`
+* `tf.contrib.metrics.streaming_precision_at_thresholds`
+* `tf.contrib.metrics.streaming_auc`
+* `tf.contrib.metrics.streaming_recall_at_k`
+* `tf.contrib.metrics.streaming_mean_absolute_error`
+* `tf.contrib.metrics.streaming_mean_iou`
+* `tf.contrib.metrics.streaming_mean_relative_error`
+* `tf.contrib.metrics.streaming_mean_squared_error`
+* `tf.contrib.metrics.streaming_mean_tensor`
+* `tf.contrib.metrics.streaming_root_mean_squared_error`
+* `tf.contrib.metrics.streaming_covariance`
+* `tf.contrib.metrics.streaming_pearson_correlation`
+* `tf.contrib.metrics.streaming_mean_cosine_distance`
+* `tf.contrib.metrics.streaming_percentage_less`
+* `tf.contrib.metrics.streaming_sensitivity_at_specificity`
+* `tf.contrib.metrics.streaming_sparse_average_precision_at_k`
+* `tf.contrib.metrics.streaming_sparse_precision_at_k`
+* `tf.contrib.metrics.streaming_sparse_precision_at_top_k`
+* `tf.contrib.metrics.streaming_sparse_recall_at_k`
+* `tf.contrib.metrics.streaming_specificity_at_sensitivity`
+* `tf.contrib.metrics.streaming_concat`
+* `tf.contrib.metrics.streaming_false_negatives`
+* `tf.contrib.metrics.streaming_false_negatives_at_thresholds`
+* `tf.contrib.metrics.streaming_false_positives`
+* `tf.contrib.metrics.streaming_false_positives_at_thresholds`
+* `tf.contrib.metrics.streaming_true_negatives`
+* `tf.contrib.metrics.streaming_true_negatives_at_thresholds`
+* `tf.contrib.metrics.streaming_true_positives`
+* `tf.contrib.metrics.streaming_true_positives_at_thresholds`
+* `tf.contrib.metrics.auc_using_histogram`
+* `tf.contrib.metrics.accuracy`
+* `tf.contrib.metrics.aggregate_metrics`
+* `tf.contrib.metrics.aggregate_metric_map`
+* `tf.contrib.metrics.confusion_matrix`
## Set `Ops`
-* @{tf.contrib.metrics.set_difference}
-* @{tf.contrib.metrics.set_intersection}
-* @{tf.contrib.metrics.set_size}
-* @{tf.contrib.metrics.set_union}
+* `tf.contrib.metrics.set_difference`
+* `tf.contrib.metrics.set_intersection`
+* `tf.contrib.metrics.set_size`
+* `tf.contrib.metrics.set_union`
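
A sketch of the streaming-metric pattern this guide describes (the data
values are made up): each metric op returns the running metric value plus an
`update_op` that folds in a new batch, with accumulator state kept in local
variables.

```python
import tensorflow as tf

predictions = tf.placeholder(tf.int64, [None])
labels = tf.placeholder(tf.int64, [None])
accuracy, update_op = tf.contrib.metrics.streaming_accuracy(
    predictions, labels)

with tf.Session() as sess:
    sess.run(tf.local_variables_initializer())  # metric state lives here
    for preds, labs in [([1, 0], [1, 1]), ([1, 1], [1, 1])]:
        sess.run(update_op, {predictions: preds, labels: labs})
    print(sess.run(accuracy))  # 0.75 accumulated across both batches
```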
diff --git a/tensorflow/docs_src/api_guides/python/contrib.rnn.md b/tensorflow/docs_src/api_guides/python/contrib.rnn.md
index d089b0616f..d265ab6925 100644
--- a/tensorflow/docs_src/api_guides/python/contrib.rnn.md
+++ b/tensorflow/docs_src/api_guides/python/contrib.rnn.md
@@ -5,49 +5,49 @@ Module for constructing RNN Cells and additional RNN operations.
## Base interface for all RNN Cells
-* @{tf.contrib.rnn.RNNCell}
+* `tf.contrib.rnn.RNNCell`
## Core RNN Cells for use with TensorFlow's core RNN methods
-* @{tf.contrib.rnn.BasicRNNCell}
-* @{tf.contrib.rnn.BasicLSTMCell}
-* @{tf.contrib.rnn.GRUCell}
-* @{tf.contrib.rnn.LSTMCell}
-* @{tf.contrib.rnn.LayerNormBasicLSTMCell}
+* `tf.contrib.rnn.BasicRNNCell`
+* `tf.contrib.rnn.BasicLSTMCell`
+* `tf.contrib.rnn.GRUCell`
+* `tf.contrib.rnn.LSTMCell`
+* `tf.contrib.rnn.LayerNormBasicLSTMCell`
## Classes storing split `RNNCell` state
-* @{tf.contrib.rnn.LSTMStateTuple}
+* `tf.contrib.rnn.LSTMStateTuple`
## Core RNN Cell wrappers (RNNCells that wrap other RNNCells)
-* @{tf.contrib.rnn.MultiRNNCell}
-* @{tf.contrib.rnn.LSTMBlockWrapper}
-* @{tf.contrib.rnn.DropoutWrapper}
-* @{tf.contrib.rnn.EmbeddingWrapper}
-* @{tf.contrib.rnn.InputProjectionWrapper}
-* @{tf.contrib.rnn.OutputProjectionWrapper}
-* @{tf.contrib.rnn.DeviceWrapper}
-* @{tf.contrib.rnn.ResidualWrapper}
+* `tf.contrib.rnn.MultiRNNCell`
+* `tf.contrib.rnn.LSTMBlockWrapper`
+* `tf.contrib.rnn.DropoutWrapper`
+* `tf.contrib.rnn.EmbeddingWrapper`
+* `tf.contrib.rnn.InputProjectionWrapper`
+* `tf.contrib.rnn.OutputProjectionWrapper`
+* `tf.contrib.rnn.DeviceWrapper`
+* `tf.contrib.rnn.ResidualWrapper`
### Block RNNCells
-* @{tf.contrib.rnn.LSTMBlockCell}
-* @{tf.contrib.rnn.GRUBlockCell}
+* `tf.contrib.rnn.LSTMBlockCell`
+* `tf.contrib.rnn.GRUBlockCell`
### Fused RNNCells
-* @{tf.contrib.rnn.FusedRNNCell}
-* @{tf.contrib.rnn.FusedRNNCellAdaptor}
-* @{tf.contrib.rnn.TimeReversedFusedRNN}
-* @{tf.contrib.rnn.LSTMBlockFusedCell}
+* `tf.contrib.rnn.FusedRNNCell`
+* `tf.contrib.rnn.FusedRNNCellAdaptor`
+* `tf.contrib.rnn.TimeReversedFusedRNN`
+* `tf.contrib.rnn.LSTMBlockFusedCell`
### LSTM-like cells
-* @{tf.contrib.rnn.CoupledInputForgetGateLSTMCell}
-* @{tf.contrib.rnn.TimeFreqLSTMCell}
-* @{tf.contrib.rnn.GridLSTMCell}
+* `tf.contrib.rnn.CoupledInputForgetGateLSTMCell`
+* `tf.contrib.rnn.TimeFreqLSTMCell`
+* `tf.contrib.rnn.GridLSTMCell`
### RNNCell wrappers
-* @{tf.contrib.rnn.AttentionCellWrapper}
-* @{tf.contrib.rnn.CompiledWrapper}
+* `tf.contrib.rnn.AttentionCellWrapper`
+* `tf.contrib.rnn.CompiledWrapper`
## Recurrent Neural Networks
@@ -55,7 +55,7 @@ Module for constructing RNN Cells and additional RNN operations.
TensorFlow provides a number of methods for constructing Recurrent Neural
Networks.
-* @{tf.contrib.rnn.static_rnn}
-* @{tf.contrib.rnn.static_state_saving_rnn}
-* @{tf.contrib.rnn.static_bidirectional_rnn}
-* @{tf.contrib.rnn.stack_bidirectional_dynamic_rnn}
+* `tf.contrib.rnn.static_rnn`
+* `tf.contrib.rnn.static_state_saving_rnn`
+* `tf.contrib.rnn.static_bidirectional_rnn`
+* `tf.contrib.rnn.stack_bidirectional_dynamic_rnn`
diff --git a/tensorflow/docs_src/api_guides/python/contrib.seq2seq.md b/tensorflow/docs_src/api_guides/python/contrib.seq2seq.md
index 143919fd84..54f2fafc71 100644
--- a/tensorflow/docs_src/api_guides/python/contrib.seq2seq.md
+++ b/tensorflow/docs_src/api_guides/python/contrib.seq2seq.md
@@ -2,18 +2,18 @@
[TOC]
Module for constructing seq2seq models and dynamic decoding. Builds on top of
-libraries in @{tf.contrib.rnn}.
+libraries in `tf.contrib.rnn`.
This library is composed of two primary components:
-* New attention wrappers for @{tf.contrib.rnn.RNNCell} objects.
+* New attention wrappers for `tf.contrib.rnn.RNNCell` objects.
* A new object-oriented dynamic decoding framework.
## Attention
Attention wrappers are `RNNCell` objects that wrap other `RNNCell` objects and
implement attention. The form of attention is determined by a subclass of
-@{tf.contrib.seq2seq.AttentionMechanism}. These subclasses describe the form
+`tf.contrib.seq2seq.AttentionMechanism`. These subclasses describe the form
of attention (e.g. additive vs. multiplicative) to use when creating the
wrapper. An instance of an `AttentionMechanism` is constructed with a
`memory` tensor, from which lookup keys and values tensors are created.
@@ -22,9 +22,9 @@ wrapper. An instance of an `AttentionMechanism` is constructed with a
The two basic attention mechanisms are:
-* @{tf.contrib.seq2seq.BahdanauAttention} (additive attention,
+* `tf.contrib.seq2seq.BahdanauAttention` (additive attention,
[ref.](https://arxiv.org/abs/1409.0473))
-* @{tf.contrib.seq2seq.LuongAttention} (multiplicative attention,
+* `tf.contrib.seq2seq.LuongAttention` (multiplicative attention,
[ref.](https://arxiv.org/abs/1508.04025))
The `memory` tensor passed to the attention mechanism's constructor is expected to
@@ -41,7 +41,7 @@ depth.
### Attention Wrappers
-The basic attention wrapper is @{tf.contrib.seq2seq.AttentionWrapper}.
+The basic attention wrapper is `tf.contrib.seq2seq.AttentionWrapper`.
This wrapper accepts an `RNNCell` instance, an instance of `AttentionMechanism`,
and an attention depth parameter (`attention_size`), as well as several
optional arguments that allow one to customize intermediate calculations.
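
A sketch of wiring these pieces together (all sizes and placeholders are
illustrative assumptions; note that in recent releases the depth argument is
spelled `attention_layer_size`):

```python
import tensorflow as tf

num_units = 128
memory = tf.placeholder(tf.float32, [None, None, num_units])  # encoder output
memory_lengths = tf.placeholder(tf.int32, [None])

attention = tf.contrib.seq2seq.LuongAttention(
    num_units, memory, memory_sequence_length=memory_lengths)
cell = tf.contrib.rnn.LSTMCell(num_units)
attn_cell = tf.contrib.seq2seq.AttentionWrapper(
    cell, attention, attention_layer_size=num_units)
```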
@@ -120,19 +120,19 @@ outputs, _ = tf.contrib.seq2seq.dynamic_decode(
### Decoder base class and functions
-* @{tf.contrib.seq2seq.Decoder}
-* @{tf.contrib.seq2seq.dynamic_decode}
+* `tf.contrib.seq2seq.Decoder`
+* `tf.contrib.seq2seq.dynamic_decode`
### Basic Decoder
-* @{tf.contrib.seq2seq.BasicDecoderOutput}
-* @{tf.contrib.seq2seq.BasicDecoder}
+* `tf.contrib.seq2seq.BasicDecoderOutput`
+* `tf.contrib.seq2seq.BasicDecoder`
### Decoder Helpers
-* @{tf.contrib.seq2seq.Helper}
-* @{tf.contrib.seq2seq.CustomHelper}
-* @{tf.contrib.seq2seq.GreedyEmbeddingHelper}
-* @{tf.contrib.seq2seq.ScheduledEmbeddingTrainingHelper}
-* @{tf.contrib.seq2seq.ScheduledOutputTrainingHelper}
-* @{tf.contrib.seq2seq.TrainingHelper}
+* `tf.contrib.seq2seq.Helper`
+* `tf.contrib.seq2seq.CustomHelper`
+* `tf.contrib.seq2seq.GreedyEmbeddingHelper`
+* `tf.contrib.seq2seq.ScheduledEmbeddingTrainingHelper`
+* `tf.contrib.seq2seq.ScheduledOutputTrainingHelper`
+* `tf.contrib.seq2seq.TrainingHelper`
diff --git a/tensorflow/docs_src/api_guides/python/contrib.signal.md b/tensorflow/docs_src/api_guides/python/contrib.signal.md
index 0f7690f80a..66df561084 100644
--- a/tensorflow/docs_src/api_guides/python/contrib.signal.md
+++ b/tensorflow/docs_src/api_guides/python/contrib.signal.md
@@ -1,7 +1,7 @@
# Signal Processing (contrib)
[TOC]
-@{tf.contrib.signal} is a module for signal processing primitives. All
+`tf.contrib.signal` is a module for signal processing primitives. All
operations have GPU support and are differentiable. This module is especially
helpful for building TensorFlow models that process or generate audio, though
the techniques are useful in many domains.
@@ -10,7 +10,7 @@ the techniques are useful in many domains.
When dealing with variable length signals (e.g. audio) it is common to "frame"
them into multiple fixed-length windows. These windows can overlap if the 'step'
-of the frame is less than the frame length. @{tf.contrib.signal.frame} does
+of the frame is less than the frame length. `tf.contrib.signal.frame` does
exactly this. For example:
```python
@@ -24,7 +24,7 @@ signals = tf.placeholder(tf.float32, [None, None])
frames = tf.contrib.signal.frame(signals, frame_length=128, frame_step=32)
```
-The `axis` parameter to @{tf.contrib.signal.frame} allows you to frame tensors
+The `axis` parameter to `tf.contrib.signal.frame` allows you to frame tensors
with inner structure (e.g. a spectrogram):
```python
@@ -42,7 +42,7 @@ spectrogram_patches = tf.contrib.signal.frame(
## Reconstructing framed sequences and applying a tapering window
-@{tf.contrib.signal.overlap_and_add} can be used to reconstruct a signal from a
+`tf.contrib.signal.overlap_and_add` can be used to reconstruct a signal from a
framed representation. For example, the following code reconstructs the signal
produced in the preceding example:
@@ -58,7 +58,7 @@ the resulting reconstruction will have a greater magnitude than the original
window function satisfies the Constant Overlap-Add (COLA) property for the given
frame step, then it will recover the original `signals`.
-@{tf.contrib.signal.hamming_window} and @{tf.contrib.signal.hann_window} both
+`tf.contrib.signal.hamming_window` and `tf.contrib.signal.hann_window` both
satisfy the COLA property for a 75% overlap.
```python
@@ -74,7 +74,7 @@ reconstructed_signals = tf.contrib.signal.overlap_and_add(
A spectrogram is a time-frequency decomposition of a signal that indicates its
frequency content over time. The most common approach to computing spectrograms
is to take the magnitude of the [Short-time Fourier Transform][stft] (STFT),
-which @{tf.contrib.signal.stft} can compute as follows:
+which `tf.contrib.signal.stft` can compute as follows:
```python
# A batch of float32 time-domain signals in the range [-1, 1] with shape
@@ -121,7 +121,7 @@ When working with spectral representations of audio, the [mel scale][mel] is a
common reweighting of the frequency dimension, which results in a
lower-dimensional and more perceptually-relevant representation of the audio.
-@{tf.contrib.signal.linear_to_mel_weight_matrix} produces a matrix you can use
+`tf.contrib.signal.linear_to_mel_weight_matrix` produces a matrix you can use
to convert a spectrogram to the mel scale.
```python
@@ -156,7 +156,7 @@ log_mel_spectrograms = tf.log(mel_spectrograms + log_offset)
## Computing Mel-Frequency Cepstral Coefficients (MFCCs)
-Call @{tf.contrib.signal.mfccs_from_log_mel_spectrograms} to compute
+Call `tf.contrib.signal.mfccs_from_log_mel_spectrograms` to compute
[MFCCs][mfcc] from log-magnitude, mel-scale spectrograms (as computed in the
preceding example):
diff --git a/tensorflow/docs_src/api_guides/python/contrib.staging.md b/tensorflow/docs_src/api_guides/python/contrib.staging.md
index b0ac548342..de143a7bd3 100644
--- a/tensorflow/docs_src/api_guides/python/contrib.staging.md
+++ b/tensorflow/docs_src/api_guides/python/contrib.staging.md
@@ -3,4 +3,4 @@
This library contains utilities for adding pipelining to a model.
-* @{tf.contrib.staging.StagingArea}
+* `tf.contrib.staging.StagingArea`
diff --git a/tensorflow/docs_src/api_guides/python/contrib.training.md b/tensorflow/docs_src/api_guides/python/contrib.training.md
index 87395d930b..068efdc829 100644
--- a/tensorflow/docs_src/api_guides/python/contrib.training.md
+++ b/tensorflow/docs_src/api_guides/python/contrib.training.md
@@ -5,46 +5,46 @@ Training and input utilities.
## Splitting sequence inputs into minibatches with state saving
-Use @{tf.contrib.training.SequenceQueueingStateSaver} or
-its wrapper @{tf.contrib.training.batch_sequences_with_states} if
+Use `tf.contrib.training.SequenceQueueingStateSaver` or
+its wrapper `tf.contrib.training.batch_sequences_with_states` if
you have input data with a dynamic primary time/frame axis that
you'd like to convert into fixed-size segments during minibatching, while
storing state in the forward direction across the segments of each example.
-* @{tf.contrib.training.batch_sequences_with_states}
-* @{tf.contrib.training.NextQueuedSequenceBatch}
-* @{tf.contrib.training.SequenceQueueingStateSaver}
+* `tf.contrib.training.batch_sequences_with_states`
+* `tf.contrib.training.NextQueuedSequenceBatch`
+* `tf.contrib.training.SequenceQueueingStateSaver`
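+
+A hedged sketch of the wrapper (hypothetical names; `key` and `tokens` are
+assumed to come from a reader):
+
+```python
+# Split variable-length sequences into 20-step segments, carrying state
+# forward across segments of the same example.
+batch = tf.contrib.training.batch_sequences_with_states(
+    input_key=key,
+    input_sequences={"tokens": tokens},
+    input_context={},
+    input_length=tf.shape(tokens)[0],
+    initial_states={"state": tf.zeros([256], tf.float32)},
+    num_unroll=20,
+    batch_size=32)
+segment = batch.sequences["tokens"]  # [32, 20, ...]
+prev_state = batch.state("state")    # state saved from the previous segment
+```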
## Online data resampling
To resample data with replacement on a per-example basis, use
-@{tf.contrib.training.rejection_sample} or
-@{tf.contrib.training.resample_at_rate}. For `rejection_sample`, provide
+`tf.contrib.training.rejection_sample` or
+`tf.contrib.training.resample_at_rate`. For `rejection_sample`, provide
a boolean Tensor describing whether to accept or reject. Resulting batch sizes
are always the same. For `resample_at_rate`, provide the desired rate for each
example. Resulting batch sizes may vary. If you wish to specify relative
-rates, rather than absolute ones, use @{tf.contrib.training.weighted_resample}
+rates, rather than absolute ones, use `tf.contrib.training.weighted_resample`
(which also returns the actual resampling rate used for each output example).
-Use @{tf.contrib.training.stratified_sample} to resample without replacement
+Use `tf.contrib.training.stratified_sample` to resample without replacement
from the data to achieve a desired mix of class proportions that the TensorFlow
graph sees. For instance, if you have a binary classification dataset that is
99.9% class 1, a common approach is to resample from the data so that the
classes are more balanced.
-* @{tf.contrib.training.rejection_sample}
-* @{tf.contrib.training.resample_at_rate}
-* @{tf.contrib.training.stratified_sample}
-* @{tf.contrib.training.weighted_resample}
+* `tf.contrib.training.rejection_sample`
+* `tf.contrib.training.resample_at_rate`
+* `tf.contrib.training.stratified_sample`
+* `tf.contrib.training.weighted_resample`
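+
+For instance, a hedged sketch of `stratified_sample` (a per-example `features`
+tensor and integer `label` are assumed to come from a reader):
+
+```python
+# Draw 50/50 class batches from a stream that is 99.9% class 1.
+[data_batch], label_batch = tf.contrib.training.stratified_sample(
+    [features], label, target_probs=[0.5, 0.5], batch_size=32)
+```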
## Bucketing
-Use @{tf.contrib.training.bucket} or
-@{tf.contrib.training.bucket_by_sequence_length} to stratify
+Use `tf.contrib.training.bucket` or
+`tf.contrib.training.bucket_by_sequence_length` to stratify
minibatches into groups ("buckets"). Use `bucket_by_sequence_length`
with the argument `dynamic_pad=True` to receive minibatches of similarly
sized sequences for efficient training via `dynamic_rnn`.
-* @{tf.contrib.training.bucket}
-* @{tf.contrib.training.bucket_by_sequence_length}
+* `tf.contrib.training.bucket`
+* `tf.contrib.training.bucket_by_sequence_length`
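+
+A hedged sketch of bucketing (a per-example `tokens` tensor is assumed):
+
+```python
+# Group examples into length buckets and pad within each bucket, so
+# dynamic_rnn wastes little time on padding steps.
+_, [padded_tokens] = tf.contrib.training.bucket_by_sequence_length(
+    input_length=tf.shape(tokens)[0],
+    tensors=[tokens],
+    batch_size=32,
+    bucket_boundaries=[10, 20, 40],
+    dynamic_pad=True)
+```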
diff --git a/tensorflow/docs_src/api_guides/python/contrib.util.md b/tensorflow/docs_src/api_guides/python/contrib.util.md
index 6bc120d43d..e5fd97e9f2 100644
--- a/tensorflow/docs_src/api_guides/python/contrib.util.md
+++ b/tensorflow/docs_src/api_guides/python/contrib.util.md
@@ -5,8 +5,8 @@ Utilities for dealing with Tensors.
## Miscellaneous Utility Functions
-* @{tf.contrib.util.constant_value}
-* @{tf.contrib.util.make_tensor_proto}
-* @{tf.contrib.util.make_ndarray}
-* @{tf.contrib.util.ops_used_by_graph_def}
-* @{tf.contrib.util.stripped_op_list_for_graph}
+* `tf.contrib.util.constant_value`
+* `tf.contrib.util.make_tensor_proto`
+* `tf.contrib.util.make_ndarray`
+* `tf.contrib.util.ops_used_by_graph_def`
+* `tf.contrib.util.stripped_op_list_for_graph`
diff --git a/tensorflow/docs_src/api_guides/python/control_flow_ops.md b/tensorflow/docs_src/api_guides/python/control_flow_ops.md
index 68ea96d3dc..42c86d9978 100644
--- a/tensorflow/docs_src/api_guides/python/control_flow_ops.md
+++ b/tensorflow/docs_src/api_guides/python/control_flow_ops.md
@@ -1,7 +1,7 @@
# Control Flow
Note: Functions taking `Tensor` arguments can also take anything accepted by
-@{tf.convert_to_tensor}.
+`tf.convert_to_tensor`.
[TOC]
@@ -10,48 +10,48 @@ Note: Functions taking `Tensor` arguments can also take anything accepted by
TensorFlow provides several operations and classes that you can use to control
the execution of operations and add conditional dependencies to your graph.
-* @{tf.identity}
-* @{tf.tuple}
-* @{tf.group}
-* @{tf.no_op}
-* @{tf.count_up_to}
-* @{tf.cond}
-* @{tf.case}
-* @{tf.while_loop}
+* `tf.identity`
+* `tf.tuple`
+* `tf.group`
+* `tf.no_op`
+* `tf.count_up_to`
+* `tf.cond`
+* `tf.case`
+* `tf.while_loop`
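+
+A minimal sketch of the two most common control-flow ops:
+
+```python
+x = tf.constant(4.0)
+# tf.cond picks one of two branches at run time.
+y = tf.cond(x > 0, lambda: tf.sqrt(x), lambda: tf.square(x))  # -> 2.0
+# tf.while_loop builds an in-graph loop; here it counts to 10.
+i = tf.while_loop(lambda i: i < 10, lambda i: i + 1, [tf.constant(0)])
+```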
## Logical Operators
TensorFlow provides several operations that you can use to add logical operators
to your graph.
-* @{tf.logical_and}
-* @{tf.logical_not}
-* @{tf.logical_or}
-* @{tf.logical_xor}
+* `tf.logical_and`
+* `tf.logical_not`
+* `tf.logical_or`
+* `tf.logical_xor`
## Comparison Operators
TensorFlow provides several operations that you can use to add comparison
operators to your graph.
-* @{tf.equal}
-* @{tf.not_equal}
-* @{tf.less}
-* @{tf.less_equal}
-* @{tf.greater}
-* @{tf.greater_equal}
-* @{tf.where}
+* `tf.equal`
+* `tf.not_equal`
+* `tf.less`
+* `tf.less_equal`
+* `tf.greater`
+* `tf.greater_equal`
+* `tf.where`
## Debugging Operations
TensorFlow provides several operations that you can use to validate values and
debug your graph.
-* @{tf.is_finite}
-* @{tf.is_inf}
-* @{tf.is_nan}
-* @{tf.verify_tensor_all_finite}
-* @{tf.check_numerics}
-* @{tf.add_check_numerics_ops}
-* @{tf.Assert}
-* @{tf.Print}
+* `tf.is_finite`
+* `tf.is_inf`
+* `tf.is_nan`
+* `tf.verify_tensor_all_finite`
+* `tf.check_numerics`
+* `tf.add_check_numerics_ops`
+* `tf.Assert`
+* `tf.Print`
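+
+A minimal sketch of guarding a computation with these ops:
+
+```python
+x = tf.placeholder(tf.float32)
+# Fail fast if x contains NaN or Inf.
+checked = tf.check_numerics(x, message="x has NaN/Inf")
+assert_positive = tf.Assert(tf.reduce_all(x > 0), [x])
+with tf.control_dependencies([assert_positive]):
+    # Print values to stderr as a side effect of evaluating y.
+    y = tf.Print(checked, [checked], message="x = ")
+```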
diff --git a/tensorflow/docs_src/api_guides/python/framework.md b/tensorflow/docs_src/api_guides/python/framework.md
index 42c3e57477..40a6c0783a 100644
--- a/tensorflow/docs_src/api_guides/python/framework.md
+++ b/tensorflow/docs_src/api_guides/python/framework.md
@@ -5,47 +5,47 @@ Classes and functions for building TensorFlow graphs.
## Core graph data structures
-* @{tf.Graph}
-* @{tf.Operation}
-* @{tf.Tensor}
+* `tf.Graph`
+* `tf.Operation`
+* `tf.Tensor`
## Tensor types
-* @{tf.DType}
-* @{tf.as_dtype}
+* `tf.DType`
+* `tf.as_dtype`
## Utility functions
-* @{tf.device}
-* @{tf.container}
-* @{tf.name_scope}
-* @{tf.control_dependencies}
-* @{tf.convert_to_tensor}
-* @{tf.convert_to_tensor_or_indexed_slices}
-* @{tf.convert_to_tensor_or_sparse_tensor}
-* @{tf.get_default_graph}
-* @{tf.reset_default_graph}
-* @{tf.import_graph_def}
-* @{tf.load_file_system_library}
-* @{tf.load_op_library}
+* `tf.device`
+* `tf.container`
+* `tf.name_scope`
+* `tf.control_dependencies`
+* `tf.convert_to_tensor`
+* `tf.convert_to_tensor_or_indexed_slices`
+* `tf.convert_to_tensor_or_sparse_tensor`
+* `tf.get_default_graph`
+* `tf.reset_default_graph`
+* `tf.import_graph_def`
+* `tf.load_file_system_library`
+* `tf.load_op_library`
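+
+A minimal sketch combining a few of these utilities:
+
+```python
+counter = tf.Variable(0, name="counter")
+with tf.device("/cpu:0"), tf.name_scope("preprocess"):
+    x = tf.convert_to_tensor([1.0, 2.0])  # ops named "preprocess/..."
+with tf.control_dependencies([tf.assign_add(counter, 1)]):
+    y = tf.identity(x)  # forces the counter update to run first
+```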
## Graph collections
-* @{tf.add_to_collection}
-* @{tf.get_collection}
-* @{tf.get_collection_ref}
-* @{tf.GraphKeys}
+* `tf.add_to_collection`
+* `tf.get_collection`
+* `tf.get_collection_ref`
+* `tf.GraphKeys`
## Defining new operations
-* @{tf.RegisterGradient}
-* @{tf.NotDifferentiable}
-* @{tf.NoGradient}
-* @{tf.TensorShape}
-* @{tf.Dimension}
-* @{tf.op_scope}
-* @{tf.get_seed}
+* `tf.RegisterGradient`
+* `tf.NotDifferentiable`
+* `tf.NoGradient`
+* `tf.TensorShape`
+* `tf.Dimension`
+* `tf.op_scope`
+* `tf.get_seed`
## For libraries building on TensorFlow
-* @{tf.register_tensor_conversion_function}
+* `tf.register_tensor_conversion_function`
diff --git a/tensorflow/docs_src/api_guides/python/functional_ops.md b/tensorflow/docs_src/api_guides/python/functional_ops.md
index 9fd46066a8..0a9fe02ad5 100644
--- a/tensorflow/docs_src/api_guides/python/functional_ops.md
+++ b/tensorflow/docs_src/api_guides/python/functional_ops.md
@@ -1,7 +1,7 @@
# Higher Order Functions
Note: Functions taking `Tensor` arguments can also take anything accepted by
-@{tf.convert_to_tensor}.
+`tf.convert_to_tensor`.
[TOC]
@@ -12,7 +12,7 @@ Functional operations.
TensorFlow provides several higher order operators to simplify the common
map-reduce programming patterns.
-* @{tf.map_fn}
-* @{tf.foldl}
-* @{tf.foldr}
-* @{tf.scan}
+* `tf.map_fn`
+* `tf.foldl`
+* `tf.foldr`
+* `tf.scan`
diff --git a/tensorflow/docs_src/api_guides/python/image.md b/tensorflow/docs_src/api_guides/python/image.md
index 051e4547ee..c51b92db05 100644
--- a/tensorflow/docs_src/api_guides/python/image.md
+++ b/tensorflow/docs_src/api_guides/python/image.md
@@ -1,7 +1,7 @@
# Images
Note: Functions taking `Tensor` arguments can also take anything accepted by
-@{tf.convert_to_tensor}.
+`tf.convert_to_tensor`.
[TOC]
@@ -19,27 +19,27 @@ Note: The PNG encode and decode Ops support RGBA, but the conversions Ops
presently only support RGB, HSV, and GrayScale. For now, the alpha channel has
to be stripped from the image and re-attached using slicing ops.
-* @{tf.image.decode_bmp}
-* @{tf.image.decode_gif}
-* @{tf.image.decode_jpeg}
-* @{tf.image.encode_jpeg}
-* @{tf.image.decode_png}
-* @{tf.image.encode_png}
-* @{tf.image.decode_image}
+* `tf.image.decode_bmp`
+* `tf.image.decode_gif`
+* `tf.image.decode_jpeg`
+* `tf.image.encode_jpeg`
+* `tf.image.decode_png`
+* `tf.image.encode_png`
+* `tf.image.decode_image`
## Resizing
The resizing Ops accept input images as tensors of several types. They always
output resized images as float32 tensors.
-The convenience function @{tf.image.resize_images} supports both 4-D
+The convenience function `tf.image.resize_images` supports both 4-D
and 3-D tensors as input and output. 4-D tensors are for batches of images,
3-D tensors for individual images.
Other resizing Ops only support 4-D batches of images as input:
-@{tf.image.resize_area}, @{tf.image.resize_bicubic},
-@{tf.image.resize_bilinear},
-@{tf.image.resize_nearest_neighbor}.
+`tf.image.resize_area`, `tf.image.resize_bicubic`,
+`tf.image.resize_bilinear`,
+`tf.image.resize_nearest_neighbor`.
Example:
@@ -49,29 +49,29 @@ image = tf.image.decode_jpeg(...)
resized_image = tf.image.resize_images(image, [299, 299])
```
-* @{tf.image.resize_images}
-* @{tf.image.resize_area}
-* @{tf.image.resize_bicubic}
-* @{tf.image.resize_bilinear}
-* @{tf.image.resize_nearest_neighbor}
+* `tf.image.resize_images`
+* `tf.image.resize_area`
+* `tf.image.resize_bicubic`
+* `tf.image.resize_bilinear`
+* `tf.image.resize_nearest_neighbor`
## Cropping
-* @{tf.image.resize_image_with_crop_or_pad}
-* @{tf.image.central_crop}
-* @{tf.image.pad_to_bounding_box}
-* @{tf.image.crop_to_bounding_box}
-* @{tf.image.extract_glimpse}
-* @{tf.image.crop_and_resize}
+* `tf.image.resize_image_with_crop_or_pad`
+* `tf.image.central_crop`
+* `tf.image.pad_to_bounding_box`
+* `tf.image.crop_to_bounding_box`
+* `tf.image.extract_glimpse`
+* `tf.image.crop_and_resize`
## Flipping, Rotating and Transposing
-* @{tf.image.flip_up_down}
-* @{tf.image.random_flip_up_down}
-* @{tf.image.flip_left_right}
-* @{tf.image.random_flip_left_right}
-* @{tf.image.transpose_image}
-* @{tf.image.rot90}
+* `tf.image.flip_up_down`
+* `tf.image.random_flip_up_down`
+* `tf.image.flip_left_right`
+* `tf.image.random_flip_left_right`
+* `tf.image.transpose_image`
+* `tf.image.rot90`
## Converting Between Colorspaces
@@ -94,7 +94,7 @@ per pixel (values are assumed to lie in `[0,255]`).
TensorFlow can convert images between the RGB and HSV color spaces. The
conversion functions work only on float images, so you need to convert images
in other formats using
-@{tf.image.convert_image_dtype}.
+`tf.image.convert_image_dtype`.
Example:
@@ -105,11 +105,11 @@ rgb_image_float = tf.image.convert_image_dtype(rgb_image, tf.float32)
hsv_image = tf.image.rgb_to_hsv(rgb_image)
```
-* @{tf.image.rgb_to_grayscale}
-* @{tf.image.grayscale_to_rgb}
-* @{tf.image.hsv_to_rgb}
-* @{tf.image.rgb_to_hsv}
-* @{tf.image.convert_image_dtype}
+* `tf.image.rgb_to_grayscale`
+* `tf.image.grayscale_to_rgb`
+* `tf.image.hsv_to_rgb`
+* `tf.image.rgb_to_hsv`
+* `tf.image.convert_image_dtype`
## Image Adjustments
@@ -122,23 +122,23 @@ If several adjustments are chained it is advisable to minimize the number of
redundant conversions by first converting the images to the most natural data
type and representation (RGB or HSV).
-* @{tf.image.adjust_brightness}
-* @{tf.image.random_brightness}
-* @{tf.image.adjust_contrast}
-* @{tf.image.random_contrast}
-* @{tf.image.adjust_hue}
-* @{tf.image.random_hue}
-* @{tf.image.adjust_gamma}
-* @{tf.image.adjust_saturation}
-* @{tf.image.random_saturation}
-* @{tf.image.per_image_standardization}
+* `tf.image.adjust_brightness`
+* `tf.image.random_brightness`
+* `tf.image.adjust_contrast`
+* `tf.image.random_contrast`
+* `tf.image.adjust_hue`
+* `tf.image.random_hue`
+* `tf.image.adjust_gamma`
+* `tf.image.adjust_saturation`
+* `tf.image.random_saturation`
+* `tf.image.per_image_standardization`
## Working with Bounding Boxes
-* @{tf.image.draw_bounding_boxes}
-* @{tf.image.non_max_suppression}
-* @{tf.image.sample_distorted_bounding_box}
+* `tf.image.draw_bounding_boxes`
+* `tf.image.non_max_suppression`
+* `tf.image.sample_distorted_bounding_box`
## Denoising
-* @{tf.image.total_variation}
+* `tf.image.total_variation`
diff --git a/tensorflow/docs_src/api_guides/python/input_dataset.md b/tensorflow/docs_src/api_guides/python/input_dataset.md
index a6612d1bf7..ab572e53d4 100644
--- a/tensorflow/docs_src/api_guides/python/input_dataset.md
+++ b/tensorflow/docs_src/api_guides/python/input_dataset.md
@@ -1,27 +1,27 @@
# Dataset Input Pipeline
[TOC]
-@{tf.data.Dataset} allows you to build complex input pipelines. See the
+`tf.data.Dataset` allows you to build complex input pipelines. See the
@{$guide/datasets} for an in-depth explanation of how to use this API.
## Reader classes
Classes that create a dataset from input files.
-* @{tf.data.FixedLengthRecordDataset}
-* @{tf.data.TextLineDataset}
-* @{tf.data.TFRecordDataset}
+* `tf.data.FixedLengthRecordDataset`
+* `tf.data.TextLineDataset`
+* `tf.data.TFRecordDataset`
## Creating new datasets
Static methods in `Dataset` that create new datasets.
-* @{tf.data.Dataset.from_generator}
-* @{tf.data.Dataset.from_tensor_slices}
-* @{tf.data.Dataset.from_tensors}
-* @{tf.data.Dataset.list_files}
-* @{tf.data.Dataset.range}
-* @{tf.data.Dataset.zip}
+* `tf.data.Dataset.from_generator`
+* `tf.data.Dataset.from_tensor_slices`
+* `tf.data.Dataset.from_tensors`
+* `tf.data.Dataset.list_files`
+* `tf.data.Dataset.range`
+* `tf.data.Dataset.zip`
## Transformations on existing datasets
@@ -32,54 +32,54 @@ can be chained together, as shown in the example below:
train_data = train_data.batch(100).shuffle(buffer_size=1000).repeat()
```
-* @{tf.data.Dataset.apply}
-* @{tf.data.Dataset.batch}
-* @{tf.data.Dataset.cache}
-* @{tf.data.Dataset.concatenate}
-* @{tf.data.Dataset.filter}
-* @{tf.data.Dataset.flat_map}
-* @{tf.data.Dataset.interleave}
-* @{tf.data.Dataset.map}
-* @{tf.data.Dataset.padded_batch}
-* @{tf.data.Dataset.prefetch}
-* @{tf.data.Dataset.repeat}
-* @{tf.data.Dataset.shard}
-* @{tf.data.Dataset.shuffle}
-* @{tf.data.Dataset.skip}
-* @{tf.data.Dataset.take}
+* `tf.data.Dataset.apply`
+* `tf.data.Dataset.batch`
+* `tf.data.Dataset.cache`
+* `tf.data.Dataset.concatenate`
+* `tf.data.Dataset.filter`
+* `tf.data.Dataset.flat_map`
+* `tf.data.Dataset.interleave`
+* `tf.data.Dataset.map`
+* `tf.data.Dataset.padded_batch`
+* `tf.data.Dataset.prefetch`
+* `tf.data.Dataset.repeat`
+* `tf.data.Dataset.shard`
+* `tf.data.Dataset.shuffle`
+* `tf.data.Dataset.skip`
+* `tf.data.Dataset.take`
### Custom transformation functions
-Custom transformation functions can be applied to a `Dataset` using @{tf.data.Dataset.apply}. Below are custom transformation functions from `tf.contrib.data`:
-
-* @{tf.contrib.data.batch_and_drop_remainder}
-* @{tf.contrib.data.dense_to_sparse_batch}
-* @{tf.contrib.data.enumerate_dataset}
-* @{tf.contrib.data.group_by_window}
-* @{tf.contrib.data.ignore_errors}
-* @{tf.contrib.data.map_and_batch}
-* @{tf.contrib.data.padded_batch_and_drop_remainder}
-* @{tf.contrib.data.parallel_interleave}
-* @{tf.contrib.data.rejection_resample}
-* @{tf.contrib.data.scan}
-* @{tf.contrib.data.shuffle_and_repeat}
-* @{tf.contrib.data.unbatch}
+Custom transformation functions can be applied to a `Dataset` using `tf.data.Dataset.apply`. Below are custom transformation functions from `tf.contrib.data`:
+
+* `tf.contrib.data.batch_and_drop_remainder`
+* `tf.contrib.data.dense_to_sparse_batch`
+* `tf.contrib.data.enumerate_dataset`
+* `tf.contrib.data.group_by_window`
+* `tf.contrib.data.ignore_errors`
+* `tf.contrib.data.map_and_batch`
+* `tf.contrib.data.padded_batch_and_drop_remainder`
+* `tf.contrib.data.parallel_interleave`
+* `tf.contrib.data.rejection_resample`
+* `tf.contrib.data.scan`
+* `tf.contrib.data.shuffle_and_repeat`
+* `tf.contrib.data.unbatch`
## Iterating over datasets
-These functions make a @{tf.data.Iterator} from a `Dataset`.
+These functions make a `tf.data.Iterator` from a `Dataset`.
-* @{tf.data.Dataset.make_initializable_iterator}
-* @{tf.data.Dataset.make_one_shot_iterator}
+* `tf.data.Dataset.make_initializable_iterator`
+* `tf.data.Dataset.make_one_shot_iterator`
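+
+A minimal sketch of the one-shot case:
+
+```python
+dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3, 4]).batch(2)
+iterator = dataset.make_one_shot_iterator()
+next_batch = iterator.get_next()
+with tf.Session() as sess:
+    print(sess.run(next_batch))  # [1 2]
+    print(sess.run(next_batch))  # [3 4]
+```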
-The `Iterator` class also contains static methods that create a @{tf.data.Iterator} that can be used with multiple `Dataset` objects.
+The `Iterator` class also contains static methods that create a `tf.data.Iterator` that can be used with multiple `Dataset` objects.
-* @{tf.data.Iterator.from_structure}
-* @{tf.data.Iterator.from_string_handle}
+* `tf.data.Iterator.from_structure`
+* `tf.data.Iterator.from_string_handle`
## Extra functions from `tf.contrib.data`
-* @{tf.contrib.data.get_single_element}
-* @{tf.contrib.data.make_saveable_from_iterator}
-* @{tf.contrib.data.read_batch_features}
+* `tf.contrib.data.get_single_element`
+* `tf.contrib.data.make_saveable_from_iterator`
+* `tf.contrib.data.read_batch_features`
diff --git a/tensorflow/docs_src/api_guides/python/io_ops.md b/tensorflow/docs_src/api_guides/python/io_ops.md
index 86b4b39409..ab3c70daa0 100644
--- a/tensorflow/docs_src/api_guides/python/io_ops.md
+++ b/tensorflow/docs_src/api_guides/python/io_ops.md
@@ -1,7 +1,7 @@
# Inputs and Readers
Note: Functions taking `Tensor` arguments can also take anything accepted by
-@{tf.convert_to_tensor}.
+`tf.convert_to_tensor`.
[TOC]
@@ -10,33 +10,33 @@ Note: Functions taking `Tensor` arguments can also take anything accepted by
TensorFlow provides a placeholder operation that must be fed with data
on execution. For more info, see the section on @{$reading_data#Feeding$Feeding data}.
-* @{tf.placeholder}
-* @{tf.placeholder_with_default}
+* `tf.placeholder`
+* `tf.placeholder_with_default`
For feeding `SparseTensor`s, which are a composite type,
there is a convenience function:
-* @{tf.sparse_placeholder}
+* `tf.sparse_placeholder`
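+
+A minimal sketch of feeding a placeholder:
+
+```python
+x = tf.placeholder(tf.float32, shape=[None, 3])
+y = tf.reduce_sum(x)
+with tf.Session() as sess:
+    # The value is supplied at run time via feed_dict.
+    print(sess.run(y, feed_dict={x: [[1.0, 2.0, 3.0]]}))  # 6.0
+```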
## Readers
TensorFlow provides a set of Reader classes for reading data formats.
For more information on inputs and readers, see @{$reading_data$Reading data}.
-* @{tf.ReaderBase}
-* @{tf.TextLineReader}
-* @{tf.WholeFileReader}
-* @{tf.IdentityReader}
-* @{tf.TFRecordReader}
-* @{tf.FixedLengthRecordReader}
+* `tf.ReaderBase`
+* `tf.TextLineReader`
+* `tf.WholeFileReader`
+* `tf.IdentityReader`
+* `tf.TFRecordReader`
+* `tf.FixedLengthRecordReader`
## Converting
TensorFlow provides several operations that you can use to convert various data
formats into tensors.
-* @{tf.decode_csv}
-* @{tf.decode_raw}
+* `tf.decode_csv`
+* `tf.decode_raw`
- - -
@@ -48,14 +48,14 @@ here](https://www.tensorflow.org/code/tensorflow/core/example/example.proto).
They contain `Features`, [described
here](https://www.tensorflow.org/code/tensorflow/core/example/feature.proto).
-* @{tf.VarLenFeature}
-* @{tf.FixedLenFeature}
-* @{tf.FixedLenSequenceFeature}
-* @{tf.SparseFeature}
-* @{tf.parse_example}
-* @{tf.parse_single_example}
-* @{tf.parse_tensor}
-* @{tf.decode_json_example}
+* `tf.VarLenFeature`
+* `tf.FixedLenFeature`
+* `tf.FixedLenSequenceFeature`
+* `tf.SparseFeature`
+* `tf.parse_example`
+* `tf.parse_single_example`
+* `tf.parse_tensor`
+* `tf.decode_json_example`
## Queues
@@ -64,23 +64,23 @@ structures within the TensorFlow computation graph to stage pipelines
of tensors together. The following describe the basic Queue interface
and some implementations. To see an example use, see @{$threading_and_queues$Threading and Queues}.
-* @{tf.QueueBase}
-* @{tf.FIFOQueue}
-* @{tf.PaddingFIFOQueue}
-* @{tf.RandomShuffleQueue}
-* @{tf.PriorityQueue}
+* `tf.QueueBase`
+* `tf.FIFOQueue`
+* `tf.PaddingFIFOQueue`
+* `tf.RandomShuffleQueue`
+* `tf.PriorityQueue`
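+
+A minimal sketch of the basic queue interface:
+
+```python
+q = tf.FIFOQueue(capacity=10, dtypes=tf.float32)
+enqueue_op = q.enqueue(1.0)
+dequeued = q.dequeue()
+with tf.Session() as sess:
+    sess.run(enqueue_op)
+    print(sess.run(dequeued))  # 1.0
+```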
## Conditional Accumulators
-* @{tf.ConditionalAccumulatorBase}
-* @{tf.ConditionalAccumulator}
-* @{tf.SparseConditionalAccumulator}
+* `tf.ConditionalAccumulatorBase`
+* `tf.ConditionalAccumulator`
+* `tf.SparseConditionalAccumulator`
## Dealing with the filesystem
-* @{tf.matching_files}
-* @{tf.read_file}
-* @{tf.write_file}
+* `tf.matching_files`
+* `tf.read_file`
+* `tf.write_file`
## Input pipeline
@@ -93,12 +93,12 @@ for context.
The "producer" functions add a queue to the graph and a corresponding
`QueueRunner` for running the subgraph that fills that queue.
-* @{tf.train.match_filenames_once}
-* @{tf.train.limit_epochs}
-* @{tf.train.input_producer}
-* @{tf.train.range_input_producer}
-* @{tf.train.slice_input_producer}
-* @{tf.train.string_input_producer}
+* `tf.train.match_filenames_once`
+* `tf.train.limit_epochs`
+* `tf.train.input_producer`
+* `tf.train.range_input_producer`
+* `tf.train.slice_input_producer`
+* `tf.train.string_input_producer`
### Batching at the end of an input pipeline
@@ -106,25 +106,25 @@ These functions add a queue to the graph to assemble a batch of
examples, with possible shuffling. They also add a `QueueRunner` for
running the subgraph that fills that queue.
-Use @{tf.train.batch} or @{tf.train.batch_join} for batching
+Use `tf.train.batch` or `tf.train.batch_join` for batching
examples that have already been well shuffled. Use
-@{tf.train.shuffle_batch} or
-@{tf.train.shuffle_batch_join} for examples that would
+`tf.train.shuffle_batch` or
+`tf.train.shuffle_batch_join` for examples that would
benefit from additional shuffling.
-Use @{tf.train.batch} or @{tf.train.shuffle_batch} if you want a
+Use `tf.train.batch` or `tf.train.shuffle_batch` if you want a
single thread producing examples to batch, or if you have a
single subgraph producing examples but you want to run it in *N* threads
(where you increase *N* until it can keep the queue full). Use
-@{tf.train.batch_join} or @{tf.train.shuffle_batch_join}
+`tf.train.batch_join` or `tf.train.shuffle_batch_join`
if you have *N* different subgraphs producing examples to batch and you
want them run by *N* threads. Use `maybe_*` to enqueue conditionally.
-* @{tf.train.batch}
-* @{tf.train.maybe_batch}
-* @{tf.train.batch_join}
-* @{tf.train.maybe_batch_join}
-* @{tf.train.shuffle_batch}
-* @{tf.train.maybe_shuffle_batch}
-* @{tf.train.shuffle_batch_join}
-* @{tf.train.maybe_shuffle_batch_join}
+* `tf.train.batch`
+* `tf.train.maybe_batch`
+* `tf.train.batch_join`
+* `tf.train.maybe_batch_join`
+* `tf.train.shuffle_batch`
+* `tf.train.maybe_shuffle_batch`
+* `tf.train.shuffle_batch_join`
+* `tf.train.maybe_shuffle_batch_join`
diff --git a/tensorflow/docs_src/api_guides/python/math_ops.md b/tensorflow/docs_src/api_guides/python/math_ops.md
index dee7f1618a..e738161e49 100644
--- a/tensorflow/docs_src/api_guides/python/math_ops.md
+++ b/tensorflow/docs_src/api_guides/python/math_ops.md
@@ -1,7 +1,7 @@
# Math
Note: Functions taking `Tensor` arguments can also take anything accepted by
-@{tf.convert_to_tensor}.
+`tf.convert_to_tensor`.
[TOC]
@@ -13,97 +13,97 @@ broadcasting](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html).
TensorFlow provides several operations that you can use to add basic arithmetic
operators to your graph.
-* @{tf.add}
-* @{tf.subtract}
-* @{tf.multiply}
-* @{tf.scalar_mul}
-* @{tf.div}
-* @{tf.divide}
-* @{tf.truediv}
-* @{tf.floordiv}
-* @{tf.realdiv}
-* @{tf.truncatediv}
-* @{tf.floor_div}
-* @{tf.truncatemod}
-* @{tf.floormod}
-* @{tf.mod}
-* @{tf.cross}
+* `tf.add`
+* `tf.subtract`
+* `tf.multiply`
+* `tf.scalar_mul`
+* `tf.div`
+* `tf.divide`
+* `tf.truediv`
+* `tf.floordiv`
+* `tf.realdiv`
+* `tf.truncatediv`
+* `tf.floor_div`
+* `tf.truncatemod`
+* `tf.floormod`
+* `tf.mod`
+* `tf.cross`
## Basic Math Functions
TensorFlow provides several operations that you can use to add basic
mathematical functions to your graph.
-* @{tf.add_n}
-* @{tf.abs}
-* @{tf.negative}
-* @{tf.sign}
-* @{tf.reciprocal}
-* @{tf.square}
-* @{tf.round}
-* @{tf.sqrt}
-* @{tf.rsqrt}
-* @{tf.pow}
-* @{tf.exp}
-* @{tf.expm1}
-* @{tf.log}
-* @{tf.log1p}
-* @{tf.ceil}
-* @{tf.floor}
-* @{tf.maximum}
-* @{tf.minimum}
-* @{tf.cos}
-* @{tf.sin}
-* @{tf.lbeta}
-* @{tf.tan}
-* @{tf.acos}
-* @{tf.asin}
-* @{tf.atan}
-* @{tf.cosh}
-* @{tf.sinh}
-* @{tf.asinh}
-* @{tf.acosh}
-* @{tf.atanh}
-* @{tf.lgamma}
-* @{tf.digamma}
-* @{tf.erf}
-* @{tf.erfc}
-* @{tf.squared_difference}
-* @{tf.igamma}
-* @{tf.igammac}
-* @{tf.zeta}
-* @{tf.polygamma}
-* @{tf.betainc}
-* @{tf.rint}
+* `tf.add_n`
+* `tf.abs`
+* `tf.negative`
+* `tf.sign`
+* `tf.reciprocal`
+* `tf.square`
+* `tf.round`
+* `tf.sqrt`
+* `tf.rsqrt`
+* `tf.pow`
+* `tf.exp`
+* `tf.expm1`
+* `tf.log`
+* `tf.log1p`
+* `tf.ceil`
+* `tf.floor`
+* `tf.maximum`
+* `tf.minimum`
+* `tf.cos`
+* `tf.sin`
+* `tf.lbeta`
+* `tf.tan`
+* `tf.acos`
+* `tf.asin`
+* `tf.atan`
+* `tf.cosh`
+* `tf.sinh`
+* `tf.asinh`
+* `tf.acosh`
+* `tf.atanh`
+* `tf.lgamma`
+* `tf.digamma`
+* `tf.erf`
+* `tf.erfc`
+* `tf.squared_difference`
+* `tf.igamma`
+* `tf.igammac`
+* `tf.zeta`
+* `tf.polygamma`
+* `tf.betainc`
+* `tf.rint`
## Matrix Math Functions
TensorFlow provides several operations that you can use to add linear algebra
functions on matrices to your graph.
-* @{tf.diag}
-* @{tf.diag_part}
-* @{tf.trace}
-* @{tf.transpose}
-* @{tf.eye}
-* @{tf.matrix_diag}
-* @{tf.matrix_diag_part}
-* @{tf.matrix_band_part}
-* @{tf.matrix_set_diag}
-* @{tf.matrix_transpose}
-* @{tf.matmul}
-* @{tf.norm}
-* @{tf.matrix_determinant}
-* @{tf.matrix_inverse}
-* @{tf.cholesky}
-* @{tf.cholesky_solve}
-* @{tf.matrix_solve}
-* @{tf.matrix_triangular_solve}
-* @{tf.matrix_solve_ls}
-* @{tf.qr}
-* @{tf.self_adjoint_eig}
-* @{tf.self_adjoint_eigvals}
-* @{tf.svd}
+* `tf.diag`
+* `tf.diag_part`
+* `tf.trace`
+* `tf.transpose`
+* `tf.eye`
+* `tf.matrix_diag`
+* `tf.matrix_diag_part`
+* `tf.matrix_band_part`
+* `tf.matrix_set_diag`
+* `tf.matrix_transpose`
+* `tf.matmul`
+* `tf.norm`
+* `tf.matrix_determinant`
+* `tf.matrix_inverse`
+* `tf.cholesky`
+* `tf.cholesky_solve`
+* `tf.matrix_solve`
+* `tf.matrix_triangular_solve`
+* `tf.matrix_solve_ls`
+* `tf.qr`
+* `tf.self_adjoint_eig`
+* `tf.self_adjoint_eigvals`
+* `tf.svd`
## Tensor Math Function
@@ -111,7 +111,7 @@ functions on matrices to your graph.
TensorFlow provides operations that you can use to add tensor functions to your
graph.
-* @{tf.tensordot}
+* `tf.tensordot`
## Complex Number Functions
@@ -119,11 +119,11 @@ graph.
TensorFlow provides several operations that you can use to add complex number
functions to your graph.
-* @{tf.complex}
-* @{tf.conj}
-* @{tf.imag}
-* @{tf.angle}
-* @{tf.real}
+* `tf.complex`
+* `tf.conj`
+* `tf.imag`
+* `tf.angle`
+* `tf.real`
## Reduction
@@ -131,25 +131,25 @@ functions to your graph.
TensorFlow provides several operations that you can use to perform
common math computations that reduce various dimensions of a tensor.
-* @{tf.reduce_sum}
-* @{tf.reduce_prod}
-* @{tf.reduce_min}
-* @{tf.reduce_max}
-* @{tf.reduce_mean}
-* @{tf.reduce_all}
-* @{tf.reduce_any}
-* @{tf.reduce_logsumexp}
-* @{tf.count_nonzero}
-* @{tf.accumulate_n}
-* @{tf.einsum}
+* `tf.reduce_sum`
+* `tf.reduce_prod`
+* `tf.reduce_min`
+* `tf.reduce_max`
+* `tf.reduce_mean`
+* `tf.reduce_all`
+* `tf.reduce_any`
+* `tf.reduce_logsumexp`
+* `tf.count_nonzero`
+* `tf.accumulate_n`
+* `tf.einsum`
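+
+A minimal sketch of reducing all axes versus a single axis:
+
+```python
+x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
+total = tf.reduce_sum(x)               # 10.0 (all axes)
+col_means = tf.reduce_mean(x, axis=0)  # [2.0, 3.0]
+```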
## Scan
TensorFlow provides several operations that you can use to perform scans
(running totals) across one axis of a tensor.
-* @{tf.cumsum}
-* @{tf.cumprod}
+* `tf.cumsum`
+* `tf.cumprod`
## Segmentation
@@ -172,15 +172,15 @@ tf.segment_sum(c, tf.constant([0, 0, 1]))
[5 6 7 8]]
```
-* @{tf.segment_sum}
-* @{tf.segment_prod}
-* @{tf.segment_min}
-* @{tf.segment_max}
-* @{tf.segment_mean}
-* @{tf.unsorted_segment_sum}
-* @{tf.sparse_segment_sum}
-* @{tf.sparse_segment_mean}
-* @{tf.sparse_segment_sqrt_n}
+* `tf.segment_sum`
+* `tf.segment_prod`
+* `tf.segment_min`
+* `tf.segment_max`
+* `tf.segment_mean`
+* `tf.unsorted_segment_sum`
+* `tf.sparse_segment_sum`
+* `tf.sparse_segment_mean`
+* `tf.sparse_segment_sqrt_n`
## Sequence Comparison and Indexing
@@ -190,10 +190,10 @@ comparison and index extraction to your graph. You can use these operations to
determine sequence differences and determine the indexes of specific values in
a tensor.
-* @{tf.argmin}
-* @{tf.argmax}
-* @{tf.setdiff1d}
-* @{tf.where}
-* @{tf.unique}
-* @{tf.edit_distance}
-* @{tf.invert_permutation}
+* `tf.argmin`
+* `tf.argmax`
+* `tf.setdiff1d`
+* `tf.where`
+* `tf.unique`
+* `tf.edit_distance`
+* `tf.invert_permutation`
diff --git a/tensorflow/docs_src/api_guides/python/meta_graph.md b/tensorflow/docs_src/api_guides/python/meta_graph.md
index f1c3adc22c..7dbd9a56f4 100644
--- a/tensorflow/docs_src/api_guides/python/meta_graph.md
+++ b/tensorflow/docs_src/api_guides/python/meta_graph.md
@@ -7,10 +7,10 @@ term storage of graphs. The MetaGraph contains the information required
to continue training, perform evaluation, or run inference on a previously trained graph.
The APIs for exporting and importing the complete model are in
-the @{tf.train.Saver} class:
-@{tf.train.export_meta_graph}
+the `tf.train.Saver` class:
+`tf.train.export_meta_graph`
and
-@{tf.train.import_meta_graph}.
+`tf.train.import_meta_graph`.
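+
+A minimal sketch of the round trip (the path is purely illustrative):
+
+```python
+# Export the current default graph as a MetaGraph.
+tf.train.export_meta_graph("/tmp/model.meta")
+# ... later, typically in a fresh graph, re-import it; a Saver is returned.
+saver = tf.train.import_meta_graph("/tmp/model.meta")
+```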
## What's in a MetaGraph
@@ -24,7 +24,7 @@ protocol buffer. It contains the following fields:
* [`CollectionDef`](https://www.tensorflow.org/code/tensorflow/core/protobuf/meta_graph.proto)
map that further describes additional components of the model such as
@{$python/state_ops$`Variables`},
-@{tf.train.QueueRunner}, etc.
+`tf.train.QueueRunner`, etc.
In order for a Python object to be serialized
to and from `MetaGraphDef`, the Python class must implement `to_proto()` and
@@ -122,7 +122,7 @@ The API for exporting a running model as a MetaGraph is `export_meta_graph()`.
The MetaGraph is also automatically exported via the `save()` API in
-@{tf.train.Saver}.
+`tf.train.Saver`.
## Import a MetaGraph
diff --git a/tensorflow/docs_src/api_guides/python/nn.md b/tensorflow/docs_src/api_guides/python/nn.md
index 8d8daaae19..40dda3941d 100644
--- a/tensorflow/docs_src/api_guides/python/nn.md
+++ b/tensorflow/docs_src/api_guides/python/nn.md
@@ -1,7 +1,7 @@
# Neural Network
Note: Functions taking `Tensor` arguments can also take anything accepted by
-@{tf.convert_to_tensor}.
+`tf.convert_to_tensor`.
[TOC]
@@ -16,17 +16,17 @@ functions (`relu`, `relu6`, `crelu` and `relu_x`), and random regularization
All activation ops apply componentwise, and produce a tensor of the same
shape as the input tensor.
-* @{tf.nn.relu}
-* @{tf.nn.relu6}
-* @{tf.nn.crelu}
-* @{tf.nn.elu}
-* @{tf.nn.selu}
-* @{tf.nn.softplus}
-* @{tf.nn.softsign}
-* @{tf.nn.dropout}
-* @{tf.nn.bias_add}
-* @{tf.sigmoid}
-* @{tf.tanh}
+* `tf.nn.relu`
+* `tf.nn.relu6`
+* `tf.nn.crelu`
+* `tf.nn.elu`
+* `tf.nn.selu`
+* `tf.nn.softplus`
+* `tf.nn.softsign`
+* `tf.nn.dropout`
+* `tf.nn.bias_add`
+* `tf.sigmoid`
+* `tf.tanh`
## Convolution
@@ -112,22 +112,22 @@ vectors. For `depthwise_conv_2d`, each scalar component `input[b, i, j, k]`
is multiplied by a vector `filter[di, dj, k]`, and all the vectors are
concatenated.
-* @{tf.nn.convolution}
-* @{tf.nn.conv2d}
-* @{tf.nn.depthwise_conv2d}
-* @{tf.nn.depthwise_conv2d_native}
-* @{tf.nn.separable_conv2d}
-* @{tf.nn.atrous_conv2d}
-* @{tf.nn.atrous_conv2d_transpose}
-* @{tf.nn.conv2d_transpose}
-* @{tf.nn.conv1d}
-* @{tf.nn.conv3d}
-* @{tf.nn.conv3d_transpose}
-* @{tf.nn.conv2d_backprop_filter}
-* @{tf.nn.conv2d_backprop_input}
-* @{tf.nn.conv3d_backprop_filter_v2}
-* @{tf.nn.depthwise_conv2d_native_backprop_filter}
-* @{tf.nn.depthwise_conv2d_native_backprop_input}
+* `tf.nn.convolution`
+* `tf.nn.conv2d`
+* `tf.nn.depthwise_conv2d`
+* `tf.nn.depthwise_conv2d_native`
+* `tf.nn.separable_conv2d`
+* `tf.nn.atrous_conv2d`
+* `tf.nn.atrous_conv2d_transpose`
+* `tf.nn.conv2d_transpose`
+* `tf.nn.conv1d`
+* `tf.nn.conv3d`
+* `tf.nn.conv3d_transpose`
+* `tf.nn.conv2d_backprop_filter`
+* `tf.nn.conv2d_backprop_input`
+* `tf.nn.conv3d_backprop_filter_v2`
+* `tf.nn.depthwise_conv2d_native_backprop_filter`
+* `tf.nn.depthwise_conv2d_native_backprop_input`
## Pooling
@@ -144,14 +144,14 @@ In detail, the output is
where the indices also take into consideration the padding values. Please refer
to the `Convolution` section for details about the padding calculation.
-* @{tf.nn.avg_pool}
-* @{tf.nn.max_pool}
-* @{tf.nn.max_pool_with_argmax}
-* @{tf.nn.avg_pool3d}
-* @{tf.nn.max_pool3d}
-* @{tf.nn.fractional_avg_pool}
-* @{tf.nn.fractional_max_pool}
-* @{tf.nn.pool}
+* `tf.nn.avg_pool`
+* `tf.nn.max_pool`
+* `tf.nn.max_pool_with_argmax`
+* `tf.nn.avg_pool3d`
+* `tf.nn.max_pool3d`
+* `tf.nn.fractional_avg_pool`
+* `tf.nn.fractional_max_pool`
+* `tf.nn.pool`
## Morphological filtering
@@ -190,24 +190,24 @@ Dilation and erosion are dual to each other. The dilation of the input signal
Striding and padding are carried out in exactly the same way as in standard
convolution. Please refer to the `Convolution` section for details.
-* @{tf.nn.dilation2d}
-* @{tf.nn.erosion2d}
-* @{tf.nn.with_space_to_batch}
+* `tf.nn.dilation2d`
+* `tf.nn.erosion2d`
+* `tf.nn.with_space_to_batch`
## Normalization
Normalization is useful to prevent neurons from saturating when inputs may
have varying scale, and to aid generalization.
-* @{tf.nn.l2_normalize}
-* @{tf.nn.local_response_normalization}
-* @{tf.nn.sufficient_statistics}
-* @{tf.nn.normalize_moments}
-* @{tf.nn.moments}
-* @{tf.nn.weighted_moments}
-* @{tf.nn.fused_batch_norm}
-* @{tf.nn.batch_normalization}
-* @{tf.nn.batch_norm_with_global_normalization}
+* `tf.nn.l2_normalize`
+* `tf.nn.local_response_normalization`
+* `tf.nn.sufficient_statistics`
+* `tf.nn.normalize_moments`
+* `tf.nn.moments`
+* `tf.nn.weighted_moments`
+* `tf.nn.fused_batch_norm`
+* `tf.nn.batch_normalization`
+* `tf.nn.batch_norm_with_global_normalization`
## Losses
@@ -215,29 +215,29 @@ The loss ops measure error between two tensors, or between a tensor and zero.
These can be used for measuring accuracy of a network in a regression task
or for regularization purposes (weight decay).
-* @{tf.nn.l2_loss}
-* @{tf.nn.log_poisson_loss}
+* `tf.nn.l2_loss`
+* `tf.nn.log_poisson_loss`
## Classification
TensorFlow provides several operations that help you perform classification.
-* @{tf.nn.sigmoid_cross_entropy_with_logits}
-* @{tf.nn.softmax}
-* @{tf.nn.log_softmax}
-* @{tf.nn.softmax_cross_entropy_with_logits}
-* @{tf.nn.softmax_cross_entropy_with_logits_v2} - identical to the base
+* `tf.nn.sigmoid_cross_entropy_with_logits`
+* `tf.nn.softmax`
+* `tf.nn.log_softmax`
+* `tf.nn.softmax_cross_entropy_with_logits`
+* `tf.nn.softmax_cross_entropy_with_logits_v2` - identical to the base
version, except it allows gradient propagation into the labels.
-* @{tf.nn.sparse_softmax_cross_entropy_with_logits}
-* @{tf.nn.weighted_cross_entropy_with_logits}
+* `tf.nn.sparse_softmax_cross_entropy_with_logits`
+* `tf.nn.weighted_cross_entropy_with_logits`
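+
+A minimal sketch of computing cross-entropy from raw logits:
+
+```python
+logits = tf.constant([[2.0, 0.5, -1.0]])
+labels = tf.constant([[1.0, 0.0, 0.0]])
+# Per-example loss; do not apply softmax to the logits yourself first.
+loss = tf.nn.softmax_cross_entropy_with_logits_v2(
+    labels=labels, logits=logits)  # shape [1]
+```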
## Embeddings
TensorFlow provides library support for looking up values in embedding
tensors.
-* @{tf.nn.embedding_lookup}
-* @{tf.nn.embedding_lookup_sparse}
+* `tf.nn.embedding_lookup`
+* `tf.nn.embedding_lookup_sparse`
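+
+A minimal sketch of an embedding lookup:
+
+```python
+embeddings = tf.get_variable("embeddings", shape=[10000, 64])
+ids = tf.constant([7, 42])
+vectors = tf.nn.embedding_lookup(embeddings, ids)  # shape [2, 64]
+```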
## Recurrent Neural Networks
@@ -245,23 +245,23 @@ TensorFlow provides a number of methods for constructing Recurrent
Neural Networks. Most accept an `RNNCell`-subclassed object
(see the documentation for `tf.contrib.rnn`).
-* @{tf.nn.dynamic_rnn}
-* @{tf.nn.bidirectional_dynamic_rnn}
-* @{tf.nn.raw_rnn}
+* `tf.nn.dynamic_rnn`
+* `tf.nn.bidirectional_dynamic_rnn`
+* `tf.nn.raw_rnn`
## Connectionist Temporal Classification (CTC)
-* @{tf.nn.ctc_loss}
-* @{tf.nn.ctc_greedy_decoder}
-* @{tf.nn.ctc_beam_search_decoder}
+* `tf.nn.ctc_loss`
+* `tf.nn.ctc_greedy_decoder`
+* `tf.nn.ctc_beam_search_decoder`
## Evaluation
The evaluation ops are useful for measuring the performance of a network.
They are typically used at evaluation time.
-* @{tf.nn.top_k}
-* @{tf.nn.in_top_k}
+* `tf.nn.top_k`
+* `tf.nn.in_top_k`
## Candidate Sampling
@@ -281,29 +281,29 @@ Reference](https://www.tensorflow.org/extras/candidate_sampling.pdf)
TensorFlow provides the following sampled loss functions for faster training.
-* @{tf.nn.nce_loss}
-* @{tf.nn.sampled_softmax_loss}
+* `tf.nn.nce_loss`
+* `tf.nn.sampled_softmax_loss`
### Candidate Samplers
TensorFlow provides the following samplers for randomly sampling candidate
classes when using one of the sampled loss functions above.
-* @{tf.nn.uniform_candidate_sampler}
-* @{tf.nn.log_uniform_candidate_sampler}
-* @{tf.nn.learned_unigram_candidate_sampler}
-* @{tf.nn.fixed_unigram_candidate_sampler}
+* `tf.nn.uniform_candidate_sampler`
+* `tf.nn.log_uniform_candidate_sampler`
+* `tf.nn.learned_unigram_candidate_sampler`
+* `tf.nn.fixed_unigram_candidate_sampler`
### Miscellaneous candidate sampling utilities
-* @{tf.nn.compute_accidental_hits}
+* `tf.nn.compute_accidental_hits`
### Quantization ops
-* @{tf.nn.quantized_conv2d}
-* @{tf.nn.quantized_relu_x}
-* @{tf.nn.quantized_max_pool}
-* @{tf.nn.quantized_avg_pool}
+* `tf.nn.quantized_conv2d`
+* `tf.nn.quantized_relu_x`
+* `tf.nn.quantized_max_pool`
+* `tf.nn.quantized_avg_pool`
## Notes on SAME Convolution Padding
diff --git a/tensorflow/docs_src/api_guides/python/python_io.md b/tensorflow/docs_src/api_guides/python/python_io.md
index 06282e49d5..e7e82a8701 100644
--- a/tensorflow/docs_src/api_guides/python/python_io.md
+++ b/tensorflow/docs_src/api_guides/python/python_io.md
@@ -5,10 +5,10 @@ A TFRecords file represents a sequence of (binary) strings. The format is not
random access, so it is suitable for streaming large amounts of data but not
suitable if fast sharding or other non-sequential access is desired.
-* @{tf.python_io.TFRecordWriter}
-* @{tf.python_io.tf_record_iterator}
-* @{tf.python_io.TFRecordCompressionType}
-* @{tf.python_io.TFRecordOptions}
+* `tf.python_io.TFRecordWriter`
+* `tf.python_io.tf_record_iterator`
+* `tf.python_io.TFRecordCompressionType`
+* `tf.python_io.TFRecordOptions`
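+
+A minimal sketch of writing and re-reading records (the path is illustrative):
+
+```python
+with tf.python_io.TFRecordWriter("/tmp/data.tfrecords") as writer:
+    for record in [b"first", b"second"]:
+        writer.write(record)
+for record in tf.python_io.tf_record_iterator("/tmp/data.tfrecords"):
+    print(record)
+```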
- - -
diff --git a/tensorflow/docs_src/api_guides/python/reading_data.md b/tensorflow/docs_src/api_guides/python/reading_data.md
index d7d0904ae2..78c36d965c 100644
--- a/tensorflow/docs_src/api_guides/python/reading_data.md
+++ b/tensorflow/docs_src/api_guides/python/reading_data.md
@@ -16,7 +16,7 @@ There are four methods of getting data into a TensorFlow program:
## `tf.data` API
-See the @{$guide/datasets} for an in-depth explanation of @{tf.data.Dataset}.
+See the @{$guide/datasets} for an in-depth explanation of `tf.data.Dataset`.
The `tf.data` API enables you to extract and preprocess data
from different input/file formats, and apply transformations such as batching,
shuffling, and mapping functions over the dataset. This is an improved version
@@ -44,7 +44,7 @@ with tf.Session():
While you can replace any Tensor with feed data, including variables and
constants, the best practice is to use a
-@{tf.placeholder} node. A
+`tf.placeholder` node. A
`placeholder` exists solely to serve as the target of feeds. It is not
initialized and contains no data. A placeholder generates an error if
it is executed without a feed, so you won't forget to feed it.
@@ -74,9 +74,9 @@ A typical queue-based pipeline for reading records from files has the following
For the list of filenames, use either a constant string Tensor (like
`["file0", "file1"]` or `[("file%d" % i) for i in range(2)]`) or the
-@{tf.train.match_filenames_once} function.
+`tf.train.match_filenames_once` function.
-Pass the list of filenames to the @{tf.train.string_input_producer} function.
+Pass the list of filenames to the `tf.train.string_input_producer` function.
`string_input_producer` creates a FIFO queue for holding the filenames until
the reader needs them.
@@ -102,8 +102,8 @@ decode this string into the tensors that make up an example.
To read text files in [comma-separated value (CSV)
format](https://tools.ietf.org/html/rfc4180), use a
-@{tf.TextLineReader} with the
-@{tf.decode_csv} operation. For example:
+`tf.TextLineReader` with the
+`tf.decode_csv` operation. For example:
```python
filename_queue = tf.train.string_input_producer(["file0.csv", "file1.csv"])
@@ -143,8 +143,8 @@ block while it waits for filenames from the queue.
#### Fixed length records
To read binary files in which each record is a fixed number of bytes, use
-@{tf.FixedLengthRecordReader}
-with the @{tf.decode_raw} operation.
+`tf.FixedLengthRecordReader`
+with the `tf.decode_raw` operation.
The `decode_raw` op converts from a string to a uint8 tensor.
For example, [the CIFAR-10 dataset](http://www.cs.toronto.edu/~kriz/cifar.html)
@@ -169,12 +169,12 @@ containing
as a field). You write a little program that gets your data, stuffs it in an
`Example` protocol buffer, serializes the protocol buffer to a string, and then
writes the string to a TFRecords file using the
-@{tf.python_io.TFRecordWriter}.
+`tf.python_io.TFRecordWriter`.
For example,
[`tensorflow/examples/how_tos/reading_data/convert_to_records.py`](https://www.tensorflow.org/code/tensorflow/examples/how_tos/reading_data/convert_to_records.py)
converts MNIST data to this format.
-The recommended way to read a TFRecord file is with a @{tf.data.TFRecordDataset}, [as in this example](https://www.tensorflow.org/code/tensorflow/examples/how_tos/reading_data/fully_connected_reader.py):
+The recommended way to read a TFRecord file is with a `tf.data.TFRecordDataset`, [as in this example](https://www.tensorflow.org/code/tensorflow/examples/how_tos/reading_data/fully_connected_reader.py):
``` python
dataset = tf.data.TFRecordDataset(filename)
@@ -208,7 +208,7 @@ for an example.
At the end of the pipeline we use another queue to batch together examples for
training, evaluation, or inference. For this we use a queue that randomizes the
order of examples, using the
-@{tf.train.shuffle_batch}.
+`tf.train.shuffle_batch` function.
Example:
@@ -240,7 +240,7 @@ def input_pipeline(filenames, batch_size, num_epochs=None):
If you need more parallelism or shuffling of examples between files, use
multiple reader instances using the
-@{tf.train.shuffle_batch_join}.
+`tf.train.shuffle_batch_join` function.
For example:
```
@@ -266,7 +266,7 @@ epoch until all the files from the epoch have been started. (It is also usually
sufficient to have a single thread filling the filename queue.)
An alternative is to use a single reader via the
-@{tf.train.shuffle_batch}
+`tf.train.shuffle_batch` function
with `num_threads` bigger than 1. This will make the threads read from a
single file at the same time (but faster than with 1 thread), instead of from
N files at once.
This can be important:
@@ -284,13 +284,13 @@ enough reading threads, that summary will stay above zero. You can
### Creating threads to prefetch using `QueueRunner` objects
The short version: many of the `tf.train` functions listed above add
-@{tf.train.QueueRunner} objects to your
+`tf.train.QueueRunner` objects to your
graph. These require that you call
-@{tf.train.start_queue_runners}
+`tf.train.start_queue_runners`
before running any training or inference steps, or it will hang forever. This
will start threads that run the input pipeline, filling the example queue so
that the dequeue to get the examples will succeed. This is best combined with a
-@{tf.train.Coordinator} to cleanly
+`tf.train.Coordinator` to cleanly
shut down these threads when there are errors. If you set a limit on the number
of epochs, that will use an epoch counter that will need to be initialized. The
recommended code pattern combining these is:
@@ -343,25 +343,25 @@ queue.
</div>
The helpers in `tf.train` that create these queues and enqueuing operations add
-a @{tf.train.QueueRunner} to the
+a `tf.train.QueueRunner` to the
graph using the
-@{tf.train.add_queue_runner}
+`tf.train.add_queue_runner`
function. Each `QueueRunner` is responsible for one stage, and holds the list of
enqueue operations that need to be run in threads. Once the graph is
constructed, the
-@{tf.train.start_queue_runners}
+`tf.train.start_queue_runners`
function asks each QueueRunner in the graph to start its threads running the
enqueuing operations.
If all goes well, you can now run your training steps and the queues will be
filled by the background threads. If you have set an epoch limit, at some point
an attempt to dequeue examples will get a
-@{tf.errors.OutOfRangeError}. This
+`tf.errors.OutOfRangeError`. This
is the TensorFlow equivalent of "end of file" (EOF) -- this means the epoch
limit has been reached and no more examples are available.
The last ingredient is the
-@{tf.train.Coordinator}. This is responsible
+`tf.train.Coordinator`. This is responsible
for letting all the threads know if anything has signaled a shut down. Most
commonly this would be because an exception was raised, for example one of the
threads got an error when running some operation (or an ordinary Python
@@ -396,21 +396,21 @@ associated with a single QueueRunner. If this isn't the last thread in the
QueueRunner, the `OutOfRange` error just causes the one thread to exit. This
allows the other threads, which are still finishing up their last file, to
proceed until they finish as well. (Assuming you are using a
-@{tf.train.Coordinator},
+`tf.train.Coordinator`,
other types of errors will cause all the threads to stop.) Once all the reader
threads hit the `OutOfRange` error, only then does the next queue, the example
queue, get closed.
Again, the example queue will have some elements queued, so training will
continue until those are exhausted. If the example queue is a
-@{tf.RandomShuffleQueue}, say
+`tf.RandomShuffleQueue`, say
because you are using `shuffle_batch` or `shuffle_batch_join`, it normally will
avoid ever having fewer than its `min_after_dequeue` attr elements buffered.
However, once the queue is closed that restriction will be lifted and the queue
will eventually empty. At that point the actual training threads, when they
try to dequeue from the example queue, will start getting `OutOfRange` errors and
exiting. Once all the training threads are done,
-@{tf.train.Coordinator.join}
+`tf.train.Coordinator.join`
will return and you can exit cleanly.
### Filtering records or producing multiple examples per record
@@ -426,7 +426,7 @@ when calling one of the batching functions (such as `shuffle_batch` or
SparseTensors don't play well with queues. If you use SparseTensors you have
to decode the string records using
-@{tf.parse_example} **after**
+`tf.parse_example` **after**
batching (instead of using `tf.parse_single_example` before batching).
## Preloaded data
@@ -475,11 +475,11 @@ update it when training. Setting `collections=[]` keeps the variable out of the
`GraphKeys.GLOBAL_VARIABLES` collection used for saving and restoring checkpoints.
Either way,
-@{tf.train.slice_input_producer}
+`tf.train.slice_input_producer`
can be used to produce a slice at a time. This shuffles the examples across an
entire epoch, so further shuffling when batching is undesirable. So instead of
using the `shuffle_batch` functions, we use the plain
-@{tf.train.batch} function. To use
+`tf.train.batch` function. To use
multiple preprocessing threads, set the `num_threads` parameter to a number
bigger than 1.
@@ -500,7 +500,7 @@ sessions, maybe in separate processes:
* The evaluation process restores the checkpoint files into an inference
model that reads validation input data.
-This is what is done @{tf.estimator$estimators} and manually in
+This is what is done by `tf.estimator` and manually in
@{$deep_cnn#save-and-restore-checkpoints$the example CIFAR-10 model}.
This has a couple of benefits:
@@ -517,6 +517,6 @@ that allow the user to change the input pipeline without rebuilding the graph or
session.
Note: Regardless of the implementation, many
-operations (like @{tf.layers.batch_normalization}, and @{tf.layers.dropout})
+operations (like `tf.layers.batch_normalization`, and `tf.layers.dropout`)
need to know if they are in training or evaluation mode, and you must be
careful to set this appropriately if you change the data source.
diff --git a/tensorflow/docs_src/api_guides/python/regression_examples.md b/tensorflow/docs_src/api_guides/python/regression_examples.md
index 7de2be0552..f8abbf0f97 100644
--- a/tensorflow/docs_src/api_guides/python/regression_examples.md
+++ b/tensorflow/docs_src/api_guides/python/regression_examples.md
@@ -8,25 +8,25 @@ to implement regression in Estimators:
<tr>
<td><a href="https://www.tensorflow.org/code/tensorflow/examples/get_started/regression/linear_regression.py">linear_regression.py</a></td>
- <td>Use the @{tf.estimator.LinearRegressor} Estimator to train a
+ <td>Use the `tf.estimator.LinearRegressor` Estimator to train a
regression model on numeric data.</td>
</tr>
<tr>
<td><a href="https://www.tensorflow.org/code/tensorflow/examples/get_started/regression/linear_regression_categorical.py">linear_regression_categorical.py</a></td>
- <td>Use the @{tf.estimator.LinearRegressor} Estimator to train a
+ <td>Use the `tf.estimator.LinearRegressor` Estimator to train a
regression model on categorical data.</td>
</tr>
<tr>
<td><a href="https://www.tensorflow.org/code/tensorflow/examples/get_started/regression/dnn_regression.py">dnn_regression.py</a></td>
- <td>Use the @{tf.estimator.DNNRegressor} Estimator to train a
+ <td>Use the `tf.estimator.DNNRegressor` Estimator to train a
regression model on discrete data with a deep neural network.</td>
</tr>
<tr>
<td><a href="https://www.tensorflow.org/code/tensorflow/examples/get_started/regression/custom_regression.py">custom_regression.py</a></td>
- <td>Use @{tf.estimator.Estimator} to train a customized dnn
+ <td>Use `tf.estimator.Estimator` to train a customized dnn
regression model.</td>
</tr>
@@ -219,7 +219,7 @@ The `custom_regression.py` example also trains a model that predicts the price
of a car based on mixed real-valued and categorical input features, described by
`feature_columns`. Unlike `linear_regression_categorical.py` and
`dnn_regression.py`, this example does not use a pre-made estimator, but defines
-a custom model using the base @{tf.estimator.Estimator$`Estimator`} class. The
+a custom model using the base `tf.estimator.Estimator` class. The
custom model is quite similar to the model defined by `dnn_regression.py`.
The custom model is defined by the `model_fn` argument to the constructor. The
@@ -227,6 +227,6 @@ customization is made more reusable through `params` dictionary, which is later
passed through to the `model_fn` when the `model_fn` is called.
The `model_fn` returns an
-@{tf.estimator.EstimatorSpec$`EstimatorSpec`} which is a simple structure
+`tf.estimator.EstimatorSpec`, which is a simple structure
indicating to the `Estimator` which operations should be run to accomplish
various tasks.
diff --git a/tensorflow/docs_src/api_guides/python/session_ops.md b/tensorflow/docs_src/api_guides/python/session_ops.md
index 5176e3549c..5f41bcf209 100644
--- a/tensorflow/docs_src/api_guides/python/session_ops.md
+++ b/tensorflow/docs_src/api_guides/python/session_ops.md
@@ -1,7 +1,7 @@
# Tensor Handle Operations
Note: Functions taking `Tensor` arguments can also take anything accepted by
-@{tf.convert_to_tensor}.
+`tf.convert_to_tensor`.
[TOC]
@@ -10,6 +10,6 @@ Note: Functions taking `Tensor` arguments can also take anything accepted by
TensorFlow provides several operators that allow the user to keep tensors
"in-place" across run calls.
-* @{tf.get_session_handle}
-* @{tf.get_session_tensor}
-* @{tf.delete_session_tensor}
+* `tf.get_session_handle`
+* `tf.get_session_tensor`
+* `tf.delete_session_tensor`
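+
+A minimal sketch of the handle round trip:
+
+```python
+c = tf.constant(3.0)
+handle_op = tf.get_session_handle(c)
+with tf.Session() as sess:
+    handle = sess.run(handle_op)  # keeps the tensor alive in the session
+    holder, tensor = tf.get_session_tensor(handle.handle, tf.float32)
+    print(sess.run(tensor, feed_dict={holder: handle.handle}))  # 3.0
+```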
diff --git a/tensorflow/docs_src/api_guides/python/sparse_ops.md b/tensorflow/docs_src/api_guides/python/sparse_ops.md
index 19d5faba05..b360055ed0 100644
--- a/tensorflow/docs_src/api_guides/python/sparse_ops.md
+++ b/tensorflow/docs_src/api_guides/python/sparse_ops.md
@@ -1,7 +1,7 @@
# Sparse Tensors
Note: Functions taking `Tensor` arguments can also take anything accepted by
-@{tf.convert_to_tensor}.
+`tf.convert_to_tensor`.
[TOC]
@@ -12,34 +12,34 @@ in multiple dimensions. Contrast this representation with `IndexedSlices`,
which is efficient for representing tensors that are sparse in their first
dimension, and dense along all other dimensions.
-* @{tf.SparseTensor}
-* @{tf.SparseTensorValue}
+* `tf.SparseTensor`
+* `tf.SparseTensorValue`
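+
+A minimal sketch of constructing one:
+
+```python
+# A 3x4 tensor with two nonzero entries.
+st = tf.SparseTensor(indices=[[0, 0], [1, 2]],
+                     values=[1.0, 2.0],
+                     dense_shape=[3, 4])
+dense = tf.sparse_tensor_to_dense(st)  # zeros everywhere else
+```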
## Conversion
-* @{tf.sparse_to_dense}
-* @{tf.sparse_tensor_to_dense}
-* @{tf.sparse_to_indicator}
-* @{tf.sparse_merge}
+* `tf.sparse_to_dense`
+* `tf.sparse_tensor_to_dense`
+* `tf.sparse_to_indicator`
+* `tf.sparse_merge`
## Manipulation
-* @{tf.sparse_concat}
-* @{tf.sparse_reorder}
-* @{tf.sparse_reshape}
-* @{tf.sparse_split}
-* @{tf.sparse_retain}
-* @{tf.sparse_reset_shape}
-* @{tf.sparse_fill_empty_rows}
-* @{tf.sparse_transpose}
+* `tf.sparse_concat`
+* `tf.sparse_reorder`
+* `tf.sparse_reshape`
+* `tf.sparse_split`
+* `tf.sparse_retain`
+* `tf.sparse_reset_shape`
+* `tf.sparse_fill_empty_rows`
+* `tf.sparse_transpose`
## Reduction
-* @{tf.sparse_reduce_sum}
-* @{tf.sparse_reduce_sum_sparse}
+* `tf.sparse_reduce_sum`
+* `tf.sparse_reduce_sum_sparse`
## Math Operations
-* @{tf.sparse_add}
-* @{tf.sparse_softmax}
-* @{tf.sparse_tensor_dense_matmul}
-* @{tf.sparse_maximum}
-* @{tf.sparse_minimum}
+* `tf.sparse_add`
+* `tf.sparse_softmax`
+* `tf.sparse_tensor_dense_matmul`
+* `tf.sparse_maximum`
+* `tf.sparse_minimum`
diff --git a/tensorflow/docs_src/api_guides/python/spectral_ops.md b/tensorflow/docs_src/api_guides/python/spectral_ops.md
index dd13802f00..f6d109a3a0 100644
--- a/tensorflow/docs_src/api_guides/python/spectral_ops.md
+++ b/tensorflow/docs_src/api_guides/python/spectral_ops.md
@@ -2,25 +2,25 @@
[TOC]
-The @{tf.spectral} module supports several spectral decomposition operations
+The `tf.spectral` module supports several spectral decomposition operations
that you can use to transform Tensors of real and complex signals.
## Discrete Fourier Transforms
-* @{tf.spectral.fft}
-* @{tf.spectral.ifft}
-* @{tf.spectral.fft2d}
-* @{tf.spectral.ifft2d}
-* @{tf.spectral.fft3d}
-* @{tf.spectral.ifft3d}
-* @{tf.spectral.rfft}
-* @{tf.spectral.irfft}
-* @{tf.spectral.rfft2d}
-* @{tf.spectral.irfft2d}
-* @{tf.spectral.rfft3d}
-* @{tf.spectral.irfft3d}
+* `tf.spectral.fft`
+* `tf.spectral.ifft`
+* `tf.spectral.fft2d`
+* `tf.spectral.ifft2d`
+* `tf.spectral.fft3d`
+* `tf.spectral.ifft3d`
+* `tf.spectral.rfft`
+* `tf.spectral.irfft`
+* `tf.spectral.rfft2d`
+* `tf.spectral.irfft2d`
+* `tf.spectral.rfft3d`
+* `tf.spectral.irfft3d`
## Discrete Cosine Transforms
-* @{tf.spectral.dct}
-* @{tf.spectral.idct}
+* `tf.spectral.dct`
+* `tf.spectral.idct`
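For example, a real signal can be transformed with `rfft` and recovered with `irfft`; a minimal sketch (the input length is arbitrary):
```python
import numpy as np
import tensorflow as tf

signal = tf.constant(np.sin(np.linspace(0.0, 10.0, 64)), dtype=tf.float32)
spectrum = tf.spectral.rfft(signal)      # complex64, length 64 // 2 + 1 = 33
recovered = tf.spectral.irfft(spectrum)  # real again, length 64

with tf.Session() as sess:
    print(sess.run(spectrum).shape)  # (33,)
```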
diff --git a/tensorflow/docs_src/api_guides/python/state_ops.md b/tensorflow/docs_src/api_guides/python/state_ops.md
index ec2d877386..fc55ea1481 100644
--- a/tensorflow/docs_src/api_guides/python/state_ops.md
+++ b/tensorflow/docs_src/api_guides/python/state_ops.md
@@ -1,68 +1,68 @@
# Variables
Note: Functions taking `Tensor` arguments can also take anything accepted by
-@{tf.convert_to_tensor}.
+`tf.convert_to_tensor`.
[TOC]
## Variables
-* @{tf.Variable}
+* `tf.Variable`
## Variable helper functions
TensorFlow provides a set of functions to help manage the set of variables
collected in the graph.
-* @{tf.global_variables}
-* @{tf.local_variables}
-* @{tf.model_variables}
-* @{tf.trainable_variables}
-* @{tf.moving_average_variables}
-* @{tf.global_variables_initializer}
-* @{tf.local_variables_initializer}
-* @{tf.variables_initializer}
-* @{tf.is_variable_initialized}
-* @{tf.report_uninitialized_variables}
-* @{tf.assert_variables_initialized}
-* @{tf.assign}
-* @{tf.assign_add}
-* @{tf.assign_sub}
+* `tf.global_variables`
+* `tf.local_variables`
+* `tf.model_variables`
+* `tf.trainable_variables`
+* `tf.moving_average_variables`
+* `tf.global_variables_initializer`
+* `tf.local_variables_initializer`
+* `tf.variables_initializer`
+* `tf.is_variable_initialized`
+* `tf.report_uninitialized_variables`
+* `tf.assert_variables_initialized`
+* `tf.assign`
+* `tf.assign_add`
+* `tf.assign_sub`
## Saving and Restoring Variables
-* @{tf.train.Saver}
-* @{tf.train.latest_checkpoint}
-* @{tf.train.get_checkpoint_state}
-* @{tf.train.update_checkpoint_state}
+* `tf.train.Saver`
+* `tf.train.latest_checkpoint`
+* `tf.train.get_checkpoint_state`
+* `tf.train.update_checkpoint_state`
## Sharing Variables
TensorFlow provides several classes and operations that you can use to
create variables contingent on certain conditions.
-* @{tf.get_variable}
-* @{tf.get_local_variable}
-* @{tf.VariableScope}
-* @{tf.variable_scope}
-* @{tf.variable_op_scope}
-* @{tf.get_variable_scope}
-* @{tf.make_template}
-* @{tf.no_regularizer}
-* @{tf.constant_initializer}
-* @{tf.random_normal_initializer}
-* @{tf.truncated_normal_initializer}
-* @{tf.random_uniform_initializer}
-* @{tf.uniform_unit_scaling_initializer}
-* @{tf.zeros_initializer}
-* @{tf.ones_initializer}
-* @{tf.orthogonal_initializer}
+* `tf.get_variable`
+* `tf.get_local_variable`
+* `tf.VariableScope`
+* `tf.variable_scope`
+* `tf.variable_op_scope`
+* `tf.get_variable_scope`
+* `tf.make_template`
+* `tf.no_regularizer`
+* `tf.constant_initializer`
+* `tf.random_normal_initializer`
+* `tf.truncated_normal_initializer`
+* `tf.random_uniform_initializer`
+* `tf.uniform_unit_scaling_initializer`
+* `tf.zeros_initializer`
+* `tf.ones_initializer`
+* `tf.orthogonal_initializer`
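The usual sharing pattern combines `tf.get_variable` with `tf.variable_scope`; a sketch (layer sizes are arbitrary):
```python
import tensorflow as tf

def dense_layer(x):
    w = tf.get_variable("w", shape=[8, 4],
                        initializer=tf.truncated_normal_initializer())
    b = tf.get_variable("b", shape=[4], initializer=tf.zeros_initializer())
    return tf.matmul(x, w) + b

x1 = tf.placeholder(tf.float32, [None, 8])
x2 = tf.placeholder(tf.float32, [None, 8])
with tf.variable_scope("layer"):
    y1 = dense_layer(x1)
with tf.variable_scope("layer", reuse=True):
    y2 = dense_layer(x2)  # reuses "layer/w" and "layer/b"
```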
## Variable Partitioners for Sharding
-* @{tf.fixed_size_partitioner}
-* @{tf.variable_axis_size_partitioner}
-* @{tf.min_max_variable_partitioner}
+* `tf.fixed_size_partitioner`
+* `tf.variable_axis_size_partitioner`
+* `tf.min_max_variable_partitioner`
## Sparse Variable Updates
@@ -73,38 +73,38 @@ only a small subset of embedding vectors change in any given step.
Since a sparse update of a large tensor may be generated automatically during
gradient computation (as in the gradient of
-@{tf.gather}),
-an @{tf.IndexedSlices} class is provided that encapsulates a set
+`tf.gather`),
+a `tf.IndexedSlices` class is provided that encapsulates a set
of sparse indices and values. `IndexedSlices` objects are detected and handled
automatically by the optimizers in most cases.
-* @{tf.scatter_update}
-* @{tf.scatter_add}
-* @{tf.scatter_sub}
-* @{tf.scatter_mul}
-* @{tf.scatter_div}
-* @{tf.scatter_min}
-* @{tf.scatter_max}
-* @{tf.scatter_nd_update}
-* @{tf.scatter_nd_add}
-* @{tf.scatter_nd_sub}
-* @{tf.sparse_mask}
-* @{tf.IndexedSlices}
+* `tf.scatter_update`
+* `tf.scatter_add`
+* `tf.scatter_sub`
+* `tf.scatter_mul`
+* `tf.scatter_div`
+* `tf.scatter_min`
+* `tf.scatter_max`
+* `tf.scatter_nd_update`
+* `tf.scatter_nd_add`
+* `tf.scatter_nd_sub`
+* `tf.sparse_mask`
+* `tf.IndexedSlices`
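For instance, `tf.scatter_update` rewrites only the rows named by `indices`; a minimal sketch:
```python
import tensorflow as tf

v = tf.Variable(tf.zeros([4, 3]))
# Overwrite rows 0 and 2; rows 1 and 3 keep their old values.
update = tf.scatter_update(v, indices=[0, 2], updates=tf.ones([2, 3]))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(update))
```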
### Read-only Lookup Tables
-* @{tf.initialize_all_tables}
-* @{tf.tables_initializer}
+* `tf.initialize_all_tables`
+* `tf.tables_initializer`
## Exporting and Importing Meta Graphs
-* @{tf.train.export_meta_graph}
-* @{tf.train.import_meta_graph}
+* `tf.train.export_meta_graph`
+* `tf.train.import_meta_graph`
# Deprecated functions (removed after 2017-03-02). Please don't use them.
-* @{tf.all_variables}
-* @{tf.initialize_all_variables}
-* @{tf.initialize_local_variables}
-* @{tf.initialize_variables}
+* `tf.all_variables`
+* `tf.initialize_all_variables`
+* `tf.initialize_local_variables`
+* `tf.initialize_variables`
diff --git a/tensorflow/docs_src/api_guides/python/string_ops.md b/tensorflow/docs_src/api_guides/python/string_ops.md
index e9be4f156a..24a3aad642 100644
--- a/tensorflow/docs_src/api_guides/python/string_ops.md
+++ b/tensorflow/docs_src/api_guides/python/string_ops.md
@@ -1,7 +1,7 @@
# Strings
Note: Functions taking `Tensor` arguments can also take anything accepted by
-@{tf.convert_to_tensor}.
+`tf.convert_to_tensor`.
[TOC]
@@ -10,30 +10,30 @@ Note: Functions taking `Tensor` arguments can also take anything accepted by
String hashing ops take a string input tensor and map each element to an
integer.
-* @{tf.string_to_hash_bucket_fast}
-* @{tf.string_to_hash_bucket_strong}
-* @{tf.string_to_hash_bucket}
+* `tf.string_to_hash_bucket_fast`
+* `tf.string_to_hash_bucket_strong`
+* `tf.string_to_hash_bucket`
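For example (the bucket count is arbitrary):
```python
import tensorflow as tf

words = tf.constant(["cat", "dog", "cat"])
buckets = tf.string_to_hash_bucket_fast(words, num_buckets=10)

with tf.Session() as sess:
    print(sess.run(buckets))  # equal strings always land in the same bucket
```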
## Joining
String joining ops concatenate elements of input string tensors to produce a new
string tensor.
-* @{tf.reduce_join}
-* @{tf.string_join}
+* `tf.reduce_join`
+* `tf.string_join`
## Splitting
-* @{tf.string_split}
-* @{tf.substr}
+* `tf.string_split`
+* `tf.substr`
## Conversion
-* @{tf.as_string}
-* @{tf.string_to_number}
+* `tf.as_string`
+* `tf.string_to_number`
-* @{tf.decode_raw}
-* @{tf.decode_csv}
+* `tf.decode_raw`
+* `tf.decode_csv`
-* @{tf.encode_base64}
-* @{tf.decode_base64}
+* `tf.encode_base64`
+* `tf.decode_base64`
diff --git a/tensorflow/docs_src/api_guides/python/summary.md b/tensorflow/docs_src/api_guides/python/summary.md
index eda119ab24..e290703b7d 100644
--- a/tensorflow/docs_src/api_guides/python/summary.md
+++ b/tensorflow/docs_src/api_guides/python/summary.md
@@ -7,17 +7,17 @@ then accessible in tools such as @{$summaries_and_tensorboard$TensorBoard}.
## Generation of Summaries
### Class for writing Summaries
-* @{tf.summary.FileWriter}
-* @{tf.summary.FileWriterCache}
+* `tf.summary.FileWriter`
+* `tf.summary.FileWriterCache`
### Summary Ops
-* @{tf.summary.tensor_summary}
-* @{tf.summary.scalar}
-* @{tf.summary.histogram}
-* @{tf.summary.audio}
-* @{tf.summary.image}
-* @{tf.summary.merge}
-* @{tf.summary.merge_all}
+* `tf.summary.tensor_summary`
+* `tf.summary.scalar`
+* `tf.summary.histogram`
+* `tf.summary.audio`
+* `tf.summary.image`
+* `tf.summary.merge`
+* `tf.summary.merge_all`
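A minimal sketch combining a summary op with a `FileWriter` (the log directory is chosen for illustration):
```python
import tensorflow as tf

loss = tf.placeholder(tf.float32, name="loss")
tf.summary.scalar("loss", loss)
merged = tf.summary.merge_all()

with tf.Session() as sess:
    writer = tf.summary.FileWriter("/tmp/logs", sess.graph)
    for step in range(3):
        s = sess.run(merged, feed_dict={loss: 1.0 / (step + 1)})
        writer.add_summary(s, global_step=step)
    writer.close()
```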
## Utilities
-* @{tf.summary.get_summary_description}
+* `tf.summary.get_summary_description`
diff --git a/tensorflow/docs_src/api_guides/python/test.md b/tensorflow/docs_src/api_guides/python/test.md
index 5dc88124e7..b6e0a332b9 100644
--- a/tensorflow/docs_src/api_guides/python/test.md
+++ b/tensorflow/docs_src/api_guides/python/test.md
@@ -23,25 +23,25 @@ which adds methods relevant to TensorFlow tests. Here is an example:
```
`tf.test.TestCase` inherits from `unittest.TestCase` but adds a few additional
-methods. See @{tf.test.TestCase} for details.
+methods. See `tf.test.TestCase` for details.
-* @{tf.test.main}
-* @{tf.test.TestCase}
-* @{tf.test.test_src_dir_path}
+* `tf.test.main`
+* `tf.test.TestCase`
+* `tf.test.test_src_dir_path`
## Utilities
Note: `tf.test.mock` is an alias to the python `mock` or `unittest.mock`
depending on the python version.
-* @{tf.test.assert_equal_graph_def}
-* @{tf.test.get_temp_dir}
-* @{tf.test.is_built_with_cuda}
-* @{tf.test.is_gpu_available}
-* @{tf.test.gpu_device_name}
+* `tf.test.assert_equal_graph_def`
+* `tf.test.get_temp_dir`
+* `tf.test.is_built_with_cuda`
+* `tf.test.is_gpu_available`
+* `tf.test.gpu_device_name`
## Gradient checking
-@{tf.test.compute_gradient} and @{tf.test.compute_gradient_error} perform
+`tf.test.compute_gradient` and `tf.test.compute_gradient_error` perform
numerical differentiation of graphs for comparison against registered analytic
gradients.
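For example, a sketch that checks the gradient of `tf.tanh` (shapes chosen arbitrarily):
```python
import tensorflow as tf

x = tf.constant([[1.0, 2.0]])
y = tf.tanh(x)

with tf.Session():
    # Maximum difference between the numeric and analytic gradients.
    err = tf.test.compute_gradient_error(x, [1, 2], y, [1, 2])
    print(err)  # small, e.g. on the order of 1e-4 for float32
```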
diff --git a/tensorflow/docs_src/api_guides/python/tfdbg.md b/tensorflow/docs_src/api_guides/python/tfdbg.md
index 2212a2da0e..9778cdc0b0 100644
--- a/tensorflow/docs_src/api_guides/python/tfdbg.md
+++ b/tensorflow/docs_src/api_guides/python/tfdbg.md
@@ -8,9 +8,9 @@ Public Python API of TensorFlow Debugger (tfdbg).
These functions help you modify `RunOptions` to specify which `Tensor`s are to
be watched when the TensorFlow graph is executed at runtime.
-* @{tfdbg.add_debug_tensor_watch}
-* @{tfdbg.watch_graph}
-* @{tfdbg.watch_graph_with_blacklists}
+* `tfdbg.add_debug_tensor_watch`
+* `tfdbg.watch_graph`
+* `tfdbg.watch_graph_with_blacklists`
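A minimal sketch of watching a graph via `RunOptions` (the import alias and dump directory are assumptions; adjust for your setup):
```python
import tensorflow as tf
from tensorflow.python import debug as tfdbg  # alias assumed for this sketch

x = tf.constant([1.0, 2.0])
y = x * 2.0

with tf.Session() as sess:
    run_options = tf.RunOptions()
    tfdbg.watch_graph(run_options, sess.graph,
                      debug_urls=["file:///tmp/tfdbg_dumps"])
    sess.run(y, options=run_options)  # tensors are dumped to /tmp/tfdbg_dumps
```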
## Classes for debug-dump data and directories
@@ -18,13 +18,13 @@ be watched when the TensorFlow graph is executed at runtime.
These classes allow you to load and inspect tensor values dumped from
TensorFlow graphs during runtime.
-* @{tfdbg.DebugTensorDatum}
-* @{tfdbg.DebugDumpDir}
+* `tfdbg.DebugTensorDatum`
+* `tfdbg.DebugDumpDir`
## Functions for loading debug-dump data
-* @{tfdbg.load_tensor_from_event_file}
+* `tfdbg.load_tensor_from_event_file`
## Tensor-value predicates
@@ -32,7 +32,7 @@ TensorFlow graphs during runtime.
Built-in tensor-filter predicates to support conditional breakpoint between
runs. See `DebugDumpDir.find()` for more details.
-* @{tfdbg.has_inf_or_nan}
+* `tfdbg.has_inf_or_nan`
## Session wrapper class and `SessionRunHook` implementations
@@ -44,7 +44,7 @@ These classes allow you to
* generate `SessionRunHook` objects to debug `tf.contrib.learn` models (see
`DumpingDebugHook` and `LocalCLIDebugHook`).
-* @{tfdbg.DumpingDebugHook}
-* @{tfdbg.DumpingDebugWrapperSession}
-* @{tfdbg.LocalCLIDebugHook}
-* @{tfdbg.LocalCLIDebugWrapperSession}
+* `tfdbg.DumpingDebugHook`
+* `tfdbg.DumpingDebugWrapperSession`
+* `tfdbg.LocalCLIDebugHook`
+* `tfdbg.LocalCLIDebugWrapperSession`
diff --git a/tensorflow/docs_src/api_guides/python/threading_and_queues.md b/tensorflow/docs_src/api_guides/python/threading_and_queues.md
index 8ad4c4c075..48f0778b73 100644
--- a/tensorflow/docs_src/api_guides/python/threading_and_queues.md
+++ b/tensorflow/docs_src/api_guides/python/threading_and_queues.md
@@ -25,7 +25,7 @@ longer holds, the queue will unblock the step and allow execution to proceed.
TensorFlow implements several classes of queue. The principal difference between
these classes is the order that items are removed from the queue. To get a feel
for queues, let's consider a simple example. We will create a "first in, first
-out" queue (@{tf.FIFOQueue}) and fill it with zeros. Then we'll construct a
+out" queue (`tf.FIFOQueue`) and fill it with zeros. Then we'll construct a
graph that takes an item off the queue, adds one to that item, and puts it back
on the end of the queue. Slowly, the numbers on the queue increase.
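A sketch of that loop in code:
```python
import tensorflow as tf

q = tf.FIFOQueue(3, tf.float32)
init = q.enqueue_many(([0.0, 0.0, 0.0],))
x = q.dequeue()
q_inc = q.enqueue([x + 1])

with tf.Session() as sess:
    sess.run(init)
    for _ in range(6):
        sess.run(q_inc)  # each run: remove an item, add one, re-enqueue it
    for _ in range(3):
        print(sess.run(x))  # 2.0, three times
```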
@@ -47,8 +47,8 @@ Now that you have a bit of a feel for queues, let's dive into the details...
## Queue usage overview
-Queues, such as @{tf.FIFOQueue}
-and @{tf.RandomShuffleQueue},
+Queues, such as `tf.FIFOQueue`
+and `tf.RandomShuffleQueue`,
are important TensorFlow objects that aid in computing tensors asynchronously
in a graph.
@@ -59,11 +59,11 @@ prepare inputs for training a model as follows:
* A training thread executes a training op that dequeues mini-batches from the
queue
-We recommend using the @{tf.data.Dataset.shuffle$`shuffle`}
-and @{tf.data.Dataset.batch$`batch`} methods of a
-@{tf.data.Dataset$`Dataset`} to accomplish this. However, if you'd prefer
+We recommend using the `tf.data.Dataset.shuffle`
+and `tf.data.Dataset.batch` methods of a
+`tf.data.Dataset` to accomplish this. However, if you'd prefer
to use a queue-based version instead, you can find a full implementation in the
-@{tf.train.shuffle_batch} function.
+`tf.train.shuffle_batch` function.
For demonstration purposes a simplified implementation is given below.
@@ -93,8 +93,8 @@ def simple_shuffle_batch(source, capacity, batch_size=10):
return queue.dequeue_many(batch_size)
```
-Once started by @{tf.train.start_queue_runners}, or indirectly through
-@{tf.train.MonitoredSession}, the `QueueRunner` will launch the
+Once started by `tf.train.start_queue_runners`, or indirectly through
+`tf.train.MonitoredSession`, the `QueueRunner` will launch the
threads in the background to fill the queue. Meanwhile the main thread will
execute the `dequeue_many` op to pull data from it. Note how these ops do not
depend on each other, except indirectly through the internal state of the queue.
@@ -126,7 +126,7 @@ with tf.train.MonitoredSession() as sess:
```
For most use cases, the automatic thread startup and management provided
-by @{tf.train.MonitoredSession} is sufficient. In the rare case that it is not,
+by `tf.train.MonitoredSession` is sufficient. In the rare case that it is not,
TensorFlow provides tools for manually managing your threads and queues.
## Manual Thread Management
@@ -139,8 +139,8 @@ threads must be able to stop together, exceptions must be caught and
reported, and queues must be properly closed when stopping.
TensorFlow provides two classes to help:
-@{tf.train.Coordinator} and
-@{tf.train.QueueRunner}. These two classes
+`tf.train.Coordinator` and
+`tf.train.QueueRunner`. These two classes
are designed to be used together. The `Coordinator` class helps multiple threads
stop together and report exceptions to a program that waits for them to stop.
The `QueueRunner` class is used to create a number of threads cooperating to
@@ -148,14 +148,14 @@ enqueue tensors in the same queue.
### Coordinator
-The @{tf.train.Coordinator} class manages background threads in a TensorFlow
+The `tf.train.Coordinator` class manages background threads in a TensorFlow
program and helps multiple threads stop together.
Its key methods are:
-* @{tf.train.Coordinator.should_stop}: returns `True` if the threads should stop.
-* @{tf.train.Coordinator.request_stop}: requests that threads should stop.
-* @{tf.train.Coordinator.join}: waits until the specified threads have stopped.
+* `tf.train.Coordinator.should_stop`: returns `True` if the threads should stop.
+* `tf.train.Coordinator.request_stop`: requests that threads should stop.
+* `tf.train.Coordinator.join`: waits until the specified threads have stopped.
You first create a `Coordinator` object, and then create a number of threads
that use the coordinator. The threads typically run loops that stop when
@@ -191,11 +191,11 @@ coord.join(threads)
Obviously, the coordinator can manage threads doing very different things.
They don't have to be all the same as in the example above. The coordinator
-also has support to capture and report exceptions. See the @{tf.train.Coordinator} documentation for more details.
+also has support to capture and report exceptions. See the `tf.train.Coordinator` documentation for more details.
### QueueRunner
-The @{tf.train.QueueRunner} class creates a number of threads that repeatedly
+The `tf.train.QueueRunner` class creates a number of threads that repeatedly
run an enqueue op. These threads can use a coordinator to stop together. In
addition, a queue runner will run a *closer operation* that closes the queue if
an exception is reported to the coordinator.
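A sketch of the pattern (the queue type, sizes, and thread count are arbitrary):
```python
import tensorflow as tf

example = tf.random_normal([2])  # stand-in for real input preprocessing
queue = tf.RandomShuffleQueue(capacity=100, min_after_dequeue=10,
                              dtypes=[tf.float32], shapes=[[2]])
enqueue_op = queue.enqueue(example)
qr = tf.train.QueueRunner(queue, [enqueue_op] * 4)  # four enqueue threads

coord = tf.train.Coordinator()
with tf.Session() as sess:
    threads = qr.create_threads(sess, coord=coord, start=True)
    print(sess.run(queue.dequeue_many(10)).shape)  # (10, 2)
    coord.request_stop()
    coord.join(threads)
```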
diff --git a/tensorflow/docs_src/api_guides/python/train.md b/tensorflow/docs_src/api_guides/python/train.md
index cbc5052946..a118123665 100644
--- a/tensorflow/docs_src/api_guides/python/train.md
+++ b/tensorflow/docs_src/api_guides/python/train.md
@@ -1,7 +1,7 @@
# Training
[TOC]
-@{tf.train} provides a set of classes and functions that help train models.
+`tf.train` provides a set of classes and functions that help train models.
## Optimizers
@@ -12,19 +12,19 @@ optimization algorithms such as GradientDescent and Adagrad.
You never instantiate the Optimizer class itself, but instead instantiate one
of the subclasses.
-* @{tf.train.Optimizer}
-* @{tf.train.GradientDescentOptimizer}
-* @{tf.train.AdadeltaOptimizer}
-* @{tf.train.AdagradOptimizer}
-* @{tf.train.AdagradDAOptimizer}
-* @{tf.train.MomentumOptimizer}
-* @{tf.train.AdamOptimizer}
-* @{tf.train.FtrlOptimizer}
-* @{tf.train.ProximalGradientDescentOptimizer}
-* @{tf.train.ProximalAdagradOptimizer}
-* @{tf.train.RMSPropOptimizer}
+* `tf.train.Optimizer`
+* `tf.train.GradientDescentOptimizer`
+* `tf.train.AdadeltaOptimizer`
+* `tf.train.AdagradOptimizer`
+* `tf.train.AdagradDAOptimizer`
+* `tf.train.MomentumOptimizer`
+* `tf.train.AdamOptimizer`
+* `tf.train.FtrlOptimizer`
+* `tf.train.ProximalGradientDescentOptimizer`
+* `tf.train.ProximalAdagradOptimizer`
+* `tf.train.RMSPropOptimizer`
-See @{tf.contrib.opt} for more optimizers.
+See `tf.contrib.opt` for more optimizers.
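All subclasses share the same basic usage; a sketch with gradient descent (the learning rate is arbitrary):
```python
import tensorflow as tf

x = tf.Variable(5.0)
loss = tf.square(x)
train_op = tf.train.GradientDescentOptimizer(learning_rate=0.1).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(100):
        sess.run(train_op)
    print(sess.run(x))  # close to 0.0
```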
## Gradient Computation
@@ -34,10 +34,10 @@ optimizer classes automatically compute derivatives on your graph, but
creators of new Optimizers or expert users can call the lower-level
functions below.
-* @{tf.gradients}
-* @{tf.AggregationMethod}
-* @{tf.stop_gradient}
-* @{tf.hessians}
+* `tf.gradients`
+* `tf.AggregationMethod`
+* `tf.stop_gradient`
+* `tf.hessians`
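For example, `tf.gradients` builds the symbolic derivative of one tensor with respect to another:
```python
import tensorflow as tf

x = tf.constant(3.0)
y = tf.square(x)
dy_dx, = tf.gradients(y, [x])  # symbolic 2 * x

with tf.Session() as sess:
    print(sess.run(dy_dx))  # 6.0
```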
## Gradient Clipping
@@ -47,22 +47,22 @@ functions to your graph. You can use these functions to perform general data
clipping, but they're particularly useful for handling exploding or vanishing
gradients.
-* @{tf.clip_by_value}
-* @{tf.clip_by_norm}
-* @{tf.clip_by_average_norm}
-* @{tf.clip_by_global_norm}
-* @{tf.global_norm}
+* `tf.clip_by_value`
+* `tf.clip_by_norm`
+* `tf.clip_by_average_norm`
+* `tf.clip_by_global_norm`
+* `tf.global_norm`
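A common pattern clips all gradients by their global norm before applying them; a sketch (the threshold is arbitrary):
```python
import tensorflow as tf

x = tf.Variable([3.0, 4.0])
loss = tf.reduce_sum(tf.square(x))

optimizer = tf.train.GradientDescentOptimizer(0.1)
grads_and_vars = optimizer.compute_gradients(loss)
grads, variables = zip(*grads_and_vars)
clipped, _ = tf.clip_by_global_norm(grads, clip_norm=5.0)  # rescale if norm > 5
train_op = optimizer.apply_gradients(zip(clipped, variables))
```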
## Decaying the learning rate
-* @{tf.train.exponential_decay}
-* @{tf.train.inverse_time_decay}
-* @{tf.train.natural_exp_decay}
-* @{tf.train.piecewise_constant}
-* @{tf.train.polynomial_decay}
-* @{tf.train.cosine_decay}
-* @{tf.train.linear_cosine_decay}
-* @{tf.train.noisy_linear_cosine_decay}
+* `tf.train.exponential_decay`
+* `tf.train.inverse_time_decay`
+* `tf.train.natural_exp_decay`
+* `tf.train.piecewise_constant`
+* `tf.train.polynomial_decay`
+* `tf.train.cosine_decay`
+* `tf.train.linear_cosine_decay`
+* `tf.train.noisy_linear_cosine_decay`
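These all produce a learning-rate tensor that depends on the global step; for example (the decay constants are arbitrary):
```python
import tensorflow as tf

global_step = tf.train.get_or_create_global_step()
# Start at 0.1 and multiply by 0.96 every 1000 steps.
learning_rate = tf.train.exponential_decay(
    0.1, global_step, decay_steps=1000, decay_rate=0.96, staircase=True)
optimizer = tf.train.GradientDescentOptimizer(learning_rate)
```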
## Moving Averages
@@ -70,7 +70,7 @@ Some training algorithms, such as GradientDescent and Momentum often benefit
from maintaining a moving average of variables during optimization. Using the
moving averages for evaluation often improves results significantly.
-* @{tf.train.ExponentialMovingAverage}
+* `tf.train.ExponentialMovingAverage`
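A sketch of maintaining the average after every training step (the decay value is arbitrary):
```python
import tensorflow as tf

w = tf.Variable(0.0)
loss = tf.square(w - 1.0)
opt_op = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

ema = tf.train.ExponentialMovingAverage(decay=0.99)
with tf.control_dependencies([opt_op]):
    train_op = ema.apply([w])  # update the shadow average after each step

w_avg = ema.average(w)  # the smoothed value, often read at evaluation time
```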
## Coordinator and QueueRunner
@@ -79,61 +79,61 @@ for how to use threads and queues. For documentation on the Queue API,
see @{$python/io_ops#queues$Queues}.
-* @{tf.train.Coordinator}
-* @{tf.train.QueueRunner}
-* @{tf.train.LooperThread}
-* @{tf.train.add_queue_runner}
-* @{tf.train.start_queue_runners}
+* `tf.train.Coordinator`
+* `tf.train.QueueRunner`
+* `tf.train.LooperThread`
+* `tf.train.add_queue_runner`
+* `tf.train.start_queue_runners`
## Distributed execution
See @{$distributed$Distributed TensorFlow} for
more information about how to configure a distributed TensorFlow program.
-* @{tf.train.Server}
-* @{tf.train.Supervisor}
-* @{tf.train.SessionManager}
-* @{tf.train.ClusterSpec}
-* @{tf.train.replica_device_setter}
-* @{tf.train.MonitoredTrainingSession}
-* @{tf.train.MonitoredSession}
-* @{tf.train.SingularMonitoredSession}
-* @{tf.train.Scaffold}
-* @{tf.train.SessionCreator}
-* @{tf.train.ChiefSessionCreator}
-* @{tf.train.WorkerSessionCreator}
+* `tf.train.Server`
+* `tf.train.Supervisor`
+* `tf.train.SessionManager`
+* `tf.train.ClusterSpec`
+* `tf.train.replica_device_setter`
+* `tf.train.MonitoredTrainingSession`
+* `tf.train.MonitoredSession`
+* `tf.train.SingularMonitoredSession`
+* `tf.train.Scaffold`
+* `tf.train.SessionCreator`
+* `tf.train.ChiefSessionCreator`
+* `tf.train.WorkerSessionCreator`
## Reading Summaries from Event Files
See @{$summaries_and_tensorboard$Summaries and TensorBoard} for an
overview of summaries, event files, and visualization in TensorBoard.
-* @{tf.train.summary_iterator}
+* `tf.train.summary_iterator`
## Training Hooks
Hooks are tools that run in the process of training/evaluation of the model.
-* @{tf.train.SessionRunHook}
-* @{tf.train.SessionRunArgs}
-* @{tf.train.SessionRunContext}
-* @{tf.train.SessionRunValues}
-* @{tf.train.LoggingTensorHook}
-* @{tf.train.StopAtStepHook}
-* @{tf.train.CheckpointSaverHook}
-* @{tf.train.NewCheckpointReader}
-* @{tf.train.StepCounterHook}
-* @{tf.train.NanLossDuringTrainingError}
-* @{tf.train.NanTensorHook}
-* @{tf.train.SummarySaverHook}
-* @{tf.train.GlobalStepWaiterHook}
-* @{tf.train.FinalOpsHook}
-* @{tf.train.FeedFnHook}
+* `tf.train.SessionRunHook`
+* `tf.train.SessionRunArgs`
+* `tf.train.SessionRunContext`
+* `tf.train.SessionRunValues`
+* `tf.train.LoggingTensorHook`
+* `tf.train.StopAtStepHook`
+* `tf.train.CheckpointSaverHook`
+* `tf.train.NewCheckpointReader`
+* `tf.train.StepCounterHook`
+* `tf.train.NanLossDuringTrainingError`
+* `tf.train.NanTensorHook`
+* `tf.train.SummarySaverHook`
+* `tf.train.GlobalStepWaiterHook`
+* `tf.train.FinalOpsHook`
+* `tf.train.FeedFnHook`
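Hooks are passed to a monitored session; a sketch with two of the hooks above (the model and step counts are placeholders):
```python
import tensorflow as tf

global_step = tf.train.get_or_create_global_step()
loss = tf.square(tf.Variable(3.0))
train_op = tf.train.GradientDescentOptimizer(0.1).minimize(
    loss, global_step=global_step)

hooks = [
    tf.train.StopAtStepHook(last_step=500),
    tf.train.LoggingTensorHook({"loss": loss}, every_n_iter=100),
]
with tf.train.MonitoredTrainingSession(hooks=hooks) as sess:
    while not sess.should_stop():
        sess.run(train_op)
```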
## Training Utilities
-* @{tf.train.global_step}
-* @{tf.train.basic_train_loop}
-* @{tf.train.get_global_step}
-* @{tf.train.assert_global_step}
-* @{tf.train.write_graph}
+* `tf.train.global_step`
+* `tf.train.basic_train_loop`
+* `tf.train.get_global_step`
+* `tf.train.assert_global_step`
+* `tf.train.write_graph`
diff --git a/tensorflow/docs_src/community/index.md b/tensorflow/docs_src/community/index.md
index eec2e51a87..0aa8e7612a 100644
--- a/tensorflow/docs_src/community/index.md
+++ b/tensorflow/docs_src/community/index.md
@@ -54,7 +54,7 @@ with content from the TensorFlow team and the best articles from the community.
### YouTube
-Our [YouTube Channel](http://youtube.com/tensorflow/) focuses on machine learing
+Our [YouTube Channel](http://youtube.com/tensorflow/) focuses on machine learning
and AI with TensorFlow. On it we have a number of new shows, including:
- TensorFlow Meets: meet with community contributors to learn and share what they're doing
diff --git a/tensorflow/docs_src/community/lists.md b/tensorflow/docs_src/community/lists.md
index 7450ab36c4..bc2f573c29 100644
--- a/tensorflow/docs_src/community/lists.md
+++ b/tensorflow/docs_src/community/lists.md
@@ -32,6 +32,8 @@ These projects inside the TensorFlow GitHub organization have lists dedicated to
and peer support for TensorFlow.js.
* [tflite](https://groups.google.com/a/tensorflow.org/d/forum/tflite) - Discussion and
peer support for TensorFlow Lite.
+* [tfprobability](https://groups.google.com/a/tensorflow.org/d/forum/tfprobability) - Discussion and
+ peer support for TensorFlow Probability.
* [tpu-users](https://groups.google.com/a/tensorflow.org/d/forum/tpu-users) - Community discussion
and support for TPU users.
diff --git a/tensorflow/docs_src/community/style_guide.md b/tensorflow/docs_src/community/style_guide.md
index c9268790a7..daf0d2fdc0 100644
--- a/tensorflow/docs_src/community/style_guide.md
+++ b/tensorflow/docs_src/community/style_guide.md
@@ -47,27 +47,7 @@ licenses(["notice"]) # Apache 2.0
exports_files(["LICENSE"])
```
-* At the end of every BUILD file, should contain:
-```
-filegroup(
- name = "all_files",
- srcs = glob(
- ["**/*"],
- exclude = [
- "**/METADATA",
- "**/OWNERS",
- ],
- ),
- visibility = ["//tensorflow:__subpackages__"],
-)
-```
-
-* When adding new BUILD file, add this line to `tensorflow/BUILD` file into `all_opensource_files` target.
-
-```
-"//tensorflow/<directory>:all_files",
-```
* For all Python BUILD targets (libraries and tests) add next line:
@@ -80,6 +60,9 @@ srcs_version = "PY2AND3",
* Operations that deal with batches may assume that the first dimension of a Tensor is the batch dimension.
+* In most models, the *last dimension* is the number of channels.
+
+* Dimensions excluding the first and last usually make up the "space" dimensions: sequence length or image size.
## Python operations
@@ -148,37 +131,6 @@ Usage:
## Layers
-A *Layer* is a Python operation that combines variable creation and/or one or many
-other graph operations. Follow the same requirements as for regular Python
-operation.
-
-* If a layer creates one or more variables, the layer function
- should take next arguments also following order:
- - `initializers`: Optionally allow to specify initializers for the variables.
- - `regularizers`: Optionally allow to specify regularizers for the variables.
- - `trainable`: which control if their variables are trainable or not.
- - `scope`: `VariableScope` object that variable will be put under.
- - `reuse`: `bool` indicator if the variable should be reused if
- it's present in the scope.
-
-* Layers that behave differently during training should take:
- - `is_training`: `bool` indicator to conditionally choose different
- computation paths (e.g. using `tf.cond`) during execution.
-
-Example:
-
- def conv2d(inputs,
- num_filters_out,
- kernel_size,
- stride=1,
- padding='SAME',
- activation_fn=tf.nn.relu,
- normalization_fn=add_bias,
- normalization_params=None,
- initializers=None,
- regularizers=None,
- trainable=True,
- scope=None,
- reuse=None):
- ... see implementation at tensorflow/contrib/layers/python/layers/layers.py ...
+Use `tf.keras.layers`, not `tf.layers`.
+See `tf.keras.layers` and [the Keras guide](../guide/keras.md#custom_layers) for details on how to subclass layers.
diff --git a/tensorflow/docs_src/deploy/distributed.md b/tensorflow/docs_src/deploy/distributed.md
index 8e2c818e39..6a760f53c8 100644
--- a/tensorflow/docs_src/deploy/distributed.md
+++ b/tensorflow/docs_src/deploy/distributed.md
@@ -21,7 +21,7 @@ $ python
```
The
-@{tf.train.Server.create_local_server}
+`tf.train.Server.create_local_server`
method creates a single-process cluster, with an in-process server.
## Create a cluster
@@ -55,7 +55,7 @@ the following:
The cluster specification dictionary maps job names to lists of network
addresses. Pass this dictionary to
-the @{tf.train.ClusterSpec}
+the `tf.train.ClusterSpec`
constructor. For example:
<table>
@@ -84,10 +84,10 @@ tf.train.ClusterSpec({
### Create a `tf.train.Server` instance in each task
-A @{tf.train.Server} object contains a
+A `tf.train.Server` object contains a
set of local devices, a set of connections to other tasks in its
`tf.train.ClusterSpec`, and a
-@{tf.Session} that can use these
+`tf.Session` that can use these
to perform a distributed computation. Each server is a member of a specific
named job and has a task index within that job. A server can communicate with
any other server in the cluster.
@@ -117,7 +117,7 @@ which you'd like to see support, please raise a
## Specifying distributed devices in your model
To place operations on a particular process, you can use the same
-@{tf.device}
+`tf.device`
function that is used to specify whether ops run on the CPU or GPU. For example:
```python
@@ -165,7 +165,7 @@ simplify the work of specifying a replicated model. Possible approaches include:
for each `/job:worker` task, typically in the same process as the worker
task. Each client builds a similar graph containing the parameters (pinned to
`/job:ps` as before using
- @{tf.train.replica_device_setter}
+ `tf.train.replica_device_setter`
to map them deterministically to the same tasks); and a single copy of the
compute-intensive part of the model, pinned to the local task in
`/job:worker`.
@@ -180,7 +180,7 @@ simplify the work of specifying a replicated model. Possible approaches include:
gradient averaging as in the
[CIFAR-10 multi-GPU trainer](https://github.com/tensorflow/models/tree/master/tutorials/image/cifar10/cifar10_multi_gpu_train.py)),
and between-graph replication (e.g. using the
- @{tf.train.SyncReplicasOptimizer}).
+ `tf.train.SyncReplicasOptimizer`).
### Putting it all together: example trainer program
@@ -314,11 +314,11 @@ serve multiple clients.
**Cluster**
-A TensorFlow cluster comprises a one or more "jobs", each divided into lists of
+A TensorFlow cluster comprises one or more "jobs", each divided into lists of
one or more "tasks". A cluster is typically dedicated to a particular high-level
objective, such as training a neural network, using many machines in parallel. A
cluster is defined by
-a @{tf.train.ClusterSpec} object.
+a `tf.train.ClusterSpec` object.
**Job**
@@ -344,7 +344,7 @@ to a single process. A task belongs to a particular "job" and is identified by
its index within that job's list of tasks.
**TensorFlow server** A process running
-a @{tf.train.Server} instance, which is
+a `tf.train.Server` instance, which is
a member of a cluster, and exports a "master service" and "worker service".
**Worker service**
diff --git a/tensorflow/docs_src/deploy/s3.md b/tensorflow/docs_src/deploy/s3.md
index 7028249e94..079c796aa7 100644
--- a/tensorflow/docs_src/deploy/s3.md
+++ b/tensorflow/docs_src/deploy/s3.md
@@ -40,7 +40,7 @@ AWS_SECRET_ACCESS_KEY=XXXXX
AWS_REGION=us-east-1 # Region for the S3 bucket, this is not always needed. Default is us-east-1.
S3_ENDPOINT=s3.us-east-1.amazonaws.com # The S3 API Endpoint to connect to. This is specified in a HOST:PORT format.
S3_USE_HTTPS=1 # Whether or not to use HTTPS. Disable with 0.
-S3_VERIFY_SSL=1 # If HTTPS is used, conterols if SSL should be enabled. Disable with 0.
+S3_VERIFY_SSL=1 # If HTTPS is used, controls if SSL should be enabled. Disable with 0.
```
## Usage
diff --git a/tensorflow/docs_src/extend/adding_an_op.md b/tensorflow/docs_src/extend/adding_an_op.md
index 1b028be4ea..fbf5c0b90d 100644
--- a/tensorflow/docs_src/extend/adding_an_op.md
+++ b/tensorflow/docs_src/extend/adding_an_op.md
@@ -46,7 +46,7 @@ To incorporate your custom op you'll need to:
4. Write a function to compute gradients for the op (optional).
5. Test the op. We usually do this in Python for convenience, but you can also
test the op in C++. If you define gradients, you can verify them with the
- Python @{tf.test.compute_gradient_error$gradient checker}.
+   Python gradient checker, `tf.test.compute_gradient_error`.
See
[`relu_op_test.py`](https://www.tensorflow.org/code/tensorflow/python/kernel_tests/relu_op_test.py) as
an example that tests the forward functions of Relu-like operators and
@@ -388,7 +388,7 @@ $ bazel build --config opt //tensorflow/core/user_ops:zero_out.so
## Use the op in Python
TensorFlow Python API provides the
-@{tf.load_op_library} function to
+`tf.load_op_library` function to
load the dynamic library and register the op with the TensorFlow
framework. `load_op_library` returns a Python module that contains the Python
wrappers for the op and the kernel. Thus, once you have built the op, you can
@@ -538,7 +538,7 @@ REGISTER_OP("ZeroOut")
```
(Note that the set of [attribute types](#attr_types) is different from the
-@{tf.DType$tensor types} used for inputs and outputs.)
+tensor types (`tf.DType`) used for inputs and outputs.)
Your kernel can then access this attr in its constructor via the `context`
parameter:
@@ -615,7 +615,7 @@ define an attr with constraints, you can use the following `<attr-type-expr>`s:
* `{<type1>, <type2>}`: The value is of type `type`, and must be one of
`<type1>` or `<type2>`, where `<type1>` and `<type2>` are supported
- @{tf.DType$tensor types}. You don't specify
+  `tf.DType`s. You don't specify
that the type of the attr is `type`. This is implied when you have a list of
types in `{...}`. For example, in this case the attr `t` is a type that must
be an `int32`, a `float`, or a `bool`:
@@ -649,7 +649,7 @@ define an attr with constraints, you can use the following `<attr-type-expr>`s:
```
Lists can be combined with other lists and single types. The following
- op allows attr `t` to be any of the numberic types, or the bool type:
+ op allows attr `t` to be any of the numeric types, or the bool type:
```c++
REGISTER_OP("NumberOrBooleanType")
@@ -714,7 +714,7 @@ REGISTER_OP("AttrDefaultExampleForAllTypes")
```
Note in particular that the values of type `type`
-use @{tf.DType$the `DT_*` names for the types}.
+use the `DT_*` names from `tf.DType`.
#### Polymorphism
@@ -1056,7 +1056,7 @@ expressions:
`string`). This specifies a single tensor of the given type.
See
- @{tf.DType$the list of supported Tensor types}.
+ `tf.DType`.
```c++
REGISTER_OP("BuiltInTypesExample")
@@ -1098,8 +1098,7 @@ expressions:
* For a sequence of tensors with the same type: `<number> * <type>`, where
`<number>` is the name of an [Attr](#attrs) with type `int`. The `<type>` can
- either be
- @{tf.DType$a specific type like `int32` or `float`},
+ either be a `tf.DType`,
or the name of an attr with type `type`. As an example of the first, this
op accepts a list of `int32` tensors:
@@ -1202,7 +1201,7 @@ There are several examples of kernels with GPU support in
Notice some kernels have a CPU version in a `.cc` file, a GPU version in a file
ending in `_gpu.cu.cc`, and some code shared in common in a `.h` file.
-For example, the @{tf.pad} has
+For example, `tf.pad` has
everything but the GPU kernel in [`tensorflow/core/kernels/pad_op.cc`][pad_op].
The GPU kernel is in
[`tensorflow/core/kernels/pad_op_gpu.cu.cc`](https://www.tensorflow.org/code/tensorflow/core/kernels/pad_op_gpu.cu.cc),
@@ -1307,16 +1306,16 @@ def _zero_out_grad(op, grad):
```
Details about registering gradient functions with
-@{tf.RegisterGradient}:
+`tf.RegisterGradient`:
* For an op with one output, the gradient function will take an
- @{tf.Operation} `op` and a
- @{tf.Tensor} `grad` and build new ops
+ `tf.Operation` `op` and a
+ `tf.Tensor` `grad` and build new ops
out of the tensors
[`op.inputs[i]`](../../api_docs/python/framework.md#Operation.inputs),
[`op.outputs[i]`](../../api_docs/python/framework.md#Operation.outputs), and `grad`. Information
about any attrs can be found via
- @{tf.Operation.get_attr}.
+ `tf.Operation.get_attr`.
* If the op has multiple outputs, the gradient function will take `op` and
`grads`, where `grads` is a list of gradients with respect to each output.
diff --git a/tensorflow/docs_src/extend/architecture.md b/tensorflow/docs_src/extend/architecture.md
index 84435a57f2..83d70c9468 100644
--- a/tensorflow/docs_src/extend/architecture.md
+++ b/tensorflow/docs_src/extend/architecture.md
@@ -81,7 +81,7 @@ implementation from all client languages. Most of the training libraries are
still Python-only, but C++ does have support for efficient inference.
The client creates a session, which sends the graph definition to the
-distributed master as a @{tf.GraphDef}
+distributed master as a `tf.GraphDef`
protocol buffer. When the client evaluates a node or nodes in the
graph, the evaluation triggers a call to the distributed master to initiate
computation.
@@ -96,7 +96,7 @@ feature vector (x), adds a bias term (b) and saves the result in a variable
### Code
-* @{tf.Session}
+* `tf.Session`
## Distributed master
diff --git a/tensorflow/docs_src/extend/index.md b/tensorflow/docs_src/extend/index.md
index d48340a777..0e4bfd1dc4 100644
--- a/tensorflow/docs_src/extend/index.md
+++ b/tensorflow/docs_src/extend/index.md
@@ -17,7 +17,7 @@ TensorFlow:
Python is currently the only language supported by TensorFlow's API stability
promises. However, TensorFlow also provides functionality in C++, Go, Java and
-[JavaScript](https://js.tensorflow.org) (incuding
+[JavaScript](https://js.tensorflow.org) (including
[Node.js](https://github.com/tensorflow/tfjs-node)),
plus community support for [Haskell](https://github.com/tensorflow/haskell) and
[Rust](https://github.com/tensorflow/rust). If you'd like to create or
diff --git a/tensorflow/docs_src/extend/new_data_formats.md b/tensorflow/docs_src/extend/new_data_formats.md
index abbf47910e..47a8344b70 100644
--- a/tensorflow/docs_src/extend/new_data_formats.md
+++ b/tensorflow/docs_src/extend/new_data_formats.md
@@ -15,25 +15,24 @@ We divide the task of supporting a file format into two pieces:
* Record formats: We use decoder or parsing ops to turn a string record
into tensors usable by TensorFlow.
-For example, to read a
-[CSV file](https://en.wikipedia.org/wiki/Comma-separated_values), we use
-@{tf.data.TextLineDataset$a dataset for reading text files line-by-line}
-and then @{tf.data.Dataset.map$map} an
-@{tf.decode_csv$op} that parses CSV data from each line of text in the dataset.
+For example, to re-implement the `tf.contrib.data.make_csv_dataset` function, we
+could use `tf.data.TextLineDataset` to extract the records, and then
+use `tf.data.Dataset.map` and `tf.decode_csv` to parse the CSV records from
+each line of text in the dataset.
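A minimal sketch of that pipeline (the file name and column defaults are placeholders):
```python
import tensorflow as tf

record_defaults = [[0.0], [0.0], [0.0], [0.0], [0]]  # one default per column

def _parse_line(line):
    return tf.decode_csv(line, record_defaults=record_defaults)

dataset = tf.data.TextLineDataset(["train.csv"]).skip(1)  # skip the header
dataset = dataset.map(_parse_line)
```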
[TOC]
## Writing a `Dataset` for a file format
-A @{tf.data.Dataset} represents a sequence of *elements*, which can be the
+A `tf.data.Dataset` represents a sequence of *elements*, which can be the
individual records in a file. There are several examples of "reader" datasets
that are already built into TensorFlow:
-* @{tf.data.TFRecordDataset}
+* `tf.data.TFRecordDataset`
([source in `kernels/data/reader_dataset_ops.cc`](https://www.tensorflow.org/code/tensorflow/core/kernels/data/reader_dataset_ops.cc))
-* @{tf.data.FixedLengthRecordDataset}
+* `tf.data.FixedLengthRecordDataset`
([source in `kernels/data/reader_dataset_ops.cc`](https://www.tensorflow.org/code/tensorflow/core/kernels/data/reader_dataset_ops.cc))
-* @{tf.data.TextLineDataset}
+* `tf.data.TextLineDataset`
([source in `kernels/data/reader_dataset_ops.cc`](https://www.tensorflow.org/code/tensorflow/core/kernels/data/reader_dataset_ops.cc))
Each of these implementations comprises three related classes:
@@ -64,7 +63,7 @@ need to:
that implement the reading logic.
2. In C++, register a new reader op and kernel with the name
`"MyReaderDataset"`.
-3. In Python, define a subclass of @{tf.data.Dataset} called `MyReaderDataset`.
+3. In Python, define a subclass of `tf.data.Dataset` called `MyReaderDataset`.
You can put all the C++ code in a single file, such as
`my_reader_dataset_op.cc`. It will help if you are
@@ -230,7 +229,7 @@ REGISTER_KERNEL_BUILDER(Name("MyReaderDataset").Device(tensorflow::DEVICE_CPU),
The last step is to build the C++ code and add a Python wrapper. The easiest way
to do this is by @{$adding_an_op#build_the_op_library$compiling a dynamic
library} (e.g. called `"my_reader_dataset_op.so"`), and adding a Python class
-that subclasses @{tf.data.Dataset} to wrap it. An example Python program is
+that subclasses `tf.data.Dataset` to wrap it. An example Python program is
given here:
```python
@@ -293,14 +292,14 @@ track down where the bad data came from.
Examples of Ops useful for decoding records:
-* @{tf.parse_single_example} (and @{tf.parse_example})
-* @{tf.decode_csv}
-* @{tf.decode_raw}
+* `tf.parse_single_example` (and `tf.parse_example`)
+* `tf.decode_csv`
+* `tf.decode_raw`
Note that it can be useful to use multiple Ops to decode a particular record
format. For example, you may have an image saved as a string in
[a `tf.train.Example` protocol buffer](https://www.tensorflow.org/code/tensorflow/core/example/example.proto).
Depending on the format of that image, you might take the corresponding output
-from a @{tf.parse_single_example} op and call @{tf.image.decode_jpeg},
-@{tf.image.decode_png}, or @{tf.decode_raw}. It is common to take the output
-of `tf.decode_raw` and use @{tf.slice} and @{tf.reshape} to extract pieces.
+from a `tf.parse_single_example` op and call `tf.image.decode_jpeg`,
+`tf.image.decode_png`, or `tf.decode_raw`. It is common to take the output
+of `tf.decode_raw` and use `tf.slice` and `tf.reshape` to extract pieces.
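A sketch of that decoding chain (the feature names and image shape are assumptions for illustration):
```python
import tensorflow as tf

def _parse_record(serialized):
    features = tf.parse_single_example(
        serialized,
        features={"image_raw": tf.FixedLenFeature([], tf.string),
                  "label": tf.FixedLenFeature([], tf.int64)})
    image = tf.decode_raw(features["image_raw"], tf.uint8)
    image = tf.reshape(image, [28, 28, 1])  # assumed shape
    return image, features["label"]
```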
diff --git a/tensorflow/docs_src/guide/checkpoints.md b/tensorflow/docs_src/guide/checkpoints.md
index dfb2626b86..e1add29852 100644
--- a/tensorflow/docs_src/guide/checkpoints.md
+++ b/tensorflow/docs_src/guide/checkpoints.md
@@ -129,7 +129,7 @@ in the `model_dir` according to the following schedule:
You may alter the default schedule by taking the following steps:
-1. Create a @{tf.estimator.RunConfig$`RunConfig`} object that defines the
+1. Create a `tf.estimator.RunConfig` object that defines the
desired schedule.
2. When instantiating the Estimator, pass that `RunConfig` object to the
Estimator's `config` argument.
diff --git a/tensorflow/docs_src/guide/custom_estimators.md b/tensorflow/docs_src/guide/custom_estimators.md
index a63e2bafb3..199a0e93de 100644
--- a/tensorflow/docs_src/guide/custom_estimators.md
+++ b/tensorflow/docs_src/guide/custom_estimators.md
@@ -2,9 +2,9 @@
# Creating Custom Estimators
This document introduces custom Estimators. In particular, this document
-demonstrates how to create a custom @{tf.estimator.Estimator$Estimator} that
+demonstrates how to create a custom `tf.estimator.Estimator` that
mimics the behavior of the pre-made Estimator
-@{tf.estimator.DNNClassifier$`DNNClassifier`} in solving the Iris problem. See
+`tf.estimator.DNNClassifier` in solving the Iris problem. See
the @{$premade_estimators$Pre-Made Estimators chapter} for details
on the Iris problem.
@@ -34,7 +34,7 @@ with
## Pre-made vs. custom
As the following figure shows, pre-made Estimators are subclasses of the
-@{tf.estimator.Estimator} base class, while custom Estimators are an instance
+`tf.estimator.Estimator` base class, while custom Estimators are an instance
of tf.estimator.Estimator:
<div style="width:100%; margin:auto; margin-bottom:10px; margin-top:20px;">
@@ -144,12 +144,12 @@ The caller may pass `params` to an Estimator's constructor. Any `params` passed
to the constructor are in turn passed on to the `model_fn`. In
[`custom_estimator.py`](https://github.com/tensorflow/models/blob/master/samples/core/get_started/custom_estimator.py)
the following lines create the estimator and set the params to configure the
-model. This configuration step is similar to how we configured the @{tf.estimator.DNNClassifier} in
+model. This configuration step is similar to how we configured the `tf.estimator.DNNClassifier` in
@{$premade_estimators}.
```python
classifier = tf.estimator.Estimator(
- model_fn=my_model,
+ model_fn=my_model_fn,
params={
'feature_columns': my_feature_columns,
# Two hidden layers of 10 nodes each.
@@ -178,7 +178,7 @@ The basic deep neural network model must define the following three sections:
### Define the input layer
-The first line of the `model_fn` calls @{tf.feature_column.input_layer} to
+The first line of the `model_fn` calls `tf.feature_column.input_layer` to
convert the feature dictionary and `feature_columns` into input for your model,
as follows:
@@ -202,7 +202,7 @@ creating the model's input layer.
If you are creating a deep neural network, you must define one or more hidden
layers. The Layers API provides a rich set of functions to define all types of
hidden layers, including convolutional, pooling, and dropout layers. For Iris,
-we're simply going to call @{tf.layers.dense} to create hidden layers, with
+we're simply going to call `tf.layers.dense` to create hidden layers, with
dimensions defined by `params['hidden_layers']`. In a `dense` layer each node
is connected to every node in the preceding layer. Here's the relevant code:
@@ -231,14 +231,14 @@ simplicity, the figure does not show all the units in each layer.
src="../images/custom_estimators/add_hidden_layer.png">
</div>
-Note that @{tf.layers.dense} provides many additional capabilities, including
+Note that `tf.layers.dense` provides many additional capabilities, including
the ability to set a multitude of regularization parameters. For the sake of
simplicity, though, we're going to simply accept the default values of the
other parameters.
### Output Layer
-We'll define the output layer by calling @{tf.layers.dense} yet again, this
+We'll define the output layer by calling `tf.layers.dense` yet again, this
time without an activation function:
```python
@@ -265,7 +265,7 @@ score, or "logit", calculated for the associated class of Iris: Setosa,
Versicolor, or Virginica, respectively.
Later on, these logits will be transformed into probabilities by the
-@{tf.nn.softmax} function.
+`tf.nn.softmax` function.
## Implement training, evaluation, and prediction {#modes}
@@ -290,9 +290,9 @@ function with the mode parameter set as follows:
| Estimator method | Estimator Mode |
|:---------------------------------|:------------------|
-|@{tf.estimator.Estimator.train$`train()`} |@{tf.estimator.ModeKeys.TRAIN$`ModeKeys.TRAIN`} |
-|@{tf.estimator.Estimator.evaluate$`evaluate()`} |@{tf.estimator.ModeKeys.EVAL$`ModeKeys.EVAL`} |
-|@{tf.estimator.Estimator.predict$`predict()`}|@{tf.estimator.ModeKeys.PREDICT$`ModeKeys.PREDICT`} |
+|`tf.estimator.Estimator.train` |`tf.estimator.ModeKeys.TRAIN` |
+|`tf.estimator.Estimator.evaluate` |`tf.estimator.ModeKeys.EVAL` |
+|`tf.estimator.Estimator.predict`|`tf.estimator.ModeKeys.PREDICT` |
For example, suppose you instantiate a custom Estimator to generate an object
named `classifier`. Then, you make the following call:
@@ -350,8 +350,8 @@ The `predictions` holds the following three key/value pairs:
* `logit` holds the raw logit values (in this example, -1.3, 2.6, and -0.9)
We return that dictionary to the caller via the `predictions` parameter of the
-@{tf.estimator.EstimatorSpec}. The Estimator's
-@{tf.estimator.Estimator.predict$`predict`} method will yield these
+`tf.estimator.EstimatorSpec`. The Estimator's
+`tf.estimator.Estimator.predict` method will yield these
dictionaries.
### Calculate the loss
@@ -361,7 +361,7 @@ model's loss. This is the
[objective](https://developers.google.com/machine-learning/glossary/#objective)
that will be optimized.
-We can calculate the loss by calling @{tf.losses.sparse_softmax_cross_entropy}.
+We can calculate the loss by calling `tf.losses.sparse_softmax_cross_entropy`.
The value returned by this function will be approximately 0 at lowest,
when the probability of the correct class (at index `label`) is near 1.0.
The loss value returned is progressively larger as the probability of the
@@ -382,12 +382,12 @@ When the Estimator's `evaluate` method is called, the `model_fn` receives
or more metrics.
Although returning metrics is optional, most custom Estimators do return at
-least one metric. TensorFlow provides a Metrics module @{tf.metrics} to
+least one metric. TensorFlow provides a Metrics module `tf.metrics` to
calculate common metrics. For brevity's sake, we'll only return accuracy. The
-@{tf.metrics.accuracy} function compares our predictions against the
+`tf.metrics.accuracy` function compares our predictions against the
true values, that is, against the labels provided by the input function. The
-@{tf.metrics.accuracy} function requires the labels and predictions to have the
-same shape. Here's the call to @{tf.metrics.accuracy}:
+`tf.metrics.accuracy` function requires the labels and predictions to have the
+same shape. Here's the call to `tf.metrics.accuracy`:
``` python
# Compute evaluation metrics.
@@ -396,7 +396,7 @@ accuracy = tf.metrics.accuracy(labels=labels,
name='acc_op')
```
-The @{tf.estimator.EstimatorSpec$`EstimatorSpec`} returned for evaluation
+The `tf.estimator.EstimatorSpec` returned for evaluation
typically contains the following information:
* `loss`, which is the model's loss
@@ -416,7 +416,7 @@ if mode == tf.estimator.ModeKeys.EVAL:
mode, loss=loss, eval_metric_ops=metrics)
```
-The @{tf.summary.scalar} will make accuracy available to TensorBoard
+The `tf.summary.scalar` will make accuracy available to TensorBoard
in both `TRAIN` and `EVAL` modes. (More on this later).
### Train
@@ -426,7 +426,7 @@ with `mode = ModeKeys.TRAIN`. In this case, the model function must return an
`EstimatorSpec` that contains the loss and a training operation.
Building the training operation will require an optimizer. We will use
-@{tf.train.AdagradOptimizer} because we're mimicking the `DNNClassifier`, which
+`tf.train.AdagradOptimizer` because we're mimicking the `DNNClassifier`, which
also uses `Adagrad` by default. The `tf.train` package provides many other
optimizers—feel free to experiment with them.
@@ -437,14 +437,14 @@ optimizer = tf.train.AdagradOptimizer(learning_rate=0.1)
```
Next, we build the training operation using the optimizer's
-@{tf.train.Optimizer.minimize$`minimize`} method on the loss we calculated
+`tf.train.Optimizer.minimize` method on the loss we calculated
earlier.
The `minimize` method also takes a `global_step` parameter. TensorFlow uses this
parameter to count the number of training steps that have been processed
(to know when to end a training run). Furthermore, the `global_step` is
essential for TensorBoard graphs to work correctly. Simply call
-@{tf.train.get_global_step} and pass the result to the `global_step`
+`tf.train.get_global_step` and pass the result to the `global_step`
argument of `minimize`.
Here's the code to train the model:
@@ -453,7 +453,7 @@ Here's the code to train the model:
train_op = optimizer.minimize(loss, global_step=tf.train.get_global_step())
```
-The @{tf.estimator.EstimatorSpec$`EstimatorSpec`} returned for training
+The `tf.estimator.EstimatorSpec` returned for training
must have the following fields set:
* `loss`, which contains the value of the loss function.
@@ -474,7 +474,7 @@ Instantiate the custom Estimator through the Estimator base class as follows:
```python
# Build 2 hidden layer DNN with 10, 10 units respectively.
classifier = tf.estimator.Estimator(
- model_fn=my_model,
+ model_fn=my_model_fn,
params={
'feature_columns': my_feature_columns,
# Two hidden layers of 10 nodes each.
diff --git a/tensorflow/docs_src/guide/datasets.md b/tensorflow/docs_src/guide/datasets.md
index 8b69860a68..bb18e8b79c 100644
--- a/tensorflow/docs_src/guide/datasets.md
+++ b/tensorflow/docs_src/guide/datasets.md
@@ -1,6 +1,6 @@
# Importing Data
-The @{tf.data} API enables you to build complex input pipelines from
+The `tf.data` API enables you to build complex input pipelines from
simple, reusable pieces. For example, the pipeline for an image model might
aggregate data from files in a distributed file system, apply random
perturbations to each image, and merge randomly selected images into a batch
@@ -51,7 +51,7 @@ Once you have a `Dataset` object, you can *transform* it into a new `Dataset` by
chaining method calls on the `tf.data.Dataset` object. For example, you
can apply per-element transformations such as `Dataset.map()` (to apply a
function to each element), and multi-element transformations such as
-`Dataset.batch()`. See the documentation for @{tf.data.Dataset}
+`Dataset.batch()`. See the documentation for `tf.data.Dataset`
for a complete list of transformations.
The most common way to consume values from a `Dataset` is to make an
@@ -211,13 +211,13 @@ for _ in range(20):
sess.run(next_element)
```
-A **feedable** iterator can be used together with @{tf.placeholder} to select
-what `Iterator` to use in each call to @{tf.Session.run}, via the familiar
+A **feedable** iterator can be used together with `tf.placeholder` to select
+what `Iterator` to use in each call to `tf.Session.run`, via the familiar
`feed_dict` mechanism. It offers the same functionality as a reinitializable
iterator, but it does not require you to initialize the iterator from the start
of a dataset when you switch between iterators. For example, using the same
training and validation example from above, you can use
-@{tf.data.Iterator.from_string_handle} to define a feedable iterator
+`tf.data.Iterator.from_string_handle` to define a feedable iterator
that allows you to switch between the two datasets:
```python
@@ -329,12 +329,12 @@ of an iterator will include all components in a single expression.
### Saving iterator state
-The @{tf.contrib.data.make_saveable_from_iterator} function creates a
+The `tf.contrib.data.make_saveable_from_iterator` function creates a
`SaveableObject` from an iterator, which can be used to save and
restore the current state of the iterator (and, effectively, the whole input
-pipeline). A saveable object thus created can be added to @{tf.train.Saver}
+pipeline). A saveable object thus created can be added to `tf.train.Saver`
variables list or the `tf.GraphKeys.SAVEABLE_OBJECTS` collection for saving and
-restoring in the same manner as a @{tf.Variable}. Refer to
+restoring in the same manner as a `tf.Variable`. Refer to
@{$saved_model$Saving and Restoring} for details on how to save and restore
variables.
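A minimal sketch:
```python
import tensorflow as tf

dataset = tf.data.Dataset.range(100)
iterator = dataset.make_one_shot_iterator()

saveable = tf.contrib.data.make_saveable_from_iterator(iterator)
tf.add_to_collection(tf.GraphKeys.SAVEABLE_OBJECTS, saveable)
saver = tf.train.Saver()  # now also checkpoints the iterator position
```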
@@ -488,7 +488,7 @@ dataset = dataset.flat_map(
### Consuming CSV data
The CSV file format is a popular format for storing tabular data in plain text.
-The @{tf.contrib.data.CsvDataset} class provides a way to extract records from
+The `tf.contrib.data.CsvDataset` class provides a way to extract records from
one or more CSV files that comply with [RFC 4180](https://tools.ietf.org/html/rfc4180).
Given one or more filenames and a list of defaults, a `CsvDataset` will produce
a tuple of elements whose types correspond to the types of the defaults
@@ -757,9 +757,9 @@ dataset = dataset.repeat()
### Using high-level APIs
-The @{tf.train.MonitoredTrainingSession} API simplifies many aspects of running
+The `tf.train.MonitoredTrainingSession` API simplifies many aspects of running
TensorFlow in a distributed setting. `MonitoredTrainingSession` uses the
-@{tf.errors.OutOfRangeError} to signal that training has completed, so to use it
+`tf.errors.OutOfRangeError` to signal that training has completed, so to use it
with the `tf.data` API, we recommend using
`Dataset.make_one_shot_iterator()`. For example:
@@ -782,7 +782,7 @@ with tf.train.MonitoredTrainingSession(...) as sess:
sess.run(training_op)
```
-To use a `Dataset` in the `input_fn` of a @{tf.estimator.Estimator}, we also
+To use a `Dataset` in the `input_fn` of a `tf.estimator.Estimator`, we also
recommend using `Dataset.make_one_shot_iterator()`. For example:
```python
diff --git a/tensorflow/docs_src/guide/datasets_for_estimators.md b/tensorflow/docs_src/guide/datasets_for_estimators.md
index b55a5731a4..969ea579f7 100644
--- a/tensorflow/docs_src/guide/datasets_for_estimators.md
+++ b/tensorflow/docs_src/guide/datasets_for_estimators.md
@@ -1,6 +1,6 @@
# Datasets for Estimators
-The @{tf.data} module contains a collection of classes that allows you to
+The `tf.data` module contains a collection of classes that allows you to
easily load data, manipulate it, and pipe it into your model. This document
introduces the API by walking through two simple examples:
@@ -73,8 +73,8 @@ Let's walk through the `train_input_fn()`.
### Slices
-The function starts by using the @{tf.data.Dataset.from_tensor_slices} function
-to create a @{tf.data.Dataset} representing slices of the array. The array is
+The function starts by using the `tf.data.Dataset.from_tensor_slices` function
+to create a `tf.data.Dataset` representing slices of the array. The array is
sliced across the first dimension. For example, an array containing the
MNIST training data has a shape of `(60000, 28, 28)`. Passing this to
`from_tensor_slices` returns a `Dataset` object containing 60000 slices, each one
@@ -170,15 +170,15 @@ function takes advantage of several of these methods:
dataset = dataset.shuffle(1000).repeat().batch(batch_size)
```
-The @{tf.data.Dataset.shuffle$`shuffle`} method uses a fixed-size buffer to
+The `tf.data.Dataset.shuffle` method uses a fixed-size buffer to
shuffle the items as they pass through. In this case the `buffer_size` is
greater than the number of examples in the `Dataset`, ensuring that the data is
completely shuffled (The Iris data set only contains 150 examples).
-The @{tf.data.Dataset.repeat$`repeat`} method restarts the `Dataset` when
+The `tf.data.Dataset.repeat` method restarts the `Dataset` when
it reaches the end. To limit the number of epochs, set the `count` argument.
-The @{tf.data.Dataset.batch$`batch`} method collects a number of examples and
+The `tf.data.Dataset.batch` method collects a number of examples and
stacks them, to create batches. This adds a dimension to their shape. The new
dimension is added as the first dimension. The following code uses
the `batch` method on the MNIST `Dataset`, from earlier. This results in a
@@ -234,7 +234,7 @@ The `labels` can/should be omitted when using the `predict` method.
## Reading a CSV File
The most common real-world use case for the `Dataset` class is to stream data
-from files on disk. The @{tf.data} module includes a variety of
+from files on disk. The `tf.data` module includes a variety of
file readers. Let's see how parsing the Iris dataset from the CSV file looks
using a `Dataset`.
@@ -255,9 +255,9 @@ from the local files.
### Build the `Dataset`
-We start by building a @{tf.data.TextLineDataset$`TextLineDataset`} object to
+We start by building a `tf.data.TextLineDataset` object to
read the file one line at a time. Then, we call the
-@{tf.data.Dataset.skip$`skip`} method to skip over the first line of the file, which contains a header, not an example:
+`tf.data.Dataset.skip` method to skip over the first line of the file, which contains a header, not an example:
``` python
ds = tf.data.TextLineDataset(train_path).skip(1)
@@ -268,11 +268,11 @@ ds = tf.data.TextLineDataset(train_path).skip(1)
We will start by building a function to parse a single line.
The following `iris_data.parse_line` function accomplishes this task using the
-@{tf.decode_csv} function, and some simple python code:
+`tf.decode_csv` function, and some simple Python code:
We must parse each of the lines in the dataset in order to generate the
necessary `(features, label)` pairs. The following `_parse_line` function
-calls @{tf.decode_csv} to parse a single line into its features
+calls `tf.decode_csv` to parse a single line into its features
and the label. Since Estimators require that features be represented as a
dictionary, we rely on Python's built-in `dict` and `zip` functions to build
that dictionary. The feature names are the keys of that dictionary.
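A minimal sketch of such a parser (the column names and field defaults here are illustrative assumptions):

```python
import tensorflow as tf

COLUMNS = ['SepalLength', 'SepalWidth', 'PetalLength', 'PetalWidth', 'label']
FIELD_DEFAULTS = [[0.0], [0.0], [0.0], [0.0], [0]]

def _parse_line(line):
    # Decode the line into its fields, using the defaults to fix the types.
    fields = tf.decode_csv(line, FIELD_DEFAULTS)
    # Pack the fields into a dictionary keyed by feature name.
    features = dict(zip(COLUMNS, fields))
    # Separate the label from the features.
    label = features.pop('label')
    return features, label
```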
@@ -301,7 +301,7 @@ def _parse_line(line):
### Parse the lines
Datasets have many methods for manipulating the data while it is being piped
-to a model. The most heavily-used method is @{tf.data.Dataset.map$`map`}, which
+to a model. The most heavily-used method is `tf.data.Dataset.map`, which
applies a transformation to each element of the `Dataset`.
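Continuing the sketch above, applying the parser to every line of the dataset is then a single call (assuming the `ds` and `_parse_line` defined earlier):

```python
ds = ds.map(_parse_line)
```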
The `map` method takes a `map_func` argument that describes how each item in the
@@ -311,7 +311,7 @@ The `map` method takes a `map_func` argument that describes how each item in the
<img style="width:100%" src="../images/datasets/map.png">
</div>
<div style="text-align: center">
-The @{tf.data.Dataset.map$`map`} method applies the `map_func` to
+The `tf.data.Dataset.map` method applies the `map_func` to
transform each item in the <code>Dataset</code>.
</div>
diff --git a/tensorflow/docs_src/guide/debugger.md b/tensorflow/docs_src/guide/debugger.md
index f0e465214e..4c4a04a88a 100644
--- a/tensorflow/docs_src/guide/debugger.md
+++ b/tensorflow/docs_src/guide/debugger.md
@@ -89,7 +89,7 @@ control the execution and inspect the graph's internal state.
the diagnosis of issues.
In this example, we have already registered a tensor filter called
-@{tfdbg.has_inf_or_nan},
+`tfdbg.has_inf_or_nan`,
which simply determines if there are any `nan` or `inf` values in any
intermediate tensors (tensors that are neither inputs nor outputs of the
`Session.run()` call, but are in the path leading from the inputs to the
@@ -98,13 +98,11 @@ we ship it with the
@{$python/tfdbg#Classes_for_debug_dump_data_and_directories$`debug_data`}
module.
-Note: You can also write your own custom filters. See
-the @{tfdbg.DebugDumpDir.find$API documentation}
-of `DebugDumpDir.find()` for additional information.
+Note: You can also write your own custom filters. See `tfdbg.DebugDumpDir.find`
+for additional information.
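As a rough sketch, a custom filter is just a predicate over a dump datum and its value; the filter name and logic below are illustrative:

```python
import numpy as np
import tensorflow as tf
from tensorflow.python import debug as tf_debug

def has_negative(datum, tensor):
  # `datum` describes the dumped tensor; `tensor` holds its value as a
  # numpy array (or None for uninitialized tensors).
  return isinstance(tensor, np.ndarray) and np.any(tensor < 0)

sess = tf_debug.LocalCLIDebugWrapperSession(tf.Session())
sess.add_tensor_filter("has_negative", has_negative)
```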
## Debugging Model Training with tfdbg
-
Let's try training the model again, but with the `--debug` flag added this time:
```none
@@ -429,9 +427,9 @@ described in the preceding sections inapplicable. Fortunately, you can still
debug them by using special `hook`s provided by `tfdbg`.
`tfdbg` can debug the
-@{tf.estimator.Estimator.train$`train()`},
-@{tf.estimator.Estimator.evaluate$`evaluate()`} and
-@{tf.estimator.Estimator.predict$`predict()`}
+`tf.estimator.Estimator.train`,
+`tf.estimator.Estimator.evaluate` and
+`tf.estimator.Estimator.predict`
methods of tf-learn `Estimator`s. To debug `Estimator.train()`,
create a `LocalCLIDebugHook` and supply it in the `hooks` argument. For example:
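A minimal sketch of that pattern (`classifier` and `train_input_fn` stand in for your own Estimator and input function):

```python
from tensorflow.python import debug as tf_debug

# `classifier` and `train_input_fn` are assumed to be defined elsewhere.
hooks = [tf_debug.LocalCLIDebugHook()]
classifier.train(input_fn=train_input_fn, steps=1000, hooks=hooks)
```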
@@ -473,7 +471,7 @@ python -m tensorflow.python.debug.examples.debug_tflearn_iris --debug
The `LocalCLIDebugHook` also allows you to configure a `watch_fn` that can be
used to flexibly specify what `Tensor`s to watch on different `Session.run()`
calls, as a function of the `fetches` and `feed_dict` and other states. See
-@{tfdbg.DumpingDebugWrapperSession.__init__$this API doc}
+`tfdbg.DumpingDebugWrapperSession.__init__`
for more details.
## Debugging Keras Models with TFDBG
@@ -556,7 +554,7 @@ and the higher-level `Estimator` API.
If you interact directly with the `tf.Session` API in `python`, you can
configure the `RunOptions` proto that you call your `Session.run()` method
-with, by using the method @{tfdbg.watch_graph}.
+with, by using the method `tfdbg.watch_graph`.
This will cause the intermediate tensors and runtime graphs to be dumped to a
shared storage location of your choice when the `Session.run()` call occurs
(at the cost of slower performance). For example:
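A sketch of that flow (the graph, fetches, and dump location are illustrative):

```python
import tensorflow as tf
from tensorflow.python import debug as tf_debug

x = tf.Variable(1.0)
y = x * 2.0

sess = tf.Session()
sess.run(tf.global_variables_initializer())

run_options = tf.RunOptions()
tf_debug.watch_graph(
    run_options,
    sess.graph,
    debug_urls=["file:///tmp/tfdbg_dumps_1"])  # illustrative dump location

sess.run(y, options=run_options)  # intermediate tensors are dumped here
```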
@@ -629,7 +627,7 @@ hooks = [tf_debug.DumpingDebugHook("/shared/storage/location/tfdbg_dumps_1")]
Then this `hook` can be used in the same way as the `LocalCLIDebugHook` examples
described earlier in this document.
-As the training, evalution or prediction happens with `Estimator`,
+As the training, evaluation or prediction happens with `Estimator`,
tfdbg creates directories having the following name pattern:
`/shared/storage/location/tfdbg_dumps_1/run_<epoch_timestamp_microsec>_<uuid>`.
Each directory corresponds to a `Session.run()` call that underlies
@@ -715,7 +713,7 @@ You might encounter this problem in any of the following situations:
* models with many intermediate tensors
* very large intermediate tensors
-* many @{tf.while_loop} iterations
+* many `tf.while_loop` iterations
There are three possible workarounds or solutions:
@@ -770,12 +768,12 @@ sess.run(b)
**A**: The reason why you see no data dumped is because every node in the
executed TensorFlow graph is constant-folded by the TensorFlow runtime.
- In this exapmle, `a` is a constant tensor; therefore, the fetched
+ In this example, `a` is a constant tensor; therefore, the fetched
tensor `b` is effectively also a constant tensor. TensorFlow's graph
optimization folds the graph that contains `a` and `b` into a single
node to speed up future runs of the graph, which is why `tfdbg` does
not generate any intermediate tensor dumps. However, if `a` were a
- @{tf.Variable}, as in the following example:
+ `tf.Variable`, as in the following example:
``` python
import numpy as np
diff --git a/tensorflow/docs_src/guide/eager.md b/tensorflow/docs_src/guide/eager.md
index 3b54d6d2bb..017fdaf81e 100644
--- a/tensorflow/docs_src/guide/eager.md
+++ b/tensorflow/docs_src/guide/eager.md
@@ -193,8 +193,7 @@ class MNISTModel(tf.keras.Model):
def call(self, input):
"""Run the model."""
result = self.dense1(input)
- result = self.dense2(result)
- result = self.dense2(result) # reuse variables from dense2 layer
+ result = self.dense2(result)  # dense2 manages its own variables
return result
model = MNISTModel()
@@ -727,7 +726,13 @@ def measure(x, steps):
start = time.time()
for i in range(steps):
x = tf.matmul(x, x)
- _ = x.numpy() # Make sure to execute op and not just enqueue it
+ # tf.matmul can return before completing the matrix multiplication
+ # (e.g., can return after enqueing the operation on a CUDA stream).
+ # The x.numpy() call below will ensure that all enqueued operations
+ # have completed (and will also copy the result to host memory,
+ # so we're including a little more than just the matmul operation
+ # time).
+ _ = x.numpy()
end = time.time()
return end - start
@@ -751,8 +756,8 @@ Output (exact numbers depend on hardware):
```
Time to multiply a (1000, 1000) matrix by itself 200 times:
-CPU: 4.614904403686523 secs
-GPU: 0.5581181049346924 secs
+CPU: 1.46628093719 secs
+GPU: 0.0593810081482 secs
```
A `tf.Tensor` object can be copied to a different device to execute its
diff --git a/tensorflow/docs_src/guide/estimators.md b/tensorflow/docs_src/guide/estimators.md
index 78b30c3040..7b54e3de29 100644
--- a/tensorflow/docs_src/guide/estimators.md
+++ b/tensorflow/docs_src/guide/estimators.md
@@ -1,6 +1,6 @@
# Estimators
-This document introduces @{tf.estimator$**Estimators**}--a high-level TensorFlow
+This document introduces `tf.estimator`--a high-level TensorFlow
API that greatly simplifies machine learning programming. Estimators encapsulate
the following actions:
@@ -11,10 +11,13 @@ the following actions:
You may either use the pre-made Estimators we provide or write your
own custom Estimators. All Estimators--whether pre-made or custom--are
-classes based on the @{tf.estimator.Estimator} class.
+classes based on the `tf.estimator.Estimator` class.
+
+For a quick example, try the [Estimator tutorials](../tutorials/estimators/linear).
+For deeper coverage of each sub-topic, see the [Estimator guides](premade_estimators).
Note: TensorFlow also includes a deprecated `Estimator` class at
-@{tf.contrib.learn.Estimator}, which you should not use.
+`tf.contrib.learn.Estimator`, which you should not use.
## Advantages of Estimators
@@ -29,14 +32,14 @@ Estimators provide the following benefits:
* You can develop a state of the art model with high-level intuitive code.
In short, it is generally much easier to create models with Estimators
than with the low-level TensorFlow APIs.
-* Estimators are themselves built on @{tf.layers}, which
+* Estimators are themselves built on `tf.keras.layers`, which
simplifies customization.
* Estimators build the graph for you.
* Estimators provide a safe distributed training loop that controls how and
when to:
* build the graph
* initialize variables
- * start queues
+ * load data
* handle exceptions
* create checkpoint files and recover from failures
* save summaries for TensorBoard
@@ -52,9 +55,9 @@ Pre-made Estimators enable you to work at a much higher conceptual level
than the base TensorFlow APIs. You no longer have to worry about creating
the computational graph or sessions since Estimators handle all
the "plumbing" for you. That is, pre-made Estimators create and manage
-@{tf.Graph$`Graph`} and @{tf.Session$`Session`} objects for you. Furthermore,
+`tf.Graph` and `tf.Session` objects for you. Furthermore,
pre-made Estimators let you experiment with different model architectures by
-making only minimal code changes. @{tf.estimator.DNNClassifier$`DNNClassifier`},
+making only minimal code changes. `tf.estimator.DNNClassifier`,
for example, is a pre-made Estimator class that trains classification models
based on dense, feed-forward neural networks.
@@ -83,7 +86,7 @@ of the following four steps:
(See @{$guide/datasets} for full details.)
-2. **Define the feature columns.** Each @{tf.feature_column}
+2. **Define the feature columns.** Each `tf.feature_column`
identifies a feature name, its type, and any input pre-processing.
For example, the following snippet creates three feature
columns that hold integer or floating-point data. The first two
@@ -155,7 +158,7 @@ We recommend the following workflow:
You can convert existing Keras models to Estimators. Doing so enables your Keras
model to access Estimator's strengths, such as distributed training. Call
-@{tf.keras.estimator.model_to_estimator} as in the
+`tf.keras.estimator.model_to_estimator` as in the
following sample:
```python
@@ -190,4 +193,4 @@ and similarly, the predicted output names can be obtained from
`keras_inception_v3.output_names`.
For more details, please refer to the documentation for
-@{tf.keras.estimator.model_to_estimator}.
+`tf.keras.estimator.model_to_estimator`.
diff --git a/tensorflow/docs_src/guide/faq.md b/tensorflow/docs_src/guide/faq.md
index b6291a9ffa..8370097560 100644
--- a/tensorflow/docs_src/guide/faq.md
+++ b/tensorflow/docs_src/guide/faq.md
@@ -28,13 +28,13 @@ See also the
#### Why does `c = tf.matmul(a, b)` not execute the matrix multiplication immediately?
In the TensorFlow Python API, `a`, `b`, and `c` are
-@{tf.Tensor} objects. A `Tensor` object is
+`tf.Tensor` objects. A `Tensor` object is
a symbolic handle to the result of an operation, but does not actually hold the
values of the operation's output. Instead, TensorFlow encourages users to build
up complicated expressions (such as entire neural networks and its gradients) as
a dataflow graph. You then offload the computation of the entire dataflow graph
(or a subgraph of it) to a TensorFlow
-@{tf.Session}, which is able to execute the
+`tf.Session`, which is able to execute the
whole computation much more efficiently than executing the operations
one-by-one.
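A short illustration of this deferred execution:

```python
import tensorflow as tf

a = tf.constant([[1.0, 2.0]])
b = tf.constant([[3.0], [4.0]])
c = tf.matmul(a, b)        # builds a graph node; nothing is computed yet

with tf.Session() as sess:
  print(sess.run(c))       # the multiplication runs here: [[11.]]
```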
@@ -46,7 +46,7 @@ device, and `"/device:GPU:i"` (or `"/gpu:i"`) for the *i*th GPU device.
#### How do I place operations on a particular device?
To place a group of operations on a device, create them within a
-@{tf.device$`with tf.device(name):`} context. See
+`tf.device` context. See
the how-to documentation on
@{$using_gpu$using GPUs with TensorFlow} for details of how
TensorFlow assigns operations to devices, and the
@@ -63,17 +63,17 @@ See also the
Feeding is a mechanism in the TensorFlow Session API that allows you to
substitute different values for one or more tensors at run time. The `feed_dict`
-argument to @{tf.Session.run} is a
-dictionary that maps @{tf.Tensor} objects to
+argument to `tf.Session.run` is a
+dictionary that maps `tf.Tensor` objects to
numpy arrays (and some other types), which will be used as the values of those
tensors in the execution of a step.
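For example, a minimal feed:

```python
import tensorflow as tf

x = tf.placeholder(tf.float32)
y = x * 2.0

with tf.Session() as sess:
  # The value 3.0 is substituted for `x` during this step only.
  print(sess.run(y, feed_dict={x: 3.0}))  # prints 6.0
```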
#### What is the difference between `Session.run()` and `Tensor.eval()`?
-If `t` is a @{tf.Tensor} object,
-@{tf.Tensor.eval} is shorthand for
-@{tf.Session.run}, where `sess` is the
-current @{tf.get_default_session}. The
+If `t` is a `tf.Tensor` object,
+`t.eval()` (see `tf.Tensor.eval`) is shorthand for
+`sess.run(t)` (see `tf.Session.run`), where `sess` is the
+current default session (see `tf.get_default_session`). The
two following snippets of code are equivalent:
```python
@@ -99,11 +99,11 @@ sessions, it may be more straightforward to make explicit calls to
#### Do Sessions have a lifetime? What about intermediate tensors?
Sessions can own resources, such as
-@{tf.Variable},
-@{tf.QueueBase}, and
-@{tf.ReaderBase}. These resources can sometimes use
+`tf.Variable`,
+`tf.QueueBase`, and
+`tf.ReaderBase`. These resources can sometimes use
a significant amount of memory, and can be released when the session is closed by calling
-@{tf.Session.close}.
+`tf.Session.close`.
The intermediate tensors that are created as part of a call to
@{$python/client$`Session.run()`} will be freed at or before the
@@ -120,7 +120,7 @@ dimensions:
devices, which makes it possible to speed up
@{$deep_cnn$CIFAR-10 training using multiple GPUs}.
* The Session API allows multiple concurrent steps (i.e. calls to
- @{tf.Session.run} in parallel). This
+ `tf.Session.run` in parallel). This
enables the runtime to get higher throughput, if a single step does not use
all of the resources in your computer.
@@ -151,8 +151,8 @@ than 3.5.
#### Why does `Session.run()` hang when using a reader or a queue?
-The @{tf.ReaderBase} and
-@{tf.QueueBase} classes provide special operations that
+The `tf.ReaderBase` and
+`tf.QueueBase` classes provide special operations that
can *block* until input (or free space in a bounded queue) becomes
available. These operations allow you to build sophisticated
@{$reading_data$input pipelines}, at the cost of making the
@@ -169,9 +169,9 @@ See also the how-to documentation on @{$variables$variables} and
#### What is the lifetime of a variable?
A variable is created when you first run the
-@{tf.Variable.initializer}
+`tf.Variable.initializer`
operation for that variable in a session. It is destroyed when that
-@{tf.Session.close}.
+session is closed with `tf.Session.close`.
#### How do variables behave when they are concurrently accessed?
@@ -179,32 +179,31 @@ Variables allow concurrent read and write operations. The value read from a
variable may change if it is concurrently updated. By default, concurrent
assignment operations to a variable are allowed to run with no mutual exclusion.
To acquire a lock when assigning to a variable, pass `use_locking=True` to
-@{tf.Variable.assign}.
+`tf.Variable.assign`.
## Tensor shapes
See also the
-@{tf.TensorShape}.
+`tf.TensorShape` documentation.
#### How can I determine the shape of a tensor in Python?
In TensorFlow, a tensor has both a static (inferred) shape and a dynamic (true)
shape. The static shape can be read using the
-@{tf.Tensor.get_shape}
+`tf.Tensor.get_shape`
method: this shape is inferred from the operations that were used to create the
-tensor, and may be
-@{tf.TensorShape$partially complete}. If the static
-shape is not fully defined, the dynamic shape of a `Tensor` `t` can be
-determined by evaluating @{tf.shape$`tf.shape(t)`}.
+tensor, and may be partially defined (the static shape may contain `None`). If
+the static shape is not fully defined, the dynamic shape of a `tf.Tensor` `t`
+can be determined by evaluating `tf.shape(t)`.
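A short illustration:

```python
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 3])
print(x.get_shape())   # (?, 3): the static shape, partially defined
dyn = tf.shape(x)      # a tf.Tensor; yields the true shape when evaluated
```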
#### What is the difference between `x.set_shape()` and `x = tf.reshape(x)`?
-The @{tf.Tensor.set_shape} method updates
+The `tf.Tensor.set_shape` method updates
the static shape of a `Tensor` object, and it is typically used to provide
additional shape information when this cannot be inferred directly. It does not
change the dynamic shape of the tensor.
-The @{tf.reshape} operation creates
+The `tf.reshape` operation creates
a new tensor with a different dynamic shape.
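A side-by-side sketch:

```python
import tensorflow as tf

x = tf.placeholder(tf.float32)  # unknown static shape
x.set_shape([None, 3])          # asserts static shape info; same tensor
y = tf.reshape(x, [3, -1])      # a new tensor with a new dynamic shape
```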
#### How do I build a graph that works with variable batch sizes?
@@ -212,9 +211,9 @@ a new tensor with a different dynamic shape.
It is often useful to build a graph that works with variable batch sizes
so that the same code can be used for (mini-)batch training, and
single-instance inference. The resulting graph can be
-@{tf.Graph.as_graph_def$saved as a protocol buffer}
+saved as a protocol buffer using `tf.Graph.as_graph_def`
and
-@{tf.import_graph_def$imported into another program}.
+imported into another program using `tf.import_graph_def`.
When building a variable-size graph, the most important thing to remember is not
to encode the batch size as a Python constant, but instead to use a symbolic
@@ -224,7 +223,7 @@ to encode the batch size as a Python constant, but instead to use a symbolic
to extract the batch dimension from a `Tensor` called `input`, and store it in
a `Tensor` called `batch_size`.
-* Use @{tf.reduce_mean} instead
+* Use `tf.reduce_mean` instead
of `tf.reduce_sum(...) / batch_size`.
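A sketch of both tips (the tensor names and the loss are illustrative):

```python
import tensorflow as tf

input = tf.placeholder(tf.float32, shape=[None, 28, 28])
batch_size = tf.shape(input)[0]  # a scalar tf.Tensor, known only at run time

# Prefer a mean over dividing a sum by a Python-constant batch size.
per_example_loss = tf.reduce_sum(tf.square(input), axis=[1, 2])
loss = tf.reduce_mean(per_example_loss)
```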
@@ -259,19 +258,19 @@ See the how-to documentation for
There are three main options for dealing with data in a custom format.
The easiest option is to write parsing code in Python that transforms the data
-into a numpy array. Then, use @{tf.data.Dataset.from_tensor_slices} to
+into a numpy array. Then, use `tf.data.Dataset.from_tensor_slices` to
create an input pipeline from the in-memory data.
If your data doesn't fit in memory, try doing the parsing in the Dataset
pipeline. Start with an appropriate file reader, like
-@{tf.data.TextLineDataset}. Then convert the dataset by mapping
-@{tf.data.Dataset.map$mapping} appropriate operations over it.
-Prefer predefined TensorFlow operations such as @{tf.decode_raw},
-@{tf.decode_csv}, @{tf.parse_example}, or @{tf.image.decode_png}.
+`tf.data.TextLineDataset`. Then convert the dataset by mapping
+appropriate operations over it with `tf.data.Dataset.map`.
+Prefer predefined TensorFlow operations such as `tf.decode_raw`,
+`tf.decode_csv`, `tf.parse_example`, or `tf.image.decode_png`.
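For instance, a sketch of such an in-pipeline parse using fixed-length binary records via `tf.data.FixedLengthRecordDataset` (the filename and record size are illustrative):

```python
import tensorflow as tf

# Each record is assumed to be a 28x28 image stored as raw bytes.
dataset = tf.data.FixedLengthRecordDataset(["data.bin"], record_bytes=28 * 28)
dataset = dataset.map(lambda record: tf.decode_raw(record, tf.uint8))
```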
If your data is not easily parsable with the built-in TensorFlow operations,
consider converting it, offline, to a format that is easily parsable, such
-as @{tf.python_io.TFRecordWriter$`TFRecord`} format.
+as `tf.python_io.TFRecordWriter` format.
The most efficient method to customize the parsing behavior is to
@{$adding_an_op$add a new op written in C++} that parses your
diff --git a/tensorflow/docs_src/guide/feature_columns.md b/tensorflow/docs_src/guide/feature_columns.md
index 38760df82b..b189c4334e 100644
--- a/tensorflow/docs_src/guide/feature_columns.md
+++ b/tensorflow/docs_src/guide/feature_columns.md
@@ -6,10 +6,10 @@ enabling you to transform a diverse range of raw data into formats that
Estimators can use, allowing easy experimentation.
In @{$premade_estimators$Premade Estimators}, we used the premade
-Estimator, @{tf.estimator.DNNClassifier$`DNNClassifier`} to train a model to
+Estimator, `tf.estimator.DNNClassifier` to train a model to
predict different types of Iris flowers from four input features. That example
created only numerical feature columns (of type
-@{tf.feature_column.numeric_column}). Although numerical feature columns model
+`tf.feature_column.numeric_column`). Although numerical feature columns model
the lengths of petals and sepals effectively, real world data sets contain all
kinds of features, many of which are non-numerical.
@@ -59,7 +59,7 @@ Feature columns bridge raw data with the data your model needs.
</div>
To create feature columns, call functions from the
-@{tf.feature_column} module. This document explains nine of the functions in
+`tf.feature_column` module. This document explains nine of the functions in
that module. As the following figure shows, all nine functions return either a
Categorical-Column or a Dense-Column object, except `bucketized_column`, which
inherits from both classes:
@@ -75,7 +75,7 @@ Let's look at these functions in more detail.
### Numeric column
-The Iris classifier calls the @{tf.feature_column.numeric_column} function for
+The Iris classifier calls the `tf.feature_column.numeric_column` function for
all input features:
* `SepalLength`
@@ -119,7 +119,7 @@ matrix_feature_column = tf.feature_column.numeric_column(key="MyMatrix",
Often, you don't want to feed a number directly into the model, but instead
split its value into different categories based on numerical ranges. To do so,
-create a @{tf.feature_column.bucketized_column$bucketized column}. For
+create a `tf.feature_column.bucketized_column`. For
example, consider raw data that represents the year a house was built. Instead
of representing that year as a scalar numeric column, we could split the year
into the following four buckets:
@@ -194,7 +194,7 @@ value. That is:
* `1="electronics"`
* `2="sport"`
-Call @{tf.feature_column.categorical_column_with_identity} to implement a
+Call `tf.feature_column.categorical_column_with_identity` to implement a
categorical identity column. For example:
``` python
@@ -230,8 +230,8 @@ As you can see, categorical vocabulary columns are kind of an enum version of
categorical identity columns. TensorFlow provides two different functions to
create categorical vocabulary columns:
-* @{tf.feature_column.categorical_column_with_vocabulary_list}
-* @{tf.feature_column.categorical_column_with_vocabulary_file}
+* `tf.feature_column.categorical_column_with_vocabulary_list`
+* `tf.feature_column.categorical_column_with_vocabulary_file`
`categorical_column_with_vocabulary_list` maps each string to an integer based
on an explicit vocabulary list. For example:
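A sketch (the feature name and vocabulary are illustrative):

```python
import tensorflow as tf

vocabulary_feature_column = (
    tf.feature_column.categorical_column_with_vocabulary_list(
        key="kind",
        vocabulary_list=["kitchenware", "electronics", "sport"]))
```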
@@ -281,7 +281,7 @@ categories can be so big that it's not possible to have individual categories
for each vocabulary word or integer because that would consume too much memory.
For these cases, we can instead turn the question around and ask, "How many
categories am I willing to have for my input?" In fact, the
-@{tf.feature_column.categorical_column_with_hash_bucket} function enables you
+`tf.feature_column.categorical_column_with_hash_bucket` function enables you
to specify the number of categories. For this type of feature column the model
calculates a hash value of the input, then puts it into one of
the `hash_bucket_size` categories using the modulo operator, as in the following
@@ -349,7 +349,7 @@ equal size.
</div>
For the solution, we used a combination of the `bucketized_column` we looked at
-earlier, with the @{tf.feature_column.crossed_column} function.
+earlier, with the `tf.feature_column.crossed_column` function.
<!--TODO(markdaoust) link to full example-->
@@ -440,7 +440,7 @@ Representing data in indicator columns.
</div>
Here's how you create an indicator column by calling
-@{tf.feature_column.indicator_column}:
+`tf.feature_column.indicator_column`:
``` python
categorical_column = ... # Create any type of categorical column.
@@ -521,7 +521,7 @@ number of dimensions is 3:
Note that this is just a general guideline; you can set the number of embedding
dimensions as you please.
-Call @{tf.feature_column.embedding_column} to create an `embedding_column` as
+Call `tf.feature_column.embedding_column` to create an `embedding_column` as
suggested by the following snippet:
``` python
@@ -543,15 +543,15 @@ columns.
As the following list indicates, not all Estimators permit all types of
`feature_columns` argument(s):
-* @{tf.estimator.LinearClassifier$`LinearClassifier`} and
- @{tf.estimator.LinearRegressor$`LinearRegressor`}: Accept all types of
+* `tf.estimator.LinearClassifier` and
+ `tf.estimator.LinearRegressor`: Accept all types of
feature column.
-* @{tf.estimator.DNNClassifier$`DNNClassifier`} and
- @{tf.estimator.DNNRegressor$`DNNRegressor`}: Only accept dense columns. Other
+* `tf.estimator.DNNClassifier` and
+ `tf.estimator.DNNRegressor`: Only accept dense columns. Other
column types must be wrapped in either an `indicator_column` or
`embedding_column`.
-* @{tf.estimator.DNNLinearCombinedClassifier$`DNNLinearCombinedClassifier`} and
- @{tf.estimator.DNNLinearCombinedRegressor$`DNNLinearCombinedRegressor`}:
+* `tf.estimator.DNNLinearCombinedClassifier` and
+ `tf.estimator.DNNLinearCombinedRegressor`:
* The `linear_feature_columns` argument accepts any feature column type.
* The `dnn_feature_columns` argument only accepts dense columns.
diff --git a/tensorflow/docs_src/guide/graph_viz.md b/tensorflow/docs_src/guide/graph_viz.md
index a8876da5a5..97b0e2d4de 100644
--- a/tensorflow/docs_src/guide/graph_viz.md
+++ b/tensorflow/docs_src/guide/graph_viz.md
@@ -15,7 +15,7 @@ variable names can be scoped and the visualization uses this information to
define a hierarchy on the nodes in the graph. By default, only the top of this
hierarchy is shown. Here is an example that defines three operations under the
`hidden` name scope using
-@{tf.name_scope}:
+`tf.name_scope`:
```python
import tensorflow as tf
diff --git a/tensorflow/docs_src/guide/graphs.md b/tensorflow/docs_src/guide/graphs.md
index 492f97c191..2bb44fbb32 100644
--- a/tensorflow/docs_src/guide/graphs.md
+++ b/tensorflow/docs_src/guide/graphs.md
@@ -7,7 +7,7 @@ TensorFlow **session** to run parts of the graph across a set of local and
remote devices.
This guide will be most useful if you intend to use the low-level programming
-model directly. Higher-level APIs such as @{tf.estimator.Estimator} and Keras
+model directly. Higher-level APIs such as `tf.estimator.Estimator` and Keras
hide the details of graphs and sessions from the end user, but this guide may
also be useful if you want to understand how these APIs are implemented.
@@ -18,12 +18,12 @@ also be useful if you want to understand how these APIs are implemented.
[Dataflow](https://en.wikipedia.org/wiki/Dataflow_programming) is a common
programming model for parallel computing. In a dataflow graph, the nodes
represent units of computation, and the edges represent the data consumed or
-produced by a computation. For example, in a TensorFlow graph, the @{tf.matmul}
+produced by a computation. For example, in a TensorFlow graph, the `tf.matmul`
operation would correspond to a single node with two incoming edges (the
matrices to be multiplied) and one outgoing edge (the result of the
multiplication).
-<!-- TODO(barryr): Add a diagram to illustrate the @{tf.matmul} graph. -->
+<!-- TODO(barryr): Add a diagram to illustrate the `tf.matmul` graph. -->
Dataflow has several advantages that TensorFlow leverages when executing your
programs:
@@ -48,9 +48,9 @@ programs:
low-latency inference.
-## What is a @{tf.Graph}?
+## What is a `tf.Graph`?
-A @{tf.Graph} contains two relevant kinds of information:
+A `tf.Graph` contains two relevant kinds of information:
* **Graph structure.** The nodes and edges of the graph, indicating how
individual operations are composed together, but not prescribing how they
@@ -59,78 +59,78 @@ A @{tf.Graph} contains two relevant kinds of information:
context that source code conveys.
* **Graph collections.** TensorFlow provides a general mechanism for storing
- collections of metadata in a @{tf.Graph}. The @{tf.add_to_collection} function
- enables you to associate a list of objects with a key (where @{tf.GraphKeys}
- defines some of the standard keys), and @{tf.get_collection} enables you to
+ collections of metadata in a `tf.Graph`. The `tf.add_to_collection` function
+ enables you to associate a list of objects with a key (where `tf.GraphKeys`
+ defines some of the standard keys), and `tf.get_collection` enables you to
look up all objects associated with a key. Many parts of the TensorFlow
- library use this facility: for example, when you create a @{tf.Variable}, it
+ library use this facility: for example, when you create a `tf.Variable`, it
is added by default to collections representing "global variables" and
- "trainable variables". When you later come to create a @{tf.train.Saver} or
- @{tf.train.Optimizer}, the variables in these collections are used as the
+ "trainable variables". When you later come to create a `tf.train.Saver` or
+ `tf.train.Optimizer`, the variables in these collections are used as the
default arguments.
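A short illustration of the collection mechanism (`"my_things"` is an arbitrary key):

```python
import tensorflow as tf

v = tf.Variable(0.0, name="v")  # added to the global/trainable-variable
                                # collections by default
tf.add_to_collection("my_things", v)   # associate `v` with a custom key
print(tf.get_collection("my_things"))  # [<tf.Variable 'v:0' ...>]
print(tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES))
```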
-## Building a @{tf.Graph}
+## Building a `tf.Graph`
Most TensorFlow programs start with a dataflow graph construction phase. In this
-phase, you invoke TensorFlow API functions that construct new @{tf.Operation}
-(node) and @{tf.Tensor} (edge) objects and add them to a @{tf.Graph}
+phase, you invoke TensorFlow API functions that construct new `tf.Operation`
+(node) and `tf.Tensor` (edge) objects and add them to a `tf.Graph`
instance. TensorFlow provides a **default graph** that is an implicit argument
to all API functions in the same context. For example:
-* Calling `tf.constant(42.0)` creates a single @{tf.Operation} that produces the
- value `42.0`, adds it to the default graph, and returns a @{tf.Tensor} that
+* Calling `tf.constant(42.0)` creates a single `tf.Operation` that produces the
+ value `42.0`, adds it to the default graph, and returns a `tf.Tensor` that
represents the value of the constant.
-* Calling `tf.matmul(x, y)` creates a single @{tf.Operation} that multiplies
- the values of @{tf.Tensor} objects `x` and `y`, adds it to the default graph,
- and returns a @{tf.Tensor} that represents the result of the multiplication.
+* Calling `tf.matmul(x, y)` creates a single `tf.Operation` that multiplies
+ the values of `tf.Tensor` objects `x` and `y`, adds it to the default graph,
+ and returns a `tf.Tensor` that represents the result of the multiplication.
-* Executing `v = tf.Variable(0)` adds to the graph a @{tf.Operation} that will
- store a writeable tensor value that persists between @{tf.Session.run} calls.
- The @{tf.Variable} object wraps this operation, and can be used [like a
+* Executing `v = tf.Variable(0)` adds to the graph a `tf.Operation` that will
+ store a writeable tensor value that persists between `tf.Session.run` calls.
+ The `tf.Variable` object wraps this operation, and can be used [like a
tensor](#tensor-like_objects), which will read the current value of the
- stored value. The @{tf.Variable} object also has methods such as
- @{tf.Variable.assign$`assign`} and @{tf.Variable.assign_add$`assign_add`} that
- create @{tf.Operation} objects that, when executed, update the stored value.
+ stored value. The `tf.Variable` object also has methods such as
+ `tf.Variable.assign` and `tf.Variable.assign_add` that
+ create `tf.Operation` objects that, when executed, update the stored value.
(See @{$guide/variables} for more information about variables.)
-* Calling @{tf.train.Optimizer.minimize} will add operations and tensors to the
- default graph that calculates gradients, and return a @{tf.Operation} that,
+* Calling `tf.train.Optimizer.minimize` will add operations and tensors to the
+ default graph that calculates gradients, and return a `tf.Operation` that,
when run, will apply those gradients to a set of variables.
Most programs rely solely on the default graph. However,
see [Dealing with multiple graphs](#programming_with_multiple_graphs) for more
-advanced use cases. High-level APIs such as the @{tf.estimator.Estimator} API
+advanced use cases. High-level APIs such as the `tf.estimator.Estimator` API
manage the default graph on your behalf, and--for example--may create different
graphs for training and evaluation.
Note: Calling most functions in the TensorFlow API merely adds operations
and tensors to the default graph, but **does not** perform the actual
-computation. Instead, you compose these functions until you have a @{tf.Tensor}
-or @{tf.Operation} that represents the overall computation--such as performing
-one step of gradient descent--and then pass that object to a @{tf.Session} to
-perform the computation. See the section "Executing a graph in a @{tf.Session}"
+computation. Instead, you compose these functions until you have a `tf.Tensor`
+or `tf.Operation` that represents the overall computation--such as performing
+one step of gradient descent--and then pass that object to a `tf.Session` to
+perform the computation. See the section "Executing a graph in a `tf.Session`"
for more details.
## Naming operations
-A @{tf.Graph} object defines a **namespace** for the @{tf.Operation} objects it
+A `tf.Graph` object defines a **namespace** for the `tf.Operation` objects it
contains. TensorFlow automatically chooses a unique name for each operation in
your graph, but giving operations descriptive names can make your program easier
to read and debug. The TensorFlow API provides two ways to override the name of
an operation:
-* Each API function that creates a new @{tf.Operation} or returns a new
- @{tf.Tensor} accepts an optional `name` argument. For example,
- `tf.constant(42.0, name="answer")` creates a new @{tf.Operation} named
- `"answer"` and returns a @{tf.Tensor} named `"answer:0"`. If the default graph
+* Each API function that creates a new `tf.Operation` or returns a new
+ `tf.Tensor` accepts an optional `name` argument. For example,
+ `tf.constant(42.0, name="answer")` creates a new `tf.Operation` named
+ `"answer"` and returns a `tf.Tensor` named `"answer:0"`. If the default graph
already contains an operation named `"answer"`, then TensorFlow would append
`"_1"`, `"_2"`, and so on to the name, in order to make it unique.
-* The @{tf.name_scope} function makes it possible to add a **name scope** prefix
+* The `tf.name_scope` function makes it possible to add a **name scope** prefix
to all operations created in a particular context. The current name scope
- prefix is a `"/"`-delimited list of the names of all active @{tf.name_scope}
+ prefix is a `"/"`-delimited list of the names of all active `tf.name_scope`
context managers. If a name scope has already been used in the current
context, TensorFlow appends `"_1"`, `"_2"`, and so on. For example:
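A sketch of the resulting names:

```python
import tensorflow as tf

c_0 = tf.constant(0, name="c")       # operation named "c"
c_1 = tf.constant(2, name="c")       # uniquified to "c_1"

with tf.name_scope("outer"):
  c_2 = tf.constant(2, name="c")     # named "outer/c"
  with tf.name_scope("inner"):
    c_3 = tf.constant(3, name="c")   # named "outer/inner/c"
  c_4 = tf.constant(4, name="c")     # named "outer/c_1"
```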
@@ -160,7 +160,7 @@ The graph visualizer uses name scopes to group operations and reduce the visual
complexity of a graph. See [Visualizing your graph](#visualizing-your-graph) for
more information.
-Note that @{tf.Tensor} objects are implicitly named after the @{tf.Operation}
+Note that `tf.Tensor` objects are implicitly named after the `tf.Operation`
that produces the tensor as output. A tensor name has the form `"<OP_NAME>:<i>"`
where:
@@ -171,7 +171,7 @@ where:
## Placing operations on different devices
If you want your TensorFlow program to use multiple different devices, the
-@{tf.device} function provides a convenient way to request that all operations
+`tf.device` function provides a convenient way to request that all operations
created in a particular context are placed on the same device (or type of
device).
@@ -186,7 +186,7 @@ where:
* `<JOB_NAME>` is an alpha-numeric string that does not start with a number.
* `<DEVICE_TYPE>` is a registered device type (such as `GPU` or `CPU`).
* `<TASK_INDEX>` is a non-negative integer representing the index of the task
- in the job named `<JOB_NAME>`. See @{tf.train.ClusterSpec} for an explanation
+ in the job named `<JOB_NAME>`. See `tf.train.ClusterSpec` for an explanation
of jobs and tasks.
* `<DEVICE_INDEX>` is a non-negative integer representing the index of the
device, for example, to distinguish between different GPU devices used in the
@@ -194,7 +194,7 @@ where:
You do not need to specify every part of a device specification. For example,
if you are running in a single-machine configuration with a single GPU, you
-might use @{tf.device} to pin some operations to the CPU and GPU:
+might use `tf.device` to pin some operations to the CPU and GPU:
```python
# Operations created outside either context will run on the "best possible"
@@ -229,13 +229,13 @@ with tf.device("/job:worker"):
layer_2 = tf.matmul(train_batch, weights_2) + biases_2
```
-@{tf.device} gives you a lot of flexibility to choose placements for individual
+`tf.device` gives you a lot of flexibility to choose placements for individual
operations or broad regions of a TensorFlow graph. In many cases, there are
simple heuristics that work well. For example, the
-@{tf.train.replica_device_setter} API can be used with @{tf.device} to place
+`tf.train.replica_device_setter` API can be used with `tf.device` to place
operations for **data-parallel distributed training**. For example, the
-following code fragment shows how @{tf.train.replica_device_setter} applies
-different placement policies to @{tf.Variable} objects and other operations:
+following code fragment shows how `tf.train.replica_device_setter` applies
+different placement policies to `tf.Variable` objects and other operations:
```python
with tf.device(tf.train.replica_device_setter(ps_tasks=3)):
@@ -253,41 +253,41 @@ with tf.device(tf.train.replica_device_setter(ps_tasks=3)):
## Tensor-like objects
-Many TensorFlow operations take one or more @{tf.Tensor} objects as arguments.
-For example, @{tf.matmul} takes two @{tf.Tensor} objects, and @{tf.add_n} takes
-a list of `n` @{tf.Tensor} objects. For convenience, these functions will accept
-a **tensor-like object** in place of a @{tf.Tensor}, and implicitly convert it
-to a @{tf.Tensor} using the @{tf.convert_to_tensor} method. Tensor-like objects
+Many TensorFlow operations take one or more `tf.Tensor` objects as arguments.
+For example, `tf.matmul` takes two `tf.Tensor` objects, and `tf.add_n` takes
+a list of `n` `tf.Tensor` objects. For convenience, these functions will accept
+a **tensor-like object** in place of a `tf.Tensor`, and implicitly convert it
+to a `tf.Tensor` using the `tf.convert_to_tensor` method. Tensor-like objects
include elements of the following types:
-* @{tf.Tensor}
-* @{tf.Variable}
+* `tf.Tensor`
+* `tf.Variable`
* [`numpy.ndarray`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.html)
* `list` (and lists of tensor-like objects)
* Scalar Python types: `bool`, `float`, `int`, `str`
You can register additional tensor-like types using
-@{tf.register_tensor_conversion_function}.
+`tf.register_tensor_conversion_function`.
-Note: By default, TensorFlow will create a new @{tf.Tensor} each time you use
+Note: By default, TensorFlow will create a new `tf.Tensor` each time you use
the same tensor-like object. If the tensor-like object is large (e.g. a
`numpy.ndarray` containing a set of training examples) and you use it multiple
times, you may run out of memory. To avoid this, manually call
-@{tf.convert_to_tensor} on the tensor-like object once and use the returned
-@{tf.Tensor} instead.
+`tf.convert_to_tensor` on the tensor-like object once and use the returned
+`tf.Tensor` instead.
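A sketch of that advice:

```python
import numpy as np
import tensorflow as tf

big_array = np.random.rand(10000, 784)        # a large tensor-like object
big_tensor = tf.convert_to_tensor(big_array)  # convert once...
y = big_tensor * 2.0                          # ...then reuse the tf.Tensor
```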
-## Executing a graph in a @{tf.Session}
+## Executing a graph in a `tf.Session`
-TensorFlow uses the @{tf.Session} class to represent a connection between the
+TensorFlow uses the `tf.Session` class to represent a connection between the
client program---typically a Python program, although a similar interface is
-available in other languages---and the C++ runtime. A @{tf.Session} object
+available in other languages---and the C++ runtime. A `tf.Session` object
provides access to devices in the local machine, and remote devices using the
distributed TensorFlow runtime. It also caches information about your
-@{tf.Graph} so that you can efficiently run the same computation multiple times.
+`tf.Graph` so that you can efficiently run the same computation multiple times.
-### Creating a @{tf.Session}
+### Creating a `tf.Session`
-If you are using the low-level TensorFlow API, you can create a @{tf.Session}
+If you are using the low-level TensorFlow API, you can create a `tf.Session`
for the current default graph as follows:
```python
@@ -300,50 +300,50 @@ with tf.Session("grpc://example.org:2222"):
# ...
```
-Since a @{tf.Session} owns physical resources (such as GPUs and
+Since a `tf.Session` owns physical resources (such as GPUs and
network connections), it is typically used as a context manager (in a `with`
block) that automatically closes the session when you exit the block. It is
also possible to create a session without using a `with` block, but you should
-explicitly call @{tf.Session.close} when you are finished with it to free the
+explicitly call `tf.Session.close` when you are finished with it to free the
resources.
-Note: Higher-level APIs such as @{tf.train.MonitoredTrainingSession} or
-@{tf.estimator.Estimator} will create and manage a @{tf.Session} for you. These
+Note: Higher-level APIs such as `tf.train.MonitoredTrainingSession` or
+`tf.estimator.Estimator` will create and manage a `tf.Session` for you. These
APIs accept optional `target` and `config` arguments (either directly, or as
-part of a @{tf.estimator.RunConfig} object), with the same meaning as
+part of a `tf.estimator.RunConfig` object), with the same meaning as
described below.
-@{tf.Session.__init__} accepts three optional arguments:
+`tf.Session.__init__` accepts three optional arguments:
* **`target`.** If this argument is left empty (the default), the session will
only use devices in the local machine. However, you may also specify a
`grpc://` URL to specify the address of a TensorFlow server, which gives the
session access to all devices on machines that this server controls. See
- @{tf.train.Server} for details of how to create a TensorFlow
+ `tf.train.Server` for details of how to create a TensorFlow
server. For example, in the common **between-graph replication**
- configuration, the @{tf.Session} connects to a @{tf.train.Server} in the same
+ configuration, the `tf.Session` connects to a `tf.train.Server` in the same
process as the client. The [distributed TensorFlow](../deploy/distributed.md)
deployment guide describes other common scenarios.
-* **`graph`.** By default, a new @{tf.Session} will be bound to---and only able
+* **`graph`.** By default, a new `tf.Session` will be bound to---and only able
to run operations in---the current default graph. If you are using multiple
graphs in your program (see [Programming with multiple
graphs](#programming_with_multiple_graphs) for more details), you can specify
- an explicit @{tf.Graph} when you construct the session.
+ an explicit `tf.Graph` when you construct the session.
-* **`config`.** This argument allows you to specify a @{tf.ConfigProto} that
+* **`config`.** This argument allows you to specify a `tf.ConfigProto` that
controls the behavior of the session. For example, some of the configuration
options include:
* `allow_soft_placement`. Set this to `True` to enable a "soft" device
- placement algorithm, which ignores @{tf.device} annotations that attempt
+ placement algorithm, which ignores `tf.device` annotations that attempt
to place CPU-only operations on a GPU device, and places them on the CPU
instead.
* `cluster_def`. When using distributed TensorFlow, this option allows you
to specify what machines to use in the computation, and provide a mapping
between job names, task indices, and network addresses. See
- @{tf.train.ClusterSpec.as_cluster_def} for details.
+ `tf.train.ClusterSpec.as_cluster_def` for details.
* `graph_options.optimizer_options`. Provides control over the optimizations
that TensorFlow performs on your graph before executing it.
@@ -353,21 +353,21 @@ described below.
rather than allocating most of the memory at startup.
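For example, a session configured with a few of these options (a sketch):

```python
import tensorflow as tf

config = tf.ConfigProto(allow_soft_placement=True)
config.gpu_options.allow_growth = True  # allocate GPU memory as needed

with tf.Session(config=config) as sess:
  pass  # run your graph here
```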
-### Using @{tf.Session.run} to execute operations
+### Using `tf.Session.run` to execute operations
-The @{tf.Session.run} method is the main mechanism for running a @{tf.Operation}
-or evaluating a @{tf.Tensor}. You can pass one or more @{tf.Operation} or
-@{tf.Tensor} objects to @{tf.Session.run}, and TensorFlow will execute the
+The `tf.Session.run` method is the main mechanism for running a `tf.Operation`
+or evaluating a `tf.Tensor`. You can pass one or more `tf.Operation` or
+`tf.Tensor` objects to `tf.Session.run`, and TensorFlow will execute the
operations that are needed to compute the result.
-@{tf.Session.run} requires you to specify a list of **fetches**, which determine
-the return values, and may be a @{tf.Operation}, a @{tf.Tensor}, or
-a [tensor-like type](#tensor-like_objects) such as @{tf.Variable}. These fetches
-determine what **subgraph** of the overall @{tf.Graph} must be executed to
+`tf.Session.run` requires you to specify a list of **fetches**, which determine
+the return values, and may be a `tf.Operation`, a `tf.Tensor`, or
+a [tensor-like type](#tensor-like_objects) such as `tf.Variable`. These fetches
+determine what **subgraph** of the overall `tf.Graph` must be executed to
produce the result: this is the subgraph that contains all operations named in
the fetch list, plus all operations whose outputs are used to compute the value
of the fetches. For example, the following code fragment shows how different
-arguments to @{tf.Session.run} cause different subgraphs to be executed:
+arguments to `tf.Session.run` cause different subgraphs to be executed:
```python
x = tf.constant([[37.0, -23.0], [1.0, 4.0]])
@@ -390,8 +390,8 @@ with tf.Session() as sess:
y_val, output_val = sess.run([y, output])
```
-@{tf.Session.run} also optionally takes a dictionary of **feeds**, which is a
-mapping from @{tf.Tensor} objects (typically @{tf.placeholder} tensors) to
+`tf.Session.run` also optionally takes a dictionary of **feeds**, which is a
+mapping from `tf.Tensor` objects (typically `tf.placeholder` tensors) to
values (typically Python scalars, lists, or NumPy arrays) that will be
substituted for those tensors in the execution. For example:
@@ -415,7 +415,7 @@ with tf.Session() as sess:
sess.run(y, {x: 37.0})
```
-@{tf.Session.run} also accepts an optional `options` argument that enables you
+`tf.Session.run` also accepts an optional `options` argument that enables you
to specify options about the call, and an optional `run_metadata` argument that
enables you to collect metadata about the execution. For example, you can use
these options together to collect tracing information about the execution:
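A sketch of that pattern:

```python
import tensorflow as tf

y = tf.constant(1.0) * 2.0

options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
metadata = tf.RunMetadata()

with tf.Session() as sess:
  sess.run(y, options=options, run_metadata=metadata)
  print(metadata.step_stats)  # per-device timings of each executed op
```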
@@ -447,8 +447,8 @@ with tf.Session() as sess:
TensorFlow includes tools that can help you to understand the code in a graph.
The **graph visualizer** is a component of TensorBoard that renders the
structure of your graph visually in a browser. The easiest way to create a
-visualization is to pass a @{tf.Graph} when creating the
-@{tf.summary.FileWriter}:
+visualization is to pass a `tf.Graph` when creating the
+`tf.summary.FileWriter`:
```python
# Build your graph.
@@ -471,7 +471,7 @@ with tf.Session() as sess:
writer.close()
```
-Note: If you are using a @{tf.estimator.Estimator}, the graph (and any
+Note: If you are using a `tf.estimator.Estimator`, the graph (and any
summaries) will be logged automatically to the `model_dir` that you specified
when creating the estimator.
@@ -495,8 +495,8 @@ graph for training your model, and a separate graph for evaluating or performing
inference with a trained model. In many cases, the inference graph will be
different from the training graph: for example, techniques like dropout and
batch normalization use different operations in each case. Furthermore, by
-default utilities like @{tf.train.Saver} use the names of @{tf.Variable} objects
-(which have names based on an underlying @{tf.Operation}) to identify each
+default utilities like `tf.train.Saver` use the names of `tf.Variable` objects
+(which have names based on an underlying `tf.Operation`) to identify each
variable in a saved checkpoint. When programming this way, you can either use
completely separate Python processes to build and execute the graphs, or you can
use multiple graphs in the same process. This section describes how to use
@@ -507,21 +507,21 @@ to all API functions in the same context. For many applications, a single graph
is sufficient. However, TensorFlow also provides methods for manipulating
the default graph, which can be useful in more advanced use cases. For example:
-* A @{tf.Graph} defines the namespace for @{tf.Operation} objects: each
+* A `tf.Graph` defines the namespace for `tf.Operation` objects: each
operation in a single graph must have a unique name. TensorFlow will
"uniquify" the names of operations by appending `"_1"`, `"_2"`, and so on to
their names if the requested name is already taken. Using multiple explicitly
created graphs gives you more control over what name is given to each
operation.
-* The default graph stores information about every @{tf.Operation} and
- @{tf.Tensor} that was ever added to it. If your program creates a large number
+* The default graph stores information about every `tf.Operation` and
+ `tf.Tensor` that was ever added to it. If your program creates a large number
of unconnected subgraphs, it may be more efficient to use a different
- @{tf.Graph} to build each subgraph, so that unrelated state can be garbage
+ `tf.Graph` to build each subgraph, so that unrelated state can be garbage
collected.
-You can install a different @{tf.Graph} as the default graph, using the
-@{tf.Graph.as_default} context manager:
+You can install a different `tf.Graph` as the default graph, using the
+`tf.Graph.as_default` context manager:
```python
g_1 = tf.Graph()
@@ -548,8 +548,8 @@ assert d.graph is g_2
assert sess_2.graph is g_2
```
-To inspect the current default graph, call @{tf.get_default_graph}, which
-returns a @{tf.Graph} object:
+To inspect the current default graph, call `tf.get_default_graph`, which
+returns a `tf.Graph` object:
```python
# Print all of the operations in the default graph.
diff --git a/tensorflow/docs_src/guide/index.md b/tensorflow/docs_src/guide/index.md
index f78dfc9a89..1c920e7d70 100644
--- a/tensorflow/docs_src/guide/index.md
+++ b/tensorflow/docs_src/guide/index.md
@@ -9,14 +9,13 @@ works. The units are as follows:
training deep learning models.
* @{$guide/eager}, an API for writing TensorFlow code
imperatively, like you would use Numpy.
- * @{$guide/estimators}, a high-level API that provides
- fully-packaged models ready for large-scale training and production.
* @{$guide/datasets}, easy input pipelines to bring your data into
your TensorFlow program.
+ * @{$guide/estimators}, a high-level API that provides
+ fully-packaged models ready for large-scale training and production.
## Estimators
-* @{$estimators}, learn how to use Estimators for machine learning.
* @{$premade_estimators}, the basics of premade Estimators.
* @{$checkpoints}, save training progress and resume where you left off.
* @{$feature_columns}, handle a variety of input data types without changes to the model.
diff --git a/tensorflow/docs_src/guide/leftnav_files b/tensorflow/docs_src/guide/leftnav_files
index c4e235b41a..8e227e0c8f 100644
--- a/tensorflow/docs_src/guide/leftnav_files
+++ b/tensorflow/docs_src/guide/leftnav_files
@@ -4,9 +4,9 @@ index.md
keras.md
eager.md
datasets.md
+estimators.md: Introduction to Estimators
### Estimators
-estimators.md: Introduction to Estimators
premade_estimators.md
checkpoints.md
feature_columns.md
diff --git a/tensorflow/docs_src/guide/low_level_intro.md b/tensorflow/docs_src/guide/low_level_intro.md
index 665a5568b4..dc6cb9ee0d 100644
--- a/tensorflow/docs_src/guide/low_level_intro.md
+++ b/tensorflow/docs_src/guide/low_level_intro.md
@@ -63,17 +63,17 @@ TensorFlow uses numpy arrays to represent tensor **values**.
You might think of TensorFlow Core programs as consisting of two discrete
sections:
-1. Building the computational graph (a @{tf.Graph}).
-2. Running the computational graph (using a @{tf.Session}).
+1. Building the computational graph (a `tf.Graph`).
+2. Running the computational graph (using a `tf.Session`).
### Graph
A **computational graph** is a series of TensorFlow operations arranged into a
graph. The graph is composed of two types of objects.
- * @{tf.Operation$Operations} (or "ops"): The nodes of the graph.
+ * `tf.Operation` (or "ops"): The nodes of the graph.
Operations describe calculations that consume and produce tensors.
- * @{tf.Tensor$Tensors}: The edges in the graph. These represent the values
+ * `tf.Tensor`: The edges in the graph. These represent the values
that will flow through the graph. Most TensorFlow functions return
`tf.Tensors`.
@@ -149,7 +149,7 @@ For more about TensorBoard's graph visualization tools see @{$graph_viz}.
### Session
-To evaluate tensors, instantiate a @{tf.Session} object, informally known as a
+To evaluate tensors, instantiate a `tf.Session` object, informally known as a
**session**. A session encapsulates the state of the TensorFlow runtime, and
runs TensorFlow operations. If a `tf.Graph` is like a `.py` file, a `tf.Session`
is like the `python` executable.
@@ -232,7 +232,7 @@ z = x + y
The preceding three lines are a bit like a function in which we
define two input parameters (`x` and `y`) and then an operation on them. We can
evaluate this graph with multiple inputs by using the `feed_dict` argument of
-the @{tf.Session.run$run method} to feed concrete values to the placeholders:
+the `tf.Session.run` method to feed concrete values to the placeholders:
```python
print(sess.run(z, feed_dict={x: 3, y: 4.5}))
@@ -251,15 +251,15 @@ that placeholders throw an error if no value is fed to them.
## Datasets
-Placeholders work for simple experiments, but @{tf.data$Datasets} are the
+Placeholders work for simple experiments, but `tf.data` is the
preferred method of streaming data into a model.
To get a runnable `tf.Tensor` from a Dataset you must first convert it to a
-@{tf.data.Iterator}, and then call the Iterator's
-@{tf.data.Iterator.get_next$`get_next`} method.
+`tf.data.Iterator`, and then call the Iterator's
+`tf.data.Iterator.get_next` method.
The simplest way to create an Iterator is with the
-@{tf.data.Dataset.make_one_shot_iterator$`make_one_shot_iterator`} method.
+`tf.data.Dataset.make_one_shot_iterator` method.
For example, in the following code the `next_item` tensor will return a row from
the `my_data` array on each `run` call:
@@ -275,7 +275,7 @@ next_item = slices.make_one_shot_iterator().get_next()
```
Reaching the end of the data stream causes `Dataset` to throw an
-@{tf.errors.OutOfRangeError$`OutOfRangeError`}. For example, the following code
+`tf.errors.OutOfRangeError`. For example, the following code
reads the `next_item` until there is no more data to read:
``` python
@@ -308,7 +308,7 @@ For more details on Datasets and Iterators see: @{$guide/datasets}.
## Layers
A trainable model must modify the values in the graph to get new outputs with
-the same input. @{tf.layers$Layers} are the preferred way to add trainable
+the same input. Layers (`tf.layers`) are the preferred way to add trainable
parameters to a graph.
Layers package together both the variables and the operations that act
@@ -321,7 +321,7 @@ The connection weights and biases are managed by the layer object.
### Creating Layers
-The following code creates a @{tf.layers.Dense$`Dense`} layer that takes a
+The following code creates a `tf.layers.Dense` layer that takes a
batch of input vectors, and produces a single output value for each. To apply a
layer to an input, call the layer as if it were a function. For example:
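A minimal sketch:

```python
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 3])
linear_model = tf.layers.Dense(units=1)
y = linear_model(x)  # calling the layer builds its variables and ops
```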
@@ -375,8 +375,8 @@ will generate a two-element output vector such as the following:
### Layer Function shortcuts
-For each layer class (like @{tf.layers.Dense}) TensorFlow also supplies a
-shortcut function (like @{tf.layers.dense}). The only difference is that the
+For each layer class (like `tf.layers.Dense`) TensorFlow also supplies a
+shortcut function (like `tf.layers.dense`). The only difference is that the
shortcut function versions create and run the layer in a single call. For
example, the following code is equivalent to the earlier version:
@@ -390,17 +390,17 @@ sess.run(init)
print(sess.run(y, {x: [[1, 2, 3], [4, 5, 6]]}))
```
-While convenient, this approach allows no access to the @{tf.layers.Layer}
+While convenient, this approach allows no access to the `tf.layers.Layer`
object. This makes introspection and debugging more difficult,
and layer reuse impossible.
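By contrast, keeping the layer object around lets you reuse it. A sketch, with
`x1` and `x2` as assumed input tensors:

``` python
linear_model = tf.layers.Dense(units=1)
y1 = linear_model(x1)  # both calls share the same
y2 = linear_model(x2)  # kernel and bias variables
```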
## Feature columns
The easiest way to experiment with feature columns is to use the
-@{tf.feature_column.input_layer} function. This function only accepts
+`tf.feature_column.input_layer` function. This function only accepts
@{$feature_columns$dense columns} as inputs, so to view the result
of a categorical column you must wrap it in an
-@{tf.feature_column.indicator_column}. For example:
+`tf.feature_column.indicator_column`. For example:
``` python
features = {
@@ -422,9 +422,9 @@ inputs = tf.feature_column.input_layer(features, columns)
Running the `inputs` tensor will parse the `features` into a batch of vectors.
Feature columns can have internal state, like layers, so they often need to be
-initialized. Categorical columns use @{tf.contrib.lookup$lookup tables}
+initialized. Categorical columns use lookup tables from `tf.contrib.lookup`
internally and these require a separate initialization op,
-@{tf.tables_initializer}.
+`tf.tables_initializer`.
``` python
var_init = tf.global_variables_initializer()
@@ -501,7 +501,7 @@ To optimize a model, you first need to define the loss. We'll use the mean
square error, a standard loss for regression problems.
While you could do this manually with lower level math operations,
-the @{tf.losses} module provides a set of common loss functions. You can use it
+the `tf.losses` module provides a set of common loss functions. You can use it
to calculate the mean square error as follows:
``` python
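# A sketch; `y_true` and `y_pred` stand in for the label and
# prediction tensors defined above.
loss = tf.losses.mean_squared_error(labels=y_true, predictions=y_pred)
print(sess.run(loss))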
@@ -520,10 +520,10 @@ This will produce a loss value, something like:
TensorFlow provides
[**optimizers**](https://developers.google.com/machine-learning/glossary/#optimizer)
implementing standard optimization algorithms. These are implemented as
-sub-classes of @{tf.train.Optimizer}. They incrementally change each
+sub-classes of `tf.train.Optimizer`. They incrementally change each
variable in order to minimize the loss. The simplest optimization algorithm is
[**gradient descent**](https://developers.google.com/machine-learning/glossary/#gradient_descent),
-implemented by @{tf.train.GradientDescentOptimizer}. It modifies each
+implemented by `tf.train.GradientDescentOptimizer`. It modifies each
variable according to the magnitude of the derivative of loss with respect to
that variable. For example:
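A minimal sketch (the `0.01` learning rate is illustrative):

``` python
optimizer = tf.train.GradientDescentOptimizer(0.01)
train = optimizer.minimize(loss)
```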
diff --git a/tensorflow/docs_src/guide/premade_estimators.md b/tensorflow/docs_src/guide/premade_estimators.md
index 3e910c1fe2..dc38f0c1d3 100644
--- a/tensorflow/docs_src/guide/premade_estimators.md
+++ b/tensorflow/docs_src/guide/premade_estimators.md
@@ -175,9 +175,9 @@ handles the details of initialization, logging, saving and restoring, and many
other features so you can concentrate on your model. For more details see
@{$guide/estimators}.
-An Estimator is any class derived from @{tf.estimator.Estimator}. TensorFlow
+An Estimator is any class derived from `tf.estimator.Estimator`. TensorFlow
provides a collection of
-@{tf.estimator$pre-made Estimators}
+pre-made Estimators in the `tf.estimator` module
(for example, `LinearRegressor`) to implement common ML algorithms. Beyond
those, you may write your own
@{$custom_estimators$custom Estimators}.
@@ -200,7 +200,7 @@ Let's see how those tasks are implemented for Iris classification.
You must create input functions to supply data for training,
evaluating, and prediction.
-An **input function** is a function that returns a @{tf.data.Dataset} object
+An **input function** is a function that returns a `tf.data.Dataset` object
which outputs the following two-element tuple:
* [`features`](https://developers.google.com/machine-learning/glossary/#feature) - A Python dictionary in which:
@@ -271,7 +271,7 @@ A [**feature column**](https://developers.google.com/machine-learning/glossary/#
is an object describing how the model should use raw input data from the
features dictionary. When you build an Estimator model, you pass it a list of
feature columns that describes each of the features you want the model to use.
-The @{tf.feature_column} module provides many options for representing data
+The `tf.feature_column` module provides many options for representing data
to the model.
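For instance, a numeric column can be declared as follows (a minimal sketch;
the key name is illustrative):

```python
sepal_length = tf.feature_column.numeric_column(key='SepalLength')
```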
For Iris, the 4 raw features are numeric values, so we'll build a list of
@@ -299,10 +299,10 @@ features, we can build the estimator.
The Iris problem is a classic classification problem. Fortunately, TensorFlow
provides several pre-made classifier Estimators, including:
-* @{tf.estimator.DNNClassifier} for deep models that perform multi-class
+* `tf.estimator.DNNClassifier` for deep models that perform multi-class
classification.
-* @{tf.estimator.DNNLinearCombinedClassifier} for wide & deep models.
-* @{tf.estimator.LinearClassifier} for classifiers based on linear models.
+* `tf.estimator.DNNLinearCombinedClassifier` for wide & deep models.
+* `tf.estimator.LinearClassifier` for classifiers based on linear models.
For the Iris problem, `tf.estimator.DNNClassifier` seems like the best choice.
Here's how we instantiated this Estimator:
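A sketch of that instantiation (`my_feature_columns` comes from the previous
step; the layer sizes are illustrative):

```python
classifier = tf.estimator.DNNClassifier(
    feature_columns=my_feature_columns,
    hidden_units=[10, 10],  # two hidden layers of 10 nodes each
    n_classes=3)            # the model must choose between 3 classes
```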
diff --git a/tensorflow/docs_src/guide/saved_model.md b/tensorflow/docs_src/guide/saved_model.md
index 717488e7cc..c260da7966 100644
--- a/tensorflow/docs_src/guide/saved_model.md
+++ b/tensorflow/docs_src/guide/saved_model.md
@@ -1,8 +1,8 @@
# Save and Restore
-The @{tf.train.Saver} class provides methods to save and restore models. The
-@{tf.saved_model.simple_save} function is an easy way to build a
-@{tf.saved_model$saved model} suitable for serving. [Estimators](./estimators)
+The `tf.train.Saver` class provides methods to save and restore models. The
+`tf.saved_model.simple_save` function is an easy way to build a
+saved model (`tf.saved_model`) suitable for serving. [Estimators](./estimators)
automatically save and restore variables in the `model_dir`.
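A minimal save sketch (the checkpoint path is illustrative):

```python
saver = tf.train.Saver()
with tf.Session() as sess:
  sess.run(tf.global_variables_initializer())
  save_path = saver.save(sess, "/tmp/model.ckpt")  # illustrative path
```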
## Save and restore variables
@@ -145,13 +145,13 @@ Notes:
* If you only restore a subset of the model variables at the start of a
session, you have to run an initialize op for the other variables. See
- @{tf.variables_initializer} for more information.
+ `tf.variables_initializer` for more information.
* To inspect the variables in a checkpoint, you can use the
[`inspect_checkpoint`](https://www.tensorflow.org/code/tensorflow/python/tools/inspect_checkpoint.py)
library, particularly the `print_tensors_in_checkpoint_file` function.
-* By default, `Saver` uses the value of the @{tf.Variable.name} property
+* By default, `Saver` uses the value of the `tf.Variable.name` property
for each variable. However, when you create a `Saver` object, you may
optionally choose names for the variables in the checkpoint files.
@@ -196,15 +196,15 @@ Use `SavedModel` to save and load your model—variables, the graph, and the
graph's metadata. This is a language-neutral, recoverable, hermetic
serialization format that enables higher-level systems and tools to produce,
consume, and transform TensorFlow models. TensorFlow provides several ways to
-interact with `SavedModel`, including the @{tf.saved_model} APIs,
-@{tf.estimator.Estimator}, and a command-line interface.
+interact with `SavedModel`, including the `tf.saved_model` APIs,
+`tf.estimator.Estimator`, and a command-line interface.
## Build and load a SavedModel
### Simple save
-The easiest way to create a `SavedModel` is to use the @{tf.saved_model.simple_save}
+The easiest way to create a `SavedModel` is to use the `tf.saved_model.simple_save`
function:
```python
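# A sketch: `session` holds the trained variables, and the `inputs`
# and `outputs` dicts name the tensors the served model exposes.
tf.saved_model.simple_save(session,
                           export_dir,
                           inputs={"x": x, "y": y},
                           outputs={"z": z})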
@@ -218,14 +218,14 @@ This configures the `SavedModel` so it can be loaded by
[TensorFlow serving](/serving/serving_basic) and supports the
[Predict API](https://github.com/tensorflow/serving/blob/master/tensorflow_serving/apis/predict.proto).
To access the classify, regress, or multi-inference APIs, use the manual
-`SavedModel` builder APIs or an @{tf.estimator.Estimator}.
+`SavedModel` builder APIs or a `tf.estimator.Estimator`.
### Manually build a SavedModel
-If your use case isn't covered by @{tf.saved_model.simple_save}, use the manual
-@{tf.saved_model.builder$builder APIs} to create a `SavedModel`.
+If your use case isn't covered by `tf.saved_model.simple_save`, use the manual
+`tf.saved_model.builder` APIs to create a `SavedModel`.
-The @{tf.saved_model.builder.SavedModelBuilder} class provides functionality to
+The `tf.saved_model.builder.SavedModelBuilder` class provides functionality to
save multiple `MetaGraphDef`s. A **MetaGraph** is a dataflow graph, plus
its associated variables, assets, and signatures. A **`MetaGraphDef`**
is the protocol buffer representation of a MetaGraph. A **signature** is
@@ -272,16 +272,16 @@ builder.save()
Following the guidance below gives you forward compatibility only if the set of
Ops has not changed.
-The @{tf.saved_model.builder.SavedModelBuilder$`SavedModelBuilder`} class allows
+The `tf.saved_model.builder.SavedModelBuilder` class allows
users to control whether default-valued attributes must be stripped from the
@{$extend/tool_developers#nodes$`NodeDefs`}
while adding a meta graph to the SavedModel bundle. Both
-@{tf.saved_model.builder.SavedModelBuilder.add_meta_graph_and_variables$`SavedModelBuilder.add_meta_graph_and_variables`}
-and @{tf.saved_model.builder.SavedModelBuilder.add_meta_graph$`SavedModelBuilder.add_meta_graph`}
+`tf.saved_model.builder.SavedModelBuilder.add_meta_graph_and_variables`
+and `tf.saved_model.builder.SavedModelBuilder.add_meta_graph`
methods accept a Boolean flag `strip_default_attrs` that controls this behavior.
-If `strip_default_attrs` is `False`, the exported @{tf.MetaGraphDef} will have
-the default valued attributes in all its @{tf.NodeDef} instances.
+If `strip_default_attrs` is `False`, the exported `tf.MetaGraphDef` will have
+the default valued attributes in all its `tf.NodeDef` instances.
This can break forward compatibility with a sequence of events such as the
following:
@@ -304,7 +304,7 @@ for more information.
### Loading a SavedModel in Python
The Python version of the SavedModel
-@{tf.saved_model.loader$loader}
+loader, `tf.saved_model.loader`,
provides load and restore capability for a SavedModel. The `load` operation
requires the following information:
@@ -423,20 +423,20 @@ the model. This function has the following purposes:
* To add any additional ops needed to convert data from the input format
into the feature `Tensor`s expected by the model.
-The function returns a @{tf.estimator.export.ServingInputReceiver} object,
+The function returns a `tf.estimator.export.ServingInputReceiver` object,
which packages the placeholders and the resulting feature `Tensor`s together.
A typical pattern is that inference requests arrive in the form of serialized
`tf.Example`s, so the `serving_input_receiver_fn()` creates a single string
placeholder to receive them. The `serving_input_receiver_fn()` is then also
-responsible for parsing the `tf.Example`s by adding a @{tf.parse_example} op to
+responsible for parsing the `tf.Example`s by adding a `tf.parse_example` op to
the graph.
When writing such a `serving_input_receiver_fn()`, you must pass a parsing
-specification to @{tf.parse_example} to tell the parser what feature names to
+specification to `tf.parse_example` to tell the parser what feature names to
expect and how to map them to `Tensor`s. A parsing specification takes the
-form of a dict from feature names to @{tf.FixedLenFeature}, @{tf.VarLenFeature},
-and @{tf.SparseFeature}. Note this parsing specification should not include
+form of a dict from feature names to `tf.FixedLenFeature`, `tf.VarLenFeature`,
+and `tf.SparseFeature`. Note this parsing specification should not include
any label or weight columns, since those will not be available at serving
time&mdash;in contrast to a parsing specification used in the `input_fn()` at
training time.
@@ -457,7 +457,7 @@ def serving_input_receiver_fn():
return tf.estimator.export.ServingInputReceiver(features, receiver_tensors)
```
-The @{tf.estimator.export.build_parsing_serving_input_receiver_fn} utility
+The `tf.estimator.export.build_parsing_serving_input_receiver_fn` utility
function provides that input receiver for the common case.
> Note: when training a model to be served using the Predict API with a local
@@ -468,7 +468,7 @@ Even if you require no parsing or other input processing&mdash;that is, if the
serving system will feed feature `Tensor`s directly&mdash;you must still provide
a `serving_input_receiver_fn()` that creates placeholders for the feature
`Tensor`s and passes them through. The
-@{tf.estimator.export.build_raw_serving_input_receiver_fn} utility provides for
+`tf.estimator.export.build_raw_serving_input_receiver_fn` utility provides for
this.
If these utilities do not meet your needs, you are free to write your own
@@ -488,7 +488,7 @@ By contrast, the *output* portion of the signature is determined by the model.
### Specify the outputs of a custom model
When writing a custom `model_fn`, you must populate the `export_outputs` element
-of the @{tf.estimator.EstimatorSpec} return value. This is a dict of
+of the `tf.estimator.EstimatorSpec` return value. This is a dict of
`{name: output}` describing the output signatures to be exported and used during
serving.
@@ -498,9 +498,9 @@ is represented by an entry in this dict. In this case the `name` is a string
of your choice that can be used to request a specific head at serving time.
Each `output` value must be an `ExportOutput` object such as
-@{tf.estimator.export.ClassificationOutput},
-@{tf.estimator.export.RegressionOutput}, or
-@{tf.estimator.export.PredictOutput}.
+`tf.estimator.export.ClassificationOutput`,
+`tf.estimator.export.RegressionOutput`, or
+`tf.estimator.export.PredictOutput`.
These output types map straightforwardly to the
[TensorFlow Serving APIs](https://github.com/tensorflow/serving/blob/master/tensorflow_serving/apis/prediction_service.proto),
@@ -520,7 +520,7 @@ does not specify one.
### Perform the export
To export your trained Estimator, call
-@{tf.estimator.Estimator.export_savedmodel} with the export base path and
+`tf.estimator.Estimator.export_savedmodel` with the export base path and
the `serving_input_receiver_fn`.
```py
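# A sketch; `export_dir_base` and `serving_input_receiver_fn` are
# assumed from the preceding sections.
estimator.export_savedmodel(export_dir_base, serving_input_receiver_fn)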
diff --git a/tensorflow/docs_src/guide/summaries_and_tensorboard.md b/tensorflow/docs_src/guide/summaries_and_tensorboard.md
index fadfa03e78..6177c3393b 100644
--- a/tensorflow/docs_src/guide/summaries_and_tensorboard.md
+++ b/tensorflow/docs_src/guide/summaries_and_tensorboard.md
@@ -41,7 +41,7 @@ data from, and decide which nodes you would like to annotate with
For example, suppose you are training a convolutional neural network for
recognizing MNIST digits. You'd like to record how the learning rate
varies over time, and how the objective function is changing. Collect these by
-attaching @{tf.summary.scalar} ops
+attaching `tf.summary.scalar` ops
to the nodes that output the learning rate and loss respectively. Then, give
each `scalar_summary` a meaningful `tag`, like `'learning rate'` or `'loss
function'`.
@@ -49,7 +49,7 @@ function'`.
Perhaps you'd also like to visualize the distributions of activations coming
off a particular layer, or the distribution of gradients or weights. Collect
this data by attaching
-@{tf.summary.histogram} ops to
+`tf.summary.histogram` ops to
the gradient outputs and to the variable that holds your weights, respectively.
For details on all of the summary operations available, check out the docs on
@@ -60,13 +60,13 @@ depends on their output. And the summary nodes that we've just created are
peripheral to your graph: none of the ops you are currently running depend on
them. So, to generate summaries, we need to run all of these summary nodes.
Managing them by hand would be tedious, so use
-@{tf.summary.merge_all}
+`tf.summary.merge_all`
to combine them into a single op that generates all the summary data.
Then, you can just run the merged summary op, which will generate a serialized
`Summary` protobuf object with all of your summary data at a given step.
Finally, to write this summary data to disk, pass the summary protobuf to a
-@{tf.summary.FileWriter}.
+`tf.summary.FileWriter`.
The `FileWriter` takes a logdir in its constructor; this logdir is quite
important: it's the directory where all of the events will be written out.
diff --git a/tensorflow/docs_src/guide/tensors.md b/tensorflow/docs_src/guide/tensors.md
index 7227260f1a..6b5a110a1c 100644
--- a/tensorflow/docs_src/guide/tensors.md
+++ b/tensorflow/docs_src/guide/tensors.md
@@ -176,7 +176,7 @@ Rank | Shape | Dimension number | Example
n | [D0, D1, ... Dn-1] | n-D | A tensor with shape [D0, D1, ... Dn-1].
Shapes can be represented via Python lists / tuples of ints, or with the
-@{tf.TensorShape}.
+`tf.TensorShape`.
### Getting a `tf.Tensor` object's shape
diff --git a/tensorflow/docs_src/guide/using_gpu.md b/tensorflow/docs_src/guide/using_gpu.md
index c429ca4750..c0218fd12e 100644
--- a/tensorflow/docs_src/guide/using_gpu.md
+++ b/tensorflow/docs_src/guide/using_gpu.md
@@ -143,7 +143,7 @@ If the device you have specified does not exist, you will get
```
InvalidArgumentError: Invalid argument: Cannot assign a device to node 'b':
Could not satisfy explicit device specification '/device:GPU:2'
- [[Node: b = Const[dtype=DT_FLOAT, value=Tensor<type: float shape: [3,2]
+ [[{{node b}} = Const[dtype=DT_FLOAT, value=Tensor<type: float shape: [3,2]
values: 1 2 3...>, _device="/device:GPU:2"]()]]
```
diff --git a/tensorflow/docs_src/guide/using_tpu.md b/tensorflow/docs_src/guide/using_tpu.md
index 41d80d9d60..90a663b75e 100644
--- a/tensorflow/docs_src/guide/using_tpu.md
+++ b/tensorflow/docs_src/guide/using_tpu.md
@@ -17,9 +17,9 @@ This doc is aimed at users who:
## TPUEstimator
-@{tf.estimator.Estimator$Estimators} are TensorFlow's model-level abstraction.
+Estimators (`tf.estimator.Estimator`) are TensorFlow's model-level abstraction.
Standard `Estimators` can drive models on CPUs and GPUs. You must use
-@{tf.contrib.tpu.TPUEstimator} to drive a model on TPUs.
+`tf.contrib.tpu.TPUEstimator` to drive a model on TPUs.
Refer to TensorFlow's Getting Started section for an introduction to the basics
of using a @{$premade_estimators$pre-made `Estimator`}, and
@@ -44,10 +44,10 @@ my_estimator = tf.estimator.Estimator(
model_fn=my_model_fn)
```
-The changes required to use a @{tf.contrib.tpu.TPUEstimator} on your local
+The changes required to use a `tf.contrib.tpu.TPUEstimator` on your local
machine are relatively minor. The constructor requires two additional arguments.
You should set the `use_tpu` argument to `False`, and pass a
-@{tf.contrib.tpu.RunConfig} as the `config` argument, as shown below:
+`tf.contrib.tpu.RunConfig` as the `config` argument, as shown below:
``` python
my_tpu_estimator = tf.contrib.tpu.TPUEstimator(
@@ -117,7 +117,7 @@ my_tpu_run_config = tf.contrib.tpu.RunConfig(
)
```
-Then you must pass the @{tf.contrib.tpu.RunConfig} to the constructor:
+Then you must pass the `tf.contrib.tpu.RunConfig` to the constructor:
``` python
my_tpu_estimator = tf.contrib.tpu.TPUEstimator(
@@ -137,7 +137,7 @@ training locally to training on a cloud TPU you would need to:
## Optimizer
When training on a cloud TPU you **must** wrap the optimizer in a
-@{tf.contrib.tpu.CrossShardOptimizer}, which uses an `allreduce` to aggregate
+`tf.contrib.tpu.CrossShardOptimizer`, which uses an `allreduce` to aggregate
gradients and broadcast the result to each shard (each TPU core).
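A minimal sketch of that wrapping (the learning rate and the `FLAGS.use_tpu`
flag are assumptions):

``` python
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.1)
if FLAGS.use_tpu:
  optimizer = tf.contrib.tpu.CrossShardOptimizer(optimizer)
```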
The `CrossShardOptimizer` is not compatible with local training. So, to have
@@ -200,7 +200,7 @@ Build your evaluation metrics dictionary in a stand-alone `metric_fn`.
Evaluation metrics are an essential part of training a model. These are fully
supported on Cloud TPUs, but with a slightly different syntax.
-A standard @{tf.metrics} returns two tensors. The first returns the running
+A standard `tf.metrics` function returns two tensors. The first returns the running
average of the metric value, while the second updates the running average and
returns the value for this batch:
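For example (a sketch using `tf.metrics.accuracy`; the `labels` and
`predictions` tensors are assumed):

``` python
accuracy, update_op = tf.metrics.accuracy(labels=labels,
                                          predictions=predictions)
```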
@@ -242,15 +242,15 @@ An `Estimator`'s `model_fn` must return an `EstimatorSpec`. An `EstimatorSpec`
is a simple structure of named fields containing all the `tf.Tensors` of the
model that the `Estimator` may need to interact with.
-`TPUEstimators` use a @{tf.contrib.tpu.TPUEstimatorSpec}. There are a few
-differences between it and a standard @{tf.estimator.EstimatorSpec}:
+`TPUEstimators` use a `tf.contrib.tpu.TPUEstimatorSpec`. There are a few
+differences between it and a standard `tf.estimator.EstimatorSpec`:
* The `eval_metric_ops` must be wrapped into a `metrics_fn`; this field is
renamed `eval_metrics` ([see above](#metrics)).
-* The @{tf.train.SessionRunHook$hooks} are unsupported, so these fields are
+* The `tf.train.SessionRunHook` hooks are unsupported, so these fields are
omitted.
-* The @{tf.train.Scaffold$`scaffold`}, if used, must also be wrapped in a
+* The `tf.train.Scaffold`, if used, must also be wrapped in a
function. This field is renamed to `scaffold_fn`.
`Scaffold` and `Hooks` are for advanced usage, and can typically be omitted.
@@ -304,7 +304,7 @@ In many cases the batch size is the only unknown dimension.
A typical input pipeline, using `tf.data`, will usually produce batches of a
fixed size. The last batch of a finite `Dataset`, however, is typically smaller,
containing just the remaining elements. Since a `Dataset` does not know its own
-length or finiteness, the standard @{tf.data.Dataset.batch$`batch`} method
+length or finiteness, the standard `tf.data.Dataset.batch` method
cannot determine on its own whether all batches will have a fixed size:
```
@@ -317,7 +317,7 @@ cannot determine if all batches will have a fixed size batch on its own:
```
The most straightforward fix is to
-@{tf.data.Dataset.apply$apply} @{tf.contrib.data.batch_and_drop_remainder}
+apply `tf.contrib.data.batch_and_drop_remainder` (via `tf.data.Dataset.apply`)
as follows:
```
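# A sketch: drop the final, smaller batch so every batch has the same
# static size (`batch_size` is assumed from the surrounding code).
dataset = dataset.apply(
    tf.contrib.data.batch_and_drop_remainder(batch_size))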
@@ -346,19 +346,19 @@ TPU, as it is impossible to use the Cloud TPU's unless you can feed it data
quickly enough. See @{$datasets_performance} for details on dataset performance.
For all but the simplest experimentation (using
-@{tf.data.Dataset.from_tensor_slices} or other in-graph data) you will need to
+`tf.data.Dataset.from_tensor_slices` or other in-graph data) you will need to
store all data files read by the `TPUEstimator`'s `Dataset` in Google Cloud
Storage Buckets.
<!--TODO(markdaoust): link to the `TFRecord` doc when it exists.-->
For most use-cases, we recommend converting your data into `TFRecord`
-format and using a @{tf.data.TFRecordDataset} to read it. This, however, is not
+format and using a `tf.data.TFRecordDataset` to read it. This, however, is not
a hard requirement and you can use other dataset readers
(`FixedLengthRecordDataset` or `TextLineDataset`) if you prefer.
Small datasets can be loaded entirely into memory using
-@{tf.data.Dataset.cache}.
+`tf.data.Dataset.cache`.
Regardless of the data format used, it is strongly recommended that you
@{$performance_guide#use_large_files$use large files}, on the order of
diff --git a/tensorflow/docs_src/guide/variables.md b/tensorflow/docs_src/guide/variables.md
index cd8c4b5b9a..5d5d73394c 100644
--- a/tensorflow/docs_src/guide/variables.md
+++ b/tensorflow/docs_src/guide/variables.md
@@ -119,7 +119,7 @@ It is particularly important for variables to be in the correct device in
distributed settings. Accidentally putting variables on workers instead of
parameter servers, for example, can severely slow down training or, in the worst
case, let each worker blithely forge ahead with its own independent copy of each
-variable. For this reason we provide @{tf.train.replica_device_setter}, which
+variable. For this reason we provide `tf.train.replica_device_setter`, which
can automatically place variables in parameter servers. For example:
``` python
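# A sketch with an assumed cluster of 3 parameter-server tasks.
with tf.device(tf.train.replica_device_setter(ps_tasks=3)):
  v = tf.get_variable("v", shape=[20, 20])  # placed on a ps task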
@@ -211,7 +211,7 @@ sess.run(assignment) # or assignment.op.run(), or assignment.eval()
Most TensorFlow optimizers have specialized ops that efficiently update the
values of variables according to some gradient descent-like algorithm. See
-@{tf.train.Optimizer} for an explanation of how to use optimizers.
+`tf.train.Optimizer` for an explanation of how to use optimizers.
Because variables are mutable it's sometimes useful to know what version of a
variable's value is being used at any point in time. To force a re-read of the
diff --git a/tensorflow/docs_src/guide/version_compat.md b/tensorflow/docs_src/guide/version_compat.md
index d2e5e41190..29ac066e6f 100644
--- a/tensorflow/docs_src/guide/version_compat.md
+++ b/tensorflow/docs_src/guide/version_compat.md
@@ -66,7 +66,7 @@ patch versions. The public APIs consist of
Some API functions are explicitly marked as "experimental" and can change in
backward incompatible ways between minor releases. These include:
-* **Experimental APIs**: The @{tf.contrib} module and its submodules in Python
+* **Experimental APIs**: The `tf.contrib` module and its submodules in Python
and any functions in the C API or fields in protocol buffers that are
explicitly commented as being experimental. In particular, any field in a
protocol buffer which is called "experimental" and all its fields and
@@ -79,6 +79,7 @@ backward incompatible ways between minor releases. These include:
[`tensorflow/cc`](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/cc)).
- [Java](../api_docs/java/reference/org/tensorflow/package-summary),
- [Go](https://godoc.org/github.com/tensorflow/tensorflow/tensorflow/go)
+ - [JavaScript](https://js.tensorflow.org)
* **Details of composite ops:** Many public functions in Python expand to
several primitive ops in the graph, and these details will be part of any
@@ -252,13 +253,13 @@ ops has not changed:
1. If forward compatibility is desired, set `strip_default_attrs` to `True`
while exporting the model using either the
- @{tf.saved_model.builder.SavedModelBuilder.add_meta_graph_and_variables$`add_meta_graph_and_variables`}
- and @{tf.saved_model.builder.SavedModelBuilder.add_meta_graph$`add_meta_graph`}
+ `tf.saved_model.builder.SavedModelBuilder.add_meta_graph_and_variables`
+ and `tf.saved_model.builder.SavedModelBuilder.add_meta_graph`
methods of the `SavedModelBuilder` class, or
- @{tf.estimator.Estimator.export_savedmodel$`Estimator.export_savedmodel`}
+ `tf.estimator.Estimator.export_savedmodel`
2. This strips off the default valued attributes at the time of
producing/exporting the models. This makes sure that the exported
- @{tf.MetaGraphDef} does not contain the new op-attribute when the default
+ `tf.MetaGraphDef` does not contain the new op-attribute when the default
value is used.
3. Having this control could allow out-of-date consumers (for example, serving
binaries that lag behind training binaries) to continue loading the models
diff --git a/tensorflow/docs_src/install/install_c.md b/tensorflow/docs_src/install/install_c.md
index cf869e8655..4a63f11fca 100644
--- a/tensorflow/docs_src/install/install_c.md
+++ b/tensorflow/docs_src/install/install_c.md
@@ -38,7 +38,7 @@ enable TensorFlow for C:
OS="linux" # Change to "darwin" for macOS
TARGET_DIRECTORY="/usr/local"
curl -L \
- "https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-${TF_TYPE}-${OS}-x86_64-1.9.0.tar.gz" |
+ "https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-${TF_TYPE}-${OS}-x86_64-1.10.0.tar.gz" |
sudo tar -C $TARGET_DIRECTORY -xz
The `tar` command extracts the TensorFlow C library into the `lib`
diff --git a/tensorflow/docs_src/install/install_go.md b/tensorflow/docs_src/install/install_go.md
index 4ec7e42773..f0f8436777 100644
--- a/tensorflow/docs_src/install/install_go.md
+++ b/tensorflow/docs_src/install/install_go.md
@@ -6,7 +6,7 @@ a Go application. This guide explains how to install and set up the
[TensorFlow Go package](https://godoc.org/github.com/tensorflow/tensorflow/tensorflow/go).
Warning: The TensorFlow Go API is *not* covered by the TensorFlow
-[API stability guarantees](../guide/version_semantics.md).
+[API stability guarantees](../guide/version_compat.md).
## Supported Platforms
@@ -38,7 +38,7 @@ steps to install this library and enable TensorFlow for Go:
TF_TYPE="cpu" # Change to "gpu" for GPU support
TARGET_DIRECTORY='/usr/local'
curl -L \
- "https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-${TF_TYPE}-$(go env GOOS)-x86_64-1.9.0.tar.gz" |
+ "https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-${TF_TYPE}-$(go env GOOS)-x86_64-1.10.0.tar.gz" |
sudo tar -C $TARGET_DIRECTORY -xz
The `tar` command extracts the TensorFlow C library into the `lib`
diff --git a/tensorflow/docs_src/install/install_java.md b/tensorflow/docs_src/install/install_java.md
index c5f760d254..c131a2ea76 100644
--- a/tensorflow/docs_src/install/install_java.md
+++ b/tensorflow/docs_src/install/install_java.md
@@ -36,7 +36,7 @@ following to the project's `pom.xml` to use the TensorFlow Java APIs:
<dependency>
<groupId>org.tensorflow</groupId>
<artifactId>tensorflow</artifactId>
- <version>1.9.0</version>
+ <version>1.10.0</version>
</dependency>
```
@@ -65,7 +65,7 @@ As an example, these steps will create a Maven project that uses TensorFlow:
<dependency>
<groupId>org.tensorflow</groupId>
<artifactId>tensorflow</artifactId>
- <version>1.9.0</version>
+ <version>1.10.0</version>
</dependency>
</dependencies>
</project>
@@ -124,12 +124,12 @@ instead:
<dependency>
<groupId>org.tensorflow</groupId>
<artifactId>libtensorflow</artifactId>
- <version>1.9.0</version>
+ <version>1.10.0</version>
</dependency>
<dependency>
<groupId>org.tensorflow</groupId>
<artifactId>libtensorflow_jni_gpu</artifactId>
- <version>1.9.0</version>
+ <version>1.10.0</version>
</dependency>
```
@@ -148,7 +148,7 @@ refer to the simpler instructions above instead.
Take the following steps to install TensorFlow for Java on Linux or macOS:
1. Download
- [libtensorflow.jar](https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-1.9.0.jar),
+ [libtensorflow.jar](https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-1.10.0.jar),
which is the TensorFlow Java Archive (JAR).
2. Decide whether you will run TensorFlow for Java on CPU(s) only or with
@@ -167,7 +167,7 @@ Take the following steps to install TensorFlow for Java on Linux or macOS:
OS=$(uname -s | tr '[:upper:]' '[:lower:]')
mkdir -p ./jni
curl -L \
- "https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow_jni-${TF_TYPE}-${OS}-x86_64-1.9.0.tar.gz" |
+ "https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow_jni-${TF_TYPE}-${OS}-x86_64-1.10.0.tar.gz" |
tar -xz -C ./jni
### Install on Windows
@@ -175,10 +175,10 @@ Take the following steps to install TensorFlow for Java on Linux or macOS:
Take the following steps to install TensorFlow for Java on Windows:
1. Download
- [libtensorflow.jar](https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-1.9.0.jar),
+ [libtensorflow.jar](https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-1.10.0.jar),
which is the TensorFlow Java Archive (JAR).
2. Download the following Java Native Interface (JNI) file appropriate for
- [TensorFlow for Java on Windows](https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow_jni-cpu-windows-x86_64-1.9.0.zip).
+ [TensorFlow for Java on Windows](https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow_jni-cpu-windows-x86_64-1.10.0.zip).
3. Extract this .zip file.
__Note__: The native library (`tensorflow_jni.dll`) requires `msvcp140.dll` at runtime, which is included in the [Visual C++ 2015 Redistributable](https://www.microsoft.com/en-us/download/details.aspx?id=48145) package.
@@ -227,7 +227,7 @@ must be part of your `classpath`. For example, you can include the
downloaded `.jar` in your `classpath` by using the `-cp` compilation flag
as follows:
-<pre><b>javac -cp libtensorflow-1.9.0.jar HelloTF.java</b></pre>
+<pre><b>javac -cp libtensorflow-1.10.0.jar HelloTF.java</b></pre>
### Running
@@ -241,11 +241,11 @@ two files are available to the JVM:
For example, the following command line executes the `HelloTF` program on Linux
and macOS:
-<pre><b>java -cp libtensorflow-1.9.0.jar:. -Djava.library.path=./jni HelloTF</b></pre>
+<pre><b>java -cp libtensorflow-1.10.0.jar:. -Djava.library.path=./jni HelloTF</b></pre>
And the following command line executes the `HelloTF` program on Windows:
-<pre><b>java -cp libtensorflow-1.9.0.jar;. -Djava.library.path=jni HelloTF</b></pre>
+<pre><b>java -cp libtensorflow-1.10.0.jar;. -Djava.library.path=jni HelloTF</b></pre>
If the program prints <tt>Hello from <i>version</i></tt>, you've successfully
installed TensorFlow for Java and are ready to use the API. If the program
diff --git a/tensorflow/docs_src/install/install_linux.md b/tensorflow/docs_src/install/install_linux.md
index 3a9a01c57e..0febdee99f 100644
--- a/tensorflow/docs_src/install/install_linux.md
+++ b/tensorflow/docs_src/install/install_linux.md
@@ -436,7 +436,7 @@ Take the following steps to install TensorFlow in an Anaconda environment:
<pre>
(tensorflow)$ <b>pip install --ignore-installed --upgrade \
- https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.9.0-cp34-cp34m-linux_x86_64.whl</b></pre>
+ https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.10.0-cp34-cp34m-linux_x86_64.whl</b></pre>
<a name="ValidateYourInstallation"></a>
@@ -650,13 +650,13 @@ This section documents the relevant values for Linux installations.
CPU only:
<pre>
-https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.9.0-cp27-none-linux_x86_64.whl
+https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.10.0-cp27-none-linux_x86_64.whl
</pre>
GPU support:
<pre>
-https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.9.0-cp27-none-linux_x86_64.whl
+https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.10.0-cp27-none-linux_x86_64.whl
</pre>
Note that GPU support requires the NVIDIA hardware and software described in
@@ -667,13 +667,13 @@ Note that GPU support requires the NVIDIA hardware and software described in
CPU only:
<pre>
-https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.9.0-cp34-cp34m-linux_x86_64.whl
+https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.10.0-cp34-cp34m-linux_x86_64.whl
</pre>
GPU support:
<pre>
-https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.9.0-cp34-cp34m-linux_x86_64.whl
+https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.10.0-cp34-cp34m-linux_x86_64.whl
</pre>
Note that GPU support requires the NVIDIA hardware and software described in
@@ -684,13 +684,13 @@ Note that GPU support requires the NVIDIA hardware and software described in
CPU only:
<pre>
-https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.9.0-cp35-cp35m-linux_x86_64.whl
+https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.10.0-cp35-cp35m-linux_x86_64.whl
</pre>
GPU support:
<pre>
-https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.9.0-cp35-cp35m-linux_x86_64.whl
+https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.10.0-cp35-cp35m-linux_x86_64.whl
</pre>
Note that GPU support requires the NVIDIA hardware and software described in
@@ -701,13 +701,13 @@ Note that GPU support requires the NVIDIA hardware and software described in
CPU only:
<pre>
-https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.9.0-cp36-cp36m-linux_x86_64.whl
+https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.10.0-cp36-cp36m-linux_x86_64.whl
</pre>
GPU support:
<pre>
-https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.9.0-cp36-cp36m-linux_x86_64.whl
+https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.10.0-cp36-cp36m-linux_x86_64.whl
</pre>
Note that GPU support requires the NVIDIA hardware and software described in
diff --git a/tensorflow/docs_src/install/install_mac.md b/tensorflow/docs_src/install/install_mac.md
index 1a7b2b815d..c4d63cc107 100644
--- a/tensorflow/docs_src/install/install_mac.md
+++ b/tensorflow/docs_src/install/install_mac.md
@@ -119,7 +119,7 @@ Take the following steps to install TensorFlow with Virtualenv:
TensorFlow in the active Virtualenv is as follows:
<pre> $ <b>pip3 install --upgrade \
- https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.9.0-py3-none-any.whl</b></pre>
+ https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.10.0-py3-none-any.whl</b></pre>
If you encounter installation problems, see
[Common Installation Problems](#common-installation-problems).
@@ -242,7 +242,7 @@ take the following steps:
issue the following command:
<pre> $ <b>sudo pip3 install --upgrade \
- https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.9.0-py3-none-any.whl</b> </pre>
+ https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.10.0-py3-none-any.whl</b> </pre>
If the preceding command fails, see
[installation problems](#common-installation-problems).
@@ -350,7 +350,7 @@ Take the following steps to install TensorFlow in an Anaconda environment:
TensorFlow for Python 2.7:
<pre> (<i>targetDirectory</i>)$ <b>pip install --ignore-installed --upgrade \
- https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.9.0-py2-none-any.whl</b></pre>
+ https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.10.0-py2-none-any.whl</b></pre>
<a name="ValidateYourInstallation"></a>
@@ -517,7 +517,7 @@ The value you specify depends on your Python version.
<pre>
-https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.9.0-py2-none-any.whl
+https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.10.0-py2-none-any.whl
</pre>
@@ -525,5 +525,5 @@ https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.9.0-py2-none-any.
<pre>
-https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.9.0-py3-none-any.whl
+https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.10.0-py3-none-any.whl
</pre>
diff --git a/tensorflow/docs_src/install/install_raspbian.md b/tensorflow/docs_src/install/install_raspbian.md
index 58a5285c78..cf6b6b4f79 100644
--- a/tensorflow/docs_src/install/install_raspbian.md
+++ b/tensorflow/docs_src/install/install_raspbian.md
@@ -60,7 +60,7 @@ If it gives the error "Command not found", then the package has not been
installed yet. To install it for the first time, run:
<pre>$ sudo apt-get install python3-pip # for Python 3.n
-sudo apt-get install python-pip # for Python 2.7</pre>
+$ sudo apt-get install python-pip # for Python 2.7</pre>
You can find more help on installing and upgrading pip in
[the Raspberry Pi documentation](https://www.raspberrypi.org/documentation/linux/software/python.md).
@@ -78,8 +78,8 @@ your system, run the following command:
Assuming the prerequisite software is installed on your Pi, install TensorFlow
by invoking **one** of the following commands:
- <pre> $ <b>pip3 install tensorflow</b> # Python 3.n
- $ <b>pip install tensorflow</b> # Python 2.7</pre>
+<pre>$ <b>pip3 install tensorflow</b> # Python 3.n
+$ <b>pip install tensorflow</b> # Python 2.7</pre>
This can take some time on certain platforms like the Pi Zero, where some Python
packages like scipy that TensorFlow depends on need to be compiled before the
diff --git a/tensorflow/docs_src/install/install_sources.md b/tensorflow/docs_src/install/install_sources.md
index 31dcad64d4..dfd9fbce4b 100644
--- a/tensorflow/docs_src/install/install_sources.md
+++ b/tensorflow/docs_src/install/install_sources.md
@@ -168,6 +168,7 @@ If bazel is not installed on your system, install it now by following
To build TensorFlow, you must install the following packages:
* six
+* mock
* numpy, which is a numerical processing package that TensorFlow requires.
* wheel, which enables you to manage Python compressed packages in the wheel
(.whl) format.
@@ -179,7 +180,10 @@ If you follow these instructions, you will not need to disable SIP.
After installing pip, invoke the following commands:
-<pre> $ <b>sudo pip install six numpy wheel</b> </pre>
+<pre> $ <b>sudo pip install six numpy wheel mock h5py</b>
+ $ <b>sudo pip install keras_applications==1.0.4 --no-deps</b>
+ $ <b>sudo pip install keras_preprocessing==1.0.2 --no-deps</b>
+</pre>
Note: These are just the minimum requirements to _build_ tensorflow. Installing
the pip package will download additional packages required to _run_ it. If you
@@ -374,10 +378,10 @@ Invoke `pip install` to install that pip package. The filename of the `.whl`
file depends on your platform. For example, the following command will install
the pip package
-for TensorFlow 1.9.0 on Linux:
+for TensorFlow 1.10.0 on Linux:
<pre>
-$ <b>sudo pip install /tmp/tensorflow_pkg/tensorflow-1.9.0-py2-none-any.whl</b>
+$ <b>sudo pip install /tmp/tensorflow_pkg/tensorflow-1.10.0-py2-none-any.whl</b>
</pre>
## Validate your installation
@@ -483,6 +487,8 @@ the error message, ask a new question on Stack Overflow and specify the
**Linux**
<table>
<tr><th>Version:</th><th>CPU/GPU:</th><th>Python Version:</th><th>Compiler:</th><th>Build Tools:</th><th>cuDNN:</th><th>CUDA:</th></tr>
+<tr><td>tensorflow-1.10.0</td><td>CPU</td><td>2.7, 3.3-3.6</td><td>GCC 4.8</td><td>Bazel 0.15.0</td><td>N/A</td><td>N/A</td></tr>
+<tr><td>tensorflow_gpu-1.10.0</td><td>GPU</td><td>2.7, 3.3-3.6</td><td>GCC 4.8</td><td>Bazel 0.15.0</td><td>7</td><td>9</td></tr>
<tr><td>tensorflow-1.9.0</td><td>CPU</td><td>2.7, 3.3-3.6</td><td>GCC 4.8</td><td>Bazel 0.11.0</td><td>N/A</td><td>N/A</td></tr>
<tr><td>tensorflow_gpu-1.9.0</td><td>GPU</td><td>2.7, 3.3-3.6</td><td>GCC 4.8</td><td>Bazel 0.11.0</td><td>7</td><td>9</td></tr>
<tr><td>tensorflow-1.8.0</td><td>CPU</td><td>2.7, 3.3-3.6</td><td>GCC 4.8</td><td>Bazel 0.10.0</td><td>N/A</td><td>N/A</td></tr>
@@ -508,6 +514,7 @@ the error message, ask a new question on Stack Overflow and specify the
**Mac**
<table>
<tr><th>Version:</th><th>CPU/GPU:</th><th>Python Version:</th><th>Compiler:</th><th>Build Tools:</th><th>cuDNN:</th><th>CUDA:</th></tr>
+<tr><td>tensorflow-1.10.0</td><td>CPU</td><td>2.7, 3.3-3.6</td><td>Clang from xcode</td><td>Bazel 0.15.0</td><td>N/A</td><td>N/A</td></tr>
<tr><td>tensorflow-1.9.0</td><td>CPU</td><td>2.7, 3.3-3.6</td><td>Clang from xcode</td><td>Bazel 0.11.0</td><td>N/A</td><td>N/A</td></tr>
<tr><td>tensorflow-1.8.0</td><td>CPU</td><td>2.7, 3.3-3.6</td><td>Clang from xcode</td><td>Bazel 0.10.1</td><td>N/A</td><td>N/A</td></tr>
<tr><td>tensorflow-1.7.0</td><td>CPU</td><td>2.7, 3.3-3.6</td><td>Clang from xcode</td><td>Bazel 0.10.1</td><td>N/A</td><td>N/A</td></tr>
@@ -525,6 +532,8 @@ the error message, ask a new question on Stack Overflow and specify the
**Windows**
<table>
<tr><th>Version:</th><th>CPU/GPU:</th><th>Python Version:</th><th>Compiler:</th><th>Build Tools:</th><th>cuDNN:</th><th>CUDA:</th></tr>
+<tr><td>tensorflow-1.10.0</td><td>CPU</td><td>3.5-3.6</td><td>MSVC 2015 update 3</td><td>Cmake v3.6.3</td><td>N/A</td><td>N/A</td></tr>
+<tr><td>tensorflow_gpu-1.10.0</td><td>GPU</td><td>3.5-3.6</td><td>MSVC 2015 update 3</td><td>Cmake v3.6.3</td><td>7</td><td>9</td></tr>
<tr><td>tensorflow-1.9.0</td><td>CPU</td><td>3.5-3.6</td><td>MSVC 2015 update 3</td><td>Cmake v3.6.3</td><td>N/A</td><td>N/A</td></tr>
<tr><td>tensorflow_gpu-1.9.0</td><td>GPU</td><td>3.5-3.6</td><td>MSVC 2015 update 3</td><td>Cmake v3.6.3</td><td>7</td><td>9</td></tr>
<tr><td>tensorflow-1.8.0</td><td>CPU</td><td>3.5-3.6</td><td>MSVC 2015 update 3</td><td>Cmake v3.6.3</td><td>N/A</td><td>N/A</td></tr>
diff --git a/tensorflow/docs_src/performance/datasets_performance.md b/tensorflow/docs_src/performance/datasets_performance.md
index 46b43b7673..5d9e4ba392 100644
--- a/tensorflow/docs_src/performance/datasets_performance.md
+++ b/tensorflow/docs_src/performance/datasets_performance.md
@@ -38,9 +38,9 @@ the heavy lifting of training your model. In addition, viewing input pipelines
as an ETL process provides structure that facilitates the application of
performance optimizations.
-When using the @{tf.estimator.Estimator} API, the first two phases (Extract and
+When using the `tf.estimator.Estimator` API, the first two phases (Extract and
Transform) are captured in the `input_fn` passed to
-@{tf.estimator.Estimator.train}. In code, this might look like the following
+`tf.estimator.Estimator.train`. In code, this might look like the following
(naive, sequential) implementation:
```
@@ -99,7 +99,7 @@ With pipelining, idle time diminishes significantly:
![with pipelining](/images/datasets_with_pipelining.png)
The `tf.data` API provides a software pipelining mechanism through the
-@{tf.data.Dataset.prefetch} transformation, which can be used to decouple the
+`tf.data.Dataset.prefetch` transformation, which can be used to decouple the
time data is produced from the time it is consumed. In particular, the
transformation uses a background thread and an internal buffer to prefetch
elements from the input dataset ahead of the time they are requested. Thus, to
@@ -130,7 +130,7 @@ The preceding recommendation is simply the most common application.
### Parallelize Data Transformation
When preparing a batch, input elements may need to be pre-processed. To this
-end, the `tf.data` API offers the @{tf.data.Dataset.map} transformation, which
+end, the `tf.data` API offers the `tf.data.Dataset.map` transformation, which
applies a user-defined function (for example, `parse_fn` from the running
example) to each element of the input dataset. Because input elements are
independent of one another, the pre-processing can be parallelized across
@@ -164,7 +164,7 @@ dataset = dataset.map(map_func=parse_fn, num_parallel_calls=FLAGS.num_parallel_c
Furthermore, if your batch size is in the hundreds or thousands, your pipeline
will likely additionally benefit from parallelizing the batch creation. To this
-end, the `tf.data` API provides the @{tf.contrib.data.map_and_batch}
+end, the `tf.data` API provides the `tf.contrib.data.map_and_batch`
transformation, which effectively "fuses" the map and batch transformations.
To apply this change to our running example, change:
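For instance, a `dataset.map(...).batch(...)` pair becomes the fused form below
(a sketch; `parse_fn` and the flag values are assumed from the running example):

```
dataset = dataset.apply(tf.contrib.data.map_and_batch(
    map_func=parse_fn, batch_size=FLAGS.batch_size))
```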
@@ -205,7 +205,7 @@ is stored locally or remotely, but can be worse in the remote case if data is
not prefetched effectively.
To mitigate the impact of the various data extraction overheads, the `tf.data`
-API offers the @{tf.contrib.data.parallel_interleave} transformation. Use this
+API offers the `tf.contrib.data.parallel_interleave` transformation. Use this
transformation to parallelize the execution of and interleave the contents of
other datasets (such as data file readers). The
number of datasets to overlap can be specified by the `cycle_length` argument.
@@ -232,7 +232,7 @@ dataset = files.apply(tf.contrib.data.parallel_interleave(
The throughput of remote storage systems can vary over time due to load or
network events. To account for this variance, the `parallel_interleave`
transformation can optionally use prefetching. (See
-@{tf.contrib.data.parallel_interleave} for details).
+`tf.contrib.data.parallel_interleave` for details).
By default, the `parallel_interleave` transformation provides a deterministic
ordering of elements to aid reproducibility. As an alternative to prefetching
@@ -261,7 +261,7 @@ function (that is, have it operate over a batch of inputs at once) and apply the
### Map and Cache
-The @{tf.data.Dataset.cache} transformation can cache a dataset, either in
+The `tf.data.Dataset.cache` transformation can cache a dataset, either in
memory or on local storage. If the user-defined function passed into the `map`
transformation is expensive, apply the cache transformation after the map
transformation as long as the resulting dataset can still fit into memory or
@@ -281,9 +281,9 @@ performance (for example, to enable fusing of the map and batch transformations)
### Repeat and Shuffle
-The @{tf.data.Dataset.repeat} transformation repeats the input data a finite (or
+The `tf.data.Dataset.repeat` transformation repeats the input data a finite (or
infinite) number of times; each repetition of the data is typically referred to
-as an _epoch_. The @{tf.data.Dataset.shuffle} transformation randomizes the
+as an _epoch_. The `tf.data.Dataset.shuffle` transformation randomizes the
order of the dataset's examples.
If the `repeat` transformation is applied before the `shuffle` transformation,
@@ -296,7 +296,7 @@ internal state of the `shuffle` transformation. In other words, the former
(`shuffle` before `repeat`) provides stronger ordering guarantees.
When possible, we recommend using the fused
-@{tf.contrib.data.shuffle_and_repeat} transformation, which combines the best of
+`tf.contrib.data.shuffle_and_repeat` transformation, which combines the best of
both worlds (good performance and strong ordering guarantees). Otherwise, we
recommend shuffling before repeating.
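A minimal sketch of the fused call (the buffer size and epoch count are
illustrative):

```
dataset = dataset.apply(tf.contrib.data.shuffle_and_repeat(
    buffer_size=10000, count=NUM_EPOCHS))
```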
diff --git a/tensorflow/docs_src/performance/performance_guide.md b/tensorflow/docs_src/performance/performance_guide.md
index dafacbe379..df70309568 100644
--- a/tensorflow/docs_src/performance/performance_guide.md
+++ b/tensorflow/docs_src/performance/performance_guide.md
@@ -94,7 +94,7 @@ sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
#### Fused decode and crop
If inputs are JPEG images that also require cropping, use fused
-@{tf.image.decode_and_crop_jpeg} to speed up preprocessing.
+`tf.image.decode_and_crop_jpeg` to speed up preprocessing.
`tf.image.decode_and_crop_jpeg` only decodes the part of
the image within the crop window. This significantly speeds up the process if
the crop window is much smaller than the full image. For imagenet data, this
@@ -187,14 +187,14 @@ some models makes up a large percentage of the operation time. Using fused batch
norm can result in a 12%-30% speedup.
There are two commonly used batch norms and both support fusing. The core
-@{tf.layers.batch_normalization} added fused starting in TensorFlow 1.3.
+`tf.layers.batch_normalization` has supported fusing since TensorFlow 1.3.
```python
bn = tf.layers.batch_normalization(
input_layer, fused=True, data_format='NCHW')
```
-The contrib @{tf.contrib.layers.batch_norm} method has had fused as an option
+The contrib `tf.contrib.layers.batch_norm` method has had a `fused` option
since before TensorFlow 1.0.
```python
@@ -205,43 +205,43 @@ bn = tf.contrib.layers.batch_norm(input_layer, fused=True, data_format='NCHW')
There are many ways to specify an RNN computation in TensorFlow and they have
trade-offs with respect to model flexibility and performance. The
-@{tf.nn.rnn_cell.BasicLSTMCell} should be considered a reference implementation
+`tf.nn.rnn_cell.BasicLSTMCell` should be considered a reference implementation
and used only as a last resort when no other options will work.
When using one of the cells, rather than the fully fused RNN layers, you have a
-choice of whether to use @{tf.nn.static_rnn} or @{tf.nn.dynamic_rnn}. There
+choice of whether to use `tf.nn.static_rnn` or `tf.nn.dynamic_rnn`. There
shouldn't generally be a performance difference at runtime, but large unroll
-amounts can increase the graph size of the @{tf.nn.static_rnn} and cause long
-compile times. An additional advantage of @{tf.nn.dynamic_rnn} is that it can
+amounts can increase the graph size of the `tf.nn.static_rnn` and cause long
+compile times. An additional advantage of `tf.nn.dynamic_rnn` is that it can
optionally swap memory from the GPU to the CPU to enable training of very long
sequences. Depending on the model and hardware configuration, this can come at
a performance cost. It is also possible to run multiple iterations of
-@{tf.nn.dynamic_rnn} and the underlying @{tf.while_loop} construct in parallel,
+`tf.nn.dynamic_rnn` and the underlying `tf.while_loop` construct in parallel,
although this is rarely useful with RNN models as they are inherently
sequential.
-On NVIDIA GPUs, the use of @{tf.contrib.cudnn_rnn} should always be preferred
+On NVIDIA GPUs, the use of `tf.contrib.cudnn_rnn` should always be preferred
unless you want layer normalization, which it doesn't support. It is often at
-least an order of magnitude faster than @{tf.contrib.rnn.BasicLSTMCell} and
-@{tf.contrib.rnn.LSTMBlockCell} and uses 3-4x less memory than
-@{tf.contrib.rnn.BasicLSTMCell}.
+least an order of magnitude faster than `tf.contrib.rnn.BasicLSTMCell` and
+`tf.contrib.rnn.LSTMBlockCell` and uses 3-4x less memory than
+`tf.contrib.rnn.BasicLSTMCell`.
If you need to run one step of the RNN at a time, as might be the case in
reinforcement learning with a recurrent policy, then you should use the
-@{tf.contrib.rnn.LSTMBlockCell} with your own environment interaction loop
-inside a @{tf.while_loop} construct. Running one step of the RNN at a time and
+`tf.contrib.rnn.LSTMBlockCell` with your own environment interaction loop
+inside a `tf.while_loop` construct. Running one step of the RNN at a time and
returning to Python is possible, but it will be slower.
-On CPUs, mobile devices, and if @{tf.contrib.cudnn_rnn} is not available on
+On CPUs, on mobile devices, and when `tf.contrib.cudnn_rnn` is not available on
your GPU, the fastest and most memory-efficient option is
-@{tf.contrib.rnn.LSTMBlockFusedCell}.
+`tf.contrib.rnn.LSTMBlockFusedCell`.
-For all of the less common cell types like @{tf.contrib.rnn.NASCell},
-@{tf.contrib.rnn.PhasedLSTMCell}, @{tf.contrib.rnn.UGRNNCell},
-@{tf.contrib.rnn.GLSTMCell}, @{tf.contrib.rnn.Conv1DLSTMCell},
-@{tf.contrib.rnn.Conv2DLSTMCell}, @{tf.contrib.rnn.LayerNormBasicLSTMCell},
+For all of the less common cell types like `tf.contrib.rnn.NASCell`,
+`tf.contrib.rnn.PhasedLSTMCell`, `tf.contrib.rnn.UGRNNCell`,
+`tf.contrib.rnn.GLSTMCell`, `tf.contrib.rnn.Conv1DLSTMCell`,
+`tf.contrib.rnn.Conv2DLSTMCell`, `tf.contrib.rnn.LayerNormBasicLSTMCell`,
etc., one should be aware that they are implemented in the graph like
-@{tf.contrib.rnn.BasicLSTMCell} and as such will suffer from the same poor
+`tf.contrib.rnn.BasicLSTMCell` and as such will suffer from the same poor
performance and high memory usage. One should consider whether or not those
trade-offs are worth it before using these cells. For example, while layer
normalization can speed up convergence, because cuDNN is 20x faster the fastest
diff --git a/tensorflow/docs_src/performance/performance_models.md b/tensorflow/docs_src/performance/performance_models.md
index 359b0e904d..66bf684d5b 100644
--- a/tensorflow/docs_src/performance/performance_models.md
+++ b/tensorflow/docs_src/performance/performance_models.md
@@ -10,8 +10,8 @@ incorporated into high-level APIs.
## Input Pipeline
The @{$performance_guide$Performance Guide} explains how to identify possible
-input pipeline issues and best practices. We found that using @{tf.FIFOQueue}
-and @{tf.train.queue_runner} could not saturate multiple current generation GPUs
+input pipeline issues and best practices. We found that using `tf.FIFOQueue`
+and `tf.train.queue_runner` could not saturate multiple current generation GPUs
when using large inputs and processing with higher samples per second, such
as training ImageNet with [AlexNet](http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf).
This is due to the use of Python threads as its underlying implementation. The
@@ -29,7 +29,7 @@ implementation is made up of 3 stages:
The dominant part of each stage is executed in parallel with the other stages
using `data_flow_ops.StagingArea`. `StagingArea` is a queue-like operator
-similar to @{tf.FIFOQueue}. The difference is that `StagingArea` does not
+similar to `tf.FIFOQueue`. The difference is that `StagingArea` does not
guarantee FIFO ordering, but offers simpler functionality and can be executed
on both CPU and GPU in parallel with other stages. Breaking the input pipeline
into 3 stages that operate independently in parallel is scalable and takes full
@@ -62,10 +62,10 @@ and executed in parallel. The image preprocessing ops include operations such as
image decoding, distortion, and resizing.
Once the images are through preprocessing, they are concatenated together into 8
-tensors each with a batch-size of 32. Rather than using @{tf.concat} for this
+tensors each with a batch-size of 32. Rather than using `tf.concat` for this
purpose, which is implemented as a single op that waits for all the inputs to be
-ready before concatenating them together, @{tf.parallel_stack} is used.
-@{tf.parallel_stack} allocates an uninitialized tensor as an output, and each
+ready before concatenating them together, `tf.parallel_stack` is used.
+`tf.parallel_stack` allocates an uninitialized tensor as an output, and each
input tensor is written to its designated portion of the output tensor as soon
as the input is available.
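+
+As a rough illustration of the substitution (shapes here are made up, not the
+benchmark's exact configuration), a minimal sketch:
+
+```python
+import tensorflow as tf
+
+# Eight preprocessed image tensors, each shaped [32, 24, 24, 3] (illustrative).
+groups = [tf.zeros([32, 24, 24, 3]) for _ in range(8)]
+
+# tf.concat(groups, axis=0) would wait for all eight inputs before running.
+# tf.parallel_stack instead writes each input into its slice of the output as
+# soon as that input is ready, producing a [8, 32, 24, 24, 3] tensor here.
+batch = tf.parallel_stack(groups)
+```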
@@ -94,7 +94,7 @@ the GPU, all the tensors are already available.
With all the stages capable of being driven by different processors,
`data_flow_ops.StagingArea` is used between them so they run in parallel.
-`StagingArea` is a queue-like operator similar to @{tf.FIFOQueue} that offers
+`StagingArea` is a queue-like operator similar to `tf.FIFOQueue` that offers
simpler functionalities that can be executed on both CPU and GPU.
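+
+A minimal sketch of the put/get pattern (dtypes and the warm-up loop are
+illustrative, not the benchmark's exact code):
+
+```python
+import tensorflow as tf
+from tensorflow.python.ops import data_flow_ops
+
+images = tf.zeros([32, 24, 24, 3])  # stand-in for a preprocessed batch
+
+area = data_flow_ops.StagingArea(dtypes=[tf.float32])
+put_op = area.put([images])  # run by the producing stage
+staged = area.get()          # run by the consuming stage
+
+# Typical use: sess.run(put_op) once to warm up, then run the training step
+# together with put_op so the next batch is staged while this one is consumed.
+```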
Before the model starts running all the stages, the input pipeline stages are
@@ -153,7 +153,7 @@ weights obtained from training.
The default batch-normalization in TensorFlow is implemented as composite
operations. This is very general, but often leads to suboptimal performance. An
alternative is to use fused batch-normalization which often has much better
-performance on GPU. Below is an example of using @{tf.contrib.layers.batch_norm}
+performance on GPU. Below is an example of using `tf.contrib.layers.batch_norm`
to implement fused batch-normalization.
```python
@@ -301,7 +301,7 @@ In order to broadcast variables and aggregate gradients across different GPUs
within the same host machine, we can use the default TensorFlow implicit copy
mechanism.
-However, we can instead use the optional NCCL (@{tf.contrib.nccl}) support. NCCL
+However, we can instead use the optional NCCL (`tf.contrib.nccl`) support. NCCL
is an NVIDIA® library that can efficiently broadcast and aggregate data across
different GPUs. It schedules a cooperating kernel on each GPU that knows how to
best utilize the underlying hardware topology; this kernel uses a single SM of
diff --git a/tensorflow/docs_src/performance/quantization.md b/tensorflow/docs_src/performance/quantization.md
index c97f74139c..4499f5715c 100644
--- a/tensorflow/docs_src/performance/quantization.md
+++ b/tensorflow/docs_src/performance/quantization.md
@@ -163,7 +163,7 @@ bazel build tensorflow/contrib/lite/toco:toco && \
--std_value=127.5 --mean_value=127.5
```
-See the documentation for @{tf.contrib.quantize} and
+See the documentation for `tf.contrib.quantize` and
[TensorFlow Lite](/mobile/tflite/).
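+
+As a quick orientation, below is a minimal sketch of rewriting a training graph
+with fake-quantization ops (the model and the `quant_delay` value are
+illustrative):
+
+```python
+import tensorflow as tf
+
+g = tf.Graph()
+with g.as_default():
+  x = tf.placeholder(tf.float32, [None, 28, 28, 1])
+  net = tf.layers.conv2d(x, 32, 3, activation=tf.nn.relu)
+  logits = tf.layers.dense(tf.layers.flatten(net), 10)
+  # Insert fake-quantization ops that model 8-bit rounding during training;
+  # quant_delay postpones them until the model has roughly converged in float.
+  tf.contrib.quantize.create_training_graph(input_graph=g, quant_delay=2000000)
+```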
## Quantized accuracy
diff --git a/tensorflow/docs_src/performance/xla/jit.md b/tensorflow/docs_src/performance/xla/jit.md
index 6724d1eaf8..7202ef47f7 100644
--- a/tensorflow/docs_src/performance/xla/jit.md
+++ b/tensorflow/docs_src/performance/xla/jit.md
@@ -19,10 +19,11 @@ on the `XLA_CPU` or `XLA_GPU` TensorFlow devices. Placing operators directly on
a TensorFlow XLA device forces the operator to run on that device and is mainly
used for testing.
-> Note: The XLA CPU backend produces fast single-threaded code (in most cases),
-> but does not yet parallelize as well as the TensorFlow CPU backend. The XLA
-> GPU backend is competitive with the standard TensorFlow implementation,
-> sometimes faster, sometimes slower.
+> Note: The XLA CPU backend supports intra-op parallelism (i.e. it can shard a
+> single operation across multiple cores) but it does not support inter-op
+> parallelism (i.e. it cannot execute independent operations concurrently across
+> multiple cores). The XLA GPU backend is competitive with the standard
+> TensorFlow implementation, sometimes faster, sometimes slower.
### Turning on JIT compilation
@@ -55,8 +56,7 @@ sess = tf.Session(config=config)
> Note: Turning on JIT at the session level will not result in operations being
> compiled for the CPU. JIT compilation for CPU operations must be done via
-> the manual method documented below. This decision was made due to the CPU
-> backend being single-threaded.
+> the manual method documented below.
#### Manual
diff --git a/tensorflow/docs_src/performance/xla/operation_semantics.md b/tensorflow/docs_src/performance/xla/operation_semantics.md
index 5f7482f90f..e24a7cda73 100644
--- a/tensorflow/docs_src/performance/xla/operation_semantics.md
+++ b/tensorflow/docs_src/performance/xla/operation_semantics.md
@@ -13,6 +13,79 @@ arbitrary-dimensional array. For convenience, special cases have more specific
and familiar names; for example a *vector* is a 1-dimensional array and a
*matrix* is a 2-dimensional array.
+## AllToAll
+
+See also
+[`XlaBuilder::AllToAll`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_builder.h).
+
+Alltoall is a collective operation that sends data from all cores to all cores.
+It has two phases:
+
+1. the scatter phase. On each core, the operand is split into `split_count`
+   blocks along the `split_dimension`, and the blocks are scattered
+   to all cores, e.g., the ith block is sent to the ith core.
+2. the gather phase. Each core concatenates the received blocks along the
+ `concat_dimension`.
+
+The participating cores can be configured by:
+
+- `replica_groups`: each `ReplicaGroup` contains a list of replica ids. If
+  empty, all replicas belong to one group, in the order 0 through (n-1).
+  Alltoall will be applied within subgroups in the specified order. For
+  example, `replica_groups = {{1,2,3},{4,5,0}}` means that an Alltoall will be
+  applied within replicas 1, 2, and 3, and in the gather phase the received
+  blocks will be concatenated in the order 1, 2, 3; another Alltoall will be
+  applied within replicas 4, 5, and 0, with concatenation order 4, 5, 0.
+
+Prerequisites:
+
+- The dimension size of the operand on the `split_dimension` is divisible by
+  `split_count`.
+- The operand's shape is not a tuple.
+
+<b> `AllToAll(operand, split_dimension, concat_dimension, split_count,
+replica_groups)` </b>
+
+
+| Arguments | Type | Semantics |
+| ------------------ | --------------------- | ------------------------------- |
+| `operand` | `XlaOp` | n-dimensional input array |
+| `split_dimension` | `int64` | a value in the interval `[0, |
+: : : n)` that names the dimension :
+: : : along which the operand is :
+: : : split :
+| `concat_dimension` | `int64` | a value in the interval `[0, |
+: : : n)` that names the dimension :
+: : : along which the split blocks :
+: : : are concatenated :
+| `split_count` | `int64` | the number of cores that |
+: : : participate in this operation. :
+: : : If `replica_groups` is empty, :
+: : : this should be the number of :
+: : : replicas; otherwise, this :
+: : : should be equal to the number :
+: : : of replicas in each group. :
+| `replica_groups` | `ReplicaGroup` vector | each group contains a list of |
+: : : replica ids. :
+
+Below is an example of Alltoall.
+
+```
+XlaBuilder b("alltoall");
+auto x = Parameter(&b, 0, ShapeUtil::MakeShape(F32, {4, 16}), "x");
+AllToAll(x, /*split_dimension=*/1, /*concat_dimension=*/0, /*split_count=*/4);
+```
+
+<div style="width:95%; margin:auto; margin-bottom:10px; margin-top:20px;">
+ <img style="width:100%" src="../../images/xla/ops_alltoall.png">
+</div>
+
+In this example, there are 4 cores participating in the Alltoall. On each
+core, the operand is split into 4 parts along dimension 1, so each part has
+shape f32[4,4]. The 4 parts are scattered to all cores. Then each core
+concatenates the received parts along dimension 0, in the order of cores 0-3.
+So the output on each core has shape f32[16,4].
+
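+The semantics can be checked with a small NumPy simulation (a sketch written
+for this guide, not part of XLA):
+
+```python
+import numpy as np
+
+def all_to_all(operands, split_dimension, concat_dimension, split_count):
+  # Scatter phase: each core splits its operand into `split_count` blocks.
+  blocks = [np.split(op, split_count, axis=split_dimension) for op in operands]
+  # Gather phase: core i concatenates block i from every core, in core order.
+  return [np.concatenate([blocks[src][i] for src in range(split_count)],
+                         axis=concat_dimension)
+          for i in range(split_count)]
+
+# The example above: 4 cores, each holding an f32[4,16] operand.
+cores = [np.full((4, 16), c, np.float32) for c in range(4)]
+outputs = all_to_all(cores, split_dimension=1, concat_dimension=0, split_count=4)
+assert outputs[0].shape == (16, 4)
+```
+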
## BatchNormGrad
See also
@@ -270,7 +343,7 @@ Clamp(min, operand, max) = s32[3]{0, 5, 6};
See also
[`XlaBuilder::Collapse`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_builder.h)
-and the @{tf.reshape} operation.
+and the `tf.reshape` operation.
Collapses dimensions of an array into one dimension.
@@ -291,7 +364,7 @@ same position in the dimension sequence as those they replace, with the new
dimension size equal to the product of original dimension sizes. The lowest
dimension number in `dimensions` is the slowest varying dimension (most major)
in the loop nest which collapses these dimension, and the highest dimension
-number is fastest varying (most minor). See the @{tf.reshape} operator
+number is fastest varying (most minor). See the `tf.reshape` operator
if more general collapse ordering is needed.
For example, let v be an array of 24 elements:
@@ -490,8 +563,8 @@ array. The holes are filled with a no-op value, which for convolution means
zeroes.
Dilation of the rhs is also called atrous convolution. For more details, see
-@{tf.nn.atrous_conv2d}. Dilation of the lhs is also called transposed
-convolution. For more details, see @{tf.nn.conv2d_transpose}.
+`tf.nn.atrous_conv2d`. Dilation of the lhs is also called transposed
+convolution. For more details, see `tf.nn.conv2d_transpose`.
The output shape has these dimensions, in this order:
@@ -1270,7 +1343,7 @@ let t: (f32[10], s32) = tuple(v, s);
let element_1: s32 = gettupleelement(t, 1); // Inferred shape matches s32.
```
-See also @{tf.tuple}.
+See also `tf.tuple`.
## Infeed
@@ -1431,19 +1504,29 @@ complete and returns the received data.
See also
[`XlaBuilder::Reduce`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_builder.h).
-Applies a reduction function to an array.
+Applies a reduction function to one or more arrays in parallel.
-<b> `Reduce(operand, init_value, computation, dimensions)` </b>
+<b> `Reduce(operands..., init_values..., computation, dimensions)` </b>
-Arguments | Type | Semantics
-------------- | ---------------- | ---------------------------------------
-`operand` | `XlaOp` | array of type `T`
-`init_value` | `XlaOp` | scalar of type `T`
-`computation` | `XlaComputation` | computation of type `T, T -> T`
-`dimensions` | `int64` array | unordered array of dimensions to reduce
+Arguments | Type | Semantics
+------------- | --------------------- | ---------------------------------------
+`operands` | Sequence of N `XlaOp` | N arrays of types `T_0, ..., T_N`.
+`init_values` | Sequence of N `XlaOp` | N scalars of types `T_0, ..., T_N`.
+`computation` | `XlaComputation` | computation of type
+ : : `T_0, ..., T_N, T_0, ..., T_N -> Collate(T_0, ..., T_N)`
+`dimensions` | `int64` array | unordered array of dimensions to reduce
-This operation reduces one or more dimensions of the input array into scalars.
-The rank of the returned array is `rank(operand) - len(dimensions)`.
+Where:
+* N is required to be greater than or equal to 1.
+* All input arrays must have the same dimensions.
+* If `N = 1`, `Collate(T)` is `T`.
+* If `N > 1`, `Collate(T_0, ..., T_N)` is a tuple of `N` elements of types `T_0, ..., T_N`.
+
+The output of the op is `Collate(Q_0, ..., Q_N)` where `Q_i` is an array of type
+`T_i`, the dimensions of which are described below.
+
+This operation reduces one or more dimensions of each input array into scalars.
+The rank of each returned array is `rank(operand) - len(dimensions)`.
`init_value` is the initial value used for every reduction and may be inserted
anywhere during computation by the back-end. In most cases, `init_value` is an
identity of the reduction function (for example, 0 for addition). The applied
@@ -1459,9 +1542,9 @@ enough to being associative for most practical uses. It is possible to conceive
of some completely non-associative reductions, however, and these will produce
incorrect or unpredictable results in XLA reductions.
-As an example, when reducing across the one dimension in a 1D array with values
-[10, 11, 12, 13], with reduction function `f` (this is `computation`) then that
-could be computed as
+As an example, when reducing across one dimension in a single 1D array with
+values [10, 11, 12, 13], with reduction function `f` (this is `computation`)
+then that could be computed as
`f(10, f(11, f(12, f(init_value, 13))))`
@@ -1543,6 +1626,34 @@ the 1D array `| 20 28 36 |`.
Reducing the 3D array over all its dimensions produces the scalar `84`.
+When `N > 1`, reduce function application is slightly more complex, as it is
+applied simultaneously to all inputs. For example, consider the following
+reduction function, which can be used to compute the max and the argmax of
+a 1-D tensor in parallel:
+
+```
+f: (Float, Int, Float, Int) -> Float, Int
+f(max, argmax, value, index):
+  if value >= max:
+ return (value, index)
+ else:
+ return (max, argmax)
+```
+
+For 1-D input arrays `V = Float[N], K = Int[N]`, and init values
+`I_V = Float, I_K = Int`, the result `f_(N-1)` of reducing across the only
+input dimension is equivalent to the following recursive application:
+```
+f_0 = f(I_V, I_K, V_0, K_0)
+f_1 = f(f_0.first, f_0.second, V_1, K_1)
+...
+f_(N-1) = f(f_(N-2).first, f_(N-2).second, V_(N-1), K_(N-1))
+```
+
+Applying this reduction to an array of values and an array of sequential
+indices (i.e. iota) will co-iterate over the arrays and return a tuple
+containing the maximal value and the matching index.
+
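+As a concreteness check, the same max/argmax reduction can be written as a
+NumPy sketch (illustrative only; XLA may apply `f` in any order):
+
+```python
+import numpy as np
+
+def reduce_max_argmax(values, indices, init_v=-np.inf, init_k=-1):
+  max_v, argmax = init_v, init_k
+  for v, k in zip(values, indices):  # co-iterate over both input arrays
+    if v >= max_v:
+      max_v, argmax = v, k
+  return max_v, argmax
+
+v = np.array([10., 13., 12., 11.])
+print(reduce_max_argmax(v, np.arange(len(v))))  # (13.0, 1)
+```
+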
## ReducePrecision
See also
@@ -1766,19 +1877,19 @@ See also
[`XlaBuilder::RngNormal`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_builder.h).
Constructs an output of a given shape with random numbers generated following
-the $$N(\mu, \sigma)$$ normal distribution. The parameters `mu` and `sigma`, and
-output shape have to have elemental type F32. The parameters furthermore have to
-be scalar valued.
+the $$N(\mu, \sigma)$$ normal distribution. The parameters $$\mu$$ and
+$$\sigma$$, and output shape have to have a floating point elemental type. The
+parameters furthermore have to be scalar valued.
-<b>`RngNormal(mean, sigma, shape)`</b>
+<b>`RngNormal(mu, sigma, shape)`</b>
| Arguments | Type | Semantics |
| --------- | ------- | --------------------------------------------------- |
-| `mu` | `XlaOp` | Scalar of type F32 specifying mean of generated |
-: : : numbers :
-| `sigma` | `XlaOp` | Scalar of type F32 specifying standard deviation of |
+| `mu` | `XlaOp` | Scalar of type T specifying mean of generated |
+: : : numbers :
+| `sigma` | `XlaOp` | Scalar of type T specifying standard deviation of |
: : : generated numbers :
-| `shape` | `Shape` | Output shape of type F32 |
+| `shape` | `Shape` | Output shape of type T |
## RngUniform
@@ -1787,9 +1898,11 @@ See also
Constructs an output of a given shape with random numbers generated following
the uniform distribution over the interval $$[a,b)$$. The parameters and output
-shape may be either F32, S32 or U32, but the types have to be consistent.
-Furthermore, the parameters need to be scalar valued. If $$b <= a$$ the result
-is implementation-defined.
+element type have to be a boolean type, an integral type, or a floating point
+type, and the types have to be consistent. The CPU and GPU backends currently
+only support F64, F32, F16, BF16, S64, U64, S32 and U32. Furthermore, the
+parameters need to be scalar valued. If $$b <= a$$ the result is
+implementation-defined.
<b>`RngUniform(a, b, shape)`</b>
@@ -1801,6 +1914,138 @@ is implementation-defined.
: : : limit of interval :
| `shape` | `Shape` | Output shape of type T |
+## Scatter
+
+The XLA scatter operation generates a result which is the value of the input
+tensor `operand`, with several slices (at indices specified by
+`scatter_indices`) updated with the values in `updates` using
+`update_computation`.
+
+See also
+[`XlaBuilder::Scatter`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_builder.h).
+
+<b> `scatter(operand, scatter_indices, updates, update_computation, index_vector_dim, update_window_dims, inserted_window_dims, scatter_dims_to_operand_dims)` </b>
+
+|Arguments | Type | Semantics |
+|------------------|------------------------|----------------------------------|
+|`operand` | `XlaOp` | Tensor to be scattered into. |
+|`scatter_indices` | `XlaOp` | Tensor containing the starting |
+: : : indices of the slices that must :
+: : : be scattered to. :
+|`updates` | `XlaOp` | Tensor containing the values that|
+: : : must be used for scattering. :
+|`update_computation`| `XlaComputation` | Computation to be used for |
+: : : combining the existing values in :
+: : : the input tensor and the updates :
+: : : during scatter. This computation :
+: : : should be of type `T, T -> T`. :
+|`index_vector_dim`| `int64` | The dimension in |
+: : : `scatter_indices` that contains :
+: : : the starting indices. :
+|`update_window_dims`| `ArraySlice<int64>` | The set of dimensions in |
+: : : `updates` shape that are _window :
+: : : dimensions_. :
+|`inserted_window_dims`| `ArraySlice<int64>`| The set of _window dimensions_ |
+: : : that must be inserted into :
+: : : `updates` shape. :
+|`scatter_dims_to_operand_dims`| `ArraySlice<int64>` | A dimensions map from |
+: : : the scatter indices to the :
+: : : operand index space. This array :
+: : : is interpreted as mapping `i` to :
+: : : `scatter_dims_to_operand_dims[i]`:
+: : : . It has to be one-to-one and :
+: : : total. :
+
+If `index_vector_dim` is equal to `scatter_indices.rank`, we implicitly consider
+`scatter_indices` to have a trailing `1` dimension.
+
+We define `update_scatter_dims` of type `ArraySlice<int64>` as the set of
+dimensions in `updates` shape that are not in `update_window_dims`, in ascending
+order.
+
+The arguments of scatter should follow these constraints:
+
+ - `updates` tensor must be of rank `update_window_dims.size +
+ scatter_indices.rank - 1`.
+
+ - Bounds of dimension `i` in `updates` must conform to the following:
+ - If `i` is present in `update_window_dims` (i.e. equal to
+ `update_window_dims`[`k`] for some `k`), then the bound of dimension
+ `i` in `updates` must not exceed the corresponding bound of `operand`
+ after accounting for the `inserted_window_dims` (i.e.
+ `adjusted_window_bounds`[`k`], where `adjusted_window_bounds` contains
+ the bounds of `operand` with the bounds at indices
+ `inserted_window_dims` removed).
+ - If `i` is present in `update_scatter_dims` (i.e. equal to
+ `update_scatter_dims`[`k`] for some `k`), then the bound of dimension
+ `i` in `updates` must be equal to the corresponding bound of
+ `scatter_indices`, skipping `index_vector_dim` (i.e.
+ `scatter_indices.shape.dims`[`k`], if `k` < `index_vector_dim` and
+ `scatter_indices.shape.dims`[`k+1`] otherwise).
+
+ - `update_window_dims` must be in ascending order, not have any repeating
+ dimension numbers, and be in the range `[0, updates.rank)`.
+
+ - `inserted_window_dims` must be in ascending order, not have any
+ repeating dimension numbers, and be in the range `[0, operand.rank)`.
+
+ - `scatter_dims_to_operand_dims.size` must be equal to
+   `scatter_indices.shape.dims`[`index_vector_dim`], and its values must be in the range
+ `[0, operand.rank)`.
+
+For a given index `U` in the `updates` tensor, the corresponding index `I` in
+the `operand` tensor into which this update has to be applied is computed as
+follows:
+
+ 1. Let `G` = { `U`[`k`] for `k` in `update_scatter_dims` }. Use `G` to look up
+ an index vector `S` in the `scatter_indices` tensor such that `S`[`i`] =
+ `scatter_indices`[Combine(`G`, `i`)] where Combine(A, b) inserts b at
+   position `index_vector_dim` into A.
+ 2. Create an index `S`<sub>`in`</sub> into `operand` using `S` by scattering
+ `S` using the `scatter_dims_to_operand_dims` map. More formally:
+ 1. `S`<sub>`in`</sub>[`scatter_dims_to_operand_dims`[`k`]] = `S`[`k`] if
+ `k` < `scatter_dims_to_operand_dims.size`.
+ 2. `S`<sub>`in`</sub>[`_`] = `0` otherwise.
+ 3. Create an index `W`<sub>`in`</sub> into `operand` by scattering the indices
+ at `update_window_dims` in `U` according to `inserted_window_dims`.
+ More formally:
+ 1. `W`<sub>`in`</sub>[`window_dims_to_operand_dims`(`k`)] = `U`[`k`] if
+ `k` < `update_window_dims.size`, where `window_dims_to_operand_dims`
+ is the monotonic function with domain [`0`, `update_window_dims.size`)
+ and range [`0`, `operand.rank`) \\ `inserted_window_dims`. (For
+ example, if `update_window_dims.size` is `4`, `operand.rank` is `6`,
+ and `inserted_window_dims` is {`0`, `2`} then
+ `window_dims_to_operand_dims` is {`0`→`1`, `1`→`3`, `2`→`4`,
+ `3`→`5`}).
+ 2. `W`<sub>`in`</sub>[`_`] = `0` otherwise.
+ 4. `I` is `W`<sub>`in`</sub> + `S`<sub>`in`</sub> where + is element-wise
+ addition.
+
+In summary, the scatter operation can be defined as follows.
+
+ - Initialize `output` with `operand`, i.e. for all indices `O` in the
+ `operand` tensor:\
+ `output`[`O`] = `operand`[`O`]
+ - For every index `U` in the `updates` tensor and the corresponding index `O`
+ in the `operand` tensor:\
+ `output`[`O`] = `update_computation`(`output`[`O`], `updates`[`U`])
+
+The order in which updates are applied is non-deterministic. So, when multiple
+indices in `updates` refer to the same index in `operand`, the corresponding
+value in `output` will be non-deterministic.
+
+Note that the first parameter that is passed into the `update_computation` will
+always be the current value from the `output` tensor and the second parameter
+will always be the value from the `updates` tensor. This is important
+specifically for cases when the `update_computation` is _not commutative_.
+
+Informally, the scatter op can be viewed as an _inverse_ of the gather op, i.e.
+the scatter op updates the elements in the input that are extracted by the
+corresponding gather op.
+
+For a detailed informal description and examples, refer to the
+"Informal Description" section under `Gather`.
+
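+For intuition, here is a NumPy sketch of the simplest configuration (a rank-1
+operand, scalar updates, and an add combiner); it is written for this guide
+and ignores the general window machinery:
+
+```python
+import numpy as np
+
+def scatter_add_1d(operand, scatter_indices, updates):
+  # output[O] = update_computation(output[O], updates[U]) with f(a, b) = a + b.
+  output = operand.copy()
+  for u, idx in enumerate(scatter_indices):
+    output[idx] = output[idx] + updates[u]
+  return output
+
+print(scatter_add_1d(np.zeros(5), np.array([1, 3, 1]), np.array([10., 20., 30.])))
+# [ 0. 40.  0. 20.  0.]   (index 1 receives two updates; addition is
+# commutative, so the unspecified application order does not matter here)
+```
+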
## Select
See also
@@ -2080,7 +2325,7 @@ element types.
## Transpose
-See also the @{tf.reshape} operation.
+See also the `tf.reshape` operation.
<b>`Transpose(operand)`</b>
@@ -2140,8 +2385,6 @@ restrictions listed below.
last execution of the `body`.
* The shape of the type `T` is statically determined and must be the same
across all iterations.
-* `While` nodes are not allowed to be nested. (This restriction may be lifted
- in the future on some targets.)
The T parameters of the computations are initialized with the `init` value in
the first iteration and are automatically updated to the new result from `body`
diff --git a/tensorflow/docs_src/performance/xla/tfcompile.md b/tensorflow/docs_src/performance/xla/tfcompile.md
index 8521d7eacb..e4b803164f 100644
--- a/tensorflow/docs_src/performance/xla/tfcompile.md
+++ b/tensorflow/docs_src/performance/xla/tfcompile.md
@@ -205,10 +205,7 @@ representing the inputs, `results` representing the outputs, and `temps`
representing temporary buffers used internally to perform the computation. By
default, each instance of the generated class allocates and manages all of these
buffers for you. The `AllocMode` constructor argument may be used to change this
-behavior. A convenience library is provided in
-[`tensorflow/compiler/aot/runtime.h`](https://www.tensorflow.org/code/tensorflow/compiler/aot/runtime.h)
-to help with manual buffer allocation; usage of this library is optional. All
-buffers should be aligned to 32-byte boundaries.
+behavior. All buffers are aligned to 64-byte boundaries.
The generated C++ class is just a wrapper around the low-level code generated by
XLA.
diff --git a/tensorflow/docs_src/tutorials/_toc.yaml b/tensorflow/docs_src/tutorials/_toc.yaml
index d33869af6e..0e25208a00 100644
--- a/tensorflow/docs_src/tutorials/_toc.yaml
+++ b/tensorflow/docs_src/tutorials/_toc.yaml
@@ -37,9 +37,30 @@ toc:
status: external
- title: "Custom training: walkthrough"
path: /tutorials/eager/custom_training_walkthrough
+ - title: Text generation
+ path: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/generative_examples/text_generation.ipynb
+ status: external
- title: Translation with attention
path: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/nmt_with_attention/nmt_with_attention.ipynb
status: external
+ - title: Image captioning
+ path: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/generative_examples/image_captioning_with_attention.ipynb
+ status: external
+  - title: Neural style transfer
+ path: https://github.com/tensorflow/models/blob/master/research/nst_blogpost/4_Neural_Style_Transfer_with_Eager_Execution.ipynb
+ status: external
+ - title: DCGAN
+ path: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/generative_examples/dcgan.ipynb
+ status: external
+ - title: VAE
+ path: https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/eager/python/examples/generative_examples/cvae.ipynb
+ status: external
+ - title: Pix2Pix
+ path: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/pix2pix/pix2pix_eager.ipynb
+ status: external
+  - title: Image segmentation
+ path: https://github.com/tensorflow/models/blob/master/samples/outreach/blogs/segmentation_blogpost/image_segmentation.ipynb
+ status: external
- title: ML at production scale
style: accordion
diff --git a/tensorflow/docs_src/tutorials/estimators/cnn.md b/tensorflow/docs_src/tutorials/estimators/cnn.md
index 12a215b50c..100f501cc2 100644
--- a/tensorflow/docs_src/tutorials/estimators/cnn.md
+++ b/tensorflow/docs_src/tutorials/estimators/cnn.md
@@ -1,6 +1,6 @@
# Build a Convolutional Neural Network using Estimators
-The TensorFlow @{tf.layers$`layers` module} provides a high-level API that makes
+The `tf.layers` module provides a high-level API that makes
it easy to construct a neural network. It provides methods that facilitate the
creation of dense (fully connected) layers and convolutional layers, adding
activation functions, and applying dropout regularization. In this tutorial,
@@ -118,8 +118,8 @@ output from one layer-creation method and supply it as input to another.
Open `cnn_mnist.py` and add the following `cnn_model_fn` function, which
conforms to the interface expected by TensorFlow's Estimator API (more on this
later in [Create the Estimator](#create-the-estimator)). `cnn_mnist.py` takes
-MNIST feature data, labels, and
-@{tf.estimator.ModeKeys$model mode} (`TRAIN`, `EVAL`, `PREDICT`) as arguments;
+MNIST feature data, labels, and mode (from
+`tf.estimator.ModeKeys`: `TRAIN`, `EVAL`, `PREDICT`) as arguments;
configures the CNN; and returns predictions, loss, and a training operation:
```python
@@ -277,7 +277,7 @@ a 5x5 convolution over a 28x28 tensor will produce a 24x24 tensor, as there are
The `activation` argument specifies the activation function to apply to the
output of the convolution. Here, we specify ReLU activation with
-@{tf.nn.relu}.
+`tf.nn.relu`.
Our output tensor produced by `conv2d()` has a shape of
<code>[<em>batch_size</em>, 28, 28, 32]</code>: the same height and width
@@ -423,7 +423,7 @@ raw values into two different formats that our model function can return:
For a given example, our predicted class is the element in the corresponding row
of the logits tensor with the highest raw value. We can find the index of this
-element using the @{tf.argmax}
+element using the `tf.argmax`
function:
```python
@@ -438,7 +438,7 @@ value along the dimension with index of 1, which corresponds to our predictions
10]</code>).
We can derive probabilities from our logits layer by applying softmax activation
-using @{tf.nn.softmax}:
+using `tf.nn.softmax`:
```python
tf.nn.softmax(logits, name="softmax_tensor")
@@ -572,8 +572,8 @@ feel free to change to another directory of your choice).
### Set Up a Logging Hook {#set_up_a_logging_hook}
Since CNNs can take a while to train, let's set up some logging so we can track
-progress during training. We can use TensorFlow's @{tf.train.SessionRunHook} to create a
-@{tf.train.LoggingTensorHook}
+progress during training. We can use TensorFlow's `tf.train.SessionRunHook` to create a
+`tf.train.LoggingTensorHook`
that will log the probability values from the softmax layer of our CNN. Add the
following to `main()`:
diff --git a/tensorflow/docs_src/tutorials/images/deep_cnn.md b/tensorflow/docs_src/tutorials/images/deep_cnn.md
index 27963575f5..42ad484bbf 100644
--- a/tensorflow/docs_src/tutorials/images/deep_cnn.md
+++ b/tensorflow/docs_src/tutorials/images/deep_cnn.md
@@ -31,26 +31,26 @@ new ideas and experimenting with new techniques.
The CIFAR-10 tutorial demonstrates several important constructs for
designing larger and more sophisticated models in TensorFlow:
-* Core mathematical components including @{tf.nn.conv2d$convolution}
+* Core mathematical components including convolution (`tf.nn.conv2d`)
([wiki](https://en.wikipedia.org/wiki/Convolution)),
-@{tf.nn.relu$rectified linear activations}
+rectified linear activations (`tf.nn.relu`)
([wiki](https://en.wikipedia.org/wiki/Rectifier_(neural_networks))),
-@{tf.nn.max_pool$max pooling}
+max pooling (`tf.nn.max_pool`)
([wiki](https://en.wikipedia.org/wiki/Convolutional_neural_network#Pooling_layer))
-and @{tf.nn.local_response_normalization$local response normalization}
+and local response normalization (`tf.nn.local_response_normalization`)
(Chapter 3.3 in
[AlexNet paper](https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf)).
* @{$summaries_and_tensorboard$Visualization}
of network activities during training, including input images,
losses and distributions of activations and gradients.
* Routines for calculating the
-@{tf.train.ExponentialMovingAverage$moving average}
+moving average (`tf.train.ExponentialMovingAverage`)
of learned parameters and using these averages
during evaluation to boost predictive performance.
* Implementation of a
-@{tf.train.exponential_decay$learning rate schedule}
+learning rate schedule (`tf.train.exponential_decay`)
that systematically decrements over time.
-* Prefetching @{tf.train.shuffle_batch$queues}
+* Prefetching queues (`tf.train.shuffle_batch`)
for input
data to isolate the model from disk latency and expensive image pre-processing.
@@ -113,27 +113,27 @@ gradients, variable updates and visualization summaries.
The input part of the model is built by the functions `inputs()` and
`distorted_inputs()` which read images from the CIFAR-10 binary data files.
These files contain fixed byte length records, so we use
-@{tf.FixedLengthRecordReader}.
+`tf.FixedLengthRecordReader`.
See @{$reading_data#reading-from-files$Reading Data} to
learn more about how the `Reader` class works.
The images are processed as follows:
* They are cropped to 24 x 24 pixels, centrally for evaluation or
- @{tf.random_crop$randomly} for training.
-* They are @{tf.image.per_image_standardization$approximately whitened}
+  randomly with `tf.random_crop` for training.
+* They are approximately whitened with `tf.image.per_image_standardization`
to make the model insensitive to dynamic range.
For training, we additionally apply a series of random distortions to
artificially increase the data set size:
-* @{tf.image.random_flip_left_right$Randomly flip} the image from left to right.
-* Randomly distort the @{tf.image.random_brightness$image brightness}.
-* Randomly distort the @{tf.image.random_contrast$image contrast}.
+* Randomly flip the image from left to right with `tf.image.random_flip_left_right`.
+* Randomly distort the image brightness with `tf.image.random_brightness`.
+* Randomly distort the image contrast with `tf.image.random_contrast`, as
+  sketched below.
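+
+Putting the steps above together, a minimal sketch of the training-time
+pipeline (parameter values here are illustrative; see `cifar10_input.py` for
+the tutorial's actual settings):
+
+```python
+import tensorflow as tf
+
+def distort_for_training(image):
+  image = tf.random_crop(image, [24, 24, 3])
+  image = tf.image.random_flip_left_right(image)
+  image = tf.image.random_brightness(image, max_delta=63)
+  image = tf.image.random_contrast(image, lower=0.2, upper=1.8)
+  return tf.image.per_image_standardization(image)
+```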
Please see the @{$python/image$Images} page for the list of
available distortions. We also attach an
-@{tf.summary.image} to the images
+`tf.summary.image` to the images
so that we may visualize them in @{$summaries_and_tensorboard$TensorBoard}.
This is a good practice to verify that inputs are built correctly.
@@ -144,7 +144,7 @@ This is a good practice to verify that inputs are built correctly.
Reading images from disk and distorting them can use a non-trivial amount of
processing time. To prevent these operations from slowing down training, we run
them inside 16 separate threads which continuously fill a TensorFlow
-@{tf.train.shuffle_batch$queue}.
+queue (`tf.train.shuffle_batch`).
### Model Prediction
@@ -154,12 +154,12 @@ the model is organized as follows:
Layer Name | Description
--- | ---
-`conv1` | @{tf.nn.conv2d$convolution} and @{tf.nn.relu$rectified linear} activation.
-`pool1` | @{tf.nn.max_pool$max pooling}.
-`norm1` | @{tf.nn.local_response_normalization$local response normalization}.
-`conv2` | @{tf.nn.conv2d$convolution} and @{tf.nn.relu$rectified linear} activation.
-`norm2` | @{tf.nn.local_response_normalization$local response normalization}.
-`pool2` | @{tf.nn.max_pool$max pooling}.
+`conv1` | convolution (`tf.nn.conv2d`) and rectified linear (`tf.nn.relu`) activation.
+`pool1` | max pooling (`tf.nn.max_pool`).
+`norm1` | local response normalization (`tf.nn.local_response_normalization`).
+`conv2` | convolution (`tf.nn.conv2d`) and rectified linear (`tf.nn.relu`) activation.
+`norm2` | local response normalization (`tf.nn.local_response_normalization`).
+`pool2` | max pooling (`tf.nn.max_pool`).
`local3` | @{$python/nn$fully connected layer with rectified linear activation}.
`local4` | @{$python/nn$fully connected layer with rectified linear activation}.
`softmax_linear` | linear transformation to produce logits.
@@ -172,7 +172,7 @@ Here is a graph generated from TensorBoard describing the inference operation:
> **EXERCISE**: The output of `inference` are un-normalized logits. Try editing
the network architecture to return normalized predictions using
-@{tf.nn.softmax}.
+`tf.nn.softmax`.
The `inputs()` and `inference()` functions provide all the components
necessary to perform an evaluation of a model. We now shift our focus towards
@@ -190,16 +190,16 @@ architecture in the top layer.
The usual method for training a network to perform N-way classification is
[multinomial logistic regression](https://en.wikipedia.org/wiki/Multinomial_logistic_regression),
aka. *softmax regression*. Softmax regression applies a
-@{tf.nn.softmax$softmax} nonlinearity to the
+`tf.nn.softmax` nonlinearity to the
output of the network and calculates the
-@{tf.nn.sparse_softmax_cross_entropy_with_logits$cross-entropy}
+cross-entropy (`tf.nn.sparse_softmax_cross_entropy_with_logits`)
between the normalized predictions and the label index.
For regularization, we also apply the usual
-@{tf.nn.l2_loss$weight decay} losses to all learned
+weight decay (`tf.nn.l2_loss`) losses to all learned
variables. The objective function for the model is the sum of the cross entropy
loss and all these weight decay terms, as returned by the `loss()` function.
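+
+A condensed sketch of that objective (the weight-decay coefficient and the
+placeholders are illustrative):
+
+```python
+import tensorflow as tf
+
+logits = tf.placeholder(tf.float32, [None, 10])  # from inference()
+labels = tf.placeholder(tf.int64, [None])        # ground-truth class indices
+
+cross_entropy = tf.reduce_mean(
+    tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits))
+# L2 weight decay over all learned variables (empty-list guard keeps the
+# snippet runnable on its own).
+weight_decay = 0.004 * tf.add_n(
+    [tf.nn.l2_loss(v) for v in tf.trainable_variables()] or [tf.constant(0.0)])
+total_loss = cross_entropy + weight_decay
+```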
-We visualize it in TensorBoard with a @{tf.summary.scalar}:
+We visualize it in TensorBoard with a `tf.summary.scalar`:
![CIFAR-10 Loss](https://www.tensorflow.org/images/cifar_loss.png "CIFAR-10 Total Loss")
@@ -207,14 +207,14 @@ We train the model using standard
[gradient descent](https://en.wikipedia.org/wiki/Gradient_descent)
algorithm (see @{$python/train$Training} for other methods)
with a learning rate that
-@{tf.train.exponential_decay$exponentially decays}
+decays exponentially (`tf.train.exponential_decay`)
over time.
![CIFAR-10 Learning Rate Decay](https://www.tensorflow.org/images/cifar_lr_decay.png "CIFAR-10 Learning Rate Decay")
The `train()` function adds the operations needed to minimize the objective by
calculating the gradient and updating the learned variables (see
-@{tf.train.GradientDescentOptimizer}
+`tf.train.GradientDescentOptimizer`
for details). It returns an operation that executes all the calculations
needed to train and update the model for one batch of images.
@@ -263,7 +263,7 @@ training step can take so long. Try decreasing the number of images that
initially fill up the queue. Search for `min_fraction_of_examples_in_queue`
in `cifar10_input.py`.
-`cifar10_train.py` periodically @{tf.train.Saver$saves}
+`cifar10_train.py` periodically uses a `tf.train.Saver` to save
all model parameters in
@{$guide/saved_model$checkpoint files}
but it does *not* evaluate the model. The checkpoint file
@@ -285,7 +285,7 @@ how the model is training. We want more insight into the model during training:
@{$summaries_and_tensorboard$TensorBoard} provides this
functionality, displaying data exported periodically from `cifar10_train.py` via
a
-@{tf.summary.FileWriter}.
+`tf.summary.FileWriter`.
For instance, we can watch how the distribution of activations and degree of
sparsity in `local3` features evolve during training:
@@ -300,7 +300,7 @@ interesting to track over time. However, the loss exhibits a considerable amount
of noise due to the small batch size employed by training. In practice we find
it extremely useful to visualize their moving averages in addition to their raw
values. See how the scripts use
-@{tf.train.ExponentialMovingAverage}
+`tf.train.ExponentialMovingAverage`
for this purpose.
## Evaluating a Model
@@ -336,8 +336,8 @@ exports summaries that may be visualized in TensorBoard. These summaries
provide additional insight into the model during evaluation.
The training script calculates the
-@{tf.train.ExponentialMovingAverage$moving average}
-version of all learned variables. The evaluation script substitutes
+moving average (`tf.train.ExponentialMovingAverage`) of all learned variables.
+The evaluation script substitutes
all learned model parameters with the moving average version. This
substitution boosts model performance at evaluation time.
@@ -401,17 +401,17 @@ gradients for a single model replica. In the code we term this abstraction
a "tower". We must set two attributes for each tower:
* A unique name for all operations within a tower.
-@{tf.name_scope} provides
+`tf.name_scope` provides
this unique name by prepending a scope. For instance, all operations in
the first tower are prepended with `tower_0`, e.g. `tower_0/conv1/Conv2D`.
* A preferred hardware device to run the operation within a tower.
-@{tf.device} specifies this. For
+`tf.device` specifies this. For
instance, all operations in the first tower reside within `device('/device:GPU:0')`
scope indicating that they should be run on the first GPU.
All variables are pinned to the CPU and accessed via
-@{tf.get_variable}
+`tf.get_variable`
in order to share them in a multi-GPU version.
See how-to on @{$variables$Sharing Variables}.
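+
+A minimal sketch of that tower pattern (layer shapes and the number of towers
+are illustrative):
+
+```python
+import tensorflow as tf
+
+def tower(images, i):
+  with tf.device('/device:GPU:%d' % i):      # tower ops run on GPU i
+    with tf.name_scope('tower_%d' % i):      # unique names, e.g. tower_0/conv1
+      with tf.device('/cpu:0'):              # variables are pinned to the CPU
+        weights = tf.get_variable(
+            'conv1/weights', [5, 5, 3, 64],
+            initializer=tf.truncated_normal_initializer(stddev=0.05))
+      return tf.nn.conv2d(images, weights, [1, 1, 1, 1], padding='SAME')
+
+images = tf.placeholder(tf.float32, [None, 24, 24, 3])
+for i in range(2):
+  output = tower(images, i)
+  tf.get_variable_scope().reuse_variables()  # share variables across towers
+```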
diff --git a/tensorflow/docs_src/tutorials/images/image_recognition.md b/tensorflow/docs_src/tutorials/images/image_recognition.md
index d545de73df..83a8d97cf0 100644
--- a/tensorflow/docs_src/tutorials/images/image_recognition.md
+++ b/tensorflow/docs_src/tutorials/images/image_recognition.md
@@ -253,7 +253,7 @@ definition with the `ToGraphDef()` function.
TF_RETURN_IF_ERROR(session->Run({}, {output_name}, {}, out_tensors));
return Status::OK();
```
-Then we create a @{tf.Session}
+Then we create a `tf.Session`
object, which is the interface to actually running the graph, and run it,
specifying which node we want to get the output from, and where to put the
output data.
diff --git a/tensorflow/docs_src/tutorials/representation/kernel_methods.md b/tensorflow/docs_src/tutorials/representation/kernel_methods.md
index f3c232c511..71e87f4d3e 100644
--- a/tensorflow/docs_src/tutorials/representation/kernel_methods.md
+++ b/tensorflow/docs_src/tutorials/representation/kernel_methods.md
@@ -1,9 +1,8 @@
# Improving Linear Models Using Explicit Kernel Methods
-Note: This document uses a deprecated version of @{tf.estimator},
-which has a @{tf.contrib.learn.Estimator$different interface}.
-It also uses other `contrib` methods whose
-@{$version_compat#not_covered$API may not be stable}.
+Note: This document uses a deprecated version of `tf.estimator`,
+`tf.contrib.learn.Estimator`, which has a different interface. It also uses
+other `contrib` methods whose @{$version_compat#not_covered$API may not be stable}.
In this tutorial, we demonstrate how combining (explicit) kernel methods with
linear models can drastically increase the latters' quality of predictions
@@ -90,7 +89,7 @@ eval_input_fn = get_input_fn(data.validation, batch_size=5000)
## Training a simple linear model
We can now train a linear model over the MNIST dataset. We will use the
-@{tf.contrib.learn.LinearClassifier} estimator with 10 classes representing the
+`tf.contrib.learn.LinearClassifier` estimator with 10 classes representing the
10 digits. The input features form a 784-dimensional dense vector which can
be specified as follows:
@@ -195,7 +194,7 @@ much higher dimensional space than the original one. See
for more details.
### Kernel classifier
-@{tf.contrib.kernel_methods.KernelLinearClassifier} is a pre-packaged
+`tf.contrib.kernel_methods.KernelLinearClassifier` is a pre-packaged
`tf.contrib.learn` estimator that combines the power of explicit kernel mappings
with linear models. Its constructor is almost identical to that of the
LinearClassifier estimator with the additional option to specify a list of
diff --git a/tensorflow/docs_src/tutorials/representation/linear.md b/tensorflow/docs_src/tutorials/representation/linear.md
index 1b418cf065..014409c617 100644
--- a/tensorflow/docs_src/tutorials/representation/linear.md
+++ b/tensorflow/docs_src/tutorials/representation/linear.md
@@ -1,6 +1,6 @@
# Large-scale Linear Models with TensorFlow
-@{tf.estimator$Estimators} provides (among other things) a rich set of tools for
+`tf.estimator` provides (among other things) a rich set of tools for
working with linear models in TensorFlow. This document provides an overview of
those tools. It explains:
diff --git a/tensorflow/docs_src/tutorials/representation/word2vec.md b/tensorflow/docs_src/tutorials/representation/word2vec.md
index 0a1c41c84a..7964650e19 100644
--- a/tensorflow/docs_src/tutorials/representation/word2vec.md
+++ b/tensorflow/docs_src/tutorials/representation/word2vec.md
@@ -317,7 +317,7 @@ optimizer = tf.train.GradientDescentOptimizer(learning_rate=1.0).minimize(loss)
Training the model is then as simple as using a `feed_dict` to push data into
the placeholders and calling
-@{tf.Session.run} with this new data
+`tf.Session.run` with this new data
in a loop.
```python