Diffstat (limited to 'tensorflow/docs_src')
-rw-r--r--  tensorflow/docs_src/BUILD | 14
-rw-r--r--  tensorflow/docs_src/api_guides/cc/guide.md | 12
-rw-r--r--  tensorflow/docs_src/api_guides/python/array_ops.md | 120
-rw-r--r--  tensorflow/docs_src/api_guides/python/check_ops.md | 34
-rw-r--r--  tensorflow/docs_src/api_guides/python/client.md | 48
-rw-r--r--  tensorflow/docs_src/api_guides/python/constant_op.md | 38
-rw-r--r--  tensorflow/docs_src/api_guides/python/contrib.crf.md | 14
-rw-r--r--  tensorflow/docs_src/api_guides/python/contrib.ffmpeg.md | 4
-rw-r--r--  tensorflow/docs_src/api_guides/python/contrib.framework.md | 94
-rw-r--r--  tensorflow/docs_src/api_guides/python/contrib.graph_editor.md | 114
-rw-r--r--  tensorflow/docs_src/api_guides/python/contrib.integrate.md | 2
-rw-r--r--  tensorflow/docs_src/api_guides/python/contrib.layers.md | 118
-rw-r--r--  tensorflow/docs_src/api_guides/python/contrib.learn.md | 76
-rw-r--r--  tensorflow/docs_src/api_guides/python/contrib.linalg.md | 16
-rw-r--r--  tensorflow/docs_src/api_guides/python/contrib.losses.md | 30
-rw-r--r--  tensorflow/docs_src/api_guides/python/contrib.metrics.md | 84
-rw-r--r--  tensorflow/docs_src/api_guides/python/contrib.rnn.md | 60
-rw-r--r--  tensorflow/docs_src/api_guides/python/contrib.seq2seq.md | 32
-rw-r--r--  tensorflow/docs_src/api_guides/python/contrib.signal.md | 16
-rw-r--r--  tensorflow/docs_src/api_guides/python/contrib.staging.md | 2
-rw-r--r--  tensorflow/docs_src/api_guides/python/contrib.training.md | 34
-rw-r--r--  tensorflow/docs_src/api_guides/python/contrib.util.md | 10
-rw-r--r--  tensorflow/docs_src/api_guides/python/control_flow_ops.md | 56
-rw-r--r--  tensorflow/docs_src/api_guides/python/framework.md | 58
-rw-r--r--  tensorflow/docs_src/api_guides/python/functional_ops.md | 10
-rw-r--r--  tensorflow/docs_src/api_guides/python/image.md | 98
-rw-r--r--  tensorflow/docs_src/api_guides/python/input_dataset.md | 96
-rw-r--r--  tensorflow/docs_src/api_guides/python/io_ops.md | 100
-rw-r--r--  tensorflow/docs_src/api_guides/python/math_ops.md | 230
-rw-r--r--  tensorflow/docs_src/api_guides/python/meta_graph.md | 10
-rw-r--r--  tensorflow/docs_src/api_guides/python/nn.md | 156
-rw-r--r--  tensorflow/docs_src/api_guides/python/python_io.md | 8
-rw-r--r--  tensorflow/docs_src/api_guides/python/reading_data.md | 58
-rw-r--r--  tensorflow/docs_src/api_guides/python/regression_examples.md | 12
-rw-r--r--  tensorflow/docs_src/api_guides/python/session_ops.md | 8
-rw-r--r--  tensorflow/docs_src/api_guides/python/sparse_ops.md | 44
-rw-r--r--  tensorflow/docs_src/api_guides/python/spectral_ops.md | 29
-rw-r--r--  tensorflow/docs_src/api_guides/python/state_ops.md | 122
-rw-r--r--  tensorflow/docs_src/api_guides/python/string_ops.md | 28
-rw-r--r--  tensorflow/docs_src/api_guides/python/summary.md | 20
-rw-r--r--  tensorflow/docs_src/api_guides/python/test.md | 20
-rw-r--r--  tensorflow/docs_src/api_guides/python/tfdbg.md | 22
-rw-r--r--  tensorflow/docs_src/api_guides/python/threading_and_queues.md | 36
-rw-r--r--  tensorflow/docs_src/api_guides/python/train.md | 138
-rw-r--r--  tensorflow/docs_src/community/style_guide.md | 58
-rw-r--r--  tensorflow/docs_src/deploy/distributed.md | 20
-rw-r--r--  tensorflow/docs_src/deploy/s3.md | 2
-rw-r--r--  tensorflow/docs_src/extend/adding_an_op.md | 25
-rw-r--r--  tensorflow/docs_src/extend/architecture.md | 4
-rw-r--r--  tensorflow/docs_src/extend/index.md | 3
-rw-r--r--  tensorflow/docs_src/extend/new_data_formats.md | 93
-rw-r--r--  tensorflow/docs_src/get_started/eager.md | 3
-rw-r--r--  tensorflow/docs_src/get_started/leftnav_files | 10
-rw-r--r--  tensorflow/docs_src/guide/autograph.md | 3
-rw-r--r--  tensorflow/docs_src/guide/checkpoints.md | 2
-rw-r--r--  tensorflow/docs_src/guide/custom_estimators.md | 54
-rw-r--r--  tensorflow/docs_src/guide/datasets.md | 24
-rw-r--r--  tensorflow/docs_src/guide/datasets_for_estimators.md | 32
-rw-r--r--  tensorflow/docs_src/guide/debugger.md | 25
-rw-r--r--  tensorflow/docs_src/guide/eager.md | 57
-rw-r--r--  tensorflow/docs_src/guide/estimators.md | 23
-rw-r--r--  tensorflow/docs_src/guide/faq.md | 71
-rw-r--r--  tensorflow/docs_src/guide/feature_columns.md | 42
-rw-r--r--  tensorflow/docs_src/guide/graph_viz.md | 5
-rw-r--r--  tensorflow/docs_src/guide/graphs.md | 206
-rw-r--r--  tensorflow/docs_src/guide/index.md | 18
-rw-r--r--  tensorflow/docs_src/guide/keras.md | 22
-rw-r--r--  tensorflow/docs_src/guide/leftnav_files | 7
-rw-r--r--  tensorflow/docs_src/guide/low_level_intro.md | 46
-rw-r--r--  tensorflow/docs_src/guide/premade_estimators.md | 14
-rw-r--r--  tensorflow/docs_src/guide/saved_model.md | 72
-rw-r--r--  tensorflow/docs_src/guide/summaries_and_tensorboard.md | 8
-rw-r--r--  tensorflow/docs_src/guide/tensorboard_histograms.md | 4
-rw-r--r--  tensorflow/docs_src/guide/tensors.md | 2
-rw-r--r--  tensorflow/docs_src/guide/using_gpu.md | 2
-rw-r--r--  tensorflow/docs_src/guide/using_tpu.md | 32
-rw-r--r--  tensorflow/docs_src/guide/variables.md | 4
-rw-r--r--  tensorflow/docs_src/guide/version_compat.md | 16
-rw-r--r--  tensorflow/docs_src/install/index.md | 31
-rw-r--r--  tensorflow/docs_src/install/install_c.md | 4
-rw-r--r--  tensorflow/docs_src/install/install_go.md | 6
-rw-r--r--  tensorflow/docs_src/install/install_java.md | 24
-rw-r--r--  tensorflow/docs_src/install/install_linux.md | 442
-rw-r--r--  tensorflow/docs_src/install/install_mac.md | 15
-rw-r--r--  tensorflow/docs_src/install/install_raspbian.md | 4
-rw-r--r--  tensorflow/docs_src/install/install_sources.md | 449
-rw-r--r--  tensorflow/docs_src/install/install_windows.md | 4
-rw-r--r--  tensorflow/docs_src/install/migration.md | 3
-rw-r--r--  tensorflow/docs_src/javascript/index.md | 5
-rw-r--r--  tensorflow/docs_src/javascript/leftnav_files | 1
-rw-r--r--  tensorflow/docs_src/mobile/README.md | 3
-rw-r--r--  tensorflow/docs_src/mobile/android_build.md | 177
-rw-r--r--  tensorflow/docs_src/mobile/index.md | 36
-rw-r--r--  tensorflow/docs_src/mobile/ios_build.md | 107
-rw-r--r--  tensorflow/docs_src/mobile/leftnav_files | 14
-rw-r--r--  tensorflow/docs_src/mobile/linking_libs.md | 243
-rw-r--r--  tensorflow/docs_src/mobile/mobile_intro.md | 247
-rw-r--r--  tensorflow/docs_src/mobile/optimizing.md | 499
-rw-r--r--  tensorflow/docs_src/mobile/prepare_models.md | 301
-rw-r--r--  tensorflow/docs_src/mobile/tflite/demo_android.md | 146
-rw-r--r--  tensorflow/docs_src/mobile/tflite/demo_ios.md | 68
-rw-r--r--  tensorflow/docs_src/mobile/tflite/devguide.md | 231
-rw-r--r--  tensorflow/docs_src/mobile/tflite/index.md | 209
-rw-r--r--  tensorflow/docs_src/performance/datasets_performance.md | 22
-rw-r--r--  tensorflow/docs_src/performance/performance_guide.md | 44
-rw-r--r--  tensorflow/docs_src/performance/performance_models.md | 18
-rw-r--r--  tensorflow/docs_src/performance/quantization.md | 2
-rw-r--r--  tensorflow/docs_src/performance/xla/broadcasting.md | 2
-rw-r--r--  tensorflow/docs_src/performance/xla/developing_new_backend.md | 2
-rw-r--r--  tensorflow/docs_src/performance/xla/jit.md | 12
-rw-r--r--  tensorflow/docs_src/performance/xla/operation_semantics.md | 443
-rw-r--r--  tensorflow/docs_src/performance/xla/tfcompile.md | 5
-rw-r--r--  tensorflow/docs_src/tutorials/_index.yaml (renamed from tensorflow/docs_src/get_started/_index.yaml) | 97
-rw-r--r--  tensorflow/docs_src/tutorials/_toc.yaml | 103
-rw-r--r--  tensorflow/docs_src/tutorials/eager/custom_training_walkthrough.md | 3
-rw-r--r--  tensorflow/docs_src/tutorials/eager/index.md | 13
-rw-r--r--  tensorflow/docs_src/tutorials/estimators/cnn.md (renamed from tensorflow/docs_src/tutorials/layers.md) | 18
-rw-r--r--  tensorflow/docs_src/tutorials/estimators/linear.md | 3
-rw-r--r--  tensorflow/docs_src/tutorials/image_retraining.md | 4
-rw-r--r--  tensorflow/docs_src/tutorials/images/deep_cnn.md (renamed from tensorflow/docs_src/tutorials/deep_cnn.md) | 94
-rw-r--r--  tensorflow/docs_src/tutorials/images/image_recognition.md (renamed from tensorflow/docs_src/tutorials/image_recognition.md) | 5
-rw-r--r--  tensorflow/docs_src/tutorials/index.md | 59
-rw-r--r--  tensorflow/docs_src/tutorials/keras/basic_classification.md (renamed from tensorflow/docs_src/get_started/basic_classification.md) | 2
-rw-r--r--  tensorflow/docs_src/tutorials/keras/basic_regression.md (renamed from tensorflow/docs_src/get_started/basic_regression.md) | 2
-rw-r--r--  tensorflow/docs_src/tutorials/keras/basic_text_classification.md (renamed from tensorflow/docs_src/get_started/basic_text_classification.md) | 2
-rw-r--r--  tensorflow/docs_src/tutorials/keras/index.md | 22
-rw-r--r--  tensorflow/docs_src/tutorials/keras/overfit_and_underfit.md (renamed from tensorflow/docs_src/get_started/overfit_and_underfit.md) | 2
-rw-r--r--  tensorflow/docs_src/tutorials/keras/save_and_restore_models.md (renamed from tensorflow/docs_src/get_started/save_and_restore_models.md) | 2
-rw-r--r--  tensorflow/docs_src/tutorials/leftnav_files | 23
-rw-r--r--  tensorflow/docs_src/tutorials/next_steps.md (renamed from tensorflow/docs_src/get_started/next_steps.md) | 0
-rw-r--r-- [-rwxr-xr-x]  tensorflow/docs_src/tutorials/non-ml/mandelbrot.md (renamed from tensorflow/docs_src/tutorials/mandelbrot.md) | 0
-rw-r--r-- [-rwxr-xr-x]  tensorflow/docs_src/tutorials/non-ml/pdes.md (renamed from tensorflow/docs_src/tutorials/pdes.md) | 3
-rw-r--r--  tensorflow/docs_src/tutorials/representation/kernel_methods.md (renamed from tensorflow/docs_src/tutorials/kernel_methods.md) | 13
-rw-r--r--  tensorflow/docs_src/tutorials/representation/linear.md (renamed from tensorflow/docs_src/tutorials/linear.md) | 12
-rw-r--r--  tensorflow/docs_src/tutorials/representation/word2vec.md (renamed from tensorflow/docs_src/tutorials/word2vec.md) | 12
-rw-r--r--  tensorflow/docs_src/tutorials/seq2seq.md | 5
-rw-r--r--  tensorflow/docs_src/tutorials/sequences/audio_recognition.md (renamed from tensorflow/docs_src/tutorials/audio_recognition.md) | 0
-rw-r--r--  tensorflow/docs_src/tutorials/sequences/recurrent.md (renamed from tensorflow/docs_src/tutorials/recurrent.md) | 4
-rw-r--r--  tensorflow/docs_src/tutorials/sequences/recurrent_quickdraw.md (renamed from tensorflow/docs_src/tutorials/recurrent_quickdraw.md) | 4
-rw-r--r--  tensorflow/docs_src/tutorials/wide.md | 461
-rw-r--r--  tensorflow/docs_src/tutorials/wide_and_deep.md | 243
141 files changed, 2838 insertions, 5532 deletions
diff --git a/tensorflow/docs_src/BUILD b/tensorflow/docs_src/BUILD
new file mode 100644
index 0000000000..34bf7b6a11
--- /dev/null
+++ b/tensorflow/docs_src/BUILD
@@ -0,0 +1,14 @@
+# Files used to generate TensorFlow docs.
+
+licenses(["notice"]) # Apache 2.0
+
+package(
+ default_visibility = ["//tensorflow:internal"],
+)
+
+exports_files(["LICENSE"])
+
+filegroup(
+ name = "docs_src",
+ data = glob(["**/*.md"]),
+)
diff --git a/tensorflow/docs_src/api_guides/cc/guide.md b/tensorflow/docs_src/api_guides/cc/guide.md
index 4e51ada58a..0cea1d266e 100644
--- a/tensorflow/docs_src/api_guides/cc/guide.md
+++ b/tensorflow/docs_src/api_guides/cc/guide.md
@@ -92,7 +92,7 @@ We will delve into the details of each below.
### Scope
-@{tensorflow::Scope} is the main data structure that holds the current state
+`tensorflow::Scope` is the main data structure that holds the current state
of graph construction. A `Scope` acts as a handle to the graph being
constructed, as well as storing TensorFlow operation properties. The `Scope`
object is the first argument to operation constructors, and operations that use
@@ -102,7 +102,7 @@ explained further below.
Create a new `Scope` object by calling `Scope::NewRootScope`. This creates
some resources such as a graph to which operations are added. It also creates a
-@{tensorflow::Status} object which will be used to indicate errors encountered
+`tensorflow::Status` object which will be used to indicate errors encountered
when constructing operations. The `Scope` class has value semantics; thus, a
`Scope` object can be freely copied and passed around.
@@ -121,7 +121,7 @@ Here are some of the properties controlled by a `Scope` object:
* Device placement for an operation
* Kernel attribute for an operation
-Please refer to @{tensorflow::Scope} for the complete list of member functions
+Please refer to `tensorflow::Scope` for the complete list of member functions
that let you create child scopes with new properties.
### Operation Constructors
@@ -213,7 +213,7 @@ auto c = Concat(scope, s, 0);
You may pass many different types of C++ values directly to tensor
constants. You may explicitly create a tensor constant by calling the
-@{tensorflow::ops::Const} function from various kinds of C++ values. For
+`tensorflow::ops::Const` function from various kinds of C++ values. For
example:
* Scalars
@@ -257,7 +257,7 @@ auto y = Add(scope, {1, 2, 3, 4}, 10);
## Graph Execution
When executing a graph, you will need a session. The C++ API provides a
-@{tensorflow::ClientSession} class that will execute ops created by the
+`tensorflow::ClientSession` class that will execute ops created by the
operation constructors. TensorFlow will automatically determine which parts of
the graph need to be executed, and what values need feeding. For example:
@@ -291,5 +291,5 @@ session.Run({ {a, { {1, 2}, {3, 4} } } }, {c}, &outputs);
// outputs[0] == [4 5; 6 7]
```
-Please see the @{tensorflow::Tensor} documentation for more information on how
+Please see the `tensorflow::Tensor` documentation for more information on how
to use the execution output.
diff --git a/tensorflow/docs_src/api_guides/python/array_ops.md b/tensorflow/docs_src/api_guides/python/array_ops.md
index a34f01f073..ddeea80c56 100644
--- a/tensorflow/docs_src/api_guides/python/array_ops.md
+++ b/tensorflow/docs_src/api_guides/python/array_ops.md
@@ -1,7 +1,7 @@
# Tensor Transformations
Note: Functions taking `Tensor` arguments can also take anything accepted by
-@{tf.convert_to_tensor}.
+`tf.convert_to_tensor`.
[TOC]
@@ -10,78 +10,78 @@ Note: Functions taking `Tensor` arguments can also take anything accepted by
TensorFlow provides several operations that you can use to cast tensor data
types in your graph.
-* @{tf.string_to_number}
-* @{tf.to_double}
-* @{tf.to_float}
-* @{tf.to_bfloat16}
-* @{tf.to_int32}
-* @{tf.to_int64}
-* @{tf.cast}
-* @{tf.bitcast}
-* @{tf.saturate_cast}
+* `tf.string_to_number`
+* `tf.to_double`
+* `tf.to_float`
+* `tf.to_bfloat16`
+* `tf.to_int32`
+* `tf.to_int64`
+* `tf.cast`
+* `tf.bitcast`
+* `tf.saturate_cast`
## Shapes and Shaping
TensorFlow provides several operations that you can use to determine the shape
of a tensor and change the shape of a tensor.
-* @{tf.broadcast_dynamic_shape}
-* @{tf.broadcast_static_shape}
-* @{tf.shape}
-* @{tf.shape_n}
-* @{tf.size}
-* @{tf.rank}
-* @{tf.reshape}
-* @{tf.squeeze}
-* @{tf.expand_dims}
-* @{tf.meshgrid}
+* `tf.broadcast_dynamic_shape`
+* `tf.broadcast_static_shape`
+* `tf.shape`
+* `tf.shape_n`
+* `tf.size`
+* `tf.rank`
+* `tf.reshape`
+* `tf.squeeze`
+* `tf.expand_dims`
+* `tf.meshgrid`
## Slicing and Joining
TensorFlow provides several operations to slice or extract parts of a tensor,
or join multiple tensors together.
-* @{tf.slice}
-* @{tf.strided_slice}
-* @{tf.split}
-* @{tf.tile}
-* @{tf.pad}
-* @{tf.concat}
-* @{tf.stack}
-* @{tf.parallel_stack}
-* @{tf.unstack}
-* @{tf.reverse_sequence}
-* @{tf.reverse}
-* @{tf.reverse_v2}
-* @{tf.transpose}
-* @{tf.extract_image_patches}
-* @{tf.space_to_batch_nd}
-* @{tf.space_to_batch}
-* @{tf.required_space_to_batch_paddings}
-* @{tf.batch_to_space_nd}
-* @{tf.batch_to_space}
-* @{tf.space_to_depth}
-* @{tf.depth_to_space}
-* @{tf.gather}
-* @{tf.gather_nd}
-* @{tf.unique_with_counts}
-* @{tf.scatter_nd}
-* @{tf.dynamic_partition}
-* @{tf.dynamic_stitch}
-* @{tf.boolean_mask}
-* @{tf.one_hot}
-* @{tf.sequence_mask}
-* @{tf.dequantize}
-* @{tf.quantize_v2}
-* @{tf.quantized_concat}
-* @{tf.setdiff1d}
+* `tf.slice`
+* `tf.strided_slice`
+* `tf.split`
+* `tf.tile`
+* `tf.pad`
+* `tf.concat`
+* `tf.stack`
+* `tf.parallel_stack`
+* `tf.unstack`
+* `tf.reverse_sequence`
+* `tf.reverse`
+* `tf.reverse_v2`
+* `tf.transpose`
+* `tf.extract_image_patches`
+* `tf.space_to_batch_nd`
+* `tf.space_to_batch`
+* `tf.required_space_to_batch_paddings`
+* `tf.batch_to_space_nd`
+* `tf.batch_to_space`
+* `tf.space_to_depth`
+* `tf.depth_to_space`
+* `tf.gather`
+* `tf.gather_nd`
+* `tf.unique_with_counts`
+* `tf.scatter_nd`
+* `tf.dynamic_partition`
+* `tf.dynamic_stitch`
+* `tf.boolean_mask`
+* `tf.one_hot`
+* `tf.sequence_mask`
+* `tf.dequantize`
+* `tf.quantize_v2`
+* `tf.quantized_concat`
+* `tf.setdiff1d`
## Fake quantization
Operations used to help train for better quantization accuracy.
-* @{tf.fake_quant_with_min_max_args}
-* @{tf.fake_quant_with_min_max_args_gradient}
-* @{tf.fake_quant_with_min_max_vars}
-* @{tf.fake_quant_with_min_max_vars_gradient}
-* @{tf.fake_quant_with_min_max_vars_per_channel}
-* @{tf.fake_quant_with_min_max_vars_per_channel_gradient}
+* `tf.fake_quant_with_min_max_args`
+* `tf.fake_quant_with_min_max_args_gradient`
+* `tf.fake_quant_with_min_max_vars`
+* `tf.fake_quant_with_min_max_vars_gradient`
+* `tf.fake_quant_with_min_max_vars_per_channel`
+* `tf.fake_quant_with_min_max_vars_per_channel_gradient`
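[Editor's note: as context for the casting and shaping ops listed above, here is a minimal sketch of combining `tf.cast` and `tf.reshape` in a TF 1.x graph. The values and shapes are illustrative, not taken from the guide.]

```python
import tensorflow as tf

x = tf.constant([[1.8, 2.2], [3.3, 4.1]])
y = tf.cast(x, tf.int32)    # truncates toward zero: [[1, 2], [3, 4]]
z = tf.reshape(y, [4])      # flattens to [1, 2, 3, 4]

with tf.Session() as sess:
    print(sess.run(z))      # [1 2 3 4]
```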
diff --git a/tensorflow/docs_src/api_guides/python/check_ops.md b/tensorflow/docs_src/api_guides/python/check_ops.md
index 6f8a18af42..b52fdaa3ab 100644
--- a/tensorflow/docs_src/api_guides/python/check_ops.md
+++ b/tensorflow/docs_src/api_guides/python/check_ops.md
@@ -1,19 +1,19 @@
# Asserts and boolean checks
-* @{tf.assert_negative}
-* @{tf.assert_positive}
-* @{tf.assert_proper_iterable}
-* @{tf.assert_non_negative}
-* @{tf.assert_non_positive}
-* @{tf.assert_equal}
-* @{tf.assert_integer}
-* @{tf.assert_less}
-* @{tf.assert_less_equal}
-* @{tf.assert_greater}
-* @{tf.assert_greater_equal}
-* @{tf.assert_rank}
-* @{tf.assert_rank_at_least}
-* @{tf.assert_type}
-* @{tf.is_non_decreasing}
-* @{tf.is_numeric_tensor}
-* @{tf.is_strictly_increasing}
+* `tf.assert_negative`
+* `tf.assert_positive`
+* `tf.assert_proper_iterable`
+* `tf.assert_non_negative`
+* `tf.assert_non_positive`
+* `tf.assert_equal`
+* `tf.assert_integer`
+* `tf.assert_less`
+* `tf.assert_less_equal`
+* `tf.assert_greater`
+* `tf.assert_greater_equal`
+* `tf.assert_rank`
+* `tf.assert_rank_at_least`
+* `tf.assert_type`
+* `tf.is_non_decreasing`
+* `tf.is_numeric_tensor`
+* `tf.is_strictly_increasing`
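[Editor's note: these assert functions return an op that only fires if something depends on it, so the usual TF 1.x pattern is to attach it with `tf.control_dependencies`. A minimal sketch with an illustrative placeholder:]

```python
import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 3])

# The assertion runs only when an op that depends on it is executed.
assert_op = tf.assert_rank(x, 2, message="x must be a matrix")
with tf.control_dependencies([assert_op]):
    y = tf.reduce_sum(x)

with tf.Session() as sess:
    print(sess.run(y, feed_dict={x: [[1., 2., 3.]]}))  # 6.0
```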
diff --git a/tensorflow/docs_src/api_guides/python/client.md b/tensorflow/docs_src/api_guides/python/client.md
index 27fc8610bf..56367e6671 100644
--- a/tensorflow/docs_src/api_guides/python/client.md
+++ b/tensorflow/docs_src/api_guides/python/client.md
@@ -4,33 +4,33 @@
This library contains classes for launching graphs and executing operations.
@{$guide/low_level_intro$This guide} has examples of how a graph
-is launched in a @{tf.Session}.
+is launched in a `tf.Session`.
## Session management
-* @{tf.Session}
-* @{tf.InteractiveSession}
-* @{tf.get_default_session}
+* `tf.Session`
+* `tf.InteractiveSession`
+* `tf.get_default_session`
## Error classes and convenience functions
-* @{tf.OpError}
-* @{tf.errors.CancelledError}
-* @{tf.errors.UnknownError}
-* @{tf.errors.InvalidArgumentError}
-* @{tf.errors.DeadlineExceededError}
-* @{tf.errors.NotFoundError}
-* @{tf.errors.AlreadyExistsError}
-* @{tf.errors.PermissionDeniedError}
-* @{tf.errors.UnauthenticatedError}
-* @{tf.errors.ResourceExhaustedError}
-* @{tf.errors.FailedPreconditionError}
-* @{tf.errors.AbortedError}
-* @{tf.errors.OutOfRangeError}
-* @{tf.errors.UnimplementedError}
-* @{tf.errors.InternalError}
-* @{tf.errors.UnavailableError}
-* @{tf.errors.DataLossError}
-* @{tf.errors.exception_type_from_error_code}
-* @{tf.errors.error_code_from_exception_type}
-* @{tf.errors.raise_exception_on_not_ok_status}
+* `tf.OpError`
+* `tf.errors.CancelledError`
+* `tf.errors.UnknownError`
+* `tf.errors.InvalidArgumentError`
+* `tf.errors.DeadlineExceededError`
+* `tf.errors.NotFoundError`
+* `tf.errors.AlreadyExistsError`
+* `tf.errors.PermissionDeniedError`
+* `tf.errors.UnauthenticatedError`
+* `tf.errors.ResourceExhaustedError`
+* `tf.errors.FailedPreconditionError`
+* `tf.errors.AbortedError`
+* `tf.errors.OutOfRangeError`
+* `tf.errors.UnimplementedError`
+* `tf.errors.InternalError`
+* `tf.errors.UnavailableError`
+* `tf.errors.DataLossError`
+* `tf.errors.exception_type_from_error_code`
+* `tf.errors.error_code_from_exception_type`
+* `tf.errors.raise_exception_on_not_ok_status`
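[Editor's note: the error classes above surface as exceptions from `Session.run`. A small sketch of catching one, using an illustrative placeholder graph:]

```python
import tensorflow as tf

a = tf.placeholder(tf.float32)
b = a * 2.0

with tf.Session() as sess:
    print(sess.run(b, feed_dict={a: 3.0}))       # 6.0
    try:
        sess.run(b)                              # nothing fed for `a`
    except tf.errors.InvalidArgumentError as err:
        print("run failed:", err.message)
```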
diff --git a/tensorflow/docs_src/api_guides/python/constant_op.md b/tensorflow/docs_src/api_guides/python/constant_op.md
index db3410ce22..498ec3db5d 100644
--- a/tensorflow/docs_src/api_guides/python/constant_op.md
+++ b/tensorflow/docs_src/api_guides/python/constant_op.md
@@ -1,7 +1,7 @@
# Constants, Sequences, and Random Values
Note: Functions taking `Tensor` arguments can also take anything accepted by
-@{tf.convert_to_tensor}.
+`tf.convert_to_tensor`.
[TOC]
@@ -9,17 +9,17 @@ Note: Functions taking `Tensor` arguments can also take anything accepted by
TensorFlow provides several operations that you can use to generate constants.
-* @{tf.zeros}
-* @{tf.zeros_like}
-* @{tf.ones}
-* @{tf.ones_like}
-* @{tf.fill}
-* @{tf.constant}
+* `tf.zeros`
+* `tf.zeros_like`
+* `tf.ones`
+* `tf.ones_like`
+* `tf.fill`
+* `tf.constant`
## Sequences
-* @{tf.linspace}
-* @{tf.range}
+* `tf.linspace`
+* `tf.range`
## Random Tensors
@@ -29,11 +29,11 @@ time they are evaluated.
The `seed` keyword argument in these functions acts in conjunction with
the graph-level random seed. Changing either the graph-level seed using
-@{tf.set_random_seed} or the
+`tf.set_random_seed` or the
op-level seed will change the underlying seed of these operations. Setting
neither graph-level nor op-level seed results in a random seed for all
operations.
-See @{tf.set_random_seed}
+See `tf.set_random_seed`
for details on the interaction between operation-level and graph-level random
seeds.
@@ -77,11 +77,11 @@ sess.run(init)
print(sess.run(var))
```
-* @{tf.random_normal}
-* @{tf.truncated_normal}
-* @{tf.random_uniform}
-* @{tf.random_shuffle}
-* @{tf.random_crop}
-* @{tf.multinomial}
-* @{tf.random_gamma}
-* @{tf.set_random_seed}
+* `tf.random_normal`
+* `tf.truncated_normal`
+* `tf.random_uniform`
+* `tf.random_shuffle`
+* `tf.random_crop`
+* `tf.multinomial`
+* `tf.random_gamma`
+* `tf.set_random_seed`
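[Editor's note: a minimal sketch of the graph-level vs. op-level seed interaction described above, under TF 1.x semantics:]

```python
import tensorflow as tf

tf.set_random_seed(1234)              # graph-level seed

a = tf.random_uniform([1])            # op seed derived from the graph-level seed
b = tf.random_uniform([1], seed=42)   # graph-level + op-level seed set

with tf.Session() as sess:
    print(sess.run([a, b]))           # reproducible across fresh runs of the script
```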
diff --git a/tensorflow/docs_src/api_guides/python/contrib.crf.md b/tensorflow/docs_src/api_guides/python/contrib.crf.md
index 428383fd41..a544f136b3 100644
--- a/tensorflow/docs_src/api_guides/python/contrib.crf.md
+++ b/tensorflow/docs_src/api_guides/python/contrib.crf.md
@@ -2,10 +2,10 @@
Linear-chain CRF layer.
-* @{tf.contrib.crf.crf_sequence_score}
-* @{tf.contrib.crf.crf_log_norm}
-* @{tf.contrib.crf.crf_log_likelihood}
-* @{tf.contrib.crf.crf_unary_score}
-* @{tf.contrib.crf.crf_binary_score}
-* @{tf.contrib.crf.CrfForwardRnnCell}
-* @{tf.contrib.crf.viterbi_decode}
+* `tf.contrib.crf.crf_sequence_score`
+* `tf.contrib.crf.crf_log_norm`
+* `tf.contrib.crf.crf_log_likelihood`
+* `tf.contrib.crf.crf_unary_score`
+* `tf.contrib.crf.crf_binary_score`
+* `tf.contrib.crf.CrfForwardRnnCell`
+* `tf.contrib.crf.viterbi_decode`
diff --git a/tensorflow/docs_src/api_guides/python/contrib.ffmpeg.md b/tensorflow/docs_src/api_guides/python/contrib.ffmpeg.md
index 27948689c5..7df7547131 100644
--- a/tensorflow/docs_src/api_guides/python/contrib.ffmpeg.md
+++ b/tensorflow/docs_src/api_guides/python/contrib.ffmpeg.md
@@ -19,5 +19,5 @@ uncompressed_binary = ffmpeg.encode_audio(
waveform, file_format='wav', samples_per_second=44100)
```
-* @{tf.contrib.ffmpeg.decode_audio}
-* @{tf.contrib.ffmpeg.encode_audio}
+* `tf.contrib.ffmpeg.decode_audio`
+* `tf.contrib.ffmpeg.encode_audio`
diff --git a/tensorflow/docs_src/api_guides/python/contrib.framework.md b/tensorflow/docs_src/api_guides/python/contrib.framework.md
index 6b4ce3a14d..00fb8b0ac3 100644
--- a/tensorflow/docs_src/api_guides/python/contrib.framework.md
+++ b/tensorflow/docs_src/api_guides/python/contrib.framework.md
@@ -3,62 +3,62 @@
Framework utilities.
-* @{tf.contrib.framework.assert_same_float_dtype}
-* @{tf.contrib.framework.assert_scalar}
-* @{tf.contrib.framework.assert_scalar_int}
-* @{tf.convert_to_tensor_or_sparse_tensor}
-* @{tf.contrib.framework.get_graph_from_inputs}
-* @{tf.is_numeric_tensor}
-* @{tf.is_non_decreasing}
-* @{tf.is_strictly_increasing}
-* @{tf.contrib.framework.is_tensor}
-* @{tf.contrib.framework.reduce_sum_n}
-* @{tf.contrib.framework.remove_squeezable_dimensions}
-* @{tf.contrib.framework.with_shape}
-* @{tf.contrib.framework.with_same_shape}
+* `tf.contrib.framework.assert_same_float_dtype`
+* `tf.contrib.framework.assert_scalar`
+* `tf.contrib.framework.assert_scalar_int`
+* `tf.convert_to_tensor_or_sparse_tensor`
+* `tf.contrib.framework.get_graph_from_inputs`
+* `tf.is_numeric_tensor`
+* `tf.is_non_decreasing`
+* `tf.is_strictly_increasing`
+* `tf.contrib.framework.is_tensor`
+* `tf.contrib.framework.reduce_sum_n`
+* `tf.contrib.framework.remove_squeezable_dimensions`
+* `tf.contrib.framework.with_shape`
+* `tf.contrib.framework.with_same_shape`
## Deprecation
-* @{tf.contrib.framework.deprecated}
-* @{tf.contrib.framework.deprecated_args}
-* @{tf.contrib.framework.deprecated_arg_values}
+* `tf.contrib.framework.deprecated`
+* `tf.contrib.framework.deprecated_args`
+* `tf.contrib.framework.deprecated_arg_values`
## Arg_Scope
-* @{tf.contrib.framework.arg_scope}
-* @{tf.contrib.framework.add_arg_scope}
-* @{tf.contrib.framework.has_arg_scope}
-* @{tf.contrib.framework.arg_scoped_arguments}
+* `tf.contrib.framework.arg_scope`
+* `tf.contrib.framework.add_arg_scope`
+* `tf.contrib.framework.has_arg_scope`
+* `tf.contrib.framework.arg_scoped_arguments`
## Variables
-* @{tf.contrib.framework.add_model_variable}
-* @{tf.train.assert_global_step}
-* @{tf.contrib.framework.assert_or_get_global_step}
-* @{tf.contrib.framework.assign_from_checkpoint}
-* @{tf.contrib.framework.assign_from_checkpoint_fn}
-* @{tf.contrib.framework.assign_from_values}
-* @{tf.contrib.framework.assign_from_values_fn}
-* @{tf.contrib.framework.create_global_step}
-* @{tf.contrib.framework.filter_variables}
-* @{tf.train.get_global_step}
-* @{tf.contrib.framework.get_or_create_global_step}
-* @{tf.contrib.framework.get_local_variables}
-* @{tf.contrib.framework.get_model_variables}
-* @{tf.contrib.framework.get_unique_variable}
-* @{tf.contrib.framework.get_variables_by_name}
-* @{tf.contrib.framework.get_variables_by_suffix}
-* @{tf.contrib.framework.get_variables_to_restore}
-* @{tf.contrib.framework.get_variables}
-* @{tf.contrib.framework.local_variable}
-* @{tf.contrib.framework.model_variable}
-* @{tf.contrib.framework.variable}
-* @{tf.contrib.framework.VariableDeviceChooser}
-* @{tf.contrib.framework.zero_initializer}
+* `tf.contrib.framework.add_model_variable`
+* `tf.train.assert_global_step`
+* `tf.contrib.framework.assert_or_get_global_step`
+* `tf.contrib.framework.assign_from_checkpoint`
+* `tf.contrib.framework.assign_from_checkpoint_fn`
+* `tf.contrib.framework.assign_from_values`
+* `tf.contrib.framework.assign_from_values_fn`
+* `tf.contrib.framework.create_global_step`
+* `tf.contrib.framework.filter_variables`
+* `tf.train.get_global_step`
+* `tf.contrib.framework.get_or_create_global_step`
+* `tf.contrib.framework.get_local_variables`
+* `tf.contrib.framework.get_model_variables`
+* `tf.contrib.framework.get_unique_variable`
+* `tf.contrib.framework.get_variables_by_name`
+* `tf.contrib.framework.get_variables_by_suffix`
+* `tf.contrib.framework.get_variables_to_restore`
+* `tf.contrib.framework.get_variables`
+* `tf.contrib.framework.local_variable`
+* `tf.contrib.framework.model_variable`
+* `tf.contrib.framework.variable`
+* `tf.contrib.framework.VariableDeviceChooser`
+* `tf.contrib.framework.zero_initializer`
## Checkpoint utilities
-* @{tf.contrib.framework.load_checkpoint}
-* @{tf.contrib.framework.list_variables}
-* @{tf.contrib.framework.load_variable}
-* @{tf.contrib.framework.init_from_checkpoint}
+* `tf.contrib.framework.load_checkpoint`
+* `tf.contrib.framework.list_variables`
+* `tf.contrib.framework.load_variable`
+* `tf.contrib.framework.init_from_checkpoint`
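[Editor's note: a minimal sketch of the `arg_scope` mechanism listed above, which supplies shared default arguments to the listed ops. The layer sizes and regularizer strength are illustrative:]

```python
import tensorflow as tf
from tensorflow.contrib import framework, layers

x = tf.placeholder(tf.float32, [None, 10])

# Keyword arguments given to the scope become defaults for the listed ops.
with framework.arg_scope([layers.fully_connected],
                         activation_fn=tf.nn.relu,
                         weights_regularizer=layers.l2_regularizer(0.01)):
    h1 = layers.fully_connected(x, 64)   # picks up both defaults
    h2 = layers.fully_connected(h1, 32)
```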
diff --git a/tensorflow/docs_src/api_guides/python/contrib.graph_editor.md b/tensorflow/docs_src/api_guides/python/contrib.graph_editor.md
index 20fe88a799..8ce49b952b 100644
--- a/tensorflow/docs_src/api_guides/python/contrib.graph_editor.md
+++ b/tensorflow/docs_src/api_guides/python/contrib.graph_editor.md
@@ -100,78 +100,78 @@ which to operate must always be given explicitly. This is the reason why
## Module: util
-* @{tf.contrib.graph_editor.make_list_of_op}
-* @{tf.contrib.graph_editor.get_tensors}
-* @{tf.contrib.graph_editor.make_list_of_t}
-* @{tf.contrib.graph_editor.get_generating_ops}
-* @{tf.contrib.graph_editor.get_consuming_ops}
-* @{tf.contrib.graph_editor.ControlOutputs}
-* @{tf.contrib.graph_editor.placeholder_name}
-* @{tf.contrib.graph_editor.make_placeholder_from_tensor}
-* @{tf.contrib.graph_editor.make_placeholder_from_dtype_and_shape}
+* `tf.contrib.graph_editor.make_list_of_op`
+* `tf.contrib.graph_editor.get_tensors`
+* `tf.contrib.graph_editor.make_list_of_t`
+* `tf.contrib.graph_editor.get_generating_ops`
+* `tf.contrib.graph_editor.get_consuming_ops`
+* `tf.contrib.graph_editor.ControlOutputs`
+* `tf.contrib.graph_editor.placeholder_name`
+* `tf.contrib.graph_editor.make_placeholder_from_tensor`
+* `tf.contrib.graph_editor.make_placeholder_from_dtype_and_shape`
## Module: select
-* @{tf.contrib.graph_editor.filter_ts}
-* @{tf.contrib.graph_editor.filter_ts_from_regex}
-* @{tf.contrib.graph_editor.filter_ops}
-* @{tf.contrib.graph_editor.filter_ops_from_regex}
-* @{tf.contrib.graph_editor.get_name_scope_ops}
-* @{tf.contrib.graph_editor.check_cios}
-* @{tf.contrib.graph_editor.get_ops_ios}
-* @{tf.contrib.graph_editor.compute_boundary_ts}
-* @{tf.contrib.graph_editor.get_within_boundary_ops}
-* @{tf.contrib.graph_editor.get_forward_walk_ops}
-* @{tf.contrib.graph_editor.get_backward_walk_ops}
-* @{tf.contrib.graph_editor.get_walks_intersection_ops}
-* @{tf.contrib.graph_editor.get_walks_union_ops}
-* @{tf.contrib.graph_editor.select_ops}
-* @{tf.contrib.graph_editor.select_ts}
-* @{tf.contrib.graph_editor.select_ops_and_ts}
+* `tf.contrib.graph_editor.filter_ts`
+* `tf.contrib.graph_editor.filter_ts_from_regex`
+* `tf.contrib.graph_editor.filter_ops`
+* `tf.contrib.graph_editor.filter_ops_from_regex`
+* `tf.contrib.graph_editor.get_name_scope_ops`
+* `tf.contrib.graph_editor.check_cios`
+* `tf.contrib.graph_editor.get_ops_ios`
+* `tf.contrib.graph_editor.compute_boundary_ts`
+* `tf.contrib.graph_editor.get_within_boundary_ops`
+* `tf.contrib.graph_editor.get_forward_walk_ops`
+* `tf.contrib.graph_editor.get_backward_walk_ops`
+* `tf.contrib.graph_editor.get_walks_intersection_ops`
+* `tf.contrib.graph_editor.get_walks_union_ops`
+* `tf.contrib.graph_editor.select_ops`
+* `tf.contrib.graph_editor.select_ts`
+* `tf.contrib.graph_editor.select_ops_and_ts`
## Module: subgraph
-* @{tf.contrib.graph_editor.SubGraphView}
-* @{tf.contrib.graph_editor.make_view}
-* @{tf.contrib.graph_editor.make_view_from_scope}
+* `tf.contrib.graph_editor.SubGraphView`
+* `tf.contrib.graph_editor.make_view`
+* `tf.contrib.graph_editor.make_view_from_scope`
## Module: reroute
-* @{tf.contrib.graph_editor.swap_ts}
-* @{tf.contrib.graph_editor.reroute_ts}
-* @{tf.contrib.graph_editor.swap_inputs}
-* @{tf.contrib.graph_editor.reroute_inputs}
-* @{tf.contrib.graph_editor.swap_outputs}
-* @{tf.contrib.graph_editor.reroute_outputs}
-* @{tf.contrib.graph_editor.swap_ios}
-* @{tf.contrib.graph_editor.reroute_ios}
-* @{tf.contrib.graph_editor.remove_control_inputs}
-* @{tf.contrib.graph_editor.add_control_inputs}
+* `tf.contrib.graph_editor.swap_ts`
+* `tf.contrib.graph_editor.reroute_ts`
+* `tf.contrib.graph_editor.swap_inputs`
+* `tf.contrib.graph_editor.reroute_inputs`
+* `tf.contrib.graph_editor.swap_outputs`
+* `tf.contrib.graph_editor.reroute_outputs`
+* `tf.contrib.graph_editor.swap_ios`
+* `tf.contrib.graph_editor.reroute_ios`
+* `tf.contrib.graph_editor.remove_control_inputs`
+* `tf.contrib.graph_editor.add_control_inputs`
## Module: edit
-* @{tf.contrib.graph_editor.detach_control_inputs}
-* @{tf.contrib.graph_editor.detach_control_outputs}
-* @{tf.contrib.graph_editor.detach_inputs}
-* @{tf.contrib.graph_editor.detach_outputs}
-* @{tf.contrib.graph_editor.detach}
-* @{tf.contrib.graph_editor.connect}
-* @{tf.contrib.graph_editor.bypass}
+* `tf.contrib.graph_editor.detach_control_inputs`
+* `tf.contrib.graph_editor.detach_control_outputs`
+* `tf.contrib.graph_editor.detach_inputs`
+* `tf.contrib.graph_editor.detach_outputs`
+* `tf.contrib.graph_editor.detach`
+* `tf.contrib.graph_editor.connect`
+* `tf.contrib.graph_editor.bypass`
## Module: transform
-* @{tf.contrib.graph_editor.replace_t_with_placeholder_handler}
-* @{tf.contrib.graph_editor.keep_t_if_possible_handler}
-* @{tf.contrib.graph_editor.assign_renamed_collections_handler}
-* @{tf.contrib.graph_editor.transform_op_if_inside_handler}
-* @{tf.contrib.graph_editor.copy_op_handler}
-* @{tf.contrib.graph_editor.Transformer}
-* @{tf.contrib.graph_editor.copy}
-* @{tf.contrib.graph_editor.copy_with_input_replacements}
-* @{tf.contrib.graph_editor.graph_replace}
+* `tf.contrib.graph_editor.replace_t_with_placeholder_handler`
+* `tf.contrib.graph_editor.keep_t_if_possible_handler`
+* `tf.contrib.graph_editor.assign_renamed_collections_handler`
+* `tf.contrib.graph_editor.transform_op_if_inside_handler`
+* `tf.contrib.graph_editor.copy_op_handler`
+* `tf.contrib.graph_editor.Transformer`
+* `tf.contrib.graph_editor.copy`
+* `tf.contrib.graph_editor.copy_with_input_replacements`
+* `tf.contrib.graph_editor.graph_replace`
## Useful aliases
-* @{tf.contrib.graph_editor.ph}
-* @{tf.contrib.graph_editor.sgv}
-* @{tf.contrib.graph_editor.sgv_scope}
+* `tf.contrib.graph_editor.ph`
+* `tf.contrib.graph_editor.sgv`
+* `tf.contrib.graph_editor.sgv_scope`
diff --git a/tensorflow/docs_src/api_guides/python/contrib.integrate.md b/tensorflow/docs_src/api_guides/python/contrib.integrate.md
index e95b5a2e68..a70d202ab5 100644
--- a/tensorflow/docs_src/api_guides/python/contrib.integrate.md
+++ b/tensorflow/docs_src/api_guides/python/contrib.integrate.md
@@ -38,4 +38,4 @@ plt.plot(x, z)
## Ops
-* @{tf.contrib.integrate.odeint}
+* `tf.contrib.integrate.odeint`
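[Editor's note: a minimal sketch of `odeint` for readers skimming this diff; the ODE and grid are illustrative. `odeint(func, y0, t)` expects `func(y, t)` to return dy/dt:]

```python
import numpy as np
import tensorflow as tf

# Solve dy/dt = -y with y(0) = 1, whose exact solution is exp(-t).
t = np.linspace(0.0, 5.0, 100)
y0 = tf.constant(1.0, dtype=tf.float64)
y = tf.contrib.integrate.odeint(lambda y, t: -y, y0, t)

with tf.Session() as sess:
    solution = sess.run(y)   # approximately np.exp(-t)
```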
diff --git a/tensorflow/docs_src/api_guides/python/contrib.layers.md b/tensorflow/docs_src/api_guides/python/contrib.layers.md
index b85db4b96f..4c176a129c 100644
--- a/tensorflow/docs_src/api_guides/python/contrib.layers.md
+++ b/tensorflow/docs_src/api_guides/python/contrib.layers.md
@@ -9,29 +9,29 @@ This package provides several ops that take care of creating variables that are
used internally in a consistent way and provide the building blocks for many
common machine learning algorithms.
-* @{tf.contrib.layers.avg_pool2d}
-* @{tf.contrib.layers.batch_norm}
-* @{tf.contrib.layers.convolution2d}
-* @{tf.contrib.layers.conv2d_in_plane}
-* @{tf.contrib.layers.convolution2d_in_plane}
-* @{tf.nn.conv2d_transpose}
-* @{tf.contrib.layers.convolution2d_transpose}
-* @{tf.nn.dropout}
-* @{tf.contrib.layers.flatten}
-* @{tf.contrib.layers.fully_connected}
-* @{tf.contrib.layers.layer_norm}
-* @{tf.contrib.layers.max_pool2d}
-* @{tf.contrib.layers.one_hot_encoding}
-* @{tf.nn.relu}
-* @{tf.nn.relu6}
-* @{tf.contrib.layers.repeat}
-* @{tf.contrib.layers.safe_embedding_lookup_sparse}
-* @{tf.nn.separable_conv2d}
-* @{tf.contrib.layers.separable_convolution2d}
-* @{tf.nn.softmax}
-* @{tf.stack}
-* @{tf.contrib.layers.unit_norm}
-* @{tf.contrib.layers.embed_sequence}
+* `tf.contrib.layers.avg_pool2d`
+* `tf.contrib.layers.batch_norm`
+* `tf.contrib.layers.convolution2d`
+* `tf.contrib.layers.conv2d_in_plane`
+* `tf.contrib.layers.convolution2d_in_plane`
+* `tf.nn.conv2d_transpose`
+* `tf.contrib.layers.convolution2d_transpose`
+* `tf.nn.dropout`
+* `tf.contrib.layers.flatten`
+* `tf.contrib.layers.fully_connected`
+* `tf.contrib.layers.layer_norm`
+* `tf.contrib.layers.max_pool2d`
+* `tf.contrib.layers.one_hot_encoding`
+* `tf.nn.relu`
+* `tf.nn.relu6`
+* `tf.contrib.layers.repeat`
+* `tf.contrib.layers.safe_embedding_lookup_sparse`
+* `tf.nn.separable_conv2d`
+* `tf.contrib.layers.separable_convolution2d`
+* `tf.nn.softmax`
+* `tf.stack`
+* `tf.contrib.layers.unit_norm`
+* `tf.contrib.layers.embed_sequence`
Aliases for fully_connected which set a default activation function are
available: `relu`, `relu6` and `linear`.
@@ -45,65 +45,65 @@ Regularization can help prevent overfitting. These have the signature
`fn(weights)`. The loss is typically added to
`tf.GraphKeys.REGULARIZATION_LOSSES`.
-* @{tf.contrib.layers.apply_regularization}
-* @{tf.contrib.layers.l1_regularizer}
-* @{tf.contrib.layers.l2_regularizer}
-* @{tf.contrib.layers.sum_regularizer}
+* `tf.contrib.layers.apply_regularization`
+* `tf.contrib.layers.l1_regularizer`
+* `tf.contrib.layers.l2_regularizer`
+* `tf.contrib.layers.sum_regularizer`
## Initializers
Initializers are used to initialize variables with sensible values given their
size, data type, and purpose.
-* @{tf.contrib.layers.xavier_initializer}
-* @{tf.contrib.layers.xavier_initializer_conv2d}
-* @{tf.contrib.layers.variance_scaling_initializer}
+* `tf.contrib.layers.xavier_initializer`
+* `tf.contrib.layers.xavier_initializer_conv2d`
+* `tf.contrib.layers.variance_scaling_initializer`
## Optimization
Optimize weights given a loss.
-* @{tf.contrib.layers.optimize_loss}
+* `tf.contrib.layers.optimize_loss`
## Summaries
Helper functions to summarize specific variables or ops.
-* @{tf.contrib.layers.summarize_activation}
-* @{tf.contrib.layers.summarize_tensor}
-* @{tf.contrib.layers.summarize_tensors}
-* @{tf.contrib.layers.summarize_collection}
+* `tf.contrib.layers.summarize_activation`
+* `tf.contrib.layers.summarize_tensor`
+* `tf.contrib.layers.summarize_tensors`
+* `tf.contrib.layers.summarize_collection`
The layers module defines convenience functions `summarize_variables`,
`summarize_weights` and `summarize_biases`, which set the `collection` argument
of `summarize_collection` to `VARIABLES`, `WEIGHTS` and `BIASES`, respectively.
-* @{tf.contrib.layers.summarize_activations}
+* `tf.contrib.layers.summarize_activations`
## Feature columns
Feature columns provide a mechanism to map data to a model.
-* @{tf.contrib.layers.bucketized_column}
-* @{tf.contrib.layers.check_feature_columns}
-* @{tf.contrib.layers.create_feature_spec_for_parsing}
-* @{tf.contrib.layers.crossed_column}
-* @{tf.contrib.layers.embedding_column}
-* @{tf.contrib.layers.scattered_embedding_column}
-* @{tf.contrib.layers.input_from_feature_columns}
-* @{tf.contrib.layers.joint_weighted_sum_from_feature_columns}
-* @{tf.contrib.layers.make_place_holder_tensors_for_base_features}
-* @{tf.contrib.layers.multi_class_target}
-* @{tf.contrib.layers.one_hot_column}
-* @{tf.contrib.layers.parse_feature_columns_from_examples}
-* @{tf.contrib.layers.parse_feature_columns_from_sequence_examples}
-* @{tf.contrib.layers.real_valued_column}
-* @{tf.contrib.layers.shared_embedding_columns}
-* @{tf.contrib.layers.sparse_column_with_hash_bucket}
-* @{tf.contrib.layers.sparse_column_with_integerized_feature}
-* @{tf.contrib.layers.sparse_column_with_keys}
-* @{tf.contrib.layers.sparse_column_with_vocabulary_file}
-* @{tf.contrib.layers.weighted_sparse_column}
-* @{tf.contrib.layers.weighted_sum_from_feature_columns}
-* @{tf.contrib.layers.infer_real_valued_columns}
-* @{tf.contrib.layers.sequence_input_from_feature_columns}
+* `tf.contrib.layers.bucketized_column`
+* `tf.contrib.layers.check_feature_columns`
+* `tf.contrib.layers.create_feature_spec_for_parsing`
+* `tf.contrib.layers.crossed_column`
+* `tf.contrib.layers.embedding_column`
+* `tf.contrib.layers.scattered_embedding_column`
+* `tf.contrib.layers.input_from_feature_columns`
+* `tf.contrib.layers.joint_weighted_sum_from_feature_columns`
+* `tf.contrib.layers.make_place_holder_tensors_for_base_features`
+* `tf.contrib.layers.multi_class_target`
+* `tf.contrib.layers.one_hot_column`
+* `tf.contrib.layers.parse_feature_columns_from_examples`
+* `tf.contrib.layers.parse_feature_columns_from_sequence_examples`
+* `tf.contrib.layers.real_valued_column`
+* `tf.contrib.layers.shared_embedding_columns`
+* `tf.contrib.layers.sparse_column_with_hash_bucket`
+* `tf.contrib.layers.sparse_column_with_integerized_feature`
+* `tf.contrib.layers.sparse_column_with_keys`
+* `tf.contrib.layers.sparse_column_with_vocabulary_file`
+* `tf.contrib.layers.weighted_sparse_column`
+* `tf.contrib.layers.weighted_sum_from_feature_columns`
+* `tf.contrib.layers.infer_real_valued_columns`
+* `tf.contrib.layers.sequence_input_from_feature_columns`
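[Editor's note: a minimal sketch tying together the layer and regularizer ops listed above; layer widths are illustrative. The regularizer adds its loss to `tf.GraphKeys.REGULARIZATION_LOSSES`, as the Regularizers section notes:]

```python
import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 784])

h = tf.contrib.layers.fully_connected(
    x, 256,
    activation_fn=tf.nn.relu,
    weights_regularizer=tf.contrib.layers.l2_regularizer(1e-4))
logits = tf.contrib.layers.fully_connected(h, 10, activation_fn=None)

# The regularizer above contributed a loss term to this collection.
reg_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
```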
diff --git a/tensorflow/docs_src/api_guides/python/contrib.learn.md b/tensorflow/docs_src/api_guides/python/contrib.learn.md
index 03838dc5ae..635849ead5 100644
--- a/tensorflow/docs_src/api_guides/python/contrib.learn.md
+++ b/tensorflow/docs_src/api_guides/python/contrib.learn.md
@@ -7,57 +7,57 @@ High level API for learning with TensorFlow.
Train and evaluate TensorFlow models.
-* @{tf.contrib.learn.BaseEstimator}
-* @{tf.contrib.learn.Estimator}
-* @{tf.contrib.learn.Trainable}
-* @{tf.contrib.learn.Evaluable}
-* @{tf.contrib.learn.KMeansClustering}
-* @{tf.contrib.learn.ModeKeys}
-* @{tf.contrib.learn.ModelFnOps}
-* @{tf.contrib.learn.MetricSpec}
-* @{tf.contrib.learn.PredictionKey}
-* @{tf.contrib.learn.DNNClassifier}
-* @{tf.contrib.learn.DNNRegressor}
-* @{tf.contrib.learn.DNNLinearCombinedRegressor}
-* @{tf.contrib.learn.DNNLinearCombinedClassifier}
-* @{tf.contrib.learn.LinearClassifier}
-* @{tf.contrib.learn.LinearRegressor}
-* @{tf.contrib.learn.LogisticRegressor}
+* `tf.contrib.learn.BaseEstimator`
+* `tf.contrib.learn.Estimator`
+* `tf.contrib.learn.Trainable`
+* `tf.contrib.learn.Evaluable`
+* `tf.contrib.learn.KMeansClustering`
+* `tf.contrib.learn.ModeKeys`
+* `tf.contrib.learn.ModelFnOps`
+* `tf.contrib.learn.MetricSpec`
+* `tf.contrib.learn.PredictionKey`
+* `tf.contrib.learn.DNNClassifier`
+* `tf.contrib.learn.DNNRegressor`
+* `tf.contrib.learn.DNNLinearCombinedRegressor`
+* `tf.contrib.learn.DNNLinearCombinedClassifier`
+* `tf.contrib.learn.LinearClassifier`
+* `tf.contrib.learn.LinearRegressor`
+* `tf.contrib.learn.LogisticRegressor`
## Distributed training utilities
-* @{tf.contrib.learn.Experiment}
-* @{tf.contrib.learn.ExportStrategy}
-* @{tf.contrib.learn.TaskType}
+* `tf.contrib.learn.Experiment`
+* `tf.contrib.learn.ExportStrategy`
+* `tf.contrib.learn.TaskType`
## Graph actions
Perform various training, evaluation, and inference actions on a graph.
-* @{tf.train.NanLossDuringTrainingError}
-* @{tf.contrib.learn.RunConfig}
-* @{tf.contrib.learn.evaluate}
-* @{tf.contrib.learn.infer}
-* @{tf.contrib.learn.run_feeds}
-* @{tf.contrib.learn.run_n}
-* @{tf.contrib.learn.train}
+* `tf.train.NanLossDuringTrainingError`
+* `tf.contrib.learn.RunConfig`
+* `tf.contrib.learn.evaluate`
+* `tf.contrib.learn.infer`
+* `tf.contrib.learn.run_feeds`
+* `tf.contrib.learn.run_n`
+* `tf.contrib.learn.train`
## Input processing
Queue and read batched input data.
-* @{tf.contrib.learn.extract_dask_data}
-* @{tf.contrib.learn.extract_dask_labels}
-* @{tf.contrib.learn.extract_pandas_data}
-* @{tf.contrib.learn.extract_pandas_labels}
-* @{tf.contrib.learn.extract_pandas_matrix}
-* @{tf.contrib.learn.infer_real_valued_columns_from_input}
-* @{tf.contrib.learn.infer_real_valued_columns_from_input_fn}
-* @{tf.contrib.learn.read_batch_examples}
-* @{tf.contrib.learn.read_batch_features}
-* @{tf.contrib.learn.read_batch_record_features}
+* `tf.contrib.learn.extract_dask_data`
+* `tf.contrib.learn.extract_dask_labels`
+* `tf.contrib.learn.extract_pandas_data`
+* `tf.contrib.learn.extract_pandas_labels`
+* `tf.contrib.learn.extract_pandas_matrix`
+* `tf.contrib.learn.infer_real_valued_columns_from_input`
+* `tf.contrib.learn.infer_real_valued_columns_from_input_fn`
+* `tf.contrib.learn.read_batch_examples`
+* `tf.contrib.learn.read_batch_features`
+* `tf.contrib.learn.read_batch_record_features`
Export utilities
-* @{tf.contrib.learn.build_parsing_serving_input_fn}
-* @{tf.contrib.learn.ProblemType}
+* `tf.contrib.learn.build_parsing_serving_input_fn`
+* `tf.contrib.learn.ProblemType`
diff --git a/tensorflow/docs_src/api_guides/python/contrib.linalg.md b/tensorflow/docs_src/api_guides/python/contrib.linalg.md
index c0cb2b195c..3055449dc2 100644
--- a/tensorflow/docs_src/api_guides/python/contrib.linalg.md
+++ b/tensorflow/docs_src/api_guides/python/contrib.linalg.md
@@ -14,17 +14,17 @@ Subclasses of `LinearOperator` provide access to common methods on a
### Base class
-* @{tf.contrib.linalg.LinearOperator}
+* `tf.contrib.linalg.LinearOperator`
### Individual operators
-* @{tf.contrib.linalg.LinearOperatorDiag}
-* @{tf.contrib.linalg.LinearOperatorIdentity}
-* @{tf.contrib.linalg.LinearOperatorScaledIdentity}
-* @{tf.contrib.linalg.LinearOperatorFullMatrix}
-* @{tf.contrib.linalg.LinearOperatorLowerTriangular}
-* @{tf.contrib.linalg.LinearOperatorLowRankUpdate}
+* `tf.contrib.linalg.LinearOperatorDiag`
+* `tf.contrib.linalg.LinearOperatorIdentity`
+* `tf.contrib.linalg.LinearOperatorScaledIdentity`
+* `tf.contrib.linalg.LinearOperatorFullMatrix`
+* `tf.contrib.linalg.LinearOperatorLowerTriangular`
+* `tf.contrib.linalg.LinearOperatorLowRankUpdate`
### Transformations and Combinations of operators
-* @{tf.contrib.linalg.LinearOperatorComposition}
+* `tf.contrib.linalg.LinearOperatorComposition`
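[Editor's note: a minimal sketch of a `LinearOperator` subclass in use, assuming a 1.x release where the method is named `matmul` (earlier releases called it `apply`). The diagonal values are illustrative:]

```python
import tensorflow as tf

# Diagonal operator: O(n) storage instead of materializing an n x n matrix.
operator = tf.contrib.linalg.LinearOperatorDiag([1., 2., 3.])

x = tf.ones([3, 2])
y = operator.matmul(x)                   # same result as multiplying by tf.diag([1., 2., 3.])
logdet = operator.log_abs_determinant()  # log(1 * 2 * 3)
```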
diff --git a/tensorflow/docs_src/api_guides/python/contrib.losses.md b/tensorflow/docs_src/api_guides/python/contrib.losses.md
index 8b7442216c..8787454af6 100644
--- a/tensorflow/docs_src/api_guides/python/contrib.losses.md
+++ b/tensorflow/docs_src/api_guides/python/contrib.losses.md
@@ -2,7 +2,7 @@
## Deprecated
-This module is deprecated. Instructions for updating: Use @{tf.losses} instead.
+This module is deprecated. Instructions for updating: Use `tf.losses` instead.
## Loss operations for use in neural networks.
@@ -107,19 +107,19 @@ weighted average over the individual prediction errors:
loss = tf.contrib.losses.mean_squared_error(predictions, depths, weight)
```
-* @{tf.contrib.losses.absolute_difference}
-* @{tf.contrib.losses.add_loss}
-* @{tf.contrib.losses.hinge_loss}
-* @{tf.contrib.losses.compute_weighted_loss}
-* @{tf.contrib.losses.cosine_distance}
-* @{tf.contrib.losses.get_losses}
-* @{tf.contrib.losses.get_regularization_losses}
-* @{tf.contrib.losses.get_total_loss}
-* @{tf.contrib.losses.log_loss}
-* @{tf.contrib.losses.mean_pairwise_squared_error}
-* @{tf.contrib.losses.mean_squared_error}
-* @{tf.contrib.losses.sigmoid_cross_entropy}
-* @{tf.contrib.losses.softmax_cross_entropy}
-* @{tf.contrib.losses.sparse_softmax_cross_entropy}
+* `tf.contrib.losses.absolute_difference`
+* `tf.contrib.losses.add_loss`
+* `tf.contrib.losses.hinge_loss`
+* `tf.contrib.losses.compute_weighted_loss`
+* `tf.contrib.losses.cosine_distance`
+* `tf.contrib.losses.get_losses`
+* `tf.contrib.losses.get_regularization_losses`
+* `tf.contrib.losses.get_total_loss`
+* `tf.contrib.losses.log_loss`
+* `tf.contrib.losses.mean_pairwise_squared_error`
+* `tf.contrib.losses.mean_squared_error`
+* `tf.contrib.losses.sigmoid_cross_entropy`
+* `tf.contrib.losses.softmax_cross_entropy`
+* `tf.contrib.losses.sparse_softmax_cross_entropy`
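[Editor's note: since the deprecation notice above points to `tf.losses`, here is a minimal migration sketch with illustrative placeholders; note the core API takes `labels` before `predictions`:]

```python
import tensorflow as tf

predictions = tf.placeholder(tf.float32, [None, 1])
labels = tf.placeholder(tf.float32, [None, 1])

# Core replacement for the deprecated contrib op.
loss = tf.losses.mean_squared_error(labels, predictions)
total_loss = tf.losses.get_total_loss()   # also folds in REGULARIZATION_LOSSES
```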
diff --git a/tensorflow/docs_src/api_guides/python/contrib.metrics.md b/tensorflow/docs_src/api_guides/python/contrib.metrics.md
index 1eb9cf417a..de6346ca80 100644
--- a/tensorflow/docs_src/api_guides/python/contrib.metrics.md
+++ b/tensorflow/docs_src/api_guides/python/contrib.metrics.md
@@ -86,48 +86,48 @@ labels and predictions tensors and results in a weighted average of the metric.
## Metric `Ops`
-* @{tf.contrib.metrics.streaming_accuracy}
-* @{tf.contrib.metrics.streaming_mean}
-* @{tf.contrib.metrics.streaming_recall}
-* @{tf.contrib.metrics.streaming_recall_at_thresholds}
-* @{tf.contrib.metrics.streaming_precision}
-* @{tf.contrib.metrics.streaming_precision_at_thresholds}
-* @{tf.contrib.metrics.streaming_auc}
-* @{tf.contrib.metrics.streaming_recall_at_k}
-* @{tf.contrib.metrics.streaming_mean_absolute_error}
-* @{tf.contrib.metrics.streaming_mean_iou}
-* @{tf.contrib.metrics.streaming_mean_relative_error}
-* @{tf.contrib.metrics.streaming_mean_squared_error}
-* @{tf.contrib.metrics.streaming_mean_tensor}
-* @{tf.contrib.metrics.streaming_root_mean_squared_error}
-* @{tf.contrib.metrics.streaming_covariance}
-* @{tf.contrib.metrics.streaming_pearson_correlation}
-* @{tf.contrib.metrics.streaming_mean_cosine_distance}
-* @{tf.contrib.metrics.streaming_percentage_less}
-* @{tf.contrib.metrics.streaming_sensitivity_at_specificity}
-* @{tf.contrib.metrics.streaming_sparse_average_precision_at_k}
-* @{tf.contrib.metrics.streaming_sparse_precision_at_k}
-* @{tf.contrib.metrics.streaming_sparse_precision_at_top_k}
-* @{tf.contrib.metrics.streaming_sparse_recall_at_k}
-* @{tf.contrib.metrics.streaming_specificity_at_sensitivity}
-* @{tf.contrib.metrics.streaming_concat}
-* @{tf.contrib.metrics.streaming_false_negatives}
-* @{tf.contrib.metrics.streaming_false_negatives_at_thresholds}
-* @{tf.contrib.metrics.streaming_false_positives}
-* @{tf.contrib.metrics.streaming_false_positives_at_thresholds}
-* @{tf.contrib.metrics.streaming_true_negatives}
-* @{tf.contrib.metrics.streaming_true_negatives_at_thresholds}
-* @{tf.contrib.metrics.streaming_true_positives}
-* @{tf.contrib.metrics.streaming_true_positives_at_thresholds}
-* @{tf.contrib.metrics.auc_using_histogram}
-* @{tf.contrib.metrics.accuracy}
-* @{tf.contrib.metrics.aggregate_metrics}
-* @{tf.contrib.metrics.aggregate_metric_map}
-* @{tf.contrib.metrics.confusion_matrix}
+* `tf.contrib.metrics.streaming_accuracy`
+* `tf.contrib.metrics.streaming_mean`
+* `tf.contrib.metrics.streaming_recall`
+* `tf.contrib.metrics.streaming_recall_at_thresholds`
+* `tf.contrib.metrics.streaming_precision`
+* `tf.contrib.metrics.streaming_precision_at_thresholds`
+* `tf.contrib.metrics.streaming_auc`
+* `tf.contrib.metrics.streaming_recall_at_k`
+* `tf.contrib.metrics.streaming_mean_absolute_error`
+* `tf.contrib.metrics.streaming_mean_iou`
+* `tf.contrib.metrics.streaming_mean_relative_error`
+* `tf.contrib.metrics.streaming_mean_squared_error`
+* `tf.contrib.metrics.streaming_mean_tensor`
+* `tf.contrib.metrics.streaming_root_mean_squared_error`
+* `tf.contrib.metrics.streaming_covariance`
+* `tf.contrib.metrics.streaming_pearson_correlation`
+* `tf.contrib.metrics.streaming_mean_cosine_distance`
+* `tf.contrib.metrics.streaming_percentage_less`
+* `tf.contrib.metrics.streaming_sensitivity_at_specificity`
+* `tf.contrib.metrics.streaming_sparse_average_precision_at_k`
+* `tf.contrib.metrics.streaming_sparse_precision_at_k`
+* `tf.contrib.metrics.streaming_sparse_precision_at_top_k`
+* `tf.contrib.metrics.streaming_sparse_recall_at_k`
+* `tf.contrib.metrics.streaming_specificity_at_sensitivity`
+* `tf.contrib.metrics.streaming_concat`
+* `tf.contrib.metrics.streaming_false_negatives`
+* `tf.contrib.metrics.streaming_false_negatives_at_thresholds`
+* `tf.contrib.metrics.streaming_false_positives`
+* `tf.contrib.metrics.streaming_false_positives_at_thresholds`
+* `tf.contrib.metrics.streaming_true_negatives`
+* `tf.contrib.metrics.streaming_true_negatives_at_thresholds`
+* `tf.contrib.metrics.streaming_true_positives`
+* `tf.contrib.metrics.streaming_true_positives_at_thresholds`
+* `tf.contrib.metrics.auc_using_histogram`
+* `tf.contrib.metrics.accuracy`
+* `tf.contrib.metrics.aggregate_metrics`
+* `tf.contrib.metrics.aggregate_metric_map`
+* `tf.contrib.metrics.confusion_matrix`
## Set `Ops`
-* @{tf.contrib.metrics.set_difference}
-* @{tf.contrib.metrics.set_intersection}
-* @{tf.contrib.metrics.set_size}
-* @{tf.contrib.metrics.set_union}
+* `tf.contrib.metrics.set_difference`
+* `tf.contrib.metrics.set_intersection`
+* `tf.contrib.metrics.set_size`
+* `tf.contrib.metrics.set_union`
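[Editor's note: the streaming metrics above all return a `(value, update_op)` pair; a minimal sketch of the accumulate-then-read pattern, with illustrative batches:]

```python
import tensorflow as tf

labels = tf.placeholder(tf.int64, [None])
predictions = tf.placeholder(tf.int64, [None])

# Run update_op once per batch, then read the accumulated value.
accuracy, update_op = tf.contrib.metrics.streaming_accuracy(predictions, labels)

with tf.Session() as sess:
    sess.run(tf.local_variables_initializer())  # metric state lives in local variables
    sess.run(update_op, {predictions: [1, 0, 1], labels: [1, 1, 1]})
    sess.run(update_op, {predictions: [0, 1, 0], labels: [0, 1, 1]})
    print(sess.run(accuracy))                   # 4/6 over both batches
```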
diff --git a/tensorflow/docs_src/api_guides/python/contrib.rnn.md b/tensorflow/docs_src/api_guides/python/contrib.rnn.md
index d089b0616f..d265ab6925 100644
--- a/tensorflow/docs_src/api_guides/python/contrib.rnn.md
+++ b/tensorflow/docs_src/api_guides/python/contrib.rnn.md
@@ -5,49 +5,49 @@ Module for constructing RNN Cells and additional RNN operations.
## Base interface for all RNN Cells
-* @{tf.contrib.rnn.RNNCell}
+* `tf.contrib.rnn.RNNCell`
## Core RNN Cells for use with TensorFlow's core RNN methods
-* @{tf.contrib.rnn.BasicRNNCell}
-* @{tf.contrib.rnn.BasicLSTMCell}
-* @{tf.contrib.rnn.GRUCell}
-* @{tf.contrib.rnn.LSTMCell}
-* @{tf.contrib.rnn.LayerNormBasicLSTMCell}
+* `tf.contrib.rnn.BasicRNNCell`
+* `tf.contrib.rnn.BasicLSTMCell`
+* `tf.contrib.rnn.GRUCell`
+* `tf.contrib.rnn.LSTMCell`
+* `tf.contrib.rnn.LayerNormBasicLSTMCell`
## Classes storing split `RNNCell` state
-* @{tf.contrib.rnn.LSTMStateTuple}
+* `tf.contrib.rnn.LSTMStateTuple`
## Core RNN Cell wrappers (RNNCells that wrap other RNNCells)
-* @{tf.contrib.rnn.MultiRNNCell}
-* @{tf.contrib.rnn.LSTMBlockWrapper}
-* @{tf.contrib.rnn.DropoutWrapper}
-* @{tf.contrib.rnn.EmbeddingWrapper}
-* @{tf.contrib.rnn.InputProjectionWrapper}
-* @{tf.contrib.rnn.OutputProjectionWrapper}
-* @{tf.contrib.rnn.DeviceWrapper}
-* @{tf.contrib.rnn.ResidualWrapper}
+* `tf.contrib.rnn.MultiRNNCell`
+* `tf.contrib.rnn.LSTMBlockWrapper`
+* `tf.contrib.rnn.DropoutWrapper`
+* `tf.contrib.rnn.EmbeddingWrapper`
+* `tf.contrib.rnn.InputProjectionWrapper`
+* `tf.contrib.rnn.OutputProjectionWrapper`
+* `tf.contrib.rnn.DeviceWrapper`
+* `tf.contrib.rnn.ResidualWrapper`
### Block RNNCells
-* @{tf.contrib.rnn.LSTMBlockCell}
-* @{tf.contrib.rnn.GRUBlockCell}
+* `tf.contrib.rnn.LSTMBlockCell`
+* `tf.contrib.rnn.GRUBlockCell`
### Fused RNNCells
-* @{tf.contrib.rnn.FusedRNNCell}
-* @{tf.contrib.rnn.FusedRNNCellAdaptor}
-* @{tf.contrib.rnn.TimeReversedFusedRNN}
-* @{tf.contrib.rnn.LSTMBlockFusedCell}
+* `tf.contrib.rnn.FusedRNNCell`
+* `tf.contrib.rnn.FusedRNNCellAdaptor`
+* `tf.contrib.rnn.TimeReversedFusedRNN`
+* `tf.contrib.rnn.LSTMBlockFusedCell`
### LSTM-like cells
-* @{tf.contrib.rnn.CoupledInputForgetGateLSTMCell}
-* @{tf.contrib.rnn.TimeFreqLSTMCell}
-* @{tf.contrib.rnn.GridLSTMCell}
+* `tf.contrib.rnn.CoupledInputForgetGateLSTMCell`
+* `tf.contrib.rnn.TimeFreqLSTMCell`
+* `tf.contrib.rnn.GridLSTMCell`
### RNNCell wrappers
-* @{tf.contrib.rnn.AttentionCellWrapper}
-* @{tf.contrib.rnn.CompiledWrapper}
+* `tf.contrib.rnn.AttentionCellWrapper`
+* `tf.contrib.rnn.CompiledWrapper`
## Recurrent Neural Networks
@@ -55,7 +55,7 @@ Module for constructing RNN Cells and additional RNN operations.
TensorFlow provides a number of methods for constructing Recurrent Neural
Networks.
-* @{tf.contrib.rnn.static_rnn}
-* @{tf.contrib.rnn.static_state_saving_rnn}
-* @{tf.contrib.rnn.static_bidirectional_rnn}
-* @{tf.contrib.rnn.stack_bidirectional_dynamic_rnn}
+* `tf.contrib.rnn.static_rnn`
+* `tf.contrib.rnn.static_state_saving_rnn`
+* `tf.contrib.rnn.static_bidirectional_rnn`
+* `tf.contrib.rnn.stack_bidirectional_dynamic_rnn`
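[Editor's note: a minimal sketch of the cell-wrapper pattern listed above, stacking two cells with `MultiRNNCell` and driving them with `tf.nn.dynamic_rnn`; sizes are illustrative:]

```python
import tensorflow as tf

# Two stacked LSTM layers.
cells = [tf.contrib.rnn.BasicLSTMCell(64) for _ in range(2)]
stacked = tf.contrib.rnn.MultiRNNCell(cells)

inputs = tf.placeholder(tf.float32, [None, 10, 8])  # batch x time x features
outputs, final_state = tf.nn.dynamic_rnn(stacked, inputs, dtype=tf.float32)
```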
diff --git a/tensorflow/docs_src/api_guides/python/contrib.seq2seq.md b/tensorflow/docs_src/api_guides/python/contrib.seq2seq.md
index 143919fd84..54f2fafc71 100644
--- a/tensorflow/docs_src/api_guides/python/contrib.seq2seq.md
+++ b/tensorflow/docs_src/api_guides/python/contrib.seq2seq.md
@@ -2,18 +2,18 @@
[TOC]
Module for constructing seq2seq models and dynamic decoding. Builds on top of
-libraries in @{tf.contrib.rnn}.
+libraries in `tf.contrib.rnn`.
This library is composed of two primary components:
-* New attention wrappers for @{tf.contrib.rnn.RNNCell} objects.
+* New attention wrappers for `tf.contrib.rnn.RNNCell` objects.
* A new object-oriented dynamic decoding framework.
## Attention
Attention wrappers are `RNNCell` objects that wrap other `RNNCell` objects and
implement attention. The form of attention is determined by a subclass of
-@{tf.contrib.seq2seq.AttentionMechanism}. These subclasses describe the form
+`tf.contrib.seq2seq.AttentionMechanism`. These subclasses describe the form
of attention (e.g. additive vs. multiplicative) to use when creating the
wrapper. An instance of an `AttentionMechanism` is constructed with a
`memory` tensor, from which lookup keys and values tensors are created.
@@ -22,9 +22,9 @@ wrapper. An instance of an `AttentionMechanism` is constructed with a
The two basic attention mechanisms are:
-* @{tf.contrib.seq2seq.BahdanauAttention} (additive attention,
+* `tf.contrib.seq2seq.BahdanauAttention` (additive attention,
[ref.](https://arxiv.org/abs/1409.0473))
-* @{tf.contrib.seq2seq.LuongAttention} (multiplicative attention,
+* `tf.contrib.seq2seq.LuongAttention` (multiplicative attention,
[ref.](https://arxiv.org/abs/1508.04025))
The `memory` tensor passed to the attention mechanism's constructor is expected to
@@ -41,7 +41,7 @@ depth.
### Attention Wrappers
-The basic attention wrapper is @{tf.contrib.seq2seq.AttentionWrapper}.
+The basic attention wrapper is `tf.contrib.seq2seq.AttentionWrapper`.
This wrapper accepts an `RNNCell` instance, an instance of `AttentionMechanism`,
and an attention depth parameter (`attention_size`), as well as several
optional arguments that allow one to customize intermediate calculations.
@@ -120,19 +120,19 @@ outputs, _ = tf.contrib.seq2seq.dynamic_decode(
### Decoder base class and functions
-* @{tf.contrib.seq2seq.Decoder}
-* @{tf.contrib.seq2seq.dynamic_decode}
+* `tf.contrib.seq2seq.Decoder`
+* `tf.contrib.seq2seq.dynamic_decode`
### Basic Decoder
-* @{tf.contrib.seq2seq.BasicDecoderOutput}
-* @{tf.contrib.seq2seq.BasicDecoder}
+* `tf.contrib.seq2seq.BasicDecoderOutput`
+* `tf.contrib.seq2seq.BasicDecoder`
### Decoder Helpers
-* @{tf.contrib.seq2seq.Helper}
-* @{tf.contrib.seq2seq.CustomHelper}
-* @{tf.contrib.seq2seq.GreedyEmbeddingHelper}
-* @{tf.contrib.seq2seq.ScheduledEmbeddingTrainingHelper}
-* @{tf.contrib.seq2seq.ScheduledOutputTrainingHelper}
-* @{tf.contrib.seq2seq.TrainingHelper}
+* `tf.contrib.seq2seq.Helper`
+* `tf.contrib.seq2seq.CustomHelper`
+* `tf.contrib.seq2seq.GreedyEmbeddingHelper`
+* `tf.contrib.seq2seq.ScheduledEmbeddingTrainingHelper`
+* `tf.contrib.seq2seq.ScheduledOutputTrainingHelper`
+* `tf.contrib.seq2seq.TrainingHelper`
diff --git a/tensorflow/docs_src/api_guides/python/contrib.signal.md b/tensorflow/docs_src/api_guides/python/contrib.signal.md
index 0f7690f80a..66df561084 100644
--- a/tensorflow/docs_src/api_guides/python/contrib.signal.md
+++ b/tensorflow/docs_src/api_guides/python/contrib.signal.md
@@ -1,7 +1,7 @@
# Signal Processing (contrib)
[TOC]
-@{tf.contrib.signal} is a module for signal processing primitives. All
+`tf.contrib.signal` is a module for signal processing primitives. All
operations have GPU support and are differentiable. This module is especially
helpful for building TensorFlow models that process or generate audio, though
the techniques are useful in many domains.
@@ -10,7 +10,7 @@ the techniques are useful in many domains.
When dealing with variable length signals (e.g. audio) it is common to "frame"
them into multiple fixed-length windows. These windows can overlap if the `step`
-of the frame is less than the frame length. @{tf.contrib.signal.frame} does
+of the frame is less than the frame length. `tf.contrib.signal.frame` does
exactly this. For example:
```python
@@ -24,7 +24,7 @@ signals = tf.placeholder(tf.float32, [None, None])
frames = tf.contrib.signal.frame(signals, frame_length=128, frame_step=32)
```
-The `axis` parameter to @{tf.contrib.signal.frame} allows you to frame tensors
+The `axis` parameter to `tf.contrib.signal.frame` allows you to frame tensors
with inner structure (e.g. a spectrogram):
```python
@@ -42,7 +42,7 @@ spectrogram_patches = tf.contrib.signal.frame(
## Reconstructing framed sequences and applying a tapering window
-@{tf.contrib.signal.overlap_and_add} can be used to reconstruct a signal from a
+`tf.contrib.signal.overlap_and_add` can be used to reconstruct a signal from a
framed representation. For example, the following code reconstructs the signal
produced in the preceding example:
@@ -58,7 +58,7 @@ the resulting reconstruction will have a greater magnitude than the original
window function satisfies the Constant Overlap-Add (COLA) property for the given
frame step, then it will recover the original `signals`.
-@{tf.contrib.signal.hamming_window} and @{tf.contrib.signal.hann_window} both
+`tf.contrib.signal.hamming_window` and `tf.contrib.signal.hann_window` both
satisfy the COLA property for a 75% overlap.
```python
@@ -74,7 +74,7 @@ reconstructed_signals = tf.contrib.signal.overlap_and_add(
A spectrogram is a time-frequency decomposition of a signal that indicates its
frequency content over time. The most common approach to computing spectrograms
is to take the magnitude of the [Short-time Fourier Transform][stft] (STFT),
-which @{tf.contrib.signal.stft} can compute as follows:
+which `tf.contrib.signal.stft` can compute as follows:
```python
# A batch of float32 time-domain signals in the range [-1, 1] with shape
@@ -121,7 +121,7 @@ When working with spectral representations of audio, the [mel scale][mel] is a
common reweighting of the frequency dimension, which results in a
lower-dimensional and more perceptually relevant representation of the audio.
-@{tf.contrib.signal.linear_to_mel_weight_matrix} produces a matrix you can use
+`tf.contrib.signal.linear_to_mel_weight_matrix` produces a matrix you can use
to convert a spectrogram to the mel scale.
```python
@@ -156,7 +156,7 @@ log_mel_spectrograms = tf.log(mel_spectrograms + log_offset)
## Computing Mel-Frequency Cepstral Coefficients (MFCCs)
-Call @{tf.contrib.signal.mfccs_from_log_mel_spectrograms} to compute
+Call `tf.contrib.signal.mfccs_from_log_mel_spectrograms` to compute
[MFCCs][mfcc] from log-magnitude, mel-scale spectrograms (as computed in the
preceding example):
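
(A minimal sketch, assuming `log_mel_spectrograms` from the previous section; keeping the lowest 13 coefficients is a common convention, not a requirement.)

```python
mfccs = tf.contrib.signal.mfccs_from_log_mel_spectrograms(
    log_mel_spectrograms)[..., :13]
```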
diff --git a/tensorflow/docs_src/api_guides/python/contrib.staging.md b/tensorflow/docs_src/api_guides/python/contrib.staging.md
index b0ac548342..de143a7bd3 100644
--- a/tensorflow/docs_src/api_guides/python/contrib.staging.md
+++ b/tensorflow/docs_src/api_guides/python/contrib.staging.md
@@ -3,4 +3,4 @@
This library contains utilities for adding pipelining to a model.
-* @{tf.contrib.staging.StagingArea}
+* `tf.contrib.staging.StagingArea`
diff --git a/tensorflow/docs_src/api_guides/python/contrib.training.md b/tensorflow/docs_src/api_guides/python/contrib.training.md
index 87395d930b..068efdc829 100644
--- a/tensorflow/docs_src/api_guides/python/contrib.training.md
+++ b/tensorflow/docs_src/api_guides/python/contrib.training.md
@@ -5,46 +5,46 @@ Training and input utilities.
## Splitting sequence inputs into minibatches with state saving
-Use @{tf.contrib.training.SequenceQueueingStateSaver} or
-its wrapper @{tf.contrib.training.batch_sequences_with_states} if
+Use `tf.contrib.training.SequenceQueueingStateSaver` or
+its wrapper `tf.contrib.training.batch_sequences_with_states` if
you have input data with a dynamic primary time/frame count axis that
you'd like to convert into fixed-size segments during minibatching, and would
like to carry state forward across the segments of an example.
-* @{tf.contrib.training.batch_sequences_with_states}
-* @{tf.contrib.training.NextQueuedSequenceBatch}
-* @{tf.contrib.training.SequenceQueueingStateSaver}
+* `tf.contrib.training.batch_sequences_with_states`
+* `tf.contrib.training.NextQueuedSequenceBatch`
+* `tf.contrib.training.SequenceQueueingStateSaver`
## Online data resampling
To resample data with replacement on a per-example basis, use
-@{tf.contrib.training.rejection_sample} or
-@{tf.contrib.training.resample_at_rate}. For `rejection_sample`, provide
+`tf.contrib.training.rejection_sample` or
+`tf.contrib.training.resample_at_rate`. For `rejection_sample`, provide
a boolean Tensor describing whether to accept or reject. Resulting batch sizes
are always the same. For `resample_at_rate`, provide the desired rate for each
example. Resulting batch sizes may vary. If you wish to specify relative
-rates, rather than absolute ones, use @{tf.contrib.training.weighted_resample}
+rates, rather than absolute ones, use `tf.contrib.training.weighted_resample`
(which also returns the actual resampling rate used for each output example).
-Use @{tf.contrib.training.stratified_sample} to resample without replacement
+Use `tf.contrib.training.stratified_sample` to resample without replacement
from the data to achieve a desired mix of class proportions that the TensorFlow
graph sees. For instance, if you have a binary classification dataset that is
99.9% class 1, a common approach is to resample so that the classes are
more balanced.
-* @{tf.contrib.training.rejection_sample}
-* @{tf.contrib.training.resample_at_rate}
-* @{tf.contrib.training.stratified_sample}
-* @{tf.contrib.training.weighted_resample}
+* `tf.contrib.training.rejection_sample`
+* `tf.contrib.training.resample_at_rate`
+* `tf.contrib.training.stratified_sample`
+* `tf.contrib.training.weighted_resample`
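
As a rough sketch of `stratified_sample` for the imbalanced case above — `example` and `label` here are hypothetical stand-ins for tensors produced by a reader pipeline, and the exact keyword arguments should be checked against your release:

```python
import tensorflow as tf

# Hypothetical per-example tensors, e.g. from a reader pipeline.
example = tf.placeholder(tf.float32, [10])
label = tf.placeholder(tf.int32, [])

# Draw minibatches whose two classes appear in roughly equal proportion.
data_batch, label_batch = tf.contrib.training.stratified_sample(
    [example], label, target_probs=[0.5, 0.5], batch_size=32)
```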
## Bucketing
-Use @{tf.contrib.training.bucket} or
-@{tf.contrib.training.bucket_by_sequence_length} to stratify
+Use `tf.contrib.training.bucket` or
+`tf.contrib.training.bucket_by_sequence_length` to stratify
minibatches into groups ("buckets"). Use `bucket_by_sequence_length`
with the argument `dynamic_pad=True` to receive minibatches of similarly
sized sequences for efficient training via `dynamic_rnn`.
-* @{tf.contrib.training.bucket}
-* @{tf.contrib.training.bucket_by_sequence_length}
+* `tf.contrib.training.bucket`
+* `tf.contrib.training.bucket_by_sequence_length`
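
A sketch of bucketing for `dynamic_rnn` training, assuming the contrib API returns a (lengths, tensors) pair; the bucket boundaries and feature size below are arbitrary:

```python
import tensorflow as tf

sequence = tf.placeholder(tf.float32, [None, 8])  # [time, features]
length = tf.shape(sequence)[0]

# Group sequences of similar length and pad each minibatch dynamically.
lengths, batched = tf.contrib.training.bucket_by_sequence_length(
    input_length=length,
    tensors=[sequence],
    batch_size=32,
    bucket_boundaries=[10, 20, 40],
    dynamic_pad=True)
```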
diff --git a/tensorflow/docs_src/api_guides/python/contrib.util.md b/tensorflow/docs_src/api_guides/python/contrib.util.md
index 6bc120d43d..e5fd97e9f2 100644
--- a/tensorflow/docs_src/api_guides/python/contrib.util.md
+++ b/tensorflow/docs_src/api_guides/python/contrib.util.md
@@ -5,8 +5,8 @@ Utilities for dealing with Tensors.
## Miscellaneous Utility Functions
-* @{tf.contrib.util.constant_value}
-* @{tf.contrib.util.make_tensor_proto}
-* @{tf.contrib.util.make_ndarray}
-* @{tf.contrib.util.ops_used_by_graph_def}
-* @{tf.contrib.util.stripped_op_list_for_graph}
+* `tf.contrib.util.constant_value`
+* `tf.contrib.util.make_tensor_proto`
+* `tf.contrib.util.make_ndarray`
+* `tf.contrib.util.ops_used_by_graph_def`
+* `tf.contrib.util.stripped_op_list_for_graph`
diff --git a/tensorflow/docs_src/api_guides/python/control_flow_ops.md b/tensorflow/docs_src/api_guides/python/control_flow_ops.md
index 68ea96d3dc..42c86d9978 100644
--- a/tensorflow/docs_src/api_guides/python/control_flow_ops.md
+++ b/tensorflow/docs_src/api_guides/python/control_flow_ops.md
@@ -1,7 +1,7 @@
# Control Flow
Note: Functions taking `Tensor` arguments can also take anything accepted by
-@{tf.convert_to_tensor}.
+`tf.convert_to_tensor`.
[TOC]
@@ -10,48 +10,48 @@ Note: Functions taking `Tensor` arguments can also take anything accepted by
TensorFlow provides several operations and classes that you can use to control
the execution of operations and add conditional dependencies to your graph.
-* @{tf.identity}
-* @{tf.tuple}
-* @{tf.group}
-* @{tf.no_op}
-* @{tf.count_up_to}
-* @{tf.cond}
-* @{tf.case}
-* @{tf.while_loop}
+* `tf.identity`
+* `tf.tuple`
+* `tf.group`
+* `tf.no_op`
+* `tf.count_up_to`
+* `tf.cond`
+* `tf.case`
+* `tf.while_loop`
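
For example, a small sketch using two of the ops above (the values are arbitrary):

```python
import tensorflow as tf

x = tf.constant(3)
# tf.cond evaluates one of two branches based on a scalar boolean predicate.
y = tf.cond(x > 0, lambda: x * 2, lambda: x - 2)

# tf.while_loop runs `body` repeatedly while `cond` returns True.
i0 = tf.constant(0)
final_i = tf.while_loop(lambda i: i < 10, lambda i: i + 1, [i0])

with tf.Session() as sess:
    print(sess.run([y, final_i]))  # [6, 10]
```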
## Logical Operators
TensorFlow provides several operations that you can use to add logical operators
to your graph.
-* @{tf.logical_and}
-* @{tf.logical_not}
-* @{tf.logical_or}
-* @{tf.logical_xor}
+* `tf.logical_and`
+* `tf.logical_not`
+* `tf.logical_or`
+* `tf.logical_xor`
## Comparison Operators
TensorFlow provides several operations that you can use to add comparison
operators to your graph.
-* @{tf.equal}
-* @{tf.not_equal}
-* @{tf.less}
-* @{tf.less_equal}
-* @{tf.greater}
-* @{tf.greater_equal}
-* @{tf.where}
+* `tf.equal`
+* `tf.not_equal`
+* `tf.less`
+* `tf.less_equal`
+* `tf.greater`
+* `tf.greater_equal`
+* `tf.where`
## Debugging Operations
TensorFlow provides several operations that you can use to validate values and
debug your graph.
-* @{tf.is_finite}
-* @{tf.is_inf}
-* @{tf.is_nan}
-* @{tf.verify_tensor_all_finite}
-* @{tf.check_numerics}
-* @{tf.add_check_numerics_ops}
-* @{tf.Assert}
-* @{tf.Print}
+* `tf.is_finite`
+* `tf.is_inf`
+* `tf.is_nan`
+* `tf.verify_tensor_all_finite`
+* `tf.check_numerics`
+* `tf.add_check_numerics_ops`
+* `tf.Assert`
+* `tf.Print`
diff --git a/tensorflow/docs_src/api_guides/python/framework.md b/tensorflow/docs_src/api_guides/python/framework.md
index 42c3e57477..40a6c0783a 100644
--- a/tensorflow/docs_src/api_guides/python/framework.md
+++ b/tensorflow/docs_src/api_guides/python/framework.md
@@ -5,47 +5,47 @@ Classes and functions for building TensorFlow graphs.
## Core graph data structures
-* @{tf.Graph}
-* @{tf.Operation}
-* @{tf.Tensor}
+* `tf.Graph`
+* `tf.Operation`
+* `tf.Tensor`
## Tensor types
-* @{tf.DType}
-* @{tf.as_dtype}
+* `tf.DType`
+* `tf.as_dtype`
## Utility functions
-* @{tf.device}
-* @{tf.container}
-* @{tf.name_scope}
-* @{tf.control_dependencies}
-* @{tf.convert_to_tensor}
-* @{tf.convert_to_tensor_or_indexed_slices}
-* @{tf.convert_to_tensor_or_sparse_tensor}
-* @{tf.get_default_graph}
-* @{tf.reset_default_graph}
-* @{tf.import_graph_def}
-* @{tf.load_file_system_library}
-* @{tf.load_op_library}
+* `tf.device`
+* `tf.container`
+* `tf.name_scope`
+* `tf.control_dependencies`
+* `tf.convert_to_tensor`
+* `tf.convert_to_tensor_or_indexed_slices`
+* `tf.convert_to_tensor_or_sparse_tensor`
+* `tf.get_default_graph`
+* `tf.reset_default_graph`
+* `tf.import_graph_def`
+* `tf.load_file_system_library`
+* `tf.load_op_library`
## Graph collections
-* @{tf.add_to_collection}
-* @{tf.get_collection}
-* @{tf.get_collection_ref}
-* @{tf.GraphKeys}
+* `tf.add_to_collection`
+* `tf.get_collection`
+* `tf.get_collection_ref`
+* `tf.GraphKeys`
## Defining new operations
-* @{tf.RegisterGradient}
-* @{tf.NotDifferentiable}
-* @{tf.NoGradient}
-* @{tf.TensorShape}
-* @{tf.Dimension}
-* @{tf.op_scope}
-* @{tf.get_seed}
+* `tf.RegisterGradient`
+* `tf.NotDifferentiable`
+* `tf.NoGradient`
+* `tf.TensorShape`
+* `tf.Dimension`
+* `tf.op_scope`
+* `tf.get_seed`
## For libraries building on TensorFlow
-* @{tf.register_tensor_conversion_function}
+* `tf.register_tensor_conversion_function`
diff --git a/tensorflow/docs_src/api_guides/python/functional_ops.md b/tensorflow/docs_src/api_guides/python/functional_ops.md
index 9fd46066a8..0a9fe02ad5 100644
--- a/tensorflow/docs_src/api_guides/python/functional_ops.md
+++ b/tensorflow/docs_src/api_guides/python/functional_ops.md
@@ -1,7 +1,7 @@
# Higher Order Functions
Note: Functions taking `Tensor` arguments can also take anything accepted by
-@{tf.convert_to_tensor}.
+`tf.convert_to_tensor`.
[TOC]
@@ -12,7 +12,7 @@ Functional operations.
TensorFlow provides several higher order operators to simplify the common
map-reduce programming patterns.
-* @{tf.map_fn}
-* @{tf.foldl}
-* @{tf.foldr}
-* @{tf.scan}
+* `tf.map_fn`
+* `tf.foldl`
+* `tf.foldr`
+* `tf.scan`
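
For example (values arbitrary):

```python
import tensorflow as tf

elems = tf.constant([1, 2, 3, 4])
squares = tf.map_fn(lambda x: x * x, elems)       # [1, 4, 9, 16]
total = tf.foldl(lambda acc, x: acc + x, elems)   # 10
running = tf.scan(lambda acc, x: acc + x, elems)  # [1, 3, 6, 10]
```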
diff --git a/tensorflow/docs_src/api_guides/python/image.md b/tensorflow/docs_src/api_guides/python/image.md
index 051e4547ee..c51b92db05 100644
--- a/tensorflow/docs_src/api_guides/python/image.md
+++ b/tensorflow/docs_src/api_guides/python/image.md
@@ -1,7 +1,7 @@
# Images
Note: Functions taking `Tensor` arguments can also take anything accepted by
-@{tf.convert_to_tensor}.
+`tf.convert_to_tensor`.
[TOC]
@@ -19,27 +19,27 @@ Note: The PNG encode and decode Ops support RGBA, but the conversions Ops
presently only support RGB, HSV, and GrayScale. For now, the alpha channel has
to be stripped from the image and re-attached using slicing ops.
-* @{tf.image.decode_bmp}
-* @{tf.image.decode_gif}
-* @{tf.image.decode_jpeg}
-* @{tf.image.encode_jpeg}
-* @{tf.image.decode_png}
-* @{tf.image.encode_png}
-* @{tf.image.decode_image}
+* `tf.image.decode_bmp`
+* `tf.image.decode_gif`
+* `tf.image.decode_jpeg`
+* `tf.image.encode_jpeg`
+* `tf.image.decode_png`
+* `tf.image.encode_png`
+* `tf.image.decode_image`
## Resizing
The resizing Ops accept input images as tensors of several types. They always
output resized images as float32 tensors.
-The convenience function @{tf.image.resize_images} supports both 4-D
+The convenience function `tf.image.resize_images` supports both 4-D
and 3-D tensors as input and output. 4-D tensors are for batches of images,
3-D tensors for individual images.
Other resizing Ops only support 4-D batches of images as input:
-@{tf.image.resize_area}, @{tf.image.resize_bicubic},
-@{tf.image.resize_bilinear},
-@{tf.image.resize_nearest_neighbor}.
+`tf.image.resize_area`, `tf.image.resize_bicubic`,
+`tf.image.resize_bilinear`,
+`tf.image.resize_nearest_neighbor`.
Example:
@@ -49,29 +49,29 @@ image = tf.image.decode_jpeg(...)
resized_image = tf.image.resize_images(image, [299, 299])
```
-* @{tf.image.resize_images}
-* @{tf.image.resize_area}
-* @{tf.image.resize_bicubic}
-* @{tf.image.resize_bilinear}
-* @{tf.image.resize_nearest_neighbor}
+* `tf.image.resize_images`
+* `tf.image.resize_area`
+* `tf.image.resize_bicubic`
+* `tf.image.resize_bilinear`
+* `tf.image.resize_nearest_neighbor`
## Cropping
-* @{tf.image.resize_image_with_crop_or_pad}
-* @{tf.image.central_crop}
-* @{tf.image.pad_to_bounding_box}
-* @{tf.image.crop_to_bounding_box}
-* @{tf.image.extract_glimpse}
-* @{tf.image.crop_and_resize}
+* `tf.image.resize_image_with_crop_or_pad`
+* `tf.image.central_crop`
+* `tf.image.pad_to_bounding_box`
+* `tf.image.crop_to_bounding_box`
+* `tf.image.extract_glimpse`
+* `tf.image.crop_and_resize`
## Flipping, Rotating and Transposing
-* @{tf.image.flip_up_down}
-* @{tf.image.random_flip_up_down}
-* @{tf.image.flip_left_right}
-* @{tf.image.random_flip_left_right}
-* @{tf.image.transpose_image}
-* @{tf.image.rot90}
+* `tf.image.flip_up_down`
+* `tf.image.random_flip_up_down`
+* `tf.image.flip_left_right`
+* `tf.image.random_flip_left_right`
+* `tf.image.transpose_image`
+* `tf.image.rot90`
## Converting Between Colorspaces
@@ -94,7 +94,7 @@ per pixel (values are assumed to lie in `[0,255]`).
TensorFlow can convert images between the RGB and HSV color spaces. The conversion functions
work only on float images, so you need to convert images in other formats using
-@{tf.image.convert_image_dtype}.
+`tf.image.convert_image_dtype`.
Example:
@@ -105,11 +105,11 @@ rgb_image_float = tf.image.convert_image_dtype(rgb_image, tf.float32)
hsv_image = tf.image.rgb_to_hsv(rgb_image)
```
-* @{tf.image.rgb_to_grayscale}
-* @{tf.image.grayscale_to_rgb}
-* @{tf.image.hsv_to_rgb}
-* @{tf.image.rgb_to_hsv}
-* @{tf.image.convert_image_dtype}
+* `tf.image.rgb_to_grayscale`
+* `tf.image.grayscale_to_rgb`
+* `tf.image.hsv_to_rgb`
+* `tf.image.rgb_to_hsv`
+* `tf.image.convert_image_dtype`
## Image Adjustments
@@ -122,23 +122,23 @@ If several adjustments are chained it is advisable to minimize the number of
redundant conversions by first converting the images to the most natural data
type and representation (RGB or HSV).
-* @{tf.image.adjust_brightness}
-* @{tf.image.random_brightness}
-* @{tf.image.adjust_contrast}
-* @{tf.image.random_contrast}
-* @{tf.image.adjust_hue}
-* @{tf.image.random_hue}
-* @{tf.image.adjust_gamma}
-* @{tf.image.adjust_saturation}
-* @{tf.image.random_saturation}
-* @{tf.image.per_image_standardization}
+* `tf.image.adjust_brightness`
+* `tf.image.random_brightness`
+* `tf.image.adjust_contrast`
+* `tf.image.random_contrast`
+* `tf.image.adjust_hue`
+* `tf.image.random_hue`
+* `tf.image.adjust_gamma`
+* `tf.image.adjust_saturation`
+* `tf.image.random_saturation`
+* `tf.image.per_image_standardization`
## Working with Bounding Boxes
-* @{tf.image.draw_bounding_boxes}
-* @{tf.image.non_max_suppression}
-* @{tf.image.sample_distorted_bounding_box}
+* `tf.image.draw_bounding_boxes`
+* `tf.image.non_max_suppression`
+* `tf.image.sample_distorted_bounding_box`
## Denoising
-* @{tf.image.total_variation}
+* `tf.image.total_variation`
diff --git a/tensorflow/docs_src/api_guides/python/input_dataset.md b/tensorflow/docs_src/api_guides/python/input_dataset.md
index a6612d1bf7..ab572e53d4 100644
--- a/tensorflow/docs_src/api_guides/python/input_dataset.md
+++ b/tensorflow/docs_src/api_guides/python/input_dataset.md
@@ -1,27 +1,27 @@
# Dataset Input Pipeline
[TOC]
-@{tf.data.Dataset} allows you to build complex input pipelines. See the
+`tf.data.Dataset` allows you to build complex input pipelines. See the
@{$guide/datasets} for an in-depth explanation of how to use this API.
## Reader classes
Classes that create a dataset from input files.
-* @{tf.data.FixedLengthRecordDataset}
-* @{tf.data.TextLineDataset}
-* @{tf.data.TFRecordDataset}
+* `tf.data.FixedLengthRecordDataset`
+* `tf.data.TextLineDataset`
+* `tf.data.TFRecordDataset`
## Creating new datasets
Static methods in `Dataset` that create new datasets.
-* @{tf.data.Dataset.from_generator}
-* @{tf.data.Dataset.from_tensor_slices}
-* @{tf.data.Dataset.from_tensors}
-* @{tf.data.Dataset.list_files}
-* @{tf.data.Dataset.range}
-* @{tf.data.Dataset.zip}
+* `tf.data.Dataset.from_generator`
+* `tf.data.Dataset.from_tensor_slices`
+* `tf.data.Dataset.from_tensors`
+* `tf.data.Dataset.list_files`
+* `tf.data.Dataset.range`
+* `tf.data.Dataset.zip`
## Transformations on existing datasets
@@ -32,54 +32,54 @@ can be chained together, as shown in the example below:
train_data = train_data.batch(100).shuffle(buffer_size=10000).repeat()
```
-* @{tf.data.Dataset.apply}
-* @{tf.data.Dataset.batch}
-* @{tf.data.Dataset.cache}
-* @{tf.data.Dataset.concatenate}
-* @{tf.data.Dataset.filter}
-* @{tf.data.Dataset.flat_map}
-* @{tf.data.Dataset.interleave}
-* @{tf.data.Dataset.map}
-* @{tf.data.Dataset.padded_batch}
-* @{tf.data.Dataset.prefetch}
-* @{tf.data.Dataset.repeat}
-* @{tf.data.Dataset.shard}
-* @{tf.data.Dataset.shuffle}
-* @{tf.data.Dataset.skip}
-* @{tf.data.Dataset.take}
+* `tf.data.Dataset.apply`
+* `tf.data.Dataset.batch`
+* `tf.data.Dataset.cache`
+* `tf.data.Dataset.concatenate`
+* `tf.data.Dataset.filter`
+* `tf.data.Dataset.flat_map`
+* `tf.data.Dataset.interleave`
+* `tf.data.Dataset.map`
+* `tf.data.Dataset.padded_batch`
+* `tf.data.Dataset.prefetch`
+* `tf.data.Dataset.repeat`
+* `tf.data.Dataset.shard`
+* `tf.data.Dataset.shuffle`
+* `tf.data.Dataset.skip`
+* `tf.data.Dataset.take`
### Custom transformation functions
-Custom transformation functions can be applied to a `Dataset` using @{tf.data.Dataset.apply}. Below are custom transformation functions from `tf.contrib.data`:
-
-* @{tf.contrib.data.batch_and_drop_remainder}
-* @{tf.contrib.data.dense_to_sparse_batch}
-* @{tf.contrib.data.enumerate_dataset}
-* @{tf.contrib.data.group_by_window}
-* @{tf.contrib.data.ignore_errors}
-* @{tf.contrib.data.map_and_batch}
-* @{tf.contrib.data.padded_batch_and_drop_remainder}
-* @{tf.contrib.data.parallel_interleave}
-* @{tf.contrib.data.rejection_resample}
-* @{tf.contrib.data.scan}
-* @{tf.contrib.data.shuffle_and_repeat}
-* @{tf.contrib.data.unbatch}
+Custom transformation functions can be applied to a `Dataset` using `tf.data.Dataset.apply`. Below are custom transformation functions from `tf.contrib.data`:
+
+* `tf.contrib.data.batch_and_drop_remainder`
+* `tf.contrib.data.dense_to_sparse_batch`
+* `tf.contrib.data.enumerate_dataset`
+* `tf.contrib.data.group_by_window`
+* `tf.contrib.data.ignore_errors`
+* `tf.contrib.data.map_and_batch`
+* `tf.contrib.data.padded_batch_and_drop_remainder`
+* `tf.contrib.data.parallel_interleave`
+* `tf.contrib.data.rejection_resample`
+* `tf.contrib.data.scan`
+* `tf.contrib.data.shuffle_and_repeat`
+* `tf.contrib.data.unbatch`
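
For example, a minimal sketch applying one of the fused transformations above (the buffer and repeat counts are arbitrary):

```python
import tensorflow as tf

dataset = tf.data.Dataset.range(100)
# shuffle_and_repeat fuses shuffle() and repeat() into a single transformation.
dataset = dataset.apply(
    tf.contrib.data.shuffle_and_repeat(buffer_size=100, count=5))
```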
## Iterating over datasets
-These functions make a @{tf.data.Iterator} from a `Dataset`.
+These functions make a `tf.data.Iterator` from a `Dataset`.
-* @{tf.data.Dataset.make_initializable_iterator}
-* @{tf.data.Dataset.make_one_shot_iterator}
+* `tf.data.Dataset.make_initializable_iterator`
+* `tf.data.Dataset.make_one_shot_iterator`
-The `Iterator` class also contains static methods that create a @{tf.data.Iterator} that can be used with multiple `Dataset` objects.
+The `Iterator` class also contains static methods that create a `tf.data.Iterator` that can be used with multiple `Dataset` objects.
-* @{tf.data.Iterator.from_structure}
-* @{tf.data.Iterator.from_string_handle}
+* `tf.data.Iterator.from_structure`
+* `tf.data.Iterator.from_string_handle`
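
For example, a one-shot iterator over a small dataset:

```python
import tensorflow as tf

dataset = tf.data.Dataset.range(3)
iterator = dataset.make_one_shot_iterator()
next_element = iterator.get_next()

with tf.Session() as sess:
    print(sess.run(next_element))  # 0
    print(sess.run(next_element))  # 1
```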
## Extra functions from `tf.contrib.data`
-* @{tf.contrib.data.get_single_element}
-* @{tf.contrib.data.make_saveable_from_iterator}
-* @{tf.contrib.data.read_batch_features}
+* `tf.contrib.data.get_single_element`
+* `tf.contrib.data.make_saveable_from_iterator`
+* `tf.contrib.data.read_batch_features`
diff --git a/tensorflow/docs_src/api_guides/python/io_ops.md b/tensorflow/docs_src/api_guides/python/io_ops.md
index 86b4b39409..ab3c70daa0 100644
--- a/tensorflow/docs_src/api_guides/python/io_ops.md
+++ b/tensorflow/docs_src/api_guides/python/io_ops.md
@@ -1,7 +1,7 @@
# Inputs and Readers
Note: Functions taking `Tensor` arguments can also take anything accepted by
-@{tf.convert_to_tensor}.
+`tf.convert_to_tensor`.
[TOC]
@@ -10,33 +10,33 @@ Note: Functions taking `Tensor` arguments can also take anything accepted by
TensorFlow provides a placeholder operation that must be fed with data
on execution. For more info, see the section on @{$reading_data#Feeding$Feeding data}.
-* @{tf.placeholder}
-* @{tf.placeholder_with_default}
+* `tf.placeholder`
+* `tf.placeholder_with_default`
For feeding `SparseTensor`s, which are a composite type,
there is a convenience function:
-* @{tf.sparse_placeholder}
+* `tf.sparse_placeholder`
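
For example (shapes and values arbitrary):

```python
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 3])
y = x * 2.0

with tf.Session() as sess:
    print(sess.run(y, feed_dict={x: [[1.0, 2.0, 3.0]]}))  # [[2. 4. 6.]]
```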
## Readers
TensorFlow provides a set of Reader classes for reading data formats.
For more information on inputs and readers, see @{$reading_data$Reading data}.
-* @{tf.ReaderBase}
-* @{tf.TextLineReader}
-* @{tf.WholeFileReader}
-* @{tf.IdentityReader}
-* @{tf.TFRecordReader}
-* @{tf.FixedLengthRecordReader}
+* `tf.ReaderBase`
+* `tf.TextLineReader`
+* `tf.WholeFileReader`
+* `tf.IdentityReader`
+* `tf.TFRecordReader`
+* `tf.FixedLengthRecordReader`
## Converting
TensorFlow provides several operations that you can use to convert various data
formats into tensors.
-* @{tf.decode_csv}
-* @{tf.decode_raw}
+* `tf.decode_csv`
+* `tf.decode_raw`
- - -
@@ -48,14 +48,14 @@ here](https://www.tensorflow.org/code/tensorflow/core/example/example.proto).
They contain `Features`, [described
here](https://www.tensorflow.org/code/tensorflow/core/example/feature.proto).
-* @{tf.VarLenFeature}
-* @{tf.FixedLenFeature}
-* @{tf.FixedLenSequenceFeature}
-* @{tf.SparseFeature}
-* @{tf.parse_example}
-* @{tf.parse_single_example}
-* @{tf.parse_tensor}
-* @{tf.decode_json_example}
+* `tf.VarLenFeature`
+* `tf.FixedLenFeature`
+* `tf.FixedLenSequenceFeature`
+* `tf.SparseFeature`
+* `tf.parse_example`
+* `tf.parse_single_example`
+* `tf.parse_tensor`
+* `tf.decode_json_example`
## Queues
@@ -64,23 +64,23 @@ structures within the TensorFlow computation graph to stage pipelines
of tensors together. The following describe the basic Queue interface
and some implementations. To see an example use, see @{$threading_and_queues$Threading and Queues}.
-* @{tf.QueueBase}
-* @{tf.FIFOQueue}
-* @{tf.PaddingFIFOQueue}
-* @{tf.RandomShuffleQueue}
-* @{tf.PriorityQueue}
+* `tf.QueueBase`
+* `tf.FIFOQueue`
+* `tf.PaddingFIFOQueue`
+* `tf.RandomShuffleQueue`
+* `tf.PriorityQueue`
## Conditional Accumulators
-* @{tf.ConditionalAccumulatorBase}
-* @{tf.ConditionalAccumulator}
-* @{tf.SparseConditionalAccumulator}
+* `tf.ConditionalAccumulatorBase`
+* `tf.ConditionalAccumulator`
+* `tf.SparseConditionalAccumulator`
## Dealing with the filesystem
-* @{tf.matching_files}
-* @{tf.read_file}
-* @{tf.write_file}
+* `tf.matching_files`
+* `tf.read_file`
+* `tf.write_file`
## Input pipeline
@@ -93,12 +93,12 @@ for context.
The "producer" functions add a queue to the graph and a corresponding
`QueueRunner` for running the subgraph that fills that queue.
-* @{tf.train.match_filenames_once}
-* @{tf.train.limit_epochs}
-* @{tf.train.input_producer}
-* @{tf.train.range_input_producer}
-* @{tf.train.slice_input_producer}
-* @{tf.train.string_input_producer}
+* `tf.train.match_filenames_once`
+* `tf.train.limit_epochs`
+* `tf.train.input_producer`
+* `tf.train.range_input_producer`
+* `tf.train.slice_input_producer`
+* `tf.train.string_input_producer`
### Batching at the end of an input pipeline
@@ -106,25 +106,25 @@ These functions add a queue to the graph to assemble a batch of
examples, with possible shuffling. They also add a `QueueRunner` for
running the subgraph that fills that queue.
-Use @{tf.train.batch} or @{tf.train.batch_join} for batching
+Use `tf.train.batch` or `tf.train.batch_join` for batching
examples that have already been well shuffled. Use
-@{tf.train.shuffle_batch} or
-@{tf.train.shuffle_batch_join} for examples that would
+`tf.train.shuffle_batch` or
+`tf.train.shuffle_batch_join` for examples that would
benefit from additional shuffling.
-Use @{tf.train.batch} or @{tf.train.shuffle_batch} if you want a
+Use `tf.train.batch` or `tf.train.shuffle_batch` if you want a
single thread producing examples to batch, or if you have a
single subgraph producing examples but you want to run it in *N* threads
(where you increase *N* until it can keep the queue full). Use
-@{tf.train.batch_join} or @{tf.train.shuffle_batch_join}
+`tf.train.batch_join` or `tf.train.shuffle_batch_join`
if you have *N* different subgraphs producing examples to batch and you
want them run by *N* threads. Use `maybe_*` to enqueue conditionally.
-* @{tf.train.batch}
-* @{tf.train.maybe_batch}
-* @{tf.train.batch_join}
-* @{tf.train.maybe_batch_join}
-* @{tf.train.shuffle_batch}
-* @{tf.train.maybe_shuffle_batch}
-* @{tf.train.shuffle_batch_join}
-* @{tf.train.maybe_shuffle_batch_join}
+* `tf.train.batch`
+* `tf.train.maybe_batch`
+* `tf.train.batch_join`
+* `tf.train.maybe_batch_join`
+* `tf.train.shuffle_batch`
+* `tf.train.maybe_shuffle_batch`
+* `tf.train.shuffle_batch_join`
+* `tf.train.maybe_shuffle_batch_join`
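
As a sketch — `example` and `label` below are hypothetical stand-ins for tensors produced by a reader, and the capacities are arbitrary:

```python
import tensorflow as tf

# Hypothetical per-example tensors produced by a reader pipeline.
example = tf.placeholder(tf.float32, [10])
label = tf.placeholder(tf.int32, [])

example_batch, label_batch = tf.train.shuffle_batch(
    [example, label],
    batch_size=32,
    capacity=50000,
    min_after_dequeue=10000,
    num_threads=4)
```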
diff --git a/tensorflow/docs_src/api_guides/python/math_ops.md b/tensorflow/docs_src/api_guides/python/math_ops.md
index dee7f1618a..e738161e49 100644
--- a/tensorflow/docs_src/api_guides/python/math_ops.md
+++ b/tensorflow/docs_src/api_guides/python/math_ops.md
@@ -1,7 +1,7 @@
# Math
Note: Functions taking `Tensor` arguments can also take anything accepted by
-@{tf.convert_to_tensor}.
+`tf.convert_to_tensor`.
[TOC]
@@ -13,97 +13,97 @@ broadcasting](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html).
TensorFlow provides several operations that you can use to add basic arithmetic
operators to your graph.
-* @{tf.add}
-* @{tf.subtract}
-* @{tf.multiply}
-* @{tf.scalar_mul}
-* @{tf.div}
-* @{tf.divide}
-* @{tf.truediv}
-* @{tf.floordiv}
-* @{tf.realdiv}
-* @{tf.truncatediv}
-* @{tf.floor_div}
-* @{tf.truncatemod}
-* @{tf.floormod}
-* @{tf.mod}
-* @{tf.cross}
+* `tf.add`
+* `tf.subtract`
+* `tf.multiply`
+* `tf.scalar_mul`
+* `tf.div`
+* `tf.divide`
+* `tf.truediv`
+* `tf.floordiv`
+* `tf.realdiv`
+* `tf.truncatediv`
+* `tf.floor_div`
+* `tf.truncatemod`
+* `tf.floormod`
+* `tf.mod`
+* `tf.cross`
## Basic Math Functions
TensorFlow provides several operations that you can use to add basic
mathematical functions to your graph.
-* @{tf.add_n}
-* @{tf.abs}
-* @{tf.negative}
-* @{tf.sign}
-* @{tf.reciprocal}
-* @{tf.square}
-* @{tf.round}
-* @{tf.sqrt}
-* @{tf.rsqrt}
-* @{tf.pow}
-* @{tf.exp}
-* @{tf.expm1}
-* @{tf.log}
-* @{tf.log1p}
-* @{tf.ceil}
-* @{tf.floor}
-* @{tf.maximum}
-* @{tf.minimum}
-* @{tf.cos}
-* @{tf.sin}
-* @{tf.lbeta}
-* @{tf.tan}
-* @{tf.acos}
-* @{tf.asin}
-* @{tf.atan}
-* @{tf.cosh}
-* @{tf.sinh}
-* @{tf.asinh}
-* @{tf.acosh}
-* @{tf.atanh}
-* @{tf.lgamma}
-* @{tf.digamma}
-* @{tf.erf}
-* @{tf.erfc}
-* @{tf.squared_difference}
-* @{tf.igamma}
-* @{tf.igammac}
-* @{tf.zeta}
-* @{tf.polygamma}
-* @{tf.betainc}
-* @{tf.rint}
+* `tf.add_n`
+* `tf.abs`
+* `tf.negative`
+* `tf.sign`
+* `tf.reciprocal`
+* `tf.square`
+* `tf.round`
+* `tf.sqrt`
+* `tf.rsqrt`
+* `tf.pow`
+* `tf.exp`
+* `tf.expm1`
+* `tf.log`
+* `tf.log1p`
+* `tf.ceil`
+* `tf.floor`
+* `tf.maximum`
+* `tf.minimum`
+* `tf.cos`
+* `tf.sin`
+* `tf.lbeta`
+* `tf.tan`
+* `tf.acos`
+* `tf.asin`
+* `tf.atan`
+* `tf.cosh`
+* `tf.sinh`
+* `tf.asinh`
+* `tf.acosh`
+* `tf.atanh`
+* `tf.lgamma`
+* `tf.digamma`
+* `tf.erf`
+* `tf.erfc`
+* `tf.squared_difference`
+* `tf.igamma`
+* `tf.igammac`
+* `tf.zeta`
+* `tf.polygamma`
+* `tf.betainc`
+* `tf.rint`
## Matrix Math Functions
TensorFlow provides several operations that you can use to add linear algebra
functions on matrices to your graph.
-* @{tf.diag}
-* @{tf.diag_part}
-* @{tf.trace}
-* @{tf.transpose}
-* @{tf.eye}
-* @{tf.matrix_diag}
-* @{tf.matrix_diag_part}
-* @{tf.matrix_band_part}
-* @{tf.matrix_set_diag}
-* @{tf.matrix_transpose}
-* @{tf.matmul}
-* @{tf.norm}
-* @{tf.matrix_determinant}
-* @{tf.matrix_inverse}
-* @{tf.cholesky}
-* @{tf.cholesky_solve}
-* @{tf.matrix_solve}
-* @{tf.matrix_triangular_solve}
-* @{tf.matrix_solve_ls}
-* @{tf.qr}
-* @{tf.self_adjoint_eig}
-* @{tf.self_adjoint_eigvals}
-* @{tf.svd}
+* `tf.diag`
+* `tf.diag_part`
+* `tf.trace`
+* `tf.transpose`
+* `tf.eye`
+* `tf.matrix_diag`
+* `tf.matrix_diag_part`
+* `tf.matrix_band_part`
+* `tf.matrix_set_diag`
+* `tf.matrix_transpose`
+* `tf.matmul`
+* `tf.norm`
+* `tf.matrix_determinant`
+* `tf.matrix_inverse`
+* `tf.cholesky`
+* `tf.cholesky_solve`
+* `tf.matrix_solve`
+* `tf.matrix_triangular_solve`
+* `tf.matrix_solve_ls`
+* `tf.qr`
+* `tf.self_adjoint_eig`
+* `tf.self_adjoint_eigvals`
+* `tf.svd`
## Tensor Math Functions
@@ -111,7 +111,7 @@ functions on matrices to your graph.
TensorFlow provides operations that you can use to add tensor functions to your
graph.
-* @{tf.tensordot}
+* `tf.tensordot`
## Complex Number Functions
@@ -119,11 +119,11 @@ graph.
TensorFlow provides several operations that you can use to add complex number
functions to your graph.
-* @{tf.complex}
-* @{tf.conj}
-* @{tf.imag}
-* @{tf.angle}
-* @{tf.real}
+* `tf.complex`
+* `tf.conj`
+* `tf.imag`
+* `tf.angle`
+* `tf.real`
## Reduction
@@ -131,25 +131,25 @@ functions to your graph.
TensorFlow provides several operations that you can use to perform
common math computations that reduce various dimensions of a tensor.
-* @{tf.reduce_sum}
-* @{tf.reduce_prod}
-* @{tf.reduce_min}
-* @{tf.reduce_max}
-* @{tf.reduce_mean}
-* @{tf.reduce_all}
-* @{tf.reduce_any}
-* @{tf.reduce_logsumexp}
-* @{tf.count_nonzero}
-* @{tf.accumulate_n}
-* @{tf.einsum}
+* `tf.reduce_sum`
+* `tf.reduce_prod`
+* `tf.reduce_min`
+* `tf.reduce_max`
+* `tf.reduce_mean`
+* `tf.reduce_all`
+* `tf.reduce_any`
+* `tf.reduce_logsumexp`
+* `tf.count_nonzero`
+* `tf.accumulate_n`
+* `tf.einsum`
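
For example:

```python
import tensorflow as tf

x = tf.constant([[1, 2], [3, 4]])
tf.reduce_sum(x)           # 10
tf.reduce_sum(x, axis=0)   # [4, 6]
tf.reduce_mean(x, axis=1)  # [1, 3]  (integer division truncates)
```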
## Scan
TensorFlow provides several operations that you can use to perform scans
(running totals) across one axis of a tensor.
-* @{tf.cumsum}
-* @{tf.cumprod}
+* `tf.cumsum`
+* `tf.cumprod`
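
For example:

```python
import tensorflow as tf

a = tf.constant([1, 2, 3])
tf.cumsum(a)                  # [1, 3, 6]
tf.cumsum(a, exclusive=True)  # [0, 1, 3]
tf.cumprod(a)                 # [1, 2, 6]
```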
## Segmentation
@@ -172,15 +172,15 @@ tf.segment_sum(c, tf.constant([0, 0, 1]))
[5 6 7 8]]
```
-* @{tf.segment_sum}
-* @{tf.segment_prod}
-* @{tf.segment_min}
-* @{tf.segment_max}
-* @{tf.segment_mean}
-* @{tf.unsorted_segment_sum}
-* @{tf.sparse_segment_sum}
-* @{tf.sparse_segment_mean}
-* @{tf.sparse_segment_sqrt_n}
+* `tf.segment_sum`
+* `tf.segment_prod`
+* `tf.segment_min`
+* `tf.segment_max`
+* `tf.segment_mean`
+* `tf.unsorted_segment_sum`
+* `tf.sparse_segment_sum`
+* `tf.sparse_segment_mean`
+* `tf.sparse_segment_sqrt_n`
## Sequence Comparison and Indexing
@@ -190,10 +190,10 @@ comparison and index extraction to your graph. You can use these operations to
determine sequence differences and find the indexes of specific values in
a tensor.
-* @{tf.argmin}
-* @{tf.argmax}
-* @{tf.setdiff1d}
-* @{tf.where}
-* @{tf.unique}
-* @{tf.edit_distance}
-* @{tf.invert_permutation}
+* `tf.argmin`
+* `tf.argmax`
+* `tf.setdiff1d`
+* `tf.where`
+* `tf.unique`
+* `tf.edit_distance`
+* `tf.invert_permutation`
diff --git a/tensorflow/docs_src/api_guides/python/meta_graph.md b/tensorflow/docs_src/api_guides/python/meta_graph.md
index f1c3adc22c..7dbd9a56f4 100644
--- a/tensorflow/docs_src/api_guides/python/meta_graph.md
+++ b/tensorflow/docs_src/api_guides/python/meta_graph.md
@@ -7,10 +7,10 @@ term storage of graphs. The MetaGraph contains the information required
to continue training, perform evaluation, or run inference on a previously trained graph.
The APIs for exporting and importing the complete model are in
-the @{tf.train.Saver} class:
-@{tf.train.export_meta_graph}
+the `tf.train.Saver` class:
+`tf.train.export_meta_graph`
and
-@{tf.train.import_meta_graph}.
+`tf.train.import_meta_graph`.
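
As a minimal sketch (the checkpoint path is arbitrary): saving with a `Saver` writes a `.meta` file that `import_meta_graph` can later reload.

```python
import tensorflow as tf

v = tf.Variable(0, name='counter')
saver = tf.train.Saver()
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    saver.save(sess, './model.ckpt')  # also writes ./model.ckpt.meta

# Later, typically in a fresh graph/process:
# new_saver = tf.train.import_meta_graph('./model.ckpt.meta')
# new_saver.restore(sess, './model.ckpt')
```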
## What's in a MetaGraph
@@ -24,7 +24,7 @@ protocol buffer. It contains the following fields:
* [`CollectionDef`](https://www.tensorflow.org/code/tensorflow/core/protobuf/meta_graph.proto)
map that further describes additional components of the model such as
@{$python/state_ops$`Variables`},
-@{tf.train.QueueRunner}, etc.
+`tf.train.QueueRunner`, etc.
In order for a Python object to be serialized
to and from `MetaGraphDef`, the Python class must implement `to_proto()` and
@@ -122,7 +122,7 @@ The API for exporting a running model as a MetaGraph is `export_meta_graph()`.
The MetaGraph is also automatically exported via the `save()` API in
-@{tf.train.Saver}.
+`tf.train.Saver`.
## Import a MetaGraph
diff --git a/tensorflow/docs_src/api_guides/python/nn.md b/tensorflow/docs_src/api_guides/python/nn.md
index 8d8daaae19..40dda3941d 100644
--- a/tensorflow/docs_src/api_guides/python/nn.md
+++ b/tensorflow/docs_src/api_guides/python/nn.md
@@ -1,7 +1,7 @@
# Neural Network
Note: Functions taking `Tensor` arguments can also take anything accepted by
-@{tf.convert_to_tensor}.
+`tf.convert_to_tensor`.
[TOC]
@@ -16,17 +16,17 @@ functions (`relu`, `relu6`, `crelu` and `relu_x`), and random regularization
All activation ops apply componentwise, and produce a tensor of the same
shape as the input tensor.
-* @{tf.nn.relu}
-* @{tf.nn.relu6}
-* @{tf.nn.crelu}
-* @{tf.nn.elu}
-* @{tf.nn.selu}
-* @{tf.nn.softplus}
-* @{tf.nn.softsign}
-* @{tf.nn.dropout}
-* @{tf.nn.bias_add}
-* @{tf.sigmoid}
-* @{tf.tanh}
+* `tf.nn.relu`
+* `tf.nn.relu6`
+* `tf.nn.crelu`
+* `tf.nn.elu`
+* `tf.nn.selu`
+* `tf.nn.softplus`
+* `tf.nn.softsign`
+* `tf.nn.dropout`
+* `tf.nn.bias_add`
+* `tf.sigmoid`
+* `tf.tanh`
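
For example:

```python
import tensorflow as tf

x = tf.constant([-2.0, 0.0, 3.0])
tf.nn.relu(x)   # [0., 0., 3.]
tf.nn.relu6(x)  # [0., 0., 3.]  (additionally clips values above 6)
tf.sigmoid(x)   # approximately [0.12, 0.5, 0.95]
```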
## Convolution
@@ -112,22 +112,22 @@ vectors. For `depthwise_conv_2d`, each scalar component `input[b, i, j, k]`
is multiplied by a vector `filter[di, dj, k]`, and all the vectors are
concatenated.
-* @{tf.nn.convolution}
-* @{tf.nn.conv2d}
-* @{tf.nn.depthwise_conv2d}
-* @{tf.nn.depthwise_conv2d_native}
-* @{tf.nn.separable_conv2d}
-* @{tf.nn.atrous_conv2d}
-* @{tf.nn.atrous_conv2d_transpose}
-* @{tf.nn.conv2d_transpose}
-* @{tf.nn.conv1d}
-* @{tf.nn.conv3d}
-* @{tf.nn.conv3d_transpose}
-* @{tf.nn.conv2d_backprop_filter}
-* @{tf.nn.conv2d_backprop_input}
-* @{tf.nn.conv3d_backprop_filter_v2}
-* @{tf.nn.depthwise_conv2d_native_backprop_filter}
-* @{tf.nn.depthwise_conv2d_native_backprop_input}
+* `tf.nn.convolution`
+* `tf.nn.conv2d`
+* `tf.nn.depthwise_conv2d`
+* `tf.nn.depthwise_conv2d_native`
+* `tf.nn.separable_conv2d`
+* `tf.nn.atrous_conv2d`
+* `tf.nn.atrous_conv2d_transpose`
+* `tf.nn.conv2d_transpose`
+* `tf.nn.conv1d`
+* `tf.nn.conv3d`
+* `tf.nn.conv3d_transpose`
+* `tf.nn.conv2d_backprop_filter`
+* `tf.nn.conv2d_backprop_input`
+* `tf.nn.conv3d_backprop_filter_v2`
+* `tf.nn.depthwise_conv2d_native_backprop_filter`
+* `tf.nn.depthwise_conv2d_native_backprop_input`
## Pooling
@@ -144,14 +144,14 @@ In detail, the output is
where the indices also take into consideration the padding values. Please refer
to the `Convolution` section for details about the padding calculation.
-* @{tf.nn.avg_pool}
-* @{tf.nn.max_pool}
-* @{tf.nn.max_pool_with_argmax}
-* @{tf.nn.avg_pool3d}
-* @{tf.nn.max_pool3d}
-* @{tf.nn.fractional_avg_pool}
-* @{tf.nn.fractional_max_pool}
-* @{tf.nn.pool}
+* `tf.nn.avg_pool`
+* `tf.nn.max_pool`
+* `tf.nn.max_pool_with_argmax`
+* `tf.nn.avg_pool3d`
+* `tf.nn.max_pool3d`
+* `tf.nn.fractional_avg_pool`
+* `tf.nn.fractional_max_pool`
+* `tf.nn.pool`
## Morphological filtering
@@ -190,24 +190,24 @@ Dilation and erosion are dual to each other. The dilation of the input signal
Striding and padding are carried out in exactly the same way as in standard
convolution. Please refer to the `Convolution` section for details.
-* @{tf.nn.dilation2d}
-* @{tf.nn.erosion2d}
-* @{tf.nn.with_space_to_batch}
+* `tf.nn.dilation2d`
+* `tf.nn.erosion2d`
+* `tf.nn.with_space_to_batch`
## Normalization
Normalization is useful to prevent neurons from saturating when inputs may
have varying scale, and to aid generalization.
-* @{tf.nn.l2_normalize}
-* @{tf.nn.local_response_normalization}
-* @{tf.nn.sufficient_statistics}
-* @{tf.nn.normalize_moments}
-* @{tf.nn.moments}
-* @{tf.nn.weighted_moments}
-* @{tf.nn.fused_batch_norm}
-* @{tf.nn.batch_normalization}
-* @{tf.nn.batch_norm_with_global_normalization}
+* `tf.nn.l2_normalize`
+* `tf.nn.local_response_normalization`
+* `tf.nn.sufficient_statistics`
+* `tf.nn.normalize_moments`
+* `tf.nn.moments`
+* `tf.nn.weighted_moments`
+* `tf.nn.fused_batch_norm`
+* `tf.nn.batch_normalization`
+* `tf.nn.batch_norm_with_global_normalization`
## Losses
@@ -215,29 +215,29 @@ The loss ops measure error between two tensors, or between a tensor and zero.
These can be used for measuring the accuracy of a network in a regression task
or for regularization purposes (weight decay).
-* @{tf.nn.l2_loss}
-* @{tf.nn.log_poisson_loss}
+* `tf.nn.l2_loss`
+* `tf.nn.log_poisson_loss`
## Classification
TensorFlow provides several operations that help you perform classification.
-* @{tf.nn.sigmoid_cross_entropy_with_logits}
-* @{tf.nn.softmax}
-* @{tf.nn.log_softmax}
-* @{tf.nn.softmax_cross_entropy_with_logits}
-* @{tf.nn.softmax_cross_entropy_with_logits_v2} - identical to the base
+* `tf.nn.sigmoid_cross_entropy_with_logits`
+* `tf.nn.softmax`
+* `tf.nn.log_softmax`
+* `tf.nn.softmax_cross_entropy_with_logits`
+* `tf.nn.softmax_cross_entropy_with_logits_v2` - identical to the base
version, except it allows gradient propagation into the labels.
-* @{tf.nn.sparse_softmax_cross_entropy_with_logits}
-* @{tf.nn.weighted_cross_entropy_with_logits}
+* `tf.nn.sparse_softmax_cross_entropy_with_logits`
+* `tf.nn.weighted_cross_entropy_with_logits`
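
A minimal sketch of the cross-entropy ops (one example, three classes; values arbitrary — note these ops expect unscaled logits, not softmax outputs):

```python
import tensorflow as tf

logits = tf.constant([[2.0, 1.0, 0.1]])
labels = tf.constant([[1.0, 0.0, 0.0]])
loss = tf.nn.softmax_cross_entropy_with_logits_v2(labels=labels, logits=logits)
```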
## Embeddings
TensorFlow provides library support for looking up values in embedding
tensors.
-* @{tf.nn.embedding_lookup}
-* @{tf.nn.embedding_lookup_sparse}
+* `tf.nn.embedding_lookup`
+* `tf.nn.embedding_lookup_sparse`
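
For example:

```python
import tensorflow as tf

params = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
ids = tf.constant([2, 0])
tf.nn.embedding_lookup(params, ids)  # [[5., 6.], [1., 2.]]
```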
## Recurrent Neural Networks
@@ -245,23 +245,23 @@ TensorFlow provides a number of methods for constructing Recurrent
Neural Networks. Most accept an `RNNCell`-subclassed object
(see the documentation for `tf.contrib.rnn`).
-* @{tf.nn.dynamic_rnn}
-* @{tf.nn.bidirectional_dynamic_rnn}
-* @{tf.nn.raw_rnn}
+* `tf.nn.dynamic_rnn`
+* `tf.nn.bidirectional_dynamic_rnn`
+* `tf.nn.raw_rnn`
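
For example, a sketch with arbitrary sizes:

```python
import tensorflow as tf

inputs = tf.placeholder(tf.float32, [None, None, 8])  # [batch, time, depth]
seq_len = tf.placeholder(tf.int32, [None])

cell = tf.nn.rnn_cell.BasicLSTMCell(num_units=16)
outputs, final_state = tf.nn.dynamic_rnn(
    cell, inputs, sequence_length=seq_len, dtype=tf.float32)
```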
## Connectionist Temporal Classification (CTC)
-* @{tf.nn.ctc_loss}
-* @{tf.nn.ctc_greedy_decoder}
-* @{tf.nn.ctc_beam_search_decoder}
+* `tf.nn.ctc_loss`
+* `tf.nn.ctc_greedy_decoder`
+* `tf.nn.ctc_beam_search_decoder`
## Evaluation
The evaluation ops are useful for measuring the performance of a network.
They are typically used at evaluation time.
-* @{tf.nn.top_k}
-* @{tf.nn.in_top_k}
+* `tf.nn.top_k`
+* `tf.nn.in_top_k`
## Candidate Sampling
@@ -281,29 +281,29 @@ Reference](https://www.tensorflow.org/extras/candidate_sampling.pdf)
TensorFlow provides the following sampled loss functions for faster training.
-* @{tf.nn.nce_loss}
-* @{tf.nn.sampled_softmax_loss}
+* `tf.nn.nce_loss`
+* `tf.nn.sampled_softmax_loss`
### Candidate Samplers
TensorFlow provides the following samplers for randomly sampling candidate
classes when using one of the sampled loss functions above.
-* @{tf.nn.uniform_candidate_sampler}
-* @{tf.nn.log_uniform_candidate_sampler}
-* @{tf.nn.learned_unigram_candidate_sampler}
-* @{tf.nn.fixed_unigram_candidate_sampler}
+* `tf.nn.uniform_candidate_sampler`
+* `tf.nn.log_uniform_candidate_sampler`
+* `tf.nn.learned_unigram_candidate_sampler`
+* `tf.nn.fixed_unigram_candidate_sampler`
### Miscellaneous candidate sampling utilities
-* @{tf.nn.compute_accidental_hits}
+* `tf.nn.compute_accidental_hits`
### Quantization ops
-* @{tf.nn.quantized_conv2d}
-* @{tf.nn.quantized_relu_x}
-* @{tf.nn.quantized_max_pool}
-* @{tf.nn.quantized_avg_pool}
+* `tf.nn.quantized_conv2d`
+* `tf.nn.quantized_relu_x`
+* `tf.nn.quantized_max_pool`
+* `tf.nn.quantized_avg_pool`
## Notes on SAME Convolution Padding
diff --git a/tensorflow/docs_src/api_guides/python/python_io.md b/tensorflow/docs_src/api_guides/python/python_io.md
index 06282e49d5..e7e82a8701 100644
--- a/tensorflow/docs_src/api_guides/python/python_io.md
+++ b/tensorflow/docs_src/api_guides/python/python_io.md
@@ -5,10 +5,10 @@ A TFRecords file represents a sequence of (binary) strings. The format is not
random access, so it is suitable for streaming large amounts of data but not
suitable if fast sharding or other non-sequential access is desired.
-* @{tf.python_io.TFRecordWriter}
-* @{tf.python_io.tf_record_iterator}
-* @{tf.python_io.TFRecordCompressionType}
-* @{tf.python_io.TFRecordOptions}
+* `tf.python_io.TFRecordWriter`
+* `tf.python_io.tf_record_iterator`
+* `tf.python_io.TFRecordCompressionType`
+* `tf.python_io.TFRecordOptions`
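
For example, writing two raw records and reading them back (the filename is arbitrary):

```python
import tensorflow as tf

with tf.python_io.TFRecordWriter('data.tfrecords') as writer:
    for record in [b'first', b'second']:
        writer.write(record)

for record in tf.python_io.tf_record_iterator('data.tfrecords'):
    print(record)
```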
- - -
diff --git a/tensorflow/docs_src/api_guides/python/reading_data.md b/tensorflow/docs_src/api_guides/python/reading_data.md
index d7d0904ae2..78c36d965c 100644
--- a/tensorflow/docs_src/api_guides/python/reading_data.md
+++ b/tensorflow/docs_src/api_guides/python/reading_data.md
@@ -16,7 +16,7 @@ There are four methods of getting data into a TensorFlow program:
## `tf.data` API
-See the @{$guide/datasets} for an in-depth explanation of @{tf.data.Dataset}.
+See the @{$guide/datasets} for an in-depth explanation of `tf.data.Dataset`.
The `tf.data` API enables you to extract and preprocess data
from different input/file formats, and apply transformations such as batching,
shuffling, and mapping functions over the dataset. This is an improved version
@@ -44,7 +44,7 @@ with tf.Session():
While you can replace any Tensor with feed data, including variables and
constants, the best practice is to use a
-@{tf.placeholder} node. A
+`tf.placeholder` node. A
`placeholder` exists solely to serve as the target of feeds. It is not
initialized and contains no data. A placeholder generates an error if
it is executed without a feed, so you won't forget to feed it.
@@ -74,9 +74,9 @@ A typical queue-based pipeline for reading records from files has the following
For the list of filenames, use either a constant string Tensor (like
`["file0", "file1"]` or `[("file%d" % i) for i in range(2)]`) or the
-@{tf.train.match_filenames_once} function.
+`tf.train.match_filenames_once` function.
-Pass the list of filenames to the @{tf.train.string_input_producer} function.
+Pass the list of filenames to the `tf.train.string_input_producer` function.
`string_input_producer` creates a FIFO queue for holding the filenames until
the reader needs them.
@@ -102,8 +102,8 @@ decode this string into the tensors that make up an example.
To read text files in [comma-separated value (CSV)
format](https://tools.ietf.org/html/rfc4180), use a
-@{tf.TextLineReader} with the
-@{tf.decode_csv} operation. For example:
+`tf.TextLineReader` with the
+`tf.decode_csv` operation. For example:
```python
filename_queue = tf.train.string_input_producer(["file0.csv", "file1.csv"])
@@ -143,8 +143,8 @@ block while it waits for filenames from the queue.
#### Fixed length records
To read binary files in which each record is a fixed number of bytes, use
-@{tf.FixedLengthRecordReader}
-with the @{tf.decode_raw} operation.
+`tf.FixedLengthRecordReader`
+with the `tf.decode_raw` operation.
The `decode_raw` op converts from a string to a uint8 tensor.
For example, [the CIFAR-10 dataset](http://www.cs.toronto.edu/~kriz/cifar.html)
@@ -169,12 +169,12 @@ containing
as a field). You write a little program that gets your data, stuffs it in an
`Example` protocol buffer, serializes the protocol buffer to a string, and then
writes the string to a TFRecords file using the
-@{tf.python_io.TFRecordWriter}.
+`tf.python_io.TFRecordWriter`.
For example,
[`tensorflow/examples/how_tos/reading_data/convert_to_records.py`](https://www.tensorflow.org/code/tensorflow/examples/how_tos/reading_data/convert_to_records.py)
converts MNIST data to this format.
-The recommended way to read a TFRecord file is with a @{tf.data.TFRecordDataset}, [as in this example](https://www.tensorflow.org/code/tensorflow/examples/how_tos/reading_data/fully_connected_reader.py):
+The recommended way to read a TFRecord file is with a `tf.data.TFRecordDataset`, [as in this example](https://www.tensorflow.org/code/tensorflow/examples/how_tos/reading_data/fully_connected_reader.py):
``` python
dataset = tf.data.TFRecordDataset(filename)
@@ -208,7 +208,7 @@ for an example.
At the end of the pipeline we use another queue to batch together examples for
training, evaluation, or inference. For this we use a queue that randomizes the
order of examples, using
-@{tf.train.shuffle_batch}.
+`tf.train.shuffle_batch`.
Example:
@@ -240,7 +240,7 @@ def input_pipeline(filenames, batch_size, num_epochs=None):
If you need more parallelism or shuffling of examples between files, use
multiple reader instances using
-@{tf.train.shuffle_batch_join}.
+`tf.train.shuffle_batch_join`.
For example:
```
@@ -266,7 +266,7 @@ epoch until all the files from the epoch have been started. (It is also usually
sufficient to have a single thread filling the filename queue.)
An alternative is to use a single reader via
-@{tf.train.shuffle_batch}
+`tf.train.shuffle_batch`
with `num_threads` bigger than 1. This will make all the threads read from a
single file at the same time (but faster than with 1 thread), instead of from
N files at once.
This can be important:
@@ -284,13 +284,13 @@ enough reading threads, that summary will stay above zero. You can
### Creating threads to prefetch using `QueueRunner` objects
The short version: many of the `tf.train` functions listed above add
-@{tf.train.QueueRunner} objects to your
+`tf.train.QueueRunner` objects to your
graph. These require that you call
-@{tf.train.start_queue_runners}
+`tf.train.start_queue_runners`
before running any training or inference steps, or your program will hang forever. This
will start threads that run the input pipeline, filling the example queue so
that the dequeue to get the examples will succeed. This is best combined with a
-@{tf.train.Coordinator} to cleanly
+`tf.train.Coordinator` to cleanly
shut down these threads when there are errors. If you set a limit on the number
of epochs, that will use an epoch counter that will need to be initialized. The
recommended code pattern combining these is:
@@ -343,25 +343,25 @@ queue.
</div>
The helpers in `tf.train` that create these queues and enqueuing operations add
-a @{tf.train.QueueRunner} to the
+a `tf.train.QueueRunner` to the
graph using the
-@{tf.train.add_queue_runner}
+`tf.train.add_queue_runner`
function. Each `QueueRunner` is responsible for one stage, and holds the list of
enqueue operations that need to be run in threads. Once the graph is
constructed, the
-@{tf.train.start_queue_runners}
+`tf.train.start_queue_runners`
function asks each QueueRunner in the graph to start its threads running the
enqueuing operations.
If all goes well, you can now run your training steps and the queues will be
filled by the background threads. If you have set an epoch limit, at some point
an attempt to dequeue examples will get an
-@{tf.errors.OutOfRangeError}. This
+`tf.errors.OutOfRangeError`. This
is the TensorFlow equivalent of "end of file" (EOF) -- this means the epoch
limit has been reached and no more examples are available.
The last ingredient is the
-@{tf.train.Coordinator}. This is responsible
+`tf.train.Coordinator`. This is responsible
for letting all the threads know if anything has signaled a shut down. Most
commonly this would be because an exception was raised, for example one of the
threads got an error when running some operation (or an ordinary Python
@@ -396,21 +396,21 @@ associated with a single QueueRunner. If this isn't the last thread in the
QueueRunner, the `OutOfRange` error just causes the one thread to exit. This
allows the other threads, which are still finishing up their last file, to
proceed until they finish as well. (Assuming you are using a
-@{tf.train.Coordinator},
+`tf.train.Coordinator`,
other types of errors will cause all the threads to stop.) Once all the reader
threads hit the `OutOfRange` error, only then does the next queue, the example
queue, get closed.
Again, the example queue will have some elements queued, so training will
continue until those are exhausted. If the example queue is a
-@{tf.RandomShuffleQueue}, say
+`tf.RandomShuffleQueue`, say
because you are using `shuffle_batch` or `shuffle_batch_join`, it normally will
avoid ever having fewer elements buffered than its `min_after_dequeue` attr specifies.
However, once the queue is closed that restriction will be lifted and the queue
will eventually empty. At that point the actual training threads, when they
try to dequeue from the example queue, will start getting `OutOfRange` errors and
exiting. Once all the training threads are done,
-@{tf.train.Coordinator.join}
+`tf.train.Coordinator.join`
will return and you can exit cleanly.
### Filtering records or producing multiple examples per record
@@ -426,7 +426,7 @@ when calling one of the batching functions (such as `shuffle_batch` or
SparseTensors don't play well with queues. If you use SparseTensors you have
to decode the string records using
-@{tf.parse_example} **after**
+`tf.parse_example` **after**
batching (instead of using `tf.parse_single_example` before batching).
## Preloaded data
@@ -475,11 +475,11 @@ update it when training. Setting `collections=[]` keeps the variable out of the
`GraphKeys.GLOBAL_VARIABLES` collection used for saving and restoring checkpoints.
Either way,
-@{tf.train.slice_input_producer}
+`tf.train.slice_input_producer`
can be used to produce a slice at a time. This shuffles the examples across an
entire epoch, so further shuffling when batching is undesirable. So instead of
using the `shuffle_batch` functions, we use the plain
-@{tf.train.batch} function. To use
+`tf.train.batch` function. To use
multiple preprocessing threads, set the `num_threads` parameter to a number
bigger than 1.
@@ -500,7 +500,7 @@ sessions, maybe in separate processes:
* The evaluation process restores the checkpoint files into an inference
model that reads validation input data.
-This is what is done @{tf.estimator$estimators} and manually in
+This is what is done by `tf.estimator` and manually in
@{$deep_cnn#save-and-restore-checkpoints$the example CIFAR-10 model}.
This has a couple of benefits:
@@ -517,6 +517,6 @@ that allow the user to change the input pipeline without rebuilding the graph or
session.
Note: Regardless of the implementation, many
-operations (like @{tf.layers.batch_normalization}, and @{tf.layers.dropout})
+operations (like `tf.layers.batch_normalization`, and `tf.layers.dropout`)
need to know if they are in training or evaluation mode, and you must be
careful to set this appropriately if you change the data source.
diff --git a/tensorflow/docs_src/api_guides/python/regression_examples.md b/tensorflow/docs_src/api_guides/python/regression_examples.md
index 7de2be0552..f8abbf0f97 100644
--- a/tensorflow/docs_src/api_guides/python/regression_examples.md
+++ b/tensorflow/docs_src/api_guides/python/regression_examples.md
@@ -8,25 +8,25 @@ to implement regression in Estimators:
<tr>
<td><a href="https://www.tensorflow.org/code/tensorflow/examples/get_started/regression/linear_regression.py">linear_regression.py</a></td>
- <td>Use the @{tf.estimator.LinearRegressor} Estimator to train a
+ <td>Use the `tf.estimator.LinearRegressor` Estimator to train a
regression model on numeric data.</td>
</tr>
<tr>
<td><a href="https://www.tensorflow.org/code/tensorflow/examples/get_started/regression/linear_regression_categorical.py">linear_regression_categorical.py</a></td>
- <td>Use the @{tf.estimator.LinearRegressor} Estimator to train a
+ <td>Use the `tf.estimator.LinearRegressor` Estimator to train a
regression model on categorical data.</td>
</tr>
<tr>
<td><a href="https://www.tensorflow.org/code/tensorflow/examples/get_started/regression/dnn_regression.py">dnn_regression.py</a></td>
- <td>Use the @{tf.estimator.DNNRegressor} Estimator to train a
+ <td>Use the `tf.estimator.DNNRegressor` Estimator to train a
regression model on discrete data with a deep neural network.</td>
</tr>
<tr>
<td><a href="https://www.tensorflow.org/code/tensorflow/examples/get_started/regression/custom_regression.py">custom_regression.py</a></td>
- <td>Use @{tf.estimator.Estimator} to train a customized dnn
+ <td>Use `tf.estimator.Estimator` to train a customized dnn
regression model.</td>
</tr>
@@ -219,7 +219,7 @@ The `custom_regression.py` example also trains a model that predicts the price
of a car based on mixed real-valued and categorical input features, described by
`feature_columns`. Unlike `linear_regression_categorical.py` and
`dnn_regression.py`, this example does not use a pre-made estimator, but defines
-a custom model using the base @{tf.estimator.Estimator$`Estimator`} class. The
+a custom model using the base `tf.estimator.Estimator` class. The
custom model is quite similar to the model defined by `dnn_regression.py`.
The custom model is defined by the `model_fn` argument to the constructor. The
@@ -227,6 +227,6 @@ customization is made more reusable through `params` dictionary, which is later
passed through to the `model_fn` when the `model_fn` is called.
The `model_fn` returns an
-@{tf.estimator.EstimatorSpec$`EstimatorSpec`} which is a simple structure
+`tf.estimator.EstimatorSpec`, which is a simple structure
indicating to the `Estimator` which operations should be run to accomplish
various tasks.
diff --git a/tensorflow/docs_src/api_guides/python/session_ops.md b/tensorflow/docs_src/api_guides/python/session_ops.md
index 5176e3549c..5f41bcf209 100644
--- a/tensorflow/docs_src/api_guides/python/session_ops.md
+++ b/tensorflow/docs_src/api_guides/python/session_ops.md
@@ -1,7 +1,7 @@
# Tensor Handle Operations
Note: Functions taking `Tensor` arguments can also take anything accepted by
-@{tf.convert_to_tensor}.
+`tf.convert_to_tensor`.
[TOC]
@@ -10,6 +10,6 @@ Note: Functions taking `Tensor` arguments can also take anything accepted by
TensorFlow provides several operators that allow the user to keep tensors
"in-place" across run calls.
-* @{tf.get_session_handle}
-* @{tf.get_session_tensor}
-* @{tf.delete_session_tensor}
+* `tf.get_session_handle`
+* `tf.get_session_tensor`
+* `tf.delete_session_tensor`
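A minimal sketch of the round trip, keeping an intermediate result alive in the session between `run` calls (the values are illustrative):

```python
import tensorflow as tf

c = tf.multiply(tf.constant(2.0), tf.constant(3.0))
h_op = tf.get_session_handle(c)

with tf.Session() as sess:
    h = sess.run(h_op)  # computes c and stores the result in the session
    # Feed the handle back in: p is a placeholder, x is the stored tensor.
    p, x = tf.get_session_tensor(h.handle, tf.float32)
    print(sess.run(tf.multiply(x, 10.0), feed_dict={p: h.handle}))  # 60.0
    # tf.delete_session_tensor can then free the stored value.
```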
diff --git a/tensorflow/docs_src/api_guides/python/sparse_ops.md b/tensorflow/docs_src/api_guides/python/sparse_ops.md
index 19d5faba05..b360055ed0 100644
--- a/tensorflow/docs_src/api_guides/python/sparse_ops.md
+++ b/tensorflow/docs_src/api_guides/python/sparse_ops.md
@@ -1,7 +1,7 @@
# Sparse Tensors
Note: Functions taking `Tensor` arguments can also take anything accepted by
-@{tf.convert_to_tensor}.
+`tf.convert_to_tensor`.
[TOC]
@@ -12,34 +12,34 @@ in multiple dimensions. Contrast this representation with `IndexedSlices`,
which is efficient for representing tensors that are sparse in their first
dimension, and dense along all other dimensions.
-* @{tf.SparseTensor}
-* @{tf.SparseTensorValue}
+* `tf.SparseTensor`
+* `tf.SparseTensorValue`
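For example, a small sketch of the coordinate-list representation:

```python
import tensorflow as tf

# A 3x4 matrix with two nonzero entries: [0, 0] = 1.0 and [1, 2] = 2.0.
st = tf.SparseTensor(indices=[[0, 0], [1, 2]],
                     values=[1.0, 2.0],
                     dense_shape=[3, 4])

with tf.Session() as sess:
    print(sess.run(tf.sparse_tensor_to_dense(st)))
```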
## Conversion
-* @{tf.sparse_to_dense}
-* @{tf.sparse_tensor_to_dense}
-* @{tf.sparse_to_indicator}
-* @{tf.sparse_merge}
+* `tf.sparse_to_dense`
+* `tf.sparse_tensor_to_dense`
+* `tf.sparse_to_indicator`
+* `tf.sparse_merge`
## Manipulation
-* @{tf.sparse_concat}
-* @{tf.sparse_reorder}
-* @{tf.sparse_reshape}
-* @{tf.sparse_split}
-* @{tf.sparse_retain}
-* @{tf.sparse_reset_shape}
-* @{tf.sparse_fill_empty_rows}
-* @{tf.sparse_transpose}
+* `tf.sparse_concat`
+* `tf.sparse_reorder`
+* `tf.sparse_reshape`
+* `tf.sparse_split`
+* `tf.sparse_retain`
+* `tf.sparse_reset_shape`
+* `tf.sparse_fill_empty_rows`
+* `tf.sparse_transpose`
## Reduction
-* @{tf.sparse_reduce_sum}
-* @{tf.sparse_reduce_sum_sparse}
+* `tf.sparse_reduce_sum`
+* `tf.sparse_reduce_sum_sparse`
## Math Operations
-* @{tf.sparse_add}
-* @{tf.sparse_softmax}
-* @{tf.sparse_tensor_dense_matmul}
-* @{tf.sparse_maximum}
-* @{tf.sparse_minimum}
+* `tf.sparse_add`
+* `tf.sparse_softmax`
+* `tf.sparse_tensor_dense_matmul`
+* `tf.sparse_maximum`
+* `tf.sparse_minimum`
diff --git a/tensorflow/docs_src/api_guides/python/spectral_ops.md b/tensorflow/docs_src/api_guides/python/spectral_ops.md
index 022c471ef1..f6d109a3a0 100644
--- a/tensorflow/docs_src/api_guides/python/spectral_ops.md
+++ b/tensorflow/docs_src/api_guides/python/spectral_ops.md
@@ -2,24 +2,25 @@
[TOC]
-The @{tf.spectral} module supports several spectral decomposition operations
+The `tf.spectral` module supports several spectral decomposition operations
that you can use to transform Tensors of real and complex signals.
## Discrete Fourier Transforms
-* @{tf.spectral.fft}
-* @{tf.spectral.ifft}
-* @{tf.spectral.fft2d}
-* @{tf.spectral.ifft2d}
-* @{tf.spectral.fft3d}
-* @{tf.spectral.ifft3d}
-* @{tf.spectral.rfft}
-* @{tf.spectral.irfft}
-* @{tf.spectral.rfft2d}
-* @{tf.spectral.irfft2d}
-* @{tf.spectral.rfft3d}
-* @{tf.spectral.irfft3d}
+* `tf.spectral.fft`
+* `tf.spectral.ifft`
+* `tf.spectral.fft2d`
+* `tf.spectral.ifft2d`
+* `tf.spectral.fft3d`
+* `tf.spectral.ifft3d`
+* `tf.spectral.rfft`
+* `tf.spectral.irfft`
+* `tf.spectral.rfft2d`
+* `tf.spectral.irfft2d`
+* `tf.spectral.rfft3d`
+* `tf.spectral.irfft3d`
## Discrete Cosine Transforms
-* @{tf.spectral.dct}
+* `tf.spectral.dct`
+* `tf.spectral.idct`
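For instance, a minimal sketch of a real-to-complex transform and its inverse (the batch and frame sizes are arbitrary):

```python
import tensorflow as tf

frames = tf.random_normal([4, 64])      # a batch of real-valued frames
spectra = tf.spectral.rfft(frames)      # complex64, shape [4, 33]
recovered = tf.spectral.irfft(spectra)  # real again, shape [4, 64]
```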
diff --git a/tensorflow/docs_src/api_guides/python/state_ops.md b/tensorflow/docs_src/api_guides/python/state_ops.md
index ec2d877386..fc55ea1481 100644
--- a/tensorflow/docs_src/api_guides/python/state_ops.md
+++ b/tensorflow/docs_src/api_guides/python/state_ops.md
@@ -1,68 +1,68 @@
# Variables
Note: Functions taking `Tensor` arguments can also take anything accepted by
-@{tf.convert_to_tensor}.
+`tf.convert_to_tensor`.
[TOC]
## Variables
-* @{tf.Variable}
+* `tf.Variable`
## Variable helper functions
TensorFlow provides a set of functions to help manage the variables
collected in the graph.
-* @{tf.global_variables}
-* @{tf.local_variables}
-* @{tf.model_variables}
-* @{tf.trainable_variables}
-* @{tf.moving_average_variables}
-* @{tf.global_variables_initializer}
-* @{tf.local_variables_initializer}
-* @{tf.variables_initializer}
-* @{tf.is_variable_initialized}
-* @{tf.report_uninitialized_variables}
-* @{tf.assert_variables_initialized}
-* @{tf.assign}
-* @{tf.assign_add}
-* @{tf.assign_sub}
+* `tf.global_variables`
+* `tf.local_variables`
+* `tf.model_variables`
+* `tf.trainable_variables`
+* `tf.moving_average_variables`
+* `tf.global_variables_initializer`
+* `tf.local_variables_initializer`
+* `tf.variables_initializer`
+* `tf.is_variable_initialized`
+* `tf.report_uninitialized_variables`
+* `tf.assert_variables_initialized`
+* `tf.assign`
+* `tf.assign_add`
+* `tf.assign_sub`
## Saving and Restoring Variables
-* @{tf.train.Saver}
-* @{tf.train.latest_checkpoint}
-* @{tf.train.get_checkpoint_state}
-* @{tf.train.update_checkpoint_state}
+* `tf.train.Saver`
+* `tf.train.latest_checkpoint`
+* `tf.train.get_checkpoint_state`
+* `tf.train.update_checkpoint_state`
## Sharing Variables
TensorFlow provides several classes and operations that you can use to
create variables contingent on certain conditions.
-* @{tf.get_variable}
-* @{tf.get_local_variable}
-* @{tf.VariableScope}
-* @{tf.variable_scope}
-* @{tf.variable_op_scope}
-* @{tf.get_variable_scope}
-* @{tf.make_template}
-* @{tf.no_regularizer}
-* @{tf.constant_initializer}
-* @{tf.random_normal_initializer}
-* @{tf.truncated_normal_initializer}
-* @{tf.random_uniform_initializer}
-* @{tf.uniform_unit_scaling_initializer}
-* @{tf.zeros_initializer}
-* @{tf.ones_initializer}
-* @{tf.orthogonal_initializer}
+* `tf.get_variable`
+* `tf.get_local_variable`
+* `tf.VariableScope`
+* `tf.variable_scope`
+* `tf.variable_op_scope`
+* `tf.get_variable_scope`
+* `tf.make_template`
+* `tf.no_regularizer`
+* `tf.constant_initializer`
+* `tf.random_normal_initializer`
+* `tf.truncated_normal_initializer`
+* `tf.random_uniform_initializer`
+* `tf.uniform_unit_scaling_initializer`
+* `tf.zeros_initializer`
+* `tf.ones_initializer`
+* `tf.orthogonal_initializer`
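A minimal sketch of sharing one variable between two call sites via `tf.variable_scope` and `tf.get_variable`:

```python
import tensorflow as tf

def linear(x):
    # Created on the first call, reused afterwards per the scope's reuse flag.
    w = tf.get_variable('w', shape=[3, 1],
                        initializer=tf.truncated_normal_initializer())
    return tf.matmul(x, w)

with tf.variable_scope('model'):
    y1 = linear(tf.ones([2, 3]))
with tf.variable_scope('model', reuse=True):
    y2 = linear(tf.ones([5, 3]))   # uses the same 'model/w' variable
```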
## Variable Partitioners for Sharding
-* @{tf.fixed_size_partitioner}
-* @{tf.variable_axis_size_partitioner}
-* @{tf.min_max_variable_partitioner}
+* `tf.fixed_size_partitioner`
+* `tf.variable_axis_size_partitioner`
+* `tf.min_max_variable_partitioner`
## Sparse Variable Updates
@@ -73,38 +73,38 @@ only a small subset of embedding vectors change in any given step.
Since a sparse update of a large tensor may be generated automatically during
gradient computation (as in the gradient of
-@{tf.gather}),
-an @{tf.IndexedSlices} class is provided that encapsulates a set
+`tf.gather`),
+a `tf.IndexedSlices` class is provided that encapsulates a set
of sparse indices and values. `IndexedSlices` objects are detected and handled
automatically by the optimizers in most cases.
-* @{tf.scatter_update}
-* @{tf.scatter_add}
-* @{tf.scatter_sub}
-* @{tf.scatter_mul}
-* @{tf.scatter_div}
-* @{tf.scatter_min}
-* @{tf.scatter_max}
-* @{tf.scatter_nd_update}
-* @{tf.scatter_nd_add}
-* @{tf.scatter_nd_sub}
-* @{tf.sparse_mask}
-* @{tf.IndexedSlices}
+* `tf.scatter_update`
+* `tf.scatter_add`
+* `tf.scatter_sub`
+* `tf.scatter_mul`
+* `tf.scatter_div`
+* `tf.scatter_min`
+* `tf.scatter_max`
+* `tf.scatter_nd_update`
+* `tf.scatter_nd_add`
+* `tf.scatter_nd_sub`
+* `tf.sparse_mask`
+* `tf.IndexedSlices`
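For example, a small sketch of an in-place row update:

```python
import tensorflow as tf

v = tf.Variable(tf.zeros([4, 2]))
# Overwrite rows 0 and 2; the remaining rows are left untouched.
update = tf.scatter_update(v, indices=[0, 2], updates=tf.ones([2, 2]))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(update))
```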
### Read-only Lookup Tables
-* @{tf.initialize_all_tables}
-* @{tf.tables_initializer}
+* `tf.initialize_all_tables`
+* `tf.tables_initializer`
## Exporting and Importing Meta Graphs
-* @{tf.train.export_meta_graph}
-* @{tf.train.import_meta_graph}
+* `tf.train.export_meta_graph`
+* `tf.train.import_meta_graph`
# Deprecated functions (removed after 2017-03-02). Please don't use them.
-* @{tf.all_variables}
-* @{tf.initialize_all_variables}
-* @{tf.initialize_local_variables}
-* @{tf.initialize_variables}
+* `tf.all_variables`
+* `tf.initialize_all_variables`
+* `tf.initialize_local_variables`
+* `tf.initialize_variables`
diff --git a/tensorflow/docs_src/api_guides/python/string_ops.md b/tensorflow/docs_src/api_guides/python/string_ops.md
index e9be4f156a..24a3aad642 100644
--- a/tensorflow/docs_src/api_guides/python/string_ops.md
+++ b/tensorflow/docs_src/api_guides/python/string_ops.md
@@ -1,7 +1,7 @@
# Strings
Note: Functions taking `Tensor` arguments can also take anything accepted by
-@{tf.convert_to_tensor}.
+`tf.convert_to_tensor`.
[TOC]
@@ -10,30 +10,30 @@ Note: Functions taking `Tensor` arguments can also take anything accepted by
String hashing ops take a string input tensor and map each element to an
integer.
-* @{tf.string_to_hash_bucket_fast}
-* @{tf.string_to_hash_bucket_strong}
-* @{tf.string_to_hash_bucket}
+* `tf.string_to_hash_bucket_fast`
+* `tf.string_to_hash_bucket_strong`
+* `tf.string_to_hash_bucket`
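For example, a minimal sketch mapping strings to bucket ids (the bucket count is arbitrary):

```python
import tensorflow as tf

words = tf.constant(['hello', 'tensorflow'])
buckets = tf.string_to_hash_bucket_fast(words, num_buckets=100)

with tf.Session() as sess:
    print(sess.run(buckets))   # two deterministic integers in [0, 100)
```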
## Joining
String joining ops concatenate elements of input string tensors to produce a new
string tensor.
-* @{tf.reduce_join}
-* @{tf.string_join}
+* `tf.reduce_join`
+* `tf.string_join`
## Splitting
-* @{tf.string_split}
-* @{tf.substr}
+* `tf.string_split`
+* `tf.substr`
## Conversion
-* @{tf.as_string}
-* @{tf.string_to_number}
+* `tf.as_string`
+* `tf.string_to_number`
-* @{tf.decode_raw}
-* @{tf.decode_csv}
+* `tf.decode_raw`
+* `tf.decode_csv`
-* @{tf.encode_base64}
-* @{tf.decode_base64}
+* `tf.encode_base64`
+* `tf.decode_base64`
diff --git a/tensorflow/docs_src/api_guides/python/summary.md b/tensorflow/docs_src/api_guides/python/summary.md
index eda119ab24..e290703b7d 100644
--- a/tensorflow/docs_src/api_guides/python/summary.md
+++ b/tensorflow/docs_src/api_guides/python/summary.md
@@ -7,17 +7,17 @@ then accessible in tools such as @{$summaries_and_tensorboard$TensorBoard}.
## Generation of Summaries
### Class for writing Summaries
-* @{tf.summary.FileWriter}
-* @{tf.summary.FileWriterCache}
+* `tf.summary.FileWriter`
+* `tf.summary.FileWriterCache`
### Summary Ops
-* @{tf.summary.tensor_summary}
-* @{tf.summary.scalar}
-* @{tf.summary.histogram}
-* @{tf.summary.audio}
-* @{tf.summary.image}
-* @{tf.summary.merge}
-* @{tf.summary.merge_all}
+* `tf.summary.tensor_summary`
+* `tf.summary.scalar`
+* `tf.summary.histogram`
+* `tf.summary.audio`
+* `tf.summary.image`
+* `tf.summary.merge`
+* `tf.summary.merge_all`
## Utilities
-* @{tf.summary.get_summary_description}
+* `tf.summary.get_summary_description`
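A minimal sketch of wiring a scalar summary to a `FileWriter`; the log directory is an arbitrary choice:

```python
import tensorflow as tf

loss = tf.placeholder(tf.float32)
tf.summary.scalar('loss', loss)
merged = tf.summary.merge_all()

with tf.Session() as sess:
    writer = tf.summary.FileWriter('/tmp/logs', sess.graph)
    for step in range(3):
        summ = sess.run(merged, feed_dict={loss: 1.0 / (step + 1)})
        writer.add_summary(summ, global_step=step)
    writer.close()
```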
diff --git a/tensorflow/docs_src/api_guides/python/test.md b/tensorflow/docs_src/api_guides/python/test.md
index 5dc88124e7..b6e0a332b9 100644
--- a/tensorflow/docs_src/api_guides/python/test.md
+++ b/tensorflow/docs_src/api_guides/python/test.md
@@ -23,25 +23,25 @@ which adds methods relevant to TensorFlow tests. Here is an example:
```
`tf.test.TestCase` inherits from `unittest.TestCase` but adds a few additional
-methods. See @{tf.test.TestCase} for details.
+methods. See `tf.test.TestCase` for details.
-* @{tf.test.main}
-* @{tf.test.TestCase}
-* @{tf.test.test_src_dir_path}
+* `tf.test.main`
+* `tf.test.TestCase`
+* `tf.test.test_src_dir_path`
## Utilities
Note: `tf.test.mock` is an alias for the Python `mock` or `unittest.mock`
module, depending on the Python version.
-* @{tf.test.assert_equal_graph_def}
-* @{tf.test.get_temp_dir}
-* @{tf.test.is_built_with_cuda}
-* @{tf.test.is_gpu_available}
-* @{tf.test.gpu_device_name}
+* `tf.test.assert_equal_graph_def`
+* `tf.test.get_temp_dir`
+* `tf.test.is_built_with_cuda`
+* `tf.test.is_gpu_available`
+* `tf.test.gpu_device_name`
## Gradient checking
-@{tf.test.compute_gradient} and @{tf.test.compute_gradient_error} perform
+`tf.test.compute_gradient` and `tf.test.compute_gradient_error` perform
numerical differentiation of graphs for comparison against registered analytic
gradients.
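A minimal sketch of such a check inside a `tf.test.TestCase`:

```python
import tensorflow as tf

class SquareGradTest(tf.test.TestCase):

    def testGradient(self):
        with self.test_session():
            x = tf.constant([1.0, 2.0, 3.0])
            y = tf.square(x)
            # Compares the numeric and analytic Jacobians of y w.r.t. x.
            err = tf.test.compute_gradient_error(x, [3], y, [3])
            self.assertLess(err, 1e-4)

if __name__ == '__main__':
    tf.test.main()
```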
diff --git a/tensorflow/docs_src/api_guides/python/tfdbg.md b/tensorflow/docs_src/api_guides/python/tfdbg.md
index 2212a2da0e..9778cdc0b0 100644
--- a/tensorflow/docs_src/api_guides/python/tfdbg.md
+++ b/tensorflow/docs_src/api_guides/python/tfdbg.md
@@ -8,9 +8,9 @@ Public Python API of TensorFlow Debugger (tfdbg).
These functions help you modify `RunOptions` to specify which `Tensor`s are to
be watched when the TensorFlow graph is executed at runtime.
-* @{tfdbg.add_debug_tensor_watch}
-* @{tfdbg.watch_graph}
-* @{tfdbg.watch_graph_with_blacklists}
+* `tfdbg.add_debug_tensor_watch`
+* `tfdbg.watch_graph`
+* `tfdbg.watch_graph_with_blacklists`
## Classes for debug-dump data and directories
@@ -18,13 +18,13 @@ be watched when the TensorFlow graph is executed at runtime.
These classes allow you to load and inspect tensor values dumped from
TensorFlow graphs during runtime.
-* @{tfdbg.DebugTensorDatum}
-* @{tfdbg.DebugDumpDir}
+* `tfdbg.DebugTensorDatum`
+* `tfdbg.DebugDumpDir`
## Functions for loading debug-dump data
-* @{tfdbg.load_tensor_from_event_file}
+* `tfdbg.load_tensor_from_event_file`
## Tensor-value predicates
@@ -32,7 +32,7 @@ TensorFlow graphs during runtime.
Built-in tensor-filter predicates to support conditional breakpoint between
runs. See `DebugDumpDir.find()` for more details.
-* @{tfdbg.has_inf_or_nan}
+* `tfdbg.has_inf_or_nan`
## Session wrapper class and `SessionRunHook` implementations
@@ -44,7 +44,7 @@ These classes allow you to
* generate `SessionRunHook` objects to debug `tf.contrib.learn` models (see
`DumpingDebugHook` and `LocalCLIDebugHook`).
-* @{tfdbg.DumpingDebugHook}
-* @{tfdbg.DumpingDebugWrapperSession}
-* @{tfdbg.LocalCLIDebugHook}
-* @{tfdbg.LocalCLIDebugWrapperSession}
+* `tfdbg.DumpingDebugHook`
+* `tfdbg.DumpingDebugWrapperSession`
+* `tfdbg.LocalCLIDebugHook`
+* `tfdbg.LocalCLIDebugWrapperSession`
diff --git a/tensorflow/docs_src/api_guides/python/threading_and_queues.md b/tensorflow/docs_src/api_guides/python/threading_and_queues.md
index 8ad4c4c075..48f0778b73 100644
--- a/tensorflow/docs_src/api_guides/python/threading_and_queues.md
+++ b/tensorflow/docs_src/api_guides/python/threading_and_queues.md
@@ -25,7 +25,7 @@ longer holds, the queue will unblock the step and allow execution to proceed.
TensorFlow implements several classes of queue. The principal difference between
these classes is the order in which items are removed from the queue. To get a feel
for queues, let's consider a simple example. We will create a "first in, first
-out" queue (@{tf.FIFOQueue}) and fill it with zeros. Then we'll construct a
+out" queue (`tf.FIFOQueue`) and fill it with zeros. Then we'll construct a
graph that takes an item off the queue, adds one to that item, and puts it back
on the end of the queue. Slowly, the numbers on the queue increase.
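A minimal sketch of that loop (the capacity and step counts are arbitrary):

```python
import tensorflow as tf

q = tf.FIFOQueue(capacity=3, dtypes=tf.float32)
init = q.enqueue_many(([0.0, 0.0, 0.0],))
x = q.dequeue()
inc = q.enqueue(x + 1.0)

with tf.Session() as sess:
    sess.run(init)
    for _ in range(6):
        sess.run(inc)   # pop a value, add one, push it back
    print(sess.run(q.dequeue_many(3)))   # [2. 2. 2.]
```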
@@ -47,8 +47,8 @@ Now that you have a bit of a feel for queues, let's dive into the details...
## Queue usage overview
-Queues, such as @{tf.FIFOQueue}
-and @{tf.RandomShuffleQueue},
+Queues, such as `tf.FIFOQueue`
+and `tf.RandomShuffleQueue`,
are important TensorFlow objects that aid in computing tensors asynchronously
in a graph.
@@ -59,11 +59,11 @@ prepare inputs for training a model as follows:
* A training thread executes a training op that dequeues mini-batches from the
queue
-We recommend using the @{tf.data.Dataset.shuffle$`shuffle`}
-and @{tf.data.Dataset.batch$`batch`} methods of a
-@{tf.data.Dataset$`Dataset`} to accomplish this. However, if you'd prefer
+We recommend using the `tf.data.Dataset.shuffle`
+and `tf.data.Dataset.batch` methods of a
+`tf.data.Dataset` to accomplish this. However, if you'd prefer
to use a queue-based version instead, you can find a full implementation in the
-@{tf.train.shuffle_batch} function.
+`tf.train.shuffle_batch` function.
For demonstration purposes a simplified implementation is given below.
@@ -93,8 +93,8 @@ def simple_shuffle_batch(source, capacity, batch_size=10):
return queue.dequeue_many(batch_size)
```
-Once started by @{tf.train.start_queue_runners}, or indirectly through
-@{tf.train.MonitoredSession}, the `QueueRunner` will launch the
+Once started by `tf.train.start_queue_runners`, or indirectly through
+`tf.train.MonitoredSession`, the `QueueRunner` will launch the
threads in the background to fill the queue. Meanwhile the main thread will
execute the `dequeue_many` op to pull data from it. Note how these ops do not
depend on each other, except indirectly through the internal state of the queue.
@@ -126,7 +126,7 @@ with tf.train.MonitoredSession() as sess:
```
For most use cases, the automatic thread startup and management provided
-by @{tf.train.MonitoredSession} is sufficient. In the rare case that it is not,
+by `tf.train.MonitoredSession` is sufficient. In the rare case that it is not,
TensorFlow provides tools for manually managing your threads and queues.
## Manual Thread Management
@@ -139,8 +139,8 @@ threads must be able to stop together, exceptions must be caught and
reported, and queues must be properly closed when stopping.
TensorFlow provides two classes to help:
-@{tf.train.Coordinator} and
-@{tf.train.QueueRunner}. These two classes
+`tf.train.Coordinator` and
+`tf.train.QueueRunner`. These two classes
are designed to be used together. The `Coordinator` class helps multiple threads
stop together and report exceptions to a program that waits for them to stop.
The `QueueRunner` class is used to create a number of threads cooperating to
@@ -148,14 +148,14 @@ enqueue tensors in the same queue.
### Coordinator
-The @{tf.train.Coordinator} class manages background threads in a TensorFlow
+The `tf.train.Coordinator` class manages background threads in a TensorFlow
program and helps multiple threads stop together.
Its key methods are:
-* @{tf.train.Coordinator.should_stop}: returns `True` if the threads should stop.
-* @{tf.train.Coordinator.request_stop}: requests that threads should stop.
-* @{tf.train.Coordinator.join}: waits until the specified threads have stopped.
+* `tf.train.Coordinator.should_stop`: returns `True` if the threads should stop.
+* `tf.train.Coordinator.request_stop`: requests that threads should stop.
+* `tf.train.Coordinator.join`: waits until the specified threads have stopped.
You first create a `Coordinator` object, and then create a number of threads
that use the coordinator. The threads typically run loops that stop when
@@ -191,11 +191,11 @@ coord.join(threads)
Obviously, the coordinator can manage threads doing very different things.
They don't all have to be the same as in the example above. The coordinator
-also has support to capture and report exceptions. See the @{tf.train.Coordinator} documentation for more details.
+also supports capturing and reporting exceptions. See the `tf.train.Coordinator` documentation for more details.
### QueueRunner
-The @{tf.train.QueueRunner} class creates a number of threads that repeatedly
+The `tf.train.QueueRunner` class creates a number of threads that repeatedly
run an enqueue op. These threads can use a coordinator to stop together. In
addition, a queue runner will run a *closer operation* that closes the queue if
an exception is reported to the coordinator.
diff --git a/tensorflow/docs_src/api_guides/python/train.md b/tensorflow/docs_src/api_guides/python/train.md
index cbc5052946..a118123665 100644
--- a/tensorflow/docs_src/api_guides/python/train.md
+++ b/tensorflow/docs_src/api_guides/python/train.md
@@ -1,7 +1,7 @@
# Training
[TOC]
-@{tf.train} provides a set of classes and functions that help train models.
+`tf.train` provides a set of classes and functions that help train models.
## Optimizers
@@ -12,19 +12,19 @@ optimization algorithms such as GradientDescent and Adagrad.
You never instantiate the Optimizer class itself, but instead instantiate one
of the subclasses.
-* @{tf.train.Optimizer}
-* @{tf.train.GradientDescentOptimizer}
-* @{tf.train.AdadeltaOptimizer}
-* @{tf.train.AdagradOptimizer}
-* @{tf.train.AdagradDAOptimizer}
-* @{tf.train.MomentumOptimizer}
-* @{tf.train.AdamOptimizer}
-* @{tf.train.FtrlOptimizer}
-* @{tf.train.ProximalGradientDescentOptimizer}
-* @{tf.train.ProximalAdagradOptimizer}
-* @{tf.train.RMSPropOptimizer}
+* `tf.train.Optimizer`
+* `tf.train.GradientDescentOptimizer`
+* `tf.train.AdadeltaOptimizer`
+* `tf.train.AdagradOptimizer`
+* `tf.train.AdagradDAOptimizer`
+* `tf.train.MomentumOptimizer`
+* `tf.train.AdamOptimizer`
+* `tf.train.FtrlOptimizer`
+* `tf.train.ProximalGradientDescentOptimizer`
+* `tf.train.ProximalAdagradOptimizer`
+* `tf.train.RMSPropOptimizer`
-See @{tf.contrib.opt} for more optimizers.
+See `tf.contrib.opt` for more optimizers.
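A minimal sketch of the usual pattern, constructing an optimizer and minimizing a loss:

```python
import tensorflow as tf

w = tf.Variable(5.0)
loss = tf.square(w - 3.0)
train_op = tf.train.GradientDescentOptimizer(learning_rate=0.1).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(100):
        sess.run(train_op)
    print(sess.run(w))   # approaches 3.0
```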
## Gradient Computation
@@ -34,10 +34,10 @@ optimizer classes automatically compute derivatives on your graph, but
creators of new Optimizers or expert users can call the lower-level
functions below.
-* @{tf.gradients}
-* @{tf.AggregationMethod}
-* @{tf.stop_gradient}
-* @{tf.hessians}
+* `tf.gradients`
+* `tf.AggregationMethod`
+* `tf.stop_gradient`
+* `tf.hessians`
## Gradient Clipping
@@ -47,22 +47,22 @@ functions to your graph. You can use these functions to perform general data
clipping, but they're particularly useful for handling exploding or vanishing
gradients.
-* @{tf.clip_by_value}
-* @{tf.clip_by_norm}
-* @{tf.clip_by_average_norm}
-* @{tf.clip_by_global_norm}
-* @{tf.global_norm}
+* `tf.clip_by_value`
+* `tf.clip_by_norm`
+* `tf.clip_by_average_norm`
+* `tf.clip_by_global_norm`
+* `tf.global_norm`
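For example, a minimal sketch of clipping gradients by their global norm before applying them (the clip norm of 5.0 is an arbitrary choice):

```python
import tensorflow as tf

w = tf.Variable([3.0, 4.0])
loss = tf.reduce_sum(tf.square(w))

opt = tf.train.GradientDescentOptimizer(learning_rate=0.1)
grads, variables = zip(*opt.compute_gradients(loss))
clipped, _ = tf.clip_by_global_norm(grads, clip_norm=5.0)
train_op = opt.apply_gradients(zip(clipped, variables))
```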
## Decaying the learning rate
-* @{tf.train.exponential_decay}
-* @{tf.train.inverse_time_decay}
-* @{tf.train.natural_exp_decay}
-* @{tf.train.piecewise_constant}
-* @{tf.train.polynomial_decay}
-* @{tf.train.cosine_decay}
-* @{tf.train.linear_cosine_decay}
-* @{tf.train.noisy_linear_cosine_decay}
+* `tf.train.exponential_decay`
+* `tf.train.inverse_time_decay`
+* `tf.train.natural_exp_decay`
+* `tf.train.piecewise_constant`
+* `tf.train.polynomial_decay`
+* `tf.train.cosine_decay`
+* `tf.train.linear_cosine_decay`
+* `tf.train.noisy_linear_cosine_decay`
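For example, a minimal sketch of an exponentially decaying rate fed to an optimizer (the schedule constants are arbitrary):

```python
import tensorflow as tf

global_step = tf.train.get_or_create_global_step()
# Start at 0.1 and multiply by 0.96 every 1000 steps.
lr = tf.train.exponential_decay(0.1, global_step,
                                decay_steps=1000, decay_rate=0.96,
                                staircase=True)
optimizer = tf.train.GradientDescentOptimizer(lr)
```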
## Moving Averages
@@ -70,7 +70,7 @@ Some training algorithms, such as GradientDescent and Momentum often benefit
from maintaining a moving average of variables during optimization. Using the
moving averages for evaluations often improves results significantly.
-* @{tf.train.ExponentialMovingAverage}
+* `tf.train.ExponentialMovingAverage`
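A minimal sketch of maintaining and reading a shadow average:

```python
import tensorflow as tf

w = tf.Variable(0.0)
ema = tf.train.ExponentialMovingAverage(decay=0.99)
maintain_op = ema.apply([w])   # creates and updates the shadow variable
w_avg = ema.average(w)         # reads the smoothed value

# Run `maintain_op` after each training step,
# e.g. by grouping it with the train op via tf.control_dependencies.
```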
## Coordinator and QueueRunner
@@ -79,61 +79,61 @@ for how to use threads and queues. For documentation on the Queue API,
see @{$python/io_ops#queues$Queues}.
-* @{tf.train.Coordinator}
-* @{tf.train.QueueRunner}
-* @{tf.train.LooperThread}
-* @{tf.train.add_queue_runner}
-* @{tf.train.start_queue_runners}
+* `tf.train.Coordinator`
+* `tf.train.QueueRunner`
+* `tf.train.LooperThread`
+* `tf.train.add_queue_runner`
+* `tf.train.start_queue_runners`
## Distributed execution
See @{$distributed$Distributed TensorFlow} for
more information about how to configure a distributed TensorFlow program.
-* @{tf.train.Server}
-* @{tf.train.Supervisor}
-* @{tf.train.SessionManager}
-* @{tf.train.ClusterSpec}
-* @{tf.train.replica_device_setter}
-* @{tf.train.MonitoredTrainingSession}
-* @{tf.train.MonitoredSession}
-* @{tf.train.SingularMonitoredSession}
-* @{tf.train.Scaffold}
-* @{tf.train.SessionCreator}
-* @{tf.train.ChiefSessionCreator}
-* @{tf.train.WorkerSessionCreator}
+* `tf.train.Server`
+* `tf.train.Supervisor`
+* `tf.train.SessionManager`
+* `tf.train.ClusterSpec`
+* `tf.train.replica_device_setter`
+* `tf.train.MonitoredTrainingSession`
+* `tf.train.MonitoredSession`
+* `tf.train.SingularMonitoredSession`
+* `tf.train.Scaffold`
+* `tf.train.SessionCreator`
+* `tf.train.ChiefSessionCreator`
+* `tf.train.WorkerSessionCreator`
## Reading Summaries from Event Files
See @{$summaries_and_tensorboard$Summaries and TensorBoard} for an
overview of summaries, event files, and visualization in TensorBoard.
-* @{tf.train.summary_iterator}
+* `tf.train.summary_iterator`
## Training Hooks
Hooks are tools that run during training or evaluation of the model.
-* @{tf.train.SessionRunHook}
-* @{tf.train.SessionRunArgs}
-* @{tf.train.SessionRunContext}
-* @{tf.train.SessionRunValues}
-* @{tf.train.LoggingTensorHook}
-* @{tf.train.StopAtStepHook}
-* @{tf.train.CheckpointSaverHook}
-* @{tf.train.NewCheckpointReader}
-* @{tf.train.StepCounterHook}
-* @{tf.train.NanLossDuringTrainingError}
-* @{tf.train.NanTensorHook}
-* @{tf.train.SummarySaverHook}
-* @{tf.train.GlobalStepWaiterHook}
-* @{tf.train.FinalOpsHook}
-* @{tf.train.FeedFnHook}
+* `tf.train.SessionRunHook`
+* `tf.train.SessionRunArgs`
+* `tf.train.SessionRunContext`
+* `tf.train.SessionRunValues`
+* `tf.train.LoggingTensorHook`
+* `tf.train.StopAtStepHook`
+* `tf.train.CheckpointSaverHook`
+* `tf.train.NewCheckpointReader`
+* `tf.train.StepCounterHook`
+* `tf.train.NanLossDuringTrainingError`
+* `tf.train.NanTensorHook`
+* `tf.train.SummarySaverHook`
+* `tf.train.GlobalStepWaiterHook`
+* `tf.train.FinalOpsHook`
+* `tf.train.FeedFnHook`
## Training Utilities
-* @{tf.train.global_step}
-* @{tf.train.basic_train_loop}
-* @{tf.train.get_global_step}
-* @{tf.train.assert_global_step}
-* @{tf.train.write_graph}
+* `tf.train.global_step`
+* `tf.train.basic_train_loop`
+* `tf.train.get_global_step`
+* `tf.train.assert_global_step`
+* `tf.train.write_graph`
diff --git a/tensorflow/docs_src/community/style_guide.md b/tensorflow/docs_src/community/style_guide.md
index c9268790a7..daf0d2fdc0 100644
--- a/tensorflow/docs_src/community/style_guide.md
+++ b/tensorflow/docs_src/community/style_guide.md
@@ -47,27 +47,7 @@ licenses(["notice"]) # Apache 2.0
exports_files(["LICENSE"])
```
-* At the end of every BUILD file, should contain:
-```
-filegroup(
- name = "all_files",
- srcs = glob(
- ["**/*"],
- exclude = [
- "**/METADATA",
- "**/OWNERS",
- ],
- ),
- visibility = ["//tensorflow:__subpackages__"],
-)
-```
-
-* When adding new BUILD file, add this line to `tensorflow/BUILD` file into `all_opensource_files` target.
-
-```
-"//tensorflow/<directory>:all_files",
-```
* For all Python BUILD targets (libraries and tests) add the following line:
@@ -80,6 +60,9 @@ srcs_version = "PY2AND3",
* Operations that deal with batches may assume that the first dimension of a Tensor is the batch dimension.
+* In most models, the *last dimension* is the number of channels.
+
+* Dimensions excluding the first and last usually make up the "space" dimensions: sequence length or image size.
## Python operations
@@ -148,37 +131,6 @@ Usage:
## Layers
-A *Layer* is a Python operation that combines variable creation and/or one or many
-other graph operations. Follow the same requirements as for regular Python
-operation.
-
-* If a layer creates one or more variables, the layer function
- should take next arguments also following order:
- - `initializers`: Optionally allow to specify initializers for the variables.
- - `regularizers`: Optionally allow to specify regularizers for the variables.
- - `trainable`: which control if their variables are trainable or not.
- - `scope`: `VariableScope` object that variable will be put under.
- - `reuse`: `bool` indicator if the variable should be reused if
- it's present in the scope.
-
-* Layers that behave differently during training should take:
- - `is_training`: `bool` indicator to conditionally choose different
- computation paths (e.g. using `tf.cond`) during execution.
-
-Example:
-
- def conv2d(inputs,
- num_filters_out,
- kernel_size,
- stride=1,
- padding='SAME',
- activation_fn=tf.nn.relu,
- normalization_fn=add_bias,
- normalization_params=None,
- initializers=None,
- regularizers=None,
- trainable=True,
- scope=None,
- reuse=None):
- ... see implementation at tensorflow/contrib/layers/python/layers/layers.py ...
+Use `tf.keras.layers`, not `tf.layers`.
+See `tf.keras.layers` and [the Keras guide](../guide/keras.md#custom_layers) for details on how to subclass layers.
diff --git a/tensorflow/docs_src/deploy/distributed.md b/tensorflow/docs_src/deploy/distributed.md
index 8e2c818e39..6a760f53c8 100644
--- a/tensorflow/docs_src/deploy/distributed.md
+++ b/tensorflow/docs_src/deploy/distributed.md
@@ -21,7 +21,7 @@ $ python
```
The
-@{tf.train.Server.create_local_server}
+`tf.train.Server.create_local_server`
method creates a single-process cluster, with an in-process server.
## Create a cluster
@@ -55,7 +55,7 @@ the following:
The cluster specification dictionary maps job names to lists of network
addresses. Pass this dictionary to
-the @{tf.train.ClusterSpec}
+the `tf.train.ClusterSpec`
constructor. For example:
<table>
@@ -84,10 +84,10 @@ tf.train.ClusterSpec({
### Create a `tf.train.Server` instance in each task
-A @{tf.train.Server} object contains a
+A `tf.train.Server` object contains a
set of local devices, a set of connections to other tasks in its
`tf.train.ClusterSpec`, and a
-@{tf.Session} that can use these
+`tf.Session` that can use these
to perform a distributed computation. Each server is a member of a specific
named job and has a task index within that job. A server can communicate with
any other server in the cluster.
@@ -117,7 +117,7 @@ which you'd like to see support, please raise a
## Specifying distributed devices in your model
To place operations on a particular process, you can use the same
-@{tf.device}
+`tf.device`
function that is used to specify whether ops run on the CPU or GPU. For example:
```python
@@ -165,7 +165,7 @@ simplify the work of specifying a replicated model. Possible approaches include:
for each `/job:worker` task, typically in the same process as the worker
task. Each client builds a similar graph containing the parameters (pinned to
`/job:ps` as before using
- @{tf.train.replica_device_setter}
+ `tf.train.replica_device_setter`
to map them deterministically to the same tasks); and a single copy of the
compute-intensive part of the model, pinned to the local task in
`/job:worker`.
@@ -180,7 +180,7 @@ simplify the work of specifying a replicated model. Possible approaches include:
gradient averaging as in the
[CIFAR-10 multi-GPU trainer](https://github.com/tensorflow/models/tree/master/tutorials/image/cifar10/cifar10_multi_gpu_train.py)),
and between-graph replication (e.g. using the
- @{tf.train.SyncReplicasOptimizer}).
+ `tf.train.SyncReplicasOptimizer`).
### Putting it all together: example trainer program
@@ -314,11 +314,11 @@ serve multiple clients.
**Cluster**
-A TensorFlow cluster comprises a one or more "jobs", each divided into lists of
+A TensorFlow cluster comprises one or more "jobs", each divided into lists of
one or more "tasks". A cluster is typically dedicated to a particular high-level
objective, such as training a neural network, using many machines in parallel. A
cluster is defined by
-a @{tf.train.ClusterSpec} object.
+a `tf.train.ClusterSpec` object.
**Job**
@@ -344,7 +344,7 @@ to a single process. A task belongs to a particular "job" and is identified by
its index within that job's list of tasks.
**TensorFlow server** A process running
-a @{tf.train.Server} instance, which is
+a `tf.train.Server` instance, which is
a member of a cluster, and exports a "master service" and "worker service".
**Worker service**
diff --git a/tensorflow/docs_src/deploy/s3.md b/tensorflow/docs_src/deploy/s3.md
index 9ef9674338..7028249e94 100644
--- a/tensorflow/docs_src/deploy/s3.md
+++ b/tensorflow/docs_src/deploy/s3.md
@@ -90,4 +90,4 @@ S3 was invented by Amazon, but the S3 API has spread in popularity and has sever
* [Amazon S3](https://aws.amazon.com/s3/)
* [Google Storage](https://cloud.google.com/storage/docs/interoperability)
-* [Minio](https://www.minio.io/kubernetes.html)(Standalone mode only)
+* [Minio](https://www.minio.io/kubernetes.html)
diff --git a/tensorflow/docs_src/extend/adding_an_op.md b/tensorflow/docs_src/extend/adding_an_op.md
index 1b028be4ea..6e96cfc532 100644
--- a/tensorflow/docs_src/extend/adding_an_op.md
+++ b/tensorflow/docs_src/extend/adding_an_op.md
@@ -46,7 +46,7 @@ To incorporate your custom op you'll need to:
4. Write a function to compute gradients for the op (optional).
5. Test the op. We usually do this in Python for convenience, but you can also
test the op in C++. If you define gradients, you can verify them with the
- Python @{tf.test.compute_gradient_error$gradient checker}.
+ Python gradient checker `tf.test.compute_gradient_error`.
See
[`relu_op_test.py`](https://www.tensorflow.org/code/tensorflow/python/kernel_tests/relu_op_test.py) as
an example that tests the forward functions of Relu-like operators and
@@ -388,7 +388,7 @@ $ bazel build --config opt //tensorflow/core/user_ops:zero_out.so
## Use the op in Python
The TensorFlow Python API provides the
-@{tf.load_op_library} function to
+`tf.load_op_library` function to
load the dynamic library and register the op with the TensorFlow
framework. `load_op_library` returns a Python module that contains the Python
wrappers for the op and the kernel. Thus, once you have built the op, you can
@@ -538,7 +538,7 @@ REGISTER_OP("ZeroOut")
```
(Note that the set of [attribute types](#attr_types) is different from the
-@{tf.DType$tensor types} used for inputs and outputs.)
+`tf.DType` tensor types used for inputs and outputs.)
Your kernel can then access this attr in its constructor via the `context`
parameter:
@@ -615,7 +615,7 @@ define an attr with constraints, you can use the following `<attr-type-expr>`s:
* `{<type1>, <type2>}`: The value is of type `type`, and must be one of
`<type1>` or `<type2>`, where `<type1>` and `<type2>` are supported
- @{tf.DType$tensor types}. You don't specify
+ `tf.DType` tensor types. You don't specify
that the type of the attr is `type`. This is implied when you have a list of
types in `{...}`. For example, in this case the attr `t` is a type that must
be an `int32`, a `float`, or a `bool`:
@@ -714,7 +714,7 @@ REGISTER_OP("AttrDefaultExampleForAllTypes")
```
Note in particular that the values of type `type`
-use @{tf.DType$the `DT_*` names for the types}.
+use the `DT_*` names listed in `tf.DType`.
#### Polymorphism
@@ -1056,7 +1056,7 @@ expressions:
`string`). This specifies a single tensor of the given type.
See
- @{tf.DType$the list of supported Tensor types}.
+ `tf.DType` for the list of supported tensor types.
```c++
REGISTER_OP("BuiltInTypesExample")
@@ -1098,8 +1098,7 @@ expressions:
* For a sequence of tensors with the same type: `<number> * <type>`, where
`<number>` is the name of an [Attr](#attrs) with type `int`. The `<type>` can
- either be
- @{tf.DType$a specific type like `int32` or `float`},
+ either be a `tf.DType`,
or the name of an attr with type `type`. As an example of the first, this
op accepts a list of `int32` tensors:
@@ -1202,7 +1201,7 @@ There are several examples of kernels with GPU support in
Notice some kernels have a CPU version in a `.cc` file, a GPU version in a file
ending in `_gpu.cu.cc`, and some code shared in common in a `.h` file.
-For example, the @{tf.pad} has
+For example, the `tf.pad` op has
everything but the GPU kernel in [`tensorflow/core/kernels/pad_op.cc`][pad_op].
The GPU kernel is in
[`tensorflow/core/kernels/pad_op_gpu.cu.cc`](https://www.tensorflow.org/code/tensorflow/core/kernels/pad_op_gpu.cu.cc),
@@ -1307,16 +1306,16 @@ def _zero_out_grad(op, grad):
```
Details about registering gradient functions with
-@{tf.RegisterGradient}:
+`tf.RegisterGradient`:
* For an op with one output, the gradient function will take a
- @{tf.Operation} `op` and a
- @{tf.Tensor} `grad` and build new ops
+ `tf.Operation` `op` and a
+ `tf.Tensor` `grad` and build new ops
out of the tensors
[`op.inputs[i]`](../../api_docs/python/framework.md#Operation.inputs),
[`op.outputs[i]`](../../api_docs/python/framework.md#Operation.outputs), and `grad`. Information
about any attrs can be found via
- @{tf.Operation.get_attr}.
+ `tf.Operation.get_attr`.
* If the op has multiple outputs, the gradient function will take `op` and
`grads`, where `grads` is a list of gradients with respect to each output.
diff --git a/tensorflow/docs_src/extend/architecture.md b/tensorflow/docs_src/extend/architecture.md
index 84435a57f2..83d70c9468 100644
--- a/tensorflow/docs_src/extend/architecture.md
+++ b/tensorflow/docs_src/extend/architecture.md
@@ -81,7 +81,7 @@ implementation from all client languages. Most of the training libraries are
still Python-only, but C++ does have support for efficient inference.
The client creates a session, which sends the graph definition to the
-distributed master as a @{tf.GraphDef}
+distributed master as a `tf.GraphDef`
protocol buffer. When the client evaluates a node or nodes in the
graph, the evaluation triggers a call to the distributed master to initiate
computation.
@@ -96,7 +96,7 @@ feature vector (x), adds a bias term (b) and saves the result in a variable
### Code
-* @{tf.Session}
+* `tf.Session`
## Distributed master
diff --git a/tensorflow/docs_src/extend/index.md b/tensorflow/docs_src/extend/index.md
index 1ab0340ad9..d48340a777 100644
--- a/tensorflow/docs_src/extend/index.md
+++ b/tensorflow/docs_src/extend/index.md
@@ -17,7 +17,8 @@ TensorFlow:
Python is currently the only language supported by TensorFlow's API stability
promises. However, TensorFlow also provides functionality in C++, Go, Java and
-[JavaScript](https://js.tensorflow.org),
+[JavaScript](https://js.tensorflow.org) (including
+[Node.js](https://github.com/tensorflow/tfjs-node)),
plus community support for [Haskell](https://github.com/tensorflow/haskell) and
[Rust](https://github.com/tensorflow/rust). If you'd like to create or
develop TensorFlow features in a language other than these languages, read the
diff --git a/tensorflow/docs_src/extend/new_data_formats.md b/tensorflow/docs_src/extend/new_data_formats.md
index d1d1f69766..47a8344b70 100644
--- a/tensorflow/docs_src/extend/new_data_formats.md
+++ b/tensorflow/docs_src/extend/new_data_formats.md
@@ -15,25 +15,24 @@ We divide the task of supporting a file format into two pieces:
* Record formats: We use decoder or parsing ops to turn a string record
into tensors usable by TensorFlow.
-For example, to read a
-[CSV file](https://en.wikipedia.org/wiki/Comma-separated_values), we use
-@{tf.data.TextLineDataset$a dataset for reading text files line-by-line}
-and then @{tf.data.Dataset.map$map} an
-@{tf.decode_csv$op} that parses CSV data from each line of text in the dataset.
+For example, to re-implement the `tf.contrib.data.make_csv_dataset` function, we
+could use `tf.data.TextLineDataset` to extract the records, and then
+use `tf.data.Dataset.map` and `tf.decode_csv` to parse the CSV records from
+each line of text in the dataset.
[TOC]
## Writing a `Dataset` for a file format
-A @{tf.data.Dataset} represents a sequence of *elements*, which can be the
+A `tf.data.Dataset` represents a sequence of *elements*, which can be the
individual records in a file. There are several examples of "reader" datasets
that are already built into TensorFlow:
-* @{tf.data.TFRecordDataset}
+* `tf.data.TFRecordDataset`
([source in `kernels/data/reader_dataset_ops.cc`](https://www.tensorflow.org/code/tensorflow/core/kernels/data/reader_dataset_ops.cc))
-* @{tf.data.FixedLengthRecordDataset}
+* `tf.data.FixedLengthRecordDataset`
([source in `kernels/data/reader_dataset_ops.cc`](https://www.tensorflow.org/code/tensorflow/core/kernels/data/reader_dataset_ops.cc))
-* @{tf.data.TextLineDataset}
+* `tf.data.TextLineDataset`
([source in `kernels/data/reader_dataset_ops.cc`](https://www.tensorflow.org/code/tensorflow/core/kernels/data/reader_dataset_ops.cc))
Each of these implementations comprises three related classes:
@@ -64,7 +63,7 @@ need to:
that implement the reading logic.
2. In C++, register a new reader op and kernel with the name
`"MyReaderDataset"`.
-3. In Python, define a subclass of @{tf.data.Dataset} called `MyReaderDataset`.
+3. In Python, define a subclass of `tf.data.Dataset` called `MyReaderDataset`.
You can put all the C++ code in a single file, such as
`my_reader_dataset_op.cc`. It will help if you are
@@ -77,18 +76,24 @@ can be used as a starting point for your implementation:
#include "tensorflow/core/framework/op.h"
#include "tensorflow/core/framework/shape_inference.h"
-namespace tensorflow {
+namespace myproject {
namespace {
-class MyReaderDatasetOp : public DatasetOpKernel {
+using ::tensorflow::DT_STRING;
+using ::tensorflow::PartialTensorShape;
+using ::tensorflow::Status;
+
+class MyReaderDatasetOp : public tensorflow::DatasetOpKernel {
public:
- MyReaderDatasetOp(OpKernelConstruction* ctx) : DatasetOpKernel(ctx) {
+ MyReaderDatasetOp(tensorflow::OpKernelConstruction* ctx)
+ : DatasetOpKernel(ctx) {
// Parse and validate any attrs that define the dataset using
// `ctx->GetAttr()`, and store them in member variables.
}
- void MakeDataset(OpKernelContext* ctx, DatasetBase** output) override {
+ void MakeDataset(tensorflow::OpKernelContext* ctx,
+ tensorflow::DatasetBase** output) override {
// Parse and validate any input tensors that define the dataset using
// `ctx->input()` or the utility function
// `ParseScalarArgument<T>(ctx, &arg)`.
@@ -99,14 +104,14 @@ class MyReaderDatasetOp : public DatasetOpKernel {
}
private:
- class Dataset : public GraphDatasetBase {
+ class Dataset : public tensorflow::GraphDatasetBase {
public:
- Dataset(OpKernelContext* ctx) : GraphDatasetBase(ctx) {}
+ Dataset(tensorflow::OpKernelContext* ctx) : GraphDatasetBase(ctx) {}
- std::unique_ptr<IteratorBase> MakeIteratorInternal(
+ std::unique_ptr<tensorflow::IteratorBase> MakeIteratorInternal(
const string& prefix) const override {
- return std::unique_ptr<IteratorBase>(
- new Iterator({this, strings::StrCat(prefix, "::MyReader")}));
+ return std::unique_ptr<tensorflow::IteratorBase>(new Iterator(
+ {this, tensorflow::strings::StrCat(prefix, "::MyReader")}));
}
// Record structure: Each record is represented by a scalar string tensor.
@@ -114,8 +119,8 @@ class MyReaderDatasetOp : public DatasetOpKernel {
// Dataset elements can have a fixed number of components of different
// types and shapes; replace the following two methods to customize this
// aspect of the dataset.
- const DataTypeVector& output_dtypes() const override {
- static DataTypeVector* dtypes = new DataTypeVector({DT_STRING});
+ const tensorflow::DataTypeVector& output_dtypes() const override {
+ static auto* const dtypes = new tensorflow::DataTypeVector({DT_STRING});
return *dtypes;
}
const std::vector<PartialTensorShape>& output_shapes() const override {
@@ -132,16 +137,16 @@ class MyReaderDatasetOp : public DatasetOpKernel {
// Implement this method if you want to be able to save and restore
// instances of this dataset (and any iterators over it).
Status AsGraphDefInternal(DatasetGraphDefBuilder* b,
- Node** output) const override {
+ tensorflow::Node** output) const override {
// Construct nodes to represent any of the input tensors from this
// object's member variables using `b->AddScalar()` and `b->AddVector()`.
- std::vector<Node*> input_tensors;
+ std::vector<tensorflow::Node*> input_tensors;
TF_RETURN_IF_ERROR(b->AddDataset(this, input_tensors, output));
return Status::OK();
}
private:
- class Iterator : public DatasetIterator<Dataset> {
+ class Iterator : public tensorflow::DatasetIterator<Dataset> {
public:
explicit Iterator(const Params& params)
: DatasetIterator<Dataset>(params), i_(0) {}
@@ -158,15 +163,15 @@ class MyReaderDatasetOp : public DatasetOpKernel {
// return `Status::OK()`.
// 3. If an error occurs, return an error status using one of the helper
// functions from "tensorflow/core/lib/core/errors.h".
- Status GetNextInternal(IteratorContext* ctx,
- std::vector<Tensor>* out_tensors,
+ Status GetNextInternal(tensorflow::IteratorContext* ctx,
+ std::vector<tensorflow::Tensor>* out_tensors,
bool* end_of_sequence) override {
// NOTE: `GetNextInternal()` may be called concurrently, so it is
// recommended that you protect the iterator state with a mutex.
- mutex_lock l(mu_);
+ tensorflow::mutex_lock l(mu_);
if (i_ < 10) {
// Create a scalar string tensor and add it to the output.
- Tensor record_tensor(ctx->allocator({}), DT_STRING, {});
+ tensorflow::Tensor record_tensor(ctx->allocator({}), DT_STRING, {});
record_tensor.scalar<string>()() = "MyReader!";
out_tensors->emplace_back(std::move(record_tensor));
++i_;
@@ -183,20 +188,20 @@ class MyReaderDatasetOp : public DatasetOpKernel {
//
// Implement these two methods if you want to be able to save and restore
// instances of this iterator.
- Status SaveInternal(IteratorStateWriter* writer) override {
- mutex_lock l(mu_);
+ Status SaveInternal(tensorflow::IteratorStateWriter* writer) override {
+ tensorflow::mutex_lock l(mu_);
TF_RETURN_IF_ERROR(writer->WriteScalar(full_name("i"), i_));
return Status::OK();
}
- Status RestoreInternal(IteratorContext* ctx,
- IteratorStateReader* reader) override {
- mutex_lock l(mu_);
+ Status RestoreInternal(tensorflow::IteratorContext* ctx,
+ tensorflow::IteratorStateReader* reader) override {
+ tensorflow::mutex_lock l(mu_);
TF_RETURN_IF_ERROR(reader->ReadScalar(full_name("i"), &i_));
return Status::OK();
}
private:
- mutex mu_;
+ tensorflow::mutex mu_;
int64 i_ GUARDED_BY(mu_);
};
};
@@ -211,20 +216,20 @@ class MyReaderDatasetOp : public DatasetOpKernel {
REGISTER_OP("MyReaderDataset")
.Output("handle: variant")
.SetIsStateful()
- .SetShapeFn(shape_inference::ScalarShape);
+ .SetShapeFn(tensorflow::shape_inference::ScalarShape);
// Register the kernel implementation for MyReaderDataset.
-REGISTER_KERNEL_BUILDER(Name("MyReaderDataset").Device(DEVICE_CPU),
+REGISTER_KERNEL_BUILDER(Name("MyReaderDataset").Device(tensorflow::DEVICE_CPU),
MyReaderDatasetOp);
} // namespace
-} // namespace tensorflow
+} // namespace myproject
```
The last step is to build the C++ code and add a Python wrapper. The easiest way
to do this is by @{$adding_an_op#build_the_op_library$compiling a dynamic
library} (e.g. called `"my_reader_dataset_op.so"`), and adding a Python class
-that subclasses @{tf.data.Dataset} to wrap it. An example Python program is
+that subclasses `tf.data.Dataset` to wrap it. An example Python program is
given here:
```python
@@ -287,14 +292,14 @@ track down where the bad data came from.
Examples of Ops useful for decoding records:
-* @{tf.parse_single_example} (and @{tf.parse_example})
-* @{tf.decode_csv}
-* @{tf.decode_raw}
+* `tf.parse_single_example` (and `tf.parse_example`)
+* `tf.decode_csv`
+* `tf.decode_raw`
Note that it can be useful to use multiple Ops to decode a particular record
format. For example, you may have an image saved as a string in
[a `tf.train.Example` protocol buffer](https://www.tensorflow.org/code/tensorflow/core/example/example.proto).
Depending on the format of that image, you might take the corresponding output
-from a @{tf.parse_single_example} op and call @{tf.image.decode_jpeg},
-@{tf.image.decode_png}, or @{tf.decode_raw}. It is common to take the output
-of `tf.decode_raw` and use @{tf.slice} and @{tf.reshape} to extract pieces.
+from a `tf.parse_single_example` op and call `tf.image.decode_jpeg`,
+`tf.image.decode_png`, or `tf.decode_raw`. It is common to take the output
+of `tf.decode_raw` and use `tf.slice` and `tf.reshape` to extract pieces.
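As a sketch only: the feature keys and the 28x28 image shape below are assumptions for illustration, not part of any particular dataset:

```python
import tensorflow as tf

def parse_record(serialized):
    # Assumed schema: each Example stores a raw uint8 image and an int64 label.
    features = tf.parse_single_example(serialized, {
        'image_raw': tf.FixedLenFeature([], tf.string),
        'label': tf.FixedLenFeature([], tf.int64),
    })
    image = tf.decode_raw(features['image_raw'], tf.uint8)
    image = tf.reshape(image, [28, 28, 1])   # assumed image shape
    return image, features['label']

dataset = tf.data.TFRecordDataset(['train.tfrecords']).map(parse_record)
```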
diff --git a/tensorflow/docs_src/get_started/eager.md b/tensorflow/docs_src/get_started/eager.md
deleted file mode 100644
index ddf239485a..0000000000
--- a/tensorflow/docs_src/get_started/eager.md
+++ /dev/null
@@ -1,3 +0,0 @@
-# Custom Training Walkthrough
-
-[Colab notebook](https://colab.research.google.com/github/tensorflow/models/blob/r1.9.0/samples/core/get_started/eager.ipynb)
diff --git a/tensorflow/docs_src/get_started/leftnav_files b/tensorflow/docs_src/get_started/leftnav_files
deleted file mode 100644
index 99d2b2c3e1..0000000000
--- a/tensorflow/docs_src/get_started/leftnav_files
+++ /dev/null
@@ -1,10 +0,0 @@
-### Learn and use ML
-basic_classification.md: Basic classification
-basic_text_classification.md: Text classification
-basic_regression.md: Regression
-overfit_and_underfit.md
-save_and_restore_models.md
-next_steps.md
-
-### Research and experimentation
-eager.md
diff --git a/tensorflow/docs_src/guide/autograph.md b/tensorflow/docs_src/guide/autograph.md
new file mode 100644
index 0000000000..823e1c6d6b
--- /dev/null
+++ b/tensorflow/docs_src/guide/autograph.md
@@ -0,0 +1,3 @@
+# AutoGraph: Easy control flow for graphs
+
+[Colab notebook](https://colab.research.google.com/github/tensorflow/models/blob/master/samples/core/guide/autograph.ipynb)
diff --git a/tensorflow/docs_src/guide/checkpoints.md b/tensorflow/docs_src/guide/checkpoints.md
index dfb2626b86..e1add29852 100644
--- a/tensorflow/docs_src/guide/checkpoints.md
+++ b/tensorflow/docs_src/guide/checkpoints.md
@@ -129,7 +129,7 @@ in the `model_dir` according to the following schedule:
You may alter the default schedule by taking the following steps:
-1. Create a @{tf.estimator.RunConfig$`RunConfig`} object that defines the
+1. Create a `tf.estimator.RunConfig` object that defines the
desired schedule.
2. When instantiating the Estimator, pass that `RunConfig` object to the
Estimator's `config` argument.
diff --git a/tensorflow/docs_src/guide/custom_estimators.md b/tensorflow/docs_src/guide/custom_estimators.md
index a63e2bafb3..199a0e93de 100644
--- a/tensorflow/docs_src/guide/custom_estimators.md
+++ b/tensorflow/docs_src/guide/custom_estimators.md
@@ -2,9 +2,9 @@
# Creating Custom Estimators
This document introduces custom Estimators. In particular, this document
-demonstrates how to create a custom @{tf.estimator.Estimator$Estimator} that
+demonstrates how to create a custom `tf.estimator.Estimator` that
mimics the behavior of the pre-made Estimator
-@{tf.estimator.DNNClassifier$`DNNClassifier`} in solving the Iris problem. See
+`tf.estimator.DNNClassifier` in solving the Iris problem. See
the @{$premade_estimators$Pre-Made Estimators chapter} for details
on the Iris problem.
@@ -34,7 +34,7 @@ with
## Pre-made vs. custom
As the following figure shows, pre-made Estimators are subclasses of the
-@{tf.estimator.Estimator} base class, while custom Estimators are an instance
+`tf.estimator.Estimator` base class, while custom Estimators are instances
of `tf.estimator.Estimator`:
<div style="width:100%; margin:auto; margin-bottom:10px; margin-top:20px;">
@@ -144,12 +144,12 @@ The caller may pass `params` to an Estimator's constructor. Any `params` passed
to the constructor are in turn passed on to the `model_fn`. In
[`custom_estimator.py`](https://github.com/tensorflow/models/blob/master/samples/core/get_started/custom_estimator.py)
the following lines create the estimator and set the params to configure the
-model. This configuration step is similar to how we configured the @{tf.estimator.DNNClassifier} in
+model. This configuration step is similar to how we configured the `tf.estimator.DNNClassifier` in
@{$premade_estimators}.
```python
classifier = tf.estimator.Estimator(
- model_fn=my_model,
+ model_fn=my_model_fn,
params={
'feature_columns': my_feature_columns,
# Two hidden layers of 10 nodes each.
@@ -178,7 +178,7 @@ The basic deep neural network model must define the following three sections:
### Define the input layer
-The first line of the `model_fn` calls @{tf.feature_column.input_layer} to
+The first line of the `model_fn` calls `tf.feature_column.input_layer` to
convert the feature dictionary and `feature_columns` into input for your model,
as follows:
@@ -202,7 +202,7 @@ creating the model's input layer.
If you are creating a deep neural network, you must define one or more hidden
layers. The Layers API provides a rich set of functions to define all types of
hidden layers, including convolutional, pooling, and dropout layers. For Iris,
-we're simply going to call @{tf.layers.dense} to create hidden layers, with
+we're simply going to call `tf.layers.dense` to create hidden layers, with
dimensions defined by `params['hidden_layers']`. In a `dense` layer each node
is connected to every node in the preceding layer. Here's the relevant code:
@@ -231,14 +231,14 @@ simplicity, the figure does not show all the units in each layer.
src="../images/custom_estimators/add_hidden_layer.png">
</div>
-Note that @{tf.layers.dense} provides many additional capabilities, including
+Note that `tf.layers.dense` provides many additional capabilities, including
the ability to set a multitude of regularization parameters. For the sake of
simplicity, though, we're going to simply accept the default values of the
other parameters.
### Output Layer
-We'll define the output layer by calling @{tf.layers.dense} yet again, this
+We'll define the output layer by calling `tf.layers.dense` yet again, this
time without an activation function:
```python
@@ -265,7 +265,7 @@ score, or "logit", calculated for the associated class of Iris: Setosa,
Versicolor, or Virginica, respectively.
Later on, these logits will be transformed into probabilities by the
-@{tf.nn.softmax} function.
+`tf.nn.softmax` function.
## Implement training, evaluation, and prediction {#modes}
@@ -290,9 +290,9 @@ function with the mode parameter set as follows:
| Estimator method | Estimator Mode |
|:---------------------------------|:------------------|
-|@{tf.estimator.Estimator.train$`train()`} |@{tf.estimator.ModeKeys.TRAIN$`ModeKeys.TRAIN`} |
-|@{tf.estimator.Estimator.evaluate$`evaluate()`} |@{tf.estimator.ModeKeys.EVAL$`ModeKeys.EVAL`} |
-|@{tf.estimator.Estimator.predict$`predict()`}|@{tf.estimator.ModeKeys.PREDICT$`ModeKeys.PREDICT`} |
+|`tf.estimator.Estimator.train` |`tf.estimator.ModeKeys.TRAIN` |
+|`tf.estimator.Estimator.evaluate` |`tf.estimator.ModeKeys.EVAL` |
+|`tf.estimator.Estimator.predict`|`tf.estimator.ModeKeys.PREDICT` |
For example, suppose you instantiate a custom Estimator to generate an object
named `classifier`. Then, you make the following call:
@@ -350,8 +350,8 @@ The `predictions` holds the following three key/value pairs:
* `logit` holds the raw logit values (in this example, -1.3, 2.6, and -0.9)
We return that dictionary to the caller via the `predictions` parameter of the
-@{tf.estimator.EstimatorSpec}. The Estimator's
-@{tf.estimator.Estimator.predict$`predict`} method will yield these
+`tf.estimator.EstimatorSpec`. The Estimator's
+`tf.estimator.Estimator.predict` method will yield these
dictionaries.
### Calculate the loss
@@ -361,7 +361,7 @@ model's loss. This is the
[objective](https://developers.google.com/machine-learning/glossary/#objective)
that will be optimized.
-We can calculate the loss by calling @{tf.losses.sparse_softmax_cross_entropy}.
+We can calculate the loss by calling `tf.losses.sparse_softmax_cross_entropy`.
The value returned by this function will be approximately 0 at its lowest,
when the probability of the correct class (at index `label`) is near 1.0.
The loss value returned is progressively larger as the probability of the
@@ -382,12 +382,12 @@ When the Estimator's `evaluate` method is called, the `model_fn` receives
or more metrics.
Although returning metrics is optional, most custom Estimators do return at
-least one metric. TensorFlow provides a Metrics module @{tf.metrics} to
+least one metric. TensorFlow provides a Metrics module `tf.metrics` to
calculate common metrics. For brevity's sake, we'll only return accuracy. The
-@{tf.metrics.accuracy} function compares our predictions against the
+`tf.metrics.accuracy` function compares our predictions against the
true values, that is, against the labels provided by the input function. The
-@{tf.metrics.accuracy} function requires the labels and predictions to have the
-same shape. Here's the call to @{tf.metrics.accuracy}:
+`tf.metrics.accuracy` function requires the labels and predictions to have the
+same shape. Here's the call to `tf.metrics.accuracy`:
``` python
# Compute evaluation metrics.
@@ -396,7 +396,7 @@ accuracy = tf.metrics.accuracy(labels=labels,
name='acc_op')
```
-The @{tf.estimator.EstimatorSpec$`EstimatorSpec`} returned for evaluation
+The `tf.estimator.EstimatorSpec` returned for evaluation
typically contains the following information:
* `loss`, which is the model's loss
@@ -416,7 +416,7 @@ if mode == tf.estimator.ModeKeys.EVAL:
mode, loss=loss, eval_metric_ops=metrics)
```
-The @{tf.summary.scalar} will make accuracy available to TensorBoard
+The `tf.summary.scalar` will make accuracy available to TensorBoard
in both `TRAIN` and `EVAL` modes. (More on this later).
### Train
@@ -426,7 +426,7 @@ with `mode = ModeKeys.TRAIN`. In this case, the model function must return an
`EstimatorSpec` that contains the loss and a training operation.
Building the training operation will require an optimizer. We will use
-@{tf.train.AdagradOptimizer} because we're mimicking the `DNNClassifier`, which
+`tf.train.AdagradOptimizer` because we're mimicking the `DNNClassifier`, which
also uses `Adagrad` by default. The `tf.train` package provides many other
optimizers—feel free to experiment with them.
@@ -437,14 +437,14 @@ optimizer = tf.train.AdagradOptimizer(learning_rate=0.1)
```
Next, we build the training operation using the optimizer's
-@{tf.train.Optimizer.minimize$`minimize`} method on the loss we calculated
+`tf.train.Optimizer.minimize` method on the loss we calculated
earlier.
The `minimize` method also takes a `global_step` parameter. TensorFlow uses this
parameter to count the number of training steps that have been processed
(to know when to end a training run). Furthermore, the `global_step` is
essential for TensorBoard graphs to work correctly. Simply call
-@{tf.train.get_global_step} and pass the result to the `global_step`
+`tf.train.get_global_step` and pass the result to the `global_step`
argument of `minimize`.
Here's the code to train the model:
@@ -453,7 +453,7 @@ Here's the code to train the model:
train_op = optimizer.minimize(loss, global_step=tf.train.get_global_step())
```
-The @{tf.estimator.EstimatorSpec$`EstimatorSpec`} returned for training
+The `tf.estimator.EstimatorSpec` returned for training
must have the following fields set:
* `loss`, which contains the value of the loss function.
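Assuming the `loss` and `train_op` built above, the TRAIN-mode return value can
be sketched as follows:

```python
if mode == tf.estimator.ModeKeys.TRAIN:
    return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)
```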
@@ -474,7 +474,7 @@ Instantiate the custom Estimator through the Estimator base class as follows:
```python
# Build 2 hidden layer DNN with 10, 10 units respectively.
classifier = tf.estimator.Estimator(
- model_fn=my_model,
+ model_fn=my_model_fn,
params={
'feature_columns': my_feature_columns,
# Two hidden layers of 10 nodes each.
diff --git a/tensorflow/docs_src/guide/datasets.md b/tensorflow/docs_src/guide/datasets.md
index 8b69860a68..bb18e8b79c 100644
--- a/tensorflow/docs_src/guide/datasets.md
+++ b/tensorflow/docs_src/guide/datasets.md
@@ -1,6 +1,6 @@
# Importing Data
-The @{tf.data} API enables you to build complex input pipelines from
+The `tf.data` API enables you to build complex input pipelines from
simple, reusable pieces. For example, the pipeline for an image model might
aggregate data from files in a distributed file system, apply random
perturbations to each image, and merge randomly selected images into a batch
@@ -51,7 +51,7 @@ Once you have a `Dataset` object, you can *transform* it into a new `Dataset` by
chaining method calls on the `tf.data.Dataset` object. For example, you
can apply per-element transformations such as `Dataset.map()` (to apply a
function to each element), and multi-element transformations such as
-`Dataset.batch()`. See the documentation for @{tf.data.Dataset}
+`Dataset.batch()`. See the documentation for `tf.data.Dataset`
for a complete list of transformations.
The most common way to consume values from a `Dataset` is to make an
@@ -211,13 +211,13 @@ for _ in range(20):
sess.run(next_element)
```
-A **feedable** iterator can be used together with @{tf.placeholder} to select
-what `Iterator` to use in each call to @{tf.Session.run}, via the familiar
+A **feedable** iterator can be used together with `tf.placeholder` to select
+what `Iterator` to use in each call to `tf.Session.run`, via the familiar
`feed_dict` mechanism. It offers the same functionality as a reinitializable
iterator, but it does not require you to initialize the iterator from the start
of a dataset when you switch between iterators. For example, using the same
training and validation example from above, you can use
-@{tf.data.Iterator.from_string_handle} to define a feedable iterator
+`tf.data.Iterator.from_string_handle` to define a feedable iterator
that allows you to switch between the two datasets:
```python
@@ -329,12 +329,12 @@ of an iterator will include all components in a single expression.
### Saving iterator state
-The @{tf.contrib.data.make_saveable_from_iterator} function creates a
+The `tf.contrib.data.make_saveable_from_iterator` function creates a
`SaveableObject` from an iterator, which can be used to save and
restore the current state of the iterator (and, effectively, the whole input
-pipeline). A saveable object thus created can be added to @{tf.train.Saver}
+pipeline). A saveable object thus created can be added to `tf.train.Saver`
variables list or the `tf.GraphKeys.SAVEABLE_OBJECTS` collection for saving and
-restoring in the same manner as a @{tf.Variable}. Refer to
+restoring in the same manner as a `tf.Variable`. Refer to
@{$saved_model$Saving and Restoring} for details on how to save and restore
variables.
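A minimal sketch, assuming `iterator` is a `tf.data` iterator built earlier:

```python
# Create a saveable object from the iterator and register it so that
# tf.train.Saver checkpoints the iterator state along with the variables.
saveable = tf.contrib.data.make_saveable_from_iterator(iterator)
tf.add_to_collection(tf.GraphKeys.SAVEABLE_OBJECTS, saveable)
saver = tf.train.Saver()  # now also saves/restores the input pipeline state
```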
@@ -488,7 +488,7 @@ dataset = dataset.flat_map(
### Consuming CSV data
The CSV file format is a popular format for storing tabular data in plain text.
-The @{tf.contrib.data.CsvDataset} class provides a way to extract records from
+The `tf.contrib.data.CsvDataset` class provides a way to extract records from
one or more CSV files that comply with [RFC 4180](https://tools.ietf.org/html/rfc4180).
Given one or more filenames and a list of defaults, a `CsvDataset` will produce
a tuple of elements whose types correspond to the types of the defaults
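For illustration, a hedged sketch in which the filename and column layout are
assumptions:

```python
# Declare each column's type through the defaults:
# two float features followed by an int label.
record_defaults = [tf.float32, tf.float32, tf.int32]
dataset = tf.contrib.data.CsvDataset(["data.csv"], record_defaults, header=True)
```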
@@ -757,9 +757,9 @@ dataset = dataset.repeat()
### Using high-level APIs
-The @{tf.train.MonitoredTrainingSession} API simplifies many aspects of running
+The `tf.train.MonitoredTrainingSession` API simplifies many aspects of running
TensorFlow in a distributed setting. `MonitoredTrainingSession` uses the
-@{tf.errors.OutOfRangeError} to signal that training has completed, so to use it
+`tf.errors.OutOfRangeError` to signal that training has completed, so to use it
with the `tf.data` API, we recommend using
`Dataset.make_one_shot_iterator()`. For example:
@@ -782,7 +782,7 @@ with tf.train.MonitoredTrainingSession(...) as sess:
sess.run(training_op)
```
-To use a `Dataset` in the `input_fn` of a @{tf.estimator.Estimator}, we also
+To use a `Dataset` in the `input_fn` of a `tf.estimator.Estimator`, we also
recommend using `Dataset.make_one_shot_iterator()`. For example:
```python
diff --git a/tensorflow/docs_src/guide/datasets_for_estimators.md b/tensorflow/docs_src/guide/datasets_for_estimators.md
index b04af78cd8..969ea579f7 100644
--- a/tensorflow/docs_src/guide/datasets_for_estimators.md
+++ b/tensorflow/docs_src/guide/datasets_for_estimators.md
@@ -1,6 +1,6 @@
# Datasets for Estimators
-The @{tf.data} module contains a collection of classes that allows you to
+The `tf.data` module contains a collection of classes that allows you to
easily load data, manipulate it, and pipe it into your model. This document
introduces the API by walking through two simple examples:
@@ -73,12 +73,12 @@ Let's walk through the `train_input_fn()`.
### Slices
-The function starts by using the @{tf.data.Dataset.from_tensor_slices} function
-to create a @{tf.data.Dataset} representing slices of the array. The array is
+The function starts by using the `tf.data.Dataset.from_tensor_slices` function
+to create a `tf.data.Dataset` representing slices of the array. The array is
sliced across the first dimension. For example, an array containing the
-@{$tutorials/layers$mnist training data} has a shape of `(60000, 28, 28)`.
-Passing this to `from_tensor_slices` returns a `Dataset` object containing
-60000 slices, each one a 28x28 image.
+MNIST training data has a shape of `(60000, 28, 28)`. Passing this to
+`from_tensor_slices` returns a `Dataset` object containing 60000 slices, each one
+a 28x28 image.
The code that returns this `Dataset` is as follows:
@@ -170,15 +170,15 @@ function takes advantage of several of these methods:
dataset = dataset.shuffle(1000).repeat().batch(batch_size)
```
-The @{tf.data.Dataset.shuffle$`shuffle`} method uses a fixed-size buffer to
+The `tf.data.Dataset.shuffle` method uses a fixed-size buffer to
shuffle the items as they pass through. In this case the `buffer_size` is
greater than the number of examples in the `Dataset`, ensuring that the data is
completely shuffled (the Iris data set contains only 150 examples).
-The @{tf.data.Dataset.repeat$`repeat`} method restarts the `Dataset` when
+The `tf.data.Dataset.repeat` method restarts the `Dataset` when
it reaches the end. To limit the number of epochs, set the `count` argument.
-The @{tf.data.Dataset.batch$`batch`} method collects a number of examples and
+The `tf.data.Dataset.batch` method collects a number of examples and
stacks them to create batches. This adds a dimension to their shape. The new
dimension is added as the first dimension. The following code uses
the `batch` method on the MNIST `Dataset`, from earlier. This results in a
@@ -234,7 +234,7 @@ The `labels` can/should be omitted when using the `predict` method.
## Reading a CSV File
The most common real-world use case for the `Dataset` class is to stream data
-from files on disk. The @{tf.data} module includes a variety of
+from files on disk. The `tf.data` module includes a variety of
file readers. Let's see how parsing the Iris dataset from the csv file looks
using a `Dataset`.
@@ -255,9 +255,9 @@ from the local files.
### Build the `Dataset`
-We start by building a @{tf.data.TextLineDataset$`TextLineDataset`} object to
+We start by building a `tf.data.TextLineDataset` object to
read the file one line at a time. Then, we call the
-@{tf.data.Dataset.skip$`skip`} method to skip over the first line of the file, which contains a header, not an example:
+`tf.data.Dataset.skip` method to skip over the first line of the file, which contains a header, not an example:
``` python
ds = tf.data.TextLineDataset(train_path).skip(1)
@@ -268,11 +268,11 @@ ds = tf.data.TextLineDataset(train_path).skip(1)
We will start by building a function to parse a single line.
The following `iris_data.parse_line` function accomplishes this task using the
-@{tf.decode_csv} function, and some simple python code:
+`tf.decode_csv` function and some simple Python code.
We must parse each of the lines in the dataset in order to generate the
necessary `(features, label)` pairs. The following `_parse_line` function
-calls @{tf.decode_csv} to parse a single line into its features
+calls `tf.decode_csv` to parse a single line into its features
and the label. Since Estimators require that features be represented as a
dictionary, we rely on Python's built-in `dict` and `zip` functions to build
that dictionary. The feature names are the keys of that dictionary.
@@ -301,7 +301,7 @@ def _parse_line(line):
### Parse the lines
Datasets have many methods for manipulating the data while it is being piped
-to a model. The most heavily-used method is @{tf.data.Dataset.map$`map`}, which
+to a model. The most heavily-used method is `tf.data.Dataset.map`, which
applies a transformation to each element of the `Dataset`.
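For example, the `_parse_line` function from the previous section can be
applied to every element with a single call:

``` python
# Apply the CSV parser to each line, yielding (features, label) pairs.
ds = ds.map(_parse_line)
```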
The `map` method takes a `map_func` argument that describes how each item in the
@@ -311,7 +311,7 @@ The `map` method takes a `map_func` argument that describes how each item in the
<img style="width:100%" src="../images/datasets/map.png">
</div>
<div style="text-align: center">
-The @{tf.data.Dataset.map$`map`} method applies the `map_func` to
+The `tf.data.Dataset.map` method applies the `map_func` to
transform each item in the <code>Dataset</code>.
</div>
diff --git a/tensorflow/docs_src/guide/debugger.md b/tensorflow/docs_src/guide/debugger.md
index dc4db58857..0b4a063c10 100644
--- a/tensorflow/docs_src/guide/debugger.md
+++ b/tensorflow/docs_src/guide/debugger.md
@@ -89,7 +89,7 @@ control the execution and inspect the graph's internal state.
the diagnosis of issues.
In this example, we have already registered a tensor filter called
-@{tfdbg.has_inf_or_nan},
+`tfdbg.has_inf_or_nan`,
which simply determines if there are any `nan` or `inf` values in any
intermediate tensors (tensors that are neither inputs nor outputs of the
`Session.run()` call, but are in the path leading from the inputs to the
@@ -98,13 +98,11 @@ we ship it with the
@{$python/tfdbg#Classes_for_debug_dump_data_and_directories$`debug_data`}
module.
-Note: You can also write your own custom filters. See
-the @{tfdbg.DebugDumpDir.find$API documentation}
-of `DebugDumpDir.find()` for additional information.
+Note: You can also write your own custom filters. See `tfdbg.DebugDumpDir.find`
+for additional information.
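For reference, registering a filter on a wrapped session looks like this
(a sketch; custom filters follow the same pattern):

``` python
from tensorflow.python import debug as tf_debug

# Wrap the session and register the built-in filter under a chosen name.
sess = tf_debug.LocalCLIDebugWrapperSession(sess)
sess.add_tensor_filter("has_inf_or_nan", tf_debug.has_inf_or_nan)
```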
## Debugging Model Training with tfdbg
-
Let's try training the model again, but with the `--debug` flag added this time:
```none
@@ -429,9 +427,9 @@ described in the preceding sections inapplicable. Fortunately, you can still
debug them by using special `hook`s provided by `tfdbg`.
`tfdbg` can debug the
-@{tf.estimator.Estimator.train$`train()`},
-@{tf.estimator.Estimator.evaluate$`evaluate()`} and
-@{tf.estimator.Estimator.predict$`predict()`}
+`tf.estimator.Estimator.train`,
+`tf.estimator.Estimator.evaluate` and
+`tf.estimator.Estimator.predict`
methods of tf-learn `Estimator`s. To debug `Estimator.train()`,
create a `LocalCLIDebugHook` and supply it in the `hooks` argument. For example:
@@ -463,7 +461,6 @@ predict_results = classifier.predict(predict_input_fn, hooks=hooks)
```
[debug_tflearn_iris.py](https://www.tensorflow.org/code/tensorflow/python/debug/examples/debug_tflearn_iris.py),
-based on [tf-learn's iris tutorial](https://www.tensorflow.org/versions/r1.8/get_started/tflearn),
contains a full example of how to use the tfdbg with `Estimator`s.
To run this example, do:
@@ -474,7 +471,7 @@ python -m tensorflow.python.debug.examples.debug_tflearn_iris --debug
The `LocalCLIDebugHook` also allows you to configure a `watch_fn` that can be
used to flexibly specify what `Tensor`s to watch on different `Session.run()`
calls, as a function of the `fetches` and `feed_dict` and other states. See
-@{tfdbg.DumpingDebugWrapperSession.__init__$this API doc}
+`tfdbg.DumpingDebugWrapperSession.__init__`
for more details.
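A hedged sketch using the dumping variant, `DumpingDebugHook`, whose
constructor accepts a `watch_fn`; the dump path and the regex are assumptions:

``` python
from tensorflow.python import debug as tf_debug

def my_watch_fn(fetches, feeds):
  # Watch only tensors whose node names contain "hidden".
  return tf_debug.WatchOptions(node_name_regex_whitelist=r".*hidden.*")

hooks = [tf_debug.DumpingDebugHook("/tmp/tfdbg_dumps", watch_fn=my_watch_fn)]
```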
## Debugging Keras Models with TFDBG
@@ -557,7 +554,7 @@ and the higher-level `Estimator` API.
If you interact directly with the `tf.Session` API in `python`, you can
configure the `RunOptions` proto that you call your `Session.run()` method
-with, by using the method @{tfdbg.watch_graph}.
+with, by using the method `tfdbg.watch_graph`.
This will cause the intermediate tensors and runtime graphs to be dumped to a
shared storage location of your choice when the `Session.run()` call occurs
(at the cost of slower performance). For example:
@@ -716,7 +713,7 @@ You might encounter this problem in any of the following situations:
* models with many intermediate tensors
* very large intermediate tensors
-* many @{tf.while_loop} iterations
+* many `tf.while_loop` iterations
There are three possible workarounds or solutions:
@@ -776,12 +773,12 @@ sess.run(b)
optimization folds the graph that contains `a` and `b` into a single
node to speed up future runs of the graph, which is why `tfdbg` does
not generate any intermediate tensor dumps. However, if `a` were a
- @{tf.Variable}, as in the following example:
+ `tf.Variable`, as in the following example:
``` python
import numpy as np
-a = tf.Variable(np.ones[10], name="a")
+a = tf.Variable(np.ones(10), name="a")
b = tf.add(a, a, name="b")
sess = tf.Session()
sess.run(tf.global_variables_initializer())
diff --git a/tensorflow/docs_src/guide/eager.md b/tensorflow/docs_src/guide/eager.md
index b2bc3273b4..24f6e4ee95 100644
--- a/tensorflow/docs_src/guide/eager.md
+++ b/tensorflow/docs_src/guide/eager.md
@@ -225,7 +225,7 @@ the tape backwards and then discard. A particular `tf.GradientTape` can only
compute one gradient; subsequent calls throw a runtime error.
```py
-w = tfe.Variable([[1.0]])
+w = tf.Variable([[1.0]])
with tf.GradientTape() as tape:
loss = w * w
@@ -260,8 +260,8 @@ def grad(weights, biases):
train_steps = 200
learning_rate = 0.01
# Start with arbitrary values for W and B on the same batch of data
-W = tfe.Variable(5.)
-B = tfe.Variable(10.)
+W = tf.Variable(5.)
+B = tf.Variable(10.)
print("Initial loss: {:.3f}".format(loss(W, B)))
@@ -316,9 +316,8 @@ for (batch, (images, labels)) in enumerate(dataset):
The following example creates a multi-layer model that classifies the standard
-[MNIST handwritten digits](https://www.tensorflow.org/tutorials/layers). It
-demonstrates the optimizer and layer APIs to build trainable graphs in an eager
-execution environment.
+MNIST handwritten digits. It demonstrates the optimizer and layer APIs to build
+trainable graphs in an eager execution environment.
### Train a model
@@ -408,11 +407,11 @@ with tf.device("/gpu:0"):
### Variables and optimizers
-`tfe.Variable` objects store mutable `tf.Tensor` values accessed during
+`tf.Variable` objects store mutable `tf.Tensor` values accessed during
training to make automatic differentiation easier. The parameters of a model can
be encapsulated in classes as variables.
-Better encapsulate model parameters by using `tfe.Variable` with
+Better encapsulate model parameters by using `tf.Variable` with
`tf.GradientTape`. For example, the automatic differentiation example above
can be rewritten:
@@ -420,9 +419,9 @@ can be rewritten:
class Model(tf.keras.Model):
def __init__(self):
super(Model, self).__init__()
- self.W = tfe.Variable(5., name='weight')
- self.B = tfe.Variable(10., name='bias')
- def predict(self, inputs):
+ self.W = tf.Variable(5., name='weight')
+ self.B = tf.Variable(10., name='bias')
+ def call(self, inputs):
return inputs * self.W + self.B
# A toy dataset of points around 3 * x + 2
@@ -433,7 +432,7 @@ training_outputs = training_inputs * 3 + 2 + noise
# The loss function to be optimized
def loss(model, inputs, targets):
- error = model.predict(inputs) - targets
+ error = model(inputs) - targets
return tf.reduce_mean(tf.square(error))
def grad(model, inputs, targets):
@@ -499,19 +498,19 @@ is removed, and is then deleted.
```py
with tf.device("gpu:0"):
- v = tfe.Variable(tf.random_normal([1000, 1000]))
+ v = tf.Variable(tf.random_normal([1000, 1000]))
v = None # v no longer takes up GPU memory
```
### Object-based saving
-`tfe.Checkpoint` can save and restore `tfe.Variable`s to and from
+`tf.train.Checkpoint` can save and restore `tf.Variable`s to and from
checkpoints:
```py
-x = tfe.Variable(10.)
+x = tf.Variable(10.)
-checkpoint = tfe.Checkpoint(x=x) # save as "x"
+checkpoint = tf.train.Checkpoint(x=x) # save as "x"
x.assign(2.) # Assign a new value to the variables and save.
save_path = checkpoint.save('./ckpt/')
@@ -524,18 +523,18 @@ checkpoint.restore(save_path)
print(x) # => 2.0
```
-To save and load models, `tfe.Checkpoint` stores the internal state of objects,
+To save and load models, `tf.train.Checkpoint` stores the internal state of objects,
without requiring hidden variables. To record the state of a `model`,
-an `optimizer`, and a global step, pass them to a `tfe.Checkpoint`:
+an `optimizer`, and a global step, pass them to a `tf.train.Checkpoint`:
```py
model = MyModel()
optimizer = tf.train.AdamOptimizer(learning_rate=0.001)
checkpoint_dir = '/path/to/model_dir'
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
-root = tfe.Checkpoint(optimizer=optimizer,
- model=model,
- optimizer_step=tf.train.get_or_create_global_step())
+root = tf.train.Checkpoint(optimizer=optimizer,
+ model=model,
+ optimizer_step=tf.train.get_or_create_global_step())
root.save(file_prefix=checkpoint_prefix)
# or
@@ -613,7 +612,7 @@ def line_search_step(fn, init_x, rate=1.0):
`tf.GradientTape` is a powerful interface for computing gradients, but there
is another [Autograd](https://github.com/HIPS/autograd)-style API available for
automatic differentiation. These functions are useful if writing math code with
-only tensors and gradient functions, and without `tfe.Variables`:
+only tensors and gradient functions, and without `tf.Variable`s:
* `tfe.gradients_function` —Returns a function that computes the derivatives
of its input function parameter with respect to its arguments. The input
@@ -728,7 +727,13 @@ def measure(x, steps):
start = time.time()
for i in range(steps):
x = tf.matmul(x, x)
- _ = x.numpy() # Make sure to execute op and not just enqueue it
+ # tf.matmul can return before completing the matrix multiplication
+ # (e.g., can return after enqueing the operation on a CUDA stream).
+ # The x.numpy() call below will ensure that all enqueued operations
+ # have completed (and will also copy the result to host memory,
+ # so we're including a little more than just the matmul operation
+ # time).
+ _ = x.numpy()
end = time.time()
return end - start
@@ -752,8 +757,8 @@ Output (exact numbers depend on hardware):
```
Time to multiply a (1000, 1000) matrix by itself 200 times:
-CPU: 4.614904403686523 secs
-GPU: 0.5581181049346924 secs
+CPU: 1.46628093719 secs
+GPU: 0.0593810081482 secs
```
A `tf.Tensor` object can be copied to a different device to execute its
@@ -825,7 +830,7 @@ gives you eager's interactive experimentation and debuggability with the
distributed performance benefits of graph execution.
Write, debug, and iterate in eager execution, then import the model graph for
-production deployment. Use `tfe.Checkpoint` to save and restore model
+production deployment. Use `tf.train.Checkpoint` to save and restore model
variables. This allows movement between eager and graph execution environments.
See the examples in:
[tensorflow/contrib/eager/python/examples](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/eager/python/examples).
diff --git a/tensorflow/docs_src/guide/estimators.md b/tensorflow/docs_src/guide/estimators.md
index 78b30c3040..7b54e3de29 100644
--- a/tensorflow/docs_src/guide/estimators.md
+++ b/tensorflow/docs_src/guide/estimators.md
@@ -1,6 +1,6 @@
# Estimators
-This document introduces @{tf.estimator$**Estimators**}--a high-level TensorFlow
+This document introduces `tf.estimator`--a high-level TensorFlow
API that greatly simplifies machine learning programming. Estimators encapsulate
the following actions:
@@ -11,10 +11,13 @@ the following actions:
You may either use the pre-made Estimators we provide or write your
own custom Estimators. All Estimators--whether pre-made or custom--are
-classes based on the @{tf.estimator.Estimator} class.
+classes based on the `tf.estimator.Estimator` class.
+
+For a quick example, try the [Estimator tutorials](../tutorials/estimators/linear).
+For an in-depth look at each sub-topic, see the [Estimator guides](premade_estimators).
Note: TensorFlow also includes a deprecated `Estimator` class at
-@{tf.contrib.learn.Estimator}, which you should not use.
+`tf.contrib.learn.Estimator`, which you should not use.
## Advantages of Estimators
@@ -29,14 +32,14 @@ Estimators provide the following benefits:
* You can develop a state of the art model with high-level intuitive code.
In short, it is generally much easier to create models with Estimators
than with the low-level TensorFlow APIs.
-* Estimators are themselves built on @{tf.layers}, which
+* Estimators are themselves built on `tf.keras.layers`, which
simplifies customization.
* Estimators build the graph for you.
* Estimators provide a safe distributed training loop that controls how and
when to:
* build the graph
* initialize variables
- * start queues
+ * load data
* handle exceptions
* create checkpoint files and recover from failures
* save summaries for TensorBoard
@@ -52,9 +55,9 @@ Pre-made Estimators enable you to work at a much higher conceptual level
than the base TensorFlow APIs. You no longer have to worry about creating
the computational graph or sessions since Estimators handle all
the "plumbing" for you. That is, pre-made Estimators create and manage
-@{tf.Graph$`Graph`} and @{tf.Session$`Session`} objects for you. Furthermore,
+`tf.Graph` and `tf.Session` objects for you. Furthermore,
pre-made Estimators let you experiment with different model architectures by
-making only minimal code changes. @{tf.estimator.DNNClassifier$`DNNClassifier`},
+making only minimal code changes. `tf.estimator.DNNClassifier`,
for example, is a pre-made Estimator class that trains classification models
based on dense, feed-forward neural networks.
@@ -83,7 +86,7 @@ of the following four steps:
(See @{$guide/datasets} for full details.)
-2. **Define the feature columns.** Each @{tf.feature_column}
+2. **Define the feature columns.** Each `tf.feature_column`
identifies a feature name, its type, and any input pre-processing.
For example, the following snippet creates three feature
columns that hold integer or floating-point data. The first two
@@ -155,7 +158,7 @@ We recommend the following workflow:
You can convert existing Keras models to Estimators. Doing so enables your Keras
model to access Estimator's strengths, such as distributed training. Call
-@{tf.keras.estimator.model_to_estimator} as in the
+`tf.keras.estimator.model_to_estimator` as in the
following sample:
```python
@@ -190,4 +193,4 @@ and similarly, the predicted output names can be obtained from
`keras_inception_v3.output_names`.
For more details, please refer to the documentation for
-@{tf.keras.estimator.model_to_estimator}.
+`tf.keras.estimator.model_to_estimator`.
diff --git a/tensorflow/docs_src/guide/faq.md b/tensorflow/docs_src/guide/faq.md
index b6291a9ffa..8370097560 100644
--- a/tensorflow/docs_src/guide/faq.md
+++ b/tensorflow/docs_src/guide/faq.md
@@ -28,13 +28,13 @@ See also the
#### Why does `c = tf.matmul(a, b)` not execute the matrix multiplication immediately?
In the TensorFlow Python API, `a`, `b`, and `c` are
-@{tf.Tensor} objects. A `Tensor` object is
+`tf.Tensor` objects. A `Tensor` object is
a symbolic handle to the result of an operation, but does not actually hold the
values of the operation's output. Instead, TensorFlow encourages users to build
up complicated expressions (such as entire neural networks and their gradients) as
a dataflow graph. You then offload the computation of the entire dataflow graph
(or a subgraph of it) to a TensorFlow
-@{tf.Session}, which is able to execute the
+`tf.Session`, which is able to execute the
whole computation much more efficiently than executing the operations
one-by-one.
@@ -46,7 +46,7 @@ device, and `"/device:GPU:i"` (or `"/gpu:i"`) for the *i*th GPU device.
#### How do I place operations on a particular device?
To place a group of operations on a device, create them within a
-@{tf.device$`with tf.device(name):`} context. See
+`tf.device` context. See
the how-to documentation on
@{$using_gpu$using GPUs with TensorFlow} for details of how
TensorFlow assigns operations to devices, and the
@@ -63,17 +63,17 @@ See also the
Feeding is a mechanism in the TensorFlow Session API that allows you to
substitute different values for one or more tensors at run time. The `feed_dict`
-argument to @{tf.Session.run} is a
-dictionary that maps @{tf.Tensor} objects to
+argument to `tf.Session.run` is a
+dictionary that maps `tf.Tensor` objects to
numpy arrays (and some other types), which will be used as the values of those
tensors in the execution of a step.
#### What is the difference between `Session.run()` and `Tensor.eval()`?
-If `t` is a @{tf.Tensor} object,
-@{tf.Tensor.eval} is shorthand for
-@{tf.Session.run}, where `sess` is the
-current @{tf.get_default_session}. The
+If `t` is a `tf.Tensor` object,
+`tf.Tensor.eval` (that is, `t.eval()`) is shorthand for
+`tf.Session.run` (that is, `sess.run(t)`), where `sess` is the
+current `tf.get_default_session`. The
following two snippets of code are equivalent:
```python
@@ -99,11 +99,11 @@ sessions, it may be more straightforward to make explicit calls to
#### Do Sessions have a lifetime? What about intermediate tensors?
Sessions can own resources, such as
-@{tf.Variable},
-@{tf.QueueBase}, and
-@{tf.ReaderBase}. These resources can sometimes use
+`tf.Variable`,
+`tf.QueueBase`, and
+`tf.ReaderBase`. These resources can sometimes use
a significant amount of memory, and can be released when the session is closed by calling
-@{tf.Session.close}.
+`tf.Session.close`.
The intermediate tensors that are created as part of a call to
@{$python/client$`Session.run()`} will be freed at or before the
@@ -120,7 +120,7 @@ dimensions:
devices, which makes it possible to speed up
@{$deep_cnn$CIFAR-10 training using multiple GPUs}.
* The Session API allows multiple concurrent steps (i.e. calls to
- @{tf.Session.run} in parallel). This
+ `tf.Session.run` in parallel). This
enables the runtime to get higher throughput, if a single step does not use
all of the resources in your computer.
@@ -151,8 +151,8 @@ than 3.5.
#### Why does `Session.run()` hang when using a reader or a queue?
-The @{tf.ReaderBase} and
-@{tf.QueueBase} classes provide special operations that
+The `tf.ReaderBase` and
+`tf.QueueBase` classes provide special operations that
can *block* until input (or free space in a bounded queue) becomes
available. These operations allow you to build sophisticated
@{$reading_data$input pipelines}, at the cost of making the
@@ -169,9 +169,9 @@ See also the how-to documentation on @{$variables$variables} and
#### What is the lifetime of a variable?
A variable is created when you first run the
-@{tf.Variable.initializer}
+`tf.Variable.initializer`
operation for that variable in a session. It is destroyed when that
-@{tf.Session.close}.
+session is closed by calling `tf.Session.close`.
#### How do variables behave when they are concurrently accessed?
@@ -179,32 +179,31 @@ Variables allow concurrent read and write operations. The value read from a
variable may change if it is concurrently updated. By default, concurrent
assignment operations to a variable are allowed to run with no mutual exclusion.
To acquire a lock when assigning to a variable, pass `use_locking=True` to
-@{tf.Variable.assign}.
+`tf.Variable.assign`.
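For example (a minimal sketch):

```python
v = tf.Variable(0)
update = v.assign(1, use_locking=True)  # holds the variable's lock while assigning
```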
## Tensor shapes
See also the
-@{tf.TensorShape}.
+`tf.TensorShape` documentation.
#### How can I determine the shape of a tensor in Python?
In TensorFlow, a tensor has both a static (inferred) shape and a dynamic (true)
shape. The static shape can be read using the
-@{tf.Tensor.get_shape}
+`tf.Tensor.get_shape`
method: this shape is inferred from the operations that were used to create the
-tensor, and may be
-@{tf.TensorShape$partially complete}. If the static
-shape is not fully defined, the dynamic shape of a `Tensor` `t` can be
-determined by evaluating @{tf.shape$`tf.shape(t)`}.
+tensor, and may be partially defined (the static shape may contain `None`). If
+the static shape is not fully defined, the dynamic shape of a `tf.Tensor` `t`
+can be determined by evaluating `tf.shape(t)`.
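A minimal sketch of the two kinds of shape:

```python
x = tf.placeholder(tf.float32, shape=[None, 3])
print(x.get_shape())         # (?, 3): the static shape, partially defined
dynamic_shape = tf.shape(x)  # a tf.Tensor; yields the true shape at run time
```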
#### What is the difference between `x.set_shape()` and `x = tf.reshape(x)`?
-The @{tf.Tensor.set_shape} method updates
+The `tf.Tensor.set_shape` method updates
the static shape of a `Tensor` object, and it is typically used to provide
additional shape information when this cannot be inferred directly. It does not
change the dynamic shape of the tensor.
-The @{tf.reshape} operation creates
+The `tf.reshape` operation creates
a new tensor with a different dynamic shape.
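For example (a minimal sketch):

```python
x = tf.placeholder(tf.float32)  # static shape initially unknown
x.set_shape([28, 28])           # refines the static shape; no new tensor created
y = tf.reshape(x, [784])        # a new tensor with a different dynamic shape
```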
#### How do I build a graph that works with variable batch sizes?
@@ -212,9 +211,9 @@ a new tensor with a different dynamic shape.
It is often useful to build a graph that works with variable batch sizes
so that the same code can be used for (mini-)batch training, and
single-instance inference. The resulting graph can be
-@{tf.Graph.as_graph_def$saved as a protocol buffer}
+saved as a protocol buffer using `tf.Graph.as_graph_def`
and
-@{tf.import_graph_def$imported into another program}.
+imported into another program using `tf.import_graph_def`.
When building a variable-size graph, the most important thing to remember is not
to encode the batch size as a Python constant, but instead to use a symbolic
@@ -224,7 +223,7 @@ to encode the batch size as a Python constant, but instead to use a symbolic
to extract the batch dimension from a `Tensor` called `input`, and store it in
a `Tensor` called `batch_size`.
-* Use @{tf.reduce_mean} instead
+* Use `tf.reduce_mean` instead
of `tf.reduce_sum(...) / batch_size`.
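Putting those tips together (a sketch; `per_example_loss` stands in for an
assumed per-example loss tensor):

```python
input = tf.placeholder(tf.float32, shape=[None, 784])  # batch size left as None
batch_size = tf.shape(input)[0]          # a symbolic, run-time batch size
loss = tf.reduce_mean(per_example_loss)  # not tf.reduce_sum(...) / batch_size
```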
@@ -259,19 +258,19 @@ See the how-to documentation for
There are three main options for dealing with data in a custom format.
The easiest option is to write parsing code in Python that transforms the data
-into a numpy array. Then, use @{tf.data.Dataset.from_tensor_slices} to
+into a numpy array. Then, use `tf.data.Dataset.from_tensor_slices` to
create an input pipeline from the in-memory data.
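A minimal sketch of that approach, where the array is a stand-in for your
parsed data:

```python
import numpy as np

# Stand-in for data already parsed into a numpy array by custom Python code.
features = np.random.rand(100, 3).astype(np.float32)
dataset = tf.data.Dataset.from_tensor_slices(features)
```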
If your data doesn't fit in memory, try doing the parsing in the Dataset
pipeline. Start with an appropriate file reader, like
-@{tf.data.TextLineDataset}. Then convert the dataset by mapping
-@{tf.data.Dataset.map$mapping} appropriate operations over it.
-Prefer predefined TensorFlow operations such as @{tf.decode_raw},
-@{tf.decode_csv}, @{tf.parse_example}, or @{tf.image.decode_png}.
+`tf.data.TextLineDataset`. Then convert the dataset by mapping
+appropriate operations over it with `tf.data.Dataset.map`.
+Prefer predefined TensorFlow operations such as `tf.decode_raw`,
+`tf.decode_csv`, `tf.parse_example`, or `tf.image.decode_png`.
If your data is not easily parsable with the built-in TensorFlow operations,
consider converting it, offline, to a format that is easily parsable, such
-as @{tf.python_io.TFRecordWriter$`TFRecord`} format.
+as the `TFRecord` format, written with `tf.python_io.TFRecordWriter`.
The most efficient method to customize the parsing behavior is to
@{$adding_an_op$add a new op written in C++} that parses your
diff --git a/tensorflow/docs_src/guide/feature_columns.md b/tensorflow/docs_src/guide/feature_columns.md
index 1013ec910c..9cd695cc25 100644
--- a/tensorflow/docs_src/guide/feature_columns.md
+++ b/tensorflow/docs_src/guide/feature_columns.md
@@ -6,10 +6,10 @@ enabling you to transform a diverse range of raw data into formats that
Estimators can use, allowing easy experimentation.
In @{$premade_estimators$Premade Estimators}, we used the premade
-Estimator, @{tf.estimator.DNNClassifier$`DNNClassifier`} to train a model to
+Estimator, `tf.estimator.DNNClassifier` to train a model to
predict different types of Iris flowers from four input features. That example
created only numerical feature columns (of type
-@{tf.feature_column.numeric_column}). Although numerical feature columns model
+`tf.feature_column.numeric_column`). Although numerical feature columns model
the lengths of petals and sepals effectively, real world data sets contain all
kinds of features, many of which are non-numerical.
@@ -59,7 +59,7 @@ Feature columns bridge raw data with the data your model needs.
</div>
To create feature columns, call functions from the
-@{tf.feature_column} module. This document explains nine of the functions in
+`tf.feature_column` module. This document explains nine of the functions in
that module. As the following figure shows, all nine functions return either a
Categorical-Column or a Dense-Column object, except `bucketized_column`, which
inherits from both classes:
@@ -75,7 +75,7 @@ Let's look at these functions in more detail.
### Numeric column
-The Iris classifier calls the @{tf.feature_column.numeric_column} function for
+The Iris classifier calls the `tf.feature_column.numeric_column` function for
all input features:
* `SepalLength`
@@ -119,7 +119,7 @@ matrix_feature_column = tf.feature_column.numeric_column(key="MyMatrix",
Often, you don't want to feed a number directly into the model, but instead
split its value into different categories based on numerical ranges. To do so,
-create a @{tf.feature_column.bucketized_column$bucketized column}. For
+create a `tf.feature_column.bucketized_column`. For
example, consider raw data that represents the year a house was built. Instead
of representing that year as a scalar numeric column, we could split the year
into the following four buckets:
@@ -194,7 +194,7 @@ value. That is:
* `1="electronics"`
* `2="sport"`
-Call @{tf.feature_column.categorical_column_with_identity} to implement a
+Call `tf.feature_column.categorical_column_with_identity` to implement a
categorical identity column. For example:
``` python
@@ -230,8 +230,8 @@ As you can see, categorical vocabulary columns are kind of an enum version of
categorical identity columns. TensorFlow provides two different functions to
create categorical vocabulary columns:
-* @{tf.feature_column.categorical_column_with_vocabulary_list}
-* @{tf.feature_column.categorical_column_with_vocabulary_file}
+* `tf.feature_column.categorical_column_with_vocabulary_list`
+* `tf.feature_column.categorical_column_with_vocabulary_file`
`categorical_column_with_vocabulary_list` maps each string to an integer based
on an explicit vocabulary list. For example:
@@ -281,7 +281,7 @@ categories can be so big that it's not possible to have individual categories
for each vocabulary word or integer because that would consume too much memory.
For these cases, we can instead turn the question around and ask, "How many
categories am I willing to have for my input?" In fact, the
-@{tf.feature_column.categorical_column_with_hash_bucket} function enables you
+`tf.feature_column.categorical_column_with_hash_bucket` function enables you
to specify the number of categories. For this type of feature column the model
calculates a hash value of the input, then puts it into one of
the `hash_bucket_size` categories using the modulo operator, as in the following
@@ -349,7 +349,7 @@ equal size.
</div>
For the solution, we used a combination of the `bucketized_column` we looked at
-earlier, with the @{tf.feature_column.crossed_column} function.
+earlier, with the `tf.feature_column.crossed_column` function.
<!--TODO(markdaoust) link to full example-->
@@ -440,7 +440,7 @@ Representing data in indicator columns.
</div>
Here's how you create an indicator column by calling
-@{tf.feature_column.indicator_column}:
+`tf.feature_column.indicator_column`:
``` python
categorical_column = ... # Create any type of categorical column.
@@ -521,7 +521,7 @@ number of dimensions is 3:
Note that this is just a general guideline; you can set the number of embedding
dimensions as you please.
-Call @{tf.feature_column.embedding_column} to create an `embedding_column` as
+Call `tf.feature_column.embedding_column` to create an `embedding_column` as
suggested by the following snippet:
``` python
@@ -543,15 +543,15 @@ columns.
As the following list indicates, not all Estimators permit all types of
`feature_columns` argument(s):
-* @{tf.estimator.LinearClassifier$`LinearClassifier`} and
- @{tf.estimator.LinearRegressor$`LinearRegressor`}: Accept all types of
+* `tf.estimator.LinearClassifier` and
+ `tf.estimator.LinearRegressor`: Accept all types of
feature column.
-* @{tf.estimator.DNNClassifier$`DNNClassifier`} and
- @{tf.estimator.DNNRegressor$`DNNRegressor`}: Only accept dense columns. Other
+* `tf.estimator.DNNClassifier` and
+ `tf.estimator.DNNRegressor`: Only accept dense columns. Other
column types must be wrapped in either an `indicator_column` or
`embedding_column`.
-* @{tf.estimator.DNNLinearCombinedClassifier$`DNNLinearCombinedClassifier`} and
- @{tf.estimator.DNNLinearCombinedRegressor$`DNNLinearCombinedRegressor`}:
+* `tf.estimator.DNNLinearCombinedClassifier` and
+ `tf.estimator.DNNLinearCombinedRegressor`:
* The `linear_feature_columns` argument accepts any feature column type.
* The `dnn_feature_columns` argument only accepts dense columns.
@@ -561,9 +561,9 @@ For more examples on feature columns, view the following:
* The @{$low_level_intro#feature_columns$Low Level Introduction} demonstrates how
to experiment directly with `feature_columns` using TensorFlow's low-level APIs.
-* The @{$wide$wide} and @{$wide_and_deep$Wide & Deep} Tutorials solve a
- binary classification problem using `feature_columns` on a variety of input
- data types.
+* The [Estimator wide and deep learning tutorial](https://github.com/tensorflow/models/tree/master/official/wide_deep)
+ solves a binary classification problem using `feature_columns` on a variety of
+ input data types.
To learn more about embeddings, see the following:
diff --git a/tensorflow/docs_src/guide/graph_viz.md b/tensorflow/docs_src/guide/graph_viz.md
index f581ae56da..97b0e2d4de 100644
--- a/tensorflow/docs_src/guide/graph_viz.md
+++ b/tensorflow/docs_src/guide/graph_viz.md
@@ -15,7 +15,7 @@ variable names can be scoped and the visualization uses this information to
define a hierarchy on the nodes in the graph. By default, only the top of this
hierarchy is shown. Here is an example that defines three operations under the
`hidden` name scope using
-@{tf.name_scope}:
+`tf.name_scope`:
```python
import tensorflow as tf
@@ -248,7 +248,8 @@ The images below show the CIFAR-10 model with tensor shape information:
Often it is useful to collect runtime metadata for a run, such as total memory
usage, total compute time, and tensor shapes for nodes. The code example below
is a snippet from the train and test section of a modification of the
-@{$layers$simple MNIST tutorial}, in which we have recorded summaries and
+[Estimators MNIST tutorial](../tutorials/estimators/cnn.md), in which we have
+recorded summaries and
runtime statistics. See the
@{$summaries_and_tensorboard#serializing-the-data$Summaries Tutorial}
for details on how to record summaries.
diff --git a/tensorflow/docs_src/guide/graphs.md b/tensorflow/docs_src/guide/graphs.md
index e6246ef148..2bb44fbb32 100644
--- a/tensorflow/docs_src/guide/graphs.md
+++ b/tensorflow/docs_src/guide/graphs.md
@@ -7,7 +7,7 @@ TensorFlow **session** to run parts of the graph across a set of local and
remote devices.
This guide will be most useful if you intend to use the low-level programming
-model directly. Higher-level APIs such as @{tf.estimator.Estimator} and Keras
+model directly. Higher-level APIs such as `tf.estimator.Estimator` and Keras
hide the details of graphs and sessions from the end user, but this guide may
also be useful if you want to understand how these APIs are implemented.
@@ -18,12 +18,12 @@ also be useful if you want to understand how these APIs are implemented.
[Dataflow](https://en.wikipedia.org/wiki/Dataflow_programming) is a common
programming model for parallel computing. In a dataflow graph, the nodes
represent units of computation, and the edges represent the data consumed or
-produced by a computation. For example, in a TensorFlow graph, the @{tf.matmul}
+produced by a computation. For example, in a TensorFlow graph, the `tf.matmul`
operation would correspond to a single node with two incoming edges (the
matrices to be multiplied) and one outgoing edge (the result of the
multiplication).
-<!-- TODO(barryr): Add a diagram to illustrate the @{tf.matmul} graph. -->
+<!-- TODO(barryr): Add a diagram to illustrate the `tf.matmul` graph. -->
Dataflow has several advantages that TensorFlow leverages when executing your
programs:
@@ -48,9 +48,9 @@ programs:
low-latency inference.
-## What is a @{tf.Graph}?
+## What is a `tf.Graph`?
-A @{tf.Graph} contains two relevant kinds of information:
+A `tf.Graph` contains two relevant kinds of information:
* **Graph structure.** The nodes and edges of the graph, indicating how
individual operations are composed together, but not prescribing how they
@@ -59,78 +59,78 @@ A @{tf.Graph} contains two relevant kinds of information:
context that source code conveys.
* **Graph collections.** TensorFlow provides a general mechanism for storing
- collections of metadata in a @{tf.Graph}. The @{tf.add_to_collection} function
- enables you to associate a list of objects with a key (where @{tf.GraphKeys}
- defines some of the standard keys), and @{tf.get_collection} enables you to
+ collections of metadata in a `tf.Graph`. The `tf.add_to_collection` function
+ enables you to associate a list of objects with a key (where `tf.GraphKeys`
+ defines some of the standard keys), and `tf.get_collection` enables you to
look up all objects associated with a key. Many parts of the TensorFlow
- library use this facility: for example, when you create a @{tf.Variable}, it
+ library use this facility: for example, when you create a `tf.Variable`, it
is added by default to collections representing "global variables" and
- "trainable variables". When you later come to create a @{tf.train.Saver} or
- @{tf.train.Optimizer}, the variables in these collections are used as the
+ "trainable variables". When you later come to create a `tf.train.Saver` or
+ `tf.train.Optimizer`, the variables in these collections are used as the
default arguments.
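A minimal sketch of the collections mechanism, using an arbitrary key:

```python
tf.add_to_collection("my_things", tf.constant(1.0))
tf.add_to_collection("my_things", tf.constant(2.0))
values = tf.get_collection("my_things")  # => a list of the two tensors
```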
-## Building a @{tf.Graph}
+## Building a `tf.Graph`
Most TensorFlow programs start with a dataflow graph construction phase. In this
-phase, you invoke TensorFlow API functions that construct new @{tf.Operation}
-(node) and @{tf.Tensor} (edge) objects and add them to a @{tf.Graph}
+phase, you invoke TensorFlow API functions that construct new `tf.Operation`
+(node) and `tf.Tensor` (edge) objects and add them to a `tf.Graph`
instance. TensorFlow provides a **default graph** that is an implicit argument
to all API functions in the same context. For example:
-* Calling `tf.constant(42.0)` creates a single @{tf.Operation} that produces the
- value `42.0`, adds it to the default graph, and returns a @{tf.Tensor} that
+* Calling `tf.constant(42.0)` creates a single `tf.Operation` that produces the
+ value `42.0`, adds it to the default graph, and returns a `tf.Tensor` that
represents the value of the constant.
-* Calling `tf.matmul(x, y)` creates a single @{tf.Operation} that multiplies
- the values of @{tf.Tensor} objects `x` and `y`, adds it to the default graph,
- and returns a @{tf.Tensor} that represents the result of the multiplication.
+* Calling `tf.matmul(x, y)` creates a single `tf.Operation` that multiplies
+ the values of `tf.Tensor` objects `x` and `y`, adds it to the default graph,
+ and returns a `tf.Tensor` that represents the result of the multiplication.
-* Executing `v = tf.Variable(0)` adds to the graph a @{tf.Operation} that will
- store a writeable tensor value that persists between @{tf.Session.run} calls.
- The @{tf.Variable} object wraps this operation, and can be used [like a
+* Executing `v = tf.Variable(0)` adds to the graph a `tf.Operation` that will
+ store a writeable tensor value that persists between `tf.Session.run` calls.
+ The `tf.Variable` object wraps this operation, and can be used [like a
tensor](#tensor-like_objects), which will read the current value of the
- stored value. The @{tf.Variable} object also has methods such as
- @{tf.Variable.assign$`assign`} and @{tf.Variable.assign_add$`assign_add`} that
- create @{tf.Operation} objects that, when executed, update the stored value.
+ stored value. The `tf.Variable` object also has methods such as
+ `tf.Variable.assign` and `tf.Variable.assign_add` that
+ create `tf.Operation` objects that, when executed, update the stored value.
(See @{$guide/variables} for more information about variables.)
-* Calling @{tf.train.Optimizer.minimize} will add operations and tensors to the
- default graph that calculates gradients, and return a @{tf.Operation} that,
+* Calling `tf.train.Optimizer.minimize` will add operations and tensors to the
+ default graph that calculates gradients, and return a `tf.Operation` that,
when run, will apply those gradients to a set of variables.
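Those calls compose naturally. A minimal sketch that builds a tiny training
step on the default graph (the optimizer choice here is arbitrary):

```python
# Each call adds nodes to the default graph; nothing is computed yet.
x = tf.constant(42.0)    # a constant tf.Operation and its output tf.Tensor
v = tf.Variable(0.0)     # a variable op wrapped by a tf.Variable object
loss = tf.square(v - x)  # further operations and edge tensors
train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)
```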
Most programs rely solely on the default graph. However,
see [Dealing with multiple graphs](#programming_with_multiple_graphs) for more
-advanced use cases. High-level APIs such as the @{tf.estimator.Estimator} API
+advanced use cases. High-level APIs such as the `tf.estimator.Estimator` API
manage the default graph on your behalf, and--for example--may create different
graphs for training and evaluation.
Note: Calling most functions in the TensorFlow API merely adds operations
and tensors to the default graph, but **does not** perform the actual
-computation. Instead, you compose these functions until you have a @{tf.Tensor}
-or @{tf.Operation} that represents the overall computation--such as performing
-one step of gradient descent--and then pass that object to a @{tf.Session} to
-perform the computation. See the section "Executing a graph in a @{tf.Session}"
+computation. Instead, you compose these functions until you have a `tf.Tensor`
+or `tf.Operation` that represents the overall computation--such as performing
+one step of gradient descent--and then pass that object to a `tf.Session` to
+perform the computation. See the section "Executing a graph in a `tf.Session`"
for more details.
## Naming operations
-A @{tf.Graph} object defines a **namespace** for the @{tf.Operation} objects it
+A `tf.Graph` object defines a **namespace** for the `tf.Operation` objects it
contains. TensorFlow automatically chooses a unique name for each operation in
your graph, but giving operations descriptive names can make your program easier
to read and debug. The TensorFlow API provides two ways to override the name of
an operation:
-* Each API function that creates a new @{tf.Operation} or returns a new
- @{tf.Tensor} accepts an optional `name` argument. For example,
- `tf.constant(42.0, name="answer")` creates a new @{tf.Operation} named
- `"answer"` and returns a @{tf.Tensor} named `"answer:0"`. If the default graph
+* Each API function that creates a new `tf.Operation` or returns a new
+ `tf.Tensor` accepts an optional `name` argument. For example,
+ `tf.constant(42.0, name="answer")` creates a new `tf.Operation` named
+ `"answer"` and returns a `tf.Tensor` named `"answer:0"`. If the default graph
already contains an operation named `"answer"`, then TensorFlow would append
`"_1"`, `"_2"`, and so on to the name, in order to make it unique.
-* The @{tf.name_scope} function makes it possible to add a **name scope** prefix
+* The `tf.name_scope` function makes it possible to add a **name scope** prefix
to all operations created in a particular context. The current name scope
- prefix is a `"/"`-delimited list of the names of all active @{tf.name_scope}
+ prefix is a `"/"`-delimited list of the names of all active `tf.name_scope`
context managers. If a name scope has already been used in the current
context, TensorFlow appends `"_1"`, `"_2"`, and so on. For example:
@@ -160,7 +160,7 @@ The graph visualizer uses name scopes to group operations and reduce the visual
complexity of a graph. See [Visualizing your graph](#visualizing-your-graph) for
more information.
-Note that @{tf.Tensor} objects are implicitly named after the @{tf.Operation}
+Note that `tf.Tensor` objects are implicitly named after the `tf.Operation`
that produces the tensor as output. A tensor name has the form `"<OP_NAME>:<i>"`
where:
@@ -171,7 +171,7 @@ where:
## Placing operations on different devices
If you want your TensorFlow program to use multiple different devices, the
-@{tf.device} function provides a convenient way to request that all operations
+`tf.device` function provides a convenient way to request that all operations
created in a particular context are placed on the same device (or type of
device).
@@ -186,7 +186,7 @@ where:
* `<JOB_NAME>` is an alpha-numeric string that does not start with a number.
* `<DEVICE_TYPE>` is a registered device type (such as `GPU` or `CPU`).
* `<TASK_INDEX>` is a non-negative integer representing the index of the task
- in the job named `<JOB_NAME>`. See @{tf.train.ClusterSpec} for an explanation
+ in the job named `<JOB_NAME>`. See `tf.train.ClusterSpec` for an explanation
of jobs and tasks.
* `<DEVICE_INDEX>` is a non-negative integer representing the index of the
device, for example, to distinguish between different GPU devices used in the
@@ -194,7 +194,7 @@ where:
You do not need to specify every part of a device specification. For example,
if you are running in a single-machine configuration with a single GPU, you
-might use @{tf.device} to pin some operations to the CPU and GPU:
+might use `tf.device` to pin some operations to the CPU and GPU:
```python
# Operations created outside either context will run on the "best possible"
@@ -229,13 +229,13 @@ with tf.device("/job:worker"):
layer_2 = tf.matmul(train_batch, weights_2) + biases_2
```
-@{tf.device} gives you a lot of flexibility to choose placements for individual
+`tf.device` gives you a lot of flexibility to choose placements for individual
operations or broad regions of a TensorFlow graph. In many cases, there are
simple heuristics that work well. For example, the
-@{tf.train.replica_device_setter} API can be used with @{tf.device} to place
+`tf.train.replica_device_setter` API can be used with `tf.device` to place
operations for **data-parallel distributed training**. For example, the
-following code fragment shows how @{tf.train.replica_device_setter} applies
-different placement policies to @{tf.Variable} objects and other operations:
+following code fragment shows how `tf.train.replica_device_setter` applies
+different placement policies to `tf.Variable` objects and other operations:
```python
with tf.device(tf.train.replica_device_setter(ps_tasks=3)):
@@ -253,41 +253,41 @@ with tf.device(tf.train.replica_device_setter(ps_tasks=3)):
## Tensor-like objects
-Many TensorFlow operations take one or more @{tf.Tensor} objects as arguments.
-For example, @{tf.matmul} takes two @{tf.Tensor} objects, and @{tf.add_n} takes
-a list of `n` @{tf.Tensor} objects. For convenience, these functions will accept
-a **tensor-like object** in place of a @{tf.Tensor}, and implicitly convert it
-to a @{tf.Tensor} using the @{tf.convert_to_tensor} method. Tensor-like objects
+Many TensorFlow operations take one or more `tf.Tensor` objects as arguments.
+For example, `tf.matmul` takes two `tf.Tensor` objects, and `tf.add_n` takes
+a list of `n` `tf.Tensor` objects. For convenience, these functions will accept
+a **tensor-like object** in place of a `tf.Tensor`, and implicitly convert it
+to a `tf.Tensor` using the `tf.convert_to_tensor` method. Tensor-like objects
include elements of the following types:
-* @{tf.Tensor}
-* @{tf.Variable}
+* `tf.Tensor`
+* `tf.Variable`
* [`numpy.ndarray`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.html)
* `list` (and lists of tensor-like objects)
* Scalar Python types: `bool`, `float`, `int`, `str`
You can register additional tensor-like types using
-@{tf.register_tensor_conversion_function}.
+`tf.register_tensor_conversion_function`.
-Note: By default, TensorFlow will create a new @{tf.Tensor} each time you use
+Note: By default, TensorFlow will create a new `tf.Tensor` each time you use
the same tensor-like object. If the tensor-like object is large (e.g. a
`numpy.ndarray` containing a set of training examples) and you use it multiple
times, you may run out of memory. To avoid this, manually call
-@{tf.convert_to_tensor} on the tensor-like object once and use the returned
-@{tf.Tensor} instead.
+`tf.convert_to_tensor` on the tensor-like object once and use the returned
+`tf.Tensor` instead.
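A minimal sketch of that pattern, with an arbitrary array shape:

```python
import numpy as np

big_array = np.zeros([10000, 1000], dtype=np.float32)  # large tensor-like object
big_tensor = tf.convert_to_tensor(big_array)           # convert once
y1 = big_tensor * 2.0  # reuse the tf.Tensor rather than the ndarray
y2 = big_tensor + 1.0
```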
-## Executing a graph in a @{tf.Session}
+## Executing a graph in a `tf.Session`
-TensorFlow uses the @{tf.Session} class to represent a connection between the
+TensorFlow uses the `tf.Session` class to represent a connection between the
client program---typically a Python program, although a similar interface is
-available in other languages---and the C++ runtime. A @{tf.Session} object
+available in other languages---and the C++ runtime. A `tf.Session` object
provides access to devices in the local machine, and remote devices using the
distributed TensorFlow runtime. It also caches information about your
-@{tf.Graph} so that you can efficiently run the same computation multiple times.
+`tf.Graph` so that you can efficiently run the same computation multiple times.
-### Creating a @{tf.Session}
+### Creating a `tf.Session`
-If you are using the low-level TensorFlow API, you can create a @{tf.Session}
+If you are using the low-level TensorFlow API, you can create a `tf.Session`
for the current default graph as follows:
```python
@@ -300,50 +300,50 @@ with tf.Session("grpc://example.org:2222"):
# ...
```
-Since a @{tf.Session} owns physical resources (such as GPUs and
+Since a `tf.Session` owns physical resources (such as GPUs and
network connections), it is typically used as a context manager (in a `with`
block) that automatically closes the session when you exit the block. It is
also possible to create a session without using a `with` block, but you should
-explicitly call @{tf.Session.close} when you are finished with it to free the
+explicitly call `tf.Session.close` when you are finished with it to free the
resources.
-Note: Higher-level APIs such as @{tf.train.MonitoredTrainingSession} or
-@{tf.estimator.Estimator} will create and manage a @{tf.Session} for you. These
+Note: Higher-level APIs such as `tf.train.MonitoredTrainingSession` or
+`tf.estimator.Estimator` will create and manage a `tf.Session` for you. These
APIs accept optional `target` and `config` arguments (either directly, or as
-part of a @{tf.estimator.RunConfig} object), with the same meaning as
+part of a `tf.estimator.RunConfig` object), with the same meaning as
described below.
-@{tf.Session.__init__} accepts three optional arguments:
+`tf.Session.__init__` accepts three optional arguments:
* **`target`.** If this argument is left empty (the default), the session will
only use devices in the local machine. However, you may also specify a
`grpc://` URL to specify the address of a TensorFlow server, which gives the
session access to all devices on machines that this server controls. See
- @{tf.train.Server} for details of how to create a TensorFlow
+ `tf.train.Server` for details of how to create a TensorFlow
server. For example, in the common **between-graph replication**
- configuration, the @{tf.Session} connects to a @{tf.train.Server} in the same
+ configuration, the `tf.Session` connects to a `tf.train.Server` in the same
process as the client. The [distributed TensorFlow](../deploy/distributed.md)
deployment guide describes other common scenarios.
-* **`graph`.** By default, a new @{tf.Session} will be bound to---and only able
+* **`graph`.** By default, a new `tf.Session` will be bound to---and only able
to run operations in---the current default graph. If you are using multiple
graphs in your program (see [Programming with multiple
graphs](#programming_with_multiple_graphs) for more details), you can specify
- an explicit @{tf.Graph} when you construct the session.
+ an explicit `tf.Graph` when you construct the session.
-* **`config`.** This argument allows you to specify a @{tf.ConfigProto} that
+* **`config`.** This argument allows you to specify a `tf.ConfigProto` that
controls the behavior of the session. For example, some of the configuration
options include:
* `allow_soft_placement`. Set this to `True` to enable a "soft" device
- placement algorithm, which ignores @{tf.device} annotations that attempt
+ placement algorithm, which ignores `tf.device` annotations that attempt
to place CPU-only operations on a GPU device, and places them on the CPU
instead.
* `cluster_def`. When using distributed TensorFlow, this option allows you
to specify what machines to use in the computation, and provide a mapping
between job names, task indices, and network addresses. See
- @{tf.train.ClusterSpec.as_cluster_def} for details.
+ `tf.train.ClusterSpec.as_cluster_def` for details.
* `graph_options.optimizer_options`. Provides control over the optimizations
that TensorFlow performs on your graph before executing it.
@@ -353,21 +353,21 @@ described below.
rather than allocating most of the memory at startup.
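Putting the `config` options above together, a minimal sketch (the option
values are illustrative, not recommendations):

```python
config = tf.ConfigProto()
config.allow_soft_placement = True      # fall back to CPU for CPU-only ops
config.gpu_options.allow_growth = True  # grow GPU memory use as needed

with tf.Session(config=config) as sess:
  # Run your graph with this configuration.
  pass
```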
-### Using @{tf.Session.run} to execute operations
+### Using `tf.Session.run` to execute operations
-The @{tf.Session.run} method is the main mechanism for running a @{tf.Operation}
-or evaluating a @{tf.Tensor}. You can pass one or more @{tf.Operation} or
-@{tf.Tensor} objects to @{tf.Session.run}, and TensorFlow will execute the
+The `tf.Session.run` method is the main mechanism for running a `tf.Operation`
+or evaluating a `tf.Tensor`. You can pass one or more `tf.Operation` or
+`tf.Tensor` objects to `tf.Session.run`, and TensorFlow will execute the
operations that are needed to compute the result.
-@{tf.Session.run} requires you to specify a list of **fetches**, which determine
-the return values, and may be a @{tf.Operation}, a @{tf.Tensor}, or
-a [tensor-like type](#tensor-like_objects) such as @{tf.Variable}. These fetches
-determine what **subgraph** of the overall @{tf.Graph} must be executed to
+`tf.Session.run` requires you to specify a list of **fetches**, which determine
+the return values, and may be a `tf.Operation`, a `tf.Tensor`, or
+a [tensor-like type](#tensor-like_objects) such as `tf.Variable`. These fetches
+determine what **subgraph** of the overall `tf.Graph` must be executed to
produce the result: this is the subgraph that contains all operations named in
the fetch list, plus all operations whose outputs are used to compute the value
of the fetches. For example, the following code fragment shows how different
-arguments to @{tf.Session.run} cause different subgraphs to be executed:
+arguments to `tf.Session.run` cause different subgraphs to be executed:
```python
x = tf.constant([[37.0, -23.0], [1.0, 4.0]])
@@ -390,8 +390,8 @@ with tf.Session() as sess:
y_val, output_val = sess.run([y, output])
```
-@{tf.Session.run} also optionally takes a dictionary of **feeds**, which is a
-mapping from @{tf.Tensor} objects (typically @{tf.placeholder} tensors) to
+`tf.Session.run` also optionally takes a dictionary of **feeds**, which is a
+mapping from `tf.Tensor` objects (typically `tf.placeholder` tensors) to
values (typically Python scalars, lists, or NumPy arrays) that will be
substituted for those tensors in the execution. For example:
@@ -415,7 +415,7 @@ with tf.Session() as sess:
sess.run(y, {x: 37.0})
```
-@{tf.Session.run} also accepts an optional `options` argument that enables you
+`tf.Session.run` also accepts an optional `options` argument that enables you
to specify options about the call, and an optional `run_metadata` argument that
enables you to collect metadata about the execution. For example, you can use
these options together to collect tracing information about the execution:
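A minimal sketch of this pattern (assuming a tensor `y` already exists in the
default graph):

```python
options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
metadata = tf.RunMetadata()

with tf.Session() as sess:
  sess.run(y, options=options, run_metadata=metadata)
  # metadata.step_stats now contains per-device timing information.
  print(metadata.step_stats)
```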
@@ -447,8 +447,8 @@ with tf.Session() as sess:
TensorFlow includes tools that can help you to understand the code in a graph.
The **graph visualizer** is a component of TensorBoard that renders the
structure of your graph visually in a browser. The easiest way to create a
-visualization is to pass a @{tf.Graph} when creating the
-@{tf.summary.FileWriter}:
+visualization is to pass a `tf.Graph` when creating the
+`tf.summary.FileWriter`:
```python
# Build your graph.
@@ -471,7 +471,7 @@ with tf.Session() as sess:
writer.close()
```
-Note: If you are using a @{tf.estimator.Estimator}, the graph (and any
+Note: If you are using a `tf.estimator.Estimator`, the graph (and any
summaries) will be logged automatically to the `model_dir` that you specified
when creating the estimator.
@@ -486,7 +486,7 @@ subgraph inside.
![](../images/mnist_deep.png)
For more information about visualizing your TensorFlow application with
-TensorBoard, see the [TensorBoard tutorial](../get_started/summaries_and_tensorboard.md).
+TensorBoard, see the [TensorBoard guide](./summaries_and_tensorboard.md).
## Programming with multiple graphs
@@ -495,8 +495,8 @@ graph for training your model, and a separate graph for evaluating or performing
inference with a trained model. In many cases, the inference graph will be
different from the training graph: for example, techniques like dropout and
batch normalization use different operations in each case. Furthermore, by
-default utilities like @{tf.train.Saver} use the names of @{tf.Variable} objects
-(which have names based on an underlying @{tf.Operation}) to identify each
+default utilities like `tf.train.Saver` use the names of `tf.Variable` objects
+(which have names based on an underlying `tf.Operation`) to identify each
variable in a saved checkpoint. When programming this way, you can either use
completely separate Python processes to build and execute the graphs, or you can
use multiple graphs in the same process. This section describes how to use
@@ -507,21 +507,21 @@ to all API functions in the same context. For many applications, a single graph
is sufficient. However, TensorFlow also provides methods for manipulating
the default graph, which can be useful in more advanced use cases. For example:
-* A @{tf.Graph} defines the namespace for @{tf.Operation} objects: each
+* A `tf.Graph` defines the namespace for `tf.Operation` objects: each
operation in a single graph must have a unique name. TensorFlow will
"uniquify" the names of operations by appending `"_1"`, `"_2"`, and so on to
their names if the requested name is already taken. Using multiple explicitly
created graphs gives you more control over what name is given to each
operation.
-* The default graph stores information about every @{tf.Operation} and
- @{tf.Tensor} that was ever added to it. If your program creates a large number
+* The default graph stores information about every `tf.Operation` and
+ `tf.Tensor` that was ever added to it. If your program creates a large number
of unconnected subgraphs, it may be more efficient to use a different
- @{tf.Graph} to build each subgraph, so that unrelated state can be garbage
+ `tf.Graph` to build each subgraph, so that unrelated state can be garbage
collected.
-You can install a different @{tf.Graph} as the default graph, using the
-@{tf.Graph.as_default} context manager:
+You can install a different `tf.Graph` as the default graph, using the
+`tf.Graph.as_default` context manager:
```python
g_1 = tf.Graph()
@@ -548,8 +548,8 @@ assert d.graph is g_2
assert sess_2.graph is g_2
```
-To inspect the current default graph, call @{tf.get_default_graph}, which
-returns a @{tf.Graph} object:
+To inspect the current default graph, call `tf.get_default_graph`, which
+returns a `tf.Graph` object:
```python
# Print all of the operations in the default graph.
diff --git a/tensorflow/docs_src/guide/index.md b/tensorflow/docs_src/guide/index.md
index eefdb9ceae..1c920e7d70 100644
--- a/tensorflow/docs_src/guide/index.md
+++ b/tensorflow/docs_src/guide/index.md
@@ -9,22 +9,18 @@ works. The units are as follows:
training deep learning models.
* @{$guide/eager}, an API for writing TensorFlow code
imperatively, like you would use Numpy.
- * @{$guide/estimators}, a high-level API that provides
- fully-packaged models ready for large-scale training and production.
* @{$guide/datasets}, easy input pipelines to bring your data into
your TensorFlow program.
+ * @{$guide/estimators}, a high-level API that provides
+ fully-packaged models ready for large-scale training and production.
## Estimators
-* @{$estimators} provides an introduction.
-* @{$premade_estimators}, introduces Estimators for machine learning.
-* @{$custom_estimators}, which demonstrates how to build and train models you
- design yourself.
-* @{$feature_columns}, which shows how an Estimator can handle a variety of input
- data types without changes to the model.
-* @{$datasets_for_estimators} describes using tf.data with estimators.
-* @{$checkpoints}, which explains how to save training progress and resume where
- you left off.
+* @{$premade_estimators}, the basics of premade Estimators.
+* @{$checkpoints}, save training progress and resume where you left off.
+* @{$feature_columns}, handle a variety of input data types without changes to the model.
+* @{$datasets_for_estimators}, use `tf.data` to input data.
+* @{$custom_estimators}, write your own Estimator.
## Accelerators
diff --git a/tensorflow/docs_src/guide/keras.md b/tensorflow/docs_src/guide/keras.md
index f2f49f8c93..2330fa03c7 100644
--- a/tensorflow/docs_src/guide/keras.md
+++ b/tensorflow/docs_src/guide/keras.md
@@ -467,13 +467,13 @@ JSON and YAML serialization formats:
json_string = model.to_json()
# Recreate the model (freshly initialized)
-fresh_model = keras.models.from_json(json_string)
+fresh_model = keras.models.model_from_json(json_string)
# Serializes a model to YAML format
yaml_string = model.to_yaml()
# Recreate the model
-fresh_model = keras.models.from_yaml(yaml_string)
+fresh_model = keras.models.model_from_yaml(yaml_string)
```
Caution: Subclassed models are not serializable because their architecture is
@@ -581,15 +581,6 @@ model.compile(loss='binary_crossentropy', optimizer=optimizer)
model.summary()
```
-Convert the Keras model to a `tf.estimator.Estimator` instance:
-
-```python
-keras_estimator = keras.estimator.model_to_estimator(
- keras_model=model,
- config=config,
- model_dir='/tmp/model_dir')
-```
-
Define an *input pipeline*. The `input_fn` returns a `tf.data.Dataset` object
used to distribute the data across multiple devices—with each device processing
a slice of the input batch.
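A sketch of such an `input_fn`, assuming random NumPy data stands in for a
real dataset (the sizes are illustrative):

```python
import numpy as np

def input_fn():
  x = np.random.random((1024, 10))
  y = np.random.randint(2, size=(1024, 1))
  dataset = tf.data.Dataset.from_tensor_slices((x, y))
  return dataset.repeat(10).batch(32)
```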
@@ -615,6 +606,15 @@ strategy = tf.contrib.distribute.MirroredStrategy()
config = tf.estimator.RunConfig(train_distribute=strategy)
```
+Convert the Keras model to a `tf.estimator.Estimator` instance:
+
+```python
+keras_estimator = keras.estimator.model_to_estimator(
+ keras_model=model,
+ config=config,
+ model_dir='/tmp/model_dir')
+```
+
Finally, train the `Estimator` instance by providing the `input_fn` and `steps`
arguments:
diff --git a/tensorflow/docs_src/guide/leftnav_files b/tensorflow/docs_src/guide/leftnav_files
index 357a2a1cb9..8e227e0c8f 100644
--- a/tensorflow/docs_src/guide/leftnav_files
+++ b/tensorflow/docs_src/guide/leftnav_files
@@ -4,14 +4,14 @@ index.md
keras.md
eager.md
datasets.md
+estimators.md: Introduction to Estimators
### Estimators
-estimators.md: Introduction to Estimators
premade_estimators.md
-custom_estimators.md
+checkpoints.md
feature_columns.md
datasets_for_estimators.md
-checkpoints.md
+custom_estimators.md
### Accelerators
using_gpu.md
@@ -23,6 +23,7 @@ tensors.md
variables.md
graphs.md
saved_model.md
+autograph.md: Control flow
### ML Concepts
embedding.md
diff --git a/tensorflow/docs_src/guide/low_level_intro.md b/tensorflow/docs_src/guide/low_level_intro.md
index 665a5568b4..dc6cb9ee0d 100644
--- a/tensorflow/docs_src/guide/low_level_intro.md
+++ b/tensorflow/docs_src/guide/low_level_intro.md
@@ -63,17 +63,17 @@ TensorFlow uses numpy arrays to represent tensor **values**.
You might think of TensorFlow Core programs as consisting of two discrete
sections:
-1. Building the computational graph (a @{tf.Graph}).
-2. Running the computational graph (using a @{tf.Session}).
+1. Building the computational graph (a `tf.Graph`).
+2. Running the computational graph (using a `tf.Session`).
### Graph
A **computational graph** is a series of TensorFlow operations arranged into a
graph. The graph is composed of two types of objects.
- * @{tf.Operation$Operations} (or "ops"): The nodes of the graph.
+ * `tf.Operation` objects (or "ops"): The nodes of the graph.
Operations describe calculations that consume and produce tensors.
- * @{tf.Tensor$Tensors}: The edges in the graph. These represent the values
+ * `tf.Tensor` objects: The edges in the graph. These represent the values
that will flow through the graph. Most TensorFlow functions return
`tf.Tensors`.
@@ -149,7 +149,7 @@ For more about TensorBoard's graph visualization tools see @{$graph_viz}.
### Session
-To evaluate tensors, instantiate a @{tf.Session} object, informally known as a
+To evaluate tensors, instantiate a `tf.Session` object, informally known as a
**session**. A session encapsulates the state of the TensorFlow runtime, and
runs TensorFlow operations. If a `tf.Graph` is like a `.py` file, a `tf.Session`
is like the `python` executable.
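For example, a minimal sketch of the build-then-run workflow:

```python
a = tf.constant(3.0)
b = tf.constant(4.0)
total = a + b           # building the graph: nothing is computed yet

sess = tf.Session()
print(sess.run(total))  # running the graph: prints 7.0
```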
@@ -232,7 +232,7 @@ z = x + y
The preceding three lines are a bit like a function in which we
define two input parameters (`x` and `y`) and then an operation on them. We can
evaluate this graph with multiple inputs by using the `feed_dict` argument of
-the @{tf.Session.run$run method} to feed concrete values to the placeholders:
+the `tf.Session.run` method to feed concrete values to the placeholders:
```python
print(sess.run(z, feed_dict={x: 3, y: 4.5}))
@@ -251,15 +251,15 @@ that placeholders throw an error if no value is fed to them.
## Datasets
-Placeholders work for simple experiments, but @{tf.data$Datasets} are the
+Placeholders work for simple experiments, but `tf.data` is the
preferred method of streaming data into a model.
To get a runnable `tf.Tensor` from a Dataset you must first convert it to a
-@{tf.data.Iterator}, and then call the Iterator's
-@{tf.data.Iterator.get_next$`get_next`} method.
+`tf.data.Iterator`, and then call the Iterator's
+`tf.data.Iterator.get_next` method.
The simplest way to create an Iterator is with the
-@{tf.data.Dataset.make_one_shot_iterator$`make_one_shot_iterator`} method.
+`tf.data.Dataset.make_one_shot_iterator` method.
For example, in the following code the `next_item` tensor will return a row from
the `my_data` array on each `run` call:
@@ -275,7 +275,7 @@ next_item = slices.make_one_shot_iterator().get_next()
```
Reaching the end of the data stream causes `Dataset` to throw an
-@{tf.errors.OutOfRangeError$`OutOfRangeError`}. For example, the following code
+`tf.errors.OutOfRangeError`. For example, the following code
reads the `next_item` until there is no more data to read:
``` python
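# Reconstructed sketch of the elided loop: keep fetching next_item until
# the dataset raises OutOfRangeError.
while True:
  try:
    print(sess.run(next_item))
  except tf.errors.OutOfRangeError:
    break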
@@ -308,7 +308,7 @@ For more details on Datasets and Iterators see: @{$guide/datasets}.
## Layers
A trainable model must modify the values in the graph to get new outputs with
-the same input. @{tf.layers$Layers} are the preferred way to add trainable
+the same input. The `tf.layers` module is the preferred way to add trainable
parameters to a graph.
Layers package together both the variables and the operations that act
@@ -321,7 +321,7 @@ The connection weights and biases are managed by the layer object.
### Creating Layers
-The following code creates a @{tf.layers.Dense$`Dense`} layer that takes a
+The following code creates a `tf.layers.Dense` layer that takes a
batch of input vectors, and produces a single output value for each. To apply a
layer to an input, call the layer as if it were a function. For example:
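A minimal sketch (the shapes are illustrative):

```python
x = tf.placeholder(tf.float32, shape=[None, 3])
linear_model = tf.layers.Dense(units=1)  # the layer owns its weights and bias
y = linear_model(x)                      # apply the layer like a function
```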
@@ -375,8 +375,8 @@ will generate a two-element output vector such as the following:
### Layer Function shortcuts
-For each layer class (like @{tf.layers.Dense}) TensorFlow also supplies a
-shortcut function (like @{tf.layers.dense}). The only difference is that the
+For each layer class (like `tf.layers.Dense`) TensorFlow also supplies a
+shortcut function (like `tf.layers.dense`). The only difference is that the
shortcut function versions create and run the layer in a single call. For
example, the following code is equivalent to the earlier version:
@@ -390,17 +390,17 @@ sess.run(init)
print(sess.run(y, {x: [[1, 2, 3], [4, 5, 6]]}))
```
-While convenient, this approach allows no access to the @{tf.layers.Layer}
+While convenient, this approach allows no access to the `tf.layers.Layer`
object. This makes introspection and debugging more difficult,
and layer reuse impossible.
## Feature columns
The easiest way to experiment with feature columns is using the
-@{tf.feature_column.input_layer} function. This function only accepts
+`tf.feature_column.input_layer` function. This function only accepts
@{$feature_columns$dense columns} as inputs, so to view the result
of a categorical column you must wrap it in an
-@{tf.feature_column.indicator_column}. For example:
+`tf.feature_column.indicator_column`. For example:
``` python
features = {
@@ -422,9 +422,9 @@ inputs = tf.feature_column.input_layer(features, columns)
Running the `inputs` tensor will parse the `features` into a batch of vectors.
Feature columns can have internal state, like layers, so they often need to be
-initialized. Categorical columns use @{tf.contrib.lookup$lookup tables}
+initialized. Categorical columns use `tf.contrib.lookup` tables
internally and these require a separate initialization op,
-@{tf.tables_initializer}.
+`tf.tables_initializer`.
``` python
var_init = tf.global_variables_initializer()
@@ -501,7 +501,7 @@ To optimize a model, you first need to define the loss. We'll use the mean
square error, a standard loss for regression problems.
While you could do this manually with lower level math operations,
-the @{tf.losses} module provides a set of common loss functions. You can use it
+the `tf.losses` module provides a set of common loss functions. You can use it
to calculate the mean square error as follows:
``` python
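# A minimal sketch (assumes tensors y_true and y_pred and a session `sess`
# are defined above):
loss = tf.losses.mean_squared_error(labels=y_true, predictions=y_pred)
print(sess.run(loss))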
@@ -520,10 +520,10 @@ This will produce a loss value, something like:
TensorFlow provides
[**optimizers**](https://developers.google.com/machine-learning/glossary/#optimizer)
implementing standard optimization algorithms. These are implemented as
-sub-classes of @{tf.train.Optimizer}. They incrementally change each
+sub-classes of `tf.train.Optimizer`. They incrementally change each
variable in order to minimize the loss. The simplest optimization algorithm is
[**gradient descent**](https://developers.google.com/machine-learning/glossary/#gradient_descent),
-implemented by @{tf.train.GradientDescentOptimizer}. It modifies each
+implemented by `tf.train.GradientDescentOptimizer`. It modifies each
variable according to the magnitude of the derivative of loss with respect to
that variable. For example:
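A minimal sketch (assuming a scalar `loss` tensor and an active session; the
learning rate is illustrative):

```python
optimizer = tf.train.GradientDescentOptimizer(0.01)
train = optimizer.minimize(loss)

for i in range(100):
  _, loss_value = sess.run((train, loss))
  print(loss_value)
```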
diff --git a/tensorflow/docs_src/guide/premade_estimators.md b/tensorflow/docs_src/guide/premade_estimators.md
index 3e910c1fe2..dc38f0c1d3 100644
--- a/tensorflow/docs_src/guide/premade_estimators.md
+++ b/tensorflow/docs_src/guide/premade_estimators.md
@@ -175,9 +175,9 @@ handles the details of initialization, logging, saving and restoring, and many
other features so you can concentrate on your model. For more details see
@{$guide/estimators}.
-An Estimator is any class derived from @{tf.estimator.Estimator}. TensorFlow
+An Estimator is any class derived from `tf.estimator.Estimator`. TensorFlow
provides a collection of
-@{tf.estimator$pre-made Estimators}
+pre-made Estimators in `tf.estimator`
(for example, `LinearRegressor`) to implement common ML algorithms. Beyond
those, you may write your own
@{$custom_estimators$custom Estimators}.
@@ -200,7 +200,7 @@ Let's see how those tasks are implemented for Iris classification.
You must create input functions to supply data for training,
evaluating, and prediction.
-An **input function** is a function that returns a @{tf.data.Dataset} object
+An **input function** is a function that returns a `tf.data.Dataset` object
which outputs the following two-element tuple:
* [`features`](https://developers.google.com/machine-learning/glossary/#feature) - A Python dictionary in which:
@@ -271,7 +271,7 @@ A [**feature column**](https://developers.google.com/machine-learning/glossary/#
is an object describing how the model should use raw input data from the
features dictionary. When you build an Estimator model, you pass it a list of
feature columns that describes each of the features you want the model to use.
-The @{tf.feature_column} module provides many options for representing data
+The `tf.feature_column` module provides many options for representing data
to the model.
For Iris, the 4 raw features are numeric values, so we'll build a list of
@@ -299,10 +299,10 @@ features, we can build the estimator.
The Iris problem is a classic classification problem. Fortunately, TensorFlow
provides several pre-made classifier Estimators, including:
-* @{tf.estimator.DNNClassifier} for deep models that perform multi-class
+* `tf.estimator.DNNClassifier` for deep models that perform multi-class
classification.
-* @{tf.estimator.DNNLinearCombinedClassifier} for wide & deep models.
-* @{tf.estimator.LinearClassifier} for classifiers based on linear models.
+* `tf.estimator.DNNLinearCombinedClassifier` for wide & deep models.
+* `tf.estimator.LinearClassifier` for classifiers based on linear models.
For the Iris problem, `tf.estimator.DNNClassifier` seems like the best choice.
Here's how we instantiated this Estimator:
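A sketch of that instantiation (the hidden-unit sizes are illustrative
choices, and `my_feature_columns` is the list built above):

```python
classifier = tf.estimator.DNNClassifier(
    feature_columns=my_feature_columns,
    hidden_units=[10, 10],  # two hidden layers of 10 nodes each
    n_classes=3)            # the model must choose between 3 classes
```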
diff --git a/tensorflow/docs_src/guide/saved_model.md b/tensorflow/docs_src/guide/saved_model.md
index 27ef7bb0da..c260da7966 100644
--- a/tensorflow/docs_src/guide/saved_model.md
+++ b/tensorflow/docs_src/guide/saved_model.md
@@ -1,10 +1,9 @@
# Save and Restore
-The @{tf.train.Saver} class provides methods to save and restore models. The
-@{tf.saved_model.simple_save} function is an easy way to build a
-@{tf.saved_model$saved model} suitable for serving.
-[Estimators](@{$guide/estimators}) automatically save and restore
-variables in the `model_dir`.
+The `tf.train.Saver` class provides methods to save and restore models. The
+`tf.saved_model.simple_save` function is an easy way to build a
+`tf.saved_model` suitable for serving. [Estimators](./estimators.md)
+automatically save and restore variables in the `model_dir`.
## Save and restore variables
@@ -146,13 +145,13 @@ Notes:
* If you only restore a subset of the model variables at the start of a
session, you have to run an initialize op for the other variables. See
- @{tf.variables_initializer} for more information.
+ `tf.variables_initializer` for more information.
* To inspect the variables in a checkpoint, you can use the
[`inspect_checkpoint`](https://www.tensorflow.org/code/tensorflow/python/tools/inspect_checkpoint.py)
library, particularly the `print_tensors_in_checkpoint_file` function.
-* By default, `Saver` uses the value of the @{tf.Variable.name} property
+* By default, `Saver` uses the value of the `tf.Variable.name` property
for each variable. However, when you create a `Saver` object, you may
optionally choose names for the variables in the checkpoint files.
@@ -197,15 +196,15 @@ Use `SavedModel` to save and load your model—variables, the graph, and the
graph's metadata. This is a language-neutral, recoverable, hermetic
serialization format that enables higher-level systems and tools to produce,
consume, and transform TensorFlow models. TensorFlow provides several ways to
-interact with `SavedModel`, including the @{tf.saved_model} APIs,
-@{tf.estimator.Estimator}, and a command-line interface.
+interact with `SavedModel`, including the `tf.saved_model` APIs,
+`tf.estimator.Estimator`, and a command-line interface.
## Build and load a SavedModel
### Simple save
-The easiest way to create a `SavedModel` is to use the @{tf.saved_model.simple_save}
+The easiest way to create a `SavedModel` is to use the `tf.saved_model.simple_save`
function:
```python
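# Reconstructed sketch (assumes a session plus input tensor x and output
# tensor y; export_dir is a path of your choosing):
tf.saved_model.simple_save(session,
                           export_dir,
                           inputs={"x": x},
                           outputs={"y": y})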
@@ -219,14 +218,14 @@ This configures the `SavedModel` so it can be loaded by
[TensorFlow serving](/serving/serving_basic) and supports the
[Predict API](https://github.com/tensorflow/serving/blob/master/tensorflow_serving/apis/predict.proto).
To access the classify, regress, or multi-inference APIs, use the manual
-`SavedModel` builder APIs or an @{tf.estimator.Estimator}.
+`SavedModel` builder APIs or a `tf.estimator.Estimator`.
### Manually build a SavedModel
-If your use case isn't covered by @{tf.saved_model.simple_save}, use the manual
-@{tf.saved_model.builder$builder APIs} to create a `SavedModel`.
+If your use case isn't covered by `tf.saved_model.simple_save`, use the manual
+`tf.saved_model.builder` APIs to create a `SavedModel`.
-The @{tf.saved_model.builder.SavedModelBuilder} class provides functionality to
+The `tf.saved_model.builder.SavedModelBuilder` class provides functionality to
save multiple `MetaGraphDef`s. A **MetaGraph** is a dataflow graph, plus
its associated variables, assets, and signatures. A **`MetaGraphDef`**
is the protocol buffer representation of a MetaGraph. A **signature** is
@@ -273,16 +272,16 @@ builder.save()
Following the guidance below gives you forward compatibility only if the set of
Ops has not changed.
-The @{tf.saved_model.builder.SavedModelBuilder$`SavedModelBuilder`} class allows
+The `tf.saved_model.builder.SavedModelBuilder` class allows
users to control whether default-valued attributes must be stripped from the
@{$extend/tool_developers#nodes$`NodeDefs`}
while adding a meta graph to the SavedModel bundle. Both
-@{tf.saved_model.builder.SavedModelBuilder.add_meta_graph_and_variables$`SavedModelBuilder.add_meta_graph_and_variables`}
-and @{tf.saved_model.builder.SavedModelBuilder.add_meta_graph$`SavedModelBuilder.add_meta_graph`}
+`tf.saved_model.builder.SavedModelBuilder.add_meta_graph_and_variables`
+and `tf.saved_model.builder.SavedModelBuilder.add_meta_graph`
methods accept a Boolean flag `strip_default_attrs` that controls this behavior.
-If `strip_default_attrs` is `False`, the exported @{tf.MetaGraphDef} will have
-the default valued attributes in all its @{tf.NodeDef} instances.
+If `strip_default_attrs` is `False`, the exported `tf.MetaGraphDef` will have
+the default-valued attributes in all its `tf.NodeDef` instances.
This can break forward compatibility with a sequence of events such as the
following:
@@ -305,7 +304,7 @@ for more information.
### Loading a SavedModel in Python
The Python version of the SavedModel
-@{tf.saved_model.loader$loader}
+loader, `tf.saved_model.loader`,
provides load and restore capability for a SavedModel. The `load` operation
requires the following information:
@@ -424,20 +423,20 @@ the model. This function has the following purposes:
* To add any additional ops needed to convert data from the input format
into the feature `Tensor`s expected by the model.
-The function returns a @{tf.estimator.export.ServingInputReceiver} object,
+The function returns a `tf.estimator.export.ServingInputReceiver` object,
which packages the placeholders and the resulting feature `Tensor`s together.
A typical pattern is that inference requests arrive in the form of serialized
`tf.Example`s, so the `serving_input_receiver_fn()` creates a single string
placeholder to receive them. The `serving_input_receiver_fn()` is then also
-responsible for parsing the `tf.Example`s by adding a @{tf.parse_example} op to
+responsible for parsing the `tf.Example`s by adding a `tf.parse_example` op to
the graph.
When writing such a `serving_input_receiver_fn()`, you must pass a parsing
-specification to @{tf.parse_example} to tell the parser what feature names to
+specification to `tf.parse_example` to tell the parser what feature names to
expect and how to map them to `Tensor`s. A parsing specification takes the
-form of a dict from feature names to @{tf.FixedLenFeature}, @{tf.VarLenFeature},
-and @{tf.SparseFeature}. Note this parsing specification should not include
+form of a dict from feature names to `tf.FixedLenFeature`, `tf.VarLenFeature`,
+and `tf.SparseFeature`. Note this parsing specification should not include
any label or weight columns, since those will not be available at serving
time&mdash;in contrast to a parsing specification used in the `input_fn()` at
training time.
@@ -458,7 +457,7 @@ def serving_input_receiver_fn():
return tf.estimator.export.ServingInputReceiver(features, receiver_tensors)
```
-The @{tf.estimator.export.build_parsing_serving_input_receiver_fn} utility
+The `tf.estimator.export.build_parsing_serving_input_receiver_fn` utility
function provides that input receiver for the common case.
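A minimal sketch (assuming `feature_spec` is a dict mapping feature names to
`tf.FixedLenFeature` or `tf.VarLenFeature` values, as described above):

```python
serving_input_receiver_fn = (
    tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec))
```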
> Note: when training a model to be served using the Predict API with a local
@@ -469,7 +468,7 @@ Even if you require no parsing or other input processing&mdash;that is, if the
serving system will feed feature `Tensor`s directly&mdash;you must still provide
a `serving_input_receiver_fn()` that creates placeholders for the feature
`Tensor`s and passes them through. The
-@{tf.estimator.export.build_raw_serving_input_receiver_fn} utility provides for
+`tf.estimator.export.build_raw_serving_input_receiver_fn` utility provides for
this.
If these utilities do not meet your needs, you are free to write your own
@@ -489,7 +488,7 @@ By contrast, the *output* portion of the signature is determined by the model.
### Specify the outputs of a custom model
When writing a custom `model_fn`, you must populate the `export_outputs` element
-of the @{tf.estimator.EstimatorSpec} return value. This is a dict of
+of the `tf.estimator.EstimatorSpec` return value. This is a dict of
`{name: output}` describing the output signatures to be exported and used during
serving.
@@ -499,9 +498,9 @@ is represented by an entry in this dict. In this case the `name` is a string
of your choice that can be used to request a specific head at serving time.
Each `output` value must be an `ExportOutput` object such as
-@{tf.estimator.export.ClassificationOutput},
-@{tf.estimator.export.RegressionOutput}, or
-@{tf.estimator.export.PredictOutput}.
+`tf.estimator.export.ClassificationOutput`,
+`tf.estimator.export.RegressionOutput`, or
+`tf.estimator.export.PredictOutput`.
These output types map straightforwardly to the
[TensorFlow Serving APIs](https://github.com/tensorflow/serving/blob/master/tensorflow_serving/apis/prediction_service.proto),
@@ -521,7 +520,7 @@ does not specify one.
### Perform the export
To export your trained Estimator, call
-@{tf.estimator.Estimator.export_savedmodel} with the export base path and
+`tf.estimator.Estimator.export_savedmodel` with the export base path and
the `serving_input_receiver_fn`.
```py
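# Reconstructed sketch (export_dir_base is a path of your choosing):
estimator.export_savedmodel(export_dir_base, serving_input_receiver_fn)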
@@ -794,11 +793,12 @@ Here's the syntax:
```
usage: saved_model_cli run [-h] --dir DIR --tag_set TAG_SET --signature_def
SIGNATURE_DEF_KEY [--inputs INPUTS]
- [--input_exprs INPUT_EXPRS] [--outdir OUTDIR]
+ [--input_exprs INPUT_EXPRS]
+ [--input_examples INPUT_EXAMPLES] [--outdir OUTDIR]
[--overwrite] [--tf_debug]
```
-The `run` command provides the following two ways to pass inputs to the model:
+The `run` command provides the following three ways to pass inputs to the model:
* `--inputs` option enables you to pass numpy ndarrays in files.
* `--input_exprs` option enables you to pass Python expressions.
@@ -847,7 +847,7 @@ dictionary is stored in the pickle file and the value corresponding to
the *variable_name* will be used.
-#### `--inputs_exprs`
+#### `--input_exprs`
To pass inputs through Python expressions, specify the `--input_exprs` option.
This can be useful for when you don't have data
@@ -869,7 +869,7 @@ example:
(Note that the `numpy` module is already available to you as `np`.)
-#### `--inputs_examples`
+#### `--input_examples`
To pass `tf.train.Example` as inputs, specify the `--input_examples` option.
For each input key, it takes a list of dictionaries, where each dictionary is an
diff --git a/tensorflow/docs_src/guide/summaries_and_tensorboard.md b/tensorflow/docs_src/guide/summaries_and_tensorboard.md
index fadfa03e78..6177c3393b 100644
--- a/tensorflow/docs_src/guide/summaries_and_tensorboard.md
+++ b/tensorflow/docs_src/guide/summaries_and_tensorboard.md
@@ -41,7 +41,7 @@ data from, and decide which nodes you would like to annotate with
For example, suppose you are training a convolutional neural network for
recognizing MNIST digits. You'd like to record how the learning rate
varies over time, and how the objective function is changing. Collect these by
-attaching @{tf.summary.scalar} ops
+attaching `tf.summary.scalar` ops
to the nodes that output the learning rate and loss respectively. Then, give
each `scalar_summary` a meaningful `tag`, like `'learning rate'` or `'loss
function'`.
@@ -49,7 +49,7 @@ function'`.
Perhaps you'd also like to visualize the distributions of activations coming
off a particular layer, or the distribution of gradients or weights. Collect
this data by attaching
-@{tf.summary.histogram} ops to
+`tf.summary.histogram` ops to
the gradient outputs and to the variable that holds your weights, respectively.
For details on all of the summary operations available, check out the docs on
@@ -60,13 +60,13 @@ depends on their output. And the summary nodes that we've just created are
peripheral to your graph: none of the ops you are currently running depend on
them. So, to generate summaries, we need to run all of these summary nodes.
Managing them by hand would be tedious, so use
-@{tf.summary.merge_all}
+`tf.summary.merge_all`
to combine them into a single op that generates all the summary data.
Then, you can just run the merged summary op, which will generate a serialized
`Summary` protobuf object with all of your summary data at a given step.
Finally, to write this summary data to disk, pass the summary protobuf to a
-@{tf.summary.FileWriter}.
+`tf.summary.FileWriter`.
The `FileWriter` takes a logdir in its constructor; this logdir is quite
important: it's the directory where all of the events will be written out.
diff --git a/tensorflow/docs_src/guide/tensorboard_histograms.md b/tensorflow/docs_src/guide/tensorboard_histograms.md
index 918deda190..af8f2cadd1 100644
--- a/tensorflow/docs_src/guide/tensorboard_histograms.md
+++ b/tensorflow/docs_src/guide/tensorboard_histograms.md
@@ -13,8 +13,8 @@ TensorFlow has an op
which is perfect for this purpose. As is usually the case with TensorBoard, we
will ingest data using a summary op; in this case,
[`tf.summary.histogram`](https://www.tensorflow.org/api_docs/python/tf/summary/histogram).
-For a primer on how summaries work, please see the general
-[TensorBoard tutorial](https://www.tensorflow.org/get_started/summaries_and_tensorboard).
+For a primer on how summaries work, please see the
+[TensorBoard guide](./summaries_and_tensorboard.md).
Here is a code snippet that will generate some histogram summaries containing
normally distributed data, where the mean of the distribution increases over
diff --git a/tensorflow/docs_src/guide/tensors.md b/tensorflow/docs_src/guide/tensors.md
index 7227260f1a..6b5a110a1c 100644
--- a/tensorflow/docs_src/guide/tensors.md
+++ b/tensorflow/docs_src/guide/tensors.md
@@ -176,7 +176,7 @@ Rank | Shape | Dimension number | Example
n | [D0, D1, ... Dn-1] | n-D | A tensor with shape [D0, D1, ... Dn-1].
Shapes can be represented via Python lists / tuples of ints, or with the
-@{tf.TensorShape}.
+`tf.TensorShape` class.
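For example, all of the following describe the same 2x3 shape (a minimal
sketch):

```python
shape_as_list = [2, 3]
shape_as_tuple = (2, 3)
shape_as_object = tf.TensorShape([2, 3])
```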
### Getting a `tf.Tensor` object's shape
diff --git a/tensorflow/docs_src/guide/using_gpu.md b/tensorflow/docs_src/guide/using_gpu.md
index c429ca4750..c0218fd12e 100644
--- a/tensorflow/docs_src/guide/using_gpu.md
+++ b/tensorflow/docs_src/guide/using_gpu.md
@@ -143,7 +143,7 @@ If the device you have specified does not exist, you will get
```
InvalidArgumentError: Invalid argument: Cannot assign a device to node 'b':
Could not satisfy explicit device specification '/device:GPU:2'
- [[Node: b = Const[dtype=DT_FLOAT, value=Tensor<type: float shape: [3,2]
+ [[{{node b}} = Const[dtype=DT_FLOAT, value=Tensor<type: float shape: [3,2]
values: 1 2 3...>, _device="/device:GPU:2"]()]]
```
diff --git a/tensorflow/docs_src/guide/using_tpu.md b/tensorflow/docs_src/guide/using_tpu.md
index 41d80d9d60..90a663b75e 100644
--- a/tensorflow/docs_src/guide/using_tpu.md
+++ b/tensorflow/docs_src/guide/using_tpu.md
@@ -17,9 +17,9 @@ This doc is aimed at users who:
## TPUEstimator
-@{tf.estimator.Estimator$Estimators} are TensorFlow's model-level abstraction.
+Estimators (`tf.estimator.Estimator`) are TensorFlow's model-level abstraction.
Standard `Estimators` can drive models on CPUs and GPUs. You must use
-@{tf.contrib.tpu.TPUEstimator} to drive a model on TPUs.
+`tf.contrib.tpu.TPUEstimator` to drive a model on TPUs.
Refer to TensorFlow's Getting Started section for an introduction to the basics
of using a @{$premade_estimators$pre-made `Estimator`}, and
@@ -44,10 +44,10 @@ my_estimator = tf.estimator.Estimator(
model_fn=my_model_fn)
```
-The changes required to use a @{tf.contrib.tpu.TPUEstimator} on your local
+The changes required to use a `tf.contrib.tpu.TPUEstimator` on your local
machine are relatively minor. The constructor requires two additional arguments.
You should set the `use_tpu` argument to `False`, and pass a
-@{tf.contrib.tpu.RunConfig} as the `config` argument, as shown below:
+`tf.contrib.tpu.RunConfig` as the `config` argument, as shown below:
``` python
my_tpu_estimator = tf.contrib.tpu.TPUEstimator(
@@ -117,7 +117,7 @@ my_tpu_run_config = tf.contrib.tpu.RunConfig(
)
```
-Then you must pass the @{tf.contrib.tpu.RunConfig} to the constructor:
+Then you must pass the `tf.contrib.tpu.RunConfig` to the constructor:
``` python
my_tpu_estimator = tf.contrib.tpu.TPUEstimator(
@@ -137,7 +137,7 @@ training locally to training on a cloud TPU you would need to:
## Optimizer
When training on a cloud TPU you **must** wrap the optimizer in a
-@{tf.contrib.tpu.CrossShardOptimizer}, which uses an `allreduce` to aggregate
+`tf.contrib.tpu.CrossShardOptimizer`, which uses an `allreduce` to aggregate
gradients and broadcast the result to each shard (each TPU core).
The `CrossShardOptimizer` is not compatible with local training. So, to have
@@ -200,7 +200,7 @@ Build your evaluation metrics dictionary in a stand-alone `metric_fn`.
Evaluation metrics are an essential part of training a model. These are fully
supported on Cloud TPUs, but with a slightly different syntax.
-A standard @{tf.metrics} returns two tensors. The first returns the running
+A standard `tf.metrics` function returns two tensors. The first returns the running
average of the metric value, while the second updates the running average and
returns the value for this batch:
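A minimal sketch of that convention (assuming `labels` and `predictions`
tensors):

```python
# tf.metrics functions return a (running_value, update_op) pair.
accuracy, update_op = tf.metrics.accuracy(labels, predictions)
```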
@@ -242,15 +242,15 @@ An `Estimator`'s `model_fn` must return an `EstimatorSpec`. An `EstimatorSpec`
is a simple structure of named fields containing all the `tf.Tensors` of the
model that the `Estimator` may need to interact with.
-`TPUEstimators` use a @{tf.contrib.tpu.TPUEstimatorSpec}. There are a few
-differences between it and a standard @{tf.estimator.EstimatorSpec}:
+`TPUEstimators` use a `tf.contrib.tpu.TPUEstimatorSpec`. There are a few
+differences between it and a standard `tf.estimator.EstimatorSpec`:
* The `eval_metric_ops` must be wrapped into a `metrics_fn`; this field is
renamed `eval_metrics` ([see above](#metrics)).
-* The @{tf.train.SessionRunHook$hooks} are unsupported, so these fields are
+* `tf.train.SessionRunHook` hooks are unsupported, so these fields are
omitted.
-* The @{tf.train.Scaffold$`scaffold`}, if used, must also be wrapped in a
+* The `tf.train.Scaffold`, if used, must also be wrapped in a
function. This field is renamed to `scaffold_fn`.
`Scaffold` and `Hooks` are for advanced usage, and can typically be omitted.
@@ -304,7 +304,7 @@ In many cases the batch size is the only unknown dimension.
A typical input pipeline, using `tf.data`, will usually produce batches of a
fixed size. The last batch of a finite `Dataset`, however, is typically smaller,
containing just the remaining elements. Since a `Dataset` does not know its own
-length or finiteness, the standard @{tf.data.Dataset.batch$`batch`} method
+length or finiteness, the standard `tf.data.Dataset.batch` method
cannot determine on its own whether every batch will have a fixed size:
```
@@ -317,7 +317,7 @@ cannot determine if all batches will have a fixed size batch on its own:
```
The most straightforward fix is to
-@{tf.data.Dataset.apply$apply} @{tf.contrib.data.batch_and_drop_remainder}
+apply `tf.contrib.data.batch_and_drop_remainder` using `tf.data.Dataset.apply`
as follows:
```
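# Reconstructed sketch: drop the last partial batch so every batch has
# exactly batch_size elements.
dataset = dataset.apply(tf.contrib.data.batch_and_drop_remainder(batch_size))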
@@ -346,19 +346,19 @@ TPU, as it is impossible to use the Cloud TPU's unless you can feed it data
quickly enough. See @{$datasets_performance} for details on dataset performance.
For all but the simplest experimentation (using
-@{tf.data.Dataset.from_tensor_slices} or other in-graph data) you will need to
+`tf.data.Dataset.from_tensor_slices` or other in-graph data) you will need to
store all data files read by the `TPUEstimator`'s `Dataset` in Google Cloud
Storage Buckets.
<!--TODO(markdaoust): link to the `TFRecord` doc when it exists.-->
For most use-cases, we recommend converting your data into `TFRecord`
-format and using a @{tf.data.TFRecordDataset} to read it. This, however, is not
+format and using a `tf.data.TFRecordDataset` to read it. This, however, is not
a hard requirement and you can use other dataset readers
(`FixedLengthRecordDataset` or `TextLineDataset`) if you prefer.
Small datasets can be loaded entirely into memory using
-@{tf.data.Dataset.cache}.
+`tf.data.Dataset.cache`.
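For example, a minimal sketch:

```python
# Cache the dataset in memory after the first pass, then repeat it.
dataset = dataset.cache().repeat()
```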
Regardless of the data format used, it is strongly recommended that you
@{$performance_guide#use_large_files$use large files}, on the order of
diff --git a/tensorflow/docs_src/guide/variables.md b/tensorflow/docs_src/guide/variables.md
index cd8c4b5b9a..5d5d73394c 100644
--- a/tensorflow/docs_src/guide/variables.md
+++ b/tensorflow/docs_src/guide/variables.md
@@ -119,7 +119,7 @@ It is particularly important for variables to be in the correct device in
distributed settings. Accidentally putting variables on workers instead of
parameter servers, for example, can severely slow down training or, in the worst
case, let each worker blithely forge ahead with its own independent copy of each
-variable. For this reason we provide @{tf.train.replica_device_setter}, which
+variable. For this reason we provide `tf.train.replica_device_setter`, which
can automatically place variables in parameter servers. For example:
``` python
@@ -211,7 +211,7 @@ sess.run(assignment) # or assignment.op.run(), or assignment.eval()
Most TensorFlow optimizers have specialized ops that efficiently update the
values of variables according to some gradient descent-like algorithm. See
-@{tf.train.Optimizer} for an explanation of how to use optimizers.
+`tf.train.Optimizer` for an explanation of how to use optimizers.
Because variables are mutable it's sometimes useful to know what version of a
variable's value is being used at any point in time. To force a re-read of the
diff --git a/tensorflow/docs_src/guide/version_compat.md b/tensorflow/docs_src/guide/version_compat.md
index 5f31c6c5f8..29ac066e6f 100644
--- a/tensorflow/docs_src/guide/version_compat.md
+++ b/tensorflow/docs_src/guide/version_compat.md
@@ -66,7 +66,7 @@ patch versions. The public APIs consist of
Some API functions are explicitly marked as "experimental" and can change in
backward incompatible ways between minor releases. These include:
-* **Experimental APIs**: The @{tf.contrib} module and its submodules in Python
+* **Experimental APIs**: The `tf.contrib` module and its submodules in Python
and any functions in the C API or fields in protocol buffers that are
explicitly commented as being experimental. In particular, any field in a
protocol buffer which is called "experimental" and all its fields and
@@ -253,13 +253,13 @@ ops has not changed:
1. If forward compatibility is desired, set `strip_default_attrs` to `True`
while exporting the model using either the
- @{tf.saved_model.builder.SavedModelBuilder.add_meta_graph_and_variables$`add_meta_graph_and_variables`}
- and @{tf.saved_model.builder.SavedModelBuilder.add_meta_graph$`add_meta_graph`}
+ `tf.saved_model.builder.SavedModelBuilder.add_meta_graph_and_variables`
+ and `tf.saved_model.builder.SavedModelBuilder.add_meta_graph`
methods of the `SavedModelBuilder` class, or
- @{tf.estimator.Estimator.export_savedmodel$`Estimator.export_savedmodel`}
+ `tf.estimator.Estimator.export_savedmodel`
2. This strips off the default valued attributes at the time of
producing/exporting the models. This makes sure that the exported
- @{tf.MetaGraphDef} does not contain the new op-attribute when the default
+ `tf.MetaGraphDef` does not contain the new op-attribute when the default
value is used.
3. Having this control could allow out-of-date consumers (for example, serving
binaries that lag behind training binaries) to continue loading the models
@@ -302,8 +302,10 @@ existing producer scripts will not suddenly use the new functionality.
#### Change an op's functionality
1. Add a new similar op named `SomethingV2` or similar and go through the
- process of adding it and switching existing Python wrappers to use it, which
- may take three weeks if forward compatibility is desired.
+ process of adding it and switching existing Python wrappers to use it.
+ To ensure forward compatibility, use the checks suggested in
+ [compat.py](https://www.tensorflow.org/code/tensorflow/python/compat/compat.py)
+ when changing the Python wrappers.
2. Remove the old op (can only take place with a major version change due to
backward compatibility).
3. Increase `min_consumer` to rule out consumers with the old op, add back the
diff --git a/tensorflow/docs_src/install/index.md b/tensorflow/docs_src/install/index.md
index c2e5a991d4..55481cc400 100644
--- a/tensorflow/docs_src/install/index.md
+++ b/tensorflow/docs_src/install/index.md
@@ -1,36 +1,39 @@
-# Installing TensorFlow
+# Install TensorFlow
-We've built and tested TensorFlow on the following 64-bit laptop/desktop
-operating systems:
+Note: Run the [TensorFlow tutorials](../tutorials) in a pre-configured
+[Colab notebook environment](https://colab.research.google.com/notebooks/welcome.ipynb){: .external},
+without installation.
+
+TensorFlow is built and tested on the following 64-bit operating systems:
* macOS 10.12.6 (Sierra) or later.
* Ubuntu 16.04 or later
* Windows 7 or later.
* Raspbian 9.0 or later.
-Although you might be able to install TensorFlow on other laptop or desktop
-systems, we only support (and only fix issues in) the preceding configurations.
+While TensorFlow may work on other systems, we only support—and fix issues in—the
+systems listed above.
The following guides explain how to install a version of TensorFlow
that enables you to write applications in Python:
- * @{$install_linux$Installing TensorFlow on Ubuntu}
- * @{$install_mac$Installing TensorFlow on macOS}
- * @{$install_windows$Installing TensorFlow on Windows}
- * @{$install_raspbian$Installing TensorFlow on a Raspberry Pi}
- * @{$install_sources$Installing TensorFlow from Sources}
+ * @{$install_linux$Install TensorFlow on Ubuntu}
+ * @{$install_mac$Install TensorFlow on macOS}
+ * @{$install_windows$Install TensorFlow on Windows}
+ * @{$install_raspbian$Install TensorFlow on a Raspberry Pi}
+ * @{$install_sources$Install TensorFlow from source code}
Many aspects of the Python TensorFlow API changed from version 0.n to 1.0.
The following guide explains how to migrate older TensorFlow applications
to Version 1.0:
- * @{$migration$Transitioning to TensorFlow 1.0}
+ * @{$migration$Transition to TensorFlow 1.0}
The following guides explain how to install TensorFlow libraries for use in
other programming languages. These APIs are aimed at deploying TensorFlow
models in applications and are not as extensive as the Python APIs.
- * @{$install_java$Installing TensorFlow for Java}
- * @{$install_c$Installing TensorFlow for C}
- * @{$install_go$Installing TensorFlow for Go}
+ * @{$install_java$Install TensorFlow for Java}
+ * @{$install_c$Install TensorFlow for C}
+ * @{$install_go$Install TensorFlow for Go}
diff --git a/tensorflow/docs_src/install/install_c.md b/tensorflow/docs_src/install/install_c.md
index 2901848745..5e26facaba 100644
--- a/tensorflow/docs_src/install/install_c.md
+++ b/tensorflow/docs_src/install/install_c.md
@@ -1,4 +1,4 @@
-# Installing TensorFlow for C
+# Install TensorFlow for C
TensorFlow provides a C API defined in
[`c_api.h`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/c/c_api.h),
@@ -38,7 +38,7 @@ enable TensorFlow for C:
OS="linux" # Change to "darwin" for macOS
TARGET_DIRECTORY="/usr/local"
curl -L \
- "https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-${TF_TYPE}-${OS}-x86_64-1.9.0-rc0.tar.gz" |
+ "https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-${TF_TYPE}-${OS}-x86_64-1.10.0-rc1.tar.gz" |
sudo tar -C $TARGET_DIRECTORY -xz
The `tar` command extracts the TensorFlow C library into the `lib`
diff --git a/tensorflow/docs_src/install/install_go.md b/tensorflow/docs_src/install/install_go.md
index 2c126df5aa..83d16bc4b7 100644
--- a/tensorflow/docs_src/install/install_go.md
+++ b/tensorflow/docs_src/install/install_go.md
@@ -1,4 +1,4 @@
-# Installing TensorFlow for Go
+# Install TensorFlow for Go
TensorFlow provides APIs for use in Go programs. These APIs are particularly
well-suited to loading models created in Python and executing them within
@@ -6,7 +6,7 @@ a Go application. This guide explains how to install and set up the
[TensorFlow Go package](https://godoc.org/github.com/tensorflow/tensorflow/tensorflow/go).
Warning: The TensorFlow Go API is *not* covered by the TensorFlow
-[API stability guarantees](../guide/version_semantics.md).
+[API stability guarantees](../guide/version_compat.md).
## Supported Platforms
@@ -38,7 +38,7 @@ steps to install this library and enable TensorFlow for Go:
TF_TYPE="cpu" # Change to "gpu" for GPU support
TARGET_DIRECTORY='/usr/local'
curl -L \
- "https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-${TF_TYPE}-$(go env GOOS)-x86_64-1.9.0-rc0.tar.gz" |
+ "https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-${TF_TYPE}-$(go env GOOS)-x86_64-1.10.0-rc1.tar.gz" |
sudo tar -C $TARGET_DIRECTORY -xz
The `tar` command extracts the TensorFlow C library into the `lib`
diff --git a/tensorflow/docs_src/install/install_java.md b/tensorflow/docs_src/install/install_java.md
index 692dfc9cef..e9c6650c92 100644
--- a/tensorflow/docs_src/install/install_java.md
+++ b/tensorflow/docs_src/install/install_java.md
@@ -1,4 +1,4 @@
-# Installing TensorFlow for Java
+# Install TensorFlow for Java
TensorFlow provides APIs for use in Java programs. These APIs are particularly
well-suited to loading models created in Python and executing them within a
@@ -36,7 +36,7 @@ following to the project's `pom.xml` to use the TensorFlow Java APIs:
<dependency>
<groupId>org.tensorflow</groupId>
<artifactId>tensorflow</artifactId>
- <version>1.9.0-rc0</version>
+ <version>1.10.0-rc1</version>
</dependency>
```
@@ -65,7 +65,7 @@ As an example, these steps will create a Maven project that uses TensorFlow:
<dependency>
<groupId>org.tensorflow</groupId>
<artifactId>tensorflow</artifactId>
- <version>1.9.0-rc0</version>
+ <version>1.10.0-rc1</version>
</dependency>
</dependencies>
</project>
@@ -124,12 +124,12 @@ instead:
<dependency>
<groupId>org.tensorflow</groupId>
<artifactId>libtensorflow</artifactId>
- <version>1.9.0-rc0</version>
+ <version>1.10.0-rc1</version>
</dependency>
<dependency>
<groupId>org.tensorflow</groupId>
<artifactId>libtensorflow_jni_gpu</artifactId>
- <version>1.9.0-rc0</version>
+ <version>1.10.0-rc1</version>
</dependency>
```
@@ -148,7 +148,7 @@ refer to the simpler instructions above instead.
Take the following steps to install TensorFlow for Java on Linux or macOS:
1. Download
- [libtensorflow.jar](https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-1.9.0-rc0.jar),
+ [libtensorflow.jar](https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-1.10.0-rc1.jar),
which is the TensorFlow Java Archive (JAR).
2. Decide whether you will run TensorFlow for Java on CPU(s) only or with
@@ -167,7 +167,7 @@ Take the following steps to install TensorFlow for Java on Linux or macOS:
OS=$(uname -s | tr '[:upper:]' '[:lower:]')
mkdir -p ./jni
curl -L \
- "https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow_jni-${TF_TYPE}-${OS}-x86_64-1.9.0-rc0.tar.gz" |
+ "https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow_jni-${TF_TYPE}-${OS}-x86_64-1.10.0-rc1.tar.gz" |
tar -xz -C ./jni
### Install on Windows
@@ -175,10 +175,10 @@ Take the following steps to install TensorFlow for Java on Linux or macOS:
Take the following steps to install TensorFlow for Java on Windows:
1. Download
- [libtensorflow.jar](https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-1.9.0-rc0.jar),
+ [libtensorflow.jar](https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-1.10.0-rc1.jar),
which is the TensorFlow Java Archive (JAR).
2. Download the following Java Native Interface (JNI) file appropriate for
- [TensorFlow for Java on Windows](https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow_jni-cpu-windows-x86_64-1.9.0-rc0.zip).
+ [TensorFlow for Java on Windows](https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow_jni-cpu-windows-x86_64-1.10.0-rc1.zip).
3. Extract this .zip file.
__Note__: The native library (`tensorflow_jni.dll`) requires `msvcp140.dll` at runtime, which is included in the [Visual C++ 2015 Redistributable](https://www.microsoft.com/en-us/download/details.aspx?id=48145) package.
@@ -227,7 +227,7 @@ must be part of your `classpath`. For example, you can include the
downloaded `.jar` in your `classpath` by using the `-cp` compilation flag
as follows:
-<pre><b>javac -cp libtensorflow-1.9.0-rc0.jar HelloTF.java</b></pre>
+<pre><b>javac -cp libtensorflow-1.10.0-rc1.jar HelloTF.java</b></pre>
### Running
@@ -241,11 +241,11 @@ two files are available to the JVM:
For example, the following command line executes the `HelloTF` program on Linux
and macOS X:
-<pre><b>java -cp libtensorflow-1.9.0-rc0.jar:. -Djava.library.path=./jni HelloTF</b></pre>
+<pre><b>java -cp libtensorflow-1.10.0-rc1.jar:. -Djava.library.path=./jni HelloTF</b></pre>
And the following command line executes the `HelloTF` program on Windows:
-<pre><b>java -cp libtensorflow-1.9.0-rc0.jar;. -Djava.library.path=jni HelloTF</b></pre>
+<pre><b>java -cp libtensorflow-1.10.0-rc1.jar;. -Djava.library.path=jni HelloTF</b></pre>
If the program prints <tt>Hello from <i>version</i></tt>, you've successfully
installed TensorFlow for Java and are ready to use the API. If the program
diff --git a/tensorflow/docs_src/install/install_linux.md b/tensorflow/docs_src/install/install_linux.md
index c573acaf45..005ad437bc 100644
--- a/tensorflow/docs_src/install/install_linux.md
+++ b/tensorflow/docs_src/install/install_linux.md
@@ -1,38 +1,38 @@
-# Installing TensorFlow on Ubuntu
+# Install TensorFlow on Ubuntu
This guide explains how to install TensorFlow on Ubuntu Linux. While these
-instructions may work on other Linux variants, they are tested and supported with
-the following system requirements:
-
-* 64-bit desktops or laptops
-* Ubuntu 16.04 or higher
+instructions may work on other Linux variants, they are tested and supported
+with the following system requirements:
+* 64-bit desktops or laptops
+* Ubuntu 16.04 or higher
## Choose which TensorFlow to install
The following TensorFlow variants are available for installation:
-* __TensorFlow with CPU support only__. If your system does not have a
- NVIDIA®&nbsp;GPU, you must install this version. This version of TensorFlow is
- usually easier to install, so even if you have an NVIDIA GPU, we recommend
- installing this version first.
-* __TensorFlow with GPU support__. TensorFlow programs usually run much faster on
- a GPU instead of a CPU. If you run performance-critical applications and your
- system has an NVIDIA®&nbsp;GPU that meets the prerequisites, you should install
- this version. See [TensorFlow GPU support](#NVIDIARequirements) for details.
-
+* __TensorFlow with CPU support only__. If your system does not have an
+ NVIDIA®&nbsp;GPU, you must install this version. This version of TensorFlow
+ is usually easier to install, so even if you have an NVIDIA GPU, we
+ recommend installing this version first.
+* __TensorFlow with GPU support__. TensorFlow programs usually run much faster
+    on a GPU than on a CPU. If you run performance-critical applications and
+ your system has an NVIDIA®&nbsp;GPU that meets the prerequisites, you should
+ install this version. See [TensorFlow GPU support](#NVIDIARequirements) for
+ details.
## How to install TensorFlow
There are a few options to install TensorFlow on your machine:
-* [Use pip in a virtual environment](#InstallingVirtualenv) *(recommended)*
-* [Use pip in your system environment](#InstallingNativePip)
-* [Configure a Docker container](#InstallingDocker)
-* [Use pip in Anaconda](#InstallingAnaconda)
-* [Install TensorFlow from source](/install/install_sources)
+* [Use pip in a virtual environment](#InstallingVirtualenv) *(recommended)*
+* [Use pip in your system environment](#InstallingNativePip)
+* [Configure a Docker container](#InstallingDocker)
+* [Use pip in Anaconda](#InstallingAnaconda)
+* [Install TensorFlow from source](/install/install_sources)
<a name="InstallingVirtualenv"></a>
+
### Use `pip` in a virtual environment
Key Point: Using a virtual environment is the recommended install method.
@@ -41,8 +41,8 @@ The [Virtualenv](https://virtualenv.pypa.io/en/stable/) tool creates virtual
Python environments that are isolated from other Python development on the same
machine. In this scenario, you install TensorFlow and its dependencies within a
virtual environment that is available when *activated*. Virtualenv provides a
-reliable way to install and run TensorFlow while avoiding conflicts with the rest
-of the system.
+reliable way to install and run TensorFlow while avoiding conflicts with the
+rest of the system.
##### 1. Install Python, `pip`, and `virtualenv`.
@@ -62,10 +62,10 @@ To install these packages on Ubuntu:
</pre>
We *recommend* using `pip` version 8.1 or higher. If using a release before
-version 8.1, upgrade `pip`:
+version 8.1, upgrade `pip`:
<pre class="prettyprint lang-bsh">
- <code class="devsite-terminal">sudo pip install -U pip</code>
+ <code class="devsite-terminal">pip install --upgrade pip</code>
</pre>
If not using Ubuntu and [setuptools](https://pypi.org/project/setuptools/) is
@@ -102,7 +102,7 @@ When the Virtualenv is activated, the shell prompt displays as `(venv) $`.
Within the active virtual environment, upgrade `pip`:
<pre class="prettyprint lang-bsh">
-(venv)$ pip install -U pip
+(venv)$ pip install --upgrade pip
</pre>
You can install other Python packages within the virtual environment without
@@ -112,15 +112,15 @@ affecting packages outside the `virtualenv`.
Choose one of the available TensorFlow packages for installation:
-* `tensorflow` —Current release for CPU
-* `tensorflow-gpu` —Current release with GPU support
-* `tf-nightly` —Nightly build for CPU
-* `tf-nightly-gpu` —Nightly build with GPU support
+* `tensorflow` —Current release for CPU
+* `tensorflow-gpu` —Current release with GPU support
+* `tf-nightly` —Nightly build for CPU
+* `tf-nightly-gpu` —Nightly build with GPU support
Within an active Virtualenv environment, use `pip` to install the package:
<pre class="prettyprint lang-bsh">
- <code class="devsite-terminal">pip install -U tensorflow</code>
+ <code class="devsite-terminal">pip install --upgrade tensorflow</code>
</pre>
Use `pip list` to show the packages installed in the virtual environment.
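For example, a quick sanity check inside the still-activated virtual environment
(a minimal sketch; the exact version string will vary):

```bash
# List the installed package and confirm that it imports.
pip list | grep tensorflow
python -c "import tensorflow as tf; print(tf.__version__)"
```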
@@ -160,14 +160,14 @@ To uninstall TensorFlow, remove the Virtualenv directory you created in step 2:
<code class="devsite-terminal">rm -r ~/tensorflow/<var>venv</var></code>
</pre>
-
<a name="InstallingNativePip"></a>
+
### Use `pip` in your system environment
Use `pip` to install the TensorFlow package directly on your system without
using a container or virtual environment for isolation. This method is
-recommended for system administrators that want a TensorFlow installation that is
-available to everyone on a multi-user system.
+recommended for system administrators who want a TensorFlow installation that
+is available to everyone on a multi-user system.
Since a system install is not isolated, it could interfere with other
Python-based installations. But if you understand `pip` and your Python
@@ -195,10 +195,10 @@ To install these packages on Ubuntu:
</pre>
We *recommend* using `pip` version 8.1 or higher. If using a release before
-version 8.1, upgrade `pip`:
+version 8.1, upgrade `pip`:
<pre class="prettyprint lang-bsh">
- <code class="devsite-terminal">sudo pip install -U pip</code>
+ <code class="devsite-terminal">pip install --upgrade pip</code>
</pre>
If not using Ubuntu and [setuptools](https://pypi.org/project/setuptools/) is
@@ -212,16 +212,16 @@ installed, use `easy_install` to install `pip`:
Choose one of the available TensorFlow packages for installation:
-* `tensorflow` —Current release for CPU
-* `tensorflow-gpu` —Current release with GPU support
-* `tf-nightly` —Nightly build for CPU
-* `tf-nightly-gpu` —Nightly build with GPU support
+* `tensorflow` —Current release for CPU
+* `tensorflow-gpu` —Current release with GPU support
+* `tf-nightly` —Nightly build for CPU
+* `tf-nightly-gpu` —Nightly build with GPU support
And use `pip` to install the package for Python 2 or 3:
<pre class="prettyprint lang-bsh">
- <code class="devsite-terminal">sudo pip install -U tensorflow # Python 2.7</code>
- <code class="devsite-terminal">sudo pip3 install -U tensorflow # Python 3.n</code>
+ <code class="devsite-terminal">pip install --upgrade --user tensorflow # Python 2.7</code>
+ <code class="devsite-terminal">pip3 install --upgrade --user tensorflow # Python 3.n</code>
</pre>
Use `pip list` to show the packages installed on the system.
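Because the commands above install with `--user`, the package lands in your
per-user site-packages rather than system-wide. A minimal way to verify,
assuming default pip paths:

```bash
# Show the per-user site directory, then confirm the package imports.
python -m site --user-site
python -c "import tensorflow as tf; print(tf.__version__)"
```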
@@ -239,8 +239,8 @@ If the above steps failed, try installing the TensorFlow binary using the remote
URL of the `pip` package:
<pre class="prettyprint lang-bsh">
- <code class="devsite-terminal">sudo pip install --upgrade <var>remote-pkg-URL</var> # Python 2.7</code>
- <code class="devsite-terminal">sudo pip3 install --upgrade <var>remote-pkg-URL</var> # Python 3.n</code>
+ <code class="devsite-terminal">pip install --user --upgrade <var>remote-pkg-URL</var> # Python 2.7</code>
+ <code class="devsite-terminal">pip3 install --user --upgrade <var>remote-pkg-URL</var> # Python 3.n</code>
</pre>
The <var>remote-pkg-URL</var> depends on the operating system, Python version,
@@ -255,42 +255,41 @@ encounter problems.
To uninstall TensorFlow on your system, use one of following commands:
<pre class="prettyprint lang-bsh">
- <code class="devsite-terminal">sudo pip uninstall tensorflow # for Python 2.7</code>
- <code class="devsite-terminal">sudo pip3 uninstall tensorflow # for Python 3.n</code>
+ <code class="devsite-terminal">pip uninstall tensorflow # for Python 2.7</code>
+ <code class="devsite-terminal">pip3 uninstall tensorflow # for Python 3.n</code>
</pre>
<a name="InstallingDocker"></a>
+
### Configure a Docker container
-Docker completely isolates the TensorFlow installation
-from pre-existing packages on your machine. The Docker container contains
-TensorFlow and all its dependencies. Note that the Docker image can be quite
-large (hundreds of MBs). You might choose the Docker installation if you are
-incorporating TensorFlow into a larger application architecture that already
-uses Docker.
+Docker completely isolates the TensorFlow installation from pre-existing
+packages on your machine. The Docker container contains TensorFlow and all its
+dependencies. Note that the Docker image can be quite large (hundreds of MBs).
+You might choose the Docker installation if you are incorporating TensorFlow
+into a larger application architecture that already uses Docker.
Take the following steps to install TensorFlow through Docker:
- 1. Install Docker on your machine as described in the
- [Docker documentation](http://docs.docker.com/engine/installation/).
- 2. Optionally, create a Linux group called <code>docker</code> to allow
- launching containers without sudo as described in the
- [Docker documentation](https://docs.docker.com/engine/installation/linux/linux-postinstall/).
- (If you don't do this step, you'll have to use sudo each time
- you invoke Docker.)
- 3. To install a version of TensorFlow that supports GPUs, you must first
- install [nvidia-docker](https://github.com/NVIDIA/nvidia-docker), which
- is stored in github.
- 4. Launch a Docker container that contains one of the
- [TensorFlow binary images](https://hub.docker.com/r/tensorflow/tensorflow/tags/).
+1. Install Docker on your machine as described in the
+ [Docker documentation](http://docs.docker.com/engine/installation/).
+2. Optionally, create a Linux group called <code>docker</code> to allow
+ launching containers without sudo as described in the
+ [Docker documentation](https://docs.docker.com/engine/installation/linux/linux-postinstall/).
+ (If you don't do this step, you'll have to use sudo each time you invoke
+ Docker.)
+3. To install a version of TensorFlow that supports GPUs, you must first
+    install [nvidia-docker](https://github.com/NVIDIA/nvidia-docker), which is
+    hosted on GitHub.
+4. Launch a Docker container that contains one of the
+ [TensorFlow binary images](https://hub.docker.com/r/tensorflow/tensorflow/tags/).
The remainder of this section explains how to launch a Docker container.
-
#### CPU-only
-To launch a Docker container with CPU-only support (that is, without
-GPU support), enter a command of the following format:
+To launch a Docker container with CPU-only support (that is, without GPU
+support), enter a command of the following format:
<pre>
$ docker run -it <i>-p hostPort:containerPort TensorFlowCPUImage</i>
@@ -298,29 +297,31 @@ $ docker run -it <i>-p hostPort:containerPort TensorFlowCPUImage</i>
where:
- * <tt><i>-p hostPort:containerPort</i></tt> is optional.
- If you plan to run TensorFlow programs from the shell, omit this option.
- If you plan to run TensorFlow programs as Jupyter notebooks, set both
- <tt><i>hostPort</i></tt> and <tt><i>containerPort</i></tt>
- to <tt>8888</tt>. If you'd like to run TensorBoard inside the container,
- add a second `-p` flag, setting both <i>hostPort</i> and <i>containerPort</i>
- to 6006.
- * <tt><i>TensorFlowCPUImage</i></tt> is required. It identifies the Docker
+* <tt><i>-p hostPort:containerPort</i></tt> is optional. If you plan to run
+ TensorFlow programs from the shell, omit this option. If you plan to run
+ TensorFlow programs as Jupyter notebooks, set both <tt><i>hostPort</i></tt>
+ and <tt><i>containerPort</i></tt> to <tt>8888</tt>. If you'd like to run
+ TensorBoard inside the container, add a second `-p` flag, setting both
+ <i>hostPort</i> and <i>containerPort</i> to 6006.
+* <tt><i>TensorFlowCPUImage</i></tt> is required. It identifies the Docker
container. Specify one of the following values:
- * <tt>tensorflow/tensorflow</tt>, which is the TensorFlow CPU binary image.
- * <tt>tensorflow/tensorflow:latest-devel</tt>, which is the latest
- TensorFlow CPU Binary image plus source code.
- * <tt>tensorflow/tensorflow:<i>version</i></tt>, which is the
- specified version (for example, 1.1.0rc1) of TensorFlow CPU binary image.
- * <tt>tensorflow/tensorflow:<i>version</i>-devel</tt>, which is
- the specified version (for example, 1.1.0rc1) of the TensorFlow GPU
- binary image plus source code.
+
+ * <tt>tensorflow/tensorflow</tt>, which is the TensorFlow CPU binary
+ image.
+    * <tt>tensorflow/tensorflow:latest-devel</tt>, which is the latest
+      TensorFlow CPU binary image plus source code.
+    * <tt>tensorflow/tensorflow:<i>version</i></tt>, which is the specified
+      version (for example, 1.1.0rc1) of the TensorFlow CPU binary image.
+    * <tt>tensorflow/tensorflow:<i>version</i>-devel</tt>, which is the
+      specified version (for example, 1.1.0rc1) of the TensorFlow CPU binary
+      image plus source code.
TensorFlow images are available at
[dockerhub](https://hub.docker.com/r/tensorflow/tensorflow/).
-For example, the following command launches the latest TensorFlow CPU binary image
-in a Docker container from which you can run TensorFlow programs in a shell:
+For example, the following command launches the latest TensorFlow CPU binary
+image in a Docker container from which you can run TensorFlow programs in a
+shell:
<pre>
$ <b>docker run -it tensorflow/tensorflow bash</b>
@@ -336,10 +337,11 @@ $ <b>docker run -it -p 8888:8888 tensorflow/tensorflow</b>
Docker will download the TensorFlow binary image the first time you launch it.
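As a concrete illustration of the optional `-p` flags described above, the
following sketch publishes both the Jupyter and TensorBoard ports from a single
container (port numbers as given in the list above):

```bash
# Publish Jupyter (8888) and TensorBoard (6006) from the same container.
docker run -it -p 8888:8888 -p 6006:6006 tensorflow/tensorflow
```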
-
#### GPU support
-To launch a Docker container with NVidia GPU support, enter a command of the following format (this [does not require any local CUDA installation](https://github.com/nvidia/nvidia-docker/wiki/CUDA#requirements)):
+To launch a Docker container with NVIDIA GPU support, enter a command of the
+following format (this
+[does not require any local CUDA installation](https://github.com/nvidia/nvidia-docker/wiki/CUDA#requirements)):
<pre>
$ <b>nvidia-docker run -it</b> <i>-p hostPort:containerPort TensorFlowGPUImage</i>
@@ -347,34 +349,34 @@ $ <b>nvidia-docker run -it</b> <i>-p hostPort:containerPort TensorFlowGPUImage</
where:
- * <tt><i>-p hostPort:containerPort</i></tt> is optional. If you plan
- to run TensorFlow programs from the shell, omit this option. If you plan
- to run TensorFlow programs as Jupyter notebooks, set both
- <tt><i>hostPort</i></tt> and <code><em>containerPort</em></code> to `8888`.
- * <i>TensorFlowGPUImage</i> specifies the Docker container. You must
- specify one of the following values:
- * <tt>tensorflow/tensorflow:latest-gpu</tt>, which is the latest
- TensorFlow GPU binary image.
- * <tt>tensorflow/tensorflow:latest-devel-gpu</tt>, which is
- the latest TensorFlow GPU Binary image plus source code.
- * <tt>tensorflow/tensorflow:<i>version</i>-gpu</tt>, which is the
- specified version (for example, 0.12.1) of the TensorFlow GPU
- binary image.
- * <tt>tensorflow/tensorflow:<i>version</i>-devel-gpu</tt>, which is
- the specified version (for example, 0.12.1) of the TensorFlow GPU
- binary image plus source code.
-
-We recommend installing one of the `latest` versions. For example, the
-following command launches the latest TensorFlow GPU binary image in a
-Docker container from which you can run TensorFlow programs in a shell:
+* <tt><i>-p hostPort:containerPort</i></tt> is optional. If you plan to run
+ TensorFlow programs from the shell, omit this option. If you plan to run
+ TensorFlow programs as Jupyter notebooks, set both <tt><i>hostPort</i></tt>
+ and <code><em>containerPort</em></code> to `8888`.
+* <i>TensorFlowGPUImage</i> specifies the Docker container. You must specify
+ one of the following values:
+ * <tt>tensorflow/tensorflow:latest-gpu</tt>, which is the latest
+ TensorFlow GPU binary image.
+    * <tt>tensorflow/tensorflow:latest-devel-gpu</tt>, which is the latest
+      TensorFlow GPU binary image plus source code.
+ * <tt>tensorflow/tensorflow:<i>version</i>-gpu</tt>, which is the
+ specified version (for example, 0.12.1) of the TensorFlow GPU binary
+ image.
+ * <tt>tensorflow/tensorflow:<i>version</i>-devel-gpu</tt>, which is the
+ specified version (for example, 0.12.1) of the TensorFlow GPU binary
+ image plus source code.
+
+We recommend installing one of the `latest` versions. For example, the following
+command launches the latest TensorFlow GPU binary image in a Docker container
+from which you can run TensorFlow programs in a shell:
<pre>
$ <b>nvidia-docker run -it tensorflow/tensorflow:latest-gpu bash</b>
</pre>
-The following command also launches the latest TensorFlow GPU binary image
-in a Docker container. In this Docker container, you can run TensorFlow
-programs in a Jupyter notebook:
+The following command also launches the latest TensorFlow GPU binary image in a
+Docker container. In this Docker container, you can run TensorFlow programs in a
+Jupyter notebook:
<pre>
$ <b>nvidia-docker run -it -p 8888:8888 tensorflow/tensorflow:latest-gpu</b>
@@ -390,14 +392,12 @@ Docker will download the TensorFlow binary image the first time you launch it.
For more details see the
[TensorFlow docker readme](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/tools/docker).
-
#### Next Steps
-You should now
-[validate your installation](#ValidateYourInstallation).
-
+You should now [validate your installation](#ValidateYourInstallation).
<a name="InstallingAnaconda"></a>
+
### Use `pip` in Anaconda
Anaconda provides the `conda` utility to create a virtual environment. However,
@@ -410,61 +410,59 @@ not tested on new TensorFlow releases.
Take the following steps to install TensorFlow in an Anaconda environment:
- 1. Follow the instructions on the
- [Anaconda download site](https://www.continuum.io/downloads)
- to download and install Anaconda.
+1. Follow the instructions on the
+ [Anaconda download site](https://www.continuum.io/downloads) to download and
+ install Anaconda.
- 2. Create a conda environment named <tt>tensorflow</tt> to run a version
- of Python by invoking the following command:
+2. Create a conda environment named <tt>tensorflow</tt> to run a version of
+ Python by invoking the following command:
<pre>$ <b>conda create -n tensorflow pip python=2.7 # or python=3.3, etc.</b></pre>
- 3. Activate the conda environment by issuing the following command:
+3. Activate the conda environment by issuing the following command:
<pre>$ <b>source activate tensorflow</b>
(tensorflow)$ # Your prompt should change </pre>
- 4. Issue a command of the following format to install
- TensorFlow inside your conda environment:
+4. Issue a command of the following format to install TensorFlow inside your
+ conda environment:
<pre>(tensorflow)$ <b>pip install --ignore-installed --upgrade</b> <i>tfBinaryURL</i></pre>
- where <code><em>tfBinaryURL</em></code> is the
- [URL of the TensorFlow Python package](#the_url_of_the_tensorflow_python_package).
- For example, the following command installs the CPU-only version of
- TensorFlow for Python 3.4:
+ where <code><em>tfBinaryURL</em></code> is the
+ [URL of the TensorFlow Python package](#the_url_of_the_tensorflow_python_package).
+ For example, the following command installs the CPU-only version of
+ TensorFlow for Python 3.4:
<pre>
(tensorflow)$ <b>pip install --ignore-installed --upgrade \
- https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.9.0rc0-cp34-cp34m-linux_x86_64.whl</b></pre>
+ https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.10.0rc1-cp34-cp34m-linux_x86_64.whl</b></pre>
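When you are finished working, you can leave or delete the conda environment. A
minimal sketch, using the environment name from step 2:

```bash
# Deactivate the environment; optionally remove it entirely.
source deactivate
conda remove -n tensorflow --all
```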
<a name="ValidateYourInstallation"></a>
+
## Validate your installation
To validate your TensorFlow installation, do the following:
- 1. Ensure that your environment is prepared to run TensorFlow programs.
- 2. Run a short TensorFlow program.
-
+1. Ensure that your environment is prepared to run TensorFlow programs.
+2. Run a short TensorFlow program.
### Prepare your environment
-If you installed on native pip, Virtualenv, or Anaconda, then
-do the following:
+If you installed using native pip, Virtualenv, or Anaconda, then do the following:
- 1. Start a terminal.
- 2. If you installed with Virtualenv or Anaconda, activate your container.
- 3. If you installed TensorFlow source code, navigate to any
- directory *except* one containing TensorFlow source code.
+1. Start a terminal.
+2. If you installed with Virtualenv or Anaconda, activate your environment.
+3. If you installed TensorFlow source code, navigate to any directory *except*
+ one containing TensorFlow source code.
-If you installed through Docker, start a Docker container
-from which you can run bash. For example:
+If you installed through Docker, start a Docker container from which you can run
+bash. For example:
<pre>
$ <b>docker run -it tensorflow/tensorflow bash</b>
</pre>
-
### Run a short TensorFlow program
Invoke python from your shell as follows:
@@ -486,94 +484,71 @@ TensorFlow programs:
<pre>Hello, TensorFlow!</pre>
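Equivalently, the whole check collapses into a shell one-liner (a minimal smoke
test using the TF 1.x graph API; the trailing `.decode()` keeps the output clean
on Python 3):

```bash
# Should print: Hello, TensorFlow!
python -c "import tensorflow as tf; print(tf.Session().run(tf.constant('Hello, TensorFlow!')).decode())"
```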
-If the system outputs an error message instead of a greeting, see [Common
-installation problems](#common_installation_problems).
+If the system outputs an error message instead of a greeting, see
+[Common installation problems](#common_installation_problems).
-To learn more, see [Get Started with TensorFlow](https://www.tensorflow.org/get_started).
+To learn more, see the [TensorFlow tutorials](../tutorials/).
<a name="NVIDIARequirements"></a>
-## TensorFlow GPU support
-
-To install TensorFlow with GPU support, configure the following NVIDIA® software
-on your system:
-
-* [CUDA Toolkit 9.0](http://nvidia.com/cuda). For details, see
- [NVIDIA's documentation](http://docs.nvidia.com/cuda/cuda-installation-guide-linux/).
- Append the relevant CUDA pathnames to the `LD_LIBRARY_PATH` environmental
- variable as described in the NVIDIA documentation.
-* [cuDNN SDK v7](http://developer.nvidia.com/cudnn). For details, see
- [NVIDIA's documentation](http://docs.nvidia.com/deeplearning/sdk/cudnn-install/).
- Create the `CUDA_HOME` environment variable as described in the NVIDIA
- documentation.
-* A GPU card with CUDA Compute Capability 3.0 or higher for building TensorFlow
- from source. To use the TensorFlow binaries, version 3.5 or higher is required.
- See the [NVIDIA documentation](https://developer.nvidia.com/cuda-gpus) for a
- list of supported GPU cards.
-* [GPU drivers](http://nvidia.com/drivers) that support your version of the CUDA
- Toolkit.
-* The `libcupti-dev` library is the NVIDIA CUDA Profile Tools Interface. This
- library provides advanced profiling support. To install this library,
- use the following command for CUDA Toolkit >= 8.0:
-
-<pre class="prettyprint lang-bsh">
- <code class="devsite-terminal">sudo apt-get install cuda-command-line-tools</code>
-</pre>
-
-Add this path to the `LD_LIBRARY_PATH` environmental variable:
-
-<pre class="prettyprint lang-bsh">
- <code class="devsite-terminal">export LD_LIBRARY_PATH=${LD_LIBRARY_PATH:+${LD_LIBRARY_PATH}:}/usr/local/cuda/extras/CUPTI/lib64</code>
-</pre>
-
-* *OPTIONAL*: For optimized performance during inference, install
- *NVIDIA&nbsp;TensorRT&nbsp;3.0*. To install the minimal amount of TensorRT
- runtime components required to use with the pre-built `tensorflow-gpu` package:
-<pre class="prettyprint lang-bsh">
- <code class="devsite-terminal">wget https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1404/x86_64/nvinfer-runtime-trt-repo-ubuntu1404-3.0.4-ga-cuda9.0_1.0-1_amd64.deb</code>
- <code class="devsite-terminal">sudo dpkg -i nvinfer-runtime-trt-repo-ubuntu1404-3.0.4-ga-cuda9.0_1.0-1_amd64.deb</code>
- <code class="devsite-terminal">sudo apt-get update</code>
- <code class="devsite-terminal">sudo apt-get install -y --allow-downgrades libnvinfer-dev libcudnn7-dev=7.0.5.15-1+cuda9.0 libcudnn7=7.0.5.15-1+cuda9.0</code>
-</pre>
-
-Note: For compatibility with the pre-built `tensorflow-gpu` package, use the
-Ubuntu *14.04* package of TensorRT (shown above). Use this even when installing
-on an Ubuntu 16.04 system.
-
-To build the TensorFlow-TensorRT integration module from source instead of using
-the pre-built binaries, see the
-[module documentation](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/tensorrt#using-tensorrt-in-tensorflow).
-For detailed TensorRT installation instructions, see
-[NVIDIA's TensorRT documentation](http://docs.nvidia.com/deeplearning/sdk/tensorrt-install-guide/index.html).
-
-To avoid cuDNN version conflicts during later system upgrades, hold the cuDNN
-version at 7.0.5:
-
-<pre class="prettyprint lang-bsh">
- <code class="devsite-terminal">sudo apt-mark hold libcudnn7 libcudnn7-dev</code>
-</pre>
-
-To allow upgrades, remove the this hold:
-
-<pre class="prettyprint lang-bsh">
- <code class="devsite-terminal">sudo apt-mark unhold libcudnn7 libcudnn7-dev</code>
-</pre>
-
-If you have an earlier version of the preceding packages, upgrade to the
-specified versions. If upgrading is not possible, you can still run TensorFlow
-with GPU support by @{$install_sources}.
+## TensorFlow GPU support
+Note: Due to the number of libraries required, using [Docker](#InstallingDocker)
+is recommended over installing directly on the host system.
+
+The following NVIDIA® <i>hardware</i> must be installed on your system:
+
+* GPU card with CUDA Compute Capability 3.5 or higher. See
+ [NVIDIA documentation](https://developer.nvidia.com/cuda-gpus) for a list of
+ supported GPU cards.
+
+The following NVIDIA® <i>software</i> must be installed on your system:
+
+* [GPU drivers](http://nvidia.com/driver). CUDA 9.0 requires 384.x or higher.
+* [CUDA Toolkit 9.0](http://nvidia.com/cuda).
+* [cuDNN SDK](http://developer.nvidia.com/cudnn) (>= 7.0). Version 7.1 is
+ recommended.
+* [CUPTI](http://docs.nvidia.com/cuda/cupti/) ships with the CUDA Toolkit, but
+ you also need to append its path to the `LD_LIBRARY_PATH` environment
+    variable:
+    `export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/extras/CUPTI/lib64`
+    (a sketch for making this persistent follows this list).
+* *OPTIONAL*: [NCCL 2.2](https://developer.nvidia.com/nccl) to use TensorFlow
+ with multiple GPUs.
+* *OPTIONAL*:
+ [TensorRT](http://docs.nvidia.com/deeplearning/sdk/tensorrt-install-guide/index.html)
+ which can improve latency and throughput for inference for some models.
+
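For the CUPTI entry above, a minimal sketch that makes the `LD_LIBRARY_PATH`
addition persistent (assuming the default `/usr/local/cuda` install location and
a bash shell):

```bash
# Append the CUPTI library path for future shells, then reload the shell config.
echo 'export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/extras/CUPTI/lib64' >> ~/.bashrc
source ~/.bashrc
```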
+To use a GPU with CUDA Compute Capability 3.0, or different versions of the
+preceding NVIDIA libraries, see
+@{$install_sources$installing TensorFlow from Sources}. If using Ubuntu 16.04
+and possibly other Debian-based Linux distributions, `apt-get` can be used with
+the NVIDIA repository to simplify installation.
+
+```bash
+# Adds NVIDIA package repository.
+sudo apt-key adv --fetch-keys http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/7fa2af80.pub
+wget http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/cuda-repo-ubuntu1604_9.1.85-1_amd64.deb
+wget http://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1604/x86_64/nvidia-machine-learning-repo-ubuntu1604_1.0.0-1_amd64.deb
+sudo dpkg -i cuda-repo-ubuntu1604_9.1.85-1_amd64.deb
+sudo dpkg -i nvidia-machine-learning-repo-ubuntu1604_1.0.0-1_amd64.deb
+sudo apt-get update
+# Includes optional NCCL 2.x.
+sudo apt-get install cuda-9-0 cuda-cublas-9-0 cuda-cufft-9-0 cuda-curand-9-0 \
+ cuda-cusolver-9-0 cuda-cusparse-9-0 libcudnn7=7.1.4.18-1+cuda9.0 \
+ libnccl2=2.2.13-1+cuda9.0 cuda-command-line-tools-9-0
+# Optionally install the TensorRT runtime (must be done after the CUDA install above).
+sudo apt-get update
+sudo apt-get install libnvinfer4=4.1.2-1+cuda9.0
+```
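After the packages install, a quick check that the driver is loaded and that
TensorFlow can see the GPU (a sketch; the second command assumes
`tensorflow-gpu` is already installed):

```bash
# Verify the driver, then verify TensorFlow detects the GPU.
nvidia-smi
python -c "import tensorflow as tf; print(tf.test.is_gpu_available())"
```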
## Common installation problems
We are relying on Stack Overflow to document TensorFlow installation problems
-and their remedies. The following table contains links to Stack Overflow
-answers for some common installation problems.
-If you encounter an error message or other
-installation problem not listed in the following table, search for it
-on Stack Overflow. If Stack Overflow doesn't show the error message,
-ask a new question about it on Stack Overflow and specify
-the `tensorflow` tag.
+and their remedies. The following table contains links to Stack Overflow answers
+for some common installation problems. If you encounter an error message or
+other installation problem not listed in the following table, search for it on
+Stack Overflow. If Stack Overflow doesn't show the error message, ask a new
+question about it on Stack Overflow and specify the `tensorflow` tag.
<table>
<tr> <th>Link to GitHub or Stack&nbsp;Overflow</th> <th>Error Message</th> </tr>
@@ -657,74 +632,67 @@ the `tensorflow` tag.
</table>
-
<a name="TF_PYTHON_URL"></a>
+
## The URL of the TensorFlow Python package
A few installation mechanisms require the URL of the TensorFlow Python package.
The value you specify depends on three factors:
- * operating system
- * Python version
- * CPU only vs. GPU support
+* operating system
+* Python version
+* CPU only vs. GPU support
This section documents the relevant values for Linux installations.
-
### Python 2.7
CPU only:
<pre>
-https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.9.0rc0-cp27-none-linux_x86_64.whl
+https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.10.0rc1-cp27-none-linux_x86_64.whl
</pre>
-
GPU support:
<pre>
-https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.9.0rc0-cp27-none-linux_x86_64.whl
+https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.10.0rc1-cp27-none-linux_x86_64.whl
</pre>
Note that GPU support requires the NVIDIA hardware and software described in
[NVIDIA requirements to run TensorFlow with GPU support](#NVIDIARequirements).
-
### Python 3.4
CPU only:
<pre>
-https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.9.0rc0-cp34-cp34m-linux_x86_64.whl
+https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.10.0rc1-cp34-cp34m-linux_x86_64.whl
</pre>
-
GPU support:
<pre>
-https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.9.0rc0-cp34-cp34m-linux_x86_64.whl
+https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.10.0rc1-cp34-cp34m-linux_x86_64.whl
</pre>
Note that GPU support requires the NVIDIA hardware and software described in
[NVIDIA requirements to run TensorFlow with GPU support](#NVIDIARequirements).
-
### Python 3.5
CPU only:
<pre>
-https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.9.0rc0-cp35-cp35m-linux_x86_64.whl
+https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.10.0rc1-cp35-cp35m-linux_x86_64.whl
</pre>
-
GPU support:
<pre>
-https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.9.0rc0-cp35-cp35m-linux_x86_64.whl
+https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.10.0rc1-cp35-cp35m-linux_x86_64.whl
</pre>
-
Note that GPU support requires the NVIDIA hardware and software described in
[NVIDIA requirements to run TensorFlow with GPU support](#NVIDIARequirements).
@@ -733,16 +701,14 @@ Note that GPU support requires the NVIDIA hardware and software described in
CPU only:
<pre>
-https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.9.0rc0-cp36-cp36m-linux_x86_64.whl
+https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.10.0rc1-cp36-cp36m-linux_x86_64.whl
</pre>
-
GPU support:
<pre>
-https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.9.0rc0-cp36-cp36m-linux_x86_64.whl
+https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.10.0rc1-cp36-cp36m-linux_x86_64.whl
</pre>
-
Note that GPU support requires the NVIDIA hardware and software described in
[NVIDIA requirements to run TensorFlow with GPU support](#NVIDIARequirements).
diff --git a/tensorflow/docs_src/install/install_mac.md b/tensorflow/docs_src/install/install_mac.md
index 584f1e2e35..3a8637bfb1 100644
--- a/tensorflow/docs_src/install/install_mac.md
+++ b/tensorflow/docs_src/install/install_mac.md
@@ -1,4 +1,4 @@
-# Installing TensorFlow on macOS
+# Install TensorFlow on macOS
This guide explains how to install TensorFlow on macOS. Although these
instructions might also work on other macOS variants, we have only
@@ -119,7 +119,7 @@ Take the following steps to install TensorFlow with Virtualenv:
TensorFlow in the active Virtualenv is as follows:
<pre> $ <b>pip3 install --upgrade \
- https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.9.0rc0-py3-none-any.whl</b></pre>
+ https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.10.0rc1-py3-none-any.whl</b></pre>
If you encounter installation problems, see
[Common Installation Problems](#common-installation-problems).
@@ -242,7 +242,7 @@ take the following steps:
issue the following command:
<pre> $ <b>sudo pip3 install --upgrade \
- https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.9.0rc0-py3-none-any.whl</b> </pre>
+ https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.10.0rc1-py3-none-any.whl</b> </pre>
If the preceding command fails, see
[installation problems](#common-installation-problems).
@@ -350,7 +350,7 @@ Take the following steps to install TensorFlow in an Anaconda environment:
TensorFlow for Python 2.7:
<pre> (<i>targetDirectory</i>)$ <b>pip install --ignore-installed --upgrade \
- https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.9.0rc0-py2-none-any.whl</b></pre>
+ https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.10.0rc1-py2-none-any.whl</b></pre>
<a name="ValidateYourInstallation"></a>
@@ -403,8 +403,7 @@ writing TensorFlow programs:
If the system outputs an error message instead of a greeting, see
[Common installation problems](#common_installation_problems).
-To learn more, see [Get Started with TensorFlow](https://www.tensorflow.org/get_started).
-
+To learn more, see the [TensorFlow tutorials](../tutorials/).
## Common installation problems
@@ -518,7 +517,7 @@ The value you specify depends on your Python version.
<pre>
-https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.9.0rc0-py2-none-any.whl
+https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.10.0rc1-py2-none-any.whl
</pre>
@@ -526,5 +525,5 @@ https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.9.0rc0-py2-none-a
<pre>
-https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.9.0rc0-py3-none-any.whl
+https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.10.0rc1-py3-none-any.whl
</pre>
diff --git a/tensorflow/docs_src/install/install_raspbian.md b/tensorflow/docs_src/install/install_raspbian.md
index 0caab6d335..58a5285c78 100644
--- a/tensorflow/docs_src/install/install_raspbian.md
+++ b/tensorflow/docs_src/install/install_raspbian.md
@@ -1,4 +1,4 @@
-# Installing TensorFlow on Raspbian
+# Install TensorFlow on Raspbian
This guide explains how to install TensorFlow on a Raspberry Pi running
Raspbian. Although these instructions might also work on other Pi variants, we
@@ -230,7 +230,7 @@ problems, despite the log message.
If the system outputs an error message instead of a greeting, see [Common
installation problems](#common_installation_problems).
-To learn more, see [Get Started with TensorFlow](https://www.tensorflow.org/get_started).
+To learn more, see the [TensorFlow tutorials](../tutorials/).
## Common installation problems
diff --git a/tensorflow/docs_src/install/install_sources.md b/tensorflow/docs_src/install/install_sources.md
index e55520ceaa..8bb09f4021 100644
--- a/tensorflow/docs_src/install/install_sources.md
+++ b/tensorflow/docs_src/install/install_sources.md
@@ -1,28 +1,27 @@
-# Installing TensorFlow from Sources
+# Install TensorFlow from Sources
-This guide explains how to build TensorFlow sources into a TensorFlow
-binary and how to install that TensorFlow binary. Note that we provide
-well-tested, pre-built TensorFlow binaries for Ubuntu, macOS, and Windows
-systems. In addition, there are pre-built TensorFlow
-[docker images](https://hub.docker.com/r/tensorflow/tensorflow/).
-So, don't build a TensorFlow binary yourself unless you are very
-comfortable building complex packages from source and dealing with
-the inevitable aftermath should things not go exactly as documented.
+This guide explains how to build TensorFlow sources into a TensorFlow binary and
+how to install that TensorFlow binary. Note that we provide well-tested,
+pre-built TensorFlow binaries for Ubuntu, macOS, and Windows systems. In
+addition, there are pre-built TensorFlow
+[docker images](https://hub.docker.com/r/tensorflow/tensorflow/). So, don't
+build a TensorFlow binary yourself unless you are very comfortable building
+complex packages from source and dealing with the inevitable aftermath should
+things not go exactly as documented.
-If the last paragraph didn't scare you off, welcome. This guide explains
-how to build TensorFlow on 64-bit desktops and laptops running either of
-the following operating systems:
+If the last paragraph didn't scare you off, welcome. This guide explains how to
+build TensorFlow on 64-bit desktops and laptops running either of the following
+operating systems:
* Ubuntu
* macOS X
-Note: Some users have successfully built and installed TensorFlow from
-sources on non-supported systems. Please remember that we do not fix
-issues stemming from these attempts.
+Note: Some users have successfully built and installed TensorFlow from sources
+on non-supported systems. Please remember that we do not fix issues stemming
+from these attempts.
-We **do not support** building TensorFlow on Windows. That said, if you'd
-like to try to build TensorFlow on Windows anyway, use either of the
-following:
+We **do not support** building TensorFlow on Windows. That said, if you'd like
+to try to build TensorFlow on Windows anyway, use either of the following:
* [Bazel on Windows](https://bazel.build/versions/master/docs/windows.html)
* [TensorFlow CMake build](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/cmake)
@@ -32,38 +31,33 @@ instructions. Older CPUs may not be able to execute these binaries.
## Determine which TensorFlow to install
-You must choose one of the following types of TensorFlow to build and
-install:
-
-* **TensorFlow with CPU support only**. If your system does not have a
- NVIDIA® GPU, build and install this version. Note that this version of
- TensorFlow is typically easier to build and install, so even if you
- have an NVIDIA GPU, we recommend building and installing this version
- first.
-* **TensorFlow with GPU support**. TensorFlow programs typically run
- significantly faster on a GPU than on a CPU. Therefore, if your system
- has a NVIDIA GPU and you need to run performance-critical applications,
- you should ultimately build and install this version.
- Beyond the NVIDIA GPU itself, your system must also fulfill the NVIDIA
- software requirements described in one of the following documents:
+You must choose one of the following types of TensorFlow to build and install:
- * @{$install_linux#NVIDIARequirements$Installing TensorFlow on Ubuntu}
- * @{$install_mac#NVIDIARequirements$Installing TensorFlow on macOS}
+* **TensorFlow with CPU support only**. If your system does not have an NVIDIA®
+ GPU, build and install this version. Note that this version of TensorFlow is
+ typically easier to build and install, so even if you have an NVIDIA GPU, we
+ recommend building and installing this version first.
+* **TensorFlow with GPU support**. TensorFlow programs typically run
+    significantly faster on a GPU than on a CPU. Therefore, if your system has an
+ NVIDIA GPU and you need to run performance-critical applications, you should
+ ultimately build and install this version. Beyond the NVIDIA GPU itself,
+ your system must also fulfill the NVIDIA software requirements described in
+ one of the following documents:
+    * @{$install_linux#NVIDIARequirements$Installing TensorFlow on Ubuntu}
+    * @{$install_mac#NVIDIARequirements$Installing TensorFlow on macOS}
## Clone the TensorFlow repository
-Start the process of building TensorFlow by cloning a TensorFlow
-repository.
+Start the process of building TensorFlow by cloning a TensorFlow repository.
To clone **the latest** TensorFlow repository, issue the following command:
<pre>$ <b>git clone https://github.com/tensorflow/tensorflow</b> </pre>
-The preceding <code>git clone</code> command creates a subdirectory
-named `tensorflow`. After cloning, you may optionally build a
-**specific branch** (such as a release branch) by invoking the
-following commands:
+The preceding <code>git clone</code> command creates a subdirectory named
+`tensorflow`. After cloning, you may optionally build a **specific branch**
+(such as a release branch) by invoking the following commands:
<pre>
$ <b>cd tensorflow</b>
@@ -75,38 +69,34 @@ issue the following command:
<pre>$ <b>git checkout r1.0</b></pre>
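The two steps can also be combined by cloning the release branch directly
(branch name assumed here; substitute the release you want to build):

```bash
# Clone only the desired release branch.
git clone -b r1.10 https://github.com/tensorflow/tensorflow
cd tensorflow
```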
-Next, you must prepare your environment for
-[Linux](#PrepareLinux)
-or
+Next, you must prepare your environment for [Linux](#PrepareLinux) or
[macOS](#PrepareMac).
-
<a name="PrepareLinux"></a>
-## Prepare environment for Linux
-Before building TensorFlow on Linux, install the following build
-tools on your system:
+## Prepare environment for Linux
- * bazel
- * TensorFlow Python dependencies
- * optionally, NVIDIA packages to support TensorFlow for GPU.
+Before building TensorFlow on Linux, install the following build tools on your
+system:
+* bazel
+* TensorFlow Python dependencies
+* optionally, NVIDIA packages to support TensorFlow for GPU.
### Install Bazel
If bazel is not installed on your system, install it now by following
[these directions](https://bazel.build/versions/master/docs/install.html).
-
### Install TensorFlow Python dependencies
To install TensorFlow, you must install the following packages:
- * `numpy`, which is a numerical processing package that TensorFlow requires.
- * `dev`, which enables adding extensions to Python.
- * `pip`, which enables you to install and manage certain Python packages.
- * `wheel`, which enables you to manage Python compressed packages in
- the wheel (.whl) format.
+* `numpy`, which is a numerical processing package that TensorFlow requires.
+* `dev`, which enables adding extensions to Python.
+* `pip`, which enables you to install and manage certain Python packages.
+* `wheel`, which enables you to manage Python compressed packages in the wheel
+ (.whl) format.
To install these packages for Python 2.7, issue the following command:
@@ -120,94 +110,98 @@ To install these packages for Python 3.n, issue the following command:
$ <b>sudo apt-get install python3-numpy python3-dev python3-pip python3-wheel</b>
</pre>
-
### Optional: install TensorFlow for GPU prerequisites
If you are building TensorFlow without GPU support, skip this section.
-The following NVIDIA <i>hardware</i> must be installed on your system:
-
- * GPU card with CUDA Compute Capability 3.0 or higher. See
- [NVIDIA documentation](https://developer.nvidia.com/cuda-gpus)
- for a list of supported GPU cards.
-
-The following NVIDIA <i>software</i> must be installed on your system:
-
- * [CUDA Toolkit](http://nvidia.com/cuda) (>= 8.0). We recommend version 9.0.
- For details, see
- [NVIDIA's documentation](http://docs.nvidia.com/cuda/cuda-installation-guide-linux/).
- Ensure that you append the relevant CUDA pathnames to the
- `LD_LIBRARY_PATH` environment variable as described in the
- NVIDIA documentation.
- * [GPU drivers](http://nvidia.com/driver) supporting your version of the CUDA
- Toolkit.
- * [cuDNN SDK](http://developer.nvidia.com/cudnn) (>= 6.0). We recommend version 7.0. For details, see
- [NVIDIA's documentation](http://docs.nvidia.com/deeplearning/sdk/cudnn-install/).
- * [CUPTI](http://docs.nvidia.com/cuda/cupti/) ships with the CUDA Toolkit, but
- you also need to append its path to the `LD_LIBRARY_PATH` environment
- variable:
+The following NVIDIA® <i>hardware</i> must be installed on your system:
+
+* GPU card with CUDA Compute Capability 3.5 or higher. See
+ [NVIDIA documentation](https://developer.nvidia.com/cuda-gpus) for a list of
+ supported GPU cards.
- <pre> $ <b>export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/extras/CUPTI/lib64</b> </pre>
+The following NVIDIA® <i>software</i> must be installed on your system:
+
+* [GPU drivers](http://nvidia.com/driver). CUDA 9.0 requires 384.x or higher.
+* [CUDA Toolkit](http://nvidia.com/cuda) (>= 8.0). We recommend version 9.0.
+* [cuDNN SDK](http://developer.nvidia.com/cudnn) (>= 6.0). We recommend
+ version 7.1.x.
+* [CUPTI](http://docs.nvidia.com/cuda/cupti/) ships with the CUDA Toolkit, but
+ you also need to append its path to the `LD_LIBRARY_PATH` environment
+    variable:
+    `export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/extras/CUPTI/lib64`
+* *OPTIONAL*: [NCCL 2.2](https://developer.nvidia.com/nccl) to use TensorFlow
+ with multiple GPUs.
+* *OPTIONAL*:
+ [TensorRT](http://docs.nvidia.com/deeplearning/sdk/tensorrt-install-guide/index.html)
+ which can improve latency and throughput for inference for some models.
+
+While it is possible to install the NVIDIA libraries via `apt-get` from the
+NVIDIA repository, the libraries and headers are installed in locations that
+make it difficult to configure and debug build issues. Downloading and
+installing the libraries manually or using Docker
+([latest-devel-gpu](https://hub.docker.com/r/tensorflow/tensorflow/tags/)) is
+recommended.
### Next
After preparing the environment, you must now
[configure the installation](#ConfigureInstallation).
-
<a name="PrepareMac"></a>
+
## Prepare environment for macOS
Before building TensorFlow, you must install the following on your system:
- * bazel
- * TensorFlow Python dependencies.
- * optionally, NVIDIA packages to support TensorFlow for GPU.
-
+* bazel
+* TensorFlow Python dependencies.
+* optionally, NVIDIA packages to support TensorFlow for GPU.
### Install bazel
If bazel is not installed on your system, install it now by following
[these directions](https://bazel.build/versions/master/docs/install.html#mac-os-x).
-
### Install python dependencies
To build TensorFlow, you must install the following packages:
- * six
- * numpy, which is a numerical processing package that TensorFlow requires.
- * wheel, which enables you to manage Python compressed packages
- in the wheel (.whl) format.
+* six
+* mock
+* numpy, which is a numerical processing package that TensorFlow requires.
+* wheel, which enables you to manage Python compressed packages in the wheel
+ (.whl) format.
-You may install the python dependencies using pip. If you don't have pip
-on your machine, we recommend using homebrew to install Python and pip as
+You may install the python dependencies using pip. If you don't have pip on your
+machine, we recommend using Homebrew to install Python and pip as
[documented here](http://docs.python-guide.org/en/latest/starting/install/osx/).
If you follow these instructions, you will not need to disable SIP.
After installing pip, invoke the following commands:
-<pre> $ <b>sudo pip install six numpy wheel</b> </pre>
+<pre> $ <b>sudo pip install six numpy wheel mock</b> </pre>
Note: These are just the minimum requirements to _build_ tensorflow. Installing
the pip package will download additional packages required to _run_ it. If you
plan on executing tasks directly with `bazel`, without the pip installation,
-you may need to install additional python packages. For example, you should
-`pip install mock enum34` before running TensorFlow's tests with bazel.
+you may need to install additional python packages. For example, you should
+run `pip install mock enum34` before running TensorFlow's tests with bazel.
<a name="ConfigureInstallation"></a>
+
## Configure the installation
-The root of the source tree contains a bash script named
-<code>configure</code>. This script asks you to identify the pathname of all
-relevant TensorFlow dependencies and specify other build configuration options
-such as compiler flags. You must run this script *prior* to
-creating the pip package and installing TensorFlow.
+The root of the source tree contains a bash script named <code>configure</code>.
+This script asks you to identify the pathname of all relevant TensorFlow
+dependencies and specify other build configuration options such as compiler
+flags. You must run this script *prior* to creating the pip package and
+installing TensorFlow.
-If you wish to build TensorFlow with GPU, `configure` will ask
-you to specify the version numbers of CUDA and cuDNN. If several
-versions of CUDA or cuDNN are installed on your system, explicitly select
-the desired version instead of relying on the default.
+If you wish to build TensorFlow with GPU, `configure` will ask you to specify
+the version numbers of CUDA and cuDNN. If several versions of CUDA or cuDNN are
+installed on your system, explicitly select the desired version instead of
+relying on the default.
One of the questions that `configure` will ask is as follows:
@@ -215,73 +209,117 @@ One of the questions that `configure` will ask is as follows:
Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -march=native]
</pre>
-This question refers to a later phase in which you'll use bazel to [build the
-pip package](#build-the-pip-package) or the [C/Java libraries](#BuildCorJava).
-We recommend accepting the default (`-march=native`), which will optimize the
-generated code for your local machine's CPU type. However, if you are building
-TensorFlow on one CPU type but will run TensorFlow on a different CPU type, then
-consider specifying a more specific optimization
-flag as described in [the gcc
-documentation](https://gcc.gnu.org/onlinedocs/gcc-4.5.3/gcc/i386-and-x86_002d64-Options.html).
+This question refers to a later phase in which you'll use bazel to
+[build the pip package](#build-the-pip-package) or the
+[C/Java libraries](#BuildCorJava). We recommend accepting the default
+(`-march=native`), which will optimize the generated code for your local
+machine's CPU type. However, if you are building TensorFlow on one CPU type but
+will run TensorFlow on a different CPU type, then consider specifying a more
+specific optimization flag as described in
+[the gcc documentation](https://gcc.gnu.org/onlinedocs/gcc-4.5.3/gcc/i386-and-x86_002d64-Options.html).
-Here is an example execution of the `configure` script. Note that your
-own input will likely differ from our sample input:
+Here is an example execution of the `configure` script. Note that your own input
+will likely differ from our sample input:
<pre>
$ <b>cd tensorflow</b> # cd to the top-level directory created
$ <b>./configure</b>
+You have bazel 0.15.0 installed.
Please specify the location of python. [Default is /usr/bin/python]: <b>/usr/bin/python2.7</b>
+
+
Found possible Python library paths:
/usr/local/lib/python2.7/dist-packages
/usr/lib/python2.7/dist-packages
Please input the desired Python library path to use. Default is [/usr/lib/python2.7/dist-packages]
-Using python library path: /usr/local/lib/python2.7/dist-packages
-Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -march=native]:
-Do you wish to use jemalloc as the malloc implementation? [Y/n]
-jemalloc enabled
-Do you wish to build TensorFlow with Google Cloud Platform support? [y/N]
-No Google Cloud Platform support will be enabled for TensorFlow
-Do you wish to build TensorFlow with Hadoop File System support? [y/N]
-No Hadoop File System support will be enabled for TensorFlow
-Do you wish to build TensorFlow with the XLA just-in-time compiler (experimental)? [y/N]
-No XLA support will be enabled for TensorFlow
-Do you wish to build TensorFlow with VERBS support? [y/N]
-No VERBS support will be enabled for TensorFlow
-Do you wish to build TensorFlow with OpenCL support? [y/N]
-No OpenCL support will be enabled for TensorFlow
-Do you wish to build TensorFlow with CUDA support? [y/N] <b>Y</b>
-CUDA support will be enabled for TensorFlow
-Do you want to use clang as CUDA compiler? [y/N]
-nvcc will be used as CUDA compiler
+Do you wish to build TensorFlow with jemalloc as malloc support? [Y/n]:
+jemalloc as malloc support will be enabled for TensorFlow.
+
+Do you wish to build TensorFlow with Google Cloud Platform support? [Y/n]:
+Google Cloud Platform support will be enabled for TensorFlow.
+
+Do you wish to build TensorFlow with Hadoop File System support? [Y/n]:
+Hadoop File System support will be enabled for TensorFlow.
+
+Do you wish to build TensorFlow with Amazon AWS Platform support? [Y/n]:
+Amazon AWS Platform support will be enabled for TensorFlow.
+
+Do you wish to build TensorFlow with Apache Kafka Platform support? [Y/n]:
+Apache Kafka Platform support will be enabled for TensorFlow.
+
+Do you wish to build TensorFlow with XLA JIT support? [y/N]:
+No XLA JIT support will be enabled for TensorFlow.
+
+Do you wish to build TensorFlow with GDR support? [y/N]:
+No GDR support will be enabled for TensorFlow.
+
+Do you wish to build TensorFlow with VERBS support? [y/N]:
+No VERBS support will be enabled for TensorFlow.
+
+Do you wish to build TensorFlow with OpenCL SYCL support? [y/N]:
+No OpenCL SYCL support will be enabled for TensorFlow.
+
+Do you wish to build TensorFlow with CUDA support? [y/N]: <b>Y</b>
+CUDA support will be enabled for TensorFlow.
+
Please specify the CUDA SDK version you want to use. [Leave empty to default to CUDA 9.0]: <b>9.0</b>
+
+
Please specify the location where CUDA 9.0 toolkit is installed. Refer to README.md for more details. [Default is /usr/local/cuda]:
-Please specify which gcc should be used by nvcc as the host compiler. [Default is /usr/bin/gcc]:
-Please specify the cuDNN version you want to use. [Leave empty to default to cuDNN 7.0]: <b>7</b>
+
+
+Please specify the cuDNN version you want to use. [Leave empty to default to cuDNN 7.0]: <b>7.0</b>
+
+
Please specify the location where cuDNN 7 library is installed. Refer to README.md for more details. [Default is /usr/local/cuda]:
-Please specify a list of comma-separated CUDA compute capabilities you want to build with.
+
+
+Do you wish to build TensorFlow with TensorRT support? [y/N]:
+No TensorRT support will be enabled for TensorFlow.
+
+Please specify the NCCL version you want to use. If NCCL 2.2 is not installed, then you can use version 1.3 that can be fetched automatically but it may have worse performance with multiple GPUs. [Default is 2.2]: 1.3
+
+
+Please specify a list of comma-separated Cuda compute capabilities you want to build with.
You can find the compute capability of your device at: https://developer.nvidia.com/cuda-gpus.
-Please note that each additional compute capability significantly increases your build time and binary size.
-[Default is: "3.5,5.2"]: <b>3.0</b>
-Do you wish to build TensorFlow with MPI support? [y/N]
-MPI support will not be enabled for TensorFlow
+Please note that each additional compute capability significantly increases your
+build time and binary size. [Default is: 3.5,7.0] <b>6.1</b>
+
+
+Do you want to use clang as CUDA compiler? [y/N]:
+nvcc will be used as CUDA compiler.
+
+Please specify which gcc should be used by nvcc as the host compiler. [Default is /usr/bin/gcc]:
+
+
+Do you wish to build TensorFlow with MPI support? [y/N]:
+No MPI support will be enabled for TensorFlow.
+
+Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -march=native]:
+
+
+Would you like to interactively configure ./WORKSPACE for Android builds? [y/N]:
+Not configuring the WORKSPACE for Android builds.
+
+Preconfigured Bazel build configs. You can use any of the below by adding "--config=<>" to your build command. See tools/bazel.rc for more details.
+ --config=mkl # Build with MKL support.
+ --config=monolithic # Config for mostly static monolithic build.
Configuration finished
</pre>
-If you told `configure` to build for GPU support, then `configure`
-will create a canonical set of symbolic links to the CUDA libraries
-on your system. Therefore, every time you change the CUDA library paths,
-you must rerun the `configure` script before re-invoking
-the <code>bazel build</code> command.
+If you told `configure` to build for GPU support, then `configure` will create a
+canonical set of symbolic links to the CUDA libraries on your system. Therefore,
+every time you change the CUDA library paths, you must rerun the `configure`
+script before re-invoking the <code>bazel build</code> command.
Note the following:
- * Although it is possible to build both CUDA and non-CUDA configs
- under the same source tree, we recommend running `bazel clean` when
- switching between these two configurations in the same source tree.
- * If you don't run the `configure` script *before* running the
- `bazel build` command, the `bazel build` command will fail.
-
+* Although it is possible to build both CUDA and non-CUDA configs under the
+    same source tree, we recommend running `bazel clean` when switching between
+    these two configurations in the same source tree (see the sketch below).
+* If you don't run the `configure` script *before* running the `bazel build`
+ command, the `bazel build` command will fail.
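A sketch of the clean-and-rebuild cycle that the first note above describes, for
switching from a CUDA to a non-CUDA build in the same tree:

```bash
# Discard previous build state, reconfigure, then rebuild.
bazel clean
./configure
bazel build --config=opt //tensorflow/tools/pip_package:build_pip_package
```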
## Build the pip package
@@ -289,49 +327,58 @@ Note: If you're only interested in building the libraries for the TensorFlow C
or Java APIs, see [Build the C or Java libraries](#BuildCorJava); you do not
need to build the pip package in that case.
-To build a pip package for TensorFlow with CPU-only support,
-you would typically invoke the following command:
+### CPU-only support
+
+To build a pip package for TensorFlow with CPU-only support:
+
+<pre>
+$ bazel build --config=opt //tensorflow/tools/pip_package:build_pip_package
+</pre>
+
+To build a pip package for TensorFlow with CPU-only support and Intel®
+MKL-DNN optimization:
<pre>
-$ <b>bazel build --config=opt //tensorflow/tools/pip_package:build_pip_package</b>
+$ bazel build --config=mkl --config=opt //tensorflow/tools/pip_package:build_pip_package
</pre>
-To build a pip package for TensorFlow with GPU support,
-invoke the following command:
+### GPU support
-<pre>$ <b>bazel build --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package</b> </pre>
+To build a pip package for TensorFlow with GPU support:
-**NOTE on gcc 5 or later:** the binary pip packages available on the
-TensorFlow website are built with gcc 4, which uses the older ABI. To
-make your build compatible with the older ABI, you need to add
-`--cxxopt="-D_GLIBCXX_USE_CXX11_ABI=0"` to your `bazel build` command.
-ABI compatibility allows custom ops built against the TensorFlow pip package
-to continue to work against your built package.
+<pre>
+$ bazel build --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package
+</pre>
-<b>Tip:</b> By default, building TensorFlow from sources consumes
-a lot of RAM. If RAM is an issue on your system, you may limit RAM usage
-by specifying <code>--local_resources 2048,.5,1.0</code> while
-invoking `bazel`.
+**NOTE on gcc 5 or later:** the binary pip packages available on the TensorFlow
+website are built with gcc 4, which uses the older ABI. To make your build
+compatible with the older ABI, you need to add
+`--cxxopt="-D_GLIBCXX_USE_CXX11_ABI=0"` to your `bazel build` command. ABI
+compatibility allows custom ops built against the TensorFlow pip package to
+continue to work against your built package.
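+
+For example, a GPU build targeting the older ABI might look like this (a
+sketch that simply combines the flags above):
+
+<pre>
+$ bazel build --config=opt --config=cuda --cxxopt="-D_GLIBCXX_USE_CXX11_ABI=0" //tensorflow/tools/pip_package:build_pip_package
+</pre>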
-The <code>bazel build</code> command builds a script named
-`build_pip_package`. Running this script as follows will build
-a `.whl` file within the `/tmp/tensorflow_pkg` directory:
+<b>Tip:</b> By default, building TensorFlow from sources consumes a lot of RAM.
+If RAM is an issue on your system, you may limit RAM usage by specifying
+<code>--local_resources 2048,.5,1.0</code> while invoking `bazel`.
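+
+For instance, an illustrative invocation that caps Bazel at roughly 2 GB of
+RAM (the three values are available RAM in MB, CPU cores, and I/O capability):
+
+<pre>
+$ bazel build --config=opt --local_resources 2048,.5,1.0 //tensorflow/tools/pip_package:build_pip_package
+</pre>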
+
+The <code>bazel build</code> command builds a script named `build_pip_package`.
+Running this script as follows will build a `.whl` file within the
+`/tmp/tensorflow_pkg` directory:
<pre>
$ <b>bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg</b>
</pre>
-
## Install the pip package
-Invoke `pip install` to install that pip package.
-The filename of the `.whl` file depends on your platform.
-For example, the following command will install the pip package
+Invoke `pip install` to install that pip package. The filename of the `.whl`
+file depends on your platform. For example, the following command will install
+the pip package
-for TensorFlow 1.9.0rc0 on Linux:
+for TensorFlow 1.10.0rc1 on Linux:
<pre>
-$ <b>sudo pip install /tmp/tensorflow_pkg/tensorflow-1.9.0rc0-py2-none-any.whl</b>
+$ <b>sudo pip install /tmp/tensorflow_pkg/tensorflow-1.10.0rc1-py2-none-any.whl</b>
</pre>
## Validate your installation
@@ -362,28 +409,31 @@ TensorFlow programs:
<pre>Hello, TensorFlow!</pre>
-To learn more, see [Get Started with TensorFlow](https://www.tensorflow.org/get_started).
+To learn more, see the [TensorFlow tutorials](../tutorials/).
-If the system outputs an error message instead of a greeting, see [Common
-installation problems](#common_installation_problems).
+If the system outputs an error message instead of a greeting, see
+[Common installation problems](#common_installation_problems).
## Common build and installation problems
The build and installation problems you encounter typically depend on the
-operating system. See the "Common installation problems" section
-of one of the following guides:
-
- * @{$install_linux#common_installation_problems$Installing TensorFlow on Linux}
- * @{$install_mac#common_installation_problems$Installing TensorFlow on Mac OS}
- * @{$install_windows#common_installation_problems$Installing TensorFlow on Windows}
-
-Beyond the errors documented in those two guides, the following table
-notes additional errors specific to building TensorFlow. Note that we
-are relying on Stack Overflow as the repository for build and installation
-problems. If you encounter an error message not listed in the preceding
-two guides or in the following table, search for it on Stack Overflow. If
-Stack Overflow doesn't show the error message, ask a new question on
-Stack Overflow and specify the `tensorflow` tag.
+operating system. See the "Common installation problems" section of one of the
+following guides:
+
+* @{$install_linux#common_installation_problems$Installing TensorFlow on Linux}
+* @{$install_mac#common_installation_problems$Installing TensorFlow on Mac OS}
+* @{$install_windows#common_installation_problems$Installing TensorFlow on Windows}
+
+Beyond the errors documented in those guides, the following table notes
+additional errors specific to building TensorFlow. Note that we are relying on
+Stack Overflow as the repository for build and installation problems. If you
+encounter an error message not listed in the preceding guides or in the
+following table, search for it on Stack Overflow. If Stack Overflow doesn't show
+the error message, ask a new question on Stack Overflow and specify the
+`tensorflow` tag.
<table>
<tr> <th>Stack Overflow Link</th> <th>Error Message</th> </tr>
@@ -430,9 +480,12 @@ Stack Overflow and specify the `tensorflow` tag.
</table>
## Tested source configurations
+
**Linux**
<table>
<tr><th>Version:</th><th>CPU/GPU:</th><th>Python Version:</th><th>Compiler:</th><th>Build Tools:</th><th>cuDNN:</th><th>CUDA:</th></tr>
+<tr><td>tensorflow-1.10.0</td><td>CPU</td><td>2.7, 3.3-3.6</td><td>GCC 4.8</td><td>Bazel 0.15.0</td><td>N/A</td><td>N/A</td></tr>
+<tr><td>tensorflow_gpu-1.10.0</td><td>GPU</td><td>2.7, 3.3-3.6</td><td>GCC 4.8</td><td>Bazel 0.15.0</td><td>7</td><td>9</td></tr>
<tr><td>tensorflow-1.9.0</td><td>CPU</td><td>2.7, 3.3-3.6</td><td>GCC 4.8</td><td>Bazel 0.11.0</td><td>N/A</td><td>N/A</td></tr>
<tr><td>tensorflow_gpu-1.9.0</td><td>GPU</td><td>2.7, 3.3-3.6</td><td>GCC 4.8</td><td>Bazel 0.11.0</td><td>7</td><td>9</td></tr>
<tr><td>tensorflow-1.8.0</td><td>CPU</td><td>2.7, 3.3-3.6</td><td>GCC 4.8</td><td>Bazel 0.10.0</td><td>N/A</td><td>N/A</td></tr>
@@ -458,6 +511,7 @@ Stack Overflow and specify the `tensorflow` tag.
**Mac**
<table>
<tr><th>Version:</th><th>CPU/GPU:</th><th>Python Version:</th><th>Compiler:</th><th>Build Tools:</th><th>cuDNN:</th><th>CUDA:</th></tr>
+<tr><td>tensorflow-1.10.0</td><td>CPU</td><td>2.7, 3.3-3.6</td><td>Clang from xcode</td><td>Bazel 0.15.0</td><td>N/A</td><td>N/A</td></tr>
<tr><td>tensorflow-1.9.0</td><td>CPU</td><td>2.7, 3.3-3.6</td><td>Clang from xcode</td><td>Bazel 0.11.0</td><td>N/A</td><td>N/A</td></tr>
<tr><td>tensorflow-1.8.0</td><td>CPU</td><td>2.7, 3.3-3.6</td><td>Clang from xcode</td><td>Bazel 0.10.1</td><td>N/A</td><td>N/A</td></tr>
<tr><td>tensorflow-1.7.0</td><td>CPU</td><td>2.7, 3.3-3.6</td><td>Clang from xcode</td><td>Bazel 0.10.1</td><td>N/A</td><td>N/A</td></tr>
@@ -475,6 +529,8 @@ Stack Overflow and specify the `tensorflow` tag.
**Windows**
<table>
<tr><th>Version:</th><th>CPU/GPU:</th><th>Python Version:</th><th>Compiler:</th><th>Build Tools:</th><th>cuDNN:</th><th>CUDA:</th></tr>
+<tr><td>tensorflow-1.10.0</td><td>CPU</td><td>3.5-3.6</td><td>MSVC 2015 update 3</td><td>Cmake v3.6.3</td><td>N/A</td><td>N/A</td></tr>
+<tr><td>tensorflow_gpu-1.10.0</td><td>GPU</td><td>3.5-3.6</td><td>MSVC 2015 update 3</td><td>Cmake v3.6.3</td><td>7</td><td>9</td></tr>
<tr><td>tensorflow-1.9.0</td><td>CPU</td><td>3.5-3.6</td><td>MSVC 2015 update 3</td><td>Cmake v3.6.3</td><td>N/A</td><td>N/A</td></tr>
<tr><td>tensorflow_gpu-1.9.0</td><td>GPU</td><td>3.5-3.6</td><td>MSVC 2015 update 3</td><td>Cmake v3.6.3</td><td>7</td><td>9</td></tr>
<tr><td>tensorflow-1.8.0</td><td>CPU</td><td>3.5-3.6</td><td>MSVC 2015 update 3</td><td>Cmake v3.6.3</td><td>N/A</td><td>N/A</td></tr>
@@ -498,6 +554,7 @@ Stack Overflow and specify the `tensorflow` tag.
</table>
<a name="BuildCorJava"></a>
+
## Build the C or Java libraries
The instructions above are tailored to building the TensorFlow Python packages.
@@ -506,10 +563,12 @@ If you're interested in building the libraries for the TensorFlow C API, do the
following:
1. Follow the steps up to [Configure the installation](#ConfigureInstallation)
-2. Build the C libraries following instructions in the [README](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/tools/lib_package/README.md).
+2. Build the C libraries following instructions in the
+ [README](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/tools/lib_package/README.md).
-If you're interested inv building the libraries for the TensorFlow Java API,
-do the following:
+If you're interested in building the libraries for the TensorFlow Java API, do
+the following:
1. Follow the steps up to [Configure the installation](#ConfigureInstallation)
-2. Build the Java library following instructions in the [README](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/tools/lib_package/README.md).
+2. Build the Java library following instructions in the
+ [README](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/tools/lib_package/README.md).
diff --git a/tensorflow/docs_src/install/install_windows.md b/tensorflow/docs_src/install/install_windows.md
index 7fe94f0bc3..e9061bf3c1 100644
--- a/tensorflow/docs_src/install/install_windows.md
+++ b/tensorflow/docs_src/install/install_windows.md
@@ -1,4 +1,4 @@
-# Installing TensorFlow on Windows
+# Install TensorFlow on Windows
This guide explains how to install TensorFlow on Windows. Although these
instructions might also work on other Windows variants, we have only
@@ -157,7 +157,7 @@ TensorFlow programs:
If the system outputs an error message instead of a greeting, see [Common
installation problems](#common_installation_problems).
-To learn more, see [Get Started with TensorFlow](https://www.tensorflow.org/get_started).
+To learn more, see the [TensorFlow tutorials](../tutorials/).
## Common installation problems
diff --git a/tensorflow/docs_src/install/migration.md b/tensorflow/docs_src/install/migration.md
index d6c31f96bd..19315ace2d 100644
--- a/tensorflow/docs_src/install/migration.md
+++ b/tensorflow/docs_src/install/migration.md
@@ -1,5 +1,4 @@
-
-# Transitioning to TensorFlow 1.0
+# Transition to TensorFlow 1.0
The APIs in TensorFlow 1.0 have changed in ways that are not all backwards
diff --git a/tensorflow/docs_src/javascript/index.md b/tensorflow/docs_src/javascript/index.md
deleted file mode 100644
index ad63eeb255..0000000000
--- a/tensorflow/docs_src/javascript/index.md
+++ /dev/null
@@ -1,5 +0,0 @@
-# JavaScript
-
-You may develop TensorFlow programs in JavaScript, training and deploying
-models right in your browser. For details, see
-[js.tensorflow.org](https://js.tensorflow.org).
diff --git a/tensorflow/docs_src/javascript/leftnav_files b/tensorflow/docs_src/javascript/leftnav_files
deleted file mode 100644
index fc0ab8a543..0000000000
--- a/tensorflow/docs_src/javascript/leftnav_files
+++ /dev/null
@@ -1 +0,0 @@
-index.md
diff --git a/tensorflow/docs_src/mobile/README.md b/tensorflow/docs_src/mobile/README.md
new file mode 100644
index 0000000000..ecf4267265
--- /dev/null
+++ b/tensorflow/docs_src/mobile/README.md
@@ -0,0 +1,3 @@
+# TF Lite subsite
+
+This subsite directory lives in [tensorflow/contrib/lite/g3doc](../../contrib/lite/g3doc/).
diff --git a/tensorflow/docs_src/mobile/android_build.md b/tensorflow/docs_src/mobile/android_build.md
deleted file mode 100644
index f4b07db459..0000000000
--- a/tensorflow/docs_src/mobile/android_build.md
+++ /dev/null
@@ -1,177 +0,0 @@
-# Building TensorFlow on Android
-
-To get you started working with TensorFlow on Android, we'll walk through two
-ways to build our TensorFlow mobile demos and deploy them on an Android
-device. The first is Android Studio, which lets you build and deploy in an
-IDE. The second is building with Bazel and deploying with ADB on the command
-line.
-
-Why choose one or the other of these methods?
-
-The simplest way to use TensorFlow on Android is to use Android Studio. If you
-aren't planning to customize your TensorFlow build at all, or if you want to use
-Android Studio's editor and other features to build an app and just want to add
-TensorFlow to it, we recommend using Android Studio.
-
-If you are using custom ops, or have some other reason to build TensorFlow from
-scratch, scroll down and see our instructions
-for [building the demo with Bazel](#build_the_demo_using_bazel).
-
-## Build the demo using Android Studio
-
-**Prerequisites**
-
-If you haven't already, do the following two things:
-
-- Install [Android Studio](https://developer.android.com/studio/index.html),
- following the instructions on their website.
-
-- Clone the TensorFlow repository from GitHub:
-
- git clone https://github.com/tensorflow/tensorflow
-
-**Building**
-
-1. Open Android Studio, and from the Welcome screen, select **Open an existing
- Android Studio project**.
-
-2. From the **Open File or Project** window that appears, navigate to and select
- the `tensorflow/examples/android` directory from wherever you cloned the
- TensorFlow GitHub repo. Click OK.
-
- If it asks you to do a Gradle Sync, click OK.
-
-    You may also need to install various platforms and tools, if you get
-    errors like "Failed to find target with hash string 'android-23'" and similar.
-
-3. Open the `build.gradle` file (you can go to **1:Project** in the side panel
-   and find it under the **Gradle Scripts** expandable section under
-   **Android**). Look for
- the `nativeBuildSystem` variable and set it to `none` if it isn't already:
-
- // set to 'bazel', 'cmake', 'makefile', 'none'
- def nativeBuildSystem = 'none'
-
-4. Click the *Run* button (the green arrow) or select *Run > Run 'android'* from the
- top menu. You may need to rebuild the project using *Build > Rebuild Project*.
-
- If it asks you to use Instant Run, click **Proceed Without Instant Run**.
-
- Also, you need to have an Android device plugged in with developer options
- enabled at this
- point. See [here](https://developer.android.com/studio/run/device.html) for
- more details on setting up developer devices.
-
-This installs three apps on your phone that are all part of the TensorFlow
-Demo. See [Android Sample Apps](#android_sample_apps) for more information about
-them.
-
-## Adding TensorFlow to your apps using Android Studio
-
-To add TensorFlow to your own apps on Android, the simplest way is to add the
-following lines to your Gradle build file:
-
- allprojects {
- repositories {
- jcenter()
- }
- }
-
- dependencies {
- compile 'org.tensorflow:tensorflow-android:+'
- }
-
-This automatically downloads the latest stable version of TensorFlow as an AAR
-and installs it in your project.
-
-## Build the demo using Bazel
-
-Another way to use TensorFlow on Android is to build an APK
-using [Bazel](https://bazel.build/) and load it onto your device
-using [ADB](https://developer.android.com/studio/command-line/adb.html). This
-requires some knowledge of build systems and Android developer tools, but we'll
-guide you through the basics here.
-
-- First, follow our instructions for @{$install/install_sources$installing from sources}.
- This will also guide you through installing Bazel and cloning the
- TensorFlow code.
-
-- Download the Android [SDK](https://developer.android.com/studio/index.html)
- and [NDK](https://developer.android.com/ndk/downloads/index.html) if you do
- not already have them. You need at least version 12b of the NDK, and 23 of the
- SDK.
-
-- In your copy of the TensorFlow source, update the
- [WORKSPACE](https://github.com/tensorflow/tensorflow/blob/master/WORKSPACE)
- file with the location of your SDK and NDK, where it says &lt;PATH_TO_NDK&gt;
- and &lt;PATH_TO_SDK&gt;.
-
-- Run Bazel to build the demo APK:
-
- bazel build -c opt //tensorflow/examples/android:tensorflow_demo
-
-- Use [ADB](https://developer.android.com/studio/command-line/adb.html#move) to
- install the APK onto your device:
-
- adb install -r bazel-bin/tensorflow/examples/android/tensorflow_demo.apk
-
-Note: In general when compiling for Android with Bazel you need
-`--config=android` on the Bazel command line, though this particular example is
-Android-only, so you don't need it here.
-
-This installs three apps on your phone that are all part of the TensorFlow
-Demo. See [Android Sample Apps](#android_sample_apps) for more information about
-them.
-
-## Android Sample Apps
-
-The
-[Android example code](https://www.tensorflow.org/code/tensorflow/examples/android/) is
-a single project that builds and installs three sample apps which all use the
-same underlying code. The sample apps all take video input from a phone's
-camera:
-
-- **TF Classify** uses the Inception v3 model to label the objects it’s pointed
- at with classes from Imagenet. There are only 1,000 categories in Imagenet,
- which misses most everyday objects and includes many things you’re unlikely to
- encounter often in real life, so the results can often be quite amusing. For
- example there’s no ‘person’ category, so instead it will often guess things it
- does know that are often associated with pictures of people, like a seat belt
- or an oxygen mask. If you do want to customize this example to recognize
- objects you care about, you can use
- the
- [TensorFlow for Poets codelab](https://codelabs.developers.google.com/codelabs/tensorflow-for-poets/index.html#0) as
- an example for how to train a model based on your own data.
-
-- **TF Detect** uses a multibox model to try to draw bounding boxes around the
- locations of people in the camera. These boxes are annotated with the
- confidence for each detection result. Results will not be perfect, as this
- kind of object detection is still an active research topic. The demo also
- includes optical tracking for when objects move between frames, which runs
- more frequently than the TensorFlow inference. This improves the user
- experience since the apparent frame rate is faster, but it also gives the
- ability to estimate which boxes refer to the same object between frames, which
- is important for counting objects over time.
-
-- **TF Stylize** implements a real-time style transfer algorithm on the camera
- feed. You can select which styles to use and mix between them using the
-  palette at the bottom of the screen, and also switch the resolution of the
-  processing to be higher or lower.
-
-When you build and install the demo, you'll see three app icons on your phone,
-one for each of the demos. Tapping on them should open up the app and let you
-explore what they do. You can enable profiling statistics on-screen by tapping
-the volume up button while they’re running.
-
-### Android Inference Library
-
-Because Android apps need to be written in Java, and core TensorFlow is in C++,
-TensorFlow has a JNI library to interface between the two. Its interface is aimed
-only at inference, so it provides the ability to load a graph, set up inputs,
-and run the model to calculate particular outputs. You can see the full
-documentation for the minimal set of methods in
-[TensorFlowInferenceInterface.java](https://www.tensorflow.org/code/tensorflow/contrib/android/java/org/tensorflow/contrib/android/TensorFlowInferenceInterface.java).
-
-The demo applications use this interface, so they’re a good place to look for
-example usage. You can download prebuilt binary jars
-at
-[ci.tensorflow.org](https://ci.tensorflow.org/view/Nightly/job/nightly-android/).
diff --git a/tensorflow/docs_src/mobile/index.md b/tensorflow/docs_src/mobile/index.md
deleted file mode 100644
index 419ae7094a..0000000000
--- a/tensorflow/docs_src/mobile/index.md
+++ /dev/null
@@ -1,36 +0,0 @@
-# Overview
-
-TensorFlow was designed to be a good deep learning solution for mobile
-platforms. Currently we have two solutions for deploying machine learning
-applications on mobile and embedded devices:
-@{$mobile/mobile_intro$TensorFlow for Mobile} and @{$mobile/tflite$TensorFlow Lite}.
-
-## TensorFlow Lite versus TensorFlow Mobile
-
-Here are a few of the differences between the two:
-
-- TensorFlow Lite is an evolution of TensorFlow Mobile. In most cases, apps
- developed with TensorFlow Lite will have a smaller binary size, fewer
- dependencies, and better performance.
-
-- TensorFlow Lite is in developer preview, so not all use cases are covered yet.
-  Until then, we expect you to use TensorFlow Mobile for production cases.
-
-- TensorFlow Lite supports only a limited set of operators, so not all models
- will work on it by default. TensorFlow for Mobile has a fuller set of
- supported functionality.
-
-TensorFlow Lite provides better performance and a small binary size on mobile
-platforms, as well as the ability to leverage hardware acceleration if available
-on those platforms. In addition, it has many fewer dependencies, so it can be
-built and hosted on simpler, more constrained devices. TensorFlow Lite
-also allows targeting accelerators through the [Neural Networks
-API](https://developer.android.com/ndk/guides/neuralnetworks/index.html).
-
-TensorFlow Lite currently has coverage for a limited set of operators. While
-TensorFlow for Mobile supports only a constrained set of ops by default, in
-principle it can be customized to build a kernel for any operator you use in
-TensorFlow. Thus, use cases which are not currently supported by
-TensorFlow Lite should continue to use TensorFlow for Mobile. As TensorFlow Lite
-evolves, it will gain additional operators, and the decision will be easier to
-make.
diff --git a/tensorflow/docs_src/mobile/ios_build.md b/tensorflow/docs_src/mobile/ios_build.md
deleted file mode 100644
index 4c84a1214a..0000000000
--- a/tensorflow/docs_src/mobile/ios_build.md
+++ /dev/null
@@ -1,107 +0,0 @@
-# Building TensorFlow on iOS
-
-## Using CocoaPods
-
-The simplest way to get started with TensorFlow on iOS is using the CocoaPods
-package management system. You can add the `TensorFlow-experimental` pod to your
-Podfile, which installs a universal binary framework. This makes it easy to get
-started but has the disadvantage of being hard to customize, which matters if
-you want to shrink your binary size. If you do need the ability to
-customize your libraries, see later sections on how to do that.
-
-## Creating your own app
-
-If you'd like to add TensorFlow capabilities to your own app, do the following:
-
-- Create your own app or load your already-created app in Xcode.
-
-- Add a file named Podfile in the project root directory with the following content:
-
- target 'YourProjectName'
- pod 'TensorFlow-experimental'
-
-- Run `pod install` to download and install the `TensorFlow-experimental` pod.
-
-- Open `YourProjectName.xcworkspace` and add your code.
-
-- In your app's **Build Settings**, make sure to add `$(inherited)` to the
- **Other Linker Flags**, and **Header Search Paths** sections.
-
-## Running the Samples
-
-You'll need Xcode 7.3 or later to run our iOS samples.
-
-There are currently three examples: simple, benchmark, and camera. For now, you
-can download the sample code by cloning the main tensorflow repository (we are
-planning to make the samples available as a separate repository later).
-
-From the root of the tensorflow folder, download [Inception
-v1](https://storage.googleapis.com/download.tensorflow.org/models/inception5h.zip),
-and extract the label and graph files into the data folders inside the simple,
-benchmark, and camera examples using these steps:
-
- mkdir -p ~/graphs
- curl -o ~/graphs/inception5h.zip \
- https://storage.googleapis.com/download.tensorflow.org/models/inception5h.zip \
- && unzip ~/graphs/inception5h.zip -d ~/graphs/inception5h
- cp ~/graphs/inception5h/* tensorflow/examples/ios/benchmark/data/
- cp ~/graphs/inception5h/* tensorflow/examples/ios/camera/data/
- cp ~/graphs/inception5h/* tensorflow/examples/ios/simple/data/
-
-Change into one of the sample directories, download the
-[Tensorflow-experimental](https://cocoapods.org/pods/TensorFlow-experimental)
-pod, and open the Xcode workspace. Note that installing the pod can take a long
-time since it is big (~450MB). If you want to run the simple example, then:
-
- cd tensorflow/examples/ios/simple
- pod install
- open tf_simple_example.xcworkspace # note .xcworkspace, not .xcodeproj
- # this is created by pod install
-
-Run the simple app in the Xcode simulator. You should see a single-screen app
-with a **Run Model** button. Tap that, and you should see some debug output
-appear below indicating that the example Grace Hopper image in the data
-directory has been analyzed, with a military uniform recognized.
-
-Run the other samples using the same process. The camera example requires a real
-device connected. Once you build and run that, you should get a live camera view
-that you can point at objects to get real-time recognition results.
-
-### iOS Example details
-
-There are three demo applications for iOS, all defined in Xcode projects inside
-[tensorflow/examples/ios](https://www.tensorflow.org/code/tensorflow/examples/ios/).
-
-- **Simple**: This is a minimal example showing how to load and run a TensorFlow
- model in as few lines as possible. It just consists of a single view with a
-  button that executes the model loading and inference when it’s pressed.
-
-- **Camera**: This is very similar to the Android TF Classify demo. It loads
- Inception v3 and outputs its best label estimate for what’s in the live camera
- view. As with the Android version, you can train your own custom model using
- TensorFlow for Poets and drop it into this example with minimal code changes.
-
-- **Benchmark**: This is quite close to Simple, but it runs the graph repeatedly
-  and
- outputs similar statistics to the benchmark tool on Android.
-
-
-### Troubleshooting
-
-- Make sure you use the TensorFlow-experimental pod (and not TensorFlow).
-
-- The TensorFlow-experimental pod is currently ~450MB. It is so
-  big because we are bundling multiple platforms, and the pod includes all
- TensorFlow functionality (e.g. operations). The final app size after build is
- substantially smaller though (~25MB). Working with the complete pod is
-  convenient during development, but see the section below on how you can build your
- own custom TensorFlow library to reduce the size.
-
-## Building the TensorFlow iOS libraries from source
-
-While CocoaPods is the quickest and easiest way of getting started, you sometimes
-need more flexibility to determine which parts of TensorFlow should ship with
-your app. For such cases, you can build the iOS libraries from the
-sources. [This
-guide](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/ios#building-the-tensorflow-ios-libraries-from-source)
-contains detailed instructions on how to do that.
-
diff --git a/tensorflow/docs_src/mobile/leftnav_files b/tensorflow/docs_src/mobile/leftnav_files
deleted file mode 100644
index 585470d5f0..0000000000
--- a/tensorflow/docs_src/mobile/leftnav_files
+++ /dev/null
@@ -1,14 +0,0 @@
-index.md
-### TensorFlow Lite
-tflite/index.md
-tflite/devguide.md
-tflite/demo_android.md
-tflite/demo_ios.md
->>>
-### TensorFlow Mobile
-mobile_intro.md
-android_build.md
-ios_build.md
-linking_libs.md
-prepare_models.md
-optimizing.md
diff --git a/tensorflow/docs_src/mobile/linking_libs.md b/tensorflow/docs_src/mobile/linking_libs.md
deleted file mode 100644
index efef5dd0da..0000000000
--- a/tensorflow/docs_src/mobile/linking_libs.md
+++ /dev/null
@@ -1,243 +0,0 @@
-# Integrating TensorFlow libraries
-
-Once you have made some progress on a model that addresses the problem you’re
-trying to solve, it’s important to test it out inside your application
-immediately. There are often unexpected differences between your training data
-and what users actually encounter in the real world, and getting a clear picture
-of the gap as soon as possible improves the product experience.
-
-This page talks about how to integrate the TensorFlow libraries into your own
-mobile applications, once you have already successfully built and deployed the
-TensorFlow mobile demo apps.
-
-## Linking the library
-
-After you've managed to build the examples, you'll probably want to call
-TensorFlow from one of your existing applications. The easiest way to do
-this is to use the Pod installation steps described
-@{$mobile/ios_build#using_cocoapods$here}, but if you want to build TensorFlow
-from source (for example to customize which operators are included) you'll need
-to break out TensorFlow as a framework, include the right header files, and link
-against the built libraries and dependencies.
-
-### Android
-
-For Android, you just need to link in a Java library contained in a JAR file
-called `libandroid_tensorflow_inference_java.jar`. There are three ways to
-include this functionality in your program:
-
-1. Include the jcenter AAR which contains it, as in this
-    [example app](https://github.com/googlecodelabs/tensorflow-for-poets-2/blob/master/android/tfmobile/build.gradle#L59-L65).
-
-2. Download the nightly precompiled version from
-[ci.tensorflow.org](http://ci.tensorflow.org/view/Nightly/job/nightly-android/lastSuccessfulBuild/artifact/out/).
-
-3. Build the JAR file yourself using the instructions [in our Android GitHub repo](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/android).
-
-### iOS
-
-Pulling in the TensorFlow libraries on iOS is a little more complicated. Here is
-a checklist of what you’ll need to do to your iOS app:
-
-- Link against tensorflow/contrib/makefile/gen/lib/libtensorflow-core.a, usually
- by adding `-L/your/path/tensorflow/contrib/makefile/gen/lib/` and
- `-ltensorflow-core` to your linker flags.
-
-- Link against the generated protobuf libraries by adding
- `-L/your/path/tensorflow/contrib/makefile/gen/protobuf_ios/lib` and
- `-lprotobuf` and `-lprotobuf-lite` to your command line.
-
-- For the include paths, you need the root of your TensorFlow source folder as
- the first entry, followed by
- `tensorflow/contrib/makefile/downloads/protobuf/src`,
- `tensorflow/contrib/makefile/downloads`,
- `tensorflow/contrib/makefile/downloads/eigen`, and
- `tensorflow/contrib/makefile/gen/proto`.
-
-- Make sure your binary is built with `-force_load` (or the equivalent on your
- platform), aimed at the TensorFlow library to ensure that it’s linked
- correctly. More detail on why this is necessary can be found in the next
- section, [Global constructor magic](#global_constructor_magic). On Linux-like
- platforms, you’ll need different flags, more like
- `-Wl,--allow-multiple-definition -Wl,--whole-archive`.
-
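-Putting the checklist together, your extra "Other Linker Flags" might look
-roughly like this (the paths are illustrative placeholders, as above):
-
-    -L/your/path/tensorflow/contrib/makefile/gen/lib/ -ltensorflow-core
-    -L/your/path/tensorflow/contrib/makefile/gen/protobuf_ios/lib -lprotobuf -lprotobuf-lite
-    -force_load /your/path/tensorflow/contrib/makefile/gen/lib/libtensorflow-core.a
-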
-You’ll also need to link in the Accelerator framework, since this is used to
-speed up some of the operations.
-
-## Global constructor magic
-
-One of the subtlest problems you may run up against is the “No session factory
-registered for the given session options” error when trying to call TensorFlow
-from your own application. To understand why this is happening and how to fix
-it, you need to know a bit about the architecture of TensorFlow.
-
-The framework is designed to be very modular, with a thin core and a large
-number of specific objects that are independent and can be mixed and matched as
-needed. To enable this, the coding pattern in C++ had to let modules easily
-notify the framework about the services they offer, without requiring a central
-list that has to be updated separately from each implementation. It also had to
-allow separate libraries to add their own implementations without needing a
-recompile of the core.
-
-To achieve this capability, TensorFlow uses a registration pattern in a lot of
-places. In the code, it looks like this:
-
- class MulKernel : OpKernel {
- Status Compute(OpKernelContext* context) { … }
- };
-    REGISTER_KERNEL(MulKernel, "Mul");
-
-This would be in a standalone `.cc` file linked into your application, either
-as part of the main set of kernels or as a separate custom library. The magic
-part is that the `REGISTER_KERNEL()` macro is able to inform the core of
-TensorFlow that it has an implementation of the Mul operation, so that it can be
-called in any graphs that require it.
-
-From a programming point of view, this setup is very convenient. The
-implementation and registration code live in the same file, and adding new
-implementations is as simple as compiling and linking it in. The difficult part
-comes from the way that the `REGISTER_KERNEL()` macro is implemented. C++
-doesn’t offer a good mechanism for doing this sort of registration, so we have
-to resort to some tricky code. Under the hood, the macro is implemented so that
-it produces something like this:
-
- class RegisterMul {
- public:
- RegisterMul() {
-     global_kernel_registry()->Register("Mul", [](){
-       return new MulKernel();
- });
- }
- };
- RegisterMul g_register_mul;
-
-This sets up a class `RegisterMul` with a constructor that tells the global
-kernel registry what function to call when somebody asks it how to create a
-“Mul” kernel. Then there’s a global object of that class, and so the constructor
-should be called at the start of any program.
-
-While this may sound sensible, the unfortunate part is that the global object
-that’s defined is not used by any other code, so linkers not designed with this
-in mind will decide that it can be deleted. As a result, the constructor is
-never called, and the class is never registered. All sorts of modules use this
-pattern in TensorFlow, and it happens that `Session` implementations are the
-first to be looked for when the code is run, which is why it shows up as the
-characteristic error when this problem occurs.
-
-The solution is to force the linker to not strip any code from the library, even
-if it believes it’s unused. On iOS, this step can be accomplished with the
-`-force_load` flag, specifying a library path, and on Linux you need
-`--whole-archive`. These persuade the linker to not be as aggressive about
-stripping, and should retain the globals.
-
-The actual implementation of the various `REGISTER_*` macros is a bit more
-complicated in practice, but they all suffer the same underlying problem. If
-you’re interested in how they work, [op_kernel.h](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/framework/op_kernel.h#L1091)
-is a good place to start investigating.
-
-## Protobuf problems
-
-TensorFlow relies on
-the [Protocol Buffer](https://developers.google.com/protocol-buffers/) library,
-commonly known as protobuf. This library takes definitions of data structures
-and produces serialization and access code for them in a variety of
-languages. The tricky part is that this generated code needs to be linked
-against shared libraries for the exact same version of the framework that was
-used for the generator. This can be an issue when `protoc`, the tool used to
-generate the code, is from a different version of protobuf than the libraries in
-the standard linking and include paths. For example, you might be using a copy
-of `protoc` that was built locally in `~/projects/protobuf-3.0.1.a`, but you have
-libraries installed at `/usr/local/lib` and `/usr/local/include` that are from
-3.0.0.
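-
-As a quick sanity check (ordinary shell commands, nothing TensorFlow-specific),
-you can confirm which `protoc` you’re actually invoking and its version:
-
-    which protoc
-    protoc --version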
-
-The symptoms of this issue are errors during the compilation or linking phases
-with protobufs. Usually, the build tools take care of this, but if you’re using
-the makefile, make sure you’re building the protobuf library locally and using
-it, as shown in [this Makefile](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/makefile/Makefile#L18).
-
-Another situation that can cause problems is when protobuf headers and source
-files need to be generated as part of the build process. This process makes
-building more complex, since the first phase has to be a pass over the protobuf
-definitions to create all the needed code files, and only after that can you go
-ahead and do a build of the library code.
-
-### Multiple versions of protobufs in the same app
-
-Protobufs generate headers that are needed as part of the C++ interface to the
-overall TensorFlow library. This complicates using the library as a standalone
-framework.
-
-If your application is already using version 1 of the protocol buffers library,
-you may have trouble integrating TensorFlow because it requires version 2. If
-you just try to link both versions into the same binary, you’ll see linking
-errors because some of the symbols clash. To solve this particular problem, we
-have an experimental script at [rename_protobuf.sh](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/makefile/rename_protobuf.sh).
-
-You need to run this as part of the makefile build, after you’ve downloaded all
-the dependencies:
-
- tensorflow/contrib/makefile/download_dependencies.sh
- tensorflow/contrib/makefile/rename_protobuf.sh
-
-## Calling the TensorFlow API
-
-Once you have the framework available, you then need to call into it. The usual
-pattern is that you first load your model, which represents a preset set of
-numeric computations, and then you run inputs through that model (for example,
-images from a camera) and receive outputs (for example, predicted labels).
-
-On Android, we provide the Java Inference Library that is focused on just this
-use case, while on iOS and Raspberry Pi you call directly into the C++ API.
-
-### Android
-
-Here’s what a typical Inference Library sequence looks like on Android:
-
- // Load the model from disk.
- TensorFlowInferenceInterface inferenceInterface =
- new TensorFlowInferenceInterface(assetManager, modelFilename);
-
- // Copy the input data into TensorFlow.
- inferenceInterface.feed(inputName, floatValues, 1, inputSize, inputSize, 3);
-
- // Run the inference call.
- inferenceInterface.run(outputNames, logStats);
-
- // Copy the output Tensor back into the output array.
- inferenceInterface.fetch(outputName, outputs);
-
-You can find the source of this code in the [Android examples](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/android/src/org/tensorflow/demo/TensorFlowImageClassifier.java#L107).
-
-### iOS and Raspberry Pi
-
-Here’s the equivalent code for iOS and Raspberry Pi:
-
- // Load the model.
- PortableReadFileToProto(file_path, &tensorflow_graph);
-
- // Create a session from the model.
- tensorflow::Status s = session->Create(tensorflow_graph);
- if (!s.ok()) {
- LOG(FATAL) << "Could not create TensorFlow Graph: " << s;
- }
-
- // Run the model.
- std::string input_layer = "input";
- std::string output_layer = "output";
- std::vector<tensorflow::Tensor> outputs;
- tensorflow::Status run_status = session->Run({{input_layer, image_tensor}},
- {output_layer}, {}, &outputs);
- if (!run_status.ok()) {
- LOG(FATAL) << "Running model failed: " << run_status;
- }
-
- // Access the output data.
- tensorflow::Tensor* output = &outputs[0];
-
-This is all based on the
-[iOS sample code](https://www.tensorflow.org/code/tensorflow/examples/ios/simple/RunModelViewController.mm),
-but there’s nothing iOS-specific; the same code should be usable on any platform
-that supports C++.
-
-You can also find specific examples for Raspberry Pi
-[here](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/pi_examples/label_image/label_image.cc).
diff --git a/tensorflow/docs_src/mobile/mobile_intro.md b/tensorflow/docs_src/mobile/mobile_intro.md
deleted file mode 100644
index 241f01d460..0000000000
--- a/tensorflow/docs_src/mobile/mobile_intro.md
+++ /dev/null
@@ -1,247 +0,0 @@
-# Introduction to TensorFlow Mobile
-
-TensorFlow was designed from the ground up to be a good deep learning solution
-for mobile platforms like Android and iOS. This mobile guide should help you
-understand how machine learning can work on mobile platforms and how to
-integrate TensorFlow into your mobile apps effectively and efficiently.
-
-## About this Guide
-
-This guide is aimed at developers who have a TensorFlow model that’s
-successfully working in a desktop environment, who want to integrate it into
-a mobile application, and cannot use TensorFlow Lite. Here are the
-main challenges you’ll face during that process:
-
-- Understanding how to use TensorFlow for mobile.
-- Building TensorFlow for your platform.
-- Integrating the TensorFlow library into your application.
-- Preparing your model file for mobile deployment.
-- Optimizing for latency, RAM usage, model file size, and binary size.
-
-## Common use cases for mobile machine learning
-
-**Why run TensorFlow on mobile?**
-
-Traditionally, deep learning has been associated with data centers and giant
-clusters of high-powered GPU machines. However, it can be very expensive and
-time-consuming to send all of the data a device has access to across a network
-connection. Running on mobile makes it possible to deliver very interactive
-applications in a way that’s not possible when you have to wait for a network
-round trip.
-
-Here are some common use cases for on-device deep learning:
-
-### Speech Recognition
-
-There are a lot of interesting applications that can be built with a
-speech-driven interface, and many of these require on-device processing. Most of
-the time a user isn’t giving commands, and so streaming audio continuously to a
-remote server would be a waste of bandwidth, since it would mostly be silence or
-background noises. To solve this problem it’s common to have a small neural
-network running on-device @{$tutorials/audio_recognition$listening out for a particular keyword}.
-Once that keyword has been spotted, the rest of the
-conversation can be transmitted over to the server for further processing if
-more computing power is needed.
-
-### Image Recognition
-
-It can be very useful for a mobile app to be able to make sense of a camera
-image. If your users are taking photos, recognizing what’s in them can help your
-camera apps apply appropriate filters, or label the photos so they’re easily
-findable. It’s important for embedded applications too, since you can use image
-sensors to detect all sorts of interesting conditions, whether it’s spotting
-endangered animals in the wild
-or
-[reporting how late your train is running](https://svds.com/tensorflow-image-recognition-raspberry-pi/).
-
-TensorFlow comes with several examples of recognizing the types of objects
-inside images along with a variety of different pre-trained models, and they can
-all be run on mobile devices. You can try out
-our
-[TensorFlow for Poets](https://codelabs.developers.google.com/codelabs/tensorflow-for-poets/index.html#0) and
-[TensorFlow for Poets 2: Optimize for Mobile](https://codelabs.developers.google.com/codelabs/tensorflow-for-poets-2/index.html#0) codelabs to
-see how to take a pretrained model and run some very fast and lightweight
-training to teach it to recognize specific objects, and then optimize it to
-run on mobile.
-
-### Object Localization
-
-Sometimes it’s important to know where objects are in an image as well as what
-they are. There are lots of augmented reality use cases that could benefit a
-mobile app, such as guiding users to the right component when offering them
-help fixing their wireless network or providing informative overlays on top of
-landscape features. Embedded applications often need to count objects that are
-passing by them, whether it’s pests in a field of crops, or people, cars and
-bikes going past a street lamp.
-
-TensorFlow offers a pretrained model for drawing bounding boxes around people
-detected in images, together with tracking code to follow them over time. The
-tracking is especially important for applications where you’re trying to count
-how many objects are present over time, since it gives you a good idea when a
-new object enters or leaves the scene. We have some sample code for this
-available for Android [on
-GitHub](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/android),
-and also a [more general object detection
-model](https://github.com/tensorflow/models/tree/master/research/object_detection/README.md)
-available as well.
-
-### Gesture Recognition
-
-It can be useful to be able to control applications with hand or other
-gestures, either recognized from images or through analyzing accelerometer
-sensor data. Creating those models is beyond the scope of this guide, but
-TensorFlow is an effective way of deploying them.
-
-### Optical Character Recognition
-
-Google Translate’s live camera view is a great example of how effective
-interactive on-device detection of text can be.
-
-<div class="video-wrapper">
- <iframe class="devsite-embedded-youtube-video" data-video-id="06olHmcJjS0"
- data-autohide="1" data-showinfo="0" frameborder="0" allowfullscreen>
- </iframe>
-</div>
-
-There are multiple steps involved in recognizing text in images. You first have
-to identify the areas where the text is present, which is a variation on the
-object localization problem, and can be solved with similar techniques. Once you
-have an area of text, you then need to interpret it as letters, and then use a
-language model to help guess what words they represent. The simplest way to
-estimate what letters are present is to segment the line of text into individual
-letters, and then apply a simple neural network to the bounding box of each. You
-can get good results with the kind of models used for MNIST, which you can find
-in TensorFlow’s tutorials, though you may want a higher-resolution input. A
-more advanced alternative is to use an LSTM model to process a whole line of
-text at once, with the model itself handling the segmentation into different
-characters.
-
-### Translation
-
-Translating from one language to another quickly and accurately, even if you
-don’t have a network connection, is an important use case. Deep networks are
-very effective at this sort of task, and you can find descriptions of a lot of
-different models in the literature. Often these are sequence-to-sequence
-recurrent models where you’re able to run a single graph to do the whole
-translation, without needing to run separate parsing stages.
-
-### Text Classification
-
-If you want to suggest relevant prompts to users based on what they’re typing or
-reading, it can be very useful to understand the meaning of the text. This is
-where text classification comes in. Text classification is an umbrella term
-that covers everything from sentiment analysis to topic discovery. You’re likely
-to have your own categories or labels that you want to apply, so the best place
-to start is with an example
-like
-[Skip-Thoughts](https://github.com/tensorflow/models/tree/master/research/skip_thoughts/),
-and then train on your own examples.
-
-### Voice Synthesis
-
-A synthesized voice can be a great way of giving users feedback or aiding
-accessibility, and recent advances such as
-[WaveNet](https://deepmind.com/blog/wavenet-generative-model-raw-audio/) show
-that deep learning can offer very natural-sounding speech.
-
-## Mobile machine learning and the cloud
-
-These examples of use cases give an idea of how on-device networks can
-complement cloud services. Cloud has a great deal of computing power in a
-controlled environment, but running on devices can offer higher interactivity.
-In situations where the cloud is unavailable, or your cloud capacity is limited,
-you can provide an offline experience, or reduce cloud workload by processing
-easy cases on device.
-
-Doing on-device computation can also signal when it's time to switch to working
-on the cloud. A good example of this is hotword detection in speech. Since
-devices are able to constantly listen out for the keywords, recognizing one then
-triggers a lot of traffic to cloud-based speech recognition. Without
-the on-device component, the whole application wouldn’t be feasible, and this
-pattern exists across several other applications as well. Recognizing that some
-sensor input is interesting enough for further processing makes a lot of
-interesting products possible.
-
-## What hardware and software should you have?
-
-TensorFlow runs on Ubuntu Linux, Windows 10, and OS X. For a list of all
-supported operating systems and instructions to install TensorFlow, see
-@{$install$Installing TensorFlow}.
-
-Note that some of the sample code we provide for mobile TensorFlow requires you
-to compile TensorFlow from source, so you’ll need more than just `pip install`
-to work through all the sample code.
-
-To try out the mobile examples, you’ll need a device set up for development,
-using
-either [Android Studio](https://developer.android.com/studio/install.html),
-or [XCode](https://developer.apple.com/xcode/) if you're developing for iOS.
-
-## What should you do before you get started?
-
-Before thinking about how to get your solution on mobile:
-
-1. Determine whether your problem is solvable by mobile machine learning
-2. Create a labeled dataset to define your problem
-3. Pick an effective model for the problem
-
-We'll discuss these in more detail below.
-
-### Is your problem solvable by mobile machine learning?
-
-Once you have an idea of the problem you want to solve, you need to make a plan
-of how to build your solution. The most important first step is making sure that
-your problem is actually solvable, and the best way to do that is to mock it up
-using humans in the loop.
-
-For example, if you want to drive a robot toy car using voice commands, try
-recording some audio from the device and listen back to it to see if you can
-make sense of what’s being said. Often you’ll find there are problems in the
-capture process, such as the motor drowning out speech or not being able to hear
-at a distance, and you should tackle these problems before investing in the
-modeling process.
-
-Another example would be giving photos taken from your app to people to see if
-they can classify what’s in them in the way you’re looking for. If they can’t do
-that (for example, trying to estimate calories in food from photos may be
-impossible because all white soups look the same), then you’ll need to redesign
-your experience to cope with that. A good rule of thumb is that if a human can’t
-handle the task then it will be difficult to train a computer to do better.
-
-### Create a labeled dataset
-
-After you’ve solved any fundamental issues with your use case, you need to
-create a labeled dataset to define what problem you’re trying to solve. This
-step is extremely important, more than picking which model to use. You want it
-to be as representative as possible of your actual use case, since the model
-will only be effective at the task you teach it. It’s also worth investing in
-tools to make labeling the data as efficient and accurate as possible. For
-example, if you’re able to switch from having to click a button on a web
-interface to simple keyboard shortcuts, you may be able to speed up the
-generation process a lot. You should also start by doing the initial labeling
-yourself, so you can learn about the difficulties and likely errors, and
-possibly change your labeling or data capture process to avoid them. Once you
-and your team are able to consistently label examples (that is, once you
-generally agree on the same labels for most examples), you can then try and
-capture your knowledge in a manual and teach external raters how to run the same
-process.
-
-### Pick an effective model
-
-The next step is to pick an effective model to use. You might be able to avoid
-training a model from scratch if someone else has already implemented a model
-similar to what you need; we have a repository of models implemented in
-TensorFlow [on GitHub](https://github.com/tensorflow/models) that you can look
-through. Lean towards the simplest model you can find, and try to get started as
-soon as you have even a small amount of labeled data, since you’ll get the best
-results when you’re able to iterate quickly. The shorter the time it takes to
-try training a model and running it in its real application, the better overall
-results you’ll see. It’s common for an algorithm to get great training accuracy
-numbers but then fail to be useful within a real application because there’s a
-mismatch between the dataset and real usage. Prototype end-to-end usage as soon
-as possible to create a consistent user experience.
-
-## Next Steps
-
-We suggest you get started by building one of our demos for
-@{$mobile/android_build$Android} or @{$mobile/ios_build$iOS}.
diff --git a/tensorflow/docs_src/mobile/optimizing.md b/tensorflow/docs_src/mobile/optimizing.md
deleted file mode 100644
index 778e4d3a62..0000000000
--- a/tensorflow/docs_src/mobile/optimizing.md
+++ /dev/null
@@ -1,499 +0,0 @@
-# Optimizing for mobile
-
-There are some special issues that you have to deal with when you’re trying to
-ship on mobile or embedded devices, and you’ll need to think about these as
-you’re developing your model.
-
-These issues are:
-
-- Model and Binary Size
-- App speed and model loading speed
-- Performance and threading
-
-We'll discuss a few of these below.
-
-## What are the minimum device requirements for TensorFlow?
-
-You need at least one megabyte of program memory and several megabytes of RAM to
-run the base TensorFlow runtime, so it’s not suitable for DSPs or
-microcontrollers. Other than those, the biggest constraint is usually the
-calculation speed of the device, and whether you can run the model you need for
-your application with a low enough latency. You can use the benchmarking tools
-in [How to Profile your Model](#how_to_profile_your_model) to get an idea of how
-many FLOPs are required for a model, and then use that to make rule-of-thumb
-estimates of how fast they will run on different devices. For example, a modern
-smartphone might be able to run 10 GFLOPs per second, so the best you could hope
-for from a 5 GFLOP model is two frames per second, though you may do worse
-depending on what the exact computation patterns are.
-
-This model dependence means that it’s possible to run TensorFlow even on very
-old or constrained phones, as long as you optimize your network to fit within
-the latency budget and possibly within limited RAM too. For memory usage, you
-mostly need to make sure that the intermediate buffers that TensorFlow creates
-aren’t too large, which you can examine in the benchmark output too.
-
-## Speed
-
-One of the highest priorities of most model deployments is figuring out how to
-run the inference fast enough to give a good user experience. The first place to
-start is by looking at the total number of floating point operations that are
-required to execute the graph. You can get a very rough estimate of this by
-using the `benchmark_model` tool:
-
- bazel build -c opt tensorflow/tools/benchmark:benchmark_model && \
- bazel-bin/tensorflow/tools/benchmark/benchmark_model \
- --graph=/tmp/inception_graph.pb --input_layer="Mul:0" \
- --input_layer_shape="1,299,299,3" --input_layer_type="float" \
- --output_layer="softmax:0" --show_run_order=false --show_time=false \
- --show_memory=false --show_summary=true --show_flops=true --logtostderr
-
-This should show you an estimate of how many operations are needed to run the
-graph. You can then use that information to figure out how feasible your model
-is to run on the devices you’re targeting. For example, a high-end phone from
-2016 might be able to do 20 billion FLOPs per second, so the best speed you
-could hope for from a model that requires 10 billion FLOPs is around 500ms. On a
-device like the Raspberry Pi 3 that can do about 5 billion FLOPs, you may only
-get one inference every two seconds.
-
-Having this estimate helps you plan for what you’ll be able to realistically
-achieve on a device. If the model is using too many ops, then there are a lot of
-opportunities to optimize the architecture to reduce that number.
-
-Advanced techniques include [SqueezeNet](https://arxiv.org/abs/1602.07360)
-and [MobileNet](https://arxiv.org/abs/1704.04861), which are architectures
-designed to produce models for mobile -- lean and fast but with a small accuracy
-cost. You can also just look at alternative models, even older ones, which may
-be smaller. For example, Inception v1 only has around 7 million parameters,
-compared to Inception v3’s 24 million, and requires only 3 billion FLOPs rather
-than 9 billion for v3.
-
-## Model Size
-
-Models that run on a device need to be stored somewhere on the device, and very
-large neural networks can be hundreds of megabytes. Most users are reluctant to
-download very large app bundles from app stores, so you want to make your model
-as small as possible. Furthermore, smaller neural networks can be paged in and
-out of a mobile device's memory faster.
-
-To understand how large your network will be on disk, start by looking at the
-size on disk of your `GraphDef` file after you’ve run `freeze_graph` and
-`strip_unused_nodes` on it (see @{$mobile/prepare_models$Preparing models} for
-more details on these tools), since then it should only contain
-inference-related nodes. To double-check that your results are as expected, run
-the `summarize_graph` tool to see how many parameters are in constants:
-
- bazel build tensorflow/tools/graph_transforms:summarize_graph && \
- bazel-bin/tensorflow/tools/graph_transforms/summarize_graph \
- --in_graph=/tmp/tensorflow_inception_graph.pb
-
-That command should give you output that looks something like this:
-
- No inputs spotted.
- Found 1 possible outputs: (name=softmax, op=Softmax)
- Found 23885411 (23.89M) const parameters, 0 (0) variable parameters,
- and 99 control_edges
- Op types used: 489 Const, 99 CheckNumerics, 99 Identity, 94
- BatchNormWithGlobalNormalization, 94 Conv2D, 94 Relu, 11 Concat, 9 AvgPool,
- 5 MaxPool, 1 Sub, 1 Softmax, 1 ResizeBilinear, 1 Reshape, 1 Mul, 1 MatMul,
- 1 ExpandDims, 1 DecodeJpeg, 1 Cast, 1 BiasAdd
-
-The important part for our current purposes is the number of const
-parameters. In most models these will be stored as 32-bit floats to start, so if
-you multiply the number of const parameters by four, you should get something
-that’s close to the size of the file on disk. You can often get away with only
-eight bits per parameter with very little loss of accuracy in the final result,
-so if your file size is too large you can try using
-@{$performance/quantization$quantize_weights} to shrink them down:
-
- bazel build tensorflow/tools/graph_transforms:transform_graph && \
- bazel-bin/tensorflow/tools/graph_transforms/transform_graph \
- --in_graph=/tmp/tensorflow_inception_optimized.pb \
- --out_graph=/tmp/tensorflow_inception_quantized.pb \
- --inputs='Mul:0' --outputs='softmax:0' --transforms='quantize_weights'
-
-If you look at the resulting file size, you should see that it’s about a quarter
-of the original at 23MB.
-
-Another useful transform is `round_weights`. It doesn't make the file itself
-smaller, but it makes the file compressible to about the same size as when
-`quantize_weights` is used. This is particularly useful for mobile development,
-since app bundles are compressed before they’re downloaded by consumers.
-
-The original file does not compress well with standard algorithms, because the
-bit patterns of even very similar numbers can be very different. The
-`round_weights` transform keeps the weight parameters stored as floats, but
-rounds them to a set number of step values. This means there are a lot more
-repeated byte patterns in the stored model, and so compression can often bring
-the size down dramatically, in many cases to near the size the weights would
-take if they were stored as eight bits.
-
-Another advantage of `round_weights` is that the framework doesn’t have to
-allocate a temporary buffer to unpack the parameters into, as we have to when
-we just use `quantize_weights`. This saves a little bit of latency (though the
-results should be cached so it’s only costly on the first run) and makes it
-possible to use memory mapping, as described later.
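-
-As a sketch, applying `round_weights` uses the same `transform_graph` invocation
-as the `quantize_weights` example above. The `num_steps` argument here is an
-assumption about how the rounding buckets are specified, so check the Graph
-Transform Tool documentation for the exact parameters:
-
-    bazel build tensorflow/tools/graph_transforms:transform_graph && \
-    bazel-bin/tensorflow/tools/graph_transforms/transform_graph \
-    --in_graph=/tmp/tensorflow_inception_optimized.pb \
-    --out_graph=/tmp/tensorflow_inception_rounded.pb \
-    --inputs='Mul:0' --outputs='softmax:0' \
-    --transforms='round_weights(num_steps=256)'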
-
-## Binary Size
-
-One of the biggest differences between mobile and server development is the
-importance of binary size. On desktop machines it’s not unusual to have
-executables that are hundreds of megabytes on disk, but for mobile and embedded
-apps it’s vital to keep the binary as small as possible so that user downloads
-are easy. As mentioned above, TensorFlow only includes a subset of op
-implementations by default, but this still results in a 12 MB final
-executable. To reduce this, you can set up the library to only include the
-implementations of the ops that you actually need, based on automatically
-analyzing your model. To use it:
-
-- Run `tools/print_required_ops/print_selective_registration_header.py` on your
- model to produce a header file that only enables the ops it uses.
-
-- Place the `ops_to_register.h` file somewhere that the compiler can find
- it. This can be in the root of your TensorFlow source folder.
-
-- Build TensorFlow with `SELECTIVE_REGISTRATION` defined, for example by passing
-  in `--copt="-DSELECTIVE_REGISTRATION"` to your Bazel build command.
-
-This process recompiles the library so that only the needed ops and types are
-included, which can dramatically reduce the executable size. For example, with
-Inception v3, the new size is only 1.5MB.
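-
-Putting the steps above together, the flow might look something like the
-following sketch. The script's exact flags are an assumption here, so check its
-`--help` output before relying on them:
-
-    python tools/print_required_ops/print_selective_registration_header.py \
-      --graphs=/tmp/frozen_inception_graph.pb > ops_to_register.h
-    bazel build -c opt --copt="-DSELECTIVE_REGISTRATION" \
-      tensorflow/tools/benchmark:benchmark_model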
-
-## How to Profile your Model
-
-Once you have an idea of what your device's peak performance range is, it’s
-worth looking at its actual current performance. Using a standalone TensorFlow
-benchmark, rather than running it inside a larger app, helps isolate just the
-TensorFlow contribution to the latency. The
-[tensorflow/tools/benchmark](https://www.tensorflow.org/code/tensorflow/tools/benchmark/)
-tool is designed to help you do this. To run it on Inception v3 on your desktop
-machine, build and run the benchmark like this:
-
- bazel build -c opt tensorflow/tools/benchmark:benchmark_model && \
- bazel-bin/tensorflow/tools/benchmark/benchmark_model \
- --graph=/tmp/tensorflow_inception_graph.pb --input_layer="Mul" \
- --input_layer_shape="1,299,299,3" --input_layer_type="float" \
- --output_layer="softmax:0" --show_run_order=false --show_time=false \
- --show_memory=false --show_summary=true --show_flops=true --logtostderr
-
-You should see output that looks something like this:
-
-<pre>
-============================== Top by Computation Time ==============================
-[node type] [start] [first] [avg ms] [%] [cdf%] [mem KB] [Name]
-Conv2D 22.859 14.212 13.700 4.972% 4.972% 3871.488 conv_4/Conv2D
-Conv2D 8.116 8.964 11.315 4.106% 9.078% 5531.904 conv_2/Conv2D
-Conv2D 62.066 16.504 7.274 2.640% 11.717% 443.904 mixed_3/conv/Conv2D
-Conv2D 2.530 6.226 4.939 1.792% 13.510% 2765.952 conv_1/Conv2D
-Conv2D 55.585 4.605 4.665 1.693% 15.203% 313.600 mixed_2/tower/conv_1/Conv2D
-Conv2D 127.114 5.469 4.630 1.680% 16.883% 81.920 mixed_10/conv/Conv2D
-Conv2D 47.391 6.994 4.588 1.665% 18.548% 313.600 mixed_1/tower/conv_1/Conv2D
-Conv2D 39.463 7.878 4.336 1.574% 20.122% 313.600 mixed/tower/conv_1/Conv2D
-Conv2D 127.113 4.192 3.894 1.413% 21.535% 114.688 mixed_10/tower_1/conv/Conv2D
-Conv2D 70.188 5.205 3.626 1.316% 22.850% 221.952 mixed_4/conv/Conv2D
-
-============================== Summary by node type ==============================
-[Node type] [count] [avg ms] [avg %] [cdf %] [mem KB]
-Conv2D 94 244.899 88.952% 88.952% 35869.953
-BiasAdd 95 9.664 3.510% 92.462% 35873.984
-AvgPool 9 7.990 2.902% 95.364% 7493.504
-Relu 94 5.727 2.080% 97.444% 35869.953
-MaxPool 5 3.485 1.266% 98.710% 3358.848
-Const 192 1.727 0.627% 99.337% 0.000
-Concat 11 1.081 0.393% 99.730% 9892.096
-MatMul 1 0.665 0.242% 99.971% 4.032
-Softmax 1 0.040 0.015% 99.986% 4.032
-<> 1 0.032 0.012% 99.997% 0.000
-Reshape 1 0.007 0.003% 100.000% 0.000
-
-Timings (microseconds): count=50 first=330849 curr=274803 min=232354 max=415352 avg=275563 std=44193
-Memory (bytes): count=50 curr=128366400(all same)
-514 nodes defined 504 nodes observed
-</pre>
-
-This is the summary view, which is enabled by the `--show_summary` flag. To
-interpret it, the first table is a list of the nodes that took the most time, in
-order by how long they took. From left to right, the columns are:
-
-- Node type, what kind of operation this was.
-
-- Start time of the op, showing where it falls in the sequence of operations.
-
-- First time in milliseconds. This is how long the operation took on the first
- run of the benchmark, since by default 20 runs are executed to get more
- reliable statistics. The first time is useful to spot which ops are doing
- expensive calculations on the first run, and then caching the results.
-
-- Average time for the operation across all runs, in milliseconds.
-
-- What percentage of the total time for one run the op took. This is useful to
- understand where the hotspots are.
-
-- The cumulative total time of this and the previous ops in the table. This is
- handy for understanding what the distribution of work is across the layers, to
- see if just a few of the nodes are taking up most of the time.
-
-- The amount of memory consumed by outputs of this type of op.
-
-- Name of the node.
-
-The second table is similar, but instead of breaking down the timings by
-particular named nodes, it groups them by the kind of op. This is very useful to
-understand which op implementations you might want to optimize or eliminate from
-your graph. The table is arranged with the most costly operations at the start,
-and only shows the top ten entries, with a placeholder for other nodes. The
-columns from left to right are:
-
-- Type of the nodes being analyzed.
-
-- Accumulated average time taken by all nodes of this type, in milliseconds.
-
-- What percentage of the total time was taken by this type of operation.
-
-- Cumulative time taken by this and op types higher in the table, so you can
- understand the distribution of the workload.
-
-- How much memory the outputs of this op type took up.
-
-Both of these tables are set up so that you can easily copy and paste their
-results into spreadsheet documents, since they are output with tabs as
-separators between the columns. The summary by node type can be the most useful
-when looking for optimization opportunities, since it’s a pointer to the code
-that’s taking the most time. In this case, you can see that the Conv2D ops are
-almost 90% of the execution time. This is a sign that the graph is pretty
-optimal, since convolutions and matrix multiplies are expected to be the bulk of
-a neural network’s computing workload.
-
-As a rule of thumb, it’s more worrying if you see a lot of other operations
-taking up more than a small fraction of the time. For neural networks, the ops
-that don’t involve large matrix multiplications should usually be dwarfed by the
-ones that do, so if you see a lot of time going into those it’s a sign that
-either your network is non-optimally constructed, or the code implementing those
-ops is not as optimized as it could
-be. [Performance bug reports](https://github.com/tensorflow/tensorflow/issues) or
-patches are always welcome if you do encounter this situation, especially if
-they include an attached model exhibiting this behavior and the command line
-used to run the benchmark tool on it.
-
-The run above was on your desktop, but the tool also works on Android, which is
-where it’s most useful for mobile development. Here’s an example command line to
-run it on a 64-bit ARM device:
-
- bazel build -c opt --config=android_arm64 \
- tensorflow/tools/benchmark:benchmark_model
- adb push bazel-bin/tensorflow/tools/benchmark/benchmark_model /data/local/tmp
- adb push /tmp/tensorflow_inception_graph.pb /data/local/tmp/
- adb shell '/data/local/tmp/benchmark_model \
- --graph=/data/local/tmp/tensorflow_inception_graph.pb --input_layer="Mul" \
- --input_layer_shape="1,299,299,3" --input_layer_type="float" \
- --output_layer="softmax:0" --show_run_order=false --show_time=false \
- --show_memory=false --show_summary=true'
-
-You can interpret the results in exactly the same way as the desktop version
-above. If you have any trouble figuring out what the right input and output
-names and types are, take a look at the @{$mobile/prepare_models$Preparing models}
-page for details about detecting these for your model, and look at the
-`summarize_graph` tool which may give you
-helpful information.
-
-There isn’t good support for command line tools on iOS, so instead there’s a
-separate example at
-[tensorflow/examples/ios/benchmark](https://www.tensorflow.org/code/tensorflow/examples/ios/benchmark)
-that packages the same functionality inside a standalone app. This outputs the
-statistics to both the screen of the device and the debug log. If you want
-on-screen statistics for the Android example apps, you can turn them on by
-pressing the volume-up button.
-
-## Profiling within your own app
-
-The output you see from the benchmark tool is generated from modules that are
-included as part of the standard TensorFlow runtime, which means you have access
-to them within your own applications too. You can see an example of how to do
-that [here](https://www.tensorflow.org/code/tensorflow/examples/ios/benchmark/BenchmarkViewController.mm?l=139).
-
-The basic steps are:
-
-1. Create a StatSummarizer object:
-
- tensorflow::StatSummarizer stat_summarizer(tensorflow_graph);
-
-2. Set up the options:
-
- tensorflow::RunOptions run_options;
- run_options.set_trace_level(tensorflow::RunOptions::FULL_TRACE);
- tensorflow::RunMetadata run_metadata;
-
-3. Run the graph:
-
-        std::vector<tensorflow::Tensor> output_layers;
-        run_status = session->Run(run_options, inputs, output_layer_names, {},
-                                  &output_layers, &run_metadata);
-
-4. Calculate the results and print them out:
-
- assert(run_metadata.has_step_stats());
- const tensorflow::StepStats& step_stats = run_metadata.step_stats();
-        stat_summarizer.ProcessStepStats(step_stats);
-        stat_summarizer.PrintStepStats();
-
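-Putting those four steps together, a minimal end-to-end sketch looks like this.
-The tensor names are placeholders for your own model's inputs and outputs, and
-`session`, `tensorflow_graph`, and `input_tensor` are assumed to be set up
-already:
-
-    tensorflow::StatSummarizer stat_summarizer(tensorflow_graph);
-
-    tensorflow::RunOptions run_options;
-    run_options.set_trace_level(tensorflow::RunOptions::FULL_TRACE);
-    tensorflow::RunMetadata run_metadata;
-
-    // Trace a single run of the graph.
-    std::vector<tensorflow::Tensor> outputs;
-    tensorflow::Status run_status =
-        session->Run(run_options, {{"input_name", input_tensor}},
-                     {"output_name"}, {}, &outputs, &run_metadata);
-
-    if (run_status.ok() && run_metadata.has_step_stats()) {
-      // Aggregate the per-node timings and print the summary tables.
-      stat_summarizer.ProcessStepStats(run_metadata.step_stats());
-      stat_summarizer.PrintStepStats();
-    }
-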
-## Visualizing Models
-
-The most effective way to speed up your code is by altering your model so it
-does less work. To do that, you need to understand what your model is doing, and
-visualizing it is a good first step. To get a high-level overview of your graph,
-use [TensorBoard](https://github.com/tensorflow/tensorboard).
-
-## Threading
-
-The desktop version of TensorFlow has a sophisticated threading model, and will
-try to run multiple operations in parallel if it can. In our terminology this is
-called “inter-op parallelism” (though to avoid confusion with “intra-op”, you
-could think of it as “between-op” instead), and can be set by specifying
-`inter_op_parallelism_threads` in the session options.
-
-By default, mobile devices run operations serially; that is,
-`inter_op_parallelism_threads` is set to 1. Mobile processors usually have few
-cores and a small cache, so running multiple operations accessing disjoint parts
-of memory usually doesn’t help performance. “Intra-op parallelism” (or
-“within-op”) can be very helpful though, especially for computation-bound
-operations like convolutions where different threads can feed off the same small
-set of memory.
-
-On mobile, how many threads an op will use is set to the number of cores by
-default, or 2 when the number of cores can't be determined. You can override the
-default number of threads that ops are using by setting
-`intra_op_parallelism_threads` in the session options. It’s a good idea to
-reduce the default if your app has its own threads doing heavy processing, so
-that they don’t interfere with each other.
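-
-As a minimal sketch, here's how both settings could be applied when creating a
-session (the values are illustrative, not recommendations):
-
-    tensorflow::SessionOptions options;
-    // Run ops one at a time, which is usually the right choice on mobile.
-    options.config.set_inter_op_parallelism_threads(1);
-    // Cap the threads each op may use so TensorFlow doesn't fight with the
-    // app's own worker threads.
-    options.config.set_intra_op_parallelism_threads(2);
-
-    tensorflow::Session* session_pointer = nullptr;
-    tensorflow::Status session_status =
-        tensorflow::NewSession(options, &session_pointer);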
-
-To see more details on session options, look at [ConfigProto](https://www.tensorflow.org/code/tensorflow/core/protobuf/config.proto).
-
-## Retrain with mobile data
-
-The biggest cause of accuracy problems when running models on mobile apps is
-unrepresentative training data. For example, most of the ImageNet photos are
-well-framed so that the object is in the center of the picture, well-lit, and
-shot with a normal lens. Photos from mobile devices are often poorly framed,
-badly lit, and can have fisheye distortions, especially selfies.
-
-The solution is to expand your training set with data actually captured from
-your application. This step can involve extra work, since you’ll have to label
-the examples yourself, but even if you just use it to expand your original
-training data, it can improve the training set dramatically. Improving the
-training set this way, and fixing other quality issues like duplicates or badly
-labeled examples, is the single best way to improve accuracy. It’s usually a
-bigger help than altering your model architecture or using different techniques.
-
-## Reducing model loading time and/or memory footprint
-
-Most operating systems allow you to load a file using memory mapping, rather
-than going through the usual I/O APIs. Instead of allocating an area of memory
-on the heap and then copying bytes from disk into it, you simply tell the
-operating system to make the entire contents of a file appear directly in
-memory. This has several advantages:
-
-* Speeds loading
-* Reduces paging (increases performance)
-* Does not count towards RAM budget for your app
-
-TensorFlow has support for memory mapping the weights that form the bulk of most
-model files. Because of limitations in the `ProtoBuf` serialization format, we
-have to make a few changes to our model loading and processing code. The
-way memory mapping works is that we have a single file where the first part is a
-normal `GraphDef` serialized into the protocol buffer wire format, but then the
-weights are appended in a form that can be directly mapped.
-
-To create this file, run the
-`tensorflow/contrib/util:convert_graphdef_memmapped_format` tool. This takes in
-a `GraphDef` file that’s been run through `freeze_graph` and converts it to the
-format that has the weights appended at the end. Since that file’s no longer a
-standard `GraphDef` protobuf, you then need to make some changes to the loading
-code. You can see an example of this in the
-[iOS Camera demo app](https://www.tensorflow.org/code/tensorflow/examples/ios/camera/tensorflow_utils.mm?l=147),
-in the `LoadMemoryMappedModel()` function.
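-
-Building and running the conversion tool looks something like the sketch below.
-The flag names are assumptions based on the other graph tools, so check the
-tool's help output:
-
-    bazel build tensorflow/contrib/util:convert_graphdef_memmapped_format && \
-    bazel-bin/tensorflow/contrib/util/convert_graphdef_memmapped_format \
-    --in_graph=/tmp/tensorflow_inception_graph.pb \
-    --out_graph=/tmp/tensorflow_inception_memmapped.pb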
-
-The same code (with the Objective C calls for getting the filenames substituted)
-can be used on other platforms too. Because we’re using memory mapping, we need
-to start by creating a special TensorFlow environment object that’s set up with
-the file we’ll be using:
-
- std::unique_ptr<tensorflow::MemmappedEnv> memmapped_env;
-    memmapped_env.reset(
-        new tensorflow::MemmappedEnv(tensorflow::Env::Default()));
-    tensorflow::Status mmap_status =
-        memmapped_env->InitializeFromFile(file_path);
-
-You then need to pass in this environment to subsequent calls, like this one for
-loading the graph:
-
- tensorflow::GraphDef tensorflow_graph;
- tensorflow::Status load_graph_status = ReadBinaryProto(
-        memmapped_env.get(),
- tensorflow::MemmappedFileSystem::kMemmappedPackageDefaultGraphDef,
- &tensorflow_graph);
-
-You also need to create the session with a pointer to the environment you’ve
-created:
-
- tensorflow::SessionOptions options;
- options.config.mutable_graph_options()
- ->mutable_optimizer_options()
- ->set_opt_level(::tensorflow::OptimizerOptions::L0);
-    options.env = memmapped_env.get();
-
- tensorflow::Session* session_pointer = nullptr;
- tensorflow::Status session_status =
- tensorflow::NewSession(options, &session_pointer);
-
-One thing to notice here is that we’re also disabling automatic optimizations,
-since in some cases these will fold constant sub-trees, and so create copies of
-tensor values that we don’t want and use up more RAM.
-
-Once you’ve gone through these steps, you can use the session and graph as
-normal, and you should see a reduction in loading time and memory usage.
-
-## Protecting model files from easy copying
-
-By default, your models will be stored in the standard serialized protobuf
-format on disk. In theory this means that anybody can copy your model, which you
-may not want. However, in practice, most models are so application-specific and
-obfuscated by optimizations that the risk is similar to that of competitors
-disassembling and reusing your code. If you do want to make it tougher for
-casual users to access your files, though, it is possible to take some basic
-steps.
-
-Most of our examples use the
-[ReadBinaryProto()](https://www.tensorflow.org/code/tensorflow/core/platform/env.cc?q=core/platform/env.cc&l=409)
-convenience call to load a `GraphDef` from disk. This does require an
-unencrypted protobuf on
-disk. Luckily though, the implementation of the call is pretty straightforward
-and it should be easy to write an equivalent that can decrypt in memory. Here's
-some code that shows how you can read and decrypt a protobuf using your own
-decryption routine:
-
-    Status ReadEncryptedProto(Env* env, const string& fname,
-                              ::tensorflow::protobuf::MessageLite* proto) {
-      string data;
-      TF_RETURN_IF_ERROR(ReadFileToString(env, fname, &data));
-
-      DecryptData(&data);  // Your own function here.
-
-      // Note: ParseFromString takes the string itself, not a pointer to it.
-      if (!proto->ParseFromString(data)) {
-        return errors::DataLoss("Can't parse ", fname, " as binary proto");
-      }
-      return Status::OK();
-    }
-
-To use this you’d need to define the DecryptData() function yourself. It could
-be as simple as something like:
-
-    void DecryptData(string* data) {
-      for (size_t i = 0; i < data->size(); ++i) {
-        // XOR each byte with a fixed value; trivially reversible, so treat
-        // this as obfuscation rather than real security.
-        (*data)[i] = (*data)[i] ^ 0x23;
-      }
-    }
-
-You may want something more complex, but exactly what you’ll need is outside the
-scope of this guide.
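-
-However you implement the decryption, the call site looks the same as the
-`ReadBinaryProto()` version; a minimal usage sketch, with an example file path:
-
-    tensorflow::GraphDef tensorflow_graph;
-    tensorflow::Status load_status = ReadEncryptedProto(
-        tensorflow::Env::Default(), "/data/local/tmp/model.encrypted",
-        &tensorflow_graph);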
diff --git a/tensorflow/docs_src/mobile/prepare_models.md b/tensorflow/docs_src/mobile/prepare_models.md
deleted file mode 100644
index 2b84dbb973..0000000000
--- a/tensorflow/docs_src/mobile/prepare_models.md
+++ /dev/null
@@ -1,301 +0,0 @@
-# Preparing models for mobile deployment
-
-The requirements for storing model information during training are very
-different from when you want to release it as part of a mobile app. This section
-covers the tools involved in converting from a training model to something
-releasable in production.
-
-## What is up with all the different saved file formats?
-
-You may find yourself getting very confused by all the different ways that
-TensorFlow can save out graphs. To help, here’s a rundown of some of the
-different components, and what they are used for. The objects are mostly defined
-and serialized as protocol buffers:
-
-- [NodeDef](https://www.tensorflow.org/code/tensorflow/core/framework/node_def.proto):
- Defines a single operation in a model. It has a unique name, a list of the
- names of other nodes it pulls inputs from, the operation type it implements
- (for example `Add`, or `Mul`), and any attributes that are needed to control
- that operation. This is the basic unit of computation for TensorFlow, and all
- work is done by iterating through a network of these nodes, applying each one
- in turn. One particular operation type that’s worth knowing about is `Const`,
- since this holds information about a constant. This may be a single, scalar
- number or string, but it can also hold an entire multi-dimensional tensor
- array. The values for a `Const` are stored inside the `NodeDef`, and so large
- constants can take up a lot of room when serialized.
-
-- [Checkpoint](https://www.tensorflow.org/code/tensorflow/core/util/tensor_bundle/tensor_bundle.h). Another
- way of storing values for a model is by using `Variable` ops. Unlike `Const`
- ops, these don’t store their content as part of the `NodeDef`, so they take up
- very little space within the `GraphDef` file. Instead their values are held in
- RAM while a computation is running, and then saved out to disk as checkpoint
- files periodically. This typically happens as a neural network is being
- trained and weights are updated, so it’s a time-critical operation, and it may
- happen in a distributed fashion across many workers, so the file format has to
- be both fast and flexible. They are stored as multiple checkpoint files,
- together with metadata files that describe what’s contained within the
- checkpoints. When you’re referring to a checkpoint in the API (for example
- when passing a filename in as a command line argument), you’ll use the common
- prefix for a set of related files. If you had these files:
-
- /tmp/model/model-chkpt-1000.data-00000-of-00002
- /tmp/model/model-chkpt-1000.data-00001-of-00002
- /tmp/model/model-chkpt-1000.index
- /tmp/model/model-chkpt-1000.meta
-
-  You would refer to them as `/tmp/model/model-chkpt-1000`.
-
-- [GraphDef](https://www.tensorflow.org/code/tensorflow/core/framework/graph.proto):
- Has a list of `NodeDefs`, which together define the computational graph to
- execute. During training, some of these nodes will be `Variables`, and so if
- you want to have a complete graph you can run, including the weights, you’ll
- need to call a restore operation to pull those values from
- checkpoints. Because checkpoint loading has to be flexible to deal with all of
- the training requirements, this can be tricky to implement on mobile and
- embedded devices, especially those with no proper file system available like
-  iOS. This is where the
- [`freeze_graph.py`](https://www.tensorflow.org/code/tensorflow/python/tools/freeze_graph.py) script
- comes in handy. As mentioned above, `Const` ops store their values as part of
- the `NodeDef`, so if all the `Variable` weights are converted to `Const` nodes,
- then we only need a single `GraphDef` file to hold the model architecture and
- the weights. Freezing the graph handles the process of loading the
- checkpoints, and then converts all Variables to Consts. You can then load the
- resulting file in a single call, without having to restore variable values
- from checkpoints. One thing to watch out for with `GraphDef` files is that
- sometimes they’re stored in text format for easy inspection. These versions
-  usually have a `.pbtxt` filename suffix, whereas the binary files end with
-  `.pb`.
-
-- [FunctionDefLibrary](https://www.tensorflow.org/code/tensorflow/core/framework/function.proto):
- This appears in `GraphDef`, and is effectively a set of sub-graphs, each with
- information about their input and output nodes. Each sub-graph can then be
- used as an op in the main graph, allowing easy instantiation of different
- nodes, in a similar way to how functions encapsulate code in other languages.
-
-- [MetaGraphDef](https://www.tensorflow.org/code/tensorflow/core/protobuf/meta_graph.proto):
- A plain `GraphDef` only has information about the network of computations, but
- doesn’t have any extra information about the model or how it can be
- used. `MetaGraphDef` contains a `GraphDef` defining the computation part of
- the model, but also includes information like ‘signatures’, which are
- suggestions about which inputs and outputs you may want to call the model
- with, data on how and where any checkpoint files are saved, and convenience
- tags for grouping ops together for ease of use.
-
-- [SavedModel](https://www.tensorflow.org/code/tensorflow/core/protobuf/saved_model.proto):
- It’s common to want to have different versions of a graph that rely on a
- common set of variable checkpoints. For example, you might need a GPU and a
- CPU version of the same graph, but keep the same weights for both. You might
-  also need some extra files (like label names) as part of your model. The
- [SavedModel](https://www.tensorflow.org/code/tensorflow/python/saved_model/README.md) format
- addresses these needs by letting you save multiple versions of the same graph
- without duplicating variables, and also storing asset files in the same
- bundle. Under the hood, it uses `MetaGraphDef` and checkpoint files, along
- with extra metadata files. It’s the format that you’ll want to use if you’re
- deploying a web API using TensorFlow Serving, for example.
-
-## How do you get a model you can use on mobile?
-
-In most situations, training a model with TensorFlow will give you a folder
-containing a `GraphDef` file (usually ending with the `.pb` or `.pbtxt` extension) and
-a set of checkpoint files. What you need for mobile or embedded deployment is a
-single `GraphDef` file that’s been ‘frozen’, or had its variables converted into
-inline constants so everything’s in one file. To handle the conversion, you’ll
-need the `freeze_graph.py` script, which is held in
-[`tensorflow/python/tools/freeze_graph.py`](https://www.tensorflow.org/code/tensorflow/python/tools/freeze_graph.py). You’ll run it like this:
-
- bazel build tensorflow/python/tools:freeze_graph
- bazel-bin/tensorflow/python/tools/freeze_graph \
- --input_graph=/tmp/model/my_graph.pb \
- --input_checkpoint=/tmp/model/model.ckpt-1000 \
- --output_graph=/tmp/frozen_graph.pb \
-    --output_node_names=output_node
-
-The `input_graph` argument should point to the `GraphDef` file that holds your
-model architecture. It’s possible that your `GraphDef` has been stored in a text
-format on disk, in which case it’s likely to end in `.pbtxt` instead of `.pb`,
-and you should add an extra `--input_binary=false` flag to the command.
-
-The `input_checkpoint` should be the most recent saved checkpoint. As mentioned
-in the checkpoint section, you need to give the common prefix to the set of
-checkpoints here, rather than a full filename.
-
-`output_graph` defines where the resulting frozen `GraphDef` will be
-saved. Because it’s likely to contain a lot of weight values that take up a
-large amount of space in text format, it’s always saved as a binary protobuf.
-
-`output_node_names` is a list of the names of the nodes that you want to extract
-the results of your graph from. This is needed because the freezing process
-needs to understand which parts of the graph are actually needed, and which are
-artifacts of the training process, like summarization ops. Only ops that
-contribute to calculating the given output nodes will be kept. If you know how
-your graph is going to be used, these should just be the names of the nodes you
-pass into `Session::Run()` as your fetch targets. The easiest way to find the
-node names is to inspect the Node objects while building your graph in Python.
-Inspecting your graph in TensorBoard is another simple way. You can get some
-suggestions on likely outputs by running the [`summarize_graph` tool](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/tools/graph_transforms/README.md#inspecting-graphs).
-
-Because the output format for TensorFlow has changed over time, there are a
-variety of other less commonly used flags available too, like `input_saver`, but
-hopefully you shouldn’t need these on graphs trained with modern versions of the
-framework.
-
-## Using the Graph Transform Tool
-
-A lot of the things you need to do to efficiently run a model on device are
-available through the [Graph Transform
-Tool](https://www.tensorflow.org/code/tensorflow/tools/graph_transforms/README.md). This
-command-line tool takes an input `GraphDef` file, applies the set of rewriting
-rules you request, and then writes out the result as a `GraphDef`. See the
-documentation for more information on how to build and run this tool.
-
-### Removing training-only nodes
-
-TensorFlow `GraphDefs` produced by the training code contain all of the
-computation that’s needed for back-propagation and updates of weights, as well
-as the queuing and decoding of inputs, and the saving out of checkpoints. All of
-these nodes are no longer needed during inference, and some of the operations
-like checkpoint saving aren’t even supported on mobile platforms. To create a
-model file that you can load on devices you need to delete those unneeded
-operations by running the `strip_unused_nodes` rule in the Graph Transform Tool.
-
-The trickiest part of this process is figuring out the names of the nodes you
-want to use as inputs and outputs during inference. You'll need these anyway
-once you start to run inference, but you also need them here so that the
-transform can calculate which nodes are not needed on the inference-only
-path. These may not be obvious from the training code. The easiest way to
-determine the node name is to explore the graph with TensorBoard.
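-
-If you're loading the model from C++ anyway, another quick option is to dump
-every node name and op type from the `GraphDef`; a minimal sketch:
-
-    #include <iostream>
-    #include "tensorflow/core/framework/graph.pb.h"
-
-    // Print the name and op type of every node in a GraphDef.
-    void PrintNodeNames(const tensorflow::GraphDef& graph_def) {
-      for (const tensorflow::NodeDef& node : graph_def.node()) {
-        std::cout << node.name() << " (" << node.op() << ")" << std::endl;
-      }
-    }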
-
-Remember that mobile applications typically gather their data from sensors and
-have it as arrays in memory, whereas training typically involves loading and
-decoding representations of the data stored on disk. In the case of Inception v3
-for example, there’s a `DecodeJpeg` op at the start of the graph that’s designed
-to take JPEG-encoded data from a file retrieved from disk and turn it into an
-arbitrary-sized image. After that there’s a `ResizeBilinear` op to scale it to
-the expected size, followed by a couple of other ops that convert the byte data
-into float and scale the value magnitudes in the way the rest of the graph
-expects. A typical mobile app will skip most of these steps because it’s getting
-its input directly from a live camera, so the input node you will actually
-supply will be the output of the `Mul` node in this case.
-
-<img src ="../images/inception_input.png" width="300">
-
-You’ll need to do a similar process of inspection to figure out the correct
-output nodes.
-
-If you’ve just been given a frozen `GraphDef` file, and are not sure about the
-contents, try using the `summarize_graph` tool to print out information
-about the inputs and outputs it finds from the graph structure. Here’s an
-example with the original Inception v3 file:
-
-    bazel run tensorflow/tools/graph_transforms:summarize_graph -- \
-      --in_graph=tensorflow_inception_graph.pb
-
-Once you have an idea of what the input and output nodes are, you can feed them
-into the graph transform tool as the `--input_names` and `--output_names`
-arguments, and call the `strip_unused_nodes` transform, like this:
-
-    bazel run tensorflow/tools/graph_transforms:transform_graph -- \
-      --in_graph=tensorflow_inception_graph.pb \
-      --out_graph=optimized_inception_graph.pb --inputs='Mul' --outputs='softmax' \
-      --transforms='
- strip_unused_nodes(type=float, shape="1,299,299,3")
- fold_constants(ignore_errors=true)
- fold_batch_norms
- fold_old_batch_norms'
-
-One thing to look out for here is that you need to specify the size and type
-that you want your inputs to be. This is because any values that you’re going to
-be passing in as inputs to inference need to be fed to special `Placeholder` op
-nodes, and the transform may need to create them if they don’t already exist. In
-the case of Inception v3 for example, a `Placeholder` node replaces the old
-`Mul` node that used to output the resized and rescaled image array, since we’re
-going to be doing that processing ourselves before we call TensorFlow. It keeps
-the original name though, which is why we always feed in inputs to `Mul` when we
-run a session with our modified Inception graph.
-
-After you’ve run this process, you’ll have a graph that only contains the actual
-nodes you need to run your prediction process. This is the point where it
-becomes useful to run metrics on the graph, so it’s worth running
-`summarize_graph` again to understand what’s in your model.
-
-## What ops should you include on mobile?
-
-There are hundreds of operations available in TensorFlow, and each one has
-multiple implementations for different data types. On mobile platforms, the size
-of the executable binary that’s produced after compilation is important, because
-app download bundles need to be as small as possible for the best user
-experience. If all of the ops and data types are compiled into the TensorFlow
-library then the total size of the compiled library can be tens of megabytes, so
-by default only a subset of ops and data types are included.
-
-That means that if you load a model file that’s been trained on a desktop
-machine, you may see the error “No OpKernel was registered to support Op” when
-you load it on mobile. The first thing to try is to make sure you’ve stripped
-out any training-only nodes, since the error will occur at load time even if the
-op is never executed. If you’re still hitting the same problem once that’s done,
-you’ll need to look at adding the op to your built library.
-
-The criteria for including ops and types fall into several categories:
-
-- Are they only useful in back-propagation, for gradients? Since mobile is
- focused on inference, we don’t include these.
-
-- Are they useful mainly for other training needs, such as checkpoint saving?
- These we leave out.
-
-- Do they rely on frameworks that aren’t always available on mobile, such as
- libjpeg? To avoid extra dependencies we don’t include ops like `DecodeJpeg`.
-
-- Are there types that aren’t commonly used? We don’t include boolean variants
- of ops for example, since we don’t see much use of them in typical inference
- graphs.
-
-These ops are trimmed by default to optimize for inference on mobile, but it is
-possible to alter some build files to change the default. After altering the
-build files, you will need to recompile TensorFlow. See below for more details
-on how to do this, and also see @{$mobile/optimizing#binary_size$Optimizing} for
-more on reducing your binary size.
-
-### Locate the implementation
-
-Operations are broken into two parts. The first is the op definition, which
-declares the signature of the operation: which inputs, outputs, and attributes
-it has. These take up very little space, and so all are included by default. The
-implementations of the op computations are done in kernels, which live in the
-`tensorflow/core/kernels` folder. You need to compile the C++ file containing
-the kernel implementation of the op you need into the library. To figure out
-which file that is, you can search for the operation name in the source
-files.
-
-[Here’s an example search in github](https://github.com/search?utf8=%E2%9C%93&q=repo%3Atensorflow%2Ftensorflow+extension%3Acc+path%3Atensorflow%2Fcore%2Fkernels+REGISTER+Mul&type=Code&ref=searchresults).
-
-You’ll see that this search is looking for the `Mul` op implementation, and it
-finds it in `tensorflow/core/kernels/cwise_op_mul_1.cc`. You need to look for
-macros beginning with `REGISTER`, with the op name you care about as one of the
-string arguments.
-
-In this case, the implementations are actually broken up across multiple `.cc`
-files, so you’d need to include all of them in your build. If you’re more
-comfortable using the command line for code search, here’s a grep command that
-also locates the right files if you run it from the root of your TensorFlow
-repository:
-
-`grep 'REGISTER.*"Mul"' tensorflow/core/kernels/*.cc`
-
-### Add the implementation to the build
-
-If you’re using Bazel, and building for Android, you’ll want to add the files
-you’ve found to the
-[`android_extended_ops_group1`](https://www.tensorflow.org/code/tensorflow/core/kernels/BUILD#L3565) or
-[`android_extended_ops_group2`](https://www.tensorflow.org/code/tensorflow/core/kernels/BUILD#L3632) targets. You
-may also need to include any .cc files they depend on in there. If the build
-complains about missing header files, add the .h’s that are needed into the
-[`android_extended_ops`](https://www.tensorflow.org/code/tensorflow/core/kernels/BUILD#L3525) target.
-
-If you’re using a makefile targeting iOS, Raspberry Pi, etc., go to
-[`tensorflow/contrib/makefile/tf_op_files.txt`](https://www.tensorflow.org/code/tensorflow/contrib/makefile/tf_op_files.txt) and
-add the right implementation files there.
diff --git a/tensorflow/docs_src/mobile/tflite/demo_android.md b/tensorflow/docs_src/mobile/tflite/demo_android.md
deleted file mode 100644
index 6f9893f8f1..0000000000
--- a/tensorflow/docs_src/mobile/tflite/demo_android.md
+++ /dev/null
@@ -1,146 +0,0 @@
-# Android Demo App
-
-An example Android application using TensorFlow Lite is available
-[on GitHub](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/lite/java/demo/app).
-The demo is a sample camera app that classifies images continuously
-using either a quantized Mobilenet model or a floating point Inception-v3 model.
-To run the demo, a device running Android 5.0 (API 21) or higher is required.
-
-In the demo app, inference is done using the TensorFlow Lite Java API. The demo
-app classifies frames in real time, displaying the most probable
-classifications. It also displays the time taken to detect the object.
-
-There are three ways to get the demo app to your device:
-
-* Download the [prebuilt binary APK](http://download.tensorflow.org/deps/tflite/TfLiteCameraDemo.apk).
-* Use Android Studio to build the application.
-* Download the source code for TensorFlow Lite and the demo and build it using
- bazel.
-
-
-## Download the pre-built binary
-
-The easiest way to try the demo is to download the
-[pre-built binary APK](https://storage.googleapis.com/download.tensorflow.org/deps/tflite/TfLiteCameraDemo.apk).
-
-Once the APK is installed, click the app icon to start the program. The first
-time the app is opened, it asks for runtime permissions to access the device
-camera. The demo app opens the back-camera of the device and recognizes objects
-in the camera's field of view. At the bottom of the image (or at the left
-of the image if the device is in landscape mode), it displays the top three
-classifications and the classification latency.
-
-
-## Build in Android Studio with TensorFlow Lite AAR from JCenter
-
-Use Android Studio to try out changes in the project code and compile the demo
-app:
-
-* Install the latest version of
- [Android Studio](https://developer.android.com/studio/index.html).
-* Make sure the Android SDK version is greater than 26 and the NDK version is
-  greater than 14 (in the Android Studio settings).
-* Import the `tensorflow/contrib/lite/java/demo` directory as a new
- Android Studio project.
-* Install all the Gradle extensions it requests.
-
-Now you can build and run the demo app.
-
-The build process downloads the quantized [Mobilenet TensorFlow Lite model](https://storage.googleapis.com/download.tensorflow.org/models/tflite/mobilenet_v1_224_android_quant_2017_11_08.zip), and unzips it into the assets directory: `tensorflow/contrib/lite/java/demo/app/src/main/assets/`.
-
-Some additional details are available on the
-[TF Lite Android App page](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/lite/java/demo/README.md).
-
-### Using other models
-
-To use a different model:
-* Download the floating point [Inception-v3 model](https://storage.googleapis.com/download.tensorflow.org/models/tflite/inception_v3_slim_2016_android_2017_11_10.zip).
-* Unzip and copy `inceptionv3_non_slim_2015.tflite` to the assets directory.
-* Change the chosen classifier in [Camera2BasicFragment.java](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/lite/java/demo/app/src/main/java/com/example/android/tflitecamerademo/Camera2BasicFragment.java)<br>
- from: `classifier = new ImageClassifierQuantizedMobileNet(getActivity());`<br>
- to: `classifier = new ImageClassifierFloatInception(getActivity());`.
-
-
-## Build TensorFlow Lite and the demo app from source
-
-### Clone the TensorFlow repo
-
-```sh
-git clone https://github.com/tensorflow/tensorflow
-```
-
-### Install Bazel
-
-If `bazel` is not installed on your system, see
-[Installing Bazel](https://bazel.build/versions/master/docs/install.html).
-
-Note: Bazel does not currently support Android builds on Windows. Windows users
-should download the
-[prebuilt binary](https://storage.googleapis.com/download.tensorflow.org/deps/tflite/TfLiteCameraDemo.apk).
-
-### Install Android NDK and SDK
-
-The Android NDK is required to build the native (C/C++) TensorFlow Lite code. The
-current recommended version is *14b* and can be found on the
-[NDK Archives](https://developer.android.com/ndk/downloads/older_releases.html#ndk-14b-downloads)
-page.
-
-The Android SDK and build tools can be
-[downloaded separately](https://developer.android.com/tools/revisions/build-tools.html)
-or used as part of
-[Android Studio](https://developer.android.com/studio/index.html). To build the
-TensorFlow Lite Android demo, build tools require API >= 23 (but it will run on
-devices with API >= 21).
-
-In the root of the TensorFlow repository, update the `WORKSPACE` file with the
-`api_level` and location of the SDK and NDK. If you installed it with
-Android Studio, the SDK path can be found in the SDK manager. The default NDK
-path is `{SDK path}/ndk-bundle`. For example:
-
-```
-android_sdk_repository (
- name = "androidsdk",
- api_level = 23,
- build_tools_version = "23.0.2",
- path = "/home/xxxx/android-sdk-linux/",
-)
-
-android_ndk_repository(
- name = "androidndk",
- path = "/home/xxxx/android-ndk-r10e/",
- api_level = 19,
-)
-```
-
-Some additional details are available on the
-[TF Lite Android App page](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/lite/java/demo/README.md).
-
-### Build the source code
-
-To build the demo app, run `bazel`:
-
-```
-bazel build --cxxopt=--std=c++11 //tensorflow/contrib/lite/java/demo/app/src/main:TfLiteCameraDemo
-```
-
-Caution: Because of a Bazel bug, we only support building the Android demo app
-within a Python 2 environment.
-
-
-## About the demo
-
-The demo app resizes each camera image frame to 224 * 224 pixels to match the
-quantized MobileNet model (299 * 299 for Inception-v3). The resized image is
-converted, row by row, into a
-[ByteBuffer](https://developer.android.com/reference/java/nio/ByteBuffer.html).
-Its size is 1 * 224 * 224 * 3 bytes, where 1 is the number of images in a batch,
-224 * 224 (or 299 * 299) is the width and height of the image, and 3 bytes
-represent the three color channels of a pixel.
-
-This demo uses the TensorFlow Lite Java inference API
-for models which take a single input and provide a single output. The output is
-a two-dimensional array, with the first dimension being the batch index and the
-second dimension holding a confidence value per category. Both models have 1001
-unique categories and the app sorts the probabilities of all the categories and
-displays the top three. The model file must be downloaded and bundled within the
-assets directory of the app.
diff --git a/tensorflow/docs_src/mobile/tflite/demo_ios.md b/tensorflow/docs_src/mobile/tflite/demo_ios.md
deleted file mode 100644
index 3be21da89f..0000000000
--- a/tensorflow/docs_src/mobile/tflite/demo_ios.md
+++ /dev/null
@@ -1,68 +0,0 @@
-# iOS Demo App
-
-The TensorFlow Lite demo is a camera app that continuously classifies whatever
-it sees from your device's back camera, using a quantized MobileNet model. These
-instructions walk you through building and running the demo on an iOS device.
-
-## Prerequisites
-
-* You must have [Xcode](https://developer.apple.com/xcode/) installed and have a
- valid Apple Developer ID, and have an iOS device set up and linked to your
- developer account with all of the appropriate certificates. For these
- instructions, we assume that you have already been able to build and deploy an
- app to an iOS device with your current developer environment.
-
-* The demo app requires a camera and must be executed on a real iOS device. You
-  can build and run it with the iPhone Simulator, but it won't have any camera
-  input to classify.
-
-* You don't need to build the entire TensorFlow library to run the demo, but you
- will need to clone the TensorFlow repository if you haven't already:
-
- git clone https://github.com/tensorflow/tensorflow
-
-* You'll also need the Xcode command-line tools:
-
- xcode-select --install
-
- If this is a new install, you will need to run the Xcode application once to
- agree to the license before continuing.
-
-## Building the iOS Demo App
-
-1. Install CocoaPods if you don't have it:
-
- sudo gem install cocoapods
-
-2. Download the model files used by the demo app (this is done from inside the
- cloned directory):
-
- sh tensorflow/contrib/lite/examples/ios/download_models.sh
-
-3. Install the pod to generate the workspace file:
-
- cd tensorflow/contrib/lite/examples/ios/camera
- pod install
-
- If you have installed this pod before and that command doesn't work, try
-
- pod update
-
- At the end of this step you should have a file called
- `tflite_camera_example.xcworkspace`.
-
-4. Open the project in Xcode by typing this on the command line:
-
- open tflite_camera_example.xcworkspace
-
- This launches Xcode if it isn't open already and opens the
- `tflite_camera_example` project.
-
-5. Build and run the app in Xcode.
-
- Note that as mentioned earlier, you must already have a device set up and
- linked to your Apple Developer account in order to deploy the app on a
- device.
-
-You'll have to grant permissions for the app to use the device's camera. Point
-the camera at various objects and enjoy seeing how the model classifies things!
diff --git a/tensorflow/docs_src/mobile/tflite/devguide.md b/tensorflow/docs_src/mobile/tflite/devguide.md
deleted file mode 100644
index 4133bc172a..0000000000
--- a/tensorflow/docs_src/mobile/tflite/devguide.md
+++ /dev/null
@@ -1,231 +0,0 @@
-# Developer Guide
-
-Using a TensorFlow Lite model in your mobile app requires multiple
-considerations: you must choose a pre-trained or custom model, convert the model
-to the TensorFlow Lite format, and finally, integrate the model in your app.
-
-## 1. Choose a model
-
-Depending on the use case, you can choose one of the popular open-source models,
-such as *InceptionV3* or *MobileNets*, and re-train these models with a custom
-data set or even build your own custom model.
-
-### Use a pre-trained model
-
-[MobileNets](https://research.googleblog.com/2017/06/mobilenets-open-source-models-for.html)
-is a family of mobile-first computer vision models for TensorFlow designed to
-effectively maximize accuracy, while taking into consideration the restricted
-resources for on-device or embedded applications. MobileNets are small,
-low-latency, low-power models parameterized to meet the resource constraints for
-a variety of uses. They can be used for classification, detection, embeddings, and
-segmentation—similar to other popular large scale models, such as
-[Inception](https://arxiv.org/pdf/1602.07261.pdf). Google provides 16 pre-trained
-[ImageNet](http://www.image-net.org/challenges/LSVRC/) classification checkpoints
-for MobileNets that can be used in mobile projects of all sizes.
-
-[Inception-v3](https://arxiv.org/abs/1512.00567) is an image recognition model
-that achieves fairly high accuracy recognizing general objects with 1000 classes,
-for example, "Zebra", "Dalmatian", and "Dishwasher". The model extracts general
-features from input images using a convolutional neural network and classifies
-them based on those features with fully-connected and softmax layers.
-
-[On Device Smart Reply](https://research.googleblog.com/2017/02/on-device-machine-intelligence.html)
-is an on-device model that provides one-touch replies for incoming text messages
-by suggesting contextually relevant messages. The model is built specifically for
-memory constrained devices, such as watches and phones, and has been successfully
-used in Smart Replies on Android Wear. Currently, this model is Android-specific.
-
-These pre-trained models are [available for download](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/lite/g3doc/models.md).
-
-### Re-train Inception-V3 or MobileNet for a custom data set
-
-These pre-trained models were trained on the *ImageNet* data set which contains
-1000 predefined classes. If these classes are not sufficient for your use case,
-the model will need to be re-trained. This technique is called
-*transfer learning* and starts with a model that has been already trained on a
-problem, then retrains the model on a similar problem. Deep learning from
-scratch can take days, but transfer learning is fairly quick. In order to do
-this, you need to generate a custom data set labeled with the relevant classes.
-
-The [TensorFlow for Poets](https://codelabs.developers.google.com/codelabs/tensorflow-for-poets/)
-codelab walks through the re-training process step-by-step. The code supports
-both floating point and quantized inference.
-
-### Train a custom model
-
-A developer may choose to train a custom model using TensorFlow (see the
-@{$tutorials} for examples of building and training models). If you have already
-written a model, the first step is to export this to a @{tf.GraphDef} file. This
-is required because the conversion tools need a serialized description of the
-model, which may otherwise exist only in your code. See
-[Exporting the Inference Graph](https://github.com/tensorflow/models/blob/master/research/slim/README.md)
-to create a .pb file for the custom model.
-
-TensorFlow Lite currently supports a subset of TensorFlow operators. Refer to the
-[TensorFlow Lite & TensorFlow Compatibility Guide](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/lite/g3doc/tf_ops_compatibility.md)
-for supported operators and their usage. This set of operators will continue to
-grow in future TensorFlow Lite releases.
-
-
-## 2. Convert the model format
-
-The model generated (or downloaded) in the previous step is a *standard*
-TensorFlow model and you should now have a .pb or .pbtxt @{tf.GraphDef} file.
-Models generated with transfer learning (re-training) or custom models must be
-converted, but first we must freeze the graph to convert the model to the
-TensorFlow Lite format. This process uses several model formats:
-
-* @{tf.GraphDef} (.pb) —A protobuf that represents the TensorFlow training or
- computation graph. It contains operators, tensors, and variables definitions.
-* *CheckPoint* (.ckpt) —Serialized variables from a TensorFlow graph. Since this
- does not contain a graph structure, it cannot be interpreted by itself.
-* `FrozenGraphDef` —A subclass of `GraphDef` that does not contain
- variables. A `GraphDef` can be converted to a `FrozenGraphDef` by taking a
- CheckPoint and a `GraphDef`, and converting each variable into a constant
- using the value retrieved from the CheckPoint.
-* `SavedModel` —A `GraphDef` and CheckPoint with a signature that labels
- input and output arguments to a model. A `GraphDef` and CheckPoint can be
- extracted from a `SavedModel`.
-* *TensorFlow Lite model* (.tflite) —A serialized
- [FlatBuffer](https://google.github.io/flatbuffers/) that contains TensorFlow
- Lite operators and tensors for the TensorFlow Lite interpreter, similar to a
- `FrozenGraphDef`.
-
-### Freeze Graph
-
-To use the `GraphDef` .pb file with TensorFlow Lite, you must have checkpoints
-that contain trained weight parameters. The .pb file only contains the structure
-of the graph. The process of merging the checkpoint values with the graph
-structure is called *freezing the graph*.
-
-You should have a checkpoints folder or download them for a pre-trained model
-(for example,
-[MobileNets](https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet_v1.md)).
-
-To freeze the graph, use the following command (changing the arguments):
-
-```
-freeze_graph --input_graph=/tmp/mobilenet_v1_224.pb \
- --input_checkpoint=/tmp/checkpoints/mobilenet-10202.ckpt \
- --input_binary=true \
- --output_graph=/tmp/frozen_mobilenet_v1_224.pb \
- --output_node_names=MobileNetV1/Predictions/Reshape_1
-```
-
-The `input_binary` flag must be enabled so the protobuf is read and written in
-a binary format. Set the `input_graph` and `input_checkpoint` files.
-
-The `output_node_names` may not be obvious outside of the code that built the
-model. The easiest way to find them is to visualize the graph, either with
-[TensorBoard](https://codelabs.developers.google.com/codelabs/tensorflow-for-poets-2/#3)
-or `graphviz`.
-
-The frozen `GraphDef` is now ready for conversion to the `FlatBuffer` format
-(.tflite) for use on Android or iOS devices. For Android, the TensorFlow
-Optimizing Converter tool supports both float and quantized models. To convert
-the frozen `GraphDef` to the .tflite format:
-
-```
-toco --input_file=$(pwd)/mobilenet_v1_1.0_224/frozen_graph.pb \
- --input_format=TENSORFLOW_GRAPHDEF \
- --output_format=TFLITE \
- --output_file=/tmp/mobilenet_v1_1.0_224.tflite \
- --inference_type=FLOAT \
- --input_type=FLOAT \
- --input_arrays=input \
- --output_arrays=MobilenetV1/Predictions/Reshape_1 \
- --input_shapes=1,224,224,3
-```
-
-The `input_file` argument should reference the frozen `GraphDef` file
-containing the model architecture. The [frozen_graph.pb](https://storage.googleapis.com/download.tensorflow.org/models/mobilenet_v1_1.0_224_frozen.tgz)
-file used here is available for download. `output_file` is where the TensorFlow
-Lite model will get generated. The `input_type` and `inference_type`
-arguments should be set to `FLOAT`, unless converting a
-@{$performance/quantization$quantized model}. Setting the `input_arrays`,
-`output_arrays`, and `input_shapes` arguments is not as straightforward. The
-easiest way to find these values is to explore the graph using TensorBoard. Reuse
-the arguments used for specifying the output nodes for inference in the
-`freeze_graph` step.
-
-It is also possible to use the TensorFlow Optimizing Converter with protobufs
-from either Python or from the command line (see the
-[toco_from_protos.py](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/lite/toco/python/toco_from_protos.py)
-example). This allows you to integrate the conversion step into the model design
-workflow, ensuring the model is easily convertible to a mobile inference graph.
-For example:
-
-```python
-import tensorflow as tf
-
-# Build a trivial graph: one placeholder input and one named output.
-img = tf.placeholder(name="img", dtype=tf.float32, shape=(1, 64, 64, 3))
-val = img + tf.constant([1., 2., 3.]) + tf.constant([1., 4., 4.])
-out = tf.identity(val, name="out")
-
-with tf.Session() as sess:
-  # Convert the GraphDef to a TensorFlow Lite FlatBuffer, given the input
-  # and output tensors, then write it to disk.
-  tflite_model = tf.contrib.lite.toco_convert(sess.graph_def, [img], [out])
-  open("converted_model.tflite", "wb").write(tflite_model)
-```
-
-For usage, see the TensorFlow Optimizing Converter
-[command-line examples](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/lite/toco/g3doc/cmdline_examples.md).
-
-Refer to the
-[Ops compatibility guide](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/lite/g3doc/tf_ops_compatibility.md)
-for troubleshooting help, and if that doesn't help, please
-[file an issue](https://github.com/tensorflow/tensorflow/issues).
-
-The [development repo](https://github.com/tensorflow/tensorflow) contains a tool
-to visualize TensorFlow Lite models after conversion. To build the
-[visualize.py](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/lite/tools/visualize.py)
-tool:
-
-```sh
-bazel run tensorflow/contrib/lite/tools:visualize -- model.tflite model_viz.html
-```
-
-This generates an interactive HTML page listing subgraphs, operations, and a
-graph visualization.
-
-
-## 3. Use the TensorFlow Lite model for inference in a mobile app
-
-After completing the prior steps, you should now have a `.tflite` model file.
-
-### Android
-
-Since Android apps are written in Java and the core TensorFlow library is in C++,
-a JNI library is provided as an interface. This is only meant for inference—it
-provides the ability to load a graph, set up inputs, and run the model to
-calculate outputs.
-
-The open source Android demo app uses the JNI interface and is available
-[on GitHub](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/lite/java/demo/app).
-You can also download a
-[prebuilt APK](http://download.tensorflow.org/deps/tflite/TfLiteCameraDemo.apk).
-See the @{$tflite/demo_android} guide for details.
-
-The @{$mobile/android_build} guide has instructions for installing TensorFlow on
-Android and setting up `bazel` and Android Studio.
-
-### iOS
-
-To integrate a TensorFlow model in an iOS app, see the
-[TensorFlow Lite for iOS](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/lite/g3doc/ios.md)
-guide and @{$tflite/demo_ios} guide.
-
-#### Core ML support
-
-Core ML is a machine learning framework used in Apple products. In addition to
-using TensorFlow Lite models directly in your applications, you can convert
-trained TensorFlow models to the
-[CoreML](https://developer.apple.com/machine-learning/) format for use on Apple
-devices. To use the converter, refer to the
-[TensorFlow-CoreML converter documentation](https://github.com/tf-coreml/tf-coreml).
-
-### Raspberry Pi
-
-Compile TensorFlow Lite for a Raspberry Pi by following the
-[RPi build instructions](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/lite/g3doc/rpi.md).
-This compiles a static library file (`.a`) used to build your app. There are
-plans for Python bindings and a demo app.
diff --git a/tensorflow/docs_src/mobile/tflite/index.md b/tensorflow/docs_src/mobile/tflite/index.md
deleted file mode 100644
index 3d1733024e..0000000000
--- a/tensorflow/docs_src/mobile/tflite/index.md
+++ /dev/null
@@ -1,209 +0,0 @@
-# Introduction to TensorFlow Lite
-
-TensorFlow Lite is TensorFlow’s lightweight solution for mobile and embedded
-devices. It enables on-device machine learning inference with low latency and a
-small binary size. TensorFlow Lite also supports hardware acceleration with the
-[Android Neural Networks
-API](https://developer.android.com/ndk/guides/neuralnetworks/index.html).
-
-TensorFlow Lite uses many techniques for achieving low latency such as
-optimizing the kernels for mobile apps, pre-fused activations, and quantized
-kernels that allow smaller and faster (fixed-point math) models.
-
-Most of our TensorFlow Lite documentation is [on
-GitHub](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/lite)
-for the time being.
-
-## What does TensorFlow Lite contain?
-
-TensorFlow Lite supports a set of core operators, both quantized and
-float, which have been tuned for mobile platforms. They incorporate pre-fused
-activations and biases to further enhance performance and quantized
-accuracy. Additionally, TensorFlow Lite supports custom operations in
-models.
-
-TensorFlow Lite defines a new model file format, based on
-[FlatBuffers](https://google.github.io/flatbuffers/). FlatBuffers is an
-open-source, efficient cross-platform serialization library. It is similar to
-[protocol buffers](https://developers.google.com/protocol-buffers/?hl=en), but
-the primary difference is that FlatBuffers does not need a parsing/unpacking
-step to a secondary representation before you can access data, a step often
-coupled with per-object memory allocation. Also, the code footprint of FlatBuffers is an
-order of magnitude smaller than protocol buffers.
-
-TensorFlow Lite has a new mobile-optimized interpreter, which has the key goals
-of keeping apps lean and fast. The interpreter uses a static graph ordering and
-a custom (less-dynamic) memory allocator to ensure minimal load, initialization,
-and execution latency.
-
-TensorFlow Lite provides an interface to leverage hardware acceleration, if
-available on the device. It does so via the
-[Android Neural Networks API](https://developer.android.com/ndk/guides/neuralnetworks/index.html),
-available on Android 8.1 (API level 27) and higher.
-
-## Why do we need a new mobile-specific library?
-
-Machine Learning is changing the computing paradigm, and we see an emerging
-trend of new use cases on mobile and embedded devices. Consumer expectations are
-also trending toward natural, human-like interactions with their devices, driven
-by the camera and voice interaction models.
-
-There are several factors which are fueling interest in this domain:
-
-- Innovation at the silicon layer is enabling new possibilities for hardware
- acceleration, and frameworks such as the Android Neural Networks API make it
- easy to leverage these.
-
-- Recent advances in real-time computer vision and spoken language understanding
- have led to mobile-optimized benchmark models being open sourced
- (e.g. MobileNets, SqueezeNet).
-
-- Widely-available smart appliances create new possibilities for
- on-device intelligence.
-
-- Interest in stronger user data privacy paradigms where user data does not need
- to leave the mobile device.
-
-- Ability to serve ‘offline’ use cases, where the device does not need to be
- connected to a network.
-
-We believe the next wave of machine learning applications will have significant
-processing on mobile and embedded devices.
-
-## TensorFlow Lite developer preview highlights
-
-TensorFlow Lite is available as a developer preview and includes the
-following:
-
-- A set of core operators, both quantized and float, many of which have been
- tuned for mobile platforms. These can be used to create and run custom
- models. Developers can also write their own custom operators and use them in
- models.
-
-- A new [FlatBuffers](https://google.github.io/flatbuffers/)-based
- model file format.
-
-- On-device interpreter with kernels optimized for faster execution on mobile.
-
-- TensorFlow converter to convert TensorFlow-trained models to the TensorFlow
- Lite format.
-
-- Smaller in size: TensorFlow Lite is smaller than 300KB when all supported
- operators are linked and less than 200KB when using only the operators needed
- for supporting InceptionV3 and Mobilenet.
-
-- **Pre-tested models:**
-
- All of the following models are guaranteed to work out of the box:
-
- - Inception V3, a popular model for detecting the dominant objects
- present in an image.
-
- - [MobileNets](https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet_v1.md),
- a family of mobile-first computer vision models designed to effectively
- maximize accuracy while being mindful of the restricted resources for an
- on-device or embedded application. They are small, low-latency, low-power
- models parameterized to meet the resource constraints of a variety of use
- cases. They can be built upon for classification, detection, embeddings
- and segmentation. MobileNet models are smaller but [lower in
- accuracy](https://research.googleblog.com/2017/06/mobilenets-open-source-models-for.html)
- than Inception V3.
-
- - On Device Smart Reply, an on-device model which provides one-touch
- replies for an incoming text message by suggesting contextually relevant
-    messages. The model was built specifically for memory-constrained devices
-    such as watches and phones, and it has been successfully used to surface
- [Smart Replies on Android
- Wear](https://research.googleblog.com/2017/02/on-device-machine-intelligence.html)
- to all first-party and third-party apps.
-
- Also see the complete list of
- [TensorFlow Lite's supported models](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/lite/g3doc/models.md),
- including the model sizes, performance numbers, and downloadable model files.
-
-- Quantized versions of the MobileNet model, which run faster than the
-  non-quantized (float) version on CPU.
-
-- New Android demo app to illustrate the use of TensorFlow Lite with a quantized
- MobileNet model for object classification.
-
-- Java and C++ API support
-
-Note: This is a developer release, and it’s likely that there will be changes in
-the API in upcoming versions. We do not guarantee backward or forward
-compatibility with this release.
-
-## Getting Started
-
-We recommend you try out TensorFlow Lite with the pre-tested models indicated
-above. If you have an existing model, you will need to test whether your model
-is compatible with both the converter and the supported operator set. To test
-your model, see the
-[documentation on GitHub](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/lite).
-
-### Retrain Inception-V3 or MobileNet for a custom data set
-
-The pre-trained models mentioned above have been trained on the ImageNet data
-set, which consists of 1000 predefined classes. If those classes are not
-relevant or useful for your use case, you will need to retrain those
-models. This technique is called transfer learning: it starts with a model
-that has already been trained on one problem and retrains it on a similar
-problem. Training a deep network from scratch can take days, but transfer
-learning can be done fairly quickly. In order to do this, you'll need to generate your
-custom data set labeled with the relevant classes.
-
-The [TensorFlow for Poets](https://codelabs.developers.google.com/codelabs/tensorflow-for-poets/)
-codelab walks through this process step-by-step. The retraining code supports
-retraining for both floating point and quantized inference.
-
-## TensorFlow Lite Architecture
-
-The following diagram shows the architectural design of TensorFlow Lite:
-
-<img src="https://www.tensorflow.org/images/tflite-architecture.jpg"
- alt="TensorFlow Lite architecture diagram"
- style="max-width:600px;">
-
-Starting with a trained TensorFlow model on disk, you'll convert that model to
-the TensorFlow Lite file format (`.tflite`) using the TensorFlow Lite
-Converter. Then you can use that converted file in your mobile application.
-
-Deploying the TensorFlow Lite model file uses:
-
-- Java API: A convenience wrapper around the C++ API on Android.
-
-- C++ API: Loads the TensorFlow Lite Model File and invokes the Interpreter. The
- same library is available on both Android and iOS.
-
-- Interpreter: Executes the model using a set of kernels. The interpreter
-  supports selective kernel loading; without kernels it is only 100KB, and 300KB
-  with all the kernels loaded. This is a significant reduction from the 1.5MB
-  required by TensorFlow Mobile.
-
-- On select Android devices, the Interpreter will use the Android Neural
-  Networks API for hardware acceleration, or fall back to CPU execution if it
-  is unavailable.
-
-You can also use the C++ API to implement custom kernels for the
-Interpreter.
-
-## Future Work
-
-In future releases, TensorFlow Lite will support more models and built-in
-operators, include performance improvements for both fixed-point and
-floating-point models, improve the tools to enable easier developer workflows,
-and support more small devices. As we continue development, we hope
-that TensorFlow Lite will greatly simplify the developer experience of targeting
-a model for small devices.
-
-Future plans include using specialized machine learning hardware to get the best
-possible performance for a particular model on a particular device.
-
-## Next Steps
-
-For the developer preview, most of our documentation is on GitHub. Please take a
-look at the [TensorFlow Lite
-repository](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/lite)
-on GitHub for more information and for code samples, demo applications, and
-more.
-
diff --git a/tensorflow/docs_src/performance/datasets_performance.md b/tensorflow/docs_src/performance/datasets_performance.md
index 46b43b7673..5d9e4ba392 100644
--- a/tensorflow/docs_src/performance/datasets_performance.md
+++ b/tensorflow/docs_src/performance/datasets_performance.md
@@ -38,9 +38,9 @@ the heavy lifting of training your model. In addition, viewing input pipelines
as an ETL process provides structure that facilitates the application of
performance optimizations.
-When using the @{tf.estimator.Estimator} API, the first two phases (Extract and
+When using the `tf.estimator.Estimator` API, the first two phases (Extract and
Transform) are captured in the `input_fn` passed to
-@{tf.estimator.Estimator.train}. In code, this might look like the following
+`tf.estimator.Estimator.train`. In code, this might look like the following
(naive, sequential) implementation:
```
@@ -99,7 +99,7 @@ With pipelining, idle time diminishes significantly:
![with pipelining](/images/datasets_with_pipelining.png)
The `tf.data` API provides a software pipelining mechanism through the
-@{tf.data.Dataset.prefetch} transformation, which can be used to decouple the
+`tf.data.Dataset.prefetch` transformation, which can be used to decouple the
time data is produced from the time it is consumed. In particular, the
transformation uses a background thread and an internal buffer to prefetch
elements from the input dataset ahead of the time they are requested. Thus, to
@@ -130,7 +130,7 @@ The preceding recommendation is simply the most common application.
### Parallelize Data Transformation
When preparing a batch, input elements may need to be pre-processed. To this
-end, the `tf.data` API offers the @{tf.data.Dataset.map} transformation, which
+end, the `tf.data` API offers the `tf.data.Dataset.map` transformation, which
applies a user-defined function (for example, `parse_fn` from the running
example) to each element of the input dataset. Because input elements are
independent of one another, the pre-processing can be parallelized across
@@ -164,7 +164,7 @@ dataset = dataset.map(map_func=parse_fn, num_parallel_calls=FLAGS.num_parallel_c
Furthermore, if your batch size is in the hundreds or thousands, your pipeline
will likely additionally benefit from parallelizing the batch creation. To this
-end, the `tf.data` API provides the @{tf.contrib.data.map_and_batch}
+end, the `tf.data` API provides the `tf.contrib.data.map_and_batch`
transformation, which effectively "fuses" the map and batch transformations.
To apply this change to our running example, change:
@@ -205,7 +205,7 @@ is stored locally or remotely, but can be worse in the remote case if data is
not prefetched effectively.
To mitigate the impact of the various data extraction overheads, the `tf.data`
-API offers the @{tf.contrib.data.parallel_interleave} transformation. Use this
+API offers the `tf.contrib.data.parallel_interleave` transformation. Use this
transformation to parallelize the execution of and interleave the contents of
other datasets (such as data file readers). The
number of datasets to overlap can be specified by the `cycle_length` argument.
@@ -232,7 +232,7 @@ dataset = files.apply(tf.contrib.data.parallel_interleave(
The throughput of remote storage systems can vary over time due to load or
network events. To account for this variance, the `parallel_interleave`
transformation can optionally use prefetching. (See
-@{tf.contrib.data.parallel_interleave} for details).
+`tf.contrib.data.parallel_interleave` for details).
By default, the `parallel_interleave` transformation provides a deterministic
ordering of elements to aid reproducibility. As an alternative to prefetching
@@ -261,7 +261,7 @@ function (that is, have it operate over a batch of inputs at once) and apply the
### Map and Cache
-The @{tf.data.Dataset.cache} transformation can cache a dataset, either in
+The `tf.data.Dataset.cache` transformation can cache a dataset, either in
memory or on local storage. If the user-defined function passed into the `map`
transformation is expensive, apply the cache transformation after the map
transformation as long as the resulting dataset can still fit into memory or
@@ -281,9 +281,9 @@ performance (for example, to enable fusing of the map and batch transformations)
### Repeat and Shuffle
-The @{tf.data.Dataset.repeat} transformation repeats the input data a finite (or
+The `tf.data.Dataset.repeat` transformation repeats the input data a finite (or
infinite) number of times; each repetition of the data is typically referred to
-as an _epoch_. The @{tf.data.Dataset.shuffle} transformation randomizes the
+as an _epoch_. The `tf.data.Dataset.shuffle` transformation randomizes the
order of the dataset's examples.
If the `repeat` transformation is applied before the `shuffle` transformation,
@@ -296,7 +296,7 @@ internal state of the `shuffle` transformation. In other words, the former
(`shuffle` before `repeat`) provides stronger ordering guarantees.
When possible, we recommend using the fused
-@{tf.contrib.data.shuffle_and_repeat} transformation, which combines the best of
+`tf.contrib.data.shuffle_and_repeat` transformation, which combines the best of
both worlds (good performance and strong ordering guarantees). Otherwise, we
recommend shuffling before repeating.
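
For example, a sketch of applying the fused transformation (file names, buffer
size, and epoch count are illustrative):

```
import tensorflow as tf

filenames = ["/path/to/train-00000.tfrecord"]
dataset = tf.data.TFRecordDataset(filenames)
# Fused shuffle-and-repeat: strong ordering guarantees at good performance.
dataset = dataset.apply(
    tf.contrib.data.shuffle_and_repeat(buffer_size=10000, count=10))
```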
diff --git a/tensorflow/docs_src/performance/performance_guide.md b/tensorflow/docs_src/performance/performance_guide.md
index cb0f5ca924..df70309568 100644
--- a/tensorflow/docs_src/performance/performance_guide.md
+++ b/tensorflow/docs_src/performance/performance_guide.md
@@ -94,7 +94,7 @@ sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
#### Fused decode and crop
If inputs are JPEG images that also require cropping, use fused
-@{tf.image.decode_and_crop_jpeg} to speed up preprocessing.
+`tf.image.decode_and_crop_jpeg` to speed up preprocessing.
`tf.image.decode_and_crop_jpeg` only decodes the part of
the image within the crop window. This significantly speeds up the process if
the crop window is much smaller than the full image. For ImageNet data, this
@@ -187,14 +187,14 @@ some models makes up a large percentage of the operation time. Using fused batch
norm can result in a 12%-30% speedup.
There are two commonly used batch norms and both support fusing. The core
-@{tf.layers.batch_normalization} added fused starting in TensorFlow 1.3.
+`tf.layers.batch_normalization` added a fused option starting in TensorFlow 1.3.
```python
bn = tf.layers.batch_normalization(
input_layer, fused=True, data_format='NCHW')
```
-The contrib @{tf.contrib.layers.batch_norm} method has had fused as an option
+The contrib `tf.contrib.layers.batch_norm` method has had fused as an option
since before TensorFlow 1.0.
```python
@@ -205,43 +205,43 @@ bn = tf.contrib.layers.batch_norm(input_layer, fused=True, data_format='NCHW')
There are many ways to specify an RNN computation in TensorFlow and they have
trade-offs with respect to model flexibility and performance. The
-@{tf.nn.rnn_cell.BasicLSTMCell} should be considered a reference implementation
+`tf.nn.rnn_cell.BasicLSTMCell` should be considered a reference implementation
and used only as a last resort when no other options will work.
When using one of the cells, rather than the fully fused RNN layers, you have a
-choice of whether to use @{tf.nn.static_rnn} or @{tf.nn.dynamic_rnn}. There
+choice of whether to use `tf.nn.static_rnn` or `tf.nn.dynamic_rnn`. There
shouldn't generally be a performance difference at runtime, but large unroll
-amounts can increase the graph size of the @{tf.nn.static_rnn} and cause long
-compile times. An additional advantage of @{tf.nn.dynamic_rnn} is that it can
+amounts can increase the graph size of the `tf.nn.static_rnn` and cause long
+compile times. An additional advantage of `tf.nn.dynamic_rnn` is that it can
optionally swap memory from the GPU to the CPU to enable training of very long
sequences. Depending on the model and hardware configuration, this can come at
a performance cost. It is also possible to run multiple iterations of
-@{tf.nn.dynamic_rnn} and the underlying @{tf.while_loop} construct in parallel,
+`tf.nn.dynamic_rnn` and the underlying `tf.while_loop` construct in parallel,
although this is rarely useful with RNN models as they are inherently
sequential.
-On NVIDIA GPUs, the use of @{tf.contrib.cudnn_rnn} should always be preferred
+On NVIDIA GPUs, the use of `tf.contrib.cudnn_rnn` should always be preferred
unless you want layer normalization, which it doesn't support. It is often at
-least an order of magnitude faster than @{tf.contrib.rnn.BasicLSTMCell} and
-@{tf.contrib.rnn.LSTMBlockCell} and uses 3-4x less memory than
-@{tf.contrib.rnn.BasicLSTMCell}.
+least an order of magnitude faster than `tf.contrib.rnn.BasicLSTMCell` and
+`tf.contrib.rnn.LSTMBlockCell` and uses 3-4x less memory than
+`tf.contrib.rnn.BasicLSTMCell`.
If you need to run one step of the RNN at a time, as might be the case in
reinforcement learning with a recurrent policy, then you should use the
-@{tf.contrib.rnn.LSTMBlockCell} with your own environment interaction loop
-inside a @{tf.while_loop} construct. Running one step of the RNN at a time and
+`tf.contrib.rnn.LSTMBlockCell` with your own environment interaction loop
+inside a `tf.while_loop` construct. Running one step of the RNN at a time and
returning to Python is possible, but it will be slower.
-On CPUs, mobile devices, and if @{tf.contrib.cudnn_rnn} is not available on
+On CPUs, mobile devices, and if `tf.contrib.cudnn_rnn` is not available on
your GPU, the fastest and most memory efficient option is
-@{tf.contrib.rnn.LSTMBlockFusedCell}.
+`tf.contrib.rnn.LSTMBlockFusedCell`.
-For all of the less common cell types like @{tf.contrib.rnn.NASCell},
-@{tf.contrib.rnn.PhasedLSTMCell}, @{tf.contrib.rnn.UGRNNCell},
-@{tf.contrib.rnn.GLSTMCell}, @{tf.contrib.rnn.Conv1DLSTMCell},
-@{tf.contrib.rnn.Conv2DLSTMCell}, @{tf.contrib.rnn.LayerNormBasicLSTMCell},
+For all of the less common cell types like `tf.contrib.rnn.NASCell`,
+`tf.contrib.rnn.PhasedLSTMCell`, `tf.contrib.rnn.UGRNNCell`,
+`tf.contrib.rnn.GLSTMCell`, `tf.contrib.rnn.Conv1DLSTMCell`,
+`tf.contrib.rnn.Conv2DLSTMCell`, `tf.contrib.rnn.LayerNormBasicLSTMCell`,
etc., one should be aware that they are implemented in the graph like
-@{tf.contrib.rnn.BasicLSTMCell} and as such will suffer from the same poor
+`tf.contrib.rnn.BasicLSTMCell` and as such will suffer from the same poor
performance and high memory usage. One should consider whether or not those
trade-offs are worth it before using these cells. For example, while layer
normalization can speed up convergence, because cuDNN is 20x faster the fastest
@@ -464,7 +464,7 @@ equal to the number of physical cores rather than logical cores.
config = tf.ConfigProto()
config.intra_op_parallelism_threads = 44
config.inter_op_parallelism_threads = 44
- tf.session(config=config)
+ tf.Session(config=config)
```
diff --git a/tensorflow/docs_src/performance/performance_models.md b/tensorflow/docs_src/performance/performance_models.md
index 359b0e904d..66bf684d5b 100644
--- a/tensorflow/docs_src/performance/performance_models.md
+++ b/tensorflow/docs_src/performance/performance_models.md
@@ -10,8 +10,8 @@ incorporated into high-level APIs.
## Input Pipeline
The @{$performance_guide$Performance Guide} explains how to identify possible
-input pipeline issues and best practices. We found that using @{tf.FIFOQueue}
-and @{tf.train.queue_runner} could not saturate multiple current generation GPUs
+input pipeline issues and best practices. We found that using `tf.FIFOQueue`
+and `tf.train.queue_runner` could not saturate multiple current generation GPUs
when using large inputs and processing with higher samples per second, such
as training ImageNet with [AlexNet](http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf).
This is due to the use of Python threads as its underlying implementation. The
@@ -29,7 +29,7 @@ implementation is made up of 3 stages:
The dominant part of each stage is executed in parallel with the other stages
using `data_flow_ops.StagingArea`. `StagingArea` is a queue-like operator
-similar to @{tf.FIFOQueue}. The difference is that `StagingArea` does not
+similar to `tf.FIFOQueue`. The difference is that `StagingArea` does not
guarantee FIFO ordering, but offers simpler functionality and can be executed
on both CPU and GPU in parallel with other stages. Breaking the input pipeline
into 3 stages that operate independently in parallel is scalable and takes full
@@ -62,10 +62,10 @@ and executed in parallel. The image preprocessing ops include operations such as
image decoding, distortion, and resizing.
Once the images are through preprocessing, they are concatenated together into 8
-tensors each with a batch-size of 32. Rather than using @{tf.concat} for this
+tensors each with a batch-size of 32. Rather than using `tf.concat` for this
purpose, which is implemented as a single op that waits for all the inputs to be
-ready before concatenating them together, @{tf.parallel_stack} is used.
-@{tf.parallel_stack} allocates an uninitialized tensor as an output, and each
+ready before concatenating them together, `tf.parallel_stack` is used.
+`tf.parallel_stack` allocates an uninitialized tensor as an output, and each
input tensor is written to its designated portion of the output tensor as soon
as the input is available.
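
A sketch of the idea (shapes are illustrative):

```python
import tensorflow as tf

# Eight preprocessed tensors, e.g. each a batch of shape [32, 224, 224, 3].
batches = [tf.zeros([32, 224, 224, 3]) for _ in range(8)]
# tf.parallel_stack writes each input into its slot of the output as soon as
# that input is ready; tf.concat would wait for all inputs first.
images = tf.parallel_stack(batches)  # shape [8, 32, 224, 224, 3]
```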
@@ -94,7 +94,7 @@ the GPU, all the tensors are already available.
With all the stages capable of being driven by different processors,
`data_flow_ops.StagingArea` is used between them so they run in parallel.
-`StagingArea` is a queue-like operator similar to @{tf.FIFOQueue} that offers
+`StagingArea` is a queue-like operator similar to `tf.FIFOQueue` that offers
simpler functionality and can be executed on both CPU and GPU.
Before the model starts running all the stages, the input pipeline stages are
@@ -153,7 +153,7 @@ weights obtained from training.
The default batch-normalization in TensorFlow is implemented as composite
operations. This is very general, but often leads to suboptimal performance. An
alternative is to use fused batch-normalization which often has much better
-performance on GPU. Below is an example of using @{tf.contrib.layers.batch_norm}
+performance on GPU. Below is an example of using `tf.contrib.layers.batch_norm`
to implement fused batch-normalization.
```python
@@ -301,7 +301,7 @@ In order to broadcast variables and aggregate gradients across different GPUs
within the same host machine, we can use the default TensorFlow implicit copy
mechanism.
-However, we can instead use the optional NCCL (@{tf.contrib.nccl}) support. NCCL
+However, we can instead use the optional NCCL (`tf.contrib.nccl`) support. NCCL
is an NVIDIA® library that can efficiently broadcast and aggregate data across
different GPUs. It schedules a cooperating kernel on each GPU that knows how to
best utilize the underlying hardware topology; this kernel uses a single SM of
diff --git a/tensorflow/docs_src/performance/quantization.md b/tensorflow/docs_src/performance/quantization.md
index c97f74139c..4499f5715c 100644
--- a/tensorflow/docs_src/performance/quantization.md
+++ b/tensorflow/docs_src/performance/quantization.md
@@ -163,7 +163,7 @@ bazel build tensorflow/contrib/lite/toco:toco && \
--std_value=127.5 --mean_value=127.5
```
-See the documentation for @{tf.contrib.quantize} and
+See the documentation for `tf.contrib.quantize` and
[TensorFlow Lite](/mobile/tflite/).
## Quantized accuracy
diff --git a/tensorflow/docs_src/performance/xla/broadcasting.md b/tensorflow/docs_src/performance/xla/broadcasting.md
index eaa709c2f8..7018ded53f 100644
--- a/tensorflow/docs_src/performance/xla/broadcasting.md
+++ b/tensorflow/docs_src/performance/xla/broadcasting.md
@@ -99,7 +99,7 @@ dimensions 1 and 2 of the cuboid.
This type of broadcast is used in the binary ops in `XlaBuilder`, if the
`broadcast_dimensions` argument is given. For example, see
-[XlaBuilder::Add](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_client/xla_builder.cc).
+[XlaBuilder::Add](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_builder.cc).
In the XLA source code, this type of broadcasting is sometimes called "InDim"
broadcasting.
diff --git a/tensorflow/docs_src/performance/xla/developing_new_backend.md b/tensorflow/docs_src/performance/xla/developing_new_backend.md
index 74ea15bb2b..840f6983c2 100644
--- a/tensorflow/docs_src/performance/xla/developing_new_backend.md
+++ b/tensorflow/docs_src/performance/xla/developing_new_backend.md
@@ -44,7 +44,7 @@ It is possible to model a new
implementation on the existing [`xla::CPUCompiler`]
(https://www.tensorflow.org/code/tensorflow/compiler/xla/service/cpu/cpu_compiler.cc)
and [`xla::GPUCompiler`]
-(https://www.tensorflow.org/code/tensorflow/compiler/xla/service/gpu/gpu_compiler.cc)
+(https://www.tensorflow.org/code/tensorflow/compiler/xla/service/gpu/nvptx_compiler.cc)
classes, since these already emit LLVM IR. Depending on the nature of the
hardware, it is possible that many of the LLVM IR generation aspects will have
to be changed, but a lot of code can be shared with the existing backends.
diff --git a/tensorflow/docs_src/performance/xla/jit.md b/tensorflow/docs_src/performance/xla/jit.md
index 6724d1eaf8..7202ef47f7 100644
--- a/tensorflow/docs_src/performance/xla/jit.md
+++ b/tensorflow/docs_src/performance/xla/jit.md
@@ -19,10 +19,11 @@ on the `XLA_CPU` or `XLA_GPU` TensorFlow devices. Placing operators directly on
a TensorFlow XLA device forces the operator to run on that device and is mainly
used for testing.
-> Note: The XLA CPU backend produces fast single-threaded code (in most cases),
-> but does not yet parallelize as well as the TensorFlow CPU backend. The XLA
-> GPU backend is competitive with the standard TensorFlow implementation,
-> sometimes faster, sometimes slower.
+> Note: The XLA CPU backend supports intra-op parallelism (i.e. it can shard a
+> single operation across multiple cores) but it does not support inter-op
+> parallelism (i.e. it cannot execute independent operations concurrently across
+> multiple cores). The XLA GPU backend is competitive with the standard
+> TensorFlow implementation, sometimes faster, sometimes slower.
### Turning on JIT compilation
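
For example, session-level JIT is turned on through `ConfigProto` (a minimal
sketch of the standard knob; see the note below about CPU operations):

```python
import tensorflow as tf

config = tf.ConfigProto()
# ON_1 enables JIT compilation at the session level.
config.graph_options.optimizer_options.global_jit_level = tf.OptimizerOptions.ON_1
sess = tf.Session(config=config)
```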
@@ -55,8 +56,7 @@ sess = tf.Session(config=config)
> Note: Turning on JIT at the session level will not result in operations being
> compiled for the CPU. JIT compilation for CPU operations must be done via
-> the manual method documented below. This decision was made due to the CPU
-> backend being single-threaded.
+> the manual method documented below.
#### Manual
diff --git a/tensorflow/docs_src/performance/xla/operation_semantics.md b/tensorflow/docs_src/performance/xla/operation_semantics.md
index ce43d09b63..02af71f8a3 100644
--- a/tensorflow/docs_src/performance/xla/operation_semantics.md
+++ b/tensorflow/docs_src/performance/xla/operation_semantics.md
@@ -1,7 +1,7 @@
# Operation Semantics
The following describes the semantics of operations defined in the
-[`XlaBuilder`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_client/xla_builder.h)
+[`XlaBuilder`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_builder.h)
interface. Typically, these operations map one-to-one to operations defined in
the RPC interface in
[`xla_data.proto`](https://www.tensorflow.org/code/tensorflow/compiler/xla/xla_data.proto).
@@ -13,10 +13,83 @@ arbitrary-dimensional array. For convenience, special cases have more specific
and familiar names; for example a *vector* is a 1-dimensional array and a
*matrix* is a 2-dimensional array.
+## AllToAll
+
+See also
+[`XlaBuilder::AllToAll`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_builder.h).
+
+Alltoall is a collective operation that sends data from all cores to all cores.
+It has two phases:
+
+1. the scatter phase. On each core, the operand is split into `split_count`
+   blocks along the `split_dimension`, and the blocks are scattered
+   to all cores, e.g., the ith block is sent to the ith core.
+2. the gather phase. Each core concatenates the received blocks along the
+ `concat_dimension`.
+
+The participating cores can be configured by:
+
+- `replica_groups`: each `ReplicaGroup` contains a list of replica ids. If
+  empty, all replicas belong to one group in the order of 0 - (n-1). Alltoall
+  is applied within subgroups in the specified order. For example, replica
+  groups = {{1,2,3},{4,5,0}} means that an Alltoall is applied within replicas
+  1, 2, and 3, and in the gather phase the received blocks are concatenated
+  in the order of 1, 2, 3; another Alltoall is applied within replicas 4, 5,
+  and 0, and the concatenation order is 4, 5, 0.
+
+Prerequisites:
+
+- The dimension size of the operand on the `split_dimension` is divisible by
+  `split_count`.
+- The operand's shape is not a tuple.
+
+<b> `AllToAll(operand, split_dimension, concat_dimension, split_count,
+replica_groups)` </b>
+
+
+| Arguments | Type | Semantics |
+| ------------------ | --------------------- | ------------------------------- |
+| `operand` | `XlaOp` | n dimensional input array |
+| `split_dimension` | `int64` | A value in the interval `[0, |
+: : : n)` that names the dimension :
+: : : along which the operand is :
+: : : split :
+| `concat_dimension` | `int64` | a value in the interval `[0, |
+: : : n)` that names the dimension :
+: : : along which the split blocks :
+: : : are concatenated :
+| `split_count` | `int64` | the number of cores that |
+: : : participate in this operation. If :
+: : : `replica_groups` is empty, this :
+: : : should be the number of :
+: : : replicas; otherwise, this :
+: : : should be equal to the number :
+: : : of replicas in each group. :
+| `replica_groups` | `ReplicaGroup` vector | each group contains a list of |
+: : : replica id. :
+
+Below is an example of Alltoall.
+
+```
+XlaBuilder b("alltoall");
+auto x = Parameter(&b, 0, ShapeUtil::MakeShape(F32, {4, 16}), "x");
+AllToAll(x, /*split_dimension=*/1, /*concat_dimension=*/0, /*split_count=*/4);
+```
+
+<div style="width:95%; margin:auto; margin-bottom:10px; margin-top:20px;">
+ <img style="width:100%" src="../../images/xla/ops_alltoall.png">
+</div>
+
+In this example, there are 4 cores participating in the Alltoall. On each core,
+the operand is split into 4 parts along dimension 1, so each part has shape
+f32[4,4]. The 4 parts are scattered to all cores. Then each core concatenates
+the received parts along dimension 0, in the order of cores 0-3. So the output
+on each core has shape f32[16,4].
+
## BatchNormGrad
See also
-[`XlaBuilder::BatchNormGrad`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_client/xla_builder.h)
+[`XlaBuilder::BatchNormGrad`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_builder.h)
and [the original batch normalization paper](https://arxiv.org/abs/1502.03167)
for a detailed description of the algorithm.
@@ -80,7 +153,7 @@ The output type is a tuple of three handles:
## BatchNormInference
See also
-[`XlaBuilder::BatchNormInference`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_client/xla_builder.h)
+[`XlaBuilder::BatchNormInference`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_builder.h)
and [the original batch normalization paper](https://arxiv.org/abs/1502.03167)
for a detailed description of the algorithm.
@@ -115,7 +188,7 @@ The output is an n-dimensional, normalized array with the same shape as input
## BatchNormTraining
See also
-[`XlaBuilder::BatchNormTraining`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_client/xla_builder.h)
+[`XlaBuilder::BatchNormTraining`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_builder.h)
and [`the original batch normalization paper`](https://arxiv.org/abs/1502.03167)
for a detailed description of the algorithm.
@@ -167,7 +240,7 @@ spatial dimensions using the formulas above.
## BitcastConvertType
See also
-[`XlaBuilder::BitcastConvertType`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_client/xla_builder.h).
+[`XlaBuilder::BitcastConvertType`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_builder.h).
Similar to a `tf.bitcast` in TensorFlow, performs an element-wise bitcast
operation from a data shape to a target shape. The dimensions must match, and
@@ -189,7 +262,7 @@ and destination element types must not be tuples.
## Broadcast
See also
-[`XlaBuilder::Broadcast`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_client/xla_builder.h).
+[`XlaBuilder::Broadcast`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_builder.h).
Adds dimensions to an array by duplicating the data in the array.
@@ -217,7 +290,7 @@ For example, if `operand` is a scalar `f32` with value `2.0f`, and
## Call
See also
-[`XlaBuilder::Call`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_client/xla_builder.h).
+[`XlaBuilder::Call`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_builder.h).
Invokes a computation with the given arguments.
@@ -236,7 +309,7 @@ The arity and types of the `args` must match the parameters of the
## Clamp
See also
-[`XlaBuilder::Clamp`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_client/xla_builder.h).
+[`XlaBuilder::Clamp`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_builder.h).
Clamps an operand to within the range between a minimum and maximum value.
@@ -269,8 +342,8 @@ Clamp(min, operand, max) = s32[3]{0, 5, 6};
## Collapse
See also
-[`XlaBuilder::Collapse`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_client/xla_builder.h)
-and the @{tf.reshape} operation.
+[`XlaBuilder::Collapse`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_builder.h)
+and the `tf.reshape` operation.
Collapses dimensions of an array into one dimension.
@@ -291,7 +364,7 @@ same position in the dimension sequence as those they replace, with the new
dimension size equal to the product of original dimension sizes. The lowest
dimension number in `dimensions` is the slowest varying dimension (most major)
in the loop nest which collapses these dimension, and the highest dimension
-number is fastest varying (most minor). See the @{tf.reshape} operator
+number is fastest varying (most minor). See the `tf.reshape` operator
if more general collapse ordering is needed.
For example, let v be an array of 24 elements:
@@ -332,7 +405,7 @@ then v12 == f32[8x3] {{10, 11, 12},
## Concatenate
See also
-[`XlaBuilder::ConcatInDim`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_client/xla_builder.h).
+[`XlaBuilder::ConcatInDim`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_builder.h).
Concatenate composes an array from multiple array operands. The array is of the
same rank as each of the input array operands (which must be of the same rank as
@@ -388,7 +461,7 @@ Diagram:
## Conditional
See also
-[`XlaBuilder::Conditional`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_client/xla_builder.h).
+[`XlaBuilder::Conditional`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_builder.h).
<b> `Conditional(pred, true_operand, true_computation, false_operand,
false_computation)` </b>
@@ -416,7 +489,7 @@ executed depending on the value of `pred`.
## Conv (convolution)
See also
-[`XlaBuilder::Conv`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_client/xla_builder.h).
+[`XlaBuilder::Conv`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_builder.h).
As ConvWithGeneralPadding, but the padding is specified in a short-hand way as
either SAME or VALID. SAME padding pads the input (`lhs`) with zeroes so that
@@ -426,7 +499,7 @@ account. VALID padding simply means no padding.
## ConvWithGeneralPadding (convolution)
See also
-[`XlaBuilder::ConvWithGeneralPadding`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_client/xla_builder.h).
+[`XlaBuilder::ConvWithGeneralPadding`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_builder.h).
Computes a convolution of the kind used in neural networks. Here, a convolution
can be thought of as a n-dimensional window moving across a n-dimensional base
@@ -490,8 +563,8 @@ array. The holes are filled with a no-op value, which for convolution means
zeroes.
Dilation of the rhs is also called atrous convolution. For more details, see
-@{tf.nn.atrous_conv2d}. Dilation of the lhs is also called transposed
-convolution. For more details, see @{tf.nn.conv2d_transpose}.
+`tf.nn.atrous_conv2d`. Dilation of the lhs is also called transposed
+convolution. For more details, see `tf.nn.conv2d_transpose`.
The output shape has these dimensions, in this order:
@@ -538,7 +611,7 @@ for (b, oz, oy, ox) { // output coordinates
## ConvertElementType
See also
-[`XlaBuilder::ConvertElementType`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_client/xla_builder.h).
+[`XlaBuilder::ConvertElementType`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_builder.h).
Similar to an element-wise `static_cast` in C++, performs an element-wise
conversion operation from a data shape to a target shape. The dimensions must
@@ -572,7 +645,7 @@ then b == f32[3]{0.0, 1.0, 2.0}
## CrossReplicaSum
See also
-[`XlaBuilder::CrossReplicaSum`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_client/xla_builder.h).
+[`XlaBuilder::CrossReplicaSum`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_builder.h).
Computes a sum across replicas.
@@ -607,7 +680,7 @@ than another.
## CustomCall
See also
-[`XlaBuilder::CustomCall`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_client/xla_builder.h).
+[`XlaBuilder::CustomCall`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_builder.h).
Call a user-provided function within a computation.
@@ -668,7 +741,7 @@ idempotent.
## Dot
See also
-[`XlaBuilder::Dot`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_client/xla_builder.h).
+[`XlaBuilder::Dot`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_builder.h).
<b> `Dot(lhs, rhs)` </b>
@@ -697,7 +770,7 @@ multiplications or matrix/matrix multiplications.
## DotGeneral
See also
-[`XlaBuilder::DotGeneral`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_client/xla_builder.h).
+[`XlaBuilder::DotGeneral`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_builder.h).
<b> `DotGeneral(lhs, rhs, dimension_numbers)` </b>
@@ -784,15 +857,13 @@ non-contracting/non-batch dimension.
## DynamicSlice
See also
-[`XlaBuilder::DynamicSlice`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_client/xla_builder.h).
+[`XlaBuilder::DynamicSlice`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_builder.h).
DynamicSlice extracts a sub-array from the input array at dynamic
`start_indices`. The size of the slice in each dimension is passed in
`size_indices`, which specify the end point of exclusive slice intervals in each
dimension: [start, start + size). The shape of `start_indices` must be rank ==
1, with dimension size equal to the rank of `operand`.
-Note: handling of out-of-bounds slice indices (generated by incorrect runtime
-calculation of 'start_indices') is currently implementation-defined.
<b> `DynamicSlice(operand, start_indices, size_indices)` </b>
@@ -812,6 +883,17 @@ calculation of 'start_indices') is currently implementation-defined.
: : : dimension to avoid wrapping modulo :
: : : dimension size. :
+The effective slice indices are computed by applying the following
+transformation for each index `i` in `[0, N)` before performing the slice:
+
+```
+start_indices[i] = clamp(start_indices[i], 0, operand.dimension_size[i] - size_indices[i])
+```
+
+This ensures that the extracted slice is always in-bounds with respect to the
+operand array. If the slice is in-bounds before the transformation is applied,
+the transformation has no effect.
+
1-dimensional example:
```
@@ -839,7 +921,7 @@ DynamicSlice(b, s, {2, 2}) produces:
## DynamicUpdateSlice
See also
-[`XlaBuilder::DynamicUpdateSlice`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_client/xla_builder.h).
+[`XlaBuilder::DynamicUpdateSlice`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_builder.h).
DynamicUpdateSlice generates a result which is the value of the input array
`operand`, with a slice `update` overwritten at `start_indices`.
@@ -847,8 +929,6 @@ The shape of `update` determines the shape of the sub-array of the result which
is updated.
The shape of `start_indices` must be rank == 1, with dimension size equal to
the rank of `operand`.
-Note: handling of out-of-bounds slice indices (generated by incorrect runtime
-calculation of 'start_indices') is currently implementation-defined.
<b> `DynamicUpdateSlice(operand, update, start_indices)` </b>
@@ -866,6 +946,17 @@ calculation of 'start_indices') is currently implementation-defined.
: : : dimension. Value must be greater than or equal :
: : : to zero. :
+The effective slice indices are computed by applying the following
+transformation for each index `i` in `[0, N)` before performing the slice:
+
+```
+start_indices[i] = clamp(start_indices[i], 0, operand.dimension_size[i] - update.dimension_size[i])
+```
+
+This ensures that the updated slice is always in-bounds with respect to the
+operand array. If the slice is in-bounds before the transformation is applied,
+the transformation has no effect.
+
1-dimensional example:
```
@@ -902,7 +993,7 @@ DynamicUpdateSlice(b, u, s) produces:
## Element-wise binary arithmetic operations
See also
-[`XlaBuilder::Add`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_client/xla_builder.h).
+[`XlaBuilder::Add`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_builder.h).
A set of element-wise binary arithmetic operations is supported.
@@ -947,7 +1038,7 @@ shapes of both operands. The semantics are described in detail on the
## Element-wise comparison operations
See also
-[`XlaBuilder::Eq`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_client/xla_builder.h).
+[`XlaBuilder::Eq`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_builder.h).
A set of standard element-wise binary comparison operations is supported. Note
that standard IEEE 754 floating-point comparison semantics apply when comparing
@@ -1033,7 +1124,7 @@ potentially different runtime offset) of an input tensor into an output tensor.
### General Semantics
See also
-[`XlaBuilder::Gather`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_client/xla_builder.h).
+[`XlaBuilder::Gather`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_builder.h).
For a more intuitive description, see the "Informal Description" section below.
<b> `gather(operand, gather_indices, output_window_dims, elided_window_dims, window_bounds, gather_dims_to_operand_dims)` </b>
@@ -1236,7 +1327,7 @@ concatenation of all these rows.
## GetTupleElement
See also
-[`XlaBuilder::GetTupleElement`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_client/xla_builder.h).
+[`XlaBuilder::GetTupleElement`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_builder.h).
Indexes into a tuple with a compile-time-constant value.
@@ -1252,12 +1343,12 @@ let t: (f32[10], s32) = tuple(v, s);
let element_1: s32 = gettupleelement(t, 1); // Inferred shape matches s32.
```
-See also @{tf.tuple}.
+See also `tf.tuple`.
## Infeed
See also
-[`XlaBuilder::Infeed`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_client/xla_builder.h).
+[`XlaBuilder::Infeed`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_builder.h).
<b> `Infeed(shape)` </b>
@@ -1293,17 +1384,30 @@ Infeed of the device.
> which case the compiler will provide information about how the Infeed
> operations are serialized in the compiled program.
+## Iota
+
+<b> `Iota(type, size)` </b>
+
+Builds a constant literal on device rather than a potentially large host
+transfer. Creates a rank 1 tensor of values starting at zero and incrementing
+by one.
+
+Arguments | Type | Semantics
+------------------ | --------------- | ---------------------------
+`type` | `PrimitiveType` | The element type of the produced tensor.
+`size` | `int64` | The number of elements in the tensor.
+
+For example, `Iota(S32, 4)` produces `s32[4] {0, 1, 2, 3}`.
+
## Map
See also
-[`XlaBuilder::Map`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_client/xla_builder.h).
+[`XlaBuilder::Map`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_builder.h).
<b> `Map(operands..., computation)` </b>
| Arguments | Type | Semantics |
| ----------------- | ---------------------- | ------------------------------ |
| `operands` | sequence of N `XlaOp`s | N arrays of types T_0..T_{N-1} |
-| `computation` | `XlaComputation` | computation of type `T_0, T_1, |
+| `computation` | `XlaComputation` | computation of type `T_0, T_1, |
: : : ..., T_{N + M -1} -> S` with N :
: : : parameters of type T and M of :
: : : arbitrary type :
@@ -1325,7 +1429,7 @@ input arrays to produce the output array.
## Pad
See also
-[`XlaBuilder::Pad`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_client/xla_builder.h).
+[`XlaBuilder::Pad`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_builder.h).
<b> `Pad(operand, padding_value, padding_config)` </b>
@@ -1364,7 +1468,7 @@ are all 0. The figure below shows examples of different `edge_padding` and
## Recv
See also
-[`XlaBuilder::Recv`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_client/xla_builder.h).
+[`XlaBuilder::Recv`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_builder.h).
<b> `Recv(shape, channel_handle)` </b>
@@ -1398,21 +1502,31 @@ complete and returns the received data.
## Reduce
See also
-[`XlaBuilder::Reduce`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_client/xla_builder.h).
+[`XlaBuilder::Reduce`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_builder.h).
+
+Applies a reduction function to one or more arrays in parallel.
-Applies a reduction function to an array.
+<b> `Reduce(operands..., init_values..., computation, dimensions)` </b>
-<b> `Reduce(operand, init_value, computation, dimensions)` </b>
+Arguments | Type | Semantics
+------------- | --------------------- | ---------------------------------------
+`operands` | Sequence of N `XlaOp` | N arrays of types `T_0, ..., T_N`.
+`init_values` | Sequence of N `XlaOp` | N scalars of types `T_0, ..., T_N`.
+`computation` | `XlaComputation` | computation of type
+ : : `T_0, ..., T_N, T_0, ..., T_N -> Collate(T_0, ..., T_N)`
+`dimensions` | `int64` array | unordered array of dimensions to reduce
-Arguments | Type | Semantics
-------------- | ---------------- | ---------------------------------------
-`operand` | `XlaOp` | array of type `T`
-`init_value` | `XlaOp` | scalar of type `T`
-`computation` | `XlaComputation` | computation of type `T, T -> T`
-`dimensions` | `int64` array | unordered array of dimensions to reduce
+Where:
+* N is required to be greater than or equal to 1.
+* All input arrays must have the same dimensions.
+* If `N = 1`, `Collate(T)` is `T`.
+* If `N > 1`, `Collate(T_0, ..., T_N)` is a tuple of `N` elements of types
+  `T_0, ..., T_N`.
-This operation reduces one or more dimensions of the input array into scalars.
-The rank of the returned array is `rank(operand) - len(dimensions)`.
+The output of the op is `Collate(Q_0, ..., Q_N)` where `Q_i` is an array of type
+`T_i`, the dimensions of which are described below.
+
+This operation reduces one or more dimensions of each input array into scalars.
+The rank of each returned array is `rank(operand) - len(dimensions)`.
`init_value` is the initial value used for every reduction and may be inserted
anywhere during computation by the back-end. In most cases, `init_value` is an
identity of the reduction function (for example, 0 for addition). The applied
@@ -1428,9 +1542,9 @@ enough to being associative for most practical uses. It is possible to conceive
of some completely non-associative reductions, however, and these will produce
incorrect or unpredictable results in XLA reductions.
-As an example, when reducing across the one dimension in a 1D array with values
-[10, 11, 12, 13], with reduction function `f` (this is `computation`) then that
-could be computed as
+As an example, when reducing across one dimension in a single 1D array with
+values [10, 11, 12, 13], with reduction function `f` (this is `computation`)
+then that could be computed as
`f(10, f(11, f(12, f(init_value, 13))))`
@@ -1512,10 +1626,38 @@ the 1D array `| 20 28 36 |`.
Reducing the 3D array over all its dimensions produces the scalar `84`.
+When `N > 1`, reduce function application is slightly more complex, as it is
+applied simultaneously to all inputs. For example, consider the following
+reduction function, which can be used to compute the max and the argmax of
+a 1-D tensor in parallel:
+
+```
+f: (Float, Int, Float, Int) -> Float, Int
+f(max, argmax, value, index):
+  if value >= max:
+ return (value, index)
+ else:
+ return (max, argmax)
+```
+
+For 1-D input arrays `V = Float[N], K = Int[N]` and init values
+`I_V = Float, I_K = Int`, the result `f_(N-1)` of reducing across the only
+input dimension is equivalent to the following recursive application:
+```
+f_0 = f(I_V, I_K, V_0, K_0)
+f_1 = f(f_0.first, f_0.second, V_1, K_1)
+...
+f_(N-1) = f(f_(N-2).first, f_(N-2).second, V_(N-1), K_(N-1))
+```
+
+Applying this reduction to an array of values and an array of sequential
+indices (i.e. iota) co-iterates over the two arrays and returns a tuple
+containing the maximal value and the matching index.
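+
+A plain-Python sketch of this recursion (not XLA code; the inputs are
+illustrative):
+
+```
+def reduce_max_argmax(values, indices, init=(float("-inf"), -1)):
+  max_v, argmax_k = init
+  for v, k in zip(values, indices):
+    # Mirrors f above: keep the pair whose value is largest.
+    if v >= max_v:
+      max_v, argmax_k = v, k
+  return max_v, argmax_k
+
+print(reduce_max_argmax([10.0, 13.0, 11.0], [0, 1, 2]))  # (13.0, 1)
+```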
+
## ReducePrecision
See also
-[`XlaBuilder::ReducePrecision`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_client/xla_builder.h).
+[`XlaBuilder::ReducePrecision`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_builder.h).
Models the effect of converting floating-point values to a lower-precision
format (such as IEEE-FP16) and back to the original format. The number of
@@ -1546,7 +1688,7 @@ portion of the conversion is then simply a no-op.
## ReduceWindow
See also
-[`XlaBuilder::ReduceWindow`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_client/xla_builder.h).
+[`XlaBuilder::ReduceWindow`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_builder.h).
Applies a reduction function to all elements in each window of the input
multi-dimensional array, producing an output multi-dimensional array with the
@@ -1629,7 +1771,7 @@ context of [`Reduce`](#reduce) for more details.
## Reshape
See also
-[`XlaBuilder::Reshape`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_client/xla_builder.h)
+[`XlaBuilder::Reshape`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_builder.h)
and the [`Collapse`](#collapse) operation.
Reshapes the dimensions of an array into a new configuration.
@@ -1710,7 +1852,7 @@ Reshape(5, {}, {1,1}) == f32[1x1] {{5}};
## Rev (reverse)
See also
-[`XlaBuilder::Rev`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_client/xla_builder.h).
+[`XlaBuilder::Rev`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_builder.h).
<b>`Rev(operand, dimensions)`</b>
@@ -1732,7 +1874,7 @@ the two window dimensions during the gradient computation in neural networks.
## RngNormal
See also
-[`XlaBuilder::RngNormal`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_client/xla_builder.h).
+[`XlaBuilder::RngNormal`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_builder.h).
Constructs an output of a given shape with random numbers generated following
the $$N(\mu, \sigma)$$ normal distribution. The parameters `mu` and `sigma`, and
@@ -1752,7 +1894,7 @@ be scalar valued.
## RngUniform
See also
-[`XlaBuilder::RngUniform`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_client/xla_builder.h).
+[`XlaBuilder::RngUniform`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_builder.h).
Constructs an output of a given shape with random numbers generated following
the uniform distribution over the interval $$[a,b)$$. The parameters and output
@@ -1770,10 +1912,142 @@ is implementation-defined.
: : : limit of interval :
| `shape` | `Shape` | Output shape of type T |
+## Scatter
+
+The XLA scatter operation generates a result which is the value of the input
+tensor `operand`, with several slices (at indices specified by
+`scatter_indices`) updated with the values in `updates` using
+`update_computation`.
+
+See also
+[`XlaBuilder::Scatter`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_builder.h).
+
+<b> `scatter(operand, scatter_indices, updates, update_computation, index_vector_dim, update_window_dims, inserted_window_dims, scatter_dims_to_operand_dims)` </b>
+
+|Arguments | Type | Semantics |
+|------------------|------------------------|----------------------------------|
+|`operand` | `XlaOp` | Tensor to be scattered into. |
+|`scatter_indices` | `XlaOp` | Tensor containing the starting |
+: : : indices of the slices that must :
+: : : be scattered to. :
+|`updates` | `XlaOp` | Tensor containing the values that|
+: : : must be used for scattering. :
+|`update_computation`| `XlaComputation` | Computation to be used for |
+: : : combining the existing values in :
+: : : the input tensor and the updates :
+: : : during scatter. This computation :
+: : : should be of type `T, T -> T`. :
+|`index_vector_dim`| `int64` | The dimension in |
+: : : `scatter_indices` that contains :
+: : : the starting indices. :
+|`update_window_dims`| `ArraySlice<int64>` | The set of dimensions in |
+: : : `updates` shape that are _window :
+: : : dimensions_. :
+|`inserted_window_dims`| `ArraySlice<int64>`| The set of _window dimensions_ |
+: : : that must be inserted into :
+: : : `updates` shape. :
+|`scatter_dims_to_operand_dims`| `ArraySlice<int64>` | A dimensions map from |
+: : : the scatter indices to the :
+: : : operand index space. This array :
+: : : is interpreted as mapping `i` to :
+: : : `scatter_dims_to_operand_dims[i]`:
+: : : . It has to be one-to-one and :
+: : : total. :
+
+If `index_vector_dim` is equal to `scatter_indices.rank`, we implicitly
+consider `scatter_indices` to have a trailing `1` dimension.
+
+We define `update_scatter_dims` of type `ArraySlice<int64>` as the set of
+dimensions in `updates` shape that are not in `update_window_dims`, in ascending
+order.
+
+The arguments of scatter should follow these constraints:
+
+ - The `updates` tensor must be of rank `update_window_dims.size +
+ scatter_indices.rank - 1`.
+
+ - Bounds of dimension `i` in `updates` must conform to the following:
+ - If `i` is present in `update_window_dims` (i.e. equal to
+ `update_window_dims`[`k`] for some `k`), then the bound of dimension
+ `i` in `updates` must not exceed the corresponding bound of `operand`
+ after accounting for the `inserted_window_dims` (i.e.
+ `adjusted_window_bounds`[`k`], where `adjusted_window_bounds` contains
+ the bounds of `operand` with the bounds at indices
+ `inserted_window_dims` removed).
+ - If `i` is present in `update_scatter_dims` (i.e. equal to
+ `update_scatter_dims`[`k`] for some `k`), then the bound of dimension
+ `i` in `updates` must be equal to the corresponding bound of
+ `scatter_indices`, skipping `index_vector_dim` (i.e.
+ `scatter_indices.shape.dims`[`k`], if `k` < `index_vector_dim` and
+ `scatter_indices.shape.dims`[`k+1`] otherwise).
+
+ - `update_window_dims` must be in ascending order, not have any repeating
+ dimension numbers, and be in the range `[0, updates.rank)`.
+
+ - `inserted_window_dims` must be in ascending order, not have any
+ repeating dimension numbers, and be in the range `[0, operand.rank)`.
+
+ - `scatter_dims_to_operand_dims.size` must be equal to the bound of
+ dimension `index_vector_dim` in `scatter_indices` (i.e.
+ `scatter_indices.shape.dims`[`index_vector_dim`]), and its values must be
+ in the range `[0, operand.rank)`.
+
+For a given index `U` in the `updates` tensor, the corresponding index `I` in
+the `operand` tensor into which this update has to be applied is computed as
+follows:
+
+ 1. Let `G` = { `U`[`k`] for `k` in `update_scatter_dims` }. Use `G` to look up
+ an index vector `S` in the `scatter_indices` tensor such that `S`[`i`] =
+ `scatter_indices`[Combine(`G`, `i`)], where Combine(A, b) inserts b at
+ position `index_vector_dim` into A.
+ 2. Create an index `S`<sub>`in`</sub> into `operand` using `S` by scattering
+ `S` using the `scatter_dims_to_operand_dims` map. More formally:
+ 1. `S`<sub>`in`</sub>[`scatter_dims_to_operand_dims`[`k`]] = `S`[`k`] if
+ `k` < `scatter_dims_to_operand_dims.size`.
+ 2. `S`<sub>`in`</sub>[`_`] = `0` otherwise.
+ 3. Create an index `W`<sub>`in`</sub> into `operand` by scattering the indices
+ at `update_window_dims` in `U` according to `inserted_window_dims`.
+ More formally:
+ 1. `W`<sub>`in`</sub>[`window_dims_to_operand_dims`(`k`)] = `U`[`k`] if
+ `k` < `update_window_dims.size`, where `window_dims_to_operand_dims`
+ is the monotonic function with domain [`0`, `update_window_dims.size`)
+ and range [`0`, `operand.rank`) \\ `inserted_window_dims`. (For
+ example, if `update_window_dims.size` is `4`, `operand.rank` is `6`,
+ and `inserted_window_dims` is {`0`, `2`} then
+ `window_dims_to_operand_dims` is {`0`→`1`, `1`→`3`, `2`→`4`,
+ `3`→`5`}).
+ 2. `W`<sub>`in`</sub>[`_`] = `0` otherwise.
+ 4. `I` is `W`<sub>`in`</sub> + `S`<sub>`in`</sub> where + is element-wise
+ addition (see the worked example below).
+
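+The following Python sketch walks through steps 1-4 above for one small,
+hypothetical configuration (the shapes and attribute values are illustrative
+only): each row of `updates` lands at the operand row selected by
+`scatter_indices`.
+
+```
+# Hypothetical configuration (not the XLA API):
+#   operand: f32[3,3], scatter_indices: s32[2,1] with index_vector_dim = 1,
+#   updates: f32[2,3], update_window_dims = {1},
+#   inserted_window_dims = {0}, scatter_dims_to_operand_dims = {0}.
+scatter_indices = [[2], [0]]
+
+for u0 in range(2):        # component of U in update_scatter_dims ({0})
+    for u1 in range(3):    # component of U in update_window_dims ({1})
+        S = scatter_indices[u0]     # step 1: G = {u0} selects index vector S
+        S_in = (S[0], 0)            # step 2: operand dim 0 <- S[0]
+        W_in = (0, u1)              # step 3: window dim 0 maps to operand dim 1
+        I = tuple(s + w for s, w in zip(S_in, W_in))  # step 4
+        print(f"updates[{u0},{u1}] -> operand{I}")
+# Every row updates[u0, :] is applied to operand row scatter_indices[u0][0].
+```
+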
+In summary, the scatter operation can be defined as follows.
+
+ - Initialize `output` with `operand`, i.e. for all indices `O` in the
+ `operand` tensor:\
+ `output`[`O`] = `operand`[`O`]
+ - For every index `U` in the `updates` tensor and the corresponding index `O`
+ in the `operand` tensor:\
+ `output`[`O`] = `update_computation`(`output`[`O`], `updates`[`U`])
+
+The order in which updates are applied is non-deterministic. So, when multiple
+indices in `updates` refer to the same index in `operand`, the corresponding
+value in `output` will be non-deterministic.
+
+Note that the first parameter that is passed into the `update_computation` will
+always be the current value from the `output` tensor and the second parameter
+will always be the value from the `updates` tensor. This is important
+specifically for cases when the `update_computation` is _not commutative_.
+
+Informally, the scatter op can be viewed as an _inverse_ of the gather op, i.e.
+the scatter op updates the elements in the input that are extracted by the
+corresponding gather op.
+
+For a detailed informal description and examples, refer to the
+"Informal Description" section under `Gather`.
+
## Select
See also
-[`XlaBuilder::Select`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_client/xla_builder.h).
+[`XlaBuilder::Select`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_builder.h).
Constructs an output array from elements of two input arrays, based on the
values of a predicate array.
@@ -1824,7 +2098,7 @@ the same shape!) then `pred` has to be a scalar of type `PRED`.
## SelectAndScatter
See also
-[`XlaBuilder::SelectAndScatter`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_client/xla_builder.h).
+[`XlaBuilder::SelectAndScatter`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_builder.h).
This operation can be considered as a composite operation that first computes
`ReduceWindow` on the `operand` array to select an element from each window, and
@@ -1904,7 +2178,7 @@ context of [`Reduce`](#reduce) for more details.
## Send
See also
-[`XlaBuilder::Send`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_client/xla_builder.h).
+[`XlaBuilder::Send`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_builder.h).
<b> `Send(operand, channel_handle)` </b>
@@ -1959,7 +2233,7 @@ computations. For example, below schedules lead to deadlocks.
## Slice
See also
-[`XlaBuilder::Slice`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_client/xla_builder.h).
+[`XlaBuilder::Slice`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_builder.h).
Slicing extracts a sub-array from the input array. The sub-array is of the same
rank as the input and contains the values inside a bounding box within the input
@@ -2008,19 +2282,48 @@ Slice(b, {2, 1}, {4, 3}) produces:
## Sort
See also
-[`XlaBuilder::Sort`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_client/xla_builder.h).
+[`XlaBuilder::Sort`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_builder.h).
-Sorts the elements in the operand.
+There are two versions of the Sort instruction: a single-operand and a
+two-operand version.
-<b>`Sort(operand)`</b>
+<b>`Sort(operand, dimension)`</b>
-Arguments | Type | Semantics
---------- | ------- | -------------------
-`operand` | `XlaOp` | The operand to sort
+Arguments | Type | Semantics
+----------- | ------- | --------------------
+`operand` | `XlaOp` | The operand to sort.
+`dimension` | `int64` | The dimension along which to sort.
+
+Sorts the elements in the operand in ascending order along the provided
+dimension. For example, for a rank-2 (matrix) operand, a `dimension` value of 0
+will sort each column independently, and a `dimension` value of 1 will sort each
+row independently. If the operand's elements have floating point type, and the
+operand contains NaN elements, the order of elements in the output is
+implementation-defined.
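+
+For intuition, the rank-2 case can be expressed with NumPy, where `axis` plays
+the role of `dimension` (an analogy we use for illustration only):
+
+```
+import numpy as np
+
+a = np.array([[3, 1],
+              [2, 4]])
+print(np.sort(a, axis=0))  # dimension 0: each column sorted independently
+# [[2 1]
+#  [3 4]]
+print(np.sort(a, axis=1))  # dimension 1: each row sorted independently
+# [[1 3]
+#  [2 4]]
+```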
+
+<b>`Sort(keys, values, dimension)`</b>
+
+Sorts both the key and the value operands. The keys are sorted as in the
+single-operand version. The values are sorted according to the order of their
+corresponding keys. For example, if the inputs are `keys = [3, 1]` and
+`values = [42, 50]`, then the output of the sort is the tuple
+`{[1, 3], [50, 42]}`.
+
+The sort is not guaranteed to be stable; that is, if the keys array contains
+duplicates, the order of their corresponding values may not be preserved.
+
+Arguments | Type | Semantics
+----------- | ------- | -------------------
+`keys` | `XlaOp` | The sort keys.
+`values` | `XlaOp` | The values to sort.
+`dimension` | `int64` | The dimension along which to sort.
+
+The `keys` and `values` must have the same dimensions, but may have different
+element types.
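+
+A plain-Python analogue (for illustration only) makes the key/value coupling
+explicit; note that Python's `sorted` is stable, whereas the XLA sort makes no
+stability guarantee:
+
+```
+keys = [3, 1]
+values = [42, 50]
+
+# Sort positions by key, then apply the same permutation to both arrays.
+order = sorted(range(len(keys)), key=lambda i: keys[i])
+print([keys[i] for i in order])    # [1, 3]
+print([values[i] for i in order])  # [50, 42]
+```
+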
## Transpose
-See also the @{tf.reshape} operation.
+See also the `tf.reshape` operation.
<b>`Transpose(operand)`</b>
@@ -2039,7 +2342,7 @@ This is the same as Reshape(operand, permutation,
## Tuple
See also
-[`XlaBuilder::Tuple`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_client/xla_builder.h).
+[`XlaBuilder::Tuple`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_builder.h).
A tuple containing a variable number of data handles, each of which has its own
shape.
@@ -2058,7 +2361,7 @@ Tuples can be deconstructed (accessed) via the [`GetTupleElement`]
## While
See also
-[`XlaBuilder::While`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_client/xla_builder.h).
+[`XlaBuilder::While`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_builder.h).
<b> `While(condition, body, init)` </b>
diff --git a/tensorflow/docs_src/performance/xla/tfcompile.md b/tensorflow/docs_src/performance/xla/tfcompile.md
index 8521d7eacb..e4b803164f 100644
--- a/tensorflow/docs_src/performance/xla/tfcompile.md
+++ b/tensorflow/docs_src/performance/xla/tfcompile.md
@@ -205,10 +205,7 @@ representing the inputs, `results` representing the outputs, and `temps`
representing temporary buffers used internally to perform the computation. By
default, each instance of the generated class allocates and manages all of these
buffers for you. The `AllocMode` constructor argument may be used to change this
-behavior. A convenience library is provided in
-[`tensorflow/compiler/aot/runtime.h`](https://www.tensorflow.org/code/tensorflow/compiler/aot/runtime.h)
-to help with manual buffer allocation; usage of this library is optional. All
-buffers should be aligned to 32-byte boundaries.
+behavior. All buffers are aligned to 64-byte boundaries.
The generated C++ class is just a wrapper around the low-level code generated by
XLA.
diff --git a/tensorflow/docs_src/get_started/_index.yaml b/tensorflow/docs_src/tutorials/_index.yaml
index 4060804892..9534114689 100644
--- a/tensorflow/docs_src/get_started/_index.yaml
+++ b/tensorflow/docs_src/tutorials/_index.yaml
@@ -2,6 +2,7 @@ project_path: /_project.yaml
book_path: /_book.yaml
description: <!--no description-->
landing_page:
+ custom_css_path: /site-assets/css/style.css
show_side_navs: True
rows:
- description: >
@@ -14,57 +15,6 @@ landing_page:
</p>
items:
- custom_html: >
- <style>
- .tfo-button-primary {
- background-color: #fca851;
- }
- .tfo-button-primary:hover {
- background-color: #ef6c02;
- }
-
- a.colab-button {
- display: inline-block;
- background: rgba(255, 255, 255, 0.75);
- padding: 4px 8px;
- border-radius: 4px;
- font-size: 11px!important;
- text-decoration: none;
- color:#aaa;border: none;
- font-weight: 300;
- border: solid 1px rgba(0, 0, 0, 0.08);
- border-bottom-color: rgba(0, 0, 0, 0.15);
- text-transform: uppercase;
- line-height: 16px
- }
- a.colab-button:hover {
- color: #666;
- background: white;
- border-color: rgba(0, 0, 0, 0.2);
- }
- a.colab-button span {
- background-image: url("/images/colab_logo_button.svg");
- background-repeat:no-repeat;background-size:20px;
- background-position-y:2px;display:inline-block;
- padding-left:24px;border-radius:4px;
- text-decoration:none;
- }
-
- /* adjust code block for smaller screens */
- @media screen and (max-width: 1000px) {
- .tfo-landing-row-item-code-block {
- flex-direction: column !important;
- }
- .tfo-landing-row-item-code-block > .devsite-landing-row-item-code {
- /*display: none;*/
- width: 100%;
- }
- }
- @media screen and (max-width: 720px) {
- .tfo-landing-row-item-code-block {
- display: none;
- }
- }
- </style>
<div class="devsite-landing-row-item-description">
<h3 class="hide-from-toc">Learn and use ML</h3>
<div class="devsite-landing-row-item-description-content">
@@ -75,11 +25,11 @@ landing_page:
<a href="/guide/keras">TensorFlow Keras guide</a>.
</p>
<ol style="padding-left:20px;">
- <li><a href="/get_started/basic_classification">Basic classification</a></li>
- <li><a href="/get_started/basic_text_classification">Text classification</a></li>
- <li><a href="/get_started/basic_regression">Regression</a></li>
- <li><a href="/get_started/overfit_and_underfit">Overfitting and underfitting</a></li>
- <li><a href="/get_started/save_and_restore_models">Save and load</a></li>
+ <li><a href="./keras/basic_classification">Basic classification</a></li>
+ <li><a href="./keras/basic_text_classification">Text classification</a></li>
+ <li><a href="./keras/basic_regression">Regression</a></li>
+ <li><a href="./keras/overfit_and_underfit">Overfitting and underfitting</a></li>
+ <li><a href="./keras/save_and_restore_models">Save and load</a></li>
</ol>
</div>
<div class="devsite-landing-row-item-buttons" style="margin-top:0;">
@@ -109,7 +59,7 @@ landing_page:
model.evaluate(x_test, y_test)
</pre>
{% dynamic if request.tld != 'cn' %}
- <a class="colab-button" target="_blank" href="https://colab.sandbox.google.com/github/tensorflow/models/blob/master/samples/core/get_started/_index.ipynb">Run in a <span>Notebook</span></a>
+ <a class="colab-button" target="_blank" href="https://colab.research.google.com/github/tensorflow/models/blob/master/samples/core/get_started/_index.ipynb">Run in a <span>Notebook</span></a>
{% dynamic endif %}
- items:
@@ -124,38 +74,38 @@ landing_page:
<ol style="padding-left:20px;">
<li>
{% dynamic if request.tld == 'cn' %}
- <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/notebooks/1_basics.ipynb" class="external">Eager execution basics</a>
+ <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/notebooks/eager_basics.ipynb" class="external">Eager execution basics</a>
{% dynamic else %}
- <a href="https://colab.sandbox.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/notebooks/1_basics.ipynb" class="external">Eager execution basics</a>
+ <a href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/notebooks/eager_basics.ipynb" class="external">Eager execution basics</a>
{% dynamic endif %}
</li>
<li>
{% dynamic if request.tld == 'cn' %}
- <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/notebooks/2_gradients.ipynb" class="external">Automatic differentiation and gradient tapes</a>
+ <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/notebooks/automatic_differentiation.ipynb" class="external">Automatic differentiation and gradient tape</a>
{% dynamic else %}
- <a href="https://colab.sandbox.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/notebooks/2_gradients.ipynb" class="external">Automatic differentiation and gradient tapes</a>
+ <a href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/notebooks/automatic_differentiation.ipynb" class="external">Automatic differentiation and gradient tape</a>
{% dynamic endif %}
</li>
<li>
{% dynamic if request.tld == 'cn' %}
- <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/notebooks/3_training_models.ipynb" class="external">Variables, models, and training</a>
+ <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/notebooks/custom_training.ipynb" class="external">Custom training: basics</a>
{% dynamic else %}
- <a href="https://colab.sandbox.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/notebooks/3_training_models.ipynb" class="external">Variables, models, and training</a>
+ <a href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/notebooks/custom_training.ipynb" class="external">Custom training: basics</a>
{% dynamic endif %}
</li>
<li>
{% dynamic if request.tld == 'cn' %}
- <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/notebooks/4_high_level.ipynb" class="external">Custom layers</a>
+ <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/notebooks/custom_layers.ipynb" class="external">Custom layers</a>
{% dynamic else %}
- <a href="https://colab.sandbox.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/notebooks/4_high_level.ipynb" class="external">Custom layers</a>
+ <a href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/notebooks/custom_layers.ipynb" class="external">Custom layers</a>
{% dynamic endif %}
</li>
- <li><a href="/get_started/eager">Custom training walkthrough</a></li>
+ <li><a href="./eager/custom_training_walkthrough">Custom training: walkthrough</a></li>
<li>
{% dynamic if request.tld == 'cn' %}
<a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/nmt_with_attention/nmt_with_attention.ipynb" class="external">Example: Neural machine translation w/ attention</a>
{% dynamic else %}
- <a href="https://colab.sandbox.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/nmt_with_attention/nmt_with_attention.ipynb" class="external">Example: Neural machine translation w/ attention</a>
+ <a href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/nmt_with_attention/nmt_with_attention.ipynb" class="external">Example: Neural machine translation w/ attention</a>
{% dynamic endif %}
</li>
</ol>
@@ -170,13 +120,16 @@ landing_page:
<div class="devsite-landing-row-item-description-content">
<p>
Estimators can train large models on multiple machines in a
- production environment. Try the examples below and read the
+ production environment. TensorFlow provides a collection of
+ pre-made Estimators to implement common ML algorithms. See the
<a href="/guide/estimators">Estimators guide</a>.
</p>
<ol style="padding-left: 20px;">
- <li><a href="/tutorials/text_classification_with_tf_hub">How to build a simple text classifier with TF-Hub</a></li>
- <li><a href="https://github.com/tensorflow/models/tree/master/official/boosted_trees">Classifying Higgs boson processes</a></li>
- <li><a href="/tutorials/wide_and_deep">Wide and deep learning using estimators</a></li>
+ <li><a href="/tutorials/estimators/linear">Build a linear model with Estimators</a></li>
+ <li><a href="https://github.com/tensorflow/models/tree/master/official/wide_deep" class="external">Wide and deep learning with Estimators</a></li>
+ <li><a href="https://github.com/tensorflow/models/tree/master/official/boosted_trees" class="external">Boosted trees</a></li>
+ <li><a href="/hub/tutorials/text_classification_with_tf_hub">How to build a simple text classifier with TF-Hub</a></li>
+ <li><a href="/tutorials/estimators/cnn">Build a Convolutional Neural Network using Estimators</a></li>
</ol>
</div>
<div class="devsite-landing-row-item-buttons">
@@ -187,7 +140,7 @@ landing_page:
- description: >
<h2 class="hide-from-toc">Google Colab&#58; An easy way to learn and use TensorFlow</h2>
<p>
- <a href="https://colab.sandbox.google.com/notebooks/welcome.ipynb" class="external">Colaboratory</a>
+ <a href="https://colab.research.google.com/notebooks/welcome.ipynb" class="external">Colaboratory</a>
is a Google research project created to help disseminate machine learning
education and research. It's a Jupyter notebook environment that requires
no setup to use and runs entirely in the cloud.
diff --git a/tensorflow/docs_src/tutorials/_toc.yaml b/tensorflow/docs_src/tutorials/_toc.yaml
new file mode 100644
index 0000000000..d33869af6e
--- /dev/null
+++ b/tensorflow/docs_src/tutorials/_toc.yaml
@@ -0,0 +1,103 @@
+toc:
+- title: Get started with TensorFlow
+ path: /tutorials/
+
+- title: Learn and use ML
+ style: accordion
+ section:
+ - title: Overview
+ path: /tutorials/keras/
+ - title: Basic classification
+ path: /tutorials/keras/basic_classification
+ - title: Text classification
+ path: /tutorials/keras/basic_text_classification
+ - title: Regression
+ path: /tutorials/keras/basic_regression
+ - title: Overfitting and underfitting
+ path: /tutorials/keras/overfit_and_underfit
+ - title: Save and restore models
+ path: /tutorials/keras/save_and_restore_models
+
+- title: Research and experimentation
+ style: accordion
+ section:
+ - title: Overview
+ path: /tutorials/eager/
+ - title: Eager execution
+ path: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/notebooks/eager_basics.ipynb
+ status: external
+ - title: Automatic differentiation
+ path: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/notebooks/automatic_differentiation.ipynb
+ status: external
+ - title: "Custom training: basics"
+ path: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/notebooks/custom_training.ipynb
+ status: external
+ - title: Custom layers
+ path: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/notebooks/custom_layers.ipynb
+ status: external
+ - title: "Custom training: walkthrough"
+ path: /tutorials/eager/custom_training_walkthrough
+ - title: Translation with attention
+ path: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/nmt_with_attention/nmt_with_attention.ipynb
+ status: external
+
+- title: ML at production scale
+ style: accordion
+ section:
+ - title: Linear model with Estimators
+ path: /tutorials/estimators/linear
+ - title: Wide and deep learning
+ path: https://github.com/tensorflow/models/tree/master/official/wide_deep
+ status: external
+ - title: Boosted trees
+ path: https://github.com/tensorflow/models/tree/master/official/boosted_trees
+ status: external
+ - title: Text classifier with TF-Hub
+ path: /hub/tutorials/text_classification_with_tf_hub
+ - title: Build a CNN using Estimators
+ path: /tutorials/estimators/cnn
+
+- title: Images
+ style: accordion
+ section:
+ - title: Image recognition
+ path: /tutorials/images/image_recognition
+ - title: Image retraining
+ path: /hub/tutorials/image_retraining
+ - title: Advanced CNN
+ path: /tutorials/images/deep_cnn
+
+- title: Sequences
+ style: accordion
+ section:
+ - title: Recurrent neural network
+ path: /tutorials/sequences/recurrent
+ - title: Drawing classification
+ path: /tutorials/sequences/recurrent_quickdraw
+ - title: Simple audio recognition
+ path: /tutorials/sequences/audio_recognition
+ - title: Neural machine translation
+ path: https://github.com/tensorflow/nmt
+ status: external
+
+- title: Data representation
+ style: accordion
+ section:
+ - title: Vector representations of words
+ path: /tutorials/representation/word2vec
+ - title: Kernel methods
+ path: /tutorials/representation/kernel_methods
+ - title: Large-scale linear models
+ path: /tutorials/representation/linear
+
+- title: Non-ML
+ style: accordion
+ section:
+ - title: Mandelbrot set
+ path: /tutorials/non-ml/mandelbrot
+ - title: Partial differential equations
+ path: /tutorials/non-ml/pdes
+
+- break: True
+- title: Next steps
+ path: /tutorials/next_steps
diff --git a/tensorflow/docs_src/tutorials/eager/custom_training_walkthrough.md b/tensorflow/docs_src/tutorials/eager/custom_training_walkthrough.md
new file mode 100644
index 0000000000..b564a27ecf
--- /dev/null
+++ b/tensorflow/docs_src/tutorials/eager/custom_training_walkthrough.md
@@ -0,0 +1,3 @@
+# Custom training: walkthrough
+
+[Colab notebook](https://colab.research.google.com/github/tensorflow/models/blob/master/samples/core/tutorials/eager/custom_training_walkthrough.ipynb)
diff --git a/tensorflow/docs_src/tutorials/eager/index.md b/tensorflow/docs_src/tutorials/eager/index.md
new file mode 100644
index 0000000000..a13b396094
--- /dev/null
+++ b/tensorflow/docs_src/tutorials/eager/index.md
@@ -0,0 +1,13 @@
+# Research and experimentation
+
+Eager execution provides an imperative, define-by-run interface for advanced
+operations. Write custom layers, forward passes, and training loops with
+auto&nbsp;differentiation. Start with these notebooks, then read the
+[eager execution guide](../../guide/eager).
+
+1. <span>[Eager execution](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/notebooks/eager_basics.ipynb){:.external}</span>
+2. <span>[Automatic differentiation and gradient tape](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/notebooks/automatic_differentiation.ipynb){:.external}</span>
+3. <span>[Custom training: basics](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/notebooks/custom_training.ipynb){:.external}</span>
+4. <span>[Custom layers](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/notebooks/custom_layers.ipynb){:.external}</span>
+5. [Custom training: walkthrough](/tutorials/eager/custom_training_walkthrough)
+6. <span>[Advanced example: Neural machine translation with attention](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/nmt_with_attention/nmt_with_attention.ipynb){:.external}</span>
diff --git a/tensorflow/docs_src/tutorials/layers.md b/tensorflow/docs_src/tutorials/estimators/cnn.md
index 791909f5fd..100f501cc2 100644
--- a/tensorflow/docs_src/tutorials/layers.md
+++ b/tensorflow/docs_src/tutorials/estimators/cnn.md
@@ -1,6 +1,6 @@
-# A Guide to TF Layers: Building a Convolutional Neural Network
+# Build a Convolutional Neural Network using Estimators
-The TensorFlow @{tf.layers$`layers` module} provides a high-level API that makes
+The `tf.layers` module provides a high-level API that makes
it easy to construct a neural network. It provides methods that facilitate the
creation of dense (fully connected) layers and convolutional layers, adding
activation functions, and applying dropout regularization. In this tutorial,
@@ -118,8 +118,8 @@ output from one layer-creation method and supply it as input to another.
Open `cnn_mnist.py` and add the following `cnn_model_fn` function, which
conforms to the interface expected by TensorFlow's Estimator API (more on this
later in [Create the Estimator](#create-the-estimator)). `cnn_mnist.py` takes
-MNIST feature data, labels, and
-@{tf.estimator.ModeKeys$model mode} (`TRAIN`, `EVAL`, `PREDICT`) as arguments;
+MNIST feature data, labels, and mode (from
+`tf.estimator.ModeKeys`: `TRAIN`, `EVAL`, `PREDICT`) as arguments;
configures the CNN; and returns predictions, loss, and a training operation:
```python
@@ -277,7 +277,7 @@ a 5x5 convolution over a 28x28 tensor will produce a 24x24 tensor, as there are
The `activation` argument specifies the activation function to apply to the
output of the convolution. Here, we specify ReLU activation with
-@{tf.nn.relu}.
+`tf.nn.relu`.
Our output tensor produced by `conv2d()` has a shape of
<code>[<em>batch_size</em>, 28, 28, 32]</code>: the same height and width
@@ -423,7 +423,7 @@ raw values into two different formats that our model function can return:
For a given example, our predicted class is the element in the corresponding row
of the logits tensor with the highest raw value. We can find the index of this
-element using the @{tf.argmax}
+element using the `tf.argmax`
function:
```python
@@ -438,7 +438,7 @@ value along the dimension with index of 1, which corresponds to our predictions
10]</code>).
We can derive probabilities from our logits layer by applying softmax activation
-using @{tf.nn.softmax}:
+using `tf.nn.softmax`:
```python
tf.nn.softmax(logits, name="softmax_tensor")
@@ -572,8 +572,8 @@ feel free to change to another directory of your choice).
### Set Up a Logging Hook {#set_up_a_logging_hook}
Since CNNs can take a while to train, let's set up some logging so we can track
-progress during training. We can use TensorFlow's @{tf.train.SessionRunHook} to create a
-@{tf.train.LoggingTensorHook}
+progress during training. We can use TensorFlow's `tf.train.SessionRunHook` to create a
+`tf.train.LoggingTensorHook`
that will log the probability values from the softmax layer of our CNN. Add the
following to `main()`:
diff --git a/tensorflow/docs_src/tutorials/estimators/linear.md b/tensorflow/docs_src/tutorials/estimators/linear.md
new file mode 100644
index 0000000000..067a33ac03
--- /dev/null
+++ b/tensorflow/docs_src/tutorials/estimators/linear.md
@@ -0,0 +1,3 @@
+# Build a linear model with Estimators
+
+[Colab notebook](https://colab.research.google.com/github/tensorflow/models/blob/master/samples/core/tutorials/estimators/linear.ipynb)
diff --git a/tensorflow/docs_src/tutorials/image_retraining.md b/tensorflow/docs_src/tutorials/image_retraining.md
deleted file mode 100644
index 27784eef9c..0000000000
--- a/tensorflow/docs_src/tutorials/image_retraining.md
+++ /dev/null
@@ -1,4 +0,0 @@
-# How to Retrain Inception's Final Layer for New Categories
-
-**NOTE: This tutorial has moved to**
-https://github.com/tensorflow/hub/tree/master/docs/tutorials/image_retraining.md
diff --git a/tensorflow/docs_src/tutorials/deep_cnn.md b/tensorflow/docs_src/tutorials/images/deep_cnn.md
index 44a32d9d1d..42ad484bbf 100644
--- a/tensorflow/docs_src/tutorials/deep_cnn.md
+++ b/tensorflow/docs_src/tutorials/images/deep_cnn.md
@@ -1,7 +1,4 @@
-# Convolutional Neural Networks
-
-> **NOTE:** This tutorial is intended for *advanced* users of TensorFlow
-and assumes expertise and experience in machine learning.
+# Advanced Convolutional Neural Networks
## Overview
@@ -34,26 +31,26 @@ new ideas and experimenting with new techniques.
The CIFAR-10 tutorial demonstrates several important constructs for
designing larger and more sophisticated models in TensorFlow:
-* Core mathematical components including @{tf.nn.conv2d$convolution}
+* Core mathematical components including convolution (`tf.nn.conv2d`)
([wiki](https://en.wikipedia.org/wiki/Convolution)),
-@{tf.nn.relu$rectified linear activations}
+rectified linear activations (`tf.nn.relu`)
([wiki](https://en.wikipedia.org/wiki/Rectifier_(neural_networks))),
-@{tf.nn.max_pool$max pooling}
+max pooling (`tf.nn.max_pool`)
([wiki](https://en.wikipedia.org/wiki/Convolutional_neural_network#Pooling_layer))
-and @{tf.nn.local_response_normalization$local response normalization}
+and local response normalization (`tf.nn.local_response_normalization`)
(Chapter 3.3 in
[AlexNet paper](https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf)).
* @{$summaries_and_tensorboard$Visualization}
of network activities during training, including input images,
losses and distributions of activations and gradients.
* Routines for calculating the
-@{tf.train.ExponentialMovingAverage$moving average}
+moving average (`tf.train.ExponentialMovingAverage`)
of learned parameters and using these averages
during evaluation to boost predictive performance.
* Implementation of a
-@{tf.train.exponential_decay$learning rate schedule}
+learning rate schedule (`tf.train.exponential_decay`)
that systematically decrements over time.
-* Prefetching @{tf.train.shuffle_batch$queues}
+* Prefetching queues (`tf.train.shuffle_batch`)
for input
data to isolate the model from disk latency and expensive image pre-processing.
@@ -83,21 +80,21 @@ for details. It consists of 1,068,298 learnable parameters and requires about
## Code Organization
The code for this tutorial resides in
-[`models/tutorials/image/cifar10/`](https://www.tensorflow.org/code/tensorflow_models/tutorials/image/cifar10/).
+[`models/tutorials/image/cifar10/`](https://github.com/tensorflow/models/tree/master/tutorials/image/cifar10/).
File | Purpose
--- | ---
-[`cifar10_input.py`](https://www.tensorflow.org/code/tensorflow_models/tutorials/image/cifar10/cifar10_input.py) | Reads the native CIFAR-10 binary file format.
-[`cifar10.py`](https://www.tensorflow.org/code/tensorflow_models/tutorials/image/cifar10/cifar10.py) | Builds the CIFAR-10 model.
-[`cifar10_train.py`](https://www.tensorflow.org/code/tensorflow_models/tutorials/image/cifar10/cifar10_train.py) | Trains a CIFAR-10 model on a CPU or GPU.
-[`cifar10_multi_gpu_train.py`](https://www.tensorflow.org/code/tensorflow_models/tutorials/image/cifar10/cifar10_multi_gpu_train.py) | Trains a CIFAR-10 model on multiple GPUs.
-[`cifar10_eval.py`](https://www.tensorflow.org/code/tensorflow_models/tutorials/image/cifar10/cifar10_eval.py) | Evaluates the predictive performance of a CIFAR-10 model.
+[`cifar10_input.py`](https://github.com/tensorflow/models/tree/master/tutorials/image/cifar10/cifar10_input.py) | Reads the native CIFAR-10 binary file format.
+[`cifar10.py`](https://github.com/tensorflow/models/tree/master/tutorials/image/cifar10/cifar10.py) | Builds the CIFAR-10 model.
+[`cifar10_train.py`](https://github.com/tensorflow/models/tree/master/tutorials/image/cifar10/cifar10_train.py) | Trains a CIFAR-10 model on a CPU or GPU.
+[`cifar10_multi_gpu_train.py`](https://github.com/tensorflow/models/tree/master/tutorials/image/cifar10/cifar10_multi_gpu_train.py) | Trains a CIFAR-10 model on multiple GPUs.
+[`cifar10_eval.py`](https://github.com/tensorflow/models/tree/master/tutorials/image/cifar10/cifar10_eval.py) | Evaluates the predictive performance of a CIFAR-10 model.
## CIFAR-10 Model
The CIFAR-10 network is largely contained in
-[`cifar10.py`](https://www.tensorflow.org/code/tensorflow_models/tutorials/image/cifar10/cifar10.py).
+[`cifar10.py`](https://github.com/tensorflow/models/tree/master/tutorials/image/cifar10/cifar10.py).
The complete training
graph contains roughly 765 operations. We find that we can make the code most
reusable by constructing the graph with the following modules:
@@ -116,27 +113,27 @@ gradients, variable updates and visualization summaries.
The input part of the model is built by the functions `inputs()` and
`distorted_inputs()` which read images from the CIFAR-10 binary data files.
These files contain fixed byte length records, so we use
-@{tf.FixedLengthRecordReader}.
+`tf.FixedLengthRecordReader`.
See @{$reading_data#reading-from-files$Reading Data} to
learn more about how the `Reader` class works.
The images are processed as follows:
* They are cropped to 24 x 24 pixels, centrally for evaluation or
- @{tf.random_crop$randomly} for training.
-* They are @{tf.image.per_image_standardization$approximately whitened}
+ randomly (`tf.random_crop`) for training.
+* They are approximately whitened (`tf.image.per_image_standardization`)
to make the model insensitive to dynamic range.
For training, we additionally apply a series of random distortions to
artificially increase the data set size:
-* @{tf.image.random_flip_left_right$Randomly flip} the image from left to right.
-* Randomly distort the @{tf.image.random_brightness$image brightness}.
-* Randomly distort the @{tf.image.random_contrast$image contrast}.
+* Randomly flip the image from left to right (`tf.image.random_flip_left_right`).
+* Randomly distort the image brightness (`tf.image.random_brightness`).
+* Randomly distort the image contrast (`tf.image.random_contrast`).
Please see the @{$python/image$Images} page for the list of
available distortions. We also attach an
-@{tf.summary.image} to the images
+`tf.summary.image` to the images
so that we may visualize them in @{$summaries_and_tensorboard$TensorBoard}.
This is a good practice to verify that inputs are built correctly.
@@ -147,7 +144,7 @@ This is a good practice to verify that inputs are built correctly.
Reading images from disk and distorting them can use a non-trivial amount of
processing time. To prevent these operations from slowing down training, we run
them inside 16 separate threads which continuously fill a TensorFlow
-@{tf.train.shuffle_batch$queue}.
+queue (`tf.train.shuffle_batch`).
### Model Prediction
@@ -157,12 +154,12 @@ the model is organized as follows:
Layer Name | Description
--- | ---
-`conv1` | @{tf.nn.conv2d$convolution} and @{tf.nn.relu$rectified linear} activation.
-`pool1` | @{tf.nn.max_pool$max pooling}.
-`norm1` | @{tf.nn.local_response_normalization$local response normalization}.
-`conv2` | @{tf.nn.conv2d$convolution} and @{tf.nn.relu$rectified linear} activation.
-`norm2` | @{tf.nn.local_response_normalization$local response normalization}.
-`pool2` | @{tf.nn.max_pool$max pooling}.
+`conv1` | Convolution (`tf.nn.conv2d`) and rectified linear (`tf.nn.relu`) activation.
+`pool1` | Max pooling (`tf.nn.max_pool`).
+`norm1` | Local response normalization (`tf.nn.local_response_normalization`).
+`conv2` | Convolution (`tf.nn.conv2d`) and rectified linear (`tf.nn.relu`) activation.
+`norm2` | Local response normalization (`tf.nn.local_response_normalization`).
+`pool2` | Max pooling (`tf.nn.max_pool`).
`local3` | @{$python/nn$fully connected layer with rectified linear activation}.
`local4` | @{$python/nn$fully connected layer with rectified linear activation}.
`softmax_linear` | linear transformation to produce logits.
@@ -175,7 +172,7 @@ Here is a graph generated from TensorBoard describing the inference operation:
> **EXERCISE**: The output of `inference` are un-normalized logits. Try editing
the network architecture to return normalized predictions using
-@{tf.nn.softmax}.
+`tf.nn.softmax`.
The `inputs()` and `inference()` functions provide all the components
necessary to perform an evaluation of a model. We now shift our focus towards
@@ -193,16 +190,16 @@ architecture in the top layer.
The usual method for training a network to perform N-way classification is
[multinomial logistic regression](https://en.wikipedia.org/wiki/Multinomial_logistic_regression),
aka. *softmax regression*. Softmax regression applies a
-@{tf.nn.softmax$softmax} nonlinearity to the
+`tf.nn.softmax` nonlinearity to the
output of the network and calculates the
-@{tf.nn.sparse_softmax_cross_entropy_with_logits$cross-entropy}
+cross-entropy (`tf.nn.sparse_softmax_cross_entropy_with_logits`)
between the normalized predictions and the label index.
For regularization, we also apply the usual
-@{tf.nn.l2_loss$weight decay} losses to all learned
+weight decay (`tf.nn.l2_loss`) losses to all learned
variables. The objective function for the model is the sum of the cross entropy
loss and all these weight decay terms, as returned by the `loss()` function.
-We visualize it in TensorBoard with a @{tf.summary.scalar}:
+We visualize it in TensorBoard with a `tf.summary.scalar`:
![CIFAR-10 Loss](https://www.tensorflow.org/images/cifar_loss.png "CIFAR-10 Total Loss")
@@ -210,14 +207,14 @@ We train the model using standard
[gradient descent](https://en.wikipedia.org/wiki/Gradient_descent)
algorithm (see @{$python/train$Training} for other methods)
with a learning rate that
-@{tf.train.exponential_decay$exponentially decays}
+decays exponentially (`tf.train.exponential_decay`)
over time.
![CIFAR-10 Learning Rate Decay](https://www.tensorflow.org/images/cifar_lr_decay.png "CIFAR-10 Learning Rate Decay")
The `train()` function adds the operations needed to minimize the objective by
calculating the gradient and updating the learned variables (see
-@{tf.train.GradientDescentOptimizer}
+`tf.train.GradientDescentOptimizer`
for details). It returns an operation that executes all the calculations
needed to train and update the model for one batch of images.
@@ -266,7 +263,7 @@ training step can take so long. Try decreasing the number of images that
initially fill up the queue. Search for `min_fraction_of_examples_in_queue`
in `cifar10_input.py`.
-`cifar10_train.py` periodically @{tf.train.Saver$saves}
+`cifar10_train.py` periodically uses a `tf.train.Saver` to save
all model parameters in
@{$guide/saved_model$checkpoint files}
but it does *not* evaluate the model. The checkpoint file
@@ -288,7 +285,7 @@ how the model is training. We want more insight into the model during training:
@{$summaries_and_tensorboard$TensorBoard} provides this
functionality, displaying data exported periodically from `cifar10_train.py` via
a
-@{tf.summary.FileWriter}.
+`tf.summary.FileWriter`.
For instance, we can watch how the distribution of activations and degree of
sparsity in `local3` features evolve during training:
@@ -303,7 +300,7 @@ interesting to track over time. However, the loss exhibits a considerable amount
of noise due to the small batch size employed by training. In practice we find
it extremely useful to visualize their moving averages in addition to their raw
values. See how the scripts use
-@{tf.train.ExponentialMovingAverage}
+`tf.train.ExponentialMovingAverage`
for this purpose.
## Evaluating a Model
@@ -339,8 +336,8 @@ exports summaries that may be visualized in TensorBoard. These summaries
provide additional insight into the model during evaluation.
The training script calculates the
-@{tf.train.ExponentialMovingAverage$moving average}
-version of all learned variables. The evaluation script substitutes
+moving average (`tf.train.ExponentialMovingAverage`) of all learned variables.
+The evaluation script substitutes
all learned model parameters with the moving average version. This
substitution boosts model performance at evaluation time.
@@ -404,17 +401,17 @@ gradients for a single model replica. In the code we term this abstraction
a "tower". We must set two attributes for each tower:
* A unique name for all operations within a tower.
-@{tf.name_scope} provides
+`tf.name_scope` provides
this unique name by prepending a scope. For instance, all operations in
the first tower are prepended with `tower_0`, e.g. `tower_0/conv1/Conv2D`.
* A preferred hardware device to run the operation within a tower.
-@{tf.device} specifies this. For
+`tf.device` specifies this. For
instance, all operations in the first tower reside within `device('/device:GPU:0')`
scope indicating that they should be run on the first GPU.
All variables are pinned to the CPU and accessed via
-@{tf.get_variable}
+`tf.get_variable`
in order to share them in a multi-GPU version.
See how-to on @{$variables$Sharing Variables}.
@@ -438,9 +435,6 @@ with a batch size of 64 and compare the training speed.
## Next Steps
-[Congratulations!](https://www.youtube.com/watch?v=9bZkp7q19f0) You have
-completed the CIFAR-10 tutorial.
-
If you are now interested in developing and training your own image
classification system, we recommend forking this tutorial and replacing
components to address your image classification problem.
diff --git a/tensorflow/docs_src/tutorials/image_recognition.md b/tensorflow/docs_src/tutorials/images/image_recognition.md
index 332bcf54f0..83a8d97cf0 100644
--- a/tensorflow/docs_src/tutorials/image_recognition.md
+++ b/tensorflow/docs_src/tutorials/images/image_recognition.md
@@ -253,7 +253,7 @@ definition with the `ToGraphDef()` function.
TF_RETURN_IF_ERROR(session->Run({}, {output_name}, {}, out_tensors));
return Status::OK();
```
-Then we create a @{tf.Session}
+Then we create a `tf.Session`
object, which is the interface to actually running the graph, and run it,
specifying which node we want to get the output from, and where to put the
output data.
@@ -434,7 +434,6 @@ should be able to transfer some of that understanding to solving related
problems. One way to perform transfer learning is to remove the final
classification layer of the network and extract
the [next-to-last layer of the CNN](https://arxiv.org/abs/1310.1531), in this case a 2048 dimensional vector.
-There's a guide to doing this @{$image_retraining$in the how-to section}.
## Resources for Learning More
@@ -450,7 +449,7 @@ covering them.
To find out more about implementing convolutional neural networks, you can jump
to the TensorFlow @{$deep_cnn$deep convolutional networks tutorial},
-or start a bit more gently with our @{$layers$MNIST starter tutorial}.
+or start a bit more gently with our [Estimator MNIST tutorial](../estimators/cnn.md).
Finally, if you want to get up to speed on research in this area, you can
read the recent work of all the papers referenced in this tutorial.
diff --git a/tensorflow/docs_src/tutorials/index.md b/tensorflow/docs_src/tutorials/index.md
deleted file mode 100644
index 6bd3a3a897..0000000000
--- a/tensorflow/docs_src/tutorials/index.md
+++ /dev/null
@@ -1,59 +0,0 @@
-# Tutorials
-
-
-This section contains tutorials demonstrating how to do specific tasks
-in TensorFlow. If you are new to TensorFlow, we recommend reading
-[Get Started with TensorFlow](/get_started/).
-
-## Images
-
-These tutorials cover different aspects of image recognition:
-
- * @{$layers$MNIST}, which introduces convolutional neural networks (CNNs) and
- demonstrates how to build a CNN in TensorFlow.
- * @{$image_recognition}, which introduces the field of image recognition and
- uses a pre-trained model (Inception) for recognizing images.
- * @{$image_retraining}, which has a wonderfully self-explanatory title.
- * @{$deep_cnn}, which demonstrates how to build a small CNN for recognizing
- images. This tutorial is aimed at advanced TensorFlow users.
-
-
-## Sequences
-
-These tutorials focus on machine learning problems dealing with sequence data.
-
- * @{$recurrent}, which demonstrates how to use a
- recurrent neural network to predict the next word in a sentence.
- * @{$seq2seq}, which demonstrates how to use a
- sequence-to-sequence model to translate text from English to French.
- * @{$recurrent_quickdraw}
- builds a classification model for drawings, directly from the sequence of
- pen strokes.
- * @{$audio_recognition}, which shows how to
- build a basic speech recognition network.
-
-## Data representation
-
-These tutorials demonstrate various data representations that can be used in
-TensorFlow.
-
- * @{$wide}, uses
- @{tf.feature_column$feature columns} to feed a variety of data types
- to linear model, to solve a classification problem.
- * @{$wide_and_deep}, builds on the
- above linear model tutorial, adding a deep feed-forward neural network
- component and a DNN-compatible data representation.
- * @{$word2vec}, which demonstrates how to
- create an embedding for words.
- * @{$kernel_methods},
- which shows how to improve the quality of a linear model by using explicit
- kernel mappings.
-
-## Non Machine Learning
-
-Although TensorFlow specializes in machine learning, the core of TensorFlow is
-a powerful numeric computation system which you can also use to solve other
-kinds of math problems. For example:
-
- * @{$mandelbrot}
- * @{$pdes}
diff --git a/tensorflow/docs_src/get_started/basic_classification.md b/tensorflow/docs_src/tutorials/keras/basic_classification.md
index 91bbd85b24..e028af99b9 100644
--- a/tensorflow/docs_src/get_started/basic_classification.md
+++ b/tensorflow/docs_src/tutorials/keras/basic_classification.md
@@ -1,3 +1,3 @@
# Basic Classification
-[Colab notebook](https://colab.research.google.com/github/tensorflow/models/blob/master/samples/core/get_started/basic_classification.ipynb)
+[Colab notebook](https://colab.research.google.com/github/tensorflow/models/blob/master/samples/core/tutorials/keras/basic_classification.ipynb)
diff --git a/tensorflow/docs_src/get_started/basic_regression.md b/tensorflow/docs_src/tutorials/keras/basic_regression.md
index a535f22f5a..8721b7aca1 100644
--- a/tensorflow/docs_src/get_started/basic_regression.md
+++ b/tensorflow/docs_src/tutorials/keras/basic_regression.md
@@ -1,3 +1,3 @@
# Basic Regression
-[Colab notebook](https://colab.research.google.com/github/tensorflow/models/blob/master/samples/core/get_started/basic_regression.ipynb)
+[Colab notebook](https://colab.research.google.com/github/tensorflow/models/blob/master/samples/core/tutorials/keras/basic_regression.ipynb)
diff --git a/tensorflow/docs_src/get_started/basic_text_classification.md b/tensorflow/docs_src/tutorials/keras/basic_text_classification.md
index 7c5d4f7896..c2a16bdd20 100644
--- a/tensorflow/docs_src/get_started/basic_text_classification.md
+++ b/tensorflow/docs_src/tutorials/keras/basic_text_classification.md
@@ -1,3 +1,3 @@
# Basic Text Classification
-[Colab notebook](https://colab.research.google.com/github/tensorflow/models/blob/master/samples/core/get_started/basic_text_classification.ipynb)
+[Colab notebook](https://colab.research.google.com/github/tensorflow/models/blob/master/samples/core/tutorials/keras/basic_text_classification.ipynb)
diff --git a/tensorflow/docs_src/tutorials/keras/index.md b/tensorflow/docs_src/tutorials/keras/index.md
new file mode 100644
index 0000000000..9d42281c8f
--- /dev/null
+++ b/tensorflow/docs_src/tutorials/keras/index.md
@@ -0,0 +1,22 @@
+# Learn and use machine learning
+
+This notebook collection is inspired by the book
+*[Deep Learning with Python](https://books.google.com/books?id=Yo3CAQAACAAJ)*.
+These tutorials use `tf.keras`, TensorFlow's high-level Python API for building
+and training deep learning models. To learn more about using Keras with
+TensorFlow, see the [TensorFlow Keras Guide](../../guide/keras).
+
+Publisher's note: *Deep Learning with Python* introduces the field of deep
+learning using the Python language and the powerful Keras library. Written by
+Keras creator and Google AI researcher François Chollet, this book builds your
+understanding through intuitive explanations and practical examples.
+
+To learn about machine learning fundamentals and concepts, consider taking the
+[Machine Learning Crash Course](https://developers.google.com/machine-learning/crash-course/).
+Additional TensorFlow and machine learning resources are listed in [next steps](../next_steps).
+
+1. [Basic classification](./basic_classification)
+2. [Text classification](./basic_text_classification)
+3. [Regression](./basic_regression)
+4. [Overfitting and underfitting](./overfit_and_underfit)
+5. [Save and restore models](./save_and_restore_models)
diff --git a/tensorflow/docs_src/get_started/overfit_and_underfit.md b/tensorflow/docs_src/tutorials/keras/overfit_and_underfit.md
index e5b5ae7b5a..f07f3addd8 100644
--- a/tensorflow/docs_src/get_started/overfit_and_underfit.md
+++ b/tensorflow/docs_src/tutorials/keras/overfit_and_underfit.md
@@ -1,3 +1,3 @@
# Overfitting and Underfitting
-[Colab notebook](https://colab.research.google.com/github/tensorflow/models/blob/master/samples/core/get_started/overfit_and_underfit.ipynb)
+[Colab notebook](https://colab.research.google.com/github/tensorflow/models/blob/master/samples/core/tutorials/keras/overfit_and_underfit.ipynb)
diff --git a/tensorflow/docs_src/get_started/save_and_restore_models.md b/tensorflow/docs_src/tutorials/keras/save_and_restore_models.md
index 44b3772945..a799b379a0 100644
--- a/tensorflow/docs_src/get_started/save_and_restore_models.md
+++ b/tensorflow/docs_src/tutorials/keras/save_and_restore_models.md
@@ -1,3 +1,3 @@
# Save and restore Models
-[Colab notebook](https://colab.research.google.com/github/tensorflow/models/blob/master/samples/core/get_started/save_and_restore_models.ipynb)
+[Colab notebook](https://colab.research.google.com/github/tensorflow/models/blob/master/samples/core/tutorials/keras/save_and_restore_models.ipynb)
diff --git a/tensorflow/docs_src/tutorials/leftnav_files b/tensorflow/docs_src/tutorials/leftnav_files
deleted file mode 100644
index 888052428f..0000000000
--- a/tensorflow/docs_src/tutorials/leftnav_files
+++ /dev/null
@@ -1,23 +0,0 @@
-index.md
-
-### Images
-layers.md: MNIST
-image_recognition.md: Image Recognition
-image_retraining.md: Image Retraining
-deep_cnn.md
-
-### Sequences
-recurrent.md
-seq2seq.md: Neural Machine Translation
-recurrent_quickdraw.md: Drawing Classification
-audio_recognition.md
-
-### Data Representation
-wide.md: Linear Models
-wide_and_deep.md: Wide & Deep Learning
-word2vec.md
-kernel_methods.md: Kernel Methods
-
-### Non-ML
-mandelbrot.md
-pdes.md
diff --git a/tensorflow/docs_src/get_started/next_steps.md b/tensorflow/docs_src/tutorials/next_steps.md
index 01c9f7204a..01c9f7204a 100644
--- a/tensorflow/docs_src/get_started/next_steps.md
+++ b/tensorflow/docs_src/tutorials/next_steps.md
diff --git a/tensorflow/docs_src/tutorials/mandelbrot.md b/tensorflow/docs_src/tutorials/non-ml/mandelbrot.md
index 1c0a548129..1c0a548129 100755..100644
--- a/tensorflow/docs_src/tutorials/mandelbrot.md
+++ b/tensorflow/docs_src/tutorials/non-ml/mandelbrot.md
diff --git a/tensorflow/docs_src/tutorials/pdes.md b/tensorflow/docs_src/tutorials/non-ml/pdes.md
index 425e8d7084..b5a0fa834a 100755..100644
--- a/tensorflow/docs_src/tutorials/pdes.md
+++ b/tensorflow/docs_src/tutorials/non-ml/pdes.md
@@ -135,7 +135,6 @@ for i in range(1000):
DisplayArray(U.eval(), rng=[-0.1, 0.1])
```
-![jpeg](../images/pde_output_2.jpg)
+![jpeg](../../images/pde_output_2.jpg)
Look! Ripples!
-
diff --git a/tensorflow/docs_src/tutorials/kernel_methods.md b/tensorflow/docs_src/tutorials/representation/kernel_methods.md
index 205e2a2d2c..71e87f4d3e 100644
--- a/tensorflow/docs_src/tutorials/kernel_methods.md
+++ b/tensorflow/docs_src/tutorials/representation/kernel_methods.md
@@ -1,9 +1,8 @@
# Improving Linear Models Using Explicit Kernel Methods
-Note: This document uses a deprecated version of @{tf.estimator},
-which has a @{tf.contrib.learn.Estimator$different interface}.
-It also uses other `contrib` methods whose
-@{$version_compat#not_covered$API may not be stable}.
+Note: This document uses `tf.contrib.learn.Estimator`, a deprecated
+predecessor of `tf.estimator` with a different interface. It also uses
+other `contrib` methods whose @{$version_compat#not_covered$API may not be stable}.
In this tutorial, we demonstrate how combining (explicit) kernel methods with
linear models can drastically increase the latter's quality of predictions
@@ -27,7 +26,7 @@ TensorFlow will provide support for sparse features at a later release.
This tutorial uses [tf.contrib.learn](https://www.tensorflow.org/code/tensorflow/contrib/learn/python/learn)
(TensorFlow's high-level Machine Learning API) Estimators for our ML models.
-If you are not familiar with this API, [tf.estimator Quickstart](https://www.tensorflow.org/get_started/estimator)
+If you are not familiar with this API, the [Estimator guide](../../guide/estimators.md)
is a good place to start. We will use the MNIST dataset. The tutorial consists
of the following steps:
@@ -90,7 +89,7 @@ eval_input_fn = get_input_fn(data.validation, batch_size=5000)
## Training a simple linear model
We can now train a linear model over the MNIST dataset. We will use the
-@{tf.contrib.learn.LinearClassifier} estimator with 10 classes representing the
+`tf.contrib.learn.LinearClassifier` estimator with 10 classes representing the
10 digits. The input features form a 784-dimensional dense vector which can
be specified as follows:
@@ -195,7 +194,7 @@ much higher dimensional space than the original one. See
for more details.
### Kernel classifier
-@{tf.contrib.kernel_methods.KernelLinearClassifier} is a pre-packaged
+`tf.contrib.kernel_methods.KernelLinearClassifier` is a pre-packaged
`tf.contrib.learn` estimator that combines the power of explicit kernel mappings
with linear models. Its constructor is almost identical to that of the
LinearClassifier estimator with the additional option to specify a list of
diff --git a/tensorflow/docs_src/tutorials/linear.md b/tensorflow/docs_src/tutorials/representation/linear.md
index 3f247ade26..014409c617 100644
--- a/tensorflow/docs_src/tutorials/linear.md
+++ b/tensorflow/docs_src/tutorials/representation/linear.md
@@ -1,6 +1,6 @@
# Large-scale Linear Models with TensorFlow
-@{tf.estimator$Estimators} provides (among other things) a rich set of tools for
+`tf.estimator` provides (among other things) a rich set of tools for
working with linear models in TensorFlow. This document provides an overview of
those tools. It explains:
@@ -11,8 +11,9 @@ those tools. It explains:
deep learning to get the advantages of both.
Read this overview to decide whether the Estimator's linear model tools might
-be useful to you. Then do the @{$wide$Linear Models tutorial} to
-give it a try. This overview uses code samples from the tutorial, but the
+be useful to you. Then work through the
+[Estimator wide and deep learning tutorial](https://github.com/tensorflow/models/tree/master/official/wide_deep)
+to give it a try. This overview uses code samples from the tutorial, but the
tutorial walks through the code in greater detail.
To understand this overview it will help to have some familiarity
@@ -176,7 +177,7 @@ the name of a `FeatureColumn`. Each key's value is a tensor containing the
values of that feature for all data instances. See
@{$premade_estimators#input_fn} for a
more comprehensive look at input functions, and `input_fn` in the
-[linear models tutorial code](https://github.com/tensorflow/models/tree/master/official/wide_deep/wide_deep.py)
+[wide and deep learning tutorial](https://github.com/tensorflow/models/tree/master/official/wide_deep)
for an example implementation of an input function.
The input function is passed to the `train()` and `evaluate()` calls that
@@ -234,4 +235,5 @@ e = tf.estimator.DNNLinearCombinedClassifier(
dnn_feature_columns=deep_columns,
dnn_hidden_units=[100, 50])
```
-For more information, see the @{$wide_and_deep$Wide and Deep Learning tutorial}.
+For more information, see the
+[wide and deep learning tutorial](https://github.com/tensorflow/models/tree/master/official/wide_deep).
diff --git a/tensorflow/docs_src/tutorials/word2vec.md b/tensorflow/docs_src/tutorials/representation/word2vec.md
index 3fe7352bd2..7964650e19 100644
--- a/tensorflow/docs_src/tutorials/word2vec.md
+++ b/tensorflow/docs_src/tutorials/representation/word2vec.md
@@ -23,7 +23,7 @@ straight in, feel free to look at the minimalistic implementation in
This basic example contains the code needed to download some data, train on it a
bit and visualize the result. Once you get comfortable with reading and running
the basic version, you can graduate to
-[models/tutorials/embedding/word2vec.py](https://www.tensorflow.org/code/tensorflow_models/tutorials/embedding/word2vec.py)
+[models/tutorials/embedding/word2vec.py](https://github.com/tensorflow/models/tree/master/tutorials/embedding/word2vec.py)
which is a more serious implementation that showcases some more advanced
TensorFlow principles about how to efficiently use threads to move data into a
text model, how to checkpoint during training, etc.
@@ -317,7 +317,7 @@ optimizer = tf.train.GradientDescentOptimizer(learning_rate=1.0).minimize(loss)
Training the model is then as simple as using a `feed_dict` to push data into
the placeholders and calling
-@{tf.Session.run} with this new data
+`tf.Session.run` with this new data
in a loop.
```python
@@ -341,7 +341,7 @@ t-SNE.
Et voilà! As expected, words that are similar end up clustering near each
other. For a more heavyweight implementation of word2vec that showcases more of
the advanced features of TensorFlow, see the implementation in
-[models/tutorials/embedding/word2vec.py](https://www.tensorflow.org/code/tensorflow_models/tutorials/embedding/word2vec.py).
+[models/tutorials/embedding/word2vec.py](https://github.com/tensorflow/models/tree/master/tutorials/embedding/word2vec.py).
## Evaluating Embeddings: Analogical Reasoning
@@ -357,7 +357,7 @@ Download the dataset for this task from
To see how we do this evaluation, have a look at the `build_eval_graph()` and
`eval()` functions in
-[models/tutorials/embedding/word2vec.py](https://www.tensorflow.org/code/tensorflow_models/tutorials/embedding/word2vec.py).
+[models/tutorials/embedding/word2vec.py](https://github.com/tensorflow/models/tree/master/tutorials/embedding/word2vec.py).
The choice of hyperparameters can strongly influence the accuracy on this task.
Achieving state-of-the-art performance on this task requires training over a
@@ -385,13 +385,13 @@ your model is seriously bottlenecked on input data, you may want to implement a
custom data reader for your problem, as described in
@{$new_data_formats$New Data Formats}. For the case of Skip-Gram
modeling, we've actually already done this for you as an example in
-[models/tutorials/embedding/word2vec.py](https://www.tensorflow.org/code/tensorflow_models/tutorials/embedding/word2vec.py).
+[models/tutorials/embedding/word2vec.py](https://github.com/tensorflow/models/tree/master/tutorials/embedding/word2vec.py).
If your model is no longer I/O bound but you want still more performance, you
can take things further by writing your own TensorFlow Ops, as described in
@{$adding_an_op$Adding a New Op}. Again we've provided an
example of this for the Skip-Gram case
-[models/tutorials/embedding/word2vec_optimized.py](https://www.tensorflow.org/code/tensorflow_models/tutorials/embedding/word2vec_optimized.py).
+[models/tutorials/embedding/word2vec_optimized.py](https://github.com/tensorflow/models/tree/master/tutorials/embedding/word2vec_optimized.py).
Feel free to benchmark these against each other to measure performance
improvements at each stage.
diff --git a/tensorflow/docs_src/tutorials/seq2seq.md b/tensorflow/docs_src/tutorials/seq2seq.md
deleted file mode 100644
index 8928ba4f7d..0000000000
--- a/tensorflow/docs_src/tutorials/seq2seq.md
+++ /dev/null
@@ -1,5 +0,0 @@
-# Sequence-to-Sequence Models
-
-Please check out the
-[tensorflow neural machine translation tutorial](https://github.com/tensorflow/nmt)
-for building sequence-to-sequence models with the latest TensorFlow API.
diff --git a/tensorflow/docs_src/tutorials/audio_recognition.md b/tensorflow/docs_src/tutorials/sequences/audio_recognition.md
index d7a8da6f96..d7a8da6f96 100644
--- a/tensorflow/docs_src/tutorials/audio_recognition.md
+++ b/tensorflow/docs_src/tutorials/sequences/audio_recognition.md
diff --git a/tensorflow/docs_src/tutorials/recurrent.md b/tensorflow/docs_src/tutorials/sequences/recurrent.md
index 14da2c8785..715cc7856a 100644
--- a/tensorflow/docs_src/tutorials/recurrent.md
+++ b/tensorflow/docs_src/tutorials/sequences/recurrent.md
@@ -2,8 +2,8 @@
## Introduction
-Take a look at [this great article](https://colah.github.io/posts/2015-08-Understanding-LSTMs/)
-for an introduction to recurrent neural networks and LSTMs in particular.
+See [Understanding LSTM Networks](https://colah.github.io/posts/2015-08-Understanding-LSTMs/){:.external}
+for an introduction to recurrent neural networks and LSTMs.
## Language Modeling
diff --git a/tensorflow/docs_src/tutorials/recurrent_quickdraw.md b/tensorflow/docs_src/tutorials/sequences/recurrent_quickdraw.md
index 1afd861738..37bce5b76d 100644
--- a/tensorflow/docs_src/tutorials/recurrent_quickdraw.md
+++ b/tensorflow/docs_src/tutorials/sequences/recurrent_quickdraw.md
@@ -13,7 +13,7 @@ In this tutorial we'll show how to build an RNN-based recognizer for this
problem. The model will use a combination of convolutional layers, LSTM layers,
and a softmax output layer to classify the drawings:
-<center> ![RNN model structure](../images/quickdraw_model.png) </center>
+<center> ![RNN model structure](../../images/quickdraw_model.png) </center>
The figure above shows the structure of the model that we will build in this
tutorial. The input is a drawing that is encoded as a sequence of strokes of
@@ -208,7 +208,7 @@ This data is then reformatted into a tensor of shape `[num_training_samples,
max_length, 3]`. Then we determine the bounding box of the original drawing in
screen coordinates and normalize the size such that the drawing has unit height.
-<center> ![Size normalization](../images/quickdraw_sizenormalization.png) </center>
+<center> ![Size normalization](../../images/quickdraw_sizenormalization.png) </center>
Finally, we compute the differences between consecutive points and store these
as a `VarLenFeature` in a
diff --git a/tensorflow/docs_src/tutorials/wide.md b/tensorflow/docs_src/tutorials/wide.md
deleted file mode 100644
index 27ce75a30d..0000000000
--- a/tensorflow/docs_src/tutorials/wide.md
+++ /dev/null
@@ -1,461 +0,0 @@
-# TensorFlow Linear Model Tutorial
-
-In this tutorial, we will use the tf.estimator API in TensorFlow to solve a
-binary classification problem: Given census data about a person such as age,
-education, marital status, and occupation (the features), we will try to predict
-whether or not the person earns more than 50,000 dollars a year (the target
-label). We will train a **logistic regression** model that, given an
-individual's information, outputs a number between 0 and 1, which can be
-interpreted as the probability that the individual has an annual income of over
-50,000 dollars.
-
-## Setup
-
-To try the code for this tutorial:
-
-1. @{$install$Install TensorFlow} if you haven't already.
-
-2. Download [the tutorial code](https://github.com/tensorflow/models/tree/master/official/wide_deep/).
-
-3. Execute the data download script we provide to you:
-
- $ python data_download.py
-
-4. Execute the tutorial code with the following command to train the linear
-model described in this tutorial:
-
- $ python wide_deep.py --model_type=wide
-
-Read on to find out how this code builds its linear model.
-
-## Reading The Census Data
-
-The dataset we'll be using is the
-[Census Income Dataset](https://archive.ics.uci.edu/ml/datasets/Census+Income).
-We have provided
-[data_download.py](https://github.com/tensorflow/models/tree/master/official/wide_deep/data_download.py)
-which downloads the dataset and performs some additional cleanup.
-
-Since the task is a binary classification problem, we'll construct a label
-column named "label" whose value is 1 if the income is over 50K, and 0
-otherwise. For reference, see `input_fn` in
-[wide_deep.py](https://github.com/tensorflow/models/tree/master/official/wide_deep/wide_deep.py).
-
-Next, let's take a look at the dataframe and see which columns we can use to
-predict the target label. The columns can be grouped into two types—categorical
-and continuous columns:
-
-* A column is called **categorical** if its value can only be one of the
- categories in a finite set. For example, the relationship status of a person
- (wife, husband, unmarried, etc.) or the education level (high school,
- college, etc.) are categorical columns.
-* A column is called **continuous** if its value can be any numerical value in
- a continuous range. For example, the capital gain of a person (e.g. $14,084)
- is a continuous column.
-
-Here's a list of columns available in the Census Income dataset:
-
-| Column Name    | Type        | Description |
-| -------------- | ----------- | ----------- |
-| age            | Continuous  | The age of the individual. |
-| workclass      | Categorical | The type of employer the individual has (government, military, private, etc.). |
-| fnlwgt         | Continuous  | The number of people the census takers believe that observation represents (sample weight). Final weight will not be used. |
-| education      | Categorical | The highest level of education achieved for that individual. |
-| education_num  | Continuous  | The highest level of education in numerical form. |
-| marital_status | Categorical | Marital status of the individual. |
-| occupation     | Categorical | The occupation of the individual. |
-| relationship   | Categorical | Wife, Own-child, Husband, Not-in-family, Other-relative, Unmarried. |
-| race           | Categorical | Amer-Indian-Eskimo, Asian-Pac-Islander, Black, White, Other. |
-| gender         | Categorical | Female, Male. |
-| capital_gain   | Continuous  | Capital gains recorded. |
-| capital_loss   | Continuous  | Capital losses recorded. |
-| hours_per_week | Continuous  | Hours worked per week. |
-| native_country | Categorical | Country of origin of the individual. |
-| income_bracket | Categorical | ">50K" or "<=50K", indicating whether the person makes more than $50,000 annually. |
-
-## Converting Data into Tensors
-
-When building a tf.estimator model, the input data is specified by means of an
-Input Builder function. This builder function will not be called until it is
-later passed to tf.estimator.Estimator methods such as `train` and `evaluate`.
-The purpose of this function is to construct the input data, which is
-represented in the form of @{tf.Tensor}s or @{tf.SparseTensor}s.
-In more detail, the input builder function returns the following as a pair:
-
-1. `features`: A dict from feature column names to `Tensors` or
- `SparseTensors`.
-2. `labels`: A `Tensor` containing the label column.
-
-The keys of the `features` will be used to construct columns in the next
-section. Because we want to call the `train` and `evaluate` methods with
-different data, we define a method that returns an input function based on the
-given data. Note that the returned input function will be called while
-constructing the TensorFlow graph, not while running the graph. What it
-returns is a representation of the input data as the fundamental unit of
-TensorFlow computations: a `Tensor` (or `SparseTensor`).
-
-Each continuous column in the train or test data will be converted into a
-`Tensor`, which in general is a good format for representing dense data.
-Categorical data must be represented as a `SparseTensor`, which is a good
-format for representing sparse data. Our `input_fn` uses the `tf.data`
-API, which makes it easy to apply transformations to our dataset:
-
-```python
-def input_fn(data_file, num_epochs, shuffle, batch_size):
- """Generate an input function for the Estimator."""
- assert tf.gfile.Exists(data_file), (
- '%s not found. Please make sure you have either run data_download.py or '
- 'set both arguments --train_data and --test_data.' % data_file)
-
- def parse_csv(value):
- print('Parsing', data_file)
- columns = tf.decode_csv(value, record_defaults=_CSV_COLUMN_DEFAULTS)
- features = dict(zip(_CSV_COLUMNS, columns))
- labels = features.pop('income_bracket')
- return features, tf.equal(labels, '>50K')
-
- # Extract lines from input files using the Dataset API.
- dataset = tf.data.TextLineDataset(data_file)
-
- if shuffle:
- dataset = dataset.shuffle(buffer_size=_SHUFFLE_BUFFER)
-
- dataset = dataset.map(parse_csv, num_parallel_calls=5)
-
- # We call repeat after shuffling, rather than before, to prevent separate
- # epochs from blending together.
- dataset = dataset.repeat(num_epochs)
- dataset = dataset.batch(batch_size)
-
- iterator = dataset.make_one_shot_iterator()
- features, labels = iterator.get_next()
- return features, labels
-```
-
-## Selecting and Engineering Features for the Model
-
-Selecting and crafting the right set of feature columns is key to learning an
-effective model. A **feature column** can be either one of the raw columns in
-the original dataframe (let's call them **base feature columns**), or any new
-columns created based on some transformations defined over one or multiple base
-columns (let's call them **derived feature columns**). In short, a "feature
-column" is an abstraction for any raw or derived variable that can be used
-to predict the target label.
-
-### Base Categorical Feature Columns
-
-To define a feature column for a categorical feature, we can create a
-`CategoricalColumn` using the tf.feature_column API. If you know the set of all
-possible feature values of a column and there are only a few of them, you can
-use `categorical_column_with_vocabulary_list`. Each key in the list is
-assigned an auto-incremented ID starting from 0. For example, for the
-`relationship` column we can assign the feature string "Husband" to an integer
-ID of 0 and "Not-in-family" to 1, etc., by doing:
-
-```python
-relationship = tf.feature_column.categorical_column_with_vocabulary_list(
- 'relationship', [
- 'Husband', 'Not-in-family', 'Wife', 'Own-child', 'Unmarried',
- 'Other-relative'])
-```
-
-What if we don't know the set of possible values in advance? Not a problem. We
-can use `categorical_column_with_hash_bucket` instead:
-
-```python
-occupation = tf.feature_column.categorical_column_with_hash_bucket(
- 'occupation', hash_bucket_size=1000)
-```
-
-Each possible value of the `occupation` feature column will then be hashed to
-an integer ID as it is encountered during training. See the example
-illustration below:
-
-ID | Feature
---- | -------------
-... |
-9 | `"Machine-op-inspct"`
-... |
-103 | `"Farming-fishing"`
-... |
-375 | `"Protective-serv"`
-... |
-
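-Conceptually, this mapping is just a stable hash of the feature string,
-reduced modulo the bucket count. TensorFlow uses its own fingerprint hash
-internally; the sketch below uses Python's `hashlib` purely for illustration:
-
-```python
-import hashlib
-
-def bucket_id(value, hash_bucket_size=1000):
-  # Stable hash of the feature string, reduced modulo the bucket count.
-  digest = hashlib.md5(value.encode('utf-8')).hexdigest()
-  return int(digest, 16) % hash_bucket_size
-
-print(bucket_id('Machine-op-inspct'))  # a deterministic ID in [0, 1000)
-```
-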
-No matter which way we choose to define a categorical column, each feature
-string is mapped to an integer ID either by looking up a fixed vocabulary or
-by hashing. Note that hashing collisions are possible, but they usually do not
-significantly impact model quality. Under the hood, the `LinearClassifier`
-estimator is responsible for managing the mapping and creating a `tf.Variable`
-to store the model parameters (also known as model weights) for each feature
-ID. The model parameters are learned through the model training process we'll
-go through later.
-
-We'll use the same trick to define the other categorical features:
-
-```python
-education = tf.feature_column.categorical_column_with_vocabulary_list(
- 'education', [
- 'Bachelors', 'HS-grad', '11th', 'Masters', '9th', 'Some-college',
- 'Assoc-acdm', 'Assoc-voc', '7th-8th', 'Doctorate', 'Prof-school',
- '5th-6th', '10th', '1st-4th', 'Preschool', '12th'])
-
-marital_status = tf.feature_column.categorical_column_with_vocabulary_list(
- 'marital_status', [
- 'Married-civ-spouse', 'Divorced', 'Married-spouse-absent',
- 'Never-married', 'Separated', 'Married-AF-spouse', 'Widowed'])
-
-relationship = tf.feature_column.categorical_column_with_vocabulary_list(
- 'relationship', [
- 'Husband', 'Not-in-family', 'Wife', 'Own-child', 'Unmarried',
- 'Other-relative'])
-
-workclass = tf.feature_column.categorical_column_with_vocabulary_list(
- 'workclass', [
- 'Self-emp-not-inc', 'Private', 'State-gov', 'Federal-gov',
- 'Local-gov', '?', 'Self-emp-inc', 'Without-pay', 'Never-worked'])
-
-# To show an example of hashing:
-occupation = tf.feature_column.categorical_column_with_hash_bucket(
- 'occupation', hash_bucket_size=1000)
-```
-
-### Base Continuous Feature Columns
-
-Similarly, we can define a `NumericColumn` for each continuous feature column
-that we want to use in the model:
-
-```python
-age = tf.feature_column.numeric_column('age')
-education_num = tf.feature_column.numeric_column('education_num')
-capital_gain = tf.feature_column.numeric_column('capital_gain')
-capital_loss = tf.feature_column.numeric_column('capital_loss')
-hours_per_week = tf.feature_column.numeric_column('hours_per_week')
-```
-
-### Making Continuous Features Categorical through Bucketization
-
-Sometimes the relationship between a continuous feature and the label is not
-linear. As a hypothetical example, a person's income may grow with age in the
-early stage of one's career, then the growth may slow at some point, and finally
-the income decreases after retirement. In this scenario, using the raw `age` as
-a real-valued feature column might not be a good choice because the model can
-only learn one of the three cases:
-
-1. Income always increases at some rate as age grows (positive correlation),
-1. Income always decreases at some rate as age grows (negative correlation), or
-1. Income stays the same regardless of age (no correlation).
-
-If we want to learn the fine-grained correlation between income and each age
-group separately, we can leverage **bucketization**. Bucketization is a process
-of dividing the entire range of a continuous feature into a set of consecutive
-bins/buckets, and then converting the original numerical feature into a bucket
-ID (as a categorical feature) depending on which bucket that value falls into.
-So, we can define a `bucketized_column` over `age` as:
-
-```python
-age_buckets = tf.feature_column.bucketized_column(
- age, boundaries=[18, 25, 30, 35, 40, 45, 50, 55, 60, 65])
-```
-
-where `boundaries` is a list of bucket boundaries. In this case, there are
-10 boundaries, resulting in 11 age-group buckets (age 17 and below, 18-24,
-25-29, ..., and 65 and over).
-
-### Intersecting Multiple Columns with CrossedColumn
-
-Using each base feature column separately may not be enough to explain the data.
-For example, the correlation between education and the label (earning > 50,000
-dollars) may be different for different occupations. Therefore, if we only learn
-a single model weight for `education="Bachelors"` and `education="Masters"`, we
-won't be able to capture every single education-occupation combination (e.g.
-distinguishing between `education="Bachelors" AND occupation="Exec-managerial"`
-and `education="Bachelors" AND occupation="Craft-repair"`). To learn the
-differences between different feature combinations, we can add **crossed feature
-columns** to the model.
-
-```python
-education_x_occupation = tf.feature_column.crossed_column(
- ['education', 'occupation'], hash_bucket_size=1000)
-```
-
-We can also create a `CrossedColumn` over more than two columns. Each
-constituent column can be a categorical base feature column
-(`CategoricalColumn`), a bucketized real-valued feature column
-(`BucketizedColumn`), or even another `CrossedColumn`. Here's an example:
-
-```python
-age_buckets_x_education_x_occupation = tf.feature_column.crossed_column(
- [age_buckets, 'education', 'occupation'], hash_bucket_size=1000)
-```
-
-## Defining The Logistic Regression Model
-
-After processing the input data and defining all the feature columns, we're now
-ready to put them all together and build a Logistic Regression model. In the
-previous section we've seen several types of base and derived feature columns,
-including:
-
-* `CategoricalColumn`
-* `NumericColumn`
-* `BucketizedColumn`
-* `CrossedColumn`
-
-All of these are subclasses of the abstract `FeatureColumn` class, and can be
-added to the `feature_columns` field of a model:
-
-```python
-base_columns = [
- education, marital_status, relationship, workclass, occupation,
- age_buckets,
-]
-crossed_columns = [
- tf.feature_column.crossed_column(
- ['education', 'occupation'], hash_bucket_size=1000),
- tf.feature_column.crossed_column(
- [age_buckets, 'education', 'occupation'], hash_bucket_size=1000),
-]
-
-model_dir = tempfile.mkdtemp()
-model = tf.estimator.LinearClassifier(
- model_dir=model_dir, feature_columns=base_columns + crossed_columns)
-```
-
-The model also automatically learns a bias term, which controls the prediction
-one would make without observing any features (see the section "How Logistic
-Regression Works" for more explanations). The learned model files will be stored
-in `model_dir`.
-
-## Training and Evaluating Our Model
-
-After adding all the features to the model, let's look at how to actually
-train it. Training the model takes just a single command using the
-tf.estimator API:
-
-```python
-model.train(input_fn=lambda: input_fn(train_data, num_epochs, True, batch_size))
-```
-
-After the model is trained, we can evaluate how good our model is at predicting
-the labels of the holdout data:
-
-```python
-results = model.evaluate(input_fn=lambda: input_fn(
- test_data, 1, False, batch_size))
-for key in sorted(results):
- print('%s: %s' % (key, results[key]))
-```
-
-The first line of the final output should be something like
-`accuracy: 0.83557522`, which means the accuracy is 83.6%. Feel free to try more
-features and transformations and see if you can do even better!
-
-After the model is evaluated, we can use it to predict whether an individual
-has an annual income of over 50,000 dollars, given that individual's
-information:
-
-```python
-pred_iter = model.predict(input_fn=lambda: input_fn(FLAGS.test_data, 1, False, 1))
-for pred in pred_iter:
-  print(pred['classes'])
-```
-
-The prediction output will be `[b'1']` or `[b'0']`, indicating whether or not
-the corresponding individual has an annual income of over 50,000 dollars.
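-
-If you also want the model's confidence rather than just the predicted class,
-the prediction dicts returned by canned classifiers typically also carry a
-`probabilities` entry (an assumption here; check the available keys in your
-TensorFlow version):
-
-```python
-pred_iter = model.predict(input_fn=lambda: input_fn(FLAGS.test_data, 1, False, 1))
-for pred in pred_iter:
-  # 'classes' holds the predicted label; 'probabilities' the per-class scores.
-  print(pred['classes'], pred['probabilities'])
-```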
-
-If you'd like to see a working end-to-end example, you can download our
-[example code](https://github.com/tensorflow/models/tree/master/official/wide_deep/wide_deep.py)
-and set the `model_type` flag to `wide`.
-
-## Adding Regularization to Prevent Overfitting
-
-Regularization is a technique used to avoid **overfitting**. Overfitting
-happens when your model does well on the data it is trained on, but worse on
-test data that the model has not seen before, such as live traffic. Overfitting
-generally occurs when a model is excessively complex, such as having too many
-parameters relative to the number of observed training examples. Regularization
-allows you to control your model's complexity and makes the model generalize
-better to unseen data.
-
-You can add L1 and L2 regularization to the linear model as follows:
-
-```python
-model = tf.estimator.LinearClassifier(
- model_dir=model_dir, feature_columns=base_columns + crossed_columns,
- optimizer=tf.train.FtrlOptimizer(
- learning_rate=0.1,
- l1_regularization_strength=1.0,
- l2_regularization_strength=1.0))
-```
-
-One important difference between L1 and L2 regularization is that L1
-regularization tends to drive model weights to exactly zero, creating sparser
-models, whereas L2 regularization also pushes the model weights toward zero but
-rarely makes them exactly zero. Therefore, if you increase the strength of L1
-regularization, you will have a smaller model size because many of the model
-weights will be zero. This is often desirable when the feature space is very
-large but sparse, and when there are resource constraints that prevent you from
-serving a model that is too large.
-
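-To build intuition for why L1 creates sparsity, here is a toy NumPy sketch
-(not the actual FTRL update) contrasting the proximal steps of the two
-penalties:
-
-```python
-import numpy as np
-
-w = np.array([0.8, -0.05, 0.3, -0.02])  # hypothetical model weights
-step, l1, l2 = 0.1, 1.0, 1.0
-
-# L2 shrinks every weight multiplicatively; small weights stay nonzero.
-w_l2 = w / (1.0 + step * l2)
-
-# L1 soft-thresholding snaps weights within the threshold to exactly zero.
-w_l1 = np.sign(w) * np.maximum(np.abs(w) - step * l1, 0.0)
-
-print(w_l2)  # ≈ [0.727, -0.045, 0.273, -0.018]
-print(w_l1)  # ≈ [0.7, -0., 0.2, -0.]
-```
-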
-In practice, you should try various combinations of L1 and L2 regularization
-strengths to find the parameters that best control overfitting and give you a
-desirable model size.
-
-## How Logistic Regression Works
-
-Finally, let's take a minute to talk about what the Logistic Regression model
-actually looks like in case you're not already familiar with it. We'll denote
-the label as \\(Y\\), and the set of observed features as a feature vector
-\\(\mathbf{x}=[x_1, x_2, ..., x_d]\\). We define \\(Y=1\\) if an individual
-earned > 50,000 dollars and \\(Y=0\\) otherwise. In Logistic Regression, the
-probability of the label being positive (\\(Y=1\\)) given the features
-\\(\mathbf{x}\\) is given as:
-
-$$ P(Y=1|\mathbf{x}) = \frac{1}{1+\exp(-(\mathbf{w}^T\mathbf{x}+b))}$$
-
-where \\(\mathbf{w}=[w_1, w_2, ..., w_d]\\) are the model weights for the
-features \\(\mathbf{x}=[x_1, x_2, ..., x_d]\\). \\(b\\) is a constant that is
-often called the **bias** of the model. The equation consists of two parts—a
-linear model and a logistic function:
-
-* **Linear Model**: First, we can see that \\(\mathbf{w}^T\mathbf{x}+b = b +
- w_1x_1 + ... +w_dx_d\\) is a linear model where the output is a linear
- function of the input features \\(\mathbf{x}\\). The bias \\(b\\) is the
- prediction one would make without observing any features. The model weight
- \\(w_i\\) reflects how the feature \\(x_i\\) is correlated with the positive
- label. If \\(x_i\\) is positively correlated with the positive label, the
- weight \\(w_i\\) increases, and the probability \\(P(Y=1|\mathbf{x})\\) will
- be closer to 1. On the other hand, if \\(x_i\\) is negatively correlated
- with the positive label, then the weight \\(w_i\\) decreases and the
- probability \\(P(Y=1|\mathbf{x})\\) will be closer to 0.
-
-* **Logistic Function**: Second, we can see that there's a logistic function
- (also known as the sigmoid function) \\(S(t) = 1/(1+\exp(-t))\\) being
- applied to the linear model. The logistic function is used to convert the
- output of the linear model \\(\mathbf{w}^T\mathbf{x}+b\\) from any real
- number into the range of \\([0, 1]\\), which can be interpreted as a
- probability.
-
-Model training is an optimization problem: The goal is to find a set of model
-weights (i.e. model parameters) to minimize a **loss function** defined over the
-training data, such as logistic loss for Logistic Regression models. The loss
-function measures the discrepancy between the ground-truth label and the model's
-prediction. If the prediction is very close to the ground-truth label, the loss
-value will be low; if the prediction is very far from the label, the loss
-value will be high.
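-
-To make the math concrete, here is a minimal numeric sketch, with hypothetical
-weights, of how a trained model would score one example:
-
-```python
-import math
-
-w = [0.5, -1.2, 0.8]  # hypothetical model weights
-b = -0.1              # hypothetical bias
-x = [1.0, 0.0, 2.0]   # one feature vector
-y = 1                 # ground-truth label
-
-logit = b + sum(wi * xi for wi, xi in zip(w, x))  # w^T x + b
-p = 1.0 / (1.0 + math.exp(-logit))                # P(Y=1|x)
-
-# Logistic (cross-entropy) loss for this single example.
-loss = -(y * math.log(p) + (1 - y) * math.log(1 - p))
-print(p, loss)  # p ≈ 0.88, loss ≈ 0.13
-```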
-
-## Learn Deeper
-
-If you're interested in learning more, check out our
-@{$wide_and_deep$Wide & Deep Learning Tutorial} where we'll show you how to
-combine the strengths of linear models and deep neural networks by jointly
-training them using the tf.estimator API.
diff --git a/tensorflow/docs_src/tutorials/wide_and_deep.md b/tensorflow/docs_src/tutorials/wide_and_deep.md
deleted file mode 100644
index 44677a810b..0000000000
--- a/tensorflow/docs_src/tutorials/wide_and_deep.md
+++ /dev/null
@@ -1,243 +0,0 @@
-# TensorFlow Wide & Deep Learning Tutorial
-
-In the previous @{$wide$TensorFlow Linear Model Tutorial}, we trained a logistic
-regression model to predict the probability that an individual has an annual
-income of over 50,000 dollars using the
-[Census Income Dataset](https://archive.ics.uci.edu/ml/datasets/Census+Income).
-TensorFlow is great for training deep neural networks too, and you might be
-wondering which one you should choose—well, why not both? Would it be possible
-to combine the strengths of both in one model?
-
-In this tutorial, we'll introduce how to use the tf.estimator API to jointly
-train a wide linear model and a deep feed-forward neural network. This approach
-combines the strengths of memorization and generalization. It's useful for
-generic large-scale regression and classification problems with sparse input
-features (e.g., categorical features with a large number of possible feature
-values). If you're interested in learning more about how Wide & Deep Learning
-works, please check out our [research paper](https://arxiv.org/abs/1606.07792).
-
-![Wide & Deep Spectrum of Models](https://www.tensorflow.org/images/wide_n_deep.svg "Wide & Deep")
-
-The figure above shows a comparison of a wide model (logistic regression with
-sparse features and transformations), a deep model (feed-forward neural network
-with an embedding layer and several hidden layers), and a Wide & Deep model
-(joint training of both). At a high level, there are only 3 steps to configure a
-wide, deep, or Wide & Deep model using the tf.estimator API:
-
-1. Select features for the wide part: Choose the sparse base columns and
- crossed columns you want to use.
-1. Select features for the deep part: Choose the continuous columns, the
- embedding dimension for each categorical column, and the hidden layer sizes.
-1. Put them all together in a Wide & Deep model
- (`DNNLinearCombinedClassifier`).
-
-And that's it! Let's go through a simple example.
-
-## Setup
-
-To try the code for this tutorial:
-
-1. @{$install$Install TensorFlow} if you haven't already.
-
-2. Download [the tutorial code](https://github.com/tensorflow/models/tree/master/official/wide_deep/).
-
-3. Execute the data download script we provide to you:
-
- $ python data_download.py
-
-4. Execute the tutorial code with the following command to train the wide and
-deep model described in this tutorial:
-
- $ python wide_deep.py
-
-Read on to find out how this code builds its model.
-
-
-## Define Base Feature Columns
-
-First, let's define the base categorical and continuous feature columns that
-we'll use. These base columns will be the building blocks used by both the wide
-part and the deep part of the model.
-
-```python
-import tensorflow as tf
-
-# Continuous columns
-age = tf.feature_column.numeric_column('age')
-education_num = tf.feature_column.numeric_column('education_num')
-capital_gain = tf.feature_column.numeric_column('capital_gain')
-capital_loss = tf.feature_column.numeric_column('capital_loss')
-hours_per_week = tf.feature_column.numeric_column('hours_per_week')
-
-education = tf.feature_column.categorical_column_with_vocabulary_list(
- 'education', [
- 'Bachelors', 'HS-grad', '11th', 'Masters', '9th', 'Some-college',
- 'Assoc-acdm', 'Assoc-voc', '7th-8th', 'Doctorate', 'Prof-school',
- '5th-6th', '10th', '1st-4th', 'Preschool', '12th'])
-
-marital_status = tf.feature_column.categorical_column_with_vocabulary_list(
- 'marital_status', [
- 'Married-civ-spouse', 'Divorced', 'Married-spouse-absent',
- 'Never-married', 'Separated', 'Married-AF-spouse', 'Widowed'])
-
-relationship = tf.feature_column.categorical_column_with_vocabulary_list(
- 'relationship', [
- 'Husband', 'Not-in-family', 'Wife', 'Own-child', 'Unmarried',
- 'Other-relative'])
-
-workclass = tf.feature_column.categorical_column_with_vocabulary_list(
- 'workclass', [
- 'Self-emp-not-inc', 'Private', 'State-gov', 'Federal-gov',
- 'Local-gov', '?', 'Self-emp-inc', 'Without-pay', 'Never-worked'])
-
-# To show an example of hashing:
-occupation = tf.feature_column.categorical_column_with_hash_bucket(
- 'occupation', hash_bucket_size=1000)
-
-# Transformations.
-age_buckets = tf.feature_column.bucketized_column(
- age, boundaries=[18, 25, 30, 35, 40, 45, 50, 55, 60, 65])
-```
-
-## The Wide Model: Linear Model with Crossed Feature Columns
-
-The wide model is a linear model with a wide set of sparse and crossed feature
-columns:
-
-```python
-base_columns = [
- education, marital_status, relationship, workclass, occupation,
- age_buckets,
-]
-
-crossed_columns = [
- tf.feature_column.crossed_column(
- ['education', 'occupation'], hash_bucket_size=1000),
- tf.feature_column.crossed_column(
- [age_buckets, 'education', 'occupation'], hash_bucket_size=1000),
-]
-```
-
-You can also see the @{$wide$TensorFlow Linear Model Tutorial} for more details.
-
-Wide models with crossed feature columns can memorize sparse interactions
-between features effectively. That being said, one limitation of crossed feature
-columns is that they do not generalize to feature combinations that have not
-appeared in the training data. Let's add a deep model with embeddings to fix
-that.
-
-## The Deep Model: Neural Network with Embeddings
-
-The deep model is a feed-forward neural network, as shown in the previous
-figure. Each of the sparse, high-dimensional categorical features is first
-converted into a low-dimensional, dense real-valued vector, often referred to
-as an embedding vector. These low-dimensional dense embedding vectors are
-concatenated with the continuous features, and then fed into the hidden layers
-of a neural network in the forward pass. The embedding values are initialized
-randomly, and are trained along with all other model parameters to minimize the
-training loss. If you're interested in learning more about embeddings, check out
-the TensorFlow tutorial on @{$word2vec$Vector Representations of Words} or
-[Word embedding](https://en.wikipedia.org/wiki/Word_embedding) on Wikipedia.
-
-Another way to represent categorical columns to feed into a neural network is
-via a one-hot or multi-hot representation. This is often appropriate for
-categorical columns with only a few possible values. As an example of a one-hot
-representation, for the relationship column, `"Husband"` can be represented as
-[1, 0, 0, 0, 0, 0], and `"Not-in-family"` as [0, 1, 0, 0, 0, 0], etc. This is a
-fixed representation, whereas embeddings are more flexible and are learned at
-training time.
-
-We'll configure the embeddings for the categorical columns using
-`embedding_column`, and concatenate them with the continuous columns.
-We also use `indicator_column` to create multi-hot representations of some
-categorical columns.
-
-```python
-deep_columns = [
- age,
- education_num,
- capital_gain,
- capital_loss,
- hours_per_week,
- tf.feature_column.indicator_column(workclass),
- tf.feature_column.indicator_column(education),
- tf.feature_column.indicator_column(marital_status),
- tf.feature_column.indicator_column(relationship),
- # To show an example of embedding
- tf.feature_column.embedding_column(occupation, dimension=8),
-]
-```
-
-The higher the `dimension` of the embedding is, the more degrees of freedom the
-model will have to learn the representations of the features. For simplicity, we
-set the dimension to 8 for all feature columns here. Empirically, a more
-informed decision for the number of dimensions is to start with a value on the
-order of \\(\log_2(n)\\) or \\(k\sqrt[4]n\\), where \\(n\\) is the number of
-unique features in a feature column and \\(k\\) is a small constant (usually
-smaller than 10).
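-
-As a hypothetical illustration of this rule of thumb, for a feature column
-with 1,000 unique values:
-
-```python
-import math
-
-n = 1000  # hypothetical number of unique values in a feature column
-k = 5     # a small constant, usually smaller than 10
-
-print(math.log2(n))   # ≈ 9.97: suggests an embedding dimension around 10
-print(k * n ** 0.25)  # ≈ 28.1: or around 28 with the k * n**(1/4) rule
-```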
-
-Through dense embeddings, deep models can generalize better and make predictions
-on feature pairs that were previously unseen in the training data. However, it
-is difficult to learn effective low-dimensional representations for feature
-columns when the underlying interaction matrix between two feature columns is
-sparse and high-rank. In such cases, the interaction between most feature
-pairs should be zero except for a few, but dense embeddings will lead to nonzero
-predictions for all feature pairs, and thus can over-generalize. On the other
-hand, linear models with crossed features can memorize these “exception rules”
-effectively with fewer model parameters.
-
-Now, let's see how to jointly train wide and deep models and allow them to
-complement each other’s strengths and weaknesses.
-
-## Combining Wide and Deep Models into One
-
-The wide model and the deep model are combined by summing their final output
-log-odds to form the prediction, which is then fed to a logistic loss
-function. All the graph definitions and variable allocations have already been
-handled for you under the hood, so you simply need to create a
-`DNNLinearCombinedClassifier`:
-
-```python
-model = tf.estimator.DNNLinearCombinedClassifier(
- model_dir='/tmp/census_model',
- linear_feature_columns=base_columns + crossed_columns,
- dnn_feature_columns=deep_columns,
- dnn_hidden_units=[100, 50])
-```
-
-## Training and Evaluating The Model
-
-Before we train the model, let's read in the Census dataset as we did in the
-@{$wide$TensorFlow Linear Model tutorial}. See `data_download.py` as well as
-`input_fn` within
-[`wide_deep.py`](https://github.com/tensorflow/models/tree/master/official/wide_deep/wide_deep.py).
-
-After reading in the data, you can train and evaluate the model:
-
-```python
-# Train and evaluate the model every `FLAGS.epochs_per_eval` epochs.
-for n in range(FLAGS.train_epochs // FLAGS.epochs_per_eval):
- model.train(input_fn=lambda: input_fn(
- FLAGS.train_data, FLAGS.epochs_per_eval, True, FLAGS.batch_size))
-
- results = model.evaluate(input_fn=lambda: input_fn(
- FLAGS.test_data, 1, False, FLAGS.batch_size))
-
- # Display evaluation metrics
- print('Results at epoch', (n + 1) * FLAGS.epochs_per_eval)
- print('-' * 30)
-
- for key in sorted(results):
- print('%s: %s' % (key, results[key]))
-```
-
-The final output accuracy should be somewhere around 85.5%. If you'd like to
-see a working end-to-end example, you can download our
-[example code](https://github.com/tensorflow/models/tree/master/official/wide_deep/wide_deep.py).
-
-Note that this tutorial is just a quick example on a small dataset to get you
-familiar with the API. Wide & Deep Learning will be even more powerful if you
-try it on a large dataset with many sparse feature columns that have a large
-number of possible feature values. Again, feel free to take a look at our
-[research paper](https://arxiv.org/abs/1606.07792) for more ideas about how to
-apply Wide & Deep Learning in real-world large-scale machine learning problems.