Diffstat (limited to 'tensorflow/docs_src/guide/faq.md')
-rw-r--r--  tensorflow/docs_src/guide/faq.md | 71
1 file changed, 35 insertions(+), 36 deletions(-)
diff --git a/tensorflow/docs_src/guide/faq.md b/tensorflow/docs_src/guide/faq.md
index b6291a9ffa..8370097560 100644
--- a/tensorflow/docs_src/guide/faq.md
+++ b/tensorflow/docs_src/guide/faq.md
@@ -28,13 +28,13 @@ See also the
#### Why does `c = tf.matmul(a, b)` not execute the matrix multiplication immediately?
In the TensorFlow Python API, `a`, `b`, and `c` are
-@{tf.Tensor} objects. A `Tensor` object is
+`tf.Tensor` objects. A `Tensor` object is
a symbolic handle to the result of an operation, but does not actually hold the
values of the operation's output. Instead, TensorFlow encourages users to build
up complicated expressions (such as entire neural networks and their gradients) as
a dataflow graph. You then offload the computation of the entire dataflow graph
(or a subgraph of it) to a TensorFlow
-@{tf.Session}, which is able to execute the
+`tf.Session`, which is able to execute the
whole computation much more efficiently than executing the operations
one-by-one.
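For illustration, a minimal sketch of this deferred-execution model (TensorFlow 1.x graph mode; the values are made up):

```python
import tensorflow as tf

a = tf.constant([[1.0, 2.0]])    # 1x2 matrix
b = tf.constant([[3.0], [4.0]])  # 2x1 matrix
c = tf.matmul(a, b)              # adds a node to the graph; nothing runs yet

with tf.Session() as sess:
    print(sess.run(c))           # the multiplication executes here: [[11.]]
```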
@@ -46,7 +46,7 @@ device, and `"/device:GPU:i"` (or `"/gpu:i"`) for the *i*th GPU device.
#### How do I place operations on a particular device?
To place a group of operations on a device, create them within a
-@{tf.device$`with tf.device(name):`} context. See
+`tf.device` context. See
the how-to documentation on
@{$using_gpu$using GPUs with TensorFlow} for details of how
TensorFlow assigns operations to devices, and the
@@ -63,17 +63,17 @@ See also the
Feeding is a mechanism in the TensorFlow Session API that allows you to
substitute different values for one or more tensors at run time. The `feed_dict`
-argument to @{tf.Session.run} is a
-dictionary that maps @{tf.Tensor} objects to
+argument to `tf.Session.run` is a
+dictionary that maps `tf.Tensor` objects to
numpy arrays (and some other types), which will be used as the values of those
tensors in the execution of a step.
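A minimal feeding sketch (the placeholder and its shape are illustrative):

```python
import numpy as np
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None])  # value supplied at run time
y = x * 2.0

with tf.Session() as sess:
    # The feed_dict maps the tensor `x` to a numpy array for this step only.
    print(sess.run(y, feed_dict={x: np.array([1.0, 2.0, 3.0])}))  # [2. 4. 6.]
```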
#### What is the difference between `Session.run()` and `Tensor.eval()`?
-If `t` is a @{tf.Tensor} object,
-@{tf.Tensor.eval} is shorthand for
-@{tf.Session.run}, where `sess` is the
-current @{tf.get_default_session}. The
+If `t` is a `tf.Tensor` object,
+`t.eval()` (`tf.Tensor.eval`) is shorthand for
+`sess.run(t)` (`tf.Session.run`), where `sess` is the
+current default session (`tf.get_default_session`). The
two following snippets of code are equivalent:
```python
@@ -99,11 +99,11 @@ sessions, it may be more straightforward to make explicit calls to
#### Do Sessions have a lifetime? What about intermediate tensors?
Sessions can own resources, such as
-@{tf.Variable},
-@{tf.QueueBase}, and
-@{tf.ReaderBase}. These resources can sometimes use
+`tf.Variable`,
+`tf.QueueBase`, and
+`tf.ReaderBase`. These resources can sometimes use
a significant amount of memory, and can be released when the session is closed by calling
-@{tf.Session.close}.
+`tf.Session.close`.
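For example, a sketch of the two common ways to release a session's resources (the variable here is only for illustration):

```python
import tensorflow as tf

v = tf.Variable(tf.zeros([1000, 1000]))  # a resource owned by each session

sess = tf.Session()
sess.run(v.initializer)  # allocates the variable's storage in this session
# ... run steps ...
sess.close()             # releases the session's resources

# Equivalently, a `with` block closes the session automatically:
with tf.Session() as sess:
    sess.run(v.initializer)
```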
The intermediate tensors that are created as part of a call to
@{$python/client$`Session.run()`} will be freed at or before the
@@ -120,7 +120,7 @@ dimensions:
devices, which makes it possible to speed up
@{$deep_cnn$CIFAR-10 training using multiple GPUs}.
* The Session API allows multiple concurrent steps (i.e. calls to
- @{tf.Session.run} in parallel). This
+ `tf.Session.run` in parallel). This
enables the runtime to get higher throughput, if a single step does not use
all of the resources in your computer.
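As a rough sketch of concurrent steps (the graph and thread count are illustrative; `Session.run()` calls are thread-safe):

```python
import threading
import tensorflow as tf

x = tf.placeholder(tf.float32)
y = x * x

sess = tf.Session()

def run_step(value):
    # Concurrent Session.run() calls may overlap inside the runtime.
    print(sess.run(y, feed_dict={x: value}))

threads = [threading.Thread(target=run_step, args=(float(i),)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
sess.close()
```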
@@ -151,8 +151,8 @@ than 3.5.
#### Why does `Session.run()` hang when using a reader or a queue?
-The @{tf.ReaderBase} and
-@{tf.QueueBase} classes provide special operations that
+The `tf.ReaderBase` and
+`tf.QueueBase` classes provide special operations that
can *block* until input (or free space in a bounded queue) becomes
available. These operations allow you to build sophisticated
@{$reading_data$input pipelines}, at the cost of making the
@@ -169,9 +169,9 @@ See also the how-to documentation on @{$variables$variables} and
#### What is the lifetime of a variable?
A variable is created when you first run the
-@{tf.Variable.initializer}
+`tf.Variable.initializer`
operation for that variable in a session. It is destroyed when that
-@{tf.Session.close}.
+session is closed by calling `tf.Session.close`.
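A short sketch of this lifetime (the value is illustrative):

```python
import tensorflow as tf

v = tf.Variable(42)

sess = tf.Session()
sess.run(v.initializer)  # the variable now exists in this session
print(sess.run(v))       # 42
sess.close()             # closing the session destroys the variable's storage
```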
#### How do variables behave when they are concurrently accessed?
@@ -179,32 +179,31 @@ Variables allow concurrent read and write operations. The value read from a
variable may change if it is concurrently updated. By default, concurrent
assignment operations to a variable are allowed to run with no mutual exclusion.
To acquire a lock when assigning to a variable, pass `use_locking=True` to
-@{tf.Variable.assign}.
+`tf.Variable.assign`.
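For example, a minimal sketch of a locked assignment (the increment is illustrative):

```python
import tensorflow as tf

v = tf.Variable(0)
# With use_locking=True, concurrent assignments to `v` are serialized.
update = v.assign(v + 1, use_locking=True)

with tf.Session() as sess:
    sess.run(v.initializer)
    sess.run(update)
    print(sess.run(v))  # 1
```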
## Tensor shapes
See also the
-@{tf.TensorShape}.
+`tf.TensorShape` documentation.
#### How can I determine the shape of a tensor in Python?
In TensorFlow, a tensor has both a static (inferred) shape and a dynamic (true)
shape. The static shape can be read using the
-@{tf.Tensor.get_shape}
+`tf.Tensor.get_shape`
method: this shape is inferred from the operations that were used to create the
-tensor, and may be
-@{tf.TensorShape$partially complete}. If the static
-shape is not fully defined, the dynamic shape of a `Tensor` `t` can be
-determined by evaluating @{tf.shape$`tf.shape(t)`}.
+tensor, and may be partially complete (the static shape may contain `None`). If
+the static shape is not fully defined, the dynamic shape of a `tf.Tensor` `t`
+can be determined by evaluating `tf.shape(t)`.
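A small sketch contrasting the two shapes (the placeholder shape is illustrative):

```python
import tensorflow as tf

t = tf.placeholder(tf.float32, shape=[None, 3])
print(t.get_shape())         # static shape: (?, 3), partially defined

dynamic_shape = tf.shape(t)  # an op that yields the true shape at run time
with tf.Session() as sess:
    print(sess.run(dynamic_shape, feed_dict={t: [[1.0, 2.0, 3.0]]}))  # [1 3]
```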
#### What is the difference between `x.set_shape()` and `x = tf.reshape(x)`?
-The @{tf.Tensor.set_shape} method updates
+The `tf.Tensor.set_shape` method updates
the static shape of a `Tensor` object, and it is typically used to provide
additional shape information when this cannot be inferred directly. It does not
change the dynamic shape of the tensor.
-The @{tf.reshape} operation creates
+The `tf.reshape` operation creates
a new tensor with a different dynamic shape.
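A minimal sketch of the contrast (the shapes are illustrative):

```python
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, None])

# set_shape() refines the static shape only; the data is untouched.
x.set_shape([None, 28])

# reshape() returns a new tensor whose dynamic shape is actually different.
flat = tf.reshape(x, [-1])
```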
#### How do I build a graph that works with variable batch sizes?
@@ -212,9 +211,9 @@ a new tensor with a different dynamic shape.
It is often useful to build a graph that works with variable batch sizes
so that the same code can be used for (mini-)batch training, and
single-instance inference. The resulting graph can be
-@{tf.Graph.as_graph_def$saved as a protocol buffer}
+saved as a protocol buffer with `tf.Graph.as_graph_def`
and
-@{tf.import_graph_def$imported into another program}.
+imported into another program with `tf.import_graph_def`.
When building a variable-size graph, the most important thing to remember is not
to encode the batch size as a Python constant, but instead to use a symbolic
@@ -224,7 +223,7 @@ to encode the batch size as a Python constant, but instead to use a symbolic
to extract the batch dimension from a `Tensor` called `input`, and store it in
a `Tensor` called `batch_size`.
-* Use @{tf.reduce_mean} instead
+* Use `tf.reduce_mean` instead
of `tf.reduce_sum(...) / batch_size`.
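Putting the pieces together, a sketch of the pattern (the names `input` and `batch_size` follow the text above; the feature size is made up):

```python
import tensorflow as tf

# Leave the batch dimension unspecified instead of hard-coding it.
input = tf.placeholder(tf.float32, shape=[None, 784])

# Extract the batch size symbolically at run time.
batch_size = tf.shape(input)[0]

# reduce_mean divides by the true batch size, whatever it is.
loss = tf.reduce_mean(tf.square(input))
```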
@@ -259,19 +258,19 @@ See the how-to documentation for
There are three main options for dealing with data in a custom format.
The easiest option is to write parsing code in Python that transforms the data
-into a numpy array. Then, use @{tf.data.Dataset.from_tensor_slices} to
+into a numpy array. Then, use `tf.data.Dataset.from_tensor_slices` to
create an input pipeline from the in-memory data.
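For example, a sketch of this in-memory route (the arrays stand in for the output of your own Python parsing code):

```python
import numpy as np
import tensorflow as tf

# Parse the custom format in plain Python first (illustrative data here).
features = np.random.rand(100, 4).astype(np.float32)
labels = np.random.randint(0, 2, size=100).astype(np.int64)

dataset = tf.data.Dataset.from_tensor_slices((features, labels))
dataset = dataset.shuffle(100).batch(32)
```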
If your data doesn't fit in memory, try doing the parsing in the Dataset
pipeline. Start with an appropriate file reader, like
-@{tf.data.TextLineDataset}. Then convert the dataset by mapping
-@{tf.data.Dataset.map$mapping} appropriate operations over it.
-Prefer predefined TensorFlow operations such as @{tf.decode_raw},
-@{tf.decode_csv}, @{tf.parse_example}, or @{tf.image.decode_png}.
+`tf.data.TextLineDataset`. Then convert the dataset by mapping
+appropriate operations over it with `tf.data.Dataset.map`.
+Prefer predefined TensorFlow operations such as `tf.decode_raw`,
+`tf.decode_csv`, `tf.parse_example`, or `tf.image.decode_png`.
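A sketch of such a pipeline (the file name `data.csv` and its two-column layout are hypothetical):

```python
import tensorflow as tf

# `data.csv` is a hypothetical file with two float columns.
dataset = tf.data.TextLineDataset(["data.csv"])

def parse_line(line):
    # decode_csv splits one CSV line into a tensor per column.
    x, y = tf.decode_csv(line, record_defaults=[[0.0], [0.0]])
    return x, y

dataset = dataset.map(parse_line)
```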
If your data is not easily parsable with the built-in TensorFlow operations,
consider converting it, offline, to a format that is easily parsable, such
-as @{tf.python_io.TFRecordWriter$`TFRecord`} format.
+as the `TFRecord` format, written with `tf.python_io.TFRecordWriter`.
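For illustration, a rough sketch of writing and re-reading a `TFRecord` file (the file name and feature layout are made up):

```python
import tensorflow as tf

# Offline conversion: write each example into a TFRecord file once.
with tf.python_io.TFRecordWriter("data.tfrecords") as writer:
    example = tf.train.Example(features=tf.train.Features(feature={
        "value": tf.train.Feature(float_list=tf.train.FloatList(value=[1.0])),
    }))
    writer.write(example.SerializeToString())

# Later, read it back efficiently with the Dataset API.
dataset = tf.data.TFRecordDataset(["data.tfrecords"])
```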
The most efficient method to customize the parsing behavior is to
@{$adding_an_op$add a new op written in C++} that parses your