author    Vijay Vasudevan <vrv@google.com>    2015-11-07 13:58:24 -0800
committer Vijay Vasudevan <vrv@google.com>    2015-11-07 13:58:24 -0800
commit fddaed524622417900d745fe8f115562c55ac49a (patch)
tree   cabb2fc16540a27748b60329195966d535f48837 /tensorflow/g3doc/resources/faq.md
parent 7de9099a739c9dc62b1ca55c1eeef90acbfa7be9 (diff)
TensorFlow: Upstream commits to git.
Changes:
- More documentation edits, fixes to anchors, fixes to mathjax, new images, etc.
- Add rnn models to pip install package.

Base CL: 107312343
Diffstat (limited to 'tensorflow/g3doc/resources/faq.md')
-rw-r--r--  tensorflow/g3doc/resources/faq.md  |  61
1 file changed, 31 insertions(+), 30 deletions(-)
diff --git a/tensorflow/g3doc/resources/faq.md b/tensorflow/g3doc/resources/faq.md
index a2b9a58e08..949806acee 100644
--- a/tensorflow/g3doc/resources/faq.md
+++ b/tensorflow/g3doc/resources/faq.md
@@ -1,4 +1,4 @@
-# Frequently Asked Questions
+# Frequently Asked Questions <a class="md-anchor" id="AUTOGENERATED-frequently-asked-questions"></a>
This document provides answers to some of the frequently asked questions about
TensorFlow. If you have a question that is not covered here, you might find an
@@ -6,6 +6,7 @@ answer on one of the TensorFlow [community resources](index.md).
<!-- TOC-BEGIN This section is generated by neural network: DO NOT EDIT! -->
## Contents
+### [Frequently Asked Questions](#AUTOGENERATED-frequently-asked-questions)
* [Building a TensorFlow graph](#AUTOGENERATED-building-a-tensorflow-graph)
* [Running a TensorFlow computation](#AUTOGENERATED-running-a-tensorflow-computation)
* [Variables](#AUTOGENERATED-variables)
@@ -17,12 +18,12 @@ answer on one of the TensorFlow [community resources](index.md).
<!-- TOC-END This section was generated by neural network, THANKS FOR READING! -->
-## Building a TensorFlow graph <div class="md-anchor" id="AUTOGENERATED-building-a-tensorflow-graph">{#AUTOGENERATED-building-a-tensorflow-graph}</div>
+## Building a TensorFlow graph <a class="md-anchor" id="AUTOGENERATED-building-a-tensorflow-graph"></a>
See also the
[API documentation on building graphs](../api_docs/python/framework.md).
-#### Why does `c = tf.matmul(a, b)` not execute the matrix multiplication immediately?
+#### Why does `c = tf.matmul(a, b)` not execute the matrix multiplication immediately? <a class="md-anchor" id="AUTOGENERATED-why-does--c---tf.matmul-a--b---not-execute-the-matrix-multiplication-immediately-"></a>
In the TensorFlow Python API, `a`, `b`, and `c` are
[`Tensor`](../api_docs/python/framework.md#Tensor) objects. A `Tensor` object is
@@ -35,12 +36,12 @@ a dataflow graph. You then offload the computation of the entire dataflow graph
whole computation much more efficiently than executing the operations
one-by-one.
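
A minimal sketch of this deferred-execution model (assuming the Python API of this release; the values are illustrative):

```python
import tensorflow as tf

# Building the graph only records the operations; nothing is computed yet.
a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[1.0, 0.0], [0.0, 1.0]])
c = tf.matmul(a, b)      # `c` is a symbolic Tensor, not a numerical result

# The matrix multiplication runs only when the graph is executed in a Session.
with tf.Session() as sess:
    print(sess.run(c))
```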
-#### How are devices named?
+#### How are devices named? <a class="md-anchor" id="AUTOGENERATED-how-are-devices-named-"></a>
The supported device names are `"/device:CPU:0"` (or `"/cpu:0"`) for the CPU
device, and `"/device:GPU:i"` (or `"/gpu:i"`) for the *i*th GPU device.
-#### How do I place operations on a particular device?
+#### How do I place operations on a particular device? <a class="md-anchor" id="AUTOGENERATED-how-do-i-place-operations-on-a-particular-device-"></a>
To place a group of operations on a device, create them within a
[`with tf.device(name):`](../api_docs/python/framework.md#device) context. See
@@ -50,17 +51,17 @@ TensorFlow assigns operations to devices, and the
[CIFAR-10 tutorial](../tutorials/deep_cnn/index.md) for an example model that
uses multiple GPUs.
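
A short sketch of explicit device placement, assuming the `tf.device()` context manager and the `ConfigProto` options described in the GPU how-to; the device string `"/gpu:1"` is only an example:

```python
import tensorflow as tf

# Pin these ops to the second GPU; "/cpu:0" or "/gpu:0" work the same way.
with tf.device("/gpu:1"):
    a = tf.constant([1.0, 2.0, 3.0], name="a")
    b = tf.constant([4.0, 5.0, 6.0], name="b")
    c = a + b

# log_device_placement prints where each op was actually placed.
with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
    print(sess.run(c))
```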
-#### What are the different types of tensors that are available?
+#### What are the different types of tensors that are available? <a class="md-anchor" id="AUTOGENERATED-what-are-the-different-types-of-tensors-that-are-available-"></a>
TensorFlow supports a variety of different data types and tensor shapes. See the
[ranks, shapes, and types reference](dims_types.md) for more details.
-## Running a TensorFlow computation <div class="md-anchor" id="AUTOGENERATED-running-a-tensorflow-computation">{#AUTOGENERATED-running-a-tensorflow-computation}</div>
+## Running a TensorFlow computation <a class="md-anchor" id="AUTOGENERATED-running-a-tensorflow-computation"></a>
See also the
[API documentation on running graphs](../api_docs/python/client.md).
-#### What's the deal with feeding and placeholders?
+#### What's the deal with feeding and placeholders? <a class="md-anchor" id="AUTOGENERATED-what-s-the-deal-with-feeding-and-placeholders-"></a>
Feeding is a mechanism in the TensorFlow Session API that allows you to
substitute different values for one or more tensors at run time. The `feed_dict`
@@ -76,7 +77,7 @@ optionally allows you to constrain their shape as well. See the
example of how placeholders and feeding can be used to provide the training data
for a neural network.
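
A minimal feeding sketch, assuming the `tf.placeholder()`/`feed_dict` API described above; the shape and values are illustrative:

```python
import tensorflow as tf

# The placeholder must be fed at run time; the shape constraint is optional.
x = tf.placeholder(tf.float32, shape=[None, 3], name="x")
y = tf.reduce_sum(x)

with tf.Session() as sess:
    # feed_dict substitutes a concrete value for the placeholder tensor.
    print(sess.run(y, feed_dict={x: [[1.0, 2.0, 3.0]]}))
```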
-#### What is the difference between `Session.run()` and `Tensor.eval()`?
+#### What is the difference between `Session.run()` and `Tensor.eval()`? <a class="md-anchor" id="AUTOGENERATED-what-is-the-difference-between--session.run----and--tensor.eval----"></a>
If `t` is a [`Tensor`](../api_docs/python/framework.md#Tensor) object,
[`t.eval()`](../api_docs/python/framework.md#Tensor.eval) is shorthand for
@@ -103,7 +104,7 @@ the `with` block. The context manager approach can lead to more concise code for
simple use cases (like unit tests); if your code deals with multiple graphs and
sessions, it may be more straightforward to make explicit calls to `Session.run()`.
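
A small sketch of the equivalence, assuming a default session installed via `Session.as_default()`:

```python
import tensorflow as tf

t = tf.constant(42.0)
sess = tf.Session()

# The two calls below are equivalent, but t.eval() only works while a
# default session is installed (e.g. inside `with sess.as_default():`).
with sess.as_default():
    assert t.eval() == sess.run(t)
sess.close()
```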
-#### Do Sessions have a lifetime? What about intermediate tensors?
+#### Do Sessions have a lifetime? What about intermediate tensors? <a class="md-anchor" id="AUTOGENERATED-do-sessions-have-a-lifetime--what-about-intermediate-tensors-"></a>
Sessions can own resources, such as
[variables](../api_docs/python/state_ops.md#Variable),
@@ -117,13 +118,13 @@ The intermediate tensors that are created as part of a call to
[`Session.run()`](../api_docs/python/client.md) will be freed at or before the
end of the call.
-#### Can I run distributed training on multiple computers?
+#### Can I run distributed training on multiple computers? <a class="md-anchor" id="AUTOGENERATED-can-i-run-distributed-training-on-multiple-computers-"></a>
The initial open-source release of TensorFlow supports multiple devices (CPUs
and GPUs) in a single computer. We are working on a distributed version as well:
if you are interested, please let us know so we can prioritize accordingly.
-#### Does the runtime parallelize parts of graph execution?
+#### Does the runtime parallelize parts of graph execution? <a class="md-anchor" id="AUTOGENERATED-does-the-runtime-parallelize-parts-of-graph-execution-"></a>
The TensorFlow runtime parallelizes graph execution across many different
dimensions:
@@ -138,7 +139,7 @@ dimensions:
enables the runtime to get higher throughput, if a single step does not use
all of the resources in your computer.
-#### Which client languages are supported in TensorFlow?
+#### Which client languages are supported in TensorFlow? <a class="md-anchor" id="AUTOGENERATED-which-client-languages-are-supported-in-tensorflow-"></a>
TensorFlow is designed to support multiple client languages. Currently, the
best-supported client language is [Python](../api_docs/python/index.md). The
@@ -152,7 +153,7 @@ interest. TensorFlow has a
that makes it easy to build a client in many different languages. We invite
contributions of new language bindings.
-#### Does TensorFlow make use of all the devices (GPUs and CPUs) available on my machine?
+#### Does TensorFlow make use of all the devices (GPUs and CPUs) available on my machine? <a class="md-anchor" id="AUTOGENERATED-does-tensorflow-make-use-of-all-the-devices--gpus-and-cpus--available-on-my-machine-"></a>
TensorFlow supports multiple GPUs and CPUs. See the how-to documentation on
[using GPUs with TensorFlow](../how_tos/using_gpu/index.md) for details of how
@@ -163,7 +164,7 @@ uses multiple GPUs.
Note that TensorFlow only uses GPU devices with a compute capability of 3.5 or
higher.
-#### Why does `Session.run()` hang when using a reader or a queue?
+#### Why does `Session.run()` hang when using a reader or a queue? <a class="md-anchor" id="AUTOGENERATED-why-does--session.run----hang-when-using-a-reader-or-a-queue-"></a>
The [reader](../api_docs/python/io_ops.md#ReaderBase) and
[queue](../api_docs/python/io_ops.md#QueueBase) classes provide special operations that
@@ -175,20 +176,20 @@ for
[using `QueueRunner` objects to drive queues and readers](../how_tos/reading_data/index.md#QueueRunners)
for more information on how to use them.
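
A sketch of driving a reader with queue runners, assuming the `tf.train.string_input_producer`, `tf.TextLineReader`, and `tf.train.start_queue_runners` APIs from the reading-data how-to; the CSV filenames are hypothetical:

```python
import tensorflow as tf

# Hypothetical input files; the filename queue is filled by a queue runner.
filename_queue = tf.train.string_input_producer(["file0.csv", "file1.csv"])
reader = tf.TextLineReader()
key, value = reader.read(filename_queue)

with tf.Session() as sess:
    # Without this call, sess.run(value) would block forever waiting for
    # the filename queue to be populated.
    threads = tf.train.start_queue_runners(sess=sess)
    print(sess.run(value))
```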
-## Variables <div class="md-anchor" id="AUTOGENERATED-variables">{#AUTOGENERATED-variables}</div>
+## Variables <a class="md-anchor" id="AUTOGENERATED-variables"></a>
See also the how-to documentation on [variables](../how_tos/variables/index.md)
and [variable scopes](../how_tos/variable_scope/index.md), and
[the API documentation for variables](../api_docs/python/state_ops.md).
-#### What is the lifetime of a variable?
+#### What is the lifetime of a variable? <a class="md-anchor" id="AUTOGENERATED-what-is-the-lifetime-of-a-variable-"></a>
A variable is created when you first run the
[`tf.Variable.initializer`](../api_docs/python/state_ops.md#Variable.initializer)
operation for that variable in a session. It is destroyed when that
[`session is closed`](../api_docs/python/client.md#Session.close).
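
A minimal sketch of this lifetime, assuming the `Variable.initializer` API described above:

```python
import tensorflow as tf

v = tf.Variable(tf.zeros([10]), name="v")

sess = tf.Session()
sess.run(v.initializer)   # the variable is created here, owned by `sess`
print(sess.run(v))
sess.close()              # the variable's storage is released when the session closes
```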
-#### How do variables behave when they are concurrently accessed?
+#### How do variables behave when they are concurrently accessed? <a class="md-anchor" id="AUTOGENERATED-how-do-variables-behave-when-they-are-concurrently-accessed-"></a>
Variables allow concurrent read and write operations. The value read from a
variable may change if it is concurrently updated. By default, concurrent assignment
@@ -196,12 +197,12 @@ operations to a variable are allowed to run with no mutual exclusion. To acquire
a lock when assigning to a variable, pass `use_locking=True` to
[`Variable.assign()`](../api_docs/python/state_ops.md#Variable.assign).
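
A short sketch of a locked assignment, assuming `Variable.assign(..., use_locking=True)` as described above:

```python
import tensorflow as tf

counter = tf.Variable(0, name="counter")
# use_locking=True serializes concurrent assignments to `counter`.
increment = counter.assign(counter + 1, use_locking=True)

with tf.Session() as sess:
    sess.run(counter.initializer)
    print(sess.run(increment))
```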
-## Tensor shapes <div class="md-anchor" id="AUTOGENERATED-tensor-shapes">{#AUTOGENERATED-tensor-shapes}</div>
+## Tensor shapes <a class="md-anchor" id="AUTOGENERATED-tensor-shapes"></a>
See also the
[`TensorShape` API documentation](../api_docs/python/framework.md#TensorShape).
-#### How can I determine the shape of a tensor in Python?
+#### How can I determine the shape of a tensor in Python? <a class="md-anchor" id="AUTOGENERATED-how-can-i-determine-the-shape-of-a-tensor-in-python-"></a>
In TensorFlow, a tensor has both a static (inferred) shape and a dynamic (true)
shape. The static shape can be read using the
@@ -212,7 +213,7 @@ tensor, and may be
shape is not fully defined, the dynamic shape of a `Tensor` `t` can be
determined by evaluating [`tf.shape(t)`](../api_docs/python/array_ops.md#shape).
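
A small sketch contrasting the two shapes, assuming `Tensor.get_shape()` and `tf.shape()` as described above; the placeholder shape is illustrative:

```python
import numpy as np
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 28, 28])
print(x.get_shape())          # static (inferred) shape; the first dimension is unknown

dynamic_shape = tf.shape(x)   # dynamic (true) shape, known only at run time
with tf.Session() as sess:
    print(sess.run(dynamic_shape, feed_dict={x: np.zeros((4, 28, 28))}))
```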
-#### What is the difference between `x.set_shape()` and `x = tf.reshape(x)`?
+#### What is the difference between `x.set_shape()` and `x = tf.reshape(x)`? <a class="md-anchor" id="AUTOGENERATED-what-is-the-difference-between--x.set_shape----and--x---tf.reshape-x---"></a>
The [`tf.Tensor.set_shape()`](../api_docs/python/framework.md) method updates
the static shape of a `Tensor` object, and it is typically used to provide
@@ -222,7 +223,7 @@ change the dynamic shape of the tensor.
The [`tf.reshape()`](../api_docs/python/array_ops.md#reshape) operation creates
a new tensor with a different dynamic shape.
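
A short sketch of the distinction, assuming the `set_shape()` and `tf.reshape()` APIs described above; the shapes are illustrative:

```python
import tensorflow as tf

x = tf.placeholder(tf.float32)    # static shape completely unknown

# set_shape() only adds static shape information; it does not change the data.
x.set_shape([None, 784])

# tf.reshape() builds a new tensor whose dynamic shape differs from x's.
y = tf.reshape(x, [-1, 28, 28])
```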
-#### How do I build a graph that works with variable batch sizes?
+#### How do I build a graph that works with variable batch sizes? <a class="md-anchor" id="AUTOGENERATED-how-do-i-build-a-graph-that-works-with-variable-batch-sizes-"></a>
It is often useful to build a graph that works with variable batch sizes, for
example so that the same code can be used for (mini-)batch training, and
@@ -248,24 +249,24 @@ to encode the batch size as a Python constant, but instead to use a symbolic
[`tf.placeholder(..., shape=[None, ...])`](../api_docs/python/io_ops.md#placeholder). The
`None` element of the shape corresponds to a variable-sized dimension.
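
A minimal sketch of a batch-size-agnostic graph, assuming `tf.placeholder(..., shape=[None, ...])` as described above; the feature and class sizes are illustrative:

```python
import tensorflow as tf

# `None` leaves the batch dimension unspecified, so the same graph handles
# mini-batches of any size at training or evaluation time.
images = tf.placeholder(tf.float32, shape=[None, 784])
weights = tf.Variable(tf.zeros([784, 10]))
logits = tf.matmul(images, weights)   # shape (?, 10); the batch size stays symbolic
```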
-## TensorBoard <div class="md-anchor" id="AUTOGENERATED-tensorboard">{#AUTOGENERATED-tensorboard}</div>
+## TensorBoard <a class="md-anchor" id="AUTOGENERATED-tensorboard"></a>
See also the
[how-to documentation on TensorBoard](../how_tos/graph_viz/index.md).
-#### What is the simplest way to send data to tensorboard? # TODO(danmane)
+#### What is the simplest way to send data to tensorboard? # TODO(danmane) <a class="md-anchor" id="AUTOGENERATED-what-is-the-simplest-way-to-send-data-to-tensorboard----todo-danmane-"></a>
Add summary_ops to your TensorFlow graph, and use a SummaryWriter to write all
of these summaries to a log directory. Then, start up TensorBoard using
<SOME_COMMAND> and pass the --logdir flag so that it points to your
log directory. For more details, see <YET_UNWRITTEN_TENSORBOARD_TUTORIAL>.
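
A sketch of writing summaries, assuming the `tf.scalar_summary`, `tf.merge_all_summaries`, and `tf.train.SummaryWriter` APIs of this release; the log directory is hypothetical:

```python
import tensorflow as tf

loss = tf.Variable(0.0, name="loss")
tf.scalar_summary("loss", loss)          # record a scalar summary for `loss`
merged = tf.merge_all_summaries()

with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    # "/tmp/logs" is a hypothetical log directory to point --logdir at.
    writer = tf.train.SummaryWriter("/tmp/logs", sess.graph_def)
    writer.add_summary(sess.run(merged), 0)
```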
-## Extending TensorFlow <div class="md-anchor" id="AUTOGENERATED-extending-tensorflow">{#AUTOGENERATED-extending-tensorflow}</div>
+## Extending TensorFlow <a class="md-anchor" id="AUTOGENERATED-extending-tensorflow"></a>
See also the how-to documentation for
[adding a new operation to TensorFlow](../how_tos/adding_an_op/index.md).
-#### My data is in a custom format. How do I read it using TensorFlow?
+#### My data is in a custom format. How do I read it using TensorFlow? <a class="md-anchor" id="AUTOGENERATED-my-data-is-in-a-custom-format.-how-do-i-read-it-using-tensorflow-"></a>
There are two main options for dealing with data in a custom format.
@@ -283,7 +284,7 @@ data format. The
[guide to handling new data formats](../how_tos/new_data_formats/index.md) has
more information about the steps for doing this.
-#### How do I define an operation that takes a variable number of inputs?
+#### How do I define an operation that takes a variable number of inputs? <a class="md-anchor" id="AUTOGENERATED-how-do-i-define-an-operation-that-takes-a-variable-number-of-inputs-"></a>
The TensorFlow op registration mechanism allows you to define inputs that are a
single tensor, a list of tensors with the same type (for example when adding
@@ -293,15 +294,15 @@ how-to documentation for
[adding an op with a list of inputs or outputs](../how_tos/adding_an_op/index.md#list-input-output)
for more details of how to define these different input types.
-## Miscellaneous <div class="md-anchor" id="AUTOGENERATED-miscellaneous">{#AUTOGENERATED-miscellaneous}</div>
+## Miscellaneous <a class="md-anchor" id="AUTOGENERATED-miscellaneous"></a>
-#### Does TensorFlow work with Python 3?
+#### Does TensorFlow work with Python 3? <a class="md-anchor" id="AUTOGENERATED-does-tensorflow-work-with-python-3-"></a>
We have only tested TensorFlow using Python 2.7. We are aware of some changes
that will be required for Python 3 compatibility, and welcome contributions
towards this effort.
-#### What is TensorFlow's coding style convention?
+#### What is TensorFlow's coding style convention? <a class="md-anchor" id="AUTOGENERATED-what-is-tensorflow-s-coding-style-convention-"></a>
The TensorFlow Python API adheres to the
[PEP8](https://www.python.org/dev/peps/pep-0008/) conventions.<sup>*</sup> In