path: root/tensorflow/g3doc/resources/faq.md
author     Manjunath Kudlur <keveman@gmail.com>  2015-11-06 18:37:11 -0800
committer  Manjunath Kudlur <keveman@gmail.com>  2015-11-06 18:37:11 -0800
commit  cd9e60c1cd8afef6e39b4b73525d64aee33b656b (patch)
tree    a2b18fc3aab6169b0982bd987725325e68d7bd66 /tensorflow/g3doc/resources/faq.md
parent  f41959ccb2d9d4c722fe8fc3351401d53bcf4900 (diff)
TensorFlow: Upstream latest changes to Git.
Changes:
- Updates to installation instructions.
- Updates to documentation.
- Minor modifications and tests for word2vec.

Base CL: 107284192
Diffstat (limited to 'tensorflow/g3doc/resources/faq.md')
-rw-r--r--  tensorflow/g3doc/resources/faq.md  79
1 file changed, 43 insertions, 36 deletions
diff --git a/tensorflow/g3doc/resources/faq.md b/tensorflow/g3doc/resources/faq.md
index fcdc8d1e33..2bd485e7f9 100644
--- a/tensorflow/g3doc/resources/faq.md
+++ b/tensorflow/g3doc/resources/faq.md
@@ -1,21 +1,28 @@
# Frequently Asked Questions
This document provides answers to some of the frequently asked questions about
-TensorFlow. If you have a question that is not covered here, please
-[get in touch](index.md).
+TensorFlow. If you have a question that is not covered here, you might find an
+answer on one of the TensorFlow [community resources](index.md).
<!-- TOC-BEGIN This section is generated by neural network: DO NOT EDIT! -->
## Contents
+ * [Building a TensorFlow graph](#AUTOGENERATED-building-a-tensorflow-graph)
+ * [Running a TensorFlow computation](#AUTOGENERATED-running-a-tensorflow-computation)
+ * [Variables](#AUTOGENERATED-variables)
+ * [Tensor shapes](#AUTOGENERATED-tensor-shapes)
+ * [TensorBoard](#AUTOGENERATED-tensorboard)
+ * [Extending TensorFlow](#AUTOGENERATED-extending-tensorflow)
+ * [Miscellaneous](#AUTOGENERATED-miscellaneous)
<!-- TOC-END This section was generated by neural network, THANKS FOR READING! -->
-#### Building a TensorFlow graph
+### Building a TensorFlow graph <div class="md-anchor" id="AUTOGENERATED-building-a-tensorflow-graph">{#AUTOGENERATED-building-a-tensorflow-graph}</div>
See also the
[API documentation on building graphs](../api_docs/python/framework.md).
-##### Why does `c = tf.matmul(a, b)` not execute the matrix multiplication immediately?
+#### Why does `c = tf.matmul(a, b)` not execute the matrix multiplication immediately?
In the TensorFlow Python API, `a`, `b`, and `c` are
[`Tensor`](../api_docs/python/framework.md#Tensor) objects. A `Tensor` object is
@@ -28,12 +35,12 @@ a dataflow graph. You then offload the computation of the entire dataflow graph
whole computation much more efficiently than executing the operations
one-by-one.
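As an illustrative sketch of the point above (not part of the original FAQ text, and assuming the Python API of this release), the multiplication only runs once the graph is handed to a `Session`:

```python
import tensorflow as tf

# Building the graph merely records the operations; nothing executes yet.
a = tf.constant([[1.0, 2.0]])        # shape [1, 2]
b = tf.constant([[3.0], [4.0]])      # shape [2, 1]
c = tf.matmul(a, b)                  # `c` is a symbolic Tensor, not a result

# The matrix multiplication runs only when the graph is executed in a Session.
with tf.Session() as sess:
    print(sess.run(c))               # [[ 11.]]
```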
-##### How are devices named?
+#### How are devices named?
The supported device names are `"/device:CPU:0"` (or `"/cpu:0"`) for the CPU
device, and `"/device:GPU:i"` (or `"/gpu:i"`) for the *i*th GPU device.
-##### How do I place operations on a particular device?
+#### How do I place operations on a particular device?
To place a group of operations on a device, create them within a
[`with tf.device(name):`](../api_docs/python/framework.md#device) context. See
@@ -43,17 +50,17 @@ TensorFlow assigns operations to devices, and the
[CIFAR-10 tutorial](../tutorials/deep_cnn/index.md) for an example model that
uses multiple GPUs.
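A minimal sketch of the `tf.device()` pattern (the device string and the `ConfigProto` options are illustrative, and assume a machine with at least one GPU):

```python
import tensorflow as tf

# Ops created inside this block are pinned to the first GPU.
with tf.device("/gpu:0"):
    a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
    b = tf.constant([[1.0, 0.0], [0.0, 1.0]])
    c = tf.matmul(a, b)

# allow_soft_placement falls back to the CPU if the requested device is
# unavailable; log_device_placement prints where each op actually ran.
config = tf.ConfigProto(allow_soft_placement=True, log_device_placement=True)
with tf.Session(config=config) as sess:
    print(sess.run(c))
```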
-##### What are the different types of tensors that are available?
+#### What are the different types of tensors that are available?
TensorFlow supports a variety of different data types and tensor shapes. See the
[ranks, shapes, and types reference](dims_types.md) for more details.
-#### Running a TensorFlow computation
+### Running a TensorFlow computation <div class="md-anchor" id="AUTOGENERATED-running-a-tensorflow-computation">{#AUTOGENERATED-running-a-tensorflow-computation}</div>
See also the
[API documentation on running graphs](../api_docs/python/client.md).
-##### What's the deal with feeding and placeholders?
+#### What's the deal with feeding and placeholders?
Feeding is a mechanism in the TensorFlow Session API that allows you to
substitute different values for one or more tensors at run time. The `feed_dict`
@@ -69,7 +76,7 @@ optionally allows you to constrain their shape as well. See the
example of how placeholders and feeding can be used to provide the training data
for a neural network.
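A short sketch of the feeding pattern (illustrative only; shapes and values are arbitrary):

```python
import tensorflow as tf

# A placeholder must be fed at run time; the shape constraint is optional.
x = tf.placeholder(tf.float32, shape=[None, 2])
y = tf.reduce_sum(x, 1)

with tf.Session() as sess:
    # feed_dict substitutes concrete values for the placeholder tensor.
    print(sess.run(y, feed_dict={x: [[1.0, 2.0], [3.0, 4.0]]}))  # [ 3.  7.]
```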
-##### What is the difference between `Session.run()` and `Tensor.eval()`?
+#### What is the difference between `Session.run()` and `Tensor.eval()`?
If `t` is a [`Tensor`](../api_docs/python/framework.md#Tensor) object,
[`t.eval()`](../api_docs/python/framework.md#Tensor.eval) is shorthand for
@@ -96,7 +103,7 @@ the `with` block. The context manager approach can lead to more concise code for
simple use cases (like unit tests); if your code deals with multiple graphs and
sessions, it may be more straightforward to make explicit calls to `Session.run()`.
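For instance (a sketch, assuming a default session established with `Session.as_default()`):

```python
import tensorflow as tf

c = tf.constant(42.0)
sess = tf.Session()
with sess.as_default():
    # With `sess` installed as the default session, these two are equivalent.
    assert c.eval() == 42.0
    assert sess.run(c) == 42.0
sess.close()
```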
-##### Do Sessions have a lifetime? What about intermediate tensors?
+#### Do Sessions have a lifetime? What about intermediate tensors?
Sessions can own resources, such as
[variables](../api_docs/python/state_ops.md#Variable),
@@ -110,13 +117,13 @@ The intermediate tensors that are created as part of a call to
[`Session.run()`](../api_docs/python/client.md) will be freed at or before the
end of the call.
-##### Can I run distributed training on multiple computers?
+#### Can I run distributed training on multiple computers?
The initial open-source release of TensorFlow supports multiple devices (CPUs
and GPUs) in a single computer. We are working on a distributed version as well:
if you are interested, please let us know so we can prioritize accordingly.
-##### Does the runtime parallelize parts of graph execution?
+#### Does the runtime parallelize parts of graph execution?
The TensorFlow runtime parallelizes graph execution across many different
dimensions:
@@ -131,7 +138,7 @@ dimensions:
enables the runtime to get higher throughput, if a single step does not use
all of the resources in your computer.
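As an illustration of the configuration knobs involved (the field names below assume the `ConfigProto` options of this release, and the thread counts are arbitrary):

```python
import tensorflow as tf

config = tf.ConfigProto(
    intra_op_parallelism_threads=4,  # threads used within a single op (e.g. matmul)
    inter_op_parallelism_threads=4)  # independent ops that may run concurrently
sess = tf.Session(config=config)
```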
-##### Which client languages are supported in TensorFlow?
+#### Which client languages are supported in TensorFlow?
TensorFlow is designed to support multiple client languages. Currently, the
best-supported client language is [Python](../api_docs/python/index.md). The
@@ -145,7 +152,7 @@ interest. TensorFlow has a
that makes it easy to build a client in many different languages. We invite
contributions of new language bindings.
-##### Does TensorFlow make use of all the devices (GPUs and CPUs) available on my machine?
+#### Does TensorFlow make use of all the devices (GPUs and CPUs) available on my machine?
TensorFlow supports multiple GPUs and CPUs. See the how-to documentation on
[using GPUs with TensorFlow](../how_tos/using_gpu/index.md) for details of how
@@ -156,10 +163,10 @@ uses multiple GPUs.
Note that TensorFlow only uses GPU devices with a compute capability greater
than 3.5.
-##### Why does `Session.run()` hang when using a reader or a queue?
+#### Why does `Session.run()` hang when using a reader or a queue?
-The [reader](../api_docs/io_ops.md#ReaderBase) and
-[queue](../api_docs/io_ops.md#QueueBase) classes provide special operations that
+The [reader](../api_docs/python/io_ops.md#ReaderBase) and
+[queue](../api_docs/python/io_ops.md#QueueBase) classes provide special operations that
can *block* until input (or free space in a bounded queue) becomes
available. These operations allow you to build sophisticated
[input pipelines](../how_tos/reading_data/index.md), at the cost of making the
@@ -168,20 +175,20 @@ for
[using `QueueRunner` objects to drive queues and readers](../how_tos/reading_data/index.md#QueueRunners)
for more information on how to use them.
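A sketch of the usual pattern for driving readers with queue runners (the file name is hypothetical, and the example assumes the `tf.train.Coordinator` and `start_queue_runners` helpers described in the reading-data how-to):

```python
import tensorflow as tf

# "data.csv" is a hypothetical input file used only for illustration.
filename_queue = tf.train.string_input_producer(["data.csv"])
reader = tf.TextLineReader()
key, value = reader.read(filename_queue)

with tf.Session() as sess:
    # Without starting the queue runners, sess.run(value) would block forever,
    # because nothing fills the filename queue.
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    print(sess.run(value))
    coord.request_stop()
    coord.join(threads)
```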
-#### Variables
+### Variables <div class="md-anchor" id="AUTOGENERATED-variables">{#AUTOGENERATED-variables}</div>
See also the how-to documentation on [variables](../how_tos/variables/index.md)
and [variable scopes](../how_tos/variable_scope/index.md), and
[the API documentation for variables](../api_docs/python/state_ops.md).
-##### What is the lifetime of a variable?
+#### What is the lifetime of a variable?
A variable is created when you first run the
[`tf.Variable.initializer`](../api_docs/python/state_ops.md#Variable.initializer)
operation for that variable in a session. It is destroyed when that
[`session is closed`](../api_docs/python/client.md#Session.close).
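For example (a sketch; the variable's shape and initial value are arbitrary):

```python
import tensorflow as tf

v = tf.Variable(tf.zeros([10]))

sess = tf.Session()
sess.run(v.initializer)   # the variable now exists within this session
print(sess.run(v))
sess.close()              # ...and is destroyed when the session is closed
```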
-##### How do variables behave when they are concurrently accessed?
+#### How do variables behave when they are concurrently accessed?
Variables allow concurrent read and write operations. The value read from a
variable may change if it is concurrently updated. By default, concurrent assignment
@@ -189,12 +196,12 @@ operations to a variable are allowed to run with no mutual exclusion. To acquire
a lock when assigning to a variable, pass `use_locking=True` to
[`Variable.assign()`](../api_docs/python/state_ops.md#Variable.assign).
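For example, a sketch of a locked assignment:

```python
import tensorflow as tf

v = tf.Variable(0.0)
# use_locking=True serializes concurrent assignments to `v`.
update = v.assign(1.0, use_locking=True)

with tf.Session() as sess:
    sess.run(v.initializer)
    sess.run(update)
```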
-#### Tensor shapes
+### Tensor shapes <div class="md-anchor" id="AUTOGENERATED-tensor-shapes">{#AUTOGENERATED-tensor-shapes}</div>
See also the
[`TensorShape` API documentation](../api_docs/python/framework.md#TensorShape).
-##### How can I determine the shape of a tensor in Python?
+#### How can I determine the shape of a tensor in Python?
In TensorFlow, a tensor has both a static (inferred) shape and a dynamic (true)
shape. The static shape can be read using the
@@ -205,7 +212,7 @@ tensor, and may be
shape is not fully defined, the dynamic shape of a `Tensor` `t` can be
determined by evaluating [`tf.shape(t)`](../api_docs/python/array_ops.md#shape).
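A small sketch contrasting the two (shapes here are arbitrary):

```python
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 3])
print(x.get_shape())          # static (inferred) shape: (?, 3)

dynamic_shape = tf.shape(x)   # the dynamic (true) shape, known only at run time
with tf.Session() as sess:
    print(sess.run(dynamic_shape,
                   feed_dict={x: [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]}))  # [2 3]
```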
-##### What is the difference between `x.set_shape()` and `x = tf.reshape(x)`?
+#### What is the difference between `x.set_shape()` and `x = tf.reshape(x)`?
The [`tf.Tensor.set_shape()`](../api_docs/python/framework.md) method updates
the static shape of a `Tensor` object, and it is typically used to provide
@@ -215,7 +222,7 @@ change the dynamic shape of the tensor.
The [`tf.reshape()`](../api_docs/python/array_ops.md#reshape) operation creates
a new tensor with a different dynamic shape.
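A minimal sketch of the distinction (illustrative shapes):

```python
import tensorflow as tf

x = tf.placeholder(tf.float32)   # static shape completely unknown

# set_shape() only records static shape information; it moves no data and
# cannot change the number of elements.
x.set_shape([2, 2])

# tf.reshape() adds an op that produces a tensor with a different dynamic
# shape at run time.
y = tf.reshape(x, [4])
```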
-##### How do I build a graph that works with variable batch sizes?
+#### How do I build a graph that works with variable batch sizes?
It is often useful to build a graph that works with variable batch sizes, for
example so that the same code can be used for (mini-)batch training, and
@@ -241,31 +248,31 @@ to encode the batch size as a Python constant, but instead to use a symbolic
[`tf.placeholder(..., shape=[None, ...])`](../api_docs/python/io_ops.md#placeholder). The
`None` element of the shape corresponds to a variable-sized dimension.
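For example (a sketch; the feature size 784 is arbitrary):

```python
import tensorflow as tf

# `None` leaves the batch dimension unspecified, so the same graph can be fed
# batches of any size at run time.
images = tf.placeholder(tf.float32, shape=[None, 784])
w = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
logits = tf.matmul(images, w) + b
```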
-#### TensorBoard
+### TensorBoard <div class="md-anchor" id="AUTOGENERATED-tensorboard">{#AUTOGENERATED-tensorboard}</div>
See also the
[how-to documentation on TensorBoard](../how_tos/graph_viz/index.md).
-##### What is the simplest way to send data to tensorboard? # TODO(danmane)
+#### What is the simplest way to send data to tensorboard? # TODO(danmane)
Add summary_ops to your TensorFlow graph, and use a SummaryWriter to write all
of these summaries to a log directory. Then, start up TensorBoard using
<SOME_COMMAND> and pass the --logdir flag so that it points to your
log directory. For more details, see <YET_UNWRITTEN_TENSORBOARD_TUTORIAL>.
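A sketch of that pattern, assuming the summary ops available in this release of the API (the log directory is arbitrary):

```python
import tensorflow as tf

loss = tf.constant(0.5)             # stand-in for a real loss tensor
tf.scalar_summary("loss", loss)     # attach a summary op to the graph
merged = tf.merge_all_summaries()

with tf.Session() as sess:
    writer = tf.train.SummaryWriter("/tmp/logs")
    summary = sess.run(merged)
    writer.add_summary(summary, global_step=0)
# Point TensorBoard's --logdir flag at the same directory ("/tmp/logs").
```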
-#### Extending TensorFlow
+### Extending TensorFlow <div class="md-anchor" id="AUTOGENERATED-extending-tensorflow">{#AUTOGENERATED-extending-tensorflow}</div>
See also the how-to documentation for
[adding a new operation to TensorFlow](../how_tos/adding_an_op/index.md).
-##### My data is in a custom format. How do I read it using TensorFlow?
+#### My data is in a custom format. How do I read it using TensorFlow?
There are two main options for dealing with data in a custom format.
The easier option is to write parsing code in Python that transforms the data
-into a numpy array, then feed a
-[tf.placeholder()](../api_docs/python/io_ops.md#placeholder) a tensor with that
-data. See the documentation on
+into a numpy array, then feed a
+[`tf.placeholder()`](../api_docs/python/io_ops.md#placeholder) a tensor with
+that data. See the documentation on
[using placeholders for input](../how_tos/reading_data/index.md#Feeding) for
more details. This approach is easy to get up and running, but the parsing can
be a performance bottleneck.
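A sketch of this first option (the file name and parsing step are hypothetical; any Python/numpy parsing code would do):

```python
import numpy as np
import tensorflow as tf

# Hypothetical parsing step: turn a custom text format into a numpy array.
data = np.loadtxt("my_custom_data.txt", dtype=np.float32)

x = tf.placeholder(tf.float32, shape=data.shape)
total = tf.reduce_sum(x)

with tf.Session() as sess:
    # Feed the parsed array to the placeholder at run time.
    print(sess.run(total, feed_dict={x: data}))
```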
@@ -276,7 +283,7 @@ data format. The
[guide to handling new data formats](../how_tos/new_data_formats/index.md) has
more information about the steps for doing this.
-##### How do I define an operation that takes a variable number of inputs?
+#### How do I define an operation that takes a variable number of inputs?
The TensorFlow op registration mechanism allows you to define inputs that are a
single tensor, a list of tensors with the same type (for example when adding
@@ -286,15 +293,15 @@ how-to documentation for
[adding an op with a list of inputs or outputs](../how_tos/adding_an_op/index.md#list-input-output)
for more details of how to define these different input types.
-#### Miscellaneous
+### Miscellaneous <div class="md-anchor" id="AUTOGENERATED-miscellaneous">{#AUTOGENERATED-miscellaneous}</div>
-##### Does TensorFlow work with Python 3?
+#### Does TensorFlow work with Python 3?
We have only tested TensorFlow using Python 2.7. We are aware of some changes
that will be required for Python 3 compatibility, and welcome contributions
towards this effort.
-##### What is TensorFlow's coding style convention?
+#### What is TensorFlow's coding style convention?
The TensorFlow Python API adheres to the
[PEP8](https://www.python.org/dev/peps/pep-0008/) conventions.<sup>*</sup> In