Diffstat (limited to 'tensorflow/docs_src')
 tensorflow/docs_src/community/lists.md                     |  2
 tensorflow/docs_src/guide/eager.md                         |  3
 tensorflow/docs_src/guide/version_compat.md                |  1
 tensorflow/docs_src/install/install_go.md                  |  2
 tensorflow/docs_src/install/install_raspbian.md            |  6
 tensorflow/docs_src/install/install_sources.md             |  5
 tensorflow/docs_src/performance/xla/operation_semantics.md | 97
 7 files changed, 98 insertions(+), 18 deletions(-)
diff --git a/tensorflow/docs_src/community/lists.md b/tensorflow/docs_src/community/lists.md
index 7450ab36c4..bc2f573c29 100644
--- a/tensorflow/docs_src/community/lists.md
+++ b/tensorflow/docs_src/community/lists.md
@@ -32,6 +32,8 @@ These projects inside the TensorFlow GitHub organization have lists dedicated to
and peer support for TensorFlow.js.
* [tflite](https://groups.google.com/a/tensorflow.org/d/forum/tflite) - Discussion and
peer support for TensorFlow Lite.
+* [tfprobability](https://groups.google.com/a/tensorflow.org/d/forum/tfprobability) - Discussion and
+ peer support for TensorFlow Probability.
* [tpu-users](https://groups.google.com/a/tensorflow.org/d/forum/tpu-users) - Community discussion
and support for TPU users.
diff --git a/tensorflow/docs_src/guide/eager.md b/tensorflow/docs_src/guide/eager.md
index 24f6e4ee95..017fdaf81e 100644
--- a/tensorflow/docs_src/guide/eager.md
+++ b/tensorflow/docs_src/guide/eager.md
@@ -193,8 +193,7 @@ class MNISTModel(tf.keras.Model):
def call(self, input):
"""Run the model."""
result = self.dense1(input)
- result = self.dense2(result)
- result = self.dense2(result) # reuse variables from dense2 layer
+ result = self.dense2(result) # reuse variables from dense1 layer
return result
model = MNISTModel()
diff --git a/tensorflow/docs_src/guide/version_compat.md b/tensorflow/docs_src/guide/version_compat.md
index 0e472c3381..29ac066e6f 100644
--- a/tensorflow/docs_src/guide/version_compat.md
+++ b/tensorflow/docs_src/guide/version_compat.md
@@ -79,6 +79,7 @@ backward incompatible ways between minor releases. These include:
[`tensorflow/cc`](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/cc)).
- [Java](../api_docs/java/reference/org/tensorflow/package-summary),
- [Go](https://godoc.org/github.com/tensorflow/tensorflow/tensorflow/go)
+ - [JavaScript](https://js.tensorflow.org)
* **Details of composite ops:** Many public functions in Python expand to
several primitive ops in the graph, and these details will be part of any
diff --git a/tensorflow/docs_src/install/install_go.md b/tensorflow/docs_src/install/install_go.md
index 961386b15c..f0f8436777 100644
--- a/tensorflow/docs_src/install/install_go.md
+++ b/tensorflow/docs_src/install/install_go.md
@@ -6,7 +6,7 @@ a Go application. This guide explains how to install and set up the
[TensorFlow Go package](https://godoc.org/github.com/tensorflow/tensorflow/tensorflow/go).
Warning: The TensorFlow Go API is *not* covered by the TensorFlow
-[API stability guarantees](../guide/version_semantics.md).
+[API stability guarantees](../guide/version_compat.md).
## Supported Platforms
diff --git a/tensorflow/docs_src/install/install_raspbian.md b/tensorflow/docs_src/install/install_raspbian.md
index 58a5285c78..cf6b6b4f79 100644
--- a/tensorflow/docs_src/install/install_raspbian.md
+++ b/tensorflow/docs_src/install/install_raspbian.md
@@ -60,7 +60,7 @@ If it gives the error "Command not found", then the package has not been
installed yet. To install it for the first time, run:
<pre>$ sudo apt-get install python3-pip # for Python 3.n
-sudo apt-get install python-pip # for Python 2.7</pre>
+$ sudo apt-get install python-pip # for Python 2.7</pre>
You can find more help on installing and upgrading pip in
[the Raspberry Pi documentation](https://www.raspberrypi.org/documentation/linux/software/python.md).
@@ -78,8 +78,8 @@ your system, run the following command:
Assuming the prerequisite software is installed on your Pi, install TensorFlow
by invoking **one** of the following commands:
- <pre> $ <b>pip3 install tensorflow</b> # Python 3.n
- $ <b>pip install tensorflow</b> # Python 2.7</pre>
+<pre>$ <b>pip3 install tensorflow</b> # Python 3.n
+$ <b>pip install tensorflow</b> # Python 2.7</pre>
This can take some time on certain platforms like the Pi Zero, where some Python
packages like scipy that TensorFlow depends on need to be compiled before the
diff --git a/tensorflow/docs_src/install/install_sources.md b/tensorflow/docs_src/install/install_sources.md
index 1d108d8bd5..dfd9fbce4b 100644
--- a/tensorflow/docs_src/install/install_sources.md
+++ b/tensorflow/docs_src/install/install_sources.md
@@ -180,7 +180,10 @@ If you follow these instructions, you will not need to disable SIP.
After installing pip, invoke the following commands:
-<pre> $ <b>sudo pip install six numpy wheel mock</b> </pre>
+<pre> $ <b>sudo pip install six numpy wheel mock h5py</b>
+ $ <b>sudo pip install keras_applications==1.0.4 --no-deps</b>
+ $ <b>sudo pip install keras_preprocessing==1.0.2 --no-deps</b>
+</pre>
Note: These are just the minimum requirements to _build_ tensorflow. Installing
the pip package will download additional packages required to _run_ it. If you
diff --git a/tensorflow/docs_src/performance/xla/operation_semantics.md b/tensorflow/docs_src/performance/xla/operation_semantics.md
index 165f6f5914..fad9fd57f1 100644
--- a/tensorflow/docs_src/performance/xla/operation_semantics.md
+++ b/tensorflow/docs_src/performance/xla/operation_semantics.md
@@ -13,6 +13,79 @@ arbitrary-dimensional array. For convenience, special cases have more specific
and familiar names; for example a *vector* is a 1-dimensional array and a
*matrix* is a 2-dimensional array.
+## AllToAll
+
+See also
+[`XlaBuilder::AllToAll`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_builder.h).
+
+Alltoall is a collective operation that sends data from all cores to all cores.
+It has two phases:
+
+1. The scatter phase. On each core, the operand is split into `split_count`
+   blocks along the `split_dimension`, and the blocks are scattered to all
+   cores, e.g., the ith block is sent to the ith core.
+2. The gather phase. Each core concatenates the received blocks along the
+   `concat_dimension`.
+
+The participating cores can be configured by:
+
+- `replica_groups`: each `ReplicaGroup` contains a list of replica ids. If
+  empty, all replicas belong to one group, in the order 0 through (n-1).
+  Alltoall is applied within subgroups in the specified order. For example,
+  `replica_groups = {{1,2,3},{4,5,0}}` means that an Alltoall is applied within
+  replicas 1, 2, and 3, and in the gather phase the received blocks are
+  concatenated in the order 1, 2, 3; another Alltoall is applied within
+  replicas 4, 5, and 0, with concatenation order 4, 5, 0 (see the sketch
+  below).
+
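+For illustration, groups like the example above could be built as follows. This
+is a sketch, not from the original guide, and the `ReplicaGroup` proto's
+`add_replica_ids` setter is an assumption:
+
+```
+XlaBuilder b("alltoall_grouped");
+// The operand must be divisible into split_count = 3 blocks along the split
+// dimension (here, dimension 0 of size 6).
+auto x = Parameter(&b, 0, ShapeUtil::MakeShape(F32, {6, 8}), "x");
+// replica_groups = {{1,2,3},{4,5,0}}: within each group, the gather phase
+// concatenates the received blocks in the listed order.
+ReplicaGroup group0, group1;
+for (int64 id : {1, 2, 3}) group0.add_replica_ids(id);
+for (int64 id : {4, 5, 0}) group1.add_replica_ids(id);
+AllToAll(x, /*split_dimension=*/0, /*concat_dimension=*/1, /*split_count=*/3,
+         /*replica_groups=*/{group0, group1});
+```
+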
+Prerequisites:
+
+- The dimension size of the operand on the `split_dimension` is divisible by
+  `split_count`.
+- The operand's shape is not a tuple.
+
+<b> `AllToAll(operand, split_dimension, concat_dimension, split_count,
+replica_groups)` </b>
+
+
+| Arguments          | Type                  | Semantics                       |
+| ------------------ | --------------------- | ------------------------------- |
+| `operand`          | `XlaOp`               | n dimensional input array       |
+| `split_dimension`  | `int64`               | a value in the interval `[0,    |
+:                    :                       : n)` that names the dimension    :
+:                    :                       : along which the operand is      :
+:                    :                       : split                           :
+| `concat_dimension` | `int64`               | a value in the interval `[0,    |
+:                    :                       : n)` that names the dimension    :
+:                    :                       : along which the split blocks    :
+:                    :                       : are concatenated                :
+| `split_count`      | `int64`               | the number of cores that        |
+:                    :                       : participate in this operation.  :
+:                    :                       : If `replica_groups` is empty,   :
+:                    :                       : this should be the number of    :
+:                    :                       : replicas; otherwise, this       :
+:                    :                       : should equal the number of      :
+:                    :                       : replicas in each group.         :
+| `replica_groups`   | `ReplicaGroup` vector | each group contains a list of   |
+:                    :                       : replica ids.                    :
+
+Below is an example of Alltoall.
+
+```
+XlaBuilder b("alltoall");
+auto x = Parameter(&b, 0, ShapeUtil::MakeShape(F32, {4, 16}), "x");
+AllToAll(x, /*split_dimension=*/1, /*concat_dimension=*/0, /*split_count=*/4);
+```
+
+<div style="width:95%; margin:auto; margin-bottom:10px; margin-top:20px;">
+ <img style="width:100%" src="../../images/xla/ops_alltoall.png">
+</div>
+
+In this example, there are 4 cores participating in the Alltoall. On each core,
+the operand is split into 4 parts along dimension 1, so each part has shape
+f32[4,4]. The 4 parts are scattered to all cores. Then each core concatenates
+the received parts along dimension 0, in the order of cores 0-3. So the output
+on each core has shape f32[16,4].
+
## BatchNormGrad
See also
@@ -1804,19 +1877,19 @@ See also
[`XlaBuilder::RngNormal`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_builder.h).
Constructs an output of a given shape with random numbers generated following
-the $$N(\mu, \sigma)$$ normal distribution. The parameters `mu` and `sigma`, and
-output shape have to have elemental type F32. The parameters furthermore have to
-be scalar valued.
+the $$N(\mu, \sigma)$$ normal distribution. The parameters $$\mu$$ and
+$$\sigma$$, and the output shape, have to have a floating point elemental type.
+The parameters furthermore have to be scalar valued.
-<b>`RngNormal(mean, sigma, shape)`</b>
+<b>`RngNormal(mu, sigma, shape)`</b>
| Arguments | Type | Semantics |
| --------- | ------- | --------------------------------------------------- |
-| `mu` | `XlaOp` | Scalar of type F32 specifying mean of generated |
-: : : numbers :
-| `sigma` | `XlaOp` | Scalar of type F32 specifying standard deviation of |
+| `mu` | `XlaOp` | Scalar of type T specifying mean of generated |
+: : : numbers :
+| `sigma` | `XlaOp` | Scalar of type T specifying standard deviation of |
: : : generated numbers :
-| `shape` | `Shape` | Output shape of type F32 |
+| `shape` | `Shape` | Output shape of type T |
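+
+As a minimal sketch (not from the original docs; the `ConstantR0` helper from
+the client library is an assumption), `RngNormal` might be invoked like this:
+
+```
+XlaBuilder b("rng_normal");
+// Scalar mean 0 and standard deviation 1; the element type F32 matches the
+// requested output shape.
+auto mu = ConstantR0<float>(&b, 0.0f);
+auto sigma = ConstantR0<float>(&b, 1.0f);
+RngNormal(mu, sigma, ShapeUtil::MakeShape(F32, {2, 3}));
+```
+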
## RngUniform
@@ -1825,9 +1898,11 @@ See also
Constructs an output of a given shape with random numbers generated following
the uniform distribution over the interval $$[a,b)$$. The parameters and output
-shape may be either F32, S32 or U32, but the types have to be consistent.
-Furthermore, the parameters need to be scalar valued. If $$b <= a$$ the result
-is implementation-defined.
+element type have to be a boolean type, an integral type or a floating point
+type, and the types have to be consistent. The CPU and GPU backends currently
+only support F64, F32, F16, BF16, S64, U64, S32 and U32. Furthermore, the
+parameters need to be scalar valued. If $$b \leq a$$ the result is
+implementation-defined.
<b>`RngUniform(a, b, shape)`</b>
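+
+For illustration, a sketch of drawing uniform integers in $$[0, 10)$$ (again
+assuming the `ConstantR0` helper; not from the original docs):
+
+```
+XlaBuilder b("rng_uniform");
+// a and b must be scalars with the same element type as the output (S32 here).
+auto lo = ConstantR0<int32>(&b, 0);
+auto hi = ConstantR0<int32>(&b, 10);
+RngUniform(lo, hi, ShapeUtil::MakeShape(S32, {100}));
+```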