author:    2018-08-01 10:55:52 -0700
committer: 2018-08-01 11:00:40 -0700
commit:    58f72997fec533412d48b318fe900e3c5fce66ce
tree:      ef463d7d3e52439d723794aee987cc582289c006 /tensorflow/docs_src
parent:    c75c2748937e845c6f45e4c6245c2dc79b6ba285
Fix some outdated documentation.
PiperOrigin-RevId: 206955285
Diffstat (limited to 'tensorflow/docs_src')
 tensorflow/docs_src/performance/xla/jit.md | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/tensorflow/docs_src/performance/xla/jit.md b/tensorflow/docs_src/performance/xla/jit.md
index 6724d1eaf8..7202ef47f7 100644
--- a/tensorflow/docs_src/performance/xla/jit.md
+++ b/tensorflow/docs_src/performance/xla/jit.md
@@ -19,10 +19,11 @@ on the `XLA_CPU` or `XLA_GPU` TensorFlow devices. Placing operators directly
 on a TensorFlow XLA device forces the operator to run on that device and is
 mainly used for testing.
 
-> Note: The XLA CPU backend produces fast single-threaded code (in most cases),
-> but does not yet parallelize as well as the TensorFlow CPU backend. The XLA
-> GPU backend is competitive with the standard TensorFlow implementation,
-> sometimes faster, sometimes slower.
+> Note: The XLA CPU backend supports intra-op parallelism (i.e. it can shard a
+> single operation across multiple cores) but it does not support inter-op
+> parallelism (i.e. it cannot execute independent operations concurrently across
+> multiple cores). The XLA GPU backend is competitive with the standard
+> TensorFlow implementation, sometimes faster, sometimes slower.
 
 ### Turning on JIT compilation
 
@@ -55,8 +56,7 @@ sess = tf.Session(config=config)
 
 > Note: Turning on JIT at the session level will not result in operations being
 > compiled for the CPU. JIT compilation for CPU operations must be done via
-> the manual method documented below. This decision was made due to the CPU
-> backend being single-threaded.
+> the manual method documented below.
 
 #### Manual
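For context, the session-level JIT switch that the second hunk's context line (`sess = tf.Session(config=config)`) refers to is set up as sketched below, following the TensorFlow 1.x API documented in jit.md at the time. Under TensorFlow 2.x these symbols live under `tf.compat.v1`, which is what this sketch uses; in a genuine 1.x environment the plain `import tensorflow as tf` form applies.

```python
# Sketch: turning on JIT compilation at the session level (TF 1.x API).
# Under TF 2.x the 1.x symbols are reachable via the compat.v1 namespace.
import tensorflow.compat.v1 as tf

config = tf.ConfigProto()
# ON_1 enables global JIT; applicable ops are compiled by XLA. Per the note
# in jit.md, this does not cause CPU operations to be compiled -- that
# requires the manual method described later in the document.
config.graph_options.optimizer_options.global_jit_level = tf.OptimizerOptions.ON_1
sess = tf.Session(config=config)
```

The manual alternative mentioned in the added hunk line ("the manual method documented below") scopes compilation to specific operators instead of flipping the global flag.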