| author | Yuanzhong Xu <yuanzx@google.com> | 2018-02-22 12:27:37 -0800 |
|---|---|---|
| committer | TensorFlower Gardener <gardener@tensorflow.org> | 2018-02-22 12:38:15 -0800 |
| commit | 30727a6b673ff64ea8b5ad8754dee598b829a4aa (patch) | |
| tree | ec7394f39c1b83d77588e4fc003a52d34f1048fd /tensorflow/compiler/xla/service/bfloat16_support.h | |
| parent | 78916e73383da9860ccdf07018892acb558249d7 (diff) | |
[XLA] HLO BF16 propagation pass.
Uses the BFloat16Support interface provided by the backend to determine what precision is needed for each HloInstruction. If the implementation of some HLOs already reduces input precision to BF16, this pass can enable BF16 on more ops without affecting the result.
PiperOrigin-RevId: 186656378
Diffstat (limited to 'tensorflow/compiler/xla/service/bfloat16_support.h')
-rw-r--r-- | tensorflow/compiler/xla/service/bfloat16_support.h | 2 |
1 file changed, 1 insertion, 1 deletion
```diff
diff --git a/tensorflow/compiler/xla/service/bfloat16_support.h b/tensorflow/compiler/xla/service/bfloat16_support.h
index 29f662d22b..82c2745f44 100644
--- a/tensorflow/compiler/xla/service/bfloat16_support.h
+++ b/tensorflow/compiler/xla/service/bfloat16_support.h
@@ -39,7 +39,7 @@ class BFloat16Support {
   // precisions (BF16 and F32).
   virtual bool SupportsMixedPrecisions(const HloInstruction& hlo) const;
 
-  // Returns whether the given HLO inherits its BF16 operand precision at the
+  // Returns whether the given HLO preserves its BF16 operand precision at the
   // given index, so even if the output is F32, elements in the output that
   // depend on the BF16 operand will still have BF16 effective precision even if
   // they have F32 format. Similarly, this also means if the output is BF16 then
```