path: root/tensorflow/compiler/xla/service/bfloat16_support.cc
* [XLA] Add kAllToAll and kCollectivePermute to the EffectiveOperandPrecisionIsOutputPrecision list. (A. Unique TensorFlower, 2018-10-01)
  PiperOrigin-RevId: 215311766
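  A minimal sketch of the shape of this change, assuming the list is the switch inside BFloat16Support::EffectiveOperandPrecisionIsOutputPrecision (the real switch covers many more opcodes than shown here):

    // Sketch only: the actual case set in the file is much longer.
    bool BFloat16Support::EffectiveOperandPrecisionIsOutputPrecision(
        const HloInstruction& hlo, int64 operand_index) {
      switch (hlo.opcode()) {
        // Data-movement collectives perform no arithmetic, so an
        // operand's precision passes through to the output unchanged.
        case HloOpcode::kAllToAll:
        case HloOpcode::kCollectivePermute:
          return true;
        default:
          return false;
      }
    }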
* [TF:XLA] Split select HLO into array- and tuple-select. (A. Unique TensorFlower, 2018-07-03)
  Array select and tuple select are already handled separately in all backends and HLO passes: array select is an elementwise operation whose two operands have the same dimensions, while tuple select does not define its own output but instead forwards the true- or false-operand based on a scalar predicate operand. This CL reflects that by adding a new kTupleSelect HLO; a sketch of the dispatch follows below. The XLA builder interface stays the same and dispatches based on the operand shapes. There is no change in operation semantics: this CL just splits the existing select operation into two opcodes and preserves the existing behavior. HLO cost analysis is fixed to handle the two ops appropriately.
  PiperOrigin-RevId: 203180342
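  A sketch of the builder-side dispatch described above, with SelectOpcodeFor as a hypothetical helper name (the builder makes this choice inline):

    // Tuple select forwards one operand whole; array select is
    // elementwise. Hence two distinct opcodes.
    HloOpcode SelectOpcodeFor(const Shape& on_true_shape) {
      return ShapeUtil::IsTuple(on_true_shape) ? HloOpcode::kTupleSelect
                                               : HloOpcode::kSelect;
    }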
* Support BF16 propagation through domain instructions. (A. Unique TensorFlower, 2018-06-18)
  Domain instructions exist only to carry metadata; they don't affect the precision of the data, so we should propagate BF16 through them. Special code is needed to handle domain instructions because this is the only HLO that has the same tuple-shaped operand and result.
  PiperOrigin-RevId: 200968713
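  A sketch of that special handling, with SupportsBF16Passthrough as an assumed name (the real pass walks the tuple shape leaf by leaf):

    bool SupportsBF16Passthrough(const HloInstruction& hlo) {
      if (hlo.opcode() != HloOpcode::kDomain) return false;
      // Metadata-only op: data precision is untouched, so BF16 may flow
      // through whenever operand and result shapes agree (they always
      // do for kDomain, which is what makes it the special case).
      return ShapeUtil::Compatible(hlo.shape(), hlo.operand(0)->shape());
    }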
* Enable bfloat16 propagation for the bitcast HLO. (A. Unique TensorFlower, 2018-06-18)
  If the input and output element types of a bitcast are the same (i.e., it is only a layout and shape change), then its effective output precision is the same as its input precision.
  PiperOrigin-RevId: 200966788
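  The condition reduces to an element-type comparison; a sketch, with BitcastKeepsPrecision as an assumed helper name:

    bool BitcastKeepsPrecision(const HloInstruction& hlo) {
      // Same element type in and out means the bitcast is only a layout
      // and shape change, so input precision carries to the output.
      return hlo.shape().element_type() ==
             hlo.operand(0)->shape().element_type();
    }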
* [XLA] Add kConvert to the EffectiveOperandPrecisionIsOutputPrecision list. (Yuanzhong Xu, 2018-02-26)
  PiperOrigin-RevId: 187044921
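  A sketch of a call site, with CanDemoteOperand as an assumed name, showing how a propagation pass might consult the predicate:

    bool CanDemoteOperand(const BFloat16Support& support,
                          const HloInstruction& hlo, int64 operand_index) {
      // kConvert qualifies because a convert rounds its operand to the
      // output type anyway: a BF16 operand feeding a BF16 output cannot
      // change the result.
      return support.EffectiveOperandPrecisionIsOutputPrecision(
          hlo, operand_index);
    }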
* [XLA] Add an HLO pass that folds BF16/F32 conversions. (Yuanzhong Xu, 2018-02-12)
  If an HLO already supports BF16 input/output, conversions before/after it are removed and the HLO's input/output types are converted to BF16. Also updates HloVerifier to allow mixed precision when requested. If an HLO has both F32 and BF16 inputs, ShapeInference uses F32 as the output type.
  PiperOrigin-RevId: 185407143
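  A sketch of the output-side fold, with MaybeFoldOutputConversions as an assumed name (the real pass also folds input-side converts and depends on HloVerifier's mixed-precision mode):

    void MaybeFoldOutputConversions(HloInstruction* hlo) {
      // Every user must be a convert to BF16 for the fold to apply.
      for (HloInstruction* user : hlo->users()) {
        if (user->opcode() != HloOpcode::kConvert ||
            user->shape().element_type() != BF16) {
          return;  // pattern does not match; leave the graph unchanged
        }
      }
      // Retype the producer's output to BF16; the now-redundant converts
      // can then be replaced with the producer itself.
      hlo->mutable_shape()->set_element_type(BF16);
    }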