path: root/tensorflow/compiler/xla/service/hlo_matchers.h
Commit message    Author    Date
* [XLA] Add support for algebraic simplifications involving kIota    David Majnemer    2018-08-28
    PiperOrigin-RevId: 210634966
* [XLA] Use absl string types and functions instead of the TF versions.    Justin Lebar    2018-08-23
    Unfortunately this has to be one big patch, because e.g. absl::StrCat doesn't accept a TF
    StringPiece, but as soon as we switch to absl::string_view, we have to switch away from all of
    the TF functions.
    PiperOrigin-RevId: 209957896
* [XLA] gtl::optional->absl::optional    Yunxing Dai    2018-08-21
    PiperOrigin-RevId: 209686671
* Add HLO matcher for the tuple-select HLO.    A. Unique TensorFlower    2018-08-13
    PiperOrigin-RevId: 208495688
* [XLA] Add operator overloads for arithmetic and bitwise operations on XlaOp.    Peter Hawkins    2018-06-26
    Remove operator== and operator!= and replace their uses with IsIdenticalTo() and
    IsUninitialized() methods to avoid potential confusion between structural equality and the HLO
    Eq()/Ne() operators.
    PiperOrigin-RevId: 202132720
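    A minimal sketch of how the overloads and the new handle-comparison methods might be used. The
    builder setup (XlaBuilder, the free Parameter function, ShapeUtil) is assumed from the XLA
    client API and is illustrative, not taken from this change:

        // Sketch only; assumes the XLA client builder header (xla_builder.h) is included.
        void BuildWithOverloads() {
          xla::XlaBuilder b("operator_overloads");
          xla::XlaOp x = xla::Parameter(&b, 0, xla::ShapeUtil::MakeShape(xla::S32, {8}), "x");
          xla::XlaOp y = xla::Parameter(&b, 1, xla::ShapeUtil::MakeShape(xla::S32, {8}), "y");
          xla::XlaOp sum = x + y;   // arithmetic overload, equivalent to xla::Add(x, y)
          xla::XlaOp bits = x & y;  // bitwise overload on integer operands
          // Handle comparison is now spelled explicitly instead of operator==:
          bool same = sum.IsIdenticalTo(sum);            // compares op handles, not HLO semantics
          bool uninit = xla::XlaOp().IsUninitialized();  // default-constructed op has no builder
          (void)bits; (void)same; (void)uninit;
        }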
* Rename HLO opcode kGenerateToken to kAfterAll.    Mark Heffernan    2018-06-25
    Long term I think we want to require kAfterAll to take at least one token as operand so it
    cannot generate a token out of thin air, so kGenerateToken is no longer an appropriate name.
    Instead, a primordial token would be supplied somehow in the entry computation, perhaps as a
    parameter, and then threaded to any side-effecting ops.
    NFC.
    PiperOrigin-RevId: 202079040
* Add support for TOKEN type to CPU/GPU backends.    Mark Heffernan    2018-06-14
    TOKENs will be used for ordering side-effecting operations. They are not materialized but can
    be contained in tuples and flow into and out of computations. This CL adds a trivial
    representation for the cpu and gpu backends to support TOKENs and modifies copy insertion to
    avoid making copies of tokens. This also adds a Literal TOKEN which is required for the
    interpreter backend.
    PiperOrigin-RevId: 200623120
* [XLA] Move xla/tools/parser/* into xla/service.    Justin Lebar    2018-06-01
    Now that we're using the parser inside of xla/service, it's awkward for it to live inside of
    xla/tools, because everything else in there is a standalone tool. We've already had one person
    be confused by this.
    PiperOrigin-RevId: 198935921
* HloSharding parsing from string, used by new Sharding HloMatcher for ease of use.    A. Unique TensorFlower    2018-05-23
    PiperOrigin-RevId: 197825588
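    A hedged sketch of the string-based sharding matching this enables in tests. The sharding
    strings follow HloSharding::ToString()-style syntax and the instruction variables are
    illustrative assumptions:

        // Sketch only: match an instruction's sharding annotation by its string form.
        EXPECT_THAT(param, op::Sharding("{maximal device=0}"));
        EXPECT_THAT(dot, op::Sharding("{devices=[2,1]0,1}"));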
* Rename HloDotWithContractDimsMatcher to HloDotWithContractingDimsMatcher    Sanjoy Das    2018-05-07
    This is a typo I introduced in cr/195514907.
    PiperOrigin-RevId: 195706006
* Remove uses of the kTransposeDot fusion    Sanjoy Das    2018-05-07
    I didn't remove the enum itself, but after this change removing the enum should be a simple
    NFC change (famous last words!). This will make it easier to implement BatchDot on CPU.

    The change removes usages of kTransposeDot by:
      - Teaching TransposeFolding to "fuse" transposes into dots by flipping the
        lhs_contracting_dims/rhs_contracting_dims fields.
      - Replacing the notion of transpose_lhs/transpose_rhs in the IR emitters with "has a
        non-canonical LHS contraction dimension"/"has a non-canonical RHS contraction dimension",
        where the canonical LHS and RHS contraction dims [0] are 1 and 0.

    Some tests were getting away with creating Dot instructions with their dimension numbers
    unset. I've fixed these to create canonical dot operations instead.

    It is possible (but hard to tell without trying) that some of the IR emission logic and Eigen
    runtime calls can now be simplified further. For instance, instead of passing in a
    `transpose_lhs` and `transpose_rhs` to the Eigen GEMM routines, we could instead pass in the
    LHS and RHS contraction dimensions directly.

    [0] See HloInstruction::CreateCanonicalDot.
    PiperOrigin-RevId: 195514907
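    To make the canonical contraction dimensions above concrete, a brief sketch using the
    DotDimensionNumbers proto; the surrounding setup is illustrative rather than taken from this
    change:

        // Canonical dot: contract LHS dimension 1 against RHS dimension 0
        // (see HloInstruction::CreateCanonicalDot).
        DotDimensionNumbers dnums;
        dnums.add_lhs_contracting_dimensions(1);
        dnums.add_rhs_contracting_dimensions(0);
        // Folding a transpose of the LHS then amounts to flipping the LHS contracting
        // dimension to 0 instead of emitting a separate kTranspose feeding a kTransposeDot.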
* Support matching against shape string in HLO testing matchers    A. Unique TensorFlower    2018-04-26
    After this change a test can use op::Shape("f32[7,11]") instead of the longer and harder to
    read op::Shape(ShapeUtil::MakeShape(F32, {7, 11})) format.
    PiperOrigin-RevId: 194373704
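    Both forms from the entry above, written out as they would appear in a test body (with the
    usual op:: matcher namespace alias); the instruction variable `root` is illustrative:

        // Either matcher should accept an instruction whose shape is f32[7,11].
        EXPECT_THAT(root, op::Shape("f32[7,11]"));
        EXPECT_THAT(root, op::Shape(ShapeUtil::MakeShape(F32, {7, 11})));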
* Introduce a new HLO shape and sharding matcher.    A. Unique TensorFlower    2018-04-24
    These new matchers can be used in tests in combination with the existing HLO opcode matchers
    to better verify a generated HLO graph.
    PiperOrigin-RevId: 194082100
* [XLA] (Re-land) Add HLO matcher for CustomCall that accepts a call target.    Justin Lebar    2018-01-26
    Now with less build breakage!
    PiperOrigin-RevId: 183458987
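    A hedged sketch of how a call-target-aware CustomCall matcher might be used; the exact
    argument order, the target name, and the instruction variable are assumptions:

        // Sketch only: match a kCustomCall with call target "my_target" and one operand.
        EXPECT_THAT(root, op::CustomCall("my_target", op::Parameter(0)));
        // Without a target argument, match any custom-call.
        EXPECT_THAT(root, op::CustomCall());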
* Automated g4 rollback of changelist 183296506    Justin Lebar    2018-01-25
    PiperOrigin-RevId: 183312680
* [XLA] Add HLO matcher for CustomCall that accepts a call target.    Justin Lebar    2018-01-25
    PiperOrigin-RevId: 183296506
* [XLA:GPU] Support BF16 data type.    A. Unique TensorFlower    2018-01-23
    - Add an HLO pass to the GPU backend to implement BF16 operations with F32 operations.
    - Define macro XLA_BACKEND_SUPPORTS_BFLOAT16=1 when building tests for the GPU backend to
      enable BF16 tests for GPU.
    - Enable bfloat16_test and other BF16 tests for GPU.
    - Add hlo_element_type_converter_test.
    - Add convolution tests and matrix multiplication tests for BF16.
    PiperOrigin-RevId: 182977358
* [XLA] Add conditional HloInstruction and handle conditional in DFS visitors.    A. Unique TensorFlower    2017-11-17
    PiperOrigin-RevId: 176175297
* Change for asynchronous Send and Recv by splitting Send into {Send, SendDone} and Recv into {Recv, RecvDone}.    HyoukJoong Lee    2017-11-10
    See operation_semantics.md for the updated semantics.
    PiperOrigin-RevId: 175216012
* [XLA] Add HLO matchers that check parameter numbers and GTE indices.    Justin Lebar    2017-10-31
    This lets you do

        EXPECT_THAT(foo, op::Parameter(42));

    and

        EXPECT_THAT(bar, op::GetTupleElement(baz, 8));

    PiperOrigin-RevId: 174113597
* [XLA] Remove dead opcode kIndex.    Justin Lebar    2017-10-30
    PiperOrigin-RevId: 173987428
* [XLA] Remove dead kUpdate opcode.    Justin Lebar    2017-10-25
    PiperOrigin-RevId: 173462881
* [XLA] Add ShiftLeft, ShiftRightArithmetic, and ShiftRightLogical operators.    Peter Hawkins    2017-10-13
    PiperOrigin-RevId: 172091595
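    Assuming opcode matchers are generated for the new shift operators (an assumption not stated
    by this entry), tests could match them like any other opcode:

        // Sketch only: instruction variable `root` is illustrative.
        EXPECT_THAT(root, op::ShiftLeft(op::Parameter(0), op::Constant()));
        EXPECT_THAT(root, op::ShiftRightLogical(op::Parameter(0), op::Constant()));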
* [TF:XLA] Rename HloOpcode::kLogicalX to kX    A. Unique TensorFlower    2017-10-09
    PiperOrigin-RevId: 171536686
* [XLA] Move definition of xla::PrintTo out of line to fix duplicate definition error in Mac build.    Peter Hawkins    2017-09-28
    Fixes GitHub issue #13357.
    PiperOrigin-RevId: 170347379
* [XLA] Add ReducePrecisionInsertion pass.    A. Unique TensorFlower    2017-07-06
    This new HLO pass, intended for experimental purposes rather than optimization, inserts
    ReducePrecision instructions (with user-specified bitsizes) after all instructions of opcode
    types specified by the user. This makes it possible to do experiments on the numerical effects
    of storing intermediate values in reduced precision without changing the HLO graph definition.
    PiperOrigin-RevId: 161117760
* Added and using GMock matcher for HloInstruction    A. Unique TensorFlower    2017-04-14
    Change: 153159175
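    For context, a sketch of the general test pattern these matchers enable; the test fixture name
    and the `computation` variable are illustrative assumptions, while the namespace alias is the
    usual convention in XLA tests:

        // Sketch of a typical test using the matchers declared in hlo_matchers.h.
        #include "tensorflow/compiler/xla/service/hlo_matchers.h"

        namespace op = xla::testing::opcode_matchers;

        TEST_F(SomeHloTest, RootMatchesExpectedTree) {
          // `computation` is an xla::HloComputation* built earlier in the test.
          EXPECT_THAT(computation->root_instruction(),
                      op::Add(op::Parameter(0),
                              op::Multiply(op::Parameter(1), op::Constant())));
        }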