path: root/tensorflow/compiler/xla/service/dfs_hlo_visitor_with_default.h
* Change headers to directly include absl::Span, and clean up the build dependencies as well. (Tim Shen, 2018-08-30)
  PiperOrigin-RevId: 211038094
* [XLA] Add the xla interface for CollectivePermute. (A. Unique TensorFlower, 2018-08-28)
  PiperOrigin-RevId: 210576458
* [XLA] Use absl string types and functions instead of the TF versions. (Justin Lebar, 2018-08-23)
  Unfortunately this has to be one big patch, because e.g. absl::StrCat doesn't accept a TF StringPiece, but as soon as we switch to absl::string_view, we have to switch away from all of the TF functions.
  PiperOrigin-RevId: 209957896
* Remove HostCompute HLO. (Tong Shen, 2018-08-21)
  Now, for host compute, we just emit SendToHost & RecvFromHost pairs, and use a token to ensure the dependency.
  PiperOrigin-RevId: 209671416
* [XLA] Add the xla interface for AllToAll. (A. Unique TensorFlower, 2018-08-08)
  PiperOrigin-RevId: 207971529
* [XLA] Add Scatter HLO. (A. Unique TensorFlower, 2018-08-01)
  PiperOrigin-RevId: 207045468
* Start implementation of Iota HLO. (Nick Desaulniers, 2018-07-20)
  PiperOrigin-RevId: 205447892
* [TF:XLA] Split literal_util into {literal, literal_util}. (Kay Zhu, 2018-07-03)
  Currently the Literal classes sit in literal_util.{h,cc} instead of literal.{h,cc}. That file also contains helper functions that are a better fit for their own separate class/namespace. This change starts the process by moving most static factory methods to the LiteralUtil namespace.
  PiperOrigin-RevId: 203217065
* [TF:XLA] Split select HLO into array- and tuple-select. (A. Unique TensorFlower, 2018-07-03)
  Array select and tuple select are already handled separately in all backends and HLO passes. Array select is an elementwise operation: the shapes of the two operands have the same dimensions. Tuple select does not define its own output, but instead forwards the true- or false-operand based on a scalar predicate operand. This CL reflects this by adding a new kTupleSelect HLO. The XLA builder interface stays the same and dispatches based on the operand shapes. There is no change in the operation semantics; this CL just splits the existing select operation into two opcodes and preserves the existing semantics. HLO cost analysis is fixed to handle the two ops appropriately.
  PiperOrigin-RevId: 203180342
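The two select semantics described above can be sketched as follows. This is a minimal illustrative model, not the XLA implementation; `ArraySelect` and `TupleSelect` are hypothetical stand-ins for the behavior of kSelect and kTupleSelect.

```cpp
#include <cassert>
#include <cstddef>
#include <utility>
#include <vector>

// Array select (kSelect): elementwise; the predicate has the same
// dimensions as the two value operands, and a new output is produced.
std::vector<float> ArraySelect(const std::vector<bool>& pred,
                               const std::vector<float>& on_true,
                               const std::vector<float>& on_false) {
  std::vector<float> out(pred.size());
  for (std::size_t i = 0; i < pred.size(); ++i) {
    out[i] = pred[i] ? on_true[i] : on_false[i];
  }
  return out;
}

// Tuple select (kTupleSelect): a scalar predicate forwards one operand
// wholesale; no new output is defined.
template <typename TupleT>
const TupleT& TupleSelect(bool pred, const TupleT& on_true,
                          const TupleT& on_false) {
  return pred ? on_true : on_false;
}
```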
* Rename HLO opcode kGenerateToken to kAfterAll. (Mark Heffernan, 2018-06-25)
  Long term, I think we want to require kAfterAll to take at least one token as an operand so it cannot generate a token out of thin air, so kGenerateToken is no longer an appropriate name. Instead, a primordial token would be supplied somehow in the entry computation, perhaps as a parameter, and then threaded to any side-effecting ops. NFC.
  PiperOrigin-RevId: 202079040
* Add kGenerateToken HLO instruction. (Mark Heffernan, 2018-06-08)
  The new HLO instruction serves two purposes. (1) It generates a new token value; this is the only way to create tokens. (2) The operation is variadic, taking zero or more token operands, and acts as a join of its operands. I initially considered using a kConstant as a method to create new tokens, but this ran into problems because of expectations in backends regarding constants and their materialization. This CL enables creation of generate-token instructions, but the new instruction is not yet supported in any backend.
  PiperOrigin-RevId: 199836205
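The join semantics can be modeled with a tiny sketch; `Token` and `GenerateToken` here are hypothetical stand-ins for the opaque token value and the variadic instruction, with the real ordering enforced by the HLO graph rather than any runtime data.

```cpp
#include <cassert>
#include <utility>
#include <vector>

// A token is an opaque value whose only purpose is to order
// side-effecting ops; it carries no data.
struct Token {
  std::vector<const Token*> joined;  // tokens this one depends on
};

// generate-token: with zero operands it creates a fresh token; with one
// or more operands it acts as a join, so anything consuming the result
// is ordered after everything that produced the operand tokens.
Token GenerateToken(std::vector<const Token*> operands) {
  return Token{std::move(operands)};
}
```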
* Automated g4 rollback of changelist 192180356. (Dimitris Vardoulakis, 2018-04-18)
  PiperOrigin-RevId: 193427566
* Add opcode for new instruction that broadcasts degenerate dimensions. (Dimitris Vardoulakis, 2018-04-09)
  Implicit broadcasts can be translated to the new instruction instead of a reshape-and-broadcast. Follow-up CLs will add support in UserComputation and the various backends.
  PiperOrigin-RevId: 192180356
* Fix problem with HandleElementwiseUnary/Binary in DfsHloVisitorWithDefault. (Mark Heffernan, 2018-03-27)
  DfsHloVisitorWithDefault incorrectly included some overrides for handling several elementwise binary and unary opcodes. These overrides explicitly called DefaultAction, which meant that these opcodes were not handled by HandleElementwiseUnary/Binary. This CL removes these overrides and adds a comment describing the potential problem. Unfortunately, I don't see a way of automatically catching these issues when new opcodes are added, so the comment will have to do.
  PiperOrigin-RevId: 190708245
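The bug described above can be illustrated with a miniature of the visitor layering. This is a hypothetical sketch, not the real DfsHloVisitor/DfsHloVisitorWithDefault code: the per-opcode override short-circuits to DefaultAction, so a subclass overriding HandleElementwiseUnary never sees that opcode.

```cpp
#include <cassert>

struct Instruction {};  // stand-in for HloInstruction

struct Visitor {
  virtual ~Visitor() = default;
  // In the base visitor, elementwise opcodes funnel into
  // HandleElementwiseUnary by default.
  virtual void HandleExp(const Instruction& exp) { HandleElementwiseUnary(exp); }
  virtual void HandleElementwiseUnary(const Instruction& hlo) = 0;
};

// Buggy "with default" visitor: the explicit HandleExp override calls
// DefaultAction directly, bypassing HandleElementwiseUnary.
struct BuggyWithDefault : Visitor {
  virtual void DefaultAction(const Instruction&) {}
  void HandleExp(const Instruction& exp) override { DefaultAction(exp); }  // BUG
  void HandleElementwiseUnary(const Instruction& hlo) override { DefaultAction(hlo); }
};

// Fixed version: no per-opcode override, so HandleExp still routes
// through the base class into HandleElementwiseUnary.
struct FixedWithDefault : Visitor {
  virtual void DefaultAction(const Instruction&) {}
  void HandleElementwiseUnary(const Instruction& hlo) override { DefaultAction(hlo); }
};

struct CountBuggy : BuggyWithDefault {
  int elementwise_calls = 0;
  void HandleElementwiseUnary(const Instruction&) override { ++elementwise_calls; }
};

struct CountFixed : FixedWithDefault {
  int elementwise_calls = 0;
  void HandleElementwiseUnary(const Instruction&) override { ++elementwise_calls; }
};
```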
* [XLA] Add some plumbing, documentation, verification and shape inference for Gather. (Sanjoy Das, 2018-02-16)
  Pretty much everything other than HLO verification and shape inference will fail for Gather with Unimplemented. Note that this CL is intentionally incomplete; I figured it would be nicer to get some of the boiler-platey stuff out of the way early. Let me know if you want me to send in a larger but more complete CL instead.
  PiperOrigin-RevId: 186055521
* [TF:XLA] Adds HostCompute HLO, a pseudo-op to represent host-side computation. (A. Unique TensorFlower, 2018-02-16)
  PiperOrigin-RevId: 186047964
* Automated g4 rollback of changelist 180000981. (A. Unique TensorFlower, 2018-01-02)
  PiperOrigin-RevId: 180581912
* Automated g4 rollback of changelist 179983419. (A. Unique TensorFlower, 2017-12-23)
  PiperOrigin-RevId: 180000981
* Adds FFT for XLA: CPU via Eigen, GPU via cuFFT. (A. Unique TensorFlower, 2017-12-22)
  GPU support includes plan reuse, with a new scratch allocator per execution in fft_thunk.
  PiperOrigin-RevId: 179983419
* [XLA] Add conditional HloInstruction and handle conditional in DFS visitors. (A. Unique TensorFlower, 2017-11-17)
  PiperOrigin-RevId: 176175297
* Change for asynchronous Send and Recv by splitting Send into {Send, SendDone} and Recv into {Recv, RecvDone}. (HyoukJoong Lee, 2017-11-10)
  See operation_semantics.md for the updated semantics.
  PiperOrigin-RevId: 175216012
* [TF:XLA] Add a const HLO visitor. (A. Unique TensorFlower, 2017-11-02)
  Use it in the HLO cost analysis pass.
  PiperOrigin-RevId: 174411043
* [TF:XLA] Reduce boilerplate code in HLO visitors. (A. Unique TensorFlower, 2017-10-30)
  Only pass the HloInstruction into visitor methods. This makes changing instructions and visitors easier.
  PiperOrigin-RevId: 173983398
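The idea behind this cleanup can be sketched in miniature: operands are reachable from the instruction itself, so passing them as separate parameters is redundant. `Instr` and `OperandCount` below are hypothetical stand-ins, not XLA APIs.

```cpp
#include <cassert>
#include <vector>

// Stand-in for HloInstruction: operands hang off the instruction.
struct Instr {
  std::vector<const Instr*> operands;
  const Instr* operand(int i) const { return operands[i]; }
};

// Old style (illustrative): HandleAdd(add, lhs, rhs) passed operands the
// handler could already reach through `add`. New style: only the
// instruction is passed, and the handler fetches what it needs.
int OperandCount(const Instr* add) {
  return static_cast<int>(add->operands.size());  // lhs = operand(0), rhs = operand(1)
}
```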
* [TF:XLA] Don't pass opcode separately in two HLO visitor functions. (A. Unique TensorFlower, 2017-08-30)
  HandleElementwiseUnary and HandleElementwiseBinary never use this parameter, and it is accessible from the HLO instruction anyway. No functional change.
  PiperOrigin-RevId: 167063592
* Implement batch-norm inference by expanding it into smaller ops. (A. Unique TensorFlower, 2017-08-17)
  1. Add batch norm inference support in batchnorm_rewriter.
  2. Connect XLA's batch norm inference to TF's FusedBatchNorm.
  RELNOTES: n/a
  PiperOrigin-RevId: 165655351
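The expansion computes y = scale * (x - mean) / sqrt(variance + epsilon) + offset out of elementwise ops. A minimal scalar-loop sketch of that formula follows; the real rewriter emits HLO multiply/subtract/add/rsqrt ops over arrays, and the function name here is illustrative.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Sketch of the batch-norm inference expansion:
// y = scale * (x - mean) / sqrt(variance + epsilon) + offset
std::vector<float> BatchNormInference(const std::vector<float>& x,
                                      float scale, float offset,
                                      float mean, float variance,
                                      float epsilon) {
  // rsqrt(variance + epsilon), hoisted out of the elementwise loop.
  const float inv_stddev = 1.0f / std::sqrt(variance + epsilon);
  std::vector<float> y(x.size());
  for (std::size_t i = 0; i < x.size(); ++i) {
    y[i] = scale * (x[i] - mean) * inv_stddev + offset;
  }
  return y;
}
```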
* [BatchNorm] Skeleton code to implement BatchNormGrad. (A. Unique TensorFlower, 2017-07-06)
  This CL sets up all the boilerplate code needed to implement BatchNormGrad. None of the backends has been implemented yet.
  RELNOTES: n/a
  PiperOrigin-RevId: 161161713
* Remove operand parameters from Handle{Maximum,Minimum,Convert,Copy}. (A. Unique TensorFlower, 2017-06-20)
  PiperOrigin-RevId: 159537163
* Remove operand parameters from HandleElementwiseUnary and HandleElementwiseBinary functions. (A. Unique TensorFlower, 2017-06-19)
  This allows incremental cleanup of the individual unary and binary operators.
  PiperOrigin-RevId: 159495454
* We believe a fused version of batch_norm_op can speed the algorithm up. This PR implements a new op, fused_batch_norm_op, in tf-xla and HLO. (A. Unique TensorFlower, 2017-06-13)
  This is the CPU implementation for batch norm training. This CL is big, but a lot of the code is boilerplate.
  PiperOrigin-RevId: 158930166
* Introduce new class Literal to replace protobuf Literal. (A. Unique TensorFlower, 2017-06-01)
  This renames the existing Literal message to LiteralProto and introduces a new C++ class named Literal to replace it. The LiteralProto is used only at RPC boundaries, or when protobuf-specific functionality is required; the Literal class offers a ToProto function to generate a new LiteralProto message when necessary. Currently, all the static functions in class LiteralUtil just forward to their counterparts in class Literal; this will change in a future CL. Class Literal implements all the buffers as std::vectors. The only exception is preds(): the std::vector<bool> representation makes it unusable for the semantics we require (it is not possible to get the address of the underlying storage, for instance), so the CL adds a BoolVector class to work around that issue. In future CLs, the std::vector representation may be changed to something more efficient if needed.
  PiperOrigin-RevId: 157739125
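The design described above can be sketched in miniature. This is a hypothetical model, not the XLA code: `LiteralLike`, `LiteralProtoLike`, and the `BoolVector` here only mirror the roles named in the CL.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Wire-format copy, used only at RPC boundaries.
struct LiteralProtoLike {
  std::vector<float> f32s;
};

// std::vector<bool> is bit-packed, so the address of its underlying
// storage cannot be taken; BoolVector stores one byte per bool to keep
// the buffer addressable.
class BoolVector {
 public:
  explicit BoolVector(std::size_t n) : data_(n, 0) {}
  std::uint8_t* data() { return data_.data(); }
  std::size_t size() const { return data_.size(); }
 private:
  std::vector<std::uint8_t> data_;
};

// In-memory literal backed by std::vector buffers.
class LiteralLike {
 public:
  std::vector<float>& f32s() { return f32s_; }
  // Convert to the proto form only when crossing an RPC boundary.
  LiteralProtoLike ToProto() const { return LiteralProtoLike{f32s_}; }
 private:
  std::vector<float> f32s_;
};
```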
* Normalize arguments for HandleDynamicSlice. NFC. (A. Unique TensorFlower, 2017-05-16)
  PiperOrigin-RevId: 156250579
* [XLA] Only pass instruction in instruction visitor functions. (A. Unique TensorFlower, 2017-04-14)
  Change: 153234174
* Addition of Outfeed HLO op. (Tayo Oguntebi, 2017-01-27)
  Change: 145772331
* Initial open-source release of XLA: Accelerated Linear Algebra. (Peter Hawkins, 2017-01-09)
  XLA is a compiler-based linear algebra execution engine that targets CPUs, GPUs and custom accelerators. XLA is still experimental; we are releasing it early to get the community involved.
  Change: 143990941