path: root/tensorflow/compiler/xla/service/hlo_domain_test.cc
Commit message | Author | Age
* Fix CRS combiner for spatial partitioning (HyoukJoong Lee, 2018-09-04)
  PiperOrigin-RevId: 211519250
* Convert a couple more test files to HloVerifiedTestBase, and add default arguments to the constructor to remove some boilerplate. (Dimitris Vardoulakis, 2018-08-29)
  PiperOrigin-RevId: 210855509
* Domain tuple sharding propagation from users instead of from operands. (A. Unique TensorFlower, 2018-08-28)
  PiperOrigin-RevId: 210525464
* [XLA] Add and use a layout-sensitive HLO verifier. (Justin Lebar, 2018-08-24)
  For now, this verifier checks some noncontroversial invariants, like:
  - Fusion operands and fusion computation parameters must have matching layouts.
  - Same for while loops, calls, and kConditional.
  It's a bit of a pain to add these explicit layout-sensitive and allow-mixed-precision flags everywhere, but I think it's better than adding default args. With default args we can easily mix up the order, and we'd only be able to add new flags to the end of the list.
  PiperOrigin-RevId: 210059349
* [XLA] Use absl string types and functions instead of the TF versions. (Justin Lebar, 2018-08-23)
  Unfortunately this has to be one big patch, because e.g. absl::StrCat doesn't accept a TF StringPiece, but as soon as we switch to absl::string_view, we have to switch away from all of the TF functions.
  PiperOrigin-RevId: 209957896
* Reduce the memory usage of sharding domains (A. Unique TensorFlower, 2018-08-23)
  Previously the domain instructions inserted before and after an `n` element tuple required `O(n^2)` memory (and compute) because every operand and user had its own domain instruction with a tuple sharding and tuple shape for the exit domains, which constructed `n` HloShardings and `n` Shape protos per domain. After this change we keep track of the domain instructions inserted, and if we already have a domain instruction with the correct operand and metadata then we re-use it instead of creating a new one. Additionally, we change HloInstruction and ShardingMetadata to store a std::shared_ptr to HloSharding so the same instance can be shared by many instructions. This CL doesn't update all uses to remove all of the duplicated HloShardings, but handles the most wasteful cases to reduce memory usage.
  PiperOrigin-RevId: 209924260
* Fix domain isolation for the case when multiple domain types are involved (A. Unique TensorFlower, 2018-08-22)
  Previously, when we were stacking domains, we inserted the new domain instructions between the upper-most domain and its operand. This caused issues if that domain had more than one user with different attributes for the domain inserted at the second pass, because we could have ended up with edges between different domains. After this change we insert the new domains between the lower-most domain and its user, ensuring that the domain separates every instruction with different attributes.
  PiperOrigin-RevId: 209776741
* [XLA] Use absl::make_unique instead of xla::MakeUnique. (Justin Lebar, 2018-08-20)
  Same for WrapUnique.
  PiperOrigin-RevId: 209531124
* Fix domain sharding setting for tuple instructions. (A. Unique TensorFlower, 2018-08-09)
  We were erroneously propagating the entire domain sharding for each tuple operand, instead of propagating the operand subsharding.
  PiperOrigin-RevId: 208048503
* Make the HloDomainRemover pass more configurable (A. Unique TensorFlower, 2018-07-16)
  Previously we had two different functions to normalize instructions within a domain, where one of them was specified inside the metadata while the other one was passed into the domain remover. This change unifies them to use the externally passed-in function for both use cases, to make it possible to rewrite both of them from the caller of the domain remover (to add special logic).
  PiperOrigin-RevId: 204715075
* Fix domain removal when the root instruction is an empty domain (A. Unique TensorFlower, 2018-07-09)
  If a domain becomes empty because the various optimizations removed every instruction from it, then we have to re-add some instructions to make sure the user-supplied sharding is still respected. This is especially important for the root instruction, as the user will expect the data to be available on the device they requested. Before this CL we failed to insert the tuple->gte sequence into the empty domain, due to a bug where we only considered cases where we have an exit domain, which is not the case for the root instruction.
  PiperOrigin-RevId: 203744534
* Change Send, SendDone, Recv and RecvDone to produce tokens. (Mark Heffernan, 2018-07-03)
  This is a follow-up to cl/202069017, which added tokens as operands to Send and Recv.
  PiperOrigin-RevId: 203145403
* Change Send and Recv HLOs to take a token operand. (Mark Heffernan, 2018-07-02)
  Send and Recv HLOs now have an additional required operand which must be token-shaped. The XLA client interface for these operations is unchanged and will be updated in follow-up CLs.
  PiperOrigin-RevId: 202993121
* Fixed ShardingMetadata dump of null sharding from None to {}, to make it compatible with HLO string syntax. (A. Unique TensorFlower, 2018-06-28)
  PiperOrigin-RevId: 202445509
* Rename HLO opcode kGenerateToken to kAfterAll. (Mark Heffernan, 2018-06-25)
  Long term, I think we want to require kAfterAll to take at least one token as operand so it cannot generate a token out of thin air, so kGenerateToken is no longer an appropriate name. Instead, a primordial token would be supplied somehow in the entry computation, perhaps as a parameter, and then threaded to any side-effecting ops.
  NFC.
  PiperOrigin-RevId: 202079040
* Change infeed and outfeed to take and produce tokens. (Mark Heffernan, 2018-06-25)
  Tokens are primitive types which can be threaded between side-effecting operations to order them. This CL changes infeed and outfeed to take a token as an operand and produce a token as one of their outputs. The most disruptive aspect of this change is that infeed now produces a two-element tuple containing the data value and a token. This means the shape of the infed data is no longer the same as the shape of the infeed instruction, and a get-tuple-element operation must be called on the infeed instruction's output to get its data.
  Related changes/notes:
  - The computation builder interface is unchanged. The infeed builder constructs an infeed instruction followed by a GTE instruction to extract the data value. Client and computation builder interface changes will be in follow-up CLs.
  - Tokens can now be the root of the entry computation. Previously, tokens could not be passed into or out of the entry computation; but now that outfeed produces a token, this constraint meant that outfeed could not be a root, which is awkward. In the future we'd like to pass in tokens as well, perhaps as the only way of generating the initial token to thread through side-effecting ops.
  - Infeed and outfeed still have a form which does not take a token, to minimize the size of this CL. In the future this form will be removed. However, most HLO tests using infeed/outfeed are changed to accept a token in this CL.
  PiperOrigin-RevId: 202041518
* [TF:XLA] Change hlo_domain_test to use HloVerifiedTestBase. (Dimitris Vardoulakis, 2018-06-20)
  PiperOrigin-RevId: 201383246
* [XLA] Move xla/tools/parser/* into xla/service. (Justin Lebar, 2018-06-01)
  Now that we're using the parser inside of xla/service, it's awkward for it to live inside of xla/tools, because everything else in there is a standalone tool. We've already had one person be confused by this.
  PiperOrigin-RevId: 198935921
* Introduced kDomain HLO instruction set isolation to bound connected sets of instructions with similar attributes (i.e., sharding). (A. Unique TensorFlower, 2018-05-29)
  This CL simply adds the infrastructure, but leaves the wire-on to a separate CL.
  PiperOrigin-RevId: 198503625