path: root/tensorflow/compiler/xla/service/hlo_sharding.h
Commit history (newest first): message, author, age
* Change headers to directly include absl::Span, and clean up the build dependencies as well. (Tim Shen, 2018-08-30)
  PiperOrigin-RevId: 211038094
* [XLA] Rename all (Mutable)ArraySlice to absl::Span. (Tim Shen, 2018-08-30)
  PiperOrigin-RevId: 210998142
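  The rename is mechanical; a minimal sketch of the correspondence (Sum and FillZeros are illustrative, not from the change):

      #include <cstdint>
      #include "absl/types/span.h"

      // absl::Span<const T> replaces ArraySlice<T> (read-only view);
      // absl::Span<T> replaces MutableArraySlice<T> (mutable view).
      int64_t Sum(absl::Span<const int64_t> values) {
        int64_t total = 0;
        for (int64_t v : values) total += v;
        return total;
      }

      void FillZeros(absl::Span<int64_t> values) {
        for (int64_t& v : values) v = 0;
      }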
* Do not crash when an empty tuple is passed into hlo_sharding. (Yunxing Dai, 2018-08-23)
  PiperOrigin-RevId: 210005372
* [XLA] gtl::optional -> absl::optional (Yunxing Dai, 2018-08-21)
  PiperOrigin-RevId: 209686671
* Remove tile shape from HloSharding (A. Unique TensorFlower, 2018-08-08)
  The tile shape can be deduced from the tile assignment and the HLO shape; by not storing it in the sharding we give the compiler more flexibility to decide the data layout.
  PiperOrigin-RevId: 207860794
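  A minimal sketch of the deduction (standalone, illustrative names, not XLA's API): each tile dimension is the ceiling of the array dimension divided by the number of tiles along it, so storing the tile shape carries no extra information.

      #include <cstdint>
      #include <vector>

      // Deduce the tile shape from the full shape and the tile-assignment grid.
      std::vector<int64_t> DeduceTileShape(const std::vector<int64_t>& shape_dims,
                                           const std::vector<int64_t>& grid_dims) {
        std::vector<int64_t> tile(shape_dims.size());
        for (size_t i = 0; i < shape_dims.size(); ++i) {
          tile[i] = (shape_dims[i] + grid_dims[i] - 1) / grid_dims[i];  // ceil div
        }
        return tile;
      }
      // e.g. an [8, 10] array on a 2x3 grid gives [4, 4] tiles.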
* Clean up the sharding unique-device API. (A. Unique TensorFlower, 2018-07-31)
  PiperOrigin-RevId: 206885051
* Build a fully connected graph with edges across called computations. (A. Unique TensorFlower, 2018-07-07)
  Restructured the sharding passes to propagate sharding onto pass-through instructions (GTEs, tuples, bitcasts, parameters, ...), which the placer no longer assigns.
  PiperOrigin-RevId: 203591020
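  A conceptual sketch of the propagation rule (simplified stand-in types, not the actual pass):

      #include <optional>
      #include <string>
      #include <vector>

      struct Instr {
        std::string opcode;                   // "get-tuple-element", "tuple", ...
        std::vector<Instr*> operands;
        std::optional<std::string> sharding;  // stand-in for a real HloSharding
      };

      bool IsPassThrough(const Instr& i) {
        return i.opcode == "get-tuple-element" || i.opcode == "tuple" ||
               i.opcode == "bitcast";
      }

      // Pass-through instructions inherit their operand's sharding, since the
      // placer no longer assigns one to them.
      void PropagateSharding(Instr* i) {
        if (IsPassThrough(*i) && !i->sharding && !i->operands.empty() &&
            i->operands[0]->sharding) {
          i->sharding = i->operands[0]->sharding;
        }
      }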
* [TF:XLA] Split literal_util into {literal, literal_util}. (Kay Zhu, 2018-07-03)
  Currently the Literal class sits in literal_util.{h,cc} instead of literal.{h,cc}. It also contains helper functions that are a better fit as their own separate class/namespace. This change starts that process by moving most static factory methods to the LiteralUtil namespace.
  PiperOrigin-RevId: 203217065
* Propagate dominant devices to kWhile computations. (A. Unique TensorFlower, 2018-06-20)
  PiperOrigin-RevId: 201439537
* [TF:XLA] Change hlo_domain_test to use HloVerifiedTestBase. (Dimitris Vardoulakis, 2018-06-20)
  PiperOrigin-RevId: 201383246
* Do not count empty tuples as having one leaf node. (A. Unique TensorFlower, 2018-06-12)
  PiperOrigin-RevId: 200327849
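  A minimal sketch of the corrected count (illustrative types, not XLA's):

      #include <vector>

      struct ShapeNode {
        bool is_tuple = false;
        std::vector<ShapeNode> elements;  // populated only for tuples
      };

      int LeafCount(const ShapeNode& s) {
        if (!s.is_tuple) return 1;
        int n = 0;  // an empty tuple now contributes 0 leaves, not 1
        for (const ShapeNode& e : s.elements) n += LeafCount(e);
        return n;
      }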
* Wire in the kDomain infrastructure brought in by cl/193798254. (A. Unique TensorFlower, 2018-06-07)
  PiperOrigin-RevId: 199745064
* Introduced kDomain HLO instruction set isolation to bound connected sets of instructions with similar attributes (i.e., sharding). (A. Unique TensorFlower, 2018-05-29)
  This CL simply adds the infrastructure, but leaves the wire-on to a separate CL.
  PiperOrigin-RevId: 198503625
* [XLA] Minor HloSharding cleanups. (Justin Lebar, 2018-05-29)
  Delete dead code in HloSharding::ToString(), and add and use a proper hasher struct.
  PiperOrigin-RevId: 198493972
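  A plausible shape for such a hasher (a sketch assuming HloSharding exposes a Hash() member; not verified against the commit):

      #include <cstddef>
      #include "tensorflow/compiler/xla/service/hlo_sharding.h"

      // Lets HloSharding be used as a key in hash containers, e.g.
      // std::unordered_set<xla::HloSharding, HloShardingHasher>.
      struct HloShardingHasher {
        size_t operator()(const xla::HloSharding& sharding) const {
          return sharding.Hash();  // assumed member; see hedge above
        }
      };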
* Propagate sharding of the source instruction to the copies added by layout assignment. (A. Unique TensorFlower, 2018-04-12)
  PiperOrigin-RevId: 192693972
* Restructuring the HLO partitioner to fit host computation and handle kCall. (A. Unique TensorFlower, 2018-04-04)
  Preprocess the input module to reassign reserved devices (like the host-compute one) to new, sequentially increasing device numbers, and track those in the GlobalState. This avoids having to spread the is-special-device logic across many places within the HLO partitioner and its related components. Added handling for kCall, which was missing from the previous implementation.
  PiperOrigin-RevId: 191601831
* Implement operator<< for HloSharding (A. Unique TensorFlower, 2018-03-23)
  The new operator makes the error messages coming from gtest more readable, as well as making logging easier.
  PiperOrigin-RevId: 190200926
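  The conventional implementation delegates to the existing ToString() (a sketch; ToString() is declared in this header):

      #include <ostream>
      #include "tensorflow/compiler/xla/service/hlo_sharding.h"

      namespace xla {
      // Delegating to ToString() makes gtest failure messages and LOG()
      // output print the sharding in its textual form instead of raw bytes.
      std::ostream& operator<<(std::ostream& out, const HloSharding& sharding) {
        return out << sharding.ToString();
      }
      }  // namespace xla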
* Add new helpers to HLO sharding. (A. Unique TensorFlower, 2018-03-19)
  PiperOrigin-RevId: 189569053
* Add TransformShardedTileShape helper method to HloSharding (A. Unique TensorFlower, 2018-03-13)
  It transforms an existing sharding to be compatible with a new shape, with an optional transform function to adjust the tile size for the sharded dimensions.
  PiperOrigin-RevId: 188903257
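  A hedged usage sketch (the signature is an assumption from the description: the optional transform receives a dimension and its current tile size and returns the new tile size):

      #include "tensorflow/compiler/xla/service/hlo_sharding.h"

      // Make `sharding` compatible with `new_shape`, doubling the tile size
      // of every sharded dimension (toy transform; signature assumed).
      xla::HloSharding Retile(const xla::HloSharding& sharding,
                              const xla::Shape& new_shape) {
        return sharding.TransformShardedTileShape(
            new_shape, [](int64_t /*dim*/, int64_t old_tile_size) {
              return old_tile_size * 2;
            });
      }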
* Add a helper to HloSharding to easily create trivial flat tuples without requiring a ShapeTree. (A. Unique TensorFlower, 2017-12-05)
  PiperOrigin-RevId: 177956572
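  What the helper avoids, sketched against the pre-existing ShapeTree-based overload (Replicate() and Tuple(ShapeTree) are existing APIs; the new flat-list entry point's exact signature is not shown here):

      #include "tensorflow/compiler/xla/service/hlo_sharding.h"

      // Before the helper: materialize a ShapeTree just to state that every
      // leaf of the tuple gets the same trivial sharding.
      xla::HloSharding TrivialFlatTuple(const xla::Shape& tuple_shape) {
        xla::ShapeTree<xla::HloSharding> tree(tuple_shape,
                                              xla::HloSharding::Replicate());
        return xla::HloSharding::Tuple(tree);
      }
      // The new helper takes the leaf shardings directly, skipping the
      // ShapeTree construction.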
* Tuples weren't handled by the sharding validator. Add more tuple validation tests and improve the validation error messages given. (A. Unique TensorFlower, 2017-11-18)
  PiperOrigin-RevId: 176214090
* Change HloSharding to allow getting a ShapeTree for non-tuple types. (A. Unique TensorFlower, 2017-11-10)
  Add reverse iteration to ShapeTree.
  PiperOrigin-RevId: 175341255
* When sharding a tuple, we typically want to describe the data sharding of each individual subtensor. (A. Unique TensorFlower, 2017-11-10)
  Tuples are essentially just containers: the tensors they contain should be able to be sharded differently. Tuples are hierarchically structured, but shardings were designed not to contain the sharded type (the sharded type is inferred from the output type of the instruction the sharding is applied to). Therefore, shardings for tuples contain shardings for each subtensor as a non-structured list. This list is ordered as a preorder walk of the tuple shape, and only the leaf nodes of the tuple shape are stored. The structure is reapplied when the sharded instruction's shape is known.
  PiperOrigin-RevId: 175132692
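  A self-contained sketch of the flattening order (illustrative types, not XLA's):

      #include <vector>

      struct ShapeNode {
        bool is_tuple = false;
        std::vector<ShapeNode> elements;
      };

      // Preorder walk of the tuple shape, collecting only leaf (non-tuple)
      // nodes: the i-th stored sharding applies to the i-th leaf found here.
      void CollectLeaves(const ShapeNode& shape,
                         std::vector<const ShapeNode*>* leaves) {
        if (!shape.is_tuple) {
          leaves->push_back(&shape);
          return;
        }
        for (const ShapeNode& e : shape.elements) CollectLeaves(e, leaves);
      }
      // Reapplying the structure is the same walk: consume the stored
      // shardings in order while traversing the now-known instruction shape.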
* Supported in this CL: (A. Unique TensorFlower, 2017-10-30)
  - Attaching sharding descriptors to HLO ops
  - Partitioning the HLO graph into per-device computations based on those sharding descriptors
  - All operator support for device placement and ops replicated on all devices
  - Elementwise op support for tiled shardings
  - 2D Convolution support for tiled shardings (no stride or dilation support)
  PiperOrigin-RevId: 173946036