path: root/tensorflow/compiler/xla/layout_util.cc
Commit message · Author · Date
* [XLA] Added xla::CreateModuleFromProto(...), combining loading a module from proto and verifying it with HloVerifier. (A. Unique TensorFlower, 2018-10-09)
  PiperOrigin-RevId: 216447947
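  A minimal usage sketch of the new helper; the header, namespace, and exact signature here are assumptions based on the commit text, not verified against the XLA sources:

      #include <memory>

      #include "tensorflow/compiler/xla/service/hlo_module.h"

      // Assumed shape of the helper: parse the proto into an HloModule and
      // run HloVerifier on it before returning, so callers get one entry
      // point instead of a separate load step and verify step.
      xla::StatusOr<std::unique_ptr<xla::HloModule>> LoadVerified(
          const xla::HloModuleProto& proto,
          const xla::HloModuleConfig& config) {
        return xla::CreateModuleFromProto(proto, config);
      }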
* Add custom call with layout constraints. (Mark Heffernan, 2018-10-08)
  Add a variant of CustomCall which specifies arbitrary layout constraints on the operands and result. The existing non-layout-constrained CustomCall is changed to have no layout preference and can now be assigned arbitrary layouts by layout assignment.
  PiperOrigin-RevId: 216249615
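  A hedged sketch of calling the layout-constrained variant through the client builder; the CustomCallWithLayout entry point and its parameter order are inferred from the commit description:

      #include "tensorflow/compiler/xla/client/xla_builder.h"
      #include "tensorflow/compiler/xla/shape_util.h"

      // Pin the operand and result to explicit layouts; layout assignment
      // must then honor them rather than choosing its own.
      xla::XlaOp ConstrainedCall(xla::XlaBuilder* b, xla::XlaOp arg) {
        xla::Shape s = xla::ShapeUtil::MakeShapeWithLayout(
            xla::F32, /*dimensions=*/{128, 256}, /*minor_to_major=*/{0, 1});
        return xla::CustomCallWithLayout(b, "my_call_target", {arg},
                                         /*shape_with_layout=*/s,
                                         /*operand_shapes_with_layout=*/{s});
      }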
* [XLA] Rename all (Mutable)ArraySlice to absl::Span. (Tim Shen, 2018-08-30)
  PiperOrigin-RevId: 210998142
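  The mapping, sketched with hypothetical functions:

      #include <cstdint>

      #include "absl/types/span.h"

      // tensorflow::gtl::ArraySlice<int64>        ->  absl::Span<const int64_t>
      // tensorflow::gtl::MutableArraySlice<int64> ->  absl::Span<int64_t>
      int64_t Sum(absl::Span<const int64_t> dims) {  // read-only view
        int64_t total = 0;
        for (int64_t d : dims) total += d;
        return total;
      }
      void ZeroOut(absl::Span<int64_t> dims) {  // mutable view
        for (int64_t& d : dims) d = 0;
      }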
* [XLA] Switch to absl::StrFormat. (Justin Lebar, 2018-08-27)
  Unlike Printf, StrFormat does not require type-length qualifiers, e.g. %z or %ll, nor does it require calling c_str() to print strings. Those call sites are fixed up here as well.
  PiperOrigin-RevId: 210435915
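  A small before/after illustration (function and variable names hypothetical):

      #include <cstdint>
      #include <string>

      #include "absl/strings/str_format.h"

      std::string Describe(int64_t n, const std::string& name) {
        // Printf-style needed a matching length modifier and a C string:
        //   Printf("%lld elements in %s", n, name.c_str());
        // StrFormat deduces argument types, so %d accepts any integer and
        // std::string is passed directly:
        return absl::StrFormat("%d elements in %s", n, name);
      }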
* [XLA] Stop including str_util.h. (Justin Lebar, 2018-08-23)
  PiperOrigin-RevId: 210049592
* [XLA] Switch from tensorflow::str_util::Join to absl::StrJoin. (Justin Lebar, 2018-08-23)
  PiperOrigin-RevId: 210018843
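  Drop-in usage, e.g. joining a minor_to_major list for printing (values illustrative):

      #include <cstdint>
      #include <string>
      #include <vector>

      #include "absl/strings/str_join.h"

      std::string MinorToMajorString() {
        std::vector<int64_t> minor_to_major = {2, 1, 0};
        return absl::StrJoin(minor_to_major, ",");  // "2,1,0"
      }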
* [XLA] Use absl string types and functions instead of the TF versions. (Justin Lebar, 2018-08-23)
  Unfortunately this has to be one big patch: absl::StrCat does not accept a TF StringPiece, so as soon as we switch to absl::string_view we have to switch away from all of the TF string functions at once.
  PiperOrigin-RevId: 209957896
* [XLA] Add guard for bytes accessed in HloCostAnalysis; a layout is needed to determine it. (Chris Leary, 2018-08-02)
  PiperOrigin-RevId: 207213865
* [XLA] Try to validate that shape sizes are sane. (Michael Kuperstein, 2018-06-26)
  This won't catch all overflows, but will do the right thing for the "normal" flow. Also fix layout validation to reject padded sparse layouts.
  PiperOrigin-RevId: 202151215
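  A minimal sketch (not XLA's actual validation code) of the overflow-aware element-count check such sanity validation implies:

      #include <cstdint>
      #include <limits>

      #include "absl/types/span.h"

      // Reject dimension lists whose element-count product would overflow
      // int64, without overflowing while checking.
      bool DimensionProductFits(absl::Span<const int64_t> dims) {
        int64_t product = 1;
        for (int64_t d : dims) {
          if (d < 0) return false;
          if (d != 0 && product > std::numeric_limits<int64_t>::max() / d) {
            return false;  // product * d would overflow
          }
          product *= d;
        }
        return true;
      }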
* Fix assumptions that a Shape must be a tuple or an array. (Mark Heffernan, 2018-06-13)
  A TOKEN primitive type was added with cl/199215963, and XLA also has an OPAQUE primitive type. However, in many places XLA assumes a shape is either a tuple or an array. This CL fixes many of those instances, though some may remain: instances were identified by searching for IsTuple or IsArray, so the set of fixes is not exhaustive.
  Also opportunistically addressed two potential points of confusion in the ShapeUtil interface (see the sketch after this list):
  (1) Renamed ShapeUtil::HasZeroElements to ShapeUtil::IsZeroElementArray. The point of confusion here is that tuples can also have zero elements, yet HasZeroElements would check-fail on tuple shapes. The method no longer check-fails if the given shape is not an array.
  (2) ShapeUtil::IsNil now returns true only for empty tuples. Previously it also returned true for zero-element array types, which was confusing because ShapeUtil::MakeNil creates an empty tuple.
  PiperOrigin-RevId: 200452672
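  The renamed semantics, sketched (the shapes are illustrative):

      #include "tensorflow/compiler/xla/shape_util.h"

      void Illustrate() {
        xla::Shape empty_tuple = xla::ShapeUtil::MakeTupleShape({});
        xla::Shape zero_rows = xla::ShapeUtil::MakeShape(xla::F32, {0, 3});

        xla::ShapeUtil::IsNil(empty_tuple);               // true
        xla::ShapeUtil::IsNil(zero_rows);                 // now false
        xla::ShapeUtil::IsZeroElementArray(zero_rows);    // true
        xla::ShapeUtil::IsZeroElementArray(empty_tuple);  // false, no check-fail
      }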
* Add TOKEN primitive type. (Mark Heffernan, 2018-06-04)
  The token type will be threaded through side-effecting ops to order them. Subsequent CLs will add new opcodes and change side-effecting operations to support this ordering. This CL also does some cleanup in shape_util and layout_util, where we had assumed that shapes are either arrays or tuples.
  PiperOrigin-RevId: 199215963
* Add a heuristic for picking the NHWC layout for (V100, fp16) convolutions. (A. Unique TensorFlower, 2018-05-24)
  Also move AlgorithmPicker after layout assignment, as cudnn_convolution_runner will now return failures on invalid input layouts. Also add a backend debug option to switch the layout heuristic; by default it keeps the old behavior (all NCHW).
  PiperOrigin-RevId: 197983747
* [XLA] s/tensorflow::Status/Status/. (Justin Lebar, 2018-05-11)
  These are type aliases of one another; we'd like to be consistent and use the shorter one.
  PiperOrigin-RevId: 196322955
* [XLA:CPU] Re-use the same llvm::GlobalVariable for identical literals. (Sanjoy Das, 2018-05-01)
  This isn't necessary today, but it will be after an optimization change I'm about to make. LLVM has a constant merging pass too, but one of the motivations here is to avoid the LLVM compile-time overhead of having many large arrays in the IR.
  PiperOrigin-RevId: 195032900
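  A minimal sketch, not XLA's actual emitter, of the re-use mechanism: key a cache by the literal's serialized bytes so structurally identical literals map to one global:

      #include <map>
      #include <string>

      #include "llvm/IR/GlobalVariable.h"

      class ConstantCache {
       public:
        // Returns the previously emitted global for these bytes, or null.
        llvm::GlobalVariable* Lookup(const std::string& bytes) const {
          auto it = cache_.find(bytes);
          return it == cache_.end() ? nullptr : it->second;
        }
        void Insert(const std::string& bytes, llvm::GlobalVariable* gv) {
          cache_[bytes] = gv;
        }

       private:
        std::map<std::string, llvm::GlobalVariable*> cache_;
      };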
* [XLA:TPU] Initial HLO parser/stringifier support for sparse formats. (A. Unique TensorFlower, 2018-01-12)
  - Add methods for manipulating sparse literals to xla::Literal.
  - Make LayoutUtil::HumanString handle sparse layouts.
  - Make ShapeUtil::ParseShape handle sparse shapes.
  - Syntax for shapes has changed:
    - The old way of expressing layouts still works, e.g. f32[1,2,3]{2,1,0}.
    - The dense format can now be made explicit: f32[1,2,3]dense{2,1,0}.
    - Sparse layouts can be expressed; the max_sparse_elements value goes in the braces, e.g. f32[1,2,3]sparse{10}.
    - The shape should not include braces for the layout if the shape is a scalar; e.g. f32[]{} is not valid shape syntax.
    - The shape should not include braces for the layout if the shape is a dense rank-1 array; e.g. f32[10]{0} is not valid shape syntax.
    - Sparse literals use a dictionary-like syntax, e.g. f32[2,3,4]sparse{10} {[0,1,2]: 10, [1,2,3]: 11}.
    - For rank-1 sparse arrays, the square brackets around indices may be omitted, e.g. f32[100]sparse{10} {5: 10, 20: 30}.
  PiperOrigin-RevId: 181813837
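  A hedged sketch of feeding the new syntax to the parser entry point named above (return-type details assumed):

      #include "tensorflow/compiler/xla/shape_util.h"

      void ParseExamples() {
        // Classic and explicit-dense spellings describe the same layout.
        auto dense = xla::ShapeUtil::ParseShape("f32[1,2,3]dense{2,1,0}");
        // Sparse spelling: 10 is the max_sparse_elements bound.
        auto sparse = xla::ShapeUtil::ParseShape("f32[1,2,3]sparse{10}");
      }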
* [XLA] Initial sparse layout support. (A. Unique TensorFlower, 2018-01-08)
  Adds SparseIndexArray and support methods to Literal. SparseIndexArray manages the array of sparse indices and is exposed by sparse Literals.
  Also adds HloSupportChecker classes for CPU and GPU. These run as the first HloPass during compilation and verify that the graph is supported by the backend. Currently they only verify shapes, and that the layout is not sparse, since no backend supports sparse layouts yet.
  PiperOrigin-RevId: 181244401
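  A minimal sketch of the coordinate-list idea behind such an index array (hypothetical class, not the real SparseIndexArray API):

      #include <cstdint>
      #include <vector>

      // Stores one rank-length index tuple per nonzero element, flattened
      // into a single vector, tuple by tuple.
      class SparseIndexList {
       public:
        explicit SparseIndexList(int64_t rank) : rank_(rank) {}
        void Append(const std::vector<int64_t>& index) {
          indices_.insert(indices_.end(), index.begin(), index.end());
        }
        int64_t NumIndices() const {
          return static_cast<int64_t>(indices_.size()) / rank_;
        }

       private:
        int64_t rank_;
        std::vector<int64_t> indices_;  // size == rank_ * NumIndices()
      };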
* [XLA] Fix return type of LayoutUtil::PaddedDimensions. (A. Unique TensorFlower, 2017-12-19)
  We should not use const here.
  PiperOrigin-RevId: 179614367
* [XLA] Add format field to layout. (A. Unique TensorFlower, 2017-12-18)
  Format will describe the method used to store array data in memory. Currently only DENSE is supported, which represents the way XLA currently stores arrays. Scalars have a DENSE format; tuples and opaque shapes use INVALID_FORMAT.
  Adds checks to code that uses minor_to_major to ensure the layout is dense.
  PiperOrigin-RevId: 179475450
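  The guard described in the last sentence, sketched (IsDense is assumed to be the LayoutUtil helper backing these checks):

      #include <cstdint>

      #include "tensorflow/compiler/xla/layout_util.h"

      int64_t MinorMostDimension(const xla::Layout& layout) {
        // minor_to_major is only meaningful for DENSE layouts, so guard first.
        CHECK(xla::LayoutUtil::IsDense(layout));
        return layout.minor_to_major(0);
      }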
* The new array class provides a way to simplify the implementation of these classes by eliminating a large amount of duplicated code. (A. Unique TensorFlower, 2017-10-20)
  Removing the old API is non-trivial because of the existing users outside of tensorflow.
  PiperOrigin-RevId: 172920837
* Lower vector-matrix dot to LLVM IR if the RHS of the dot can be made column major. (Sanjoy Das, 2017-09-14)
  The naive dot lowering to LLVM IR (already present in XLA today) is cache efficient if the dot has an LHS of shape [1,K]{1,0} and an RHS of shape [K,N]{0,1}. This change teaches the layout assignment pass to exploit this property by converting a constant RHS matrix to a column-major layout when possible.
  A couple of related things I had to touch in this change:
  - In LayoutAssignmentTest.TupleLayout we used to generate a kCopy to satisfy the conflicting constraints between the result and the constant shapes, but with this change we change the layout of the constants themselves. So the EXPECT_FALSE is now an EXPECT_TRUE.
  - The extra instruction layout constraints added at the end of CpuLayoutAssignment::AddBackendConstraints seemed redundant: the layout assignment pass already tries to give all unconstrained buffers the default row-major layout. Moreover, they were blocking this optimization in some cases by introducing conflicting constraints.
  - The changes to literal_util.h deal with the Literal::Relayout calls we now get on literals of various types.
  PiperOrigin-RevId: 168761204
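  Why the column-major RHS makes the naive kernel cache efficient, as a plain C++ sketch of the access pattern (not the actual emitted IR):

      #include <cstdint>

      // out[n] = sum over k of lhs[k] * rhs[k,n]. With the RHS stored column
      // major ({0,1}: dimension 0 is minor-most), element (k,n) lives at
      // n*K + k, so the inner k loop is stride-1 over both inputs.
      void NaiveVecMatDot(const float* lhs, const float* rhs_col_major,
                          float* out, int64_t K, int64_t N) {
        for (int64_t n = 0; n < N; ++n) {
          float acc = 0.0f;
          for (int64_t k = 0; k < K; ++k) {
            acc += lhs[k] * rhs_col_major[n * K + k];
          }
          out[n] = acc;
        }
      }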
* [TF:XLA] Fixes to the "evaluator" plugin. (Peter Hawkins, 2017-08-05)
  - Mark the evaluator plugin as alwayslink so it doesn't get stripped out by the linker.
  - Add a generic LayoutAssignment pass to the pass pipeline; otherwise the entry computation has no layout and Service::Execute CHECK-fails in the AllocationTracker.
  - Register the default computation placer for the evaluator backend.
  - Add a replay_computation_hlo_evaluator binary that can replay computation snapshots via the HLO evaluator.
  PiperOrigin-RevId: 164364780
* [XLA] Remove the xla_default_layout flag. (Eli Bendersky, 2017-07-21)
  The default layout is just major-to-minor, as we don't have sufficient testing for alternative default layouts.
  PiperOrigin-RevId: 162766231
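  What major-to-minor means concretely, in a hedged sketch (the GetDefaultLayoutForShape helper is assumed):

      #include "tensorflow/compiler/xla/layout_util.h"
      #include "tensorflow/compiler/xla/shape_util.h"

      void DefaultLayoutExample() {
        xla::Shape shape = xla::ShapeUtil::MakeShape(xla::F32, {2, 3, 4});
        xla::Layout layout = xla::LayoutUtil::GetDefaultLayoutForShape(shape);
        // layout.minor_to_major() is {2, 1, 0}: the last dimension is
        // minor-most, i.e. row-major / C order.
      }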
* Improve support for pad instructions with negative padding. (Mark Heffernan, 2017-01-23)
  Define the semantics of negative padding in the Pad instruction to be identical to padding inside of the convolution operation ConvWithGeneralPadding. Also make negative padding work in the backends. Specific changes (see the sketch after this list):
  (1) Add a transformation to the algebraic simplifier which replaces negative padding with slices.
  (2) Fix ReferenceUtil to properly handle negative padding and interior padding.
  (3) Add a negative padding explanation to the operation semantics g3doc.
  (4) Extend LayoutsInShapesEqual and CopyLayoutBetweenShapes to work with shapes which are not exactly compatible but have the same rank and tuple structure.
  Change: 145355127
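  A plain C++ illustration of the 1-D semantics behind change (1): a negative edge-padding amount trims elements from that edge, which is exactly a slice (hypothetical helper, not HLO):

      #include <cstdint>
      #include <vector>

      // Edge padding where a negative low/high amount drops elements from the
      // corresponding edge, matching ConvWithGeneralPadding's convention.
      // Pad1D(x, -2, 0) equals the slice x[2:], which is the rewrite the
      // algebraic simplifier performs.
      std::vector<float> Pad1D(const std::vector<float>& x, int64_t low,
                               int64_t high, float pad_value = 0.0f) {
        std::vector<float> out;
        for (int64_t i = 0; i < low; ++i) out.push_back(pad_value);
        const int64_t begin = low < 0 ? -low : 0;
        const int64_t end =
            static_cast<int64_t>(x.size()) + (high < 0 ? high : 0);
        for (int64_t i = begin; i < end; ++i) out.push_back(x[i]);
        for (int64_t i = 0; i < high; ++i) out.push_back(pad_value);
        return out;
      }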
* [XLA] Recognize any reduction where the dimensions to keep are consecutive in memory as an effective reduction to a vector. (A. Unique TensorFlower, 2017-01-12)
  Change: 144353864
* Initial open-source release of XLA: Accelerated Linear Algebra. (Peter Hawkins, 2017-01-09)
  XLA is a compiler-based linear algebra execution engine that targets CPUs, GPUs, and custom accelerators. XLA is still experimental; we are releasing it early to get the community involved.
  Change: 143990941