path: root/tensorflow/compiler/xla/service/indexed_array_analysis.cc
* [XLA] Migrate from gtl::FlatSet to absl::flat_hash_set (Benjamin Kramer, 2018-10-01)
  PiperOrigin-RevId: 215324035
* [XLA] Migrate from gtl::FlatMap to absl::flat_hash_map (Benjamin Kramer, 2018-10-01)
  PiperOrigin-RevId: 215272497
* Global de-std::unique_ptr cleanup for xla::Literal. (A. Unique TensorFlower, 2018-09-10)
  PiperOrigin-RevId: 212313258
* [XLA] Rename PrecisionConfigProto to PrecisionConfig (David Majnemer, 2018-09-05)
  The "Proto" suffix adds little clarity but makes a long type name even longer.
  PiperOrigin-RevId: 211693871
* [XLA] Make kConvolution, kDot HLO attributes mandatory (David Majnemer, 2018-09-04)
  HLO transformations would forget to propagate the feature depth attribute.
  Making these attributes mandatory, while slightly less convenient for tests,
  makes HLO transformations more robust.
  PiperOrigin-RevId: 211490160
* [XLA] Rename all (Mutable)ArraySlice to absl::Span. (Tim Shen, 2018-08-30)
  PiperOrigin-RevId: 210998142
* [XLA] xla::ArrayContains -> absl::c_linear_search (Benjamin Kramer, 2018-08-30)
  PiperOrigin-RevId: 210950150
* [XLA] Switch from tensorflow::str_util::Join to absl::StrJoin. (Justin Lebar, 2018-08-23)
  PiperOrigin-RevId: 210018843
* [XLA] Use absl string types and functions instead of the TF versions. (Justin Lebar, 2018-08-23)
  Unfortunately this has to be one big patch, because e.g. absl::StrCat doesn't
  accept a TF StringPiece, but as soon as we switch to absl::string_view, we
  have to switch away from all of the TF functions.
  PiperOrigin-RevId: 209957896
* [XLA] gtl::optional -> absl::optional (Yunxing Dai, 2018-08-21)
  PiperOrigin-RevId: 209686671
* Merged commit includes the following changes: (Yifei Feng, 2018-08-21)
  - 209663919 by yifeif <yifeif@google.com>: Internal change.
  - 209663914 by amitpatankar <amitpatankar@google.com>: Fix the topk_op_test
    for numpy>1.15.
  - 209660476 by jdduke <jdduke@google.com>: Fix model lifetime for the
    TensorFlow Lite C# bindings. Ensure the model's existence for the duration
    of the interpreter, as per API requirements.
  - 209655960 by scottzhu <scottzhu@google.com>: Unify the RNN Cell interface
    between TF and Keras.
  - 209655731 by A. Unique TensorFlower <gardener@tensorflow.org>: Added tests
    for PredictionOps and PartitionExamplesOps.
  - 209655291 by nolivia <nolivia@google.com>: Add a rate class so that we can
    save global_step/sec using tf.contrib.summary. The function takes the rate
    in relation to any tensors, provided that the numerator and denominator are
    broadcastable and have dtypes that can be cast to float64.
  - 209654655 by kramerb <kramerb@google.com>: [XLA] Switch from
    tensorflow::gtl::InlinedVector to absl::InlinedVector. This one comes with
    extra goodies like a move constructor.
  - 209653851 by A. Unique TensorFlower <gardener@tensorflow.org>: Internal
    build specification change.
  PiperOrigin-RevId: 209663919
* [XLA] Switch to absl versions of the c_foo functions. (Justin Lebar, 2018-08-20)
  PiperOrigin-RevId: 209502513
* Automated rollback of commit 4a41f50648929197954d892559587cb76458d306 (A. Unique TensorFlower, 2018-08-17)
  PiperOrigin-RevId: 209248552
* [XLA] Switch to absl versions of the c_foo functions. (Justin Lebar, 2018-08-17)
  PiperOrigin-RevId: 209247783
* Improve gather ergonomics by renaming fields. (Sanjoy Das, 2018-08-16)
  This CL renames the various inputs to the Gather HLO to be more mnemonic by
  making it more obviously a batch dynamic-slice. The replacements I made are:
    s/elided_window_dims/collapsed_slice_dims/g
    s/window_bounds/slice_sizes/g
    s/gather_dims_to_operand_dims/start_index_map/g
    s/gather_indices/start_indices/g
    s/output_window_dims/offset_dims/g
  PiperOrigin-RevId: 209051067
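  With the new field names, a gather instruction in HLO text reads roughly as
  follows; the shapes, values, and operand names here are invented for
  illustration and are not from the commit itself:

  ```
  %gather = f32[3,4] gather(f32[10,4] %operand, s32[3] %start_indices),
      offset_dims={1}, collapsed_slice_dims={0}, start_index_map={0},
      index_vector_dim=1, slice_sizes={1,4}
  ```

  Read as a batch dynamic-slice: each of the 3 start indices selects a
  slice of size {1,4} from the operand, the size-1 dimension is collapsed,
  and the remaining dimension appears as offset dimension 1 of the output.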
* Merge pull request #20497 from rongjiecomputer:accumulate (TensorFlower Gardener, 2018-08-08)
  PiperOrigin-RevId: 207976861
* Change c_find and c_adjacent_find to take a ref and not a const ref (Sanjoy Das, 2018-07-10)
  This prevents this footgun:
    auto it = c_find(CreateTemporaryVector());
    // `it` is now dangling
  PiperOrigin-RevId: 204020720
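  The mechanism behind the fix can be sketched as follows. This `c_find` is a
  hypothetical stand-in for the container helper described in the commit, not
  the actual XLA implementation: because the container parameter is a deduced
  lvalue reference rather than a const reference, a temporary cannot bind to
  it, so the dangling-iterator call above becomes a compile error.

  ```cpp
  #include <algorithm>
  #include <cassert>
  #include <vector>

  // Illustrative sketch: `C&` deduces to `const Container&` for const lvalues,
  // so const containers still work, but a temporary (an rvalue) cannot bind.
  template <typename C, typename T>
  auto c_find(C& c, const T& value) -> decltype(c.begin()) {
    return std::find(c.begin(), c.end(), value);
  }

  int main() {
    std::vector<int> v = {10, 20, 30};
    auto it = c_find(v, 20);  // OK: lvalue container outlives the iterator.
    assert(it != v.end() && *it == 20);
    // auto bad = c_find(std::vector<int>{1, 2}, 1);  // would not compile
    return 0;
  }
  ```

  With the old `const C&` signature, the commented-out call would compile and
  silently return an iterator into a destroyed temporary.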
* Teach the indexed array analysis about dot operations (Sanjoy Das, 2018-07-09)
  PiperOrigin-RevId: 203855406
* Fix int64 to int truncation in std::accumulate (Loo Rong Jie, 2018-07-03)
* Teach gather-reshape folding to work with degenerate dims (Sanjoy Das, 2018-06-16)
  I was hoping not to do this, but the motivating benchmark for all this work
  has reshapes on degenerate dimensions. This also forced me to introduce a new
  node into the analysis, which isn't great (we don't want to replicate HLO
  inside IndexedArrayAnalysis!), but this is the cleanest solution I can think
  of.

  In brief, I support gather-reshape folding with degenerate dimensions by
  disallowing them in the core, tricky part of the algorithm, and instead
  reshaping the degenerate dimensions "in and out" in a helper that calls the
  core part of the folding logic.

  Also worth calling out: before this change we weren't being conservative --
  we were just buggy. For instance, the CHECK_NE(candidate_operand_dim, 0) in
  ComputeReshapePassthroughDimPairs can fail with degenerate dims.

  I also made some other supporting changes:
    - I was not checking window bounds in ComputeArrayForGather. I've fixed
      this and beefed up testing in this area (the hammer for all my nails).
    - Added a bunch of VLOG(3) info that was useful when debugging.
    - Added a simple helper to the test that makes the strings I'm matching
      against "whitespace insensitive" so that I can indent them.

  I'm happy to pull these out into separate CLs if that makes reviewing easier,
  but for now I took the path of least resistance. :)
  PiperOrigin-RevId: 200821883
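  The "in and out" reshaping idea can be illustrated with a small sketch. The
  function names and shape representation here are invented for illustration;
  the real logic lives in IndexedArrayAnalysis. The point is that the helper
  strips size-1 dimensions before invoking the core folding logic, then
  reinserts them at their original positions afterwards, so the core algorithm
  never has to reason about degenerate dims:

  ```cpp
  #include <cassert>
  #include <cstdint>
  #include <vector>

  // Drop degenerate (size-1) dimensions from a shape, e.g. [1,3,1,4] -> [3,4].
  std::vector<int64_t> StripDegenerateDims(const std::vector<int64_t>& dims) {
    std::vector<int64_t> result;
    for (int64_t d : dims) {
      if (d != 1) result.push_back(d);
    }
    return result;
  }

  // Inverse: re-insert size-1 dimensions at the positions they occupied in the
  // original shape, e.g. ([3,4], [1,3,1,4]) -> [1,3,1,4].
  std::vector<int64_t> RestoreDegenerateDims(
      const std::vector<int64_t>& core, const std::vector<int64_t>& original) {
    std::vector<int64_t> result;
    size_t i = 0;
    for (int64_t d : original) {
      result.push_back(d == 1 ? 1 : core[i++]);
    }
    return result;
  }

  int main() {
    std::vector<int64_t> core = StripDegenerateDims({1, 3, 1, 4});
    assert((core == std::vector<int64_t>{3, 4}));
    // ... run the core folding logic on the degenerate-free shape ...
    std::vector<int64_t> restored = RestoreDegenerateDims(core, {1, 3, 1, 4});
    assert((restored == std::vector<int64_t>{1, 3, 1, 4}));
    return 0;
  }
  ```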
* Fix an incorrect precondition check in IndexedArrayAnalysis (Sanjoy Das, 2018-05-28)
  PiperOrigin-RevId: 198354001
* Pass HloOpcode instead of HloInstruction; NFC (Sanjoy Das, 2018-05-28)
  Minor code cleanup change.
  PiperOrigin-RevId: 198351045
* Make IndexedArrayAnalysis behave well around StatusOr (Sanjoy Das, 2018-05-28)
  PiperOrigin-RevId: 198348355
* Add support for unary and binary ops to indexed tensor analysis (Sanjoy Das, 2018-05-25)
  I've added a TODO to clean up the use of ValueOrDie, which I will address in
  an immediately following CL.
  PiperOrigin-RevId: 198134579
* Implement support for reshape in IndexedArrayAnalysis (Sanjoy Das, 2018-05-23)
  PiperOrigin-RevId: 197843589
* Introduce an "indexed array" analysis (Sanjoy Das, 2018-05-17)
  Context: we want to optimize computations hanging off of an embedding lookup
  from a constant array. For instance, consider:

    embedding = gather from a constant array using non-constant indices
    embedding_reshaped = reshape embedding
    embedding_reshaped_transposed = transpose embedding_reshaped
    result = dot(embedding_reshaped_transposed, constant)

  In the graph above, depending on how the details work out, we may be able to
  fold `result` into a gather from a precomputed constant array. However, it is
  inconvenient to get there by incremental rewrites -- it is probably not
  profitable to rewrite embedding_reshaped or embedding_reshaped_transposed [0]
  as embedding lookups, and we only get to "see" that the dot can be rewritten
  after rewriting the reshape and the transpose.

  This analysis aims to make the optimization above more straightforward by
  allowing a transformation pass (built on this analysis) to query whether
  `result` _can_ be represented as an embedding lookup. If it can, the pass can
  then apply profitability heuristics to decide whether the rewrite is worth
  doing. This suggested workflow gives us separation of concerns (the legality
  of the rewrite is computed separately from its profitability) and, more
  importantly, lets us "look ahead" and analyze the dot without rewriting its
  operands.

  The implementation is far from complete (most of the interesting bits are
  TODO), but I wanted to get an early design review before I spent too much
  time on this.

  [0] Under the assumption that transposing or reshaping is not expensive
  enough to pay the price of keeping around a new, potentially large constant
  (in particular, some of these may have been equivalent to free bitcasts).
  PiperOrigin-RevId: 197064648
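  The algebraic fact the analysis exploits can be checked numerically with a
  toy sketch. This uses plain C++ matrices in place of HLO; the shapes, names,
  and helper functions are invented here, not part of XLA. Gathering rows of a
  constant table and then applying a dot gives the same result as applying the
  dot to the whole table once and gathering from the precomputed product:

  ```cpp
  #include <cassert>
  #include <vector>

  using Matrix = std::vector<std::vector<double>>;

  // Plain O(n^3) matrix product; enough for the illustration.
  Matrix MatMul(const Matrix& a, const Matrix& b) {
    Matrix c(a.size(), std::vector<double>(b[0].size(), 0.0));
    for (size_t i = 0; i < a.size(); ++i)
      for (size_t k = 0; k < b.size(); ++k)
        for (size_t j = 0; j < b[0].size(); ++j)
          c[i][j] += a[i][k] * b[k][j];
    return c;
  }

  // Row gather: the embedding-lookup analogue of the gather HLO.
  Matrix GatherRows(const Matrix& m, const std::vector<int>& indices) {
    Matrix out;
    for (int i : indices) out.push_back(m[i]);
    return out;
  }

  int main() {
    const Matrix table = {{1, 2}, {3, 4}, {5, 6}};    // constant embedding table
    const Matrix weights = {{1, 0, 2}, {0, 1, 3}};    // constant dot operand
    const std::vector<int> indices = {2, 0, 2};       // non-constant in real HLO

    // dot(gather(table, indices), weights) ...
    Matrix folded_later = MatMul(GatherRows(table, indices), weights);
    // ... equals gather(dot(table, weights), indices): the dot has been folded
    // into a gather from a precomputed constant array.
    Matrix folded_now = GatherRows(MatMul(table, weights), indices);
    assert(folded_later == folded_now);
    return 0;
  }
  ```

  When both `table` and `weights` are constants, `MatMul(table, weights)` can
  be computed once at compile time, which is exactly the rewrite the analysis
  is meant to enable a pass to discover.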