PiperOrigin-RevId: 215324035
PiperOrigin-RevId: 215272497
PiperOrigin-RevId: 212313258
The "Proto" suffix adds little clarity but makes a long type name even longer.
PiperOrigin-RevId: 211693871
HLO transformations would forget to propagate the feature depth attribute.
Making these attributes mandatory, while slightly less convenient for tests,
makes HLO transformations more robust.
PiperOrigin-RevId: 211490160
PiperOrigin-RevId: 210998142
PiperOrigin-RevId: 210950150
PiperOrigin-RevId: 210018843
Unfortunately this has to be one big patch, because e.g. absl::StrCat
doesn't accept a TF StringPiece, but as soon as we switch to
absl::string_view, we have to switch away from all of the TF functions.
PiperOrigin-RevId: 209957896
PiperOrigin-RevId: 209686671
209663919 by yifeif<yifeif@google.com>:
Internal change.
--
209663914 by amitpatankar<amitpatankar@google.com>:
Fix the topk_op_test for numpy>1.15.
--
209660476 by jdduke<jdduke@google.com>:
Fix model lifetime for TensorFlow Lite C# bindings
Ensure the model's existence for the duration of the interpreter,
as per API requirements.
--
209655960 by scottzhu<scottzhu@google.com>:
Unify RNN Cell interface between TF and Keras.
--
209655731 by A. Unique TensorFlower<gardener@tensorflow.org>:
Added tests for PredictionOps and PartitionExamplesOps
--
209655291 by nolivia<nolivia@google.com>:
Add a Rate class so that global_step/sec can be saved using
tf.contrib.summary. The class computes the rate between any pair of
tensors, provided the numerator and denominator are broadcastable and
have dtypes that can be cast to float64.
--
209654655 by kramerb<kramerb@google.com>:
[XLA] Switch from tensorflow::gtl::InlinedVector to absl::InlinedVector
This one comes with extra goodies like a move constructor.
--
209653851 by A. Unique TensorFlower<gardener@tensorflow.org>:
Internal build specification change
--
PiperOrigin-RevId: 209663919
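The rate computation described in nolivia's change above can be sketched in
plain Python. This `Rate` class is a hypothetical stand-in for the
tf.contrib.summary helper, not its actual API; it only shows the
delta-numerator-over-delta-denominator idea, computed in float64:

```python
class Rate:
    """Rate of change of a numerator with respect to a denominator.

    A minimal sketch: rate = (num - prev_num) / (den - prev_den),
    with both values cast to float64 before dividing. The first
    observation has no predecessor, so it reports 0.0.
    """

    def __init__(self):
        self._prev_num = None
        self._prev_den = None

    def __call__(self, num, den):
        num = float(num)  # cast to float64
        den = float(den)
        if self._prev_num is None:
            result = 0.0  # no previous observation yet
        else:
            result = (num - self._prev_num) / (den - self._prev_den)
        self._prev_num, self._prev_den = num, den
        return result


# E.g. global_step/sec: 100 steps observed over 2 seconds -> 50 steps/sec.
steps_per_sec = Rate()
steps_per_sec(0, 0.0)          # first observation, returns 0.0
print(steps_per_sec(100, 2.0))  # 50.0
```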
PiperOrigin-RevId: 209502513
PiperOrigin-RevId: 209248552
PiperOrigin-RevId: 209247783
This CL renames the various inputs to the Gather HLO to be more mnemonic,
making it read more obviously as a batch dynamic-slice. The replacements I
made are:

  s/elided_window_dims/collapsed_slice_dims/g
  s/window_bounds/slice_sizes/g
  s/gather_dims_to_operand_dims/start_index_map/g
  s/gather_indices/start_indices/g
  s/output_window_dims/offset_dims/g
PiperOrigin-RevId: 209051067
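Under the new names, a gather really is a batch of dynamic slices. A minimal
NumPy sketch of the simplest 1-D-indices case (variable names mirror the
renamed attributes; the actual HLO semantics are more general than this):

```python
import numpy as np

operand = np.arange(12).reshape(4, 3)
start_indices = np.array([2, 0])  # one start index per output batch element
slice_sizes = (1, 3)              # each dynamic-slice grabs one full row

# Each batch element is a dynamic-slice of `operand` starting at
# (start_index, 0). Collapsing the size-1 slice dimension
# (collapsed_slice_dims=[0]) leaves the remaining slice dimensions as the
# offset_dims of the output.
slices = [operand[i:i + slice_sizes[0], :slice_sizes[1]] for i in start_indices]
result = np.concatenate(slices, axis=0)

# Identical to a plain row gather.
assert (result == operand[start_indices]).all()
```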
PiperOrigin-RevId: 207976861
This prevents this footgun:

  auto it = c_find(CreateTemporaryVector());
  // `it` is now dangling

PiperOrigin-RevId: 204020720
PiperOrigin-RevId: 203855406
I was hoping not to do this, but the motivating benchmark for all this work has
reshapes on degenerate dimensions. This also forced me to introduce a new node
to the analysis, which isn't great (we don't want to replicate HLO inside
IndexedArrayAnalysis!), but this is the cleanest solution I can think of.
In brief I support gather-reshape folding with degenerate dimensions by
disallowing it in the core tricky part of the algorithm and instead reshaping
the degenerate dimensions "in and out" in a helper that calls the core part of
the folding logic.
Also worth calling out that before this change we weren't being conservative --
we were just buggy. For instance, the CHECK_NE(candidate_operand_dim, 0) in
ComputeReshapePassthroughDimPairs can fail with degenerate dims.
I also made some other supporting changes:
- I was not checking window bounds in ComputeArrayForGather. I've fixed this
and beefed up testing in this area (the hammer for all my nails).
- Added a bunch of VLOG(3) info that was useful when debugging.
- Added a simple helper to the test that makes the strings I'm matching against
"whitespace insensitive" so that I can indent these.
I'm happy to pull these out into separate CLs if that makes reviewing easier but
for now I took the path of least resistance. :)
PiperOrigin-RevId: 200821883
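The "reshape the degenerate dimensions in and out" trick can be illustrated
with a NumPy analogy. This is only a sketch of the idea, not the
IndexedArrayAnalysis code: size-1 dimensions are stripped before the core
gather/reshape logic runs, then restored afterwards:

```python
import numpy as np

w = np.arange(12).reshape(4, 3, 1)  # trailing degenerate (size-1) dimension
indices = np.array([3, 1, 1])       # non-constant gather indices

# The core folding logic only handles non-degenerate shapes, so reshape the
# degenerate dimension "out", run the core gather, then reshape it back "in".
w_squeezed = w.reshape(4, 3)        # degenerate dim removed
gathered = w_squeezed[indices]      # core logic on clean shapes
result = gathered.reshape(3, 3, 1)  # degenerate dim restored

# Same as gathering from the original array directly.
assert (result == w[indices]).all()
```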
PiperOrigin-RevId: 198354001
Minor code cleanup change.
PiperOrigin-RevId: 198351045
PiperOrigin-RevId: 198348355
I've added a TODO to clean up the use of ValueOrDie which I will address in an
immediately following CL.
PiperOrigin-RevId: 198134579
PiperOrigin-RevId: 197843589
Context: we want to optimize computations hanging off of an embedding lookup
from a constant array. For instance, consider:

  embedding = gather from a constant array using non-constant indices
  embedding_reshaped = reshape embedding
  embedding_reshaped_transposed = transpose embedding_reshaped
  result = dot(embedding_reshaped_transposed, constant)

In the graph above, depending on how the details work out, we may be able to
fold `result` into a gather from a precomputed constant array. However, it is
inconvenient to get there by incremental rewrites -- it is probably not
profitable to rewrite embedding_reshaped or embedding_reshaped_transposed [0] as
embedding lookups but we get to "see" that the dot can be rewritten only after
rewriting the reshape and the transpose.
This analysis aims to make the optimization above more straightforward by
allowing a transformation pass (that uses this analysis) to query the analysis
to see if `result` _can_ be represented as an embedding lookup. If yes, it
can then apply some profitability heuristics to decide if it is worth it to
rewrite it as one. This suggested workflow gives us separation of concerns (the
legality of the rewrite is computed separately from its profitability) and, more
importantly, lets us "look ahead" and analyze the dot without rewriting its
operands.
The implementation is far from complete (most of the interesting bits are TODO)
but I wanted to get an early design review before I spent too much time on this.
[0] Under the assumption that transposing or reshaping are not expensive enough
to pay the price of keeping around a new potentially large constant (in
particular, some of these may have been equivalent to free bitcasts).
PiperOrigin-RevId: 197064648
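The core legality fact the analysis exploits can be checked numerically: a dot
whose left operand is a gather from a constant equals a gather from the
precomputed product. A NumPy sketch, dropping the intermediate
reshape/transpose from the example above to show just the commutation:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.standard_normal((100, 8))  # constant embedding table
c = rng.standard_normal((8, 4))    # constant right-hand side of the dot
idx = np.array([5, 17, 99])        # non-constant lookup indices

# Incremental view: gather, then dot.
embedding = w[idx]
result = embedding @ c

# Folded view: dot with the constant first, then gather from the precomputed
# product. Legal because the gather only touches the non-contracted dimension.
folded = (w @ c)[idx]

assert np.allclose(result, folded)
```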