| Commit message (Collapse) | Author | Age |
|
|
|
|
|
| |
allows us to identify whether we need to set the drop_remainder option when creating Dataset objects.
PiperOrigin-RevId: 215633097
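For context, drop_remainder controls whether a trailing partial batch is emitted. A minimal pure-Python sketch of the semantics (the function name and signature here are illustrative, not the tf.data API):

```python
def batch(items, batch_size, drop_remainder=False):
    """Group items into fixed-size batches; optionally drop a trailing partial batch."""
    batches = [items[i:i + batch_size] for i in range(0, len(items), batch_size)]
    if drop_remainder and batches and len(batches[-1]) < batch_size:
        batches.pop()  # discard the last, incomplete batch
    return batches

print(batch(list(range(7)), 3))                       # [[0, 1, 2], [3, 4, 5], [6]]
print(batch(list(range(7)), 3, drop_remainder=True))  # [[0, 1, 2], [3, 4, 5]]
```

With drop_remainder set, every emitted batch has a statically known size, which is why downstream code may need to know whether the option was used.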
|
|
|
|
| |
PiperOrigin-RevId: 215631612
| |
Rename the test to make it obvious that it is for testing the codegen
correctness in handling layout changing elementwise operations.
Keep the test only for the CPU backend.
PiperOrigin-RevId: 215630611
|
|
|
|
| |
PiperOrigin-RevId: 215628561
|
|
|
|
| |
PiperOrigin-RevId: 215624875
|
|
|
|
| |
PiperOrigin-RevId: 215623215
| |
instead of a Conv2D layer.
PiperOrigin-RevId: 215619966
| |
Before:
entry {
name: "MicroBenchmarks.benchmark_defun_matmul_2_by_2_CPU"
iters: 30000
wall_time: 48.4476327896
extras {
key: "examples_per_sec"
value {
double_value: 20640.8433688
}
}
}
After:
entry {
name: "MicroBenchmarks.benchmark_defun_matmul_2_by_2_CPU"
iters: 30000
wall_time: 45.2344338099
extras {
key: "examples_per_sec"
value {
double_value: 22107.0524327
}
}
}
PiperOrigin-RevId: 215619902
|
|
|
|
| |
PiperOrigin-RevId: 215618809
|
|
|
|
| |
PiperOrigin-RevId: 215617800
| |
consume multiple mini-batches while some may not consume even one.
PiperOrigin-RevId: 215617588
| |
In the process, properly place nodes on devices in the collective graph key
test.
PiperOrigin-RevId: 215616146
| |
output tensor.
This is useful if the output of both directions will be passed to the next layer as a single output, as it avoids adding a concatenation op, which can be expensive on mobile devices where memory movement is relatively expensive.
PiperOrigin-RevId: 215616140
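The idea is that both directions write into a single output buffer instead of producing two tensors that a separate concat op must join. A pure-Python sketch of the merged layout (illustrative only, not the TFLite kernel API):

```python
def merged_output(fw, bw):
    """Merge forward and backward per-timestep outputs into one tensor.

    fw, bw: lists of per-timestep feature lists of equal length.
    Writing both directions feature-wise into one output avoids a
    separate concatenation op after the layer.
    """
    return [f + b for f, b in zip(fw, bw)]  # per-timestep feature concat
```

The next layer then consumes one tensor of width fw_features + bw_features, with no extra memory movement for the concat.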
| |
If the layout of a single tensor in a tuple is different from its use, then
CreateCopyWithNewLayout will do a deep copy of the entire tuple. Not only does
this operation create unnecessary copies of elements whose layout is already the
same, it will throw an error if the tuple contains elements like token[] that
cannot be copied. As a result, layout assignment on TPU occasionally causes
mysterious compilation failures for code that runs correctly on CPU and GPU.
PiperOrigin-RevId: 215615731
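The fix amounts to copying only the tuple elements whose layout actually differs, passing uncopyable elements (such as tokens) through untouched. A pure-Python sketch of that selection logic (the dict representation and function name are illustrative, not the XLA API):

```python
def copy_with_new_layout(elements, target_layouts):
    """Copy only tuple elements whose layout differs from the target.

    Elements without a layout (e.g. tokens, which cannot be copied)
    and elements already in the target layout are reused as-is.
    """
    out = []
    for elem, want in zip(elements, target_layouts):
        if elem["layout"] is None or elem["layout"] == want:
            out.append(elem)  # reuse: no copy needed (or not copyable)
        else:
            out.append({**elem, "layout": want})  # copy into the new layout
    return out
```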
| |
`set_stats_aggregator`. `tag` would get prepended to all the statistics recorded as summaries and `counter_prefix` would set the prefix for the statistics recorded as counters.
Note: `counter` defaults to `\tensorflow`, and `tag` and `prefix` get associated with the dataset (not the stats_aggregator).
PiperOrigin-RevId: 215609159
|
|
|
|
| |
PiperOrigin-RevId: 215608349
|
|
|
|
| |
PiperOrigin-RevId: 215607769
|
|
|
|
| |
PiperOrigin-RevId: 215607171
|
|
|
|
| |
PiperOrigin-RevId: 215607038
|
|
|
|
| |
PiperOrigin-RevId: 215605865
|
|
|
|
| |
PiperOrigin-RevId: 215595078
|
|
|
|
| |
PiperOrigin-RevId: 215593867
|
|\
| |
| |
| |
| |
| | |
dmitrievanthony:apache-ignite-dataset-fixes-after-initial-merge
PiperOrigin-RevId: 215593528
|
| |
| |
| |
| | |
PiperOrigin-RevId: 215592456
|
| |
| |
| |
| | |
PiperOrigin-RevId: 215590676
|
| |
| |
| |
| | |
PiperOrigin-RevId: 215590440
|
| |
| |
| |
| |
| |
| | |
Currently _check_shape requires that a shape be an `int` or sequence of `int`s. This CL also allows `six.integer_types`, so now (1L,) would be a valid shape.
PiperOrigin-RevId: 215589131
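The general pattern is to validate against an abstract integral type rather than the concrete `int` class, so Python 2 `long` and NumPy integer scalars also pass. A sketch using the stdlib `numbers.Integral` ABC (the actual CL uses `six.integer_types`; the function below is illustrative, not the real `_check_shape`):

```python
import numbers

def check_shape(shape):
    """Validate a shape as a sequence of integral dimensions.

    Accepts any integral type (int, Py2 long, NumPy ints) via the
    numbers.Integral ABC instead of requiring the exact `int` type.
    """
    if not all(isinstance(d, numbers.Integral) for d in shape):
        raise TypeError("shape must be a sequence of integers: %r" % (shape,))
    return tuple(int(d) for d in shape)  # normalize to plain ints
```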
|
| |
| |
| |
| | |
PiperOrigin-RevId: 215589009
|
| |
| |
| |
| |
| |
| | |
Log messages now show the correct file/function name/line number instead of that of the helper function.
PiperOrigin-RevId: 215586852
|
| |
| |
| |
| | |
PiperOrigin-RevId: 215585187
|
| |
| |
| |
| |
| |
| | |
Add a warning to not disable optimizers without consulting with the Grappler team.
PiperOrigin-RevId: 215584369
|
| |
| |
| |
| |
| |
| | |
and the rank derived from the permutation array is 0 or 1, the shape is ambiguous and cannot be determined at graph construction time. In this case, forward the shape of the input.
PiperOrigin-RevId: 215583050
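The reason forwarding is safe: a transpose of a rank-0 or rank-1 tensor is the identity, so the output shape equals the input shape even when the permutation is not statically known. A pure-Python sketch of that shape-inference rule (illustrative, not the actual shape function):

```python
def transpose_shape(input_shape, perm=None):
    """Static shape inference for a transpose-like op (sketch).

    For rank 0 or 1, transpose is the identity, so the input shape is
    forwarded even when the permutation is unknown at graph construction.
    """
    rank = len(input_shape)
    if rank <= 1:
        return input_shape          # forward: [] or [n] is unchanged
    if perm is None:
        return [None] * rank        # unknown permutation: dims unknown
    return [input_shape[p] for p in perm]

print(transpose_shape([5]))             # [5]
print(transpose_shape([2, 3], [1, 0]))  # [3, 2]
```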
|
| | |
Fix an issue where the Java Tensor class would hold a reference
to an invalidated TfLiteTensor instance. This issue was manifest
in certain models that add temporary tensors during execution.
PiperOrigin-RevId: 215582842
|
| |
| |
| |
| | |
PiperOrigin-RevId: 215580891
|
| |
| |
| |
| | |
PiperOrigin-RevId: 215579950
|
|\ \
| | |
| | |
| | | |
PiperOrigin-RevId: 215560522
|
| | |
| | |
| | |
| | | |
PiperOrigin-RevId: 215553161
|
| | |
| | |
| | |
| | | |
PiperOrigin-RevId: 215534396
|
| | |
| | |
| | |
| | |
| | |
| | | |
Otherwise, when parsing a single instruction, the parsed module doesn't have a name, which won't pass the hlo verifier check.
PiperOrigin-RevId: 215519412
|
| | |
| | |
| | |
| | | |
PiperOrigin-RevId: 215518288
|
| | |
| | |
| | |
| | | |
PiperOrigin-RevId: 215517752
|
| | | |
layout so that it can be used by the HLO verifier.
Change the function to a static member function of the LayoutAssignment class.
Add an std::function member to LayoutAssignment to store the function object
passed down from the backend compiler class and use it to decide whether an
instruction can change layouts.
Fix affected test cases.
PiperOrigin-RevId: 215515611
|
| | |
| | |
| | |
| | | |
PiperOrigin-RevId: 215512168
|
| | | |
tf.data objects
- Previously, when validation_steps was missing, the error message incorrectly said "please provide either batch_size or steps_per_epoch". Now it reads "please provide either batch_size or validation_steps".
- Some whitespace-related fixes.
PiperOrigin-RevId: 215503991
|
| | |
| | |
| | |
| | | |
PiperOrigin-RevId: 215503549
|
| | |
| | |
| | |
| | | |
PiperOrigin-RevId: 215501709
|
| | |
| | |
| | |
| | |
| | |
| | | |
one function.
PiperOrigin-RevId: 215501702
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
turning into a constant is a discardable array. If it's not discardable, it means the user wants this
array to keep existing in a way that is observable to them, i.e. not as weights.
Typical example: a Fill op outputs an array that is passed as an RNN state array (non-discardable).
So far we have apparently been relying on the accidental ordering of graph transformations to keep such
state arrays from being turned into constants. The desired graph transformation here is instead for
RemoveUnusedOp to notice that such a Fill can be discarded because its output is an RNN state array.
I don't have a test for this, but it tightens existing behavior and should be safe as long as it does
not regress anything.
PiperOrigin-RevId: 215500760
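The guard being described can be sketched in pure Python: an op may be resolved to a constant only when its output array is discardable, so non-discardable outputs such as RNN state arrays stay observable. (The dict representation and function name below are illustrative, not the actual toco graph-transformation API.)

```python
def can_resolve_to_constant(op, discardable_arrays):
    """Allow constant-folding a Fill only if its output is discardable.

    Non-discardable outputs (e.g. RNN state arrays) must remain as real
    arrays observable to the user, never baked in as weights.
    """
    return op["type"] == "Fill" and op["output"] in discardable_arrays
```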
|
| | | |
corresponding bugs fixed. The bugs that were worked around have been fixed and verified.
PiperOrigin-RevId: 215497418
|
| | |
| | |
| | |
| | |
| | |
| | | |
optimization parameter protos and removed uses of that functionality in tests.
PiperOrigin-RevId: 215494433
|