PiperOrigin-RevId: 196670274
PiperOrigin-RevId: 196665609
PiperOrigin-RevId: 196640024
The previous fusion approach didn't work because a multiplication by a scalar value is changed into an explicit broadcast.
Another issue fixed in this CL is retrieving the constant value from the literal: this depends on the PrimitiveType, whereas before we always assumed it to be double.
Also, when checking ImplementedAsGemm() we should not call it recursively, but instead run only the check related to kDot.
Finally, add an execution test and adjust the fusion-logic test.
The fix for the issue that caused the revert is that we check earlier that consumer->operand_count() is 2.
Also, we fix the call to Get() to pass {} instead of {0}, and we handle an output-fusion node in GemmThunk by extracting the dimension numbers from the dot operation.
PiperOrigin-RevId: 196631031
needed by subcomputations.
PiperOrigin-RevId: 196618347
error messages in situations like: #19219
PiperOrigin-RevId: 196616638
Changes included are:
- Fix `batch_dot` when `axes=None`
- Add `axis=-1` as an argument to `keras.backend.softmax`
- Fix `ctc_batch_cost()` error when `batch_size = 1`
- Print the previous best in the ModelCheckpoint callback
- Fix the ReduceLROnPlateau callback
- Extend RemoteMonitor to send data as `application/json`
- Fix the default dilation-rate value in 2D separable conv
- Fix the MobileNet model with undefined shape
- Disable `require_flatten` in NASNet and add an error message for undefined shape
- Improve tests by designating the dtype of sample data
- Support legacy/full-CPU/full-GPU modes in `multi_gpu_model`
PiperOrigin-RevId: 196615376
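The `axis=-1` softmax change above can be illustrated with a small pure-Python sketch. This is a hypothetical stand-in for `keras.backend.softmax` on a 2-D input, not the Keras implementation: applying softmax along the last axis means normalizing each row independently.

```python
import math

def softmax(rows):
    """Row-wise softmax over a 2-D list, i.e. along axis=-1.

    Hypothetical pure-Python stand-in for keras.backend.softmax
    with the axis argument described above fixed at -1.
    """
    out = []
    for row in rows:
        m = max(row)  # subtract the max for numerical stability
        exps = [math.exp(x - m) for x in row]
        s = sum(exps)
        out.append([e / s for e in exps])
    return out
```

With `axis=-1`, every row of the result sums to 1 regardless of the row lengths.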
PiperOrigin-RevId: 196614376
PiperOrigin-RevId: 196606455
PiperOrigin-RevId: 196605347
Working on untangling TF/Estimator deps. We would like to get to a state
where Estimator depends on Keras, and not vice versa.
PiperOrigin-RevId: 196605024
the same dimension but different types are not considered broadcast.
PiperOrigin-RevId: 196603348
Example usage: dataset = tf.contrib.data.CsvDataset(filenames, record_defaults=record_defaults, **kwargs)
Motivation: Fusing reading and parsing is more performant and correct than the previous canonical CSV parsing flow (`dataset = tf.data.TextLineDataset(filenames).map(lambda l: tf.decode_csv(l, **kwargs))`)
Closes #19077.
PiperOrigin-RevId: 196601381
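The fused read-and-parse behavior described above can be sketched in plain Python with the standard `csv` module. The `csv_dataset` helper and its record-defaults handling below are illustrative stand-ins, not the CsvDataset kernel: each line is read and parsed in one pass, with empty fields falling back to the type and value of the corresponding default.

```python
import csv

def csv_dataset(lines, record_defaults):
    # Fuse reading and parsing: convert each field to the type of the
    # corresponding default in a single pass; an empty field yields the
    # default value itself (mirroring record_defaults above).
    for row in csv.reader(lines):
        yield [type(d)(v) if v != "" else d
               for d, v in zip(record_defaults, row)]
```

Compared with the map-over-lines flow, there is no intermediate string dataset: parsing happens as each record is produced.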
PiperOrigin-RevId: 196601310
The new argument allows you to parameterize the generator with the value of a tf.Tensor,
enabling `Dataset.from_generator()` to be initialized from a placeholder or used in a
nested expression (such as `flat_map()` or `parallel_interleave()`). For example:
```python
def generator(n):
  for _ in range(n):
    yield n

# Define a generator based on a placeholder.
placeholder = tf.placeholder(tf.int64, shape=[])
dataset = tf.data.Dataset.from_generator(generator, tf.int64, args=(placeholder,))

# Define a generator based on the value of a nested dataset element.
dataset = tf.data.Dataset.range(10).flat_map(
    lambda i: tf.data.Dataset.from_generator(generator, tf.int64, args=(i,)))
```
Fixes #19269. Partially addresses issue #13101.
PiperOrigin-RevId: 196598650
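The effect of the new `args` parameter can be sketched without TensorFlow: the generator is not called at definition time; the arguments are bound and the call is deferred until iteration, which is what lets each nested-dataset element supply a different value. The names below are hypothetical, not the tf.data implementation:

```python
def from_generator(generator, args=()):
    # Defer calling the generator: bind args now, invoke at iteration
    # time, analogous to parameterizing via a placeholder or a
    # nested-dataset element passed through `args`.
    def iterate():
        yield from generator(*args)
    return iterate

def gen(n):
    for _ in range(n):
        yield n

# Roughly what range(4).flat_map(... args=(i,)) produces, flattened.
flat = [x for i in range(4) for x in from_generator(gen, args=(i,))()]
```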
PiperOrigin-RevId: 196597196
This lets users benefit from TPU training while avoiding having to port
complex eval-metrics functions to the TPU.
PiperOrigin-RevId: 196587755
PiperOrigin-RevId: 196587227
PiperOrigin-RevId: 196586601
* unused using-declarations
* redundant string conversions
* C-style casts
* redundant get() calls on smart pointers
* uses of size() where the empty() method should be used to check for emptiness
PiperOrigin-RevId: 196585984
non-scalar partition variables are supported.
PiperOrigin-RevId: 196584749
Layer.add_weight would crash when called without a dtype or initializer.
PiperOrigin-RevId: 196583182
When executing on GPU, synchronously copy cond result from device to host.
PiperOrigin-RevId: 196580820
PiperOrigin-RevId: 196580619
multiple outputs, each of which is explicitly declared.
PiperOrigin-RevId: 196579920
PiperOrigin-RevId: 196578043
PiperOrigin-RevId: 196577314
PiperOrigin-RevId: 196576497
PiperOrigin-RevId: 196576189
PiperOrigin-RevId: 196575483
PiperOrigin-RevId: 196575387
PiperOrigin-RevId: 196573938
Fixes #18648
PiperOrigin-RevId: 196572262
visible_devices_list.
See #19083
See #18861
More generally, this change avoids assertion failures (which would bring the
whole process down) on a few code paths that can be triggered by user input.
PiperOrigin-RevId: 196572013
PiperOrigin-RevId: 196570742
PiperOrigin-RevId: 196570011
PiperOrigin-RevId: 196567964
PiperOrigin-RevId: 196567928
implemented in the natural way for the Tensor class.
PiperOrigin-RevId: 196566940
Also fixes hlo_schedule_test to remove the expected order on unrelated operations.
PiperOrigin-RevId: 196565651
PiperOrigin-RevId: 196565296
PiperOrigin-RevId: 196565153
- Don't display ops with 0 optimal seconds and 0 actual cycles. These
are ops that were expected to be free and were actually free.
- Fix HloCostAnalysis to mark parameters, constants, and
get-tuple-element as expected-to-be-free per the definition above.
- Allow optimal-seconds < 0 to indicate "I don't know". Use this for
custom calls, and then hide such ops from the "seconds above the
optimum" table.
- Don't display "<none>" and "<unknown>" -- instead, just display the
empty string. Less visual noise.
- Instead of showing ~5 ops per category in the categories tables, show
everything. This isn't so noisy now that we're hiding "free" ops, and
it makes finding optimization opportunities much easier.
PiperOrigin-RevId: 196564177
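The filtering rules above can be sketched as a pair of small predicates. The field names are hypothetical placeholders; the real data lives in the XLA profile output:

```python
def visible(op):
    # Hide ops that were expected to be free (0 optimal seconds) and
    # actually were free (0 actual cycles).
    return not (op["optimal_seconds"] == 0 and op["actual_cycles"] == 0)

def in_seconds_above_optimum_table(op):
    # optimal_seconds < 0 means "I don't know" (e.g. custom calls);
    # such ops stay visible but are hidden from the
    # seconds-above-the-optimum table.
    return visible(op) and op["optimal_seconds"] >= 0
```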
* Add an attribute to the If op to indicate whether lowering to switch-merge
form is needed;
* Add an initial version of an If op rewriter that transforms an If op into
switch/merge nodes (as would have been constructed via tf.cond) if the If op
has the lowering attribute set.
- The pass is not ready for general use and, for example, does not support
reference data types.
PiperOrigin-RevId: 196563421
PiperOrigin-RevId: 196561620
PiperOrigin-RevId: 196560221
PiperOrigin-RevId: 196558466
Remove duplicated code to resolve type from attributes.
PiperOrigin-RevId: 196558061
PiperOrigin-RevId: 196557132
PiperOrigin-RevId: 196556727