Commit log

PiperOrigin-RevId: 207147507
abb903df7a5998b33547c02e95f9fa47c00f31f4
PiperOrigin-RevId: 207145802
As far as I can tell, `executable` is never nullptr.
PiperOrigin-RevId: 207141878
PiperOrigin-RevId: 207140591
this case.
PiperOrigin-RevId: 207137374
PiperOrigin-RevId: 207135538
has operations that modify the input arguments in place.
PiperOrigin-RevId: 207133095
PiperOrigin-RevId: 207129109
inspect.getargspec raises errors if type annotations are present, but getfullargspec is
perfectly happy to let functions with type annotations pass.
PiperOrigin-RevId: 207127930
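The getargspec/getfullargspec difference described in this entry is plain-Python behavior and easy to verify outside TensorFlow. A minimal sketch (the function name `annotated` is illustrative):

```python
import inspect

def annotated(x: int, y: str = "hello") -> bool:
    """A function carrying type annotations."""
    return True

# inspect.getfullargspec handles annotated functions without complaint,
# while the legacy inspect.getargspec raised an error for functions
# using annotations (and was removed entirely in Python 3.11).
spec = inspect.getfullargspec(annotated)
print(spec.args)         # ['x', 'y']
print(spec.annotations)  # includes entries for 'x', 'y', and 'return'
```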
dataset.
PiperOrigin-RevId: 207127254
PiperOrigin-RevId: 207127136
third_party/tensorflow/core/kernels:android_tensorflow_image_op
correctly.
PiperOrigin-RevId: 207122169
PiperOrigin-RevId: 207108296
PiperOrigin-RevId: 207107983
PiperOrigin-RevId: 207084736
PiperOrigin-RevId: 207081431
ops.py. This change does not depend on the new config.experimental.client_handles_error_formatting flag. I also attempted to modify relevant interpolated error strings so an uninterpolated error message still reads correctly if you remove the interpolation tokens.
PiperOrigin-RevId: 207075862
PiperOrigin-RevId: 207066617
PiperOrigin-RevId: 207053503
PiperOrigin-RevId: 207051885
And fix two lint issues.
PiperOrigin-RevId: 207051473
The `Optional` type makes it possible to represent missing values (e.g. an attempt to run `Iterator.get_next()` after the sequence has ended) without raising an error.
NOTE: The `Optional` type is currently only supported on CPU, and a follow-up change will add support for other devices. After that, we will add this to the `tf.contrib.data` API, with a view to eventually migrating it to core.
PiperOrigin-RevId: 207049979
PiperOrigin-RevId: 207045468
PiperOrigin-RevId: 207034363
save_weights in HDF5 format does not save optimizer weights anyway, but since TensorFlow optimizers are saved in TensorFlow format it's a bit surprising when Keras optimizers aren't.
PiperOrigin-RevId: 207027546
Refactors a method that uses `OP_REQUIRES[_OK]` heavily to return a `Status`
in error cases instead.
PiperOrigin-RevId: 207027527
triggered by the number of trees fired.
PiperOrigin-RevId: 207024504
PiperOrigin-RevId: 207020196
PiperOrigin-RevId: 207016946
PiperOrigin-RevId: 207016849
kingofthebongo2008:nysnc_and_highwayhash_cmake_fixes
PiperOrigin-RevId: 207014665
The Unity TFLite plugin should now run successfully on Mac,
though it might require renaming `libtensorflowlite_c.so` to
`tensorflowlite_c.bundle` in the Plugins folder.
PiperOrigin-RevId: 207014537
important that the Eager Runtime remains compatible with Android. For now only
eager:execute is built since this is the main target TF Lite will depend on.
PiperOrigin-RevId: 207012943
ops.
PiperOrigin-RevId: 207010324
It's possible for an already-existing context to be returned by
cuDevicePrimaryCtxRetain. Previously, this would be handled incorrectly
by CreatedContexts::Add, which was assuming that inserts into the map
always succeeded.
This makes XLA work with
TF_CUDA_PLATFORM_GPU_DEVICE_SCHEDULE=blocking_sync, although exactly how
that flag is related to this bug is unclear to me. It seems like some
sort of race condition, maybe?
PiperOrigin-RevId: 207010059
fake quant ops for weights in both depthwise and regular convolutions inside a separable convolution op. Also insert fake quant ops for activations produced by the first depthwise convolution.
PiperOrigin-RevId: 207009650
PiperOrigin-RevId: 207008537
instead of a "char*".
(See https://docs.python.org/3/whatsnew/3.7.html#c-api-changes)
There are additional changes needed for Python 3.7 compatibility;
this change just pulls out one of them
(and subsumes related attempts in #21202 and #20766)
Helps with #20517
PiperOrigin-RevId: 207008013
code replaces the None values with 1 and the model is then built with the shape of (1, 1, 1, 1). This sets the variables of the model and hence we cannot call the model on input of shape other than (1, 1, 1, 1).
In this CL, we create placeholders for the None values and build the model in graph mode. Since tf.Variable is now compatible with both eager and graph mode, the variables created after building the model in graph mode are still valid in eager mode. Now we can build the model with `None` values in the input shape, and the model can still be called on input of a different shape thanks to the placeholders.
PiperOrigin-RevId: 207005479
PiperOrigin-RevId: 207005345
Use object-based save/restore to make dataset/iterator checkpointable in both graph as well as eager mode.
PiperOrigin-RevId: 206998349
PiperOrigin-RevId: 206998261
PiperOrigin-RevId: 206997688
PiperOrigin-RevId: 206995432
suggestion from apassos@ -- the underlying lib->Instantiate() does the caching.
PiperOrigin-RevId: 206993242
the host to yield a single scalar value from the reduce function.
PiperOrigin-RevId: 206990072
Fermi and below are not supported by CUDA 9, which is the oldest CUDA version
supported by XLA.
PiperOrigin-RevId: 206989869
Works with the one-shot head (no model state in the tf.Example proto).
PiperOrigin-RevId: 206988925
public, so that the user can have access to more detailed results from VirtualScheduler.
PiperOrigin-RevId: 206986812
output_multiplier > 1.
#20451
#19607
PiperOrigin-RevId: 206983654