Commit log
* sqrt(v + epsilon**2) and changed flag name accordingly.
  PiperOrigin-RevId: 216240045
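The entry above refers to an epsilon term moved inside the square root of an adaptive-gradient denominator. A minimal plain-Python sketch of the two denominator variants (illustrative only; the function names are not TensorFlow's):

```python
import math

def denom_eps_outside(v, eps):
    # Conventional form: sqrt(v) + epsilon.
    return math.sqrt(v) + eps

def denom_eps_inside(v, eps):
    # Form named in the commit: sqrt(v + epsilon**2).
    return math.sqrt(v + eps**2)

# Both denominators approach eps as v -> 0, and they agree closely
# when v >> eps**2, but the "inside" form varies smoothly near v = 0.
```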
* It fails 1/1000 runs in OSS builds.
  PiperOrigin-RevId: 216050192
* the model.
  PiperOrigin-RevId: 215947354
* the optimizer isn't set, e.g. loading weights and then calling predict.
  - Add load_weights for `KerasTpuModel`.
  PiperOrigin-RevId: 215920993
* PiperOrigin-RevId: 215752559
* PiperOrigin-RevId: 215585187
* optimization parameter protos and removed uses of that functionality in tests.
  PiperOrigin-RevId: 215494433
* PiperOrigin-RevId: 215454323
* overridden at runtime, allowing dynamic switching between inference and training modes. Not fully implemented yet.
  PiperOrigin-RevId: 215325071
* PiperOrigin-RevId: 215313156
* to replace it.
  This change prepares `tf.data` for TensorFlow 2.0, where `tf.contrib` will no longer exist. It retains the pre-existing endpoints in `tf.contrib.data` with deprecation warnings.
  Note there are some exceptions to the move:
  * Deprecated symbols in `tf.contrib.data` have not been moved to `tf.data.experimental`, because replacements already exist.
  * `tf.contrib.data.LMDBDataset` has not been moved, because we plan to move it to a SIG-maintained repository.
  * `tf.contrib.data.assert_element_shape()` has not yet been moved, because it depends on functionality in `tf.contrib`, and it will move in a later change.
  * `tf.contrib.data.AUTOTUNE` has not yet been moved, because we have not yet determined how to `tf_export()` a Python integer.
  * The stats-related API endpoints have not yet appeared in a released version of TensorFlow, so these are moved to `tf.data.experimental` without retaining an endpoint in `tf.contrib.data`.
  In addition, this change includes some build rule and ApiDef refactoring:
  * Some of the "//third_party/tensorflow/python:training" dependencies had to be split in order to avoid a circular dependency.
  * The `tf.contrib.stateless` ops now have a private core library for the generated wrappers (and accordingly are hidden in their ApiDef) so that `tf.data.experimental.sample_from_datasets()` can depend on them.
  PiperOrigin-RevId: 215304249
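The retained `tf.contrib.data` endpoints with deprecation warnings follow a common re-export pattern. A hedged plain-Python sketch of that pattern (the decorator and the wrapped function here are hypothetical; TensorFlow uses its own deprecation utilities):

```python
import functools
import warnings

def deprecated_endpoint(new_name, fn):
    """Wrap fn so calls through the old endpoint emit a DeprecationWarning."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        warnings.warn(
            "This endpoint is deprecated; use %s instead." % new_name,
            DeprecationWarning, stacklevel=2)
        return fn(*args, **kwargs)
    return wrapper

# Hypothetical moved function and its legacy alias.
def make_range(n):
    return list(range(n))

legacy_make_range = deprecated_endpoint("tf.data.experimental.make_range",
                                        make_range)
```

Calling the legacy alias still returns the same result as the new endpoint; only a warning is added.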
* PiperOrigin-RevId: 215297961
* Adam optimizer's variable update formula.
  PiperOrigin-RevId: 215290881
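For reference, the textbook Adam update (Kingma & Ba) that the entry refers to can be sketched in plain Python; this is the standard formula, not the commit's actual TensorFlow code:

```python
import math

def adam_step(theta, g, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update for a scalar parameter theta with gradient g.

    m, v are the running first/second moment estimates; t is the 1-based
    step count used for bias correction.
    """
    m = b1 * m + (1 - b1) * g          # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * g * g      # second-moment (uncentered var) estimate
    m_hat = m / (1 - b1 ** t)          # bias-corrected moments
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v
```

On the first step the bias correction makes the effective step size close to the full learning rate regardless of the small moment estimates.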
* PiperOrigin-RevId: 215259803
* PiperOrigin-RevId: 215248985
* PiperOrigin-RevId: 215200418
* learning rate to be modified at runtime. The implementation is not yet complete.
  PiperOrigin-RevId: 215030536
* PiperOrigin-RevId: 215029224
* PiperOrigin-RevId: 215027511
* (actual implementation is pending).
  Added comments with pointers to C++ implementations of optimizers.
  PiperOrigin-RevId: 215026002
* PiperOrigin-RevId: 215016286
* PiperOrigin-RevId: 215009955
* PiperOrigin-RevId: 214974535
* PiperOrigin-RevId: 214967868
* PiperOrigin-RevId: 214964988
* PiperOrigin-RevId: 214941829
* tpu_embedding_configuration_py, and tpu_embedding_output_layout_py.
  PiperOrigin-RevId: 214879168
* PiperOrigin-RevId: 214846488
* PiperOrigin-RevId: 214831772
* It used to save the existing custom getter and then overwrite it, which meant the previous custom getter was never called inside "computation". It now creates a new custom getter that calls the previous custom getter.
  PiperOrigin-RevId: 214715720
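The fix described above, chaining to a previously installed custom getter instead of silently replacing it, can be sketched like this (names and signatures are illustrative, not TensorFlow's variable-scope API):

```python
def make_chained_getter(previous_getter, new_behavior):
    """Build a getter that tries new_behavior first, then the previous getter.

    previous_getter: the getter that was already installed, or None.
    new_behavior: callable returning a value for a name, or None to pass through.
    """
    def getter(name, *args, **kwargs):
        value = new_behavior(name)
        if value is not None:
            return value                      # new behavior handled this name
        if previous_getter is not None:
            return previous_getter(name, *args, **kwargs)  # chain, don't drop
        return name                           # fall back to default resolution
    return getter
```

The buggy version simply installed `new_behavior` in place of `previous_getter`, so the chaining call in the middle branch never happened.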
* Other additional refactoring.
  PiperOrigin-RevId: 214715083
* Estimator
  Add support for stateful metrics in model to estimator
  PiperOrigin-RevId: 214714322
* PiperOrigin-RevId: 214702243
* PiperOrigin-RevId: 214698827
* This triggers checkpoints in a separate thread while allowing training to continue. This can effectively parallelize checkpointing and training for workloads like TPUEstimator, where the weights are only updated after a number of device iterations.
  PiperOrigin-RevId: 214670991
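The checkpoint-in-a-separate-thread idea can be sketched with standard Python threading (the class and method names here are hypothetical; the real hook's API differs):

```python
import threading

class AsyncCheckpointer:
    """Run a save function on a background thread so training can continue."""

    def __init__(self, save_fn):
        self._save_fn = save_fn
        self._thread = None

    def maybe_save(self, step):
        # Skip this request if a previous save is still in flight.
        if self._thread is not None and self._thread.is_alive():
            return False
        self._thread = threading.Thread(target=self._save_fn, args=(step,))
        self._thread.start()
        return True

    def join(self):
        # Block until any outstanding save has finished (e.g. at shutdown).
        if self._thread is not None:
            self._thread.join()
```

Because weights only change after a batch of device iterations in the TPUEstimator-style workload described above, the background save can overlap with the next batch of training steps.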
* PiperOrigin-RevId: 214532827
* PiperOrigin-RevId: 214499034
* output and targets and produces the same loss and metrics.
  PiperOrigin-RevId: 214494877
* PiperOrigin-RevId: 214489904
* PiperOrigin-RevId: 214476713
* PiperOrigin-RevId: 214373714
* PiperOrigin-RevId: 214359786
* logical core) indexing scheme for cores.
  Previously the DeviceAssignment class mixed both a general concept (a mapping from (replica, logical core) to physical TPU core) and a specific instantiation of that concept, by imposing a particular 3D grid structure on the logical core numbers. This was excessive: while the physical core numbers have a particular structure, there is no need to impose any particular structure on the logical core numbers.
  This change simplifies the DeviceAssignment scheme so that logical cores within a replica are numbered sequentially, without any particular semantics.
  PiperOrigin-RevId: 213984629
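Under the simplified scheme, a device assignment reduces to a plain (replica, logical core) to physical core lookup, with logical cores numbered sequentially within each replica. A toy sketch of that idea (the helper is hypothetical, not the DeviceAssignment class):

```python
def make_device_assignment(physical_cores_per_replica):
    """Build a lookup from (replica, logical_core) to a physical core id.

    physical_cores_per_replica: list of lists; entry r lists the physical
    core ids assigned to replica r, in logical-core order 0, 1, 2, ...
    No grid structure is imposed on the logical core numbers.
    """
    def lookup(replica, logical_core):
        return physical_cores_per_replica[replica][logical_core]
    return lookup

# Two replicas with two logical cores each, mapped to physical cores 0-3.
assignment = make_device_assignment([[0, 1], [2, 3]])
```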
* PiperOrigin-RevId: 213913013
* refactoring it, adding several new fields and an EmbeddingOutputLayout message to provide experimental support for controlling the embedding output.
  PiperOrigin-RevId: 213849572
* PiperOrigin-RevId: 213730668
* PiperOrigin-RevId: 213574904
* PiperOrigin-RevId: 213378552
* Otherwise a confusing message like "TypeError: Tensor objects are only iterable when eager execution is enabled. To iterate over this tensor use tf.map_fn." would be thrown.
  PiperOrigin-RevId: 213371676
* PiperOrigin-RevId: 213327633