Commit message | Author | Age
* values.
  PiperOrigin-RevId: 216461637
* The core of the change is to have the gradient tape capture distributed
  variables instead of plain ResourceVariables. In other words, we move the
  distribution awareness from defun down to the tape and rely on distributed
  variable magic to provide us with the right variable at runtime. In tower
  context, we always watch the container (e.g. MirroredVariable); in
  cross-tower context, we always watch all the components.
  PiperOrigin-RevId: 216430530
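The container-vs-components rule above can be sketched as a toy model. This is plain Python, not the actual TensorFlow tape internals; `DistributedVariable` and the `in_tower_context` flag are stand-ins for illustration only:

```python
class DistributedVariable:
    """Toy stand-in for a MirroredVariable: one component per device."""
    def __init__(self, components):
        self.components = components  # e.g. {"/gpu:0": v0, "/gpu:1": v1}

def tensors_to_watch(var, in_tower_context):
    """Mimic the tape's rule: watch the container in tower context,
    watch every component in cross-tower context."""
    if isinstance(var, DistributedVariable):
        if in_tower_context:
            return [var]                      # the container itself
        return list(var.components.values())  # all per-device components
    return [var]                              # plain ResourceVariable

mirrored = DistributedVariable({"/gpu:0": "v0", "/gpu:1": "v1"})
assert tensors_to_watch(mirrored, in_tower_context=True) == [mirrored]
assert tensors_to_watch(mirrored, in_tower_context=False) == ["v0", "v1"]
```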
* attributes, set the attributes of all the contained variables. This fixes a
  bug where tf.train.init_from_checkpoint did not overwrite the
  initialization values correctly for TPUMirroredVariable.
  PiperOrigin-RevId: 216429476
* PiperOrigin-RevId: 216225505
* Prior to this change, tf.colocate_with(v) would insert spurious operations
  (a ReadVariableOp and an Identity) into the graph when v is a resource
  variable, and then colocate the operations within the block with those newly
  added, otherwise disconnected, operations. This commit avoids adding the
  unnecessary ReadVariableOp/Identity nodes and colocates operations within
  the block with the VarHandleOp instead.
  PiperOrigin-RevId: 216201638
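The difference can be illustrated with a toy graph model (plain Python, not TensorFlow's API; the op dictionaries are hypothetical) that counts how many nodes each colocation scheme adds:

```python
# Toy graph: each op records its name and the op it is colocated with.
graph = []

def add_op(name, colocate_with=None):
    op = {"name": name, "colocate_with": colocate_with}
    graph.append(op)
    return op

handle = add_op("VarHandleOp")  # the resource variable's handle op

# Old behavior: colocate_with(v) first materialized a ReadVariableOp and an
# Identity, then colocated the block's ops with those disconnected nodes.
# New behavior: colocate directly with the existing VarHandleOp, so no
# extra nodes appear in the graph.
matmul = add_op("MatMul", colocate_with=handle["name"])

assert len(graph) == 2  # no spurious ReadVariableOp/Identity added
assert matmul["colocate_with"] == "VarHandleOp"
```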
* Fixes #21405.
  PiperOrigin-RevId: 215973401
* PiperOrigin-RevId: 215950207
* tf.train.init_from_checkpoint can be supported.
  PiperOrigin-RevId: 215843249
* PiperOrigin-RevId: 215653650
* PiperOrigin-RevId: 215639962
* allows us to identify if we need to set the drop_remainder option when
  creating Dataset objects.
  PiperOrigin-RevId: 215633097
|
| |
| |
| |
| | |
PiperOrigin-RevId: 215618809
|
| |
| |
| |
| | |
PiperOrigin-RevId: 215503549
|
| |
| |
| |
| |
| |
| | |
`make_one_shot_iterator` which is to be deprecated in future.
PiperOrigin-RevId: 215491729
|
| |
| |
| |
| | |
PiperOrigin-RevId: 215459075
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
to replace it.
This change prepares `tf.data` for TensorFlow 2.0, where `tf.contrib` will no longer exist. It retains the pre-existing endpoints in `tf.contrib.data` with deprecation warnings.
Note there are some exceptions to the move:
* Deprecated symbols in `tf.contrib.data` have not been moved to `tf.data.experimental`, because replacements already exist.
* `tf.contrib.data.LMDBDataset` has not been moved, because we plan to move it to a SIG-maintained repository.
* `tf.contrib.data.assert_element_shape()` has not yet been moved, because it depends on functionality in `tf.contrib`, and it will move in a later change.
* `tf.contrib.data.AUTOTUNE` has not yet been moved, because we have not yet determined how to `tf_export()` a Python integer.
* The stats-related API endpoints have not yet appeared in a released version of TensorFlow, so these are moved to `tf.data.experimental` without retaining an endpoint in `tf.contrib.data`.
In addition, this change includes some build rule and ApiDef refactoring:
* Some of the "//third_party/tensorflow/python:training" dependencies had to be split in order to avoid a circular dependency.
* The `tf.contrib.stateless` ops now have a private core library for the generated wrappers (and accordingly are hidden in their ApiDef) so that `tf.data.experimental.sample_from_datasets()` can depend on them.
PiperOrigin-RevId: 215304249
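The "retain old endpoints with deprecation warnings" pattern described above can be sketched generically. This is plain Python, not TensorFlow's actual `tf_export`/deprecation machinery; `deprecated_endpoint` and the stand-in `sample_from_datasets` are hypothetical:

```python
import warnings

def deprecated_endpoint(new_fn, old_name, new_name):
    """Wrap new_fn so the old endpoint still works but emits a warning."""
    def wrapper(*args, **kwargs):
        warnings.warn(
            f"{old_name} is deprecated; use {new_name} instead.",
            DeprecationWarning, stacklevel=2)
        return new_fn(*args, **kwargs)
    return wrapper

# Hypothetical stand-in for a function moved to tf.data.experimental:
def sample_from_datasets(datasets):
    return f"sampled from {len(datasets)} datasets"

# The old tf.contrib.data endpoint keeps working, with a warning:
contrib_sample_from_datasets = deprecated_endpoint(
    sample_from_datasets,
    "tf.contrib.data.sample_from_datasets",
    "tf.data.experimental.sample_from_datasets")

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = contrib_sample_from_datasets(["a", "b"])
assert result == "sampled from 2 datasets"
assert caught[0].category is DeprecationWarning
```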
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
output depends on the updates across all mirrors. Before this change,
update() would return a Mirrored value that where each component was
an update to a single mirror. This caused a problem since for reading
purposes other DistributionStrategy methods would consider it okay
to read any single component, and so if you for example did something
like session.run(strategy.update(...)) it would only perform the
update on one replica. The fix is to have the output be a Mirrored
value that is actually the identity operation returning the output on
that device, but that has a control dependency making sure that the
update actually happens on all the replicas. This fix was already
present in MirroredVariable._assign_func, this CL moves the fix into
update() and generalizes it to multiple return values.
To disable this new grouping behavior, you may now pass
"grouped=False" to update(). For example, some callers (like Optimizer)
are performing a lot of updates and they prefer to group all of them
together at once for performance reasons. In this case, we still want
to make sure the caller executes the update on all replicas, so we
return an unwrapped value instead of a Mirrored value. This has the
happy side effect of removing a bunch of unwrap calls in client code,
since unwrapping was the only safe way to use the Mirrored value we
used to return.
PiperOrigin-RevId: 215301909
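The grouped-vs-unwrapped contract can be sketched as a toy model (plain Python, not the real DistributionStrategy API; `Mirrored` and `update` here are stand-ins). In real TensorFlow the grouped result carries control dependencies so that reading any one component forces all per-replica updates to run; in this eager toy every update simply runs before anything is returned, so that guarantee is trivial:

```python
class Mirrored:
    """Toy stand-in for a per-device Mirrored value."""
    def __init__(self, components):
        self.components = components  # device -> value

def update(mirrored_var, update_fn, grouped=True):
    """Apply update_fn to every component; mimic the grouping contract."""
    results = {d: update_fn(v) for d, v in mirrored_var.components.items()}
    if grouped:
        return Mirrored(results)   # safe to read any single component
    return list(results.values())  # unwrapped per-replica values

state = Mirrored({"/gpu:0": 1, "/gpu:1": 1})
out = update(state, lambda v: v + 1)
assert isinstance(out, Mirrored)
assert out.components == {"/gpu:0": 2, "/gpu:1": 2}
assert update(state, lambda v: v * 3, grouped=False) == [3, 3]
```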
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
self.session().
* Move from self.test_session(graph=ops.Graph(), ...) to self.session(...) (semantically equivalent).
* Move from self.test_session() to self.cached_session(config=self.config) when run_in_graph_and_eager_modes(config=config) is set to be consistent between eager and non eager modes.
self.test_session() has been deprecated in 9962eb5e84b15e309410071b06c2ed2d6148ed44 as its name confuses readers of the test. Moving to cached_session() instead which is more explicit about:
* the fact that the session may be reused.
* the session is not closed even when doing a "with self.test_session()" statement.
PiperOrigin-RevId: 215216964
|
| |
| |
| |
| | |
PiperOrigin-RevId: 215027511
|
| |
| |
| |
| | |
PiperOrigin-RevId: 214989908
|
| |
| |
| |
| | |
PiperOrigin-RevId: 214964988
|
| |
| |
| |
| |
| |
| | |
We will re-enable it when it is more robust.
PiperOrigin-RevId: 214956066
* Keras and DistributionStrategy
  PiperOrigin-RevId: 214890580
* distribution strategies. That is always the appropriate option. In the
  existing code, we would set it to a partially specified "worker" name that
  was ambiguous, and end up on the GPU.
  PiperOrigin-RevId: 214882658
* PiperOrigin-RevId: 214867453
* Estimator
  Add support for stateful metrics in model to estimator.
  PiperOrigin-RevId: 214714322
* supported in Graph mode using initializable iterators. In a subsequent
  change, we'll add support for Eager mode as well.
  This removes the prefetching_ops_v2 code.
  PiperOrigin-RevId: 214546754
* PiperOrigin-RevId: 214495925
* strategy with Keras.
  PiperOrigin-RevId: 214376435
* These properties are necessary for the strategy to work with
  `tf.estimator.train_and_evaluate`.
  PiperOrigin-RevId: 214285957
* PiperOrigin-RevId: 214219282
* PiperOrigin-RevId: 214119090
* are not complete and thus not unique, leading to the same collective keys
  for different variables.
  PiperOrigin-RevId: 214117466
* PiperOrigin-RevId: 214057023
* self.test_session() was deprecated in
  9962eb5e84b15e309410071b06c2ed2d6148ed44 because its name confuses readers
  of the test. Moving to cached_session() instead, which is more explicit
  about:
  * the fact that the session may be reused;
  * the fact that the session is not closed even when using a
    "with self.test_session()" statement.
  PiperOrigin-RevId: 213944932
* python3 threading.local cannot be pickled.
  PiperOrigin-RevId: 213928766
* PiperOrigin-RevId: 213665390
* PiperOrigin-RevId: 213653403
* file, so that folks looking at the API documentation can find the readme as
  well.
  PiperOrigin-RevId: 213499832
* This enables cleanup of the variables referenced in defunned methods of
  objects when the object is garbage collected. Since one PolymorphicFunction
  is created per @defun, decorated methods before this change held on to all
  of the variables referenced in that method for any instance of the class
  (i.e. variables which should have been object-scoped were scoped to the
  lifetime of the class definition).
  Raises an exception if variables used in the function have been deleted
  when it is called, which means no local variables.
  PiperOrigin-RevId: 213337256
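The lifetime behavior described above (the function no longer keeps captured variables alive, and raises if a captured variable has been deleted) can be illustrated with Python weak references. This is a plain-Python sketch, not TensorFlow's PolymorphicFunction implementation; `Variable` and `PolymorphicFunctionSketch` are hypothetical stand-ins:

```python
import gc
import weakref

class Variable:
    """Toy stand-in for a resource variable."""
    def __init__(self, value):
        self.value = value

class PolymorphicFunctionSketch:
    """Hold only weak references to captured variables, so a traced method
    does not keep an object's variables alive after the object is
    garbage collected."""
    def __init__(self, fn, variables):
        self.fn = fn
        self._weak_vars = [weakref.ref(v) for v in variables]

    def __call__(self):
        variables = [ref() for ref in self._weak_vars]
        if any(v is None for v in variables):
            raise RuntimeError("a captured variable has been deleted")
        return self.fn(variables)

v = Variable(42)
f = PolymorphicFunctionSketch(lambda vs: vs[0].value, [v])
assert f() == 42

del v           # drop the only strong reference
gc.collect()
try:
    f()
except RuntimeError:
    pass        # the captured variable was collected, as intended
else:
    raise AssertionError("expected RuntimeError")
```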
* PiperOrigin-RevId: 213053512
* achieve the same effect.
  PiperOrigin-RevId: 212901207
* PrefetchingOpsV2. There is a bit of non-determinism with the
  FunctionBufferingResource that will get fixed with the MultiDeviceIterator;
  once we transition to that, we can go back to enabling these checks.
  PiperOrigin-RevId: 212849405
* PiperOrigin-RevId: 212847729
|
| |
| |
| |
| | |
PiperOrigin-RevId: 212702577
|
| |
| |
| |
| |
| |
| | |
returns features and labels as a list instead of dict.
PiperOrigin-RevId: 212685344
* The Keras model used a wrong variable name in the MirroredStrategy example.
* PiperOrigin-RevId: 212210810
* unittests and updated examples.
  PiperOrigin-RevId: 212207760