path: root/tensorflow/contrib/distribute
* Update model in keras dist strat learning phase test to return consistent values. (Pavithra Vijay, 2018-10-09)
  PiperOrigin-RevId: 216461637
* Make defun work under distributed strategies. (Igor Ganichev, 2018-10-09)
  The core of the change is to have the gradient tape capture distributed variables instead of plain ResourceVariables. In other words, we move the distribution awareness from defun down to the tape and rely on distributed-variable magic to provide us with the right variable at runtime. In tower context, we always watch the container (e.g. MirroredVariable). In cross-tower context, we always watch all the components.
  PiperOrigin-RevId: 216430530
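The watch policy described above can be sketched in plain Python. This is a simplified stand-in, not the real TensorFlow tape or `MirroredVariable` classes; the class and attribute names here are illustrative assumptions:

```python
class MirroredVariable:
    """Stand-in for a distributed variable with one component per device."""
    def __init__(self, components):
        self.components = components  # e.g. one variable per GPU

class Tape:
    """Sketch of the watch policy: in tower (replica) context watch the
    container itself; in cross-tower context watch every component."""
    def __init__(self):
        self.watched = []

    def watch(self, var, cross_tower=False):
        if cross_tower and isinstance(var, MirroredVariable):
            self.watched.extend(var.components)
        else:
            self.watched.append(var)

mv = MirroredVariable(["v_gpu0", "v_gpu1"])
tower_tape = Tape()
tower_tape.watch(mv)                    # watches the container
cross_tape = Tape()
cross_tape.watch(mv, cross_tower=True)  # watches each component
```

The point of the policy is that a tower-context gradient computation resolves the container to the right per-device component at runtime, while cross-tower code must see all components explicitly.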
* In TPUMirroredVariable, when setting the _initializer_op and _initial_value attributes, set the attributes of all the contained variables. (Ruoxin Sang, 2018-10-09)
  This fixes a bug where tf.train.init_from_checkpoint did not overwrite the initialization values correctly for TPUMirroredVariable.
  PiperOrigin-RevId: 216429476
* Fix the steps_per_epoch when training on mnist. (Sourabh Bajaj, 2018-10-08)
  PiperOrigin-RevId: 216225505
* Avoid adding spurious ops when colocating with resource variables. (Asim Shankar, 2018-10-08)
  Prior to this change, tf.colocate_with(v) would insert spurious operations (a ReadVariableOp and an Identity) in the graph when v is a resource variable, and then colocate the operations within the block with those newly added, otherwise disconnected, operations. This commit avoids adding the unnecessary ReadVariableOp/Identity nodes and colocates operations within the block with the VarHandleOp.
  PiperOrigin-RevId: 216201638
* Add DistributionStrategy support to moving average APIs. (A. Unique TensorFlower, 2018-10-05)
  Fixes #21405.
  PiperOrigin-RevId: 215973401
* Bring V2 optimizers into Keras with Keras signatures. (A. Unique TensorFlower, 2018-10-05)
  PiperOrigin-RevId: 215950207
* Add a 'device' property to TPUMirroredVariable, so that tf.train.init_from_checkpoint can be supported. (Ruoxin Sang, 2018-10-04)
  PiperOrigin-RevId: 215843249
* Create new classes for Keras tests to allow us to create new test targets. (Anjali Sridhar, 2018-10-03)
  PiperOrigin-RevId: 215653650
* Merge pull request #22591 from EFanZh:fix-docs (TensorFlower Gardener, 2018-10-03)
  PiperOrigin-RevId: 215639962
* Add a require_static_shapes argument to the DistributionStrategy class. (Anjali Sridhar, 2018-10-03)
  This allows us to identify whether we need to set the drop_remainder option when creating Dataset objects.
  PiperOrigin-RevId: 215633097
* Tests for metrics correctness with TPU strategy. (Priya Gupta, 2018-10-03)
  PiperOrigin-RevId: 215618809
* Automated rollback of commit b7e9cbab27c893283acc4a6154d7a59dffb23758. (Derek Murray, 2018-10-02)
  PiperOrigin-RevId: 215503549
* Use `defun` instead of `Defun` for `tf.data`, except for `make_one_shot_iterator`, which is to be deprecated in the future. (Shivani Agrawal, 2018-10-02)
  PiperOrigin-RevId: 215491729
* Add support for multiple input/output numpy arrays when using Keras APIs. (Anjali Sridhar, 2018-10-02)
  PiperOrigin-RevId: 215459075
* [tf.data] Deprecate `tf.contrib.data` and introduce `tf.data.experimental` to replace it. (Derek Murray, 2018-10-01)
  This change prepares `tf.data` for TensorFlow 2.0, where `tf.contrib` will no longer exist. It retains the pre-existing endpoints in `tf.contrib.data` with deprecation warnings.
  Note there are some exceptions to the move:
  * Deprecated symbols in `tf.contrib.data` have not been moved to `tf.data.experimental`, because replacements already exist.
  * `tf.contrib.data.LMDBDataset` has not been moved, because we plan to move it to a SIG-maintained repository.
  * `tf.contrib.data.assert_element_shape()` has not yet been moved, because it depends on functionality in `tf.contrib`, and it will move in a later change.
  * `tf.contrib.data.AUTOTUNE` has not yet been moved, because we have not yet determined how to `tf_export()` a Python integer.
  * The stats-related API endpoints have not yet appeared in a released version of TensorFlow, so these are moved to `tf.data.experimental` without retaining an endpoint in `tf.contrib.data`.
  In addition, this change includes some build rule and ApiDef refactoring:
  * Some of the "//third_party/tensorflow/python:training" dependencies had to be split in order to avoid a circular dependency.
  * The `tf.contrib.stateless` ops now have a private core library for the generated wrappers (and accordingly are hidden in their ApiDef) so that `tf.data.experimental.sample_from_datasets()` can depend on them.
  PiperOrigin-RevId: 215304249
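The "retain old endpoints with deprecation warnings" pattern described above can be sketched generically in plain Python. The `deprecated_alias` helper and the toy `sample_from_datasets` below are illustrative assumptions, not the actual TensorFlow shims:

```python
import warnings

def deprecated_alias(new_fn, old_name, new_name):
    """Return a wrapper that forwards to new_fn while warning that the
    old endpoint is deprecated (hypothetical helper, for illustration)."""
    def wrapper(*args, **kwargs):
        warnings.warn(
            "%s is deprecated; use %s instead" % (old_name, new_name),
            DeprecationWarning, stacklevel=2)
        return new_fn(*args, **kwargs)
    return wrapper

def sample_from_datasets(datasets):
    """Toy stand-in for the new tf.data.experimental endpoint."""
    return list(datasets)

# The old contrib endpoint keeps working but warns on use:
contrib_sample_from_datasets = deprecated_alias(
    sample_from_datasets,
    "tf.contrib.data.sample_from_datasets",
    "tf.data.experimental.sample_from_datasets")
```

Callers of the old name keep working unchanged, which is what lets the contrib endpoints survive until TensorFlow 2.0 removes `tf.contrib` entirely.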
* Change semantics of DistributionStrategy.update() to make sure the output depends on the updates across all mirrors. (A. Unique TensorFlower, 2018-10-01)
  Before this change, update() would return a Mirrored value where each component was an update to a single mirror. This caused a problem: for reading purposes, other DistributionStrategy methods would consider it okay to read any single component, so if you did something like session.run(strategy.update(...)) it would only perform the update on one replica.
  The fix is to have the output be a Mirrored value that is actually the identity operation returning the output on that device, but that has a control dependency making sure that the update actually happens on all the replicas. This fix was already present in MirroredVariable._assign_func; this CL moves the fix into update() and generalizes it to multiple return values.
  To disable this new grouping behavior, you may now pass "grouped=False" to update(). For example, some callers (like Optimizer) perform a lot of updates and prefer to group all of them together at once for performance reasons. In this case, we still want to make sure the caller executes the update on all replicas, so we return an unwrapped value instead of a Mirrored value. This has the happy side effect of removing a bunch of unwrap calls in client code, since unwrapping was the only safe way to use the Mirrored value we used to return.
  PiperOrigin-RevId: 215301909
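The grouped-versus-unwrapped distinction can be illustrated with a minimal sketch in plain Python (this models only the semantics, not the control-dependency machinery; the function and replica names are hypothetical):

```python
def update(per_replica_updates, grouped=True):
    """Sketch of the grouped-update semantics: with grouped=True, the caller
    gets a single callable whose execution runs the update on *every*
    replica, instead of a value that might be read from just one replica."""
    if grouped:
        def run_all():
            return [u() for u in per_replica_updates]
        return run_all
    # grouped=False: return the unwrapped per-replica updates; the caller
    # (e.g. an optimizer batching many updates) must run all of them itself.
    return list(per_replica_updates)

# Two toy "replicas", each with a counter to bump:
counters = {"gpu0": 0, "gpu1": 0}
updates = [lambda d=d: counters.__setitem__(d, counters[d] + 1)
           for d in counters]

update(updates)()  # running the grouped op updates all replicas at once
```

Running only one element of the ungrouped list would reproduce the original bug: a single replica updated while the others silently fall behind.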
* Move from deprecated self.test_session() to self.cached_session() or self.session(). (A. Unique TensorFlower, 2018-10-01)
  * Move from self.test_session(graph=ops.Graph(), ...) to self.session(...) (semantically equivalent).
  * Move from self.test_session() to self.cached_session(config=self.config) when run_in_graph_and_eager_modes(config=config) is set, to be consistent between eager and non-eager modes.
  self.test_session() was deprecated in 9962eb5e84b15e309410071b06c2ed2d6148ed44 as its name confuses readers of the test. Moving to cached_session() instead, which is more explicit about:
  * the fact that the session may be reused.
  * the fact that the session is not closed even when doing a "with self.test_session()" statement.
  PiperOrigin-RevId: 215216964
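The two properties called out above (reuse, and no close on block exit) can be sketched with a toy context manager. This is a hand-rolled illustration of the contract, not the real TensorFlowTestCase implementation:

```python
import contextlib

class CachedSessionMixin:
    """Sketch of the cached_session() contract: the same underlying session
    is reused across calls, and leaving the `with` block does NOT close it."""
    def __init__(self):
        self._cached = None

    @contextlib.contextmanager
    def cached_session(self):
        if self._cached is None:
            self._cached = {"closed": False}  # stand-in for a tf.Session
        yield self._cached
        # Intentionally no close here: the session outlives the block.

tc = CachedSessionMixin()
with tc.cached_session() as s1:
    pass
with tc.cached_session() as s2:
    pass  # s2 is the very same object as s1, still open
```

By contrast, the old name `test_session` suggested a fresh session per test, which is exactly the misreading the rename addresses.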
* Move TPU variables to the TPU device in TPUStrategy. (Jonathan Hseu, 2018-09-28)
  PiperOrigin-RevId: 215027511
* Automated rollback of commit 7f1d70d97f543d69a9f02cd6df0964f22f9278f3. (Rohan Jain, 2018-09-28)
  PiperOrigin-RevId: 214989908
* Remove @{} api_links and ban "@{}" from python and md files. (Mark Daoust, 2018-09-28)
  PiperOrigin-RevId: 214964988
* Disable auto_shard for MirroredStrategy by default. (Yuefeng Zhou, 2018-09-28)
  We will re-enable it when it is more robust.
  PiperOrigin-RevId: 214956066
* Fix some documentation errors. (EFanZh, 2018-09-28)
* Fix an error that occurs when attempting to use TensorFlow optimizers with Keras and DistributionStrategy. (Anjali Sridhar, 2018-09-27)
  PiperOrigin-RevId: 214890580
* Allow source_device to be set to /cpu:0 for the multi-device iterator in distribution strategies; that is always the appropriate option. (Rohan Jain, 2018-09-27)
  In the existing code, we would set it to a partially specified "worker" name that was ambiguous, so it would end up on the GPU.
  PiperOrigin-RevId: 214882658
* Change test size as it has been timing out consistently. (Sourabh Bajaj, 2018-09-27)
  PiperOrigin-RevId: 214867453
* Add Mirrored distribution strategy support for new metrics with Keras and Estimator. (Pavithra Vijay, 2018-09-26)
  Add support for stateful metrics in model to estimator.
  PiperOrigin-RevId: 214714322
* Switch distribution strategies to use MultiDeviceIterator. (Rohan Jain, 2018-09-25)
  Currently only supported in graph mode using initializable iterators. In a subsequent change, we'll add support for eager mode as well. This removes the prefetching_ops_v2 code.
  PiperOrigin-RevId: 214546754
* [tf.data] Add a private method for (recursively) tracking dataset inputs. (Jiri Simsa, 2018-09-25)
  PiperOrigin-RevId: 214495925
* Add validation that input shapes should be fully defined when using TPU strategy with Keras. (Priya Gupta, 2018-09-24)
  PiperOrigin-RevId: 214376435
* Implement required properties for TPU strategy. (Philip Pham, 2018-09-24)
  These properties are necessary for the strategy to work with `tf.estimator.train_and_evaluate`.
  PiperOrigin-RevId: 214285957
* Remove dependency on contrib dataset ops. (Priya Gupta, 2018-09-24)
  PiperOrigin-RevId: 214219282
* Temporarily remove isolate_session_state in CollectiveAllReduceStrategy. (Yuefeng Zhou, 2018-09-22)
  PiperOrigin-RevId: 214119090
* Fix a bug in CollectiveAllReduce where the variable names it sees are sometimes not complete and thus not unique, leading to the same collective keys for different variables. (Yuefeng Zhou, 2018-09-22)
  PiperOrigin-RevId: 214117466
* Roll back the change introduced on cross_towers_ops_test by the previous commit. (A. Unique TensorFlower, 2018-09-21)
  PiperOrigin-RevId: 214057023
* Move from deprecated self.test_session() to self.cached_session(). (A. Unique TensorFlower, 2018-09-21)
  self.test_session() was deprecated in 9962eb5e84b15e309410071b06c2ed2d6148ed44 as its name confuses readers of the test. Moving to cached_session() instead, which is more explicit about:
  * the fact that the session may be reused.
  * the fact that the session is not closed even when doing a "with self.test_session()" statement.
  PiperOrigin-RevId: 213944932
* Make threading.local not an instance member of collective ops, because in Python 3 threading.local cannot be pickled. (Yuefeng Zhou, 2018-09-20)
  PiperOrigin-RevId: 213928766
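The Python 3 pitfall behind this change can be demonstrated directly: a `threading.local` stored as an instance attribute makes the whole object unpicklable, while module-level thread-local state keeps instances picklable. The `CollectiveOps` classes below are hypothetical stand-ins, not the real collective-ops code:

```python
import pickle
import threading

_local = threading.local()  # module-level: not part of any instance's state

class CollectiveOps:
    """Hypothetical sketch: thread-local state lives at module level, so the
    instance itself stays picklable under Python 3."""
    def set_group_key(self, key):
        _local.group_key = key

class BrokenCollectiveOps:
    """Anti-pattern: threading.local as an instance member is unpicklable."""
    def __init__(self):
        self._local = threading.local()

pickle.dumps(CollectiveOps())  # works
try:
    pickle.dumps(BrokenCollectiveOps())
    broken_picklable = True
except TypeError:
    broken_picklable = False   # Python 3 refuses to pickle _thread._local
```

Moving the state out of the instance trades per-instance isolation for picklability, which is acceptable here since the state is keyed per thread anyway.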
* Re-enable flaky keras_test. (Anjali Sridhar, 2018-09-19)
  PiperOrigin-RevId: 213665390
* Fix estimator_training test flakiness. (Yuefeng Zhou, 2018-09-19)
  PiperOrigin-RevId: 213653403
* Link to the readme for distribution strategy from distribute.py and the package init file, so that folks looking at the API documentation can find the readme as well. (Priya Gupta, 2018-09-18)
  PiperOrigin-RevId: 213499832
* Keep only weak references to variables in graph functions. (Allen Lavoie, 2018-09-17)
  This enables cleanup of the variables referenced in defunned methods of objects when the object is garbage collected. Since one PolymorphicFunction is created per @defun, decorated methods before this change held on to all of the variables referenced in that method for any instance of the class (i.e. variables which should have been object-scoped were scoped to the lifetime of the class definition).
  Raises an exception if variables used in the function have been deleted when it is called, which means no local variables.
  PiperOrigin-RevId: 213337256
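The weak-capture idea and the "raises if a captured variable has died" behavior can be sketched in plain Python with `weakref`. `GraphFunction` and `Variable` below are simplified stand-ins, not the real PolymorphicFunction machinery:

```python
import gc
import weakref

class Variable:
    """Stand-in for a variable object that can be garbage collected."""
    pass

class GraphFunction:
    """Sketch: hold only weak references to captured variables, so they can
    die together with the object that owns them."""
    def __init__(self):
        self._weak_captures = []

    def capture(self, variable):
        self._weak_captures.append(weakref.ref(variable))

    def captured_variables(self):
        variables = [ref() for ref in self._weak_captures]
        if any(v is None for v in variables):
            raise RuntimeError(
                "A variable captured by this function has been deleted.")
        return variables

fn = GraphFunction()
v = Variable()
fn.capture(v)
live = fn.captured_variables()  # fine while v is alive
del v, live
gc.collect()
# fn.captured_variables() would now raise RuntimeError
```

This is also why local variables cannot be captured: once the enclosing frame exits, the weak reference goes dead and the next call fails loudly instead of silently reviving stale state.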
* Disable flaky keras_test. (Gunhan Gulsoy, 2018-09-14)
  PiperOrigin-RevId: 213053512
* Use `dataset.batch(..., drop_remainder=True)` instead of map_and_batch to achieve the same effect. (Priya Gupta, 2018-09-13)
  PiperOrigin-RevId: 212901207
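The `drop_remainder` semantics can be illustrated in plain Python; this is a sketch of the behavior, not the `tf.data` implementation:

```python
def batch(elements, batch_size, drop_remainder=False):
    """Group elements into batches; optionally drop the final short batch,
    mirroring dataset.batch(batch_size, drop_remainder=True)."""
    batches = [elements[i:i + batch_size]
               for i in range(0, len(elements), batch_size)]
    if drop_remainder and batches and len(batches[-1]) < batch_size:
        batches.pop()
    return batches

print(batch(list(range(10)), 3))                       # keeps the short [9]
print(batch(list(range(10)), 3, drop_remainder=True))  # drops it
```

Dropping the remainder is what makes every batch the same size, which is why it matters for strategies (such as TPU) that require fully defined static shapes.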
* Remove OutOfRangeError checks and testing going to the end of the dataset in PrefetchingOpsV2. (Rohan Jain, 2018-09-13)
  There is a bit of nondeterminism with the FunctionBufferingResource that will get fixed with the MultiDeviceIterator; once we transition to that, we can go back to enabling these checks.
  PiperOrigin-RevId: 212849405
* Merge pull request #22227 from joba01:patch-1 (TensorFlower Gardener, 2018-09-13)
  PiperOrigin-RevId: 212847729
* Fix the colocate_with issue for Adagrad optimizerV2. (Anjali Sridhar, 2018-09-12)
  PiperOrigin-RevId: 212702577
* Add unit test for model_to_estimator where input_fn returns features and labels as a list instead of a dict. (Zhenyu Tan, 2018-09-12)
  PiperOrigin-RevId: 212685344
* Fix wrong variable name in example. (Johannes Bannhofer, 2018-09-12)
  The Keras model used a wrong variable name in the MirroredStrategy example.
* Add support for numpy arrays with DistributionStrategy in Keras. (Anjali Sridhar, 2018-09-09)
  PiperOrigin-RevId: 212210810
* Add support for evaluate and predict in Keras with TPUStrategy. Also add unit tests and updated examples. (Priya Gupta, 2018-09-09)
  PiperOrigin-RevId: 212207760