path: root/tensorflow/contrib/optimizer_v2
Commit message | Author | Age
* Reduce tolerances for rmsprop_test float16, to fix OSS builds. (Todd Wang, 2018-10-08)
    PiperOrigin-RevId: 216200439
* Brings V2 Optimizers into Keras w/ Keras signaturesGravatar A. Unique TensorFlower2018-10-05
| | | | PiperOrigin-RevId: 215950207
* Change semantics of DistributionStrategy.update() to make sure the output depends on the updates across all mirrors. (A. Unique TensorFlower, 2018-10-01)
    Before this change, update() would return a Mirrored value where each component was an update to a single mirror. This caused a problem: for reading purposes, other DistributionStrategy methods would consider it okay to read any single component, so if you did something like session.run(strategy.update(...)) it would only perform the update on one replica. The fix is to have the output be a Mirrored value that is actually the identity operation returning the output on that device, but with a control dependency making sure that the update actually happens on all the replicas. This fix was already present in MirroredVariable._assign_func; this CL moves the fix into update() and generalizes it to multiple return values.

    To disable this new grouping behavior, you may now pass "grouped=False" to update(). For example, some callers (like Optimizer) perform a lot of updates and prefer to group all of them together at once for performance reasons. In this case, we still want to make sure the caller executes the update on all replicas, so we return an unwrapped value instead of a Mirrored value.

    This has the happy side effect of removing a bunch of unwrap calls in client code, since unwrapping was the only safe way to use the Mirrored value we used to return.

    PiperOrigin-RevId: 215301909
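As a rough sketch of the two behaviors described above (the variable, the assign_fn callback, and the surrounding setup are illustrative assumptions, not taken from this commit):

```
import tensorflow as tf

strategy = tf.contrib.distribute.MirroredStrategy()

def assign_fn(var):
  # Per-device update; update() runs this once per replica.
  return var.assign_add(1.0)

with strategy.scope():
  v = tf.get_variable("v", initializer=0.0)

  # Default (grouped): the returned value carries control dependencies
  # ensuring the update happens on every replica, so
  # session.run(update_op) is safe.
  update_op = strategy.update(v, assign_fn)

  # grouped=False: returns the per-replica update ops unwrapped; the
  # caller (e.g. Optimizer) must execute all of them itself.
  per_replica_ops = strategy.update(v, assign_fn, grouped=False)
```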
* Merge pull request #22301 from jennynz:master (TensorFlower Gardener, 2018-09-19)
    PiperOrigin-RevId: 213648091
* Update broken link to intro on ADAGRAD (Jenny Sahng, 2018-09-17)
* Fix the colocate_with issue for Adagrad optimizerV2. (Anjali Sridhar, 2018-09-12)
    PiperOrigin-RevId: 212702577
* Deterministic ordering of the hyperparameters in optimizer_v2 (A. Unique TensorFlower, 2018-09-10)
    PiperOrigin-RevId: 212348918
* Merge pull request #21552 from sbrodehl:patch-1 (TensorFlower Gardener, 2018-08-27)
    PiperOrigin-RevId: 210392464
* Move from deprecated self.test_session() to self.cached_session(). (A. Unique TensorFlower, 2018-08-21)
    self.test_session() has been deprecated in 9962eb5e84b15e309410071b06c2ed2d6148ed44 as its name confuses readers of the test. Moving to cached_session() instead, which is more explicit about:
    * the fact that the session may be reused.
    * the fact that the session is not closed even when using a "with self.cached_session()" statement.

    PiperOrigin-RevId: 209703613
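For illustration, a small hypothetical test using the replacement API (the test class and assertion are assumptions, not from this change):

```
import tensorflow as tf

class AddOpTest(tf.test.TestCase):

  def testAdd(self):
    # Previously: with self.test_session() as sess:
    # cached_session() makes explicit that the session may be reused
    # and is not closed when the `with` block exits.
    with self.cached_session() as sess:
      self.assertEqual(3, sess.run(tf.add(1, 2)))

if __name__ == "__main__":
  tf.test.main()
```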
* Move from deprecated self.test_session() to self.session() when a graph is set. (A. Unique TensorFlower, 2018-08-21)
    self.test_session() has been deprecated in cl/208545396 as its behavior confuses readers of the test. Moving to self.session() instead.

    PiperOrigin-RevId: 209696110
* Change the initialization of the mean squared gradient variable of the optimizer_v2 version of RMSProp. (A. Unique TensorFlower, 2018-08-14)
    PiperOrigin-RevId: 208774314
* 1. Move distribution strategy context utility methods to a separate file with few dependencies. (Priya Gupta, 2018-08-14)
    1. Move distribution strategy context utility methods to a separate file with few dependencies. This allows us to import it in some places without creating circular dependencies, as the original file imported many things.
    2. Move the stack used in distribution strategy context to the graph. This allows us to use different strategies in different graphs (e.g. in train and eval).

    This fixes #21412 and #21180.

    PiperOrigin-RevId: 208680454
* Fix formula and text rendering. (Seb Bro, 2018-08-11)
* Split checkpoint management utility functions out of saver.py (Allen Lavoie, 2018-08-02)
    Pure refactor, in preparation for adding a higher-level checkpoint management utility. That utility will also need to work with the Checkpoint proto, and globbing it onto saver.py seems dirty.

    PiperOrigin-RevId: 207179646
* Remove unnecessary variable naming and comment (Zhenyu Tan, 2018-07-26)
    PiperOrigin-RevId: 206191743
* Use parameterized test in rmsprop (Zhenyu Tan, 2018-07-24)
    PiperOrigin-RevId: 205914985
* Add `synchronization` and `aggregation` args to get_variable(). (Pavithra Vijay, 2018-06-29)
    These args will be used for distributed variables.
    Add Enum `VariableSynchronization` with values for `synchronization`: AUTO, UNREPLICATED, ON_WRITE, ON_READ.
    Add Enum `VariableAggregation` with values for `aggregation`: NONE, SUM, MEAN.
    Replace all the aggregation method strings in distribution strategy with the enum values.
    Update Mirrored strategy to use these parameters to decide whether a variable should be Mirrored or TowerLocal.
    Update the different distribution strategy value types to use the `VariableAggregation` Enum.

    PiperOrigin-RevId: 202736077
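A short sketch of the new arguments; the variable name, shape, and the ON_READ/SUM combination are illustrative, not from this commit:

```
import tensorflow as tf

# A tower-local style variable: each replica writes its own copy
# (ON_READ), and reads aggregate across replicas by summing (SUM).
counter = tf.get_variable(
    "counter",
    shape=[],
    initializer=tf.zeros_initializer(),
    synchronization=tf.VariableSynchronization.ON_READ,
    aggregation=tf.VariableAggregation.SUM)
```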
* Split dependency tracking out from CheckpointableBase (Allen Lavoie, 2018-06-22)
    Some unit test fiddling, but otherwise just moving code around. My goal is to be able to use checkpointable data structures (or something like them) in Checkpointable's __setattr__ override. Checkpointable data structures depend on Layer, so Checkpointable and CheckpointableBase need to be in separate files (so we can have the dependency chain CheckpointableBase -> Layer -> CheckpointableDataStructure -> Checkpointable).

    This will also require changes to python/keras/engine/__init__.py (which currently requires Network and Layer to be imported together), but I'll do that in a separate change.

    PiperOrigin-RevId: 201712549
* Replace unnecessary `()` in `run_in_graph_and_eager_modes()`. (Tom Hennigan, 2018-06-22)
    PiperOrigin-RevId: 201652888
* Make regroup work on tower-local variables as well. (A. Unique TensorFlower, 2018-06-21)
    PiperOrigin-RevId: 201554738
* Create hyperparameter tensors in optimizer v2 outside any control flow contexts. (Priya Gupta, 2018-06-19)
    Also, use lambdas for creating the non-slot variables in adam v2. These changes are needed to allow optimizer.minimize to run inside a while loop, which will be done in distribution strategies shortly.

    PiperOrigin-RevId: 201238566
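One way to express this pattern is with an init scope; the helper below is a hypothetical sketch of lifting tensor creation out of control-flow contexts, not the commit's actual code:

```
import tensorflow as tf
from tensorflow.python.framework import ops

def _get_hyper_tensor(value, name):
  # Hypothetical helper: ops.init_scope() exits any surrounding control
  # flow context, so a hyperparameter tensor created while building a
  # while_loop body still lives at the outer-graph level.
  with ops.init_scope():
    return tf.convert_to_tensor(value, name=name)
```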
* Public API to switch between eager execution and graph building. (Alexandre Passos, 2018-05-25)
    Now, after tf.enable_eager_execution() has been executed, entering the context manager of a tf.Graph will enable graph mode. So, for example:

    ```
    tf.enable_eager_execution()
    with tf.Graph().as_default():
      c = tf.constant(1.0)  # this is a graph tensor
    c2 = tf.constant(1.0)  # this is an eager tensor
    ```

    The main use-case of this is allowing documentation writers to make a single notebook which starts with eager execution and seamlessly transitions to building graphs.

    This also makes many explicit enablings of graph mode in the code redundant (a cleanup CL will follow).

    PiperOrigin-RevId: 198092991
* Move Keras code out of _impl folder and remove API files. (Pavithra Vijay, 2018-05-17)
    PiperOrigin-RevId: 197097430
* Checkpointable: move python/training/checkpointable_* to python/training/checkpointable/ (Allen Lavoie, 2018-05-16)
    Need to add some new checkpointable files in core (specifically I had some checkpointable data structures in mind), and prefixing more files with "checkpointable_" in python/training/ seems dirty.

    No functional changes, just some branching and build/import fiddling.

    PiperOrigin-RevId: 196883136
* Checkpointable: Restore-on-create for name-based checkpoints when executing eagerly (Allen Lavoie, 2018-05-15)
    Should make loading name-based checkpoints more natural with object-based APIs when executing eagerly. Before this CL they could be loaded, but users needed to use "run_restore_ops" after all variables were created (which is less useful and confusing).

    PiperOrigin-RevId: 196729311
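Roughly, with the object-based API (the layer, checkpoint directory, and input shape are assumptions, not from this CL):

```
import tensorflow as tf
tf.enable_eager_execution()

# `net` is an illustrative stand-in for any object-based model.
net = tf.keras.layers.Dense(1)
checkpoint = tf.train.Checkpoint(net=net)
status = checkpoint.restore(tf.train.latest_checkpoint("/tmp/ckpt_dir"))
# After this change, values from a name-based (tf.train.Saver) checkpoint
# are restored as matching variables are created; previously one had to
# create every variable first and then call status.run_restore_ops().
net(tf.ones([1, 4]))  # variable creation triggers restore-on-create
```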
* Checkpointable: Remove overzealous error checking from tf.make_template (Allen Lavoie, 2018-05-11)
    It was checking that all variables in the Template's scope were dependencies, but Optimizer slot variables are created with the same prefix (and should not be dependencies). Conversely, eager execution's eager slot variable creation meant that Templates created unnecessary/somewhat harmful dependencies on restored slot variables. Fixes that.

    PiperOrigin-RevId: 196321999
* Support saving Python state with object-based checkpoints (Allen Lavoie, 2018-05-09)
    Allows SaveableObjects to specify feed-dict addition callbacks for object-based saving. For now this just saves get_config() with Layers. It doesn't do any loading, and there isn't quite enough information to reconstruct a Model yet (needs topology).

    My plan is to get Models to the point where they can be reconstructed from object-based checkpoints (probably one more change), add in SavedModel export (assuming no dynamic control flow for now), then add this "SavedModel+Python" format to Model.save / load_model.

    PiperOrigin-RevId: 196043183
* Merge changes from github. (Patrick Nguyen, 2018-05-01)
    PiperOrigin-RevId: 194997009
* Checkpointable: better handling of objects which aren't being restored (Allen Lavoie, 2018-04-25)
    initialize_or_restore on a tf.train.Checkpoint status object will now initialize any variables which aren't being restored, which is closer to the behavior when executing eagerly (and makes it easier to use).

    Fixes a bug where assert_consumed() would miss some Python objects which aren't part of the object graph being restored. It will now (correctly/as documented) complain about unmatched Python objects in the dependency graph.

    PiperOrigin-RevId: 194315742
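In graph mode this looks roughly as follows; the layer and checkpoint path are assumptions for illustration:

```
import tensorflow as tf

net = tf.keras.layers.Dense(1)  # illustrative checkpointable object
net(tf.placeholder(tf.float32, [None, 4]))  # create the variables
checkpoint = tf.train.Checkpoint(net=net)
status = checkpoint.restore("/tmp/ckpt/model-1")  # path is an assumption

with tf.Session() as session:
  # Runs restore ops for variables found in the checkpoint and, after
  # this change, initializers for any variables that are not restored.
  status.initialize_or_restore(session)
```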
* Merge changes from github. (Yifei Feng, 2018-04-23)
    PiperOrigin-RevId: 194031845
* Remove references to SubmodelPort (A. Unique TensorFlower, 2018-04-22)
    PiperOrigin-RevId: 193873101
* Create a skeleton tf.contrib.checkpoint. (Allen Lavoie, 2018-04-19)
    My plan for this is to incubate tools for working with object-based checkpoints:
    - Tools for managing dependency graphs, e.g. checkpointable lists/dictionaries
    - Inspecting/visualizing checkpoints
    - Listing variables and gathering initializers from a Checkpointable object and its dependencies
    - Verifying all variables are accessible as dependencies, which should make converting existing graph-building Saver uses easier/safer.

    This CL includes none of those things; it just moves the split_dependency tool here instead of contrib/eager.

    PiperOrigin-RevId: 193531292
* Internal-only change. (Justin Lebar, 2018-04-18)
    PiperOrigin-RevId: 193409980
* Separate the distribute dependency out of training, as it needs to be used in summary utils (which training depends on, thus causing a circular dependency). (Priya Gupta, 2018-04-12)
    PiperOrigin-RevId: 192656997
* Start moving Checkpointable utilities toward core (Allen Lavoie, 2018-04-12)
    Doesn't add to the public API yet, just shifts code around. Changes:
    - A tiny bit of renaming (to avoid having _Checkpoint and Checkpoint in the same file)
    - Removed the garbage collection decorator from a few tests due to the uuid4() garbage issue (apparently core tests get run on Python 2.7.9?)
    - Renamed "Object" to "CheckpointableObject" in the proto, since core protos have Java bindings and apparently Java had something else in mind for the keyword "Object" :)
    But otherwise this is a pure move.

    After this CL I'll propose adding tf.train.Checkpoint to the API (currently tf.contrib.eager.Checkpoint), move the utilities that are still in contrib/eager to their own contrib directory (there will be a few more misc. utilities for inspecting checkpoints and managing dependencies), get tf.train.Saver to read object-based checkpoints for compatibility, and work on Model.save_weights/load_weights.

    PiperOrigin-RevId: 192646890
* Refactor layers: (Francois Chollet, 2018-04-10)
    - tf.layers layers now subclass tf.keras.layers layers.
    - tf.keras.layers is now agnostic to variable scopes and global collections (future-proof). It also uses ResourceVariable everywhere by default.
    - As a result, tf.keras.layers is in general lower-complexity, with fewer hacks and workarounds. However, some of the current code is temporary (variable creation should be moved to Checkpointable, arguably, and there are some dependency issues that will require later refactors).
    - The legacy tf.layers layers behavior is kept, with references to variable scopes and global collections injected in the subclassed tf.layers.base.Layer class (the content of tf.layers.base.Layer is the complexity differential between the old implementation and the new one).

    Note: this refactor does slightly change the behavior of tf.layers.base.Layer, by disabling extreme edge-case behavior that either has long been invalid or is dangerous and should most definitely be disabled. This will not affect any users, since such behaviors only existed in the base Layer unit tests. The behaviors disabled are:
    - The option to create reusable variables in `call` (already invalid for some time).
    - The option to use a variable scope to create layer variables outside of the layer while not having the layer track such variables locally.

    PiperOrigin-RevId: 192339798
* Checkpointable: wrap restore ops in init_scope (Allen Lavoie, 2018-04-10)
    This should make restore() work with defun-wrapped code, when variables are created inside the function. Just lifts the restore code into the outer context.

    Adds a test for it.

    PiperOrigin-RevId: 192331065
* Simplify test_util.run_in_graph_and_eager_modes (Asim Shankar, 2018-04-10)
    - Get rid of unnecessary options
    - Update various resource variable tests so that they correctly exercise the cases where the variables are placed on GPU (these "with tf.device('/cpu:0')" blocks that were added for eager execution are no longer necessary)

    PiperOrigin-RevId: 192309109
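Typical usage of the decorator looks roughly like this (the test class and body are illustrative; at this point in history the decorator still took parentheses, which a later commit above removed):

```
import tensorflow as tf
from tensorflow.python.framework import test_util

class MathTest(tf.test.TestCase):

  # Runs the body twice: once building a graph and session, once eagerly.
  @test_util.run_in_graph_and_eager_modes()
  def testAdd(self):
    # self.evaluate() works in both modes.
    self.assertEqual(3.0, self.evaluate(tf.add(1.0, 2.0)))
```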
* Rename distributed_apply to _distributed_apply in OptimizerV2 to match the Optimizer base class. (A. Unique TensorFlower, 2018-03-30)
    PiperOrigin-RevId: 191089407
* Internal change. (Igor Saprykin, 2018-03-29)
    PiperOrigin-RevId: 191023160
* Add tf.contrib.distribute, which defines classes DistributionStrategy and MirroredStrategy, and related functionality. Also add tf.contrib.optimizer_v2, an update to the Optimizer API. (A. Unique TensorFlower, 2018-03-29)
    RELNOTES: Can now pass tf.contrib.distribute.MirroredStrategy() to tf.estimator.RunConfig() to run an Estimator model on multiple GPUs on one machine.

    PiperOrigin-RevId: 190996247
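Per the release note, usage looks roughly like the following sketch; model_fn and input_fn are assumed standard Estimator callables defined elsewhere, and the train_distribute parameter name is an assumption of this era's RunConfig API:

```
import tensorflow as tf

# Mirror the model across the machine's GPUs and train an Estimator on it.
distribution = tf.contrib.distribute.MirroredStrategy()
config = tf.estimator.RunConfig(train_distribute=distribution)
estimator = tf.estimator.Estimator(model_fn=model_fn, config=config)
estimator.train(input_fn=input_fn, steps=100)
```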