path: root/tensorflow/python/training/training.py
* Remove magic-doc-links from code. (Mark Daoust, 2018-08-16)
  This change contains no code changes, only doc-strings. We can't use relative links in code files, so we don't have much choice but to link to tensorflow.org/. The deleted links were to docs that no longer exist. PiperOrigin-RevId: 209019572
* Create new MultiStepStopHook for running multiple steps per run when using TPU Distribution Strategy with Estimator. (Sourabh Bajaj, 2018-08-10)
  PiperOrigin-RevId: 208310915
* Split checkpoint management utility functions out of saver.py. (Allen Lavoie, 2018-08-02)
  Pure refactor, in preparation for adding a higher-level checkpoint management utility. This utility will also need to work with the Checkpoint proto, and globbing it onto saver.py seems dirty. PiperOrigin-RevId: 207179646
* Checkpointable: move python/training/checkpointable_* to python/training/checkpointable/. (Allen Lavoie, 2018-05-16)
  Need to add some new checkpointable files in core (specifically I had some checkpointable data structures in mind), and prefixing more files with "checkpointable_" in python/training/ seems dirty. No functional changes, just some branching and build/import fiddling. PiperOrigin-RevId: 196883136
* Removing @@ comments from core TensorFlow. They are no longer needed for exporting symbols to the TensorFlow API. (Anna R, 2018-04-26)
  PiperOrigin-RevId: 194426855
* Removing remove_undocumented calls from tensorflow/python. (Anna R, 2018-04-25)
  PiperOrigin-RevId: 194274698
* Add tf.train.Checkpoint for reading and writing object-based checkpoints. (Allen Lavoie, 2018-04-18)
  Previously exposed as tf.contrib.eager.Checkpoint / tfe.Checkpoint. Spiffies up the documentation a bit, but otherwise just adds the export decorator. Compatible in both directions with tf.train.Saver (object-based checkpoints can be fed to tf.train.Saver, and name-based checkpoints can be fed to tf.train.Checkpoint). PiperOrigin-RevId: 193439442
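  A minimal sketch of the object-based API (the layer, optimizer, and save path below are illustrative, not from the commit):

      import tensorflow as tf
      tf.enable_eager_execution()  # object-based saving is simplest under eager

      net = tf.keras.layers.Dense(10)      # hypothetical model piece
      opt = tf.train.AdamOptimizer(0.001)
      ckpt = tf.train.Checkpoint(net=net, optimizer=opt)

      save_path = ckpt.save("/tmp/demo/ckpt")  # e.g. "/tmp/demo/ckpt-1"
      ckpt.restore(save_path)  # matched by object graph, not by variable name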
* Export tf.GradientTape. (Asim Shankar, 2018-03-19)
  tf.GradientTape can be used both for eager execution and graph construction to compute gradients (unlike tf.gradients, which works only for graph construction). PiperOrigin-RevId: 189676004
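  A short sketch under eager execution:

      import tensorflow as tf
      tf.enable_eager_execution()

      x = tf.constant(3.0)
      with tf.GradientTape() as tape:
          tape.watch(x)            # non-variable tensors must be watched explicitly
          y = x * x
      dy_dx = tape.gradient(y, x)  # 6.0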
* Move warm_starting_util from third_party/tensorflow/python/estimator to third_party/tensorflow/python/training (move WarmStartSettings definition to third_party/tensorflow/python/estimator/estimator.py), and make _warm_start() public under tf.train.warm_start(). (A. Unique TensorFlower, 2018-03-09)
  WarmStartSettings and VocabInfo are both available under tf.estimator, and VocabInfo is also available under tf.train. PiperOrigin-RevId: 188522820
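  A sketch of the newly public entry point (the checkpoint path and variable regex are hypothetical); it is called while building the graph, before variables are initialized:

      import tensorflow as tf

      tf.train.warm_start(
          ckpt_to_initialize_from="/tmp/prev_model",
          vars_to_warm_start=".*dense.*")  # regex over variable names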
* gradients: Export tf.custom_gradient. (Asim Shankar, 2018-03-05)
  (Moved from the tf.contrib.eager namespace.) PiperOrigin-RevId: 187950503
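  The canonical use is defining an op with a hand-written, numerically stable gradient; a sketch:

      import tensorflow as tf

      @tf.custom_gradient
      def log1pexp(x):
          # log(1 + e^x), with a gradient that avoids overflow for large x.
          e = tf.exp(x)
          def grad(dy):
              return dy * (1 - 1 / (1 + e))
          return tf.log(1 + e), grad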
* Add pylint check for W0622 redefined-builtin in ci_sanity.sh and fix existing pylint errors. (Yifei Feng, 2018-02-09)
  PiperOrigin-RevId: 185206494
* Add export calls for protos. (Anna R, 2018-02-09)
  PiperOrigin-RevId: 185166764
* Merge changes from GitHub. (Raghuraman Krishnamoorthi, 2018-01-03)
  PiperOrigin-RevId: 180746153
* Adding learning rate decays found in Neural Optimizer Search with Reinforcement Learning [Bello et al., ICML 2017]. Also adding cosine decay. (A. Unique TensorFlower, 2017-10-25)
  PiperOrigin-RevId: 173451903
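  A sketch of cosine decay wired into an optimizer (the rates and step counts are illustrative):

      import tensorflow as tf

      global_step = tf.train.get_or_create_global_step()
      lr = tf.train.cosine_decay(
          learning_rate=0.1,       # initial rate
          global_step=global_step,
          decay_steps=10000)       # horizon over which the rate is annealed
      opt = tf.train.GradientDescentOptimizer(lr)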
* Move profiler hook from contrib to core. (Mustafa Ispir, 2017-10-05)
  PiperOrigin-RevId: 171194291
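  The hook writes Chrome-trace timeline files that can be loaded in chrome://tracing; a sketch (output_dir is hypothetical):

      import tensorflow as tf

      # Capture a timeline every 100 steps; pass the hook to a
      # MonitoredTrainingSession or an Estimator's train() call.
      profiler_hook = tf.train.ProfilerHook(save_steps=100,
                                            output_dir="/tmp/profile")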
* Add checkpoint-utils to the tf.train module. (Mustafa Ispir, 2017-06-15)
  PiperOrigin-RevId: 159171746
* Implement ClusterSpec Propagation in TF Master. (Brennan Saeta, 2017-05-04)
  ClusterSpec propagation is a capability upgrade for TensorFlow that should make it much easier to (1) build distributed TensorFlow clusters, and (2) handle node failures. The ClusterSpec propagation capability allows TensorFlow workers to be booted independently of each other, and with no knowledge about others. The client can then construct a ClusterDef (ClusterSpec), and then send it to the TF master at session creation. The master in turn then propagates the ClusterDef along to all of the workers. Change: 155159972
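  A sketch of the client side, assuming the mechanism is wired through tf.ConfigProto's cluster_def field (all addresses are hypothetical):

      import tensorflow as tf

      # Workers boot knowing nothing about each other; the client supplies
      # the cluster definition at session-creation time.
      cluster = tf.train.ClusterSpec({"worker": ["host0:2222", "host1:2222"]})
      config = tf.ConfigProto(cluster_def=cluster.as_cluster_def())
      with tf.Session("grpc://host0:2222", config=config) as sess:
          pass  # the master propagates the ClusterDef to all workers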
* Add sdca ops to tf.train. (Patrick Nguyen, 2017-04-14)
  This adds:
  - tf.train.sdca_optimizer
  - tf.train.sdca_fprint
  - tf.train.sdca_shrink_l1
  These were previously documented, prior to 1.0, in tf.sdca. In 1.0, they were absent from tf.sdca, so this does not break compatibility. The module tf.sdca is removed. Change: 153176548
* Make CheckpointSaverListener visible next to CheckpointSaverHook. (A. Unique TensorFlower, 2017-04-10)
  Change: 152662945
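  A listener receives callbacks around each checkpoint save; a minimal sketch (the directory and cadence are illustrative):

      import tensorflow as tf

      class ExampleListener(tf.train.CheckpointSaverListener):
          def before_save(self, session, global_step_value):
              print("about to save at step", global_step_value)

          def after_save(self, session, global_step_value):
              print("saved at step", global_step_value)

      saver_hook = tf.train.CheckpointSaverHook(
          checkpoint_dir="/tmp/ckpts",
          save_steps=500,
          listeners=[ExampleListener()])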
* Enable access to SecondOrStepTimer via tf.train. (Mustafa Ispir, 2017-03-27)
  Change: 151375546
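  The timer can be configured to fire on a wall-clock or a step cadence (one or the other), which is useful inside custom SessionRunHooks; a sketch:

      import tensorflow as tf

      timer = tf.train.SecondOrStepTimer(every_steps=1000)  # or every_secs=60
      for step in range(10000):
          if timer.should_trigger_for_step(step):
              timer.update_last_triggered_step(step)
              # periodic work (logging, saving, ...) goes here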
* Update module docstrings to consistently link to the guide in the body instead of the title. (A. Unique TensorFlower, 2017-02-24)
  Also fix some malformed @{$...} references and titles starting with "##". Change: 148476930
* Move global step creator utilities from contrib to training_util. (Mustafa Ispir, 2017-02-21)
  Change: 148156049
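  The creators in training_util include get_or_create_global_step; a one-line sketch of the idempotent variant, which returns the existing global step variable or makes one:

      import tensorflow as tf

      global_step = tf.train.get_or_create_global_step()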
* Module docstring updates. (A. Unique TensorFlower, 2017-02-13)
  Change: 147412093
* Moved FeedFnHook into basic_session_run_hooks. (Illia Polosukhin, 2017-01-17)
  Make LoggingTensorHook print tensors in the order they were given (if a list). Added a formatter option to support custom string formatting. Added numpy printing-options configuration to tweak precision and summarization. Change: 144748847
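  A sketch of the new formatter option (the loss tensor is a stand-in):

      import tensorflow as tf

      loss = tf.constant(0.25)  # stand-in for a real loss tensor
      logging_hook = tf.train.LoggingTensorHook(
          tensors={"loss": loss},
          every_n_iter=100,
          formatter=lambda values: "loss = %.4f" % values["loss"])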
* Make SecondOrStepTimer public. (Mustafa Ispir, 2017-01-10)
  Change: 144101397
* Rename SyncReplicasOptimizerV2 to SyncReplicasOptimizer. (A. Unique TensorFlower, 2016-12-21)
  Change: 142676422
* Rename SyncReplicasOptimizerV2 to SyncReplicasOptimizer. (A. Unique TensorFlower, 2016-12-13)
  Change: 141904790
* Introduced tf.train.HookedSession, which is designed to help single-machine training. (Mustafa Ispir, 2016-12-08)
  Change: 141462689
* Remove tf.SyncReplicasOptimizer, which will be replaced by tf.SyncReplicasOptimizerV2. (A. Unique TensorFlower, 2016-12-06)
  Change: 141243546
* Remove summary ops from tf namespace. (Dan Mané, 2016-12-01)
  tf.histogram_summary, tf.scalar_summary, tf.audio_summary, tf.image_summary, tf.merge_all_summaries, and tf.merge_summary are all removed. Nearly identical and fully supported APIs are available at tf.summary.histogram, tf.summary.scalar, tf.summary.audio, tf.summary.image, tf.summary.merge_all, and tf.summary.merge. The major change in the new API is that the summary "tag" is now actually the node name, which means the TF naming system will automatically deduplicate summary tags. If you need an exact match for the old API, you may use tf.contrib.deprecated.histogram_summary, etc., but these endpoints will eventually be removed. Change: 140792244
* Remove tf.train.SummaryWriter and tf.train.SummaryWriterCache. (Dan Mané, 2016-12-01)
  These classes are now available as tf.summary.FileWriter and tf.summary.FileWriterCache. Change: 140762782
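  A sketch of the replacement API, covering both this rename and the summary-op renames in the entry above (the log directory is hypothetical):

      import tensorflow as tf

      loss = tf.constant(0.25)          # stand-in tensor
      tf.summary.scalar("loss", loss)   # was tf.scalar_summary
      merged = tf.summary.merge_all()   # was tf.merge_all_summaries

      writer = tf.summary.FileWriter("/tmp/logs")  # was tf.train.SummaryWriter
      with tf.Session() as sess:
          writer.add_summary(sess.run(merged), global_step=0)
      writer.close()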
* Added a SessionRunHook to handle global-step-based delaying in a distributed setting. (Mustafa Ispir, 2016-11-21)
  Change: 139827659
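  This description matches tf.train.GlobalStepWaiterHook; assuming that is the hook introduced here, a sketch:

      import tensorflow as tf

      # A worker given this hook blocks at session start until the global
      # step reaches wait_until_step (typical for non-chief workers).
      delay_hook = tf.train.GlobalStepWaiterHook(wait_until_step=1000)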
* Merge changes from GitHub. (Benoit Steiner, 2016-11-09)
  Change: 138675832
* Merge changes from GitHub. (Vijay Vasudevan, 2016-11-03)
  Change: 138143557
* Expose missing rate decay functions. (Patrick Nguyen, 2016-10-25)
  Change: 137221633
* Seal tf.train's interface. (Patrick Nguyen, 2016-10-19)
  Change: 136613296
* Introduce tf.train.{checkpoint_exists,get_checkpoint_mtimes}(). (Zongheng Yang, 2016-10-14)
  These two helpers are agnostic to the checkpoint format -- they handle the naming difference between V1 and V2. Change: 136181681
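  A sketch (the prefix is hypothetical; note these helpers take a checkpoint prefix, not a single file path):

      import tensorflow as tf

      prefix = "/tmp/model/ckpt-1000"
      if tf.train.checkpoint_exists(prefix):
          mtimes = tf.train.get_checkpoint_mtimes([prefix])
          print("last written at", mtimes[0])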
* Switch to the new accumulators in the sync_rep optimizer (currently called V2). (Jianmin Chen, 2016-10-07)
  Please note that the gradients from replicas are now averaged instead of summed (as in the old sync_replicas_optimizer), so you need to increase the learning rate according to the number of replicas. This change is introduced to be consistent with how gradients are aggregated (averaged) within a batch in a replica. As shown in the code change, the switch results in:
  1. Much cleaner and simpler code.
  2. A much more efficient and reliable staleness check. It is now 100% strict with no extra contention to PS servers.
  3. No need for a clean_up op, so we can get rid of the abort_op, which can confuse users.
  4. The number of replicas can be changed without complaints from the checkpoint, as the local_step is now just a local variable instead of a global vector variable.
  This has been tried with manual restarts of workers (chief or non-chief) and PS and seems to be quite robust. Change: 135513399
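  A sketch of compensating for averaged (rather than summed) gradients, using the class under its later name tf.train.SyncReplicasOptimizer (replica count and rates are illustrative):

      import tensorflow as tf

      num_replicas = 8
      # Gradients are averaged across replicas, so scale the learning rate
      # up relative to the old summing behavior when migrating.
      base_opt = tf.train.GradientDescentOptimizer(0.1 * num_replicas)
      opt = tf.train.SyncReplicasOptimizer(
          base_opt,
          replicas_to_aggregate=num_replicas,
          total_num_replicas=num_replicas)
      # Each worker needs this hook so queue runners and local steps
      # are initialized correctly.
      sync_hook = opt.make_session_run_hook(is_chief=True)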
* Move Matthieu's MonitoredTrainingSession to tf.train. (Mustafa Ispir, 2016-10-06)
  Change: 135373918
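  A minimal training-loop sketch (the train op is a stand-in and the checkpoint directory hypothetical):

      import tensorflow as tf

      global_step = tf.train.get_or_create_global_step()
      train_op = tf.assign_add(global_step, 1)  # stand-in for a real train op

      # Handles chief/non-chief setup, checkpointing, and summaries.
      with tf.train.MonitoredTrainingSession(
              checkpoint_dir="/tmp/train",
              hooks=[tf.train.StopAtStepHook(last_step=1000)]) as sess:
          while not sess.should_stop():
              sess.run(train_op)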
* Add missing import of NanTensorHook in training.py namespace flattening. (A. Unique TensorFlower, 2016-10-05)
  Change: 135265457
* Move MonitoredSession and related utilities from tf.contrib.learn to tf.train. (Mustafa Ispir, 2016-10-03)
  Change: 135010812
* Adagrad Dual Averaging optimizer for sparse linear models that takes care of lazy updates correctly. (A. Unique TensorFlower, 2016-08-18)
  Change: 130714247
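  Assuming this is the optimizer exposed as tf.train.AdagradDAOptimizer, a sketch (the rates are illustrative):

      import tensorflow as tf

      global_step = tf.train.get_or_create_global_step()
      # Dual averaging needs the global step to weight its accumulators;
      # l1 regularization can drive inactive weights exactly to zero.
      opt = tf.train.AdagradDAOptimizer(
          learning_rate=0.1,
          global_step=global_step,
          l1_regularization_strength=0.01,
          l2_regularization_strength=0.001)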
* Only the exponentially decaying learning rate seems to have been exposed from the learning_rate_decay module; the very useful piecewise_constant decay scheme, however, was not exposed. (A. Unique TensorFlower, 2016-06-30)
  Change: 126273419
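  The now-exposed schedule, sketched with illustrative boundaries: 1.0 for steps [0, 100000), 0.5 for [100000, 110000), and 0.1 afterwards.

      import tensorflow as tf

      global_step = tf.train.get_or_create_global_step()
      lr = tf.train.piecewise_constant(
          global_step,
          boundaries=[100000, 110000],
          values=[1.0, 0.5, 0.1])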
* Move do_quantize_training_on_graphdef to tf.train. (Jianmin Chen, 2016-06-14)
  Change: 124903871
* Add basic_train_loop() as an example for higher-level frameworks to copy or reuse. It can also be used directly for simple training. (A. Unique TensorFlower, 2016-06-09)
  Fix Coordinator.clear_stop() to also clear the exception to raise. Add test. Add SummaryWriter.reopen(), with tests. This is needed to properly handle summaries when creating a session more than once in a Supervisor. In Supervisor.prepare_or_wait_for_session(), reopen the summary writer. At the end of Supervisor.managed_session(), correctly close the summary writer and clear the running threads even if an exception was reported. Change: 124500982
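  A sketch of driving basic_train_loop() with a Supervisor, assuming it calls train_step_fn(session, ...) until the Supervisor requests a stop (the train op, step budget, and logdir are illustrative):

      import tensorflow as tf

      global_step = tf.train.get_or_create_global_step()
      train_op = tf.assign_add(global_step, 1)  # stand-in train op
      sv = tf.train.Supervisor(logdir="/tmp/logs")

      def train_step_fn(session, *args, **kwargs):
          if session.run(train_op) >= 1000:
              sv.request_stop()

      tf.train.basic_train_loop(sv, train_step_fn)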
* ProximalAdagrad and ProximalGradientDescent, which provide l1 and l2 regularization for Adagrad and GradientDescent respectively. (A. Unique TensorFlower, 2016-06-06)
  Without l1 and l2 regularization, ProximalAdagrad and ProximalGradientDescent are exactly the same as Adagrad and GradientDescent respectively. Change: 124206988
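  A sketch (the regularization strengths are illustrative; with both at 0.0 this reduces to plain Adagrad):

      import tensorflow as tf

      opt = tf.train.ProximalAdagradOptimizer(
          learning_rate=0.1,
          l1_regularization_strength=0.001,  # l1 encourages exact zeros
          l2_regularization_strength=0.001)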
* Update copyright for 3p/tf/python. (A. Unique TensorFlower, 2016-06-02)
  Change: 123900456
* Removes InferenceExample from tensorflow.Example. (Noah Fiedel, 2016-05-03)
  Background: InferenceExample was confusing as (a) it exposed Features rather than Examples and (b) it was primarily intended for serving optimization. Change: 121402533
* Audio summary support. (RJ Ryan, 2016-04-26)
  - Add a simple S16LE WAV encoder.
  - Add an Audio value type to the Summary protocol buffer.
  - Add AudioSummary kernel and op.
  - Add support to EventAccumulator/EventMultiplexer for Audio events.
  - Add 16-bit little-endian encode/decode functions.
  Change: 120854931
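  The op added here was originally exposed as tf.audio_summary; per the rename entry above, it now surfaces as tf.summary.audio. A sketch with illustrative audio:

      import tensorflow as tf

      # waveform: [batch, frames, channels], float32 in [-1.0, 1.0].
      waveform = tf.zeros([1, 16000, 1])  # one second of silence at 16 kHz
      tf.summary.audio("demo_audio", waveform, sample_rate=16000, max_outputs=1)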
* Clean up the interface to the distributed runtime from Python. (Derek Murray, 2016-04-07)
  This is a breaking change! The following classes have been renamed:
  - tf.GrpcServer -> tf.train.Server
  - tf.ClusterSpec -> tf.train.ClusterSpec
  - tf.ServerDef -> tf.train.ServerDef
  - tf.JobDef -> tf.train.JobDef
  - tf.ClusterDef -> tf.train.ClusterDef
  The constructor for tf.train.Server is more permissive and now accepts tf.train.ClusterSpec, tf.train.ClusterDef, and dictionary inputs for specifying the cluster part of the server. For consistency, the server library moves from python/client to python/training. Change: 119335624
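  A sketch of the renamed classes (all addresses are hypothetical; each task would run one such server):

      import tensorflow as tf

      cluster = tf.train.ClusterSpec({
          "ps": ["ps0:2222"],
          "worker": ["worker0:2222", "worker1:2222"],
      })
      server = tf.train.Server(cluster, job_name="worker", task_index=0)
      # A client can then connect a session to this server:
      # with tf.Session(server.target) as sess: ...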