This change contains no code changes, only docstrings. We can't use relative links in code files, so we have little choice but to link to tensorflow.org/. The deleted links pointed to docs that no longer exist.
PiperOrigin-RevId: 209019572

TPU Distribution Strategy with Estimator.
PiperOrigin-RevId: 208310915

Pure refactor, in preparation for adding a higher-level checkpoint management utility. This utility will also need to work with the Checkpoint proto, and grafting it onto saver.py seems dirty.
PiperOrigin-RevId: 207179646

python/training/checkpointable/
We need to add some new checkpointable files in core (specifically, some checkpointable data structures), and prefixing more files in python/training/ with "checkpointable_" seems dirty.
No functional changes, just some branching and build/import fiddling.
PiperOrigin-RevId: 196883136

exporting symbols to the TensorFlow API.
PiperOrigin-RevId: 194426855

PiperOrigin-RevId: 194274698

Previously exposed as tf.contrib.eager.Checkpoint / tfe.Checkpoint.
Spiffies up the documentation a bit, but otherwise just adds the export decorator.
Compatible in both directions with tf.train.Saver: object-based checkpoints can be fed to tf.train.Saver, and name-based checkpoints can be fed to tf.train.Checkpoint.
PiperOrigin-RevId: 193439442
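
The key idea behind the object-based format can be sketched in plain Python (no TensorFlow required; `checkpoint_keys` and the toy graph below are hypothetical illustrations, not TensorFlow APIs): each saved value is named by its path through the tracked object graph rather than by a global variable name, which is what makes a well-defined mapping to and from name-based checkpoints possible.

```python
def checkpoint_keys(obj, prefix=""):
    """Walk a nested-dict 'object graph' and return slash-joined value paths."""
    keys = []
    for name, child in sorted(obj.items()):
        path = prefix + "/" + name if prefix else name
        if isinstance(child, dict):
            keys.extend(checkpoint_keys(child, path))
        else:
            keys.append(path)
    return keys

# A toy dependency graph standing in for tracked model/optimizer objects.
graph = {"model": {"dense": {"kernel": [[1.0]]}}, "optimizer": {"learning_rate": 0.1}}
print(checkpoint_keys(graph))  # ['model/dense/kernel', 'optimizer/learning_rate']
```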

tf.GradientTape can be used both for eager execution and graph construction to compute gradients (unlike tf.gradients, which works only for graph construction).
PiperOrigin-RevId: 189676004
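
The "tape" idea can be illustrated with a minimal, dependency-free reverse-mode sketch (the `Node`, `mul`, and `backward` names are hypothetical illustrations, not TensorFlow APIs): operations record their inputs and local derivatives as they execute, and gradients are accumulated by walking the recording backwards.

```python
class Node:
    """A recorded value: its parent nodes and the local derivative w.r.t. each."""
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents  # list of (parent_node, local_derivative)
        self.grad = 0.0

def add(a, b):
    return Node(a.value + b.value, [(a, 1.0), (b, 1.0)])

def mul(a, b):
    return Node(a.value * b.value, [(a, b.value), (b, a.value)])

def backward(output):
    """Propagate d(output)/d(node) back through the recorded operations.

    Note: a real implementation visits nodes in reverse topological order;
    this naive stack walk is enough for the small expression below.
    """
    output.grad = 1.0
    stack = [output]
    while stack:
        node = stack.pop()
        for parent, local in node.parents:
            parent.grad += node.grad * local
            stack.append(parent)

x = Node(3.0)
y = add(mul(x, x), x)   # y = x*x + x
backward(y)
print(x.grad)           # dy/dx = 2*x + 1 = 7.0
```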

third_party/tensorflow/python/training (move WarmStartSettings definition to third_party/tensorflow/python/estimator/estimator.py), and make _warm_start() public under tf.train.warm_start(). WarmStartSettings and VocabInfo are both available under tf.estimator, and VocabInfo is also available under tf.train.
PiperOrigin-RevId: 188522820

(Moved from the tf.contrib.eager namespace)
PiperOrigin-RevId: 187950503

existing pylint errors.
PiperOrigin-RevId: 185206494

PiperOrigin-RevId: 185166764

PiperOrigin-RevId: 180746153

Reinforcement Learning [Bello et al., ICML 2017].
Also adds cosine decay.
PiperOrigin-RevId: 173451903
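
Cosine decay anneals the learning rate along a half cosine wave from the initial rate down to `alpha * learning_rate`. A minimal sketch of the schedule's shape (the function name and signature here are illustrative, not necessarily the exact exported API):

```python
import math

def cosine_decay(learning_rate, global_step, decay_steps, alpha=0.0):
    # Anneal from learning_rate at step 0 to alpha * learning_rate at decay_steps.
    step = min(global_step, decay_steps)
    cosine = 0.5 * (1.0 + math.cos(math.pi * step / decay_steps))
    decayed = (1.0 - alpha) * cosine + alpha
    return learning_rate * decayed

print(cosine_decay(0.1, 0, 1000))     # 0.1 (start of training)
print(cosine_decay(0.1, 500, 1000))   # ~0.05 (halfway)
print(cosine_decay(0.1, 1000, 1000))  # 0.0 (fully decayed)
```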

PiperOrigin-RevId: 171194291

PiperOrigin-RevId: 159171746

ClusterSpec propagation is a capability upgrade for TensorFlow that should make it much easier to (1) build distributed TensorFlow clusters and (2) handle node failures. ClusterSpec propagation allows TensorFlow workers to be booted independently of each other, with no knowledge of one another. The client can then construct a ClusterDef (ClusterSpec) and send it to the TF master at session creation; the master in turn propagates the ClusterDef to all of the workers.
Change: 155159972
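
Structurally, the cluster definition the client constructs is a mapping from job names to ordered task addresses. A toy sketch of that structure in plain Python (the `task_address` helper is a hypothetical illustration, not a TensorFlow API):

```python
# The cluster definition the client constructs and sends at session creation.
cluster = {
    "ps": ["ps0.example.com:2222"],
    "worker": ["worker0.example.com:2222", "worker1.example.com:2222"],
}

def task_address(cluster, job, task_index):
    """Resolve a (job, task_index) pair to a network address."""
    return cluster[job][task_index]

print(task_address(cluster, "worker", 1))  # worker1.example.com:2222
```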

This adds:
* tf.train.sdca_optimizer
* tf.train.sdca_fprint
* tf.train.sdca_shrink_l1
which, prior to 1.0, were documented in tf.sdca. In 1.0 they were absent from tf.sdca, so this does not break compatibility. The module tf.sdca is removed.
Change: 153176548

Change: 152662945

Change: 151375546

body instead of the title (consistently). Also fix some malformed @{$...} references and titles starting with "##".
Change: 148476930

Change: 148156049

Change: 147412093

Make LoggingTensorHook print tensors in the order they were given (if passed a list).
Added a formatter option to support custom string formatting.
Added numpy printing-options configuration to tweak precision and summarization.
Change: 144748847
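
The ordering and formatter behavior described above can be sketched as follows (`format_log_line` is a hypothetical helper for illustration, not the real hook implementation):

```python
def format_log_line(tensor_values, formatter=None):
    """tensor_values: a list of (name, value) pairs, so given order is kept.

    If formatter is provided, it receives the values and produces the
    final string; otherwise a default "name = value" line is built.
    """
    if formatter is not None:
        return formatter(dict(tensor_values))
    return ", ".join("%s = %s" % (name, value) for name, value in tensor_values)

values = [("loss", 0.25), ("step", 100)]
print(format_log_line(values))  # loss = 0.25, step = 100
print(format_log_line(values, lambda d: "loss only: %s" % d["loss"]))  # loss only: 0.25
```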

Change: 144101397

Change: 142676422

Change: 141904790

training.
Change: 141462689

tf.SyncReplicasOptimizerV2
Change: 141243546

tf.histogram_summary, tf.scalar_summary, tf.audio_summary, tf.image_summary, tf.merge_all_summaries, and tf.merge_summary are all removed. Nearly identical and fully supported APIs are available as tf.summary.histogram, tf.summary.scalar, tf.summary.audio, tf.summary.image, tf.summary.merge_all, and tf.summary.merge.
The major change in the new API is that the summary "tag" is now actually the node name, which means the TF naming system will automatically deduplicate summary tags.
If you need an exact match for the old API, you may use tf.contrib.deprecated.histogram_summary, etc., but these endpoints will eventually be removed.
Change: 140792244
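
The migration is a one-to-one rename, so it can be captured as a lookup table (the symbol pairs come from the text above; the `migrate` helper is an illustrative sketch, not a shipped tool):

```python
# Removed symbol -> supported replacement, per the API change above.
SUMMARY_RENAMES = {
    "tf.histogram_summary": "tf.summary.histogram",
    "tf.scalar_summary": "tf.summary.scalar",
    "tf.audio_summary": "tf.summary.audio",
    "tf.image_summary": "tf.summary.image",
    "tf.merge_all_summaries": "tf.summary.merge_all",
    "tf.merge_summary": "tf.summary.merge",
}

def migrate(symbol):
    """Map a removed summary symbol to its replacement (pass through others)."""
    return SUMMARY_RENAMES.get(symbol, symbol)

print(migrate("tf.scalar_summary"))  # tf.summary.scalar
```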

These classes are now available as tf.summary.FileWriter and tf.summary.FileWriterCache.
Change: 140762782

setting.
Change: 139827659

Change: 138675832

Change: 138143557

Change: 137221633

Change: 136613296

These two helpers are agnostic to the checkpoint format; they handle the naming difference between V1 and V2.
Change: 136181681

V2). Please note that gradients from replicas are now averaged instead of summed (as in the old sync_replicas_optimizer), so you need to increase the learning rate according to the number of replicas. This change was introduced to be consistent with how gradients are aggregated (averaged) within a batch in a replica.
As shown in the code change, the switch results in:
1. Much cleaner and simpler code.
2. A much more efficient and reliable staleness check. It is now 100% strict, with no extra contention on PS servers.
3. No need for a clean_up op, so we can get rid of the abort_op, which could confuse users.
4. The number of replicas can be changed without complaints from checkpointing, as local_step is now just a local variable instead of a global vector variable.
This has been tried with manual restarts of workers (chief or non-chief) and ps, and it seems to be quite robust.
Change: 135513399
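
The learning-rate adjustment follows from simple arithmetic: if the old optimizer applied the sum of N replica gradients and the new one applies their mean, multiplying the learning rate by N recovers the same update. A sketch with hypothetical helper names:

```python
def summed_update(lr, grads):
    # Update magnitude when replica gradients are summed (old behavior).
    return lr * sum(grads)

def averaged_update(lr, grads):
    # Update magnitude when replica gradients are averaged (new behavior).
    return lr * (sum(grads) / len(grads))

grads = [1.0, 2.0, 3.0]                         # gradients from 3 replicas
old = summed_update(0.5, grads)                 # 0.5 * 6.0 = 3.0
new = averaged_update(0.5 * len(grads), grads)  # lr scaled by replica count
print(old == new)  # True
```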

Change: 135373918

Change: 135265457

Change: 135010812

of lazy updates correctly.
Change: 130714247

exposed from the module learning_rate_decay. The very useful decay scheme piecewise_constant, however, seems not to have been exposed.
Change: 126273419
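
A piecewise-constant schedule holds the rate fixed between boundaries. A minimal sketch of the semantics (illustrative, not the exported implementation):

```python
def piecewise_constant(global_step, boundaries, values):
    """len(values) == len(boundaries) + 1; values[i] applies up to boundaries[i],
    and values[-1] applies beyond the last boundary."""
    for boundary, value in zip(boundaries, values):
        if global_step <= boundary:
            return value
    return values[-1]

boundaries = [100000, 110000]
values = [1.0, 0.5, 0.1]
print(piecewise_constant(50000, boundaries, values))   # 1.0
print(piecewise_constant(105000, boundaries, values))  # 0.5
print(piecewise_constant(200000, boundaries, values))  # 0.1
```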

Change: 124903871

reuse. It can also be used directly for simple training.
Fix Coordinator.clear_stop() to also clear the exception to raise; add a test.
Add SummaryWriter.reopen(), with tests. This is needed to properly handle summaries when creating a session more than once in a Supervisor.
In Supervisor.prepare_or_wait_for_session(), reopen the summary writer.
At the end of Supervisor.managed_session(), correctly close the summary writer and clear the running threads even if an exception was reported.
Change: 124500982
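
The clear_stop() fix can be illustrated with a stripped-down coordinator (a hypothetical sketch, not the real class): clearing the stop condition must also drop any exception recorded by request_stop(), otherwise the next managed run would re-raise a stale error.

```python
import threading

class MiniCoordinator:
    """Stripped-down stop/clear_stop logic, for illustration only."""
    def __init__(self):
        self._stop_event = threading.Event()
        self._exc_to_raise = None

    def request_stop(self, exc=None):
        # Record the exception (if any) and signal all waiters to stop.
        self._exc_to_raise = exc
        self._stop_event.set()

    def should_stop(self):
        return self._stop_event.is_set()

    def clear_stop(self):
        # The fix: clear the pending exception along with the stop flag.
        self._stop_event.clear()
        self._exc_to_raise = None

coord = MiniCoordinator()
coord.request_stop(RuntimeError("worker died"))
coord.clear_stop()
print(coord.should_stop())  # False
print(coord._exc_to_raise)  # None
```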

regularization for Adagrad and GradientDescent, respectively.
Without l1 and l2 regularization, ProximalAdagrad and ProximalGradientDescent are exactly the same as Adagrad and GradientDescent, respectively.
Change: 124206988
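
The "reduces to plain GradientDescent when l1 = l2 = 0" property can be checked with a scalar sketch of a proximal gradient step (illustrative formulas only: l2 via shrinkage and l1 via soft-thresholding; this is not the exported kernel):

```python
import math

def proximal_gd_step(w, grad, lr, l1=0.0, l2=0.0):
    w = w - lr * grad        # plain gradient-descent step
    w = w / (1.0 + lr * l2)  # proximal operator for l2 (shrinkage)
    # Proximal operator for l1 (soft-thresholding toward zero).
    return math.copysign(max(abs(w) - lr * l1, 0.0), w)

# With l1 = l2 = 0 this is exactly plain gradient descent.
print(proximal_gd_step(1.0, 0.5, 0.1))          # 0.95
# l1 pulls small weights toward zero.
print(proximal_gd_step(1.0, 0.5, 0.1, l1=1.0))  # 0.85
```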

Change: 123900456

Background: InferenceExample was confusing because:
(a) it exposed Features rather than Examples, and
(b) it was primarily intended for serving optimization.
Change: 121402533

* Add a simple S16LE WAV encoder.
* Add an Audio value type to the Summary protocol buffer.
* Add AudioSummary kernel and op.
* Add support to EventAccumulator/EventMultiplexer for Audio events.
* Add 16-bit little endian encode/decode functions.
Change: 120854931
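
S16LE encoding can be sketched with the standard library (a minimal illustration, not the actual TensorFlow kernel): each float sample in [-1.0, 1.0] is clamped and packed as a signed 16-bit integer, least-significant byte first.

```python
import struct

def encode_s16le(samples):
    """Pack float samples in [-1.0, 1.0] as signed 16-bit little-endian PCM."""
    ints = [max(-32768, min(32767, int(round(s * 32767.0)))) for s in samples]
    return struct.pack("<%dh" % len(ints), *ints)

def decode_s16le(data):
    """Inverse: unpack S16LE bytes back to floats in roughly [-1.0, 1.0]."""
    ints = struct.unpack("<%dh" % (len(data) // 2), data)
    return [i / 32767.0 for i in ints]

print(encode_s16le([0.0, 1.0]))  # b'\x00\x00\xff\x7f'
print(decode_s16le(b'\xff\x7f'))  # [1.0]
```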

This is a breaking change! The following classes have been renamed:
tf.GrpcServer -> tf.train.Server
tf.ClusterSpec -> tf.train.ClusterSpec
tf.ServerDef -> tf.train.ServerDef
tf.JobDef -> tf.train.JobDef
tf.ClusterDef -> tf.train.ClusterDef
The constructor for tf.train.Server is more permissive and now accepts tf.train.ClusterSpec, tf.train.ClusterDef, and dictionary inputs for specifying the cluster part of the server.
For consistency, the server library moves from python/client to python/training.
Change: 119335624
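
A "more permissive constructor" like this typically normalizes the accepted input shapes to one internal form. A sketch of that idea (the `normalize_cluster` helper and the `as_dict` protocol here are illustrative assumptions, not the actual TensorFlow implementation):

```python
def normalize_cluster(cluster):
    """Accept either a {job: [address, ...]} dict or an object with as_dict()."""
    if isinstance(cluster, dict):
        return cluster
    return cluster.as_dict()

class FakeClusterSpec:
    # Stand-in for a spec object that exposes the cluster as a dict.
    def as_dict(self):
        return {"worker": ["localhost:2222"]}

print(normalize_cluster({"worker": ["localhost:2222"]}))  # {'worker': ['localhost:2222']}
print(normalize_cluster(FakeClusterSpec()))               # {'worker': ['localhost:2222']}
```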