path: root/tensorflow/core/kernels/training_ops.cc
Commit message (Author, Date)
* Fix potential use-after-free in the training ops. (Derek Murray, 2018-09-26)
* Fix FTRL L2-shrinkage behavior: the gradient from the L2 shrinkage term shoul... (A. Unique TensorFlower, 2018-08-28)
* Add an attr to apply_adagrad op that allows it to skip updating the accumulat... (A. Unique TensorFlower, 2018-04-24)
* Merge changes from github. (Yifei Feng, 2018-04-23)
* Add bfloat16 support for CPU ops. (A. Unique TensorFlower, 2018-03-02)
* Changed FTRL formula for scalars to match vector version better. (A. Unique TensorFlower, 2018-02-16)
* Cleanup: Ran clang-format on all *.{cc,h} files in tensorflow/core/kernels. (A. Unique TensorFlower, 2018-01-26)
* Fix broken usage of mutexes in training ops like AdaDelta. (Eugene Brevdo, 2017-12-05)
* Fixed code for Adadelta to match correct algorithm and tightened tolerances in... (A. Unique TensorFlower, 2017-11-27)
* Open-sourcing AddSign and PowerSign optimizers, found in Neural Optimizer... (A. Unique TensorFlower, 2017-11-17)
* SYCL: Fix build breakage introduced in... (Asim Shankar, 2017-10-03)
* Cleanup training_ops to reduce code redundancy. (A. Unique TensorFlower, 2017-09-11)
* Switch resource variables from copy-on-read to copy-on-write. (A. Unique TensorFlower, 2017-09-11)
* Fix locking of variables in SparseProximalGradientDescent,... (A. Unique TensorFlower, 2017-09-11)
* Simplified formula for FTRL update. (A. Unique TensorFlower, 2017-08-10)
* Remove Delay Compensated Asynchronous Stochastic Gradient Descent (DCASGD). The... (A. Unique TensorFlower, 2017-08-08)
* Add support for the shrinkage-type L2 to FtrlOptimizer in addition to the onl... (A. Unique TensorFlower, 2017-07-05)
* Prepare to not include node_def.proto.h in node_def_util.h (Geoffrey Irving, 2017-06-23)
* Merge changes from github. (Jonathan Hseu, 2017-06-09)
* Merge changes from github. (A. Unique TensorFlower, 2017-05-18)
* Refactor training helper functions to separate library. (A. Unique TensorFlower, 2017-04-28)
* Fixes correctness of ProximalAdagrad and ProximalGradientDescent under normal... (A. Unique TensorFlower, 2017-03-28)
* Fixes broken gpu kernel registration for resource variables. (Alexandre Passos, 2017-03-20)
* Merge changes from github. (Andrew Harp, 2017-03-01)
* Enables all optimizers for dense resource variables. (A. Unique TensorFlower, 2017-01-27)
* Kernels and ops for all optimizers when using resource variables. (A. Unique TensorFlower, 2017-01-12)
* Merge changes from github. (Jonathan Hseu, 2016-12-22)
* Extend the FTRL and Proximal AdaGrad optimizer to support accepting scalar te... (A. Unique TensorFlower, 2016-12-21)
* Fix linter errors left over from PRs. (Martin Wicke, 2016-12-15)
* Merge changes from github. (Martin Wicke, 2016-12-14)
* Fix a bug in the CUDA implementation of centered RMSProp. Reordered the sum (... (A. Unique TensorFlower, 2016-10-11)
* Add option 'centered' to RMSPropOptimizer. When set to False, gradients are n... (A. Unique TensorFlower, 2016-10-06)
* Improved the formatting of the code. (Benoit Steiner, 2016-08-22)
* Adagrad Dual Averaging optimizer for sparse linear models, that takes care of... (A. Unique TensorFlower, 2016-08-18)
* Merge changes from github. (Benoit Steiner, 2016-08-18)
* Merge changes from github. (Martin Wicke, 2016-07-25)
* Merge changes from github. (Vijay Vasudevan, 2016-06-11)
* ProximalAdagrad and ProximalGradientDescent, which provide l1 and l2 regulari... (A. Unique TensorFlower, 2016-06-06)
* Merge changes from github. (Martin Wicke, 2016-06-06)
* Change some kernels to use TF_CALL* macros, so that the instantiations for some... (A. Unique TensorFlower, 2016-06-06)
* Update copyright for 3p/tf/core. (A. Unique TensorFlower, 2016-06-02)
* Add macros for mutex_lock and shared_lock, so compilation fails if... (A. Unique TensorFlower, 2016-05-23)
* Add support for fp16 to all the training ops. Note that FTRL in particular... (A. Unique TensorFlower, 2016-04-20)
* Add explicit casts for a few ops that assumed it could implicitly cast to/from... (A. Unique TensorFlower, 2016-03-29)
* Merge changes from github, some fixes to adhere somewhat... (Vijay Vasudevan, 2016-03-22)
* Eliminating the DoValidation/DoCompute split for training kernels and... (David G. Andersen, 2016-03-16)
* Re-introducing previous fixes to locking, taking... (David G. Andersen, 2016-03-16)
* Reverting use_locking change temporarily to fix test failure. (David G. Andersen, 2016-03-15)
* Fix stale uses of the use_locking attr that could allow unlocked access... (David G. Andersen, 2016-03-15)
* Fix the crash when applying SparseFtrl update where inner_dimension > 0 (A. Unique TensorFlower, 2016-03-09)