| Commit message | Author | Date |
* | Fix potential use-after-free in the training ops. | Derek Murray | 2018-09-26 |
* | Fix FTRL L2-shrinkage behavior: the gradient from the L2 shrinkage term shoul... | A. Unique TensorFlower | 2018-08-28 |
* | Add an attr to apply_adagrad op that allows it to skip updating the accumulat... | A. Unique TensorFlower | 2018-04-24 |
* | Merge changes from github. | Yifei Feng | 2018-04-23 |
* | Add bfloat16 support for CPU ops. | A. Unique TensorFlower | 2018-03-02 |
* | Changed FTRL formula for scalars to match vector version better. | A. Unique TensorFlower | 2018-02-16 |
* | Cleanup: Ran clang-format on all *.{cc,h} files in tensorflow/core/kernels. | A. Unique TensorFlower | 2018-01-26 |
* | Fix broken usage of mutexes in training ops like AdaDelta. | Eugene Brevdo | 2017-12-05 |
* | Fixed code for Adadelta to match correct algorithm and tightened tolerances in | A. Unique TensorFlower | 2017-11-27 |
* | Open-sourcing AddSign and PowerSign optimizers, found in Neural Optimizer | A. Unique TensorFlower | 2017-11-17 |
* | SYCL: Fix build breakage introduced in | Asim Shankar | 2017-10-03 |
* | Cleanup training_ops to reduce code redundancy. | A. Unique TensorFlower | 2017-09-11 |
* | Switch resource variables from copy-on-read to copy-on-write. | A. Unique TensorFlower | 2017-09-11 |
* | Fix locking of variables in SparseProximalGradientDescent, | A. Unique TensorFlower | 2017-09-11 |
* | Simplified formula for FTRL update. | A. Unique TensorFlower | 2017-08-10 |
* | Remove Delay Compensated Asynchronous Stochastic Gradient Descent (DCASGD). The | A. Unique TensorFlower | 2017-08-08 |
* | Add support for the shrinkage-type L2 to FtrlOptimizer in addition to the onl... | A. Unique TensorFlower | 2017-07-05 |
* | Prepare to not include node_def.proto.h in node_def_util.h | Geoffrey Irving | 2017-06-23 |
* | Merge changes from github. | Jonathan Hseu | 2017-06-09 |
* | Merge changes from github. | A. Unique TensorFlower | 2017-05-18 |
* | Refactor training helper functions to separate library. | A. Unique TensorFlower | 2017-04-28 |
* | Fixes correctness of ProximalAdagrad and ProximalGradientDescent under normal... | A. Unique TensorFlower | 2017-03-28 |
* | Fixes broken gpu kernel registration for resource variables. | Alexandre Passos | 2017-03-20 |
* | Merge changes from github. | Andrew Harp | 2017-03-01 |
* | Enables all optimizers for dense resource variables. | A. Unique TensorFlower | 2017-01-27 |
* | Kernels and ops for all optimizers when using resource variables. | A. Unique TensorFlower | 2017-01-12 |
* | Merge changes from github. | Jonathan Hseu | 2016-12-22 |
* | Extend the FTRL and Proximal AdaGrad optimizer to support accepting scalar te... | A. Unique TensorFlower | 2016-12-21 |
* | Fix linter errors left over from PRs. | Martin Wicke | 2016-12-15 |
* | Merge changes from github. | Martin Wicke | 2016-12-14 |
* | Fix a bug in the CUDA implementation of centered RMSProp. Reordered the sum (... | A. Unique TensorFlower | 2016-10-11 |
* | Add option 'centered' to RMSPropOptimizer. When set to False, gradients are n... | A. Unique TensorFlower | 2016-10-06 |
* | Improved the formatting of the code | Benoit Steiner | 2016-08-22 |
* | Adagrad Dual Averaging optimizer for sparse linear models, that takes care of... | A. Unique TensorFlower | 2016-08-18 |
* | Merge changes from github. | Benoit Steiner | 2016-08-18 |
* | Merge changes from github. | Martin Wicke | 2016-07-25 |
* | Merge changes from github. | Vijay Vasudevan | 2016-06-11 |
* | ProximalAdagrad and ProximalGradientDescent, which provide l1 and l2 regulari... | A. Unique TensorFlower | 2016-06-06 |
* | Merge changes from github. | Martin Wicke | 2016-06-06 |
* | Change some kernels to use TF_CALL* macros, so that the instantiations for some | A. Unique TensorFlower | 2016-06-06 |
* | Update copyright for 3p/tf/core. | A. Unique TensorFlower | 2016-06-02 |
* | Add macros for mutex_lock and shared_lock, so compilation fails if | A. Unique TensorFlower | 2016-05-23 |
* | Add support for fp16 to all the training ops. Note that FTRL in particular | A. Unique TensorFlower | 2016-04-20 |
* | Add explicit casts for a few ops that assumed it could implicitly cast to/from | A. Unique TensorFlower | 2016-03-29 |
* | Merge changes from github, some fixes to adhere somewhat | Vijay Vasudevan | 2016-03-22 |
* | Eliminating the DoValidation/DoCompute split for training kernels and | David G. Andersen | 2016-03-16 |
* | Re-introducing previous fixes to locking, taking | David G. Andersen | 2016-03-16 |
* | Reverting use_locking change temporarily to fix test failure. | David G. Andersen | 2016-03-15 |
* | Fix stale uses of the use_locking attr that could allow unlocked access | David G. Andersen | 2016-03-15 |
* | Fix the crash when applying SparseFtrl update where inner_dimension > 0 | A. Unique TensorFlower | 2016-03-09 |