path: root/third_party/eigen3
Commit message | Author | Date
* Merge changes from github. | Dandelion Mané | 2017-03-10
  Change: 149800363
* Merge changes from github. | Andrew Harp | 2017-03-01
  Change: 148954491
* "the the" -> "the" | A. Unique TensorFlower | 2017-02-21
  Change: 148163782
* Merge changes from github. | Vijay Vasudevan | 2017-02-17
  Change: 147897309
* Internal change. | A. Unique TensorFlower | 2017-02-09
  Change: 147051664
* Deleted references to the non-existent //third_party/mkl library. | Benoit Steiner | 2017-02-08
  Change: 146970526
* Merge changes from github. | Benoit Steiner | 2017-02-08
  Change: 146918929
* Merge changes from github. | Patrick Nguyen | 2017-01-12
  Change: 144396000
* Merge changes from github. | Jonathan Hseu | 2016-12-22
  Change: 142805270
* Sync the github and local versions of the ThreadPool header. | Benoit Steiner | 2016-12-15
  Change: 142169284
* Add licenses. | A. Unique TensorFlower | 2016-11-28
  Change: 140396287
* Merge changes from github. | Benoit Steiner | 2016-11-09
  Change: 138675832
* Merge changes from github. | Xiaoqiang Zheng | 2016-10-28
  Change: 137532946
* Remove unused eigen files. | Yifei Feng | 2016-10-25
* Internal change. | A. Unique TensorFlower | 2016-10-19
  Change: 136615121
* Merge changes from github. | A. Unique TensorFlower | 2016-10-10
  Change: 135698415
* Optimize Bazel external dependencies. | Justine Tunney | 2016-09-21
  This change does the following:
  - Always use {,new_}http_archive rather than git_repository
  - Make liberal use of strip_prefix
  - Clarify licenses() in BUILD files
  - On POSIX, include headers like a normal C/C++ program
  This change accomplishes the following:
  - Reduce download size by >100MB: the biggest culprit is grpc, which has tens of thousands of commits in its GitHub repository.
  - Reduce disk size by >200MB: on disk, grpc takes up 250MB when cloned, even though the tarball of the git repo is 3.2MB. By never using git externals, we save on network.
  - Consume less CPU: cloning git repositories is much slower than downloading and extracting a tarball.
  Change: 133895791
* Remove unused files. | Yifei Feng | 2016-09-19
* Add an op for singular value decomposition (SVD) of a dense matrix or batches of dense matrices. | A. Unique TensorFlower | 2016-08-01
  This calls Eigen::JacobiSVD<Matrix, Eigen::HouseholderQRPreconditioner>, which is known to be rather slow. This change is primarily intended to get the TensorFlow interfaces and functionality in place. We intend to swap out the "backend" with a higher-performance algorithm implementation in the future.
  This CL also contains a small refactoring of the LinearAlgebraOp base class:
  1. Moved the initial processing of inputs and outputs into separate helper functions so Compute() is not so long.
  2. The derived classes are now allowed to return fewer output matrix shapes (n) than the number of op outputs (m), in which case empty (shape [0]) tensors are returned for the last m-n outputs.
  Fixed a few Python linter errors that were blocking presubmit.
  Change: 128990912
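  For reference, a minimal standalone sketch of the Eigen call named above (not the TensorFlow op itself; the matrix size and decomposition options are illustrative assumptions):

    // Sketch only: JacobiSVD with the HouseholderQR preconditioner mentioned
    // in the commit message, applied to a small random matrix.
    #include <iostream>
    #include <Eigen/Dense>

    int main() {
      Eigen::MatrixXd m = Eigen::MatrixXd::Random(4, 3);
      Eigen::JacobiSVD<Eigen::MatrixXd, Eigen::HouseholderQRPreconditioner>
          svd(m, Eigen::ComputeThinU | Eigen::ComputeThinV);
      std::cout << "Singular values:\n" << svd.singularValues() << "\n";
      return 0;
    }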
* Merge changes from github. | Martin Wicke | 2016-07-25
  Change: 128401884
* Switched to the latest version of Eigen that provides significant performance improvements for fp16. | Benoit Steiner | 2016-07-12
  Added SpecialFunctions to the list of Eigen headers TensorFlow depends on.
  Change: 127264575
* Automated rollback of change 127233960. | Vijay Vasudevan | 2016-07-12
  Change: 127253427
* Switched to the latest version of Eigen that provides significant performance improvements for fp16. | Benoit Steiner | 2016-07-12
  Change: 127233960
* Adds a "currentThreadIndex" method to Eigen's ThreadPoolDevice. Use it to handle per-thread buffer allocation for the tileable executor without resorting to thread_local, which is not fully supported on Android. | A. Unique TensorFlower | 2016-06-27
  Change: 126009029
* Upgraded Eigen to the latest version that provides new scan operations. This will enable the implementation of the cumsum operation in TensorFlow. | Benoit Steiner | 2016-06-23
  Change: 125697517
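  A small sketch of the Eigen-level scan primitive this refers to (the Eigen Tensor API, not the TensorFlow cumsum op; the shapes and values are illustrative assumptions):

    // Sketch only: cumulative sum along one dimension of an Eigen Tensor,
    // the scan operation that a cumsum op builds on.
    #include <iostream>
    #include <unsupported/Eigen/CXX11/Tensor>

    int main() {
      Eigen::Tensor<float, 2> t(2, 3);
      t.setValues({{1.f, 2.f, 3.f}, {4.f, 5.f, 6.f}});
      Eigen::Tensor<float, 2> running = t.cumsum(1);  // running sums per row
      std::cout << running << "\n";  // 1 3 6 / 4 9 15
      return 0;
    }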
* Enable the vectorization of adds and mults on fp16s. This improves the performance of the toy MNIST training by one order of magnitude. | Benoit Steiner | 2016-06-08
  Change: 124374286
* Improved the performance of full reductions on GPU. | Benoit Steiner | 2016-06-07
  NEW
    BM_fullReduction/10      4591    4595   153149   20.8M items/s
    BM_fullReduction/64      5073    5075   100000  770.0M items/s
    BM_fullReduction/512     9067    9070    75263   26.9G items/s
    BM_fullReduction/4k    243984  244125     2868   64.0G items/s
    BM_fullReduction/5k    359125  359273     1951   64.8G items/s
  OLD
    BM_fullReduction/10      9085    9087    74395   10.5M items/s
    BM_fullReduction/64      9478    9478    72014  412.1M items/s
    BM_fullReduction/512    14643   14646    46902   16.7G items/s
    BM_fullReduction/4k    260338  260384     2678   60.0G items/s
    BM_fullReduction/5k    385076  385178     1818   60.5G items/s
  Change: 124290852
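  For context, a "full reduction" collapses every dimension of a tensor into one scalar. The sketch below shows the CPU-side form of the expression; the speedup benchmarked above comes from Eigen's CUDA evaluator of the same expression, so the GPU device setup is omitted here:

    // Sketch only: full reduction of an Eigen Tensor to a rank-0 (scalar) tensor.
    #include <iostream>
    #include <unsupported/Eigen/CXX11/Tensor>

    int main() {
      Eigen::Tensor<float, 2> t(512, 512);
      t.setRandom();
      Eigen::Tensor<float, 0> total = t.sum();  // reduce over all dimensions
      std::cout << "sum = " << total() << "\n";
      return 0;
    }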
* Enable fp16 for most of the pooling ops (MaxPool, AvgPool, associated gradients, some variants, etc.). | Benoit Steiner | 2016-06-06
  Change: 124197406
* Enable fp16 for most of the pooling ops (MaxPool, AvgPool, associated gradients, some variants, etc.). | A. Unique TensorFlower | 2016-06-03
  Change: 123967787
* Enable fp16 for most of the pooling ops (MaxPool, AvgPool, associated gradients, some variants, etc.). | Benoit Steiner | 2016-06-03
  Change: 123967117
* Added support for convolutions of 16-bit floats on CPU. | Benoit Steiner | 2016-05-31
  Change: 123659102
* Upgraded to the latest version of Eigen that supports convolutions on fp16. | Benoit Steiner | 2016-05-25
  Change: 123238579
* Switched to the latest version of Eigen that performs much better on machines with many CPU cores. | Benoit Steiner | 2016-05-16
  For example, the wall time for the following tutorial went down from 13m35s to 5m27s:
  bazel run -c opt --copt=-mavx tensorflow/examples/tutorials/word2vec/word2vec_basic
  Change: 122462177
* Made the contraction code compatible with fp16. | Benoit Steiner | 2016-05-12
  Change: 122192081
* Upgraded to the latest version of Eigen that speeds up full reductions on fp16 by about 3 orders of magnitude, as well as some partial reductions by 30%, when using CUDA 7.5 or above. | Benoit Steiner | 2016-05-12
  Change: 122191448
* Improved support for min and max on 16-bit floats when running on recent CUDA GPUs. Updated the check-numerics code to make it compatible with fp16. | Benoit Steiner | 2016-04-27
  Change: 120980302
* Made it possible to compute the cross entropy using 16-bit floats. | Benoit Steiner | 2016-04-25
  Change: 120739269
* Rollback of rollback of cl/120366069: tensorflow: switch to eigen thread pool. | A. Unique TensorFlower | 2016-04-21
  This is the first step of switching TensorFlow to the new non-blocking thread pool in Eigen.
  Change: 120510292
* Prevent TensorFlow from crashing when attempting to reduce an empty tensor on GPU. | Benoit Steiner | 2016-04-21
  Change: 120505517
* Fixed a compilation error when targeting CUDA 3.0 devices such as the ones offered by AWS. | Benoit Steiner | 2016-04-20
  Change: 120369420
* Upgraded to the latest version of Eigen that adds support for computing the sigmoid of fp16 and introduces a condition estimator. | Benoit Steiner | 2016-04-14
  Change: 119907721
* Added support for trigonometric and transcendental functions of half floats. | Benoit Steiner | 2016-04-14
  Change: 119850987
* Upgraded to the latest version of Eigen that provides significant performance improvements for fp16. | Benoit Steiner | 2016-04-13
  Change: 119771118
* Made isinf, isnan, isfinite, ceil and floor work with 16-bit floats. | Benoit Steiner | 2016-04-09
  Change: 119458778
* Upgraded to the latest version of Eigen that has bug fixes for complex numbers as well as fp16. | Benoit Steiner | 2016-04-08
  Change: 119398881
* Upgraded to the latest version of Eigen that introduces implementations of the zeta and polygamma functions, as well as improved support for float16. | Benoit Steiner | 2016-04-07
  Change: 119279101
* Upgrade to the latest version of Eigen that provides better support for float16 and fixes the computation of absolute values on GPU. | Benoit Steiner | 2016-04-04
  Change: 119001808
* Upgraded to the latest version of Eigen. | Benoit Steiner | 2016-03-28
  Change: 118414762
* Upgraded to the latest version of Eigen that provides better support for fp16. | Benoit Steiner | 2016-03-28
  Use Eigen mod functors directly instead of duplicating them.
  Change: 118362359
* Move the NeuralNetwork code out of third_party/eigen3 and into tensorflow/core/kernel. | Benoit Steiner | 2016-03-23
  Change: 117941211