path: root/third_party/eigen3/Eigen
Commit message (Author, Date)
* Merge commit for internal changes (zhengxq, 2016-08-04)
|\
| * Add an op for singular value decomposition (SVD) of a dense matrix or batches of dense matrices. (A. Unique TensorFlower, 2016-08-01)
| |   This calls Eigen::JacobiSVD<Matrix, Eigen::HouseholderQRPreconditioner>, which is known to be rather slow. This change is primarily intended to get the TensorFlow interfaces and functionality in place. We intend to swap out the "backend" with a higher-performance algorithm implementation in the future.
| |   This CL also contains a small refactoring of the LinearAlgebraOp base class:
| |   1. I moved the initial processing of inputs and outputs into separate helper functions so Compute() is not so long.
| |   2. The derived classes are now allowed to return fewer output matrix shapes (n) than the number of op outputs (m), in which case empty (shape [0]) tensors are returned for the last m-n outputs.
| |   Fixed a few Python linter errors that were blocking presubmit.
| |   Change: 128990912
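For reference, a minimal standalone sketch of the Eigen::JacobiSVD call named in the entry above, using the HouseholderQRPreconditioner; the matrix size and the thin-U/thin-V options are illustrative and not taken from the TensorFlow op.

    #include <iostream>
    #include <Eigen/Dense>

    int main() {
      // Small random matrix standing in for one of the "dense matrices".
      Eigen::MatrixXf a = Eigen::MatrixXf::Random(4, 3);
      // JacobiSVD with the HouseholderQRPreconditioner named above.
      Eigen::JacobiSVD<Eigen::MatrixXf, Eigen::HouseholderQRPreconditioner> svd(
          a, Eigen::ComputeThinU | Eigen::ComputeThinV);
      std::cout << "singular values:\n" << svd.singularValues() << "\n"
                << "U:\n" << svd.matrixU() << "\nV:\n" << svd.matrixV() << "\n";
      return 0;
    }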
| * Merge changes from github. (Martin Wicke, 2016-07-25)
| |   Change: 128401884
* | Simplify Eigen package config (#3288) (Igor Babuschkin, 2016-07-19)
| |   * Simplify Eigen package config
| |   * Add missing unsupported/Eigen/*
| |   * Fix pip setup.py
| |   * Adjust new eigen header
| |   * Fix bazel include dependency error
| |   * Adjust Makefile to work with Eigen changes
| |   * Remove nvcc workaround for CUDA <= 6.0: CUDA versions prior to 6.5 gave a "kernel launches from templates are not allowed in system files" error when using gcc v4.8 and including code that uses templated kernel launches via `-isystem`. In order to work around this, the GPU crosstool converted `-isystem` arguments containing the cuda headers into `-iquote` arguments. This workaround has now been removed.
| |   * Configure cmake and make to get eigen version from tensorflow/workspace.bzl
* | Merge commit for internal changes (Vijay Vasudevan, 2016-07-13)
|\|
| * Switched to the latest version of Eigen that provides significant performance improvements for fp16. (Benoit Steiner, 2016-07-12)
| |   Added SpecialFunctions to the list of Eigen headers TensorFlow depends on.
| |   Change: 127264575
| * Automated rollback of change 127233960 (Vijay Vasudevan, 2016-07-12)
| |   Change: 127253427
* | Update Eigen to version that includes scan op fix (#3275) (Igor Babuschkin, 2016-07-12)
| |
| * Switched to the latest version of Eigen that provides significant performance improvements for fp16. (Benoit Steiner, 2016-07-12)
| |   Change: 127233960
* | Merge commit for internal changes (Rasmus Larsen, 2016-06-28)
|\|
| * Adds a "currentThreadIndex" method to Eigen's ThreadPoolDevice. (A. Unique TensorFlower, 2016-06-27)
| |   Use it to handle per-thread buffer allocation for the tileable executor without resorting to thread_local, which is not fully supported on Android.
| |   Change: 126009029
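A hedged sketch of the per-thread buffer pattern described above: each scheduled task picks a scratch buffer by its pool-thread index instead of using thread_local. The currentThreadIndex() spelling follows this commit message; newer Eigen releases may spell it currentThreadId(), and Eigen::ThreadPool is the pool typedef from current Eigen headers.

    #define EIGEN_USE_THREADS
    #include <future>
    #include <iostream>
    #include <vector>
    #include <unsupported/Eigen/CXX11/Tensor>

    int main() {
      const int num_threads = 4;
      Eigen::ThreadPool pool(num_threads);                 // Eigen worker threads
      Eigen::ThreadPoolDevice device(&pool, num_threads);  // device wrapper TensorFlow hands to kernels

      // One scratch buffer per pool thread, instead of thread_local storage.
      std::vector<std::vector<float>> scratch(num_threads, std::vector<float>(1024));

      std::promise<int> done;
      pool.Schedule([&]() {
        // Index of the calling worker thread within the pool (-1 if off-pool);
        // may be spelled currentThreadId() in newer Eigen versions.
        int idx = device.currentThreadIndex();
        if (idx >= 0) scratch[idx][0] = 42.0f;  // touch only this thread's buffer
        done.set_value(idx);
      });
      std::cout << "task ran on pool thread " << done.get_future().get() << "\n";
      return 0;
    }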
* | Merge commit for internal changes (Maciek Chociej, 2016-06-24)
|\|
| * Upgraded Eigen to the latest version that provides new scan operations. (Benoit Steiner, 2016-06-23)
| |   This will enable the implementation of the cumsum operation in TensorFlow.
| |   Change: 125697517
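A small sketch of the Eigen Tensor scan operation this upgrade enables (the building block for TensorFlow's cumsum); the tensor shape and axis are illustrative.

    #include <iostream>
    #include <unsupported/Eigen/CXX11/Tensor>

    int main() {
      Eigen::Tensor<float, 2> t(2, 3);
      t.setValues({{1.f, 2.f, 3.f}, {4.f, 5.f, 6.f}});
      // Running sum along dimension 1: {{1, 3, 6}, {4, 9, 15}}.
      Eigen::Tensor<float, 2> c = t.cumsum(1);
      std::cout << c << "\n";
      return 0;
    }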
* | Merge commit for internal changes (Vijay Vasudevan, 2016-06-08)
|\|
| * Enable the vectorization of adds and mults on fp16s. (Benoit Steiner, 2016-06-08)
| |   This improves the performance of the toy MNIST training by an order of magnitude.
| |   Change: 124374286
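A minimal illustration of the half-precision arithmetic being vectorized here: elementwise add and multiply on Eigen::half tensors. Sizes and values are illustrative.

    #include <iostream>
    #include <unsupported/Eigen/CXX11/Tensor>

    int main() {
      Eigen::Tensor<Eigen::half, 1> a(8), b(8);
      a.setConstant(Eigen::half(1.5f));
      b.setConstant(Eigen::half(2.0f));
      Eigen::Tensor<Eigen::half, 1> sum = a + b;   // elementwise add
      Eigen::Tensor<Eigen::half, 1> prod = a * b;  // elementwise multiply
      std::cout << static_cast<float>(sum(0)) << " "    // 3.5
                << static_cast<float>(prod(0)) << "\n"; // 3
      return 0;
    }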
* | Merge commit for internal changes (Vijay Vasudevan, 2016-06-07)
|\|
| * Improved the performance of full reductions on GPU. (Benoit Steiner, 2016-06-07)
| |   NEW
| |   BM_fullReduction/10       4591      4595    153149    20.8M items/s
| |   BM_fullReduction/64       5073      5075    100000   770.0M items/s
| |   BM_fullReduction/512      9067      9070     75263    26.9G items/s
| |   BM_fullReduction/4k     243984    244125      2868    64.0G items/s
| |   BM_fullReduction/5k     359125    359273      1951    64.8G items/s
| |   OLD
| |   BM_fullReduction/10       9085      9087     74395    10.5M items/s
| |   BM_fullReduction/64       9478      9478     72014   412.1M items/s
| |   BM_fullReduction/512     14643     14646     46902    16.7G items/s
| |   BM_fullReduction/4k     260338    260384      2678    60.0G items/s
| |   BM_fullReduction/5k     385076    385178      1818    60.5G items/s
| |   Change: 124290852
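The benchmark above measures full reductions, i.e. collapsing an entire tensor to a single value. A CPU-side sketch of that operation with the Eigen Tensor API (the benchmark itself exercises the GPU path):

    #include <iostream>
    #include <unsupported/Eigen/CXX11/Tensor>

    int main() {
      Eigen::Tensor<float, 2> t(512, 512);
      t.setConstant(1.0f);
      // Full reduction: collapse every dimension down to a rank-0 tensor.
      Eigen::Tensor<float, 0> total = t.sum();
      std::cout << total() << "\n";  // 262144
      return 0;
    }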
* | Merge commit for internal changes (Vijay Vasudevan, 2016-06-07)
|\|
| * Enable fp16 for most of the pooling ops (MaxPool, AvgPool, associated gradients, some variants etc.). (Benoit Steiner, 2016-06-06)
| |   Change: 124197406
| * Enable fp16 for most of the pooling ops (MaxPool, AvgPool, associated gradients, some variants etc.). (A. Unique TensorFlower, 2016-06-03)
| |   Change: 123967787
| * Enable fp16 for most of the pooling ops (MaxPool, AvgPool, associated gradients, some variants etc.). (Benoit Steiner, 2016-06-03)
| |   Change: 123967117
* | Merge commits from internal. (Martin Wicke, 2016-06-02)
|\|
| * Added support for convolutions of 16-bit floats on CPU. (Benoit Steiner, 2016-05-31)
| |   Change: 123659102
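A hedged sketch of a half-precision convolution with the Eigen Tensor API, the kind of CPU fp16 support this entry adds; the 1-D shapes and values are illustrative.

    #include <iostream>
    #include <unsupported/Eigen/CXX11/Tensor>

    int main() {
      Eigen::Tensor<Eigen::half, 1> input(10), kernel(3);
      input.setConstant(Eigen::half(1.0f));
      kernel.setConstant(Eigen::half(0.5f));
      Eigen::array<ptrdiff_t, 1> dims = {0};  // convolve along dimension 0
      Eigen::Tensor<Eigen::half, 1> out = input.convolve(kernel, dims);
      // 10 - 3 + 1 = 8 outputs, each 3 * (1.0 * 0.5) = 1.5.
      std::cout << out.dimension(0) << " values, first = "
                << static_cast<float>(out(0)) << "\n";
      return 0;
    }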
* | Merge commit for internal changes (Vijay Vasudevan, 2016-05-25)
|\|
| * Upgraded to the latest version of Eigen that supports convolutions on fp16. (Benoit Steiner, 2016-05-25)
| |   Change: 123238579
* | Switched to the latest version of Eigen that performs much better on machines with many CPU cores. (Benoit Steiner, 2016-05-17)
| |   For example, the wall time for the following tutorial went down from 13m35 to 5m27:
| |   bazel run -c opt --copt=-mavx tensorflow/examples/tutorials/word2vec/word2vec_basic
| |   Change: 122462177
| * Switched to the latest version of Eigen that performs much better on machines with many CPU cores. (Benoit Steiner, 2016-05-16)
| |   For example, the wall time for the following tutorial went down from 13m35 to 5m27:
| |   bazel run -c opt --copt=-mavx tensorflow/examples/tutorials/word2vec/word2vec_basic
| |   Change: 122462177
* | Merge commit for internal changes (Derek Murray, 2016-05-16)
|\|
| * Made the contraction code compatible with fp16. (Benoit Steiner, 2016-05-12)
| |   Change: 122192081
| * Upgraded to the latest version of Eigen that speeds up full reductions on fp16 by about 3 orders of magnitude, as well as some partial reductions by 30%, when using CUDA 7.5 or above. (Benoit Steiner, 2016-05-12)
| |   Change: 122191448
* | Merge commit for internal changes (Paul Tucker, 2016-04-28)
|\|
| * Improved support for min and max on 16-bit floats when running on recent CUDA GPUs. (Benoit Steiner, 2016-04-27)
| |   Updated the check numerics code to make it compatible with fp16.
| |   Change: 120980302
* | Merge commit for internal changes (Paul Tucker, 2016-04-27)
|\|
| * Made it possible to compute the cross entropy using 16-bit floats. (Benoit Steiner, 2016-04-25)
| |   Change: 120739269
| * Rollback of rollback of cl/120366069: (A. Unique TensorFlower, 2016-04-21)
| |   tensorflow: switch to eigen thread pool
| |   This is the first step of switching TensorFlow to the new non-blocking thread pool in Eigen.
| |   Change: 120510292
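A sketch of what switching to the Eigen thread pool means in practice: tensor expressions are evaluated through a ThreadPoolDevice backed by Eigen's pool. The Eigen::ThreadPool spelling follows current Eigen headers; the 2016 non-blocking pool used a different class name.

    #define EIGEN_USE_THREADS
    #include <iostream>
    #include <unsupported/Eigen/CXX11/Tensor>

    int main() {
      Eigen::ThreadPool pool(4);                 // Eigen's thread pool
      Eigen::ThreadPoolDevice device(&pool, 4);  // what kernels evaluate against

      Eigen::Tensor<float, 1> a(1 << 20), b(1 << 20), c(1 << 20);
      a.setConstant(1.0f);
      b.setConstant(2.0f);
      // The elementwise expression is split across the pool's worker threads.
      c.device(device) = a + b;
      std::cout << c(0) << "\n";  // 3
      return 0;
    }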
| * Prevent TensorFlow from crashing when attempting to reduce an empty tensor on GPU (Benoit Steiner, 2016-04-21)
| |   Change: 120505517
* | Merge commit for internal changes (Shanqing Cai, 2016-04-21)
|\|
| * Fixed a compilation error when targeting CUDA 3.0 devices such as the ones offered by AWS. (Benoit Steiner, 2016-04-20)
| |   Change: 120369420
* | Merge commit for internal changes (Zhifeng Chen, 2016-04-15)
|\|
| * Upgraded to the latest version of Eigen that adds support for computing the sigmoid of fp16 and introduces a condition estimator. (Benoit Steiner, 2016-04-14)
| |   Change: 119907721
| * Added support for trigonometric and transcendental functions of half floats. (Benoit Steiner, 2016-04-14)
| |   Change: 119850987
| * Upgraded to the latest version of Eigen that provides significant performance improvements for fp16. (Benoit Steiner, 2016-04-13)
| |   Change: 119771118
* | merge internal changes (Martin Wicke, 2016-04-11)
|\|
| * Made isinf, isnan, isfinite, ceil and floor work with 16-bit floats. (Benoit Steiner, 2016-04-09)
| |   Change: 119458778
| * Upgraded to the latest version of Eigen that has bug fixes for complex numbers as well as fp16. (Benoit Steiner, 2016-04-08)
| |   Change: 119398881
* | Merge commit for internal changes (Vijay Vasudevan, 2016-04-08)
|\|
| * Upgraded to the latest version of Eigen that introduces implementations of the zeta and polygamma functions, as well as improved support for float16. (Benoit Steiner, 2016-04-07)
| |   Change: 119279101
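A hedged sketch of the new special functions via Eigen's array API; the unsupported SpecialFunctions header and the free-function signatures are assumptions based on current Eigen and may differ in older versions.

    #include <iostream>
    #include <unsupported/Eigen/SpecialFunctions>

    int main() {
      Eigen::ArrayXf x(3), q(3);
      x << 2.f, 3.f, 4.f;  // arguments
      q << 1.f, 1.f, 1.f;  // Hurwitz offsets; q = 1 reduces to the Riemann zeta
      std::cout << "zeta(x, q):\n" << Eigen::zeta(x, q) << "\n";

      Eigen::ArrayXf n = Eigen::ArrayXf::Constant(3, 1.f);  // derivative order
      std::cout << "polygamma(n, x):\n" << Eigen::polygamma(n, x) << "\n";
      return 0;
    }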
| * Upgrade to the latest version of Eigen that provides better support for float16 and fixes the computation of absolute values on GPU. (Benoit Steiner, 2016-04-04)
| |   Change: 119001808
* | Merge commit for internal changes (Vijay Vasudevan, 2016-03-28)
|\|
| * Upgraded to the latest version of Eigen (Benoit Steiner, 2016-03-28)
| |   Change: 118414762