Commit message | Author | Age
...
* | | Make EIGEN_HAS_STD_RESULT_OF user configurable (Gael Guennebaud, 2016-05-20)
| | |
* | | Make EIGEN_HAS_C99_MATH user configurable (Gael Guennebaud, 2016-05-20)
| | |
* | | Make EIGEN_HAS_RVALUE_REFERENCES user configurable (Gael Guennebaud, 2016-05-20)
| | |
* | | Rename EIGEN_HAVE_RVALUE_REFERENCES to EIGEN_HAS_RVALUE_REFERENCES (Gael Guennebaud, 2016-05-20)
| | |
* | | polygamma is C99/C++11 only (Gael Guennebaud, 2016-05-20)
| | |
* | | Add an EIGEN_MAX_CPP_VER option to limit the C++ version to be used. (Gael Guennebaud, 2016-05-20)
| | |
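For context, a minimal sketch of how a project might use such an option (a hypothetical usage fragment, not taken from Eigen's test suite; it assumes the option accepts a C++ version number and must be defined before the first Eigen include):

```cpp
// Cap Eigen's use of C++ language features at C++03, even when the
// compiler itself runs in C++11 mode. Define before including Eigen.
#define EIGEN_MAX_CPP_VER 03
#include <Eigen/Core>
```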
* | | Improve doc of special math functions (Gael Guennebaud, 2016-05-20)
| | |
* | | Rename UniformRandom to UnitRandom. (Gael Guennebaud, 2016-05-20)
| | |
* | | Fix coding practice in Quaternion::UniformRandom (Gael Guennebaud, 2016-05-20)
| | |
* | | bug #823: add static method to Quaternion for uniform random rotations. (Joseph Mirabel, 2016-05-20)
| | |
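Uniform random rotations like those referenced in bug #823 are commonly generated with Shoemake's subgroup algorithm. A self-contained sketch follows; the `Quat` struct and `unitRandom` function are illustrative names, not Eigen's API, and Eigen's actual implementation may differ in detail:

```cpp
#include <cassert>
#include <cmath>
#include <random>

// A bare quaternion (w, x, y, z), standing in for Eigen::Quaternion.
struct Quat { double w, x, y, z; };

// Draw a unit quaternion uniformly from SO(3) via Shoemake's method:
// three independent uniforms u1, u2, u3 in [0, 1) are mapped to a
// quaternion whose squared norm is sqrt(u1)^2 + sqrt(1-u1)^2 = 1.
Quat unitRandom(std::mt19937& gen) {
  std::uniform_real_distribution<double> dist(0.0, 1.0);
  const double u1 = dist(gen), u2 = dist(gen), u3 = dist(gen);
  const double two_pi = 6.283185307179586;
  const double a = std::sqrt(1.0 - u1), b = std::sqrt(u1);
  return { b * std::cos(two_pi * u3),
           a * std::sin(two_pi * u2),
           a * std::cos(two_pi * u2),
           b * std::sin(two_pi * u3) };
}
```

By construction every sample already has unit norm, so no explicit normalization step is needed.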
* | | Remove std:: to enable custom scalar types. (Gael Guennebaud, 2016-05-19)
| | |
| * | Merged eigen/eigen into default (Rasmus Larsen, 2016-05-18)
| |\ \
| * \ \ Merge. (Rasmus Munk Larsen, 2016-05-18)
| |\ \ \
| * | | | Minor cleanups: 1. Get rid of unused variables. 2. Get rid of last uses of EIGEN_USE_COST_MODEL. (Rasmus Munk Larsen, 2016-05-18)
| | | | |
| | * | | Reduce overhead for small tensors and cheap ops by short-circuiting the cost computation and block size calculation in parallelFor. (Rasmus Munk Larsen, 2016-05-17)
| |/ / /
| | | * Merged latest updates from trunk (Benoit Steiner, 2016-05-17)
| | | |\
| | | * | Allow vectorized padding on GPU. This helps speed things up a little. (Benoit Steiner, 2016-05-17)
| | | | |   Before: BM_padding/10   5000000  460  217.03 MFlops/s
| | | | |           BM_padding/80   5000000  460  13899.40 MFlops/s
| | | | |           BM_padding/640  5000000  461  888421.17 MFlops/s
| | | | |           BM_padding/4K   5000000  460  54316322.55 MFlops/s
| | | | |   After:  BM_padding/10   5000000  454  220.20 MFlops/s
| | | | |           BM_padding/80   5000000  455  14039.86 MFlops/s
| | | | |           BM_padding/640  5000000  452  904968.83 MFlops/s
| | | | |           BM_padding/4K   5000000  411  60750049.21 MFlops/s
* | | | | made a fix to the GMRES solver so that it now correctly reports the error achieved in the solution process (David Dement, 2016-05-16)
| | | | |
* | | | | Fix unit test. (Gael Guennebaud, 2016-05-19)
| | | | |
* | | | | Improve unit tests of zeta, polygamma, and digamma (Gael Guennebaud, 2016-05-19)
| | | | |
* | | | | zeta and polygamma are not unary functions, but binary ones. (Gael Guennebaud, 2016-05-19)
| | | | |
* | | | | zeta and digamma do not require C++11/C99 (Gael Guennebaud, 2016-05-19)
| | | | |
* | | | | Add some c++11 flags in documentation (Gael Guennebaud, 2016-05-19)
| | | | |
* | | | | bug #1201: optimize affine*vector products (Gael Guennebaud, 2016-05-19)
| | | | |
* | | | | bug #1221: disable gcc 6 warning: ignoring attributes on template argument (Gael Guennebaud, 2016-05-19)
| | | | |
* | | | | Fix SelfAdjointEigenSolver for some input expression types, and add new regression unit tests for sparse and selfadjointview inputs. (Gael Guennebaud, 2016-05-19)
| | | | |
* | | | | DiagonalWrapper is a vector, so it must expose the LinearAccessBit flag. (Gael Guennebaud, 2016-05-19)
| | | | |
* | | | | Add support for SelfAdjointView::diagonal() (Gael Guennebaud, 2016-05-19)
| | | | |
* | | | | Fix SelfAdjointView::triangularView for complexes. (Gael Guennebaud, 2016-05-19)
| | | | |
* | | | | bug #1230: add support for SelfadjointView::triangularView. (Gael Guennebaud, 2016-05-19)
| |/ / /
|/| | |
* | | | Advertize the packet api of the tensor reducers iff the corresponding packet primitives are available. (Benoit Steiner, 2016-05-18)
| | | |
* | | | bug #1231: fix compilation regression regarding complex_array/=real_array and add respective unit tests (Gael Guennebaud, 2016-05-18)
| | | |
* | | | Use coeff(i,j) instead of operator(). (Gael Guennebaud, 2016-05-18)
| | | |
* | | | bug #1224: fix regression in (dense*dense).sparseView() by specializing evaluator<SparseView<Product>> for sparse products only. (Gael Guennebaud, 2016-05-18)
| | | |
* | | | Use default sorting strategy for square products. (Gael Guennebaud, 2016-05-18)
| | | |
* | | | Extend sparse*sparse product unit test to check that the expected implementation is used (conservative vs auto pruning). (Gael Guennebaud, 2016-05-18)
| | | |
* | | | bug #1229: bypass usage of Derived::Options which is available for plain matrix types only. Better use column-major storage anyway. (Gael Guennebaud, 2016-05-18)
| | | |
* | | | Pass argument by const ref instead of by value in pow(AutoDiffScalar...) (Gael Guennebaud, 2016-05-18)
| | | |
* | | | bug #1223: fix compilation of AutoDiffScalar's min/max operators, and add regression unit test. (Gael Guennebaud, 2016-05-18)
| | | |
* | | | bug #1222: fix compilation in AutoDiffScalar and add respective unit test (Gael Guennebaud, 2016-05-18)
| | | |
* | | | bug #1213: add regression unit test. (Gael Guennebaud, 2016-05-18)
| | | |
* | | | bug #1213: rename some enum types for consistency. (Gael Guennebaud, 2016-05-18)
|/ / /
* | | #if defined(EIGEN_USE_NONBLOCKING_THREAD_POOL) is now #if !defined(EIGEN_USE_SIMPLE_THREAD_POOL): the non-blocking thread pool is the default since it's more scalable, and one needs to request the old thread pool explicitly. (Benoit Steiner, 2016-05-17)
| | |
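The guard change above amounts to flipping the default. A simplified sketch of the pattern (not Eigen's exact source; the comments stand in for the pool-selection code):

```cpp
// Before: one had to opt in to the non-blocking pool explicitly:
//   #if defined(EIGEN_USE_NONBLOCKING_THREAD_POOL)
// After: the non-blocking pool is the default; define
// EIGEN_USE_SIMPLE_THREAD_POOL before including Eigen to get the
// old simple thread pool back.
#if !defined(EIGEN_USE_SIMPLE_THREAD_POOL)
  // ... select the non-blocking (more scalable) thread pool ...
#else
  // ... select the simple thread pool ...
#endif
```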
* | | Fixed compilation error (Benoit Steiner, 2016-05-17)
| | |
* | | Fixed compilation error in the tensor thread pool (Benoit Steiner, 2016-05-17)
| | |
* | | Merge upstream. (Rasmus Munk Larsen, 2016-05-17)
|\ \ \
* | | | Roll back changes to core. Move include of TensorFunctors.h up to satisfy dependence in TensorCostModel.h. (Rasmus Munk Larsen, 2016-05-17)
| | | |
| * | | Merged eigen/eigen into default (Rasmus Larsen, 2016-05-17)
|/| | |
| * | | Enable the use of the packet api to evaluate tensor broadcasts. This speeds things up quite a bit. (Benoit Steiner, 2016-05-17)
| | | |   Before: BM_broadcasting/10   500000  3690    27.10 MFlops/s
| | | |           BM_broadcasting/80   500000  4014    1594.24 MFlops/s
| | | |           BM_broadcasting/640  100000  14770   27731.35 MFlops/s
| | | |           BM_broadcasting/4K   5000    632711  39512.48 MFlops/s
| | | |   After:  BM_broadcasting/10   500000  4287    23.33 MFlops/s
| | | |           BM_broadcasting/80   500000  4455    1436.41 MFlops/s
| | | |           BM_broadcasting/640  200000  10195   40173.01 MFlops/s
| | | |           BM_broadcasting/4K   5000    423746  58997.57 MFlops/s
| * | | Allow vectorized padding on GPU. This helps speed things up a little. (Benoit Steiner, 2016-05-17)
| | | |   Before: BM_padding/10   5000000  460  217.03 MFlops/s
| | | |           BM_padding/80   5000000  460  13899.40 MFlops/s
| | | |           BM_padding/640  5000000  461  888421.17 MFlops/s
| | | |           BM_padding/4K   5000000  460  54316322.55 MFlops/s
| | | |   After:  BM_padding/10   5000000  454  220.20 MFlops/s
| | | |           BM_padding/80   5000000  455  14039.86 MFlops/s
| | | |           BM_padding/640  5000000  452  904968.83 MFlops/s
| | | |           BM_padding/4K   5000000  411  60750049.21 MFlops/s
| | |/
| |/|