Commit message | Author | Date
* bug #899: remove "rank-revealing" qualifier for SparseQR and warn that it is not always rank-revealing. (Gael Guennebaud, 2019-02-19)
* Fix conversion warnings (Gael Guennebaud, 2019-02-19)
* Fix C++17 compilation (Gael Guennebaud, 2019-02-19)
* Fix incorrect value of NumDimensions in TensorContraction traits. Reported here: #1671 (Rasmus Munk Larsen, 2019-02-19)
* Commas at the end of enumerator lists are not allowed in C++03 (Christoph Hertzberg, 2019-02-19)
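A minimal illustration of the C++03 rule behind this commit (the enums below are hypothetical, not taken from the patch): a comma after the last enumerator is only legal from C++11 onwards, so portable headers must omit it.

    enum Flags {
      FlagA = 1,
      FlagB = 2,   // trailing comma: fine in C++11, ill-formed in C++03
    };
    enum FlagsPortable {
      FlagC = 1,
      FlagD = 2    // no trailing comma: compiles under both standards
    };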
* Fix unit test compilation in C++17: std::ptr_fun has been removed. (Gael Guennebaud, 2019-02-19)
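A sketch of the kind of migration this requires (illustrative code, not the actual unit test): std::ptr_fun was removed in C++17, and a lambda is the usual replacement for the old function-pointer adaptor.

    #include <algorithm>
    #include <cctype>
    #include <string>

    std::string ltrim(std::string s) {
      // Pre-C++17 idiom, no longer compiles in C++17 mode:
      //   std::find_if(s.begin(), s.end(),
      //                std::not1(std::ptr_fun<int, int>(std::isspace)));
      s.erase(s.begin(), std::find_if(s.begin(), s.end(),
                                      [](unsigned char c) { return !std::isspace(c); }));
      return s;
    }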
* Add C++17 detection macro, and make sure throw(xpr) is not used if the compiler is in C++17 mode. (Gael Guennebaud, 2019-02-19)
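A sketch of the pattern described here (the macro and function names are hypothetical, not Eigen's actual ones): dynamic exception specifications such as throw(std::bad_alloc) are ill-formed in C++17, so they must be detected and compiled out.

    #include <cstddef>
    #include <new>

    #if __cplusplus >= 201703L
    #  define MY_HAS_CXX17 1
    #  define MY_THROW_SPEC(X)           // throw(X) is ill-formed in C++17
    #else
    #  define MY_HAS_CXX17 0
    #  define MY_THROW_SPEC(X) throw(X)  // legal (though deprecated) up to C++14
    #endif

    void* aligned_alloc_checked(std::size_t n) MY_THROW_SPEC(std::bad_alloc);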
* Fix conversion warnings (Gael Guennebaud, 2019-02-19)
* bug #1046: add unit tests for correct propagation of alignment through std::alignment_of (Gael Guennebaud, 2019-02-19)
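An illustrative check in the spirit of such a test (not the actual test code, and assuming a vectorized build with 16-byte packets): the alignment of fixed-size vectorizable Eigen types should be visible through std::alignment_of and agree with alignof.

    #include <Eigen/Core>
    #include <type_traits>

    // With SSE-style 16-byte packets, Vector4f must be at least 16-byte
    // aligned, and std::alignment_of must report the same value as alignof.
    static_assert(std::alignment_of<Eigen::Vector4f>::value % 16 == 0,
                  "alignment not propagated through std::alignment_of");
    static_assert(std::alignment_of<Eigen::Vector4f>::value == alignof(Eigen::Vector4f),
                  "std::alignment_of and alignof disagree");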
* Fix harmless Scalar vs RealScalar cast. (Gael Guennebaud, 2019-02-18)
* Add unit test for LinSpaced and complex numbers. (Gael Guennebaud, 2019-02-18)
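A minimal usage sketch of what such a test exercises (the values are illustrative): LinSpaced interpolating between two complex endpoints, real and imaginary parts together.

    #include <Eigen/Core>
    #include <complex>
    #include <iostream>

    int main() {
      using C = std::complex<float>;
      // 5 evenly spaced values from 0+0i to 1+2i.
      Eigen::VectorXcf v = Eigen::VectorXcf::LinSpaced(5, C(0, 0), C(1, 2));
      std::cout << v.transpose() << std::endl;
    }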
* bug #1194: implement slightly faster and SIMD-friendly 4x4 determinant. (Gael Guennebaud, 2019-02-18)
* Fix regression: .conjugate() was popped out but not re-introduced. (Gael Guennebaud, 2019-02-18)
* Set cost of conjugate to 0 (in practice it boils down to a no-op). This is also important to make sure that A.conjugate() * B.conjugate() does not evaluate its arguments into temporaries (e.g., if A and B are fixed and small, or if the product falls back to lazyProduct). (Gael Guennebaud, 2019-02-18)
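An illustrative consequence of the zero cost (assuming small fixed-size operands, as the message suggests): the conjugates can be folded into the lazy product instead of being evaluated into temporaries.

    #include <Eigen/Core>

    int main() {
      Eigen::Matrix2cf A = Eigen::Matrix2cf::Random();
      Eigen::Matrix2cf B = Eigen::Matrix2cf::Random();
      // With conjugate() costed at 0, this small fixed-size product can stay
      // lazy: each coefficient conj(A(i,k)) * conj(B(k,j)) is formed on the
      // fly, with no temporary for A.conjugate() or B.conjugate().
      Eigen::Matrix2cf C = A.conjugate() * B.conjugate();
      return C(0, 0).real() != 0.0f;  // use the result so it is not optimized away
    }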
* GEMM: catch all scalar-multiple variants when falling back to a coeff-based product. Before, only s*A*B was caught, which was inconsistent with GEMM, sub-optimal, and could even lead to compilation errors (https://stackoverflow.com/questions/54738495). (Gael Guennebaud, 2019-02-18)
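The kinds of scalar-multiple expressions at stake, in a hypothetical snippet (A, B, C and s are illustrative names): before this change only the first form was recognized when falling back to a coeff-based product.

    #include <Eigen/Core>

    void variants(const Eigen::MatrixXf& A, const Eigen::MatrixXf& B,
                  Eigen::MatrixXf& C, float s) {
      C.noalias() = s * A * B;    // previously the only variant caught
      C.noalias() = (s * A) * B;  // now handled consistently as well...
      C.noalias() = A * s * B;
      C.noalias() = A * (s * B);  // ...including this form from the SO report
    }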
* Guard C++11-style default constructor. Also, this is only needed for MSVC (Christoph Hertzberg, 2019-02-16)
* Add possibility to bench row-major lhs and rhs (Gael Guennebaud, 2019-02-15)
* bug #1680: improve MSVC inlining by declaring many trivial constructors and accessors as STRONG_INLINE. (Gael Guennebaud, 2019-02-15)
* bug #1680: make all "block" methods strong-inline and device-functions (some were missing EIGEN_DEVICE_FUNC) (Gael Guennebaud, 2019-02-15)
* bug #1678: Fix lack of __FMA__ macro on MSVC with AVX512 (Gael Guennebaud, 2019-02-15)
* bug #1678: workaround MSVC compilation issues with AVX512 (Gael Guennebaud, 2019-02-15)
* bug #1679: avoid possible division by 0 in complex-schur (Gael Guennebaud, 2019-02-15)
* Revert https://bitbucket.org/eigen/eigen/commits/b55b5c7280a0481f01fe5ec764d55c443a8b6496 (Rasmus Munk Larsen, 2019-02-14)
*   Merged in ezhulenev/eigen-01 (pull request PR-590): do not generate no-op cast() and conjugate() expressions (Rasmus Larsen, 2019-02-14)
|\
* | Fix signed-unsigned return in RunQueue (Eugene Zhulenev, 2019-02-14)
* | Fix signed-unsigned comparison warning in RunQueue (Eugene Zhulenev, 2019-02-14)
| * Do not generate no-op cast() and conjugate() expressions (Eugene Zhulenev, 2019-02-14)
|/
* Speedup Tensor ThreadPool RunQueue::Empty() (Eugene Zhulenev, 2019-02-13)
* Let's properly use Score instead of std::abs, and remove deprecated FIXME (a /= b computes a/b, not a * (1/b) as it did a long time ago...) (Gael Guennebaud, 2019-02-11)
* Fix compilation of empty products of the form: Mx0 * 0xN (Gael Guennebaud, 2019-02-11)
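A minimal sketch of the degenerate case being fixed: multiplying an M x 0 matrix by a 0 x N matrix is well-defined, yields an M x N zero matrix, and must compile.

    #include <Eigen/Core>
    #include <iostream>

    int main() {
      Eigen::MatrixXd A(3, 0);    // 3 rows, zero columns
      Eigen::MatrixXd B(0, 4);    // zero rows, 4 columns
      Eigen::MatrixXd C = A * B;  // 3x4 result, all coefficients zero
      std::cout << C.rows() << "x" << C.cols() << "\n";
    }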
* Speed up 2x2 LU by a factor 2, and other small fixed sizes by about 10%. Not sure that's so critical, but this does not complicate the code base much. (Gael Guennebaud, 2019-02-11)
* Enable unit tests of PartialPivLU on fixed-size matrices, and increase tested matrix size (blocking was not tested!) (Gael Guennebaud, 2019-02-11)
* Speedup PartialPivLU for small matrices by passing compile-time sizes when available. This changeset also makes better use of Map<>+OuterStride and Ref<>, yielding surprising speed-ups for small dynamic sizes as well. The table below reports times in microseconds for 10 random matrices: (Gael Guennebaud, 2019-02-11)

             |  ------ float -------  |  ------ double ------  |
    size     |  before  after  ratio  |  before  after  ratio  |
    fixed 1  |   0.34    0.11   2.93  |   0.35    0.11   3.06  |
    fixed 2  |   0.81    0.24   3.38  |   0.91    0.25   3.60  |
    fixed 3  |   1.49    0.49   3.04  |   1.68    0.55   3.01  |
    fixed 4  |   2.31    0.70   3.28  |   2.45    1.08   2.27  |
    fixed 5  |   3.49    1.11   3.13  |   3.84    2.24   1.71  |
    fixed 6  |   4.76    1.64   2.88  |   4.87    2.84   1.71  |
    dyn   1  |   0.50    0.40   1.23  |   0.51    0.40   1.26  |
    dyn   2  |   1.08    0.85   1.27  |   1.04    0.69   1.49  |
    dyn   3  |   1.76    1.26   1.40  |   1.84    1.14   1.60  |
    dyn   4  |   2.57    1.75   1.46  |   2.67    1.66   1.60  |
    dyn   5  |   3.80    2.64   1.43  |   4.00    2.48   1.61  |
    dyn   6  |   5.06    3.43   1.47  |   5.15    3.21   1.60  |
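An illustrative sketch of the Map<>+OuterStride technique the message mentions (not the actual PartialPivLU internals): a fixed-size Map over dynamic storage exposes compile-time dimensions to the kernel, while the stride carries the runtime layout.

    #include <Eigen/Core>

    int main() {
      Eigen::MatrixXf big = Eigen::MatrixXf::Random(8, 8);
      // View the top-left 4x4 block through a fixed-size Map; the outer
      // stride records the real distance between consecutive columns.
      Eigen::Map<Eigen::Matrix4f, 0, Eigen::OuterStride<> >
          blk(big.data(), Eigen::OuterStride<>(big.outerStride()));
      Eigen::Matrix4f x = blk.inverse();  // kernel sees compile-time 4x4 sizes
      return x(0, 0) == 0.0f;
    }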
* Add PacketConv implementation for non-vectorizable src expressions (Eugene Zhulenev, 2019-02-08)
* Optimize TensorConversion evaluator: do not convert same type (Eugene Zhulenev, 2019-02-08)
* Spline.h: fix spelling "spang" -> "span" (Steven Peters, 2019-02-08)
* Don't do parallel_pack if we can use thread_local memory in tensor contractions (Eugene Zhulenev, 2019-02-07)
* Make GEMM fall back to GEMV for runtime vectors. This is a more general and simpler version of changeset 4c0fa6ce0f81ce67dd6723528ddf72f66ae92ba2. (Gael Guennebaud, 2019-02-07)
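A sketch of the situation this addresses (names are illustrative): both operands have dynamic sizes, so the product is statically matrix*matrix, but at runtime one factor happens to be a vector and the matrix-vector kernel (GEMV) is the better fit.

    #include <Eigen/Core>

    int main() {
      Eigen::MatrixXf A = Eigen::MatrixXf::Random(256, 256);
      Eigen::MatrixXf b = Eigen::MatrixXf::Random(256, 1);  // a vector, but only at runtime
      Eigen::MatrixXf c(256, 1);
      c.noalias() = A * b;  // statically MatrixXf * MatrixXf; can now dispatch to GEMV
      return c(0, 0) == 0.0f;
    }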
* Backed out changeset 4c0fa6ce0f81ce67dd6723528ddf72f66ae92ba2 (Gael Guennebaud, 2019-02-07)
* bug #1676: workaround GCC's bug in C++17 mode. (Gael Guennebaud, 2019-02-07)
*   Merged in ezhulenev/eigen-01 (pull request PR-581): parallelize tensor contraction only by sharding dimension and use 'thread-local' memory for packing. Approved-by: Rasmus Larsen <rmlarsen@google.com>; Approved-by: Gael Guennebaud <g.gael@free.fr> (Rasmus Larsen, 2019-02-05)
|\
| * Do not reduce parallelism too much in contractions with a small number of threads (Eugene Zhulenev, 2019-02-04)
| * Parallelize tensor contraction only by sharding dimension and use 'thread-local' memory for packing (Eugene Zhulenev, 2019-02-04)
* | Remove duplicated comment line (Eugene Zhulenev, 2019-02-04)
* | Fix GeneralBlockPanelKernel Android compilation (Eugene Zhulenev, 2019-02-04)
|/
* bug #1674: disable GCC's unsafe-math-optimizations in sin/cos vectorization (results are completely wrong otherwise) (Gael Guennebaud, 2019-02-03)
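For context, a hypothetical illustration of this class of workaround (not necessarily the exact mechanism used in the patch): GCC's function-level optimize attribute can switch unsafe-math-optimizations off for a single definition whose range reduction depends on strict floating-point semantics.

    #include <cmath>

    #if defined(__GNUC__) && !defined(__clang__)
    #  define NO_UNSAFE_MATH __attribute__((optimize("no-unsafe-math-optimizations")))
    #else
    #  define NO_UNSAFE_MATH
    #endif

    // The vectorized sin/cos range reduction relies on exact rounding;
    // -funsafe-math-optimizations licenses reassociations that break it,
    // hence the per-function opt-out.
    NO_UNSAFE_MATH static float sin_kernel(float x) {
      return std::sin(x);  // stands in for the vectorized kernel body
    }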
*   Merged in rmlarsen/eigen (pull request PR-578): speed up Eigen matrix*vector and vector*matrix multiplication. Approved-by: Eugene Zhulenev <ezhulenev@google.com> (Rasmus Larsen, 2019-02-02)
|\
* | Speed up row-major matrix-vector product on ARM (Sameer Agarwal, 2019-02-01)
    The row-major matrix-vector multiplication code uses a threshold to check whether processing 8 rows at a time would thrash the cache. This change introduces two modifications to this logic.
    1. A smaller threshold for ARM and ARM64 devices. The value of this threshold was determined empirically using a Pixel 2 phone, by benchmarking a large number of matrix-vector products in the range [1..4096]x[1..4096] and measuring performance separately on the big and little cores with frequency pinning. On the big (out-of-order) cores, this change has little to no impact, but on the small (in-order) cores the matrix-vector products are up to 700% faster, especially on large matrices. The motivation for this change was some internal code at Google which was using hand-written NEON to implement similar functionality, processing the matrix one row at a time, and which exhibited substantially better performance than Eigen. With the current change, Eigen handily beats that code.
    2. Make the logic for choosing the number of simultaneous rows apply uniformly to 8, 4 and 2 rows instead of just 8 rows. Since the default threshold for non-ARM devices is essentially unchanged (32000 -> 32 * 1024), this change has no impact on non-ARM performance. This was verified by running the same set of benchmarks on a Xeon desktop.
| * Speed up Eigen matrix*vector and vector*matrix multiplication. (Rasmus Munk Larsen, 2019-01-31)
|/
    This change speeds up Eigen matrix * vector and vector * matrix multiplication for dynamic matrices when it is known at runtime that one of the factors is a vector. The benchmarks below test
      c.noalias() = n_by_n_matrix * n_by_1_matrix;
      c.noalias() = 1_by_n_matrix * n_by_n_matrix;
    respectively. Benchmark measurements:

    SSE: Run on *** (72 X 2992 MHz CPUs); 2019-01-28T17:51:44.452697457-08:00
    CPU: Intel Skylake Xeon with HyperThreading (36 cores) dL1:32KB dL2:1024KB dL3:24MB
    Benchmark         Base (ns)    New (ns)  Improvement
    ----------------------------------------------------
    BM_MatVec/64           1096         312       +71.5%
    BM_MatVec/128          4581        1464       +68.0%
    BM_MatVec/256         18534        5710       +69.2%
    BM_MatVec/512        118083       24162       +79.5%
    BM_MatVec/1k         704106      173346       +75.4%
    BM_MatVec/2k        3080828      742728       +75.9%
    BM_MatVec/4k       25421512     4530117       +82.2%
    BM_VecMat/32            352         130       +63.1%
    BM_VecMat/64           1213         425       +65.0%
    BM_VecMat/128          4640        1564       +66.3%
    BM_VecMat/256         17902        5884       +67.1%
    BM_VecMat/512         70466       24000       +65.9%
    BM_VecMat/1k         340150      161263       +52.6%
    BM_VecMat/2k        1420590      645576       +54.6%
    BM_VecMat/4k        8083859     4364327       +46.0%

    AVX2: Run on *** (72 X 2993 MHz CPUs); 2019-01-28T17:45:11.508545307-08:00
    CPU: Intel Skylake Xeon with HyperThreading (36 cores) dL1:32KB dL2:1024KB dL3:24MB
    Benchmark         Base (ns)    New (ns)  Improvement
    ----------------------------------------------------
    BM_MatVec/64            619         120       +80.6%
    BM_MatVec/128          9693         752       +92.2%
    BM_MatVec/256         38356        2773       +92.8%
    BM_MatVec/512         69006       12803       +81.4%
    BM_MatVec/1k         443810      160378       +63.9%
    BM_MatVec/2k        2633553      646594       +75.4%
    BM_MatVec/4k       16211095     4327148       +73.3%
    BM_VecMat/64            925         227       +75.5%
    BM_VecMat/128          3438         830       +75.9%
    BM_VecMat/256         13427        2936       +78.1%
    BM_VecMat/512         53944       12473       +76.9%
    BM_VecMat/1k         302264      157076       +48.0%
    BM_VecMat/2k        1396811      675778       +51.6%
    BM_VecMat/4k        8962246     4459010       +50.2%

    AVX512: Run on *** (72 X 2993 MHz CPUs); 2019-01-28T17:35:17.239329863-08:00
    CPU: Intel Skylake Xeon with HyperThreading (36 cores) dL1:32KB dL2:1024KB dL3:24MB
    Benchmark         Base (ns)    New (ns)  Improvement
    ----------------------------------------------------
    BM_MatVec/64            401         111       +72.3%
    BM_MatVec/128          1846         513       +72.2%
    BM_MatVec/256         36739        1927       +94.8%
    BM_MatVec/512         54490        9227       +83.1%
    BM_MatVec/1k         487374      161457       +66.9%
    BM_MatVec/2k        2016270      643824       +68.1%
    BM_MatVec/4k       13204300     4077412       +69.1%
    BM_VecMat/32            324         106       +67.3%
    BM_VecMat/64           1034         246       +76.2%
    BM_VecMat/128          3576         802       +77.6%
    BM_VecMat/256         13411        2561       +80.9%
    BM_VecMat/512         58686       10037       +82.9%
    BM_VecMat/1k         320862      163750       +49.0%
    BM_VecMat/2k        1406719      651397       +53.7%
    BM_VecMat/4k        7785179     4124677       +47.0%
* GEBP: improves pipelining in the 1pX4 path with FMA. Prior to this change, a product with an LHS having 8 rows was faster with AVX-only than with AVX+FMA; with AVX+FMA I measured a speed-up of about x1.25 in such cases. (Gael Guennebaud, 2019-01-30)