From 08eeb648ea6c329b9b1fb3063993572c21404974 Mon Sep 17 00:00:00 2001
From: Gael Guennebaud
Date: Thu, 5 Dec 2019 16:33:24 +0100
Subject: update hg to git hashes

---
 bench/perf_monitoring/changesets.txt | 181 ++++++++++++++++++-----------------
 1 file changed, 91 insertions(+), 90 deletions(-)

diff --git a/bench/perf_monitoring/changesets.txt b/bench/perf_monitoring/changesets.txt
index 647825c0f..efdd9a0ff 100644
--- a/bench/perf_monitoring/changesets.txt
+++ b/bench/perf_monitoring/changesets.txt
@@ -1,94 +1,95 @@
+Load hg-to-git hash maps from ./eigen_git/.git/
 #3.0.1
 #3.1.1
 #3.2.0
 3.2.4
-#5745:37f59e65eb6c
-5891:d8652709345d # introduce AVX
-#5893:24b4dc92c6d3 # merge
-5895:997c2ef9fc8b # introduce FMA
-#5904:e1eafd14eaa1 # complex and AVX
-5908:f8ee3c721251 # improve packing with ptranspose
-#5921:ca808bb456b0 # merge
-#5927:8b1001f9e3ac
-5937:5a4ca1ad8c53 # New gebp kernel: up to 3 packets x 4 register-level blocks
-#5949:f3488f4e45b2 # merge
-#5969:e09031dccfd9 # Disable 3pX4 kernel on Altivec
-#5992:4a429f5e0483 # merge
-before-evaluators
-#6334:f6a45e5b8b7c # Implement evaluator for sparse outer products
-#6639:c9121c60b5c7
-#6655:06f163b5221f # Properly detect FMA support on ARM
-#6677:700e023044e7 # FMA has been wrongly disabled
-#6681:11d31dafb0e3
-#6699:5e6e8e10aad1 # merge default to tensors
-#6726:ff2d2388e7b9 # merge default to tensors
-#6742:0cbd6195e829 # merge default to tensors
-#6747:853d2bafeb8f # Generalized the gebp apis
-6765:71584fd55762 # Made the blocking computation aware of the l3 cache; Also optimized the blocking parameters to take into account the number of threads used for a computation.
-6781:9cc5a931b2c6 # generalized gemv
-6792:f6e1daab600a # ensured that contractions that can be reduced to a matrix vector product
-#6844:039efd86b75c # merge tensor
-6845:7333ed40c6ef # change prefetching in gebp
-#6856:b5be5e10eb7f # merge index conversion
-6893:c3a64aba7c70 # clean blocking size computation
-6899:877facace746 # rotating kernel for ARM only
-#6904:c250623ae9fa # result_of
-6921:915f1b1fc158 # fix prefetching change for ARM
-6923:9ff25f6dacc6 # prefetching
-6933:52572e60b5d3 # blocking size strategy
-6937:c8c042f286b2 # avoid redundant pack_rhs
-6981:7e5d6f78da59 # dynamic loop swapping
-6984:45f26866c091 # rm dynamic loop swapping, adjust lhs's micro panel height to fully exploit L1 cache
-6986:a675d05b6f8f # blocking heuristic: block on the rhs in L1 if the lhs fits in L1.
-7013:f875e75f07e5 # organize a little our default cache sizes, and use a saner default L1 outside of x86 (10% faster on Nexus 5)
-7015:8aad8f35c955 # Refactor computeProductBlockingSizes to make room for the possibility of using lookup tables
-7016:a58d253e8c91 # Polish lookup tables generation
-7018:9b27294a8186 # actual_panel_rows computation should always be resilient to parameters not consistent with the known L1 cache size, see comment
-7019:c758b1e2c073 # Provide an empirical lookup table for blocking sizes measured on a Nexus 5. Only for float, only for Android on ARM 32bit for now.
-7085:627e039fba68 # Bug 986: add support for coefficient-based product with 0 depth.
-7098:b6f1db9cf9ec # Bug 992: don't select a 3p GEMM path with non-SIMD scalar types.
-7591:09a8e2186610 # 3.3-alpha1
-7650:b0f3c8f43025 # help clang inlining
-7708:dfc6ab9d9458 # Improve numerical accuracy in LLT and triangular solve by using true scalar divisions (instead of x * (1/y))
-#8744:74b789ada92a # Improved the matrix multiplication blocking in the case where mr is not a power of 2 (e.g. on Haswell CPUs)
-8789:efcb912e4356 # Made the index type a template parameter to evaluateProductBlockingSizes. Use numext::mini and numext::maxi instead of std::min/std::max to compute blocking sizes.
-8972:81d53c711775 # Don't optimize the processing of the last rows of a matrix matrix product in cases that violate the assumptions made by the optimized code path.
-8985:d935df21a082 # Remove the rotating kernel.
-8988:6c2dc56e73b3 # Bug 256: enable vectorization with unaligned loads/stores.
-9148:b8b8c421e36c # Relax mixing-type constraints for binary coeff-wise operators
-9174:d228bc282ac9 # merge
-9175:abc7a3600098 # Include the cost of stores in unrolling
-9212:c90098affa7b # Fix perf regression introduced in changeset 8aad8f35c955
-9213:9f1c14e4694b # Fix perf regression in dgemm introduced by changeset 81d53c711775
-9361:69d418c06999 # 3.3-beta2
-9445:f27ff0ad77a3 # Optimize expression matching 'd?=a-b*c' as 'd?=a; d?=b*c;'
-9583:bef509908b9d # 3.3-rc1
-9593:2f24280cf59a # Bug 1311: fix alignment logic in some cases of (scalar*small).lazyProduct(small)
-9722:040d861b88b5 # Disabled part of the matrix matrix peeling code that's incompatible with 512 bit registers
-9792:26667be4f70b # 3.3.0
-9891:41260bdfc23b # Fix a performance regression in (mat*mat)*vec for which mat*mat was evaluated multiple times.
-9942:b1d3eba60130 # Operators += and -= do not resize!
-9943:79bb9887afd4 # Ease compiler generating clean and efficient code in mat*vec
-9946:2213991340ea # Complete rewrite of column-major-matrix * vector product to deliver higher performance on modern CPUs.
-9955:630471c3298c # Improve performance of row-major-dense-matrix * vector products for recent CPUs.
-9975:2eeed9de710c # Revert vec/y to vec*(1/y) in row-major TRSM
-10442:e3f17da72a40 # Bug 1435: fix aliasing issue in expressions like: A = C - B*A;
-10735:6913f0cf7d06 # Adds missing EIGEN_STRONG_INLINE to support MSVC properly inlining small vector calculations
-10943:4db388d946bd # Bug 1562: optimize evaluation of small products of the form s*A*B by rewriting them as: s*(A.lazyProduct(B)) to save a costly temporary. Measured speedup from 2x to 5x.
-10961:5007ff66c9f6 # Introduce the macro ei_declare_local_nested_eval to help allocate local temporaries on the stack via alloca, and let outer-products make good use of it.
-11083:30a528a984bb # Bug 1578: Improve prefetching in matrix multiplication on MIPS.
-11533:71609c41e9f8 # PR 526: Speed up multiplication of small, dynamically sized matrices
-11545:6d348dc9b092 # Vectorize row-by-row gebp loop iterations on 16 packets as well
-11579:efda481cbd7a # Bug 1624: improve matrix-matrix product on ARM 64, 20% speedup
-11606:b8d3f548a9d9 # do not read buffers out of bounds
-11638:22f9cc0079bd # Implement AVX512 vectorization of std::complex
-11642:9f52fde03483 # Bug 1636: fix gemm performance issue with gcc>=6 and no FMA
-11648:81172653b67b # Bug 1515: disable gebp's 3pX4 micro kernel for MSVC<=19.14 because of register spilling.
-11654:b81188e099f3 # fix EIGEN_GEBP_2PX4_SPILLING_WORKAROUND for non-vectorized types and non-x86/64 targets
-11664:71546f1a9f0c # enable spilling workaround on architectures with SSE/AVX
-11669:b500fef42ced # Artificially increase l1-blocking size for AVX512. +10% speedup with current kernels.
-11683:2ea2960f1c7f # Make code compile again for older compilers.
-11753:556fb4ceb654 # Bug 1633: refactor gebp kernel and optimize for neon
-11761:cefc1ba05596 # Bug 1661: fix regression in GEBP and AVX512
-11763:1e41e70fe97b # GEBP: cleanup logic to choose between 4 packets or 1 packet (=209bf81aa3f3+fix)
-11803:d95b5d78598b # gebp: Add new ½ and ¼ packet rows per (peeling) round on the lhs
\ No newline at end of file
+#574a7621809fe
+58964a85800bd # introduce AVX
+#589cbd7e98174 # merge
+589db7d49efbb # introduce FMA
+#590a078f442a3 # complex and AVX
+590a419cea4a0 # improve packing with ptranspose
+#59251e85c936d # merge
+#592e497a27ddc
+593d5a795f673 # New gebp kernel: up to 3 packets x 4 register-level blocks
+#5942c3c95990d # merge
+#596c9788d55b9 # Disable 3pX4 kernel on Altivec
+#5999aa3dc4e21 # merge
+6209452eb38f8 # before-evaluators
+#6333eba5e1101 # Implement evaluator for sparse outer products
+#663b9d314ae19
+#6655ef95fabee # Properly detect FMA support on ARM
+#667fe25f3b8e3 # FMA has been wrongly disabled
+#668409547a0c8
+#6694304c73542 # merge default to tensors
+#67216047c8d4a # merge default to tensors
+#67410a79ca3a3 # merge default to tensors
+#674b7271dffb5 # Generalized the gebp apis
+676bfdd9f3ac9 # Made the blocking computation aware of the l3 cache; Also optimized the blocking parameters to take into account the number of threads used for a computation.
+6782dde63499c # generalized gemv
+6799f98650d0a # ensured that contractions that can be reduced to a matrix vector product
+#6840918c51e60 # merge tensor
+684e972b55ec4 # change prefetching in gebp
+#68598604576d1 # merge index conversion
+68963eb0f6fe6 # clean blocking size computation
+689db05f2d01e # rotating kernel for ARM only
+#6901b7e12847d # result_of
+69226275b250a # fix prefetching change for ARM
+692692136350b # prefetching
+693a8ad8887bf # blocking size strategy
+693bcf9bb5c1f # avoid redundant pack_rhs
+6987550107028 # dynamic loop swapping
+69858740ce4c6 # rm dynamic loop swapping, adjust lhs's micro panel height to fully exploit L1 cache
+698cd3bbffa73 # blocking heuristic: block on the rhs in L1 if the lhs fits in L1.
+701488c15615a # organize a little our default cache sizes, and use a saner default L1 outside of x86 (10% faster on Nexus 5)
+701e56aabf205 # Refactor computeProductBlockingSizes to make room for the possibility of using lookup tables
+701ca5c12587b # Polish lookup tables generation
+7013589a9c115 # actual_panel_rows computation should always be resilient to parameters not consistent with the known L1 cache size, see comment
+70102babb9c0f # Provide an empirical lookup table for blocking sizes measured on a Nexus 5. Only for float, only for Android on ARM 32bit for now.
+7088481dc21ea # Bug 986: add support for coefficient-based product with 0 depth.
+709d7f51feb07 # Bug 992: don't select a 3p GEMM path with non-SIMD scalar types.
+759f9303cc7c5 # 3.3-alpha1
+765aba1eda71e # help clang inlining
+770fe630c9873 # Improve numerical accuracy in LLT and triangular solve by using true scalar divisions (instead of x * (1/y))
+#8741d23430628 # Improved the matrix multiplication blocking in the case where mr is not a power of 2 (e.g. on Haswell CPUs)
+878f629fe95c8 # Made the index type a template parameter to evaluateProductBlockingSizes. Use numext::mini and numext::maxi instead of std::min/std::max to compute blocking sizes.
+8975d51a7f12c # Don't optimize the processing of the last rows of a matrix matrix product in cases that violate the assumptions made by the optimized code path.
+8986136f4fdd4 # Remove the rotating kernel.
+898e68e165a23 # Bug 256: enable vectorization with unaligned loads/stores.
+91466e99ab6a1 # Relax mixing-type constraints for binary coeff-wise operators
+91776236cdea4 # merge
+917101ea26f5e # Include the cost of stores in unrolling
+921672076db5d # Fix perf regression introduced in changeset e56aabf205
+9210fa9e4a15c # Fix perf regression in dgemm introduced by changeset 5d51a7f12c
+936f6b3cf8de9 # 3.3-beta2
+944504a4404f1 # Optimize expression matching 'd?=a-b*c' as 'd?=a; d?=b*c;'
+95877e27fbeee # 3.3-rc1
+959779774f98c # Bug 1311: fix alignment logic in some cases of (scalar*small).lazyProduct(small)
+9729f9d8d2f62 # Disabled part of the matrix matrix peeling code that's incompatible with 512 bit registers
+979eeac81b8c0 # 3.3.0
+989c927af60ed # Fix a performance regression in (mat*mat)*vec for which mat*mat was evaluated multiple times.
+994fe696022ec # Operators += and -= do not resize!
+99466f65ccc36 # Ease compiler generating clean and efficient code in mat*vec
+9946a5fe86098 # Complete rewrite of column-major-matrix * vector product to deliver higher performance on modern CPUs.
+99591003f3b86 # Improve performance of row-major-dense-matrix * vector products for recent CPUs.
+997eb621413c1 # Revert vec/y to vec*(1/y) in row-major TRSM
+10444bbc320468 # Bug 1435: fix aliasing issue in expressions like: A = C - B*A;
+1073624df50945 # Adds missing EIGEN_STRONG_INLINE to support MSVC properly inlining small vector calculations
+1094d428a199ab # Bug 1562: optimize evaluation of small products of the form s*A*B by rewriting them as: s*(A.lazyProduct(B)) to save a costly temporary. Measured speedup from 2x to 5x.
+1096de9e31a06d # Introduce the macro ei_declare_local_nested_eval to help allocate local temporaries on the stack via alloca, and let outer-products make good use of it.
+11087b91c11207 # Bug 1578: Improve prefetching in matrix multiplication on MIPS.
+1153aa110e681b # PR 526: Speed up multiplication of small, dynamically sized matrices
+11544ad359237a # Vectorize row-by-row gebp loop iterations on 16 packets as well
+1157a476054879 # Bug 1624: improve matrix-matrix product on ARM 64, 20% speedup
+1160a4159dba08 # do not read buffers out of bounds
+1163c53eececb0 # Implement AVX512 vectorization of std::complex
+11644e7746fe22 # Bug 1636: fix gemm performance issue with gcc>=6 and no FMA
+1164956678a4ef # Bug 1515: disable gebp's 3pX4 micro kernel for MSVC<=19.14 because of register spilling.
+1165426bce7529 # fix EIGEN_GEBP_2PX4_SPILLING_WORKAROUND for non-vectorized types and non-x86/64 targets
+11660d90637838 # enable spilling workaround on architectures with SSE/AVX
+1166f159cf3d75 # Artificially increase l1-blocking size for AVX512. +10% speedup with current kernels.
+11686dd93f7e3b # Make code compile again for older compilers.
+1175dbfcceabf5 # Bug 1633: refactor gebp kernel and optimize for neon
+117670e133333d # Bug 1661: fix regression in GEBP and AVX512
+11760f028f61cb # GEBP: cleanup logic to choose between 4 packets or 1 packet (=e118ce86fd+fix)
+1180de77bf5d6c # gebp: Add new ½ and ¼ packet rows per (peeling) round on the lhs
--
cgit v1.2.3