path: root/Eigen/src/Core/arch/AVX512
Commit message | Author | Age
...
* Implement vectorized versions of log1p and expm1 in Eigen using Kahan's formulas, and change the scalar implementations to properly handle infinite arguments.  (Rasmus Munk Larsen, 2019-08-12)
|   Depending on instruction set, significant speedups are observed for the vectorized path (a scalar sketch of Kahan's log1p formula follows this entry):
|   - log1p wall time is reduced 60-93% (2.5x - 15x speedup)
|   - expm1 wall time is reduced 0-85% (1x - 7x speedup)
|   The scalar path is slower by 20-30% due to the extra branch needed to handle +infinity correctly.
|   Full benchmarks measured on Intel(R) Xeon(R) Gold 6154 here: https://bitbucket.org/snippets/rmlarsen/MXBkpM
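A minimal scalar sketch of the Kahan-style log1p formula referenced above, with an explicit guard for infinite arguments. The function name and guard placement are illustrative assumptions; Eigen's packet version expresses the same logic branch-free with pselect/plog.

    #include <cmath>

    // Sketch only: Kahan's log1p trick. The rounding error made when computing
    // u = 1 + x is compensated by the correction factor x / (u - 1).
    float log1p_kahan_sketch(float x) {
      const float u = 1.0f + x;
      if (u == 1.0f) return x;                 // x so small that 1 + x rounds back to 1
      if (std::isinf(u)) return u;             // +infinity must bypass the correction factor
      return std::log(u) * (x / (u - 1.0f));   // corrected result
    }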
* Various fixes for packet ops.  (Rasmus Munk Larsen, 2019-06-20)
|   1. Fix buggy pcmp_eq and unit test for half types.
|   2. Add unit test for pselect and add specializations for SSE 4.1, AVX512, and half types.
|   3. Get rid of FIXME: Implement faster pnegate for half by XOR'ing with a sign bit mask (see the sketch after this entry).
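A minimal sketch of the sign-bit idea behind item 3, assuming a raw 16-bit container for the half value; Eigen's actual pnegate operates on its half/packet types rather than a bare uint16_t.

    #include <cstdint>

    // Sketch only: IEEE 754 binary16 keeps its sign in bit 15, so negation is a
    // single XOR with the sign mask instead of a convert-to-float round trip.
    inline std::uint16_t negate_half_bits(std::uint16_t h) {
      return static_cast<std::uint16_t>(h ^ 0x8000u);
    }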
* Add masked_store_available to unpacket_traits  (Eugene Zhulenev, 2019-05-02)
|
* Add masked pstoreu to AVX and AVX512 PacketMath  (Eugene Zhulenev, 2019-05-02)
|
* Adding low-level APIs for optimized RHS packet load in TensorFlow SpatialConvolution  (Anuj Rawat, 2019-04-20)
|   Low-level APIs are added in order to optimize packet load in gemm_pack_rhs in TensorFlow SpatialConvolution. The optimization is for the scenario where a packet is split across 2 adjacent columns. In this case we read it as two 'partial' packets and then merge these into 1 (a sketch of the merging idea follows this entry). Currently this only works for Packet16f (AVX512) and Packet8f (AVX2). We plan to add this for other packet types (such as Packet8d) also.
|
|   This optimization shows significant speedup in SpatialConvolution with certain parameters. Some examples are below. Benchmark parameters are specified as: Batch size, Input dim, Depth, Num of filters, Filter dim. Speedup numbers are specified for number of threads 1, 2, 4, 8, 16.
|
|   AVX512:
|   Parameters                  | Speedup (Num of threads: 1, 2, 4, 8, 16)
|   ----------------------------|------------------------------------------
|   128, 24x24, 3, 64, 5x5      | 2.18X, 2.13X, 1.73X, 1.64X, 1.66X
|   128, 24x24, 1, 64, 8x8      | 2.00X, 1.98X, 1.93X, 1.91X, 1.91X
|   32, 24x24, 3, 64, 5x5       | 2.26X, 2.14X, 2.17X, 2.22X, 2.33X
|   128, 24x24, 3, 64, 3x3      | 1.51X, 1.45X, 1.45X, 1.67X, 1.57X
|   32, 14x14, 24, 64, 5x5      | 1.21X, 1.19X, 1.16X, 1.70X, 1.17X
|   128, 128x128, 3, 96, 11x11  | 2.17X, 2.18X, 2.19X, 2.20X, 2.18X
|
|   AVX2:
|   Parameters                  | Speedup (Num of threads: 1, 2, 4, 8, 16)
|   ----------------------------|------------------------------------------
|   128, 24x24, 3, 64, 5x5      | 1.66X, 1.65X, 1.61X, 1.56X, 1.49X
|   32, 24x24, 3, 64, 5x5       | 1.71X, 1.63X, 1.77X, 1.58X, 1.68X
|   128, 24x24, 1, 64, 5x5      | 1.44X, 1.40X, 1.38X, 1.37X, 1.33X
|   128, 24x24, 3, 64, 3x3      | 1.68X, 1.63X, 1.58X, 1.56X, 1.62X
|   128, 128x128, 3, 96, 11x11  | 1.36X, 1.36X, 1.37X, 1.37X, 1.37X
|
|   In the higher-level benchmark cifar10, we observe a runtime improvement of around 6% for AVX512 on an Intel Skylake server (8 cores).
|
|   On lower-level PackRhs micro-benchmarks specified in TensorFlow (tensorflow/core/kernels/eigen_spatial_convolutions_test.cc), we observe the following runtime numbers:
|
|   AVX512:
|   Parameters                                                      | Runtime without patch (ns) | Runtime with patch (ns) | Speedup
|   ----------------------------------------------------------------|----------------------------|-------------------------|--------
|   BM_RHS_NAME(PackRhs, 128, 24, 24, 3, 64, 5, 5, 1, 1, 256, 56)   | 41350                      | 15073                   | 2.74X
|   BM_RHS_NAME(PackRhs, 32, 64, 64, 32, 64, 5, 5, 1, 1, 256, 56)   | 7277                       | 7341                    | 0.99X
|   BM_RHS_NAME(PackRhs, 32, 64, 64, 32, 64, 5, 5, 2, 2, 256, 56)   | 8675                       | 8681                    | 1.00X
|   BM_RHS_NAME(PackRhs, 32, 64, 64, 30, 64, 5, 5, 1, 1, 256, 56)   | 24155                      | 16079                   | 1.50X
|   BM_RHS_NAME(PackRhs, 32, 64, 64, 30, 64, 5, 5, 2, 2, 256, 56)   | 25052                      | 17152                   | 1.46X
|   BM_RHS_NAME(PackRhs, 32, 256, 256, 4, 16, 8, 8, 1, 1, 256, 56)  | 18269                      | 18345                   | 1.00X
|   BM_RHS_NAME(PackRhs, 32, 256, 256, 4, 16, 8, 8, 2, 4, 256, 56)  | 19468                      | 19872                   | 0.98X
|   BM_RHS_NAME(PackRhs, 32, 64, 64, 4, 16, 3, 3, 1, 1, 36, 432)    | 156060                     | 42432                   | 3.68X
|   BM_RHS_NAME(PackRhs, 32, 64, 64, 4, 16, 3, 3, 2, 2, 36, 432)    | 132701                     | 36944                   | 3.59X
|
|   AVX2:
|   Parameters                                                      | Runtime without patch (ns) | Runtime with patch (ns) | Speedup
|   ----------------------------------------------------------------|----------------------------|-------------------------|--------
|   BM_RHS_NAME(PackRhs, 128, 24, 24, 3, 64, 5, 5, 1, 1, 256, 56)   | 26233                      | 12393                   | 2.12X
|   BM_RHS_NAME(PackRhs, 32, 64, 64, 32, 64, 5, 5, 1, 1, 256, 56)   | 6091                       | 6062                    | 1.00X
|   BM_RHS_NAME(PackRhs, 32, 64, 64, 32, 64, 5, 5, 2, 2, 256, 56)   | 7427                       | 7408                    | 1.00X
|   BM_RHS_NAME(PackRhs, 32, 64, 64, 30, 64, 5, 5, 1, 1, 256, 56)   | 23453                      | 20826                   | 1.13X
|   BM_RHS_NAME(PackRhs, 32, 64, 64, 30, 64, 5, 5, 2, 2, 256, 56)   | 23167                      | 22091                   | 1.09X
|   BM_RHS_NAME(PackRhs, 32, 256, 256, 4, 16, 8, 8, 1, 1, 256, 56)  | 23422                      | 23682                   | 0.99X
|   BM_RHS_NAME(PackRhs, 32, 256, 256, 4, 16, 8, 8, 2, 4, 256, 56)  | 23165                      | 23663                   | 0.98X
|   BM_RHS_NAME(PackRhs, 32, 64, 64, 4, 16, 3, 3, 1, 1, 36, 432)    | 72689                      | 44969                   | 1.62X
|   BM_RHS_NAME(PackRhs, 32, 64, 64, 4, 16, 3, 3, 2, 2, 36, 432)    | 61732                      | 39779                   | 1.55X
|
|   All benchmarks on an Intel Skylake server with 8 cores.
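A minimal sketch of the split-packet merge described above, assuming an AVX2 Packet8f. The helper name and the staging-buffer approach are illustrative assumptions; the actual low-level API merges two partial vector loads inside gemm_pack_rhs without a staging buffer.

    #include <immintrin.h>
    #include <cstring>

    // Sketch only: when 8 consecutive RHS floats straddle two adjacent columns,
    // take n floats from the tail of column c and 8-n from the head of column
    // c+1, then issue a single packet load of the merged data.
    static inline __m256 load_split_packet8f(const float* col0_tail, int n,
                                             const float* col1_head) {
      alignas(32) float buf[8];
      std::memcpy(buf,     col0_tail, n * sizeof(float));        // tail of column c
      std::memcpy(buf + n, col1_head, (8 - n) * sizeof(float));  // head of column c+1
      return _mm256_load_ps(buf);
    }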
* fix alignment in ploadquad  (Gael Guennebaud, 2019-02-22)
|
* AVX512: implement faster ploadquad<Packet16f> thus speeding up GEMM  (Gael Guennebaud, 2019-02-21)
|
* bug #1678: workaround MSVC compilation issues with AVX512  (Gael Guennebaud, 2019-02-15)
|
* Fix conflicts and merge  (Gael Guennebaud, 2019-01-30)
|\
* | Renaming some more `I` identifiers  (Christoph Hertzberg, 2019-01-26)
| |
* | Fix compilation error for logical packet ops with older compilers.  (Rasmus Munk Larsen, 2019-01-16)
| |
* | AVX512: fix pgather/pscatter for Packet4cd and unaligned pointers  (Gael Guennebaud, 2019-01-14)
| |
* | AVX512 (r)sqrt(double) was mistakenly disabled with clang and others  (Gael Guennebaud, 2019-01-14)
| |
* | Resolve.  (Rasmus Munk Larsen, 2019-01-11)
|\ \
| * \ Merged eigen/eigen into default  (Rasmus Larsen, 2019-01-11)
| |\ \
| | * | Remove reinterpret_cast from AVX512 complex implementation  (Mark D Ryan, 2019-01-11)
| | | |   The reinterpret_casts used in ptranspose(PacketBlock<Packet8cf,4>&) and ptranspose(PacketBlock<Packet8cf,8>&) don't appear to be working correctly. They're used to convert the kernel parameters to PacketBlock<Packet8d,T>& so that the complex-number versions of ptranspose can be written using the existing double implementations. Unfortunately, they don't seem to work and are responsible for 9 unit test failures in the AVX512 build of TensorFlow master.
| | | |   This commit fixes the issue by manually initialising PacketBlock<Packet8d,T> variables with the contents of the kernel parameter before calling the double version of ptranspose, and then copying the resulting values back into the kernel parameter before returning (a sketch of this pattern follows this entry).
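A minimal sketch of the copy, transpose, copy-back pattern described above, using made-up stand-in block types rather than Eigen's real PacketBlock<Packet8cf,4> and PacketBlock<Packet8d,4>; only the shape of the fix is illustrated.

    #include <immintrin.h>

    struct CfBlock { __m512  packet[4]; };  // stand-in for PacketBlock<Packet8cf,4>
    struct DBlock  { __m512d packet[4]; };  // stand-in for PacketBlock<Packet8d,4>

    void transpose_double(DBlock& kernel);  // assume the existing double transpose kernel

    // Sketch only: bit-cast each packet by value into a real double block,
    // reuse the double transpose, then copy the results back, instead of
    // reinterpret_cast'ing one block reference into the other.
    void transpose_complex(CfBlock& kernel) {
      DBlock tmp;
      for (int i = 0; i < 4; ++i)
        tmp.packet[i] = _mm512_castps_pd(kernel.packet[i]);
      transpose_double(tmp);
      for (int i = 0; i < 4; ++i)
        kernel.packet[i] = _mm512_castpd_ps(tmp.packet[i]);
    }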
* | | | Rename pones -> ptrue. Use _CMP_TRUE_UQ where appropriate.  (Rasmus Munk Larsen, 2019-01-09)
|\ \ \ \
| | * | | Collapsed revision  (Rasmus Munk Larsen, 2019-01-09)
| | | | |   * Add packet op "pones". Write pnot(a) as pxor(pones(a), a).
| | | | |   * Collapsed revision
| | | | |   * Simplify a bit.
| | | | |   * Undo useless diffs.
| | | | |   * Fix typo.
* | | | | Collapsed revision  (Rasmus Munk Larsen, 2019-01-09)
| |/ / /
|/| | |   * Collapsed revision
| | | |   * Add packet op "pones". Write pnot(a) as pxor(pones(a), a) (see the sketch after this entry).
| | | |   * Collapsed revision
| | | |   * Simplify a bit.
| | | |   * Undo useless diffs.
| | | |   * Fix typo.
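A minimal sketch of the pones/pnot relationship mentioned in the two entries above, written with raw AVX-512 integer intrinsics instead of Eigen's packet wrappers; the function names are illustrative.

    #include <immintrin.h>

    // Sketch only: "pones" yields a packet with every bit set, and pnot(a) is
    // then simply pxor(pones(a), a), i.e. XOR against all-ones.
    static inline __m512i ones_epi32(__m512i /*unused*/) {
      return _mm512_set1_epi32(-1);               // all bits set in every lane
    }
    static inline __m512i not_epi32(__m512i a) {
      return _mm512_xor_si512(ones_epi32(a), a);  // bitwise NOT via XOR with all-ones
    }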
| * | | Simplify a bit.  (Rasmus Munk Larsen, 2019-01-09)
| | | |
| * | | Add packet op "pones". Write pnot(a) as pxor(pones(a), a).  (Rasmus Munk Larsen, 2019-01-09)
|/ / /
* | | Merged eigen/eigen into default  (Rasmus Larsen, 2019-01-09)
|\| |
| * | fix plog(+inf) with AVX512  (Gael Guennebaud, 2019-01-09)
| | |
| * | Add dedicated implementations of predux_any for AVX512, NEON, and Altivec/VSX  (Gael Guennebaud, 2019-01-09)
| | |
| * | Add missing pcmp_lt and others for AVX512  (Gael Guennebaud, 2019-01-09)
| | |
* | | Add support for pcmp_eq and pnot, including for complex types.  (Rasmus Munk Larsen, 2019-01-07)
|/ /
* | PR560: Fix the AVX512f-only builds  (Mark D Ryan, 2019-01-03)
| |   Commit c53eececb0415834b961cb61cd466907261b4b2f introduced AVX512 support for complex numbers but required avx512dq to build. Commit 1d683ae2f5a340a6e2681c8cd0782f4db6b807ea fixed some, but it would seem not all, of the hard avx512dq dependencies. Build failures are still evident in Eigen and TensorFlow when compiling with just avx512f and no avx512dq using gcc 7.3.
| |   Looking at the code, there does indeed seem to be a problem: commit c53eececb0415834b961cb61cd466907261b4b2f calls avx512dq intrinsics directly, e.g., _mm512_extractf32x8_ps and _mm512_and_ps. This commit fixes the issue by replacing the direct intrinsic calls with the various wrapper functions that are safe to use in avx512f-only builds (see the sketch after this entry).
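A minimal sketch of the kind of wrapper such calls are routed through, assuming Eigen's EIGEN_VECTORIZE_AVX512DQ macro as the feature switch; the wrapper name is illustrative, not the one used in the patch.

    #include <immintrin.h>

    // Sketch only: _mm512_and_ps requires AVX512DQ, so an AVX512F-only build
    // performs the bitwise AND in the integer domain and casts back.
    static inline __m512 and_ps_wrapper(__m512 a, __m512 b) {
    #ifdef EIGEN_VECTORIZE_AVX512DQ
      return _mm512_and_ps(a, b);                    // native float AND (needs DQ)
    #else
      return _mm512_castsi512_ps(_mm512_and_si512(   // AVX512F-only fallback
          _mm512_castps_si512(a), _mm512_castps_si512(b)));
    #endif
    }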
* | One more stupid AVX 512 fix (I don't have direct access to AVX512 machines)  (Gael Guennebaud, 2018-12-24)
| |
* | Add EIGEN_STRONG_INLINE where required  (Gael Guennebaud, 2018-12-24)
| |
* | Add missing pcmp_lt_or_nan for AVX512  (Gael Guennebaud, 2018-12-23)
| |
| * Introducing "vectorized" byte on unpacket_traits structsGravatar Gustavo Lima Chaves2018-12-19
|/ | | | | | | | | | | | | | | | | | | | | This is a preparation to a change on gebp_traits, where a new template argument will be introduced to dictate the packet size, so it won't be bound to the current/max packet size only anymore. By having packet types defined early on gebp_traits, one has now to act on packet types, not scalars anymore, for the enum values defined on that class. One approach for reaching the vectorizable/size properties one needs there could be getting the packet's scalar again with unpacket_traits<>, then the size/Vectorizable enum entries from packet_traits<>. It turns out guards like "#ifndef EIGEN_VECTORIZE_AVX512" at AVX/PacketMath.h will hide smaller packet variations of packet_traits<> for some types (and it makes sense to keep that). In other words, one can't go back to the scalar and create a new PacketType, as this will always lead to the maximum packet type for the architecture. The less costly/invasive solution for that, thus, is to add the vectorizable info on every unpacket_traits struct as well.
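A minimal sketch of what a trait specialization carrying that extra information might look like, using made-up type and field names; the actual layout of Eigen's unpacket_traits may differ.

    // Sketch only: each packet's traits expose scalar type, size, and a
    // vectorizable flag directly, so gebp_traits can query the packet type
    // without round-tripping through packet_traits<Scalar>.
    struct FakePacket8f { float v[8]; };  // stand-in for a real packet type

    template <typename Packet> struct unpacket_traits_sketch;

    template <> struct unpacket_traits_sketch<FakePacket8f> {
      typedef float type;
      enum { size = 8, vectorizable = true };
    };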
* Properly set the number of registers for AVX512  (Gael Guennebaud, 2018-12-11)
|
* bug #1641: fix testing of pandnot and fix pandnot for complex on SSE/AVX/AVX512  (Gael Guennebaud, 2018-12-08)
|
* AVX512f includes FMA but GCC does not define __FMA__ with -mavx512f only  (Gael Guennebaud, 2018-12-06)
|
* Fix compilation with avx512f only, i.e., no AVX512DQ  (Gael Guennebaud, 2018-12-06)
|
* Implement AVX512 vectorization of std::complex<float/double>  (Gael Guennebaud, 2018-12-06)
|
* Several improvements regarding packet-bitwise operations:  (Gael Guennebaud, 2018-11-30)
|   - add unit tests
|   - optimize their AVX512f implementation
|   - add missing implementations (half, Packet4f, ...)
* Add psin/pcos on AVX512 -> almost for free, at last!  (Gael Guennebaud, 2018-11-30)
|
* Fix pandnot order in AVX512  (Gael Guennebaud, 2018-11-30)
|
* Fix float-to-double warning  (Gael Guennebaud, 2018-10-16)
|
* Fix warning with AVX512f  (Gael Guennebaud, 2018-10-11)
|
* Fix avx512 plog(NaN) to return NaN instead of +inf  (Gael Guennebaud, 2018-10-11)
|
* Enable avx512 plog with clang  (Gael Guennebaud, 2018-10-11)
|
* fix alignment issue in ploaddup for AVX512  (Gael Guennebaud, 2018-09-28)
|
* Fix warnings in AVX512  (Gael Guennebaud, 2018-09-20)
|
* Use Intel cast intrinsics, since MSVC does not allow direct casting.  (Christoph Hertzberg, 2018-08-24)
|   Reported by David Winkler.
* Re-enable FMA for fast sqrt functions  (Mark D Ryan, 2018-07-30)
|
* Fix AVX512 implementations of psqrt  (Mark D Ryan, 2018-06-25)
|   This commit fixes the AVX512 implementations of psqrt in the same way that 3ed67cb0bb4af65fbf243df598604a8c7630bf7d fixed the AVX2 version of this function. The AVX512 versions of psqrt incorrectly return -0.0 for negative values, instead of NaN (see the scalar sketch after this entry). Fixing the issue requires adding some additional instructions that slow down the algorithms. A similar test to the one used in 3ed67cb0bb4af65fbf243df598604a8c7630bf7d shows that the corrected Packet16f code runs at 73% of the speed of the existing code, while the corrected Packet8d function runs at 68% of the original.
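A minimal scalar sketch of the behaviour the fix restores, assuming the usual x * rsqrt(x) fast path with one Newton step; the function is illustrative only, and Eigen's packet code expresses the branches with compare/select operations.

    #include <cmath>
    #include <cfloat>
    #include <limits>

    // Sketch only: the guard that short-circuits zero/denormal inputs must not
    // also swallow negative inputs, otherwise sqrt(-4.0f) comes out as -0.0f.
    float fast_sqrt_sketch(float x) {
      if (x < 0.0f)
        return std::numeric_limits<float>::quiet_NaN();  // negative input -> NaN, not -0.0
      if (x < FLT_MIN)
        return x;                                        // zero / denormal: skip the rsqrt path
      float r = 1.0f / std::sqrt(x);                     // stand-in for the hardware rsqrt estimate
      r = r * (1.5f - 0.5f * x * r * r);                 // one Newton-Raphson refinement step
      return x * r;
    }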
* Fix compilation with MSVC by reverting to char* for _mm_prefetch, except for PGI (the latter being the one that has the wrong prototype)  (Gael Guennebaud, 2018-06-07)
|
* fix AVX512 plog  (Jayaram Bobba, 2018-04-20)
|