path: root/test/packetmath.cpp
* Add generic PacketMath implementation of the Error Function (erf). (Rasmus Munk Larsen, 2019-09-19)
|
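With a generic packet implementation in place, erf() can be evaluated element-wise on arrays and benefit from vectorization. A minimal usage sketch, assuming the array API exposed by the unsupported SpecialFunctions module; this shows the user-facing entry point only, not the packet code itself:

    #include <Eigen/Core>
    #include <unsupported/Eigen/SpecialFunctions>
    #include <iostream>

    int main() {
      // Element-wise erf over an array; the packet path can now use the
      // generic vectorized implementation instead of a scalar loop.
      Eigen::ArrayXf x = Eigen::ArrayXf::LinSpaced(5, -2.f, 2.f);
      std::cout << x.erf() << std::endl;
      return 0;
    }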
* Merging eigen/eigen. (Srinivas Vasudevan, 2019-09-16)
|\
* | Add Bessel functions to SpecialFunctions. (Srinivas Vasudevan, 2019-09-14)
|/
      - Split SpecialFunctions files into a separate BesselFunctions file.
        In particular add:
        - Modified Bessel functions of the second kind: k0, k1, k0e, k1e
        - Bessel functions of the first kind: j0, j1
        - Bessel functions of the second kind: y0, y1
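A short usage sketch of the new functions. The free-function spellings (bessel_j0, bessel_y0, bessel_k0e) are an assumption about the array API of the BesselFunctions split, not something this commit message guarantees:

    #include <Eigen/Core>
    #include <unsupported/Eigen/SpecialFunctions>
    #include <iostream>

    int main() {
      Eigen::ArrayXd x = Eigen::ArrayXd::LinSpaced(5, 0.5, 4.5);
      // Bessel functions of the first and second kind, and the exponentially
      // scaled modified Bessel function of the second kind.
      std::cout << Eigen::bessel_j0(x) << "\n";
      std::cout << Eigen::bessel_y0(x) << "\n";
      std::cout << Eigen::bessel_k0e(x) << "\n";
      return 0;
    }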
* Add packetized versions of i0e and i1e special functions. (Srinivas Vasudevan, 2019-09-11)
|
      - In particular, refactor the i0e and i1e code so the scalar and vectorized paths share code.
      - Move chebevl to GenericPacketMathFunctions.

      A brief benchmark, building Eigen with the FMA, AVX and AVX2 flags.

      Before:
      CPU: Intel Haswell with HyperThreading (6 cores)
      Benchmark                    Time(ns)     CPU(ns)  Iterations
      -----------------------------------------------------------------
      BM_eigen_i0e_double/1            57.3        57.3    10000000
      BM_eigen_i0e_double/8             398         398     1748554
      BM_eigen_i0e_double/64           3184        3184      218961
      BM_eigen_i0e_double/512         25579       25579       27330
      BM_eigen_i0e_double/4k         205043      205042        3418
      BM_eigen_i0e_double/32k       1646038     1646176         422
      BM_eigen_i0e_double/256k     13180959    13182613          53
      BM_eigen_i0e_double/1M       52684617    52706132          10
      BM_eigen_i0e_float/1             28.4        28.4    24636711
      BM_eigen_i0e_float/8             75.7        75.7     9207634
      BM_eigen_i0e_float/64             512         512     1000000
      BM_eigen_i0e_float/512           4194        4194      166359
      BM_eigen_i0e_float/4k           32756       32761       21373
      BM_eigen_i0e_float/32k         261133      261153        2678
      BM_eigen_i0e_float/256k       2087938     2088231         333
      BM_eigen_i0e_float/1M         8380409     8381234          84
      BM_eigen_i1e_double/1            56.3        56.3    10000000
      BM_eigen_i1e_double/8             397         397     1772376
      BM_eigen_i1e_double/64           3114        3115      223881
      BM_eigen_i1e_double/512         25358       25361       27761
      BM_eigen_i1e_double/4k         203543      203593        3462
      BM_eigen_i1e_double/32k       1613649     1613803         428
      BM_eigen_i1e_double/256k     12910625    12910374          54
      BM_eigen_i1e_double/1M       51723824    51723991          10
      BM_eigen_i1e_float/1             28.3        28.3    24683049
      BM_eigen_i1e_float/8             74.8        74.9     9366216
      BM_eigen_i1e_float/64             505         505     1000000
      BM_eigen_i1e_float/512           4068        4068      171690
      BM_eigen_i1e_float/4k           31803       31806       21948
      BM_eigen_i1e_float/32k         253637      253692        2763
      BM_eigen_i1e_float/256k       2019711     2019918         346
      BM_eigen_i1e_float/1M         8238681     8238713          86

      After:
      CPU: Intel Haswell with HyperThreading (6 cores)
      Benchmark                    Time(ns)     CPU(ns)  Iterations
      -----------------------------------------------------------------
      BM_eigen_i0e_double/1            15.8        15.8    44097476
      BM_eigen_i0e_double/8            99.3        99.3     7014884
      BM_eigen_i0e_double/64            777         777      886612
      BM_eigen_i0e_double/512          6180        6181      100000
      BM_eigen_i0e_double/4k          48136       48140       14678
      BM_eigen_i0e_double/32k        385936      385943        1801
      BM_eigen_i0e_double/256k      3293324     3293551         228
      BM_eigen_i0e_double/1M       12423600    12424458          57
      BM_eigen_i0e_float/1             16.3        16.3    43038042
      BM_eigen_i0e_float/8             30.1        30.1    23456931
      BM_eigen_i0e_float/64             169         169     4132875
      BM_eigen_i0e_float/512           1338        1339      516860
      BM_eigen_i0e_float/4k           10191       10191       68513
      BM_eigen_i0e_float/32k          81338       81337        8531
      BM_eigen_i0e_float/256k        651807      651984        1000
      BM_eigen_i0e_float/1M         2633821     2634187         268
      BM_eigen_i1e_double/1            16.2        16.2    42352499
      BM_eigen_i1e_double/8             110         110     6316524
      BM_eigen_i1e_double/64            822         822      851065
      BM_eigen_i1e_double/512          6480        6481      100000
      BM_eigen_i1e_double/4k          51843       51843       10000
      BM_eigen_i1e_double/32k        414854      414852        1680
      BM_eigen_i1e_double/256k      3320001     3320568         212
      BM_eigen_i1e_double/1M       13442795    13442391          53
      BM_eigen_i1e_float/1             17.6        17.6    41025735
      BM_eigen_i1e_float/8             35.5        35.5    19597891
      BM_eigen_i1e_float/64             240         240     2924237
      BM_eigen_i1e_float/512           1424        1424      485953
      BM_eigen_i1e_float/4k           10722       10723       65162
      BM_eigen_i1e_float/32k          86286       86297        8048
      BM_eigen_i1e_float/256k        691821      691868        1000
      BM_eigen_i1e_float/1M         2777336     2777747         256

      This shows anywhere from a 50% to 75% improvement on these operations. I've also
      benchmarked without any of these flags turned on and got similar performance to
      before (if not better).
      Also tested packetmath.cpp + special_functions to ensure no regressions.
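The shared helper mentioned above is a Cephes-style Chebyshev series evaluator. A minimal scalar sketch of that recurrence (Clenshaw form); the committed version operates on packets via pmadd/psub rather than on plain scalars, so take this only as an illustration of the technique:

    // Evaluate a Chebyshev series at x using the Clenshaw recurrence,
    // in the same shape as the classic Cephes chebevl helper.
    template <typename T, int N>
    T chebevl_ref(T x, const T (&coef)[N]) {
      T b0 = coef[0];
      T b1 = T(0);
      T b2 = T(0);
      for (int i = 1; i < N; ++i) {
        b2 = b1;
        b1 = b0;
        b0 = x * b1 - b2 + coef[i];
      }
      return T(0.5) * (b0 - b2);
    }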
* Merging from eigen/eigen. (Srinivas Vasudevan, 2019-09-03)
|\
* | Add ndtri function, the inverse of the normal distribution function. (Srinivas Vasudevan, 2019-08-12)
| |
| * Add more tests for corner cases of log1p and expm1. Add handling of infinite arguments to log1p such that log1p(inf) = inf. (Rasmus Munk Larsen, 2019-08-28)
| |
| * Revert changes to std_fallback::log1p that broke handling of arguments less than -1. Fix the packet op accordingly. (Rasmus Munk Larsen, 2019-08-27)
| |
| * Implement vectorized versions of log1p and expm1 in Eigen using Kahan's formulas, and change the scalar implementations to properly handle infinite arguments. (Rasmus Munk Larsen, 2019-08-12)
|/
      Depending on the instruction set, significant speedups are observed for the vectorized path:
      - log1p wall time is reduced 60-93% (2.5x - 15x speedup)
      - expm1 wall time is reduced 0-85% (1x - 7x speedup)
      The scalar path is slower by 20-30% due to the extra branch needed to handle +infinity correctly.
      Full benchmarks measured on Intel(R) Xeon(R) Gold 6154 here:
      https://bitbucket.org/snippets/rmlarsen/MXBkpM
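For reference, the scalar shape of Kahan's log1p trick being vectorized here. A minimal sketch only; the committed packet code expresses the branches with selects rather than if statements:

    #include <cmath>

    // Kahan's formula: with u = 1 + x, log1p(x) = log(u) * x / (u - 1).
    // The ratio x / (u - 1) compensates for the rounding error committed
    // when forming u, so accuracy is preserved for tiny |x|.
    double log1p_kahan(double x) {
      if (std::isinf(x) && x > 0) return x;  // log1p(+inf) = +inf
      double u = 1.0 + x;
      if (u == 1.0) return x;                // |x| below the rounding threshold
      return std::log(u) * (x / (u - 1.0));
    }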
* Various fixes for packet ops. (Rasmus Munk Larsen, 2019-06-20)
|
      1. Fix buggy pcmp_eq and unit test for half types.
      2. Add unit test for pselect and add specializations for SSE 4.1, AVX512, and half types.
      3. Get rid of FIXME: Implement faster pnegate for half by XOR'ing with a sign bit mask.
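The select and negate operations referenced above are pure bit manipulation. A scalar, single-lane sketch, under the assumption that the comparison produced an all-ones or all-zeros mask (the real packet ops do the same thing across a whole SIMD register):

    #include <cstdint>
    #include <cstring>

    // pselect(mask, a, b): take 'a' where the mask bits are set, 'b' elsewhere.
    float select_lane(uint32_t mask, float a, float b) {
      uint32_t ai, bi, ri;
      std::memcpy(&ai, &a, sizeof(ai));
      std::memcpy(&bi, &b, sizeof(bi));
      ri = (mask & ai) | (~mask & bi);
      float r;
      std::memcpy(&r, &ri, sizeof(r));
      return r;
    }

    // pnegate by flipping the IEEE sign bit; for half precision the same
    // trick uses the 16-bit mask 0x8000.
    float negate_lane(float x) {
      uint32_t xi;
      std::memcpy(&xi, &x, sizeof(xi));
      xi ^= 0x80000000u;
      float r;
      std::memcpy(&r, &xi, sizeof(r));
      return r;
    }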
* Add masked_store_available to unpacket_traits (Eugene Zhulenev, 2019-05-02)
|
* Add masked pstoreu to AVX and AVX512 PacketMath (Eugene Zhulenev, 2019-05-02)
|
* Adding lowlevel APIs for optimized RHS packet load in TensorFlow SpatialConvolution (Anuj Rawat, 2019-04-20)
|
      Low-level APIs are added in order to optimize packet load in gemm_pack_rhs in
      TensorFlow SpatialConvolution. The optimization is for the scenario when a packet
      is split across 2 adjacent columns. In this case we read it as two 'partial'
      packets and then merge these into 1. Currently this only works for Packet16f
      (AVX512) and Packet8f (AVX2). We plan to add this for other packet types
      (such as Packet8d) also.

      This optimization shows significant speedup in SpatialConvolution with certain
      parameters. Some examples are below. Benchmark parameters are specified as:
      Batch size, Input dim, Depth, Num of filters, Filter dim. Speedup numbers are
      specified for number of threads 1, 2, 4, 8, 16.

      AVX512:
      Parameters                  | Speedup (Num of threads: 1, 2, 4, 8, 16)
      ----------------------------|------------------------------------------
      128, 24x24, 3, 64, 5x5      | 2.18X, 2.13X, 1.73X, 1.64X, 1.66X
      128, 24x24, 1, 64, 8x8      | 2.00X, 1.98X, 1.93X, 1.91X, 1.91X
      32, 24x24, 3, 64, 5x5       | 2.26X, 2.14X, 2.17X, 2.22X, 2.33X
      128, 24x24, 3, 64, 3x3      | 1.51X, 1.45X, 1.45X, 1.67X, 1.57X
      32, 14x14, 24, 64, 5x5      | 1.21X, 1.19X, 1.16X, 1.70X, 1.17X
      128, 128x128, 3, 96, 11x11  | 2.17X, 2.18X, 2.19X, 2.20X, 2.18X

      AVX2:
      Parameters                  | Speedup (Num of threads: 1, 2, 4, 8, 16)
      ----------------------------|------------------------------------------
      128, 24x24, 3, 64, 5x5      | 1.66X, 1.65X, 1.61X, 1.56X, 1.49X
      32, 24x24, 3, 64, 5x5       | 1.71X, 1.63X, 1.77X, 1.58X, 1.68X
      128, 24x24, 1, 64, 5x5      | 1.44X, 1.40X, 1.38X, 1.37X, 1.33X
      128, 24x24, 3, 64, 3x3      | 1.68X, 1.63X, 1.58X, 1.56X, 1.62X
      128, 128x128, 3, 96, 11x11  | 1.36X, 1.36X, 1.37X, 1.37X, 1.37X

      In the higher-level benchmark cifar10, we observe a runtime improvement of around
      6% for AVX512 on an Intel Skylake server (8 cores).

      On the lower-level PackRhs micro-benchmarks specified in TensorFlow
      tensorflow/core/kernels/eigen_spatial_convolutions_test.cc, we observe the
      following runtime numbers:

      AVX512:
      Parameters                                                     | Runtime without patch (ns) | Runtime with patch (ns) | Speedup
      ---------------------------------------------------------------|----------------------------|-------------------------|--------
      BM_RHS_NAME(PackRhs, 128, 24, 24, 3, 64, 5, 5, 1, 1, 256, 56)  | 41350                      | 15073                   | 2.74X
      BM_RHS_NAME(PackRhs, 32, 64, 64, 32, 64, 5, 5, 1, 1, 256, 56)  | 7277                       | 7341                    | 0.99X
      BM_RHS_NAME(PackRhs, 32, 64, 64, 32, 64, 5, 5, 2, 2, 256, 56)  | 8675                       | 8681                    | 1.00X
      BM_RHS_NAME(PackRhs, 32, 64, 64, 30, 64, 5, 5, 1, 1, 256, 56)  | 24155                      | 16079                   | 1.50X
      BM_RHS_NAME(PackRhs, 32, 64, 64, 30, 64, 5, 5, 2, 2, 256, 56)  | 25052                      | 17152                   | 1.46X
      BM_RHS_NAME(PackRhs, 32, 256, 256, 4, 16, 8, 8, 1, 1, 256, 56) | 18269                      | 18345                   | 1.00X
      BM_RHS_NAME(PackRhs, 32, 256, 256, 4, 16, 8, 8, 2, 4, 256, 56) | 19468                      | 19872                   | 0.98X
      BM_RHS_NAME(PackRhs, 32, 64, 64, 4, 16, 3, 3, 1, 1, 36, 432)   | 156060                     | 42432                   | 3.68X
      BM_RHS_NAME(PackRhs, 32, 64, 64, 4, 16, 3, 3, 2, 2, 36, 432)   | 132701                     | 36944                   | 3.59X

      AVX2:
      Parameters                                                     | Runtime without patch (ns) | Runtime with patch (ns) | Speedup
      ---------------------------------------------------------------|----------------------------|-------------------------|--------
      BM_RHS_NAME(PackRhs, 128, 24, 24, 3, 64, 5, 5, 1, 1, 256, 56)  | 26233                      | 12393                   | 2.12X
      BM_RHS_NAME(PackRhs, 32, 64, 64, 32, 64, 5, 5, 1, 1, 256, 56)  | 6091                       | 6062                    | 1.00X
      BM_RHS_NAME(PackRhs, 32, 64, 64, 32, 64, 5, 5, 2, 2, 256, 56)  | 7427                       | 7408                    | 1.00X
      BM_RHS_NAME(PackRhs, 32, 64, 64, 30, 64, 5, 5, 1, 1, 256, 56)  | 23453                      | 20826                   | 1.13X
      BM_RHS_NAME(PackRhs, 32, 64, 64, 30, 64, 5, 5, 2, 2, 256, 56)  | 23167                      | 22091                   | 1.09X
      BM_RHS_NAME(PackRhs, 32, 256, 256, 4, 16, 8, 8, 1, 1, 256, 56) | 23422                      | 23682                   | 0.99X
      BM_RHS_NAME(PackRhs, 32, 256, 256, 4, 16, 8, 8, 2, 4, 256, 56) | 23165                      | 23663                   | 0.98X
      BM_RHS_NAME(PackRhs, 32, 64, 64, 4, 16, 3, 3, 1, 1, 36, 432)   | 72689                      | 44969                   | 1.62X
      BM_RHS_NAME(PackRhs, 32, 64, 64, 4, 16, 3, 3, 2, 2, 36, 432)   | 61732                      | 39779                   | 1.55X

      All benchmarks on an Intel Skylake server with 8 cores.
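A hedged sketch of the split-packet idea described above, using AVX intrinsics. The function name, the pointer arithmetic, and the assumption that both loads stay inside a contiguous column-major block are illustrative only; this is not the actual low-level API added to Eigen/TensorFlow:

    #include <immintrin.h>
    #include <cstdint>

    // Assemble one 8-float packet that straddles two adjacent columns: 'n' floats
    // come from the tail of the first column, 8-n from the head of the next.
    static inline __m256 merge_split_packet(const float* tail_col0,
                                            const float* head_col1, int n) {
      __m256 p0 = _mm256_loadu_ps(tail_col0);      // lanes 0..n-1 hold the wanted tail
      __m256 p1 = _mm256_loadu_ps(head_col1 - n);  // lanes n..7 hold the wanted head
      alignas(32) int32_t sel[8];
      for (int i = 0; i < 8; ++i) sel[i] = (i < n) ? -1 : 0;
      __m256 mask =
          _mm256_castsi256_ps(_mm256_load_si256(reinterpret_cast<const __m256i*>(sel)));
      return _mm256_blendv_ps(p1, p0, mask);       // take p0 where the mask is set
    }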
* AVX512 (r)sqrt(double) was mistakenly disabled with clang and others (Gael Guennebaud, 2019-01-14)
|
* Rename pones -> ptrue. Use _CMP_TRUE_UQ where appropriate. (Rasmus Munk Larsen, 2019-01-09)
|\
* | Collapsed revision (Rasmus Munk Larsen, 2019-01-09)
| |
      * Collapsed revision
      * Add packet op "pones". Write pnot(a) as pxor(pones(a), a).
      * Collapsed revision
      * Simplify a bit.
      * Undo useless diffs.
      * Fix typo.
| * Add packet op "pones". Write pnot(a) as pxor(pones(a), a). (Rasmus Munk Larsen, 2019-01-09)
|/
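A minimal SSE-flavored sketch of the identity pnot(a) = pxor(pones(a), a) used in these commits (the op was later renamed to ptrue, as noted above). Eigen's actual implementations are per-backend specializations, so treat this as an illustration only:

    #include <immintrin.h>

    // An all-ones packet: comparing a register with itself for equality
    // sets every bit of every lane.
    static inline __m128i pones_sse(__m128i a) { return _mm_cmpeq_epi32(a, a); }

    // Bitwise NOT expressed as XOR with the all-ones packet.
    static inline __m128i pnot_sse(__m128i a) { return _mm_xor_si128(pones_sse(a), a); }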
* Merged eigen/eigen into default (Rasmus Larsen, 2019-01-09)
|\
| * bug #1652: implements a much more accurate version of vectorized sin/cos. (Gael Guennebaud, 2019-01-09)
| |
      This new version achieves the same speed for SSE/AVX, and is slightly faster with FMA.
      Guarantees are as follows:
      - no FMA: 1 ULP up to 3*pi, 2 ULP up to sin(25966) and cos(18838), fallback to
        std::sin/cos for larger inputs
      - FMA: 1 ULP up to sin(117435.992) and cos(71476.0625), fallback to std::sin/cos
        for larger inputs
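Accuracy claims like these hinge on careful argument reduction. A scalar sketch of the classic Cody-Waite style reduction such implementations build on; the two-term split shown here is illustrative, and the committed code goes further (more split terms and FMA variants) to reach the quoted ULP bounds:

    #include <cmath>

    // Reduce x to y in [-pi/4, pi/4] with x = k*(pi/2) + y, returning k mod 4.
    // pi/2 is split into a coarse part with few mantissa bits (so k*pio2_hi is
    // exact for moderate k) and a small correction term.
    float reduce_pi_over_2(float x, int* quadrant) {
      const float two_over_pi = 0.636619772367581343f;
      const float pio2_hi = 1.5703125f;          // 201 * 2^-7, exactly representable
      const float pio2_lo = 4.83826794897e-4f;   // pi/2 - pio2_hi
      float k = std::rint(x * two_over_pi);
      *quadrant = static_cast<int>(k) & 3;
      return (x - k * pio2_hi) - k * pio2_lo;
    }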
* | Add support for pcmp_eq and pnot, including for complex types. (Rasmus Munk Larsen, 2019-01-07)
|/
* Fix unit test (Gael Guennebaud, 2018-12-27)
|
* Implement a faster fix for sin/cos of large entries that also correctly handles INF input. (Gael Guennebaud, 2018-12-23)
|
* Make sure that psin/pcos return numbers in [-1,1] for large inputs (though sin/cos on large entries is quite useless because it's inaccurate). (Gael Guennebaud, 2018-12-23)
|
* Fix plog(+INF): it returned ~87 instead of +INF (Gael Guennebaud, 2018-12-23)
|
* bug #1641: fix testing of pandnot and fix pandnot for complex on SSE/AVX/AVX512 (Gael Guennebaud, 2018-12-08)
|
* Implement AVX512 vectorization of std::complex<float/double> (Gael Guennebaud, 2018-12-06)
|
* Several improvements regarding packet-bitwise operations: (Gael Guennebaud, 2018-11-30)
|
      - add unit tests
      - optimize their AVX512f implementation
      - add missing implementations (half, Packet4f, ...)
* Extend unit test to recursively check half-packet types and non-packet types (Gael Guennebaud, 2018-11-26)
|
* fix alignment issue in ploaddup for AVX512 (Gael Guennebaud, 2018-09-28)
|
* Fix warning (Gael Guennebaud, 2018-09-20)
|
* Get rid of EIGEN_TEST_FUNC; unit tests must now be declared with EIGEN_DECLARE_TEST(mytest) { /* code */ }. (Gael Guennebaud, 2018-07-17)
|
      This provides several advantages:
      - more flexibility in designing unit tests
      - unit tests can be glued to speed up compilation
      - unit tests are compiled with the same predefined macros, which is a requirement for zapcc
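What a test file looks like after this change, following the EIGEN_DECLARE_TEST(mytest) { /* code */ } pattern quoted in the commit message. The subtest layout below is illustrative, not the exact contents of packetmath.cpp:

    #include "main.h"  // Eigen's test harness (g_repeat, CALL_SUBTEST_*)

    template <typename Scalar>
    void packetmath_checks() {
      // ... per-scalar packet checks go here ...
    }

    EIGEN_DECLARE_TEST(packetmath) {
      for (int i = 0; i < g_repeat; ++i) {
        CALL_SUBTEST_1(packetmath_checks<float>());
        CALL_SUBTEST_2(packetmath_checks<double>());
      }
    }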
* palign is not used anymore, so let's relax the unit test (Gael Guennebaud, 2018-07-06)
|
* Complete Packet8h implementation and test it in packetmath unit test (Gael Guennebaud, 2018-07-06)
|
* Fix unit test for SIMD engine not supporting sqrt (Gael Guennebaud, 2018-04-26)
|
* Rename predux_downto4 to be more accurate on its semantic. (Gael Guennebaud, 2018-04-03)
|
* Fix unit testing of predux_downto4 (bad name), and add unit testing of prsqrt (Gael Guennebaud, 2018-04-03)
|
* Misc. source and comment typos (luz.paz, 2018-03-11)
|
      Found using `codespell` and `grep` from downstream FreeCAD
* Added support for expm1 in Eigen. (Srinivas Vasudevan, 2016-12-02)
|
* replace sizeof(Packet) with PacketSize else it breaks for ZVector.Packet4f (Konstantinos Margaritis, 2016-11-17)
|
* Merged eigen/eigen into default (Benoit Steiner, 2016-11-03)
|\
| * Add pinsertfirst function and implement pinsertlast for complex on SSE/AVX. (Gael Guennebaud, 2016-11-02)
| |
| * Add a pinsertlast function replacing the last entry of a packet by a scalar. (Gael Guennebaud, 2016-10-25)
| |
      (useful to vectorize LinSpaced)
* | Merged eigen/eigen into default (Benoit Steiner, 2016-10-12)
|\|
* | Renamed predux_half into predux_downto4 (Benoit Steiner, 2016-10-06)
| |
* | Merged latest updates from trunk (Benoit Steiner, 2016-10-05)
|\ \
| | * Fix a bug in the implementation of Carmack's fast sqrt algorithm in Eigen (enabled by EIGEN_FAST_MATH), which causes the vectorized parts of the computation to return -0.0 instead of NaN for negative arguments. (Rasmus Munk Larsen, 2016-10-04)
| |/
      Benchmark speed in Giga-sqrts/s, Intel(R) Xeon(R) CPU E5-1650 v3 @ 3.50GHz
      -----------------------------------------
                    SSE     AVX
      Fast=1        2.529G  4.380G
      Fast=0        1.944G  1.898G
      Fast=1 fixed  2.214G  3.739G

      This table illustrates the worst case in terms of speed impact: it was measured by
      repeatedly computing the sqrt of an n=4096 float vector that fits in L1 cache. For
      large vectors the operation becomes memory bound and the differences between the
      different versions are almost negligible.
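A hedged SSE sketch of the fast-math sqrt pattern in question (rsqrt estimate plus one Newton step, multiplied back by x), together with the kind of select that routes non-positive inputs to a correct result. This illustrates the technique, not the exact committed code:

    #include <immintrin.h>

    static inline __m128 fast_sqrt_ps(__m128 x) {
      const __m128 half = _mm_set1_ps(0.5f);
      const __m128 three = _mm_set1_ps(3.0f);
      // rsqrt estimate (~12 bits) refined by one Newton-Raphson step:
      // r <- 0.5 * r * (3 - x*r*r)
      __m128 r = _mm_rsqrt_ps(x);
      r = _mm_mul_ps(_mm_mul_ps(half, r),
                     _mm_sub_ps(three, _mm_mul_ps(x, _mm_mul_ps(r, r))));
      __m128 y = _mm_mul_ps(x, r);  // sqrt(x) ~= x * rsqrt(x)
      // For x <= 0, fall back to the exact sqrt, which yields 0 for zero and
      // NaN for negative inputs instead of the -0.0 the fast path produced.
      __m128 fallback = _mm_cmple_ps(x, _mm_setzero_ps());
      return _mm_or_ps(_mm_and_ps(fallback, _mm_sqrt_ps(x)),
                       _mm_andnot_ps(fallback, y));
    }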
| * Add a note regarding gcc bug #72867 (Gael Guennebaud, 2016-09-22)
| |
| * Fix compilation in non C++11 mode. (Gael Guennebaud, 2016-08-23)
| |
| * Add log1p support for CUDA and half floats (Igor Babuschkin, 2016-08-08)
| |
| * Made the packetmath test compile again. A better fix would be to move the special function tests to the unsupported directory where the code now resides. (Benoit Steiner, 2016-07-11)
| |