Commit history (message and date, newest first):
* 2020-09-08: Add missing functions for Packet8bf in the Altivec architecture.
  Includes new tests for bfloat16 packets. Also fixes prsqrt in GenericPacketMath.
* 2020-09-02: MatrixProduct enhancements:
  - Changes to Altivec/MatrixProduct: adapt the code to GCC 10; generic code-style and performance enhancements; add PanelMode support; add stride/offset support; enable float64, std::complex<float> and std::complex<double>; fix the missing symm_pack; enable mixed types.
  - Add std::complex tests to blasutil.
  - Add an implementation of storePacketBlock for when Incr != 1.
* 2020-09-02: Change u/int8_t to un/signed char because clang does not understand it.
  Also implements pcmp_eq for Packet8 and Packet16.
* 2020-08-28: Change Packet8s and Packet8us to use vector commands on Power for pmadd, pmul and psub.
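These packet ops have simple elementwise semantics. A minimal scalar sketch of pmadd (multiply-add, a*b + c), using a plain array as a hypothetical stand-in for Eigen's actual Altivec-backed Packet8s, which on Power maps to a single multiply-add vector instruction (e.g. vec_mladd for shorts):

```cpp
#include <array>
#include <cstddef>

// Hypothetical 8-lane short packet, standing in for Eigen's Packet8s.
using Packet8s = std::array<short, 8>;

// Elementwise multiply-add: r[i] = a[i] * b[i] + c[i].
Packet8s pmadd(const Packet8s& a, const Packet8s& b, const Packet8s& c) {
    Packet8s r{};
    for (std::size_t i = 0; i < r.size(); ++i)
        r[i] = static_cast<short>(a[i] * b[i] + c[i]);
    return r;
}
```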
* 2020-08-10: Add support for Bfloat16 to use vector instructions on the Altivec architecture.
* 2020-06-16: Fix pscatter and pgather for Altivec complex double.
* 2020-05-20: Add pscatter for Packet16{u}c (int8).
* 2020-05-19:
  - Vectorize MMA packing.
  - Optimize the MMA kernel.
  - Add a PacketBlock store to blas_data_mapper.
* 2020-05-11: Refactor Altivec template functions for better code reusability.
* 2020-05-07: Remove unused packet op "palign". Also cleans up a compiler warning in C++03 mode in AVX512/Complex.h.
* 2020-04-27: Add vector-instruction support to Packet16uc and Packet16c.
* 2020-04-23: Remove unused packet op "preduxp".
* 2020-04-21: Add Packet8s and Packet8us to support signed/unsigned int16/short Altivec vector operations.
* 2020-03-23: Adhere to the recommended load/store intrinsics for ppc64le.
* 2020-03-21: Fix float32's pround halfway criteria to match the STL's criteria.
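For context, the STL's std::round breaks halfway cases away from zero, while hardware round-to-nearest typically breaks ties to even; the commit above aligns pround's tie-breaking with std::round. A small portable sketch of the difference (plain C++, not Eigen's actual pround code):

```cpp
#include <cmath>

// std::round: halfway cases go away from zero,
// so round(2.5f) == 3.0f and round(-2.5f) == -3.0f.
float round_like_stl(float x) { return std::round(x); }

// std::nearbyint in the default rounding mode: ties go to even,
// so nearbyint(2.5f) == 2.0f but nearbyint(3.5f) == 4.0f.
float round_ties_to_even(float x) { return std::nearbyint(x); }
```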
* 2020-03-19: Add shift_left<N> and shift_right<N> coefficient-wise unary Array functions.
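A rough sketch of what such coefficient-wise shifts compute, written as free functions over a plain integer array rather than Eigen's actual Array API (the function names here are illustrative only; the compile-time shift count mirrors the <N> template parameter in the commit above):

```cpp
#include <array>
#include <cstdint>

// Apply x << N to every coefficient of the array.
template <int N, std::size_t Size>
std::array<std::int32_t, Size> shift_left_all(std::array<std::int32_t, Size> a) {
    for (auto& x : a) x <<= N;
    return a;
}

// Apply x >> N (arithmetic shift for signed types) to every coefficient.
template <int N, std::size_t Size>
std::array<std::int32_t, Size> shift_right_all(std::array<std::int32_t, Size> a) {
    for (auto& x : a) x >>= N;
    return a;
}
```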
* 2020-01-13: Switch unpacket_traits<Packet4i> to vectorizable=true.
* 2019-09-27: Move the implementation of the vectorized error function erf() to SpecialFunctionsImpl.h.
* 2019-09-19: Add a generic PacketMath implementation of the error function (erf).
* 2019-09-05: Fix compilation without a vector engine available (e.g. x86 with SSE disabled): ppolevl is required by ndtri even for the scalar path.
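ppolevl is a packet-level polynomial evaluator in the style of Cephes' polevl, which is why ndtri needs it even on the scalar path. A scalar sketch of the same Horner scheme (illustrative, not Eigen's exact signature):

```cpp
#include <cstddef>

// Evaluate coeffs[0]*x^(n-1) + ... + coeffs[n-1] by Horner's rule.
// Coefficients are given from highest to lowest degree, as in Cephes.
double polevl(double x, const double* coeffs, std::size_t n) {
    double r = coeffs[0];
    for (std::size_t i = 1; i < n; ++i)
        r = r * x + coeffs[i];
    return r;
}
```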
* 2019-08-14: Fix debug macros in p{load,store}u.
* 2019-08-14: Add missing pcmp_XX methods for double/Packet2d.
  This actually fixes an issue in the packetmath_2 unit test with pcmp_eq when compiled with clang: when pcmp_eq(Packet4f, Packet4f) was used instead of pcmp_eq(Packet2d, Packet2d), the test failed due to a NaN in the reference vector.
* 2019-08-09: Fix packed load/store for PowerPC's VSX.
  The vec_vsx_ld/vec_vsx_st builtins were wrongly used for aligned load/store. In fact, they perform unaligned memory accesses and, even when the address is 16-byte aligned, they are much slower (at least 2x) than their aligned counterparts. For double/Packet2d, vec_xl/vec_xst should be preferred over vec_ld/vec_st, although the latter work when cast to float/Packet4f. Also silences a weird warning emitted by some GCC versions; Clang does not emit it.
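The aligned/unaligned contract behind this fix can be modeled in portable C++: the aligned load may assume (and here asserts) 16-byte alignment, while the unaligned load must accept any address. A sketch with a hypothetical 2-lane double packet, not the actual Power-specific builtins:

```cpp
#include <array>
#include <cassert>
#include <cstdint>
#include <cstring>

using Packet2d = std::array<double, 2>;  // stand-in for the VSX vector type

// Aligned load: the caller guarantees 16-byte alignment, as vec_ld/vec_xl
// expect (vec_ld even silently masks the low address bits; we assert instead).
Packet2d pload(const double* from) {
    assert(reinterpret_cast<std::uintptr_t>(from) % 16 == 0);
    Packet2d r;
    std::memcpy(r.data(), from, sizeof r);
    return r;
}

// Unaligned load: any address is valid, as with vec_vsx_ld.
Packet2d ploadu(const double* from) {
    Packet2d r;
    std::memcpy(r.data(), from, sizeof r);
    return r;
}
```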
* 2019-08-09: Fix the offset argument of ploadu/pstoreu for Altivec.
  If no offset is given, then it should be zero. Also passes the full address to the vec_vsx_ld/st builtins, removes the useless _EIGEN_ALIGNED_PTR & _EIGEN_MASK_ALIGNMENT macros, and removes unnecessary casts.
* 2019-08-09: bug #1718: Add a cast to successfully compile with clang on PowerPC.
  Ignores the -Wc11-extensions warnings thrown by clang in Altivec/PacketMath.h.
* 2019-05-02: Add masked_store_available to unpacket_traits.
* 2019-04-20: Add low-level APIs for optimized RHS packet load in TensorFlow SpatialConvolution.
  Low-level APIs are added in order to optimize the packet load in gemm_pack_rhs in TensorFlow's SpatialConvolution. The optimization targets the scenario where a packet is split across 2 adjacent columns: in this case we read it as two 'partial' packets and then merge these into one. Currently this only works for Packet16f (AVX512) and Packet8f (AVX2). We plan to add this for other packet types (such as Packet8d) also.

  This optimization shows significant speedup in SpatialConvolution with certain parameters; some examples are below. Benchmark parameters are specified as: batch size, input dim, depth, number of filters, filter dim. Speedup numbers are given for 1, 2, 4, 8, and 16 threads.

  AVX512:

  | Parameters                 | Speedup (threads: 1, 2, 4, 8, 16) |
  |----------------------------|-----------------------------------|
  | 128, 24x24, 3, 64, 5x5     | 2.18X, 2.13X, 1.73X, 1.64X, 1.66X |
  | 128, 24x24, 1, 64, 8x8     | 2.00X, 1.98X, 1.93X, 1.91X, 1.91X |
  | 32, 24x24, 3, 64, 5x5      | 2.26X, 2.14X, 2.17X, 2.22X, 2.33X |
  | 128, 24x24, 3, 64, 3x3     | 1.51X, 1.45X, 1.45X, 1.67X, 1.57X |
  | 32, 14x14, 24, 64, 5x5     | 1.21X, 1.19X, 1.16X, 1.70X, 1.17X |
  | 128, 128x128, 3, 96, 11x11 | 2.17X, 2.18X, 2.19X, 2.20X, 2.18X |

  AVX2:

  | Parameters                 | Speedup (threads: 1, 2, 4, 8, 16) |
  |----------------------------|-----------------------------------|
  | 128, 24x24, 3, 64, 5x5     | 1.66X, 1.65X, 1.61X, 1.56X, 1.49X |
  | 32, 24x24, 3, 64, 5x5      | 1.71X, 1.63X, 1.77X, 1.58X, 1.68X |
  | 128, 24x24, 1, 64, 5x5     | 1.44X, 1.40X, 1.38X, 1.37X, 1.33X |
  | 128, 24x24, 3, 64, 3x3     | 1.68X, 1.63X, 1.58X, 1.56X, 1.62X |
  | 128, 128x128, 3, 96, 11x11 | 1.36X, 1.36X, 1.37X, 1.37X, 1.37X |

  In the higher-level benchmark cifar10, we observe a runtime improvement of around 6% for AVX512 on an Intel Skylake server (8 cores).

  On the lower-level PackRhs micro-benchmarks specified in TensorFlow's tensorflow/core/kernels/eigen_spatial_convolutions_test.cc, we observe the following runtimes:

  AVX512:

  | Parameters                                                     | Without patch (ns) | With patch (ns) | Speedup |
  |----------------------------------------------------------------|--------------------|-----------------|---------|
  | BM_RHS_NAME(PackRhs, 128, 24, 24, 3, 64, 5, 5, 1, 1, 256, 56)  | 41350              | 15073           | 2.74X   |
  | BM_RHS_NAME(PackRhs, 32, 64, 64, 32, 64, 5, 5, 1, 1, 256, 56)  | 7277               | 7341            | 0.99X   |
  | BM_RHS_NAME(PackRhs, 32, 64, 64, 32, 64, 5, 5, 2, 2, 256, 56)  | 8675               | 8681            | 1.00X   |
  | BM_RHS_NAME(PackRhs, 32, 64, 64, 30, 64, 5, 5, 1, 1, 256, 56)  | 24155              | 16079           | 1.50X   |
  | BM_RHS_NAME(PackRhs, 32, 64, 64, 30, 64, 5, 5, 2, 2, 256, 56)  | 25052              | 17152           | 1.46X   |
  | BM_RHS_NAME(PackRhs, 32, 256, 256, 4, 16, 8, 8, 1, 1, 256, 56) | 18269              | 18345           | 1.00X   |
  | BM_RHS_NAME(PackRhs, 32, 256, 256, 4, 16, 8, 8, 2, 4, 256, 56) | 19468              | 19872           | 0.98X   |
  | BM_RHS_NAME(PackRhs, 32, 64, 64, 4, 16, 3, 3, 1, 1, 36, 432)   | 156060             | 42432           | 3.68X   |
  | BM_RHS_NAME(PackRhs, 32, 64, 64, 4, 16, 3, 3, 2, 2, 36, 432)   | 132701             | 36944           | 3.59X   |

  AVX2:

  | Parameters                                                     | Without patch (ns) | With patch (ns) | Speedup |
  |----------------------------------------------------------------|--------------------|-----------------|---------|
  | BM_RHS_NAME(PackRhs, 128, 24, 24, 3, 64, 5, 5, 1, 1, 256, 56)  | 26233              | 12393           | 2.12X   |
  | BM_RHS_NAME(PackRhs, 32, 64, 64, 32, 64, 5, 5, 1, 1, 256, 56)  | 6091               | 6062            | 1.00X   |
  | BM_RHS_NAME(PackRhs, 32, 64, 64, 32, 64, 5, 5, 2, 2, 256, 56)  | 7427               | 7408            | 1.00X   |
  | BM_RHS_NAME(PackRhs, 32, 64, 64, 30, 64, 5, 5, 1, 1, 256, 56)  | 23453              | 20826           | 1.13X   |
  | BM_RHS_NAME(PackRhs, 32, 64, 64, 30, 64, 5, 5, 2, 2, 256, 56)  | 23167              | 22091           | 1.09X   |
  | BM_RHS_NAME(PackRhs, 32, 256, 256, 4, 16, 8, 8, 1, 1, 256, 56) | 23422              | 23682           | 0.99X   |
  | BM_RHS_NAME(PackRhs, 32, 256, 256, 4, 16, 8, 8, 2, 4, 256, 56) | 23165              | 23663           | 0.98X   |
  | BM_RHS_NAME(PackRhs, 32, 64, 64, 4, 16, 3, 3, 1, 1, 36, 432)   | 72689              | 44969           | 1.62X   |
  | BM_RHS_NAME(PackRhs, 32, 64, 64, 4, 16, 3, 3, 2, 2, 36, 432)   | 61732              | 39779           | 1.55X   |

  All benchmarks ran on an Intel Skylake server with 8 cores.
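The two-partial-packets trick above can be sketched with plain arrays: read the tail of one column and the head of the next, then merge them into a single packet. This is illustrative only; the real code operates on Packet16f/Packet8f with partial AVX loads:

```cpp
#include <array>
#include <cstddef>
#include <cstring>

constexpr std::size_t kPacketSize = 8;        // stand-in for Packet8f (AVX2)
using Packet8f = std::array<float, kPacketSize>;

// Load a packet that straddles two adjacent columns: `n` floats come from
// the end of column 0 and the remaining kPacketSize - n floats from the
// start of column 1; the two partial reads are merged into one packet.
Packet8f load_split(const float* col0_tail, const float* col1_head, std::size_t n) {
    Packet8f r{};
    std::memcpy(r.data(), col0_tail, n * sizeof(float));                      // partial 1
    std::memcpy(r.data() + n, col1_head, (kPacketSize - n) * sizeof(float));  // partial 2
    return r;
}
```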
* 2019-03-26: Collapsed revision from PR-619.
  - Add support for pcmp_eq in AltiVec/Complex.h.
  - Fix the implementation of pcmp_eq for double; the new logic is based on the NEON logic for double.
* 2019-01-30: Fix conflicts and merge.
* 2019-01-14: bug #1652: Fix the position of EIGEN_ALIGN16 attributes in NEON and Altivec.
* 2019-01-09: Add dedicated implementations of predux_any for AVX512, NEON, and Altivec/VSX.
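predux_any reduces a packet to a single boolean: true if any lane is non-zero (typically applied to a comparison mask). A generic scalar-loop sketch of the semantics that the dedicated vector implementations above replace with a single horizontal test:

```cpp
#include <array>
#include <cstddef>

// True if at least one coefficient of the packet is non-zero.
template <typename T, std::size_t N>
bool predux_any(const std::array<T, N>& a) {
    for (std::size_t i = 0; i < N; ++i)
        if (a[i] != T(0)) return true;
    return false;
}
```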
* 2018-12-19: Introduce a "vectorized" byte on unpacket_traits structs.
  This is a preparation for a change on gebp_traits, where a new template argument will be introduced to dictate the packet size, so it won't be bound to the current/max packet size only anymore.
  By having packet types defined early in gebp_traits, one now has to act on packet types, not scalars, for the enum values defined in that class. One approach for reaching the vectorizable/size properties needed there would be to get the packet's scalar back with unpacket_traits<>, then the size/Vectorizable enum entries from packet_traits<>. It turns out that guards like "#ifndef EIGEN_VECTORIZE_AVX512" in AVX/PacketMath.h hide smaller packet variations of packet_traits<> for some types (and it makes sense to keep that). In other words, one can't go back to the scalar and create a new PacketType, as this will always lead to the maximum packet type for the architecture.
  The less costly/invasive solution is therefore to add the vectorizable info on every unpacket_traits struct as well.
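A minimal sketch of the trait shape this describes, mapping a packet type back to its scalar, lane count, and vectorizability (simplified; Eigen's real unpacket_traits carries more members, and the packet types here are array stand-ins):

```cpp
#include <array>

// Simplified stand-ins for two packet types of different widths.
using Packet4f = std::array<float, 4>;
using Packet8f = std::array<float, 8>;

// Maps a packet type back to its scalar type, its size, and (after this
// change) whether the type is actually backed by vector instructions.
template <typename Packet> struct unpacket_traits;

template <> struct unpacket_traits<Packet4f> {
    using type = float;
    enum { size = 4, vectorizable = true };
};
template <> struct unpacket_traits<Packet8f> {
    using type = float;
    enum { size = 8, vectorizable = true };
};
```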
* 2018-11-30: Add packet sin and cos to Altivec/VSX and NEON.
* 2018-11-27: bug #1631: Fix compilation with ARM NEON and clang, and clean up the weird pshiftright_and_cast and pcast_and_shiftleft functions.
* 2018-11-27: Unify Altivec/VSX's pexp(double) with the default implementation.
* 2018-11-26: Cleanup.
* 2018-11-26: Unify Altivec/VSX's pexp with the generic implementation.
* 2018-11-26: Unify Altivec/VSX's plog with the generic implementation, and enable it!
* 2018-06-22: Fix typo in pblend for AltiVec.
* 2018-04-04: Add a note on vec_min vs asm.
* 2018-04-04: bug #1494: Make pmin/pmax behave on Altivec/VSX as on x86 regarding NaNs.
* 2018-03-11: Misc. source and comment typos.
  Found using `codespell` and `grep` from downstream FreeCAD.
* 2017-06-15: bug #1436: Fix compilation of Jacobi rotations with ARM NEON; some specializations of internal::conj_helper were missing.
* 2017-01-23: Add std:: namespace prefix to all (hopefully) instances of size_t/ptrdiff_t.
* 2016-12-18: bug #1360: Fix sign issue with pmull on Altivec.
* 2016-12-18: Fix unused warning.
* 2016-08-29: bug #1167: Simplify installation of header files using cmake's install(DIRECTORY ...) command.
* 2016-07-10: Minor fixes for big-endian Altivec/VSX.
* 2016-06-23: Fix compilation with clang 3.9, fix performance with pset1, and use vector operators instead of intrinsics in some cases.
* 2016-06-19: Mostly cleanups and code modernization.