path: root/Eigen/src/Core/arch
* Remove old Clang compiler bug work-arounds. (Benoit Jacob, 2020-09-15)
  The two LLVM bugs referenced in the comments here have long been fixed. The work-arounds were now detrimental because (1) they prevented using fused mul-add on Clang/ARM32 and (2) the unnecessary 'volatile' in 'asm volatile' prevented legitimate reordering by the compiler.
* Make bfloat16(float(-nan)) produce -nan, not nan. (Tim Shen, 2020-09-15)
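A minimal sketch of why this works when the narrowing operates on the raw bit pattern: the sign bit of the float lives in the upper half that bfloat16 keeps, whereas returning a fixed canonical NaN would discard it. The helper names below are illustrative, not Eigen's.
```
#include <cstdint>
#include <cstring>
#include <limits>

// Reinterpret a float's bits without undefined behavior.
static std::uint32_t float_bits(float f) {
  std::uint32_t u;
  std::memcpy(&u, &f, sizeof u);
  return u;
}

// Hypothetical float -> bfloat16 narrowing for the NaN case: keeping the
// upper 16 bits preserves the sign, and OR-ing in a mantissa bit keeps the
// result a NaN even if the payload's high bits were all zero.
static std::uint16_t to_bfloat16_bits_nan(float f) {
  std::uint32_t bits = float_bits(f);
  return static_cast<std::uint16_t>((bits >> 16) | 0x0040u);
}

int main() {
  // On the usual IEEE-754 toolchains, negating a NaN flips its sign bit.
  float neg_nan = -std::numeric_limits<float>::quiet_NaN();
  std::uint16_t b = to_bfloat16_bits_nan(neg_nan);
  return (b & 0x8000u) ? 0 : 1;  // sign bit survives, so the bfloat16 is still -nan
}
```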
* Add plog support for Packet2d on NEON. (Guoqiang QI, 2020-09-15)
* Unify the SSE pldexp_double API. (Guoqiang QI, 2020-09-12)
* Fix half_impl::float_to_half_rtne(float) warning: '<<' causes overflow. (Niels Dekker, 2020-09-10)
  Fixed Visual Studio 2019 Code Analysis (C++ Core Guidelines) warning C26450 from inside `half_impl::float_to_half_rtne(float)`:
  > Arithmetic overflow: '<<' operation causes overflow at compile time.
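For context (this is a generic illustration of the C26450 warning class, not the fix that actually landed): the diagnostic fires when a constant left shift overflows the signed operand type; performing the shift on an unsigned operand of sufficient width keeps the arithmetic well defined.
```
#include <cstdint>

int main() {
  // int bad = 0xFFFF << 16;              // shifts a signed int past INT_MAX: C26450 / UB
  std::uint32_t good = 0xFFFFu << 16;     // unsigned shift: well defined, value 0xFFFF0000
  return good == 0xFFFF0000u ? 0 : 1;
}
```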
* Add missing functions for Packet8bf in the Altivec architecture. (Pedro Caldeira, 2020-09-08)
  Includes new tests for bfloat16 packets. Fixes prsqrt in GenericPacketMath.
* Add NEON psqrt<Packet2d> and pexp<Packet2d>. (Guoqiang QI, 2020-09-08)
* MatrixProduct enhancements: (Everton Constantino, 2020-09-02)
  - Changes to Altivec/MatrixProduct: adapting the code to GCC 10; generic code style and performance enhancements; adding PanelMode support; adding stride/offset support; enabling float64, std::complex<float> and std::complex<double>; fixing the missing symm_pack; enabling mixedtypes.
  - Adding std::complex tests to blasutil.
  - Adding an implementation of storePacketBlock when Incr != 1.
* Change u/int8_t to un/signed char because clang does not understand it. (Everton Constantino, 2020-09-02)
  Implement pcmp_eq for Packet8 and Packet16.
* Change Packet8s and Packet8us to use vector commands on Power for pmadd, pmul and psub. (Chip Kerchner, 2020-08-28)
* Add psqrt support for Packet2f/Packet4f on NEON. (Guoqiang QI, 2020-08-21)
* bfloat16 packetmath for the Arm NEON backend. (David Tellenbach, 2020-08-13)
* Add support for bfloat16 to use vector instructions on the Altivec architecture. (Pedro Caldeira, 2020-08-10)
* Temporarily turn off the NEON implementation of pfloor, as it does not work for large values. (Zachary Garrett, 2020-08-04)
  The NEON implementation mimics the SSE implementation, but didn't mention the caveat that, due to the unsigned/signed integer conversions involved, not all values representable in the original floating-point format are supported.
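A sketch of the caveat (not the Eigen kernel): a floor built on a float -> int32 -> float round trip is only valid while the input fits in int32, and is unnecessary above 2^23, where every float is already an integer.
```
#include <cmath>
#include <cstdio>

static float floor_via_int(float x) {
  float t = static_cast<float>(static_cast<int>(x));  // truncate toward zero; out of range if |x| >= 2^31
  return (t > x) ? t - 1.0f : t;                       // fix up negative non-integers
}

int main() {
  std::printf("%g vs %g\n", floor_via_int(-2.5f), std::floor(-2.5f));  // -3 vs -3: fine
  // For x = 3e9f the integer cast above is out of range (undefined behavior),
  // which is exactly the large-value problem that prompted disabling the NEON path.
  return 0;
}
```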
* Fix undefined BF16 union behavior in AVX512. (Teng Lu, 2020-07-29)
* Fix clang-tidy warnings in the generic bfloat16 implementation. (David Tellenbach, 2020-07-27)
  See !172 for related discussions.
* Fix bfloat16 casts. (David Tellenbach, 2020-07-23)
  If we have explicit conversion operators available (C++11), we define explicit casts from bfloat16 to other types. If not (C++03), we don't define conversion operators but rely on implicit conversion chains from bfloat16 over float to other types.
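A compressed sketch of that design, using an illustrative type (not the actual Eigen::bfloat16 class): the lossless widening stays implicit, potentially lossy casts are declared explicit only when C++11 is available, and under C++03 an expression like `int(x)` still compiles through the bfloat16 -> float -> int chain.
```
#include <cstdint>
#include <cstring>

struct bf16_like {
  std::uint16_t raw;
  bf16_like() : raw(0) {}

  operator float() const {                 // lossless widening, kept implicit
    std::uint32_t bits = std::uint32_t(raw) << 16;
    float f;
    std::memcpy(&f, &bits, sizeof f);
    return f;
  }

#if __cplusplus >= 201103L
  explicit operator int() const {          // potentially lossy: opt-in only
    return static_cast<int>(static_cast<float>(*this));
  }
#endif
};

int main() {
  bf16_like x;
  float f = x;        // implicit in both language modes
  int   i = int(x);   // C++11: explicit operator int(); C++03: via the float chain
  return (f == 0.0f && i == 0) ? 0 : 1;
}
```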
* Revert the change that made conversion from bfloat16 to {float, double} implicit. (Rasmus Munk Larsen, 2020-07-22)
  Add roundtrip tests for casting between bfloat16 and complex types.
* Fix cast of bfloat16 to std::complex<T>. (David Tellenbach, 2020-07-22)
  This fixes https://gitlab.com/libeigen/eigen/-/issues/1951
* Make sure we take the little-endian path if __BYTE_ORDER__ is not defined. (Rasmus Munk Larsen, 2020-07-22)
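A sketch of the defensive pattern (the macro name below is illustrative, not Eigen's): big-endian is treated as the case that must be positively detected, so a toolchain that does not define __BYTE_ORDER__ falls through to the little-endian path instead of silently picking the wrong byte order.
```
#include <cstdio>

#if defined(__BYTE_ORDER__) && defined(__ORDER_BIG_ENDIAN__) && \
    (__BYTE_ORDER__ == __ORDER_BIG_ENDIAN__)
#define DEMO_BIG_ENDIAN 1
#else
#define DEMO_BIG_ENDIAN 0   // default: little endian, also when __BYTE_ORDER__ is missing
#endif

int main() {
  std::printf("big-endian path: %d\n", DEMO_BIG_ENDIAN);
  return 0;
}
```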
* Faster conversion from integer types to bfloat16. (Niels Dekker, 2020-07-22)
  Specialized `bfloat16_impl::float_to_bfloat16_rtne(float)` for normal floating-point numbers, infinity and zero, in order to improve the performance of `bfloat16::bfloat16(const T&)` for integer argument types. A reduction of more than 20% of the runtime duration of conversion from int to bfloat16 was observed, using Visual C++ 2019 on Windows 10.
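For reference, a minimal sketch of round-to-nearest-even truncation of a float to bfloat16 bits, assuming a normal input (no NaN or denormal handling); this is the kind of bit manipulation the rtne conversion performs on its hot path.
```
#include <cstdint>
#include <cstring>

static std::uint16_t float_to_bf16_rtne_sketch(float f) {
  std::uint32_t input;
  std::memcpy(&input, &f, sizeof input);      // reinterpret the bits safely
  // Add 0x7FFF, plus 1 more when the bit that becomes the bfloat16 lsb is set,
  // so that exact ties round to the nearest even value.
  std::uint32_t lsb = (input >> 16) & 1u;
  std::uint32_t rounding_bias = 0x7FFFu + lsb;
  return static_cast<std::uint16_t>((input + rounding_bias) >> 16);
}

int main() {
  // 1.0f has bit pattern 0x3F800000; truncation keeps 0x3F80 exactly.
  return float_to_bf16_rtne_sketch(1.0f) == 0x3F80 ? 0 : 1;
}
```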
* Avoid undefined behavior from union type punning in float_to_bfloat16_rtne. (Niels Dekker, 2020-07-14)
  Use `numext::as_uint` instead of union-based type punning to avoid undefined behavior. See also the C++ Core Guidelines, "Don't use a union for type punning": https://github.com/isocpp/CppCoreGuidelines/blob/v0.8/CppCoreGuidelines.md#c183-dont-use-a-union-for-type-punning
  `numext::as_uint` was suggested by David Tellenbach.
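Why the change matters, sketched with a memcpy-based helper in the spirit of numext::as_uint (not Eigen's exact implementation): reading a union member other than the one last written is undefined behavior in C++, while memcpy between same-sized objects is well defined and typically compiles to a single move.
```
#include <cstdint>
#include <cstring>

static std::uint32_t as_uint_sketch(float f) {
  std::uint32_t u;
  std::memcpy(&u, &f, sizeof u);   // well-defined bit reinterpretation
  return u;
}

// The pattern being removed (don't do this):
//   union { float f; std::uint32_t u; } pun;
//   pun.f = value;
//   use(pun.u);                   // UB: reads the inactive union member

int main() {
  return as_uint_sketch(1.0f) == 0x3F800000u ? 0 : 1;
}
```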
* AVX path for BF16. (Sheng Yang, 2020-07-14)
* Allow implicit conversion from bfloat16 to float and double. (Niels Dekker, 2020-07-11)
  Conversion from `bfloat16` to `float` and `double` is lossless. It seems natural to allow the conversion to be implicit, as the C++ language also supports implicit conversion from a smaller to a larger floating-point type. Intel's oneDNN bfloat16 implementation also has an implicit `operator float()`: https://github.com/oneapi-src/oneDNN/blob/v1.5/src/common/bfloat16.hpp
* Fix test basic stuff. (David Tellenbach, 2020-07-09)
  - Guard fundamental types that are not available pre-C++11
  - Separate subsequent angle brackets >> by spaces
  - Allow casting of Eigen::half and Eigen::bfloat16 to complex types
* BF16 for scalar_cmp_with_cast_op. (Sheng Yang, 2020-07-01)
* Fix tensor casts for large packets and casts to/from std::complex. (Antonio Sanchez, 2020-06-30)
  The original tensor casts were only defined for `SrcCoeffRatio`:`TgtCoeffRatio` 1:1, 1:2, 2:1, 4:1. Here we add the missing 1:N and 8:1.
  We also add casting of `Eigen::half` to/from `std::complex<T>`, which was missing, to make it consistent with `Eigen::bfloat16`, and generalize the overload to work for any complex type.
  Tests were added to `basicstuff`, `packetmath`, and `cxx11_tensor_casts` to cover all cast configurations.
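A sketch of what the coefficient ratios express, with simplified stand-in types rather than real Eigen specializations: when the source and target packets hold different numbers of scalars, several source packets must be consumed per target packet (or vice versa), and the ratio records that relationship so the cast kernel can take the right number of arguments.
```
struct Packet2dSketch { double v[2]; };   // 2 doubles per packet
struct Packet4fSketch { float  v[4]; };   // 4 floats per packet

template <typename Src, typename Tgt>
struct casting_traits_sketch;

// Casting double -> float: two 2-double packets fill one 4-float packet,
// i.e. SrcCoeffRatio 2 : TgtCoeffRatio 1.
template <>
struct casting_traits_sketch<Packet2dSketch, Packet4fSketch> {
  enum { SrcCoeffRatio = 2, TgtCoeffRatio = 1 };
};

// The matching cast kernel therefore consumes two source packets.
static Packet4fSketch pcast_sketch(const Packet2dSketch& a, const Packet2dSketch& b) {
  Packet4fSketch r;
  r.v[0] = static_cast<float>(a.v[0]);
  r.v[1] = static_cast<float>(a.v[1]);
  r.v[2] = static_cast<float>(b.v[0]);
  r.v[3] = static_cast<float>(b.v[1]);
  return r;
}

int main() {
  Packet2dSketch a = {{1.0, 2.0}}, b = {{3.0, 4.0}};
  Packet4fSketch r = pcast_sketch(a, b);
  return (r.v[0] == 1.0f && r.v[3] == 4.0f) ? 0 : 1;
}
```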
* Fix packetmath_1 float tests for arm/aarch64. (Antonio Sanchez, 2020-06-24)
  Added the missing `pmadd<Packet2f>` for NEON. This gives significantly better precision than the previous `pmul+padd`, which was causing the `pcos` tests to fail. Also added an approximation test against `std::sin`/`std::cos`, since otherwise returning any `a^2+b^2=1` would pass.
  Modified the `log(denorm)` tests. Denorms are not always supported by all systems (returning `::min`), are always flushed to zero on 32-bit ARM, and configurably flush to zero on SSE/AVX/AArch64. This leads to inconsistent results across different systems (i.e. `-inf` vs `nan`). Added a check for existence and excluded ARM.
  Removed the logistic exactness test, since the scalar and vectorized versions follow different code paths due to differences in `pexp` and `pmadd`, which result in slightly different values. For example, exactness always fails on arm, aarch64, and altivec.
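On why pmadd changes the numbers: a fused multiply-add rounds once, while a separate multiply followed by an add rounds twice, so the two can differ in the last bits. A quick scalar illustration (std::fma stands in here for the NEON fused instruction behind pmadd):
```
#include <cmath>
#include <cstdio>

int main() {
  float a = 1.0f + 1.0f / 4096.0f;     // 1 + 2^-12 (exact)
  float c = -(1.0f + 1.0f / 2048.0f);  // -(1 + 2^-11) (exact)
  float fused = std::fma(a, a, c);     // one rounding: 2^-24, about 5.96e-8
  float split = a * a + c;             // two roundings: exactly 0
  std::printf("fused=%g split=%g\n", fused, split);
  return 0;
}
```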
* Add missing Packet2l/Packet2ul ops for NEON. (Antonio Sanchez, 2020-06-22)
  The current multiply (`pmul`) and comparison operators (`pcmp_lt`, `pcmp_le`, `pcmp_eq`) are missing for packets `Packet2l` and `Packet2ul`. This leads to compile errors for the `packetmath.cpp` tests in clang. Here we add and test the missing ops.
  Tested:
  ```
  $ aarch64-linux-gnu-g++ -static -I./ '-DEIGEN_TEST_PART_9=1' '-DEIGEN_TEST_PART_10=1' test/packetmath.cpp -o packetmath
  $ adb push packetmath /data/local/tmp/
  $ adb shell "/data/local/tmp/packetmath"
  $ arm-linux-gnueabihf-g++ -mfpu=neon -static -I./ '-DEIGEN_TEST_PART_9=1' '-DEIGEN_TEST_PART_10=1' test/packetmath.cpp -o packetmath
  $ adb push packetmath /data/local/tmp/
  $ adb shell "/data/local/tmp/packetmath"
  $ clang++ -target aarch64-linux-android21 -static -I./ '-DEIGEN_TEST_PART_9=1' '-DEIGEN_TEST_PART_10=1' test/packetmath.cpp -o packetmath
  $ adb push packetmath /data/local/tmp/
  $ adb shell "/data/local/tmp/packetmath"
  $ clang++ -target armv7-linux-android21 -static -mfpu=neon -I./ '-DEIGEN_TEST_PART_9=1' '-DEIGEN_TEST_PART_10=1' test/packetmath.cpp -o packetmath
  $ adb push packetmath /data/local/tmp/
  $ adb shell "/data/local/tmp/packetmath"
  ```
* Added missing NEON pcasts, update packetmath tests. (Antonio Sanchez, 2020-06-21)
  The NEON `pcast` operators are all implemented and tested for existing packets. This requires adding a `pcast(a,b,c,d,e,f,g,h)` for casting between `int64_t` and `int8_t` in `GenericPacketMath.h`.
  Removed the incorrect `HasHalfPacket` definition for NEON's `Packet2l`/`Packet2ul`.
  Adjustments were also made to the `packetmath` tests. These include:
  - minor bug fixes for cast tests (i.e. 4:1 casts, only casting for packets that are vectorizable)
  - added 8:1 cast tests
  - random number generation: the original had uninteresting 0-to-0 casts for many casts between floating-point and integers, and exhibited signed-overflow undefined behavior
  Tested:
  ```
  $ aarch64-linux-gnu-g++ -static -I./ '-DEIGEN_TEST_PART_ALL=1' test/packetmath.cpp -o packetmath
  $ adb push packetmath /data/local/tmp/
  $ adb shell "/data/local/tmp/packetmath"
  ```
* Support BFloat16 in Eigen. (Teng Lu, 2020-06-20)
* Fix pscatter and pgather for Altivec Complex double. (Pedro Caldeira, 2020-06-16)
* Remove HasCast and fix packetmath cast tests. (Antonio Sanchez, 2020-06-11)
  The use of the `packet_traits<>::HasCast` field is currently inconsistent with `type_casting_traits<>`, and is unused apart from within `test/packetmath.cpp`. In addition, those packetmath cast tests do not currently reflect how casts are performed in practice: they ignore the `SrcCoeffRatio` and `TgtCoeffRatio` fields, assuming a 1:1 ratio. Here we remove the unused `HasCast` and modify the packet cast tests to better reflect their usage.
* Update FindComputeCpp.cmake to fix build problems on Windows. (Thales Sabino, 2020-06-05)
  - Use standard types in SYCL/PacketMath.h to avoid compilation problems on Windows
  - Add EIGEN_HAS_CONSTEXPR to cxx11_tensor_argmax_sycl.cpp to fix build problems on Windows
* Add support for PacketBlock<Packet8s,4> and PacketBlock<Packet16uc,4> ptranspose on NEON. (Kan Chen, 2020-05-29)
* Add pscatter for Packet16{u}c (int8). (Pedro Caldeira, 2020-05-20)
* Vectorizing MMA packing. (Everton Constantino, 2020-05-19)
  - Optimizing the MMA kernel.
  - Adding PacketBlock store to blas_data_mapper.
* Add missing packet ops for bool, and make it pass the same packet op unit tests as other arithmetic types. (Rasmus Munk Larsen, 2020-05-14)
  This change also contains a few minor cleanups:
  1. Remove packet op pnot, which is not needed for anything other than pcmp_le_or_nan, which can be done in other ways.
  2. Remove the "HasInsert" enum, which is no longer needed since we removed the corresponding packet ops.
  3. Add faster pselect op for Packet4i when SSE4.1 is supported.
  Among other things, this makes the fast transposeInPlace() method available for Matrix<bool>.
  Run on ************** (72 X 2994 MHz CPUs); 2020-05-09T10:51:02.372347913-07:00
  CPU: Intel Skylake Xeon with HyperThreading (36 cores) dL1:32KB dL2:1024KB dL3:24MB
  Benchmark                        Time(ns)    CPU(ns)  Iterations
  -----------------------------------------------------------------------
  BM_TransposeInPlace<float>/4         9.77       9.77    71670320
  BM_TransposeInPlace<float>/8         21.9       21.9    31929525
  BM_TransposeInPlace<float>/16        66.6       66.6    10000000
  BM_TransposeInPlace<float>/32         243        243     2879561
  BM_TransposeInPlace<float>/59         844        844      829767
  BM_TransposeInPlace<float>/64         933        933      750567
  BM_TransposeInPlace<float>/128       3944       3945      177405
  BM_TransposeInPlace<float>/256      16853      16853       41457
  BM_TransposeInPlace<float>/512     204952     204968        3448
  BM_TransposeInPlace<float>/1k     1053889    1053861         664
  BM_TransposeInPlace<bool>/4          14.4       14.4    48637301
  BM_TransposeInPlace<bool>/8          36.0       36.0    19370222
  BM_TransposeInPlace<bool>/16         31.5       31.5    22178902
  BM_TransposeInPlace<bool>/32          111        111     6272048
  BM_TransposeInPlace<bool>/59          626        626     1000000
  BM_TransposeInPlace<bool>/64          428        428     1632689
  BM_TransposeInPlace<bool>/128        1677       1677      417377
  BM_TransposeInPlace<bool>/256        7126       7126       96264
  BM_TransposeInPlace<bool>/512       29021      29024       24165
  BM_TransposeInPlace<bool>/1k       116321     116330        6068
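Usage this unlocks through Eigen's public API (the speedup comes from the new bool packet ops underneath):
```
#include <Eigen/Dense>
#include <iostream>

int main() {
  Eigen::Matrix<bool, Eigen::Dynamic, Eigen::Dynamic> m(64, 64);
  m.setConstant(true);
  m(0, 1) = false;
  m.transposeInPlace();          // now uses the vectorized path for Matrix<bool>
  std::cout << m(1, 0) << "\n";  // prints 0
  return 0;
}
```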
* Altivec template functions for better code reusability. (Pedro Caldeira, 2020-05-11)
* Remove packet ops pinsertfirst and pinsertlast that are only used in a single place, and can be replaced by other ops when constructing the first/final packet in linspaced_op_impl::packetOp. (Rasmus Munk Larsen, 2020-05-08)
  I cannot measure any performance changes for SSE, AVX, or AVX512.
  name                     old time/op  new time/op  delta
  BM_LinSpace<float>/1     1.63ns ± 0%  1.63ns ± 0%  ~  (p=0.762 n=5+5)
  BM_LinSpace<float>/8     4.92ns ± 3%  4.89ns ± 3%  ~  (p=0.421 n=5+5)
  BM_LinSpace<float>/64    34.6ns ± 0%  34.6ns ± 0%  ~  (p=0.841 n=5+5)
  BM_LinSpace<float>/512    217ns ± 0%   217ns ± 0%  ~  (p=0.421 n=5+5)
  BM_LinSpace<float>/4k    1.68µs ± 0%  1.68µs ± 0%  ~  (p=1.000 n=5+5)
  BM_LinSpace<float>/32k   13.3µs ± 0%  13.3µs ± 0%  ~  (p=0.905 n=5+4)
  BM_LinSpace<float>/256k   107µs ± 0%   107µs ± 0%  ~  (p=0.841 n=5+5)
  BM_LinSpace<float>/1M     427µs ± 0%   427µs ± 0%  ~  (p=0.690 n=5+5)
* Remove unused packet op "palign". (Rasmus Munk Larsen, 2020-05-07)
  Clean up a compiler warning in C++03 mode in AVX512/Complex.h.
* Remove traits declaring NEON vectorized casts that do not actually have packet op implementations. (Rasmus Munk Larsen, 2020-05-07)
* Fix compilation error with Clang on Android: _mm_extract_epi64 fails to compile. (Rasmus Munk Larsen, 2020-04-29)
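One portable way around a missing or broken _mm_extract_epi64 (a sketch of the general technique, not necessarily the fix that landed): spill the vector through memory using intrinsics that exist wherever SSE2 does.
```
#include <emmintrin.h>   // SSE2
#include <cstdint>

static std::int64_t extract_lane(__m128i v, int lane /* 0 or 1 */) {
  alignas(16) std::int64_t tmp[2];
  _mm_store_si128(reinterpret_cast<__m128i*>(tmp), v);
  return tmp[lane];
}

int main() {
  __m128i v = _mm_set_epi64x(7, 3);   // lane 1 = 7, lane 0 = 3
  return extract_lane(v, 1) == 7 ? 0 : 1;
}
```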
* Extend support for Packet16b: (Rasmus Munk Larsen, 2020-04-28)
  * Add ptranspose<*,4> to support matmul and add unit test for Matrix<bool> * Matrix<bool>
  * Work around a bug in slicing of Tensor<bool>.
  * Add tensor tests
  This speeds up matmul for boolean matrices by about 10x.
  name                 old time/op  new time/op  delta
  BM_MatMul<bool>/8       267ns ± 0%   479ns ± 0%  +79.25%  (p=0.008 n=5+5)
  BM_MatMul<bool>/32     6.42µs ± 0%  0.87µs ± 0%  -86.50%  (p=0.008 n=5+5)
  BM_MatMul<bool>/64     43.3µs ± 0%   5.9µs ± 0%  -86.42%  (p=0.008 n=5+5)
  BM_MatMul<bool>/128     315µs ± 0%    44µs ± 0%  -85.98%  (p=0.008 n=5+5)
  BM_MatMul<bool>/256    2.41ms ± 0%  0.34ms ± 0%  -85.68%  (p=0.008 n=5+5)
  BM_MatMul<bool>/512    18.8ms ± 0%   2.7ms ± 0%  -85.53%  (p=0.008 n=5+5)
  BM_MatMul<bool>/1k      149ms ± 0%    22ms ± 0%  -85.40%  (p=0.008 n=5+5)
* Add support for vector instructions to Packet16uc and Packet16c. (Pedro Caldeira, 2020-04-27)
* Remove unused packet op "preduxp". (Rasmus Munk Larsen, 2020-04-23)
* Add Packet8s and Packet8us to support signed/unsigned int16/short Altivec vector operations. (Pedro Caldeira, 2020-04-21)
* Fix bug in ptrue for Packet16b. (Rasmus Munk Larsen, 2020-04-20)
* Add partial vectorization for matrices and tensors of bool. This speeds up boolean operations on Tensors by up to 25x. (Rasmus Munk Larsen, 2020-04-20)
  Benchmark numbers for the logical and of two NxN tensors:
  name                                    old time/op  new time/op  delta
  BM_booleanAnd_1T/3   [using 1 threads]  14.6ns ± 0%  14.4ns ± 0%   -0.96%
  BM_booleanAnd_1T/4   [using 1 threads]  20.5ns ±12%   9.0ns ± 0%  -56.07%
  BM_booleanAnd_1T/7   [using 1 threads]  41.7ns ± 0%  10.5ns ± 0%  -74.87%
  BM_booleanAnd_1T/8   [using 1 threads]  52.1ns ± 0%  10.1ns ± 0%  -80.59%
  BM_booleanAnd_1T/10  [using 1 threads]  76.3ns ± 0%  13.8ns ± 0%  -81.87%
  BM_booleanAnd_1T/15  [using 1 threads]   167ns ± 0%    16ns ± 0%  -90.45%
  BM_booleanAnd_1T/16  [using 1 threads]   188ns ± 0%    16ns ± 0%  -91.57%
  BM_booleanAnd_1T/31  [using 1 threads]   667ns ± 0%    34ns ± 0%  -94.83%
  BM_booleanAnd_1T/32  [using 1 threads]   710ns ± 0%    35ns ± 0%  -95.01%
  BM_booleanAnd_1T/64  [using 1 threads]  2.80µs ± 0%  0.11µs ± 0%  -95.93%
  BM_booleanAnd_1T/128 [using 1 threads]  11.2µs ± 0%   0.4µs ± 0%  -96.11%
  BM_booleanAnd_1T/256 [using 1 threads]  44.6µs ± 0%   2.5µs ± 0%  -94.31%
  BM_booleanAnd_1T/512 [using 1 threads]   178µs ± 0%    10µs ± 0%  -94.35%
  BM_booleanAnd_1T/1k  [using 1 threads]   717µs ± 0%    78µs ± 1%  -89.07%
  BM_booleanAnd_1T/2k  [using 1 threads]  2.87ms ± 0%  0.31ms ± 1%  -89.08%
  BM_booleanAnd_1T/4k  [using 1 threads]  11.7ms ± 0%   1.9ms ± 4%  -83.55%
  BM_booleanAnd_1T/10k [using 1 threads]  70.3ms ± 0%  17.2ms ± 4%  -75.48%
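The kind of expression this speeds up, using the (unsupported) Tensor module that ships with Eigen:
```
#include <unsupported/Eigen/CXX11/Tensor>
#include <iostream>

int main() {
  Eigen::Tensor<bool, 2> a(256, 256), b(256, 256);
  a.setConstant(true);
  b.setConstant(false);
  Eigen::Tensor<bool, 2> c = a && b;   // element-wise logical AND, now vectorized
  std::cout << c(0, 0) << "\n";        // prints 0
  return 0;
}
```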
* Move eigen_packet_wrapper to GenericPacketMath.h and use it for SSE/AVX/AVX512, as it is already used for NEON. (Rasmus Munk Larsen, 2020-04-15)
  This will allow us to define multiple packet types backed by the same vector type, e.g., __m128i. Use this mechanism to define packets for half and clean up the packet op implementations.
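The idea in miniature (simplified stand-ins, not the real eigen_packet_wrapper): wrapping one hardware register type in distinct C++ types lets overloads on the packet type tell "8 halfs" apart from "8 shorts", even though both live in an __m128i.
```
#include <emmintrin.h>   // SSE2
#include <cstdint>

template <typename Scalar, int Unique>
struct packet_wrapper_sketch {
  __m128i m_val;
  packet_wrapper_sketch() {}
  packet_wrapper_sketch(const __m128i& v) : m_val(v) {}
  operator __m128i() const { return m_val; }
};

using Packet8hSketch = packet_wrapper_sketch<std::uint16_t, 0>;  // 8 x half
using Packet8sSketch = packet_wrapper_sketch<std::int16_t, 1>;   // 8 x int16_t

// Overload resolution can now distinguish the two, which a bare __m128i cannot.
static Packet8hSketch pzero_sketch(const Packet8hSketch&) { return _mm_setzero_si128(); }
static Packet8sSketch pzero_sketch(const Packet8sSketch&) { return _mm_setzero_si128(); }

int main() {
  Packet8sSketch s = pzero_sketch(Packet8sSketch(_mm_set1_epi16(3)));
  return _mm_extract_epi16(static_cast<__m128i>(s), 0) == 0 ? 0 : 1;
}
```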