path: root/test
* Revert "Add log2() operator to Eigen"Gravatar Rasmus Munk Larsen2020-12-03
| | | | This reverts commit 4d91519a9be061da5d300079fca17dd0b9328050.
* Add log2() operator to Eigen (Rasmus Munk Larsen, 2020-12-03)
* Include chrono in main for c++11. (Antonio Sanchez, 2020-12-03)
  Hack to fix tensor tests, since min/max are overridden by `main.h`.
* AVX512 missing ops. (Antonio Sanchez, 2020-11-30)
  This allows the `packetmath` tests to pass for AVX512 on Skylake. Made `half` and `bfloat16` consistent in terms of the ops they support. Note the `log` tests are currently disabled for `bfloat16` since they fail due to poor precision (they were previously disabled for `Packet8bf` via test function specialization; I simply removed that specialization and disabled them in the generic test).
* Make inclusion of doc sub-directory optional by adjusting options. (Bowie Owens, 2020-11-27)
  Allows exclusion of the doc and related targets to help when using Eigen via add_subdirectory(). Requested in https://gitlab.com/libeigen/eigen/-/issues/1842. This also required making EIGEN_TEST_BUILD_DOCUMENTATION an option dependent on EIGEN_BUILD_DOC, which ensures documentation targets are properly defined when EIGEN_TEST_BUILD_DOCUMENTATION is ON.
* Revert "Fix Half NaN definition and test."Gravatar Rasmus Munk Larsen2020-11-24
| | | | This reverts commit c770746d709686ef2b8b652616d9232f9b028e78.
* Fix Half NaN definition and test. (Rasmus Munk Larsen, 2020-11-24)
  The `half_float` test was failing with `-mcpu=cortex-a55` (native `__fp16`) due to a bad NaN bit-pattern comparison (in the case of casting a float to `__fp16`, the signaling NaN is quieted). There was also an inconsistency between `numeric_limits<half>::quiet_NaN()` and `NumTraits::quiet_NaN()`. Here we correct the inconsistency and compare NaNs according to the IEEE 754 definition. Also modified the `bfloat16_float` test to match. Tested with `cortex-a53` and `cortex-a55`.
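  As an illustration of the IEEE 754-style comparison described above, a minimal sketch (hypothetical helper, not the actual test code): a NaN compares unequal to itself, so two values match as NaNs when both fail self-comparison, regardless of their bit patterns.
  ```cpp
  #include <Eigen/Core>
  #include <limits>

  // Hypothetical helper: treat two values as matching if both are NaN per
  // the IEEE 754 definition (x != x), instead of comparing bit patterns,
  // which breaks when a signaling NaN is quieted by a float -> __fp16 cast.
  template <typename T>
  bool matches_ieee(const T& a, const T& b) {
    const bool a_nan = (a != a);
    const bool b_nan = (b != b);
    if (a_nan || b_nan) return a_nan && b_nan;
    return a == b;
  }

  int main() {
    Eigen::half a = std::numeric_limits<Eigen::half>::quiet_NaN();
    Eigen::half b = Eigen::half(std::numeric_limits<float>::quiet_NaN());
    return matches_ieee(a, b) ? 0 : 1;  // both are NaN, so they match
  }
  ```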
* Implement missing AVX half ops. (Antonio Sanchez, 2020-11-24)
  Minimal implementation of AVX `Eigen::half` ops to bring them in line with `bfloat16`. Allows `packetmath_13` to pass. Also adjusted `bfloat16` packet traits to match the supported set of ops (e.g. Bessel functions are not actually implemented).
* Fix Half NaN definition and test. (Antonio Sanchez, 2020-11-23)
  The `half_float` test was failing with `-mcpu=cortex-a55` (native `__fp16`) due to a bad NaN bit-pattern comparison (in the case of casting a float to `__fp16`, the signaling NaN is quieted). There was also an inconsistency between `numeric_limits<half>::quiet_NaN()` and `NumTraits::quiet_NaN()`. Here we correct the inconsistency and compare NaNs according to the IEEE 754 definition. Also modified the `bfloat16_float` test to match. Tested with `cortex-a53` and `cortex-a55`.
* Update AVX half packets, disable test. (Antonio Sanchez, 2020-11-21)
  The AVX half implementation is incomplete, causing the `packetmath_13` test to fail. This disables the test. Also refactored the existing AVX implementation to use `bit_cast` instead of direct access to `.x`.
* Fix sparse_extra_3, disable counting temporaries for testing DynamicSparseMatrix. (Antonio Sanchez, 2020-11-18)
  Multiplication of column-major `DynamicSparseMatrix`es involves three temporaries:
  - two for transposing twice to sort the coefficients (`ConservativeSparseSparseProduct.h`, L160-161)
  - one for a final copy assignment (`SparseAssign.h`, L108)
  The latter is avoided in an optimization for `SparseMatrix`. Since `DynamicSparseMatrix` is deprecated in favor of `SparseMatrix`, it's not worth the effort to optimize further, so I simply disabled counting temporaries via a macro. Note that due to the inclusion of `sparse_product.cpp`, the `sparse_extra` tests actually re-run all the original `sparse_product` tests as well. We may want to simply drop the `DynamicSparseMatrix` tests altogether, which would eliminate the test duplication. Related to #2048.
* Re-enable Arm Neon Eigen::half packets of size 8 (David Tellenbach, 2020-11-18)
  - Add predux_half_dowto4
  - Remove explicit casts in Half.h to match the behaviour of BFloat16.h
  - Enable more packetmath tests for Eigen::half
* Add bit_cast for half/bfloat to/from uint16_t, fix TensorRandom (Antonio Sanchez, 2020-11-18)
  The existing `TensorRandom.h` implementation makes the assumption that `half` (`bfloat16`) has a `uint16_t` member `x` (`value`), which is not always true. This currently fails on arm64, where `x` has type `__fp16`. Added `bit_cast` specializations to allow casting to/from `uint16_t` for both `half` and `bfloat16`. Also added tests in `half_float`, `bfloat16_float`, and `cxx11_tensor_random` to catch these errors in the future.
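  A minimal sketch of the kind of round trip the new tests exercise, assuming `Eigen::numext::bit_cast` as referenced in the commit:
  ```cpp
  #include <Eigen/Core>
  #include <cstdint>

  int main() {
    // Round-trip Eigen::half through its 16-bit representation without
    // touching the internal member (which may be __fp16 on arm64).
    Eigen::half h(1.5f);
    std::uint16_t bits = Eigen::numext::bit_cast<std::uint16_t>(h);
    Eigen::half back = Eigen::numext::bit_cast<Eigen::half>(bits);
    return (back == h) ? 0 : 1;
  }
  ```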
* Initialize primitives to fix -Wuninitialized-const-reference. (Antonio Sanchez, 2020-11-18)
  The `meta` test generates warnings with the latest version of clang due to passing uninitialized variables as const reference arguments:
  ```
  test/meta.cpp:102:45: error: variable 'f' is uninitialized when passed as a const reference argument here [-Werror,-Wuninitialized-const-reference]
    VERIFY(( check_is_convertible(a.dot(b), f) ));
  ```
  We don't actually use the variables, but initializing them eliminates the new warning. Fixes #2067.
* Eliminate double-promotion warnings. (Antonio Sanchez, 2020-11-16)
  Clang currently complains about implicit conversions, e.g.
  ```
  test/packetmath.cpp:680:59: warning: implicit conversion increases floating-point precision: 'typename Eigen::internal::random_retval<typename Eigen::internal::global_math_functions_filtering_base<double>::type>::type' (aka 'double') to 'long double' [-Wdouble-promotion]
    data1[0] = Scalar((2 * k + k1) * EIGEN_PI / 2 * internal::random<double>(0.8, 1.2));
  test/packetmath.cpp:681:40: warning: implicit conversion increases floating-point precision: 'float' to 'long double' [-Wdouble-promotion]
    data1[1] = Scalar((2 * k + 2 + k1) * EIGEN_PI / 2 * internal::random<double>(0.8, 1.2));
  ```
  Modified to explicitly cast to double.
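  The fix pattern, roughly (hypothetical helper for illustration, not the exact test code): do the arithmetic in double and convert to the target scalar type once, so the long double `EIGEN_PI` never mixes implicitly with float or half arithmetic.
  ```cpp
  #include <Eigen/Core>

  // Hypothetical helper showing the explicit-cast pattern.
  template <typename Scalar>
  Scalar sample_angle(int k, int k1) {
    const double angle =
        (2 * k + k1) * static_cast<double>(EIGEN_PI) / 2 *
        Eigen::internal::random<double>(0.8, 1.2);
    return Scalar(angle);  // single conversion to the target scalar type
  }

  int main() {
    float a = sample_angle<float>(1, 0);
    return (a > 0.f) ? 0 : 1;
  }
  ```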
* Explicit casts of S -> std::complex<T> (Antonio Sanchez, 2020-11-14)
  When calling `internal::cast<S, std::complex<T>>(x)`, clang often generates an implicit conversion warning due to an implicit cast from type `S` to `T`. This currently affects the following tests:
  - `basicstuff`
  - `bfloat16_float`
  - `cxx11_tensor_casts`
  The implicit cast leads to widening/narrowing float conversions. Widening warnings only seem to be generated by clang (`-Wdouble-promotion`). To eliminate the warning, we explicitly cast the real component first from `S` to `T`. We also adjust tests to use `internal::cast` instead of `static_cast` when a complex type may be involved.
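  The pattern looks roughly like this (hypothetical `to_complex` helper for illustration; the actual change lives in `internal::cast`):
  ```cpp
  #include <Eigen/Core>
  #include <complex>

  // Cast the real component explicitly from S to T before constructing
  // std::complex<T>, so clang does not emit an implicit widening/narrowing
  // conversion warning.
  template <typename T, typename S>
  std::complex<T> to_complex(const S& x) {
    return std::complex<T>(static_cast<T>(x), T(0));
  }

  int main() {
    std::complex<float> c = to_complex<float>(Eigen::half(2.0f));
    return (c.real() == 2.0f) ? 0 : 1;
  }
  ```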
* Suppress ignored-attributes warning (same as in vectorization_logic). Remove redundant include and using namespace. (Christoph Hertzberg, 2020-11-13)
* Fix erroneous forward declaration of boost nvp. (Everton Constantino, 2020-11-10)
* CMakefile update for ROCm 4.0 (Deven Desai, 2020-10-29)
  Starting with ROCm 4.0, the `hipconfig --platform` command will return `amd` (the prior return value was `hcc`). Updating the CMakeLists.txt files in the test dirs to account for this change.
* Add support for Armv8.2-a __fp16 (David Tellenbach, 2020-10-28)
  Armv8.2-a provides a native half-precision floating point type (__fp16, aka float16_t). This patch introduces:
  - __fp16 as the underlying type of Eigen::half if this type is available
  - the packet types Packet4hf and Packet8hf, representing float16x4_t and float16x8_t respectively
  - packet math for the above packets with corresponding scalar type Eigen::half
  The packet-math functionality has been implemented by Ashutosh Sharma <ashutosh.sharma@amperecomputing.com>. This closes #1940.
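  A minimal usage sketch; whether the vectorized `Packet4hf`/`Packet8hf` path is actually taken depends on the target, so the comment's claim is an assumption about an Armv8.2-a build:
  ```cpp
  #include <Eigen/Dense>

  int main() {
    // On Armv8.2-a targets with native FP16, Eigen::half is backed by __fp16
    // and fixed-size expressions like this one can be vectorized through
    // Packet4hf/Packet8hf; elsewhere the portable half emulation is used.
    Eigen::Matrix<Eigen::half, 8, 1> a, b;
    a.setConstant(Eigen::half(1.0f));
    b.setConstant(Eigen::half(2.0f));
    Eigen::Matrix<Eigen::half, 8, 1> c = a + b;
    return (c(0) == Eigen::half(3.0f)) ? 0 : 1;
  }
  ```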
* Add packet generic ops `predux_fmin`, `predux_fmin_nan`, `predux_fmax`, and `predux_fmax_nan` that implement reductions with `PropagateNaN` and `PropagateNumbers` semantics. Add (slow) generic implementations for most reductions. (Rasmus Munk Larsen, 2020-10-13)
* Clean up packetmath tests and fix various bugs to make bfloat16 pass (almost) all packetmath tests with SSE, AVX, and AVX512. (Rasmus Munk Larsen, 2020-10-09)
* Disable test exceptions when using OpenMP. (David Tellenbach, 2020-10-09)
* Don't make assumptions about NaN-propagation for pmin/pmax; it varies across platforms. Change test to only test NaN-propagation for pfmin/pfmax. (Rasmus Munk Larsen, 2020-10-07)
* Add generic packet ops corresponding to std::fmin and std::fmax. The nonsensical NaN-propagation rules for std::min and std::max implemented by pmin and pmax in Eigen are a longstanding source of confusion and bug reports. This change is a first step towards addressing that, as discussed in issue #564. (Rasmus Munk Larsen, 2020-10-01)
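  The NaN behaviour being adopted can be seen with the standard functions themselves (plain C++ illustration, independent of Eigen internals):
  ```cpp
  #include <algorithm>
  #include <cmath>
  #include <cstdio>

  int main() {
    const double nan = std::nan("");
    // std::fmax ignores a NaN operand and returns the numeric argument...
    std::printf("fmax(nan, 1.0) = %f\n", std::fmax(nan, 1.0));  // 1.0
    // ...while std::max simply returns one of its arguments, so the result
    // depends on argument order when a NaN is involved.
    std::printf("max(nan, 1.0)  = %f\n", std::max(nan, 1.0));   // nan
    std::printf("max(1.0, nan)  = %f\n", std::max(1.0, nan));   // 1.0
    return 0;
  }
  ```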
* Fix alignedbox 32-bit precision test failure. (Antonio Sanchez, 2020-09-30)
  The current `test/geo_alignedbox` tests fail on 32-bit arm due to small floating-point errors. In particular, the following is not guaranteed to hold:
  ```
  IsometryTransform identity = IsometryTransform::Identity();
  BoxType transformedC;
  transformedC.extend(c.transformed(identity));
  VERIFY(transformedC.contains(c));
  ```
  since `c.transformed(identity)` is ever-so-slightly different from `c`. Instead, we replace this test with one that checks an identity transform is within floating-point precision of `c`. Also updated the condition on `AlignedBox::transform(...)` to only accept `Affine`, `AffineCompact`, and `Isometry` modes explicitly. Otherwise, invalid combinations of modes would also incorrectly pass the assertion.
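  A sketch of the replacement style of check, assuming the `AlignedBox::transformed` and `AlignedBox::isApprox` API referenced in this log; a tolerance-based comparison stands in for exact containment:
  ```cpp
  #include <Eigen/Geometry>

  int main() {
    typedef Eigen::AlignedBox<float, 3> BoxType;
    typedef Eigen::Transform<float, 3, Eigen::Isometry> IsometryTransform;

    const BoxType c(Eigen::Vector3f(-1, -1, -1), Eigen::Vector3f(1, 1, 1));
    const IsometryTransform identity = IsometryTransform::Identity();

    // On 32-bit targets the transformed box may differ from c by a few ulps,
    // so check approximate equality rather than exact containment.
    BoxType transformedC = c.transformed(identity);
    return transformedC.isApprox(c) ? 0 : 1;
  }
  ```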
* Added AlignedBox::transform(AffineTransform). (Martin Pecka, 2020-09-28)
* Remove EIGEN_CONSTEXPR from NumTraits<boost::multiprecision::number<...>> (David Tellenbach, 2020-09-21)
* Get rid of initialization logic for blueNorm by making the computed constants static const or constexpr. Move the macro definition EIGEN_CONSTEXPR to Core and make all methods in NumTraits constexpr when EIGEN_HASH_CONSTEXPR is 1. (Rasmus Munk Larsen, 2020-09-18)
* Make bfloat16(float(-nan)) produce -nan, not nan. (Tim Shen, 2020-09-15)
* Add missing functions for Packet8bf in Altivec architecture. (Pedro Caldeira, 2020-09-08)
  Including new tests for bfloat16 Packets. Fix prsqrt on GenericPacketMath.
* MatrixProduct enhancements: (Everton Constantino, 2020-09-02)
  - Changes to Altivec/MatrixProduct: adapting code to gcc 10; generic code style and performance enhancements; adding PanelMode support; adding stride/offset support; enabling float64, std::complex<float> and std::complex<double>; fixing lack of symm_pack; enabling mixedtypes.
  - Adding std::complex tests to blasutil.
  - Adding an implementation of storePacketBlock when Incr != 1.
* Fix #1974: assertion when reserving an empty sparse matrix (Gael Guennebaud, 2020-08-26)
* Fixing a CUDA / P100 regression introduced by PR 181 (Deven Desai, 2020-08-20)
  PR 181 (https://gitlab.com/libeigen/eigen/-/merge_requests/181) adds a `__launch_bounds__(1024)` attribute to GPU kernels that did not have that attribute explicitly specified. That PR seems to cause regressions on the CUDA platform. This PR/commit makes the changes in PR 181 applicable to HIP only.
* Add possibility to split test suite build targets and improved CI configuration (David Tellenbach, 2020-08-19)
  - Introduce CMake option `EIGEN_SPLIT_TESTSUITE` that allows dividing the single test build target into several subtargets
  - Add CI pipeline for merge requests that can be run by GitLab's shared runners
  - Add nightly CI pipeline
* Fix compilation error in blasutil test (David Tellenbach, 2020-08-14)
* Replace the call to int64_t in the blasutil test by explicit types (David Tellenbach, 2020-08-14)
  Some platforms define int64_t to be long long, even for C++03. If this is the case we miss the definition of internal::make_unsigned for this type. If we just define the template, we get duplicate definition errors on platforms that define int64_t as signed long for C++03. We need to find a way to distinguish both cases at compile time.
* Add support for Bfloat16 to use vector instructions on Altivec architecture (Pedro Caldeira, 2020-08-10)
* Adding an explicit launch_bounds(1024) attribute for GPU kernels. (Deven Desai, 2020-08-05)
  Starting with ROCm 3.5, the HIP compiler will change from HCC to hip-clang. This compiler change introduces a change in the default value of the `__launch_bounds__` attribute associated with a GPU kernel (the default value is the value assumed by the compiler for the `__launch_bounds__` attribute when it is not explicitly specified by the user). Currently (i.e. for HIP with ROCm 3.3 and older), the default value is 1024. That changes to 256 with ROCm 3.5 (i.e. the hip-clang compiler). As a consequence of this change, if a GPU kernel with a `__launch_bounds__` attribute of 256 is launched at runtime with a threads_per_block value > 256, it leads to a runtime error. This is leading to a couple of Eigen unit test failures with ROCm 3.5. This commit adds an explicit `__launch_bounds__(1024)` attribute to every GPU kernel that currently does not have it explicitly specified (and hence would end up getting the default value of 256 with the change to hip-clang).
* Fix bfloat16 casts (David Tellenbach, 2020-07-23)
  If we have explicit conversion operators available (C++11) we define explicit casts from bfloat16 to other types. If not (C++03), we don't define conversion operators but rely on implicit conversion chains from bfloat16 over float to other types.
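  In user code the distinction looks roughly like this (illustrative sketch, assuming a C++11 build with the explicit conversion operators described above):
  ```cpp
  #include <Eigen/Core>

  int main() {
    Eigen::bfloat16 x(3.5f);
    // Under C++11 the conversions out of bfloat16 are explicit, so a cast is
    // required; under C++03 the same values are reached through the implicit
    // bfloat16 -> float conversion chain.
    int   i = static_cast<int>(x);
    float f = static_cast<float>(x);
    return (i == 3 && f == 3.5f) ? 0 : 1;
  }
  ```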
* Revert change that made conversion from bfloat16 to {float, double} implicit. (Rasmus Munk Larsen, 2020-07-22)
  Add roundtrip tests for casting between bfloat16 and complex types.
* Faster conversion from integer types to bfloat16 (Niels Dekker, 2020-07-22)
  Specialized `bfloat16_impl::float_to_bfloat16_rtne(float)` for normal floating point numbers, infinity and zero, in order to improve the performance of `bfloat16::bfloat16(const T&)` for integer argument types. A reduction of more than 20% of the runtime duration of conversion from int to bfloat16 was observed, using Visual C++ 2019 on Windows 10.
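  For reference, a simplified round-to-nearest-even truncation sketch; this illustrates the rounding scheme for normal inputs only and is not the Eigen implementation:
  ```cpp
  #include <cstdint>
  #include <cstring>

  // Keep the top 16 bits of the float representation, rounding to nearest
  // and breaking ties to even (valid for normal, non-NaN inputs).
  std::uint16_t float_to_bf16_rtne_sketch(float f) {
    std::uint32_t bits;
    std::memcpy(&bits, &f, sizeof(bits));
    bits += 0x7FFFu + ((bits >> 16) & 1u);
    return static_cast<std::uint16_t>(bits >> 16);
  }

  int main() {
    // 1.0f is 0x3F800000 as a float; its bfloat16 encoding is 0x3F80.
    return (float_to_bf16_rtne_sketch(1.0f) == 0x3F80u) ? 0 : 1;
  }
  ```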
* Allow implicit conversion from bfloat16 to float and double (Niels Dekker, 2020-07-11)
  Conversion from `bfloat16` to `float` and `double` is lossless. It seems natural to allow the conversion to be implicit, as the C++ language also supports implicit conversion from a smaller to a larger floating point type. Intel's oneDNN bfloat16 implementation also has an implicit `operator float()`: https://github.com/oneapi-src/oneDNN/blob/v1.5/src/common/bfloat16.hpp
* Guard operator<< test by EIGEN_NO_IO. (Rasmus Munk Larsen, 2020-07-09)
* Add operator<< to print a quaternion. (Rasmus Munk Larsen, 2020-07-09)
* Fix test basic stuff (David Tellenbach, 2020-07-09)
  - Guard fundamental types that are not available pre C++11
  - Separate subsequent angle brackets >> by spaces
  - Allow casting of Eigen::half and Eigen::bfloat16 to complex types
* Change the sign operator in Eigen to return NaN for NaN arguments, not zero. (Rasmus Munk Larsen, 2020-07-07)
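  A small illustration of the new behaviour (sketch, assuming the array-wise `sign()` API):
  ```cpp
  #include <Eigen/Dense>
  #include <cmath>
  #include <limits>

  int main() {
    Eigen::Array3f a(-2.f, 0.f, std::numeric_limits<float>::quiet_NaN());
    Eigen::Array3f s = a.sign();
    // After this change, sign(NaN) is NaN rather than 0; sign(0) stays 0.
    return (s(0) == -1.f && s(1) == 0.f && std::isnan(s(2))) ? 0 : 1;
  }
  ```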
* Make test packetmath C++98 compliant (David Tellenbach, 2020-07-01)
* Delete duplicate test cases in vectorization_logic.cpp (Kan Chen, 2020-07-01)
* Fix tensor casts for large packets and casts to/from std::complex (Antonio Sanchez, 2020-06-30)
  The original tensor casts were only defined for `SrcCoeffRatio`:`TgtCoeffRatio` 1:1, 1:2, 2:1, 4:1. Here we add the missing 1:N and 8:1. We also add casting of `Eigen::half` to/from `std::complex<T>`, which was missing, to make it consistent with `Eigen::bfloat16`, and generalize the overload to work for any complex type. Tests were added to `basicstuff`, `packetmath`, and `cxx11_tensor_casts` to cover all cast configurations.
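  An example of a cast pairing enabled by this change, assuming the unsupported Tensor module (minimal sketch):
  ```cpp
  #include <unsupported/Eigen/CXX11/Tensor>
  #include <complex>

  int main() {
    Eigen::Tensor<Eigen::half, 1> t(8);
    t.setConstant(Eigen::half(2.0f));

    // Cast half -> std::complex<float>; previously only bfloat16 had the
    // corresponding complex conversions.
    Eigen::Tensor<std::complex<float>, 1> c = t.cast<std::complex<float> >();
    return (c(0) == std::complex<float>(2.0f, 0.0f)) ? 0 : 1;
  }
  ```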