path: root/test/half_float.cpp
Commit message / Author / Age
* Add fmod(half, half). (Antonio Sanchez, 2021-03-15)
  This is to support TensorFlow's `tf.math.floormod` for half.
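  A minimal usage sketch; the unqualified call relies on ADL finding the new
  Eigen::half overload (an assumption, since only the commit subject names it):

    #include <Eigen/Core>
    #include <cmath>
    #include <iostream>

    int main() {
      Eigen::half a(7.5f), b(2.0f);
      using std::fmod;             // fallback; ADL also finds the Eigen::half overload
      Eigen::half r = fmod(a, b);  // 7.5 mod 2.0 == 1.5, same semantics as std::fmod
      std::cout << static_cast<float>(r) << "\n";
      return 0;
    }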
* Add increment/decrement operators to Eigen::half. (Antonio Sanchez, 2021-03-15)
  This is for consistency with bfloat16, and to support initialization with `std::iota`.
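  A short sketch of what the new operators enable:

    #include <Eigen/Core>
    #include <numeric>
    #include <vector>

    int main() {
      std::vector<Eigen::half> v(4);
      // std::iota requires operator++ on the value type, which half now provides.
      std::iota(v.begin(), v.end(), Eigen::half(0.f));  // v = {0, 1, 2, 3}
      return 0;
    }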
* Fix Half NaN definition and test. (Antonio Sanchez, 2020-11-23)
  The `half_float` test was failing with `-mcpu=cortex-a55` (native `__fp16`) due to
  a bad NaN bit-pattern comparison: when casting a float to `__fp16`, the signaling
  NaN is quieted. There was also an inconsistency between
  `numeric_limits<half>::quiet_NaN()` and `NumTraits<half>::quiet_NaN()`. Here we
  correct the inconsistency and compare NaNs according to the IEEE 754 definition.
  Also modified the `bfloat16_float` test to match. Tested with `cortex-a53` and
  `cortex-a55`.
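  A sketch of the IEEE 754 view of NaN for binary16: any value with an all-ones
  exponent and a nonzero mantissa is NaN, regardless of the quiet/signaling bit,
  so tests should not compare exact bit patterns. (is_nan_bits is an illustrative
  helper, not Eigen API.)

    #include <Eigen/Core>
    #include <cassert>
    #include <cstdint>

    // IEEE 754 binary16: NaN <=> exponent all ones (0x7C00) and mantissa != 0.
    bool is_nan_bits(std::uint16_t bits) {
      return (bits & 0x7FFF) > 0x7C00;
    }

    int main() {
      assert(is_nan_bits(0x7E00));   // a common quiet-NaN pattern
      assert(is_nan_bits(0x7C01));   // a signaling-NaN pattern is NaN too
      assert(!is_nan_bits(0x7C00));  // +infinity is not NaN
      Eigen::half h = Eigen::NumTraits<Eigen::half>::quiet_NaN();
      assert((Eigen::numext::isnan)(h));  // value-level check, payload-agnostic
      return 0;
    }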
* Add bit_cast for half/bfloat to/from uint16_t, fix TensorRandom. (Antonio Sanchez, 2020-11-18)
  The existing `TensorRandom.h` implementation assumes that `half` (`bfloat16`) has
  a `uint16_t` member `x` (`value`), which is not always true. This currently fails
  on arm64, where `x` has type `__fp16`. Added `bit_cast` specializations to allow
  casting to/from `uint16_t` for both `half` and `bfloat16`. Also added tests in
  `half_float`, `bfloat16_float`, and `cxx11_tensor_random` to catch these errors in
  the future.
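  A sketch of the round trip the new specializations enable (the exact spelling
  Eigen::numext::bit_cast is an assumption; the commit only says `bit_cast`):

    #include <Eigen/Core>
    #include <cstdint>

    int main() {
      Eigen::half h(1.0f);
      // Works whether half is backed by a uint16_t member or a native __fp16.
      std::uint16_t bits = Eigen::numext::bit_cast<std::uint16_t>(h);  // 0x3C00 for 1.0
      Eigen::half back   = Eigen::numext::bit_cast<Eigen::half>(bits);
      return back == h ? 0 : 1;
    }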
* Add support for Armv8.2-a __fp16. (David Tellenbach, 2020-10-28)
  Armv8.2-a provides a native half-precision floating-point type (__fp16, a.k.a.
  float16_t). This patch introduces:
  - __fp16 as the underlying type of Eigen::half if this type is available
  - the packet types Packet4hf and Packet8hf, representing float16x4_t and
    float16x8_t respectively
  - packet-math for the above packets with corresponding scalar type Eigen::half
  The packet-math functionality has been implemented by Ashutosh Sharma
  <ashutosh.sharma@amperecomputing.com>. This closes #1940.
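  Nothing changes at the API level; ordinary Eigen::half code like the sketch
  below can now vectorize to float16x8_t on Armv8.2-a targets (building with
  something like -march=armv8.2-a+fp16 is an assumption about the toolchain):

    #include <Eigen/Core>

    int main() {
      // With native __fp16 this addition may lower to Packet8hf (float16x8_t) ops.
      Eigen::Matrix<Eigen::half, 8, 1> a, b;
      a.setConstant(Eigen::half(1.5f));
      b.setConstant(Eigen::half(2.0f));
      Eigen::Matrix<Eigen::half, 8, 1> c = a + b;
      return c(0) == Eigen::half(3.5f) ? 0 : 1;
    }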
* Clean up float16 a.k.a. Eigen::half support in Eigen. (Rasmus Munk Larsen, 2019-08-27)
  Move the definition of half to Core/arch/Default and move arch-specific packet
  ops to their respective sub-directories.
* Get rid of EIGEN_TEST_FUNC; unit tests must now be declared with EIGEN_DECLARE_TEST(mytest) { /* code */ }. (Gael Guennebaud, 2018-07-17)
  This provides several advantages:
  - more flexibility in designing unit tests
  - unit tests can be glued to speed up compilation
  - unit tests are compiled with the same predefined macros, which is a requirement
    for zapcc
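  The new declaration style, sketched with Eigen's test harness (the subtest
  body is illustrative, not taken from the actual test file):

    #include "main.h"  // Eigen test harness: EIGEN_DECLARE_TEST, CALL_SUBTEST, VERIFY_*

    void check_basic_half_arithmetic() {
      VERIFY_IS_EQUAL(float(Eigen::half(1.f) + Eigen::half(1.f)), 2.f);
    }

    EIGEN_DECLARE_TEST(half_float_example) {
      for (int i = 0; i < g_repeat; i++) {
        CALL_SUBTEST(check_basic_half_arithmetic());
      }
    }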
* merging updates from upstream (Deven Desai, 2018-07-11)
|\
| * test product kernel with half-floats. (Gael Guennebaud, 2018-07-06)
* | updates based on PR feedback (Deven Desai, 2018-06-14)
|/
  There are two major changes (and a few minor ones which are not listed here; see
  the PR discussion for details):
  1. The Eigen::half implementations for HIP and CUDA have been merged. This means
     that
     - `CUDA/Half.h` and `HIP/hcc/Half.h` got merged into a new file `GPU/Half.h`
     - `CUDA/PacketMathHalf.h` and `HIP/hcc/PacketMathHalf.h` got merged into a new
       file `GPU/PacketMathHalf.h`
     - `CUDA/TypeCasting.h` and `HIP/hcc/TypeCasting.h` got merged into a new file
       `GPU/TypeCasting.h`
     After this change the `HIP/hcc` directory only contains one file,
     `math_constants.h`. That will go away too once that file becomes part of the
     HIP install.
  2. New macros EIGEN_GPUCC, EIGEN_GPU_COMPILE_PHASE and EIGEN_HAS_GPU_FP16 have
     been added, and the code has been updated to use them where appropriate.
     - `EIGEN_GPUCC` is the same as `(EIGEN_CUDACC || EIGEN_HIPCC)`
     - `EIGEN_GPU_COMPILE_PHASE` is the same as `(EIGEN_CUDA_ARCH || EIGEN_HIP_DEVICE_COMPILE)`
     - `EIGEN_HAS_GPU_FP16` is the same as `(EIGEN_HAS_CUDA_FP16 || EIGEN_HAS_HIP_FP16)`
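  A sketch of how the unified macros are meant to be used (macro names are
  quoted from the message above; the function itself is hypothetical):

    #include <Eigen/Core>

    // One code path shared by the CUDA and HIP builds.
    EIGEN_DEVICE_FUNC Eigen::half add_halves(Eigen::half a, Eigen::half b) {
    #if defined(EIGEN_GPUCC)
      // Compiled by a GPU-capable compiler (nvcc or hipcc), host or device pass.
    #endif
    #if defined(EIGEN_GPU_COMPILE_PHASE) && defined(EIGEN_HAS_GPU_FP16)
      // Device-side pass with native fp16: the addition can use hardware half ops.
    #endif
      return a + b;  // on the host this falls back to the software implementation
    }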
* Added support for CUDA 9.0. (Benoit Steiner, 2017-08-31)
* Fix compilation with some compilers. (Gael Guennebaud, 2017-06-09)
* Add missing std::numeric_limits specialization for half, and complete NumTraits<half>. (Gael Guennebaud, 2017-06-09)
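  With the specialization in place, generic code can query half like any other
  arithmetic type (a minimal sketch):

    #include <Eigen/Core>
    #include <iostream>
    #include <limits>

    int main() {
      using L = std::numeric_limits<Eigen::half>;
      static_assert(L::is_specialized, "numeric_limits<half> must be specialized");
      std::cout << "epsilon = " << static_cast<float>(L::epsilon()) << "\n"  // 2^-10
                << "max     = " << static_cast<float>(L::max()) << "\n"      // 65504
                << "digits  = " << L::digits << "\n";                        // 11
      return 0;
    }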
* Added support for expm1 in Eigen. (Srinivas Vasudevan, 2016-12-02)
* Add log1p support for CUDA and half floats. (Igor Babuschkin, 2016-08-08)
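  Both this entry and the expm1 entry above target accuracy near zero, where
  exp(x) - 1 and log(1 + x) cancel catastrophically; a sketch via the numext
  wrappers (that they dispatch for half is an assumption):

    #include <Eigen/Core>
    #include <iostream>

    int main() {
      Eigen::half x(9.765625e-4f);  // 2^-10, exactly representable in half
      // For x this small, exp(x)-1 and log(1+x) computed directly in half
      // precision would lose most significant digits; expm1/log1p keep them.
      Eigen::half a = Eigen::numext::expm1(x);
      Eigen::half b = Eigen::numext::log1p(x);
      std::cout << static_cast<float>(a) << " " << static_cast<float>(b) << "\n";
      return 0;
    }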
* Check that it's possible to forward declare the half type. (Benoit Steiner, 2016-08-03)
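  What the test guards, sketched as a standalone header stub (assuming half is
  declared as a struct directly in namespace Eigen, which is what the check
  requires; the function names are hypothetical):

    #include <cstddef>

    // Forward declaration only; no Eigen header is included.
    namespace Eigen { struct half; }

    // An incomplete type is fine behind pointers and references.
    Eigen::half* allocate_halves(std::size_t n);
    void fill_ones(Eigen::half* data, std::size_t n);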
* Move half unit test from unsupported to main tests. (Gael Guennebaud, 2016-07-22)