path: root/Eigen/src/Core/functors
Commit message / Author / Date
* Don't use the rational approximation to the logistic function on GPUs, as it appears to be slightly slower. (Rasmus Munk Larsen, 2020-01-09)
* The upper limits for where to use the rational approximation to the logistic function were not set carefully enough in the original commit, and some arguments would cause the function to return values greater than 1. This change sets the limits to the values found by scanning all floating-point numbers (using std::nextafterf()). (Rasmus Munk Larsen, 2020-01-08)
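  A bound like this can be found by brute force with std::nextafterf. A minimal sketch of the idea, using the exact logistic as a stand-in for Eigen's rational approximation (the real coefficients live in the scalar_logistic_op<float> specialization):

    #include <cmath>
    #include <cstdio>
    #include <limits>

    // Stand-in for the rational approximation; not Eigen's actual polynomial.
    float approx_logistic(float x) { return 1.0f / (1.0f + std::exp(-x)); }

    int main() {
      // Walk upward one representable float at a time until the approximation
      // first returns exactly 1.0f; that value is a safe upper clamp point.
      // (Brute force: this visits tens of millions of floats.)
      float x = 1.0f;
      while (approx_logistic(x) < 1.0f)
        x = std::nextafterf(x, std::numeric_limits<float>::infinity());
      std::printf("smallest x with f(x) == 1: %g\n", x);
    }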
* Bug #1785: Introduce numext::rint. (Ilya Tokar, 2020-01-07)
  This provides a new op that matches std::rint and the previous behavior of pround. Also adds the corresponding unsupported/../Tensor op. Performance is the same as for e.g. floor (tested on SSE/AVX).
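  For illustration of the std::rint semantics this op matches: under the default rounding mode, rint rounds halfway cases to the even neighbor, unlike round.

    #include <cmath>
    #include <cstdio>

    int main() {
      // std::rint uses the current rounding mode (round-to-nearest-even by
      // default), so halfway cases go to the even neighbor:
      std::printf("%.1f %.1f\n", std::rint(2.5), std::rint(3.5));   // 2.0 4.0
      std::printf("%.1f %.1f\n", std::round(2.5), std::round(3.5)); // 3.0 4.0
    }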
* Improve accuracy of fast approximate tanh and the logistic functions in Eigen, such that they preserve relative accuracy to within a few ULPs where their function values tend to zero (around x=0 for tanh, and for large negative x for the logistic function). (Rasmus Munk Larsen, 2019-12-16)
  This change re-instates the fast rational approximation of the logistic function for float32 in Eigen (removed in https://gitlab.com/libeigen/eigen/commit/66f07efeaed39d6a67005343d7e0caf7d9eeacdb), but uses the more accurate approximation 1/(1+exp(-x)) ~= exp(x) below -9. The exponential is only calculated on the vectorized path if at least one element in the SIMD input vector is less than -9.
  This change also contains a few improvements to speed up the original float specialization of logistic:
  - Introduce EIGEN_PREDICT_{FALSE,TRUE} for __builtin_expect and use it to predict that the logistic-only path is most likely (~2-3% speedup for the common case).
  - Carefully set the upper clipping point to the smallest x where the approximation evaluates to exactly 1. This saves the explicit clamping of the output (~7% speedup).
  The increased accuracy for tanh comes at a cost of 10-20% depending on the instruction set. The benchmarks below repeated the calls u = v.logistic() (u = v.tanh(), respectively), where u and v are of type Eigen::ArrayXf, have length 8k, and v contains random numbers in [-1,1].

  Benchmark numbers for logistic:
  Before:
    ISA      Benchmark                 Time(ns)  CPU(ns)  Iterations  model_time
    SSE      BM_eigen_logistic_float       4467     4468      155835        4827
    AVX      BM_eigen_logistic_float       2347     2347      299135        2926
    AVX+FMA  BM_eigen_logistic_float       1467     1467      476143        2926
    AVX512   BM_eigen_logistic_float        805      805      858696        1463
  After:
    SSE      BM_eigen_logistic_float       2589     2590      270264        4827
    AVX      BM_eigen_logistic_float       1428     1428      489265        2926
    AVX+FMA  BM_eigen_logistic_float       1059     1059      662255        2926
    AVX512   BM_eigen_logistic_float        673      673     1000000        1463

  Benchmark numbers for tanh:
  Before:
    SSE      BM_eigen_tanh_float           2391     2391      292624        4242
    AVX      BM_eigen_tanh_float           1256     1256      554662        2633
    AVX+FMA  BM_eigen_tanh_float            823      823      866267        1609
    AVX512   BM_eigen_tanh_float            443      443     1578999         805
  After:
    SSE      BM_eigen_tanh_float           2588     2588      273531        4242
    AVX      BM_eigen_tanh_float           1536     1536      452321        2633
    AVX+FMA  BM_eigen_tanh_float           1007     1007      694681        1609
    AVX512   BM_eigen_tanh_float            471      471     1472178         805
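  A scalar sketch of the resulting branch structure (the -9 cutoff is from the commit; the upper clamp constant here is a hypothetical stand-in for the scanned value, and the real kernel uses a vectorized rational polynomial rather than std::exp):

    #include <cmath>

    // Hedged sketch only: constants are illustrative, not Eigen's tuned values.
    float logistic_sketch(float x) {
      const float kExpCutoff = -9.0f;   // from the commit: use exp(x) below -9
      const float kUpperClamp = 17.0f;  // hypothetical scanned clamp point
      if (x < kExpCutoff) return std::exp(x);  // 1/(1+exp(-x)) ~= exp(x) here
      if (x > kUpperClamp) return 1.0f;        // approximation saturates to 1
      return 1.0f / (1.0f + std::exp(-x));     // stand-in for the rational approx
    }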
* Revert the specialization for scalar_logistic_op<float> introduced in: https://bitbucket.org/eigen/eigen/commits/77b447c24e3344e43ff64eb932d4bb35a2db01ce (Rasmus Munk Larsen, 2019-12-02)
  While providing a 50% speedup on Haswell+ processors, the large relative error outside [-18, 18] in this approximation causes problems, e.g., when computing gradients of activation functions like softplus in neural networks.
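  For context (an editorial note, not part of the commit): the gradient of softplus(x) = log(1 + exp(x)) is exactly the logistic function, exp(x)/(1 + exp(x)) = 1/(1 + exp(-x)), so relative error in the logistic tails feeds straight into backpropagated gradients. A quick numerical check at a point outside [-18, 18]:

    #include <cmath>
    #include <cstdio>

    int main() {
      const double x = -20.0, h = 1e-6;  // a point outside [-18, 18]
      // Central difference of softplus vs the closed-form logistic gradient.
      double numeric =
          (std::log1p(std::exp(x + h)) - std::log1p(std::exp(x - h))) / (2 * h);
      double logistic = 1.0 / (1.0 + std::exp(-x));
      std::printf("numeric: %g  logistic: %g\n", numeric, logistic);
    }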
* PR 719: fix real/imag namespace conflict (Gael Guennebaud, 2019-10-08)
* [SYCL] This PR adds the minimum modifications to Eigen core required to run Eigen unsupported modules on devices supporting SYCL. (Mehdi Goli, 2019-06-27)
  - Adding the SYCL memory model
  - Enabling/disabling the SYCL backend in Core
  - Supporting vectorization
* Fix build with clang on Windows. (Rasmus Munk Larsen, 2019-05-09)
* Restore C++03 compatibility (Christoph Hertzberg, 2019-05-06)
* Fix traits for scalar_logistic_op. (Rasmus Munk Larsen, 2019-05-03)
* Make clipping outside [-18:18] consistent for the vectorized and non-vectorized paths of scalar_logistic_op<float>. (Rasmus Munk Larsen, 2019-03-15)
* bug #1684: partially work around clang 6/7 bug #40815 (Gael Guennebaud, 2019-03-13)
* Fix harmless Scalar vs RealScalar cast. (Gael Guennebaud, 2019-02-18)
* Set cost of conjugate to 0 (in practice it boils down to a no-op). (Gael Guennebaud, 2019-02-18)
  This is also important to make sure that A.conjugate() * B.conjugate() does not evaluate its arguments into temporaries (e.g., if A and B are fixed-size and small, or if the product falls back to lazyProduct).
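  A usage sketch of the expression the cost change targets (standard Eigen API; the claim about avoiding temporaries is the commit's own):

    #include <Eigen/Dense>
    #include <complex>

    int main() {
      using Mat = Eigen::Matrix<std::complex<float>, 4, 4>;
      Mat A = Mat::Random(), B = Mat::Random();
      // With conjugate costed at 0, the product below can read the conjugated
      // operands directly rather than evaluating them into temporaries first.
      Mat C = A.conjugate() * B.conjugate();
      (void)C;
    }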
* Add support for inverse hyperbolic functions. (Rasmus Munk Larsen, 2019-01-11)
  Fix cost of division.
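  A short usage sketch, assuming the usual coefficient-wise Array API that this change extends:

    #include <Eigen/Dense>
    #include <iostream>

    int main() {
      Eigen::ArrayXf x = Eigen::ArrayXf::LinSpaced(5, -0.9f, 0.9f);
      // Inverse hyperbolic functions round-trip their hyperbolic
      // counterparts (acosh recovers |x| since cosh is even):
      std::cout << x.sinh().asinh().transpose() << "\n";
      std::cout << x.cosh().acosh().transpose() << "\n";
      std::cout << x.tanh().atanh().transpose() << "\n";
    }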
* bug #1630: fix linspaced when requesting a smaller packet size than the default one. (Gael Guennebaud, 2018-11-28)
* Add an optimized version of the logistic function for float. As an example, this is about 50% faster than the existing version on Haswell using AVX. (Rasmus Munk Larsen, 2018-11-12)
* sigmoid -> logistic (Rasmus Munk Larsen, 2018-08-13)
* Move sigmoid functor to core. (Rasmus Munk Larsen, 2018-08-03)
* Updates based on PR feedback. (Deven Desai, 2018-06-14)
  There are two major changes (and a few minor ones which are not listed here; see the PR discussion for details):
  1. The Eigen::half implementations for HIP and CUDA have been merged. This means that:
     - `CUDA/Half.h` and `HIP/hcc/Half.h` were merged into a new file `GPU/Half.h`
     - `CUDA/PacketMathHalf.h` and `HIP/hcc/PacketMathHalf.h` were merged into a new file `GPU/PacketMathHalf.h`
     - `CUDA/TypeCasting.h` and `HIP/hcc/TypeCasting.h` were merged into a new file `GPU/TypeCasting.h`
     After this change the `HIP/hcc` directory only contains one file, `math_constants.h`. That will go away too once that file becomes part of the HIP install.
  2. New macros EIGEN_GPUCC, EIGEN_GPU_COMPILE_PHASE and EIGEN_HAS_GPU_FP16 have been added, and the code has been updated to use them where appropriate (see the sketch below):
     - `EIGEN_GPUCC` is the same as `(EIGEN_CUDACC || EIGEN_HIPCC)`
     - `EIGEN_GPU_COMPILE_PHASE` is the same as `(EIGEN_CUDA_ARCH || EIGEN_HIP_DEVICE_COMPILE)`
     - `EIGEN_HAS_GPU_FP16` is the same as `(EIGEN_HAS_CUDA_FP16 || EIGEN_HAS_HIP_FP16)`
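  A simplified sketch of the unification described above, written directly from the equivalences the commit lists (the real definitions live in Eigen's configuration headers):

    // Unified GPU macros as ORs of their CUDA/HIP counterparts:
    #if defined(EIGEN_CUDACC) || defined(EIGEN_HIPCC)
    #define EIGEN_GPUCC
    #endif
    #if defined(EIGEN_CUDA_ARCH) || defined(EIGEN_HIP_DEVICE_COMPILE)
    #define EIGEN_GPU_COMPILE_PHASE
    #endif
    #if defined(EIGEN_HAS_CUDA_FP16) || defined(EIGEN_HAS_HIP_FP16)
    #define EIGEN_HAS_GPU_FP16
    #endif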
* Adding support for using Eigen in HIP kernels. (Deven Desai, 2018-06-06)
  This commit enables the use of Eigen in HIP kernels / on AMD GPUs. Support has been added along the same lines as what already exists for using Eigen in CUDA kernels / on NVidia GPUs.
  Application code needs to explicitly define EIGEN_USE_HIP when using Eigen in HIP kernels. This is because some of the CUDA headers get picked up by default during an Eigen compile (irrespective of whether or not the underlying compiler is CUDACC/NVCC, e.g. Eigen/src/Core/arch/CUDA/Half.h). In order to maintain this behavior, the EIGEN_USE_HIP macro is used to switch to the HIP version of those header files (see Eigen/Core and unsupported/Eigen/CXX11/Tensor).
  Use the "-DEIGEN_TEST_HIP" cmake option to enable the HIP-specific unit tests.
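  The opt-in described above looks like this in application code (presumably paired with something like `cmake -DEIGEN_TEST_HIP=ON` to enable the HIP unit tests):

    // Define EIGEN_USE_HIP before including Eigen so the HIP variants of
    // the GPU headers are selected (see Eigen/Core):
    #define EIGEN_USE_HIP
    #include <Eigen/Core>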
* Factorize code between numext::hypot and the scalar_hypot_op functor. (Gael Guennebaud, 2018-04-04)
* bug #1532: disable std::*_negate in C++17 (they are deprecated) (Gael Guennebaud, 2018-04-03)
* Add an EIGEN_NO_CUDA option, and introduce EIGEN_CUDACC and EIGEN_CUDA_ARCH aliases (Gael Guennebaud, 2017-07-17)
* bug #1417: make LinSpaced compatible with std::complex (Gael Guennebaud, 2017-06-06)
* bug #1383: fix regression in LinSpaced for integers and high<low (Gael Guennebaud, 2017-01-25)
* bug #1383: Fix regression from 3.2 with LinSpaced(n,0,n-1) with n==0. (Gael Guennebaud, 2017-01-25)
* bug #1376: add missing assertion on size mismatch with compound assignment operators (e.g., mat += mat.col(j)) (Gael Guennebaud, 2017-01-23)
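  A sketch of the situation the new assertion catches (the offending line is left commented out, since it aborts in debug builds):

    #include <Eigen/Dense>

    int main() {
      Eigen::MatrixXf mat(3, 4);
      mat.setRandom();
      // mat is 3x4 while mat.col(1) is 3x1: with this fix, the compound
      // assignment below triggers a size-mismatch assertion in debug builds
      // instead of silently misbehaving.
      // mat += mat.col(1);
    }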
* MSVC 2015 supports all the C++11 features we need, while MSVC 2017 fails on binder1st/binder2nd (Gael Guennebaud, 2017-01-06)
* use numext::abs (Angelos Mantzaflaris, 2016-12-02)
  (grafted from 0a08d4c60b652d1f24b2fa062c818c4b93890c59)
* 1. Add explicit template to abs2 (resolves deduction for some arithmetic types). (Angelos Mantzaflaris, 2016-12-02)
  2. Avoid signed-unsigned conversion in comparison (warning in case Scalar is unsigned)
  (grafted from 4086187e49760d4bde72750dfa20ae9451263417)
* Added support for expm1 in Eigen. (Srinivas Vasudevan, 2016-12-02)
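  A short sketch of why expm1 matters, assuming the coefficient-wise Array API this change added:

    #include <Eigen/Dense>
    #include <iostream>

    int main() {
      Eigen::ArrayXd x(3);
      x << 1e-12, 1e-8, 1.0;
      // expm1 stays accurate for tiny x, where exp(x) - 1 loses its
      // significant digits to cancellation:
      std::cout << x.expm1().transpose() << "\n";
      std::cout << (x.exp() - 1.0).transpose() << "\n";
    }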
* bug #1351: fix compilation of random with old compilers (Gael Guennebaud, 2016-11-30)
* Added isnan, isfinite and isinf for SYCL devices, plus tests for them. (Luke Iwanski, 2016-11-18)
* bug #1004: improve accuracy of LinSpaced for abs(low) >> abs(high). (Gael Guennebaud, 2016-11-02)
* bug #1004: one more rewrite of LinSpaced for floating point numbers to guarantee both interpolation and monotonicity. (Gael Guennebaud, 2016-10-25)
  This version simply does low+i*step plus a branch to return high if i==size-1. Vectorization is accomplished with a branch and the help of pinsertlast. Some quick benchmarks revealed that the overhead is really marginal, even when filling small vectors.
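  A scalar sketch of the scheme described above (the vectorized pinsertlast path is omitted):

    #include <cstddef>

    // low + i*step, with a branch that returns high exactly at the last index.
    float linspaced_at(float low, float high, std::size_t size, std::size_t i) {
      const float step = (high - low) / static_cast<float>(size - 1);
      return (i == size - 1) ? high : low + static_cast<float>(i) * step;
    }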
* bug #1004: remove the inaccurate "sequential" path for LinSpaced, mark the respective function as deprecated, and enforce strict interpolation of the higher range using a correction term. (Gael Guennebaud, 2016-10-24)
  Now, even with floating point precision, both the 'low' and 'high' bounds are exactly reproduced at i=0 and i=size-1, respectively.
* bug #698: rewrite LinSpaced for integer scalar types to avoid overflow and guarantee an even spacing when possible. (Gael Guennebaud, 2016-10-24)
  Otherwise, the "high" bound is implicitly lowered to the largest value allowing for an even distribution. This changeset also disables vectorization for this integer path.
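  A scalar sketch of the integer scheme, under the assumption high >= low (Eigen's actual implementation is also careful about overflow in the high - low computation):

    // Round the step down so the spacing is even, which implicitly lowers
    // the reachable "high" bound.
    int linspaced_int_at(int low, int high, int size, int i) {
      const int step = (size <= 1) ? 0 : (high - low) / (size - 1);  // floor
      return low + i * step;  // the last value may fall below high
    }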
* Adding EIGEN_DEVICE_FUNC in the Geometry module. (Robert Lukierski, 2016-10-12)
  Additional fixes necessary for CUDA in the Core (mostly usage of EIGEN_USING_STD_MATH).
* bug #1195: move NumTraits::Div<>::Cost to internal::scalar_div_cost (with some specializations in arch/SSE and arch/AVX) (Gael Guennebaud, 2016-09-08)
* Fix shadowing wrt Eigen::Index (Gael Guennebaud, 2016-09-05)
* bug #1286: automatically detect the available prototypes of functors passed to CwiseNullaryExpr, such that functors only have to implement the operators that matter among: operator()(), operator()(i), operator()(i,j). (Gael Guennebaud, 2016-08-31)
  Linear access is also automatically detected based on the availability of operator()(i).
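  A usage sketch with a hypothetical functor (name and logic are illustrative): only operator()(i,j) is implemented, and Eigen detects which of the three prototypes is available.

    #include <Eigen/Dense>
    #include <iostream>

    // Implements only the (i,j) prototype; the other two are not needed.
    struct Identity01 {
      double operator()(Eigen::Index i, Eigen::Index j) const {
        return i == j ? 1.0 : 0.0;
      }
    };

    int main() {
      Eigen::MatrixXd I = Eigen::MatrixXd::NullaryExpr(3, 3, Identity01());
      std::cout << I << "\n";
    }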
* bug #1167: simplify installation of header files using cmake's install(DIRECTORY ...) command. (Gael Guennebaud, 2016-08-29)
* Cleanup cost of tanh (Gael Guennebaud, 2016-08-23)
* Factorize the 4 copies of tanh implementations, make numext::tanh consistent with array::tanh, enable fast tanh in fast-math mode only. (Gael Guennebaud, 2016-08-23)
* fix tanh inconsistency (Ziming Dong, 2016-08-06)
* bug #1232: refactor special functions as a new SpecialFunctions module, currently in unsupported/. (Gael Guennebaud, 2016-07-08)
* Cleanup unused functors. (Gael Guennebaud, 2016-06-14)
* Generalize expr.pow(scalar), pow(expr,scalar) and pow(scalar,expr). (Gael Guennebaud, 2016-06-14)
  Internal: scalar_pow_op (unary) is removed, and scalar_binary_pow_op is renamed scalar_pow_op.
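  A usage sketch of the three generalized forms, assuming the free functions live in namespace Eigen as the unqualified names suggest:

    #include <Eigen/Dense>
    #include <iostream>

    int main() {
      Eigen::ArrayXd a = Eigen::ArrayXd::LinSpaced(4, 1.0, 4.0);
      std::cout << a.pow(2.0).transpose() << "\n";          // expr.pow(scalar)
      std::cout << Eigen::pow(a, 2.0).transpose() << "\n";  // pow(expr, scalar)
      std::cout << Eigen::pow(2.0, a).transpose() << "\n";  // pow(scalar, expr)
    }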
* Implement expr+scalar, scalar+expr, expr-scalar, and scalar-expr as binary expressions, and generalize supported scalar types. (Gael Guennebaud, 2016-06-14)
  The following functors are now deprecated: scalar_add_op, scalar_sub_op, and scalar_rsub_op.