path: root/Eigen/src/Core/util
Commit message (Author, Date)
...
* Replace the call to int64_t in the blasutil test by explicit types (David Tellenbach, 2020-08-14)
  Some platforms define int64_t to be long long even for C++03. If this is the case we miss the definition of internal::make_unsigned for this type. If we just define the template we get duplicate definition errors for platforms defining int64_t as signed long for C++03. We need to find a way to distinguish both cases at compile-time.
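  A minimal C++ sketch (not Eigen code) of the collision described above; make_unsigned_sketch stands in for internal::make_unsigned purely for illustration:

      #include <cstdint>

      // Primary template plus the usual fixed-width specializations.
      template <typename T> struct make_unsigned_sketch;
      template <> struct make_unsigned_sketch<signed long>      { typedef unsigned long type; };
      template <> struct make_unsigned_sketch<signed long long> { typedef unsigned long long type; };

      // If int64_t is a typedef for 'signed long', uncommenting the line below
      // redeclares an existing specialization and the build fails; if int64_t
      // is 'long long', the line is merely redundant. Hence the need to
      // distinguish the two cases at compile time.
      // template <> struct make_unsigned_sketch<std::int64_t> { typedef std::uint64_t type; };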
* Make numext::as_uint a device function. (Rasmus Munk Larsen, 2020-07-22)
* Change the sign operator in Eigen to return NaN for NaN arguments, not zero. (Rasmus Munk Larsen, 2020-07-07)
* Support BFloat16 in Eigen (Teng Lu, 2020-06-20)
* Vectorizing MMA packing. (Everton Constantino, 2020-05-19)
  - Optimizing MMA kernel.
  - Adding PacketBlock store to blas_data_mapper.
* Added support for reverse iterators for Vectorwise operations. (Felipe Attanasio, 2020-05-14)
* Fix confusing template param name for Stride fwd decl. (Xiaoxiang Cao, 2020-04-30)
* Add numeric_limits min and max for bool (Akshay Naresh Modi, 2020-04-06)
  This will allow (among other things) computation of argmax and argmin of bool tensors.
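  A hedged sketch of the kind of specialization this adds; the struct name and qualifiers below are placeholders, not Eigen's actual device-side numeric-limits helper:

      // Hypothetical numeric_limits-style helper specialized for bool, so that
      // reductions such as argmax/argmin have well-defined identity values.
      template <typename T> struct numeric_limits_sketch;
      template <> struct numeric_limits_sketch<bool> {
        static bool (min)() { return false; }  // smallest representable bool
        static bool (max)() { return true; }   // largest representable bool
      };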
* Add absolute_difference coefficient-wise binary Array function (Joel Holdsworth, 2020-03-19)
* Fixing HIP breakage caused by the recent commit that introduces Packet4h2 as the Eigen::Half packet type (Deven Desai, 2020-03-12)
* Revert "avoid selecting half-packets when unnecessary"Gravatar Rasmus Munk Larsen2020-02-25
| | | This reverts commit 5ca10480b0756e40b0723d90adeba8506291fc7c
* Revert "Pick full packet unconditionally when EIGEN_UNALIGNED_VECTORIZE"Gravatar Rasmus Munk Larsen2020-02-25
| | | This reverts commit 44df2109c8c700222643a9a45f144676348f4df1
* Revert "do not pick full-packet if it'd result in more operations"Gravatar Rasmus Munk Larsen2020-02-25
| | | This reverts commit e9cc0cd353803a818204e48054bd89699b84e6c6
* do not pick full-packet if it'd result in more operations (Francesco Mazzoli, 2020-02-07)
  See comment and <https://gitlab.com/libeigen/eigen/merge_requests/46#note_270622952>.
* Pick full packet unconditionally when EIGEN_UNALIGNED_VECTORIZE (Francesco Mazzoli, 2020-02-07)
  See comment for details.
* avoid selecting half-packets when unnecessary (Francesco Mazzoli, 2020-02-07)
  See <https://stackoverflow.com/questions/59709148/ensuring-that-eigen-uses-avx-vectorization-for-a-certain-operation> for an explanation of the problem this solves. In short, before this commit the half-packet is selected when the array / matrix size is not a multiple of `unpacket_traits<PacketType>::size`, where `PacketType` starts out being the full Packet. For example, for some data of 100 `float`s, `Packet4f` will be selected rather than `Packet8f`, because 100 is not a multiple of 8, the size of `Packet8f`.
  This commit switches to selecting the half-packet if the size is less than the packet size, which seems to make more sense. As I stated in the SO post I'm not sure that I'm understanding the issue correctly, but this fix resolves the issue in my program. Moreover, `make check` passes, with the exception of lines 614 and 616 in `test/packetmath.cpp`, which however also fail on master on my machine:
    CHECK_CWISE1_IF(PacketTraits::HasBessel, numext::bessel_i0, internal::pbessel_i0);
    ...
    CHECK_CWISE1_IF(PacketTraits::HasBessel, numext::bessel_i1, internal::pbessel_i1);
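  A hedged illustration of the selection-rule change described above; the function names and the packet size of 8 are for the example only, not Eigen internals:

      // Old rule: fall back to the half packet whenever the size is not a
      // multiple of the full packet size (100 floats -> Packet4f).
      constexpr bool use_half_packet_old(int size, int full_packet_size) {
        return size % full_packet_size != 0;
      }
      // New rule: fall back only when the size is smaller than the full packet.
      constexpr bool use_half_packet_new(int size, int full_packet_size) {
        return size < full_packet_size;
      }
      static_assert(use_half_packet_old(100, 8), "old rule drops to Packet4f for 100 floats");
      static_assert(!use_half_packet_new(100, 8), "new rule keeps Packet8f for 100 floats");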
* Bug #1788: Fix rule-of-three violations inside the stable modules. (Christoph Hertzberg, 2019-12-19)
  This fixes deprecated-copy warnings when compiling with GCC>=9. Also protect some additional Base constructors from getting called by user code (#1587).
* Add default definition for EIGEN_PREDICT_* (Rasmus Munk Larsen, 2019-12-16)
* Improve accuracy of fast approximate tanh and the logistic functions in Eigen, such that they preserve relative accuracy to within a few ULPs where their function values tend to zero (around x=0 for tanh, and for large negative x for the logistic function). (Rasmus Munk Larsen, 2019-12-16)
  This change re-instates the fast rational approximation of the logistic function for float32 in Eigen (removed in https://gitlab.com/libeigen/eigen/commit/66f07efeaed39d6a67005343d7e0caf7d9eeacdb), but uses the more accurate approximation 1/(1+exp(-x)) ~= exp(x) below x = -9. The exponential is only calculated on the vectorized path if at least one element in the SIMD input vector is less than -9.
  This change also contains a few improvements to speed up the original float specialization of logistic:
  - Introduce EIGEN_PREDICT_{FALSE,TRUE} for __builtin_expect and use it to predict that the logistic-only path is most likely (~2-3% speedup for the common case).
  - Carefully set the upper clipping point to the smallest x where the approximation evaluates to exactly 1. This saves the explicit clamping of the output (~7% speedup).
  The increased accuracy for tanh comes at a cost of 10-20% depending on instruction set.
  The benchmarks below repeated the calls u = v.logistic() (u = v.tanh(), respectively), where u and v are of type Eigen::ArrayXf, have length 8k, and v contains random numbers in [-1,1].

  Benchmark numbers for logistic:

  Before:
  Benchmark                        Time(ns)  CPU(ns)  Iterations
  -----------------------------------------------------------------
  SSE      BM_eigen_logistic_float     4467     4468      155835   model_time: 4827
  AVX      BM_eigen_logistic_float     2347     2347      299135   model_time: 2926
  AVX+FMA  BM_eigen_logistic_float     1467     1467      476143   model_time: 2926
  AVX512   BM_eigen_logistic_float      805      805      858696   model_time: 1463

  After:
  Benchmark                        Time(ns)  CPU(ns)  Iterations
  -----------------------------------------------------------------
  SSE      BM_eigen_logistic_float     2589     2590      270264   model_time: 4827
  AVX      BM_eigen_logistic_float     1428     1428      489265   model_time: 2926
  AVX+FMA  BM_eigen_logistic_float     1059     1059      662255   model_time: 2926
  AVX512   BM_eigen_logistic_float      673      673     1000000   model_time: 1463

  Benchmark numbers for tanh:

  Before:
  Benchmark                        Time(ns)  CPU(ns)  Iterations
  -----------------------------------------------------------------
  SSE      BM_eigen_tanh_float         2391     2391      292624   model_time: 4242
  AVX      BM_eigen_tanh_float         1256     1256      554662   model_time: 2633
  AVX+FMA  BM_eigen_tanh_float          823      823      866267   model_time: 1609
  AVX512   BM_eigen_tanh_float          443      443     1578999   model_time: 805

  After:
  Benchmark                        Time(ns)  CPU(ns)  Iterations
  -----------------------------------------------------------------
  SSE      BM_eigen_tanh_float         2588     2588      273531   model_time: 4242
  AVX      BM_eigen_tanh_float         1536     1536      452321   model_time: 2633
  AVX+FMA  BM_eigen_tanh_float         1007     1007      694681   model_time: 1609
  AVX512   BM_eigen_tanh_float          471      471     1472178   model_time: 805
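  An illustrative scalar sketch of the tail handling described above (not the Eigen implementation, which uses a vectorized rational approximation on the main path):

      #include <cmath>

      // For large negative x, 1/(1+exp(-x)) is well approximated by exp(x),
      // which preserves relative accuracy as the function value tends to zero.
      float logistic_sketch(float x) {
        if (x < -9.0f) return std::exp(x);      // far-left tail
        return 1.0f / (1.0f + std::exp(-x));    // stand-in for the rational approximation
      }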
* Added Eigen::numext typedefs for uint8_t, int8_t, uint16_t and int16_t (Joel Holdsworth, 2019-12-11)
* Merged in anshuljl/eigen-2/Anshul-Jaiswal/update-configurevectorizationh-to-not-op-1573079916090 (pull request PR-754) (Rasmus Larsen, 2019-12-04)
  Update ConfigureVectorization.h to not optimize fp16 routines when compiling with cuda.
  Approved-by: Deven Desai <deven.desai.amd@gmail.com>
* Merged in ezhulenev/eigen-02 (pull request PR-767) (Rasmus Larsen, 2019-12-02)
  Fix shadow warnings in AlignedBox and SparseBlock
* [SYCL] Rebasing the SYCL support branch on top of the Eigen upstream master branch. (Mehdi Goli, 2019-11-28)
  - Unifying all loadLocalTile from lhs and rhs to an extract_block function.
  - Adding get_tensor operation which was missing in TensorContractionMapper.
  - Adding the -D method missing from cmake for Disable_Skinny Contraction operation.
  - Wrapping all the indices in TensorScanSycl into Scan parameter struct.
  - Fixing typo in Device SYCL.
  - Unifying load to private register for tall/skinny no shared.
  - Unifying load to vector tile for tensor-vector/vector-tensor operation.
  - Removing all the LHS/RHS classes for extracting data from global.
  - Removing Outputfunction from TensorContractionSkinnyNoshared.
  - Combining the local memory version of tall/skinny and normal tensor contraction into one kernel.
  - Combining the no-local memory version of tall/skinny and normal tensor contraction into one kernel.
  - Combining General Tensor-Vector and VectorTensor contraction into one kernel.
  - Making double buffering optional for Tensor contraction when the local memory version is used.
  - Modifying benchmark to accept custom Reduction Sizes.
  - Disabling AVX optimization for the SYCL backend on the host to allow SSE optimization on the host.
  - Adding Test for SYCL.
  - Modifying SYCL CMake.
* Fix shadow warnings in AlignedBox and SparseBlock (Eugene Zhulenev, 2019-11-27)
* SparseRef: Fixed alignment warning on ARM GCC (Joel Holdsworth, 2019-11-07)
* Update ConfigureVectorization.h to not optimize fp16 routines when compiling with cuda. (Anshul Jaiswal, 2019-11-06)
* Disable AVX on broken xcode versions. See PR 748. (Gael Guennebaud, 2019-11-12)
  Patch adapted from Hans Johnson's PR 748.
* Add EIGEN_HAS_INTRINSIC_INT128 macro (Rasmus Munk Larsen, 2019-11-06)
  Add a new EIGEN_HAS_INTRINSIC_INT128 macro, and use this instead of __SIZEOF_INT128__. This fixes related issues with TensorIntDiv.h when building with Clang for Windows, where support for 128-bit integer arithmetic is advertised but broken in practice.
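  A hedged sketch of the idea (the exact conditions Eigen tests may differ): advertise 128-bit support through one macro so the broken clang-for-Windows combination can be excluded in a single place:

      // Sketch only: gate __int128 usage on a single feature macro.
      #if defined(__SIZEOF_INT128__) && !(defined(__clang__) && defined(_WIN32))
        #define EIGEN_HAS_INTRINSIC_INT128 1
      #endif

      #ifdef EIGEN_HAS_INTRINSIC_INT128
        typedef __int128 int128_sketch_t;  // 128-bit arithmetic is usable here
      #endif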
* Rollback of PR-746 and partial rollback of https://bitbucket.org/eigen/eigen/commits/668ab3fc474e54c7919eda4fbaf11f3a99246494 (Rasmus Munk Larsen, 2019-11-05)
  std::array is still not supported in CUDA device code on Windows.
* Remove internal::smart_copy and replace with std::copy (Eugene Zhulenev, 2019-10-29)
* bug #1752: make is_convertible equivalent to the C++11 std::is_convertible, and fall back to std::is_convertible when C++11 is enabled. (Gael Guennebaud, 2019-10-10)
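  A hedged sketch of the dispatch described above; the trait name is a placeholder and the pre-C++11 branch is elided:

      // When C++11 is available, defer to the standard trait; otherwise a
      // hand-rolled implementation (not shown) is used instead.
      #if __cplusplus >= 201103L
      #include <type_traits>
      template <class From, class To>
      struct is_convertible_sketch : std::is_convertible<From, To> {};
      #else
      // ... pre-C++11 fallback based on overload resolution ...
      #endif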
* Tensor block evaluation V2 support for unary/binary/broadcasting (Eugene Zhulenev, 2019-09-24)
* Add Bessel functions to SpecialFunctions. (Srinivas Vasudevan, 2019-09-14)
  - Split SpecialFunctions files into a separate BesselFunctions file.
  In particular add:
  - Modified Bessel functions of the second kind k0, k1, k0e, k1e
  - Bessel functions of the first kind j0, j1
  - Bessel functions of the second kind y0, y1
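  A hedged usage sketch, assuming the scalar entry points follow the numext::bessel_* naming that also appears in the packetmath test quoted earlier; the header path and exact signatures are assumptions:

      #include <unsupported/Eigen/SpecialFunctions>
      #include <iostream>

      int main() {
        double x = 1.5;
        std::cout << Eigen::numext::bessel_j0(x) << "\n";   // first kind, order 0
        std::cout << Eigen::numext::bessel_y0(x) << "\n";   // second kind, order 0
        std::cout << Eigen::numext::bessel_k0e(x) << "\n";  // exponentially scaled modified, second kind
        return 0;
      }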
* Add packetized versions of i0e and i1e special functions. (Srinivas Vasudevan, 2019-09-11)
  - In particular refactor the i0e and i1e code so scalar and vectorized path share code.
  - Move chebevl to GenericPacketMathFunctions.

  A brief benchmark with building Eigen with FMA, AVX and AVX2 flags:

  Before:
  CPU: Intel Haswell with HyperThreading (6 cores)
  Benchmark                  Time(ns)    CPU(ns)  Iterations
  -----------------------------------------------------------------
  BM_eigen_i0e_double/1          57.3       57.3    10000000
  BM_eigen_i0e_double/8           398        398     1748554
  BM_eigen_i0e_double/64         3184       3184      218961
  BM_eigen_i0e_double/512       25579      25579       27330
  BM_eigen_i0e_double/4k       205043     205042        3418
  BM_eigen_i0e_double/32k     1646038    1646176         422
  BM_eigen_i0e_double/256k   13180959   13182613          53
  BM_eigen_i0e_double/1M     52684617   52706132          10
  BM_eigen_i0e_float/1           28.4       28.4    24636711
  BM_eigen_i0e_float/8           75.7       75.7     9207634
  BM_eigen_i0e_float/64           512        512     1000000
  BM_eigen_i0e_float/512         4194       4194      166359
  BM_eigen_i0e_float/4k         32756      32761       21373
  BM_eigen_i0e_float/32k       261133     261153        2678
  BM_eigen_i0e_float/256k     2087938    2088231         333
  BM_eigen_i0e_float/1M       8380409    8381234          84
  BM_eigen_i1e_double/1          56.3       56.3    10000000
  BM_eigen_i1e_double/8           397        397     1772376
  BM_eigen_i1e_double/64         3114       3115      223881
  BM_eigen_i1e_double/512       25358      25361       27761
  BM_eigen_i1e_double/4k       203543     203593        3462
  BM_eigen_i1e_double/32k     1613649    1613803         428
  BM_eigen_i1e_double/256k   12910625   12910374          54
  BM_eigen_i1e_double/1M     51723824   51723991          10
  BM_eigen_i1e_float/1           28.3       28.3    24683049
  BM_eigen_i1e_float/8           74.8       74.9     9366216
  BM_eigen_i1e_float/64           505        505     1000000
  BM_eigen_i1e_float/512         4068       4068      171690
  BM_eigen_i1e_float/4k         31803      31806       21948
  BM_eigen_i1e_float/32k       253637     253692        2763
  BM_eigen_i1e_float/256k     2019711    2019918         346
  BM_eigen_i1e_float/1M       8238681    8238713          86

  After:
  CPU: Intel Haswell with HyperThreading (6 cores)
  Benchmark                  Time(ns)    CPU(ns)  Iterations
  -----------------------------------------------------------------
  BM_eigen_i0e_double/1          15.8       15.8    44097476
  BM_eigen_i0e_double/8          99.3       99.3     7014884
  BM_eigen_i0e_double/64          777        777      886612
  BM_eigen_i0e_double/512        6180       6181      100000
  BM_eigen_i0e_double/4k        48136      48140       14678
  BM_eigen_i0e_double/32k      385936     385943        1801
  BM_eigen_i0e_double/256k    3293324    3293551         228
  BM_eigen_i0e_double/1M     12423600   12424458          57
  BM_eigen_i0e_float/1           16.3       16.3    43038042
  BM_eigen_i0e_float/8           30.1       30.1    23456931
  BM_eigen_i0e_float/64           169        169     4132875
  BM_eigen_i0e_float/512         1338       1339      516860
  BM_eigen_i0e_float/4k         10191      10191       68513
  BM_eigen_i0e_float/32k        81338      81337        8531
  BM_eigen_i0e_float/256k      651807     651984        1000
  BM_eigen_i0e_float/1M       2633821    2634187         268
  BM_eigen_i1e_double/1          16.2       16.2    42352499
  BM_eigen_i1e_double/8           110        110     6316524
  BM_eigen_i1e_double/64          822        822      851065
  BM_eigen_i1e_double/512        6480       6481      100000
  BM_eigen_i1e_double/4k        51843      51843       10000
  BM_eigen_i1e_double/32k      414854     414852        1680
  BM_eigen_i1e_double/256k    3320001    3320568         212
  BM_eigen_i1e_double/1M     13442795   13442391          53
  BM_eigen_i1e_float/1           17.6       17.6    41025735
  BM_eigen_i1e_float/8           35.5       35.5    19597891
  BM_eigen_i1e_float/64           240        240     2924237
  BM_eigen_i1e_float/512         1424       1424      485953
  BM_eigen_i1e_float/4k         10722      10723       65162
  BM_eigen_i1e_float/32k        86286      86297        8048
  BM_eigen_i1e_float/256k      691821     691868        1000
  BM_eigen_i1e_float/1M       2777336    2777747         256

  This shows anywhere from a 50% to 75% improvement on these operations. I've also benchmarked without any of these flags turned on, and got similar performance to before (if not better).
  Also tested packetmath.cpp + special_functions to ensure no regressions.
* bug #1736: fix compilation issue with A(all,{1,2}).col(j) by implementing true compile-time "if" for block_evaluator<>::coeff(i)/coeffRef(i) (Gael Guennebaud, 2019-09-11)
* bug #1741: fix C.noalias() = A*C; with C.innerStride()!=1 (Gael Guennebaud, 2019-09-10)
* PR 621: Fix documentation of EIGEN_COMP_EMSCRIPTEN (David Tellenbach, 2019-03-21)
* PR 681: Add ndtri function, the inverse of the normal distribution function. (Srinivas Vasudevan, 2019-08-12)
* bug #1718: Add cast to successfully compile with clang on PowerPC (João P. L. de Carvalho, 2019-08-09)
  Ignoring -Wc11-extensions warnings thrown by clang at Altivec/PacketMath.h
* PR 655: Fix missing Eigen namespace in Macros (Justin Carpentier, 2019-06-05)
* [SYCL] This PR adds the minimum modifications to Eigen core required to run Eigen unsupported modules on devices supporting SYCL. (Mehdi Goli, 2019-06-27)
  - Adding SYCL memory model
  - Enabling/Disabling SYCL backend in Core
  - Supporting Vectorization
* bug #1724: Mask buggy warnings with g++-7 (Christoph Hertzberg, 2019-06-14)
  (grafted from 427f2f66d69ae9b124c2f8bcd927fb6e19e07e91)
* Make is_valid_index_type return false for float and double when EIGEN_HAS_TYPE_TRAITS is off. (Rasmus Munk Larsen, 2019-06-05)
* Add workaround for choosing the right include files with FP16C support with clang. (Rasmus Munk Larsen, 2019-06-05)
* Clean up CUDA/NVCC version macros and their use in Eigen, and fix a few other CUDA build failures. (Rasmus Munk Larsen, 2019-05-31)
* Enable support for F16C with Clang. The required intrinsics were added here: https://reviews.llvm.org/D16177 and are part of LLVM 3.8.0. (Rasmus Munk Larsen, 2019-05-20)
* Merged in rmlarsen/eigen (pull request PR-643) (Rasmus Larsen, 2019-05-20)
  Make Eigen build with cuda 10 and clang.
  Approved-by: Justin Lebar <justin.lebar@gmail.com>
* Make Eigen build with cuda 10 and clang. (Rasmus Munk Larsen, 2019-05-15)
* Eigen: Fix MSVC C++17 language standard detection logic (Scott Ramsby, 2019-05-03)
  To detect C++17 support, use the _MSVC_LANG macro instead of _MSC_VER. _MSC_VER can indicate whether the current compiler version could support the C++17 language standard, but not whether that standard is actually selected (i.e. via /std:c++17). See these web pages for more details:
  https://devblogs.microsoft.com/cppblog/msvc-now-correctly-reports-__cplusplus/
  https://docs.microsoft.com/en-us/cpp/preprocessor/predefined-macros
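  A hedged sketch of the detection logic described above; the macro name CXX17_SELECTED_SKETCH is made up for illustration:

      // Prefer _MSVC_LANG (reflects the /std switch actually selected) over
      // _MSC_VER (which only identifies the compiler version).
      #if defined(_MSVC_LANG)
        #define CXX17_SELECTED_SKETCH (_MSVC_LANG >= 201703L)
      #else
        #define CXX17_SELECTED_SKETCH (__cplusplus >= 201703L)
      #endif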
* Add masked_store_available to unpacket_traits (Eugene Zhulenev, 2019-05-02)