path: root/Eigen/src/Core/arch/GPU
Commit message (Author, Date)
* Adding low-level APIs for optimized RHS packet load in TensorFlow (Anuj Rawat, 2019-04-20)
  SpatialConvolution: low-level APIs are added in order to optimize packet load in gemm_pack_rhs in TensorFlow SpatialConvolution. The optimization is for the scenario when a packet is split across 2 adjacent columns. In this case we read it as two 'partial' packets and then merge these into one (see the sketch after the benchmark tables below). Currently this only works for Packet16f (AVX512) and Packet8f (AVX2). We plan to add this for other packet types (such as Packet8d) as well.

  This optimization shows significant speedup in SpatialConvolution with certain parameters. Some examples are below. Benchmark parameters are specified as: batch size, input dim, depth, number of filters, filter dim. Speedup numbers are given for 1, 2, 4, 8, and 16 threads.

  AVX512:

  Parameters                 | Speedup (threads: 1, 2, 4, 8, 16)
  ---------------------------|-----------------------------------
  128, 24x24, 3, 64, 5x5     | 2.18X, 2.13X, 1.73X, 1.64X, 1.66X
  128, 24x24, 1, 64, 8x8     | 2.00X, 1.98X, 1.93X, 1.91X, 1.91X
  32, 24x24, 3, 64, 5x5      | 2.26X, 2.14X, 2.17X, 2.22X, 2.33X
  128, 24x24, 3, 64, 3x3     | 1.51X, 1.45X, 1.45X, 1.67X, 1.57X
  32, 14x14, 24, 64, 5x5     | 1.21X, 1.19X, 1.16X, 1.70X, 1.17X
  128, 128x128, 3, 96, 11x11 | 2.17X, 2.18X, 2.19X, 2.20X, 2.18X

  AVX2:

  Parameters                 | Speedup (threads: 1, 2, 4, 8, 16)
  ---------------------------|-----------------------------------
  128, 24x24, 3, 64, 5x5     | 1.66X, 1.65X, 1.61X, 1.56X, 1.49X
  32, 24x24, 3, 64, 5x5      | 1.71X, 1.63X, 1.77X, 1.58X, 1.68X
  128, 24x24, 1, 64, 5x5     | 1.44X, 1.40X, 1.38X, 1.37X, 1.33X
  128, 24x24, 3, 64, 3x3     | 1.68X, 1.63X, 1.58X, 1.56X, 1.62X
  128, 128x128, 3, 96, 11x11 | 1.36X, 1.36X, 1.37X, 1.37X, 1.37X

  In the higher-level benchmark cifar10, we observe a runtime improvement of around 6% for AVX512 on an Intel Skylake server (8 cores).

  On the lower-level PackRhs micro-benchmarks in TensorFlow's tensorflow/core/kernels/eigen_spatial_convolutions_test.cc, we observe the following runtimes:

  AVX512:

  Parameters                                                     | Without patch (ns) | With patch (ns) | Speedup
  ---------------------------------------------------------------|--------------------|-----------------|--------
  BM_RHS_NAME(PackRhs, 128, 24, 24, 3, 64, 5, 5, 1, 1, 256, 56)  | 41350              | 15073           | 2.74X
  BM_RHS_NAME(PackRhs, 32, 64, 64, 32, 64, 5, 5, 1, 1, 256, 56)  | 7277               | 7341            | 0.99X
  BM_RHS_NAME(PackRhs, 32, 64, 64, 32, 64, 5, 5, 2, 2, 256, 56)  | 8675               | 8681            | 1.00X
  BM_RHS_NAME(PackRhs, 32, 64, 64, 30, 64, 5, 5, 1, 1, 256, 56)  | 24155              | 16079           | 1.50X
  BM_RHS_NAME(PackRhs, 32, 64, 64, 30, 64, 5, 5, 2, 2, 256, 56)  | 25052              | 17152           | 1.46X
  BM_RHS_NAME(PackRhs, 32, 256, 256, 4, 16, 8, 8, 1, 1, 256, 56) | 18269              | 18345           | 1.00X
  BM_RHS_NAME(PackRhs, 32, 256, 256, 4, 16, 8, 8, 2, 4, 256, 56) | 19468              | 19872           | 0.98X
  BM_RHS_NAME(PackRhs, 32, 64, 64, 4, 16, 3, 3, 1, 1, 36, 432)   | 156060             | 42432           | 3.68X
  BM_RHS_NAME(PackRhs, 32, 64, 64, 4, 16, 3, 3, 2, 2, 36, 432)   | 132701             | 36944           | 3.59X

  AVX2:

  Parameters                                                     | Without patch (ns) | With patch (ns) | Speedup
  ---------------------------------------------------------------|--------------------|-----------------|--------
  BM_RHS_NAME(PackRhs, 128, 24, 24, 3, 64, 5, 5, 1, 1, 256, 56)  | 26233              | 12393           | 2.12X
  BM_RHS_NAME(PackRhs, 32, 64, 64, 32, 64, 5, 5, 1, 1, 256, 56)  | 6091               | 6062            | 1.00X
  BM_RHS_NAME(PackRhs, 32, 64, 64, 32, 64, 5, 5, 2, 2, 256, 56)  | 7427               | 7408            | 1.00X
  BM_RHS_NAME(PackRhs, 32, 64, 64, 30, 64, 5, 5, 1, 1, 256, 56)  | 23453              | 20826           | 1.13X
  BM_RHS_NAME(PackRhs, 32, 64, 64, 30, 64, 5, 5, 2, 2, 256, 56)  | 23167              | 22091           | 1.09X
  BM_RHS_NAME(PackRhs, 32, 256, 256, 4, 16, 8, 8, 1, 1, 256, 56) | 23422              | 23682           | 0.99X
  BM_RHS_NAME(PackRhs, 32, 256, 256, 4, 16, 8, 8, 2, 4, 256, 56) | 23165              | 23663           | 0.98X
  BM_RHS_NAME(PackRhs, 32, 64, 64, 4, 16, 3, 3, 1, 1, 36, 432)   | 72689              | 44969           | 1.62X
  BM_RHS_NAME(PackRhs, 32, 64, 64, 4, 16, 3, 3, 2, 2, 36, 432)   | 61732              | 39779           | 1.55X

  All benchmarks on an Intel Skylake server with 8 cores.
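  As an illustration of the "two partial packets merged into one" idea above, a minimal sketch using plain AVX2 intrinsics; the actual low-level APIs added by this commit are not reproduced here, and the function name, arguments, and layout assumptions are hypothetical:

  #include <immintrin.h>

  // Hypothetical helper: build one 8-float packet whose first `split` lanes
  // come from the tail of column j and whose remaining lanes come from the
  // head of column j+1 (the two columns are not contiguous in memory).
  static __m256 load_packet_across_columns(const float* col_j_tail,
                                           const float* col_j1_head,
                                           int split /* 1..7 */) {
    // Lane i < split selects column j; lane i >= split selects column j+1.
    alignas(32) int lo_mask[8], hi_mask[8];
    for (int i = 0; i < 8; ++i) {
      lo_mask[i] = (i < split) ? -1 : 0;  // sign bit set => load this lane
      hi_mask[i] = (i < split) ? 0 : -1;
    }
    // Two masked "partial" loads; masked-off lanes read as 0.0f and their
    // memory is never touched, so neither column is over-read.
    __m256 lo = _mm256_maskload_ps(col_j_tail,
                                   _mm256_load_si256((const __m256i*)lo_mask));
    __m256 hi = _mm256_maskload_ps(col_j1_head - split,
                                   _mm256_load_si256((const __m256i*)hi_mask));
    // Merge the two partial packets into one full packet.
    return _mm256_or_ps(lo, hi);
  }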
* Updates requested in the PR feedback. Also dropping code within #ifdef EIGEN_HAS_OLD_HIP_FP16 (Deven Desai, 2019-03-19)
* Merged eigen/eigen into default (Deven Desai, 2019-03-19)
* Clean up half packet traits and add a few more missing packet ops. (Rasmus Munk Larsen, 2019-03-14)
* Clean up PacketMathHalf.h and add a few missing logical packet ops. (Rasmus Munk Larsen, 2019-03-11)
* Add a few missing packet ops: cmp_eq for NEON, pfloor for GPU. (Rasmus Munk Larsen, 2019-02-21)
* Fix conflicts and merge (Gael Guennebaud, 2019-01-30)
* Add missing logical packet ops for GPU and NEON. (Rasmus Munk Larsen, 2019-01-17)
* Fix compilation error for logical packet ops with older compilers. (Rasmus Munk Larsen, 2019-01-16)
* Fix warnings in ptrue for complex and half types. (Rasmus Munk Larsen, 2019-01-11)
* Fix merge. (Rasmus Munk Larsen, 2019-01-11)
* Merge. (Rasmus Munk Larsen, 2019-01-11)
* Rename pones -> ptrue. Use _CMP_TRUE_UQ where appropriate. (Rasmus Munk Larsen, 2019-01-09)
* Collapsed revision (Rasmus Munk Larsen, 2019-01-09)
  - Add packet op "pones". Write pnot(a) as pxor(pones(a), a).
  - Collapsed revision
  - Simplify a bit.
  - Undo useless diffs.
  - Fix typo.
* Collapsed revision (Rasmus Munk Larsen, 2019-01-09)
  - Collapsed revision
  - Add packet op "pones". Write pnot(a) as pxor(pones(a), a).
  - Collapsed revision
  - Simplify a bit.
  - Undo useless diffs.
  - Fix typo.
* Add packet op "pones". Write pnot(a) as pxor(pones(a), a). (Rasmus Munk Larsen, 2019-01-09)
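  A minimal self-contained sketch of the identity named in this entry, written with raw SSE2 intrinsics in place of Eigen's packet layer (pones was later renamed ptrue, per the entry above):

  #include <emmintrin.h>

  typedef __m128i Packet4i;  // stand-in for an Eigen integer packet

  // pones: a packet with every bit set. x == x is true in every lane, and
  // a true comparison result is all-ones.
  static inline Packet4i pones(const Packet4i& a) {
    return _mm_cmpeq_epi32(a, a);
  }

  // pnot(a) == pxor(pones(a), a): XOR with all-ones flips every bit.
  static inline Packet4i pnot(const Packet4i& a) {
    return _mm_xor_si128(pones(a), a);
  }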
* Add support for pcmp_eq and pnot, including for complex types. (Rasmus Munk Larsen, 2019-01-07)
* Introducing "vectorized" byte on unpacket_traits structs (Gustavo Lima Chaves, 2018-12-19)
  This is a preparation for a change on gebp_traits, where a new template argument will be introduced to dictate the packet size, so it won't be bound to the current/max packet size only anymore.
  By having packet types defined early on gebp_traits, one now has to act on packet types, not scalars anymore, for the enum values defined on that class. One approach for reaching the vectorizable/size properties one needs there could be getting the packet's scalar again with unpacket_traits<>, then the size/Vectorizable enum entries from packet_traits<>. It turns out guards like "#ifndef EIGEN_VECTORIZE_AVX512" at AVX/PacketMath.h will hide smaller packet variations of packet_traits<> for some types (and it makes sense to keep that). In other words, one can't go back to the scalar and create a new PacketType, as this will always lead to the maximum packet type for the architecture.
  The least costly/invasive solution, thus, is to add the vectorizable info on every unpacket_traits struct as well.
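  A minimal sketch, assuming an SSE-style Packet4f, of what an unpacket_traits specialization carrying the new entry could look like; fields other than `vectorizable` mirror what the struct already has, and `alignment` is given in raw bytes here where Eigen uses an enum value:

  #include <xmmintrin.h>

  typedef __m128 Packet4f;  // stand-in for Eigen's SSE float packet

  template <typename Packet> struct unpacket_traits;  // primary template

  template <> struct unpacket_traits<Packet4f> {
    typedef float type;     // scalar type the packet holds
    typedef Packet4f half;  // half-size packet type
    enum {
      size = 4,             // scalars per packet
      alignment = 16,       // required load/store alignment, in bytes
      vectorizable = true   // the new per-packet "vectorizable" info
    };
  };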
* bug #1636: fix compilation with some ABI versions. (Gael Guennebaud, 2018-12-06)
* Several improvements regarding packet-bitwise operations (Gael Guennebaud, 2018-11-30)
  - add unit tests
  - optimize their AVX512f implementation
  - add missing implementations (half, Packet4f, ...)
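  For context, a sketch of what a packet-bitwise op for a float packet looks like in an SSE-style backend (illustrative only; Eigen's own definitions live in the per-arch PacketMath.h headers). The intrinsics operate directly on the bit pattern of the floats, so no integer reinterpretation is needed:

  #include <xmmintrin.h>

  typedef __m128 Packet4f;

  static inline Packet4f pand(const Packet4f& a, const Packet4f& b) {
    return _mm_and_ps(a, b);  // lane-wise bitwise AND of the raw float bits
  }
  static inline Packet4f pxor(const Packet4f& a, const Packet4f& b) {
    return _mm_xor_ps(a, b);  // lane-wise bitwise XOR
  }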
* ROCm/HIP specific fixes + updates (Deven Desai, 2018-11-19)
  1. Eigen/src/Core/arch/GPU/Half.h
     Updating the HIPCC implementation of half so that it can be declared as a __shared__ variable.
  2. Eigen/src/Core/util/Macros.h, Eigen/src/Core/util/Memory.h
     Introducing an EIGEN_USE_STD(func) macro that calls
     - std::func by default
     - ::func when Eigen is being compiled with HIPCC
     This change was requested in the previous HIP PR (https://bitbucket.org/eigen/eigen/pull-requests/518/pr-with-hip-specific-fixes-for-the-eigen/diff). (See the sketch after this list.)
  3. unsupported/Eigen/CXX11/src/Tensor/TensorDeviceThreadPool.h
     Removing the EIGEN_DEVICE_FUNC attribute from pure virtual methods, as it is not supported by HIPCC.
  4. unsupported/Eigen/CXX11/src/Tensor/TensorReduction.h
     Disabling the template specializations of InnerMostDimReducer, as they run into HIPCC link errors.
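  A minimal sketch of the EIGEN_USE_STD(func) dispatch described in item 2, assuming EIGEN_HIPCC marks a HIPCC compilation as in the commit message; Eigen's actual definition may differ in detail:

  #include <cmath>

  #if defined(EIGEN_HIPCC)
  #define EIGEN_USE_STD(func) ::func     // under HIPCC: global-namespace func
  #else
  #define EIGEN_USE_STD(func) std::func  // default: the std:: version
  #endif

  // Hypothetical call site:
  inline float root(float x) { return EIGEN_USE_STD(sqrt)(x); }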
* Collapsed revision (based on pull request PR-325) (Christian von Schultz, 2018-10-22)
  Support compiling without IO streams: add the preprocessor definition EIGEN_NO_IO which, if defined, disables all use of the IO streams part of the standard library.
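  A sketch of how a header can honor the EIGEN_NO_IO switch described above; the guard placement and the Matrixish type are illustrative, not Eigen's actual code:

  #ifndef EIGEN_NO_IO
  #include <ostream>
  #endif

  struct Matrixish { float v; };  // hypothetical stand-in type

  #ifndef EIGEN_NO_IO
  // Stream support is compiled in only when IO streams are allowed.
  inline std::ostream& operator<<(std::ostream& s, const Matrixish& m) {
    return s << m.v;
  }
  #endif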
* Workaround "increases required alignment" warning (Gael Guennebaud, 2018-09-20)
* Updates to fix HIP-clang specific compile errors. (Deven Desai, 2018-08-30)
  Compiling the Eigen unit tests with hip-clang (HIP with clang as the underlying compiler, instead of hcc or nvcc) results in compile errors. The changes in this commit fix those compile errors. The main change is to convert a few instances of "__device__" to "EIGEN_DEVICE_FUNC".
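  An illustration of the main change described above; the macro definition is a simplified stand-in for Eigen's real EIGEN_DEVICE_FUNC, and clamp01 is a hypothetical function:

  #if defined(__CUDACC__) || defined(__HIP__)
  #define EIGEN_DEVICE_FUNC __host__ __device__
  #else
  #define EIGEN_DEVICE_FUNC
  #endif

  // before (CUDA-style, device-only):  __device__ float clamp01(float x);
  // after (portable across nvcc, hip-clang, and plain host builds):
  EIGEN_DEVICE_FUNC inline float clamp01(float x) {
    return x < 0.f ? 0.f : (x > 1.f ? 1.f : x);
  }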
* Fix doxy and misc. typos (luz.paz, 2018-08-01)
  Found via `codespell -q 3 -I ../eigen-word-whitelist.txt`
  ---
  Eigen/src/Core/ProductEvaluators.h | 4 ++--
  Eigen/src/Core/arch/GPU/Half.h     | 2 +-
  Eigen/src/Core/util/Memory.h       | 2 +-
  Eigen/src/Geometry/Hyperplane.h    | 2 +-
  Eigen/src/Geometry/Transform.h     | 2 +-
  Eigen/src/Geometry/Translation.h   | 12 ++++++------
  doc/PreprocessorDirectives.dox     | 2 +-
  doc/TutorialGeometry.dox           | 2 +-
  test/boostmultiprec.cpp            | 2 +-
  test/triangular.cpp                | 2 +-
  10 files changed, 16 insertions(+), 16 deletions(-)
* bug #1526: CUDA compilation fails on CUDA 9.x SDK when arch is set to compute_60 and/or above (Jiandong Ruan, 2018-09-08)
* bug #1590: fix collision with some system headers defining the macro FP32 (Gael Guennebaud, 2018-08-28)
* Creating a separate SYCL-required PR for uncontroversial files. (Mehdi Goli, 2018-08-03)
* Updates corresponding to the latest round of PR feedback (Deven Desai, 2018-07-11)
  The major changes are:
  1. Moving CUDA/PacketMath.h to GPU/PacketMath.h
  2. Moving CUDA/MathFunctions.h to GPU/MathFunctions.h
  3. Moving CUDA/CudaSpecialFunctions.h to GPU/GpuSpecialFunctions.h
  The above three changes effectively enable the Eigen "Packet" layer for the HIP platform.
  4. Merging the "hip_basic" and "cuda_basic" unit tests into one ("gpu_basic")
  5. Updating the "EIGEN_DEVICE_FUNC" markings in some places
  The change has been tested on the HIP and CUDA platforms.
* Renaming CUDA* to GPU* for some header files (Deven Desai, 2018-07-11)
* Merging updates from upstream (Deven Desai, 2018-07-11)
* Updates based on PR feedback (Deven Desai, 2018-06-14)
  There are two major changes (and a few minor ones which are not listed here; see the PR discussion for details):
  1. Eigen::half implementations for HIP and CUDA have been merged. This means that
     - `CUDA/Half.h` and `HIP/hcc/Half.h` got merged into a new file `GPU/Half.h`
     - `CUDA/PacketMathHalf.h` and `HIP/hcc/PacketMathHalf.h` got merged into a new file `GPU/PacketMathHalf.h`
     - `CUDA/TypeCasting.h` and `HIP/hcc/TypeCasting.h` got merged into a new file `GPU/TypeCasting.h`
     After this change the `HIP/hcc` directory only contains one file, `math_constants.h`. That will go away too once that file becomes part of the HIP install.
  2. New macros EIGEN_GPUCC, EIGEN_GPU_COMPILE_PHASE and EIGEN_HAS_GPU_FP16 have been added, and the code has been updated to use them where appropriate (see the sketch after this list):
     - `EIGEN_GPUCC` is the same as `(EIGEN_CUDACC || EIGEN_HIPCC)`
     - `EIGEN_GPU_COMPILE_PHASE` is the same as `(EIGEN_CUDA_ARCH || EIGEN_HIP_DEVICE_COMPILE)`
     - `EIGEN_HAS_GPU_FP16` is the same as `(EIGEN_HAS_CUDA_FP16 || EIGEN_HAS_HIP_FP16)`
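  A minimal sketch, following the equivalences stated in this entry, of how the three umbrella macros could be derived; Eigen's actual definitions in Macros.h may differ in detail:

  #if defined(EIGEN_CUDACC) || defined(EIGEN_HIPCC)
  #define EIGEN_GPUCC  // being compiled by a GPU compiler (nvcc or hipcc)
  #endif

  #if defined(EIGEN_CUDA_ARCH) || defined(EIGEN_HIP_DEVICE_COMPILE)
  #define EIGEN_GPU_COMPILE_PHASE  // currently in the device compilation pass
  #endif

  #if defined(EIGEN_HAS_CUDA_FP16) || defined(EIGEN_HAS_HIP_FP16)
  #define EIGEN_HAS_GPU_FP16  // GPU half-precision (fp16) support available
  #endif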
* Moving Half headers from CUDA dir to GPU dir, removing the HIP versions (Deven Desai, 2018-06-13)