Commit message | Author | Date
* Removing unused API to fix compile error in TensorFlow due to AVX512VL, AVX512BW usage (Anuj Rawat, 2019-05-12)
* bug #1707: Fix deprecation warnings, or disable warnings when testing deprecated functions (Christoph Hertzberg, 2019-05-10)
* Fix build with clang on Windows. (Rasmus Munk Larsen, 2019-05-09)
* Fix AVX512 & GCC 6.3 compilation (Eugene Zhulenev, 2019-05-07)
* Fix stupid shadow-warnings (with old clang versions) (Christoph Hertzberg, 2019-05-07)
* Restore C++03 compatibility (Christoph Hertzberg, 2019-05-07)
* Restore C++03 compatibility (Christoph Hertzberg, 2019-05-06)
* Fix traits for scalar_logistic_op. (Rasmus Munk Larsen, 2019-05-03)
* Add masked_store_available to unpacket_traits (Eugene Zhulenev, 2019-05-02)
* Add masked pstoreu for Packet16h (Eugene Zhulenev, 2019-05-02)
* Add masked pstoreu to AVX and AVX512 PacketMath (Eugene Zhulenev, 2019-05-02)
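A minimal sketch of the masked-store idea behind the three commits above, assuming AVX-512F; the helper name is made up for illustration and is not Eigen's pstoreu API.

    #include <immintrin.h>

    // Store only the first n lanes (0 <= n <= 16) of a 16-float packet,
    // leaving the remaining memory untouched, so a partial packet at the
    // end of a buffer can be written back without overrunning it.
    static inline void mask_storeu_16f(float* to, __m512 from, int n) {
      const __mmask16 mask = static_cast<__mmask16>((1u << n) - 1u); // low n lanes
      _mm512_mask_storeu_ps(to, mask, from);
    }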
* Fix regression in changeset ae33e866c750c6c24ada5c6f7f3ec15815d0e683 (Gael Guennebaud, 2019-05-02)
* Merged in ezhulenev/eigen-01 (pull request PR-633) (Rasmus Larsen, 2019-04-29)
  Check if gpu_assert was overridden in TensorGpuHipCudaDefines
* Fix compilation with PGI version 19 (Andy May, 2019-04-25)
* Merged in ezhulenev/eigen-01 (pull request PR-632) (Gael Guennebaud, 2019-04-25)
  Fix doxygen warnings
* Check if gpu_assert was overridden in TensorGpuHipCudaDefines (Eugene Zhulenev, 2019-04-25)
* Fix doxygen warnings to enable static code analysis (Eugene Zhulenev, 2019-04-24)
* Get rid of SequentialLinSpacedReturnType deprecation warnings in DenseBase.h (Eugene Zhulenev, 2019-04-24)
* Remove deprecation annotation from typedef Eigen::Index Index, as it would generate too many build warnings. (Rasmus Munk Larsen, 2019-04-24)
* Add missing EIGEN_DEPRECATED annotations to deprecated functions and fix a few other doxygen warnings (Eugene Zhulenev, 2019-04-23)
* Use packet ops instead of AVX2 intrinsics (Eugene Zhulenev, 2019-04-23)
* Adding low-level APIs for optimized RHS packet load in TensorFlow SpatialConvolution (Anuj Rawat, 2019-04-20)
  Low-level APIs are added in order to optimize packet load in gemm_pack_rhs
  in TensorFlow SpatialConvolution. The optimization is for the scenario where
  a packet is split across 2 adjacent columns. In this case we read it as two
  'partial' packets and then merge these into 1. Currently this only works for
  Packet16f (AVX512) and Packet8f (AVX2). We plan to add this for other packet
  types (such as Packet8d) also.

  This optimization shows significant speedup in SpatialConvolution with
  certain parameters. Some examples are below. Benchmark parameters are
  specified as: Batch size, Input dim, Depth, Num of filters, Filter dim.
  Speedup numbers are specified for number of threads 1, 2, 4, 8, 16.

  AVX512:

  Parameters                  | Speedup (Num of threads: 1, 2, 4, 8, 16)
  ----------------------------|------------------------------------------
  128, 24x24, 3, 64, 5x5      | 2.18X, 2.13X, 1.73X, 1.64X, 1.66X
  128, 24x24, 1, 64, 8x8      | 2.00X, 1.98X, 1.93X, 1.91X, 1.91X
  32, 24x24, 3, 64, 5x5       | 2.26X, 2.14X, 2.17X, 2.22X, 2.33X
  128, 24x24, 3, 64, 3x3      | 1.51X, 1.45X, 1.45X, 1.67X, 1.57X
  32, 14x14, 24, 64, 5x5      | 1.21X, 1.19X, 1.16X, 1.70X, 1.17X
  128, 128x128, 3, 96, 11x11  | 2.17X, 2.18X, 2.19X, 2.20X, 2.18X

  AVX2:

  Parameters                  | Speedup (Num of threads: 1, 2, 4, 8, 16)
  ----------------------------|------------------------------------------
  128, 24x24, 3, 64, 5x5      | 1.66X, 1.65X, 1.61X, 1.56X, 1.49X
  32, 24x24, 3, 64, 5x5       | 1.71X, 1.63X, 1.77X, 1.58X, 1.68X
  128, 24x24, 1, 64, 5x5      | 1.44X, 1.40X, 1.38X, 1.37X, 1.33X
  128, 24x24, 3, 64, 3x3      | 1.68X, 1.63X, 1.58X, 1.56X, 1.62X
  128, 128x128, 3, 96, 11x11  | 1.36X, 1.36X, 1.37X, 1.37X, 1.37X

  In the higher-level benchmark cifar10, we observe a runtime improvement of
  around 6% for AVX512 on an Intel Skylake server (8 cores).

  On lower-level PackRhs micro-benchmarks specified in TensorFlow
  tensorflow/core/kernels/eigen_spatial_convolutions_test.cc, we observe the
  following runtime numbers:

  AVX512:

  Parameters                                                      | Runtime without patch (ns) | Runtime with patch (ns) | Speedup
  ----------------------------------------------------------------|----------------------------|-------------------------|--------
  BM_RHS_NAME(PackRhs, 128, 24, 24, 3, 64, 5, 5, 1, 1, 256, 56)   | 41350                      | 15073                   | 2.74X
  BM_RHS_NAME(PackRhs, 32, 64, 64, 32, 64, 5, 5, 1, 1, 256, 56)   | 7277                       | 7341                    | 0.99X
  BM_RHS_NAME(PackRhs, 32, 64, 64, 32, 64, 5, 5, 2, 2, 256, 56)   | 8675                       | 8681                    | 1.00X
  BM_RHS_NAME(PackRhs, 32, 64, 64, 30, 64, 5, 5, 1, 1, 256, 56)   | 24155                      | 16079                   | 1.50X
  BM_RHS_NAME(PackRhs, 32, 64, 64, 30, 64, 5, 5, 2, 2, 256, 56)   | 25052                      | 17152                   | 1.46X
  BM_RHS_NAME(PackRhs, 32, 256, 256, 4, 16, 8, 8, 1, 1, 256, 56)  | 18269                      | 18345                   | 1.00X
  BM_RHS_NAME(PackRhs, 32, 256, 256, 4, 16, 8, 8, 2, 4, 256, 56)  | 19468                      | 19872                   | 0.98X
  BM_RHS_NAME(PackRhs, 32, 64, 64, 4, 16, 3, 3, 1, 1, 36, 432)    | 156060                     | 42432                   | 3.68X
  BM_RHS_NAME(PackRhs, 32, 64, 64, 4, 16, 3, 3, 2, 2, 36, 432)    | 132701                     | 36944                   | 3.59X

  AVX2:

  Parameters                                                      | Runtime without patch (ns) | Runtime with patch (ns) | Speedup
  ----------------------------------------------------------------|----------------------------|-------------------------|--------
  BM_RHS_NAME(PackRhs, 128, 24, 24, 3, 64, 5, 5, 1, 1, 256, 56)   | 26233                      | 12393                   | 2.12X
  BM_RHS_NAME(PackRhs, 32, 64, 64, 32, 64, 5, 5, 1, 1, 256, 56)   | 6091                       | 6062                    | 1.00X
  BM_RHS_NAME(PackRhs, 32, 64, 64, 32, 64, 5, 5, 2, 2, 256, 56)   | 7427                       | 7408                    | 1.00X
  BM_RHS_NAME(PackRhs, 32, 64, 64, 30, 64, 5, 5, 1, 1, 256, 56)   | 23453                      | 20826                   | 1.13X
  BM_RHS_NAME(PackRhs, 32, 64, 64, 30, 64, 5, 5, 2, 2, 256, 56)   | 23167                      | 22091                   | 1.09X
  BM_RHS_NAME(PackRhs, 32, 256, 256, 4, 16, 8, 8, 1, 1, 256, 56)  | 23422                      | 23682                   | 0.99X
  BM_RHS_NAME(PackRhs, 32, 256, 256, 4, 16, 8, 8, 2, 4, 256, 56)  | 23165                      | 23663                   | 0.98X
  BM_RHS_NAME(PackRhs, 32, 64, 64, 4, 16, 3, 3, 1, 1, 36, 432)    | 72689                      | 44969                   | 1.62X
  BM_RHS_NAME(PackRhs, 32, 64, 64, 4, 16, 3, 3, 2, 2, 36, 432)    | 61732                      | 39779                   | 1.55X

  All benchmarks on an Intel Skylake server with 8 cores.
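A minimal sketch of the two-partial-packet merge described above, using AVX2 intrinsics; the function name and the mask construction are illustrative only and are not the low-level API added by this commit.

    #include <immintrin.h>

    // Build one 8-float packet whose first n lanes (0 <= n <= 8) come from
    // the last n floats of column j and whose remaining lanes come from the
    // start of column j+1. Masked loads touch only the selected lanes, so
    // neither column is read past its end.
    static inline __m256 merge_split_packet8f(const float* col_j_tail,
                                              const float* col_j1_head,
                                              int n) {
      int lo_mask[8];
      int hi_mask[8];
      for (int i = 0; i < 8; ++i) {
        lo_mask[i] = (i < n) ? -1 : 0;   // lanes taken from column j
        hi_mask[i] = (i < n) ? 0 : -1;   // lanes taken from column j+1
      }
      const __m256i lo = _mm256_loadu_si256(reinterpret_cast<const __m256i*>(lo_mask));
      const __m256i hi = _mm256_loadu_si256(reinterpret_cast<const __m256i*>(hi_mask));
      const __m256 a = _mm256_maskload_ps(col_j_tail, lo);      // lanes [0, n)
      const __m256 b = _mm256_maskload_ps(col_j1_head - n, hi); // lanes [n, 8)
      return _mm256_or_ps(a, b);  // disjoint lane sets, so OR merges them
    }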
* Split the implementation of i?amax/min into two. Based on PR-627 by Sameer Agarwal. (Christoph Hertzberg, 2019-04-15)
  Like the Netlib reference implementation, I*AMAX now uses the L1-norm instead
  of the L2-norm for each element. Changed I*MIN accordingly.
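A minimal sketch of the convention referred to above, not Eigen's BLAS kernel itself: the "largest" complex element is chosen by |Re| + |Im| (element-wise L1 norm), as in the Netlib reference i*amax, rather than by the element's modulus.

    #include <cmath>
    #include <complex>
    #include <cstddef>
    #include <vector>

    std::size_t iamax_l1(const std::vector<std::complex<float> >& x) {
      std::size_t best = 0;
      float best_val = -1.0f;
      for (std::size_t i = 0; i < x.size(); ++i) {
        // Element-wise L1 norm, not sqrt(re^2 + im^2).
        const float v = std::abs(x[i].real()) + std::abs(x[i].imag());
        if (v > best_val) { best_val = v; best = i; }
      }
      return best;
    }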
* Tweak cost model for tensor contraction when parallelizing over the inner dimension. (Rasmus Munk Larsen, 2019-04-12)
  https://bitbucket.org/snippets/rmlarsen/MexxLo
* Update ThreadPoolDevice example to include ThreadPool creation and passing a pointer into the constructor. (Jonathon Koyle, 2019-04-10)
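A minimal sketch of the documented pattern, assuming the unsupported Tensor module is on the include path: the ThreadPool is created by the caller and a pointer to it is handed to the ThreadPoolDevice constructor.

    #define EIGEN_USE_THREADS
    #include <unsupported/Eigen/CXX11/Tensor>

    int main() {
      const int num_threads = 4;
      Eigen::ThreadPool pool(num_threads);                // owns the worker threads
      Eigen::ThreadPoolDevice device(&pool, num_threads); // borrows the pool

      Eigen::Tensor<float, 2> a(64, 64), b(64, 64), c(64, 64);
      a.setRandom();
      b.setRandom();
      c.device(device) = a + b;  // evaluate the expression on the thread pool
      return 0;
    }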
* Adding EIGEN_DEVICE_FUNC to the recently added TensorContractionKernel constructor. (Deven Desai, 2019-04-08)
  Not having the EIGEN_DEVICE_FUNC attribute on it was leading to compiler
  errors when compiling Eigen in the ROCm/HIP path.
* Add missing semicolon (Eugene Zhulenev, 2019-04-02)
* Add support for custom packed Lhs/Rhs blocks in tensor contractions (Eugene Zhulenev, 2019-04-01)
* bug #1695: fix a numerical robustness issue (Gael Guennebaud, 2019-03-27)
  Computing the secular equation at the middle of the range without a shift
  might give a wrong sign.
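For context, a hedged note (standard divide-and-conquer SVD background, not text from the commit): the singular values are computed as roots of a secular equation of the form

    f(\sigma) = 1 + \sum_{k} \frac{z_k^2}{d_k^2 - \sigma^2}

on an interval between two consecutive poles d_k; evaluating f near the middle of such an interval without shifting by the nearer pole can lose the sign of f to cancellation, which is the kind of robustness issue this fix targets.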
* Collapsed revision from PR-619 (William D. Irons, 2019-03-26)
  * Add support for pcmp_eq in AltiVec/Complex.h
  * Fixed implementation of pcmp_eq for double. The new logic is based on the
    logic from NEON for double.
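A scalar model of the pcmp_eq contract being implemented, not the AltiVec code itself: lanes that compare equal yield an all-ones bit pattern, all other lanes (including NaN comparisons) yield all zeros, so the result can be used as a select mask.

    #include <cstdint>
    #include <cstring>

    static inline double pcmp_eq_lane(double a, double b) {
      const std::uint64_t bits = (a == b) ? ~std::uint64_t(0) : std::uint64_t(0);
      double r;
      std::memcpy(&r, &bits, sizeof r);  // reinterpret the mask bits as a double lane
      return r;
    }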
* ICC does not support -fno-unsafe-math-optimizations (Gael Guennebaud, 2019-03-22)
* Updates requested in the PR feedback. Also dropping code within #ifdef EIGEN_HAS_OLD_HIP_FP16 (Deven Desai, 2019-03-19)
* Merged eigen/eigen into default (Deven Desai, 2019-03-19)
* Merged in rmlarsen/eigen (pull request PR-618) (Rasmus Larsen, 2019-03-18)
  Make clipping outside [-18:18] consistent for vectorized and non-vectorized
  paths of scalar_logistic_op<float>.
  Approved-by: Gael Guennebaud <g.gael@free.fr>
* Fix unit test in C++03: C++03 does not allow passing a local or anonymous enum as a template param (Gael Guennebaud, 2019-03-18)
* bug #1692: enable enum as sizes of Matrix and Array (Gael Guennebaud, 2019-03-17)
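A hedged usage sketch of the kind of code the commit title describes; the enum and its values are made up for illustration.

    #include <Eigen/Core>

    enum Sizes { Rows = 3, Cols = 4 };

    int main() {
      Eigen::Matrix<float, Rows, Cols> fixed;  // enum values as compile-time sizes
      Eigen::ArrayXXd dyn(Rows, Cols);         // enum values as run-time sizes
      fixed.setZero();
      dyn.setZero();
      return static_cast<int>(fixed.size() + dyn.size());
    }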
* Make clipping outside [-18:18] consistent for vectorized and non-vectorized paths of scalar_logistic_op<float>. (Rasmus Munk Larsen, 2019-03-15)
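A small numeric check of why a clamp near |x| = 18 is natural for the single-precision logistic (illustration only, not Eigen's kernel): at x = 18 the result already rounds to exactly 1.0f, and at x = -18 it is about 1.5e-8f, so clamping the input to [-18, 18] pins down the values both code paths should agree on.

    #include <cmath>
    #include <cstdio>

    int main() {
      const float hi = 1.0f / (1.0f + std::exp(-18.0f));  // rounds to exactly 1.0f
      const float lo = 1.0f / (1.0f + std::exp(+18.0f));  // about 1.5e-8f
      std::printf("logistic(+18) = %.9g\nlogistic(-18) = %.9g\n", hi, lo);
      return 0;
    }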
* Merged in tellenbach/eigen/sykline_consistent_include_guards (pull request PR-617) (Rasmus Larsen, 2019-03-15)
  Fix include guard comments for Skyline module
* Fix include guard comments (David Tellenbach, 2019-03-15)
* Clean up half packet traits and add a few more missing packet ops. (Rasmus Munk Larsen, 2019-03-14)
* Remove undefined std::complex<int> (David Tellenbach, 2019-03-14)
* PR 593: Add variadic ctor for DiagonalMatrix with unit tests (David Tellenbach, 2019-03-14)
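A hedged usage sketch of the new constructor: building a fixed-size DiagonalMatrix directly from its diagonal coefficients.

    #include <Eigen/Core>
    #include <iostream>

    int main() {
      Eigen::DiagonalMatrix<double, 5> d(1.0, 2.0, 3.0, 4.0, 5.0);
      std::cout << d.toDenseMatrix() << "\n";  // densify to print
      return 0;
    }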
* revert debug stuff (Gael Guennebaud, 2019-03-14)
* Remove EIGEN_MPL2_ONLY guard in IncompleteCholesky that is no longer needed after the AMD reordering code was relicensed to MPL2. (Rasmus Munk Larsen, 2019-03-13)
* bug #1684: partially work around clang 6/7 bug #40815 (Gael Guennebaud, 2019-03-13)
* Merged in rmlarsen/eigen (pull request PR-615) (Rasmus Larsen, 2019-03-12)
  Clean up PacketMathHalf.h and add a few missing logical packet ops.
* erm.. use proper id (Thomas Capricelli, 2019-03-12)
* update tracking code (Thomas Capricelli, 2019-03-12)
* Clean up PacketMathHalf.h and add a few missing logical packet ops. (Rasmus Munk Larsen, 2019-03-11)
* Fix segfaults with CUDA compilation (Eugene Zhulenev, 2019-03-11)