path: root/unsupported/Eigen/CXX11/src/Tensor/TensorContractionBlocking.h
Commit message | Author | Age
* Remove XSMM support from Tensor module (Eugene Zhulenev, 2019-08-19)
|
* Fix tensor contraction for AVX512 machines (Mark D Ryan, 2018-07-31)
|     This patch modifies the TensorContraction class to ensure that the kc_
|     field is always a multiple of packet_size whenever packet_size > 8.
|     Without this change, spatial convolutions in TensorFlow do not work
|     properly: the code that re-arranges the input matrices can assert if kc_
|     is not a multiple of packet_size. This leads to a unit-test failure,
|     //tensorflow/python/kernel_tests:conv_ops_test, on AVX512 builds of
|     TensorFlow.
* Remove explicit mkldnn support and redundant TensorContractionKernelBlocking (Eugene Zhulenev, 2018-09-27)
|
* Reduce the number of template specializations of classes related to tensor contraction to reduce binary size (Rasmus Munk Larsen, 2018-07-27)
* Updates corresponding to the latest round of PR feedback (Deven Desai, 2018-07-11)
|     The major changes are:
|     1. Moving CUDA/PacketMath.h to GPU/PacketMath.h
|     2. Moving CUDA/MathFunctions.h to GPU/MathFunctions.h
|     3. Moving CUDA/CudaSpecialFunctions.h to GPU/GpuSpecialFunctions.h
|     The above three changes effectively enable the Eigen "Packet" layer for
|     the HIP platform.
|     4. Merging the "hip_basic" and "cuda_basic" unit tests into one
|        ("gpu_basic")
|     5. Updating the EIGEN_DEVICE_FUNC markings in some places
|     The change has been tested on the HIP and CUDA platforms.
* Syncing this fork with upstream (Deven Desai, 2018-06-13)
|\
* | Adding support for using Eigen in HIP kernels (Deven Desai, 2018-06-06)
| |   This commit enables the use of Eigen in HIP kernels / on AMD GPUs.
| |   Support has been added along the same lines as what already exists for
| |   using Eigen in CUDA kernels / on NVidia GPUs.
| |   Application code needs to explicitly define EIGEN_USE_HIP when using
| |   Eigen in HIP kernels. This is because some of the CUDA headers are
| |   picked up by default during an Eigen compile, irrespective of whether
| |   or not the underlying compiler is CUDACC/NVCC (e.g.
| |   Eigen/src/Core/arch/CUDA/Half.h). In order to maintain this behavior,
| |   the EIGEN_USE_HIP macro is used to switch to the HIP version of those
| |   header files (see Eigen/Core and unsupported/Eigen/CXX11/Tensor).
| |   Use the "-DEIGEN_TEST_HIP" cmake option to enable the HIP-specific
| |   unit tests.
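Per this commit message, opting into the HIP code path is a one-line configuration change in application code. A minimal fragment (not a runnable program; it assumes an Eigen checkout with HIP support and a HIP-capable compiler):

```cpp
// Config fragment per the commit message above: define EIGEN_USE_HIP before
// including the Tensor header so Eigen switches from the CUDA headers to
// their HIP counterparts.
#define EIGEN_USE_HIP
#include <unsupported/Eigen/CXX11/Tensor>
```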
| * Fix typos found using codespell (Gael Guennebaud, 2018-06-07)
|/
* Leverage libxsmm kernels within single-threaded contractions (Benoit Steiner, 2016-12-21)
|
* Use computeProductBlockingSizes to compute blocking for both ShardByCol and ShardByRow cases (Rasmus Munk Larsen, 2016-04-27)
* Marked several methods EIGEN_DEVICE_FUNC (Benoit Steiner, 2016-01-28)
|
* Created a mechanism to enable contraction mappers to determine the best blocking strategy (Benoit Steiner, 2016-01-22)