author | Deven Desai <deven.desai.amd@gmail.com> | 2018-10-01 14:28:37 +0000
committer | Deven Desai <deven.desai.amd@gmail.com> | 2018-10-01 14:28:37 +0000
commit | 94898488a6fe3096a7a44d0bb108e514f0e44699 (patch)
tree | 7aced8073d5a62c5f0c6696f77adcd6994732c82 /unsupported/Eigen/CXX11/src/Tensor/TensorReductionGpu.h
parent | e95696acb313a84b33a18cc300de418b05dc58e5 (diff)
This commit contains the following (HIP specific) updates:
- unsupported/Eigen/CXX11/src/Tensor/TensorReductionGpu.h
  Changing a "pass-by-reference" argument to "pass-by-value"
  (in a __global__ function declaration).
  "pass-by-reference" arguments to __global__ functions are unwise,
  and newer versions of HIP will explicitly flag them as errors.
- Eigen/src/Core/util/Memory.h
- unsupported/Eigen/CXX11/src/Tensor/TensorContraction.h
  Changes introduced in recent commits break the HIP compile.
  Adding the EIGEN_DEVICE_FUNC attribute to some functions and
  calling ::malloc/::free instead of the corresponding std:: versions
  to get the HIP compile working again.
- unsupported/Eigen/CXX11/src/Tensor/TensorReduction.h
  A change introduced in a recent commit breaks the HIP compile
  (the link stage errors out due to a failure to inline a function).
  Disabling the recently introduced code (only for the HIP compile) to get
  the Eigen nightly testing going again.
  Will submit another PR once we have the proper fix.
- Eigen/src/Core/util/ConfigureVectorization.h
  Enabling GPU vectorization support when the HIP compiler is in use
  (for both the host and device compile phases).
Diffstat (limited to 'unsupported/Eigen/CXX11/src/Tensor/TensorReductionGpu.h')
-rw-r--r-- | unsupported/Eigen/CXX11/src/Tensor/TensorReductionGpu.h | 2 |
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/unsupported/Eigen/CXX11/src/Tensor/TensorReductionGpu.h b/unsupported/Eigen/CXX11/src/Tensor/TensorReductionGpu.h
index 88940e6e6..375c570b3 100644
--- a/unsupported/Eigen/CXX11/src/Tensor/TensorReductionGpu.h
+++ b/unsupported/Eigen/CXX11/src/Tensor/TensorReductionGpu.h
@@ -292,7 +292,7 @@ __global__ void FullReductionKernelHalfFloat(Reducer reducer, const Self input,
 }
 
 template <typename Op>
-__global__ void ReductionCleanupKernelHalfFloat(Op& reducer, half* output, half2* scratch) {
+__global__ void ReductionCleanupKernelHalfFloat(Op reducer, half* output, half2* scratch) {
   eigen_assert(threadIdx.x == 1);
   half tmp = __low2half(*scratch);
   reducer.reduce(__high2half(*scratch), &tmp);