* clang-cuda.
* The major changes are:
  1. Moving CUDA/PacketMath.h to GPU/PacketMath.h
  2. Moving CUDA/MathFunctions.h to GPU/MathFunctions.h
  3. Moving CUDA/CudaSpecialFunctions.h to GPU/GpuSpecialFunctions.h
  The above three changes effectively enable the Eigen "Packet" layer for the HIP platform.
  4. Merging the "hip_basic" and "cuda_basic" unit tests into one ("gpu_basic")
  5. Updating the "EIGEN_DEVICE_FUNC" markings in some places (see the sketch below)
  The change has been tested on the HIP and CUDA platforms.
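As a rough illustration of what the EIGEN_DEVICE_FUNC marking buys (this snippet is not part of the change; the function name and body are made up), a function marked this way can be called from host code and from CUDA or HIP kernels alike:

```cpp
#include <Eigen/Core>

// EIGEN_DEVICE_FUNC expands to "__host__ __device__" when compiling with
// nvcc/hipcc, and to nothing for a plain host compiler, so the same template
// is usable on the CPU and inside GPU kernels.
template <typename Scalar>
EIGEN_DEVICE_FUNC Scalar squared_norm3(const Eigen::Matrix<Scalar, 3, 1>& v) {
  return v.squaredNorm();
}
```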
* unit tests
* unsupported/test directories
* unsupported/test directories
* There are two major changes (and a few minor ones which are not listed here; see the PR discussion for details):
  1. The Eigen::half implementations for HIP and CUDA have been merged. This means that:
     - `CUDA/Half.h` and `HIP/hcc/Half.h` were merged into a new file `GPU/Half.h`
     - `CUDA/PacketMathHalf.h` and `HIP/hcc/PacketMathHalf.h` were merged into a new file `GPU/PacketMathHalf.h`
     - `CUDA/TypeCasting.h` and `HIP/hcc/TypeCasting.h` were merged into a new file `GPU/TypeCasting.h`
     After this change the `HIP/hcc` directory only contains one file, `math_constants.h`. That will go away too once that file becomes part of the HIP install.
  2. New macros `EIGEN_GPUCC`, `EIGEN_GPU_COMPILE_PHASE`, and `EIGEN_HAS_GPU_FP16` have been added, and the code has been updated to use them where appropriate (a rough sketch follows this list):
     - `EIGEN_GPUCC` is the same as `(EIGEN_CUDACC || EIGEN_HIPCC)`
     - `EIGEN_GPU_DEVICE_COMPILE` is the same as `(EIGEN_CUDA_ARCH || EIGEN_HIP_DEVICE_COMPILE)`
     - `EIGEN_HAS_GPU_FP16` is the same as `(EIGEN_HAS_CUDA_FP16 || EIGEN_HAS_HIP_FP16)`
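A minimal sketch of the mapping just listed, assuming the macros are simple preprocessor aliases (the real definitions live in Eigen's configuration headers and carry additional guards; note that the message names both `EIGEN_GPU_COMPILE_PHASE` and `EIGEN_GPU_DEVICE_COMPILE` for the device-phase macro, and the sketch follows the bullets above):

```cpp
// Sketch only: unified GPU macros expressed in terms of the CUDA/HIP ones.
#if defined(EIGEN_CUDACC) || defined(EIGEN_HIPCC)
  #define EIGEN_GPUCC               // compiling with a GPU-capable compiler
#endif

#if defined(EIGEN_CUDA_ARCH) || defined(EIGEN_HIP_DEVICE_COMPILE)
  #define EIGEN_GPU_DEVICE_COMPILE  // currently in the device compilation phase
#endif

#if defined(EIGEN_HAS_CUDA_FP16) || defined(EIGEN_HAS_HIP_FP16)
  #define EIGEN_HAS_GPU_FP16        // half-precision support available on the GPU
#endif
```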
* variable.
  In addition to igamma(a, x), this code implements:
  - igamma_der_a(a, x) = d igamma(a, x) / da -- the derivative of igamma with respect to the parameter a
  - gamma_sample_der_alpha(alpha, sample) -- the reparameterization derivative of a Gamma(alpha, 1) random variable sample with respect to the alpha parameter
  The derivatives are computed by forward-mode differentiation of the igamma(a, x) code. Although gamma_sample_der_alpha can be implemented via igamma_der_a (the relation is sketched below), a separate function is more accurate and efficient due to analytical cancellation of some terms. All three functions are implemented by a method parameterized with "mode" that always computes the derivatives, but does not return them unless required by the mode. The compiler is expected to (and, based on benchmarks, does) skip the unnecessary computations depending on the mode.
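For reference, the relation alluded to above follows from implicit differentiation of the Gamma(alpha, 1) CDF, which is igamma(alpha, x); this is the standard identity, not quoted from the patch:

```latex
% x is a Gamma(alpha, 1) sample; the denominator is the Gamma(alpha, 1) density.
\[
  \mathrm{gamma\_sample\_der\_alpha}(\alpha, x)
    = \frac{\partial x}{\partial \alpha}
    = -\,\frac{\partial\, \mathrm{igamma}(\alpha, x) / \partial \alpha}
              {\partial\, \mathrm{igamma}(\alpha, x) / \partial x}
    = -\,\frac{\mathrm{igamma\_der\_a}(\alpha, x)}
              {x^{\alpha - 1} e^{-x} / \Gamma(\alpha)}
\]
```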
* This commit enables the use of Eigen in HIP kernels / on AMD GPUs. Support has been added along the same lines as what already exists for using Eigen in CUDA kernels / on NVIDIA GPUs.
  Application code needs to explicitly define EIGEN_USE_HIP when using Eigen in HIP kernels (a usage sketch follows this entry). This is because some of the CUDA headers get picked up by default when Eigen is compiled, irrespective of whether or not the underlying compiler is CUDACC/NVCC (e.g. Eigen/src/Core/arch/CUDA/Half.h). In order to maintain this behavior, the EIGEN_USE_HIP macro is used to switch to the HIP version of those header files (see Eigen/Core and unsupported/Eigen/CXX11/Tensor).
  Use the "-DEIGEN_TEST_HIP" cmake option to enable the HIP-specific unit tests.
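A minimal sketch of that usage pattern, assuming a translation unit compiled with hipcc (the kernel name and body are invented for illustration):

```cpp
// Define EIGEN_USE_HIP before any Eigen include so the HIP variants of the
// GPU headers are selected instead of the CUDA ones.
#define EIGEN_USE_HIP
#include <hip/hip_runtime.h>
#include <Eigen/Core>

// Fixed-size Eigen expressions are usable inside device code.
__global__ void rotate2d(float* xs, float* ys, int n, float angle) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i >= n) return;
  Eigen::Matrix2f R;
  R << cosf(angle), -sinf(angle),
       sinf(angle),  cosf(angle);
  Eigen::Vector2f p(xs[i], ys[i]);
  p = R * p;
  xs[i] = p.x();
  ys[i] = p.y();
}
```

The HIP-specific tests would then be enabled at configure time with something like `cmake -DEIGEN_TEST_HIP=ON ..` (the flag name comes from the message above; the exact invocation is an assumption).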
* 1. Added new packet functions using SIMD for the NByOne and OneByN cases (illustrated below)
  2. Modified existing packet functions to reduce index calculations when the input stride is non-SIMD
  3. Added 4 test cases to cover the new packet functions
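The message does not say which operator these cases belong to; assuming "NByOne" and "OneByN" refer to broadcasting a single column or a single row of a rank-2 tensor, the shapes involved look like this (illustrative only, not taken from the patch):

```cpp
#include <unsupported/Eigen/CXX11/Tensor>

int main() {
  Eigen::Tensor<float, 2> row(1, 64);   // 1 x N
  Eigen::Tensor<float, 2> col(64, 1);   // N x 1
  row.setRandom();
  col.setRandom();

  // OneByN: replicate the single row along the first dimension.
  Eigen::array<Eigen::Index, 2> rep_rows{{64, 1}};
  Eigen::Tensor<float, 2> a = row.broadcast(rep_rows);   // 64 x 64

  // NByOne: replicate the single column along the second dimension.
  Eigen::array<Eigen::Index, 2> rep_cols{{1, 64}};
  Eigen::Tensor<float, 2> b = col.broadcast(rep_cols);   // 64 x 64
  return 0;
}
```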
* Exponentially scaled modified Bessel functions of order zero and one.
  Approved-by: Benoit Steiner <benoit.steiner.goog@gmail.com>
* The functions are conventionally called i0e and i1e. The exponentially scaled versions are more numerically stable. The standard Bessel functions can be obtained as i0(x) = exp(|x|) i0e(x) (the relations are spelled out below).
  The code is ported from Cephes and tested against SciPy.
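Spelled out, with the i1e counterpart following the same convention (standard definitions, not quoted from the commit):

```latex
\[
  \mathrm{i0e}(x) = e^{-|x|}\, I_0(x), \qquad
  \mathrm{i1e}(x) = e^{-|x|}\, I_1(x),
\]
\[
  I_0(x) = e^{|x|}\, \mathrm{i0e}(x), \qquad
  I_1(x) = e^{|x|}\, \mathrm{i1e}(x).
\]
```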
* Rename scalar_clip_op to scalar_clamp_op to prevent collision with existing functor in TensorFlow.
* functor in TensorFlow.
* Fix bugs and typos in the contraction example of the tensor README.
* Avoid 32-bit integer overflow in TensorSlicingOp.
* TensorFlow.
* issues for large FFTs. https://github.com/tensorflow/tensorflow/issues/10749#issuecomment-354557689
* https://bitbucket.org/eigen/eigen/pull-requests/351
* TensorFlow.
* Added an example for a contraction to a scalar value to README.md
  Approved-by: Jonas Harsch <jonas.harsch@gmail.com>
* As an example, this reduces the binary size of a TensorFlow demo app for Android by about 2.5%.
* If the cost is large enough, the thread count can be larger than the maximum representable int, so just casting it to an int is undefined behavior (a sketch of the safe pattern follows this entry).
  Contributed by phurst@google.com.
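A minimal sketch of the general fix pattern, not the actual Eigen code (the helper name is invented): clamp the floating-point estimate to the representable range before the narrowing cast, so the conversion is always defined.

```cpp
#include <algorithm>

// Clamp a (possibly huge) floating-point thread-count estimate into
// [1, max_threads] before converting to int; an out-of-range
// float-to-int cast would be undefined behavior.
inline int safe_thread_count(double desired, int max_threads) {
  const double clamped =
      std::max(1.0, std::min(desired, static_cast<double>(max_threads)));
  return static_cast<int>(clamped);
}
```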
* contraction of two second-order tensors and how you can get the value of the result. I lost a day getting this to work, so I think it will help others (a sketch of such an example follows this entry). I also added Eigen:: to the IndexPair and array in the same example.
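A sketch along the lines of the README example described above (sizes and values are arbitrary; the point is the Eigen::IndexPair / Eigen::array usage and reading the scalar out of the rank-0 result):

```cpp
#include <unsupported/Eigen/CXX11/Tensor>
#include <iostream>

int main() {
  Eigen::Tensor<float, 2> a(3, 3), b(3, 3);
  a.setRandom();
  b.setRandom();

  // Contract dimension 0 of a with dimension 0 of b, and dimension 1 of a
  // with dimension 1 of b: the result is a rank-0 tensor (a scalar).
  Eigen::array<Eigen::IndexPair<int>, 2> dims = {
      Eigen::IndexPair<int>(0, 0), Eigen::IndexPair<int>(1, 1)};

  Eigen::Tensor<float, 0> result = a.contract(b, dims);
  std::cout << "scalar result: " << result(0) << "\n";
  return 0;
}
```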
* aliases
* Improved support for OpenCL.
* request PR-315)
  Add labels to the #ifdefs in TensorReductionCuda.h (illustrated below).
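Presumably this means annotating the #else/#endif lines with their matching condition; a generic sketch of that pattern (not the actual diff, and the condition shown is only an example):

```cpp
#if defined(EIGEN_USE_GPU) && defined(EIGEN_CUDACC)
// ... GPU reduction kernels ...
#else
// ... host-only path ...
#endif  // defined(EIGEN_USE_GPU) && defined(EIGEN_CUDACC)
```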
* Tensor Trace op