Commit message / Author / Age
* Bug #1796: Make matrix square root usable for Map and Ref types (Christoph Hertzberg, 2019-12-20)
* Reduce code duplication and avoid confusing Doxygen (Christoph Hertzberg, 2019-12-19)
* Hide recursive meta templates from Doxygen (Christoph Hertzberg, 2019-12-19)
* Use double-braces initialization (as everywhere else in the test-suite). (Christoph Hertzberg, 2019-12-19)
* Fix trivial shadow warningGravatar Christoph Hertzberg2019-12-19
|
* Bug #1788: Fix rule-of-three violations inside the stable modules. (Christoph Hertzberg, 2019-12-19)
  This fixes deprecated-copy warnings when compiling with GCC >= 9. Also protect some additional Base constructors from getting called by user code (#1587).
* Fix unit-test which I broke in previous fix (Christoph Hertzberg, 2019-12-19)
* Fix TensorPadding bug in squeezed reads from inner dimension (Eugene Zhulenev, 2019-12-19)
* Return const data pointer from TensorRef evaluator.data() (Eugene Zhulenev, 2019-12-18)
* Tensor block evaluation cost model (Eugene Zhulenev, 2019-12-18)
* Fix some maybe-uninitialized warnings (Christoph Hertzberg, 2019-12-18)
* Workaround class-memaccess warnings on newer GCC versions (Christoph Hertzberg, 2019-12-18)
* Fix compilation due to new HIP scalar accessor (Jeff Daily, 2019-12-17)
* Reduce block evaluation overhead for small tensor expressions (Eugene Zhulenev, 2019-12-17)
* Add default definition for EIGEN_PREDICT_* (Rasmus Munk Larsen, 2019-12-16)
* Improve accuracy of fast approximate tanh and the logistic functions in Eigen (Rasmus Munk Larsen, 2019-12-16)
  These now preserve relative accuracy to within a few ULPs where their function values tend to zero (around x=0 for tanh, and for large negative x for the logistic function).
  This change re-instates the fast rational approximation of the logistic function for float32 in Eigen (removed in https://gitlab.com/libeigen/eigen/commit/66f07efeaed39d6a67005343d7e0caf7d9eeacdb), but uses the more accurate approximation 1/(1+exp(-x)) ~= exp(x) below x = -9. The exponential is only calculated on the vectorized path if at least one element in the SIMD input vector is less than -9.
  This change also contains a few improvements to speed up the original float specialization of logistic:
  - Introduce EIGEN_PREDICT_{FALSE,TRUE} for __builtin_expect and use it to predict that the logistic-only path is most likely (~2-3% speedup for the common case).
  - Carefully set the upper clipping point to the smallest x where the approximation evaluates to exactly 1. This saves the explicit clamping of the output (~7% speedup).
  The increased accuracy for tanh comes at a cost of 10-20% depending on instruction set. The benchmarks below repeated calls u = v.logistic() (u = v.tanh(), respectively) where u and v are of type Eigen::ArrayXf, have length 8k, and v contains random numbers in [-1,1].

  Benchmark numbers for logistic:

  Before:
  Benchmark                          Time(ns)  CPU(ns)  Iterations
  SSE      BM_eigen_logistic_float       4467     4468      155835  model_time: 4827
  AVX      BM_eigen_logistic_float       2347     2347      299135  model_time: 2926
  AVX+FMA  BM_eigen_logistic_float       1467     1467      476143  model_time: 2926
  AVX512   BM_eigen_logistic_float        805      805      858696  model_time: 1463

  After:
  Benchmark                          Time(ns)  CPU(ns)  Iterations
  SSE      BM_eigen_logistic_float       2589     2590      270264  model_time: 4827
  AVX      BM_eigen_logistic_float       1428     1428      489265  model_time: 2926
  AVX+FMA  BM_eigen_logistic_float       1059     1059      662255  model_time: 2926
  AVX512   BM_eigen_logistic_float        673      673     1000000  model_time: 1463

  Benchmark numbers for tanh:

  Before:
  Benchmark                          Time(ns)  CPU(ns)  Iterations
  SSE      BM_eigen_tanh_float           2391     2391      292624  model_time: 4242
  AVX      BM_eigen_tanh_float           1256     1256      554662  model_time: 2633
  AVX+FMA  BM_eigen_tanh_float            823      823      866267  model_time: 1609
  AVX512   BM_eigen_tanh_float            443      443     1578999  model_time: 805

  After:
  Benchmark                          Time(ns)  CPU(ns)  Iterations
  SSE      BM_eigen_tanh_float           2588     2588      273531  model_time: 4242
  AVX      BM_eigen_tanh_float           1536     1536      452321  model_time: 2633
  AVX+FMA  BM_eigen_tanh_float           1007     1007      694681  model_time: 1609
  AVX512   BM_eigen_tanh_float            471      471     1472178  model_time: 805
* Resolve double-promotion warnings when compiling with clang. (Christoph Hertzberg, 2019-12-13)
  `sin` was calling `sin(double)` instead of `std::sin(float)`.
* Renamed .hgignore to .gitignore (removing hg-specific "syntax" line) (Christoph Hertzberg, 2019-12-13)
* Bug #1785: Fix pround on x86 to use the same rounding mode as std::round. (Ilya Tokar, 2019-12-12)
  This also adds a pset1frombits helper to Packet[24]d. Makes round ~45% slower for SSE: 1.65µs ± 1% before vs 2.45µs ± 2% after, still an order of magnitude faster than the scalar version: 33.8µs ± 2%.
* Clamp tanh approximation outside [-c, c], where c is the smallest value at which the approximation is exactly +/-1 (Rasmus Munk Larsen, 2019-12-12)
  Without FMA, c = 7.90531110763549805; with FMA, c = 7.99881172180175781.
* Fix implementation of complex expm1. (Srinivas Vasudevan, 2019-12-12)
  Add tests that fail with the previous implementation but pass with the current one.
* Initialize non-trivially constructible types when allocating a temp buffer. (Eugene Zhulenev, 2019-12-12)
* Squeeze reads from two inner dimensions in TensorPadding (Eugene Zhulenev, 2019-12-11)
* Add back accidentally deleted default constructor to TensorExecutorTilingContext. (Eugene Zhulenev, 2019-12-11)
* Added io test (Joel Holdsworth, 2019-12-11)
* IO: Fixed printing of char and unsigned char matrices (Joel Holdsworth, 2019-12-11)
* Added Eigen::numext typedefs for uint8_t, int8_t, uint16_t and int16_t (Joel Holdsworth, 2019-12-11)
* Bug #1786: Fix compilation with MSVC (Gael Guennebaud, 2019-12-11)
* Remove block memory allocation required by removed block evaluation API (Eugene Zhulenev, 2019-12-10)
* Remove V2 suffix from TensorBlock (Eugene Zhulenev, 2019-12-10)
* Remove TensorBlock.h and old TensorBlock/BlockMapper (Eugene Zhulenev, 2019-12-10)
* Fix for HIP breakage detected on 191210 (Deven Desai, 2019-12-10)
  The following commit introduces compile errors when building Eigen with hipcc: https://gitlab.com/libeigen/eigen/commit/2918f85ba976dbfbf72f7d4c1961a577f5850148. hipcc errors out because it requires the device attribute on the methods within the TensorBlockV2ResourceRequirements struct introduced by the commit above. The fix is to add the device attribute to those methods.
* Do not use std::vector in getResourceRequirements (Eugene Zhulenev, 2019-12-09)
* Undo the block size change. (Artem Belevich, 2019-12-09)
  .z *is* used by the EigenContractionKernelInternal().
* Add async evaluation support to TensorSelectOp (Eugene Zhulenev, 2019-12-09)
* Fix AlignedVector3's inconsistent interface with other Vector classes: the default constructor and operator- were missing (Janek Kozicki, 2019-12-06)
* Add recursive work splitting to EvalShardedByInnerDimContext (Eugene Zhulenev, 2019-12-05)
* Improve performance of contraction kernels (Artem Belevich, 2019-12-05)
  - Force-inline implementations. They pass around pointers to shared memory blocks. Without inlining, the compiler must operate via generic pointers. Inlining allows the compiler to detect that we're operating on shared memory, which allows generation of substantially faster code.
  - Fixed a long-standing typo which resulted in launching 8x more kernels than we needed (the .z dimension of the block is unused by the kernel).
* Update hg to git hashes (Gael Guennebaud, 2019-12-05)
* Add missing initialization in cxx11_tensor_trace.cpp. (Rasmus Munk Larsen, 2019-12-04)
* Replace calls to "hg" by calls to "git" (Gael Guennebaud, 2019-12-04)
* Update old links to bitbucket to point to gitlab.com (Gael Guennebaud, 2019-12-04)
* Added tag before-git-migration for changeset a7c7d329d89e8484be58df6078a586c44523db37 (Gael Guennebaud, 2019-12-04)
* Merged in ezhulenev/eigen-01 (pull request PR-769) (Rasmus Larsen, 2019-12-04)
  Capture TensorMap by value inside tensor expression AST
* Merged in anshuljl/eigen-2/Anshul-Jaiswal/update-configurevectorizationh-to-not-op-1573079916090 (pull request PR-754) (Rasmus Larsen, 2019-12-04)
  Update ConfigureVectorization.h to not optimize fp16 routines when compiling with cuda.
  Approved-by: Deven Desai <deven.desai.amd@gmail.com>
* Capture TensorMap by value inside tensor expression AST (Eugene Zhulenev, 2019-12-03)
* Remove __host__ annotation for device-only function. (Rasmus Munk Larsen, 2019-12-03)
* Use EIGEN_DEVICE_FUNC macro instead of __device__. (Rasmus Munk Larsen, 2019-12-03)
* Fix QuaternionBase::cast for quaternion map and wrapper. (Gael Guennebaud, 2019-12-03)
* Bug #1776: Fix vector-wise STL iterator's operator-> using a proxy as pointer type. (Gael Guennebaud, 2019-12-03)
  This changeset also fixes the value_type definition.