Commit log (newest first):
* 2016-04-29: Deleted unnecessary trailing commas.
* 2016-04-29: Fixed compilation errors generated by clang.
* 2016-04-29: Added a few tests to ensure that the dimensions of rank 0 tensors are correctly computed.
* 2016-04-29: Return the proper size (i.e. 1) for tensors of rank 0.
* 2016-04-29: Made several tensor tests compatible with cxx03.
* 2016-04-29: Moved a number of tensor tests that don't require cxx11 to work properly outside the EIGEN_TEST_CXX11 test section.
* 2016-04-29: Fixed the cxx11_tensor_empty test to compile without requiring cxx11 support.
* 2016-04-29: Deleted unused default values for template parameters.
* 2016-04-29: Made a couple of tensor tests compile without requiring c++11 support.
* 2016-04-29: Made the cxx11_tensor_forced_eval test compile without c++11.
* 2016-04-29: Don't turn on constexpr support when compiling with gcc >= 4.8 unless the -std=c++11 option has been used.
* 2016-04-29: Restore Tensor support for non-c++11 compilers.
* 2016-04-29: Fixed an include path.
* 2016-04-29: Fix compilation of sparse.cast<>().transpose().
* 2016-04-28: Fixed a few memory leaks.
* 2016-04-28: Fixed the igamma and igammac implementations to make them callable from a GPU kernel.
* 2016-04-28: Deleted an unused variable.
* 2016-04-28: Eliminate mutual recursion in igamma{,c}_impl::Run.

  Presently, igammac_impl::Run calls igamma_impl::Run, which in turn calls igammac_impl::Run. This isn't actually mutual recursion; the calls are guarded such that we never get into a loop. Nonetheless, it's a stretch for clang to prove this, so clang emits a recursive call in both igammac_impl::Run and igamma_impl::Run. The resulting code is not just suboptimal; it's particularly bad when compiling for CUDA/nvptx. nvptx allows recursion, but only begrudgingly: if a kernel contains recursive calls, you must manually specify the kernel's stack size, otherwise ptxas emits a warning, makes a guess, and the guess may well be wrong. This change explicitly eliminates the mutual recursion in igammac_impl::Run and igamma_impl::Run.
* 2016-04-27: Fixed a compilation error with clang.
* 2016-04-27: Merged in rmlarsen/eigen2 (pull request PR-183): detect cxx_constexpr support when compiling with clang.
* 2016-04-27: Depend on the more extensive support for constexpr in clang: http://clang.llvm.org/docs/LanguageExtensions.html#c-1y-relaxed-constexpr
* 2016-04-27: Detect cxx_constexpr support when compiling with clang.
* 2016-04-27: Merged the latest update from trunk.
* 2016-04-27: fpclassify isn't portable enough; in particular, its return values are not available on all the platforms Eigen supports. Remove it from Eigen.
* 2016-04-27: Fix a missing inclusion of Eigen/Core.
* 2016-04-27: Made the index type a template parameter to evaluateProductBlockingSizes; use numext::mini and numext::maxi instead of std::min/std::max to compute blocking sizes.
* 2016-04-27: Merged the latest updates from trunk.
* 2016-04-27: Improved support for min and max on 16-bit floats when running on recent CUDA GPUs.
* 2016-04-27: Merged eigen/eigen into default.
* 2016-04-27: Use computeProductBlockingSizes to compute blocking for both the ShardByCol and ShardByRow cases.
* 2016-04-27: Added support for fpclassify in Eigen::numext.
* 2016-04-27: Implement stricter argument checking for SYRK and SYR2K. To conform to the BLAS API, they should return info=2 if op='C' is passed for a complex matrix; without this change, the Eigen BLAS fails the strict zblat3 and cblat3 tests in LAPACK 3.5.
* 2016-04-26: Refactor the unsupported CXX11/Core module to internal headers only.
* 2016-04-25: Fixed the partial evaluation of non-vectorizable tensor subexpressions.
* 2016-04-25: Refined the cost of the striding operation.
* 2016-04-21: Merged in rmlarsen/eigen (pull request PR-179): prevent a crash in CompleteOrthogonalDecomposition if the object was default constructed.
* 2016-04-21: Provide access to the base thread pool classes.
* 2016-04-21: Prevent a crash in CompleteOrthogonalDecomposition if the object was default constructed.
* 2016-04-21: Added the ability to switch to the new thread pool with a #define.
* 2016-04-21: Use an index list for the striding benchmarks.
* 2016-04-21: Fixed several compilation warnings.
* 2016-04-21: Added an option to enable the use of the F16C instruction set.
* 2016-04-21: Use EIGEN_THREAD_YIELD instead of std::this_thread::yield to make the code more portable.
* 2016-04-20: Don't crash when attempting to reduce empty tensors.
* 2016-04-20: Added more tests.
* 2016-04-20: Don't attempt to leverage the _cvtss_sh and _cvtsh_ss intrinsics when compiling with clang, since it's unclear which versions of clang actually support these instructions.
* 2016-04-19: Started to implement a portable way to yield.
* 2016-04-19: Made sure all the required header files are included when trying to use fp16.
* 2016-04-19: Implemented a more portable version of thread-local variables.
* 2016-04-19: Fixed a few typos.