Commit message | Author | Age
- split sparse_basic unit test
- various fixes in sparse module
fix issues in the Cholmod/Taucs support
Question 1: why do *=scalar and /=scalar work right away?
Same weirdness in DynamicSparseMatrix, where operators += and -= work without
having to redefine them???
and fix commainitializer unit test with MSVC
* add an option to disable Qt testing
* add row(i), col(i) functions
* add prune() function to remove small coefficients
correctly initialized to 0.
* add a MappedSparseMatrix class (like Eigen::Map but for sparse
matrices)
* rename SparseArray to CompressedStorage
* improved performance of mat*=scalar
* bug fix in cwise*
MatrixBase.
That means a lot of features which were available for sparse matrices
via the dense (and super slow) implementation are no longer available.
All features which make sense for sparse matrices (i.e., can be implemented efficiently) will be
implemented soon, but don't expect an API as rich as for the dense path.
Other changes:
* no block(), row(), col() anymore.
* instead use .innerVector() to get a col or row vector of a matrix.
* .segment(), start(), end() will be back soon, not sure for block()
* faster cwise product
ei_aligned_malloc now really behaves like a malloc
(untyped, doesn't call ctor)
ei_aligned_new is the typed variant calling ctor
EIGEN_MAKE_ALIGNED_OPERATOR_NEW now takes the class name as parameter
* extend unit tests
* add support for generic sum reduction and dot product
* optimize cwise()*: this is a special case of CwiseBinaryOp where
we only have to process the coeffs which are non-zero in *both* matrices.
Perhaps there exist some other binary operations like that?
* Matrix: always inherit WithAlignedOperatorNew, regardless of
vectorization or not
* rename ei_alloc_stack to ei_aligned_stack_alloc
* mixingtypes test: disable vectorization as SSE intrinsics don't allow
mixing types and we just get compile errors there.
the default.
* enable complex support for the CHOLMOD LLT backend
using CHOLMOD's triangular solver
* quick fix for complex support in SparseLLT::solve
* finally get ei_add_test right
* idea of Keir Mierle: make the static assert error msgs UPPERCASE
* fix some "unused variable" warnings in the tests; there remains a libstdc++ "deprecated"
warning which I haven't looked much into
which allows filling a matrix with random inner coordinates (makes sense
only when very few coeffs are inserted per col/row)
- in matrix-matrix product, static assert on the two scalar types being the same.
- Similarly in CwiseBinaryOp. POTENTIALLY CONTROVERSIAL: we no longer allow binary
ops to take two different scalar types. The functors that we defined take two args
of the same type anyway; also, we still allow the return type to be different.
Again, the reason is that different scalar types are incompatible with vectorization.
Better to have the user realize explicitly what mixing different numeric types costs
in terms of performance.
See comment in CwiseBinaryOp constructor.
- This allowed fixing a little mistake in test/regression.cpp, which mixed float and double.
- Remove redundant semicolons (;) after static asserts.
* add an LDL^T factorization with solver using code from T. Davis's LDL
library (LGPL2.1+)
* various bug fixes in the triangular solver, matrix product, etc.
* improve cmake files for the supported libraries
* split the sparse unit test
* etc.
- remove some useless stuff => let's focus on a single sparse matrix format
- finalize the new RandomSetter
as described on the wiki (one map per N columns).
Here are some bench results for the 4 currently supported map impls:
  std::map       => 18.3385 (581 MB)
  gnu::hash_map  =>  6.52574 (555 MB)
  google::dense  =>  2.87982 (315 MB)
  google::sparse => 15.7441 (165 MB)
This is the time in seconds (and memory consumption) to insert/lookup
10 million coeffs with random coords inside a 10000^2 matrix,
with one map per packet of 64 columns => google::dense really rocks!
Note that for the key value I use the index of the column within the packet
(between 0 and 63) times the number of rows, and I used the default hash
function... so maybe there is room for improvement here.
for both backends.
* extended the sparse unit tests a bit
* add unit tests for sparse cholesky