- remove some useless stuff => let's focus on a single sparse matrix format
- finalize the new RandomSetter
|
as described on the wiki (one map per packet of N columns).
Here are some bench results for the 4 currently supported map implementations:
    std::map       => 18.3385 s  (581 MB)
    gnu::hash_map  =>  6.52574 s (555 MB)
    google::dense  =>  2.87982 s (315 MB)
    google::sparse => 15.7441 s  (165 MB)
These are the times in seconds (and the memory consumption) to insert/lookup
10 million coeffs with random coords inside a 10000^2 matrix,
with one map per packet of 64 columns => google::dense really rocks!
Note that as the key value I use the index of the column within the packet
(between 0 and 63) times the number of rows, plus the row index, and I used
the default hash function... so maybe there is room for improvement here...
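
A minimal sketch of the keying scheme described above (the class and member
names are hypothetical; this is not the actual RandomSetter code):

    #include <cstdint>
    #include <map>
    #include <vector>

    // One std::map per packet of 64 columns; the key packs the
    // column-within-packet index together with the row index.
    struct PacketedSetterSketch {
      static const int PacketSize = 64;
      int rows;
      std::vector<std::map<std::uint64_t, double> > maps;

      PacketedSetterSketch(int rows_, int cols)
        : rows(rows_), maps((cols + PacketSize - 1) / PacketSize) {}

      double& coeffRef(int row, int col) {
        // key = (col % 64) * rows + row, hashed/ordered with the defaults
        std::uint64_t key = std::uint64_t(col % PacketSize) * rows + row;
        return maps[col / PacketSize][key];
      }
    };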
|
* add unit tests for sparse Cholesky
|
solver from SuiteSparse (aka cholmod). It seems to be even faster
than SuperLU, and it was much simpler to interface! Well,
the factorization is faster, but for the solve part SuperLU is
quite a bit faster. On the other hand, the solve part represents only a
fraction of the whole procedure. Moreover, I benched random matrices
that do not represent real cases, and I'm not at all sure
I use both libraries with their best settings!
|
using SuperLU. Calling SuperLU was very painful, but it was worth it:
it seems to be damn fast!
|
* bidirectional mapping
* full Cholesky factorization
|
* several fixes (transpose, matrix product, etc.)
* Added a basic Cholesky factorization
* Added a low-level hybrid dense/sparse vector class
  to help write code involving intensive read/write
  access into a fixed-size vector. It is currently used to implement
  the matrix product itself as well as the Cholesky
  factorization (see the sketch after this list).
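
A hedged sketch of the dense-accumulator technique such a hybrid
dense/sparse working vector enables; all names are made up, this is not
the module's actual code. It computes one column of C = A*B by scattering
scaled columns of A into a dense buffer and recording the touched rows:

    #include <vector>

    // Minimal compressed-column matrix; field names are illustrative.
    struct CCSMatrix {
      int rows;
      std::vector<int> outer;       // column start offsets, size cols+1
      std::vector<int> inner;       // row index of each nonzero
      std::vector<double> values;   // nonzero coefficients
    };

    // Accumulate column j of A*B into the dense buffer w (size A.rows,
    // zero-initialized, mark all false). The caller gathers the nonzeros
    // listed in touchedRows and resets only those entries of w and mark.
    void productColumn(const CCSMatrix& A, const CCSMatrix& B, int j,
                       std::vector<double>& w, std::vector<char>& mark,
                       std::vector<int>& touchedRows)
    {
      touchedRows.clear();
      for (int k = B.outer[j]; k < B.outer[j+1]; ++k) {
        int col = B.inner[k];
        double b = B.values[k];                  // B(col, j)
        for (int a = A.outer[col]; a < A.outer[col+1]; ++a) {
          int r = A.inner[a];
          if (!mark[r]) { mark[r] = 1; touchedRows.push_back(r); }
          w[r] += A.values[a] * b;               // intensive random writes
        }
      }
    }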
|
Block specialization for sparse matrices.
InnerIterators for Blocks and fixes in CoreIterators.
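
A usage sketch of the inner-iterator concept mentioned above; the exact
names follow today's Eigen API and may differ from this revision:

    #include <Eigen/Sparse>

    // Sum the stored coefficients, visiting one inner vector
    // (a column, in column-major storage) at a time.
    double sumNonZeros(const Eigen::SparseMatrix<double>& m)
    {
      double sum = 0;
      for (int j = 0; j < m.outerSize(); ++j)
        for (Eigen::SparseMatrix<double>::InnerIterator it(m, j); it; ++it)
          sum += it.value();        // it.row() and it.col() also available
      return sum;
    }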
|
#include<algorithm> so I'm not sure how it compiled at all for you :)
|
might be twice as fast for small fixed-size matrices
* added a sparse triangular solver (the sparse version
  of inverseProduct); see the sketch after this list
* various other improvements in the Sparse module
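
A hedged sketch of a sparse forward substitution; the layout and names are
assumptions, not the module's code. It solves L*x = b in place for a
lower-triangular compressed-column matrix whose diagonal entry is stored
first in each column:

    #include <vector>

    struct CCSMatrix {                 // field names are illustrative
      std::vector<int> outer;          // column start offsets, size cols+1
      std::vector<int> inner;          // row indices
      std::vector<double> values;      // nonzero coefficients
    };

    // x holds b on entry and the solution of L*x = b on exit.
    void lowerSolveInPlace(const CCSMatrix& L, std::vector<double>& x)
    {
      int cols = int(L.outer.size()) - 1;
      for (int j = 0; j < cols; ++j) {
        int start = L.outer[j];
        x[j] /= L.values[start];                 // divide by the diagonal
        for (int k = start + 1; k < L.outer[j+1]; ++k)
          x[L.inner[k]] -= L.values[k] * x[j];   // eliminate below row j
      }
    }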
|
* added a complete implementation of the sparse matrix product
  (with a little glue in Eigen/Core)
* added an exhaustive bench of sparse products, including GMM++ and MTL4
  => Eigen outperforms both in all transposed/density configurations!
|
* added some glue to Eigen/Core (SparseBit, ei_eval, Matrix)
* added two new sparse matrix types:
  - HashMatrix: based on std::map (for random writes)
  - LinkedVectorMatrix: array of linked vectors
    (for outer-coherent writes, e.g. to transpose a matrix)
* added a SparseSetter class to easily set/update any kind of matrix, e.g.:
    {
      SparseSetter<MatrixType,RandomAccessPattern> wrapper(mymatrix);
      for (...)
        wrapper->coeffRef(rand(),rand()) = rand();
    }
* automatic shallow copy for RValue
* and a lot of mess!
plus:
* removed the remaining ArrayBit-related stuff
* don't use alloca in product for very large memory allocations

- uses the common "Compressed Column Storage" scheme
- supports every unary and binary operator via expression templates,
  assuming binaryOp(0,0) == 0 and unaryOp(0) == 0 (otherwise a sparse
  matrix does not make sense)
- this is the first commit, so of course there are still several shortcomings!
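
For reference, a minimal sketch of the "Compressed Column Storage" layout
named above; the field names are illustrative, not the class's actual members:

    #include <vector>

    // Nonzeros are stored column by column: outer[j]..outer[j+1]
    // delimits the slice of inner/values belonging to column j.
    struct CompressedColumnStorage {
      int rows, cols;
      std::vector<double> values;  // nonzero coefficients
      std::vector<int>    inner;   // row index of each nonzero
      std::vector<int>    outer;   // size cols+1, column start offsets
    };

    // Example: [[1,0,2],
    //           [0,0,3],
    //           [4,5,6]]   is stored as
    //   values = {1,4, 5, 2,3,6}
    //   inner  = {0,2, 2, 0,1,2}
    //   outer  = {0,2,3,6}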