-rw-r--r--Eigen/src/SparseCore/SparseColEtree.h1
-rw-r--r--doc/SparseLinearSystems.dox104
-rw-r--r--doc/SparseQuickReference.dox229
-rw-r--r--doc/TutorialSparse.dox110
4 files changed, 242 insertions, 202 deletions
diff --git a/Eigen/src/SparseCore/SparseColEtree.h b/Eigen/src/SparseCore/SparseColEtree.h
index e1de15afd..f89ca3814 100644
--- a/Eigen/src/SparseCore/SparseColEtree.h
+++ b/Eigen/src/SparseCore/SparseColEtree.h
@@ -55,6 +55,7 @@ Index etree_find (Index i, IndexVector& pp)
* \param mat The matrix in column-major format.
* \param parent The elimination tree
* \param firstRowElt The column index of the first element in each row
+ * \param perm The permutation to apply to the columns of \b mat
*/
template <typename MatrixType, typename IndexVector>
int coletree(const MatrixType& mat, IndexVector& parent, IndexVector& firstRowElt, typename MatrixType::Index *perm=0)
diff --git a/doc/SparseLinearSystems.dox b/doc/SparseLinearSystems.dox
index e447c40ce..c00be10d3 100644
--- a/doc/SparseLinearSystems.dox
+++ b/doc/SparseLinearSystems.dox
@@ -1,10 +1,57 @@
namespace Eigen {
/** \eigenManualPage TopicSparseSystems Solving Sparse Linear Systems
-In Eigen, there are several methods available to solve linear systems when the coefficient matrix is sparse. Because of the special representation of this class of matrices, special care should be taken in order to get a good performance. See \ref TutorialSparse for a detailed introduction about sparse matrices in Eigen. In this page, we briefly present the main steps that are common to all the linear solvers in Eigen together with the main concepts behind them. Depending on the properties of the matrix, the desired accuracy, the end-user is able to tune these steps in order to improve the performance of its code. However, an impatient user does not need to know deeply what's hiding behind these steps: the last section presents a benchmark routine that can be easily used to get an insight on the performance of all the available solvers.
+In Eigen, there are several methods available to solve linear systems when the coefficient matrix is sparse. Because of the special representation of this class of matrices, special care should be taken to get good performance. See \ref TutorialSparse for a detailed introduction about sparse matrices in Eigen. This page lists the sparse solvers available in Eigen. The main steps that are common to all these linear solvers are introduced as well. Depending on the properties of the matrix and the desired accuracy, the end-user can tune those steps to improve the performance of its code. Note that it is not required to know deeply what's hiding behind these steps: the last section presents a benchmark routine that can be easily used to get insight into the performance of all the available solvers.
\eigenAutoToc
- As summarized in \ref TutorialSparseDirectSolvers, there are many built-in solvers in Eigen as well as interface to external solvers libraries. All these solvers follow the same calling sequence. The basic steps are as follows :
+\section TutorialSparseDirectSolvers Sparse solvers
+
+%Eigen currently provides a limited set of built-in solvers, as well as wrappers to external solver libraries.
+They are summarized in the following table:
+
+<table class="manual">
+<tr><th>Class</th><th>Module</th><th>Solver kind</th><th>Matrix kind</th><th>Features related to performance</th>
+ <th>Dependencies, License</th><th class="width20em"><p>Notes</p></th></tr>
+<tr><td>SimplicialLLT </td><td>\link SparseCholesky_Module SparseCholesky \endlink</td><td>Direct LLt factorization</td><td>SPD</td><td>Fill-in reducing</td>
+ <td>built-in, LGPL</td>
+ <td>SimplicialLDLT is often preferable</td></tr>
+<tr><td>SimplicialLDLT </td><td>\link SparseCholesky_Module SparseCholesky \endlink</td><td>Direct LDLt factorization</td><td>SPD</td><td>Fill-in reducing</td>
+ <td>built-in, LGPL</td>
+ <td>Recommended for very sparse and not too large problems (e.g., 2D Poisson eq.)</td></tr>
+<tr><td>ConjugateGradient</td><td>\link IterativeLinearSolvers_Module IterativeLinearSolvers \endlink</td><td>Classic iterative CG</td><td>SPD</td><td>Preconditioning</td>
+ <td>built-in, MPL2</td>
+ <td>Recommended for large symmetric problems (e.g., 3D Poisson eq.)</td></tr>
+<tr><td>BiCGSTAB</td><td>\link IterativeLinearSolvers_Module IterativeLinearSolvers \endlink</td><td>Iterative stabilized bi-conjugate gradient</td><td>Square</td><td>Preconditioning</td>
+ <td>built-in, MPL2</td>
+ <td>To speed up the convergence, try it with the \ref IncompleteLUT preconditioner.</td></tr>
+<tr><td>SparseLU</td> <td>\link SparseLU_Module SparseLU \endlink </td> <td>LU factorization </td>
+ <td>Square </td><td>Fill-in reducing, Leverage fast dense algebra</td>
+ <td> built-in, MPL2</td> <td>Optimized for small and large problems with irregular patterns</td></tr>
+<tr><td>SparseQR</td> <td>\link SparseQR_Module SparseQR \endlink</td> <td> QR factorization</td>
+ <td>Any, rectangular</td><td> Fill-in reducing</td>
+ <td>built-in, MPL2</td><td>Recommended for least-squares problems, has a basic rank-revealing feature</td></tr>
+<tr> <th colspan="7"> Wrappers to external solvers </th></tr>
+<tr><td>PastixLLT \n PastixLDLT \n PastixLU</td><td>\link PaStiXSupport_Module PaStiXSupport \endlink</td><td>Direct LLt, LDLt, LU factorizations</td><td>SPD \n SPD \n Square</td><td>Fill-in reducing, Leverage fast dense algebra, Multithreading</td>
+ <td>Requires the <a href="http://pastix.gforge.inria.fr">PaStiX</a> package, \b CeCILL-C </td>
+ <td>Optimized for tough problems and symmetric patterns</td></tr>
+<tr><td>CholmodSupernodalLLT</td><td>\link CholmodSupport_Module CholmodSupport \endlink</td><td>Direct LLt factorization</td><td>SPD</td><td>Fill-in reducing, Leverage fast dense algebra</td>
+ <td>Requires the <a href="http://www.cise.ufl.edu/research/sparse/SuiteSparse/">SuiteSparse</a> package, \b GPL </td>
+ <td></td></tr>
+<tr><td>UmfPackLU</td><td>\link UmfPackSupport_Module UmfPackSupport \endlink</td><td>Direct LU factorization</td><td>Square</td><td>Fill-in reducing, Leverage fast dense algebra</td>
+ <td>Requires the <a href="http://www.cise.ufl.edu/research/sparse/SuiteSparse/">SuiteSparse</a> package, \b GPL </td>
+ <td></td></tr>
+<tr><td>SuperLU</td><td>\link SuperLUSupport_Module SuperLUSupport \endlink</td><td>Direct LU factorization</td><td>Square</td><td>Fill-in reducing, Leverage fast dense algebra</td>
+ <td>Requires the <a href="http://crd-legacy.lbl.gov/~xiaoye/SuperLU/">SuperLU</a> library, (BSD-like)</td>
+ <td></td></tr>
+<tr><td>SPQR</td><td>\link SPQRSupport_Module SPQRSupport \endlink </td> <td> QR factorization </td>
+ <td> Any, rectangular</td><td>fill-in reducing, multithreaded, fast dense algebra</td>
+ <td> requires the <a href="http://www.cise.ufl.edu/research/sparse/SuiteSparse/">SuiteSparse</a> package, \b GPL </td><td>Recommended for linear least-squares problems, has a rank-revealing feature</td></tr>
+</table>
+
+Here \c SPD means symmetric positive definite.
+
+All these solvers follow the same general concept.
+Here is a typical and general example:
\code
#include <Eigen/RequiredModuleName>
// ...
@@ -15,21 +62,52 @@ VectorXd b, x;
// solve Ax = b
SolverClassName<SparseMatrix<double> > solver;
solver.compute(A);
-if(solver.info()!=Succeeded) {
+if(solver.info()!=Success) {
// decomposition failed
return;
}
x = solver.solve(b);
-if(solver.info()!=Succeeded) {
+if(solver.info()!=Success) {
// solving failed
return;
}
+// solve for another right hand side:
+x1 = solver.solve(b1);
\endcode
+For \c SPD solvers, a second optional template argument allows one to specify which triangular part has to be used, e.g.:
+
+\code
+#include <Eigen/IterativeLinearSolvers>
+
+ConjugateGradient<SparseMatrix<double>, Eigen::Upper> solver;
+x = solver.compute(A).solve(b);
+\endcode
+In the above example, only the upper triangular part of the input matrix A is considered for solving. The opposite triangle might either be empty or contain arbitrary values.
+
+In the case where multiple problems with the same sparsity pattern have to be solved, the "compute" step can be decomposed as follows:
+\code
+SolverClassName<SparseMatrix<double> > solver;
+solver.analyzePattern(A); // for this step the numerical values of A are not used
+solver.factorize(A);
+x1 = solver.solve(b1);
+x2 = solver.solve(b2);
+...
+A = ...; // modify the values of the nonzeros of A; the nonzero pattern must stay unchanged
+solver.factorize(A);
+x1 = solver.solve(b1);
+x2 = solver.solve(b2);
+...
+\endcode
+The compute() method is equivalent to calling both analyzePattern() and factorize().
+
+Finally, each solver provides some specific features, such as computing the determinant, accessing the factors, controlling the iterations, and so on.
+More details are available in the documentation of the respective classes.
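+
+For example, here is a minimal sketch of two such features, assuming the SimplicialLDLT direct solver; determinant() and matrixL() are part of its interface:
+\code
+SimplicialLDLT<SparseMatrix<double> > solver(A);
+double d = solver.determinant();           // determinant of the matrix A
+SparseMatrix<double> L = solver.matrixL(); // lower triangular factor of the LDLt decomposition
+\endcode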
+
\section TheSparseCompute The Compute Step
-In the compute() function, the matrix is generally factorized: LLT for self-adjoint matrices, LDLT for general hermitian matrices and LU for non hermitian matrices. These are the results of using direct solvers. For this class of solvers precisely, the compute step is further subdivided into analyzePattern() and factorize().
+In the compute() function, the matrix is generally factorized: LLT for self-adjoint matrices, LDLT for general Hermitian matrices, LU for non-Hermitian matrices and QR for rectangular matrices. This is the case for direct solvers. For this class of solvers, the compute step is further subdivided into analyzePattern() and factorize().
-The goal of analyzePattern() is to reorder the nonzero elements of the matrix, such that the factorization step creates less fill-in. This step exploits only the structure of the matrix. Hence, the results of this step can be used for other linear systems where the matrix has the same structure. Note however that sometimes, some external solvers (like SuperLU) require that the values of the matrix are set in this step, for instance to equilibrate the rows and columns of the matrix. In this situation, the results of this step can note be used with other matrices.
+The goal of analyzePattern() is to reorder the nonzero elements of the matrix, such that the factorization step creates less fill-in. This step exploits only the structure of the matrix. Hence, the results of this step can be used for other linear systems where the matrix has the same structure. Note however that some external solvers (like SuperLU) require that the values of the matrix are set in this step, for instance to equilibrate the rows and columns of the matrix. In this situation, the results of this step should not be used with other matrices.
Eigen provides a limited set of methods to reorder the matrix in this step, either built-in (COLAMD, AMD) or external (METIS). These methods are set in the template parameter list of the solver:
\code
@@ -40,33 +118,31 @@ See the \link OrderingMethods_Module OrderingMethods module \endlink for the lis
In factorize(), the factors of the coefficient matrix are computed. This step should be called each time the values of the matrix change. However, the structural pattern of the matrix should not change between multiple calls.
-For iterative solvers, the compute step is used to eventually setup a preconditioner. Remember that, basically, the goal of the preconditioner is to speedup the convergence of an iterative method by solving a modified linear system where the coefficient matrix has more clustered eigenvalues. For real problems, an iterative solver should always be used with a preconditioner. In Eigen, a preconditioner is selected by simply adding it as a template parameter to the iterative solver object.
+For iterative solvers, the compute step is used to possibly set up a preconditioner. For instance, with the ILUT preconditioner, the incomplete factors L and U are computed in this step. Remember that, basically, the goal of the preconditioner is to speed up the convergence of an iterative method by solving a modified linear system where the coefficient matrix has more clustered eigenvalues. For real problems, an iterative solver should always be used with a preconditioner. In Eigen, a preconditioner is selected by simply adding it as a template parameter to the iterative solver object.
\code
IterativeSolverClassName<SparseMatrix<double>, PreconditionerName<SparseMatrix<double> > > solver;
\endcode
The member function preconditioner() returns a read-write reference to the preconditioner
- to directly interact with it.
+ to interact with it directly. See the \link IterativeLinearSolvers_Module Iterative solvers module \endlink and the documentation of each class for the list of available methods.
-For instance, with the ILUT preconditioner, the incomplete factors L and U are computed in this step.
-See \link Sparse_modules the Sparse module \endlink for the list of available preconditioners in Eigen.
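+For instance, assuming a BiCGSTAB solver preconditioned with IncompleteLUT, the preconditioner can be tuned through this reference; a minimal sketch (the dropping tolerance value is arbitrary):
+\code
+BiCGSTAB<SparseMatrix<double>, IncompleteLUT<double> > solver;
+solver.preconditioner().setDroptol(0.001); // drop small entries when building the incomplete factors
+solver.compute(A);
+x = solver.solve(b);
+\endcode
+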
\section TheSparseSolve The Solve step
The solve() function computes the solution of the linear systems with one or many right hand sides.
\code
X = solver.solve(B);
\endcode
-Here, B can be a vector or a matrix where the columns form the different right hand sides. The solve() function can be called several times as well, for instance When all the right hand sides are not available at once.
+Here, B can be a vector or a matrix where the columns form the different right hand sides. The solve() function can be called several times as well, for instance when the right hand sides are not all available at once.
\code
x1 = solver.solve(b1);
// Get the second right hand side b2
x2 = solver.solve(b2);
// ...
\endcode
-For direct methods, the solution are computed at the machine precision. Sometimes, the solution need not be too accurate. In this case, the iterative methods are more suitable and the desired accuracy can be set before the solve step using setTolerance(). For all the available functions, please, refer to the documentation of the \link IterativeLinearSolvers_Module Iterative solvers module \endlink.
+For direct methods, the solution is computed at machine precision. Sometimes, the solution does not need to be that accurate. In this case, iterative methods are more suitable and the desired accuracy can be set before the solve step using \b setTolerance(). For all the available functions, please refer to the documentation of the \link IterativeLinearSolvers_Module Iterative solvers module \endlink.
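+As an illustration, a minimal sketch with ConjugateGradient (the tolerance and the iteration bound are arbitrary):
+\code
+ConjugateGradient<SparseMatrix<double> > cg;
+cg.setTolerance(1e-6);    // desired relative accuracy
+cg.setMaxIterations(200); // upper bound on the number of iterations
+cg.compute(A);
+x = cg.solve(b);
+// cg.iterations() and cg.error() then report the actual iteration count and estimated error
+\endcode
+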
\section BenchmarkRoutine Benchmarking routine
-Most of the time, all you need is to know how much time it will take to qolve your system, and hopefully, what is the most suitable solver. In Eigen, we provide a benchmark routine that can be used for this purpose. It is very easy to use. First, it should be activated at the configuration step with the flag TEST_REAL_CASES. Then, in bench/spbench, you can compile the routine by typing \b make \e spbenchsolver. You can then run it with --help option to get the list of all available options. Basically, the matrices to test should be in <a href="http://math.nist.gov/MatrixMarket/formats.html">MatrixMarket Coordinate format</a>, and the routine returns the statistics from all available solvers in Eigen.
+Most of the time, all you need to know is how much time it will take to solve your system, and hopefully, which is the most suitable solver. In Eigen, we provide a benchmark routine that can be used for this purpose. It is very easy to use. In the build directory, navigate to bench/spbench and compile the routine by typing \b make \e spbenchsolver. Run it with the --help option to get the list of all available options. Basically, the matrices to test should be in <a href="http://math.nist.gov/MatrixMarket/formats.html">MatrixMarket Coordinate format</a>, and the routine returns the statistics from all available solvers in Eigen.
-The following table gives an example of XHTML statistics from several Eigen built-in and external solvers.
+The following table gives an example of XML statistics from several Eigen built-in and external solvers.
<TABLE border="1">
<TR><TH>Matrix <TH> N <TH> NNZ <TH> <TH > UMFPACK <TH > SUPERLU <TH > PASTIX LU <TH >BiCGSTAB <TH > BiCGSTAB+ILUT <TH >GMRES+ILUT<TH > LDLT <TH> CHOLMOD LDLT <TH > PASTIX LDLT <TH > LLT <TH > CHOLMOD SP LLT <TH > CHOLMOD LLT <TH > PASTIX LLT <TH> CG</TR>
<TR><TH rowspan="4">vector_graphics <TD rowspan="4"> 12855 <TD rowspan="4"> 72069 <TH>Compute Time <TD>0.0254549<TD>0.0215677<TD>0.0701827<TD>0.000153388<TD>0.0140107<TD>0.0153709<TD>0.0101601<TD style="background-color:red">0.00930502<TD>0.0649689
diff --git a/doc/SparseQuickReference.dox b/doc/SparseQuickReference.dox
index 15015a0ca..4a33d0cc9 100644
--- a/doc/SparseQuickReference.dox
+++ b/doc/SparseQuickReference.dox
@@ -4,61 +4,84 @@ namespace Eigen {
<hr>
-In this page, we give a quick summary of the main operations available for sparse matrices in the class SparseMatrix. First, it is recommended to read first the introductory tutorial at \ref TutorialSparse. The important point to have in mind when working on sparse matrices is how they are stored :
-i.e either row major or column major. The default is column major. Most arithmetic operations on sparse matrices will assert that they have the same storage order. Moreover, when interacting with external libraries that are not yet supported by Eigen, it is important to know how to send the required matrix pointers.
-
-\section Constructors Constructors and assignments
-SparseMatrix is the core class to build and manipulate sparse matrices in Eigen. It takes as template parameters the Scalar type and the storage order, either RowMajor or ColumnMajor. The default is ColumnMajor.
+In this page, we give a quick summary of the main operations available for sparse matrices in the class SparseMatrix. First, it is recommended to read the introductory tutorial at \ref TutorialSparse. The important point to have in mind when working on sparse matrices is how they are stored:
+i.e., either row major or column major. The default is column major. Most arithmetic operations on sparse matrices will assert that they have the same storage order.
+\section SparseMatrixInit Sparse Matrix Initialization
+<table class="manual">
+<tr><th> Category </th> <th> Operations</th> <th>Notes</th></tr>
+<tr><td>Constructor</td>
+<td>
\code
- SparseMatrix<double> sm1(1000,1000); // 1000x1000 compressed sparse matrix of double.
- SparseMatrix<std::complex<double>,RowMajor> sm2; // Compressed row major matrix of complex double.
+ SparseMatrix<double> sm1(1000,1000);
+ SparseMatrix<std::complex<double>,RowMajor> sm2;
\endcode
-The copy constructor and assignment can be used to convert matrices from a storage order to another
+</td> <td> Default is ColMajor</td> </tr>
+<tr class="alt">
+<td> Resize/Reserve</td>
+<td>
+ \code
+ sm1.resize(m,n);  // Change sm1 to an m x n matrix.
+ sm1.reserve(nnz); // Allocate room for nnz nonzero elements.
+ \endcode
+</td>
+<td> Note that when calling reserve(), it is not required that nnz is the exact number of nonzero elements in the final matrix. However, an exact estimation will avoid multiple reallocations during the insertion phase. </td>
+</tr>
+<tr>
+<td> Assignment </td>
+<td>
\code
SparseMatrix<double,ColMajor> sm1;
- // Eventually fill the matrix sm1 ...
- SparseMatrix<double,Rowmajor> sm2(sm1), sm3; // Initialize sm2 with sm1.
- sm3 = sm1; // Assignment and evaluations modify the storage order.
+ // Initialize sm2 with sm1.
+ SparseMatrix<double,RowMajor> sm2(sm1), sm3;
+ // Assignment and evaluations modify the storage order.
+ sm3 = sm1;
\endcode
-
-\section SparseMatrixInsertion Allocating and inserting values
-resize() and reserve() are used to set the size and allocate space for nonzero elements
- \code
- sm1.resize(m,n); //Change sm to a mxn matrix.
- sm1.reserve(nnz); // Allocate room for nnz nonzeros elements.
- \endcode
-Note that when calling reserve(), it is not required that nnz is the exact number of nonzero elements in the final matrix. However, an exact estimation will avoid multiple reallocations during the insertion phase.
-
-Insertions of values in the sparse matrix can be done directly by looping over nonzero elements and use the insert() function
+</td>
+<td> The copy constructor can be used to convert from one storage order to another</td>
+</tr>
+<tr class="alt">
+<td> Element-wise Insertion</td>
+<td>
\code
-// Direct insertion of the value v_ij;
- sm1.insert(i, j) = v_ij; // It is assumed that v_ij does not already exist in the matrix.
-\endcode
+// Insert a new element;
+ sm1.insert(i, j) = v_ij;
-After insertion, a value at (i,j) can be modified using coeffRef()
-\code
- // Update the value v_ij
- sm1.coeffRef(i,j) = v_ij;
- sm1.coeffRef(i,j) += v_ij;
- sm1.coeffRef(i,j) -= v_ij;
- ...
+// Update the value v_ij
+ sm1.coeffRef(i,j) = v_ij;
+ sm1.coeffRef(i,j) += v_ij;
+ sm1.coeffRef(i,j) -= v_ij;
\endcode
-
-The recommended way to insert values is to build a list of triplets (row, col, val) and then call setFromTriplets().
+</td>
+<td> insert() assumes that the element does not already exist; otherwise, use coeffRef()</td>
+</tr>
+<tr>
+<td> Batch insertion</td>
+<td>
\code
+ std::vector< Eigen::Triplet<double> > tripletList;
+ tripletList.reserve(estimation_of_entries);
+ // -- Fill tripletList with nonzero elements...
 sm1.setFromTriplets(tripletList.begin(), tripletList.end());
\endcode
-A complete example is available at \ref TutorialSparseFilling.
-
-The following functions can be used to set constant or random values in the matrix.
+</td>
+<td>A complete example is available at \link TutorialSparseFilling Triplet Insertion \endlink; a short self-contained sketch also follows this table.</td>
+</tr>
+<tr class="alt">
+<td> Constant or Random Insertion</td>
+<td>
\code
- sm1.setZero(); // Reset the matrix with zero elements
- ...
+sm1.setZero();        // Remove all nonzero coefficients
+sm1.setConstant(val); // Replace all the nonzero values with val
\endcode
+</td>
+<td> The matrix sm1 must have been created beforehand; setConstant() only changes the values of the already-inserted nonzeros</td>
+</tr>
+</table>
+
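+As a complement to the batch-insertion row above, a minimal self-contained sketch (the size and values are arbitrary):
+\code
+typedef Eigen::Triplet<double> T;
+std::vector<T> tripletList;
+tripletList.reserve(4);
+tripletList.push_back(T(0, 0,  4.0));
+tripletList.push_back(T(0, 1, -1.0));
+tripletList.push_back(T(1, 0, -1.0));
+tripletList.push_back(T(1, 1,  4.0));
+Eigen::SparseMatrix<double> A(2, 2);
+A.setFromTriplets(tripletList.begin(), tripletList.end());
+\endcode
+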
\section SparseBasicInfos Matrix properties
-Beyond the functions rows() and cols() that are used to get the number of rows and columns, there are some useful functions that are available to easily get some informations from the matrix.
+Beyond the basic functions rows() and cols(), there are some useful functions that are available to easily get some information from the matrix.
<table class="manual">
<tr>
<td> \code
@@ -67,16 +90,18 @@ Beyond the functions rows() and cols() that are used to get the number of rows a
sm1.nonZeros(); // Number of non zero values
sm1.outerSize(); // Number of columns (resp. rows) for a column major (resp. row major )
sm1.innerSize(); // Number of rows (resp. columns) for a row major (resp. column major)
- sm1.norm(); // (Euclidian ??) norm of the matrix
- sm1.squaredNorm(); //
+ sm1.norm();         // Euclidean norm of the matrix
+ sm1.squaredNorm();  // Squared norm of the matrix
+ sm1.blueNorm();     // Norm computed avoiding underflow/overflow (Blue's algorithm)
 sm1.isVector();      // Check if sm1 is a sparse vector or a sparse matrix
+ sm1.isCompressed(); // Check if sm1 is in compressed form (see the sketch below)
...
\endcode </td>
</tr>
</table>
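+
+When the low-level compressed-storage API is needed (see the last table of this page), a short sketch of the usual companion to isCompressed():
+\code
+if (!sm1.isCompressed())
+  sm1.makeCompressed(); // squeeze out the empty space left by reserve()
+\endcode
+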
\section SparseBasicOps Arithmetic operations
-It is easy to perform arithmetic operations on sparse matrices provided that the dimensions are adequate and that the matrices have the same storage order. Note that the evaluation can always be done in a matrix with a different storage order.
+It is easy to perform arithmetic operations on sparse matrices provided that the dimensions are adequate and that the matrices have the same storage order. Note that the evaluation can always be done in a matrix with a different storage order. In the following, \b sm denotes a sparse matrix, \b dm a dense matrix and \b dv a dense vector.
<table class="manual">
<tr><th> Operations </th> <th> Code </th> <th> Notes </th></tr>
@@ -103,7 +128,7 @@ It is easy to perform arithmetic operations on sparse matrices provided that the
</tr>
<tr>
- <td> Product </td>
+ <td> %Sparse %Product </td>
<td> \code
sm3 = sm1 * sm2;
dm2 = sm1 * dm1;
@@ -123,7 +148,20 @@ It is easy to perform arithmetic operations on sparse matrices provided that the
Note that the transposition changes the storage order. There is no support for transposeInPlace().
</td>
</tr>
-
+<tr>
+<td> Permutation </td>
+<td>
+\code
+perm.indices(); // Reference to the vector of indices
+sm1.twistedBy(perm); // Permute rows and columns
+sm2 = sm1 * perm; // Permute the columns
+sm2 = perm * sm1; // Permute the rows
+\endcode
+</td>
+<td>
+perm is a PermutationMatrix; a usage sketch follows this table.
+</td>
+</tr>
<tr>
<td>
Component-wise ops
@@ -142,47 +180,70 @@ It is easy to perform arithmetic operations on sparse matrices provided that the
</tr>
</table>
-
-\section SparseInterops Low-level storage
-There are a set of low-levels functions to get the standard compressed storage pointers. The matrix should be in compressed mode which can be checked by calling isCompressed(); makeCompressed() should do the job otherwise.
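+Expanding on the permutation row of the table above, a short sketch (swapping the first two indices is an arbitrary illustration):
+\code
+PermutationMatrix<Dynamic, Dynamic, int> perm(n);
+perm.setIdentity();                              // start from the identity permutation
+std::swap(perm.indices()(0), perm.indices()(1)); // exchange the first two indices
+SparseMatrix<double> sm2 = sm1.twistedBy(perm);  // permute both rows and columns
+\endcode
+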
+\section sparseotherops Other supported operations
+<table class="manual">
+<tr><th>Operations</th> <th> Code </th> <th> Notes</th> </tr>
+<tr>
+<td>Sub-matrices</td>
+<td>
+\code
+ sm1.block(startRow, startCol, rows, cols);
+ sm1.block<rows, cols>(startRow, startCol);
+ sm1.topLeftCorner(rows, cols);
+ sm1.topRightCorner(rows, cols);
+ sm1.bottomLeftCorner(rows, cols);
+ sm1.bottomRightCorner(rows, cols);
+ \endcode
+</td> <td> </td>
+</tr>
+<tr>
+<td> Range </td>
+<td>
+\code
+ sm1.innerVector(outer);
+ sm1.innerVectors(start, size);
+ sm1.leftCols(size);
+ sm1.rightCols(size);
+ sm1.middleRows(start, numRows);
+ sm1.middleCols(start, numCols);
+ sm1.col(j);
+\endcode
+</td>
+<td>An inner vector is either a row (for row-major) or a column (for column-major). As stated earlier, the evaluation can be done in a matrix with a different storage order </td>
+</tr>
+<tr>
+<td> Triangular and selfadjoint views</td>
+<td>
\code
- // Scalar pointer to the values of the matrix, size nnz
- sm1.valuePtr();
- // Index pointer to get the row indices (resp. column indices) for column major (resp. row major) matrix, size nnz
- sm1.innerIndexPtr();
- // Index pointer to the beginning of each row (resp. column) in valuePtr() and innerIndexPtr() for column major (row major). The size is outersize()+1;
- sm1.outerIndexPtr();
+ sm2 = sm1.triangularView<Lower>();
+ sm2 = sm1.selfadjointView<Lower>();
\endcode
-These pointers can therefore be easily used to send the matrix to some external libraries/solvers that are not yet supported by Eigen.
-
-\section sparsepermutation Permutations, submatrices and Selfadjoint Views
-In many cases, it is necessary to reorder the rows and/or the columns of the sparse matrix for several purposes : fill-in reducing during matrix decomposition, better data locality for sparse matrix-vector products... The class PermutationMatrix is available to this end.
- \code
- PermutationMatrix<Dynamic, Dynamic, int> perm;
- // Reserve and fill the values of perm;
- perm.inverse(n); // Compute eventually the inverse permutation
- sm1.twistedBy(perm) //Apply the permutation on rows and columns
- sm2 = sm1 * perm; // ??? Apply the permutation on columns ???;
- sm2 = perm * sm1; // ??? Apply the permutation on rows ???;
- \endcode
-
-\section sparsesubmatrices Sub-matrices
-The following functions are useful to extract a block of rows (resp. columns) from a row-major (resp. column major) sparse matrix. Note that because of the particular storage, it is not ?? efficient ?? to extract a submatrix comprising a certain number of subrows and subcolumns.
- \code
- sm1.innerVector(outer); // Returns the outer -th column (resp. row) of the matrix if sm is col-major (resp. row-major)
- sm1.innerVectors(outer); // Returns the outer -th column (resp. row) of the matrix if mat is col-major (resp. row-major)
- sm1.middleRows(start, numRows); // For row major matrices, get a range of numRows rows
- sm1.middleCols(start, numCols); // For column major matrices, get a range of numCols cols
- \endcode
- Examples :
-
-\section sparseselfadjointview Sparse triangular and selfadjoint Views
- \code
- sm2 = sm1.triangularview<Lower>(); // Get the lower triangular part of the matrix.
- dv2 = sm1.triangularView<Upper>().solve(dv1); // Solve the linear system with the uppper triangular part.
- sm2 = sm1.selfadjointview<Lower>(); // Build a selfadjoint matrix from the lower part of sm1.
- \endcode
-
-
+</td>
+<td> Several combinations between triangular views and block views are possible, e.g.:
+\code
+ sm2 = sm1.topLeftCorner(size, size).triangularView<Lower>();
+\endcode </td>
+</tr>
+<tr>
+<td>Triangular solve </td>
+<td>
+\code
+ dv2 = sm1.triangularView<Upper>().solve(dv1);
+ dv2 = sm1.topLeftCorner(size, size).triangularView<Lower>().solve(dv1);
+\endcode
+</td>
+<td> For a general sparse solve, use any suitable module described at \ref TopicSparseSystems </td>
+</tr>
+<tr>
+<td> Low-level API</td>
+<td>
+\code
+sm1.valuePtr();      // Pointer to the values
+sm1.innerIndexPtr(); // Pointer to the inner indices (row indices for a column-major matrix)
+sm1.outerIndexPtr(); // Pointer to the beginning of each inner vector
+\endcode
+</td>
+<td> If the matrix is not in compressed form, makeCompressed() should be called before. Note that these functions are mostly provided for interoperability purposes with external libraries. A better way to access the values of the matrix is to use the InnerIterator class as described in \link TutorialSparse the Tutorial Sparse \endlink section; a short sketch follows this table.</td>
+</tr>
+</table>
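+
+As a complement to the low-level API row above, a sketch of the recommended InnerIterator traversal:
+\code
+for (int k = 0; k < sm1.outerSize(); ++k)
+  for (SparseMatrix<double>::InnerIterator it(sm1, k); it; ++it)
+  {
+    it.value(); // value of the current nonzero
+    it.row();   // row index
+    it.col();   // column index
+  }
+\endcode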
*/
}
diff --git a/doc/TutorialSparse.dox b/doc/TutorialSparse.dox
index 9f06005fa..98c9997e1 100644
--- a/doc/TutorialSparse.dox
+++ b/doc/TutorialSparse.dox
@@ -10,11 +10,14 @@ Manipulating and solving sparse problems involves various modules which are summ
<tr><th>Module</th><th>Header file</th><th>Contents</th></tr>
<tr><td>\link SparseCore_Module SparseCore \endlink</td><td>\code#include <Eigen/SparseCore>\endcode</td><td>SparseMatrix and SparseVector classes, matrix assembly, basic sparse linear algebra (including sparse triangular solvers)</td></tr>
<tr><td>\link SparseCholesky_Module SparseCholesky \endlink</td><td>\code#include <Eigen/SparseCholesky>\endcode</td><td>Direct sparse LLT and LDLT Cholesky factorization to solve sparse self-adjoint positive definite problems</td></tr>
+<tr><td>\link SparseLU_Module SparseLU \endlink</td><td>\code #include<Eigen/SparseLU> \endcode</td>
+<td>%Sparse LU factorization to solve general square sparse systems</td></tr>
+<tr><td>\link SparseQR_Module SparseQR \endlink</td><td>\code #include<Eigen/SparseQR>\endcode </td><td>%Sparse QR factorization for solving sparse linear least-squares problems</td></tr>
<tr><td>\link IterativeLinearSolvers_Module IterativeLinearSolvers \endlink</td><td>\code#include <Eigen/IterativeLinearSolvers>\endcode</td><td>Iterative solvers to solve large general linear square problems (including self-adjoint positive definite problems)</td></tr>
<tr><td>\link Sparse_modules Sparse \endlink</td><td>\code#include <Eigen/Sparse>\endcode</td><td>Includes all the above modules</td></tr>
</table>
-\section TutorialSparseIntro Sparse matrix representation
+\section TutorialSparseIntro Sparse matrix format
In many applications (e.g., finite element methods) it is common to deal with very large matrices where only a few coefficients are different from zero. In such cases, memory consumption can be reduced and performance increased by using a specialized representation storing only the nonzero coefficients. Such a matrix is called a sparse matrix.
@@ -224,102 +227,10 @@ A typical scenario of this approach is illustrated bellow:
- The line 5 suppresses the remaining empty space and transforms the matrix into a compressed column storage.
-\section TutorialSparseDirectSolvers Solving linear problems
-
-%Eigen currently provides a limited set of built-in solvers, as well as wrappers to external solver libraries.
-They are summarized in the following table:
-
-<table class="manual">
-<tr><th>Class</th><th>Module</th><th>Solver kind</th><th>Matrix kind</th><th>Features related to performance</th>
- <th>Dependencies,License</th><th class="width20em"><p>Notes</p></th></tr>
-<tr><td>SimplicialLLT </td><td>\link SparseCholesky_Module SparseCholesky \endlink</td><td>Direct LLt factorization</td><td>SPD</td><td>Fill-in reducing</td>
- <td>built-in, LGPL</td>
- <td>SimplicialLDLT is often preferable</td></tr>
-<tr><td>SimplicialLDLT </td><td>\link SparseCholesky_Module SparseCholesky \endlink</td><td>Direct LDLt factorization</td><td>SPD</td><td>Fill-in reducing</td>
- <td>built-in, LGPL</td>
- <td>Recommended for very sparse and not too large problems (e.g., 2D Poisson eq.)</td></tr>
-<tr><td>ConjugateGradient</td><td>\link IterativeLinearSolvers_Module IterativeLinearSolvers \endlink</td><td>Classic iterative CG</td><td>SPD</td><td>Preconditionning</td>
- <td>built-in, LGPL</td>
- <td>Recommended for large symmetric problems (e.g., 3D Poisson eq.)</td></tr>
-<tr><td>BiCGSTAB</td><td>\link IterativeLinearSolvers_Module IterativeLinearSolvers \endlink</td><td>Iterative stabilized bi-conjugate gradient</td><td>Square</td><td>Preconditionning</td>
- <td>built-in, LGPL</td>
- <td>Might not always converge</td></tr>
-
-
-<tr><td>PastixLLT \n PastixLDLT \n PastixLU</td><td>\link PaStiXSupport_Module PaStiXSupport \endlink</td><td>Direct LLt, LDLt, LU factorizations</td><td>SPD \n SPD \n Square</td><td>Fill-in reducing, Leverage fast dense algebra, Multithreading</td>
- <td>Requires the <a href="http://pastix.gforge.inria.fr">PaStiX</a> package, \b CeCILL-C </td>
- <td>optimized for tough problems and symmetric patterns</td></tr>
-<tr><td>CholmodSupernodalLLT</td><td>\link CholmodSupport_Module CholmodSupport \endlink</td><td>Direct LLt factorization</td><td>SPD</td><td>Fill-in reducing, Leverage fast dense algebra</td>
- <td>Requires the <a href="http://www.cise.ufl.edu/research/sparse/SuiteSparse/">SuiteSparse</a> package, \b GPL </td>
- <td></td></tr>
-<tr><td>UmfPackLU</td><td>\link UmfPackSupport_Module UmfPackSupport \endlink</td><td>Direct LU factorization</td><td>Square</td><td>Fill-in reducing, Leverage fast dense algebra</td>
- <td>Requires the <a href="http://www.cise.ufl.edu/research/sparse/SuiteSparse/">SuiteSparse</a> package, \b GPL </td>
- <td></td></tr>
-<tr><td>SuperLU</td><td>\link SuperLUSupport_Module SuperLUSupport \endlink</td><td>Direct LU factorization</td><td>Square</td><td>Fill-in reducing, Leverage fast dense algebra</td>
- <td>Requires the <a href="http://crd-legacy.lbl.gov/~xiaoye/SuperLU/">SuperLU</a> library, (BSD-like)</td>
- <td></td></tr>
-</table>
-
-Here \c SPD means symmetric positive definite.
-
-All these solvers follow the same general concept.
-Here is a typical and general example:
-\code
-#include <Eigen/RequiredModuleName>
-// ...
-SparseMatrix<double> A;
-// fill A
-VectorXd b, x;
-// fill b
-// solve Ax = b
-SolverClassName<SparseMatrix<double> > solver;
-solver.compute(A);
-if(solver.info()!=Success) {
- // decomposition failed
- return;
-}
-x = solver.solve(b);
-if(solver.info()!=Success) {
- // solving failed
- return;
-}
-// solve for another right hand side:
-x1 = solver.solve(b1);
-\endcode
-
-For \c SPD solvers, a second optional template argument allows to specify which triangular part have to be used, e.g.:
-
-\code
-#include <Eigen/IterativeLinearSolvers>
-
-ConjugateGradient<SparseMatrix<double>, Eigen::Upper> solver;
-x = solver.compute(A).solve(b);
-\endcode
-In the above example, only the upper triangular part of the input matrix A is considered for solving. The opposite triangle might either be empty or contain arbitrary values.
-
-In the case where multiple problems with the same sparcity pattern have to be solved, then the "compute" step can be decomposed as follow:
-\code
-SolverClassName<SparseMatrix<double> > solver;
-solver.analyzePattern(A); // for this step the numerical values of A are not used
-solver.factorize(A);
-x1 = solver.solve(b1);
-x2 = solver.solve(b2);
-...
-A = ...; // modify the values of the nonzeros of A, the nonzeros pattern must stay unchanged
-solver.factorize(A);
-x1 = solver.solve(b1);
-x2 = solver.solve(b2);
-...
-\endcode
-The compute() method is equivalent to calling both analyzePattern() and factorize().
-
-Finally, each solver provides some specific features, such as determinant, access to the factors, controls of the iterations, and so on.
-More details are availble in the documentations of the respective classes.
-
\section TutorialSparseFeatureSet Supported operators and functions
-Because of their special storage format, sparse matrices cannot offer the same level of flexbility than dense matrices.
+Because of their special storage format, sparse matrices cannot offer the same level of flexibility as dense matrices.
In Eigen's sparse module we chose to expose only the subset of the dense matrix API which can be efficiently implemented.
In the following \em sm denotes a sparse matrix, \em sv a sparse vector, \em dm a dense matrix, and \em dv a dense vector.
@@ -420,16 +331,7 @@ sm2 = A.selfadjointView<Upper>().twistedBy(P); //
sm2.selfadjointView<Lower>() = A.selfadjointView<Lower>().twistedBy(P); // compute P S P' from the lower triangular part of A, and then only compute the lower part
\endcode
-\subsection TutorialSparse_Submat Sub-matrices
-
-%Sparse matrices does not support yet the addressing of arbitrary sub matrices. Currently, one can only reference a set of contiguous \em inner vectors, i.e., a set of contiguous rows for a row-major matrix, or a set of contiguous columns for a column major matrix:
-\code
- sm1.innerVector(j); // returns an expression of the j-th column (resp. row) of the matrix if sm1 is col-major (resp. row-major)
- sm1.innerVectors(j, nb); // returns an expression of the nb columns (resp. row) starting from the j-th column (resp. row)
- // of the matrix if sm1 is col-major (resp. row-major)
- sm1.middleRows(j, nb); // for row major matrices only, get a range of nb rows
- sm1.middleCols(j, nb); // for column major matrices only, get a range of nb columns
-\endcode
+Please refer to the \link SparseQuickRefPage Quick Reference \endlink guide for the list of supported operations. The list of available linear solvers is \link TopicSparseSystems here \endlink.
*/