From a1ddf2e7a8f3d8ef21a13b3075a6e55b71c892a7 Mon Sep 17 00:00:00 2001
From: Desire NUENTSA
Date: Tue, 5 Mar 2013 12:55:03 +0100
Subject: Update doc for the sparse module

---
 doc/TutorialSparse.dox | 110 +++----------------------------------------------
 1 file changed, 6 insertions(+), 104 deletions(-)

diff --git a/doc/TutorialSparse.dox b/doc/TutorialSparse.dox
index 9f06005fa..98c9997e1 100644
--- a/doc/TutorialSparse.dox
+++ b/doc/TutorialSparse.dox
@@ -10,11 +10,14 @@ Manipulating and solving sparse problems involves various modules which are summ
 <tr><th>Module</th><th>Header file</th><th>Contents</th></tr>
 <tr><td>\link SparseCore_Module SparseCore \endlink</td><td>\code#include <Eigen/SparseCore>\endcode</td><td>SparseMatrix and SparseVector classes, matrix assembly, basic sparse linear algebra (including sparse triangular solvers)</td></tr>
 <tr><td>\link SparseCholesky_Module SparseCholesky \endlink</td><td>\code#include <Eigen/SparseCholesky>\endcode</td><td>Direct sparse LLT and LDLT Cholesky factorization to solve sparse self-adjoint positive definite problems</td></tr>
+<tr><td>\link SparseLU_Module SparseLU \endlink</td><td>\code #include <Eigen/SparseLU>\endcode</td>
+<td>%Sparse LU factorization to solve general square sparse systems</td></tr>
+<tr><td>\link SparseQR_Module SparseQR \endlink</td><td>\code #include <Eigen/SparseQR>\endcode</td><td>%Sparse QR factorization for solving sparse linear least-squares problems</td></tr>
 <tr><td>\link IterativeLinearSolvers_Module IterativeLinearSolvers \endlink</td><td>\code#include <Eigen/IterativeLinearSolvers>\endcode</td><td>Iterative solvers to solve large general linear square problems (including self-adjoint positive definite problems)</td></tr>
 <tr><td>\link Sparse_modules Sparse \endlink</td><td>\code#include <Eigen/Sparse>\endcode</td><td>Includes all the above modules</td></tr>
 </table>
 
-\section TutorialSparseIntro Sparse matrix representation
+\section TutorialSparseIntro Sparse matrix format
 
 In many applications (e.g., finite element methods) it is common to deal with very large matrices where only a few coefficients are different from zero.
 In such cases, memory consumption can be reduced and performance increased by using a specialized representation storing only the nonzero coefficients. Such a matrix is called a sparse matrix.
 
@@ -224,102 +227,10 @@ A typical scenario of this approach is illustrated bellow:
 - The line 5 suppresses the remaining empty space and transforms the matrix into a compressed column storage.
 
-\section TutorialSparseDirectSolvers Solving linear problems
-
-%Eigen currently provides a limited set of built-in solvers, as well as wrappers to external solver libraries.
-They are summarized in the following table:
-
-<table class="manual">
-<tr><th>Class</th><th>Module</th><th>Solver kind</th><th>Matrix kind</th><th>Features related to performance</th><th>Dependencies, License</th><th>Notes</th></tr>
-<tr><td>SimplicialLLT</td><td>\link SparseCholesky_Module SparseCholesky \endlink</td><td>Direct LLt factorization</td><td>SPD</td><td>Fill-in reducing</td><td>built-in, LGPL</td><td>SimplicialLDLT is often preferable</td></tr>
-<tr><td>SimplicialLDLT</td><td>\link SparseCholesky_Module SparseCholesky \endlink</td><td>Direct LDLt factorization</td><td>SPD</td><td>Fill-in reducing</td><td>built-in, LGPL</td><td>Recommended for very sparse and not too large problems (e.g., 2D Poisson eq.)</td></tr>
-<tr><td>ConjugateGradient</td><td>\link IterativeLinearSolvers_Module IterativeLinearSolvers \endlink</td><td>Classic iterative CG</td><td>SPD</td><td>Preconditioning</td><td>built-in, LGPL</td><td>Recommended for large symmetric problems (e.g., 3D Poisson eq.)</td></tr>
-<tr><td>BiCGSTAB</td><td>\link IterativeLinearSolvers_Module IterativeLinearSolvers \endlink</td><td>Iterative stabilized bi-conjugate gradient</td><td>Square</td><td>Preconditioning</td><td>built-in, LGPL</td><td>Might not always converge</td></tr>
-<tr><td>PastixLLT \n PastixLDLT \n PastixLU</td><td>\link PaStiXSupport_Module PaStiXSupport \endlink</td><td>Direct LLt, LDLt, LU factorizations</td><td>SPD \n SPD \n Square</td><td>Fill-in reducing, Leverage fast dense algebra, Multithreading</td><td>Requires the PaStiX package, \b CeCILL-C</td><td>Optimized for tough problems and symmetric patterns</td></tr>
-<tr><td>CholmodSupernodalLLT</td><td>\link CholmodSupport_Module CholmodSupport \endlink</td><td>Direct LLt factorization</td><td>SPD</td><td>Fill-in reducing, Leverage fast dense algebra</td><td>Requires the SuiteSparse package, \b GPL</td><td></td></tr>
-<tr><td>UmfPackLU</td><td>\link UmfPackSupport_Module UmfPackSupport \endlink</td><td>Direct LU factorization</td><td>Square</td><td>Fill-in reducing, Leverage fast dense algebra</td><td>Requires the SuiteSparse package, \b GPL</td><td></td></tr>
-<tr><td>SuperLU</td><td>\link SuperLUSupport_Module SuperLUSupport \endlink</td><td>Direct LU factorization</td><td>Square</td><td>Fill-in reducing, Leverage fast dense algebra</td><td>Requires the SuperLU library (BSD-like)</td><td></td></tr>
-</table>
-
-Here \c SPD means symmetric positive definite.
-
-All these solvers follow the same general concept.
-Here is a typical and general example:
-\code
-#include <Eigen/RequiredModuleName>
-// ...
-SparseMatrix<double> A;
-// fill A
-VectorXd b, x;
-// fill b
-// solve Ax = b
-SolverClassName<SparseMatrix<double> > solver;
-solver.compute(A);
-if(solver.info()!=Success) {
-  // decomposition failed
-  return;
-}
-x = solver.solve(b);
-if(solver.info()!=Success) {
-  // solving failed
-  return;
-}
-// solve for another right hand side:
-x1 = solver.solve(b1);
-\endcode
-
-For \c SPD solvers, a second optional template argument allows specifying which triangular part has to be used, e.g.:
-
-\code
-#include <Eigen/IterativeLinearSolvers>
-
-ConjugateGradient<SparseMatrix<double>, Eigen::Upper> solver;
-x = solver.compute(A).solve(b);
-\endcode
-In the above example, only the upper triangular part of the input matrix A is considered for solving. The opposite triangle might either be empty or contain arbitrary values.
-
-In the case where multiple problems with the same sparsity pattern have to be solved, the "compute" step can be decomposed as follows:
-\code
-SolverClassName<SparseMatrix<double> > solver;
-solver.analyzePattern(A);   // for this step the numerical values of A are not used
-solver.factorize(A);
-x1 = solver.solve(b1);
-x2 = solver.solve(b2);
-...
-A = ...;                    // modify the values of the nonzeros of A, the nonzeros pattern must stay unchanged
-solver.factorize(A);
-x1 = solver.solve(b1);
-x2 = solver.solve(b2);
-...
-\endcode
-The compute() method is equivalent to calling both analyzePattern() and factorize().
-
-Finally, each solver provides some specific features, such as the determinant, access to the factors, control of the iterations, and so on.
-More details are available in the documentation of the respective classes.
-
 \section TutorialSparseFeatureSet Supported operators and functions
 
-Because of their special storage format, sparse matrices cannot offer the same level of flexbility than dense matrices.
+Because of their special storage format, sparse matrices cannot offer the same level of flexibility as dense matrices.
 In Eigen's sparse module we chose to expose only the subset of the dense matrix API which can be efficiently implemented.
 In the following \em sm denotes a sparse matrix, \em sv a sparse vector, \em dm a dense matrix, and \em dv a dense vector.
 
@@ -420,16 +331,7 @@ sm2 = A.selfadjointView<Lower>().twistedBy(P);                                //
 sm2.selfadjointView<Lower>() = A.selfadjointView<Lower>().twistedBy(P);       // compute P S P' from the lower triangular part of A, and then only compute the lower part
 \endcode
 
-\subsection TutorialSparse_Submat Sub-matrices
-
-%Sparse matrices do not yet support addressing of arbitrary sub-matrices. Currently, one can only reference a set of contiguous \em inner vectors, i.e., a set of contiguous rows for a row-major matrix, or a set of contiguous columns for a column-major matrix:
-\code
- sm1.innerVector(j);       // returns an expression of the j-th column (resp. row) of the matrix if sm1 is col-major (resp. row-major)
- sm1.innerVectors(j, nb);  // returns an expression of the nb columns (resp. rows) starting from the j-th column (resp. row)
-                           // of the matrix if sm1 is col-major (resp. row-major)
- sm1.middleRows(j, nb);    // for row-major matrices only, get a range of nb rows
- sm1.middleCols(j, nb);    // for column-major matrices only, get a range of nb columns
-\endcode
+Please refer to the \link SparseQuickRefPage Quick Reference \endlink guide for the list of supported operations. The list of available linear solvers is given \link TopicSparseSystems here \endlink.
 
 */
--
cgit v1.2.3
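As a companion to the solver workflow discussed above (the common compute()/factorize()/solve() pattern, and the SparseLU module added to the module table), here is a minimal, self-contained sketch of that workflow using SparseLU. It is not part of the patch or of the Eigen documentation itself: the 3x3 matrix, its coefficient values, the right-hand side, and the explicit COLAMDOrdering choice are illustrative assumptions only.

\code
// Sketch: assemble a small sparse matrix from triplets and solve A x = b
// with SparseLU, checking info() after each step as the tutorial advises.
#include <Eigen/Dense>
#include <Eigen/SparseCore>
#include <Eigen/SparseLU>
#include <iostream>
#include <vector>

int main()
{
  typedef Eigen::SparseMatrix<double> SpMat;   // column-major by default
  typedef Eigen::Triplet<double> T;

  // Illustrative 3x3 matrix: A = [4 0 1; 0 3 0; 1 0 2]
  std::vector<T> coefficients;
  coefficients.push_back(T(0, 0, 4.0));
  coefficients.push_back(T(1, 1, 3.0));
  coefficients.push_back(T(2, 2, 2.0));
  coefficients.push_back(T(0, 2, 1.0));
  coefficients.push_back(T(2, 0, 1.0));

  SpMat A(3, 3);
  A.setFromTriplets(coefficients.begin(), coefficients.end());

  Eigen::VectorXd b(3);
  b << 1.0, 2.0, 3.0;

  Eigen::SparseLU<SpMat, Eigen::COLAMDOrdering<int> > solver;
  solver.analyzePattern(A);   // symbolic step: the values of A are not used
  solver.factorize(A);        // numerical factorization
  if (solver.info() != Eigen::Success)
  {
    std::cerr << "factorization failed" << std::endl;
    return 1;
  }

  Eigen::VectorXd x = solver.solve(b);
  if (solver.info() != Eigen::Success)
  {
    std::cerr << "solving failed" << std::endl;
    return 1;
  }

  std::cout << "x = " << x.transpose() << std::endl;
  return 0;
}
\endcode

Because all the solvers listed above follow the same concept, swapping Eigen::SparseLU for, e.g., SimplicialLDLT or ConjugateGradient (for SPD matrices) should leave the rest of the snippet essentially unchanged.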