author    Gael Guennebaud <g.gael@free.fr>  2013-01-06 23:57:54 +0100
committer Gael Guennebaud <g.gael@free.fr>  2013-01-06 23:57:54 +0100
commit    2abe7d8c6e11a02fc345f6ae464b4b759b092a67 (patch)
tree      32b0a1517731ee705e1d6ee7d8d944740642efbf /doc/TutorialSparse.dox
parent    091a49cad5d3aed47de9cb78c821c5f11b85e5f8 (diff)
Rename the dox files: the number prefixes are not needed anymore
Diffstat (limited to 'doc/TutorialSparse.dox')
-rw-r--r--  doc/TutorialSparse.dox  |  436
1 file changed, 436 insertions(+), 0 deletions(-)
diff --git a/doc/TutorialSparse.dox b/doc/TutorialSparse.dox
new file mode 100644
index 000000000..9f06005fa
--- /dev/null
+++ b/doc/TutorialSparse.dox
@@ -0,0 +1,436 @@
+namespace Eigen {
+
+/** \eigenManualPage TutorialSparse Sparse matrix manipulations
+
+\eigenAutoToc
+
+Manipulating and solving sparse problems involves various modules which are summarized below:
+
+<table class="manual">
+<tr><th>Module</th><th>Header file</th><th>Contents</th></tr>
+<tr><td>\link SparseCore_Module SparseCore \endlink</td><td>\code#include <Eigen/SparseCore>\endcode</td><td>SparseMatrix and SparseVector classes, matrix assembly, basic sparse linear algebra (including sparse triangular solvers)</td></tr>
+<tr><td>\link SparseCholesky_Module SparseCholesky \endlink</td><td>\code#include <Eigen/SparseCholesky>\endcode</td><td>Direct sparse LLT and LDLT Cholesky factorization to solve sparse self-adjoint positive definite problems</td></tr>
+<tr><td>\link IterativeLinearSolvers_Module IterativeLinearSolvers \endlink</td><td>\code#include <Eigen/IterativeLinearSolvers>\endcode</td><td>Iterative solvers to solve large general square linear problems (including self-adjoint positive definite problems)</td></tr>
+<tr><td>\link Sparse_modules Sparse \endlink</td><td>\code#include <Eigen/Sparse>\endcode</td><td>Includes all the above modules</td></tr>
+</table>
+
+\section TutorialSparseIntro Sparse matrix representation
+
+In many applications (e.g., finite element methods) it is common to deal with very large matrices where only a few coefficients are different from zero. In such cases, memory consumption can be reduced and performance increased by using a specialized representation storing only the nonzero coefficients. Such a matrix is called a sparse matrix.
+
+\b The \b %SparseMatrix \b class
+
+The class SparseMatrix is the main sparse matrix representation of Eigen's sparse module; it offers high performance and low memory usage.
+It implements a more versatile variant of the widely-used Compressed Column (or Row) Storage scheme.
+It consists of four compact arrays:
+ - \c Values: stores the coefficient values of the non-zeros.
+ - \c InnerIndices: stores the row (resp. column) indices of the non-zeros.
+ - \c OuterStarts: stores for each column (resp. row) the index of the first non-zero in the previous two arrays.
+ - \c InnerNNZs: stores the number of non-zeros of each column (resp. row).
+
+The word \c inner refers to an \em inner \em vector that is a column for a column-major matrix, or a row for a row-major matrix.
+The word \c outer refers to the other direction.
+
+This storage scheme is better explained with an example. The following matrix
+<table class="manual">
+<tr><td> 0</td><td>3</td><td> 0</td><td>0</td><td> 0</td></tr>
+<tr><td>22</td><td>0</td><td> 0</td><td>0</td><td>17</td></tr>
+<tr><td> 7</td><td>5</td><td> 0</td><td>1</td><td> 0</td></tr>
+<tr><td> 0</td><td>0</td><td> 0</td><td>0</td><td> 0</td></tr>
+<tr><td> 0</td><td>0</td><td>14</td><td>0</td><td> 8</td></tr>
+</table>
+
+and one of its possible sparse, \b column \b major representations:
+<table class="manual">
+<tr><td>Values:</td> <td>22</td><td>7</td><td>_</td><td>3</td><td>5</td><td>14</td><td>_</td><td>_</td><td>1</td><td>_</td><td>17</td><td>8</td></tr>
+<tr><td>InnerIndices:</td> <td> 1</td><td>2</td><td>_</td><td>0</td><td>2</td><td> 4</td><td>_</td><td>_</td><td>2</td><td>_</td><td> 1</td><td>4</td></tr>
+</table>
+<table class="manual">
+<tr><td>OuterStarts:</td><td>0</td><td>3</td><td>5</td><td>8</td><td>10</td><td>\em 12 </td></tr>
+<tr><td>InnerNNZs:</td> <td>2</td><td>2</td><td>1</td><td>1</td><td> 2</td><td></td></tr>
+</table>
+
+Currently, the elements of a given inner vector are always guaranteed to be sorted by increasing inner indices.
+The \c "_" indicates available free space to quickly insert new elements.
+Assuming no reallocation is needed, the insertion of a random element is therefore in O(nnz_j) where nnz_j is the number of nonzeros of the respective inner vector.
+On the other hand, inserting elements with increasing inner indices in a given inner vector is much more efficient since it only requires increasing the respective \c InnerNNZs entry, which is an O(1) operation.
+
+The case where no empty space is available is a special case, and is referred to as the \em compressed mode.
+It corresponds to the widely used Compressed Column (or Row) Storage schemes (CCS or CRS).
+Any SparseMatrix can be turned to this form by calling the SparseMatrix::makeCompressed() function.
+In this case, one can remark that the \c InnerNNZs array is redundant with \c OuterStarts because of the equality: \c InnerNNZs[j] = \c OuterStarts[j+1] - \c OuterStarts[j].
+Therefore, in practice a call to SparseMatrix::makeCompressed() frees this buffer.
+
+It is worth noting that most of our wrappers to external libraries require compressed matrices as inputs.
+
+%Eigen's operations always produce \b compressed sparse matrices.
+On the other hand, the insertion of a new element into a SparseMatrix converts the latter to the \b uncompressed mode.
+
+Here is the previous matrix represented in compressed mode:
+<table class="manual">
+<tr><td>Values:</td> <td>22</td><td>7</td><td>3</td><td>5</td><td>14</td><td>1</td><td>17</td><td>8</td></tr>
+<tr><td>InnerIndices:</td> <td> 1</td><td>2</td><td>0</td><td>2</td><td> 4</td><td>2</td><td> 1</td><td>4</td></tr>
+</table>
+<table class="manual">
+<tr><td>OuterStarts:</td><td>0</td><td>2</td><td>4</td><td>5</td><td>6</td><td>\em 8 </td></tr>
+</table>
+
+A SparseVector is a special case of a SparseMatrix where only the \c Values and \c InnerIndices arrays are stored.
+There is no notion of compressed/uncompressed mode for a SparseVector.
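+
+The following minimal sketch illustrates the two modes on a small matrix: insertions switch to the uncompressed mode, and makeCompressed() switches back. In compressed mode, the raw arrays can be inspected through the valuePtr(), innerIndexPtr(), and outerIndexPtr() functions (the inserted values here are arbitrary):
+\code
+#include <Eigen/SparseCore>
+#include <iostream>
+
+int main()
+{
+  Eigen::SparseMatrix<double> mat(5,5);      // column-major by default
+  mat.insert(1,0) = 22;                      // insertion switches to the uncompressed mode
+  mat.insert(2,0) = 7;
+  mat.insert(0,1) = 3;
+  std::cout << mat.isCompressed() << "\n";   // 0: uncompressed
+  mat.makeCompressed();                      // frees the empty space and the InnerNNZs buffer
+  std::cout << mat.isCompressed() << "\n";   // 1: compressed (CCS)
+  const double* values = mat.valuePtr();      // {22, 7, 3}
+  const int*    inner  = mat.innerIndexPtr(); // {1, 2, 0}
+  const int*    outer  = mat.outerIndexPtr(); // {0, 2, 3, 3, 3, 3}
+  std::cout << values[0] << " " << inner[0] << " " << outer[1] << "\n"; // prints: 22 1 2
+  return 0;
+}
+\endcode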
+
+
+\section TutorialSparseExample First example
+
+Before describing each individual class, let's start with the following typical example: solving the Laplace equation \f$ \Delta u = 0 \f$ on a regular 2D grid using a finite difference scheme and Dirichlet boundary conditions.
+Such a problem can be mathematically expressed as a linear problem of the form \f$ Ax=b \f$ where \f$ x \f$ is the vector of \c m unknowns (in our case, the values of the pixels), \f$ b \f$ is the right-hand-side vector resulting from the boundary conditions, and \f$ A \f$ is an \f$ m \times m \f$ matrix containing only a few non-zero elements resulting from the discretization of the Laplacian operator.
+
+<table class="manual">
+<tr><td>
+\include Tutorial_sparse_example.cpp
+</td>
+<td>
+\image html Tutorial_sparse_example.jpeg
+</td></tr></table>
+
+In this example, we start by defining a column-major sparse matrix type of double \c SparseMatrix<double>, and a triplet list of the same scalar type \c Triplet<double>. A triplet is a simple object representing a non-zero entry as the triplet: \c row index, \c column index, \c value.
+
+In the main function, we declare a list \c coefficients of triplets (as a std vector) and the right hand side vector \f$ b \f$ which are filled by the \a buildProblem function.
+The raw and flat list of non-zero entries is then converted to a true SparseMatrix object \c A.
+Note that the elements of the list do not have to be sorted, and possible duplicate entries will be summed up.
+
+The last step consists of effectively solving the assembled problem.
+Since the resulting matrix \c A is symmetric by construction, we can perform a direct Cholesky factorization via the SimplicialLDLT class which behaves like its LDLT counterpart for dense objects.
+
+The resulting vector \c x contains the pixel values as a 1D array which is saved to a jpeg file shown on the right of the code above.
+
+Describing the \a buildProblem and \a save functions is beyond the scope of this tutorial. They are given \ref TutorialSparse_example_details "here" for the curious, and for reproducibility purposes.
+
+
+
+
+\section TutorialSparseSparseMatrix The SparseMatrix class
+
+\b %Matrix \b and \b vector \b properties \n
+
+The SparseMatrix and SparseVector classes take three template arguments:
+ - the scalar type (e.g., double),
+ - the storage order (ColMajor or RowMajor, the default is ColMajor),
+ - the inner index type (default is \c int).
+
+As for dense Matrix objects, constructors take the size of the object.
+Here are some examples:
+
+\code
+SparseMatrix<std::complex<float> > mat(1000,2000); // declares a 1000x2000 column-major compressed sparse matrix of complex<float>
+SparseMatrix<double,RowMajor> mat(1000,2000); // declares a 1000x2000 row-major compressed sparse matrix of double
+SparseVector<std::complex<float> > vec(1000); // declares a column sparse vector of complex<float> of size 1000
+SparseVector<double,RowMajor> vec(1000); // declares a row sparse vector of double of size 1000
+\endcode
+
+In the rest of the tutorial, \c mat and \c vec represent any sparse-matrix and sparse-vector objects, respectively.
+
+The dimensions of a matrix can be queried using the following functions:
+<table class="manual">
+<tr><td>Standard \n dimensions</td><td>\code
+mat.rows()
+mat.cols()\endcode</td>
+<td>\code
+vec.size() \endcode</td>
+</tr>
+<tr><td>Sizes along the \n inner/outer dimensions</td><td>\code
+mat.innerSize()
+mat.outerSize()\endcode</td>
+<td></td>
+</tr>
+<tr><td>Number of non \n zero coefficients</td><td>\code
+mat.nonZeros() \endcode</td>
+<td>\code
+vec.nonZeros() \endcode</td></tr>
+</table>
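+
+As a minimal illustration of these accessors (the sizes are arbitrary, and \c <cassert> is assumed to be included):
+\code
+Eigen::SparseMatrix<double> mat(3,5);   // column-major: inner = rows, outer = columns
+assert(mat.innerSize() == 3);           // for a row-major matrix, innerSize() would be 5
+assert(mat.outerSize() == 5);
+assert(mat.nonZeros() == 0);            // nothing has been inserted yet
+\endcode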
+
+
+\b Iterating \b over \b the \b nonzero \b coefficients \n
+
+Random access to the elements of a sparse object can be done through the \c coeffRef(i,j) function.
+However, this function involves a quite expensive binary search.
+In most cases, one only wants to iterate over the non-zero elements. This is achieved by a standard loop over the outer dimension, and then by iterating over the non-zeros of the current inner vector via an InnerIterator. Thus, the non-zero entries have to be visited in the same order as the storage order.
+Here is an example:
+<table class="manual">
+<tr><td>
+\code
+SparseMatrix<double> mat(rows,cols);
+for (int k=0; k<mat.outerSize(); ++k)
+ for (SparseMatrix<double>::InnerIterator it(mat,k); it; ++it)
+ {
+ it.value();
+ it.row(); // row index
+ it.col(); // col index (here it is equal to k)
+ it.index(); // inner index, here it is equal to it.row()
+ }
+\endcode
+</td><td>
+\code
+SparseVector<double> vec(size);
+for (SparseVector<double>::InnerIterator it(vec); it; ++it)
+{
+ it.value(); // == vec[ it.index() ]
+ it.index();
+}
+\endcode
+</td></tr>
+</table>
+For a writable expression, the referenced value can be modified using the valueRef() function.
+If the type of the sparse matrix or vector depends on a template parameter, then the \c typename keyword is
+required to indicate that \c InnerIterator denotes a type; see \ref TopicTemplateKeyword for details.
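+
+For instance, a generic function summing the non-zero coefficients of any sparse matrix might be sketched as follows; the name \c sumOfValues is arbitrary, and note the \c typename keyword in front of \c InnerIterator:
+\code
+template<typename SparseMatrixType>
+typename SparseMatrixType::Scalar sumOfValues(const SparseMatrixType& mat)
+{
+  typename SparseMatrixType::Scalar sum(0);
+  for (int k=0; k<mat.outerSize(); ++k)
+    for (typename SparseMatrixType::InnerIterator it(mat,k); it; ++it)
+      sum += it.value();   // only the stored non-zeros are visited
+  return sum;
+}
+\endcode
+Of course, the built-in sum() member provides the same reduction directly.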
+
+
+\section TutorialSparseFilling Filling a sparse matrix
+
+Because of the special storage scheme of a SparseMatrix, special care has to be taken when adding new nonzero entries.
+For instance, the cost of a single purely random insertion into a SparseMatrix is \c O(nnz), where \c nnz is the current number of non-zero coefficients.
+
+The simplest way to create a sparse matrix while guaranteeing good performance is thus to first build a list of so-called \em triplets, and then convert it to a SparseMatrix.
+
+Here is a typical usage example:
+\code
+typedef Eigen::Triplet<double> T;
+typedef Eigen::SparseMatrix<double> SparseMatrixType;  // or any other SparseMatrix type
+std::vector<T> tripletList;
+tripletList.reserve(estimation_of_entries);
+for(...)
+{
+ // ...
+ tripletList.push_back(T(i,j,v_ij));
+}
+SparseMatrixType mat(rows,cols);
+mat.setFromTriplets(tripletList.begin(), tripletList.end());
+// mat is ready to go!
+\endcode
+The \c std::vector of triplets might contain the elements in arbitrary order, and might even contain duplicated elements that will be summed up by setFromTriplets().
+See the SparseMatrix::setFromTriplets() function and class Triplet for more details.
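+
+As a complete, self-contained variant of the above skeleton, the following sketch assembles a small 2x2 matrix; the coefficient values are arbitrary, and the duplicated entry illustrates the summation behavior:
+\code
+#include <Eigen/SparseCore>
+#include <vector>
+
+int main()
+{
+  typedef Eigen::Triplet<double> T;
+  std::vector<T> tripletList;
+  tripletList.reserve(5);
+  tripletList.push_back(T(0,0, 4.0));
+  tripletList.push_back(T(0,1,-1.0));
+  tripletList.push_back(T(1,0,-1.0));
+  tripletList.push_back(T(1,1, 4.0));
+  tripletList.push_back(T(0,0, 1.0));  // duplicate of (0,0): summed up to 5.0
+  Eigen::SparseMatrix<double> mat(2,2);
+  mat.setFromTriplets(tripletList.begin(), tripletList.end());
+  return 0;                            // mat is compressed and ready to go
+}
+\endcode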
+
+
+In some cases, however, slightly higher performance and lower memory consumption can be reached by directly inserting the non-zeros into the destination matrix.
+A typical scenario of this approach is illustrated below:
+\code
+1: SparseMatrix<double> mat(rows,cols); // default is column major
+2: mat.reserve(VectorXi::Constant(cols,6));
+3: for each i,j such that v_ij != 0
+4: mat.insert(i,j) = v_ij; // alternative: mat.coeffRef(i,j) += v_ij;
+5: mat.makeCompressed(); // optional
+\endcode
+
+- The key ingredient here is line 2 where we reserve room for 6 non-zeros per column. In many cases, the number of non-zeros per column or row can easily be known in advance. If it varies significantly for each inner vector, then it is possible to specify a reserve size for each inner vector by providing a vector object with an operator[](int j) returning the reserve size of the \c j-th inner vector (e.g., via a VectorXi or std::vector<int>). If only a rough estimate of the number of non-zeros per inner vector can be obtained, it is highly recommended to overestimate it rather than underestimate it. If this line is omitted, then the first insertion of a new element will reserve room for 2 elements per inner vector.
+- Line 4 performs a sorted insertion. In this example, the ideal case is when the \c j-th column is not full and contains non-zeros whose inner indices are smaller than \c i. In this case, this operation boils down to a trivial O(1) operation.
+- When calling insert(i,j), the element \c (i,j) must not already exist; otherwise, use the coeffRef(i,j) method, which allows, e.g., accumulating values. This method first performs a binary search and calls insert(i,j) only if the element does not already exist. It is more flexible than insert() but also more costly.
+- Line 5 suppresses the remaining empty space and transforms the matrix into compressed column storage.
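+
+A minimal concrete instance of this pattern, assuming a tridiagonal matrix (and the usual standard headers for std::max and std::min):
+\code
+int n = 100;
+Eigen::SparseMatrix<double> mat(n,n);            // default is column major
+mat.reserve(Eigen::VectorXi::Constant(n,3));     // at most 3 non-zeros per column
+for(int j=0; j<n; ++j)
+  for(int i=std::max(0,j-1); i<=std::min(n-1,j+1); ++i)
+    mat.insert(i,j) = (i==j) ? 2.0 : -1.0;       // sorted insertion: i increases within each column
+mat.makeCompressed();                            // optional
+\endcode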
+
+
+\section TutorialSparseDirectSolvers Solving linear problems
+
+%Eigen currently provides a limited set of built-in solvers, as well as wrappers to external solver libraries.
+They are summarized in the following table:
+
+<table class="manual">
+<tr><th>Class</th><th>Module</th><th>Solver kind</th><th>Matrix kind</th><th>Features related to performance</th>
+ <th>Dependencies,License</th><th class="width20em"><p>Notes</p></th></tr>
+<tr><td>SimplicialLLT </td><td>\link SparseCholesky_Module SparseCholesky \endlink</td><td>Direct LLt factorization</td><td>SPD</td><td>Fill-in reducing</td>
+ <td>built-in, LGPL</td>
+ <td>SimplicialLDLT is often preferable</td></tr>
+<tr><td>SimplicialLDLT </td><td>\link SparseCholesky_Module SparseCholesky \endlink</td><td>Direct LDLt factorization</td><td>SPD</td><td>Fill-in reducing</td>
+ <td>built-in, LGPL</td>
+ <td>Recommended for very sparse and not too large problems (e.g., 2D Poisson eq.)</td></tr>
+<tr><td>ConjugateGradient</td><td>\link IterativeLinearSolvers_Module IterativeLinearSolvers \endlink</td><td>Classic iterative CG</td><td>SPD</td><td>Preconditioning</td>
+ <td>built-in, LGPL</td>
+ <td>Recommended for large symmetric problems (e.g., 3D Poisson eq.)</td></tr>
+<tr><td>BiCGSTAB</td><td>\link IterativeLinearSolvers_Module IterativeLinearSolvers \endlink</td><td>Iterative stabilized bi-conjugate gradient</td><td>Square</td><td>Preconditioning</td>
+ <td>built-in, LGPL</td>
+ <td>Might not always converge</td></tr>
+
+
+<tr><td>PastixLLT \n PastixLDLT \n PastixLU</td><td>\link PaStiXSupport_Module PaStiXSupport \endlink</td><td>Direct LLt, LDLt, LU factorizations</td><td>SPD \n SPD \n Square</td><td>Fill-in reducing, Leverage fast dense algebra, Multithreading</td>
+ <td>Requires the <a href="http://pastix.gforge.inria.fr">PaStiX</a> package, \b CeCILL-C </td>
+ <td>Optimized for tough problems and symmetric patterns</td></tr>
+<tr><td>CholmodSupernodalLLT</td><td>\link CholmodSupport_Module CholmodSupport \endlink</td><td>Direct LLt factorization</td><td>SPD</td><td>Fill-in reducing, Leverage fast dense algebra</td>
+ <td>Requires the <a href="http://www.cise.ufl.edu/research/sparse/SuiteSparse/">SuiteSparse</a> package, \b GPL </td>
+ <td></td></tr>
+<tr><td>UmfPackLU</td><td>\link UmfPackSupport_Module UmfPackSupport \endlink</td><td>Direct LU factorization</td><td>Square</td><td>Fill-in reducing, Leverage fast dense algebra</td>
+ <td>Requires the <a href="http://www.cise.ufl.edu/research/sparse/SuiteSparse/">SuiteSparse</a> package, \b GPL </td>
+ <td></td></tr>
+<tr><td>SuperLU</td><td>\link SuperLUSupport_Module SuperLUSupport \endlink</td><td>Direct LU factorization</td><td>Square</td><td>Fill-in reducing, Leverage fast dense algebra</td>
+ <td>Requires the <a href="http://crd-legacy.lbl.gov/~xiaoye/SuperLU/">SuperLU</a> library, (BSD-like)</td>
+ <td></td></tr>
+</table>
+
+Here \c SPD means symmetric positive definite.
+
+All these solvers follow the same general concept.
+Here is a typical and general example:
+\code
+#include <Eigen/RequiredModuleName>
+// ...
+SparseMatrix<double> A;
+// fill A
+VectorXd b, x;
+// fill b
+// solve Ax = b
+SolverClassName<SparseMatrix<double> > solver;
+solver.compute(A);
+if(solver.info()!=Success) {
+ // decomposition failed
+ return;
+}
+x = solver.solve(b);
+if(solver.info()!=Success) {
+ // solving failed
+ return;
+}
+// solve for another right hand side:
+x1 = solver.solve(b1);
+\endcode
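+
+For instance, with the built-in SimplicialLDLT, this skeleton might be instantiated as follows; the small SPD tridiagonal system is arbitrary:
+\code
+#include <Eigen/SparseCore>
+#include <Eigen/SparseCholesky>
+#include <vector>
+#include <iostream>
+
+int main()
+{
+  typedef Eigen::Triplet<double> T;
+  int n = 4;
+  std::vector<T> coefficients;                    // assemble a tiny SPD tridiagonal matrix
+  for(int i=0; i<n; ++i) {
+    coefficients.push_back(T(i,i,2.0));
+    if(i>0)   coefficients.push_back(T(i,i-1,-1.0));
+    if(i<n-1) coefficients.push_back(T(i,i+1,-1.0));
+  }
+  Eigen::SparseMatrix<double> A(n,n);
+  A.setFromTriplets(coefficients.begin(), coefficients.end());
+  Eigen::VectorXd b = Eigen::VectorXd::Ones(n);
+  Eigen::SimplicialLDLT<Eigen::SparseMatrix<double> > solver;
+  solver.compute(A);
+  if(solver.info() != Eigen::Success) return 1;   // decomposition failed
+  Eigen::VectorXd x = solver.solve(b);
+  if(solver.info() != Eigen::Success) return 1;   // solving failed
+  std::cout << x.transpose() << std::endl;
+  return 0;
+}
+\endcode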
+
+For \c SPD solvers, a second optional template argument allows specifying which triangular part has to be used, e.g.:
+
+\code
+#include <Eigen/IterativeLinearSolvers>
+
+ConjugateGradient<SparseMatrix<double>, Eigen::Upper> solver;
+x = solver.compute(A).solve(b);
+\endcode
+In the above example, only the upper triangular part of the input matrix A is considered for solving. The opposite triangle might either be empty or contain arbitrary values.
+
+In the case where multiple problems with the same sparsity pattern have to be solved, the "compute" step can be decomposed as follows:
+\code
+SolverClassName<SparseMatrix<double> > solver;
+solver.analyzePattern(A); // for this step the numerical values of A are not used
+solver.factorize(A);
+x1 = solver.solve(b1);
+x2 = solver.solve(b2);
+...
+A = ...; // modify the values of the nonzeros of A; the sparsity pattern must remain unchanged
+solver.factorize(A);
+x1 = solver.solve(b1);
+x2 = solver.solve(b2);
+...
+\endcode
+The compute() method is equivalent to calling both analyzePattern() and factorize().
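+
+A minimal sketch of this pattern, assuming an iterative process in which only the values of \c A change while its sparsity pattern stays fixed (\c A, \c b, \c x, and \c nsteps are assumed to be declared beforehand):
+\code
+Eigen::SimplicialLDLT<Eigen::SparseMatrix<double> > solver;
+solver.analyzePattern(A);     // symbolic decomposition: done once
+for(int step=0; step<nsteps; ++step)
+{
+  // update the values of A here without touching its sparsity pattern,
+  // e.g., through coeffRef() on already existing entries
+  solver.factorize(A);        // numeric decomposition: redone at each step
+  x = solver.solve(b);
+}
+\endcode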
+
+Finally, each solver provides some specific features, such as determinant computation, access to the factors, control of the iterations, and so on.
+More details are available in the documentation of the respective classes.
+
+
+\section TutorialSparseFeatureSet Supported operators and functions
+
+Because of their special storage format, sparse matrices cannot offer the same level of flexibility as dense matrices.
+In Eigen's sparse module we chose to expose only the subset of the dense matrix API which can be efficiently implemented.
+In the following \em sm denotes a sparse matrix, \em sv a sparse vector, \em dm a dense matrix, and \em dv a dense vector.
+
+\subsection TutorialSparse_BasicOps Basic operations
+
+%Sparse expressions support most of the unary and binary coefficient wise operations:
+\code
+sm1.real() sm1.imag() -sm1 0.5*sm1
+sm1+sm2 sm1-sm2 sm1.cwiseProduct(sm2)
+\endcode
+However, a strong restriction is that the storage orders must match. For instance, in the following example:
+\code
+sm4 = sm1 + sm2 + sm3;
+\endcode
+sm1, sm2, and sm3 must all be row-major or all column-major.
+On the other hand, there is no restriction on the target matrix sm4.
+For instance, this means that for computing \f$ A^T + A \f$, the matrix \f$ A^T \f$ must be evaluated into a temporary matrix of compatible storage order:
+\code
+SparseMatrix<double> A, B;
+B = SparseMatrix<double>(A.transpose()) + A;
+\endcode
+
+Binary coefficient wise operators can also mix sparse and dense expressions:
+\code
+sm2 = sm1.cwiseProduct(dm1);
+dm2 = sm1 + dm1;
+\endcode
+
+
+%Sparse expressions also support transposition:
+\code
+sm1 = sm2.transpose();
+sm1 = sm2.adjoint();
+\endcode
+However, there is no transposeInPlace() method.
+
+
+\subsection TutorialSparse_Products Matrix products
+
+%Eigen supports various kinds of sparse matrix products, which are summarized below:
+ - \b sparse-dense:
+ \code
+dv2 = sm1 * dv1;
+dm2 = dm1 * sm1.adjoint();
+dm2 = 2. * sm1 * dm1;
+ \endcode
+ - \b symmetric \b sparse-dense. The product of a sparse symmetric matrix with a dense matrix (or vector) can also be optimized by specifying the symmetry with selfadjointView():
+ \code
+dm2 = sm1.selfadjointView<>() * dm1;        // if all coefficients of sm1 are stored
+dm2 = sm1.selfadjointView<Upper>() * dm1;   // if only the upper part of sm1 is stored
+dm2 = sm1.selfadjointView<Lower>() * dm1;   // if only the lower part of sm1 is stored
+ \endcode
+ - \b sparse-sparse. For sparse-sparse products, two different algorithms are available. The default one is conservative and preserves the explicit zeros that might appear:
+ \code
+sm3 = sm1 * sm2;
+sm3 = 4 * sm1.adjoint() * sm2;
+ \endcode
+ The second algorithm prunes the explicit zeros, or the values smaller than a given threshold, on the fly. It is enabled and controlled through the pruned() functions:
+ \code
+sm3 = (sm1 * sm2).pruned();             // removes numerical zeros
+sm3 = (sm1 * sm2).pruned(ref);          // removes elements much smaller than ref
+sm3 = (sm1 * sm2).pruned(ref,epsilon);  // removes elements smaller than ref*epsilon
+ \endcode
+
+ - \b permutations. Finally, permutations can be applied to sparse matrices too:
+ \code
+PermutationMatrix<Dynamic,Dynamic> P = ...;
+sm2 = P * sm1;
+sm2 = sm1 * P.inverse();
+sm2 = sm1.transpose() * P;
+ \endcode
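+
+As a concrete sketch of the permutation case above, a PermutationMatrix can be built from an index vector; here a simple reversal permutation is used (the choice is arbitrary), assuming a square sparse matrix \c sm1 of size \c n:
+\code
+Eigen::VectorXi indices(n);
+for(int i=0; i<n; ++i)
+  indices[i] = n-1-i;                                     // reversal permutation
+Eigen::PermutationMatrix<Eigen::Dynamic,Eigen::Dynamic> P(indices);
+Eigen::SparseMatrix<double> sm2 = P * sm1;                // permutes the rows of sm1
+\endcode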
+
+
+\subsection TutorialSparse_TriangularSelfadjoint Triangular and selfadjoint views
+
+Just as with dense matrices, the triangularView() function can be used to address a triangular part of the matrix, and perform triangular solves with a dense right hand side:
+\code
+dm2 = sm1.triangularView<Lower>().solve(dm1);
+dv2 = sm1.transpose().triangularView<Upper>().solve(dv1);
+\endcode
+
+The selfadjointView() function permits various operations:
+ - optimized sparse-dense matrix products:
+ \code
+dm2 = sm1.selfadjointView<>() * dm1;        // if all coefficients of sm1 are stored
+dm2 = sm1.selfadjointView<Upper>() * dm1;   // if only the upper part of sm1 is stored
+dm2 = sm1.selfadjointView<Lower>() * dm1;   // if only the lower part of sm1 is stored
+ \endcode
+ - copy of triangular parts:
+ \code
+sm2 = sm1.selfadjointView<Upper>(); // makes a full selfadjoint matrix from the upper triangular part
+sm2.selfadjointView<Lower>() = sm1.selfadjointView<Upper>(); // copies the upper triangular part to the lower triangular part
+ \endcode
+ - application of symmetric permutations:
+ \code
+PermutationMatrix<Dynamic,Dynamic> P = ...;
+sm2 = A.selfadjointView<Upper>().twistedBy(P); // compute P S P' from the upper triangular part of A, and make it a full matrix
+sm2.selfadjointView<Lower>() = A.selfadjointView<Lower>().twistedBy(P); // compute P S P' from the lower triangular part of A, and then only compute the lower part
+ \endcode
+
+\subsection TutorialSparse_Submat Sub-matrices
+
+%Sparse matrices do not yet support the addressing of arbitrary sub-matrices. Currently, one can only reference a set of contiguous \em inner vectors, i.e., a set of contiguous rows for a row-major matrix, or a set of contiguous columns for a column-major matrix:
+\code
+ sm1.innerVector(j); // returns an expression of the j-th column (resp. row) of the matrix if sm1 is col-major (resp. row-major)
+ sm1.innerVectors(j, nb); // returns an expression of the nb columns (resp. rows) starting from the j-th column (resp. row)
+ // of the matrix if sm1 is col-major (resp. row-major)
+ sm1.middleRows(j, nb); // for row major matrices only, get a range of nb rows
+ sm1.middleCols(j, nb); // for column major matrices only, get a range of nb columns
+\endcode
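+
+For instance, an inner vector or a range of inner vectors can be copied into new sparse objects; a minimal sketch, assuming a column-major matrix \c sm1 and valid indices \c j and \c nb:
+\code
+Eigen::SparseVector<double> col_j  = sm1.innerVector(j);      // copy of the j-th column
+Eigen::SparseMatrix<double> middle = sm1.middleCols(j, nb);   // copy of nb consecutive columns
+\endcode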
+
+*/
+
+}