-rw-r--r--  doc/C05_TutorialLinearAlgebra.dox            310
-rw-r--r--  doc/C06_TutorialLinearAlgebra.dox              4
-rw-r--r--  doc/TopicLinearAlgebraDecompositions.dox       4
-rw-r--r--  doc/examples/TutorialLinAlgSetThreshold.cpp    7
4 files changed, 7 insertions, 318 deletions
diff --git a/doc/C05_TutorialLinearAlgebra.dox b/doc/C05_TutorialLinearAlgebra.dox
deleted file mode 100644
index af2096bf3..000000000
--- a/doc/C05_TutorialLinearAlgebra.dox
+++ /dev/null
@@ -1,310 +0,0 @@
-namespace Eigen {
-
-/** \page TutorialAdvancedLinearAlgebra Tutorial 3/4 - Advanced linear algebra
- \ingroup Tutorial
-
-<div class="eimainmenu">\ref index "Overview"
- | \ref TutorialCore "Core features"
- | \ref TutorialGeometry "Geometry"
- | \b Advanced \b linear \b algebra
- | \ref TutorialSparse "Sparse matrix"
-</div>
-
-This tutorial chapter explains how you can use Eigen to tackle various problems involving matrices:
-solving systems of linear equations, finding eigenvalues and eigenvectors, and so on.
-
-\b Table \b of \b contents
- - \ref TutorialAdvSolvers
- - \ref TutorialAdvLU
- - \ref TutorialAdvCholesky
- - \ref TutorialAdvQR
- - \ref TutorialAdvEigenProblems
-
-
-\section TutorialAdvSolvers Solving linear problems
-
-This part of the tutorial focuses on solving systems of linear equations. Such systems can be
-written in the form \f$ A \mathbf{x} = \mathbf{b} \f$, where both \f$ A \f$ and \f$ \mathbf{b} \f$
-are known, and \f$ \mathbf{x} \f$ is the unknown. Moreover, \f$ A \f$ is assumed to be a square
-matrix.
-
-The equation \f$ A \mathbf{x} = \mathbf{b} \f$ has a unique solution if \f$ A \f$ is invertible. If
-the matrix is not invertible, then the equation may have no or infinitely many solutions. All
-solvers assume that \f$ A \f$ is invertible, unless noted otherwise.
-
-Eigen offers various algorithms for this problem. The choice of algorithm mainly depends on the
-nature of the matrix \f$ A \f$, such as its shape, size and numerical properties.
- - The \ref TutorialAdvSolvers_LU "LU decomposition" (with partial pivoting) is a general-purpose
- algorithm which works for most problems.
- - Use the \ref TutorialAdvSolvers_Cholesky "Cholesky decomposition" if the matrix \f$ A \f$ is
- positive definite.
- - Use a special \ref TutorialAdvSolvers_Triangular "triangular solver" if the matrix \f$ A \f$ is
- upper or lower triangular.
- - Use of the \ref TutorialAdvSolvers_Inverse "matrix inverse" is not recommended in general, but
- may be appropriate in special cases, for instance if you want to solve several systems with the
- same matrix \f$ A \f$ and that matrix is small.
- - \ref TutorialAdvSolvers_Misc "Other solvers" (%LU decomposition with full pivoting, the singular
- value decomposition) are provided for special cases, such as when \f$ A \f$ is not invertible.
-
-The methods described here can be used whenever an expression involves the product of an inverse
-matrix with a vector or another matrix: \f$ A^{-1} \mathbf{v} \f$ or \f$ A^{-1} B \f$.
-
-
-\subsection TutorialAdvSolvers_LU LU decomposition (with partial pivoting)
-
-This is a general-purpose algorithm which performs well in most cases (provided the matrix \f$ A \f$
-is invertible), so if you are unsure about which algorithm to pick, choose this. The method proceeds
-in two steps. First, the %LU decomposition with partial pivoting is computed using the
-MatrixBase::partialPivLu() function. This yields an object of the class PartialPivLU. Then, the
-PartialPivLU::solve() method is called to compute a solution.
-
-As an example, suppose we want to solve the following system of linear equations:
-
-\f[ \begin{aligned}
- x + 2y + 3z &= 3 \\
- 4x + 5y + 6z &= 3 \\
- 7x + 8y + 10z &= 4.
-\end{aligned} \f]
-
-The following program solves this system:
-
-<table class="tutorial_code"><tr><td>
-\include Tutorial_PartialLU_solve.cpp
-</td><td>
-output: \include Tutorial_PartialLU_solve.out
-</td></tr></table>
-
-There are many situations in which we want to solve the same system of equations with different
-right-hand sides. One possibility is to put the right-hand sides as columns in a matrix \f$ B \f$
-and then solve the equation \f$ A X = B \f$. For instance, suppose that we want to solve the same
-system as before, but now we also need the solution of the same equations with 1 on the right-hand
-side. The following code computes the required solutions:
-
-<table class="tutorial_code"><tr><td>
-\include Tutorial_solve_multiple_rhs.cpp
-</td><td>
-output: \include Tutorial_solve_multiple_rhs.out
-</td></tr></table>
-
-However, this is not always possible. Often, you learn the right-hand side of the second
-problem, and whether you want to solve it at all, only after you have solved the first problem. In
-such a case, it's best to save the %LU decomposition and reuse it to solve the second problem. This is
-worth the effort because computing the %LU decomposition is much more expensive than using it to
-solve the equation. Here is some code to illustrate the procedure. It uses the constructor
-PartialPivLU::PartialPivLU(const MatrixType&) to compute the %LU decomposition.
-
-<table class="tutorial_code"><tr><td>
-\include Tutorial_solve_reuse_decomposition.cpp
-</td><td>
-output: \include Tutorial_solve_reuse_decomposition.out
-</td></tr></table>
-
-\b Warning: All this code presumes that the matrix \f$ A \f$ is invertible, so that the system
-\f$ A \mathbf{x} = \mathbf{b} \f$ has a unique solution. If the matrix \f$ A \f$ is not invertible,
-then the system \f$ A \mathbf{x} = \mathbf{b} \f$ has either zero or infinitely many solutions. In
-both cases, PartialPivLU::solve() will give nonsense results. For example, suppose that we want to
-solve the same system as above, but with the 10 in the last equation replaced by 9. Then the system
-of equations is inconsistent: adding the first and the third equation gives \f$ 8x + 10y + 12z = 7 \f$,
-which implies \f$ 4x + 5y + 6z = 3\frac12 \f$, in contradiction with the second equation. If we try
-to solve this inconsistent system with Eigen, we find:
-
-<table class="tutorial_code"><tr><td>
-\include Tutorial_solve_singular.cpp
-</td><td>
-output: \include Tutorial_solve_singular.out
-</td></tr></table>
-
-The %LU decomposition with \b full pivoting (class FullPivLU) and the singular value decomposition (class
-SVD) may be helpful in this case, as explained in the section \ref TutorialAdvSolvers_Misc below.
-
-\sa LU_Module, MatrixBase::partialPivLu(), PartialPivLU::solve(), class PartialPivLU.
-
-
-\subsection TutorialAdvSolvers_Cholesky Cholesky decomposition
-
-If the matrix \f$ A \f$ is \b symmetric \b positive \b definite, then the best method is to use a
-Cholesky decomposition: it is both faster and more accurate than the %LU decomposition. Such
-positive definite matrices often arise when solving overdetermined problems. These are linear
-systems \f$ A \mathbf{x} = \mathbf{b} \f$ in which the matrix \f$ A \f$ has more rows than columns,
-so that there are more equations than unknowns. Typically, there is no vector \f$ \mathbf{x} \f$
-which satisfies all the equations. Instead, we look for the least-squares solution, that is, the
-vector \f$ \mathbf{x} \f$ for which \f$ \| A \mathbf{x} - \mathbf{b} \|_2 \f$ is minimal. You can
-find this vector by solving the equation \f$ A^T \! A \mathbf{x} = A^T \mathbf{b} \f$. If the matrix
-\f$ A \f$ has full rank, then \f$ A^T \! A \f$ is positive definite and thus you can use the
-Cholesky decomposition to solve this system and find the least-squares solution to the original
-system \f$ A \mathbf{x} = \mathbf{b} \f$.
-
-Eigen offers two different Cholesky decompositions: the LLT class provides an \f$ LL^T \f$
-decomposition where L is a lower triangular matrix, and the LDLT class provides an \f$ LDL^T \f$
-decomposition where L is lower triangular with unit diagonal and D is a diagonal matrix. The latter
-includes pivoting and avoids square roots; this makes the %LDLT decomposition slightly more stable
-than the %LLT decomposition. The LDLT class is able to handle both positive- and negative-definite
-matrices, but not indefinite matrices.
-
-The API is the same as when using the %LU decomposition.
-
-\code
-#include <Eigen/Cholesky>
-MatrixXf D = MatrixXf::Random(8,4);
-MatrixXf A = D.transpose() * D;
-VectorXf b = A * VectorXf::Random(4);
-VectorXf x_llt = A.llt().solve(b); // using a LLT factorization
-VectorXf x_ldlt = A.ldlt().solve(b); // using a LDLT factorization
-\endcode
-
-The LLT and LDLT classes also provide an \em in \em place API for the case where the value of the
-right-hand side \f$ b \f$ is not needed anymore.
-
-\code
-A.llt().solveInPlace(b);
-\endcode
-
-This code replaces the vector \f$ b \f$ by the result \f$ x \f$.
-
-As before, you can reuse the factorization if you have to solve the same linear problem with
-different right-hand sides, e.g.:
-
-\code
-// ...
-LLT<MatrixXf> lltOfA(A);
-lltOfA.solveInPlace(b0);
-lltOfA.solveInPlace(b1);
-// ...
-\endcode
-
-\sa Cholesky_Module, MatrixBase::llt(), MatrixBase::ldlt(), LLT::solve(), LLT::solveInPlace(),
-LDLT::solve(), LDLT::solveInPlace(), class LLT, class LDLT.
-
-
-\subsection TutorialAdvSolvers_Triangular Triangular solver
-
-If the matrix \f$ A \f$ is triangular (upper or lower) and invertible (all of its diagonal
-coefficients are nonzero), then the problem can be solved directly using the TriangularView
-class. This is much faster than using an %LU or Cholesky decomposition (in fact, the triangular
-solver is used when you solve a system using the %LU or Cholesky decomposition). Here is an example:
-
-<table class="tutorial_code"><tr><td>
-\include Tutorial_solve_triangular.cpp
-</td><td>
-output: \include Tutorial_solve_triangular.out
-</td></tr></table>
-
-The MatrixBase::triangularView() function constructs an object of the class TriangularView, and
-TriangularView::solve() then solves the system. There is also an \e in \e place variant:
-
-<table class="tutorial_code"><tr><td>
-\include Tutorial_solve_triangular_inplace.cpp
-</td><td>
-output: \include Tutorial_solve_triangular_inplace.out
-</td></tr></table>
-
-\sa MatrixBase::triangularView(), TriangularView::solve(), TriangularView::solveInPlace(),
-TriangularView class.
-
-
-\subsection TutorialAdvSolvers_Inverse Direct inversion (for small matrices)
-
-The solution of the system \f$ A \mathbf{x} = \mathbf{b} \f$ is given by \f$ \mathbf{x} = A^{-1}
-\mathbf{b} \f$. This suggests the following approach for solving the system: compute the matrix
-inverse and multiply that with the right-hand side. This is often not a good approach: using the %LU
-decomposition with partial pivoting yields a more accurate algorithm that is usually just as fast or
-even faster. However, using the matrix inverse can be faster if the matrix \f$ A \f$ is small
-(of size 4&times;4 or less) and of fixed size, though numerical stability problems may still remain. Here is an example of
-how you would write this in Eigen:
-
-<table class="tutorial_code"><tr><td>
-\include Tutorial_solve_matrix_inverse.cpp
-</td><td>
-output: \include Tutorial_solve_matrix_inverse.out
-</td></tr></table>
-
-Note that the function inverse() is defined in the \ref LU_Module.
-
-\sa MatrixBase::inverse().
-
-
-\subsection TutorialAdvSolvers_Misc Other solvers (for singular matrices and special cases)
-
-Finally, Eigen also offers solvers based on a singular value decomposition (%SVD) or the %LU
-decomposition with full pivoting. These have the same API as the solvers based on the %LU
-decomposition with partial pivoting (PartialPivLU).
-
-The solver based on the %SVD uses the class SVD. It can handle singular matrices. Here is an example
-of its use:
-
-\code
-#include <Eigen/SVD>
-// ...
-MatrixXf A = MatrixXf::Random(20,20);
-VectorXf b = VectorXf::Random(20);
-VectorXf x = A.svd().solve(b);
-SVD<MatrixXf> svdOfA(A);
-x = svdOfA.solve(b);
-\endcode
-
-%LU decomposition with full pivoting has better numerical stability than %LU decomposition with
-partial pivoting. It is defined in the class FullPivLU. The solver can also handle singular matrices.
-
-\code
-#include <Eigen/LU>
-// ...
-MatrixXf A = MatrixXf::Random(20,20);
-VectorXf b = VectorXf::Random(20);
-VectorXf x = A.lu().solve(b);
-FullPivLU<MatrixXf> luOfA(A);
-x = luOfA.solve(b);
-\endcode
-
-See the section \ref TutorialAdvLU below.
-
-\sa class SVD, SVD::solve(), SVD_Module, class FullPivLU, LU::solve(), LU_Module.
-
-
-
-<a href="#" class="top">top</a>\section TutorialAdvLU LU
-
-Eigen provides a rank-revealing LU decomposition with full pivoting, which has very good numerical stability.
-
-You can obtain the LU decomposition of a matrix by calling \link MatrixBase::lu() lu() \endlink, which is the easiest way if you're going to use the LU decomposition only once, as in
-\code
-#include <Eigen/LU>
-MatrixXf A = MatrixXf::Random(20,20);
-VectorXf b = VectorXf::Random(20);
-VectorXf x = A.lu().solve(b);
-\endcode
-
-Alternatively, you can construct a named LU decomposition, which allows you to reuse it for more than one operation:
-\code
-#include <Eigen/LU>
-MatrixXf A = MatrixXf::Random(20,20);
-Eigen::FullPivLU<MatrixXf> lu(A);
-cout << "The rank of A is " << lu.rank() << endl;
-if(lu.isInvertible()) {
- cout << "A is invertible, its inverse is:" << endl << lu.inverse() << endl;
-}
-else {
- cout << "Here's a matrix whose columns form a basis of the kernel a.k.a. nullspace of A:"
- << endl << lu.kernel() << endl;
-}
-\endcode
-
-\sa LU_Module, LU::solve(), class FullPivLU
-
-<a href="#" class="top">top</a>\section TutorialAdvCholesky Cholesky
-todo
-
-\sa Cholesky_Module, LLT::solve(), LLT::solveInPlace(), LDLT::solve(), LDLT::solveInPlace(), class LLT, class LDLT
-
-<a href="#" class="top">top</a>\section TutorialAdvQR QR
-todo
-
-\sa QR_Module, class QR
-
-<a href="#" class="top">top</a>\section TutorialAdvEigenProblems Eigenvalue problems
-todo
-
-\sa class SelfAdjointEigenSolver, class EigenSolver
-
-*/
-
-}
diff --git a/doc/C06_TutorialLinearAlgebra.dox b/doc/C06_TutorialLinearAlgebra.dox
index 7c851ec34..c3bb7b40d 100644
--- a/doc/C06_TutorialLinearAlgebra.dox
+++ b/doc/C06_TutorialLinearAlgebra.dox
@@ -229,7 +229,9 @@ Of course, any rank computation depends on the choice of an arbitrary threshold,
floating-point matrix is \em exactly rank-deficient. Eigen picks a sensible default threshold, which depends
on the decomposition but is typically the diagonal size times machine epsilon. While this is the best default we
could pick, only you know what is the right threshold for your application. You can set this by calling setThreshold()
-on your decomposition object before calling compute(), as in this example:
+on your decomposition object before calling rank() or any other method that needs to use such a threshold.
+The decomposition itself, i.e. the compute() method, is independent of the threshold. You don't need to recompute the
+decomposition after you've changed the threshold.
<table class="tutorial_code">
<tr>
diff --git a/doc/TopicLinearAlgebraDecompositions.dox b/doc/TopicLinearAlgebraDecompositions.dox
index 1ff0549f9..ad8d0abea 100644
--- a/doc/TopicLinearAlgebraDecompositions.dox
+++ b/doc/TopicLinearAlgebraDecompositions.dox
@@ -249,8 +249,8 @@ namespace Eigen {
\b Notes:
<ul>
-<li><a name="note1">\b 1: </a>There exist a couple of variants of the LDLT algorithm. Eigen's one produces a pure diagonal matrix, and therefore it cannot handle indefinite matrix, unlike Lapack's one which produces a block diagonal matrix.</li>
-<li><a name="note2">\b 2: </a>Eigenvalues and Schur decompositions rely on iterative algorithms. Their convergence speed depends on how well the eigenvalues are separated.</li>
+<li><a name="note1">\b 1: </a>There exist two variants of the LDLT algorithm. Eigen's one produces a pure diagonal D matrix, and therefore it cannot handle indefinite matrices, unlike Lapack's one which produces a block diagonal D matrix.</li>
+<li><a name="note2">\b 2: </a>Eigenvalues, SVD and Schur decompositions rely on iterative algorithms. Their convergence speed depends on how well the eigenvalues are separated.</li>
</ul>
\section TopicLinAlgTerminology Terminology
diff --git a/doc/examples/TutorialLinAlgSetThreshold.cpp b/doc/examples/TutorialLinAlgSetThreshold.cpp
index e0927cf27..3956b13a3 100644
--- a/doc/examples/TutorialLinAlgSetThreshold.cpp
+++ b/doc/examples/TutorialLinAlgSetThreshold.cpp
@@ -7,13 +7,10 @@ using namespace Eigen;
int main()
{
Matrix2d A;
- FullPivLU<Matrix2d> lu;
A << 2, 1,
2, 0.9999999999;
- lu.compute(A);
+ FullPivLU<Matrix2d> lu(A);
cout << "By default, the rank of A is found to be " << lu.rank() << endl;
- cout << "Now recomputing the LU decomposition with threshold 1e-5" << endl;
lu.setThreshold(1e-5);
- lu.compute(A);
- cout << "The rank of A is found to be " << lu.rank() << endl;
+ cout << "With threshold 1e-5, the rank of A is found to be " << lu.rank() << endl;
}