author    Benoit Jacob <jacob.benoit.1@gmail.com>  2010-10-15 09:44:43 -0400
committer Benoit Jacob <jacob.benoit.1@gmail.com>  2010-10-15 09:44:43 -0400
commit    26129229ec12961687b0414c40e10e2880beec79 (patch)
tree      ad65b5a51cd493c96563462bd668744e55fa7f04
parent    fcee1903be17a07fa07019d21162a8d5797dc6a9 (diff)
doc updates/improvements
-rw-r--r--  doc/C00_QuickStartGuide.dox                 5
-rw-r--r--  doc/C01_TutorialMatrixClass.dox             2
-rw-r--r--  doc/C06_TutorialLinearAlgebra.dox          15
-rw-r--r--  doc/QuickReference.dox                      4
-rw-r--r--  doc/TopicLinearAlgebraDecompositions.dox   21
-rw-r--r--  doc/examples/TutorialLinAlgSVDSolve.cpp    15
6 files changed, 38 insertions(+), 24 deletions(-)
diff --git a/doc/C00_QuickStartGuide.dox b/doc/C00_QuickStartGuide.dox
index 3b7c405ca..e2381f9e3 100644
--- a/doc/C00_QuickStartGuide.dox
+++ b/doc/C00_QuickStartGuide.dox
@@ -83,8 +83,9 @@ The use of fixed-size matrices and vectors has two advantages. The compiler emit
\section GettingStartedConclusion Where to go from here?
-You could directly use our \ref QuickRefPage and class documentation, or if you do not yet feel ready for that, you could
-read the longer \ref TutorialMatrixClass "Tutorial" in which the Eigen library is explained in more detail.
+It's worth taking the time to read the \ref TutorialMatrixClass "long tutorial".
+
+However, if you think you don't need it, you can directly use the class documentation and our \ref QuickRefPage.
\li \b Next: \ref TutorialMatrixClass
diff --git a/doc/C01_TutorialMatrixClass.dox b/doc/C01_TutorialMatrixClass.dox
index b5d7ec0c7..5e3098361 100644
--- a/doc/C01_TutorialMatrixClass.dox
+++ b/doc/C01_TutorialMatrixClass.dox
@@ -92,7 +92,7 @@ Matrix<float, 3, Dynamic>
\section TutorialMatrixConstructors Constructors
-A default constructor is always available, and always has zero runtime cost. You can do:
+A default constructor is always available, never performs any dynamic memory allocation, and never initializes the matrix coefficients. You can do:
\code
Matrix3f a;
MatrixXf b;
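For illustration (not part of this patch), a minimal sketch of what the uninitialized default constructor means in practice; the types and calls below are standard Eigen, the sizes are arbitrary:
\code
#include <iostream>
#include <Eigen/Dense>

using namespace Eigen;

int main()
{
  Matrix3f a;   // fixed-size: storage exists, but the 9 coefficients are uninitialized
  MatrixXf b;   // dynamic-size: no heap allocation yet, the size is 0-by-0
  std::cout << "b is " << b.rows() << "x" << b.cols() << std::endl; // prints "b is 0x0"
  a.setZero();  // assign the coefficients explicitly before reading them
  std::cout << "a after setZero():\n" << a << std::endl;
}
\endcode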
diff --git a/doc/C06_TutorialLinearAlgebra.dox b/doc/C06_TutorialLinearAlgebra.dox
index 3e436d393..c8f2bf23d 100644
--- a/doc/C06_TutorialLinearAlgebra.dox
+++ b/doc/C06_TutorialLinearAlgebra.dox
@@ -180,9 +180,18 @@ Here is an example:
\section TutorialLinAlgLeastsquares Least squares solving
-Eigen doesn't currently provide built-in linear least squares solving functions, but you can easily compute that yourself
-from Eigen's decompositions. The most reliable way is to use a SVD (or better yet, JacobiSVD), and in the future
-these classes will offer methods for least squares solving. Another, potentially faster way, is to use a LLT decomposition
+The best way to do least squares solving is with an SVD decomposition. Eigen provides one as the JacobiSVD class, and its solve()
+method does least-squares solving.
+
+Here is an example:
+<table class="tutorial_code">
+<tr>
+ <td>\include TutorialLinAlgSVDSolve.cpp </td>
+ <td>output: \verbinclude TutorialLinAlgSVDSolve.out </td>
+</tr>
+</table>
+
+Another way, potentially faster but less reliable, is to use an LDLT decomposition
of the normal matrix. In any case, just read any reference text on least squares, and it will be very easy for you
to implement any linear least squares computation on top of Eigen.
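As a side note (not part of this patch), a minimal sketch of that normal-equations approach, assuming the same random A and b as in the SVD example above; it forms A^T A and solves with an LDLT factorization:
\code
#include <iostream>
#include <Eigen/Dense>

using namespace Eigen;

int main()
{
  MatrixXf A = MatrixXf::Random(3, 2);
  VectorXf b = VectorXf::Random(3);
  // Least squares via the normal equations: solve (A^T A) x = A^T b with LDLT.
  // Potentially faster than SVD, but squares the condition number of A.
  VectorXf x = (A.transpose() * A).ldlt().solve(A.transpose() * b);
  std::cout << "The least-squares solution via the normal equations is:\n" << x << std::endl;
}
\endcode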
diff --git a/doc/QuickReference.dox b/doc/QuickReference.dox
index d426b85de..a7d42767c 100644
--- a/doc/QuickReference.dox
+++ b/doc/QuickReference.dox
@@ -27,11 +27,11 @@ The Eigen library is divided in a Core module and several additional modules. Ea
<tr><td>\link LU_Module LU \endlink</td><td>\code#include <Eigen/LU>\endcode</td><td>Inverse, determinant, LU decompositions with solver (FullPivLU, PartialPivLU)</td></tr>
<tr><td>\link Cholesky_Module Cholesky \endlink</td><td>\code#include <Eigen/Cholesky>\endcode</td><td>LLT and LDLT Cholesky factorization with solver</td></tr>
<tr><td>\link Householder_Module Householder \endlink</td><td>\code#include <Eigen/Householder>\endcode</td><td>Householder transformations; this module is used by several linear algebra modules</td></tr>
-<tr><td>\link SVD_Module SVD \endlink</td><td>\code#include <Eigen/SVD>\endcode</td><td>%SVD decomposition with solver (SVD, JacobiSVD)</td></tr>
+<tr><td>\link SVD_Module SVD \endlink</td><td>\code#include <Eigen/SVD>\endcode</td><td>SVD decomposition with least-squares solver (JacobiSVD)</td></tr>
<tr><td>\link QR_Module QR \endlink</td><td>\code#include <Eigen/QR>\endcode</td><td>QR decomposition with solver (HouseholderQR, ColPivHouseholderQR, FullPivHouseholderQR)</td></tr>
<tr><td>\link Eigenvalues_Module Eigenvalues \endlink</td><td>\code#include <Eigen/Eigenvalues>\endcode</td><td>Eigenvalue, eigenvector decompositions (EigenSolver, SelfAdjointEigenSolver, ComplexEigenSolver)</td></tr>
<tr><td>\link Sparse_Module Sparse \endlink</td><td>\code#include <Eigen/Sparse>\endcode</td><td>%Sparse matrix storage and related basic linear algebra (SparseMatrix, DynamicSparseMatrix, SparseVector)</td></tr>
-<tr><td></td><td>\code#include <Eigen/Dense>\endcode</td><td>Includes Core, Geometry, LU, Cholesky, %SVD, QR, and Eigenvalues header files</td></tr>
+<tr><td></td><td>\code#include <Eigen/Dense>\endcode</td><td>Includes Core, Geometry, LU, Cholesky, SVD, QR, and Eigenvalues header files</td></tr>
<tr><td></td><td>\code#include <Eigen/Eigen>\endcode</td><td>Includes %Dense and %Sparse header files (the whole Eigen library)</td></tr>
</table>
diff --git a/doc/TopicLinearAlgebraDecompositions.dox b/doc/TopicLinearAlgebraDecompositions.dox
index ad8d0abea..203a05dd8 100644
--- a/doc/TopicLinearAlgebraDecompositions.dox
+++ b/doc/TopicLinearAlgebraDecompositions.dox
@@ -112,27 +112,15 @@ namespace Eigen {
<tr><td colspan="9">\n Singular values and eigenvalues decompositions</td></tr>
<tr>
- <td>SVD</td>
- <td>-</td>
- <td>Average</td>
- <td>Good</td>
- <td>Yes</td>
- <td>Singular values/vectors, least squares</td>
- <td>Yes</td>
- <td>Average</td>
- <td>-</td>
- </tr>
-
- <tr>
- <td>JacobiSVD</td>
+ <td>JacobiSVD (two-sided)</td>
<td>-</td>
<td>Slow (but fast for small matrices)</td>
- <td>Proven</td>
+ <td>Excellent-Proven<sup><a href="#note3">3</a></sup></td>
<td>Yes</td>
<td>Singular values/vectors, least squares</td>
- <td>-</td>
+ <td>Yes (and does least squares)</td>
<td>Excellent</td>
- <td>-</td>
+ <td>R-SVD</td>
</tr>
<tr>
@@ -251,6 +239,7 @@ namespace Eigen {
<ul>
<li><a name="note1">\b 1: </a>There exist two variants of the LDLT algorithm. Eigen's one produces a pure diagonal D matrix, and therefore it cannot handle indefinite matrices, unlike Lapack's one which produces a block diagonal D matrix.</li>
<li><a name="note2">\b 2: </a>Eigenvalues, SVD and Schur decompositions rely on iterative algorithms. Their convergence speed depends on how well the eigenvalues are separated.</li>
+<li><a name="note3">\b 3: </a>Our JacobiSVD is two-sided, making for proven and optimal precision for square matrices. For non-square matrices, we have to use a QR preconditioner first. The default choice, ColPivHouseholderQR, is already very reliable, but if you want it to be proven, use FullPivHouseholderQR instead.
</ul>
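For illustration (not part of this patch), a minimal sketch of choosing the fully pivoting preconditioner mentioned in note 3; it assumes the FullPivHouseholderQRPreconditioner template option, which requires full (not thin) U and V:
\code
#include <iostream>
#include <Eigen/Dense>

using namespace Eigen;

int main()
{
  MatrixXf A = MatrixXf::Random(4, 3);
  VectorXf b = VectorXf::Random(4);
  // Ask JacobiSVD to precondition with fully pivoting Householder QR for proven
  // accuracy on non-square matrices; this preconditioner needs full U and V.
  JacobiSVD<MatrixXf, FullPivHouseholderQRPreconditioner> svd(A, ComputeFullU | ComputeFullV);
  std::cout << "The least-squares solution is:\n" << svd.solve(b) << std::endl;
}
\endcode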
\section TopicLinAlgTerminology Terminology
diff --git a/doc/examples/TutorialLinAlgSVDSolve.cpp b/doc/examples/TutorialLinAlgSVDSolve.cpp
new file mode 100644
index 000000000..c75779d5f
--- /dev/null
+++ b/doc/examples/TutorialLinAlgSVDSolve.cpp
@@ -0,0 +1,15 @@
+#include <iostream>
+#include <Eigen/Dense>
+
+using namespace std;
+using namespace Eigen;
+
+int main()
+{
+ MatrixXf A = MatrixXf::Random(3, 2);
+ cout << "Here is the matrix A:\n" << A << endl;
+ VectorXf b = VectorXf::Random(3);
+ cout << "Here is the right hand side b:\n" << b << endl;
+ JacobiSVD<MatrixXf> svd(A, ComputeThinU | ComputeThinV);
+ cout << "The least-squares solution is:\n" << svd.solve(b) << endl;
+}