From 8fbe0e4699b4c03dd62b266371f23b103319ec36 Mon Sep 17 00:00:00 2001
From: Gael Guennebaud
Date: Wed, 4 Dec 2019 10:57:07 +0100
Subject: Update old links to bitbucket to point to gitlab.com

---
 doc/DenseDecompositionBenchmark.dox | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/doc/DenseDecompositionBenchmark.dox b/doc/DenseDecompositionBenchmark.dox
index 7be9c70cd..8f9570b7a 100644
--- a/doc/DenseDecompositionBenchmark.dox
+++ b/doc/DenseDecompositionBenchmark.dox
@@ -35,7 +35,7 @@ Timings are in \b milliseconds, and factors are relative to the LLT decompositio
 + For large problem sizes, only the decompositions implementing a cache-friendly blocking strategy scale well. Those include LLT, PartialPivLU, HouseholderQR, and BDCSVD. This explains why, for a 4k x 4k matrix, HouseholderQR is faster than LDLT. In the future, LDLT and ColPivHouseholderQR will also implement blocking strategies.
 + CompleteOrthogonalDecomposition is based on ColPivHouseholderQR, and the two thus achieve the same level of performance.
 
-The above table has been generated by the bench/dense_solvers.cpp file, feel free to hack it to generate a table matching your hardware, compiler, and favorite problem sizes.
+The above table has been generated by the bench/dense_solvers.cpp file, feel free to hack it to generate a table matching your hardware, compiler, and favorite problem sizes.
 
 */
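For readers who want to regenerate such a timing table on their own machine, the sketch below shows how timings could be collected with Eigen's dense decomposition classes and std::chrono. It is only an illustrative stand-in for bench/dense_solvers.cpp: the 1000x1000 problem size, the five-run repetition, and the time_ms helper are assumptions made here, not part of the patched documentation or of the real benchmark.

// Illustrative micro-benchmark, not the actual bench/dense_solvers.cpp.
// Times a few Eigen dense decompositions on one (assumed) problem size.
#include <Eigen/Dense>
#include <algorithm>
#include <chrono>
#include <iostream>

template <typename F>
double time_ms(F&& f, int reps = 5) {
  // Return the best wall-clock time over `reps` runs, in milliseconds.
  double best = 1e300;
  for (int i = 0; i < reps; ++i) {
    auto t0 = std::chrono::high_resolution_clock::now();
    f();
    auto t1 = std::chrono::high_resolution_clock::now();
    best = std::min(best, std::chrono::duration<double, std::milli>(t1 - t0).count());
  }
  return best;
}

int main() {
  const int n = 1000;  // arbitrary problem size, adjust to taste
  Eigen::MatrixXd A = Eigen::MatrixXd::Random(n, n);
  // Symmetric positive-definite variant, required by LLT/LDLT.
  Eigen::MatrixXd spd = A * A.transpose() + double(n) * Eigen::MatrixXd::Identity(n, n);

  std::cout << "LLT:                 " << time_ms([&] { Eigen::LLT<Eigen::MatrixXd> llt(spd); }) << " ms\n";
  std::cout << "LDLT:                " << time_ms([&] { Eigen::LDLT<Eigen::MatrixXd> ldlt(spd); }) << " ms\n";
  std::cout << "PartialPivLU:        " << time_ms([&] { Eigen::PartialPivLU<Eigen::MatrixXd> lu(A); }) << " ms\n";
  std::cout << "HouseholderQR:       " << time_ms([&] { Eigen::HouseholderQR<Eigen::MatrixXd> qr(A); }) << " ms\n";
  std::cout << "ColPivHouseholderQR: " << time_ms([&] { Eigen::ColPivHouseholderQR<Eigen::MatrixXd> qr(A); }) << " ms\n";
  return 0;
}

Looping over several problem sizes and normalizing each column by the LLT timing would reproduce the structure of the table discussed in the documentation above.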