namespace Eigen {

/** \page TopicMultiThreading Eigen and multi-threading

\section TopicMultiThreading_MakingEigenMT Make Eigen run in parallel

Some of %Eigen's algorithms can exploit the multiple cores present in your hardware.
To this end, it is enough to enable OpenMP on your compiler, for instance:
 - GCC: \c -fopenmp
 - ICC: \c -openmp
 - MSVC: check the respective option in the build properties.

You can control the number of threads that will be used through either the OpenMP API or %Eigen's API. The following mechanisms are listed from lowest to highest priority:
\code
 OMP_NUM_THREADS=n ./my_program
 omp_set_num_threads(n);
 Eigen::setNbThreads(n);
\endcode
Unless `setNbThreads` has been called, %Eigen uses the number of threads specified by OpenMP.
You can restore this behavior by calling `setNbThreads(0);`.
You can query the number of threads that will be used with:
\code
n = Eigen::nbThreads();
\endcode
You can disable %Eigen's multi threading at compile time by defining the \link TopicPreprocessorDirectivesPerformance EIGEN_DONT_PARALLELIZE \endlink preprocessor token.
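
For instance, a minimal, self-contained sketch combining these calls (the thread count and matrix sizes below are arbitrary) could look like:
\code
#include <Eigen/Dense>
#include <iostream>

int main()
{
  Eigen::setNbThreads(4);  // cap Eigen's own parallel kernels at 4 threads
  std::cout << "Eigen will use " << Eigen::nbThreads() << " threads\n";

  Eigen::MatrixXd A = Eigen::MatrixXd::Random(2000, 2000);
  Eigen::MatrixXd B = Eigen::MatrixXd::Random(2000, 2000);
  Eigen::MatrixXd C = A * B;  // general dense matrix-matrix product: runs in parallel

  std::cout << C(0, 0) << "\n";  // use the result so the product is not optimized away
}
\endcode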

Currently, the following algorithms can make use of multi-threading:
 - general dense matrix - matrix products
 - PartialPivLU
 - row-major-sparse * dense vector/matrix products
 - ConjugateGradient with \c Lower|Upper as the \c UpLo template parameter (see the sketch after this list).
 - BiCGSTAB with a row-major sparse matrix format.
 - LeastSquaresConjugateGradient
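
As an illustration of the ConjugateGradient entry above, the following sketch (using an arbitrary tridiagonal test matrix) selects the configuration for which the solver is multi-threaded, i.e. a row-major sparse matrix together with \c Lower|Upper:
\code
#include <Eigen/Sparse>
#include <Eigen/Dense>
#include <vector>

int main()
{
  typedef Eigen::SparseMatrix<double, Eigen::RowMajor> SpMat;
  const int n = 10000;

  // Build a simple symmetric positive definite (tridiagonal) test matrix.
  std::vector<Eigen::Triplet<double> > triplets;
  for (int i = 0; i < n; ++i) {
    triplets.push_back(Eigen::Triplet<double>(i, i, 2.0));
    if (i + 1 < n) {
      triplets.push_back(Eigen::Triplet<double>(i, i + 1, -1.0));
      triplets.push_back(Eigen::Triplet<double>(i + 1, i, -1.0));
    }
  }
  SpMat A(n, n);
  A.setFromTriplets(triplets.begin(), triplets.end());
  Eigen::VectorXd b = Eigen::VectorXd::Ones(n);

  // Only the Lower|Upper variant of ConjugateGradient benefits from multi-threading.
  Eigen::ConjugateGradient<SpMat, Eigen::Lower | Eigen::Upper> cg;
  cg.compute(A);
  Eigen::VectorXd x = cg.solve(b);
  (void)x;
}
\endcode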

\warning On most operating systems it is <strong>very important</strong> to limit the number of threads to the number of physical cores, otherwise significant slowdowns are to be expected, especially for operations involving dense matrices.

Indeed, the principle of hyper-threading is to run multiple threads (in most cases 2) on a single core in an interleaved manner.
However, %Eigen's matrix-matrix product kernel is fully optimized and already exploits nearly 100% of the CPU capacity.
Consequently, there is no room for running multiple such threads on a single core, and performance would drop significantly because of cache pollution and other sources of overhead.
At this point you may be wondering why %Eigen does not limit itself to the number of physical cores.
This is simply because OpenMP does not provide a way to query the number of physical cores, and thus %Eigen launches as many threads as there are <i>cores</i> reported by OpenMP.
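
%Eigen therefore cannot perform this limiting for you; a common workaround is to cap the thread count yourself. The snippet below is only a heuristic sketch: it assumes two logical cores per physical core, which is typical with hyper-threading but not universally true.
\code
#include <Eigen/Core>
#include <thread>

int main()
{
  // hardware_concurrency() reports the number of *logical* cores (or 0 if unknown).
  unsigned int logical = std::thread::hardware_concurrency();
  if (logical >= 2)
    Eigen::setNbThreads(static_cast<int>(logical / 2));  // heuristic: 2 logical cores per physical core
  // otherwise, keep the default chosen by OpenMP/Eigen

  // ... rest of the program ...
}
\endcode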

\section TopicMultiThreading_UsingEigenWithMT Using Eigen in a multi-threaded application

If your own application is multithreaded, and multiple threads make calls to %Eigen, then you have to initialize %Eigen by calling the following routine \b before creating the threads:
\code
#include <Eigen/Core>

int main(int argc, char** argv)
{
  Eigen::initParallel();
  
  ...
}
\endcode
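
For example, a minimal sketch using \c std::thread (the per-thread work and the \c worker name are arbitrary) could look like:
\code
#include <Eigen/Dense>
#include <thread>
#include <vector>

void worker()
{
  // Each thread performs its own, independent Eigen computations.
  Eigen::MatrixXd m = Eigen::MatrixXd::Identity(100, 100);
  Eigen::MatrixXd inv = m.inverse();
  (void)inv;
}

int main()
{
  Eigen::initParallel();  // must be called before the threads below are created

  std::vector<std::thread> pool;
  for (int i = 0; i < 4; ++i)
    pool.emplace_back(worker);
  for (auto& t : pool)
    t.join();
}
\endcode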

\note With %Eigen 3.3 and a fully C++11 compliant compiler (i.e., one providing <a href="http://en.cppreference.com/w/cpp/language/storage_duration#Static_local_variables">thread-safe static local variable initialization</a>), calling \c initParallel() is optional.

\warning Note that all functions generating random matrices are \b not re-entrant nor thread-safe, even after a call to `Eigen::initParallel()`. These include DenseBase::Random() and DenseBase::setRandom(). The reason is that these functions are based on `std::rand`, which is not re-entrant.
For a thread-safe random generator, we recommend using the C++11 random generators (\link DenseBase::NullaryExpr(Index, const CustomNullaryOp&) example \endlink) or `boost::random`, as sketched below.
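
For instance, here is a minimal sketch of such a generator, based on the DenseBase::NullaryExpr overload linked above together with a C++11 engine (the `threadSafeRandom` helper name and the distribution bounds are arbitrary illustration choices):
\code
#include <Eigen/Core>
#include <random>

Eigen::MatrixXd threadSafeRandom(Eigen::Index rows, Eigen::Index cols)
{
  // One engine per thread, so concurrent callers never share state.
  static thread_local std::mt19937 engine(std::random_device{}());
  std::uniform_real_distribution<double> dist(-1.0, 1.0);
  return Eigen::MatrixXd::NullaryExpr(rows, cols, [&]() { return dist(engine); });
}
\endcode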

If your application is parallelized with OpenMP, you might want to disable %Eigen's own parallelization as detailed in the previous section.

\warning Using OpenMP with custom scalar types that might throw exceptions can lead to unexpected behaviour if an exception is actually thrown.
*/

}