namespace Eigen {

/** \page TutorialMatrixArithmetic Tutorial page 2 - %Matrix and vector arithmetic
    \ingroup Tutorial

\li \b Previous: \ref TutorialMatrixClass
\li \b Next: \ref TutorialArrayClass

This tutorial aims to provide an overview and some details on how to perform arithmetic
between matrices, vectors and scalars with Eigen.

\b Table \b of \b contents
  - \ref TutorialArithmeticIntroduction
  - \ref TutorialArithmeticAddSub
  - \ref TutorialArithmeticScalarMulDiv
  - \ref TutorialArithmeticMentionXprTemplates
  - \ref TutorialArithmeticTranspose
  - \ref TutorialArithmeticMatrixMul
  - \ref TutorialArithmeticDotAndCross
  - \ref TutorialArithmeticRedux
  - \ref TutorialArithmeticValidity

\section TutorialArithmeticIntroduction Introduction

Eigen offers matrix/vector arithmetic operations either through overloads of common C++ arithmetic operators such as +, -, *,
or through special methods such as dot(), cross(), etc.
For the Matrix class (matrices and vectors), operators are only overloaded to support
linear-algebraic operations. For example, \c matrix1 \c * \c matrix2 means matrix-matrix product,
and \c vector \c + \c scalar is just not allowed. If you want to perform all kinds of array operations,
not linear algebra, see \ref TutorialArrayClass "next page".

\section TutorialArithmeticAddSub Addition and subtraction

The left-hand side and right-hand side must, of course, have the same number of rows and the same number of columns. They must
also have the same \c Scalar type, as Eigen doesn't do automatic type promotion. The operators at hand here are:
\li binary operator + as in \c a+b
\li binary operator - as in \c a-b
\li unary operator - as in \c -a
\li compound operator += as in \c a+=b
\li compound operator -= as in \c a-=b

Example: \include tut_arithmetic_add_sub.cpp
Output: \verbinclude tut_arithmetic_add_sub.out
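
If the example file is not at hand, the following self-contained sketch (with arbitrary placeholder values) illustrates the same operators:
\code
#include <iostream>
#include <Eigen/Dense>
using namespace Eigen;

int main()
{
  Matrix2d a, b;
  a << 1, 2,
       3, 4;
  b << 5, 6,
       7, 8;
  std::cout << "a + b =\n" << a + b << std::endl;  // coefficient-wise sum
  std::cout << "a - b =\n" << a - b << std::endl;  // coefficient-wise difference
  std::cout << "-a =\n"    << -a    << std::endl;  // unary minus
  a += b;                                          // compound addition, modifies a
  std::cout << "a after a += b =\n" << a << std::endl;
}
\endcode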

\section TutorialArithmeticScalarMulDiv Scalar multiplication and division

Multiplication and division by a scalar are very simple too. The operators at hand here are:
\li binary operator * as in \c matrix*scalar
\li binary operator * as in \c scalar*matrix
\li binary operator / as in \c matrix/scalar
\li compound operator *= as in \c matrix*=scalar
\li compound operator /= as in \c matrix/=scalar

Example: \include tut_arithmetic_scalar_mul_div.cpp
Output: \verbinclude tut_arithmetic_scalar_mul_div.out
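
Similarly, a minimal sketch of scalar multiplication and division (placeholder values; assuming the \c Eigen/Dense header and \c using \c namespace \c Eigen as in the sketch above):
\code
Matrix2d m;
m << 1, 2,
     3, 4;
Vector3d v(1, 2, 3);
std::cout << m * 2.5 << std::endl;   // every coefficient of m is multiplied by 2.5
std::cout << 0.1 * v << std::endl;   // the scalar can also appear on the left
v /= 2;                              // compound division by a scalar
\endcode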

\section TutorialArithmeticMentionXprTemplates A note about expression templates

This is an advanced topic that we explain in \ref TopicEigenExpressionTemplates "this page",
but it is useful to just mention it now. In Eigen, arithmetic operators such as \c operator+ don't
perform any computation by themselves; they just return an "expression object" describing the computation to be
performed. The actual computation happens later, when the whole expression is evaluated, typically in \c operator=.
While this might sound heavy, any modern optimizing compiler is able to optimize away that abstraction and
the result is perfectly optimized code. For example, when you do:
\code
VectorXf a(50), b(50), c(50), d(50);
...
a = 3*b + 4*c + 5*d;
\endcode
Eigen compiles it to just one for loop, so that the arrays are traversed only once. Simplifying (e.g. ignoring
SIMD optimizations), this loop looks like this:
\code
for(int i = 0; i < 50; ++i)
  a[i] = 3*b[i] + 4*c[i] + 5*d[i];
\endcode
Thus, you should not be afraid of using relatively large arithmetic expressions with Eigen: it only gives Eigen
more opportunities for optimization.

\section TutorialArithmeticTranspose Transposition and conjugation

The transpose \f$ a^T \f$, conjugate \f$ \bar{a} \f$, and adjoint (i.e., conjugate transpose) \f$ a^* \f$ of a matrix or vector \f$ a \f$ are obtained by the member functions \c transpose(), \c conjugate(), and \c adjoint(), respectively.

<table class="tutorial_code"><tr><td>
Example: \include tut_arithmetic_transpose_conjugate.cpp
</td>
<td>
Output: \include tut_arithmetic_transpose_conjugate.out
</td></tr></table>

For real matrices, \c conjugate() has no effect, and so \c adjoint() is exactly equivalent to \c transpose().
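
As an inline sketch of these three functions on a complex-valued matrix (placeholder values, independent of the included example):
\code
MatrixXcf a = MatrixXcf::Random(2,2);
std::cout << "a^T =\n"     << a.transpose() << std::endl;  // transpose
std::cout << "conj(a) =\n" << a.conjugate() << std::endl;  // coefficient-wise conjugate
std::cout << "a^* =\n"     << a.adjoint()   << std::endl;  // conjugate transpose
\endcode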

As with basic arithmetic operators, \c transpose() and \c adjoint() simply return a proxy object without performing the actual transposition. Therefore, <tt>a = a.transpose()</tt> leads to an unexpected result:
<table class="tutorial_code"><tr><td>
Example: \include tut_arithmetic_transpose_aliasing.cpp
</td>
<td>
Output: \include tut_arithmetic_transpose_aliasing.out
</td></tr></table>
In "debug mode", i.e., when assertions have not been disabled, such common pitfalls are automatically detected. For \em in-place transposition, simply use the transposeInPlace() function:
<table class="tutorial_code"><tr><td>
Example: \include tut_arithmetic_transpose_inplace.cpp
</td>
<td>
Output: \include tut_arithmetic_transpose_inplace.out
</td></tr></table>
There is also the adjointInPlace() function for complex matrices.
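
A minimal sketch of the safe, in-place form (placeholder values; contrast with the aliasing pitfall above):
\code
Matrix3f a;
a << 1, 2, 3,
     4, 5, 6,
     7, 8, 9;
a.transposeInPlace();   // fine; writing a = a.transpose() here would alias
std::cout << a << std::endl;
\endcode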

\section TutorialArithmeticMatrixMul Matrix-matrix and matrix-vector multiplication

Matrix-matrix multiplication is again done with \c operator*. Since vectors are a special
case of matrices, they are implicitly handled there too, so matrix-vector product is really just a special
case of matrix-matrix product, and so is vector-vector outer product. Thus, all these cases are handled by just
two operators:
\li binary operator * as in \c a*b
\li compound operator *= as in \c a*=b

Example: \include tut_arithmetic_matrix_mul.cpp
Output: \verbinclude tut_arithmetic_matrix_mul.out
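
For illustration, a rough sketch of the different product shapes (placeholder values, independent of the included example):
\code
Matrix2d m;
m << 1, 2,
     3, 4;
Vector2d u(1, 0), v(0, 1);
std::cout << m * m             << std::endl;  // matrix-matrix product
std::cout << m * u             << std::endl;  // matrix-vector product
std::cout << u.transpose() * m << std::endl;  // row-vector times matrix
std::cout << u * v.transpose() << std::endl;  // vector-vector outer product (a 2x2 matrix)
m *= m;                                       // compound matrix product
\endcode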

Note: if you read the above paragraph on expression templates and are worried that doing \c m=m*m might cause
aliasing issues, be reassured for now: Eigen treats matrix multiplication as a special case and takes care of
introducing a temporary here, so it will compile \c m=m*m as:
\code
tmp = m*m;
m = tmp;
\endcode
If you know your matrix product can be safely evaluated into the destination matrix without aliasing issues, then you can use the \c noalias() function to avoid the temporary, e.g.:
\code
c.noalias() += a * b;
\endcode
For more details on this topic, see \ref TopicEigenExpressionTemplates "this page".

\b Note: for BLAS users worried about performance, expressions such as <tt>c.noalias() -= 2 * a.adjoint() * b;</tt> are fully optimized and trigger a single gemm-like function call.

\section TutorialArithmeticDotAndCross Dot product and cross product

The \c operator* discussed above cannot be used to compute dot and cross products. For that, you need the dot() and cross() methods.
Example: \include tut_arithmetic_dot_cross.cpp
Output: \verbinclude tut_arithmetic_dot_cross.out

Remember that the cross product is only defined for vectors of size 3, while the dot product is defined for vectors of any size.
When using complex numbers, Eigen's dot product is conjugate-linear in the first variable and linear in the
second variable.
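
A minimal sketch (placeholder values):
\code
Vector3d v(1, 2, 3), w(0, 1, 2);
double d = v.dot(w);        // dot product: 1*0 + 2*1 + 3*2 = 8
Vector3d c = v.cross(w);    // cross product, only defined for size-3 vectors
\endcode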

\section TutorialArithmeticRedux Basic arithmetic reduction operations
Eigen also provides some reduction operations to reduce a given matrix or vector to a single value, such as the sum (<tt>a.sum()</tt>), product (<tt>a.prod()</tt>), or the maximum (<tt>a.maxCoeff()</tt>) and minimum (<tt>a.minCoeff()</tt>) of all its coefficients.

<table class="tutorial_code"><tr><td>
Example: \include tut_arithmetic_redux_basic.cpp
</td>
<td>
Output: \include tut_arithmetic_redux_basic.out
</td></tr></table>

The \em trace of a matrix, as returned by the function \c trace(), is the sum of the diagonal coefficients and can also be computed just as efficiently using <tt>a.diagonal().sum()</tt>, as we will see later on.
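
For illustration, a minimal sketch of these reductions (placeholder values):
\code
Matrix2d a;
a << 1, 2,
     3, 4;
std::cout << a.sum()      << std::endl;  // 10, sum of all coefficients
std::cout << a.prod()     << std::endl;  // 24, product of all coefficients
std::cout << a.minCoeff() << std::endl;  // 1
std::cout << a.maxCoeff() << std::endl;  // 4
std::cout << a.trace()    << std::endl;  // 5, equal to a.diagonal().sum()
\endcode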

There also exist variants of the \c minCoeff and \c maxCoeff functions that return the coordinates of the respective coefficient via their arguments:

<table class="tutorial_code"><tr><td>
Example: \include tut_arithmetic_redux_minmax.cpp
</td>
<td>
Output: \include tut_arithmetic_redux_minmax.out
</td></tr></table>
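
A minimal sketch of these index-returning variants (placeholder values):
\code
MatrixXf m(2,2);
m << 1, 2,
     3, 4;
MatrixXf::Index maxRow, maxCol;
float max = m.maxCoeff(&maxRow, &maxCol);  // max == 4, at row 1, column 1
MatrixXf::Index minRow, minCol;
float min = m.minCoeff(&minRow, &minCol);  // min == 1, at row 0, column 0
\endcode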


\section TutorialArithmeticValidity Validity of operations
Eigen checks the validity of the operations that you perform. When possible,
it checks them at compile time, producing compilation errors. These error messages can be long and ugly,
but Eigen writes the important message in UPPERCASE_LETTERS_SO_IT_STANDS_OUT. For example:
\code
  Matrix3f m;
  Vector4f v;
  v = m*v;      // Compile-time error: YOU_MIXED_MATRICES_OF_DIFFERENT_SIZES
\endcode

Of course, in many cases, for example when checking dynamic sizes, the check cannot be performed at compile time.
Eigen then uses runtime assertions. This means that executing an illegal operation will make the program abort at runtime
with an error message.

\code
  MatrixXf m(3,3);
  VectorXf v(4);
  v = m * v; // Run-time assertion failure here: "invalid matrix product"
\endcode

For more details on this topic, see \ref TopicAssertions "this page".

\li \b Next: \ref TutorialArrayClass

*/

}