Rotations in $\mathbb{R}^3$
Consider a real 3×3 matrix R with columns r1, r2, r3, i.e.,
$$\mathbf{R} = (\mathbf{r}_1, \mathbf{r}_2, \mathbf{r}_3).$$
The matrix R is orthogonal if
$$\mathbf{r}_i \cdot \mathbf{r}_j = \delta_{ij}, \qquad i,j = 1,2,3.$$
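As a quick numerical illustration (a sketch of our own; the helper name `is_orthogonal` is not from the text), the column condition above is equivalent to the matrix statement $\mathbf{R}^T\mathbf{R} = \mathbf{I}$:

```python
import numpy as np

# Sketch: r_i . r_j = delta_ij for all i, j is exactly the statement
# that R^T R is the 3x3 identity matrix.
def is_orthogonal(R, tol=1e-12):
    """True when the columns of R form an orthonormal set."""
    R = np.asarray(R, dtype=float)
    return np.allclose(R.T @ R, np.eye(3), atol=tol)

# Example: rotation about the z-axis by 30 degrees.
phi = np.pi / 6
R = np.array([[np.cos(phi), -np.sin(phi), 0.0],
              [np.sin(phi),  np.cos(phi), 0.0],
              [0.0,          0.0,         1.0]])
```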
The matrix R is a proper rotation matrix if it is orthogonal and if r1, r2, r3 form a right-handed set, i.e.,
$$\mathbf{r}_i \times \mathbf{r}_j = \sum_{k=1}^{3} \varepsilon_{ijk}\, \mathbf{r}_k.$$
Here the symbol × indicates a cross product and $\varepsilon_{ijk}$ is the antisymmetric Levi-Civita symbol,
$$\begin{aligned} \varepsilon_{123} &= \varepsilon_{312} = \varepsilon_{231} = 1 \\ \varepsilon_{213} &= \varepsilon_{321} = \varepsilon_{132} = -1, \end{aligned}$$
and $\varepsilon_{ijk} = 0$ if two or more indices are equal.
The matrix R is an improper rotation matrix if it is orthogonal and its column vectors form a left-handed set, i.e.,
$$\mathbf{r}_i \times \mathbf{r}_j = -\sum_{k=1}^{3} \varepsilon_{ijk}\, \mathbf{r}_k.$$
The last two equations can be condensed into the single equation
$$\mathbf{r}_i \times \mathbf{r}_j = \det(\mathbf{R}) \sum_{k=1}^{3} \varepsilon_{ijk}\, \mathbf{r}_k$$
by virtue of the fact that the determinant of a proper rotation matrix is 1 and that of an improper rotation is −1. This can be proved as follows: the determinant of a 3×3 matrix with column vectors a, b, and c can be written as the scalar triple product
$$\det(\mathbf{a}, \mathbf{b}, \mathbf{c}) = \mathbf{a} \cdot (\mathbf{b} \times \mathbf{c}).$$
It was just shown that for a proper rotation the columns of R are orthonormal and satisfy
$$\mathbf{r}_1 \cdot (\mathbf{r}_2 \times \mathbf{r}_3) = \mathbf{r}_1 \cdot \left( \sum_{k=1}^{3} \varepsilon_{23k}\, \mathbf{r}_k \right) = \varepsilon_{231} = 1.$$
Likewise the determinant is −1 for an improper rotation.
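The determinant-as-triple-product argument can be checked numerically; the following is a sketch of our own (the helper name is not from the text):

```python
import numpy as np

# Sketch: det(R) equals the scalar triple product r1 . (r2 x r3),
# which is +1 for a right-handed orthonormal set of columns and
# -1 for a left-handed one.
def triple_product_det(R):
    r1, r2, r3 = R[:, 0], R[:, 1], R[:, 2]
    return float(np.dot(r1, np.cross(r2, r3)))

R_proper = np.eye(3)                    # columns e1, e2, e3: right-handed
R_improper = np.diag([1.0, 1.0, -1.0])  # reflection in the x-y plane: left-handed
```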
Theorem
A proper rotation matrix R can be factorized as
$$\mathbf{R} = \mathbf{R}_z(\omega_3)\, \mathbf{R}_y(\omega_2)\, \mathbf{R}_x(\omega_1),$$
which is referred to as the Euler z-y-x parametrization, or as
$$\mathbf{R} = \mathbf{R}_z(\alpha)\, \mathbf{R}_y(\beta)\, \mathbf{R}_z(\gamma),$$
the Euler z-y-z parametrization.
Here the matrices representing rotations around the z, y, and x axes, respectively, over an arbitrary angle φ are
$$\mathbf{R}_z(\varphi) \equiv \begin{pmatrix} \cos\varphi & -\sin\varphi & 0 \\ \sin\varphi & \cos\varphi & 0 \\ 0 & 0 & 1 \end{pmatrix}, \quad \mathbf{R}_y(\varphi) \equiv \begin{pmatrix} \cos\varphi & 0 & \sin\varphi \\ 0 & 1 & 0 \\ -\sin\varphi & 0 & \cos\varphi \end{pmatrix}, \quad \mathbf{R}_x(\varphi) \equiv \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\varphi & -\sin\varphi \\ 0 & \sin\varphi & \cos\varphi \end{pmatrix}.$$
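These three elementary matrices transcribe directly into code; the following sketch (function names are ours) will be convenient for checking the factorizations below:

```python
import numpy as np

# The three elementary rotation matrices, transcribed from the formulas above.
def R_z(phi):
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def R_y(phi):
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def R_x(phi):
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])
```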
Proof
First the Euler z-y-x parametrization will be proved by describing an algorithm for the factorization of R. To that end, consider the matrix product
$$\mathbf{R}_z(\omega_3)\, \mathbf{R}_y(\omega_2) = \begin{pmatrix} \cos\omega_3 \cos\omega_2 & -\sin\omega_3 & \cos\omega_3 \sin\omega_2 \\ \sin\omega_3 \cos\omega_2 & \cos\omega_3 & \sin\omega_3 \sin\omega_2 \\ -\sin\omega_2 & 0 & \cos\omega_2 \end{pmatrix} \equiv (\mathbf{a}_1, \mathbf{a}_2, \mathbf{a}_3).$$
The columns of the matrix product are, for ease of reference, designated by a1, a2, and a3. Note that multiplication by Rx(ω1) on the right does not affect the first column, so that a1 = r1 (the first column of the matrix to be factorized). Solve $\omega_3$ and $\omega_2$ from the first column of R,
$$\mathbf{a}_1 = \begin{pmatrix} \cos\omega_3 \cos\omega_2 \\ \sin\omega_3 \cos\omega_2 \\ -\sin\omega_2 \end{pmatrix} = \begin{pmatrix} R_{11} \\ R_{21} \\ R_{31} \end{pmatrix} \equiv \mathbf{r}_1.$$
This is possible provided $\cos\omega_2 \neq 0$. First solve for $\omega_2$ (taking $\cos\omega_2 > 0$) from
$$\sin\omega_2 = -R_{31}.$$
Then solve for $\omega_3$ from the two equations
$$\begin{aligned} \cos\omega_3 &= \frac{R_{11}}{\cos\omega_2} \\ \sin\omega_3 &= \frac{R_{21}}{\cos\omega_2}. \end{aligned}$$
Knowledge of $\omega_2$ and $\omega_3$ determines the vectors a2 and a3. Since a1, a2, and a3 are the columns of a proper rotation matrix, they form an orthonormal right-handed system. The plane spanned by a2 and a3 is orthogonal to $\mathbf{a}_1 = \mathbf{r}_1$ and hence contains $\mathbf{r}_2$ and $\mathbf{r}_3$. Thus each of the latter two vectors is a linear combination of the first two,
$$(\mathbf{r}_2, \mathbf{r}_3) = (\mathbf{a}_2, \mathbf{a}_3) \begin{pmatrix} \cos\omega_1 & -\sin\omega_1 \\ \sin\omega_1 & \cos\omega_1 \end{pmatrix}.$$
Since $\mathbf{a}_2$, $\mathbf{a}_3$, and $\mathbf{r}_2$ are known unit vectors, we can compute
$$\begin{aligned} \mathbf{a}_2 \cdot \mathbf{r}_2 &= \cos\omega_1 \\ \mathbf{a}_3 \cdot \mathbf{r}_2 &= \sin\omega_1. \end{aligned}$$
These equations determine $\omega_1$ uniquely (modulo $2\pi$). Augment the 2×2 rotation matrix to the 3×3 matrix $\mathbf{R}_x(\omega_1)$; then
$$\mathbf{R} \equiv (\mathbf{r}_1, \mathbf{r}_2, \mathbf{r}_3) = (\mathbf{r}_1, \mathbf{a}_2, \mathbf{a}_3)\, \mathbf{R}_x(\omega_1) = (\mathbf{a}_1, \mathbf{a}_2, \mathbf{a}_3)\, \mathbf{R}_x(\omega_1) = \mathbf{R}_z(\omega_3)\, \mathbf{R}_y(\omega_2)\, \mathbf{R}_x(\omega_1).$$
This concludes the proof of the z-y-x parametrization.
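The factorization algorithm in this proof can be sketched in code as follows. This is our own illustration (function names are ours), assuming $\cos\omega_2 \neq 0$, i.e. $|R_{31}| < 1$:

```python
import numpy as np

def R_z(phi):
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def R_y(phi):
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def R_x(phi):
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def euler_zyx(R):
    """Recover (omega3, omega2, omega1) with
    R = R_z(omega3) @ R_y(omega2) @ R_x(omega1).
    Assumes cos(omega2) != 0, i.e. |R[2, 0]| < 1.
    """
    R = np.asarray(R, dtype=float)
    omega2 = np.arcsin(-R[2, 0])            # sin(w2) = -R31, with cos(w2) > 0
    omega3 = np.arctan2(R[1, 0], R[0, 0])   # sin(w3), cos(w3) scaled by cos(w2) > 0
    A = R_z(omega3) @ R_y(omega2)           # columns a1, a2, a3
    r2 = R[:, 1]
    # a2 . r2 = cos(w1), a3 . r2 = sin(w1)
    omega1 = np.arctan2(A[:, 2] @ r2, A[:, 1] @ r2)
    return omega3, omega2, omega1
```

A round-trip test (build R from known angles, factorize, rebuild) confirms the algorithm.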
The Euler z-y-z parametrization is obtained by a small modification of the previous proof. Solve $\omega_3$ and $\omega_2$ from the third column of R,
$$\mathbf{r}_3 = \mathbf{a}_3 = \begin{pmatrix} \cos\omega_3 \sin\omega_2 \\ \sin\omega_3 \sin\omega_2 \\ \cos\omega_2 \end{pmatrix}$$
(the rightmost multiplication by Rz(ω1) does not affect r3), and then consider
$$(\mathbf{r}_1, \mathbf{r}_2) = (\mathbf{a}_1, \mathbf{a}_2) \begin{pmatrix} \cos\omega_1 & -\sin\omega_1 \\ \sin\omega_1 & \cos\omega_1 \end{pmatrix},$$
or,
$$\begin{aligned} \mathbf{a}_1 \cdot \mathbf{r}_1 &= \cos\omega_1 \\ \mathbf{a}_2 \cdot \mathbf{r}_1 &= \sin\omega_1. \end{aligned}$$
The equation for R can be written as
$$(\mathbf{r}_1, \mathbf{r}_2, \mathbf{r}_3) = (\mathbf{a}_1, \mathbf{a}_2, \mathbf{r}_3)\, \mathbf{R}_z(\omega_1) = \mathbf{R}_z(\omega_3)\, \mathbf{R}_y(\omega_2)\, \mathbf{R}_z(\omega_1),$$
which proves the Euler z-y-z parametrization. It is common in this parametrization to write
$$\omega_3 = \alpha, \quad \omega_2 = \beta, \quad \omega_1 = \gamma.$$
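The z-y-z factorization can be sketched analogously (our own illustration, function names ours), assuming $\sin\beta \neq 0$, i.e. $|R_{33}| < 1$:

```python
import numpy as np

def R_z(phi):
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def R_y(phi):
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def euler_zyz(R):
    """Recover (alpha, beta, gamma) with
    R = R_z(alpha) @ R_y(beta) @ R_z(gamma).
    Assumes sin(beta) != 0, i.e. |R[2, 2]| < 1.
    """
    R = np.asarray(R, dtype=float)
    beta = np.arccos(R[2, 2])               # cos(beta) = R33, with sin(beta) > 0
    alpha = np.arctan2(R[1, 2], R[0, 2])    # r3 = (cos a sin b, sin a sin b, cos b)
    A = R_z(alpha) @ R_y(beta)              # columns a1, a2, a3
    r1 = R[:, 0]
    # a1 . r1 = cos(gamma), a2 . r1 = sin(gamma)
    gamma = np.arctan2(A[:, 1] @ r1, A[:, 0] @ r1)
    return alpha, beta, gamma
```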