Euler's theorem (rotation)

Euler's theorem on rotation is the statement that in space a rigid motion which has a fixed point always has an axis (of rotation), i.e., a straight line of fixed points. It is named after Leonhard Euler who proved this in 1775 by an elementary geometric argument.

In terms of modern mathematics, rotations are distance- and orientation-preserving transformations of 3-dimensional Euclidean (affine) space that have a fixed point. Such transformations are associated with linear operators on the difference space that preserve the inner product (are isometric) and preserve orientation (have unit determinant). In an orthonormal basis these operators correspond one-to-one with orthogonal 3 × 3 matrices of determinant +1. Since a non-identity matrix of this kind has, up to a scalar factor, exactly one eigenvector with eigenvalue +1, this eigenvector gives the direction of the axis.

The product of two orthogonal matrices is again orthogonal, and from the determinant rule det(AB) = det(A)det(B) it follows that the product matrix also has unit determinant. Since matrix multiplication is associative and the inverse of an orthogonal matrix is again orthogonal, these matrices form a group of infinite order, commonly denoted by SO(3), the special (det = 1) orthogonal group in 3 dimensions. Note that the map A → det(A) is a group homomorphism: the set of determinants forms a 1-dimensional irreducible representation (the identity representation) of SO(3).
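
As a quick numerical illustration (a minimal NumPy sketch; the helper functions rot_z and rot_x and the particular angles are arbitrary choices made here, not taken from the text), one can check the closure property and the determinant rule:

    import numpy as np

    def rot_z(phi):
        # rotation over phi around the z-axis
        c, s = np.cos(phi), np.sin(phi)
        return np.array([[c, -s, 0.0],
                         [s,  c, 0.0],
                         [0.0, 0.0, 1.0]])

    def rot_x(phi):
        # rotation over phi around the x-axis
        c, s = np.cos(phi), np.sin(phi)
        return np.array([[1.0, 0.0, 0.0],
                         [0.0,   c,  -s],
                         [0.0,   s,   c]])

    A, B = rot_z(0.7), rot_x(1.2)
    C = A @ B                                      # product of two rotations
    print(np.allclose(C @ C.T, np.eye(3)))         # True: the product is orthogonal
    print(np.isclose(np.linalg.det(C), 1.0))       # True: det(AB) = det(A)det(B) = 1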

Euler's theorem (1776)

Euler states the theorem as follows:[1]

Theorema. Quomodocunque sphaera circa centrum suum conuertatur, semper assignari potest diameter, cuius directio in situ translato conueniat cum situ initiali.

or (in free translation):

When a sphere is moved around its centre it is always possible to find a diameter whose direction in the displaced position is the same as in the initial position.

To prove this, Euler considers a great circle on the sphere and the great circle to which it is transported by the movement. These two circles intersect in two (opposite) points of which one, say A, is chosen. This point lies on the initial circle and thus is transported to a point a on the second circle. On the other hand, A lies also on the translated circle, and thus corresponds to a point α on the initial circle. Now Euler considers the symmetry plane of the angle αAa (which passes through the centre C of the sphere) and the symmetry plane of the arc Aa (which also passes through C). These two planes intersect in a diameter whose endpoint O on the sphere remains fixed under the movement because the triangle OαA is transported onto the triangle OAa (since αA is mapped on Aa and the triangles have the same angles).

This also shows that the rotation of the sphere can be seen as two consecutive reflections about the two planes described above. Points in a mirror plane are invariant under reflection, and hence the points on their intersection (a line: the axis of rotation) are invariant under both the reflections, and hence under the rotation.
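
This double-reflection picture can also be checked numerically. The following is a minimal NumPy sketch (the two mirror planes and the helper function reflection are arbitrary choices made here for illustration): it composes two reflections in planes through the origin and verifies that the result is a proper rotation that fixes the intersection line of the two planes.

    import numpy as np

    def reflection(normal):
        # reflection in the plane through the origin perpendicular to 'normal'
        n = normal / np.linalg.norm(normal)
        return np.eye(3) - 2.0 * np.outer(n, n)

    n1 = np.array([1.0, 0.0, 0.0])                 # normal of the first mirror plane
    n2 = np.array([1.0, 1.0, 0.0])                 # normal of the second mirror plane
    R = reflection(n2) @ reflection(n1)            # two consecutive reflections

    print(np.isclose(np.linalg.det(R), 1.0))       # True: the composition is a proper rotation
    axis = np.cross(n1, n2)                        # direction of the planes' intersection line
    print(np.allclose(R @ axis, axis))             # True: points on that line stay fixed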

Matrix proof

An algebraic proof starts from the fact that a rotation is a linear map in one-to-one correspondence with a 3×3 rotation matrix R, i.e., a matrix for which

Rᵀ R = R Rᵀ = E,

where E is the 3×3 identity matrix and superscript T indicates the transposed matrix. Clearly a rotation matrix has determinant ±1, for, invoking the product and transpose rules for determinants, one can prove

det(Rᵀ R) = det(Rᵀ) det(R) = det(R)² = det(E) = 1, so that det(R) = ±1.

A rotation matrix with positive determinant is a proper rotation; one with negative determinant is an improper rotation, that is, a reflection times a proper rotation.
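
As a small numerical aside (a sketch with matrices chosen here purely for illustration), the sign of the determinant separates the two cases:

    import numpy as np

    c, s = np.cos(0.5), np.sin(0.5)
    R_proper = np.array([[c, -s, 0.0],
                         [s,  c, 0.0],
                         [0.0, 0.0, 1.0]])              # rotation around the z-axis
    sigma = np.diag([1.0, 1.0, -1.0])                   # reflection in the xy-plane
    R_improper = sigma @ R_proper                       # reflection times a proper rotation

    print(np.isclose(np.linalg.det(R_proper), 1.0))     # True: proper rotation
    print(np.isclose(np.linalg.det(R_improper), -1.0))  # True: improper rotation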

It will now be shown that a rotation matrix R has at least one invariant vector n, i.e., R n = n. Note that this is equivalent to stating that the vector n is an eigenvector of the matrix R with eigenvalue λ = 1.

A proper rotation matrix R has at least one unit eigenvalue. Using the two relations

det(-A) = (-1)³ det(A) = -det(A)   (for any 3×3 matrix A)   and   det(Rᵀ) = det(R) = 1,

we find

det(R - E) = det(Rᵀ) det(R - E) = det(Rᵀ(R - E)) = det(RᵀR - Rᵀ) = det(E - Rᵀ) = det((E - R)ᵀ) = det(E - R) = (-1)³ det(R - E) = -det(R - E),

so that det(R - E) = 0. From this follows that λ = 1 is a root (solution) of the secular equation, that is,

det(R - λE) = 0   for   λ = 1.

In other words, the matrix R - E is singular and has a non-zero kernel, that is, there is at least one non-zero vector, say n, for which

(R - E) n = 0,   i.e.,   R n = n.

The line μn, for real μ, is invariant under R, i.e., μn is a rotation axis. This proves Euler's theorem.
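
Numerically, the axis can be obtained as an eigenvector of R belonging to the eigenvalue 1. The short NumPy sketch below does this for an example matrix (the helper function rotation_axis and the example rotation are introduced here only for illustration):

    import numpy as np

    def rotation_axis(R):
        # unit eigenvector of R with eigenvalue 1, i.e. a vector spanning the rotation axis
        eigvals, eigvecs = np.linalg.eig(R)
        k = np.argmin(np.abs(eigvals - 1.0))       # index of the eigenvalue closest to 1
        n = np.real(eigvecs[:, k])
        return n / np.linalg.norm(n)

    c, s = np.cos(0.9), np.sin(0.9)
    R = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])                # rotation over 0.9 rad around the z-axis
    n = rotation_axis(R)
    print(np.allclose(R @ n, n))                   # True: R n = n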

Equivalence of an orthogonal matrix to a rotation matrix

A proper orthogonal matrix R is equivalent to a matrix of the form

( cosφ   -sinφ   0 )
( sinφ    cosφ   0 )
(   0       0    1 ),

which represents a rotation over the angle φ around the z-axis.

If R has more than one invariant vector then φ = 0 and R = E. Any vector is an invariant vector of E.

Excursion into matrix theory

In order to prove the previous equation some facts from matrix theory must be recalled. Matrices over the field of complex numbers are considered.

An m×m matrix A has m orthogonal eigenvectors if and only if A is normal, that is, if A†A = AA†.[2][3]

This result is equivalent to stating that normal matrices can be brought to diagonal form by a unitary similarity transformation:

U† A U = diag(α₁, ..., αₘ),

and U is unitary, that is,

U† = U⁻¹.

The eigenvalues α₁, ..., αₘ are roots of the secular equation. If the matrix A happens to be unitary (and note that unitary matrices are normal), then

(U†AU)† (U†AU) = diag(ᾱ₁, ..., ᾱₘ) diag(α₁, ..., αₘ) = U†A†AU = E,

and it follows that the eigenvalues of a unitary matrix are on the unit circle in the complex plane:

ᾱᵢ αᵢ = |αᵢ|² = 1,    i = 1, ..., m.

Also an orthogonal (real unitary) matrix has eigenvalues on the unit circle in the complex plane. Moreover, since its secular equation (an mth-order polynomial in λ) has real coefficients, its roots appear in complex conjugate pairs: if α is a root, then so is its complex conjugate ᾱ.
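
Both facts are easy to verify numerically; here is a minimal sketch (the orthogonal matrix is generated from a QR decomposition of a random real matrix, an arbitrary construction used only for illustration):

    import numpy as np

    rng = np.random.default_rng(0)
    Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))      # a random real orthogonal matrix
    eigvals = np.linalg.eigvals(Q)

    print(np.allclose(np.abs(eigvals), 1.0))              # all eigenvalues lie on the unit circle
    # the spectrum is closed under complex conjugation: roots occur in conjugate pairs
    print(np.allclose(np.sort_complex(eigvals), np.sort_complex(eigvals.conj())))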


After recollection of these general facts from matrix theory, we return to the rotation matrix R. It follows from its realness and orthogonality that its eigenvalues lie on the unit circle and occur as a complex conjugate pair exp(iφ), exp(-iφ) together with a real eigenvalue, which, the determinant being +1, must be equal to 1. Hence there is a unitary U with

R U = U diag(exp(iφ), exp(-iφ), 1),

with the third column of the 3×3 matrix U equal to the invariant vector n. Writing u₁ and u₂ for the first two columns of U, this equation gives

R u₁ = exp(iφ) u₁,    R u₂ = exp(-iφ) u₂,    R n = n.

If u₁ has eigenvalue 1, then φ = 0 and u₂ also has eigenvalue 1, which implies that in that case R = E.

Finally, the matrix equation is transformed by means of a unitary matrix V. Since R is real, u₂ may be chosen as the complex conjugate of u₁, so that the columns of U′ ≡ U V are real, where

V = (1/√2) ( 1    i    0 )
           ( 1   -i    0 )
           ( 0    0   √2 ),

which gives

R U′ = U′ ( cosφ   -sinφ   0 )
          ( sinφ    cosφ   0 )
          (   0       0    1 ).

The columns of U′ are orthonormal. The third column is still n, the other two columns are perpendicular to n. This result implies that any proper orthogonal matrix R is equivalent to a rotation over an angle φ around an axis n.
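
The construction can be mimicked numerically. The sketch below is one possible implementation (the function canonical_form and the example rotation are chosen here for illustration; it assumes 0 < φ < π, so that complex eigenvalues are present): it builds real orthonormal columns from the complex eigenvectors and checks the equivalence.

    import numpy as np

    def canonical_form(R):
        # return a real orthogonal U' and an angle phi such that
        # U'.T @ R @ U' is the standard rotation over phi around the z-axis
        eigvals, eigvecs = np.linalg.eig(R)
        k1 = np.argmax(eigvals.imag)               # eigenvalue exp(i*phi) with phi > 0
        k3 = np.argmin(np.abs(eigvals - 1.0))      # eigenvalue 1, giving the axis n
        phi = np.angle(eigvals[k1])
        u1 = eigvecs[:, k1]
        e1 = np.sqrt(2.0) * u1.real                # real combinations of u1 and its conjugate
        e2 = -np.sqrt(2.0) * u1.imag
        n = eigvecs[:, k3].real
        n /= np.linalg.norm(n)
        return np.column_stack([e1, e2, n]), phi

    # example: a proper rotation with a tilted axis, built from two elementary rotations
    c, s = np.cos(0.8), np.sin(0.8)
    Rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    c, s = np.cos(0.3), np.sin(0.3)
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])
    R = Rx @ Rz

    Uprime, phi = canonical_form(R)
    c, s = np.cos(phi), np.sin(phi)
    target = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    print(np.allclose(Uprime.T @ R @ Uprime, target))     # True: R is equivalent to target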

Equivalence classes

It is of interest to remark that the trace (sum of diagonal elements) of the real rotation matrix given above is 1 + 2cosφ. Since a trace is invariant under an orthogonal matrix transformation,

Tr(A R Aᵀ) = Tr(Aᵀ A R) = Tr(R)    with    Aᵀ = A⁻¹,

it follows that all matrices that are equivalent to R by an orthogonal matrix transformation have the same trace. This matrix transformation is clearly an equivalence relation, that is, all equivalent matrices form an equivalence class. In fact, all matrices in the group SO(3) with the same trace form an equivalence class. Elements of such an equivalence class share their rotation angle, but the rotations are around different axes: if n is an eigenvector of R with eigenvalue 1, then An is an eigenvector of A R Aᵀ, also with eigenvalue 1, and unless A = E the vectors n and An are different.
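
A short numerical check of these statements (a sketch; the function angle_from_trace and the particular matrices R and A are arbitrary examples introduced here):

    import numpy as np

    def angle_from_trace(R):
        # rotation angle recovered from Tr(R) = 1 + 2 cos(phi)
        return np.arccos((np.trace(R) - 1.0) / 2.0)

    c, s = np.cos(1.1), np.sin(1.1)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])   # rotation over 1.1 rad

    c, s = np.cos(0.4), np.sin(0.4)
    A = np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])   # some other rotation
    R2 = A @ R @ A.T                                             # equivalent matrix, different axis

    print(np.isclose(np.trace(R2), np.trace(R)))                 # True: same equivalence class
    print(angle_from_trace(R), angle_from_trace(R2))             # both approximately 1.1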

Note

  1. see the bibliography subpage for the 1776 reference (p.202)
  2. The dagger symbol † stands for complex conjugation followed by transposition. For real matrices complex conjugation does nothing and daggering a real matrix is the same as transposing it.
  3. See for a proof most books on linear algebra or matrix theory. For instance, Felix R. Gantmacher, Matrizentheorie, Springer-Verlag, Berlin (1986), chapter 9.10. Here it is proved that a linear operator on a finite-dimensional inner product space is normal if and only if it has a complete set of orthonormal eigenvectors. Compare F. Ayres, Theory and Problems of Matrices, Schaum, New York (1962), p.164: A square matrix A is unitarily similar to a diagonal matrix if and only if A is normal.