# Eigenvalue


In linear algebra, an eigenvalue of a (square) matrix ${\displaystyle A}$ is a number ${\displaystyle \lambda }$ that satisfies the eigenvalue equation,

${\displaystyle {\text{det}}(A-\lambda I)=0\ ,}$

where det denotes the determinant, ${\displaystyle I}$ is the identity matrix of the same dimension as ${\displaystyle A}$, and in general ${\displaystyle \lambda }$ can be complex. The left-hand side of this equation, regarded as a polynomial in ${\displaystyle \lambda }$, is the characteristic polynomial of ${\displaystyle A}$. The equation itself arises from the eigenvalue problem, which is to find the eigenvalues and associated eigenvectors of ${\displaystyle A}$; that is, to find a number ${\displaystyle \lambda }$ and a non-zero vector ${\displaystyle \scriptstyle {\vec {v}}}$ that together satisfy

${\displaystyle A{\vec {v}}=\lambda {\vec {v}}\ .}$

This equation says that even though ${\displaystyle A}$ is a matrix, its action on ${\displaystyle \scriptstyle {\vec {v}}}$ is the same as multiplying the vector by the number ${\displaystyle \lambda }$. This means that the vectors ${\displaystyle \scriptstyle {\vec {v}}}$ and ${\displaystyle \scriptstyle A{\vec {v}}}$ are parallel (or anti-parallel if ${\displaystyle \lambda }$ is negative). For an arbitrary matrix and vector this will generally not be true, which is most easily seen with a quick example. Suppose

${\displaystyle A={\begin{pmatrix}a_{11}&a_{12}\\a_{21}&a_{22}\end{pmatrix}}}$ and ${\displaystyle {\vec {v}}={\begin{pmatrix}v_{1}\\v_{2}\end{pmatrix}}\ .}$

Then their matrix product is

${\displaystyle A{\vec {v}}={\begin{pmatrix}a_{11}&a_{12}\\a_{21}&a_{22}\end{pmatrix}}{\begin{pmatrix}v_{1}\\v_{2}\end{pmatrix}}={\begin{pmatrix}a_{11}v_{1}+a_{12}v_{2}\\a_{21}v_{1}+a_{22}v_{2}\end{pmatrix}}}$

whereas the scalar product is

${\displaystyle \lambda {\vec {v}}={\begin{pmatrix}\lambda v_{1}\\\lambda v_{2}\end{pmatrix}}\ .}$

Obviously then ${\displaystyle \scriptstyle A{\vec {v}}\neq \lambda {\vec {v}}}$ unless ${\displaystyle \lambda v_{1}=a_{11}v_{1}+a_{12}v_{2}}$ and simultaneously ${\displaystyle \lambda v_{2}=a_{21}v_{1}+a_{22}v_{2}}$. For a given ${\displaystyle \lambda }$, it is easy to pick numbers for the entries of ${\displaystyle A}$ and ${\displaystyle \scriptstyle {\vec {v}}}$ such that this is not satisfied.
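This is easy to check numerically. The following is a minimal sketch using NumPy (the matrix is chosen purely for illustration and is not part of the article): it computes the eigenvalues and eigenvectors of a sample ${\displaystyle 2\times 2}$ matrix and verifies that ${\displaystyle \scriptstyle A{\vec {v}}=\lambda {\vec {v}}}$ holds for an eigenvector but fails for a generic vector.

```python
import numpy as np

# An illustrative 2x2 matrix; its eigenvalues are 3 and 1.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# numpy.linalg.eig returns the eigenvalues and a matrix whose
# columns are the corresponding (unit-length) eigenvectors.
eigenvalues, eigenvectors = np.linalg.eig(A)
lam = eigenvalues[0]      # one eigenvalue of A
v = eigenvectors[:, 0]    # its associated eigenvector

# For an eigenvector, A v equals lambda v (up to rounding error)...
print(np.allclose(A @ v, lam * v))   # True

# ...but for a generic vector w, A w is not a multiple of w.
w = np.array([1.0, 0.0])
print(np.allclose(A @ w, lam * w))   # False
```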

## The eigenvalue equation

So where did the eigenvalue equation ${\displaystyle {\text{det}}(A-\lambda I)=0}$ come from? We assume that we know the matrix ${\displaystyle A}$ and want to find a number ${\displaystyle \lambda }$ and a non-zero vector ${\displaystyle \scriptstyle {\vec {v}}}$ so that ${\displaystyle \scriptstyle A{\vec {v}}=\lambda {\vec {v}}}$. (Note that if ${\displaystyle \scriptstyle {\vec {v}}={\vec {0}}}$ then the equation is true for every ${\displaystyle \lambda }$, and therefore uninteresting.) Rearranging gives ${\displaystyle \scriptstyle A{\vec {v}}-\lambda {\vec {v}}={\vec {0}}}$. It doesn't make sense to subtract a number from a matrix, but since ${\displaystyle \scriptstyle \lambda {\vec {v}}=\lambda I{\vec {v}}}$ we can rewrite the second term and factor out the vector, giving us

${\displaystyle (A-\lambda I){\vec {v}}={\vec {0}}\ .}$

Now recall that ${\displaystyle A-\lambda I}$ is a square matrix, and so it might be invertible. If it were invertible, then we could simply multiply on the left by its inverse to get

${\displaystyle {\vec {v}}=(A-\lambda I)^{-1}{\vec {0}}={\vec {0}}}$

but we have already said that ${\displaystyle \scriptstyle {\vec {v}}}$ can't be the zero vector! The only way around this is if ${\displaystyle A-\lambda I}$ is in fact non-invertible. It can be shown that a square matrix is non-invertible if and only if its determinant is zero. That is, we require

${\displaystyle {\text{det}}(A-\lambda I)=0\ ,}$

which is the eigenvalue equation stated above.
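For the general ${\displaystyle 2\times 2}$ matrix used above, writing out the determinant makes this concrete:

${\displaystyle {\text{det}}(A-\lambda I)=(a_{11}-\lambda )(a_{22}-\lambda )-a_{12}a_{21}=0\ ,}$

a quadratic in ${\displaystyle \lambda }$, so a ${\displaystyle 2\times 2}$ matrix has exactly two eigenvalues when these are counted with multiplicity and complex values are allowed. As a numerical sanity check (a minimal sketch using NumPy, with the same illustrative matrix as before), the determinant of ${\displaystyle A-\lambda I}$ should vanish at each computed eigenvalue:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])   # the same illustrative matrix as before
I = np.eye(2)                # identity matrix of the same dimension

# The eigenvalues are exactly the roots of det(A - lambda*I).
for lam in np.linalg.eigvals(A):
    print(lam, np.linalg.det(A - lam * I))   # determinant is ~0 each time
```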

## A more technical approach

So far we have looked at eigenvalues in terms of square matrices. As usual in mathematics, though, we like things to be as general as possible, since then anything we prove will be true in as many different applications as possible. So instead we can define eigenvalues in the following way.

Definition: Let ${\displaystyle V}$ be a vector space over a field ${\displaystyle F}$, and let ${\displaystyle \scriptstyle A:V\to V}$ be a linear map. An eigenvalue associated with ${\displaystyle A}$ is an element ${\displaystyle \scriptstyle \lambda \in F}$ for which there exists a non-zero vector ${\displaystyle \scriptstyle {\vec {v}}\in V}$ such that

${\displaystyle A({\vec {v}})=\lambda {\vec {v}}\ .}$

Then ${\displaystyle \scriptstyle {\vec {v}}}$ is called an eigenvector of ${\displaystyle A}$ associated with ${\displaystyle \lambda }$. (Such an eigenvector is never unique: any non-zero scalar multiple of ${\displaystyle \scriptstyle {\vec {v}}}$ satisfies the same equation.)
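The choice of field ${\displaystyle F}$ matters: a linear map can have no eigenvalues over one field yet acquire them over a larger one. A standard example is rotation of the real plane by 90 degrees, which sends no non-zero real vector to a scalar multiple of itself, so it has no eigenvalues over ${\displaystyle \mathbb {R} }$; over ${\displaystyle \mathbb {C} }$ its eigenvalues are ${\displaystyle \pm i}$. The sketch below (using NumPy, which computes over the complex numbers; the example is not from the original article) illustrates this.

```python
import numpy as np

# Rotation of the plane by 90 degrees. No non-zero real vector is
# mapped to a real multiple of itself, so there are no real eigenvalues.
R = np.array([[0.0, -1.0],
              [1.0,  0.0]])

# Over the complex numbers the eigenvalues exist: they are +i and -i.
print(np.linalg.eigvals(R))   # approximately [0.+1.j, 0.-1.j]
```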