We now focus on finding eigenvectors for a given eigenvalue.
Let #V# be a vector space and # L :V\rightarrow V# a linear map. For every number #\lambda#,
\[
E_\lambda=\ker\left( L -\lambda\, I_V\right)
\] is a linear subspace of #V#.
The subspace #E_\lambda# is called the eigenspace of # L # corresponding to #\lambda#. This space consists of the zero vector and all eigenvectors with eigenvalue #\lambda#.
The subset #E_\lambda# of #V# is the null space of the linear map # L -\lambda \,I_V#, and so, by the theorem Image space and kernel, it is a linear subspace of #V#.
Let #\vec{v}# be a vector in #V#. Then #\vec{v}# belongs to # E_\lambda# if and only if #( L -\lambda \, I_V)\vec{v}=\vec{0}#, that is, if and only if # L( \vec{v})-\lambda\vec{v}=\vec{0}#, or equivalently # L( \vec{v})=\lambda\vec{v}#. Therefore, the eigenspace #E_\lambda# consists of the zero vector and all eigenvectors with eigenvalue #\lambda#.
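As a small illustration (the matrix below is our own choice, not part of the text above): let #L:\mathbb{R}^2\to\mathbb{R}^2# be the linear map whose matrix with respect to the standard basis is \[ \left(\,\begin{array}{cc} 1 & 2\\ 0 & 3 \end{array}\,\right). \] For #\lambda=3# we find \[ E_3=\ker\left( L-3\, I_V\right)=\ker \left(\,\begin{array}{cc} -2 & 2\\ 0 & 0 \end{array}\,\right)=\left\{ t\cdot \left(\,\begin{array}{c} 1\\ 1 \end{array}\,\right) \,\middle|\, t\in\mathbb{R} \right\}, \] a line through the origin, whereas #E_\lambda=\{\vec{0}\}# for every #\lambda# outside #\{1,3\}#.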
A special eigenspace is #E_0#. It consists of all vectors that are mapped onto #0# times themselves, that is, onto #\vec{0}#. Therefore, the eigenspace #E_0# is #\ker{L}#, the null space (or kernel) of # L #.
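For example (again a map of our own choosing): if #L:\mathbb{R}^2\to\mathbb{R}^2# is the orthogonal projection onto the horizontal axis, then \[ E_0=\ker{L}=\left\{ t\cdot \left(\,\begin{array}{c} 0\\ 1 \end{array}\,\right) \,\middle|\, t\in\mathbb{R} \right\}, \] the vertical axis: these are exactly the vectors that #L# maps onto #\vec{0}#.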
For most values of #\lambda#, the space #E_\lambda# is just #\{\vec{0}\}#. The number #\lambda# is an eigenvalue if and only if #E_\lambda# contains a vector #\vec{v}\neq\vec{0}#, that is, if and only if #\dim {E_\lambda }\gt0#. Some authors speak of an eigenspace only if #\lambda# is an eigenvalue.
For a given eigenvalue #\lambda#, the determination of the corresponding eigenvectors is a matter of finding the kernel of a linear map, which amounts to solving a system of linear equations.
The process of finding eigenvectors consists of first determining the values of #\lambda# such that #\dim {E_\lambda }\gt0#. Then #\lambda# is an eigenvalue and the vectors in #E_\lambda# distinct from #\vec{0}# are the eigenvectors corresponding to the eigenvalue.
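A brief sketch of this process for a concrete map (the matrix is our own choice): take #L:\mathbb{R}^2\to\mathbb{R}^2# with matrix \[ \left(\,\begin{array}{cc} 0 & 1\\ 1 & 0 \end{array}\,\right) \] with respect to the standard basis. First, #\det( L-\lambda\, I_V)=\lambda^2-1# (the characteristic polynomial recalled below) vanishes exactly for #\lambda=1# and #\lambda=-1#, so these are the only values with #\dim {E_\lambda }\gt0#. Next, \[ E_1=\left\{ t\cdot \left(\,\begin{array}{c} 1\\ 1 \end{array}\,\right) \,\middle|\, t\in\mathbb{R} \right\}\quad\text{and}\quad E_{-1}=\left\{ t\cdot \left(\,\begin{array}{c} 1\\ -1 \end{array}\,\right) \,\middle|\, t\in\mathbb{R} \right\}, \] and the eigenvectors are the vectors distinct from #\vec{0}# in these two lines.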
Let # L :V\rightarrow V# be a linear map and let #L_\alpha# be the matrix of #L# with respect to a basis #\alpha# of #V#. We recall that the equation \(\det (L_\alpha-\lambda\, I)=0\) is the characteristic equation of # L # and that the left-hand side of this equation, #\det (L_\alpha-\lambda \,I)#, is the characteristic polynomial of # L#.
Let #L:V\to V# be a linear map, where #V# is a vector space of finite dimension #n#.
- A number #\lambda# is an eigenvalue of #L# if and only if #\det (L-\lambda \cdot I_V)=0#.
- The eigenvectors corresponding to the eigenvalue #\lambda# are the solutions #\vec{v}#, distinct from the zero vector, of the linear equation #L(\vec{v}) = \lambda\, \vec{v}#.
Let #\alpha# be a basis of #V#. Then the #\alpha#-coordinates #v_1,\ldots,v_n# of the eigenvectors of #L# corresponding to the eigenvalue #\lambda# may be calculated as the solutions, distinct from the zero vector, of the system of linear equations \[ \left(\,\begin{array}{cccc}
a_{11}-\lambda & a_{12} & \ldots & a_{1n}\\
a_{21} & a_{22}-\lambda & & \vdots\\
\vdots & & \ddots & \vdots \\
a_{n1} & \ldots & \ldots & a_{nn}-\lambda
\end{array}\,\right)\ \ \left(\,\begin{array}{c}
v_1\\ v_2\\ \vdots\\ v_n
\end{array}\,\right)\ =\ \left(\,\begin{array}{r}
0 \\ 0\\ \vdots\\ 0
\end{array}\,\right) \] where #a_{ij}# is the #(i,j)#-entry of #L_\alpha#.
Suppose that #\lambda# is an eigenvalue of #L#. Then there is a vector #\vec{v}\ne\vec{0}# such that \(L(\vec{v}) = \lambda\,\vec{v}\). The vector #\vec{v}# then belongs to the kernel of the linear map #L-\lambda\cdot I_V#, so this map is not invertible. This means that #\det( L-\lambda\cdot I_V) = 0#.
Conversely: If #\lambda# is a root of the characteristic polynomial of #L#, then #\det( L-\lambda\cdot I_V) = 0#, so the linear map #L-\lambda\cdot I_V# is not invertible and its kernel contains a vector #\vec{v}\ne\vec{0}#. Then \(L(\vec{v}) = \lambda\,\vec{v}\), which means that #\vec{v}# is an eigenvector of #L# corresponding to the eigenvalue #\lambda#.
The matrix \[
L_\alpha-\lambda\, I_n=\left(\,\begin{array}{cccc}
a_{11}-\lambda & a_{12} & \ldots & a_{1n}\\
a_{21} & a_{22}-\lambda & & \vdots\\
\vdots & & \ddots & \vdots \\
a_{n1} & \ldots & \ldots & a_{nn}-\lambda
\end{array}\,\right)
\]
is the matrix of #L -\lambda \cdot I_V# with respect to the basis #\alpha#.
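To see this system at work in a small case (the matrix below is our own choice, not taken from the theorem): suppose \[ L_\alpha=\left(\,\begin{array}{cc} 2 & 1\\ 1 & 2 \end{array}\,\right), \] so that the characteristic polynomial is #(2-\lambda)^2-1=(\lambda-1)(\lambda-3)#. For the eigenvalue #\lambda=3# the system reads \[ \left(\,\begin{array}{cc} -1 & 1\\ 1 & -1 \end{array}\,\right)\ \left(\,\begin{array}{c} v_1\\ v_2 \end{array}\,\right)\ =\ \left(\,\begin{array}{c} 0\\ 0 \end{array}\,\right), \] whose solutions satisfy #v_1=v_2#. The eigenvectors corresponding to #\lambda=3# are therefore the vectors with #\alpha#-coordinates #v_1=v_2\neq0#.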
The first statement of the theorem can be strengthened to: #\lambda# is an eigenvalue of #L# if and only if it is a root of the minimal polynomial of #L#.
Here is the proof:
- If #\lambda# is a root of the minimal polynomial, then it is also a root of the characteristic polynomial (which is after all a multiple of the minimal polynomial), and therefore, according to the first statement of the theorem, an eigenvalue of #L#.
- Conversely, if #\lambda# is an eigenvalue of #L# corresponding to the eigenvector #\vec{v}#, then the minimal polynomial #m_L(x)# satisfies: \[\begin{array}{rcl}m_L(\lambda)\,\vec{v}&=&m_L(L)\vec{v} \\&&\phantom{xx}\color{blue}{\lambda\,\vec{v} = L(\vec{v})}\\ &=&\vec{0}\\&&\phantom{xx}\color{blue}{m_L(x)\text{ is the minimal polynomial of }L}\end{array}\] Since #\vec{v}\neq\vec{0}#, it follows that #m_L(\lambda)=0#, so #\lambda# is indeed a root of the minimal polynomial.
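A small example of this strengthening (the map is our own choice): for #L=I_V# on a two-dimensional space #V#, the characteristic polynomial is #(1-\lambda)^2# while the minimal polynomial is #x-1#. The two polynomials differ, but they have exactly the same root #1#, in accordance with the strengthened statement.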
Since the characteristic polynomial is non-constant and therefore always has a complex root, each linear map from a complex vector space of finite positive dimension to itself has eigenvectors. This is not the case if the dimension of the vector space is infinite. Here is an example: Let #P# be the vector space of all polynomials in #x# and let #L_x:P\to P# be multiplication by #x#. Suppose that #f(x)# is a polynomial in #P# that is an eigenvector of #L_x# corresponding to the eigenvalue #\lambda#. Then we have #x\cdot f(x) = L_x(f(x)) = \lambda\cdot f(x)#. Because #f(x)#, being an eigenvector, is not the zero polynomial, the left-hand side has degree #\deg(f)+1#, whereas the right-hand side is either the zero polynomial (if #\lambda=0#) or has degree #\deg(f)# (if #\lambda\neq0#). This is a contradiction. Therefore, there are no eigenvectors of #L_x# in #P#. This example shows that, in the case of an infinite-dimensional vector space (even over the complex numbers), it may happen that a linear transformation has no eigenvectors.
Let \(L : \mathbb R^2\to\mathbb R^2\) be the linear map induced by reflection in the line \(x + 3 y = 0\). What are the eigenvalues of \(L\)?
\( \{-1,1\}\)
Vectors on the line \(x + 3 y = 0\) are mapped onto themselves, so \(1\) is an eigenvalue of #L#. Furthermore, the normal vector of this line is mapped onto its negative, so \(-1\) is an eigenvalue as well. A linear map from \(\mathbb R^2\) to \(\mathbb R^2\) has at most two different eigenvalues, so the answer is \(\{-1,1\}\).
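The answer can also be verified with the characteristic polynomial (a computation we add here as a check; the standard reflection formula #R=I_2-\frac{2}{\|\vec{n}\|^2}\,\vec{n}\,\vec{n}^{\top}#, with #\vec{n}# a normal vector of the line, is used). For #\vec{n}=\left(\,\begin{array}{c} 1\\ 3 \end{array}\,\right)#, the map #L# has matrix \[ \frac{1}{5}\left(\,\begin{array}{cc} 4 & -3\\ -3 & -4 \end{array}\,\right) \] with respect to the standard basis, and \[ \det\left(\,\begin{array}{cc} \tfrac{4}{5}-\lambda & -\tfrac{3}{5}\\ -\tfrac{3}{5} & -\tfrac{4}{5}-\lambda \end{array}\,\right)=\lambda^2-1, \] whose roots are indeed \(-1\) and \(1\).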