Invariant subspaces of linear maps: Eigenvalues and eigenvectors
Determining eigenvalues and eigenvectors
The following theorem shows that a maximal set of linearly independent eigenvectors can be found by finding such a set for each eigenvalue separately.
Independence of eigenvectors for different eigenvalues
Let #V# be a vector space and # L :V\rightarrow V# a linear map. Suppose that #\lambda_1,\ldots ,\lambda_n# are pairwise distinct eigenvalues of #L#. Suppose further that, for #i=1,\ldots,n#, #\alpha_i# is a set of linearly independent eigenvectors in # E_i = \ker{(L-\lambda_i\,I_V)}#. Then the union of #\alpha_1,\ldots,\alpha_n# is linearly independent.
Determination of eigenvalues and eigenvectors can thus be carried out as follows.
Let #V# be an #n#-dimensional vector space, where #n# is a natural number, and let #L:V\to V# be a linear map. The eigenvalues of #L# and a maximal set of linearly independent eigenvectors can be found as follows:
- Set up the matrix #A = L_{\alpha}# of the linear map # L :V \rightarrow V# with respect to a basis #\alpha# of your choice.
- Set up the characteristic equation #\det (A-\lambda \cdot I_n)=0# and solve it.
- For each eigenvalue #\lambda#, solve the system #(A-\lambda\cdot I_n)\vec{v}=\vec{0}#. The solutions are precisely the coordinate vectors, with respect to #\alpha#, of the elements of #E_{\lambda}#. Choose a basis of this solution space. The union of these bases over all eigenvalues is a maximal set of linearly independent eigenvectors, given in coordinates with respect to #\alpha#.
- Go back, if desired, from coordinates to vectors in #V#.
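The steps above can be mirrored numerically. The following is a minimal pure-Python sketch for the #2\times 2# case, using the matrix from the worked example below; it relies on the closed-form fact that for a #2\times2# matrix the characteristic equation is #\lambda^2-\mathrm{tr}(A)\,\lambda+\det(A)=0#, and it assumes the eigenvalues are real and each eigenspace is one-dimensional:

```python
import math

# Matrix from the worked example below: A = [[-8, -4], [30, 14]].
a, b = -8.0, -4.0
c, d = 30.0, 14.0

# Step 2: for a 2x2 matrix the characteristic equation is
#   lambda^2 - trace(A)*lambda + det(A) = 0.
trace = a + d          # -8 + 14 = 6
det = a * d - b * c    # -112 + 120 = 8

disc = math.sqrt(trace**2 - 4 * det)   # assumes real eigenvalues
lam1 = (trace + disc) / 2              # larger eigenvalue
lam2 = (trace - disc) / 2              # smaller eigenvalue

# Step 3: solve (A - lambda*I)v = 0. If the first row of A - lambda*I
# is nonzero, then (a - lambda)x + b*y = 0 is solved (up to scaling)
# by v = (b, lambda - a).
v1 = (b, lam1 - a)   # proportional to (1, -3)
v2 = (b, lam2 - a)   # proportional to (2, -5)

print(lam1, lam2, v1, v2)
```

This reproduces the eigenvalues #4# and #2# and spanning vectors of the two eigenspaces, up to a scalar multiple of the basis vectors chosen in the example.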
Some of the examples below show that a given linear transformation # L :V \rightarrow V# does not always have a basis of eigenvectors, so # L# cannot always be represented by a diagonal matrix. Still, we can often find a fairly simple form for the matrix. Later we will discuss this in greater detail.
- List of eigenvalues: \([4,2]\)
- Matrix whose columns are corresponding eigenvectors: \(\matrix{1&2\\ -3 & -5}\)
We start by solving the characteristic equation \(\det(A-\lambda\cdot I)=0\) of the matrix \(A\). Calculation and factorization of the characteristic polynomial gives
\[ \begin{array}{rcl}
\det(A-\lambda I) &=& \left\vert \begin{array}{cc} -8-\lambda & -4 \\ 30 & 14-\lambda \end{array} \right\vert\\ &=& (-8-\lambda)(14-\lambda)+4\cdot30 \\
&=& \lambda^2-6\lambda+8 \\
&=& \left(\lambda-4\right)\cdot \left(\lambda-2\right)
\end{array}\] This means that the eigenvalues are \(\lambda_1 = 4\) and \(\lambda_2 = 2\).
Next we calculate a corresponding eigenvector for each eigenvalue.
For \(\lambda_1 = 4\) we determine the kernel of \(A -4\cdot I_2\) by row reducing the coefficient matrix:
\[\begin{array}[t]{ll}A -4\cdot I_2&=
\matrix{
-12 & -4 \\
30 & 10 }\\
&
\begin{array}[t]{ll} \sim\left(\begin{array}{cc} 30 & 10 \\ -12 & -4 \end{array}\right) & \\ \sim\left(\begin{array}{cc} 1 & {{1}\over{3}} \\ -12 & -4 \end{array}\right) & \\ \sim\left(\begin{array}{cc} 1 & {{1}\over{3}} \\ 0 & 0 \end{array}\right) \end{array}
\end{array}\] Thus the eigenspace for \(\lambda_1 = 4\) equals \(\linspan{ \cv{1\\ -3}} \).
For \(\lambda_2 = 2\) we determine the kernel of \(A -2\cdot I_2\) by row reducing the coefficient matrix:
\[\begin{array}[t]{ll}A -2\cdot I_2&=
\matrix{
-10 & -4 \\
30 & 12
}\\
&
\begin{array}[t]{ll} \sim\left(\begin{array}{cc} 30 & 12 \\ -10 & -4 \end{array}\right) & \\ \sim\left(\begin{array}{cc} 1 & {{2}\over{5}} \\ -10 & -4 \end{array}\right) & \\ \sim\left(\begin{array}{cc} 1 & {{2}\over{5}} \\ 0 & 0 \end{array}\right) \end{array}
\end{array}\] Thus the eigenspace for \(\lambda_2 = 2\) equals \(\linspan{ \cv{2\\-5}}\).
We conclude that the list of eigenvalues is #\rv{4,2}# and that a matrix whose columns are corresponding eigenvectors is \(\matrix{1&2\\ -3 & -5}\).
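The conclusion can be checked directly: a vector #\vec{v}# is an eigenvector of #A# with eigenvalue #\lambda# exactly when #A\vec{v}=\lambda\vec{v}#. A minimal pure-Python check for the two eigenpairs found above:

```python
# Verify A v = lambda v for the eigenpairs found above,
# with A = [[-8, -4], [30, 14]].
A = [[-8, -4], [30, 14]]

def matvec(M, v):
    """Multiply a 2x2 matrix M by a 2-vector v."""
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

pairs = [(4, [1, -3]), (2, [2, -5])]
for lam, v in pairs:
    # A v must equal lambda v, entrywise.
    assert matvec(A, v) == [lam * x for x in v]

print("both eigenpairs check out")
```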