We now focus on finding eigenvectors for a given eigenvalue.
Let $V$ be a vector space and $L: V \to V$ a linear map. For every number $\lambda$, the set $E_\lambda = \{ v \in V \mid L(v) = \lambda v \}$ is a linear subspace of $V$.
The subspace $E_\lambda$ is called the eigenspace of $L$ corresponding to $\lambda$. This space consists of the zero vector and all eigenvectors of $L$ with eigenvalue $\lambda$.
The subset $E_\lambda$ of $V$ is the null space of the linear map $L - \lambda I_V$, and so, by theorem Image space and kernel, a linear subspace of $V$.
Let $v$ be a vector in $V$. Then $v$ belongs to $E_\lambda$ if and only if $L(v) = \lambda v$, so if and only if $L(v) - \lambda v = 0$, so if and only if $(L - \lambda I_V)(v) = 0$. Therefore, the eigenspace $E_\lambda$ consists of the zero vector and all eigenvectors with eigenvalue $\lambda$.
A special eigenspace is $E_0$. It consists of all vectors that are mapped onto $0$ times themselves, that is, onto the zero vector. Therefore, the eigenspace $E_0$ is $\ker(L)$, the null space (or kernel) of $L$.
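As a quick sanity check, here is a minimal sketch in sympy (the rank-deficient matrix is an assumed example): the eigenspace $E_0$ of a singular matrix is exactly its kernel.

```python
# Sketch (assumed example matrix): the eigenspace E_0 equals the kernel.
from sympy import Matrix

A = Matrix([[1, 2], [2, 4]])    # singular: second row = 2 * first row

kernel = A.nullspace()          # basis of ker(A)
v = kernel[0]
assert A * v == Matrix([0, 0])  # v is mapped onto 0 = 0 * v, so v lies in E_0
assert 0 in A.eigenvals()       # 0 is indeed an eigenvalue of A
```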
For most values of $\lambda$, the space $E_\lambda$ only consists of the zero vector. The number $\lambda$ is an eigenvalue if and only if $E_\lambda$ contains a vector $v \ne 0$, that is, if and only if $E_\lambda \ne \{0\}$. Some authors speak of an eigenspace only if $\lambda$ is an eigenvalue.
For a given eigenvalue $\lambda$, the determination of the corresponding eigenvectors is a matter of finding the kernel of a linear map, which is equivalent to solving a system of linear equations.
The process of finding eigenvectors consists of first determining the values of $\lambda$ such that $E_\lambda \ne \{0\}$. Each such $\lambda$ is an eigenvalue, and the vectors in $E_\lambda$ distinct from the zero vector are the eigenvectors corresponding to that eigenvalue.
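The two-step process above can be sketched with sympy; the $2 \times 2$ matrix is chosen purely for illustration.

```python
# Step 1: find the numbers lambda with E_lambda != {0}, i.e. the roots of
#         det(A - lambda * I) = 0.
# Step 2: for each such lambda, solve (A - lambda * I) x = 0.
from sympy import Matrix, symbols, solve, eye

A = Matrix([[2, 1], [1, 2]])
lam = symbols('lambda')

char_eq = (A - lam * eye(2)).det()   # characteristic equation's left side
eigenvalues = solve(char_eq, lam)    # roots: the eigenvalues 1 and 3

for ev in eigenvalues:
    basis = (A - ev * eye(2)).nullspace()   # basis of the eigenspace E_ev
    for v in basis:
        assert A * v == ev * v              # each basis vector is an eigenvector
```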
Let $L: V \to V$ be a linear map. We recall that the equation $\det(L - \lambda I_V) = 0$ is the characteristic equation of $L$ and that the left side of this equation, $p_L(\lambda) = \det(L - \lambda I_V)$, is the characteristic polynomial of $L$.
Let $L: V \to V$ be a linear map, where $V$ is a vector space of finite dimension $n$.
- A number $\lambda$ is an eigenvalue of $L$ if and only if $p_L(\lambda) = 0$.
- The eigenvectors corresponding to the eigenvalue $\lambda$ are the solutions distinct from the zero vector of the linear equation $(L - \lambda I_V)(v) = 0$.
Let $\alpha$ be a basis of $V$. Then the $\alpha$-coordinates of the eigenvectors of $L$ corresponding to the eigenvalue $\lambda$ may be calculated as the solutions distinct from the zero vector of the system of linear equations $(L_\alpha - \lambda I_n)\, x = 0$, where $L_\alpha$ is the matrix of $L$ with respect to $\alpha$.
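A hedged illustration of the theorem with sympy (the triangular $3 \times 3$ matrix is an assumption): the eigenvalues are exactly the roots of the characteristic polynomial, and the eigenvector coordinates solve the homogeneous system.

```python
# The eigenvalues of A are the roots of its characteristic polynomial,
# and the eigenvectors for lambda are the nonzero solutions of (A - lambda*I)x = 0.
from sympy import Matrix, eye

A = Matrix([[4, 1, 0],
            [0, 2, 0],
            [0, 0, 2]])

p = A.charpoly()              # characteristic polynomial of A
roots = p.all_roots()         # eigenvalues with multiplicity: [2, 2, 4]

# The eigenspace for lambda = 2 is the kernel of A - 2I; here it is 2-dimensional.
E2 = (A - 2 * eye(3)).nullspace()
assert len(E2) == 2
assert all(A * v == 2 * v for v in E2)
```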
Suppose that $\lambda$ is an eigenvalue of $L$. Then there is a vector $v \ne 0$ such that $L(v) = \lambda v$. The vector $v$ then belongs to the kernel of the linear map $L - \lambda I_V$. This means that $L - \lambda I_V$ is not invertible, so that $p_L(\lambda) = \det(L - \lambda I_V) = 0$.
Conversely: If $\lambda$ is a root of the characteristic polynomial of $L$, then the linear map $L - \lambda I_V$ is not invertible, so its kernel contains a vector $v \ne 0$. Then $L(v) = \lambda v$, which means that $v$ is an eigenvector of $L$ corresponding to the eigenvalue $\lambda$.
The matrix $L_\alpha - \lambda I_n$ is the matrix of $L - \lambda I_V$ with respect to the basis $\alpha$.
Statement 1 can be strengthened to: $\lambda$ is an eigenvalue of $L$ if and only if it is a root of the minimal polynomial $m_L$ of $L$.
Here is the proof:
- If $\lambda$ is a root of the minimal polynomial, then it is also a root of the characteristic polynomial (which is, after all, a multiple of the minimal polynomial), and therefore, according to statement 1, an eigenvalue of $L$.
- Conversely, if $\lambda$ is an eigenvalue of $L$ corresponding to the eigenvector $v$, then $L(v) = \lambda v$ implies $q(L)(v) = q(\lambda)\, v$ for every polynomial $q$, so the minimal polynomial satisfies $0 = m_L(L)(v) = m_L(\lambda)\, v$. Since $v \ne 0$, this gives $m_L(\lambda) = 0$.
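A concrete check of the strengthened statement (the diagonal matrix is an assumed example): for $A = \mathrm{diag}(2, 2, 3)$ the characteristic polynomial is $(x-2)^2(x-3)$, the minimal polynomial is $(x-2)(x-3)$, and the eigenvalues $\{2, 3\}$ are exactly the roots of the minimal polynomial.

```python
# For A = diag(2, 2, 3), m_A(x) = (x - 2)(x - 3) annihilates A,
# no proper divisor does, and its roots are the eigenvalues of A.
from sympy import Matrix, eye

A = Matrix([[2, 0, 0],
            [0, 2, 0],
            [0, 0, 3]])

# (A - 2I)(A - 3I) = 0, so (x - 2)(x - 3) annihilates A ...
assert (A - 2 * eye(3)) * (A - 3 * eye(3)) == Matrix.zeros(3)
# ... while neither linear factor alone does, so it is the minimal polynomial:
assert A - 2 * eye(3) != Matrix.zeros(3)
assert A - 3 * eye(3) != Matrix.zeros(3)
# Its roots coincide with the eigenvalues of A:
assert set(A.eigenvals()) == {2, 3}
```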
Since the characteristic polynomial always has a complex root, each linear map from a complex vector space of finite dimension to itself has eigenvectors. This is not the case if the dimension of the vector space is infinite. Here is an example: Let $V$ be the vector space of all polynomials in $x$ and let $L: V \to V$ be multiplication by $x$. Suppose that $p$ is a polynomial in $V$ that is an eigenvector of $L$ corresponding to the eigenvalue $\lambda$. Then we have $x \cdot p(x) = \lambda\, p(x)$. Because $p$, being an eigenvector, is not the null polynomial, the degree of the left-hand side is one bigger than the degree of $p$, which is at least the degree of the right-hand side. This is a contradiction. Therefore, there are no eigenvectors of $L$ in $V$. This example shows that, in the case of an infinite-dimensional vector space (even over the complex numbers), it may happen that a linear transformation has no eigenvectors.
Let $L: \mathbb{R}^2 \to \mathbb{R}^2$ be the linear map induced by reflection in a line $\ell$ through the origin. What are the eigenvalues of $L$?
Vectors on the line $\ell$ are mapped onto themselves, so $1$ is an eigenvalue of $L$. Furthermore, since the normal vector to this line gets mapped to its negative, $-1$ is an eigenvalue as well. A linear map from $\mathbb{R}^2$ to $\mathbb{R}^2$ has at most two different eigenvalues, so the answer is: $1$ and $-1$.
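A concrete check, taking reflection in the line $y = x$ as an assumed instance (the argument works for any line through the origin): its matrix swaps the coordinates, and its eigenvalues are $1$ and $-1$.

```python
# Reflection in y = x sends (a, b) to (b, a); a vector along the line is
# fixed (eigenvalue 1), a normal vector is negated (eigenvalue -1).
from sympy import Matrix

R = Matrix([[0, 1], [1, 0]])   # matrix of reflection in the line y = x

assert set(R.eigenvals()) == {1, -1}
assert R * Matrix([1, 1]) == Matrix([1, 1])      # direction vector of the line
assert R * Matrix([1, -1]) == -Matrix([1, -1])   # normal vector is negated
```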