If $\alpha$ and $\beta$ are bases of a finite-dimensional vector space $V$, and the matrix $A_\alpha$ of a linear map $L : V \to V$ with respect to $\alpha$ is given, then, according to the theorem Basis transition, the matrix $A_\beta$ of $L$ with respect to $\beta$ can be calculated by use of the formula
$$A_\beta = T^{-1} A_\alpha T,$$
where $T$ is the transition matrix from the basis $\beta$ to the basis $\alpha$. This means we only have to choose coordinates once; after that we can obtain every other matrix representation of the linear map by calculating with matrices alone. We make this explicit in the following theorem.
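The basis-transition formula can be carried out numerically. The following sketch (not taken from the text; the matrices $A$ and $T$ are made-up examples) computes $B = T^{-1}AT$ for $2\times 2$ matrices with plain Python lists:

```python
# Sketch of the basis-transition formula B = T^{-1} A T for 2x2 matrices.

def mat_mul(X, Y):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def det2(X):
    """Determinant of a 2x2 matrix."""
    return X[0][0] * X[1][1] - X[0][1] * X[1][0]

def inv2(X):
    """Inverse of an invertible 2x2 matrix."""
    d = det2(X)
    return [[ X[1][1] / d, -X[0][1] / d],
            [-X[1][0] / d,  X[0][0] / d]]

# Hypothetical data: A is the matrix of L with respect to the basis alpha,
# T is the (invertible) transition matrix from the basis beta to alpha.
A = [[2, 1], [0, 3]]
T = [[1, 1], [1, 2]]

B = mat_mul(inv2(T), mat_mul(A, T))   # matrix of L with respect to beta
print(B)                              # [[3.0, 2.0], [0.0, 2.0]]
```

Note that the determinant is unchanged by the basis transition, in line with the theorem below.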
Let $\alpha$ be a basis of an $n$-dimensional vector space $V$, where $n$ is a natural number, and let $L : V \to V$ be a linear map with matrix $A$ with respect to $\alpha$.

An $(n\times n)$-matrix $B$ is the matrix of $L$ with respect to a basis $\beta$ for $V$ if and only if there exists an invertible $(n\times n)$-matrix $T$ with
$$B = T^{-1} A T.$$

In particular, the determinant of every matrix determining $L$ has the same value, so that we can speak of the determinant of the linear map $L$.
Assume that $\beta$ is a basis for $V$ with $B = L_\beta$, that is, $B$ is the matrix of $L$ with respect to $\beta$. Then the theorem Basis transition tells us that $A$ and $B$ are related by means of $B = T^{-1} A T$, where $T$ is the transition matrix from $\beta$ to $\alpha$. In this case $T$ is an invertible $(n\times n)$-matrix with $B = T^{-1} A T$.
The other way around: assume that we have an invertible $(n\times n)$-matrix $T$ with $B = T^{-1} A T$. Now choose vectors $b_1, \ldots, b_n$ in $V$ such that the $\alpha$-coordinates of $b_j$ form the $j$-th column of $T$. Then $\beta = b_1, \ldots, b_n$ is the corresponding basis of $V$ (because $T$ is invertible with inverse $T^{-1}$), and it follows that
$$L_\beta = T^{-1} A T = B.$$
We have shown that $B$ is the matrix of $L$ with respect to the basis $\beta$ for $V$.
To prove the last statement, we show that $A$ and $B$ have the same determinant. Using the properties of the determinant established earlier, we find
$$\det(B) = \det(T^{-1} A T) = \det(T^{-1})\det(A)\det(T) = \det(T)^{-1}\det(A)\det(T) = \det(A).$$
A consequence of this theorem is the known fact that the $(n\times n)$ zero matrix is the only matrix of the zero map of an $n$-dimensional vector space to itself.

Another consequence of this theorem is the known fact that the $(n\times n)$ identity matrix is the only matrix of the identity map of an $n$-dimensional vector space to itself.

Yet another consequence is the fact that each linear map of a $1$-dimensional vector space to itself has a unique matrix. After all, that map has to be a scalar multiplication, and the $(1\times 1)$-matrix has to be the scalar by which each vector is multiplied.
If two linear maps $V \to V$ have different determinants, then it is not possible to find two bases of $V$ such that the matrix of the one linear map with respect to the first basis is equal to the matrix of the other linear map with respect to the second basis.
We give a special name, conjugate, to matrices $A$ and $B$ that both represent a given linear map. We will also show that this relation satisfies three important properties, summarized in the notion of an equivalence relation:

Let $n$ be a natural number. Two $(n\times n)$-matrices $A$ and $B$ are said to be conjugate if there exists an invertible $(n\times n)$-matrix $T$ with $B = T^{-1} A T$. We say that the matrix $T$ conjugates $A$ to $B$, and $T$ is called the conjugator.
Being conjugate is an equivalence relation; this means that it has the following three properties for all $(n\times n)$-matrices $A$, $B$, $C$:
- Reflexivity: $A$ is conjugate to itself (that is, $A$ and $A$ are conjugate)
- Symmetry: If $A$ and $B$ are conjugate, then $B$ and $A$ are also conjugate
- Transitivity: If $A$ and $B$ are conjugate, and $B$ and $C$ are conjugate, then $A$ and $C$ are also conjugate.
Reflexivity: For $T$ take the $(n\times n)$ identity matrix $I_n$. Then we have $A = I_n^{-1} A\, I_n$. Hence, $A$ and $A$ are conjugate.
Symmetry: Assume that $A$ and $B$ are conjugate. Then an invertible $(n\times n)$-matrix $T$ with $B = T^{-1} A T$ exists. By multiplying both sides on the left by $T$ and on the right by $T^{-1}$, we see that $T B\, T^{-1} = A$, or
$$A = \left(T^{-1}\right)^{-1} B\, T^{-1}.$$
Since $T^{-1}$ is an invertible $(n\times n)$-matrix, we conclude that $B$ and $A$ are conjugate.
Transitivity: Assume that $A$ and $B$ are conjugate, and that $B$ and $C$ are also conjugate. Then we have invertible $(n\times n)$-matrices $T$ and $S$, such that $B = T^{-1} A T$ and $C = S^{-1} B S$. Consequently the invertible $(n\times n)$-matrix $TS$ satisfies
$$C = S^{-1} T^{-1} A\, T S = (TS)^{-1} A\, (TS),$$
so that $A$ and $C$ are conjugate.
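The symmetry and transitivity arguments can be checked numerically. The sketch below (with made-up $2\times 2$ matrices) verifies that conjugating $B$ by $T^{-1}$ recovers $A$, and that conjugating $A$ by the single matrix $TS$ yields $C$:

```python
# Numerical check of symmetry and transitivity of conjugacy (made-up data).

def mat_mul(X, Y):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def det2(X):
    return X[0][0] * X[1][1] - X[0][1] * X[1][0]

def inv2(X):
    d = det2(X)
    return [[ X[1][1] / d, -X[0][1] / d],
            [-X[1][0] / d,  X[0][0] / d]]

def conjugate(A, T):
    """Return T^{-1} A T."""
    return mat_mul(inv2(T), mat_mul(A, T))

A = [[1, 2], [3, 4]]
T = [[2, 1], [1, 1]]   # invertible: det = 1
S = [[1, 3], [0, 1]]   # invertible: det = 1

B = conjugate(A, T)                      # B = T^{-1} A T
C = conjugate(B, S)                      # C = S^{-1} B S

A_back = conjugate(B, inv2(T))           # symmetry: recovers A
C_direct = conjugate(A, mat_mul(T, S))   # transitivity: equals C
print(A_back)
print(C_direct)
```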
Determining whether two $(n\times n)$-matrices $A$ and $B$ are conjugate largely boils down to solving linear equations: we start by solving the matrix equation
$$T B = A T$$
and next look for an invertible matrix among the solutions. Here is an example: assume that $A$ and $B$ are two particular $(2\times 2)$-matrices. If $A$ and $B$ are conjugate, then there exists an invertible $(2\times 2)$-matrix $T$ with $B = T^{-1} A T$. After multiplying this relation from the left by $T$ we have the linear matrix equation $T B = A T$. If we write
$$T = \begin{pmatrix} a & b \\ c & d \end{pmatrix},$$
then, after performing the matrix multiplications, the matrix equation transforms into a system of linear equations in $a$, $b$, $c$, and $d$. In the example at hand, this system has a one-parameter family of solutions, and none of the solution matrices $T$ is invertible, so $A$ and $B$ are not conjugate. In other words: there is no basis with respect to which the linear map with matrix $A$ with respect to the standard basis has the matrix $B$.
In this case you can also see directly that $A$ and $B$ are not conjugate because their determinants differ. Later we discuss the determination of the Jordan normal form of a square matrix; that method is much more efficient.
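The defining condition behind the procedure above (find an invertible $T$ with $TB = AT$) can be prototyped crudely. The sketch below, with made-up matrices, searches small integer matrices; it is only an illustration, not the systematic solve-the-linear-system method of the text:

```python
# Naive illustration: search small integer matrices T with T B = A T and
# det(T) != 0. A real procedure solves the linear system T B = A T and then
# looks for an invertible matrix among the solutions.
from itertools import product

def mat_mul(X, Y):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def det2(X):
    return X[0][0] * X[1][1] - X[0][1] * X[1][0]

def find_conjugator(A, B, bound=2):
    """Return an invertible integer T with T B = A T, or None if no such T
    with entries in [-bound, bound] exists (which does not by itself prove
    that A and B are not conjugate)."""
    for a, b, c, d in product(range(-bound, bound + 1), repeat=4):
        T = [[a, b], [c, d]]
        if det2(T) != 0 and mat_mul(T, B) == mat_mul(A, T):
            return T
    return None

# Hypothetical conjugate pair: both matrices have eigenvalues 1 and 2.
A = [[1, 1], [0, 2]]
B = [[1, 0], [0, 2]]
print(find_conjugator(A, B))   # an invertible T with T B = A T
```

For the identity matrix and a non-identity matrix with the same determinant, the search correctly comes up empty, matching the discussion of singleton conjugacy classes below.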
Given $(n\times n)$-matrices $A$ and $B$ and an invertible $(n\times n)$-matrix $T$ such that $B = T^{-1} A T$, the conjugator is not unique: every scalar multiple $\lambda T$ with $\lambda \neq 0$ is also a conjugator from $A$ to $B$. Furthermore, it may happen that various matrices that are not scalar multiples of each other are all conjugators from $A$ to $B$. For example, take $A = B = I_n$. Then every invertible $(n\times n)$-matrix $T$ satisfies $I_n = T^{-1} I_n T$ by definition of the inverse matrix, because $T^{-1} T = I_n$.
Another commonly used name for conjugate matrices is similar matrices.
The set of $(n\times n)$-matrices can be divided into separate disjoint parts, each consisting of matrices that are mutually conjugate and not conjugate to any matrix of another part. This is a property that holds for every equivalence relation. In general, these parts are called equivalence classes and are usually also named after the specific equivalence relation; hence, here they are called conjugacy classes.
Let $V$ be a vector space of finite dimension $n$. For each linear map $L : V \to V$, the $(n\times n)$-matrices that determine $L$ with respect to a suitable basis form a conjugacy class.
The conjugacy class of the identity map consists of only $I_n$. The only linear maps whose conjugacy class consists of a single matrix are the scalar multiplications. After all, the matrix corresponding to scalar multiplication by $\lambda$ is the diagonal matrix $\lambda I_n$, of which all diagonal elements are equal to $\lambda$; such a scalar matrix commutes with all other $(n\times n)$-matrices, and scalar matrices are the only matrices with this property. Therefore $\lambda I_n$ is conjugate only to itself: $T^{-1} (\lambda I_n) T = \lambda I_n$ for each invertible matrix $T$.
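A quick numerical check of this claim, with arbitrary made-up invertible matrices $T$: conjugating a scalar matrix $\lambda I_2$ always returns the same scalar matrix.

```python
# Check that T^{-1} (lam * I) T = lam * I for several invertible T.

def mat_mul(X, Y):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def det2(X):
    return X[0][0] * X[1][1] - X[0][1] * X[1][0]

def inv2(X):
    d = det2(X)
    return [[ X[1][1] / d, -X[0][1] / d],
            [-X[1][0] / d,  X[0][0] / d]]

lam = 5
S = [[lam, 0], [0, lam]]   # matrix of scalar multiplication by lam

for T in ([[1, 2], [0, 1]], [[3, 1], [2, 1]], [[0, 1], [1, 0]]):
    C = mat_mul(inv2(T), mat_mul(S, T))
    print(C)   # always [[5.0, 0.0], [0.0, 5.0]]
```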
From the theorem above and properties of the determinant it follows that two matrices with different determinants are not conjugate. This can also be seen directly: if $A$ and $B$ are conjugate $(n\times n)$-matrices, then there exists an invertible $(n\times n)$-matrix $T$ such that $B = T^{-1} A T$. From known properties of the determinant it then follows that
$$\det(B) = \det(T^{-1})\det(A)\det(T) = \det(A).$$
Since conjugate matrices represent the same linear map, they share all properties of that map. We saw above that conjugate matrices have the same determinant. Other properties that conjugate matrices share are the trace, the characteristic polynomial, the rank, and the minimal polynomial. Two matrices for which one or more of these properties differ are not conjugate.
A property of the matrix of a linear map that does not depend on the chosen basis is called an invariant.
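These invariants give a cheap necessary condition for conjugacy. The sketch below (with made-up $A$ and $T$) checks that a conjugate pair shares trace and determinant, and hence, for $2\times 2$ matrices, the whole characteristic polynomial $t^2 - \operatorname{trace}(A)\,t + \det(A)$:

```python
# Conjugate 2x2 matrices share trace and determinant (made-up example data).

def mat_mul(X, Y):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def det2(X):
    return X[0][0] * X[1][1] - X[0][1] * X[1][0]

def inv2(X):
    d = det2(X)
    return [[ X[1][1] / d, -X[0][1] / d],
            [-X[1][0] / d,  X[0][0] / d]]

def trace2(X):
    return X[0][0] + X[1][1]

A = [[4, 1], [2, 3]]
T = [[1, 1], [0, 1]]                    # invertible: det = 1
B = mat_mul(inv2(T), mat_mul(A, T))     # conjugate to A

print((trace2(A), det2(A)))   # (7, 10)
print((trace2(B), det2(B)))   # (7.0, 10.0)
```

Equal invariants do not in general prove conjugacy; unequal invariants disprove it.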
Thanks to a statement we will discuss later, the characteristic polynomial directly shows whether or not two $(2\times 2)$-matrices are conjugate. However, that is not the case for matrices of larger dimensions.
The following two matrices $A$ and $B$ are conjugate. Determine a conjugator $T$ from $A$ to $B$; that is, an invertible $(2\times 2)$-matrix $T$ such that
$$B = T^{-1} A T.$$

The matrix $T$ found below is invertible since $\det(T) \neq 0$, and it is easy to verify that $T$ satisfies $B = T^{-1} A T$.

To find $T$, we first solve the matrix equation
$$T B = A T.$$
Writing $T = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$ and performing the matrix multiplications turns this into a system of linear equations in $a$, $b$, $c$, and $d$. The solution of this system contains free parameters. If we choose values for the free parameters such that $\det(T) \neq 0$, we obtain a conjugator $T$.
This answer works because the resulting matrix $T$ is invertible.

The answer is not unique: other choices for the free parameters, and hence for the entries of $T$, are possible.
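A claimed conjugator can be checked mechanically. The matrices below are hypothetical stand-ins, not the ones from the example above; the check itself is the general one: $\det(T) \neq 0$ and $B = T^{-1} A T$.

```python
# Verifying a conjugator numerically (hypothetical example data).

def mat_mul(X, Y):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def det2(X):
    return X[0][0] * X[1][1] - X[0][1] * X[1][0]

def inv2(X):
    d = det2(X)
    return [[ X[1][1] / d, -X[0][1] / d],
            [-X[1][0] / d,  X[0][0] / d]]

A = [[2, 1], [1, 2]]
T = [[1, 1], [1, -1]]
assert det2(T) != 0                   # T is invertible

B = mat_mul(inv2(T), mat_mul(A, T))   # T conjugates A to this B
print(B)                              # [[3.0, 0.0], [0.0, 1.0]]
```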