Conjugation concerns matrices representing a linear map from a vector space to that same vector space. Here, we discuss the case of two different vector spaces or, more generally, two different bases: one for the domain #V# and one for the range #W# of a linear map #V\to W#. We will see that the problem of determining whether two matrices represent the same linear map with respect to suitable bases for the domain and range is much easier than the case where the same basis must be used for both domain and range.
Let #m# and #n# be natural numbers. Two #(m\times n)#-matrices #A# and #B# are called matrix equivalent if there is an invertible #(m\times m)#-matrix #S# and an invertible #(n\times n)#-matrix #T# such that #B = S\, A\, T#.
- Two matrices #A# and #B# of the same size are matrix equivalent if and only if they have the same rank. In particular, matrix equivalence is an equivalence relation.
- Let #\alpha# be a basis for an #n#-dimensional vector space #V#, let #\beta # be a basis for an #m#-dimensional vector space #W#, and let #L: V\to W# be a linear map having matrix #A# with respect to #\alpha# and #\beta#. An #(m\times n)#-matrix #B# is the matrix of #L# with respect to a basis for #V# and a basis for #W# if and only if #A# and #B# are matrix equivalent.
1. Suppose that #B = S\, A\, T# for an invertible #(m\times m)#-matrix #S# and an invertible #(n\times n)#-matrix #T#. Then #\im{B}# is the image of #\im{A}# under #S# (here we use that #T# is invertible, so the image of #\mathbb{R}^n# under #T# equals #\mathbb{R}^n#). Because #S# is invertible, the vector spaces #\im{A}# and #\im{B}# have the same dimension. According to the theorem Rank is dimension column space, this means that the rank of #A# equals the rank of #B#.
For the proof of the converse, suppose that the rank of #A# equals the rank of #B#; we will write #r# for this common rank. Then the row and column reduced echelon form of both #A# and #B# is equal to the matrix #K_r#, all of whose entries are zero except for the first #r# diagonal entries, which are equal to #1#. Since #K_r# is obtained from each of #A# and #B# by multiplication from the left and from the right by invertible matrices, both #A# and #B# are matrix equivalent to #K_r#, and hence to each other; the explicit combination of the two factorizations is carried out below.
The fact that matrix equivalence is an equivalence relation follows immediately from the characterization of the relation in statement 1: the rank is a function on the set of #(m\times n)#-matrices, and having the same image under a function is always an equivalence relation.
2. This proof is similar to the proof in the case of conjugation. For the matrix #A# as in the conditions of the statement, we have #A = {}_\beta L_\alpha#. If #\alpha'# is also a basis of #V# and #\beta'# is also a basis of #W#, then #B = {}_{\beta'}L_{\alpha'}# is the matrix of #L# with respect to these bases and so \[B = {}_{\beta'}I_\beta\, {}_\beta L_\alpha\, {}_{\alpha}I_{\alpha'}={}_{\beta'}I_\beta\, A \,{}_{\alpha}I_{\alpha'}\]Because the matrices #{}_{\beta'}I_\beta # and #{}_{\alpha}I_{\alpha'}# are invertible, it follows that #A# and #B# are matrix equivalent.
Conversely, if there are invertible matrices #S# and #T# such that #B = S\, A\, T#, then the basis #\beta'# corresponding to the coordinatisation #L_S\,\beta# and the basis #\alpha'# corresponding to the coordinatisation #L_T^{-1}\,\alpha# satisfy #B = {}_{\beta'}L_{\alpha'}#, for then \[ B = S\, A\, T =( \beta'\,\beta^{-1})_\varepsilon\,{}_\beta L_\alpha\,(\alpha\,(\alpha')^{-1})_\varepsilon= {}_{\beta'} L_{\alpha'}\]
Determining whether two #(m\times n)#-matrices #A# and #B# are matrix equivalent is thus equivalent to checking whether #\text{rank}(A) = \text{rank}(B)#.
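This rank test is easy to carry out by machine. Below is a minimal numerical sketch of the test (our own illustration, not part of the theory text), using NumPy's `matrix_rank` and, by way of example, the two matrices that appear in the worked example further below.

```python
import numpy as np

def matrix_equivalent(A, B):
    """Statement 1: two matrices of the same size are matrix
    equivalent exactly when their ranks agree."""
    return A.shape == B.shape and \
           np.linalg.matrix_rank(A) == np.linalg.matrix_rank(B)

A = np.array([[1, 0], [0, -1]])
B = np.array([[1, 1], [0, 1]])
print(matrix_equivalent(A, B))  # True: both matrices have rank 2
```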
In case #A# and #B# have equal rank #r#, it is possible to find invertible matrices #S# and #T# such that #B = S\, A\, T #. By row and column reduction for both #A# and #B#, we can find invertible matrices #S_a#, #S_b#, #T_a#, #T_b# such that
\[A = S_a K_r T_a\phantom{xxx}\text{ and }\phantom{xxx} B = S_b K_r T_b\]
where #K_r# is an #(m\times n)#-matrix of rank #r# in row and column reduced echelon form. In that case, \(B = S\, A \, T\), where \(S=S_b\, S_a^{-1}\) and \(T = T_a^{-1}T_b\) are invertible matrices.
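The following SymPy sketch automates this recipe; the helper names `kr_factorization` and `extend_to_invertible_cols` are ours. Instead of literal row and column reduction it uses the rank factorization #A = C\,F#, where #C# consists of the pivot columns of #A# and #F# of the nonzero rows of the reduced row echelon form of #A#; extending #C# and #F# to invertible matrices produces a factorization #A = S_a\,K_r\,T_a# of the required form.

```python
import sympy as sp

def extend_to_invertible_cols(C):
    """Extend the columns of C (shape m x r, full column rank) to an
    invertible m x m matrix by appending those standard basis vectors
    that increase the rank."""
    M = C
    for i in range(C.rows):
        candidate = M.row_join(sp.eye(C.rows)[:, i])
        if candidate.rank() > M.rank():
            M = candidate
    return M

def kr_factorization(A):
    """Return (S, K, T) with A = S*K*T, where S and T are invertible
    and K is the (m x n) matrix whose only nonzero entries are ones on
    the first r diagonal positions, r being the rank of A."""
    m, n = A.shape
    R, pivots = A.rref()
    r = len(pivots)
    C = A[:, list(pivots)]                # m x r: pivot columns of A
    F = R[:r, :]                          # r x n: nonzero rows of rref(A)
    S = extend_to_invertible_cols(C)      # first r columns are C
    T = extend_to_invertible_cols(F.T).T  # first r rows are F
    K = sp.zeros(m, n)
    for i in range(r):
        K[i, i] = 1
    return S, K, T

A = sp.Matrix([[1, 0], [0, -1]])
S, K, T = kr_factorization(A)
assert S * K * T == A and S.det() != 0 and T.det() != 0
```

Applying the function to both #A# and #B# and combining the results as above then gives explicit matrices #S = S_b\,S_a^{-1}# and #T = T_a^{-1}\,T_b# with #B = S\,A\,T#.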
We can also approach the search for #S# and #T# by solving linear equations. We start by solving the matrix equation
\[B \, U= S\, A\phantom{xxx}\text{in the unknown square matrices }\phantom{xxx}S,\ U\] and then look for invertible matrices among the solutions. Here is an example: Suppose
\[A = \matrix{1&0\\ 0& -1}\phantom{xxx}\text { and }\phantom{xxx} B = \matrix{1&1\\ 0& 1}\]
If #A# and #B# are matrix equivalent, then there are invertible #(2\times2)#-matrices #S# and #T# with #B = S\,A\,T#. After multiplying from the right by #U = T^{-1}# this gives the linear matrix equation \[ B\,U = S\,A\] If we write\[S= \matrix{a&b\\ c&d}\quad\text{and}\quad U = \matrix{x&y\\ z&w}\] then, after working out the products, the matrix equation turns into
\[\matrix{x+z& y+w\\ z & w} = \matrix{a&-b\\ c&-d}\]
and hence into the system of linear equations
\[\lineqs{ x+z &=& a\\ y+w &=& -b\\ z &=& c\\ w &=& -d}\]
We conclude that #S = \matrix{x+z &-y-w\\ z & -w}#. If we choose #y=z = 1# and #w=x=0#, then #S# and #U# are invertible and we find, with #T = U^{-1}#,
\[S\, A\, T = \matrix{1 &-1\\ 1 & 0} \matrix{1&0\\ 0& -1} \matrix{0&1\\ 1&0} =\matrix{1&1\\ 0& 1} = B\]
This shows that #A# and #B# are matrix equivalent. Therefore, in this case, bases #\alpha# and #\beta# of #\mathbb{R}^2# exist such that the linear maps #L_A# and #{}_\beta (L_B)_\alpha# are equal.
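For good measure, the matrices found in this example can be verified numerically; a short NumPy check (our own addition):

```python
import numpy as np

# The matrices from the example, with y = z = 1 and w = x = 0:
A = np.array([[1, 0], [0, -1]])
B = np.array([[1, 1], [0, 1]])
S = np.array([[1, -1], [1, 0]])
U = np.array([[0, 1], [1, 0]])
T = np.linalg.inv(U)  # here U happens to be its own inverse

assert np.allclose(B @ U, S @ A)  # the linear matrix equation B U = S A
assert np.allclose(S @ A @ T, B)  # hence B = S A T
```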
For given matrix equivalent #(m\times n)#-matrices #A# and #B#, the invertible #(m\times m)#-matrix #S# and the invertible #(n\times n)#-matrix #T# such that #B = S\,A\,T# are not unique: for any constant #c# unequal to zero, the scalar multiples #S'=c\, S# and #T'=c^{-1}\, T# satisfy #B=S'\,A\,T'# as well. Furthermore, choices of an entirely different form are sometimes possible: consider #n=2# and #A = B = I_2#. Then every invertible #(2\times 2)#-matrix #S# satisfies #B = S\,A\,S^{-1}#, by definition of the inverse matrix: #I_2 = S\,S^{-1} = S\,I_2\,S^{-1}#.
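The scalar rescaling observation is easy to check directly; a small numerical illustration (ours), reusing the matrices from the example above:

```python
import numpy as np

A = np.array([[1, 0], [0, -1]])
B = np.array([[1, 1], [0, 1]])
S = np.array([[1, -1], [1, 0]])
T = np.array([[0, 1], [1, 0]])  # T = U^{-1} from the example

# If B = S A T, then B = (c S) A (c^{-1} T) for every nonzero scalar c,
# since the factors c and 1/c cancel against each other.
for c in (2.0, -0.5, 7.0):
    assert np.allclose((c * S) @ A @ (T / c), B)
```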
Since matrix equivalence is an equivalence relation, the set #M_{m\times n}# of #(m\times n)#-matrices can be partitioned into mutually disjoint subsets, each consisting of matrices which are mutually matrix equivalent: the matrix-equivalence classes.
Two #(m\times n)#-matrices are matrix equivalent if and only if they have the same rank. In other words, the matrix-equivalence classes consist of all the elements of #M_{m\times n}# with a fixed rank.
The statement that matrix equivalence is an equivalence relation follows immediately from the statement that two matrices of the same size are matrix equivalent if and only if they have the same rank: the rank is a map defined on #M_{m\times n}# and for each map the relation of having the same image under the map is an equivalence relation.
Here, we give a direct proof of the statement by verifying the three characteristic properties of an equivalence relation for matrix equivalence:
Reflexivity: Take #S=I_m# and #T=I_n#. Then we have #A = I_m\, A\, I_n#. So #A# is matrix equivalent to itself.
Symmetry: Suppose that #A# and #B# are matrix equivalent. Then there is an invertible #(m\times m)#-matrix #S# and an invertible #(n\times n)#-matrix #T# with #B = S\,A\,T#. By multiplying from the left by #S^{-1}# and from the right by #T^{-1}#, we see that #S^{-1} B\, T ^{-1}= A#, that is, \[A = \left(S^{-1}\right) B\,\left(T^{-1}\right)\] Because #S^{-1}# is an invertible #(m\times m)#-matrix and #T^{-1}# an invertible #(n\times n)#-matrix, we conclude that #B# and #A# are matrix equivalent.
Transitivity: Suppose that #A# and #B# are matrix equivalent, and that #B# and #C# are also matrix equivalent. Then there are invertible #(m\times m)#-matrices #S# and #R# and invertible #(n\times n)#-matrices #T# and #U# such that #B = S\,A\,T# and #C = R\,B\,U#. As a consequence, the invertible #(m\times m)#-matrix #R\,S# and the invertible #(n\times n)#-matrix #T\,U# satisfy \[ C = R\,B\,U =R\,S\,A\,T\,U = \left(R\, S\right)\, A \,\left(T\, U\right)\] which implies that #A# and #C# are matrix equivalent.
We give some other characterizations of pairs of linear maps of equal rank. Recall from the theory The matrix of a linear map that, for a linear map #L:V\to W# and bases #\alpha# for #V# and #\beta# for #W#, the matrix of #L# with respect to #\alpha# and #\beta# is denoted by #{}_\beta L_\alpha#, and, in case #V=W# and #\alpha = \beta#, also by #L_\alpha#.
Let #V# and #W# be vector spaces of finite dimension #n# and #m#, respectively, and let #L# and #M# be linear maps #V\to W#. The following statements are equivalent.
- There are isomorphisms #P: V\to V# and #Q:\ W\to W# such that #M = Q\,L\,P#.
- There are bases #\beta_1# and #\beta_2# for #V# and #\gamma_1# and #\gamma_2# for #W# such that #{}_{\gamma_1}L_{\beta_1} = {}_{\gamma_2}M_{\beta_2}#.
- There are bases #\beta# for #V# and #\gamma# for #W#, such that #{}_{\gamma}L_{\beta} # and # {}_{\gamma}M_{\beta}# are matrix equivalent.
- For each pair of bases #\beta# for #V# and #\gamma# for #W# the matrices #{}_{\gamma}L_{\beta} # and # {}_{\gamma}M_{\beta}# are matrix equivalent.
- For each pair of bases #\beta# for #V# and #\gamma# for #W# the matrices #{}_{\gamma}L_{\beta} # and # {}_{\gamma}M_{\beta}# have the same rank.
The equivalence of 4 and 5 follows directly from the above theorem Matrix equivalence. To prove the equivalence of the first four statements, we use the scheme
\[1\Rightarrow 4 \Rightarrow 3\Rightarrow 2\Rightarrow 1\]
#1 \Rightarrow 4#: Suppose that there are isomorphisms #P: V\to V# and #Q:\ W\to W# such that #M = Q\,L\,P#. Let #\beta# be a basis for #V# and #\gamma# a basis for #W#. Then we have
\[{}_{\gamma}M_{\beta} ={}_{\gamma}(Q\,L\,P)_{\beta}=Q_{\gamma}\,{}_{\gamma}L_{\beta}\,P_{\beta}\] where #Q_{\gamma}# and #P_{\beta}# are invertible matrices. This means that #{}_{\gamma}L_{\beta}# and #{}_{\gamma}M_{\beta}# are matrix equivalent.
#4 \Rightarrow 3#: If #{}_{\gamma}L_{\beta}# and #{}_{\gamma}M_{\beta}# are matrix equivalent for each pair of bases #\beta# for #V# and #\gamma# for #W#, then any single choice of such a pair witnesses statement 3.
#3 \Rightarrow 2#: Suppose that #\beta# is a basis for #V# and #\gamma# is a basis for #W# such that #{}_{\gamma}L_{\beta} # and # {}_{\gamma}M_{\beta}# are matrix equivalent. Then there are invertible matrices #C# of size #n\times n# and #D# of size #m\times m# such that \[{}_{\gamma}M_{\beta} =D\, {}_{\gamma}L_{\beta}\,C\] Let #\beta_1# be the basis for #V# corresponding to the coordinatisation #L_C^{-1}\,\beta# and let #\gamma_1# be the basis for #W# corresponding to the coordinatisation #L_D\,\gamma#. Then we have #C = \beta\beta_1^{-1}={}_\beta I_{\beta_1}# and #D=\gamma_1\gamma^{-1}={}_{\gamma_1}I_{\gamma}#, so \[D\, {}_{\gamma}L_{\beta}\,C = {}_{\gamma_1}I_{\gamma}\, {}_{\gamma}L_{\beta}\,{}_{\beta}I_{\beta_1} ={}_{\gamma_1}L_{\beta_1}\] If we choose #\beta_2 = \beta# and #\gamma_2 = \gamma#, we find
\[{}_{\gamma_2}M_{\beta_2} ={}_{\gamma}M_{\beta} =D \,{}_{\gamma}L_{\beta}\,C = {}_{\gamma_1}L_{\beta_1}\] as was to be proven.
#2 \Rightarrow 1#: Suppose that there are bases #\beta_1# and #\beta_2# for #V# and #\gamma_1# and #\gamma_2# for #W# such that #{}_{\gamma_1}L_{\beta_1} = {}_{\gamma_2}M_{\beta_2}#. Then, after left multiplication by #\gamma_2^{-1}# and right multiplication by #\beta_2#, we find
\[\gamma_2^{-1}\gamma_1 L \beta_1^{-1}\beta_2 = M\] Therefore, the isomorphisms #Q = \gamma_2^{-1}\gamma_1 # and # P = \beta_1^{-1}\beta_2# satisfy #M = Q\,L\,P#, which proves statement 1.
Two matrices thus determine the same linear map with respect to possibly different bases if and only if each of the two can be row and column reduced to the same matrix\[K_r = \matrix{1&0&0&\cdots& 0&0&\cdots&0\\ 0&1&0&\cdots &0&0&\cdots&0\\ 0&0&\ddots&\cdots &0&0&\cdots&0\\ \vdots&\vdots&\vdots&\ddots &\vdots&\vdots&\vdots&\vdots\\0&0&0&\cdots &1&0&\cdots&0\\ 0&0&0&\cdots&0& 0&\cdots&0\\ 0&0&0&\cdots&0& 0&\ddots&\vdots\\ 0&0&0&\cdots&0& 0&\cdots&0}\]where the number of ones is equal to the rank #r# of each of the two matrices.
From the theorem we can immediately deduce the following previously discussed result:
An #(m\times n)#-matrix #A# has rank #r# if and only if there is an invertible #(m\times m)#-matrix #D# and an invertible #(n\times n)#-matrix #C# such that #A = D\, K_{r}\,C#, where #K_{r}# is the #(m\times n)#-matrix obtained from the zero matrix of the same size by replacing the first #r# diagonal entries by #1#.
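One direction of this statement is easy to illustrate numerically: sandwiching #K_r# between random invertible matrices yields a matrix of rank #r#. A minimal sketch (the sizes and seed are arbitrary choices of ours; a random Gaussian matrix is invertible with probability #1#):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 4, 5, 2

# K_r: the (m x n) zero matrix with its first r diagonal entries set to 1.
K = np.zeros((m, n))
K[:r, :r] = np.eye(r)

D = rng.standard_normal((m, m))  # invertible with probability 1
C = rng.standard_normal((n, n))  # invertible with probability 1

A = D @ K @ C
assert np.linalg.matrix_rank(A) == r  # invertible factors preserve the rank
```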
Consider the matrices \[ A = \matrix{4 & 1 & -3 \\ 2 & 1 & -2 \\ }\phantom{xx}\text{ and }\phantom{xx} B = \matrix{-4 & -5 & 3 \\ -2 & -3 & 2 \\ } \]
Are #A# and #B# matrix equivalent?
Yes
According to the theorem Matrix equivalence, the matrices are matrix equivalent if and only if they have the same rank. The rank of #A# is #2# and the rank of #B# is #2# as well. Therefore, the answer is Yes.
Both matrices can be row and column reduced to the echelon form #\matrix{1 & 0 & 0 \\ 0 & 1 & 0 \\ }#.
After all,
\[\begin{array}{rcl}A &=& \matrix{-2 & 5 \\ -1 & 2 \\ }\,\matrix{1 & 0 & 0 \\ 0 & 1 & 0 \\ }\,\matrix{-2 & -3 & 4 \\ 0 & -1 & 1 \\ -1 & 0 & 1 \\ }\\ &\text{ and }&\\ B &=& \matrix{2 & -3 \\ 1 & -1 \\ }\, \matrix{1 & 0 & 0 \\ 0 & 1 & 0 \\ }\, \matrix{-2 & -4 & 3 \\ 0 & -1 & 1 \\ 1 & 2 & -1 \\ }\end{array}\]
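These factorizations can be checked numerically and combined, as in the theory above, into an explicit pair #S#, #T# with #B = S\,A\,T#; a short NumPy verification (ours):

```python
import numpy as np

A  = np.array([[ 4,  1, -3], [ 2,  1, -2]])
B  = np.array([[-4, -5,  3], [-2, -3,  2]])
K  = np.array([[ 1,  0,  0], [ 0,  1,  0]])
Sa = np.array([[-2,  5], [-1,  2]])
Ta = np.array([[-2, -3,  4], [ 0, -1,  1], [-1,  0,  1]])
Sb = np.array([[ 2, -3], [ 1, -1]])
Tb = np.array([[-2, -4,  3], [ 0, -1,  1], [ 1,  2, -1]])

# The two stated reductions to the echelon form K:
assert np.allclose(Sa @ K @ Ta, A) and np.allclose(Sb @ K @ Tb, B)

# Combining them as in the theory: S = Sb Sa^{-1} and T = Ta^{-1} Tb.
S = Sb @ np.linalg.inv(Sa)
T = np.linalg.inv(Ta) @ Tb
assert np.allclose(S @ A @ T, B)
```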