Previously we saw that, if an orthogonal map leaves a subspace invariant, it also leaves the orthogonal complement of that subspace invariant. The same is true of unitary maps.
Let #W# be a linear subspace of a complex finite-dimensional inner product space #V#, and #L:V\rightarrow V# a unitary map that leaves #W# invariant. Then #W^\perp# is also invariant under #L#.
The proof of this statement is very similar to the real case.
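As a small numerical illustration (a minimal sketch in Python with NumPy; the random matrix, the seed, and the choice of #W# as an eigenspace are our own, made for convenience), one can check directly that a unitary matrix maps the orthogonal complement of an invariant subspace into itself:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# A random unitary matrix: the Q factor of a QR decomposition is unitary.
Z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
U, _ = np.linalg.qr(Z)

# U leaves each of its eigenspaces invariant; take W spanned by one eigenvector.
_, eigvecs = np.linalg.eig(U)
w = eigvecs[:, 0]          # W = span(w) is invariant under U

# An orthonormal basis of W-perp: the remaining columns of a QR factor of [w | I].
Q, _ = np.linalg.qr(np.column_stack([w, np.eye(n)]))
W_perp = Q[:, 1:]

# Invariance of W-perp: the image U x of every x in W-perp is orthogonal to w.
print(np.allclose(np.conj(w) @ U @ W_perp, 0))    # True
```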
Earlier we also saw that orthogonal maps are complex diagonalizable. This also holds for unitary maps.
Let #L:V\to V# be a unitary map on a complex finite-dimensional inner product space #V#. Then there is an orthonormal basis #\alpha# for #V# such that \(L_\alpha \) is a diagonal matrix with only complex numbers of absolute value #1# on the diagonal.
Suppose that #\lambda# is a complex eigenvalue of #L#. Note first that #|\lambda| = 1#: if #\vec{u}# is an eigenvector of #L# with eigenvalue #\lambda#, then #\dotprod{\vec{u}}{\vec{u}} = \dotprod{L(\vec{u})}{L(\vec{u})} = \lambda\cdot\overline{\lambda}\cdot\dotprod{\vec{u}}{\vec{u}}#, and since #\dotprod{\vec{u}}{\vec{u}}\ne0#, this gives #|\lambda|^2 = 1#. We show that the generalized eigenspace for #\lambda# coincides with the eigenspace #\ker{L-\lambda\,I_V}#. For this purpose, it is sufficient to show that #\ker{(L-\lambda\,I_V)^2}# is equal to #\ker{L-\lambda\,I_V}#. Let #\vec{w}# be a vector in #\ker{(L-\lambda\,I_V)^2}# and write #\vec{v} = (L-\lambda\, I_V)(\vec{w})#. Then #\vec{v}# belongs to #\ker{L-\lambda\,I_V}#, so #L(\vec{v}) = \lambda\cdot\vec{v}#.
We examine what it means for #L# to preserve the inner product of #\vec{v}# and #\vec{w}#:
\[\begin{array}{rcl}\dotprod{\vec{v}}{\vec{w}}& =& \dotprod{L(\vec{v})}{L(\vec{w})} \\&&\phantom{xxx}\color{blue}{L\text{ is unitary}}\\&=& \dotprod{(\lambda\cdot\vec{v})}{(\lambda\,\vec{w}+\vec{v})} \\&&\phantom{xxx}\color{blue}{\vec{v}\text{ lies in }\ker{L-\lambda\,I_V}\text{ and }\vec{v} = L(\vec{w})-\lambda\, \vec{w}}\\&=& {\lambda\cdot\overline\lambda}\cdot\dotprod{\vec{v}}{\vec{w}}+\lambda\cdot\dotprod{\vec{v}}{\vec{v}} \\&&\phantom{xxx}\color{blue}{\text{sesquilinearity of the inner product}}\\&=& \dotprod{\vec{v}}{\vec{w}}+\lambda\cdot\dotprod{\vec{v}}{\vec{v}} \\&&\phantom{xxx}\color{blue}{|\lambda| = 1}\end{array}\]
By subtracting #\dotprod{\vec{v}}{\vec{w}}# from both sides, we see that #\lambda\cdot\dotprod{\vec{v}}{\vec{v}}=0#. Because #|\lambda| = 1#, we have #\lambda\ne0#, so #\dotprod{\vec{v}}{\vec{v}}=0#. Because of the positive definiteness of the inner product, this implies #\vec{v} = \vec{0}#. We conclude that #\vec{w}# belongs to #\ker{L-\lambda\,I_V}#. This shows that #\ker{(L-\lambda\,I_V)^2}=\ker{L-\lambda\,I_V}#. By induction on the exponent #j#, it follows immediately that #\ker{(L-\lambda\,I_V)^j}=\ker{L-\lambda\,I_V}# for all natural numbers #j#, so the generalized eigenspace of #L# for #\lambda# is equal to its eigenspace.
According to the theorems Direct sum decomposition and Generalized eigenspaces, #V# is the direct sum of the generalized eigenspaces of #L#. Because of the above, #V# is the direct sum of the eigenspaces of #L#. The eigenspaces are perpendicular to each other: if #\lambda# and #\mu# are eigenvalues with eigenvectors #\vec{v}# and #\vec{w}#, respectively, then we have \[\dotprod{\vec{v}}{\vec{w}} = \dotprod{L(\vec{v})}{L(\vec{w})} = \dotprod{(\lambda\cdot \vec{v})}{(\mu\cdot \vec{w})}=\lambda\cdot\overline{\mu}\cdot \dotprod{\vec{v}}{\vec{w}}\] After multiplication by #\mu# (which has absolute value #1# because it is an eigenvalue of a unitary map, so #\mu\cdot\overline{\mu}=1#), this leads to \[(\lambda-\mu)\cdot (\dotprod{\vec{v}}{\vec{w}} )= 0\] Therefore, if #\lambda\ne \mu#, then #\dotprod{\vec{v}}{\vec{w}} = 0#, that is, #\vec{v}# is perpendicular to #\vec{w}#.
So, if we choose orthonormal bases for each of the eigenspaces, then #\alpha#, the concatenation of these orthonormal bases, is an orthonormal basis of #V# and the matrix #L_\alpha# is diagonal, with the eigenvalues (all of absolute value #1#) on the diagonal.
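Numerically, the theorem is easy to observe. The following sketch (again NumPy, with a randomly generated unitary matrix; for such a matrix the eigenvalues are distinct with probability #1#, so the computed eigenvectors are automatically pairwise orthogonal) checks the unimodular eigenvalues, the orthonormality of the eigenbasis, and the diagonal form:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5

# A random unitary matrix via QR.
Z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
U, _ = np.linalg.qr(Z)

eigvals, V = np.linalg.eig(U)

# All eigenvalues have absolute value 1.
print(np.allclose(np.abs(eigvals), 1))                     # True

# The eigenvectors form an orthonormal basis: V is (numerically) unitary.
print(np.allclose(V.conj().T @ V, np.eye(n)))              # True

# On this basis the map is diagonal, with the eigenvalues on the diagonal.
print(np.allclose(V.conj().T @ U @ V, np.diag(eigvals)))   # True
```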
According to the theorem, eigenvectors of #L# with different eigenvalues are perpendicular to each other. This makes it easy to find an orthonormal basis of eigenvectors.
Whereas the orthogonal Jordan normal form may still have #(2\times2)#-submatrices corresponding to rotations, the theorem tells us that the unitary Jordan (normal) form of a unitary map is always diagonal. On the diagonal are the eigenvalues, which all have absolute value #1#.
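A concrete instance of the contrast: a plane rotation is orthogonal, and for a rotation angle that is not a multiple of #\pi# its real normal form is a #(2\times2)#-rotation block; viewed as a unitary map of #\mathbb{C}^2#, however, it diagonalizes with eigenvalues #e^{\complexi\,\theta}# and #e^{-\complexi\,\theta}#. A short sketch (the angle #\theta = 0.7# is an arbitrary choice of ours):

```python
import numpy as np

theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Over the complex numbers the rotation has eigenvalues e^{i*theta} and
# e^{-i*theta}, both of absolute value 1.
eigvals, _ = np.linalg.eig(R)
print(np.allclose(np.abs(eigvals), 1))                           # True
print(np.allclose(np.sort(np.angle(eigvals)), [-theta, theta]))  # True
```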
The converse is also true: Suppose that #L:V\to V# is a linear map on a complex finite-dimensional inner product space #V#. If there is an orthonormal basis #\alpha# for #V# such that \(L_\alpha \) is a diagonal matrix with only complex numbers with absolute value #1# on the diagonal, then #L# is unitary.
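This converse, too, can be tested in the same style (the orthonormal basis and the phases below are randomly generated; the construction is our own sketch, not part of the proof): build a map that is diagonal with unimodular entries on an orthonormal basis and check that it is unitary.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4

# An orthonormal basis: the columns of a random unitary matrix Q.
Z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Q, _ = np.linalg.qr(Z)

# A diagonal matrix with entries of absolute value 1.
D = np.diag(np.exp(1j * rng.uniform(0, 2 * np.pi, n)))

# The map that acts diagonally on the basis Q.
L = Q @ D @ Q.conj().T

# L is unitary: L* L = I.
print(np.allclose(L.conj().T @ L, np.eye(n)))   # True
```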
The absolute value of the determinant of a unitary matrix is equal to #1#. Indeed, the determinant of a matrix does not depend on the basis chosen for the vector space #V# of the map #L:V\to V# determined by that matrix. Thanks to the above theorem, we may choose the basis #\alpha# such that the matrix #L_\alpha# of a unitary map #L# with respect to #\alpha# is diagonal with only numbers of absolute value #1# on the diagonal. The matrix #L_\alpha# is conjugate to the original matrix and therefore has the same determinant. The determinant of a diagonal matrix is equal to the product of the diagonal elements, so the absolute value of the determinant of #L_\alpha# equals #1#.
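A quick numerical check of this determinant property (the unitary matrix is generated as in the earlier sketches):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 6

# A random unitary matrix via QR.
Z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
U, _ = np.linalg.qr(Z)

# The determinant is a product of numbers of absolute value 1,
# so its absolute value is 1.
print(np.isclose(abs(np.linalg.det(U)), 1))     # True
```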
The statements about the classification of orthogonal maps have unitary analogues.
Let #V# be a complex inner product space of finite dimension and suppose that #L# and #M# are unitary maps #V\to V#. Then the following statements about #L# and #M# are equivalent:
- There is a unitary map #X:V\to V# such that #M = X\, L\, X^{-1}#.
- There is an invertible linear map #Y: V\to V# such that #M = Y\, L\, Y^{-1}#.
- The characteristic polynomials of #L# and #M# are equal.
- The eigenvalues of #L#, with multiplicities, are the same as those of #M#.
We prove the implications according to the scheme #1\Rightarrow 2\Rightarrow 3\Rightarrow 4\Rightarrow 1#. This suffices for the proof of all equivalences.
#1\Rightarrow 2# is trivial.
#2\Rightarrow 3# holds because it is known from The characteristic polynomial of a linear mapping that the characteristic polynomial does not depend on the choice of basis.
#3\Rightarrow 4# Statements 3 and 4 are equivalent because the characteristic polynomial is the product of the linear factors #x-\lambda#, where #\lambda# runs over all eigenvalues (with the appropriate multiplicities).
#4\Rightarrow 1# Suppose that statement 4 holds. From the above theorem Unitary maps are diagonalizable, it follows that there are orthonormal bases #\alpha# and #\beta# such that #L_\alpha# and #M_\beta# are diagonal matrices. These diagonal matrices for #L# and #M# have the same diagonal entries, namely the eigenvalues. After reordering the vectors of one of the bases if necessary, we may assume that #L_\alpha = M_\beta#, so #\alpha \,L\,\alpha^{-1}=\beta \,M\,\beta^{-1}#. Put #X =\beta^{-1} \alpha#. Because #\alpha# and #\beta# are orthonormal bases, the corresponding coordinate maps are unitary. Because of Properties of isometries, #X# is also unitary. Finally, we have
\[X\, L\, X^{-1} = \beta^{-1} \alpha \, L\, (\beta^{-1} \alpha)^{-1}= \beta^{-1} \alpha \, L\, \alpha^{-1} \beta = \beta^{-1}\beta \,M\,\beta^{-1} \beta = M\] Thus statement 1 is derived from statement 4.
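The construction in the step #4\Rightarrow 1# can be replayed numerically. In the sketch below (the shared eigenvalues and the two orthonormal eigenbases are randomly generated, a convenience of ours), the matrices QL and QM contain the eigenbases as columns, so they play the roles of #\alpha^{-1}# and #\beta^{-1}#, and the conjugating map #X = \beta^{-1}\alpha# becomes the product of QM with the inverse of QL:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4

# Shared eigenvalues: a diagonal matrix D with unimodular entries.
D = np.diag(np.exp(1j * rng.uniform(0, 2 * np.pi, n)))

# Two unitary maps with the same eigenvalues, built from two different
# orthonormal eigenbases (the columns of QL and QM).
QL, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
QM, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
L = QL @ D @ QL.conj().T
M = QM @ D @ QM.conj().T

# The conjugating map from the proof: X = QM QL^{-1} (note QL^{-1} = QL*).
X = QM @ QL.conj().T
print(np.allclose(X.conj().T @ X, np.eye(n)))   # X is unitary: True
print(np.allclose(X @ L @ X.conj().T, M))       # M = X L X^{-1}: True
```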
We say that two square matrices #A# and #B# are unitarily conjugate if there is a unitary matrix #T# with #A = T\, B \, T^{-1}#. This defines an equivalence relation. The proof of transitivity is similar to the proof of the implication #4\Rightarrow 1# of the theorem.
The matrix \[A = \matrix{{{\complexi-1}\over{2}} & {{\complexi+1}\over{2}} \\ {{\complexi+1}\over{2}} & {{\complexi-1}\over{2}} \\ }\] is unitary. Give a unitary matrix #T# such that \( T^{-1} \, A \, T\) is diagonal.
#T = \matrix{-{{1}\over{\sqrt{2}}} & -{{1}\over{\sqrt{2}}} \\ -{{1}\over{\sqrt{2}}} & {{1}\over{\sqrt{2}}} \\ }#
The characteristic polynomial of #A# is \[ p_A(x) = \det(A-x\, I_2) = \left(x+1\right)\cdot \left(x-\complexi\right)\] Therefore, the eigenvalues of #A# are #\complexi# and #-1#. Because #A# is unitary, it is diagonalizable, so #A# is conjugate to \[ D =\matrix{\complexi&0\\ 0& -1}\] In order to find the requested unitary matrix #T#, we first determine a basis of #\mathbb{C}^2# consisting of eigenvectors of #A#.
The eigenspace of #A# corresponding to #\complexi# is found by calculating the nullspace of #A- \complexi\cdot I_2#. A spanning vector is #\rv{ -1 , -1 }#. Likewise, the eigenspace of #A# corresponding to #-1# is found by calculating the nullspace of #A+1\cdot I_2#. A spanning vector is #\rv{ -1 , 1 }#.
The two spanning vectors of the eigenspaces found are perpendicular to each other. We find an orthonormal basis #\alpha# of #\mathbb{C}^2# by dividing these two vectors by their lengths:
\[\begin{array}{rclcl}\vec{a}_1 &=& \dfrac{1}{\sqrt{2}}\cdot \rv{ -1 , -1 } &=&\displaystyle \rv{ -{{1}\over{\sqrt{2}}} , -{{1}\over{\sqrt{2}}} } \\ \vec{a}_2 &=& \dfrac{1}{\sqrt{2}} \cdot \rv{ -1 , 1 } &=& \displaystyle \rv{ -{{1}\over{\sqrt{2}}} , {{1}\over{\sqrt{2}}} }
\end{array}\] Thus, the basis #\alpha=\basis{\vec{a}_1, \vec{a}_2}# is given by the following matrix #T# whose columns are the two vectors #\vec{a}_1# and # \vec{a}_2#: \[T = \matrix{-{{1}\over{\sqrt{2}}} & -{{1}\over{\sqrt{2}}} \\ -{{1}\over{\sqrt{2}}} & {{1}\over{\sqrt{2}}} \\ } \]The answer is not unique: the columns of #T# may be swapped because the order of the diagonal elements in #D# is irrelevant; furthermore, the columns of #T# may be multiplied by #-1# independently of each other because the signs of the basis vectors in #\alpha# are irrelevant.
The answer can be verified by computing #T^{-1}\, A\, T# and checking that it equals the diagonal matrix #D#. We carry this check out for the answer \( T = \matrix{-{{1}\over{\sqrt{2}}} & -{{1}\over{\sqrt{2}}} \\ -{{1}\over{\sqrt{2}}} & {{1}\over{\sqrt{2}}} \\ }\):
\[\begin{array}{rcl}T^{-1}\, A\, T &=& {\matrix{-{{1}\over{\sqrt{2}}} & -{{1}\over{\sqrt{2}}} \\ -{{1}\over{\sqrt{2}}} & {{1}\over{\sqrt{2}}} \\ }}^{-1}\, \matrix{{{\complexi-1}\over{2}} & {{\complexi+1}\over{2}} \\ {{\complexi+1}\over{2}} & {{\complexi-1}\over{2}} \\ } \, \matrix{-{{1}\over{\sqrt{2}}} & -{{1}\over{\sqrt{2}}} \\ -{{1}\over{\sqrt{2}}} & {{1}\over{\sqrt{2}}} \\ }\\
&&\phantom{xxx}\color{blue}{\text{matrices substituted}}\\
&=& \matrix{-{{1}\over{\sqrt{2}}} & -{{1}\over{\sqrt{2}}} \\ -{{1}\over{\sqrt{2}}} & {{1}\over{\sqrt{2}}} \\ }\, \matrix{-{{\complexi}\over{\sqrt{2}}} & {{1}\over{\sqrt{2}}} \\ -{{\complexi}\over{\sqrt{2}}} & -{{1}\over{\sqrt{2}}} \\ } \\ &&\phantom{xxx}\color{blue}{\text{inverse determined and }A\, T\text{ computed}}\\
&=& \matrix{\complexi & 0 \\ 0 & -1 \\ }
\end{array}\]
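The same verification can be done mechanically; a short NumPy sketch for this example (using that #T^{-1} = T^*# because #T# is unitary):

```python
import numpy as np

A = np.array([[(1j - 1) / 2, (1j + 1) / 2],
              [(1j + 1) / 2, (1j - 1) / 2]])
s = 1 / np.sqrt(2)
T = np.array([[-s, -s],
              [-s,  s]])

# T is unitary (here even real orthogonal), and conjugating A by T
# yields the diagonal matrix D with the eigenvalues i and -1.
print(np.allclose(T.conj().T @ T, np.eye(2)))              # True
print(np.allclose(T.conj().T @ A @ T, np.diag([1j, -1])))  # True
```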