If #A# and #B# are both #(n\times n)#-matrices, then
\[\begin{array}{lrcl}\text{determinant of the transpose: }& \det (A)&=&\det (A^\top)\\\text{determinant of the product: } &\det (A\ B)&=&\det (A)\cdot\det (B)\end{array} \]
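As an illustrative check of both laws (the matrices here are chosen arbitrarily), take \[A = \matrix{1&2\\3&4},\qquad B = \matrix{5&6\\7&8}\] Then #\det(A)=-2# and #\det(B)=-2#, and indeed \[\det(A^\top) = \det\matrix{1&3\\2&4} = -2 = \det(A),\qquad \det(A\,B) = \det\matrix{19&22\\43&50} = 950-946 = 4 = \det(A)\cdot\det(B)\]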
Determinant of the transpose: In the derivation below, we use the following two facts:
- A permutation #\sigma# has the same sign as its inverse #\sigma^{-1}# (for, if #\sigma# is the product of a sequence of transpositions, then #\sigma^{-1}# is the product of these transpositions in the reverse order).
- If #\sigma# runs over all permutations of #\{1,\ldots,n\}#, then so does #\sigma^{-1}# (for, the map on the finite set of permutations of #\{1,\ldots,n\}# sending a permutation to its inverse is bijective, since it is its own inverse).
\[\begin{array}{rcl}\det(A^{\top}) &=&\sum_{\sigma}\text{sg}(\sigma)\cdot a_{\sigma(1)1}\cdots a_{\sigma(n)n}\\&&\phantom{xx}\color{blue}{\text{definition of }\det\text{; the }(i,j)\text{-entry of }A^\top\text{ is }a_{ji}}\\ &=&\sum_{\sigma}\text{sg}(\sigma)\cdot a_{1\sigma^{-1}(1)}\cdots a_{n\sigma^{-1}(n)}\\ &&\phantom{xx}\color{blue}{\text{the same product of }a_{ij}\text{ in a different order}}\\&=&\sum_{\sigma}\text{sg}(\sigma^{-1})\cdot a_{1\sigma(1)}\cdots a_{n\sigma(n)}\\ &&\phantom{xx}\color{blue}{\sigma \text{ in each term replaced by }\sigma^{-1}\text{; by the second fact, }\sigma^{-1}\text{ also runs over all permutations}}\\ &=&\sum_{\sigma}\text{sg}(\sigma)\cdot a_{1\sigma(1)}\cdots a_{n\sigma(n)}\\ &&\phantom{xx}\color{blue}{\text{sg}(\sigma^{-1}) =\text{sg}(\sigma)}\\ &=& \det(A) \\ &&\phantom{xx}\color{blue}{\text{definition of }\det}\end{array}\]
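To see the reindexing in the second step concretely, take #n=3# and the permutation #\sigma# given by #\sigma(1)=2#, #\sigma(2)=3#, #\sigma(3)=1#, so that #\sigma^{-1}(1)=3#, #\sigma^{-1}(2)=1#, #\sigma^{-1}(3)=2#. Sorting the factors by their first index gives \[a_{\sigma(1)1}\cdot a_{\sigma(2)2}\cdot a_{\sigma(3)3} = a_{21}\cdot a_{32}\cdot a_{13} = a_{13}\cdot a_{21}\cdot a_{32} = a_{1\sigma^{-1}(1)}\cdot a_{2\sigma^{-1}(2)}\cdot a_{3\sigma^{-1}(3)}\]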
Determinant of product: Consider the function #E# of #n# column vectors from #\mathbb{R}^n# defined by \[E(\vec{b}_1,\ldots,\vec{b}_n) = \det\left(A\, B\right)\]where #A# is a fixed #(n\times n)#-matrix and #B# is the #(n\times n)#-matrix whose columns are #\vec{b}_1,\ldots,\vec{b}_n#. We verify that #E(\vec{b}_1,\ldots,\vec{b}_n)# satisfies the first two requirements for a determinantal function.
- Multilinearity: The #j#-th column of #A\,B# is #A\,\vec{b}_j#. Since the map #\vec{x}\mapsto A\,\vec{x}# is linear and #\det# is linear in each column, #E# is linear in each argument.
- Antisymmetry: Interchanging two vectors among the arguments interchanges the corresponding columns of #A\,B#, which changes the value of the determinant to its negative, and therefore also the value of #E(\vec{b}_1,\ldots,\vec{b}_n)#.
As a consequence, according to the Characterization of the determinant, we have #E(\vec{b}_1,\ldots,\vec{b}_n) = E(\vec{e}_1,\ldots,\vec{e}_n)\cdot \det(B)# from which it follows that:\[\begin{array}{rcll}\det(A\, B )&=&E(\vec{b}_1,\ldots,\vec{b}_n)&\color{blue}{\text{function rule for }E(B)}\\&=&E(\vec{e}_1,\ldots,\vec{e}_n)\cdot \det(B)&\color{blue}{E\text{ is multilinear and antisymmetric}}\\&=&\det(A\, I)\cdot\det(B)&\color{blue}{\text{function rule for }E(I)}\\&=&\det(A)\cdot\det(B)&\color{blue}{A\, I=A}\end{array}\]
Previously we saw that #A# and #B# do not always commute, that is, #A\,B =B\, A# does not always hold (if #n\gt 1#). But #\det(A\, B) = \det(B\, A)# is always true because #\det(A)\cdot\det(B) = \det(B)\cdot\det(A)#.
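For instance, with \[A = \matrix{1&1\\0&1},\qquad B = \matrix{1&0\\1&1}\] we have \[A\,B = \matrix{2&1\\1&1}\ \ne\ \matrix{1&1\\1&2} = B\,A,\qquad\text{yet}\qquad \det(A\,B) = 1 = \det(B\,A)\]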
A special consequence of the above product formula is a criterion for invertibility of the matrix:
A square matrix #A# is invertible if and only if the determinant of #A# is different from #0#.
In this case, #\det(A^{-1}) = \frac{1}{\det(A)}#.
If #A# cannot be inverted, then, according to Invertibility criteria for a linear mapping, the columns of #A# are linearly dependent. From the theorem Determinantal functions vanish on dependent vectors, it follows that the determinant of #A# must then be equal to #0#.
Conversely, if #A# is invertible, then there is a matrix #A^{-1}# of the same dimensions as #A# with #A\, A^{-1} = I#. According to the above product formula for determinants of matrices, it follows that #\det(A)\cdot \det(A^{-1}) =1#. In particular, we then have #\det(A)\ne0# and #\det(A^{-1}) = \frac{1}{\det(A)}#.
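For example, the matrix #A = \matrix{3&1\\1&1}# has #\det(A) = 2\ne0#, so it is invertible, and indeed \[\det\left(A^{-1}\right) = \det\matrix{\frac{1}{2}&-\frac{1}{2}\\ -\frac{1}{2}&\frac{3}{2}} = \frac{3}{4}-\frac{1}{4} = \frac{1}{2} = \frac{1}{\det(A)}\]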
According to the above theorem and Invertibility and rank, an #(n\times n)#-matrix #A# has rank less than #n# if and only if #\det(A) = 0#.
More generally, the rank of a non-zero matrix #A# is the greatest natural number #r# for which the determinant of an #(r\times r)#-submatrix of #A# is not equal to #0#.
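For example, the matrix \[A = \matrix{1&2&3\\2&4&6\\1&1&1}\] satisfies #\det(A) = 0# because its second row is twice its first, so its rank is less than #3#; on the other hand, the #(2\times2)#-submatrix formed by rows #1,3# and columns #1,2# has determinant #\det\matrix{1&2\\1&1} = -1\ne0#, so the rank of #A# is #2#.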
We discuss a few special cases.
- The determinant of a square matrix of the form \[M = \matrix{A&C\\ 0&B}\] where #A# and #B# are square submatrices, and #C# is an arbitrary matrix of appropriate dimensions, is equal to the product of the determinants of the two submatrices along the diagonal: \[\det(M) = \det(A)\cdot\det(B)\]
- The determinant of a square matrix of the form \[M = \matrix{a_{11}&a_{12}&\cdots&\cdots&a_{1n}\\ 0&a_{22}&\cdots&\cdots&a_{2n}\\ \vdots&\ddots&\ddots&&\vdots\\ 0&\cdots&0&a_{(n-1)(n-1)}&a_{(n-1)n}\\ 0&\cdots&\cdots&0&a_{nn}}\] is equal to the product of the diagonal entries: \[\det(M) = a_{11}\cdot a_{22}\,\cdots\, a_{nn}\]
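For example, the first statement gives \[\det\matrix{1&2&5\\3&4&6\\0&0&7} = \det\matrix{1&2\\3&4}\cdot\det\matrix{7} = (-2)\cdot 7 = -14\]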
The first statement follows from the sum formula:
\[\det(M) =\sum_{\sigma}\text{sg}(\sigma)\cdot m_{1\sigma(1)}\cdots m_{n\sigma(n)}
\] where #M# is the #(n\times n)#-matrix with #(i,j)#-entry #m_{ij}#. Suppose that #A# has dimensions #k\times k#, so #B# has dimensions #(n-k)\times(n-k)#. Suppose that #\sigma# is a permutation of #\{1,\ldots,n\}# for which the term \[\text{sg}(\sigma)\cdot m_{1\sigma(1)}\cdots m_{n\sigma(n)}\] contains an entry from the matrix #C#. Then there is an index #i\le k# with #\sigma(i)\gt k#. Since #\sigma# is a bijection, the values #\sigma(1),\ldots,\sigma(k)# then do not fill all of #\{1,\ldots,k\}#, so there is an index #j\gt k# with #\sigma(j)\le k#. The entry #m_{j\sigma(j)}# lies in the zero block below #A#, so #m_{j\sigma(j)} = 0# and the value of the term is #0#. We conclude that the terms having an entry from #C# do not contribute to the sum formula for #\det(M)#.
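For instance, take #n=3# and #k=1#, and let #\sigma# be the permutation with #\sigma(1)=2#, #\sigma(2)=1#, #\sigma(3)=3#. Its term is #\text{sg}(\sigma)\cdot m_{12}\cdot m_{21}\cdot m_{33}#; the factor #m_{12}# comes from #C#, and indeed the factor #m_{21}# lies in the zero block, so the term vanishes.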
Accordingly,
\[\begin{array}{rcl}\det(M) &=&\sum_{\sigma :\ \sigma\left(\{1,\ldots,k\}\right)=\{1,\ldots,k\}}\text{sg}(\sigma)\cdot m_{1\sigma(1)}\cdots m_{n\sigma(n)}\\&=&\sum_{\sigma,\tau}\text{sg}(\sigma\,\tau)\cdot m_{1\sigma(1)}\cdots m_{k\sigma(k)}\cdot m_{(k+1)(k+\tau(1))}\cdots m_{n(k+\tau(n-k))}\\&&\phantom{xxx}\color{blue}{\sigma\text{ a permutation of }\{1,\ldots,k\},\ \tau\text{ a permutation of }\{1,\ldots,n-k\},\text{ and }\sigma\,\tau\text{ the permutation of }\{1,\ldots,n\}\text{ acting as }\sigma\text{ on }\{1,\ldots,k\}\text{ and as }\tau\text{ on the remaining indices}}\\&=&\sum_{\sigma,\tau}\text{sg}(\sigma)\cdot \text{sg}(\tau)\cdot m_{1\sigma(1)}\cdots m_{k\sigma(k)}\cdot m_{(k+1)(k+\tau(1))}\cdots m_{n(k+\tau(n-k))}\\&&\phantom{xxx}\color{blue}{\text{sg}(\sigma\,\tau) =\text{sg}(\sigma)\cdot \text{sg}(\tau)}\\ &=&\left(\sum_{\sigma}\text{sg}(\sigma)\cdot m_{1\sigma(1)}\cdots m_{k\sigma(k)}\right)\cdot\left(\sum_{\tau}\text{sg}(\tau)\cdot m_{(k+1)(k+\tau(1))}\cdots m_{n(k+\tau(n-k))}\right)\\&&\phantom{xxx}\color{blue}{\text{rearranging the double sum}}\\&=&\det(A)\cdot\det(B)
\end{array}\]
The second statement follows from the first by repeated application with #A# a #(1\times1)#-matrix. Let #n# be the number of columns of #M#. If #n = 1#, then the statement we need to prove is true: #\det(M) = a_{11}#. If #n\gt 1# then, by induction,
\[\begin{array}{rcl}\det(M) &=&a_{11}\cdot \det \matrix{a_{22}&\cdots&\cdots&a_{2n}\\ 0&\ddots&&\vdots\\ \vdots&&a_{(n-1)(n-1)}&a_{(n-1)n}\\ 0&\cdots&0&a_{nn}}\\ &&\phantom{xxx}\color{blue}{\text{statement 1 with }A=\matrix{a_{11}}}\\ &=&a_{11}\cdot a_{22}\cdots a_{nn}\\ &&\phantom{xxx}\color{blue}{\text{induction hypothesis}}\end{array}\]
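For example, two applications of the first statement give \[\det\matrix{2&5&1\\0&3&4\\0&0&6} = 2\cdot\det\matrix{3&4\\0&6} = 2\cdot 3\cdot\det\matrix{6} = 2\cdot3\cdot6 = 36\]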
Both cases are instances of a more general theorem, which says that the determinant of a square matrix of the form \[ \matrix{A_1&*&*&*&*\\ 0&A_2&*&*&*\\0&0&\ddots&\ddots&\vdots\\ \vdots&\vdots&\ddots&\ddots&*\\ 0&0&\cdots&0&A_m}\]
where the #A_i# are square matrices and each #*# stands for an arbitrary submatrix of suitable size, is equal to \[\det(A_1)\cdot \det(A_2)\cdots \det(A_m)\]
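For instance, with #m=3# diagonal blocks of sizes #2#, #1#, and #1#: \[\det\matrix{1&2&3&4\\5&6&7&8\\0&0&2&1\\0&0&0&3} = \det\matrix{1&2\\5&6}\cdot\det\matrix{2}\cdot\det\matrix{3} = (-4)\cdot2\cdot3 = -24\]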
As we saw before, a matrix of the form described in the second statement is called an upper triangular matrix.
Later we will see how these laws help to compute the determinant of a matrix efficiently.
For what value of #a# is the determinant of the matrix #A# below equal to #-390#?
\[ A = \matrix{2 a & -3 a & 4 a \\ 4 & -5 & 2 \\ 8 & 1 & 3 \\ }\]
#a=-3#
The variable #a# occurs as a factor in each entry of the first row, so
\[A = \matrix{a&0&0\\ 0&1&0\\ 0&0&1}\, \matrix{2 & -3 & 4 \\ 4 & -5 & 2 \\ 8 & 1 & 3 \\ }\] The determinant of the first matrix on the right-hand side is #a# and the determinant of the second is #130#. Consequently, the
product formula for the determinant gives \[-390= a \cdot 130\] from which it immediately follows that #a = -3#.