Previously we saw that an $(n\times n)$-matrix $A$ of rank $n$ has some nice properties: according to theorem Invertibility and rank the inverse $A^{-1}$ of $A$ exists, and according to Unique solution and maximal rank each system of linear equations with coefficient matrix $A$ has exactly one solution.
We also discussed that determining the rank of a matrix is possible by reducing rows and columns to the reduced echelon form and counting the number of independent rows (or columns) of the result. We will however also be able to see whether the rank is $n$ or not by calculating the so-called determinant of the $(n\times n)$-matrix. The calculation of this number is fairly simple, but the underlying theory is more complicated.
The determinant of a $(2\times 2)$-matrix
$$A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$$
is the number $ad - bc$.
The usual notation for the determinant of $A$ is $\det(A)$ or $|A|$.
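As a small computational illustration of the definition above, here is a minimal sketch in Python; the function name `det2` is our own choice and not part of the text.

```python
# Minimal sketch: the determinant of a (2x2)-matrix [[a, b], [c, d]]
# is the number a*d - b*c, as defined above.
def det2(A):
    (a, b), (c, d) = A
    return a * d - b * c

print(det2([[1, 2], [3, 4]]))  # 1*4 - 2*3 = -2
```

Representing the matrix as a nested list keeps the example free of external libraries.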
The linear map with matrix $A$ is invertible if and only if $\det(A) \neq 0$.
In this case, the inverse of $A$ is equal to
$$A^{-1} = \frac{1}{ad - bc}\begin{pmatrix} d & -b \\ -c & a \end{pmatrix}.$$
This theorem, including its proof, we saw earlier in The inverse of a matrix. Here we give an alternative proof by verifying that the rank of $A$ is smaller than $2$ if and only if $\det(A) = 0$.
Suppose the rank of $A$ is smaller than $2$. Then, at least one of the two rows of $A$ is dependent on the other. Suppose the second row is a multiple of the first, say $(c, d) = \lambda (a, b)$. Then
$$\det(A) = ad - bc = a \lambda b - b \lambda a = 0.$$
The other case (in which the first row is dependent on the second) can be proven similarly.
Conversely, suppose $\det(A) = ad - bc = 0$. If one of the two rows is equal to the zero vector, then this row is dependent on the other, so the rank of $A$ is smaller than $2$. Now we assume that both rows are distinct from the zero vector. If $a \neq 0$, then it follows from $ad - bc = 0$ that $d = \frac{bc}{a}$ and thus $(c, d) = \frac{c}{a}(a, b)$, so the second row is dependent on the first one. If $a = 0$, then $b \neq 0$ and it follows from $bc = 0$ that $c = 0$, so $(c, d) = \frac{d}{b}(a, b)$, and we see again that the second row is dependent on the first one.
Finally, it can easily be verified that
$$\begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} d & -b \\ -c & a \end{pmatrix} = (ad - bc) \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix},$$
from which it follows that, if $ad - bc \neq 0$, the matrix $\frac{1}{ad - bc}\begin{pmatrix} d & -b \\ -c & a \end{pmatrix}$ is the inverse of $A$.
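The verification of this matrix identity can also be sketched numerically; the helper names `mat2_mul` and `adjugate` below are our own.

```python
def mat2_mul(A, B):
    # product of two (2x2)-matrices given as nested lists
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def adjugate(A):
    # the matrix [[d, -b], [-c, a]] appearing in the identity above
    (a, b), (c, d) = A
    return [[d, -b], [-c, a]]

A = [[2, 1], [5, 3]]             # det = 2*3 - 1*5 = 1
print(mat2_mul(A, adjugate(A)))  # [[1, 0], [0, 1]], i.e. (ad - bc) times the identity
```

Since this example has determinant $1$, the product is exactly the identity matrix, so here the adjugate itself is the inverse.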
The number $ad - bc$ also plays a role in solving a system of two linear equations in two unknowns: if $A_1$ is the matrix obtained from $A$ by replacing the first column by $\begin{pmatrix} p \\ q \end{pmatrix}$, and $A_2$ is the matrix obtained from $A$ by replacing the second column by $\begin{pmatrix} p \\ q \end{pmatrix}$, then the unique solution of the system
$$\begin{aligned} a x + b y &= p \\ c x + d y &= q \end{aligned}$$
is equal to
$$x = \frac{\det(A_1)}{\det(A)}, \qquad y = \frac{\det(A_2)}{\det(A)},$$
provided $\det(A) \neq 0$. You can verify this by calculation, but a nice proof valid for all dimensions will be given later. If $A$ is a $(3\times 3)$-matrix with $\det(A) \neq 0$, then the corresponding system in three unknowns has a similar solution for $x$, $y$, and $z$ in terms of the determinants of $A$, $A_1$, $A_2$, and $A_3$.
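The rule above can be sketched in a few lines; the function name `cramer2` is our own, and exact rational arithmetic via `fractions.Fraction` is a choice of convenience.

```python
from fractions import Fraction

def cramer2(a, b, c, d, p, q):
    # Solve a*x + b*y = p, c*x + d*y = q via determinants,
    # assuming det = a*d - b*c is nonzero.
    det = a * d - b * c
    x = Fraction(p * d - b * q, det)  # det of A with first column replaced by (p, q)
    y = Fraction(a * q - p * c, det)  # det of A with second column replaced by (p, q)
    return x, y

x, y = cramer2(2, 1, 1, 3, 5, 5)
print(x, y)  # 2 1: indeed 2*2 + 1 = 5 and 2 + 3*1 = 5
```

Substituting the solution back into both equations confirms the result.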
A third topic where $ad - bc$ plays a role is the computation of surface area. We have not defined area, nor discussed the subtleties that arise when coordinates are being used. What we observe here will only serve as an illustration. The oriented surface area of the parallelogram defined by the vectors $(a, b)$ and $(c, d)$ is equal to $ad - bc$. It is called oriented, because the outcome can also be negative, depending on the order in which we write down the vectors. The 'real' surface area is obtained by taking the absolute value.
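A short numeric illustration of the sign behaviour of the oriented area; the function name `oriented_area` is our own.

```python
def oriented_area(u, v):
    # Oriented area of the parallelogram spanned by u = (a, b) and v = (c, d):
    # the number a*d - b*c from the text.
    return u[0] * v[1] - u[1] * v[0]

u, v = (3, 0), (1, 2)
print(oriented_area(u, v))       # 6
print(oriented_area(v, u))       # -6: swapping the vectors flips the sign
print(abs(oriented_area(v, u)))  # 6, the 'real' surface area
```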
The expression $ad - bc$ is not linear in the entries $a$, $b$, $c$, $d$, and thus appears to fall outside the scope of linear algebra. Fortunately, the expression is multilinear: it is linear in each row separately. Certain properties of the $(2\times 2)$-determinant generalize to $(n\times n)$-determinants.
Regard the determinant as a function $D$ of the two rows of the matrix: $D(\vec{u}, \vec{v}) = ad - bc$ for $\vec{u} = (a, b)$ and $\vec{v} = (c, d)$. Then $D$ satisfies the following three properties (for all choices of vectors $\vec{u}$, $\vec{v}$, $\vec{w}$ and scalars $\lambda$, $\mu$):
- bilinearity (linearity in both arguments): $D(\lambda \vec{u} + \mu \vec{w}, \vec{v}) = \lambda D(\vec{u}, \vec{v}) + \mu D(\vec{w}, \vec{v})$ and $D(\vec{u}, \lambda \vec{v} + \mu \vec{w}) = \lambda D(\vec{u}, \vec{v}) + \mu D(\vec{u}, \vec{w})$;
- antisymmetry: $D(\vec{v}, \vec{u}) = -D(\vec{u}, \vec{v})$ (swapping two vectors produces a minus sign); as a consequence, if the two vectors are identical to each other, then $D(\vec{u}, \vec{u}) = 0$;
- normalization: $D(\vec{e}_1, \vec{e}_2) = 1$, where $\vec{e}_1 = (1, 0)$ and $\vec{e}_2 = (0, 1)$.
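The three properties can be spot-checked numerically on sample vectors; this sketch only tests particular instances, it is not a proof.

```python
def D(u, v):
    # the 2x2 determinant as a function of the two rows
    return u[0] * v[1] - u[1] * v[0]

u, v, w = (1, 2), (3, 5), (-2, 7)
lam, mu = 4, -3

# linearity in the first argument
lin = (lam * u[0] + mu * w[0], lam * u[1] + mu * w[1])
assert D(lin, v) == lam * D(u, v) + mu * D(w, v)
# antisymmetry, and its consequence for identical vectors
assert D(v, u) == -D(u, v)
assert D(u, u) == 0
# normalization
assert D((1, 0), (0, 1)) == 1
print("all three properties hold for these sample vectors")
```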
The determinant is unique in the sense that, if a function $E$ of pairs of vectors has the above three properties of $D$, then it is the determinant function: $E = D$.
These properties are easy to verify. For example, the second property follows from
$$D(\vec{v}, \vec{u}) = cb - da = -(ad - bc) = -D(\vec{u}, \vec{v}).$$
Uniqueness: Suppose that $E$ is a function of pairs of vectors which has the aforementioned three properties of $D$. Then we have $\vec{u} = a\vec{e}_1 + b\vec{e}_2$ and $\vec{v} = c\vec{e}_1 + d\vec{e}_2$. By using the bilinearity, we find:
$$\begin{aligned} E(\vec{u}, \vec{v}) &= E(a\vec{e}_1 + b\vec{e}_2,\ c\vec{e}_1 + d\vec{e}_2) \\ &= ac\, E(\vec{e}_1, \vec{e}_1) + ad\, E(\vec{e}_1, \vec{e}_2) + bc\, E(\vec{e}_2, \vec{e}_1) + bd\, E(\vec{e}_2, \vec{e}_2) \\ &= ad\, E(\vec{e}_1, \vec{e}_2) - bc\, E(\vec{e}_1, \vec{e}_2) \\ &= ad - bc = D(\vec{u}, \vec{v}). \end{aligned}$$
It may seem overkill to describe a simple expression like $ad - bc$ with these properties, but in higher dimensions such a definition will be essential, because explicit formulas are unmanageable for calculations.
The reason for working out the calculations in this simple situation is that the higher-dimensional case involves the same kind of calculation; there we will refrain from elaborating all details.
Note that the minus sign in the uniqueness calculation is caused by a change in the order of the vectors in the second-to-last transition.