The determinant of an $(n\times n)$-matrix is a number. This number depends on the matrix, and in particular on the rows of the matrix. The dependence on the rows is the basis for the definition of a so-called determinantal function.
A determinantal function on $\mathbb{R}^n$ is a function $D$ that assigns a number $D(\vec{a}_1,\ldots ,\vec{a}_n)$ to each $n$-tuple of vectors $\vec{a}_1,\ldots ,\vec{a}_n$ such that the following properties are satisfied:
- Multilinearity: for $i=1,\ldots ,n$ and all scalars $\lambda$, $\mu$ we have
$$D(\vec{a}_1,\ldots ,\lambda\vec{a}_i+\mu\vec{b}_i,\ldots ,\vec{a}_n)=\lambda\, D(\vec{a}_1,\ldots ,\vec{a}_i,\ldots ,\vec{a}_n)+\mu\, D(\vec{a}_1,\ldots ,\vec{b}_i,\ldots ,\vec{a}_n).$$
- Antisymmetry: when two of the arguments are interchanged, the value of the determinantal function turns into its negative.
- Normalization: $D(\vec{e}_1,\ldots ,\vec{e}_n)=1$, where $\vec{e}_1,\ldots ,\vec{e}_n$ are the standard basis vectors of $\mathbb{R}^n$.
The multilinearity says that, for each $i$, the function $D$ is linear in the $i$-th argument (while all other arguments are kept constant). In other words, the multilinearity states that $D$ is linear in each argument.
The above definition depends on $n$. Moreover, it is not apparent from the definition that, for each $n$, there is indeed a determinantal function. This is the case, as will be shown below.
The antisymmetry may also be formulated as $D(\vec{a}_1,\ldots ,\vec{a}_n)=0$ when two arguments are equal (that is, when $\vec{a}_i=\vec{a}_j$ for mutually different $i$ and $j$). See Determinantal functions disappear on dependent vectors below.
Even without normalization, multilinear antisymmetric functions can be characterized, as will become clear from the last part of the theorem Characterization of the determinant below.
The function $D$ can also be seen as a function having as an argument the $(n\times n)$-matrix whose $i$-th row is $\vec{a}_i$.
Let $D(\vec{a},\vec{b})=a_1b_2-a_2b_1$ for $\vec{a}=(a_1,a_2)$ and $\vec{b}=(b_1,b_2)$ in $\mathbb{R}^2$. Then $D$ is a determinantal function on $\mathbb{R}^2$:
- Multilinearity: in each term of the sum in the definition, a coordinate of each argument occurs exactly once, so $D$ is linear in each argument.
- Antisymmetry: $D(\vec{b},\vec{a})=b_1a_2-b_2a_1=-(a_1b_2-a_2b_1)=-D(\vec{a},\vec{b})$.
- Normalization: $D(\vec{e}_1,\vec{e}_2)=1\cdot 1-0\cdot 0=1$.
If we write $\vec{a}=(a_{11},a_{12})$ and $\vec{b}=(a_{21},a_{22})$, it becomes clear that this definition corresponds to the determinant $a_{11}a_{22}-a_{12}a_{21}$ of the matrix $\begin{pmatrix} a_{11}&a_{12}\\ a_{21}&a_{22}\end{pmatrix}$ from an earlier definition.
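The three defining properties of this $2\times 2$ function can be spot-checked numerically; here is a minimal Python sketch (the names `D`, `lam`, and `mu` are our own choices, not part of the text):

```python
# Sketch: spot-check the 2x2 determinantal function D(a, b) = a1*b2 - a2*b1.

def D(a, b):
    return a[0] * b[1] - a[1] * b[0]

a, b, c = (3.0, 1.0), (2.0, 5.0), (-1.0, 4.0)
lam, mu = 2.0, -3.0

# Multilinearity, checked in the first argument:
lhs = D((lam * a[0] + mu * c[0], lam * a[1] + mu * c[1]), b)
rhs = lam * D(a, b) + mu * D(c, b)
assert lhs == rhs

# Antisymmetry and normalization:
assert D(b, a) == -D(a, b)
assert D((1, 0), (0, 1)) == 1
```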
Let $D(\vec{a},\vec{b},\vec{c})=(\vec{a}\times\vec{b})\cdot\vec{c}$ for $\vec{a}$, $\vec{b}$, and $\vec{c}$ in $\mathbb{R}^3$, where $\cdot$ represents the inner product and $\times$ the cross product. Then $D$ is a determinantal function on $\mathbb{R}^3$:
- Multilinearity: each of the vectors $\vec{a}$, $\vec{b}$, $\vec{c}$ occurs exactly once, and both the dot product and the cross product are linear in each of their arguments.
- Antisymmetry: we show that if two of the three arguments are equal, then $D=0$ follows. The cross product is antisymmetric, so $D(\vec{a},\vec{a},\vec{c})=(\vec{a}\times\vec{a})\cdot\vec{c}=\vec{0}\cdot\vec{c}=0$. Because $\vec{a}\times\vec{b}$ is perpendicular to both $\vec{a}$ and $\vec{b}$, we have both $D(\vec{a},\vec{b},\vec{a})=(\vec{a}\times\vec{b})\cdot\vec{a}=0$ and $D(\vec{a},\vec{b},\vec{b})=(\vec{a}\times\vec{b})\cdot\vec{b}=0$.
- Normalization: $D(\vec{e}_1,\vec{e}_2,\vec{e}_3)=(\vec{e}_1\times\vec{e}_2)\cdot\vec{e}_3=\vec{e}_3\cdot\vec{e}_3=1$.
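The same checks can be run for the scalar triple product; below is a minimal sketch with hand-coded `cross` and `dot` helpers (all names are our own):

```python
# Sketch: the scalar triple product D(a, b, c) = (a x b) . c on R^3.

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def D(a, b, c):
    return dot(cross(a, b), c)

e1, e2, e3 = (1, 0, 0), (0, 1, 0), (0, 0, 1)
assert D(e1, e2, e3) == 1                      # normalization

a, b, c = (1, 2, 3), (4, 5, 6), (7, 8, 10)
assert D(b, a, c) == -D(a, b, c)               # antisymmetry
assert D(a, a, c) == 0 and D(a, b, a) == 0     # equal arguments give 0
```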
We will prove that, for every $n$, there is indeed exactly one determinantal function on $\mathbb{R}^n$, and we will actually provide a formula for it; it will be called the determinant. First we draw some conclusions from the definition.
If $D$ is a determinantal function on $\mathbb{R}^n$, then it has the following properties.
- $D(\vec{a}_1,\ldots ,\vec{a}_n)=0$ if two vectors among $\vec{a}_1,\ldots ,\vec{a}_n$ are the same.
- $D(\vec{a}_1,\ldots ,\vec{a}_n)=0$ if the system $\vec{a}_1,\ldots ,\vec{a}_n$ is linearly dependent.
1. By interchanging the two arguments that are equal, the value of $D$ transforms into its negative; but at the same time, the arguments do not change, so the value remains the same. This is only possible if the value is $0$.
2. Assume for the sake of convenience that $\vec{a}_n=c_1\vec{a}_1+\cdots +c_{n-1}\vec{a}_{n-1}$. Then, by multilinearity,
$$D(\vec{a}_1,\ldots ,\vec{a}_{n-1},\vec{a}_n)=\sum_{i=1}^{n-1}c_i\, D(\vec{a}_1,\ldots ,\vec{a}_{n-1},\vec{a}_i)=0$$
because of the first part.
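As a numerical illustration of the second property, a hand-coded $3\times 3$ determinant (our own helper `det3`, computed by cofactor expansion along the first row) vanishes when one row is a linear combination of the others:

```python
# Sketch: the determinant vanishes on linearly dependent rows (3x3 case).

def det3(a, b, c):
    # Cofactor expansion of det along the first row.
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
            - a[1] * (b[0] * c[2] - b[2] * c[0])
            + a[2] * (b[0] * c[1] - b[1] * c[0]))

a = (1, 2, 3)
b = (4, 5, 6)
c = tuple(2 * x - 7 * y for x, y in zip(a, b))  # c = 2a - 7b, dependent

assert det3(a, b, c) == 0    # dependent rows
assert det3(a, b, a) == 0    # two equal rows
```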
If $D$ is multilinear, then the antisymmetry may also be formulated as $D(\vec{a}_1,\ldots ,\vec{a}_n)=0$ when two arguments are equal (that is, if $\vec{a}_i=\vec{a}_j$ for mutually different $i$ and $j$).
In item 1 of the proof we already saw that the assumption that $D$ is antisymmetric implies that $D(\vec{a}_1,\ldots ,\vec{a}_n)=0$ when two arguments are equal. Below, we establish the converse implication, which says that from the assumption that $D(\vec{a}_1,\ldots ,\vec{a}_n)=0$ whenever $\vec{a}_i=\vec{a}_j$, it follows that $D$ is antisymmetric in the arguments $i$ and $j$.
We focus on the first two arguments and write $D(\vec{x},\vec{y})$ for $D(\vec{x},\vec{y},\vec{a}_3,\ldots ,\vec{a}_n)$. Thanks to the multilinearity we have
$$D(\vec{x}+\vec{y},\vec{x}+\vec{y})=D(\vec{x},\vec{x})+D(\vec{x},\vec{y})+D(\vec{y},\vec{x})+D(\vec{y},\vec{y}).$$
If $D$ vanishes whenever two arguments are the same, then $D(\vec{x}+\vec{y},\vec{x}+\vec{y})=D(\vec{x},\vec{x})=D(\vec{y},\vec{y})=0$ for all vectors $\vec{x}$ and $\vec{y}$. Therefore, the above equality becomes $0=D(\vec{x},\vec{y})+D(\vec{y},\vec{x})$, from which $D(\vec{y},\vec{x})=-D(\vec{x},\vec{y})$ follows, and so also antisymmetry of $D$ in the first two arguments.
Antisymmetry for every other pair of arguments can be shown in the same way.
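The expansion used in this argument can be verified numerically with the $2\times 2$ function from the earlier example; a small sketch (names our own):

```python
# Sketch: check D(x+y, x+y) = D(x,x) + D(x,y) + D(y,x) + D(y,y) and the
# resulting antisymmetry, for D(a, b) = a1*b2 - a2*b1.

def D(a, b):
    return a[0] * b[1] - a[1] * b[0]

x, y = (2.0, -1.0), (3.0, 4.0)
s = (x[0] + y[0], x[1] + y[1])   # s = x + y

# All equal-argument values are 0, so the expansion forces D(y,x) = -D(x,y).
assert D(s, s) == D(x, x) + D(x, y) + D(y, x) + D(y, y)
assert D(s, s) == 0 and D(x, x) == 0 and D(y, y) == 0
assert D(y, x) == -D(x, y)
```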
For the following characterization of determinantal functions we use some facts about permutations.
Consider the function $\det$ on $\mathbb{R}^n$ defined by
$$\det(\vec{a}_1,\ldots ,\vec{a}_n)=\sum_{\sigma}\operatorname{sgn}(\sigma)\,a_{1\sigma(1)}a_{2\sigma(2)}\cdots a_{n\sigma(n)},$$
where the sum runs over all permutations $\sigma$ of $1,\ldots ,n$. Here, $\vec{a}_i=(a_{i1},\ldots ,a_{in})$, so $\det(\vec{a}_1,\ldots ,\vec{a}_n)=\det(A)$, the determinant of $A$, the $(n\times n)$-matrix whose $i$-th row is equal to $\vec{a}_i$.
- The function $\det$ is a determinantal function.
- The function $\det$ is the only determinantal function on $\mathbb{R}^n$.
- If $F$ is a function of $n$ arguments from $\mathbb{R}^n$ which is multilinear and antisymmetric, then $F=F(\vec{e}_1,\ldots ,\vec{e}_n)\cdot \det$.
Instead of $\det(\vec{a}_1,\ldots ,\vec{a}_n)$ we also write $\det(A)$. We will refer to the above expression for $\det(A)$ as the sum formula for the determinant.
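The sum formula can be implemented directly; below is a minimal sketch using `itertools.permutations`, with the sign computed by counting inversions (function names are our own):

```python
# Sketch: the sum formula for the determinant, summing over all permutations.
from itertools import permutations

def sign(perm):
    """Sign of a permutation given as a tuple of 0-based indices."""
    s = 1
    for i in range(len(perm)):
        for j in range(i + 1, len(perm)):
            if perm[i] > perm[j]:   # each inversion contributes a factor -1
                s = -s
    return s

def det(rows):
    n = len(rows)
    total = 0
    for sigma in permutations(range(n)):
        term = sign(sigma)
        for i in range(n):
            term *= rows[i][sigma[i]]   # the factor a_{i, sigma(i)}
        total += term
    return total

assert det([[1, 2], [3, 4]]) == 1 * 4 - 2 * 3          # matches the 2x2 formula
assert det([[1, 0, 0], [0, 1, 0], [0, 0, 1]]) == 1      # normalization
```

This brute-force sum is only practical for small $n$, in line with the remarks about the number of terms below.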
Take $n=2$ and consider the matrix
$$A=\begin{pmatrix} a_{11}&a_{12}\\ a_{21}&a_{22}\end{pmatrix}.$$
Thus, the rows are $\vec{a}_1=(a_{11},a_{12})$ and $\vec{a}_2=(a_{21},a_{22})$.
Now, there are only two permutations of $1,2$, namely $1,2$ and $2,1$. The first already has the correct order, so its sign is $+1$; the second becomes $1,2$ after one transposition, so its sign is $-1$. We thus find
$$\det(A)=a_{11}a_{22}-a_{12}a_{21},$$
in accordance with the definition given before.
Now take $n=3$ and consider the determinant of the $(3\times 3)$-matrix $A$ with $(i,j)$-element $a_{ij}$:
$$A=\begin{pmatrix} a_{11}&a_{12}&a_{13}\\ a_{21}&a_{22}&a_{23}\\ a_{31}&a_{32}&a_{33}\end{pmatrix}.$$
There are six permutations of $1,2,3$; those with a plus sign are $1,2,3$; $2,3,1$; and $3,1,2$, and those with a minus sign are $1,3,2$; $2,1,3$; and $3,2,1$. We find
$$\det(A)=a_{11}a_{22}a_{33}+a_{12}a_{23}a_{31}+a_{13}a_{21}a_{32}-a_{11}a_{23}a_{32}-a_{12}a_{21}a_{33}-a_{13}a_{22}a_{31}.$$
This expression is known as the rule of Sarrus. It is easy to remember: place a copy of the first two columns behind the matrix,
$$\begin{pmatrix} a_{11}&a_{12}&a_{13}\\ a_{21}&a_{22}&a_{23}\\ a_{31}&a_{32}&a_{33}\end{pmatrix}\begin{matrix} a_{11}&a_{12}\\ a_{21}&a_{22}\\ a_{31}&a_{32}\end{matrix}$$
and now take the three terms on the main diagonal or parallel to it with a plus sign, and the three terms on the secondary diagonal (that is, $a_{13},a_{22},a_{31}$) or parallel to it with a minus sign.
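The rule of Sarrus translates into a single expression; a sketch (the name `det_sarrus` is our own):

```python
# Sketch: the rule of Sarrus for a 3x3 matrix, written out term by term.

def det_sarrus(A):
    return (A[0][0] * A[1][1] * A[2][2]      # main diagonal
            + A[0][1] * A[1][2] * A[2][0]    # and its two parallels
            + A[0][2] * A[1][0] * A[2][1]
            - A[0][2] * A[1][1] * A[2][0]    # secondary diagonal
            - A[0][0] * A[1][2] * A[2][1]    # and its two parallels
            - A[0][1] * A[1][0] * A[2][2])

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 10]]
assert det_sarrus(A) == -3
assert det_sarrus([[1, 0, 0], [0, 1, 0], [0, 0, 1]]) == 1
```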
In 2-Dimensional determinants we saw that $(2\times 2)$-determinants have an interpretation as surface area. Something similar applies to $(3\times 3)$-determinants: these measure volumes of parallelepipeds in $\mathbb{R}^3$.
Expanding the determinant as a sum of terms involving $D(\vec{e}_{j_1},\ldots ,\vec{e}_{j_n})$, as is done in the proof, each list of indices $j_1,\ldots ,j_n$ appears in this sum. However, many terms vanish. After all, if two of the indices are the same, then on the right-hand side there is a determinant with two equal vectors, and so it is $0$. Consequently, if $D$ exists, then
$$D(\vec{a}_1,\ldots ,\vec{a}_n)=\sum_{j_1,\ldots ,j_n\text{ all different}}a_{1j_1}\cdots a_{nj_n}\, D(\vec{e}_{j_1},\ldots ,\vec{e}_{j_n}).$$
Which indices appear in this sum, and how many terms are there in the sum? From the fact that all of the numbers $j_1,\ldots ,j_n$ have to be different, and that each lies between $1$ and $n$, it follows that in the sequence $j_1,\ldots ,j_n$ all of the numbers between $1$ and $n$ occur exactly once. Such a list is called a permutation of the numbers $1,\ldots ,n$. All permutations of $1,2,3$, for example, are
$$1,2,3;\quad 1,3,2;\quad 2,1,3;\quad 2,3,1;\quad 3,1,2;\quad 3,2,1.$$
These permutations can be found as follows: we choose a first element from $1,2,3$. This can be done in three ways. Thereafter, there are two choices left for the second element. The third element is then fixed. Thus, there are $3\cdot 2\cdot 1=6$ possible permutations of $1,2,3$.
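This enumeration can be reproduced with the standard library:

```python
# Sketch: enumerate the 3 * 2 * 1 = 6 permutations of 1, 2, 3.
from itertools import permutations

perms = list(permutations([1, 2, 3]))
assert len(perms) == 6
assert (2, 3, 1) in perms and (3, 2, 1) in perms
```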
Uniqueness: We are now able to write down the determinantal function for each $n$. It is done in the same way as for $(2\times 2)$-matrices: we will write each row as a linear combination of the standard basis vectors and use the multilinearity to write the determinant as a sum of many determinants of standard basis vectors. For this purpose, consider the $(n\times n)$-matrix $A$
with rows $\vec{a}_1,\ldots ,\vec{a}_n$. Then, for each $i$,
$$\vec{a}_i=a_{i1}\vec{e}_1+\cdots +a_{in}\vec{e}_n,$$
and so
$$D(\vec{a}_1,\ldots ,\vec{a}_n)=\sum_{j_1=1}^{n}\cdots \sum_{j_n=1}^{n}a_{1j_1}\cdots a_{nj_n}\, D(\vec{e}_{j_1},\ldots ,\vec{e}_{j_n}).$$
This is a sum of a lot of terms. There are $n$ summation indices, each with $n$ possible values, so the number of terms is $n^n$. For a value as low as $n=5$ this already gives $5^5=3125$ terms.
Since terms in which two indices $j_k$ and $j_l$ are equal contribute zero, we can limit ourselves to permutations $j_1,\ldots ,j_n$ of $1,\ldots ,n$. We count these permutations as follows. For the first element there are $n$ possibilities, then for the second element there are $n-1$ possibilities, for the third element $n-2$, and so on; for the penultimate element there are $2$ possibilities and, finally, the last element is fixed. Thus, there are $n\cdot (n-1)\cdots 2\cdot 1=n!$ permutations of $1,\ldots ,n$.
The sum for $D(\vec{a}_1,\ldots ,\vec{a}_n)$ thus has $n!$ terms. That is a lot less than $n^n$, but for $n=10$ there still are $10!=3628800$ terms. If some terms of the sum over all permutations were equal to $0$, then the sum would consist of fewer terms. However, this is not the case in general. Because the sequence $j_1,\ldots ,j_n$ contains all numbers $1,\ldots ,n$, we can obtain the order $1,2,\ldots ,n$ by repeated transpositions (interchanges of two indices). Each transposition involves a factor $-1$, so $D(\vec{e}_{j_1},\ldots ,\vec{e}_{j_n})$ equals $D(\vec{e}_1,\ldots ,\vec{e}_n)=1$ if the number of transpositions is even, and $-1$ otherwise. Thus, we find:
If $D$ exists, then
$$D(\vec{a}_1,\ldots ,\vec{a}_n)=\sum_{\sigma}\pm\, a_{1\sigma(1)}\cdots a_{n\sigma(n)},$$
where the sum runs over all permutations $\sigma$ of $1,\ldots ,n$, and the sign must be $+$ if the permutation $\sigma(1),\ldots ,\sigma(n)$ of the sequence $1,\ldots ,n$ involves an even number of transpositions, and $-$ otherwise.
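The growth of the two counts, $n^n$ for the full expansion versus $n!$ after discarding repeated indices, can be tabulated in a few lines:

```python
# Sketch: compare n^n (all index lists) with n! (permutations only).
from math import factorial

for n in (2, 3, 5, 10):
    print(n, n ** n, factorial(n))

assert 5 ** 5 == 3125 and factorial(5) == 120
assert factorial(10) == 3628800
```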
det is a determinantal function: To prove this, we verify that $\det$ meets the three requirements for a determinantal function.
Multilinearity: let $i$ be one of the numbers $1,\ldots ,n$. We want to show that $\det$ is linear in the $i$-th argument $\vec{a}_i$. Because $\det$ is a sum of terms of the form $\operatorname{sgn}(\sigma)\,a_{1\sigma(1)}\cdots a_{n\sigma(n)}$, it suffices to verify that each of these terms is linear in $\vec{a}_i$. That is indeed the case, because only the factor $a_{i\sigma(i)}$ comes from $\vec{a}_i$.
Antisymmetry: upon an interchange of two vectors in the arguments, the value of the determinant changes to its negative. If we interchange the arguments $\vec{a}_i$ and $\vec{a}_j$, we get the same expression, with the factors $a_{i\sigma(i)}$ and $a_{j\sigma(j)}$ in each term replaced by $a_{j\sigma(i)}$ and $a_{i\sigma(j)}$, respectively. So if we let $\sigma'$ be the composition of $\sigma$ and the transposition $\tau$ of $i$ and $j$, we get
$$\det(\vec{a}_1,\ldots ,\vec{a}_j,\ldots ,\vec{a}_i,\ldots ,\vec{a}_n)=\sum_{\sigma}\operatorname{sgn}(\sigma)\,a_{1\sigma'(1)}\cdots a_{n\sigma'(n)}.$$
For clarity, we treat these equalities again, but with more words:
From $\sigma'=\sigma\circ\tau$ it follows that $\sigma'$ has one transposition more (or less) than $\sigma$, so that $\operatorname{sgn}(\sigma')=-\operatorname{sgn}(\sigma)$. Moreover, as $\sigma$ runs over all permutations, so does $\sigma'$, so the sum over all permutations $\sigma$ equals the sum over all permutations $\sigma'$. With all these results we find:
$$\det(\vec{a}_1,\ldots ,\vec{a}_j,\ldots ,\vec{a}_i,\ldots ,\vec{a}_n)=-\sum_{\sigma'}\operatorname{sgn}(\sigma')\,a_{1\sigma'(1)}\cdots a_{n\sigma'(n)}=-\det(\vec{a}_1,\ldots ,\vec{a}_i,\ldots ,\vec{a}_j,\ldots ,\vec{a}_n).$$
Normalization: $\det(\vec{e}_1,\ldots ,\vec{e}_n)=1$.
A standard basis vector $\vec{e}_i$ of $\mathbb{R}^n$ consists of zeroes and a one at position $i$. This means: if $\vec{a}_i=\vec{e}_i$ for each $i$, then $a_{ij}=1$ if $j=i$ and $a_{ij}=0$ otherwise. In the sum over all permutations, the only term unequal to zero is therefore the one belonging to the identity permutation, with $\sigma(i)=i$ for all $i$:
$$\det(\vec{e}_1,\ldots ,\vec{e}_n)=\operatorname{sgn}(\mathrm{id})\,a_{11}a_{22}\cdots a_{nn}=1.$$
Last statement: Write $c=F(\vec{e}_1,\ldots ,\vec{e}_n)$. If $c=0$, then each term in the expansion of $F$ is zero, because it contains a factor $F(\vec{e}_{j_1},\ldots ,\vec{e}_{j_n})=\pm c=0$. Then $F=0=c\cdot\det$, as required. Suppose, therefore, $c\neq 0$. Then $\frac{1}{c}F$ is defined; it is a function that satisfies all three requirements for a determinantal function, so, because of the uniqueness, we have $\frac{1}{c}F=\det$. The statement now follows after multiplying both members by $c$.
The sum formula has $n!$ terms, where $1!=1$ and $n!=n\cdot (n-1)!$ for $n\geq 2$. This is way too much to be of use even for relatively small $n$. However, the formula implies results that we will use to calculate determinants.
For what value of the unknown is the determinant of the matrix below equal to zero?
Give your answer in the form of an integer or a simplified fraction.
We calculate the determinant of the matrix:
Thus, the determinant is equal to zero if and only if .