The two operations of a vector space, addition and scalar multiplication, can be carried out in its coordinate space.
Let $n$ be a natural number and $\alpha$ a basis of a vector space $V$. By $x$ and $y$ we indicate vectors in $V$ with coordinates $(x_1, \ldots, x_n)$ and $(y_1, \ldots, y_n)$, respectively, with respect to the given basis, and we denote by $\lambda$ a real number.
- The coordinate vector of $x + y$ with respect to the basis $\alpha$ is $(x_1 + y_1, \ldots, x_n + y_n)$.
- The coordinate vector of $\lambda x$ with respect to the basis $\alpha$ is $(\lambda x_1, \ldots, \lambda x_n)$.
Addition and scalar multiplication in $V$ thus correspond to addition and scalar multiplication of the coordinates in $\mathbb{R}^n$ with respect to the basis $\alpha$.
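As a small illustration of this correspondence (the polynomials and helper functions below are my own choices, not from the text), take the space of polynomials of degree at most $2$ with basis $1, x, x^2$, so a polynomial $c_0 + c_1 x + c_2 x^2$ has coordinate vector $(c_0, c_1, c_2)$:

```python
def poly_eval(coords, t):
    """Evaluate the polynomial with coordinates (c0, c1, c2) w.r.t. 1, x, x^2 at t."""
    c0, c1, c2 = coords
    return c0 + c1 * t + c2 * t * t

def add(u, v):
    """Coordinatewise addition in R^3."""
    return tuple(a + b for a, b in zip(u, v))

def scale(lam, u):
    """Coordinatewise scalar multiplication in R^3."""
    return tuple(lam * a for a in u)

p = (1, 2, 0)   # the polynomial 1 + 2x
q = (3, 0, 1)   # the polynomial 3 + x^2

# The coordinatewise sum and scalar multiple represent the sum and scalar
# multiple of the polynomials themselves:
for t in [0.0, 1.0, 2.5]:
    assert poly_eval(add(p, q), t) == poly_eval(p, t) + poly_eval(q, t)
    assert poly_eval(scale(5, p), t) == 5 * poly_eval(p, t)
print("coordinate arithmetic matches polynomial arithmetic")
```

The operations on the left-hand sides act on rows of numbers; the checks confirm they agree with the corresponding operations on the polynomials.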
By coordinatization we can reduce questions about vector spaces to questions about $\mathbb{R}^n$ or $\mathbb{C}^n$, where we can use a whole arsenal of computational techniques for rows of numbers. The final answer can then be given by translating the results found back to the original vector space.
This possibility is what makes the vector spaces $\mathbb{R}^n$ and $\mathbb{C}^n$ so important: they are, as it were, the standard models for the real and complex $n$-dimensional vector spaces. Mathematicians formulate this as follows: each $n$-dimensional real or complex vector space is isomorphic to $\mathbb{R}^n$ or $\mathbb{C}^n$, respectively.
The following theorem shows that independence can be examined on the level of coordinates.
Let $\alpha$ be a basis of the $n$-dimensional vector space $V$ and let $v_1, \ldots, v_m$ be a sequence of vectors of $V$. Then:
- $v_1, \ldots, v_m$ is independent if and only if the associated sequence of coordinate vectors in $\mathbb{R}^n$ with respect to $\alpha$ is independent.
- $v_1, \ldots, v_m$ is a basis of $V$ if and only if the corresponding sequence of coordinate vectors in $\mathbb{R}^n$ with respect to $\alpha$ is a basis of $\mathbb{R}^n$.
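The theorem above suggests a mechanical test (a sketch of my own, not from the text): stack the coordinate vectors as the columns of a matrix and compare its rank with the number of vectors.

```python
import numpy as np

def independent(coord_vectors):
    """Return True if the given coordinate vectors (lists of equal length n,
    representing vectors of V with respect to a basis) are linearly independent."""
    A = np.column_stack(coord_vectors)
    return bool(np.linalg.matrix_rank(A) == len(coord_vectors))

# Two independent coordinate vectors in R^3, and a dependent triple
# (the third is the sum of the first two):
print(independent([[1, 1, 0], [1, 0, 1]]))             # True
print(independent([[1, 1, 0], [1, 0, 1], [2, 1, 1]]))  # False
```

By the theorem, the answer for the coordinate vectors is also the answer for the original vectors in $V$.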
In order to determine whether three polynomials $p_1$, $p_2$, $p_3$ of the vector space $P_2$ of all polynomials in $x$ of degree at most $2$ are linearly dependent, we can concentrate on the sequence of coordinate vectors of this triple with respect to the basis $1, x, x^2$ of $P_2$. Because the third coordinate vector is the sum of the first two, the three polynomials do not form a basis of $P_2$.
It can, of course, also be seen immediately that the third polynomial is the sum of the first two. The advantage of the coordinate vectors is that, apart from the translation between the original vector space (in this case $P_2$) and $\mathbb{R}^n$ (where $n = 3$ in this case), only the methods for verifying dependence in $\mathbb{R}^n$ need to be performed.
We only show the first part; the second part follows immediately from it. Denote the coordinate vectors of $v_1, \ldots, v_m$ with respect to $\alpha$ by $w_1, \ldots, w_m$. Then the coordinate vector of $\lambda_1 v_1 + \cdots + \lambda_m v_m$ is equal to $\lambda_1 w_1 + \cdots + \lambda_m w_m$. Now, if $\lambda_1 v_1 + \cdots + \lambda_m v_m = 0$, then $\lambda_1 w_1 + \cdots + \lambda_m w_m = 0$, and vice versa, because the coordinate vector of the zero vector is $(0, \ldots, 0)$. A non-trivial relation between $v_1, \ldots, v_m$ thus translates into a non-trivial relation between the coordinate vectors $w_1, \ldots, w_m$. In other words, $v_1, \ldots, v_m$ is dependent if and only if $w_1, \ldots, w_m$ is dependent.
The above theory also leads to a different way of looking at systems of linear equations.
Consider the system of $m$ linear equations in the unknowns $x_1, \ldots, x_n$:

$$\begin{aligned} a_{11} x_1 + \cdots + a_{1n} x_n &= b_1 \\ &\;\;\vdots \\ a_{m1} x_1 + \cdots + a_{mn} x_n &= b_m \end{aligned}$$

Let $a_1, \ldots, a_n$ be the columns of the coefficient matrix and put $b = (b_1, \ldots, b_m)$. Then we can write the system in vector form as

$$x_1 a_1 + x_2 a_2 + \cdots + x_n a_n = b.$$
- The system has at least one solution if and only if $b$ lies in the space spanned by the columns $a_1, \ldots, a_n$.
- The system has exactly one solution if and only if, moreover, the columns are independent. In this case, the solution gives the coordinates of the vector $b$ with respect to the basis $a_1, \ldots, a_n$ of the column space.
Apparently, looking for a solution is the same as trying to write $b$ as a linear combination of the columns of the coefficient matrix.
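A small numerical sketch of this viewpoint (the matrix and right-hand side below are an assumed example, not from the text): the columns of $A$ are independent and $b$ lies in their span, so the unique solution consists of the coordinates of $b$ with respect to the columns.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [1.0, 0.0]])
b = np.array([5.0, 1.0, 3.0])   # chosen to lie in the column span: b = 3*a1 + 1*a2

# Least squares finds the coefficients x with A @ x as close to b as possible;
# since b is in the column span, the residual is zero and x solves the system.
x, residuals, rank, _ = np.linalg.lstsq(A, b, rcond=None)
print(x)   # approximately [3, 1]: the coordinates of b w.r.t. the columns a1, a2

# The solution reproduces b as x1*a1 + x2*a2:
assert np.allclose(A @ x, b)
```

Note that the system has three equations in two unknowns; solvability is exactly the question of whether $b$ lies in the plane spanned by the two columns.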
What is the difference between the plane and $\mathbb{R}^2$?
It is often said that $\mathbb{R}^2$ is the plane. That is incorrect. After selection of a basis $a_1, a_2$ of the plane, each arrow $v$ in it can be written as $v = x_1 a_1 + x_2 a_2$, and computing with arrows can be reduced to calculating with sequences of two numbers, that is, to working with coordinate vectors in $\mathbb{R}^2$.
It depends on the choice of basis which arrow in the plane corresponds to which coordinate vector. Since we almost always reduce calculations in the plane to calculations in $\mathbb{R}^2$, we tend to identify these two spaces.
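This dependence on the basis can be made concrete (the bases below are assumed examples of my own): one and the same arrow receives different coordinate vectors in $\mathbb{R}^2$ for different choices of basis.

```python
import numpy as np

v = np.array([3.0, 1.0])   # one fixed arrow, described in the standard basis

# Basis 1: the standard basis e1, e2.
# Basis 2: b1 = (1, 1), b2 = (1, -1), stacked as the columns of B2.
B2 = np.column_stack([[1.0, 1.0], [1.0, -1.0]])

coords_1 = v                        # coordinates of v with respect to e1, e2
coords_2 = np.linalg.solve(B2, v)   # coordinates of v with respect to b1, b2

print(coords_1)   # [3. 1.]
print(coords_2)   # [2. 1.], since v = 2*b1 + 1*b2
```

The arrow itself has not changed; only its description in $\mathbb{R}^2$ has. This is why the identification of the plane with $\mathbb{R}^2$ only makes sense once a basis has been fixed.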