The two operations of a vector space, addition and scalar multiplication, can be carried out in its coordinate space.
Let #n# be a natural number and #\basis{\vec{a}_1 ,\ldots ,\vec{a}_n}# a basis of a vector space #V#. By #\vec{x}# and #\vec{y}# we denote vectors in #V# with coordinates #\rv{x_1 ,\ldots ,x_n }# and #\rv{y_1 ,\ldots ,y_n }#, respectively, with respect to the given basis, and by #\lambda# we denote a real number.
- The coordinate vector of #\vec{x}+\vec{y}# with respect to the basis #\basis{\vec{a}_1 ,\ldots ,\vec{a}_n}# is \[\rv{x_1 +y_1 ,\ldots ,x_n +y_n }\]
- The coordinate vector of #\lambda \cdot\vec{x}# with respect to the basis #\basis{\vec{a}_1 ,\ldots ,\vec{a}_n}# is \[\rv{\lambda \cdot x_1 ,\ldots ,\lambda\cdot x_n }\]
Addition and scalar multiplication in #V# thus correspond to the addition and scalar multiplication of the coordinates in #\mathbb{R}^n# with respect to the basis #\basis{\vec{a}_1 ,\ldots ,\vec{a}_n}#.
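For example (with illustrative numbers), if #n=3#, #\vec{x}# has coordinates #\rv{2,-1,0}#, and #\vec{y}# has coordinates #\rv{1,3,5}# with respect to #\basis{\vec{a}_1 ,\vec{a}_2 ,\vec{a}_3}#, then
\[
\vec{x}+\vec{y}\ \text{has coordinate vector}\ \rv{3,2,5}\quad\text{and}\quad 2\cdot\vec{x}\ \text{has coordinate vector}\ \rv{4,-2,0}
\]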
By coordinatization we can reduce questions about vector spaces to questions about #\mathbb{R}^n# or #\mathbb{C}^n#, where a whole arsenal of computational techniques for rows of numbers is available. The final answer is then obtained by translating the results back to the original vector space.
This is what makes the vector spaces #\mathbb{R}^n# and #\mathbb{C}^n# so important: they serve, as it were, as the standard model for every real or complex #n#-dimensional vector space #V#. Mathematicians formulate this as follows: each #n#-dimensional real or complex vector space is isomorphic to #\mathbb{R}^n# or #\mathbb{C}^n#, respectively.
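Concretely, the isomorphism is given by the coordinate map
\[
\vec{x}=x_1\cdot\vec{a}_1+\cdots +x_n\cdot\vec{a}_n\ \mapsto\ \rv{x_1,\ldots ,x_n}
\]
which is a bijection from #V# to #\mathbb{R}^n# that, by the rules above, respects addition and scalar multiplication.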
The following theorem shows that independence can be examined on the level of coordinates.
Let #\alpha# be a basis of the #n#-dimensional vector space #V# and let #\vec{a}_1 , \ldots ,\vec{a}_m# be a sequence of vectors of #V#. Then:
- #\vec{a}_1 , \ldots ,\vec{a}_m# is independent if and only if the corresponding sequence of coordinate vectors in #\mathbb{R}^n# with respect to #\alpha# is independent.
- #\vec{a}_1 , \ldots ,\vec{a}_m# is a basis of #V# if and only if the corresponding sequence of coordinate vectors in #\mathbb{R}^n# with respect to #\alpha# is a basis of #\mathbb{R}^n#.
In order to determine whether the polynomials #1+2x+3x^2#, #2+x-x^2#, #3+3x+2x^2# in the vector space #P_2# of all polynomials in #x# of degree at most #2# are linearly dependent, we can concentrate on the sequence of coordinate vectors #\rv{1,2,3}#, #\rv{2,1,-1}#, #\rv{3,3,2}# of this triple with respect to the basis #\basis{1, x, x^2}# of #P_2#. Because the third coordinate vector is the sum of the first two, the coordinate vectors are dependent; hence the three polynomials are linearly dependent and, in particular, do not form a basis of #P_2#.
It can, of course, also be seen immediately that the third polynomial is the sum of the first two. The advantage of working with coordinate vectors is that, apart from the translation between the original vector space (in this case #P_2#) and #\mathbb{R}^n# (here #n=3#), only the methods for verifying dependence in #\mathbb{R}^n# need to be applied.
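Written out on the level of coordinates, the dependence relation reads
\[
1\cdot\rv{1,2,3}+1\cdot\rv{2,1,-1}-1\cdot\rv{3,3,2}=\rv{0,0,0}
\]
which translates back to the relation #1\cdot(1+2x+3x^2)+1\cdot(2+x-x^2)-1\cdot(3+3x+2x^2)=0# in #P_2#.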
We only prove the first part; the second part follows from it, since a basis of the #n#-dimensional space #V# is the same as a sequence of #n# independent vectors, and likewise in #\mathbb{R}^n#. Denote the coordinate vectors of #\vec{a}_1 , \ldots ,\vec{a}_m# with respect to #\alpha# by #\vec{b}_1, \ldots ,\vec{b}_m#. Then the coordinate vector of #\lambda_1\cdot \vec{a}_1 + \cdots + \lambda_m\cdot \vec{a}_m# is equal to #\lambda_1 \cdot\vec{b}_1 + \cdots + \lambda_m \cdot\vec{b}_m#. Now, if #\lambda_1 \cdot\vec{a}_1 + \cdots + \lambda_m\cdot \vec{a}_m = \vec{0}#, then #\lambda_1\cdot \vec{b}_1 + \cdots + \lambda_m\cdot \vec{b}_m =\rv{0,\ldots ,0}#, and vice versa, because the coordinate vector of the zero vector is #\rv{0,\ldots ,0}#. A non-trivial relation between #\vec{a}_1 , \ldots ,\vec{a}_m# thus translates into a non-trivial relation between the coordinate vectors #\vec{b}_1, \ldots ,\vec{b}_m#, and conversely. In other words, #\vec{a}_1 , \ldots ,\vec{a}_m# is dependent if and only if #\vec{b}_1, \ldots ,\vec{b}_m# is dependent.
The above theory also leads to a different way of looking at systems of linear equations.
Consider the system of linear equations in the unknowns #x_1,\ldots,x_n#:
\[
\begin{array}{ccc}
a_{11}x_1 +a_{12}x_2+\cdots +a_{1n}x_n &=& b_1\\
\vdots & & \vdots \\
a_{m1} x_1 +a_{m2}x_2+\cdots +a_{mn}x_n &=& b_m
\end{array}
\]
Let #\vec{k}_1,\ldots ,\vec{k}_n# be the columns of the coefficient matrix and put #\vec{b}=\matrix{b_1\\ \vdots\\ b_m}#. Then we can write the system in vector form as
\[
x_1\cdot\vec{k}_1+x_2\cdot\vec{k}_2+\cdots+x_n\cdot\vec{k}_n=\vec{b}
\]
- The system has at least one solution if and only if #\vec{b}# lies in the span of the columns #\vec{k}_1,\ldots ,\vec{k}_n#.
- The system has exactly one solution if and only if, moreover, the columns #\vec{k}_1,\ldots ,\vec{k}_n# are independent. In this case, the solution gives the coordinates of the vector #\vec{b}# with respect to the basis #\basis{\vec{k}_1,\ldots ,\vec{k}_n}#.
Thus, solving the system amounts to writing #\vec{b}# as a linear combination of the columns of the coefficient matrix.
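As a small illustration (with numbers chosen for the example), consider the system
\[
\begin{array}{ccc}
x_1+2x_2 &=& 5\\
3x_1+4x_2 &=& 6
\end{array}
\]
In vector form it reads
\[
x_1\cdot\matrix{1\\ 3}+x_2\cdot\matrix{2\\ 4}=\matrix{5\\ 6}
\]
The columns #\matrix{1\\ 3}# and #\matrix{2\\ 4}# are independent, so there is exactly one solution, #\rv{x_1,x_2}=\rv{-4,\tfrac{9}{2}}#; it consists of the coordinates of #\matrix{5\\ 6}# with respect to the basis #\basis{\matrix{1\\ 3},\matrix{2\\ 4}}#.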
What is the difference between #\mathbb{R}^2# and #\mathbb{E}^2#?
The plane is denoted by #\mathbb{E}^2#. It is often said that #\mathbb{R}^2# is the plane; that is incorrect. After choosing a basis #\basis{\vec{a}_1, \vec{a}_2}# of #\mathbb{E}^2#, each arrow #\vec{x}# in #\mathbb{E}^2# can be written as #\vec{x}=x_1\cdot \vec{a}_1 +x_2\cdot\vec{a}_2#, and computing with arrows can be reduced to computing with pairs of numbers, that is, to working with coordinate vectors in #\mathbb{R}^2#.
It depends on the choice of basis which arrow in the plane corresponds to which coordinate vector. Since we almost always reduce calculations in the plane to calculations in #\mathbb{R}^2#, we tend to identify these two spaces.