Previously we encountered the linear map #L_A: \mathbb{R}^n\rightarrow \mathbb{R}^m# determined by the #(m\times n)#-matrix #A#. Here we show that every linear map between two coordinate spaces has this form.
To see why, note that the columns of a real #(m\times n)#-matrix #A# are precisely the images under #L_A# of the vectors of the standard basis of #\mathbb{R}^n#.
Let #m# and #n# be natural numbers and let #\varepsilon=\basis{\vec{e}_1,\ldots ,\vec{e}_n}# be the standard basis for #\mathbb{R}^n#.
Each linear mapping #L:\mathbb{R}^n \rightarrow\mathbb{R}^m# is determined by the matrix #A = L_{\varepsilon}# whose columns are \[ L(\vec{e}_1),\ldots, L(\vec{e}_n)\]
The matrix #L_{\varepsilon}# is called the matrix of the linear map #L#.
Write a vector #\vec{x}\in \mathbb{R}^n# as a linear combination of the standard basis vectors: #\vec{x}=x_1 \vec{e}_1 +\cdots + x_n\vec{e}_n#. Then, by the linearity of #L#, \[ L(\vec{x})=x_1\cdot L(\vec{e}_1) +\cdots +x_n\cdot L(\vec{e}_n)\] Collect the #n# vectors #L(\vec{e}_1),\ldots, L(\vec{e}_n)# as columns in an #(m\times n)#-matrix #A = L_{\varepsilon}#. Then \[L(\vec{x}) =A\vec{x}\] Here, the right-hand side is the matrix product of #A# and #\vec{x}#.
The image of the vector #\vec{x}# under #L# is computed as the matrix product #L_{\varepsilon}\vec{x}#.
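For instance (with illustrative values, not taken from the exercise below): if #L:\mathbb{R}^2\to\mathbb{R}^2# satisfies #L(\vec{e}_1)=\rv{2,0}# and #L(\vec{e}_2)=\rv{1,3}#, then \[L_\varepsilon = \matrix{2&1\\ 0&3}\quad\text{and}\quad L\left(\rv{x_1,x_2}\right)=\matrix{2&1\\ 0&3}\matrix{x_1\\ x_2}=\rv{2\cdot x_1+x_2,\,3\cdot x_2}\]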
Later we will indicate how, for each linear map #L:V\to W#, each basis #\alpha# of #V#, and each basis #\beta# of #W#, you can find a matrix that describes the map.
We will then write #{}_\beta L_{\alpha}# for this matrix, and simply #L_{\alpha}# if #\beta = \alpha#. In the notation #L_\varepsilon# used here, #\varepsilon# is the standard basis of #V#. If both #V# and #W# are coordinate spaces, we also use #\varepsilon#, somewhat loosely, for the standard basis of #W#; thus the notation #L_\varepsilon# is shorthand for #{}_\varepsilon L_\varepsilon#.
Let #\vec{a}# be a vector in an inner product space #V#. Since the inner product #\dotprod{\vec{x}}{\vec{y}}# is linear in #\vec{y}#, the map that assigns to #\vec{y}# the inner product #\dotprod{\vec{a}}{\vec{y}}# is a linear mapping #V\to\mathbb{R}#. In the case where #V = \mathbb{R}^n# with the standard inner product, #\dotprod{\vec{a}}{\vec{y}}# coincides with the matrix product #A\,\vec{y}#, where #A# is the #(1\times n)#-matrix whose only row is the coordinate vector #\vec{a}=\rv{a_1,\ldots,a_n}#, since then
\[\dotprod{\vec{a}}{\vec{y}}=a_1\cdot y_1+\cdots+a_n\cdot y_ n = \matrix{a_1&\cdots & a_n}\, \matrix{y_1\\ \vdots\\ y_n} = A\, \vec{y}\]
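For example, taking #\vec{a}=\rv{1,2,3}# and #\vec{y}=\rv{4,5,6}# in #\mathbb{R}^3# (numbers chosen only for illustration): \[\dotprod{\vec{a}}{\vec{y}}=1\cdot 4+2\cdot 5+3\cdot 6 = 32 = \matrix{1&2&3}\,\matrix{4\\ 5\\ 6}\]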
This way, we have found a new role for matrices.
Let #L:\mathbb{R}^2\to\mathbb{R}^2# be the linear map defined by
\[L\left(\rv{x,y}\right)=\rv{-4\cdot x-9\cdot y, -x}\]
Determine the matrix #L_{\varepsilon}# of #L# with respect to the standard basis.
\(L_\varepsilon =\)\(\matrix{-4& -9\\ -1 & 0}\)
After all, \[\begin{array}{rcl} L(\vec{e}_1) &=& \rv{-4\cdot 1 -9\cdot 0,-1\cdot 1 + 0\cdot 0} = \rv{-4,-1}\\
L(\vec{e}_2) &=&\rv{-4\cdot 0 -9\cdot 1,-1\cdot 0 + 0\cdot 1} =\rv{-9,0}\end{array}\] By the theorem
Linear maps in coordinate spaces defined by matrices, these image vectors, viewed as column vectors, are the columns of #L_\varepsilon#, so
\[L_\varepsilon = \matrix{-4& -9\\ -1 & 0}\]
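As a quick check, multiplying this matrix by an arbitrary vector #\rv{x,y}# reproduces the defining formula of #L#: \[L_\varepsilon\,\matrix{x\\ y} = \matrix{-4& -9\\ -1 & 0}\matrix{x\\ y}=\matrix{-4\cdot x-9\cdot y\\ -x}=L\left(\rv{x,y}\right)\]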