Under certain conditions we can calculate with matrices. Here we discuss addition and scalar multiplication. We also look at the transpose, the reflection of a matrix about its main diagonal.
If \(A\) and \(B\) are matrices of the same size, then the sum matrix \(A+B\) is the matrix obtained by adding corresponding elements. Just as for numbers, this operation is called addition.
Addition of matrices satisfies the following two properties, where \(A=(a_{ij})\), \(B=(b_{ij})\) and \(C=(c_{ij})\) are three \((m\times n)\)-matrices:
\[
\begin{array}{ll}
A+B=B+A &\phantom{xxx} \color{blue}{\text{commutativity}} \\
(A+B)+C=A+(B+C) &\phantom{xxx} \color{blue}{\text{associativity}}
\end{array}
\]
Written out in coordinates, this definition reads as follows:
Let \(A\) and \(B\) both be \((m\times n)\)-matrices with elements \(a_{ij}\) and \(b_{ij}\), respectively. For \( 1\leq i\leq m\) and \(1\leq j\leq n\) define \[
c_{ij}=a_{ij}+b_{ij}
\] The \((m\times n)\)-matrix \(C\) with elements \(c_{ij}\) is the sum of the matrices \(A\) and \(B\).
Below are two examples of matrix sums.
\[\begin{aligned}\matrix{3 & 0 & 1 \\ 4 & 3 & 2 \\ 2 & 0 & 4 \\ 4 & 5 & 0 \\ } + \matrix{2 & 3 & 1 \\ 4 & 1 & 2 \\ 0 & 5 & 3 \\ 3 & 5 & 5 \\ } &= \matrix{3+2 & 0+3 & 1+1 \\ 4+4 & 3+1 & 2+2 \\ 2+0 & 0+5 & 4+3 \\ 4+3 & 5+5 & 0+5 \\ } \\ \\ &= \matrix{5 & 3 & 2 \\ 8 & 4 & 4 \\ 2 & 5 & 7 \\ 7 & 10 & 5 \\ }\end{aligned}\]
\[\matrix{3 & 0 & 1 \\ 4 & 3 & 2 \\ 2 & 0 & 4 \\ 4 & 5 & 0 \\ } + \matrix{2 & 3 \\ 4 & 1 \\ 0 & 5 \\ 3 & 5 \\ } \text{ does not exist because the matrix sizes differ.}\]
Both rules follow directly from the corresponding rules for numbers:
- The first property, commutativity for the addition of matrices, follows directly from the fact that addition of numbers is commutative: if #c_{ij}=a_{ij}+b_{ij}#, then also #c_{ij}=b_{ij}+a_{ij}#, so \[A+B = (a_{ij})+(b_{ij})=(c_{ij}) = (b_{ij}) + (a_{ij}) = B+A\]
- We establish the second property, associativity for the addition of matrices, as follows. First we observe that \(A+B\) and \(B+C\), and hence #(A+B)+C# and #A+(B+C)#, have equal size, namely \(\rv{m, n}\). Next, we observe that at position \(ij\) the matrix \((A+B)+C\) has the element \[((A+B)+C)_{ij}=(A+B)_{ij} +c_{ij}=(a_{ij}+b_{ij})+c_{ij}\] and that the matrix \(A+(B+C)\) has the element \(a_{ij}+(b_{ij}+c_{ij})\); of course, these two numbers are equal.
The associativity enables us to talk about \( A+B+C\) without specifying how we determine the matrix: as \((A+B)+C\) or as \(A+(B+C)\). The result is the same either way.
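To make the elementwise definition concrete, here is a minimal sketch in Python (our own choice of language, using plain nested lists rather than a matrix library; the function name `add_matrices` is hypothetical). It computes the sum from the first example above and verifies the two properties on small matrices.

```python
def add_matrices(A, B):
    """Return the sum A + B; raise an error if the sizes differ."""
    if len(A) != len(B) or any(len(ra) != len(rb) for ra, rb in zip(A, B)):
        raise ValueError("matrix sum does not exist: sizes differ")
    # c_ij = a_ij + b_ij at every position (i, j)
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

A = [[3, 0, 1], [4, 3, 2], [2, 0, 4], [4, 5, 0]]
B = [[2, 3, 1], [4, 1, 2], [0, 5, 3], [3, 5, 5]]
C = [[1, 0, 2], [0, 1, 0], [2, 0, 1], [1, 1, 1]]

print(add_matrices(A, B))  # [[5, 3, 2], [8, 4, 4], [2, 5, 7], [7, 10, 5]]

# The two properties, verified on this example:
assert add_matrices(A, B) == add_matrices(B, A)                # commutativity
assert add_matrices(add_matrices(A, B), C) == \
       add_matrices(A, add_matrices(B, C))                     # associativity
```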
If \(A\) is a matrix and \(\lambda\) a number, then \(\lambda\cdot A\), or simply #\lambda A#, is the matrix you get by multiplying all the elements of \(A\) by \(\lambda\). We call this operation the scalar multiplication of the matrix #A# by the scalar #\lambda#, and the result the scalar product.
If #\lambda = -1#, we often write #-A# rather than #(-1)A#. This matrix is called the opposite matrix of #A#.
Scalar multiplication satisfies the following calculation rules, where \(A\) and \(B\) are matrices of equal size, and \(\lambda\) and \(\mu\) are scalars:
\[
\begin{array}{rl}
1\,A\!\!\! & =A \\
(\lambda+\mu)\,A\!\!\! &= \lambda\, A+\mu\, A \\
\lambda\,(A+B)\!\!\! &= \lambda A+\lambda B \\
\lambda(\mu\, A)\!\!\! &= (\lambda\, \mu)\, A
\end{array}
\]
Below is an example of a scalar multiplication of a matrix by a number.
\[\begin{aligned} -5 \matrix{2 & 0 & 1 \\ 0 & 1 & 5 \\ 0 & 2 & 2 \\ } &= \matrix{\left(-5\right)\cdot 2 & \left(-5\right)\cdot 0 & \left(-5\right)\cdot 1 \\ \left(-5\right)\cdot 0 & \left(-5\right)\cdot 1 & \left(-5\right)\cdot 5 \\ \left(-5\right)\cdot 0 & \left(-5\right)\cdot 2 & \left(-5\right)\cdot 2 \\ } \\ \\ &= \matrix{-10 & 0 & -5 \\ 0 & -5 & -25 \\ 0 & -10 & -10 \\ }\end{aligned}\]
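As a sketch under the same assumptions as before (plain Python lists, our own helper names), scalar multiplication and its four calculation rules can be checked numerically:

```python
def scalar_multiply(lam, A):
    """Return lam * A: every element of A multiplied by the scalar lam."""
    return [[lam * a for a in row] for row in A]

def add(A, B):
    """Elementwise sum of two matrices of equal size."""
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

A = [[2, 0, 1], [0, 1, 5], [0, 2, 2]]
B = [[1, 1, 0], [2, 0, 1], [3, 1, 4]]

print(scalar_multiply(-5, A))  # [[-10, 0, -5], [0, -5, -25], [0, -10, -10]]

# The four calculation rules, verified with lam = 2 and mu = 3:
assert scalar_multiply(1, A) == A
assert scalar_multiply(2 + 3, A) == add(scalar_multiply(2, A), scalar_multiply(3, A))
assert scalar_multiply(2, add(A, B)) == add(scalar_multiply(2, A), scalar_multiply(2, B))
assert scalar_multiply(2, scalar_multiply(3, A)) == scalar_multiply(2 * 3, A)
```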
Addition and scalar multiplication of matrices both work elementwise: at each index, the familiar sum of two numbers or the familiar multiplication by a number takes place. Thus, the calculation rules are not so different from the corresponding well-known rules for numbers.
For vectors, we have already defined a scalar multiplication. If #A# is a #(1\times n)#-matrix (a row vector of length #n#) or an #(m\times 1)#-matrix (a column vector of length #m#), then the scalar multiplication of #\lambda# by #A# corresponds to the scalar multiplication of vectors.
The transposed matrix of \(A\), denoted as \(A^{\top}\), is the matrix that you get when you reflect \(A\) about its main diagonal. If \(A\) is an \((m\times n)\)-matrix, then \(A^{\top}\) is an \((n\times m)\)-matrix.
For matrices \(A\) and \(B\) of equal size and each scalar \(\lambda\) we have \[
\begin{array}{rl}
(A+B)^{\top}\!\!\! & = A^{\top}+B^{\top} \\
(\lambda A)^{\top}\!\!\! & = \lambda A^{\top} \\
(A^{\top})^{\top}\!\!\! & =A
\end{array}
\]
Below are two examples of transposed matrices.
\[\matrix{0 & 5 & 1 \\ 0 & 3 & 5 \\ }^{\!\top} = \matrix{0 & 0 \\ 5 & 3 \\ 1 & 5 \\ }\qquad\text{and}\qquad \matrix{0 \\ 5 \\ }^{\!\top}=\matrix{0 & 5 \\ }\]
Other commonly used ways of denoting the transposed matrix of #A# are \(A^{t}\) and \(A'\).
As an illustration, we prove the second rule. The #(i,j)#-element of #\lambda A# is #\lambda\cdot a_{ij}#, so the #(i,j)#-element of #\left(\lambda A\right)^{\top}# is #\lambda\cdot a_{ji}#; but that is also the #(i,j)#-element of #\lambda A^{\top}#, so #\left(\lambda A\right)^{\top}=\lambda A^{\top}#.
The proofs of the other rules are similar.
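The transpose and the three rules above can likewise be checked with a short sketch in plain Python; the helper names `transpose`, `add`, and `scale` are our own, not a standard API.

```python
def transpose(A):
    """Return A^T: row i of A becomes column i of the result."""
    return [list(col) for col in zip(*A)]

def add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def scale(lam, A):
    return [[lam * a for a in row] for row in A]

A = [[0, 5, 1], [0, 3, 5]]   # a (2 x 3)-matrix, as in the first example
B = [[1, 2, 3], [4, 5, 6]]

print(transpose(A))          # [[0, 0], [5, 3], [1, 5]], a (3 x 2)-matrix

# The three rules, verified on this example:
assert transpose(add(A, B)) == add(transpose(A), transpose(B))
assert transpose(scale(-5, A)) == scale(-5, transpose(A))
assert transpose(transpose(A)) == A
```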
A symmetric matrix is a square matrix that is equal to its transpose. An anti-symmetric matrix is a square matrix that is equal to the opposite of its transpose.
In other words, a matrix \(A\) is\[\begin{array}{rrcl}\text{symmetric if }&A^{\top}&=&A\\ \text{anti-symmetric if }&A^{\top}&=&-A\end{array}\]
Below are two examples: a symmetric matrix and an anti-symmetric matrix.
\(\matrix{0 & 2 & 5 \\ 2 & 0 & 4 \\ 5 & 4 & 4 \\ }\) is a symmetric matrix because \[\matrix{0 & 2 & 5 \\ 2 & 0 & 4 \\ 5 & 4 & 4 \\ }^{\!\top} = \matrix{0 & 2 & 5 \\ 2 & 0 & 4 \\ 5 & 4 & 4 \\ }\]
\(\matrix{0 & -2 & -5 \\ 2 & 0 & 0 \\ 5 & 0 & 0 \\ }\) is an anti-symmetric matrix because \[\matrix{0 & -2 & -5 \\ 2 & 0 & 0 \\ 5 & 0 & 0 \\ }^{\!\top} =\matrix{0 & 2 & 5 \\ -2 & 0 & 0 \\ -5 & 0 & 0 \\ }=-\matrix{0 & -2 & -5 \\ 2 & 0 & 0 \\ 5 & 0 & 0 \\ }\]
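A brief sketch, again with plain Python lists and hypothetical helper names, tests the two example matrices above:

```python
def transpose(A):
    return [list(col) for col in zip(*A)]

def is_symmetric(A):
    """True when A^T = A."""
    return transpose(A) == A

def is_anti_symmetric(A):
    """True when A^T = -A, i.e. A^T equals the opposite matrix of A."""
    return transpose(A) == [[-a for a in row] for row in A]

S = [[0, 2, 5], [2, 0, 4], [5, 4, 4]]      # the symmetric example
T = [[0, -2, -5], [2, 0, 0], [5, 0, 0]]    # the anti-symmetric example

assert is_symmetric(S) and not is_anti_symmetric(S)
assert is_anti_symmetric(T) and not is_symmetric(T)
```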
It is not strictly necessary to require in these definitions that the matrix be square: squareness already follows from the requirement that the matrix is equal to its transpose or to the opposite of its transpose. After all, the transpose of an #(m\times n)#-matrix is an #(n\times m)#-matrix, so the matrix can only be equal to its transpose or to the opposite of its transpose if #\rv{m,n} = \rv{n,m}#, that is, if #m=n#.