Let $V$ and $W$ be vector spaces. Because a linear mapping $L : V \to W$ is just a special case of a general mapping, we can speak, for example, of the image $L(v)$ of a vector $v$, of the image of a subset of $V$, and of the full inverse image of a subset of $W$.
Let $L : V \to W$ be a mapping, $U$ a subset of $V$, and $X$ a subset of $W$.
- By $L(U)$ we denote the image of $U$ under $L$: the set $\{ L(u) \mid u \in U \}$.
- By $L^{-1}(X)$ we denote the full inverse image of $X$ under $L$: the set $\{ v \in V \mid L(v) \in X \}$.
Let the mapping $f : \mathbb{R} \to \mathbb{R}$ be given by $f(x) = a x + b$.
The image of $\mathbb{R}$ under $f$ is equal to $\mathbb{R}$ if $a \neq 0$ and equal to $\{ b \}$ if $a = 0$.
The full inverse image of $\{ 0 \}$ under $f$ consists of the solutions of $a x + b = 0$ and is thus equal to

$$f^{-1}(\{0\}) = \begin{cases} \{ -b/a \} & \text{if } a \neq 0, \\ \mathbb{R} & \text{if } a = 0 \text{ and } b = 0, \\ \emptyset & \text{if } a = 0 \text{ and } b \neq 0. \end{cases}$$
We denote the full inverse image of $X$ under $L$ by $L^{-1}(X)$. This notation is the most common in mathematics, but it may lead to confusion with the notation for the inverse of a mapping. To avoid confusion when using this notation we often add the meaning in words, such as "full inverse image of $X$".
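The set-wise definitions above can be sketched computationally. The following Python fragment uses a hypothetical map $L(x, y) = (x + y,\ 2x + 2y)$ and small finite sample sets; it is an illustration of the definitions, not part of the text's own examples.

```python
# Hypothetical linear map L(x, y) = (x + y, 2x + 2y) on R^2, used to show how
# an image L(U) and a full inverse image L^{-1}(X) are formed set-wise.

def L(v):
    x, y = v
    return (x + y, 2 * x + 2 * y)

# A small finite subset U of the domain.
U = [(0, 0), (1, 0), (0, 1), (1, -1)]

# Image of U under L: the set {L(u) | u in U}.
image_of_U = {L(u) for u in U}

# Full inverse image of X = {(0, 0)}, restricted to a sampled grid of the
# domain: all sampled vectors v with L(v) in X.
grid = [(x, y) for x in range(-2, 3) for y in range(-2, 3)]
X = {(0, 0)}
preimage_of_X = {v for v in grid if L(v) in X}

print(image_of_U)     # {(0, 0), (1, 2)}: (0,0) and (1,-1) share the image (0,0)
print(preimage_of_X)  # the sampled points of the line y = -x
```

Note that distinct vectors may share an image, so $L(U)$ can be smaller than $U$, while a full inverse image of a single vector can contain many vectors.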
Two important linear subspaces can be associated with linear mappings.
Let $L : V \to W$ be a mapping. Define

$$\operatorname{im}(L) = \{ L(v) \mid v \in V \} \quad\text{and}\quad \ker(L) = \{ v \in V \mid L(v) = 0 \}.$$

$\operatorname{im}(L)$ is called the image or the image space of $L$ and $\ker(L)$ is called the null space or kernel of $L$.
The image space is the image of $V$ under the mapping $L$ and the null space is the full inverse image of $\{ 0 \}$ under $L$.
The first definition generalizes the column space of the coefficient matrix (that is to say: the span of the columns of the matrix) of a system of linear equations.
The second definition generalizes the solution space of a homogeneous system of linear equations.
As indicated before, an $m \times n$-matrix $A$ is also used to refer to the linear map $L_A$ determined by it (given by $L_A(x) = A x$). In this way the image and kernel of $A$ are also defined: $\operatorname{im}(A) = \operatorname{im}(L_A)$ and $\ker(A) = \ker(L_A)$.
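As a small sketch of this identification, the Python fragment below implements the map $L_A(x) = A x$ for a hypothetical $2 \times 3$ matrix $A$ (chosen for illustration only) and exhibits one vector of $\ker(A)$ and one vector of $\operatorname{im}(A)$.

```python
# A hypothetical 2x3 matrix A, identified with the linear map L_A(x) = A x,
# so that ker(A) = ker(L_A) and im(A) = im(L_A).

A = [[1, 2, 1],
     [0, 1, 1]]

def L_A(x):
    """Apply the linear map determined by A: the matrix-vector product A x."""
    return [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in A]

# The vector (1, -1, 1) satisfies A x = 0, so it lies in ker(A) = ker(L_A).
print(L_A([1, -1, 1]))   # [0, 0]

# Every value L_A(x) lies in im(A); e.g. the first column of A is L_A(e1).
print(L_A([1, 0, 0]))    # [1, 0]
```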
As the name indicates, the image space and the null space of a linear mapping are linear subspaces:
Let $L : V \to W$ be a linear mapping.
- The image $\operatorname{im}(L)$ of $L$ is a linear subspace of $W$.
- The kernel $\ker(L)$ of $L$ is a linear subspace of $V$.
Let the mapping $f : \mathbb{R} \to \mathbb{R}$ be given by $f(x) = a x + b$. The image space equals $\mathbb{R}$ if $a \neq 0$. But if $a = 0$ and $b \neq 0$ (so $f$ is a constant mapping distinct from $0$), then the image space is $\{ b \}$, not a linear subspace. The kernel consists of the solutions of $a x + b = 0$ and is thus equal to

$$\ker(f) = \begin{cases} \{ -b/a \} & \text{if } a \neq 0, \\ \mathbb{R} & \text{if } a = 0 \text{ and } b = 0, \\ \emptyset & \text{if } a = 0 \text{ and } b \neq 0. \end{cases}$$

In particular, $\ker(f)$ is not a linear subspace of $\mathbb{R}$ if $b \neq 0$. We see again that $f$ is linear only if $b = 0$.
The image space is a linear subspace of $W$: It is a subset of $W$. The zero vector of $W$ is the image of the zero vector of $V$ and thus belongs to $\operatorname{im}(L)$. If $w_1$, $w_2$ belong to $\operatorname{im}(L)$ and $\alpha$ and $\beta$ are scalars, then there are vectors $v_1$, $v_2$ in $V$ such that $L(v_1) = w_1$ and $L(v_2) = w_2$, so linearity of $L$ implies:

$$\alpha w_1 + \beta w_2 = \alpha L(v_1) + \beta L(v_2) = L(\alpha v_1 + \beta v_2).$$

We conclude that $\alpha w_1 + \beta w_2$ belongs to $\operatorname{im}(L)$, from which we deduce that $\operatorname{im}(L)$ is a linear subspace of $W$.
The kernel is a linear subspace of $V$: It is a subset of $V$ which always contains the zero vector (since $L(0) = 0$). Further, it follows from the linearity of $L$ that if $v_1$ and $v_2$ belong to $\ker(L)$ and $\alpha$ and $\beta$ are scalars, we have

$$L(\alpha v_1 + \beta v_2) = \alpha L(v_1) + \beta L(v_2) = \alpha \cdot 0 + \beta \cdot 0 = 0,$$

so $\alpha v_1 + \beta v_2 \in \ker(L)$.
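The closure argument in the proof can be checked numerically. The sketch below uses a hypothetical linear map $L(x, y, z) = x + y - z$ and verifies that a linear combination of two kernel vectors again lies in the kernel.

```python
# Numerical sketch of the closure argument for the kernel, using the
# hypothetical linear map L(x, y, z) = x + y - z.

def L(v):
    x, y, z = v
    return x + y - z

v1 = (1, 0, 1)   # L(v1) = 0, so v1 is in ker(L)
v2 = (0, 1, 1)   # L(v2) = 0, so v2 is in ker(L)

# An arbitrary linear combination alpha*v1 + beta*v2.
alpha, beta = 3, -2
combo = tuple(alpha * a + beta * b for a, b in zip(v1, v2))

print(L(v1), L(v2), L(combo))   # 0 0 0: the kernel is closed under combinations
```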
We determine the kernel and the image space of $L_A$, the linear mapping determined by a matrix $A$. This means that, if we use column vectors, the mapping rule is $L_A(x) = A x$. The null space of $L_A$ consists of all vectors $x$ that satisfy $A x = 0$. This is a homogeneous system of linear equations with $A$ as coefficient matrix; in this example its solution set is a plane through the origin.

The image space consists of all vectors of the form $A x$, that is, vectors of the form $x_1 a_1 + x_2 a_2 + \cdots + x_n a_n$, where $a_1, \ldots, a_n$ are the columns of $A$. This describes exactly the span of the columns of $A$, that is, the column space. We conclude that the image space equals the column space of $A$.
We also determine the full inverse image of a straight line $\ell$ with a parametric representation of the form $p + t\, q$. So we are looking for vectors $x$ which satisfy $A x = p + t\, q$ for some scalar $t$. This means that we must solve the system of linear equations with augmented matrix $(\,A \mid p + t\, q\,)$. By row reduction it is easy to deduce that this system has solutions for only one value of $t$; for that value, the solutions form a plane.

Can you see, on the basis of the relative position of $\ell$ and the image space, why the calculation of the full inverse image of $\ell$ can be limited to the calculation of the full inverse image of a single vector on $\ell$?
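The situation in this example can be reproduced with a stand-in matrix. The rank-1 matrix below is a hypothetical choice: its null space is the plane $x + y - z = 0$ and its image space is the line spanned by $(1, 2, 3)$, so a line that meets the image space in a single vector has a plane as full inverse image.

```python
# Hypothetical rank-1 matrix, chosen so that ker(L_A) is a plane and im(L_A)
# is a line, as in the example above.

A = [[1, 1, -1],
     [2, 2, -2],
     [3, 3, -3]]

def L_A(v):
    """Matrix-vector product A v."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

# Two spanning vectors of the plane x + y - z = 0 are mapped to 0, so that
# plane lies in the null space:
print(L_A([1, 0, 1]), L_A([0, 1, 1]))   # [0, 0, 0] [0, 0, 0]

# Every image vector is a multiple of (1, 2, 3), so im(L_A) is that line:
print(L_A([1, 0, 0]))                   # [1, 2, 3]
```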
If $A$ is a matrix, then the kernel of $L_A$, the linear map determined by $A$, is the solution space of the homogeneous system of linear equations with coefficient matrix $A$, and the image of $L_A$ is the column space of $A$ (the span of the columns of the matrix), as shown in the previous example.
The null space of $L_A$ consists of all vectors $x$ that satisfy $L_A(x) = 0$, that is to say, all solutions of the homogeneous system $A x = 0$.
The image space consists of all vectors of the form $A x$. If $A$ has columns $a_1, \ldots, a_n$, then this is exactly the subset

$$\{ x_1 a_1 + x_2 a_2 + \cdots + x_n a_n \mid x_1, x_2, \ldots, x_n \text{ scalars} \}$$

of the codomain. This is the column space of $A$.
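The identity $A x = x_1 a_1 + \cdots + x_n a_n$ behind this description can be verified directly. The fragment below computes both sides for a hypothetical $3 \times 2$ matrix and a hypothetical coefficient vector.

```python
# Sketch: A x equals the linear combination x1*a1 + x2*a2 of the columns of A,
# which is why im(A) is the column space. Matrix and vector are illustrative.

A = [[1, 0],
     [2, 1],
     [0, 3]]
x = [2, -1]

# Matrix-vector product A x, computed row by row.
Ax = [sum(a * xi for a, xi in zip(row, x)) for row in A]

# The same vector as the combination x[0]*a1 + x[1]*a2 of the columns of A.
a1, a2 = zip(*A)                  # a1 = (1, 2, 0), a2 = (0, 1, 3)
combo = [x[0] * c1 + x[1] * c2 for c1, c2 in zip(a1, a2)]

print(Ax, combo)   # [2, 3, -3] [2, 3, -3]: the two computations agree
```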
Null space and image space thus generalize two concepts from the world of matrices.
Consider the orthogonal projection $P_\ell$ in $\mathbb{R}^2$ on a line $\ell$ through the origin, spanned by a vector $a$. If we take the length of $a$ to be equal to $1$, we can describe $P_\ell$ algebraically by the mapping rule

$$P_\ell(x) = (x \cdot a)\, a .$$
- The image space of $P_\ell$ is equal to $\ell$. This is geometrically obvious (each point is projected onto $\ell$), and can also be derived algebraically from the mapping rule: the rule shows that each vector in the image is a scalar multiple of $a$. Because the image is a linear subspace and $a = P_\ell(a)$ belongs to it, the image space must coincide with $\ell$.
- The null space is the line that is perpendicular to $\ell$ and passes through the origin. Indeed, the null space consists of all vectors $x$ for which $(x \cdot a)\, a = 0$, that is, $x \cdot a = 0$; this is precisely the orthogonal complement of $\ell$.
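Both observations can be checked on a concrete instance. The sketch below takes the illustrative unit vector $a = (1, 0)$, so $\ell$ is the horizontal axis and the orthogonal complement is the vertical axis.

```python
# Sketch of the projection P(x) = (x . a) a onto the line l spanned by the
# unit vector a; here a = (1, 0) in R^2 is an illustrative choice.

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

a = (1.0, 0.0)                     # unit vector spanning l

def P(x):
    """Orthogonal projection onto l via the mapping rule (x . a) a."""
    c = dot(x, a)
    return tuple(c * ai for ai in a)

print(P((3.0, 4.0)))   # (3.0, 0.0): every image vector is a multiple of a
print(P((0.0, 5.0)))   # (0.0, 0.0): vectors perpendicular to a form ker(P)
```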
We discuss the link between systems of linear equations and affine subspaces again, this time from the point of view of linear mappings.
Let $L : V \to W$ be a linear mapping and consider the vector equation $L(x) = w$.
The solution set of the equation is equal to the full inverse image $L^{-1}(\{w\})$ of $w$ under $L$. In particular: the equation has a solution if and only if $w$ belongs to $\operatorname{im}(L)$, and if $x_0$ is a particular solution, then the solution set is the affine subspace $x_0 + \ker(L)$.
In general, the particular solution is not unique: each solution can act as a particular solution. Likewise, each vector in the affine subspace $x_0 + \ker(L)$ can act as a support vector.
The theorem General and particular solution states that we can find all solutions of the vector equation by adding a particular solution to the solutions of the corresponding homogeneous equation. The corresponding homogeneous equation is (by definition) the equation $L(x) = 0$. The set of all solutions of this equation is the null space $\ker(L)$. In particular, the equation $L(x) = w$ has at most one solution if $\ker(L) = \{ 0 \}$.
The first part of the statement is trivial. For the proof of the second part, we let $x_0$ be a particular solution. For each $v \in \ker(L)$, we have $L(x_0 + v) = L(x_0) + L(v) = w + 0 = w$. Therefore, $x_0 + v$ is also a solution. Conversely, if $x$ is a solution, then $L(x - x_0) = L(x) - L(x_0) = w - w = 0$, so $x - x_0 \in \ker(L)$. Since $x = x_0 + (x - x_0)$, the vector $x$ is indeed the sum of $x_0$ and a vector from the kernel. We conclude that the set of all solutions is the affine subspace $x_0 + \ker(L)$.
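The "particular solution plus kernel vector" structure can be illustrated numerically. The sketch below uses the hypothetical linear map $L(x, y) = x + y$ and the equation $L(v) = 2$; every vector $x_0 + t k$ with $k \in \ker(L)$ solves the equation.

```python
# Sketch of the affine solution set x0 + ker(L), for the hypothetical
# linear map L(x, y) = x + y and the equation L(v) = 2.

def L(v):
    x, y = v
    return x + y

x0 = (2, 0)            # particular solution: L(x0) = 2
k = (1, -1)            # kernel vector:       L(k)  = 0

# Every vector x0 + t*k is again a solution of L(v) = 2.
for t in (-1, 0, 2):
    v = (x0[0] + t * k[0], x0[1] + t * k[1])
    print(L(v))        # 2 each time
```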
This property is often used in solving linear differential equations.