We have previously seen the idea of a linear combination of numbers. In this section we will look at forming linear combinations of vectors. The typical vector-equation problem asks: can we find coefficients so that a linear combination of some set of vectors (with those coefficients) is equal to a given vector?
Recall that when we formed linear combinations of numbers we were allowed to “multiply by constants and add things up.” So if we are planning to do the same thing with vectors we need to understand what it means to multiply a vector by a constant
and what it means to add vectors.
We use the term scalar to refer to real numbers, especially the numbers that we multiply vectors by. Calling them “constants” is probably not the best plan; both a scalar and a vector can be constant, which just means they aren't changing. It's usually more important to distinguish the vectors from the scalars: which things have multiple components and which don't? When we think of vectors as “those things that have both a direction and a magnitude,” the effect of multiplying by a scalar is to leave the direction unchanged, but change the magnitude by scaling it as the scalar indicates. If the scalar is between \(0\) and \(1\), the magnitude of the vector will be reduced; if the scalar is greater than \(1\) it will be increased. Of course, if the scalar is negative the direction will be affected, but in a rather simple way: the vector ends up pointing in the opposite direction.
When we have an actual vector and a scalar we'd like to multiply it by, the operation we perform is almost the only thing it could be! Just multiply each of the components of the vector by the scalar.
Definition 1.3.1 (scalar-vector product)
If \(\vec{v}\) is a vector having \(m\) components, \(\vec{v} = \langle v_1, v_2, \ldots , v_m \rangle\) and \(s\) is a scalar, then the scalar multiplication of \(\vec{v}\) by \(s\) is defined by
\begin{equation*}s\vec{v} = s \langle v_1, v_2, \ldots , v_m \rangle = \langle sv_1, sv_2, \ldots , sv_m \rangle \end{equation*}
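To make the definition concrete, here is a small Python sketch; the function name and the use of plain lists to represent vectors are our own choices, not notation from this text.

```python
# Definition 1.3.1 in code: multiply every component of the vector
# by the scalar.  Vectors are modeled as plain Python lists.

def scalar_multiply(s, v):
    """Return the scalar multiple s*v, computed componentwise."""
    return [s * v_i for v_i in v]

print(scalar_multiply(3, [1, 2, 3]))   # -> [3, 6, 9]
print(scalar_multiply(-1, [2, -5]))    # -> [-2, 5]
```

Note how multiplying by \(-1\) flips the sign of every component, which is exactly the “opposite direction” effect described above.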
The addition of vectors is best thought of in terms of “directions”. Suppose the directions to get from my house to the Kwik-E-Mart are: “go 3 blocks north and 1 block east” (call that vector \(\vec{v}\); we might write its component form as \(\vec{v} = \langle 1, 3 \rangle\)). Suppose in addition that the directions to go from the Kwik-E-Mart to Moe's Tavern are “go 1 block north and 2 blocks west”
(let's call this \(\vec{w} = \langle -2, 1\rangle\)). The vector sum describes the net change produced by following one set of directions and then the other. We don't have to be slavish about it, though: we don't literally follow the first set of directions and then do the second. The sum is the set of directions that takes us directly to Moe's without making a Kwik-E-Mart pit stop.
When we actually compute vector sums using the component forms of the vectors involved the computation is probably exactly what you would expect: just add up the corresponding components.
Definition 1.3.3 (vector addition)
If \(\vec{v}\) and \(\vec{w}\) are both vectors having \(m\) components,
\begin{gather*}
\vec{v} = \langle v_1, v_2, \ldots , v_m \rangle
\end{gather*}
and
\begin{gather*}
\vec{w} = \langle w_1, w_2, \ldots , w_m \rangle
\end{gather*}
then their vector sum is defined by
\begin{gather*}
\vec{v} + \vec{w} = \langle v_1+w_1, v_2+w_2, \ldots , v_m+w_m \rangle.
\end{gather*}
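As a quick illustration, here is a Python sketch of the definition, applied to the Kwik-E-Mart directions above (the function name and list representation are our own):

```python
# Definition 1.3.3 in code: add corresponding components.

def vector_add(v, w):
    """Componentwise sum of two vectors with the same number of components."""
    if len(v) != len(w):
        raise ValueError("vectors must have the same number of components")
    return [v_i + w_i for v_i, w_i in zip(v, w)]

v = [1, 3]    # house to Kwik-E-Mart: 1 block east, 3 blocks north
w = [-2, 1]   # Kwik-E-Mart to Moe's: 2 blocks west, 1 block north
print(vector_add(v, w))   # -> [-1, 4]: 1 block west, 4 blocks north, straight to Moe's
```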
One last definition will be needed to work with vector equations. What does it mean for two vectors to be equal to one another? The answer is probably entirely obvious, but we'll include a formal definition here for completeness.
Definition 1.3.5 (vector equality)
If \(\vec{v}\) and \(\vec{w}\) are two vectors of length \(m\) having components
\begin{gather*}
\vec{v} = \langle v_1, v_2, \ldots , v_m \rangle
\end{gather*}
and
\begin{gather*}
\vec{w} = \langle w_1, w_2, \ldots , w_m \rangle
\end{gather*}
then we say \(\vec{v}\) and \(\vec{w}\) are equal and write \(\vec{v} = \vec{w}\) if and only if for every \(i\), \(1\leq i \leq m\), \(v_i = w_i\).
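In code, the definition is just a componentwise comparison. For Python lists the built-in `==` already behaves this way, but spelling it out matches the definition exactly (the helper name is our own):

```python
# Definition 1.3.5 in code: vectors are equal exactly when they have the
# same number of components and every pair of corresponding components agrees.

def vectors_equal(v, w):
    return len(v) == len(w) and all(v_i == w_i for v_i, w_i in zip(v, w))

print(vectors_equal([1, 2, 3], [1, 2, 3]))   # -> True
print(vectors_equal([1, 2, 3], [1, 2, 4]))   # -> False
```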
Example 1.3.6 (a small vector problem)
Consider the following set of vectors: \(\langle 1, 1, 0 \rangle\),
\(\langle 1, 1, 1 \rangle\) and \(\langle 0, 0, 1 \rangle\). Is it possible to find scalars \(x_1\), \(x_2\) and \(x_3\) so that
\begin{gather*}
x_1 \langle 1, 1, 0 \rangle + x_2 \langle 1, 1, 1 \rangle + x_3 \langle 0, 0, 1 \rangle = \langle 2, 3, 4 \rangle
\end{gather*}
Solution. Let's rewrite the given equation using the definitions of (first) scalar multiplication and (then) vector addition:
\begin{gather*}
\langle x_1, x_1, 0 \rangle + \langle x_2, x_2, x_2 \rangle + \langle 0, 0, x_3 \rangle = \langle 2, 3, 4 \rangle
\end{gather*}
and then
\begin{gather*}
\langle x_1 + x_2, x_1 + x_2, x_2 + x_3 \rangle = \langle 2, 3, 4 \rangle .
\end{gather*}
Now (surprise!) that final form — after we use the definition of vector equality — becomes a system of three equations in three unknowns.
\begin{equation*}
\begin{alignedat}{4}
x_1 \amp {}+{} \amp x_2 \amp \amp \amp {}={} \amp 2 \\
x_1 \amp {}+{} \amp x_2 \amp \amp \amp {}={} \amp 3 \\
\amp \amp x_2 \amp {}+{} \amp x_3 \amp {}={} \amp 4
\end{alignedat}
\end{equation*}
This system is different from the other systems we've seen so far. It doesn't have a solution. Its statement includes an impossibility: if \(x_1\) and \(x_2\) have a sum of \(2\) (from the first equation), how can they also have a sum of \(3\) (which is what the second equation asserts)? So there simply aren't three numbers that can be used as the coefficients!
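The impossibility can also be checked numerically. In the sketch below (our own code, not part of the text), the first and second components of the combination are both \(x_1 + x_2\) no matter what scalars we try, so they can never equal \(2\) and \(3\) at the same time:

```python
import random

def combo(x1, x2, x3):
    # x1<1,1,0> + x2<1,1,1> + x3<0,0,1>, written out componentwise
    return [x1 + x2, x1 + x2, x2 + x3]

for _ in range(1000):
    x1, x2, x3 = (random.uniform(-10, 10) for _ in range(3))
    result = combo(x1, x2, x3)
    assert result[0] == result[1]   # the first two components always agree
print("no scalars give first component 2 and second component 3")
```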
Let's make a tiny change to the previous problem. Sometimes small changes have large effects! We'll change the second component in the vector on the right-hand side to a \(2\).
Example 1.3.7 (a slightly tweaked vector problem)
Consider the following set of vectors: \(\langle 1, 1, 0 \rangle\),
\(\langle 1, 1, 1 \rangle\) and \(\langle 0, 0, 1 \rangle\). Is it possible to find scalars \(x_1\), \(x_2\) and \(x_3\) so that
\begin{gather*}
x_1 \langle 1, 1, 0 \rangle + x_2 \langle 1, 1, 1 \rangle + x_3 \langle 0, 0, 1 \rangle = \langle 2, 2, 4 \rangle
\end{gather*}
Solution. Notice that since the vectors on the left-hand side are all the same as before, we can reuse our previous work. The final form of the vector equation is
\begin{gather*}
\langle x_1 + x_2, x_1 + x_2, x_2 + x_3 \rangle = \langle 2, 2, 4 \rangle .
\end{gather*}
Now, as a system of equations, we have
\begin{equation*}
\begin{alignedat}{4}
x_1 \amp {}+{} \amp x_2 \amp \amp \amp {}={} \amp 2 \\
x_1 \amp {}+{} \amp x_2 \amp \amp \amp {}={} \amp 2 \\
\amp \amp x_2 \amp {}+{} \amp x_3 \amp {}={} \amp 4
\end{alignedat}
\end{equation*}
and the first two equations are identical — they no longer cause a contradiction. This system not only has a solution, it has lots of them!
When one equation is an exact duplicate of the other, is there really any reason to retain both copies in the system? Remember that we are mostly concerned with solution sets to linear systems. Either of the copies of this equation will have the same effect on solution sets. For a given vector, they will both either say “Sure! it works for me, put it in the solution set” or “No way, that vector is not okay with me! It makes me false.” So, from the perspective of solution sets, this system is really just a system of two equations in three unknowns.
\begin{equation*}
\begin{alignedat}{4}
x_1 \amp {}+{} \amp x_2 \amp \amp \amp {}={} \amp 2 \\
\amp \amp x_2 \amp {}+{} \amp x_3 \amp {}={} \amp 4
\end{alignedat}
\end{equation*}
Subtracting the first equation from the second eliminates \(x_2\) and gives \(x_3 - x_1 = 2\), so the equations link the variables without pinning any of them down: any value of \(x_2\) will work, and then \(x_1 = 2 - x_2\) satisfies the first equation while \(x_3 = 4 - x_2\) satisfies the second. Not only is the solution not unique, the solution set for this system is infinite!
We can express the solution set of this system using set-builder notation and a parameter.
\begin{gather*}
\left\{ \langle 2-t, t, 4-t \rangle \suchthat t \in \Reals \right\}
\end{gather*}
Notice how the parameter \(t\) allows the values of \(x_1\) and \(x_2\) to range over all possibilities that add up to \(2\)? We essentially let \(x_2\) have any value whatsoever (\(t\) can be any real number) and then choose \(x_1\) so that the first equation is satisfied and \(x_3\) so that the second one is. In a situation like this, \(x_2\) is known as a free variable.
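As a sanity check (our own sketch, not part of the text), we can confirm that letting \(x_2 = t\) run free, with \(x_1\) and \(x_3\) chosen to satisfy the equations, really does reproduce the right-hand side \(\langle 2, 2, 4 \rangle\) of Example 1.3.7 for any sample of \(t\) values:

```python
def combo(x1, x2, x3):
    # x1<1,1,0> + x2<1,1,1> + x3<0,0,1>, written out componentwise
    return [x1 + x2, x1 + x2, x2 + x3]

for t in [-3, 0, 0.5, 2, 7]:
    x2 = t          # the free variable: any real number works
    x1 = 2 - x2     # forces x1 + x2 = 2
    x3 = 4 - x2     # forces x2 + x3 = 4
    assert combo(x1, x2, x3) == [2, 2, 4]
print("every sampled parameter value gives a solution")
```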