Linear independence

A set of vectors in a vector space is linearly dependent if one of them can be written as a linear combination of the others – if it "depends" on the others. Vectors which are not linearly dependent are linearly independent.


A geometric example: picture three different vectors in three-dimensional Euclidean space, drawn as arrows from the origin.

If any two of the vectors are parallel, then one is a scalar multiple of the other. A scalar multiple is a linear combination, so the vectors are linearly dependent.


If no two of the vectors are parallel but all three lie in a plane through the origin, then any two of them span that plane. Since the third vector also lies in this plane, it is a linear combination of the first two, so the vectors are linearly dependent.


If the three vectors don't all lie in some plane through the origin, none is in the span of the other two, so none is a linear combination of the other two. The three vectors are linearly independent.
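For three vectors in three-dimensional space, there is a quick numerical way to tell these cases apart: the vectors fail to lie in a common plane through the origin exactly when the determinant of the matrix having them as columns is nonzero. Here is a short Python sketch of that check (not part of the original discussion; the vectors u, v, w are just an illustrative choice):

import numpy as np

# Three vectors in R^3, used as the columns of a 3 x 3 matrix.
# (These particular vectors are only an illustrative choice.)
u = np.array([1.0, 0.0, 2.0])
v = np.array([0.0, 1.0, 1.0])
w = np.array([1.0, 1.0, 0.0])

A = np.column_stack([u, v, w])

# The determinant is 0 exactly when the three vectors lie in a common
# plane through the origin (or two of them are parallel), i.e. when
# they are linearly dependent.
det = np.linalg.det(A)

if abs(det) > 1e-12:
    print("det =", det, "-> linearly independent")
else:
    print("det =", det, "-> linearly dependent")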


If you have a large collection of vectors, checking for linear independence by checking separately whether each vector is a linear combination of the others could be very tedious. Fortunately, there's a way to do all the checks at once.

The vectors v1, v2, ..., vk are linearly independent if and only if the linear dependence equation
          c1v1 + c2v2 + ... + ckvk = 0
has only the trivial solution
          c1 = c2 = ... = ck = 0.

Proof. The linear dependence equation always has at least the trivial solution c1 = c2 = ... = ck = 0, so the question is whether it has any non-trivial solutions.

Suppose it also has a solution with one of the ci's not zero, say c1 ≠ 0. Then you can solve for v1 as a linear combination of the others:

v1 = –(c2/c1)v2 – (c3/c1)v3 – ... – (ck/c1)vk,

so the vectors are linearly dependent.

On the other hand, suppose the vectors are linearly dependent, with (say) v1 a linear combination of the others:

v1 = c2v2 + ... + ckvk.

Then (–1)v1 + c2v2 + ... + ckvk = 0, so the linear dependence equation has a solution with some ci not zero (specifically c1 = –1). Thus the vectors are linearly dependent exactly when the linear dependence equation has a non-trivial solution, which is equivalent to the statement above.

 

To determine whether or not a set of vectors is linearly independent

Set up the linear dependence equation for those vectors and solve it.

If you get only the trivial solution (all coefficients zero), the vectors are linearly independent.

If you get any solution other than the trivial solution, the vectors are linearly dependent.
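As a concrete illustration of this procedure, here is a short Python (SymPy) sketch, not part of the original notes; the vectors v1, v2, v3 are an arbitrary illustrative choice. Putting the vectors into the columns of a matrix, the solutions (c1, ..., ck) of the linear dependence equation are exactly the vectors in that matrix's null space.

from sympy import Matrix

# Illustrative vectors in R^3 (v3 = 2*v2 - v1, so expect a dependence).
v1 = Matrix([1, 2, 3])
v2 = Matrix([4, 5, 6])
v3 = Matrix([7, 8, 9])

# The columns of A are the vectors; A*c = 0 is the linear dependence equation.
A = Matrix.hstack(v1, v2, v3)

null_basis = A.nullspace()

if not null_basis:
    print("Only the trivial solution: linearly independent.")
else:
    print("Non-trivial solution(s): linearly dependent.")
    for c in null_basis:
        # Each null space vector lists coefficients (c1, c2, c3).
        print("coefficients:", list(c))

With these vectors the code prints the coefficients 1, -2, 1, which give the non-trivial relation v1 - 2v2 + v3 = 0.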

 

Some important examples. (You can check these by setting up a linear dependence relation and solving for the coefficients.)

The elementary vectors e1, e2, ..., en in R^n are linearly independent.

The matrices

          [ 1 0 ]    [ 0 1 ]    [ 0 0 ]    [ 0 0 ]
          [ 0 0 ],   [ 0 0 ],   [ 1 0 ],   [ 0 1 ]

in the space of 2 x 2 matrices are linearly independent.

The polynomials 1, x, x^2, x^3, ..., x^n are linearly independent.
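If you want to verify the 2 x 2 matrix example by machine, here is a small Python sketch (not part of the original notes). Flattening each matrix into a vector in R^4 turns the dependence equation for matrices into the ordinary one for vectors, and rank 4 means that equation has only the trivial solution.

import numpy as np

# The four elementary 2 x 2 matrices (a single entry 1, all others 0).
E11 = np.array([[1, 0], [0, 0]])
E12 = np.array([[0, 1], [0, 0]])
E21 = np.array([[0, 0], [1, 0]])
E22 = np.array([[0, 0], [0, 1]])

# Flatten each matrix into a vector in R^4 and use it as a column.
A = np.column_stack([E.flatten() for E in (E11, E12, E21, E22)])

# Rank 4 means the linear dependence equation has only the trivial
# solution, so the four matrices are linearly independent.
print("rank =", np.linalg.matrix_rank(A))   # prints: rank = 4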