Linear Algebra 6: Rank, Basis, Dimension

adam dhalla

This is a continuation of my Linear Algebra series, which should be viewed as an extra resource while going along with Gilbert Strang’s class 18.06 on OCW. It corresponds closely to Lectures 9 and 10 in his series.

Today we tackle a topic that we’ve already seen but not discussed formally: the rank of a matrix. It is possibly the most important idea to cover on this side of linear algebra, and the two other ideas, basis and dimension, fall out of it naturally.

Rank

To put it simply, the rank of a matrix is the number of independent columns in the matrix. This number, r, is very important when examining a matrix. Let’s find the rank of this matrix.

The rank of this matrix is 2. That is because we have two independent columns, columns 1 and 2. The third column is a multiple of the first column, and is thus dependent. Simply, r = 2.
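
Since the original figure isn’t reproduced here, here is a minimal sketch in NumPy using a hypothetical matrix with the same structure (third column a multiple of the first), showing how the rank and the free-column count can be checked numerically:

```python
import numpy as np

# Hypothetical 3x3 matrix with the structure described above:
# column 3 is 2x column 1, so only two columns are independent.
A = np.array([[1, 0, 2],
              [2, 1, 4],
              [3, 2, 6]])

r = np.linalg.matrix_rank(A)  # number of independent (pivot) columns
n = A.shape[1]                # total number of columns

print(r)      # 2 -> rank
print(n - r)  # 1 -> number of dependent (free) columns
```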

But we can get more from this number. If we were looking at this in the context of a system of equations, solving either Ax = b or Ax = 0, we would be distinguishing free columns from pivot columns.

We then realize that the number of pivot columns is the rank, since pivot columns are exactly the independent columns.

Furthermore, in an m × n matrix, the number of free columns is n - r.

So far:

  • r is the number of independent / pivot columns, as well as pivot variables
  • n - r is the number of dependent / free columns, as well as free variables

And in extension,

  • n - r is the number of vectors that span the null space
  • r is the number of vectors that span the column space

It can be said that the rank of a matrix is its “true size”. Even if you have some m = 100, n = 200 matrix, you could describe the same column space with at most 100 columns instead of 200. This brings us into a discussion about basis.

Basis

A basis is the smallest possible set of vectors that can be used to describe a vector space. A vector space has infinitely many bases. For example, the pairs {(1, 0), (0, 1)} and {(1, 1), (1, -1)} are both bases of R².

In other words, if we take combinations of any of these pairs of basis vectors, we can make any vector in R². These basis vectors are always completely independent.
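
As a quick numerical check (a sketch, with vector pairs of my own choosing since the original figure isn’t shown): two vectors form a basis of R² exactly when the matrix holding them as columns has rank 2, or equivalently a nonzero determinant.

```python
import numpy as np

# Two candidate bases for R^2, stored as the columns of each matrix.
standard = np.array([[1, 0],
                     [0, 1]])
tilted = np.array([[1, 1],
                   [1, -1]])

for B in (standard, tilted):
    # The columns span R^2 exactly when the determinant is nonzero.
    print(np.linalg.det(B) != 0)  # True for both
```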

Important:

  • All vectors in a basis are linearly independent
  • The vectors must span the space in question.

In extension, a matrix whose columns form a basis has nothing in its null space except the zero vector.

When looking at a matrix with dependent columns, we can analyze the matrix to find a smaller set of basis vectors that spans the same space as its columns.

This is a dependent matrix: the last column is a multiple of the first column. The columns span a two-dimensional plane inside three-dimensional space. We can describe a two-dimensional plane with just two vectors, so how can we reduce these three vectors to two?

Well, we can just take the two independent columns. The first and the second. These two vectors span the exact same space.

These two vectors form one of the many bases for the column space of the matrix we were dealing with.
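
Here is a sketch of the same process in SymPy, using a stand-in matrix that matches the description (three columns in R³, with the last column a multiple of the first): the rref() method reports which columns are pivots, and those columns of the original matrix form a basis for the column space.

```python
import sympy as sp

# Stand-in for the matrix described above: column 3 is 3x column 1.
A = sp.Matrix([[1, 0, 3],
               [2, 1, 6],
               [1, 1, 3]])

# rref() returns the reduced matrix and the indices of the pivot columns.
_, pivot_cols = A.rref()
print(pivot_cols)  # (0, 1) -> the first and second columns are pivots

# The pivot columns of the ORIGINAL matrix are a basis for C(A).
basis = [A.col(i) for i in pivot_cols]
```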

Dimension

Dimension is possibly the simplest concept: it is the number of dimensions that the columns, or vectors, span. The dimension of the column space of the above matrix is 2, since its columns span a two-dimensional plane.

As a general rule, the rank equals the dimension of the column space: r = dim C(A).

This would be a graph of what our column space for A could look like: a 2D plane, spanned by our two independent basis vectors, sitting inside an R³ environment.

Full Rank; r = m = n

Often we deal with the case of full rank: where r = m = n. In this case, our matrix is necessarily square, since m = n. Furthermore, this means that every column (and every row) is independent.

This means that our square, ‘full rank’ matrix is a basis for its own column space, since its columns are the smallest, most ‘efficient’ way to describe that space. There is no more succinct way to describe the column space of a matrix than the columns of a full rank matrix.

All of these above matrices are full rank matrices. Their rows and columns are independent. They have nothing in their null space except the zero vector.

When doing elimination on a matrix A that is full rank, you will have no problems getting a pivot in each row and column.

If you do Gauss-Jordan elimination on a full rank matrix (bringing it to reduced row-echelon form), you will get the identity matrix I as the result: eliminating all entries above and below the pivots leaves no gaps, and dividing each row by its pivot puts 1s across the diagonal.

Full rank matrices are also invertible, as the columns can combine to create each column of the identity matrix.

When viewed in a linear system of equations context, this means there is one unique solution to any linear system where A is a full rank matrix. This is because the columns of A can combine in one unique way to form any answer b, since any b is in the column space of A.
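
A minimal sketch of this case, with a hypothetical invertible 3 × 3 matrix: because r = m = n, np.linalg.solve finds the one unique x for any right-hand side b.

```python
import numpy as np

# Hypothetical full rank (invertible) 3x3 matrix: r = m = n = 3.
A = np.array([[2, 1, 0],
              [1, 3, 1],
              [0, 1, 4]], dtype=float)
b = np.array([1.0, 2.0, 3.0])

# Exactly one solution exists for ANY b, since C(A) is all of R^3.
x = np.linalg.solve(A, b)
print(np.allclose(A @ x, b))  # True
```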

Full Column Rank; r = n, r < m

In full column rank, your matrix is made up of fully independent columns (since r = n: one pivot in each column). The difference is that you have dependent, leftover rows.

In a system of equations, this means that we have as many pivots as unknowns, but more equations than we need. Let’s look at a matrix A with full column rank in the context of a system of equations.

The bottom two rows of the matrix cancel out since they are multiples of the first two. The third row is the first row x 2, and the fourth row is the second row x 2.

Thus, when we cancel out, we get the equivalent system Ux = c.


Now Ux = c can only be solved if the two solvability conditions hold: b₃ - 2b₁ = 0 and b₄ - 2b₂ = 0.

But if those two conditions are true, we are left with a single answer to solve our system. That is why we say that with an r = n (full column rank) matrix, we either have 0 or 1 solutions: 0 if one or more of our solvability conditions is false, and 1 if all solvability conditions are true. Most likely, if we were to randomly choose a b, it would not satisfy these conditions, since a random vector in Rᵐ is unlikely to land in the column space of A.

Finally, if we convert any full column rank matrix into reduced-row echelon form, we get the identity with zeros attached on the bottom.

This makes sense. Taking just the top two rows of our original matrix A, we have a completely full rank system, with pivots in all columns and rows. Once we row-reduce the top part, we should expect an identity. The bottom rows are completely redundant (dependent) and will be canceled out by elimination.

So, any full column rank matrix in reduced row-echelon form can be expected to look like the identity I stacked on top of a block of zeros.

Full column rank matrices have nothing in their null space except the zero vector, since there are no free columns or free variables: although there may be dependent rows, there are no dependent columns.
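
Here is a sketch of the 0-or-1-solution behavior, using a stand-in full column rank matrix shaped like the example above (rows 3 and 4 are 2 × rows 1 and 2), together with the two solvability conditions from the text:

```python
import numpy as np

# Stand-in full column rank matrix: r = n = 2 < m = 4.
# Row 3 is 2x row 1 and row 4 is 2x row 2, mirroring the example.
A = np.array([[1, 0],
              [0, 1],
              [2, 0],
              [0, 2]], dtype=float)

def solvable(b):
    # The solvability conditions: b3 - 2*b1 = 0 and b4 - 2*b2 = 0.
    return np.isclose(b[2] - 2 * b[0], 0) and np.isclose(b[3] - 2 * b[1], 0)

print(solvable(np.array([1.0, 2.0, 2.0, 4.0])))  # True  -> exactly 1 solution
print(solvable(np.array([1.0, 2.0, 5.0, 4.0])))  # False -> 0 solutions
```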

Full Row Rank; r = m, r < n

Full row rank is when our matrix has as many pivots as rows. In this scenario, our matrix does have free variables and free columns, and thus has nonzero vectors in the null space.

Let’s look at an equation with full row rank:

The second column is the first column x 2, and the fourth column is the second column x 2. Thus, when we eliminate, we get the updated equation Ux = c.

It’s hard to see the point from here, so let’s simplify further to the simplest system possible, Rx = d, using Gauss-Jordan elimination to get the reduced row-echelon form matrix.

I won’t carry on with the right hand side since that will get out of hand with the division and all of that, but once we’re done, our matrix looks like this:

Familiar? It looks like the full column rank matrix, but on its side.

In this scenario, we get a few interesting outcomes. Although I haven’t copied over the right-hand side, since it would be too tedious for the value it adds, we can still see the possible solutions.

Since our right-hand side b is two-dimensional, and the first two columns of our matrix form a basis for two-dimensional space, we can solve for any b.

But more important than that, we also have our ‘free columns’, which multiply our ‘free variables’ z and t. We can set z and t to any constants we like and then solve for the pivot variables, so every choice gives another solution. From this, we can say that we have infinitely many solutions for any b.

Unlike the last two cases, since n > r, we have exactly n - r free variables and free columns, and in extension, n - r special solutions in the null space.

Here, since we have two dependent / free columns, we will have two special solutions spanning the null space. To find them we can use the method described in part 4 of this series: finding exact solutions to Ax = 0.
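
A sketch of finding those special solutions with SymPy, using a stand-in 2 × 4 full row rank matrix with the dependencies described above (column 2 = 2 × column 1, column 4 = 2 × column 2):

```python
import sympy as sp

# Stand-in full row rank matrix: r = m = 2 < n = 4.
A = sp.Matrix([[1, 2, 0, 4],
               [0, 0, 1, 0]])

# nullspace() returns the n - r = 2 special solutions that span N(A).
for v in A.nullspace():
    print(v.T)  # Matrix([[-2, 1, 0, 0]]) and Matrix([[-4, 0, 0, 1]])
```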

Recap

In full rank matrices, or r = m = n

  • The matrix must be square
  • There is one unique solution to every b.
  • The reduced-row echelon form R is the identity I.
  • There is nothing in the null space
  • The matrix is invertible

In full column rank matrices, or r = n < m

  • There are 0 or 1 solutions for every b
  • The reduced-row echelon form R is the identity I on top of a zero matrix
  • There is nothing in the null space

In full row rank matrices, or r = m < n

  • There are infinitely many solutions for every b.
  • The reduced-row echelon form R is the identity I to the left of a zero matrix.
  • There are n - r special solutions in the null space.

From this, we can confidently say that from m, n, and r alone, we can accurately predict how many solutions we should expect, if any.
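
As a closing sketch, this hypothetical helper turns the recap into code: given only the shape and rank of A, it reports how many solutions Ax = b can have. (The final case, r < m and r < n, is the one this article doesn’t cover; it is included here only for completeness.)

```python
import numpy as np

def expected_solutions(A):
    """Predict the solution count for Ax = b from m, n, and r alone."""
    m, n = A.shape
    r = np.linalg.matrix_rank(A)
    if r == m == n:
        return "exactly 1 solution for every b"
    if r == n < m:
        return "0 or 1 solutions, depending on b"
    if r == m < n:
        return "infinitely many solutions for every b"
    return "0 or infinitely many solutions, depending on b"  # r < m, r < n
```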

Adam Dhalla is a high school student out of Vancouver, British Columbia, currently in the STEM and business fellowship TKS. He is fascinated with the outdoor world, and is currently learning about emerging technologies for an environmental purpose.

Follow his Instagram, and his LinkedIn. For more similar content, subscribe to his newsletter here.
