Linear Algebra 4: Reduced Row Echelon Form for solving Ax = 0

adam dhalla
7 min read · Jan 20, 2021

This is a continuation of my Linear Algebra series, meant as an extra resource to accompany Gilbert Strang’s class 18.06 on OCW.

Let’s introduce a small additional concept in linear algebra: the reduced row echelon form of a matrix. I’d highly recommend reading the article before this one, which details how to solve Ax = 0 in general.

Continuing right where we left off: we used elimination on our matrix A to get something in echelon form, which we called U. Well, there is one further simplification of U that can reveal many solutions (including the null space vectors) almost immediately.

We’re going to start from the matrix U calculated in the previous article. Our system currently looks like this:

Converting to Reduced Row Echelon Form (rref)

A matrix in reduced row echelon form has all pivots equal to 1, and zeros above (as well as below) the pivots. Let’s explore what this means with rectangular matrices.

Our pivots highlighted in green.

Picking up where we left off with our matrix in echelon form, let’s take the first step toward reduced row echelon form: making all the pivots 1.

We do this by dividing each row of U by that row’s pivot. This is allowed, since each row represents an equation, and dividing both sides of an equation by a nonzero number doesn’t change its solutions. We don’t have to worry about also dividing the right-hand side b, since b = 0 and zero is unchanged by division.

So, our first row stays the same, since its pivot is already 1. We must divide the second row by 2 (its pivot) to make that pivot 1. When we do this, we get the simpler matrix:
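
To make this step concrete, here is a minimal NumPy sketch. The matrix U below is a stand-in of my own, chosen to match the structure the article describes (pivots in columns 1 and 3, a second pivot of 2, free columns 2 and 4); the article’s actual U comes from the previous post.

```python
import numpy as np

# Stand-in echelon matrix U (my assumption, not necessarily the article's):
# pivots in columns 1 and 3, free columns 2 and 4, and a zero bottom row.
U = np.array([[1., 2., 2., 2.],
              [0., 0., 2., 4.],
              [0., 0., 0., 0.]])
pivot_cols = [0, 2]  # pivot columns, zero-indexed

# Divide each pivot row by its pivot so that every pivot becomes 1.
for row, col in enumerate(pivot_cols):
    U[row] /= U[row, col]

print(U)
# [[1. 2. 2. 2.]
#  [0. 0. 1. 2.]
#  [0. 0. 0. 0.]]
```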

Now, we want the numbers above the pivots to be equal to zero. To do this, we must eliminate upwards, or in other words, do Gauss-Jordan elimination. We have to multiply row 2 by 1 and subtract it from row 1 to cancel out the 1 above our second pivot.

After doing this, we get our matrix in reduced row-echelon form, often called R (we essentially did A → U → R).
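
In code, the upward elimination looks like the sketch below. Note that with my stand-in matrix the multiplier above the second pivot works out to 2 rather than 1, but the procedure is identical.

```python
import numpy as np

# U after the scaling step above (stand-in values, as before).
U = np.array([[1., 2., 2., 2.],
              [0., 0., 1., 2.],
              [0., 0., 0., 0.]])
pivot_cols = [0, 2]

# Eliminate upward (Gauss-Jordan): for each pivot, subtract the right
# multiple of the pivot's row from every row above it.
for row, col in enumerate(pivot_cols):
    for above in range(row):
        U[above] -= U[above, col] * U[row]

R = U
print(R)
# [[ 1.  2.  0. -2.]
#  [ 0.  0.  1.  2.]
#  [ 0.  0.  0.  0.]]
```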

Note that the zero in the top right-hand corner is specific to this example and is not typical of reduced row-echelon form; in general, we are only guaranteed zeros above the pivots. Again, the characteristics of a matrix in reduced row-echelon form are (all four are checked in the sketch after this list):

  • Pivots are the first nonzero number in their row
  • All entries below the pivots are zero
  • All pivots are 1
  • All entries directly above the pivots are zero
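
SymPy can carry out the whole A → U → R reduction in one call, which makes a handy check of all four properties. Matrix.rref() is a real SymPy method, but the matrix A is my stand-in, not necessarily the article’s.

```python
from sympy import Matrix

# Stand-in A (my assumption, chosen to match the structure above).
A = Matrix([[1, 2, 2, 2],
            [2, 4, 6, 8],
            [3, 6, 8, 10]])

# rref() returns the reduced matrix R and the pivot column indices.
R, pivots = A.rref()
print(R)       # Matrix([[1, 2, 0, -2], [0, 0, 1, 2], [0, 0, 0, 0]])
print(pivots)  # (0, 2): columns 1 and 3, zero-indexed here
```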

Reduced row-echelon form really only carries extra information for non-square or dependent matrices, since the reduced row-echelon form of a square matrix with independent columns (and thus an invertible matrix) is just the identity.

In fact, the way we compute the inverse of a square, independent matrix is by turning it (augmented with the identity) into reduced row-echelon form, but that’s not important to talk about in depth here.
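
For the curious, here is a brief sketch of that idea: row reduce the augmented matrix [A | I], and once the left half becomes the identity, the right half is A^-1. The 2 x 2 matrix is my own example, not one from the article.

```python
from sympy import Matrix, eye

# A small invertible matrix (my own example).
A = Matrix([[2, 1],
            [5, 3]])

# Row reduce the augmented matrix [A | I]; its rref is [I | A^-1].
augmented = A.row_join(eye(2))
R, _ = augmented.rref()
A_inv = R[:, 2:]

print(A_inv)      # Matrix([[3, -1], [-5, 2]])
print(A * A_inv)  # Matrix([[1, 0], [0, 1]]), the identity
```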

This is the simplest way to express a matrix; there is no reducing further than this.

So, how do we get information from a matrix in reduced row-echelon form? That’s why we did this whole thing anyway, right? First, we must understand something rather peculiar about our rref matrix.

Why the Identity Matrix Shows Up in the Pivot Columns

Let’s take our rref matrix and identify, once again, the pivot and free columns, which carry over unchanged from U.

Notice that, disregarding the ignorable bottom row of 0’s, the pivot columns alone form the identity matrix. Here it is highlighted for clarity.

You can see why this comes about with a bit of thinking. Remember that I said a completely invertible, independent, square matrix’s row-reduced form would be the identity (dividing each row by its pivot and clearing out all entries above and below the diagonal does this).

Well, you can think of it like this: if we got rid of the dependent (free) columns in this matrix, columns 2 and 4, we would have (disregarding the last 0 = 0 row) a completely independent 2 x 2 matrix. We would only have the two independent pivot columns, 1 and 3, left, which form a completely independent system, and therefore have the identity matrix as its rref.
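
You can see both blocks directly by slicing R. A small sketch, again using the stand-in R from earlier (SymPy’s extract method picks out rows and columns by index):

```python
from sympy import Matrix

# Stand-in R from the sketches above.
R = Matrix([[1, 2, 0, -2],
            [0, 0, 1, 2],
            [0, 0, 0, 0]])

# Keep the two pivot rows; split columns into pivot and free groups.
I_block = R.extract([0, 1], [0, 2])  # pivot columns 1 and 3
F_block = R.extract([0, 1], [1, 3])  # free columns 2 and 4

print(I_block)  # Matrix([[1, 0], [0, 1]]), the identity
print(F_block)  # Matrix([[2, -2], [0, 2]]), the block we'll call F
```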

Meanwhile, if we look at the dependent (free) columns, they hold the remaining entries of R; these are exactly the entries we’ll collect into a block called F in the next section.

Block Matrix Intuition for Finding Null Space Vectors in R

We can find the null space vectors quite quickly just by using a trick. I’ll first explain how this trick came to be, and then how we can use it.

This intuition needs a basic understanding of block matrices, and how we can split up a matrix into several smaller blocks. I found the best explanation of how to do that to be this one.
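
As a quick refresher, blocks multiply just like individual entries, provided the shapes line up. A toy NumPy sketch with my own illustrative blocks (this is exactly the pattern we’re about to use):

```python
import numpy as np

# Toy demonstration (my own numbers) that blocks multiply like entries:
# [A  B] @ [C; D] == A @ C + B @ D, as long as the shapes line up.
A = np.eye(2)
B = np.array([[2., -2.],
              [0., 2.]])
C = -B
D = np.eye(2)

M = np.block([[A, B]])    # 2 x 4, built from two side-by-side blocks
N = np.block([[C], [D]])  # 4 x 2, built from two stacked blocks

print(np.allclose(M @ N, A @ C + B @ D))  # True
print(M @ N)                              # all zeros: -B + B = 0
```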

Anyway, let’s look again at our rref matrix R. By rearranging the columns so that the pivot columns and free columns are grouped together, we can write R as a block matrix: the identity block I (from the pivot columns) next to a block F (from the free columns) on top, with zero blocks below.

Remember how, at the end of the last article, we showed that you could express the resulting null space vectors as a single matrix N (just by taking each vector as a column of N)? Well, now we can solve for the entire matrix N instead of finding the vectors separately.

Instead of analyzing Ux = 0, as we did before, we can solve RN = 0, where R is our block matrix. What does the block matrix N have to look like for RN to equal zero?

Well, I’ll give the answer and then explain why it works: N is the block matrix with -F on top and the identity I below.

Since we treat blocks just like individual entries inside a matrix, we can expand this out to see why it works. If we multiply the first row of our matrix R, (I, F), with our N, we get I(-F) + F(I) = -F + F = 0.

If we multiply the bottom row of R, we also get 0, since (0, 0) times the stacked blocks (-F; I) gives 0(-F) + 0(I) = 0.

So, we know the structure of every matrix N satisfying RN = 0: it is the negative of the free-column block F on top of the identity. This is pretty powerful, and lets us take a shortcut.
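
Here is that shortcut in code, with the columns of R kept in their original order: the -F entries land in the pivot-variable rows of N, and the identity entries land in the free-variable rows. Still a sketch built on the stand-in R.

```python
import numpy as np

# Stand-in R, columns in their original order (pivots in 1 and 3).
R = np.array([[1., 2., 0., -2.],
              [0., 0., 1., 2.],
              [0., 0., 0., 0.]])
pivot_cols, free_cols = [0, 2], [1, 3]

F = R[:2, free_cols]         # the free-column block F
N = np.zeros((4, 2))
N[pivot_cols, :] = -F        # -F goes in the pivot-variable rows
N[free_cols, :] = np.eye(2)  # the identity goes in the free-variable rows

print(N)      # the columns are the special solutions
print(R @ N)  # all zeros, so RN = 0
```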

Now, when we look at a matrix in reduced row-echelon form, we can immediately come up with the matrix N, just by looking at the columns.

Remember that the top part of our null space matrix N is the free-column block F with its signs reversed. To calculate this part immediately, identify the entries of the free columns (in the pivot rows) of our rref matrix R.

We can then take these entries, reverse their signs, and put them in the top part of N. For the identity, simply look at how many rows are left for the null space vectors to be of proper size. Here it’s two, since there are four variables in each null space vector (the number of columns in R) and two of them are pivot variables. So, we add an identity matrix of size 2 at the bottom.

Here it is! The null space matrix, with all the null space vectors inside it, just from observation of the R matrix alone. For the sake of completeness, let’s write out the columns in a linear combination to truly represent the null space.
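
As a sanity check, SymPy’s nullspace() method returns exactly these special solutions, the columns of N; the full null space is then every combination x = c1*n1 + c2*n2 of those columns. Again with the stand-in A:

```python
from sympy import Matrix

# Stand-in A, as before.
A = Matrix([[1, 2, 2, 2],
            [2, 4, 6, 8],
            [3, 6, 8, 10]])

# nullspace() returns a basis of special solutions: the columns of N.
for v in A.nullspace():
    print(v.T)
# Matrix([[-2, 1, 0, 0]])
# Matrix([[2, 0, -2, 1]])
```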

That’s the power the R matrix holds. We’ll explore it deeper in the next article, where we’ll use R to solve Ax = b.

Adam Dhalla is a high school student out of Vancouver, British Columbia, currently in the STEM and business fellowship TKS. He is fascinated with the outdoor world, and is currently learning about emerging technologies for an environmental purpose. To keep up, follow his Instagram and his LinkedIn. For more similar content, subscribe to his newsletter here.
