This is a continuation of my Linear Algebra series, meant as an extra resource to accompany Gilbert Strang’s class 18.06 on OCW. This article corresponds most closely to Lecture 8 in his series.
Here, we continue our discussion of solving linear systems with elimination. In my Gaussian Elimination series, we explored how square, invertible systems can be solved by the method of elimination and row exchanges, but we never delved into solving rectangular, non-invertible systems.
In the last lesson, we explored how non-square systems can be solved by using Gaussian elimination. …
Let’s introduce a small additional concept in linear algebra: the Reduced Row Echelon Form of a matrix. I’d highly recommend reading the previous article first, which details how to solve Ax = 0 in general.
Picking up right where we left off, we eliminated through our matrix A to get something in echelon form, symbolized by the variable U. …
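To see the reduced row echelon form concretely, here is a minimal sketch using SymPy’s `rref()`; the rectangular-rank matrix below is my own invented example, not one from the article:

```python
from sympy import Matrix

# An invented 3x3 example with dependent rows, so elimination
# produces a zero row and only two pivots.
A = Matrix([
    [1, 2, 3],
    [2, 4, 6],
    [1, 1, 2],
])

R, pivot_cols = A.rref()  # R is the reduced row echelon form of A
print(R)           # pivots are 1, with zeros above and below each pivot
print(pivot_cols)  # indices of the pivot columns
```

`rref()` goes one step past the echelon form U: it scales each pivot to 1 and clears the entries above the pivots as well as below.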
This article assumes a base knowledge of basic matrix and vector manipulation concepts such as dot products and linear combinations, as well as ideas involving span, vector spaces and subspaces, and column spaces.
Continuing from the last article, where we introduced the idea of the null space.
To review, the null space is the vector space of all vectors x that satisfy Ax = 0. …
Now that we have explored the column space, we can explore the other vector subspace that matrices offer: the null space. First, we must be familiar with the matrix way of representing a system of equations, which I cover in the first part of my Gaussian Elimination tutorial.
The entire concept of the null space is rather simple. We are trying to find all possible x in the matrix equation Ax = 0. The set of all these x forms its own subspace, this time in R^n. Let’s walk through all of that.
Let’s start off with an example using 3 × 3 matrices. Let’s also put them in Ax = 0 form, since we’re solving for x. …
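A sketch of what solving Ax = 0 can look like in code, using SymPy; the singular 3 × 3 matrix below is an invented stand-in, not necessarily the article’s example:

```python
from sympy import Matrix

# An invented rank-1 matrix: columns 2 and 3 are multiples of column 1's
# pattern, so two columns are free and the null space is 2-dimensional.
A = Matrix([
    [1, 2, 2],
    [2, 4, 4],
    [3, 6, 6],
])

basis = A.nullspace()  # the "special solutions" to Ax = 0
for x in basis:
    assert A * x == Matrix([0, 0, 0])  # each basis vector satisfies Ax = 0
print(len(basis))  # one special solution per free column
```

Every linear combination of these basis vectors also satisfies Ax = 0, which is exactly why the solutions form a subspace of R^n.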
This article assumes knowledge of Gaussian Elimination, matrix multiplication, linear combinations, and a few other concepts. If Gaussian Elimination seems foreign, I have a series of three articles walking through the meanings and mechanics of it.
Now that we have covered the fundamentals of the elimination approach to a system of equations Ax = b, we can delve a little more into the abstract concepts of linear algebra that underlie such systems.
We can analyze the rows and columns of our matrix A to understand the concept of vector spaces and subspaces.
A vector space (and, later, a subspace) is a set of vectors that is closed under scalar multiplication and addition. …
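A quick numerical illustration of what “closed under” means, assuming NumPy; the two vectors below span an invented plane through the origin in R^3:

```python
import numpy as np

# Two invented vectors spanning a plane (a subspace of R^3).
v = np.array([1.0, 0.0, 2.0])
w = np.array([0.0, 1.0, -1.0])

def in_span(u):
    """Check whether u lies in the plane spanned by v and w."""
    # Solve the least-squares problem [v w] c = u; u is in the span
    # exactly when the residual is zero.
    B = np.column_stack([v, w])
    c, *_ = np.linalg.lstsq(B, u, rcond=None)
    return np.allclose(B @ c, u)

# Closure: scaling a vector in the subspace, or adding two vectors in it,
# always lands back inside the subspace.
print(in_span(3.5 * v))        # True
print(in_span(v + w))          # True
print(in_span(2 * v - 7 * w))  # True
```

Note that every subspace must contain the zero vector, since scaling any member by 0 must stay inside the set.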
Matrix transposes and symmetric matrices are linked; in fact, the definition of a symmetric matrix A is that its transpose gives back the same matrix A.
This is a continuation of my linear algebra series, tied to the 18.06 MIT OCW Gilbert Strang course on introductory linear algebra. Let’s jump right in with the basics of transposes.
We transpose a two-dimensional matrix by using its rows as columns or, inversely, its columns as rows. That is all it takes to perform this simple operation.
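In code, the operation is a one-liner; a minimal NumPy sketch with invented example matrices:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])      # a 2 x 3 matrix
print(A.T)                     # 3 x 2: the rows of A become the columns of A.T

S = np.array([[2, 7],
              [7, 5]])         # symmetric: mirror-image across the diagonal
print(np.array_equal(S.T, S))  # True -- the definition of symmetry
```

Transposing swaps the shape from (m, n) to (n, m), and entry (i, j) of A becomes entry (j, i) of A.T.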
In the last article, we discussed how we could express Gaussian Elimination in terms of matrix multiplications, and how we could multiply those elementary matrices into one single matrix E, which brings A to the upper triangular U.
Let’s jump right in.
Let’s take a look again at what our example matrix E (the product of our three individual elimination matrices) looked like in the last article.
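As a refresher on the mechanics, here is a hedged NumPy sketch of combining elimination matrices into one E; the matrix A and the two elimination steps below are an invented stand-in, not necessarily the article’s example:

```python
import numpy as np

A = np.array([[1, 2, 1],
              [3, 8, 1],
              [0, 4, 1]])

E21 = np.array([[ 1, 0, 0],    # subtract 3 * row 1 from row 2
                [-3, 1, 0],
                [ 0, 0, 1]])
E32 = np.array([[1,  0, 0],    # subtract 2 * row 2 from row 3
                [0,  1, 0],
                [0, -2, 1]])

E = E32 @ E21   # one combined elimination matrix (note the right-to-left order)
U = E @ A       # applying E to A gives the upper triangular U in one step
print(U)
```

The order matters: E21 acts first, so it sits closest to A in the product E32 E21 A.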
We covered all the base concepts of Gaussian Elimination in the first entry of this miniseries. This article covers how we use Gaussian elimination to turn our coefficient matrix A into an upper triangular U so that we can employ back-substitution to solve the system. We also explore the singular cases that can arise.
This article series can be thought of as a companion to Prof. Gilbert Strang’s course 18.06 on Linear Algebra, more specifically the first four lectures.
There are two main operations in elimination: the elimination operation (the combined multiply-and-subtract step) and row exchanges. …
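Both operations can be sketched directly on NumPy rows; the matrix below is an invented example with a zero in the first pivot position, chosen so that both operations are needed:

```python
import numpy as np

A = np.array([[0., 2., 1.],
              [1., 3., 2.],
              [2., 8., 7.]])

# Row exchange: the (1,1) entry is 0, so swap rows 1 and 2 to get a pivot.
A[[0, 1]] = A[[1, 0]]

# Elimination operation: subtract a multiple of the pivot row from a lower
# row to zero out the entry below the pivot.
multiplier = A[2, 0] / A[0, 0]   # here 2 / 1
A[2] = A[2] - multiplier * A[0]

print(A)  # first column below the pivot is now all zeros
```

The multiplier is always (entry to eliminate) / (pivot), which is exactly why a zero pivot forces a row exchange first.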