Example 4 Find the inverse of the following matrix, if it exists.
We first form the new matrix by tacking on the identity matrix to this matrix. This is
We will now use row operations to try and convert the first three columns to the identity. In other words, we want a 1 on the diagonal that starts at the upper left corner and zeroes in all the other entries in the first three columns.
If you think about it, this process is very similar to the process we used in the last section to solve systems, it just goes a little farther. Here is the work for this problem.
So, we were able to convert the first three columns into the identity matrix therefore the inverse exists and it is,
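To see the same computation numerically, here is a minimal sketch of the augment-and-row-reduce process in Python with NumPy. The matrix `A` below is a hypothetical stand-in, since the matrix from Example 4 is not reproduced here; any nonsingular square matrix works the same way.

```python
import numpy as np

def gauss_jordan_inverse(A):
    """Invert a square matrix by row reducing the augmented matrix (A | I)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    aug = np.hstack([A, np.eye(n)])              # tack the identity onto A
    for col in range(n):
        # Swap in the row with the largest entry in this column (partial pivoting).
        pivot = col + np.argmax(np.abs(aug[col:, col]))
        if np.isclose(aug[pivot, col], 0.0):
            raise ValueError("matrix is singular; the inverse does not exist")
        aug[[col, pivot]] = aug[[pivot, col]]
        aug[col] /= aug[col, col]                # put a 1 on the diagonal
        for row in range(n):
            if row != col:                       # zero out the rest of the column
                aug[row] -= aug[row, col] * aug[col]
    return aug[:, n:]                            # the right half is now the inverse

# A hypothetical nonsingular 3x3 matrix (the one from Example 4 is not shown here).
A = [[2, 1, 1],
     [3, 2, 1],
     [2, 1, 2]]
A_inv = gauss_jordan_inverse(A)
print(np.allclose(A_inv @ np.array(A), np.eye(3)))   # True: A_inv really inverts A
```

In practice NumPy’s built-in `np.linalg.inv` does the same job; the hand-rolled version above just mirrors the row-operation process described in the example.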
So, there was an example in which the inverse did exist. Let’s take a look at an example in which the inverse doesn’t exist.
Example 5 Find the inverse of the following matrix, provided it exists.
In this case we will tack on the identity to get the new matrix and then try to convert the first two columns to the identity matrix.
And we don’t need to go any farther. In order for the identity to be in the first two columns we must have a 1 in the second entry of the second column and a 0 in the second entry of the first column. However, there is no way to get a 1 in the second entry of the second column that will keep a 0 in the second entry in the first column. Therefore, we can’t get the identity in the first two columns and hence the inverse of this matrix doesn’t exist.
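The singular case can also be seen numerically. This is a sketch with a hypothetical 2x2 singular matrix (not the actual matrix from Example 5): the determinant is zero and NumPy refuses to invert it, which mirrors the row reduction getting stuck above.

```python
import numpy as np

# A hypothetical singular 2x2 matrix (not the actual matrix from Example 5):
# the second row is -2 times the first, so row operations can never
# produce the identity in the first two columns.
B = np.array([[ 1.0, -2.0],
              [-2.0,  4.0]])

print(np.linalg.det(B))            # 0.0, so B is singular
try:
    np.linalg.inv(B)
except np.linalg.LinAlgError as err:
    print("no inverse:", err)      # NumPy reports the singular matrix
```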
We will leave off this discussion of inverses with the following fact.
Fact
Given a square matrix $A$.

- If $A$ is nonsingular then $A^{-1}$ will exist.
- If $A$ is singular then $A^{-1}$ will NOT exist.
I’ll leave it to you to verify this fact for the previous two examples.
Systems of Equations Revisited
We need to do a quick revisit of systems of equations. Let’s start with a general system of $n$ equations in $n$ unknowns.

$$\begin{aligned} a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n &= b_1 \\ a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n &= b_2 \\ &\;\;\vdots \\ a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nn}x_n &= b_n \end{aligned}$$
Now, convert each side into a vector to get,

$$\begin{pmatrix} a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n \\ \vdots \\ a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nn}x_n \end{pmatrix} = \begin{pmatrix} b_1 \\ \vdots \\ b_n \end{pmatrix}$$
The left side of this equation can be thought of as a matrix multiplication.

$$\begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix} \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix} = \begin{pmatrix} b_1 \\ \vdots \\ b_n \end{pmatrix}$$
Simplifying the notation a little gives,

$$A\vec{x} = \vec{b}$$
where $\vec{x}$ is a vector whose components are the unknowns in the original system of equations. We call $A\vec{x} = \vec{b}$ the matrix form of the system of equations, and solving the matrix form is equivalent to solving the original system. The solving process is identical. The augmented matrix for $A\vec{x} = \vec{b}$ is

$$\left(A \mid \vec{b}\right)$$
Once we have the augmented matrix we proceed as we did with a system that hasn’t been written in matrix form.
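As a quick illustration, here is a hypothetical 2x2 system, made up purely for this sketch, written in matrix form and solved with NumPy; `np.linalg.solve` performs the same elimination we would do by hand on the augmented matrix.

```python
import numpy as np

# A hypothetical system, made up purely for illustration:
#   2 x1 +   x2 = 5
#     x1 + 3 x2 = 10
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 10.0])

augmented = np.hstack([A, b.reshape(-1, 1)])   # the augmented matrix (A | b)
x = np.linalg.solve(A, b)                      # same elimination, done for us
print(x)                                       # [1. 3.], i.e. x1 = 1, x2 = 3
```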
We also have the following fact about solutions to $A\vec{x} = \vec{b}$.
Fact
Given the system of equations $A\vec{x} = \vec{b}$ we have one of the following three possibilities for solutions.
- There will be no solutions.
- There will be exactly one solution.
- There will be infinitely many solutions.
In fact, we can go a little farther now. Since we are assuming that we’ve got the same number of equations as unknowns, the matrix $A$ in $A\vec{x} = \vec{b}$ is a square matrix and so we can compute its determinant. This gives the following fact.
Fact
Given the system of equations $A\vec{x} = \vec{b}$ we have the following.
- If $A$ is nonsingular then there will be exactly one solution to the system.
- If $A$ is singular then there will either be no solution or infinitely many solutions to the system.
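A small sketch of this fact: the determinant decides between the one-solution case and the no/infinite case. Distinguishing “no solution” from “infinitely many” in the singular case goes one step beyond the fact above; the rank comparison used below is one standard way to do it. The matrices and right-hand sides are hypothetical.

```python
import numpy as np

def classify_solutions(A, b):
    """Classify the solutions of A x = b for a square matrix A."""
    if not np.isclose(np.linalg.det(A), 0.0):
        return "exactly one solution"          # A nonsingular
    # A singular: compare the rank of A with the rank of (A | b).
    rank_A = np.linalg.matrix_rank(A)
    rank_aug = np.linalg.matrix_rank(np.column_stack([A, b]))
    return "infinitely many solutions" if rank_aug == rank_A else "no solutions"

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])                     # singular (second row = 2 * first)
print(classify_solutions(A, np.array([3.0, 6.0])))   # infinitely many solutions
print(classify_solutions(A, np.array([3.0, 7.0])))   # no solutions
```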
A homogeneous system is a system in which all of the $b_i$ are zero. The matrix form of a homogeneous system is

$$A\vec{x} = \vec{0}$$
where $\vec{0}$ is the vector of all zeroes. In the homogeneous system we are guaranteed to have a solution, $\vec{x} = \vec{0}$. The fact above for homogeneous systems is then,
Fact
Given the homogeneous system $A\vec{x} = \vec{0}$ we have the following.
- If $A$ is nonsingular then the only solution will be $\vec{x} = \vec{0}$.
- If $A$ is singular then there will be infinitely many nonzero solutions to the system.
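To illustrate the homogeneous fact, take a hypothetical singular matrix and look for nonzero solutions of $A\vec{x} = \vec{0}$. One standard numerical route, used in this sketch, is to read the null space off of the singular value decomposition.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])      # hypothetical singular matrix

# Rows of Vt whose singular values are (numerically) zero span the
# null space of A, i.e. the solution set of the homogeneous system.
U, s, Vt = np.linalg.svd(A)
null_vectors = Vt[np.isclose(s, 0.0)]
x = null_vectors[0]
print(x)        # a multiple of (2, -1), up to sign and scaling
print(A @ x)    # ~[0. 0.]: a nonzero solution of A x = 0
```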
Linear Independence/Linear Dependence
This is not the first time that we’ve seen this topic. We also saw linear independence and linear dependence back when we were looking at second order differential equations. In that section we were dealing with functions, but the concept is essentially the same here. If we start with $n$ vectors,

$$\vec{x}_1,\ \vec{x}_2,\ \ldots,\ \vec{x}_n$$

If we can find constants, $c_1$, $c_2$, …, $c_n$ with at least two nonzero such that

$$c_1\vec{x}_1 + c_2\vec{x}_2 + \cdots + c_n\vec{x}_n = \vec{0} \tag{1}$$

then we call the vectors linearly dependent. If the only constants that work in (1) are $c_1 = 0$, $c_2 = 0$, …, $c_n = 0$ then we call the vectors linearly independent.
If we further make the assumption that each of the $n$ vectors has $n$ components, i.e. each of the vectors looks like,

$$\vec{x}_i = \begin{pmatrix} x_{1i} \\ x_{2i} \\ \vdots \\ x_{ni} \end{pmatrix}$$

we can get a very simple test for linear independence and linear dependence. Note that this does not have to be the case, but in all of our work we will be working with $n$ vectors each of which has $n$ components.
Fact
Given the $n$ vectors $\vec{x}_1$, $\vec{x}_2$, …, $\vec{x}_n$, each with $n$ components, form the matrix,

$$X = \begin{pmatrix} \vec{x}_1 & \vec{x}_2 & \cdots & \vec{x}_n \end{pmatrix}$$

So, the matrix $X$ is an $n \times n$ matrix whose $i^{\text{th}}$ column is the vector $\vec{x}_i$. Then,
- If $X$ is nonsingular (i.e. $\det(X)$ is not zero) then the vectors are linearly independent, and
- if $X$ is singular (i.e. $\det(X) = 0$) then the vectors are linearly dependent and the constants that make (1) true can be found by solving the system

  $$X\vec{c} = \vec{0}$$

  where $\vec{c}$ is a vector containing the constants from (1).
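Here is a sketch of the test with three hypothetical vectors, the third of which is deliberately built as a combination of the first two. The determinant of $X$ comes out zero, and solving $X\vec{c} = \vec{0}$ recovers the constants in (1).

```python
import numpy as np

# Three hypothetical vectors, each with n = 3 components; v3 is built
# as 2*v1 + v2 on purpose, so the set is linearly dependent.
v1 = np.array([1.0, 0.0, 2.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = np.array([2.0, 1.0, 5.0])

X = np.column_stack([v1, v2, v3])   # the i-th column of X is the i-th vector
print(np.linalg.det(X))             # ~0.0, so the vectors are linearly dependent

# The constants in (1) are the nonzero solutions of X c = 0, read off
# here from the rows of Vt with (numerically) zero singular values.
U, s, Vt = np.linalg.svd(X)
c = Vt[np.isclose(s, 0.0)][0]
print(c)        # proportional to (2, 1, -1): 2*v1 + 1*v2 - 1*v3 = 0
print(X @ c)    # ~[0. 0. 0.]
```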