If you get nothing out of this quick review of linear algebra you must get this section. Without this section you will not be able to do any of the differential equations work that is in this chapter.
So, let’s start with the following. If we multiply an $n \times n$ matrix by an $n \times 1$ vector we will get a new $n \times 1$ vector back. In other words,

\[ A\vec{\eta} = \vec{y} \]
What we want to know is if it is possible for the following to happen. Instead of just getting a brand new vector out of the multiplication is it possible instead to get the following,

\[ A\vec{\eta} = \lambda \vec{\eta} \]
In other words, is it possible, at least for certain $\lambda$ and $\vec{\eta}$, to have matrix multiplication be the same as just multiplying the vector by a constant? Of course, we probably wouldn’t be talking about this if the answer was no. So, it is possible for this to happen, however, it won’t happen for just any value of $\lambda$ or $\vec{\eta}$. If we do happen to have a $\lambda$ and $\vec{\eta}$ for which this works (and they will always come in pairs) then we call $\lambda$ an eigenvalue of $A$ and $\vec{\eta}$ an eigenvector of $A$.
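To make the definition concrete, here is a quick numerical check that is not part of the original notes, using a made-up $2 \times 2$ matrix and NumPy. The matrix and the library calls are illustrative assumptions; the point is simply that multiplying by $A$ does the same thing as scaling by the eigenvalue.

```python
# Quick numerical check of A @ eta == lambda * eta for a made-up matrix.
import numpy as np

A = np.array([[6.0, -1.0],
              [2.0,  3.0]])

# numpy returns the eigenvalues and unit-length eigenvectors (as columns).
eigenvalues, eigenvectors = np.linalg.eig(A)

for i, lam in enumerate(eigenvalues):
    eta = eigenvectors[:, i]
    # Multiplying by A is the same as scaling by the eigenvalue.
    print(lam, np.allclose(A @ eta, lam * eta))   # True for each pair
```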
So, how do we go about finding the eigenvalues and eigenvectors for a matrix? Well, first notice that if $\vec{\eta} = \vec{0}$ then $A\vec{\eta} = \lambda \vec{\eta}$ is going to be true for any value of $\lambda$ and so we are going to make the assumption that $\vec{\eta} \ne \vec{0}$. With that out of the way let’s rewrite the equation a little.

\[
\begin{aligned}
A\vec{\eta} - \lambda \vec{\eta} &= \vec{0} \\
A\vec{\eta} - \lambda I_n \vec{\eta} &= \vec{0} \\
\left( A - \lambda I_n \right) \vec{\eta} &= \vec{0}
\end{aligned}
\]
Notice that before we factored out the $\vec{\eta}$ we added in the appropriately sized identity matrix. This is equivalent to multiplying things by a one and so doesn’t change the value of anything. We needed to do this because without it we would have had the difference of a matrix, $A$, and a constant, $\lambda$, and this can’t be done. We now have the difference of two matrices of the same size which can be done.
So, with this rewrite we see that

\[ \left( A - \lambda I_n \right) \vec{\eta} = \vec{0} \]
is equivalent to the original equation $A\vec{\eta} = \lambda \vec{\eta}$. In order to find the eigenvectors for a matrix we will need to solve a homogeneous system. Recall the fact from the previous section: we will either have exactly one solution ($\vec{\eta} = \vec{0}$) or we will have infinitely many nonzero solutions. Since we’ve already said that we don’t want $\vec{\eta} = \vec{0}$ this means that we want the second case.
Knowing this will allow us to find the eigenvalues for a matrix. Recall from this fact that we will get the second case only if the matrix in the system is singular. Therefore, we will need to determine the values of $\lambda$ for which we get,

\[ \det\left( A - \lambda I_n \right) = 0 \]
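As an illustration of setting up and solving $\det\left( A - \lambda I_n \right) = 0$ symbolically, here is a short SymPy sketch using the same made-up matrix as in the snippet above; the matrix and the use of SymPy are assumptions for illustration only.

```python
# Form det(A - lambda*I) = 0 symbolically and solve it for the sample matrix.
import sympy as sp

lam = sp.symbols('lam')
A = sp.Matrix([[6, -1],
               [2,  3]])

char_poly = (A - lam * sp.eye(2)).det()      # the characteristic polynomial
print(sp.expand(char_poly))                  # lam**2 - 9*lam + 20
print(sp.solve(sp.Eq(char_poly, 0), lam))    # [4, 5], the eigenvalues
```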
Once we have the eigenvalues we can then go back and determine the eigenvectors for each eigenvalue. Let’s take a look at a couple of quick facts about eigenvalues and eigenvectors.
Fact
If $A$ is an $n \times n$ matrix then $\det\left( A - \lambda I_n \right)$ is an $n^{\text{th}}$ degree polynomial in $\lambda$. This polynomial is called the characteristic polynomial.
To find the eigenvalues of a matrix all we need to do is solve a polynomial equation. That’s generally not too bad provided we keep $n$ small. Likewise, this fact also tells us that for an $n \times n$ matrix, $A$, we will have $n$ eigenvalues if we include all repeated eigenvalues.
Fact
If $\lambda_1, \lambda_2, \ldots, \lambda_n$ is the complete list of eigenvalues for $A$ (including all repeated eigenvalues) then,
- If $\lambda$ occurs only once in the list then we call $\lambda$ simple.
- If $\lambda$ occurs $k > 1$ times in the list then we say that $\lambda$ has multiplicity $k$.
- If $\lambda_1, \lambda_2, \ldots, \lambda_k$ ($k \le n$) are the simple eigenvalues in the list with corresponding eigenvectors $\vec{\eta}^{(1)}$, $\vec{\eta}^{(2)}$, …, $\vec{\eta}^{(k)}$ then the eigenvectors are all linearly independent.
- If $\lambda$ is an eigenvalue of multiplicity $k > 1$ then $\lambda$ will have anywhere from 1 to $k$ linearly independent eigenvectors.
The usefulness of these facts will become apparent when we get back into differential equations since in that work we will want linearly independent solutions.
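As a quick sanity check of the independence claim in the third fact, and again only as a sketch with the made-up matrix from the earlier snippets, we can stack the eigenvectors as columns and look at the rank of the resulting matrix; full rank means the eigenvectors are linearly independent.

```python
# Check linear independence of the eigenvectors of the sample matrix by
# looking at the rank of the matrix whose columns are the eigenvectors.
import numpy as np

A = np.array([[6.0, -1.0],
              [2.0,  3.0]])
eigenvalues, eigenvectors = np.linalg.eig(A)

# Full rank (2 for this 2x2 matrix of eigenvectors) means the columns, i.e.
# the eigenvectors, are linearly independent, as the fact predicts for
# simple eigenvalues.
print(np.linalg.matrix_rank(eigenvectors) == len(eigenvalues))   # True
```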
Let’s work a couple of examples now to see how we actually go about finding eigenvalues and eigenvectors.
Example 1 Find the eigenvalues and eigenvectors of the following matrix.
The first thing that we need to do is find the eigenvalues. That means we need to form the matrix $A - \lambda I_2$.
In particular we need to determine where the determinant of this matrix is zero.
So, it looks like we will have two simple eigenvalues for this matrix, $\lambda_1$ and $\lambda_2$. We will now need to find the eigenvectors for each of these. Also note that according to the fact above, the two eigenvectors should be linearly independent.
To find the eigenvectors we simply plug each eigenvalue into $\left( A - \lambda I_2 \right)\vec{\eta} = \vec{0}$ and solve. So, let’s do that.
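Since the matrices in the worked examples aren’t reproduced here, the following SymPy sketch shows the same step on the made-up matrix from the earlier snippets: plug an eigenvalue into $\left( A - \lambda I_2 \right)\vec{\eta} = \vec{0}$ and solve the homogeneous system by computing a null space. The matrix and the eigenvalue $\lambda = 4$ are illustrative assumptions.

```python
# Find an eigenvector by plugging an eigenvalue into (A - lambda*I) eta = 0
# and solving the homogeneous system for the sample matrix.
import sympy as sp

A = sp.Matrix([[6, -1],
               [2,  3]])
lam = 4   # one of the sample matrix's eigenvalues

# The null space of A - lambda*I is spanned by the eigenvectors for lambda;
# sympy returns a basis for it.
basis = (A - lam * sp.eye(2)).nullspace()
print(basis[0])   # Matrix([[1/2], [1]]); any nonzero multiple also works
```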
$\lambda_1$:
In this case we need to solve the following system.
Recall that officially to solve this system we use the following augmented matrix.
Upon reducing down we see that we get a single equation
that will yield an infinite number of solutions. This is expected behavior. Recall that we picked the eigenvalues so that the matrix would be singular and so we would get infinitely many solutions.
Notice as well that we could have identified this from the original system. This won’t always be the case, but for a $2 \times 2$ system we can see that one row will be a multiple of the other and so we will get infinitely many solutions. From this point on we won’t actually solve the system in these cases. We will just go straight to the equation and we can use either of the two rows for this equation.
Now, let’s get back to the eigenvector, since that is what we were after. In general then the eigenvector will be any vector that satisfies the following,
To get this we used the solution to the equation that we found above.
We really don’t want a general eigenvector however, so we will pick a value for the free variable to get a specific eigenvector. Since we’ve already assumed that the eigenvector is not zero, we must choose a value that will not give us the zero vector; beyond that we can choose anything, so pick something that will make the eigenvector “nice”. Here’s the eigenvector for this eigenvalue.
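The choice of the free variable only fixes the scale: any nonzero multiple of an eigenvector is still an eigenvector. The sketch below, again with the made-up matrix rather than the example’s matrix, compares a hand-picked “nice” eigenvector with the unit-length one NumPy returns; they differ only by a scalar factor.

```python
# Any nonzero scalar multiple of an eigenvector is still an eigenvector.
import numpy as np

A = np.array([[6.0, -1.0],
              [2.0,  3.0]])

hand_picked = np.array([1.0, 2.0])   # a "nice" eigenvector for lambda = 4
print(np.allclose(A @ hand_picked, 4 * hand_picked))     # True

eigenvalues, eigenvectors = np.linalg.eig(A)
idx = int(np.argmin(np.abs(eigenvalues - 4)))            # column for lambda = 4
unit_version = eigenvectors[:, idx]                      # numpy's unit-length version

scale = unit_version[0] / hand_picked[0]
print(np.allclose(unit_version, scale * hand_picked))    # True: same direction
```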
Now we get to do this all over again for the second eigenvalue.
$\lambda_2$:
We’ll do much less work with this part than we did with the previous part. We will need to solve the following system.
Clearly both rows are multiples of each other and so we will get infinitely many solutions. We can choose to work with either row. We’ll run with the first to avoid having too many minus signs floating around. Doing this gives us,
Note that we can solve this for either of the two variables. However, with an eye towards working with these later on let’s try to avoid as many fractions as possible. The eigenvector is then,
Summarizing we have,
Note that the two eigenvectors are linearly independent as predicted.
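For readers following along with software, SymPy can cross-check this kind of hand computation in one call. The sketch below uses the made-up matrix from the earlier snippets rather than the example’s matrix; `eigenvects` is SymPy’s bundled eigenvalue/eigenvector routine.

```python
# One-call cross-check of the hand work: sympy reports each eigenvalue
# together with its multiplicity and a basis of eigenvectors.
import sympy as sp

A = sp.Matrix([[6, -1],
               [2,  3]])

for eigenvalue, multiplicity, vectors in A.eigenvects():
    print(eigenvalue, multiplicity, vectors)
# Prints the two simple eigenvalues (4 and 5), each with multiplicity 1 and a
# single basis eigenvector.
```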
Example 2 Find the eigenvalues and eigenvectors of the following matrix.
This matrix has fractions in it. That’s life so don’t get excited about it. First, we need the eigenvalues.
So, it looks like we’ve got an eigenvalue of multiplicity 2 here. Remember that the power on the factor in the characteristic polynomial will be the multiplicity of that eigenvalue.
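Because the example’s matrix isn’t reproduced here, the sketch below illustrates the same idea on a made-up $2 \times 2$ matrix whose characteristic polynomial has a repeated factor; the matrix and the use of SymPy are assumptions for illustration.

```python
# Read the multiplicity off the characteristic polynomial for a made-up
# matrix with a repeated eigenvalue.
import sympy as sp

lam = sp.symbols('lam')
A = sp.Matrix([[7, 1],
               [-1, 5]])

char_poly = (A - lam * sp.eye(2)).det()
print(sp.factor(char_poly))        # (lam - 6)**2
print(sp.roots(char_poly, lam))    # {6: 2}: eigenvalue 6 with multiplicity 2
```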
Now, let’s find the eigenvector(s). This one is going to be a little different from the first example. There is only one eigenvalue so let’s do the work for that one. We will need to solve the following system,
So, the rows are multiples of each other. We’ll work with the first equation in this example to find the eigenvector.
Recall in the last example we decided that we wanted to make these as “nice” as possible and so should avoid fractions if we can. Sometimes, as in this case, we simply can’t so we’ll have to deal with it. In this case the eigenvector will be,
Note that by a careful choice of the variable in this case we were able to get rid of the fraction that we had. In general it doesn’t much matter whether or not we do this. However, when we get back to differential equations it will be easier on us if we don’t have any fractions, so we will usually try to eliminate them at this step.
Also, in this case we are only going to get a single (linearly independent) eigenvector. We can get other eigenvectors by choosing different values for the free variable. However, each of these will be linearly dependent with the first eigenvector. If you’re not convinced of this, try it. Pick some other value, get a different vector, and check to see if the two are linearly dependent.
Recall from the fact above that an eigenvalue of multiplicity $k > 1$ will have anywhere from 1 to $k$ linearly independent eigenvectors. In this case we got one. For most of the matrices that we’ll be working with this will be the case, although it doesn’t have to be. We can, on occasion, get two.
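Continuing with the made-up repeated-eigenvalue matrix from the previous sketch, the number of linearly independent eigenvectors is the dimension of the null space of $A - \lambda I_2$; here it comes out to one, matching the situation described above.

```python
# The eigenvalue 6 of the made-up matrix has multiplicity 2 but only one
# linearly independent eigenvector: the null space of A - 6*I is 1-dimensional.
import sympy as sp

A = sp.Matrix([[7, 1],
               [-1, 5]])

eigenvector_basis = (A - 6 * sp.eye(2)).nullspace()
print(len(eigenvector_basis))    # 1, a single independent eigenvector
print(eigenvector_basis[0])      # Matrix([[-1], [1]])
```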