
Differential Equations - Higher Order: Systems



In this section we want to take a brief look at systems of differential equations that are larger than 2×2. The problem here is that, unlike the first few sections where we looked at nth order differential equations, we can't really come up with a set of formulas that will always work for every system. So, with that in mind, we're going to look at all the possible cases for a 3×3 system (leaving some details for you to verify at times) and then make a couple of quick comments about 4×4 systems to illustrate how to extend things out to even larger systems. We'll leave it to you to actually extend things out if you'd like to.
We will also not be doing any actual examples in this section. The point of this section is just to show how to extend out what we know about 2×2 systems to larger systems.
Initially the process is identical regardless of the size of the system. So, for a system of 3 differential equations with 3 unknown functions we first put the system into matrix form,
\[\vec{x}' = A\,\vec{x}\]
where the coefficient matrix, A, is a 3×3 matrix. We next need to determine the eigenvalues and eigenvectors for A and because A is a 3×3 matrix we know that there will be 3 eigenvalues (including repeated eigenvalues if there are any).
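As a quick numerical illustration of this first step, NumPy can produce the eigenvalues and eigenvectors of a 3×3 coefficient matrix. This is only a sketch with a made-up matrix (not one from this section); `np.linalg.eig` is a standard NumPy routine that returns the eigenvalues and the eigenvectors as columns.

```python
import numpy as np

# Hypothetical 3x3 coefficient matrix, made up for illustration.
# Its upper-left 2x2 block has eigenvalues 1 and 3; the last diagonal
# entry contributes the eigenvalue 5, so we expect three real distinct ones.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 5.0]])

eigvals, eigvecs = np.linalg.eig(A)  # eigenvectors are the columns of eigvecs
print(sorted(eigvals.real))          # three eigenvalues, counted with multiplicity
```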
This is where the process from the 2×2 systems starts to vary. We will need a total of 3 linearly independent solutions to form the general solution. Some of what we know from the 2×2 systems can be brought forward to this point. For instance, we know that solutions corresponding to simple eigenvalues (i.e. they only occur once in the list of eigenvalues) will be linearly independent. We know that solutions from a set of complex conjugate eigenvalues will be linearly independent. We also know how to get a set of linearly independent solutions from a double eigenvalue with a single eigenvector.
There are also a couple of facts about eigenvalues/eigenvectors that we need to review here as well. First, provided \(A\) has only real entries (which it always will here) all complex eigenvalues will occur in conjugate pairs (i.e. \(\lambda = \alpha \pm \beta i\)) and their associated eigenvectors will also be complex conjugates of each other. Next, if an eigenvalue has multiplicity \(k \ge 2\) (i.e. occurs at least twice in the list of eigenvalues) then there will be anywhere from 1 to \(k\) linearly independent eigenvectors for the eigenvalue.
With all these ideas in mind let’s start going through all the possible combinations of eigenvalues that we can possibly have for a 3×3 case. Let’s also note that for a 3×3 system it is impossible to have only 2 real distinct eigenvalues. The only possibilities are to have 1 or 3 real distinct eigenvalues.
Here are all the possible cases.

3 Real Distinct Eigenvalues

In this case we'll have the real, distinct eigenvalues \(\lambda_1 \ne \lambda_2 \ne \lambda_3\) and their associated eigenvectors, \(\vec{\eta}_1\), \(\vec{\eta}_2\) and \(\vec{\eta}_3\), are guaranteed to be linearly independent, and so the three linearly independent solutions we get from this case are,
\[e^{\lambda_1 t}\vec{\eta}_1 \qquad e^{\lambda_2 t}\vec{\eta}_2 \qquad e^{\lambda_3 t}\vec{\eta}_3\]
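A sketch of a numerical check for this case, with NumPy and a hypothetical matrix (made up for illustration) that has three real distinct eigenvalues: each candidate solution \(e^{\lambda t}\vec{\eta}\) should satisfy \(\vec{x}' = A\vec{x}\) at any time \(t\), and the eigenvector matrix should be nonsingular, confirming linear independence.

```python
import numpy as np

# Hypothetical matrix with three real distinct eigenvalues (1, 3, and 5).
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 5.0]])
vals, vecs = np.linalg.eig(A)

t = 0.7  # an arbitrary sample time
for lam, eta in zip(vals, vecs.T):
    x = np.exp(lam * t) * eta         # candidate solution e^(λt) η
    dx = lam * np.exp(lam * t) * eta  # its derivative
    assert np.allclose(dx, A @ x)     # it satisfies x' = A x

# distinct eigenvalues give linearly independent eigenvectors:
assert abs(np.linalg.det(vecs)) > 1e-9
```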

1 Real and 2 Complex Eigenvalues

From the real eigenvalue/eigenvector pair, \(\lambda_1\) and \(\vec{\eta}_1\), we get one solution,
\[e^{\lambda_1 t}\vec{\eta}_1\]
We get the other two solutions in the same manner that we did with the 2×2 case. If the complex eigenvalues are \(\lambda_{2,3} = \alpha \pm \beta i\) with eigenvectors \(\vec{\eta}_2\) and \(\vec{\eta}_3 = \overline{\vec{\eta}_2}\), we can get two real-valued solutions by using Euler's formula to expand,
\[e^{\lambda_2 t}\vec{\eta}_2 = e^{(\alpha + \beta i)t}\vec{\eta}_2 = e^{\alpha t}\left(\cos(\beta t) + i\sin(\beta t)\right)\vec{\eta}_2\]
into its real and imaginary parts, \(\vec{u} + i\,\vec{v}\). The final two real-valued solutions we need are then,
\[\vec{u} \qquad \vec{v}\]
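A sketch of this splitting, assuming NumPy and a made-up matrix whose eigenvalues are \(\pm i\) and 2: the real and imaginary parts of the complex solution are each real-valued solutions of \(\vec{x}' = A\vec{x}\).

```python
import numpy as np

# Hypothetical matrix (made up for illustration) with eigenvalues ±i and 2.
A = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 2.0]])
vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.imag)       # pick λ = α + βi with β > 0
lam, eta = vals[k], vecs[:, k]

t = 0.3
z = np.exp(lam * t) * eta      # complex solution e^(λt) η
dz = lam * z                   # its derivative
u, v = z.real, z.imag          # real and imaginary parts

# each part is itself a real-valued solution of x' = A x:
assert np.allclose(dz.real, A @ u)
assert np.allclose(dz.imag, A @ v)
```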

1 Real Distinct and 1 Double Eigenvalue with 1 Eigenvector

From the real eigenvalue/eigenvector pair, \(\lambda_1\) and \(\vec{\eta}_1\), we get one solution,
\[e^{\lambda_1 t}\vec{\eta}_1\]
From our work in the 2×2 systems we know that from the double eigenvalue \(\lambda_2\) with single eigenvector, \(\vec{\eta}_2\), we get the following two solutions,
\[e^{\lambda_2 t}\vec{\eta}_2 \qquad t e^{\lambda_2 t}\vec{\xi} + e^{\lambda_2 t}\vec{\rho}\]
where \(\vec{\xi}\) and \(\vec{\rho}\) must satisfy the following equations,
\[\left(A - \lambda_2 I\right)\vec{\xi} = \vec{0} \qquad \left(A - \lambda_2 I\right)\vec{\rho} = \vec{\xi}\]
Note that the first equation simply tells us that \(\vec{\xi}\) must be the single eigenvector for this eigenvalue, \(\vec{\eta}_2\), and so we usually just say that the second solution we get from the double root case is,
\[t e^{\lambda_2 t}\vec{\eta}_2 + e^{\lambda_2 t}\vec{\rho} \qquad \text{where } \vec{\rho} \text{ satisfies } \left(A - \lambda_2 I\right)\vec{\rho} = \vec{\eta}_2\]
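In practice \(A - \lambda_2 I\) is singular, so \(\vec{\rho}\) can't be found with an ordinary solve. One sketch of a workaround, assuming NumPy and a hypothetical matrix (made up, with a double eigenvalue \(\lambda = 1\) and a single eigenvector): `np.linalg.lstsq` returns a least-squares solution, which solves the system exactly when it is consistent.

```python
import numpy as np

# Hypothetical matrix (made up): double eigenvalue λ = 1 with a single
# eigenvector, plus a simple eigenvalue 2.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 2.0]])
lam = 1.0
eta = np.array([1.0, 0.0, 0.0])  # the single eigenvector for λ = 1

# (A - λI) is singular; least squares still finds a ρ with (A - λI)ρ = η:
rho, *_ = np.linalg.lstsq(A - lam * np.eye(3), eta, rcond=None)
assert np.allclose((A - lam * np.eye(3)) @ rho, eta)

# verify the second solution t e^(λt) η + e^(λt) ρ satisfies x' = A x:
t = 0.5
x = t * np.exp(lam * t) * eta + np.exp(lam * t) * rho
dx = (np.exp(lam * t) * eta + lam * t * np.exp(lam * t) * eta
      + lam * np.exp(lam * t) * rho)
assert np.allclose(dx, A @ x)
```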

1 Real Distinct and 1 Double Eigenvalue with 2 Linearly Independent Eigenvectors

We didn't look at this case back when we were examining the 2×2 systems, but it is easy enough to deal with. In this case we'll have a single real distinct eigenvalue/eigenvector pair, \(\lambda_1\) and \(\vec{\eta}_1\), as well as a double eigenvalue \(\lambda_2\), and the double eigenvalue has two linearly independent eigenvectors, \(\vec{\eta}_2\) and \(\vec{\eta}_3\).
In this case all three eigenvectors are linearly independent and so we get the following three linearly independent solutions,
\[e^{\lambda_1 t}\vec{\eta}_1 \qquad e^{\lambda_2 t}\vec{\eta}_2 \qquad e^{\lambda_2 t}\vec{\eta}_3\]
We are now out of the cases that compare to those that we did with 2×2 systems and we need to move into the brand new case that we pick up for 3×3 systems. This new case involves an eigenvalue with multiplicity 3. As we noted above, we can have 1, 2, or 3 linearly independent eigenvectors, so we actually have 3 sub-cases to deal with here. Let's go through these final 3 cases for a 3×3 system.

1 Triple Eigenvalue with 1 Eigenvector

The eigenvalue/eigenvector pair in this case are \(\lambda\) and \(\vec{\eta}\). Because the eigenvalue is real we know that the first solution we need is,
\[e^{\lambda t}\vec{\eta}\]
We can use the work from the double eigenvalue with one eigenvector to get that a second solution is,
\[t e^{\lambda t}\vec{\eta} + e^{\lambda t}\vec{\rho} \qquad \text{where } \vec{\rho} \text{ satisfies } \left(A - \lambda I\right)\vec{\rho} = \vec{\eta}\]
For a third solution we can take a clue from how we dealt with nth order differential equations with roots of multiplicity 3. In those cases we multiplied the original solution by a \(t^2\). However, just as with the double eigenvalue case, that won't be enough to get us a solution. In this case the third solution will be,
\[\frac{1}{2}t^2 e^{\lambda t}\vec{\xi} + t e^{\lambda t}\vec{\rho} + e^{\lambda t}\vec{\mu}\]
where \(\vec{\xi}\), \(\vec{\rho}\), and \(\vec{\mu}\) must satisfy,
\[\left(A - \lambda I\right)\vec{\xi} = \vec{0} \qquad \left(A - \lambda I\right)\vec{\rho} = \vec{\xi} \qquad \left(A - \lambda I\right)\vec{\mu} = \vec{\rho}\]
You can verify that this is a solution, and that these are the required conditions, by taking its derivative and plugging it into the system.
Now, the first condition simply tells us that \(\vec{\xi} = \vec{\eta}\) because we only have a single eigenvector here, and so we can reduce this third solution to,
\[\frac{1}{2}t^2 e^{\lambda t}\vec{\eta} + t e^{\lambda t}\vec{\rho} + e^{\lambda t}\vec{\mu}\]
where \(\vec{\rho}\) and \(\vec{\mu}\) must satisfy,
\[\left(A - \lambda I\right)\vec{\rho} = \vec{\eta} \qquad \left(A - \lambda I\right)\vec{\mu} = \vec{\rho}\]
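A sketch of the whole chain of conditions, assuming NumPy and a made-up single Jordan block with triple eigenvalue \(\lambda = 2\): solve \(\left(A - \lambda I\right)\vec{\rho} = \vec{\eta}\) and \(\left(A - \lambda I\right)\vec{\mu} = \vec{\rho}\) in turn by least squares, then confirm the third solution satisfies the system.

```python
import numpy as np

lam = 2.0
# Hypothetical matrix (made up): a single Jordan block, so λ = 2 is a
# triple eigenvalue with only one eigenvector.
A = np.array([[lam, 1.0, 0.0],
              [0.0, lam, 1.0],
              [0.0, 0.0, lam]])
N = A - lam * np.eye(3)

eta = np.array([1.0, 0.0, 0.0])                # (A - λI)η = 0
rho, *_ = np.linalg.lstsq(N, eta, rcond=None)  # (A - λI)ρ = η
mu, *_ = np.linalg.lstsq(N, rho, rcond=None)   # (A - λI)μ = ρ
assert np.allclose(N @ rho, eta) and np.allclose(N @ mu, rho)

# verify the third solution (1/2) t^2 e^(λt) η + t e^(λt) ρ + e^(λt) μ:
t = 0.4
e = np.exp(lam * t)
x = 0.5 * t**2 * e * eta + t * e * rho + e * mu
dx = (t * e * eta + 0.5 * t**2 * lam * e * eta
      + e * rho + lam * t * e * rho + lam * e * mu)
assert np.allclose(dx, A @ x)
```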
Finally, notice that we will have already solved the new first condition in determining the second solution above, and so all we really need to do here is solve the final condition.
As a final note in this case, the \(\frac{1}{2}\) is in the solution solely to keep any extra constants from appearing in the conditions, which in turn allows us to reuse previous results.

1 Triple Eigenvalue with 2 Linearly Independent Eigenvectors

In this case we'll have the eigenvalue \(\lambda\) with the two linearly independent eigenvectors \(\vec{\eta}_1\) and \(\vec{\eta}_2\), so we get the following two linearly independent solutions,
\[e^{\lambda t}\vec{\eta}_1 \qquad e^{\lambda t}\vec{\eta}_2\]
We now need a third solution. The third solution will be in the form,
\[t e^{\lambda t}\vec{\xi} + e^{\lambda t}\vec{\rho}\]
where \(\vec{\xi}\) and \(\vec{\rho}\) must satisfy the following equations,
\[\left(A - \lambda I\right)\vec{\xi} = \vec{0} \qquad \left(A - \lambda I\right)\vec{\rho} = \vec{\xi}\]
We’ve already verified that this will be a solution with these conditions in the double eigenvalue case (that work only required a repeated eigenvalue, not necessarily a double one).
However, unlike the previous times we've seen this, we can't just say that \(\vec{\xi}\) is an eigenvector. In all the previous cases in which we've seen this condition we had a single eigenvector, and this time we have two linearly independent eigenvectors. This means that the most general possible solution to the first condition is,
\[\vec{\xi} = c_1\vec{\eta}_1 + c_2\vec{\eta}_2\]
This creates problems in solving the second condition. The second condition will not have solutions for every choice of \(c_1\) and \(c_2\), and the choice that we use will depend upon the eigenvectors. So, upon solving the first condition, we would need to plug the general solution into the second condition and then proceed to determine conditions on \(c_1\) and \(c_2\) that would allow us to solve the second condition.
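This solvability condition can be tested numerically: \(\left(A - \lambda I\right)\vec{\rho} = \vec{\xi}\) has a solution exactly when appending \(\vec{\xi}\) as a column to \(A - \lambda I\) does not raise the rank. A sketch with NumPy and a made-up matrix having a triple eigenvalue and two eigenvectors:

```python
import numpy as np

# Hypothetical matrix (made up): triple eigenvalue λ = 2 with exactly two
# linearly independent eigenvectors.
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 2.0]])
N = A - 2.0 * np.eye(3)
eta1 = np.array([1.0, 0.0, 0.0])  # first eigenvector
eta2 = np.array([0.0, 0.0, 1.0])  # second, independent eigenvector

def solvable(xi):
    # (N)ρ = ξ has a solution iff appending ξ doesn't raise the rank
    return (np.linalg.matrix_rank(np.column_stack([N, xi]))
            == np.linalg.matrix_rank(N))

# ξ = c1 η1 + c2 η2 works only for particular choices of c1 and c2:
assert solvable(1.0 * eta1 + 0.0 * eta2)      # c1 = 1, c2 = 0 works
assert not solvable(0.0 * eta1 + 1.0 * eta2)  # c1 = 0, c2 = 1 does not
```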

1 Triple Eigenvalue with 3 Linearly Independent Eigenvectors

In this case we'll have the eigenvalue \(\lambda\) with the three linearly independent eigenvectors \(\vec{\eta}_1\), \(\vec{\eta}_2\), and \(\vec{\eta}_3\), so we get the following three linearly independent solutions,
\[e^{\lambda t}\vec{\eta}_1 \qquad e^{\lambda t}\vec{\eta}_2 \qquad e^{\lambda t}\vec{\eta}_3\]

4×4 Systems

We'll close this section out with a couple of comments about 4×4 systems. In these cases we will have 4 eigenvalues and will need 4 linearly independent solutions in order to get a general solution. The vast majority of the cases here are natural extensions of the 3×3 system cases and in fact will use the vast majority of that work.
Here are a couple of new cases that we should comment briefly on, however. With 4×4 systems it will now be possible to have two different sets of double eigenvalues and two different sets of complex conjugate eigenvalues. In either of these cases we can treat each one as a separate case and use our previous knowledge about double eigenvalues and complex eigenvalues to get the solutions we need.
It is also now possible to have a “double” complex eigenvalue. In other words, we can have \(\lambda = \alpha \pm \beta i\) each occur twice in the list of eigenvalues. The solutions for this case aren't too bad. We get two solutions in the normal way of dealing with complex eigenvalues. The remaining two solutions will come from the work we did for a double eigenvalue. The work we did in that case did not require the eigenvalue/eigenvector pair to be real. Therefore, if the eigenvector associated with \(\lambda = \alpha + \beta i\) is \(\vec{\eta}\), then the second solution will be,
\[t e^{(\alpha + \beta i)t}\vec{\eta} + e^{(\alpha + \beta i)t}\vec{\rho} \qquad \text{where } \vec{\rho} \text{ satisfies } \left(A - \lambda I\right)\vec{\rho} = \vec{\eta}\]
and once we've determined \(\vec{\rho}\) we can again split this up into its real and imaginary parts using Euler's formula to get two new real-valued solutions.
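A sketch of this case under stated assumptions (NumPy, a made-up 4×4 matrix built from 2×2 rotation blocks so that \(\lambda = \pm i\) each occur twice with one complex eigenvector per pair): find \(\vec{\rho}\) by least squares, then check that the real and imaginary parts of the second complex solution each solve the system.

```python
import numpy as np

# Hypothetical 4x4 matrix (made up) built from 2x2 rotation blocks, so the
# eigenvalues λ = ±i each occur twice with one complex eigenvector per pair.
J = np.array([[0.0, -1.0], [1.0, 0.0]])  # eigenvalues ±i
A = np.block([[J, np.eye(2)], [np.zeros((2, 2)), J]])

lam = 1j
eta = np.array([1.0, -1.0j, 0.0, 0.0])  # eigenvector for λ = i
M = A - lam * np.eye(4)
assert np.allclose(M @ eta, 0)

rho, *_ = np.linalg.lstsq(M, eta, rcond=None)  # (A - λI)ρ = η
assert np.allclose(M @ rho, eta)

# second complex solution t e^(λt) η + e^(λt) ρ and its derivative:
t = 0.6
z = t * np.exp(lam * t) * eta + np.exp(lam * t) * rho
dz = (np.exp(lam * t) * eta + lam * t * np.exp(lam * t) * eta
      + lam * np.exp(lam * t) * rho)
assert np.allclose(dz, A @ z)

# its real and imaginary parts are two real-valued solutions:
assert np.allclose(dz.real, A @ z.real)
assert np.allclose(dz.imag, A @ z.imag)
```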
Finally, with 4×4 systems we can now have eigenvalues with multiplicity of 4. In these cases we can have 1, 2, 3, or 4 linearly independent eigenvectors, and we can use our work with 3×3 systems to see how to generate solutions for these cases. The one issue that you'll need to pay attention to is that the conditions for the 2 and 3 eigenvector cases will have the same complications that the 2 eigenvector case has in the 3×3 systems.
So, we've discussed some of the issues involved in systems larger than 2×2, and it is hopefully clear that when we move into larger systems the work can become vastly more complicated.
