
Differential Equations - Second Order: Fundamental Sets of Solutions




The time has finally come to define “nice enough”. We’ve been using this term throughout the last few sections to describe those solutions that could be used to form a general solution and it is now time to officially define it.
First, because everything that we're going to be doing here only requires the differential equation to be linear and homogeneous, we won't require constant coefficients. So, let's start with the following IVP.
\[
p(t)y'' + q(t)y' + r(t)y = 0, \qquad y(t_0) = y_0, \qquad y'(t_0) = y'_0 \tag{1}
\]
Let’s also suppose that we have already found two solutions to this differential equation, y1(t) and y2(t). We know from the Principle of Superposition that
\[
y(t) = c_1 y_1(t) + c_2 y_2(t) \tag{2}
\]
will also be a solution to the differential equation. What we want to know is whether or not it will be a general solution. In order for (2) to be considered a general solution it must satisfy the general initial conditions in (1).
\[
y(t_0) = y_0 \qquad y'(t_0) = y'_0
\]
This will also imply that any solution to the differential equation can be written in this form.
So, let’s see if we can find constants that will satisfy these conditions. First differentiate (2) and plug in the initial conditions.
\[
\begin{aligned}
y_0 &= y(t_0) = c_1 y_1(t_0) + c_2 y_2(t_0) \\
y'_0 &= y'(t_0) = c_1 y'_1(t_0) + c_2 y'_2(t_0)
\end{aligned} \tag{3}
\]
Since we are assuming that we’ve already got the two solutions everything in this system is technically known and so this is a system that can be solved for c1 and c2. This can be done in general using Cramer’s Rule. Using Cramer’s Rule gives the following solution.
\[
c_1 = \frac{\begin{vmatrix} y_0 & y_2(t_0) \\ y'_0 & y'_2(t_0) \end{vmatrix}}{\begin{vmatrix} y_1(t_0) & y_2(t_0) \\ y'_1(t_0) & y'_2(t_0) \end{vmatrix}} \qquad c_2 = \frac{\begin{vmatrix} y_1(t_0) & y_0 \\ y'_1(t_0) & y'_0 \end{vmatrix}}{\begin{vmatrix} y_1(t_0) & y_2(t_0) \\ y'_1(t_0) & y'_2(t_0) \end{vmatrix}} \tag{4}
\]
where,
\[
\begin{vmatrix} a & b \\ c & d \end{vmatrix} = ad - bc
\]
is the determinant of a 2x2 matrix. If you don’t know about determinants that is okay, just use the formula that we’ve provided above.
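As a concrete sketch (the function names and the sample numbers here are our own, not from the text), the determinant formula above and Cramer's Rule can be turned into a few lines of code that solve a system like (3) for the constants:

```python
# Illustrative sketch: solving a 2x2 linear system with Cramer's Rule,
# using the determinant formula |a b; c d| = ad - bc.
def det2(a, b, c, d):
    """Determinant of the 2x2 matrix [[a, b], [c, d]]."""
    return a * d - b * c

def cramer_2x2(y1, y2, dy1, dy2, y0, dy0):
    """Solve c1*y1 + c2*y2 = y0, c1*dy1 + c2*dy2 = dy0 for (c1, c2).

    The arguments are the numbers y1(t0), y2(t0), y1'(t0), y2'(t0)
    and the initial values y0, y0'.  Fails if the denominator is zero.
    """
    w = det2(y1, y2, dy1, dy2)            # the common denominator in (4)
    if w == 0:
        raise ValueError("denominator (Wronskian) is zero: cannot solve")
    c1 = det2(y0, y2, dy0, dy2) / w       # numerator of c1 in (4)
    c2 = det2(y1, y0, dy1, dy0) / w       # numerator of c2 in (4)
    return c1, c2

# Made-up sample values: y1(t0)=1, y2(t0)=2, y1'(t0)=3, y2'(t0)=4,
# initial conditions y0=5, y0'=6.
print(cramer_2x2(1.0, 2.0, 3.0, 4.0, 5.0, 6.0))  # (-4.0, 4.5)
```

Plugging the returned constants back into both equations of the system reproduces the initial values, which is a quick way to convince yourself the formulas in (4) are right.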
Now, (4) will give the solution to the system (3). Note that in practice we generally don’t use Cramer’s Rule to solve systems, we just proceed in a straightforward manner and solve the system using basic algebra techniques. So, why did we use Cramer’s Rule here then?
We used Cramer’s Rule because we can use (4) to develop a condition that will allow us to determine when we can solve for the constants. All three (yes three, the denominators are the same!) of the quantities in (4) are just numbers and the only thing that will prevent us from actually getting a solution will be when the denominator is zero.
The quantity in the denominator is called the Wronskian and is denoted as
\[
W(f,g)(t) = \begin{vmatrix} f(t) & g(t) \\ f'(t) & g'(t) \end{vmatrix} = f(t)g'(t) - g(t)f'(t)
\]
When it is clear what the functions and/or t are we often just denote the Wronskian by W.
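The definition translates directly into code. Here is a minimal sketch (the helper name and the sin/cos example are ours, not from the text); for f(t) = sin t and g(t) = cos t the Wronskian is sin(t)(-sin t) - cos(t)cos(t) = -1 for every t:

```python
import math

# Sketch of the Wronskian W(f, g)(t) = f(t) g'(t) - g(t) f'(t).
# The derivatives are passed in explicitly so the formula stays exact.
def wronskian(f, df, g, dg, t):
    return f(t) * dg(t) - g(t) * df(t)

# Illustrative check with f(t) = sin t, g(t) = cos t at an arbitrary t:
# W = -(sin^2 t + cos^2 t) = -1.
w = wronskian(math.sin, math.cos, math.cos, lambda t: -math.sin(t), 0.7)
print(w)  # approximately -1.0
```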
Let’s recall what we were after here. We wanted to determine when two solutions to (1) would be nice enough to form a general solution. The two solutions will form a general solution to (1) if they satisfy the general initial conditions given in (1) and we can see from Cramer’s Rule that they will satisfy the initial conditions provided the Wronskian isn’t zero. Or,
\[
W(y_1,y_2)(t_0) = \begin{vmatrix} y_1(t_0) & y_2(t_0) \\ y'_1(t_0) & y'_2(t_0) \end{vmatrix} = y_1(t_0)y'_2(t_0) - y_2(t_0)y'_1(t_0) \ne 0
\]
So, suppose that y1(t) and y2(t) are two solutions to (1) and that \(W(y_1,y_2)(t) \ne 0\). Then the two solutions are called a fundamental set of solutions and the general solution to (1) is
\[
y(t) = c_1 y_1(t) + c_2 y_2(t)
\]
We know now what “nice enough” means. Two solutions are “nice enough” if they are a fundamental set of solutions.
So, let’s check one of the claims that we made in a previous section. We’ll leave the other two to you to check if you’d like to.


Example 1 Back in the complex root section we made the claim that
\[
y_1(t) = e^{\lambda t}\cos(\mu t) \qquad \text{and} \qquad y_2(t) = e^{\lambda t}\sin(\mu t)
\]
were a fundamental set of solutions. Prove that they in fact are.

So, to prove this we will need to take the Wronskian for these two solutions and show that it isn’t zero.
\[
\begin{aligned}
W &= \begin{vmatrix} e^{\lambda t}\cos(\mu t) & e^{\lambda t}\sin(\mu t) \\ \lambda e^{\lambda t}\cos(\mu t) - \mu e^{\lambda t}\sin(\mu t) & \lambda e^{\lambda t}\sin(\mu t) + \mu e^{\lambda t}\cos(\mu t) \end{vmatrix} \\
&= e^{\lambda t}\cos(\mu t)\left(\lambda e^{\lambda t}\sin(\mu t) + \mu e^{\lambda t}\cos(\mu t)\right) - e^{\lambda t}\sin(\mu t)\left(\lambda e^{\lambda t}\cos(\mu t) - \mu e^{\lambda t}\sin(\mu t)\right) \\
&= \mu e^{2\lambda t}\cos^2(\mu t) + \mu e^{2\lambda t}\sin^2(\mu t) \\
&= \mu e^{2\lambda t}\left(\cos^2(\mu t) + \sin^2(\mu t)\right) \\
&= \mu e^{2\lambda t}
\end{aligned}
\]
Now, the exponential will never be zero and \(\mu \ne 0\) (if it were we wouldn't have complex roots!) and so \(W \ne 0\). Therefore, these two solutions are in fact a fundamental set of solutions and so the general solution in this case is
\[
y(t) = c_1 e^{\lambda t}\cos(\mu t) + c_2 e^{\lambda t}\sin(\mu t)
\]
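A quick numerical spot-check of the Wronskian computed in this example (the particular values of lambda, mu and t below are arbitrary illustrative choices of ours, not from the text):

```python
import math

# Spot-check that W(y1, y2) = mu * e^(2*lambda*t) for
# y1 = e^(lt) cos(mt), y2 = e^(lt) sin(mt), at arbitrary sample values.
lam, mu, t = 0.5, 2.0, 1.3
y1 = math.exp(lam * t) * math.cos(mu * t)
y2 = math.exp(lam * t) * math.sin(mu * t)
# Product-rule derivatives of y1 and y2.
dy1 = lam * math.exp(lam * t) * math.cos(mu * t) - mu * math.exp(lam * t) * math.sin(mu * t)
dy2 = lam * math.exp(lam * t) * math.sin(mu * t) + mu * math.exp(lam * t) * math.cos(mu * t)
W = y1 * dy2 - y2 * dy1
print(W - mu * math.exp(2 * lam * t))  # essentially zero
```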


Example 2 In the first example that we worked in the Reduction of Order section we found a second solution to
\[
2t^2 y'' + t y' - 3y = 0
\]
Show that this second solution, along with the given solution, form a fundamental set of solutions for the differential equation.

The two solutions from that example are
\[
y_1(t) = t^{-1} \qquad y_2(t) = t^{\frac{3}{2}}
\]
Let's compute the Wronskian of these two solutions.
\[
W = \begin{vmatrix} t^{-1} & t^{\frac{3}{2}} \\ -t^{-2} & \tfrac{3}{2}t^{\frac{1}{2}} \end{vmatrix} = \tfrac{3}{2}t^{-\frac{1}{2}} - \left(-t^{-\frac{1}{2}}\right) = \tfrac{5}{2}t^{-\frac{1}{2}} = \frac{5}{2\sqrt{t}}
\]
So, the Wronskian will never be zero. Note that we can't plug t = 0 into the Wronskian. This would be a problem in finding the constants in the general solution, except that we also can't plug t = 0 into the solution either and so this isn't the problem that it might appear to be.
So, since the Wronskian isn’t zero for any t the two solutions form a fundamental set of solutions and the general solution is
\[
y(t) = c_1 t^{-1} + c_2 t^{\frac{3}{2}}
\]
as we claimed in that example.
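As with the previous example, the Wronskian here is easy to sanity-check numerically (the sample point t = 4 is an arbitrary choice of ours, not from the text):

```python
import math

# Check W = 5 / (2*sqrt(t)) for y1 = t^(-1), y2 = t^(3/2) at t = 4.
t = 4.0
y1, y2 = t**-1.0, t**1.5
dy1, dy2 = -t**-2.0, 1.5 * t**0.5    # derivatives of t^(-1) and t^(3/2)
W = y1 * dy2 - y2 * dy1
print(W)                         # 1.25
print(5 / (2 * math.sqrt(t)))    # 1.25
```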

To this point we've found a set of solutions and then claimed that they are in fact a fundamental set of solutions. Of course, you can now verify all those claims that we've made, however this does bring up a question. How do we know that for a given differential equation a fundamental set of solutions will exist? The following theorem answers this question.

Theorem


Consider the differential equation
\[
y'' + p(t)y' + q(t)y = 0
\]
where p(t) and q(t) are continuous functions on some interval I. Choose t0 to be any point in the interval I. Let y1(t) be a solution to the differential equation that satisfies the initial conditions
\[
y(t_0) = 1 \qquad y'(t_0) = 0
\]
Let y2(t) be a solution to the differential equation that satisfies the initial conditions
\[
y(t_0) = 0 \qquad y'(t_0) = 1
\]
Then y1(t) and y2(t) form a fundamental set of solutions for the differential equation.
It is easy enough to show that these two solutions form a fundamental set of solutions. Just compute the Wronskian.
\[
W(y_1,y_2)(t_0) = \begin{vmatrix} y_1(t_0) & y_2(t_0) \\ y'_1(t_0) & y'_2(t_0) \end{vmatrix} = \begin{vmatrix} 1 & 0 \\ 0 & 1 \end{vmatrix} = 1 - 0 = 1 \ne 0
\]
So, fundamental sets of solutions will exist provided we can solve the two IVP’s given in the theorem.


Example 3 Use the theorem to find a fundamental set of solutions for
\[
y'' + 4y' + 3y = 0
\]
using \(t_0 = 0\).

Using the techniques from the first part of this chapter we can find the two solutions that we’ve been using to this point.
\[
y_1(t) = e^{-3t} \qquad y_2(t) = e^{-t}
\]
These do form a fundamental set of solutions as we can easily verify. However, they are NOT the set that will be given by the theorem. Neither of these solutions will satisfy either of the two sets of initial conditions given in the theorem. We will have to use these to find the fundamental set of solutions that is given by the theorem.
We know that the following is also a solution to the differential equation.
\[
y(t) = c_1 e^{-3t} + c_2 e^{-t}
\]
So, let's apply the first set of initial conditions and see if we can find constants that will work.
\[
y(0) = 1 \qquad y'(0) = 0
\]
We'll leave it to you to verify that we get the following solution upon doing this.
\[
y_1(t) = -\frac{1}{2}e^{-3t} + \frac{3}{2}e^{-t}
\]
Likewise, if we apply the second set of initial conditions,
\[
y(0) = 0 \qquad y'(0) = 1
\]
we will get
\[
y_2(t) = -\frac{1}{2}e^{-3t} + \frac{1}{2}e^{-t}
\]
According to the theorem these should form a fundamental set of solutions. This is easy enough to check.
\[
\begin{aligned}
W &= \begin{vmatrix} -\frac{1}{2}e^{-3t} + \frac{3}{2}e^{-t} & -\frac{1}{2}e^{-3t} + \frac{1}{2}e^{-t} \\ \frac{3}{2}e^{-3t} - \frac{3}{2}e^{-t} & \frac{3}{2}e^{-3t} - \frac{1}{2}e^{-t} \end{vmatrix} \\
&= \left(-\frac{1}{2}e^{-3t} + \frac{3}{2}e^{-t}\right)\left(\frac{3}{2}e^{-3t} - \frac{1}{2}e^{-t}\right) - \left(-\frac{1}{2}e^{-3t} + \frac{1}{2}e^{-t}\right)\left(\frac{3}{2}e^{-3t} - \frac{3}{2}e^{-t}\right) \\
&= e^{-4t} \ne 0
\end{aligned}
\]
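Both claims are easy to confirm numerically: each solution meets its set of initial conditions at t0 = 0, and the Wronskian equals e^{-4t} (the sample point t = 0.8 below is an arbitrary choice of ours, not from the text):

```python
import math

# The theorem's fundamental set for y'' + 4y' + 3y = 0 with t0 = 0,
# together with the derivative of each solution.
def y1(t):  return -0.5 * math.exp(-3 * t) + 1.5 * math.exp(-t)
def dy1(t): return  1.5 * math.exp(-3 * t) - 1.5 * math.exp(-t)
def y2(t):  return -0.5 * math.exp(-3 * t) + 0.5 * math.exp(-t)
def dy2(t): return  1.5 * math.exp(-3 * t) - 0.5 * math.exp(-t)

print(y1(0.0), dy1(0.0))  # 1.0 0.0  (first set of initial conditions)
print(y2(0.0), dy2(0.0))  # 0.0 1.0  (second set of initial conditions)

t = 0.8
W = y1(t) * dy2(t) - y2(t) * dy1(t)
print(W - math.exp(-4 * t))  # essentially zero
```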

So, we got a completely different set of fundamental solutions from the theorem than what we’ve been using up to this point. This is not a problem. There are an infinite number of pairs of functions that we could use as a fundamental set of solutions for this problem.

So, which set of fundamental solutions should we use? Well, if we use the ones that we originally found, the general solution would be,
y(t)=c1e3t+c2et
Whereas, if we used the set from the theorem the general solution would be,
\[
y(t) = c_1\left(-\frac{1}{2}e^{-3t} + \frac{3}{2}e^{-t}\right) + c_2\left(-\frac{1}{2}e^{-3t} + \frac{1}{2}e^{-t}\right)
\]
This would not be very fun to work with when it came to determining the coefficients to satisfy a general set of initial conditions.

So, which set of fundamental solutions should we use? We should always try to use the set that is the most convenient to use for a given problem.
