
Differential Equations - Fourier Series: Eigenvalues and Eigenfunctions - i

As we did in the previous section we need to again note that we are only going to give a brief look at the topic of eigenvalues and eigenfunctions for boundary value problems. There are quite a few ideas that we’ll not be looking at here. The intent of this section is simply to give you an idea of the subject and to do enough work to allow us to solve some basic partial differential equations in the next chapter.

Now, before we start talking about the actual subject of this section let's recall a topic from Linear Algebra that we briefly discussed previously in these notes. For a given square matrix, \(A\), if we could find values of \(\lambda\) for which we could find nonzero solutions, i.e. \(\vec{x} \ne \vec{0}\), to,

\[ A\vec{x} = \lambda \vec{x} \]

then we called \(\lambda\) an eigenvalue of \(A\) and \(\vec{x}\) was its corresponding eigenvector.

It's important to recall here that in order for \(\lambda\) to be an eigenvalue then we had to be able to find nonzero solutions to the equation.
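If it's been a while since you've seen this, here is a quick numerical sketch of the defining property (this is our own illustration, using NumPy with an arbitrarily chosen matrix):

```python
import numpy as np

# An arbitrary 2x2 matrix, purely for illustration.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# np.linalg.eig returns the eigenvalues and a matrix whose
# columns are the corresponding (nonzero) eigenvectors.
eigenvalues, eigenvectors = np.linalg.eig(A)

for lam, x in zip(eigenvalues, eigenvectors.T):
    # Verify the defining property A x = lambda x for each pair.
    print(lam, np.allclose(A @ x, lam * x))  # prints True for each pair
```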
So, just what does this have to do with boundary value problems? Well, go back to the previous section and take a look at Example 7 and Example 8. In those two examples we solved homogeneous (and that's important!) BVP's in the form,

\[ y'' + \lambda y = 0, \qquad y(0) = 0, \quad y(2\pi) = 0 \tag{1} \]

In Example 7 we had \(\lambda = 4\) and we found nontrivial (i.e. nonzero) solutions to the BVP. In Example 8 we used \(\lambda = 3\) and the only solution was the trivial solution (i.e. \(y(x) = 0\)). So, this homogeneous BVP (recall this also means the boundary conditions are zero) seems to exhibit similar behavior to the behavior in the matrix equation above. There are values of \(\lambda\) that will give nontrivial solutions to this BVP and values of \(\lambda\) that will only admit the trivial solution.
So, for those values of λ that give nontrivial solutions we’ll call λ an eigenvalue for the BVP and the nontrivial solutions will be called eigenfunctions for the BVP corresponding to the given eigenvalue.
We now know that for the homogeneous BVP given in (1), \(\lambda = 4\) is an eigenvalue (with eigenfunctions \(y(x) = c_2 \sin(2x)\)) and that \(\lambda = 3\) is not an eigenvalue.
Eventually we'll try to determine if there are any other eigenvalues for (1), however before we do that let's comment briefly on why it is so important for the BVP to be homogeneous in this discussion. In Example 2 and Example 3 of the previous section we solved the homogeneous differential equation

\[ y'' + 4y = 0 \]

with two different nonhomogeneous boundary conditions in the form,

\[ y(0) = a, \qquad y(2\pi) = b \]
In these two examples we saw that by simply changing the value of \(a\) and/or \(b\) we were able either to get nontrivial solutions or to force no solution at all. In the discussion of eigenvalues/eigenfunctions we need solutions to exist and the only way to assure this behavior is to require that the boundary conditions also be homogeneous. In other words, we need for the BVP to be homogeneous.
There is one final topic that we need to discuss before we move into the topic of eigenvalues and eigenfunctions and this is more of a notational issue that will help us with some of the work that we’ll need to do.
Let's suppose that we have a second order differential equation and its characteristic polynomial has two real, distinct roots and that they are in the form

\[ r_1 = \alpha, \qquad r_2 = -\alpha \]

Then we know that the solution is,

\[ y(x) = c_1 e^{r_1 x} + c_2 e^{r_2 x} = c_1 e^{\alpha x} + c_2 e^{-\alpha x} \]
While there is nothing wrong with this solution let's do a little rewriting of this. We'll start by splitting up the terms as follows,

\[ y(x) = c_1 e^{\alpha x} + c_2 e^{-\alpha x} = \frac{c_1}{2} e^{\alpha x} + \frac{c_1}{2} e^{\alpha x} + \frac{c_2}{2} e^{-\alpha x} + \frac{c_2}{2} e^{-\alpha x} \]
Now we'll add/subtract the following terms (note we're "mixing" the \(c_i\) and \(\pm\alpha\) up in the new terms) to get,

\[ y(x) = \frac{c_1}{2} e^{\alpha x} + \frac{c_1}{2} e^{\alpha x} + \frac{c_2}{2} e^{-\alpha x} + \frac{c_2}{2} e^{-\alpha x} + \left( \frac{c_1}{2} e^{-\alpha x} - \frac{c_1}{2} e^{-\alpha x} \right) + \left( \frac{c_2}{2} e^{\alpha x} - \frac{c_2}{2} e^{\alpha x} \right) \]
Next, rearrange terms around a little,

\[ y(x) = \frac{1}{2} \left( c_1 e^{\alpha x} + c_1 e^{-\alpha x} + c_2 e^{\alpha x} + c_2 e^{-\alpha x} \right) + \frac{1}{2} \left( c_1 e^{\alpha x} - c_1 e^{-\alpha x} - c_2 e^{\alpha x} + c_2 e^{-\alpha x} \right) \]
Finally, the quantities in parentheses factor and we'll move the location of the fraction as well. Doing this, as well as renaming the new constants we get,

\[ y(x) = (c_1 + c_2) \frac{e^{\alpha x} + e^{-\alpha x}}{2} + (c_1 - c_2) \frac{e^{\alpha x} - e^{-\alpha x}}{2} = \overline{c}_1 \frac{e^{\alpha x} + e^{-\alpha x}}{2} + \overline{c}_2 \frac{e^{\alpha x} - e^{-\alpha x}}{2} \]
All this work probably seems very mysterious and unnecessary. However, there really was a reason for it. In fact, you may have already seen the reason, at least in part. The two "new" functions that we have in our solution are in fact two of the hyperbolic functions. In particular,

\[ \cosh(x) = \frac{e^x + e^{-x}}{2}, \qquad \sinh(x) = \frac{e^x - e^{-x}}{2} \]
So, another way to write the solution to a second order differential equation whose characteristic polynomial has two real, distinct roots in the form \(r_1 = \alpha\), \(r_2 = -\alpha\) is,

\[ y(x) = c_1 \cosh(\alpha x) + c_2 \sinh(\alpha x) \]
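As a quick sanity check on all of this rewriting, here is a small SymPy sketch (our own verification, taking \(\overline{c}_1 = c_1 + c_2\) and \(\overline{c}_2 = c_1 - c_2\)) confirming that the exponential and hyperbolic forms of the solution agree:

```python
import sympy as sp

x, alpha, c1, c2 = sp.symbols('x alpha c1 c2')

exponential_form = c1 * sp.exp(alpha * x) + c2 * sp.exp(-alpha * x)
# Hyperbolic form with c1bar = c1 + c2 and c2bar = c1 - c2.
hyperbolic_form = (c1 + c2) * sp.cosh(alpha * x) + (c1 - c2) * sp.sinh(alpha * x)

# The difference simplifies to zero, so the two forms are identical.
print(sp.simplify(exponential_form - hyperbolic_form))  # 0
```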
Having the solution in this form for some (actually most) of the problems we'll be looking at will make our life a lot easier. The hyperbolic functions have some very nice properties that we can (and will) take advantage of.
First, since we'll be needing them later on, the derivatives are,

\[ \frac{d}{dx}\big(\cosh(x)\big) = \sinh(x), \qquad \frac{d}{dx}\big(\sinh(x)\big) = \cosh(x) \]
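Both of these are easy to confirm symbolically; a minimal SymPy check (ours, just for verification):

```python
import sympy as sp

x = sp.symbols('x')

# Differentiating each hyperbolic function returns the other one.
print(sp.diff(sp.cosh(x), x))  # sinh(x)
print(sp.diff(sp.sinh(x), x))  # cosh(x)
```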
Next let's take a quick look at the graphs of these functions.

[Graphs of \(\cosh(x)\) and \(\sinh(x)\).]
Note that cosh(0)=1 and sinh(0)=0. Because we’ll often be working with boundary conditions at x=0 these will be useful evaluations.
Next, and possibly more importantly, let’s notice that cosh(x)>0 for all x and so the hyperbolic cosine will never be zero. Likewise, we can see that sinh(x)=0 only if x=0. We will be using both of these facts in some of our work so we shouldn’t forget them.
Okay, now that we’ve got all that out of the way let’s work an example to see how we go about finding eigenvalues/eigenfunctions for a BVP.


Example 1 Find all the eigenvalues and eigenfunctions for the following BVP.

\[ y'' + \lambda y = 0, \qquad y(0) = 0, \quad y(2\pi) = 0 \]

We started off this section looking at this BVP and we already know one eigenvalue (\(\lambda = 4\)) and we know one value of \(\lambda\) that is not an eigenvalue (\(\lambda = 3\)). As we go through the work here we need to remember that we will get an eigenvalue for a particular value of \(\lambda\) if we get non-trivial solutions of the BVP for that particular value of \(\lambda\).

In order to know that we've found all the eigenvalues we can't just start randomly trying values of \(\lambda\) to see if we get non-trivial solutions or not. Luckily there is a way to do this that's not too bad and will give us all the eigenvalues/eigenfunctions. We are going to have to do some cases however. The three cases that we will need to look at are: \(\lambda > 0\), \(\lambda = 0\), and \(\lambda < 0\). Each of these cases gives a specific form of the solution to the BVP to which we can then apply the boundary conditions to see if we'll get non-trivial solutions or not. So, let's get started on the cases.
\(\underline{\lambda > 0}\)
In this case the characteristic polynomial we get from the differential equation is,

\[ r^2 + \lambda = 0 \quad \Rightarrow \quad r_{1,2} = \pm\sqrt{-\lambda} \]

In this case since we know that \(\lambda > 0\) these roots are complex and we can write them instead as,

\[ r_{1,2} = \pm \sqrt{\lambda}\,i \]

The general solution to the differential equation is then,

\[ y(x) = c_1 \cos\!\left(\sqrt{\lambda}\,x\right) + c_2 \sin\!\left(\sqrt{\lambda}\,x\right) \]

Applying the first boundary condition gives us,

\[ 0 = y(0) = c_1 \]

So, taking this into account and applying the second boundary condition we get,

\[ 0 = y(2\pi) = c_2 \sin\!\left(2\pi\sqrt{\lambda}\right) \]

This means that we have to have one of the following,

\[ c_2 = 0 \qquad \text{or} \qquad \sin\!\left(2\pi\sqrt{\lambda}\right) = 0 \]

However, recall that we want non-trivial solutions and if we have the first possibility we will get the trivial solution for all values of \(\lambda > 0\). Therefore, let's assume that \(c_2 \ne 0\). This means that we have,

\[ \sin\!\left(2\pi\sqrt{\lambda}\right) = 0 \quad \Rightarrow \quad 2\pi\sqrt{\lambda} = n\pi, \qquad n = 1, 2, 3, \ldots \]

In other words, taking advantage of the fact that we know where sine is zero we can arrive at the second equation. Also note that because we are assuming that \(\lambda > 0\) we know that \(2\pi\sqrt{\lambda} > 0\) and so \(n\) can only be a positive integer for this case.
Now all we have to do is solve this for λ and we’ll have all the positive eigenvalues for this BVP.
The positive eigenvalues are then,

\[ \lambda_n = \left(\frac{n}{2}\right)^2 = \frac{n^2}{4}, \qquad n = 1, 2, 3, \ldots \]

and the eigenfunctions that correspond to these eigenvalues are,

\[ y_n(x) = \sin\!\left(\frac{n x}{2}\right), \qquad n = 1, 2, 3, \ldots \]

Note that we subscripted an \(n\) on the eigenvalues and eigenfunctions to denote the fact that there is one for each of the given values of \(n\). Also note that we dropped the \(c_2\) on the eigenfunctions. For eigenfunctions we are only interested in the function itself and not the constant in front of it and so we generally drop that.
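As a quick check (our own, not part of the original derivation), a short SymPy sketch confirms that each pair \(\lambda_n = n^2/4\), \(y_n(x) = \sin(n x / 2)\) really does satisfy the BVP in (1):

```python
import sympy as sp

x = sp.symbols('x')

for n in range(1, 5):
    lam = sp.Rational(n**2, 4)   # candidate eigenvalue n^2/4
    y = sp.sin(n * x / 2)        # candidate eigenfunction sin(nx/2)

    # Each printed triple is (0, 0, 0): the differential equation
    # y'' + lam*y = 0 and both boundary conditions are satisfied.
    print(sp.simplify(sp.diff(y, x, 2) + lam * y),
          y.subs(x, 0),
          y.subs(x, 2 * sp.pi))
```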
Let’s now move into the second case.
\(\underline{\lambda = 0}\)
In this case the BVP becomes,

\[ y'' = 0, \qquad y(0) = 0, \quad y(2\pi) = 0 \]

and integrating the differential equation a couple of times gives us the general solution,

\[ y(x) = c_1 + c_2 x \]

Applying the first boundary condition gives,

\[ 0 = y(0) = c_1 \]

Applying the second boundary condition as well as the results of the first boundary condition gives,

\[ 0 = y(2\pi) = 2\pi c_2 \]

Here, unlike the first case, we don't have a choice on how to make this zero. This will only be zero if \(c_2 = 0\).

Therefore, for this BVP (and that's important), if we have \(\lambda = 0\) the only solution is the trivial solution and so \(\lambda = 0\) cannot be an eigenvalue for this BVP.
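Again as a quick check of our own, imposing both boundary conditions on this general solution with SymPy shows the constants are forced to zero:

```python
import sympy as sp

x, c1, c2 = sp.symbols('x c1 c2')
y = c1 + c2 * x   # general solution of y'' = 0

# Both boundary conditions force both constants to vanish.
print(sp.solve([y.subs(x, 0), y.subs(x, 2 * sp.pi)], [c1, c2]))
# {c1: 0, c2: 0} -> only the trivial solution
```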
Now let’s look at the final case.
\(\underline{\lambda < 0}\)
In this case the characteristic equation and its roots are the same as in the first case. So, we know that,

\[ r_{1,2} = \pm\sqrt{-\lambda} \]

However, because we are assuming \(\lambda < 0\) here these are now two real distinct roots and so using our work above for these kinds of real, distinct roots we know that the general solution will be,

\[ y(x) = c_1 \cosh\!\left(\sqrt{-\lambda}\,x\right) + c_2 \sinh\!\left(\sqrt{-\lambda}\,x\right) \]

Note that we could have used the exponential form of the solution here, but our work will be significantly easier if we use the hyperbolic form of the solution.

Now, applying the first boundary condition gives,

\[ 0 = y(0) = c_1 \cosh(0) + c_2 \sinh(0) = c_1 (1) + c_2 (0) = c_1 \quad \Rightarrow \quad c_1 = 0 \]

Applying the second boundary condition gives,

\[ 0 = y(2\pi) = c_2 \sinh\!\left(2\pi\sqrt{-\lambda}\right) \]

Because we are assuming \(\lambda < 0\) we know that \(2\pi\sqrt{-\lambda} \ne 0\) and so we also know that \(\sinh\!\left(2\pi\sqrt{-\lambda}\right) \ne 0\). Therefore, much like the second case, we must have \(c_2 = 0\).

So, for this BVP (again that's important), if we have \(\lambda < 0\) we only get the trivial solution and so there are no negative eigenvalues.
In summary then we will have the following eigenvalues/eigenfunctions for this BVP.
\[ \lambda_n = \frac{n^2}{4}, \qquad y_n(x) = \sin\!\left(\frac{n x}{2}\right), \qquad n = 1, 2, 3, \ldots \]
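As an independent numerical sanity check (ours; the grid size is an arbitrary choice), we can discretize \(-y'' = \lambda y\) on \((0, 2\pi)\) with \(y(0) = y(2\pi) = 0\) by standard central differences; the smallest eigenvalues of the resulting matrix should approximate \(n^2/4\):

```python
import numpy as np

# Central-difference approximation of -y'' on N interior grid points
# of (0, 2*pi), with y(0) = y(2*pi) = 0 built into the matrix.
N = 500
h = 2 * np.pi / (N + 1)

A = (np.diag(np.full(N, 2.0))
     + np.diag(np.full(N - 1, -1.0), 1)
     + np.diag(np.full(N - 1, -1.0), -1)) / h**2

# The difference matrix is symmetric, so eigvalsh applies.
eigenvalues = np.sort(np.linalg.eigvalsh(A))

print(eigenvalues[:4])                  # approximately 0.25, 1.0, 2.25, 4.0
print([n**2 / 4 for n in range(1, 5)])  # exact eigenvalues n^2/4
```

The agreement improves as \(N\) grows, in line with the summary above.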
Let’s take a look at another example with slightly different boundary conditions.

