
Differential Equations - Laplace Transforms: Definition


You know, it’s always a little scary when we devote a whole section just to the definition of something. Laplace transforms (or just transforms) can seem scary when we first start looking at them. However, as we will see, they aren’t as bad as they may appear at first.
Before we start with the definition of the Laplace transform we need to get another definition out of the way.

A function is called piecewise continuous on an interval if the interval can be broken into a finite number of subintervals such that the function is continuous on each open subinterval (i.e. the subinterval without its endpoints) and has a finite limit at the endpoints of each subinterval. Below is a sketch of a piecewise continuous function.



In other words, a piecewise continuous function is a function that has a finite number of breaks in it and doesn’t blow up to infinity anywhere.
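As a quick illustration of the definition, here is a hypothetical piecewise continuous function in Python: a single finite jump at t = 1, with the function continuous on each open subinterval and finite one-sided limits at the break (the function name and break point are arbitrary choices for illustration, not from the text):

```python
def f(t):
    """A piecewise continuous function: one finite jump at t = 1,
    continuous on the open subintervals t < 1 and t > 1."""
    if t < 1:
        return t ** 2   # finite limit 1 as t approaches 1 from the left
    return t + 3        # finite limit 4 as t approaches 1 from the right

print(f(0.5), f(2.0))   # → 0.25 5.0
```

The jump at t = 1 (from 1 up to 4) is a "break" in the sense above, but the function never blows up, so it still qualifies as piecewise continuous.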
Now, let’s take a look at the definition of the Laplace transform.

Definition


Suppose that f(t) is a piecewise continuous function. The Laplace transform of f(t) is denoted \(\mathcal{L}\{f(t)\}\) and defined as

\[ \mathcal{L}\{f(t)\} = \int_0^{\infty} e^{-st} f(t)\, dt \tag{1} \]
There is an alternate notation for Laplace transforms. For the sake of convenience we will often denote Laplace transforms as,
\[ \mathcal{L}\{f(t)\} = F(s) \]
With this alternate notation, note that the transform is really a function of a new variable, s, and that all the t’s will drop out in the integration process.
Now, the integral in the definition of the transform is called an improper integral and it would probably be best to recall how these kinds of integrals work before we actually jump into computing some transforms.


Example 1 If \(c \ne 0\), evaluate the following integral.

\[ \int_0^{\infty} e^{ct}\, dt \]

Remember that you need to convert improper integrals to limits as follows,

\[ \int_0^{\infty} e^{ct}\, dt = \lim_{n \to \infty} \int_0^{n} e^{ct}\, dt \]

Now, do the integral, then evaluate the limit.

\[ \int_0^{\infty} e^{ct}\, dt = \lim_{n \to \infty} \int_0^{n} e^{ct}\, dt = \lim_{n \to \infty} \left. \left( \frac{1}{c} e^{ct} \right) \right|_0^{n} = \lim_{n \to \infty} \left( \frac{1}{c} e^{cn} - \frac{1}{c} \right) \]

Now, at this point, we've got to be careful. The value of c will affect our answer. We've already assumed that c was non-zero; now we need to worry about the sign of c. If c is positive the exponential will go to infinity. On the other hand, if c is negative the exponential will go to zero.

So, the integral is only convergent (i.e. the limit exists and is finite) provided c < 0. In this case we get,

\[ \int_0^{\infty} e^{ct}\, dt = -\frac{1}{c} \qquad \text{provided } c < 0 \tag{2} \]
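As a sanity check on this closed form, the following Python sketch approximates the improper integral numerically for a sample negative c. The helper name, truncation point, and step count are arbitrary choices (not from the text): we truncate the tail at a point where the integrand has decayed to essentially zero and apply the composite trapezoid rule.

```python
import math

def improper_integral_exp(c, upper=40.0, steps=200_000):
    """Approximate ∫_0^∞ e^(ct) dt for c < 0 by truncating the tail
    at `upper` (negligible there) and using the trapezoid rule."""
    h = upper / steps
    total = 0.5 * (math.exp(c * 0.0) + math.exp(c * upper))
    for i in range(1, steps):
        total += math.exp(c * i * h)
    return total * h

c = -2.0
print(improper_integral_exp(c), -1.0 / c)  # both ≈ 0.5
```

For c = −2 the approximation agrees with −1/c = 0.5 to several decimal places, matching the formula above.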
Now that we remember how to do these, let’s compute some Laplace transforms. We’ll start off with probably the simplest Laplace transform to compute.


Example 2 Compute \(\mathcal{L}\{1\}\).

There's not really a whole lot to do here other than plug the function f(t) = 1 into (1).

\[ \mathcal{L}\{1\} = \int_0^{\infty} e^{-st}\, dt \]

Now, at this point notice that this is nothing more than the integral in the previous example with \(c = -s\). Therefore, all we need to do is reuse (2) with the appropriate substitution. Doing this gives,

\[ \mathcal{L}\{1\} = \int_0^{\infty} e^{-st}\, dt = -\frac{1}{-s} \qquad \text{provided } -s < 0 \]

Or, with some simplification we have,

\[ \mathcal{L}\{1\} = \frac{1}{s} \qquad \text{provided } s > 0 \]

Notice that we had to put a restriction on s in order to actually compute the transform. All Laplace transforms will have restrictions on s. At this stage of the game, this restriction is something that we tend to ignore, but we really shouldn’t ever forget that it’s there.
Let’s do another example.


Example 3 Compute \(\mathcal{L}\{e^{at}\}\).

Plug the function into the definition of the transform and do a little simplification.
\[ \mathcal{L}\{e^{at}\} = \int_0^{\infty} e^{-st} e^{at}\, dt = \int_0^{\infty} e^{(a-s)t}\, dt \]

Once again, notice that we can use (2) provided \(c = a - s\). So, let's do this.

\[ \begin{aligned} \mathcal{L}\{e^{at}\} = \int_0^{\infty} e^{(a-s)t}\, dt &= -\frac{1}{a-s} && \text{provided } a - s < 0 \\ &= \frac{1}{s-a} && \text{provided } s > a \end{aligned} \]

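The closed forms from Examples 2 and 3 can be spot-checked numerically. This is a rough sketch under stated assumptions (the truncation point, step count, and sample values of s and a below are arbitrary): we truncate the improper integral where the integrand has essentially decayed to zero and apply the composite trapezoid rule.

```python
import math

def laplace_numeric(f, s, upper=40.0, steps=200_000):
    """Approximate L{f}(s) = ∫_0^∞ e^(-st) f(t) dt by truncating the
    integral at `upper` and using the composite trapezoid rule."""
    h = upper / steps
    g = lambda t: math.exp(-s * t) * f(t)
    total = 0.5 * (g(0.0) + g(upper))
    for i in range(1, steps):
        total += g(i * h)
    return total * h

s, a = 3.0, 1.0
print(laplace_numeric(lambda t: 1.0, s), 1 / s)                    # L{1} ≈ 1/s
print(laplace_numeric(lambda t: math.exp(a * t), s), 1 / (s - a))  # L{e^at} ≈ 1/(s-a)
```

Note that the check only works where the transforms exist: we had to pick s > 0 for the first and s > a for the second, exactly the restrictions derived above.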
Let’s do one more example that doesn’t come down to an application of (2).


Example 4 Compute \(\mathcal{L}\{\sin(at)\}\).

Note that we're going to leave it to you to check most of the integration here. Plug the function into the definition. This time let's also use the alternate notation.

\[ \mathcal{L}\{\sin(at)\} = F(s) = \int_0^{\infty} e^{-st} \sin(at)\, dt = \lim_{n \to \infty} \int_0^{n} e^{-st} \sin(at)\, dt \]

Now, if we integrate by parts we will arrive at,

\[ F(s) = \lim_{n \to \infty} \left( \left. \left( -\frac{1}{a} e^{-st} \cos(at) \right) \right|_0^{n} - \frac{s}{a} \int_0^{n} e^{-st} \cos(at)\, dt \right) \]

Now, evaluate the first term to simplify it a little and integrate by parts again on the integral. Doing this arrives at,

\[ F(s) = \lim_{n \to \infty} \left( \frac{1}{a} \left( 1 - e^{-sn} \cos(an) \right) - \frac{s}{a} \left( \left. \left( \frac{1}{a} e^{-st} \sin(at) \right) \right|_0^{n} + \frac{s}{a} \int_0^{n} e^{-st} \sin(at)\, dt \right) \right) \]

Now, evaluate the second term, take the limit and simplify.

\[ \begin{aligned} F(s) &= \lim_{n \to \infty} \left( \frac{1}{a} \left( 1 - e^{-sn} \cos(an) \right) - \frac{s}{a} \left( \frac{1}{a} e^{-sn} \sin(an) + \frac{s}{a} \int_0^{n} e^{-st} \sin(at)\, dt \right) \right) \\ &= \frac{1}{a} - \frac{s}{a} \left( \frac{s}{a} \int_0^{\infty} e^{-st} \sin(at)\, dt \right) \\ &= \frac{1}{a} - \frac{s^2}{a^2} \int_0^{\infty} e^{-st} \sin(at)\, dt \end{aligned} \]

Now, notice that in the limits we had to assume that s > 0 in order to do the following two limits.

\[ \lim_{n \to \infty} e^{-sn} \cos(an) = 0 \qquad \qquad \lim_{n \to \infty} e^{-sn} \sin(an) = 0 \]

Without this assumption, we get a divergent integral again. Also, note that when we got back to the integral we just converted the upper limit back to infinity. The reason for this is that, if you think about it, this integral is nothing more than the integral that we started with. Therefore, we now get,

\[ F(s) = \frac{1}{a} - \frac{s^2}{a^2} F(s) \]

Now, simply solve for F(s) to get,

\[ \mathcal{L}\{\sin(at)\} = F(s) = \frac{a}{s^2 + a^2} \qquad \text{provided } s > 0 \]
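The final algebra step is easy to verify for sample values: plugging \(F(s) = a/(s^2 + a^2)\) back into \(F(s) = 1/a - (s^2/a^2)F(s)\) should balance exactly (the sample values of a and s below are arbitrary choices):

```python
a, s = 2.0, 3.0
F = a / (s**2 + a**2)            # the claimed transform of sin(at)
rhs = 1 / a - (s**2 / a**2) * F  # right side of F(s) = 1/a - (s^2/a^2) F(s)
print(abs(F - rhs) < 1e-12)      # → True
```

Any other choice of a ≠ 0 gives the same agreement, since solving the equation for F(s) is exact algebra.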

As this example shows, computing Laplace transforms is often messy.
Before moving on to the next section, we need to do a little side note. On occasion you will see the following as the definition of the Laplace transform.
\[ \mathcal{L}\{f(t)\} = \int_{-\infty}^{\infty} e^{-st} f(t)\, dt \]

Note the change in the lower limit from zero to negative infinity. In these cases there is almost always the assumption that the function f(t) is in fact defined as follows,

\[ f(t) = \begin{cases} f(t) & \text{if } t \ge 0 \\ 0 & \text{if } t < 0 \end{cases} \]
In other words, it is assumed that the function is zero if t < 0. In this case the first half of the integral will drop out since the function is zero there, and we will get back to the definition given in (1). A Heaviside function is usually used to make the function zero for t < 0. We will be looking at these in a later section.
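A quick sketch of this convention in Python: a hypothetical helper (the name `causal` is ours, not from the text) that forces any function to be zero for t < 0, mirroring multiplication by a Heaviside step:

```python
import math

def causal(f):
    """Return a version of f that is zero for t < 0 — the usual
    convention behind the two-sided form of the transform."""
    return lambda t: f(t) if t >= 0 else 0.0

g = causal(math.cos)
print(g(-1.0), g(0.0))  # → 0.0 1.0
```

With this convention, integrating the wrapped function from −∞ to ∞ gives the same value as integrating the original from 0 to ∞.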
