
Differential Equations - Fourier Series: Convergence

Over the last few sections we’ve spent a fair amount of time computing Fourier series, but we’ve avoided discussing the topic of convergence of the series. In other words, will the Fourier series converge to the function on the given interval?
In this section we’re going to address this issue as well as a couple of other issues about Fourier series. We’ll be giving a fair number of theorems in this section but are not going to be proving any of them. We’ll also not be doing a whole lot in the way of examples in this section.
Before we get into the topic of convergence we need to first define a couple of terms that we’ll run into in the rest of the section. First, we say that f(x) has a jump discontinuity at x = a if the limit of the function from the left, denoted f(a⁻), and the limit of the function from the right, denoted f(a⁺), both exist and f(a⁻) ≠ f(a⁺).
Next, we say that f(x) is piecewise smooth if the function can be broken into distinct pieces and on each piece both the function and its derivative, f′(x), are continuous. A piecewise smooth function may not be continuous everywhere; however, the only discontinuities that are allowed are a finite number of jump discontinuities.
Let’s consider the function,
\[ f(x) = \begin{cases} L & \mbox{if } -L \le x \le 0 \\ 2x & \mbox{if } 0 < x \le L \end{cases} \]
We found the Fourier series for this function in Example 2 of the previous section. Here is a sketch of this function on the interval on which it is defined, i.e. -L ≤ x ≤ L.

This function has a jump discontinuity at x = 0 because f(0⁻) = L ≠ 0 = f(0⁺), and note that on the intervals -L ≤ x ≤ 0 and 0 < x ≤ L both the function and its derivative are continuous. This is therefore an example of a piecewise smooth function. Note that the function itself is not continuous at x = 0, but because this point of discontinuity is a jump discontinuity the function is still piecewise smooth.
The last term we need to define is that of periodic extension. Given a function, f(x), defined on some interval, we’ll be using -L ≤ x ≤ L exclusively here, the periodic extension of this function is the new function we get by taking the graph of the function on the given interval and then repeating that graph to the right and left of the graph of the original function on the given interval.
It is probably best to see an example of a periodic extension at this point to help make the words above a little clearer. Here is a sketch of the periodic extension of the function we looked at above,
The original function is the solid line in the range -L ≤ x ≤ L. We then got the periodic extension of this by picking this piece up and copying it every interval of length 2L to the right and left of the original graph. This is shown with the two sets of dashed lines to either side of the original graph.
Note that the resulting function that we get from defining the periodic extension is in fact a new periodic function that is equal to the original function on -L ≤ x ≤ L.
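Since the periodic extension is defined by nothing more than shifting every point back into the base interval, it is easy to compute. Here is a minimal sketch in Python using the example function from above (the function names and the choice L = 1 are ours, not part of the text):

```python
# Sketch of a periodic extension; names and L = 1 are our own choices.
L = 1.0

def f(x):
    # The example function: f(x) = L on -L <= x <= 0, f(x) = 2x on 0 < x <= L.
    return L if x <= 0 else 2 * x

def periodic_extension(x):
    # Shift x by L, reduce mod the period 2L, shift back into [-L, L).
    return f(((x + L) % (2 * L)) - L)

# The extension agrees with f on the original interval ...
print(periodic_extension(0.25), f(0.25))
# ... and repeats with period 2L outside of it.
print(periodic_extension(0.25 + 2 * L))
```

Any x lands back in the base interval after the modulo step, so the extension is automatically periodic with period 2L.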
With these definitions out of the way we can now proceed to talk a little bit about the convergence of Fourier series. We will start off with the convergence of a Fourier series and once we have that taken care of the convergence of Fourier Sine/Cosine series will follow as a direct consequence. Here then is the theorem giving the convergence of a Fourier series.

Convergence of Fourier series

Suppose f(x) is piecewise smooth on the interval -L ≤ x ≤ L. The Fourier series of f(x) will then converge to,
  1. the periodic extension of f(x) if the periodic extension is continuous.
  2. the average of the two one-sided limits, ½[f(a⁻) + f(a⁺)], if the periodic extension has a jump discontinuity at x = a.

The first thing to note about this is that on the interval -L ≤ x ≤ L both the function and the periodic extension are equal, and so where the function is continuous on -L ≤ x ≤ L the periodic extension will also be continuous and hence at these points the Fourier series will in fact converge to the function. The only points in the interval -L ≤ x ≤ L where the Fourier series will not converge to the function are where the function has a jump discontinuity.
Let’s again consider Example 2 of the previous section. In that section we found that the Fourier series of,
\[ f(x) = \begin{cases} L & \mbox{if } -L \le x \le 0 \\ 2x & \mbox{if } 0 < x \le L \end{cases} \]
on -L ≤ x ≤ L to be,
\[ f(x) = L + \sum_{n=1}^{\infty} \frac{2L}{n^2 \pi^2}\left( (-1)^n - 1 \right) \cos\left( \frac{n\pi x}{L} \right) - \sum_{n=1}^{\infty} \frac{L}{n\pi}\left( 1 + (-1)^n \right) \sin\left( \frac{n\pi x}{L} \right) \]
We now know that in the intervals -L < x < 0 and 0 < x < L the function and hence the periodic extension are both continuous, and so on these two intervals the Fourier series will converge to the periodic extension and hence will converge to the function itself.
At the point x = 0 the function has a jump discontinuity and so the periodic extension will also have a jump discontinuity at this point. That means that at x = 0 the Fourier series will converge to,
\[ \frac{1}{2}\left[ f(0^-) + f(0^+) \right] = \frac{1}{2}\left[ L + 0 \right] = \frac{L}{2} \]
At the two endpoints of the interval, x = -L and x = L, we can see from the sketch of the periodic extension above that the periodic extension has a jump discontinuity here, and so the Fourier series will not converge to the function there but instead to the averages of the one-sided limits.
So, at x=L the Fourier series will converge to,
\[ \frac{1}{2}\left[ f(L^-) + f(L^+) \right] = \frac{1}{2}\left[ 2L + L \right] = \frac{3L}{2} \]
and at x = -L the Fourier series will converge to,
\[ \frac{1}{2}\left[ f(-L^-) + f(-L^+) \right] = \frac{1}{2}\left[ 2L + L \right] = \frac{3L}{2} \]
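The three convergence claims above are easy to check numerically by evaluating a large partial sum of the series. Here is a rough sketch in Python (the function name and the choice L = 1 are ours), using the Fourier series found above:

```python
import math

# Numerical check of the convergence theorem; names and L = 1 are our own.
L = 1.0

def partial_sum(x, N):
    # Partial sum of  L + sum 2L/(n^2 pi^2)((-1)^n - 1) cos(n pi x / L)
    #                   - sum L/(n pi)(1 + (-1)^n) sin(n pi x / L)
    s = L
    for n in range(1, N + 1):
        s += 2 * L / (n**2 * math.pi**2) * ((-1)**n - 1) * math.cos(n * math.pi * x / L)
        s -= L / (n * math.pi) * (1 + (-1)**n) * math.sin(n * math.pi * x / L)
    return s

print(partial_sum(0.5, 2000))  # near f(0.5) = 1: point of continuity
print(partial_sum(0.0, 2000))  # near L/2 = 0.5: jump at x = 0
print(partial_sum(1.0, 2000))  # near 3L/2 = 1.5: jump at the endpoint
```

At the point of continuity the partial sums approach the function itself, while at the two jumps they approach the averages L/2 and 3L/2, exactly as the theorem predicts.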
Now that we have addressed the convergence of a Fourier series we can briefly turn our attention to the convergence of Fourier sine/cosine series. First, since, as noted in the previous section, the Fourier sine series of an odd function on -L ≤ x ≤ L and the Fourier cosine series of an even function on -L ≤ x ≤ L are both just special cases of a Fourier series, we now know that both of these will have the same convergence as a Fourier series.
Next, if we look at the Fourier sine series of any function, g(x), on 0 ≤ x ≤ L then we know that this is just the Fourier series of the odd extension of g(x) restricted down to the interval 0 ≤ x ≤ L. Therefore, we know that the Fourier series will converge to the odd extension on -L ≤ x ≤ L where it is continuous and to the average of the one-sided limits where the odd extension has a jump discontinuity. However, on 0 ≤ x ≤ L we know that g(x) and the odd extension are equal, and so we can again see that the Fourier sine series will have the same convergence as the Fourier series.
Likewise, we can go through a similar argument for the Fourier cosine series using even extensions to see that the Fourier cosine series for a function on 0 ≤ x ≤ L will also have the same convergence as a Fourier series.
The next topic that we want to briefly discuss here is when a Fourier series will be continuous. From the theorem on the convergence of Fourier series we know that where the function is continuous the Fourier series will converge to the function and hence be continuous at these points. The only places where the Fourier series may not be continuous are where there is a jump discontinuity on the interval -L ≤ x ≤ L and potentially at the endpoints, as we saw that the periodic extension may introduce a jump discontinuity there.
So, if we’re going to want the Fourier series to be continuous everywhere we’ll need to make sure that the function does not have any discontinuities in -L ≤ x ≤ L. Also, in order to avoid having the periodic extension introduce a jump discontinuity we’ll need to require that f(-L) = f(L). By doing this the two ends of the graph will match up when we form the periodic extension and hence we will avoid a jump discontinuity at the endpoints.

Here is a summary of these ideas for a Fourier series.
Suppose f(x) is piecewise smooth on the interval -L ≤ x ≤ L. The Fourier series of f(x) will be continuous and will converge to f(x) on -L ≤ x ≤ L provided f(x) is continuous on -L ≤ x ≤ L and f(-L) = f(L).
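As a quick illustration of the endpoint condition (the example is ours, not from the discussion above): f(x) = x is continuous on -L ≤ x ≤ L but fails the requirement f(-L) = f(L), so its Fourier series is not continuous at the endpoints.

```latex
% f(x) = x is continuous on -L \le x \le L, but f(-L) = -L \ne L = f(L).
% The periodic extension therefore has a jump at x = \pm L, and there the
% Fourier series converges to the average of the one-sided limits instead of f:
\frac{1}{2}\left[ f(L^-) + f(L^+) \right]
    = \frac{1}{2}\left[ L + (-L) \right] = 0 \ne L = f(L)
```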
Now, how can we use this to get similar statements about Fourier sine/cosine series on 0 ≤ x ≤ L? Let’s start with a Fourier cosine series. The first thing that we do is form the even extension of f(x) on -L ≤ x ≤ L. For the purposes of this discussion let’s call the even extension g(x). As we saw when we sketched several even extensions in the Fourier cosine series section, in order for the sketch to be the even extension of the function we must have both,
\[ g(0^-) = g(0^+) \qquad\qquad g(-L) = g(L) \]
If one or both of these aren’t true then g(x) will not be an even extension of f(x).
So, in forming the even extension we do not introduce any jump discontinuities at x = 0 and we get for free that g(-L) = g(L). If we now apply the above theorem to the even extension we see that the Fourier series of the even extension is continuous on -L ≤ x ≤ L. However, because the even extension and the function itself are the same on 0 ≤ x ≤ L, the Fourier cosine series of f(x) must also be continuous on 0 ≤ x ≤ L.

Here is a summary of this discussion for the Fourier cosine series.
Suppose f(x) is piecewise smooth on the interval 0 ≤ x ≤ L. The Fourier cosine series of f(x) will be continuous and will converge to f(x) on 0 ≤ x ≤ L provided f(x) is continuous on 0 ≤ x ≤ L.
Note that we don’t need any requirements on the end points here because they are trivially satisfied when we convert over to the even extension.
For a Fourier sine series we need to be a little more careful. Again, the first thing that we need to do is form the odd extension on -L ≤ x ≤ L and let’s call it g(x). We know that in order for it to be the odd extension it must satisfy g(-x) = -g(x) at all points in -L ≤ x ≤ L, and that is what can lead to problems.
As we saw in the Fourier sine series section it is very easy to introduce a jump discontinuity at x=0 when we form the odd extension. In fact, the only way to avoid forming a jump discontinuity at this point is to require that f(0)=0.
Next, the requirement that at the endpoints we must have g(-L) = -g(L) will practically guarantee that we’ll introduce a jump discontinuity here as well when we form the odd extension. Again, the only way to avoid doing this is to require f(L) = 0.
So, with these two requirements we will get an odd extension that is continuous, and so we know that the Fourier series of the odd extension on -L ≤ x ≤ L will be continuous and hence the Fourier sine series will be continuous on 0 ≤ x ≤ L.

Here is a summary of all this for the Fourier sine series.
Suppose f(x) is piecewise smooth on the interval 0 ≤ x ≤ L. The Fourier sine series of f(x) will be continuous and will converge to f(x) on 0 ≤ x ≤ L provided f(x) is continuous on 0 ≤ x ≤ L, f(0) = 0, and f(L) = 0.
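To see the endpoint condition fail in practice, consider f(x) = x on 0 ≤ x ≤ L, which satisfies f(0) = 0 but not f(L) = 0. A small numerical sketch (the example, the name, and the choice L = 1 are ours; the coefficients b_n = 2(-1)ⁿ⁺¹/(nπ) were computed by hand):

```python
import math

def sine_partial(x, N):
    # Partial sum of the Fourier sine series of f(x) = x on [0, 1],
    # whose coefficients are b_n = 2(-1)^(n+1)/(n pi).
    return sum(2 * (-1)**(n + 1) / (n * math.pi) * math.sin(n * math.pi * x)
               for n in range(1, N + 1))

print(sine_partial(0.5, 2000))  # near f(0.5) = 0.5: interior point of continuity
print(sine_partial(1.0, 2000))  # near 0, not f(1) = 1: jump from the odd extension
```

Because f(L) ≠ 0, the odd extension has a jump at x = L, and the sine series converges there to the average of the one-sided limits, ½[1 + (-1)] = 0, rather than to f(L).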
The next topic of discussion here is differentiation and integration of Fourier series. In particular, we want to know if we can differentiate a Fourier series term by term and have the result be the Fourier series of the derivative of the function. Likewise, we want to know if we can integrate a Fourier series term by term and arrive at the Fourier series of the integral of the function.
Note that we’ll not be doing much discussion of the details here. All we’re really going to be doing is giving the theorems that govern the ideas here so that we can say we’ve given them.

Let’s start off with the theorem for term by term differentiation of a Fourier series.
Given a function f(x), if the derivative, f′(x), is piecewise smooth and the Fourier series of f(x) is continuous then the Fourier series can be differentiated term by term. The result of the differentiation is the Fourier series of the derivative, f′(x).
One of the main conditions of this theorem is that the Fourier series be continuous, and from above we also know the conditions on the function that will give this. So, if we add this into the theorem we get the following form of the theorem.

Suppose f(x) is a continuous function, its derivative f′(x) is piecewise smooth, and f(-L) = f(L). Then the Fourier series of the function can be differentiated term by term and the result is the Fourier series of the derivative.
For Fourier cosine/sine series the basic theorem is the same as for Fourier series. All that’s required is that the Fourier cosine/sine series be continuous and then you can differentiate term by term. The theorems that we’ll give here will merge the conditions for the Fourier cosine/sine series to be continuous into the theorem.
Let’s start with the Fourier cosine series.

Suppose f(x) is a continuous function and its derivative f′(x) is piecewise smooth. Then the Fourier cosine series of the function can be differentiated term by term and the result is the Fourier sine series of the derivative.
Next the theorem for Fourier sine series.

Suppose f(x) is a continuous function, its derivative f′(x) is piecewise smooth, f(0) = 0, and f(L) = 0. Then the Fourier sine series of the function can be differentiated term by term and the result is the Fourier cosine series of the derivative.
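The cosine-series version of the theorem can be checked coefficient by coefficient on a concrete function. A sketch for f(x) = x on 0 ≤ x ≤ L with L = 1 (the example and coefficient formulas are ours, computed by hand):

```python
import math

# Cosine series of f(x) = x on [0, 1]:   a_0/2 = 1/2,  a_n = 2((-1)^n - 1)/(n^2 pi^2)
# Sine series of f'(x) = 1 on [0, 1]:    b_n = 2(1 - (-1)^n)/(n pi)

def cos_coeff(n):
    return 2 * ((-1)**n - 1) / (n**2 * math.pi**2)

def sine_coeff_of_one(n):
    return 2 * (1 - (-1)**n) / (n * math.pi)

# Differentiating a_n cos(n pi x) term by term gives -a_n n pi sin(n pi x),
# so for every n the differentiated coefficient should equal b_n exactly.
for n in range(1, 10):
    print(n, -cos_coeff(n) * n * math.pi, sine_coeff_of_one(n))
```

Here f is continuous, f′ = 1 is piecewise smooth, and no endpoint condition is needed for the cosine series, so term by term differentiation reproduces the sine series of f′, just as the theorem says.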
The theorem for term by term integration of a Fourier series is simple, so here it is.

Suppose f(x) is piecewise smooth. Then the Fourier series of the function can be integrated term by term and the result is a convergent infinite series that will converge to the integral of f(x).
Note however that the new series that results from term by term integration may not be the Fourier series for the integral of the function.
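A quick way to see why (a sketch with a generic Fourier series, not an example from the text): term by term integration of the constant term produces a piece that is not periodic.

```latex
% Integrating the constant term of  f(t) = a_0/2 + \dots  from 0 to x gives
\int_0^x \frac{a_0}{2}\,dt = \frac{a_0}{2}\,x
% The function a_0 x / 2 is not periodic, so whenever a_0 \ne 0 the integrated
% series cannot be a Fourier series, even though it still converges to
% \int_0^x f(t)\,dt.
```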
