
Differential Equations - Second Order: Wronskian Applications



In the previous section we introduced the Wronskian to help us determine whether two solutions were a fundamental set of solutions. In this section we will look at another application of the Wronskian as well as an alternate method of computing the Wronskian.
Let’s start with the application. We need to introduce a couple of new concepts first.
Given two non-zero functions f(x) and g(x) write down the following equation.
\[ c\,f(x) + k\,g(x) = 0 \tag{1} \]
Notice that c = 0 and k = 0 will make (1) true for all x regardless of the functions that we use.
Now, if we can find non-zero constants c and k for which (1) will also be true for all x, then we call the two functions linearly dependent. On the other hand, if the only constants for which (1) is true are c = 0 and k = 0, then we call the functions linearly independent.


Example 1 Determine if the following sets of functions are linearly dependent or linearly independent.
  1. \(f(x) = 9\cos(2x)\), \(g(x) = 2\cos^2(x) - 2\sin^2(x)\)
  2. \(f(t) = 2t^2\), \(g(t) = t^4\)

a) \(f(x) = 9\cos(2x)\), \(g(x) = 2\cos^2(x) - 2\sin^2(x)\)
We’ll start by writing down (1) for these two functions.
\[ c\bigl(9\cos(2x)\bigr) + k\bigl(2\cos^2(x) - 2\sin^2(x)\bigr) = 0 \]
We need to determine if we can find non-zero constants c and k that will make this true for all x or if c = 0 and k = 0 are the only constants that will make this true for all x. This is often a fairly difficult process. The process can be simplified with a good intuition for this kind of thing, but that’s hard to come by, especially if you haven’t done many of these kinds of problems.
In this case the problem can be simplified by recalling
\[ \cos^2(x) - \sin^2(x) = \cos(2x) \]
Using this fact our equation becomes.
\[ 9c\cos(2x) + 2k\cos(2x) = 0 \qquad\Rightarrow\qquad (9c + 2k)\cos(2x) = 0 \]
With this simplification we can see that this will be zero for any pair of constants c and k that satisfy
\[ 9c + 2k = 0 \]
Among the possible pairs of constants that we could use are the following.
\[ c = 1,\ k = -\tfrac{9}{2} \qquad\quad c = -\tfrac{2}{9},\ k = 1 \qquad\quad c = 2,\ k = -9 \qquad\quad c = -\tfrac{7}{6},\ k = \tfrac{21}{4} \qquad\quad \text{etc.} \]
As we’re sure you can see there are literally thousands of possible pairs and they can be made as “simple” or as “complicated” as you want them to be.
So, we’ve managed to find a pair of non-zero constants that will make the equation true for all x and so the two functions are linearly dependent.
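As a quick check, one of these pairs can be verified symbolically. The sketch below uses SymPy (our choice of tool, not part of the original discussion) to confirm that c = 2, k = -9 makes the combination identically zero.

```python
# A minimal sketch, assuming SymPy is available: verify that the pair
# c = 2, k = -9 makes c*f(x) + k*g(x) identically zero.
import sympy as sp

x = sp.symbols('x')
f = 9*sp.cos(2*x)
g = 2*sp.cos(x)**2 - 2*sp.sin(x)**2

combo = 2*f + (-9)*g
print(sp.simplify(combo))   # 0 for all x, so f and g are linearly dependent
```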

b) \(f(t) = 2t^2\), \(g(t) = t^4\)

As with the last part, we’ll start by writing down (1) for these functions.
\[ 2c\,t^2 + k\,t^4 = 0 \]
In this case there isn’t any quick and simple formula to write one of the functions in terms of the other as we did in the first part. So, we’re just going to have to see if we can find constants. We’ll start by noticing that if the original equation is true, then if we differentiate everything we get a new equation that must also be true. In other words, we’ve got the following system of two equations in two unknowns.
\[ \begin{aligned} 2c\,t^2 + k\,t^4 &= 0 \\ 4c\,t + 4k\,t^3 &= 0 \end{aligned} \]
We can solve this system for c and k and see what we get. We’ll start by solving the second equation for c.
\[ c = -k\,t^2 \]
Now, plug this into the first equation.
\[ 2\bigl(-k\,t^2\bigr)t^2 + k\,t^4 = 0 \qquad\Rightarrow\qquad -k\,t^4 = 0 \]
Recall that we are after constants that will make this true for all t. The only way that this will ever be zero for all t is if k = 0! So, if k = 0 we must also have c = 0.
Therefore, we’ve shown that the only way that
\[ 2c\,t^2 + k\,t^4 = 0 \]
will be true for all t is to require that c = 0 and k = 0. The two functions, therefore, are linearly independent.
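We can also let a CAS do the elimination. A minimal sketch assuming SymPy: if the identity held for every t it would, in particular, hold at t = 1 and t = 2 (these sample points are an arbitrary choice), and the resulting 2x2 system has only the trivial solution.

```python
# A minimal sketch, assuming SymPy: show that 2*c*t**2 + k*t**4 = 0 for all t
# forces c = k = 0 by sampling the identity at two points.
import sympy as sp

c, k, t = sp.symbols('c k t')
expr = 2*c*t**2 + k*t**4

sol = sp.solve([expr.subs(t, 1), expr.subs(t, 2)], [c, k], dict=True)
print(sol)   # [{c: 0, k: 0}] -> only the trivial solution, so linearly independent
```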
As we saw in the previous examples determining whether two functions are linearly independent or linearly dependent can be a fairly involved process. This is where the Wronskian can help.

Fact


Given two functions f(x) and g(x) that are differentiable on some interval I.
  1. If \(W(f,g)(x_0) \ne 0\) for some \(x_0\) in I, then f(x) and g(x) are linearly independent on the interval I.
  2. If f(x) and g(x) are linearly dependent on I then \(W(f,g)(x) = 0\) for all x in the interval I.
Be very careful with this fact. It DOES NOT say that if W(f,g)(x)=0 then f(x) and g(x) are linearly dependent! In fact, it is possible for two linearly independent functions to have a zero Wronskian!

This fact is used to quickly identify linearly independent functions and functions that are liable to be linearly dependent.
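Since the two-function Wronskian is just the determinant of a 2x2 matrix built from the functions and their first derivatives, it is straightforward to compute with a computer algebra system. Here is a minimal helper, assuming SymPy is available (the name wronskian2 and the sample pair e^x, e^(2x) are ours, purely for illustration).

```python
# A minimal sketch of a 2x2 Wronskian helper, assuming SymPy.
import sympy as sp

def wronskian2(f, g, var):
    """Return W(f, g) = f*g' - f'*g, simplified."""
    W = sp.Matrix([[f, g],
                   [sp.diff(f, var), sp.diff(g, var)]]).det()
    return sp.simplify(W)

x = sp.symbols('x')
print(wronskian2(sp.exp(x), sp.exp(2*x), x))   # exp(3*x): never zero -> independent
```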


Example 2 Verify the fact using the functions from the previous example.
  1. \(f(x) = 9\cos(2x)\), \(g(x) = 2\cos^2(x) - 2\sin^2(x)\)
  2. \(f(t) = 2t^2\), \(g(t) = t^4\)

a) \(f(x) = 9\cos(2x)\), \(g(x) = 2\cos^2(x) - 2\sin^2(x)\)
In this case if we compute the Wronskian of the two functions we should get zero since we have already determined that these functions are linearly dependent.
\[ \begin{aligned} W &= \begin{vmatrix} 9\cos(2x) & 2\cos^2(x) - 2\sin^2(x) \\ -18\sin(2x) & -4\cos(x)\sin(x) - 4\sin(x)\cos(x) \end{vmatrix} = \begin{vmatrix} 9\cos(2x) & 2\cos(2x) \\ -18\sin(2x) & -2\sin(2x) - 2\sin(2x) \end{vmatrix} \\ &= \begin{vmatrix} 9\cos(2x) & 2\cos(2x) \\ -18\sin(2x) & -4\sin(2x) \end{vmatrix} = -36\cos(2x)\sin(2x) - \bigl(-36\cos(2x)\sin(2x)\bigr) = 0 \end{aligned} \]
So, we get zero as we should have. Notice the heavy use of trig formulas to simplify the work!

b) \(f(t) = 2t^2\), \(g(t) = t^4\)

Here we know that the two functions are linearly independent and so we should get a non-zero Wronskian.
\[ W = \begin{vmatrix} 2t^2 & t^4 \\ 4t & 4t^3 \end{vmatrix} = 8t^5 - 4t^5 = 4t^5 \]
The Wronskian is non-zero as we expected provided \(t \ne 0\). This is not a problem. As long as the Wronskian is not identically zero for all t we are okay.
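Both Wronskians from this example can be reproduced symbolically. A minimal sketch assuming SymPy (not part of the original worked solutions):

```python
# A minimal sketch, assuming SymPy: recompute the two Wronskians from Example 2.
import sympy as sp

x, t = sp.symbols('x t')

# Part (a): f = 9cos(2x), g = 2cos^2(x) - 2sin^2(x)
f, g = 9*sp.cos(2*x), 2*sp.cos(x)**2 - 2*sp.sin(x)**2
Wa = sp.Matrix([[f, g], [sp.diff(f, x), sp.diff(g, x)]]).det()
print(sp.simplify(Wa))   # 0, matching the hand computation

# Part (b): f = 2t^2, g = t^4
f, g = 2*t**2, t**4
Wb = sp.Matrix([[f, g], [sp.diff(f, t), sp.diff(g, t)]]).det()
print(sp.simplify(Wb))   # 4*t**5, non-zero for t != 0
```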


Example 3 Determine if the following functions are linearly dependent or linearly independent.
  1. \(f(t) = \cos t\), \(g(t) = \sin t\)
  2. \(f(x) = 6^x\), \(g(x) = 6^{x+2}\)

a) \(f(t) = \cos t\), \(g(t) = \sin t\)
Now that we have the Wronskian to use here, let’s first check that. If it’s non-zero then we will know that the two functions are linearly independent and if it’s zero then we can be pretty sure that they are linearly dependent.
\[ W = \begin{vmatrix} \cos t & \sin t \\ -\sin t & \cos t \end{vmatrix} = \cos^2 t + \sin^2 t = 1 \ne 0 \]
So, by the fact these two functions are linearly independent. Much easier this time around!

b) \(f(x) = 6^x\), \(g(x) = 6^{x+2}\)

We’ll do the same thing here as we did in the first part. Recall that
\[ \bigl(a^x\bigr)' = a^x \ln a \]
Now compute the Wronskian.
\[ W = \begin{vmatrix} 6^x & 6^{x+2} \\ 6^x \ln 6 & 6^{x+2} \ln 6 \end{vmatrix} = 6^x\, 6^{x+2} \ln 6 - 6^{x+2}\, 6^x \ln 6 = 0 \]
Now, this does not say that the two functions are linearly dependent! However, we can guess that they probably are linearly dependent. To prove that they are in fact linearly dependent we’ll need to write down (1) and see if we can find non-zero c and k that will make it true for all x.
\[ \begin{aligned} c\,6^x + k\,6^{x+2} &= 0 \\ c\,6^x + k\,6^x\,6^2 &= 0 \\ c\,6^x + 36k\,6^x &= 0 \\ (c + 36k)\,6^x &= 0 \end{aligned} \]
So, it looks like we could use any constants that satisfy
\[ c + 36k = 0 \]
to make this zero for all x. In particular we could use
\[ c = -36,\ k = 1 \qquad\quad c = 36,\ k = -1 \qquad\quad c = 9,\ k = -\tfrac{1}{4} \qquad\quad \text{etc.} \]
We have non-zero constants that will make the equation true for all x. Therefore, the functions are linearly dependent.
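Part (b) is a nice illustration of the one-way nature of the fact: the Wronskian is zero, and a separate dependence check is still needed. A minimal sketch assuming SymPy (not part of the original solution):

```python
# A minimal sketch, assuming SymPy: the Wronskian of 6**x and 6**(x+2) is zero,
# and the pair c = -36, k = 1 then confirms the functions really are dependent.
import sympy as sp

x = sp.symbols('x')
f, g = 6**x, 6**(x + 2)

W = sp.Matrix([[f, g], [sp.diff(f, x), sp.diff(g, x)]]).det()
print(sp.simplify(W))                     # 0 -> dependence is only *suggested*

# Explicit dependence check: rewrite 6**(x+2) as 36*6**x and cancel.
print(sp.expand_power_exp(-36*f + 1*g))   # 0 for all x, so c = -36, k = 1 works
```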

Before proceeding to the next topic in this section let’s talk a little more about linearly independent and linearly dependent functions. Let’s start off by assuming that f(x) and g(x) are linearly dependent. So, that means there are non-zero constants c and k so that
\[ c\,f(x) + k\,g(x) = 0 \]
is true for all x.
Now, we can solve this in either of the following two ways.
\[ f(x) = -\frac{k}{c}\,g(x) \qquad \text{OR} \qquad g(x) = -\frac{c}{k}\,f(x) \]
Note that this can be done because we know that c and k are non-zero and hence the divisions can be done without worrying about division by zero.
So, this means that two linearly dependent functions can be written in such a way that one is nothing more than a constant times the other. Go back and look at both of the sets of linearly dependent functions that we wrote down and you will see that this is true for both of them.
Two functions that are linearly independent can’t be written in this manner and so we can’t get from one to the other simply by multiplying by a constant.
Next, we don’t want to leave you with the impression that linear independence and linear dependence are only for two functions. We can easily extend the idea to as many functions as we’d like.
Let’s suppose that we have n non-zero functions, \(f_1(x)\), \(f_2(x)\), …, \(f_n(x)\). Write down the following equation.
\[ c_1 f_1(x) + c_2 f_2(x) + \cdots + c_n f_n(x) = 0 \tag{2} \]
If we can find constants \(c_1\), \(c_2\), …, \(c_n\), with at least two non-zero, so that (2) is true for all x then we call the functions linearly dependent. If, on the other hand, the only constants that make (2) true for all x are \(c_1 = 0\), \(c_2 = 0\), …, \(c_n = 0\) then we call the functions linearly independent.

Note that unlike the two function case we can have some of the constants be zero and still have the functions be linearly dependent.
In this case just what does it mean for the functions to be linearly dependent? Well, let’s suppose that they are. So, this means that we can find constants, with at least two non-zero so that (2) is true for all x. For the sake of argument let’s suppose that \(c_1\) is one of the non-zero constants. This means that we can do the following.
\[ \begin{aligned} c_1 f_1(x) + c_2 f_2(x) + \cdots + c_n f_n(x) &= 0 \\ c_1 f_1(x) &= -\bigl(c_2 f_2(x) + \cdots + c_n f_n(x)\bigr) \\ f_1(x) &= -\frac{1}{c_1}\bigl(c_2 f_2(x) + \cdots + c_n f_n(x)\bigr) \end{aligned} \]
In other words, if the functions are linearly dependent then we can write at least one of them in terms of the other functions.
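For example, with three functions the definition works the same way. A minimal sketch assuming SymPy; the particular trio 1, sin²(x), cos²(x) is our choice for illustration and does not come from the notes.

```python
# A minimal sketch, assuming SymPy: f1 = 1, f2 = sin(x)**2, f3 = cos(x)**2 are
# linearly dependent because (-1)*f1 + (1)*f2 + (1)*f3 = 0 for all x.
import sympy as sp

x = sp.symbols('x')
f1, f2, f3 = sp.Integer(1), sp.sin(x)**2, sp.cos(x)**2

combo = -1*f1 + 1*f2 + 1*f3
print(sp.simplify(combo))   # 0 -> dependent; equivalently f1 = f2 + f3
```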
Okay, let’s move on to the other topic of this section. There is an alternate method of computing the Wronskian. The following theorem gives this alternate method.


Abel’s Theorem


If \(y_1(t)\) and \(y_2(t)\) are two solutions to
\[ y'' + p(t)\,y' + q(t)\,y = 0 \]
then the Wronskian of the two solutions is
\[ W(y_1, y_2)(t) = W(y_1, y_2)(t_0)\, e^{-\int_{t_0}^{t} p(x)\,dx} \]
for some \(t_0\).


Because we don’t know the Wronskian and we don’t know \(t_0\), it may appear that this won’t do us a lot of good. However, we can rewrite this as
\[ W(y_1, y_2)(t) = c\,e^{-\int p(t)\,dt} \tag{3} \]
where the original Wronskian sitting in front of the exponential is absorbed into the c and the evaluation of the integral at t0 will put a constant in the exponential that can also be brought out and absorbed into the constant c. If you don’t recall how to do this go back and take a look at the linear, first order differential equation section as we did something similar there.
With this rewrite we can compute the Wronskian up to a multiplicative constant, which isn’t too bad. Notice as well that we don’t actually need the two solutions to do this. All we need is the coefficient of the first derivative from the differential equation (provided the coefficient of the second derivative is one of course…).
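As a concrete check of (3), take an equation whose solutions are already known, say y'' + 3y' + 2y = 0 with solutions e^(-t) and e^(-2t) (this particular equation is our choice for illustration, not one from this section). The directly computed Wronskian should agree with (3) up to the multiplicative constant. A minimal sketch assuming SymPy:

```python
# A minimal sketch, assuming SymPy: compare a direct Wronskian with formula (3)
# for y'' + 3y' + 2y = 0, whose solutions are exp(-t) and exp(-2*t).
import sympy as sp

t = sp.symbols('t')
y1, y2 = sp.exp(-t), sp.exp(-2*t)
p = 3                                   # coefficient of y'

W_direct = sp.simplify(y1*sp.diff(y2, t) - sp.diff(y1, t)*y2)
W_abel = sp.exp(-sp.integrate(p, t))    # exp(-int p dt), i.e. (3) with c = 1

print(W_direct)                         # -exp(-3*t)
print(sp.simplify(W_direct / W_abel))   # -1: a constant, exactly as (3) predicts
```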
Let’s take a look at a quick example of this.


Example 4 Without solving, determine the Wronskian of two solutions to the following differential equation.
\[ t^4 y'' - 2t^3 y' - t^8 y = 0 \]

The first thing that we need to do is divide the differential equation by the coefficient of the second derivative as that needs to be a one. This gives us
\[ y'' - \frac{2}{t}\,y' - t^4 y = 0 \]
Now, using (3) the Wronskian is
\[ W = c\,e^{-\int -\frac{2}{t}\,dt} = c\,e^{2\ln|t|} = c\,t^2 \]
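The last couple of steps can be checked symbolically. Differentiating (3) shows that the Wronskian must satisfy W' = -p(t) W, so we can confirm both the antiderivative and that W = c t² is consistent with it. A minimal sketch assuming SymPy (not part of the original example):

```python
# A minimal sketch, assuming SymPy: with p(t) = -2/t, confirm the antiderivative
# and check that W = c*t**2 satisfies W' = -p(t)*W, which follows from (3).
import sympy as sp

t, c = sp.symbols('t c', positive=True)
p = -2/t                                  # coefficient of y' in the divided equation
W = c*t**2                                # the Wronskian found above

print(-sp.integrate(p, t))                # 2*log(t), and exp(2*log(t)) = t**2
print(sp.simplify(sp.diff(W, t) + p*W))   # 0 -> W = c*t**2 is consistent with (3)
```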
