
Differential Equations - Systems: Non-homogeneous



We now need to address non-homogeneous systems briefly. Both of the methods that we looked at back in the second order differential equations chapter can also be used here. As we will see, Undetermined Coefficients is almost identical when used on systems, while Variation of Parameters will need to have a new formula derived, but will actually be slightly easier when applied to systems.


Undetermined Coefficients


The method of Undetermined Coefficients for systems is pretty much identical to the second order differential equation case. The only difference is that the coefficients will need to be vectors now.
Let’s take a quick look at an example.


Example 1 Find the general solution to the following system.
\[\vec x\,' = \begin{pmatrix} 1 & 2 \\ 3 & 2 \end{pmatrix}\vec x + t\begin{pmatrix} 2 \\ -4 \end{pmatrix}\]

We already have the complementary solution as we solved that part back in the real eigenvalue section. It is,
\[\vec x_c(t) = c_1 e^{-t}\begin{pmatrix} -1 \\ 1 \end{pmatrix} + c_2 e^{4t}\begin{pmatrix} 2 \\ 3 \end{pmatrix}\]
Guessing the form of the particular solution will work in exactly the same way it did back when we first looked at this method. We have a linear polynomial and so our guess will need to be a linear polynomial. The only difference is that the "coefficients" will need to be vectors instead of constants. The particular solution will have the form,
\[\vec x_P = t\vec a + \vec b = t\begin{pmatrix} a_1 \\ a_2 \end{pmatrix} + \begin{pmatrix} b_1 \\ b_2 \end{pmatrix}\]
So, we need to differentiate the guess.
\[\vec x_P' = \vec a = \begin{pmatrix} a_1 \\ a_2 \end{pmatrix}\]
Before plugging into the system let's simplify the notation a little to help with our work. We'll write the system as,
\[\vec x\,' = \begin{pmatrix} 1 & 2 \\ 3 & 2 \end{pmatrix}\vec x + t\begin{pmatrix} 2 \\ -4 \end{pmatrix} = A\vec x + t\vec g\]
This will make the following work a little easier. Now, let's plug things into the system.
\[\vec a = A\left(t\vec a + \vec b\right) + t\vec g \quad\Rightarrow\quad \vec a = tA\vec a + A\vec b + t\vec g \quad\Rightarrow\quad \vec 0 = t\left(A\vec a + \vec g\right) + \left(A\vec b - \vec a\right)\]
Now we need to set the coefficients equal. Doing this gives,
\[t^1:\; A\vec a + \vec g = \vec 0 \;\Rightarrow\; A\vec a = -\vec g \qquad\qquad t^0:\; A\vec b - \vec a = \vec 0 \;\Rightarrow\; A\vec b = \vec a\]
Now only \(\vec a\) is unknown in the first equation so we can use Gaussian elimination to solve the system. We'll leave this work to you to check.
\[\begin{pmatrix} 1 & 2 \\ 3 & 2 \end{pmatrix}\begin{pmatrix} a_1 \\ a_2 \end{pmatrix} = \begin{pmatrix} -2 \\ 4 \end{pmatrix} \quad\Rightarrow\quad \vec a = \begin{pmatrix} 3 \\ -\frac{5}{2} \end{pmatrix}\]
Now that we know \(\vec a\) we can solve the second equation for \(\vec b\).
\[\begin{pmatrix} 1 & 2 \\ 3 & 2 \end{pmatrix}\begin{pmatrix} b_1 \\ b_2 \end{pmatrix} = \begin{pmatrix} 3 \\ -\frac{5}{2} \end{pmatrix} \quad\Rightarrow\quad \vec b = \begin{pmatrix} -\frac{11}{4} \\ \frac{23}{8} \end{pmatrix}\]
So, since we were able to solve both equations, the particular solution is then,
\[\vec x_P = t\begin{pmatrix} 3 \\ -\frac{5}{2} \end{pmatrix} + \begin{pmatrix} -\frac{11}{4} \\ \frac{23}{8} \end{pmatrix}\]
The general solution is then,
\[\vec x(t) = c_1 e^{-t}\begin{pmatrix} -1 \\ 1 \end{pmatrix} + c_2 e^{4t}\begin{pmatrix} 2 \\ 3 \end{pmatrix} + t\begin{pmatrix} 3 \\ -\frac{5}{2} \end{pmatrix} + \begin{pmatrix} -\frac{11}{4} \\ \frac{23}{8} \end{pmatrix}\]
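The two coefficient equations are just ordinary linear systems, so they are easy to check numerically. Below is a minimal sketch, assuming NumPy is available; the arrays simply restate the data from Example 1:

```python
import numpy as np

# System from Example 1: x' = A x + t g
A = np.array([[1.0, 2.0], [3.0, 2.0]])
g = np.array([2.0, -4.0])

# Undetermined coefficients with guess x_P = t*a + b, matching powers of t:
# t^1 terms:  A a + g = 0  =>  solve A a = -g
a = np.linalg.solve(A, -g)   # a = (3, -5/2)
# t^0 terms:  A b - a = 0  =>  solve A b = a
b = np.linalg.solve(A, a)    # b = (-11/4, 23/8)

# Sanity check: x_P' = a must equal A(t a + b) + t g for every t
for t in (0.0, 1.0, 2.5):
    assert np.allclose(a, A @ (t * a + b) + t * g)
```

Since the guess is exact, the residual vanishes identically, not just at the sampled values of \(t\).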

So, as you can see undetermined coefficients is nearly the same as the first time we saw it. The work in solving for the “constants” is a little messier however.


Variation of Parameters


In this case we will need to derive a new formula for variation of parameters for systems. The derivation this time will be much simpler than when we first saw variation of parameters.
First let X(t) be a matrix whose ith column is the ith linearly independent solution to the system,
\[\vec x\,' = A\vec x\]
Now it can be shown that X(t) will be a solution to the following differential equation.
\[X' = AX \qquad (1)\]
This is nothing more than the original system with the matrix in place of the original vector.
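To see why, note that both differentiation and multiplication by \(A\) act column by column, and each column of \(X\) solves the homogeneous system:
\[X' = \begin{pmatrix} \vec x_1' & \cdots & \vec x_n' \end{pmatrix} = \begin{pmatrix} A\vec x_1 & \cdots & A\vec x_n \end{pmatrix} = AX\]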
We are going to try and find a particular solution to
\[\vec x\,' = A\vec x + \vec g(t)\]
We will assume that we can find a solution of the form,
\[\vec x_P = X(t)\,\vec v(t)\]
where we will need to determine the vector v(t). To do this we will need to plug this into the non-homogeneous system. Don’t forget to product rule the particular solution when plugging the guess into the system.
\[X'\vec v + X\vec v\,' = AX\vec v + \vec g\]
Note that we dropped the \((t)\) part of things to simplify the notation. Now, using \((1)\) we can rewrite this as,
\[X'\vec v + X\vec v\,' = X'\vec v + \vec g \quad\Rightarrow\quad X\vec v\,' = \vec g\]
Because we formed X using linearly independent solutions we know that det(X) must be nonzero and this in turn means that we can find the inverse of X. So, multiply both sides by the inverse of X.
\[\vec v\,' = X^{-1}\vec g\]
Now all that we need to do is integrate both sides to get v(t).
\[\vec v(t) = \int X^{-1}\vec g \, dt\]
As with the second order differential equation case we can ignore any constants of integration. The particular solution is then,
\[\vec x_P = X\int X^{-1}\vec g \, dt \qquad (2)\]
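Formula \((2)\) is mechanical enough to automate symbolically. Here is a small sketch assuming SymPy is available; `particular_solution` is a hypothetical helper name, and the \(1 \times 1\) "system" at the end is just a scalar sanity check, not one of the examples from this section:

```python
import sympy as sp

t = sp.symbols('t')

def particular_solution(X, g):
    """Formula (2): x_P = X * integral(X^-1 g) dt, dropping constants of integration."""
    integrand = (X.inv() * g).applyfunc(sp.simplify)
    v = integrand.applyfunc(lambda entry: sp.integrate(entry, t))
    return sp.simplify(X * v)

# Scalar sanity check: x' = 2x + e^t has fundamental "matrix" X = e^{2t}
X = sp.Matrix([[sp.exp(2 * t)]])
g = sp.Matrix([[sp.exp(t)]])
xP = particular_solution(X, g)   # equals -e^t
assert sp.simplify(xP.diff(t) - (2 * xP + g)) == sp.zeros(1, 1)
```

The same call works for the \(2 \times 2\) case, though for matrices with exponential entries the intermediate inverse can get messy before simplification.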
Let’s work a quick example using this.


Example 2 Find the general solution to the following system.
\[\vec x\,' = \begin{pmatrix} -5 & 1 \\ 4 & -2 \end{pmatrix}\vec x + e^{2t}\begin{pmatrix} 6 \\ -1 \end{pmatrix}\]

We found the complementary solution to this system in the real eigenvalue section. It is,
\[\vec x_c(t) = c_1 e^{-t}\begin{pmatrix} 1 \\ 4 \end{pmatrix} + c_2 e^{-6t}\begin{pmatrix} 1 \\ -1 \end{pmatrix}\]
Now the matrix \(X\) is,
\[X = \begin{pmatrix} e^{-t} & e^{-6t} \\ 4e^{-t} & -e^{-6t} \end{pmatrix}\]
Now, we need to find the inverse of this matrix. We saw how to find inverses of matrices back in the second linear algebra review section and the process is the same here even though we don't have constant entries. We'll leave the detail to you to check.
\[X^{-1} = \begin{pmatrix} \frac{1}{5}e^{t} & \frac{1}{5}e^{t} \\ \frac{4}{5}e^{6t} & -\frac{1}{5}e^{6t} \end{pmatrix}\]
Now do the multiplication in the integral.
\[X^{-1}\vec g = \begin{pmatrix} \frac{1}{5}e^{t} & \frac{1}{5}e^{t} \\ \frac{4}{5}e^{6t} & -\frac{1}{5}e^{6t} \end{pmatrix}\begin{pmatrix} 6e^{2t} \\ -e^{2t} \end{pmatrix} = \begin{pmatrix} e^{3t} \\ 5e^{8t} \end{pmatrix}\]
Now do the integral.
\[\int X^{-1}\vec g \, dt = \int \begin{pmatrix} e^{3t} \\ 5e^{8t} \end{pmatrix} dt = \begin{pmatrix} \int e^{3t}\,dt \\ \int 5e^{8t}\,dt \end{pmatrix} = \begin{pmatrix} \frac{1}{3}e^{3t} \\ \frac{5}{8}e^{8t} \end{pmatrix}\]
Remember that to integrate a matrix or vector you just integrate the individual entries.
We can now get the particular solution.
\[\vec x_P = X\int X^{-1}\vec g \, dt = \begin{pmatrix} e^{-t} & e^{-6t} \\ 4e^{-t} & -e^{-6t} \end{pmatrix}\begin{pmatrix} \frac{1}{3}e^{3t} \\ \frac{5}{8}e^{8t} \end{pmatrix} = \begin{pmatrix} \frac{23}{24}e^{2t} \\ \frac{17}{24}e^{2t} \end{pmatrix} = e^{2t}\begin{pmatrix} \frac{23}{24} \\ \frac{17}{24} \end{pmatrix}\]
The general solution is then,
\[\vec x(t) = c_1 e^{-t}\begin{pmatrix} 1 \\ 4 \end{pmatrix} + c_2 e^{-6t}\begin{pmatrix} 1 \\ -1 \end{pmatrix} + e^{2t}\begin{pmatrix} \frac{23}{24} \\ \frac{17}{24} \end{pmatrix}\]
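If you want to double-check the arithmetic, a quick numerical sketch (assuming NumPy; the arrays restate Example 2's data) confirms that the particular solution satisfies the system:

```python
import numpy as np

# Example 2: x' = A x + g(t), with particular solution x_P = e^{2t} c
A = np.array([[-5.0, 1.0], [4.0, -2.0]])
c = np.array([23.0 / 24.0, 17.0 / 24.0])

def g(t):
    return np.exp(2 * t) * np.array([6.0, -1.0])

def xP(t):
    return np.exp(2 * t) * c

def xP_prime(t):
    return 2.0 * np.exp(2 * t) * c

# x_P' must equal A x_P + g at every t; spot-check a few values
for t in (0.0, 0.5, 1.0):
    assert np.allclose(xP_prime(t), A @ xP(t) + g(t))
```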
So, some of the work can be a little messy, but overall not too bad.

We looked at two methods of solving non-homogeneous systems here and while the work can be a little messy they aren't too bad. Of course, we also kept the non-homogeneous part fairly simple here. More complicated problems will have significant amounts of work involved.
