
Differential Equations - Systems: Eigenvalues & Eigenvectors - ii


Example 3 Find the eigenvalues and eigenvectors of the following matrix.
$$A = \begin{pmatrix} 4 & -17 \\ 2 & -2 \end{pmatrix}$$

So, we’ll start with the eigenvalues.
$$\det\left(A - \lambda I\right) = \begin{vmatrix} 4-\lambda & -17 \\ 2 & -2-\lambda \end{vmatrix} = \left(4-\lambda\right)\left(-2-\lambda\right) + 34 = \lambda^2 - 2\lambda + 26$$
This doesn't factor, so upon using the quadratic formula we arrive at,
$$\lambda_{1,2} = 1 \pm 5i$$
In this case we get complex eigenvalues, which are definitely a fact of life with eigenvalue/eigenvector problems, so get used to them.
Finding eigenvectors for complex eigenvalues is identical to the previous two examples, but it will be somewhat messier. So, let’s do that.
$\lambda_1 = 1 + 5i$ :
The system that we need to solve this time is
$$\begin{pmatrix} 4-\left(1+5i\right) & -17 \\ 2 & -2-\left(1+5i\right) \end{pmatrix}\begin{pmatrix} \eta_1 \\ \eta_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \qquad \Rightarrow \qquad \begin{pmatrix} 3-5i & -17 \\ 2 & -3-5i \end{pmatrix}\begin{pmatrix} \eta_1 \\ \eta_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$$
Now, it's not super clear that the rows are multiples of each other, but they are. In this case we have,
$$R_1 = \tfrac{1}{2}\left(3 - 5i\right)R_2$$
This is not something that you need to worry about; we just wanted to make the point. For the work that we'll be doing later on with differential equations we will just assume that we've done everything correctly and we've got two rows that are multiples of each other. Therefore, all that we need to do here is pick one of the rows and work with it.
We’ll work with the second row this time.
$$2\eta_1 - \left(3 + 5i\right)\eta_2 = 0$$
Now we can solve for either of the two variables. However, again looking forward to differential equations, we are going to need the "$i$" in the numerator, so solve the equation in such a way that this will happen. Doing this gives,
$$2\eta_1 = \left(3 + 5i\right)\eta_2 \qquad \Rightarrow \qquad \eta_1 = \tfrac{1}{2}\left(3 + 5i\right)\eta_2$$
So, the eigenvector in this case is
$$\vec{\eta} = \begin{pmatrix} \eta_1 \\ \eta_2 \end{pmatrix} = \begin{pmatrix} \tfrac{1}{2}\left(3+5i\right)\eta_2 \\ \eta_2 \end{pmatrix}, \quad \eta_2 \neq 0 \qquad \Rightarrow \qquad \vec{\eta}^{\,(1)} = \begin{pmatrix} 3+5i \\ 2 \end{pmatrix}, \quad \eta_2 = 2$$
As with the previous example we choose the value of the variable to clear out the fraction.
Now, the work for the second eigenvector is almost identical and so we’ll not dwell on that too much.
$\lambda_2 = 1 - 5i$ :
The system that we need to solve here is
$$\begin{pmatrix} 4-\left(1-5i\right) & -17 \\ 2 & -2-\left(1-5i\right) \end{pmatrix}\begin{pmatrix} \eta_1 \\ \eta_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \qquad \Rightarrow \qquad \begin{pmatrix} 3+5i & -17 \\ 2 & -3+5i \end{pmatrix}\begin{pmatrix} \eta_1 \\ \eta_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$$
Working with the second row again gives,
$$2\eta_1 - \left(3 - 5i\right)\eta_2 = 0 \qquad \Rightarrow \qquad \eta_1 = \tfrac{1}{2}\left(3 - 5i\right)\eta_2$$
The eigenvector in this case is
$$\vec{\eta} = \begin{pmatrix} \eta_1 \\ \eta_2 \end{pmatrix} = \begin{pmatrix} \tfrac{1}{2}\left(3-5i\right)\eta_2 \\ \eta_2 \end{pmatrix}, \quad \eta_2 \neq 0 \qquad \Rightarrow \qquad \vec{\eta}^{\,(2)} = \begin{pmatrix} 3-5i \\ 2 \end{pmatrix}, \quad \eta_2 = 2$$
Summarizing,
$$\lambda_1 = 1 + 5i \qquad \vec{\eta}^{\,(1)} = \begin{pmatrix} 3+5i \\ 2 \end{pmatrix}$$
$$\lambda_2 = 1 - 5i \qquad \vec{\eta}^{\,(2)} = \begin{pmatrix} 3-5i \\ 2 \end{pmatrix}$$
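As a quick numerical sanity check of Example 3 (not part of the original notes; a sketch assuming NumPy is available):

```python
import numpy as np

# The matrix from Example 3
A = np.array([[4.0, -17.0],
              [2.0, -2.0]])

# numpy.linalg.eig returns the eigenvalues and unit-length column eigenvectors
eigvals, eigvecs = np.linalg.eig(A)

# The eigenvalues should be the conjugate pair 1 +/- 5i
assert np.allclose(sorted(eigvals, key=lambda z: z.imag), [1 - 5j, 1 + 5j])

# Each returned column is some scalar multiple of the hand-computed
# eigenvector, so check the defining relation A v = lambda v directly
for lam, v in zip(eigvals, eigvecs.T):
    assert np.allclose(A @ v, lam * v)
print("Example 3 checks out")
```

Note that the columns NumPy returns are normalized, so they won't literally equal $(3 \pm 5i, 2)$; checking $A\vec{v} = \lambda\vec{v}$ avoids worrying about the scaling.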

There is a nice fact that we can use to simplify the work when we get complex eigenvalues. We need a bit of terminology first however.
If we start with a complex number,
$$z = a + bi$$
then the complex conjugate of z is
$$\bar{z} = a - bi$$
To compute the complex conjugate of a complex number we simply change the sign on the term that contains the “i”. The complex conjugate of a vector is just the conjugate of each of the vector’s components.
We now have the following fact about complex eigenvalues and eigenvectors.


Fact


If $A$ is an $n \times n$ matrix with only real entries and $\lambda_1 = a + bi$ is an eigenvalue with eigenvector $\vec{\eta}^{\,(1)}$, then $\lambda_2 = \bar{\lambda}_1 = a - bi$ is also an eigenvalue and its eigenvector is the conjugate of $\vec{\eta}^{\,(1)}$.
This fact is something that you should feel free to use as you need to in our work.
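To see the fact in action with the numbers from Example 3 (a small sketch, not in the original notes, assuming NumPy is installed): conjugating the eigenvector for $\lambda_1 = 1 + 5i$ produces an eigenvector for $\lambda_2 = 1 - 5i$ with no extra work.

```python
import numpy as np

# The real matrix and the eigenpair found for lambda_1 in Example 3
A = np.array([[4.0, -17.0],
              [2.0, -2.0]])
lam1 = 1 + 5j
eta1 = np.array([3 + 5j, 2 + 0j])

# eta1 is an eigenvector for lam1 ...
assert np.allclose(A @ eta1, lam1 * eta1)

# ... so, by the fact, conj(eta1) is an eigenvector for conj(lam1)
lam2 = np.conj(lam1)
eta2 = np.conj(eta1)
assert np.allclose(A @ eta2, lam2 * eta2)
print("conjugate pair verified")
```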
Now, we need to work one final eigenvalue/eigenvector problem. To this point we’ve only worked with 2×2 matrices and we should work at least one that isn’t 2×2. Also, we need to work one in which we get an eigenvalue of multiplicity greater than one that has more than one linearly independent eigenvector.


Example 4 Find the eigenvalues and eigenvectors of the following matrix.
$$A = \begin{pmatrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{pmatrix}$$

Despite the fact that this is a 3×3 matrix, it still works the same as the 2×2 matrices that we’ve been working with. So, start with the eigenvalues
$$\det\left(A - \lambda I\right) = \begin{vmatrix} -\lambda & 1 & 1 \\ 1 & -\lambda & 1 \\ 1 & 1 & -\lambda \end{vmatrix} = -\lambda^3 + 3\lambda + 2 = -\left(\lambda - 2\right)\left(\lambda + 1\right)^2 \qquad \Rightarrow \qquad \lambda_1 = 2, \quad \lambda_{2,3} = -1$$
So, we've got a simple eigenvalue and an eigenvalue of multiplicity 2. Note that we used the same method of computing the determinant of a 3×3 matrix that we used in the previous section. We just didn't show the work.
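As an aside (not part of the original notes), the characteristic polynomial and its factorization can be double-checked symbolically; here is a sketch assuming SymPy is installed:

```python
from sympy import Matrix, eye, expand, factor, symbols

lam = symbols('lambda')

# The matrix from Example 4
A = Matrix([[0, 1, 1],
            [1, 0, 1],
            [1, 1, 0]])

# Characteristic polynomial det(A - lambda*I)
p = (A - lam * eye(3)).det()

# It should match the hand computation: -lambda^3 + 3*lambda + 2,
# which factors as -(lambda - 2)*(lambda + 1)^2
assert expand(p - (-lam**3 + 3 * lam + 2)) == 0
assert expand(p + (lam - 2) * (lam + 1)**2) == 0
print(factor(p))
```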
Let’s now get the eigenvectors. We’ll start with the simple eigenvector.
$\lambda_1 = 2$ :
Here we’ll need to solve,
$$\begin{pmatrix} -2 & 1 & 1 \\ 1 & -2 & 1 \\ 1 & 1 & -2 \end{pmatrix}\begin{pmatrix} \eta_1 \\ \eta_2 \\ \eta_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}$$
This time, unlike the 2×2 cases we worked earlier, we actually need to solve the system. So let's do that.
$$\left(\begin{array}{ccc|c} -2 & 1 & 1 & 0 \\ 1 & -2 & 1 & 0 \\ 1 & 1 & -2 & 0 \end{array}\right) \xrightarrow{R_1 \leftrightarrow R_2} \left(\begin{array}{ccc|c} 1 & -2 & 1 & 0 \\ -2 & 1 & 1 & 0 \\ 1 & 1 & -2 & 0 \end{array}\right) \xrightarrow[R_3 - R_1]{R_2 + 2R_1} \left(\begin{array}{ccc|c} 1 & -2 & 1 & 0 \\ 0 & -3 & 3 & 0 \\ 0 & 3 & -3 & 0 \end{array}\right)$$
$$\xrightarrow{-\frac{1}{3}R_2} \left(\begin{array}{ccc|c} 1 & -2 & 1 & 0 \\ 0 & 1 & -1 & 0 \\ 0 & 3 & -3 & 0 \end{array}\right) \xrightarrow[R_1 + 2R_2]{R_3 - 3R_2} \left(\begin{array}{ccc|c} 1 & 0 & -1 & 0 \\ 0 & 1 & -1 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right)$$
Going back to equations gives,
$$\eta_1 - \eta_3 = 0 \ \Rightarrow\ \eta_1 = \eta_3 \qquad\qquad \eta_2 - \eta_3 = 0 \ \Rightarrow\ \eta_2 = \eta_3$$
So, again we get infinitely many solutions, as we should for eigenvectors. The eigenvector is then,
$$\vec{\eta} = \begin{pmatrix} \eta_1 \\ \eta_2 \\ \eta_3 \end{pmatrix} = \begin{pmatrix} \eta_3 \\ \eta_3 \\ \eta_3 \end{pmatrix}, \quad \eta_3 \neq 0 \qquad \Rightarrow \qquad \vec{\eta}^{\,(1)} = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}, \quad \eta_3 = 1$$
Now, let's do the other eigenvalue.
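If you'd like to check the row reduction above, SymPy's `rref` method reproduces the final reduced matrix (a sketch, not in the original notes, assuming SymPy is installed; the trailing column of zeros is omitted since it never changes):

```python
from sympy import Matrix

# Coefficient matrix A - 2I from the lambda = 2 case of Example 4
M = Matrix([[-2, 1, 1],
            [1, -2, 1],
            [1, 1, -2]])

# rref() returns the reduced row echelon form and the pivot columns
R, pivots = M.rref()

# The reduced rows encode eta1 - eta3 = 0 and eta2 - eta3 = 0
assert R == Matrix([[1, 0, -1],
                    [0, 1, -1],
                    [0, 0, 0]])
assert pivots == (0, 1)
print(R)
```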
$\lambda_{2,3} = -1$ :
Here we’ll need to solve,
$$\begin{pmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{pmatrix}\begin{pmatrix} \eta_1 \\ \eta_2 \\ \eta_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}$$
Okay, in this case it is clear that all three rows are the same, and so there isn't any reason to actually solve the system since we can clear out the bottom two rows to all zeroes in one step. The equation that we get then is,
$$\eta_1 + \eta_2 + \eta_3 = 0 \qquad \Rightarrow \qquad \eta_1 = -\eta_2 - \eta_3$$
So, in this case we get to pick two of the values for free and will still get infinitely many solutions. Here is the general eigenvector for this case,
$$\vec{\eta} = \begin{pmatrix} \eta_1 \\ \eta_2 \\ \eta_3 \end{pmatrix} = \begin{pmatrix} -\eta_2 - \eta_3 \\ \eta_2 \\ \eta_3 \end{pmatrix}, \qquad \eta_2 \text{ and } \eta_3 \text{ not both zero}$$
Notice the restriction this time. Recall that we only require that the eigenvector not be the zero vector. This means that we can allow one or the other of the two variables to be zero; we just can't allow both of them to be zero at the same time!
What this means for us is that we are going to get two linearly independent eigenvectors this time. Here they are.
$$\vec{\eta}^{\,(2)} = \begin{pmatrix} -1 \\ 0 \\ 1 \end{pmatrix} \quad \left(\eta_2 = 0,\ \eta_3 = 1\right) \qquad\qquad \vec{\eta}^{\,(3)} = \begin{pmatrix} -1 \\ 1 \\ 0 \end{pmatrix} \quad \left(\eta_2 = 1,\ \eta_3 = 0\right)$$
Now, when we talked about linearly independent vectors in the last section we only looked at $n$ vectors, each with $n$ components. We can still talk about linear independence in this case, however. Recall that back when we did linear independence for functions we saw that if two functions were linearly dependent then they were multiples of each other. Well, the same thing holds true for vectors. Two vectors will be linearly dependent if they are multiples of each other. In this case there is no way to get $\vec{\eta}^{\,(2)}$ by multiplying $\vec{\eta}^{\,(3)}$ by a constant. Therefore, these two vectors must be linearly independent.
So, summarizing up, here are the eigenvalues and eigenvectors for this matrix.
$$\lambda_1 = 2 \qquad \vec{\eta}^{\,(1)} = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}$$
$$\lambda_{2,3} = -1 \qquad \vec{\eta}^{\,(2)} = \begin{pmatrix} -1 \\ 0 \\ 1 \end{pmatrix} \qquad \vec{\eta}^{\,(3)} = \begin{pmatrix} -1 \\ 1 \\ 0 \end{pmatrix}$$
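As a final numerical sanity check of Example 4 (not in the original notes; a sketch assuming NumPy is installed), we can confirm both eigenvalues and the fact that the repeated eigenvalue $-1$ really does carry two linearly independent eigenvectors:

```python
import numpy as np

# The matrix from Example 4
A = np.array([[0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])

# Eigenvalues should be 2 (simple) and -1 (multiplicity 2)
eigvals, _ = np.linalg.eig(A)
assert np.allclose(np.sort(eigvals), [-1.0, -1.0, 2.0])

# The two hand-computed eigenvectors for lambda = -1, as columns
V = np.array([[-1.0, -1.0],
              [0.0, 1.0],
              [1.0, 0.0]])

# Both really are eigenvectors for -1 ...
assert np.allclose(A @ V, -V)
# ... and they are linearly independent (the 3x2 matrix has rank 2)
assert np.linalg.matrix_rank(V) == 2
print("Example 4 checks out")
```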
