
4.3 Inhomogeneous Difference Equations

In a completely analogous way to the ODE case, one proves that every solution to the inhomogeneous linear difference equation (4.1) is of the form

$$(y_n)_{\text{homo}} + (y_n)_{\text{part}}$$

where $(y_n)_{\text{homo}}$ is a solution to the homogeneous equation (4.1) with $f_n = 0$ and $(y_n)_{\text{part}}$ is a particular solution to the inhomogeneous equation (4.1).
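As a minimal added illustration (using a simple first-order equation rather than the general equation (4.1)): for $y_{n+1} = 2y_n + 1$, the constant sequence $(y_n)_{\text{part}} = -1$ is a particular solution, the homogeneous solutions are $(y_n)_{\text{homo}} = C\,2^n$, and so every solution is $y_n = C\,2^n - 1$.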

4.4 Exercises

#1. Degenerate Roots

Consider the constant coefficient difference equation (4.2) but now assume the two roots $\lambda_{1,2}$ are equal. Show that

$n\lambda_1^n$ is a second linearly independent solution to (4.2).

#2. Rational Approximations to $\sqrt{2}$

Solve the difference equation

$$x_{n+1} = 2x_n + x_{n-1}, \qquad n \ge 1,$$

with initial conditions $x_0 = 0$ and $x_1 = 1$ that corresponds to the sequence $0, 1, 2, 5, 12, 29, \ldots$. Show that

$$\lim_{n\to\infty} \frac{x_{n+1} - x_n}{x_n} = \sqrt{2}.$$

The rational numbers $(x_{n+1} - x_n)/x_n$ provide us with very good approximations to the square root of two.²
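As an added numerical check (not part of the exercise), one can iterate the recurrence in MATLAB and watch the ratios approach $\sqrt{2}$:

% Iterate x_{n+1} = 2*x_n + x_{n-1}; MATLAB indices are 1-based, so x(k) holds x_{k-1}
x = zeros(1, 12);                 % x(1) = x_0 = 0
x(2) = 1;                         % x(2) = x_1 = 1
for n = 2:11
    x(n+1) = 2*x(n) + x(n-1);     % builds 0, 1, 2, 5, 12, 29, ...
end
ratios = (x(3:end) - x(2:end-1)) ./ x(2:end-1)   % 1, 1.5, 1.4, 1.4167, ... -> sqrt(2) = 1.4142...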

#3. Catalan numbers

Nonlinear recurrence relations also arise frequently. For example, the Catalan numbers $T_n$ satisfy the nonlinear recurrence relation

$$T_n = \sum_{k=0}^{n-1} T_k\, T_{n-1-k}, \qquad n = 1, 2, \ldots,$$

where $T_0 := 1$. Define the generating function

$$T(z) = \sum_{n=0}^{\infty} T_n z^n.$$

²The square root of two is irrational.


Show that

$$T(z) = \frac{1 - \sqrt{1 - 4z}}{2z}.$$

From this prove that

$$T_n = \frac{1}{n+1}\binom{2n}{n},$$

where $\binom{n}{k}$ is the binomial coefficient. Catalan numbers arise in a variety of combinatorial problems. Here is one example:

Suppose $2n$ points are placed in fixed positions, evenly distributed on the circumference of a circle. Then there are $T_n$ ways to join $n$ pairs of the points so that the resulting chords do not intersect.

One can easily make a table of values of $T_n$ using, say, the MATHEMATICA command (this gives $T_1$ through $T_{10}$):

Table[{n, Binomial[2*n, n]/(n + 1)}, {n, 1, 10}]
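If one prefers MATLAB, an equivalent table can be produced with the built-in function nchoosek (an added sketch, not from the text):

% Table of Catalan numbers T_1 through T_10, using T_n = C(2n,n)/(n+1)
for n = 1:10
    fprintf('%2d  %d\n', n, nchoosek(2*n, n)/(n + 1));
end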


Chapter 5

Matrix Differential Equations

Linear systems are almost the only large class of differential equations for which there exists a definitive theory. This theory is essentially a branch of linear algebra, and allows us to solve all autonomous linear equations.

V.I. Arnold, Ordinary Differential Equations [2]

5.1 The Matrix Exponential

Let A be an $n \times n$ matrix with constant entries. In this chapter we study the matrix differential equation

$$\frac{dx}{dt} = Ax \quad \text{where } x \in \mathbb{R}^n. \qquad (5.1)$$

We will present an algorithm that reduces solving (5.1) to problems in linear algebra. The exponential of the matrix $tA$, $t \in \mathbb{R}$, is defined by the infinite series¹

$$e^{tA} = \exp(tA) := I + tA + \frac{t^2}{2!}A^2 + \frac{t^3}{3!}A^3 + \cdots. \qquad (5.2)$$

Remark: In an advanced course you will prove that this infinite series of matrices converges to an $n \times n$ matrix.

It is important to note that for matrices A and B, in general,

$$\exp(tA)\,\exp(tB) \neq \exp(tA + tB).$$

¹We put the scalar factor t directly into the definition of the matrix exponential since it is in this form that we will use the matrix exponential.


If A and B commute (AB = BA) then it is the case that

$$\exp(tA)\,\exp(tB) = \exp(tA + tB).$$

This last fact can be proved by examining the series expansion of both sides—on the left hand side one has to multiply two infinite series. You will find that by making use of AB = BA the result follows precisely as in the case of complex exponentials.
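Here is a small numerical illustration of this point (an added sketch; the matrices are chosen only for illustration, and expm is MATLAB's built-in matrix exponential):

% A and B = 2*A commute; C does not commute with A
A = [0 1; -1 0];
B = 2*A;
C = [1 0; 0 -1];
t = 0.7;
norm(expm(t*A)*expm(t*B) - expm(t*(A+B)))   % essentially zero (commuting case)
norm(expm(t*A)*expm(t*C) - expm(t*(A+C)))   % visibly nonzero (non-commuting case)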

Here are some examples:

1. Let

$$A = D = \text{diagonal matrix} = \operatorname{diag}(\lambda_1, \lambda_2, \ldots, \lambda_n).$$

Observe that

$$D^k = \operatorname{diag}\left(\lambda_1^k, \lambda_2^k, \ldots, \lambda_n^k\right).$$

Thus

$$\exp(tD) = \sum_{k=0}^{\infty} \frac{t^k}{k!} D^k = \operatorname{diag}\left(e^{t\lambda_1}, e^{t\lambda_2}, \ldots, e^{t\lambda_n}\right).$$

2. Suppose that A is a diagonalizable matrix; that is, there exist matrices S and D with S invertible and D diagonal such that

$$A = SDS^{-1}.$$

Observe

$$A^2 = (SDS^{-1})(SDS^{-1}) = SD^2S^{-1},$$

and more generally,

$$A^k = SD^kS^{-1}.$$

Thus

$$\exp(tA) = \sum_{k=0}^{\infty} \frac{t^k}{k!} A^k = \sum_{k=0}^{\infty} \frac{t^k}{k!} SD^kS^{-1} = S\left(\sum_{k=0}^{\infty} \frac{t^k}{k!} D^k\right)S^{-1} = S \exp(tD)\, S^{-1}. \qquad (5.3)$$

In the next to the last equality, we used the fact that S and $S^{-1}$ do not depend upon the summation index k and can therefore be brought outside of the sum. The last equality makes use of the previous example where we computed the exponential of a diagonal matrix. This example shows that if one can find such S and D, then the computation of $\exp(tA)$ is reduced to matrix multiplications. This last result, (5.3), is quite suitable for using MATLAB.
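Equation (5.3) can also be checked numerically (an added sketch; the matrix A below is built from a randomly chosen S and a fixed D purely for illustration):

% Numerical check of exp(tA) = S*exp(tD)*inv(S) for a diagonalizable A
rng(0);                          % make the example reproducible
S = randn(3);                    % a random matrix, almost surely invertible
D = diag([2 -2 -1]);
A = S*D/S;                       % A = S*D*inv(S)
t = 1.3;
norm(expm(t*A) - S*expm(t*D)/S)  % agrees to roundoff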


3. Let

$$A = \begin{pmatrix} 1 & 1 & 1 \\ 2 & 1 & -1 \\ -8 & -5 & -3 \end{pmatrix} \qquad (5.4)$$

and suppose we want exp(tA). Here's how we would do this symbolically in MATLAB. (In displaying the result, we show exp(tA) column-by-column since otherwise it would not fit on the page.)

>>A=[1 1 1; 2 1 -1; -8 -5 -3];

>>A=sym(A);

>>[V,D]=eig(A)

V =

[  0,     1,     1  ]
[ -1,  -5/4,  -4/3  ]
[  1,  -7/4,  -2/3  ]

D =

[ 2,  0,  0]
[ 0, -2,  0]
[ 0,  0, -1]

>>t=sym('t');
>>eA=V*diag(exp(t*diag(D)))*inv(V);

>>eA(1:3,1)

ans =

[             -2*exp(-2*t)+3*exp(-t)]
[  3/2*exp(2*t)+5/2*exp(-2*t)-4*exp(-t)]
[ -3/2*exp(2*t)+7/2*exp(-2*t)-2*exp(-t)]

>> eA(1:3,2)

ans =

[               -exp(-2*t)+exp(-t)]
[  13/12*exp(2*t)+5/4*exp(-2*t)-4/3*exp(-t)]
[ -13/12*exp(2*t)+7/4*exp(-2*t)-2/3*exp(-t)]

>> eA(1:3,3)

ans =

[               -exp(-2*t)+exp(-t)]
[  1/12*exp(2*t)+5/4*exp(-2*t)-4/3*exp(-t)]
[ -1/12*exp(2*t)+7/4*exp(-2*t)-2/3*exp(-t)]

>>
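As an added consistency check of the symbolic result (a sketch; subs, diff, and simplify are Symbolic Math Toolbox functions), one can verify that eA reduces to the identity at t = 0 and that its t-derivative at t = 0 is A:

>> simplify(subs(eA, t, 0))            % should return the 3x3 identity matrix
>> simplify(subs(diff(eA, t), t, 0))   % should return A itself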


4. Let

$$A = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}.$$

Matrix multiplication shows

$$A^2 = -I,$$

and thus

$$A^{2k} = (A^2)^k = (-I)^k = (-1)^k I, \qquad A^{2k+1} = A^{2k}A = (-1)^k A.$$

Hence

$$\begin{aligned}
\exp(tA) &= \sum_{k=0}^{\infty} \frac{t^k}{k!} A^k \qquad (5.5)\\
&= \sum_{k=0}^{\infty} \frac{t^{2k}}{(2k)!} A^{2k} + \sum_{k=0}^{\infty} \frac{t^{2k+1}}{(2k+1)!} A^{2k+1}\\
&= \sum_{k=0}^{\infty} \frac{(-1)^k t^{2k}}{(2k)!}\, I + \sum_{k=0}^{\infty} \frac{(-1)^k t^{2k+1}}{(2k+1)!}\, A\\
&= \cos t\, I + \sin t\, A\\
&= \begin{pmatrix} \cos t & 0 \\ 0 & \cos t \end{pmatrix} + \begin{pmatrix} 0 & \sin t \\ -\sin t & 0 \end{pmatrix}\\
&= \begin{pmatrix} \cos t & \sin t \\ -\sin t & \cos t \end{pmatrix}. \qquad (5.6)
\end{aligned}$$
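Here is a quick numerical check of (5.6) (an added sketch; expm is MATLAB's built-in matrix exponential and the value of t is arbitrary):

% Compare expm(t*A) with the closed form from (5.6)
t = 0.4;
norm(expm(t*[0 1; -1 0]) - [cos(t) sin(t); -sin(t) cos(t)])   % essentially zero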

Remark: You can also compute

$$\exp\left( t \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \right)$$

by the method of Example #2. Try it!

5.2 Application of $e^{tA}$ to differential equations

5.2.1 Derivative of $e^{tA}$ with respect to t

The following is the basic property of the exponential that we apply to differential equations. As before, A denotes an $n \times n$ matrix with constant coefficients.

 

$$\frac{d}{dt} \exp(tA) = A\exp(tA) = \exp(tA)\,A. \qquad (5.7)$$


Here is the proof: Differentiate

$$e^{tA} = I + tA + \frac{t^2}{2!}A^2 + \frac{t^3}{3!}A^3 + \cdots$$

term-by-term² with the result

$$\begin{aligned}
\frac{d}{dt} e^{tA} &= A + tA^2 + \frac{t^2}{2!}A^3 + \cdots\\
&= A\left(I + tA + \frac{t^2}{2!}A^2 + \cdots\right)\\
&= A\, e^{tA}\\
&= e^{tA} A.
\end{aligned}$$

The last equality follows by factoring A out on the right instead of the left.
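A crude finite-difference check of (5.7) (an added sketch; the matrix A and the step size h are arbitrary choices):

% Centered-difference approximation of d/dt expm(t*A) versus A*expm(t*A)
A = [1 2; 3 4];
t = 0.5;
h = 1e-6;
dnum = (expm((t+h)*A) - expm((t-h)*A)) / (2*h);   % numerical derivative in t
norm(dnum - A*expm(t*A))                          % small (roughly 1e-9 here)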

5.2.2 Solution to Matrix ODE with Constant Coefficients

We now use (5.7) to prove

Theorem: Let

$$\frac{dx}{dt} = Ax \qquad (5.8)$$

where $x \in \mathbb{R}^n$ and A is an $n \times n$ matrix with constant coefficients. Then every solution of (5.8) is of the form

$$x(t) = \exp(tA)\, x_0 \qquad (5.9)$$

for some constant vector $x_0 \in \mathbb{R}^n$.

Proof: (i) First we show that $x(t) = e^{tA}x_0$ is a solution:

$$\frac{dx}{dt} = \frac{d}{dt}\left(e^{tA}x_0\right) = \left(\frac{d}{dt} e^{tA}\right)x_0 = A e^{tA} x_0 = A\, x(t).$$

(ii) We now show that every solution of (5.8) is of the form (5.9). Let y(t) be any solution to (5.8) and let

$$\Delta(t) := e^{-tA}\, y(t).$$

If we can show that $\Delta(t)$ is independent of t, that is, that it is a constant vector which we call $x_0$, then we are done, since multiplying both sides by $e^{tA}$ shows

$$e^{tA} x_0 = e^{tA}\Delta(t) = e^{tA}e^{-tA}\, y(t) = y(t).$$

²In a complex analysis course you will prove that convergent complex power series can be differentiated term-by-term and the resulting series has the same radius of convergence. Note there really is something to prove here since there is an interchange of two limits.


(We used the fact that tA and −tA commute, so that the addition formula for the matrix exponential is valid.) To show that $\Delta(t)$ is independent of t we show its derivative with respect to t is zero:

$$\begin{aligned}
\frac{d\Delta}{dt} &= \frac{d}{dt}\left(e^{-tA}\, y(t)\right)\\
&= \left(\frac{d}{dt} e^{-tA}\right) y(t) + e^{-tA}\,\frac{dy}{dt} \qquad \text{(product rule)}\\
&= -e^{-tA}A\, y(t) + e^{-tA}\left(A\, y(t)\right) \qquad \text{($y(t)$ satisfies ODE)}\\
&= 0.
\end{aligned}$$
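As an added illustration (not part of the proof), one can compare the closed form (5.9) with a numerical solution produced by ode45 for an arbitrarily chosen A and x0:

% Solve dx/dt = A*x numerically and compare with expm(t*A)*x0
A  = [0 1; -2 -3];
x0 = [1; 0];
[tt, xx] = ode45(@(t, x) A*x, [0 2], x0);
exact = zeros(numel(tt), 2);
for k = 1:numel(tt)
    exact(k, :) = (expm(tt(k)*A)*x0).';
end
max(abs(xx(:) - exact(:)))    % small, set by ode45's default tolerances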

The next theorem relates the solution x(t) of (5.8) to the eigenvalues and eigenvectors of the matrix A (in the case A is diagonalizable).

Theorem: Let A be a diagonalizable matrix. Any solution to (5.8) can be written as

$$x(t) = c_1 e^{t\lambda_1}\psi_1 + c_2 e^{t\lambda_2}\psi_2 + \cdots + c_n e^{t\lambda_n}\psi_n \qquad (5.10)$$

where $\lambda_1, \ldots, \lambda_n$ are the eigenvalues of A with associated eigenvectors $\psi_1, \ldots, \psi_n$, and $c_1, \ldots, c_n$ are constants.

 

 

Proof : All solutions of (5.8) are of the form (5.9). Since A is diagonalizable, the eigenvectors of A can be used to form a basis: {ψ1, . . . , ψn}. Since this is a basis there exist constants c1, . . . , cn such that

$$x_0 = c_1\psi_1 + c_2\psi_2 + \cdots + c_n\psi_n.$$

(x0 is the constant vector appearing in (5.9).)

For any eigenvector ψ of A with eigenvalue λ we have

$$e^{tA}\psi = e^{t\lambda}\psi.$$

(This can be proved by applying the infinite series (5.2) to the eigenvector $\psi$ and noting $A^k\psi = \lambda^k\psi$ for all positive integers k.) Thus

$$e^{tA}x_0 = c_1 e^{tA}\psi_1 + \cdots + c_n e^{tA}\psi_n = c_1 e^{t\lambda_1}\psi_1 + \cdots + c_n e^{t\lambda_n}\psi_n.$$

Here are two immediate corollaries of this theorem:

1. If A is diagonalizable and has only real eigenvalues, then any solution x(t) of (5.1) will have no oscillations.

2. If A is diagonalizable and the real part of every eigenvalue is negative, then

$$x(t) \to 0 \ \ (\text{the zero vector}) \quad \text{as } t \to +\infty.$$

To see this recall that if $\lambda = \sigma + i\mu$ ($\sigma$ and $\mu$ both real), then

$$e^{\lambda t} = e^{\sigma t} e^{i\mu t}.$$

If $\sigma < 0$, then $e^{\sigma t} \to 0$ as $t \to +\infty$. Now apply the preceding theorem.
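As an added illustration of (5.10) and of the second corollary (the matrix below is an arbitrary choice with negative eigenvalues):

% Expand x0 in the eigenvector basis and evolve each term by exp(lambda_i*t)
A  = [-1 5; 0 -2];               % eigenvalues -1 and -2, both negative
[V, D] = eig(A);                 % columns of V are the eigenvectors psi_i
x0 = [3; 1];
c  = V \ x0;                     % coefficients c_i with x0 = c1*psi1 + c2*psi2
t  = 4;
x  = V*diag(exp(t*diag(D)))*c;   % c1*e^{t*lambda1}*psi1 + c2*e^{t*lambda2}*psi2
norm(x - expm(t*A)*x0)           % essentially zero
norm(x)                          % already much smaller than norm(x0): decay toward 0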
