
Chapter 8

Laplace Transform

8.1 Matrix version of the method of Laplace transforms for solving constant coefficient DE's

The Laplace transform of a function f(t) is

F(s) = L(f)(s) = \int_0^\infty e^{-ts} f(t)\, dt    (8.1)

for s sufficiently large. For the Laplace transform to make sense, the function f cannot grow faster than an exponential near infinity. Thus, for example, the Laplace transform of e^{t^2} is not defined.
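Definition (8.1) can be spot-checked with a symbolic package; the following is a minimal sketch using Python's sympy (an alternative to the MATHEMATICA commands used later in this chapter), computing the transforms of e^{2t} and cos t:

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

# L(f)(s) as in (8.1); noconds=True drops the region-of-convergence condition
F_exp = sp.laplace_transform(sp.exp(2*t), t, s, noconds=True)   # 1/(s - 2), valid for s > 2
F_cos = sp.laplace_transform(sp.cos(t), t, s, noconds=True)     # s/(s**2 + 1)
```

Note that e^{2t} grows exponentially, so its transform exists (for s > 2), in line with the growth restriction above.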

We extend (8.1) to vector-valued functions f(t),

f(t) = \begin{pmatrix} f_1(t) \\ f_2(t) \\ \vdots \\ f_n(t) \end{pmatrix}    (8.2)

by

F(s) = L(f)(s) = \begin{pmatrix} \int_0^\infty e^{-ts} f_1(t)\, dt \\ \int_0^\infty e^{-ts} f_2(t)\, dt \\ \vdots \\ \int_0^\infty e^{-ts} f_n(t)\, dt \end{pmatrix}.    (8.3)

Integration by parts shows that

L(df/dt)(s) = sL(f)(s) - f(0).    (8.4)
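This identity is easy to verify symbolically; a short sketch with sympy (assumed available), taking f(t) = cos t:

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

f = sp.cos(t)
# L(df/dt)(s)
lhs = sp.laplace_transform(sp.diff(f, t), t, s, noconds=True)
# sL(f)(s) - f(0)
rhs = s * sp.laplace_transform(f, t, s, noconds=True) - f.subs(t, 0)

assert sp.simplify(lhs - rhs) == 0   # both sides equal -1/(s**2 + 1)
```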

We now explain how matrix Laplace transforms are used to solve the matrix ODE

dx/dt = Ax + f(t)    (8.5)


where A is a constant coefficient n × n matrix, f(t) is a vector-valued function of the independent variable t ("forcing term") with initial condition

x(0) = x0.

(8.6)

First, we take the Laplace transform of both sides of (8.5). From (8.4) we see that the Laplace transform of the LHS of (8.5) is

L(dx/dt) = sL(x) - x_0.

The Laplace transform of the RHS of (8.5) is

L(Ax + f ) = L(Ax) + L(f )

= AL(x) + F (s)

where we set F (s) = L(f )(s) and we used the fact that A is independent of t to conclude1

L(Ax) = AL(x).

(8.7)

Thus the Laplace transform of (8.5) is

 

sL(x) - x_0 = AL(x) + F,

or

(sI_n - A)L(x) = x_0 + F(s)

(8.8)

where I_n is the n × n identity matrix. Equation (8.8) is a linear system of algebraic equations for L(x). We now proceed to solve (8.8). This can be done once we know that (sI_n - A) is invertible. Recall that a matrix is invertible if and only if the determinant of the matrix is nonzero. The determinant of the matrix in question is

p(s) := det(sI_n - A),

(8.9)

which is the characteristic polynomial of the matrix A. We know that the zeros of p(s) are the eigenvalues of A. If s is larger than the absolute value of the largest eigenvalue of A; in symbols,

s > \max_i |\lambda_i|,    (8.10)

then p(s) cannot vanish and hence (sI_n - A)^{-1} exists. We assume s satisfies this condition. Then multiplying both sides of (8.8) by (sI_n - A)^{-1} results in

L(x)(s) = (sI_n - A)^{-1} x_0 + (sI_n - A)^{-1} F(s).    (8.11)

 

 

 

 

 

 

 

¹You are asked to prove (8.7) in an exercise.


Equation (8.11) is the basic result in the application of Laplace transforms to the solution of constant coefficient differential equations with an inhomogeneous forcing term. Equation (8.11) will be a quick way to solve initial value problems once we learn efficient methods to (i) compute (sI_n - A)^{-1}, (ii) compute the Laplace transform of various forcing terms F(s) = L(f)(s), and (iii) find the inverse Laplace transform. Step (i) is easier if one uses software packages such as MATLAB. Steps (ii) and (iii) are made easier by the use of extensive Laplace transform tables or symbolic integration packages such as MATHEMATICA. It should be noted that many of the DE techniques one learns in engineering courses can be described as efficient methods to do these three steps for examples that are of interest to engineers.
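Steps (i)-(iii) can be packaged into one short routine. The sketch below is an assumption-laden illustration using Python's sympy (the text's own examples use MATLAB and MATHEMATICA); it implements formula (8.11) directly:

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

def solve_ivp_laplace(A, x0, f):
    """Solve dx/dt = A x + f(t), x(0) = x0, via equation (8.11)."""
    n = A.shape[0]
    # step (ii): Laplace transform of the forcing term, componentwise
    F = f.applyfunc(lambda fi: sp.laplace_transform(fi, t, s, noconds=True)
                    if fi != 0 else sp.S.Zero)
    # step (i): L(x)(s) = (s I_n - A)^{-1} x0 + (s I_n - A)^{-1} F(s)
    Lx = (s * sp.eye(n) - A).inv() * (x0 + F)
    # step (iii): invert the transform componentwise (partial fractions first)
    return Lx.applyfunc(lambda g: sp.inverse_laplace_transform(sp.apart(g, s), s, t)
                        if g != 0 else sp.S.Zero)

# y'' + y = 0 with y(0) = 1, y'(0) = 0, written as a first-order system
x = solve_ivp_laplace(sp.Matrix([[0, 1], [-1, 0]]), sp.Matrix([1, 0]), sp.Matrix([0, 0]))
# x[0] = y(t) = cos(t), x[1] = y'(t) = -sin(t)
```

The helper name and the worked system are illustrative choices, not from the text.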

We now give two examples that apply (8.11).

8.1.1 Example 1

Consider the scalar ODE

\frac{d^2y}{dt^2} + b\frac{dy}{dt} + cy = f(t)    (8.12)

where b and c are constants. We first rewrite this as a system

 

 

x(t) = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} y(t) \\ y'(t) \end{pmatrix}

so that

\frac{dx}{dt} = \begin{pmatrix} 0 & 1 \\ -c & -b \end{pmatrix} x + \begin{pmatrix} 0 \\ f(t) \end{pmatrix}.

Then

sI_2 - A = \begin{pmatrix} s & -1 \\ c & s+b \end{pmatrix},

and

 

 

 

 

 

 

(sI_2 - A)^{-1} = \frac{1}{s^2 + bs + c} \begin{pmatrix} s+b & 1 \\ -c & s \end{pmatrix}.

Observe that the characteristic polynomial

p(s) = det(sI_2 - A) = s^2 + bs + c

appears in the denominator of the matrix elements of (sI_2 - A)^{-1}. (This factor in Laplace transforms should be familiar from the scalar treatment; here we see it is the characteristic polynomial of A.) By (8.11),

L(x)(s) = \frac{1}{s^2+bs+c} \begin{pmatrix} (s+b)y(0) + y'(0) \\ -c\,y(0) + s\,y'(0) \end{pmatrix} + \frac{1}{s^2+bs+c} \begin{pmatrix} F(s) \\ sF(s) \end{pmatrix}

where F (s) = L(f )(s). This implies that the Laplace transform of y(t) is given by

L(y)(s) = \frac{(s+b)y(0) + y'(0)}{s^2 + bs + c} + \frac{F(s)}{s^2 + bs + c}.    (8.13)

This derivation of (8.13) may be compared with the derivation of equation (16) on page 302 of Boyce and DiPrima [4] (in our example a = 1).
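As a concrete check of (8.13), take b = 3, c = 2, y(0) = 1, y'(0) = 0 and no forcing; a sympy sketch (the specific numbers are illustrative, not from the text) recovers y(t) and verifies that it solves (8.12):

```python
import sympy as sp

s, t = sp.symbols('s t', positive=True)

b, c = 3, 2                               # illustrative choice; p(s) = (s + 1)(s + 2)
Ly = ((s + b)*1 + 0) / (s**2 + b*s + c)   # (8.13) with y(0) = 1, y'(0) = 0, F = 0

# partial fractions, then the inverse transform
y = sp.inverse_laplace_transform(sp.apart(Ly, s), s, t)   # 2*exp(-t) - exp(-2*t)

# y solves y'' + 3y' + 2y = 0 with the stated initial data
assert sp.simplify(sp.diff(y, t, 2) + b*sp.diff(y, t) + c*y) == 0
assert y.subs(t, 0) == 1 and sp.diff(y, t).subs(t, 0) == 0
```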


8.1.2 Example 2

We consider the system (8.5) for the special case of n = 3 with f (t) = 0 and A given by

A = \begin{pmatrix} 2 & 1 & -1 \\ 0 & 1 & 1 \\ 1 & -1 & -1 \end{pmatrix}.    (8.14)

The characteristic polynomial of (8.14) is

 

 

 

 

p(s) = s^3 - 2s^2 + s - 2 = (s^2 + 1)(s - 2)    (8.15)

and so the matrix A has eigenvalues ±i and 2. A rather long linear algebra computation shows that

(sI_3 - A)^{-1} = \frac{1}{p(s)} \begin{pmatrix} s^2 & s+2 & 2-s \\ 1 & s^2-s-1 & s-2 \\ s-1 & 3-s & s^2-3s+2 \end{pmatrix}.    (8.16)

If one writes a partial fraction decomposition of each of the matrix elements appearing in (8.16) and collects together terms with like denominators, then (8.16) can be written as

(sI_3 - A)^{-1} = \frac{1}{s-2} \begin{pmatrix} 4/5 & 4/5 & 0 \\ 1/5 & 1/5 & 0 \\ 1/5 & 1/5 & 0 \end{pmatrix} + \frac{1}{s^2+1} \begin{pmatrix} (s+2)/5 & -(4s+3)/5 & -1 \\ -(s+2)/5 & (4s+3)/5 & 1 \\ (3-s)/5 & -(s+7)/5 & s-1 \end{pmatrix}.    (8.17)

We now apply (8.17) to solve (8.5) with the above A and f = 0 for the case of initial conditions

x_0 = \begin{pmatrix} 2 \\ -1 \\ 1 \end{pmatrix}.    (8.18)

We find

L(x)(s) = (sI_3 - A)^{-1} x_0 = \frac{1}{s-2} \begin{pmatrix} 4/5 \\ 1/5 \\ 1/5 \end{pmatrix} + \frac{1}{s^2+1} \begin{pmatrix} (6s+2)/5 \\ -(6s+2)/5 \\ (4s+8)/5 \end{pmatrix}.    (8.19)

To find x(t) from (8.19) we use Table 6.2.1 on page 300 of Boyce and DiPrima [4]; in particular, entries 2, 5, and 6. Thus

x(t) = e^{2t} \begin{pmatrix} 4/5 \\ 1/5 \\ 1/5 \end{pmatrix} + \cos t \begin{pmatrix} 6/5 \\ -6/5 \\ 4/5 \end{pmatrix} + \sin t \begin{pmatrix} 2/5 \\ -2/5 \\ 8/5 \end{pmatrix}.

One can also use MATHEMATICA to compute the inverse Laplace transforms with the command InverseLaplaceTransform. For example, if one inputs InverseLaplaceTransform[1/(s-2),s,t], then the output is e^{2t}.
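The solution above can also be verified directly in sympy (assumed available): substitute x(t) back into x' = Ax and check the initial condition, with A as in (8.14) and x_0 as in (8.18).

```python
import sympy as sp

t = sp.symbols('t', positive=True)

A = sp.Matrix([[2, 1, -1], [0, 1, 1], [1, -1, -1]])   # the matrix A of (8.14)
x0 = sp.Matrix([2, -1, 1])                            # the initial condition (8.18)

# the solution x(t) found above
x = (sp.exp(2*t) * sp.Matrix([4, 1, 1]) / 5
     + sp.cos(t) * sp.Matrix([6, -6, 4]) / 5
     + sp.sin(t) * sp.Matrix([2, -2, 8]) / 5)

assert sp.simplify(x.diff(t) - A * x) == sp.zeros(3, 1)   # x' = A x
assert x.subs(t, 0) == x0                                 # x(0) = x0
```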

We now give a second derivation of (8.19) using the eigenvectors of A. As noted above, the eigenvalues of A are λ_1 = 2, λ_2 = i, and λ_3 = -i. If we denote by φ_j an eigenvector


associated to eigenvalue λj (j = 1, 2, 3), then a routine linear algebra computation gives the following possible choices for the φj :

φ_1 = \begin{pmatrix} 4 \\ 1 \\ 1 \end{pmatrix}, \quad φ_2 = \begin{pmatrix} (1+i)/2 \\ -(1+i)/2 \\ 1 \end{pmatrix}, \quad φ_3 = \begin{pmatrix} (1-i)/2 \\ (-1+i)/2 \\ 1 \end{pmatrix}.

Now for any eigenvector φ corresponding to eigenvalue λ of a matrix A we have

(sI_n - A)^{-1} φ = (s - λ)^{-1} φ.

To use this observation we first write

x0 = c1φ1 + c2φ2 + c3φ3.

A computation shows that

 

 

c_1 = 1/5, \quad c_2 = 2/5 - 4i/5, \quad c_3 = 2/5 + 4i/5.

 

Thus

(sI_3 - A)^{-1} x_0 = \frac{1}{5}(s-2)^{-1} φ_1 + \frac{2-4i}{5}(s-i)^{-1} φ_2 + \frac{2+4i}{5}(s+i)^{-1} φ_3.

Combining the last two terms gives (8.19).
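Both ingredients of this second derivation, the expansion of x_0 in eigenvectors and the identity (sI_n - A)^{-1}φ = (s - λ)^{-1}φ, can be confirmed with a short sympy check (a sketch, assuming sympy is available):

```python
import sympy as sp

s, I = sp.symbols('s'), sp.I

A = sp.Matrix([[2, 1, -1], [0, 1, 1], [1, -1, -1]])
x0 = sp.Matrix([2, -1, 1])

phi1 = sp.Matrix([4, 1, 1])                      # eigenvector for lambda_1 = 2
phi2 = sp.Matrix([(1 + I)/2, -(1 + I)/2, 1])     # eigenvector for lambda_2 = i
phi3 = sp.Matrix([(1 - I)/2, (-1 + I)/2, 1])     # eigenvector for lambda_3 = -i

c1 = sp.Rational(1, 5)
c2 = sp.Rational(2, 5) - 4*I/5
c3 = sp.Rational(2, 5) + 4*I/5

assert sp.simplify(A*phi1 - 2*phi1) == sp.zeros(3, 1)              # A phi1 = 2 phi1
assert sp.simplify(c1*phi1 + c2*phi2 + c3*phi3 - x0) == sp.zeros(3, 1)
assert sp.simplify((s*sp.eye(3) - A).inv()*phi1 - phi1/(s - 2)) == sp.zeros(3, 1)
```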

8.2 Structure of (sI_n - A)^{-1} when A is diagonalizable

In this section we assume that the matrix A is diagonalizable; that is, we assume that a set of linearly independent eigenvectors of A forms a basis. Recall the following two theorems from linear algebra: (1) if the n × n matrix A has n distinct eigenvalues, then A is diagonalizable; and (2) if the matrix A is symmetric (Hermitian if the entries are complex), then A is diagonalizable.

Since A is assumed to be diagonalizable, there exists a nonsingular matrix P such that

 

A = PDP^{-1}

 

 

where D is the diagonal matrix

D = \begin{pmatrix} λ_1 & 0 & \cdots & 0 \\ 0 & λ_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & λ_n \end{pmatrix}

and each eigenvalue λ_i of A appears as many times as the (algebraic) multiplicity of λ_i. Thus

sI_n - A = sI_n - PDP^{-1} = P(sI_n - D)P^{-1},
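Since sI_n - D is diagonal, it is trivial to invert, so (sI_n - A)^{-1} = P(sI_n - D)^{-1}P^{-1}. A small sympy sketch with an illustrative symmetric matrix (not from the text):

```python
import sympy as sp

s = sp.symbols('s')

A = sp.Matrix([[2, 1], [1, 2]])      # symmetric, hence diagonalizable; eigenvalues 1 and 3
P, D = A.diagonalize()               # A = P D P^{-1}

lhs = (s*sp.eye(2) - A).inv()
rhs = P * (s*sp.eye(2) - D).inv() * P.inv()   # diagonal part inverts entrywise: 1/(s - lambda_i)

assert sorted([D[0, 0], D[1, 1]]) == [1, 3]
assert sp.simplify(lhs - rhs) == sp.zeros(2, 2)
```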
