
III. NONUNIQUENESS

III-8. Let f(t, x, y) be an R^n-valued function of (t, x, y) ∈ R × R^n × R^m. Assume that

(1) the entries of f(t, x, y) are continuous in the region Δ = {(t, x, y) : 0 ≤ t ≤ a, |x| ≤ α, |y| ≤ b}, where a, α, and b are fixed positive numbers,

(2) there exists a positive number K such that |f(t, x_1, y) − f(t, x_2, y)| ≤ K|x_1 − x_2| if (t, x_j, y) ∈ Δ (j = 1, 2).

Let U denote the set of all R^m-valued functions u(t) such that |u(t)| ≤ b for 0 ≤ t ≤ a and that |u(t) − u(τ)| ≤ L|t − τ| if 0 ≤ t ≤ a and 0 ≤ τ ≤ a, where L is a positive constant independent of u ∈ U. Also, let φ(t, u) denote the unique solution of the initial-value problem dx/dt = f(t, x, u(t)), x(0) = 0, where u ∈ U. It is known that there exists a positive number a_0 such that for all u ∈ U, the solution φ(t, u) exists and |φ(t, u)| ≤ α for 0 ≤ t ≤ a_0. Denote by R the subset of R^{n+1} which is the union of the solution curves {(t, φ(t, u)) : 0 ≤ t ≤ a_0} for all u ∈ U, i.e., R = {(t, φ(t, u)) : 0 ≤ t ≤ a_0, u ∈ U}. Show that R is a closed set in R^{n+1}.

Hint. See [LM1, Theorem 2, pp. 44-47] and [LM2, Problem 6, pp. 282-283].

III-9. Let f(t, x, y) be an R^n-valued function of (t, x, y) ∈ R × R^n × R^m. Assume that

(1) the entries of f(t, x, y) are continuous in the region Δ = {(t, x, y) : 0 ≤ t ≤ a, |x| ≤ α, |y| ≤ b}, where a, α, and b are fixed positive numbers,

(2) there exists a positive number K such that |f(t, x_1, y) − f(t, x_2, y)| ≤ K|x_1 − x_2| if (t, x_j, y) ∈ Δ (j = 1, 2).

Let U denote the set of all R^m-valued functions u(t) such that |u(t)| ≤ b for 0 ≤ t ≤ a and that the entries of u are piecewise continuous on the interval 0 ≤ t ≤ a. Also, let φ(t, u) denote the unique solution of the initial-value problem dx/dt = f(t, x, u(t)), x(0) = 0, where u ∈ U. It is known that there exists a positive number a_0 such that for all u ∈ U, the solution φ(t, u) exists and |φ(t, u)| ≤ α for 0 ≤ t ≤ a_0. Denote by R the subset of R^{n+1} which is the union of the solution curves {(t, φ(t, u)) : 0 ≤ t ≤ a_0} for all u ∈ U, i.e., R = {(t, φ(t, u)) : 0 ≤ t ≤ a_0, u ∈ U}. Assume that a point (τ, φ(τ, u_0)) is on the boundary of R, where 0 < τ ≤ a_0 and u_0 ∈ U. Show that the solution curve {(t, φ(t, u_0)) : 0 ≤ t ≤ τ} is also on the boundary of R.

Hint. See [LM2, Theorem 3 of Chapter 4 and its remark on pp. 254-257, and Problem 2 on p. 258].

III-10. Let A(t, x) and f(t, x) be, respectively, an n × n matrix-valued function and an R^n-valued function whose entries are continuous and bounded in (t, x) on a domain Δ = {(t, x) : a < t < b, x ∈ R^n}, where a and b are real numbers. Also, assume that (τ, ξ) ∈ Δ. Show that every solution of the initial-value problem dx/dt = A(t, x)x + f(t, x), x(τ) = ξ exists on the interval a < t < b.
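The point of III-10 is that bounded A and f force the a priori bound |dx/dt| ≤ M(|x| + 1), and a Gronwall-type estimate then rules out finite-time blow-up. The following scalar sketch is only an illustration under assumptions of ours: the bounded choices A(t, x) = sin(tx) and f(t, x) = cos t are hypothetical, picked because both are bounded by M = 1.

```python
import math

# Scalar (n = 1) sketch of III-10 with hypothetical bounded coefficients:
# |A(t, x)| <= 1 and |f(t, x)| <= 1, so M = 1 works as a common bound.
def A(t, x):
    return math.sin(t * x)

def f(t, x):
    return math.cos(t)

def rhs(t, x):
    return A(t, x) * x + f(t, x)

def rk4(x0, t0, t1, steps=4000):
    # Classical Runge-Kutta integration of dx/dt = rhs(t, x).
    h = (t1 - t0) / steps
    t, x = t0, x0
    for _ in range(steps):
        k1 = rhs(t, x)
        k2 = rhs(t + h / 2, x + h / 2 * k1)
        k3 = rhs(t + h / 2, x + h / 2 * k2)
        k4 = rhs(t + h, x + h * k3)
        x += (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return x

a, b, xi, M = 0.0, 5.0, 1.0, 1.0
# Gronwall-type a priori bound: |x(t)| <= (|xi| + M (b - a)) e^{M (b - a)},
# valid on all of [a, b], so the solution cannot escape before t = b.
bound = (abs(xi) + M * (b - a)) * math.exp(M * (b - a))
x_end = rk4(xi, a, b)
```

The computed trajectory stays inside the Gronwall bound, which depends only on M, ξ, and b − a; this is the quantitative reason every solution extends to the whole interval a < t < b.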

CHAPTER IV

GENERAL THEORY OF LINEAR SYSTEMS

The main topic of this chapter is the structure of solutions of a linear system

(LP)

dx/dt = A(t)x + b(t),

where the entries of the n × n matrix A(t) are complex-valued (i.e., C-valued) continuous functions of a real independent variable t, and the C^n-valued function b(t) is continuous in t. The existence and uniqueness of solutions of problem (LP) were given by Theorem I-3-5. In §IV-1, we explain some basic results concerning n × n matrices whose entries are complex numbers. In particular, we explain the S-N decomposition (or the Jordan-Chevalley decomposition) of a matrix (cf. Definition IV-1-12; also see [Bou, Chapter 7], [HirS, Chapter 6], and [Hum, pp. 17-18]). The S-N decomposition is equivalent to the block-diagonalization which separates distinct eigenvalues. It is simpler than the Jordan canonical form. The basic tools for achieving this decomposition are the Cayley-Hamilton theorem (cf. Theorem

IV-1-5) and the partial fraction decomposition of the reciprocal of the characteristic polynomial. It is relatively easy to obtain this decomposition with an elementary calculation if all eigenvalues of a given matrix are known (cf. Examples IV-1-18 and IV-1-19). In §IV-2, we explain the general aspect of linear homogeneous systems. Homogeneous systems with constant coefficients are treated in §IV-3. More precisely speaking, we define e^{tA} and discuss its properties. In §IV-4, we explain the structure of solutions of a homogeneous system with periodic coefficients. The main result is the Floquet theorem (cf. Theorem IV-4-1 and [Fl]). The Hamiltonian systems with periodic coefficients are the main subject of §IV-5. The Floquet theorem is extended to this case using canonical linear transformations (cf. [Si4] and [Mar]). Also, we go through an elementary part of the theory of symplectic groups. Finally, nonhomogeneous systems and scalar higher-order equations are treated in §IV-6 and §IV-7, respectively. The topics of §§IV-2-IV-4, IV-6, and

IV-7 are found also, for example, in [CL, Chapter 3] and [Har2, Chapter IV]. For symplectic groups, see, for example, [Ja, Chapter 6] and [We, Chapters 6 and 8].

IV-1. Some basic results concerning matrices

In this section, we explain the basic results concerning constant square matrices. Let M_n(C) denote the set of all n × n matrices whose entries are complex numbers. The set of all invertible matrices with entries in C is denoted by GL(n, C), which stands for the general linear group of order n. We define a topology in M_n(C) by the norm

|A| = max_{1 ≤ j,k ≤ n} |a_{jk}|  for A ∈ M_n(C),

where a_{jk} is the entry of A on the j-th row and the k-th column; i.e.,

A = [ a_{11}  a_{12}  ...  a_{1n}
      a_{21}  a_{22}  ...  a_{2n}
        ...     ...   ...    ...
      a_{n1}  a_{n2}  ...  a_{nn} ].

A matrix A ∈ M_n(C)


is said to be upper-triangular if a_{jk} = 0 for j > k. The following lemma is a basic result in the theory of matrices.

Lemma IV-1-1. For each A ∈ M_n(C), there exists a matrix P ∈ GL(n, C) such that P^{-1}AP is upper-triangular.

Proof.

Let λ be an eigenvalue of A and p_1 an eigenvector of A associated with the eigenvalue λ. Then Ap_1 = λp_1 and p_1 ≠ 0. Choose n − 1 vectors p_j (j = 2, ..., n) so that Q = [p_1 p_2 ... p_n] ∈ GL(n, C), where the p_j are the column vectors of the matrix Q. Then the first column vector of Q^{-1}AQ is [λ, 0, ..., 0]^T. Hence, we can complete the proof of this lemma by induction on n. □
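The induction in this proof can be carried out numerically. The sketch below assumes NumPy, and the helper name `triangularize` is ours: one eigenvector is extended to a basis via a QR factorization, and the construction recurses on the remaining (n − 1) × (n − 1) block.

```python
import numpy as np

def triangularize(A):
    # Numerical version of the induction in Lemma IV-1-1: take one
    # eigenvector p1, extend it to an invertible Q, so Q^{-1} A Q has
    # first column (lambda, 0, ..., 0)^T; recurse on the lower block.
    n = A.shape[0]
    if n == 1:
        return np.eye(1, dtype=complex)
    _, vecs = np.linalg.eig(A)
    p1 = vecs[:, [0]]
    # QR of [p1 | I] yields an invertible Q whose first column spans p1.
    Q, _ = np.linalg.qr(np.hstack([p1, np.eye(n, dtype=complex)]))
    B = np.linalg.inv(Q) @ A @ Q
    P_sub = triangularize(B[1:, 1:])
    P = Q @ np.block([
        [np.eye(1, dtype=complex), np.zeros((1, n - 1), dtype=complex)],
        [np.zeros((n - 1, 1), dtype=complex), P_sub],
    ])
    return P

A = np.array([[2, 1, 0], [0, 2, 1], [1, 0, 3]], dtype=complex)  # arbitrary
P = triangularize(A)
T = np.linalg.inv(P) @ A @ P   # should be upper-triangular
```

The recursion mirrors the proof exactly: each level peels off one eigenvalue, so after n − 1 steps the strictly lower-triangular part vanishes (up to roundoff).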

 

A matrix A ∈ M_n(C) is said to be diagonal if a_{jk} = 0 for j ≠ k. We denote by diag[d_1, d_2, ..., d_n] the diagonal matrix with entries d_1, d_2, ..., d_n on the main diagonal (i.e., d_j = a_{jj}). A matrix A ∈ M_n(C) is said to be diagonalizable (or semisimple) if there exists a matrix P ∈ GL(n, C) such that P^{-1}AP is diagonal.

Denote by S_n the set of all diagonalizable matrices in M_n(C). The following lemma is another basic result in the theory of matrices.

Lemma IV-1-2. A matrix A ∈ M_n(C) is diagonalizable if and only if A has n linearly independent eigenvectors p_1, p_2, ..., p_n.

Proof.

If A has n linearly independent eigenvectors p_1, p_2, ..., p_n, set P = [p_1 p_2 ... p_n] ∈ GL(n, C). Then P^{-1}AP is diagonal. Conversely, if P^{-1}AP is diagonal for P = [p_1 p_2 ... p_n] ∈ GL(n, C), then p_1, p_2, ..., p_n are n linearly independent eigenvectors of A. □

In particular, if a matrix A ∈ M_n(C) has n distinct eigenvalues, then n eigenvectors corresponding to these n eigenvalues, respectively, are linearly independent (cf. [Rab, p. 186]). Therefore, we obtain the following corollary of Lemma IV-1-2.

Corollary IV-1-3. If a matrix A ∈ M_n(C) has n distinct eigenvalues, then A ∈ S_n.
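A quick numerical illustration of Lemma IV-1-2 and Corollary IV-1-3, assuming NumPy (the 2 × 2 matrix is an arbitrary example of ours): for a matrix with distinct eigenvalues, the matrix P whose columns are the eigenvectors returned by `np.linalg.eig` diagonalizes A.

```python
import numpy as np

# A matrix with distinct eigenvalues is diagonalized by the matrix of
# its eigenvectors.  Here p_A(lam) = (lam - 3)(lam + 2).
A = np.array([[1.0, 2.0], [3.0, 0.0]])
lam, P = np.linalg.eig(A)           # columns of P are eigenvectors
D = np.linalg.inv(P) @ A @ P        # should equal diag(lam)
off_diagonal = D - np.diag(np.diag(D))
```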

The set M_n(C) is a noncommutative C-algebra. This means that M_n(C) is a vector space over C and a noncommutative ring. The set S_n is not a subalgebra of M_n(C). However, the following lemma shows an important topological property of S_n as a subset of M_n(C).

Lemma IV-1-4. The set S_n is dense in M_n(C).

Proof.

It must be shown that, for each matrix A ∈ M_n(C), there exists a sequence {B_k : k = 1, 2, ...} of matrices in S_n such that lim_{k→+∞} B_k = A. To do this, we may assume without any loss of generality that A is an upper-triangular matrix with the eigenvalues λ_1, ..., λ_n on the main diagonal (cf. Lemma IV-1-1). Set B_k = A + diag[ε_{k,1}, ε_{k,2}, ..., ε_{k,n}], where the quantities ε_{k,ν} (ν = 1, 2, ..., n) are chosen in such a way that the n numbers λ_1 + ε_{k,1}, λ_2 + ε_{k,2}, ..., λ_n + ε_{k,n} are distinct and that lim_{k→+∞} ε_{k,ν} = 0 for ν = 1, 2, ..., n. Then, by Corollary IV-1-3, we obtain B_k ∈ S_n and lim_{k→+∞} B_k = A. □
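The construction in this proof is easy to reproduce numerically (assuming NumPy; the Jordan block is an arbitrary example of ours): a 2 × 2 Jordan block is not diagonalizable, yet perturbing its diagonal by small distinct numbers yields matrices in S_2 converging to the block.

```python
import numpy as np

# Lemma IV-1-4 in miniature: J is a Jordan block (not diagonalizable),
# but B_k = J + diag(0, 1/10^k) has the two distinct eigenvalues
# 5 and 5 + eps, hence B_k is in S_2 by Corollary IV-1-3, and B_k -> J.
J = np.array([[5.0, 1.0], [0.0, 5.0]])
B = []
for k in range(1, 6):
    eps = 10.0 ** (-k)
    B.append(J + np.diag([0.0, eps]))
dist = np.max(np.abs(B[-1] - J))    # the norm |B_k - J| of the text
```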

For a matrix A ∈ M_n(C), denote by p_A(λ) the characteristic polynomial of A with the expansion

(IV.1.1)  p_A(λ) = det(λI_n − A) = λ^n + Σ_{h=1}^{n} p_h(A) λ^{n−h},

where I_n denotes the n × n identity matrix. Note that

p_A(A) = A^n + Σ_{h=1}^{n} p_h(A) A^{n−h},  where A^0 = I_n.

Now, let us prove the Cayley-Hamilton theorem (see, for example, [Bel3, pp. 200-201 and 220], [Cu, p. 220], and [Rab, p. 198]).

Theorem IV-1-5 (A. Cayley-W. R. Hamilton). If A ∈ M_n(C), then its characteristic polynomial satisfies p_A(A) = O, where O is the zero matrix of appropriate size.

Remark IV-1-6. The coefficients p_h(A) of p_A(λ) are polynomials in the entries a_{jk} of the matrix A with integer coefficients.

Proof of Theorem IV-1-5.

Since the entries of p_A(A) are polynomials in the entries a_{jk} of the matrix A, they are continuous in the entries of A. Therefore, if p_A(A) = O for A ∈ S_n, it is also true for every A ∈ M_n(C), since S_n is dense in M_n(C) (cf. Lemma IV-1-4).

Note also that if B = P^{-1}AP for some P ∈ GL(n, C), then p_B(λ) = p_A(λ) and p_B(B) = P^{-1} p_A(A) P. Therefore, it suffices to prove Theorem IV-1-5 for diagonal matrices. Set A = diag[λ_1, λ_2, ..., λ_n]. Then p_A(λ) = (λ − λ_1)(λ − λ_2) ⋯ (λ − λ_n) and p_A(A) = diag[p_A(λ_1), p_A(λ_2), ..., p_A(λ_n)] = O. □
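A numerical sanity check of the theorem, assuming NumPy (whose `np.poly` returns the characteristic polynomial coefficients of a square matrix, leading coefficient 1): evaluating p_A at A by Horner's rule must give the zero matrix up to roundoff. The 3 × 3 test matrix is arbitrary.

```python
import numpy as np

# Cayley-Hamilton check: p_A(A) = O, evaluated by Horner's rule.
A = np.array([[2.0, -1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 1.0]])
coeffs = np.poly(A)                 # [1, p_1(A), ..., p_n(A)]
pA_of_A = np.zeros_like(A)
for c in coeffs:
    pA_of_A = pA_of_A @ A + c * np.eye(3)
residual = np.max(np.abs(pA_of_A))  # should be ~ machine epsilon
```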

It is an important application of Theorem IV-1-5 that an n × n matrix N satisfies the condition N^n = O if its characteristic polynomial p_N(λ) is equal to λ^n. If N^n = O, N is said to be nilpotent.

Lemma IV-1-7. A matrix N ∈ M_n(C) is nilpotent if and only if all eigenvalues of N are zero.

Proof.

If p is an eigenvector of N associated with an eigenvalue λ of N, then N^k p = λ^k p for every positive integer k. In particular, N^n p = λ^n p. Hence, if N^n = O, then λ = 0. On the other hand, if all eigenvalues of N are 0, the characteristic polynomial p_N(λ) is equal to λ^n. Hence, N is nilpotent. □

Applying Lemma IV-1-1 to a nilpotent matrix N, we obtain the following result.


Lemma IV-1-8. A matrix N ∈ M_n(C) is nilpotent if and only if there exists a matrix P ∈ GL(n, C) such that P^{-1}NP is upper-triangular and the entries on the main diagonal of P^{-1}NP are all zero. Furthermore, if N is a real matrix, then there exists a real matrix P that satisfies the requirement given above.

To verify the last statement of this lemma, use a method similar to the proof of Lemma IV-1-1 together with the fact that if an eigenvalue of a real matrix is real, then there exists a real eigenvector associated with this real eigenvalue. Details are left to the reader as an exercise.

The main concern of this section is to explain the S-N decomposition of a matrix A ∈ M_n(C) (cf. Theorem IV-1-11). Before introducing the S-N decomposition, we need some preparation.

Let λ_j (j = 1, 2, ..., k) be the distinct eigenvalues of A and let m_j (j = 1, 2, ..., k) be their respective multiplicities. Then, the characteristic polynomial of the matrix A is given by p_A(λ) = (λ − λ_1)^{m_1} (λ − λ_2)^{m_2} ⋯ (λ − λ_k)^{m_k}. Decompose 1/p_A(λ) into the partial fractions

1/p_A(λ) = Σ_{j=1}^{k} Q_j(λ)/(λ − λ_j)^{m_j},

where, for every j, the quantity Q_j is a nonzero polynomial in λ of degree not greater than m_j − 1. Hence, 1 = Σ_{j=1}^{k} Q_j(λ) ∏_{h≠j} (λ − λ_h)^{m_h}. Setting

(IV.1.2)  P_j(λ) = Q_j(λ) ∏_{h≠j} (λ − λ_h)^{m_h},

we obtain

(IV.1.3)  1 = Σ_{j=1}^{k} P_j(λ).

 

Note that this is an identity in λ. Therefore, setting

 

(IV.1.4)  P_j(A) = Q_j(A) ∏_{h≠j} (A − λ_h I_n)^{m_h}  (j = 1, 2, ..., k),

we obtain

(IV.1.5)  I_n = Σ_{h=1}^{k} P_h(A).

In the following two lemmas, we show that (IV.1.5) is a resolution of the identity in terms of projections P_h(A) onto invariant subspaces of A associated with the eigenvalues λ_h, respectively.

Lemma IV-1-9. The k matrices P_j(A) (j = 1, 2, ..., k) given by (IV.1.4) satisfy the following conditions:

(i) A and P_j(A) (j = 1, 2, ..., k) commute,


(ii) (A − λ_j I_n)^{m_j} P_j(A) = O (j = 1, 2, ..., k),

(iii) P_j(A) P_h(A) = O if j ≠ h,

(iv) Σ_{h=1}^{k} P_h(A) = I_n,

(v) P_j(A)^2 = P_j(A) (j = 1, 2, ..., k),

(vi) P_j(A) ≠ O (j = 1, 2, ..., k).

Proof.

Since P_j(A) is a polynomial in A, we obtain (i). Using Theorem IV-1-5, we derive (ii) and (iii) from (IV.1.4) and (i). Statement (iv) is the same as (IV.1.5). Multiplying both sides of (IV.1.5) by P_j(A), we obtain

(IV.1.6)  P_j(A) = Σ_{h=1}^{k} P_j(A) P_h(A).

Then, (v) follows from (IV.1.6) and (iii). To prove (vi), let p_j be an eigenvector of A associated with the eigenvalue λ_j. Note that (IV.1.2) implies P_h(λ_j) = 0 if h ≠ j. Therefore, we derive P_j(λ_j) = 1 from (IV.1.3). Now, since P_j(A) p_j = P_j(λ_j) p_j = p_j ≠ 0, we obtain (vi). □
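When every multiplicity m_j equals 1, the polynomial Q_j reduces to the constant 1/∏_{h≠j}(λ_j − λ_h), and (IV.1.4) becomes a Lagrange-interpolation-style product; that simplification is an assumption of the sketch below (which also assumes NumPy, with an arbitrary 2 × 2 example).

```python
import numpy as np

# Properties (i)-(vi) of Lemma IV-1-9 in the simple-eigenvalue case:
# P_j(A) = prod_{h != j} (A - lam_h I) / (lam_j - lam_h).
A = np.array([[1.0, 2.0], [3.0, 0.0]])      # eigenvalues 3 and -2
lam = np.linalg.eigvals(A)
n = A.shape[0]
projs = []
for j in range(n):
    P = np.eye(n)
    for h in range(n):
        if h != j:
            P = P @ (A - lam[h] * np.eye(n)) / (lam[j] - lam[h])
    projs.append(P)
P1, P2 = projs
```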

Lemma IV-1-10. Denote by V_j the image of the mapping P_j(A) : C^n → C^n. Then,

(1) p ∈ C^n belongs to V_j if and only if P_j(A)p = p,

(2) P_j(A)p = 0 for all p ∈ V_h if j ≠ h,

(3) C^n = V_1 ⊕ V_2 ⊕ ⋯ ⊕ V_k (a direct sum),

(4) for each j, V_j is an invariant subspace of A,

(5) the restriction of A to V_j has the coordinate-wise representation

(IV.1.7)  A|_{V_j} : λ_j I_j + N_j,

where I_j is the identity matrix and N_j is a nilpotent matrix,

(6) dim_C V_j = m_j.

Proof.

Each part of this lemma follows from Lemma IV-1-9 as follows.

(1) A vector p ∈ V_j if and only if p = P_j(A)q for some q ∈ C^n. If p = P_j(A)q, we obtain P_j(A)p = P_j(A)^2 q = P_j(A)q = p from (v) of Lemma IV-1-9.

(2) A vector p ∈ V_h if and only if p = P_h(A)q for some q ∈ C^n. Hence, from (iii) of Lemma IV-1-9, we obtain P_j(A)p = P_j(A) P_h(A) q = 0 if j ≠ h.

(3) (iv) of Lemma IV-1-9 implies p = P_1(A)p + ⋯ + P_k(A)p for every p ∈ C^n, while (1) implies that P_j(A)p ∈ V_j. On the other hand, if p = p_1 + ⋯ + p_k for some p_j ∈ V_j (j = 1, 2, ..., k), then, by (1) and (2), we obtain P_j(A)p = P_j(A)p_1 + ⋯ + P_j(A)p_k = p_j.

(4) Ap = A P_j(A)p = P_j(A) Ap ∈ V_j for every p ∈ V_j.

(5) Let n_j be the dimension of the space V_j over C and let {p_{j,l} : l = 1, 2, ..., n_j} be a basis for V_j. Then, there exists an n_j × n_j matrix N_j such that


(A − λ_j I_n)[p_{j,1} p_{j,2} ... p_{j,n_j}] = [p_{j,1} p_{j,2} ... p_{j,n_j}] N_j

as the coordinate-wise representation relative to this basis. This implies that

(A − λ_j I_n)^l [p_{j,1} p_{j,2} ... p_{j,n_j}] = [p_{j,1} p_{j,2} ... p_{j,n_j}] N_j^l  (l = 1, 2, ...).

In particular, from (ii) of Lemma IV-1-9, we derive N_j^{m_j} = O. Thus, we obtain

(IV.1.8)  A[p_{j,1} p_{j,2} ... p_{j,n_j}] = [p_{j,1} p_{j,2} ... p_{j,n_j}] (λ_j I_j + N_j),

where I_j is the n_j × n_j identity matrix. This proves (IV.1.7).

(6) Let {p_{j,l} : l = 1, 2, ..., n_j} be a basis for V_j (j = 1, 2, ..., k). Set

(IV.1.9)  P_0 = [p_{1,1} ... p_{1,n_1}  p_{2,1} ... p_{2,n_2}  ...  p_{k,1} ... p_{k,n_k}].

Then, P_0 ∈ GL(n, C) and (IV.1.8) implies

(IV.1.10)  P_0^{-1} A P_0 = diag[λ_1 I_1 + N_1, λ_2 I_2 + N_2, ..., λ_k I_k + N_k],

where the right-hand side of (IV.1.10) is a matrix in block-diagonal form with the entries λ_1 I_1 + N_1, λ_2 I_2 + N_2, ..., λ_k I_k + N_k on the main diagonal blocks. Hence, p_A(λ) = (λ − λ_1)^{n_1} (λ − λ_2)^{n_2} ⋯ (λ − λ_k)^{n_k}. Also, p_A(λ) = (λ − λ_1)^{m_1} (λ − λ_2)^{m_2} ⋯ (λ − λ_k)^{m_k}. Therefore, dim_C V_j = n_j = m_j (j = 1, 2, ..., k). □

The following theorem defines the S-N decomposition of a matrix A ∈ M_n(C).

Theorem IV-1-11. Let A be an n × n matrix whose entries are complex numbers. Then, there exist two n × n matrices S and N such that

(a) S is diagonalizable,

(b) N is nilpotent,

(c) A = S + N,

(d) SN = NS.

The two matrices S and N are uniquely determined by these four conditions. If A is real, then S and N are also real. Furthermore, they are polynomials in A with coefficients in the smallest field Q(a_{jk}, λ_h) containing the field Q of rational numbers, the entries a_{jk} of A, and the eigenvalues λ_1, λ_2, ..., λ_k of A.

Proof.

We prove this theorem in three steps.

Step 1. Existence of S and N. Using the projections P_j(A) given by (IV.1.4), define S and N by

S = λ_1 P_1(A) + λ_2 P_2(A) + ⋯ + λ_k P_k(A),  N = A − S.

If P_0 is given by (IV.1.9), then

(IV.1.11)  P_0^{-1} S P_0 = diag[λ_1 I_1, λ_2 I_2, ..., λ_k I_k]

and

(IV.1.12)  P_0^{-1} N P_0 = diag[N_1, N_2, ..., N_k]

from Lemmas IV-1-9 and IV-1-10 and (IV.1.10). Hence, S is diagonalizable and N is nilpotent. Furthermore, NS = SN since S and N are polynomials in A. This shows the existence of S and N satisfying (a), (b), (c), and (d). Moreover, from

(IV.1.4), it follows that the two matrices S and N are polynomials in A with coefficients in the field Q(a_{jk}, λ_h).

Step 2. Uniqueness of S and N. Assume that there exists another pair (S̃, Ñ) of n × n matrices satisfying conditions (a), (b), (c), and (d). Then, (c) and (d) imply that S̃A = AS̃ and ÑA = AÑ. Hence, S̃S = SS̃, ÑS = SÑ, S̃N = NS̃, and ÑN = NÑ since S and N are polynomials in A. This implies that S − S̃ is diagonalizable and Ñ − N is nilpotent. Therefore, from S − S̃ = Ñ − N, it follows that S − S̃ = Ñ − N = O.

Step 3. The case when S and N are real. In the case when A is real, let S̄ and N̄ be the complex conjugates of S and N, respectively. Then, A = S + N = S̄ + N̄. Hence, the uniqueness of S and N implies that S = S̄ and N = N̄.

This completes the proof of Theorem IV-1-11.

Definition IV-1-12. The decomposition A = S + N of Theorem IV-1-11 is called the S-N decomposition of A.
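The construction in Steps 1 and 2 can be carried out in exact arithmetic. The sketch below assumes SymPy and uses helper names of ours: each Q_j is obtained as the degree-(m_j − 1) Taylor polynomial of 1/∏_{h≠j}(λ − λ_h)^{m_h} at λ_j, which reproduces the partial-fraction coefficients of (IV.1.2); then S = Σ_j λ_j P_j(A) and N = A − S. It is run on the matrix of Example IV-1-19.

```python
import sympy as sp

lam = sp.Symbol('lam')

def poly_at(expr, M):
    # Evaluate the polynomial expr (in lam) at the matrix M by Horner.
    n = M.shape[0]
    out = sp.zeros(n)
    for c in sp.Poly(sp.expand(expr), lam).all_coeffs():
        out = out * M + c * sp.eye(n)
    return out

def sn_decomposition(A):
    # Q_j = Taylor polynomial of degree m_j - 1 of 1/g_j at lam_j, where
    # g_j(lam) = prod_{h != j} (lam - lam_h)^{m_h}.  Then (IV.1.4) gives
    # P_j(A) = (Q_j g_j)(A), and S = sum_j lam_j P_j(A), N = A - S.
    n = A.shape[0]
    eig = A.eigenvals()               # {eigenvalue: multiplicity}
    S = sp.zeros(n)
    for lj, mj in eig.items():
        gj = sp.prod([(lam - lh) ** mh for lh, mh in eig.items() if lh != lj])
        Qj = sp.series(1 / gj, lam, lj, mj).removeO()
        S += lj * poly_at(sp.expand(Qj * gj), A)
    return S, A - S

A = sp.Matrix([[3, 4, 3], [2, 7, 4], [-4, 8, 3]])   # Example IV-1-19
S, N = sn_decomposition(A)
```

Because everything is a polynomial in A with exact rational coefficients, the four conditions (a)-(d) can be verified symbolically rather than up to roundoff.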

Remark IV-1-13. From (IV.1.11), it follows immediately that S and A have the same eigenvalues, counting their multiplicities. Therefore, S is invertible if and only if A is invertible.

Observation IV-1-14. Let A be an n × n matrix whose distinct eigenvalues are λ_1, λ_2, ..., λ_k, and let A = S + N be the S-N decomposition of A. It can be shown that n × n matrices P_1, P_2, ..., P_k are uniquely determined by the following three conditions:

(i) I_n = P_1 + P_2 + ⋯ + P_k,

(ii) P_j P_l = O if j ≠ l,

(iii) S = λ_1 P_1 + λ_2 P_2 + ⋯ + λ_k P_k.

Proof.

Note that

I_n = P_1(A) + P_2(A) + ⋯ + P_k(A),
P_j(A) P_h(A) = O if j ≠ h,
S = λ_1 P_1(A) + λ_2 P_2(A) + ⋯ + λ_k P_k(A).

Suppose that matrices P_1, P_2, ..., P_k satisfy (i), (ii), and (iii). From (i) and (ii), P_j^2 = P_j, and hence (iii) implies P_j S = S P_j = λ_j P_j. Since S P_h(A) = λ_h P_h(A), this yields λ_j P_j P_h(A) = P_j S P_h(A) = λ_h P_j P_h(A). Hence, P_j P_h(A) = O whenever j ≠ h. Therefore, P_j = P_j (Σ_h P_h(A)) = P_j P_j(A) and P_j(A) = (Σ_h P_h) P_j(A) = P_j P_j(A), so that P_j = P_j(A). □



Observation IV-1-15. Let A = S + N be the S-N decomposition of an n × n matrix A. Let T be an n × n invertible matrix such that if we set Λ = T^{-1}ST, then Λ = diag[λ_1 I_1, λ_2 I_2, ..., λ_k I_k], where λ_1, λ_2, ..., λ_k are the distinct eigenvalues of S (and also of A), I_j is the m_j × m_j identity matrix, and m_j is the multiplicity

of the eigenvalue λ_j. It is easy to show that

(i) if we set M = T^{-1}NT, then M is nilpotent, MΛ = ΛM, and M = diag[M_1, M_2, ..., M_k], where the M_j are m_j × m_j nilpotent matrices,

(ii) if we set P_j = T diag[E_{j1}, E_{j2}, ..., E_{jk}] T^{-1}, where E_{jl} = O if j ≠ l, while E_{jj} = I_j, we obtain

I_n = P_1 + P_2 + ⋯ + P_k,  P_j P_h = O (j ≠ h),  S = λ_1 P_1 + λ_2 P_2 + ⋯ + λ_k P_k.

Therefore, P_j = P_j(S) = P_j(A) (j = 1, 2, ..., k) (cf. Observation IV-1-14).

The following two remarks concern real diagonalizable matrices.

Remark IV-1-16. Let A be a real n × n diagonalizable matrix and let λ_1, λ_2, ..., λ_n be the eigenvalues of A. Then, there exists a real n × n invertible matrix P such that

(1) in the case when all eigenvalues λ_j (j = 1, 2, ..., n) are real, P^{-1}AP is a real diagonal matrix whose entries on the main diagonal are λ_1, λ_2, ..., λ_n,

(2) in the case when no eigenvalue is real, n is an even integer 2m and P^{-1}AP = diag[D_1, D_2, ..., D_m], where λ_{2j-1} = a_j + i b_j, λ_{2j} = a_j − i b_j, and

D_j = [  a_j  b_j
        -b_j  a_j ],

(3) in the other cases, P^{-1}AP = diag[D_1, D_2, ..., D_h], where λ_{2j-1} = a_j + i b_j, λ_{2j} = a_j − i b_j, and D_j = [a_j b_j ; -b_j a_j] for j = 1, 2, ..., h − 1, and D_h is a real diagonal matrix whose entries on the main diagonal are λ_j (j = 2h − 1, ..., n).

Remark IV-1-17. For any given real n × n matrix A, there exists a sequence {B_k : k = 1, 2, ...} of real n × n diagonalizable matrices such that lim_{k→+∞} B_k = A.

This can be proved in the following way:

(i) let A = S + N be the S-N decomposition of A,

(ii) using Remark IV-1-16, assume that S = diag[D_1, D_2, ..., D_h], as in (3) of Remark IV-1-16,

(iii) find the form of N by using SN = NS,

(iv) triangularize N without changing S,

(v) use a method similar to the proof of Lemma IV-1-4.

Details of the proofs of Remarks IV-1-16 and IV-1-17 are left to the reader as exercises. Now, we give two examples of calculation of the S-N decomposition.
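Case (2) of Remark IV-1-16 can be checked numerically for a single conjugate pair, assuming NumPy (the 2 × 2 matrix is an arbitrary example of ours): if Av = (a + ib)v, then with u = Re v and w = Im v we get Au = au − bw and Aw = bu + aw, so P = [u w] produces the block D_j.

```python
import numpy as np

# One conjugate pair: P built from the real and imaginary parts of an
# eigenvector gives P^{-1} A P = [[a, b], [-b, a]].
A = np.array([[1.0, -2.0], [1.0, 3.0]])      # eigenvalues 2 +/- i
lam, vecs = np.linalg.eig(A)
j = int(np.argmax(lam.imag))                 # pick the root with b > 0
v = vecs[:, j]
a_, b_ = lam[j].real, lam[j].imag
P = np.column_stack([v.real, v.imag])
D = np.linalg.inv(P) @ A @ P                 # real 2 x 2 block
```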

 

 

Example IV-1-18. The matrix

A = [  252   498   4134   698
      -234  -465  -3885  -656
        15    30    252    42
       -10   -20   -166   -25 ]

has two distinct eigenvalues 3 and 4, and

p_A(λ) = (λ − 4)^2 (λ − 3)^2,

1/p_A(λ) = 1/(λ − 4)^2 − 2/(λ − 4) + 1/(λ − 3)^2 + 2/(λ − 3).

Set

P_1(λ) = (λ − 3)^2 − 2(λ − 4)(λ − 3)^2,  P_2(λ) = (λ − 4)^2 + 2(λ − 3)(λ − 4)^2.

Then,

P_1(A) = [ -1  -2   134   198
            1   2  -125  -186
            0   0     9    12
            0   0    -6    -8 ],

P_2(A) = [  2   2  -134  -198
           -1  -1   125   186
            0   0    -8   -12
            0   0     6     9 ].

Therefore,

S = 4P_1(A) + 3P_2(A) = [ 2  -2   134   198
                          1   5  -125  -186
                          0   0    12    12
                          0   0    -6    -5 ],

N = A − S = [  250   500   4000   500
              -235  -470  -3760  -470
                15    30    240    30
               -10   -20   -160   -20 ].
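The matrices of Example IV-1-18 can be checked in integer arithmetic (assuming NumPy): P_1(A) and P_2(A) are complementary projections, S + N = A, SN = NS, and N^2 = O.

```python
import numpy as np

# Integer verification of Example IV-1-18 against Lemma IV-1-9 and
# Theorem IV-1-11.
A = np.array([[252, 498, 4134, 698],
              [-234, -465, -3885, -656],
              [15, 30, 252, 42],
              [-10, -20, -166, -25]])
P1 = np.array([[-1, -2, 134, 198],
               [1, 2, -125, -186],
               [0, 0, 9, 12],
               [0, 0, -6, -8]])
P2 = np.eye(4, dtype=int) - P1       # P_2(A) = I_4 - P_1(A) by (IV.1.5)
S = 4 * P1 + 3 * P2
N = A - S
```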

Example IV-1-19. The matrix

A = [  3  4  3
       2  7  4
      -4  8  3 ]

has two distinct eigenvalues λ_1 = 11, λ_2 = 1, and

p_A(λ) = (λ − 11)(λ − 1)^2,

1/p_A(λ) = 1/(100(λ − 11)) − (λ + 9)/(100(λ − 1)^2).

Hence,

1 = (λ − 1)^2/100 − (λ + 9)(λ − 11)/100.

Set

P_1(λ) = (λ − 1)^2/100,  P_2(λ) = −(λ + 9)(λ − 11)/100.

Then,

P_1(A) = (1/100) [ 0  56  28
                   0  76  38
                   0  48  24 ],

P_2(A) = (1/100) [ 100  -56  -28
                     0   24  -38
                     0  -48   76 ].

Therefore,

S = 11P_1(A) + P_2(A) = (1/10) [ 10  56  28
                                  0  86  38
                                  0  48  34 ],

N = A − S = (1/10) [  20  -16   2
                      20  -16   2
                     -40   32  -4 ].
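The matrices of Example IV-1-19 can be checked the same way (assuming NumPy): the two projections are complementary, S and N reproduce the stated values, and N^2 = O.

```python
import numpy as np

# Verification of Example IV-1-19: projections, S-N decomposition,
# commutation, and nilpotence.
A = np.array([[3.0, 4.0, 3.0], [2.0, 7.0, 4.0], [-4.0, 8.0, 3.0]])
P1 = np.array([[0, 56, 28], [0, 76, 38], [0, 48, 24]]) / 100.0
P2 = np.eye(3) - P1                  # P_2(A) = I_3 - P_1(A) by (IV.1.5)
S = 11 * P1 + P2
N = A - S
```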
