Chau, Chemometrics: From Basics to Wavelet Transform

appendix

259

From this perspective, vectors and matrices in linear algebra are important in the mathematical manipulation of one- and two-dimensional data obtained from analytical instruments.

A.1.2. Column and Row Vectors

A group of real numbers arranged in a column forms a column vector, while its transpose is a row vector, as shown below:

$$\mathbf{a} = \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{bmatrix}, \qquad \mathbf{a}^t = [a_1, a_2, \ldots, a_n]$$

Here, we follow the convention in which a boldfaced variable denotes a column vector or a matrix.

If we say two vectors, a and b, are equal to each other, that means every pair of corresponding elements in them is equal.

A.1.3. Addition and Subtraction of Vectors

Addition or subtraction of two vectors means that every element of the vectors is added or subtracted in the following manner:

$$\mathbf{a} \pm \mathbf{b} = \begin{bmatrix} a_1 \pm b_1 \\ a_2 \pm b_2 \\ \vdots \\ a_n \pm b_n \end{bmatrix}$$

Vector addition and subtraction have the following properties:

a + b = b + a

(a + b) + c = a + (b + c)

a + 0 = a

Here 0 = [0, 0, . . . , 0]t .

A spectrum of a mixture of two chemical components, say, a and b, can be expressed as the vector sum of the individual spectra a and b according to the Lambert--Beer law (see Fig. A.3).

Vector addition of individual spectra to give the spectrum of the mixture can also be applied to other analytical signals such as a chromatogram, a

Figure A.3. The mixture spectrum produced by adding two spectra a and b together.

voltammogram, a kinetic curve, or a titration curve, as they are governed by additive laws similar to the Lambert--Beer law for absorbance. A vector with n elements can be regarded as a point in n-dimensional linear space. Subtraction of two vectors gives a vector whose length is the distance between these two points in the n-dimensional linear space. The geometric meaning of vector subtraction is shown in Figure A.4.

It is well known that addition and subtraction between vectors can be visualized by the so-called parallelogram rule as depicted in Figure A.5.

A.1.4. Vector Direction and Length

A vector in an n-dimensional linear space has direction and length. The direction of a vector is determined by the ratios between its elements. The

Figure A.4. Geometric illustration of the vector subtraction a − b = [a1 − b1, . . . , an − bn]t in an n-dimensional linear space.

 

Figure A.5. Parallelogram rule for vector addition (a + b = c) and subtraction (c − b = a).

length, or magnitude, of a vector is defined by

$$\|\mathbf{a}\| = \left(a_1^2 + \cdots + a_n^2\right)^{1/2}$$

In linear algebra, ‖a‖ is called the norm of the vector a.

A.1.5. Scalar Multiplication of Vectors

A vector a multiplied by a scalar (a constant) k is given by

$$k\mathbf{a} = \begin{bmatrix} ka_1 \\ ka_2 \\ \vdots \\ ka_n \end{bmatrix}$$

and is called the scalar multiplication of a vector in linear algebra. Note that the spectra of different concentrations are just like vectors multiplied by different constants, say, k1, k2, and so on (see Fig. A.6).


Figure A.6. Profiles obtained by scalar multiplication of a vector (spectrum) by constants k1 and k2 with k2 > k1.


Scalar multiplication of vectors has the following properties:

k1(k2a) = (k1k2)a

k1(a + b) = k1a + k1b

(k1 + k2)a = k1a + k2a

In particular, we have

0a = 0

1a = a

(−1)a = −a

A.1.6. Inner and Outer Products between Vectors

When two vectors of the same size (number of elements) multiply each other, there are two possible operations: the inner product and the outer product. The inner product (also known as the dot product or the scalar product) produces a scalar (a number), while the outer product (also known as the dyadic or tensor product) produces a matrix. The following formula (where the superscript t denotes transposition) defines the inner product between two vectors:

$$\mathbf{a}^t\mathbf{b} = [a_1, a_2, \ldots, a_n] \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{bmatrix} = a_1 b_1 + a_2 b_2 + \cdots + a_n b_n$$

The inner product has the following properties:

at (b + c) = at b + at c

(a + b)t c = at c + bt c

Figure A.7 gives the geometric meaning of the inner product between two vectors. The inner product is essentially a kind of projection. The

Figure A.7. Graphic representation of the inner product of the two vectors a and b: a · b = |a| |b| cos α, where α is the angle between them.


concept of projection is very important in chemometrics, and a good understanding of this concept will be very helpful in studying the subject.

If two vectors a and b are orthogonal to each other, that is, if the angle α between them is 90° (as shown in the middle part of Fig. A.7), then the inner product is equal to zero:

at b = 0

The outer product of two vectors produces a bilinear matrix of rank equal to 1, which is of special importance in multivariate resolution for two-way data. In the two-way data from ‘‘hyphenated’’ chromatography, every chemical component can be expressed by such a bilinear matrix of rank 1. The outer product of vectors a and b is given as follows:

$$\mathbf{a}\mathbf{b}^t = \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{bmatrix} [b_1, b_2, \ldots, b_n] = \begin{bmatrix} a_1 b_1 & a_1 b_2 & \cdots & a_1 b_n \\ a_2 b_1 & a_2 b_2 & \cdots & a_2 b_n \\ \vdots & \vdots & & \vdots \\ a_n b_1 & a_n b_2 & \cdots & a_n b_n \end{bmatrix}$$

 

A.1.7. The Matrix and Its Operations

In general, a matrix is expressed in the following manner:

$$\mathbf{A} = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1m} \\ a_{21} & a_{22} & \cdots & a_{2m} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nm} \end{bmatrix}$$

in which there are m columns and n rows.

Usually, capital letters are used to represent matrices, for example, A, B, . . . . Lowercase symbols with integer subscripts i and j represent the elements of the matrix; for example, aij in the expression above denotes the matrix element in the ith row and the jth column. Thus, sometimes, (aij ) is utilized to denote matrix A. Matrix A can also be expressed as a collection of column vectors:

A = [a1, a2, . . . , am ]


A.1.8. Matrix Addition and Subtraction

Two or more matrices of the same order can be added (or subtracted) by adding (or subtracting) their corresponding elements in the following way:

A + B = (aij ) + (bij ) = (aij + bij )

It is obvious that the addition operation has the following properties:

A + B = B + A

(A + B) + C = A + (B + C)

A.1.9. Matrix Multiplication

The product of a matrix of order (n × q), A = (aij )n×q and a matrix B = (bij )q×m of order (q × m) produces a matrix C = (cij )n×m of order (n × m). The elements cij are defined as

$$c_{ij} = \sum_{k=1}^{q} a_{ik} b_{kj}$$

Essentially, cij is the result of the inner product of the i th row of matrix A and the j th column of matrix B. It should be noted that matrix multiplication may not satisfy the commutative rule:

AB ≠ BA (in general)

However, it will satisfy the associative rule

ABC = (AB)C = A(BC)

and also the distributive rule:

A(B + C) = AB + AC

(A + B)(C + D) = A(C + D) + B(C + D)

A.1.10. Zero Matrix and Identity Matrix

In a zero matrix, 0, all elements are equal to zero. A square matrix of order n × n is called an identity matrix if all its diagonal elements have unity value and the off-diagonal elements have zero value. It is denoted by I or In in linear algebra.

It is obvious that the 0 and I matrices have the following features:

A + 0 = A

IA = AI = A


A.1.11. Transpose of a Matrix

The transpose of a matrix A, namely, At , is obtained by exchanging rows and columns of A:

(aij )t = (aji )

From this definition, we have

(AB)t = Bt At

(ABC)t = Ct Bt At

A matrix is called a symmetric matrix if its transpose is equal to itself:

At = A

A.1.12. Determinant of a Matrix

The determinant of a square matrix A of order (n × n), written |A| or det(A), is defined by

$$\det(\mathbf{A}) = |\mathbf{A}| = \sum_{i=1}^{n} (-1)^{i+j} a_{ij}\,|\mathbf{M}_{ij}| = \sum_{i=1}^{n} a_{ij} A_{ij} \qquad (\text{for a fixed column } j)$$

where |Mij| is the determinant of the minor of the element aij . The minor Mij is an (n − 1) × (n − 1) matrix obtained by deleting the ith row and the jth column of A. The quantity Aij = (−1)i+j |Mij| is called the cofactor of aij .

Consider the following examples:

N = 2:

$$|\mathbf{A}| = a_{11} a_{22} - a_{12} a_{21}$$

 

 

N = 3, with the first column (j = 1) fixed:

$$A_{11} = (-1)^{1+1}|\mathbf{M}_{11}| = \begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix}$$

$$A_{21} = (-1)^{2+1}|\mathbf{M}_{21}| = -\begin{vmatrix} a_{12} & a_{13} \\ a_{32} & a_{33} \end{vmatrix}$$

$$A_{31} = (-1)^{3+1}|\mathbf{M}_{31}| = \begin{vmatrix} a_{12} & a_{13} \\ a_{22} & a_{23} \end{vmatrix}$$

and |A| = a11A11 + a21A21 + a31A31.


As an alternative, one may fix a row and write down the determinant of A according to

$$\det(\mathbf{A}) = |\mathbf{A}| = \sum_{j=1}^{n} (-1)^{i+j} a_{ij}\,|\mathbf{M}_{ij}| \qquad (\text{for a fixed row } i)$$

A square matrix A is said to be regular or nonsingular if |A| ≠ 0. Otherwise A is said to be singular.

Let A and B be n × n square matrices and k be a scalar; we then have

|At | = |A|

|kA| = k n |A|

|AB| = |A||B|

|A2| = |A|2

If A is a diagonal or triangular matrix, then

$$|\mathbf{A}| = \prod_{i=1}^{n} a_{ii}$$

A.1.13. Inverse of a Matrix

If two square matrices, say, A and B, satisfy AB = I, then B is called the inverse matrix of A and is denoted by A−1. If A−1 exists, matrix A is a nonsingular matrix or a matrix of full rank. It is easily seen that A−1 exists if and only if A is nonsingular.

If the inverses A−1 and B−1 exist, the following expressions hold:

(k A)−1 = k −1A−1

(A B)−1 = B−1A−1

(At )−1 = (A−1)t

A.1.14. Orthogonal Matrix

A square matrix A is said to be orthogonal if

At A = AAt = I


Orthogonal matrices have the following properties:

At = A−1

det(A) = ±1

The second property follows because

det(A) det(A) = det(At ) det(A) = det(At A) = det(I) = 1

 

A.1.15. Trace of a Square Matrix

The trace of a square matrix, tr(A), is defined as the sum of the diagonal elements:

$$\mathrm{tr}(\mathbf{A}) = \sum_{i=1}^{n} a_{ii}$$

In the special case when A is a matrix of order (1 × 1) containing only one element a, we have

tr(A) = a

For example, a quadratic form yt Ay is a number:

tr(yt Ay) = yt Ay

Properties of the trace of a square matrix are as follows:

tr(A + B) = tr(A) + tr(B)

tr(αA) = α tr(A)

tr(AB) = tr(BA)

E [tr(A)] = tr[E (A)]

$$\mathrm{tr}(\mathbf{A}\mathbf{A}^t) = \mathrm{tr}(\mathbf{A}^t\mathbf{A}) = \sum_{i=1}^{n} \sum_{j=1}^{n} a_{ij}^2$$

It is obvious that if a = [a1, a2, . . . , an ]t is a vector of n elements, then the squared norm may be written as

$$\|\mathbf{a}\|^2 = \mathbf{a}^t\mathbf{a} = \sum_{i=1}^{n} a_i^2 = \mathrm{tr}(\mathbf{a}\mathbf{a}^t)$$

 

 

 

 

 

 


A.1.16. Rank of a Matrix

For matrix A of order (n × m), its rank is the number of linearly independent row vectors (or column vectors) in it (see the example below) and is denoted by rank(A). It has the following features:

rank(At ) = rank(A)

0 ≤ rank(A) ≤ min(n, m)

rank(AB) ≤ min[rank(A), rank(B)]

rank(A + B) ≤ rank(A) + rank(B)

rank(At A) = rank(AAt ) = rank(A)

The rank of a square matrix equals its order n if and only if det(A) is not equal to zero:

rank(A) = n ⟺ det(A) ≠ 0

Remarks. When a sample is measured by a ‘‘hyphenated instrument,’’ the data can be arranged in the form of a matrix. If there is no measurement noise and the spectrum of every absorbing chemical component is different from all the other spectra, then the rank of the data matrix equals the number of chemical components within the sample.

Example A.1. Suppose that a data matrix is composed of n vectors (spectra) as obtained from measurements that are a linear combination of the vectors a and b, pure spectra of two chemical components. The rank of this matrix is 2 as there are only two linearly independent vectors in it. Each of the n vectors (spectra) mi (with i = 1, . . . , n) can be expressed by the following formula:

mi = cia a + cib b (i = 1, 2, . . . , n)

where cia and cib are the relative concentrations of the two components under the i th condition. Thus, the linear space is essentially determined by the two vectors a and b of the chemical components as illustrated in Figure A.8.

A.1.17. Eigenvalues and Eigenvectors of a Matrix

For a square matrix A, an eigenvector xi and its corresponding eigenvalue λi satisfy the relationship

A xi = λi xi (i = 1, 2, . . . , k)
