- Introduction to Adjustment Calculus (Third Corrected Edition)
- Introduction
- 2. Fundamentals of the Mathematical Theory of Probability
- 3.1.4 Variance of a Sample
- 3.2.4 Basic Postulate (Hypothesis) of Statistics, Testing
- 3.3.4 Covariance and Variance-Covariance Matrix
- 3.3.6 Mean and Variance-Covariance Matrix of a Multisample
- 4.2 Random (Accidental) Errors
- 4.10 Other Measures of Dispersion
- 5. Least-Squares Principle
- 5.2 The Sample Mean as "The Maximum Probability Estimator"
- 5.4 Least-Squares Principle for Random Multivariate
- 6.4.4 Variance-Covariance Matrix of the Mean of a Multisample
- 6.4.6 Parametric Adjustment
- 6.4.7 Variance-Covariance Matrix of the Parametric Adjustment Solution Vector, Variance Factor and Weight Coefficient Matrix
- 6.4.10 Conditional Adjustment
- Areas under the standard normal curve from 0 to t
- Van der Waerden, B. L., 1969: Mathematical Statistics, Springer-Verlag.
variance factor $k$ plays. It can be regarded as the variance of unit weight (see 6.4.3) and is accordingly usually denoted by either $S_0^2$ or $\sigma_0^2$ (in the case of postulated variances). This is again intuitively pleasing since it ties together formulae (6.66) and (6.65), where $k$ can also be equated to $S_0^2$. Analogously, we denote $\hat{k}$ by either $\hat{S}_0^2$ or $\hat{\sigma}_0^2$.

By adopting the notation $\hat{\sigma}_0^2$ for $\hat{k}$, and further by denoting the weight coefficient matrix of the estimated parameters $\hat{X}$, i.e. $N^{-1}$, by $Q$, the equations (6.90) and (6.91) become:

$$\hat{\sigma}_0^2 = \frac{V^T P V}{df} \qquad (6.98)$$

$$\hat{\Sigma}_{\hat{X}} = \hat{\sigma}_0^2 \, Q \; . \qquad (6.99)$$
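Equations (6.98) and (6.99) can be sketched numerically as follows. This is a minimal illustration only; the vector $V$, weight matrix $P$, weight coefficient matrix $Q$ and degrees of freedom below are made-up numbers, not values from the text.

```python
import numpy as np

def variance_factor_and_cov(V, P, Q, df):
    """Estimated variance factor (6.98) and variance-covariance
    matrix of the adjusted parameters (6.99)."""
    vtpv = float(V.T @ P @ V)   # quadratic form V^T P V
    sigma0_sq = vtpv / df       # (6.98): variance of unit weight
    cov_X = sigma0_sq * Q       # (6.99): scaled weight coefficient matrix
    return sigma0_sq, cov_X

# Illustrative numbers (assumptions, not from the text):
V = np.array([0.01, -0.02, 0.01])       # residual vector
P = np.diag([1.0, 2.0, 1.0])            # weight matrix
Q = np.array([[2.0, 1.0], [1.0, 2.0]])  # weight coefficient matrix N^{-1}
s0, cov = variance_factor_and_cov(V, P, Q, df=1)
```

Note that $\hat{\Sigma}_{\hat{X}}$ inherits its structure entirely from $Q$; the variance factor only rescales it.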
Example 6.18: Let us compute the estimated variance-covariance matrix $\hat{\Sigma}_{\hat{X}}$ of the adjusted parameters $\hat{X}$ in example 6.16. The $\hat{\Sigma}_{\hat{X}}$ matrix is computed from equation (6.99). First, from the above mentioned example we have:

$$V^T_{1,3} = \frac{H_J - H_G - \sum_i \ell_i}{\sum_i d_i} \, [d_1, \ d_2, \ d_3] \; ,$$

$$P_{3,3} = \mathrm{diag} \left[ \frac{1}{d_1}, \ \frac{1}{d_2}, \ \frac{1}{d_3} \right]$$

and $df = n - u = 3 - 2 = 1$. Hence

$$V^T P V = \left( H_J - H_G - \textstyle\sum_i \ell_i \right)^2 / \textstyle\sum_i d_i$$

and

$$\hat{\sigma}_0^2 = \frac{V^T P V}{df} = \left( H_J - H_G - \textstyle\sum_i \ell_i \right)^2 / \textstyle\sum_i d_i \; .$$

As we have seen, $N^{-1} = Q$ is given by

$$Q = N^{-1}_{2,2} = \frac{1}{\sum_i d_i} \begin{bmatrix} d_1 (d_2 + d_3) & d_1 d_3 \\ d_1 d_3 & d_3 (d_1 + d_2) \end{bmatrix} \; .$$

We thus finally obtain

$$\hat{\Sigma}_{\hat{X}} = \hat{\sigma}_0^2 \, Q = \frac{\left( H_J - H_G - \sum_i \ell_i \right)^2}{\left( \sum_i d_i \right)^2} \begin{bmatrix} d_1 (d_2 + d_3) & d_1 d_3 \\ d_1 d_3 & d_3 (d_1 + d_2) \end{bmatrix} \; .$$
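The closed-form $N^{-1}$ above can be checked numerically. The design matrix $A$ below is a hypothetical reconstruction of a simple leveling line $J \to 1 \to 2 \to G$ (two unknown bench mark heights, three observed height differences); it is an assumption for illustration, not taken from example 6.16.

```python
import numpy as np

# Hypothetical leveling line with section lengths d_i (assumed values);
# the design matrix A is an assumption, not taken from example 6.16.
d = np.array([2.0, 3.0, 5.0])
A = np.array([[ 1.0,  0.0],
              [-1.0,  1.0],
              [ 0.0, -1.0]])
P = np.diag(1.0 / d)        # leveling weights p_i = 1/d_i
N = A.T @ P @ A             # normal equation matrix

# Closed-form inverse from the example:
S = d.sum()
Q = np.array([[d[0] * (d[1] + d[2]), d[0] * d[2]],
              [d[0] * d[2],          d[2] * (d[0] + d[1])]]) / S

print(np.allclose(Q @ N, np.eye(2)))   # closed form agrees with N^{-1}
```

The check confirms that $Q N = I$, i.e. the factored expression really is the inverse of the normal equation matrix for these weights.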
Example 6.19: Let us compute the estimated variance-covariance matrix $\hat{\Sigma}_{\hat{X}}$ of the adjusted parameters $\hat{X}$ in example 6.17. We are going to use equations (6.98) and (6.99). First, from the above mentioned example we have

$$V^T_{1,6} = [0.00, \ 0.02, \ 0.02, \ -0.04, \ -0.04, \ 0.04] \quad \text{in metres,}$$

$$P_{6,6} = \mathrm{diag} \, [0.25, \ 0.5, \ 0.5, \ 0.25, \ 0.5, \ 0.25] \quad \text{in m}^{-2}$$

and $df = n - u = 6 - 3 = 3$. Hence

$$V^T P V = 0.002 \ \text{(unitless)}$$

and

$$\hat{\sigma}_0^2 = \frac{V^T P V}{df} = \frac{0.002}{3} \ \text{(unitless)} \; .$$

Also, from example 6.17, we have

$$Q = N^{-1}_{3,3} = \begin{bmatrix} 1.6 & 0.8 & 0.8 \\ 0.8 & 1.6 & 0.8 \\ 0.8 & 0.8 & 1.2 \end{bmatrix} \quad \text{in m}^2 \; .$$

Finally,

$$\hat{\Sigma}_{\hat{X}} = \hat{\sigma}_0^2 \, Q = 10^{-4} \begin{bmatrix} 10.67 & 5.33 & 5.33 \\ 5.33 & 10.67 & 5.33 \\ 5.33 & 5.33 & 8.0 \end{bmatrix} \ \text{in m}^2$$

or

$$\hat{\Sigma}_{\hat{X}} = \begin{bmatrix} 10.67 & 5.33 & 5.33 \\ 5.33 & 10.67 & 5.33 \\ 5.33 & 5.33 & 8.0 \end{bmatrix} \ \text{in cm}^2 \; .$$
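The arithmetic of example 6.19 can be reproduced directly from the quoted $V$, $P$ and $Q$; this sketch simply evaluates (6.98) and (6.99) with those numbers.

```python
import numpy as np

# Numbers quoted in example 6.19:
V = np.array([0.00, 0.02, 0.02, -0.04, -0.04, 0.04])   # residuals, metres
P = np.diag([0.25, 0.5, 0.5, 0.25, 0.5, 0.25])         # weights, m^-2
Q = np.array([[1.6, 0.8, 0.8],
              [0.8, 1.6, 0.8],
              [0.8, 0.8, 1.2]])                        # N^{-1}, m^2
df = 6 - 3

vtpv = float(V @ P @ V)     # quadratic form, gives 0.002
sigma0_sq = vtpv / df       # (6.98)
cov_X = sigma0_sq * Q       # (6.99), in m^2
```

Multiplying `cov_X` by $10^4$ reproduces the matrix above to the quoted two decimal places (the entries $10.67$ and $5.33$ are rounded values of $32/3$ and $16/3$ times $10^{-4}$).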
6.4.8 Some Properties of the Parametric Adjustment Solution Vector

It can be shown that the choice of the weight matrix $P$ of the observations $L$ (proportional to the inverse of the variance-covariance matrix $\Sigma_L$), together with the choice of the least-squares method (minimization of $V^T P V$) to get the solution $X = \hat{X}$, ensures that the resulting estimate $\hat{X}$ has the smallest possible trace of its variance-covariance matrix $\Sigma_{\hat{X}}$. In other words, taking $P = \sigma_0^2 \, \Sigma_L^{-1}$ and seeking $\min_{X \in R^u} V^T P V$ provides a solution $\hat{X}$ that satisfies at the same time the condition

$$\min \ \mathrm{trace} \ \Sigma_{\hat{X}} \; . \qquad (6.100)$$

This is a result similar to the consequence of the least-squares principle applied to a random multivariate (section 5.4) and we are not going to prove it here.
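The minimum-trace property can be illustrated numerically by comparing the covariance of the estimate under the weights $P \propto \Sigma_L^{-1}$ against an arbitrary other choice (here: equal weights). All numbers below, including the design matrix and $\Sigma_L$, are made-up assumptions for illustration, not values from the text.

```python
import numpy as np

def cov_of_estimate(A, Sigma, P):
    """Covariance of X_hat = (A^T P A)^{-1} A^T P L when Cov(L) = Sigma."""
    M = np.linalg.inv(A.T @ P @ A) @ A.T @ P   # linear estimator matrix
    return M @ Sigma @ M.T

# Small illustrative model (assumed numbers):
A = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
Sigma = np.diag([1.0, 4.0, 9.0, 16.0])   # unequal observation variances

best  = cov_of_estimate(A, Sigma, np.linalg.inv(Sigma))  # P ~ Sigma^{-1}
naive = cov_of_estimate(A, Sigma, np.eye(4))             # equal weights

print(np.trace(best) <= np.trace(naive))   # True: minimum-trace property
```

With $P = \Sigma_L^{-1}$ the covariance collapses to $(A^T \Sigma_L^{-1} A)^{-1}$, which has the smallest trace among all linear unbiased choices of weights; any other $P$ can only increase it.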
$$\phi(L^0; L) = \prod_{i=1}^{n} \frac{1}{S_i \sqrt{2\pi}} \exp \left[ - \frac{(\ell_i - \ell_i^0)^2}{2 S_i^2} \right] \; , \qquad (6.101)$$

we get the most probable estimate of $L^0$ if the condition $\min_{X \in R^u} V^T P V$