

M.-A. Drouin and J.-A. Beraldin

A sheet-of-light system such as the one illustrated in Fig. 3.3(right) can be calibrated similarly by replacing the tables t(x1, y1) = x2 and t′(x1, y1) = Z by t(α, y2) = x2 and t′(α, y2) = Z, where α is the angle controlling the orientation of the laser plane, y2 is a row of the camera and x2 is the measured laser peak position for the camera row y2. Systems that use a Gray code with sub-pixel localization of the fringe transitions could be calibrated similarly. Note that the tables t and t′ can be large, and the values inside those tables may vary smoothly. It is, therefore, possible to fit a non-uniform rational B-spline (NURBS) surface or a polynomial surface over those tables in order to reduce the memory requirement. Moreover, different steps are described in [25] that make it possible to reduce the sensitivity to noise of a non-parametric calibration procedure.
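The table-compression idea can be sketched with a low-degree polynomial surface fitted by least squares; the grid size, synthetic table values, and polynomial degree below are illustrative assumptions, not data from an actual scanner:

```python
import numpy as np

def fit_poly_surface(x, y, t, deg=3):
    """Least-squares fit of a bivariate polynomial t(x, y) of total degree <= deg.

    Returns the coefficient vector and a callable that evaluates the fit."""
    terms = [(i, j) for i in range(deg + 1) for j in range(deg + 1 - i)]
    A = np.column_stack([x**i * y**j for i, j in terms])
    coeffs, *_ = np.linalg.lstsq(A, t, rcond=None)

    def evaluate(xq, yq):
        return sum(c * xq**i * yq**j for c, (i, j) in zip(coeffs, terms))

    return coeffs, evaluate

# Synthetic smooth lookup table t(x1, y1) on a 64x64 pixel grid (illustrative).
xs, ys = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
table = 100 + 30 * xs - 12 * ys + 5 * xs * ys   # smooth, as the text assumes
coeffs, t_hat = fit_poly_surface(xs.ravel(), ys.ravel(), table.ravel(), deg=2)

# 4096 stored table entries are replaced by a handful of coefficients.
print(len(coeffs), np.max(np.abs(t_hat(xs, ys) - table)))
```

A NURBS surface would be fitted in the same spirit, trading a dense table for a small set of control points.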

3.6 Measurement Uncertainty

In this section, we examine the uncertainty associated with 3D points measured by an active triangulation scanner. This section contains advanced material and may be omitted on first reading. Some errors are systematic in nature while others are random. Systematic errors may be implementation dependent and an experimental protocol is proposed to detect them in Sect. 3.7. In the remainder of this section, random errors are discussed. This study is performed for area-based scanners that use phase shift. An experimental approach for modeling random errors for the Gray code method will be presented in Sect. 3.7. Moreover, because the description requires advanced knowledge of the image formation process, the discussion of random errors for laser-based scanners is postponed until Sect. 3.8.

In the remainder of this section, we examine how the noise in the images of the camera influences the position of 3D points. First, the error propagation from image intensity to pixel coordinate is presented for the phase shift approach described in Sect. 3.4.2. Then, this error on the pixel coordinate is propagated through the intrinsic and extrinsic parameters. Finally, the error-propagation chain is used as a design tool.

3.6.1 Uncertainty Related to the Phase Shift Algorithm

In order to perform the error propagation from the noisy images to the phase value associated with a pixel [x1, y1]ᵀ, we only consider the B1(x1, y1) and B2(x1, y1) elements of vector X(x1, y1) in Eq. (3.23). Thus, Eq. (3.23) becomes

[B1(x1, y1), B2(x1, y1)]ᵀ = M′ I(x1, y1)   (3.33)

where M′ is the last two rows of the matrix (MᵀM)⁻¹Mᵀ used in Eq. (3.23). First, assuming that the noise is spatially independent, the joint probability density function p(B1(x1, y1), B2(x1, y1)) must be computed. Finally, the probability density function for the phase error p(Δφ) is obtained by changing the coordinate system from Cartesian to polar coordinates and integrating over the magnitude. Assuming that the noise contaminating the intensity measurement in the images is a zero-mean Gaussian noise, p(B1(x1, y1), B2(x1, y1)) is a zero-mean multivariate Gaussian distribution [27, 28]. Using Eq. (3.33), the covariance matrix ΣB associated with this distribution can be computed as

ΣB = M′ ΣI M′ᵀ   (3.34)

where ΣI is the covariance matrix of the zero-mean Gaussian noise contaminating the intensity measured in the camera images [27, 28].

We give the details for the case θi = 2πi/N when the noise on each intensity measurement is independent with a zero mean and variance σ². One may verify that

ΣB = σ² [ 2/N   0
           0   2/N ].   (3.35)

This is a special case of the work presented in [53] (see also [52]).
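Equation (3.35) can be checked with a small Monte Carlo experiment; the intensity model I_i = A + B1 cos θi + B2 sin θi and all parameter values below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
N, sigma = 4, 2.0                      # number of patterns and noise std (illustrative)
A, B1, B2 = 120.0, 40.0, 25.0          # offset and quadrature amplitudes (illustrative)

theta = 2 * np.pi * np.arange(N) / N
M = np.column_stack([np.ones(N), np.cos(theta), np.sin(theta)])  # model matrix
M_prime = np.linalg.pinv(M)[1:, :]     # last two rows of (M^T M)^(-1) M^T, as in Eq. (3.33)

trials = 200_000
I = (M @ np.array([A, B1, B2]))[:, None] + rng.normal(0, sigma, (N, trials))
B_hat = M_prime @ I                    # (B1, B2) estimates for every noisy trial

Sigma_B = np.cov(B_hat)                # empirical covariance of the estimates
print(np.round(Sigma_B, 2))           # ~ sigma^2 * [[2/N, 0], [0, 2/N]] = [[2, 0], [0, 2]]
```

The empirical covariance converges to the diagonal matrix of Eq. (3.35) as the number of trials grows.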

Henceforth, the following notation will be used: quantities obtained from measurement will use a hat symbol to differentiate them from the unknown real quantities. As an example, B(x1, y1) is the real unknown value while B̂(x1, y1) is the value computed from the noisy images. The probability density function is

p(B̂1(x1, y1), B̂2(x1, y1)) = N/(4πσ²) e^(−γ(x1, y1))   (3.36)

where

γ(x1, y1) = N((B1(x1, y1) − B̂1(x1, y1))² + (B2(x1, y1) − B̂2(x1, y1))²) / (4σ²).   (3.37)

Now changing to a polar coordinate system using B̂1 = r cos(φ + Δφ) and B̂2 = r sin(φ + Δφ) and B1 = B cos φ and B2 = B sin φ and integrating over r in the domain [0, ∞) we obtain the probability density function

p(Δφ) = e^(−B²N/(4σ²)) (2σ + e^(B²N cos²(Δφ)/(4σ²)) B√(Nπ) cos(Δφ)(1 + erf(B√N cos(Δφ)/(2σ)))) / (4πσ)   (3.38)

which is independent of φ and where

erf(z) = (2/√π) ∫₀ᶻ e^(−t²) dt.   (3.39)
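As a sanity check, the density of Eq. (3.38) can be evaluated numerically: it should integrate to one over [−π, π], and its variance should approach the Gaussian approximation discussed next. The parameter values are illustrative assumptions:

```python
import numpy as np
from math import erf

B, N, sigma = 30.0, 4, 2.0            # amplitude, pattern count, noise std (illustrative)
erf_v = np.vectorize(erf)

def p_dphi(dphi):
    """Phase-error density of Eq. (3.38)."""
    rho = B * np.sqrt(N) * np.cos(dphi) / (2 * sigma)
    bulk = np.exp(B**2 * N * np.cos(dphi)**2 / (4 * sigma**2)) \
        * B * np.sqrt(N * np.pi) * np.cos(dphi) * (1 + erf_v(rho))
    return np.exp(-B**2 * N / (4 * sigma**2)) * (2 * sigma + bulk) / (4 * np.pi * sigma)

dphi = np.linspace(-np.pi, np.pi, 20001)
step = dphi[1] - dphi[0]
density = p_dphi(dphi)
total = np.sum(density) * step                 # ~1 (proper density)
var = np.sum(dphi**2 * density) * step         # ~2 sigma^2 / (B^2 N) at high SNR
print(total, var, 2 * sigma**2 / (B**2 * N))
```

With a large amplitude-to-noise ratio the numerically computed variance is essentially indistinguishable from the closed-form approximation.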

When σ is small and B is large, p(Δφ) can be approximated by the probability density function of a zero-mean Gaussian distribution of variance 2σ²/(B²N) (see [53] for details). Assuming that the spatial period of the pattern is ω, the positional error on x2 is a zero-mean Gaussian noise with variance

σx2² = ω²σ² / (2π²B²N).   (3.40)

The uncertainty interval can be reduced by reducing either the spatial period of the pattern or the variance σ², or by increasing either the number of patterns used or the intensity ratio (i.e. B) of the projection system. Note that even if B is unknown, it can be estimated by projecting a white and a black image; however, this is only valid when the projector and camera are in focus (see Sect. 3.8).
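The design levers in Eq. (3.40) can be made concrete with a few hypothetical numbers; the pattern period, amplitude, pattern count, and noise level below are assumptions chosen only for illustration:

```python
import numpy as np

def sigma_x2_sq(omega, sigma, B, N):
    """Variance of the projector-coordinate error, Eq. (3.40)."""
    return omega**2 * sigma**2 / (2 * np.pi**2 * B**2 * N)

base = sigma_x2_sq(omega=32.0, sigma=2.0, B=40.0, N=4)      # illustrative baseline
print(f"std dev: {np.sqrt(base):.4f} projector pixels")

# Each lever applied alone, reported as a ratio to the baseline variance:
print(sigma_x2_sq(16.0, 2.0, 40.0, 4) / base)   # halve the period omega -> 0.25
print(sigma_x2_sq(32.0, 2.0, 40.0, 8) / base)   # double the pattern count N -> 0.5
print(sigma_x2_sq(32.0, 2.0, 80.0, 4) / base)   # double the amplitude B -> 0.25
```

Halving ω or doubling B quarters the variance, while doubling N only halves it, which is why period reduction (at the cost of phase-unwrapping ambiguity) is such an attractive lever.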

3.6.2 Uncertainty Related to Intrinsic Parameters

When performing triangulation using Eq. (3.31), the pixel coordinates of the camera are known and noise is only present on the measured pixel coordinates of the projector. Thus, the intrinsic parameters of the camera do not directly influence the uncertainty on the position of the 3D point. The error propagation from the pixel coordinates to the normalized view coordinates for the projector can easily be computed. The transformation in Eq. (3.12) is linear and the variance associated with x2 is

σx̄2² = sx2² σx2² / d²   (3.41)

where sx2 and d are intrinsic parameters of the projector and σx2² is computed using Eq. (3.40). According to Eq. (3.41), as the distance d increases, or sx2 is reduced, the variance will be reduced. However, in a real system, the resolution may not be limited by the pixel size but by the optical resolution (see Sect. 3.8), and increasing d may be the only effective way of reducing the uncertainty. As will be explained in

Fig. 3.9 The reconstruction volume of two systems where only the focal length of the projector is different (50 mm at left and 100 mm at right). The red lines define the plane in focus in the camera. Figure courtesy of NRC Canada


Sect. 3.8, when d is increased while keeping the standoff distance constant, the focal length must be increased; otherwise, the image will be blurred. Note that when d is increased, the field of view is also reduced. The intersection of the fields of view of the camera and projector defines the reconstruction volume of a system. Figure 3.9 illustrates the reconstruction volumes of two systems that differ only by the focal length of the projector (i.e. the value of d also varies). Thus, there is a trade-off between the size of the reconstruction volume and the magnitude of the uncertainty.
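A quick sketch of Eq. (3.41) with hypothetical intrinsics illustrates the trade-off: doubling d quarters the variance of the normalized coordinate, at the cost of a narrower field of view. All numeric values are assumptions:

```python
def normalized_variance(pixel_var, s_x2, d):
    """Propagate the projector pixel-coordinate variance through Eq. (3.41)."""
    return s_x2**2 * pixel_var / d**2

pixel_var = 0.01            # sigma_x2^2 from Eq. (3.40), illustrative
s_x2 = 0.01                 # projector pixel size in mm (assumption)
for d in (50.0, 100.0):     # the two focal settings of Fig. 3.9, in mm (assumption)
    print(d, normalized_variance(pixel_var, s_x2, d))
```

The 50 mm configuration yields four times the variance of the 100 mm one, mirroring the reconstruction-volume discussion above.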

3.6.3 Uncertainty Related to Extrinsic Parameters

Because the transformation of Eq. (3.31) from normalized image coordinates to 3D points is non-linear, we introduce a first-order approximation using a Taylor expansion. The solution close to x̂2 can be approximated by

Q(x1, y1)[x̂2 + Δx2] ≈ Q(x1, y1)[x̂2] + dQ(x1, y1)/dx2 |x̂2 Δx2   (3.42)

where

dQ(x1, y1)/dx2 |x̂2 = [x1, y1, 1]ᵀ (r33Tx + r13Tz − r31Tx x1 + r11Tz x1 − r32Tx y1 + r12Tz y1) / (r13 + r11x1 − r33x̂2 − r31x1x̂2 + r12y1 − r32x̂2y1)².   (3.43)

Since a first-order approximation is used, the covariance matrix associated with a 3D point can be computed similarly to ΣB in Eq. (3.34) [27, 28]. Explicitly, the covariance matrix associated with a 3D point is

Σ = [ x1²   x1y1  x1
      x1y1  y1²   y1
      x1    y1    1 ] × (r33Tx + r13Tz − r31Tx x1 + r11Tz x1 − r32Tx y1 + r12Tz y1)² / (r13 + r11x1 − r33x̂2 − r31x1x̂2 + r12y1 − r32x̂2y1)⁴ σx̄2²   (3.44)

where σx̄2² is computed using Eq. (3.40) and Eq. (3.41).

The covariance matrix can be used to compute a confidence region, which is the multi-variable equivalent of the confidence interval.4 The uncertainty over the range

4A confidence interval is an interval within which we are (1 − α)100 % confident that a point measured under the presence of Gaussian noise (of known mean and variance) will lie (we use α = 0.05).