3D Imaging, Analysis and Applications. Springer-Verlag London (2012).


M.-A. Drouin and J.-A. Beraldin

(known from a camera calibration) using Eq. (3.12). Considering the sheet-of-light projector model described above, the 3D scene point Qw is back-projected, for the given value of α, to an unknown normalized coordinate [0, y1]^T.

Clearly there are three unknowns here: the depth associated with the back-projected camera ray to the 3D scene point, and the pair of parameters that describe the position of the 3D scene point within the projected sheet of light. If we can form an independent equation for each of the coordinates Xw, Yw, Zw of Qw, then we can solve for that point's unknown 3D scene position.

By rearranging Eq. (3.13) and Eq. (3.14), one may obtain

Qw = Rα [0, λ1 y1, λ1]^T + Tα = Rc^T ([λ2 x2, λ2 y2, λ2]^T − Tc)    (3.15)

where λ1 and λ2 are the ranges (i.e. the distances along the Z-axis) from the 3D point Qw to the laser source and to the camera, respectively. Moreover, Rα and Tα are the parameters describing the orientation and position of the laser plane, and Rc and Tc are the extrinsic parameters of the camera. (Note that these can be simplified to the 3 × 3 identity matrix and the zero 3-vector if the world coordinate system is chosen to coincide with the camera coordinate system.) When Rα and Tα are known, the vector equality on the right of Eq. (3.15) is a system of three equations in the three unknowns λ1, λ2 and y1. These can easily be determined, and the values then substituted into the vector equality on the left of Eq. (3.15) to solve for the unknown Qw.
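The solve described above can be sketched numerically. Substituting u = λ1 y1 makes the right-hand equality of Eq. (3.15) linear in (u, λ1, λ2), so a single 3 × 3 linear solve suffices. The function below is a minimal illustration, assuming NumPy and hypothetical calibration inputs (the function name and argument layout are ours, not from the text):

```python
import numpy as np

def triangulate_stripe_point(R_alpha, T_alpha, R_c, T_c, x2, y2):
    """Solve the right-hand equality of Eq. (3.15), then evaluate the
    left-hand side to recover Qw.

    With u = lambda1 * y1, the equality
        R_alpha [0, u, lambda1]^T + T_alpha
            = lambda2 * R_c^T [x2, y2, 1]^T - R_c^T T_c
    becomes a linear system A [u, lambda1, lambda2]^T = b.
    """
    d = R_c.T @ np.array([x2, y2, 1.0])        # back-projected camera ray direction
    # Unknown coefficients: u multiplies column 2 of R_alpha,
    # lambda1 multiplies column 3, lambda2 multiplies -d.
    A = np.column_stack((R_alpha[:, 1], R_alpha[:, 2], -d))
    b = -T_alpha - R_c.T @ T_c
    u, lam1, lam2 = np.linalg.solve(A, b)
    y1 = u / lam1
    # Left-hand equality of Eq. (3.15) gives the 3D point.
    Q_w = R_alpha @ np.array([0.0, u, lam1]) + T_alpha
    return Q_w, y1
```

For example, with Rα, Rc the identity and the camera displaced along the X-axis, a pixel observation (x2, y2) on the stripe yields the 3D point directly; a near-singular A indicates a camera ray nearly parallel to the laser plane, where triangulation is ill-conditioned.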

For a given α, a 3D point can be computed for each row of the camera image. Thus, in Eq. (3.15), the known y2 and α, together with the value of x2 measured by a peak detector, can be used to compute a 3D point. A range image is obtained by capturing an image of the scene for each value of α. In the next section, we examine scanners that project structured light patterns over an area of the scene.
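The peak detector mentioned above can take many forms. A common simple choice, sketched below under the assumption of a single bright stripe per camera row, is an intensity-weighted centroid around the brightest pixel; the threshold and window width here are illustrative values, not prescribed by the text:

```python
import numpy as np

def detect_stripe_peak(row, threshold=5.0, half_window=2):
    """Sub-pixel stripe position x2 in one camera row.

    Finds the brightest pixel, then refines it with an intensity-weighted
    centroid over a small window. Returns None when no pixel exceeds the
    threshold (no stripe visible in this row).
    """
    i = int(np.argmax(row))
    if row[i] < threshold:
        return None
    lo = max(i - half_window, 0)
    hi = min(i + half_window + 1, len(row))
    w = row[lo:hi].astype(float)
    return float(np.sum(np.arange(lo, hi) * w) / np.sum(w))
```

Alternatives such as a parabolic fit to the log-intensity around the maximum, or a derivative zero-crossing detector, trade robustness against speckle noise for computational cost.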

3.4 Area-Based Structured Light Systems

The stripe scanner presented earlier requires the head of the scanner to be rotated or translated in order to produce a range image (see Fig. 3.3). Other methods project many planes of light simultaneously and use a coding strategy to recover which camera pixel views the light from a given plane. There are many coding strategies that can be used to establish the correspondence [57] and it is this coding that gives the name structured light. The two main categories of coding are spatial coding and temporal coding, although the two can be mixed [29]. In temporal coding, patterns are projected one after the other and an image is captured for each pattern. Matching to a particular projected stripe is done based only on the time sequence of imaged intensity at a particular location in the scanner’s camera. In contrast, spatial coding techniques project just a single pattern, and the greyscale or color pattern within a local neighborhood is used to perform the necessary correspondence matching. Clearly this has a shorter capture time and is generally better