3D Imaging, Analysis and Applications. Springer-Verlag, London (2012)

8 3D Face Recognition

A. Mian and N. Pears

8.10.3.1 Other Expression Modeling Approaches

Another example of facial expression modeling is the work of Al-Osaimi et al. [5]. In this approach, facial expression deformation patterns are first learned in a linear PCA subspace called an Expression Deformation Model. The model is trained on part of the FRGC v2 data, augmented by over 3000 facial scans under different facial expressions. More specifically, the PCA subspace is built from shape residues between pairs of scans of the same face, one under a neutral expression and the other under a non-neutral expression. Before the residue is calculated, the two scans are first registered using the ICP algorithm [9] applied to the semi-rigid regions of the face (i.e. the forehead and nose). Since the PCA subspace is computed from the residues, it models only the facial expression deformations, as opposed to the human face itself.
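As an illustration, the subspace construction just described might be sketched as follows. This is a minimal sketch, not the authors' implementation: the function name is hypothetical, and it assumes each scan has already been ICP-registered and resampled so that scans of the same face are flattened vectors of corresponding points.

```python
import numpy as np

def build_expression_subspace(neutral_scans, expressive_scans, n_components=20):
    """Build a PCA subspace of expression deformations from shape residues.

    Each scan is a flattened vector of registered, corresponding 3D points
    (or depth values). Residues are differences between a non-neutral and a
    neutral scan of the same face, so the subspace captures expression
    deformation rather than identity.
    """
    residues = np.array([e - n for n, e in zip(neutral_scans, expressive_scans)])
    mean = residues.mean(axis=0)
    centered = residues - mean
    # PCA via SVD of the centered residue matrix; rows of vt are the
    # principal directions, sorted by decreasing singular value.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    E = vt[:n_components].T  # columns span the expression subspace
    return E, mean
```

The columns of E are orthonormal, so projections onto the subspace reduce to simple matrix products, as used in Eq. (8.46).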

During recognition, the linear model is used to morph out the expression deformations from unseen faces, leaving only interpersonal disparities. The shape residues between the probe and every gallery scan are calculated. Only the residue of the correct identity will consist purely of expression deformations; residues against other gallery faces will additionally contain interpersonal shape differences. A shape residue r is projected onto the PCA subspace E as follows:

$\hat{r} = E\,(E^T E)^{-1} E^T r. \qquad (8.46)$

If the gallery face from which the residue was calculated is the same as the probe, then the error between the original and reconstructed shape residues

$\varepsilon = (r - \hat{r})^T (r - \hat{r}) \qquad (8.47)$

will be small; otherwise it will be large. The probe is assigned the identity of the gallery face corresponding to the minimum value of ε. In practice, the projection is modified to avoid border effects and outliers in the data. Moreover, the projection is restricted to those dimensions of the subspace E in which realistic expression residues can exist. Large differences between $r$ and $\hat{r}$ are truncated to a fixed value to avoid the effects of hair and other outliers. Note that it is not necessary for one of the two facial expressions used to compute the residue to be neutral: one non-neutral facial expression can be morphed to another using the same PCA model. Figure 8.16 shows two example faces morphed from one non-neutral expression to another. Using the FRGC v2 dataset, verification rates at 0.001 FAR were 98.35 % and 97.73 % for face scans under neutral and non-neutral expressions respectively.
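In code, Eqs. (8.46) and (8.47) amount to an orthogonal projection onto the column space of E followeded by a reconstruction-error test. The following is a hedged sketch with hypothetical function names, omitting the truncation and border handling described above:

```python
import numpy as np

def reconstruction_error(r, E):
    """Project residue r onto the expression subspace E (Eq. 8.46) and
    return the squared norm of the reconstruction error (Eq. 8.47)."""
    r_hat = E @ np.linalg.solve(E.T @ E, E.T @ r)  # r_hat = E (E^T E)^{-1} E^T r
    d = r - r_hat
    return float(d @ d)

def identify(probe, gallery, E):
    """Assign the probe the identity whose residue against the probe is
    best explained by expression deformation alone (minimum epsilon)."""
    errors = [reconstruction_error(probe - g, E) for g in gallery]
    return int(np.argmin(errors))
```

For the correct identity the residue lies (approximately) inside the learned expression subspace, so it reconstructs well and ε is small; for a wrong identity the residue also contains interpersonal shape differences that the expression subspace cannot represent, so ε is large.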

Fig. 8.16 Left: query facial expression. Centre: target facial expression. Right: the result of morphing the left 3D image in order to match the facial expression of the central 3D image. Figure courtesy of [5]

8.11 Research Challenges

After a decade of extensive research in the area of 3D face recognition, new representations and techniques that can be applied to this problem are continually being released in the literature. A number of challenges still remain to be surmounted. These challenges have been discussed in the survey of Bowyer et al. [14] and include improved 3D sensing technology as a foremost requirement. Speed, accuracy, flexibility in the ambient scan acquisition conditions and imperceptibility of the acquisition process are all important for practical applications. Facial expressions remain a challenge, as existing techniques lose important features in the process of removing facial expressions or extracting expression-invariant features. Although relatively small pose variations can be handled by current 3D face recognition systems, large pose variations often cannot, due to significant self-occlusion. In systems that employ pose normalization, this will affect the accuracy of pose correction and, for any recognition system, it will result in large areas of missing data. (For profile views, this may be mitigated by the fact that the symmetrical face contains redundant information for discrimination.) Additional problems with capturing 3D data from a single viewpoint include noise at the edges of the scan and the inability to reliably define local regions (e.g. for local surface feature extraction), because these become eroded if they are positioned near the edges of the scan. Dark and specular regions of the face offer further challenges to the acquisition and subsequent preprocessing steps.

In addition to sensor technology improving, we expect to see improved 3D face datasets, with larger numbers of subjects and larger number of captures per subject, covering a very wide range of pose variation, expression variation and occlusions caused by hair, hands and common accessories (e.g. spectacles, hats, scarves and phone). We expect to see publicly available datasets that start to combine pose variation, expression variation, and occlusion thus providing an even greater challenge to 3D face recognition algorithms.

Passive techniques are advancing rapidly; for example, some approaches may no longer explicitly reconstruct the facial surface but directly extract features from multi-view stereo images. One problem with passive stereo at current image resolutions is that there is insufficient texture at a large enough scale to perform correspondence matching. As imaging technology improves, we will be able to see the fine detail of skin pores and other small-scale skin surface textures, which may provide enough distinctive texture for matching. Of course, with the much increased input data size associated with high-resolution images, a commensurate increase in computational power is required, and that depends on the complexity of the state-of-the-art feature extraction and dense matching algorithms.

3D video cameras are also appearing on the market, opening up yet another dimension for video-based 3D face recognition. Current 3D cameras usually have one or more drawbacks, which may include low resolution, offline 3D scene reconstruction, noisy reconstructions or high cost. However, it is likely that the technology will improve and the cost will decrease with time, particularly if the cameras are used in mass markets, such as computer games. A prime example of this is Microsoft’s Kinect camera, released in 2010.

8.12 Concluding Remarks

In this chapter we presented the basic concepts behind 3D face recognition algorithms. In particular we looked at the individual stages in a typical 3D face scan processing pipeline that takes raw face scans and is able to make verification or identification decisions. We presented a wide range of literature relating to all of these stages. We explained several well-established 3D face recognition techniques (ICP, PCA, LDA) with a more tutorial approach and clear implementation steps in order to familiarize the reader with the area of 3D face recognition. We also presented a selection of more advanced methods that have shown promising recognition performance on benchmark datasets.

8.13 Further Reading

The interested reader is encouraged to refer to the original publications of the methods described in this chapter, and their references, for more details concerning the algorithms discussed here. There are several existing 3D face recognition surveys which give a good overview of the field, including those by Bowyer et al. [14] and Abate et al. [1]. Given that a range image can in many ways be treated like a standard 2D image, a good background in 2D face recognition is desirable. To this end, we recommend starting with the wide-ranging survey of Zhao et al. [96], although this relates to work prior to 2003. No doubt further surveys on 3D, 2D and 3D/2D face recognition will be published periodically in the future. In addition, the website www.face-rec.org [29] provides a range of information on all common face recognition modalities. Several of the chapters in this book are highly useful to the 3D face recognition researcher, particularly Chaps. 2–7, which include detailed discussions on 3D image acquisition, surface representations, 3D features, shape registration and shape matching. For good general texts on pattern recognition and machine learning, we recommend the texts of Duda et al. [28] and Bishop [10].

8.14 Questions

1. What advantages can 3D face recognition systems have over standard 2D face recognition systems?

2. How can a 3D sensor be used such that the 3D shape information that it generates aids 2D-based face recognition? Discuss this with respect to the probe images being 3D and the gallery 2D, and vice versa.

3. What are the main advantages and disadvantages of feature-based 3D face recognition approaches when compared to holistic approaches?

4. Outline the main processing stages of a 3D face recognition system and give a brief description of the primary function of each stage. Indicate the circumstances under which some of the stages may be omitted.

5. Briefly outline the main steps of the ICP algorithm and describe its advantages and limitations in the context of 3D face recognition.

6. Provide a short proof of the relationship between eigenvalues and singular values given in (8.20).

7. Compare and contrast PCA and LDA in the context of 3D face recognition.

8.15 Exercises

In order to do these exercises you will need access to the FRGC v2 3D face dataset.

1. Build (or download) some utilities to load and display the 3D face scans stored in the ABS format files of the FRGC dataset.

2. Implement the cropping, spike removal and hole filling preprocessing steps as described in Sect. 8.5. Apply them to a small selection of scans in the FRGC v2 data and check that they operate as expected.

3. Implement an ICP-based face verification system, as described in Sect. 8.6, and use the pre-processed scans as input.

4. Implement a PCA-based 3D face recognition system, as described in Sect. 8.7, using raw depth data only, and compare your results with the ICP-based system.

5. Use a facial mask to include only the upper half of the 3D face scan in training and testing data. Rerun your experiments for ICP and PCA and compare with your previous results, particularly with a view to those scans that have non-neutral facial expressions.

References

1. Abate, A.F., Nappi, M., Riccio, D., Sabatino, G.: 2D and 3D face recognition: a survey. Pattern Recognit. Lett. 28, 1885–1906 (2007)


2. Achermann, B., Jiang, X., Bunke, H.: Face recognition using range images. In: Int. Conference on Virtual Systems and MultiMedia, pp. 129–136 (1997)

3. Adini, Y., Moses, Y., Ullman, S.: Face recognition: the problem of compensating for changes in illumination direction. IEEE Trans. Pattern Anal. Mach. Intell. 19(7), 721–732 (1997)

4. Al-Osaimi, F., Bennamoun, M., Mian, A.: Integration of local and global geometrical cues for 3D face recognition. Pattern Recognit. 41(3), 1030–1040 (2008)

5. Al-Osaimi, F., Bennamoun, M., Mian, A.: An expression deformation approach to non-rigid 3D face recognition. Int. J. Comput. Vis. 81(3), 302–316 (2009)

6. Angel, E.: Interactive Computer Graphics. Addison Wesley, Reading (2009)

7. Arun, K.S., Huang, T.S., Blostein, S.D.: Least-squares fitting of two 3-D point sets. IEEE Trans. Pattern Anal. Mach. Intell. 9(5), 698–700 (1987)

8. Belhumeur, P., Hespanha, J., Kriegman, D.: Eigenfaces vs. fisherfaces: recognition using class specific linear projection. IEEE Trans. Pattern Anal. Mach. Intell. 19, 711–720 (1997)

9. Besl, P., McKay, N.D.: A method for registration of 3-D shapes. IEEE Trans. Pattern Anal. Mach. Intell. 14(2), 239–256 (1992)

10. Bishop, C.M.: Pattern Recognition and Machine Learning. Springer, Berlin (2006)

11. Blanz, V., Vetter, T.: Face recognition based on fitting a 3D morphable model. IEEE Trans. Pattern Anal. Mach. Intell. 25, 1063–1074 (2003)

12. Blanz, V., Scherbaum, K., Seidel, H.: Fitting a morphable model to 3D scans of faces. In: IEEE Int. Conference on Computer Vision, pp. 1–8 (2007)

13. The Bosphorus 3D face database: http://bosphorus.ee.boun.edu.tr/. Accessed 5th July 2011

14. Bowyer, K., Chang, K., Flynn, P.: A survey of approaches and challenges in 3D and multimodal 3D + 2D face recognition. Comput. Vis. Image Underst. 101, 1–15 (2006)

15. Bronstein, A., Bronstein, M., Kimmel, R.: Three-dimensional face recognition. Int. J. Comput. Vis. 64(1), 5–30 (2005)

16. CASIA-3D FaceV1: http://biometrics.idealtest.org. Accessed 5th July 2011

17. Chang, K., Bowyer, K., Flynn, P.: Face recognition using 2D and 3D facial data. In: Multimodal User Authentication Workshop, pp. 25–32 (2003)

18. Chang, K., Bowyer, K., Flynn, P.: An evaluation of multimodal 2D+3D face biometrics. IEEE Trans. Pattern Anal. Mach. Intell. 27(4), 619–624 (2005)

19. Chang, K., Bowyer, K., Flynn, P.: Multiple nose region matching for 3D face recognition under varying facial expression. IEEE Trans. Pattern Anal. Mach. Intell. 28(10), 1695–1700 (2006)

20. Chua, C., Jarvis, R.: Point signatures: a new representation for 3D object recognition. Int. J. Comput. Vis. 25(1), 63–85 (1997)

21. Chua, C., Han, F., Ho, Y.: 3D human face recognition using point signatures. In: Proc. IEEE Int. Workshop Analysis and Modeling of Faces and Gestures, pp. 233–238 (2000)

22. Colombo, A., Cusano, C., Schettini, R.: 3D face detection using curvature analysis. Pattern Recognit. 39(3), 444–455 (2006)

23. Creusot, C., Pears, N.E., Austin, J.: 3D face landmark labelling. In: Proc. 1st ACM Workshop on 3D Object Retrieval (3DOR’10), pp. 27–32 (2010)

24. Creusot, C., Pears, N.E., Austin, J.: Automatic keypoint detection on 3D faces using a dictionary of local shapes. In: The First Joint 3DIM/3DPVT Conference on 3D Imaging, Modeling, Processing, Visualization and Transmission, pp. 16–19 (2011)

25. Creusot, C.: Automatic landmarking for non-cooperative 3D face recognition. Ph.D. thesis, Department of Computer Science, University of York, UK (2011)

26. DeCarlo, D., Metaxas, D.: Optical flow constraints on deformable models with applications to face tracking. Int. J. Comput. Vis. 38(2), 99–127 (2000)

27. D’Errico, J.: Surface Fitting Using Gridfit. MATLAB Central File Exchange (2006)

28. Duda, R.O., Hart, P.E., Stork, D.G.: Pattern Classification, 2nd edn. Wiley-Interscience, New York (2001)

29. Face recognition homepage: http://www.face-rec.org. Accessed 24th August 2011

30. Faltemier, T.C., Bowyer, K.W., Flynn, P.J.: Using a multi-instance enrollment representation to improve 3D face recognition. In: 1st IEEE Int. Conf. on Biometrics: Theory, Applications, and Systems (BTAS’07) (2007)


31. Faltemier, T., Bowyer, K., Flynn, P.: A region ensemble for 3-D face recognition. IEEE Trans. Inf. Forensics Secur. 3(1), 62–73 (2008)

32. Farkas, L.: Anthropometry of the Head and Face. Raven Press, New York (1994)

33. Fisher, N., Lee, A.: Correlation coefficients for random variables on a unit sphere or hypersphere. Biometrika 73(1), 159–164 (1986)

34. Fitzgibbon, A.W.: Robust registration of 2D and 3D point sets. Image Vis. Comput. 21, 1145–1153 (2003)

35. Fleishman, S., Drori, I., Cohen-Or, D.: Bilateral mesh denoising. ACM Trans. Graph. 22(3), 950–953 (2003)

36. Gao, H., Davis, J.W.: Why direct LDA is not equivalent to LDA. Pattern Recognit. 39, 1002–1006 (2006)

37. Garland, M., Heckbert, P.: Surface simplification using quadric error metrics. In: Proceedings of SIGGRAPH (1997)

38. Georghiades, A., Belhumeur, P., Kriegman, D.: From few to many: illumination cone models for face recognition under variable lighting and pose. IEEE Trans. Pattern Anal. Mach. Intell. 23(6), 643–660 (2001)

39. Gokberk, B., Irfanoglu, M., Akarun, L.: 3D shape-based face representation and feature extraction for face recognition. Image Vis. Comput. 24(8), 857–869 (2006)

40. Gordon, G.: Face recognition based on depth and curvature features. In: IEEE Computer Society Conference on CVPR, pp. 808–810 (1992)

41. Gupta, S., Markey, M., Bovik, A.: Anthropometric 3D face recognition. Int. J. Comput. Vis. (2010). doi:10.1007/s11263-010-0360-8

42. Heckbert, P., Garland, M.: Survey of polygonal surface simplification algorithms. In: SIGGRAPH Course Notes: Multiresolution Surface Modeling (1997)

43. Heseltine, T., Pears, N.E., Austin, J.: Three-dimensional face recognition: a fishersurface approach. In: Proc. Int. Conf. Image Analysis and Recognition, vol. II, pp. 684–691 (2004)

44. Heseltine, T., Pears, N.E., Austin, J.: Three-dimensional face recognition: an eigensurface approach. In: Proc. IEEE Int. Conf. Image Processing, pp. 1421–1424 (2004)

45. Heseltine, T., Pears, N.E., Austin, J.: Three-dimensional face recognition using combinations of surface feature map subspace components. Image Vis. Comput. 26(3), 382–396 (2008)

46. Hesher, C., Srivastava, A., Erlebacher, G.: A novel technique for face recognition using range imaging. In: Int. Symposium on Signal Processing and Its Applications, pp. 201–204 (2003)

47. Horn, B.: Robot Vision. MIT Press, Cambridge (1986). Chap. 16

48. Jain, A., Ross, A., Prabhakar, S.: An introduction to biometric recognition. IEEE Trans. Circuits Syst. Video Technol. 14(1), 4–20 (2004)

49. Johnson, A., Hebert, M.: Using spin images for efficient object recognition in cluttered 3D scenes. IEEE Trans. Pattern Anal. Mach. Intell. 21(5), 674–686 (1999)

50. Kakadiaris, I., Passalis, G., Theoharis, T., Toderici, G., Konstantinidis, I., Murtuza, N.: Multimodal face recognition: combination of geometry with physiological information. In: Proc. IEEE Int. Conf. on Computer Vision and Pattern Recognition, pp. 1022–1029 (2005)

51. Kakadiaris, I., Passalis, G., Toderici, G., Murtuza, M., Lu, Y., Karampatziakis, N., Theoharis, T.: Three-dimensional face recognition in the presence of facial expressions: an annotated deformable model approach. IEEE Trans. Pattern Anal. Mach. Intell. 29(4), 640–649 (2007)

52. Kimmel, R., Sethian, J.: Computing geodesic paths on manifolds. Proc. Natl. Acad. Sci. USA 95, 8431–8435 (1998)

53. Klassen, E., Srivastava, A., Mio, W., Joshi, S.: Analysis of planar shapes using geodesic paths on shape spaces. IEEE Trans. Pattern Anal. Mach. Intell. 26(3), 372–383 (2004)

54. Koenderink, J.J., van Doorn, A.J.: Surface shape and curvature scales. Image Vis. Comput. 10(8), 557–564 (1992)

55. Lee, J., Milios, E.: Matching range images of human faces. In: Int. Conference on Computer Vision, pp. 722–726 (1990)


56. Lee, Y., Shim, J.: Curvature-based human face recognition using depth-weighted Hausdorff distance. In: Int. Conference on Image Processing, pp. 1429–1432 (2004)

57. Lo, T., Siebert, J.P.: Local feature extraction and matching on range images: 2.5D SIFT. Comput. Vis. Image Underst. 113(12), 1235–1250 (2009)

58. Lu, X., Jain, A.K.: Deformation modeling for robust 3D face matching. In: Proc. IEEE Int. Conf. on Computer Vision and Pattern Recognition, vol. 2, pp. 1377–1383 (2006)

59. Lu, X., Jain, A., Colbry, D.: Matching 2.5D scans to 3D models. IEEE Trans. Pattern Anal. Mach. Intell. 28(1), 31–43 (2006)

60. Mandal, C., Qin, H., Vemuri, B.: A novel FEM-based dynamic framework for subdivision surfaces. Comput. Aided Des. 32(8–9), 479–497 (2000)

61. Maurer, T., Guigonis, D., Maslov, I., Pesenti, B., Tsaregorodtsev, A., West, D., Medioni, G.: Performance of Geometrix ActiveID 3D face recognition engine on the FRGC data. In: IEEE Workshop on Face Recognition Grand Challenge Experiments (2005)

62. Metaxas, D., Kakadiaris, I.: Elastically adaptive deformable models. IEEE Trans. Pattern Anal. Mach. Intell. 24(10), 1310–1321 (2002)

63. Mian, A.: http://www.csse.uwa.edu.au/~ajmal/code.html. Accessed 6th July 2011

64. Mian, A., Bennamoun, M., Owens, R.: An efficient multimodal 2D–3D hybrid approach to automatic face recognition. IEEE Trans. Pattern Anal. Mach. Intell. 29(11), 1927–1943 (2007)

65. Mian, A., Bennamoun, M., Owens, R.: Keypoint detection and local feature matching for textured 3D face recognition. Int. J. Comput. Vis. 79(1), 1–12 (2008)

66. Mian, A., Bennamoun, M., Owens, R.: On the repeatability and quality of keypoints for local feature-based 3D object retrieval from cluttered scenes. Int. J. Comput. Vis. (2010)

67. Medioni, G., Waupotitsch, R.: Face recognition and modeling in 3D. In: IEEE Int. Workshop Analysis and Modeling of Faces and Gestures, pp. 232–233 (2003)

68. MeshLab: Visual Computing Lab, ISTI-CNR. http://meshlab.sourceforge.net/. Accessed 14th June 2010

69. Padia, C., Pears, N.E.: A review and characterization of ICP-based symmetry plane localisation in 3D face data. Technical Report YCS 463, Department of Computer Science, University of York (2011)

70. Pan, G., Han, S., Wu, Z., Wang, Y.: 3D face recognition using mapped depth images. In: IEEE Workshop on Face Recognition Grand Challenge Experiments (2005)

71. Passalis, G., Kakadiaris, I.A., Theoharis, T., Toderici, G., Murtuza, N.: Evaluation of the UR3D algorithm using the FRGC v2 data set. In: Proc. IEEE Workshop on Face Recognition Grand Challenge Experiments (2005)

72. Pears, N.E., Heseltine, T., Romero, M.: From 3D point clouds to pose normalised depth maps. Int. J. Comput. Vis. 89(2), 152–176 (2010). Special Issue on 3D Object Retrieval

73. Phillips, P., Flynn, P., Scruggs, T., Bowyer, K., Chang, J., Hoffman, K., Marques, J., Min, J., Worek, W.: Overview of the face recognition grand challenge. In: IEEE CVPR, pp. 947–954 (2005)

74. Piegl, L., Tiller, W.: The NURBS Book, 2nd edn. Monographs in Visual Communication. Springer, Berlin (1997)

75. Portilla, J., Simoncelli, E.: A parametric texture model based on joint statistics of complex wavelet coefficients. Int. J. Comput. Vis. 40, 49–71 (2000)

76. Queirolo, C.Q., Silva, L., Bellon, O.R.P., Segundo, M.P.: 3D face recognition using simulated annealing and the surface interpenetration measure. IEEE Trans. Pattern Anal. Mach. Intell. 32(2), 206–219 (2010)

77. Rusinkiewicz, S., Levoy, M.: Efficient variants of the ICP algorithm. In: Int. Conf. on 3D Digital Imaging and Modeling, pp. 145–152 (2001)

78. Samir, C., Srivastava, A., Daoudi, M.: Three-dimensional face recognition using shapes of facial curves. IEEE Trans. Pattern Anal. Mach. Intell. 28(11), 1858–1863 (2006)

79. Savran, A., et al.: Bosphorus database for 3D face analysis. In: Biometrics and Identity Management. Lecture Notes in Computer Science, vol. 5372, pp. 47–56 (2008)

80. Sethian, J.: A review of the theory, algorithms, and applications of level set methods for propagating surfaces. Acta Numer., pp. 309–395 (1996)


81. Silva, L., Bellon, O.R.P., Boyer, K.L.: Precision range image registration using a robust surface interpenetration measure and enhanced genetic algorithms. IEEE Trans. Pattern Anal. Mach. Intell. 27(5), 762–776 (2005)

82. Sirovich, L., Kirby, M.: Low-dimensional procedure for the characterization of human faces. J. Opt. Soc. Am. A 4, 519–524 (1987)

83. Spira, A., Kimmel, R.: An efficient solution to the eikonal equation on parametric manifolds. Interfaces Free Bound. 6(3), 315–327 (2004)

84. Swiss Ranger. Mesa Imaging. http://www.mesa-imaging.ch/. Accessed 10th June 2010

85. Tanaka, H., Ikeda, M., Chiaki, H.: Curvature-based face surface recognition using spherical correlation. Principal directions for curved object recognition. In: Int. Conference on Automated Face and Gesture Recognition, pp. 372–377 (1998)

86. Texas 3D face recognition database: http://live.ece.utexas.edu/research/texas3dfr/. Accessed 5th July 2011

87. Turk, M., Pentland, A.: Eigenfaces for recognition. J. Cogn. Neurosci. 3, 71–86 (1991)

88. Viola, P., Jones, M.: Robust real-time face detection. Int. J. Comput. Vis. 57(2), 137–154 (2004)

89. Xianfang, S., Rosin, P., Martin, R., Langbein, F.: Noise analysis and synthesis for 3D laser depth scanners. Graph. Models 71(2), 34–48 (2009)

90. Xu, C., Wang, Y., Tan, T., Quan, L.: Automatic 3D face recognition combining global geometric features with local shape variation information. In: Proc. IEEE Int. Conf. Pattern Recognition, pp. 308–313 (2004)

91. Yan, P., Bowyer, K.W.: A fast algorithm for ICP-based 3D shape biometrics. Comput. Vis. Image Underst. 107(3), 195–202 (2007)

92. Yang, M., Kriegman, D., Ahuja, N.: Detecting faces in images: a survey. IEEE Trans. Pattern Anal. Mach. Intell. 24(1), 34–58 (2002)

93. Yin, L., Wei, X., Sun, Y., Wang, J., Rosato, M.J.: A 3D facial expression database for facial behavior research. In: 7th Int. Conf. on Automatic Face and Gesture Recognition (FGR06), pp. 211–216 (2006)

94. Yu, H., Yang, J.: A direct LDA algorithm for high-dimensional data—with application to face recognition. Pattern Recognit. 34(10), 2067–2069 (2001)

95. Zhang, L., Snavely, N., Curless, B., Seitz, S.: Spacetime faces: high resolution capture for modeling and animation. ACM Trans. Graph. 23(3), 548–558 (2004)

96. Zhao, W., Chellappa, R., Phillips, P., Rosenfeld, A.: Face recognition: a literature survey. ACM Comput. Surv. 35, 399–458 (2003)