Optics and Lasers in Engineering 45 (2007) 396–404

A unified-calibration method in FTP-based 3D data acquisition for reverse engineering Chunsheng Yu, Qingjin Peng Department of Mechanical and Manufacturing Engineering, University of Manitoba, Winnipeg, Manitoba, Canada, R3T 5V6 Received 28 April 2006; accepted 27 July 2006 Available online 26 September 2006

Abstract

Reverse engineering plays an important role in product design and manufacturing due to the need for the improvement of existing products. Acquisition of 3D data of existing products is one of the most fundamental processes in reverse engineering. This paper proposes a unified-calibration method for Fourier transform profilometry (FTP)-based 3D data acquisition. In current FTP methods, system parameters such as the positions and orientations of the camera and projector and the reference plane location are needed to convert the phase map into 3D coordinates of the object. Generally, it is difficult to measure these parameters directly, so a calibration procedure is required in current FTP methods. In this research, a novel method is proposed for the calibration: only one image is used to calculate all system parameters. The experiments show the method is simple and feasible for FTP-based 3D data acquisition. © 2006 Elsevier Ltd. All rights reserved.

Keywords: Reverse engineering; 3D modeling; Image processing; Fourier transform profilometry

1. Introduction

Product development includes the improvement of existing products. Some existing parts of a product may not have digital models to identify their design parameters for modification and remanufacturing. Reverse engineering provides a useful tool for the design modification of existing products, and the acquisition of 3D data of existing products is a key element of reverse engineering.

One of the 3D data acquisition methods is the use of tracking systems. A tracking system acquires 3D data by positioning a probe on the object and triggering the computer to record the 3D coordinates of the probe. Coordinate measuring machines (CMMs) are robust 3D mechanical trackers; electromagnetic and ultrasonic trackers are also used in some tracking systems. These tracking systems are limited by mechanical or electromagnetic problems in the object size, measurement volume and materials used [1]. Industrial X-ray computed tomography (CT) is another approach to capture 3D data [2]. It scans an object using X-rays and obtains a series of slices of the object, which can then be used to acquire its 3D data. However, the system is generally expensive and needs a large installation space. The laser has been used as an accurate tool for acquiring 3D data of existing objects. There are four methods that use lasers to acquire 3D coordinates [3]: time/light in flight, point laser triangulation [4], laser speckle pattern sectioning and the laser tracking system. Processing is slow when a laser beam is used to capture 3D data because the surface has to be scanned line by line; the system is expensive, and the high-energy laser beam needs to be treated with care [1]. Compared with tracking systems, industrial CT or laser scanners, image-based 3D data acquisition methods provide effective and low-cost tools to acquire 3D data of objects. Shape-from-shading (SFS) is a method that reconstructs the 3D shape of an object from the mapping between shading and surface shape in terms of

*Corresponding author. Tel.: +1 204 474 6843; fax: +1 204 275 7507. E-mail address: [email protected] (Q. Peng).

0143-8166/$ - see front matter © 2006 Elsevier Ltd. All rights reserved. doi:10.1016/j.optlaseng.2006.07.001


the reflectance function [5]. In the SFS method, the reflection model of the object's surface has to be assumed, but real images of the surface do not always follow the assumed model; SFS is therefore somewhat inaccurate and sensitive to noise. The Moiré method needs two gratings, a master grating and a reference grating, from which contour fringes can be generated and resolved by a charge-coupled device (CCD) camera. Using the contour fringes, the 3D shape of an object can be obtained [3]. Moiré methods have phase discrimination problems when the surface is not smooth: errors cannot be avoided when the slope of the surface is greater than a certain value, and the system structure and algorithms of Moiré methods are complex. The photogrammetry method uses two or more images, acquired either by several cameras at the same time or by one camera at different times, to calculate 3D coordinates of objects [6,7]. The key to the method is the correspondence problem: determining the point in one image that corresponds to a given point in another image. Generally, it is hard to solve the correspondence problem on a smooth surface. In the phase-measuring profilometry (PMP) method, a fringe pattern is projected onto the object while the phase of the pattern is varied [8,9]. Three or more deformed fringe-pattern images are captured by a camera, the phase distribution of the object is calculated from these images, and the 3D data of the object are recovered by mapping the phase distribution to height. In the Fourier transform profilometry (FTP) method [10], a Ronchi grating or a sinusoidal grating is projected onto the object surface (Fig. 1). The deformed grating image is captured by a camera, and the 3D shape of the object is obtained by calculating the Fourier transform of the image, filtering in the spatial frequency domain and calculating the inverse Fourier transform.
These image-based methods vary in accuracy and calculation speed, each being suitable for certain types of applications. This research seeks a simple, fast and cost-effective solution for 3D data acquisition in reverse engineering. The FTP method has the advantages of requiring only one (or two) image(s) for full-field analysis and of effectively removing noise and background power fluctuations with a band-limited filter in the frequency plane; therefore the FTP method is selected in this research for 3D data acquisition. The FTP method yields a phase map that is not scaled to provide 3D data of the objects, so it is necessary to convert the phase map into 3D coordinates of the object surface. The conversion depends on the system parameters, including the positions and orientations of the camera and projector and the reference plane location. In most cases, these parameters are difficult to measure directly, so a calibration procedure is needed in the FTP method to determine them [11–14]. The calibration procedure includes a height calibration (Z-axis) and a plane calibration (X- and Y-axes). A series of


Fig. 1. (a) An image without grating; (b) An image with deformed grating; (c) 3D plot of the reconstructed object.

images is taken at different locations of the calibration pattern, and the mass of image data has to be processed; the calibration procedure is therefore time-consuming. In this paper, a unified calibration method for FTP-based 3D data acquisition is proposed. Only one image of a specially designed pattern is required to calibrate the system, and the height calibration and the plane calibration are unified in one step. Experimental results show that this method is a simple and fast way to calibrate the FTP-based 3D data acquisition system. This paper is organized as follows: Section 2 introduces the FTP method and the system parameters obtained from the calibration using the FTP method. In Section 3, the image-based parameter measurement method is discussed and the principle of the method is described. The hardware of the system and the process of the FTP-based 3D data acquisition method are discussed in Section 4. Finally, Section 5 gives the experimental results and a discussion of the proposed method.

2. The FTP method and 3D coordinates calculation

2.1. The object's 3D shape and phase

Fig. 2 shows the geometry of the projection and imaging system. Points P and E are the optical centers of the projector


and the camera, respectively. The phase-height relationship is obtained based on Ref. [13]:

h(x,y) = L Δφ(x,y) / (Δφ(x,y) − 2π f_r d),  (1)

where L is the distance between the optical center of the camera and the reference plane; d is the distance between P and E; f_r is the spatial frequency of the projected fringes on the reference plane; and Δφ(x,y) is the phase map, which carries the surface height information. Once Δφ(x,y) is obtained, the height of the surface can be calculated. FTP is the method used to compute Δφ(x,y).

Fig. 2. A geometry structure of the projection and imaging system.

2.2. FTP method

When an object is put on the reference plane, the deformed grating image can be expressed as

g(x,y) = r(x,y) Σ_{n=−∞}^{+∞} A_n exp{i[2π n f_r0 x + n φ(x,y)]}.  (2)

When h(x,y) = 0, the grating image is written as

g0(x,y) = r0(x,y) Σ_{n=−∞}^{+∞} A_n exp{i[2π n f_r0 x + n φ0(x,y)]},  (3)

where r(x,y) and r0(x,y) are the non-uniform reflectivity distributions on the surface of the object and on the reference plane, respectively; A_n are the weighting factors of the Fourier series; f_r0 is the fundamental frequency of the observed grating image; φ(x,y) is the phase resulting from the object height distribution; and φ0(x,y) is the original phase when h(x,y) = 0.

The one-dimensional (1D) Fourier transform of the observed image in Eq. (2) is computed to obtain the Fourier spectrum of the image. A filter function selects the fundamental component of the spectrum, and the inverse Fourier transform is applied to it. An image that carries only the deformed grating information is obtained:

ĝ(x,y) = A1 r(x,y) exp{i[2π f_r0 x + φ(x,y)]}.  (4)

The same operation is applied to Eq. (3):

ĝ0(x,y) = A1 r0(x,y) exp{i[2π f_r0 x + φ0(x,y)]}.  (5)

The phase Δφ(x,y) related to the height distribution is

Δφ(x,y) = Im{log[ĝ(x,y) ĝ0*(x,y)]}.  (6)

2.3. Phase unwrapping

Since the phase calculated by FTP methods gives principal values ranging from −π to π, the phase distribution is wrapped into this range and has discontinuities with 2π jumps when the phase variation is larger than 2π. These discontinuities are corrected by adding or subtracting 2π according to whether the phase jump runs from π to −π or vice versa.

2.4. 3D coordinates calculation

Based on the phase information, 3D coordinates of the object's surface can be calculated. As shown in Fig. 3, the world coordinate system is set so that the X and Y axes lie on the reference plane and the origin is in line with the origin of the camera coordinate system; X, Y and Z are parallel to the x, y and z axes of the camera coordinate system. The principal point of the camera is assumed to be at the center of the image, and the skew coefficient and distortions are zero. The 3D coordinates (X, Y, Z) of a point on the object are then calculated as follows. The Z coordinate is the height h(x,y) from the point on the object to the reference plane, given by Eq. (1). The X coordinate is

X = U L / (f/a),  (7)

Fig. 3. Coordinate system for 3D coordinates calculation.
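The row-wise FTP computation of Section 2.2 (1D FFT, band-pass selection of the fundamental lobe, inverse FFT, phase difference by Eq. (6), then unwrapping as in Section 2.3) can be sketched in NumPy. The paper's implementation is in MATLAB, so this is an illustrative translation; the function name and the rectangular band-pass window are assumptions:

```python
import numpy as np

def ftp_phase(deformed, reference, f0, bw):
    """Row-wise FTP phase extraction.

    f0: fundamental frequency of the grating in cycles/pixel.
    bw: half-width of the band-pass window around +f0.
    Returns the unwrapped phase difference of Eq. (6).
    """
    def fundamental(img):
        G = np.fft.fft(img, axis=1)            # 1D Fourier transform along x
        f = np.fft.fftfreq(img.shape[1])       # frequency axis, cycles/pixel
        G[:, np.abs(f - f0) >= bw] = 0.0       # keep only the lobe at +f0
        return np.fft.ifft(G, axis=1)          # Eqs. (4)/(5)
    g_hat = fundamental(np.asarray(deformed, dtype=float))
    g0_hat = fundamental(np.asarray(reference, dtype=float))
    dphi = np.angle(g_hat * np.conj(g0_hat))   # Eq. (6), wrapped to (-pi, pi]
    return np.unwrap(dphi, axis=1)             # Section 2.3: fix 2*pi jumps
```

With a synthetic reference grating and a phase-modulated copy of it, the known modulation is recovered to high accuracy, since the band-pass window isolates the fundamental exactly as Eqs. (4)-(6) assume.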


where U is the coordinate of the point in the image coordinate system, f is the focal length of the camera, a is the size of a pixel on the image and L is the distance from the optical center of the camera to the reference plane. The Y coordinate is

Y = V L / (f/a),  (8)

where V is the coordinate of the point in the image coordinate system.

3. Image-based parameter measurement method

Based on the analysis of the FTP method, L, d and f/a must be obtained to calculate 3D coordinates of the object. An image-based method to calculate these parameters is described in the following sections.

3.1. Helper pattern design

The helper pattern [15] is a specially designed tool used for system construction and for measurement of the parameters L, d and f/a. The pattern is composed of a box, a stick and two crosses (see Fig. 4). The box is open at the front (rectangle KLEF) and at the back (rectangle SWGH). The width, height and length of the box are known. One cross (lines AB and IJ) is attached to the front of the box and another (lines CD and MN) to the back; both crosses are perfectly aligned. The stick (line AQ) is attached to the front of the box on the horizontal line of the cross (line AB), and its length is known. In this research the helper pattern is used to calculate L, d and f/a using image-based methods.

3.2. Principle of the new method

Three assumptions are used in this method: (1) the pinhole model of the camera applies; (2) lens distortion is neglected, the skew is zero and the aspect ratio in the image plane is unity; and (3) the image is governed by the laws of projective geometry.

The camera is oriented to the helper pattern so that the two crosses of the pattern are perfectly aligned (only one cross can be seen in the image). Because both crosses are on parallel planes, the center point of the crosses in the image plane is the intersection of the optical axis of the camera with the image plane, and the image plane is parallel to both cross planes. The projector is set at a position such that: (1) the optical axis of the projector crosses the optical axis of the camera at the center point O″ of cross AB-IJ; (2) the line passing through points P and O is parallel to the cross line AB; and (3) the cross AB-IJ is on the reference plane, as shown in Fig. 5.

Suppose an image of the helper pattern is taken with the system structure shown in Fig. 5, where a is the projection of A on the image plane, Q′ is the projection of Q on the reference plane and q′ is the projection of Q′ on the image plane. From the similar triangles △OPM ∼ △O″Q′M, △Q′QA ∼ △Q′MO″, △OO′a ∼ △OO″A and △OO′q′ ∼ △OO″Q′, the following relations hold:

OP / O″Q′ = OM / MO″,  (9)

QA / MO″ = Q′A / O″Q′,  (10)

O′q′ / O″Q′ = OO′ / OO″,  (11)

O′a / O″A = OO′ / OO″,  (12)

O′q′ = n a,  (13)

O′a = u a,  (14)

F_a = f/a,  (15)

where n is the number of pixels from O′ to q′ in the image, u is the number of pixels from O′ to a in the image, a is the size of a pixel on the image and f is the focal length of the camera.

Fig. 4. The helper pattern.

Fig. 5. Geometry structure of the camera, the projector and the helper pattern.

Therefore the distance between the optical centers of the projector and the camera can be calculated as

OP = O″A [(u − n) OO″ + n QA] / (QA u),  (16)

where QA is the length of the stick and O″A is half of the width AB of the helper pattern. In Eq. (16), if the distance OO″ from the camera to the reference plane is known, then OP can be calculated.

Fig. 6 shows the geometry of the image and the helper pattern. From the similar triangles △Oab ∼ △OAB and △Ocd ∼ △OCD, the following relations hold:

O′a / O″A = OO′ / OO″,  (17)

O′d / O‴D = OO′ / OO‴,  (18)

OO‴ − OO″ = AD,  (19)

AO″ = O‴D,  (20)

O′a = u a,  (21)

O′d = v a,  (22)

OO′ = f,  (23)

where u and v are the numbers of pixels of lines aO′ and dO′ in the image, AD is the length of the helper pattern, AO″ is half of AB (the width of the helper pattern), and a, b, c and d are the projections of A, B, C and D on the image. From these we get

OO″ = v AD / (u − v)  (24)

and

F_a = f/a = u OO″ / AO″.  (25)

Combining OO″ and F_a with Eq. (16), OP can be calculated.

Fig. 6. Geometry structure of the image and the helper pattern.

4. A FTP-based 3D data acquisition system with unified calibration

4.1. System hardware

The system layout is shown in Fig. 7. According to the projection geometry of Fig. 5, the optical centers of the projector and the digital camera are located at the same distance L from the reference plane, and d is the distance between them. A sinusoidal grating is generated by the computer and projected onto the object and the reference plane by the projector; this allows users to change the cycle length of the grating for the FTP method at low cost. Images of the deformed grating and the object are captured by the digital camera and saved in the computer. The helper pattern is used for system construction and parameter measurement.

Fig. 7. System hardware and layout.

4.2. The working process of the system

The overall process of the FTP-based 3D data acquisition is shown in Fig. 8 and is described as follows.

(1) System construction sets up the system hardware; parameter calculation obtains all system parameters. The positions of the digital camera, the helper pattern and the projector meet the requirements described in Section 3.2. Once an image of the helper pattern is taken, the system parameters L, d and f/a can be calculated.

(2) The sinusoidal grating is generated by

I(x,y) = A + B cos(2π f_r0 x + α_i),  (26)

where I(x,y) is the gray intensity of pixel (x,y) of the sinusoidal grating image, A is the average intensity of the image background, B is the intensity modulation, f_r0 is the fundamental frequency of the grating image and α_i is the phase-shift angle.

(3) Two images are required based on Eq. (6): the reference-grating image and the deformed-grating image. The reference-grating image is the grating image on the reference plane without an object in front of the plane.
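Step (1) reduces to a few pixel measurements on one helper-pattern image. The sketch below implements Eqs. (24), (25) and (16) as reconstructed here; the function name is illustrative, and the sign convention in Eq. (16) assumes the projector offset and the cross end A lie on the same side of the camera axis:

```python
def calibrate(u, v, n, AD, AO2, QA):
    """One-image unified calibration.

    u:   pixels from image center O' to a (image of front cross end A)
    v:   pixels from O' to d (image of back cross end D)
    n:   pixels from O' to q' (image of the stick-shadow point Q')
    AD:  box length; AO2: half cross width (O''A); QA: stick length.
    Returns (L, f/a, d) of the measurement system.
    """
    L = v * AD / (u - v)                           # Eq. (24): OO''
    Fa = u * L / AO2                               # Eq. (25): f/a
    d = AO2 * ((u - n) * L + n * QA) / (QA * u)    # Eq. (16): OP
    return L, Fa, d
```

With a forward pinhole simulation (L = 850 mm, f/a = 1600, d = 210 mm, AD = AO″ = 150 mm, QA = 100 mm, all hypothetical values), the three parameters are recovered exactly, which is a useful sanity check on the similar-triangle derivation.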


(4) Grating image pre-processing removes noise from the acquired images; low-pass filtering is used in this procedure. (5) In the phase calculation, the images are transformed by the 1D Fourier transform to obtain the Fourier spectra; the spectra are filtered with a band-pass filter function to obtain the fundamental component; finally, the inverse Fourier transform is applied to the fundamental component to obtain ĝ(x,y) and ĝ0(x,y), and the phase is obtained using Eq. (6). (6) Based on the phase information, 3D coordinates of data points on the object are calculated with the algorithm described in Section 2.4. (7) The 3D data of the object obtained in step (6) are point clouds; they are output to CAD/CAM systems to build the 3D model.
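The grating of step (2) (Eq. (26)) is straightforward to generate. A sketch matching the experiment's 1024 x 768 image and eight-pixels-per-cycle grating; choosing A and B to span the full 8-bit range is an assumption:

```python
import numpy as np

def make_grating(width=1024, height=768, period=8, A=127.5, B=127.5, alpha=0.0):
    """Sinusoidal grating, Eq. (26): I(x,y) = A + B*cos(2*pi*f_r0*x + alpha),
    with fundamental frequency f_r0 = 1/period cycles per pixel."""
    x = np.arange(width)
    row = A + B * np.cos(2.0 * np.pi * x / period + alpha)
    return np.rint(np.tile(row, (height, 1))).astype(np.uint8)
```

Because the grating varies only along x, a single row is computed and tiled over the image height, which also guarantees perfectly straight fringes.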

5. Experiments and discussions

5.1. Experiment setup

The layout of the FTP-based 3D acquisition system is shown in Fig. 7. In this experiment, the projector is an

EIKI LC-7000 projector with a resolution of 1024 x 768 pixels. The digital camera is a Sony FD Mavica, and the image size used is 1024 x 768 pixels. The FTP algorithms were developed in MATLAB, and the reliability-guided algorithm [16] was used for phase unwrapping. A prototype of the helper pattern was built; its dimensions are shown in Fig. 9. A holder is used as an example in the experiment, and Fig. 9(b) is the original image of the holder.

5.2. Experimental results

In this experiment, the first step is system construction with the aid of the helper pattern. The hardware layout is shown in Fig. 5. Fig. 10 is an image of the helper pattern taken after the system construction. In Fig. 10, the center point O′ of the cross in the image of the helper pattern is the origin of the coordinate system, where x = 0, y = 0. The distances between O′ and the image points a, d and q′ were measured in the image, and the parameters of the system were computed by Eqs. (24), (25) and (16): L = 851.35 mm, d = 208.97 mm and f/a = 1590.68. Fig. 11(a) is the sinusoidal grating image of eight pixels per cycle generated by the computer; its size is 1024 x 768 pixels. Fig. 11(b) shows a deformed grating image; the straight grating lines in Fig. 11(a) serve as the reference signal for determining the absolute phase values

Fig. 8. The overall process of the FTP-based 3D data acquisition: system construction and parameter calculation, sinusoidal grating generation, image acquisition, image pre-processing, phase calculation, 3D data calculation and 3D data export.


Fig. 10. The image of the helper pattern.

Fig. 9. (a) The dimensions of the helper pattern; (b) original image of the holder.


Fig. 11. (a) Sinusoidal grating image; (b) deformed grating and the holder.

to be converted into a height distribution. Fig. 12 shows the recovered shape of the holder; the shape is recovered correctly. Figs. 13 and 14 show the holder's profile obtained by CMM and by the FTP method, respectively. The profile is obtained by cutting the holder along a diameter, and Figs. 13 and 14 were drawn in Microsoft Excel. In Fig. 13, the maximum distance between the top and bottom of the holder is 72.2101 − 66.3307 = 5.8794 mm; in Fig. 14, the maximum distance is 6.4 mm. The error is 8%, but the shapes of the two profiles agree. Therefore this method is feasible for 3D data acquisition.

Fig. 12. The recovered shape of the holder.

5.3. Discussion

In Eqs. (24) and (25), the length of the pattern is used to compute the distances. The center point of the cross is used not only as the origin of the coordinate system for measuring pixel distances of the object, but also for orienting the camera; the method is therefore sensitive to the accuracy of the helper pattern. In Fig. 10, the boundary of the helper pattern in the image is two to three pixels wide, and an error of one pixel in edge detection can generate errors, so the method is also sensitive to the accuracy of edge and point detection in the image. Alignment of the optical axes requires accurate tuning mechanics. A comparison of the unified calibration method with other calibration methods is shown in Table 1: the height calibration and the plane calibration are processed in one step, there is no need to move the helper pattern, and the parameter calculation is easy and robust because only the similar-triangles method is used.

6. Conclusions

This paper proposes a unified calibration method for FTP-based 3D data acquisition in reverse engineering. A projector, a computer and a helper pattern are used in this method, which provides a low-cost application. The main advantages of the method are its simplicity and speed: only one image is needed to calculate all parameters of the system, the helper pattern is not moved during the calibration process, and only a simple similar-triangles method is used. Experimental examples show the method is feasible for 3D data acquisition. The following actions are planned to improve the accuracy of the method:

(1) To use more accurate edge or point detection methods to find the location of the helper pattern.
(2) To find an accurate method to set up the camera and the projector in the right positions.
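The reported 8% figure can be checked directly; dividing the discrepancy by the FTP value is an assumption, but it is the denominator choice consistent with the reported number:

```python
cmm_range = 72.2101 - 66.3307   # height range from the CMM profile (Fig. 13)
ftp_range = 6.4                 # height range from the FTP profile (Fig. 14)
rel_error = abs(ftp_range - cmm_range) / ftp_range
print(round(cmm_range, 4), round(100 * rel_error, 1))   # 5.8794 and about 8.1
```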

Fig. 13. Profile of the holder that is measured by CMM (Z versus X, mm).

Fig. 14. Profile of the holder that is obtained by FTP (Z versus X).

Table 1. The comparison of the unified calibration method and other calibration methods [11–14]

1. Unified calibration: one image with the helper pattern is needed; the height calibration and the plane calibration are processed using that image. Other methods: a series of images with a calibration rig are needed; the height calibration is completed with pure fringe images and the plane calibration is processed using images of the calibration rig.

2. Unified calibration: the calibration process is simple; the helper pattern stays at one fixed position during the calibration procedure. Other methods: the calibration rig is moved to different positions during the calibration procedure, and the accurate position of the rig has to be measured.

3. Unified calibration: only the similar-triangles method is used to calculate the parameters of the system; the calculation is simple and robust. Other methods: least-squares and linear-interpolation algorithms are used to calculate the parameters; the calculation is time-consuming.

Acknowledgments

This research is supported by Canadian NSERC Research Grants.

References

[1] Peng Q, Loftus M. An image-based fast three-dimensional modeling method for virtual manufacturing. J Eng Manuf 2000;214:709–21.
[2] Isdale J. 3D scanner technology review. <http://www.vr.isdale.com/3Dscanners/3DscannerReview.html>; 1998.
[3] Chen F, Brown GM, Song M. Overview of three-dimensional shape measurement using optical methods. Opt Eng 2000;39:10–22.
[4] Koubias RS, Stojanovic S, Georgoudakis E. A measuring method for laser-based profilometry and its application in non-destructive testing and quality control. Proceedings of the 4th International Conference on Vibration Measurements by Laser Techniques - Advances & Applications. <http://www.apel.ee.upatras.gr/the_lab/faculty/koubias_publications.htm>.
[5] Dovgard R, Basri R. Statistical symmetric shape from shading for 3D structure recovery of faces. 8th European Conference on Computer Vision 2004;1:99–113.
[6] Qurban M, Sohaib K. Camera calibration and three-dimensional world reconstruction of stereo-vision using neural networks. Int J Syst Sci 2001;32:1155–9.
[7] Achour K, Benkhelif M. A new approach to 3D reconstruction without camera calibration. Pattern Recognition 2001;34:2467–76.
[8] Quan C, He XY, Wang CF, Tay CJ, Shang HM. Shape measurement of small objects using LCD fringe projection with phase shifting. Opt Commun 2001;189:21–9.
[9] Li W, Su X. Application of improved phase-measuring profilometry in nonconstant environmental light. Opt Eng 2001;40:478–85.
[10] Su X, Chen W. Fourier transform profilometry: a review. Opt Lasers Eng 2001;35:263–84.
[11] Hu Q, Huang PS, Fu Q, Chiang F. Calibration of a three-dimensional shape measurement system. Opt Eng 2003;42:487–93.
[12] Guo H, He H, Yu Y, Chen M. A least squares calibration method for fringe projection profilometry. Proc SPIE 2004;5531:429–40.
[13] Takeda M, Mutoh K. Fourier transform profilometry for the automatic measurement of 3-D object shapes. Appl Opt 1983;22:3977–82.
[14] Zhang X, Lin Y, Zhao M, Niu X, Huang Y. Calibration of a fringe projection profilometry system using virtual phase calibrating model planes. J Opt A: Pure Appl Opt 2005;7:192–7.
[15] Bénallal M, Meunier J. Camera calibration with a viewfinder. 15th International Conference on Vision Interface. <http://www.cipprs.org/vi2002/pdf/s5-8.pdf>; 2002.
[16] Su X, Chen W. Reliability-guided phase unwrapping algorithm: a review. Opt Lasers Eng 2004;42:245–61.