Accurate procedure for the calibration of a structured light system

Ricardo Legarda-Sáenz, MEMBER SPIE
Thorsten Bothe
Werner P. Jüptner, FELLOW SPIE
Bremer Institut für Angewandte Strahltechnik
Klagenfurter Strasse 2
D-28359 Bremen, Germany
E-mail: [email protected]

Abstract. A procedure is proposed to calibrate a generic structured light system consisting of one camera and one projector. The proposed procedure is based on defining a unique coordinate system for both devices in the structured light system; thus, a rigidity constraint is introduced into the transformation process. This constraint is used to derive a simple function for the simultaneous estimation of the parameters, resulting in parameters that are more reliable. The performance of the proposed procedure is shown on examples of the calibration of two different structured light systems. © 2004 Society of Photo-Optical Instrumentation Engineers. [DOI: 10.1117/1.1635373]

Subject terms: calibration; structured light; fringe projection; three-dimensional reconstruction. Paper 030294 received Jun. 20, 2003; revised manuscript received Aug. 21, 2003; accepted for publication Aug. 29, 2003.

1 Introduction

The automatic, noncontact measurement of objects in industrial settings is one of the most important applications of 3-D shape measurement methods.1,2 A well-established 3-D shape measurement technique is triangulation with structured light. It projects a regular pattern of light onto the measured space to encode the scene surfaces, and hence creates contrasted features that can be reliably extracted from the image. In this way, the corresponding-point problem of the classical photogrammetric stereo system is avoided.3–6 The key to accurate reconstruction is the proper calibration of each element used in the structured light system (SLS). Several approaches to calibrate structured light systems can be found in the literature, such as techniques based on neural networks7,8 or bundle adjustment,9–14 where the calibration process varies depending on the available information and the setup used.

We describe the process used to calibrate an SLS that consists of one camera and one projector, both fixed during the calibration and measurement process. The proposed procedure is based on defining a unique coordinate system for both devices in the SLS, and thus a rigidity constraint is introduced into the transformation process.3 This constraint is used to derive a simple function for the estimation of the parameters. The advantage of this function with respect to other SLS approaches4,6 is the simultaneous estimation of the parameters, resulting in parameters that are more reliable. The constraint is extensively used in computer vision for classical stereo systems,3 but there it does not account for lens nonlinearities. An additional advantage is the possibility of using the proposed procedure to calibrate different configurations without any modification.

The organization of this work is as follows. In Sec. 2, the mathematical models used in the structured light system are explained. In particular, Sec. 2.2 describes the model used to join the camera and the projector in a single coordinate system for the SLS. A description of the calibration procedure is given in Sec. 3, where Sec. 3.2 explains the functional used in the estimation and the process employed to minimize it. In Sec. 4, the performance of the calibration is shown on examples of experimental measurements using two different SLSs, and Sec. 5 offers concluding remarks.

2 Mathematical Models Used in the Structured Light System

2.1 Model of the Camera and Projector Devices

The camera model used here is the same used in photogrammetry, which is a combination of the pinhole model and lens distortions.9–12 The model for the projector is the same used for the camera, as the projector can conceptually be regarded as a camera acting in reverse. This model is shown graphically in Fig. 1. In this model, a 3-D point $\mathbf{p}_w = [X, Y, Z]^T$ is expressed in pixel coordinates $\mathbf{m} = [u, v]^T$ through the following sequence of transformations.

1. Transformation from the object coordinate system to the device coordinate system, using the following function:

$$\mathbf{p} = T(\mathbf{p}_w, \Theta), \tag{1}$$

where $\mathbf{p} = [x, y, z]^T$ is the 3-D position in the device coordinate system, and $\Theta = [\omega, \varphi, \kappa, t_x, t_y, t_z]$ is the vector that characterizes the rigid transformation, usually called the extrinsic parameters of the system. The function $T(\cdot)$ is defined as $T(\mathbf{r}_1, \Theta) = R \cdot \mathbf{r}_1 + \mathbf{t}$, where $\mathbf{r}_1$ is a 3-D vector, $R$ is the $3 \times 3$ orthogonal rotation matrix defined by the three Euler angles $\omega$, $\varphi$, and $\kappa$, and $\mathbf{t} = [t_x, t_y, t_z]^T$ is the translation vector.
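As a concrete illustration of Eq. (1), a minimal Python/NumPy sketch is given below. The x-y-z rotation order is an assumed convention; the paper does not state which Euler sequence is used.

```python
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """3x3 rotation built from the Euler angles of Eq. (1).
    The x-y-z axis order used here is an assumed convention."""
    cw, sw = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi),   np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Rx = np.array([[1, 0, 0], [0, cw, -sw], [0, sw, cw]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def rigid_transform(p_w, Theta):
    """Eq. (1): p = T(p_w, Theta) = R @ p_w + t,
    with Theta = [omega, phi, kappa, tx, ty, tz]."""
    omega, phi, kappa, tx, ty, tz = Theta
    return rotation_matrix(omega, phi, kappa) @ np.asarray(p_w) + np.array([tx, ty, tz])
```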


2. Perspective projection of the 3-D device coordinate $\mathbf{p}$ to the image plane using the pinhole model:

$$\mathbf{p}_c = \frac{f}{z}\,\mathbf{p}, \tag{2}$$

where $\mathbf{p}_c = [x_c, y_c, f]^T$ is the coordinate in the image plane, and $f$ is the focal length of the optical lens.

3. Correction of the coordinates in the image plane using the lens distortion model defined in the following function:

$$\mathbf{a}_d = \mathbf{a}_c + \mathbf{f}_d(\mathbf{a}_c, \delta), \tag{3}$$

where $\mathbf{a}_c = [x_c, y_c]^T$ is the ideal coordinate and $\mathbf{a}_d = [x_d, y_d]^T$ is the distorted coordinate, both in the image plane. The lens distortion model is defined as

$$
\mathbf{f}_d(\mathbf{a}_c, \delta) =
\begin{bmatrix}
x_c(k_1 r^2 + k_2 r^4 + \cdots) + [2p_1 x_c y_c + p_2(r^2 + 2x_c^2)](1 + p_3 r^2 + \cdots) + (a y_c + s_x x_c)\\
y_c(k_1 r^2 + k_2 r^4 + \cdots) + [2p_2 x_c y_c + p_1(r^2 + 2y_c^2)](1 + p_3 r^2 + \cdots) + (a x_c)
\end{bmatrix},
\qquad r^2 = x_c^2 + y_c^2, \tag{4}
$$

where $\delta = [k_1, k_2, \ldots, p_1, p_2, \ldots, a, s_x]$ are the distortion parameters.11 The parameters $k_1, k_2, \ldots$ are the coefficients of the radial distortion, the parameters $p_1, p_2, \ldots$ are the coefficients of the decentering distortion, the parameter $a$ is the shear factor, and $s_x$ is the scale factor in $x$.11
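As an illustration, Eqs. (3) and (4) can be coded directly. The sketch below keeps only the leading terms of each series ($k_1$, $k_2$, $p_1$, $p_2$, plus $a$ and $s_x$) and truncates the $(1 + p_3 r^2 + \cdots)$ factor to 1; the ordering of the parameters in `delta` is our own convention:

```python
import numpy as np

def lens_distortion(a_c, delta):
    """Evaluate the distortion term f_d of Eq. (4) at ideal image
    coordinates a_c = (x_c, y_c). Only the leading terms of each
    series are kept; higher-order factors (p3, ...) are dropped."""
    k1, k2, p1, p2, a, sx = delta          # truncated parameter set
    x, y = a_c
    r2 = x**2 + y**2                       # squared radial distance
    radial = k1 * r2 + k2 * r2**2          # radial series k1*r^2 + k2*r^4
    dx = x * radial + (2*p1*x*y + p2*(r2 + 2*x**2)) + (a*y + sx*x)
    dy = y * radial + (2*p2*x*y + p1*(r2 + 2*y**2)) + (a*x)
    return np.array([dx, dy])

def distort(a_c, delta):
    """Ideal -> distorted image coordinates, Eq. (3): a_d = a_c + f_d."""
    return np.asarray(a_c) + lens_distortion(np.asarray(a_c), delta)
```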

4. Conversion from the metric image coordinate $\mathbf{a}_d$ to the pixel coordinate $\mathbf{m} = [u, v]^T$ using the following transformation:

$$
\begin{bmatrix} u \\ v \end{bmatrix}
=
\begin{bmatrix} x_d/d_u + x_o \\ y_d/d_v + y_o \end{bmatrix}, \tag{5}
$$

where $x_o$, $y_o$ are the image coordinates (in pixels) of the principal point, and $d_u$, $d_v$ are the horizontal and vertical sizes of the pixel, respectively. It is assumed here that the pixel sizes $d_u$, $d_v$ are known a priori.

The transformation from the 3-D device coordinate $\mathbf{p}$ to the pixel coordinate $\mathbf{m}$, resulting from the composition of Eqs. (2) through (5), can be expressed more concisely as

$$\mathbf{m} = \mathbf{g}(\mathbf{p}, \theta), \tag{6}$$

where $\theta = [f, u_o, v_o, \delta]$ are the intrinsic parameters of the device.
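Putting Eqs. (2) through (5) together gives a direct implementation of $\mathbf{g}$. A minimal sketch, reusing the `distort` helper from the sketch above; the tuple layout of `theta` is our own convention:

```python
def project(p, theta, du, dv):
    """Eq. (6): m = g(p, theta), the composition of Eqs. (2)-(5).
    theta = (f, xo, yo, delta); du, dv are the a priori pixel sizes."""
    f, xo, yo, delta = theta
    x, y, z = p
    a_c = np.array([f * x / z, f * y / z])   # Eq. (2): pinhole projection
    a_d = distort(a_c, delta)                # Eqs. (3)-(4): lens distortion
    return np.array([a_d[0] / du + xo,       # Eq. (5): metric -> pixel
                     a_d[1] / dv + yo])
```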

Fig. 1 Model of the camera and projector.

Fig. 2 Schematic arrangement of the camera and projector in the SLS.

2.2 Model of the Camera and Projector in the Structured Light System

In the SLS described in this work, both devices (camera and projector) view the object scene at the same time, and they are separated only by a rigid displacement defined by the vector $\Theta_s$, as used in Eq. (1). The schematic arrangement is shown in Fig. 2. Under this condition, it is possible to use a unique device coordinate system and to apply only one transformation from the object coordinate system to the device coordinate system, using one of the devices as reference. Defining the camera as the reference, the complete sequence of transformations for the SLS is defined as

$$\mathbf{p} = T(\mathbf{p}_w, \Theta), \tag{7}$$

$$\mathbf{p}' = T(\mathbf{p}, \Theta_s), \tag{8}$$

$$\mathbf{m}_c = \mathbf{g}(\mathbf{p}, \theta_c), \tag{9}$$

$$\mathbf{m}_p = \mathbf{g}(\mathbf{p}', \theta_p), \tag{10}$$

where $\theta_c$ and $\theta_p$ are the intrinsic parameters of the camera and projector, respectively, $\Theta_s$ is the vector that characterizes the rigid displacement from camera to projector, $\mathbf{p}'$ is the coordinate in the projector coordinate system, and the vectors $\mathbf{m}_c$ and $\mathbf{m}_p$ are the pixel coordinates in the camera and projector, respectively. The functions were previously defined. As can be seen in these equations, a unique coordinate system for both devices is used in the transformation sequence, where both devices are related by a simple displacement. This displacement acts as a rigidity constraint in a simple way, and such constraints are extensively used as bounds in the estimation of parameters for classical stereo systems.3 However, these stereo applications do not account for the lens nonlinearities. An additional advantage of the previous transformation sequence is its independence of the SLS configuration; that is, it can be used in any SLS configuration without modification.

2.3 Remark to Consider in the 3-D Reconstruction

Once the parameters involved in the previously mentioned transformations are estimated, a 3-D reconstruction can be obtained using the observed pixel coordinates from the devices and Eqs. (2) through (10). The reconstruction process is merely to invert the sequence described in these equations (a sketch of such an inversion is given at the end of this section). However, the distortion model given in Eq. (4) is expressed in terms of ideal coordinates in the image plane, and as can easily be noticed, this function does not have an analytical inverse. As a consequence, it is necessary to approximate it. A few explicit solutions can be found in the literature.12,14 One of the solutions is to use the following recursion in the reconstruction step,12,14

$$
\mathbf{a}_c \approx \mathbf{a}_d - \mathbf{f}_d(\mathbf{a}_d, \delta)
\approx \mathbf{a}_d - \mathbf{f}_d[\mathbf{a}_d - \mathbf{f}_d(\mathbf{a}_d, \delta), \delta]
\approx \mathbf{a}_d - \mathbf{f}_d\{\mathbf{a}_d - \mathbf{f}_d[\mathbf{a}_d - \mathbf{f}_d(\mathbf{a}_d, \delta), \delta], \delta\}
\approx \cdots. \tag{11}
$$
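A minimal sketch of this recursion, reusing the `lens_distortion` helper from Sec. 2.1; the number of passes needed (a few, for moderate distortion) is our assumption:

```python
def undistort(a_d, delta, iterations=3):
    """Approximate inverse of Eq. (3) via the recursion of Eq. (11):
    a_c ~ a_d - f_d(a_c, delta), starting from a_c = a_d."""
    a_d = np.asarray(a_d, dtype=float)
    a_c = a_d.copy()
    for _ in range(iterations):            # each pass refines the estimate
        a_c = a_d - lens_distortion(a_c, delta)
    return a_c
```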

Another solution is to include an extra step in the calibration procedure, where the solution consists of creating an inverse mapping of Eq. (4), based on the estimated parameters.12 In both cases, the resultant approximation is precise enough for accurate 3-D reconstructions.
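For the inversion of the projection chain itself, a common choice (and our assumption here, since the paper does not prescribe a particular intersection method) is to undistort the observations, convert them to viewing rays in the common coordinate system, and intersect the two rays in a least-squares sense:

```python
import numpy as np

def triangulate_midpoint(o1, d1, o2, d2):
    """Midpoint of the shortest segment between two rays o_i + t_i * d_i,
    i.e., a least-squares intersection of the camera and projector rays."""
    b = o2 - o1
    a11, a12, a22 = d1 @ d1, d1 @ d2, d2 @ d2
    denom = a11 * a22 - a12 ** 2                 # zero only for parallel rays
    t1 = (a22 * (d1 @ b) - a12 * (d2 @ b)) / denom
    t2 = (a12 * (d1 @ b) - a11 * (d2 @ b)) / denom
    return 0.5 * ((o1 + t1 * d1) + (o2 + t2 * d2))
```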

3 Calibration Procedure

3.1 Acquisition of the Data Used in Calibration

The objective of the calibration is to estimate the optimal value of each parameter involved in the transformation sequence. This estimation is realized using known 3-D coordinates in the scene space and their corresponding coordinates in the image plane. In practice, a test object (calibration target) with M known points (target marks) is used in the calibration. The estimation is carried out through the analysis of several views of this object, positioned within the volume that is being imaged by the system. In this way, the estimated parameters are expected to be accurate only within the volume occupied by the test object.

Fig. 3 (a) Fringe patterns used in the absolute phase measurement (three synthetic wavelengths, each one with four phase shifts), and (b) an example of absolute phase measurement from the demodulation of the wrapped phases obtained from each synthetic wavelength.

For each new position of the test object, only a new vector $\Theta_i$ is required to describe the rigid displacement of the test object with respect to the device coordinate system, where $i$ denotes the number of the position. This concept is shown in Fig. 2. A usual test object is a plane with the target marks, because it is much simpler to build and measure than a 3-D object. This plane is positioned in several places of the 3-D volume where the measurement will be made. In each position of the test object, the calibration data are taken; they consist of a simple image of the target marks and the absolute phase measurement of the test object.6,15,16 The calibration data are processed in two steps. First, the subpixel position of each target mark is estimated from the simple image using standard techniques, like template matching.4,9–14 These positions are used as data for the camera calibration. Once these positions are known, the second step is to estimate the data used in the projector calibration. These data are estimated using the absolute phase measurement of the object,6,15,16 which consists of a sequence of adapted synthetic wavelengths, where the selection of the wavelengths and their connection ensures a hierarchical reduction of the ambiguity range of the fringe order, combined with an increase in the accuracy of the phase measurement. To compute the projector coordinates from the phase information, each phase value is assigned to a unique projector position.6,15 An example of the absolute phase measurement is shown in Fig. 3, and a typical measurement set used in the calibration process is shown in Fig. 4.
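As an illustration of this last step, assigning phase values to projector positions reduces, for fringes that are linear in the projector coordinate (our assumption), to a simple scaling:

```python
import numpy as np

def phase_to_projector_coord(abs_phase, pixels_per_period):
    """Map an absolute phase value (radians) to a projector pixel
    coordinate, assuming fringes linear in the projector coordinate;
    `pixels_per_period` is an illustrative parameter, not a value
    taken from the paper."""
    return np.asarray(abs_phase) * pixels_per_period / (2.0 * np.pi)
```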

3.2 Estimation of the Parameters Used in the System

Once the input data are estimated, the next step is the estimation of the calibration parameters. These can be estimated by minimizing the total square error between the measurements and the estimations of the model used to describe the transformation between the 3-D coordinates of a point in the scene space and its corresponding pixel coordinates in the devices.


Fig. 5 Two structured light systems used to test the proposed calibration procedure: (a) a normal setup, and (b) the camera and projector with nearly parallel optical axes.

Fig. 4 Example of measurement set used in the calibration: (a) simple image, and absolute phase measurements in the (b) horizontal and (c) vertical directions. The + indicates the center of the located target mark.

Let $(\mathbf{p}_w)_j$ be the coordinate of the j'th target mark on the test object. In addition, let $(\mathbf{m}_c)_{ij}$ and $(\mathbf{m}_p)_{ij}$ be the corresponding pixel coordinates of $(\mathbf{p}_w)_j$ in the camera and projector, respectively, where $i$ is the position number of the test object. Based on the transformation sequence described in Sec. 2.2, the functional that measures this error is defined as

$$
J(\varphi) = \sum_{i=1}^{N} \sum_{j=1}^{M} \left\{ \left\| (\mathbf{m}_c)_{ij} - \mathbf{g}(\mathbf{p}_{ij}, \theta_c) \right\|^2 + \left\| (\mathbf{m}_p)_{ij} - \mathbf{g}\!\left[ (\mathbf{p}')_{ij}, \theta_p \right] \right\|^2 \right\}, \tag{12}
$$

where $\mathbf{p}_{ij} = T[(\mathbf{p}_w)_j, \Theta_i]$ and $(\mathbf{p}')_{ij} = T(\mathbf{p}_{ij}, \Theta_s)$ [cf. Eqs. (7) and (8)], and $\varphi = [\theta_c, \theta_p, \Theta_s, \Theta_1, \Theta_2, \ldots, \Theta_N]$ are the variables to be minimized, using N positions of the test object and M target marks on the test object. If the experimental uncertainty is known, it can be included in the function.12,13,17 The advantage of this function with respect to other SLS approaches4,6 is the simultaneous estimation of the parameters using the unique coordinate system as a rigidity constraint. This constraint bounds the solution space, reducing the risk of erroneous estimations.17 The maximum likelihood estimate can be obtained by minimizing the previous function, that is,

$$\hat{\varphi} = \arg\min_{\varphi} J(\varphi). \tag{13}$$

Equation (13) is a nonlinear minimization problem, which can be solved with the Levenberg-Marquardt optimization method.18 In addition, a robust estimator can be included in the minimization process.19 However, due to the large number of unknowns and the ill conditioning of the problem, the search for the global minimum can be difficult and can become trapped in a local minimum.
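To make the structure of Eqs. (12) and (13) concrete, a sketch is given below: it stacks the residuals of both devices over all views and hands them to a Levenberg-Marquardt solver (here SciPy's `least_squares` with `method='lm'`, one possible implementation of Ref. 18). It reuses the `rigid_transform` and `project` sketches from Sec. 2.1; the flat packing of $\varphi$ is our own convention.

```python
import numpy as np
from scipy.optimize import least_squares

ND = 6  # truncated distortion set per device: k1, k2, p1, p2, a, sx (our choice)

def unpack(phi, n_views):
    """Split the flat parameter vector phi = [theta_c, theta_p, Theta_s,
    Theta_1, ..., Theta_N] into its components (packing is our convention)."""
    k = 3 + ND                                    # f, xo, yo + distortion terms
    theta_c = (phi[0], phi[1], phi[2], phi[3:k])
    theta_p = (phi[k], phi[k + 1], phi[k + 2], phi[k + 3:2 * k])
    Theta_s = phi[2 * k:2 * k + 6]
    Thetas = [phi[2 * k + 6 + 6 * i: 2 * k + 12 + 6 * i] for i in range(n_views)]
    return theta_c, theta_p, Theta_s, Thetas

def residuals(phi, p_w, m_c, m_p, du, dv):
    """Stacked residual vector of Eq. (12) over all views i and marks j.
    p_w: (M, 3) target coordinates; m_c, m_p: (N, M, 2) observations."""
    n_views = m_c.shape[0]
    theta_c, theta_p, Theta_s, Thetas = unpack(phi, n_views)
    res = []
    for i in range(n_views):
        for j in range(p_w.shape[0]):
            p = rigid_transform(p_w[j], Thetas[i])           # object -> camera frame
            p_prime = rigid_transform(p, Theta_s)            # Eq. (8): rigidity constraint
            res.append(m_c[i, j] - project(p, theta_c, du, dv))         # camera term
            res.append(m_p[i, j] - project(p_prime, theta_p, du, dv))   # projector term
    return np.concatenate(res)

# Eq. (13): Levenberg-Marquardt minimization, e.g.
# sol = least_squares(residuals, phi0, method='lm',
#                     args=(p_w, m_c, m_p, du, dv))
```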


Fig. 6 Residuals obtained in the first calibration: (a) residual distribution of the calibration, and (b) typical residual distribution of one frame used in the calibration. In panel (a), the + indicates camera residuals and * indicates projector residuals. In panel (b), the o indicates the observed image position, and the arrows indicate the scaled residuals, where the scale is shown at the top left of the panel.

Fig. 7 Residuals obtained in the second calibration: (a) residual distribution of the calibration, and (b) typical residual distribution of one frame used in the calibration. In (a), the + indicates camera residuals and * indicates projector residuals. In (b), the o indicates the observed image position, and the arrows indicate the scaled residuals, where the scale is shown at the top left of the panel.

This problem is solved easily by fractioning the estimation process.13 This fractioning consists of first estimating the parameters of the camera and projector separately, using well-known techniques from the literature (e.g., Refs. 12 and 20). Once the parameters of each device are estimated, the minimization problem given in Eq. (13) is performed, using the obtained estimations as initial values, as sketched below.
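A sketch of this fractioning under stated assumptions: `calibrate_single_device`, `relative_pose`, and `pack` are hypothetical placeholders (e.g., a Zhang-style routine20 for each device), not functions defined by the paper; `residuals` is the sketch from Sec. 3.2.

```python
# Stage 1: estimate each device separately (hypothetical helper routines).
theta_c0, poses_c = calibrate_single_device(camera_data)      # placeholder
theta_p0, poses_p = calibrate_single_device(projector_data)   # placeholder

# An initial camera-to-projector displacement Theta_s follows from the
# per-device poses of any common view of the test object.
Theta_s0 = relative_pose(poses_c[0], poses_p[0])              # placeholder

# Stage 2: joint refinement of Eq. (13), started from the stage-1 values.
phi0 = pack(theta_c0, theta_p0, Theta_s0, poses_c)            # inverse of `unpack`
sol = least_squares(residuals, phi0, method='lm',
                    args=(p_w, m_c, m_p, du, dv))
```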

4 Experimental Results and Discussion

To show the reliability of the parameters estimated by the proposed procedure, two different SLSs were calibrated with it. Both calibrations were run on a 1-GHz Pentium III PC with 256 MB of main memory. The test object shown in Fig. 4(a) was used in both calibrations. The test object is about 800 × 600 mm in size, and it has 140 target marks distributed uniformly. The target marks were measured to an accuracy of 0.1 mm.

The first calibration was made with the SLS shown in Fig. 5(a). In this system, the resolution of the charge-coupled device (CCD) camera is 736 × 572 pixels with a nominal focal length of 12 mm, and the resolution of the LCD projector is 832 × 624 pixels with a nominal focal length of 50 mm. The length of the baseline is about 860 mm. This system was calibrated using 19 positions of the test object. The calibrated volume was about 900 mm wide, 1000 mm tall, and 600 mm deep, placed 1000 mm from the SLS.

A second calibration was performed using a portable SLS in which the camera and projector were assembled with nearly parallel optical axes. Both devices have a short focal length, and the length of the baseline is about 100 mm.21 An image of the system is shown in Fig. 5(b). The calibration was performed using 13 positions of the test object. The calibrated volume was about 900 mm wide, 900 mm tall, and 600 mm deep, placed 600 mm from this SLS.

The first evaluation was the analysis of the discrepancies obtained in the estimations. The time employed in the first calibration process was approximately 80 s, and the normalized stereo calibration error was equal to 0.603.22 The resultant residuals are shown graphically in Fig. 6. In the second calibration, the normalized stereo calibration error was equal to 0.792, and the time employed was approximately 55 s. The resultant residuals from the second estimation are shown graphically in Fig. 7.


Fig. 8 (a) 3-D reconstruction of the plane using the first setup shown in Fig. 5(a), and the calibration parameters estimated by the proposed procedure. (b) Typical scaled planarity deviation obtained from the reconstruction, together with a least-squares fit to a plane, where the • indicates the deviation; the standard deviation is shown as a scale at the bottom of the panel.

Fig. 9 (a) 3-D reconstruction of the plane using the second setup shown in Fig. 5(b), and the calibration parameters estimated by the proposed procedure. (b) Typical scaled planarity deviation obtained from the reconstruction, together with a least-squares fit to a plane, where the • indicates the deviation; the standard deviation is shown as a scale at the bottom of the panel.

As can be seen in the previous results, the residual magnitude is small enough in both calibrations, as is expected of a good estimation.9–14 One drawback of the estimations is the systematic distribution of the residuals, which indicates some error in the measurement of the target positions. All calibration procedures based on knowledge of the 3-D coordinates of the target marks are affected by the uncertainty of the target measurement, and this is one of the most important sources of systematic error in calibration and reconstruction. These errors can only be removed by improving the accuracy of the target positions or by including a self-calibration strategy in the estimation.6,9–11,13 However, this situation does not affect the present estimations too much, because the normalized stereo calibration error indicates that the residuals are negligible compared with image digitization noise at this depth.22

The second evaluation of the performance was the 3-D reconstruction of objects using the estimated parameters and following the remarks made in Sec. 2.3. Using the first SLS, the measurement of one plane was performed. The resultant reconstruction is shown in Fig. 8(a); the resultant standard deviation from planarity was equal to 0.0861 mm, and the measured area was approximately 420 × 580 mm². The typical planarity deviation is shown in Fig. 8(b). For the second SLS, the measurements of one plane and one statue were performed. The plane reconstruction using the second SLS is shown in Fig. 9(a). The resultant standard deviation from planarity of this reconstruction was equal to 0.4506 mm, where the approximate measured area was 650 × 650 mm². The typical planarity deviation is shown in Fig. 9(b). The statue used in the measurement is shown in Fig. 10(a), and the resultant reconstruction is shown in Fig. 10(b). To evaluate the performance of the statue reconstruction, a reference measurement was performed using a commercial triangulation scanner.23 The resultant differences between this reference measurement and the obtained reconstruction are shown in Fig. 10(c); the resultant standard deviation of this reconstruction was equal to 0.5053 mm.


To analyze these results, it is necessary to consider the geometry of the SLS, because the geometric configuration (particularly the angle between the devices) substantially affects the reconstruction performance.24–27 For the first SLS, the resolution limit of measurement is found to be 0.097 mm, and for the second SLS, it is about 0.616 mm. Therefore, despite the relatively large planarity deviations obtained from the prior reconstructions, especially the second one, the maximum accuracy allowed by each setup is achieved.

To compare the proposed calibration procedure with existing approaches found in the literature, the procedure described in Ref. 6 was used to calibrate both systems using the previously mentioned datasets. For the first SLS, the normalized stereo calibration error was equal to 0.735, and the resultant standard deviation from planarity was equal to 0.1054 mm. For the second SLS, the normalized stereo calibration error was equal to 0.907, the resultant standard deviation from planarity was equal to 0.5162 mm, and the standard deviation from the statue reconstruction was equal to 0.5789 mm. As can be seen, the accuracy obtained by the reference approach is lower than that obtained by the proposed procedure. The proposed technique improves the reliability of the estimation, and consequently the performance of the 3-D reconstruction, in comparison with similar approaches reported in the literature.

5 Conclusions

We present a procedure for the accurate calibration of a structured light system. The proposed procedure is based on defining a unique coordinate system for both devices in the SLS; thus, a rigidity constraint is introduced into the transformation process. This constraint is used to derive a simple function for the simultaneous estimation of the parameters, resulting in parameters that are more reliable, as can be seen in the reported experiments. The resultant function is defined in Eq. (12), and it can be applied to different configurations of the SLS without any modification, as shown in the examples of experimental measurements.

Acknowledgments

The authors thank Achim Gesierich and Jan Müller for assistance with the experiments.

Fig. 10 (a) Statue used to test the performance of the calibration. (b) 3-D reconstruction using the second setup shown in Fig. 5(b), and the calibration parameters estimated by the proposed procedure. (c) Differences between the reference measurement and the reconstruction shown in Fig. 10(b). The scale shown is in millimeters.

References

1. F. Chen, G. W. Brown, and M. Song, "Overview of three-dimensional shape measurement using optical methods," Opt. Eng. 39(1), 10–22 (2000).
2. P. Graebling, A. Lallement, D. Zhou, and E. Hirsch, "Optical high-precision three-dimensional vision-based quality control of manufactured parts by use of synthetic images and knowledge for image-data evaluation and interpretation," Appl. Opt. 41(14), 2627–2643 (2002).
3. O. Faugeras, Three-Dimensional Computer Vision: A Geometric Viewpoint, MIT Press, Cambridge, MA (1993).
4. R. J. Valkenburg and A. M. McIvor, "Accurate 3D measurement using a structured light system," Image Vis. Comput. 16(2), 99–110 (1998).
5. J. Batlle, E. Mouaddib, and J. Salvi, "Recent progress in coded structured light as a technique to solve the correspondence problem: A survey," Pattern Recogn. 31(7), 963–982 (1998).
6. W. Schreiber and G. Notni, "Theory and arrangements of self-calibrating whole-body three-dimensional measurement systems using fringe projection technique," Opt. Eng. 39(1), 159–169 (2000).
7. F. J. Cuevas, M. Servin, and R. Rodriguez-Vera, "Depth object recovery using radial basis functions," Opt. Commun. 163(4), 270–277 (1999).
8. F. J. Cuevas, M. Servin, O. N. Stavroudis, and R. Rodriguez-Vera, "Multi-layer neural networks applied to phase and depth recovery from fringe patterns," Opt. Commun. 181(4), 239–259 (2000).
9. C. C. Slama, Manual of Photogrammetry, American Society of Photogrammetry, Falls Church, VA (1980).
10. C. S. Fraser, "Photogrammetric camera component calibration: A review of analytical techniques," in Calibration and Orientation of Cameras in Computer Vision, A. Gruen and T. S. Huang, Eds., pp. 95–136, Springer-Verlag, Berlin Heidelberg (2001).
11. A. Gruen and H. A. Beyer, "System calibration through self-calibration," in Calibration and Orientation of Cameras in Computer Vision, A. Gruen and T. S. Huang, Eds., pp. 163–194, Springer-Verlag, Berlin Heidelberg (2001).
12. J. Heikkilä, "Geometric camera calibration using circular control points," IEEE Trans. Pattern Anal. Mach. Intell. 22(10), 1066–1077 (2000).
13. F. Pedersini, A. Sarti, and S. Tubaro, "Accurate and simple geometric calibration of multi-camera systems," Signal Process. 77(3), 309–334 (1999).
14. D. B. Gennery, "Least-squares camera calibration including lens distortion and automatic editing of calibration points," in Calibration and Orientation of Cameras in Computer Vision, A. Gruen and T. S. Huang, Eds., pp. 123–136, Springer-Verlag, Berlin Heidelberg (2001).
15. W. Osten, "Application of optical shape measurement for the nondestructive evaluation of complex objects," Opt. Eng. 39(1), 232–243 (2000).
16. W. Nadeborn, P. Andrä, and W. Osten, "A robust procedure for absolute phase measurement," Opt. Lasers Eng. 24(2–3), 245–260 (1996).
17. M. Bertero and P. Boccacci, Introduction to Inverse Problems in Imaging, Institute of Physics Publishing, Bristol, UK (1998).
18. J. J. Moré, "The Levenberg-Marquardt algorithm: implementation and theory," in Numerical Analysis: Proceedings of the Biennial Conference 1977, G. A. Watson, Ed., pp. 105–116, Lecture Notes in Mathematics 630, Springer-Verlag, Berlin Heidelberg (1978).
19. W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery, Numerical Recipes in C, 2nd ed., Sec. 15.7, Cambridge University Press, Cambridge, England (1999).
20. Z. Zhang, "A flexible new technique for camera calibration," IEEE Trans. Pattern Anal. Mach. Intell. 22(11), 1330–1334 (2000).
21. T. Bothe, A. Gesierich, R. Legarda-Sáenz, and W. Jüptner, "3D-camera," Proc. SPIE 5144, 295–306 (2003).
22. J. Salvi, X. Armangué, and J. Batlle, "A comparative review of camera calibrating methods with accuracy evaluation," Pattern Recogn. 35(7), 1617–1635 (2002).
23. Steinbichler Optotechnik GmbH, "Triangulation scanner," see http://www.steinbichler.de.
24. K. J. Gåsvik, Optical Metrology, Sec. 5.4.2, John Wiley and Sons, New York (1987).
25. Z. Yang and Y. F. Wang, "Error analysis of 3D shape construction from structured lighting," Pattern Recogn. 29(2), 189–206 (1996).
26. W. Osten, P. Andrä, and D. Kayser, "Hochauflösende Vermessung ausgedehnter technischer Oberflächen mit skalierbarer Topometrie" ("High-resolution measurement of extended technical surfaces with scalable topometry"), Tech. Mess. 66(11), 413–428 (1999).
27. P. Andrä, Ein verallgemeinertes Geometriemodell für das Streifenprojektionsverfahren zur optischen 3D-Koordinatenmessung (A Generalized Geometry Model for the Fringe Projection Technique for Optical 3-D Coordinate Measurement), Sec. 4.2, BIAS, Bremen (1998).

Ricardo Legarda-Sáenz received his MSc degree in electronic engineering (computation) from the Instituto Tecnológico de Chihuahua (Mexico) in 1997, and his PhD in optics from the Centro de Investigaciones en Óptica (Mexico) in 2000. Since 2001, he has been a research fellow at the Bremer Institut für Angewandte Strahltechnik (BIAS), Germany. His current interests are image processing applied to fringe pattern analysis, moiré and fringe projection techniques, and the development of automatic methods for optical metrology.

Thorsten Bothe studied physics at the Carl von Ossietzky University, Germany. As a member of the applied optics group, he received his MSc degree in 1995 with work in speckle pattern interferometry. Afterward, he worked as a project manager on a 3-D ESPI system for deformation monitoring in historical monuments. Since 1998, he has been with BIAS working on his doctoral thesis in the field of interferometry and fringe projection for shape and deformation measurement.

Werner P. Jüptner studied physics at the Technical University of Hannover, Germany, from 1964 to 1969, receiving his MSc at the end of his studies. From 1970 to 1977, he worked at the Institute for Applied Materials Research, Bremen, Germany. In 1975, he received his PhD in mechanical engineering from the University of Hannover. In 1977, together with his colleague G. Sepold, he founded the Bremen Institute for Applied Beam Technology (BIAS), and since then he has been the director of the institute. In 1989, he was offered a chair as university professor for laser physics and laser metrology at Bremen University. In 2002, he received the Dennis Gabor Award of SPIE. He has chaired many national and international conferences and symposia.
