Theory and arrangements of self-calibrating whole-body three-dimensional measurement systems using fringe projection technique

Wolfgang Schreiber
Gunther Notni
Fraunhofer Institute for Applied Optics and Precision Engineering
Schillerstraße 1
07745 Jena, Germany
E-mail: [email protected]

Abstract. We report the basic concepts and experimental arrangements of self-calibrating 3-D measurement systems using structured-light illumination (fringe projection), which ensure a high number of object points, rapid data acquisition, and a simultaneous determination of coordinates and system parameters (self-calibration), making the system completely insensitive to environmental changes. Furthermore, it is unnecessary to have any markers on the object surface, and a subsequent matching of the single views is not required to obtain a full-body measurement. For this purpose, the test object is successively illuminated with two grating sequences perpendicular to each other from at least two different directions, resulting in surplus phase values for each measurement point. Based on these phase values, one can calculate the orientation parameters as well as the 3-D coordinates simultaneously. Different measurement setups that have the ability to measure the entire surface (full-body measurement) are reported. Results are presented showing the power of this concept, for example, by measuring the complete 3-D shape of specular technical surfaces, where the object volumes can differ strongly. Theoretical estimations, proven by first measurements, show a coordinate measurement accuracy of up to 10^-5 of the measurement field size. © 2000 Society of Photo-Optical Instrumentation Engineers. [S0091-3286(00)01801-8]

Subject terms: three-dimensional measurement; fringe projection; calibration; topometry; triangulation; shape measurement. Paper SM-18 received July 5, 1999; revised manuscript received Aug. 7, 1999; accepted for publication Aug. 7, 1999.

1 Introduction

The aim of optical 3-D measurements based on fringe projection,1–4 gray code projection,5–7 moiré techniques,8,9 or methods of digital photogrammetry10–12 is the determination of usual Cartesian coordinates. All of these standard optical methods, however, usually measure the coordinates indirectly. The measured values are only the phase values of projected fringes or moirés and the pixel or image coordinates of a camera. Therefore, in addition to the measured values, parameters describing the geometrical conditions of the sensor (so-called system parameters) enter into the coordinate calculation. Normally, these parameters must be determined in a calibration procedure before the actual 3-D measurement begins.13 For their determination, known objects, for example, planes with targets, must be measured at two or more well-defined positions. Such a procedure requires perfect standards and/or their precise movement within the measurement volume. Furthermore, the procedure becomes impractical with increasing object volumes (>30 cm in diameter), and the system is expected to be time-stable, because the calibration is carried out before the measurement. This state of affairs makes optical measuring systems sometimes complicated to handle and very sensitive to environmental changes and other disturbances during the time between the calibration and the actual coordinate measurement.


Moreover, the calibration procedures themselves are very time consuming and difficult to handle. Of course, the measurements can only be as good as the standard is manufactured.

A further disadvantage of the usually used fringe projection systems is that some 3-D information is lost due to shadowing and specular light. Therefore, the sensor or the object must be moved into multiple, overlapping measurement positions to view the entire surface. The resulting point clouds taken from the different views must then be merged into a common coordinate system to eventually obtain the complete 3-D view.14–17 These procedures are often interactive and very time-consuming. This can also be achieved using accurate object handling to move the object into multiple overlapping measurement positions in combination with a special 3-D calibration.18,19

It is well known that photogrammetry offers a comfortable way to solve the mentioned problems, i.e., to determine the system parameters as well as the 3-D coordinates during the measurement procedure (that is, to realize a self-calibration) and to merge the data using common homologous points in the different views.10–12 Unfortunately, however, photogrammetry is restricted because the calculation is very time-consuming, the number of measurement points is typically lower than 10,000, and one must interact with the object to be measured while fixing the necessary markers


on the object surface as homologous points.

In photogrammetry, a measurement is analyzed from at least two positions, yielding four measurement values. The redundant value can be used for the simultaneous determination of both object coordinates and model parameters. In photogrammetry, this practice is called free network bundle adjustment. To determine the scale, only a single distance in the object space must be known. To define a coordinate system, the knowledge of six further suitable parameters is required. In every respect, the advantage is obvious: expensive standards, which would additionally be unwieldy for a larger space, are not required. Moreover, the temporal stability of the parameters must be guaranteed only within the time of measurement.

In this paper, the concept of self-calibration of fringe projection systems, as we recently proposed, is summarized. This concept overcomes the problems mentioned by combining the advantages of the fringe projection technique with those of photogrammetry. The main features of the new measurement concept are:

1. The combination of both methods guarantees a high number of object points and rapid data acquisition.

2. The simultaneous determination of coordinates and system parameters (self-calibration) makes the fringe projection system insensitive to environmental changes and vibrations, and from the practical and financial point of view no calibration equipment is required. Furthermore, this provides the basis for very accurate measurements.

3. No markers are required on the object surface, and subsequent matching of the single views is not necessary because all data points lie in the same coordinate system. This is the basis for automatic whole-body measurement using multiple cameras simultaneously or in succession.

The following section briefly explains the background of the mathematical model used, based on the model of photogrammetry, and how it can be applied to fringe projection techniques. The differences pointed out were the reason for the development of the new concept. Section 3 explains the concept of the self-calibration of fringe projection systems. In Sec. 4, some arrangements are proposed that have the ability to measure a single view or a whole-body view. Section 5 presents some estimates of the achievable accuracy, and some experimental results are shown in Sec. 6.

Fig. 1 Relationship between the image and world coordinates of a camera shown in a positive configuration.12

2 Basic Mathematical Model

In this section, the mathematical model of 3-D sensors with fringe projection using the photogrammetry model is introduced.

2.1 Mathematical Model of a Camera in Photogrammetry

The basis of the photogrammetric model is the central projection, i.e., an object point M, the projection center of the camera O^(c), and the image point are located on a straight line. Moreover, systematic deviations from the central projection due to distortions of the imaging lens are taken into account. Mathematical corrections include symmetrical as well as asymmetrical or spiral distortions. The mathematical dependence of the measured image coordinates (ξ^(c), η^(c)) of an image point M^(c) on the 3-D coordinates x_M, y_M, and z_M of the corresponding object point M is investigated in terms of a world coordinate system x, y, z (see Fig. 1). The superscript (c) indicates camera parameters. Using the principle of collinearity, the following equation describes the relation between coordinates and unknown parameters:

$$\begin{bmatrix} \xi_M^{(c)} \\ \eta_M^{(c)} \end{bmatrix} = -\frac{c}{z'} \begin{pmatrix} x' \\ y' \end{pmatrix} + \begin{pmatrix} \xi_H \\ \eta_H \end{pmatrix}, \tag{1}$$

where

ξ_M^(c), η_M^(c) = image coordinates,
x′, y′, z′ = auxiliary coordinate system, with its origin in the projection center and the x′-y′ plane parallel to the ξ-η plane of the image coordinate system (not shown in Fig. 1),
c = camera constant (shortest distance of projection center and image plane),
ξ_H, η_H = image coordinates of the main point.

Deviations dξ and dη from the strict central projection, like distortion, can be described with functions that contain additional parameters. For example, the correction of radial distortion can be written as

$$d\xi = (\xi - \xi_H)(k_1 r^2 + k_2 r^4 + k_3 r^6 + \cdots), \qquad d\eta = (\eta - \eta_H)(k_1 r^2 + k_2 r^4 + k_3 r^6 + \cdots), \tag{2}$$

with

$$r = [(\xi - \xi_H)^2 + (\eta - \eta_H)^2]^{1/2}.$$

Equation (1) then extends to

$$\begin{bmatrix} \xi_M^{(c)} \\ \eta_M^{(c)} \end{bmatrix} = -\frac{c}{z'} \begin{pmatrix} x' \\ y' \end{pmatrix} + \begin{pmatrix} \xi_H \\ \eta_H \end{pmatrix} + \begin{pmatrix} d\xi \\ d\eta \end{pmatrix}. \tag{3}$$

Finally, the change from the auxiliary coordinate system to the world coordinate system is made by a six-parameter transformation:

$$\begin{pmatrix} x' \\ y' \\ z' \end{pmatrix} = \mathbf{R}(\omega, \phi, \kappa) \begin{pmatrix} x_M - x_0^{(c)} \\ y_M - y_0^{(c)} \\ z_M - z_0^{(c)} \end{pmatrix}, \tag{4}$$

where

x_M, y_M, z_M = world coordinates of the object,
x_0^(c), y_0^(c), z_0^(c) = world coordinates of the projection center,
R(ω, φ, κ) = orthonormal rotation matrix, which rotates the world coordinate system parallel to the auxiliary system.

On the other hand,

$$\mathbf{R} = \begin{bmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{bmatrix} = \begin{bmatrix} \cos\phi\cos\kappa & -\cos\phi\sin\kappa & \sin\phi \\ \cos\omega\sin\kappa + \sin\omega\sin\phi\cos\kappa & \cos\omega\cos\kappa - \sin\omega\sin\phi\sin\kappa & -\sin\omega\cos\phi \\ \sin\omega\sin\kappa - \cos\omega\sin\phi\cos\kappa & \sin\omega\cos\kappa + \cos\omega\sin\phi\sin\kappa & \cos\omega\cos\phi \end{bmatrix}, \tag{5}$$

where ω, φ, and κ are angles of Euler's convention. Therefore, the relationship between the image coordinates ξ_M^(c) and η_M^(c) and the coordinates x_M, y_M, and z_M is given by

$$[\xi_M^{(c)} - \xi_H](1 + k_1 r^2 + k_2 r^4 + k_3 r^6) = -c\,\frac{r_{11}^{(c)}[x_M - x_0^{(c)}] + r_{21}^{(c)}[y_M - y_0^{(c)}] + r_{31}^{(c)}[z_M - z_0^{(c)}]}{r_{13}^{(c)}[x_M - x_0^{(c)}] + r_{23}^{(c)}[y_M - y_0^{(c)}] + r_{33}^{(c)}[z_M - z_0^{(c)}]}, \tag{6}$$

and

$$[\eta_M^{(c)} - \eta_H](1 + k_1 r^2 + k_2 r^4 + k_3 r^6) = -c\,\frac{r_{12}^{(c)}[x_M - x_0^{(c)}] + r_{22}^{(c)}[y_M - y_0^{(c)}] + r_{32}^{(c)}[z_M - z_0^{(c)}]}{r_{13}^{(c)}[x_M - x_0^{(c)}] + r_{23}^{(c)}[y_M - y_0^{(c)}] + r_{33}^{(c)}[z_M - z_0^{(c)}]}. \tag{7}$$

From this result we can reach the following conclusion. One camera position yielding two image coordinates is insufficient for the determination of the three coordinates of an object point. Using a second camera position, one acquires four measurement values, including one redundant value that can be used to determine unknown parameters. From an adequate number of object points measured at different camera positions, a system of nonlinear equations can be created enabling the determination of the object coordinates x_M, y_M, and z_M; the parameters of the inner orientation of the cameras c, ξ_H, and η_H; the parameters of the distortion functions dξ and dη; and the parameters of the exterior orientation of the images x_0, y_0, z_0, φ, ω, and κ. The system becomes highly redundant if images that include measured object points are available from more than two positions. In this way, without any knowledge of the parameters, one can reconstruct the measurement configuration and the object shape.

Note that one disadvantage of photogrammetric measurements is the necessary time-consuming identification of homologous points in the different images. Seldom can one make use of natural marks. Artificial marks must be pinned or projected onto the surface, especially on nonstructured or continuous surfaces. Furthermore, the number of measurable points is usually less than the number of resolvable points.
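To make the camera model concrete, the following sketch (Python with NumPy; the function names and the one-step distortion handling are our illustrative choices, not the authors' implementation) evaluates the forward model of Eqs. (1)-(7): the object point is rotated into the auxiliary system with R(ω, φ, κ), centrally projected with the camera constant c, and corrected for radial distortion.

    import numpy as np

    def rotation_matrix(omega, phi, kappa):
        """Orthonormal rotation matrix R(omega, phi, kappa) of Eq. (5)."""
        co, so = np.cos(omega), np.sin(omega)
        cp, sp = np.cos(phi), np.sin(phi)
        ck, sk = np.cos(kappa), np.sin(kappa)
        return np.array([
            [cp * ck,                -cp * sk,                 sp],
            [co * sk + so * sp * ck,  co * ck - so * sp * sk, -so * cp],
            [so * sk - co * sp * ck,  so * ck + co * sp * sk,  co * cp],
        ])

    def project(M, X0, angles, c, xi_H, eta_H, k=(0.0, 0.0, 0.0)):
        """Image coordinates (xi_M, eta_M) of a world point M, Eqs. (1)-(7).

        M      -- object point (x_M, y_M, z_M) in world coordinates
        X0     -- projection center (x_0, y_0, z_0) in world coordinates
        angles -- Euler angles (omega, phi, kappa) of the exterior orientation
        c      -- camera constant; xi_H, eta_H -- coordinates of the main point
        k      -- radial distortion coefficients (k1, k2, k3) of Eq. (2)
        """
        x, y, z = rotation_matrix(*angles) @ (np.asarray(M) - np.asarray(X0))  # Eq. (4)
        xi_u, eta_u = xi_H - c * x / z, eta_H - c * y / z                      # Eq. (1)
        # Eqs. (6)/(7) are implicit in the distorted coordinates; one fixed-point
        # step with r taken from the undistorted point suffices for small k.
        r2 = (xi_u - xi_H) ** 2 + (eta_u - eta_H) ** 2
        f = 1.0 + k[0] * r2 + k[1] * r2 ** 2 + k[2] * r2 ** 3
        return xi_H + (xi_u - xi_H) / f, eta_H + (eta_u - eta_H) / f

With two or more such projections per point, the nonlinear system of Eqs. (6) and (7) can be solved for coordinates and parameters together, which is exactly the free network bundle adjustment mentioned in the introduction.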

2.2 Optical 3-D Measurement Systems Using Structured Light in the Photogrammetry Model

Fig. 2 Relationship between measurement values and 3-D coordinates in a fringe projection system.

Figure 2 shows the typical geometric arrangement of a fringe projection system with central projection. A fringe projector projects a grating with period Λ onto the surface to be measured. Usually 30 to 100 lines are imaged. A CCD camera acquires the intersections of the projected fringes with the surface. The intersection corresponds to a phase-modulated signal. Thus, every object point M can be assigned a phase value Φ_M^(P). This phase value Φ_M^(P) is related to a reference value Φ_R^(P). A phase difference Φ_M^(P) − Φ_R^(P) is connected with a length ξ_M^(P) = [Φ_M^(P) − Φ_R^(P)]Λ/(2π) in the plane of the grating. This value ξ_M^(P) can be considered as a coordinate of an image coordinate system. Its ξ axis is directed normal to the fringes, and the reference phase Φ_R^(P) becomes zero. As was shown for a camera in Sec. 2.1, we now apply the model of central projection to a projector as well, resulting in the following equation:

$$\xi_M^{(P)} = \frac{[\Phi_M^{(P)} - \Phi_R^{(P)}]}{2\pi}\,\Lambda = \xi_H^{(P)} - c^{(P)}\,\frac{r_{11}^{(P)}[x_M - x_0^{(P)}] + r_{21}^{(P)}[y_M - y_0^{(P)}] + r_{31}^{(P)}[z_M - z_0^{(P)}]}{r_{13}^{(P)}[x_M - x_0^{(P)}] + r_{23}^{(P)}[y_M - y_0^{(P)}] + r_{33}^{(P)}[z_M - z_0^{(P)}]} + d\xi_M^{(P)}. \tag{8}$$

The superscript (P) signifies a projector parameter. Note that, unlike a camera, the projector does not provide a second coordinate. Thus, one of the three positioning parameters of the outer orientation remains undetermined because the projected fringes are oriented in only a single direction. It is required that the phase measurement [Φ_M^(P) − Φ_R^(P)] include multiples of 2π. In particular, steps on the surface, the occurrence of shading, or other irregularities must be treated carefully, which can easily be done by the use of the gray-code method for phase calculation. Note again that the reference Φ_R^(P) must be constant during the entire measurement of one object.

In addition to the phase of one object point, at the 3-D fringe projection sensor the pixel or image coordinates ξ_M^(c) and η_M^(c) of the camera are also measured simultaneously. Their coordinate system is located parallel to the columns and rows of the detector array. The image coordinates are given by

$$\xi_M^{(c)} = \xi_H^{(c)} - c^{(c)}\,\frac{r_{11}^{(c)}[x_M - x_0^{(c)}] + r_{21}^{(c)}[y_M - y_0^{(c)}] + r_{31}^{(c)}[z_M - z_0^{(c)}]}{r_{13}^{(c)}[x_M - x_0^{(c)}] + r_{23}^{(c)}[y_M - y_0^{(c)}] + r_{33}^{(c)}[z_M - z_0^{(c)}]} + d\xi_M^{(c)}, \tag{9}$$

$$\eta_M^{(c)} = \eta_H^{(c)} - c^{(c)}\,\frac{r_{12}^{(c)}[x_M - x_0^{(c)}] + r_{22}^{(c)}[y_M - y_0^{(c)}] + r_{32}^{(c)}[z_M - z_0^{(c)}]}{r_{13}^{(c)}[x_M - x_0^{(c)}] + r_{23}^{(c)}[y_M - y_0^{(c)}] + r_{33}^{(c)}[z_M - z_0^{(c)}]} + d\eta_M^{(c)}. \tag{10}$$
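As a minimal illustration of Eq. (8), the following lines (with a hypothetical grating period and phase values) convert a measured absolute phase difference into the projector image coordinate ξ_M^(P); the right-hand side of Eq. (8) is the same central projection as in Sec. 2.1, so a function like `project` above can serve as the projector's forward model.

    import numpy as np

    GRATING_PERIOD = 0.26  # Lambda in mm, hypothetical value

    def projector_coordinate(phi_M, phi_R, period=GRATING_PERIOD):
        """xi_M^(P) of Eq. (8): the phase difference scaled to a length
        in the grating plane (one fringe corresponds to one period)."""
        return (phi_M - phi_R) / (2.0 * np.pi) * period

    # a pixel whose absolute phase lies 12.25 fringes beyond the reference
    print(projector_coordinate(12.25 * 2 * np.pi, 0.0))  # 12.25 * 0.26 = 3.185 mm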

The meaning of the variables becomes clear from Fig. 2 and from the explanations of the preceding section.

From the preceding discussion, two main results can be concluded. First, the intermediate results of an optical 3-D measurement based on simple fringe projection are three measured values (one phase value and two image coordinates) per point. If we know the system parameters of the sensor, we can calculate the 3-D point. This results in the following general rule for a 3-D measurement: one must have at least k = 3 measurement values per measurement point and known system parameters to determine the 3-D coordinates of an object point. Because a simple fringe projection system does not provide any additional measurement values, self-calibration is not applicable for the analysis. To determine the system parameters, we require a calibration strategy that uses additional information from the object space. The type and amount of information depend on the number and type of parameters. Usually, calibration and measurement are separate procedures.

Second, on reflection one can find the basic condition for a simultaneous determination of all coordinates, orientation, and correction parameters (self-calibration) as k ≥ 3 + (ka + b)/l, where l is the number of measured points (l = i × j, where i and j indicate the rows and columns of the detector array), k is the number of measured quantities, a is the number of changed parameters at each measurement position, and b is the number of constant parameters. This means that for a self-calibration, a surplus measurement value is required for some measurement points.
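To see the counting argument at work, the check below applies the condition literally to hypothetical configurations: a sensor delivering k = 4 values per point easily satisfies it, while the plain k = 3 system of Sec. 2.2 does not.

    def self_calibration_possible(k, a, b, l):
        """Literal check of k >= 3 + (k*a + b)/l (the condition above).

        k -- measured quantities per point, a -- parameters that change at
        each measurement position, b -- constant parameters, l -- points.
        """
        return k >= 3 + (k * a + b) / l

    # hypothetical numbers: 6 exterior-orientation parameters change per
    # position, 23 interior/distortion parameters stay constant, and a
    # 512 x 512 camera provides l = 262144 measured points
    print(self_calibration_possible(k=4, a=6, b=23, l=512 * 512))  # True
    print(self_calibration_possible(k=3, a=6, b=23, l=512 * 512))  # False: no surplus value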

3 Concept for Self-Calibrating Measurements Based on Fringe Projection

From the preceding sections it is clear that self-calibration of sensors using structured light can be accomplished only if more than the necessary three measurement values are provided. The main task is determining how to provide these surplus measurement values in the case of fringe projection. To solve this problem, we propose to use a modified fringe projection unit.20,22 The modified fringe projection unit has the ability to rotate the grating by 90 deg. The result is a second phase value per measurement point. As shown in Sec. 2 for ξ_M^(P), we get as the fourth equation, similarly,

$$\eta_M^{(P)} = \frac{[\Psi_M^{(P)} - \Psi_R^{(P)}]}{2\pi}\,\Lambda = \eta_H^{(P)} - c^{(P)}\,\frac{r_{12}^{(P)}[x_M - x_0^{(P)}] + r_{22}^{(P)}[y_M - y_0^{(P)}] + r_{32}^{(P)}[z_M - z_0^{(P)}]}{r_{13}^{(P)}[x_M - x_0^{(P)}] + r_{23}^{(P)}[y_M - y_0^{(P)}] + r_{33}^{(P)}[z_M - z_0^{(P)}]} + d\eta_M^{(P)}. \tag{11}$$

Now, η_M^(P) is the phase value at the measured point while the grating is rotated by 90 deg. The variables were explained in the preceding sections. In the following, we propose some basic concepts for how one can realize a self-calibrating fringe projection system when applying this modified fringe projection unit.

3.1 Sensor with Camera, Projector, and Rotating Grating

The simplest way is to use only one camera and the modified fringe projection unit. Thus, as in a photogrammetric measuring system, four linearly independent measuring values are provided for every point, i.e., η_M^(P), ξ_M^(P), η_M^(c), and ξ_M^(c) [see Eqs. (8)–(11)]. Because both the camera and the modified projector are modeled in the same way as a camera in photogrammetry, the further analysis can be done using existing bundle adjustment software (a structural sketch is given below).
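The condensed sketch below shows how such an adjustment can be set up (Python with NumPy/SciPy; the parameter packing is purely illustrative, `project` is the central-projection function from Sec. 2, and distortion terms are omitted for brevity). Each object point contributes four residuals, one per measurement value.

    import numpy as np
    from scipy.optimize import least_squares

    def residuals(params, obs, n_pts):
        """Stacked residuals for one camera plus one modified projector.

        params -- flat vector: n_pts*3 object coordinates, then per device
                  (x0, y0, z0, omega, phi, kappa, c, xi_H, eta_H)
        obs    -- per point: (xi_c, eta_c) from the camera image and
                  (xi_P, eta_P) derived from the phases via Eqs. (8) and (11)
        """
        pts = params[:n_pts * 3].reshape(-1, 3)
        cam = params[n_pts * 3:n_pts * 3 + 9]
        prj = params[n_pts * 3 + 9:n_pts * 3 + 18]
        res = []
        for M, (xi_c, eta_c, xi_p, eta_p) in zip(pts, obs):
            for dev, xi_m, eta_m in ((cam, xi_c, eta_c), (prj, xi_p, eta_p)):
                xi, eta = project(M, dev[:3], dev[3:6], dev[6], dev[7], dev[8])
                res += [xi - xi_m, eta - eta_m]
        return np.asarray(res)

    # fit = least_squares(residuals, initial_guess, args=(observations, n_pts))

As in any free-network adjustment, the datum (one distance for the scale plus six coordinate-system parameters, see Sec. 1) must be fixed before the solution is unique.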


A principal difference between a measurement system consisting of two cameras, as in photogrammetry, and a system of a camera and a modified projector, as proposed here, is the mutual assignment and the type of values used for measurement. Despite the existence of well-developed algorithms, the secure search for homologous points is a difficult and time-consuming task in photogrammetry. In the case of the proposed camera/projector configuration, this problem is reduced to the assignment of the phase values to a pixel coordinate. Fortunately, this can be done automatically.

Note, however, that the new concept of a camera/projector configuration, as described in the preceding section, still has two disadvantages. One is that it uses measurement values of different types, and the other is that the parameters of both the camera and the projector are involved in the coordinate calculation. Some problems in practical application can result from these facts.

3.2 Self-Calibrating Optical 3-D Measurement System with Uniform Reference

We now propose self-calibrating arrangements that overcome the mentioned disadvantages. For this purpose, recall that a great advantage is achieved if the configuration becomes independent of the camera properties, i.e., if the measurement is carried out based on the projector alone while using only phase values as measurement values. We proposed such arrangements for non-self-calibrating fringe projection systems,4,23 known as the method of uniform reference or the principle of uniform scale representation. This method can be transferred to self-calibrating arrangements by using the modified projector, rotating the grating by 90 deg from at least two different projection directions P1 and P2, and fixing the imaging camera during the measurement with respect to the object surface. The camera then determines the measuring raster on the object surface through the pixel array alone and has no influence on the determination of the 3-D coordinates. The effect is that the number of unknown orientation and aberration parameters that must be determined is reduced, because only the parameters of the projector are included in the calculation. As the measurement result one acquires four values; however, in this case, all of them are phase values, namely η_M^(P1), ξ_M^(P1), η_M^(P2), and ξ_M^(P2). These are sufficient, and no camera parameters must be known for the coordinate calculation. Moreover, there is again one redundant value. Thus, the conditions are very similar to those in the preceding discussion, i.e., the system parameters and coordinates can also be calculated simultaneously. One must solve an equation system of the type in Eqs. (8)–(11), where η_M^(P) is replaced by η_M^(P1), ξ_M^(P) by ξ_M^(P1), η_M^(c) by η_M^(P2), and ξ_M^(c) by ξ_M^(P2) (a numerical sketch of this coordinate calculation is given below).

Note that the measurement system is extendable to a network, as is also possible in photogrammetry20–22 (see Fig. 3).

Fig. 3 Self-calibrating camera-fringe projector network.

For this, the object is illuminated from different positions by several projectors or by one projector in succession, giving more than the minimum of four phase values per measurement point, i.e., one acquires η_M^(Pn) and ξ_M^(Pn), where n is the number of projector positions used. The advantage of using this strategy is that an increasing number of projection directions decreases uncertainty and the coordinates become more reliable. Note that the projection directions should be chosen in relation to the proposed configurations in photogrammetry.
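Once the projector parameters are known from the adjustment, each phase-derived projector coordinate fixes a plane through the projection center on which the object point must lie, and the four values per point are intersected in a least-squares sense. The sketch below is our illustrative formulation of this coordinate calculation from Eqs. (8) and (11), with distortion neglected and the rotation matrix rows taken as in Eq. (4).

    import numpy as np

    def projector_plane(coord, coord_H, c, R, X0, row):
        """Plane a.M = d implied by one measured projector coordinate.

        Rearranging Eq. (8)/(11) without distortion gives
        (coord - coord_H) * z' + c * u' = 0 with (x', y', z') = R (M - X0),
        where u' is x' for a xi value (row=0) and y' for an eta value (row=1).
        """
        a = (coord - coord_H) * R[2] + c * R[row]
        return a, a @ np.asarray(X0)

    def point_from_phases(planes):
        """Least-squares intersection of the >= 4 planes collected per pixel."""
        A = np.array([a for a, _ in planes])
        d = np.array([d for _, d in planes])
        M, *_ = np.linalg.lstsq(A, d, rcond=None)
        return M

Solving these stacked linear equations per camera pixel yields x_M, y_M, and z_M without any camera parameter entering the result.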

4 Self-Calibrating Arrangements

4.1 Single 3-D View

As a result of the previous section, the following measurement strategy can be used to obtain a single 3-D view in a self-calibrating measurement regime. A camera and a projector are each mounted on a tripod. The camera is directed at the object surface and remains in the same position during the whole measurement, while the projector is moved and illuminates the object from n different directions. In every position, two sets of images are acquired, with the grating rotated by 90 deg between the two sets. A set of images itself contains seven gray-code sequences and four sinusoidal, 90-deg phase-shifted intensities to obtain the absolute phase values (see the sketch below). Performing the gray-code analysis results in n sets of two phase values Φ_i,j^(P) and Ψ_i,j^(P), or projector coordinates ξ_i,j^(P) and η_i,j^(P). For the calculation of the system parameters, only a few (100 to 1000) of these phase values are chosen as starting points (homologous points) for the bundle adjustment calculation, to perform the calculation in a reasonable time. In the second step, using these calculated system parameters, the 3-D coordinates are calculated at all remaining measurement points by solving the equation system [Eqs. (8)–(11)]. As a result, one obtains a single 3-D view of the object. The measurement field is limited only by the following: first, it must be imaged by the camera and, second, the field must be illuminated from at least two different projectors (or projector positions). The maximum number of points that can be measured is determined by the number of resolvable pixels of the camera, and there is no problem assigning measurement values if the camera remains in the same position relative to the object.
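A per-pixel sketch of this phase recovery is given below (NumPy; the image-stack layout, the threshold handling, and the assumed phase-shift order are illustrative assumptions rather than the authors' implementation).

    import numpy as np

    def wrapped_phase(I):
        """Wrapped phase from four 90-deg shifted images I[0..3],
        assuming I_n = A + B*cos(phi + n*pi/2)."""
        return np.arctan2(I[3] - I[1], I[0] - I[2])

    def fringe_order(G, thresh):
        """Fringe order from seven binarized gray-code images G[0..6], MSB first."""
        bits = (G > thresh).astype(np.uint16)
        order = np.zeros(G.shape[1:], dtype=np.uint16)
        for b in bits:                      # gray-to-binary: b_i = b_{i-1} XOR g_i
            order = (order << 1) | (b ^ (order & 1))
        return order

    def absolute_phase(I, G, thresh):
        """Absolute phase Phi = wrapped phase + 2*pi*fringe order (Sec. 4.1).
        The half-fringe offset between code edges and phase wraps is ignored
        here; a real implementation must align the two before unwrapping."""
        return wrapped_phase(I) + 2.0 * np.pi * fringe_order(G, thresh)

Seven gray-code bits distinguish up to 128 fringe orders, which comfortably covers the 30 to 100 projected lines mentioned in Sec. 2.2.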

4.2 Whole-Body Measurement

Fig. 4 Basic camera-fringe projector arrangement for whole-body measurement.

As mentioned, it is quite difficult using the usual fringe projection systems to obtain a full-body view, i.e., to measure 360 deg around. For this, the sensor or the object must be moved into multiple, overlapping measuring positions to

view the entire surface. The resulting point clouds taken from the different views must then be merged into a common coordinate system to obtain the complete 3-D view.14–19 Here we propose some arrangements with which a full-body view can be obtained within a self-calibrating measurement procedure. The necessary merging of the single views takes place fully automatically and is done without any markers on the object surface, object features, other merging procedures, or a highly accurate object/sensor handling system.24 To achieve an automatic, self-calibrating full-body view, the following measurement condition must be fulfilled: two or more cameras must image the object from the desired different views. During the measurement procedure, we must ensure that neighboring cameras measure phase values at the same time for more than two projector positions, where from each projector position the mentioned two sets of fringe systems rotated by 90 deg are projected (see Fig. 4). Note that the camera viewing fields do not have to overlap. The phase values in the overlapping areas of the projections can be used as tie points, as in photogrammetry, to calculate the orientation of the projectors relative to each other. This can be done step by step around the complete object. As a result, one achieves different single 3-D views within one world coordinate system. Note that because all of the calculations are performed within one world coordinate system, no further fitting and/or merging of the single views must be performed to achieve a whole-body view. Because of this, an error between the overlapping views (patches) cannot occur, i.e., a homogeneous overall accuracy is achieved. In the following, we propose some possible mobile and stationary arrangements for whole-body measurement.

4.2.1 Mobile camera-projector network

The simplest arrangement consists of one mobile projection unit and two mobile cameras, as shown in Fig. 5. Both cameras capture the images simultaneously, having a small overlapping area, while the two sets of fringes are projected from at least two different positions. Then the position of one of the cameras can be changed to get a different view of the object. The projection of the fringes is now repeated from other positions. Now the position of the other camera can be changed, and a further set of fringes is projected. This procedure can be repeated as often as necessary to obtain the whole-body measurement, i.e., one can move step by step completely around the object.

Fig. 5 Mobile self-calibrating camera-fringe projector network for whole-body measurement.

The calculation of all necessary orientation parameters and 3-D coordinates takes place within one calculation after the acquisition of the measurement values is complete. Note that no information concerning the position of the projector(s) or the cameras is necessary during the whole measurement procedure.

4.2.2 Stationary measurement system (type A)

Fig. 6 Schematic representation of a type A self-calibrating measurement system for whole-body measurement.

A possible stationary self-calibrating measurement setup for whole-body measurement according to the preceding concept is described in the following24 (see Fig. 6). The object O to be measured is put on the rotary stage (RS). A frame H is attached to the table to mount different cameras (in the example, C1 to C4) that image the object from different directions. The fringe projector P is mounted near the rotary stage, where its orientation is not known. In the measurement procedure, the object as well as the cameras are rotated with respect to the projection unit, giving different projection directions, where the angles of rotation are not measured. At each rotation position, the mentioned two grating sequences rotated by 90 deg are projected onto the object. For the data acquisition, all of the cameras capture these fringe pictures simultaneously. Based on these fringe pictures, when using at least two rotation positions, at least four phase values for each pixel of the camera can be calculated. Using these phase values, the 3-D coordinates as well as all of the orientation parameters are calculated as already described. As a result, in our example, four different 3-D views of the object are obtained that fit automatically into the entire 3-D view because they are measured in the same world coordinate system. Of course, it is possible to use more than four cameras at different positions. Note that within a type A system, it is possible to use more than one projection unit simultaneously or in succession. As shown in Fig. 6, for example, a second projection unit P2 provides the opportunity to also measure the object from the underside within one measurement procedure. Of course, for this one must use four more cameras (not shown in the figure) that observe the object from the underside.

4.2.3 Stationary measurement system (type B)

Fig. 7 Schematic representation of a self-calibrating type B measurement system for whole-body measurement.

The second system (type B) is based on the inversion of the necessary motions used in the type A measurement system25 (see Fig. 7). Here the object to be measured is put on a holder that is mounted on a frame. At this frame, the cameras (c1 to c5, for example) are fixed so that the position of the camera(s) with respect to the object is fixed during the whole measurement procedure. To achieve the necessary different projection directions for the self-calibration (i.e., at least two), the projector P, illuminating the object via the mirror m, is rotated with respect to the fixed object and camera(s). In the measurement procedure, the projection unit is rotated with respect to the fixed cameras and object, and the angles of rotation need not be measured. The data acquisition procedure is the same as described for the type A system. At each rotation (projection) position, the mentioned two grating sequences rotated by 90 deg are projected onto the object. All cameras capture these fringe pictures simultaneously. The following analysis to calculate the 3-D data proceeds as described earlier, yielding the complete 3-D view. The advantage of the type B measurement system is that no rotation of the object takes place, making it possible to measure heavy objects or to use the system directly in a production line.

5 Measurement Accuracy

We now present some remarks on the achievable coordinate measurement accuracy of such self-calibrating fringe projection systems. Here we focus on the arrangements where the measurement is carried out using only the measured phase values. Therefore, the camera parameters (i.e., external orientation, internal orientation, and image quality) are not included in the measurement/bundle adjustment calculation procedure and do not influence the measurement accuracy in any way. The achievable coordinate measurement accuracy therefore depends only on:

1. the phase measurement accuracy, mainly as influenced by the law of error propagation

2. the accuracy of the fringe projection unit parameters (i.e., the accuracy of the determination of the external orientation, internal orientation, and distortion parameters)

3. the image coordinate measurement accuracy of the projection unit.

The last two contributions themselves depend on the phase measurement accuracy.

5.1 Fringe Projection Unit/Phase Measurement Accuracy

The provision of the two grating sequences (gray code combined with four 90-deg phase-shifted sinusoidal intensities) rotated by 90 deg to each other makes it necessary to use a digital, pixel-addressable fringe projection unit. Different types of such projection principles can be used: a pixel-addressable liquid crystal display26 (LCD, matrix LCD), the driven image light amplifier (D-ILA) projection principle,27 and digital micromirror device™ (DMD, by Texas Instruments) arrays.28 In our experiments, we used a DMD projector with 1024×768 pixels, because with this projection unit a phase measurement accuracy of up to Λ/400 can be achieved when taking into account the linearization of the transfer characteristic of the intensities24 (gray levels).

We must also consider that the realizable phase measurement accuracy is not necessarily independent of the grating period. The number of still-resolvable grating lines is dominated by the number of CCD elements imaging the measurement field. At least 5 to 10 receiving CCD elements per grating line are required to achieve the mentioned phase measurement accuracy.
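This sampling constraint is easy to check for a given configuration (illustrative numbers):

    camera_pixels = 512        # CCD elements across the measurement field
    px_per_line = 5            # lower bound of the required 5 to 10 elements
    print(camera_pixels // px_per_line)  # at most ~102 lines; 30 to 100 imaged lines is safe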


5.2 Parameters of the Fringe Projection Unit

The parameters of the fringe projection unit are determined during the measurement based on the mentioned bundle adjustment calculation, which is well known in photogrammetry. The accuracy depends strongly on the right choice of the positions of the phase values that are chosen as homologous points and on the phase measurement accuracy. We showed20 that the system parameters can be determined with an accuracy of about 0.001 mm and 0.001 deg, which provides the basis for good 3-D coordinate measurement accuracy. In contrast to photogrammetry, in the case of the camera/fringe projector configuration the problem of a secure search for homologous points is strongly reduced because the phase values are assigned only to the pixel coordinates of the camera. Fortunately, this happens automatically. Furthermore, some special orientations and/or positions of the projection unit ensure a minimal error budget. Such arrangements are well known in photogrammetry.29 In the proposed experimental arrangements, for example, this condition is fulfilled if a projection angle of about 35 deg is chosen.

5.3 Image Coordinate Measurement Accuracy

In analogy to photogrammetry, the image coordinate measurement accuracy of a projection unit can be defined as the uncertainty of the measurement of a length in the grating plane of a fringe projector, which is equivalent to a phase difference. Hence, the image coordinate measurement accuracy depends on the phase measurement accuracy and on the number of grating lines that are still resolvable over the measurement area. The number of resolvable grating lines is limited by the number of CCD elements imaging the object being measured and by the technical conditions of the fringe projector (i.e., how many fringes or lines can be "produced"). The absolute image coordinate accuracy ΔC is simply given by

$$\Delta C = G\,(\Delta P / L), \tag{12}$$

where G is the size (height, length) of the grating, L is the number of grating lines in the grating area, and ΔP is the measurable fraction of a grating line as calculated from the phase measurement accuracy. Typically, the image coordinate accuracy ΔC should be better than 0.001 mm to ensure a coordinate measurement accuracy of 10^-5 of the measured field size.20,27 For example, this is possible in the case of the projection of L = 50 fringes onto the object, using a grating of the size G = 13 mm (as in the case of a DMD array), and a phase measurement accuracy of better than 2π/260. Note also that an increasing number of projection positions decreases uncertainty and improves the reliability of the coordinates.
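Plugging the quoted numbers into Eq. (12) confirms the claim (minimal check in Python):

    G = 13.0          # grating size in mm (DMD array)
    L = 50            # grating lines projected onto the object
    dP = 1.0 / 260    # fraction of a line resolved by a 2*pi/260 phase accuracy

    print(G * (dP / L))   # Eq. (12): 0.001 mm image coordinate accuracy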

6 Experimental Results

The experiments for the 3-D shape measurement were performed using the explained stationary type A and B setups, both using the mentioned DMD projection unit.

Fig. 8 Sketch of the groove plate (200×200 mm², groove depths 160, 80, 40, 20, 10, and 5 μm).

First, we show that the achievable coordinate measurement accuracy is much higher than that of conventional fringe projection systems. The test object used was a groove plate with a 200×200-mm² extension, measured within an illuminated field 600 mm in diameter. The plane surface has six milled grooves with depths between 5 and 160 μm, the depth doubling from groove to groove (see Fig. 8). During the measurement, eight different projection directions and one camera were used to acquire a single view of the surface. As input values (homologous points) for the bundle adjustment calculation, only about 200 points out of 512×512 were chosen automatically. However, this relatively low number of points is sufficient for the exact determination of all of the orientation parameters. For example, if about 200 points are selected, the number of unknowns is about 700 (200×3 coordinates, 8×6 parameters of the outer orientation, 3 parameters of the inner orientation of the projector, and up to 20 parameters for distortion correction), while the redundancy is about 2500. As mentioned, the choice of points takes into consideration the mathematical conditions as well as the accuracy of the measurement values. Furthermore, note that all coordinates were calculated directly from the measurement values; no filtering or averaging was applied. The results show:

1. The groove depths of 160, 80, 40, and 20 μm are measured very exactly.

2. From the intersection (see Fig. 9) it is obvious that the obtained measurement accuracy is σ < 10 μm. Note, however, that the measurement field was 60 cm in diameter, i.e., the achieved measurement accuracy was about 10^-5 of the measurement field.

Fig. 9 Intersection of the groove plate.
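The unknown and redundancy counts quoted above can be reproduced by simple counting (reading two phase values per projector position and point, following Sec. 3; the distortion-term count is taken from the text):

    points, positions = 200, 8
    unknowns = points * 3 + positions * 6 + 3 + 20   # coordinates + outer + inner + distortion
    measurements = points * positions * 2            # two phase values per position and point
    print(unknowns, measurements - unknowns)         # 671 unknowns, redundancy 2529 (~2500)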

The next example gives some idea of the possibilities of achieving a full-body measurement. To obtain the results shown in Fig. 10 for measuring the head of a statue, a type A system was used, where the data acquisition occurred simultaneously with four cameras and eight different projector positions (i.e., projection directions) were used. Note that the whole measurement, including the calculation time (mainly influenced by the bundle adjustment), was actually carried out with a PII 400-MHz PC within 3 min. Figures 10(a) to 10(d) show the single views seen by the four cameras, and Fig. 10(e) shows the merging of the four single views to form the total 3-D view. In the single views, regions are clearly recognizable where no measurement points were obtained due to shadowing or specular effects. Such failures do not appear in the total 3-D view [see Fig. 10(e)]. Note that no matching errors can occur, because the views are measured in a common coordinate system.

Fig. 10 Single 3-D views of the head of a statue measured with cameras (a) C1, (b) C2, (c) C3, and (d) C4; measurement volume 400×200×200 mm³. (e) Total 3-D view of the head of a statue.

A second example shows the possibility of measuring the top and bottom sides of an object simultaneously (see Fig. 11) while measuring a designer model of a can. In Fig. 11, the lines of intersection parallel to the bottom of the can are shown. It is obvious that the can is measured all the way around.

Fig. 11 Intersection across the measured 3-D data of a designer model of a can.

The measurement example in Fig. 12 demonstrates that it is possible to measure technical, specular surfaces. Here the metal induction pipe of a car was measured. For this measurement, a type B setup with five cameras and eight projector positions (i.e., eight rotation angles) was used. Note that awkward areas, like zones with either specular reflection or shadows, can be avoided when using more than two projection directions. This is because the shadows or specular reflections are shifted step by step across the surface of the object.

Fig. 12 The 3-D point cloud of an induction pipe.

Due to the fact that only phase values are used for the calculation of the coordinates and system parameters, the image quality of the cameras has no influence on the measurement accuracy, and arrangements can be realized for a simultaneous multiresolution measurement. This means that one can use cameras observing completely different field sizes simultaneously, so that the measured point clouds are automatically fit together, giving the ability to combine a low-resolution overall view with a detailed high-resolution view. The example shown in Fig. 13 combines views with field sizes of 100 and 400 mm, respectively, while measuring the induction pipe. It is obvious that some geometric details (e.g., the drilled holes) are clearly better resolved.

Fig. 13 High-resolution detail of the induction pipe.

7 Conclusion

Self-calibrating fringe projection systems were created based on the concepts of photogrammetry. We showed that a surplus measurement value is necessary for the purpose of self-calibration. To achieve this, a special fringe projection technique with a rotating grating and multiple cameras and projection positions was introduced. The advantages of such measurement concepts are:

1. The realization of in-process self-calibrating systems is possible without the use of calibration equipment.

2. A full-body measurement can be realized without the use of matching procedures or target points, so that no merging errors occur.

3. Because only phase values are used to calculate the coordinates and system parameters, the image quality of the cameras has no influence on the measurement accuracy, and arrangements are possible that ensure the same accuracy in the x, y, and z coordinates. Simultaneous multiresolution is also possible.

Different measurement setups for self-calibrating 3-D coordinate measurement were introduced to provide the listed possibilities. First measurements verified the concept of a self-calibrating system. The uncertainty of the measurements obtained was σ ≈ D×10^-5, where D is the extension of the measurement field. Using the described measurement procedure and equipment, alternative user-friendly setups can be developed covering a great number of applications, such as quality control, 3-D contouring, and obtaining 3-D data for CAD/CAM or reverse engineering.


References

1. M. Haliou and H. C. Liu, "Optical three dimensional sensing by phase measuring profilometry," Opt. Lasers Eng. 11, 185–215 (1989).
2. R. Zumbrunn, "Automatic fast shape determination of diffuse reflecting objects at close range, by means of structured light and digital phase measurement," in Proc. ISPRS in Photogrammetry (1987).
3. A. Jeffry, R. C. Jalkio, S. Kim, and K. Case, "Three dimensional inspection using multistripe structured light," Opt. Eng. 24(6), 967–974 (1985).
4. W. Schreiber, G. Notni, P. Kühmstedt, J. Gerber, and R. Kowarschik, "Optical 3D-coordinate measuring system using structured light," Proc. SPIE 2782, 620–627 (1996).
5. F. W. Wahl, "A coded light approach for depth map acquisition," in Mustererkennung, G. Hartmann, Ed., Springer Verlag, Berlin (1986).
6. G. Sansoni, M. Carocci, S. Lazzari, and R. Rodella, "A three-dimensional imaging system for industrial applications with improved flexibility and robustness," J. Opt. A: Pure Appl. Opt. 1, 83–93 (1999).
7. M. Kujawinska and W. Osten, "Fringe pattern analysis methods: up-to-date review," Proc. SPIE 3407, 56–66 (1998).
8. M. Idesawa, T. Yatagai, and T. Soma, "Scanning moiré method and automatic measurement of 3D-shapes," Appl. Opt. 1006, 67–89 (1977).
9. L. Pirodda, "Shadow and projection moiré techniques for absolute or relative mapping of surface shapes," Opt. Eng. 21(4), 641–649 (1982).
10. H. A. Beyer, V. Uffenkamp, and G. van der Vlugt, "Quality control in industry with digital photogrammetry," in Optical 3D-Measurement Techniques III, A. Gruen and H. Kahmen, Eds., pp. 29–38, Vienna (1995).
11. K. Regensburger, Photogrammetrie, Verlag für Bauwesen, Berlin (1990).
12. K. Kraus, Photogrammetrie, Vols. 1 and 2, Ferd. Dümmler Verlag, Bonn (1996).
13. B. Breuckmann, F. Halbauer, E. Klaas, and M. Kube, "3D-metrologies for industrial applications," Proc. SPIE 3102, 20–29 (1997).
14. C. Reich, "Photogrammetric matching of point clouds for 3D-measurement of complex objects," Proc. SPIE 3520, 100–110 (1998).
15. R. Massen, J. Gässler, C. Konz, and H. Richter, "From a million of dull point coordinates to an intelligent CAD description," in Proc. Fringe '93, pp. 194–203, Akademie Verlag (1993).
16. P. J. Neugebauer, "Reconstruction of real-world objects via simultaneous registration and robust combination of multiple range images," Int. J. Shape Modeling 3, 71–90 (1997).
17. H. Schönfeld, G. Häusler, and S. Karbacher, "Reverse engineering using optical 3D sensors," Proc. SPIE 3313, 115–125 (1998).
18. P. Kühmstedt, G. Notni, W. Schreiber, and J. Gerber, "Full-hemisphere automatical optical 3D measurement system," Proc. SPIE 3100, 261–265 (1997).
19. A. Asundi and W. Zhou, "Mapping algorithm for 360-deg profilometry with time delayed integration imaging," Opt. Eng. 38(2), 339–344 (1999).
20. V. Kirschner, W. Schreiber, R. Kowarschik, and G. Notni, "Self-calibrating shape-measuring system based on fringe projection," Proc. SPIE 3102, 5–13 (1997).

21. W. Schreiber, V. Kirschner, R. Kowarschik, and G. Notni, "Managing some calibration problems in fringe projection shape measurement systems," in Proc. Fringe '97, W. Jüptner and W. Osten, Eds., pp. 443–450, Akademie Verlag, Berlin (1997).
22. W. Schreiber and V. Kirschner, "Verfahren zur Bestimmung der räumlichen Koordinaten von Gegenständen und/oder deren zeitlicher Änderung und Vorrichtung zur Anwendung dieses Verfahrens," PCT patent pending (1996).
23. R. Kowarschik, P. Kühmstedt, and W. Schreiber, "3-coordinate measurements with structured light," in Proc. Fringe '93, W. Jüptner and W. Osten, Eds., pp. 204–208, Akademie Verlag, Berlin (1993).
24. G. Notni, W. Schreiber, M. Heinze, and G. Notni, "Flexible autocalibrating full-body 3D measurement system using digital light projection," Proc. SPIE 3824, 79–88 (1999).
25. W. Schreiber and G. Notni, "Vorrichtung zur Bestimmung der räumlichen Koordinaten von Gegenständen," PCT patent pending (1998).
26. Product information, matrix-LCD, Jenoptik AG, Jena, Germany (1999).
27. Product information, JVC, Friedberg/Hessen, Germany (1998).
28. L. J. Hornbeck, "Digital light processing and MEMS: timely convergence for a bright future," in Plenary Session, SPIE Micromachining and Microfabrication '95, Austin, Texas (Oct. 24, 1995). Color reprints available from Texas Instruments Digital Imaging Group, (214) 995-2426. See also http://www.ti.com/dlp.
29. C. S. Fraser, "Network design," in Close Range Photogrammetry and Machine Vision, K. B. Atkinson, Ed., pp. 256–281, Whittles Publishing, Caithness (1996).

Wolfgang Schreiber received his Dipl.-Phys. in 1962 and his Dr. rer. nat. in 1971 from the Friedrich Schiller University, Jena, where he has worked on laser research, holography, and holographic interferometry at the Institute of Applied Optics. In 1992 he joined the Department of Optical Systems at the Fraunhofer Institute of Applied Optics and Precision Engineering, Jena, and he is currently working on optical 3-D measurement methods.

Gunther Notni received his diploma and his PhD degree in physics from the Friedrich Schiller University, Jena, in 1988 and 1992, respectively. Since 1992 he has been a staff member at the Fraunhofer Institute for Applied Optics and Precision Engineering, where he has headed the Optical Systems Department since 1994. His research interests include optical 3-D shape measurement, methods of surface characterization, interferometry, speckle interferometry, and the application of phase conjugation in optical metrology using photorefractive crystals.
