Phase-unwrapping algorithm for the measurement of three-dimensional object shapes Hong Zhao, Wenyi Chen, and Yushan Tan

A new phase-unwrapping algorithm is described that uses two phase images with different precision in the unwrapping; this technique can produce an approximately correct unwrapping in the presence of discontinuities. We introduce it into the measurement of a three-dimensional object shape and also present the experimental results.

1. Introduction

The technique of phase-measuring profilometry has been extensively studied.1 The intensity recorded in the measurement is a cyclical function of the phase, and the computation of the phase by any inverse trigonometric function provides only principal values of the phase, values that lie between π and -π rad. Phase unwrapping is the determination of the true phase from the modulo 2π data. Early phase-unwrapping algorithms were based on distinguishing the phase change between sample points,2,3 and these techniques were not tolerant of discontinuities and noise in the phase. Recently some attempts have been made to deal with the more difficult problem of true discontinuities in the phase map. Bone suggested identifying these discontinuities by searching for the regions in which the phase curvature exceeds some threshold.4 However, this method fails when the first derivative of the phase is continuous across a boundary. More recently, Huntley and Saldner described a method that can be applied successfully to this constrained problem domain.5 The basic idea behind this method is that the phase at each pixel is treated as a function of time; unwrapping is then carried out along the time axis for each pixel independently of the others. The Huntley and Saldner technique requires only that the phase can be changed slowly from an unambiguously unwrappable phase to the final desired phase.
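The principal-value behavior described above is easy to reproduce numerically. The following sketch (NumPy; the variable names are ours) shows a true phase being mapped to its principal value and the integer fringe order that phase unwrapping must recover:

```python
import numpy as np

# A true phase of 7.5 rad exceeds pi, so an arctangent-based
# measurement can only return its principal value in (-pi, pi].
true_phase = 7.5
wrapped = np.angle(np.exp(1j * true_phase))  # principal value of the phase

# The lost information is the integer fringe order n in
# true_phase = wrapped + 2*n*pi; recovering n is phase unwrapping.
n = round((true_phase - wrapped) / (2 * np.pi))
assert abs(wrapped + 2 * n * np.pi - true_phase) < 1e-12
```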

The authors are with the Laser and Infrared Laboratory, Department of Mechanical Engineering, Xi'An Jiaotong University, Xian, Shaanxi 710049, China. Received 16 August 1993; revised manuscript received 6 January 1994. 0003-6935/94/204497-04$06.00/0. © 1994 Optical Society of America.

The original unambiguously unwrappable initial phase field need not be exactly proportional to the final desired phase. The algorithm developed in this paper is sufficiently different from, and significantly simpler than, that of Huntley and Saldner. This technique is based on two phase images with different precision in the unwrapping, and the fringe orders are assigned by the lower-precision phase pattern. The phase unwrapping of every point is done independently, and we can obtain correct phase values in the presence of discontinuities. This paper introduces the process of this phase unwrapping in detail; phase-measuring profilometry with the phase-shifting algorithm is addressed briefly in Section 2. Finally, we apply this technique to the measurement of a three-dimensional (3-D) object shape, and the results are presented. However, the technique should also be applicable to another problem area in phase unwrapping, that of regional undersampling; further investigations are needed.

2. Phase-Measuring Profilometry with the Phase-Shifting Algorithm

The general block diagram of a phase-measuring profilometry system is given in Fig. 1. When a sinusoidal optical field is projected onto a 3-D diffuse object, the mathematical representation of the deformed optical field may be expressed in the general form

I(x, y) = r(x, y)[a(x, y) + b(x, y)cos φ(x, y)],  (1)

where r(x, y) is the object reflectivity characterizing the nature of the surface, a(x, y) represents the background intensity, and b(x, y)/a(x, y) is the fringe contrast. The phase function φ(x, y) characterizes the fringe deformation and is related to the object shape Z = h(x, y).

10 July 1994 / Vol. 33, No. 20 / APPLIED OPTICS 4497

Fig. 1. General block diagram for a shape-measurement system. R, reference plane.

The phase φ(x, y) can be retrieved from Eq. (1) by use of phase-shifting algorithms.6,7 Here we select phase shifting by π/2 increments, and the irradiances detected by the camera are

I₁(x, y) = r(x, y)[a(x, y) + b(x, y)cos φ(x, y)],  (2)

I₂(x, y) = r(x, y)[a(x, y) - b(x, y)sin φ(x, y)],  (3)

I₃(x, y) = r(x, y)[a(x, y) - b(x, y)cos φ(x, y)],  (4)

I₄(x, y) = r(x, y)[a(x, y) + b(x, y)sin φ(x, y)].  (5)

From these four equations the phase function may be readily computed as

φ(x, y) = arctan[(I₄ - I₂)/(I₁ - I₃)].  (6)

We can also obtain the phase φᵣ(x, y) of a reference plane by using the above method. Consequently the phase distribution φ₀(x, y) produced by the object shape Z = h(x, y) is acquired:

φ₀(x, y) = φ(x, y) - φᵣ(x, y).  (7)

3. Phase Unwrapping

The phase obtained from Eq. (7) is indeterminate to an additive constant of 2nπ because the arctangent is defined over a range from -π to π. The true phase is given by

Φ(x, y) = φ₀(x, y) + 2n(x, y)π,  (8)

where n(x, y) is an integer. Equation (8) shows that phase unwrapping is only the process of determining n(x, y). In order to compute the value of n(x, y) at every point (x, y), we can obtain an alternative phase θ₀(x, y) of the 3-D object by using Eqs. (2)-(7) in the same optical system but with a reduced sensitivity, so that the phase θ₀ has a range of less than 2π. The object shape h(x, y) can be expressed as follows:

h(x, y) = k₁Φ(x, y),  (9)

h(x, y) = k₂θ₀(x, y),  (10)

where k₁ and k₂ are separate constants, both related to the structure of the optical system; their calibration is described in Section 4. Thus from Eqs. (8)-(10) we can obtain an expression for n(x, y):

n(x, y) = INT[(Φ(x, y) - φ₀(x, y))/(2π)]
        = INT[(h(x, y)/k₁ - φ₀(x, y))/(2π)]
        = INT[((k₂/k₁)θ₀(x, y) - φ₀(x, y))/(2π)],  (11)

where INT is an operator equal to the integer part of its argument. From Eq. (11) it can be seen that n(x, y) can be readily obtained when k₁ and k₂ are known. But it is also clear that Eq. (11) is sensitive to noise, because the zero-order phase is used to calculate the fringe order of a more precise phase distribution: noise and calibration errors can introduce inconsistencies between the two phase profiles, especially when the argument of Eq. (11) approaches an integer. At the same time, the information in the two initial phase fields is not used sufficiently by Eq. (11). Therefore we have to revise the result n(x, y) obtained from Eq. (11). Now we define the difference between the high-precision height profile and the low-precision height profile as the quantity

Δ(m) = k₁[φ₀(x, y) + 2mπ] - k₂θ₀(x, y)  (12)

[m = n(x, y), n(x, y) ± 1]. To simplify, we can define

E(m) = Δ(m)/k₁ = φ₀(x, y) + 2mπ - (k₂/k₁)θ₀(x, y)  (13)

[m = n(x, y), n(x, y) ± 1]. |E(m)| attains its minimal value at some m = m₀(x, y). We then obtain an exact phase-unwrapping distribution of Φ:

Φ(x, y) = φ₀(x, y) + 2m₀(x, y)π  {m₀(x, y) ∈ [n(x, y), n(x, y) ± 1]}.  (14)

The above process shows that the phase-unwrapping value at an arbitrary point (x, y) can be calculated from φ₀(x, y) and θ₀(x, y) alone.
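As an illustration of Eqs. (6) and (11)-(14), the sketch below simulates the two-precision procedure on a one-dimensional profile. The simulated phase, the noise level, and the constants k₁ and k₂ are our own assumptions for the demonstration, not the authors' data, and INT is realized here as a floor operation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "true" unwrapped phase Phi (spans several fringes).
x = np.linspace(0.0, 1.0, 200)
Phi = 12.0 * np.sin(2.0 * np.pi * x) + 8.0 * x           # true phase of Eq. (8)
k1, k2 = 0.05, 0.30                                      # assumed calibration constants
h = k1 * Phi                                             # height, Eq. (9)
theta0 = h / k2 + rng.normal(0.0, 0.01, x.size)          # noisy low-precision phase, Eq. (10)

# Four pi/2-shifted intensity frames, Eqs. (2)-(5), with r = a = b = 1.
I1 = 1.0 + np.cos(Phi)
I2 = 1.0 - np.sin(Phi)
I3 = 1.0 - np.cos(Phi)
I4 = 1.0 + np.sin(Phi)
phi0 = np.arctan2(I4 - I2, I1 - I3)                      # wrapped phase, Eq. (6)

# Initial fringe order from the low-precision phase, Eq. (11).
n = np.floor(((k2 / k1) * theta0 - phi0) / (2.0 * np.pi))

# Refinement, Eqs. (12)-(14): among m = n-1, n, n+1 pick the m that
# minimizes |E(m)| = |phi0 + 2*m*pi - (k2/k1)*theta0|.
candidates = np.stack([n - 1, n, n + 1])
E = np.abs(phi0 + 2.0 * candidates * np.pi - (k2 / k1) * theta0)
m0 = candidates[np.argmin(E, axis=0), np.arange(x.size)]
Phi_unwrapped = phi0 + 2.0 * m0 * np.pi                  # Eq. (14)

assert np.allclose(Phi_unwrapped, Phi, atol=1e-6)
```

Note that each pixel is unwrapped independently of its neighbors, which is what makes the method tolerant of discontinuities in the phase map.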

4. k₁ and k₂ Calibration

The optical geometry for the most general projection and imaging system is shown in Fig. 2. P is the center of the exit pupil of the projection lens, and I is the center of the entrance pupil of the imaging lens. G is a grating with spatial frequency f and a sinusoidal intensity transmission, and the direction of the grating lines projected onto either the reference plane or the object is normal to the plane of the figure. D_C is a CCD detector, and the scan direction of the CCD camera is in the X direction. The detector D_C can measure the phase Φ_C at point C on the reference plane as well as Φ_D at point D on the object. A mapping algorithm can then determine the point A on the reference plane such that Φ_A = Φ_D.

Fig. 2. Geometry for projecting and imaging a grating pattern on the object.

This permits computation of the geometric distance AC:

AC = Φ_CD/(2πf),  (15)

where Φ_CD can be obtained from Eq. (14). From the similar triangles PDI and ACD, the object height is

h(x, y) = AC(s/d)(1 + AC/d)⁻¹,  (16)

where d and s are distances, as shown in Fig. 2. In most practical situations d ≫ AC, and Eq. (16) can be simplified to

h(x, y) = AC(s/d).  (17)

Substituting Eq. (15) into Eq. (17), we have

h(x, y) = [s/(2πfd)]Φ_CD.  (18)
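With the geometry fixed, Eq. (18) maps an unwrapped phase directly to a height. A minimal numeric sketch, using the s and d values quoted in Section 5 and an assumed fringe spacing (these particular numbers are illustrative, not a reproduction of the authors' calibration):

```python
import math

s = 1200.0      # mm, distance in Fig. 2 (value used in the experiment, Section 5)
d = 300.0       # mm
f = 1.0 / 0.5   # lines/mm, assuming a 0.5 mm fringe spacing on the reference plane

# Eq. (18): h = [s / (2*pi*f*d)] * Phi_CD
k = s / (2.0 * math.pi * f * d)   # mm of height per radian of unwrapped phase
Phi_CD = 3.0 * 2.0 * math.pi      # an unwrapped phase of three full fringes

h = k * Phi_CD
# Each 2*pi of phase corresponds to one fringe spacing scaled by s/d.
assert math.isclose(h, 3.0 * 0.5 * (s / d))
```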


Comparing Eq. (18) with Eqs. (9) and (10), we get, respectively,

k₁ = s/(2πf₁d),  (19)

k₂ = s/(2πf₂d),  (20)

where f₁ and f₂ are the spatial frequencies of the two gratings, respectively.

5. Experiment and Results

For the experimental measurements we recorded sinusoidal gratings by photographing the moiré fringes that appeared when two Ronchi gratings of the same frequency were superimposed. A conventional slide projector was modified to operate with a grating clip mounted on a stepper-motor-driven translation stage. Deformed grating images were recorded on a two-dimensional CCD array interfaced to an IBM PC for data acquisition and processing.

An exemplary measurement is illustrated in Fig. 3. In this experiment we chose s = 1200 mm and d = 300 mm. The object measured is a plaster model of human teeth with many sharp segments, as shown in Fig. 3(a). Figures 3(b) and 3(c) are the wrapped phases of the tooth model obtained by use of the phase-shifting algorithm when the fringe spacings are 0.5 and 3 mm, respectively.

Fig. 3. Exemplary measurement: (a) object, (b) the wrapped phase with many-order fringes, (c) the wrapped phase with only the zero-order fringe, (d) the unwrapped phase obtained by the current algorithm.

There is less than a 2π

phase in Fig. 3(c), so it does not require unwrapping. Figure 3(d) is the unwrapped phase of Fig. 3(b) obtained by use of the algorithm developed in this paper. Finally, Fig. 4 gives a 3-D plot of the reconstructed surface of the human tooth model.

Fig. 4. Three-dimensional plot of the reconstructed surface of the object.

6. Discussion

From Eq. (16) we know that Eqs. (9) and (10) are two approximate, separate linear equations. Because s = 1200 mm and the maximum height of the object is 20 mm in this experiment, the maximum nonlinear errors of h(x, y) obtained from Eqs. (9) and (10) are the same, 1.7%, which means that an error of 0.017 fringe is introduced into the low-precision phase field. At the same time, an estimate of the error of θ₀ was made by measuring the phase distribution on a fixed plane. Such a measurement in our system shows a phase-angle error of less than 10°. The total error of the low-precision profile is therefore ~0.045 fringe. Because k₂/k₁ = f₁/f₂ = 6, when the zero-order phase is used to calculate the fringe order of the more precise phase distribution, the inconsistency error of the high-precision phase profile is 0.27 fringe. However, the minimum error of the phase-unwrapped result is 1 fringe when the n(x, y) produced by Eq. (11) contains an error. In other words, the permissible inconsistency error in the high-precision phase field, caused by the noise of the low-precision phase, is 0.5 fringe. It is clear that the inconsistency error of 0.27 fringe is less than the permissible error of 0.5 fringe. In addition, when a better phase-unwrapping result is required, we can regard the first phase-unwrapping result as the zero-order phase in the next phase unwrapping, which is similar to a mathematical recurrence algorithm.
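The error budget in this section reduces to a few lines of arithmetic; the sketch below simply restates the numbers quoted above (0.017 fringe of nonlinearity, a 10° phase-angle error, and a sensitivity ratio of 6) to show how the 0.27 fringe figure and the 0.5 fringe tolerance arise:

```python
# Low-precision (zero-order) phase error budget, in fringes.
nonlinearity = 0.017           # from the 1.7% nonlinearity of Eqs. (9) and (10)
plane_test = 10.0 / 360.0      # 10 deg phase-angle error expressed in fringes
low_precision = nonlinearity + plane_test    # ~0.045 fringe

# Scaling to the high-precision phase via k2/k1 = f1/f2 = 6.
ratio = 6.0
inconsistency = ratio * low_precision        # ~0.27 fringe

# Eq. (11) misassigns the integer fringe order only when the
# inconsistency reaches half a fringe, so 0.27 < 0.5 leaves a margin.
assert inconsistency < 0.5
```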

7. Conclusions

The methods described here for computing true phases can produce phase maps automatically without the need for locating fringe centers or assigning fringe orders. This technique is tolerant of discontinuities and noise in the phase, and it is efficient and simple to implement.

The authors thank the Youth Science Foundation of Xi'An Jiaotong University for funding.

References
1. M. Halioua and H.-C. Liu, "Optical three-dimensional sensing by phase measuring profilometry," Opt. Lasers Eng. 11, 185-215 (1989).
2. W. W. Macy, Jr., "Two-dimensional fringe-pattern analysis," Appl. Opt. 22, 3898-3901 (1983).
3. D. C. Ghiglia, G. A. Mastin, and L. A. Romero, "Cellular-automata method for phase unwrapping," J. Opt. Soc. Am. A 4, 267-280 (1987).
4. D. J. Bone, "Fourier fringe analysis: the two-dimensional phase unwrapping problem," Appl. Opt. 30, 3627-3632 (1991).
5. J. M. Huntley and H. Saldner, "Temporal phase-unwrapping algorithm for automated interferogram analysis," Appl. Opt. 32, 3047-3052 (1993).
6. J. H. Bruning, J. E. Gallagher, D. P. Rosenfeld, A. D. White, D. J. Brangaccio, and D. R. Herriott, "Digital wave-front measuring interferometer for testing optical surfaces and lenses," Appl. Opt. 13, 2693-2703 (1974).
7. C. L. Koliopoulos, "Interferometric optical phase measurement techniques," Ph.D. dissertation (University of Arizona, Tucson, Ariz., 1981).