Location and Shape Measurement Using a Portable Fringe Projection System

by M. Heredia Ortiz and E.A. Patterson

ABSTRACT--In this paper we present a portable fringe projection system developed for the measurement of the three-dimensional shape and position of complex aircraft parts. We describe the prototype instrument and processing algorithms designed, implemented, and tested during this study, and we discuss several applications throughout the lifecycle of the aircraft.

KEY WORDS--fringe projection, shape, position, location, aircraft component

Nomenclature

γ  modulation
θ  angle between projection and observation axes, rad
φ  modulated phase, rad
φ*  phase estimator, rad
α, β  angles of laser beam, rad
A  background illumination
b  distance between camera focal point and laser source, mm
f  effective focal distance of the camera lens, mm
f₀  carrier frequency
I  image intensity
i, j  image pixel coordinates
K  calibration constant
O  object image
O  camera focal point
P  pitch of the reference image
P(Xp, Yp, Zp)  point in object
p(xp, yp)  projection of point P onto the image plane
Q  quality value of the phase differences
R  reference image
S  laser source
s  camera magnification, mm pixel⁻¹
x, y  CCD physical coordinates, mm
X, Y, Z  real world coordinates, mm

M. Heredia Ortiz was a Research Associate and E.A. Patterson (SEM member; [email protected]) is a Professor, Department of Mechanical Engineering, University of Sheffield, Sheffield, S1 3JD, UK. E.A. Patterson is now at Department of Mechanical Engineering, Michigan State University, MI, USA. Original manuscript submitted: July 1, 2002. Final manuscript received: January 18, 2005.

DOI: 10.1177/0014485105053801 © 2005 Society for Experimental Mechanics

Introduction

Non-destructive evaluation (NDE) techniques, such as ultrasonic waves, eddy currents, x-rays, and infrared thermography, are extensively used in the aerospace industry throughout the lifecycle of an aircraft, e.g., to evaluate prototypes in the design stage, to detect manufacturing defects in production, and to monitor flaws produced during service.1 Most NDE techniques provide some form of two-dimensional (2D) scan of the inspected part. The ability to register these scans with a three-dimensional (3D) computer aided design (CAD) model of the part would enable (i) the integration of NDE data obtained with different methods, (ii) the transfer of information relative to NDE inspections between different disciplines, and (iii) the automation of the inspection process in the production stages. However, the determination of a point-by-point correspondence between the 2D scans and the 3D CAD model requires information about the actual shape of the part and its orientation and location with respect to the inspection device. A number of applications that require a system capable of providing measurements of the location and shape of an arbitrary object have been identified in this context, including the automated generation of probe trajectories for the ultrasound testing of large slender composite panels, the integration of local x-ray scans with the CAD model of the part for the inspection of hidden welds, and others relating to the assembly of the aircraft and in-service maintenance inspections.1

A number of shape measurement techniques are potentially available to provide this capability. The reflection moiré technique2 allows direct measurement of surface orientation, but requires a polished surface on the specimen. The shadow moiré technique3 can be used for surface contouring with typical sensitivities of 1/10 mm using simple apparatus. It can cope with a variety of surface finishes, but it is difficult to apply to large or complex objects, because a grating must be fabricated and set up at a close distance to the surface of the object. The projection moiré4 and fringe projection5 techniques are more flexible in terms of object size and of dealing with arbitrary shapes. Also, their requirements for surface finish are not strict. Their sensitivity and range can easily be adjusted by several orders of magnitude by changing the pitch of the grating slide and/or the distance between the elements. The fringe projection technique does not require delicate mechanisms for the precise alignment of the projecting and viewing gratings as in projection moiré, which allows very simple, robust, and inexpensive apparatus to be implemented using off-the-shelf components. This is at the expense of some loss in sensitivity, although accuracies in the micrometer range have been reported in the literature.6 Fringe projection differs from projection moiré in that the reference and object images do not interfere physically with one another, so that the classical moiré pattern is not observed.


Fig. 1--Photograph of the apparatus in which gratings are projected using a specially designed light source and the CCD camera is used to collect images whilst two lasers are used respectively to align the instrument and perform distance measurements

In this study, the fringe projection technique was selected. It provides a fast and inexpensive way of measuring the shape and orientation of an arbitrary 3D component using very simple experimental apparatus.7 Figure 1 shows a typical experimental setup. A grating is projected onto the object of interest, and a camera viewing the object surface from a different direction collects an image.8 The technique is non-contacting, allows remote operation, and does not interfere with most NDE methods. Assuming the angles of projection and observation are known, the deviation from straight lines by the fringes in the image can be used to infer the shape of the test surface. Digital processing of the fringe images yields a full-field map of the depth of the surface relative to a reference plane, where the values are undetermined by a constant, sometimes referred to as the "piston" term. This term is related to the problem of identifying the zero-order fringe (see, for example, Durelli and Parks9 and other classical texts), and can be interpreted as an unknown rigid-body displacement of the object along the camera axis. In order to obtain absolute depth values, it is necessary to determine the location of at least one point by alternative means.10

A method for the combined measurement of shape and location using fringe projection and a scanning laser system can be found in the literature.11 The scanning laser technique for distance measurement uses an acousto-optic deflector to scan the object with a diffracted laser beam. The light spot technique12 is a simpler method, and was chosen to complement the fringe projection technique in this study. This technique is commonly used in machine vision applications to measure distances by triangulation with a beam of light. The measurement is fast and simple, and no contact is necessary. The camera of the fringe projection system images the bright spot produced by a fixed laser source on the surface of the object, enabling the measurement of its absolute location and thus the evaluation of the unknown "piston" term in the depth map obtained using the fringe projection technique.

In the following section the algorithms for digital fringe processing are outlined. They provide maps of modulated phase, whose relationship to surface depth is explained. The principles and application of the light spot technique are also described.



Fig. 2--Typical input images: (a) reference; (b) calibration cone of dimensions H = 140 mm, R = 150 mm; (c) an aircraft fuselage panel is used here as the object; (d) light spot on the same panel. The rectangular regions highlighted in (a) and (b) were used as reference and object data sets for the calibration of the system

Subsequent sections describe the experimental apparatus and the calibration procedure. Finally, two examples of the technique applied to aircraft components are presented.

Processing Algorithms

The fringe projection technique requires the collection of an object image and a reference image (Fig. 2). When object shape information is required, the reference image is obtained by projecting a fringe grating onto a flat plane. Projecting the fringes onto the object of interest, without any intervening change in the optical arrangement, produces the object image. These two images are digitally processed to extract the depth information. First, a phase-stepping algorithm is used to obtain a modulated phase map, which is related to the relative distortion of the fringe pattern between the two images. The surface depth can then be calculated from the phase map.

Phase Extraction

Assuming vertical orientation of the fringes in the reference image (see Fig. 2(a)), the intensity distribution of the fringe pattern can be mathematically described as follows:

I(i, j) = A(i, j) [1 + γ(i, j) cos(2π f₀ j + φ(i, j))].  (1)

Here, A and γ are the background illumination and modulation terms, respectively; f₀ is the carrier frequency, which is related to the pitch P of the projected fringes; and the modulated phase φ encodes the information related to the surface depth. The first step of the processing algorithm is the extraction of the phase information φ from the fringe pattern. Spatial carrier phase-stepping methods13,14 allow the measurement of the phase from a single object image with a very low computational burden. The method suggested in this paper takes the following self-calibrating five-step phase estimator, originally derived by Hariharan et al.15 for temporal phase shifting, and adapts it for spatial phase stepping:

φ* = arctan[ 2(I₂ − I₄) / (2I₃ − I₁ − I₅) ].  (2)

In the original temporal algorithm, five images, represented by I₁, I₂, …, I₅, are generated by shifting the projected grating in P/4 increments in the direction perpendicular to the projected fringes, i.e., horizontally in this case, and recording an image corresponding to each phase shift. Here, a single image I is recorded and then shifted in the computer, so that I₁(i, j) = I(i, j − P/2), I₂(i, j) = I(i, j − P/4), …, I₅(i, j) = I(i, j + P/2). This eliminates the need for a phase-shifting mechanism in the apparatus. In addition, the technique can be used in the presence of vibrations, or to study objects in motion. Note that P represents the pitch of the projected fringes in the reference image, and the same value is used to extract the phase of both the reference and the object image using the above expression. The phase difference between the reference and object images is then obtained by subtracting the two estimators:

Δφ* = φ*O − φ*R.  (3)

In this expression, the subscript R denotes the reference image and O the object image. A comprehensive evaluation of this phase extraction method can be found in Heredia.1 The complete study will not be repeated here for brevity, although some of the results will be outlined. These estimators are accurate, provided that (i) the background illumination and modulation terms vary slowly relative to the modulated phase, and (ii) the phase differences between the reference and the object are small.1 In practical terms, the selection of a sufficiently fine grating and a small projection angle satisfies these conditions, provided that the surface is smooth with gentle slopes. These conditions are normally met in the application under study. There are situations, such as changes in the surface color, markings, etc., in which the background and modulation terms vary rapidly, potentially introducing errors in the phase extraction. A preprocessing normalization algorithm can be used in such cases to remove these terms.
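As an illustration, eq (2) and the computer-shifting scheme just described can be implemented in a few lines. The following Python sketch assumes NumPy, vertical fringes (so that j runs along image columns), and a pitch P expressed in pixels; the function name and interface are illustrative rather than taken from the paper.

```python
import numpy as np

def spatial_phase(image, pitch):
    """Five-step Hariharan estimator, eq (2), applied as spatial phase stepping.

    A single fringe image I is shifted in the computer by multiples of P/4
    along the direction perpendicular to the fringes (columns here),
    emulating the five temporally phase-stepped images I1..I5.
    """
    p4 = int(round(pitch / 4.0))
    # I1..I5 correspond to shifts of -P/2, -P/4, 0, +P/4, +P/2 in j,
    # i.e., I1(i, j) = I(i, j - P/2), ..., I5(i, j) = I(i, j + P/2)
    I1, I2, I3, I4, I5 = (np.roll(image.astype(float), s, axis=1)
                          for s in (2 * p4, p4, 0, -p4, -2 * p4))
    # wrapped phase estimator in [-pi, pi]
    return np.arctan2(2.0 * (I2 - I4), 2.0 * I3 - I1 - I5)
```

Applied to the reference and object images in turn, this yields the two estimators subtracted in eq (3).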

Phase Unwrapping

Due to the arctangent operation in eq (2), the phase values appear wrapped within the range [−π, π], presenting 2π discontinuities at the ends of the periods. Unwrapping consists of detecting the jumps and adding appropriate multiples of 2π until a continuous phase map is obtained. When dealing with 2D images, the process is path-dependent in the presence of noise and aliasing. These may corrupt the phase and cause errors, which propagate through the image.

A variation of the quality-guided unwrapping algorithm16 was developed for this study. This algorithm uses a measure of the quality of the phase data to select an unwrapping path, processing the reliable regions first to prevent the propagation of errors. A quality map consistent with the phase estimators proposed in eq (2) was obtained by adapting the expression for the modulation γ used in the five-bucket algorithm for temporal phase shifting:15

γ = 3 [4(I₄ − I₂)² + (I₁ + I₅ − 2I₃)²]^(1/2) / [2(I₁ + I₂ + 2I₃ + I₄ + I₅)],  (4)

with I₁(i, j) = I(i, j − P/2), I₂(i, j) = I(i, j − P/4), …, I₅(i, j) = I(i, j + P/2). Equation (4) can be applied to the reference and the object images, respectively, to yield γR and γO, which are combined to obtain the quality map Q(i, j) of the phase differences:

Q(i, j) = γR(i, j) · γO(i, j).  (5)
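The modulation of eq (4) can be computed from the same computer-shifted images used for the phase estimator; a minimal sketch under the same assumptions as the earlier listing (illustrative names, NumPy, pitch in pixels):

```python
import numpy as np

def modulation(image, pitch):
    """Fringe modulation, eq (4), from a single spatially stepped image."""
    p4 = int(round(pitch / 4.0))
    I1, I2, I3, I4, I5 = (np.roll(image.astype(float), s, axis=1)
                          for s in (2 * p4, p4, 0, -p4, -2 * p4))
    num = 3.0 * np.sqrt(4.0 * (I4 - I2) ** 2 + (I1 + I5 - 2.0 * I3) ** 2)
    return num / (2.0 * (I1 + I2 + 2.0 * I3 + I4 + I5))

# quality map of the phase differences, eq (5):
# Q = modulation(reference_image, pitch) * modulation(object_image, pitch)
```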

Thresholding of the quality map can be used to automatically detect the background of the image. Alternatively, the region of interest can be defined manually by setting the quality value of background pixels to zero. A histogram analysis of the resulting quality map allows the automatic selection of quality thresholds that define four classes of pixels or "bins" according to their quality values, such that each bin will normally contain the same number of pixels. Bins containing higher-quality pixels are assigned a higher priority level. The algorithm assigns three tags to each pixel in the image: wrapped/unwrapped, active/inactive, and unexplored/explored. At the beginning of the procedure, all pixels are set to wrapped, inactive, and unexplored. The unwrapping procedure starts at the highest quality pixel, which is set to active and unwrapped. The neighbors of the active pixel are unwrapped by adding the appropriate multiple of 2π that makes the phase continuous, and their tag is set to unwrapped. Each neighbor is placed in the appropriate bin according to its quality value, ignoring pixels that belong to the background or may have already been explored. Then the active pixel is deactivated and marked as explored. Finally, the algorithm searches the bins for the next active pixel, starting from the highest priority bin (the one that stores top quality pixels) and moving onto the lower priority bins only when the previous ones are empty. This process is repeated until no unexplored pixels remain in the image.
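The following Python sketch implements the procedure just described, with one assumed simplification: a single priority queue ordered by quality replaces the four fixed-priority bins, which approximate exactly this ordering with a coarser granularity.

```python
import heapq
import numpy as np

def quality_guided_unwrap(wrapped, quality):
    """Quality-guided unwrapping of a 2D wrapped phase map.

    wrapped : 2D array of wrapped phase values in [-pi, pi]
    quality : 2D array; background pixels set to zero are never unwrapped
    """
    rows, cols = wrapped.shape
    unwrapped = wrapped.astype(float).copy()
    done = np.zeros(wrapped.shape, dtype=bool)       # wrapped/unwrapped tag
    # start at the highest-quality pixel
    i0, j0 = np.unravel_index(np.argmax(quality), quality.shape)
    done[i0, j0] = True
    heap = [(-quality[i0, j0], (i0, j0))]            # max-heap via sign flip
    while heap:
        _, (i, j) = heapq.heappop(heap)              # active pixel
        for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ni, nj = i + di, j + dj
            if (0 <= ni < rows and 0 <= nj < cols
                    and not done[ni, nj] and quality[ni, nj] > 0):
                # add the multiple of 2*pi that makes the phase continuous
                jump = wrapped[ni, nj] - unwrapped[i, j]
                unwrapped[ni, nj] = wrapped[ni, nj] \
                    - 2.0 * np.pi * np.round(jump / (2.0 * np.pi))
                done[ni, nj] = True
                heapq.heappush(heap, (-quality[ni, nj], (ni, nj)))
    return unwrapped
```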

Depth Data

The fringe processing technique described above yields a map of modulated phase. This phase, φ, encodes the information pertaining to the shape of the object. A number of different expressions can be found in the literature to calculate the surface depth from this phase map. A common hypothesis is to assume that the grating is projected onto the object using a device that keeps the pitch constant in space and that observation is parallel to the optical axis everywhere. Such a device would require the use of collimating lenses, which is impractical for real applications. However, an acceptable approximation can be obtained if the distance from the projection and observation points is sufficiently large compared to the size of the object, so that they can be considered to be at infinity. Hence, assuming infinite optics, it can be proved that the measured phase difference is proportional to the surface depth,17 i.e.,

ΔZ = K Δφ,  (6)

where the calibration constant K (mm rad⁻¹) is a function of the pitch of the projected fringes P and the angle between the projection and observation axes θ, so that

K = P / (2π tan θ).  (7)

Finally, the infinite optics hypothesis also implies that distances in the image are scaled by a magnification constant s (mm pixel⁻¹), which depends on the optical arrangement used. The following section describes a simple calibration procedure, developed to measure K and s using a reference card and a calibration cone, which are aligned using a laser mounted on the camera.

9

199

Distance Measurement

Fig. 3--Light spot technique for distance measurement. The laser source is at S, and the point P at the object is projected onto p in the CCD, which is shown in light gray. O is the focal point of the camera

A pinhole camera model was used in this research. Optical aberrations were neglected and the optical axis was assumed to intersect the CCD at its center. A coordinate system, with the origin located at the focal point of the camera O and the Z-axis aligned with the optical axis, is used to simplify the expressions, as shown diagrammatically in Fig. 3. The image plane, in this case the plane of the CCD, is situated along the Z-axis at a distance f, which is the effective focal distance of the camera lens. We assume that the laser source S is located on the X-axis at a distance b from the origin O. The laser beam intersects the surface of the object at point P(Xp, Yp, Zp). The beam direction is defined by the angles α in the XZ-plane and β in a plane perpendicular to the XZ-plane that contains SP. The projection of point P onto the image plane is point p(xp, yp). The ray theorem yields

Xp/xp = Yp/yp = Zp/f.  (8)

Using eq (8) and simple trigonometric relations, expressions can be derived to calculate the coordinates of point P as a function of the coordinates of its projection p in the image plane and the parameters b, f, and α defined above.12 These are, respectively, the distance between the laser source S and the camera focal point O, the effective focal length of the camera lens, and the angle α between the projected laser beam and the line joining its source to the camera focal point. In particular, the expression for the Z-coordinate is

Zp = b f tan(α) / (f + xp tan(α)).  (9)

The other coordinates can be calculated from eq (8). This technique requires accurate calibration of the camera and precise detection of the spot in the image to work properly. A calibration procedure for the camera and algorithms for the detection of the light spot with subpixel accuracy were developed to address these problems.
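A minimal sketch of this triangulation, eqs (8) and (9), in Python; the function and its interface are illustrative, and the sample values quoted in the docstring are the calibration reported later in the Results section:

```python
import numpy as np

def spot_location(x_p, y_p, b, f, alpha):
    """Absolute location of the laser spot from its image-plane position.

    x_p, y_p : coordinates of the imaged spot on the CCD, mm
    b        : distance between laser source S and camera focal point O, mm
    f        : effective focal length of the camera lens, mm
    alpha    : angle of the laser beam in the XZ-plane, rad

    For example, the calibration reported in the Results section gave
    b = 494 mm, f = 4.91 mm, alpha = 1.63 rad and Zp = 1040 mm.
    """
    Z_p = b * f * np.tan(alpha) / (f + x_p * np.tan(alpha))  # eq (9)
    # remaining coordinates follow from the ray theorem, eq (8)
    return x_p * Z_p / f, y_p * Z_p / f, Z_p
```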

Experimental Procedure

Apparatus


A prototype instrument for fringe projection and recording of images was developed for this study, as shown in Fig. 1. The projector was built in-house using a 300 W halogen-tungsten lamp as a light source. The light is projected through a glass grating (Edmund Scientific precision Ronchi rulings with line frequencies ranging from 100/inch to 250/inch) and focused onto the surface of the object using a 12.5-75 mm zoom TV lens. A CCD camera (Panasonic WV BP100) with a 4.5 mm lens was used to record images of up to 574 × 768 pixels. The camera was connected to a laptop computer (500 MHz processor and 128 MB of RAM) for data processing using a commercial frame grabber (MRT Video PCMCIA card). The reference alignment and distance measurement procedures use 5 mW red laser diodes to produce bright light spots. Two commercial modules (NVG Inc. M660-5) were used. The alignment laser source was attached to the camera and its beam aligned with the optical axis of the camera using a flat card onto which both the camera and the laser beam were focused, such that the laser spot was coincident with the center of the image acquired by the camera. The card has a symmetrical line pattern printed on it to assist in the process. A special mount with adjustment screws allowed accurate adjustment of the beam direction, and it was locked in position for subsequent use. The distance measurement laser sat on the tripod next to the projector.

Calibration of the Fringe Projection System

The calibration constant K in eq (6) and the camera magnification s depend on the configuration of the optical elements and need to be calculated to obtain quantitative data. The camera magnification s is needed to transform the image pixel coordinates (i, j) into real-world physical coordinates (x, y). A simple calibration procedure has been designed to increase the flexibility and ease of use of the system. The calibration constant K and camera magnification s are calculated by analyzing a surface of known geometry, which removes the need to make direct measurements of the parameters in eq (7). A flat reference card is positioned in front of the specimen, and aligned perpendicular to the viewing axis of the camera using the alignment laser mounted on the camera. The surface of the card has a glossy plain finish to reflect the laser beam. The orientation of the card is adjusted so that the reflected beam is aimed back at the laser source to ensure correct alignment, and the card is clamped in position. The fringe projector is switched on to collect the reference image. A calibration cone with known dimensions is attached to the reference card and a second image is recorded. The size of the cone is selected to approximately match that of the features of interest in the specimen. The two images are processed to extract and unwrap the modulated phase map using the processing algorithms described in the previous section. Automatic detection of the features in the phase map, i.e., the apex and edge of the cone, allows the calculation of the calibration constant K as the ratio between the height of the cone in mm and the value of the phase at the apex, and of the magnification factor s as the ratio between the diameter of the cone in mm and the diameter in pixels of the circular edge in the phase map. Alternatively, a least-squares fit with K and s as parameters can be performed between the phase map and an artificial surface of the same size as the input images, generated using the real dimensions and the detected position of the cone. This procedure uses information at more data points, increasing the accuracy of the calculated calibration constant K and magnification s and improving the robustness with noisy data.
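By way of illustration, the feature-based route reduces to a few lines of Python; the fixed threshold used below to find the circular edge is an assumed simplification of the automatic feature detection described above:

```python
import numpy as np

def calibrate_cone(phase_map, height_mm, radius_mm):
    """Compute K (mm/rad) and s (mm/pixel) from the unwrapped phase map
    of the calibration cone, using the ratios described in the text."""
    # apex: extreme value of the unwrapped phase difference
    apex_phase = phase_map.flat[np.argmax(np.abs(phase_map))]
    K = height_mm / apex_phase
    # circular edge: pixels where the cone rises clearly above the card;
    # the 5% threshold is arbitrary and purely illustrative
    mask = np.abs(phase_map) > 0.05 * abs(apex_phase)
    cols = np.flatnonzero(mask.any(axis=0))
    diameter_px = cols[-1] - cols[0] + 1
    s = 2.0 * radius_mm / diameter_px
    return K, s
```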

Calibration of the Distance Measurement System

The calibration of the distance measurement system requires the calculation of the optical parameters in eq (9): b, f, and α. These parameters cannot be measured directly because points O and S in Fig. 3 are based on a pinhole camera model, not the real optical arrangement. However, a set of calibration images of the light spot on a reference card placed at known Z locations can be used to calculate their values. The procedure involves (i) estimating the parameters and (ii) using an optimization algorithm to adjust them. Although in theory three calibration images suffice to solve for the three unknown model parameters b, f, and α, the accuracy is improved by using a minimum of 10 images taken at evenly distributed locations in the range of interest. The location of the light spot projected on the reference card, P(Xp, Yp, Zp), was measured using a steel ruler, with an accuracy of ±0.5 mm. The intersection of the camera axis Z with the reference card (the origin for Xp, Yp) was located using the alignment laser AA', as shown in Fig. 3. The (i, j) position of the bright spot can be determined in each of the images with subpixel accuracy by calculating the weighted centroid of the bright region.12 The corresponding physical coordinates (xp, yp) of point p in the image plane were obtained by changing the coordinate system, as shown in Fig. 3, and scaling with the detector size. The dimensions of a single detector were calculated as the ratio between the physical size of the CCD chip and the number of detectors in the x and y directions, which yielded 6.5 × 6.25 µm for the camera used in this study. An iterative algorithm was used to determine the values of the parameters α, b, and f which optimize the fit between


experimental values (i.e., the position of the bright spot measured using the ruler) and the values of the point coordinates P(Xp, Yp, Zp) calculated in eqs (8) and (9) from the image-plane coordinates p(xp, yp) in the calibration images. An initial estimate of the parameters b, f, and α is needed to start the iterations. The reading from the camera lens was used as a reasonable estimate of the focal distance f. The focal point O was assumed to be at a distance f behind the CCD plane, and Z coordinates were adjusted accordingly. The gradient of a graph of measured coordinates Zp as a function of Xp was used to estimate the angle α between the laser beam and the camera. The distance b between the laser source S and the camera focal point O can then also be estimated by triangulation. Table 1 shows a sample set of calibration results.
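A sketch of the adjustment step under these assumptions, using a generic least-squares routine (the paper describes an iterative optimization but does not name a specific algorithm):

```python
import numpy as np
from scipy.optimize import least_squares

def fit_spot_model(x_img, z_measured, b0, f0, alpha0):
    """Refine the pinhole-model parameters b, f, alpha from calibration images.

    x_img      : image-plane x-coordinates of the spot, mm (one per image)
    z_measured : ruler-measured Z of the spot on the card, mm
    b0, f0, alpha0 : initial estimates obtained as described in the text
    """
    x_img = np.asarray(x_img, dtype=float)
    z_measured = np.asarray(z_measured, dtype=float)

    def residuals(params):
        b, f, alpha = params
        z_model = b * f * np.tan(alpha) / (f + x_img * np.tan(alpha))  # eq (9)
        return z_model - z_measured

    return least_squares(residuals, x0=[b0, f0, alpha0]).x
```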

Results

Two full-size aircraft components are presented as a case study. The system was calibrated using the images of the reference plane and calibration cone shown in Figs. 2(a) and (b), respectively. The cone used in this case was of height 140 mm with a 150 mm radius. The grating used was 200 lines per inch. The camera was located approximately 1000 mm from the object, and the projector approximately 600 mm to the left. The calibration images of the cone were cropped to a 400 × 350 pixel rectangle to remove the background near the edges, as indicated in Fig. 2. In this case, the system calibration yielded K = −7.1375 mm rad⁻¹ and s = 1.5 mm pixel⁻¹. Figure 4(a) shows the unwrapped phase map for the cone, and Fig. 4(b) shows the 3D surface, which results after scaling the phase map with the calculated calibration constants.

The first component tested was a composite panel from the outer skin of an aircraft (approximate dimensions 950 × 500 mm²). Figures 2(c) and (d) show the input data, consisting of an image of the object with the projected fringe pattern and an image of the spot of light on the component for distance measurement. These images were collected using the same optical configuration as for the calibration. The object image in Fig. 2(c) was combined with the reference image in Fig. 2(a) to produce a modulated phase map. This map can be used to calculate the surface geometry using eq (6), which produces the depth map shown in Fig. 5(a). The light spot image in Fig. 2(d) was used to determine the absolute location of the specimen relative to the camera. A calibration of the distance measurement system yielded b = 494 mm, f = 4.91 mm, and α = 1.63 rad. The location of the light spot in the image was determined, and eq (9) was used to calculate Zp = 1040 mm. This result is within 0.05% of the value obtained by measuring with a steel tape measure. Figure 5(b) shows the combined shape and location data.

A second set of data was collected under routine maintenance conditions in the hangar. The aim of this exercise was to demonstrate that the system could carry out measurements with limited access, adverse illumination, and simultaneously with other maintenance operations on the aircraft. Absolute location measurement was not performed in this case, although in principle it should not present any difficulties. The component shown in Fig. 6(a) is the port foreplane canard of a fighter plane, with overall approximate dimensions 1100 × 1300 mm².


TABLE 1--SAMPLE CALIBRATION DATA

Image   Xp (mm)   Calculated Xp (mm)   Error (mm)   Zp (mm)   Calculated Zp (mm)   Error (mm)
1       124.5     124.6                +0.1         1004.2    1004.6               -0.4
2       145.0     145.3                +0.3         1037.7    1037.6               +0.1
3       188.5     189.4                +0.9         1108.8    1107.6               +1.2
4       232.5     231.4                -1.1         1176.9    1177.6               -0.7
5       257.5     256.2                -1.3         1216.8    1217.6               -0.8
6       283.0     282.1                -0.9         1258.8    1257.6               +1.2
7       305.0     305.8                +0.8         1297.0    1297.6               -0.5

Fig. 4--Calibration maps produced using the highlighted rectangular region of 400 × 350 pixels in Figs. 2(a) and (b): (a) depth map obtained after unwrapping and calibrating the modulated phase map with the constant K; (b) 3D surface plot of the calibration cone obtained after scaling with the camera magnification factor s

The camera was located approximately 2000 mm away, with the projector 600 mm to the left of the camera. Note that the bright illumination in the hangar caused poor contrast in the fringes, and shadows and glare on the surface. Also, the presence of markings caused sudden changes in the modulation of the fringes. A second image of the object was collected with the fringe projector switched off, and subtracted from the image in Fig. 6(a). The resulting image was normalized, which in essence involves a high-pass filter and an adjustment of the intensity values to use the full dynamic range, yielding the enhanced image shown in Fig. 6(b).
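The normalization step can be sketched as a high-pass filter followed by a contrast stretch; the box-filter size relative to the fringe pitch is an assumption made here for illustration, not a value given in the paper:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def normalize_fringes(image, pitch):
    """Remove the slowly varying background term and stretch the result
    to the full dynamic range (cf. the enhanced image in Fig. 6(b))."""
    img = image.astype(float)
    background = uniform_filter(img, size=2 * int(pitch))  # low-pass estimate
    highpass = img - background                            # high-pass residual
    lo, hi = highpass.min(), highpass.max()
    return (255.0 * (highpass - lo) / max(hi - lo, 1e-12)).astype(np.uint8)
```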


Fig. 5--Surface contour maps for the panel from the aircraft fuselage: (a) depth map obtained after unwrapping and calibrating the modulated phase map obtained from the images in Figs. 2(a) and (c); (b) combined location and shape data as a shaded 3D surface, with the origin of the coordinate axes at the optical center of the camera and the resolution reduced for clarity of display

In order to reduce the amount of experimental work carried out in the adverse conditions of the hangar, the reference and calibration images were later recorded in the laboratory using the same configuration of the optical elements. The normalized object image was combined with the reference shown in Fig. 6(c) to produce the surface map shown in Figs. 6(d) and (e). An alternative would have been to switch off the hangar lights and to work at night, but the aim was for a system that could be used during routine maintenance without the need to suspend other maintenance operations or schedule additional shifts.

Fig. 6--Set of data collected in the hangar during maintenance of an aircraft: (a) raw input image showing poor contrast in the fringes, shadows (left) and glare (top right), due to adverse illumination conditions in the hangar; (b) enhanced image showing improved contrast and reduced effects of shadows and glare; (c) reference image, collected in the laboratory with the same optical configuration; (d) measured contour map of the surface and profiles along the vertical and horizontal lines shown; (e) 3D representation of the surface


Discussion

The phase extraction algorithm proved successful provided the conditions described in the section on phase extraction were met. Some localized problems occurred at discontinuities, such as at the angle bracket used to mount the panel in Fig. 5 (see the center of the image). A good rule of thumb for grating selection is 6-8 pixels per fringe in the reference image, to account for inclined regions in the object image where the lines appear closer together. Also, the grating pitch should be at least the same size as the detail in the object of interest. The best results were obtained using a normal viewing direction and projection angles between 10° and 20°. The combination of low contrast, bright markings,


and a fringe frequency close to the minimum of four pixels per fringe caused errors, such as in the upper-right corner of Fig. 6(c). In a series of tests with calibration cones and inclined flat plates carried out to evaluate the accuracy of the method, where the surface shape was known, the error consistently remained below 1% of the measured range. The main source of error comes from the infinite optics hypothesis: in reality, the pitch of the projected grating is not constant, and as a consequence the object surface appears curved. Some of this error is cancelled when the reference and object phase maps are subtracted. Test results suggest that for a typical setup, a curvature of 1/200 is detected in nominally flat surfaces. A higher-order approximation is needed in place of eq (6) to account for this effect, but this was not believed to be worthwhile for this study. Optical aberrations and electrical noise were also identified as secondary sources of error, but their effect is well below the other errors discussed above.


Conclusions

A technique based on fringe projection has been introduced that allows full-field automatic measurement of the location and shape of 3D objects. A novel phase extraction method for the analysis of fringe patterns has been described. The technique can be applied to a single image and is therefore suitable for the study of dynamic events; at the same time, it is computationally very simple. Expressions to estimate the quality of the phase maps and a fast quality-guided unwrapping algorithm have also been outlined. Finally, a normalizing technique capable of enhancing fringe patterns collected in unfavorable conditions has been demonstrated with a real example. Distance measurements were performed using the light spot technique to resolve the indeterminate "piston" term in the shape data. Combining both techniques, it is possible to obtain the shape and absolute location of an arbitrary component with accuracies better than 1% of the measured range. The experimental apparatus and methodology have been described, both for shape and distance measurements. The system has been demonstrated in two examples from the aerospace industry, and has proved to be robust and reliable. The shape and location data provided by the system will enable the registration of NDT data with the CAD model of the part, which could make a major contribution to the integration of NDT in the lifecycle analysis of an aircraft.

Acknowledgments

This research was partly funded by the BRITE Euram program through the INDUCE project no. BE 97-4057 for NDT techniques in the aerospace industry.

References

1. Heredia, M., Novel Developments of Moiré Techniques for Industrial Applications, Doctoral Thesis, University of Sheffield, Sheffield, UK (2004).




2. Ligtenberg, F.K., "The Moiré Method," Proceedings of the Society for Experimental Stress Analysis (SESA), 12 (2), 83-98 (1954).
3. Weller, R. and Shepard, B.M., "Displacement Measurement by Mechanical Interferometry," Proceedings of SESA, 6 (1), 35-38 (1948).
4. Brooks, R.E. and Heflinger, L.O., "Moiré Gauging Using Optical Interference Patterns," Applied Optics, 11, 2269-2277 (1982).
5. Takeda, M. and Mutoh, K., "Fourier Transform Profilometry for the Automatic Measurement of 3D Object Shapes," Applied Optics, 22 (24), 3977-3982 (1983).
6. Sciammarella, C.A., Lamberti, L., and Sciammarella, F.M., "High Accuracy Contouring Using Grating Projection," Proceedings of the XXX Convegno Nazionale AIAS, Alghero, 811-820 (2001).
7. Heredia Ortiz, M. and Patterson, E.A., "Fringe Projection for Optical Path Length Correction in Reflection Photoelasticity," Proceedings of the British Society for Strain Measurement (BSSM) Conference, Lancaster, UK (2001).
8. Doty, J.L., "Projection Moiré for Remote Contour Analysis," Journal of the Optical Society of America, 73 (3), 366-372 (1983).
9. Durelli, A.J. and Parks, V.J., Moiré Analysis of Strain, Prentice-Hall, Englewood Cliffs, NJ (1970).
10. Cloud, G.L. and Creath, K., Optical Methods of Engineering Analysis, Cambridge University Press, Cambridge (1995).
11. Zeng, L., Matsumoto, H., and Kawachi, K., "Simultaneous Measurement of the Position and Shape of a Swimming Fish by Combining a Fringe Pattern Projection Method with a Laser Scanning Technique," Optical Engineering, 37 (5), 1500-1504 (1998).
12. Klette, R., Koschan, A., and Schluns, K., Computer Vision: 3-D Data from Images, Springer-Verlag, New York (1998).
13. Shough, D.M., Kwon, O.Y., and Leary, D.E., "High-speed Interferometric Measurement of Aerodynamic Phenomena," Propagation of High-energy Laser Beams Through the Earth's Atmosphere, Proceedings of the SPIE, 1221, 394-403 (1990).
14. Chan, P.H. and Bryanston-Cross, P.J., "Spatial Phase-stepping Method of Fringe Pattern Analysis," Optics and Lasers in Engineering, 23, 343-354 (1995).
15. Hariharan, P., Oreb, B.F., and Eiju, T., "Digital Phase-shifting Interferometry: a Simple Error-compensating Phase Calculation Algorithm," Applied Optics, 26, 2504 (1987).
16. Ghiglia, D.C. and Pritt, M.D., Two-dimensional Phase Unwrapping, Wiley-Interscience, New York (1998).
17. Pirodda, L., "Shadow and Projection Moiré Techniques for Absolute or Relative Mapping of Surface Shapes," Optical Engineering, 21 (4), 640-649 (1982).
