
CHAPTER 5

AUTOMATIC 3D FACIAL RECONSTRUCTION BY FEATURE-BASED REGISTRATION OF A REFERENCE HEAD

Gérard Subsol, FOVEA Project, 2 lot. de Tamaroque, 11490 Portel-des-Corbières, France
Gérald Quatrehomme, Laboratory of Forensic Pathology and Forensic Anthropology, Avenue de Valombrose, 06107 Nice Cedex 2, France

5.1

INTRODUCTION

As emphasized by Bramble et al. (2001), "two and three-dimensional computer-based reconstruction systems have been developed to make the reconstruction process faster, more flexible and to remove some of the subjectivity and inconsistencies associated with the traditional approaches (illustrative identikit and 3D clay based reconstruction)". We can attempt to classify the 3D computer-based methods that have been presented in the last fifteen years into the three following categories.

MORPHOMETRY-BASED METHODS

The user chooses some sites on the skull surface where he defines the facial thickness. A facial surface is then adjusted (or "warped") to fit the selected sites by applying some 3D transformations. The main difficulty is in determining a class of transformations that are both complex and regular enough to deform the face surface precisely and consistently. Vanezis et al. (1989), who made one of the first attempts to use a 3D computer-graphic method, applied transformations that are nonuniform scalings. Since then, more complex transformations such as bilinear interpolation (Plasencia 1999), spline functions (Archer et al. 1998, Vignal 2004), radial basis functions (Vanezis et al. 2000), and hierarchical volume deformation (Petrick 2000) have been introduced.

MORPHOLOGY-BASED METHODS

The user sets up the morphology of the face by including the muscles and the fat, before ending the reconstruction by putting on the skin layer. Wilhelms


and Van Gelder (1997) presented computer-graphics algorithms to model the bones, muscles, and underlying skin. Kähler et al. (2003) fitted a precise reference anatomical model of the head onto the unidentified skull by using the correspondences between some skull and skin landmarks. A similar procedure was also developed to simulate or plan facial surgery by Koch et al. (1996). In fact, these two classes of methods combine an automation and an extension of the traditional reconstruction processes described by Quatrehomme and Subsol (2004). The first class is easier to implement as it does not require any anatomical model; rather it relies on data—some tens of landmarks and the corresponding facial thicknesses—that are very sparse.

REGISTRATION-BASED METHODS

This class of methods first requires the design of a reference head model, consisting of a skull and a face model that can be extracted from 3D images acquired by computed tomography (Shaham et al. 2000) or 3D laser scanning (Tyrrell et al. 1997). The reference skull is then registered with the model of the unknown skull in order to compute a 3D deformation. This deformation can then be applied to the reference face in order to infer the unknown face (see Figure 5.1). In Nelson and Michael's (1998) paper, some structures called "discs" that define the 3D deformation are manually placed on key features around the unknown and the reference skulls. Seibert (1997) used simulated annealing to support a manual identification of corresponding features. The method of Attardi et al. (1999) has two steps: some anthropological points

Figure 5.1 In registration-based methods, the reference and the unidentified skulls are registered and the resulting 3D deformation is applied to the reference face to infer the unknown face. From CT-scan 3D images of the reference skull S, the reference face F, and the unknown skull S': 1. compute the deformation between the skulls S and S'; 2. apply the deformation to the face F to extrapolate the unknown face F'.


are manually identified on the two skulls and define a first deformation, which then makes it possible to track and register new feature points in order to obtain a refined deformation. Jones (2001) proposes an algorithm based on intensity correlation between the two 3D images of the skulls to extract the feature points automatically. Tu et al. (2004) transform the 3D skull model into a 2.5D representation by using cylindrical coordinates, which allows a 2D intensity-based registration algorithm to be applied. Notice that the different classes of methods can also be mixed. For example, Jones (2001) uses the registration result to map the facial thickness of all the points of the reference head onto the unidentified skull in order to define the underlying face. The registration-based methods appear to us to be the most promising, as they do not require any anthropological measurements or complex anatomical knowledge and can be based on the whole surface data of the skull and the face. Moreover, a lot of progress has been made in 3D image processing in recent years (see e.g., Ayache 2003) and many registration algorithms have been developed and tested. In the next section, we will describe the method we have investigated for several years (Quatrehomme et al. 1997). We will then present some results in Section 3 before discussing some difficulties raised by this class of methods in Section 4.
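To make the overall pipeline of Figure 5.1 concrete, here is a minimal Python sketch of the two steps. The function name and the two callable parameters are illustrative assumptions, not part of the method described in this chapter; the individual steps (feature extraction, registration, deformation fitting) are detailed in the next section.

```python
import numpy as np

def reconstruct_face(ref_skull, ref_face, unknown_skull, match_features, fit_deformation):
    """Registration-based reconstruction, following Figure 5.1 (sketch only).

    ref_skull, ref_face, unknown_skull : (N, 3) vertex arrays of the reference
        skull S, the reference face F and the unknown skull S'.
    match_features  : callable returning matched point pairs (Pi, Qi) between
        two skull surfaces (e.g., crest-line registration, Section 5.2.3).
    fit_deformation : callable returning a function T: R^3 -> R^3 fitted to the
        matched pairs (e.g., the smoothed spline of Section 5.2.5).
    """
    # 1. Compute the deformation between the skulls S and S'.
    P, Q = match_features(ref_skull, unknown_skull)
    T = fit_deformation(P, Q)
    # 2. Apply the deformation to the face F to extrapolate the unknown face F'.
    return T(ref_face)
```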

5.2

DESCRIPTION OF THE METHOD

5.2.1

DATA ACQUISITION

THE ACQUISITION PROCESS

Computed-tomography (CT) scanning of the head of a cadaver may lead to many problems. For ethical reasons, it is practically impossible to use a regular medical CT scanner intended for living patients. Moreover, some technical problems can occur, for example caused by the presence of metallic material in teeth (Spoor et al. 2000). A solution can be to take a cast of the face and to digitize it separately from the dry skull (Quatrehomme et al. 1997). The skull and the face cast are then placed into the CT device (see Figure 5.2), which gives, in a few minutes, a series of several tens of digital images representing successive slices of the anatomical structure. These images generally have a resolution of 512 by 512 pixels, coded in several thousand gray levels. They are then "stacked" in order to build up a three-dimensional image. CT scanners that are routinely used in medical radiology have a resolution of one millimeter, whereas special industrial microscanners can reach a resolution of 100 microns (Thompson and Illerhaus 1998). Some 3D image-processing algorithms developed for medical imaging or computer-assisted design (CAD) are applied to extract the surface of the


Figure 5.2 Obtaining a 3D representation of the head. The anatomical structure (or its cast) (A) is placed in the CT scanner (B). We then obtain a series of several tens of digital images of 512 by 512 pixels in gray levels that correspond to slices (C). It is then possible to "stack" the slices, extract the surface of the anatomical structure, and visualize it in 3D on a computer screen (D).


structure from the 3D image and to display it, from any point of view, on the screen of a computer.
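As an illustration of this acquisition chain, the following Python sketch stacks the slices into a volume and extracts an isosurface with scikit-image's marching-cubes routine. The bone threshold and the voxel spacing are assumed values, not the authors' actual processing parameters.

```python
import numpy as np
from skimage import measure  # scikit-image marching cubes

def slices_to_surface(slices, iso_level=1200.0, spacing=(3.0, 0.6, 0.6)):
    """Stack CT slices into a 3D image and extract a bone surface.

    slices    : sequence of 2D arrays (e.g., 512 x 512 gray-level images).
    iso_level : gray-level threshold separating bone from soft tissue and air
                (illustrative value; it depends on the scanner calibration).
    spacing   : voxel size in mm (slice thickness, pixel height, pixel width);
                here the values quoted for the reference head are used.
    """
    volume = np.stack(slices, axis=0)               # "stack" the slices into a 3D image
    verts, faces, normals, values = measure.marching_cubes(
        volume, level=iso_level, spacing=spacing)   # isosurface as a triangle mesh
    return verts, faces                             # (V, 3) vertices in mm, (F, 3) triangles
```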

THE REFERENCE HEAD

As reference-head data, we used the CT scan of the cast of the face and of the dry skull of a man who died in his seventies (see Figure 5.3). The 3D images consist of 62 slices with a thickness of 3 mm, composed of 512 by 512 pixels of 0.6 by 0.6 mm. The face and the skull were aligned manually by fitting some anatomical landmarks—a difficulty being that the opening angle of the mandible must be exactly the same on the cadaver as on the skeletonized skull, so a special device had to be developed. For this reason, in the following experiment number 2, we have "deleted" the mandible in order to focus only on the upper part of the skull.

EXPERIMENT 1: UNKNOWN CONTEMPORARY SKULL

We applied the same procedure as described for experiment 2 (see Figure 5.4).

EXPERIMENT 2: PREHISTORIC SKULL OF THE MAN OF TAUTAVEL

In the second experiment, we used a CT scan of a cast of the reconstruction of the skull of an ante-Neandertalian, known as the "Man of Tautavel", estimated


Figure 5.3 The reference head. The images of the skull (top) and of the face cast (bottom) have been manually aligned by using some anatomical landmarks (middle).

to be 450,000 years old (see Figure 5.5). The prehistoric reconstitution is based on the face (Arago XXI) and the right parietal (Arago XLVII), which were found in the Arago cave at Tautavel, France, in 1971; on the left parietal, obtained by symmetry; on a mold of the Swanscombe occipital; and on the temporal bone of Sangiran 17 (Pithecanthropus VIII) and its mirror image (de Lumley et al. 1982). The 3D image consists of 154 slices with a thickness of 1 mm, composed of 512 by 512 pixels of 0.5 by 0.5 mm.


Figure 5.4 Experiment 1: as for the reference head, the images of the skull (top) and of the face cast (bottom) have been manually aligned by using some anatomical landmarks (middle).

Figure 5.5 Experiment 2: the aim is to infer the prehistoric face of the Man of Tautavel from the fossil skull that is estimated to be 450 000 years old.


Figure 5.6 Mathematical definition of crest lines: k1 is the principal curvature that is maximal in absolute value and t1 its associated principal direction; grad k1 · t1 = 0 ⇔ P is a crest point and belongs to a crest line. Reprinted from Medical Image Analysis (Subsol et al. 1998), with permission from Elsevier.

Figure 5.7 Crest lines automatically extracted in a CT scan of a skull. Notice how crest lines emphasize the mandible, the orbits, the cheekbones, or the temples and also, inside the cranium, the sphenoid and temporal bones as well as the foramen magnum. Reprinted from Medical Image Analysis (Subsol et al. 1998), with permission from Elsevier.

5.2.2

EXTRACTION OF FEATURE POINTS AND LINES

To compute the 3D transformation, we have to find some landmarks on the surface of the skull. They must be defined by an unambiguous mathematical formula, so that they can be computed automatically, and be anatomically relevant to characterize the structure. We chose "crest lines" (Thirion and Gourdon 1996), which are defined as the loci where the principal curvature of largest absolute value is extremal along its associated principal direction (see Figure 5.6). By definition, these lines follow the salient lines of a surface. We can check this in Figure 5.7, where the crest lines, automatically extracted in a CT scan of a skull, emphasize the mandible, the orbits, the cheekbones, or the temples and also, inside the cranium, the sphenoid and temporal bones as well as the foramen magnum.
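The crest-point criterion can be illustrated numerically. The sketch below, written for a synthetic height-field surface with NumPy only, computes the principal curvatures from the fundamental forms and flags points where the directional derivative of k1 along t1 vanishes. It only illustrates the definition; it is not the marching-lines extraction on CT isosurfaces actually used by the authors (Thirion and Gourdon 1996), which handles orientation consistency and produces connected lines.

```python
import numpy as np

def crest_mask(h, spacing=1.0, eps=1e-2):
    """Approximate crest points of a height field z = h(x, y).

    Illustrates the criterion grad(k1) . t1 = 0, where k1 is the principal
    curvature of largest absolute value and t1 its principal direction.
    """
    hy, hx = np.gradient(h, spacing)          # first derivatives (rows = y, cols = x)
    hyy, hyx = np.gradient(hy, spacing)
    hxy, hxx = np.gradient(hx, spacing)

    # First (I) and second (II) fundamental forms of the graph surface.
    E, F, G = 1.0 + hx**2, hx * hy, 1.0 + hy**2
    w = np.sqrt(1.0 + hx**2 + hy**2)
    L, M, N = hxx / w, hxy / w, hyy / w
    first = np.stack([np.stack([E, F], -1), np.stack([F, G], -1)], -2)
    second = np.stack([np.stack([L, M], -1), np.stack([M, N], -1)], -2)

    # Shape operator S = I^-1 II; its eigenvalues are the principal curvatures.
    S = np.linalg.solve(first, second)
    k, v = np.linalg.eig(S)
    k, v = k.real, v.real
    idx = np.argmax(np.abs(k), axis=-1)                       # index of k1 = max |curvature|
    k1 = np.take_along_axis(k, idx[..., None], -1)[..., 0]
    t1 = np.take_along_axis(v, idx[..., None, None], -1)[..., 0]  # direction (x, y components)

    # Directional derivative of k1 along t1; crest points are (approximately) its zeros.
    dk1_y, dk1_x = np.gradient(k1, spacing)
    extremality = dk1_x * t1[..., 0] + dk1_y * t1[..., 1]
    return np.abs(extremality) < eps

# Example: a bumpy synthetic surface whose two ridges should be flagged.
x, y = np.meshgrid(np.linspace(-3, 3, 200), np.linspace(-3, 3, 200))
h = np.exp(-x**2) + 0.5 * np.exp(-(y - 1.0)**2)
mask = crest_mask(h, spacing=x[0, 1] - x[0, 0])
```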


Salient structures are also used by doctors as anatomical landmarks. For example, the crest-line definition is very close to the "ridge-line" definition given by Cutting et al. (1993) (see Figure 5.8), which corresponds to the type II landmark in Bookstein's typology (Bookstein 1991). In Figure 5.9, we display on the same skull the crest lines (in gray), which were automatically extracted, and the ridge lines (in black), which were extracted semimanually under the supervision of an anatomist. The two sets of lines are visually very close, suggesting that crest lines have a strong anatomical significance.

5.2.3

REGISTRATION OF FEATURE LINES

We extracted several hundred crest lines, composed of several thousand points, on the skulls and then needed to find the correspondences between these features (see Figure 5.10). Usually this is done manually by an anatomist who knows the biological homology: two features are put into correspondence if they characterize the same biological functionality. In our case, there are so many points that this becomes impossible, and we had to design an algorithm to find the correspondences automatically. This is a very well known problem in 3D image processing called "automatic registration" (Ayache 2003). We have developed a method, described by Subsol et al. (1998), that iteratively and continuously deforms the first set of lines towards the second one in order to superimpose them. At the end of the process, each point Pi of the first set is matched with the point Qi of the second set that is the closest, and some inconsistent

Figure 5.8 “Ridge lines” are extracted semimanually under the supervision of an anatomist and are used for applications in craniofacial surgery and paleontology (Dean et al. 1998). By permission of the Journal of Craniofacial Surgery.


Figure 5.9 Comparison of "ridge" lines and "crest" lines. The precise superimposition of crest (in red) and ridge lines (in blue) shows that crest lines would have a strong anatomical significance, even if they are based on a mathematical definition. Reprinted from Medical Image Analysis (Subsol et al. 1998), with permission from Elsevier.

Figure 5.10 The registration problem in experiment 2. The difficulty is to find the correspondences between the features, as for example the pairings (P1, Q1) or (P2, Q2). Top: crest lines on the reference skull (536 lines and 5756 points). Bottom: crest lines on the skull of the Man of Tautavel (337 lines and 5417 points).

correspondences are discarded. In our experiments, the algorithm finds, in a few minutes on a standard personal computer, around 1,500 point pairings (Pi, Qi), located all around the inside and outside surfaces of the skull. Thirion et al. (1996) checked on the data of several skulls that these registration results are consistent with those obtained by another automatic method



Figure 5.11 The complex geometrical normalization in experiment 2. First, the rotation R, the translation t, and the scaling s are automatically computed from pairs of homologous points (Pi, Qi) in order to align the two skulls in the same position and orientation and to compensate for the difference in global size. Moreover, we can notice that the skull of the Man of Tautavel is bent (top, right). This is due to the fact that it had lain on its side and was compressed by gravity. We have modeled this taphonomic deformation by applying an affine transformation A⁻¹: the two skulls are now normalized and comparable (bottom, left).

and by a semimanual method where an anatomist supervises the detection of homologous points.
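The closest-point matching and the rejection of inconsistent pairings can be sketched as follows with a k-d tree. This is a much-simplified stand-in for the iterative line-registration algorithm of Subsol et al. (1998): the distance threshold and the mutual-nearest-neighbour test are generic consistency checks assumed here for illustration, not the authors' exact criteria.

```python
import numpy as np
from scipy.spatial import cKDTree

def match_points(deformed_src, dst, max_dist=5.0):
    """Pair each deformed source point with its closest target point,
    then discard inconsistent correspondences.

    deformed_src : (N, 3) crest-line points of the reference skull, after the
                   current iteration of the deformation.
    dst          : (M, 3) crest-line points of the unknown skull.
    max_dist     : rejection threshold in mm (illustrative value).
    """
    d, j = cKDTree(dst).query(deformed_src)      # closest target for each source point

    # Consistency tests: reject pairs that are too far apart and pairs that
    # are not mutual nearest neighbours.
    _, back = cKDTree(deformed_src).query(dst)
    mutual = back[j] == np.arange(len(deformed_src))
    keep = (d < max_dist) & mutual

    return deformed_src[keep], dst[j[keep]]      # matched pairs (Pi, Qi)
```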

5.2.4

GEOMETRICAL NORMALIZATION

Before computing the deformation between the reference and the unknown skulls, we have to align them in the same position and orientation and to compensate for the difference in global size (see Figure 5.11). This requires the computation of the three following transformations: the rotation R, the translation t, and the scaling s. Several methods exist to compute (s, R, t) based on pairs of homologous points (Pi, Qi), such as the "Procrustes superimposition" (Bookstein 1991) or the least-squares minimization that leads to:

(s, R, t) = Argmin(s, R, t) Σi || s R Pi + t − Qi ||².
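A minimal sketch of this least-squares computation, using the classical SVD-based (Umeyama-type) closed form, is given below. It is one standard way of solving the minimization above, not necessarily the implementation used by the authors.

```python
import numpy as np

def fit_similarity(P, Q):
    """Least-squares similarity transform: argmin over (s, R, t) of
    sum_i || s R P_i + t - Q_i ||^2, solved with the classical SVD approach.

    P, Q : (N, 3) arrays of homologous points.
    Returns (s, R, t) with R a proper rotation matrix.
    """
    mu_p, mu_q = P.mean(axis=0), Q.mean(axis=0)
    Pc, Qc = P - mu_p, Q - mu_q

    U, sig, Vt = np.linalg.svd(Qc.T @ Pc)          # cross-covariance matrix
    D = np.eye(3)
    D[2, 2] = np.sign(np.linalg.det(U @ Vt))       # avoid reflections
    R = U @ D @ Vt

    s = np.trace(np.diag(sig) @ D) / np.sum(Pc**2) # optimal isotropic scale
    t = mu_q - s * R @ mu_p
    return s, R, t

# Usage: align the reference skull points onto the unknown skull.
# s, R, t = fit_similarity(P_pairs, Q_pairs)
# P_normalized = (s * (R @ P_pairs.T)).T + t
```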

Sometimes, the shape of the unknown skull was altered either at the time of death (e.g., local deformation in the case of trauma or an assault) or postmortem (e.g., compression due to the weight of earth in the case of


burying). It then becomes necessary to model this alteration in order to recover the original shape of the skull. Such a task is extremely difficult, as the alterations can be geometrically very complex. Thus, in the case of experiment 2, we can notice that the skull of the Man of Tautavel is bent (see Figure 5.11, top, right). This is due to the fact that it had lain on its side and was compressed by gravity and the weight of sediments. We have modeled this taphonomic deformation by computing an affine transformation A between the reference and the unknown skulls. The degrees of freedom corresponding to the diagonal coefficients of the matrix representing A allow the bending alteration to be modeled. After applying A⁻¹, the reference and the unknown skulls are really comparable (see Figure 5.11, bottom, left). Another way to correct the bending of the skull would be to extract the midsagittal plane automatically (Thirion et al. 2000) and to realign it with the vertical plane. Nevertheless, knowledge of the in situ orientation of the fossil is indispensable, since similar deformations might be the result of many different taphonomic events (Ponce de León and Zollikofer 1999).

5.2.5

COMPUTING THE 3D TRANSFORMATION

Now, we have to compute the 3D transformation between the reference and the unknown skulls, which have both been normalized. The "thin-plate spline" method (Bookstein 1991), widely used in morphometry, allows the computation of such a function that interpolates the displacements of the normalized homologous points (Pi, Qi) with some mathematical properties of regularity. Nevertheless, interpolation is relevant when the matched points are totally reliable and distributed regularly (for example, a few points located manually). In our case, these points are not totally reliable, due to possible mismatches of the registration algorithm, and, as they belong to lines, they are concentrated in a few compact areas. So, we have developed a spline approximation function that is regular enough to minimize the influence of an erroneously matched point (Subsol et al. 1998). The coordinate functions are computed by a 3D tensor product of B-spline basis functions. To compute this 3D transformation T, we minimize the weighted sum of an approximation criterion (the quadratic distance between T(P′i) and Q′i) and a regularization criterion (the sum of the squared second-order derivatives, which corresponds to the "curvature" of the function):

T = Argmin(T) [ Σi || T(P′i) − Q′i ||² + λ ∫∫∫V ( (∂²T/∂x²)² + (∂²T/∂x∂y)² + … + (∂²T/∂z²)² ) dV ].

The parameter λ balances the approximation accuracy and the smoothness of the transformation.
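The chapter's transformation is a 3D tensor-product B-spline approximation, which is not reproduced here. As an accessible stand-in with the same structure (a data-fidelity term balanced against a smoothness penalty by a single parameter), the sketch below fits a smoothed thin-plate-spline deformation with SciPy and applies it to the reference face; the smoothing value is an arbitrary placeholder.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def fit_warp(P, Q, smoothing=1.0):
    """Fit a smooth deformation T of R^3 such that T(P_i) ~ Q_i.

    Approximation rather than interpolation: the `smoothing` parameter plays
    the role of the lambda that balances data fidelity against bending energy
    (here through SciPy's regularized thin-plate-spline solve, used as a
    stand-in for the tensor-product B-spline approximation of the chapter).
    """
    return RBFInterpolator(P, Q, kernel='thin_plate_spline', smoothing=smoothing)

# Usage: warp the reference face vertices (and, for display as in Figure 5.12,
# the nodes of a regular 3D grid).
# T = fit_warp(P_pairs_normalized, Q_pairs_normalized, smoothing=10.0)
# unknown_face_vertices = T(reference_face_vertices)
```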


Figure 5.12 Experiment 2: the 3D transformation between the reference skull (top) and the skull of the Man of Tautavel (bottom) is automatically computed based on the registered crest lines. Notice how the deformed mesh emphasizes the main features of the skull of the Man of Tautavel: low cranium, receding forehead, and protuberant face, as well as a marked frontal dissymmetry due to the taphonomic deformations.

By applying T to a 3D regular mesh, it becomes possible to display the transformation. In particular, we can notice in Figure 5.12 that the deformed mesh emphasizes the main features of the skull of the Man of Tautavel: low cranium, receding forehead, and protuberant face, as well as a marked frontal dissymmetry due to the taphonomic deformations (Mafart et al. 1999).

5.3

RESULTS AND DISCUSSION

5.3.1

THE PROBLEM OF THE VALIDATION

EXPERIMENT 1

Figure 5.13 Experiment 1: comparison of the deformed reference face and of the actual "unknown" face (Quatrehomme et al. 1997). The overall proportions of the face are correctly inferred. By permission of the Journal of Forensic Sciences.

As for all methods of facial reconstruction, the main concern is the validity of the reconstruction result. In the first case presented, we are able to compare the automatic facial reconstruction with the real face (see Figure 5.13). Even though the reference and the actual faces have such different morphologies that the hypotheses of similar age or corpulence no longer hold, the result looks fairly interesting. In particular, the overall proportions of the face, such as its width, the interocular distance, or the eyebrow thickness, are correctly inferred. The lower part of the face resembles less closely, but this may be due to the difference in the opening of the mandible between the reference and the "unidentified" skulls. Soft parts such as the nose or the cheeks are also not very significant.

In fact, it is very difficult to quantify the resemblance between two faces. We could compute an objective distance, based for example on the squared sum of distances between some feature points of the deformed reference and the actual faces, or even between the whole surfaces if we use dense meshes. As the distance will never be strictly equal to zero, it would require defining a threshold to decide whether two faces really resemble each other. This value should be below the average distance corresponding to the variability among different human faces. But such a parameter, which must be computed on a very large database of representative faces, is not known yet. Moreover, some features such as hair, skin complexion, or the color of the eyes are totally absent from our virtual reconstructions, whereas they are most important for recognition. Finally, when we deal with 3D models, the problem becomes much more complex: a slight shift in the orientation of the reconstructed head can induce large differences in the perception of the face shape, due to differences of shading or lighting. As emphasized in Quatrehomme and Subsol (2004), very few studies have been performed on the validation of 3D reconstruction methods on a "significant" scale, whether they are automated or not.
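As a hedged illustration of such an objective measure, the following sketch computes a symmetric mean nearest-neighbour distance between two dense vertex sets. The choice of metric and the absence of any validated threshold are exactly the open issues discussed above.

```python
import numpy as np
from scipy.spatial import cKDTree

def mean_surface_distance(A, B):
    """Symmetric mean nearest-neighbour distance between two dense vertex
    sets A and B (a crude proxy for facial resemblance; it ignores hair,
    complexion, eye colour, and the perceptual effects of pose and lighting).
    """
    d_ab, _ = cKDTree(B).query(A)    # each vertex of A to the closest vertex of B
    d_ba, _ = cKDTree(A).query(B)
    return 0.5 * (d_ab.mean() + d_ba.mean())
```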


In conclusion, there is a strong need to define a clear and reproducible protocol to evaluate the quality of a 3D computer-assisted facial reconstruction with respect to the real face. This would allow a retrospective study to compare and validate the different methods, as has been done for other image-processing applications such as rigid registration (West et al. 1999).

EXPERIMENT 2

In the case of the Man of Tautavel, the problem is much more difficult as we do not know the actual face! Nevertheless, we can compare our reconstruction with the ones obtained with different methods (see Figure 5.14): 2D drawings and manual 3D facial reconstructions performed by an artist and by forensic medical doctors. All these methods emphasize the same morphological features of ante-Neandertalians. The global similarity of the results is encouraging and indicates that our automatic method can give a consistent overall appearance of the face. This is all the more interesting since the reference head was not, a priori, consistent with the Man of Tautavel: the morphology is very different, as 450,000 years of human evolution separate the two men, and the Man of Tautavel is estimated to have been 20–30 years old, whereas the reference man was in his seventies. To refine the reconstruction, we could test several hypotheses on age or corpulence, based on different reference heads. This is clearly the main advantage of an entirely automatic method, which allows a reconstruction to be performed in a few minutes. The problem is then to set up a significant reference head.

5.3.2

DEFINING A REFERENCE HEAD

The variation in the shape of the head is so huge that, even if we restrict the population by sex, corpulence, or ethnicity criteria, it is impossible to find a subject with the perfect "average" head that could be scanned in 3D. So, very often, reference heads are taken from among the models that are available to the user, such as the Visible Human Data Set (Koch et al. 1996). Nevertheless, we can describe two ways to build a significant reference head. The first idea consists of using anthropometric or cephalometric measurements to model a virtual reference head with computer-graphic tools. DeCarlo et al. (1998) synthesized 3D face models of a North American Caucasian young adult male and female based on data presented by Farkas (1994). The second idea is to infer an average model directly from a database of 3D images of different heads (see Figure 5.15). Cutting et al. (1993) and Subsol et al. (1998) extract and register line landmarks in the 3D images of a skull. The positions of the corresponding landmarks are then averaged to define the reference model. Blanz and Vetter (2004) generate 3D face models based on 200 heads of young adults (100 male and 100 female). As the images were obtained by a laser scan, it was possible to model not only the 3D geometry of the face but also the texture.


Figure 5.14 Comparison of several facial reconstruction methods.
■ Upper line, from left to right: a 2D artistic drawing by Carlos Ranzi; a 3D manual facial reconstruction performed by an artist, Elisabeth Daynes, under the scientific direction of a paleontologist, Marie-Antoinette de Lumley; a 2D artistic drawing by Carlo Moretti. All these images by permission of the Centre Européen de Recherches Préhistoriques de Tautavel and its president Henry de Lumley (Tautavel, Web).
■ Middle line: different views of our 3D computerized facial reconstruction.
■ Bottom line: left and right: a 3D manual facial reconstruction performed by forensic medical doctors (Odin et al. 2002), by permission of the authors; middle: a 3D manual facial reconstruction performed by an artist, André Bordes, under the scientific direction of a paleontologist, Marie-Antoinette de Lumley, by permission of the Centre Européen de Recherches Préhistoriques de Tautavel and its president Henry de Lumley.


Figure 5.15 A dataset of 6 different skulls segmented from CT scans. Notice how the shapes are different. Reprinted from Medical Image Analysis (Subsol et al. 1998), with permission from Elsevier.

The second method appears more precise, as the averaging process takes into account all the data available in the images—up to several thousand points if the surface meshes are dense—and not only the positions of some tens of anthropological landmarks. Nevertheless, no one has yet designed a complete average model of the head that includes both an average skull and a face model registered with it.
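Once a set of heads has been registered and normalized so that their points correspond, the averaging step itself is straightforward; the sketch below assumes this correspondence has already been established (which, as discussed above, is the hard part).

```python
import numpy as np

def average_head(registered_heads):
    """Average a set of heads whose landmarks (or dense vertices) have already
    been put into correspondence and normalized (e.g., Procrustes-aligned).

    registered_heads : (K, N, 3) array, K heads with N corresponding points.
    Returns the (N, 3) mean shape, usable as a reference model.
    """
    return np.asarray(registered_heads).mean(axis=0)
```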

5.3.3

MODELING THE HUMAN VARIABILITY

As the presented method is automatic and fast, it is possible to test several hypotheses of reconstruction based on different criteria such as age, sex, ethnicity, or corpulence. As it seems impossible to set up a database of reference heads corresponding to all the categories, a solution is to infer different heads from a reference one.

AGE

Milner et al. (2001) alter the 2D profile of a skull according to some cephalometric measurements and perform a manual face reconstruction to infer a face that is 50 years older. The authors caution that their work is more an exercise than a real methodology as there is no other result against which to compare it. Evison (2004) interpolates between a young and an old 3D facial reconstruction of a male, leading to intermediately aged faces. But the method is


based on the hypothesis that the points of the face move linearly between the two extremes, which could be considered simplistic. Hutton et al. (2002) build a more complex model of facial growth based on a training set of 3D surface scans of 199 male and 201 female subjects with ages between 0 and 50. Coughlan (1997) decomposes craniofacial growth into two processes called remodelling and displacement. This model is combined with a 3D computer-based facial reconstruction method by Archer et al. (1998).

ETHNICITY

Dean et al. (1998) build average skull models using a database of skull CT-scan images of Caucasian Americans and African Americans, male and female. This makes it possible to emphasize the anatomical differences between the ethnicities and the sexes. A range of variation that corresponds to individuals of mixed African and European ancestry could be obtained by using 3D interpolation (Evison 2004).

CORPULENCE

Archer et al. (1998) tune the length of the virtual "dowels" that link the face and skull models to allow the user to modify the corpulence. A 3D interpolation (Evison 2004) process can then synthesize a potential range of obesity.

EXPRESSIVITY

Kähler et al. (2003) build the reference head model on an anatomical basis which comprises the underlying muscles and the bone layers. Once the model is fitted to the unknown skull, it becomes possible to animate the face and obtain different expressions by setting muscle contractions. If the generic reconstruction is based on a neutral pose, it can be helpful to present several expressions for identification purposes.

5.3.4

INFERRING ILL-DEFINED FACIAL PARTS OR FEATURES

Many soft parts of the face are difficult to infer, in particular the nose, chin, eyes, lips, or ears (Quatrehomme and Subsol 2004). Kähler et al. (2003) express some empirical rules that are used in traditional facial reconstruction by automatically placing vertical and horizontal guides in a frontal view. The user can then move or update some landmarks in order to refine the reconstruction. Tu et al. (2004) extracted a collection of eye, nose, and lip models from 3D scans of various individuals. The user can then manually place a model that will be blended onto the 3D reconstruction. More generally, a facial editing tool described by Archer et al. (1998) enables the user to locally modify the shape of the reconstruction. Vanezis et al. (2000) show how a frontal 2D view of the 3D reconstruction can be imported into a


police identikit system, which allows the addition of features such as open eyes, hair, or glasses. Another way to make the reconstruction more realistic is to map a reference texture of the head onto the 3D reconstruction. Attardi et al. (1999) and Tu et al. (2004) generate a cylindrical texture either from a set of 2D views or from a 2.5D laser scan of an individual whose face is assumed to resemble the face to be reconstructed. The texture is then fitted and projected onto the 3D reconstruction. Notice that much research has also been performed to synthesize wrinkles (e.g., Wu et al. 1995) that could be added to the 3D reconstruction.

5.4

CONCLUSION

Facial reconstruction is used more and more for museum presentations (Prag and Neave 1997), but still only for human beings. With the advent of 3D computer-graphic methods, which are more flexible and require less effort and time, we can imagine performing facial reconstructions of animals. For example, the appearance of a prehistoric felid (Antón and García-Perea 1998) could be inferred from a fossil skull by using the head of a modern felid as a reference. This is completely feasible with the method presented in this chapter, as it does not depend on soft-tissue thickness measurements, which, to our knowledge, are not available for animals. Moreover, some data are already available, as several fossil animals have been CT-scanned; see for example CTLab (Web). Such facial reconstructions would be of great interest for museums and could be presented either in real exhibition rooms with special graphic devices (Bimber et al. 2002) or on virtual Web sites (Yasuda et al. 2002).

ACKNOWLEDGEMENTS

The work described in this chapter is currently part of research carried out in the FOVEA Project funded by the Program "Société de l'Information" of the French National Center for Scientific Research (CNRS). Part of this work was done when Gérard Subsol was with the EPIDAURE Project, INRIA Sophia Antipolis, France, and with the Laboratory of Computer Science of the University of Avignon, France. The authors would like to thank Bertrand Mafart for his valuable help and Marie-Antoinette and Henry de Lumley for making the Man of Tautavel material available.

REFERENCES

The links to the electronic versions were checked around mid-2003. Since this date, some of them may have been moved or deleted.


Antón M. and García-Perea R. (1998) "Reconstructed Facial Appearance of the Sabretoothed Felid Smilodon", Zoological Journal of the Linnean Society 124, pp. 369–386.
Archer K., Coughlan K., Forsey D. and Struben S. (1998) "Software Tools for Craniofacial Growth and Reconstruction", Graphics Interface, Vancouver (Canada), June 1998. Electronic version: http://www.graphicsinterface.org/proceedings/1998/120/.
Attardi G., Betrò M., Forte M., Gori R., Guidazzoli A., Imboden S. and Mallegni F. (1999) "3D Facial Reconstruction and Visualization of Ancient Egyptian Mummies Using Spiral CT Data – Soft Tissues Reconstruction and Textures Application", SIGGRAPH Sketches and Applications, Los Angeles (USA), August 1999. Electronic version: http://medialab.di.unipi.it/Project/Mummia/.
Ayache N. (2003) "Epidaure: a Research Project in Medical Image Analysis, Simulation and Robotics at INRIA", to appear in IEEE Transactions on Medical Imaging, 2003. Electronic version: http://www-sop.inria.fr/epidaure/BIBLIO/index.html.
Bimber O., Gatesy S., Witmer L., Raskar R. and Encarnação L. (2002) "Merging Fossil Specimens with Computer-Generated Information", Computer, September 2002, pp. 45–50. Electronic version: http://citeseer.nj.nec.com/bimber02merging.html.
Bookstein F. (1991) Morphometric Tools for Landmark Data. Cambridge University Press.
Blanz V. and Vetter T. (2004) "A Morphable Model for the Synthesis of 3D Faces", Computer-Graphic Facial Reconstruction, Chapter 5, Academic Press, 2004.
Bramble S., Compton D. and Klasén L. (2001) "Forensic Image Analysis", 13th INTERPOL Forensic Science Symposium, Lyon, France, October 2001. Electronic version: http://www.interpol.int/Public/Forensic/IFSS/meeting13/Reviews/Image.pdf.
Coughlan K. (1998) "Simulating Craniofacial Growth". Master's Thesis, University of British Columbia, April 1997. Electronic version: http://www.cs.ubc.ca/labs/imager/th/coughlan.msc.1997.html.
CTLab (Web) http://www.ctlab.geo.utexas.edu/.
Cutting C., Bookstein F., Haddad B., Dean D. and Kim D. (1993) "A Spline-Based Approach for Averaging Three-Dimensional Curves and Surfaces", in: SPIE Conference on Mathematical Methods in Medical Imaging II, Vol. 2035, pp. 29–43.
Dean D., Bookstein F.L., Koneru S., Lee J.H., Kamath J., Cutting C.B., Hans M. and Goldberg J. (1998) "Average African American Three-Dimensional Computed Tomography Skull Images: The Potential Clinical Importance of Ethnicity and Sex", The Journal of Craniofacial Surgery 9(4), July 1998.
DeCarlo D., Metaxas D. and Stone M. (1998) "An Anthropometric Face Model Using Variational Techniques", SIGGRAPH, pp. 67–74, Orlando (USA), July 1998. Electronic version: http://athos.rutgers.edu/decarlo/anthface.html.


Evison M. (2004) "Modeling Age, Obesity, and Ethnicity in a Computerized Facial Reconstruction", Computer-Graphic Facial Reconstruction, Chapter 10, Academic Press, 2004.
Farkas L. (ed.) (1994) Anthropometry of the Head and Face. Raven Press.
Hutton T., Buxton B. and Hammond P. (2002) "Estimating Average Growth Trajectories in Shape-Space using Kernel Smoothing", International Workshop on Growth and Motion in 3D Medical Images, European Conference on Computer Vision, Copenhagen (Denmark), 2002. To appear in IEEE Transactions on Medical Imaging. Electronic version: http://www.eastman.ucl.ac.uk/dmi/Papers/growth.pdf.
Jones M. (2001) "Facial Reconstruction Using Volumetric Data", in: Vision, Modeling, and Visualization 2001 (Ertl T., Girod B., Greiner G., Niemann H. and Seidel H., eds.). IOS Press, Stuttgart (Germany), pp. 135–142. Electronic version: http://citeseer.nj.nec.com/482378.html.
Kähler K., Haber J. and Seidel H. (2003) "Reanimating the Dead: Reconstruction of Expressive Faces from Skull Data", ACM SIGGRAPH Conference Proceedings, San Diego, USA. Electronic version: http://www.mpi-sb.mpg.de/resources/FAM/demos.html.
Koch R., Gross M., Carls F., von Büren D., Fankhauser G. and Parish Y. (1996) "Simulating Facial Surgery Using Finite Element Models", Computer Graphics Vol. 30, Annual Conference Series, pp. 421–428. Electronic version: http://citeseer.nj.nec.com/koch96simulating.html.
de Lumley H., de Lumley M. and David R. (1982) "Découverte et reconstruction de l'Homme de Tautavel", 1er Congrès de Paléontologie Humaine, Tome 1, Nice (France), 1982.
Mafart B., Méline D., Silvestre A. and Subsol G. (1999) 3D Imagery and Paleontology: Shape Differences between the Skull of Modern Man and that of Tautavel Man. B. Hidoine, A. Paouri, designers and directors, video 451–452, Department of Scientific Multimedia Communication, INRIA. Electronic version: http://www.inria.fr/multimedia/Videotheque/0-Fiches-Videos/451-fra.html.
Mafart B. and Delingette H., with the collaboration of Subsol G. (eds.) (2002) "Colloquium on Three-Dimensional Imaging in Paleoanthropology and Prehistoric Archaeology", Liège (Belgium), September 2001. Published in British Archaeological Reports International Series, No. 1049, Archaeopress.
Milner C., Neave R. and Wilkinson C. (2001) "Predicting Growth in the Aging Craniofacial Skeleton", Forensic Science Communications 3(3). Electronic version: http://www.fbi.gov/hq/lab/fsc/backissu/july2001/milner.html.
Nelson L. and Michael S. (1998) "The Application of Volume Deformation to Three-Dimensional Facial Reconstruction: A Comparison with Previous Techniques", Forensic Science International 94, pp. 167–181.


Odin G., Quatrehomme G., Subsol G., Delingette H., Mafart B. and de Lumley M.A. (2002) "Comparison of a Three-Dimensional and a Computerized-Assisted Method for Cranio-Facial Reconstruction: Application to the Tautavel Man", XIV International Congress of Prehistoric and Protohistoric Science, Liège (Belgium), September 2001. Abstract page 23 in preprints. For the full text, see Mafart and Delingette (2002) in the references above (pp. 67–69).
Petrick M. (2000) "Volumetric Facial Reconstruction for Forensic Identification", Seminar Computergraphik, Universität Koblenz-Landau (Germany), October 2000. Text in German. Electronic version: http://www.uni-koblenz.de/cg/veranst/ws0001/4.1.26_Seminar_Computergraphik.html.
Plasencia J. (1999) International Conference in Central Europe on Computer Graphics, Visualization and Interactive Digital Media '99, Plzen-Bory, Czech Republic, February 1999. Electronic version: http://citeseer.nj.nec.com/490826.html.
Ponce de León and Zollikofer (1999) missing.
Prag J. and Neave R. (1997) Making Faces – Using Forensic and Archaeological Evidence. British Museum Press.
Quatrehomme G. (2000) Reconstruction faciale: intérêt anthropologique et médico-légal. Ph.D. in Anthropology, University of Bordeaux, France.
Quatrehomme G., Cotin S., Subsol G., Delingette H., Garidel Y., Grévin G., Fidrich M., Bailet P. and Ollier A. (1997) "A Fully Three-Dimensional Method for Facial Reconstruction Based on Deformable Models", Journal of Forensic Sciences 42(4), pp. 649–652.
Quatrehomme G. and Subsol G. (2004) "Classical Non-Computer-Assisted Craniofacial Reconstruction", Computer-Graphic Facial Reconstruction, Chapter 2, Academic Press.
Shaham D., Sosna J., Makori A., Slasky B., Bar-Ziv J. and Donchin Y. (2000) "Post Mortem CT-Scan: An Alternative Method in Forensic Medicine and Trauma Research", The Internet Journal of Rescue and Disaster Medicine 2(1). Electronic version: http://www.ispub.com/ostia/index.php?xmlFilePath=journals/ijrdm/archives.xml.
Seibert F. (1997) "Model-Based 3D-Reconstruction of Human Faces", Computer Graphik Topics No. 3, pp. 8–9. Electronic version: http://www.inigraphics.net/publications/topics/1997/issue3/issue3_home.html.
Spoor F., Jeffery N. and Zonneveld F. (2000) "Imaging Skeletal Growth and Evolution", Chapter 6 in: Development, Growth and Evolution. The Linnean Society of London. Electronic version: http://www.isi.uu.nl/People/Frans/.
Subsol G., Thirion J. and Ayache N. (1998) "A General Scheme for Automatically Building 3D Morphometric Anatomical Atlases: Application to a Skull Atlas", Medical Image Analysis 2(1), pp. 37–60.


Subsol G., Mafart B., Silvestre A. and de Lumley M. (2002) "3D Image Processing for the Study of the Evolution of the Shape of the Human Skull: Presentation of the Tools and Preliminary Results", XIV International Congress of Prehistoric and Protohistoric Science, Liège (Belgium), September 2001. Abstract page 22 in preprints; full text in: Mafart B. and Delingette H., with the collaboration of Subsol G. (eds.), "Three-Dimensional Imaging in Paleoanthropology and Prehistoric Archaeology", British Archaeological Reports International Series No. 1049, pp. 37–45.
Tautavel (Web) http://www.tautavel.culture.gouv.fr/.
Thirion J. and Gourdon A. (1996) "The 3D Marching Lines Algorithm", Graphical Models and Image Processing 58(6), pp. 503–509. Electronic version: http://www.inria.fr/RRRT/RR-1881.html.
Thirion J., Subsol G. and Dean D. (1996) "Cross Validation of Three Inter-Patients Matching Methods", in: Visualization in Biomedical Computing, Hamburg (Germany), Lecture Notes in Computer Science (K.H. Höhne and R. Kikinis, eds.), Vol. 1131, pp. 327–336, Springer.
Thirion J., Prima S., Subsol G. and Roberts N. (2000) "Statistical Analysis of Normal and Abnormal Dissymmetry in Volumetric Medical Images", Medical Image Analysis 4(2), pp. 111–121. Electronic version: http://citeseer.nj.nec.com/thirion97statistical.html.
Thompson J. and Illerhaus B. (1998) "A New Reconstruction of the Le Moustier 1 Skull and Investigation of Internal Structures Using 3-D-CT Data", Journal of Human Evolution 35, pp. 647–665.
Tu P., Hartley R., Lorensen W., Allyassin A., Gupta R. and Heier L. (2004) "Face Reconstructions using Flesh Deformation Modes", Computer-Graphic Facial Reconstruction, Chapter 6, Academic Press.
Tyrrell A., Evison M., Chamberlain A. and Green M. (1997) "Forensic Three-Dimensional Facial Reconstruction: Historical Review and Contemporary Developments", Journal of Forensic Sciences 42(4), pp. 653–661.
Vanezis P., Blowes R., Linney A., Tan A., Richards R. and Neave R. (1989) "Application of 3D Computer Graphics for Facial Reconstruction and Comparison with Sculpting Techniques", Forensic Science International 42, pp. 69–84.
Vanezis P., Vanezis M., McCombe G. and Niblett T. (2000) "Facial Reconstruction using 3-D Computer Graphics", Forensic Science International 108, pp. 81–95. Electronic version: http://citeseer.nj.nec.com/vanezis00facial.html.
Vignal J. (2004) "New Advances in 3D Facial Reconstruction using Computer Modelization", Computer-Graphic Facial Reconstruction, Chapter 14, Academic Press.


West J., Fitzpatrick J., Wang M., Dawant B., Maurer C., Kessler R., Maciunas R., Barillot C., Lemoine D., Collignon A., Maes F., Suetens P., Vandermeulen D., van den Elsen P., Hemler P., Napel S., Sumanaweera T., Harkness B., Hill D., Studholme C., Malandain G., Pennec X., Noz M., Maguire G., Pollack M., Pellizzari C., Robb R., Hanson D. and Woods R. (1996) "Comparison and Evaluation of Retrospective Intermodality Image Registration Techniques", SPIE Conference on Medical Imaging 2710, pp. 332–347, Newport Beach, USA. Electronic version: http://citeseer.nj.nec.com/west98comparison.html.
Wilhelms J. and Van Gelder A. (1997) "Anatomically Based Modeling", Computer Graphics 31, Annual Conference Series, pp. 173–180. Electronic version: http://citeseer.nj.nec.com/wilhelms97anatomically.html.
Wu Y., Magnenat Thalmann N. and Thalmann D. (1995) "A Dynamic Wrinkle Model in Facial Animation and Skin Ageing", Journal of Visualization and Computer Animation 6(4), pp. 195–206. Electronic version: http://citeseer.nj.nec.com/wu95dynamic.html.
Yasuda T., Yokoi S., Yoshida S. and Endo M. (2002) "A Method for Restoration and Exhibition of Relics in the Virtual Museum", Museums and the Web, Boston, USA. Electronic version: http://www.archimuse.com/mw2002/abstracts/prg_170000684.html.
