Pattern Recognition Letters 32 (2011) 190–196


Estimation of the orientation of textured patterns via wavelet analysis

Antoine Lefebvre (a), Thomas Corpetti (b,*), Laurence Hubert-Moy (a)

(a) Costel, Univ Rennes 2, UMR 6554 LETG, Rennes, France
(b) CNRS – LIAMA, Beijing, China

* Corresponding author. Tel.: +86 1062647458. E-mail address: [email protected] (T. Corpetti).
doi:10.1016/j.patrec.2010.09.021

Article info

Article history: Received 10 December 2009; Available online 7 October 2010; Communicated by Y.J. Zhang.

Keywords: Texture orientation estimation; Wavelets; Kullback–Leibler divergence.

Abstract

This paper is concerned with the estimation of the dominant orientation of textured patches that appear in a number of images (remote sensing, biology or natural sciences for instance). It is based on the maximization of a criterion defined on the coefficients enclosed in the different bands of a wavelet decomposition of the original image. More precisely, we search for the orientation that best concentrates the energy of the coefficients in a single direction. To compare the wavelet coefficients between the different bands, we use the Kullback–Leibler divergence on their distributions, the latter being assumed to behave like a Generalized Gaussian Density. The space–time localization of the wavelet transform makes it possible to deal with any polygon that may be contained in a single image. This is of key importance when one works with (non-rectangular) segmented objects. We have applied the same methodology with other criteria to compare the distributions, in order to highlight the benefit of the Kullback–Leibler divergence. In addition, the methodology is validated on synthetic and real situations and compared with a state-of-the-art approach devoted to orientation estimation.

© 2010 Elsevier B.V. All rights reserved.

1. Introduction

Finding the main orientation of a textured pattern is of key importance in many practical applications. For instance, in medical imaging or fluid mechanics (Marion and Vray, 2009), the transmittance images generate filaments that are transported by the underlying flow. When only a single image is available, the orientation of these filaments therefore informs on the direction of the flow. In remote sensing, images of land cover often exhibit a set of agricultural parcels whose orientation depends on the digging, the local topology or the nature of the crop: cereals or vineyards generate patterns with oriented textures whereas other parcels such as meadows, bare soils, etc. provide non-oriented (isotropic) textures. The measurement of this orientation for each parcel is therefore a crucial task for all applications concerning land coverage management and monitoring.

Among the existing approaches related to orientation estimation, one can roughly classify them into gradient-based (Freeman and Adelson, 1991; Germain et al., 2003; Le Pouliquen et al., 2005; Michelet et al., 2007) and spectral-based approaches (Jafari-Khouzani and Soltanian-Zadeh, 2005; Josso et al., 2005). Gradient-based methods consist in estimating the orientation on the basis of the derivatives of the photometric level-lines of the image luminance. Several very efficient approaches have been proposed to design reliable filters while minimizing the bias,

especially through the design of recursive filters (Le Pouliquen et al., 2005; Michelet et al., 2007). However, these techniques extract an orientation vector at each location of the image, which is not desired in our application since we rather seek a single vector representing the orientation of a whole polygon.

As for the spectral-based approaches, most existing techniques are based on the Fourier transform and exhibit competitive performances in numerous situations. Such frameworks are preferred when the aim is to extract only one angle for a whole pattern, as in our application. Unfortunately, they fail when multiple and sometimes non-rectangular objects occur in a single image. The lack of spatial information of the Fourier transform indeed prevents the localization of any structured pattern. In addition, as pointed out in (Jafari-Khouzani and Soltanian-Zadeh, 2005), the macro-textures coming from high-frequency micro-elements of several directions are tricky to analyze with such approaches since they are responsible for a lot of noise. It would therefore be of great benefit to rely on a multi-scale decomposition to overcome this shortcoming.

In this paper, we propose a wavelet-based approach for texture orientation estimation of the different patterns included in a pre-segmented image. The key idea is to find the angle of the rotation that best concentrates the energy in a given direction of a wavelet decomposition of the original image. An iterative strategy is presented which reaches a desired accuracy (given by the user) in a minimal number of iterations.

The paper is organized as follows: Section 2 presents our methodology. It is composed of four parts: Section 2.1 deals with


generalities about wavelets, Section 2.2 introduces the orientation criterion, which relies on a similarity measurement, Section 2.3 presents this similarity measurement, and Section 2.4 gives some details about the wavelet computations we perform in practice. Section 3 exhibits several experiments that validate our approach and compare it with a state-of-the-art one.

2. Wavelet-based texture orientation

In this section we present the methodology proposed for extracting the main orientation of a textured pattern. Let us first recall some generalities about the wavelet decomposition.

2.1. Wavelet decomposition of signals and images

The continuous wavelet decomposition F of a real signal f(x) (1-dimensional for the sake of clarity) is:

F(a, b) = \int_{-\infty}^{+\infty} f(t)\, \frac{1}{\sqrt{a}}\, \bar{\psi}\!\left(\frac{t - b}{a}\right) dt,    (1)

where ψ stands for the analyzing wavelet, \bar{\psi} being the complex conjugate of ψ, a is a scaling parameter and b is a position parameter. Any combination of the scaling and position parameters is possible. For discrete signals I[n], a discrete transform can be defined as:

I[j,k] = \sum_{n=-N/2}^{N/2} I[n]\, 2^{j/2}\, \bar{\psi}\!\left(2^{j/2}(n - k)\right)    (2)

for a position k and a scaling factor j. The support of the wavelet ψ is [-N/2, N/2]. It can be shown (see Mallat, 1989 for instance) that the family \psi_{j,k}[n] = 2^{j/2}\,\psi\!\left(2^{j/2}(n - k)\right) for (j, k) \in \mathbb{Z}^2 constitutes an orthonormal basis and any discrete signal I[n] can be represented as

I[n] = \sum_{k} a[k]\, \phi_{J,k}[n] + \sum_{j=J}^{\infty} \sum_{k} I[j,k]\, \psi_{j,k}[n],    (3)

where φ is the scaling function. The coefficients a[k] correspond to an "approximation" of the initial signal I at resolution J and the I[j,k] are the "details" associated with the scale j. For 2D discrete signals like digital images, the extension of the wavelet decomposition yields three kinds of details: horizontal, vertical and diagonal. Therefore, one of the strengths of the wavelet decomposition, compared to the Fourier transform, is that it allows a local description of the high frequencies of the original signal (through the detail bands) by differentiating three main directions. In this paper we propose to exploit this property to extract the main orientation of a pattern. This is presented in the next section.

2.2. Estimating the main orientation

2.2.1. Basic idea

In order to find the direction angle of a textured patch, we propose to iteratively rotate the original image and to compute the angle that concentrates the maximum of detail information in a single direction (vertical in practice). Therefore, if one represents an image I_θ (corresponding to the original I rotated by θ) as {A^1_θ, H^1_θ, V^1_θ, D^1_θ, ..., A^J_θ, H^J_θ, V^J_θ, D^J_θ}, where A^j_θ (resp. H^j_θ, V^j_θ, D^j_θ) represents the approximation (resp. horizontal, vertical and diagonal) band at the j-th level of the wavelet decomposition, we search for the angle θ̂ such that:

\hat{\theta} = \arg\max_{\theta}\, E(D, \{I_\theta\}) = \left\{\theta \mid E(D, \{I_\theta\}) = E_{max}\right\}, \quad \text{with} \quad E(D, \{I_\theta\}) = \sum_{j=0}^{J} \left[ D\!\left(V^j_\theta, H^j_\theta\right) + D\!\left(V^j_\theta, D^j_\theta\right) \right].    (4)

Here, D(·,·) is a symmetric similarity measurement between two distributions (the choice of this measurement will be discussed at the end of this section). The criterion E reaches its maximum E_max when the differences between all the vertical bands and the other ones are the highest. This corresponds to a major orientation of the patterns along the vertical axis. Therefore, the value of θ̂ corresponds to the angle between the vertical axis and the oriented pattern.¹ In practice, if one requires a given accuracy δθ on the estimation, a number N = 180/δθ of computations is theoretically needed. However, due to symmetry properties, one can restrict the search to the range [0°, 90°] since the coefficients in the vertical (resp. horizontal) band of I_{θ+90°} correspond to the coefficients in the horizontal (resp. vertical) band of I_θ. In addition, as proposed in (Marion and Vray, 2009), one can optimize the number of rotations to be processed.

¹ Note that the approximation band is not used here. We indeed assume that this band does not contain any relevant information regarding the texture and orientation because it refers to the lowest frequencies in the image.

2.2.2. Optimizing the number of rotations

Using a two-step approach, where the first step consists in a coarse estimation with accuracy θ̃ and the second step consists in a fine estimation with accuracy δθ, the number of rotations N_rot(θ̃) required is:

N_{rot}(\tilde{\theta}) = \underbrace{\frac{90}{\tilde{\theta}}}_{\text{coarse step}} + \underbrace{\frac{2\,\tilde{\theta}}{\delta\theta}}_{\text{fine step}} + 1.    (5)

Let us study the function N_rot(θ̃). Its derivative is N'_{rot}(\tilde{\theta}) = -90/\tilde{\theta}^2 + 2/\delta\theta, which vanishes at θ* = \sqrt{90\,\delta\theta/2} and is negative (resp. positive) when θ̃ < θ* (resp. θ̃ > θ*). Therefore, N_rot admits one global minimum at θ* = \sqrt{90\,\delta\theta/2}. Hence, for a required accuracy δθ, one must first perform a coarse estimation with an accuracy of θ* = \sqrt{90\,\delta\theta/2} and then refine the estimation around this crude solution with a δθ-accuracy. As an example, if δθ = 1°, the coarse step has an accuracy of θ* = \sqrt{90/2} ≈ 6.7° and the total number of rotations is then N_rot(θ*) ≈ 28, which is 84% less than the 180 rotations needed with a single step.

2.2.3. Is the input texture oriented (anisotropic) or not (isotropic)?

The orientation angle has no meaning when one deals with isotropic textures. With our approach, as in (Jafari-Khouzani and Soltanian-Zadeh, 2005; Josso et al., 2005), it is possible to evaluate whether a given texture is oriented or not. Let us denote by E = [E(0), ..., E(θ_i), ..., E(180)]^T the vector composed of all values E issued from relation (4) for the different angles θ_i = i·θ* (where θ* = \sqrt{90\,\delta\theta/2}). An isotropic pattern will provide a small maximum value E_max associated with a small standard deviation of E: the detail coefficients will indeed be more or less identically distributed in the different bands. Conversely, an oriented texture will significantly increase E_max as well as the standard deviation of E. As a consequence, a threshold on E_max or on the standard deviation of E will classify the textures as oriented or isotropic. Its value will be discussed in the experimental part.
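To make the procedure of Sections 2.1 and 2.2 concrete, the following Python sketch illustrates one possible implementation of the criterion (4) and of the two-step angle search (5). It relies on NumPy, SciPy and PyWavelets; the function names (detail_bands, energy_criterion, estimate_orientation) are ours, the similarity measurement D is left as a parameter (Section 2.3 motivates the symmetric Kullback–Leibler divergence between fitted GGDs), and we assume that PyWavelets' (cH, cV, cD) ordering matches the (H, V, D) bands used above. It is a sketch under these assumptions, not the authors' implementation.

import numpy as np
import pywt
from scipy import ndimage


def detail_bands(image, levels=3, wavelet="haar"):
    # Detail bands (H, V, D) of a `levels`-deep 2D wavelet decomposition;
    # the approximation band coeffs[0] is ignored (see footnote 1).
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    return coeffs[1:]


def energy_criterion(image, similarity, levels=3):
    # Criterion E of Eq. (4): sum over levels of D(V, H) + D(V, D).
    E = 0.0
    for (H, V, D) in detail_bands(image, levels):
        E += similarity(V.ravel(), H.ravel()) + similarity(V.ravel(), D.ravel())
    return E


def estimate_orientation(image, similarity, fine_step=1.0, levels=3):
    # Two-step (coarse then fine) search of the angle maximizing E, cf. Eq. (5).
    coarse_step = np.sqrt(90.0 * fine_step / 2.0)   # optimal coarse accuracy theta*

    def scan(angles):
        scores = [energy_criterion(
            ndimage.rotate(image, a, reshape=False, mode="reflect"),
            similarity, levels) for a in angles]
        best = int(np.argmax(scores))
        return angles[best], scores[best]

    # Coarse scan over [0, 90) thanks to the H/V symmetry, then a fine scan
    # around the coarse solution.
    theta0, _ = scan(np.arange(0.0, 90.0, coarse_step))
    fine_angles = np.arange(theta0 - coarse_step, theta0 + coarse_step + fine_step, fine_step)
    theta_hat, e_max = scan(fine_angles)
    # A small e_max (or a small spread of E over the scanned angles) suggests an
    # isotropic texture, cf. Section 2.2.3.
    return theta_hat % 90.0, e_max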

2.3. Choice of the similarity measurement

Several possibilities exist for computing a similarity measurement between two sets of data (the L1-norm and L2-norm are among the most popular). In the wavelet context, it is known that, in a given sub-band of the wavelet decomposition, the distribution of the wavelet coefficients I[j,k] of any texture pattern follows a Generalized Gaussian Density (GGD) function (Mallat, 1989). The GGD reads:


p(x; \alpha, \beta) = \frac{\beta}{2\,\alpha\,\Gamma(1/\beta)}\, e^{-\left(|x|/\alpha\right)^{\beta}}    (6)

and is characterized by two coefficients: the scale parameter α and the shape parameter β. A Gaussian distribution corresponds to β = 2; a peaked, heavy-tailed (super-Gaussian, e.g. Laplacian) one to β < 2; and a flatter (sub-Gaussian) one to β > 2. The term \Gamma(t) = \int_0^{+\infty} e^{-z} z^{t-1}\, dz is the mathematical Gamma function. Fig. 1 illustrates the GGD distribution of the coefficients in a sub-band of a wavelet decomposition.

It is therefore attractive to rely on a distribution similarity measurement to compare the coefficients in the different bands of the wavelet decomposition. Moreover, a histogram representation has the advantage of being insensitive to the localization of the different coefficients. This is well adapted to our context since one only wants to isolate the most energetic band, whatever the spatial positions of the coefficients are. To define a measurement on the basis of distributions, two main possibilities are available: (i) compute the similarity measurement directly between the empirical histograms, or (ii) compute the similarity measurement between their "smoothed" histograms. When one deals directly with the distribution issued from the set of points, any computation is indeed likely to be noisy since in general the number of points is not large enough; it is then common to "smooth" the distribution, using for instance kernel methods. In this application, as we know that the distribution follows a GGD, we rather prefer to use its parametric version to compute the similarity measurement: the smoothing step consists in identifying the values (α, β) from the set of data.

We use the maximum-likelihood estimator for the identification of (α, β). The log-likelihood function is defined as L(x; \alpha, \beta) = \log \prod_i p(x_i; \alpha, \beta) for the set of coefficients x = (x_1, ..., x_N). The parameters (α, β) are obtained by successively canceling the derivatives of the log-likelihood L with respect to α and β. This yields the following system (Do and Vetterli, 2002):

\frac{\partial L(\cdot)}{\partial \alpha} = -\frac{N}{\alpha} + \sum_{i=1}^{N} \frac{\beta\, |x_i|^{\beta}\, \alpha^{-\beta}}{\alpha} = 0,    (7)

\frac{\partial L(\cdot)}{\partial \beta} = \frac{N}{\beta} + \frac{N\,\Psi(1/\beta)}{\beta^{2}} - \sum_{i=1}^{N} \left(\frac{|x_i|}{\alpha}\right)^{\beta} \log\!\left(\frac{|x_i|}{\alpha}\right) = 0,    (8)

where L(\cdot) = L(x; \alpha, \beta) and \Psi(t) = \Gamma'(t)/\Gamma(t) is the digamma function. When β is fixed, relation (7) has the unique solution \hat{\alpha} = \left(\frac{\beta}{N}\sum_{i=1}^{N}|x_i|^{\beta}\right)^{1/\beta}. Substituting this relation into Eq. (8), \hat{\beta} is the root of:

1 + \frac{\Psi(1/\hat{\beta})}{\hat{\beta}} - \frac{\sum_{i=1}^{N}|x_i|^{\hat{\beta}}\log|x_i|}{\sum_{i=1}^{N}|x_i|^{\hat{\beta}}} + \frac{\log\!\left(\frac{\hat{\beta}}{N}\sum_{i=1}^{N}|x_i|^{\hat{\beta}}\right)}{\hat{\beta}} = 0,    (9)

which can be solved numerically using the Newton–Raphson procedure (Do and Vetterli, 2002). To obtain a faster convergence,

the initial parameter β0 of the iterative process is given by the method of moments (Fisher, 1925). Once (α, β) are estimated, we use the symmetric Kullback–Leibler divergence between two GGDs as the similarity measurement D in relation (4). The Kullback–Leibler (KL) divergence between GGDs p1 and p2, parameterized by (α1, β1) and (α2, β2), reads (Do and Vetterli, 2002):

KL_{1,2} = \log\!\left(\frac{\beta_1\, \alpha_2\, \Gamma(1/\beta_2)}{\beta_2\, \alpha_1\, \Gamma(1/\beta_1)}\right) + \left(\frac{\alpha_1}{\alpha_2}\right)^{\beta_2} \frac{\Gamma\!\left((\beta_2 + 1)/\beta_1\right)}{\Gamma(1/\beta_1)} - \frac{1}{\beta_1}.    (10)

Since the KL divergence is non-symmetric, we use the following measurement D:

D(p_1, p_2) = \frac{1}{2}\left(KL_{1,2} + KL_{2,1}\right).    (11)
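As an illustration of Eqs. (6)-(11), the following Python sketch fits the GGD parameters by maximum likelihood and evaluates the symmetric Kullback–Leibler measurement D. It assumes NumPy and SciPy; the helper names (ggd_pdf, fit_ggd, kl_ggd, similarity_kl_ggd) are ours, and the root of Eq. (9) is found with SciPy's Newton/secant solver started from a fixed β0 instead of the moment-based initialization of (Fisher, 1925). It is a sketch under these assumptions, not the authors' code.

import numpy as np
from scipy.special import gamma, psi      # Gamma and digamma functions
from scipy.optimize import newton


def ggd_pdf(x, alpha, beta):
    # Generalized Gaussian Density of Eq. (6).
    return beta / (2.0 * alpha * gamma(1.0 / beta)) * np.exp(-(np.abs(x) / alpha) ** beta)


def fit_ggd(coeffs, beta0=1.0, eps=1e-8):
    # Maximum-likelihood estimate of (alpha, beta), Eqs. (7)-(9).
    x = np.abs(np.asarray(coeffs, dtype=float)) + eps   # eps avoids log(0)
    N = len(x)

    def eq9(b):
        # Left-hand side of Eq. (9); its root gives beta_hat.
        s = np.sum(x ** b)
        return (1.0 + psi(1.0 / b) / b
                - np.sum(x ** b * np.log(x)) / s
                + np.log(b / N * s) / b)

    beta = newton(eq9, beta0)                               # Newton-Raphson / secant iterations
    alpha = (beta / N * np.sum(x ** beta)) ** (1.0 / beta)  # closed form from Eq. (7)
    return alpha, beta


def kl_ggd(a1, b1, a2, b2):
    # Kullback-Leibler divergence KL(p1 || p2) between two GGDs, Eq. (10).
    return (np.log(b1 * a2 * gamma(1.0 / b2) / (b2 * a1 * gamma(1.0 / b1)))
            + (a1 / a2) ** b2 * gamma((b2 + 1.0) / b1) / gamma(1.0 / b1)
            - 1.0 / b1)


def similarity_kl_ggd(x1, x2):
    # Symmetric measurement D of Eq. (11) between two sets of wavelet coefficients.
    a1, b1 = fit_ggd(x1)
    a2, b2 = fit_ggd(x2)
    return 0.5 * (kl_ggd(a1, b1, a2, b2) + kl_ggd(a2, b2, a1, b1))

A function such as similarity_kl_ggd is, for instance, what could be passed as the similarity argument of the estimate_orientation sketch given in Section 2.2.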

Let us point out that, among the usual similarity criteria between histograms, the Bhattacharyya similarity measurement is also known for its reliable performances. However, to the best of our knowledge, the Kullback–Leibler divergence between two GGDs is the only one that can be expressed conveniently in an analytic way, which simplifies many practical aspects.

2.4. Choice of the wavelet decomposition and the analyzing wavelet

As already mentioned, it has been observed in (Mallat, 1989) that any textured pattern generates wavelet coefficients that follow a GGD distribution, whatever the analyzing wavelet is. In practice, the higher the order of the wavelet, the better discontinuous patterns are represented in the different bands and, therefore, the more peaked the GGD distributions are. In our application, as the shape of the distributions is not a strong constraint, we have used, for its simplicity, the Haar Discrete Wavelet Decomposition. We now turn to some experiments.

3. Experimental results

First, in Section 3.1, we show on Brodatz textures that (i) our criterion is adapted for capturing oriented (vertical in our case) textures and (ii) the criterion proposed in (11) is reliable. Then, in Section 3.2, we present some experiments on synthetic data. Finally, in Section 3.3, we use our approach to estimate the orientation of texture patterns that appear in remote sensing data and compare it with a usual approach based on the Fourier transform (Josso et al., 2005).

3.1. Brodatz textures

The approach has first been tested on a set of seven Brodatz textures (Brodatz, 1966) in order to (i) validate the similarity criterion chosen and (ii) determine the threshold to apply to E_max for labeling patterns as oriented (see the end of Section 2.2).

Fig. 1. Illustration of the Generalized Gaussian Density distribution of the wavelet coefficients: (a) a texture image; (b) the vertical band of the wavelet decomposition and (c) the distribution of the coefficients in (b). One verifies that these coefficients match a GGD distribution.


The tested images are represented in Fig. 2a–g. The first three have been randomly chosen among vertically oriented patterns, the fourth one exhibits two perpendicular main directions (a situation not managed by our method) and the remaining three have been randomly chosen among isotropic patterns with different scales. Each original image (size 640 × 640) has been decomposed into 100 sub-images of size 64 × 64 on which the approach has been applied. The empirical histograms of the wavelet coefficients have been mapped on 70 bins, since we observed that this size was adequate to estimate the distribution properly with a low computation time, the range of the coefficients being [-128, 128].

In order to validate the proposed criterion in (11), we have compared it with other similarity measurements D(X_1, X_2) between coefficients X_1 and X_2 (each coefficient set X_i is represented by a set of values x_i distributed following the histogram p_i), namely:

D_1(X_1, X_2) = \frac{1}{N}\sum_{i=1}^{N} |x_1(i) - x_2(i)| \quad \text{(L1-norm between coefficients)},    (12)

D_2(X_1, X_2) = \frac{1}{N}\sum_{i=1}^{N} |x_1(i) - x_2(i)|^2 \quad \text{(L2-norm between coefficients)},    (13)

D_3(X_1, X_2) = \frac{1}{2}\left\{KL(p_1, p_2) + KL(p_2, p_1)\right\} = \frac{1}{2}\left[\int p_1(x)\log\frac{p_1(x)}{p_2(x)}\,dx + \int p_2(x)\log\frac{p_2(x)}{p_1(x)}\,dx\right] \quad \text{(symmetric Kullback–Leibler divergence between the empirical histograms } p_1 \text{ and } p_2 \text{ of the coefficients)},    (14)

D_4(X_1, X_2) = 1 - \int \sqrt{p_1(x)\,p_2(x)}\,dx \quad \text{((1 - Bhattacharyya) criterion between the empirical histograms } p_1 \text{ and } p_2 \text{ of the coefficients)},    (15)

D_5(X_1, X_2) = 1 - \int \sqrt{p_1(x; \alpha_1, \beta_1)\,p_2(x; \alpha_2, \beta_2)}\,dx \quad \text{((1 - Bhattacharyya) criterion between the reconstructed GGDs using (6))}.    (16)

Since the Bhattacharyya criterion B equals 1 for two identical distributions and decreases to 0 when the distributions differ, we use the form (1 - B) in the last two equations in order to obtain a behavior similar to that of the other criteria. In Fig. 3, we have depicted the "normalized average maximum divergence" \tilde{E}_{max}(D_i, I) = \bar{E}_{max}(D_i, \{I\}) / \bar{E}_{max}(D_i, \{a, ..., g\}) for each image I and criterion D_i. The numerator of this relation is the average of E_max (see relation (4)) using the measurement D_i over the 100 samples issued from image I. Its denominator is a "normalization factor", namely the average value of all E_max (over all images) issued from D_i. As a variety of anisotropic and isotropic images are taken into account, we assume that this average value \bar{E}_{max}(D_i, \{a, ..., g\}) is representative of the typical magnitude of the corresponding criterion. Hence, this normalization enables a fair graphical representation and comparison between the similarity criteria.

From Fig. 3, one can observe that all the measurements obtained for the oriented patterns (x-axis entries corresponding to Fig. 2a–c) are higher than the ones issued from the non-oriented textures (x-axis entries for Fig. 2e–g). As expected, the value for (d) is more ambiguous since two main directions hold and a given sub-band never concentrates all the energy. From this figure, it is also very interesting to note that the measurements issued from histograms are better adapted to the discrimination of oriented and non-oriented patterns: their relative range of variation is larger than that of the measurements issued from the L1 and L2 norms and, in addition, they yield higher values for oriented patterns and smaller values for non-oriented ones. Finally, among the histogram-based criteria, we observe that the ones issued from smoothed histograms are more discriminative than those obtained from the empirical distributions, and the proposed one in (11), issued from the symmetric Kullback–Leibler divergence, is the most discriminative (the black curve): its higher (resp. smaller) values indeed correspond to oriented (resp. non-oriented) patterns. These experiments on Brodatz textures show that our criterion makes it possible to discriminate oriented and non-oriented patterns.
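For reference, here is a possible NumPy sketch of the histogram-based comparison criteria above, using 70 bins on [-128, 128] as in the text; the function names are ours and the small floor added to the histograms (to avoid log(0) in D_3) is an implementation choice, not part of the paper. D_5 (Eq. (16)) is obtained like D_4 but with the fitted GGDs of Section 2.3 in place of the empirical histograms.

import numpy as np


def _histogram(x, bins=70, rng=(-128, 128)):
    # Empirical, normalized histogram of a set of wavelet coefficients.
    h, edges = np.histogram(x, bins=bins, range=rng, density=True)
    return h + 1e-12, np.diff(edges)          # floored densities and bin widths


def d1(x1, x2):
    # Eq. (12): L1-norm between coefficients (same-size coefficient sets).
    return np.mean(np.abs(x1 - x2))


def d2(x1, x2):
    # Eq. (13): L2-norm between coefficients.
    return np.mean(np.abs(x1 - x2) ** 2)


def d3(x1, x2):
    # Eq. (14): symmetric Kullback-Leibler divergence between empirical histograms.
    p1, w = _histogram(x1)
    p2, _ = _histogram(x2)
    return 0.5 * (np.sum(w * p1 * np.log(p1 / p2)) + np.sum(w * p2 * np.log(p2 / p1)))


def d4(x1, x2):
    # Eq. (15): (1 - Bhattacharyya) between empirical histograms.
    p1, w = _histogram(x1)
    p2, _ = _histogram(x2)
    return 1.0 - np.sum(w * np.sqrt(p1 * p2))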

Fig. 2. Some Brodatz texture images: (a)–(c) oriented textures; (d) two main directions and (e)–(g) isotropic textures with different scales.

[Fig. 3 plot: average and normalized distances (y-axis) for images (a)–(g); legend: D1 (L1 norm), D2 (L2 norm), D3 (Kullback–Leibler on empirical histograms), D4 ((1 - Bhattacharyya) on empirical histograms), D5 ((1 - Bhattacharyya) on fitted GGD), D (proposed distance: Kullback–Leibler on fitted GGD).]

Fig. 3. Normalized average criteria for images in Fig. 2a–g issued from criteria D1 (blue), D2 (red), D3 (magenta), D4 (cyan), D5 (green) and the proposed one (black). (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)

They also validate the proposed criterion in (11). From these experiments and the graphs in Fig. 3, one can define a threshold η ≈ 1 on the averaged and normalized criterion \tilde{E}_{max}(D_i, ·) that discriminates the oriented (\tilde{E}_{max} ≥ η) and non-oriented (\tilde{E}_{max} ≤ η) patterns. In practice, this threshold in non-normalized values is η ≈ 1.82.
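As a small illustration of this labeling step (the function name, and the way the normalization constant is computed, follow our reading of the text and are not the authors' code), one may write:

import numpy as np


def label_texture(e_max_samples, e_max_all_images, eta=1.0):
    # Normalize the average E_max of a pattern by the average over all images
    # (cf. Fig. 3) and threshold it to decide oriented vs. isotropic.
    e_tilde = np.mean(e_max_samples) / np.mean(e_max_all_images)
    return "oriented" if e_tilde >= eta else "isotropic"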

3.2. Synthetic sequences

We have tested our approach on the calculation of the orientation angle of anisotropic textures. A set of 20 sub-images taken from the vertically oriented Brodatz textures has been selected and rotated by randomly chosen angles. One thus gets synthetically oriented patterns with a known angle, on which several techniques can be applied to estimate the orientation. We have compared our results with a usual technique for orientation estimation (Josso et al., 2005). This technique is based on a Principal Component Analysis of the Fourier spectrum of the original signal. As a particular orientation of a pattern generates a singular behavior of the corresponding frequencies in the Fourier spectrum, the identification of the main Fourier axis gives the privileged direction of the object. A rotation in the Fourier space implying a rotation of the same angle in the image plane, this main axis coincides with the angle of the oriented pattern. The ratio between the first and the second eigenvalue of the Principal Component Analysis makes it possible to discriminate oriented and non-oriented textures. One of the main drawbacks of this approach is that it can only deal with rectangular areas: the Fourier transform does not allow any spatial localization of the coefficients, so it is impossible to perform a Fourier transform on the whole image and to select only the coefficients related to the desired pattern (on polygons, the strategy then consists in estimating the orientation on the largest rectangle included in the patch).
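For comparison purposes, the following sketch gives our reading of such a Fourier + PCA baseline (it is not the reference implementation of Josso et al., 2005, and the function name is ours): the frequency coordinates are weighted by the magnitude spectrum, the principal axis of their weighted covariance gives the dominant spectral direction, and the eigenvalue ratio measures the anisotropy. Depending on the convention, the texture orientation is this axis or the one perpendicular to it.

import numpy as np


def fourier_pca_orientation(image):
    # Magnitude spectrum of the zero-mean image, with the zero frequency centered.
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image - image.mean())))
    h, w = image.shape
    u = np.fft.fftshift(np.fft.fftfreq(w))        # horizontal frequencies
    v = np.fft.fftshift(np.fft.fftfreq(h))        # vertical frequencies
    U, V = np.meshgrid(u, v)
    coords = np.stack([U.ravel(), V.ravel()])     # 2 x (h*w) frequency coordinates
    weights = spectrum.ravel()

    # PCA of the frequency coordinates weighted by the spectral energy.
    cov = (coords * weights) @ coords.T / weights.sum()
    eigvals, eigvecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
    main_axis = eigvecs[:, -1]

    angle = np.degrees(np.arctan2(main_axis[1], main_axis[0])) % 180.0
    anisotropy = eigvals[-1] / max(eigvals[0], 1e-12)   # large ratio -> oriented texture
    return angle, anisotropy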

The numerical results are reported in Table 1, from which one can observe that the mean accuracy obtained with our technique outperforms the Fourier one. The same conclusion holds for the standard deviation. The multi-scale analysis via wavelets captures the oriented details more accurately than the purely spectral Fourier analysis. To illustrate the quality of the results, Fig. 4 represents an oriented texture pattern and its reorientation along the vertical axis using the angle obtained with our approach and with the one in (Josso et al., 2005). From these visual samples, it is clear that the proposed approach gives better results. Let us now turn to experiments on some real pre-segmented objects.

Table 1
Results on synthetic sequences with our approach and the approach in (Josso et al., 2005).

                            Mean error (in °)    Std. deviation (in °)
Our approach                1.2                  1.9
Fourier-based approach      7.9                  21.7

3.3. Remote sensing data

We have estimated the orientation angle on a set of 150 pre-segmented patterns extracted from remotely sensed images at 60 cm resolution issued from different sensors: very high spatial resolution data, grey-scale aerial photographs and panchromatic satellite images. Five classes of land cover have been distinguished: meadow, bare soil, forest, cereals and vineyard. Among them, bare soils, meadows and forests are considered as "non-oriented" (or isotropic) textures whereas cereals and vineyards are "oriented" (or anisotropic). First, the rate of correct classification between oriented and non-oriented textures is 96.7% with our approach whereas it is 82.7% with the approach in (Josso et al., 2005).

Fig. 4. Reorientation of the patterns: (a) the original pattern with a true orientation of 54°; (b) its rotation according to the angle estimated with the proposed approach (54°) and (c) its rotation according to the angle estimated with (Josso et al., 2005) (80°).


In Fig. 5, we have depicted a sample randomly chosen in each class of land cover (first line) with the associated ground truth obtained by photo-interpretation (second line) and the estimated angle computed with our method (third line). The mean value of all E_max and their mean standard deviation for each class are shown on the fourth and fifth lines. The last two lines exhibit the mean differences between the ground truth and the orientations estimated by our approach and by the one of Josso et al. (2005).

From these examples, one can observe that the proposed threshold η ≈ 1.82, as well as the mean and the standard deviation of E_max, separate well the anisotropic (vineyard, cereals) and isotropic (forest, bare soil, meadow) patterns. The differences of orientation between our approach and the one in (Josso et al., 2005) are non-negligible, and the last two lines of Fig. 5 show that our approach performs better. For some large errors, we have plotted in Fig. 6 the patterns re-oriented along the vertical axis with both estimated angles. From these figures, it is clear that the accuracy is higher with the proposed technique since our re-oriented patterns fit better with the vertical axis.

Fig. 5. Real examples on pre-segmented remote sensing patterns (in white). Line 1: land cover class; Line 2: ground truth angle obtained by photo-interpretation; Line 3: estimated angle with the proposed technique for the corresponding sample (NO means "non-oriented pattern"); Line 4: mean of all E_max; Line 5: mean of all standard deviations of E_max; Lines 6–7 (Diff. 1 and Diff. 2): mean difference between the ground truth and the estimated angles with our approach (Diff. 1) and with the technique in (Josso et al., 2005) (Diff. 2).


Fig. 6. Reorientation of the patterns for some agricultural parcels (top and bottom): (a) represents the input textures; (b) rotation of (a) according to the proposed method; and (c) rotation of (a) according to (Josso et al., 2005).

4. Conclusion

In this paper, we have proposed a method for estimating the orientation of textured patterns based on a wavelet analysis. The basic idea is to search for the orientation that best concentrates the energy of the wavelet coefficients in a single direction. In addition, the method makes it possible to label an input pattern as oriented or not. The underlying similarity criterion is an analytic version of the Kullback–Leibler divergence for Generalized Gaussian Density distributions. Our criterion E to maximize has been compared with usual ones. The estimated angles have also been compared with the ones issued from a pure spectral approach; this comparison has revealed better estimations with our technique. Finally, our approach deals in a natural way with polygonal objects. We believe this is a precious advantage when one deals with patterns issued from an external segmentation step.

References

Brodatz, P., 1966. Textures: A Photographic Album for Artists and Designers. Dover Publications, New York, NY, USA.

Do, M., Vetterli, M., 2002. Wavelet-based texture retrieval using generalized Gaussian density and Kullback–Leibler distance. IEEE Trans. Image Process. 11 (2), 146–158.
Fisher, R., 1925. Theory of statistical estimation. Proc. Cambridge Philos. Soc. 22, 700–725.
Freeman, W., Adelson, E., 1991. The design and use of steerable filters. IEEE Trans. Pattern Anal. Machine Intell. 13 (9), 891–906.
Germain, C., Da Costa, J., Lavialle, O., Baylou, P., 2003. Multiscale estimation of vector field anisotropy. Application to texture characterization. Signal Process. 83, 1487–1503.
Jafari-Khouzani, K., Soltanian-Zadeh, H., 2005. Radon transform orientation estimation for rotation invariant texture analysis. IEEE Trans. Pattern Anal. Machine Intell. 27 (6), 1004–1008.
Josso, B., Burton, D., Lalor, M., 2005. Texture orientation and anisotropy calculation by Fourier transform and principal component analysis. Mech. Systems Signal Process. 19, 1152–1161.
Le Pouliquen, F., Da Costa, J.-P., Germain, C., Baylou, P., 2005. A new adaptive framework for unbiased orientation estimation in textured images. Pattern Recognition 38 (11), 2032–2046.
Mallat, S., 1989. Multiresolution approximations and wavelet orthonormal bases of L2(R). Trans. Amer. Math. Soc. 315, 69–87.
Marion, A., Vray, D., 2009. Spatiotemporal filtering of sequence of ultrasound images to estimate a dense field of velocities. Pattern Recognition 42 (11), 2989–2997.
Michelet, F., Da Costa, J.-P., Lavialle, O., Berthoumieu, Y., Baylou, P., Germain, C., 2007. Estimating local multiple orientations. Signal Process. 87, 1655–1669.