GrAN Grup de recerca d'anàlisi numèrica

Edge Detection and Classification through the Directional Canny Multiscale Transform
Olivier Le Cadet and Valérie Perrier

October 20, 2005 GrAN report 05-03

Departament de Matemàtica Aplicada, Universitat de València, 46100 Burjassot, València (SPAIN)

Edge Detection and Classification through the Directional Canny Multiscale Transform 1

Olivier Le Cadet 2 and Valérie Perrier 3

October 20, 2005

Abstract: We present a two-dimensional directional vectorial wavelet transform, the Canny Multiscale Transform (CMT), based on the work of S. Mallat et al., who presented a multiscale version of the Canny detector. The properties of the CMT are presented and linked with steerable filter theory. An efficient implementation based on B-spline masks is proposed, and reconstruction algorithms are presented as well. This transform is used to detect edges in an image and to characterize them through their Lipschitz regularity. Tests are presented on standard test images.

1 Work partially supported by the EC-IHP Network Breaking Complexity and the Rhône-Alpes region.
2 Departament de Matemàtica Aplicada, Universitat de València, València (Spain). e-mail: [email protected]
3 LMC-IMAG, BP 53, 38041 Grenoble Cedex (France). e-mail: [email protected]

Introduction

Images are nowadays increasingly used as a way to focus on a certain kind of information provided by our complex environment. Medical images, such as echographies or X-ray images, frequently aim at checking the state of an organ or of a bone. Astronomical images represent nebulas, remote galaxies, and other unreachable astral objects whose light carries information. The colors of geographical maps may stand for population density, climate, and so on. However, the useful information in an image is not always obvious to see, and procedures to detect patterns in an image are needed. In many cases, we are interested in automatically detecting the edges delimiting the different patterns of an image. For example, automatically finding the contours of a vertebra on two X-ray images, a frontal and a lateral view, allows one to reconstruct, during a surgical intervention, the 3D shape of the vertebra being operated on, by matching the projections of a 3D statistical model of a vertebra on the planes of the two views ([8]). For this purpose, several methods are possible, belonging in general to one of these four classes:

• region growth: we can segment the image, that is, separate it into its different constitutive regions, for example by placing seeds in the image and making them grow until they correspond to a region of the image ([6], [1]). The edges are the curves delimiting the different regions.

• active contours: if we have information about the geometry of a shape we are looking for, we can take advantage of it: define an initial contour approximating this shape, and make it deform until it converges to the exact contour of the searched object ([23] or [7]).

• texture classification: these methods are adapted to textured images; for example, supervised texture classification methods suppose we know the different kinds of texture present in the image. Then, using a statistical criterion, we decide, for each pixel of the image, to which texture it belongs ([12], [17]).

• gradient approach: finally, and this is the approach considered in this article, we can try to find directly the edges of the image, which are composed of points where the image intensity varies strongly. Of course, such a definition is insufficient, but the problem of defining an edge is very ill-posed (the problem of defining a region, needed for the first class of methods, is ill-posed as well). Therefore, when looking for edges, one has to specify the chosen edge model, which we will do in the first section.

For the gradient approach, wavelet methods prove useful and efficient. They indeed provide another representation of the image, in which the most irregular parts are given large coefficients, while the most regular parts are characterized by small coefficients.

1 Definition of an edge

We want to find the edges in an image, that is, the geometrical contours separating two different objects or two different regions. A first approach, the "region" approach, consists in finding the regions, for example by placing "seeds" that grow until they correspond to a region; the edges are the lines separating the obtained regions. The dual approach, which we adopt here, the "gradient" approach, consists in finding the edges directly. For this purpose, we need to characterize them. Since an edge separates two different regions, an edge point is a point where the local intensity of the image varies rapidly, more rapidly than at the nearby points close to the edge: such a point can therefore be characterized as a local maximum of the gradient of the image intensity. The problem is that such a characterization applies only to differentiable images, and above all that it detects all noise points. Canny ([5]) decided to evade the problem by smoothing the image first. But which smoothing should we choose? A strong smoothing leads to the detection of fewer points; a lighter one is more permissive. That is why S. Mallat defined, in his work with S. Zhong ([20]), multiscale contours, and in his work with W. Hwang, a chaining across the scales binding the different points of the scale-space plane corresponding to a same point of the original image. Thus, every edge point of the image is characterized by a whole chain of the scale-space plane: the longer the chains, the stronger the smoothing we impose, and the fewer edge points we obtain. Let us note that this chain also provides information about the regularity of the image at the edge point it characterizes.

[Figure 1: Chaining across scales. Local maxima are chained in the scale-space plane from the finest scale s = 1 to the coarsest scale s = N (coarsest scale); a minimum chain length is required.]

Taking advantage of these ideas, we will define here a point of an edge as the top, towards fine scales, of such a chain (see figure 1). Let us see now how to compute, for different smoothing scales s, the local maxima of the gradient of the intensity smoothed by a kernel $\theta_s(x, y)$, which is equivalent, as recalled below, to computing a vectorial wavelet transform $(W^1 f, W^2 f)$, which we will call the Canny Multiscale Transform, using the vectorial wavelet $\vec\Psi = \vec\nabla\theta$. Detecting edge points of the image smoothed by a kernel of scale s, which we will refer to as edge points of scale s, will be done by computing the local maxima of $|(W^1 f(x, y, s), W^2 f(x, y, s))|$ at the scale s. Then, we will see how to define the chaining across the scales, which relates all the local maxima in the scale-space plane produced by a same singular point of the original image (see figure 1), and allows one to localize an edge point.

2 The Canny Multiscale Transform

2.1 Choice of the smoothing kernel

The choice of the smoothing kernel θ is determined by the characteristics our detector should comply with:

• causality: as our edge detector is a multiscale detector, meant to detect important geometric contours and to compute their Lipschitz regularity, we should use a kernel which ensures that the chaining across the scales will work. Namely, we do not want new local maxima to appear when going down to coarse scales. Yuille and Poggio demonstrated in [25] that the Gaussian kernel is the only one to fulfill this condition among a wide class of kernels. By using a Gaussian kernel to smooth the image, the contours are diffused according to the heat equation: no new contour may appear during this diffusion process.

• efficiency: we want a numerically efficient detector. For this purpose, separable filters prove very useful: they enable the computations to be done first on rows and then on columns, which lowers the computational cost significantly.

• orientability: the problem with a separable kernel is that it provides information in two privileged directions (typically the horizontal and the vertical direction). Thus, for example, diagonal lines might be less well detected than horizontal or vertical ones. We would like a transform which provides information in any direction, while keeping the computational cost reasonable. A theory of steerable filters has been developed by Freeman and Adelson ([9]) and Simoncelli ([22]). They use a filter h and a finite number of rotated versions of this filter $h^{\theta_i}$, $i = 0 \cdots N-1$, which allow, by linear combinations, to produce a rotated version of the filter in any direction θ. Thus, we just have to compute the transforms $f \star h^{\theta_i}$, $i = 0 \cdots N - 1$.

In our case, we consider wavelets which are derivatives of a smoothing kernel Θ. In this framework, to be able to design steerable filters, we have to use a kernel Θ which is isotropic. Considering all these requirements, Gaussian kernels prove to be the most suitable for our framework. They are the only ones to be both isotropic and separable, and they also fulfill the causality requirement. Moreover, they handle the compromise imposed by the Heisenberg principle, visible here in the trade-off between the good spatial localization of fine scales and the good detection properties of coarse scales, in an optimal way. Then, to design discrete Gaussian filters, we will use kernels which approximate a Gaussian kernel numerically, for example B-spline kernels.
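As a quick numerical illustration of this last point (our own sketch, not part of the paper; it only assumes NumPy), one can check that iterating the 1D binomial mask [1, 2, 1]/4 converges rapidly to a sampled Gaussian of variance n/2:

```python
import numpy as np

def iterated_binomial(n):
    """Convolve the binomial mask [1, 2, 1]/4 with itself n - 1 times.

    The resulting mask (a sampled B-spline) approximates a Gaussian of
    variance n/2, which is the approximation the paper relies on.
    """
    b = np.array([1.0, 2.0, 1.0]) / 4.0
    mask = b
    for _ in range(n - 1):
        mask = np.convolve(mask, b)
    return mask  # length 2n + 1, centered at index n

n = 8
mask = iterated_binomial(n)
k = np.arange(-n, n + 1)                      # integer sample positions
sigma2 = n / 2.0                              # accumulated variance
gauss = np.exp(-k**2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)
print(np.max(np.abs(mask - gauss)))           # already small for n = 8
```

Even for moderate n the sampled mask and the sampled Gaussian agree to a few thousandths, which is why B-spline masks make good discrete Gaussian filters.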

2.2 Definition of the Canny Multiscale Transform

Here, Θ will be an isotropic smoothing kernel, and $\vec\Psi$ denotes the vectorial wavelet obtained by taking the gradient of Θ:

$$\vec\Psi = (\psi^1, \psi^2) = \vec\nabla\Theta$$

Analysing the image f with $\vec\Psi$, we obtain a vectorial wavelet transform:

$$\vec W f(x, y, s) = (W^1 f, W^2 f) = f \star \check{\vec\Psi}_s(x, y)$$

where

$$\check{\vec\psi}(x, y) = \vec\psi(-x, -y) \quad \text{and} \quad \check{\vec\Psi}_s(x, y) = \frac{1}{s}\,\check{\vec\Psi}\Big(\frac{x}{s}, \frac{y}{s}\Big) \qquad (1)$$

The first component, $W^1 f$, detects vertical edges, whereas $W^2 f$ detects horizontal edges. We will denote by $\vec W^\theta f$ the wavelet transform obtained by using $\vec\Psi(R^{-\theta}(x, y))$ instead of $\vec\Psi(x, y)$:

$$\vec W^\theta f(\vec x, s, \theta) = \Big( \big\langle f, \tfrac1s \psi^1\big(R^{-\theta}\tfrac{\vec t - \vec x}{s}\big) \big\rangle,\; \big\langle f, \tfrac1s \psi^2\big(R^{-\theta}\tfrac{\vec t - \vec x}{s}\big) \big\rangle \Big) = \Big( \iint_{\mathbb R^2} f(\vec t)\,\tfrac1s \psi^1\big(R^{-\theta}\tfrac{\vec t - \vec x}{s}\big)\, d\vec t,\; \iint_{\mathbb R^2} f(\vec t)\,\tfrac1s \psi^2\big(R^{-\theta}\tfrac{\vec t - \vec x}{s}\big)\, d\vec t \Big) \qquad (2)$$

$\vec W^\theta f$ depends on three parameters, the scale s, the translation parameter $\vec x = (x, y)$, and the orientation parameter θ, and corresponds to a directional wavelet transform as defined by Murenzi (see [21]), allowing one to capture information in any direction.

We will refer to the wavelet transform $\vec W f$ as the Canny Multiscale Transform (CMT) in the following.


2.3 Properties of the CMT

We point out here two important properties of the CMT which make it so efficient for edge detection.

• As can be seen from steerable filter theory, or as demonstrated in appendix A,

$$\vec W^\theta f = R^{-\theta}\, \vec W f \qquad (3)$$

where $R^\theta$ is the rotation matrix of angle θ. $\vec W^\theta f$ provides information about the singularities in the direction θ and the direction orthogonal to θ. Thus, computing two wavelet transforms (with wavelets $\frac{\partial\Theta}{\partial x}$ and $\frac{\partial\Theta}{\partial y}$, where Θ has to be isotropic) allows one to get, through a simple multiplication by a rotation matrix, a directional wavelet transform, at a cheap computational cost. Let us note that if we want to use second derivatives of an isotropic kernel, we have to compute three wavelet transforms (using $\frac{\partial^2\Theta}{\partial x^2}$, $\frac{\partial^2\Theta}{\partial y^2}$, and $\frac{\partial^2\Theta}{\partial x \partial y}$) to generate a directional wavelet transform.
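This steerability property is easy to verify numerically on sampled filters. The following sketch (ours, assuming only NumPy) checks that rotating the filter $\frac{\partial G}{\partial x}$ by an angle θ coincides exactly with the linear combination $\cos\theta\,\frac{\partial G}{\partial x} + \sin\theta\,\frac{\partial G}{\partial y}$, precisely because G is isotropic:

```python
import numpy as np

# sample the two derivative-of-Gaussian basis filters on a grid
x, y = np.meshgrid(np.arange(-8, 9), np.arange(-8, 9), indexing="ij")
G = np.exp(-(x**2 + y**2) / 2.0)
hx = -x * G                       # dG/dx (up to normalization)
hy = -y * G                       # dG/dy

t = 0.7                           # an arbitrary angle theta
# rotated filter: hx evaluated at R^{-theta}(x, y); since G is isotropic,
# this is -(cos t * x + sin t * y) * G(x, y)
h_rot = -(np.cos(t) * x + np.sin(t) * y) * G
# steered filter: linear combination of the two basis filters
h_steer = np.cos(t) * hx + np.sin(t) * hy

print(np.allclose(h_rot, h_steer))  # True: the two filters coincide
```

The identity is exact here (not merely approximate), so only two filterings of the image are ever needed to obtain the response in any direction.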

• From the fact that we use a vectorial wavelet which is the gradient of a smoothing function θ ($\vec\psi = (\psi^1, \psi^2) = \vec\nabla\theta$), we can easily deduce that $\vec W f(x, y, s)$ is proportional to the gradient of the image smoothed by $\theta_s$: adopting the notations of (1), we have:

$$\vec W f(x, y, s) = s\, \vec\nabla (f \star \check\Theta_s)(x, y). \qquad (4)$$

This property is the ground for the success of the Canny Multiscale Transform. Indeed, it provides:

– the maxima of the gradient of the image smoothed by a kernel at different scales: we saw that this is what we need to characterize edge points.

– the direction of the gradient of the image smoothed by a kernel: this geometrical information proves very useful, for example to compute the local maxima of the gradient. Indeed, if we tested whether a point is a local maximum in a circular neighbourhood, we might compare two edge points with each other. Making this comparison with the neighbouring points in the direction of the gradient, we avoid this difficulty and reduce the computational cost of detecting a local maximum. We will also see how to benefit from this information when studying the chaining across scales.

Thus, at each scale s, we define the modulus of the wavelet transform of a signal f by

$$M f(x, y, s) = \sqrt{|W^1 f(x, y, s)|^2 + |W^2 f(x, y, s)|^2}, \qquad (5)$$

and the associated angle

$$A f(x, y, s) = \begin{cases} \tan^{-1}\Big(\dfrac{W^2 f(x, y, s)}{W^1 f(x, y, s)}\Big), & \text{if } W^1 f(x, y, s) \ge 0,\\[2mm] \pi - \tan^{-1}\Big(\dfrac{W^2 f(x, y, s)}{W^1 f(x, y, s)}\Big), & \text{if } W^1 f(x, y, s) < 0. \end{cases} \qquad (6)$$
Since $M f(x, y, s)$ is proportional to the modulus of the gradient of $f \star \check\Theta_s$, the image smoothed by the Gaussian kernel at the scale s, detecting its local maxima in the direction $A f(x, y, s)$ allows us to find edge points of scale s.
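As an illustration (our sketch, not the authors' implementation; it assumes SciPy and approximates eq. (4) with scipy.ndimage's Gaussian derivative filters), here is the modulus and angle computation of eqs. (5)-(6) on a synthetic vertical step edge:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# synthetic image: vertical step edge between columns 31 and 32
f = np.zeros((64, 64))
f[:, 32:] = 1.0

s = 2.0  # smoothing scale
# W1 ~ s * d/dx of the smoothed image, W2 ~ s * d/dy (cf. eq. (4));
# axis 1 plays the role of x (columns), axis 0 the role of y (rows)
W1 = s * gaussian_filter(f, sigma=s, order=(0, 1))
W2 = s * gaussian_filter(f, sigma=s, order=(1, 0))

M = np.sqrt(W1**2 + W2**2)   # modulus, eq. (5)
A = np.arctan2(W2, W1)       # angle (arctan2 handles the sign cases of eq. (6))

print(int(np.argmax(M[32])))  # the modulus peaks at the step edge
```

For this vertical edge, $W^2 f$ vanishes and the modulus is maximal along the edge column, exactly the behaviour the local-maxima detection exploits.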

2.4 Practical implementation

To implement the vectorial wavelet transform previously described as efficiently as possible, we will take advantage of two classical ideas:

• a Gaussian is invariant under convolution, in the sense that, if $G(x, y) = e^{-\pi(x^2+y^2)}$:

$$(\underbrace{G \star G \cdots \star G}_{j-1 \text{ convolutions}})(x, y) = \frac{1}{j}\, G\Big(\frac{x}{\sqrt j}, \frac{y}{\sqrt j}\Big).$$

Thus, by making iteratively N − 1 convolutions with a Gaussian on the image, and applying at each step a derivation filter (horizontal for $W^1 f$, vertical for $W^2 f$), we can obtain, with a reduced cost, the wavelet coefficients $W^1 f(., ., s_j)$, $W^2 f(., ., s_j)$ for scales in $\sqrt j$:

$$s_j = 1, \sqrt 2, \cdots, \sqrt N.$$

Moreover, a Gaussian being separable, each convolution can be done in O(m × n), where m × n is the size of the image, by making the convolutions on the lines first, and on the columns afterwards. Nevertheless, to represent a Gaussian in a discrete way, one should use at least a 7 × 7 matrix, so that the constant K in the computational cost (K × N × m × n) is relatively high.

• a very classical tool from wavelet theory for implementing wavelet transforms is the "à trous" algorithm of S. Mallat (see for example [16]). One needs a low-pass filter h and a high-pass filter g coming from a multiresolution analysis. Iterative convolutions by the low-pass filter h, oversampled (by interleaving $2^j$ zeros between each of its coefficients at step j), are done, and at each step j a convolution by the high-pass filter g (oversampled in the same way as h) delivers the wavelet coefficients at the scale $s = 2^j$. Thus, we obtain the wavelet coefficients, for dyadic scales $s = 2^j$, $j = 0 \cdots J$, at a computational cost O(J × n × m). However, to achieve an effective chaining across the scales, we should not use scales so far apart from each other.

For our part, we will perform iterative convolutions by low-pass filters approximating a Gaussian, and use two high-pass derivation filters: $g = D_x$, a horizontal derivation filter which gives the first component of the CMT, and $g^T = D_y$, which provides the second component. We will not do any oversampling, so as to obtain scales in $\sqrt n$. The derivation filters are non-centered derivation filters:

$$D_x = [0, 1, -1]; \quad D_y = D_x^T$$

Taking a centered derivation filter proves to be a bad choice for the reconstruction process, because we cannot find dual filters of finite support for the reconstruction (see [13]). We will use as a low-pass filter $h = \Theta$, where Θ is a binomial filter:

$$\Theta = \frac{1}{16}\begin{pmatrix} 1 & 2 & 1\\ 2 & 4 & 2\\ 1 & 2 & 1 \end{pmatrix}.$$

This algorithm, which we will call the cascade B-spline algorithm, is a filter bank without decimation using the filters previously defined. The inner workings of the algorithm are represented in figure 2.

[Figure 2: scheme of the B-spline cascade algorithm. At each level, the current low-frequency component is convolved with the derivation filters g and $g^T$ to produce the two wavelet components $f \star (h \circledast \cdots \circledast h \circledast g)$ and $f \star (h \circledast \cdots \circledast h \circledast g^T)$ (with n − 1 low-pass convolutions at level n), and with h to produce the next low-frequency component; after N levels the remaining low-frequency component is $f \star (h \circledast \cdots \circledast h)$ with N convolutions.]

Let us note that:

• Θ is separable:

$$\Theta = \frac14\,[1, 2, 1] \;\circledast\; \frac14 \begin{pmatrix} 1\\ 2\\ 1 \end{pmatrix},$$

where $\circledast$ stands for the non-periodic discrete 2D convolution.

• the filter Θ represents a B-spline of order 1; hence j − 1 iterative convolutions by Θ give a filter $\Theta^j$ representing a B-spline of order 2j − 1, whose fast convergence to a Gaussian is demonstrated in [24]. This property allows us to interpret the cascade B-spline algorithm as a CMT based on the gradient of a Gaussian, as we show below.

Let us denote by $C^1 f(., ., n)$ and $C^2 f(., ., n)$ the two coefficients obtained at level n of the tree:

$$C^1 f(., ., n) = f \star (\underbrace{\Theta \circledast \Theta \cdots \circledast \Theta}_{n-1 \text{ convolutions}} \circledast\, D_x) \qquad (7)$$

$$C^2 f(., ., n) = f \star (\underbrace{\Theta \circledast \Theta \cdots \circledast \Theta}_{n-1 \text{ convolutions}} \circledast\, D_y) \qquad (8)$$
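A minimal implementation sketch of this filter bank (ours, assuming NumPy/SciPy; the paper's own code is not reproduced here) follows equations (7)-(8) directly, with periodic convolutions against the image:

```python
import numpy as np
from scipy.ndimage import convolve

def cascade_bspline(f, N):
    """Cascade B-spline algorithm: undecimated filter bank of eqs. (7)-(8).

    Returns the lists C1, C2 (index n-1 holds level n) and the final
    low-frequency component. Convolutions with the image are periodic
    ('wrap'), as in the paper. In practice Theta would be applied
    separably ([1,2,1]/4 on rows then columns); the full 3x3 mask is
    used here for clarity.
    """
    Theta = np.outer([1.0, 2.0, 1.0], [1.0, 2.0, 1.0]) / 16.0
    Dx = np.array([[0.0, 1.0, -1.0]])   # non-centered derivation filter
    low = np.asarray(f, dtype=float)
    C1, C2 = [], []
    for n in range(1, N + 1):
        C1.append(convolve(low, Dx, mode="wrap"))     # level n, eq. (7)
        C2.append(convolve(low, Dx.T, mode="wrap"))   # level n, eq. (8)
        low = convolve(low, Theta, mode="wrap")       # next low-pass level
    return C1, C2, low

# vertical step edge (periodic, so there is also an edge at the wrap-around)
f = np.zeros((32, 32))
f[:, 16:] = 1.0
C1, C2, low = cascade_bspline(f, 4)
```

Following the next subsection, multiplying the level-n coefficients by $\sqrt{n/2}$ (and shifting by half a pixel) yields an approximation of the CMT at scale $\sqrt{n/2}$.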

$\star$ is the discrete periodic 2D convolution (used for the convolution between the image f and a B-spline mask) and $\circledast$ stands for the discrete non-periodic convolution (used for the convolutions between B-spline masks). The quantity

$$A = \underbrace{\Theta \circledast \Theta \cdots \circledast \Theta}_{n-1 \text{ convolutions}}$$

represents a B-spline of degree 2n − 1, and $A \circledast D_x$ is the derivative of this B-spline. Now, a B-spline derivative converges quickly to the derivative of a Gaussian: the demonstration in the continuous case can be found in [24]. The discrete convergence has to be considered carefully, in particular because of the choice of the derivation filter; it is studied in [18], where the convergence of the discrete derivative of B-spline masks to the discrete derivative of a Gaussian is proved. Using our notations, it states: $\forall (k_1, k_2) \in \{-(n+1) \cdots n+1\}^2$,

$$(A \circledast D_x)(k_1, k_2) \approx \frac{2}{\pi\, 2n}\, \frac{\partial}{\partial x}\Big[e^{-2\frac{x^2+y^2}{2n}}\Big]\Big(k_1 - \frac12,\, k_2 - \frac12\Big).$$

The spatial shift of $\frac12$ is due to the use of a non-centered derivation filter. Then:

$$C^1 f(., ., n) \;\approx\; f \star \Big[\frac{2}{\pi\, 2n}\, \frac{\partial}{\partial x} e^{-2\frac{x^2+y^2}{2n}}\Big]\Big(k_1 - \frac12, k_2 - \frac12\Big) \;\approx\; \sqrt{\frac{2}{n}}\; f \star \Bigg[\frac{\partial}{\partial x}\Bigg(\frac{1}{2\pi}\, e^{-\frac{(x/\sqrt{n/2})^2 + (y/\sqrt{n/2})^2}{2}}\Bigg)\Bigg]\Big(k_1 - \frac12, k_2 - \frac12\Big)$$

We recognize inside the parentheses the partial derivative with respect to x of a Gaussian of variance 1 at the scale $\sqrt{n/2}$. Consequently,

$$C^1 f\Big(. + \frac12,\, . + \frac12,\, n\Big) \approx \frac{1}{\sqrt{\frac n2}}\; W^1 f\Big(., .,\, \sqrt{\frac n2}\Big), \qquad C^2 f\Big(. + \frac12,\, . + \frac12,\, n\Big) \approx \frac{1}{\sqrt{\frac n2}}\; W^2 f\Big(., .,\, \sqrt{\frac n2}\Big),$$

where $(W^1 f, W^2 f)$ is the CMT using a Gaussian kernel of variance 1. Thus, the coefficients of the cascade B-spline algorithm give a very close approximation, in a numerically efficient way (in O(m × n × N) for N scales and an image of size m × n), of a CMT: we just have to multiply the coefficients by the scale, $\sqrt{n/2}$, and to shift the spatial translation parameters by $\frac12$. Moreover, this algorithm can be easily inverted, as shown in the next section.

3 Reconstruction properties

To cope with the reconstruction problem, several approaches are possible. One may want to find a continuous reconstruction formula directly from the definition of the CMT, which is a wavelet transform: wavelet theory nowadays has very solid theoretical grounds, and the conditions under which a wavelet transform can be inverted, as well as the inverse transform formulae, are well known. Nevertheless, discretising the resulting inverse formula often proves difficult, and it is not rare to be forced to make practical adjustments to get good results. Another approach consists in starting from the practical algorithm: in our case, we will try to invert directly the practical implementation of the CMT we proposed, namely the B-spline cascade algorithm.

3.1 Reconstruction from the continuous definition formula of the CMT

As we mentioned in the former section, the CMT multiplied by a rotation matrix provides a vectorial directional wavelet transform (which can be proved quite easily through a two-dimensional change of variable):

$$\begin{pmatrix} W_\theta^1 f(u, v, a, \theta)\\ W_\theta^2 f(u, v, a, \theta) \end{pmatrix} = \begin{pmatrix} \cos\theta & \sin\theta\\ -\sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} W^1 f(u, v, a)\\ W^2 f(u, v, a) \end{pmatrix} \qquad (9)$$


$W_\theta^1 f$ and $W_\theta^2 f$ are two directional wavelet transforms (the first using the wavelet $\frac{\partial\Theta}{\partial x}$, the second the wavelet $\frac{\partial\Theta}{\partial y}$), whose definition was recalled in the former section. We can invert a directional wavelet transform using the following theorem:

Theorem 3.1 Let ψ be a real wavelet, $f \in L^2(\mathbb R^2)$, and $W f(\vec x, a, \theta)$ its directional wavelet transform. If

$$C_\psi = \iint_{\mathbb R^2} \frac{|\hat\psi(\vec\omega)|^2}{|\vec\omega|^2}\, d\vec\omega < +\infty,$$

then we have

(i) an energy conservation law

$$\int_0^{2\pi} \int_{a>0} \iint_{\mathbb R^2} |W f(\vec b, a, \theta)|^2\, d\vec b\, \frac{da}{a^3}\, d\theta = C_\psi \iint_{\mathbb R^2} |f(\vec x)|^2\, d\vec x \qquad (10)$$

(ii) a reconstruction formula

$$f(\vec t) = \frac{1}{C_\psi} \int_0^{2\pi} \int_0^{+\infty} \iint_{\mathbb R^2} W f(\vec b, a, \theta)\; \frac1a\, \psi\Big(R^{-\theta} \frac{\vec t - \vec b}{a}\Big)\, d\vec b\, \frac{da}{a^3}\, d\theta \qquad (11)$$

Combining this classical reconstruction formula for $W_\theta^1 f$ with formula (9) leads to the following proposition (the demonstration is given in appendix A), which gives an elegant reconstruction formula for the CMT and expresses the energy conservation law for the CMT.

Proposition 3.2 Let f be a function of $L^2(\mathbb R^2)$ and $\vec W f(\vec b, a)$ its CMT, using a vectorial wavelet $\vec\Psi$ which is the gradient of an isotropic kernel Θ. Then, we have

(i) an energy conservation law

$$\iint_{\mathbb R^2} |f(\vec t)|^2\, d\vec t = \frac{\pi}{C_\psi} \int_{a>0} \frac{da}{a^3} \iint_{\mathbb R^2} \|\vec W f(\vec b, a)\|^2\, d\vec b \qquad (12)$$

(ii) a reconstruction formula

$$f(\vec x) = \frac{\pi}{C_\psi} \int_{a>0} \frac{da}{a^3} \iint_{\mathbb R^2} \vec W f(\vec b, a) \cdot \vec\Psi_{a, \vec b}(\vec x)\, d\vec b \qquad (13)$$

$C_\psi = \iint_{\mathbb R^2} \frac{|\hat\psi(\vec\omega)|^2}{|\vec\omega|^2}\, d\vec\omega$ is the admissibility constant associated to the wavelet $\frac{\partial\Theta}{\partial x}$ (or $\frac{\partial\Theta}{\partial y}$). As shown in appendix A, $C_\psi$ is well defined if $\Theta \in L^2(\mathbb R^2)$ and is equal to $2\pi^2 \|\Theta\|_2^2$.


3.2 Reconstruction from the cascade B-spline algorithm

In this approach, we directly focus on the chosen implementation scheme, the cascade B-spline algorithm: we interpret it as a filter bank tree (see figure 2). Passing from a level of the tree to a lower one is done thanks to three filters, Θ, $D_x$, and $D_y$. We are going to search for associated reconstruction filters, $\tilde\Theta$, $\tilde D_x$, and $\tilde D_y$. These should have a finite support, as small as possible, so as to get a numerically efficient reconstruction algorithm.

Let us consider level n of the tree: each of the three components of this level will be convolved by the adequate reconstruction filter (the first one by $\tilde\Theta$, the second one by $\tilde D_x$, and the last one by $\tilde D_y$). Then, the sum of the three terms has to give us the low-frequency component of level n − 1. In particular, for the last step (top of the tree), if we denote by $\circledast$ the discrete convolution, and by $\star$ the discrete periodic convolution:

$$f = f \star \Theta \star \tilde\Theta + f \star D_x \star \tilde D_x + f \star D_y \star \tilde D_y$$

and then

$$f = f \star (\Theta \circledast \tilde\Theta + D_x \circledast \tilde D_x + D_y \circledast \tilde D_y) \qquad (14)$$

We then have to find $\tilde\Theta$, $\tilde D_x$ and $\tilde D_y$ such that:

$$\Theta \circledast \tilde\Theta + D_x \circledast \tilde D_x + D_y \circledast \tilde D_y = \delta \qquad (15)$$

where δ is the neutral element of the convolution $\star$ ($\delta_{nm} = 1$ if n = m = 0 and 0 otherwise). The following proposition provides three reconstruction filters of minimal-size support. Its demonstration is given in Appendix B.

Proposition 3.3 The filters

$$\tilde\Theta = \begin{pmatrix} 0 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 0 \end{pmatrix}, \qquad \tilde D_x = \frac{1}{16}\begin{pmatrix} -\frac12 & \frac12 & 0\\ -3 & 3 & 0\\ -\frac12 & \frac12 & 0 \end{pmatrix}, \qquad \tilde D_y = \tilde D_x^{\,T} = \frac{1}{16}\begin{pmatrix} -\frac12 & -3 & -\frac12\\ \frac12 & 3 & \frac12\\ 0 & 0 & 0 \end{pmatrix}$$

verify

$$\delta = \Theta \circledast \tilde\Theta + D_x \circledast \tilde D_x + D_y \circledast \tilde D_y$$

Thus, the three filters $\tilde\Theta$, $\tilde D_x$, and $\tilde D_y$ allow one to climb up the tree (knowing the wavelet coefficients and the lowest-frequency component), just as Θ, $D_x$, and $D_y$ allowed one to go down and compute the wavelet coefficients.
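The exact-reconstruction identity (15) is easy to check numerically. In the sketch below (ours, assuming SciPy; the three non-periodic convolution products are aligned on a common centered 5×5 support), their sum is the discrete Dirac δ:

```python
import numpy as np
from scipy.signal import convolve2d

# analysis filters
Theta = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 16.0
Dx = np.array([[0.0, 1.0, -1.0]])
Dy = Dx.T
# reconstruction filters of Proposition 3.3
Theta_t = np.array([[0.0, 0, 0], [0, 1, 0], [0, 0, 0]])
Dx_t = np.array([[-0.5, 0.5, 0], [-3.0, 3, 0], [-0.5, 0.5, 0]]) / 16.0
Dy_t = Dx_t.T

# full (non-periodic) 2D convolutions, padded to the common 5x5 support
t1 = convolve2d(Theta, Theta_t)                         # 5x5
t2 = np.pad(convolve2d(Dx, Dx_t), ((1, 1), (0, 0)))     # 3x5 -> 5x5
t3 = np.pad(convolve2d(Dy, Dy_t), ((0, 0), (1, 1)))     # 5x3 -> 5x5

delta = np.zeros((5, 5))
delta[2, 2] = 1.0
print(np.allclose(t1 + t2 + t3, delta))  # True: eq. (15) holds
```

The zero third column of $\tilde D_x$ (and zero third row of $\tilde D_y$) compensates the one-pixel shift introduced by the non-centered derivation filters, which is why the sum lands exactly on the centered δ.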


Remark 3.4 The same reconstruction filters can be used for a dyadic scheme, in which the filters have to be oversampled at each level j by interleaving $2^j$ zeros between each coefficient of the filters; the dyadic reconstruction can be done simply by oversampling the reconstruction filters of proposition 3.3 in the same way.

4 Multiscale properties

Simply by taking the coefficients of the finest scale of the CMT and defining an adequate threshold to discriminate the local maxima due to noise from the local maxima generated by significant edges of the image, we can define a good edge detector (the famous Canny detector). Why, then, should we use several scales? A first answer lies in the fact that textures produce high-energy coefficients. Even after thresholding the coefficients of a Canny detector, we will find texture points. Going down in the scales, fewer texture points produce local maxima, which enables us to find the real geometrical edges of the image. On the other hand, Holschneider, Tchamitchian and Jaffard remarked that the asymptotic behaviour, as scales decrease, of the coefficients of the wavelet transform of a function characterizes the regularity of the function. From the evolution through scales of the wavelet coefficients, we are thus able to determine the local regularity of an edge point. To do this on signals, Mallat and Hwang ([19]) defined the concept of maxima lines, linking together all the local maxima which have been produced, at different scales, by a same edge point of the original image. This gives a chain of local maxima of wavelet coefficients, whose behaviour characterizes the regularity of the image at this edge point. First, let us see how to carry out this chaining process in our 2D context. Then, we will see how it allows us to compute the Lipschitz regularity of the edge points.

4.1 Chaining of the local maxima

We saw how the cascade B-spline algorithm allows us to compute the CMT at the scales $\sqrt{n/2}$. Contour points of the image produce local maxima at each scale of the wavelet transform up to a certain scale $s_0$. We are therefore going to find the local maxima of the CMT, and then see how to chain them so as to bind together all the local maxima generated by a same contour point of the original image. We have at our disposal the CMT

$$\Big( W^1 f\big(., ., \sqrt{\tfrac n2}\big),\; W^2 f\big(., ., \sqrt{\tfrac n2}\big) \Big), \quad n = 1 \cdots N$$

• Modulus-orientation: for $s_n = \sqrt{n/2}$, $n = 1 \cdots N$, we compute the modulus of the CMT $M f(., ., s_n)$ and the orientations $A f(., ., s_n)$ as indicated in formula (6).


• Local maxima: we then consider the points which are likely to indicate the presence of a contour point: the local maxima of $M f(., ., s_n)$ in the direction of the gradient of the smoothed image, that is, $A f(., ., s_n)$. To test whether $M f(x, y, \sqrt{n/2})$ is a local maximum, we consider $(x_1, y_1)$ and $(x_2, y_2)$, the two neighbouring pixels in the direction $A f(x, y, s_n)$ (for example, if $A f(x, y, s_n)$ is close to π/4, then $(x_1, y_1) = (x + 1, y + 1)$ and $(x_2, y_2) = (x - 1, y - 1)$, see figure 3).

[Figure 3: $A f(x, y, \sqrt{n/2})$ is between π/8 and 3π/8; therefore, we consider the two neighbouring pixels $(x_1, y_1)$ and $(x_2, y_2)$ and compare $M f(x, y, \sqrt{n/2})$ with $M f(x_1, y_1, \sqrt{n/2})$ and $M f(x_2, y_2, \sqrt{n/2})$.]

We will say that $M f(x, y, s_n)$ is a local maximum if:

• $M f(x, y, s_n) - M f(x_1, y_1, s_n) > \epsilon$ and $M f(x, y, s_n) \ge M f(x_2, y_2, s_n)$, or

• $M f(x, y, s_n) - M f(x_2, y_2, s_n) > \epsilon$ and $M f(x, y, s_n) \ge M f(x_1, y_1, s_n)$.

$\epsilon$ is a threshold preventing small oscillations from creating local maxima.

• chaining: we now aim at chaining together maxima that have been generated by a same singularity of the original image. Using scales in $\sqrt n$, as we do, ensures that a local maximum produced by a given point of the original image at the scale $s_n$ will not move by more than one pixel at the next scale $s_{n-1}$ (see [4]). Thus, if we want to link a modulus maximum $(x_n, y_n)$ at the scale $s_n$ with a modulus maximum $(x_{n-1}, y_{n-1})$ at the scale $s_{n-1}$, we just have to check the positions $x_{n-1} \in$

$\{x_n, x_n - 1, x_n + 1\}$ and $y_{n-1} \in \{y_n, y_n - 1, y_n + 1\}$. There are therefore at most 9 possibilities for $(x_{n-1}, y_{n-1})$. Moreover, the propagation of the maxima happens in the direction of the gradient of the image (the contours are diffused): the two points in the direction orthogonal to the gradient are not candidates, which restricts the search to 7 points. Finally, since both modulus maxima should characterize the same edge point of the original image, one has $A f(x_n, y_n, s_n) \sim A f(x_{n-1}, y_{n-1}, s_{n-1})$, which gives us a criterion to choose among the 7 candidates.
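The local-maximum test described above can be sketched as follows (our illustration, assuming NumPy; the angle is quantized to one of four directions and the two neighbours along that direction are compared against the threshold ε):

```python
import numpy as np

def local_maxima(M, A, eps=1e-3):
    """Non-maximum suppression of the modulus M along the gradient angle A."""
    H, W = M.shape
    out = np.zeros((H, W), dtype=bool)
    # neighbour offsets (dx, dy) for angle sectors around 0, pi/4, pi/2, 3pi/4
    offsets = [(1, 0), (1, 1), (0, 1), (-1, 1)]
    for yy in range(1, H - 1):
        for xx in range(1, W - 1):
            a = A[yy, xx] % np.pi                     # a direction, mod pi
            dx, dy = offsets[int(np.round(a / (np.pi / 4))) % 4]
            m = M[yy, xx]
            m1, m2 = M[yy + dy, xx + dx], M[yy - dy, xx - dx]
            if (m - m1 > eps and m >= m2) or (m - m2 > eps and m >= m1):
                out[yy, xx] = True
    return out

# demo: a vertical ridge of the modulus, with horizontal gradient (A = 0)
xs = np.arange(16)
M = np.tile(np.exp(-((xs - 8) ** 2) / 4.0), (16, 1))
out = local_maxima(M, np.zeros_like(M))
print(out[5, 8], out[5, 7])  # the crest column is kept, its neighbours are not
```

Comparing only along the gradient direction, rather than over a full circular neighbourhood, keeps adjacent points of the same edge from suppressing each other.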

[Figure 4: Finding a correspondence between a local maximum at a given scale and another local maximum at the next finer scale.]

Starting from a local maximum $(x_N, y_N)$ at the coarsest scale $s_N$, we chain it with a local maximum at the scale $s_{N-1}$, and we continue the chaining until we reach a local maximum $(x_0, y_0)$ at the finest scale, $s_0 = \frac{\sqrt2}{2}$. Finally, the image of edges $Cf$ is defined by

$$Cf(x_0, y_0) = \begin{cases} M f(x_0, y_0, \frac{\sqrt2}{2}), & (x_0, y_0) \text{ is on an edge},\\ 0, & \text{otherwise.} \end{cases}$$

4.2 Computation of the local Lipschitz regularity

Another advantage of multiscale detection techniques lies in their ability to compute the local regularity of a function. The Lipschitz (or Hölder) regularity of a function is a precise measure of its regularity, characterized by a real parameter α. The regularity can be evaluated either on a whole interval or at a point, and measures the error made when approximating the function by a polynomial. We recall here only the 2D pointwise definition.

14

Definition 4.1 Let $n \le \alpha < n + 1$, where $n \in \mathbb N$. f(x, y) is Lipschitz-α at $(x_0, y_0) \in \mathbb R^2$ if there exist K > 0 and a two-variable polynomial $P_n$ of degree n such that $\forall \vec h = (h_1, h_2) \in \mathbb R^2$,

$$|f(x_0 + h_1, y_0 + h_2) - P_n(h_1, h_2)| \le K |\vec h|^\alpha \;\big(= K |h_1^2 + h_2^2|^{\frac\alpha2}\big)$$

A well-known theorem due to Holschneider and Tchamitchian ([10]) shows how the evolution of the amplitude of the wavelet coefficients on an interval, as the scale s decreases, is related to the Lipschitz regularity of the function on this interval. Characterizing the regularity of a function at a precise point required more work; Jaffard ([11]) coped with this problem, demonstrating a necessary condition and a sufficient condition (but no necessary and sufficient condition) binding the behaviour of the wavelet coefficients and the pointwise Lipschitz regularity. These works show that wavelet transforms are a relevant tool to capture the Lipschitz regularity of a function. All the mentioned theorems are reproduced and discussed in [19], where Mallat and Hwang also complete the study of the evaluation of the Lipschitz regularity by means of the wavelet transform. In particular, they propose to chain across the scales the singular values of the wavelet transform that have been produced by the presence of a singularity: in the case of the CMT, defined with first derivatives of a Gaussian, a singularity at the point $(x_0, y_0)$ produces local maxima in the modulus of the wavelet transform up to a certain scale $s_0$. Thus, rather than following arbitrary wavelet coefficients, for example the ones right above the singularity ($\vec W f(x_0, y_0, s)$, $s < s_0$), following the local maxima produced by the singularity gives a much more reliable evaluation of its Lipschitz regularity. Thus, they define maxima lines in 1D, of which we give here a definition in the 2D case (see figure 1):

Definition 4.2 (Mallat-Hwang) A maxima line is a connected curve $(x_s, y_s, s)$ of the 3D scale-space such that, at each scale s, $|\vec W f(x_s, y_s, s)|$ is a local maximum of $(x, y) \mapsto |\vec W f(x, y, s)|$.

As said before, to make sure that the local maxima produced by a singularity all belong to a connected curve, one may use a Gaussian kernel.
The presence of a maxima line indicates the presence of a singular point at $(x_0, y_0)$, able to produce local maxima which persist when the scale increases (i.e. with a stronger smoothing). The next theorem, also drawn from [19] and adapted to the 2D case, tells how to evaluate the Lipschitz regularity at $(x_0, y_0)$ thanks to the values $|\vec W f(x_s, y_s, s)|$.

Theorem 4.3 ([19]) Suppose that ψ is a $C^n$ wavelet with compact support, and that there exists θ such that $\int_{-\infty}^{+\infty} \theta(t)\, dt \ne 0$ and $\psi = (-1)^n \theta^{(n)}$. Let f be a tempered distribution, α < n not an integer, and $(x_0, y_0) \in\, ]c, d[^2$. Suppose that there exist $s_0 > 0$ and C > 0 such that the local maxima (s, x, y) of

$|\vec Wf(x,y,s)|$ below the scale $s_0$ ($s < s_0$) and in the square $]c,d[^2$ ($(x,y) \in\, ]c,d[^2$) are included in the cone $\|(x-x_0, y-y_0)\| \le Cs$. Then $f$ is Lipschitz-$\alpha$ at $(x_0,y_0)$ if and only if there exists $A$ such that, for all local maxima $(s,x,y)$ of $|\vec Wf(x,y,s)|$ in the influence cone $\|(x-x_0, y-y_0)\| \le Cs$,
$$|\vec Wf(x,y,s)| \le A\, s^\alpha.$$
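In practice, the criterion of Theorem 4.3 is applied by fitting the decay $|\vec Wf(x_s,y_s,s)| \approx A\,s^\alpha$ along a chained maxima line in log-log coordinates. A minimal sketch of this estimation (the helper name `estimate_lipschitz` and the synthetic data are illustrative, not from the paper):

```python
import numpy as np

def estimate_lipschitz(scales, maxima_moduli):
    """Slope of log|Wf(x_s, y_s, s)| against log s along one maxima line:
    a least-squares estimate of the Lipschitz exponent alpha."""
    alpha, _ = np.polyfit(np.log(scales), np.log(maxima_moduli), 1)
    return alpha

# Synthetic maxima line obeying |Wf| = A * s^alpha exactly, with alpha = -1
# (the theoretical regularity of a line singularity).
scales = [2 ** (j / 2) for j in range(1, 9)]
moduli = [3.0 * s ** (-1.0) for s in scales]
print(estimate_lipschitz(scales, moduli))  # ≈ -1.0
```

On real data the fit is performed only on the scales $s < s_0$ where the maxima line persists, which is why the chaining step matters.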

5 Results

This section shows some results on school images and real images, which illustrate the performance of the described algorithm. Mostly, the geometrical parts are found at the expense of the textured parts; the Lipschitz regularity of the contour points corresponds to the theory. The cost of using a multiresolution method is a certain imprecision at some corners (where two edges join). Let us look at some images illustrating these characteristics. The images representing the contour points colored so as to code their Lipschitz regularity use a thick-point representation (the 8 neighbouring pixels of each contour point are also colored, so as to make the image more legible), unlike the images presenting contour points classified in segments (thin-point representation).

edge extensions The presented algorithm delivers the positions of the edges point by point. As the chaining can fail near the corners, when two different edges meet (the direction of the gradient, even though the image is smoothed, is very unstable at such points), some edges can be incomplete. Therefore, we add a gather-and-complete step: edge points are gathered to define edge curves, and these edge curves are then completed by taking advantage of the information of the finest scale.

initial data: let us then consider the result image, containing the found edge points $\{(x_e,y_e)\}_{e=1\cdots N}$, their local orientation $\{Af(x_e,y_e,\frac{1}{\sqrt2})\}_{e=1\cdots N}$, and their Lipschitz regularity $\{\alpha(x_e,y_e)\}_{e=1\cdots N}$. Let us consider as well the modulus of the CMT at the finest scale, $\{Mf(x_i,y_j,\frac{1}{\sqrt2})\}_{i=1\cdots \mathrm{width},\, j=1\cdots \mathrm{height}}$.

"gather" step: this step consists in gathering edge points into curves. To do this, we find a first edge point $(x_{init}, y_{init})$, assign it the edge curve number $e_c = 1$, and try to link it with other edge points.
We define from it a double conic search zone oriented in the two directions perpendicular to the gradient of the image at $(x_{init}, y_{init})$ ($Af(x_{init},y_{init},\frac{1}{\sqrt2}) \pm \frac{\pi}{2}$). All edge points contained in this search zone which have a comparable local orientation are assigned the edge curve number $e_c$. The marking process is then repeated recursively from these new points. When no new edge point can be added, we check whether enough edge points have been marked to define a curve; if not, the gathering process is canceled. If so, $e_c$ is increased by one, and we start again from

a new, unmarked, edge point (see figure 5).

"complete" step: at the end of the "gather" step, all the edge points carry a number indicating the edge curve they belong to. The isolated edge points disappear, since we impose that edge curves be constituted of a certain number of edge points. Some of the constituted edge curves contain holes, which we are going to close, if possible, using the information of the finest scale. We consider the points which are local maxima of the CMT at the finest scale, that is, the non-zero points of the matrix $\{Mf(x_i,y_j,\frac{1}{\sqrt2})\}_{i=1\cdots \mathrm{width},\, j=1\cdots \mathrm{height}}$ which have not been marked yet. Let us consider such a point $(x_{max}, y_{max})$. We define a double search cone around it, oriented in the directions perpendicular to its local orientation. If we find in the first part of the cone an edge point marked with a certain edge curve number $e_c$ and, in the second part of the cone, another edge point with the same number $e_c$, we add $(x_{max}, y_{max})$ to the result image (see figure 5).
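The "gather" step can be sketched as follows. This is a simplified stand-in: a plain distance threshold replaces the double search cone, and the function name `gather_edges`, like every threshold value, is illustrative rather than taken from the paper.

```python
import numpy as np

def gather_edges(points, angles, max_dist=2.0, max_angle=np.pi / 8, min_size=3):
    """Group edge points into numbered edge curves by recursively linking
    points that are close together and have a comparable local orientation.
    Curves with fewer than min_size points are canceled (label -1)."""
    points = np.asarray(points, float)
    labels = np.zeros(len(points), int)               # 0 = not marked yet
    current = 0
    for seed in range(len(points)):
        if labels[seed]:
            continue
        current += 1
        labels[seed] = current
        stack, members = [seed], [seed]
        while stack:                                   # recursive marking, iteratively
            i = stack.pop()
            for j in range(len(points)):
                if labels[j]:
                    continue
                close = np.hypot(*(points[j] - points[i])) <= max_dist
                # orientation difference taken modulo pi (orientations are undirected)
                diff = abs((angles[j] - angles[i] + np.pi / 2) % np.pi - np.pi / 2)
                if close and diff <= max_angle:
                    labels[j] = current
                    stack.append(j)
                    members.append(j)
        if len(members) < min_size:                    # not enough points: cancel
            for j in members:
                labels[j] = -1
            current -= 1
    return labels

pts = [(0, 0), (1, 0), (2, 0), (10, 0), (11, 0), (12, 0)]
print(gather_edges(pts, [0.0] * 6))  # [1 1 1 2 2 2]
```

The two collinear clusters end up in two distinct curves because the gap between them exceeds the linking distance, mimicking the behaviour described above.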

Figure 5: Gathering the points into numbered edge curves, and completing the edge curves

Tests on school and real images

• Points, line, and step edges. Here we have an image including three types of contours: points, lines, and step edges. The first have a theoretical Lipschitz regularity of $-2$, the second of $-1$, and the last of $0$. The image at the left is the original image; the image at the right presents the detected contour points (10 convolutions by $\Theta$), each colored with a color representing its Lipschitz regularity. The points constituting the lines all have a Lipschitz regularity between $-1.2$ and $-0.8$, the isolated points have a Lipschitz regularity between $-1.6$ and $-2$, and the points of the step edge have a Lipschitz regularity between $-0.2$ and $0.2$.

Figure 6: Points, line, and step edges. left: original image. right: contour points regularity map

• Geometrical image.

Figure 7: Geometrical image (left: original image. right: detected contours)

All the contour points found on this school image have a Lipschitz regularity between $-0.23$ and $0.2$, which is normal since all the edges are step edges. We perform 16 convolutions by $\Theta$. The image at the right represents the contour points, gathered in segments (one color per segment). A Lipschitz regularity (the average value of the Lipschitz regularities of the contour points constituting the segment) can be associated to each segment. Near some corners, the detection is not perfect. The more convolutions we perform, the more precision we lose near the corners (but the better we can compute the Lipschitz regularity, and the fewer texture points we get).

• Real image: mandrill.

Figure 8: Mandrill image. top left: original image. top right: Lipschitz regularity of contour points. bottom left: modulus of the CMT at scale $\frac{\sqrt2}{2}$. bottom right: modulus of the CMT at scale $\sqrt{\frac{15}{2}}$.

The hairs of the mandrill form a texture, whereas its eyes and its nose define geometrical contours. This coexistence makes the analysis of this image difficult. A simple thresholding of the weakest wavelet coefficients at a fine scale is not enough to detect the geometrical contours, since textures produce big wavelet coefficients as well. But, going further in the scales, we see that the textures are less detected. Moreover, the Lipschitz regularities of the hair points are significantly weaker than those of the geometrical contour points, which allows eliminating most of the texture points.

Conclusion

In this article, we have presented a multiscale edge detector. Inspired by the works of S. Mallat on chaining across scales, it shows how to perform an efficient chaining of the local maxima in the 2D case. Allowing a rapid computation of the direction of the gradient, the presented detector makes extensive use of this information to take into account the geometry of the analysed image. It is a multiscale detector, which allows us to compute the Lipschitz regularity of the edge points and to get fewer texture points. It brings two drawbacks: it increases the imprecision near the corner points (where two edges meet), and it increases the computational cost. To cope with the first problem, we use a "dilation" method of the edges, which fills the holes in the obtained edges thanks to the information of the finest scale. To deal with the second problem, we have proposed an efficient implementation scheme based on B-spline kernels. The orientability properties of this detector have been emphasized, thanks to the steerable filter theory. The reconstruction problem has been studied, on the one hand by proposing an inversion formula for the continuous definition of the Canny Multiscale Transform, on the other hand by designing reconstruction filters allowing to invert the chosen implementation algorithm of the Canny Multiscale Transform, the cascade B-spline algorithm. Additional work can be done on the exploitation of the orientability property, and on how to use this algorithm for denoising and texture separation, choosing a thresholding depending on the scale. We are also working on the application of this detection algorithm to watermarking (image protection, see [14]).


Appendix A: proof of Proposition 3.2

Proof of the orientation formula (9). We want to prove that
$$\vec W_\theta f = R_{-\theta}\, \vec W f.$$
Let us begin by observing that
$$W^1_\theta f(u,v,a,\theta) = \iint_{\mathbb R^2} f(x,y)\, \frac1a\, \psi^1\!\left(R^{-\theta}\!\left(\frac{x-u}{a},\frac{y-v}{a}\right)\right) dx\,dy = \iint_{\mathbb R^2} f(ax+u,\, ay+v)\, a\, \psi^1\!\left(R^{-\theta}(x,y)\right) dx\,dy.$$
The same change of variables can be done for $W^2_\theta f(u,v,a,\theta)$, and for $W^1 f(u,v,a)$ and $W^2 f(u,v,a)$. Let us now express $\frac{\partial\Theta}{\partial x}\!\left(R^{-\theta}(x,y)\right)$ and $\frac{\partial\Theta}{\partial y}\!\left(R^{-\theta}(x,y)\right)$; for this, let us write
$$R^{-\theta}(x,y) = \left(R^1(x,y),\, R^2(x,y)\right) \quad\text{with}\quad \left\{\begin{array}{l} R^1(x,y) = \cos\theta\, x + \sin\theta\, y \\ R^2(x,y) = -\sin\theta\, x + \cos\theta\, y \end{array}\right.$$
The map $(x,y) \mapsto (R^1(x,y), R^2(x,y))$ is a rotation change of variables. Now,
$$\begin{pmatrix} \frac{\partial}{\partial x}\!\left[\Theta(R^1(x,y), R^2(x,y))\right] \\[4pt] \frac{\partial}{\partial y}\!\left[\Theta(R^1(x,y), R^2(x,y))\right] \end{pmatrix} = \begin{pmatrix} \frac{\partial R^1}{\partial x} & \frac{\partial R^2}{\partial x} \\[4pt] \frac{\partial R^1}{\partial y} & \frac{\partial R^2}{\partial y} \end{pmatrix} \begin{pmatrix} \frac{\partial\Theta}{\partial x}\!\left(R^{-\theta}(x,y)\right) \\[4pt] \frac{\partial\Theta}{\partial y}\!\left(R^{-\theta}(x,y)\right) \end{pmatrix},$$
namely
$$\begin{pmatrix} \frac{\partial}{\partial x}\!\left[\Theta(R^{-\theta}(x,y))\right] \\[4pt] \frac{\partial}{\partial y}\!\left[\Theta(R^{-\theta}(x,y))\right] \end{pmatrix} = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} \frac{\partial\Theta}{\partial x}\!\left(R^{-\theta}(x,y)\right) \\[4pt] \frac{\partial\Theta}{\partial y}\!\left(R^{-\theta}(x,y)\right) \end{pmatrix}.$$
The Jacobian of this change of variables is the rotation matrix $R^\theta$; let us apply to the former equality the operator $(R^\theta)^{-1} = R^{-\theta}$, and take into consideration that $\Theta$ is isotropic, so that $\Theta(R^{-\theta}(x,y)) = \Theta(x,y)$, which gives:
$$\begin{pmatrix} \frac{\partial\Theta}{\partial x}\!\left(R^{-\theta}(x,y)\right) \\[4pt] \frac{\partial\Theta}{\partial y}\!\left(R^{-\theta}(x,y)\right) \end{pmatrix} = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} \frac{\partial}{\partial x}\!\left[\Theta(x,y)\right] \\[4pt] \frac{\partial}{\partial y}\!\left[\Theta(x,y)\right] \end{pmatrix}.$$

We then only have to multiply this equality by $a\, f(ax+u,\, ay+v)$ and integrate it with respect to $x$ and $y$ to obtain the result.

Existence of the constant $C_\psi$. $\Theta$ is a smoothing kernel of $L^2(\mathbb R^2)$. If we consider the wavelet $\psi = \frac{\partial\Theta}{\partial x}$, $C_\psi$ is defined by:
$$C_\psi = \iint_{\mathbb R^2} \frac{|\hat\psi(\vec\omega)|^2}{|\vec\omega|^2}\, d\vec\omega = \iint_{\mathbb R^2} \frac{\big|\widehat{\frac{\partial\Theta}{\partial x}}(\vec\omega)\big|^2}{\omega_1^2+\omega_2^2}\, d\vec\omega = \iint_{\mathbb R^2} \frac{4\pi^2 \omega_1^2\, |\hat\Theta(\vec\omega)|^2}{\omega_1^2+\omega_2^2}\, d\vec\omega \le 4\pi^2 \iint_{\mathbb R^2} |\hat\Theta(\vec\omega)|^2\, d\vec\omega.$$

Thus, $C_\psi$ is well defined since $\Theta$ belongs to $L^2(\mathbb R^2)$.

Value of the constants $C_{\psi^1}$ and $C_{\psi^2}$. $\psi^1$ and $\psi^2$ stand for the wavelets $\frac{\partial\Theta}{\partial x}$ and $\frac{\partial\Theta}{\partial y}$ respectively. By definition,
$$C_{\psi^1} = \iint_{\mathbb R^2} \frac{|\hat\psi^1(\vec\omega)|^2}{|\vec\omega|^2}\, d\vec\omega = \iint_{\mathbb R^2} 4\pi^2\, \frac{\omega_1^2 + \omega_2^2 - \omega_2^2}{\omega_1^2+\omega_2^2}\, |\hat\Theta(\vec\omega)|^2\, d\vec\omega = 4\pi^2 \|\Theta\|_2^2 - C_{\psi^2}.$$
Let us do a polar change of variables:
$$C_{\psi^1} = 4\pi^2 \iint \frac{\rho^2\cos^2\theta}{\rho^2}\, |\hat\Theta(\rho\cos\theta, \rho\sin\theta)|^2\, \rho\, d\rho\, d\theta, \qquad C_{\psi^2} = 4\pi^2 \iint \frac{\rho^2\sin^2\theta}{\rho^2}\, |\hat\Theta(\rho\cos\theta, \rho\sin\theta)|^2\, \rho\, d\rho\, d\theta.$$
$\Theta$ being isotropic, it is invariant under rotation, and so is its Fourier transform. Thus $|\hat\Theta(\rho\cos\theta, \rho\sin\theta)|^2$ does not actually depend on $\theta$, so that the integrands of $C_{\psi^1}$ and $C_{\psi^2}$ are separable: we can integrate $\cos^2\theta$ (in the $C_{\psi^1}$ equation) and $\sin^2\theta$ (in the $C_{\psi^2}$ equation) apart:
$$\int_0^{2\pi} \cos^2\theta\, d\theta = \int_0^{2\pi} \sin^2\theta\, d\theta = \pi,$$
therefore $C_{\psi^1} = C_{\psi^2}$. Combining the two obtained equations,
$$C_{\psi^1} = 4\pi^2 \|\Theta\|_2^2 - C_{\psi^2} \quad\text{and}\quad C_{\psi^1} = C_{\psi^2},$$
we obtain
$$C_{\psi^1} = C_{\psi^2} = 2\pi^2 \|\Theta\|_2^2.$$
If $\Theta(x,y) = G(x,y)$ where $G$ is the Gaussian kernel defined by $G(x,y) = \frac{1}{2\pi} e^{-\frac{x^2+y^2}{2}}$, then $\|G\|_2^2 = \frac{1}{4\pi}$, and therefore $C_{\psi^1} = C_{\psi^2} = \frac{\pi}{2}$.
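The closed form $C_{\psi^1} = C_{\psi^2} = 2\pi^2\|\Theta\|_2^2 = \frac{\pi}{2}$ for the Gaussian kernel can be checked numerically. The sketch below compares both sides by plain Riemann sums (grid bounds and steps are arbitrary choices; with the convention $\hat f(\vec\omega) = \int f(\vec x)\, e^{-2i\pi\vec\omega\cdot\vec x} d\vec x$, one has $\hat G(\vec\omega) = e^{-2\pi^2|\vec\omega|^2}$):

```python
import numpy as np

# Spatial side: 2*pi^2*||G||_2^2 for G(x,y) = exp(-(x^2+y^2)/2)/(2*pi).
x = np.linspace(-8, 8, 2001)
dx = x[1] - x[0]
X, Y = np.meshgrid(x, x)
G = np.exp(-(X**2 + Y**2) / 2) / (2 * np.pi)
lhs = 2 * np.pi**2 * np.sum(G**2) * dx * dx         # -> 2*pi^2 * 1/(4*pi) = pi/2

# Fourier side: C_psi1 = integral of |psi1_hat|^2 / |w|^2 with psi1 = dG/dx,
# i.e. 4*pi^2*w1^2*|G_hat|^2/(w1^2+w2^2), with G_hat(w) = exp(-2*pi^2*|w|^2).
w = np.linspace(-2, 2, 2001)
dw = w[1] - w[0]
W1, W2 = np.meshgrid(w, w)
rho2 = W1**2 + W2**2
integrand = 4 * np.pi**2 * W1**2 * np.exp(-4 * np.pi**2 * rho2) / np.where(rho2 == 0, 1, rho2)
integrand[rho2 == 0] = 0                            # removable singularity at the origin
rhs = np.sum(integrand) * dw * dw
print(lhs, rhs, np.pi / 2)                          # all three ≈ 1.5708
```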

Reconstruction formula. Let us first recall the general reconstruction formula of a directional wavelet transform (valid when $C_\psi$ is defined):
$$f(\vec x) = \frac{1}{C_\psi} \int_0^{2\pi}\!\! \int_0^{+\infty}\!\! \iint_{\mathbb R^2} W^1_\theta f(\vec b, a, \theta)\, \frac1a\, \psi^1\!\left(R^{-\theta}\frac{\vec x - \vec b}{a}\right) d\vec b\, \frac{da}{a^3}\, d\theta.$$
Let us adapt this formula to our framework ($\vec\psi = (\psi^1,\psi^2) = \vec\nabla\Theta$). Introducing formula (9), which links the CMT to a vectorial directional wavelet transform,
$$f(\vec x) = \frac{1}{C_{\psi^1}} \int_0^{2\pi}\!\! \int_0^{+\infty}\!\! \iint_{\mathbb R^2} \left[\cos\theta\, W^1 f(\vec b,a) + \sin\theta\, W^2 f(\vec b,a)\right] \frac1a\, \frac{\partial\Theta}{\partial x}\!\left(R^{-\theta}\frac{\vec x - \vec b}{a}\right) d\vec b\, \frac{da}{a^3}\, d\theta.$$
And since $\Theta$ is isotropic,
$$\frac{\partial\Theta}{\partial x}\!\left(R^{-\theta}\!\left(\frac{x-b_1}{a},\frac{y-b_2}{a}\right)\right) = \cos\theta\, \frac{\partial\Theta}{\partial x}\!\left(\frac{x-b_1}{a},\frac{y-b_2}{a}\right) + \sin\theta\, \frac{\partial\Theta}{\partial y}\!\left(\frac{x-b_1}{a},\frac{y-b_2}{a}\right).$$
As a consequence,
$$\begin{aligned} f(\vec x) = \frac{1}{C_{\psi^1}} \Bigg[ &\int_{a>0} \frac{da}{a^3} \iint_{\mathbb R^2} d\vec b \int_0^{2\pi} \cos^2\theta\; W^1 f(\vec b,a)\, \frac1a \frac{\partial\Theta}{\partial x}\!\left(\frac{x-b_1}{a},\frac{y-b_2}{a}\right) d\theta \\ +\ &\int_{a>0} \frac{da}{a^3} \iint_{\mathbb R^2} d\vec b \int_0^{2\pi} \sin^2\theta\; W^2 f(\vec b,a)\, \frac1a \frac{\partial\Theta}{\partial y}\!\left(\frac{x-b_1}{a},\frac{y-b_2}{a}\right) d\theta \\ +\ &\int_{a>0} \frac{da}{a^3} \iint_{\mathbb R^2} d\vec b \int_0^{2\pi} \cos\theta \sin\theta\; W^1 f(\vec b,a)\, \frac1a \frac{\partial\Theta}{\partial y}\!\left(\frac{x-b_1}{a},\frac{y-b_2}{a}\right) d\theta \\ +\ &\int_{a>0} \frac{da}{a^3} \iint_{\mathbb R^2} d\vec b \int_0^{2\pi} \cos\theta \sin\theta\; W^2 f(\vec b,a)\, \frac1a \frac{\partial\Theta}{\partial x}\!\left(\frac{x-b_1}{a},\frac{y-b_2}{a}\right) d\theta \Bigg]. \end{aligned}$$
In the first term, we can isolate what does not depend on $\theta$; there remains $\int_0^{2\pi} \cos^2\theta\, d\theta$, which is equal to $\pi$. Doing the same with the second term, we isolate $\int_0^{2\pi} \sin^2\theta\, d\theta$, also equal to $\pi$; and in the two last terms we isolate $\int_0^{2\pi} \cos\theta\sin\theta\, d\theta = \frac12 \int_0^{2\pi} \sin 2\theta\, d\theta$, equal to $0$. Finally, there remains
$$f(\vec x) = \frac{\pi}{C_{\psi^1}} \int_{a>0} \frac{da}{a^3} \iint_{\mathbb R^2} \left[ W^1 f(\vec b,a)\, \psi^1_{a,\vec b}(\vec x) + W^2 f(\vec b,a)\, \psi^2_{a,\vec b}(\vec x) \right] d\vec b,$$
where
$$\psi^1_{a,\vec b}(\vec x) = \frac1a \frac{\partial\Theta}{\partial x}\!\left(\frac{\vec x - \vec b}{a}\right), \qquad \psi^2_{a,\vec b}(\vec x) = \frac1a \frac{\partial\Theta}{\partial y}\!\left(\frac{\vec x - \vec b}{a}\right).$$

Energy conservation law. Replacing $W^1_\theta f(\vec b,a,\theta)$ with $\cos\theta\, W^1 f(\vec b,a) + \sin\theta\, W^2 f(\vec b,a)$, we have:
$$\int_0^{2\pi} |W^1_\theta f(\vec b,a,\theta)|^2\, d\theta = \int_0^{2\pi} \left[\cos\theta\, W^1 f(\vec b,a) + \sin\theta\, W^2 f(\vec b,a)\right]^2 d\theta.$$
Developing this expression:
$$\int_0^{2\pi} |W^1_\theta f(\vec b,a,\theta)|^2\, d\theta = \pi \left(W^1 f(\vec b,a)\right)^2 + \pi \left(W^2 f(\vec b,a)\right)^2.$$
Let us report this in the energy conservation law (10) of a directional transform:
$$\iint_{\mathbb R^2} |f(\vec t)|^2\, d\vec t = \frac{\pi}{C_\psi} \int_{a>0} \frac{da}{a^3} \iint_{\mathbb R^2} \left[ |W^1 f(\vec b,a)|^2 + |W^2 f(\vec b,a)|^2 \right] d\vec b,$$
which gives us the expected result.
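The isotropy argument used throughout this appendix is exactly the steerability relation $\frac{\partial\Theta}{\partial x}\!\left(R^{-\theta}(x,y)\right) = \cos\theta\,\frac{\partial\Theta}{\partial x}(x,y) + \sin\theta\,\frac{\partial\Theta}{\partial y}(x,y)$. It can be verified numerically for the Gaussian kernel (a sketch; the function names are ours):

```python
import numpy as np

def G_x(x, y):  # partial derivative in x of G(x,y) = exp(-(x^2+y^2)/2)/(2*pi)
    return -x * np.exp(-(x**2 + y**2) / 2) / (2 * np.pi)

def G_y(x, y):  # partial derivative in y
    return -y * np.exp(-(x**2 + y**2) / 2) / (2 * np.pi)

x, y = 0.37, -1.21
for theta in np.linspace(0.0, 2 * np.pi, 9):
    # R^{-theta}(x,y) = (cos(t)x + sin(t)y, -sin(t)x + cos(t)y)
    u = np.cos(theta) * x + np.sin(theta) * y
    v = -np.sin(theta) * x + np.cos(theta) * y
    steered = np.cos(theta) * G_x(x, y) + np.sin(theta) * G_y(x, y)
    assert abs(G_x(u, v) - steered) < 1e-14
print("steerability identity verified")
```

This is the sense in which the CMT filters are steerable: the derivative of the kernel in any direction is a combination of the two axis derivatives only.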

Appendix B: proof of Proposition 3.3

Let us show that
$$\delta = \Theta + D_x * \tilde D_x + D_y * \tilde D_y,$$
or equivalently, using the transfer functions of the filters,
$$\hat\Theta + \hat D_x \hat{\tilde D}_x + \hat D_y \hat{\tilde D}_y = 1. \qquad (16)$$

• Computation of $\hat\Theta$, $\hat D_x$ and $\hat D_y$

Value of $\hat\Theta$: since
$$\Theta = \frac14 \begin{bmatrix} 1 & 2 & 1 \end{bmatrix} * \frac14 \begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix},$$
we have $\hat\Theta(\omega_1,\omega_2) = \hat\Theta_1(\omega_1)\, \hat\Theta_1(\omega_2)$, where
$$\hat\Theta_1(\omega) = \frac14 \left(e^{-2i\pi\omega} + e^{2i\pi\omega} + 2\right) = \frac12 (1 + \cos 2\pi\omega) = \cos^2\pi\omega,$$
and therefore
$$\hat\Theta(\omega_1,\omega_2) = \cos^2(\pi\omega_1)\cos^2(\pi\omega_2).$$
Value of $\hat D_x$:
$$D_x = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 1 & -1 \\ 0 & 0 & 0 \end{bmatrix}, \quad\text{therefore}\quad \hat D_x = 1 - e^{-2i\pi\omega_1} = 2ie^{-i\pi\omega_1}\sin(\pi\omega_1).$$
Value of $\hat D_y$:
$$D_y = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & -1 & 0 \end{bmatrix}, \quad\text{and then}\quad \hat D_y = 2ie^{-i\pi\omega_2}\sin(\pi\omega_2).$$

• Developing (16)

Let us come back to (16); we want:
$$\cos^2\pi\omega_1 \cos^2\pi\omega_2 + 2ie^{-i\pi\omega_1}\sin\pi\omega_1\, \hat{\tilde D}_x + 2ie^{-i\pi\omega_2}\sin\pi\omega_2\, \hat{\tilde D}_y = 1.$$
Let us search for $\hat{\tilde D}_x$ under the form
$$\hat{\tilde D}_x = -\frac i2\, e^{i\pi\omega_1}\sin\pi\omega_1\, \hat{\bar D}_x$$
and for $\hat{\tilde D}_y$ under the form
$$\hat{\tilde D}_y = -\frac i2\, e^{i\pi\omega_2}\sin\pi\omega_2\, \hat{\bar D}_y.$$
The problem is then to find $\hat{\bar D}_x$ and $\hat{\bar D}_y$ such that
$$\forall(\omega_1,\omega_2)\in\mathbb R^2, \quad \cos^2\pi\omega_1 \cos^2\pi\omega_2 + \sin^2\pi\omega_1\, \hat{\bar D}_x + \sin^2\pi\omega_2\, \hat{\bar D}_y = 1.$$
It is easy to see that
$$\hat{\bar D}_x = \cos^2\pi\omega_2 \quad\text{and}\quad \hat{\bar D}_y = 1$$
are solutions of the problem; so are
$$\hat{\bar D}_y = \cos^2\pi\omega_1 \quad\text{and}\quad \hat{\bar D}_x = 1.$$
So as to establish a symmetry between $\tilde D_x$ and $\tilde D_y$, let us choose the average of these two solutions, namely
$$\hat{\tilde D}_x = -\frac i2\, e^{i\pi\omega_1}\sin\pi\omega_1\, \frac{1+\cos^2\pi\omega_2}{2} \quad\text{and}\quad \hat{\tilde D}_y = -\frac i2\, e^{i\pi\omega_2}\sin\pi\omega_2\, \frac{1+\cos^2\pi\omega_1}{2}.$$

• Back to the direct space

Let us develop these expressions so as to read them as trigonometric polynomials and deduce $\tilde D_x$ and $\tilde D_y$:
$$\hat{\tilde D}_x = \frac1{32}e^{2i\pi\omega_2} + \frac3{16} + \frac1{32}e^{-2i\pi\omega_2} - \frac1{32}e^{2i\pi\omega_2}e^{2i\pi\omega_1} - \frac3{16}e^{2i\pi\omega_1} - \frac1{32}e^{-2i\pi\omega_2}e^{2i\pi\omega_1};$$
as a consequence,
$$\tilde D_x = \frac1{16} \begin{bmatrix} -\frac12 & \frac12 & 0 \\ -3 & 3 & 0 \\ -\frac12 & \frac12 & 0 \end{bmatrix},$$
and identically,
$$\tilde D_y = \tilde D_x^T = \frac1{16} \begin{bmatrix} -\frac12 & -3 & -\frac12 \\ \frac12 & 3 & \frac12 \\ 0 & 0 & 0 \end{bmatrix}.$$
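The masks derived in this appendix can be checked against identity (16) numerically, by evaluating each transfer function on a frequency grid. The sketch below assumes the tap convention used here: matrix rows index the y-offset $l \in \{-1,0,1\}$, columns the x-offset $k \in \{-1,0,1\}$, with $\hat F(\omega_1,\omega_2) = \sum_{k,l} F[l,k]\, e^{-2i\pi(k\omega_1 + l\omega_2)}$.

```python
import numpy as np

Theta = np.outer([1, 2, 1], [1, 2, 1]) / 16.0
Dx = np.array([[0, 0, 0], [0, 1, -1], [0, 0, 0]], float)
Dy = np.array([[0, 0, 0], [0, 1, 0], [0, -1, 0]], float)
Dx_dual = np.array([[-0.5, 0.5, 0], [-3, 3, 0], [-0.5, 0.5, 0]]) / 16.0
Dy_dual = Dx_dual.T

def transfer(mask, w1, w2):
    """DTFT of a 3x3 mask: rows = y-offset l in {-1,0,1}, cols = x-offset k."""
    out = np.zeros(w1.shape, complex)
    for l in (-1, 0, 1):
        for k in (-1, 0, 1):
            out += mask[l + 1, k + 1] * np.exp(-2j * np.pi * (k * w1 + l * w2))
    return out

w = np.linspace(-0.5, 0.5, 101)
W1, W2 = np.meshgrid(w, w)
total = (transfer(Theta, W1, W2)
         + transfer(Dx, W1, W2) * transfer(Dx_dual, W1, W2)
         + transfer(Dy, W1, W2) * transfer(Dy_dual, W1, W2))
print(np.max(np.abs(total - 1)))  # ≈ 0 (machine precision): (16) holds
```

The deviation from 1 stays at rounding-error level over the whole frequency square, confirming that the dual filters found above are exact.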

References

[1] R. Adams, L. Bischof, Seeded region growing, IEEE Trans. on PAMI, Vol. 16, No. 6, June 1994, pp. 641-647.

[2] A. Arneodo, E. Bacry, S. Jaffard, and J. F. Muzy, Oscillating singularities and fractal functions. In Spline functions and the theory of wavelets (Montreal, PQ, 1996), volume 18 of CRM Proc. Lecture Notes, pages 315-329. Amer. Math. Soc., Providence, RI, 1999.

[3] E. Bacry, J. F. Muzy, and A. Arneodo, Singularity spectrum of fractal signals: exact results. Journal of Statistical Physics, 70(3/4):635-674, 1993.

[4] K. Berkner, R.O. Wells, A new hierarchical scheme for approximating the continuous wavelet transform with applications to edge detection, IEEE Signal Processing Letters, 6 no. 8 (1999) 148-153.

[5] J. Canny, A computational approach to edge detection, IEEE Trans. Patt. Anal. and Mach. Intell., PAMI-8 no. 6 (1986) 679-698.

[6] Collective work directed by J-P. Cocquerez and S. Philipp, Analyse d'images : filtrage et segmentation, Masson, 1995.

[7] L.D. Cohen, On Active Contour Models and balloons, CVGIP: Image Understanding, vol. 53, p. 211, 1991.

[8] M. Fleute, L. Desbat, R. Martin, S. Lavallée, M. Defrise, X. Liu and R. Taylor, Statistical model registration for a C-arm CT system, IEEE NSS-MIC 2001, abstract book p. 112, San Diego, 2001.

[9] W. T. Freeman, E. H. Adelson, The Design and Use of Steerable Filters, IEEE Trans. Patt. Anal. and Mach. Int. (1991).

[10] M. Holschneider, R. Kronland-Martinet, J. Morlet, P. Tchamitchian, Wavelets, Time-Frequency Methods, and Phase Space, Springer-Verlag, Berlin, 1989.

[11] S. Jaffard, Exposants de Hölder en des points donnés et coefficients d'ondelettes, Note aux Comptes Rendus de l'Académie des Sciences, France, 308 sér. I, pp. 79-81, 1989.

[12] T. Chang, C. C. J. Kuo, Texture Analysis and classification with tree-structured wavelet transform, IEEE Transactions on Image Processing, vol. 2, no. 4, 429-441, 1993.

[13] O. Le Cadet, Détection et caractérisation des contours d'une image. Application à la segmentation d'images médicales et au watermarking, PhD thesis (2004).

[14] O. Le Cadet, A.S. Piquemal, On the Application of Edge Detection to Watermark Images, submitted to International Journal of Wavelets, Multiresolution and Information Processing (IJWMIP).

[15] T. Lindeberg, Scale Space Theory in Computer Vision, Kluwer, Boston, 1994.

[16] S. Mallat, A Wavelet Tour of Signal Processing, Academic Press (1998).

[17] S. Meignen, Problèmes d'échelle dans la segmentation par ondelettes d'images texturées, PhD thesis, Université Joseph Fourier, Grenoble, 2001.

[18] S. Meignen, S. Achard, Time Localization of Transients with Wavelet Maxima Lines, IEEE Trans. Sign. Proc., accepted, 2004.

[19] S. Mallat, W.L. Hwang, Singularity Detection and Processing with Wavelets, IEEE Trans. Info. Theory 38(2) (1992) 617-643.

[20] S. Mallat, S. Zhong, Characterization of signals from multiscale edges, IEEE Trans. on Patt. An. and Mach. Int. 14(7) (1992) 710-732.

[21] R. Murenzi, Ondelettes multidimensionnelles et applications à l'analyse d'images, thèse de doctorat, Université catholique de Louvain, Belgique, 1990.

[22] E. Simoncelli, H. Farid, Steerable wedge filters for local orientation analysis, IEEE Trans. Image Processing, 5(9):1377-1382, Sept 1996.

[23] M. Kass, A. Witkin, D. Terzopoulos, Snakes: Active contour models, Int. J. Computer Vision 1, pp. 321-332, 1988.

[24] M. Unser, A. Aldroubi, M. Eden, On the asymptotic convergence of B-spline wavelets to Gabor functions, IEEE Trans. of Information Theory 38(2) (1992) 864-872.

[25] A.L. Yuille, T. Poggio, Scaling and fingerprint theorems for zero-crossings. In C. Brown (Ed.), Advances in Computer Vision, Hillsdale, N.J.: Lawrence Erlbaum Ass., pp. 47-78, 1988.
