Quantization Noise Removal for Optimal Transform Decoding

Quantization Noise Removal for Optimal Transform Decoding S. Tramini(1), M. Antonini(1), M. Barlaud(1), G. Aubert(2) E-Mail: [email protected]

(1) I3S Laboratory, CNRS UPRES-A 6070, University of Nice-Sophia Antipolis, 2000 route des Lucioles, bâtiment Algorithmes/Euclide, Sophia Antipolis, 06410 Biot, France
(2) J.A. Dieudonné Laboratory, UMR 6621, University of Nice-Sophia Antipolis, Parc Valrose, 06108 Nice Cedex 2, France

Abstract
This paper examines the relationship between quantization noise removal and variational problems. Traditional restoration techniques for transformed and quantized images cannot prevent parasitic effects due to quantization noise. In this paper, we propose a new method, involving a priori assumptions on the solution and knowledge of the coder (transformation and quantization), to account for the effects of quantization noise. This technique, called MORPHÉ, can be viewed as an inverse problem with optimization of the Transform/Quantization/Decoding structure. This leads to the study of different ways to solve the constrained optimization problem. Experiments using this nonlinear inverse dynamic filtering demonstrate PSNR gains over standard linear inverse filtering as well as appreciable visual improvements.


1. Introduction

It is well known that coding and decoding an image introduces annoying artifacts (using either DCT or wavelet transforms). Much work has been devoted to the design of denoising post-processing algorithms, which operate on the decoded images after decoding. The drawback of these approaches is that they take into account neither the quantization process nor an accurate noise model. The goal of this work is to overcome the introduction of artifacts. Thus, we break with the usual linear approach and develop an advanced decoding technique for the transform/quantization image-coding scheme. The method operates during decoding and takes into account the nonlinearity of the quantization process. Our decoding algorithm MORPHÉ (Method for Optimal Reconstruction including Projection and Hyperparameters Estimation) is based on two key steps:
1) Modeling of the coder operations: i) quantizer model; ii) introduction of a quantization range constraint in order to complete the quantizer model;
2) Noise removal with edge preservation.
In this paper, we focus on the choice of the quantizer constraints at decoding and especially on the tuning of the hyperparameters, which is a crucial and difficult operation.

2. Reconstruction scheme under constraints

2.1 Statement of the problem

Let us define Ω, the support of the image f (a rectangle in general), as an open bounded set of ℝ². In continuous variables, the image to be restored f is represented by a function on Ω which associates, to each pixel (x, y) ∈ Ω, its gray level f(x, y). Let us define p = Rf as the transformed image in the transform domain Ω_R, where R is a given linear operator (wavelet transform, convolution, ...); R: L²(Ω) → L²(Ω_R). In order to simplify the presentation, we extend both open bounded sets Ω and Ω_R to ℝ²; norms and integrals are therefore always taken over ℝ². The output of a quantizer Q can always be described by an additive noise model [1], and we thus solve the following problem. Let

p̃ = Q(p) = p + ε(p)   (1)





where the energy of the quantization noise ε is known. In the rest of the paper, we assume that the operator R corresponds to a transformation with good decorrelation properties [2] and that p̃ results from an optimal quantization of the transformed data [3;4].
The main idea of decoding is to find f ∈ L²(ℝ²) which minimizes a criterion J, with R and p̃ fixed. The estimated image is then given by:

f̂ = arg min_f J(f)   (2)

where J is the sum of a term measuring the faithfulness of the estimate to the data and a regularizing term. Here, we propose to minimize the following criterion:

J(f) = J1(f) + C1(f)   (3)

The first term takes into account the observed data and the quantizer model in the transform domain (see formula (6)). The second term contains a priori assumptions on the solution; it allows noise removal while preserving edges [5;6]. We chose the following one:

C1(f) = λ² ∫ ψ(|∇f|)   (4)

where ψ is a convex potential function [5;6].
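For concreteness, here is a small numerical sketch of the regularization term C1. It assumes a hypothetical convex, edge-preserving potential ψ(t) = 2(√(1+t²) − 1) of the kind discussed in [5;6], a forward-difference gradient, and an arbitrary λ; none of these specific choices is prescribed by the paper.

```python
import numpy as np

def psi(t):
    # Hypothetical convex, edge-preserving potential: quadratic near 0,
    # asymptotically linear for large gradients (hyperbolic type).
    return 2.0 * (np.sqrt(1.0 + t**2) - 1.0)

def regularizer_C1(f, lam=1.0):
    # C1(f) = lam^2 * sum over pixels of psi(|grad f|),
    # with a forward-difference approximation of the gradient.
    gx = np.diff(f, axis=1, append=f[:, -1:])   # horizontal differences
    gy = np.diff(f, axis=0, append=f[-1:, :])   # vertical differences
    grad_norm = np.sqrt(gx**2 + gy**2)
    return lam**2 * psi(grad_norm).sum()

# Example: the regularizer is small for a smooth ramp, larger for a noisy one.
ramp = np.tile(np.linspace(0, 1, 64), (64, 1))
noisy = ramp + 0.05 * np.random.default_rng(0).standard_normal(ramp.shape)
print(regularizer_C1(ramp), regularizer_C1(noisy))
```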

2.2 Quantizer constraint

2.2.1 Quantizer model

The quantization model can be written as a gain/additive model [7;8]. Thus, the data p̃ can be related to the unknown image f through a linear model of the form:

p̃ = Q(Rf) = α Rf + r   (5)

with α = 1 − σ_ε²/σ_origin², where σ_origin² and σ_ε² are known across the subbands and are provided by the coder; r is a noise decorrelated from Rf. We take r constant with respect to Rf. Then:

J1(f) = ‖α Rf + r − p̃‖²   (6)
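To make the gain/additive view concrete, the sketch below (illustrative only; the Laplacian source model, the step q and all variable names are assumptions, not part of the paper) quantizes a toy subband with a uniform scalar quantizer, estimates α = 1 − σ_ε²/σ_origin², and evaluates the data term J1.

```python
import numpy as np

def quantize(p, q):
    # Uniform scalar quantizer with step q: each p is mapped to the centre
    # of its quantization cell, so |p_tilde - p| <= q/2.
    return q * np.round(p / q)

rng = np.random.default_rng(0)
p = rng.laplace(scale=4.0, size=10_000)      # stand-in for one subband Rf
q = 2.0
p_tilde = quantize(p, q)

sigma2_origin = p.var()
sigma2_eps = (p_tilde - p).var()              # quantization-noise energy
alpha = 1.0 - sigma2_eps / sigma2_origin      # gain of the gain/additive model
r = (p_tilde - alpha * p).mean()              # residual treated as a constant

# Data-fidelity term J1 = || alpha*Rf + r - p_tilde ||^2 for a candidate Rf (here p itself)
J1 = np.sum((alpha * p + r - p_tilde) ** 2)
print(alpha, J1)
```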

2.2.2 Quantization interval constraint

For every value belonging to the range constraint I = [p̃ − q/2, p̃ + q/2[, the uniform scalar quantizer gives the unique value p̃, where q is the quantization step. The introduction of our quantization model (5) is not sufficient to ensure that the transform coefficients of the reconstructed image belong to I. In this section, we take this information into account. Now, in order to reconstruct f, we consider the following problem:

minimize J(f)   subject to:   Rf ∈ I   (7)

Solving (7) is equivalent to satisfying the two following inequalities:

g1(f) = Rf − p̃ − q/2 < 0,
g2(f) = p̃ − Rf − q/2 ≤ 0.   (8)

In the two following sections, we study the problem of constrained optimization, considering in particular two methods which are substantially different. The principle common to these two methods is the casting of problem (7) into a form of unconstrained optimization.
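As a small illustration (assumed helper names, not code from the paper), the two inequalities of (8) simply state that each transform coefficient stays in its half-open quantization cell I:

```python
import numpy as np

def range_constraints(Rf, p_tilde, q):
    # g1 < 0 and g2 <= 0 together mean Rf lies in I = [p_tilde - q/2, p_tilde + q/2).
    g1 = Rf - p_tilde - q / 2.0
    g2 = p_tilde - Rf - q / 2.0
    return g1, g2

def in_quantization_cell(Rf, p_tilde, q):
    g1, g2 = range_constraints(Rf, p_tilde, q)
    return (g1 < 0) & (g2 <= 0)

Rf = np.array([0.4, 3.2, 1.1])
p_tilde = np.array([0.0, 2.0, 2.0])   # coefficients the decoder received
print(in_quantization_cell(Rf, p_tilde, q=2.0))   # [ True False  True ]
```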

3. Exterior penalty method

3.1 Range constraint penalty

The exterior penalty method constitutes a family of algorithms which is particularly interesting due to the simplicity of the concept as well as its efficiency in practice. Let us define the function h(·), as in [9], by:

h(y) = 0 if y ≤ 0,   h(y) = y² if y > 0,   ∀y ∈ ℝ.

The constrained problem (7) is equivalent to the following unconstrained problem:

minimize Φ(f, µ) = J(f) + µ C2(f),   with µ > 0   (9)

fig.1: penalty influence on criterion (9).

The penalty function C2(f) allows us to find a solution f such that Rf belongs to the "good" quantization interval I for Q(Rf) = p̃. In other words, if a transform coefficient lies outside the range, C2(f) pushes it back in; otherwise, no change is made. This penalty function is defined by:



C2(f) = Σ_{i=1}^{2} h(g_i(f)) = Σ_{i=1}^{2} (g_i⁺(f))²,   where g⁺(f) = max[0, g(f)], ∀f.   (10)

Furthermore, since max(a, b) = ½(|b − a| + b + a), (10) can be formulated as:

C2(f) = ¼ Σ_{i=−1, i≠0}^{1} ( |Rf − p̃ + i q/2| − i (Rf − p̃) − q/2 )²   (11)
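A small numerical check (illustrative names, not from the paper) that the two expressions (10) and (11) of the exterior penalty agree, and that C2 vanishes exactly when Rf lies inside its quantization cell:

```python
import numpy as np

def C2_from_constraints(Rf, p_tilde, q):
    # Formulation (10): sum of squared positive parts of g1 and g2.
    g1 = Rf - p_tilde - q / 2.0
    g2 = p_tilde - Rf - q / 2.0
    return np.sum(np.maximum(0.0, g1) ** 2 + np.maximum(0.0, g2) ** 2)

def C2_closed_form(Rf, p_tilde, q):
    # Formulation (11): the same quantity written with the max(a, b) identity.
    d = Rf - p_tilde
    total = 0.0
    for i in (-1, 1):
        total = total + (np.abs(d + i * q / 2.0) - i * d - q / 2.0) ** 2
    return 0.25 * np.sum(total)

Rf = np.array([0.3, 1.4, -0.8])       # candidate transform coefficients
p_tilde = np.array([0.0, 0.0, 0.0])   # decoded values, cell = [-0.5, 0.5)
q = 1.0
print(C2_from_constraints(Rf, p_tilde, q))  # > 0: two coefficients are outside
print(C2_closed_form(Rf, p_tilde, q))       # identical value
```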

Suppose that J has a minimum at f̂; then we necessarily have:

(1/2) ∂Φ(f, µ̂)/∂f |_{f = f̂} = 0.

Thus, the solution f̂ of the minimization problem (9) verifies the following Euler-Lagrange equation:

R*α² Rf̂ − λ² div( (ψ′(|∇f̂|) / |∇f̂|) ∇f̂ ) + µ κ(f̂) = R*α p̃   (12)

with:
κ(f̂) = 0   if Rf̂ ∈ I,
κ(f̂) = R* g1(f̂)   if Rf̂ > p̃ + q/2,
κ(f̂) = −R* g2(f̂)   if Rf̂ < p̃ − q/2,

with Neumann boundary conditions. R* stands for the adjoint operator, and div stands for the divergence operator. In order to solve these nonlinear equations, we use a fast half-quadratic deterministic minimization algorithm derived from ARTUR [6]. The drawback of this penalty method is the empirical choice of the hyperparameter µ.
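For intuition only, here is a toy gradient-descent sketch of the penalized criterion Φ. It is not the half-quadratic ARTUR scheme of [6]; it assumes R is the identity (so R* = Id and r = 0), the hypothetical potential ψ(t) = 2(√(1+t²) − 1), and illustrative values for λ, µ, α and the step size.

```python
import numpy as np

def grad_Phi(f, p_tilde, q, alpha, lam, mu, eps=1e-6):
    # Gradient of the data term ||alpha*f - p_tilde||^2 (R = Id, r = 0 assumed).
    g_data = 2.0 * alpha * (alpha * f - p_tilde)
    # Gradient of lam^2 * sum psi(|grad f|), i.e. -lam^2 * div(psi'(|grad f|)/|grad f| * grad f).
    gx = np.diff(f, axis=1, append=f[:, -1:])
    gy = np.diff(f, axis=0, append=f[-1:, :])
    w = 2.0 / np.sqrt(1.0 + gx**2 + gy**2 + eps)        # psi'(t)/t for this psi
    div = (np.diff(w * gx, axis=1, prepend=(w * gx)[:, :1])
           + np.diff(w * gy, axis=0, prepend=(w * gy)[:1, :]))
    g_reg = -lam**2 * div
    # Gradient of mu*C2(f): 2*mu*(g1+ - g2+) with g1 = f - p_tilde - q/2, g2 = p_tilde - f - q/2.
    g1 = np.maximum(0.0, f - p_tilde - q / 2.0)
    g2 = np.maximum(0.0, p_tilde - f - q / 2.0)
    g_pen = 2.0 * mu * (g1 - g2)
    return g_data + g_reg + g_pen

rng = np.random.default_rng(0)
p_tilde = np.round(rng.normal(size=(32, 32)))   # toy "decoded" coefficients, q = 1
f = p_tilde.copy()
for _ in range(200):                            # plain gradient descent on Phi
    f -= 0.05 * grad_Phi(f, p_tilde, q=1.0, alpha=0.9, lam=0.3, mu=5.0)
```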

3.2 Determination of the hyperparameter µ

The purpose of this section is to propose a method for tuning the Lagrangian parameter µ in equation (9). The convex criterion (3) is minimized under the constraint C2(f) ≤ 0. The choice of µ results from a compromise:
- µ must be large enough to satisfy the constraint (Rf ∈ I);
- if µ is too large, numerical difficulties arise in the search for the unconstrained optimum (the function Φ becomes ill-conditioned).



Thus, we introduce a fast search method with an iterative formula for µ:

Step 1: Initialization. Let µ^0 ≥ 0.
Step 2: Compute f̂ = arg min_f Φ(f, µ^k) ⇒ solve (12).
Step 3: µ^{k+1} = µ^k + η̂_k d^k, with d^k = ∇_µ Φ(f̂) = C2(f̂).
Stopping rule: if d^k = 0, stop; else k ← k+1 and go to Step 2.

where η̂_k is the gradient step, whose value determines the convergence speed, and d^k is the gradient of Φ with respect to µ at the point (f̂, µ^k).

Remark: A natural way to adapt the unconstrained optimization method (2) to the constrained problem (7), without the drawback of determining µ, is to project, at each step of the minimization, the solution onto the boundary of the quantization domain [10] (see Annex A).
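The outer loop above can be sketched as follows; `solve_eq12` stands for any inner solver of (12) (e.g. a half-quadratic scheme) and `C2` for the penalty of (10). Both are assumed callables, not code given by the paper.

```python
def exterior_penalty_decode(p_tilde, q, solve_eq12, C2, mu=1.0, eta=0.5,
                            tol=1e-8, max_outer=50):
    """Gradient ascent on mu for the exterior penalty method of Section 3.2 (sketch)."""
    f_hat = None
    for _ in range(max_outer):
        f_hat = solve_eq12(p_tilde, q, mu)      # Step 2: minimize Phi(., mu)
        d = C2(f_hat)                           # Step 3: gradient of Phi w.r.t. mu
        if d <= tol:                            # stopping rule: constraint satisfied
            break
        mu = mu + eta * d                       # increase the penalty weight
    return f_hat, mu
```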



4. Lagrangian duality method: MORPHÉ

Let us define the Lagrangian function L(f, µ):

L(f, µ) = J(f) + Σ_{i=1}^{2} ∫ µ_i g_i(f)

Problem (7) can be solved if we can find a saddle point of the Lagrangian function, i.e. s = (f̂, µ̂), which satisfies:

L(f̂, µ̂) = min_f L(f, µ̂),   g_i(f̂) ≤ 0,   µ̂_i · g_i(f̂) = 0,   i = 1..2.

fig. 2: saddle point illustration: f̂ minimizes L(f, µ̂) over L²(ℝ²); µ̂ maximizes L(f̂, µ) over ℝ⁺.

Let us now define, for µ ≥ 0, the function ω(µ) = min_f L(f, µ). The constrained problem (7) is then equivalent to the following dual problem:

max_{µ ∈ (ℝ⁺)²} ω(µ)   (13)

This method consists of solving the dual problem (13), using essentially the concavity of the dual function ω and the fact that a gradient can be derived from the computation of ω(µ). The function L(f, µ) is strictly convex; thus, the minimum in f of L(f, µ̂) is unique for µ = µ̂ (the optimum of the dual). If there exists a saddle point s, the solution of the dual problem (13) yields an optimal solution of the primal problem (7). Thus, if there exists a solution f̂ to this minimization problem, it verifies

(1/2) ∂L(f, µ̂)/∂f |_{f = f̂} = 0,

that is, the following Euler-Lagrange equation:

R*α² Rf̂ − λ² div( (ψ′(|∇f̂|) / |∇f̂|) ∇f̂ ) + R*(µ1 − µ2) = R*α p̃   (14)

Thus, we use a fast search method derived from UZAWA's gradient method [11;12], with an iterative formula for µ:

Step 1: Initialization. Let µ1^0, µ2^0 ≥ 0.
Step 2: Compute ω(µ^k) = min_f L(f, µ^k) ⇒ solve (14).
Step 3: µ_i^{k+1} = Proj_+ [ µ_i^k + η_k g_i(f^k) ]   for i = 1..2.
Stopping rule: convergence on f; else k ← k+1 and go to Step 2.
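A compact sketch of the Uzawa iteration; `solve_eq14`, `g1` and `g2` are assumed callables standing for an inner solver of (14) and the two constraints of (8), and are not code provided by the paper.

```python
import numpy as np

def uzawa_decode(p_tilde, q, solve_eq14, g1, g2, eta=0.5, tol=1e-6, max_outer=100):
    """Projected gradient ascent on the dual variables (mu1, mu2), Section 4 (sketch)."""
    mu1 = np.zeros_like(p_tilde)
    mu2 = np.zeros_like(p_tilde)
    f_prev = None
    for _ in range(max_outer):
        f = solve_eq14(p_tilde, q, mu1, mu2)            # Step 2: minimize L(., mu)
        mu1 = np.maximum(0.0, mu1 + eta * g1(f))        # Step 3: ascent + projection on R+
        mu2 = np.maximum(0.0, mu2 + eta * g2(f))
        if f_prev is not None and np.max(np.abs(f - f_prev)) < tol:
            break                                       # stopping rule: convergence on f
        f_prev = f
    return f, mu1, mu2
```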



5. Experiments

5.1 Application to image compression

The experimental results are shown in figures 3 and 4. In figure 3, the original image is a 512x512 synthetic piecewise-constant image called Vnice. This phantom is derived from Shepp & Logan's phantom [13]. We code this image using the method proposed in [4] with the biorthogonal 9-7 filters. The compression ratio is 32:1. Decoding is performed with different approaches: the classical inverse linear filtering, the inverse linear filtering followed by denoising with ARTUR, and the new method with the improved quantization model. Our new decoding algorithm MORPHÉ yields improvements in visual quality as well as a gain of about 8 dB in peak SNR over the linear method. We note that the evolution-projection algorithm [10] gives the same result as denoising plus the exterior penalty method. We also note the importance of the quantizer model (see 2.2.1) and the improvement brought by the quantization range constraint (see 2.2.2).
In figure 4, the original image is a 512x512 image called Blonde, from the Kodak site. We code this image using the method proposed in [3;4] with the biorthogonal 5-7 filters. The compression ratio is 48:1. Experiments using MORPHÉ again demonstrate PSNR gains (0.9 dB) over standard linear inverse filtering as well as appreciable visual improvements. Moreover, MORPHÉ does not blur the image, and it recovers both the geometry and the radiometry of the image.

5.2 Application to blurry images corrupted by uniform noise

This method can be applied to a large number of image processing problems. We propose to apply MORPHÉ to deblur images corrupted by quantization noise or by uniform noise. The entire model described in this paper remains legitimate for deblurring. Results are shown for three different applications on the 512x512 Barbara image. We consider images which are both blurry and noisy. Here we take R to be a convolution operator with a large PSF kernel, and we consider three noise models. Let us define three constants A, B, and C.
Our first model is a quantization noise: p̃ = Q(Rf), with quantization step q ∈ ℝ.
Our second model is as follows. Given an image p̃ for which p̃ = Rf + η1, we add a uniform and stationary noise (its variance is σ² = A). We assume for simplicity that this uniform noise can be treated as a quantization noise of variance q²/12 (q being the quantization step). Consequently, we handle this second model as the first one. The quantization step is given by:

q = 2√(3A).

Our third model is as follows. Given an image p̃ for which p̃ = Rf + η2, we add a uniform and non-stationary noise (its variance is σ² = A + B|Rf| + C|Rf|²). We can treat this uniform bounded noise as a quantization noise with a non-stationary quantization step given by:

q = 2√(3(A + B|Rf| + C|Rf|²)).
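The conversion from a bounded uniform noise to an equivalent quantization step used above is q = √(12σ²) = 2√(3σ²); a minimal sketch with the constants of figure 6 (the function name is an assumption):

```python
import numpy as np

def equivalent_step(var):
    # A zero-mean uniform noise on [-q/2, q/2] has variance q^2/12,
    # so a bounded noise of variance var behaves like a step q = 2*sqrt(3*var).
    return 2.0 * np.sqrt(3.0 * var)

# Stationary case (second model): sigma^2 = A
A = 1.3
print(equivalent_step(A))

# Non-stationary case (third model): sigma^2 = A + B|Rf| + C|Rf|^2, per coefficient
B, C = 0.00675, 0.0000189
Rf = np.array([0.0, 10.0, 100.0])
print(equivalent_step(A + B * np.abs(Rf) + C * Rf**2))
```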

Results are given for these three applications. Figure 6 compares the performance of a commonly used deblurring technique (here, ARTUR [6]) with MORPHÉ. These results illustrate the behavior of the MORPHÉ algorithm:
i. The method improves PSNR results (1.5 dB over standard ARTUR) as well as the visual rendering (see Fig. 5).
ii. The method is robust to the tuning of the λ parameter (see Fig. 6).
iii. The method is well adapted to local noise statistics in the case of a blurry image corrupted by uniform and non-stationary noise.

6. Conclusion

This paper presents a new decoding method which questions the usual Gaussian noise assumption. We proposed to take into account a new constraint (the quantization range) which makes it possible to reconstruct images corrupted by bounded noise such as quantization noise or uniform noise. We also developed MORPHÉ in order to handle the tuning of the µ parameter and to be robust to the tuning of the λ parameter. Our method can be applied to a large number of problems (deblurring of images corrupted by quantization or uniform noise, decoding of compressed images, ...). MORPHÉ outperforms classical methods, which do not consider bounded noise. The results show an appreciable improvement in visual quality, since noise artifacts such as blocking and ringing are removed. We note a PSNR gain over traditional methods of about 8 dB for synthetic images and 1 dB for real-life images.

Annex A

A constrained optimization algorithm can be used to remove artifacts from images in the spatial domain [14]. This method involves the computation of λ. The authors of [14] use a parabolic equation, with time as an evolution parameter, to solve the Euler-Lagrange equations. Thus, we have the following PDE:

f_t = div( ∇f / |∇f| ) − λ(t) ( f − R*p̃ )

λ(t) = − (1 / 2σ_ε²) ∫ [ |∇f| − (∇R*p̃ · ∇f) / |∇f| ]

This procedure removes artifacts such as ringing but may smooth the image too strongly. The second operation, namely the projection onto the transform-domain constraint, then allows the image to become artifact-free. In this way, there are no hyperparameters to estimate. Unfortunately, this projection algorithm is computationally costly and does not guarantee the global minimum of the criterion. Moreover, the results depend on the initial conditions.
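The projection mentioned above, bringing each transform coefficient back onto its quantization cell I = [p̃ − q/2, p̃ + q/2), is a simple clipping; a minimal sketch (assumed names, not the complete evolution-projection algorithm of [10]):

```python
import numpy as np

def project_onto_cells(Rf, p_tilde, q):
    # Bring every transform coefficient back into its quantization interval.
    lo = p_tilde - q / 2.0
    hi = p_tilde + q / 2.0
    return np.clip(Rf, lo, hi)

Rf = np.array([0.3, 1.4, -0.8])
p_tilde = np.zeros(3)                 # decoded values, cells = [-0.5, 0.5)
print(project_onto_cells(Rf, p_tilde, q=1.0))   # [ 0.3  0.5 -0.5]
```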

References
[1] A. Gersho, R. Gray, "Vector Quantization and Signal Compression", Boston: Kluwer Academic Publishers, 1990.
[2] M. Antonini, M. Barlaud, P. Mathieu, and I. Daubechies, "Image Coding Using Wavelet Transform", IEEE Transactions on Image Processing, Vol. 1, No. 2, 1992, pp. 205-220.
[3] P. Raffy, M. Antonini, M. Barlaud, "Zerotree Edge Adaptive Coder for Low Bit Rate Image Transmission", SPIE, Visual Communication and Image Processing, San Jose, USA, 1997.
[4] P. Raffy, M. Antonini, M. Barlaud, "Optimal subband bit allocation procedure for very low bit rate image coding", Electronics Letters, Vol. 34, No. 7, p. 647, 2nd April 1998.
[5] S. Tramini, M. Antonini, M. Barlaud, "Intraframe Image Decoding based on a Nonlinear Variational Approach", International Journal of Imaging Systems and Technology, to be published, 1998.
[6] P. Charbonnier, L. Blanc-Féraud, G. Aubert, M. Barlaud, "Deterministic Edge-Preserving Regularization in Computed Imaging", IEEE Transactions on Image Processing, Vol. 5, No. 12, 1997.
[7] S. Tramini, M. Antonini, M. Barlaud, G. Aubert, "Non-Linear Dynamic Filtering for Image Compression", International Conference on Image Processing, Santa Barbara, California, USA, 1997, Vol. 3.
[8] P.H. Westerink, "Subband Coding of Images", PhD thesis, Delft University of Technology, 1989.
[9] A.V. Fiacco, G.P. McCormick, "Nonlinear Programming: Sequential Unconstrained Minimization Techniques", John Wiley, New York, 1968.
[10] S. Zhong, "Image Coding with Optimal Reconstruction", International Conference on Image Processing, Santa Barbara, California, USA, 1997, Vol. 1, p. 161.
[11] M. Minoux, "Mathematical Programming: Theory and Algorithms", Wiley, New York, 1986.
[12] H. Uzawa, "Iterative methods for concave programming", in: Studies in Linear and Nonlinear Programming, Chap. 10 (Arrow, Hurwicz, Uzawa, eds.), Stanford University Press, 1958.
[13] L.A. Shepp and B.F. Logan, "The Fourier reconstruction of a head section", IEEE Transactions on Nuclear Science, Vol. NS-21, pp. 21-43, 1974.
[14] L. Rudin, S. Osher and E. Fatemi, "Nonlinear total variation based noise removal algorithms", Physica D, 60:259-268, 1992.

Fig.4: Extract of the Blonde image (0.17 bpp). Left to right: linear filtering; image reconstructed by MORPHÉ.


Fig.3: PSNR results vs λ parameter: (a) linear filtering, (b) denoising by ARTUR, (c) denoising by ARTUR under quantization interval constraint, (d) gain additive model, (e) gain additive model under quantization interval penalty (EXTERIOR PENALTY METHOD), (f) gain additive model under quantization interval constraint (UZAWA): MORPHÉ, (g) evolution projection procedure without λ estimation.

Fig.5: Visual results for λ=1. Top row: original, blurry and quantized (q=1), PSNR = 25.96 dB. Middle row: deblurring with ARTUR, PSNR = 30.54 dB, and deblurring with MORPHÉ, PSNR = 35.09 dB. Bottom row: deblurring with adaptive MORPHÉ, PSNR = 35.9 dB, and its adaptive λ.


Fig.6: PSNR results vs. λ parameter. Left to right: quantization noise (q = 2); uniform and stationary noise (A = 1.3); uniform and non-stationary noise (A = 1.3, B = 0.00675, C = 0.0000189). (a) Blurry Barbara image corrupted by the different noises. (b) Deblurring with ARTUR. (c) Deblurring with MORPHÉ.
