Adaptive Fuzzy Color Interpolation

Journal of Electronic Imaging Vol. 11(3), July 2002.

Ping-Sing Tsai (1), Tinku Acharya (1,2), Ajay K. Ray (3)

(1) Intel Corporation, Desktop Architecture Lab, 5000 West Chandler Boulevard, Mail Stop: CH7-212, Chandler, Arizona 85226, USA

(2) Department of Electrical Engineering, Arizona State University, Tempe, Arizona 85287, USA

(3) Department of Electronics and Electrical Communication Engineering, Indian Institute of Technology, Kharagpur, India

Email: [email protected]


Abstract

In an electronic color imaging device, such as a digital camera using a single CCD or CMOS sensor, the color information is usually acquired in sub-sampled patterns of Red (R), Green (G) and Blue (B) pixels. Full resolution color is subsequently generated from this sub-sampled image. This process is popularly called Color Interpolation or Color Demosaicing. In this paper, we present a color interpolation algorithm using a method of fuzzy membership assignment along with the concept of smooth hue transition. The algorithm is adaptive in nature and produces superior quality full resolution color images compared to most of the popularly known color interpolation algorithms in the literature. We present the results of comparison on some challenging sub-sampled images for color interpolation.

Keywords: Color Interpolation, Color Demosaicing, Fuzzy Membership, Hue, Digital Imaging

1. Introduction

Due to cost and packaging considerations, in digital imaging devices such as a digital camera the image color is captured in a sub-sampled pattern. Typically the raw image is captured with each "pixel" location composed of only one of the three primary color components: R (Red), G (Green), or B (Blue). This sub-sampled color image is generated using a certain pattern of a Color Filter Array (CFA). The CFA is realized by coating the surface of the electronic sensor array with an optical material that acts as a band-pass filter. The coating at each pixel location of the sensor array permits only the photons corresponding to one color component (frequency range) to be transmitted to the sensor, while the other two color components are blocked. A typical and widely used CFA pattern is the Bayer pattern [1], shown in Figure 1. It should be noted that there are many other patterns known in the literature; however, we consider only the Bayer pattern in this paper.

      n=1  n=2  n=3  n=4  n=5
m=1    G    R    G    R    G
m=2    B    G    B    G    B
m=3    G    R    G    R    G
m=4    B    G    B    G    B
m=5    G    R    G    R    G

Figure 1: Bayer CFA pattern

A perfect, rather efficient, full color representation of an image needs the information of all three colors at each pixel location. As a result, each pixel needs to be represented by 24-bit color, assuming 8 bits for each of R, G and B. To achieve this, it is essential to interpolate the missing two colors at each pixel location using the information of the neighboring pixels. The methodology to recover or interpolate these missing colors is popularly known as Color Interpolation or Color Demosaicing. We use the term Color Interpolation in this paper, instead of Color Demosaicing. We present a brief review of the color interpolation algorithms existing in the literature in section 2. We propose a new color interpolation algorithm using a fuzzy membership assignment based scheme and smooth hue transition in section 3. We present the experimental results and comparison of our
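For reference, the color sampled at any location of the Bayer pattern in Figure 1 can be computed directly from the pixel indices. The following is a minimal sketch (our own illustration, not part of the original paper), using the 1-based (m, n) indexing of the figure:

```python
def bayer_color(m, n):
    """Color component sampled at 1-based location (m, n) of the Bayer
    CFA shown in Figure 1 (m indexes rows, n indexes columns)."""
    if (m + n) % 2 == 0:
        return 'G'                      # green occupies every other diagonal position
    return 'R' if m % 2 == 1 else 'B'   # red on odd rows, blue on even rows

print(bayer_color(3, 2))  # 'R', matching the red pixel at (3, 2) in Figure 1
```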

proposed technique with other popular techniques in section 4. We conclude this paper in section 5.

2. Related Works

In the past decade, many color interpolation algorithms have been proposed in the literature. This area of research and development is getting more and more attention, thanks to the booming market of digital imaging devices. These color interpolation algorithms can be broadly classified into non-adaptive and adaptive algorithms, as described below.

(1) Non-adaptive algorithms: In non-adaptive color interpolation algorithms, a fixed pattern of computation is applied at every pixel location in the sub-sampled color image in order to recover the missing two color components. Usually, these algorithms are easy to implement, with low cost in terms of computational requirements.

(2) Adaptive algorithms: In adaptive color interpolation algorithms, intelligent processing is applied at every pixel location, based on the characteristics of the image, in order to recover the missing color components. These algorithms yield better results in terms of quality compared with the non-adaptive algorithms. However, effective algorithms in this category are usually more computationally intensive.

We review some algorithms from both categories in order to convey the flavor and characteristics of the color interpolation methodologies proposed in the literature. There are other algorithms that we do not include in this paper, such as Cubic Convolution type methods [2, 3], the Pattern Recognition Interpolation method [4], the Pattern Classification method [5], gradient based methods [6], and so on. In this paper, we review only some basic methods related to the thought process behind our proposed new algorithm. Interested readers are also encouraged to visit the web-site at Stanford University [7].

2.1 Non-adaptive algorithms

• Nearest Neighbor Replication: In this simple color interpolation method [8, 9], each missing color at a pixel is assigned the value of the nearest pixel of the same color in the input image. The nearest neighbor can be any one of the upper, lower, left and right pixels. As discussed by Adams [8], the only advantage of this approach is that the computational requirement is very small, making it suitable for applications where speed is crucial. However, the significant color errors make it unacceptable for a still imaging system, such as a high-resolution digital camera.



• Bilinear Interpolation: Instead of replicating the nearest neighbors, bilinear interpolation [8] assigns the missing color component the linear average of the adjacent pixels of the same color. For example, the pixel location (2, 3) in Figure 1 contains only the BLUE component. Hence the missing GREEN component can be estimated as the average of the left, right, top and bottom GREEN pixel values, and the missing RED component as the average of the four diagonally adjacent corner neighbors containing RED pixels. This method is very simple and can be easily implemented. However, experimental results show that a new kind of pixel artifact, e.g. the "zipper effect", is introduced in the neighborhood of the interpolated pixels in the full color image. This artifact may be acceptable in a video application, because it may not be visible to the human eye due to motion blur between video frames, but such artifacts are not acceptable for a still imaging system.
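The green-channel rule above can be sketched as follows (a minimal sketch of our own; `img` is assumed to hold the single Bayer sample at each location, and only interior pixels are handled):

```python
def bilinear_green_at(img, m, n):
    """Bilinear estimate of the missing green at a red/blue location (m, n):
    the average of the four axial green neighbors."""
    return (img[m-1][n] + img[m+1][n] + img[m][n-1] + img[m][n+1]) / 4.0

bayer = [[ 0, 10,  0],
         [20,  0, 30],
         [ 0, 40,  0]]   # hypothetical 3x3 patch; the center is a non-green pixel
print(bilinear_green_at(bayer, 1, 1))  # (10 + 40 + 20 + 30) / 4 = 25.0
```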


• Median Interpolation: Median interpolation [10] assigns the missing color component the "median" value of the adjacent pixels of the same color, as opposed to the linear average used in bilinear interpolation. This provides a slightly better result in terms of visual quality as compared with bilinear interpolation. However, the resulting images are still blurry for images with high frequency content, and for a high-resolution still imaging system this is still not acceptable.



• Smooth Hue Transition Interpolation: The key cause of the color artifacts in both bilinear and median interpolation is that the hue values of adjacent pixels change suddenly (non-smoothly). On the other hand, the Bayer CFA pattern can be considered a combination of a luminance channel (green pixels) and two chrominance channels (red and blue pixels). The smooth hue transition interpolation method [11] interpolates these channels differently. The missing GREEN component at every RED and BLUE pixel location in the Bayer pattern is first interpolated using bilinear interpolation as discussed before. The idea of chrominance channel interpolation is to impose a smooth transition in hue value from pixel to pixel. To do so, it defines the blue "hue value" as B/G and the red "hue value" as R/G. For interpolation of the missing blue pixel value B_{m,n} at pixel location (m, n) in the Bayer pattern, the following three cases may arise.

Case 1: the pixel location (m, n) contains the GREEN color component only, and the adjacent left and right pixel locations contain the BLUE color component only. For example, the pixel location (2, 2) in Figure 1 contains the GREEN component only, and its left and right neighbors contain only BLUE information. The BLUE information at location (m, n) can be estimated as follows:

B_{m,n} = G_{m,n} * (B_{m,n-1}/G_{m,n-1} + B_{m,n+1}/G_{m,n+1}) / 2.

Case 2: the pixel location (m, n) contains the GREEN color component only, and the adjacent top and bottom pixel locations contain the BLUE color component only. The pixel at location (3, 3) in Figure 1 is such an example. The BLUE information at location (m, n) can be estimated as follows:

B_{m,n} = G_{m,n} * (B_{m-1,n}/G_{m-1,n} + B_{m+1,n}/G_{m+1,n}) / 2.

Case 3: the pixel location (m, n) contains the RED color component only; its four diagonally neighboring corner pixels then contain BLUE color only. For example, the pixel location (3, 2) in Figure 1 contains the RED color component only. The BLUE information at location (m, n) can be estimated as follows:

B_{m,n} = G_{m,n} * (B_{m-1,n-1}/G_{m-1,n-1} + B_{m-1,n+1}/G_{m-1,n+1} + B_{m+1,n-1}/G_{m+1,n-1} + B_{m+1,n+1}/G_{m+1,n+1}) / 4.

The interpolation of missing RED pixel values can be carried out similarly. As mentioned by Adams [8], the definition of "hue value" changes depending on where the interpolation step happens in the image processing chain. For example, if the pixel values are transformed from linear space into logarithmic exposure space before interpolation, then instead of B/G or R/G one can define the "hue value" as B-G or R-G.

This follows from the fact that log(X/Y) = log(X) - log(Y) = X' - Y', where X' and Y' are the logarithmic values of X and Y respectively. Since the linear-to-logarithmic transformation can be done with a simple table look-up, and all the divisions for calculating hue values are replaced by subtractions, this reduces the computational complexity of the implementation.
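Case 1 of the smooth hue transition rule can be sketched as follows (our own illustration; `G` is the already-interpolated full-resolution green plane, `B` holds the sparse blue samples, and the green values are assumed non-zero since the rule divides by them):

```python
def smooth_hue_blue_lr(B, G, m, n):
    """Smooth hue transition, Case 1: blue at a green pixel whose left and
    right neighbors are blue, via the average of the two blue hues B/G."""
    return G[m][n] * (B[m][n-1] / G[m][n-1] + B[m][n+1] / G[m][n+1]) / 2.0

G = [[2.0] * 3 for _ in range(3)]                             # toy green plane
B = [[0.0, 0.0, 0.0], [4.0, 0.0, 8.0], [0.0, 0.0, 0.0]]       # sparse blue samples
print(smooth_hue_blue_lr(B, G, 1, 1))  # 2 * (4/2 + 8/2) / 2 = 6.0
```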

2.2 Adaptive algorithms

• Pattern Matching based Interpolation Algorithm: In the Bayer pattern, a BLUE or RED pixel has four neighboring GREEN pixels. A simple pattern matching technique for reconstructing the missing color components based on the pixel contexts was proposed by Wu, et al. [12]. This pattern matching algorithm defines a green pattern for the pixel at location (m, n) containing a non-GREEN color component as a four-dimensional integer-valued vector:

g(m,n) = (G_{m-1,n}, G_{m+1,n}, G_{m,n-1}, G_{m,n+1}).

The similarity (or difference) between two green patterns g1 and g2 is defined as the vector 1-norm

||g1 - g2|| = sum_{0 <= i < 4} |g1_i - g2_i|.

When the difference between two green patterns is small, it is likely that the two pixel locations where the two green patterns are defined will have similar RED and BLUE color components. A weighted average proportional to the degree of similarity of the green patterns is used to calculate the missing color component based on the green pattern contexts. For example, the missing BLUE color value B_{m,n} at a pixel location (m, n) containing only the RED color component is estimated by comparing the green pattern g(m,n) with the four neighboring green patterns g(m-1,n-1), g(m+1,n-1), g(m-1,n+1) and g(m+1,n+1). If all the differences between g(m,n) and the other four green patterns are uniformly small, then a simple average is used to estimate the missing BLUE color component:

B_{m,n} = (B_{m-1,n-1} + B_{m+1,n-1} + B_{m-1,n+1} + B_{m+1,n+1}) / 4.

Otherwise, when the largest difference is above a certain threshold, only the two best-matched green patterns are used. If ||g(m,n) - g(m-1,n-1)|| and ||g(m,n) - g(m+1,n-1)|| are the two smallest differences, then the missing BLUE color is estimated as follows:

B_{m,n} = (||g(m,n) - g(m+1,n-1)|| * B_{m-1,n-1} + ||g(m,n) - g(m-1,n-1)|| * B_{m+1,n-1}) / (||g(m,n) - g(m-1,n-1)|| + ||g(m,n) - g(m+1,n-1)||).

The missing RED color values can be estimated analogously. This algorithm is simple and efficient. However, as pointed out by Wu, et al. [12], the quality of the reconstructed images is still undesirable.
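The selection logic above can be sketched as follows (our own illustration; each candidate pairs a green-pattern 1-norm difference with the blue sample at that diagonal neighbor, `T` is a hypothetical threshold, and at least one of the two smallest differences is assumed positive in the weighted branch):

```python
def pattern_match_blue(cands, T):
    """cands: four (diff, blue) pairs for the diagonal neighbors, where diff is
    the 1-norm distance between green patterns. Average all four when every
    pattern matches well; otherwise cross-weight the two best matches."""
    if max(d for d, _ in cands) <= T:
        return sum(b for _, b in cands) / 4.0
    (d1, b1), (d2, b2) = sorted(cands)[:2]   # the two smallest differences
    return (d2 * b1 + d1 * b2) / (d1 + d2)   # closer pattern gets more weight

cands = [(1, 10), (3, 20), (5, 30), (7, 40)]
print(pattern_match_blue(cands, 10))  # all match well -> plain average 25.0
print(pattern_match_blue(cands, 2))   # two best only -> (3*10 + 1*20)/4 = 12.5
```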

• Block Matching Based Algorithm: Similar to the pattern matching algorithm described previously, the block matching algorithm by Acharya, et al. [13] defines the "Color Block" of a non-green pixel as the vector formed by the four neighboring green pixels. However, instead of using a vector norm to measure the similarity, a new metric, "Color Gravity", is defined as the mean value of the four vector components. Let x = (x1, x2, x3, x4) be the vector of a Color Block; then the Color Gravity of this color block is cg(x) = (x1 + x2 + x3 + x4)/4. The similarity between two color blocks is defined as the absolute difference of the two color gravity values. Also, instead of using a weighted average proportional to the similarity of the green patterns for estimating the missing color values, the block matching algorithm selects the neighboring block whose Color Gravity is closest to the Color Gravity of the color block under consideration.

For a non-green pixel in the Bayer pattern image, there are four neighboring green pixels Gn (the North neighbor), Gs (South), Ge (East), and Gw (West), which form the Color Block g = (Gn, Gs, Ge, Gw) with Color Gravity cg(g). The missing GREEN value G is simply computed as the median of the four neighboring green pixel values. If the pixel location contains a BLUE color value, it has four diagonal RED pixels Rne (North-East), Rse (South-East), Rsw (South-West), and Rnw (North-West), whose color blocks are g_ne, g_se, g_sw, and g_nw, with corresponding color gravities cg(g_ne), cg(g_se), cg(g_sw), and cg(g_nw) respectively. The missing RED value R is taken from one of the four diagonal red pixels based on the best match of their color gravity values. The best match, or minimal difference ∆min, is the minimum among the four absolute differences ∆1 = |cg(g) - cg(g_ne)|, ∆2 = |cg(g) - cg(g_se)|, ∆3 = |cg(g) - cg(g_sw)|, and ∆4 = |cg(g) - cg(g_nw)|. Similarly, we can estimate the missing BLUE color value if the pixel location contains a RED color value, due to the symmetry of the red and blue sampling positions in a Bayer pattern image. For a green pixel location, only two color blocks (either the top-bottom or the left-right positions) are considered for the missing RED or BLUE color, and a similar operation is carried out. The algorithm can be described as follows:

Begin
  if the pixel location is not GREEN then {
    G <- median{ Gn, Gs, Ge, Gw };
    compute ∆1 = |cg(g) - cg(g_ne)|, ∆2 = |cg(g) - cg(g_se)|, ∆3 = |cg(g) - cg(g_sw)|, ∆4 = |cg(g) - cg(g_nw)|;
    find ∆min = min{ ∆1, ∆2, ∆3, ∆4 };
    if ( ∆min = ∆1 ) then R <- Rne if Red is missing, B <- Bne if Blue is missing;
    if ( ∆min = ∆2 ) then R <- Rse if Red is missing, B <- Bse if Blue is missing;
    if ( ∆min = ∆3 ) then R <- Rsw if Red is missing, B <- Bsw if Blue is missing;
    if ( ∆min = ∆4 ) then R <- Rnw if Red is missing, B <- Bnw if Blue is missing;
  };
  if the pixel location is GREEN then {
    compute ∆u = |G - cg(g_u)|, ∆b = |G - cg(g_b)|, ∆l = |G - cg(g_l)|, ∆r = |G - cg(g_r)|;
    if ( ∆u < ∆b ) then B <- Bu else B <- Bb;
    if ( ∆l < ∆r ) then R <- Rl else R <- Rr;
  }
End.

This method provides a much sharper image as compared with the simple median or bilinear interpolation type methods and the simple pattern matching method. However, since this method does not consider smooth hue transition, color bleeding artifacts may still be a problem for some images, such as an image of a Zebra.

• Edge Sensing Interpolation: Depending on luminance gradients, different predictors are used for the missing GREEN values in the edge sensing interpolation method [8, 14]. First, two gradients are defined for each RED-only or BLUE-only pixel location, one in the horizontal direction and the other in the vertical direction. For instance, consider the pixel "b8" as shown in Figure 2. We define the two gradients as

∆H = |g7 - g9| and ∆V = |g3 - g13|,

where |x| denotes the absolute value of x. Based on these gradient values and a certain threshold T, the interpolation algorithm can then be described as follows:

if ∆H < T and ∆V > T then
  G8 = (g7 + g9)/2;
else if ∆H > T and ∆V < T then
  G8 = (g3 + g13)/2;
else
  G8 = (g3 + g7 + g9 + g13)/4;
endif

g1    r2    g3    r4    g5
b6    g7    b8    g9    b10
g11   r12   g13   r14   g15
b16   g17   b18   g19   b20
g21   r22   g23   r24   g25

Figure 2

A slightly different edge sensing interpolation algorithm is described in [15]. Instead of luminance gradients, chrominance gradients are used. The two gradients (refer to Figure 3 below) are defined as:

∆H = |b5 - (b3 + b7)/2|,  ∆V = |b5 - (b1 + b9)/2|.


b1    g2    b3
g4    b5    g6
b7    g8    b9

Figure 3
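Both edge sensing variants follow the same select-a-direction pattern. The threshold form for G8 (using the pixel labels of Figure 2) can be sketched as follows, a minimal illustration of our own:

```python
def edge_sensing_green(g3, g7, g9, g13, T):
    """Edge sensing estimate of the missing green G8 at pixel b8 of Figure 2.
    A small gradient in one direction (with a large one in the other) means the
    image is smooth along that direction, so we interpolate along it."""
    dH, dV = abs(g7 - g9), abs(g3 - g13)
    if dH < T and dV > T:
        return (g7 + g9) / 2.0            # smooth horizontally
    if dH > T and dV < T:
        return (g3 + g13) / 2.0           # smooth vertically
    return (g3 + g7 + g9 + g13) / 4.0     # no dominant direction

print(edge_sensing_green(100, 50, 52, 0, 10))  # strong vertical change -> 51.0
```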

• Linear Interpolation with Laplacian Second-Order Correction Terms: This algorithm [16] is designed to optimize performance, in terms of the visual quality of the interpolated image, when applied to images with sharp edges. We illustrate this algorithm using an example. The missing color components are estimated by the following steps.

- The first step in this algorithm is to estimate the missing GREEN color components at the pixel locations containing only the RED or BLUE color component. Let us consider estimating the green value at a blue pixel location (using Figure 3) as an example; interpolation at a red pixel location can be done in a similar fashion. Let us find the missing GREEN component (G5) at pixel location "b5". We define horizontal and vertical gradients at this pixel location as follows:

∆H = |g4 - g6| + |(b5 - b3) - (b7 - b5)|,
∆V = |g2 - g8| + |(b5 - b1) - (b9 - b5)|.

Intuitively, we can consider ∆H and ∆V above as combinations of the luminance gradient and the chrominance gradient described in the edge sensing interpolation algorithm of the previous section. In the expression for ∆H, as an example, the first term |g4 - g6| is the first-order difference of the neighboring green values, considered to be the luminance gradient, and the second term |(b5 - b3) - (b7 - b5)| is the second-order derivative of the neighboring blue values, considered to be the chrominance gradient. Using these two gradient values, the missing green component G5 at pixel location "b5" is estimated as follows:

if ∆H < ∆V then
  G5 = (g4 + g6)/2 + (-b3 + 2*b5 - b7)/4;
else if ∆H > ∆V then
  G5 = (g2 + g8)/2 + (-b1 + 2*b5 - b9)/4;
else
  G5 = (g2 + g4 + g6 + g8)/4 + (-b1 - b3 + 4*b5 - b7 - b9)/8;
endif

The interpolation step for G5 has two parts. The first part is the linear average of the neighboring green values, and the second part can be considered a second-order correction term based on the neighboring blue (red) values.
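The green estimation step above (Figure 3 labels) can be sketched as follows, a minimal illustration of our own:

```python
def laplacian_green_at_b5(g2, g4, g6, g8, b1, b3, b5, b7, b9):
    """Green at the blue pixel b5 of Figure 3: a linear average of two green
    neighbors plus a second-order (Laplacian) correction from the blue samples."""
    dH = abs(g4 - g6) + abs((b5 - b3) - (b7 - b5))
    dV = abs(g2 - g8) + abs((b5 - b1) - (b9 - b5))
    if dH < dV:
        return (g4 + g6) / 2.0 + (-b3 + 2 * b5 - b7) / 4.0
    if dH > dV:
        return (g2 + g8) / 2.0 + (-b1 + 2 * b5 - b9) / 4.0
    return (g2 + g4 + g6 + g8) / 4.0 + (-b1 - b3 + 4 * b5 - b7 - b9) / 8.0
```

On a perfectly flat patch both gradients vanish and the correction terms are zero, so the estimate reduces to a plain average of the green neighbors.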


- The missing red (or blue) color components are estimated at every pixel location after the missing green components have been estimated everywhere. Depending on the position (refer to Figure 4 below), we have three cases:

r1    g2    r3
g4    b5    g6
r7    g8    r9

Figure 4

1. Estimate the red (blue) color component at a green pixel whose nearest red (blue) neighbors are in the same column, e.g. pixel location "g4" as shown in Figure 4 above. We estimate the red component R4 at pixel location g4 as follows:

R4 = (r1 + r7)/2 + ((g4 - G1) + (g4 - G7))/4

2. Estimate the red (blue) color component at a green pixel whose nearest red (blue) neighbors are in the same row, e.g. pixel location "g2" as shown in Figure 4. We estimate the red component R2 at pixel location g2 as follows:

R2 = (r1 + r3)/2 + ((g2 - G1) + (g2 - G3))/4

3. Estimate the red (blue) color component at a blue (red) pixel. For instance, estimate the red component R5 at pixel location "b5" as shown in Figure 4. Here we first define two diagonal gradients as follows:

∆N = |r1 - r9| + |G5 - G1| + |G5 - G9|,
∆P = |r3 - r7| + |G5 - G3| + |G5 - G7|.

Using these diagonal gradients, the algorithm for estimating the missing color component is described as:

if ∆N < ∆P then
  R5 = (r1 + r9)/2 + (-G1 + 2*G5 - G9)/2;
else if ∆N > ∆P then
  R5 = (r3 + r7)/2 + (-G3 + 2*G5 - G7)/2;
else
  R5 = (r1 + r3 + r7 + r9)/4 + (-G1 - G3 + 4*G5 - G7 - G9)/4;
endif

This method provides much better visual quality for reconstructed images containing many sharp edges. However, the second-order derivative used in calculating the gradients makes the algorithm quite sensitive to noise. Since only the color information in one direction (vertical, horizontal, or one of the diagonal directions, selected by the gradient information) is used for interpolation, we believe it is still possible to further improve the visual quality of the reconstructed image.


3. Proposed Fuzzy Assignment Based Adaptive Method

3.1 Fuzzy membership assignment strategy

In our proposed approach, we assign a fuzzy membership [17, 18, 19] to each of the four connected neighboring pixels (G1, G2, G3, G4, as shown in Figure 5) in order to estimate the missing color component (say green) at a pixel location (R in Figure 5). The membership assignment strategy differs depending upon the characteristics of a possible edge at the pixel location. For example, let us consider estimation of the missing green color component at a pixel location containing R, as shown in Figure 5.

B     G3    B
G1    R     G2
B     G4    B

Figure 5

Depending upon the correlation among the surrounding pixels, we have formulated a strategy to assign membership grades to the surrounding horizontal and vertical pixels. The membership grades have been derived experimentally through exhaustive subjective visual inspection, taking into consideration an exhaustive set of images with possible edges in the horizontal and vertical directions. We have considered the following four cases, in which there is a possible edge along the horizontal direction. In a like manner, we have also considered the cases where there are possible edges in the vertical direction.

Case 1: |G1 - G2| is small while |G3 - G4| is arbitrarily large, subject to the condition that |G3 - G4| - |G1 - G2| >> 0. Here we have assumed the existence of a horizontal edge, while the horizontal neighboring pixels G1 and G2 have approximately the same intensity.

Case 2: |G1 - G2| is small, |G3 - G4| is arbitrary, and G1 ≈ G2 ≈ G4. In this case also there is clearly a possible edge at the pixel location R, and the intensity of this edge depends upon the surrounding pixel values G3 and G4.

Case 3: This case is similar to case 2, with the difference that here the pixels G1, G2 and G3 are of approximately similar intensity, i.e. |G1 - G2| is small, |G3 - G4| is arbitrary, and G1 ≈ G2 ≈ G3.


Case 4: In this case we have considered that the four connected neighboring pixels G1, G2, G3 and G4 are all different, subject to the condition that |G3 - G4| - |G1 - G2| >> 0.

In each of the above cases we have computed the horizontal and vertical membership functions, and a similar logic has been applied to images having an edge in the vertical direction. While computing the membership assignments, we estimated the likelihood of occurrence of each of the four cases described above using a large number of images. We also observed that the assumption of equal a-priori probabilities of occurrence of the different types of edges, viz. horizontal and vertical edges, is not quite true. Thus we also found the likelihood of occurrence of each type of edge from our database images. Finally, from the average of these membership grades, a fuzzy membership value of 0.5 has been assigned to the horizontal green pixel values G1 and G2. Similarly, the two vertical green pixel values G3 and G4 have been assigned a membership grade of 0.1. We have observed that this assignment of membership values results in very good color demosaicing, yielding visually pleasing color reconstruction. The membership assignment was decided after performing exhaustive experiments on a large number of images. On the basis of the above discussion, the missing green value at pixel location R can be interpolated as follows:

Missing G = (0.5*G1 + 0.5*G2 + 0.1*G3 + 0.1*G4) / (0.5 + 0.5 + 0.1 + 0.1)
          = 0.8333 * (G1 + G2)/2 + 0.1667 * (G3 + G4)/2

Using this fuzzy membership assignment as a weighted-average tool for missing color interpolation, we can fully utilize all the neighboring information for estimating the missing color.
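The weighted average above can be sketched as follows (our own illustration; the default grades 0.5/0.1 are the horizontal-edge assignment from the text, and the roles swap for a vertical edge):

```python
def fuzzy_green(g1, g2, g3, g4, w_hor=0.5, w_ver=0.1):
    """Membership-weighted estimate of the missing green at R (Figure 5):
    horizontal neighbors g1, g2 carry grade w_hor, vertical g3, g4 carry w_ver."""
    return (w_hor * (g1 + g2) + w_ver * (g3 + g4)) / (2 * w_hor + 2 * w_ver)

print(fuzzy_green(12, 12, 0, 0))  # (0.5*24)/1.2, i.e. 0.8333*12 + 0.1667*0, about 10
```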

3.2 Proposed three-step interpolation algorithm

The proposed interpolation algorithm is a three-step algorithm, as summarized below.

Step 1: Estimation of all missing Green values. After completion of this step, every pixel location has a value for the green color component.

Step 2: Estimation of the missing Blue (Red) color component at each pixel location containing only the Red (Blue) color component. The green values estimated in the previous step are used in this step. The decision is based on the change of hue values.

Step 3: Estimation of the missing Red and Blue at green pixels. The Red/Blue values estimated at blue/red pixels in step 2 are utilized for interpolation of the missing Red and Blue at green pixels.

The details of the above steps in the proposed color interpolation algorithm are described below. It should be noted that by lower case r, g or b we represent the red (r), green (g) or blue

(b) values present in each pixel of the sub-sampled Bayer pattern color image, and by upper case R, G and B we denote the estimated values of Red (R), Green (G) and Blue (B) at each pixel location.

Step 1: Estimation of all missing Green values

From the Bayer pattern, the arrangement of the neighboring pixels in a 5x5 window with a red pixel at the center is shown below. We estimate the missing green color component at this pixel location as follows.

r_{m-2,n-2}  g_{m-2,n-1}  r_{m-2,n}  g_{m-2,n+1}  r_{m-2,n+2}
g_{m-1,n-2}  b_{m-1,n-1}  g_{m-1,n}  b_{m-1,n+1}  g_{m-1,n+2}
r_{m,n-2}    g_{m,n-1}    r_{m,n}    g_{m,n+1}    r_{m,n+2}
g_{m+1,n-2}  b_{m+1,n-1}  g_{m+1,n}  b_{m+1,n+1}  g_{m+1,n+2}
r_{m+2,n-2}  g_{m+2,n-1}  r_{m+2,n}  g_{m+2,n+1}  r_{m+2,n+2}

First, we estimate two parameters in terms of changes in the hue values, one in the horizontal direction and the other in the vertical direction, using the following two equations:

C_hor ≡ (R_{m,n+1} - G_{m,n+1}) - (R_{m,n-1} - G_{m,n-1})
      ≈ ((r_{m,n+2} + r_{m,n})/2 - g_{m,n+1}) - ((r_{m,n} + r_{m,n-2})/2 - g_{m,n-1})
      = (1/2) * (-r_{m,n-2} + 2g_{m,n-1} - 2g_{m,n+1} + r_{m,n+2})

C_ver ≡ (R_{m+1,n} - G_{m+1,n}) - (R_{m-1,n} - G_{m-1,n})
      ≈ ((r_{m+2,n} + r_{m,n})/2 - g_{m+1,n}) - ((r_{m,n} + r_{m-2,n})/2 - g_{m-1,n})
      = (1/2) * (-r_{m-2,n} + 2g_{m-1,n} - 2g_{m+1,n} + r_{m+2,n})

As mentioned earlier for the smooth hue transition interpolation, if the pixel values have already been transformed from linear exposure space into logarithmic exposure space, one can define the "RED hue value" as R - G. Where R is not available, we use a simple average of the neighboring r values to approximate it. As shown in the above two equations, the two changes of hue value, C_hor and C_ver, can easily be obtained by filtering at pixel location (m, n) in the input Bayer pattern image with a five-tap filter (-0.5, 1, 0, -1, 0.5). Then, depending upon the values of these two parameters, different fuzzy membership numbers are used as weighting factors to estimate the missing Green value, as described in the following "if-then-else" statement.


if ( |C_hor| < |C_ver| ) then
  G_{m,n} = 0.8333 * I_hor + 0.1667 * I_ver;
else if ( |C_ver| < |C_hor| ) then
  G_{m,n} = 0.1667 * I_hor + 0.8333 * I_ver;
else
  G_{m,n} = 0.5 * I_hor + 0.5 * I_ver;
endif

where the two variables I_hor and I_ver are estimated as shown in equations (1) and (2) below, based on the assumption of smooth hue transition as discussed earlier, with a weighting factor (0.5) in the second term to reduce sensitivity to noise in the image:

I_hor = (g_{m,n-1} + g_{m,n+1})/2 + 0.5 * (-r_{m,n-2} + 2r_{m,n} - r_{m,n+2})/4    (1)
I_ver = (g_{m-1,n} + g_{m+1,n})/2 + 0.5 * (-r_{m-2,n} + 2r_{m,n} - r_{m+2,n})/4    (2)
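Step 1 as a whole can be sketched on a 5x5 window (our own illustration; `win` is a plain list-of-lists of Bayer samples centered on a red pixel, indexed by offsets from the center, with equations (1) and (2) used for I_hor and I_ver and "horizontal" taken along the column index n):

```python
def step1_green_at_red(win):
    """Proposed Step 1: estimate the missing green at the red pixel in the
    center of a 5x5 Bayer window win (win[2][2] = r_{m,n})."""
    p = lambda di, dj: win[2 + di][2 + dj]   # sample at offset (di, dj)
    # changes of hue value via the 5-tap filter (-0.5, 1, 0, -1, 0.5)
    c_hor = 0.5 * (-p(0, -2) + 2 * p(0, -1) - 2 * p(0, 1) + p(0, 2))
    c_ver = 0.5 * (-p(-2, 0) + 2 * p(-1, 0) - 2 * p(1, 0) + p(2, 0))
    i_hor = (p(0, -1) + p(0, 1)) / 2.0 + 0.5 * (-p(0, -2) + 2 * p(0, 0) - p(0, 2)) / 4.0
    i_ver = (p(-1, 0) + p(1, 0)) / 2.0 + 0.5 * (-p(-2, 0) + 2 * p(0, 0) - p(2, 0)) / 4.0
    if abs(c_hor) < abs(c_ver):
        return 0.8333 * i_hor + 0.1667 * i_ver
    if abs(c_ver) < abs(c_hor):
        return 0.1667 * i_hor + 0.8333 * i_ver
    return 0.5 * (i_hor + i_ver)
```

On a constant patch both hue changes vanish and the estimate equals the patch value.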

The two scaling factors, 0.8333 and 0.1667, in the above expressions have been derived experimentally based on the fuzzy membership assignment strategy. A similar strategy is applied for estimation of the missing Green value at the blue pixels in the Bayer pattern image.

Step 2: Estimation of the missing Blue/Red value at a red/blue pixel

From the Bayer pattern, the arrangement of the neighboring pixels in a 3x3 window with a red pixel at the center is shown below.

b_{m-1,n-1}  g_{m-1,n}  b_{m-1,n+1}
g_{m,n-1}    r_{m,n}    g_{m,n+1}
b_{m+1,n-1}  g_{m+1,n}  b_{m+1,n+1}

The hue values of the four corner pixels in the window, and the differences of hues along the diagonals, are estimated using the following equations:

hue_nw = b_{m-1,n-1} - G_{m-1,n-1};
hue_sw = b_{m+1,n-1} - G_{m+1,n-1};
hue_ne = b_{m-1,n+1} - G_{m-1,n+1};
hue_se = b_{m+1,n+1} - G_{m+1,n+1};
hue_md = |hue_nw - hue_se|;

hue_sd = |hue_ne - hue_sw|;

where each G_{m+i,n+j} around r_{m,n} has been estimated in Step 1. Here hue_md and hue_sd indicate the differences of hues along the two diagonals. The procedure for estimating the missing Blue value at the red pixel is shown below:

if ( hue_md < hue_sd ) then
  B_{m,n} = G_{m,n} + 0.8333 * ((b_{m-1,n-1} - G_{m-1,n-1}) + (b_{m+1,n+1} - G_{m+1,n+1}))/2
                    + 0.1667 * ((b_{m-1,n+1} - G_{m-1,n+1}) + (b_{m+1,n-1} - G_{m+1,n-1}))/2;
else
  B_{m,n} = G_{m,n} + 0.1667 * ((b_{m-1,n-1} - G_{m-1,n-1}) + (b_{m+1,n+1} - G_{m+1,n+1}))/2
                    + 0.8333 * ((b_{m-1,n+1} - G_{m-1,n+1}) + (b_{m+1,n-1} - G_{m+1,n-1}))/2;

endif

Estimation of the missing Red value at a blue pixel may be obtained similarly.

Step 3: Estimation of the missing Blue/Red value at a green pixel

From the Bayer pattern, the arrangement of the neighboring pixels in a 3x3 window with a green pixel at the center is shown below.

g_{m-1,n-1}  b_{m-1,n}  g_{m-1,n+1}
r_{m,n-1}    g_{m,n}    r_{m,n+1}
g_{m+1,n-1}  b_{m+1,n}  g_{m+1,n+1}

Here the four hue values adjacent to the center pixel in the above window, and the differences of hues along the horizontal and vertical directions, are estimated using the following equations:

hue_n = b_{m-1,n} - G_{m-1,n};
hue_e = B_{m,n+1} - G_{m,n+1};
hue_w = B_{m,n-1} - G_{m,n-1};
hue_s = b_{m+1,n} - G_{m+1,n};
hue_hor = |hue_e - hue_w|;
hue_ver = |hue_n - hue_s|;

Both the color values G_{m+i,n+j} and B_{m+i,n+j} around the pixel location g_{m,n} in the above equations have been estimated in the previous steps. Here hue_hor and hue_ver indicate the differences of hues along

14

Journal of Electronic Imaging Vol. 11(3), July 2002. the horizontal and vertical directions. The procedure for estimation of the missing Blue value in the green pixel is shown below. if ( hue hor < hue ver ) then

Bm , n = gm , n + 0.8333 * ((Bm , n −1 − Gm , n −1 ) + (Bm , n +1 − Gm , n +1 )) 2

(

(

+ 0.1667 * (bm −1, n − Gm −1, n ) + b m +1,n − Gm +1, n

)) 2 ;

else

Bm , n = gm , n + 0.1667 * ((Bm , n −1 − Gm , n −1 ) + (Bm , n +1 − Gm , n +1 )) 2

(

(

+ 0.8333 * (bm −1, n − Gm −1, n ) + b m +1,n − Gm +1, n

)) 2 ;

endif Similarly, the missing Red color component is estimated in each pixel location containing only the green color component in the Bayer pattern image.
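As a concrete illustration, the step-2 estimation above can be sketched in Python. The array and function names here are illustrative, not from the paper; `cfa` holds the raw Bayer samples and `G` the full green plane already estimated in step 1:

```python
import numpy as np

# Fuzzy membership weights used in the estimation (5/6 and 1/6).
W_HI, W_LO = 0.8333, 0.1667

def blue_at_red(cfa, G, m, n):
    """Estimate the missing Blue value at a red pixel (m, n).

    The hue (b - G) is averaged along each diagonal, and the diagonal
    with the smaller hue difference receives the larger fuzzy weight.
    """
    hue_nw = cfa[m - 1, n - 1] - G[m - 1, n - 1]
    hue_se = cfa[m + 1, n + 1] - G[m + 1, n + 1]
    hue_ne = cfa[m - 1, n + 1] - G[m - 1, n + 1]
    hue_sw = cfa[m + 1, n - 1] - G[m + 1, n - 1]

    main = (hue_nw + hue_se) / 2.0  # main-diagonal hue average
    sec = (hue_ne + hue_sw) / 2.0   # secondary-diagonal hue average

    if abs(hue_nw - hue_se) < abs(hue_ne - hue_sw):  # hue_md < hue_sd
        return G[m, n] + W_HI * main + W_LO * sec
    return G[m, n] + W_LO * main + W_HI * sec
```

The Red value at a blue pixel follows the same routine with the roles of red and blue swapped, and step 3 follows the same pattern with horizontal/vertical neighbors in place of the diagonals.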

4. Experimental Results

Since it is difficult to grab sub-sampled Bayer pattern images directly from the digital cameras available in the market, we synthetically generated the sub-sampled Bayer pattern images from high-quality 24-bit full color RGB images by simply dropping two of the three color values at each pixel location. We generated more than 25 such test images of different types and content characteristics. In this paper, however, we report the performance of the interpolation algorithms on four Bayer pattern images synthetically generated from the four original full color test images shown in Figure 6. We have carefully chosen these challenging images of different characteristics to demonstrate the performance of different color interpolation algorithms. As shown in Figure 6(a), the STAR image contains many high-frequency patterns in the form of sharp black and white edges at different angles. As shown in Figure 6(b), ZEBRA is an image with black and white stripes in the scene. As shown in Figure 6(c), the TOWN image contains many sharp and colorful edges, and the NEWENG image in Figure 6(d) is a natural outdoor scene.

We compared the performance of the proposed color interpolation algorithm with three methods: bilinear interpolation, the block matching based algorithm by Acharya et al. [13], and the combination of Smooth Hue Transition and Edge Sensing interpolation described in Section 2. The reconstructed full color images using these algorithms are shown in Figures 7-11. For all of these images, our proposed algorithm produces much better results in terms of both subjective quality and the objective measure of peak signal-to-noise ratio (PSNR). Table 1 shows the PSNR results for these test images.
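For reproducibility, the test-image generation and the PSNR measure can be sketched as follows. The Bayer tiling chosen here (G/R on even rows, B/G on odd rows) is one common variant and the function names are illustrative; the PSNR is the standard 8-bit definition, which the paper does not restate:

```python
import numpy as np

def bayer_subsample(rgb):
    """Keep one color sample per pixel, dropping the other two,
    to simulate a Bayer CFA (G R / B G tiling assumed here)."""
    h, w, _ = rgb.shape
    cfa = np.zeros((h, w), dtype=rgb.dtype)
    cfa[0::2, 0::2] = rgb[0::2, 0::2, 1]  # green on even rows, even cols
    cfa[0::2, 1::2] = rgb[0::2, 1::2, 0]  # red on even rows, odd cols
    cfa[1::2, 0::2] = rgb[1::2, 0::2, 2]  # blue on odd rows, even cols
    cfa[1::2, 1::2] = rgb[1::2, 1::2, 1]  # green on odd rows, odd cols
    return cfa

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio for 8-bit image channels, in dB."""
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    mse = np.mean(diff ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak * peak / mse)
```

Each reconstructed channel is compared against the corresponding channel of the original full color image, giving the per-channel figures reported in Table 1.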
As we can see, the fuzzy color interpolation algorithm gains up to about 10 dB over simple bilinear interpolation on the STAR image, and on average our new method achieves more than 5 dB improvement over the other three methods we tested. The STAR image is very useful for testing color interpolation algorithms: the blurring and color bleeding artifacts produced by bilinear interpolation, the Block Matching method, and the combination of Smooth Hue Transition and Edge Sensing interpolation are easily seen in Figure 7(b), (c) and (d). The proposed fuzzy color interpolation algorithm produces the best result in terms of visual quality with respect to those blurring and color bleeding artifacts, as shown in Figure 7(a). Figure 8 shows a zoomed version of the results in Figure 7. In Figures 9-11, we show three further sets of reconstructed images (ZEBRA, TOWN, NEWENG) using the same algorithms.

Table 1: PSNR comparison of different algorithms

Image     Channel   Bilinear       Block       Smooth Hue +   Proposed
                    interpolation  Matching    Edge Sensing   method
STAR      R         20.926         21.623      24.256         32.356
          G         24.610         27.138      27.466         32.163
          B         21.128         21.671      24.513         32.874
ZEBRA     R         25.315         25.546      29.168         35.670
          G         29.074         30.261      30.348         35.316
          B         25.448         25.620      28.919         35.599
TOWN      R         26.365         26.448      28.923         34.281
          G         30.713         31.812      31.239         35.167
          B         26.482         26.556      28.568         34.236
NEWENG    R         25.307         24.715      27.123         31.963
          G         28.903         28.929      28.210         32.259
          B         25.445         24.814      26.815         31.187
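As a quick sanity check, the average gains quoted in the text can be recomputed from the values in Table 1 (transcribed directly below):

```python
# Per-channel PSNR (dB) from Table 1, ordered: bilinear, block matching,
# smooth hue + edge sensing, proposed method.
table = {
    "STAR":   [(20.926, 21.623, 24.256, 32.356),
               (24.610, 27.138, 27.466, 32.163),
               (21.128, 21.671, 24.513, 32.874)],
    "ZEBRA":  [(25.315, 25.546, 29.168, 35.670),
               (29.074, 30.261, 30.348, 35.316),
               (25.448, 25.620, 28.919, 35.599)],
    "TOWN":   [(26.365, 26.448, 28.923, 34.281),
               (30.713, 31.812, 31.239, 35.167),
               (26.482, 26.556, 28.568, 34.236)],
    "NEWENG": [(25.307, 24.715, 27.123, 31.963),
               (28.903, 28.929, 28.210, 32.259),
               (25.445, 24.814, 26.815, 31.187)],
}
rows = [r for img in table.values() for r in img]
for i, name in enumerate(["bilinear", "block matching", "hue + edge sensing"]):
    gain = sum(r[3] - r[i] for r in rows) / len(rows)
    print(f"mean gain over {name}: {gain:.2f} dB")
```

Averaged over all twelve channels, the proposed method exceeds each of the three competing methods by more than 5 dB, consistent with the claim in the text.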


5. Conclusion and Summary

Due to cost and packaging considerations, digital imaging devices such as digital cameras use only a single electronic sensor, and the need for color interpolation will remain critical until other technologies, such as multi-channel color moiré-free sensors [20], mature. In this paper, we presented a new color interpolation algorithm for Bayer pattern sub-sampled color images. The proposed algorithm uses fuzzy membership assignment as a weighting factor, along with the concept of smooth hue transition, to estimate the missing colors at each pixel. This algorithm significantly improves the overall visual quality of the reconstructed color images. The experimental results show that the algorithm preserves colors at edges with minimal or no visual artifacts. We presented objective quality metrics in terms of PSNR to show the performance of the algorithm on four images that are challenging for color interpolation.

Acknowledgement

The authors solemnly acknowledge the contribution of the late A. K. V. Subba Rao for his dedicated work in the area of color interpolation and his collaboration in the early phase of this research.

References

1. Bryce E. Bayer, "Color imaging array," U.S. Patent 3,971,065, Eastman Kodak Company, 1976.
2. Don P. Mitchell et al., "Reconstruction Filters in Computer Graphics," Computer Graphics (SIGGRAPH'88 Proceedings), Vol. 22, No. 4, pp. 221-228, August 1988.
3. Hsieh S. Hou et al., "Cubic Splines for Image Interpolation and Digital Filtering," IEEE Transactions on Acoustics, Speech and Signal Processing, Vol. ASSP-26, pp. 508-517, 1978.
4. David R. Cok, "Signal processing method and apparatus for sampled image signals," U.S. Patent 4,630,307, Eastman Kodak Company, 1986.
5. Eiichi Shimizu et al., "The Digital Camera Using New Compression and Interpolation Algorithm," IS&T 49th Annual Conference, pp. 268-27, 1996.
6. Tomasz A. Matraszek, David R. Cok, and Robert T. Gray, "Gradient based method for providing values for unknown pixels in a digital image," U.S. Patent 5,875,040, Eastman Kodak Company, 1999.
7. http://www-ise.stanford.edu/class/psych221/99
8. James E. Adams, Jr., "Interactions between color plane interpolation and other image processing functions in electronic photography," Proceedings of the SPIE Electronic Imaging Conference, Vol. 2416, pp. 144-151, 1995.
9. Tadashi Sakamoto, Chikako Nakanishi, and Tomohiro Hase, "Software Pixel Interpolation for Digital Still Cameras Suitable for a 32-bit MCU," IEEE Transactions on Consumer Electronics, Vol. 44, No. 4, pp. 1342-1352, Nov. 1998.
10. William T. Freeman, "Method and apparatus for reconstructing missing color samples," U.S. Patent 4,663,655, Polaroid Corporation, 1987.
11. David R. Cok, "Signal processing method and apparatus for producing interpolated chrominance values in a sampled color image signal," U.S. Patent 4,642,678, Eastman Kodak Company, 1987.
12. X. Wu, W. K. Choi, and P. Bao, "Color Restoration from Digital Camera Data by Pattern Matching," Proceedings of the SPIE Electronic Imaging Conference, Color Imaging: Device-Independent Color, Color Hardcopy, and Graphic Arts II, Vol. 3018, pp. 12-17, 1997.
13. Tinku Acharya and Ping-Sing Tsai, "A New Block Matching Based Color Interpolation Algorithm," Proceedings of the SPIE Electronic Imaging Conference, Color Imaging: Device-Independent Color, Color Hardcopy, and Graphic Arts IV, Vol. 3648, pp. 60-65, 1999.
14. Robert H. Hibbard, "Apparatus and method for adaptively interpolating a full color image utilizing luminance gradients," U.S. Patent 5,382,976, Eastman Kodak Company, 1995.
15. Claude A. Laroche and Mark A. Prescott, "Apparatus and method for adaptively interpolating a full color image utilizing chrominance gradients," U.S. Patent 5,373,322, Eastman Kodak Company, 1994.
16. John F. Hamilton, Jr. and James E. Adams, Jr., "Adaptive color plan interpolation in single sensor color electronic camera," U.S. Patent 5,629,734, Eastman Kodak Company, 1997.
17. L. A. Zadeh, "Fuzzy Sets," Information and Control, Vol. 8, pp. 338-353, 1965.
18. H. J. Zimmermann, Fuzzy Set Theory and Its Applications, 2nd ed., Kluwer Academic, Norwell, MA, 1991.
19. K. Tanaka (translated by T. Niimura), An Introduction to Fuzzy Logic for Practical Applications, Springer-Verlag, New York, 1997.
20. P. G. Herzog, D. Knipp, H. Stiebig, and F. Koenig, "Characterization of novel three and six channel color moire free sensors," Proceedings of the SPIE Electronic Imaging Conference, Color Imaging: Device-Independent Color, Color Hardcopy, and Graphic Arts IV, Vol. 3648, pp. 48-59, 1999.


Figure 6: Original 24-bit full color images: (a) Star, (b) Zebra, (c) Town, (d) Neweng.


Figure 7: STAR image: (a) Reconstructed image using proposed algorithm, (b) Reconstructed image using Bilinear color interpolation, (c) Reconstructed image using Block Matching based method, and (d) Reconstructed image using combination of Smooth Hue and Edge Sensing interpolation.


Figure 8: STAR image (center portion only): (a) Reconstructed image using proposed algorithm, (b) Reconstructed image using Bilinear color interpolation, (c) Reconstructed image using Block Matching based method, and (d) Reconstructed image using combination of Smooth Hue and Edge Sensing interpolation.


Figure 9: Zebra image: (a) Reconstructed image using proposed algorithm, (b) Reconstructed image using Bilinear color interpolation, (c) Reconstructed image using Block Matching based method, and (d) Reconstructed image using combination of Smooth Hue and Edge Sensing interpolation.


Figure 10: Town image: (a) Reconstructed image using proposed algorithm, (b) Reconstructed image using Bilinear color interpolation, (c) Reconstructed image using Block Matching based method, and (d) Reconstructed image using combination of Smooth Hue and Edge Sensing interpolation.


Figure 11: Neweng image: (a) Reconstructed image using proposed algorithm, (b) Reconstructed image using Bilinear color interpolation, (c) Reconstructed image using Block Matching based method, and (d) Reconstructed image using combination of Smooth Hue and Edge Sensing interpolation.
