List decoding of first order Reed-Muller codes, II

Grigory A. Kabatiansky and Cédric Tavernier

Abstract. We describe a deterministic list decoding algorithm for the first order Reed-Muller codes RM(1, m) of length n = 2^m correcting up to n(1/2 − ε) errors with complexity O(nε^{-2}) in the worst case. On the other hand, we show that the corresponding lists have size O(ε^{-2}) (also in the worst case).

1 Introduction

The first efficient, though probabilistic, list decoding algorithm for Reed-Muller codes of order 1 has been known since 1989 [1], and was later re-established in a larger context in [2]. Recently several attempts have been made to construct effective deterministic list decoding algorithms for Reed-Muller (RM) codes. In one direction there are the papers of R. Pellikaan and X.-W. Wu, which exploit two well-known facts: 1-shortened RM(s, m) codes (of length n − 1 = 2^m − 1) are subcodes of BCH codes with the same designed distance d, and BCH codes are subfield subcodes of RS codes of the same distance d. Hence one can apply the Guruswami-Sudan (GS) algorithm [3], which results in a list decoding algorithm for RM codes of order s "correcting" up to n(1 − √(1 − 2^{-s}) − ε) errors with complexity O(n^3 ε^{-6}) (see [4] for details). Note that the decoding radius of the resulting algorithm is smaller than the value of the Johnson bound for RM codes, even though the GS algorithm corrects errors up to the value of the corresponding Johnson bound for RS codes. For instance, the algorithm of [4] can decode the RM(1, m) code only up to decoding radius T_{PW} = n(0.293 − ε), and it has complexity O(n^3 ε^{-6}). This paper is a continuation of [5], where two efficient list decoding algorithms for RM(1, m) codes, correcting up to n(1/2 − ε) errors with linear complexity O(nε^{-3}) for any ε > 0, were constructed. In this paper we show that one of the algorithms proposed in [5], namely the sums-criteria algorithm, has complexity O(nε^{-2}), and, on the other hand, we show that the corresponding lists for RM(1, m) codes can have size of order ε^{-2}.

2 Deterministic list decoding algorithms for first order Reed-Muller codes

The binary first order Reed-Muller code RM(1, m) of length n = 2^m consists of the vectors f = (..., f(x_1, ..., x_m), ...), where

f(x_1, ..., x_m) = f_0 + Σ_{1≤i≤m} f_i x_i
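As an illustration (a minimal Python sketch, not from the paper), the codewords of RM(1, m) can be enumerated as the truth tables of these affine functions:

```python
from itertools import product

def rm1_codewords(m):
    """Enumerate all 2^(m+1) codewords of RM(1, m) as binary tuples of length n = 2^m.

    Each codeword is the truth table of an affine Boolean function
    f(x_1, ..., x_m) = f_0 + f_1 x_1 + ... + f_m x_m  (addition mod 2),
    evaluated over all 2^m points of the m-dimensional Boolean cube.
    """
    points = list(product([0, 1], repeat=m))       # the 2^m points of the cube
    words = []
    for coeffs in product([0, 1], repeat=m + 1):   # (f_0, f_1, ..., f_m)
        f0, fs = coeffs[0], coeffs[1:]
        word = tuple((f0 + sum(fi * xi for fi, xi in zip(fs, x))) % 2
                     for x in points)
        words.append(word)
    return words
```

For m = 3 this produces 16 distinct codewords of length 8, any two of which differ in at least n/2 = 4 positions, in accordance with the minimum distance 2^{m−1} of RM(1, m).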

is an affine Boolean function and (x_1, ..., x_m) ranges over all 2^m points of the Boolean cube of dimension m. Recall that, by definition [6], a list decoding algorithm of decoding radius T should produce for any received vector y the list L_{T;C}(y) = {c ∈ C : d(y, c) ≤ T} of all vectors c from a code C which are at distance at most T from y. To upper-bound the size of the list L_{T;C}(y) we use the Johnson bound (see [7] for a simple and general proof): let B_{n,w}(y) = {x : d(y, x) ≤ w} be the Hamming ball of radius w = ωn with center y, and let C be a code with minimal code distance at least d = δn. Then

|L_{w;C}(y)| = |C ∩ B_{n,w}(y)| ≤ δ / (δ − 2ω(1 − ω)),     (1)

provided that the denominator is positive. Let y be the received vector and let

L_ε(y) := L_{n(1/2−ε); RM(1,m)}(y) = {f ∈ RM(1, m) : d(y, f) ≤ n(1/2 − ε)}     (2)

be the desired list. The Johnson bound gives that the size of this list is at most (2ε)^{-2}. Let us recall the Sums-Algorithm from [5]. The algorithm works recursively, finding at the i-th step a list L_i(y) of "candidates" which should contain the list of i-th prefixes of all f(x_1, ..., x_m) = f_0 + f_1 x_1 + ... + f_m x_m ∈ L_ε(y), where the i-th prefix of f(x_1, ..., x_m) is defined as f^{(i)}(x_1, ..., x_i) = f_1 x_1 + ... + f_i x_i. The main idea of the sums-algorithm is to approximate the Hamming distance between the received vector y and an arbitrary "propagation" of a candidate c^{(i)}(x_1, ..., x_m) = c_1 x_1 + ... + c_i x_i by the sum of Hamming distances over all i-dimensional "facets" of the m-dimensional Boolean cube. Let S_α = {(x_1, ..., x_i, α_1, ..., α_{m−i})} be an i-dimensional facet, where (x_1, ..., x_i) ranges over all 2^i binary i-dimensional vectors, α_1, ..., α_{m−i} are fixed, and α = α_1 + α_2·2 + ... + α_{m−i}·2^{m−i−1} is the "number" assigned to this facet. Denote by d_α^{(i)}(a, b) the Hamming distance between two arbitrary vectors a, b ∈ F_2^n restricted to a given i-dimensional facet S_α. Define

Δ_α^{(i)}(a, b) = min{d_α^{(i)}(a, b), d_α^{(i)}(a + 1, b)} = min{d_α^{(i)}(a, b), 2^i − d_α^{(i)}(a, b)}     (3)

and

Δ^{(i)}(a, b) = Σ_{α=0}^{2^{m−i}−1} Δ_α^{(i)}(a, b).     (4)
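The quantities (3) and (4) can be sketched in Python as follows. The coordinate indexing is our assumption, not fixed by the paper: position p of a length-n vector encodes the point via p = x_1 + 2x_2 + ... + 2^{m−1}x_m, so the facet S_α occupies the block of positions [α·2^i, (α+1)·2^i).

```python
def delta_i(a, b, i, m):
    """Compute Δ^(i)(a, b) of (3)-(4): the sum over all 2^(m-i)
    i-dimensional facets of min{d_α, 2^i - d_α}, where d_α is the Hamming
    distance between a and b restricted to the facet S_α.
    """
    n, size = 1 << m, 1 << i
    assert len(a) == len(b) == n
    total = 0
    for alpha in range(1 << (m - i)):
        lo = alpha * size
        d = sum(a[p] != b[p] for p in range(lo, lo + size))  # d_α^(i)(a, b)
        total += min(d, size - d)                            # Δ_α^(i)(a, b)
    return total
```

For example, with y obtained from an affine function f by flipping two positions, delta_i(y, f, i, m) never exceeds d(y, f) = 2, in line with inequality (5) below the definitions.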

It is clear that the restriction of an affine function f(x_1, ..., x_m) ∈ RM(1, m) to any i-dimensional facet S_α can be represented as the sum of its i-th prefix f^{(i)}(x_1, ..., x_i) and the corresponding constant. Together with (3) and (4), this implies that for any facet S_α and any affine function f(x_1, ..., x_m) with i-th prefix f^{(i)}(x_1, ..., x_i) we have

d(y, f) = Σ_{α=0}^{2^{m−i}−1} d_α^{(i)}(y, f) ≥ Σ_{α=0}^{2^{m−i}−1} Δ_α^{(i)}(y, f) = Δ^{(i)}(y, f).     (5)

Based on (5), we introduced in [5] the following natural criterion for accepting a candidate: a candidate c^{(i)} = c_1 x_1 + ... + c_i x_i is accepted if and only if Δ^{(i)}(y, c^{(i)}) ≤ n(1/2 − ε). Denote by L̂_ε^{(i)}(y) = {c^{(i)} ∈ RM(1, i) : Δ^{(i)}(y, c^{(i)}) ≤ n(1/2 − ε)} the list of such accepted candidates. In [5] we called the corresponding algorithm the Sums-Algorithm. It is clear from (5) that the prefixes of all affine functions of the list L_ε(y) will be accepted by the Sums-Algorithm. To work effectively, the algorithm should not accept many other (incorrect) candidates. In [5] we could not prove this for the Sums-Algorithm directly, so we used another algorithm, called the Ratio-Algorithm, and proved that both algorithms generate rather small lists, namely of size O(ε^{-3}). Now we prove that the lists of the Sums-Algorithm have size O(ε^{-2}).

Lemma 1. For any received vector y and for every i ∈ {1, ..., m},

|L̂_ε^{(i)}(y)| ≤ 1/(4ε²).     (6)

Proof. Let f = f_1 x_1 + ... + f_i x_i be an arbitrary element of the list L̂_ε^{(i)}(y). On every facet S_α we choose whichever of f and f + 1 is nearer to y, i.e., whichever provides the minimum in min{d_α^{(i)}(f, y), d_α^{(i)}(f + 1, y)}, and then define a Boolean function F which on every facet equals the corresponding best approximation to y, i.e., equals either f or f + 1. Obviously the sums-criterion is the same as the condition d(F, y) ≤ n(1/2 − ε). Now it is easy to check that if f and g = g_1 x_1 + ... + g_i x_i are two different elements of the list L̂_ε^{(i)}(y), with G defined from g in the same way, then

d(F, G) = n/2.     (7)

Indeed, on every facet S_α their restrictions are elements f + a_α and g + b_α of the corresponding RM(1, i) code of length n_0 = 2^i. Hence d(f + a_α, g + b_α) = n_0/2 = 2^{i−1}, which gives (7). Therefore applying the Johnson bound (1) to this set of Boolean functions, corresponding to the elements of the list, gives the statement of the lemma. □
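The whole recursive procedure can be sketched as follows (a hedged Python sketch based on our reading of [5]; the coordinate indexing p = x_1 + 2x_2 + ... + 2^{m−1}x_m and the recovery of the free term f_0 at the last step are our assumptions, as the text above does not fix them):

```python
def sums_decode(y, m, eps):
    """Sketch of the recursive Sums-Algorithm: list-decode RM(1, m) up to
    radius n(1/2 - eps), returning coefficient tuples (f_0, f_1, ..., f_m).

    At step i each prefix c_1 x_1 + ... + c_i x_i is kept iff it passes the
    sums criterion Δ^(i)(y, c^(i)) <= n(1/2 - eps)."""
    n = 1 << m
    thr = n * (0.5 - eps)

    def prefix_table(coeffs):
        # truth table of c_1 x_1 + ... + c_i x_i over all n points
        return [sum(c * ((p >> j) & 1) for j, c in enumerate(coeffs)) % 2
                for p in range(n)]

    def delta(cvec, i):
        # Δ^(i)(y, c): sum over facets of min{d_α, 2^i - d_α}, as in (3)-(4)
        size, tot = 1 << i, 0
        for lo in range(0, n, size):
            d = sum(y[p] != cvec[p] for p in range(lo, lo + size))
            tot += min(d, size - d)
        return tot

    candidates = [()]
    for i in range(1, m + 1):   # extend prefixes, keep those passing the criterion
        candidates = [c + (ci,) for c in candidates for ci in (0, 1)
                      if delta(prefix_table(c + (ci,)), i) <= thr]
    out = []                    # recover f_0; keep only codewords inside the ball
    for c in candidates:
        tab = prefix_table(c)
        for f0 in (0, 1):
            if sum(yp != (v + f0) % 2 for yp, v in zip(y, tab)) <= thr:
                out.append((f0,) + c)
    return out
```

For instance, for m = 3 and ε = 0.2 (decoding radius 2.4), a received vector at distance 1 from the codeword 1 + x_1 + x_3 is decoded to exactly that codeword, since all other codewords of RM(1, 3) are at distance at least 3 from it.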

Since the upper bound on the size of each list L̂_ε^{(i)}(y) decreases from O(ε^{-3}) [5] to O(ε^{-2}), the same reduction is valid for the complexity, which gives the main result of the paper.

Theorem 1. The Sums-Algorithm outputs, with complexity O(nε^{-2}), all RM(1, m) codevectors which are at distance at most n(1/2 − ε) from a given received vector.

Now we shall prove that for first order Reed-Muller codes the Johnson bound gives the right order for the size of the list. Denote by L_ε(RM(1, m)) = max_y |L_ε(y)| the largest possible size of the list, i.e., the size of the list in the worst case. Let us start with bent functions. Recall that bent functions exist for all even m, and that for every bent function f(x_1, ..., x_m) there are n = 2^m affine Boolean functions at Hamming distance (n − √n)/2 from it (and the remaining n affine Boolean functions are at Hamming distance (n + √n)/2), see [8]. Hence if we choose a bent function as the received vector y, then for ε = (2√n)^{-1} the corresponding list consists of n elements, which coincides with the Johnson bound (1); this shows that L_ε(RM(1, m)) = n for ε = (2√n)^{-1} and even m. Moreover, applying a bent function with fewer than m effective variables gives the following result.

Theorem 2. For any ε > 0,

L_ε(RM(1, m)) = O(min{ε^{-2}, n}).     (8)

Proof. For ε = 2^{-(i+1)} = (2√n_0)^{-1} ≥ (2√n)^{-1}, where n_0 = 2^{2i}, consider a bent function ϕ on 2i variables and set y = f(x_1, ..., x_m) = ϕ(x_1, ..., x_{2i}). Then there are n_0 affine Boolean functions λ_1, ..., λ_{n_0} in 2i variables which are at Hamming distance (n_0 − √n_0)/2 from ϕ(x_1, ..., x_{2i}). Consider the n_0 affine functions l_j(x_1, ..., x_m) = λ_j(x_1, ..., x_{2i}). Each of them is at distance (n_0 − √n_0)/2 from the received vector y on every 2i-dimensional facet and, hence, in total at distance n(1/2 − ε) from the received vector y. Therefore for ε = 2^{-(i+1)}, where 2i ≤ m,

L_ε(RM(1, m)) = (2ε)^{-2}.     (9)

This, together with the obvious remark that |L_ε(y)| ≤ |RM(1, m)| = 2n, proves (8). □

Note that this simple result on the tightness of the Johnson bound for RM(1, m) codes seems to be new (a similar result was proved in [2], but for random linear codes). We do not know whether the same is true for RM codes of fixed order.
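The distance profile used in the argument above can be checked numerically for m = 4 with the standard inner-product bent function x_1x_2 + x_3x_4 (our choice of a concrete bent function; the text only uses their existence):

```python
from itertools import product

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

def truth_table(fn, m):
    # evaluate fn over all 2^m points (x_1, ..., x_m)
    return tuple(fn(*x) for x in product([0, 1], repeat=m))

# a standard bent function on m = 4 variables: f = x1*x2 + x3*x4 mod 2
bent = truth_table(lambda x1, x2, x3, x4: (x1 * x2 + x3 * x4) % 2, 4)

# all 2^(m+1) = 32 affine functions, i.e. the codewords of RM(1, 4)
affine = [truth_table(
              lambda x1, x2, x3, x4, c=c:
                  (c[0] + c[1]*x1 + c[2]*x2 + c[3]*x3 + c[4]*x4) % 2, 4)
          for c in product([0, 1], repeat=5)]

dists = sorted(hamming(bent, a) for a in affine)
```

Here n = 16, so exactly n = 16 affine functions lie at distance (n − √n)/2 = 6 and the other 16 at (n + √n)/2 = 10; with ε = (2√n)^{-1} = 1/8 the decoding radius n(1/2 − ε) is 6, and the list has n = 16 = (2ε)^{-2} elements, matching the Johnson bound (1).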

References

[1] O. Goldreich and L. A. Levin, "A hard-core predicate for all one-way functions," Proceedings of the 21st ACM Symposium on Theory of Computing, pp. 25–32, 1989.

[2] O. Goldreich, R. Rubinfeld and M. Sudan, "Learning polynomials with queries: the highly noisy case," SIAM J. on Discrete Math., pp. 535–570, 2000.

[3] V. Guruswami and M. Sudan, "Improved decoding of Reed-Solomon and algebraic-geometry codes," IEEE Trans. on Information Theory, vol. 45, pp. 1757–1767, 1999.

[4] R. Pellikaan and X.-W. Wu, "List decoding of q-ary Reed-Muller codes," IEEE Trans. on Information Theory, vol. 50, pp. 679–682, 2004.

[5] G. Kabatiansky and C. Tavernier, "List decoding of Reed-Muller codes of first order," in Proc. ACCT-9, pp. 230–235, Bulgaria, 2004.

[6] P. Elias, "List decoding for noisy channels," 1957 IRE WESCON Convention Record, Pt. 2, pp. 94–104, 1957.

[7] L. A. Bassalygo, "New upper bounds for error-correcting codes," Probl. Info. Transmission, vol. 1, no. 4, pp. 41–44, 1965. Reprinted in "Key Papers in the Development of Coding Theory," ed. E. R. Berlekamp, IEEE Press, NY, 1974.

[8] F. J. MacWilliams and N. J. A. Sloane, The Theory of Error-Correcting Codes, North-Holland, Amsterdam, 1977.
