A new algorithm for computing Gröbner bases

H. Lombardi

January 1998

Équipe de Mathématiques de Besançon. UMR CNRS 6623. UFR des Sciences et Techniques. Université de Franche-Comté. 25030 Besançon Cedex. email: [email protected]

Abstract We give a new algorithm for computing Gröbner bases. This algorithm is radically distinct from the Buchberger algorithm. It uses neither divisions nor S-polynomials. It is based on controlling the growth of the rank of successive generalized Sylvester matrices. These matrices are defined incrementally by adding new columns and the required rows. The main idea is to bound the ranks of future matrices below and above by two functions of the number of incremental steps, until these bounds coincide.

AMS Classification: 13P10, 12Y05, 15A03, 65F50
Key words: Gröbner Basis, Linear Algebra, Sylvester Matrix, Constructive Mathematics.

Introduction

We give a new algorithm for computing Gröbner bases. This algorithm is radically distinct from the Buchberger algorithm (cf. [3], [4], [1]). It uses neither divisions nor S-polynomials. It is based on controlling the growth of the rank of successive generalized Sylvester matrices. These matrices are defined incrementally by adding new columns and the required rows. The main idea is to bound the ranks of future matrices below and above by two functions of the number of incremental steps, until these bounds coincide.

If A is a finite subset of k[x1, . . . , xr] we define A[1] = A ∪ x1 A ∪ . . . ∪ xr A and A[n+1] = (A[n])[1]. If G0 = [f1, . . . , fs] is a list of polynomials in k[x1, . . . , xr] we define Gn = G0[n]. This defines a kind of Sylvester matrix for G0. Let En be the k-vector space generated by Gn.

We define in section 2 a function SMICG0(n, m), computable from Gn and verifying SMIC(n, m) ≤ dim(Em) for all m ≥ n. We call this function the structural minimum of growth, because it estimates at step n a lower bound for the rank of Gm.

We define in section 3 a function SMACG0(n, m), computable from Gn and verifying SMAC(n, m) ≥ dim(Em) for all m ≥ n. We call this function the structural maximum of growth, because it estimates at step n an upper bound for the rank of Gm.

We indicate how to compute these functions and how to test their equality, using a triangulation of Gn without column exchange.

The SMIC–SMAC algorithm is given in section 4. It consists in triangulating successive Gn until the SMIC equals the SMAC. The fact that this happens is proved in sections 5 and 7. The second proof is constructive.

Technical topics are discussed in section 6.

A priori our algorithm could reduce the memory space requirement w.r.t. the classical Buchberger algorithm. Our approach is based on well controlled linear algebra. So it might be related to the work of J.-C. Faugère, who developed a new efficient algorithm (FGB) for computing Gröbner bases which relies on linear algebra and memory management techniques. Possible links will become clearer when J.-C. Faugère publishes his results.

1 Preliminaries

We give some notations.

Notations 1 Let k be a field.
• Mr denotes the set of monomials x^α = x1^α1 · · · xr^αr in r variables. Divisibility is a natural partial order on Mr. We write x^α | x^β to mean that x^α divides x^β. The total degree α1 + · · · + αr of x^α is |α|. The total degree of a polynomial f is denoted by totdeg(f).
• An admissible order on Mr is given. We write x^α ≤ x^β, or α ≤ β, for this admissible order.
• If f is a polynomial in r variables whose leading term is c x^α, we let Lm(f) := x^α, the leading monomial of f. Let us recall that a Gröbner basis of a polynomial ideal I = I(f1, . . . , fs) is a finite family (g1, . . . , gt) in I such that the leading monomial of any f ≠ 0 in I is a multiple of some Lm(gj).
• If A is a subset of k[x1, . . . , xr] we denote

  A[0] = A,   A[1] = A ∪ x1 A ∪ . . . ∪ xr A,   A[n+1] = (A[n])[1]

• We have A ⊂ B ⇒ A[1] ⊂ B[1] (A ⊂ B means inclusion with possible equality).
• We let deg(A) := max{totdeg(f) ; f ∈ A}. So we get deg(A[n]) = n + deg(A). If A is empty we let deg(A) = −∞.

Let us assume now that A is a finite subset of Mr.
• We denote by Fri(A) the lower frontier of A, i.e., the subset of A of minimal elements w.r.t. divisibility. Of course we get Fri(A) = Fri(A[1]).
• We let val(A) := inf{|α| ; x^α ∈ A}. So we get val(A[n]) = val(A). If A is empty we let val(A) = +∞.
• We let FA(n) := #(A[n]) be the number of elements in A[n]. The function n ↦ FA(n) is called the growth function of A.

Lemma 1 If A is a nonempty subset of Mr, then the function n ↦ FA(n) is equal to a polynomial HA(n) for n ≥ n0, where n0 ≤ deg(ppcm(A)) − val(A) − r (ppcm(A) denotes the lcm of the elements of A). When n tends to infinity, we have HA(n) ∼ (n+r choose r) ∼ n^r/r!.
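As an illustration of these notations (and only of them; this is not part of the algorithm), here is a minimal computational sketch in Python. Monomials x^α are represented by exponent tuples, and all the names (shift, shift_n, fri, val, deg, growth_function) are ours, chosen for this sketch.

# Monomials x^alpha in r variables are represented by exponent tuples alpha.

def shift(A, r):
    """A^[1] = A union x1*A union ... union xr*A."""
    out = set(A)
    for alpha in A:
        for i in range(r):
            out.add(alpha[:i] + (alpha[i] + 1,) + alpha[i + 1:])
    return out

def shift_n(A, r, n):
    """A^[n], obtained by iterating the operation A -> A^[1]."""
    A = set(A)
    for _ in range(n):
        A = shift(A, r)
    return A

def divides(a, b):
    return all(ai <= bi for ai, bi in zip(a, b))

def fri(A):
    """Fri(A): the minimal elements of A w.r.t. divisibility (lower frontier)."""
    return {a for a in A if not any(b != a and divides(b, a) for b in A)}

def val(A):
    return min(sum(a) for a in A) if A else float("inf")

def deg(A):
    return max(sum(a) for a in A) if A else float("-inf")

def growth_function(A, r, n):
    """F_A(n) = #(A^[n])."""
    return len(shift_n(A, r, n))

# tiny usage example with a made-up set in M_2
A = {(0, 3), (2, 1), (4, 0)}
print([growth_function(A, 2, n) for n in range(6)])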

The polynomial HA(n) is called the growth polynomial of A. Applying lemma 1, we see that the growth function of A can be explicitly described. Before giving a combinatorial proof of this lemma, let us see an example.

Example 1 We give an example with A ⊂ M2.

[Figure omitted: the set A, with #A = 53.]

[Figure omitted: the set A[1], with #(A[1]) = 53 + 28 = 81.]

[Figure omitted: the set A[2], with #(A[2]) = 81 + 27 = 108.]

[Figure omitted: the set A[3], with #(A[3]) = 108 + 23 = 131.]

The set A[3] has "the gap" x1^12 x2 (the notion of gap will be discussed in section 7).

[Figure omitted: the set A[4], with #(A[4]) = 131 + 23 = 154.]

The set B = A[4] has its growth polynomial equal to its growth function: for all n ≥ 4 we get

  FA(n) = HA(n) = (n+1)(n+2)/2 + 8n + 9n + 71 = n(n−1)/2 + 19n + 72
  FA(n+1) − FA(n) = n + 19

as we can see with a counting process on A[4] [figure omitted].

A combinatorial proof of lemma 1

Let us denote by X a singleton {x^α}, where x^α = x1^α1 · · · xr^αr. We remark that for a finite subset A of Mr we can write

  A[m] = ∪ { X[m] : X a singleton ⊂ A }

On the other hand, we can compute the number of elements in an intersection X1[m] ∩ X2[m], or more generally in X1[m] ∩ X2[m] ∩ · · · ∩ Xs[m]. Indeed

  X1[m] ∩ X2[m] ∩ · · · ∩ Xs[m] = { x^β : ppcm(X1, X2, . . . , Xs) | x^β and |β| ≤ m + inf_{i=1,...,s}(deg(Xi)) }

So if we let p = deg(ppcm(X1, X2, . . . , Xs)) and v = inf_{i=1,...,s}(deg(Xi)) we get

  #(X1[m] ∩ X2[m] ∩ · · · ∩ Xs[m]) = 0 if v + m < p,  and  = (v+m−p+r choose r) if v + m ≥ p,

and, since (u+r choose r) = 0 for u = −r, . . . , −1,

  #(X1[m] ∩ X2[m] ∩ · · · ∩ Xs[m]) = 0 if m < p − v − r,  and  = (v+m−p+r choose r) if m ≥ p − v − r.

Finally we conclude by recalling the well known formula giving the number of elements of a union of finite sets from the numbers of elements of the intersections (cf. [8]). E.g., letting Aij := Ai ∩ Aj, Aijk := Ai ∩ Aj ∩ Ak, etc.,

  #(A1 ∪ A2 ∪ A3 ∪ A4) = Σ_{1≤i≤4} #(Ai) − Σ_{1≤i<j≤4} #(Aij) + Σ_{1≤i<j<k≤4} #(Aijk) − #(A1234)
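This counting can be turned into a small verification tool. The following sketch (Python, the names are ours) computes #(A[m]) by inclusion-exclusion over the nonempty subsets of A, exactly as above. It is exponential in #A and is only meant as a cross-check of the brute-force growth function sketched earlier.

from itertools import combinations
from math import comb

def count_intersection(S, r, m):
    """#(X1^[m] inter ... inter Xs^[m]) for the singletons Xi = {x^alpha_i}, alpha_i in S."""
    lcm = tuple(max(c) for c in zip(*S))          # ppcm(X1, ..., Xs)
    p = sum(lcm)                                  # degree of the ppcm
    v = min(sum(a) for a in S)                    # inf of the degrees
    return comb(v + m - p + r, r) if v + m >= p else 0

def cardinal_of_A_shift_m(A, r, m):
    """#(A^[m]) by inclusion-exclusion over the nonempty subsets of A."""
    A = list(A)
    total = 0
    for k in range(1, len(A) + 1):
        for S in combinations(A, k):
            total += (-1) ** (k + 1) * count_intersection(S, r, m)
    return total

# the values below agree with the brute-force growth_function of the sketch in section 1
A = {(0, 3), (2, 1), (4, 0)}
print([cardinal_of_A_shift_m(A, 2, m) for m in range(6)])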



2 The SMIC

In the sequel we consider a field k and a nonempty list G0 = [f1, . . . , fs] of polynomials in k[x1, . . . , xr]. We fix the following notations.

Notations 2
• d = deg(G0)
• Gn = G0[n]
• En = Vect(Gn) is the k-vector space generated by Gn.
• Exp(n) := {Lm(f) ; f ≠ 0, f ∈ En} is the set of leading monomials of nonzero polynomials in En. Remark that Exp(n)[1] ⊂ Exp(n + 1).
• en := #(Exp(n)) = dim(En) is the k-vector space dimension of En = Vect(Gn).
• I(f1, . . . , fs) is the ideal generated by f1, . . . , fs.

Definition 3 For all m ≥ n we let SMICG0(n, m) := SMIC(n, m) := FExp(n)(m − n) = #(Exp(n)[m−n]). We call it the structural minimum of growth, computed at step n and bounding the rank at step m.
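In terms of the helpers sketched in section 1, the SMIC is simply the growth function of Exp(n), shifted by n. A minimal sketch (Python; Exp(n) is assumed to be already available, for instance from the triangulation described below, and growth_function is the helper of the earlier sketch):

def smic(exp_n, r, n, m):
    """SMIC(n, m) = F_{Exp(n)}(m - n) = #(Exp(n)^[m-n]), for m >= n."""
    assert m >= n
    return growth_function(exp_n, r, m - n)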

We easily get the following.

Fact 1 For all n ≤ n0 ≤ m we have SMIC(n, m) ≤ SMIC(n0, m) ≤ em = SMIC(m, m). When m tends to infinity we have SMIC(n, m) ∼ em ∼ m^r/r!. Indeed SMIC(n, m) ≤ em ≤ FP(m), where P is the set of monomials appearing in G0.

Definition 4 We call column triangulation of a matrix M any algorithmic process that replaces the matrix M by another matrix M′ having the same column vector space and verifying the following property: for any row there is at most one column having its leading nonzero coefficient on this row.

A way of computing Exp(n) is given by a column triangulation of the matrix whose column vectors are the elements of Gn written in some suitable basis (this basis is P[n] written in decreasing admissible order). We are especially interested in column triangulations without column exchange (the pivot is taken in the next column, except if this is a zero column). In such a process, the rows of the successive pivots are in chaotic order (so the computed matrix is not a "nice triangular" one). See the next figure.

• x x x x x x x x x x x x x x x
. . . . . . • x x x x x x x x x
. . • x x x . x x x x x x x x x
. . . . • x . x x x x x x x x x
. . . . . . . . . . • x x x x x
. . . . . . . . . . . . . • x x
. . . . . . . . . • . x x . x x
. . . . . • . x x . . x x . x x
. • . x . . . x x . . x x . x x
. . . . . . . • x . . x x . x x
. . . . . . . . . . . . . . • x

Example of a matrix produced by column triangulation without column exchange. We have deleted the columns which have been reduced to 0; "." means 0, "•" means a pivot, "x" an arbitrary entry (zero or nonzero). The important thing is that such a triangulation allows a better management of the successive Exp(n). When we say "triangulation without column exchange", we always mean a column triangulation.
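Here is a sketch of a column triangulation without column exchange (Python, exact arithmetic over Q with Fraction, dense columns). It only makes the description above concrete; an efficient implementation would exploit the structure of Gn (see section 6). Row 0 corresponds to the largest monomial of the basis P[n], so the leading nonzero entry of a column is its topmost one. The names are ours.

from fractions import Fraction

def column_triangulation(columns):
    """
    Column triangulation without column exchange.

    `columns` is a list of columns, each column a list of numbers; row 0
    corresponds to the largest monomial (decreasing admissible order).
    Returns (pivot_columns, pivot_rows, killed): pivot_rows[i] is the row of
    the leading nonzero entry of pivot_columns[i], and `killed` collects the
    indices of the columns reduced to 0.
    """
    pivots = {}      # pivot row -> pivot column (list of Fractions)
    order = []       # pivot rows, in the chaotic order in which they appear
    killed = []
    for idx, col in enumerate(columns):
        col = [Fraction(c) for c in col]
        while True:
            lead = next((i for i, c in enumerate(col) if c != 0), None)
            if lead is None:             # the column has been reduced to 0
                killed.append(idx)
                break
            if lead not in pivots:       # a new pivot appears on this row
                pivots[lead] = col
                order.append(lead)
                break
            piv = pivots[lead]           # eliminate with the existing pivot
            factor = col[lead] / piv[lead]
            col = [c - factor * p for c, p in zip(col, piv)]
    return [pivots[row] for row in order], order, killed

Applied to the matrix of Gn, the pivot rows collect the leading monomials, i.e. Exp(n); the killed columns x^α fj are the ones used in section 3 to define the sets Mj,n (see remark 1 there).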

3 The SMAC

We begin this section by defining Gn as an ordered list. The polynomial x^α fi precedes the polynomial x^β fj in the list Gn if α = β and i < j, or if β precedes α in the lexicographical monomial order where x1 > · · · > xr. This implies that if x^α fi precedes x^β fj in the list Gn then this remains true in the list Gn+1, and moreover xh x^α fi precedes xh x^β fj (for h = 1, . . . , r) in the list Gn+1.

The list Gn may also be written with the following recursive rule (• means the concatenation of lists). We explain only the case r = 3. We let G^1_0 = G^2_0 = G^3_0 = G0, and

  G^3_{n+1} = x3 G^3_n • G0
  G^2_{n+1} = x2 G^2_n • G^3_{n+1}
  G_{n+1} = G^1_{n+1} = x1 G^1_n • G^2_{n+1}

If we consider Gn as a list of column vectors, i.e., as a matrix, we get a variant of the usual Sylvester matrices.

Lemma 2 If x^α fi is a linear combination of vectors preceding it in the list Gn then this remains true in the list Gn+1, and moreover xh x^α fi is a linear combination of vectors preceding it in the list Gn+1 (for h = 1, . . . , r).

Remark 1 The polynomial x^α fi is a linear combination of vectors preceding it in the list Gn iff it is killed (reduced to 0) in a column triangulation of Gn without column exchange.

Notation 5 For a given finite list G0 = [f1, . . . , fs] of polynomials in k[x1, . . . , xr] and for j = 1, . . . , s we denote by Mj,n the following subset of Mr:

  Mj,n := {x^α : x^α fj is a linear combination of vectors preceding it in the list Gn}

We now give an immediate corollary of lemma 2.

Corollary 1 For j = 1, . . . , s and for all m ≥ n ∈ N we have

  Mj,n[1] ⊂ Mj,n+1   and   Mj,n[m−n] ⊂ Mj,m

Let us remark that the dimension en of En equals the number of vectors not reduced to zero during the triangulation of Gn:

  en = dim(En) = Σ_{j=1,...,s} ( (n+r choose n) − #(Mj,n) ).

So we get a natural definition for the SMAC.

Definition 6 For all m ≥ n we let

  SMACG0(n, m) := SMAC(n, m) := Σ_{j=1,...,s} ( (m+r choose m) − FMj,n(m − n) ).

We call it the structural maximum of growth, computed at step n and bounding the rank at step m. From the previous remarks and corollary we easily get the following.

Fact 2 For all n ≤ n0 ≤ m we have SMAC(n, m) ≥ SMAC(n0, m) ≥ em = SMAC(m, m).
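As an illustration of the recursive rule above, here is a sketch (Python, case r = 3; an element x^α fj is represented by the pair (α, j), and the function name is ours) that builds the ordered list Gn. Its length is s·(n+3 choose 3), in agreement with the number of columns s·(n+r choose n) appearing in the formula for en above.

from math import comb

def build_Gn(s, n):
    """Ordered list Gn for r = 3; an element (alpha, j) stands for x^alpha * f_j."""
    G0 = [((0, 0, 0), j) for j in range(1, s + 1)]
    def times_x(L, h):                       # multiply every element of the list by x_{h+1}
        return [(tuple(a + (1 if k == h else 0) for k, a in enumerate(alpha)), j)
                for alpha, j in L]
    G1, G2, G3 = list(G0), list(G0), list(G0)
    for _ in range(n):
        G3 = times_x(G3, 2) + G0             # G^3_{n+1} = x3 G^3_n • G0
        G2 = times_x(G2, 1) + G3             # G^2_{n+1} = x2 G^2_n • G^3_{n+1}
        G1 = times_x(G1, 0) + G2             # G_{n+1} = G^1_{n+1} = x1 G^1_n • G^2_{n+1}
    return G1

# sanity check: #Gn = s * (n+3 choose 3)
s, n = 2, 4
assert len(build_Gn(s, n)) == s * comb(n + 3, 3)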

Here is another simple and crucial fact.

Fact 3 Assume that SMIC(n, n + 1) = SMAC(n, n + 1). Then en+1 = SMIC(n, n + 1) = SMAC(n, n + 1), Exp(n + 1) = Exp(n)[1] and, for j = 1, . . . , s, Mj,n+1 = Mj,n[1]. In particular Fri(Exp(n)) = Fri(Exp(n + 1)) and, for m ≥ n + 1, SMIC(n + 1, m) = SMIC(n, m) and SMAC(n + 1, m) = SMAC(n, m).
In the opposite case, when SMIC(n, n + 1) < SMAC(n, n + 1), then SMIC(n + 1, n + 1) > SMIC(n, n + 1) or SMAC(n + 1, n + 1) < SMAC(n, n + 1), and Exp(n)[1] ≠ Exp(n + 1) or Mj,n[1] ≠ Mj,n+1 for at least one j ∈ {1, . . . , s}.

We deduce the following theorem.

Theorem 1 Assume that for an integer n we have

  ∀m ≥ n   SMIC(n, m) = SMAC(n, m)

then
— for all m ≥ n, em = SMIC(n, m)
— Fri(Exp(n)) = Fri(Lm(I(f1, . . . , fs)))
— after any column triangulation of the matrix Gn, the list of polynomials that correspond to column vectors whose leading monomial is in the lower frontier Fri(Exp(n)) is a minimal Gröbner basis of the ideal I(f1, . . . , fs).

Finally, the functions SMIC(n, .) and SMAC(n, .) do coincide for large values of n, as the following theorem states. We shall give two proofs of this fact in sections 5 and 7.

Theorem 2 For any system G0 = [f1, . . . , fs] in k[x1, . . . , xr] we have

  ∃n ∈ N  ∀m ≥ n   SMIC(n, m) = SMAC(n, m)

Remark 2 If for some integer n all the Mj,n were nonempty, then the function m ↦ SMAC(n, m) would be, for m > m0 ≥ n, a polynomial of degree < r, and this is impossible since SMAC(n, m) ≥ em ∼ m^r/r!. So there is always an index j such that Mj,n = ∅. On the other hand, as long as there are at least two empty Mj,n, the growth at infinity of m ↦ SMAC(n, m) is too high and the functions SMIC(n, .) and SMAC(n, .) cannot be equal.
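To make the last item of theorem 1 concrete: once a column triangulation of Gn is available, the candidate minimal Gröbner basis is obtained by keeping the columns whose leading monomial lies on the lower frontier. A sketch (Python, reusing fri from the sketch in section 1; the argument is assumed to be a list of pairs (leading monomial, polynomial) coming from a triangulation, and the function name is ours):

def minimal_groebner_candidates(triangulated):
    """
    Keep the polynomials whose leading monomial lies on the lower frontier
    Fri(Exp(n)); by theorem 1 they form a minimal Groebner basis when
    SMIC(n, m) = SMAC(n, m) for all m >= n.
    """
    exp_n = {lm for lm, _ in triangulated}
    frontier = fri(exp_n)                 # fri from the sketch in section 1
    basis = {}                            # keep one polynomial per frontier monomial
    for lm, poly in triangulated:
        if lm in frontier and lm not in basis:
            basis[lm] = poly
    return list(basis.values())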

4 The SMIC–SMAC algorithm

Previous results lead to the following algorithm.

Input: A nonempty list G0 = (f1, . . . , fs) of nonzero polynomials in k[x1, . . . , xr].
Output: A minimal Gröbner basis G of I(f1, . . . , fs).
Variables:
  Mj (for j = 1, . . . , s): the list of monomials x^α such that x^α fj is killed when triangulating Gn without column exchange,
  V: list of nonzero vectors given by this triangulation,
  E: list of leading monomials of elements in V,
  Fri: list of monomials (the lower frontier of E),
  G: list of polynomials in V that correspond to Fri,
  Fini: boolean variable,
  n: the step number.

Begin
  G := [ ], Fri := [ ], Mj := [ ] (for j = 1, . . . , s), E := [ ], Fini := False, n := 0,
  Repeat
    Compute the output V when triangulating Gn without column exchange (use previous computations if n > 0),
    Update the lists G, Fri, E (use V) and Mj for j = 1, . . . , s (use the killed vectors in the triangulation),
    If Fri = [1] then Fini := True   % (I(f1, . . . , fs) = I(1))
    else If exactly one Mj is empty then
      If SMIC(n, n + 1) = SMAC(n, n + 1) then
        If ∀m ≥ n SMIC(n, m) = SMAC(n, m) then Fini := True,
    If Fini = False then n := n + 1,
  Until Fini
  Return G
End

Remark that the test

  ∀m ≥ n   SMIC(n, m) = SMAC(n, m) ?

is algorithmic by virtue of lemma 1. We discuss this question in section 6. And the algorithm terminates by virtue of theorem 2.

If we let Vn be the output when triangulating Gn without column exchange, we see that the list x1 Vn is the beginning of the output when triangulating the list Gn+1 without column exchange. Concerning the second part of the list Gn+1, which equals G^2_{n+1} (see the beginning of section 3), it is clear that all the elements of the Mj,n[1] (for j ∈ {1, . . . , s}) are useless. So, an adequate management of the successive triangulations can significantly reduce the complexity.

The minimal Gröbner basis computed by our algorithm is optimal since it is given by linear combinations of input polynomials with degrees as low as possible. When updating G and Fri, we keep each old element f given in G by previous triangulations as long as no new monomial in Fri divides the leading monomial of f.
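The control structure of the algorithm can be summarized by the following Python skeleton. It is only a restatement of the pseudocode above: all the callables and the `state` dictionary are supplied by the caller and stand for the routines sketched elsewhere in this paper; this is not a reference implementation.

def smic_smac_algorithm(triangulate, update_lists, smic, smac,
                        smic_equals_smac_for_all_m, state):
    """Control structure of the SMIC-SMAC algorithm; all arguments are placeholders."""
    n = 0
    while True:
        V, killed = triangulate(state, n)               # triangulation of Gn, no column exchange
        update_lists(state, V, killed)                  # update G, Fri, E and the Mj
        if state["Fri"] == [tuple([0] * state["r"])]:   # Fri = [1]  =>  I(f1, ..., fs) = I(1)
            return state["G"]
        if sum(1 for Mj in state["M"] if not Mj) == 1:  # exactly one Mj empty
            if smic(state, n, n + 1) == smac(state, n, n + 1):
                if smic_equals_smac_for_all_m(state, n):  # algorithmic by lemma 1 (section 6)
                    return state["G"]
        n += 1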

5 A proof that the SMIC–SMAC algorithm stops

We shall give two proofs that the SMIC–SMAC algorithm stops, in this section and in section 7. These proofs are inspired by the proofs of Dickson's lemma (in its classical version [7] and in its constructive version [11]). Nevertheless, we are forced to introduce some ad hoc notions of nondecreasing sequences, stationary sequences, and sequences that pause. In this section the proof is rather simple, but it is not constructive.

First we define a very particular set.

Notation 7 We denote by Mr,δ the set of pairs (n, A) where n ∈ N and A is a finite subset of Mr such that deg(A) ≤ n + δ.

It is clear that each (n, Exp(n)) is in Mr,d (d = deg(G0)) and each pair (n, Mj,n) is in Mr,0.

Definition 8 A sequence n ↦ (n, An) in Mr,δ is said to be nondecreasing if for all n we have An[1] ⊂ An+1. It is said to be stationary (after n0) if for all n ≥ n0 we have An[1] = An+1.

It is clear that the sequence n ↦ (n, Exp(n)) is nondecreasing in Mr,d and that the sequences n ↦ (n, Mj,n) are nondecreasing in Mr,0. We first establish the following nonconstructive lemma.

Lemma 3 (classical lemma à la Dickson) For all integers r ≥ 1 and δ ≥ 0, a nondecreasing sequence n ↦ (n, An) in Mr,δ is stationary.

Proof The proof is by induction on r.

Case r = 1. Here we use a notion of "gap" in a finite subset of M1. This is unambiguous since the ordering is linear. Either An is empty for all n and the sequence is stationary. Or there exists a first integer n1 such that An1 is nonempty. For n ≥ n1 the two sequences n ↦ Fri(An) and n ↦ n + δ − deg(An) are nonincreasing sequences. So they are stationary after some n2 ≥ n1. If γ is the width of the largest gap in An2, the sequence n ↦ (n, An) is surely stationary after n2 + γ.

From r − 1 to r (r ≥ 2). Let n ↦ (n, An) be a nondecreasing sequence in Mr,δ. Either An is empty for all n and the sequence is stationary. Or there exists a first integer n1 such that An1 is nonempty. For n ≥ n1 the sequence n ↦ n + δ − deg(An) is nonincreasing. So it is stationary after some n2 ≥ n1. Let x^a (a = (a1, . . . , ar)) be a highest degree monomial in An2 and let B = {x^a}. For n ≥ n2 we have

  An = B[n−n2] ∪ ∪_{b,j} (Hb,j ∩ An)

where (Hb,j)(b,j) (with j ∈ {1, . . . , r}) is a finite family of "affine hyperplanes" in Mr. Precisely, each Hb,j is the set of monomials x^α where αj = b, with some b < aj. Each sequence n ↦ (n, Hb,j ∩ An) may be seen as a nondecreasing sequence in Mr−1,δ. By the induction hypothesis, all these sequences are stationary (in their respective affine subspaces) after some n3 ≥ n2. Let Bn := B[n−n2] (whence Bn[1] = Bn+1) and Cn,b,j := Hb,j ∩ An. We see easily that for n ≥ n2 we have

  An[1] = Bn[1] ∪ ∪_{b,j} Cn,b,j[1]_{r−1}

(the index r−1 for [1] means that Cn,b,j[1]_{r−1} is viewed in its space Mr−1). Indeed it is impossible that any Cn,b,j[1] (viewed in Mr) with b = aj − 1 has an element "upper" than Bn+1. This ends the proof. □

Nonconstructive proof of theorem 2 From the previous lemma, the sequence n ↦ (n, Exp(n)) is stationary in Mr,d and the sequences n ↦ (n, Mj,n) are stationary in Mr,0 after some n0. So for m ≥ n ≥ n0 we have SMIC(n, m) = SMIC(n0, m) and SMAC(n, m) = SMAC(n0, m). Applying fact 3, the functions m ↦ SMIC(n0, m) and m ↦ SMAC(n0, m) (for m ≥ n0) are equal. □

6 Implementing the SMIC–SMAC algorithm

Good triangulations of successive Gn

A first issue when implementing the algorithm is a good management of the successive triangulations of the matrices Gn. This is the job of experts in structured matrices (see the book [2] for block Toeplitz matrices and the papers [5], [6] and [13] for quasi-Toeplitz matrices).

Examples in two variables

A difficult task when implementing the algorithm is making explicit the test

  ∀m ≥ n   SMIC(n, m) = SMAC(n, m) ?

This test has to be done when there remains exactly one empty Mj,n and when SMIC(n, n+1) = SMAC(n, n+1) (these things are easy to test). It seems very unlikely that for some n there remains exactly one empty Mj,n, SMIC(n, n+1) = SMAC(n, n+1), and the two functions SMIC(n, .) and SMAC(n, .) do not coincide. So it is perhaps sufficient to use the naive process obtained from lemma 1: counting each #B[m] until m = deg(ppcm(B)) − val(B). This nevertheless seems too costly. So we shall sketch two other processes. But let us first see two new examples.

Intuitively, from the proof of lemma 1, the growth function of A[n] could be equal to its growth polynomial when A[n] fills the "gaps" between the lower frontier and the ppcm of the monomials in A. This is true in the following examples, with a counting process similar to the one given in example 1.

Example 2 With the following set B, we get val(B) = 3, ppcm(B) = x1^6 x2^5, deg(ppcm(B)) = 11, deg(B) = 8, and the growth function of B[n] becomes equal to its growth polynomial at step number 6, exactly the number given in lemma 1.

[Figures omitted: the set B and the set B[6] (the ppcm is in O).]

We get HB(n) = (n−1)(n−2)/2 + 5n + 4n + 8 = n(n−1)/2 + 8n + 9, with a counting process similar to the one of example 1 [figure omitted].

Example 3 With the following set C, we get val(C) = 3, ppcm(C) = x1^7 x2^7, deg(ppcm(C)) = 14, deg(C) = 10, and the growth function of C[n] becomes equal to its growth polynomial at step number 4. The number given in lemma 1 is 9.

[Figures omitted: the set C and the set C[4] (the ppcm is in O).]

We get HC(n) = (n+1)(n+2)/2 + 6n + 2n + 11 = n(n−1)/2 + 10n + 12, with a counting process similar to the one of example 1 [figure omitted].

It is also easy to give examples where the counting process given in the previous examples works even before the ppcm of A belongs to A[n].

Some hints for computing fast the growth functions of subsets of Mr

Our first suggestion is to try to write, in an economical way, a finite set A ⊂ Mr as a union of sets Xi[mi] with suitable singletons Xi:

  A = ∪_{i∈I} Xi[mi]

As A[m] is the union of the Xi[m+mi], we can find its cardinality using formulas similar to the ones in the proof of lemma 1. Once we have obtained an economical writing for A, if we consider a new set A′ containing A[1] (this is the case when running the SMIC–SMAC algorithm), we remark that if A′ is the disjoint union of A[1] and C then we have

  A′ = ∪_{i∈I} Xi[mi+1] ∪ ∪_{X singleton ⊂ C} X.

It would be useful to find a process for simplifying, if possible, this writing.

A second suggestion is to use a counting process similar to the one we have seen in two variables. Such a process consists in partitioning efficiently the space Mr into a disjoint union of "affine subspaces" Hu in such a way that the intersections Hu ∩ A[m] are easy to describe. Namely, each intersection could be a set defined, inside the affine subspace Hu, as the set of monomials that are multiples of a given monomial and whose degree is smaller than max(0, m + m0) (where m0 ∈ Z). This could be done recursively w.r.t. the dimension of Hu as follows.

First, we search for a monomial x^α with maximal depth in A, i.e., a monomial of minimal degree among the ones for which the set {x^β : α|β and |β| ≤ deg(A)} is contained in A. The first affine subspace is H1 := {x^β : α|β}. There are corresponding affine subspaces Hb,j of dimension r − 1 (for j = 1, . . . , r and b < αj), and we define

  Hb,j := {x^β : βj = b} ∩ {x^β : βk ≥ αk for all k < j} = {x^β : βj = b, βk ≥ αk for all k < j}.

So Mr is the disjoint union of H1 and the affine subspaces Hb,j.

We say that the situation is convex if we have:

  ∀j ∀b < αj   ( Hb,j ∩ A ≠ ∅ ⇒ ∀b′ ∈ [b, αj[ , Hb′,j ∩ A ≠ ∅ ).

Perhaps it is necessary to consider A[m] for sufficiently large m so that the situation becomes convex. For such an m, each nonempty Hb,j ∩ A[m] is described in a similar way. First we search in Hb,j for a monomial with maximal depth w.r.t. Hb,j ∩ A[m], i.e., a monomial of minimal degree among the ones in Hb,j for which the set Hb,j ∩ {x^β : α|β and |β| ≤ deg(A[m] ∩ Hb,j)} is nonempty and contained in A[m] ∩ Hb,j. And so on.
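As a concrete version of the naive process, here is a sketch (Python, reusing the hypothetical helpers of the earlier sketches) of the test "∀m ≥ n, SMIC(n, m) = SMAC(n, m)". Beyond the lemma-1 bounds both sides are polynomials of degree ≤ r in m, and two polynomials of degree ≤ r agreeing at r + 1 consecutive points agree everywhere; so it suffices to compare finitely many values. The optimizations suggested above are ignored.

from math import comb

def lemma1_bound(A):
    """deg(ppcm(A)) - val(A) - r for a nonempty finite set of exponent tuples."""
    r = len(next(iter(A)))
    lcm = tuple(max(c) for c in zip(*A))
    return max(0, sum(lcm) - min(sum(a) for a in A) - r)

def smic_value(exp_n, r, n, m):
    return growth_function(exp_n, r, m - n)        # growth_function: sketch of section 1

def smac_value(M, r, n, m):
    # M is the list of the sets Mj,n (possibly empty) of exponent tuples
    return sum(comb(m + r, r) - growth_function(Mj, r, m - n) for Mj in M)

def test_smic_equals_smac(exp_n, M, r, n):
    """True iff SMIC(n, m) = SMAC(n, m) for all m >= n (naive test via lemma 1)."""
    sets = [exp_n] + [Mj for Mj in M if Mj]
    bound = max((lemma1_bound(A) for A in sets if A), default=0)
    return all(smic_value(exp_n, r, n, m) == smac_value(M, r, n, m)
               for m in range(n, n + bound + r + 1))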

7 A constructive proof that the SMIC–SMAC algorithm stops

We begin with one more notation.

Notation 9 If A is a finite subset of Mr and k is an integer > 0, we denote by A[−k] the set of monomials x^α such that {x^α}[k] ⊂ A. The set A[−k] is the maximal set B such that B[k] ⊂ A. We have

  A[−k][k] ⊂ A ⊂ A[k][−k],   A[−k][−ℓ] = A[−k−ℓ] (ℓ > 0, k ≥ 0),   A ⊂ B ⇒ A[−k] ⊂ B[−k]

so that A[−k] = A[−k][k][−k] and A[k] = A[k][−k][k].

This section is divided into three subsections. In the first one, we discuss the notion of positive and negative gaps in a finite subset A of Mr. This discussion leads to the definitions given in the second one. These definitions are useful for our constructive proof in the last one.
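A minimal sketch of the operation A ↦ A[−k] (Python, reusing shift_n from the sketch in section 1; the name is ours). Since x^α ∈ {x^α}[k], we always have A[−k] ⊂ A, so it suffices to test the elements of A themselves.

def shift_neg(A, r, k):
    """A^[-k]: the monomials x^alpha with {x^alpha}^[k] included in A (so A^[-k] is a subset of A)."""
    A = set(A)
    return {alpha for alpha in A if shift_n({alpha}, r, k) <= A}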

What is a gap, and what is a set without a gap?

In example 1, if we let C = A[4][−4], we get C[4] = A[4], with the growth function of C equal to the growth polynomial of A. In the corresponding drawing (omitted here), A is indicated with 0 and C \ A with •. In this example, it seems reasonable to consider that the "gaps" of A are the elements of C \ A. We see that the growth function of A minorizes its growth polynomial. But this is not always the case.

Example 4 With the following set D, for example,

[Figure omitted: the set D.]

we get HD(n) = n(n−1)/2 + 6n − 1, so that HD(1) = 5 and HD(0) = −1. So HD(0) = FD(0) − 3 and HD(1) = FD(1) − 1.

[Figure omitted.]

The growth function of D majorizes its growth polynomial. We could perhaps indicate "negative gaps" o of D in M2 in the following way (if we give weight −1 to negative gaps we should find the growth function equal to the growth polynomial):

[Figure omitted: the set D with negative gaps marked o.]

Example 5 But we can also find a set E with a positive gap and a negative gap, so that its growth polynomial coincides with its growth function although E does have a gap.

[Figure omitted: the set E.]

We get, for all n,

  FE(n) = HE(n) = n(n−1)/2 + 9n + 9

as we can see by considering E[2].

[Figure omitted.]

We can mark a gap x and perhaps a negative gap o.

[Figure omitted: the set E with a gap marked x and a negative gap marked o.]

Nevertheless, it seems difficult to give a satisfactory definition of gaps and negative gaps in the general case. So in the following subsection we content ourselves with giving definitions corresponding to the intuition of finite subsets without positive gaps, or without positive or negative gaps.

Three useful definitions

The following definitions are made in order to facilitate our constructive proof. In definition 10 we deal only with positive gaps.

Definition 10 A finite subset A of Mr is said to be without a gap if for all integers k ≥ 0 and ℓ > 0 we have A[k] = A[k+ℓ][−ℓ].

So if A is without a gap, the same is true for all A[k] (k ≥ 0). Moreover, we immediately get the following.

Fact 4 Assume A without a gap, A[k] ⊂ B and A[k+ℓ] = B[ℓ]. Then A[k] = B. In another way: if A is without a gap, A[k] ⊂ B and A[k] ≠ B, then for all integers ℓ > 0 we have A[k+ℓ] ≠ B[ℓ] and FA(k + ℓ) < FB(ℓ).

Indeed (in the first case) we have B ⊂ B[ℓ][−ℓ] = A[k+ℓ][−ℓ] = A[k].

The following definition corresponds to the intuition of a set without a gap nor a negative gap.

Definition 11 A finite subset A of Mr is said to be stable if it is without a gap and if its growth function coincides with its growth polynomial.

So if A is stable, the same is true for all A[k] (k ≥ 0). Probably, for any finite set A, the set A[cA] is stable, where cA is the integer deg(ppcm(A)) − val(A) − r given in lemma 1.

Definition 12 A sequence k ↦ (nk, Ak) in Mr,δ is said to be nondecreasing if for all k < ℓ we have nk < nℓ and Ak[nℓ−nk] ⊂ Aℓ. We say that the sequence pauses between k and ℓ > k if Ak is stable and Ak[nℓ−nk] = Aℓ.

Applying fact 4, we see that the sequence pauses between k and ℓ iff it pauses between k and k + 1, k + 1 and k + 2, . . . , ℓ − 1 and ℓ. Remark that it is important that Ak be without a gap.

It is clear that any subsequence of n ↦ (n, Exp(n)) is nondecreasing in Mr,d and that any subsequence of n ↦ (n, Mj,n) is nondecreasing in Mr,0.
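Definitions 10 and 11 quantify over all k ≥ 0 and ℓ > 0, so they cannot be decided by a naive finite enumeration. The following sketch (Python, reusing shift_n, shift_neg and growth_function from the earlier sketches) only checks the defining conditions up to user-supplied bounds; it is an experimental aid, not a decision procedure.

def without_gap_up_to(A, r, kmax, lmax):
    """Check A^[k] = A^[k+l]^[-l] for 0 <= k <= kmax and 1 <= l <= lmax only."""
    A = set(A)
    for k in range(kmax + 1):
        Ak = shift_n(A, r, k)
        for l in range(1, lmax + 1):
            if shift_neg(shift_n(Ak, r, l), r, l) != Ak:
                return False
    return True

def stable_up_to(A, r, kmax, lmax):
    """Bounded check of 'without a gap' plus agreement of the growth function
    with a polynomial of degree <= r from step 0 on."""
    if not without_gap_up_to(A, r, kmax, lmax):
        return False
    # a function agrees everywhere with a polynomial of degree <= r iff its
    # (r+1)-st finite differences vanish; here this is only tested on a finite range
    values = [growth_function(A, r, n) for n in range(kmax + r + 2)]
    diffs = values
    for _ in range(r + 1):
        diffs = [b - a for a, b in zip(diffs, diffs[1:])]
    return all(d == 0 for d in diffs)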

Constructive lemma à la Dickson, and constructive proof of theorem 2

Lemma 4 (constructive lemma à la Dickson) For all integers r ≥ 1 and d ≥ 0, any nondecreasing sequence k ↦ (nk, Ak) in Mr,d pauses between two successive indices.

Proof The proof is by induction on r.

Case r = 1. A subset of M1 is stable iff the set of degrees is an interval in N. We consider a nondecreasing sequence k ↦ (nk, Ak) in M1,d. If A0 = A1 is empty, the sequence pauses between 0 and 1. In the opposite case, assume A0 is nonempty. Let fk := (nk + d) − deg(Ak) + deg(Fri(Ak)). The sequence k ↦ fk is nonincreasing in N. Let δk be the width of the largest gap in Ak. If δk = 0 and fk+1 < fk, then the largest gap in Ak+1 (if there is any) is at the beginning or at the end and its width is ≤ fk − fk+1 − 1, so fk+1 + δk+1 < fk = fk + δk. If δk > 0 and fk+1 = fk, then δk+1 = δk − 1 and fk+1 + δk+1 < fk + δk. If δk > 0 and fk+1 < fk, then the largest gap in Ak+1 is either inside Ak (and δk+1 < δk) or it is at the beginning or at the end (and fk+1 + δk+1 < fk < fk + δk). In all these cases, we have fk+1 + δk+1 < fk + δk. Finally the sequence pauses between k and k + 1 iff δk = 0 and fk+1 = fk. This happens after at most f0 + δ0 < n0 + d steps.

From r − 1 to r (r ≥ 2). Remark first that since "any nondecreasing sequence in Mr−1,d pauses" (with definition 12), then "any finite list of nondecreasing sequences in Mr−1,d pause simultaneously". It suffices to see the case of two sequences k ↦ (nk, Ak) and k ↦ (mk, Bk). If ki is the i-th value for which k ↦ (nk, Ak) pauses between ki and ki+1, we consider the sequence i ↦ (mki, Bki). This sequence pauses between some t and t + 1. Then we see that k ↦ (nk, Ak) and k ↦ (mk, Bk) simultaneously pause between kt and kt + 1.

Now we consider a nondecreasing sequence k ↦ (nk, Ak) in Mr,d. If A0 = A1 is empty, the sequence pauses between 0 and 1. In the other case, assume A0 is nonempty. Let fk := (nk + d) − deg(Ak). The sequence k ↦ fk is nonincreasing in N. We let k0 = 0. We let k1, . . . , ku, . . . be the successive indices for which fku < fku−1. The sequence u ↦ ku is a finite sequence with no more than f0 terms. For some given u, we consider a monomial x^{a(u)}, with a(u) = (a1(u), . . . , ar(u)), of highest total degree in Aku. We let B(u) = {x^{a(u)}}. For k ≥ ku we write

  Ak = B(u)[nk−nku] ∪ ∪_{b,j} (Hb,j ∩ Ak)

where the Hb,j (with j ∈ {1, . . . , r}) are a finite family of "affine hyperplanes" in Mr. Precisely, each Hb,j is the set of monomials x^α where αj = b, with some b < aj(u). For k ≥ ku let

  Bu,k := B(u)[nk−nku]   and   Cu,k,b,j := Hb,j ∩ Ak

The sequence k ↦ (nk, Bu,k) is stationary in Mr,d and the sequences k ↦ (nk, Cu,k,b,j) are nondecreasing sequences in Mr−1,d. By the induction hypothesis, these sequences pause simultaneously between hu and hu + 1 for some hu ≥ ku. If fhu+1 = fku then with ℓ = hu we have

  Aℓ[nℓ+1−nℓ] = Bu,ℓ[nℓ+1−nℓ] ∪ ∪_{b,j} Cu,ℓ,b,j[nℓ+1−nℓ]_{r−1}

(the index r − 1 for [nℓ+1 − nℓ] means that Cu,ℓ,b,j[nℓ+1−nℓ]_{r−1} is viewed in its space Mr−1). Indeed it is impossible that any Cu,ℓ,b,j[nℓ+1−nℓ] (viewed in Mr) with b = aj(u) − 1 has an element "upper" than Bu,ℓ[nℓ+1−nℓ]. So, the sequence k ↦ (nk, Ak) pauses between ℓ and ℓ + 1. In this case we define hv = hu for all v > u. If fhu+1 < fku then ku+1 is well defined and is between ku and hu + 1. In this case we proceed in the same way with u + 1 and we get hu+1. So the sequence u ↦ hu is a well defined infinite nondecreasing sequence. As it cannot be increasing for more than f0 terms, it certainly pauses. This ends the proof. □

Fact 5 Assume that Exp(n) is stable with Exp(n + r) = Exp(n)[r] and that, for j = 1, . . . , s, Mj,n is stable with Mj,n+r = Mj,n[r]. Then ∀m ≥ n SMIC(n, m) = SMAC(n, m).

Proof Indeed, for h = 1, . . . , r we have Exp(n + h) = Exp(n)[h] and (for j = 1, . . . , s) Mj,n+h = Mj,n[h]. From facts 3 and 4, we get SMIC(n, n + h) = SMAC(n, n + h) successively for h = 1, . . . , r. So these two polynomials of degree r are equal. □

Constructive proof of theorem 2 From lemma 4, the sequence n ↦ (nr, Exp(nr)) and the sequences n ↦ (nr, Mj,nr) pause simultaneously in Mr,d. We conclude by applying fact 5. □

Remark that this proof of theorem 2 gives a new constructive proof of the existence of a Gröbner basis for an ideal of finite type in k[x1, . . . , xr]. This also gives a new constructive proof of the Hilbert basis theorem (see [14], [15], [12], [11]).

Conclusion

This section provides intuitive arguments supporting the idea that our algorithm might have lower complexity than the Buchberger algorithm. Then we outline further research.

The SMIC–SMAC algorithm is a decisive generalization of the algorithm given in two variables in the thesis [9] of A. Kanber, Algorithme de calcul d'une base de Gröbner d'un Idéal de K[X, Y] par Triangulation de "Matrices de Sylvester", written under the direction of the author and of S. Labhalla. The Kanber algorithm was itself a decisive generalization of the algorithm [10] yielding Hermite's reduction of a matrix with univariate polynomial coefficients. For the algorithm in two variables, A. Kanber uses a very subtle definition of the "upper frontier" of Exp(n). We do not know if this definition can be generalized to more variables. Its merit is that it simplifies the test for terminating the algorithm. This test is slightly different from the one in the SMIC–SMAC algorithm. A first implementation seems to show a significant memory saving w.r.t. the classical Buchberger algorithm.

The SMIC–SMAC algorithm and those in [10] and [9] were motivated by the juxtaposition of Euclid's algorithm and the subresultant algorithm. In the latter, an unnecessary coefficient growth is avoided, e.g., when the coefficients are integers. Experiments by G. Villard using the Hermipol algorithm given in [10] have shown significant memory savings w.r.t. the usual approach using successive divisions. We hope that a similar memory saving occurs with our algorithm in comparison to the Buchberger algorithm. There might also be an improvement concerning the growth of the degrees of intermediate polynomials. On the other hand, our algorithm might create more intermediate polynomials.

Several variants and extensions of the SMIC–SMAC algorithm are worthwhile. It should be interesting, first, to make a careful study of the homogeneous case, second, to generalize our approach to the computation of other kinds of Gröbner bases, e.g., for modules, and third, to analyze the relations between the SMAC and the syzygy module.

We end the paper with a puzzling question. The function m ↦ em = dim(Em) is equal, after some n0, to the polynomial H(m) = SMIC(∞, m) = SMAC(∞, m), which equals, by definition, the value of the functions m ↦ SMIC(n, m) and m ↦ SMAC(n, m) for n and m large enough. Remark that the function SMIC depends a priori on the chosen admissible ordering for monomials. But the function SMAC and the function m ↦ em = dim(Em), and a fortiori the function H(m) = SMIC(∞, m) = SMAC(∞, m), are all independent of the admissible ordering. Yet, around step n0 (between step n0/r and step r·n0, for example) the set G computed by the SMIC–SMAC algorithm should intuitively become a Gröbner basis of the ideal. Such a result would imply a small difference between the degrees of different minimal Gröbner bases for distinct admissible orders, in contradiction with usual ideas on this topic. So it seems necessary to embark on a careful study: if the set G computed at step n1 by the SMIC–SMAC algorithm is a Gröbner basis of the ideal, at which step are we sure that the algorithm stops, and at which step are we sure that em becomes equal to H(m)?

Acknowledgements Many thanks to A. Galligo, M.-F. Roy, L. González-Vega, L. Pottier, H. Perdry, A. Kanber, B. Sadik, M. El Kahoui and S. Labhalla for all the discussions, suggestions and encouragements that allowed us to elaborate and improve the SMIC–SMAC algorithm.

References

[1] Becker T., Weispfenning V.: Gröbner Bases. Springer, 1993.
[2] Bini D., Pan V.: Polynomial and matrix computations, vol. 1: Fundamental Algorithms. Birkhäuser, 1994.
[3] Buchberger B.: Gröbner Bases: an algorithmic method in polynomial ideal theory. In Multidimensional Systems Theory, ed. Bose N. K., D. Reidel Publishing Company, Dordrecht, 1985, 184–232.
[4] Cox D., Little J., O'Shea D.: Ideals, Varieties, and Algorithms. Springer Verlag UTM, 1992.
[5] Emiris I., Pan V.: The structure of sparse resultant matrices. Proceedings ISSAC 97, ACM Publications, 1997.
[6] Emiris I., Pan V.: Symbolic and numeric methods for exploiting structure in constructing resultant matrices. Preprint.
[7] Galligo A.: Algorithmes de calcul de Bases Standard. Technical Report, Université de Nice, 1983.
[8] Hardy G., Wright E.: An introduction to the theory of numbers. 5th edition, Clarendon Press, 1979.
[9] Kanber A.: Algorithme de calcul d'une base de Gröbner d'un Idéal de K[X, Y] par Triangulation de "Matrices de Sylvester". Thèse de 3ème cycle, novembre 97, Marrakech.
[10] Labhalla S., Lombardi H., Marlin R.: Algorithmes de calcul de la réduction de Hermite d'une matrice à coefficients polynomiaux. Theoretical Computer Science, 161, 1996, 69–92.
[11] Lombardi H., Perdry H.: The Buchberger Algorithm as a Tool for Ideal Theory of Polynomial Rings in Constructive Mathematics. In "Gröbner Bases and Applications (Proc. of the Conference 33 Years of Gröbner Bases)", B. Buchberger and F. Winkler (eds.), Cambridge University Press, London Mathematical Society Lecture Notes Series, vol. 251, 1998.
[12] Mines R., Richman F., Ruitenburg W.: A Course in Constructive Algebra. Universitext, Springer-Verlag, 1988.
[13] Mourrain B., Pan V.: Solving special polynomial systems by using structured matrices and algebraic residues. Proc. Workshop on Foundations of Computational Mathematics, Cucker F., Shub M. (eds), Springer LNCS, 1997.
[14] Richman F.: Constructive aspects of Noetherian rings. Proc. Amer. Math. Soc. 44, 1974, 436–441.
[15] Seidenberg A.: What is Noetherian? Rend. Sem. Mat. e Fis. di Milano 44, 1974, 55–61.

Contents

Introduction
1 Preliminaries
2 The SMIC
3 The SMAC
4 The SMIC–SMAC algorithm
5 A proof that the SMIC–SMAC algorithm stops
6 Implementing the SMIC–SMAC algorithm
7 A constructive proof that the SMIC–SMAC algorithm stops
Conclusion