Efficient Methods for Computing Optimality Degrees of Elements in Fuzzy Weighted Matroids

Jérôme Fortin¹, Adam Kasperski², and Paweł Zieliński³

¹ IRIT/UPS, 118 route de Narbonne, 31062 Toulouse Cedex 4, France, [email protected]
² Institute of Industrial Engineering and Management, Wrocław University of Technology, Wybrzeże Wyspiańskiego 27, 50-370 Wrocław, Poland, [email protected]
³ Institute of Mathematics and Computer Science, Wrocław University of Technology, Wybrzeże Wyspiańskiego 27, 50-370 Wrocław, Poland, [email protected]

Abstract. In this paper we present efficient methods for computing the exact degrees of possible and necessary optimality of an element of a matroid whose ill-known weights are modeled by fuzzy intervals.

1 Introduction

In combinatorial optimization problems, we are given a set of elements E, and a weight w_e is associated with each element e ∈ E. We seek an object composed of the elements of E for which the total weight is maximal (minimal). In the deterministic case the elements of E can be divided into two groups: those which belong to an optimal solution (optimal elements) and those which do not. In this paper, we consider the case in which the weights are imprecise and are modeled by classical intervals and by fuzzy intervals. In the interval-valued case the elements form three groups: those that are optimal for sure (necessarily optimal elements), those that are for sure not optimal, and the elements whose optimality is unknown (possibly optimal elements). In the fuzzy-valued case the weights of the elements are modeled by possibility distributions [1]. In this case the notions of possible and necessary optimality can be extended, and every element can be characterized by degrees of possible and necessary optimality. In this paper, we investigate the combinatorial optimization problem that can be formulated on a matroid (a good introduction to matroids can be found in [4]). The case of interval-valued weights is addressed first and is then extended to fuzzy intervals. The main results of this paper are two efficient algorithms, based on the profile approach [2], for calculating the exact values of the degrees of possible and necessary optimality of a given element.

2 Preliminaries

Consider a system (E, I), where E = {e_1, ..., e_n} is a nonempty ground set and I is a collection of subsets of E closed under inclusion, i.e. if B ∈ I and A ⊆ B then A ∈ I. The system (E, I) is a matroid (see e.g. [4]) if it satisfies the following growth property: if A ∈ I, B ∈ I and |A| < |B|, then there exists e ∈ B \ A such that A ∪ {e} ∈ I. The maximal (under inclusion) independent sets in I are called bases; the minimal (under inclusion) sets not in I are called circuits. Constructing a base is not a difficult issue: if σ specifies an order of the elements of E, then the corresponding base B_σ can be constructed by the simple Algorithm 1. We call B_σ the base induced by σ.
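For intuition, the two axioms above can be verified by brute force on a small explicit family of independent sets. The following sketch is not part of the original text; it uses the uniform matroid U(2, 3) (all subsets of size at most 2) as an assumed example.

```python
from itertools import chain, combinations

def powerset(s):
    """All subsets of s, as sets."""
    s = list(s)
    return [set(c) for c in chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

def is_matroid(I):
    """Brute-force check of the two matroid axioms on an explicit family I."""
    fam = [frozenset(A) for A in I]
    # Closure under inclusion: every subset of an independent set is independent.
    hereditary = all(frozenset(A) in fam for B in fam for A in powerset(B))
    # Growth property: a smaller independent set can always be augmented from a larger one.
    growth = all(
        any(frozenset(A | {e}) in fam for e in B - A)
        for A in fam for B in fam if len(A) < len(B)
    )
    return hereditary and growth

# Uniform matroid U(2, 3): all subsets of {1, 2, 3} with at most 2 elements.
E = {1, 2, 3}
I = [A for A in powerset(E) if len(A) <= 2]
print(is_matroid(I))  # True
```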

Algorithm 1: Constructing a base of a matroid
Input: A matroid M = (E, I), a sequence σ = (e_1, ..., e_n) of E.
Output: A base B_σ of M.
B_σ ← ∅
for i ← 1 to n do
    if B_σ ∪ {e_i} ∈ I then B_σ ← B_σ ∪ {e_i}
return B_σ
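Algorithm 1 can be sketched in Python as follows, with the independence test supplied as an oracle callable (a hypothetical interface; the paper leaves the oracle's internals, and hence f(n), abstract).

```python
from typing import Callable, Hashable, Iterable, Set

def build_base(sigma: Iterable[Hashable],
               is_independent: Callable[[Set], bool]) -> Set:
    """Algorithm 1: scan sigma once, greedily keeping every element that
    preserves independence. The oracle plays the role of membership in I."""
    base: Set = set()
    for e in sigma:
        if is_independent(base | {e}):
            base.add(e)
    return base

# Example on the uniform matroid U(2, 4): a set is independent iff |A| <= 2,
# so the greedy scan keeps the first two elements of the sequence.
indep = lambda A: len(A) <= 2
print(build_base(["a", "b", "c", "d"], indep))  # {'a', 'b'}
```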

The running time of Algorithm 1 is O(nf(n)), where f(n) is the time required for deciding whether the set B ∪ {e_i} contains a circuit; this time depends on the particular structure of the matroid. Let us denote by pred(e, σ) the set of elements which precede element e in sequence σ. The following property of matroids can be proven [3].

Proposition 1. Let σ and ρ be two sequences of the elements of E. Let e ∈ E be an element such that pred(e, σ) ⊆ pred(e, ρ). If e ∉ B_σ then e ∉ B_ρ.

In the combinatorial optimization problem on a matroid, a nonnegative weight w_e is given for every element e ∈ E and we seek a base B for which the cost Σ_{e∈B} w_e is maximal. This problem can be solved by a greedy algorithm, that is, Algorithm 1 with a sequence σ in which the elements are sorted in nonincreasing order of their weights. The greedy algorithm constructs an optimal base in O(n log n + nf(n)) time. Evaluating whether a given element f ∈ E is optimal, i.e. whether f is a part of an optimal base, is not a difficult issue. Let σ*(w, f), f ∈ E, denote the special sequence of elements of E in which the elements are sorted in nonincreasing order of their weights w_e, e ∈ E, where w is the vector of weights; moreover, if w_f = w_e, e ≠ f, then element f precedes element e in this sequence. The following proposition gives a necessary and sufficient condition for establishing whether a given element is optimal [3].

Proposition 2. A given element f is optimal if and only if f is a part of the optimal base B_{σ*(w,f)} induced by σ*(w, f).

Proposition 2 suggests an O(n log n + nf(n)) method for evaluating the optimality of f, where O(n log n) is the time required for forming σ*(w, f). This complexity can be improved. Let σ(w, f), f ∈ E, denote any sequence such that pred(f, σ(w, f)) = {e ∈ E : w_e > w_f}. Clearly, σ(w, f) can be obtained in O(n) time, since it is not necessary to order the elements (we only require the elements e ∈ E such that w_e > w_f to appear before f).

Proposition 3. A given element f is optimal if and only if f is a part of the base B_{σ(w,f)} induced by σ(w, f).

Proof. It is clear that pred(f, σ(w, f)) = pred(f, σ*(w, f)). Thus, by Proposition 1, f ∈ B_{σ*(w,f)} if and only if f ∈ B_{σ(w,f)}. Hence, by Proposition 2, f is optimal if and only if f ∈ B_{σ(w,f)}. □

From Proposition 3, we immediately obtain a method for evaluating the optimality of an element. It requires O(n + nf(n)) = O(nf(n)) time, where O(n) is the time for forming the sequence σ(w, f) and O(nf(n)) is the time for constructing the base B_{σ(w,f)} by Algorithm 1. In the case when an element f ∈ E is not optimal (i.e. it is not a part of any optimal base), a natural question arises: how far is f from optimality? In other words, what is the minimal nonnegative real number δ_f that, added to the weight of f, makes it optimal? Clearly, δ_f can be calculated as follows: δ_f = max_{B∈B} Σ_{e∈B} w_e − max_{B∈B_f} Σ_{e∈B} w_e, where B is the set of all bases and B_f is the set of all bases containing f.
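Proposition 3 can be turned into code almost verbatim: place every strictly heavier element before f, run the greedy scan, and test membership. The sketch below assumes a uniform-matroid oracle purely for illustration; the weights and element names are hypothetical.

```python
from typing import Callable, Dict, Hashable, Set

def is_optimal(f: Hashable, weights: Dict[Hashable, float],
               is_independent: Callable[[Set], bool]) -> bool:
    """Proposition 3: f is optimal iff f survives the greedy scan of a
    sequence sigma(w, f) that lists every strictly heavier element first."""
    heavier = [e for e in weights if weights[e] > weights[f]]
    rest = [e for e in weights if e != f and weights[e] <= weights[f]]
    sigma = heavier + [f] + rest          # no sorting needed: O(n)
    base: Set = set()
    for e in sigma:
        if is_independent(base | {e}):
            base.add(e)
    return f in base

# Uniform matroid U(2, 4): any set of at most 2 elements is independent.
w = {"a": 5, "b": 4, "c": 3, "d": 1}
indep = lambda A: len(A) <= 2
assert is_optimal("b", w, indep)       # second-heaviest element fits a maximum base
assert not is_optimal("c", w, indep)   # blocked by the two heavier elements
```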

3 Evaluating the optimality of elements in interval-valued matroids

Consider now the case in which the values of the weights are only known to belong to intervals W_e = [w_e^-, w_e^+], e ∈ E. We define a configuration as a precise instantiation of the weights of the elements, i.e. w = (w_e)_{e∈E}, w_e ∈ W_e. We denote by Γ the set of all configurations, i.e. Γ = ×_{e∈E} [w_e^-, w_e^+]. We use w_e(w) to denote the weight of element e ∈ E in configuration w ∈ Γ. Among the configurations of Γ we distinguish two extreme ones, namely the configurations w^+_{f} and w^-_{f} such that, for e ∈ E:

w_e(w^+_{f}) = w_e^+ if e = f and w_e^- otherwise,   w_e(w^-_{f}) = w_e^- if e = f and w_e^+ otherwise.   (1)

A given element f ∈ E is possibly optimal if and only if it is optimal in some configuration w ∈ Γ. A given element f ∈ E is necessarily optimal if and only if it is optimal in all configurations w ∈ Γ. Instead of being optimal or not, as in the deterministic case, the elements now form three groups: those that are optimal for sure despite the uncertainty (necessarily optimal elements), those that are for sure not optimal, and the elements whose optimality is unknown (possibly optimal elements). Note that if an element f ∈ E is necessarily optimal, then it is also possibly optimal, but the converse statement is not true.

We can obtain more information about the optimality of f ∈ E. Let δ_f(w), w ∈ Γ, denote the minimal nonnegative real number such that f with weight w_f(w) + δ_f(w) becomes optimal in configuration w. Let us define δ_f^- = min_{w∈Γ} δ_f(w) and δ_f^+ = max_{w∈Γ} δ_f(w). The interval Δ_f = [δ_f^-, δ_f^+] indicates how far f is from being possibly (resp. necessarily) optimal. There are obvious connections between the notions of optimality and the bounds δ_f^- and δ_f^+ of an element f ∈ E.

Proposition 4. An element f is possibly (resp. necessarily) optimal if and only if δ_f^- = 0 (resp. δ_f^+ = 0).

The following theorems characterize the possibly and the necessarily optimal elements.

Theorem 1. Element f ∈ E is possibly optimal if and only if f is a part of the base B_{σ(w^+_{f},f)} induced by σ(w^+_{f}, f).

Proof. (⇒) From the possible optimality of f, it follows that there exists a configuration w ∈ Γ in which f is optimal. Proposition 3 implies that f is a part of the base B_{σ(w,f)} induced by σ(w, f). It is easy to observe that pred(f, σ(w^+_{f}, f)) ⊆ pred(f, σ(w, f)). Proposition 1 now yields f ∈ B_{σ(w^+_{f},f)}.
(⇐) If f ∈ B_{σ(w^+_{f},f)} then f is optimal under configuration w^+_{f}, by Proposition 3, and thus f is possibly optimal. □

Theorem 2. Element f ∈ E is necessarily optimal if and only if f is a part of the base B_{σ(w^-_{f},f)} induced by σ(w^-_{f}, f).

Proof. (⇒) If f is necessarily optimal then it is optimal for all configurations, in particular for w^-_{f}. By Proposition 3, f ∈ B_{σ(w^-_{f},f)}.
(⇐) Suppose f ∈ B_{σ(w^-_{f},f)}. Consider any configuration w ∈ Γ. It is easy to see that pred(f, σ(w, f)) ⊆ pred(f, σ(w^-_{f}, f)). We conclude from Proposition 1 that f ∈ B_{σ(w,f)} and, by Proposition 3, f is optimal in w. Accordingly, f is optimal for all configurations w ∈ Γ and thus it is necessarily optimal. □

Making use of Theorems 1 and 2, one can easily evaluate the possible and necessary optimality of an element f. In order to assert whether f is possibly optimal, we apply Algorithm 1 with the order of elements specified by σ(w^+_{f}, f). Element f is then possibly optimal if the obtained base contains f; otherwise, it is not possibly optimal. In the same way, we assert whether f is necessarily optimal. The running time of both methods is O(nf(n)).

Theorems 1 and 2 also allow us to determine the interval Δ_f = [δ_f^-, δ_f^+]. If f ∉ B_{σ(w^+_{f},f)} (which indicates that δ_f^- > 0) then the set B_{σ(w^+_{f},f)} ∪ {f} contains a unique circuit C. We can find an element g ∈ C \ {f} of minimal value of w_g^-. Then, from Theorem 1, it follows that δ_f^- = w_g^- − w_f^+. Similarly, if f ∉ B_{σ(w^-_{f},f)} (which indicates that δ_f^+ > 0) then the set B_{σ(w^-_{f},f)} ∪ {f} contains a unique circuit C. We can find an element g ∈ C \ {f} of minimal value of w_g^+ and, by Theorem 2, δ_f^+ = w_g^+ − w_f^-. It is easily seen that both values δ_f^- and δ_f^+ for a given element f ∈ E can be computed in O(nf(n)) time.
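The interval-valued tests of Theorems 1 and 2 reduce to two runs of the deterministic test in the extreme configurations. The sketch below assumes a Python interface with interval weights as (w^-, w^+) pairs and a uniform-matroid oracle; all names and data are illustrative, not from the paper.

```python
from typing import Callable, Dict, Hashable, Set, Tuple

Interval = Tuple[float, float]  # (w_minus, w_plus)

def survives(f: Hashable, weights: Dict[Hashable, float],
             is_independent: Callable[[Set], bool]) -> bool:
    """Greedy scan of sigma(w, f): strictly heavier elements first, then f."""
    sigma = [e for e in weights if weights[e] > weights[f]] + [f]
    sigma += [e for e in weights if e != f and weights[e] <= weights[f]]
    base: Set = set()
    for e in sigma:
        if is_independent(base | {e}):
            base.add(e)
    return f in base

def possibly_optimal(f, W: Dict[Hashable, Interval], indep) -> bool:
    # Theorem 1: test f in the extreme configuration w+_{f}.
    w = {e: (W[e][1] if e == f else W[e][0]) for e in W}
    return survives(f, w, indep)

def necessarily_optimal(f, W: Dict[Hashable, Interval], indep) -> bool:
    # Theorem 2: test f in the extreme configuration w-_{f}.
    w = {e: (W[e][0] if e == f else W[e][1]) for e in W}
    return survives(f, w, indep)

# Uniform matroid U(2, 3) with hypothetical interval weights.
W = {"a": (6, 8), "b": (3, 7), "c": (4, 5)}
indep = lambda A: len(A) <= 2
assert possibly_optimal("b", W, indep)        # with w_b = 7, only a is heavier
assert not necessarily_optimal("b", W, indep) # with w_b = 3, a and c are heavier
```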

4 Some methods of computing the optimality degrees of elements in fuzzy-valued matroids

We now generalize the concepts of interval-valued matroids to fuzzy-valued ones and provide a possibilistic formulation of the problem (see [1]). The weights of the elements of E are ill-known and they are modeled by fuzzy intervals W̃_e, e ∈ E. Let us recall that a fuzzy interval is a fuzzy set in the space of real numbers IR whose membership function µ_{W̃_e}: IR → [0, 1] is normal, quasiconcave and upper semicontinuous on IR (see for instance [1]). The membership function µ_{W̃_e}, e ∈ E, expresses the possibility distribution of the weight of element e ∈ E (see [1]). Let w = (w_e)_{e∈E}, w_e ∈ IR, be a configuration of the weights. The configuration w represents a certain state of the world. Assuming that the weights are unrelated, the joint possibility distribution over configurations, induced by the W̃_e, e ∈ E, is as follows: π(w) = min_{e∈E} µ_{W̃_e}(w_e). Hence, the degrees of possibility and necessity that an element f ∈ E is optimal are defined as follows:

Π(f is optimal) = sup_{w: f is optimal in w} π(w),   (2)

N(f is optimal) = inf_{w: f is not optimal in w} (1 − π(w)).   (3)

The degrees of optimality can be generalized by fuzzifying the quantity δ_f, f ∈ E, in the following way:

µ_{Δ̃_f}(x) = Π(δ_f = x) = sup_{w: x=δ_f(w)} π(w).

The following relations hold:

Π(f is optimal) = Π(δ_f = 0) = µ_{Δ̃_f}(0),
N(f is optimal) = N(δ_f = 0) = 1 − sup_{x>0} µ_{Δ̃_f}(x).

Every fuzzy weight W̃_e, e ∈ E, can be decomposed into its λ-cuts, that is, the sets W̃_e(λ) = {x | µ_{W̃_e}(x) ≥ λ}, λ ∈ (0, 1]. It is well known that W̃_e(λ), λ ∈ (0, 1], is the classical interval [W̃_e^-(λ), W̃_e^+(λ)]. We assume that [W̃_e^-(0), W̃_e^+(0)] is the support of W̃_e. The functions W̃_e^-: [0, 1] → IR and W̃_e^+: [0, 1] → IR are called the left and right profiles of W̃_e [2], respectively (see Fig. 1a). Thus, the profiles can be seen as parametric representations of the left and right hand sides of a fuzzy interval. We assume additionally that both profiles are strictly monotone. This assumption holds for fuzzy intervals of the L-R type [1]; therefore, it is not restrictive. Let M(λ) = (E, I), λ ∈ [0, 1], be the interval-valued matroid with the weights W̃_e(λ), e ∈ E, being the λ-cuts of the fuzzy weights. A link between the interval case and the fuzzy one, resulting from formulae (2) and (3) and the fact that if α < β then W̃_e(β) ⊆ W̃_e(α), e ∈ E, is as follows:

Π(f is optimal) = sup{λ | f is possibly optimal in M(λ)},   (4)

N(f is optimal) = 1 − inf{λ | f is necessarily optimal in M(λ)}.   (5)

Equations (4) and (5) form the theoretical basis for calculating the values of the optimality indices. They suggest a standard bisection method for determining the optimality degrees (2) and (3) of a fixed element with a given accuracy ε via the use of λ-cuts. At each iteration the possible (necessary) optimality of the element is evaluated in the interval-valued matroid M(λ) according to Theorem 1 (Theorem 2). The calculations take O(|log ε| nf(n)) time. Unfortunately, this method gives only approximate values of the optimality degrees. In the remainder of this section, we propose polynomial algorithms which give the exact values of the degrees of optimality.

First, we need to extend the two extreme configurations (1) to the fuzzy case. The fuzzy counterparts w̃^+_{f} and w̃^-_{f} are the vectors of left and right profiles defined as follows, for e ∈ E:

W̃_e(w̃^+_{f}) = W̃_e^+ if e = f and W̃_e^- otherwise,   W̃_e(w̃^-_{f}) = W̃_e^- if e = f and W̃_e^+ otherwise.   (6)

Assume that we intend to calculate the value of Π(f is optimal), f ∈ E. The key observation is that, in order to do this, it is enough to analyze only the fuzzy configuration w̃^+_{f}. Moreover, it is sufficient to take into account only the intersection points of the profile W̃_f^+ with the profiles W̃_e^-, e ≠ f, in configuration w̃^+_{f}.

Let e_{λ_1}, ..., e_{λ_m} be the elements of E whose left profiles intersect the right profile of element f. The numbers λ_1, ..., λ_m ∈ [0, 1] denote the cuts such that W̃_{e_{λ_i}}^-(λ_i) = W̃_f^+(λ_i), i = 1, ..., m. We assume that λ_1 ≤ ··· ≤ λ_m and W̃_{e_{λ_1}}^-(λ_1) ≤ ··· ≤ W̃_{e_{λ_m}}^-(λ_m). Let us also distinguish the elements v_1, ..., v_r whose left profiles lie entirely on the left-hand side of W̃_f^+ and the elements u_1, ..., u_q whose left profiles lie entirely on the right-hand side of W̃_f^+. The resulting 3-partition of the elements is E = {u_1, ..., u_q} ∪ {e_{λ_1}, ..., e_{λ_m}} ∪ {v_1, ..., v_r} (see Fig. 1b). Let us now define sequences σ_0, σ_1, ..., σ_m in the following way:

σ_0 = (u_1, ..., u_q, f, e_{λ_1}, ..., e_{λ_m}, v_1, ..., v_r),
σ_i = (u_1, ..., u_q, e_{λ_1}, ..., e_{λ_i}, f, e_{λ_{i+1}}, ..., e_{λ_m}, v_1, ..., v_r), i = 1, ..., m − 1,
σ_m = (u_1, ..., u_q, e_{λ_1}, ..., e_{λ_m}, f, v_1, ..., v_r).

Note that the sequences differ from each other only in the position of the element f, which depends on the cut λ_i. Let us define λ_0 = 0.

Fig. 1. (a) The left and right profiles of a fuzzy interval W̃ (in bold). (b) The partition of E with respect to the intersection points of the profile W̃_f^+ with the profiles W̃_e^-, e ≠ f, in configuration w̃^+_{f}.

Observation 1. If f ∈ B_{σ_{i−1}}, then f is possibly optimal in the matroid M(λ), λ ∈ [0, λ_i], i = 1, ..., m.

Proof. Observe that it is sufficient to show that f is possibly optimal in the interval-weighted matroid M(λ_i). It is easy to see that the extreme configuration w^+_{f} in M(λ_i) is as follows: w_f(w^+_{f}) = W̃_f^+(λ_i) and w_e(w^+_{f}) = W̃_e^-(λ_i) if e ≠ f. From the construction of the sequence σ_i, it follows that pred(f, σ(w^+_{f}, f)) ⊆ pred(f, σ_{i−1}). Thus, from Proposition 1 and the assumption f ∈ B_{σ_{i−1}}, we see that f is a part of the base B_{σ(w^+_{f},f)} in M(λ_i) and, by Theorem 1, it is possibly optimal in M(λ_i). □

Observation 2. If f ∉ B_{σ_i} then f is not possibly optimal in the matroid M(λ), λ ∈ (λ_i, 1], i = 0, ..., m.

Proof. From the definition of the sequence σ_i it follows that W̃_e^-(λ_i) ≥ W̃_f^+(λ_i) for all e ∈ pred(f, σ_i) (see also Fig. 1b). Let λ > λ_i. From the strict monotonicity of the right and left profiles we obtain W̃_e^-(λ) > W̃_f^+(λ) for all e ∈ pred(f, σ_i). Thus, in the interval-weighted matroid M(λ), all the elements e ∈ pred(f, σ_i) must also precede f in the corresponding sequence σ(w^+_{f}, f) in the matroid M(λ). Therefore, according to Proposition 1 and Theorem 1, element f is not possibly optimal in M(λ). □

Observations 1 and 2, together with formula (4), yield:

Proposition 5. If f ∈ B_{σ_m} then Π(f is optimal) = 1. Otherwise, let k be the smallest index in {0, 1, ..., m} such that f ∉ B_{σ_k}. Then Π(f is optimal) = λ_k.

Proposition 5 allows us to construct an efficient algorithm (Algorithm 2) for computing the value of Π(f is optimal) for a given element f ∈ E. The key to Algorithm 2 is that there is no need to apply Algorithm 1 to each sequence σ_0, ..., σ_m to check whether f is a part of the base B_{σ_i} induced by σ_i, i = 0, ..., m. Using the fact that the sequences differ from each other only in the position of element f, which depends on the cut λ_i, we only need to test whether f can be added to the base constructed in Algorithm 2 after each choice of an element e_{λ_i}.

Algorithm 2: Computing Π(f is optimal).
Input: A fuzzy weighted matroid M = (E, I), a distinguished element f ∈ E.
Output: Π(f is optimal).
Find all pairs (λ_i, e_{λ_i})
Sort (λ_i, e_{λ_i}) in nondecreasing order with respect to the values of λ_i
Form σ_0 = (u_1, ..., u_q, f, e_{λ_1}, ..., e_{λ_m}, v_1, ..., v_r)
B ← ∅
for i ← 1 to q do
    if B ∪ {u_i} ∈ I then B ← B ∪ {u_i}
if B ∪ {f} ∉ I then return 0    /* f ∉ B_{σ_0} */
for i ← 1 to m do
    if B ∪ {e_{λ_i}} ∈ I then B ← B ∪ {e_{λ_i}}
    if B ∪ {f} ∉ I then return λ_i    /* f ∉ B_{σ_i} */
return 1    /* f ∈ B_{σ_m} */
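A possible Python rendering of Algorithm 2, assuming the pairs (λ_i, e_{λ_i}) and the elements u_1, ..., u_q have already been computed from the profiles; the independence oracle and the data below are illustrative assumptions.

```python
from typing import Callable, Hashable, List, Sequence, Set, Tuple

def possibility_degree(
    u: Sequence[Hashable],               # u_1, ..., u_q: left profiles right of f's
    cuts: List[Tuple[float, Hashable]],  # pairs (lambda_i, e_lambda_i)
    f: Hashable,
    is_independent: Callable[[Set], bool],
) -> float:
    """Algorithm 2: a single greedy pass that implicitly tests f in B_sigma_i."""
    B: Set = set()
    for e in u:
        if is_independent(B | {e}):
            B.add(e)
    if not is_independent(B | {f}):
        return 0.0                       # f not in B_sigma_0
    for lam, e in sorted(cuts):          # nondecreasing lambda_i
        if is_independent(B | {e}):
            B.add(e)
        if not is_independent(B | {f}):
            return lam                   # f not in B_sigma_i
    return 1.0                           # f in B_sigma_m

# Uniform matroid U(2, n) as a stand-in independence oracle (hypothetical data).
indep = lambda A: len(A) <= 2
print(possibility_degree(["u1"], [(0.3, "e1"), (0.8, "e2")], "f", indep))  # → 0.3
```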

It is easily seen that Algorithm 2 implicitly checks whether f ∈ B_{σ_i} for i = 0, ..., m, deciding in this way whether f is possibly optimal. Hence, Algorithm 2 is equivalent to a single run of Algorithm 1. Since finding all the intersection points requires O(n) time if all the fuzzy intervals are of the L-L type [1], Algorithm 2 takes O(n log n + nf(n)) time, where O(n log n) is the time required for sorting the pairs (λ_i, e_{λ_i}) with respect to the values of λ_i.

The approach to computing N(f is optimal) for a given element f ∈ E is symmetric. In this case one needs to consider the fuzzy configuration w̃^-_{f} and take into account the intersection points of the profile W̃_f^- with the profiles W̃_e^+, e ≠ f, in configuration w̃^-_{f} (see Fig. 2). The numbers λ_1, ..., λ_m denote the cuts such that W̃_{e_{λ_i}}^+(λ_i) = W̃_f^-(λ_i), i = 1, ..., m, under the assumption that λ_1 ≤ ··· ≤ λ_m and W̃_{e_{λ_1}}^+(λ_1) ≤ ··· ≤ W̃_{e_{λ_m}}^+(λ_m).

Fig. 2. The partition of E with respect to the intersection points of the profile W̃_f^- with the profiles W̃_e^+, e ≠ f, in configuration w̃^-_{f}.

Similarly, we define σ_1, ..., σ_{m+1} with respect to the elements e_{λ_1}, ..., e_{λ_m} whose right profiles intersect the left profile of element f:

σ_1 = (u_1, ..., u_q, e_{λ_m}, ..., e_{λ_1}, f, v_1, ..., v_r),
σ_i = (u_1, ..., u_q, e_{λ_m}, ..., e_{λ_i}, f, e_{λ_{i−1}}, ..., e_{λ_1}, v_1, ..., v_r), i = 2, ..., m,
σ_{m+1} = (u_1, ..., u_q, f, e_{λ_m}, ..., e_{λ_1}, v_1, ..., v_r).

Set λ_{m+1} = 1. The following proposition is symmetric to Proposition 5 (the proof goes in a similar manner).

Proposition 6. If f ∈ B_{σ_1} then N(f is optimal) = 1. Otherwise, let k be the largest index in {1, ..., m+1} such that f ∉ B_{σ_k}. Then N(f is optimal) = 1 − λ_k.

Algorithm 3: Computing N(f is optimal).
Input: A fuzzy weighted matroid M = (E, I), a distinguished element f ∈ E.
Output: N(f is optimal).
Find all pairs (λ_i, e_{λ_i})
Sort (λ_i, e_{λ_i}) in nondecreasing order with respect to the values of λ_i
Form σ_{m+1} = (u_1, ..., u_q, f, e_{λ_m}, ..., e_{λ_1}, v_1, ..., v_r)
B ← ∅
for i ← 1 to q do
    if B ∪ {u_i} ∈ I then B ← B ∪ {u_i}
if B ∪ {f} ∉ I then return 0    /* f ∉ B_{σ_{m+1}} */
for i ← m downto 1 do
    if B ∪ {e_{λ_i}} ∈ I then B ← B ∪ {e_{λ_i}}
    if B ∪ {f} ∉ I then return 1 − λ_i    /* f ∉ B_{σ_i} */
return 1    /* f ∈ B_{σ_1} */

Algorithm 3 is similar in spirit to Algorithm 2. Here, too, there is no need to apply Algorithm 1 to each sequence σ_{m+1}, ..., σ_1 to check whether f is a part of the base B_{σ_i} induced by σ_i, i = m+1, ..., 1, according to Theorem 2. Algorithm 3 implicitly checks whether f ∈ B_{σ_i} for i = m+1, ..., 1, evaluating in this way the necessary optimality of f; this again exploits the fact that the sequences σ_{m+1}, ..., σ_1 differ from each other only in the position of element f. Obviously, computing N(f is optimal) also requires O(n log n + nf(n)) time.
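For completeness, a symmetric Python sketch of Algorithm 3 under the same assumed interface (precomputed cut pairs, uniform-matroid oracle, hypothetical data):

```python
from typing import Callable, Hashable, List, Sequence, Set, Tuple

def necessity_degree(
    u: Sequence[Hashable],               # u_1, ..., u_q
    cuts: List[Tuple[float, Hashable]],  # pairs (lambda_i, e_lambda_i)
    f: Hashable,
    is_independent: Callable[[Set], bool],
) -> float:
    """Algorithm 3: scan the cut elements from lambda_m down to lambda_1."""
    B: Set = set()
    for e in u:
        if is_independent(B | {e}):
            B.add(e)
    if not is_independent(B | {f}):
        return 0.0                       # f not in B_sigma_{m+1}
    for lam, e in sorted(cuts, reverse=True):
        if is_independent(B | {e}):
            B.add(e)
        if not is_independent(B | {f}):
            return 1.0 - lam             # f not in B_sigma_i
    return 1.0                           # f in B_sigma_1

indep = lambda A: len(A) <= 2            # uniform matroid U(2, n)
print(necessity_degree([], [(0.4, "e1"), (0.9, "e2")], "f", indep))  # → 0.6
```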

References

1. Dubois, D., Prade, H.: Possibility Theory: An Approach to Computerized Processing of Uncertainty. Plenum Press, New York (1988).
2. Dubois, D., Fargier, H., Fortin, J.: A generalized vertex method for computing with fuzzy intervals. In: Proc. FUZZ-IEEE (2004) 541-546.
3. Kasperski, A., Zieliński, P.: On combinatorial optimization problems on matroids with ill-known weights. Instytut Matematyki PWr., Wrocław (2005), report series PREPRINTY no. 34; submitted for publication in Eur. J. Oper. Res.
4. Oxley, J. G.: Matroid Theory. Oxford University Press, New York (1992).