Single machine scheduling with delivery dates and cumulative payoffs

Yasmina Seddik · Christophe Gonzales · Safia Kedad-Sidhoum

Preliminary version of the article published in the Journal of Scheduling, DOI: 10.1007/s10951-012-0302-0, 2013.

Abstract We address a single machine scheduling problem with a new optimization criterion and unequal release dates. This new criterion results from a practical situation in the domain of book digitization. Given a set of job-independent delivery dates, the goal is to maximize the cumulative number of jobs processed before each delivery date. We establish the complexity of the general problem. In addition, we discuss some polynomial cases and provide a pseudopolynomial time algorithm for the two delivery dates problem, based on dynamic programming and some dominance properties. Experimental results are also reported.

Keywords Single machine · Delivery dates · Release dates · Cumulative payoffs · Stepwise cost function

Yasmina Seddik · Christophe Gonzales · Safia Kedad-Sidhoum
Laboratoire d'Informatique de Paris 6, 4 place Jussieu, 75005 Paris
E-mail: [email protected], [email protected], [email protected]

1 Introduction

In this paper, we address a single machine non-preemptive scheduling problem where N jobs J1,...,JN have to be scheduled. Each job Ji has a release date ri ≥ 0 and a processing time pi > 0. K delivery dates D1,...,DK are given, with 0 < D1 < ··· < DK. Given a schedule S, which is characterized by the completion times of all the jobs, the variable Vk represents the number of jobs completed no later than Dk in S, for k = 1,...,K.

The payoff of a given schedule S is defined by v(S) = ∑_{k=1}^{K} Vk. The payoffs related to the delivery dates are cumulative: jobs delivered at a given delivery date are also counted in the payoffs related to each of the subsequent delivery dates. Extending the three-field notation of Graham et al. [4], the problem addressed in this paper can be denoted by 1|ri| ∑_{k=1}^{K} Vk.

We refer to the payoff related to a job Ji as v(Ji). For a given schedule S, v(Ji) is the number of delivery dates before which Ji completes in S. Let Ci(S) (or Ci when no ambiguity is possible) be the completion time of job Ji in S; v(Ji) can be represented by the following nonincreasing stepwise function:

v(J_i) = \begin{cases} K & \text{if } 0 < C_i \le D_1 \\ \;\vdots & \\ 2 & \text{if } D_{K-2} < C_i \le D_{K-1} \\ 1 & \text{if } D_{K-1} < C_i \le D_K \\ 0 & \text{if } D_K < C_i \end{cases}

In the example presented in Figure 1, jobs J1 and J2 are executed before D1; the payoff of each of these jobs is thus 3. J4's completion time occurs after D1 but before D2, so J4's payoff is 2. Finally, J3's completion time occurs after D2 but not later than D3: its payoff is 1. Consequently, the total payoff is 9.

Fig. 1 Example of a schedule with three delivery dates
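The stepwise function translates directly into code. The following sketch (ours, not part of the paper; function names and the numerical data are illustrative) evaluates v(Ji) and v(S) from completion times:

```python
# Illustrative sketch (not from the paper): computing v(J_i) and v(S)
# for delivery dates D_1 < ... < D_K.
from bisect import bisect_left

def job_payoff(completion_time, delivery_dates):
    """v(J_i): the number of delivery dates D_k with C_i <= D_k."""
    # bisect_left counts the delivery dates strictly smaller than C_i,
    # so the remaining ones are exactly those with C_i <= D_k.
    return len(delivery_dates) - bisect_left(delivery_dates, completion_time)

def schedule_payoff(completion_times, delivery_dates):
    """v(S) = sum over jobs of v(J_i), which equals sum_{k=1}^{K} V_k."""
    return sum(job_payoff(c, delivery_dates) for c in completion_times)

# Hypothetical data mirroring Figure 1: two jobs complete before D1,
# one in ]D1, D2] and one in ]D2, D3]; the total payoff is 3+3+2+1 = 9.
print(schedule_payoff([4, 9, 15, 25], [10, 20, 30]))  # -> 9
```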

If the cumulative aspect of the payoffs is relaxed, our problem is related to the problem with fixed delivery dates studied by Hall et al. [5]. They considered several classical scheduling criteria, including the following variant: the cost of a job Ji depends on the earliest delivery date occurring after the completion of Ji. They also established complexity results for several problems with different criteria and machine configurations. As for cumulative payoffs, Janiak and Krysiak [7,8] define a criterion that generalizes ∑_{k=1}^{K} Vk: they maximize a schedule's payoff defined as the sum of the payoffs of the jobs in the schedule, each job having its own distinct stepwise payoff function. They consider the single machine problem as well as the uniform and unrelated parallel machines problems, all without release dates. They show that the single machine problem is strongly NP-hard in the general case, and they provide a pseudopolynomial time algorithm for the special case of the unrelated parallel machines problem in which the breakpoints are the same for all the jobs' stepwise functions. Additionally, they propose several heuristic algorithms based on list strategies to solve the general problem with unrelated parallel processors. We can also mention the work of Raut et al. [12], who provide several heuristics for the single machine problem without release dates where jobs have a general payoff function decreasing with time. The structural properties observed by Janiak and Krysiak [7,8] and Raut et al. [12] are not valid when release dates are considered, as in the problem addressed in this paper. Detienne et al. [2] consider the unrelated parallel machines problem with release dates, with the same criterion as Janiak and Krysiak [7,8]; they thus address a generalization of the problem presented in this paper, 1|ri| ∑_{k=1}^{K} Vk, and propose generic exact methods (mixed integer programming and dedicated constraint generation procedures). Although the results of Detienne et al. [2] are valid for 1|ri| ∑_{k=1}^{K} Vk, our contribution is to analyze the complexity of some problems dealing with cumulative payoffs and to propose dedicated exact methods based on structural properties of the optimal solutions. Finally, there exists another class of problems related to 1|ri| ∑_{k=1}^{K} Vk: the generalized due date problems [6], where due dates are not related to specific jobs. Instead, global due dates are defined, and one job must complete before each of them. Given a schedule, the i-th scheduled job is then associated with the i-th due date, and its cost is computed with respect to that due date, as for classical due dates. Complexity results have been established by Hall et al. [6] for this class of problems, which nevertheless remains quite far from the addressed problem, in view of the cumulative aspect of the payoff function.

Problem 1|ri| ∑_{k=1}^{K} Vk has a practical application in the domain of bibliographical digitization, where a huge amount of books must be digitized following a linear process; we consider the single machine problem to model the bottleneck machine. Two entities are involved: the client and the manufacturer. We focus on the manufacturer's point of view, who tries to meet the client's requirements while maximizing its own profit. The books to be digitized become available to the manufacturer at different dates. In addition, the client sets several delivery dates Dk, k = 1,...,K (with 0 < D1 < ··· < DK), and target quantities Qk, k = 1,...,K. For a given k, Qk is the number of books the client expects to be digitized from the very beginning until date Dk. Therefore, quantity Qk includes quantity Qk−1 plus an additional amount of expected books, and we have 0 < Q1 < Q2 < ··· < QK. From this follows the cumulative aspect of the problem. Since Vk is the number of books digitized from the very beginning until date Dk, it is desirable that Vk be at least equal to Qk if possible, or else that Vk be as close as possible to Qk. Moreover, digitizing more than Qk books before Dk increases the manufacturer's payoff (which is proportional to the number of digitized books). These goals can be achieved by maximizing the difference Vk − Qk for every k = 1,...,K. Aggregating these K criteria by summation, we must maximize the objective function ∑_{k=1}^{K}(Vk − Qk) or, equivalently, ∑_{k=1}^{K} Vk, since ∑_{k=1}^{K} Qk is a constant. This objective function also reflects the following preference of the manufacturer: earning a given payoff is more valuable earlier than later.

The paper is organized as follows. In Section 2, complexity results are established: polynomial cases are discussed, as well as the NP-hardness of the general problem and of the special case with two delivery dates. Section 3 is dedicated to solving the two delivery dates problem through a dynamic programming algorithm, for which some experimental results are presented. Finally, we draw some conclusions in Section 4.

2 Complexity analysis

2.1 Polynomial cases

2.1.1 Relaxations of the general problem

Three polynomial cases resulting from different relaxations of the general problem 1|ri| ∑_{k=1}^{K} Vk have been identified.
1. The problem without release dates, 1|| ∑_{k=1}^{K} Vk, can easily be shown to be optimally solved by sequencing the jobs in nondecreasing order of their processing times (SPT rule).
2. The preemptive case 1|ri, pmtn| ∑_{k=1}^{K} Vk can be optimally solved using the Shortest Remaining Processing Time rule (SRPT), also called Modified Smith's rule [1]: at each release time or completion time of a job, schedule an available job with the smallest remaining processing time. The optimality of SRPT for this problem can be proved with the same arguments used to show that SRPT [1] yields an optimal solution for problem 1|rj, pmtn| ∑Cj. Therefore, problem 1|ri, pmtn| ∑_{k=1}^{K} Vk can be solved in O(N log N) time.
3. We consider the case where all the jobs have the same processing time, 1|ri, pi = p| ∑_{k=1}^{K} Vk; this case is treated through Proposition 1 below.

Definition 1 A "left-shifted" schedule is a schedule in which each job starts as soon as possible: each job starts at its release date if possible, and otherwise at the completion time of the previous job in the schedule.

Hence, to represent a left-shifted schedule S it is sufficient to provide the ordered sequence of all the jobs in S.

Proposition 1 A left-shifted schedule in which the jobs are ordered by nondecreasing release dates is optimal for the identical processing times problem 1|ri, pi = p| ∑_{k=1}^{K} Vk.

Proof In this proof we only consider left-shifted schedules: indeed, every feasible schedule can be transformed into a left-shifted schedule in O(N) time without decreasing its payoff. Let SR be a schedule in which the jobs are ordered by nondecreasing release dates. We show that every feasible schedule S can be transformed into SR, and that v(SR) ≥ v(S). We renumber the jobs according to their order in S: J1,...,JN. If there exists a job Ji such that Ci(SR) > Ci+1(SR), then swapping Ji and Ji+1 in S does not decrease the total payoff. Indeed, if ri = ri+1, the payoff does not change, since pi = pi+1; otherwise, if ri > ri+1, the payoff either remains unchanged or increases, the latter occurring if Ci(S) − pi = ri and [ri − 1, ri] is an idle time. Iterating this process, we obtain, after a polynomial number of steps, the schedule SR, with v(SR) ≥ v(S). Since this result holds for every feasible schedule S, it also holds for every optimal schedule. Hence the proposition is true. □

Proposition 1 thus leads to an optimal algorithm for problem 1|ri, pi = p| ∑_{k=1}^{K} Vk running in O(N log N) time.
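A minimal sketch of this rule (ours, under the assumptions above: common processing time p, payoff evaluated as in Section 1):

```python
# Sketch (ours) of the rule of Proposition 1 for 1|r_i, p_i = p|sum V_k:
# order the jobs by nondecreasing release dates, then left-shift.
def erd_left_shifted(release_dates, p):
    """Completion times of the left-shifted ERD schedule."""
    completions, t = [], 0          # t: time at which the machine is free
    for r in sorted(release_dates):
        start = max(t, r)           # start at the release date, or when free
        t = start + p
        completions.append(t)
    return completions
```

Combined with a payoff evaluation such as schedule_payoff above, this solves the identical processing times case in O(N log N) time, the sort being the dominant cost.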

2.1.2 Single delivery date

The problem with a single delivery date D, 1|ri|V, is equivalent to problem 1|ri, di = d| ∑Ui. By definition, every job Ji can be processed in the time window Fi = [ri, D]. We call "on time" every job Ji that is processed within the interval Fi; the jobs that are not on time can be scheduled after D, in any order.

Lemma 1 There exists an optimal schedule for 1|ri|V such that the last on time job completes exactly at time D.

Proof Let S∗ be an optimal schedule in which the last on time job does not complete at time D. Let Jl be the last on time job and let Cl be its completion time in S∗. We can right-shift Jl so that it completes at time D. The jobs that follow Jl in S∗ are also right-shifted by D − Cl time units, in order to avoid overlapping jobs; the payoff of the new schedule is equal to v(S∗). □

Lemma 2 There exists an optimal schedule for 1|ri|V such that there is no idle time between any pair of consecutive on time jobs.

The arguments of the proof are similar to those for Lemma 1 (jobs are right-shifted).

We now provide a polynomial time algorithm to solve 1|ri|V, which is close to the Moore-Hodgson algorithm [10] for solving 1|| ∑Ui. We renumber the jobs from 1 to N by nonincreasing release dates. S is an ordered set of jobs which, at the end of the algorithm, represents an optimal sequence; at each iteration, the jobs of S satisfy the conditions described in Lemmas 1 and 2. We use the following notation:
– ∀Ji ∉ S, Ji.S denotes the sequence obtained by prepending Ji to S; this notation extends to the prepending of a sequence S to another sequence S′: S.S′;
– ∀Jj ∈ S, S\Jj denotes the sequence obtained by removing Jj from S, leaving all the other jobs in the order given by S.

Input: J1,...,JN, ordered by nonincreasing release dates
Output: S.S′
1  S ← ∅, t ← D, S′ ← ∅
2  for i = 1 to N do
3    S ← Ji.S
4    t ← t − pi
5    if t < ri then
6      Jj ← a job of S with the largest processing time
7      S ← S\Jj
8      t ← t + pj
9      S′ ← Jj.S′
10 return S.S′

Algorithm 1: Algorithm for 1|ri|V

Theorem 1 Algorithm 1 optimally solves problem 1|ri|V.

Proof The correctness arguments are similar to those for the Moore-Hodgson algorithm [11]; we give a proof sketch below. At each iteration, the on time jobs satisfy the conditions described in Lemmas 1 and 2. The jobs are numbered from 1 to N by nonincreasing release dates. We call $S_1^m$ the sequence J1,...,Jm. A subsequence E of $S_1^m$ is called eligible if there exists a feasible subschedule that schedules exactly the jobs of E, in the order given by E, and such that all the jobs of E are on time. Given a sequence S, its processing time p(S) is defined as p(S) = ∑_{Ji∈S} pi. For each m ∈ {1,...,N}:
– let Em be an eligible subsequence of $S_1^m$ with the maximum number of on time jobs, and let Nm = |Em| denote the number of jobs in Em;
– let Sm be an eligible subsequence of $S_1^m$ such that |Sm| = Nm and p(Sm) is minimal.
The sequence SN obtained at the end of the algorithm is optimal. The proof is by induction on m.

The algorithm correctly constructs S1, because it tests whether J1 can be scheduled within F1: if this is possible, S1 only contains job J1, otherwise S1 is empty. Therefore, S1 is an eligible sequence of $S_1^1$ such that |S1| = N1 and p(S1) is minimal. The induction hypothesis is that the algorithm indeed constructs Sm. We prove next that sequence Sm+1 is constructed starting out from sequence Sm. At the (m+1)-th iteration of the algorithm, job Jm+1 is prepended to Sm. There are two cases to consider.

Case 1. Jm+1 is processed within Fm+1. We have Nm+1 = Nm + 1, because |Jm+1.Sm| = Nm + 1 and |Jm+1.Sm| cannot be greater than Nm + 1 (by the induction hypothesis |Sm| = Nm). Moreover, every eligible subsequence of $S_1^{m+1}$ that contains Nm+1 jobs includes job Jm+1: indeed, by induction hypothesis, every eligible subsequence S′ of $S_1^m$ satisfies |S′| ≤ Nm. Then, to construct Sm+1, we must add job Jm+1 to an eligible subsequence S″ of $S_1^m$ with Nm jobs and minimal processing time. By induction hypothesis, S″ = Sm; hence p(Jm+1.Sm) is minimal among the processing times of the feasible sequences of Nm+1 jobs.

Case 2. Jm+1 is not processed within Fm+1. We have Nm+1 = Nm. Indeed, Jm+1 cannot belong to an eligible subsequence S‴ of $S_1^{m+1}$ such that |S‴| = Nm + 1, because by adding Jm+1 to Sm (which contains Nm jobs and has the shortest processing time), Jm+1 cannot be on time. Thus, Nm is the maximal number of jobs of an eligible subsequence of $S_1^{m+1}$, and adding Jm+1 to Sm does not increase the number of on time jobs. However, adding Jm+1 to Sm and deleting a job with the longest processing time among Sm ∪ {Jm+1} keeps the same number Nm of on time jobs, and reduces the total processing time of the sequence as much as possible. Hence, the algorithm constructs an eligible subsequence with minimal processing time. □

Algorithm 1 runs in O(N log N) time if a heap is used for the search of the job with the largest processing time.
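A Python transcription of Algorithm 1 may help (ours; list operations are kept simple rather than asymptotically optimal); a max-heap is obtained by negating processing times, as suggested above:

```python
# Sketch (ours) of Algorithm 1 for 1|r_i|V with a heap.
import heapq

def moore_hodgson_release_dates(jobs, D):
    """jobs: list of (r_i, p_i). Returns (S, S_prime) as in Algorithm 1."""
    S, S_prime, heap, t = [], [], [], D
    for r, p in sorted(jobs, reverse=True):   # nonincreasing release dates
        S.insert(0, (r, p))                   # S <- J_i . S
        heapq.heappush(heap, (-p, r))         # max-heap on processing times
        t -= p
        if t < r:                             # J_i cannot start on time:
            neg_p, rj = heapq.heappop(heap)   # drop a longest job of S
            S.remove((rj, -neg_p))
            t += -neg_p
            S_prime.insert(0, (rj, -neg_p))
    return S, S_prime

# Example: for jobs (r, p) = (0, 4), (2, 3), (5, 2) and D = 9, all three
# jobs are kept on time; left-shifted they complete at 4, 7 and 9 <= D.
```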

2.2 Multiple delivery dates

In order to prove the NP-hardness of problem 1|ri| ∑_{k=1}^{K} Vk, we rephrase it as the following decision problem.

MDD (Multiple Delivery Dates problem). Given a set of N jobs J1,...,JN, K delivery dates D1,...,DK, and a value V, does there exist a schedule S such that v(S) ≥ V?

To prove the NP-completeness of MDD we reduce to it the 3-Partition problem, defined as follows [3].

3-PARTITION. Given positive integers m, B, and a set of integers A = {a1, a2,...,a3m} such that ∑_{i=1}^{3m} ai = mB and B/4 < ai < B/2 for 1 ≤ i ≤ 3m, does there exist a partition ⟨A1, A2,...,Am⟩ of A into 3-element sets such that, for each i, ∑_{a∈Ai} a = B?

Before proving the NP-completeness of MDD, we need the following definition.

Definition 2 In a schedule S, a straddling job Ji is a job such that Ci − pi < Dk < Ci, for some k ∈ {1,...,K}.

Theorem 2 MDD is unary NP-complete.

Proof Given a feasible solution S for MDD, its payoff v(S) can be computed in O(N) time, thus MDD ∈ NP. We show that 3-Partition reduces to MDD. Suppose we are given an instance of 3-Partition, with m, B and A. The corresponding input to MDD is a set of N = 3m(m+2) jobs, a set of m delivery dates, and the value V = (6+3m)m(m+1)/2. We define three different kinds of jobs, with processing times and release dates specified as follows. For each i ∈ {1,...,3m}:
– job J̃i has a processing time of mB + ai, and its release date is zero;
– job Ĵi has a processing time of mB, and its release date is zero;
– for j ∈ {1,...,m}, job J̄i,j has a processing time of 1/(3m), and its release date is jB(6m+1) + j − 1.
The delivery dates are Dj = jB(6m+1) + j, for j ∈ {1,...,m}. We also set D0 = 0. We show that MDD has a solution with a payoff at least equal to V if and only if the desired partition of A exists.

First, if there exists a partition ⟨A1, A2,...,Am⟩ of A such that ∑_{a∈Ai} a = B for each i, then the following schedule σ satisfies v(σ) = V (see Figure 2). For each j ∈ {1,...,m}:
1. let Aj = {a1j, a2j, a3j}; the three jobs J̃1j, J̃2j, J̃3j (of length mB + a1j, mB + a2j, mB + a3j respectively) are scheduled from Dj−1 to Dj−1 + B(3m+1);
2. the three jobs Ĵ3(j−1)+1, Ĵ3(j−1)+2, Ĵ3(j−1)+3 are scheduled from Dj−1 + B(3m+1) to Dj−1 + B(6m+1) = Dj − 1;
3. finally, the 3m jobs J̄1,j,...,J̄3m,j are scheduled from Dj − 1 to Dj.

Fig. 2 Schedule σ between Dj−1 and Dj, j = 1,...,m

In schedule σ, 6+3m jobs are executed between each pair of consecutive delivery dates. Hence v(σ) = (6+3m)(1+2+···+m) = (6+3m)m(m+1)/2 = V.

Conversely, assume there exists a schedule σ′ such that v(σ′) ≥ V. There are two cases to consider.

First, suppose that there is no straddling job in σ′. There can be at most 6+3m jobs scheduled before D1, in the interval [0, D1]. Indeed, all the jobs with release date zero (all the J̃-type and Ĵ-type jobs) have processing times at least equal to mB, so at most 6 of them can be scheduled before D1; moreover, at most 3m J̄-type jobs can be scheduled before D1. Because we assume there is no straddling job, the same reasoning remains valid for every time interval Ij = [Dj−1, Dj], j = 1,...,m. Thus σ′ has at most 6+3m jobs scheduled in each interval Ij, and we deduce that σ′ has exactly 6+3m jobs scheduled in each interval Ij, j = 1,...,m, since otherwise v(σ′) < V. We have shown that if σ′ has no straddling job, then σ′ has exactly 6+3m jobs delivered at each delivery date, and v(σ′) = V.

Second, suppose that σ′ contains straddling jobs. Let Djs be the first delivery date with a straddling job Js. Then the time interval between Djs − 1 and Djs is necessarily used for the execution of Js. Therefore, the jobs J̄1,js,...,J̄3m,js cannot be scheduled before Djs; in the best case, they are executed before Djs+1. Postponing the jobs J̄1,js,...,J̄3m,js after Djs decreases the objective function value by at least 3m, whereas the straddling job increases the payoff by at most m − 1. Hence the payoff of a solution with a straddling job cannot be greater than or equal to V. Therefore, σ′ has no straddling job.

In schedule σ′, for every j ∈ {1,...,m}, all the jobs J̄1,j,...,J̄3m,j are scheduled between Dj − 1 and Dj; otherwise it would not be possible to schedule 6+3m jobs between Dj−1 and Dj. The 6m remaining jobs (all the J̃-type and Ĵ-type jobs) are distributed in groups of 6 jobs in each interval I′j = ]Dj−1, Dj − 1], j = 1,...,m. The length of each interval I′j is B(6m+1). All the J̃-type and Ĵ-type jobs have lengths at least equal to mB; thus, each such job can be seen as composed of a first part whose length is mB, and a second part whose length is 0 for Ĵ-type jobs and ai for J̃-type jobs. If 6 jobs are scheduled in I′1, then 6mB time slots are filled by the first parts of these 6 jobs. The free slots of I′1, whose total length is B, are occupied by the second parts of J̃-type jobs, each of length ai. The same reasoning holds for every I′j, j = 1,...,m. Therefore, the second parts of the J̃-type jobs, each of length ai, are partitioned into m groups, each of total length B, which constitutes a solution to 3-Partition. □

2.3 Two delivery dates

We now consider problem 1|ri| ∑_{k=1}^{K} Vk with K = 2. In order to prove its NP-hardness, we prove the NP-completeness of the corresponding decision problem, defined as follows.

2DD (Two Delivery Dates problem). Given a collection of N jobs {J1,...,JN}, two delivery dates D1, D2, and a value V, does there exist a schedule S such that v(S) ≥ V?

We show a polynomial reduction from the Partition problem to 2DD.

Partition. Given n positive integers s1,...,sn, does there exist L′ ⊂ L = {1,...,n} such that ∑_{i∈L′} si = ∑_{i∈L\L′} si?

Theorem 3 2DD is NP-complete.

Proof Given a feasible solution S for 2DD, its payoff v(S) can be computed in O(N) time, thus 2DD ∈ NP. We show that Partition reduces to 2DD. Suppose we are given an instance of Partition, with n positive integers s1,...,sn, and let b = (1/2) ∑_{i=1}^{n} si. The corresponding input of 2DD is a set of 3n jobs, two delivery dates and the value V = 5n. There are three different kinds of jobs, with processing times and release dates specified as follows. For each i ∈ {1,...,n}:
– job J̃i has a processing time of b + si, and its release date is zero;
– job Ĵi has a processing time of b, and its release date is zero;
– job J̄i has a processing time of 1/n, and its release date is b(n+1).
The delivery dates are D1 = b(n+1) + 1 and D2 = 2b(n+1) + 1. We now show that 2DD has a solution of value at least V if and only if the desired partition of s1,...,sn exists.

First, if Partition has a solution, it is represented by a partition of L into two sets L′ and L\L′. Let k be the cardinality of L′; the cardinality of L\L′ is then n − k. Let σ be the following schedule (see Figure 3):
1. the k jobs of the set {J̃i | i ∈ L′} are executed from time zero to b(k+1);
2. the n−k jobs of the set {Ĵi | i ∈ L\L′} are executed from b(k+1) to b(n+1);
3. the n jobs J̄1,...,J̄n are executed from b(n+1) to b(n+1) + 1;
4. the n−k jobs of the set {J̃i | i ∈ L\L′} are executed from b(n+1) + 1 to b(n+1) + 1 + b(n−k+1) = 1 + b(2n−k+2);
5. the k jobs of the set {Ĵi | i ∈ L′} are executed from 1 + b(2n−k+2) to 2b(n+1) + 1.

Fig. 3 Schedule σ

Therefore, in schedule σ, k + (n−k) + n = 2n jobs are executed before D1, and (n−k) + k = n jobs have their completion time after D1 and no later than D2. Hence v(σ) = 2·2n + n = 5n.

Conversely, suppose there exists a schedule σ′ whose value is at least 5n. We first prove that, in σ′, 2n jobs are executed before D1: if fewer than 2n jobs were executed before D1, then v(σ′) would be less than 5n (because of the total number of jobs). Hence the n jobs J̄1,...,J̄n must all be executed before D1, otherwise it would be impossible to execute 2n jobs before D1, since all the other jobs have processing times at least equal to b. Thus, at most n other jobs can be executed before D1: all the J̃-type and Ĵ-type jobs have processing times at least equal to b, so at most n of them can be executed in the interval [0, b(n+1)].

So, in schedule σ′, exactly n J̃-type or Ĵ-type jobs are executed in the interval I1 = [0, b(n+1)] and n in the interval I2 = [D1, D2]. The length of each of these intervals is b(n+1). As noted above, all the J̃-type and Ĵ-type jobs have processing times at least equal to b; hence, each of them can be seen as composed of two parts: a first part of length b, and a second part whose length is 0 for Ĵ-type jobs and si for J̃-type jobs. Because n such jobs are executed in I1, nb time slots are occupied by their first parts. The remaining portion of I1, which is of length b, is occupied by the second parts of J̃-type jobs, each of length si. The same reasoning holds for I2. Therefore, the second parts of the J̃-type jobs, each of length si, are partitioned into two groups, each of total length equal to b, which constitutes a solution to Partition. □

In Section 3 we present a pseudopolynomial time algorithm for 1|ri|V1 + V2, thus proving that 1|ri|V1 + V2 is weakly NP-hard.
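To make the reduction concrete, the 2DD instance can be generated mechanically from a Partition instance. A sketch (ours; exact rational arithmetic avoids rounding the fractional processing times):

```python
# Sketch (ours) of the reduction of Theorem 3: Partition -> 2DD.
from fractions import Fraction

def partition_to_2dd(s):
    """s: list of n positive integers. Returns (jobs, D1, D2, V)."""
    n = len(s)
    b = Fraction(sum(s), 2)
    jobs = []
    for s_i in s:
        jobs.append((0, b + s_i))                    # tilde job
        jobs.append((0, b))                          # hat job
        jobs.append((b * (n + 1), Fraction(1, n)))   # bar job
    D1 = b * (n + 1) + 1
    D2 = 2 * b * (n + 1) + 1
    return jobs, D1, D2, 5 * n                       # V = 5n
```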


3 Solving the problem with two delivery dates

3.1 Structural properties

Any schedule S for 1|ri|V1 + V2 can be split into three subschedules: the subschedule S1 of the jobs whose completion times lie in the interval I1 = [0, D1], the subschedule S2 of the jobs whose completion times lie in I2 = ]D1, D2], and the subschedule S3 of the jobs whose completion times lie in I3 = ]D2, +∞[. We write S = S1.S2.S3, where Si.Sj denotes the concatenation of subschedule Si with subschedule Sj. We introduce two lemmas that will be useful for the dynamic programming algorithm.

Definition 3 A feasible schedule S = S1.S2.S3 such that the jobs of S1, respectively of S2, are ordered by nondecreasing release dates is called an Earliest Release Date schedule (ERD-schedule).

Lemma 3 There exists an ERD-schedule that is an optimal solution of 1|ri|V1 + V2.

Proof We first show that every feasible schedule S = S1.S2.S3 can be modified in order to obtain an ERD-schedule SR such that v(SR) ≥ v(S). Let Ji and Jj be two consecutive jobs in S such that:
– Ji and Jj belong both to S1 or both to S2,
– ri > rj, and
– Ci < Cj.
Let S′ = S′1.S′2.S′3 be the schedule obtained from S by swapping Ji and Jj without moving the other jobs, in such a way that C′i = Cj and C′j = Ci − pi + pj. Two cases need to be considered.
Case 1. If Ji and Jj belong to S1, then C′i ∈ I1 and C′j ∈ I1. Thus Ji and Jj belong to S′1 and v(S′) = v(S).
Case 2. If Ji and Jj belong to S2, two subcases may occur:
– if C′j > D1, then C′i ∈ I2 and C′j ∈ I2; thus Ji and Jj belong to S′2 and v(S′) = v(S);
– if C′j ≤ D1, then C′i ∈ I2 and C′j ∈ I1; thus Ji ∈ S′2 and Jj ∈ S′1, and v(S′) = v(S) + 1.
Iterating this process, we obtain after a finite number of steps (as in the bubble sort algorithm [9]) an ERD-schedule SR. In particular, this modification process can be applied to any optimal schedule. Hence the result holds. □
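A small helper (ours) makes the split and the resulting payoff explicit for the two delivery dates case:

```python
# Sketch (ours): split a schedule into S1.S2.S3 by completion times
# and evaluate V1 + V2 = 2|S1| + |S2|.
def split_schedule(completion_times, D1, D2):
    S1 = [c for c in completion_times if c <= D1]
    S2 = [c for c in completion_times if D1 < c <= D2]
    S3 = [c for c in completion_times if c > D2]
    return S1, S2, S3

def payoff_two_dates(completion_times, D1, D2):
    S1, S2, _ = split_schedule(completion_times, D1, D2)
    # V1 = |S1| and V2 = |S1| + |S2|, hence v(S) = 2|S1| + |S2|.
    return 2 * len(S1) + len(S2)
```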


We recall that a partial schedule without idle time is called a block.

Lemma 4 There exists an optimal schedule for 1|ri|V1 + V2 in which S2 is a block.

Proof Let S = S1.S2.S3 be a feasible schedule with idle times in S2, and let k2 be the completion time of the last job of S2. Without moving the last job of S2, we can right-shift the other jobs of S2 until there is no idle time left in S2. We obtain a solution of value v(S), because the jobs of S1 and S3 do not move, and the jobs of S2 still complete in the interval I2. In particular, this holds for any optimal schedule. Hence the result holds. □

Lemma 4 can be extended to the general problem: there exists an optimal solution in which the jobs that complete in the interval ]DK−1, DK] are scheduled in a unique block. However, this generalized result is not necessary for what follows.

Consequently, there exists an optimal schedule S∗ = S∗1.S∗2.S∗3 that is an ERD-schedule in which S∗2 is a block.

3.2 Dynamic programming algorithm

The key point of the dynamic programming algorithm is to construct ERD-schedules where S2 is a block. Before describing the algorithm, we define the following notation for any schedule S = S1.S2.S3:
– for any job Ji in S, the completion time of Ji in S is denoted by Ci(S);
– the completion time of the last job of S1 is denoted by k1(S);
– the starting time of the first job of S2 is denoted by t2(S);
– the completion time of the last job of S2 is denoted by k2(S).
When no ambiguity is possible, Ci (resp. k1, t2, k2) denotes Ci(S) (resp. k1(S), t2(S), k2(S)). Figure 4 illustrates these notations on an example.

Fig. 4 Definition of k1, t2 and k2

Hence, any schedule S can be partially described by a 4-tuple (k1(S), t2(S), k2(S), v(S)). Of course this 4-tuple does not provide a precise description of the schedule, such as the completion times of all the jobs; however, it carries enough information for the purpose of the algorithm, which is to return an optimal payoff. In this section we first work with schedules, to explain the idea of the algorithm; we then define formal functions that handle 4-tuples.

The jobs are numbered in order of nondecreasing release dates: J1,...,JN. The algorithm has N steps. At each step j = 1,...,N, we construct a set Sj of dominant ERD-schedules where S2 is a block, starting out from Sj−1, the set of schedules built at the previous step. To build Sj, we modify the schedules of Sj−1 by reinserting job Jj either before D1, or between D1 and D2, or keeping Jj after D2. The initial set of schedules S0 contains only one schedule, in which all the jobs are scheduled after D2, in order of nondecreasing release dates. The unique schedule of S0 is represented by the 4-tuple (0, D2, 0, 0). In this 4-tuple, k1 = 0 because S1 is empty, so S1 can be represented as starting and completing at time 0. Since S2 is empty too, it should be represented as starting and finishing at D1; however, we set its starting time to D2 (t2 = D2) and its completion time to 0 (k2 = 0), in order to avoid some additional subcases. This convention does not affect the correctness of the algorithm.

Suppose now that we are at step j, j = 1,...,N, and let S = S1.S2.S3 be a schedule of Sj−1. Then Jj ∈ S3: indeed, as a job is reinserted only at its corresponding step, job Jj is scheduled after D2 in all the schedules of Sj−1. Thus, starting out from S, we can build some new schedules (at most three) by reinserting job Jj in three different ways: in S1, in S2, or in S3 (see Figure 5). When Jj is reinserted into S1 (resp. S2), it is always scheduled at the end of S1 (resp. S2), in order to obtain an ERD-schedule.

Fig. 5 The three ways job Jj can be reinserted into S

Moreover, if we attempt to reinsert Jj into S1:
– if Jj can be scheduled after k1 while completing before both D1 and the starting time of the first job of S2 (i.e. if max(rj, k1) + pj ≤ min(t2, D1)), then Jj is scheduled as soon as possible after k1, and we have Cj = max(rj, k1) + pj;
– if Jj can be scheduled after k1 while completing before D1 (i.e. max(rj, k1) + pj ≤ D1) but cannot complete before the starting time of the first job of S2 (i.e. max(rj, k1) + pj > t2), and if S2 can be right-shifted by the necessary number of time units to allow Jj to be reinserted in S1 while keeping the completion times of all the jobs of S2 in I2 (i.e. k2 + max(k1, rj) + pj − t2 ≤ D2), then this right-shift is performed and Jj is reinserted in S1 (see Figure 6);
– otherwise, Jj is not reinserted in S1.
In the first two cases, the payoff of the new schedule is v(S) + 2.

Fig. 6 An example in which Jj is reinserted into S1: S2 is right-shifted by max(k1, rj) + pj − t2 to allow Jj to be scheduled


Consider now the case in which we attempt to reinsert Jj into S2. Let bj be the earliest starting time allowing job Jj to complete after D1: bj = max(rj, D1 − pj + 1) (see Figure 7).

Fig. 7 Value of bj when (a) rj < D1 − pj + 1 and (b) rj ≥ D1 − pj + 1

– If there are no jobs in S2, Jj is scheduled as soon as possible while completing in ]D1, D2], thus Cj = max(bj, k1) + pj (provided that max(bj, k1) + pj ≤ D2).
– If there is at least one job in S2, and if Jj can be scheduled after k2 while completing in ]D1, D2] (i.e. max(rj, k2) + pj ≤ D2), then Jj is scheduled as soon as possible after k2, thus Cj = max(rj, k2) + pj. If k2 < rj, then all the jobs of S2, except for Jj, are right-shifted in order to constitute a unique block with Jj, while maintaining feasibility (see Figure 8).
– Otherwise, Jj is not reinserted in S2.
In the first two cases, the payoff of the new schedule is v(S) + 1.

Fig. 8 An example in which Jj is reinserted into S2: the previous block of S2 is right-shifted by (rj − k2) to avoid idle time in S2

More formally, let Qj be the set of 4-tuples corresponding to the schedules of Sj. Then, at step j, we can define three functions g1, g2, g3 on Qj−1. Given a 4-tuple (k1(S), t2(S), k2(S), v(S)) of Qj−1, corresponding to a solution S of Sj−1, function g1 (resp. g2, g3) returns the 4-tuple associated with the solution obtained by reinserting job Jj in S1 (resp. in S2, in S3). Before giving the formal definition of these three functions, we need the following notation. For a given j ∈ {1,...,N} and a given schedule S ∈ Sj−1:
– α is the earliest possible completion time of Jj if it is reinserted in S1: α = max(rj, k1) + pj,
– β1 is the earliest possible completion time of Jj if it is reinserted in S2 when S2 is empty: β1 = max(bj, k1) + pj,
– β2 is the earliest possible completion time of Jj if it is reinserted in S2 when S2 is not empty: β2 = max(rj, k2) + pj.

g_1(k_1, t_2, k_2, v, j) = \begin{cases} (\alpha,\, t_2,\, k_2,\, v+2) & \text{if } \alpha \le \min(t_2, D_1) \\ (\alpha,\, \alpha,\, k_2+\alpha-t_2,\, v+2) & \text{if } t_2 < \alpha \le D_1 \text{ and } k_2+\alpha-t_2 \le D_2 \\ (k_1,\, t_2,\, k_2,\, v) & \text{otherwise} \end{cases}

g_2(k_1, t_2, k_2, v, j) = \begin{cases} (k_1,\, \beta_1-p_j,\, \beta_1,\, v+1) & \text{if } t_2 = D_2 \text{ and } \beta_1 \le D_2 \\ (k_1,\, t_2+\beta_2-p_j-k_2,\, \beta_2,\, v+1) & \text{if } t_2 < D_2 \text{ and } \beta_2 \le D_2 \\ (k_1,\, t_2,\, k_2,\, v) & \text{otherwise} \end{cases}

g_3(k_1, t_2, k_2, v, j) = (k_1,\, t_2,\, k_2,\, v)

We say that two 4-tuples (k1, t2, k2, v) and (k′1, t′2, k′2, v′) are similar iff k1 = k′1, t2 = t′2 and k2 = k′2.

Clearly, at step j, some subsets of similar 4-tuples can be generated by the functions g1, g2, g3. In this case, we keep in Qj only one of the similar 4-tuples, one that has the maximal value of v. The algorithm ends after N steps. Once QN is built (without similar 4-tuples), the maximal value of v among those of all the 4-tuples of QN is an optimal payoff.

The complexity of the algorithm is given by the sizes of the sets Q1,...,QN: that is, by the number of steps (N) and the maximal number of non-similar 4-tuples built at each step. The number of non-similar 4-tuples generated at a step is bounded by O(D1(D2)²), since the number of possible values of k1 is bounded by O(D1), and the numbers of possible values of t2 and of k2 are each bounded by O(D2). Moreover, sorting the jobs by nondecreasing release dates at the beginning of the algorithm induces a complexity of O(N log N). Finally, the total complexity is O(N log N + N·D1(D2)²). The pseudocode of the algorithm, its proof of correctness, and the formal proof of its complexity are given in Appendix A. For the sake of clarity, we present an algorithm that returns an optimal payoff and only handles the sets Qj; however, it is easy to modify it in order to get an optimal schedule (by also manipulating the sets Sj).

In order to speed up the execution of the algorithm, we introduce in Section 3.2.1 a dominance rule, which can be used as a pruning rule in the dynamic programming algorithm.

3.2.1 Pruning rule

Given any instance I of 1|ri|V1 + V2, let N1M be the maximal number of jobs that can be scheduled in the interval [0, D1], and S1M the associated partial schedule. This number can easily be obtained by applying the polynomial algorithm for the single delivery date problem 1|ri|V, described in Section 2.1.2, to the interval [0, D1]. For any feasible schedule S of I, and a fortiori for any optimal schedule, we have |S1| ≤ N1M. Recall that the jobs J1,...,JN are indexed as defined for the dynamic programming algorithm, i.e. by nondecreasing release dates. The pruning rule for the dynamic programming algorithm can be described as follows.

Pruning rule. Let Je be the last job whose release date is strictly less than D1, i.e. e = max{i = 1,...,N | ri < D1}. At the end of step e of the dynamic programming algorithm, prune all the schedules that do not have exactly N1M jobs completing in the interval [0, D1].

The pruning rule is based on Proposition 2, which states that the set of schedules with N1M jobs completing before D1 is dominant.

Proposition 2 Given any feasible schedule S of 1|ri|V1 + V2, there exists a feasible schedule S′ such that |S′1| = N1M and v(S′) ≥ v(S).

The proof of Proposition 2 is based on constructive arguments. For this purpose, we introduce Algorithm 2 which, given a schedule S such that |S1| < N1M, constructs a schedule S′ such that |S′1| = N1M and v(S′) ≥ v(S), starting from S′ = S. Let S1M-jobs denote the jobs that belong to S1M, and let non-S1M-jobs denote the jobs that do not belong to S1M. The key idea of Algorithm 2 is to rearrange the jobs of S′ by reinserting the non-S1M-jobs of S′1 into S′2 and S′3, and reinserting all the S1M-jobs into S′1, while maintaining feasibility and without decreasing the total payoff of S′. Left-shifting some S1M-jobs and right-shifting some non-S1M-jobs will thus be necessary. Finally, note that Algorithm 2 will only be used to prove Proposition 2.

Before describing this algorithm and its proof of validity, we need some preliminary lemmas. In the following, we call Js any straddling job that starts before D1 and completes after D1, and we denote by ts its starting time. Let c be the completion time of the last non-S1M-job in S′1 (c = 0 if there is none).

Lemma 5 Given a schedule S′, an S1M-job Ji scheduled in S′2 or S′3 and the interval [c, D1], if there is no straddling job Js in S′, and if the total sum of the idle times in [c, D1] is at least equal to pi, then Ji can be reinserted in S′ into the interval [c, D1].

Proof Let M be the set of S1M-jobs scheduled in [c, D1] in S′. Then, by using the algorithm for 1|ri|V described in Section 2.1.2, the jobs of the set M ∪ {Ji} can be rescheduled in [c, D1], resulting in a partial schedule denoted Sm: indeed, the interval [c, D1] is wide enough to contain all the processing times of the jobs of M ∪ {Ji}. However, we must make sure that Sm is feasible (i.e. that each job starts after its release date). Notice that the sequence of jobs corresponding to Sm is a subsequence of the sequence associated with S1M. Hence, because both schedules are right-justified and complete at D1, for any job Jj of Sm we have Cj(S1M) ≤ Cj(Sm). Therefore, as S1M is feasible, Sm is also feasible. □

We define two operators that will be used in Algorithm 2 to move jobs inside the schedule S′: a left-shift (resp. right-shift) operator LS(Ji, x, y) (resp. RS(Ji, x, y)), where Ji is shifted from S′x to S′y, with x, y ∈ {1, 2, 3}. Let us describe these operators and the induced payoff variations; the payoff variations can easily be deduced by recalling that a job completing in [0, D1] is worth 2 in the payoff, a job completing in ]D1, D2] is worth 1, and a job completing after D2 is worth 0.

1. For any job Ji we define:
– RS(Ji, 1, 3): Ji is removed from S′1 and reinserted after the last job of S′3. It induces a payoff variation of −2.
– RS(Ji, 2, 3): Ji is removed from S′2 and reinserted after the last job of S′3. It induces a payoff variation of −1.
After performing RS(Ji, 1, 3) or RS(Ji, 2, 3), feasibility is maintained, as Ji is right-shifted.

2. For any job Ji such that pi is at most equal to the total sum of the idle times in S′2, we define:
– RS(Ji, 1, 2): Ji is removed from S′1 and reinserted in S′2 in the following way. First, all the jobs of S′2 that start after D1 are right-shifted in order to obtain a unique block B that completes at D2. Then, Ji is inserted in S′2 before the first job of B.
After performing RS(Ji, 1, 2), feasibility is maintained, as Ji is right-shifted and the idle time induced by right-shifting the jobs of S′2 is large enough to schedule Ji. RS(Ji, 1, 2) induces a payoff variation of −1.

3. For any S1M-job JM, with processing time pM, such that:
(a) there is no straddling job in S′ starting before D1 and completing after D1, and
(b) the total sum of the idle times between c and D1 in S′ is at least pM,
we define:
– LS(JM, 3, 1): JM is removed from S′3 and reinserted in S′1 into the interval [c, D1], by rescheduling JM together with all the S1M-jobs scheduled after c in S′1 between c and D1 with the polynomial algorithm for 1|ri|V. After LS(JM, 3, 1), feasibility is maintained by Lemma 5. LS(JM, 3, 1) induces a payoff variation of +2.

4. For any S1M-job JM, with processing time pM, such that:
(a) there is no straddling job in S′ starting before D1 and completing after D1, and the total sum of the idle times between c and D1 in S′ is at least pM, or
(b) Js = JM and the total sum of the idle times between c and CM − pM is at least CM − D1,
we define:
– LS(JM, 2, 1): JM is removed from S′2 and reinserted in S′1 into the interval [c, D1], by rescheduling JM together with all the S1M-jobs scheduled after c in S′1 between c and D1 with the polynomial algorithm for 1|ri|V. After LS(JM, 2, 1), feasibility is maintained by Lemma 5: indeed, in the case in which JM = Js, once JM is removed from S′2 there is no straddling job Js anymore, and therefore Lemma 5 applies. LS(JM, 2, 1) induces a payoff variation of +1.

Algorithm 2 is a recursive function, depicted by the pseudocode below, that takes as parameters the current schedule S′ = S′1.S′2.S′3 and k, the current step of the algorithm. We denote by n1 the number of non-S1M-jobs in S′1, and by n2 (resp. n3) the number of S1M-jobs in S′2 (resp. S′3). We have |S′1| = N1M + n1 − (n2 + n3), thus the terminal condition is reached when n1 = n2 + n3.

Algo2(k, S′):
1  if n1 = n2 + n3 then return S′
2  if k = 0 then
3    if (n1 = 0 and n2 ≥ 0) or (n1 > 0 and n2 ∈ {0, 1}) then
4      for each non-S1M-job Ji in S′1 do RS(Ji, 1, 3)
5      if ∃ a straddling non-S1M-job Js then RS(Js, 2, 3)
6      for each S1M-job JM in S′2 do LS(JM, 2, 1)
7      for each S1M-job JM in S′3 do LS(JM, 3, 1)
8    else
9      if (∃ a straddling job Js) and (∃ an S1M-job Ji in S′2 s.t. (ri ≤ ts and pi ≤ D1 − ts) or (ri > ts)) then
10       RS(Js, 2, 3)
11       LS(Ji, 2, 1)
12     if n1 < n2 + n3 then
13       if n2 ∈ {0, 1} then Algo2(0, S′)
14       else Algo2(1, S′)
15 else
16   if ∀ Ek+1, ∑_{Jj∈Gk} pj < ∑_{Ji∈Ek+1} pi then
17     if n1 > k and n2 > k + 1 then Algo2(k + 1, S′)
18     else
19       if n2 = k + 1 and there is a straddling job Js such that Js is an S1M-job then flag ← true
20       else flag ← false
21       for each S1M-job JM in S′2 do RS(JM, 2, 3)
22       if ∃ a straddling job Js then RS(Js, 2, 3)
23       for each job Jj in Gk−1 do RS(Jj, 1, 2)
24       Jk ← the unique job of Gk\Gk−1
25       if flag then RS(Jk, 1, 3)
26       else RS(Jk, 1, 2)
27   else   // let Ek+1 be such that ∑_{Jj∈Gk} pj ≥ ∑_{Ji∈Ek+1} pi
28     if ∃ a straddling job Js such that Js ∉ Ek+1 then
29       Jf ← the first job of Ek+1
30       exchange Jf and Js
31     for each job Ji in Ek+1 do RS(Ji, 2, 3)
32     for each job Jj in Gk−1 do RS(Jj, 1, 2)
33     Jk ← the unique job of Gk\Gk−1
34     RS(Jk, 1, 3)
35     for each job Ji in Ek+1 do LS(Ji, 3, 1)
36     if n1 < n2 + n3 then
37       if n1 > 0 and n2 ≥ 2 then Algo2(1, S′)
38       else Algo2(0, S′)
39 return S′

Algorithm 2.

We describe next the structure of the algorithm (line numbers refer to the listing above). There are two main cases: the initial case k = 0 (lines 2–14) and the general case k > 0 (lines 15–38). Each of these two cases has two subcases, so the algorithm can be seen as composed of four parts: Part 1 comprises lines 3–7, Part 2 lines 8–14, Part 3 lines 16–26 and Part 4 lines 27–38. In the initial case, if there are no non-S1M-jobs in S′1 or if there is at most one S1M-job in S′2 (Part 1), then it is easy to reinsert all the S1M-jobs into S′1 (after reinserting some non-S1M-jobs into S′3). Otherwise, if there exists a straddling job and it is possible to exchange it with another job in order to obtain a schedule without straddling job (and without decreasing the payoff), this exchange is performed at lines 9–11.

The general case is composed of Parts 3 and 4. Let us give the idea of its inductive structure. At the beginning of each step k > 0, we consider Gk, the set of the last k non-S1M-jobs scheduled in S′1, and we denote by Ek+1 a (k+1)-tuple of S1M-jobs of S′2. Then, if the following condition is satisfied, we execute Part 3:

∀ Ek+1 : ∑_{Jj∈Gk} pj < ∑_{Ji∈Ek+1} pi.    (Hk)

If Part 4 is performed at some step ku > 1, then we executed Part 3 in all the previous steps k such that 1 ≤ k < ku. Hence, at any step ku > 1, (Hk) is observed for all 1 ≤ k < ku. When performing Part 4, there exists an Ek+1 that satisfies the condition ∑_{Jj∈Gk} pj ≥ ∑_{Ji∈Ek+1} pi. Therefore, by removing the jobs of Gk from S′1, the jobs of Ek+1 can be reinserted in S′1, and consequently the number of jobs in S′1 increases. This condition can be attained several times, and each time |S′1| increases, until the terminal condition is satisfied. Finally, the proof of correctness of Algorithm 2 is given in Proposition 3, which is proven in Appendix B.

Proposition 3 Algo2(0, S) yields a feasible schedule S′ whose payoff is greater than or equal to v(S), and that schedules N1M jobs completing no later than D1.

3.3 Experimental results

The dynamic programming algorithm was first tested on randomly generated instances. The instance generator used here is a simplified version of the generator used by Detienne et al. [2] for the parallel machines problem with a stepwise cost function. It takes the following inputs:
– the number of jobs N,
– a parameter a > 0,
– a parameter R ∈ ]0, 1].

The N processing times of the jobs are first generated, each being picked from the uniform distribution over {10,...,99}. The delivery dates are then D1 = ⌊(a × ∑_{i=1}^{N} pi)/2⌋ and D2 = 2 × D1. For each job, its release date is picked uniformly at random in one of the intervals r1 = [0, R × D1] or r2 = [D1, D1 + R × D1], the choice between r1 and r2 being random as well (each with probability 0.5). For each combination of N ∈ {30, 40, 50, 60, 70, 80}, a ∈ {0.8, 1, 1.2} and R ∈ {0.1, 0.3, 0.5}, 5 instances were generated.

Experiments were performed to analyze the efficiency of the algorithm in terms of CPU time. The tests were run on a 3.33 GHz Intel Core2-Duo processor with 8 GB RAM, running Debian wheezy/sid. Since the results did not show any strong relation between the CPU times and the values of a and R, we only present the results in relation to the number of jobs N, in Table 1.
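For reproducibility, the generator described above can be sketched as follows (ours; the paper does not specify whether release dates are rounded to integers, so the continuous draw below is an assumption):

```python
# Sketch (ours) of the instance generator of Section 3.3.
import random

def generate_instance(N, a, R, seed=None):
    rng = random.Random(seed)
    p = [rng.randint(10, 99) for _ in range(N)]   # uniform over {10,...,99}
    D1 = int(a * sum(p) / 2)                      # D1 = floor(a * sum(p) / 2)
    D2 = 2 * D1
    releases = []
    for _ in range(N):
        # pick r1 = [0, R*D1] or r2 = [D1, D1 + R*D1], each with prob. 0.5
        lo = 0 if rng.random() < 0.5 else D1
        releases.append(rng.uniform(lo, lo + R * D1))
    return list(zip(releases, p)), D1, D2
```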

Table 1 For each value of N, the number of instances (out of 45) solved within a CPU time limit of 30 minutes, and the mean CPU time (in seconds) over the solved instances

N    Solved instances    Mean CPU time (s)
30   45                  9.3
40   45                  56.7
50   45                  274.8
60   44                  668.4
70   28                  797.8
80   9                   1186.6

Table 1 shows that it becomes hard to solve instances with 70 jobs or more within reasonable CPU time: for the 70-job instances, seventeen instances require more than 30 minutes to be solved.

Further tests were performed to study the influence of D1 and D2 on the efficiency of the algorithm. We fixed N = 40 and R = 0.3, and introduced the following parameters:
– A ∈ {0.3, 0.4, 0.5, 0.7, 0.8, 0.9, 1, 1.2, 1.5},
– B ∈ {0.1, 0.3, 0.4, 0.6, 0.8, 0.9},
where A (resp. B) represents the ratio D2/∑_{i=1}^{N} pi (resp. D1/D2). For each pair of values of A and B, 70 instances are generated, as described above, except for the delivery dates, which are derived from parameters A and B. The average CPU times are provided in Table 2. We can notice that when D1 is very small or very large compared to D2 (B = 0.1 or B = 0.9), the CPU time is usually shorter than for other values of B. This can be explained by noticing that, in these two cases, the instances are very similar to instances of the single delivery date problem 1|ri|V. For the same reason, the instances with B = 0.8 are also quite easy.

Table 2 For each pair (A, B), mean CPU time over 70 instances (in seconds)

A\B    0.1    0.3    0.4    0.6    0.8    0.9
0.3    2      9.4    11.8   12.3   5.2    2.8
0.4    5.7    21.5   28.2   24.1   13.1   4.4
0.5    8.1    32.2   43.8   45.1   16.5   5.5
0.7    16.5   57.8   70     58.9   28.9   8.6
0.8    22.5   78.1   72.3   55.4   30.8   10.9
0.9    26.9   81.4   83.4   43.5   30.4   13.5
1      31.3   84.5   68.4   46.2   32.1   11
1.2    38.1   91.8   58.9   45.7   28.3   12.4
1.5    60.1   71.7   54.8   31.1   24.5   11.2

Conversely, the most difficult instances are those generated with B = 0.3 or B = 0.4. As for parameter A, we notice that when it increases, the CPU time increases up to some point, then it decreases. Since an increase of A implies an increase of D2, the initial increase of the CPU times can be explained by observing that the complexity of the algorithm is O(N log N + N·D1(D2)²). However, when D2 is very large, almost all the jobs can be scheduled before D2, and it then becomes easier to obtain an optimal solution. This can explain the observed decrease in the CPU times.

4 Conclusions

This paper introduces a single machine scheduling problem with a new criterion based on cumulative payoffs with respect to a set of delivery dates; release dates are considered for the jobs. The strong NP-hardness of the general problem is established, a pseudopolynomial time algorithm is developed for the two delivery dates case, and some polynomial cases are discussed. These results extend the complexity results known for single machine scheduling problems in which the number of late jobs is minimized, release dates are considered and a common due date is imposed.

Further research will be devoted to the design of a branch-and-bound algorithm in order to solve the general problem. Some bounds and dominance rules could be derived from the results obtained for the polynomial cases, and especially from the algorithm of Section 2.1.2. Furthermore, a heuristic approach could be developed to obtain good solutions for real-world instances of the problem.

Acknowledgements This work was supported by FUI project Dem@tFactory, financed by DGCIS, Conseil général de Seine-et-Marne, Conseil général de Seine-Saint-Denis, Ville de Paris, Pôle de compétitivité Cap Digital.

References

1. Brucker, P. (2007). Scheduling Algorithms. Berlin: Springer.
2. Detienne, B., Dauzère-Pérès, S. & Yugma, C. (2011). Scheduling jobs on parallel machines to minimize a regular step total cost function. Journal of Scheduling, 14(6), 523–538.
3. Garey, M. R., Johnson, D. S. & Sethi, R. (1976). The complexity of flowshop and jobshop scheduling. Mathematics of Operations Research, 1(2), 117–129.
4. Graham, R. L., Lawler, E. L., Lenstra, J. K. & Rinnooy Kan, A. H. G. (1979). Optimization and approximation in deterministic sequencing and scheduling: A survey. Annals of Discrete Mathematics, 5, 287–326.
5. Hall, N. G., Lesaoana, M. & Potts, C. N. (2001). Scheduling with fixed delivery dates. Operations Research, 49(1), 134–144.
6. Hall, N. G., Sethi, S. P. & Sriskandarajah, C. (1991). On the complexity of generalized due date scheduling problems. European Journal of Operational Research, 51(1), 100–109.
7. Janiak, A. & Krysiak, T. (2007). Single processor scheduling with job values depending on their completion times. Journal of Scheduling, 10(2), 129–138.
8. Janiak, A. & Krysiak, T. (2012). Scheduling jobs with values dependent on their completion times. International Journal of Production Economics, 135(1), 231–241.
9. Knuth, D. (1997). The Art of Computer Programming, Volume 3: Sorting and Searching (3rd ed.). Boston, Massachusetts: Addison-Wesley.
10. Moore, J. M. (1968). An n job, one machine sequencing algorithm for minimizing the number of late jobs. Management Science, 15(1), 102–109.
11. Pinedo, M. (1995). Scheduling: Theory, Algorithms, and Systems. Englewood Cliffs, New Jersey: Prentice Hall.
12. Raut, S., Swami, S. & Gupta, J. N. D. (2008). Scheduling a capacitated single machine with time deteriorating job values. International Journal of Production Economics, 114(2), 769–780.

A The dynamic programming algorithm In Algorithm 3, a 4-tuple (k1 ,t2 , k2 , v) is represented as a pair of a 3tuple and a payoff: (hk1 ,t2 , k2 i, v), in order to easily handle similar 4tuples. As for 4-tuples, two pairs (e, v) and (e0 , v0 ) are similar iff e = e0 . Moreover, we defined in Section 3.2 variables β1 and β2 : – β1 is the earliest possible completion time of J j if it is reinserted in S2 and S2 is empty: β1 = max(b j , k1 ) + p j ; notice that β1 = max(b j , k1 ) + p j = max(b j , k1 , k2 ) + p j since k2 = 0. – β2 is the earliest possible completion time of J j if it is reinserted in S2 and S2 is not empty: β2 = max(r j , k2 ) + p j ; notice that β2 = max(r j , k2 )+ p j = max(b j , k2 )+ p j , since k2 > D1 , and max(b j , k2 )+ p j = max(b j , k2 , k1 ) + p j , since k1 < k2 . Thus, in Algorithm 3, we use variable β = max(b j , k1 , k2 ) + p j instead of β1 and β2 . Theorem 4 Algorithm 3 returns the payoff of an optimal schedule. Proof Recall that jobs are numbered by nondecreasing release dates: J1 , . . . , JN . Moreover, recall that at each step j, three functions g1 , g2 , g3 were defined in Section 3.2, taking as argument a 4-tuple. We define the corresponding functions q1 , q2 , q3 , such that, for i ∈ {1, 2, 3}: gi (k1 ,t2 , k2 , v, j) = (k10 ,t20 , k20 , v0 ) ⇔ qi (hk1 ,t2 , k2 i, v, j) = (hk10 ,t20 , k20 i, v0 ). Finally, let Q0 = {(h0, D2 , 0i, 0)} and, for any j ∈ {1, . . . , N}, let Q j = ∪(e,v)∈Q j−1 (q1 (e, v, j) ∪ q2 (e, v, j) ∪ q3 (e, v, j)). Algorithm 3 constructs exactly the sets Q j , j = {1, . . . , N}, except for the similar pairs: for each subset of similar pairs, only one of the pairs with the maximal value of v is kept. Indeed, line 3 clearly adds {q3 (e, v, j) : (e, v) ∈ Q j−1 } to Q j , lines 4-8 add {q1 (e, v, j) : (e, v) ∈ Q j−1 } to Q j and lines 9-13 add {q2 (e, v, j) : (e, v) ∈ Q j−1 } to Q j ; all these additions are performed while observing the avoidance of similar pairs. We show next that QN , when constructed without avoiding similar pairs, contains some pair corresponding to an optimal schedule. Therefore, since the removal of similar pairs from Q j , j = 1, . . . , N, clearly does not prevent to have at least one pair corresponding to an optimal schedule in QN , that will prove that Algorithm 3 returns an optimal payoff. For any ERD-schedule S where S2 is a block, we denote by f (S) the pair (e, v) corresponding to S’s 3-tuple and payoff respectively. Let S∗ be an optimal ERD-schedule where S2∗ is a block. Let us denote by {Ji1 , . . . , Jil }, i1 < i2 < · · · < il , the set of jobs of S∗ that complete before D2 . For k = 1, .., l, let Sik be the schedule that satisfies all the following conditions, with minimum k1 and k2 (among all the schedules satisfying the same conditions): 1. Sik schedules all the jobs Ji1 , . . . , Jik before D2 , and all the other jobs after D2 2. if Ciq (S∗ ) ≤ D1 , then Ciq (Sik ) ≤ D1 , ∀q ∈ {1, . . . , k} 3. if D1 < Ciq (S∗ ) ≤ D2 , then D1 < Ciq (Sik ) ≤ D2 , ∀q ∈ {1, . . . , k} i 4. Sik is an ERD-schedule where S2k is a block To conclude the proof, we show by induction that, for every k ∈ {1, . . . , l}, f (Sik ) ∈ Qik , which implies f (Sil ) ∈ Qil ⊆ QN , which indeed proves the theorem, since v(Sil ) = v(S∗ ). First step of the induction. The only 3-tuple of set Q0 corresponds to a schedule with no jobs before D2 . If i1 > 1, for all i ∈ {1, . . . , i1 −1}: (h0, D2 , 0i, 0) ∈ {q3 (e, v, i) : (e, v) ∈ Qi−1 } ⊆ Qi , by definition of Qi . Then, there are two cases. 
If C_{i_1}(S∗) ≤ D1, then, in S^{i_1}, J_{i_1} completes at its earliest possible completion time r_{i_1} + p_{i_1}, and all the other jobs are executed after D2. Therefore, f(S^{i_1}) = (⟨r_{i_1} + p_{i_1}, D2, 0⟩, 2) = q1(⟨0, D2, 0⟩, 0, i_1) ∈ {q1(e, v, i_1) : (e, v) ∈ Q_{i_1−1}} ⊆ Q_{i_1}. Otherwise, if D1 < C_{i_1}(S∗) ≤ D2, then, in S^{i_1}, job J_{i_1} starts at its earliest possible starting time max(r_{i_1}, b_{i_1}), so as to complete in ]D1, D2]. Therefore, f(S^{i_1}) = (⟨0, max(r_{i_1}, b_{i_1}), max(r_{i_1}, b_{i_1}) + p_{i_1}⟩, 1) = q2(⟨0, D2, 0⟩, 0, i_1) ∈ {q2(e, v, i_1) : (e, v) ∈ Q_{i_1−1}} ⊆ Q_{i_1}.

Input: pi, ri (jobs sorted by nondecreasing ri), D1, D2
Output: The payoff of an optimal schedule
 1  Q0 ← {(⟨0, D2, 0⟩, 0)}
 2  for j = 1 to N do
        // Reinserting task Jj into S3
 3      Qj ← Qj−1
        // Reinserting task Jj into S1
 4      foreach (⟨k1, t2, k2⟩, v) ∈ Qj−1 do
 5          α ← max(rj, k1) + pj
 6          if α ≤ t2 then e ← ⟨α, t2, k2⟩ else e ← ⟨α, α, k2 + α − t2⟩
 7          if k1(e) ≤ D1 and k2(e) ≤ D2 then
 8              Qj ← Qj ∪ {(e, v + 2)} \ {(e, v′)}   if ∃(e, v′) ∈ Qj such that v′ < v + 2
                Qj ← Qj                               if ∃(e, v′) ∈ Qj such that v′ ≥ v + 2
                Qj ← Qj ∪ {(e, v + 2)}                otherwise
        // Reinserting task Jj into S2
 9      foreach (⟨k1, t2, k2⟩, v) ∈ Qj−1 do
10          β ← max(bj, k1, k2) + pj
11          if t2 = D2 then e ← ⟨k1, β − pj, β⟩ else e ← ⟨k1, t2 + β − pj − k2, β⟩
12          if k1(e) ≤ D1 and k2(e) ≤ D2 then
13              Qj ← Qj ∪ {(e, v + 1)} \ {(e, v′)}   if ∃(e, v′) ∈ Qj such that v′ < v + 1
                Qj ← Qj                               if ∃(e, v′) ∈ Qj such that v′ ≥ v + 1
                Qj ← Qj ∪ {(e, v + 1)}                otherwise
14  return max{v : (e, v) ∈ QN}

Algorithm 3: Pseudopolynomial algorithm
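To make the state transitions concrete, here is a minimal Python sketch of Algorithm 3 (not part of the original paper). The dictionary keyed by 3-tuples plays the role of the array or hash table mentioned in the proof of Proposition 4 below, merging similar pairs on the fly. The definition b = max(r, D1 + 1 − p), i.e. the earliest start letting a job complete strictly after D1 under integer dates, is our reading of the bj introduced in Section 3.2, which is not reproduced here; the function name is ours.

    def optimal_payoff(jobs, D1, D2):
        # jobs: list of (r_j, p_j) pairs with integer dates.
        jobs = sorted(jobs)  # ERD order: nondecreasing release dates
        # State <k1, t2, k2> -> best payoff v (similar pairs merged, cf. lines 8 and 13).
        Q = {(0, D2, 0): 0}
        for r, p in jobs:
            b = max(r, D1 + 1 - p)  # assumed reading of b_j (see above)
            nxt = dict(Q)           # line 3: reinsert J_j into S3 (state and payoff unchanged)
            for (k1, t2, k2), v in Q.items():
                # Lines 4-8: reinsert J_j into S1 (payoff +2), right-shifting S2 if needed.
                a = max(r, k1) + p
                e = (a, t2, k2) if a <= t2 else (a, a, k2 + a - t2)
                if e[0] <= D1 and e[2] <= D2 and nxt.get(e, -1) < v + 2:
                    nxt[e] = v + 2
                # Lines 9-13: append J_j to the block S2 (payoff +1).
                bta = max(b, k1, k2) + p
                e = (k1, bta - p, bta) if t2 == D2 else (k1, t2 + bta - p - k2, bta)
                if e[0] <= D1 and e[2] <= D2 and nxt.get(e, -1) < v + 1:
                    nxt[e] = v + 1
            Q = nxt
        return max(Q.values())  # line 14

    print(optimal_payoff([(0, 2), (1, 2)], D1=3, D2=5))  # prints 3

On this toy instance (r, p) = (0, 2), (1, 2) with D1 = 3, D2 = 5, only one job can complete by D1 (payoff 2) while the other completes within ]D1, D2] (payoff 1), hence the optimal payoff 3; the first transition performed also illustrates the base case f(S^{i_1}) = (⟨r_{i_1} + p_{i_1}, D2, 0⟩, 2) of the induction. Replacing the dictionary by an array indexed by (k1, t2, k2) yields directly the state bound used in Proposition 4.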

General step of the induction. Assume now that f(S^{i_k}) ∈ Q_{i_k} for k = 1, . . . , j − 1; we show that f(S^{i_j}) ∈ Q_{i_j}. Note first that f(S^{i_{j−1}}) ∈ Q_{i_j−1}, since q3 leaves pairs unchanged for the jobs J_{i_{j−1}+1}, . . . , J_{i_j−1}, which are all scheduled after D2. There are two cases.

If C_{i_j}(S∗) ≤ D1, then, in S^{i_j}, job J_{i_j} must start after both k1 and r_{i_j}, in order to satisfy condition 4 and to maintain feasibility. So, the earliest possible starting time of job J_{i_j} is max(r_{i_j}, k1). Moreover, S_2^{i_j} must start not earlier than both the starting time of S_2^{i_{j−1}} (since k2(S^{i_{j−1}}) is minimal) and the completion time of J_{i_j} in S^{i_j} (to avoid overlaps). Hence, if S_2^{i_{j−1}} starts before the completion time of J_{i_j} in S^{i_j} (i.e. max(r_{i_j}, k1(S^{i_{j−1}})) + p_{i_j} > t2(S^{i_{j−1}})), then S_2^{i_j} must start exactly at the completion time of J_{i_j} in S^{i_j}. Otherwise, S_2^{i_j} must start at t2(S^{i_{j−1}}). Hence q1 adds f(S^{i_j}) to Q_{i_j}.

Otherwise, if D1 < C_{i_j}(S∗) ≤ D2, then the earliest possible starting time of J_{i_j} in S^{i_j} is clearly max(b_{i_j}, k1) if S_2^{i_j} is empty, otherwise it is max(r_{i_j}, k2). Function q2 places the job precisely at that date and, in order for S_2^{i_j} to remain a block, it right-shifts all its jobs so that S_2^{i_j} contains no idle time. Hence, f(S^{i_j}) ∈ Q_{i_j}.

Proposition 4 Algorithm 3 computes the optimal payoff of an instance in pseudopolynomial time O(N D1 (D2)² + N log N).

Proof First, observe that, by lines 8 and 13, it is impossible that some Qj contains two pairs (e, v) and (e′, v′) with e = e′. So the number of elements in each Qj is bounded by the number of possible 3-tuples. Clearly, k1 can only range from 1 to D1 and k2 from D1 + 1 to D2; t2 can only range from b to D2 − 1. Overall, the number of states in each Qj is bounded by X = D1 × (D2 − D1) × (D2 − b). In the for loop of lines 2–13, we first copy Qj−1 into Qj, inducing a complexity of O(X); then, in the foreach loop of lines 4–8, we try to reinsert job Jj into S1 for each 3-tuple of Qj−1, inducing an overall complexity of O(X) to create the states e on line 6 and, by storing Qj as an array or a hash table, a complexity of O(X) to update Qj on line 8. The same complexity clearly applies to the foreach loop of lines 9–13. So, overall, the complexity of executing lines 3–13 is O(X) = O(D1 (D2)²). The for loop of lines 2–13 is executed N times. Finally, sorting the jobs in nondecreasing release date order can be achieved in O(N log N). Overall, the complexity of Algorithm 3 is O(N D1 (D2)² + N log N).

B Correctness of Algorithm 2

Proposition 3 Algo2(0, S) yields a feasible schedule S′ whose payoff is greater than or equal to v(S), and in which exactly N1M jobs complete no later than D1.

Proof We need to prove the following assertions:
1. The algorithm always terminates.
2. The returned schedule is feasible.
3. In the returned schedule, exactly N1M jobs complete no later than D1.
4. The returned schedule has a payoff greater than or equal to the payoff of the initial schedule.

Assertion 1: The algorithm always terminates. When Algo2(0, S′) is first called, Part 1 (lines 4-8) and/or Part 2 (lines 9-15) are executed. If we are in Part 1, the algorithm terminates, as there are no other calls to function Algo2. In Part 2, the algorithm terminates, unless there is a call to Algo2(0, S′) (line 14) or Algo2(1, S′) (line 15). Algo2(0, S′) will cause the execution of Part 1 (since n2 ∈ {0, 1}), which terminates. Therefore we only need to prove that Algo2(1, S′) terminates and, more generally, that Algo2(k, S′) terminates for k > 0.

Let us first consider the evolution of n1, n2, n3 on lines 20-29 and 31-38. At line 22, n3 increases by n2 and n2 becomes equal to 0; at line 24, n1 decreases by k − 1, and on lines 26-27 it decreases by 1; at line 28, n1 becomes equal to 0; at line 29, n3 becomes equal to 0. As for lines 31-38, we have: at line 34, n3 increases by k + 1 and n2 decreases by k + 1; at line 35, n1 decreases by k − 1; at line 37, n1 decreases by 1; at line 38, n3 decreases by k + 1.

Let us now consider all the calls to Algo2 and show that they cannot be executed indefinitely. When Algo2(0, S′) is called at line 41, the algorithm stops, since n2 ∈ {0, 1}. Hence we only need to examine the calls to Algo2(1, S′) on lines 15 and 40, and the call to Algo2(k + 1, S′) on line 18. The instruction of line 15 is executed at most once, namely at the first call of Algo2(0, S′), since the following calls of Algo2(0, S′) are executed only if n2 ∈ {0, 1}. As for the instruction of line 40, it can be executed only while n1 > 0 and n2 ≥ 2, which happens a limited number of times, since n1 and n2 strictly decrease on lines 20-29 and 31-38. Finally, the instruction Algo2(k + 1, S′) of line 18 cannot be executed indefinitely, since at some point we will have k = n1 or k + 1 = n2.

Assertion 2: The returned schedule is feasible. We need to show that each performed operation (LS or RS) is feasible. Let us examine each part of the pseudocode; we refer to the different cases considered in the definitions of the RS and LS operations earlier in this section.

Part 1. Line 5: the RS(Ji, 1, 3) operations are feasible (cf. Case 1). After executing this line, there are no more S̄1M-jobs in S1′, and therefore c = 0. Line 6: RS(Js, 2, 3) is feasible (cf. Case 1). After executing this line, if there is a straddling job Js, then Js is an S1M-job. Line 7: if there is a straddling job Js, the operation LS(Js, 2, 1) is performed first (since Js is an S1M-job), and then we perform LS(JM, 2, 1) for each of the other S1M-jobs JM in S2′. Since c = 0 and Js is an S1M-job, the total sum of the idle times between c and Cs − ps is at least Cs − D1. Therefore, LS(Js, 2, 1) is feasible (cf. Case 4.(b)). Hence, when LS(JM, 2, 1) is performed for each of the non-straddling S1M-jobs JM in S2′, there is no straddling job. Moreover, since c = 0 and all of these jobs are S1M-jobs, their total processing time is at most the total sum of the idle times in [c, D1]. Therefore, the LS(JM, 2, 1) operations are feasible (cf. Case 4.(a)). Line 8: LS(JM, 3, 1) is performed for every S1M-job in S3′. Since the reinserted jobs are all S1M-jobs, and since there is no straddling job and c = 0, the total sum of the idle times in [c, D1] is at least equal to the total processing time of those jobs. Therefore, the LS(JM, 3, 1) operations are feasible (cf. Case 3).

Part 2. Line 11: RS(Js, 2, 3) is feasible (cf. Case 1). After this operation there is no straddling job. Moreover, c is unchanged and c ≤ ts. Line 12: since Ji is an S1M-job, ri > ts implies pi ≤ D1 − ts. Therefore, if the condition of line 10 is true, pi ≤ D1 − ts. Hence, since c ≤ ts, the total amount of idle time in [c, D1] is greater than pi. Therefore, since there is no straddling job, LS(Ji, 2, 1) is feasible (cf. Case 4.(a)).

Part 3. Lines 20-21: notice that if flag is true, there are exactly k + 1 S1M-jobs in S2′, and thus there exists a unique Ek+1, which contains all the S1M-jobs in S2′, including Js. Line 22: the RS(JM, 2, 3) operations are feasible (cf. Case 1). If flag is true, there is no straddling job after these operations. If flag is false, it is possible that there is a straddling job Js after these operations: in this case, Js is an S̄1M-job. Line 23: RS(Js, 2, 3) is feasible (cf. Case 1). This operation is never performed if flag is true. After line 23 there is no straddling job. Lines 24-27: there are two cases, depending on the value of flag.
– If flag is false, we perform RS(Jj, 1, 2) for all the jobs Jj of Gk (lines 24 and 27). Since flag is false, at least k + 1 non-straddling jobs were moved to S3′ at line 22. Let Ek+1 be a set of k + 1 non-straddling jobs that were moved to S3′ at line 22. So, after lines 22 and 23, the total sum of the idle times in S2′ is at least equal to ∑Ji∈Ek+1 pi. Moreover, by induction hypothesis, ∑Jj∈Gk pj < ∑Ji∈Ek+1 pi. Therefore, the total sum of the idle times in S2′ is at least equal to ∑Jj∈Gk pj. We deduce (cf. Case 2) that RS(Jj, 1, 2) is feasible for all the jobs Jj of Gk.
– If flag is true, we perform RS(Jj, 1, 2) (line 24) only for the jobs of Gk−1, while for the unique job Jk of Gk \ Gk−1 we perform RS(Jk, 1, 3) (line 26). Since flag is true, exactly k + 1 jobs were moved to S3′ at line 22, one of them being Js. Therefore, exactly k non-straddling jobs were moved to S3′ at line 22. Let Ek be the set of those jobs. Moreover, the instruction of line 23 is not performed, since RS(Js, 2, 3) was already performed at line 22. So, after lines 22 and 23, the total sum of the idle times in S2′ is at least equal to ∑Ji∈Ek pi. By induction hypothesis, ∑Jj∈Gk−1 pj < ∑Ji∈Ek pi. Therefore, the total sum of the idle times in S2′ is at least equal to ∑Jj∈Gk−1 pj. We deduce (cf. Case 2) that RS(Jj, 1, 2) is feasible for all the jobs Jj of Gk−1. Finally, the operation RS(Jk, 1, 3) at line 26 is feasible (cf. Case 1).
Line 28: the RS(Ji, 1, 3) operations are feasible (cf. Case 1). After these operations, there are no more S̄1M-jobs in S1′, therefore c = 0. Line 29: LS(JM, 3, 1) is performed for every S1M-job in S3′. Since the reinserted jobs are all S1M-jobs, and since there is no straddling job and c = 0, the total sum of the idle times in [c, D1] is at least equal to the total processing time of those jobs. Therefore, the LS(JM, 3, 1) operations are feasible (cf. Case 3).

Part 4. Recall that the following condition holds: ∃Ek+1 such that ∑Jj∈Gk pj ≥ ∑Ji∈Ek+1 pi. Lines 31-33: if the condition of line 31 is true, then for all Ji ∈ Ek+1 we have ri ≤ ts and pi > D1 − ts, because of the instructions of lines 10-12. Indeed, none of the operations performed in the algorithm transforms a schedule without straddling job into a schedule with a straddling job, as can be seen from the definitions of the left-shift and right-shift operators. Therefore, any job Ji of Ek+1 can be rescheduled so as to start at time ts, by right-shifting Js. After this exchange, Ji is the actual straddling job. Line 34: the RS(Ji, 2, 3) operations are feasible (cf. Case 1). After executing these operations there is no straddling job. Line 35: since at least k non-straddling jobs were moved to S3′ at line 34, the total sum of the idle times in S2′ is at least equal to ∑Ji∈Ek pi, with Ek a set of k non-straddling jobs moved to S3′ at line 34. By induction hypothesis, ∑Jj∈Gk−1 pj < ∑Ji∈Ek pi. Therefore, performing RS(Jj, 1, 2) on every job Jj of Gk−1 is feasible (cf. Case 2). Line 37: RS(Jk, 1, 3) is feasible (cf. Case 1). Line 38: at lines 35-37, all the jobs of Gk were removed from S1′. Therefore, the total amount of idle time in [c, D1] is at least ∑Jj∈Gk pj. Since ∑Jj∈Gk pj ≥ ∑Ji∈Ek+1 pi, the total amount of idle time in [c, D1] is at least ∑Ji∈Ek+1 pi. For this reason, and since there is no straddling job, we deduce that performing LS(Ji, 3, 1) on every job Ji of Ek+1 is feasible (cf. Case 3).

Assertion 3: In the returned schedule, exactly N1M jobs complete no later than D1. Since the terminal condition n1 = n2 + n3 implies that exactly N1M jobs complete no later than D1, it is sufficient to prove that whenever the algorithm terminates, the terminal condition is true. In Part 1, at line 5, n1 becomes equal to 0; at line 7, n2 becomes equal to 0; and at line 8, n3 becomes equal to 0. Therefore, the terminal condition is true. In Part 2, the algorithm stops only if the condition n1 < n2 + n3 (line 13) is not satisfied, which implies that the terminal condition is true, since we cannot have n1 > n2 + n3: otherwise there would be a contradiction with N1M being the maximal number of jobs that can complete no later than D1. If Part 3 terminates (lines 20-29), we have n1 = n2 = n3 = 0, as shown in the proof of Assertion 1; therefore the terminal condition is true. Finally, Part 4 terminates only if the condition n1 < n2 + n3 (line 39) is not satisfied, which, as said above, implies that the terminal condition is true.

Assertion 4: The returned schedule has a payoff greater than or equal to the payoff of the initial schedule. In order to prove this assertion, we show that the execution of any of the aforementioned parts does not decrease the payoff. Let nb1 (resp. nb2, nb3) be the value of n1 (resp. n2, n3) at the beginning of the sequence of instructions of a given part. Let us consider each part.
– Lines 5-8: at line 5, nb1 operations RS(Ji, 1, 3) are performed, inducing a payoff variation of −2nb1; if RS(Js, 2, 3) is performed at line 6, it induces a payoff variation of −1; the nb2 operations LS(JM, 2, 1) of line 7 induce a payoff variation of nb2; and the nb3 operations LS(JM, 3, 1) of line 8 induce a payoff variation of 2nb3. Thus, the total payoff variation is at least nb2 + 2nb3 − 2nb1 − 1. If nb1 = 0, the payoff variation is at least nb2 + 2nb3 − 1 ≥ 0, since 0 = nb1 < nb2 + nb3. Otherwise (i.e. nb1 > 0), if nb2 = 0, then nb1 < nb2 + nb3 = nb3 and therefore nb2 + 2nb3 − 2nb1 − 1 ≥ 0; else (i.e. nb2 = 1), nb1 < nb3 + 1, thus 2nb3 ≥ 2nb1 and therefore nb2 + 2nb3 − 2nb1 − 1 ≥ 0.
– Lines 11-12: the RS(Js, 2, 3) operation at line 11 induces a payoff variation of −1, while the LS(Ji, 2, 1) operation at line 12 induces a payoff variation of 1. Therefore, the total payoff variation is 0.
– Lines 20-29: at line 22, we perform nb2 times the operation RS(JM, 2, 3) (payoff variation: −nb2). If flag is true, there is no straddling job after the operation of line 22; in this case, at line 24 we perform k − 1 times the operation RS(Jj, 1, 2) (payoff variation: −k + 1) and at line 26 the operation RS(Jk, 1, 3) (payoff variation: −2). Otherwise, if flag is false, the operation RS(Js, 2, 3) at line 23 can possibly be performed (payoff variation: −1), and k operations RS(Jj, 1, 2) (payoff variation: −k) are performed at lines 24 and 27. Finally, at line 28 we perform nb1 − k times the operation RS(Ji, 1, 3) (payoff variation: −2(nb1 − k)), and at line 29 we perform nb2 + nb3 times the operation LS(JM, 3, 1) (payoff variation: 2(nb2 + nb3)). Therefore, the total payoff variation is at least 2nb3 + nb2 + k − 2nb1 − 1. If nb1 = k, the payoff variation is at least 2nb3 + nb2 − nb1 − 1 ≥ 0, since nb1 < nb2 + nb3. If nb2 = k + 1, the payoff variation is at least 2(nb3 + nb2 − nb1 − 1) ≥ 0, since nb1 < nb2 + nb3.
– Lines 31-38: the exchange performed at lines 31-33 does not change the payoff; at line 34 we perform k + 1 times the operation RS(Ji, 2, 3) (payoff variation: −k − 1); at line 35 we perform k − 1 times the operation RS(Jj, 1, 2) (payoff variation: −k + 1); at line 37 we perform RS(Jk, 1, 3) (payoff variation: −2); and at line 38, k + 1 times the operation LS(Ji, 3, 1) (payoff variation: 2(k + 1)). Therefore, the total payoff variation is 0.
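Since the case analysis above is easy to slip on, the following small brute-force check (ours, not from the paper) verifies the two nontrivial lower bounds on the payoff variation under exactly the hypotheses invoked in the proof: nb1 < nb2 + nb3 throughout, nb2 ∈ {0, 1} when Part 1 runs, and nb1 = k or nb2 = k + 1 when Part 3 runs.

    from itertools import product

    R = range(12)  # small sampled range; the bounds are linear, so this is indicative

    # Part 1 (lines 5-8): variation >= nb2 + 2*nb3 - 2*nb1 - 1, with nb2 in {0, 1}.
    for n1, n2, n3 in product(R, (0, 1), R):
        if n1 < n2 + n3:
            assert n2 + 2 * n3 - 2 * n1 - 1 >= 0, (n1, n2, n3)

    # Part 3 (lines 20-29): variation >= 2*nb3 + nb2 + k - 2*nb1 - 1,
    # needed only in the cases nb1 = k or nb2 = k + 1.
    for n1, n2, n3, k in product(R, R, R, R):
        if n1 < n2 + n3 and (n1 == k or n2 == k + 1):
            assert 2 * n3 + n2 + k - 2 * n1 - 1 >= 0, (n1, n2, n3, k)

    print("payoff-variation bounds hold on the sampled range")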