Capacity Allocation and Rescheduling in Supply Chains




Zhixin Liu, 2007


In the first part of the dissertation, we study the problem of rescheduling for multiple new orders. Assume that a set of original jobs has been scheduled on a single machine, but not processed, when a set of new jobs arrives. The decision maker needs to insert the new jobs into the existing schedule without excessively changing it. The objective is minimization of the maximum lateness of the jobs, subject to a customer service requirement modeled by a limit on the maximum time change of the original jobs. Since the schedule of the original jobs can be arbitrary, this problem models multiple disruptions from repeated new job arrivals. We show that this scheduling problem is intractable, even if no new jobs arrive. We describe several approximation algorithms and analyze their worst-case performance. Next, we develop a branch and bound algorithm that uses a variable neighborhood descent algorithm to obtain an initial upper bound, several dominance properties that we establish, and a lower bounding scheme based on a preemptive relaxation of the problem. The branch and bound algorithm solves 99.9% of randomly generated instances with up to 1000 jobs within 60 seconds. Our work demonstrates for the first time that optimization of large scale, intractable rescheduling problems is possible. More generally, it refocuses the literature on scheduling problems towards rescheduling issues.

In the second part of the dissertation, we consider a multiple product supply chain where a manufacturer receives orders from several distributors. If the orders cannot all be met from available production capacity, then the manufacturer allocates that capacity among the distributors. The distributors may share their allocated capacity among themselves before submitting revised orders. Finally, the manufacturer schedules the revised orders to minimize its cost. We consider three practical coordination issues. First, we estimate the benefit to the manufacturer from considering scheduling costs and constraints in making capacity allocation decisions. Second, we estimate the additional profit that the distributors achieve when they share their allocated capacity. Third, we estimate the value of coordination between the manufacturer and the distributors. Our work is the first to consider all three issues simultaneously. We model scheduling costs and constraints within the manufacturer's capacity allocation problem, model the distributors' capacity sharing problem as a cooperative game which has properties that are unique within cooperative game theory, and develop optimal algorithms for all the models defined by the three coordination issues. Our exact evaluation of decisions about the appropriate coordination level improves managers' ability to make those decisions.

In the third part of the dissertation, we consider a problem where a group of agents, each with a set of jobs, need to schedule their jobs on a common processing facility. Each agent wants to minimize an objective function which depends on its own job completion times. Time slots are allocated to the various jobs based on the bids of the participating agents. We investigate the efficiency and effectiveness of three ascending auction mechanisms, with market goods consisting of time slots, fixed time blocks, and flexible time blocks, respectively. For all three market goods, we demonstrate the efficiency of our auction mechanism, and develop optimal algorithms for the winner-determination and bid-determination problems. We find that if flexible time blocks are defined as market goods, then it is guaranteed that the allocation which results from the auction is a Pareto optimal solution. Finally, by developing a counterexample to a well known result, we show that for the case of time blocks, there may not exist an equilibrium solution that is globally optimal.


Dedicated to:

Meng, the love of my life

My parents, for their support


I would like to express my sincere appreciation to my adviser Professor Nicholas G. Hall, whose inspiration, guidance and advice made this dissertation possible, and who has influenced me far beyond this dissertation. During the four years of my Ph.D. study, I have learned a great deal academically and personally from him.

My thanks go to my dissertation committee for their intellectual support. I am grateful to Professor Marc E. Posner for insightful advice and discussion regarding Chapter 3. He also taught me several optimization courses and introduced me to recent developments in scheduling theory. Thanks to Professor Suvrajeet Sen for his time spent in reading this dissertation.

I would like to thank Professor Chris N. Potts, for his guidance and suggestions regarding Chapter 2. He also verified most of the technical details in this chapter. Thanks to Wenhui and Xiaoping, for their encouragement and help regarding every aspect of my life. Thanks to Kejian, for his advice and help about living in Columbus.

I feel very fortunate to know these people, and many others. Space does not permit mentioning all of them, for which I apologize.


The Great Wall Computer School

2003-present Graduate Teaching Associate, Graduate Research Associate, The Ohio State University

PUBLICATIONS

Research Publications

J. Xie, W. Xing, Z. Liu, J. Dong, "Minimum Deviation Algorithm for Two-Stage No-Wait Flowshops with Parallel Machines", Computers & Mathematics with Applications, 47:1857–1863, 2004.

FIELDS OF STUDY

Major Field: Business Administration

Minor Field: Management Information System

Minor Field: Logistics


TABLE OF CONTENTS

Page

Abstract ii

Dedication v

Acknowledgments vi

Vita vii

List of Tables x

List of Figures xii

Chapters:

1 Introduction 1

2 Rescheduling for Multiple New Orders 5

2.1 Introduction 5

2.2 Problem Definition and Computational Complexity 8

2.3 Approximation 12

2.3.1 Analysis of specific classes of schedules 13

2.3.2 A best possible approximation algorithm 18

2.4 Branch and Bound Algorithm 22

2.4.1 Lower bounds 22

2.4.2 Upper bounds 22

2.4.3 Structural properties 27

2.4.4 Branch and bound strategy 30

2.5 Computational Results 31

2.5.1 Heuristics 32


2.5.2 Branch and bound algorithm 35

2.6 Concluding Remarks 42

3 Capacity Allocation and Scheduling in Supply Chains 44

3.1 Introduction 44

3.2 Preliminaries 48

3.3 Uncoordinated Supply Chain: Manufacturer 51

3.3.1 Capacity allocation 51

3.3.2 Scheduling revised orders 61

3.4 Uncoordinated Supply Chain: Distributors 62

3.4.1 With partial orders 62

3.4.2 Without partial orders 70

3.5 Coordinated Supply Chain 78

3.6 Computational Study 81

3.6.1 Manufacturer’s coordination 82

3.6.2 Distributors’ coordination 85

3.6.3 Supply chain coordination 87

3.7 Setup Times 89

3.8 Conclusions 97

4 Noncooperative Scheduling Games with Auctions 99

4.1 Introduction 99

4.2 Preliminaries 101

4.3 Noncooperative Scheduling Games 106

4.3.1 Time slots 106

4.3.2 Fixed time blocks 108

4.3.3 Flexible time blocks 114

4.4 Equilibrium Solution with Time Blocks 117

4.5 Conclusions 119

Bibliography 121


LIST OF TABLES

2.1 Data for Example 1 15

2.2 Data for Example 2 17

2.3 Data for Example 3 18

2.4 Data for Example 4 20

2.5 Effect of n on Performance of the Heuristics 34

2.6 Effect of n on Performance of the Algorithm 35

2.7 Detailed Results for the Branch and Bound Algorithm 38

2.8 Effect of n on Performance of the Algorithm on Difficult Instances 40

2.9 Effect of Generation Number on Performance for Ancestor Instances 41

3.1 Instance of Distributors' LP Game in Theorem 11 68

3.2 Knapsack Game of Theorem 13 73

3.3 Instance of Knapsack Game with Only Fractional Core Members 76

3.4 Independent Variables to Evaluate Coordination 83

3.5 Selected ANOVA Results of Manufacturer’s Coordination 84

3.6 Sensitivity Analysis for Manufacturer’s Coordination 84


3.7 Selected ANOVA Results on Value of Distributors' Coordination 85

3.8 Effect of Partial Orders on Value of Distributors' Coordination 86

3.9 Effect of Revenue / Profit Correlation on Value of Distributors' Coordination 86

3.10 Selected ANOVA Results for Supply Chain Coordination 87

3.11 Effect of Revenue / Profit Correlation on Value of Supply Chain Coordination 88

3.12 Effect of Number of Products and Partial Orders on Value of Supply Chain Coordination 88

3.13 Sensitivity Analysis for Coordination with Setup Times 95


LIST OF FIGURES

1.1 Capacity Allocation and Scheduling Problem 3


CHAPTER 1

INTRODUCTION

Scheduling theory originated in the 1950's, and has developed extensively since. This research area considers the allocation of resources over time to perform a collection of tasks, with a set of objectives to optimize (Brucker 1998, Pinedo 2002). Classical scheduling literature assumes that all the tasks and scheduling facilities belong to a single decision maker, and typically requires all the tasks to be processed to minimize the total scheduling cost. However, with the rapid development of complex modern production and supply systems, it is becoming increasingly important for a company to make efficient operational decisions subject to unpredictable customer demand and complicated interactive behaviors of customers, suppliers, partners and competitors. In this dissertation, we study scheduling problems with capacity constraints faced by manufacturers in competition or coordination with other supply chain members. Also, in our work, the jobs to be scheduled belong to various entities, and we use game theory to study problems of capacity allocation and payoff division. This dissertation is organized into three parts, as follows.

In Chapter 2, we study the problem of rescheduling for multiple new orders. In recent years, rescheduling has attracted considerable attention, motivated by both important practical applications and interesting research problems (Vieira et al. 2003). In modern decision making and manufacturing systems, unexpected disruptions commonly occur, which necessitates rescheduling. In a production system facing unpredictable demand, managers and planners must generate high quality schedules, and react quickly to unexpected events to revise their schedules effectively. Specifically, Chapter 2 considers a problem where a set of original jobs has been scheduled on a single machine, but not processed, when a set of new jobs arrives. The decision maker needs to insert the new jobs into the existing schedule without excessively changing it. The objective is minimization of the maximum lateness of the jobs, subject to a customer service requirement modeled by a limit on the maximum time change of the original jobs. Since the schedule of the original jobs can be arbitrary, this problem models multiple disruptions from repeated new job arrivals. Chapter 2 is accepted for journal publication (Hall et al. 2007).

Chapter 3 considers a scheduling problem with capacity allocation issues which arise when manufacturers are unable to meet all the orders which they have received. These issues are most common in industries with high fashion content, rapid technological development, or occasional demand surges (Fisher 1997). We consider capacity allocation and scheduling issues in a make-to-order supply chain. The supply chain includes a manufacturer that produces multiple products in response to orders from several distributors. We study three practical and significant coordination issues that arise in this supply chain when the manufacturer's production capacity is insufficient to meet all the orders. First, we estimate the value of considering scheduling costs and constraints as part of the manufacturer's decision to allocate its capacity to the distributors across products. Allocation of capacity to distributors allows more flexibility than allocation of products. Second, we estimate the value of coordination among the distributors in sharing their allocated capacity. Finally, we estimate the value of coordination between the manufacturer and the distributors. An important contribution of our work is the consideration of these three interrelated coordination issues simultaneously. An overview of the capacity allocation and scheduling problem appears in Figure 1.1. The left branch represents the uncoordinated system, and the right branch represents the system where the manufacturer and distributors coordinate their decisions. Chapter 3 is submitted for journal publication (Hall and Liu 2007).

Figure 1.1: Capacity Allocation and Scheduling Problem. [Flowchart. Top box: the manufacturer receives orders, but has insufficient capacity to meet them all. Left (uncoordinated) branch: the manufacturer allocates capacity to the distributors, and invites them to revise their orders; the distributors share their allocated capacity, and revise their orders to maximize profit; the manufacturer schedules the revised orders to maximize its profit. Right (coordinated) branch: the manufacturer and distributors jointly determine the orders and the schedule to maximize the overall system profit.]

In Chapter 4, we study a competitive scheduling problem. Solution methods for scheduling problems are traditionally developed under the assumption of centrally available information or distributed information with cooperative behavior. Nevertheless, centralized methods may not be available when agents have competing interests and privately held information about job requirements and values. In a decentralized scheduling problem, jobs may represent operations such as specific industrial finishing, mechanical testing, quality control, components assembly, and equipment maintenance and repair. We consider problems where a group of agents, each with a set of jobs, needs to schedule their jobs on a common processing facility. Each agent wants to minimize an objective function which depends on its own job completion times. We develop ascending auction models to allocate capacity. Time slots are allocated to alternative jobs based on the bids of the participating agents. We investigate the efficiency and effectiveness of various auction mechanisms, and the optimization of the agents' bidding policies.


CHAPTER 2

RESCHEDULING FOR MULTIPLE NEW ORDERS

2.1 Introduction

In general, there are many possible disruptions that can make rescheduling necessary, including the arrival of new orders, machine breakdowns, shortage of materials or resources, larger or smaller than expected processing times, cancellation of orders, changes in order priority, and due date changes.

There are several well-documented real-world applications of rescheduling. Bean et al. (1991) consider an automobile industry application. They propose a matchup scheduling approach that compensates for a disruption. Zweben et al. (1993) describe the GERRY scheduling and rescheduling system that supports the space shuttle, using iterative repair heuristics. Clausen et al. (2001) describe a shipyard application, where the goal of rescheduling is to store large steel plates for efficient access. A similar problem occurs in the assignment of stacker cranes to berths at container ports. Yu et al. (2003), in their work that received the 2002 Edelman Award of INFORMS (http://www.informs.org/Prizes/EdelmanPrizeDetails.html), discuss a short-range airline planning problem. They describe an optimization-based approach for rescheduling to compensate for air traffic, weather, crew unavailability, and other disruptions, an approach that helped Continental Airlines recover after the terrorist attacks of September 11, 2001.

Because of the practical importance of rescheduling, a number of researchers propose rescheduling approaches for a variety of scheduling environments. Szelke and Kerr (1994), Davenport and Beck (2000), Herroelen and Leus (2005), Vieira et al. (2003) and Aytug et al. (2004) provide extensive reviews of the rescheduling literature, including taxonomies, strategies and algorithms, in both deterministic and stochastic environments.

Wu et al. (1993) suggest a composite objective rescheduling approach for a single machine problem, and design heuristic procedures for solving it. Their criteria include minimizing the makespan and the impact of the schedule change. Unal et al. (1997) consider a single machine problem with newly arrived jobs that have setup times that depend on their part types. They consider inserting new jobs into the original schedule to minimize the total weighted completion time, or makespan, of the new jobs, without incurring additional setups or causing jobs to become late. These constraints may be too restrictive in practice. Hall and Potts (2006) consider problems where a set of jobs becomes available later than expected, after a schedule has been determined to minimize a classical cost objective. In the new schedule, the limit on allowable disruption is measured by the maximum time disruption to any of the jobs between the original and adjusted schedules.

This chapter discusses the following rescheduling problem. A set of original jobs has been scheduled to minimize a given cost objective, and then several sets of new jobs arrive unexpectedly. The objective used in the original schedule still needs to be minimized over all of the jobs. However, this will change the original schedule, reducing customer satisfaction and creating havoc with the original resource allocations. Thus, the tradeoff between the scheduling cost and the disruption cost needs to be considered in detail. For job arrivals that can be modeled as a single new order, this problem is studied by Hall and Potts (2004). They measure the disruption either by the amount of resequencing or by the amount of time disruption, and in either case this disruption is treated both as a constraint and as a cost.

In many practical problems however, consideration of a single new order, as in Hall and Potts (2004), is not general enough. Complex and dynamic scheduling environments may generate disruptions that occur at different times and cannot be viewed as a single new order. Therefore, we consider multiple arriving new orders. Here, as a result of previous disruptions, it is not necessarily the case that the original schedule optimizes the given scheduling measure. Therefore, important properties of the original schedule are lost, and the problem becomes significantly more difficult to solve.


In Section 2.2, we introduce our notation, formally define the rescheduling problem, and present a computational complexity result. Section 2.3 analyzes the worst-case performance of several specific classes of schedules and of an approximation algorithm. In Section 2.4, we propose lower and upper bounding schemes, and establish dominance properties that are then integrated into a branch and bound algorithm. In Section 2.5, we provide an extensive computational study of this algorithm. Finally, Section 2.6 contains a summary of the chapter, along with some suggestions for future research.

2.2 Problem Definition and Computational Complexity

In this section, we give a formal definition of the rescheduling problem and then discuss its computational complexity.

Let JO = {1, . . . , nO} denote a set of original jobs to be processed nonpreemptively on a single machine. Let π∗ denote a schedule that minimizes the lateness of the latest job in JO. From Jackson (1955), a possible construction for π∗ is to sequence the jobs in nondecreasing order of their due dates (EDD order) and schedule them without idle time between the jobs. Let π denote an arbitrary schedule for JO, possibly including some idle time between the jobs. Also, let JN denote a set of new jobs, where nN = |JN|. We assume that all the new jobs arrive at time zero, after a schedule for the jobs of JO has been determined, but before processing begins. There is no loss of generality in this assumption: if the jobs arrive after time zero, then the processed jobs of JO are removed from the problem, any partly processed jobs are processed to completion, and JO and nO are updated accordingly.
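As a concrete illustration of Jackson's EDD rule (not part of the dissertation), the following minimal sketch sequences hypothetical jobs in nondecreasing due-date order and reports the maximum lateness; all data and names are ours.

```python
def edd_schedule(jobs):
    """Jackson's EDD rule (sketch): sequence jobs in nondecreasing due-date
    order and process them without idle time from time zero.
    jobs: dict mapping job id -> (processing_time, due_date)."""
    sequence = sorted(jobs, key=lambda j: jobs[j][1])
    t, completion = 0, {}
    for j in sequence:
        p, d = jobs[j]
        t += p                              # no idle time between jobs
        completion[j] = t
    max_lateness = max(completion[j] - jobs[j][1] for j in sequence)
    return sequence, completion, max_lateness

if __name__ == "__main__":
    original_jobs = {1: (3, 5), 2: (2, 4), 3: (4, 10)}   # hypothetical (p_j, d_j)
    print(edd_schedule(original_jobs))
    # EDD order [2, 1, 3]; completions 2, 5, 9; maximum lateness 0
```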


Let J = JO ∪ JN, and n = nO + nN. Also, let pj denote the positive processing time of job j, and let dj denote the due date of job j (which can be negative), for j ∈ J.

Let ν ∈ {π∗, π}. For any schedule σ of the jobs of J, we define the following variables:

Cj(σ) = the completion time of job j, for j ∈ J;

Lj(σ) = Cj(σ) − dj, the lateness of job j, for j ∈ J;

∆j(ν, σ) = |Cj(σ) − Cj(ν)|, the time disruption of job j, for j ∈ JO,

where the time disruption of job j in schedule σ is the absolute value of the difference between the finishing time of that job in σ and ν. When there is no ambiguity, we simplify Cj(σ), Lj(σ), and ∆j(ν, σ) to Cj, Lj, and ∆j(ν), respectively. Let Cmax = max_{j∈J}{Cj}, Lmax = max_{j∈J}{Lj}, and ∆max(ν) = max_{j∈JO}{∆j(ν)}. The time disruption measure models penalties associated with the change of delivery times of jobs to customers, and the cost of rescheduling the resources so that they are available at the new times when they are required.

Let k denote the given upper bound on the allowed time disruption of any original job. Following the standard α|β|γ classification scheme for scheduling problems (Graham et al. 1979), α indicates the scheduling environment, β describes the job characteristics or restrictive requirements, and γ specifies the objective function to be minimized. We consider only single machine problems, thus implying that α = 1. Under β, we use ∆max(π∗) ≤ k or ∆max(π) ≤ k to describe a constraint on the maximum time disruption for any job of JO. We also use pmtn to denote a schedule in which preemption of jobs is allowed. Finally, the objective is to minimize the maximum lateness, Lmax. Thus, our problems are denoted by 1|∆max(π∗) ≤ k|Lmax and 1|∆max(π) ≤ k|Lmax.


Problem 1|∆max(π) ≤ k|Lmax is more general than problem 1|∆max(π∗) ≤ k|Lmax, and as we discuss below much less tractable. For problem 1|∆max(π) ≤ k|Lmax, which is the main focus of this chapter, the constraint ∆max(π) ≤ k implies that Cj ≥ Cj(π) − k and Cj ≤ Cj(π) + k. Thus, each job j, for j ∈ JO, has an implied release date ¯rj = Cj(π) − pj − k specifying the earliest time at which it can start processing, and an implied deadline ¯dj = Cj(π) + k specifying the time by which it must be completed; ¯rj = 0 and ¯dj = ∞ for j ∈ JN.
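The implied time windows follow directly from these formulas. The sketch below is a hypothetical helper, not code from the dissertation; it simply evaluates ¯rj = Cj(π) − pj − k and ¯dj = Cj(π) + k.

```python
import math

def implied_windows(p, C_pi, original_jobs, all_jobs, k):
    """Implied time windows (sketch): r_bar_j = C_j(pi) - p_j - k and
    d_bar_j = C_j(pi) + k for original jobs; 0 and infinity for new jobs."""
    r_bar, d_bar = {}, {}
    for j in all_jobs:
        if j in original_jobs:
            r_bar[j] = C_pi[j] - p[j] - k   # earliest allowed start
            d_bar[j] = C_pi[j] + k          # latest allowed completion
        else:
            r_bar[j] = 0                    # new jobs are unconstrained
            d_bar[j] = math.inf
    return r_bar, d_bar
```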

It is important to note that, as a result of previous disruptions, the original schedule, denoted by π, may no longer be optimal with respect to the scheduling cost. Consequently, we allow the original schedule to be arbitrary, possibly containing machine idle time. Thus, 1|∆max(π) ≤ k|Lmax corresponds to the multiple-disruption rescheduling problem.

Hall and Potts (2004) present an O(n + nN log nN) time algorithm to solve problem 1|∆max(π∗) ≤ k|Lmax. They also prove that the recognition version of problem 1|∆max(π) ≤ k|Lmax for arbitrary π is unary NP-complete, which refers to problems for which no polynomial-time algorithm can exist, even if the data are unary encoded, unless P = NP (Garey and Johnson, 1979). We now provide a new proof of this computational complexity result, which shows that the same result holds even if no new jobs arrive.

Theorem 1 The recognition version of problem 1|∆max(π) ≤ k|Lmax with JN = ∅ is unary NP-complete.

Proof By reduction from the following problem, which is known to be unary NP-complete.


3-Partition (Garey and Johnson 1979): Given 3t elements with integer sizes a1, . . . , a3t, where ∑_{i=1}^{3t} ai = ty and y/4 < ai < y/2 for i = 1, . . . , 3t, does there exist a partition S1, . . . , St of the index set {1, . . . , 3t} such that |Sj| = 3 and ∑_{i∈Sj} ai = y for j = 1, . . . , t?

Consider the following instance of the recognition version of problem 1|∆max(π) ≤ k|Lmax: nO = 4t − 1; pi = ai and di = ty + t − 1 for i = 1, . . . , 3t; pi = 1 and di = (i − 3t)y + i − 3t for i = 3t + 1, . . . , 4t − 1; k = (t − 1)y; and C = 0, where C is a threshold value for Lmax. In schedule π, jobs 1, . . . , 3t are scheduled within the interval [0, ty] in arbitrary order without idle time, jobs 3t + 1, . . . , 4t − 1 are scheduled within the intervals [(i − 2t − 1)y + i − 3t − 1, (i − 2t − 1)y + i − 3t], for i = 3t + 1, . . . , 4t − 1, and idle time appears between the final t − 1 intervals.

We prove that there exists a feasible schedule for this instance of 1|∆max(π) ≤ k|Lmax with Lmax ≤ C if and only if there exists a solution to 3-Partition.

(⇒) Consider a no-idle schedule σ in which job 3t + 1 is processed in the interval [y, y + 1], job 3t + 2 is processed in the interval [2y + 1, 2y + 2], . . . , and job 4t − 1 is processed in the interval [(t − 1)y + t − 2, (t − 1)y + t − 1]. Then there are t intervals of length y, including the one starting at time (t − 1)y + t − 1. We process jobs 1, . . . , 3t in these t intervals. In each interval, there are exactly three jobs with total processing time y, and they are processed in the same order as in π. Also, the jobs in the first position in each interval are processed in the same order as in π. It follows that Lmax ≤ C. It is also clear that for the jobs 3t + 1, . . . , 4t − 1, we have ∆max = (t − 1)y. Now we consider the jobs in 1, . . . , 3t. First, if a job moves earlier from π to σ, the move is by no more than from ty in π to y in σ (which could possibly occur for the last job of π); thus, the time disruption is no greater than (t − 1)y. Second, if a job moves later from π to σ, this move is no more than from (t − 1)(y/4 + 1) in π to (t − 1)y + t − 1 in σ (which could possibly occur for the job that starts at time (t − 1)y + t − 1 in σ); therefore, the time disruption in this case is less than (t − 1)y. Thus, we obtain a feasible schedule with Lmax ≤ C.

(⇐) A feasible schedule σ with Lmax ≤ C implies that job 3t + 1 completes processing no later than time y + 1, so since ∆max(π) ≤ (t − 1)y, it must be processed in the interval [y, y + 1]. Similarly, job 3t + 2 must be processed in the interval [2y + 1, 2y + 2], . . . , and job 4t − 1 must be processed in the interval [(t − 1)y + t − 2, (t − 1)y + t − 1]. Moreover, each of the jobs 1, . . . , 3t must complete no later than time ty + t − 1. Therefore, there is no idle time in the schedule, and three jobs in each remaining interval have total processing time of exactly y. Therefore, there exists a solution to 3-Partition.
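For illustration only, the following sketch builds the scheduling instance used in this reduction from a given 3-Partition instance; it mirrors the data defined in the proof, and the function name and indexing conventions are ours.

```python
def reduction_instance(a, y):
    """Build the scheduling instance used in the proof of Theorem 1 from a
    3-Partition instance with sizes a[0], ..., a[3t-1] summing to t*y.
    Jobs are numbered 1, ..., 4t-1 as in the text (0-based list 'a')."""
    t = len(a) // 3
    p, d = {}, {}
    for i in range(1, 3 * t + 1):              # partition jobs
        p[i] = a[i - 1]
        d[i] = t * y + t - 1
    for i in range(3 * t + 1, 4 * t):          # unit-length "separator" jobs
        p[i] = 1
        d[i] = (i - 3 * t) * y + (i - 3 * t)
    k = (t - 1) * y
    # Original schedule pi: jobs 1..3t fill [0, t*y] consecutively; job i
    # (3t+1 <= i <= 4t-1) occupies [(i-2t-1)*y + i-3t-1, (i-2t-1)*y + i-3t].
    C_pi, finish = {}, 0
    for i in range(1, 3 * t + 1):
        finish += p[i]
        C_pi[i] = finish
    for i in range(3 * t + 1, 4 * t):
        C_pi[i] = (i - 2 * t - 1) * y + (i - 3 * t)
    return p, d, k, C_pi
```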

Let σ∗ denote an optimal schedule for problem 1|∆max(π) ≤ k|Lmax.

Corollary 1 Problem 1|∆max(π) ≤ k|Lmax does not have a polynomial-time approximation algorithm delivering a schedule σ with Lmax(σ) ≤ ρLmax(σ∗) for all instances for any finite ρ, unless P = NP.

Proof Since the threshold Lmax value used in the proof of Theorem 1 is zero, the result follows immediately.

Henceforth, we restrict our discussion to problem 1|∆max(π) ≤ k|Lmax.

2.3 Approximation

This section focuses on the worst-case performance of approximation algorithms. All results in this section are proved under the standard assumption that dj ≤ 0, for j ∈ J. If σ is a specific class of schedule or the schedule obtained from some approximation algorithm, then we establish an inequality Lmax(σ) ≤ ρLmax(σ∗) that holds for all problem instances, where ρ is a constant. We refer to ρ as a performance bound for schedules of this class or for the approximation algorithm.

Section 2.3.1 analyzes the worst-case performance of several specific classes of schedules. In Section 2.3.2, we design an approximation algorithm, analyze its worst-case performance, and show that the resulting performance bound is best possible.

2.3.1 Analysis of specific classes of schedules

Recall that an active schedule is a schedule in which no job can be scheduled earlier without violating a constraint, while a semiactive schedule is a schedule in which no job can be moved earlier without changing the sequence or violating a constraint (Brucker 1998). Note that an active schedule is also semiactive. We first analyze the worst-case performance of an arbitrary active or semiactive schedule.

Theorem 2 Let σAS be an arbitrary active or semiactive schedule. Then Lmax(σAS) ≤ 3Lmax(σ∗), and no better performance bound exists for an arbitrary active or semiactive schedule.

Proof Consider job j for which Lj(σAS) = Lmax(σAS), and let Pj denote the total processing time of jobs that are completed by time Cj(σAS). If there is no idle time before job j in σAS, then

Lmax(σAS) ≤ Pj − dj. (2.1)

Alternatively, the last period of idle time before job j occurs immediately before some other job i, where job i starts at time ¯ri because σAS is an active or semiactive schedule. Therefore,

Lmax(σAS) ≤ ¯ri + Pj − dj. (2.2)

Using our assumption that dj ≤ 0 for all jobs j, we obtain ¯ri ≤ Cmax(σ∗) ≤ Lmax(σ∗), Pj ≤ Cmax(σ∗) ≤ Lmax(σ∗), and −dj ≤ Lmax(σ∗). Substitution in (2.1) and (2.2) yields the desired inequality Lmax(σAS) ≤ 3Lmax(σ∗).

The following instance shows that no better bound exists.

Example 1 nO = n − 2, nN = 2, where n ≥ 3; processing times and due dates are shown in Table 2.1; k = (n − 1 + 1/n^2)/2; and the intervals within which each job is scheduled in π are also given in Table 2.1.

Schedules σAS and σ∗ are defined in Table 2.1, with the values of ∆j(π, σAS) and ∆j(π, σ∗) for j ∈ JO also given. Precise values for ∆j(π, σ∗) depend on whether n is odd or even. For odd n, ∆j(π, σ∗) is decreasing in j until j = (n + 1)/2 when ∆(n+1)/2(π, σ∗) = 0 + (n − 1)/(2n^2), and then is increasing in j starting with ∆(n+3)/2(π, σ∗) = 1 − (n + 1)/(2n^2). Similarly, for even n, ∆j(π, σ∗) is decreasing in j until j = n/2 when ∆n/2(π, σ∗) = 1/2 + (n − 2)/(2n^2), and then an increasing pattern starts with ∆n/2+1(π, σ∗) = 1/2 − n/(2n^2). We compute the solution values Lmax(σAS) = 3n − 3 + 1/(2n^2) and Lmax(σ∗) = n + (n − 2)/n^2. Therefore, lim_{n→∞}(Lmax(σAS)/Lmax(σ∗)) = lim_{n→∞}((3n − 3 + 1/(2n^2))/(n + (n − 2)/n^2)) = 3.

A locally optimal schedule is a semiactive schedule in which exchanging the order of any two adjacent jobs (a) will not decrease the maximum lateness, and (b) will not decrease the maximum lateness of these two jobs under the condition that it does not increase the maximum lateness of all the jobs. The following result considers the worst-case performance of locally optimal schedules.


Theorem 3 Let σLO denote a locally optimal schedule. Then Lmax(σLO) ≤ 3Lmax(σ∗), and no better performance bound exists for an arbitrary locally optimal schedule.

Proof Since a locally optimal schedule is by definition semiactive, the worst-case performance bound follows from Theorem 2.

The following instance shows that no better bound exists.

Example 2 n = nO = 5; processing times and due dates are shown in Table 2.2, where v ≥ 1; k = 2v + 1; and the intervals within which each job is scheduled in π are also given in Table 2.2.

Schedules σLO and σ∗ are defined in Table 2.2, with the values of ∆j(π, σLO) and ∆j(π, σ∗) for j ∈ JO also given. We therefore obtain Lmax(σLO) = 6v + 5 and Lmax(σ∗) = 2v + 3. Thus, lim_{v→∞}(Lmax(σLO)/Lmax(σ∗)) = lim_{v→∞}((6v + 5)/(2v + 3)) = 3.

Finally, we analyze the worst-case performance of an arbitrary no idle time schedule, which is a schedule with processing starting at time zero and with no idle time between jobs.

Theorem 4 Let σNI denote an arbitrary no idle time schedule. Then Lmax(σNI) ≤ 2Lmax(σ∗), and no better performance bound exists for an arbitrary no idle time schedule.

Proof As in the proof of Theorem 2, (2.1) holds with σAS replaced by σNI. Since Pj ≤ Cmax(σ∗) ≤ Lmax(σ∗) for a no idle time schedule σNI, where the final inequality is a consequence of our assumption that dj ≤ 0 for all jobs j, we obtain Lmax(σNI) ≤ 2Lmax(σ∗).

The following instance shows that no better bound exists.


Example 3 n = nO = 2; processing times and due dates are shown in Table 2.3, where v ≥ 1; k = v; and the intervals within which each job is scheduled in π are also given in Table 2.3.

Table 2.3: Data for Example 3

Schedules σNI and σ∗ are defined in Table 2.3, with the values of ∆j(π, σNI) and ∆j(π, σ∗) for j ∈ JO also given. After deducing that Lmax(σNI) = 2v and Lmax(σ∗) = v + 1, we obtain lim_{v→∞}(Lmax(σNI)/Lmax(σ∗)) = lim_{v→∞}(2v/(v + 1)) = 2.

2.3.2 A best possible approximation algorithm

In this section, we propose an approximation algorithm that schedules the jobs of JO in π order, and then appends the jobs of JN. Since the appended jobs of JN are scheduled in EDD order, this algorithm is named appended earliest due date (AEDD). We show that this algorithm is best possible among polynomial-time approximation algorithms.

Appended Earliest Due Date (AEDD)

JO Construction

Using the sequence π, schedule each job of JO as early as possible.
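A sketch of Algorithm AEDD follows, under two stated assumptions: "as early as possible" is taken to mean subject to the implied release dates ¯rj, and the step that appends the jobs of JN in EDD order is reconstructed from the prose description above, since the extracted listing is truncated. All function and variable names are ours.

```python
def aedd(p, d, C_pi, k, pi_order, new_jobs):
    """Appended Earliest Due Date (AEDD), sketched from the description in the
    text: schedule the jobs of JO in the order of pi, each as early as possible
    (assumed here to mean subject to its implied release date), then append the
    jobs of JN in EDD order."""
    r_bar = {j: C_pi[j] - p[j] - k for j in pi_order}   # implied release dates
    t, completion = 0, {}
    for j in pi_order:                                  # JO in pi order
        t = max(t, r_bar[j]) + p[j]
        completion[j] = t
    for j in sorted(new_jobs, key=lambda job: d[job]):  # append JN in EDD order
        t += p[j]
        completion[j] = t
    L_max = max(completion[j] - d[j] for j in completion)
    return completion, L_max
```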


Theorem 5 Let σAEDD denote a schedule found by Algorithm AEDD. Then Lmax(σAEDD) ≤ 2Lmax(σ∗), and no better performance bound exists for Algorithm AEDD.

Proof Let l denote the last job in π (and thus also the last job of JO in σAEDD). Since Algorithm AEDD schedules the jobs of JO in nondecreasing order of their due dates, the value of Cl(σAEDD) is minimal. Therefore,

Cl(σAEDD) ≤ Cmax(σ∗) ≤ Lmax(σ∗), (2.3)

where the final inequality is obtained from our assumption that dj ≤ 0 for all jobs j. Let job j have maximum lateness in schedule σAEDD. First, suppose that j ∈ JO. Using the definition of lateness, we obtain Lmax(σAEDD) = Cj(σAEDD) − dj < Lmax(σ∗) + Cl(σAEDD), where the inequality is a result of job l being last among the jobs in JO. Substituting (2.3) yields the desired inequality Lmax(σAEDD) ≤ 2Lmax(σ∗). Alternatively, suppose that j ∈ JN. In this case, Lmax(σAEDD) = Cl(σAEDD) + Lmax(JN), where Lmax(JN) is the maximum lateness of an EDD schedule of the jobs of JN, evaluated by starting processing at time zero. Trivially, Lmax(JN) ≤ Lmax(σ∗). Therefore, using (2.3), we obtain Lmax(σAEDD) ≤ 2Lmax(σ∗) in this case also.

The following instance shows that no better bound exists.


Example 4 n = 2, nO = 1, nN = 1; processing times and due dates are shown in Table 2.4, where v ≥ 1; k = 1; and the intervals within which each job is scheduled in π are also given in Table 2.4.

Table 2.4: Data for Example 4

Schedules σAEDD and σ∗ are defined in Table 2.4, with the values of ∆j(π, σAEDD) and ∆j(π, σ∗) for j ∈ JO also given. Since Lmax(σAEDD) = 2v and Lmax(σ∗) = v + 1, we obtain lim_{v→∞}(Lmax(σAEDD)/Lmax(σ∗)) = 2.

Theorem 6 Unless P = NP, 2 is the best possible performance bound of any polynomial-time algorithm for problem 1|∆max(π) ≤ k|Lmax, under the standard assumption that dj ≤ 0 for j ∈ J.

Proof By reduction from the following problem, which is known to be NP-complete.


Partition (Garey and Johnson 1979): Given t elements with integer sizes a1, . . . , at, where ∑_{i=1}^{t} ai = 2A, does there exist a partition S1, S2 of the index set {1, . . . , t} such that ∑_{i∈S1} ai = ∑_{i∈S2} ai = A?

Given an instance of Partition, we construct an instance of problem 1|∆max(π) ≤ k|Lmax, where n = t + 2, nO = 2, nN = t, and k = 0. For JO, p1 = 1, p2 = rA, and d1 = d2 = 0, where r ≥ 1 is a constant. For JN, pj = aj−2 and dj = −rA, for j = 3, . . . , t + 2. Schedule π has job 1 in [A, A + 1] and job 2 in [2A + 1, 2A + 1 + rA]. Since k = 0, jobs 1 and 2 must occupy the same time intervals as in π. Thus, the jobs of JN are scheduled within [0, A], [A + 1, 2A + 1] and the time interval starting at 2A + 1 + rA.

Suppose that Partition has a solution. Then for each i ∈ S1, job i + 2 can be scheduled in [0, A], and for each i ∈ S2, job i + 2 can be scheduled in [A + 1, 2A + 1]. This gives Lmax = rA + 2A + 1, where the maximum lateness is attained both for job 2 and for the last job of JN.

On the other hand, for any schedule in which at least one job of JN is scheduled after job 2, the maximum lateness is at least (2A + 1 + rA + 1) − (−rA) = 2rA + 2A + 2.

If Partition has a solution but the polynomial-time algorithm schedules a job of JN to complete after job 2, then Lmax(σH)/Lmax(σ∗) = (2rA + 2A + 2)/(rA + 2A + 1), which can be arbitrarily close to 2. Thus, if ρ < 2, the polynomial-time algorithm schedules all jobs of JN before job 2, and therefore finds a solution to Partition if such a solution exists. However, this is only possible if P = NP.

Theorem 6 shows that a polynomial-time approximation scheme (Papadimitriou and Steiglitz 1982, Schuurman and Woeginger 2005) is not possible even under the assumption that dj ≤ 0 for j ∈ J, unless P = NP. This generalizes the result in Corollary 1.


2.4 Branch and Bound Algorithm

In this section, we present the components of our branch and bound algorithm. Section 2.4.1 describes our lower bounding scheme that uses a preemptive relaxation. Section 2.4.2 presents two heuristics for obtaining upper bounds. One of these heuristics provides the starting solution for a variable neighborhood descent procedure that we develop. In Section 2.4.3, we establish some properties that we use as dominance rules within our branch and bound algorithm. Finally, we present an overview of the complete branch and bound algorithm in Section 2.4.4.

2.4.1 Lower bounds

Our branch and bound algorithm obtains lower bounds on the optimal value of the maximum lateness using a preemptive relaxation, as follows. Each job j has an implied release date ¯rj, and a cost function

fj(Cj) = Cj − dj for j ∈ JN, and for j ∈ JO with Cj ≤ ¯dj;
fj(Cj) = ∞ for j ∈ JO with Cj > ¯dj. (2.4)

Our lower bound is the solution of the preemptive problem 1|pmtn, rj|fmax, where fmax = max_{j∈J}{fj}, using the O(n^2) time algorithm of Baker et al. (1983), to which we refer as PMTN.
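The cost function (2.4) translates directly into code. The sketch below only evaluates (2.4); it does not reproduce the preemptive algorithm of Baker et al. (1983), and the helper name is ours.

```python
import math

def f_cost(j, C_j, d, d_bar, new_jobs):
    """Cost function (2.4): lateness for new jobs and for original jobs that
    meet their implied deadline; infinite cost for original jobs that miss it."""
    if j in new_jobs or C_j <= d_bar[j]:
        return C_j - d[j]
    return math.inf
```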

2.4.2 Upper bounds

In this subsection, we present two heuristics for obtaining upper bounds. The first of these heuristics resembles the procedure for computing the lower bound using PMTN. The second heuristic computes an initial schedule using a generalization of AEDD, and then improves this solution using a variable neighborhood descent procedure.


The first heuristic, denoted APMTN, is motivated by the observation that in most instances very few jobs are preempted by PMTN. For example, our computational results show that when n = 100, the number of preemptions generated by PMTN averages 5.6 and does not exceed 26 in any of the 720 instances tested, and when n = 1000, the number of preemptions averages 55 and does not exceed 231 (see the experimental design in Section 2.5). This suggests that a procedure similar to PMTN, but without using preemption, may produce good schedules.

Adjusted PMTN (APMTN)

Step 0: Initialization

Step 0.1: Compute the implied release date ¯rj = Cj(π) − pj − k and the implied deadline ¯dj = Cj(π) + k for j ∈ JO, and set ¯rj = 0 and ¯dj = ∞ for j ∈ JN.

Step 0.2: Compute a lower bound using PMTN.

Step 0.3: Schedule the n jobs in nondecreasing order of ¯rj, starting each as early as possible. Divide the schedule into blocks, where a block is a maximal period of processing without idle time between the jobs. For each block i, set Bi to be the set of jobs within this block, and compute the maximum lateness of these jobs.

Step 1: Adjustment

Step 1.1: Find a block i with the largest maximum lateness of its jobs. If block i has maximum lateness no greater than the lower bound obtained by PMTN or has already been searched, then go to Step 2; otherwise, search block i as follows.

Step 1.2: Find the completion time C^b_i of the last job in Bi, and set B̂i = Bi.

Step 1.3: If B̂i = ∅, then terminate with no feasible schedule found; otherwise, find a job j ∈ B̂i for which fj(C^b_i) (as defined in (2.4)) is minimal. If C^b_i > ¯dj, then terminate with no feasible schedule found; otherwise, provisionally schedule job j to be completed at time C^b_i.

Step 1.4: Schedule all of the jobs in Bi \ {j} in nondecreasing ¯rj order, starting each as early as possible. If idle time between the jobs occurs because of the implied release date values ¯rj, then set B̂i = B̂i \ {j}, and go to Step 1.3; otherwise, regard job j as scheduled, set Bi = Bi \ {j}, if Bi ≠ ∅, return to Step 1.2, and if Bi = ∅, return to Step 1.1.

Step 2: Output

Output the heuristic schedule and its maximum lateness.

The intuition underlying the Adjusted PMTN heuristic is as follows. The schedule found in Step 1, which minimizes machine idle time, is decomposed into blocks where processing is consecutive. To reduce the overall maximum lateness, a block is chosen for which the maximum lateness of its jobs is greatest, and an attempt is made to reschedule the jobs of this block, without extending its duration. The procedure successively selects a job for the last (unfilled) position in this block, basing decisions on the cost function defined in (2.4), until the complete block is scheduled. Adjusted PMTN requires O(n^3) time.
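As a small illustration of Step 0.3, the sketch below builds the initial nonpreemptive schedule in nondecreasing order of the implied release dates and splits it into blocks; the backward, cost-based rescheduling of a block (Steps 1.2 to 1.4) is not shown. Names are ours.

```python
def initial_blocks(p, d, r_bar, jobs):
    """Step 0.3 of APMTN (sketch): schedule the jobs in nondecreasing order of
    their implied release dates, each as early as possible, then split the
    schedule into blocks (maximal runs of processing with no idle time)."""
    order = sorted(jobs, key=lambda j: r_bar[j])
    t = 0
    blocks, current, completion = [], [], {}
    for j in order:
        start = max(t, r_bar[j])
        if current and start > t:          # idle time: current block ends here
            blocks.append(current)
            current = []
        t = start + p[j]
        completion[j] = t
        current.append(j)
    if current:
        blocks.append(current)
    block_lateness = [max(completion[j] - d[j] for j in b) for b in blocks]
    return blocks, block_lateness, completion
```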

Although our approximation algorithm AEDD achieves the best possible worst-case performance ratio, it does not always find an active schedule or a locally optimal schedule. Therefore, we now describe a modified heuristic that allows the insertion of the jobs of JN with those of JO, and also incorporates a simple local search procedure. Modifications of the classical EDD algorithm (Jackson 1955) appear elsewhere in the scheduling literature; see, for example, Grigoriev et al. (2005).

Inserted and Improved Earliest Due Date (IIEDD)

Apply the following for i = 2, . . . , n.

Exchange the jobs in positions i − 1 and i in the current sequence, if the exchange (a) decreases the maximum lateness of all the jobs, or (b) decreases the maximum lateness of these two jobs without increasing the maximum lateness of all the jobs.

Heuristic IIEDD requires O(n^2) time. While IIEDD is expected on average to be superior to AEDD, even better solutions may be achievable by applying a more sophisticated local search procedure to the solution generated by IIEDD. Specifically, we design a variable neighborhood descent (VND) procedure, which is described below. A useful introduction to variable neighborhood search procedures is provided by Hansen and Mladenović (2001).

Our VND uses six types of neighborhoods. In the following description of our neighborhood moves, we let Bj and Aj denote the set of jobs that are scheduled before and after job j, respectively, for j ∈ J. Suppose that job j has maximum lateness in the current schedule. In the case of equal maximum lateness values, the job with the maximum completion time is selected. In the case where the schedule contains some machine idle time before job j, suppose that job h starts the block containing job j, i.e., job h is immediately preceded by machine idle time and there is no machine idle time between the processing of jobs h and j. To reduce the maximum lateness, job j must be completed earlier, and therefore the set of jobs sequenced between h and j must be reduced. Our neighborhood structures that achieve this reduction are defined as follows:

N1: remove job j and insert it immediately before some job i, where i ∈ Bj;

N2: remove some job i and insert it immediately after job j, where i ∈ Bj \ Bh;

N3: swap job i and job j, where i ∈ Bj;

N4: if h exists, remove job h and insert it immediately after some job i, where i ∈ Ah;

N5: if h exists, remove some job i and insert it immediately before job h, where i ∈ Ah \ (Aj ∪ {j});

N6: if h exists, swap job h and job i, where i ∈ Ah \ {j}.

For any neighbor, a corresponding schedule is computed by scheduling each job in the sequence as early as possible, subject to the implied release dates.
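The evaluation step shared by all six neighborhoods, and one example move of type N1, can be sketched as follows; this is an illustration with our own helper names, not the dissertation's implementation.

```python
def schedule_sequence(seq, p, r_bar):
    """Schedule a given job sequence, starting each job as early as possible
    subject to its implied release date; returns completion times."""
    t, completion = 0, {}
    for j in seq:
        t = max(t, r_bar.get(j, 0)) + p[j]
        completion[j] = t
    return completion

def move_N1(seq, j, i):
    """Neighborhood N1 (sketch): remove job j and reinsert it immediately
    before job i, where i is currently scheduled before j."""
    rest = [x for x in seq if x != j]
    pos = rest.index(i)
    return rest[:pos] + [j] + rest[pos:]
```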

A neighbor resulting from one of these moves is preferred to the previous schedule if the maximum lateness is smaller. First consider N1, N2, and N3. In the case of equal maximum lateness values, the schedule with the earlier completion time of the job in the position previously occupied by job j is preferred. If there is still a tie, then the neighbor is preferred to the previous schedule if dj < di or if dj = di and j < i. Now consider N4, N5, and N6. In the case of equal maximum lateness values, the schedule with the earlier completion time of the job in the position previously occupied by job i is preferred. If there is still a tie, then the neighbor is preferred to the previous schedule if ¯ri < ¯rh or if ¯ri = ¯rh and i < h.

VND

Initialization

Apply IIEDD to find an initial schedule σ.

Search Neighborhoods

Execute the following for l = 1, . . . , 6.

Apply the following procedure until none of the neighbors in Nl of the current schedule is preferred to it.

2.4.3 Structural properties

Property 1 For any two jobs i and j, where i, j ∈ JO, if Ci(π) < Cj(π) and Cj(π) − Ci(π) > 2k − pi, then in any feasible schedule job i precedes job j.

Proof Consider a feasible schedule σ in which job j precedes job i, so that Cj(σ) ≤ Ci(σ) − pi. Feasibility ensures that Cj(σ) ≥ ¯rj + pj = Cj(π) − k and Ci(σ) ≤ ¯di = Ci(π) + k. From these inequalities, Cj(π) − k ≤ Cj(σ) ≤ Ci(σ) − pi ≤ Ci(π) + k − pi, so that Cj(π) − Ci(π) ≤ 2k − pi, which contradicts the assumption of the property.
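Property 1 yields a simple pairwise precedence test that a branch and bound implementation could apply before branching; a minimal sketch, with a hypothetical helper name:

```python
def must_precede(i, j, p, C_pi, k):
    """Property 1 (sketch): original job i must precede original job j in every
    feasible schedule if C_i(pi) < C_j(pi) and C_j(pi) - C_i(pi) > 2k - p_i."""
    return C_pi[i] < C_pi[j] and C_pi[j] - C_pi[i] > 2 * k - p[i]
```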
