
Telecommunications Optimization: Heuristic and Adaptive Techniques, edited by D. Corne, M.J. Oates and G.D. Smith.

Copyright © 2000 John Wiley & Sons Ltd. ISBNs: 0-471-98855-3 (Hardback); 0-470-84163-X (Electronic)


In this chapter, we address a fundamental DDS network design problem that arises in practical applications of a telecommunications company in the United States. The decision elements of the problem consist of a finite set of inter-offices (hubs) and a finite set of customer locations that are geographically distributed on a plane. A subset of hubs is chosen to be active, subject to the restriction of forming a network in which every two active hubs can communicate with each other, hence constituting a spanning tree. Each hub has a fixed cost for being chosen active and each link (edge) has a connection cost for being included in the associated spanning tree. Each customer location must be connected directly to its own designated end office, which in turn needs to be connected with exactly one active hub, thereby permitting every two customers to communicate with each other via the hub network. This also incurs a connection cost on the edge between the customer location and its associated hub. The objective is to design such a network at minimum cost.

In practice, the line connection cost is distance sensitive and is calculated according to the tariff charges established by the Federal Communications Commission (FCC). These charges include a fixed cost for use and a variable cost that is related to the distance. For each active hub, in addition to the fixed bridging cost, a charge is also assessed for each incoming and outgoing line connected to this hub. To illustrate how these costs are associated with the DDS network, suppose the monthly cost data are given as in Table 4.1. Then, the monthly costs for the network given in Figure 4.1 are as detailed in Table 4.2. The foregoing representation of the DDS network design problem can be simplified by reference to a Steiner Tree framework. Since the linking cost per line between an end office and a potential hub is known and the bridging cost per line for that hub is also available, we


can pre-calculate the cost of connecting a customer location to a hub by adding up these two terms. Thus, the intermediate end offices can be eliminated and the DDS network problem can be converted into an extension of the Steiner Tree Problem. This extended problem was first investigated by Lee et al. (1996), who denote the hubs as 'Steiner nodes' and the customer locations as 'target nodes', thus giving this problem the name Steiner tree-star (STS) problem.

Table 4.1 Example monthly cost data for leased line networks.

Fixed bridging cost        $82.00
Bridging cost per line     $41.00
Line connecting cost       Mileage | Fixed Cost | Variable Cost

Total monthly cost: $2166.40

Literature on the STS problem is limited. Lee et al. (1994) show that the STS problem is strongly NP-hard and identify two mixed zero-one integer programming formulations for this problem. Lee et al. (1996) further investigate valid inequalities and facets of the underlying polytope of the STS problem, and implement them in a branch and cut scheme. More recently, Xu et al. (1996a; 1996b) have developed a Tabu Search (TS) based algorithm. Their computational tests demonstrated that the TS algorithm is able to find optimal solutions for all problem instances up to 100 nodes. Applied to larger problems that the branch and cut procedure (Lee et al., 1996) could not solve, the TS algorithm consistently outperforms the construction heuristic described in Lee et al. (1996).

In this chapter, we explore an implementation of Scatter Search (SS) for the STS problem. Scatter search, and its generalized form called path relinking, are evolutionary methods that have recently been shown to yield promising outcomes for solving combinatorial and nonlinear optimization problems. Based on formulations originally proposed in the 1960s (Glover, 1963; 1965) for combining decision rules and problem constraints, these methods use strategies for combining solution vectors that have proved effective for scheduling, routing, financial product design, neural network training, optimizing simulation and a variety of other problem areas (see, e.g., Glover (1999)).

Our chapter is organized as follows. The problem formulation is presented in the next section. Section 4.3 briefly describes the tabu search algorithm for the STS problem. We


further describe the SS based heuristic for the STS problem in section 4.4 and examine several relevant issues, such as the diversification generator, the reference set update method, the subset generation method, the solution combination method and the improvement method. In section 4.5, we report computational results on a set of carefully designed test problems, accompanied by comparisons with the solutions obtained by the TS algorithm (Xu et al., 1996a; 1996b), which has been documented as the best heuristic available prior to this research. In the concluding section, we summarize our methodology and findings.

We formulate the STS problem as a 0-1 integer programming problem as follows. First we define:

M: the set of target nodes;

N: the set of Steiner nodes;

c_ij: the cost of connecting target node i to Steiner node j;

d_jk: the cost of connecting Steiner nodes j and k;

b_j: the cost of activating Steiner node j.

The decision variables of this formulation are:

x_j: a binary variable equal to 1 if and only if Steiner node j is selected to be active;

y_jk: a binary variable equal to 1 if and only if Steiner node j is linked to Steiner node k;

z_ij: a binary variable equal to 1 if and only if target node i is linked to Steiner node j.

The model is then to minimize:

$$\min \; \sum_{i \in M} \sum_{j \in N} c_{ij} z_{ij} \;+\; \sum_{j \in N} \sum_{\substack{k \in N \\ k > j}} d_{jk} y_{jk} \;+\; \sum_{j \in N} b_j x_j \qquad (4.1)$$

subject to:

$$\sum_{j \in N} z_{ij} = 1, \quad \text{for } i \in M \qquad (4.2)$$

$$z_{ij} \le x_j, \quad \text{for } i \in M,\; j \in N \qquad (4.3)$$

$$y_{jk} \le (x_j + x_k)/2, \quad \text{for } j < k,\; j, k \in N \qquad (4.4)$$

$$\sum_{j \in N} \sum_{\substack{k \in N \\ k > j}} y_{jk} = \sum_{j \in N} x_j - 1 \qquad (4.5)$$

$$\sum_{j \in S} \sum_{\substack{k \in S \\ k > j}} y_{jk} \le |S| - 1, \quad \text{for } S \subseteq N,\; |S| \ge 3 \qquad (4.6)$$

$$x_j \in \{0,1\}, \quad \text{for } j \in N \qquad (4.7)$$

$$y_{jk} \in \{0,1\}, \quad \text{for } j < k,\; j, k \in N \qquad (4.8)$$

$$z_{ij} \in \{0,1\}, \quad \text{for } i \in M,\; j \in N \qquad (4.9)$$

In this formulation, the objective function (equation 4.1) seeks to minimize the sum of the connection costs between target nodes and Steiner nodes, the connection costs between Steiner nodes, and the setup costs for activating Steiner nodes. The constraint of equation 4.2 specifies the star topology that requires each target node to be connected to exactly one Steiner node. Constraint 4.3 indicates that a target node can only be connected to an active Steiner node. Constraint 4.4 stipulates that two Steiner nodes can be connected if and only if both nodes are active. Constraints 4.5 and 4.6 express the spanning tree structure over the active Steiner nodes. In particular, equation 4.5 specifies the condition that the number of edges in any spanning tree must be equal to one fewer than the number of nodes, while equation 4.6 is an anti-cycle constraint that also ensures that connectivity will be established for each active Steiner node via the spanning tree. Constraints 4.7–4.9 express the non-negativity and discrete requirements. All of the decision variables are binary.

Clearly, the decision variable vector x is the critical one for the STS problem. Once this n-vector is determined, we can trivially determine the y_jk values by building the minimal spanning tree over the selected Steiner nodes (those for which x_j = 1), and then determine the z_ij values for each target node i by connecting it to its nearest active Steiner node, i.e. we have z_ij = 1 if and only if c_ij = min {c_ik | x_k = 1}.
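This decoding of x into a complete solution can be sketched in Python (an illustrative sketch only; the function and argument names are ours, not from the chapter):

```python
def decode_solution(x, c, d, b):
    """Given the 0-1 vector x of active Steiner nodes, build the full solution:
    a minimum spanning tree over the active nodes (the y values) and a
    nearest-active-hub assignment for each target node (the z values).
    c[i][j]: target-to-hub cost, d[j][k]: hub-to-hub cost, b[j]: hub fixed cost."""
    active = [j for j, xj in enumerate(x) if xj == 1]
    # Prim's algorithm for the minimum spanning tree over the active nodes
    tree_edges = []
    if active:
        in_tree = {active[0]}
        while len(in_tree) < len(active):
            j, k = min(((j, k) for j in in_tree
                        for k in active if k not in in_tree),
                       key=lambda e: d[e[0]][e[1]])
            tree_edges.append((j, k))
            in_tree.add(k)
    # connect each target node to its nearest active hub
    assignment = [min(active, key=lambda j: c[i][j]) for i in range(len(c))]
    cost = (sum(b[j] for j in active)
            + sum(d[j][k] for j, k in tree_edges)
            + sum(c[i][j] for i, j in enumerate(assignment)))
    return tree_edges, assignment, cost
```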

In this section, we provide an overview of the tabu search algorithm for this problem, which was first proposed in Xu et al. (1996b). Although we do not describe the method in minute detail, we are careful to describe enough of its form to permit readers to understand both the similarities and differences between this method and the scatter search method that is the focus of our current investigation. The tabu search algorithm starts at a trivial initial solution and proceeds iteratively. At each iteration, a set of candidate moves is extracted from the neighborhood for evaluation, and a 'best' (highest evaluation) move is selected. The selected move is applied to the current solution, thereby generating a new solution. During each iteration, certain neighborhood moves are considered tabu moves and excluded from the candidate list. The best non-tabu move can be determined either deterministically or probabilistically. An aspiration criterion can over-ride the choice of a best non-tabu move by selecting a highly attractive tabu move. The algorithm proceeds in this way until a defined number of iterations has elapsed, and then terminates. At termination, the algorithm outputs the all-time best feasible solution. In subsequent subsections, we describe the major components of the algorithm.
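The iteration scheme just described can be sketched as a generic loop (an illustrative skeleton written as cost minimization, not the authors' implementation; all names and the tenure range are our assumptions):

```python
import random

def tabu_search(initial, neighbors, evaluate, tenure_range=(5, 10), max_iters=1000):
    """Generic TS loop: neighbors(sol) yields (move, new_sol) pairs; evaluate
    returns a cost to minimize.  Tabu moves are skipped unless they beat the
    best solution found so far (aspiration criterion)."""
    current = initial
    best, best_cost = initial, evaluate(initial)
    tabu_until = {}          # move attribute -> iteration when restriction expires
    for it in range(max_iters):
        candidates = []
        for move, sol in neighbors(current):
            cost = evaluate(sol)
            if tabu_until.get(move, 0) > it and cost >= best_cost:
                continue     # tabu and fails the aspiration criterion
            candidates.append((cost, move, sol))
        if not candidates:
            continue
        cost, move, current = min(candidates, key=lambda t: t[0])
        # tenure drawn randomly from a small range, as in the chapter
        tabu_until[move] = it + random.randint(*tenure_range)
        if cost < best_cost:
            best, best_cost = current, cost
    return best, best_cost
```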

4.3.1 Neighborhood Structure

Once the set of active Steiner nodes is determined, a feasible solution can easily be constructed by connecting the active Steiner nodes using a spanning tree and by linking the target nodes to their nearest active Steiner nodes. Based on this observation, we consider three types of moves: constructive moves, which add a currently inactive Steiner node to the current solution; destructive moves, which remove an active Steiner node from the current solution; and swap moves, which exchange an active Steiner node with an inactive Steiner node. The swap moves induce a more significant change in the current solution and hence require a more complex evaluation. For efficiency, swap moves are executed less frequently. More specifically, we execute the swap move once for every certain number of iterations (for perturbation) and consecutively several times when the search fails to improve the current solution for a pre-specified number of iterations (for intensification). Outside the swap move phase, constructive and destructive moves are executed, selecting the best candidate move based on the evaluation and aspiration criteria applied to a subset of these two types of moves. In addition, since destructive moves that remove nodes deform the current spanning tree, we restrict the nodes removed to consist only of those active Steiner nodes whose degree does not exceed three. This restriction has the purpose of facilitating the move evaluation, as described next.

4.3.2 Move Evaluation and Error Correction

To quickly evaluate a potential move, we provide methods to estimate the cost of the resulting new solution according to the various move types. For a constructive move, we calculate the new cost by summing the fixed cost of adding the new Steiner node with the connection cost for linking the new node to its closest active Steiner node. For a destructive move, since we only consider those active Steiner nodes with degree less than or equal to three in the current solution, we can reconstruct the spanning tree as follows. If the degree of the node to be dropped is equal to one, we simply remove this node; if the degree is equal to two, we add the link that joins the two neighboring nodes after removing the node; if the degree is equal to three, we choose the least cost pair of links which will connect the three nodes previously adjacent to the node removed. The cost of the new solution can be calculated by adjusting the connection cost for the new spanning tree and the fixed cost for the node removed. The swap can be treated as a combination of the constructive and destructive moves, by first removing a tree node and then adding a non-tree node.
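The degree-based tree reconstruction for a destructive move can be sketched as follows (our own illustrative code; `tree_edges` is an edge list over Steiner nodes and `d` the hub-to-hub cost matrix):

```python
def reconnect_after_drop(tree_edges, node, d):
    """Remove `node` from the spanning tree and reconnect its former
    neighbours at least cost, following the degree <= 3 rule.
    Returns the new edge list, or None if the node's degree exceeds 3."""
    nbrs = [k for j, k in tree_edges if j == node] + \
           [j for j, k in tree_edges if k == node]
    if len(nbrs) > 3:
        return None                      # such nodes are never dropped
    kept = [e for e in tree_edges if node not in e]
    if len(nbrs) <= 1:
        return kept                      # degree 0 or 1: just remove the node
    if len(nbrs) == 2:
        a, b = nbrs
        return kept + [(a, b)]           # degree 2: join the two neighbours
    # degree 3: any two of the three possible links connect the neighbours,
    # so keep the cheapest pair
    a, b, c = nbrs
    pairs = sorted([(a, b), (a, c), (b, c)], key=lambda e: d[e[0]][e[1]])
    return kept + pairs[:2]
```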

The error introduced by the preceding estimates can be corrected by executing a minimum spanning tree algorithm. We apply this error correction procedure every few iterations and also whenever a new best solution is found. Throughout the algorithm, we maintain a set of elite solutions that represent the best solutions found so far. The error correction procedure is also applied to these solutions periodically.

4.3.3 TS Memory

Our TS approach uses both a short-term memory and a long-term memory to prevent the search from being trapped in a local minimum and to intensify and diversify the search. The short-term memory operates by imposing restrictions on the set of solution attributes that are permitted to be incorporated in (or changed by) candidate moves. More precisely, a node added to the solution by a constructive move is prevented from being deleted for a certain number of iterations, and likewise a node dropped from the solution by a destructive move is prevented from being added for a certain (different) number of iterations. For constructive and destructive moves, therefore, these restrictions ensure that the changes caused by each move will not be 'reversed' for the next few iterations. For each swap move, we impose tabu restrictions that affect both added and dropped nodes. The number of iterations during which a node remains subject to a tabu restriction is called the tabu tenure of the node. We establish a relatively small range for the tabu tenure, which depends on the type of move considered, and each time a move is executed, we select a specific tenure randomly from the associated range. We also use an aspiration criterion to over-ride the tabu classification whenever the move will lead to a new solution which is among the best two solutions found so far.

The long-term memory is a frequency based memory that depends on the number of times each particular node has been added to or dropped from the solution. We use this to discourage the types of changes that have already occurred frequently (thus encouraging changes that have occurred less frequently). This represents a particular form of frequency memory based on attribute transitions (changes). Another type of frequency memory is based on residence, i.e. the number of iterations that nodes remain in or out of solution.
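Transition-based frequency memory of this kind can be sketched as a simple penalty on the move evaluation (an illustrative sketch; the counters and the `weight` parameter are our assumptions, not values from the chapter):

```python
from collections import Counter

# transition memory: how often each node has been added or dropped
add_freq = Counter()
drop_freq = Counter()

def penalized_eval(cost, node, is_add, weight=0.1):
    """Bias a move's evaluation against changes that have already occurred
    frequently, so that rarely tried changes become relatively more attractive
    (assuming a minimization objective)."""
    freq = add_freq[node] if is_add else drop_freq[node]
    return cost + weight * freq
```

After each executed move, the corresponding counter (`add_freq[node] += 1` or `drop_freq[node] += 1`) would be updated.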

4.3.4 Probabilistic Choice

As stated above, a best candidate move can be selected at each iteration according to either probabilistic or deterministic rules. We find that a probabilistic choice of candidate move is appropriate in this application, since the move evaluation contains 'noise' due to the estimation errors. The selection of the candidate move can be summarized as follows. First, all neighborhood moves (including tabu moves) are evaluated. If the move with the highest evaluation satisfies the aspiration criterion, it will be selected. Otherwise, we consider the list of moves ordered by their evaluations. For this purpose, tabu moves are considered to be moves with highly penalized evaluations.

We select the top move with a probability p and reject the move with probability 1 – p. If the move is rejected, then we consider the next move on the list in the same fashion. If it turns out that no move has been selected at the end of this process, we select the top move.

We also make the selection probability vary with the quality of the move by changing it to p·β₁⁻¹·r^β₂, where r is the ratio of the current move evaluation to the value of the best solution found so far, and β₁ and β₂ are two positive parameters. This fine-tuned probability increases the chance of selecting 'good' moves.

4.3.5 Solution Recovery for Intensification

We implement a variant of the restarting and recovery strategy in which the recovery of the elite solutions is postponed until the last stage of the search. The elite solutions, which are the best K distinct solutions found so far, are recovered in reverse order, from the worst solution to the best solution. The list of elite solutions is updated whenever a new solution is found that is better than the worst solution in the list. Then the new solution is added to the list and the worst is dropped. During each solution recovery, the designated elite solution taken from the list becomes the current solution, and all tabu restrictions are removed and reinitialized. A new search is then launched that is permitted to continue for a fixed number of iterations until the next recovery starts. Once the recovery process reaches the best solution in the list, it moves circularly back to the worst solution and restarts the above process again. (Note that our probabilistic move selection induces the process to avoid repeating the previous search trajectory.)

3. A Reference Set Update Method: to build and maintain a Reference Set consisting of the b best solutions found (where the value of b is typically small, e.g. between 20 and 40), organized to provide efficient accessing by other parts of the method.

4. A Subset Generation Method: to operate on the Reference Set, to produce a subset of its solutions as a basis for creating combined solutions.

5. A Solution Combination Method: to transform a given subset of solutions produced by the Subset Generation Method into one or more combined solution vectors.

In the following subsections, we first describe the framework of our SS algorithm, and then describe each component, each of which is specifically designed for the STS problem.

4.4.1 Framework of SS

We specify the general template in outline form as follows. This template reflects the type of design often used in scatter search and path relinking.

Initial Phase

1. (Seed Solution Step.) Create one or more seed solutions, which are arbitrary trial solutions used to initiate the remainder of the method.

2. (Diversification Generator.) Use the Diversification Generator to generate diverse trial solutions from the seed solution(s).

3. (Improvement and Reference Set Update Methods.) For each trial solution produced in Step 2, use the Improvement Method to create one or more enhanced trial solutions. During successive applications of this step, maintain and update a Reference Set consisting of the b best solutions found.

4. (Repeat.) Execute Steps 2 and 3 until producing some designated total number of enhanced trial solutions as a source of candidates for the Reference Set.

Scatter Search Phase

5. (Subset Generation Method.) Generate subsets of the Reference Set as a basis for creating combined solutions.

6. (Solution Combination Method.) For each subset X produced in Step 5, use the Solution Combination Method to produce a set C(X) that consists of one or more combined solutions. Treat each member of the set C(X) as a trial solution for the following step.

7. (Improvement and Reference Set Update Methods.) For each trial solution produced in Step 6, use the Improvement Method to create one or more enhanced trial solutions, while continuing to maintain and update the Reference Set.

8. (Repeat.) Execute Steps 5–7 in repeated sequence, until reaching a specified cut-off limit on the total number of iterations.

We follow the foregoing template and describe in detail each of the components in the subsequent subsections.
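The eight-step template can be sketched as a generic driver in which the component methods are passed in as functions (an illustrative skeleton under our own naming, not the authors' implementation):

```python
def scatter_search(seed, diversify, improve, combine, subsets, b=20, rounds=10):
    """Sketch of the SS template: an initial phase fills the reference set
    from diversified seeds, then repeated combination/improvement rounds.
    `improve(sol)` returns a (cost, improved_sol) pair; refset holds the b
    best (cost, solution) pairs found, best first."""
    refset = []

    def update(sol):
        cost, sol = improve(sol)
        if all(s != sol for _, s in refset):       # keep solutions distinct
            refset.append((cost, sol))
            refset.sort(key=lambda t: t[0])
            del refset[b:]                         # keep only the b best

    # Initial phase: diversified trial solutions seeded from `seed`
    for trial in diversify(seed):
        update(trial)
    # Scatter search phase: combine reference subsets, improve, update
    for _ in range(rounds):
        for subset in subsets(refset):
            for combined in combine(subset):
                update(combined)
    return refset[0]
```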

4.4.2 Diversification Generators for Zero-One Vectors

Let x denote a 0-1 n-vector in the solution representation. (In our STS problem, x represents the vector of decision variables that determines whether each Steiner node is active or not.) The first type of diversification generator we consider takes such a vector x as its seed solution, and generates a collection of solutions associated with an integer h = 1, 2, ..., h*, where h* ≤ n – 1 (recommended is h* ≤ n/5).

We generate two types of solutions, x′ and x′′, for each h, by the following pair of solution generating rules:

Type 1 Solution: Let the first component x′_1 of x′ be 1 – x_1, and let x′_(1+kh) = 1 – x_(1+kh) for k = 1, 2, 3, ..., k*, where k* is the largest integer satisfying k* ≤ (n – 1)/h. Remaining components of x′ equal 0.

To illustrate for x = (0,0,...,0): the values h = 1, 2 and 3 respectively yield x′ = (1,1,...,1), x′ = (1,0,1,0,1,...) and x′ = (1,0,0,1,0,0,1,0,0,1,...). This progression suggests the reason for preferring h* ≤ n/5. As h becomes larger, the solutions x′ for two adjacent values of h differ from each other proportionately less than when h is smaller. An option to exploit this is to allow h to increase by an increasing increment for larger values of h.


Type 2 Solution: Let x′′ be the complement of x′.

Again to illustrate for x = (0,0,...,0): the values h = 1, 2 and 3 respectively yield x′′ = (0,0,...,0), x′′ = (0,1,0,1,...) and x′′ = (0,1,1,0,1,1,0,...). Since x′′ duplicates x for h = 1, the value h = 1 can be skipped when generating x′′.

We extend the preceding design to generate additional solutions as follows. For values of h ≥ 3, the solution vector is shifted so that the index 1 is instead represented as a variable index q, which can take the values 1, 2, 3, ..., h. Continuing the illustration for x = (0,0,...,0), suppose h = 3. Then, in addition to x′ = (1,0,0,1,0,0,1,...), the method also generates the solutions given by x′ = (0,1,0,0,1,0,0,1,...) and x′ = (0,0,1,0,0,1,0,0,1,...), as q takes the values 2 and 3.

The following pseudo-code indicates how the resulting diversification generator can be structured, where the parameter MaxSolutions indicates the maximum number of solutions desired to be generated. (In our implementation, we set MaxSolutions equal to the number of 'empty slots' in the reference set, so the procedure terminates either once the reference set is filled, or after all of the indicated solutions are produced.) Comments within the code appear in italics, enclosed within parentheses.

NumSolutions = 0
For h = 1 to h*
    Let q* = 1 if h < 3, and otherwise let q* = h
    (q* denotes the value such that q will range from 1 to q*. We set q* = 1 instead of q* = h for h < 3 because otherwise the solutions produced for the special case of h < 3 will duplicate other solutions or their complements.)
    For q = 1 to q*
        Let k* = (n – q)/h <rounded down>
        Start with x′ = (0, 0, ..., 0)
        For k = 0 to k*
            x′(q + kh) = 1 – x(q + kh)
        End k
        If h > 1, generate x′′ as the complement of x′
        (x′ and x′′ are the current output solutions.)
        Add 1 to NumSolutions (2 if x′′ is also generated)
        If NumSolutions ≥ MaxSolutions, then stop generating solutions
    End q
End h

The number of solutions x′ and x′′ produced by the preceding generator is approximately h*(h* + 1). Thus if n = 50 and h* = n/5 = 10, the method will generate about 110 different output solutions, while if n = 100 and h* = n/5 = 20, the method will generate about 420 different output solutions.

Since the number of output solutions grows fairly rapidly as n increases, this number can be limited, while still creating a relatively diverse subset of solutions, by allowing q to skip over various values between 1 and q*. The greater the number of values skipped, the less 'similar' the successive solutions (for a given h) will be. Also, as previously noted, h itself can be incremented by a value that differs from 1.
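A direct Python rendering of the generator (our own illustrative code, using 1-based positions as in the pseudo-code) reproduces the counts discussed above; for n = 50 and h* = 10 it emits 107 distinct solutions, in line with the 'about 110' estimate:

```python
def diversify(x, h_star, max_solutions):
    """For each h and offset q, flip the components of the seed x at positions
    q, q+h, q+2h, ... (1-based), leaving the rest 0 (Type 1), and also emit
    the complement for h > 1 (Type 2)."""
    n = len(x)
    out = []
    for h in range(1, h_star + 1):
        q_star = 1 if h < 3 else h       # q* = 1 for h < 3 avoids duplicates
        for q in range(1, q_star + 1):
            xp = [0] * n
            for pos in range(q, n + 1, h):          # positions q, q+h, q+2h, ...
                xp[pos - 1] = 1 - x[pos - 1]
            out.append(xp)
            if h > 1:
                out.append([1 - v for v in xp])     # Type 2: the complement
            if len(out) >= max_solutions:
                return out[:max_solutions]
    return out
```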
