
MIT Sloan School of Management

Working Paper 4463-03

July 2003

Dynamic Programming Methodologies in Very Large Scale Neighborhood Search Applied to the Traveling Salesman Problem

Özlem Ergun and James B. Orlin

© 2003 by Özlem Ergun and James B. Orlin. All rights reserved.

Short sections of text, not to exceed two paragraphs, may be quoted without explicit permission, provided that full credit including © notice is given to the source.

This paper also can be downloaded without charge from the

Social Science Research Network Electronic Paper Collection:

http://ssrn.com/abstract=489784

Two Dynamic Programming Methodologies in Very Large Scale

Neighborhood Search Applied to the Traveling Salesman Problem

Özlem Ergun

Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, GA 30339

email: oergun@isye.gatech.edu

James B. Orlin

Sloan School of Management, Massachusetts Institute of Technology, Cambridge, MA 02139

email: jorlin@mit.edu


Abstract:

We provide two different neighborhood construction techniques for creating exponentially large neighborhoods that are searchable in polynomial time using dynamic programming. We illustrate both of these approaches on very large scale neighborhood search techniques for the traveling salesman problem. Our approaches are intended both to unify previously known results as well as to offer schemas for generating additional exponential neighborhoods that are searchable in polynomial time. The first approach is to define the neighborhood recursively. In this approach, the dynamic programming recursion is a natural consequence of the recursion that defines the neighborhood. In particular, we show how to create the pyramidal tour neighborhood, the twisted sequences neighborhood, and dynasearch neighborhoods using this approach. In the second approach, we consider the standard dynamic program to solve the TSP. We then obtain exponentially large neighborhoods by selecting a polynomially bounded number of states, and restricting the dynamic program to those states only. We show how the Balas and Simonetti neighborhood and the insertion dynasearch neighborhood can be viewed in this manner. We also show that one of the dynasearch neighborhoods can be derived directly from the 2-exchange neighborhood using this approach.

Subject classifications:

Traveling Salesman: Very large scale neighborhood search for the TSP.

Heuristics: Very large scale neighborhood search for the TSP.

Dynamic Programming: Two DP methodologies for heuristic search for the TSP.


Section 1. Introduction

Neighborhood search is a practical method for efficiently finding “good” solutions to hard combinatorial optimization problems. Let P = min {cx : x ∈ D} be an instance of an optimization problem with cost vector c and feasible set D. Given a feasible solution x in D, a neighborhood search algorithm has an associated neighborhood function N which identifies a subset N(x) of D as the “neighbors” of x under N.

A local search algorithm follows the following basic scheme. Start with a current feasible solution, say x , and iteratively replace the current solution with a neighbor y of the current solution with lower objective value. Continue until there is no neighbor with improved objective value, at which point the current solution is called locally optimal . There is a large literature on local search as well as extensions of local search including simulated annealing and tabu search. For an excellent reference on local search in combinatorial optimization, see Aarts and Lenstra (1997).
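
As a concrete illustration of this scheme, the following is a minimal sketch of a generic local search loop in Python; the function and argument names (neighbors, cost) are illustrative conventions of the example, not part of any particular library.

```python
def local_search(x0, neighbors, cost):
    """Generic neighborhood search: move to an improving neighbor until none exists.

    x0: an initial feasible solution.
    neighbors(x): yields the feasible neighbors N(x) of x.
    cost(x): the objective value of x (to be minimized).
    """
    current = x0
    improved = True
    while improved:
        improved = False
        for y in neighbors(current):
            if cost(y) < cost(current):
                current = y          # accept the improving neighbor
                improved = True
                break                # first-improvement rule; best-improvement is also common
    return current                   # locally optimal with respect to the neighborhood
```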

In very large-scale neighborhood (VLSN) search, the number of solutions in a neighborhood is very large (often exponential) with respect to the size of the input. As a rule of thumb, one expects to find better locally optimal solutions assuming that one can search a larger neighborhood efficiently. Unfortunately, for many very large neighborhoods, the search time may be much larger. There are a variety of techniques for efficiently searching neighborhoods in VLSN search. One general approach that has been successful in searching exponentially large neighborhoods in polynomial time has been dynamic programming. See Ahuja et al. (2002) and Deineko and Woeginger (2000) for surveys on these techniques, including a number of papers on VLSN search that employ dynamic programming.

Here, we present two different approaches for developing exponentially large neighborhoods that are searchable in polynomial time using dynamic programming. In so doing, we offer frameworks that encompass many of the existing results in the literature, as well as offer general methodologies for creating new neighborhoods that are searchable in polynomial time.

In the first approach, we consider instances in which a neighborhood is recursively defined. For the examples we provide, these neighborhoods can be efficiently searched via the dynamic programs that are naturally induced by the recursions. We illustrate this approach on pyramidal tours, as developed by Sarvanov and Doroshko (1981), on the twisted sequences neighborhood as defined by Aurenhammer (1988) and described for VLSN search by Deineko and Woeginger (2000), and on dynasearch neighborhoods as introduced by Potts and van de Velde (1995). The properties of recursively defined neighborhoods are often easily established using mathematical induction. We describe the use of recursion in defining neighborhoods in detail in Sections 3 and 4.

The second approach starts with the standard dynamic program to solve a combinatorial optimization problem and restricts attention to a polynomially large subset V of states of the dynamic programming state space. When |V| is polynomially bounded in the size of the input, the time to solve the dynamic program is also polynomial. In Sections 5 and 6, we give examples in which |V| is polynomially bounded, and solving the dynamic programming recursion over V is equivalent, in a technical sense that we will make clear, to searching an exponentially large neighborhood. We illustrate this approach on dynasearch neighborhoods and extensions as well as on the Balas-Simonetti neighborhood.


The approach in our paper differs from the rollout approach of using dynamic programming for heuristics (see Bertsekas, Tsitsiklis, and Wu 1997) in that our approach explicitly searches a well defined neighborhood, whereas their approach uses dynamic programming combined with heuristic search to find a good solution for a combinatorial optimization problem.

The results in this paper present unifying views of dynamic programming algorithms for searching neighborhoods for the traveling salesman problem. We note that Deineko and Woeginger (2000) presented permutation trees as a unifying concept for dynamic programming methods for searching exponentially large TSP neighborhoods. Some of the exponential neighborhoods we survey are not representable as permutation trees (such as the twisted sequences neighborhood and the Balas-Simonetti neighborhood), and conversely there are other exponential neighborhoods that are naturally represented in terms of permutation trees but are not as easily represented within our framework. So, neither framework subsumes the other.

We view the contributions of this paper as follows:

1. We provide two alternative unifying frameworks for using dynamic programming for the TSP for very large scale neighborhood search: recursively defined neighborhoods and dynamic programming restrictions.

2. We provide basic theory for working with restricted dynamic programs. In particular, we show how to associate a neighborhood with each restriction of the canonical dynamic program for the TSP.

3. We provide a method of using a dynamic programming recursion to transform a neighborhood N into a possibly larger neighborhood N’ that is called the “dynamic programming expansion” of N. In the case of the 2-exchange neighborhood, the dynamic programming expansion is exponentially large, but can be searched in polynomial time.

4. Often the description of a recursively defined neighborhood is both compact and elegant.

We believe that the constructs in this paper will help to open up new lines for future research.

1. We expect that similar frameworks can be applied for other combinatorial optimization problems as well when there is a dynamic program that is much more efficient than complete enumeration.

2. Our constructs suggest the possibility of automating the search of a number of exponentially sized neighborhoods. In particular, if a neighborhood can be recursively defined, perhaps one can automatically derive the dynamic programming recursion from the recursive definition of the neighborhood.

Usually, the measure of a neighborhood search technique for a combinatorial optimization problem is how it performs in practice. In our case, several neighborhoods discussed in this paper have already been empirically tested by other researchers, including the dynasearch neighborhoods (Potts and van de Velde 1995 and Congram 2000), variants of pyramidal tours neighborhoods (Carlier and Villon 1990), and the Balas-Simonetti neighborhood (Balas and Simonetti 2001), all with some success for the TSP. Dynasearch neighborhoods have been applied with success on other combinatorial optimization problems as well (Agarwal et al. 2003, Congram, Potts, and van de Velde 2002, Ergun, Orlin, and Steele-Feldman 2002).


Nevertheless, we believe that the value of the methodologies in this paper should ultimately be judged on three criteria: First, will they lead to newly discovered neighborhood search techniques that are effective for some combinatorial optimization problems? Second, will they be the first step in a process that ultimately results in software that helps users to design neighborhood search techniques? Third, will they lead to further developments in the theory of VLSN search?

In Section 2 of this paper, we present additional background as well as our notation and definitions. In Sections 3, 4, 5, and 6 we present techniques for constructing exponentially large neighborhoods that are searchable in polynomial time using dynamic programming. In Section 7, we present a summary and conclusions.

Section 2. Definitions and Background for the TSP

In this section we offer definitions and background for the Traveling Salesman Problem (TSP).

The Traveling Salesman Problem

The traveling salesman problem tries to find the minimum distance tour on n cities that are labeled 1, 2, …, n . Let the distance from city i to city j be c (i , j ). We represent a tour using a permutation T ∈ S n , where S n denotes the set of all permutations of {1, 2, …, n }. The permutation T = T (1), …, T (n ) refers to the tour in which the first city visited is T (1), the next city visited is T (2), and so on. The cost of the tour

T is denoted as c(T) = c(T(n), T(1)) + Σ_{i=1}^{n-1} c(T(i), T(i+1)). We refer to a pair of consecutive cities of T (including T(n), T(1)) as an edge of the tour T. We denote a sequence A of k cities as A = ⟨i 1, i 2, …, i k⟩. If k = n, we refer to the sequence A as a tour. The reverse of a sequence A of cities is the cities of A in reverse order, and is denoted as Rev(A). For example, Rev(⟨i 1, i 2, …, i k⟩) = ⟨i k, i k-1, …, i 1⟩.

If i ≤ j, we let [i, j] be shorthand for the sequence ⟨i, i+1, …, j⟩. If i > j, then [i, j] is the empty sequence ⟨⟩.
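
For concreteness, the following is a small sketch of the tour-cost formula above in Python, assuming the tour is given as a 0-based list of cities and the distances as a 1-indexed matrix (these conventions are ours, chosen only for the example):

```python
def tour_cost(T, c):
    """Cost of the tour T = [T(1), ..., T(n)] (a permutation of the cities 1..n):
    c(T) = c(T(n), T(1)) + sum over i < n of c(T(i), T(i+1)).
    c is an (n+1) x (n+1) distance matrix with row/column 0 unused."""
    n = len(T)
    return c[T[-1]][T[0]] + sum(c[T[i]][T[i + 1]] for i in range(n - 1))
```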

The subset obtained from a subset S of cities by deleting the cities in S’ will be denoted as S \ S’. We abbreviate S \ {i} as S \ i.

If S is a subset or sequence of cities, then max(S ) denotes the maximum index of a city of S , and min(S ) denotes the minimum index of a city of S .

As per Deineko and Woeginger (2000), if A is a sequence of cities, and B is a different sequence of cities with no city in common with A, then A B is the sequence obtained by concatenating A with B. For example, ⟨3, 1, 7⟩ ⟨2, 6, 4⟩ = ⟨3, 1, 7, 2, 6, 4⟩.

Notation for Neighborhoods for the Traveling Salesman Problem

We first describe a neighborhood for the tour T I = ⟨1, 2, …, n⟩, which is also the identity permutation. A neighborhood N(T I) is a collection of tours, that is, N(T I) ⊆ S n.

Any neighborhood of T I can be extended to a neighborhood of any other tour T ∈ S n as follows. If σ ∈ N(T I), then the tour Tσ = ⟨T(σ(1)), T(σ(2)), …, T(σ(n))⟩ ∈ N(T). Mathematically, Tσ is the permutation obtained by composing T and σ. If we let f T(i, j) = c(T(i), T(j)), then the cost of the tour Tσ is Σ_{i=1}^{n-1} f T(σ(i), σ(i+1)) + f T(σ(n), σ(1)), which is the usual formula for the cost of σ except that c is replaced by f T.

In subsequent sections, we assume that the initial tour is T I = ⟨1, …, n⟩, and we provide dynamic programs for determining the minimum distance neighbor of T I. However, if one wants to search N(T) instead, then it suffices to replace c by f T in the recursions.

In general, we are treating neighborhoods for different sized TSPs. Sometimes, when the number of cities n is permitted to vary, we let N n denote the neighborhood set for problems with n cities. In the case that the number of cities is obvious from context, we drop the index, and denote the neighborhood set as N. We assume that the identity permutation T I ∈ N n, that is, ⟨1, 2, …, n⟩ ∈ N n for all n.

All of our recursions define both tours as well as sequences of fewer than n cities. We let N * refer to all sequences and tours defined by the recursion, and we let N = N * ∩ S n denote the neighborhood. We will refer to the set N * as a superneighborhood .

Section 3. Recursively Defined Neighborhoods

In this section, we discuss recursive definitions as part of VLSN search for the TSP. In particular, we will provide recursive definitions for well known exponentially sized neighborhoods together with dynamic programs based on the recursions for searching these neighborhoods efficiently. We illustrate our approach on one of the dynasearch neighborhoods as well as some “relatives” of this neighborhood, the pyramidal tour neighborhood, and the twisted sequences neighborhood.

As above, we will let N denote a neighborhood, and let N * denote all sequences created by the recursions in the construction of N . In each case, rather than present a dynamic program, we will create an equivalent state space graph. Our objective in so doing is to make explicit a very clear connection between the recursion that defines the neighborhood and the resulting dynamic program.

To illustrate our recursion-based approach, we first consider a recursion for generating the neighborhood S n , that is the neighborhood of all tours. The superneighborhood N * that we generate will consist of all sequences of cities in which the first city is 1.

The Held and Karp (1962) dynamic program for the TSP can be viewed as being based on the following recursion:

The Complete Neighborhood N TSP for the TSP.

1. 1 ∈ N *TSP.

2. If A ∈ N *TSP and k ∉ A, then A k ∈ N *TSP.

3. N TSP = N *TSP ∩ S n.

The state space graph corresponding to the dynamic program for the TSP can be obtained as follows. It includes a node for each state of the dynamic program as well as a special destination node t .

6

Let g TSP (S , j ) denote the optimal value for state (S , j ) ∈ V TSP in the Held and Karp dynamic program for the TSP. Then g TSP (S , j ) is the shortest length of a sequence A whose initial city is 1, whose terminal city is j , and such that A includes all of the cities of S .

The following are well known properties of the state space graph and its relation to the Held and Karp dynamic program DP TSP .

1. The shortest path from node ({1}, 1) ∈ V TSP to node (S , j ) has length g TSP (S , j ) .

2. Any shortest path from node ({1}, 1) ∈ V TSP to node t corresponds to a minimum length tour. If we let (S j, i j) denote the j-th node of V TSP on the shortest path, then the shortest length tour is ⟨i 1, …, i n⟩.

Properties 1 and 2 above extend to the other dynamic programming state space graphs as defined in this section and the next.
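
To make the states (S, j) concrete, here is a minimal sketch of the Held and Karp dynamic program itself in Python (exact but exponential, O(2^n n^2) time); the conventions of the example (a 1-indexed distance matrix, frozenset states, n ≥ 2) are assumptions, not part of the original formulation.

```python
from itertools import combinations

def held_karp(c):
    """g[(S, j)] = shortest length of a sequence that starts at city 1, ends at
    city j, and visits exactly the cities of S (the Held and Karp recursion).
    c: (n+1) x (n+1) distance matrix, 1-indexed; returns the optimal tour length."""
    n = len(c) - 1
    g = {(frozenset([1]), 1): 0.0}
    for size in range(2, n + 1):
        for rest in combinations(range(2, n + 1), size - 1):
            S = frozenset((1,) + rest)
            for j in rest:
                g[(S, j)] = min(g[(S - {j}, k)] + c[k][j]
                                for k in S - {j} if (S - {j}, k) in g)
    full = frozenset(range(1, n + 1))
    # closing the tour plays the role of the edge into the destination node t
    return min(g[(full, j)] + c[j][1] for j in range(2, n + 1))
```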

The Independent Compounded 2-Exchange Neighborhood: Dynasearch

We next present a class of exponentially sized neighborhoods developed in Potts and van de Velde (1995), Congram (2000) and Congram, Potts, and van de Velde (2002) that is obtained from the 2-exchange neighborhood by compounding sets of independent 2-exchanges.

We let h(i, j) be the cost associated with the sequence [i, j] = ⟨i, i+1, …, j⟩. Thus h(i, j) = Σ_{k=i}^{j-1} c(k, k+1). One can first compute h(1, j) and h(j, n) for all j in O(n) steps. Subsequently, computing h(i, j) = h(1, n) - h(j, n) - h(1, i) takes O(1) additional steps.
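
A minimal sketch of this precomputation in Python, using the equivalent prefix-sum identity h(i, j) = h(1, j) - h(1, i); the 1-indexed matrix convention is an assumption of the example.

```python
def make_h(c):
    """Return a function h(i, j) giving the cost of the path <i, i+1, ..., j>
    along the original tour, answered in O(1) after O(n) preprocessing.
    c: (n+1) x (n+1) distance matrix, 1-indexed."""
    n = len(c) - 1
    pre = [0.0] * (n + 1)                     # pre[j] = h(1, j)
    for j in range(2, n + 1):
        pre[j] = pre[j - 1] + c[j - 1][j]
    return lambda i, j: pre[j] - pre[i]       # h(i, j) = h(1, j) - h(1, i)
```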

Recall that for i ≤ j, Rev[i, j] denotes the sequence ⟨j, j-1, …, i⟩. We let RevMove[i, j] be the move that reverses the order of the cities in positions i to j. For example, if we apply RevMove[i, j] to the identity permutation T I, we obtain ⟨1, …, i-1⟩ Rev[i, j] ⟨j+1, …, n⟩. If we apply RevMove[3, 4] to ⟨1, 5, 2, 3, 4⟩, we would obtain ⟨1, 5, 3, 2, 4⟩.

A 2-exchange of a tour T is a permutation obtained from T by the operation RevMove[i, j] for some i < j. Equivalently, the 2-exchange of the tour ⟨1, …, n⟩ can be viewed as breaking edges (i-1, i) and (j, j+1) and adding edges (i-1, j) and (i, j+1). We say that two 2-exchanges RevMove[i 1, j 1] and RevMove[i 2, j 2] are independent if j 1 < i 2 - 1 or j 2 < i 1 - 1. If j 1 < i 2 or j 2 < i 1, we say that the two 2-exchanges are weakly independent. Potts and van de Velde (1995), and Congram, Potts, and van de Velde (2002) introduced neighborhoods based on compounding (or applying) independent moves under the name “dynasearch”, a term which we use here as well. We refer to the neighborhood obtained from T I by compounding independent 2-exchanges as the 2-exchange dynasearch neighborhood. We refer to the neighborhood obtained by compounding weakly independent 2-exchanges as the weak 2-exchange dynasearch neighborhood.


Exponential neighborhoods based on compounding moves were designed and computationally tested on classes of single and parallel machine scheduling and vehicle routing problems as well as the traveling salesman problem and the linear ordering problem (Agarwal et al. 2003, Congram 2000, Congram, Potts, and van de Velde 2002, Ergun 2001, Ergun, Orlin, and Steele-Feldman 2002).

The size of the compounded independent moves neighborhood is Θ(1.7548^n), as may be computed from a simple recursion. See Congram (2000) and Ergun (2001) for derivations. The dynasearch neighborhoods can be searched in O(n^2) time by dynamic programs as in Potts and van de Velde (1995), Congram (2000), and Congram, Potts, and van de Velde (2002), by network flow techniques as in Agarwal et al. (2003), Ergun (2001), and Ergun, Orlin, and Steele-Feldman (2002), as well as by the approach given in this section.

The approach developed here to define the neighborhood is different from the approach developed in the original papers on dynasearch; however, the dynamic programming recursions developed here are very similar.

The 2-Exchange Dynasearch Neighborhood N DS for the TSP.

1. 1 ∈ N *DS.

2. If A ∈ N *DS and max(A) = i < n, then

a. A i+1 ∈ N *DS, and

b. A Rev[i+1, j-1] j ∈ N *DS for i+3 ≤ j ≤ n, and

c. A Rev[i+1, n] ∈ N *DS.

3. N DS = N *DS ∩ S n.

The state space graph for the dynasearch neighborhood is constructed in a similar manner to the graph G TSP .

As before, an optimal tour in N DS can be obtained by finding the shortest path in G DS from node ({1}, 1) to node t .
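
The following is a minimal O(n^2) sketch of this search in Python, written directly from rules 2a-2c above rather than from the original dynasearch papers. It assumes a symmetric, 1-indexed distance matrix and the identity starting tour ⟨1, …, n⟩; to search N(T) for another tour T, replace c by f T as in Section 2. The value f[i] plays the role of the state ({1, …, i}, i).

```python
import math

def best_2exchange_dynasearch_neighbor(c):
    """Cost of the best tour in the 2-exchange dynasearch neighborhood of the
    identity tour.  c: symmetric (n+1) x (n+1) distance matrix, 1-indexed, n >= 2."""
    n = len(c) - 1
    pre = [0.0] * (n + 1)                      # pre[j] = h(1, j)
    for j in range(2, n + 1):
        pre[j] = pre[j - 1] + c[j - 1][j]
    h = lambda i, j: pre[j] - pre[i]           # cost of the path i, i+1, ..., j

    # f[i] = min cost of a sequence in N*_DS on cities {1,...,i} ending at city i
    f = [math.inf] * (n + 1)
    f[1] = 0.0
    for j in range(2, n + 1):
        f[j] = f[j - 1] + c[j - 1][j]                              # rule 2a: append city j
        for i in range(1, j - 2):                                  # rule 2b: append Rev[i+1, j-1], then j
            f[j] = min(f[j], f[i] + c[i][j - 1] + h(i + 1, j - 1) + c[i + 1][j])

    best = f[n] + c[n][1]                                          # close the tour with edge (n, 1)
    for i in range(1, n):                                          # rule 2c: append Rev[i+1, n], then close
        best = min(best, f[i] + c[i][n] + h(i + 1, n) + c[i + 1][1])
    return best
```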


Theorem 1. The set N DS = N *DS ∩ S n is the 2-exchange dynasearch neighborhood. The corresponding state space graph has O(n) nodes and O(n^2) edges. The time to find a minimum distance neighbor is O(n^2).

Proof. Suppose first that σ is in the 2-exchange dynasearch neighborhood and that σ is obtained by k independent 2-exchanges. We prove the result by induction on k. If k = 0, then σ = [1, n] ∈ N DS. Suppose instead that the compounded independent 2-exchanges used to obtain σ are RevMove [i 2j -1, i 2j ] for j = 1 to k for some k ≥ 1, and where the indices are in increasing order. Then

σ = [1, i 1-1] Rev [i 1, i 2] [i 2+1, i 3-1] Rev [i 3, i 4] [i 4+1, i 5-1] … Rev [i 2k-1, i 2k ] [i 2k +1, n ].

Recall that [i, j] is the empty sequence ⟨⟩ for i > j.

It is easy to see that σ can be obtained by the recursion for N DS by starting with city 1, and at each application of Step 2 in the construction of σ, concatenating a single node or else concatenating Rev [i 2j -1, i 2j ] i 2j +1 for some j , or else appending Rev [i 2k -1, i 2k ] if i 2k = n .

Conversely, if σ ∈ N DS, then for some k

σ = [1, i 1-1] Rev [i 1, i 2] [i 2+1, i 3-1] Rev [i 3, i 4] [i 4+1, i 5-1] … Rev [i 2k -1, i 2k ] [i 2k +1, n ].

In this case, σ is obtained by compounding RevMove [i 2j -1, i 2j ] for j = 1 to k, and thus σ is in the 2-exchange dynasearch neighborhood.

Let V’ = {({1, …, j }, j ) : j = 1 to n } ∪ { ({1, …, n }, j ) : j = 2 to n -1}. We now claim that V DS = V’.

We first note that it is easily established that V’ ⊆ V DS. We next assume that (S, i) ∈ V DS, and we will show that (S, i) ∈ V’. Suppose |S| > 1, and that (S, i) was created by applying Step 2 to some state (S’, i’). Moreover, |S’| < |S|, and so we assume inductively that (S’, i’) ∈ V’, and so S’ = {1, …, i’}. It is easily verified that if 2a, 2b, or 2c is applied to (S’, i’), the resulting state (S, i) ∈ V’, thus showing that V DS ⊆ V’. It also follows that each state (S, j) can be represented in O(1) space by the pair (i, j) where S = {1, …, i}.

Thus |V DS| = O(n) and |E DS| = O(n^2), and the time to create G DS is O(n^2) since we can compute h(i, j) in O(1) steps after an initial pre-computation. The running time to find the shortest path from node ({1}, 1) to node t is O(n^2) since G DS is acyclic, completing the proof. (See, for example, Ahuja, Magnanti, and Orlin (1993) for information about shortest path algorithms.) ∎

In the proof, we commented on the storage space of a state. This storage is important because in creating G DS or any other state-space graph, we need to know whether a node (S , j ) derived from rule 2 is already present in the graph. If it is already present, we need to be able to identify it in O(1) steps. For each of the neighborhoods considered in this paper, except the Held-Karp dynamic program, one can store each state in O(1) space, and identify its presence in O(1) time.


Weakly Independent Compounded 2-exchanges

In this subsection, we base dynasearch on the relaxed concept of weak independence in order to develop a larger neighborhood that is also searchable in polynomial time.

The Weak 2-Exchange Dynasearch Neighborhood N WDS for the TSP

1. 1 ∈ N *WDS.

2. If A ∈ N *WDS and max(A) = i, then A Rev[i+1, j] ∈ N *WDS for i+1 ≤ j ≤ n.

3. N WDS = N *WDS ∩ S n.

This neighborhood is more straightforward to define than the dynasearch neighborhood, as is its state space graph.

Theorem 2. The set N WDS = N *WDS ∩ S n is the weak 2-exchange dynasearch neighborhood. The corresponding state space graph G WDS has O(n^2) nodes and O(n^3) edges. The time to find a minimum distance neighbor is O(n^3).

Proof. We first show that the weak 2-exchange dynasearch neighborhood is contained in N WDS. Suppose that σ can be obtained by compounding weakly independent 2-exchanges. By permitting null operations such as RevMove[i, i], σ can be obtained by performing RevMove [i 2j -1, i 2j ] for j = 1 to k, where (1) i 1 = i 2 = 1, (2) i 2k = n, and (3) i 2j+1 = i 2j + 1 for j = 1 to k-1. It follows that σ can be obtained by starting with 1, and concatenating Rev [i 2j -1, i 2j ] for j = 2 to k, and thus σ ∈ N WDS. Conversely, if σ ∈ N WDS and can be obtained by starting with 1 and concatenating a sequence of reversals, then it is clear that σ can be obtained by compounding weakly independent 2-exchanges.

We next claim that for any state (S, j) with max(S) = i, it follows that S = {1, …, i}. The claim is clearly true if i = 1, and it follows directly by induction because of Step 2 of the construction of G WDS. Moreover, the state ({1, …, i}, j) is obtainable for 2 ≤ j ≤ i by starting with ({1, …, i-1}, i-1) and then applying Step 2 of the construction of G WDS. The only state (S, j) with j = 1 is ({1}, 1). We conclude that

V WDS = {({1, …, i}, k ) : for 1 ≤ k ≤ i ≤ n }.

Thus there are O(n^2) nodes, and O(n) arcs emanating from each by the construction in Step 2 of G WDS, leading to O(n^3) arcs. The time to create G WDS is O(n^3) since we can compute h(i, j) in O(1) steps after an initial pre-computation of h(1, j) for j = 1 to n, and we can create each arc in O(1) steps assuming that we store max(S) for each state (S, j). The running time to find the shortest path from node 1 to all other nodes is O(n^3) since we are solving the shortest path problem on an acyclic graph. This completes the proof. ∎
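
Analogously, the following is a minimal O(n^3) sketch of the search over G WDS, again assuming a symmetric, 1-indexed distance matrix and the identity starting tour; g[i][k] plays the role of the state ({1, …, i}, k).

```python
import math

def best_weak_dynasearch_neighbor(c):
    """Cost of the best tour in the weak 2-exchange dynasearch neighborhood of the
    identity tour.  c: symmetric (n+1) x (n+1) distance matrix, 1-indexed, n >= 2."""
    n = len(c) - 1
    pre = [0.0] * (n + 1)
    for j in range(2, n + 1):
        pre[j] = pre[j - 1] + c[j - 1][j]
    h = lambda i, j: pre[j] - pre[i]              # cost of the path i, i+1, ..., j

    # g[i][k] = min cost of a sequence in N*_WDS covering {1,...,i} and ending at city k
    g = [[math.inf] * (n + 1) for _ in range(n + 1)]
    g[1][1] = 0.0
    for i in range(1, n):
        for k in range(1, i + 1):
            if g[i][k] == math.inf:
                continue
            for j in range(i + 1, n + 1):         # rule 2: append Rev[i+1, j]; new last city is i+1
                cand = g[i][k] + c[k][j] + h(i + 1, j)
                if cand < g[j][i + 1]:
                    g[j][i + 1] = cand
    return min(g[n][k] + c[k][1] for k in range(1, n + 1))   # close the tour back to city 1
```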

Other Dynasearch Neighborhoods.

The results of the previous sections on dynasearch applied to independent and weakly independent 2-exchanges can be extended to other neighborhoods as well. For example, let Insert[i, j] = [i+1, j] i, and let InsertMove[i, j] be the move that takes the city in position i and inserts it directly after city j. For example, if we apply InsertMove[i, j] for 1 < i < j < n to the tour ⟨1, …, n⟩, we obtain the sequence [1, i-1] Insert[i, j] [j+1, n]. Similarly, we can define Swap[i, j] = j [i+1, j-1] i, and we can define SwapMove[i, j] to be the move that swaps the cities in positions i and j of a tour. The insert neighborhood is the neighborhood of ⟨1, …, n⟩ obtained by permitting at most one InsertMove, and the swap neighborhood is obtained by permitting at most one SwapMove. We let the composite neighborhood be the neighborhood obtained by permitting at most one 2-exchange or InsertMove or SwapMove.

We can define independent and weakly independent InsertMoves, SwapMoves, and composite moves as before and modify the dynasearch and the weak dynasearch neighborhoods accordingly. In general, one would expect the dynasearch neighborhood to require O(n^2) time to search, and the weak dynasearch neighborhood would require O(n^3) time. However, the weak insertion dynasearch neighborhood can be searched even more efficiently when it is formulated with dynamic programming restrictions, as we will see in Section 5.

Section 4. Other Recursively Defined Neighborhoods

In this section, we consider recursive ways for defining pyramidal tours and the twisted sequences neighborhood.

In the dynasearch neighborhood of Section 3 as well as in the standard dynamic program for the traveling salesman problem, every single tour began with the city 1, and the state (S, j) referred to tours that started with city 1, ended with city j, and visited all of the cities in S. In this section, we no longer assume that the initial city in a sequence is city 1. Rather, we let (i, S, j) refer to sequences A such that the first city in A is i, the last city in A is j, and A includes all of the cities in S.

The Pyramidal Tours Neighborhood

The pyramidal tours neighborhood for the TSP was first introduced by Sarvanov and Doroshko (1981). We say that a sequence A (which is not necessarily a tour) is pyramidal if it consists of cities {i, i+1, …, n} for some i ≥ 1, and if the cities in A increase in index to city n, and then decrease in index. For example, if n = 7, then ⟨4, 7, 6, 5, 3⟩ is pyramidal.

The pyramidal neighborhood consists of all pyramidal tours. For example, if n = 7, then the tour ⟨3, 4, 7, 6, 5, 2, 1⟩ is in the pyramidal neighborhood. Our definition differs slightly from the usual definition of pyramidal tour in that we do not require the first city to be city 1; that is, city 1 may be the last city instead. Thus, our definition includes all the usual pyramidal tours as well as their reversals. We use this slightly non-standard definition in order to simplify the description of the recursion, but it would be easy to use the original definition as well. The size of the pyramidal neighborhood is 2^n.

Note that the edges (1, 2) and (n-1, n) of the original tour belong to all of its neighbors. This is a drawback in using the pyramidal tour neighborhood in practice. To circumvent this drawback, Carlier and Villon (1990) consider all n rotations associated with a given tour, and search the pyramidal neighborhoods of each of these rotations. The size of this composite neighborhood is Θ(n 2^n) and it can be searched in O(n^3) time by a dynamic program described in Carlier and Villon (1990), or using n iterations of a shortest path algorithm on an appropriately created auxiliary graph (see Ahuja et al. (2002) for details), or using n iterations of the procedure listed below.

We now present a recursion which defines the pyramidal neighborhood, as well as its corresponding state space graph.

The Pyramidal Tours Neighborhood N PT for the TSP.

1. n ∈ N *PT.

2. If A ∈ N *PT and min(A) = i, then

a. A i-1 ∈ N *PT, and

b. i-1 A ∈ N *PT.

3. N PT = N *PT ∩ S n.

Theorem 3. The superneighborhood N *PT consists of all pyramidal sequences, and the neighborhood N PT is the pyramidal tour neighborhood. The corresponding state space graph G PT has O(n^2) nodes and O(n^2) edges. The time to find a minimum distance neighbor is O(n^2).

Proof. We first claim that for A ∈ N *PT, A is pyramidal. It is true if |A| = 1, since in that case A = ⟨n⟩. We now consider A’ and assume inductively that the claim is true for all sequences with fewer cities than A’. The sequence A’ is created in Step 2 of the recursion, and either (i) A’ = A i-1 or (ii) A’ = i-1 A, where A ∈ N *PT and min(A) = i. By inductive hypothesis A is pyramidal and thus A consists of cities {i, …, n}, and the sequence in A increases in index to city n and then decreases in index. It follows that A’ is also pyramidal.

We next establish that every pyramidal sequence is in N *PT. So suppose that A is pyramidal, and let A j be A as restricted to cities {j, j+1, …, n}. Since A is pyramidal, it follows that A j is also pyramidal for each j. We assume inductively that A j+1 ∈ N *PT for some j < n. It is easily verified that A j = j A j+1 or else A j = A j+1 j. It follows that A j ∈ N *PT, completing the proof of the first sentence of the theorem.


We now establish that the state space graph has O(n^2) nodes. Suppose that (i, S, j) is a state. Then S = {k, k+1, …, n} for some k, and k = i or k = j. It follows that there are O(n) different sets for S and O(n) ways that the pair (i, j) can be chosen for a specified set. And for each state (i, S, j), the state can be represented by the triple (i, min(S), j), which takes O(1) space, and can be recognized in O(1) time. Because the nodes (i, S, j) and (j, S, i) for S = {i, i+1, …, n} both have two arcs emanating, we conclude that |E PT| = O(n^2), and G PT can be created in O(n^2) time. Moreover, the optimal tour in N PT is induced by the shortest path from node (n, {n}, n) to node t, which can be obtained in O(n^2) steps, completing the proof. ∎
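
A minimal O(n^2) sketch of this search in Python, written directly from the recursion (start from ⟨n⟩ and repeatedly prepend or append the next-smaller city); the 1-indexed matrix convention is an assumption of the example, and asymmetric distances are handled naturally since edges are always costed in the direction of travel.

```python
import math

def best_pyramidal_neighbor(c):
    """Cost of the best tour in the pyramidal tours neighborhood of the identity tour.
    c: (n+1) x (n+1) distance matrix, 1-indexed, n >= 2."""
    n = len(c) - 1
    # g[a][b] = min cost of a pyramidal sequence on cities {min(a,b), ..., n}
    #           that starts at city a and ends at city b
    g = [[math.inf] * (n + 1) for _ in range(n + 1)]
    g[n][n] = 0.0
    for k in range(n, 1, -1):                     # k = current minimum city of the sequence
        for other in range(k, n + 1):
            states = [(k, k)] if other == k else [(other, k), (k, other)]
            for (a, b) in states:
                if g[a][b] == math.inf:
                    continue
                # rule 2a: append city k-1 after the last city b
                g[a][k - 1] = min(g[a][k - 1], g[a][b] + c[b][k - 1])
                # rule 2b: prepend city k-1 before the first city a
                g[k - 1][b] = min(g[k - 1][b], g[a][b] + c[k - 1][a])
    # a tour covers {1,...,n}; close the cycle from the last city back to the first
    best = math.inf
    for other in range(2, n + 1):
        best = min(best, g[other][1] + c[1][other])   # sequence ends at city 1
        best = min(best, g[1][other] + c[other][1])   # sequence starts at city 1
    return best
```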

The Twisted Sequences Neighborhood

We once again consider 2-exchanges, each of which is denoted by an operation RevMove[i, j] for some i and j with 1 < i < j ≤ n. Let R be an ordered set of 2-exchanges on a tour on n cities. For example, suppose that n = 9, and R = {RevMove[3, 5], RevMove[2, 7], RevMove[4, 6]}. Applying the operations in R is to carry out the reversals in the order that they appear in R. For example, applying RevMove[3, 5] to A 0 = ⟨1, 2, 3, 4, 5, 6, 7, 8, 9⟩ would lead to the sequence A 1 = ⟨1, 2, 5, 4, 3, 6, 7, 8, 9⟩. Applying RevMove[2, 7] to A 1 would lead to the sequence A 2 = ⟨1, 7, 6, 3, 4, 5, 2, 8, 9⟩ since it reverses the positions of the elements currently in positions 2 to 7. Applying RevMove[4, 6] to A 2 leads to the sequence A 3 = ⟨1, 7, 6, 5, 4, 3, 2, 8, 9⟩.

We say that one reversal RevMove[j 1, k 1] is contained in RevMove[j 2, k 2] if [j 1, k 1] ⊆ [j 2, k 2]. We say that a subset S of cities is consecutive if S = {i, i+1, …, j} for some 1 ≤ i ≤ j ≤ n.

Aurenhammer (1988) defined twisted sequences in the context of sorting of data. We provide a definition that Aurenhammer shows is equivalent to his original definition (modulo a slightly different specification of the operations). We say that a sequence A on a consecutive set {i, i+1, …, j} of cities is a twisted sequence if it can be obtained by carrying out a sequence R of 2-exchanges on the permutation ⟨i, i+1, …, j⟩ such that the following two properties hold:

TS Property 1. Each 2-exchange RevMove(k, l) of R is contained in [i, j].

TS Property 2. If RevMove(k, l) precedes RevMove(k’, l’) in R, then either the two moves are weakly independent or else RevMove(k, l) is contained in RevMove(k’, l’).

For a sequence R of 2-exchanges satisfying the two TS Properties, the same tour is obtained regardless of the order in which the moves in R are carried out. However, we shall assume that they are carried out in the order they appear in R.

This definition is equivalent to the definition given by Aurenhammer in the case that the sequence consists of all cities in {1, 2, …, n}. We note that the sequence A 3 obtained from R above is a twisted sequence because A 3 can be obtained by a single reversal, even though R does not satisfy TS Properties 1 and 2 above.

Aurenhammer showed that the twisted sequences neighborhood contains at least Ω(2^n) and at most O(c^n) tours for some constant c. Deineko and Woeginger (2000) claimed that the twisted sequences neighborhood contains O(6^n) tours. They also presented a dynamic program which finds the best tour in the neighborhood in O(n^7) time. The size of the twisted sequences neighborhood grows asymptotically as (3 + 2√2)^n; in particular, it is Ω(5.8284^n) and O(5.8285^n).

We now present a recursion which defines the twisted sequences neighborhood.

The Twisted Sequences Neighborhood N TS for the TSP.

1. j ∈ N *TS for each j = 1 to n.

2. If A ∈ N *TS, B ∈ N *TS, and max(A) = min(B) - 1, then A B ∈ N *TS and Rev(A B) ∈ N *TS.

3. N TS = N *TS ∩ S n.

Before presenting the state space graph for the twisted sequences neighborhood, we establish that N *TS is the set of all twisted sequences, and N TS is the twisted sequences neighborhood.

Theorem 4. The superneighborhood N *TS consists of all twisted sequences, and the neighborhood N TS is the twisted sequences neighborhood.

Proof. We first note that for any sequence A ∈ N *TS, the set of cities in A is consecutive. This is easily established via induction on the number of operations needed to create A. We also observe that, by Rule 2 in the creation of N *TS, if A ∈ N *TS, then Rev(A) ∈ N *TS.

We next claim that if A’ ∈ N *TS, then A’ is a twisted sequence. The claim is clearly true if A’ = ⟨j⟩ for some j. We assume inductively that the claim is true for sequences A ∈ N *TS with |A| < |A’|. By construction, A’ = A B or else A’ = Rev(A B), where A ∈ N *TS is defined on cities {i, …, j} for some i and j, and B ∈ N *TS is defined on cities {j+1, …, k}. By inductive hypothesis, there are sets of 2-exchanges R A and R B that satisfy TS Properties 1 and 2, and which applied to ⟨i, …, j⟩ and ⟨j+1, …, k⟩ yield A and B, respectively. Then A’ is either obtained from R = R A, R B or else it is obtained from R’ = R A, R B, RevMove(i, k). This completes the proof of the first claim.

We next claim that every twisted sequence is in N *TS. To see this, let A’ be a twisted sequence on {i, i+1, …, j}, and let R be the set of 2-exchanges satisfying TS Properties 1 and 2 which, when applied to ⟨i, …, j⟩, yields A’. If A’ can be obtained in multiple such ways via 2-exchanges, let R be the set with the smallest number of 2-exchanges that generates A’.

We inductively assume that the claim is true for all twisted sequences with fewer cities than A’. We also assume inductively that the claim is true for all twisted sequences A with the same number of cities as A’ but with fewer 2-exchanges needed to generate A .

We consider two cases. In the first case, we assume that there is no 2-exchange that changes the position of city i. In this case, if we apply the reversals in R to ⟨i+1, …, j⟩, we obtain A’ \ i, that is, we obtain the subsequence obtained from A’ by deleting city i. By inductive hypothesis A’ \ i ∈ N *TS, and by Rule 2, A’ = i A’ \ i ∈ N *TS. So, if no 2-exchange contains i, then A’ ∈ N *TS.

14

We now assume that there is a reversal that contains i, and let RevMove(i, k) be the last such reversal in R. We now consider the subcase that k = j. In this case, the reversal Rev(A’) of sequence A’ is obtained from ⟨i, …, j⟩ by applying the 2-exchanges in R’ = R \ RevMove(i, j). Since |R’| < |R|, by inductive hypothesis Rev(A’) ∈ N *TS, and so A’ ∈ N *TS.

In the remaining subcase, k < j. By TS Property 2, the set R may be partitioned into two sets R 1 and R 2 where each 2-exchange of R 1 is contained in {i, …, k}, and each 2-exchange of R 2 is contained in {k+1, …, j}. We assume that the order of the 2-exchanges in R 1 and R 2 is consistent with their ordering in R. Let A be generated by R 1 and let B be generated by R 2. By inductive hypothesis, A ∈ N *TS and B ∈ N *TS. Moreover, A’ = A B. It follows from Rule 2 that A’ ∈ N *TS. ∎

We now provide the nodes of the state space graph for twisted sequences, and show that the resulting dynamic program solves the twisted sequences neighborhood in O(n^7) time, which is the same bound obtained by Deineko and Woeginger (2000). We do not create the state space graph in its entirety since the optimal solution to the dynamic program does not correspond in a simple way to the shortest path in the state space graph. We let g TS(i, S, j) be the minimum cost of a twisted sequence starting at city i, ending at city j, and passing through each city of S.
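
The recursion DP TS itself is not written out above, so the following is a sketch of one O(n^7) dynamic program derived from the recursion for N *TS: a twisted sequence on a consecutive block of cities is either a single city or the concatenation of two twisted sequences on the lower and upper sub-blocks, in either order. The table g[lo][hi][i][j] plays the role of g TS(i, S, j) with S = {lo, …, hi}; the 1-indexed matrix convention is an assumption, and the quartic space and O(n^7) time make this practical only for small n.

```python
import math

def best_twisted_sequence_neighbor(c):
    """Cost of the best tour in the twisted sequences neighborhood of the
    identity tour.  c: (n+1) x (n+1) distance matrix, 1-indexed, n >= 2."""
    n = len(c) - 1
    INF = math.inf
    # g[lo][hi][i][j] = min cost of a twisted sequence on cities {lo,...,hi}
    #                   starting at city i and ending at city j
    g = [[[[INF] * (n + 1) for _ in range(n + 1)]
          for _ in range(n + 1)] for _ in range(n + 1)]
    for v in range(1, n + 1):
        g[v][v][v][v] = 0.0
    for length in range(2, n + 1):
        for lo in range(1, n - length + 2):
            hi = lo + length - 1
            for i in range(lo, hi + 1):
                for j in range(lo, hi + 1):
                    if i == j:
                        continue
                    best = INF
                    for m in range(lo, hi):                  # split into [lo, m] and [m+1, hi]
                        if i <= m < j:                       # start in the lower block, end in the upper
                            for j1 in range(lo, m + 1):
                                left = g[lo][m][i][j1]
                                if left == INF:
                                    continue
                                for i2 in range(m + 1, hi + 1):
                                    best = min(best, left + c[j1][i2] + g[m + 1][hi][i2][j])
                        if j <= m < i:                       # start in the upper block, end in the lower
                            for j1 in range(m + 1, hi + 1):
                                left = g[m + 1][hi][i][j1]
                                if left == INF:
                                    continue
                                for i2 in range(lo, m + 1):
                                    best = min(best, left + c[j1][i2] + g[lo][m][i2][j])
                    g[lo][hi][i][j] = best
    # the best twisted tour: a twisted sequence on {1,...,n}, closed back to its first city
    return min(g[1][n][i][j] + c[j][i]
               for i in range(1, n + 1) for j in range(1, n + 1) if i != j)
```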

Theorem 5. The dynamic program DP TS finds the optimal twisted sequence in the neighborhood N TS in O(n^7) time.

Proof. The number of states in the state graph is O(n^4) since each state is of the form (i, S, j) where S is consecutive and i and j are cities in S. If one looks at the minimization in Rule 3 of the dynamic programming recursion, one sees that one needs to minimize over choices of k 3, i 2, and j 1, which results in O(n^3) time to determine g(i 1, S, j 2). Thus the running time is O(n^7). ∎

The running time matches the one developed by Deineko and Woeginger (2000), and the dynamic program is essentially the same.

Section 5. Optimization over DP Restrictions.

In this section and in the next section, we apply dynamic programming restrictions; that is, we take the Held-Karp dynamic program for the traveling salesman problem as given by the state space graph G TSP in Section 3, and solve this dynamic program as restricted to a subset V ⊆ V TSP. We denote the state space graph as G TSP[V], which is standard graph theoretic notation for the subgraph of G TSP induced by the subset V of nodes and all edges with both endpoints in V. This process induces a neighborhood of the original tour, which we describe in this section. We are particularly interested in examples in which the number of states is polynomially bounded and the number of tours in the induced neighborhood is exponentially large.

There is a sizeable literature in approximate dynamic programming for hard combinatorial optimization problems that is based on dynamic programming restrictions. As with the dynamic programming restrictions of this section, approximate dynamic programming is an approach that tries to circumvent the “curse of dimensionality” by contracting the state space. Usually, in approximate dynamic programming and other forms of dynamic programming restrictions, the size of the dynamic program is decreased by simultaneously aggregating some of the states and eliminating some of the constraints. Thus, the problem solved is a relaxation of the original, rather than a restriction of the original. Typically, the solution value obtained in approximate dynamic programming is a lower bound (for minimization problems) on the optimal value, whereas in the approach we discuss in this section, the optimal solution for the neighborhood is feasible for the TSP and thus is an upper bound on the optimal value. Christofides, Mingozzi, and Toth (1981) used state space reductions to derive lower bounds for the TSP, the VRP, and variants. They give several different relaxations of the states, show how these correspond to known relaxations of the original problems, and show that these relaxations are equivalent to Lagrangian relaxations. Similar techniques are applied to machine scheduling problems in Abdul-Razaq and Potts (1988), Hariri and Potts (1994), Ibaraki and Nakamura (1994), and Potts and van Wassenhove (1987).

In order to associate neighborhoods with subsets of states, we need some additional notation and definitions. For a given sequence A = ⟨i 1, i 2, i 3, …, i k⟩, with i 1 = 1, we let State(A) = (S, i k), where S = {i 1, i 2, …, i k}. For a sequence A = ⟨i 1, i 2, i 3, …, i k⟩, we refer to the subsequences A j = ⟨i 1, i 2, i 3, …, i j⟩ for j = 1 to k as the initial subsequences of A. The canonical dynamic program creates a tour A by starting with city 1 and concatenating one city at a time. With this in mind, for each tour A we let

V TSP[A] = {t} ∪ {State(A j) : A j is an initial subsequence of A, for j = 1 to |A|}.

For a given collection V ⊆ V TSP, let N TSP[V] = {A ∈ S n : V TSP[A] ⊆ V}. In other words, N TSP[V] contains all tours A such that the states needed to generate A are all in V. Similarly, we let N *TSP[V] = {A : V TSP[A] ⊆ V}. We will soon show that N TSP[V] is the neighborhood induced by solving the dynamic program on state space graph G TSP[V], and N *TSP[V] is the corresponding superneighborhood.

In the following, for each A = ⟨i 1, …, i k⟩, we let ĉ(A) denote the cost of the sequence A where A is viewed as a path; that is, ĉ(A) = Σ_{j=1}^{k-1} c(i_j, i_{j+1}). If k < n, then c(A) = ĉ(A). If A is a tour, then c(A) = ĉ(A) + c(i_n, i_1).

The following theorem shows that finding the shortest path from node ({1}, 1) to all other nodes in G TSP[V] corresponds to finding the best tour in N TSP[V].


Theorem 6. Let V be any subset of states of V TSP . Let (S , j ) be any state in V , and let g (S , j ) be the minimum cost of a path from ({1}, 1) to (S , j ) in G TSP [V ]. Then

g(S, j) = min {ĉ(A) : State(A) = (S, j) and A ∈ N *TSP[V]}, and

g(t) = min {c(A) : A ∈ N TSP[V]}.

Proof. We first claim that g(S, j) = min {ĉ(A) : State(A) = (S, j) and A ∈ N *TSP[V]}. This claim is clearly true for |S| = 1, since the only state of V TSP with one element is ({1}, 1). Let us assume inductively that the claim is true for any state (S’, j’) with |S’| < |S|, and suppose that |S| > 1. In the restriction of the Held-Karp dynamic program for the TSP,

g (S , j ) = min {g (S \ j , k ) + c (k , j ) : k ∈ S \ j and (S \ j , k ) ∈ V },

and thus by induction,

g(S, j) = min {ĉ(A’) + c(k, j) : A’ ∈ N *TSP[V] and State(A’) = (S \ j, k)}.

Let A’’ be the sequence that attains the minimum in the previous relation. Since A’’ ∈ N *TSP[V], State(A’’) = (S \ j, k), and (S, j) ∈ V, it follows that A’’ j ∈ N *TSP[V], and thus

g(S, j) = min {ĉ(A) : State(A) = (S, j) and A ∈ N *TSP[V]}.

From the above claim it also follows that g(t) = min {c(A) : A ∈ N TSP[V]}. ∎
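
As a concrete illustration of Theorem 6, the following is a minimal sketch of solving the restricted dynamic program G TSP[V] in Python; the representation of a state as a (frozenset, last-city) pair and the function name are conventions of the example only. Passing the set of all states recovers the full Held-Karp search, while the restricted state sets of the following subsections induce the corresponding neighborhoods.

```python
import math

def search_restricted_dp(c, allowed):
    """Best tour in N_TSP[allowed], i.e. the best tour all of whose prefix states
    lie in the given restriction.
    c: (n+1) x (n+1) distance matrix, 1-indexed.
    allowed: a set of states (frozenset S, j) with 1 in S and j in S;
             it should contain the start state ({1}, 1)."""
    n = len(c) - 1
    g = {(frozenset([1]), 1): 0.0}
    # the DP is acyclic in |S|, so process the allowed states by increasing size
    for (S, j) in sorted(allowed, key=lambda state: len(state[0])):
        if (S, j) not in g:
            continue                         # state not reachable within the restriction
        for k in range(2, n + 1):
            if k in S:
                continue
            nxt = (S | {k}, k)
            if nxt in allowed:
                cand = g[(S, j)] + c[j][k]
                if cand < g.get(nxt, math.inf):
                    g[nxt] = cand
    full = frozenset(range(1, n + 1))
    return min((g[(S, j)] + c[j][1] for (S, j) in g if S == full), default=math.inf)
```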

We next apply the results of Theorem 6 to the neighborhood first developed by Balas (1996), and later extended by Balas and Simonetti (2001).

The Balas-Simonetti Neighborhood

Balas (1996) considered the neighborhood consisting of all sequences A with the following properties:

(1) the first city of A is city 1, and (2) there is a parameter K of the neighborhood, such that i follows j in A if j + K ≤ i. We consider the following equivalent formulation: if i precedes j in A, then i < K + j. Balas presented a dynamic programming recursion for finding the minimum distance tour in the neighborhood that runs in O(K^2 2^K n) time, which is linear in n for fixed K and polynomial in n for K = O(log n). The number of tours in the neighborhood is exponential in n whenever K grows slowly with n. (Here, exponential means non-polynomial.)
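
To connect this neighborhood to the restricted dynamic programs of this section, the following sketch builds an allowed state set for the basic neighborhood with a constant parameter K. The description of the feasible prefix states is our own inference from the condition "i precedes j implies i < j + K", not taken from the paper, and the result is meant to be fed to a generic restricted Held-Karp search such as the sketch after Theorem 6; it is not the compact O(n K 2^K) implementation discussed below.

```python
from itertools import combinations

def balas_states(n, K):
    """Allowed Held-Karp states (a sketch) for the basic Balas neighborhood with
    parameter K.  A feasible prefix covers S = {1,...,r-1} union S', where r is
    the smallest unvisited city and S' is a subset of {r+1,...,r+K-1}, and it may
    end at any city j in S with max(S) < j + K."""
    allowed = set()
    for r in range(2, n + 2):                             # r = n+1 means "no unvisited city"
        window = range(r + 1, min(r + K, n + 1))          # candidates for S' above r
        for size in range(len(window) + 1):
            for extra in combinations(window, size):
                S = frozenset(range(1, r)) | frozenset(extra)
                m = max(S)
                for j in S:
                    if j == 1 and len(S) > 1:
                        continue                          # city 1 is always the first city
                    if m < j + K:
                        allowed.add((S, j))
    return allowed
```

For example, under these assumptions search_restricted_dp(c, balas_states(n, K)) (with the sketch above) searches this neighborhood, although without the compact state encoding of Theorem 7 its running time exceeds the O(K^2 2^K n) bound.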

Balas and Simonetti (2001) generalized the neighborhood to require a condition that is equivalent to the following: if i precedes j in A, then i < K(j) + j, where K(j) is a positive integer bounded above by K. They also showed how to implement the dynamic program effectively by searching the state space graph, and they carried out extensive computational experiments. Here we show that the Balas-Simonetti neighborhood can be constructed in a very natural way using recursion. Moreover, the recursion immediately suggests an implementation very similar to the one that Balas and Simonetti applied. In the following, if A is a sequence of cities or a set, we let Ā = {1, …, n} \ A, that is, the cities not in A.

We next show that N BS is the Balas-Simonetti neighborhood.

Lemma 1. The tour A ∈ N BS if and only if whenever i precedes j in A, then i < K(j) + j.

Proof. Suppose A = ⟨1, i 2, i 3, …, i n⟩, and let A j = ⟨1, i 2, i 3, …, i j⟩. Let us assume first that A ∈ N BS, and thus A k ∈ N *BS for each k. It follows directly from the construction of A k in Step 2 of the N BS neighborhood that i k < i l + K(i l) for all l > k.

We next assume that whenever i precedes j in A, then i < K(j) + j. We will claim that A k ∈ N *BS for all k, and thus A ∈ N BS. The claim is clearly true for k = 1, and we assume inductively that the claim is true for k - 1. By inductive hypothesis A k-1 ∈ N *BS, and by our choice of A, i k < i l + K(i l) for all l > k. It follows that A k ∈ N *BS, completing the proof. ∎

The recursion for N BS adds one city at a time to the end of a sequence A such that the newly added city satisfies a condition. Hence the recursion for N BS is a restriction of the Held and Karp recursion for N TSP and V BS ⊆ V TSP. Furthermore, by Theorem 6, finding the shortest path from node ({1}, 1) to node t in G TSP[V BS] corresponds to finding the best tour in N TSP[V BS], which is equivalent to N BS.

The following lemma is proved in Balas and Simonetti for their neighborhood. We give an alternative proof here that follows from the recursive construction of N *BS.

Theorem 7. Consider the Balas-Simonetti neighborhood and suppose that K < log n. Then |V BS| = O(n K 2^K) and |E BS| = O(n K^2 2^K). The time to construct G BS is O(n K^2 2^K), and the time to find a minimum distance neighbor is O(n K^2 2^K).

Proof. Let Q r = {(S, j) ∈ V BS : r is the smallest city not in S}. Then for any (S, j) ∈ Q r and 1 ≤ k < r it follows that k ∈ S, while for r + K(r) ≤ k ≤ n it follows that k ∉ S. Thus for any (S, j) ∈ Q r, S = {1, …, r-1} ∪ S’, where S’ ⊆ {r+1, …, r+K(r)-1}. It follows that there are at most 2^{K(r)} ≤ 2^K choices for S’ and thus for S. In addition, there are at most K choices for j, and so |Q r| = O(K 2^K). Moreover, V BS = ∪_{r=1}^{n} Q r, and so |V BS| = O(n K 2^K).

We note that in Step 2 in the creation of E BS, there are at most K edges emanating from any state (S, j), and so |E BS| = O(n K^2 2^K).

We next show that we can store (S, j) in O(1) space for all states (S, j) ∈ V BS. So suppose (S, j) ∈ Q r. We represent S by the pair (r, L(S)) where L(S) = Σ_{k ∈ S ∩ [r, r+K]} 2^{k-r}. Then 1 ≤ L(S) ≤ 2^{K+1} ≤ 2n, and the pair (r, L(S)) uniquely identifies and characterizes S. We refer to (r, L(S)) as the identifier for S, and we refer to the triple (j, r, L(S)) as the identifier for (S, j). For any state (S, j) created in Step 2 of the recursion of G BS, it takes O(K) time to determine the identifier. Moreover, it takes O(K) time to determine all of the arcs emanating from (S, j). So, the time to create G BS is O(n K^2 2^K), and the time to solve the shortest path problem is also O(n K^2 2^K). ∎

The Weak Insertion Dynasearch Neighborhood

In this subsection we show that the neighborhood N WIDS can be searched in O(n^2) time when it is described by a recursion which is a restriction of the Held and Karp recursion for N TSP. First we provide a characterization of the weak insertion dynasearch neighborhood, and then give the recursive description.

Lemma 2. Let A = ⟨1, i 2, i 3, …, i n⟩, and let A j = ⟨1, i 2, i 3, …, i j⟩ for each j = 2 to n. Then A is in the weak insertion dynasearch neighborhood if and only if for each j, the cities of A j are {1, 2, …, j+1} \ k for some k ∈ {2, …, j+1}.

Proof. Suppose first that A is in the weak insertion dynasearch neighborhood. If A = ⟨1, 2, …, n⟩, then the cities of A j are {1, …, j}, and the lemma is valid. Suppose instead that A ≠ ⟨1, 2, …, n⟩, and let InsertMove(r, s) be the last InsertMove in the creation of A. We assume inductively that for j < r, the cities of A j are {1, 2, …, j+1} \ k for some k ∈ {2, …, j+1}. Because A is formed by compounding weakly independent moves, the cities of A r-1 are {1, 2, …, r-1}. For r ≤ t ≤ s-1, the cities of A t are {1, 2, …, t+1} \ r. Finally, for t > s, the cities of A t are {1, 2, …, t}. We have thus established the “only if” part of the lemma.

Suppose instead that for each j, the cities of A j are {1, 2, …, j+1} \ k for some k ∈ {2, …, j+1}. Let S = {r : the cities of A r are {1, 2, …, r}}, and let us denote the elements of S as {1, j 2, …, j k}. Then one can establish inductively that A can be created from InsertMove(j_s + 1, j_{s+1}) for all s such that j_{s+1} ≠ j_s + 1. ∎

The Weak Insertion Dynasearch Neighborhood N WIDS for the TSP

1. 1 ∈ N *WIDS.

2. Suppose that A ∈ N *WIDS and max(A) = i. Then

a. A i+1 ∈ N *WIDS.

b. If |A| = i, then A i+2 ∈ N *WIDS.

c. If |A| = i-1, then A j ∈ N *WIDS, where j is the unique city in {1, …, i} \ A.

3. N WIDS = N *WIDS ∩ S n.

Theorem 8. The neighborhood N WIDS is the weak insertion dynasearch neighborhood. The corresponding state space graph G WIDS has O(n^2) nodes and O(n^2) edges. The time to find a minimum distance neighbor is O(n^2).

Proof. Lemma 2 establishes that N WIDS is the weak insertion dynasearch neighborhood.

By Lemma 2 there are O(n^2) values for (S, k) because S = {1, …, j} and 1 < k ≤ j or else S = {1, …, k} \ {i}, 1 < i < k. In both cases, the number of arcs emanating from each state is equal to 2, and thus there are O(n^2) edges. Moreover, the graph G WIDS is acyclic and can be created in O(n^2) steps. It follows that the time to find a shortest path in G WIDS from node ({1}, 1) to node t is O(n^2). ∎
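
Following the state description in the proof of Theorem 8, the sketch below builds the O(n^2) allowed state set; together with the generic restricted search sketched after Theorem 6 it searches the weak insertion dynasearch neighborhood, although that generic routine does not exploit the compact state encoding and therefore does not attain the O(n^2) bound.

```python
def wids_states(n):
    """Allowed Held-Karp states for the weak insertion dynasearch neighborhood:
    ({1,...,j}, k) with 1 < k <= j, ({1,...,k} minus {i}, k) with 1 < i < k,
    and the start state ({1}, 1)."""
    allowed = {(frozenset([1]), 1)}
    for j in range(2, n + 1):
        S = frozenset(range(1, j + 1))                    # prefix covering {1,...,j}
        for k in range(2, j + 1):
            allowed.add((S, k))
    for k in range(3, n + 1):
        for i in range(2, k):
            S = frozenset(x for x in range(1, k + 1) if x != i)   # {1,...,k} \ {i}
            allowed.add((S, k))                           # such a prefix ends at its maximum k
    return allowed
```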

We close this section with a simple negative result. No superset of the pyramidal tour neighborhood can be obtained as a neighborhood N TSP[V] if we require that |V| is polynomially bounded. To see this, suppose that the pyramidal tours are a subset of N TSP[V] for some V ⊆ V TSP. Such a subset V exists since N TSP[V TSP] contains all tours. Recall that a pyramidal tour can be expressed as ⟨i 1, …, i j-1⟩ n ⟨i j+1, …, i n-1⟩, where the sequence ⟨i 1, …, i j-1⟩ is increasing in index, and the sequence ⟨i j+1, …, i n-1⟩ is decreasing. There are 2^{n-2} distinct ways that one can choose the cities in ⟨i 1, …, i j-1⟩, which means that V must contain at least 2^{n-2} distinct states, contrary to the assumption that V is polynomially bounded in n.

Section 6. The Dynamic Programming Expansion of a Neighborhood.

For a given tour A, let V TSP[A] be the set of states of V TSP associated with A, as defined in Section 5, and for a given collection N of tours, we let V TSP[N] = ∪_{A ∈ N} V TSP[A].

In Section 5, we associated a neighborhood N TSP[V] with a subset V of states. In this section, we consider the inverse operation of associating a set of states V TSP[N] with a neighborhood. Moreover, for a given neighborhood N, we can first associate the set of states V’ = V TSP[N], and then create a second neighborhood N TSP[V’]. We refer to the neighborhood N TSP[V’] = N TSP[V TSP[N]] as the expansion of neighborhood N with respect to the dynamic program DP TSP, or more briefly as the dynamic programming expansion of N. In general, if N is any polynomially sized neighborhood, then N is a subset of the dynamic programming expansion N’ of N, and N’ can be searched in polynomial time.
