Some Personal Views on the Current State and the Future of Locational Analysis



Edited by Ioannis Giannikos and Stefan Nickel

Written by P. Avella, S. Benati, L. Canovas Martinez, K. Dalby, D. Di Girolamo, B. Dimitrijevic, G. Ghiani, I. Giannikos, N. Guttmann, T. H. Hultberg, J. Fliege, A. Marin, M. Muñoz Marquez, M. M. Ndiaye, S. Nickel, P. Peeters, D. Perez Brito, S. Policastro, F. A. Saldanha de Gama, P. Zidda

January 2, 1996

Abstract

In this paper a group of participants of the 12th European Summer Institute, which took place in Tenerife, Spain in June 1995, present their views on the state of the art and the future trends in Locational Analysis. The issues discussed include modelling aspects in discrete Location Theory, the influence of the distance function, the relation between discrete, network and continuous location, heuristic techniques, the state of technology and undesirable facility location. Some general questions are stated regarding the applicability of location models, promising research directions and the way technology affects the development of solution techniques.

Contents

1 Introduction
2 State of the models in discrete Location Theory
  2.1 Are models robust?
  2.2 What new assumptions can be made?
  2.3 Are the objective functions realistic?
  2.4 Further Developments
3 Axioms, or: What is a good Distance Measure?
4 Is there a gap between network, discrete and continuous Location Theory?
5 Dynamic Change
6 Inhomogeneous Distance Measures
7 The structure of the demand
8 Undesirable Facility Location
9 Heuristic Techniques
  9.1 Simulated Annealing
  9.2 Tabu Search
  9.3 Genetic Algorithms
  9.4 Artificial Neural Networks
  9.5 Lagrangean Relaxation
  9.6 Applications in Locational Analysis
10 State of technology
  10.1 Parallel processing and computation speed
  10.2 Data Structures and our way of thinking about algorithm performance
  10.3 How can technology influence the models we build?
11 Concluding Remarks


1 Introduction

Every young scientist has to find his or her way through the literature of the field he or she wants to work in. Furthermore, he or she has to identify a niche with enough potential for his or her own research. The difficulty of this task is the motivation for this paper, in which the participants of the 12th European Summer Institute tried to describe, from their perspective, what these niches in Location Theory may look like. One goal of the paper therefore is to show what young researchers are interested in. Another goal is to give people new to the field an impression of what might be interesting to look at in Location Theory. Especially to reach the latter goal we have included many references. The rest of the paper is organized as follows: in the first section we discuss the state of the models in discrete Location Theory. In the next sections we consider the axiomatic foundations of distance measures and how these foundations permeate the three main branches of Locational Analysis (discrete, network and continuous Location Theory). Moreover, some specific directions for further research, such as inhomogeneous distance measures and dynamic change, are discussed. Section 8 is devoted to undesirable facility location, which seems to be a very important branch of Location Theory. After that, heuristic techniques and their application to facility location problems are discussed. This section is followed by a section in which the state of technology and its influence on Location Theory is described. The paper ends with some conclusions.

2 State of the models in discrete Location Theory

During the 12th European Summer Institute the discussions among participants and lecturers covered a large variety of topics. One of the most popular and, sometimes, controversial issues was the state of the most common models in terms of their applicability, especially in discrete Location Theory. It would be virtually impossible to report all the views that were stated during ESI XII. Our objective in this section is to raise some issues which, we feel, are important and stimulate constructive discussions among researchers and practitioners within our discipline.

2.1 Are models robust?

We have the feeling that very little remains to be added to the basic models of the discipline. In many cases the p-centre, the p-median and the uncapacitated facility location models provide the general framework upon which one can fit the peculiar location problem one is aiming to solve. They provide both the theoretical setting and the computational techniques on which one can rely. Since these problems are NP-hard, we can never be safe against some really bad data instances. However, it is a common belief that "realistic" problems, i.e. with, say, some hundreds of nodes, can actually be solved in a "reasonable" amount of time with the currently available technology; see [27, 71, 92, 116]. Clearly, what we mean by "realistic" and "reasonable" is very much dependent on each particular problem. The next obvious question is whether it is sufficient to approximate real-life problems by the classical models in the literature or whether these models should be extended.
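To make the p-median model mentioned above concrete, here is a small illustration of our own (not from the paper): a brute-force solver for tiny instances, choosing p candidate sites to minimize the total client-to-nearest-site distance. The instance data in the usage example are made up.

```python
from itertools import combinations

def p_median(dist, p):
    """Brute-force p-median: choose p facility sites (columns of `dist`)
    minimizing the sum over clients of the distance to the nearest open site.
    dist[i][j] is the distance from client i to candidate site j.
    Exponential in the number of sites; illustrative only, since the
    problem is NP-hard in general."""
    n_clients = len(dist)
    n_sites = len(dist[0])
    best_cost, best_sites = float("inf"), None
    for sites in combinations(range(n_sites), p):
        cost = sum(min(dist[i][j] for j in sites) for i in range(n_clients))
        if cost < best_cost:
            best_cost, best_sites = cost, sites
    return best_cost, best_sites

# Three clients that also serve as the three candidate sites:
dist = [[0, 2, 4], [2, 0, 2], [4, 2, 0]]
print(p_median(dist, 1))  # the middle site is the single best location
```

For realistic instance sizes one would of course replace the enumeration by the exact and heuristic techniques the text cites.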

2.2 What new assumptions can be made?

The assumptions of the basic models are that every variable is controlled by the decision-maker. We are in a world where there is no room for uncertainty, and once a decision has been made, the problem is just to implement it. We assume that there is a complete separation between a first phase, where all the variables are evaluated, and a second phase, where one has to carry out a schedule of activities. However, this distinction is not always acceptable. First of all, in some real-life problems the optimal solution is very sensitive to the data, and a small modification, say an error of measurement or an unforeseeable delay, can completely break the optimal planning. Worst of all, it has been mentioned that such unexpected events can provoke a loss of money larger than the savings that could be achieved by solving the problem optimally [118]. Possible ways to address this problem include the introduction of constraints that prevent some very weak solutions; see [42]. Probabilistic or fuzzy measures can be introduced as well [3, 4, 8, 47]. If any other extension fails, the use of classic Monte-Carlo simulation methods may prove effective [33]. Breaks from the optimal schedule can also arise because it is not even true that every variable is controlled by the decision-maker. Sometimes, in a real-world situation, the decision-maker must take into account the decisions of other agents or interested parties. In a competitive environment, for instance, the decision-maker must forecast the behaviour of the competitors as well as the behaviour of the customers. If these forecasts are based on rather simplistic assumptions, the model may prove to be impractical. In this case, we must introduce consistent assumptions regarding the behaviour of all agents in the problem. Possible ways to achieve this are to study the locational process within the framework of competitive game theory [69, 73],
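The Monte-Carlo idea mentioned above can be sketched as follows. The cost model (a demand-weighted sum of site-to-depot distances) and all numbers are illustrative assumptions of ours, not taken from the cited work:

```python
import math
import random

def expected_cost(site, depots, sample_demands, dist, n_trials=2000, seed=0):
    """Monte-Carlo estimate of the expected transport cost of a fixed
    candidate site when depot demands are uncertain.
    sample_demands(rng) draws one demand vector (one weight per depot);
    the cost of a scenario is the demand-weighted sum of distances."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_trials):
        demands = sample_demands(rng)
        total += sum(w * dist(site, d) for w, d in zip(demands, depots))
    return total / n_trials

euclid = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])

# Compare two candidate sites under uniformly distributed demands:
depots = [(1, 0), (0, 1)]
noisy = lambda rng: [rng.uniform(0, 2), rng.uniform(0, 2)]
print(expected_cost((0, 0), depots, noisy, euclid))
print(expected_cost((2, 2), depots, noisy, euclid))
```

Such a simulation ranks candidate sites by expected cost rather than by their cost in a single nominal scenario.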


introduce utility functions to model the reaction of the agents [52, 100], and see the overall locational process as a multi-period optimization problem [82]. Another interesting development of our discipline is to consider models that are able to react to breaks in the optimal planning. Consider, for instance, the problem of locating a mobile unit to cope with certain emergencies. In some cases, certain data items may differ significantly from their expected values or may not be available early enough for an "optimal" location to be chosen. In such cases one needs a flexible rule to cope with certain changes in the data. Some early studies suggest that a useful tool for real-life routing problems can be neural networks (see [20, 116, 129]). It would be interesting to investigate whether certain location problems lend themselves to this kind of approach.

2.3 Are the objective functions realistic?

In many realistic situations the classic objective functions are not really appropriate. The standard efficiency measure is the minsum objective (minimize the sum of all distances or costs), whereas the standard equity measure is the minmax objective (minimize the maximum distance). However, there are many circumstances where these objectives are not applicable. Certain locations may interact with each other, for instance, in which case the linearity of the model is broken (see [19]). In other cases it is required that the final decision not be strongly correlated with the location of a particular agent. Finally, there may be cases where the chosen location is obtained by the simulation of a voting process [73, 76]. The very nature of locational decisions, which often involve several agents with conflicting objectives, suggests that multi-objective approaches are more appropriate for analysing complex situations (see [25, 34, 35]).

2.4 Further Developments

We conclude this set of remarks with a few comments about three fields of application of locational analysis which appear promising for new developments. To another subject, obnoxious facility location, we devote a separate section of this paper. Competition (see [16, 29, 52, 54, 42, 95]). It is very easy to understand that almost every economic decision involving facility location is made in a competitive market, and that in real life factories are relocated according to advantages in taxes, prices, wages, etc. In economic theory it is well known that very puzzling effects can be observed when firms compete in both price and location [40]. Therefore we must pay attention to the way we formulate the model since, generally speaking,


we may wish to optimize in such a way that the competitors cannot immediately react, thus reducing our profit! Path locations (see [34, 35, 136]). Path location problems concern the determination of a path connecting two nodes of a network. This problem arises in a large number of applications: oil pipelines, electric lines, waterworks, sewers and transportation lines. One of the main features of these problems is that the cost function includes two components: the path construction cost and the loss due to the inaccessibility of the network. Dealing with these two costs can lead to different formulations of the problem. It appears that multi-objective methods are more appropriate in this setting. Moreover, the models proposed so far cannot represent transportation networks realistically, because the users' demand is associated not with single nodes but with pairs of origin-destination nodes, since the path location and the user routes depend on each other. In this case, one or more concurrent transportation systems should be considered. In conclusion, our impression is that there is large room for improving the way models are formulated. This does not necessarily mean developing more complicated models trying to assess every detail, but rather developing new simple models which must be robust enough to cope with the complexities of reality.

3 Axioms, or: What is a good Distance Measure?

When one wants to model the location of some facilities, one must determine the set of facilities and the interactions between its elements. But some difficulties appear when one wants to evaluate these interactions (deviation, travel time, travel cost, ...). The following questions arise naturally. Are distances between two facilities what we have in mind, or what is really measured? Do users and servers have the same evaluation of a given deviation (a question of opportunity)? From two metrics d1 and d2, two different evaluations can appear, in the sense that d1(x, y) <= d1(y, z) while d2(x, y) > d2(y, z). So how can we define a good evaluation tool which takes into account the peculiarities of all interactions? The choice that we make determines the structure of the problem, and "... what is chosen as the distances between two points determines the geometry that results" [90]. According to the space we use (discrete, network, continuous), the evaluating function may have different properties. The formalization of such functions requires some "natural" properties. So, from a mathematical point of view, we need some assumptions in order to


have a good interaction evaluation tool. For a given set X (in which we want to locate), we define a distance (or metric) on X as a function d which satisfies the following four conditions. For any elements x, y and z in X:

(p1) d(x, y) >= 0                       (non-negativity)
(p2) d(x, y) = 0 if and only if x = y   (definiteness)
(p3) d(x, y) = d(y, x)                  (symmetry)
(p4) d(x, y) + d(y, z) >= d(x, z)       (triangle inequality)

Now we look at some familiar examples of evaluation functions. In continuous location, the most usual evaluation functions are Minkowski's distances (l_p distances), which are defined in the plane by

    l_p(x, y) = (|x1 - y1|^p + |x2 - y2|^p)^(1/p),   1 <= p < infinity.

Within the l_p family, the most frequently used distance measures are the Euclidean distance (p = 2) and the rectilinear (or Manhattan) distance (p = 1). There are also polyhedral distances, for which a finite number of permitted moving directions is given (see [145]). Such distances are used, for example, in spatial analysis to obtain approximate evaluations in transportation networks. Without property (p3), d is a quasi-distance. For example, the shortest path between two nodes a and b defines a quasi-distance d in an oriented graph (see [85, 74]). There are other kinds of evaluation functions (see [79]):
- if (p2) fails, then the function d is a semi-distance (or pseudo-distance);
- if d satisfies (p1), (p4) and the weaker property (p5), i.e. d(x, y) = 0 if x = y, then d is a weak distance.
We will not discuss the applicability of these conditions; our purpose here is just to show in what way one can formalize an evaluation function. The technical tools used to solve a problem are consequences of its structure and are therefore closely linked to the distance function. Thus computational techniques (algorithms, heuristics, simulations, ...) and the use of convex analysis (see [112]) depend directly on the selected metric. The structure and stability of the solution set (see [51, 50]) and coincidence conditions (in multi-facility location, see [126, 114]) are also affected by the choice one makes.
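To make the axioms concrete, here is a small self-contained check of our own: it implements the planar l_p distance and verifies properties (p1) to (p4) on a finite sample of points (a finite check is of course only a sanity test, not a proof):

```python
import itertools
import math

def lp_dist(x, y, p):
    """Minkowski (l_p) distance in the plane, 1 <= p < infinity."""
    return (abs(x[0] - y[0]) ** p + abs(x[1] - y[1]) ** p) ** (1.0 / p)

def is_metric(d, points, tol=1e-9):
    """Check the four metric axioms (p1)-(p4) on every triple drawn
    from a finite point sample."""
    for x, y, z in itertools.product(points, repeat=3):
        if d(x, y) < -tol:                      # (p1) non-negativity
            return False
        if (d(x, y) < tol) != (x == y):         # (p2) definiteness
            return False
        if abs(d(x, y) - d(y, x)) > tol:        # (p3) symmetry
            return False
        if d(x, y) + d(y, z) + tol < d(x, z):   # (p4) triangle inequality
            return False
    return True

pts = [(0, 0), (1, 0), (0, 1), (2, 3)]
print(is_metric(lambda a, b: lp_dist(a, b, 2), pts))   # Euclidean: a metric
# Squared Euclidean violates (p4), so it is not a metric:
print(is_metric(lambda a, b: lp_dist(a, b, 2) ** 2, [(0, 0), (1, 0), (2, 0)]))
```

The same harness can be pointed at a quasi-distance or weak distance to see exactly which axiom is dropped.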


4 Is there a gap between network, discrete and continuous Location Theory?

At first sight, these three areas do not have much in common besides the fact that in each of them location problems are solved. Completely different types of algorithms, and also completely different naming schemes, are witnesses to some gaps. But looking under the surface, we discover some bridges over the gaps. The key concept which allows us to construct these bridges is the underlying assumption of (nearly) all location problems that the goodness of a chosen location has something to do with the distance one has to cover to reach this location. Mathematically speaking, we have to look for the (different?) metric spaces behind the different areas of Location Theory, since only metric spaces allow us to use the word "distance" in a well-defined way. (As a remark for the mathematicians: in this part of the survey we do not distinguish between symmetric, asymmetric or weak metrics, to keep things easy!) In continuous Location Theory a great variety of distance measures is available (see, for example, [146, 51, 14, 91, 59, 120, 127, 138, 112, 119]). Some work has also been done on the applicability of the different concepts (see, for example, [142, 15, 10, 11, 13, 9, 101, 103]). One of the problems here is that the classes of metrics considered are mostly derived from norms or gauges (for a survey see [128]), and therefore only a small subclass of metrics is covered. In network location we somehow determine the way to go between two nodes in the network (at least, we assume this data to be given) and define the distance between any two nodes to be the length of a shortest path in the network. The next result justifies the usage of the word "distance":

Theorem 4.1 (see [70]) There is a one-to-one correspondence between finite metric spaces and undirected networks with positive edge lengths.
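A minimal sketch of our own (not from [70]) of the easy direction: an undirected network with positive edge lengths induces a finite metric space via shortest-path distances, computed here with Floyd-Warshall:

```python
def network_distances(n, edges):
    """All-pairs shortest-path distances (Floyd-Warshall) for an undirected
    network on nodes 0..n-1 with positive edge lengths. The resulting
    matrix defines a finite metric space in the sense of Theorem 4.1,
    provided the network is connected."""
    INF = float("inf")
    d = [[0.0 if i == j else INF for j in range(n)] for i in range(n)]
    for u, v, w in edges:
        d[u][v] = min(d[u][v], w)   # undirected: set both directions
        d[v][u] = min(d[v][u], w)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

# A path 0-1-2 plus a long shortcut edge that the shortest path avoids:
print(network_distances(3, [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 5.0)]))
```

Non-negativity, symmetry and the triangle inequality hold by construction; for directed networks one obtains the asymmetric (quasi-metric) variant mentioned below.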

This theorem can also be extended to the asymmetric case, where we have a correspondence to directed networks. At this point we can formulate the differences and similarities of continuous and network location with respect to distances: in both areas we are operating in a metric space, but in continuous location we choose the metric according to the setting, while in network location the metric space is automatically given by virtue of Theorem 4.1 and the definition of network distances (see also [18]). So far we have only discussed network location problems in which the new facilities and also the clients are restricted to the nodes of the network. Since


a lot of discrete location problems are derived from such a structure, the discussion above carries over to discrete location problems. Therefore we have established a formal relationship between all three areas. In the following we extend our discussion to absolute network location problems, where location is also allowed on the edges of the network. When locating on edges is allowed, the same problem as in continuous location arises: what distance definition should be used for two points in the interior of an edge? Some alternatives are discussed in [96] (see also the references therein). Usually one assumes linear behaviour of the distance, which yields a very strong connection to continuous location problems with polyhedral gauges or polyhedral norms (block norms). This is the case since for every continuous location problem with polyhedral gauges we can find an equivalent (up to alternative optima) absolute network location problem. The converse is not true, since we can always construct networks which rely only on metric spaces, and these metrics are not necessarily generated by gauges (see Theorem 4.1). But for real-life problems it would be very interesting to have some mathematical error bounds. Empirical tests have been done by [142] and recently by [103]. An application of this discussion can be found in [7], where a continuous location problem with barriers under the Manhattan metric is transformed into a network location problem. If we look at trees, the relationship becomes even tighter, because concepts like convexity carry over [41, 104]. As a conclusion we can state the following: one important challenge in Location Theory is to make the crossing between the different areas discussed more transparent and algorithmically practicable. It is easy to imagine a scenario where we get some continuous data from a Geographical Information System (GIS). To handle that data, we convert it to some network-based problem. Maybe we even try to make a discrete model out of it. But finally we have to interpret our solution in the original continuous space. This process includes some very hard problems, such as:
- When is it better to formulate a given real-world problem in a discrete space, and when should one use a network or a continuous formulation?
- How can we handle spatial requirements, and how can we transform them from continuous to discrete problems?
- How can we handle barriers and forbidden regions given in continuous space on networks? (I.e., how should we map these geometrical objects onto a network structure?)
- How can we approximate a network distance by some continuous-space distance, and vice versa?


- How can we handle inhomogeneous distances (see the section 'Inhomogeneous Distance Measures' below)?

Given the current state of continuous, network and discrete Location Theory, the first problem is certainly the most important one. The other problems include some well-known research problems in combinatorial optimization, such as Steiner trees or, more generally, optimal realizations of metrics on networks (see, for example, [144, 135, 44]).

5 Dynamic Change

It seems that there is only one part of Location Theory in which time-dependent decision making naturally comes into play: competitive Location Theory (see, e.g., [123, 53, 117]). There, two or more players try to locate their facilities (firms in the market) in such a way that their overall gain is maximized. Time dependencies result directly from the competitive environment, since in many problem formulations the players play against each other and alternate their moves. This part of locational analysis is closely connected to game theory and voting theory [106, 143]. For example, it can be shown that in some metric spaces there is a close connection between Condorcet points, Simpson points and Fermat points [94, 48, 117]. Apart from that, dynamic location is mostly used in scenarios where there is not enough money to build all facilities at the same time [137], in which a fixed time period for using a facility is known in advance, or in which the demand [130] or the type and cost of the interactions change over time. Of course, it is easy to introduce time-dependent parameters in all problem formulations and to globalize the resulting family of ordinary problems by way of a globalizing function. Another idea is to make not only the parameters dynamic, but also the placement process. This can be done, for example, by introducing time-dependent placement or opening costs (and, likewise, costs for closing facilities). Problems of this type have already been formulated in discrete Location Theory, where the resulting problem is most often a mixed integer linear program (MILP). The main difficulty which arises is the number of unknowns, since in many problem formulations the dimension of the static (time-independent) problem is multiplied by the number of time steps in the new model. This means that standard techniques of combinatorial optimization may not be helpful here. One way of tackling these problems consists of generalizing an algorithm for the static case so that it can deal with the dynamic case too [133, 139]. Other authors [117, Chapter 4] considered heuristics based on dynamic programming. But up to now it cannot be said that dynamic location problems have introduced new kinds of distance measures.
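As a toy illustration of the growth in problem size just described (the cost structure and all numbers are our own assumptions, not from the cited papers), consider a dynamic single-facility problem solved by enumeration; the decision space has one site choice per period, so it grows as (number of sites) to the power (number of periods):

```python
from itertools import product

def dynamic_single_facility(dist, open_cost, move_cost):
    """Brute-force dynamic single-facility location: in each period t,
    exactly one site is open. Cost = per-period service cost dist[t][site]
    + an opening cost per distinct site ever used
    + a relocation penalty each time the open site changes.
    The plan space has n**T elements for n sites and T periods."""
    T, n = len(dist), len(dist[0])
    best = (float("inf"), None)
    for plan in product(range(n), repeat=T):
        cost = sum(dist[t][plan[t]] for t in range(T))
        cost += open_cost * len(set(plan))
        cost += move_cost * sum(plan[t] != plan[t - 1] for t in range(1, T))
        if cost < best[0]:
            best = (cost, plan)
    return best

# Two periods, two sites; demand shifts from site 0 to site 1 over time:
print(dynamic_single_facility([[1, 5], [5, 1]], open_cost=0.0, move_cost=1.0))
```

With a cheap relocation the facility follows the demand; raising `move_cost` makes the static compromise solution optimal, which is exactly the trade-off a time-expanded MILP would encode with T copies of the static variables.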


6 Inhomogeneous Distance Measures

In practice, one often has problems where the space in which the location takes place is divided into several parts. These parts then have totally different distance measures if one tries to move through them. Modelling these types of problems requires inhomogeneous distance measures. Imagine a pedestrian who tries to find his or her way to a certain location. The distance measure employed changes abruptly depending on whether he or she goes through a forest, along a road, etc. This has severe consequences for the concept of the shortest path, which here is by no means the straight line between two points [62]. This is mostly a research topic in continuous Location Theory. In discrete and network Location Theory, inhomogeneous real-world problems can, to a certain extent, be modelled by discretizing the whole space and using specially tailored distances for different pairs of points, or networks of a specific kind. But this kind of formulation works for small-scale problems only. Therefore, continuous Location Theory has to cope with problems in which the locational space (the space in which the location takes place) is inhomogeneous. To a very small extent, one can model the inhomogeneity by using different distance functions for different demand points. Various authors have already addressed these problem types [23, 124, 113, 80, 49, 126]. With these models one can take into account the fact that different demand points may use different transportation systems. It is even possible to use different non-symmetric distance measures. But a general distance function, which takes as arguments two points and computes how much time (or money) it takes to go from one point to the other, is only non-negative and continuous, and certainly not much more. In reality, the symmetry and the positive homogeneity of norms are seldom encountered. This implies that location problems in inhomogeneous spaces have much less structure and are therefore much harder to solve. For example, one cannot expect to have convex problem formulations in inhomogeneous spaces. Up to now, the work done in this field centres around the special case of problems with barriers to travel (see, e.g., [84, 99, 140, 2, 88]). There is certainly more research needed on this topic.
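The kind of inhomogeneity described above can be sketched by numerically integrating a position-dependent speed along a path. The two-region speed field (fast "road", slow "forest") is a made-up example of ours:

```python
import math

def travel_time(a, b, speed_of, samples=1000):
    """Approximate travel time along the straight segment from a to b when
    the speed depends on position (an inhomogeneous travel cost).
    speed_of maps a point to a positive speed; we integrate numerically
    with the midpoint rule. Note that in such a medium the straight
    segment is generally NOT the quickest path."""
    (ax, ay), (bx, by) = a, b
    seg_len = math.hypot(bx - ax, by - ay)
    step = seg_len / samples
    t = 0.0
    for i in range(samples):
        s = (i + 0.5) / samples                      # midpoint of sub-segment
        x, y = ax + s * (bx - ax), ay + s * (by - ay)
        t += step / speed_of((x, y))
    return t

# Hypothetical medium: fast on the road (y >= 0), slow in the forest (y < 0).
speed = lambda p: 5.0 if p[1] >= 0 else 1.0

print(travel_time((0, -1), (4, -1), speed))   # straight through the forest
```

A detour via the road, e.g. (0,-1) to (0,0) to (4,0) to (4,-1), takes less time than the straight forest segment, which is exactly why shortest paths in inhomogeneous media bend, and why the resulting "distance" loses the structure of a norm.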

7 The structure of the demand

Suppose that you want to locate a warehouse that must supply some product to a set of depots. How can one model this problem? There are several questions to answer before selecting the model. Here we address the question of how one models the demand, and we try to answer this question independently of the others. Consider the problem of locating a warehouse. In this case the product


must be carried from the warehouse to the depots. At this point we assume that the locations of the depots are fixed over time. Under this assumption, each depot can be considered as a point. Thus the distances involved in the problem are the distances from the warehouse to the depots. This kind of problem appears in many situations, such as the distribution of goods, computer networks, communication networks, etc. But this model is a simple one. The depots are not the final destination of the product, which must be rerouted from them. The usual situation is that there are customers who need the product and get it from one of the depots. Thus a second stage of the problem arises. Sometimes we can ignore this second problem if, for instance, the cost of the local travel has little weight in the total cost. However, this is not the general case. Let us now analyze the second problem. Consider the problem of locating only one depot. The customers must travel between their houses and the depot. Frequently, the number of customers is too large to consider each customer as a demand point. The most usual approximation is the aggregation of the demand around a small set of points and the assumption that the customers are located only at these points. But, strictly speaking, this is a new problem, a districting problem, which is usually solved in a heuristic manner (see [102]). In this model the distances between the customers and the depot, the only distances involved in the model, are not the real distances that the customers must travel. There are also reasons to look at the problem from a different point of view, including the uncertainty of the demand, stochastic demand and others; these situations lead us to consider expected distances between the depot and the customers. Recently this approach has received increasing attention in the literature (see, for example, [45, 32]). Considering now the problem of locating both the warehouse and the depots, we must address a new problem, the hierarchical location problem. This is not an exhaustive description of possible types of demand. Issues we have not discussed include hierarchical location with multiple levels, the location of multiple servers with interactions between them, the queue location problem, etc. We have only tried to show that the structure of the demand should receive greater attention than usual. We believe that quite often thinking about the structure of the demand is the same as thinking about the structure of the location problem itself.
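The aggregation step described above can be sketched as snapping customers to grid cells, a deliberately crude stand-in of ours for a real districting method: each customer is replaced by the centre of its cell, and the cell's weight counts the customers in it.

```python
from collections import defaultdict

def aggregate_demand(customers, cell=1.0):
    """Aggregate individual customers onto a coarse grid: each customer
    (x, y) is snapped to the centre of its grid cell, and the returned
    dict maps cell centres to the number of customers aggregated there.
    The snapping error is exactly the distance discrepancy the text warns
    about: aggregated distances are not the real customer distances."""
    weights = defaultdict(int)
    for x, y in customers:
        cx = (int(x // cell) + 0.5) * cell
        cy = (int(y // cell) + 0.5) * cell
        weights[(cx, cy)] += 1
    return dict(weights)

# Three customers collapse into two weighted demand points:
print(aggregate_demand([(0.1, 0.2), (0.7, 0.9), (1.2, 0.3)], cell=1.0))
```

A location model would then be solved over the few weighted cell centres instead of the full customer set, trading accuracy for tractability.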

8 Undesirable Facility Location

The term "facility" implies that a particular establishment offers some kind of service to a certain group of people (customers). Hence, in the context of


facility location, it can be argued that all facilities are necessary because of the service they provide. If a facility does not have any signi cant detrimental e ects to any inpidual, it is usually referred to as an ordinary facility. On the other hand, a facility that has considerable adverse e ects on its environment or its customers is known as an undesirable facility. Examples of such facilities include chemical factories, nuclear power plants, dump sites for waste disposal, etc. Their adverse e ects may be noise, pollution, potential for serious accidents etc. Until the early 1980's undesirable facility location had received very little attention by operational researchers. It is interesting that at that time only 2% of the location literature discussed models applicable to the location of undesirable facilities (see 56]). It can be argued that the main reasons for this are the complex issues involved in undesirable facility location (more than one objective, several interested parties, etc.) and also the fact that these facilities are mainly the by-products of industrialisation and increased use of technology whereas ordinary facilities have always been with us (see 12]). It is interesting to observe the development over the years of solution approaches to undesirable facility location problems. Most of the earlier methods are single objective models; they assume that the detrimental e ects of an undesirable facility on a customer are a decreasing function of the distance between them. Consequently, these e ects can be minimized if the facilities to be located are placed as far away as possible from the customers subject to constraints that prevent location at in nity. A very

comprehensive review of distance maximization models for undesirable facility location can be found in [56]. Most of these early approaches address the problem of locating a single undesirable facility under the maximin criterion. A variety of methods have been developed depending on the problem setting. When Euclidean distances are used, the most common approach seems to be the enumeration of all possible optima based on some analytical properties of the optimal solution (see [39, 108, 110]). This basic idea can be extended to rectilinear distances (see [46]). However, the dominant approach in this case is one of dividing the feasible region into rectangular areas and solving a sequence of LPs (see [46, 109, 107]). The single facility maxisum problem has not been as popular; only a limited number of papers on this problem have appeared. Both network and planar problems are solved using analytical results that limit the search to a subset of the feasible region (see [24, 115, 72]). The majority of the early literature on multi-facility problems addresses the p-dispersion problem, where there are no customers and the objective is to maximize the distance between any two facilities to be located (see [21, 93, 55]). This lack of research on multi-facility location problems with maximization objectives appears to be a result of their complexity, since even


the simplest of these problems are NP-complete (see [56]). Following the rapid technological advances in hardware and software, research efforts have shifted towards more realistic formulations over the last 7-8 years. Clearly, models where distances are measured in the Euclidean or rectilinear metric are not adequate for most real life problems. The spread of pollution, for instance, is usually not symmetric; in the case of prevalent winds, pollution may spread further in one direction than in others, and this can vary from time to time. An algorithmic scheme for locating facilities causing airborne pollution in a network, given a detailed description of the meteorological conditions, is developed by Karkazis and Boffey [83]. A further step towards more realistic models is a graphical approach for locating an undesirable facility in the plane using general asymmetric distances, developed by Buchanan and Wesolowsky [17]. A geometrical branch-and-bound algorithm, devised by Hansen et al. [72], has been extended by Plastria [125] for general single-facility problems, with distances measured by mixed norms and as an objective any continuous function of the distances. The method caters for forbidden regions (such as rivers or lakes) and is proposed as a site generation tool, to identify several candidate locations which can then be assessed on the basis of whatever criteria the decision maker is interested in. This leads us to another recent development in undesirable facility location: the use of more than one objective. Examples of multi-objective models include the ones developed by Erkut and Neuman [57] and Giannikos [63]. The most common objectives considered are the minimization of cost, the maximization of distance between facilities and customers, and the equitable treatment of customers, i.e. the equitable distribution of the disutility imposed by the facilities. In fact, the issue of equity is of major importance in undesirable facility location, since the effects caused by such a facility could be quite serious and no customer would be prepared to suffer a disproportionate share of these effects. A review of several alternative equity measures (range, maximin, etc.) and a discussion of their implications in facility location can be found in [105]. A problem closely linked to undesirable facility location is that of transporting hazardous materials or wastes and locating undesirable facilities to treat or dispose of them. Although instances of the problem are becoming increasingly common in modern societies, we do not feel it has received enough attention in the literature. Some multi-objective methods can be found in [150, 132, 149]. These models locate several undesirable facilities and select shipment routes along which the hazardous material/waste is transported; some of the objectives they consider are the minimization of total cost, the minimization of total risk due to the transportation of the material/waste, the minimization of some equity measure (e.g. maximum individual risk),


etc. Risk at a customer (population centre) is expressed as the amount of hazardous material/waste multiplied by the weight of the customer, or is given as the probability of an accident occurring multiplied by the effects of that accident. In our opinion, more emphasis should be given to the estimation of these probability functions, taking into account the geographical terrain, the type of vehicles used, the chemical properties of the hazardous substance, etc. In addition, more work should be done on the routing aspect of the problem, i.e. which vehicle should transport which quantity at which time. In general, we feel that if mathematical models are to be used in realistic applications, there must be greater cooperation between operational researchers and scientists from other disciplines (physics, chemistry, economics, etc.). The next step forward should be to incorporate into the location literature the specialist knowledge available in these disciplines regarding the spread of pollutants, the economic implications of locational decisions, etc. Another issue that must be addressed in locational analysis is uncertainty. Most existing methods assume that the parameters of the problem do not change through time. Especially in undesirable facility location this may not be the case; the amount of pollutants that need to be further processed or disposed of might change significantly depending on certain technological developments, relevant legislation might also change, etc. We think these factors should be taken into consideration in realistic models to avoid accepting solutions that may be satisfactory today but not in the future. In our opinion, undesirable facility location appears to be the area within location analysis where continuous models can be "reconciled" with discrete models. In the initial stage of site selection, continuous models such as those presented by Plastria [125] and Buchanan and Wesolowsky [17] seem to be more appropriate for identifying potential solutions that would be acceptable in terms of their consequences on the customers or their distance from them. On the other hand, discrete models appear more appropriate for selecting the final solution(s) among the potential ones. As far as solution methods are concerned, we are inclined to agree with Boffey and Karkazis [12], who argue that the development of realistic models should not be deterred by the possible difficulty of solving them exactly. They propose the use of modern heuristic methods (tabu search, simulated annealing, etc.). To the best of our knowledge, the use of modern heuristics in undesirable facility location is limited at the moment. Clearly, the location of undesirable facilities is a sensitive issue in modern society. We certainly feel that operational research methods can play a useful role as a decision making aid. In some cases they can even stimulate the development of new technologies, as exemplified by Wyman and Kuby [147], who demonstrated to the industry that several small units using photolytic


processes were better than a single large incinerator for processing some toxic chemicals in Phoenix, USA.
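The maximin models reviewed earlier in this section can be illustrated with a minimal sketch: a brute-force grid search for a single undesirable facility over a bounded rectangle. The customer coordinates, region bounds and grid resolution below are all invented for illustration; the papers cited above use far more efficient analytical enumeration rather than exhaustive search.

```python
import math

# Hypothetical customer (population centre) coordinates.
customers = [(1.0, 1.0), (4.0, 2.0), (2.5, 4.0)]

def maximin_site(customers, xmax=5.0, ymax=5.0, steps=100):
    """Brute-force grid search over the rectangle [0, xmax] x [0, ymax]
    for the site maximizing the minimum Euclidean distance to any
    customer; the bounded region prevents 'location at infinity'."""
    best_site, best_val = None, -1.0
    for i in range(steps + 1):
        for j in range(steps + 1):
            x, y = xmax * i / steps, ymax * j / steps
            d = min(math.dist((x, y), c) for c in customers)
            if d > best_val:
                best_val, best_site = d, (x, y)
    return best_site, best_val

site, dist = maximin_site(customers)
# For this toy instance the grid optimum lies at the corner (0, 5).
```

The cost of such enumeration grows quadratically with the grid resolution, which is exactly why the analytical approaches cited above (e.g. [39, 108, 110]) restrict attention to a small candidate set of possible optima instead.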

9 Heuristic Techniques

Several problems in Locational Analysis are NP-hard and cannot be solved exactly by algorithms that have polynomial complexity. Even some of the most well known problems in our discipline, such as the k-centre, the plant location or the p-median problem, fall into this category. These problems can be looked at from a combinatorial optimization point of view and can be addressed using complete enumeration techniques. However, these techniques are not practical for realistic instances of the problem, since they will not generate an optimal solution within reasonable time. It is for these problems that heuristic methods must be used to find satisfactory solutions, which could, however, be some way from the optimal solution of the problem in question. The term "heuristic" derives from the Greek "heuriskein", meaning to find or discover. A heuristic algorithm can be defined as follows [131]: A heuristic is a technique which seeks good (i.e. near-optimal) solutions at a reasonable computational cost without being able to guarantee either feasibility or optimality, or even in many cases to state how close to optimality a particular solution is. In this section we discuss some heuristic techniques which, in our view, are of great importance in Locational Analysis. These are known as "metaheuristic" algorithms because they are given in a general way and can be applied to a wide range of problems. We give a brief outline of each algorithm and discuss some of their applications in location problems.

9.1 Simulated Annealing

The use of simulated annealing as a technique for discrete optimization dates back to the early 1980s. The ideas that form the basis of simulated annealing were first published by Metropolis et al. [111] in 1953. Thirty years later, Kirkpatrick et al. [87] suggested that this type of simulation could be used to search the feasible solutions of an optimization problem, with the objective of converging with probability 1 to an optimal solution. This approach can be regarded as a variant of the well known heuristic technique of local (neighbourhood) search, in which a subset of the feasible solutions is explored by repeatedly moving from the current solution to a neighbouring solution. For a minimization problem, the traditional forms of local search employ a descent strategy, in which the search always moves in a direction of improvement. However, such a strategy often results in convergence to a local rather than a global optimum. The solutions obtained by these descent strategies are totally dependent on the starting solution(s) employed. In simulated annealing, uphill moves are allowed, but their frequency is governed by a probability function which changes as the algorithm progresses. The inspiration for this form of control was Metropolis's work in statistical thermodynamics. The laws of thermodynamics state that at temperature t, the probability of an increase in energy of magnitude ΔE is given by p(ΔE) = exp(−ΔE/kt), where k is a physical constant known as Boltzmann's constant [1, 98].

9.2 Tabu Search

Tabu search (TS) has its antecedents in methods designed to cross boundaries of feasibility or local optimality normally treated as barriers, and systematically to impose and release constraints to permit exploration of otherwise forbidden regions. Early examples of such procedures include heuristics based on surrogate constraint methods and cutting plane approaches that systematically violate feasibility conditions. The modern form of tabu search derives from Glover et al. [65]. Tabu search scarcely involves reference to supernatural or moral considerations, but instead is concerned with imposing restrictions to guide a search process to negotiate otherwise difficult regions. These restrictions operate in several forms, both by direct exclusion of certain search alternatives classed as "forbidden", and also by translation into modified evaluations and probabilities of selection. A fundamental element underlying tabu search is the use of flexible memory. From the standpoint of tabu search, flexible memory embodies the dual processes of creating and exploiting structures for taking advantage of history (hence combining the activities of acquiring and profiting from information). The memory structures of tabu search operate by reference to four principal dimensions, consisting of recency, frequency, quality, and influence. These dimensions in turn are set against a background of logical structure and connectivity.

9.3 Genetic Algorithms

From an Operational Research perspective, the idea of a genetic algorithm can be understood as the intelligent exploitation of a random search. Genetic Algorithms (GAs) were developed initially by Holland and his associates at the University of Michigan in the 1960s and 1970s, and the first full, systematic treatment was contained in Holland's book Adaptation in Natural and Artificial Systems (see [75]). The name Genetic Algorithm originates from the analogy between the representation of a complex structure by means of a vector of components and the idea, familiar to biologists, of the genetic structure of a chromosome.
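As a concrete illustration of the simulated annealing scheme of Section 9.1 applied to a location problem, the following sketch uses single-swap neighbourhood moves on a toy p-median instance. The instance data, parameter values (initial temperature, cooling rate) and move definition are all invented for illustration and are not taken from the papers cited above.

```python
import math
import random

random.seed(0)

# Hypothetical instance: choose p = 2 medians among n = 12 sites so that
# the total distance from every site to its nearest median is minimal.
n, p = 12, 2
pts = [(random.random(), random.random()) for _ in range(n)]

def cost(medians):
    """Total assignment cost of a set of medians (p-median objective)."""
    return sum(min(math.dist(c, pts[m]) for m in medians) for c in pts)

def anneal(t=1.0, cooling=0.995, iters=5000):
    """Uphill moves of size delta are accepted with probability
    exp(-delta / t), mirroring the Boltzmann factor p(dE) = exp(-dE/kt)."""
    current = random.sample(range(n), p)
    best = list(current)
    for _ in range(iters):
        # Neighbour: replace one chosen median by a currently unchosen site.
        candidate = current[:]
        candidate[random.randrange(p)] = random.choice(
            [j for j in range(n) if j not in current])
        delta = cost(candidate) - cost(current)
        if delta < 0 or random.random() < math.exp(-delta / t):
            current = candidate
            if cost(current) < cost(best):
                best = list(current)
        t *= cooling  # geometric cooling schedule
    return best, cost(best)

sol, val = anneal()
```

Early in the run the high temperature t makes almost every uphill move acceptable; as t decays geometrically the procedure behaves more and more like a pure descent, which is what allows it to escape the local optima that trap the descent strategies described above.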


In many applications, the component vector, or chromosome, is simply a string of 0s and 1s. Several genetic operators have been identified for manipulating these chromosomes, the most commonly used one being crossover (an exchange of sections of the parents' chromosomes). Continuing the genetic analogy, variables are often called genes, the possible values of a variable alleles, and the position of a variable in a string is called its locus. A further distinction is drawn, in genetics, between the chromosome as genotype, meaning the actual structure(s) (the coded string which is processed by the algorithm), and the phenotype, the physical expression of the structure(s) (the decoded set of parameters). A genetic algorithm works by maintaining a population of M chromosomes, potential parents whose fitness values have been calculated. Each chromosome encodes a solution to the problem, and its fitness value is related to the value of the objective function for that solution [66].

9.4 Artificial Neural Networks

An artificial neural network (ANN) is a computational paradigm that differs substantially from those based on the standard von Neumann architecture. Feature recognition ANNs generally learn from experience, instead of being expressly programmed with rules as in conventional artificial intelligence (AI). Feedback ANNs feel their way to good solutions, rather than explicitly exploring possibilities. The ANN is inspired by the structure of biological neural networks and their way of encoding and solving problems. Two main kinds of architecture exist, feed-forward and feed-back, with major application areas of feature recognition and optimization, respectively. The feed-forward ANN approach to classification problems constitutes, in a sense, an extension of conventional fitting and principal component analysis tools into the non-linear domain. This is in contrast to using feed-back ANNs to find solutions to difficult combinatorial optimization problems. There are two families of ANN algorithms for optimization problems: (1) the pure neural approach based on either binary or multistage neurons for the dynamics; (2) deformable templates, where the neural degrees of freedom have been integrated out and one is left with coordinates for possible solutions.

9.5 Lagrangean Relaxation

Lagrangean relaxation was developed in the early 1970s with the pioneering work of Held and Karp on the travelling salesman problem and is today an indispensable technique for generating lower bounds for use in algorithms to solve combinatorial optimization problems. We define the Lagrangean relaxation of problem (P) with respect to the constraint set Ax ≥ b by


introducing a Lagrange multiplier vector λ ≥ 0 which is attached to this constraint set and brought into the objective function to give:

minimize { cx + λ(b − Ax) }
subject to: Bx ≥ d, with x ∈ {0, 1}

What we have done here is: to choose some set of constraints in the problem for relaxation; and to attach Lagrange multipliers to these constraints in order to bring them into the objective function. The key point is that the program we are left with after the Lagrangean relaxation, for any λ ≥ 0, gives a lower bound on the optimal solution value of the original problem (P).
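The lower-bound property can be made visible on a small numerical sketch. The three-variable 0-1 instance below is entirely invented for illustration; the constraint set Ax ≥ b is relaxed with a fixed multiplier vector λ, and both the relaxed bound and the true optimum are obtained by enumeration.

```python
from itertools import product

# Hypothetical toy instance of (P): minimize c.x subject to
#   A.x >= b   (complicating constraints, to be relaxed)
#   B.x >= d   (kept constraints), x binary.
c = [4, 3, 5]
A, b = [[1, 1, 0], [0, 1, 1]], [1, 1]
B, d = [[1, 0, 1]], [1]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def feasible(x, M, rhs):
    return all(dot(row, x) >= r for row, r in zip(M, rhs))

# Optimal value of (P) by complete enumeration of the 2^3 binary vectors.
z_opt = min(dot(c, x) for x in product([0, 1], repeat=3)
            if feasible(x, A, b) and feasible(x, B, d))

def lagrangean_bound(lam):
    """L(lam) = min { c.x + lam.(b - A.x) : B.x >= d, x binary }.
    By weak duality, L(lam) <= z_opt for any lam >= 0."""
    return min(dot(c, x) + dot(lam, [bi - dot(row, x) for row, bi in zip(A, b)])
               for x in product([0, 1], repeat=3) if feasible(x, B, d))

bound = lagrangean_bound([2.0, 2.0])
# Here z_opt = 7 while the relaxed problem attains L(lam) = 5 <= 7.
```

In practice λ is not fixed but adjusted iteratively, e.g. by subgradient optimization, to tighten the bound; the enumeration above only serves to make the inequality L(λ) ≤ z(P) concrete.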

9.6 Applications in Locational Analysis

Having described briefly some meta-heuristic techniques, we now discuss some applications in location problems. Chardaire and Lutton [22] described an algorithm which optimizes the cost of telecommunication networks. The algorithm is based on the simulated annealing technique. Golden [67] used simulated annealing to solve routing and location problems. Tabu search is still in an early stage of development, with a substantial majority of its applications occurring only since 1989. Darko and Jadranka [37] developed a new heuristic method based on tabu search for the problem of locating p interacting hub facilities among n interacting nodes in a network. Klincewicz [89] considered the analogous problem. Crainic et al. [30] proposed a tabu search heuristic for the location/allocation problem with balancing requirements and compared it with an efficient dual ascent method. The results indicate that tabu search is a competitive approach for this class of problems. Most of the original applications of genetic algorithms were in areas closely linked to Artificial Intelligence. Recently, problems of interest to Operational Research have also been tackled. These include applications of Genetic Algorithms to discrete-space location problems, particularly the p-median by Hosage et al. [77], and a genetic procedure for the multi-period facility layout problem by Conway et al. [26]. We have not been able to find any references on the application of Neural Networks in location problems, although there are several applications in other areas [122, 121]. As far as we know, with respect to solution quality this heuristic method in general gives results comparable to those of other approximate approaches. Relaxation techniques for solving the Dynamic Capacitated Plant Location Problem have been suggested by Shulman [134]. Domich et al. [43] discussed a mathematical model that selects locations for Internal Revenue Service Posts-of-Duty. Barcelo et al. [5] presented a Lagrangean relaxation heuristic algorithm for the capacitated location problem. Guignard [68] studied the makespan minimizing problem in a single stage manufacturing process; the Lagrangean dual bound obtained is stronger than the LP relaxation bound.
