Günther R. Raidl
1 Introduction
Metaheuristics have proven to be highly useful for approximately solving difficult
optimization problems in practice. A general overview of this research area can
be found e.g. in [1]; for more information see also [2, 3]. The term metaheuristic
was first introduced by Glover [4]. Today, it refers to a broad class of algorithmic
concepts for optimization and problem solving, and its boundaries are somewhat
fuzzy. Voß [5] gives the following definition:
A metaheuristic is an iterative master process that guides and modifies the operations of subordinate heuristics to efficiently produce high-quality solutions. It may manipulate a complete (or incomplete) single solution or a collection of solutions at each iteration. The subordinate heuristics may be high (or low) level procedures, or a simple local search, or just a construction method.
According to Glover [2],
. . . these methods have over time also come to include any procedure for problem solving that employs a strategy for overcoming the trap of local optimality . . .
This work is supported by the European RTN ADONET under grant 504438.
grew up in relative isolation, following rather strictly the biologically oriented way of thinking. It is largely due to the no-free-lunch theorems [8] that this situation fortunately changed: it became clear that no general optimization strategy can be globally better than any other. In fact, solving a problem at hand most effectively almost always requires a specialized algorithm composed of adequate parts.
Several publications give taxonomies for hybrid metaheuristics or particular subcategories [9–14]. The following section merges the most important aspects of these classifications and extends them at some points; several examples of common hybridization strategies are also given. In Section 3, we turn to a unified view on metaheuristics by discussing the pool template. It helps to extract the specific characteristics of the individual classical metaheuristics and to interpret them as a toolbox of key components that can be combined in flexible ways to build an effective composite system. Section 4 presents a selection of highly promising possibilities for combining metaheuristics with algorithms from two other prominent research fields in combinatorial optimization, namely constraint programming and integer linear programming. Finally, conclusions are drawn in Section 5.
Figure 1 illustrates the various classes and properties by which we want to categorize hybrids of metaheuristics. Here, we combine aspects of the taxonomy introduced by Talbi [10] with the points of view of Cotta [9] and Blum et al.
[11]. Classifications with particular respect to parallel metaheuristics are partly
adopted from El-Abd and Kamel [14] and Cotta et al. [12] and with respect
to the hybridization of metaheuristics with exact optimization techniques from
Puchinger and Raidl [13].
We start by distinguishing what we hybridize, i.e. which kind of algorithms.
We might combine (a) different metaheuristic strategies, (b) metaheuristics with
certain algorithms specific for the problem we are considering, such as special
simulations, or (c) metaheuristics with other more general techniques coming
from fields like operations research (OR) and artificial intelligence (AI). Prominent examples of optimization methods from other fields that have been successfully combined with metaheuristics are exact approaches like branch-and-bound, dynamic programming, and various specific integer linear programming techniques on the one side, and soft computing techniques like neural networks and fuzzy logic on the other.
Besides this differentiation, previous taxonomies of hybrid metaheuristics [10, 9] primarily distinguish the level (or strength) at which the different algorithms are combined: high-level combinations in principle retain the individual identities of the original algorithms and cooperate over a relatively well-defined interface; there is no direct, strong relationship between the internal workings of the algorithms. In contrast, algorithms in low-level combinations strongly depend on each other.
– For example, in memetic algorithms [16], various kinds of local search are embedded in an evolutionary algorithm for locally improving candidate solutions obtained from variation operators.
– Very large scale neighborhood search (VLSN) approaches are another example [17]. They utilize certain exact techniques, such as dynamic programming, to efficiently find best solutions in specifically designed large neighborhoods within a local search based metaheuristic.
– Also, any decoder-based metaheuristic, in which a master algorithm acts on an implicit or incomplete representation of candidate solutions and a decoder is used to obtain corresponding actual solutions, falls into this category. Such a decoder can be virtually any kind of algorithm, ranging from a simple problem-specific heuristic to sophisticated exact optimization techniques or other
OR/AI methods. For example, in the cutting and packing domain, a common approach is to represent a candidate solution as a permutation of the items to be cut out or packed; an actual solution is then derived by processing the items in the given order with a more or less sophisticated assignment heuristic, see e.g. [18]. Weight-coding [19] and problem space
search [20] are further examples of indirect, relatively generally applicable
representations based on decoders.
– Merging solutions: In population-based methods such as evolutionary algorithms or scatter search, a traditional variation operator is recombination.
It derives a new solution by combining features of two (or more) parent solutions. Especially in classical genetic algorithms, this operator is based on purely random decisions and therefore works without exploiting any problem-specific knowledge. Occasionally, this procedure is replaced by more powerful algorithms such as path relinking [21], or by exact techniques based on branch-and-bound or integer linear programming that identify the best combination of parental features, see e.g. [22, 23].
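Two of the strategies listed above, namely embedding local search in an evolutionary algorithm (memetic algorithms) and decoder-based representations, can be illustrated together in a small sketch. The bin packing toy problem, the first-fit decoder, and all function names below are our own illustrative choices, not code from any of the cited works:

```python
import random

def first_fit_decode(perm, sizes, capacity):
    """Decoder: turn the indirect representation (an item permutation)
    into an actual packing via first-fit; returns the number of bins."""
    bins = []  # remaining capacity per opened bin
    for i in perm:
        for b in range(len(bins)):
            if bins[b] >= sizes[i]:
                bins[b] -= sizes[i]
                break
        else:
            bins.append(capacity - sizes[i])
    return len(bins)

def order_crossover(p1, p2):
    """OX recombination: keep a slice of p1, fill the rest in p2's order."""
    n = len(p1)
    a, b = sorted(random.sample(range(n), 2))
    seg = set(p1[a:b])
    fill = [g for g in p2 if g not in seg]
    child = [None] * n
    child[a:b] = p1[a:b]
    for i in list(range(a)) + list(range(b, n)):
        child[i] = fill.pop(0)
    return child

def local_search(perm, sizes, capacity, tries=50):
    """Embedded improvement step: first-improvement over random swaps."""
    best = first_fit_decode(perm, sizes, capacity)
    for _ in range(tries):
        i, j = random.sample(range(len(perm)), 2)
        perm[i], perm[j] = perm[j], perm[i]
        val = first_fit_decode(perm, sizes, capacity)
        if val < best:
            best = val
        else:
            perm[i], perm[j] = perm[j], perm[i]  # undo the swap
    return perm, best

def memetic(sizes, capacity, pop_size=20, generations=30, seed=0):
    """Evolutionary loop with local search applied to every new solution."""
    random.seed(seed)
    n = len(sizes)
    pop = [local_search(random.sample(range(n), n), sizes, capacity)
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda pf: pf[1])  # ascending by number of bins
        child = order_crossover(pop[0][0], pop[1][0])
        pop[-1] = local_search(child, sizes, capacity)  # replace the worst
    return min(pop, key=lambda pf: pf[1])

sizes = [4, 8, 1, 4, 2, 1, 7, 3, 6, 2]   # total size 38, capacity 10
best_perm, best_bins = memetic(sizes, capacity=10)
```

Note that with the first-fit decoder every permutation decodes to a feasible packing, so infeasibility never arises in the search space: a typical benefit of decoder-based representations.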
The success of all these hybrid metaheuristics tells us that it is usually a bad
idea to approach a given (combinatorial) optimization problem with a view that
is too restricted to a small (sub-)class of metaheuristics, at least when the primary goal is to solve the problem as well as possible. There is nothing wrong with the real-world analogies by which several metaheuristics, such as evolutionary algorithms, ant colony optimization, or simulated annealing, are explained or from which they were even derived. However, one should avoid focusing too strongly on such philosophies and thereby losing sight of the particular strengths and benefits of other algorithmic concepts.
Instead of perceiving the various well-known metaheuristics as relatively independent optimization frameworks and occasionally considering hybridization
A Unified View on Hybrid Metaheuristics
Fig. 2. The pool template from Voß [30, 31]. P : Pool; IF /OF : Input/Output Function;
IM : Improvement Method; SCM : Solution Combination Method.
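Read procedurally, the components named in the caption of Fig. 2 can be sketched as a generic loop. The following is a minimal illustration with a toy bit-matching objective; the skeleton and all names (pool_template, IM, SCM, the toy instantiation) are our own reading of the figure's components, not code from [30, 31]:

```python
import random

def pool_template(init, evaluate, IM, SCM, pool_size=10, iters=100, seed=0):
    """Generic skeleton: maintain a pool P; each iteration the input
    function selects solutions from P, the solution combination method
    (SCM) merges them, the improvement method (IM) refines the result,
    and the output function decides how P is updated."""
    random.seed(seed)
    P = [IM(init()) for _ in range(pool_size)]            # pool P
    for _ in range(iters):
        a, b = random.sample(P, 2)                        # input function IF
        candidate = IM(SCM(a, b))                         # SCM, then IM
        worst = max(range(pool_size), key=lambda i: evaluate(P[i]))
        if evaluate(candidate) < evaluate(P[worst]):      # output function OF
            P[worst] = candidate
    return min(P, key=evaluate)

# A toy instantiation: match a hidden bit pattern (minimization).
target = [1, 0, 1, 1, 0, 0, 1, 0]
evaluate = lambda x: sum(u != v for u, v in zip(x, target))
init = lambda: [random.randint(0, 1) for _ in target]

def IM(x):
    """Improvement method: flip each bit if that improves the solution."""
    x = x[:]
    for i in range(len(x)):
        y = x[:]
        y[i] ^= 1
        if evaluate(y) < evaluate(x):
            x = y
    return x

def SCM(a, b):
    """Solution combination method: uniform recombination."""
    return [random.choice(pair) for pair in zip(a, b)]

best = pool_template(init, evaluate, IM, SCM)
```

The toolbox view becomes tangible here: shrinking the pool to a single solution and dropping SCM yields a local-search-style method, while a larger pool with a recombining SCM yields an evolutionary scheme, all within one template.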
The previous section mainly addressed low-level hybrids between different types
of metaheuristics. Most existing hybrid metaheuristics probably fall into this
category. To some degree, the described point of view can also be extended towards hybrids of metaheuristics with other OR and AI techniques. In fact, it is often a bad idea to prematurely restrict the design choices for an optimization algorithm to metaheuristic techniques only.
In particular, constraint programming (CP) and integer linear programming (ILP) shall be mentioned here as two research fields with long histories and much success in solving difficult combinatorial optimization problems; see [35] and [36] for introductory textbooks to the two fields, respectively. Like metaheuristics, the methods from these fields have their specific advantages and limits. Especially in recent years, combinations of such techniques with metaheuristics have often proven extremely successful and promising.
An overview of hybrids of local search based approaches and constraint programming is given in [37]. Basic concepts include:
5 Conclusions
Manifold possibilities exist for hybridizing individual metaheuristics with each other and/or with algorithms from other fields. Numerous publications document the great success and benefits of such hybrids. Based on several previously suggested taxonomies, a unified classification and characterization of meaningful hybridization approaches has been presented. Especially with respect to low-level hybrids of different metaheuristics, a unified view based on a common pool template can be advantageous. It helps to make the different key components of existing metaheuristics explicit. We can then consider these key components as a toolbox and build an effective (hybrid) metaheuristic for a problem at hand
by selecting and combining the most appropriate components. This approach of
References
1. Blum, C., Roli, A.: Metaheuristics in combinatorial optimization: Overview and
conceptual comparison. ACM Computing Surveys 35(3) (2003) 268–308
2. Glover, F., Kochenberger, G.A., eds.: Handbook of Metaheuristics. Kluwer (2003)
3. Hoos, H.H., Stützle, T.: Stochastic Local Search. Morgan Kaufmann (2005)
4. Glover, F.: Future paths for integer programming and links to artificial intelligence.
Computers & Operations Research 13(5) (1986) 533–549
5. Voß, S., Martello, S., Osman, I.H., Roucairol, C., eds.: Meta-Heuristics: Advances and
Trends in Local Search Paradigms for Optimization. Kluwer, Boston (1999)
6. Blum, C., Roli, A., Sampels, M., eds.: Proceedings of the First International
Workshop on Hybrid Metaheuristics, Valencia, Spain (2004)
7. Blesa, M.J., Blum, C., Roli, A., Sampels, M., eds.: Hybrid Metaheuristics: Second
International Workshop. Volume 3636 of LNCS. (2005)
8. Wolpert, D., Macready, W.: No free lunch theorems for optimization. IEEE Trans-
actions on Evolutionary Computation 1(1) (1997) 67–82
9. Cotta, C.: A study of hybridisation techniques and their application to the design
of evolutionary algorithms. AI Communications 11(3–4) (1998) 223–224
10. Talbi, E.G.: A taxonomy of hybrid metaheuristics. Journal of Heuristics 8(5)
(2002) 541–565
11. Blum, C., Roli, A., Alba, E.: An introduction to metaheuristic techniques. In:
Parallel Metaheuristics, a New Class of Algorithms. John Wiley (2005) 3–42
12. Cotta, C., Talbi, E.G., Alba, E.: Parallel hybrid metaheuristics. In Alba, E., ed.:
Parallel Metaheuristics, a New Class of Algorithms. John Wiley (2005) 347–370
13. Puchinger, J., Raidl, G.R.: Combining metaheuristics and exact algorithms in
combinatorial optimization: A survey and classification. In: Proceedings of the First
International Work-Conference on the Interplay Between Natural and Artificial
Computation, Part II. Volume 3562 of LNCS., Springer (2005) 41–53
14. El-Abd, M., Kamel, M.: A taxonomy of cooperative search algorithms. In Blesa,
M.J., Blum, C., Roli, A., Sampels, M., eds.: Hybrid Metaheuristics: Second Inter-
national Workshop. Volume 3636 of LNCS., Springer (2005) 32–41
15. Alba, E., ed.: Parallel Metaheuristics, a New Class of Algorithms. John Wiley,
New Jersey (2005)
16. Moscato, P.: Memetic algorithms: A short introduction. In Corne, D., et al., eds.:
New Ideas in Optimization. McGraw Hill (1999) 219–234
17. Ahuja, R.K., Ergun, Ö., Orlin, J.B., Punnen, A.P.: A survey of very large-scale
neighborhood search techniques. Discrete Applied Mathematics 123(1-3) (2002)
75–102
18. Puchinger, J., Raidl, G.R.: Models and algorithms for three-stage two-dimensional
bin packing. European Journal of Operational Research, Feature Issue on Cutting
and Packing (to appear 2006)
19. Julstrom, B.A.: Strings of weights as chromosomes in genetic algorithms for combi-
natorial problems. In Alander, J.T., ed.: Proceedings of the Third Nordic Workshop
on Genetic Algorithms and their Applications. (1997) 33–48
20. Storer, R.H., Wu, S.D., Vaccari, R.: New search spaces for sequencing problems
with application to job-shop scheduling. Management Science 38 (1992) 1495–1509
21. Glover, F., Laguna, M., Martí, R.: Fundamentals of scatter search and path relinking.
Control and Cybernetics 29(3) (2000) 653–684
22. Applegate, D., Bixby, R., Chvátal, V., Cook, W.: On the solution of the traveling
salesman problem. Documenta Mathematica Vol. ICM III (1998) 645–656
23. Cotta, C., Troya, J.M.: Embedding branch and bound within evolutionary algo-
rithms. Applied Intelligence 18 (2003) 137–153
24. Goldberg, D.E.: Genetic Algorithms in Search, Optimization, and Machine Learning.
Addison-Wesley, Reading, MA (1989)
25. Talukdar, S., Baerentzen, L., Gove, A., de Souza, P.: Asynchronous teams: Cooperation
schemes for autonomous agents. Journal of Heuristics 4 (1998) 295–321
26. Talukdar, S., Murty, S., Akkiraju, R.: Asynchronous teams. In: Handbook of
Metaheuristics. Volume 57. Kluwer Academic Publishers (2003) 537–556
27. Denzinger, J., Offermann, T.: On cooperation between evolutionary algorithms
and other search paradigms. In: Proceedings of the Congress on Evolutionary
Computation 1999, IEEE Press (1999)
28. Vaessens, R., Aarts, E., Lenstra, J.: A local search template. In Männer, R.,
Manderick, B., eds.: Parallel Problem Solving from Nature, Elsevier (1992) 67–76
29. Calégari, P., Coray, G., Hertz, A., Kobler, D., Kuonen, P.: A taxonomy of evolu-
tionary algorithms in combinatorial optimization. Journal of Heuristics 5(2) (1999)
145–158
30. Greistorfer, P., Voß, S.: Controlled pool maintenance in combinatorial optimiza-
tion. In Rego, C., Alidaee, B., eds.: Metaheuristic Optimization via Memory
and Evolution – Tabu Search and Scatter Search. Volume 30 of Operations Re-
search/Computer Science Interfaces. Springer (2005) 382–424
31. Voß, S.: Hybridizing metaheuristics: The road to success in problem solving (2006)
Slides of an invited talk at the EvoCOP 2006, the 6th European Conference on
Evolutionary Computation in Combinatorial Optimization, Budapest, Hungary,
http://www.ads.tuwien.ac.at/evocop/Media:Invited-talk-EvoCOP2006-voss.pdf.
32. Fink, A., Voß, S.: HotFrame: A heuristic optimization framework. In Voß, S.,
Woodruff, D.L., eds.: Optimization Software Class Libraries. OR/CS Interfaces
Series. Kluwer Academic Publishers (2002)
33. Wagner, D.: Eine generische Bibliothek für Metaheuristiken und ihre Anwendung
auf das Quadratic Assignment Problem. Master’s thesis, Vienna University of
Technology, Institute of Computer Graphics and Algorithms (2005)
34. Voß, S., Woodruff, D.L., eds.: Optimization Software Class Libraries. OR/CS
Interfaces Series. Kluwer Academic Publishers (2002)
35. Marriott, K., Stuckey, P.: Programming with Constraints. The MIT Press (1998)
36. Nemhauser, G., Wolsey, L.: Integer and Combinatorial Optimization. John Wiley
(1988)
37. Focacci, F., Laburthe, F., Lodi, A.: Local search and constraint programming.
In: Handbook of Metaheuristics. Volume 57. Kluwer Academic Publishers (2003)
369–403
38. Puchinger, J., Raidl, G.R., Gruber, M.: Cooperating memetic and branch-and-cut
algorithms for solving the multidimensional knapsack problem. In: Proceedings of
MIC2005, the 6th Metaheuristics International Conference, Vienna, Austria (2005)
775–780
39. Fischetti, M., Lodi, A.: Local Branching. Mathematical Programming Series B 98
(2003) 23–47