
Lagrangean Relaxation: A Tutorial

Monique GUIGNARD
Department of OPIM The Wharton School University of Pennsylvania Philadelphia, PA 19104-6340
latest revision: February 2008

Preface
This tutorial presents the necessary background for understanding and designing efficient Lagrangean relaxations for integer programming problems. It suggests various ways in which Lagrangean relaxation can be applied and presents possible extensions. It reviews essential properties of the Lagrangean function, and describes several algorithms to solve Lagrangean duals. Lagrangean heuristics are an integral part of a Lagrangean approximation scheme, and may be ad-hoc or generic. A special effort has been made to give geometric interpretations whenever possible, and to illustrate the text with figures.

Keywords
integer programming, Lagrangean relaxation, Lagrangean dual, subgradient optimization, constraint generation, Lagrangean heuristic.

Introduction
The goal of this tutorial is to present what, at least in the opinion of the author, needs to be known in order to design and implement an efficient Lagrangean relaxation in integer programming. It does not pretend to be complete; in particular, no attempt has been made to include a comprehensive bibliography for any of the topics covered, as there are far too many papers describing, using or implementing Lagrangean relaxation. As far as the mathematical and conceptual background is concerned, however, it is hoped that a potential user will find enough ideas, suggesting possibly novel approaches, to be able to successfully complete his project. The references provided should only be considered as a rather sparse subset of trees in an ever-growing forest.

Lagrangean relaxation as a tool has been used in fields other than integer programming. In this presentation we limit our study to integer programming, even though some of the ideas and algorithms discussed apply as well, sometimes with minor modifications, in other areas. Our purpose is to review the theory behind Lagrangean relaxation, to explain how it can be applied, and to show how the resulting dual problems can be solved, so that a potential user can avoid some of the common pitfalls and can design a complete Lagrangean strategy for his complex integer programming problem.

The presentation starts with a reminder of what a relaxation is, followed by a basic introduction to Lagrangean relaxation for linear integer problems and its geometric interpretation. General ideas for splitting a problem before applying Lagrangean relaxation are then presented. We introduce the Integer Linearization Principle, because it should be detected when present in order to avoid unnecessary integer optimization. We then discuss those characteristics of the Lagrangean function which are important for the design of efficient optimization methods.
The next section concentrates on primal and dual methods to solve relaxation duals: the classical subgradient optimization and constraint generation methods, and a more recent, hybrid, approach, a two-phase dual method. Extensions of Lagrangean relaxation are then reviewed: Lagrangean decomposition and Lagrangean substitution. The presentation continues with the description of Lagrangean relaxation applied to two rather different examples. The last section is devoted to Lagrangean heuristics, which complement Lagrangean bounding by making an attempt at transforming infeasible Lagrangean solutions into good feasible solutions.

Notation
If (P) is an optimization problem, we use the following notation:
FS(P)  the set of feasible solutions of (P)
OS(P)  the set of optimal solutions of (P)
v(P)   the optimal value of (P)
Max    either Maximize (problem) or Maximum (value) (see context)
Min    either Minimize (problem) or Minimum (value) (see context)

Definition of Relaxations for Optimization Problems (Geoffrion 1974)


Consider an optimization problem (P) Max {f(x) | x ∈ X} and a problem (RP) Max {g(x) | x ∈ Y}. We say that (RP) is a relaxation of (P) if
(i) Y ⊇ X, and
(ii) ∀x ∈ X, g(x) ≥ f(x).
It follows that v(RP) ≥ v(P).

Lagrangean Relaxation For Linear Integer Problems (Held and Karp 1970)
Consider the following optimization problem:

(P) Max_x {fx | Ax ≤ b, Cx ≤ d, x ∈ X},

where Ax ≤ b are the complicating constraints, Cx ≤ d are the constraints to keep, and x ∈ X carries the integrality constraints,

in which some constraints are complicating, in the sense that one would be able to solve the same integer programming problem had these constraints not been present: Max_x {fx | Cx ≤ d, x ∈ X}. One can take advantage of this situation by constructing a so-called Lagrangean relaxation of (P) in the following way. Let λ ≥ 0 be a vector of multipliers, and let (LR_λ) be the problem

(LR_λ) Max_x {fx + λ(b − Ax) | Cx ≤ d, x ∈ X}.

(LR_λ) is a relaxation of (P), since
(i) FS(LR_λ) ⊇ FS(P), and
(ii) ∀x ∈ FS(P), fx + λ(b − Ax) ≥ fx,
therefore v(LR_λ) ≥ v(P), for all λ ≥ 0.

One will call an optimal solution of (LR_λ) a Lagrangean solution. The optimal value v(LR_λ) is an upper bound on the (unknown) optimal value of (P). Getting the tightest, i.e., the smallest, Lagrangean upper bound is then an optimization problem over λ. The problem

(LR) Min_{λ≥0} v(LR_λ)

is called the Lagrangean dual of (P) relative to the complicating constraints Ax ≤ b.

v(P) ≤ v(LR) = Min_{λ≥0} v(LR_λ) ≤ v(LR_{λ1}), v(LR_{λ2}), ... for any particular multipliers λ1, λ2, ... ≥ 0.

In all that follows, we assume that {x ∈ X | Cx ≤ d} is bounded and that problem (P) is feasible.

Example 1: Consider the 0-1 knapsack problem (KP) Max {4x + 5y + 7z | 5x + 4y + 4z ≤ 10, x, y, z ∈ {0, 1}}. Its LP relaxation is (LP) Max {4x + 5y + 7z | 5x + 4y + 4z ≤ 10, x, y, z ∈ [0, 1]}. One can construct a Lagrangean relaxation of (KP), say (LR_λ), with nonnegative multiplier λ, as follows: (LR_λ) Max {(4x + 5y + 7z) + λ(10 − 5x − 4y − 4z) | x, y, z = 0 or 1}. Notice that for any feasible solution of (KP) and any nonnegative λ, the objective function of (LR_λ) is larger than or equal to that of (KP), since the added term λ(10 − 5x − 4y − 4z) is then nonnegative. For an arbitrary feasible solution of (LR_λ), however, that added term can have any sign.
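For this instance, (LR_λ) is an unconstrained 0-1 problem, so it can be solved by inspecting the sign of each variable's reduced profit. The following is a minimal illustrative sketch, not code from the paper:

```python
# Evaluate (LR_lambda) for Example 1's knapsack:
#   max (4-5L)x + (5-4L)y + (7-4L)z + 10L  over x, y, z in {0,1},
# solved by setting a variable to 1 exactly when its reduced profit is positive.
def lr_value(lmbda):
    reduced = [4 - 5 * lmbda, 5 - 4 * lmbda, 7 - 4 * lmbda]
    return 10 * lmbda + sum(c for c in reduced if c > 0)
```

For instance, lr_value(0) = 16 (the unconstrained relaxation of (KP)) and lr_value(1) = 14; every λ ≥ 0 yields an upper bound on v(KP), which equals 12 here.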

Is a feasible Lagrangean solution optimal for (P)?


Suppose that x*, an optimal solution of (LR_{λ*}) for some λ* ≥ 0, is a feasible solution of (P), that is, it also satisfies the relaxed constraint Ax* ≤ b. In general, such a solution will not be optimal for (P), and the optimal value of (P), v(P), must lie in the bracket [fx*, fx* + λ*(b − Ax*)]. If, however, complementary slackness holds, i.e., if λ*(b − Ax*) = 0, then obviously x* is an optimal solution of (P).
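For Example 1's knapsack this bracket is easy to compute. A sketch (the values λ* = 0.8 and x* = (0, 1, 1), which one can check solve (LR_{0.8}) there, are used purely for illustration):

```python
# Bracket [f x*, f x* + lambda*(b - A x*)] around v(KP) for a Lagrangean solution x*
# of Example 1's knapsack that happens to be feasible for (KP).
def vP_bracket(lmbda, x):
    fx = 4 * x[0] + 5 * x[1] + 7 * x[2]
    slack = 10 - (5 * x[0] + 4 * x[1] + 4 * x[2])   # b - A x*, nonnegative if x* is feasible
    assert slack >= 0, "x must be feasible for (KP)"
    return fx, fx + lmbda * slack
```

Here vP_bracket(0.8, (0, 1, 1)) returns (12, 13.6). Since v(KP) = 12, x* happens to be optimal even though complementary slackness fails (0.8 × 2 ≠ 0): the condition is sufficient, not necessary.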

Geometric Interpretation (Geoffrion 1974)


The Lagrangean dual (LR) is equivalent to the primal relaxation (PR) Max_x {fx | Ax ≤ b, x ∈ Co{x ∈ X | Cx ≤ d}}, i.e., v(LR) = v(PR).

Proof: The proof is based on LP duality. Indeed,

v(LR) = Min_{λ≥0} v(LR_λ)
      = Min_{λ≥0} Max_x {fx + λ(b − Ax) | Cx ≤ d, x ∈ X}
      = Min_{λ≥0} Max_x {fx + λ(b − Ax) | x ∈ Co{x ∈ X | Cx ≤ d}}   (i)
      = Max_x {fx | Ax ≤ b, x ∈ Co{x ∈ X | Cx ≤ d}}                  (ii)
      = v(PR).

(i) is true because the maximum of a linear function over a bounded, discrete set of points is equal to the maximum of that linear function over the convex hull of this set of points. (ii) is true by LP duality because Co{x ∈ R^n | Cx ≤ d, x ∈ X} is a bounded polyhedron, i.e., a polytope, and the problem in (ii) is a feasible, bounded LP.

[Figure: geometric interpretation. The relaxed polyhedron {x | Ax ≤ b}, the kept polyhedron {x | Cx ≤ d}, and the convex hull Co{x ∈ X | Cx ≤ d} of the integer points satisfying the kept constraints, with the objective direction f and the levels v(LP) ≥ v(PR) ≥ v(P).]

If Co{x ∈ X | Cx ≤ d} = {x | Cx ≤ d}, then v(P) ≤ v(PR) = v(LR) = v(LP). In that case, one says that (LR) has the Integrality Property, and the Lagrangean relaxation bound is equal to the LP bound. If Co{x ∈ X | Cx ≤ d} ⊂ {x | Cx ≤ d}, then v(P) ≤ v(PR) = v(LR) ≤ v(LP), and the Lagrangean bound can be strictly better than the LP bound.

Example 1 (continued): For the Lagrangean relaxation of (KP) Max {4x + 5y + 7z | 5x + 4y + 4z ≤ 10, x, y, z ∈ {0, 1}} described above, the integrality property holds, since the optimal solution of (LR_λ) can be determined simply by looking at the sign of the coefficient of each variable: set the variable to 1 if its coefficient is positive, to 0 otherwise. Notice that for λ equal to 0, (LR_λ) reduces to the unconstrained relaxation of (KP).
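This equality of bounds can be checked numerically on Example 1; the grid search below is purely illustrative (one would solve the dual by the methods of the later sections):

```python
def lr_value(lmbda):
    """(LR_lambda) for Example 1, solved by the sign rule described in the text."""
    reduced = [4 - 5 * lmbda, 5 - 4 * lmbda, 7 - 4 * lmbda]
    return 10 * lmbda + sum(c for c in reduced if c > 0)

def lp_value():
    """LP relaxation of (KP) by the greedy profit/weight ratio rule (exact for one knapsack row)."""
    cap, val = 10.0, 0.0
    for p, w in sorted([(4, 5), (5, 4), (7, 4)], key=lambda t: t[0] / t[1], reverse=True):
        take = min(1.0, cap / w)
        val, cap = val + p * take, cap - w * take
    return val

dual_bound = min(lr_value(k / 1000) for k in range(3001))   # crude search over lambda in [0, 3]
```

Both come out to 13.6, attained at λ = 0.8, confirming v(LR) = v(LP) for this relaxation.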

Problem Splitting Tricks


There are often many ways in which a given problem can be relaxed in a Lagrangean fashion. We list a few here, mostly to point out to the reader that a little reformulation can often do wonders for a model's amenability to relaxation, and that for many complex models, intuition and some understanding of the problem's interactions may suggest ingenious and efficient relaxation schemes.

(1) Isolate a well-known subproblem and dualize the other constraints. This is the most commonly used approach. Historically, it was natural to isolate subproblems which could be solved efficiently by specialized algorithms but not by standard commercial software:

(P) Max_x {fx | Ax ≤ b (relax), Cx ≤ d, x ∈ X (keep: a well-known subproblem, including the integer constraints)}.

This argument may be less convincing now that many good commercial packages are capable of solving medium-size, and sometimes even large, problems without any special structure, as long as they are not too complex.

Example 2 (bi-knapsack problem): Consider now the following 0-1 knapsack problem with two constraints: (2KP) Max {4x + 5y + 7z | 5x + 4y + 4z ≤ 10, 9x + 3y + 5z ≤ 11, x, y, z ∈ {0, 1}}. Its LP relaxation is (LP) Max {4x + 5y + 7z | 5x + 4y + 4z ≤ 10, 9x + 3y + 5z ≤ 11, x, y, z ∈ [0, 1]}. One can construct a Lagrangean relaxation (LR_λ) of (2KP) with one nonnegative multiplier λ, by dualizing for instance the first constraint (one could instead choose to dualize the second constraint, or both): (LR_λ) Max {(4x + 5y + 7z) + λ(10 − 5x − 4y − 4z) | 9x + 3y + 5z ≤ 11, x, y, z = 0 or 1}. This is a 0-1 knapsack problem, and very efficient solution algorithms have been developed for that problem type.

(2) Dualize linking constraints. Sometimes naturally, sometimes after a bit of reformulation, problems may contain otherwise independent structures linked by a set of constraints. It is often worth looking at the possibility of relaxing the linking constraints, thus splitting the problem into independent subproblems. For instance, a production problem over multiple facilities may contain sets of constraints related to individual facilities, while the demand constraints make the plant problems dependent on each other. Another example is that of a multi-period model in which facilities (or roads) built in one period can be used in that or a later period. One may be able to use action (building) variables in the design part of the model, and state (existence) variables in the rest of the model: dualizing the linking relationship between "built in period t" and "built by period t" may split the model into a facility-building problem and a facility-using problem (see for instance Guignard, Chajakis, Ryu, 1994). This is a special case of Lagrangean substitution (see later).
Max_{x,y} {fx + gy | Ax ≤ b, x ∈ X (keep 1), Cy ≤ d, y ∈ Y (keep 2), Ex + Fy ≤ h (relax: the complicating linking constraints)}.

(3) If there are two interesting subproblems with common variables, split these variables first by copying them, and then dualize the copy constraint; this is called Lagrangean decomposition (see later for its geometric interpretation, generalizations and properties):

(P) Max_x {fx | Ax ≤ b, Cx ≤ d, x ∈ X}

is equivalent to

(P′) Max_{x,y} {fx | Ax ≤ b, x ∈ X (keep 1), Cy ≤ d, y ∈ X (keep 2), x = y (relax)}.

Example 2 (continued): Consider again the 0-1 knapsack problem with two constraints: (2KP) Max {4x + 5y + 7z | 5x + 4y + 4z ≤ 10, 9x + 3y + 5z ≤ 11, x, y, z ∈ {0, 1}}. One can construct a Lagrangean decomposition (LD) of (2KP) by first copying the variables (call x′, y′ and z′ the copies of x, y and z in the second constraint) and then dualizing the copy constraints. More specifically, (2KP) is first transformed into the equivalent model

(2KP′) Max 4x + 5y + 7z
s.t. 5x + 4y + 4z ≤ 10,
     9x′ + 3y′ + 5z′ ≤ 11,
     x = x′, y = y′, z = z′,   (C)
     x, y, z ∈ {0, 1}, x′, y′, z′ ∈ {0, 1},

and the copy constraints (C) are then dualized with multipliers u_x, u_y and u_z, yielding the Lagrangean decomposition problem

(LD_u) Max (4x + 5y + 7z) + u_x(x′ − x) + u_y(y′ − y) + u_z(z′ − z)
s.t. 5x + 4y + 4z ≤ 10,
     9x′ + 3y′ + 5z′ ≤ 11,
     x, y, z ∈ {0, 1}, x′, y′, z′ ∈ {0, 1},

and this problem decomposes into two separate knapsack problems, i.e., it is equivalent to

Max {(4x + 5y + 7z) − (u_x x + u_y y + u_z z) | 5x + 4y + 4z ≤ 10, x, y, z ∈ {0, 1}}
+ Max {u_x x′ + u_y y′ + u_z z′ | 9x′ + 3y′ + 5z′ ≤ 11, x′, y′, z′ ∈ {0, 1}}.

As will be shown later, this Lagrangean decomposition can produce bounds strictly better than those of the two Lagrangean relaxations in which one or the other knapsack constraint is dualized.
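On this small instance, both bounds can be computed by brute force. A sketch (the enumeration and the sample multipliers are illustrative, not part of the original text):

```python
from itertools import product

BIN3 = list(product((0, 1), repeat=3))

def lr_bound(lmbda):
    """(LR_lambda) for (2KP): dualize 5x+4y+4z<=10, keep the knapsack 9x+3y+5z<=11."""
    return max((4 * x + 5 * y + 7 * z) + lmbda * (10 - 5 * x - 4 * y - 4 * z)
               for x, y, z in BIN3 if 9 * x + 3 * y + 5 * z <= 11)

def ld_bound(ux, uy, uz):
    """(LD_u) for (2KP): two independent knapsacks linked only through the multipliers."""
    first = max((4 - ux) * x + (5 - uy) * y + (7 - uz) * z
                for x, y, z in BIN3 if 5 * x + 4 * y + 4 * z <= 10)
    second = max(ux * x + uy * y + uz * z
                 for x, y, z in BIN3 if 9 * x + 3 * y + 5 * z <= 11)
    return first + second
```

On this tiny instance v(2KP) = 12, and both lr_bound(0) and ld_bound(0, 0, 0) already equal 12; any choice of multipliers keeps both bounds at or above 12. (On larger instances whose subproblems lack the integrality property, the LD dual can be strictly tighter.)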

Integer Linearization Principle

(Geoffrion 1974, Geoffrion, McBride 1978)

The following is an important property of some Lagrangean subproblems. When one can recognize its occurrence, one can usually substantially reduce the computational difficulty of the Lagrangean problem. The Lagrangean problem should have the following structure:

(LR_λ) Max {fx + gy | A_i x_i ≤ p_i y_i, all i (separates, one subproblem per i), x ∈ X, By ≤ b (over y only), y_i = 0 or 1, all i},

where X = Π_i X_i may contain some integrality requirements, and each x_i may be a vector. It goes like this:
(i) Ignore at first the constraints over integer variables only, By ≤ b.
(ii) The problem then separates into one problem for each i:
(LR_λ^i) Max {f_i x_i + g_i y_i | A_i x_i ≤ p_i y_i, x_i ∈ X_i, y_i = 0 or 1},
where y_i plays the role of a 0-1 parameter: for y_i = 0, one has x_i = 0 and f_i x_i + g_i y_i = 0; for y_i = 1, solve (LR_λ^i | y_i = 1): v_i = Max {f_i x_i + g_i | A_i x_i ≤ p_i, x_i ∈ X_i}. Then v_i is the contribution of y_i = 1 to the objective function.
(iii) Replace v(LR_λ) by v(PL_λ), where (PL_λ) is
(PL_λ) Max {Σ_i v_i y_i | By ≤ b, y_i = 0 or 1, all i}.
This process makes use of the integrality constraints on the variables y_i, and therefore even in cases where both (PL_λ) and the (LR_λ^i | y_i = 1) have the Integrality Property, it is possible to have v(LR) = Min_λ v(PL_λ) = Min_λ v(LR_λ) < v(LP).

[Figure: the value v(LR_λ^i) as a function of y_i, over the continuous range 0 ≤ y_i ≤ 1 and at the integer points y_i = 0 or 1.]

Example 3 (similar to Geoffrion and McBride 1978): Consider the capacitated p-median problem:

Min_{x,y} Σ_i Σ_j c_ij x_ij + Σ_i f_i y_i
s.t. Σ_i x_ij = 1, all j (dualize with multipliers u_j),
     Σ_j d_j x_ij ≤ a_i y_i, all i (keep in (LR_u^i)),
     x_ij ≤ y_i, all i, j (keep in (LR_u^i)),
     Σ_i y_i ≤ p (ignore temporarily),
     x_ij ≥ 0, y_i = 0 or 1, all i, j.

One computes a strong bound v(LR) = Max_u v(LR_u) = Max_u v(PL_u), where

v(PL_u) = Min_y {Σ_i v_i y_i | Σ_i y_i ≤ p, y_i = 0 or 1, all i} − Σ_j u_j

is a trivial knapsack problem in y, and

v_i = v(LR_u^i | y_i = 1) = f_i + Min_x {Σ_j (c_ij + u_j) x_ij | Σ_j d_j x_ij ≤ a_i, 0 ≤ x_ij ≤ 1}

is a continuous knapsack problem in x.
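The continuous knapsack pieces v_i of such a scheme are solvable by a greedy ratio rule; a minimal sketch (the instance data in the usage line are hypothetical, only the greedy rule mirrors the text):

```python
def cont_knapsack_min(costs, demands, cap):
    """min sum c_j x_j  s.t.  sum d_j x_j <= cap,  0 <= x_j <= 1.
    Fill capacity with the most negative cost-per-unit-of-demand items first;
    items with nonnegative cost stay at 0 in a minimization."""
    total = 0.0
    items = sorted((c / d, c, d) for c, d in zip(costs, demands) if c < 0)
    for _, c, d in items:
        if cap <= 0:
            break
        take = min(1.0, cap / d)   # possibly fractional, as in the LP subproblem
        total += c * take
        cap -= d * take
    return total
```

For instance, cont_knapsack_min([-3, -2, 1], [2, 1, 1], 2) gives -3.5; adding the fixed charge f_i to this value gives v_i, and the v_i then feed the 0-1 knapsack (PL_u) in y.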

Characteristics of the Lagrangean Function


For the problem

(P) Max_x {fx | Ax ≤ b (complicating constraints), Cx ≤ d (kept constraints), x ∈ X (integer constraints)},

one constructs a Lagrangean relaxation (LR_λ) Max_x {fx + λ(b − Ax) | Cx ≤ d, x ∈ X} (assumed solvable) and the corresponding Lagrangean dual (LR) Min_{λ≥0} v(LR_λ). The Lagrangean function is z(λ) = v(LR_λ). It is an implicit function of λ. Let {x ∈ X | Cx ≤ d} = {x^1, x^2, ..., x^K}; then

z(λ) = Max_x {fx + λ(b − Ax) | Cx ≤ d, x ∈ X} = Max_{k=1,...,K} {f x^k + λ(b − A x^k)},

and z(λ) is the upper envelope of a family of linear functions of λ, and is therefore a convex function of λ. The Lagrangean dual

(LR) Min_{λ≥0} v(LR_λ) = Min_{λ≥0} z(λ) = Min_{λ≥0} Max_{k=1,...,K} {f x^k + λ(b − A x^k)} = Min_{λ≥0, η} {η | η ≥ f x^k + λ(b − A x^k), k = 1, ..., K},

so v(LR) is the minimum of a piecewise linear convex function, known only implicitly. The function z(λ) has breakpoints where it is not differentiable. Its level sets C(η) = {λ ≥ 0 | z(λ) ≤ η}, η a scalar, are convex polyhedral sets. One is looking for the smallest possible value of η for which C(η) is nonempty. Let λ* be a minimizer of z(λ), and let η* = z(λ*) = v(LR).

[Figure: z(λ) as the upper envelope of the linear functions η = f x^k + λ(b − A x^k), k = 1, ..., K, with intercepts f x^1, f x^2, ..., f x^K.]
Let λ^k be a current "guess" at λ*, and let η^k = z(λ^k). Let H_k = {λ | f x^k + λ(b − A x^k) = η^k} be a level hyperplane passing through λ^k; H_k defines part of the boundary of C(η^k). If z(λ) is differentiable at λ^k, it has a gradient ∇z(λ^k) = (b − A x^k) ⊥ H_k. If z(λ) is nondifferentiable at λ^k, it has a subgradient s^k = (b − A x^k) ⊥ H_k.


[Figure: in the space of multipliers λ, the level set C(η = η^k), the level hyperplane H_k = {λ | f x^k + λ(b − A x^k) = η^k} bounding the region where x^k is optimal for (LR_λ), the optimum λ*, and the direction −s^k = −(b − A x^k).]

Primal and Dual Methods to Solve Relaxation Duals


1. Subgradient Method (Held and Karp 1970; Held, Wolfe, Crowder 1974). This is an iterative method in which steps are taken along the negative of a subgradient of z(λ). Let λ^k be the current iterate, and let x^k be an optimal solution of (LR_{λ^k}). Then s^k = b − A x^k is a subgradient of z(λ) at λ^k. If λ* is an optimal solution of (LR), with η* = z(λ*), let μ^{k+1} be the projection of λ^k on the hyperplane H* parallel to H_k:

H* = {λ | f x^k + λ(b − A x^k) = η*}.

H* passes through λ* if f x^k + λ*(b − A x^k) = η*; otherwise H* lies between H_k and λ*, because η* ≥ f x^{k′} + λ*(b − A x^{k′}) for all k′ = 1, ..., K. The vector s^k is perpendicular to both H_k and H*, therefore μ^{k+1} − λ^k is a negative multiple of s^k: μ^{k+1} − λ^k = −ρ s^k, ρ ≥ 0. Also, μ^{k+1} belongs to H*: f x^k + μ^{k+1}(b − A x^k) = η*, therefore f x^k + (λ^k − ρ s^k)(b − A x^k) = η^k − ρ s^k·s^k = η*, and ρ = (η^k − η*)/||s^k||², so that

μ^{k+1} = λ^k + s^k (η* − η^k)/||s^k||².

Finally, let λ^{k+1} be the projection of μ^{k+1} onto the nonnegative orthant: λ^{k+1} = Max(0, μ^{k+1}).


[Figure: one subgradient step in the space of multipliers: from λ^k on H_k = {λ | f x^k + λ(b − A x^k) = η^k}, along −s^k = −(b − A x^k), to the projection μ^{k+1} on H* = {λ | f x^k + λ(b − A x^k) = η*}, near the level set C(η = η^k), the region where x^k is optimal for (LR_λ), and the optimum λ*.]

Remarks. 1. A subgradient is not necessarily a direction of improvement, as seen in the figure (one immediately leaves the current level set). Yet the method will converge. 2. The formula unfortunately uses the unknown optimal value η* of (LR). If one uses an estimate of that value, then one may be using either too small or too large a multiple of −s^k. If too small, one may be making steps which are too small and convergence will be slow. If too big, one is actually projecting on a hyperplane which is too far away from λ^k, possibly beyond λ*. If one sees that the objective function values do not improve for a certain number of iterations, one should suspect that η* has been underestimated (for a minimization problem), and one should reduce the difference η^k − η*, multiplying it by a factor ε^k less than 1. One uses the revised formula

λ^{k+1} = λ^k + s^k ε^k (η* − η^k)/||s^k||²,

where ε^k is reduced when there is no improvement for too long. The interested reader is referred to Lemaréchal (1974) and Zowe (1985) for an extension of subgradient methods called bundle methods.
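A sketch of the subgradient iteration on the single-multiplier dual of Example 1's knapsack (the target η* = 13.6 is the known dual optimum for that instance; in practice one would use an estimate together with the ε^k safeguard just described):

```python
def subgradient(target=13.6, iters=100):
    lam, best = 0.0, float("inf")
    for _ in range(iters):
        reduced = [4 - 5 * lam, 5 - 4 * lam, 7 - 4 * lam]
        x = [1 if c > 0 else 0 for c in reduced]                  # solve (LR_lam) by inspection
        z = 10 * lam + sum(c for c, xi in zip(reduced, x) if xi)  # z(lam)
        best = min(best, z)                                       # best (smallest) dual bound so far
        s = 10 - (5 * x[0] + 4 * x[1] + 4 * x[2])                 # subgradient s = b - A x
        if s == 0:
            break                                                 # zero subgradient: lam is optimal
        lam = max(0.0, lam + s * (target - z) / s ** 2)           # step, then project onto lam >= 0
    return best
```

Starting from λ = 0 (where z = 16), the first step lands exactly on λ = 0.8 and the bound 13.6.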


2. Constraint Generation Method. In this method, one uses the fact mentioned earlier that z(λ) is the upper envelope of a family of linear functions:

(LR) Min_{λ≥0} v(LR_λ) = Min_{λ≥0} z(λ) = Min_{λ≥0} Max_{k=1,...,K} {f x^k + λ(b − A x^k)} = Min_{λ≥0, η} {η | η ≥ f x^k + λ(b − A x^k), k = 1, ..., K}.

[Figure: the cuts η = f x^1 + λ(b − A x^1), η = f x^2 + λ(b − A x^2), ..., η = f x^k + λ(b − A x^k), their intercepts f x^1, f x^2, ..., f x^k, and the next iterate λ^{k+1} with value z(λ^{k+1}).]
At each iteration, one generates one or more cuts of the form η ≥ f x^k + λ(b − A x^k) by solving the Lagrangean subproblem (LR_{λ^k}), with solution x^k. These cuts are added to those generated in previous iterations to form the current restricted LP master problem

(MP^k) Min_{λ≥0, η} {η | η ≥ f x^h + λ(b − A x^h), h = 1, ..., k},

whose solution is the next iterate λ^{k+1}. The process terminates when v(MP^k) = z(λ^{k+1}). This value is the optimal value of (LR).
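With a single multiplier, the restricted master (MP^k) is a one-dimensional LP whose optimum lies either at a bound or where two cuts intersect, so it can be solved by direct inspection. A sketch on Example 1's knapsack (the cap λ ≤ 10 is an assumption, added only to keep the first master bounded):

```python
def cutting_planes(lmax=10.0, tol=1e-9):
    def solve_lr(lam):                      # (LR_lam) for Example 1, by inspection
        red = [4 - 5 * lam, 5 - 4 * lam, 7 - 4 * lam]
        x = [1 if c > 0 else 0 for c in red]
        fx = 4 * x[0] + 5 * x[1] + 7 * x[2]
        s = 10 - (5 * x[0] + 4 * x[1] + 4 * x[2])
        return fx + lam * s, (fx, s)        # value z(lam) and cut  eta >= fx + s*lam

    cuts, lam = [], 0.0
    while True:
        z, cut = solve_lr(lam)
        if cut not in cuts:
            cuts.append(cut)

        def phi(l):                         # master objective: max of the cuts at l
            return max(fx + s * l for fx, s in cuts)

        cands = [0.0, lmax]                 # candidate minimizers: bounds...
        for i, (f1, s1) in enumerate(cuts): # ...and pairwise cut intersections
            for f2, s2 in cuts[i + 1:]:
                if s1 != s2:
                    l = (f1 - f2) / (s2 - s1)
                    if 0.0 <= l <= lmax:
                        cands.append(l)
        lam = min(cands, key=phi)
        if phi(lam) >= z - tol:             # master value meets the Lagrangean value: done
            return z
```

On this instance the loop visits λ = 0, 10, 16/13, 0.8 and stops with the dual value 13.6.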

3. Two-Phase Hybrid Method (Guignard, Zhu 1994). This method combines the subgradient method in a first phase with constraint generation in a second phase. The multipliers are first adjusted according to the subgradient formula, and at the same time, constraints corresponding to all known solutions of the Lagrangean subproblems are added to the LP master problem. The value of the LP master problem is taken as the current estimate of the optimum of the Lagrangean dual. This estimate gets more and more accurate as iterations go by, so there is no need for any adjustment; in other words, the scalar ε^k is kept equal to 1 at all iterations. The Lagrangean bound and the value of the master problem provide a bracket on the dual optimum, and this provides a convergence test, as for the pure constraint generation method. Contrary to what happens when the multipliers are adjusted according to the master problem, however, there is no guarantee that the new multipliers give an improving value for the master problem. One must therefore make sure that the process does not cycle. This can be done simply as follows. One checks whether the constraints generated from the Lagrangean solutions differ from iteration to iteration. If constraints get repeated, the master problem cannot improve. After the same cut has been generated a given number of times (say, 5 times), one switches to a pure constraint generation phase. This process is accelerated if one generates as many Lagrangean solutions as possible. For instance, if a heuristic phase follows the solution of the Lagrangean subproblem at any iteration, and if a feasible integer solution is produced, such a solution is a fortiori feasible for the Lagrangean subproblem, and the corresponding cut is added to the master problem. The method has been tested on a variety of problems and/or relaxation schemes for which the traditional subgradient method was known to behave poorly: generalized assignment problems of large size and of type C, integrated forest management problems (IRPM), bi-knapsack problems, Lagrangean decomposition and/or substitution, etc., and convergence was always achieved quickly. The method appears to be quite robust.

Extensions of Lagrangean Relaxation: Decomposition and Substitution


The purpose of Lagrangean substitution (or of its special case, Lagrangean decomposition) is to:
(1) induce decomposition of the problem into independent subproblems,
(2) capture different structural characteristics of the problem,
(3) obtain stronger bounds than by standard Lagrangean relaxation schemes.
This can be achieved in the following manner:
(1) identify the parts of the problem that should be split,
(2) replace variables in each part by copies, or substitute new expressions,
(3) dualize the copy or substitution constraint.
Remark: It is not necessary that each resulting subproblem be of a special type. Yet it should be much less complex or much smaller than the overall problem, so that it can be solved by existing (commercial) software.

(i) Lagrangean Decomposition (used in automatic control of processes in the 70s, see for instance Soenen 1977). One introduces a copy variable and dualizes the copy constraint:

(P) Max {fx | Ax ≤ b, Cx ≤ d, x ∈ X}

is equivalent to

(P′) Max {fx | Cx ≤ d, x ∈ X, Ay ≤ b, y ∈ X, x = y}.

Dualizing the copy constraint x = y with multipliers u yields

(LD_u) Max {fx + u(y − x) | Cx ≤ d, x ∈ X, Ay ≤ b, y ∈ X},

which splits into

Max {(f − u)x | Cx ≤ d, x ∈ X} + Max {uy | Ay ≤ b, y ∈ X}.

(LD) Min_u v(LD_u) is the Lagrangean Decomposition dual.


Geometric Interpretation

(Guignard, Kim 1987)

[Figure: geometric interpretation of Lagrangean decomposition: the objective direction f, the polyhedra {x | Ax ≤ b} and {x | Cx ≤ d}, their integer hulls Co{x ∈ X | Ax ≤ b} and Co{x ∈ X | Cx ≤ d}, and the levels v(LR) ≥ v(LD).]
v(LD) = Max {fx | x ∈ Co{x ∈ X | Ax ≤ b} ∩ Co{x ∈ X | Cx ≤ d}}. It follows that:
(1) v(LD) is always at least as good as v(LR).
(2) If the second subproblem has the Integrality Property, i.e., {x | Ax ≤ b} = Co{x ∈ X | Ax ≤ b}, then (LD) is equivalent to (LR): v(LD) = v(LR).
(3) If both subproblems have the Integrality Property, i.e., {x | Ax ≤ b} = Co{x ∈ X | Ax ≤ b} and {x | Cx ≤ d} = Co{x ∈ X | Cx ≤ d}, then (LD) is equivalent to (LP): v(LD) = v(LP).
(4) If neither subproblem has the Integrality Property, then (LD) can be strictly better than (LR).

(ii) Lagrangean Substitution (Reinoso and Maculan 1988; Guignard 1989). One replaces Ax by an expression φ(y) of some new variable y and relaxes the equivalence constraint:

(P) Max {fx | Ax ≤ b, Cx ≤ d, x ∈ X}

is replaced by

(P′) Max {fx | Cx ≤ d, x ∈ X, φ(y) ≤ b, y ∈ Y, Ax = φ(y)}.

Dualizing the substitution constraint Ax = φ(y) with multipliers u yields

(LS_u) Max {fx + u[φ(y) − Ax] | Cx ≤ d, x ∈ X, φ(y) ≤ b, y ∈ Y},

which splits into

Max {(f − uA)x | Cx ≤ d, x ∈ X} + Max {u φ(y) | φ(y) ≤ b, y ∈ Y}.

(LS) Min_u v(LS_u) is the Lagrangean Substitution dual.

Y should be such that ∀x ∈ X, ∃y ∈ Y with Ax = φ(y), i.e., (P′) should not be more constrained than (P). Lagrangean substitution bounds are not always comparable to (LR) or (LD) bounds. Also, one will often use a combination of schemes simultaneously.

Example 1: Capacitated Plant Location Problem.

(CPLP) Min_{x,y} Σ_i Σ_j c_ij x_ij + Σ_i f_i y_i
s.t. Σ_i x_ij = 1, all j,           (D) meet 100% of customer demand
     x_ij ≤ y_i, all i, j,          (B) ship nothing if plant is closed
     Σ_i a_i y_i ≥ Σ_j d_j,         (T) enough plants to meet total demand
     Σ_j d_j x_ij ≤ a_i y_i, all i, (C) ship no more than plant capacity
     x_ij ≥ 0, y_i = 0 or 1, all i, j.

The three best Lagrangean schemes are:
(LR) (Geoffrion and McBride 1978, Guignard and Ryu 1992): dualize (D), then use the Integer Linearization Principle. The subproblems are solved via one continuous knapsack problem per plant and one 0-1 knapsack problem over all plants. Strong bound, small computational cost.
(LD) (Guignard and Kim 1987): copy x_ij = x′_ij and y_i = y′_i in (C), duplicate (T), and split into {(D), (B), (T)}, an APLP (Thizy 1994, Ryu 1992), and {(B), (T), (C)}, handled like (LR). Strong bounds, but expensive.
(LS) (Chen and Guignard 1992): substitute Σ_j d_j x_ij = Σ_j d_j x′_ij and y_i = y′_i in (C), with the same split as (LD). Same bound, fewer multipliers, less expensive.

Example 2: Hydroelectric power management (Guignard, Yan 1993)

[Figure: two power plants k and k+1 along the river; the low water level of reservoir k equals the high water level of reservoir k+1.]

A series of power plants are located along a river, separated by reservoirs and falls. Managing the power generation requires decisions concerning water releases at each plant in each time period.

This results in a large, complex mixed-integer program. A possible decomposition of the problem consists in "cutting" each reservoir in half, i.e., "splitting" the water level variable in each reservoir, and dualizing the copy constraint: high water level in k+1 = low water level in k. This Lagrangean decomposition produces one power management problem per power plant. This subproblem does not have a special structure, but it is much simpler and smaller than the original problem and is readily solvable by commercial software. It does not have the Integrality Property, and the resulting bound is much stronger than the LP bound.

Lagrangean Heuristics

Lagrangean relaxation not only provides stronger bounds than LP relaxation, it also generates nearly feasible integer solutions: the Lagrangean subproblem solutions typically violate some, but not all, of the relaxed constraints. Depending on the actual problem, one may choose to try to obtain feasible solutions in different ways:
(1) by modifying the solution to correct its infeasibilities while keeping the objective function deterioration small. Example: in production scheduling, if one relaxes the demand constraints, one may try to change production (down or up) so as to meet the demand (de Matta, Guignard 1994);
(2) by fixing (at 1 or 0) some of the meaningful decision variables according to their value in the current Lagrangean solution, and solving the remaining (hopefully much smaller) problem optimally. We call this the lazy heuristic. In a sense, it is a generic heuristic, not truly problem dependent, except in the way one selects the variables to be fixed. One guiding principle may be to fix variables which satisfy relaxed constraints. In order to be quick and effective, this heuristic normally requires that the number of variables fixed be neither too small (the remaining problem would still be too large) nor too big (there should remain sufficient freedom in the remaining subproblem). Example 1.
In CPLP, if one relaxes the demand constraints (D), then for a dense problem the {(C), (B), (T)} subproblem opens enough plants to satisfy total demand, and ships no more than the availability of each open plant. One can fix the "open" plants open, and solve the remaining problem over the plants closed in the Lagrangean solution and the shipments. Afterwards, one may still be able to improve the solution by closing unused plants (Guignard, Ryu 1992). Example 2. For a generalized assignment problem, after relaxing the multiple choice constraints, one obtains solutions which satisfy all knapsack constraints but usually violate some multiple choice constraints (if not, the Lagrangean solution is an optimal solution of the problem, because complementary slackness holds). If the number of violated constraints is neither too small nor too large, one can fix the variables in all satisfied columns, and solve the remaining generalized assignment problem over the unsatisfied columns. This can be followed by some very quick interchange heuristic. This procedure can be incredibly powerful even for very tight, difficult GAP instances (Guignard, Zhu 1994).
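The lazy heuristic can be sketched on the bi-knapsack problem (2KP) introduced earlier (which variables to fix is the problem-dependent choice; here it is simply passed in):

```python
from itertools import product

PROFITS, W1, W2 = (4, 5, 7), (5, 4, 4), (9, 3, 5)   # data of (2KP)

def lazy_heuristic(fixed):
    """Fix some variables of (2KP) at their Lagrangean values (dict: index -> 0/1),
    enumerate the rest, and return the best feasible value found (None if none)."""
    free = [i for i in range(3) if i not in fixed]
    best = None
    for bits in product((0, 1), repeat=len(free)):
        values = dict(fixed)
        values.update(zip(free, bits))
        sol = [values[i] for i in range(3)]
        if (sum(w * s for w, s in zip(W1, sol)) <= 10
                and sum(w * s for w, s in zip(W2, sol)) <= 11):
            val = sum(p * s for p, s in zip(PROFITS, sol))
            best = val if best is None else max(best, val)
    return best
```

For instance, fixing z = 1 (say, because z appears in a Lagrangean solution) gives lazy_heuristic({2: 1}) = 12, which happens to be optimal here; an over-restrictive fixing such as {0: 1, 1: 1} leaves no feasible completion.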

Conclusion
Lagrangean relaxation is a powerful family of tools for approximately solving integer programming problems. It provides:
(*) stronger bounds than LP relaxation when the subproblem(s) do not have the Integrality Property;
(**) good starting points for heuristic search.

The availability of powerful interfaces (GAMS, AMPL, ...) and of flexible IP packages makes it possible for the user to try various schemes and to implement and test them. It is not necessary to have special structures embedded in a problem to try Lagrangean schemes. If it is possible to decompose the problem structurally into meaningful components and to split them through constraint dualization, possibly after having introduced new variable expressions, it is probably worth trying. Finally, solutions to one or more of the Lagrangean subproblems might lend themselves to Lagrangean heuristics, possibly followed by interchange heuristics, to obtain good feasible solutions. Lagrangean bounds coupled with Lagrangean heuristics provide the analyst with brackets around the optimal integer value; these are usually much tighter than with LP-based bounds and heuristics.

References
Chen, B. and Guignard, M., (1991), ''LD=LDA for CPLP,'' Working paper 91-12-03, revised as ''Polyhedral Analysis and Decompositions for Capacitated Plant Location-Type Problems,'' Discrete Applied Mathematics, 1998.
de Matta, R. and Guignard, M., (1994), ''Dynamic Production Scheduling for a Process Industry,'' Operations Research, 42, 492-503.
Fisher, M.L., Northup, W.D. and Shapiro, J.F., (1975), ''Using Duality to Solve Discrete Optimization Problems: Theory and Computational Experience,'' Mathematical Programming Study 3, 56-94.
Geoffrion, A.M., (1974), ''Lagrangean Relaxation for Integer Programming,'' Mathematical Programming Study 2, 82-114.
Geoffrion, A.M. and McBride, R., (1978), ''Lagrangean Relaxation Applied to Capacitated Facility Location Problems,'' AIIE Transactions, 10, 40-47.
Guignard, M., (1989), ''General Aggregation Schemes in Lagrangean Decomposition: Theory and Potential Applications,'' Working paper 89-12-07, Department of Decision Sciences, University of Pennsylvania.
Guignard, M., Chajakis, E. and Ryu, C., (1994), ''Harvest Scheduling and Transportation Planning in Forest Management,'' VII CLAIO Meeting, Santiago, Chile, July 1994; also Working paper 94-0202, Operations and Information Management Department, University of Pennsylvania.
Guignard, M. and Kim, S., (1987), ''Lagrangean Decomposition: A Model Yielding Stronger Lagrangean Bounds,'' Mathematical Programming, 39, 215-228.
Guignard, M. and Ryu, C., (1992), ''An Efficient Algorithm for the Capacitated Plant Location Problem,'' Working paper 92-11-02, Decision Sciences Department, University of Pennsylvania.


Guignard, M. and Yan, H., (1993), ''Structural Decomposition Methods for Dynamic Multi-Hydropower Plant Optimization,'' Working Paper 93-12-01, Operations and Information Management Department, University of Pennsylvania.
Guignard, M. and Zhu, S., (1994), ''A Two-Phase Dual Algorithm for Solving Lagrangean Duals in Mixed Integer Programming,'' VII CLAIO, Santiago, Chile, July 1994; also Working paper, Operations and Information Management Department, University of Pennsylvania.
Held, M. and Karp, R.M., (1970), ''The travelling salesman problem and minimum spanning trees,'' Operations Research, 18, 1138-1162.
Held, M. and Karp, R.M., (1971), ''The travelling salesman problem and minimum spanning trees: part II,'' Mathematical Programming, 1, 6-25.
Held, M., Wolfe, P. and Crowder, H., (1974), ''Validation of Subgradient Optimization,'' Mathematical Programming, 6, 62-88.
Lemaréchal, C., (1974), ''An Algorithm for Minimizing Convex Functions,'' Proceedings IFIP '74 Congress (North-Holland, Amsterdam), 552-556.
Reinoso, H. and Maculan, N., (1988), ''Lagrangean Decomposition in Integer Linear Programming: A New Scheme,'' revised version of Report ES-141/88 (in Portuguese), COPPE, Universidade Federal do Rio de Janeiro.
Ribeiro, C., (1983), ''Algorithmes de recherche de plus courts chemins avec contraintes: étude théorique, implémentation et parallélisation,'' Doctoral Dissertation, Paris, France.
Ryu, C. and Guignard, M., (1992), ''An Exact Algorithm for the Simple Plant Location Problem with an Aggregate Capacity Constraint,'' Working paper 92-04-09, Decision Sciences Department, University of Pennsylvania.
Soenen, R., (1977), ''Contribution à l'étude des systèmes de conduite en temps réel en vue de la commande d'unités de fabrication,'' Thèse de Doctorat d'État, Lille, France.
Thizy, J.-M., (1994), ''A Facility Location Problem with Aggregate Capacity,'' INFOR, 32, 1-18.
Zowe, J., (1985), ''Nondifferentiable Optimization,'' in Computational Mathematical Programming (Springer-Verlag, Berlin), 323-356.

