

IEEE TRANSACTIONS ON POWER SYSTEMS, VOL. 17, NO. 3, AUGUST 2002

A Primal-Dual Interior Point Method for Optimal Power Flow Dispatching


Rabih A. Jabr, Alun H. Coonick, and Brian J. Cory
Abstract: In this paper, the solution of the optimal power flow dispatching (OPFD) problem by a primal-dual interior point method is considered. Several primal-dual methods for optimal power flow (OPF) have been suggested, all of which are essentially direct extensions of primal-dual methods for linear programming. The aim of the present work is to enhance convergence through two modifications: a filter technique to guide the choice of the step length, and an altered search direction to avoid convergence to a nonminimizing stationary point. A reduction in computational time is also gained by solving a positive definite system for the search direction. Numerical tests on standard IEEE systems and on a realistic network are very encouraging and show that the new algorithm converges where other algorithms fail.

Index Terms: Optimization methods, power generation dispatching, second-order condition, step length control.

I. INTRODUCTION

Optimal power flow dispatching (OPFD) is an optimization problem which minimizes the total generation dispatch cost while satisfying physical and technical constraints on the network. Primal-dual interior point methods for optimal power flow (OPF) have recently been discussed [1]-[6]. All these methods have been motivated by the success of interior point methods for linear programming. Wu et al. [1], [2] established that primal-dual interior point methods offer an attractive solution to the OPF problem. Their method was based on polar coordinates and investigated two different ways of updating the barrier parameter. At the same time, Granville [3] independently proposed a similar method for optimal reactive power dispatch. Through the concept of centering directions, Wei et al. [4] unified the OPF, the approximate OPF, and the classical power flow into a single optimization problem. These authors also presented a novel data structure to reduce fill-in during factorization and to reduce computational time when using rectangular coordinates. Torres and Quintana [5] compared the polar and rectangular coordinate versions and concluded that both perform equally well. Very recently, Castronuovo et al. [6] presented an OPF solution with high-performance computational techniques, in which vector techniques are used to enhance computational speed. The solution of linear equations is done through sparse LU factorization and a modification of the Tinney II (minimum degree ordering) heuristic.
Manuscript received October 12, 1999; revised October 1, 2001. R. A. Jabr is with the Department of Electrical, Computer and Communication Engineering, Notre Dame University, 72 Zouk Mikayel, Lebanon. A. H. Coonick and B. J. Cory are with the Department of Electrical and Electronic Engineering, Imperial College, London SW7 2BT, U.K. Publisher Item Identifier 10.1109/TPWRS.2002.800870.

Although these methods [1]-[6] proved to be efficient for solving OPFD problems, they lack a technique which induces convergence, and they neglect the second-order sufficiency conditions [7], [8] which are needed to prove solution optimality. An algorithm for nonlinear nonconvex programming which does not check the second-order conditions can leave the user unsure about the outcome of the optimization. This motivated Almeida et al. [9] to propose the parametric OPF, which tracks the trajectory of the solution using Newton's method while satisfying the second-order Kuhn-Tucker (KT) conditions.

In this paper, we discuss two modifications used to adapt the linear programming interior point method to nonconvex nonlinear OPFD problems. These modifications are aimed at overcoming the above-mentioned disadvantages while keeping the OPFD algorithm efficient. Moreover, an increase in efficiency was achieved by formulating the OPFD in an inequality form which leads to a sparse symmetric positive definite matrix. Symmetric positive definiteness is among the most desirable properties a matrix can have [10], since it permits economy and numerical stability in the solution of linear systems. The proposed algorithm is compared with a similar version of the interior point OPF algorithm presented in [6].

This paper is organized as follows. Section II introduces the problem, and Section III presents the optimality conditions. The primal-dual interior point method theory and implementation are given in Sections IV and V. Special emphasis is given to the treatment of indefiniteness in Section VI. The filter technique, which guides the choice of the step length, is introduced in Section VII. Section VIII explains the choice of the barrier parameter, followed by a pseudocode summary of the algorithm in Section IX. Section X highlights the main features of the method used as a benchmark for comparison. Numerical results are given in Section XI. The paper is concluded in Section XII.

II. OPFD PROBLEM FORMULATION

We are interested in the nonlinear nonconvex optimization problem with inequality constraints

minimize f(x) subject to g(x) ≥ 0    (1a)

where x ∈ Rⁿ, f: Rⁿ → R, and g: Rⁿ → Rᵐ. The functions f and g are continuous and smooth (i.e., with first and second continuous derivatives). The objective function f is the total generation dispatch cost. The cost functions are assumed to be convex and piecewise linear. They are modeled in the optimization problem using

0885-8950/02$17.00 © 2002 IEEE


TABLE I NOMENCLATURE USED IN (1b)

Fig. 1. Transformer model.

Fig. 2. Equivalent transformer model.

separable programming [11]. The inequality constraints in (1a) represent

(1b)

The symbols used in (1b) are explained in Table I. Section II-A presents the formulation of the regulating transformer model. The following points concerning the OPFD formulation need to be observed.
1) The active power balance equations are traditionally expressed as equality constraints [1]-[6]. Our formulation in (1b) uses inequality constraints for reasons which will become clear in the following sections. Note that since the cost curves are positive, the optimization process of minimizing the cost will reduce the generation level to meet the total load. If the optimal solution leads to oversatisfaction of the load, then either the OPFD problem is infeasible or the system has a lower cost if it is dispatched at a slightly higher load (which is not practical). Irving and Sterling [12] have used a similar inequality form in the context of economic dispatching.
2) To enforce line flow constraints properly, the real line flow power is checked at both the sending and receiving ends of every line. The positive flow on a line has the higher magnitude (since the sum of the flows equals the real losses, which are nonnegative) and can therefore be limited by the network capacity.
3) The generator active limits are not explicitly expressed but are taken into account through the separable programming approach [13].

A. Regulating Transformer Model

A regulating transformer has the capacity to regulate both voltage magnitude and phase angle [14]. Fig. 1 shows the one-line diagram of the transformer, where the tap is a complex ratio. A direct consequence of this is that the bus admittance matrix (Y_bus) becomes unsymmetrical [14]. Our approach for modeling the regulating transformer allows its easy incorporation into an OPF-based program without altering the functions that evaluate the Jacobian and Hessian matrices corresponding to the power flow equations. For a transformer branch, we introduce a voltage-controlled bus (slave), as shown in Fig. 2. The control is done through the master bus according to the following constraints:

(1c)

where the tap ratio magnitude is bounded by its lower and upper tap limits and the phase shift by its lower and upper phase shift limits, respectively. Moreover, we ensure in the optimization process that the transformer power extracted from the master bus is injected into the slave bus.

III. OPTIMALITY CONDITIONS

The Lagrangian function for (1a) is

L(x, λ) = f(x) − λᵀg(x)    (2a)

where λ ∈ Rᵐ is the vector of Lagrange multipliers. The gradient of the Lagrangian with respect to x is denoted by ∇L = ∇f − Jᵀλ, where J is the Jacobian of g. The Hessian of the Lagrangian is

W(x, λ) = ∇²f(x) − Σᵢ λᵢ ∇²gᵢ(x).    (2b)

Let x* denote a point where the following conditions hold.


1) Feasibility: g(x*) ≥ 0.
2) First-order KT conditions: at x*, there exist multipliers λ* ≥ 0 such that

∇f(x*) − J(x*)ᵀλ* = 0 and λᵢ* gᵢ(x*) = 0, i = 1, …, m.    (2c)

3) Second-order KT conditions: the reduced Hessian of the Lagrangian function ZᵀW(x*, λ*)Z is positive definite, where Z is a basis for the null space of the Jacobian of all constraints equal to zero at x* and W is the Hessian of the Lagrangian.
4) Constraint qualification: the gradients of the constraints equal to zero at x* are linearly independent.
5) Strict complementarity: there is no i such that λᵢ* = 0 and gᵢ(x*) = 0.

Under conditions 1)-5), x* is an isolated local minimizer of (1a) [7], [8], [15]. To avoid clutter, we will omit the argument of vector or matrix functions, such as g(x) or J(x), when it is unnecessary or clear from context.

IV. PRIMAL-DUAL INTERIOR POINT METHOD

Primal-dual methods can be interpreted in several ways [16]. One particular interpretation is based on the classical Fiacco-McCormick [15] logarithmic barrier function associated with problem (1a), given by

B(x, μ) = f(x) − μ Σᵢ ln gᵢ(x)    (3a)

where μ is a positive parameter. The barrier gradient and Hessian are

∇B = ∇f − μJᵀG⁻¹e    (3b)

∇²B = W(x, μG⁻¹e) + μJᵀG⁻²J

where
e   column vector of ones (of appropriate size);
G   diagonal matrix diag(g(x)).
Note that the notation diag(v) represents a diagonal matrix whose elements are contained in the vector v. The solution to problem (1a) can be obtained via a sequence of solutions to the unconstrained subproblem

minimize B(x, μ).    (3c)

Under conditions 1)-5) of Section III [15], [17], for a sequence of decreasing and sufficiently small μ, there is a corresponding sequence of isolated local minimizers x(μ) of the unconstrained problem (3c) such that

x(μ) → x* as μ → 0    (3d)

∇B(x(μ), μ) = 0    (3e)

∇²B(x(μ), μ) is positive definite    (3f)

and λ(μ) = μG(x(μ))⁻¹e converges to the Lagrange multipliers λ* appearing in (2c). The points x(μ) define a barrier trajectory, or a local central path, for (3c). If we introduce the slack variable

s = g(x)    (3g)

and define

λ = μS⁻¹e    (3h)

S = diag(s),  Λ = diag(λ)    (3i)

then (3d)-(3f), which define the central path, are equivalent to

∇f − Jᵀλ = 0    (3j)

ZᵀW(x, λ)Z is positive definite    (3k)

SΛe = μe,  s > 0,  λ > 0    (3l)

g(x) − s = 0.    (3m)

Note that as μ → 0, (3l) imposes a more accurate approximation of the complementarity conditions, which hold at optimality. Equations (3j), (3l), and (3m) define n + 2m nonlinear equations

F(x, s, λ) = [∇f − Jᵀλ;  SΛe − μe;  g(x) − s] = 0    (3n)

which hold at (x(μ), s(μ), λ(μ)). Note that (3n) does not capture the nonnegativity constraints in (3l) and the positive definiteness of (3k). Applying Newton's method to the system in (3n), we obtain

[W   0   −Jᵀ
 0   Λ    S
 J  −I    0 ] [Δx; Δs; Δλ] = −F(x, s, λ).    (3o), (3p)
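As a concrete illustration of the Newton iteration on (3n), the following sketch uses a one-variable toy problem chosen purely for illustration (nothing here comes from the paper's test systems): minimizing x² subject to x ≥ 1, whose solution is x* = 1 with multiplier λ* = 2.

```python
import numpy as np

# One Newton step on the perturbed KKT system (3n) for the toy problem
#   minimize x**2  subject to  x - 1 >= 0,
# with slack s and multiplier lam. The exact solution is x* = 1, lam* = 2.
def newton_step(x, s, lam, mu):
    grad_f, hess_f = 2.0 * x, 2.0      # gradient and Hessian of f(x) = x**2
    g, J = x - 1.0, 1.0                # constraint value and Jacobian
    W = hess_f                         # Lagrangian Hessian (g is linear)
    # residuals of (3j), (3l), (3m): dual feasibility, complementarity, primal feasibility
    rhs = -np.array([grad_f - J * lam,
                     s * lam - mu,
                     g - s])
    K = np.array([[W,   0.0, -J ],     # rows follow the same ordering as rhs
                  [0.0, lam,  s ],
                  [J,  -1.0, 0.0]])
    dx, ds, dlam = np.linalg.solve(K, rhs)
    return x + dx, s + ds, lam + dlam

x, s, lam = 2.0, 1.0, 1.0
for mu in [1.0, 0.1, 0.01, 1e-4, 1e-6]:   # a crude, fixed barrier schedule
    x, s, lam = newton_step(x, s, lam, mu)
```

Full Newton steps happen to stay feasible on this example; in general the ratio test and filter safeguards of Sections VII are required.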


Together, (3o) and (3p) specify a linear system which can be used to solve for the Newton steps (Δx, Δs, Δλ).

V. SOLVING THE PRIMAL-DUAL SYSTEM

As noted in Section IV, the solution of the linear system in (3o) and (3p) does not take into account the positive definiteness condition in (3k). A robust second-derivative method for (1a) must not only be able to check the second-order KT conditions (condition 3 in Section III), but also to move away from nonminimizing stationary points [17], [18]. The particular choice of the standard form (1a) makes it straightforward to check positive definiteness of the reduced Hessian. We describe our approach here. We start by solving for Δs from (3p)

Δs = Λ⁻¹(μe − SΛe − SΔλ)    (4a)

and substitute (4a) into (3o) to get the augmented system

[W  −Jᵀ
 J  SΛ⁻¹] [Δx; Δλ] = [Jᵀλ − ∇f;  μΛ⁻¹e − g].    (4b)

We further solve for Δλ from the second equation in (4b)

Δλ = ΛS⁻¹(μΛ⁻¹e − g − JΔx)    (4c)

and substitute back into the first equation of (4b) to get the normal system

MΔx = Jᵀλ − ∇f + JᵀS⁻¹(μe − Λg)    (4d)

where M = W + JᵀΛS⁻¹J. Our algorithm solves for the search direction first by solving the normal system (4d) and then by evaluating the other components of the search direction from (4c) and (4a).

VI. SECOND-ORDER KT CONDITIONS

A minimizing solution of (1a) is distinguished from a nonminimizing stationary point by positive definiteness of the reduced Hessian. Since there is no identification of the binding constraints during an interior point iteration, it is not obvious how this condition can be checked in practice. However, Gay et al. [17] show that, as we approach the optimal solution, the matrix M [in (4d)] can be used to provide guidance about the eigenvalues of the unknown reduced Hessian ZᵀWZ (condition 3 in Section III) without any need to predict the active set. In other words, the algorithm can make decisions about possible indefiniteness of the reduced Lagrangian Hessian near the solution by checking the positive definiteness of M. The same test is also used by Vanderbei and Shanno [18].

A. Treatment of Indefiniteness

In Section VI, we have seen that the matrix M can be used to check positive definiteness of the reduced Lagrangian Hessian. If M is not positive definite, the iterates are likely to converge to a nonminimizing stationary point. Consider the simple example from [18] of minimizing a concave function subject to bound constraints: the algorithms presented in [1]-[6], when applied to solve this problem, converge to the global maximum at 0.5. In our algorithm, whenever M is indefinite, we replace it by its 2-norm positive approximant [10]

M̄ = M + δI    (5a)

where I is the identity matrix and δ ≥ 0 is chosen so that M̄ is positive definite. A warning associated with this perturbation is that the dual infeasibility may fail to decrease even with arbitrarily small steps [18]. This is the price to pay in order to obtain positive definiteness of M̄ and descent of the barrier function. However, empirical evidence suggests that δ is zero most of the time and that the dual infeasibility does decrease to zero close to a minimizer, since M ultimately becomes positive definite [15], [17], [18].

The value of δ is obtained through the bisection method [10]. The idea is to find an interval [δL, δU] containing δ. If M + δM I is positive definite, where δM = (δL + δU)/2, accept δU = δM; otherwise, accept δL = δM. This process is repeated until the desired accuracy is reached:

δU − δL ≤ ε δU    (5b)

where ε is a relative error tolerance, typically 5 × 10⁻². To initialize this procedure, we set δL to the computer tolerance and δU to the Frobenius norm of M. The Frobenius norm is a weak upper bound, but it is used to avoid the calculation of the minimum eigenvalue of M. Testing for positive definiteness is done through the Cholesky decomposition: the matrix is declared positive definite if the factorization succeeds. Sparsity techniques and minimum degree ordering are employed in computing the Cholesky factorization [19].

VII. CHOICE OF STEP LENGTH

If started far from a solution, primal-dual methods whose iterates are updated based on the ratio test alone [1]-[6] may fail to converge to a solution. For this reason, primal-dual methods usually use a merit function in order to induce convergence [18]. There are, however, problems associated with the merit function, particularly with the choice of the penalty parameter [20]. This has led to the use of methods such as the watch-dog strategy [21], in which the merit function is allowed to increase for a limited number of iterations.
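The normal-matrix assembly of (4d) together with the perturbation search of (5a) and (5b) can be sketched as follows; this is a schematic Python rendering (dense algebra, illustrative variable names) rather than the paper's sparse MATLAB implementation:

```python
import numpy as np

def is_pd(A):
    """Positive definiteness test via Cholesky, as in Section VI-A."""
    try:
        np.linalg.cholesky(A)
        return True
    except np.linalg.LinAlgError:
        return False

def perturbed_normal_matrix(W, J, s, lam, rtol=5e-2):
    """Form M = W + J^T diag(lam/s) J of (4d); if M is indefinite, return
    M + delta*I as in (5a), with delta found by bisection on [0, ||M||_F]
    up to the relative tolerance of (5b)."""
    M = W + J.T @ (np.diag(lam / s) @ J)
    if is_pd(M):
        return M, 0.0                       # no perturbation needed
    lo, hi = 0.0, np.linalg.norm(M, 'fro')  # Frobenius norm: cheap upper bound
    I = np.eye(M.shape[0])
    while hi - lo > rtol * hi:
        mid = 0.5 * (lo + hi)
        if is_pd(M + mid * I):
            hi = mid                        # keep the smallest shift found so far
        else:
            lo = mid
    return M + hi * I, hi
```

In the paper, the definiteness test reuses the sparse Cholesky factorization with minimum degree ordering, so the bisection costs only a handful of factorization attempts.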


On the other hand, empirical evidence [18], [20] suggests that Newton's method should be hindered as little as possible. Motivated by this evidence, we adapt the filter method to the choice of the step length in the primal-dual method. This method was originally proposed by Fletcher and Leyffer [20] for setting the trust-region radius in sequential quadratic programming. The idea is to interfere as little as possible with Newton's method, but to do enough to give a bias toward convergence.

A. Filter Technique

There are two competing aims in the primal-dual solution of (1a). The first aim is to minimize the objective, and the second is the satisfaction of the constraints. Keeping in mind that the positivity of the iterates is easily maintained using the ratio test, these two conflicting aims can be written as

minimize the objective value f    (6a)

minimize the constraint infeasibility b.    (6b)

A merit function usually combines (6a) and (6b) into a single objective. Instead, we see (6a) and (6b) as two separate objectives, similar to multiobjective optimization. However, the situation here is different, since it is essential to find a point where b = 0 if possible. In this sense, (6b) has priority. Nevertheless, we will make use of the principle of domination from multiobjective programming in order to introduce the concept of the filter.

Definition 1 [20]: A pair (f_k, b_k) is said to dominate another pair (f_l, b_l) if and only if f_k ≤ f_l and b_k ≤ b_l.

In the context of the primal-dual method, this implies that the kth iterate is at least as good as the lth iterate with respect to (6a) and (6b). Next, we define the filter, which will be used in the line search to accept or reject a step.

Definition 2 [20]: A filter is a list of pairs (f, b) such that no pair dominates any other. A point is said to be accepted for inclusion in the filter if it is not dominated by any point in the filter.

The filter therefore accepts any point that either improves optimality or improves infeasibility. Fig. 3 shows the filter graphically in the (f, b) plane. Each point in the filter defines a block of nonacceptable points, and the union of these blocks represents the set of points not acceptable to the filter.

Fig. 3. Graphical representation of the filter.

We follow most primal-dual methods in allowing separate step lengths for the primal and dual variables [22]. A standard ratio test is used to ensure that nonnegative variables remain nonnegative:

α_P = min(1, γ · min{−sᵢ/Δsᵢ : Δsᵢ < 0})    (6c)

α_D = min(1, γ · min{−λᵢ/Δλᵢ : Δλᵢ < 0})    (6d)

where γ is a constant slightly less than one. We also make use of a second-order correction for the complementarity condition [1], [2], [4], [5], [24]. The step lengths (6c) and (6d) are successively halved until the iterate

(x, s, λ) + (α_P Δx, α_P Δs, α_D Δλ)    (6e)

becomes acceptable to the filter.

VIII. CHOICE OF THE BARRIER PARAMETER

An important issue in the primal-dual method is the choice of the barrier parameter. Many methods are based on approximate complementarity, where the centering parameter is fixed a priori [17]. Mehrotra [23] suggested a scheme for linear programming in which the barrier parameter is estimated dynamically during the iteration. Owing to its success [24], our algorithm follows the heuristic originally proposed in [23]. First, the Newton equation system (see Section V) is solved with the barrier parameter set to zero; the direction obtained in this case is called the affine-scaling direction. The step lengths in the affine-scaling direction are obtained using (6c) and (6d), and the barrier parameter is then estimated dynamically from the predicted reduction in the complementarity gap along the affine-scaling direction (7a). To avoid numerical instability, we define the barrier parameter by (7a) only while the absolute complementarity gap remains above a small threshold; once it falls below the threshold, we switch to the definition in (7b).
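The dynamic barrier estimate of this section can be sketched as follows; the cubed centering ratio and the ratio-test fraction are the values commonly used with Mehrotra's heuristic [23], not constants quoted from the paper:

```python
import numpy as np

def step_to_boundary(v, dv, gamma=0.99995):
    """Ratio test in the spirit of (6c)/(6d): largest alpha <= 1
    keeping v + alpha*dv positive (v > 0 assumed)."""
    neg = dv < 0
    if not neg.any():
        return 1.0
    return min(1.0, gamma * float(np.min(-v[neg] / dv[neg])))

def mehrotra_mu(s, lam, ds_aff, dlam_aff):
    """Barrier estimate in the spirit of (7a): predict the complementarity
    gap along the affine-scaling direction and center accordingly."""
    m = s.size
    gap = s @ lam                               # current complementarity gap
    a_p = step_to_boundary(s, ds_aff)           # primal affine-scaling step
    a_d = step_to_boundary(lam, dlam_aff)       # dual affine-scaling step
    gap_aff = (s + a_p * ds_aff) @ (lam + a_d * dlam_aff)
    sigma = (gap_aff / gap) ** 3                # centering parameter
    return sigma * gap / m
```

A large predicted gap reduction yields a small σ (an aggressive, nearly affine step); a poor prediction yields σ close to one, pulling the iterate back toward the central path.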

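The domination test of Definitions 1 and 2 and the step-halving search of Section VII can be sketched as follows; `measures` is a hypothetical callback returning the (objective, infeasibility) pair at a trial step length:

```python
def dominates(p, q):
    """Definition 1: pair p = (f, b) dominates q if it is no worse in both."""
    return p[0] <= q[0] and p[1] <= q[1]

class Filter:
    """Definition 2: a list of pairs, none of which dominates any other."""
    def __init__(self):
        self.pairs = []

    def acceptable(self, trial):
        return not any(dominates(p, trial) for p in self.pairs)

    def add(self, trial):
        if self.acceptable(trial):
            # discard entries the new pair dominates, preserving the invariant
            self.pairs = [p for p in self.pairs if not dominates(trial, p)]
            self.pairs.append(trial)

def filter_line_search(flt, measures, alpha, lsmax=25):
    """Halve the step until the trial pair is acceptable to the filter,
    as in the update (6e)."""
    for _ in range(lsmax):
        trial = measures(alpha)
        if flt.acceptable(trial):
            flt.add(trial)
            return alpha
        alpha *= 0.5
    return alpha
```

A trial point is thus rejected only when some stored iterate is at least as good in both the objective and the infeasibility measure.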

IX. PSEUDOCODE SUMMARY OF THE ALGORITHM: MPCIP

The algorithm presented will be referred to as the modified predictor-corrector interior point (MPCIP) algorithm. The quantities itmax and lsmax are limits on the maximum number of iterations in the main loop and the maximum number of steps in the filter search loop, respectively. In this implementation, itmax = 100 and lsmax = 25. The quantity tol is the exit tolerance.

TABLE II TEST PROBLEM STATISTICS

Algorithm MPCIP:
given a starting solution, initialize s > 0, λ > 0, and μ; initialize the filter; iter := 0;
repeat
  iter := iter + 1;
  if iter = 1 then perform minimum degree ordering of M; end
  obtain M̄ by modifying M if necessary (Section VI-A) and compute the Cholesky decomposition of M̄;
  set μ := 0 and solve for the affine-scaling direction: Δx from (4d), Δλ from (4c), and Δs from (4a);
  obtain the primal and dual steps in the affine-scaling direction from (6c) and (6d);
  obtain μ using (7a) and (7b);
  obtain the actual search direction, taking into account the second-order correction terms: Δx from (4d), Δλ from (4c), and Δs from (4a);
  obtain the primal and dual steps in the actual direction from (6c) and (6d);
  lsteps := 0;
  repeat
    compute a trial point;
    if the trial point is dominated by the filter then halve the step lengths;
    else update the filter; break;
    end
    lsteps := lsteps + 1;
  until lsteps > lsmax;
  update the iterate using (6e);
until (pfeas ≤ tol and dfeas ≤ tol and opt ≤ tol) or iter > itmax or lsteps > lsmax.

Needless to say, the same decomposition of M̄ is used in computing the affine-scaling and actual directions. Note that in the above algorithm, the relative primal infeasibility (pfeas), dual infeasibility (dfeas), and optimality (opt) measures used to set up the stopping criteria are defined in (8a)-(8c).
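The stopping quantities can be computed along the following lines; the precise scalings of (8a)-(8c) are not reproduced here, so the normalizations below are typical choices rather than the paper's exact definitions:

```python
import numpy as np

def convergence_measures(grad_f, J, lam, g, s, x):
    """Relative primal infeasibility, dual infeasibility, and optimality gap,
    in the spirit of (8a)-(8c); the scalings are illustrative."""
    pfeas = np.linalg.norm(g - s) / (1.0 + np.linalg.norm(x))
    dfeas = np.linalg.norm(grad_f - J.T @ lam) / (1.0 + np.linalg.norm(lam))
    opt = abs(s @ lam) / (1.0 + abs(grad_f @ x))   # complementarity gap
    return pfeas, dfeas, opt
```

All three measures vanish at a point satisfying (3j)-(3m) with μ = 0, which is exactly the exit condition of the MPCIP loop.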

X. OTHER INTERIOR POINT METHODS

Primal-dual interior point methods have been applied before to the solution of the OPF [1]-[6]. We have chosen to compare our work with the most recently presented algorithm [6], which mainly differs from the others in the choice of the linear equations solver. In [6], Castronuovo et al. propose storage of the augmented system by sparse lists, as opposed to compact form by blocks. This type of storage reduces the number of floating-point operations, because only nonzeros are considered, while in block storage, explicit inversion of blocks with zero elements is required. The solution to the augmented system is obtained through LU factorization with minimum degree ordering, modified to dynamically evaluate the condition of the pivot. MATLAB [19] offers a similar function which allows column minimum degree ordering to be performed at the beginning of the solution, together with dynamic partial pivoting for numerical stability. To make the comparison fair, we include in the method presented in [6] the same heuristic for evaluating the barrier parameter and the same second-order correction step as explained in Section VIII. Different primal and dual step lengths are used, as in [1]-[5]. The resulting method is implemented in MATLAB and referred to as the predictor-corrector interior point (PCIP) method.


TABLE III OPFD COMPUTATIONAL RESULTS AT PEAK DEMAND (THE NUMBER IN BRACKETS IS THE STARTING POINT FOR CONSTRAINED OPFD WITH REGULATING TRANSFORMERS)

TABLE IV OPFD COMPUTATIONAL BEHAVIOR OVER TEN LOAD LEVELS

XI. NUMERICAL RESULTS

This section presents some numerical results obtained with an implementation of the MPCIP algorithm. The algorithm is tested on eight different networks, each dispatched at ten different load levels. The MPCIP is compared to the PCIP referred to in Section X. All routines are written in MATLAB and run on a Pentium II 400-MHz PC with 128 MB of RAM. Table II shows a summary of the test problems. All the systems are from the IEEE standard test cases except the 175 Bus and the 175 Bus E (with an embedded network) systems, which are derived from an actual network. The performance of the algorithms is tested starting from two different values of the interpolatory variables, but always with a flat voltage profile and zero bus angles. Results on two different types of problems are reported:
1) constrained OPFD, i.e., with the enforcement of line flow constraints;

2) constrained OPFD with regulating transformers and line flow constraints.
The results of the OPFD at peak demand are given in Table III, which shows that, with the MPCIP, we could solve all of the test systems. For the PCIP, convergence was not attained for the 57 Bus and 300 Bus systems, even when starting from different points. The number of binding constraints obtained from the OPFD solutions in Table III was monitored throughout the simulations. The 300 Bus system in particular showed that the MPCIP is able to detect a high number of active constraints. To assess the relative performance of both methods, each of the eight systems was dispatched at the ten load levels obtained from a load duration curve, starting from two different initial points. When convergence was attained, the algorithms always reached the same schedule solution within the error tolerance. Table IV shows the percentage of success for each of the methods, together with the number of iterations in which the filter


technique (nF) and the estimation of the diagonal perturbation (nE) were invoked. A problem number is also assigned for ease of identification. Failures of the PCIP occurred for cases 8, 15, 16, 22, 29, and 30. These can be attributed to the indefiniteness of the reduced Lagrangian Hessian, which this method ignores. For the constrained OPFD cases, with or without regulating transformers, the MPCIP algorithm showed a higher degree of robustness. Failures in this case occurred only in the 57 Bus system starting from the same point. This was due to a high diagonal perturbation at the beginning of the iterations, which prevented the dual infeasibility from decreasing to zero. However, starting from a different point resulted in 100% success. Table IV also shows that the diagonal perturbation was mostly required for the constrained OPFD with regulating transformers. Generally, the use of the diagonal perturbation was rare, since close to the solution the normal matrix becomes positive definite, as expected. Moreover, the number of iterations in which the filter enforced a reduction in step length was very small. While this may seem surprising, one should remember that the search direction is a Newton step, which generally moves toward a minimizer and a feasible point. A similar conclusion was reached by Vanderbei and Shanno [18] when using a merit function. Nevertheless, the filter is needed in some cases to enhance convergence. The ratios (MPCIP to PCIP) of average CPU time and iteration count are shown in Figs. 4 and 5, respectively.

Fig. 4. CPU times of MPCIP as a proportion of PCIP times.

Fig. 5. Iterations of MPCIP as a proportion of PCIP iterations.

The figures indicate that, in most cases, the PCIP requires a lower number of iterations because it solves the OPF with both equality and inequality constraints, i.e., part of the active set is known a priori, which means that the complementarity condition (3l) has to be checked for fewer variables. However, the MPCIP requires less computation time, since it solves a symmetric positive definite system using the Cholesky decomposition, which is much faster than the solution of a symmetric indefinite system using LU decomposition. From the point of view of the user, a reduction in computational time is more important than a reduction in the number of iterations.

XII. CONCLUSIONS

This paper describes a modified primal-dual method applied to the OPFD problem. Two modifications are included compared with previously published solutions: 1) a filter to ensure step length control and 2) a diagonal perturbation of the normal equations matrix in order to guide convergence toward minimizing stationary points. The method presented also provides a means to detect and treat indefiniteness in the reduced Hessian (second-order sufficiency condition for optimality) without predicting the active set. Moreover, the formulation of the OPFD through inequality constraints requires less CPU time than previous methods, since it solves a positive definite system whose size equals the number of primal variables. The paper also presents a model for the inclusion of regulating transformers in the OPF solution. This model overcomes the difficulty of modeling the phase shift, which has precluded its appearance in previous interior point OPF solutions. Numerical testing on eight different networks shows that the method is promising.

REFERENCES
[1] Y. C. Wu, A. S. Debs, and R. E. Marsten, "A direct nonlinear predictor-corrector primal-dual interior point algorithm for optimal power flows," IEEE Trans. Power Syst., vol. 9, pp. 876-883, May 1994.
[2] Y. C. Wu, A. S. Debs, and R. E. Marsten, "A nonlinear programming approach based on an interior point method for optimal power flows," in Proc. IEEE/NTUA Athens Power Tech. Conf., Athens, Greece, Sept. 5-8, 1993, pp. 196-200.
[3] S. Granville, "Optimal reactive dispatch through interior-point methods," IEEE Trans. Power Syst., vol. 9, pp. 136-146, Feb. 1994.
[4] H. Wei, H. Sasaki, J. Kubakawa, and R. Yokoyama, "An interior point nonlinear programming for optimal power flow problems with a novel data structure," IEEE Trans. Power Syst., vol. 13, pp. 870-877, Aug. 1998.
[5] G. L. Torres and V. H. Quintana, "An interior-point method for nonlinear optimal power flow using voltage rectangular coordinates," IEEE Trans. Power Syst., vol. 13, pp. 1211-1218, Nov. 1998.
[6] E. D. Castronuovo, J. M. Campagnolo, and R. Salgado, "Optimal power flow solutions via interior point method with high-performance computation techniques," in Proc. 13th PSCC, Trondheim, Norway, June 28-July 2, 1999, pp. 1207-1213.
[7] R. Fletcher, Practical Methods of Optimization. New York: Wiley, 1987.
[8] D. G. Luenberger, Nonlinear Programming, 2nd ed. Reading, MA: Addison-Wesley, 1984.
[9] K. C. Almeida, F. D. Galliana, and S. Soares, "A general parametric optimal power flow," IEEE Trans. Power Syst., vol. 9, pp. 540-547, Feb. 1994.
[10] N. J. Higham, "Computing a nearest symmetric positive semidefinite matrix," Lin. Alg. Appl., vol. 103, pp. 103-118, 1988.


[11] M. S. Bazaraa and C. M. Shetty, Nonlinear Programming: Theory and Algorithms. New York: Wiley, 1979.
[12] M. R. Irving and M. J. H. Sterling, "Economic dispatch of active power by quadratic programming using a sparse linear complementary algorithm," Elect. Power Energy Syst., vol. 7, pp. 2-6, Jan. 1985.
[13] R. A. Jabr and A. H. Coonick, "Homogeneous interior point method for constrained power scheduling," Proc. Inst. Elect. Eng., Gener. Transm. Distrib., pt. C, vol. 147, no. 4, pp. 239-244, 2000.
[14] J. J. Grainger and W. D. Stevenson, Power System Analysis. New York: McGraw-Hill, 1994.
[15] A. V. Fiacco and G. P. McCormick, Nonlinear Programming: Sequential Unconstrained Minimization Techniques. New York: Wiley, 1968.
[16] A. S. El-Bakry, R. A. Tapia, T. Tsuchiya, and Y. Zhang, "On the formulation and theory of the Newton interior point method for nonlinear programming," J. Optim. Theory Appl., vol. 89, pp. 507-541, June 1996.
[17] D. M. Gay, M. L. Overton, and M. H. Wright, "A primal-dual interior method for nonconvex nonlinear programming," Computing Sciences Research Center, Bell Laboratories, Murray Hill, NJ, Tech. Rep. 97-4-08, July 1997.
[18] R. J. Vanderbei and D. F. Shanno, "An interior-point algorithm for nonconvex nonlinear programming," Statistics and Operations Research, Princeton Univ., Princeton, NJ, Tech. Rep. SOR-97-21, 1997.
[19] MATLAB User's Guide. Natick, MA: The MathWorks, Inc., 1996.
[20] R. Fletcher and S. Leyffer, "Nonlinear programming without a penalty function," Univ. of Dundee, Dundee, U.K., Numer. Anal. Rep. NA/171, Sept. 22, 1997.
[21] R. M. Chamberlain, M. J. D. Powell, C. Lemarechal, and H. C. Pedersen, "The watchdog technique for forcing convergence in algorithms for constrained optimization," Math. Prog. Study 16, pp. 1-17, 1982.
[22] S. J. Wright, Primal-Dual Interior-Point Methods. Philadelphia, PA: SIAM, 1997.
[23] S. Mehrotra, "On the implementation of a primal-dual interior point method," SIAM J. Optim., vol. 2, pp. 575-601, 1992.

[24] I. J. Lustig, R. E. Marsten, and D. F. Shanno, "On implementing Mehrotra's predictor-corrector interior-point method for linear programming," SIAM J. Optim., vol. 2, no. 3, pp. 435-449, 1992.

Rabih A. Jabr received the B.E. degree in electrical engineering (with high distinction) from the American University of Beirut, Lebanon, in 1997, and the Ph.D. degree in electrical engineering from the Imperial College, London, U.K., in 2000. Currently, he is an Assistant Professor in the Department of Electrical, Computer, and Communication Engineering at Notre Dame University, Zouk Mikayel, Lebanon. His research interests are in operations research and power system optimization.

Alun H. Coonick received the M.Sc. degree from the University of Southampton, Southampton, U.K., in 1980, and the Ph.D. degree from the Imperial College, London, U.K., in 1991. Currently, he is a Lecturer in the Department of Electrical and Electronic Engineering, Imperial College, London, U.K. His research interests include power system stability, control using FACTS devices, and artificial intelligence.

Brian J. Cory lectured at Imperial College, London, U.K., from 1956 until his retirement in 1993. He is now a Visiting Professor and Senior Research Fellow. His main research interests are in all aspects of electrical energy supply. These include planning, pricing, operating modern systems, and coping with deregulation, privatization, and competition.
