
Convergence Acceleration for Some Rootfinding Methods

Weimin Han and Florian A. Potra, Department of Mathematics, University of Iowa, Iowa City, IA 52242
Dedicated to Professor Ulrich Kulisch on the occasion of his 60th birthday

Abstract -- Zusammenfassung

Convergence Acceleration for Some Rootfinding Methods. We present simple, efficient extrapolation formulas to accelerate the convergence of super-linearly convergent sequences. Applications are given for some rootfinding methods such as Newton's method and the secant method. Numerical examples are given showing the effectiveness of the extrapolation formulas.

Konvergenzbeschleunigung einiger Verfahren zur Nullstellenberechnung. Wir geben einfache und effektive Extrapolationsformeln zur Beschleunigung der Konvergenz überlinear konvergierender Folgen an. Diese Formeln werden angewandt auf Verfahren zur Nullstellenbestimmung wie zum Beispiel dem Newtonverfahren oder dem Sekantenverfahren. Numerische Beispiele zeigen die Effektivität der Extrapolationsformeln.

AMS Subject Classifications: 65B99, 65H05.
Key Words: Convergence acceleration, rootfinding, secant method, Newton's method.

1 Introduction
The acceleration of convergence has been an active field of research in numerical analysis. The most important results obtained before 1970 in this field are summarized in the excellent survey paper of Joyce [3] (see also [2]). A more recent survey is contained in the dissertation of Walz [8]. We note that the vast majority of the results obtained so far apply to rather slowly convergent sequences, and especially to sequences that admit logarithmic error expansions. There are relatively few results for faster convergent sequences. Of course, if a sequence has a fast convergence rate, one cannot expect acceleration techniques to bring too much improvement. However, even a modest improvement may be useful because, as we show in the present paper, it can lead to saving one iteration for some popular rootfinding methods, such as the secant method or Newton's method. The problem of accelerating sequences produced by iteration procedures of the form

    x_{n+1} = T(x_n), \quad n = 0, 1, \ldots    (1)

was studied by Meinardus [4] in the case

    0 < |T'(x^*)| < 1, \quad x^* = \lim_{n \to \infty} x_n,    (2)

and by Walz [8] in the case

    T'(x^*) = \cdots = T^{(p-1)}(x^*) = 0, \quad T^{(p)}(x^*) \neq 0.    (3)

The acceleration procedures proposed in the above-mentioned papers depend on an unknown quantity \lambda. If condition (2) is satisfied, then \lambda = T'(x^*), and eventually it can be properly approximated by using the available information contained in the sequence \{x_n\}. In case (3) with p \geq 2, no explicit formula for \lambda is known and, unfortunately, the approximation procedure for \lambda proposed in [9] does not seem to work adequately. This will be discussed at the end of this paper.

In the present paper, we consider the problem of accelerating sequences \{x_n\} that are super-linearly convergent to a root x^* and satisfy an error relation of the form

    x_n - x^* = c_n \prod_{j=1}^{p} (x_{n-j} - x^*),    (4)

or

    x_n - x^* = c_n (x_{n-1} - x^*)^p,    (5)

or

    x_n - x^* = c_n \prod_{j=1}^{p} (x_{n-j} - x^*)^{\alpha_j},    (6)

where \alpha_j \geq 0 for 1 \leq j \leq p, and \{c_n\} is a sequence of constants such that

    \lim_{n \to \infty} c_n = c \neq 0.    (7)

If condition (3) is satisfied, and T^{(p)} is continuous at x^*, then (5) and (7) are clearly satisfied with c = T^{(p)}(x^*)/p!. We say that a sequence \{\hat{x}_n\} accelerates \{x_n\} if

    \hat{x}_n - x^* = \gamma_n (x_n - x^*), \quad \gamma_n \to 0 \text{ as } n \to \infty.    (8)

Our acceleration schemes depend only on information that is available at the n-th iteration. For sequences satisfying (4), we define

    \hat{x}_n = x_n - \frac{(x_n - x_{n-1})^2}{x_n + x_{n-(p+1)} - 2x_{n-1}};    (9)

for sequences satisfying (5), we set

    \hat{x}_n = x_n - \frac{(x_{n-1} - x_n)^{p+1}}{(x_{n-2} - x_{n-1})^p};    (10)

and for sequences satisfying (6), we use

    \hat{x}_n = x_n - \frac{(x_{n-1} - x_n)^{\alpha_1 + 1}}{(x_{n-p-1} - x_{n-p})^{\alpha_p}} \prod_{j=2}^{p} (x_{n-j} - x_{n-j+1})^{\alpha_j - \alpha_{j-1}}.    (11)
We note that (9) reduces to Aitken's method (see [1] or [2]) in the particular case p = 1. In each case, we prove that (8) holds, i.e., the accelerated sequence converges (super-linearly) faster than the original sequence. We apply our schemes to the secant method, to Newton's method, as well as to a method of order 1.839 considered in [7] and [5].

In our analysis, we will use the notion of divided difference. A divided difference of a function f is symmetric with respect to its arguments. The first-order and second-order divided differences can be defined by

    f[x, y] = \begin{cases} \dfrac{f(y) - f(x)}{y - x}, & y \neq x, \\ f'(x), & y = x, \end{cases}
    \qquad
    f[x, y, z] = \begin{cases} \dfrac{f[y, z] - f[x, y]}{z - x}, & z \neq x, \\ \dfrac{1}{2} f''(x), & x = y = z. \end{cases}

Divided differences of higher order are defined similarly. We will need the following relation between divided differences and derivatives of C^k functions:

    f[x_0, x_1, \ldots, x_k] = \frac{1}{k!} f^{(k)}(\xi) \quad \text{for some } \xi \in [\min\{x_0, \ldots, x_k\}, \max\{x_0, \ldots, x_k\}].

Throughout the paper, we will use the convention that \prod_{j=l}^{k} a_j = 1 if l > k.
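For pairwise distinct nodes, the recursive definition above can be evaluated with a short table computation (our helper code, not from the paper):

```python
# Divided differences over pairwise distinct nodes, computed from the recursion
# f[x_i, ..., x_{i+m}] = (f[x_{i+1}, ..., x_{i+m}] - f[x_i, ..., x_{i+m-1}]) / (x_{i+m} - x_i).

def divided_difference(f, nodes):
    """Return f[x_0, ..., x_k] for a list of pairwise distinct nodes."""
    table = [f(x) for x in nodes]       # zeroth-order differences f[x_i]
    k = len(nodes) - 1
    for m in range(1, k + 1):           # build orders 1, ..., k
        table = [(table[i+1] - table[i]) / (nodes[i+m] - nodes[i])
                 for i in range(len(table) - 1)]
    return table[0]
```

For f(x) = x^3 the third-order divided difference over any four distinct nodes equals 1, in agreement with f[x_0, ..., x_k] = f^{(k)}(\xi)/k!.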

2 Acceleration of super-linearly convergent methods

Consider a sequence of iterates \{x_n\} produced by a super-linearly convergent method to approximate a solution x^* of a nonlinear scalar equation

    f(x) = 0.    (12)

First we consider the case when the error relation (4) holds, together with (7). From Theorem 3.2 of [6], it follows that the sequence \{x_n\} has the exact Q-order of convergence t^*, where t^* \in [1, 2) is the largest root of

    t^p - \sum_{j=0}^{p-1} t^j = 0.

Denote the iteration errors

    \varepsilon_{n-j} = x_{n-j} - x^*, \quad 0 \leq j \leq p,

so that (4) reads \varepsilon_n = c_n \prod_{j=1}^{p} \varepsilon_{n-j}. According to (9), we define

    \hat{x}_n = x_n - \frac{(x_n - x_{n-1})^2}{x_n + x_{n-(p+1)} - 2x_{n-1}}.    (13)

For its error we have, using (4),

    \hat{\varepsilon}_n = \hat{x}_n - x^*
      = \varepsilon_n - \frac{(\varepsilon_n - \varepsilon_{n-1})^2}{\varepsilon_n + \varepsilon_{n-(p+1)} - 2\varepsilon_{n-1}}
      = \frac{\varepsilon_n \varepsilon_{n-(p+1)} - \varepsilon_{n-1}^2}{\varepsilon_n + \varepsilon_{n-(p+1)} - 2\varepsilon_{n-1}}
      = \gamma_n \varepsilon_n,    (14)

where

    \gamma_n = \frac{c_n - c_{n-1}}{c_n \Bigl( 1 + c_n c_{n-1} \prod_{j=2}^{p} \varepsilon_{n-j}^2 - 2 c_{n-1} \prod_{j=2}^{p} \varepsilon_{n-j} \Bigr)} = O(|c_n - c_{n-1}|).    (15)

By (7), \gamma_n \to 0 as n \to \infty. Hence the iterates \{\hat{x}_n\} defined by (13) accelerate the convergence of \{x_n\}.

Next, we consider the case when the errors satisfy (5), together with (7). Following (10), let us define

    \hat{x}_n = x_n - \frac{(x_{n-1} - x_n)^{p+1}}{(x_{n-2} - x_{n-1})^p}    (16)

and show that \{\hat{x}_n\} accelerates the convergence of \{x_n\}. Equations (5) and (16) imply

    \hat{\varepsilon}_n = \varepsilon_n - \frac{(\varepsilon_{n-1} - \varepsilon_n)^{p+1}}{(\varepsilon_{n-2} - \varepsilon_{n-1})^p} = \delta_n \varepsilon_n,    (17)

where

    \delta_n = 1 - \frac{c_{n-1} (1 - c_n \varepsilon_{n-1}^{p-1})^{p+1}}{c_n (1 - c_{n-1} \varepsilon_{n-2}^{p-1})^p}.    (18)

By (7), we have

    |\delta_n| = O(|c_n - c_{n-1}|) \to 0 \quad \text{as } n \to \infty.    (19)

Finally, we consider the general case (6). Following (11), we define

    \hat{x}_n = x_n - \frac{(x_{n-1} - x_n)^{\alpha_1 + 1}}{(x_{n-p-1} - x_{n-p})^{\alpha_p}} \prod_{j=2}^{p} (x_{n-j} - x_{n-j+1})^{\alpha_j - \alpha_{j-1}}.    (20)

For an error analysis, we use the following two relations implied by (6):

    \varepsilon_n = c_n \prod_{j=1}^{p} \varepsilon_{n-j}^{\alpha_j}, \qquad \varepsilon_{n-1} = c_{n-1} \prod_{j=1}^{p} \varepsilon_{n-j-1}^{\alpha_j}.

We have

    \hat{\varepsilon}_n = \varepsilon_n - \frac{(\varepsilon_{n-1} - \varepsilon_n)^{\alpha_1 + 1}}{(\varepsilon_{n-p-1} - \varepsilon_{n-p})^{\alpha_p}} \prod_{j=2}^{p} (\varepsilon_{n-j} - \varepsilon_{n-j+1})^{\alpha_j - \alpha_{j-1}} = \delta_n \varepsilon_n,

where

    \delta_n = 1 - \frac{c_{n-1}}{c_n} \Bigl( 1 - \frac{\varepsilon_n}{\varepsilon_{n-1}} \Bigr)^{\alpha_1 + 1} \Bigl( 1 - \frac{\varepsilon_{n-p}}{\varepsilon_{n-p-1}} \Bigr)^{-\alpha_p} \prod_{j=2}^{p} \Bigl( 1 - \frac{\varepsilon_{n-j+1}}{\varepsilon_{n-j}} \Bigr)^{\alpha_j - \alpha_{j-1}} \to 0 \quad \text{as } n \to \infty.

Application to the secant method

The secant method is defined by the recursion formula

    x_n = x_{n-1} - f(x_{n-1}) \frac{x_{n-1} - x_{n-2}}{f(x_{n-1}) - f(x_{n-2})}, \quad n \geq 2.    (21)

When the initial guesses x_0 and x_1 are sufficiently close to a root x^* of f, the secant method converges. For the sake of completeness, we include a derivation of the error relation for the secant method:

    x_n - x^* = x_{n-1} - x^* - f[x_{n-2}, x_{n-1}]^{-1} f(x_{n-1})
      = f[x_{n-2}, x_{n-1}]^{-1} \{ f[x_{n-2}, x_{n-1}] (x_{n-1} - x^*) - (f(x_{n-1}) - f(x^*)) \}
      = f[x_{n-2}, x_{n-1}]^{-1} \{ f[x_{n-2}, x_{n-1}] - f[x_{n-1}, x^*] \} (x_{n-1} - x^*)
      = f[x_{n-2}, x_{n-1}]^{-1} f[x_{n-2}, x_{n-1}, x^*] (x_{n-2} - x^*)(x_{n-1} - x^*).    (22)

Hence, for the secant method, we have the error relation (4) with p = 2 and

    c_n = \frac{f[x_{n-2}, x_{n-1}, x^*]}{f[x_{n-2}, x_{n-1}]}.

Assuming that x^* is a simple root, we have

    c_n \to c = \frac{f''(x^*)}{2 f'(x^*)} \quad \text{as } n \to \infty.

Thus, when we apply the secant method to compute a simple root x^* with f''(x^*) \neq 0, we can use the extrapolation formula

    \hat{x}_n = x_n - \frac{(x_n - x_{n-1})^2}{x_n + x_{n-3} - 2x_{n-1}}    (23)

to obtain a more accurate approximation. If f''(x^*) = 0, then from (22) we have

    x_n - x^* = c_n (x_{n-1} - x^*)(x_{n-2} - x^*)^2,

where

    c_n = \frac{f[x_{n-2}, x_{n-1}, x^*, x^*]}{f[x_{n-2}, x_{n-1}]} + \frac{x_{n-1} - x^*}{x_{n-2} - x^*} \cdot \frac{f[x_{n-1}, x^*, x^*, x^*]}{f[x_{n-2}, x_{n-1}]} \to \frac{f^{(3)}(x^*)}{6 f'(x^*)} \quad \text{as } n \to \infty.

So, when f''(x^*) = 0 and f^{(3)}(x^*) \neq 0, we can use the extrapolation formula (20) with p = 2, \alpha_1 = 1 and \alpha_2 = 2:

    \hat{x}_n = x_n - \frac{(x_{n-1} - x_n)^2 (x_{n-2} - x_{n-1})}{(x_{n-3} - x_{n-2})^2}.    (24)

In general, if f''(x^*) = \cdots = f^{(l)}(x^*) = 0 and f^{(l+1)}(x^*) \neq 0 for some l \geq 2, then, according to (20), the extrapolation formula is

    \hat{x}_n = x_n - \frac{(x_{n-1} - x_n)^2 (x_{n-2} - x_{n-1})^{l-1}}{(x_{n-3} - x_{n-2})^l}.    (25)

Application to a method of order 1.839

The following method was proposed in [7] for solving scalar nonlinear equations and generalized in [5] to nonlinear operator equations:

    x_n = x_{n-1} - \frac{f(x_{n-1})}{f[x_{n-1}, x_{n-2}] + f[x_{n-1}, x_{n-3}] - f[x_{n-3}, x_{n-2}]}.    (26)

Compared to the secant method, which requires the same amount of work (one function evaluation per iteration step), the algorithm (26) has a higher convergence order (1.839 vs. 1.618). An error relation of the form (4) for (26) can be derived as follows:

    x_n - x^* = x_{n-1} - x^* - \frac{f(x_{n-1})}{f[x_{n-1}, x_{n-2}] + f[x_{n-1}, x_{n-3}] - f[x_{n-3}, x_{n-2}]}
      = (x_{n-1} - x^*) \, \frac{f[x_{n-1}, x_{n-2}] + f[x_{n-1}, x_{n-3}] - f[x_{n-3}, x_{n-2}] - f[x_{n-1}, x^*]}{f[x_{n-1}, x_{n-2}] + f[x_{n-1}, x_{n-3}] - f[x_{n-3}, x_{n-2}]}
      = c_n (x_{n-1} - x^*)(x_{n-2} - x^*)(x_{n-3} - x^*),    (27)

where

    c_n = \frac{- f[x_{n-3}, x_{n-2}, x_{n-1}, x^*] + \dfrac{x_{n-1} - x^*}{(x_{n-2} - x^*)(x_{n-3} - x^*)} \, f[x_{n-3}, x_{n-2}, x_{n-1}]}{f[x_{n-2}, x_{n-1}] + f[x_{n-3}, x_{n-1}] - f[x_{n-3}, x_{n-2}]},    (28)

so that

    c_n \to c = - \frac{f^{(3)}(x^*)}{6 f'(x^*)} \quad \text{as } n \to \infty.
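A runnable sketch of iteration (26) (our code; the stagnation guard is an implementation detail we added):

```python
# Method (26): each step reuses the three latest iterates and costs one
# new evaluation of f.

def solve_1839(f, x0, x1, x2, steps=12):
    xs = [x0, x1, x2]
    fs = [f(x0), f(x1), f(x2)]
    for _ in range(steps):
        xm3, xm2, xm1 = xs[-3], xs[-2], xs[-1]   # x_{n-3}, x_{n-2}, x_{n-1}
        fm3, fm2, fm1 = fs[-3], fs[-2], fs[-1]
        d12 = (fm1 - fm2) / (xm1 - xm2)          # f[x_{n-1}, x_{n-2}]
        d13 = (fm1 - fm3) / (xm1 - xm3)          # f[x_{n-1}, x_{n-3}]
        d32 = (fm3 - fm2) / (xm3 - xm2)          # f[x_{n-3}, x_{n-2}]
        xn = xm1 - fm1 / (d12 + d13 - d32)
        if xn in (xm1, xm2, xm3):                # stagnation near the root: stop
            break
        xs.append(xn)
        fs.append(f(xn))
    return xs
```

With reasonable starting guesses, the iterates settle to full double precision in a handful of steps; like the secant method, no derivatives are needed.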

Therefore, if f^{(3)}(x^*) \neq 0, we have the error relation (4) with p = 3, and the extrapolation formula (9) becomes

    \hat{x}_n = x_n - \frac{(x_n - x_{n-1})^2}{x_n + x_{n-4} - 2x_{n-1}}.    (29)

If f^{(3)}(x^*) = \cdots = f^{(l)}(x^*) = 0 and f^{(l+1)}(x^*) \neq 0 for some l \geq 3, then, as in the discussion for the secant method, the error relation takes the form (6) with p = 3, \alpha_1 = \alpha_2 = 1, \alpha_3 = l - 1, and we can use the following extrapolation formula given by (20):

    \hat{x}_n = x_n - \frac{(x_{n-1} - x_n)^2 (x_{n-3} - x_{n-2})^{l-2}}{(x_{n-4} - x_{n-3})^{l-1}}.    (30)

Application to Newton's method

Newton's method generates a sequence \{x_n\} by the recursive formula

    x_n = x_{n-1} - \frac{f(x_{n-1})}{f'(x_{n-1})}, \quad n \geq 1.    (31)

If the initial guess x_0 is sufficiently close to a root x^*, then x_n converges to x^* quadratically. We have

    x_n - x^* = x_{n-1} - x^* - \frac{f(x_{n-1}) - f(x^*)}{f[x_{n-1}, x_{n-1}]}
      = \frac{f[x_{n-1}, x_{n-1}] - f[x_{n-1}, x^*]}{f[x_{n-1}, x_{n-1}]} (x_{n-1} - x^*)
      = \frac{f[x_{n-1}, x_{n-1}, x^*]}{f[x_{n-1}, x_{n-1}]} (x_{n-1} - x^*)^2,

so that

    \varepsilon_n = c_n \varepsilon_{n-1}^2,    (32)

where

    c_n = \frac{f[x_{n-1}, x_{n-1}, x^*]}{f[x_{n-1}, x_{n-1}]}.    (33)

If the root x^* is simple, then

    c_n \to c = \frac{f''(x^*)}{2 f'(x^*)}.

If f''(x^*) \neq 0, then p = 2, and the extrapolation formula (10) becomes

    \hat{x}_n = x_n - \frac{(x_{n-1} - x_n)^3}{(x_{n-2} - x_{n-1})^2}.    (34)

When f''(x^*) = 0 and f^{(3)}(x^*) \neq 0, we can write the error relation in the form

    x_n - x^* = \frac{f[x_{n-1}, x_{n-1}, x^*]}{f[x_{n-1}, x_{n-1}]} (x_{n-1} - x^*)^2
      = \frac{(f[x_{n-1}, x_{n-1}, x^*] - f[x_{n-1}, x^*, x^*]) + (f[x_{n-1}, x^*, x^*] - f[x^*, x^*, x^*])}{f[x_{n-1}, x_{n-1}]} (x_{n-1} - x^*)^2
      = c_n (x_{n-1} - x^*)^3,    (35)

using f[x^*, x^*, x^*] = \frac{1}{2} f''(x^*) = 0, where

    c_n = \frac{f[x_{n-1}, x_{n-1}, x^*, x^*] + f[x_{n-1}, x^*, x^*, x^*]}{f[x_{n-1}, x_{n-1}]} \to c = \frac{f^{(3)}(x^*)}{3 f'(x^*)} \quad \text{as } n \to \infty.

In general, if f''(x^*) = \cdots = f^{(l)}(x^*) = 0 and f^{(l+1)}(x^*) \neq 0 for some l \geq 2, then the error relation (5) holds with p = l + 1, and the extrapolation formula (10) becomes

    \hat{x}_n = x_n - \frac{(x_{n-1} - x_n)^{l+2}}{(x_{n-2} - x_{n-1})^{l+1}}.    (36)
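As an illustration (our sketch), take f(x) = x + x^3, for which x^* = 0, f''(x^*) = 0 and f^{(3)}(x^*) = 6 \neq 0, so that formula (36) applies with l = 2:

```python
# Newton's method on f(x) = x + x**3 (root x* = 0, f''(0) = 0),
# accelerated by formula (36) with l = 2.

def newton(f, df, x0, steps):
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] - f(xs[-1]) / df(xs[-1]))
    return xs

xs = newton(lambda x: x + x**3, lambda x: 1.0 + 3.0 * x * x, 0.5, 3)
# formula (36) with l = 2: exponents l + 2 = 4 and l + 1 = 3
xhat = xs[-1] - (xs[-2] - xs[-1])**4 / (xs[-3] - xs[-2])**3
```

Here the cubic convergence of the degenerate Newton iteration is visible in xs, and xhat reduces the error of the last iterate further; the quadratic-case formula (34) would be the wrong choice for this f.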

3 Numerical examples
We present some experiments with the extrapolation formulas described in the previous section.

Example 3.1 In the first example, we solve the equation

    e^x - \frac{1}{0.1 + x^2} = 0,

which has the root x^* = 0.6497506818006853. For the secant method, we choose the initial guesses

    x_0 = 1, \quad x_1 = 0.9.

Let us use \hat{x}_n^{(1)} to denote the accelerated iterate computed from (23), and \hat{x}_n^{(2)} the accelerated iterate computed from (20) with p = 2, \alpha_1 = \alpha_2 = 1. The following table contains the errors of the secant method iterates, x_n - x^*, and those of the extrapolated ones, \hat{x}_n^{(1)} - x^* and \hat{x}_n^{(2)} - x^*.

[Table of the errors x_n - x^*, \hat{x}_n^{(1)} - x^*, \hat{x}_n^{(2)} - x^*; the individual entries are not legible in this copy.]

For the method (26), we take the initial guesses

    x_0 = 1.1, \quad x_1 = 1, \quad x_2 = 0.9.

We use \hat{x}_n^{(1)} to denote the accelerated iterate computed from (29), and \hat{x}_n^{(2)} the accelerated iterate computed from (20) with p = 3, \alpha_1 = \alpha_2 = \alpha_3 = 1. Then we have the following numerical results.

[Table of the errors x_n - x^*, \hat{x}_n^{(1)} - x^*, \hat{x}_n^{(2)} - x^*; the individual entries are not legible in this copy.]

For Newton's method, we take x_0 = 1.

[Table of the errors x_n - x^* and \hat{x}_n - x^*; the individual entries are not legible in this copy.]

One must be cautious in using the extrapolation formulas (23) and (29). When the iterates are very close to a root of the function, the denominators in (23) and (29) are close to zero, and loss-of-significance error will dominate the iteration error. Therefore, the numerical results of the extrapolation deteriorate when the iterates are very close to the root.
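The secant run of Example 3.1 can be re-created with a few lines (our sketch; only the iteration and formula (23) are from the paper):

```python
import math

# Example 3.1 re-created: secant iterates for f(x) = exp(x) - 1/(0.1 + x**2),
# with the last iterate improved by the extrapolation formula (23).

def f(x):
    return math.exp(x) - 1.0 / (0.1 + x * x)

x = [1.0, 0.9]                      # initial guesses from the paper
for _ in range(5):                  # produces x_2, ..., x_6
    a, b = x[-2], x[-1]
    x.append(b - f(b) * (b - a) / (f(b) - f(a)))

n = len(x) - 1
xhat = x[n] - (x[n] - x[n-1])**2 / (x[n] + x[n-3] - 2.0 * x[n-1])   # formula (23)
```

In double precision, the extrapolated value xhat is noticeably closer to the root than the last secant iterate, at no extra cost in f-evaluations.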

Example 3.2 We examine an example where f''(x^*) = 0. The equation

    f(x) = x - 1 + 0.1 (x - 1)^3 = 0

has the root x^* = 1. Let us use the secant method to compute the root. Note that f''(x^*) = 0 and f^{(3)}(x^*) \neq 0, so the correct extrapolation formula is (24). We denote by x_n the iterate given by the secant method, by \hat{x}_n^{(1)} the extrapolated iterate computed from the "wrong" formula (23), and by \hat{x}_n^{(2)} the extrapolated iterate computed from (24). With the initial guesses x_0 = 0, x_1 = 0.1, we have the following numerical table.

[Table of the errors x_n - x^*, \hat{x}_n^{(1)} - x^*, \hat{x}_n^{(2)} - x^*; the individual entries are not legible in this copy.]

We notice that the "wrong" extrapolation formula produces worse results than the secant iterates. We also notice that the error in the first extrapolated iterate \hat{x}_3^{(2)} from the correct formula (24) is large, because the values x_0 and x_1 are far away from the root.
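Example 3.2 can be re-run in the same way (our sketch); it confirms that the "wrong" formula (23) degrades the secant iterate while (24) improves it:

```python
# Example 3.2 re-created: secant iterates for f(x) = x - 1 + 0.1*(x - 1)**3,
# where f''(x*) = 0, so (23) is the "wrong" formula and (24) the correct one.

def f(x):
    return x - 1.0 + 0.1 * (x - 1.0)**3

x = [0.0, 0.1]                      # initial guesses from the paper
for _ in range(4):                  # produces x_2, ..., x_5
    a, b = x[-2], x[-1]
    x.append(b - f(b) * (b - a) / (f(b) - f(a)))

n = len(x) - 1
wrong = x[n] - (x[n] - x[n-1])**2 / (x[n] + x[n-3] - 2.0 * x[n-1])              # (23)
right = x[n] - (x[n-1] - x[n])**2 * (x[n-2] - x[n-1]) / (x[n-3] - x[n-2])**2    # (24)
```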

Example 3.3 We end this paper by considering an example presented in [9] as an illustration of the acceleration method proposed there. As we mentioned in the introduction, that acceleration method depends on an unknown quantity \lambda. If

    f(x^*) = 0, \quad f'(x^*) \neq 0 \quad \text{and} \quad f''(x^*) \neq 0,    (37)

then the Newton iteration mapping

    T(x) = x - \frac{f(x)}{f'(x)}

satisfies (3) with p = 2. Therefore, in this case, the acceleration formula proposed in [9] reduces to

    y_n^{(1)} = \frac{x_{n+1} - \lambda^{2^n} x_n}{1 - \lambda^{2^n}}.

Because \lambda is unknown, it is suggested in [9] that it should be replaced by

    \tilde{\lambda}_n = \frac{x_{n+2} - x_{n+1}}{x_{n+1} - x_n}.    (38)

In this case, y_n^{(1)} would depend on x_{n+2}, so we will denote

    \tilde{x}_{n+2} = \tilde{y}_n^{(1)} = \frac{x_{n+2} - \tilde{\lambda}_n x_{n+1}}{1 - \tilde{\lambda}_n}.    (39)

The example presented in [9] consists in applying Newton's method for finding x^* = \sqrt{3}, starting from x_0 = 3. Here, f(x) = x^2 - 3, so that (37) is clearly satisfied, and Newton's method becomes

    x_n = \frac{1}{2} \Bigl( x_{n-1} + \frac{3}{x_{n-1}} \Bigr), \quad n \geq 1.

The following table contains the results for \{\hat{x}_n\} given by our acceleration method (34) and for \{\tilde{x}_n\} given by (38) and (39).

[Table of the errors x_n - x^*, \tilde{x}_n - x^*, \hat{x}_n - x^*; the individual entries are not legible in this copy.]

While (34) works reasonably well, it appears as if (38)-(39) were a "deceleration" method. The explanation is that by substituting (38) in (39), we obtain, with n instead of n + 2,

    \tilde{x}_n = x_n - \frac{(x_n - x_{n-1})^2}{x_n + x_{n-2} - 2x_{n-1}},    (40)

which is exactly Aitken's method. But it is known that this method cannot be applied to super-linearly convergent sequences. In fact, proceeding as in (14), (15), we deduce that

    \tilde{x}_n - x^* = \tilde{\gamma}_n (x_n - x^*), \qquad \tilde{\gamma}_n = \frac{\tilde{c}_n - \tilde{c}_{n-1}}{\tilde{c}_n (1 + \tilde{c}_n \tilde{c}_{n-1} - 2 \tilde{c}_{n-1})}, \qquad \tilde{c}_n = \frac{x_n - x^*}{x_{n-1} - x^*}.

For Newton's method we have \tilde{c}_n = c_n \varepsilon_{n-1}, where \varepsilon_n = x_n - x^* and c_n is given by (33). Thus,

    \tilde{\gamma}_n = \frac{c_n \varepsilon_{n-1} - c_{n-1} \varepsilon_{n-2}}{c_n \varepsilon_{n-1} (1 + c_n c_{n-1} \varepsilon_{n-1} \varepsilon_{n-2} - 2 c_{n-1} \varepsilon_{n-2})}
      = \frac{c_n c_{n-1} \varepsilon_{n-2}^2 - c_{n-1} \varepsilon_{n-2}}{c_n c_{n-1} \varepsilon_{n-2}^2 (1 + c_n c_{n-1} \varepsilon_{n-1} \varepsilon_{n-2} - 2 c_{n-1} \varepsilon_{n-2})}
      = \frac{1 - \dfrac{1}{c_n \varepsilon_{n-2}}}{1 + c_n c_{n-1} \varepsilon_{n-1} \varepsilon_{n-2} - 2 c_{n-1} \varepsilon_{n-2}},

so that, because \lim_{n \to \infty} c_n = c < \infty,

    \lim_{n \to \infty} |\tilde{\gamma}_n| = \infty.

Therefore, \{\tilde{x}_n\} is "super-linearly slower" than the original Newton sequence.
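The comparison of this example can be re-created as follows (our sketch):

```python
import math

# Example 3.3 re-created: Newton iterates for f(x) = x**2 - 3 started at
# x_0 = 3, with the last iterate processed by formula (34) and by the
# Aitken-type formula (40).

x = [3.0]
for _ in range(4):
    x.append(0.5 * (x[-1] + 3.0 / x[-1]))

n = len(x) - 1
x34 = x[n] - (x[n-1] - x[n])**3 / (x[n-2] - x[n-1])**2               # formula (34)
x40 = x[n] - (x[n] - x[n-1])**2 / (x[n] + x[n-2] - 2.0 * x[n-1])     # formula (40)
```

In double precision, x34 improves the last Newton iterate by several orders of magnitude, whereas the Aitken-type value x40 comes out markedly worse than the Newton iterate itself, illustrating the "deceleration" discussed above.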

References
[1] Atkinson, K. E.: An Introduction to Numerical Analysis, 2nd ed. New York: John Wiley & Sons 1988.
[2] Brezinski, C.: Accélération de la Convergence en Analyse Numérique. Lecture Notes in Mathematics, No. 584. Berlin: Springer 1977.
[3] Joyce, D. C.: Survey of extrapolation processes in numerical analysis. SIAM Review 13, 435-488 (1971).

[4] Meinardus, G.: Über das asymptotische Verhalten von Iterationsfolgen. Z. Angew. Math. Mech. 63, 70-72 (1983).
[5] Potra, F. A.: On an iterative algorithm of order 1.839 for solving nonlinear operator equations. Numer. Funct. Anal. Optimiz. 7, 75-106 (1984-85).
[6] Potra, F. A.: On Q-order and R-order of convergence. J. Optimization Theory and Applications 63, 415-431 (1989).
[7] Traub, J.: Iterative Methods for the Solution of Equations. Englewood Cliffs, New Jersey: Prentice-Hall 1964.
[8] Walz, G.: Approximation von Funktionen durch asymptotische Entwicklungen und Eliminationsprozeduren. Dissertation, Universität Mannheim, 1987.
[9] Walz, G.: Asymptotic expansion and acceleration of convergence for higher order iteration process. Numer. Math. 59, 529-540 (1991).