
ES 84 Numerical Methods

Stephen H. Haim
Computer Engineering Dept./EECE
Gaussian Elimination,
LU Decomposition &
Gauss-Seidel
Gaussian Elimination
Naïve Gaussian Elimination
One of the most popular techniques for solving simultaneous linear
equations of the form [A][X] = [C].
It consists of two steps:
1. Forward Elimination of Unknowns
2. Back Substitution
Forward Elimination
The goal of Forward Elimination is to transform the coefficient
matrix into an Upper Triangular Matrix. For example:

[ 25    5    1 ]        [ 25    5      1   ]
[ 64    8    1 ]  --->  [  0   -4.8  -1.56 ]
[ 144  12    1 ]        [  0    0     0.7  ]
Forward Elimination
Linear Equations
A set of n equations and n unknowns:

a_11 x_1 + a_12 x_2 + a_13 x_3 + ... + a_1n x_n = b_1
a_21 x_1 + a_22 x_2 + a_23 x_3 + ... + a_2n x_n = b_2
  .                                               .
  .                                               .
a_n1 x_1 + a_n2 x_2 + a_n3 x_3 + ... + a_nn x_n = b_n
Forward Elimination
Transform to an Upper Triangular Matrix
Step 1: Eliminate x_1 in the 2nd equation using equation 1 as the
pivot equation. Multiply equation 1 by a_21/a_11:

(Eqn 1) × (a_21/a_11)

which will yield

a_21 x_1 + (a_21/a_11) a_12 x_2 + ... + (a_21/a_11) a_1n x_n = (a_21/a_11) b_1
Forward Elimination
Zeroing out the coefficient of x_1 in the 2nd equation:
subtract this equation from the 2nd equation, which gives

(a_22 - (a_21/a_11) a_12) x_2 + ... + (a_2n - (a_21/a_11) a_1n) x_n = b_2 - (a_21/a_11) b_1

or

a'_22 x_2 + ... + a'_2n x_n = b'_2

where

a'_22 = a_22 - (a_21/a_11) a_12
  ...
a'_2n = a_2n - (a_21/a_11) a_1n
b'_2  = b_2  - (a_21/a_11) b_1
Forward Elimination
Repeat this procedure for the remaining equations to reduce the set
of equations to

a_11 x_1 + a_12 x_2 + a_13 x_3 + ... + a_1n x_n = b_1
          a'_22 x_2 + a'_23 x_3 + ... + a'_2n x_n = b'_2
          a'_32 x_2 + a'_33 x_3 + ... + a'_3n x_n = b'_3
            .           .                           .
          a'_n2 x_2 + a'_n3 x_3 + ... + a'_nn x_n = b'_n
Forward Elimination
Step 2: Eliminate x_2 in the 3rd equation. This is equivalent to
eliminating x_1 in the 2nd equation, now using equation 2 as the
pivot equation:

(Eqn 3) - (a'_32 / a'_22) × (Eqn 2)
Forward Elimination
This procedure is repeated for the remaining equations to reduce the
set of equations to

a_11 x_1 + a_12 x_2 + a_13 x_3 + ... + a_1n x_n = b_1
          a'_22 x_2 + a'_23 x_3 + ... + a'_2n x_n = b'_2
                      a''_33 x_3 + ... + a''_3n x_n = b''_3
                        .                             .
                      a''_n3 x_3 + ... + a''_nn x_n = b''_n
Forward Elimination
Continue this procedure by using the third equation as the pivot
equation, and so on. At the end of (n-1) Forward Elimination steps,
the system of equations will look like:

a_11 x_1 + a_12 x_2 + a_13 x_3 + ... + a_1n x_n = b_1
          a'_22 x_2 + a'_23 x_3 + ... + a'_2n x_n = b'_2
                      a''_33 x_3 + ... + a''_3n x_n = b''_3
                        .                             .
                             a_nn^(n-1) x_n = b_n^(n-1)
Forward Elimination
At the end of the Forward Elimination steps:

[ a_11  a_12   a_13   ...  a_1n       ] [x_1]   [ b_1       ]
[  0    a'_22  a'_23  ...  a'_2n      ] [x_2]   [ b'_2      ]
[  0     0     a''_33 ...  a''_3n     ] [x_3] = [ b''_3     ]
[  .     .      .            .        ] [ . ]   [   .       ]
[  0     0      0     ...  a_nn^(n-1) ] [x_n]   [ b_n^(n-1) ]
Back Substitution
The goal of Back Substitution is to solve each of the equations
using the upper triangular matrix.

Example of a system of 3 equations:

[ a_11  a_12  a_13 ] [x_1]   [b_1]
[  0    a_22  a_23 ] [x_2] = [b_2]
[  0     0    a_33 ] [x_3]   [b_3]
Back Substitution
Start with the last equation because it has only one unknown:

x_n = b_n^(n-1) / a_nn^(n-1)

Then solve the second-from-last, (n-1)th, equation using the value
of x_n solved for previously. This gives x_(n-1).
Back Substitution
Representing Back Substitution for all equations by formula:

x_i = ( b_i^(i-1) - Σ_{j=i+1}^{n} a_ij^(i-1) x_j ) / a_ii^(i-1)
      for i = n-1, n-2, ..., 1

and

x_n = b_n^(n-1) / a_nn^(n-1)
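The two steps above can be sketched in Python. This is a minimal naive version (no pivoting); the function and variable names are my own:

```python
def naive_gauss(A, b):
    """Solve [A][X] = [C] by naive Gaussian elimination (no pivoting)."""
    n = len(b)
    A = [row[:] for row in A]   # work on copies so the inputs survive
    b = b[:]

    # Forward elimination: zero out the entries below each pivot a_kk.
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]        # multiplier a_ik / a_kk
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]

    # Back substitution: solve from the last equation upward.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x
```

Applied to the rocket-velocity system used later in these slides, it returns approximately [0.2905, 19.690, 1.0857] in full floating-point precision; the hand computation in the slides rounds intermediate values to the digits displayed.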
Example: Rocket Velocity
The upward velocity of a rocket is given at three different times:

Time, t (s)    Velocity, v (m/s)
    5              106.8
    8              177.2
   12              279.2

The velocity data is approximated by a polynomial as:

v(t) = a_1 t^2 + a_2 t + a_3,   5 ≤ t ≤ 12

Find: the velocity at t = 6, 7.5, 9, and 11 seconds.
Example: Rocket Velocity
Assume v(t) = a_1 t^2 + a_2 t + a_3,  5 ≤ t ≤ 12.
This results in a matrix template of the form:

[ t_1^2  t_1  1 ] [a_1]   [v_1]
[ t_2^2  t_2  1 ] [a_2] = [v_2]
[ t_3^2  t_3  1 ] [a_3]   [v_3]

Using data from the time/velocity table, the matrix becomes:

[ 25    5   1 ] [a_1]   [106.8]
[ 64    8   1 ] [a_2] = [177.2]
[ 144  12   1 ] [a_3]   [279.2]
Example: Rocket Velocity
Forward Elimination: Step 1

Row 2 - (64/25) Row 1

yields

[ 25    5      1   ] [a_1]   [106.8 ]
[  0   -4.8  -1.56 ] [a_2] = [-96.21]
[ 144  12      1   ] [a_3]   [279.2 ]
Example: Rocket Velocity
Forward Elimination: Step 1

Row 3 - (144/25) Row 1

yields

[ 25    5      1   ] [a_1]   [106.8 ]
[  0   -4.8  -1.56 ] [a_2] = [-96.21]
[  0  -16.8  -4.76 ] [a_3]   [-336.0]
Example: Rocket Velocity
Forward Elimination: Step 2

Row 3 - (16.8/4.8) Row 2

yields

[ 25    5      1   ] [a_1]   [106.8 ]
[  0   -4.8  -1.56 ] [a_2] = [-96.21]
[  0    0     0.7  ] [a_3]   [0.735 ]

This is now ready for Back Substitution.
Example: Rocket Velocity
Back Substitution: Solve for a_3 using the third equation

0.7 a_3 = 0.735
a_3 = 0.735/0.7 = 1.050
Example: Rocket Velocity
Back Substitution: Solve for a_2 using the second equation

-4.8 a_2 - 1.56 a_3 = -96.21
a_2 = (-96.21 + 1.56 a_3)/(-4.8) = (-96.21 + 1.56(1.050))/(-4.8) = 19.70
Example: Rocket Velocity
Back Substitution: Solve for a_1 using the first equation

25 a_1 + 5 a_2 + a_3 = 106.8
a_1 = (106.8 - 5 a_2 - a_3)/25 = (106.8 - 5(19.70) - 1.050)/25 = 0.2900
Example: Rocket Velocity
Solution:
The solution vector is

[a_1]   [0.2900]
[a_2] = [19.70 ]
[a_3]   [1.050 ]

The polynomial that passes through the three data points is then:

v(t) = a_1 t^2 + a_2 t + a_3
     = 0.2900 t^2 + 19.70 t + 1.050,   5 ≤ t ≤ 12
Example: Rocket Velocity
Solution:
Substitute each value of t to find the corresponding velocity:

v(6)   = 0.2900(6)^2   + 19.70(6)   + 1.050 = 129.69 m/s
v(7.5) = 0.2900(7.5)^2 + 19.70(7.5) + 1.050 = 165.1 m/s
v(9)   = 0.2900(9)^2   + 19.70(9)   + 1.050 = 201.8 m/s
v(11)  = 0.2900(11)^2  + 19.70(11)  + 1.050 = 252.8 m/s
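These substitutions are easy to verify in code, using the fitted coefficients from the solution vector above:

```python
def v(t):
    """Fitted velocity polynomial: v(t) = 0.2900 t^2 + 19.70 t + 1.050."""
    return 0.2900 * t**2 + 19.70 * t + 1.050

for t in (6, 7.5, 9, 11):
    print(t, round(v(t), 2))   # 129.69, 165.11, 201.84, 252.84 m/s
```

These agree with the velocities above to the number of digits displayed there.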
Pitfalls
Two potential pitfalls:
- Division by zero: may occur in the forward elimination steps.
  Consider the set of equations:

  10 x_2 - 7 x_3 = 7
  6 x_1 + 2.099 x_2 + 3 x_3 = 3.901
  5 x_1 - x_2 + 5 x_3 = 6

- Round-off error: Gaussian Elimination is prone to round-off errors.
Pitfalls: Example
Consider the system of equations:

[ 10  -7      0 ] [x_1]   [7    ]
[ -3   2.099  6 ] [x_2] = [3.901]
[  5  -1      5 ] [x_3]   [6    ]

Use five significant figures with chopping.

At the end of Forward Elimination:

[ 10  -7      0     ] [x_1]   [7    ]
[  0  -0.001  6     ] [x_2] = [6.001]
[  0   0      15005 ] [x_3]   [15004]
Pitfalls: Example
Back Substitution

[ 10  -7      0     ] [x_1]   [7    ]
[  0  -0.001  6     ] [x_2] = [6.001]
[  0   0      15005 ] [x_3]   [15004]

x_3 = 15004/15005 = 0.99993
x_2 = (6.001 - 6 x_3)/(-0.001) = -1.5
x_1 = (7 + 7 x_2 - 0 x_3)/10 = -0.3500
Pitfalls: Example
Compare the calculated values with the exact solution:

               [x_1]   [-0.35  ]                 [x_1]   [0 ]
[X]_calc     = [x_2] = [-1.5   ]     [X]_exact = [x_2] = [-1]
               [x_3]   [0.99993]                 [x_3]   [1 ]
Improvements
Increase the number of significant digits:
- decreases round-off error
- does not avoid division by zero
Gaussian Elimination with Partial Pivoting:
- avoids division by zero
- reduces round-off error
Partial Pivoting
Gaussian Elimination with partial pivoting applies row switching to
normal Gaussian Elimination.
How?
At the beginning of the kth step of forward elimination, find the
maximum of

|a_kk|, |a_(k+1),k|, ..., |a_nk|

If the maximum of these values is |a_pk| in the pth row, k ≤ p ≤ n,
then switch rows p and k.
Partial Pivoting
What does it mean?
Gaussian Elimination with Partial Pivoting ensures that each step of
Forward Elimination is performed with the pivoting element |a_kk|
having the largest absolute value.
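The row-switching rule drops into the elimination loop with only a few extra lines. A minimal sketch (names are my own; this uses full double precision rather than the five-digit chopping of the examples):

```python
def gauss_partial_pivot(A, b):
    """Gaussian elimination with partial pivoting, then back substitution."""
    n = len(b)
    A = [row[:] for row in A]
    b = b[:]
    for k in range(n - 1):
        # Partial pivoting: bring the largest |a_ik|, i >= k, into row k.
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        if p != k:
            A[k], A[p] = A[p], A[k]
            b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x
```

Run on the round-off example from the previous section, it recovers the exact solution [0, -1, 1] to machine precision.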

Partial Pivoting: Example
Consider the system of equations

10 x_1 - 7 x_2 = 7
-3 x_1 + 2.099 x_2 + 6 x_3 = 3.901
5 x_1 - x_2 + 5 x_3 = 6

In matrix form:

[ 10  -7      0 ] [x_1]   [7    ]
[ -3   2.099  6 ] [x_2] = [3.901]
[  5  -1      5 ] [x_3]   [6    ]

Solve using Gaussian Elimination with Partial Pivoting using five
significant digits with chopping.
Partial Pivoting: Example
Forward Elimination: Step 1
Examining the values of the first column: |10|, |-3|, and |5|, or
10, 3, and 5. The largest absolute value is 10, which means that, to
follow the rules of Partial Pivoting, we switch row 1 with row 1
(i.e., no switch is needed).

[ 10  -7      0 ] [x_1]   [7    ]
[ -3   2.099  6 ] [x_2] = [3.901]
[  5  -1      5 ] [x_3]   [6    ]

Performing Forward Elimination:

[ 10  -7      0 ] [x_1]   [7    ]
[  0  -0.001  6 ] [x_2] = [6.001]
[  0   2.5    5 ] [x_3]   [2.5  ]
Partial Pivoting: Example
Forward Elimination: Step 2
Examining the values of the second column below the pivot:
|-0.001| and |2.5|, or 0.001 and 2.5
The largest absolute value is 2.5, so row 2 is switched with row 3.

[ 10  -7      0 ] [x_1]   [7    ]
[  0  -0.001  6 ] [x_2] = [6.001]
[  0   2.5    5 ] [x_3]   [2.5  ]

Performing the row swap:

[ 10  -7      0 ] [x_1]   [7    ]
[  0   2.5    5 ] [x_2] = [2.5  ]
[  0  -0.001  6 ] [x_3]   [6.001]
Partial Pivoting: Example
Forward Elimination: Step 2
Performing the Forward Elimination results in:

[ 10  -7    0     ] [x_1]   [7    ]
[  0   2.5  5     ] [x_2] = [2.5  ]
[  0   0    6.002 ] [x_3]   [6.002]
Partial Pivoting: Example
Back Substitution
Solving the equations through back substitution:

x_3 = 6.002/6.002 = 1
x_2 = (2.5 - 5 x_3)/2.5 = -1
x_1 = (7 + 7 x_2 - 0 x_3)/10 = 0
Partial Pivoting: Example
Compare the calculated and exact solutions:

             [x_1]   [0 ]                 [x_1]   [0 ]
[X]_calc   = [x_2] = [-1]     [X]_exact = [x_2] = [-1]
             [x_3]   [1 ]                 [x_3]   [1 ]

The fact that they are exactly equal is a coincidence, but it does
illustrate the advantage of Partial Pivoting.
Summary
- Forward Elimination
- Back Substitution
- Pitfalls
- Improvements
- Partial Pivoting
LU Decomposition
LU Decomposition
LU Decomposition is another method to solve a set of simultaneous
linear equations.
Which is better, Gauss Elimination or LU Decomposition?
To answer this, a closer look at LU Decomposition is needed.
LU Decomposition
Method
For most non-singular matrices [A] on which one can conduct Naïve
Gauss Elimination forward elimination steps, one can always write

[A] = [L][U]

where
[L] = lower triangular matrix
[U] = upper triangular matrix
LU Decomposition
Proof
If solving a set of linear equations [A][X] = [C]:
If [A] = [L][U], then [L][U][X] = [C]
Multiply by [L]^-1, which gives [L]^-1[L][U][X] = [L]^-1[C]
Remember [L]^-1[L] = [I], which leads to [I][U][X] = [L]^-1[C]
Now, since [I][U] = [U], then [U][X] = [L]^-1[C]
Now, let [L]^-1[C] = [Z], which ends with

[L][Z] = [C]   (1)
and
[U][X] = [Z]   (2)
LU Decomposition
How can this be used?
Given [A][X] = [C]:
1. Decompose [A] into [L] and [U]
2. Solve [L][Z] = [C] for [Z]
3. Then solve [U][X] = [Z] for [X]
LU Decomposition
How is this better or faster than Gauss Elimination?
Let's look at computational time, with n = number of equations.
To decompose [A], time is proportional to n^3/3.
To solve [L][Z] = [C] and [U][X] = [Z], time is proportional to
n^2/2 each.
LU Decomposition
Therefore, total computational time for LU Decomposition is
proportional to

n^3/3 + 2(n^2/2)   or   n^3/3 + n^2

Gauss Elimination computation time is proportional to

n^3/3 + n^2/2

How is this better?
How is this better?
LU Decomposition
)
2
n
3
n
( m
2 3
+
) n ( m
3
n
2
3
+
5
10 33 . 8
What about a situation where the [C] vector changes?
In LU Decomposition, LU decomposition of [A] is independent
of the [C] vector, therefore it only needs to be done once.
Let m = the number of times the [C] vector changes
The computational times are proportional to
LU decomposition = Gauss Elimination=
Consider a 100 equation set with 50 right hand side vectors
LU Decomposition = Gauss Elimination =
7
10 69 . 1
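These counts can be reproduced with a few lines of code. This is only the proportionality model stated above (decomposition ∝ n³/3, each triangular solve ∝ n²/2, one full elimination ∝ n³/3 + n²/2 per right-hand side), not a benchmark:

```python
def lu_time(n, m):
    # One decomposition plus a forward and a back substitution per RHS.
    return n**3 / 3 + m * 2 * (n**2 / 2)

def gauss_time(n, m):
    # A full elimination and back substitution for every RHS.
    return m * (n**3 / 3 + n**2 / 2)

print(f"{lu_time(100, 50):.3e}")     # ~8.3e+05
print(f"{gauss_time(100, 50):.3e}")  # ~1.7e+07
```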
LU Decomposition
Another advantage: finding the inverse of a matrix.

LU Decomposition:   n^3/3 + n(n^2) = 4n^3/3
Gauss Elimination:  n(n^3/3 + n^2/2) = n^4/3 + n^3/2

For large values of n:  n^4/3 + n^3/2 >> 4n^3/3
LU Decomposition
Method: Decompose [A] into [L] and [U]

               [ 1     0     0 ] [ u_11  u_12  u_13 ]
[A] = [L][U] = [ l_21  1     0 ] [  0    u_22  u_23 ]
               [ l_31  l_32  1 ] [  0     0    u_33 ]

[U] is the same as the coefficient matrix at the end of the forward
elimination step.
[L] is obtained using the multipliers that were used in the forward
elimination process.
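The decomposition described above (Doolittle form: 1s on the diagonal of [L], [U] produced by forward elimination, multipliers stored in [L]) can be sketched as:

```python
def lu_decompose(A):
    """Doolittle LU decomposition without pivoting, as in the slides."""
    n = len(A)
    U = [row[:] for row in A]
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]   # forward-elimination multiplier
            L[i][k] = m             # ...saved as the l_ik entry of [L]
            for j in range(k, n):
                U[i][j] -= m * U[k][j]
    return L, U
```

For the 25/64/144 matrix worked below, it reproduces the multipliers 2.56, 5.76, and 3.5.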
LU Decomposition
Finding the [U] matrix
Using the Forward Elimination Procedure of Gauss Elimination:

[ 25    5    1 ]
[ 64    8    1 ]
[ 144  12    1 ]

Row 2 - (64/25) Row 1:

[ 25    5      1   ]
[  0   -4.8  -1.56 ]
[ 144  12      1   ]

Row 3 - (144/25) Row 1:

[ 25    5      1   ]
[  0   -4.8  -1.56 ]
[  0  -16.8  -4.76 ]
LU Decomposition
Finding the [U] matrix
Using the Forward Elimination Procedure of Gauss Elimination:

[ 25    5      1   ]
[  0   -4.8  -1.56 ]
[  0  -16.8  -4.76 ]

Row 3 - (16.8/4.8) Row 2:

[ 25    5      1   ]
[  0   -4.8  -1.56 ]
[  0    0     0.7  ]

      [ 25    5      1   ]
[U] = [  0   -4.8  -1.56 ]
      [  0    0     0.7  ]
LU Decomposition
Finding the [L] matrix

      [ 1     0     0 ]
[L] = [ l_21  1     0 ]
      [ l_31  l_32  1 ]

Using the multipliers used during the Forward Elimination Procedure:

From the first step of forward elimination:
l_21 = a_21/a_11 = 64/25 = 2.56
l_31 = a_31/a_11 = 144/25 = 5.76

From the second step of forward elimination:

[ 25    5      1   ]
[  0   -4.8  -1.56 ]     l_32 = a'_32/a'_22 = 16.8/4.8 = 3.5
[  0  -16.8  -4.76 ]
LU Decomposition

      [ 1     0    0 ]
[L] = [ 2.56  1    0 ]
      [ 5.76  3.5  1 ]

Does [L][U] = [A]?

         [ 1     0    0 ] [ 25    5      1   ]
[L][U] = [ 2.56  1    0 ] [  0   -4.8  -1.56 ]
         [ 5.76  3.5  1 ] [  0    0     0.7  ]
LU Decomposition
Example: Solving simultaneous linear equations using LU Decomposition
Solve the following set of linear equations using LU Decomposition:

[ 25    5   1 ] [a_1]   [106.8]
[ 64    8   1 ] [a_2] = [177.2]
[ 144  12   1 ] [a_3]   [279.2]

Using the procedure for finding the [L] and [U] matrices:

               [ 1     0    0 ] [ 25    5      1   ]
[A] = [L][U] = [ 2.56  1    0 ] [  0   -4.8  -1.56 ]
               [ 5.76  3.5  1 ] [  0    0     0.7  ]
LU Decomposition
Example: Solving simultaneous linear equations using LU Decomposition
Set [L][Z] = [C] and solve for [Z]:

[ 1     0    0 ] [z_1]   [106.8]
[ 2.56  1    0 ] [z_2] = [177.2]
[ 5.76  3.5  1 ] [z_3]   [279.2]

This gives the equations:

z_1 = 106.8
2.56 z_1 + z_2 = 177.2
5.76 z_1 + 3.5 z_2 + z_3 = 279.2
LU Decomposition
Example: Solving simultaneous linear equations using LU Decomposition
Complete the forward substitution to solve for [Z]:

z_1 = 106.8
z_2 = 177.2 - 2.56 z_1 = 177.2 - 2.56(106.8) = -96.21
z_3 = 279.2 - 5.76 z_1 - 3.5 z_2 = 279.2 - 5.76(106.8) - 3.5(-96.21) = 0.735

      [z_1]   [106.8 ]
[Z] = [z_2] = [-96.21]
      [z_3]   [0.735 ]
LU Decomposition
Example: Solving simultaneous linear equations using LU Decomposition
Set [U][X] = [Z] and solve for [X]:

[ 25    5      1   ] [a_1]   [106.8 ]
[  0   -4.8  -1.56 ] [a_2] = [-96.21]
[  0    0     0.7  ] [a_3]   [0.735 ]

The three equations become:

25 a_1 + 5 a_2 + a_3 = 106.8
-4.8 a_2 - 1.56 a_3 = -96.21
0.7 a_3 = 0.735
LU Decomposition
Example: Solving simultaneous linear equations using LU Decomposition
From the 3rd equation:

0.7 a_3 = 0.735
a_3 = 0.735/0.7 = 1.050

Substituting in a_3 and using the second equation:

-4.8 a_2 - 1.56 a_3 = -96.21
a_2 = (-96.21 + 1.56 a_3)/(-4.8) = (-96.21 + 1.56(1.050))/(-4.8) = 19.70
LU Decomposition
Example: Solving simultaneous linear equations using LU Decomposition
Substituting in a_3 and a_2 and using the first equation:

25 a_1 + 5 a_2 + a_3 = 106.8
a_1 = (106.8 - 5 a_2 - a_3)/25 = (106.8 - 5(19.70) - 1.050)/25 = 0.2900

Hence the solution vector is:

[a_1]   [0.2900]
[a_2] = [19.70 ]
[a_3]   [1.050 ]
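The two triangular solves in this example can be sketched as a single function ([L] is assumed unit lower triangular; the names are my own):

```python
def lu_solve(L, U, c):
    """Solve [L][Z] = [C] by forward substitution,
    then [U][X] = [Z] by back substitution."""
    n = len(c)
    z = [0.0] * n
    for i in range(n):                       # forward substitution
        z[i] = c[i] - sum(L[i][j] * z[j] for j in range(i))
    x = [0.0] * n
    for i in range(n - 1, -1, -1):           # back substitution
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (z[i] - s) / U[i][i]
    return x
```

With the [L] and [U] of this example it returns approximately [0.2905, 19.690, 1.0857]; the hand computation above rounds intermediate values to the digits displayed.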
LU Decomposition
Finding the inverse of a square matrix
Remember, the relative computational time comparison of LU
Decomposition and Gauss Elimination for finding an inverse is:

LU Decomposition: 4n^3/3      Gauss Elimination: n^4/3 + n^3/2

Review: the inverse [B] of a square matrix [A] is defined as

[A][B] = [I] = [B][A]
LU Decomposition
Finding the inverse of a square matrix
How can LU Decomposition be used to find the inverse?
Assume the first column of [B] to be [b_11  b_21  ...  b_n1]^T.
Using this and the definition of matrix multiplication:

First column of [B]:          Second column of [B]:

    [b_11]   [1]                  [b_12]   [0]
[A] [b_21] = [0]              [A] [b_22] = [1]
    [b_n1]   [0]                  [b_n2]   [0]

The remaining columns in [B] can be found in the same manner.
LU Decomposition
Example: Finding the inverse of a square matrix
Find the inverse of

      [ 25    5   1 ]
[A] = [ 64    8   1 ]
      [ 144  12   1 ]

Using the decomposition procedure, the [L] and [U] matrices are found to be

               [ 1     0    0 ] [ 25    5      1   ]
[A] = [L][U] = [ 2.56  1    0 ] [  0   -4.8  -1.56 ]
               [ 5.76  3.5  1 ] [  0    0     0.7  ]
LU Decomposition
Example: Finding the inverse of a square matrix
Solving for each column of [B] requires two steps:
1) solve [L][Z] = [C] for [Z], then 2) solve [U][X] = [Z] for [X]

Step 1: [L][Z] = [C]

[ 1     0    0 ] [z_1]   [1]
[ 2.56  1    0 ] [z_2] = [0]
[ 5.76  3.5  1 ] [z_3]   [0]

This generates the equations:

z_1 = 1
2.56 z_1 + z_2 = 0
5.76 z_1 + 3.5 z_2 + z_3 = 0
LU Decomposition
Example: Finding the inverse of a square matrix
Solving for [Z]:

z_1 = 1
z_2 = 0 - 2.56 z_1 = -2.56
z_3 = 0 - 5.76 z_1 - 3.5 z_2 = 0 - 5.76(1) - 3.5(-2.56) = 3.2

      [z_1]   [1    ]
[Z] = [z_2] = [-2.56]
      [z_3]   [3.2  ]
LU Decomposition
Example: Finding the inverse of a square matrix
Solving [U][X] = [Z] for [X]:

[ 25    5      1   ] [b_11]   [1    ]
[  0   -4.8  -1.56 ] [b_21] = [-2.56]
[  0    0     0.7  ] [b_31]   [3.2  ]

25 b_11 + 5 b_21 + b_31 = 1
-4.8 b_21 - 1.56 b_31 = -2.56
0.7 b_31 = 3.2
LU Decomposition
Example: Finding the inverse of a square matrix
Using Backward Substitution:

b_31 = 3.2/0.7 = 4.571
b_21 = (-2.56 + 1.56 b_31)/(-4.8) = (-2.56 + 1.56(4.571))/(-4.8) = -0.9524
b_11 = (1 - 5 b_21 - b_31)/25 = (1 - 5(-0.9524) - 4.571)/25 = 0.04762

So the first column of the inverse of [A] is:

[b_11]   [0.04762]
[b_21] = [-0.9524]
[b_31]   [4.571  ]
LU Decomposition
Example: Finding the inverse of a square matrix
Repeating for the second and third columns of the inverse:

Second column:                      Third column:

[ 25    5   1 ] [b_12]   [0]        [ 25    5   1 ] [b_13]   [0]
[ 64    8   1 ] [b_22] = [1]        [ 64    8   1 ] [b_23] = [0]
[ 144  12   1 ] [b_32]   [0]        [ 144  12   1 ] [b_33]   [1]

[b_12]   [-0.08333]                 [b_13]   [0.03571]
[b_22] = [1.417   ]                 [b_23] = [-0.4643]
[b_32]   [-5.000  ]                 [b_33]   [1.429  ]
LU Decomposition
Example: Finding the inverse of a square matrix
The inverse of [A] is:

           [ 0.04762  -0.08333   0.03571 ]
[A]^-1  =  [ -0.9524   1.417    -0.4643  ]
           [ 4.571    -5.000     1.429   ]

To check your work, do the following operation:
[A][A]^-1 = [I] = [A]^-1[A]
Gauss-Seidel Method
Gauss-Seidel Method
An iterative method.
Basic Procedure:
- Algebraically solve each linear equation for x_i
- Assume an initial guess solution array
- Solve for each x_i and repeat
- Use the absolute relative approximate error after each iteration to
  check if the error is within a pre-specified tolerance
Gauss-Seidel Method
Why?
The Gauss-Seidel Method allows the user to control round-off error.
Elimination methods such as Gaussian Elimination and LU Decomposition
are prone to round-off error.
Also: if the physics of the problem are understood, a close initial
guess can be made, decreasing the number of iterations needed.
Gauss-Seidel Method
Algorithm
A set of n equations and n unknowns:

a_11 x_1 + a_12 x_2 + a_13 x_3 + ... + a_1n x_n = b_1
a_21 x_1 + a_22 x_2 + a_23 x_3 + ... + a_2n x_n = b_2
  .                                               .
a_n1 x_1 + a_n2 x_2 + a_n3 x_3 + ... + a_nn x_n = b_n

If the diagonal elements are non-zero, rewrite each equation solving
for the corresponding unknown, e.g.:
first equation, solve for x_1; second equation, solve for x_2; etc.
Gauss-Seidel Method
Algorithm
Rewriting each equation:

From equation 1:    x_1 = (c_1 - a_12 x_2 - a_13 x_3 - ... - a_1n x_n) / a_11
From equation 2:    x_2 = (c_2 - a_21 x_1 - a_23 x_3 - ... - a_2n x_n) / a_22
        .
From equation n-1:  x_(n-1) = (c_(n-1) - a_(n-1),1 x_1 - ... - a_(n-1),n-2 x_(n-2) - a_(n-1),n x_n) / a_(n-1),(n-1)
From equation n:    x_n = (c_n - a_n1 x_1 - a_n2 x_2 - ... - a_n,(n-1) x_(n-1)) / a_nn
Gauss-Seidel Method
Algorithm
General form of each equation:

x_1 = (c_1 - Σ_{j=1, j≠1}^{n} a_1j x_j) / a_11
x_2 = (c_2 - Σ_{j=1, j≠2}^{n} a_2j x_j) / a_22
  .
x_(n-1) = (c_(n-1) - Σ_{j=1, j≠n-1}^{n} a_(n-1),j x_j) / a_(n-1),(n-1)
x_n = (c_n - Σ_{j=1, j≠n}^{n} a_nj x_j) / a_nn
Gauss-Seidel Method
Algorithm
General form for any row i:

x_i = (c_i - Σ_{j=1, j≠i}^{n} a_ij x_j) / a_ii,   i = 1, 2, ..., n

How or where can this equation be used?
Gauss-Seidel Method
Solve for the unknowns
Assume an initial guess for [X]:

[x_1]
[x_2]
[ .  ]
[x_(n-1)]
[x_n]

Use the rewritten equations to solve for each value of x_i.
Important: remember to use the most recent value of x_i, which means
applying the values calculated within the current iteration to the
calculations remaining in that iteration.
Gauss-Seidel Method
Calculate the Absolute Relative Approximate Error:

|ε_a|_i = | (x_i^new - x_i^old) / x_i^new | × 100

So when has the answer been found?
The iterations are stopped when the absolute relative approximate
error is less than a pre-specified tolerance for all unknowns.
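The whole procedure (rewrite, iterate with the freshest values, stop on the error test) fits in a short function. A sketch; the names and the small demo system below are my own:

```python
def gauss_seidel(A, c, x0, tol=1e-4, max_iter=100):
    """Gauss-Seidel iteration. tol is the stopping tolerance (in percent)
    on the largest absolute relative approximate error."""
    n = len(c)
    x = x0[:]
    for _ in range(max_iter):
        max_err = 0.0
        for i in range(n):
            old = x[i]
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (c[i] - s) / A[i][i]   # newest values used immediately
            if x[i] != 0.0:
                err = abs((x[i] - old) / x[i]) * 100.0
                max_err = max(max_err, err)
        if max_err < tol:
            break
    return x
```

For the made-up diagonally dominant system 3x_1 + x_2 = 5, x_1 + 2x_2 = 5 (exact solution x_1 = 1, x_2 = 2), gauss_seidel([[3, 1], [1, 2]], [5, 5], [0, 0]) converges to approximately [1, 2].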
Gauss-Seidel Method: Example 1
The upward velocity of a rocket is given at three different times:

Time, t (s)    Velocity, v (m/s)
    5              106.8
    8              177.2
   12              279.2

The velocity data is approximated by a polynomial as:

v(t) = a_1 t^2 + a_2 t + a_3,   5 ≤ t ≤ 12
Gauss-Seidel Method: Example 1
Using a matrix template of the form

[ t_1^2  t_1  1 ] [a_1]   [v_1]
[ t_2^2  t_2  1 ] [a_2] = [v_2]
[ t_3^2  t_3  1 ] [a_3]   [v_3]

the system of equations becomes

[ 25    5   1 ] [a_1]   [106.8]
[ 64    8   1 ] [a_2] = [177.2]
[ 144  12   1 ] [a_3]   [279.2]

Initial guess: assume an initial guess of

[a_1]   [1]
[a_2] = [2]
[a_3]   [5]
Gauss-Seidel Method: Example 1
Rewriting each equation:

a_1 = (106.8 - 5 a_2 - a_3) / 25
a_2 = (177.2 - 64 a_1 - a_3) / 8
a_3 = (279.2 - 144 a_1 - 12 a_2) / 1
Gauss-Seidel Method: Example 1
Applying the initial guess [a_1  a_2  a_3]^T = [1  2  5]^T and
solving for each a_i:

a_1 = (106.8 - 5(2) - (5)) / 25 = 3.6720
a_2 = (177.2 - 64(3.6720) - (5)) / 8 = -7.8510
a_3 = (279.2 - 144(3.6720) - 12(-7.8510)) / 1 = -155.36

When solving for a_2, how many of the initial guess values were used?
Gauss-Seidel Method: Example 1
Finding the absolute relative approximate error

|ε_a|_i = | (x_i^new - x_i^old) / x_i^new | × 100

at the end of the first iteration:

|ε_a|_1 = | (3.6720 - 1.0000) / 3.6720 | × 100 = 72.76%
|ε_a|_2 = | (-7.8510 - 2.0000) / -7.8510 | × 100 = 125.47%
|ε_a|_3 = | (-155.36 - 5.0000) / -155.36 | × 100 = 103.22%

[a_1]   [3.6720 ]
[a_2] = [-7.8510]
[a_3]   [-155.36]

The maximum absolute relative approximate error is 125.47%.
Gauss-Seidel Method: Example 1
Iteration #2
Using [a_1  a_2  a_3]^T = [3.6720  -7.8510  -155.36]^T from
iteration #1, the values of a_i are found:

a_1 = (106.8 - 5(-7.8510) - (-155.36)) / 25 = 12.056
a_2 = (177.2 - 64(12.056) - (-155.36)) / 8 = -54.882
a_3 = (279.2 - 144(12.056) - 12(-54.882)) / 1 = -798.34
Gauss-Seidel Method: Example 1
Finding the absolute relative approximate error:

|ε_a|_1 = | (12.056 - 3.6720) / 12.056 | × 100 = 69.542%
|ε_a|_2 = | (-54.882 - (-7.8510)) / -54.882 | × 100 = 85.695%
|ε_a|_3 = | (-798.34 - (-155.36)) / -798.34 | × 100 = 80.540%

At the end of the second iteration:

[a_1]   [12.056 ]
[a_2] = [-54.882]
[a_3]   [-798.34]

The maximum absolute relative approximate error is 85.695%.
Gauss-Seidel Method: Example 1
Repeating more iterations, the following values are obtained:

Iteration   a_1       |ε_a|_1 %   a_2        |ε_a|_2 %   a_3        |ε_a|_3 %
    1       3.6720    72.767      -7.8510    125.47      -155.36    103.22
    2       12.056    69.542      -54.882    85.695      -798.34    80.540
    3       47.182    74.448      -255.51    78.521      -3448.9    76.852
    4       193.33    75.595      -1093.4    76.632      -14440     76.116
    5       800.53    75.850      -4577.2    76.112      -60072     75.962
    6       3322.6    75.907      -19049     75.971      -249580    75.931

Notice that the relative errors are not decreasing at any significant
rate. Also, the solution is not converging to the true solution of

[a_1]   [0.29048]
[a_2] = [19.690 ]
[a_3]   [1.0858 ]
Gauss-Seidel Method: Pitfall
What went wrong?
Even though done correctly, the answer is not converging to the
correct answer. This example illustrates a pitfall of the
Gauss-Seidel method: not all systems of equations will converge.
Is there a fix?
One class of systems of equations always converges: those with a
diagonally dominant coefficient matrix.
Diagonally dominant: [A] in [A][X] = [C] is diagonally dominant if

|a_ii| ≥ Σ_{j=1, j≠i}^{n} |a_ij|   for all i, and
|a_ii| > Σ_{j=1, j≠i}^{n} |a_ij|   for at least one i
Gauss-Seidel Method: Pitfall
Diagonally dominant: the magnitude of the coefficient on the diagonal
must be at least equal to the sum of the magnitudes of the other
coefficients in that row, and strictly greater in at least one row.

Which coefficient matrix is diagonally dominant?

      [ 2    5.81  34 ]          [ 124  34  56  ]
[A] = [ 45   43    1  ]    [B] = [ 23   53  5   ]
      [ 123  16    1  ]          [ 96   34  129 ]

Most physical systems do result in simultaneous linear equations
that have diagonally dominant coefficient matrices.
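The two conditions above translate directly into a small checker (a sketch; the function name is my own):

```python
def is_diag_dominant(A):
    """True if |a_ii| >= sum of the other |a_ij| in every row,
    with strict inequality in at least one row."""
    strict = False
    for i, row in enumerate(A):
        off = sum(abs(v) for j, v in enumerate(row) if j != i)
        if abs(row[i]) < off:
            return False
        if abs(row[i]) > off:
            strict = True
    return strict
```

It can be applied to the two matrices in the question above; for instance, the first matrix fails already in its first row, since 2 < 5.81 + 34.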
Gauss-Seidel Method: Example 2
Given the system of equations

12 x_1 + 3 x_2 - 5 x_3 = 1
x_1 + 5 x_2 + 3 x_3 = 28
3 x_1 + 7 x_2 + 13 x_3 = 76

with an initial guess of

[x_1]   [1]
[x_2] = [0]
[x_3]   [1]

The coefficient matrix is

      [ 12  3  -5 ]
[A] = [ 1   5   3 ]
      [ 3   7  13 ]

Will the solution converge using the Gauss-Seidel method?
Gauss-Seidel Method: Example 2
Checking if the coefficient matrix is diagonally dominant:

|a_11| = |12| = 12 ≥ |a_12| + |a_13| = |3| + |-5| = 8
|a_22| = |5|  = 5  ≥ |a_21| + |a_23| = |1| + |3|  = 4
|a_33| = |13| = 13 ≥ |a_31| + |a_32| = |3| + |7|  = 10

The inequalities are all true, and at least one row is strictly
greater. Therefore, the solution should converge using the
Gauss-Seidel Method.
Gauss-Seidel Method: Example 2

[ 12  3  -5 ] [x_1]   [1 ]
[ 1   5   3 ] [x_2] = [28]
[ 3   7  13 ] [x_3]   [76]

Rewriting each equation:

x_1 = (1 - 3 x_2 + 5 x_3) / 12
x_2 = (28 - x_1 - 3 x_3) / 5
x_3 = (76 - 3 x_1 - 7 x_2) / 13

With an initial guess of [x_1  x_2  x_3]^T = [1  0  1]^T:

x_1 = (1 - 3(0) + 5(1)) / 12 = 0.50000
x_2 = (28 - (0.50000) - 3(1)) / 5 = 4.9000
x_3 = (76 - 3(0.50000) - 7(4.9000)) / 13 = 3.0923
Gauss-Seidel Method: Example 2
The absolute relative approximate errors:

|ε_a|_1 = | (0.50000 - 1.0000) / 0.50000 | × 100 = 100.00%
|ε_a|_2 = | (4.9000 - 0) / 4.9000 | × 100 = 100.00%
|ε_a|_3 = | (3.0923 - 1.0000) / 3.0923 | × 100 = 67.662%

The maximum absolute relative error after the first iteration is 100%.
Gauss-Seidel Method: Example 2
After iteration #1:

[x_1]   [0.50000]
[x_2] = [4.9000 ]
[x_3]   [3.0923 ]

Substituting the x values into the equations:

x_1 = (1 - 3(4.9000) + 5(3.0923)) / 12 = 0.14679
x_2 = (28 - (0.14679) - 3(3.0923)) / 5 = 3.7153
x_3 = (76 - 3(0.14679) - 7(3.7153)) / 13 = 3.8118

After iteration #2:

[x_1]   [0.14679]
[x_2] = [3.7153 ]
[x_3]   [3.8118 ]
Gauss-Seidel Method: Example 2
Iteration #2 absolute relative approximate errors:

|ε_a|_1 = | (0.14679 - 0.50000) / 0.14679 | × 100 = 240.62%
|ε_a|_2 = | (3.7153 - 4.9000) / 3.7153 | × 100 = 31.887%
|ε_a|_3 = | (3.8118 - 3.0923) / 3.8118 | × 100 = 18.876%

The maximum absolute relative error after the second iteration is
240.62%. This is much larger than the maximum absolute relative
error obtained in iteration #1. Is this a problem?
Gauss-Seidel Method: Example 2
Repeating more iterations, the following values are obtained:

Iteration   x_1       |ε_a|_1 %   x_2      |ε_a|_2 %   x_3      |ε_a|_3 %
    1       0.50000   100.00      4.9000   100.00      3.0923   67.662
    2       0.14679   240.62      3.7153   31.887      3.8118   18.876
    3       0.74275   80.23       3.1644   17.409      3.9708   4.0042
    4       0.94675   21.547      3.0281   4.5012      3.9971   0.65798
    5       0.99177   4.5394      3.0034   0.82240     4.0001   0.07499
    6       0.99919   0.74260     3.0001   0.11000     4.0001   0.00000

The solution obtained,

[x_1]   [0.99919]
[x_2] = [3.0001 ]
[x_3]   [4.0001 ]

is close to the exact solution of

[x_1]   [1]
[x_2] = [3]
[x_3]   [4]
Gauss-Seidel Method: Example 3
Given the system of equations

3 x_1 + 7 x_2 + 13 x_3 = 76
x_1 + 5 x_2 + 3 x_3 = 28
12 x_1 + 3 x_2 - 5 x_3 = 1

with an initial guess of

[x_1]   [1]
[x_2] = [0]
[x_3]   [1]

Rewriting the equations:

x_1 = (76 - 7 x_2 - 13 x_3) / 3
x_2 = (28 - x_1 - 3 x_3) / 5
x_3 = (1 - 12 x_1 - 3 x_2) / (-5)

Gauss-Seidel Method: Example
3
Conducting six iterations
1
a
e
2
a
e
3
a
e
Iteration a
1
a
2
a
3

1
2
3
4
5
6
21.000
-196.15
-1995.0
-20149
2.0364x10
5

-2.0579x10
5

110.71
109.83
109.90
109.89
109.90
1.0990
0.80000
14.421
-116.02
1204.6
-12140
1.2272x10
5

100.00
94.453
112.43
109.63
109.92
109.89
5.0680
-462.30
4718.1
-47636
4.8144x10
5

-4.8653x10
6

98.027
110.96
109.80
109.90
109.89
109.89
The values are not converging.
Does this mean that the Gauss-Seidel method cannot be used?
Gauss-Seidel Method
The Gauss-Seidel Method can still be used.
The coefficient matrix

      [ 3   7  13 ]
[A] = [ 1   5   3 ]
      [ 12  3  -5 ]

is not diagonally dominant, but this is the same set of equations
used in Example 2, which did converge:

      [ 12  3  -5 ]
[A] = [ 1   5   3 ]
      [ 3   7  13 ]

If a system of linear equations is not diagonally dominant, check to
see if rearranging the equations can form a diagonally dominant
matrix.
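For small systems, the rearrangement check suggested above can be brute-forced over all row orderings (a sketch; names are my own):

```python
from itertools import permutations

def rearrange_diag_dominant(A, c):
    """Return a row ordering (A', c') whose coefficient matrix is
    diagonally dominant, or None if no ordering works.
    Brute force; fine for the small systems in these slides."""
    def dominant(M):
        strict = False
        for i, row in enumerate(M):
            off = sum(abs(v) for j, v in enumerate(row) if j != i)
            if abs(row[i]) < off:
                return False
            if abs(row[i]) > off:
                strict = True
        return strict

    n = len(A)
    for p in permutations(range(n)):
        M = [A[i] for i in p]
        if dominant(M):
            return M, [c[i] for i in p]
    return None
```

Applied to the ordering used in Example 3, it recovers the ordering of Example 2; for a system with no diagonally dominant ordering it returns None.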
Gauss-Seidel Method
Not every system of equations can be rearranged to have a diagonally
dominant coefficient matrix. Observe the set of equations:

x_1 + x_2 + x_3 = 3
2 x_1 + 3 x_2 + 4 x_3 = 9
x_1 + 7 x_2 + x_3 = 9

Which equation(s) prevent this set of equations from having a
diagonally dominant coefficient matrix?
Gauss-Seidel Method
Summary
- Advantages of the Gauss-Seidel Method
- Algorithm for the Gauss-Seidel Method
- Pitfalls of the Gauss-Seidel Method
Slides are from: http://numericalmethods.eng.usf.edu