A linear system of $n$ equations in $n$ unknowns,
\[
\begin{aligned}
a_{11}x_1 + a_{12}x_2 + \dots + a_{1n}x_n &= b_1,\\
a_{21}x_1 + a_{22}x_2 + \dots + a_{2n}x_n &= b_2,\\
&\;\;\vdots\\
a_{n1}x_1 + a_{n2}x_2 + \dots + a_{nn}x_n &= b_n,
\end{aligned}
\]
can be written in matrix form as $A\mathbf{x} = \mathbf{b}$, where
\[
A = \begin{pmatrix}
a_{11} & a_{12} & \dots & a_{1n}\\
a_{21} & a_{22} & \dots & a_{2n}\\
\vdots & & \ddots & \vdots\\
a_{n1} & a_{n2} & \dots & a_{nn}
\end{pmatrix},
\qquad
\mathbf{x} = \begin{pmatrix} x_1\\ x_2\\ \vdots\\ x_n \end{pmatrix},
\qquad
\mathbf{b} = \begin{pmatrix} b_1\\ b_2\\ \vdots\\ b_n \end{pmatrix}.
\]
2.2 Example

Consider the system
\[
\begin{aligned}
3x_1 + 2x_2 &= 1,\\
7x_1 + 5x_2 &= 2,
\end{aligned}
\qquad\text{i.e.}\qquad
\begin{pmatrix} 3 & 2\\ 7 & 5 \end{pmatrix}
\begin{pmatrix} x_1\\ x_2 \end{pmatrix}
= \begin{pmatrix} 1\\ 2 \end{pmatrix}.
\tag{1}
\]
Since $\det A = 3\cdot 5 - 2\cdot 7 = 1$, the inverse is
\[
A^{-1} = \begin{pmatrix} 5 & -2\\ -7 & 3 \end{pmatrix},
\qquad\text{so}\qquad
\begin{pmatrix} x_1\\ x_2 \end{pmatrix}
= \begin{pmatrix} 5 & -2\\ -7 & 3 \end{pmatrix}
\begin{pmatrix} 1\\ 2 \end{pmatrix},
\]
which implies
\[
\begin{pmatrix} x_1\\ x_2 \end{pmatrix}
= \begin{pmatrix} 5\cdot 1 - 2\cdot 2\\ -7\cdot 1 + 3\cdot 2 \end{pmatrix}
= \begin{pmatrix} 1\\ -1 \end{pmatrix}.
\]
Check by substitution into original equation (1):
\[
\begin{pmatrix} 3 & 2\\ 7 & 5 \end{pmatrix}
\begin{pmatrix} 1\\ -1 \end{pmatrix}
= \begin{pmatrix} 3\cdot 1 - 2\cdot 1\\ 7\cdot 1 - 5\cdot 1 \end{pmatrix}
= \begin{pmatrix} 1\\ 2 \end{pmatrix},
\]
which is correct.
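For $2\times 2$ systems the inverse formula can be coded directly. The following is a minimal sketch (the helper name `solve_2x2` is my own, not from the notes):

```python
def solve_2x2(a, b, c, d, b1, b2):
    """Solve [[a, b], [c, d]] @ [x1, x2] = [b1, b2] via the 2x2 inverse:
    A^{-1} = (1/det) * [[d, -b], [-c, a]] with det = a*d - b*c."""
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is singular")
    x1 = (d * b1 - b * b2) / det
    x2 = (-c * b1 + a * b2) / det
    return x1, x2

# The system from the example above:
x1, x2 = solve_2x2(3, 2, 7, 5, 1, 2)
print(x1, x2)  # prints 1.0 -1.0
```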
Consider
\[
\begin{pmatrix} 2 & 3\\ 4 & 1 \end{pmatrix}
\begin{pmatrix} x_1\\ x_2 \end{pmatrix}
= \begin{pmatrix} 2\\ 1 \end{pmatrix}.
\]
Subtracting two times row 1 from row 2 implies
\[
\begin{pmatrix} 2 & 3\\ 0 & -5 \end{pmatrix}
\begin{pmatrix} x_1\\ x_2 \end{pmatrix}
= \begin{pmatrix} 2\\ -3 \end{pmatrix}
\qquad (R_2 - 2R_1).
\]
The second row gives $x_2 = \frac{3}{5}$, and back substitution
\[
x_1 = \frac{1}{2}\Big(2 - 3\cdot\frac{3}{5}\Big) = \frac{1}{10}
\]
gives the solution
\[
\mathbf{x} = \begin{pmatrix} 1/10\\ 3/5 \end{pmatrix}.
\]
For an upper triangular $3\times 3$ system the unknowns are found in reverse order:
\[
x_3 = \frac{1}{a_{33}}\,b_3,
\qquad
x_2 = \frac{1}{a_{22}}\,(b_2 - a_{23}x_3),
\qquad
x_1 = \frac{1}{a_{11}}\,(b_1 - a_{13}x_3 - a_{12}x_2).
\]
For a general upper triangular system
\[
\begin{pmatrix}
a_{11} & a_{12} & a_{13} & \dots & a_{1n}\\
0 & a_{22} & a_{23} & \dots & a_{2n}\\
0 & 0 & a_{33} & \dots & a_{3n}\\
\vdots & & & \ddots & \vdots\\
0 & 0 & \dots & 0 & a_{nn}
\end{pmatrix}
\begin{pmatrix} x_1\\ x_2\\ x_3\\ \vdots\\ x_n \end{pmatrix}
= \begin{pmatrix} b_1\\ b_2\\ b_3\\ \vdots\\ b_n \end{pmatrix}
\]
the solution can be found by back substitution:
\[
\begin{aligned}
x_n &= \frac{1}{a_{nn}}\,b_n,\\
x_{n-1} &= \frac{1}{a_{n-1,n-1}}\,(b_{n-1} - a_{n-1,n}x_n),\\
&\;\;\vdots\\
x_1 &= \frac{1}{a_{11}}\Big(b_1 - \sum_{j=2}^{n} a_{1j}x_j\Big),
\end{aligned}
\]
provided $a_{jj} \neq 0$ for all $j = 1, \dots, n$.
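These formulas translate directly into a short Python sketch (the function name is my own; it assumes every diagonal entry is nonzero, as required above):

```python
def back_substitute(U, b):
    """Solve U x = b for an upper-triangular U with nonzero diagonal,
    working backwards from the last equation."""
    n = len(b)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        # subtract the already-known terms a_ij * x_j for j > i
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / U[i][i]
    return x

# A small upper-triangular test system:
U = [[1, 2, 3], [0, -2, -4], [0, 0, -14]]
b = [6, -6, -14]
print(back_substitute(U, b))  # expect [1.0, 1.0, 1.0]
```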
In order to transform a given linear system of equations into an equivalent one where the matrix $A$ is upper triangular, we use an algorithm called Gaussian Elimination.
Carl Friedrich Gauss (1777–1855) was a German mathematician and scientist.
Example. Consider the augmented matrix
\[
\left(\begin{array}{ccc|c}
1 & 2 & 3 & 6\\
2 & 2 & 2 & 6\\
1 & 8 & 1 & 10
\end{array}\right).
\]
Now introduce zeros in the first column below the main diagonal by first subtracting 2 times the first row from the second row and then subtracting the first row from the third row:
\[
\left(\begin{array}{ccc|c}
1 & 2 & 3 & 6\\
0 & -2 & -4 & -6\\
0 & 6 & -2 & 4
\end{array}\right)
\quad
\begin{matrix} \\ R_2 - 2R_1\\ R_3 - R_1 \end{matrix}
\]
We are not yet finished. In order to achieve upper triangular form we need to create an additional zero in the last row:
\[
\left(\begin{array}{ccc|c}
1 & 2 & 3 & 6\\
0 & -2 & -4 & -6\\
0 & 0 & -14 & -14
\end{array}\right)
\quad
\begin{matrix} \\ \\ R_3 + 3R_2 \end{matrix}
\]
The linear system of equations is now in upper triangular form and we can easily recover the solution $\mathbf{x} = (1, 1, 1)^T$.
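The elimination pass just performed can be sketched in a few lines of Python (no pivoting yet; the function name and the row-copy convention are my own):

```python
def gaussian_eliminate(A, b):
    """Reduce A x = b to upper-triangular form by row operations.
    Assumes every pivot A[k][k] encountered is nonzero."""
    n = len(b)
    A = [row[:] for row in A]  # work on copies, keep the inputs intact
    b = b[:]
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]        # multiplier, e.g. the 2 in R2 - 2*R1
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    return A, b

# The example above:
A = [[1, 2, 3], [2, 2, 2], [1, 8, 1]]
b = [6, 6, 10]
U, c = gaussian_eliminate(A, b)
# U, c now describe the upper-triangular system ready for back substitution.
```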
The same procedure applies to any full matrix. Schematically, for a $4\times 4$ augmented system (each $x$ denotes a generally nonzero entry):
\[
\begin{pmatrix}
x & x & x & x & x\\
x & x & x & x & x\\
x & x & x & x & x\\
x & x & x & x & x
\end{pmatrix}
\]
First reduce the first column:
\[
\begin{pmatrix}
x & x & x & x & x\\
0 & x & x & x & x\\
0 & x & x & x & x\\
0 & x & x & x & x
\end{pmatrix}
\]
then the second column:
\[
\begin{pmatrix}
x & x & x & x & x\\
0 & x & x & x & x\\
0 & 0 & x & x & x\\
0 & 0 & x & x & x
\end{pmatrix}
\]
and finally the third column:
\[
\begin{pmatrix}
x & x & x & x & x\\
0 & x & x & x & x\\
0 & 0 & x & x & x\\
0 & 0 & 0 & x & x
\end{pmatrix}
\]
The matrix is now in upper triangular form.
$\det A = 0$, with a contradiction:
\[
\left(\begin{array}{cc|c}
1 & 1 & 1\\
1 & 1 & 0
\end{array}\right)
\xrightarrow{\;R_2 - R_1\;}
\left(\begin{array}{cc|c}
1 & 1 & 1\\
0 & 0 & -1
\end{array}\right)
\]
Contradiction: $0x + 0y = -1$ is impossible, so there are NO SOLUTIONS.
(Plot: the two lines $x + y = 1$ and $x + y = 0$ are parallel and never intersect.)
$\det A = -2 \neq 0$:
\[
\left(\begin{array}{cc|c}
1 & 1 & 1\\
1 & -1 & 0
\end{array}\right)
\xrightarrow{\;R_2 - R_1\;}
\left(\begin{array}{cc|c}
1 & 1 & 1\\
0 & -2 & -1
\end{array}\right)
\]
so $y = \frac{1}{2}$ and $x = \frac{1}{2}$: a UNIQUE SOLUTION.
(Plot: the two lines intersect in exactly one point, $(\tfrac{1}{2}, \tfrac{1}{2})$.)
$\det A = 0$, without a contradiction:
\[
\left(\begin{array}{cc|c}
1 & 1 & 1\\
2 & 2 & 2
\end{array}\right)
\xrightarrow{\;R_2 - 2R_1\;}
\left(\begin{array}{cc|c}
1 & 1 & 1\\
0 & 0 & 0
\end{array}\right)
\]
No contradiction: $0x + 0y = 0$ holds for every $(x, y)$, so there are INFINITELY MANY SOLUTIONS.
(Plot: both equations describe the same line $x + y = 1$.)
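The three cases can be summarised in a small Python sketch. The function and its consistency check are my own, and the second test system is taken with a $-1$ entry (so that $\det A = -2$ and the solution is $x = y = \tfrac{1}{2}$, consistent with the unique-solution case above):

```python
def classify_2x2(a, b, c, d, r1, r2):
    """Classify [[a, b], [c, d]] @ [x, y] = [r1, r2]:
    det != 0 -> unique solution; det == 0 -> none or infinitely many,
    depending on whether elimination produces a contradiction."""
    det = a * d - b * c
    if det != 0:
        return "unique"
    # det == 0: the rows are proportional. Eliminate with whichever
    # first-row entry is nonzero and inspect the right-hand side.
    if a != 0 or b != 0:
        m = c / a if a != 0 else d / b   # proportionality factor
        return "infinitely many" if r2 - m * r1 == 0 else "none"
    return "infinitely many" if (r1, r2) == (0, 0) else "none"

print(classify_2x2(1, 1, 1, 1, 1, 0))   # parallel lines
print(classify_2x2(1, 1, 1, -1, 1, 0))  # det = -2
print(classify_2x2(1, 1, 2, 2, 1, 2))   # same line twice
```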
2.7 Pivoting
Gaussian elimination breaks down if the diagonal (pivot) element in the current column is zero.
Example. Consider
\[
A\mathbf{x} =
\begin{pmatrix}
0 & 1 & 2\\
2 & 3 & 1\\
4 & 2 & 1
\end{pmatrix}\mathbf{x}
= \begin{pmatrix} 4\\ 9\\ 9 \end{pmatrix},
\]
which has the solution $\mathbf{x} = (1,\ 2,\ 1)^T$. Since $a_{11} = 0$ we cannot eliminate directly, but we may exchange the first and third rows:
\[
\begin{pmatrix}
4 & 2 & 1\\
2 & 3 & 1\\
0 & 1 & 2
\end{pmatrix}\mathbf{x}
= \begin{pmatrix} 9\\ 9\\ 4 \end{pmatrix}.
\]
Now we can start Gaussian Elimination.
In practice one always looks for the element with the largest
magnitude in the current column and exchanges the row containing this element with the top row of the current sub-block.
This should be done each time elimination takes place (i.e.,
for each column) until we arrive at the upper triangular form
of the matrix. This process is called Partial Pivoting.
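In Python, this partial-pivoting strategy might be sketched as follows (the function name is my own; at each column the row with the largest-magnitude entry is swapped to the top of the current sub-block before eliminating):

```python
def eliminate_with_pivoting(A, b):
    """Gaussian elimination with partial pivoting: before eliminating
    column k, swap into row k the row (from k downwards) whose entry
    in column k has the largest magnitude."""
    n = len(b)
    A = [row[:] for row in A]
    b = b[:]
    for k in range(n - 1):
        p = max(range(k, n), key=lambda i: abs(A[i][k]))  # pivot row
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    return A, b

# The example above: the zero in the (1,1) position forces a row swap.
A = [[0, 1, 2], [2, 3, 1], [4, 2, 1]]
b = [4, 9, 9]
U, c = eliminate_with_pivoting(A, b)
# Back substitution on U, c then recovers x = (1, 2, 1).
```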
Pivoting becomes even more important when Gaussian Elimination is implemented on a computer, because computers operate with approximations of real numbers up to some finite precision. Unavoidably this leads to round-off errors, which may propagate and have a dramatic effect on the final result (the approximate solution of the linear system). Let us demonstrate this with a simple example.
Example. Consider the linear system
\[
\begin{aligned}
0.0001x + y &= 1 \quad [1]\\
-x + 2y &= 1 \quad [2]
\end{aligned}
\]
It has the exact solution $x = \frac{10000}{10002}$, $y = \frac{10001}{10002}$ (check this!).
Let us solve this system using Gaussian Elimination, rounding all results to 4 significant digits. Eliminating $x$ from equation [2] gives
\[
\begin{aligned}
0.0001x + y &= 1 \quad [1]\\
\Big(2 + \tfrac{1}{0.0001}\Big) y &= 1 + \tfrac{1}{0.0001} \quad [2']
\end{aligned}
\]
Thus, rounding $10002$ and $10001$ to 4 significant digits, we find
\[
\begin{aligned}
0.0001x + y &= 1 \quad [1]\\
10000.0\,y &= 10000.0 \quad [2']
\end{aligned}
\]
Hence,
\[
\begin{aligned}
0.0001x + y &= 1 \quad [1]\\
y &= 1.0 \quad [2']
\end{aligned}
\]
Then we back-substitute to find $x$:
\[
0.0001x + 1.0 = 1 \quad [1]
\]
such that our approximate solution is
\[
x = 0.0, \qquad y = 1.0.
\]
As you can see, we have a real problem: the propagation of round-off errors results in a drastically wrong solution.
Now, let us swap the equations around in the above linear system:
\[
\begin{aligned}
-x + 2y &= 1 \quad [1]\\
0.0001x + y &= 1 \quad [2]
\end{aligned}
\]
As before, the exact solution is $x = \frac{10000}{10002}$, $y = \frac{10001}{10002}$, and $x \approx 1$, $y \approx 1$ is a good approximation.
Again, let us solve this system using Gaussian Elimination,
rounding all results to 4 significant digits.
One has
\[
\begin{aligned}
-x + 2y &= 1 \quad [1]\\
(1 + 0.0001 \cdot 2)\,y &= 1 + 0.0001 \cdot 1 \quad [2']
\end{aligned}
\]
This gives, after rounding $1.0002$ and $1.0001$ to 4 significant digits,
\[
\begin{aligned}
-x + 2y &= 1 \quad [1]\\
1.000\,y &= 1.000 \quad [2']
\end{aligned}
\]
Hence
\[
\begin{aligned}
-x + 2y &= 1 \quad [1]\\
y &= 1.0 \quad [2']
\end{aligned}
\]
Then we back-substitute to find $x$:
\[
-x + 2 \cdot 1.0 = 1 \quad [1]
\]
such that our approximate solution is
\[
x = 1.0, \qquad y = 1.0.
\]
Clearly, this is a much better approximation than what we had
before.
Points to note:
- If we do not use exact arithmetic, then Gaussian Elimination for the same linear system may give different answers depending on the order in which the equations of the system are written. One of these answers may be obviously wrong! This is due to the propagation of round-off errors.
- The process of Partial Pivoting switches the equations around (as explained above). In many cases this keeps round-off errors under control and produces adequate approximations.
- Note that in Partial Pivoting we look for the element with the largest magnitude (i.e., with the largest absolute value). In other words, given a choice between $0.3$ and $-0.5$, we must select $-0.5$.
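The round-off experiment above can be reproduced by simulating 4-significant-digit arithmetic. Here `fl` is a hypothetical rounding helper of my own (not a library function), and the second equation is taken as $-x + 2y = 1$, consistent with the stated exact solution:

```python
def fl(v, digits=4):
    """Round v to the given number of significant digits."""
    if v == 0:
        return 0.0
    return float(f"{v:.{digits - 1}e}")

# System: 0.0001*x + y = 1  [1]  and  -x + 2*y = 1  [2],
# eliminated WITHOUT pivoting, rounding after every operation:
m = fl(-1 / 0.0001)                       # multiplier -10000
y = fl(fl(1 - m * 1) / fl(2 - m * 1))     # 10001/10002 both round to 10000
x = fl(fl(1 - y) / 0.0001)                # back-substitute into [1]
print(x, y)  # prints 0.0 1.0  -- drastically wrong x
```

Swapping the equations first (i.e., pivoting on the larger coefficient $-1$) makes the multiplier small, and the same rounded arithmetic returns the good approximation $x = 1.0$, $y = 1.0$.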
Example. Consider the augmented matrix
\[
\begin{matrix} R_1\\ R_2\\ R_3 \end{matrix}
\left(\begin{array}{ccc|c}
1 & -3 & 0 & 1\\
-2 & 2 & 4 & 2\\
1 & 3 & 2 & 3
\end{array}\right)
\]
The element with the largest magnitude in the 1st column is $-2$, and it is in the 2nd row ($R_2$). Therefore, we do pivoting, i.e., we interchange rows $R_2$ and $R_1$, and then use Gaussian elimination to introduce zeros in the 1st column:
\[
\begin{matrix} R_2\\ R_1\\ R_3 \end{matrix}
\left(\begin{array}{ccc|c}
-2 & 2 & 4 & 2\\
1 & -3 & 0 & 1\\
1 & 3 & 2 & 3
\end{array}\right)
\xrightarrow{\substack{R_1 + \frac{1}{2}R_2\\[2pt] R_3 + \frac{1}{2}R_2}}
\left(\begin{array}{ccc|c}
-2 & 2 & 4 & 2\\
0 & -2 & 2 & 2\\
0 & 4 & 4 & 4
\end{array}\right).
\]
Now, in this new matrix, we look at the elements in the 2nd column of the 2nd and the 3rd rows, and find the one with the largest magnitude: it is 4 and it is in the 3rd row. Thus, we need to pivot again. We interchange the 2nd and the 3rd rows, and then apply Gaussian elimination:
\[
\begin{matrix} R_1'\\ R_2'\\ R_3' \end{matrix}
\left(\begin{array}{ccc|c}
-2 & 2 & 4 & 2\\
0 & 4 & 4 & 4\\
0 & -2 & 2 & 2
\end{array}\right)
\xrightarrow{\;R_3' + \frac{1}{2}R_2'\;}
\left(\begin{array}{ccc|c}
-2 & 2 & 4 & 2\\
0 & 4 & 4 & 4\\
0 & 0 & 4 & 4
\end{array}\right).
\]
The system is now in the upper triangular form, and we can use back-substitution to find the solution: $x = 1$, $y = 0$, $z = 1$.
2.8
LU Factorization
We seek a factorization $A = LU$ with $L$ unit lower triangular and $U$ upper triangular. Comparing the entries of $LU$ with those of $A$ (for a $3\times 3$ matrix) gives
\[
u_{11} = a_{11},\quad
u_{12} = a_{12},\quad
u_{13} = a_{13},\quad
l_{21} = \frac{a_{21}}{u_{11}},\quad
l_{31} = \frac{a_{31}}{u_{11}},\quad
l_{32} = \frac{a_{32} - l_{31}u_{12}}{u_{22}}.
\]
Example. Consider
\[
\begin{pmatrix}
1 & 1 & 1\\
2 & 4 & 2\\
1 & -1 & 4
\end{pmatrix}
\begin{pmatrix} x_1\\ x_2\\ x_3 \end{pmatrix}
= \begin{pmatrix} 2\\ 6\\ 9 \end{pmatrix}
\]
Doolittle's method implies
\[
\begin{pmatrix}
1 & 1 & 1\\
2 & 4 & 2\\
1 & -1 & 4
\end{pmatrix}
=
\begin{pmatrix}
u_{11} & u_{12} & u_{13}\\
l_{21}u_{11} & l_{21}u_{12} + u_{22} & l_{21}u_{13} + u_{23}\\
l_{31}u_{11} & l_{31}u_{12} + l_{32}u_{22} & l_{31}u_{13} + l_{32}u_{23} + u_{33}
\end{pmatrix}
\]
so that
\[
u_{11} = 1,\quad u_{12} = 1,\quad u_{13} = 1,\quad
l_{21} = \frac{2}{1} = 2,\quad
l_{31} = \frac{1}{1} = 1,
\]
\[
u_{22} = 4 - 2\cdot 1 = 2,\quad
u_{23} = 2 - 2\cdot 1 = 0,\quad
l_{32} = \frac{-1 - 1\cdot 1}{2} = -1,\quad
u_{33} = 4 - 1 - 0 = 3.
\]
Forward substitution with $L\mathbf{y} = \mathbf{b}$,
\[
\begin{pmatrix}
1 & 0 & 0\\
2 & 1 & 0\\
1 & -1 & 1
\end{pmatrix}
\begin{pmatrix} y_1\\ y_2\\ y_3 \end{pmatrix}
= \begin{pmatrix} 2\\ 6\\ 9 \end{pmatrix},
\]
gives
\[
\begin{aligned}
y_1 &= 2,\\
2y_1 + y_2 = 6 \;&\Rightarrow\; y_2 = 6 - 2\cdot 2 = 2,\\
y_1 - y_2 + y_3 = 9 \;&\Rightarrow\; y_3 = 9 - 2 + 2 = 9.
\end{aligned}
\]
Then back substitution with $U\mathbf{x} = \mathbf{y}$,
\[
\begin{pmatrix}
1 & 1 & 1\\
0 & 2 & 0\\
0 & 0 & 3
\end{pmatrix}
\begin{pmatrix} x_1\\ x_2\\ x_3 \end{pmatrix}
= \begin{pmatrix} 2\\ 2\\ 9 \end{pmatrix},
\]
gives
\[
3x_3 = 9 \;\Rightarrow\; x_3 = 3,
\qquad
2x_2 = 2 \;\Rightarrow\; x_2 = 1,
\qquad
x_1 + x_2 + x_3 = 2 \;\Rightarrow\; x_1 = -2.
\]
Note that if we now wanted the solution for another right-hand side $\mathbf{b}$, we would only need to repeat the forward and back substitution; the factorization $A = LU$ itself can be reused.
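Doolittle's recurrences can be sketched in Python as follows (the function name is my own, and the example matrix uses the $-1$ entry implied by the step $l_{32} = -1$ above; no pivoting, so each $u_{kk}$ must come out nonzero):

```python
def doolittle_lu(A):
    """Doolittle factorization A = L U with unit diagonal on L.
    Row k of U and column k of L are filled in alternately."""
    n = len(A)
    L = [[float(i == j) for j in range(n)] for i in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for k in range(n):
        for j in range(k, n):  # row k of U
            U[k][j] = A[k][j] - sum(L[k][s] * U[s][j] for s in range(k))
        for i in range(k + 1, n):  # column k of L
            L[i][k] = (A[i][k] - sum(L[i][s] * U[s][k] for s in range(k))) / U[k][k]
    return L, U

A = [[1.0, 1.0, 1.0], [2.0, 4.0, 2.0], [1.0, -1.0, 4.0]]
L, U = doolittle_lu(A)
# Forward substitution (L y = b) followed by back substitution (U x = y)
# then solves A x = b for any right-hand side b, reusing L and U.
```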