2E1: Linear Algebra | Lecture Notes

2 Systems of linear equations and their solution

For the purpose of computer simulation, physical structures (e.g., bridges, electric circuits) are often represented as discrete models. These models are usually linear systems $Ax = f$, where the matrix $A$ represents the structure, $f$ is the vector representing the external forces, and $x$ is the vector of unknown quantities of interest (e.g., displacements of points on the bridge, currents in the electric circuit).
2.1 Definition of linear systems of equations

A system of equations of the form
$$\begin{aligned}
a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n &= b_1\\
a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n &= b_2\\
&\;\;\vdots\\
a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n &= b_m
\end{aligned}$$
is called a linear system of $m$ equations in $n$ unknowns $(x_1, x_2, \ldots, x_n)$.
Points to note:
- linear systems occur frequently in engineering applications;
- $m$ does not need to be equal to $n$;
- generally, $n$ equations in $n$ variables give a unique solution, though not always (as we shall see in the examples below);
- the coefficients $a_{ij}$ ($i = 1, \ldots, m$; $j = 1, \ldots, n$) do not need to be integers, nor even "nice" real numbers.

2.2 Matrix solution of an $n \times n$ linear system of equations


Suppose that we have a system of $n$ equations in $n$ unknowns $x_1, x_2, \ldots, x_n$:
$$\begin{aligned}
a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n &= b_1\\
a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n &= b_2\\
&\;\;\vdots\\
a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nn}x_n &= b_n
\end{aligned}$$
This can be represented in matrix form as
$$A x = b$$
where

$$A = \begin{pmatrix}
a_{11} & a_{12} & \cdots & a_{1n}\\
a_{21} & a_{22} & \cdots & a_{2n}\\
\vdots & \vdots & & \vdots\\
a_{n1} & a_{n2} & \cdots & a_{nn}
\end{pmatrix}, \qquad
x = \begin{pmatrix} x_1\\ x_2\\ \vdots\\ x_n \end{pmatrix}, \qquad
b = \begin{pmatrix} b_1\\ b_2\\ \vdots\\ b_n \end{pmatrix}.$$
If $A$ is a non-singular matrix, so that the inverse matrix $A^{-1}$ exists, then there is a unique solution given by
$$x = A^{-1} b$$


Example
$$\begin{aligned}
3x_1 + 2x_2 &= 1\\
7x_1 + 5x_2 &= 2
\end{aligned}$$
This can be represented in matrix form as
$$\begin{pmatrix} 3 & 2\\ 7 & 5 \end{pmatrix}
\begin{pmatrix} x_1\\ x_2 \end{pmatrix} =
\begin{pmatrix} 1\\ 2 \end{pmatrix} \qquad (1)$$
Multiply both sides on the left by the inverse matrix $A^{-1}$:
$$\begin{pmatrix} x_1\\ x_2 \end{pmatrix} =
\begin{pmatrix} 5 & -2\\ -7 & 3 \end{pmatrix}
\begin{pmatrix} 1\\ 2 \end{pmatrix}$$
which implies
$$\begin{pmatrix} x_1\\ x_2 \end{pmatrix} =
\begin{pmatrix} 5 \cdot 1 - 2 \cdot 2\\ -7 \cdot 1 + 3 \cdot 2 \end{pmatrix} =
\begin{pmatrix} 1\\ -1 \end{pmatrix}$$
Check by substitution into the original equation (1):
$$\begin{pmatrix} 3 & 2\\ 7 & 5 \end{pmatrix}
\begin{pmatrix} 1\\ -1 \end{pmatrix} =
\begin{pmatrix} 3 \cdot 1 - 2 \cdot 1\\ 7 \cdot 1 - 5 \cdot 1 \end{pmatrix} =
\begin{pmatrix} 1\\ 2 \end{pmatrix} \checkmark$$

BUT, calculation of the inverse matrix is very expensive. Solving linear systems of equations is at the heart of many numerical algorithms in engineering. Therefore, it is important to have efficient methods.
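In MATLAB, for instance, the backslash operator solves the system directly by elimination; a minimal sketch on the example above, contrasting the two approaches:

% Solving the 2x2 example two ways. Forming inv(A) explicitly is more
% expensive (and less accurate) than solving by elimination.
A = [3 2; 7 5];
b = [1; 2];
x_inv  = inv(A) * b;   % via the inverse matrix:      x = [1; -1]
x_elim = A \ b;        % via Gaussian elimination:    x = [1; -1]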


2.3 Solving linear systems of equations: The basic idea

Consider the case $n = 2$, e.g.
$$\begin{aligned}
2x_1 + 3x_2 &= 2 \qquad (2)\\
4x_1 + x_2 &= 1 \qquad (3)
\end{aligned}$$
Let us subtract two times equation (2) from equation (3):
$$\begin{aligned}
2x_1 + 3x_2 &= 2 \qquad (4)\\
0x_1 - 5x_2 &= -3 \qquad (5)
\end{aligned}$$
We have eliminated $x_1$ from equation (5) and can solve it:
$$-5x_2 = -3 \;\Rightarrow\; x_2 = \frac{3}{5}$$
We can now insert $x_2$ into the first equation to obtain
$$2x_1 = 2 - 3 \cdot \frac{3}{5} = \frac{1}{5} \;\Rightarrow\; x_1 = \frac{1}{10}$$


This process can be written in the matrix form $Ax = b$ as
$$\begin{pmatrix} 2 & 3\\ 4 & 1 \end{pmatrix}
\begin{pmatrix} x_1\\ x_2 \end{pmatrix} =
\begin{pmatrix} 2\\ 1 \end{pmatrix}. \qquad (6)$$
Subtracting two times row 1 from row 2 ($R_2 - 2R_1$) implies
$$\begin{pmatrix} 2 & 3\\ 0 & -5 \end{pmatrix}
\begin{pmatrix} x_1\\ x_2 \end{pmatrix} =
\begin{pmatrix} 2\\ -3 \end{pmatrix} \qquad (7)$$
and back substituting for the components of $x$
$$x_2 = \frac{3}{5}, \qquad
x_1 = \frac{1}{2}\left(2 - 3 \cdot \frac{3}{5}\right) = \frac{1}{10}$$
gives the solution
$$x = \begin{pmatrix} \frac{1}{10}\\[3pt] \frac{3}{5} \end{pmatrix}.$$

Summary We started with the linear system of equations (6) and transformed it by row manipulations into the equivalent system (7), from which we could easily obtain the solution $x$.


2.4 Back substitution in upper triangular systems

Solution of the system $Ax = b$ is easy if it has the form
$$\begin{aligned}
a_{11}x_1 + a_{12}x_2 + a_{13}x_3 &= b_1\\
a_{22}x_2 + a_{23}x_3 &= b_2\\
a_{33}x_3 &= b_3
\end{aligned} \qquad (8)$$
Provided $a_{jj} \neq 0$ for $j = 1, 2, 3$, we simply start at the last row and progressively back substitute to work out the values of $x$:
$$x_3 = \frac{1}{a_{33}} b_3, \qquad
x_2 = \frac{1}{a_{22}} \left(b_2 - a_{23}x_3\right), \qquad
x_1 = \frac{1}{a_{11}} \left(b_1 - a_{13}x_3 - a_{12}x_2\right)$$


For any linear system of equations in which the coefficient matrix $A$ is upper triangular, with zero elements below the main diagonal,
$$\begin{pmatrix}
a_{11} & a_{12} & a_{13} & \cdots & a_{1n}\\
0 & a_{22} & a_{23} & \cdots & a_{2n}\\
0 & 0 & a_{33} & \cdots & a_{3n}\\
\vdots & & & \ddots & \vdots\\
0 & 0 & \cdots & 0 & a_{nn}
\end{pmatrix}
\begin{pmatrix} x_1\\ x_2\\ x_3\\ \vdots\\ x_n \end{pmatrix} =
\begin{pmatrix} b_1\\ b_2\\ b_3\\ \vdots\\ b_n \end{pmatrix}$$
the solution can be found by back substitution:
$$\begin{aligned}
x_n &= \frac{1}{a_{nn}} b_n,\\
x_{n-1} &= \frac{1}{a_{n-1,n-1}} \left(b_{n-1} - a_{n-1,n}\, x_n\right),\\
&\;\;\vdots\\
x_1 &= \frac{1}{a_{11}} \left(b_1 - \sum_{j=2}^{n} a_{1j} x_j\right),
\end{aligned}$$
provided $a_{jj} \neq 0$ for all $j = 1, \ldots, n$.
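These formulas translate directly into a short loop; here is a minimal MATLAB sketch (the function name backsub is ours, not a built-in):

% Back substitution for an upper triangular system U*x = b,
% assuming all diagonal entries U(j,j) are nonzero.
function x = backsub(U, b)
    n = length(b);
    x = zeros(n, 1);
    for j = n:-1:1
        % subtract the contributions of the already-computed unknowns,
        % then divide by the diagonal entry (the pivot)
        x(j) = (b(j) - U(j, j+1:n) * x(j+1:n)) / U(j, j);
    end
end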
In order to transform a given linear system of equations into an equivalent one where the matrix $A$ is upper triangular, we use an algorithm called Gaussian Elimination, named after Carl Friedrich Gauss (1777-1855), a German mathematician and scientist.


2.5 Gaussian Elimination

We explain this for a 3 × 3 example. The general algorithm will be clear from this.
We start with the system of equations
$$\begin{pmatrix} 1 & 2 & 3\\ 2 & 2 & 2\\ 1 & 8 & 1 \end{pmatrix}
\begin{pmatrix} x_1\\ x_2\\ x_3 \end{pmatrix} =
\begin{pmatrix} 6\\ 6\\ 10 \end{pmatrix}$$
It is useful to rewrite it in the augmented form
$$\left(\begin{array}{ccc|c} 1 & 2 & 3 & 6\\ 2 & 2 & 2 & 6\\ 1 & 8 & 1 & 10 \end{array}\right)$$
Now introduce zeros in the first column below the main diagonal by first subtracting 2 times the first row from the second row ($R_2 - 2R_1$) and then subtracting the first row from the third row ($R_3 - R_1$):
$$\left(\begin{array}{ccc|c} 1 & 2 & 3 & 6\\ 0 & -2 & -4 & -6\\ 0 & 6 & -2 & 4 \end{array}\right)$$
We are not yet finished. In order to achieve upper triangular form we need to create an additional zero in the last row ($R_3 + 3R_2$):
$$\left(\begin{array}{ccc|c} 1 & 2 & 3 & 6\\ 0 & -2 & -4 & -6\\ 0 & 0 & -14 & -14 \end{array}\right)$$
The linear system of equations is now in upper triangular form, and we can easily recover the solution $x = (1\;\; 1\;\; 1)^T$.
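The same row operations can be carried out programmatically; a minimal MATLAB sketch of the forward-elimination pass (without pivoting) on this augmented matrix:

% Forward elimination (no pivoting) on the augmented matrix [A b]
% from the example above.
Ab = [1 2 3 6; 2 2 2 6; 1 8 1 10];
n = 3;
for k = 1:n-1                          % current pivot column
    for i = k+1:n                      % rows below the pivot
        m = Ab(i,k) / Ab(k,k);         % multiplier
        Ab(i,:) = Ab(i,:) - m*Ab(k,:); % R_i <- R_i - m*R_k
    end
end
% Ab is now upper triangular; back substitution gives x = [1; 1; 1].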

We now show the process of Gaussian elimination schematically for a 4 × 4 example, where each x denotes a possibly nonzero entry of the augmented matrix:
$$\left(\begin{array}{cccc|c}
x & x & x & x & x\\
x & x & x & x & x\\
x & x & x & x & x\\
x & x & x & x & x
\end{array}\right)$$
First reduce the first column
$$\left(\begin{array}{cccc|c}
x & x & x & x & x\\
0 & x & x & x & x\\
0 & x & x & x & x\\
0 & x & x & x & x
\end{array}\right)$$
then the second column
$$\left(\begin{array}{cccc|c}
x & x & x & x & x\\
0 & x & x & x & x\\
0 & 0 & x & x & x\\
0 & 0 & x & x & x
\end{array}\right)$$
and finally the third column
$$\left(\begin{array}{cccc|c}
x & x & x & x & x\\
0 & x & x & x & x\\
0 & 0 & x & x & x\\
0 & 0 & 0 & x & x
\end{array}\right)$$
to generate an upper triangular augmented matrix. Back substitute to determine the solution $x$.


2.6 Geometric Interpretation: Existence of solutions

For the case of a 2 × 2 system of equations
$$\begin{aligned}
a_{11}x_1 + a_{12}x_2 &= b_1\\
a_{21}x_1 + a_{22}x_2 &= b_2
\end{aligned}$$
we may interpret $(x_1, x_2)$ as coordinates in the $x_1x_2$-plane. The equations then describe straight lines. The point $(x_1, x_2)$ is a solution precisely when the point lies on both lines. There are three cases:
(i) No solution if the lines are parallel
(ii) Precisely one solution if they intersect
(iii) Infinitely many solutions if they coincide

Example (i) Two parallel lines
$$\begin{aligned}
x + y &= 1\\
x + y &= 0
\end{aligned} \qquad \det A = 0$$
$$\left(\begin{array}{cc|c} 1 & 1 & 1\\ 1 & 1 & 0 \end{array}\right)
\;\xrightarrow{R_2 - R_1}\;
\left(\begin{array}{cc|c} 1 & 1 & 1\\ 0 & 0 & -1 \end{array}\right)$$
The second row now reads $0 \cdot x + 0 \cdot y = -1$, a contradiction: NO SOLUTIONS.
[Plot: the two parallel lines x + y = 1 and x + y = 0 in the xy-plane]


Example (ii) Lines intersect
$$\begin{aligned}
x + y &= 1\\
x - y &= 0
\end{aligned} \qquad \det A = -2$$
$$\left(\begin{array}{cc|c} 1 & 1 & 1\\ 1 & -1 & 0 \end{array}\right)
\;\xrightarrow{R_2 - R_1}\;
\left(\begin{array}{cc|c} 1 & 1 & 1\\ 0 & -2 & -1 \end{array}\right)$$
Back substitution gives $y = \frac{1}{2}$, $x = \frac{1}{2}$: a UNIQUE SOLUTION.
[Plot: the lines x + y = 1 and x - y = 0 intersecting at (1/2, 1/2)]

Example (iii) Lines coincide
$$\begin{aligned}
x + y &= 1\\
2x + 2y &= 2
\end{aligned} \qquad \det A = 0$$
$$\left(\begin{array}{cc|c} 1 & 1 & 1\\ 2 & 2 & 2 \end{array}\right)
\;\xrightarrow{R_2 - 2R_1}\;
\left(\begin{array}{cc|c} 1 & 1 & 1\\ 0 & 0 & 0 \end{array}\right)$$
The second row reads $0 \cdot x + 0 \cdot y = 0$: no contradiction, so there are INFINITELY MANY SOLUTIONS.
[Plot: the coincident lines x + y = 1 and 2x + 2y = 2]

If $\det A = 0$ there are either no solutions or infinitely many.
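A quick MATLAB check of the determinants in the three examples above:

% Determinants of the coefficient matrices from examples (i)-(iii):
det([1 1; 1 1])    % (i)   0  -> parallel lines, no solution
det([1 1; 1 -1])   % (ii) -2  -> a unique intersection point
det([1 1; 2 2])    % (iii) 0  -> coincident lines, infinitely many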



2.7 Pivoting

Gaussian elimination is not possible if we encounter a diagonal element that is zero in the current column.
Example
$$Ax = \begin{pmatrix} 0 & 1 & 2\\ 2 & 3 & 1\\ 4 & 2 & 1 \end{pmatrix} x = \begin{pmatrix} 4\\ 9\\ 9 \end{pmatrix}$$
The solution exists, $x = (1\;\; 2\;\; 1)^T$. But we cannot start Gaussian Elimination because $a_{11} = 0$.
This can be solved by a process called pivoting. Since the order of the equations does not matter, we can just exchange the first equation with the third equation to obtain
$$\begin{pmatrix} 4 & 2 & 1\\ 2 & 3 & 1\\ 0 & 1 & 2 \end{pmatrix} x = \begin{pmatrix} 9\\ 9\\ 4 \end{pmatrix}.$$
Now we can start Gaussian Elimination.
In practice one always looks for the element with the largest magnitude in the current column and exchanges the row containing this element with the top row of the current sub-block. This should be done each time elimination takes place (i.e., for each column) until we arrive at the upper triangular form of the matrix. This process is called Partial Pivoting; a code sketch follows the note below.

NB We don't need to pivot on $a_{22}$ in the matrix below, because we are NOT eliminating elements in the 2nd column:
$$\left(\begin{array}{ccc|c} 2 & 1 & 2 & 1\\ 6 & 0 & 2 & 0\\ 8 & 1 & 5 & 4 \end{array}\right)$$
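A minimal MATLAB sketch of forward elimination with partial pivoting (the function name forward_elim_pp is ours, not a built-in):

% Forward elimination with partial pivoting on an augmented matrix Ab.
% Before eliminating column k, the row whose entry in column k has the
% largest magnitude is swapped up to the pivot position.
function Ab = forward_elim_pp(Ab)
    n = size(Ab, 1);
    for k = 1:n-1
        [~, p] = max(abs(Ab(k:n, k)));   % best pivot among rows k..n
        p = p + k - 1;
        Ab([k p], :) = Ab([p k], :);     % interchange rows k and p
        for i = k+1:n
            m = Ab(i,k) / Ab(k,k);       % multiplier
            Ab(i,:) = Ab(i,:) - m*Ab(k,:);
        end
    end
end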

Pivoting becomes even more important when Gaussian Elimination is implemented on a computer. This is because computers operate with approximations of real numbers up to some precision. Unavoidably this leads to round-off errors, which may propagate and have a dramatic effect on the final result (the approximate solution to the linear system). Let us demonstrate this on a simple example.
Example. Consider a linear system:
$$\begin{aligned}
0.0001x + y &= 1 \quad [1]\\
-x + 2y &= 1 \quad [2]
\end{aligned}$$
It has the exact solution $x = \frac{10000}{10002}$, $y = \frac{10001}{10002}$ (check this!).
Therefore, $x \approx 1$, $y \approx 1$ is a good approximation.


Now, let us try solving this system using Gaussian Elimination, rounding all results to 4 significant digits.
Multiply row [1] by $\frac{-1}{0.0001}$ and subtract the result from row [2]:
$$\begin{aligned}
0.0001x + y &= 1 \quad [1]\\
\left(2 - \tfrac{-1}{0.0001} \cdot 1\right)y &= 1 - \tfrac{-1}{0.0001} \cdot 1 \quad [2']
\end{aligned}$$
Recall that we round all results to 4 significant digits. E.g., one can use MATLAB to compute
>> digits(4)
>> vpa(2-(-1)*1/0.0001)
ans =
10000.0
>> vpa(1-(-1)*1/0.0001)
ans =
10000.0

Thus, we find
$$\begin{aligned}
0.0001x + y &= 1 \quad [1]\\
10000.0\,y &= 10000.0 \quad [2']
\end{aligned}$$
Hence,
$$\begin{aligned}
0.0001x + y &= 1 \quad [1]\\
y &= 1.0 \quad [2']
\end{aligned}$$
Then we back-substitute to find x:
$$\begin{aligned}
0.0001x + 1.0 &= 1 \quad [1]\\
y &= 1.0 \quad [2']
\end{aligned}$$
such that our approximate solution is
$$x = 0.0, \qquad y = 1.0$$
As you can see, we have a real problem: the propagation of round-off errors results in a drastically wrong solution.
Now, let us swap the equations around in the above linear system:
$$\begin{aligned}
-x + 2y &= 1 \quad [1]\\
0.0001x + y &= 1 \quad [2]
\end{aligned}$$
As before, the exact solution is $x = \frac{10000}{10002}$, $y = \frac{10001}{10002}$, and $x \approx 1$, $y \approx 1$ is a good approximation.
Again, let us solve this system using Gaussian Elimination, rounding all results to 4 significant digits. One has
$$\begin{aligned}
-x + 2y &= 1 \quad [1]\\
\left(1 - \tfrac{0.0001}{-1} \cdot 2\right)y &= 1 - \tfrac{0.0001}{-1} \cdot 1 \quad [2']
\end{aligned}$$
This gives
$$\begin{aligned}
-x + 2y &= 1 \quad [1]\\
1.0\,y &= 1.0 \quad [2']
\end{aligned}$$

Hence
$$\begin{aligned}
-x + 2y &= 1 \quad [1]\\
y &= 1.0 \quad [2']
\end{aligned}$$
Then we back-substitute to find x:
$$\begin{aligned}
-x + 2 \cdot 1.0 &= 1 \quad [1]\\
y &= 1.0 \quad [2']
\end{aligned}$$
such that our approximate solution is
$$x = 1.0, \qquad y = 1.0$$
Clearly, this is a much better approximation than what we had before.
Points to note:
- If we do not use exact arithmetic, then Gaussian Elimination for the same linear system may give different answers depending on the order in which the equations of the system are written. One of these answers may be obviously wrong! This is due to the propagation of round-off errors.
- The process of Partial Pivoting switches the equations around (as explained above). In many cases this keeps round-off errors under control and produces adequate approximations; the sketch below repeats the computation above in 4-significant-digit arithmetic.
- Note that in Partial Pivoting we look for the element with the largest magnitude (i.e., with the largest absolute value). In other words, given a choice between 0.3 and -0.5, we must select -0.5.
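A minimal sketch, again assuming the Symbolic Math Toolbox's digits/vpa used earlier, repeating both eliminations with rounding to 4 significant digits:

% Repeating both eliminations in 4-significant-digit arithmetic.
digits(4)
% Without pivoting: the multiplier (-1)/0.0001 = -10000 amplifies
% round-off in the transformed second equation.
m = (-1)/0.0001;
y = vpa(1 - m*1) / vpa(2 - m*1)   % 10000.0/10000.0 = 1.0
x = vpa((1 - y)/0.0001)           % x = 0.0  (badly wrong)
% With pivoting (equations swapped): the multiplier 0.0001/(-1) is tiny.
m = 0.0001/(-1);
y = vpa(1 - m*1) / vpa(1 - m*2)   % 1.0/1.0 = 1.0
x = vpa((1 - 2*y)/(-1))           % x = 1.0  (a good approximation)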

Example. Solve the following linear system using Gaussian Elimination with partial pivoting:
$$\begin{aligned}
x - 3y &= 1\\
-2x + 2y - 4z &= 2\\
x + 3y - 2z &= 3
\end{aligned}$$
Let us write the system in the augmented matrix form:
$$\begin{matrix} R_1\\ R_2\\ R_3 \end{matrix}
\left(\begin{array}{ccc|c} 1 & -3 & 0 & 1\\ -2 & 2 & -4 & 2\\ 1 & 3 & -2 & 3 \end{array}\right)$$
The element with the largest magnitude in the 1st column is $-2$, and it is in the 2nd row ($R_2$). Therefore, we do pivoting, i.e., we interchange rows ($R_2$) and ($R_1$), and then use Gaussian elimination to introduce zeros in the 1st column:
$$\begin{matrix} R_2\\ R_1\\ R_3 \end{matrix}
\left(\begin{array}{ccc|c} -2 & 2 & -4 & 2\\ 1 & -3 & 0 & 1\\ 1 & 3 & -2 & 3 \end{array}\right)
\;\xrightarrow{\substack{R_1 + \frac{1}{2}R_2\\[2pt] R_3 + \frac{1}{2}R_2}}\;
\left(\begin{array}{ccc|c} -2 & 2 & -4 & 2\\ 0 & -2 & -2 & 2\\ 0 & 4 & -4 & 4 \end{array}\right).$$
Now, in this new matrix, we look at the elements in the 2nd column of the 2nd and the 3rd rows, and find the one with the largest magnitude: it is 4 and it is in the 3rd row. Thus, we need to pivot again. We interchange the 2nd and the 3rd rows, and then apply Gaussian elimination:
$$\begin{matrix} R_1'\\ R_2'\\ R_3' \end{matrix}
\left(\begin{array}{ccc|c} -2 & 2 & -4 & 2\\ 0 & 4 & -4 & 4\\ 0 & -2 & -2 & 2 \end{array}\right)
\;\xrightarrow{R_3' + \frac{1}{2}R_2'}\;
\left(\begin{array}{ccc|c} -2 & 2 & -4 & 2\\ 0 & 4 & -4 & 4\\ 0 & 0 & -4 & 4 \end{array}\right).$$
The system is now in upper triangular form, and we can use back-substitution to find the solution: $x = 1$, $y = 0$, $z = -1$.
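The same result comes out of the forward_elim_pp sketch given earlier in this section (our helper, not a built-in):

% Applying the forward_elim_pp sketch from above to this system:
Ab = forward_elim_pp([1 -3 0 1; -2 2 -4 2; 1 3 -2 3])
% Ab = [-2 2 -4 2; 0 4 -4 4; 0 0 -4 4], as derived by hand above.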
2.8 LU Factorization

The nonsingular matrix $A$ has an LU-factorization if it can be expressed as the product of a lower-triangular matrix $L$ and an upper triangular matrix $U$, e.g.
$$A = \begin{pmatrix} a_{11} & a_{12} & a_{13}\\ a_{21} & a_{22} & a_{23}\\ a_{31} & a_{32} & a_{33} \end{pmatrix} =
\begin{pmatrix} l_{11} & 0 & 0\\ l_{21} & l_{22} & 0\\ l_{31} & l_{32} & l_{33} \end{pmatrix}
\begin{pmatrix} u_{11} & u_{12} & u_{13}\\ 0 & u_{22} & u_{23}\\ 0 & 0 & u_{33} \end{pmatrix} = LU$$
It turns out that this factorization (when it exists) is not unique. If $L$ has ones on its diagonal, then it is called a Doolittle factorization. If $U$ has ones on its diagonal, then it is called a Crout factorization. When $U = L^T$ it is called a Cholesky decomposition.
2.8.1 Using LU factorization to solve a system

Assuming $Ax = b$ can be factorized into
$$LUx = b,$$
let $y = Ux$ and use forward substitution to solve
$$Ly = b$$
for $y$. Then use backward substitution to solve
$$Ux = y$$
for $x$. This has the advantage that, once $A = LU$ is known, we only have to do a forward and a back substitution again for a different right-hand side $b$.
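In MATLAB this is a few lines with the built-in lu function, which also applies partial pivoting and returns a permutation matrix $P$ with $PA = LU$; a minimal sketch on the 3 × 3 example from Section 2.5:

% Solving A*x = b via an LU factorization with MATLAB's built-in lu.
A = [1 2 3; 2 2 2; 1 8 1];   % the example from Section 2.5
b = [6; 6; 10];
[L, U, P] = lu(A);           % P*A = L*U (with partial pivoting)
y = L \ (P*b);               % forward substitution for y
x = U \ y                    % back substitution: x = [1; 1; 1]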

2.8.2 Doolittle's Method

$$A = \begin{pmatrix} a_{11} & a_{12} & a_{13}\\ a_{21} & a_{22} & a_{23}\\ a_{31} & a_{32} & a_{33} \end{pmatrix} =
\begin{pmatrix} 1 & 0 & 0\\ l_{21} & 1 & 0\\ l_{31} & l_{32} & 1 \end{pmatrix}
\begin{pmatrix} u_{11} & u_{12} & u_{13}\\ 0 & u_{22} & u_{23}\\ 0 & 0 & u_{33} \end{pmatrix} = LU$$
Multiplying out,
$$A = \begin{pmatrix}
u_{11} & u_{12} & u_{13}\\
l_{21}u_{11} & l_{21}u_{12} + u_{22} & l_{21}u_{13} + u_{23}\\
l_{31}u_{11} & l_{31}u_{12} + l_{32}u_{22} & l_{31}u_{13} + l_{32}u_{23} + u_{33}
\end{pmatrix}$$
Comparing coefficients implies that
$$\begin{aligned}
u_{11} &= a_{11}, & u_{12} &= a_{12}, & u_{13} &= a_{13},\\
l_{21} &= \frac{a_{21}}{u_{11}}, & u_{22} &= a_{22} - l_{21}u_{12}, & u_{23} &= a_{23} - l_{21}u_{13},\\
l_{31} &= \frac{a_{31}}{u_{11}}, & l_{32} &= \frac{a_{32} - l_{31}u_{12}}{u_{22}}, & u_{33} &= a_{33} - l_{31}u_{13} - l_{32}u_{23}.
\end{aligned}$$
Sometimes it is necessary to use pivoting.
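The comparisons above translate row by row into a short program; a minimal MATLAB sketch without pivoting (the function name doolittle is ours):

% Doolittle factorization (no pivoting): L has a unit diagonal; the
% k-th row of U and the k-th column of L follow from the comparisons above.
function [L, U] = doolittle(A)
    n = size(A, 1);
    L = eye(n);
    U = zeros(n);
    for k = 1:n
        % u_kj = a_kj - sum_{s<k} l_ks*u_sj   (row k of U)
        U(k, k:n) = A(k, k:n) - L(k, 1:k-1) * U(1:k-1, k:n);
        % l_ik = (a_ik - sum_{s<k} l_is*u_sk) / u_kk   (column k of L)
        L(k+1:n, k) = (A(k+1:n, k) - L(k+1:n, 1:k-1) * U(1:k-1, k)) / U(k, k);
    end
end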


Example Use LU factorization to solve
$$\begin{pmatrix} 1 & 1 & -1\\ 2 & 4 & -2\\ -1 & 1 & 4 \end{pmatrix}
\begin{pmatrix} x_1\\ x_2\\ x_3 \end{pmatrix} =
\begin{pmatrix} 2\\ 6\\ 9 \end{pmatrix}$$
Doolittle's method implies
$$\begin{pmatrix} 1 & 1 & -1\\ 2 & 4 & -2\\ -1 & 1 & 4 \end{pmatrix} =
\begin{pmatrix}
u_{11} & u_{12} & u_{13}\\
l_{21}u_{11} & l_{21}u_{12} + u_{22} & l_{21}u_{13} + u_{23}\\
l_{31}u_{11} & l_{31}u_{12} + l_{32}u_{22} & l_{31}u_{13} + l_{32}u_{23} + u_{33}
\end{pmatrix}$$

Comparing coefficients we find
$$u_{11} = 1, \qquad u_{12} = 1, \qquad u_{13} = -1,$$
$$l_{21} = \frac{2}{1} = 2, \qquad u_{22} = 4 - 2 \cdot 1 = 2, \qquad u_{23} = -2 - 2 \cdot (-1) = 0,$$
$$l_{31} = \frac{-1}{1} = -1, \qquad l_{32} = \frac{1 - (-1) \cdot 1}{2} = 1, \qquad u_{33} = 4 - 1 - 0 = 3.$$

Now let $y = Ux$, so that $LUx = b$ becomes $Ly = b$, which is solved by forward substitution (i.e., start at the top of the matrix):
$$Ly = \begin{pmatrix} 1 & 0 & 0\\ 2 & 1 & 0\\ -1 & 1 & 1 \end{pmatrix}
\begin{pmatrix} y_1\\ y_2\\ y_3 \end{pmatrix} =
\begin{pmatrix} 2\\ 6\\ 9 \end{pmatrix}$$
$$\begin{aligned}
y_1 &= 2 &&\Rightarrow& y_1 &= 2\\
2y_1 + y_2 &= 6 &&\Rightarrow& y_2 &= 6 - 2 \cdot 2 = 2\\
-y_1 + y_2 + y_3 &= 9 &&\Rightarrow& y_3 &= 9 + 2 - 2 = 9
\end{aligned}$$

Then solve $Ux = y$ for $x$ by back substitution:
$$\begin{pmatrix} 1 & 1 & -1\\ 0 & 2 & 0\\ 0 & 0 & 3 \end{pmatrix}
\begin{pmatrix} x_1\\ x_2\\ x_3 \end{pmatrix} =
\begin{pmatrix} 2\\ 2\\ 9 \end{pmatrix}$$
$$\begin{aligned}
3x_3 &= 9 &&\Rightarrow& x_3 &= 3\\
2x_2 &= 2 &&\Rightarrow& x_2 &= 1\\
x_1 + x_2 - x_3 &= 2 &&\Rightarrow& x_1 &= 2 - 1 + 3 = 4
\end{aligned}$$
Note if we now wanted the result for another $b$ we would only need to redo the forward and back substitution.


2.9 Programs and libraries

There are several packages that implement Gaussian Elimination and LU decomposition, for example:
- MATLAB
- Maple
- Mathematica
And there is a standard programming library called LAPACK that can easily be integrated into self-written programs.
