You are on page 1of 106

Solutions Manual

LINEAR SYSTEM THEORY, 2/E

Wilson J. Rugh
Department of Electrical and Computer Engineering
Johns Hopkins University

PREFACE
With some lingering ambivalence about the merits of the undertaking, but with a bit more dedication than
the first time around, I prepared this Solutions Manual for the second edition of Linear System Theory. Roughly
40% of the exercises are addressed, including all exercises in Chapter 1 and all others used in developments in the
text. This coverage complements the 60% of those in an unscientific survey who wanted a solutions manual, and
perhaps does not overly upset the 40% who voted no. (The main contention between the two groups involved the
inevitable appearance of pirated student copies and the view that an available solution spoils the exercise.)
I expect that a number of my solutions could be improved, and that some could be improved using only
techniques from the text. Also the press of time and my flagging enthusiasm for text processing impeded the
crafting of economical solutionssome solutions may contain too many steps or too many words. However I
hope that the error rate in these pages is low and that the value of this manual is greater than the price paid.
Please send comments and corrections to the author at rugh@jhu.edu or ECE Department, Johns Hopkins
University, Baltimore, MD 21218 USA.

CHAPTER 1

Solution 1.1
(a) For k = 2, (A + B)2 = A 2 + AB + BA + B 2 . If AB = BA, then (A + B)2 = A 2 + 2AB + B 2 . In general if
AB = BA, then the k-fold product (A + B)k can be written as a sum of terms of the form A j B kj , j = 0, . . . , k. The
k
number of terms that can be written as A j B kj is given by the binomial coefficient
. Therefore AB = BA
 j 
implies
(A + B)k =

j =0

k  j kj
AB
j

(b) Write
det [ I A (t)] = n + an1 (t)n1 + . . . + a 1 (t) + a 0 (t)
where invertibility of A (t) implies a 0 (t) 0. The Cayley-Hamilton theorem implies
A n (t) + an1 (t)A n1 (t) + . . . + a 0 (t)I = 0
for all t. Multiplying through by A 1 (t) yields
A 1 (t) =

. . . an1 (t)A n2 (t) A n1 (t)


1 (t)I
_a
________________________________
a 0 (t)

for all t. Since a 0 (t) = det [A (t)], a 0 (t) = det A (t). Assume > 0 is such that det A (t) for all t. Since
A (t) we have aij (t) , and thus there exists a such that a j (t) for all t. Then, for all t,
a 1 (t)I + . . . + A n1 (t)
______________________
A 1 (t) =
det A (t)
+ + . . . + n1
_________________
=

Solution 1.2
(a) If is an eigenvalue of A, then recursive use of Ap = p shows that k is an eigenvalue of A k . However to
show multiplicities are preserved is more difficult, and apparently requires Jordan form, or at least results on
similarity to upper triangular form.
(b) If is an eigenvalue of invertible A, then is nonzero and Ap = p implies A 1 p = (1/ )p. As in (a),
addressing preservation of multiplicities is more difficult.
T
T
(c) A T has eigenvalues __
1 , . . . , __
n since det (I A ) = det (I A) = det (I A).
(d) A H has eigenvalues 1 , . . . , n using (c) and the fact that the determinant (sum of products) of a conjugate is
the conjugate of the determinant. That is

-1-

Linear System Theory, 2/E

Solutions Manual

_ ________
_
_
det ( I A H ) = det ( I A)H = det ( I A)
(e) A has eigenvalues 1 , . . . , n since Ap = p implies ( A)p = ()p.
(f) Eigenvalues of A T A are not nicely related to eigenvalues of A. Consider the example
 0 
 0 0 
A=
, ATA =
 0 0 
 0 
where the eigenvalues of A are both zero, and the eigenvalues of A T A are 0, . (If A is symmetric, then (a)
applies.)

Solution 1.3

(a) If the eigenvalues of A are all zero, then det ( I A) = n and the Cayley-Hamilton theorem shows that A is
nilpotent. On the other hand if one eigenvalue, say 1 is nonzero, let p be a corresponding eigenvector. Then
A k p = k1 p 0 for all k 0, and A cannot be nilpotent.
_
(b) Suppose Q is real and symmetric, and is an eigenvalue of Q. Then also
_ _is_ an eigenvalue. From the
eigenvalue/eigenvector
equation Qp = p we get_ p H Qp = p H p. Also Qp = p, and
_
_ transposing gives
p H Qp = p H p. Subtracting the two results gives ( )p H p = 0. Since p 0, this gives = , that is, is real.
(c) If A is upper triangular, then I A is upper triangular. Recursive Laplace expansion of the determinant about
the first column gives
det ( I A) = ( a 11 ) . . . ( ann )
which implies the eigenvalues of A are the diagonal entries a 11 , . . . , ann .

Solution 1.4
(a)
A=




0 0
1 0

implies A T A =




1 0
0 0

implies

A = 1

(b)
A=




3 1
1 3

implies A T A =




10 6 
6 10 

Then
det (I A T A) = ( 16)( 4)
which implies A = 4.
(c)
A=




1i 0 
0 1+i 

implies A H A =




(1+i)(1i)
0

=
0
(1i)(1+i) 

This gives A = 2 .

Solution 1.5 Let


A=




1/ 
,
0 1/ 

>1

Then the eigenvalues are 1/ and, using an inequality on text page 7,


A

max

1 i, j 2

-2-

aij =




2 0
0 2

Linear System Theory, 2/E

Solutions Manual

Solution 1.6 By definition of the spectral norm, for any 0 we can write
A x
______
x = 1
x = 1
x
A x
A x
_________
________
= max
= max
x = 1/
x
x = 1
x

A =

A x =

max

max

Since this holds for any 0,


A =

max

A x
A x
______
______
= max
x0
x
x

Therefore
A x
______
x

for any x 0, which gives


A x A x

Solution 1.7 By definition of the spectral norm,


AB =

max

=1

max
x

=1

(AB)x =

max

=1

A (Bx)

{A Bx } , by Exercise 1.6

= A max
x

Bx = A B

=1

If A is invertible, then A A 1 = I and the obvious I = 1 give


1 = A A 1 A A 1
Therefore
1
_____
A

A 1

Solution 1.8 We use the following easily verified facts about partitioned vectors:


x1
x2

x 1 , x 2 ;

x1
0

= x 1 ,

0
x2

= x 2

Write


Ax =

A 11 A 12
A 21 A 22

x1
x2

A 11 x 1 + A 12 x 2
A 21 x 1 + A 22 x 2





Then for A 11 , for example,


A =

max

=1

max
x 1

=1

A x

max

=1

A 11 x 1

+ A 12 x 2

A 11 x 1 = A 11

The other partitions are handled similarly. The last part is easy from the definition of induced norm. For example
if

-3-

Linear System Theory, 2/E

Solutions Manual

A=

0 A 12
0 0





then partitioning the vector x similarly we see that


max

=1

A x =

max

x 2

=1

A 12 x 2 = A 12

Solution 1.9 By the Cauchy-Schwarz inequality, and x T = x ,


x T A x x T A x = A T x x

A T x 2 = A x 2
This immediately gives
x T A x A x 2
If is an eigenvalue of A and x is a corresponding unity-norm eigenvector, then
= x = x = A x A x = A

Solution 1.10 Since Q = Q T , Q T Q = Q 2 , and the eigenvalues of Q 2 are 21 , . . . , 2n . Therefore


Q =

2
  (Q
  )
 max

= max

1in

For the other equality Cauchy-Schwarz gives


x T Qx

| x T Q x = Qx x
Q x 2 = [ max

i ]

1in

x Tx

Therefore | x T Qx | Q for all unity-norm x. Choosing xa as a unity-norm eigenvector of Q corresponding to


the eigenvalue that yields max i gives
1in

x Ta Qxa = x Ta

Thus max
x

=1

[ max

1in

i ] xa

= max

1in

x T Qx = Q .

   x)
 T (A
 x)
 = x
 TA TA
 x
 ,
Solution 1.11 Since A x = (A

A =

max xTA TA x

x



=1

max x T A T A x

=1

1/2

The Rayleigh-Ritz inequality gives, for all unity-norm x,


x T A T A x max (A T A) x T x = max (A T A)
and since A T A 0, max (A T A) 0. Choosing xa to be a unity-norm eigenvector corresponding to max (A T A) gives
x Ta A T A xa = max (A T A)
Thus

-4-

Linear System Theory, 2/E

Solutions Manual

max x T A T A x = max (A T A)

=1

T 
  (A
  A)
so we have A = max
.

Solution 1.12 Since A T A > 0 we have i (A T A) > 0, i = 1, . . . , n, and (A T A)1 > 0. Then by Exercise 1.11,
A 1 2

= max ((A T A)1 ) =


n

1
_________
min (A T A)

i (A T A)
n 1
T
max (A A)]
i =1
_[____________
__________________

=
(det A)2
min (A T A) . det (A T A)

A 2(n1)
_________
(det A)2

Therefore
A 1

A n1
________
det A

Solution 1.13 Assume A 0, for the zero case is trivial. For any unity-norm x and y,
y T A x y T A x

y A x = A
Therefore
max

x , y

=1

y T A x A

Now let unity-norm xa be such that A xa = A , and let


ya =

Axa
_____
A

Then ya = 1 and
y Ta A xa =

A xa 2
x Ta A T A xa
A 2
______
________
__________
= A
=
=
A
A
A

Therefore
max

x , y

=1

y T A x = A

Solution 1.14 The coefficients of the characteristic polynomial of a matrix are continuous functions of matrix
entries, since determinant is a continuous function of the entries (sum of products). Also the roots of a
polynomial are continuous functions of the coefficients. (A proof is given in Appendix A.4 of E.D. Sontag,
Mathematical Control Theory, Springer-Verlag, New York, 1990.) Since a composition of continuous functions
is a continuous function, the pointwise-in-t eigenvalues of A (t) are continuous in t.
This argument gives that the (nonnegative) eigenvalues of A T (t)A (t) are continuous in t. Then the maximum at
each t is continuous in t plot two eigenvalues and consider their pointwise maximum to see this. Finally since
square root is a continuous function of nonnegative arguments, we conclude A (t) is continuous in t.
However for continuously-differentiable A (t), A (t) need not be continuously differentiable in t. Consider the
-5-

Linear System Theory, 2/E

Solutions Manual

example


A (t) =

t 0
0 t2

A (t) =

t , 0t 1
t2 , 1 < t <

Clearly the time derivative of A (t) is discontinuous at t = 1. (This overlaps Exercise 1.18 a bit.)
Also the eigenvalues of continuously-differentiable A (t) are not necessarily continuously differentiable, consider

0 1 
A (t) =
 1 t 
An easy computation gives the eigenvalues
(t) =


t 2  4
t
_
______
__

2
2

Thus
.

(t) =

t
1 ________
__

2
2

2 t   4

and this function is not continuous at t = 2.

Solution 1.15 Clearly Q is positive definite, and by Rayleigh-Ritz if x 0,


0 < min (Q) x T x x T Q x max (Q) x T x
Choosing x as an eigenvector corresponding to min (Q) (respectively, max (Q)) shows that these inequalities are
tight. Thus
1 min (Q) , max (Q) 2

Therefore
min (Q 1 ) =

1
1
___
_ ______

2
max (Q)

max (Q 1 ) =

1
1
___
_______

1
min (Q)

Thus Rayleigh-Ritz for the positive definite matrix Q 1 gives


1
1
___
___
I
I Q 1
1

Solution 1.16 If W (t) I is symmetric and positive semidefinite for all t, then for any x,
x T W (t) x x T x
for all t. At any value of t, let xt be an eigenvector corresponding to an eigenvalue (necessarily real) t of W (t).
Then
x Tt W (t) xt = t x Tt xt x Tt xt
That is t . This holds for any eigenvalue of W (t) and every t. Since the determinant is the product of
eigenvalues,
det W (t) n > 0
for any t.

-6-

Linear System Theory, 2/E

Solutions Manual

Solution 1.17 Using the product rule to differentiate A (t) A 1 (t) = I yields
.
_d_ A 1 (t) = 0
A (t) A 1 (t) + A (t)
dt
which gives
_d_ A 1 (t) = A 1 (t) A. (t) A 1 (t)
dt

Solution 1.18

Assuming differentiability of both x (t) and

x (t),

and using the chain rule for scalar

functions,

_d_
dt

_d_
dt
_d_
= 2x (t)
dt

x (t)2 = 2x (t)

x (t)
x (t)

Also we can write, using the product rule and the Cauchy-Schwarz inequality,
_d_ x (t)2 = _d_ x T (t) x (t) = x. T (t) x (t) + x T (t) x. (t) = 2x T (t) x. (t)

dt
dt
.
2x (t)x (t)
For t such that x (t) 0, comparing these expressions gives

_d_
dt

x (t) x (t)

If x (t) = 0 on a closed interval, then on that interval the result is trivial. If x (t) = 0 at an isolated point, then
continuity arguments show that the result is valid. Note that for the differentiable function x (t) = t, x (t) = t
is not differentiable at t = 0. Thus we must make the assumption that x (t) is differentiable. (While this
inequality is not explicitly used in the book, the added differentiability hypothesis explains why we always
differentiate x (t)2 = x T (t) x (t) instead of x (t).)

Solution 1.19 To prove the contrapositive claim, suppose for each i, j there is a constant ij such that
t

fij () d ij ,

t 0

Then by the inequality on page 7, noting that max fij (t) is a continuous function of t and taking the pointwisei, j

in-t maximum,
t

fij () d
F () d mn max
i, j
  

  
mn

t m

| fij () d

0 i =1 j =1
 
mn

ij < ,

i =1 j =1
k

The argument for

F ( j) is similar.
j =0
-7-

t 0

Linear System Theory, 2/E

Solutions Manual

Solution 1.20 If (t), p (t) are a pointwise-in-t eigenvalue/eigenvector pair for A 1 (t), then
A 1 (t) p (t) = (t) p (t) = (t)p (t)

Therefore, for every t,


(t) =

A 1 (t)p (t)
A 1 (t) p (t)
_______________
_____________

p (t)
p (t)

Since this holds for any eigenvalue/eigenvector pair,


det

A (t) =

1
1
1
___
_________________
___________
n >0
=
1 (t) . . . n (t)

det A 1 (t)

for all t.

Solution 1.21 Using Exercise 1.10 and the assumptions Q (t) 0, tb ta ,


tb

tb

tb

tb

ta

ta

ta

ta

Q () d = max [Q ()] d tr [Q ()] d = tr Q () d

Note that
tb

Q ( ) d 0

ta

since for every x


x

tb

tb

ta

ta

Q ( ) d x = x T Q ( ) x d 0

Thus, using a property of the trace on page 8 of Chapter 1, we have


tb

tb

tb

ta

ta

ta

Q () d tr Q () d n Q () d

Finally,
tb

Q ( ) d I

ta

implies, using Rayleigh-Ritz,


tb

Q () d

ta

Therefore
tb

Q () d n

ta

-8-

CHAPTER 2

Solution 2.3

.
The nominal solution for u (t) = sin (3t) is y (t) = sin t. Let x 1 (t) = y (t), x 2 (t) = y (t) to write the

state equation
.
x (t) =





x 2 (t)
(4/ 3)x 31 (t) (1/ 3)u (t)

Computing the Jacobians and evaluating gives the linearized state equation


.
x (t) =

0
1
x (t) +
4 sin2 t 0 





0 
u (t)
1/ 3 

y (t) = 1 0 x (t)


where
x (t) = x (t)





sin t
cos t

u (t) = u (t) sin (3t) ,

y (t) = y (t) sin t , x (0) = x (0)





0
1





Solution 2.5 For u = 0 constant nominal solutions are solutions of


0 = x 2 2x 1 x 2 = x 2 (12x 1 )
2

0 = x 1 + x 1 + x 2 = x 1 (x 1 1) + x 2
Evidently there are 4 possible solutions:
 0 
xa =
, xb =
 0 




1
,
0

xc =

1/ 2 
,
1/ 2 




xd =

1/ 2 
1/ 2 

Since
_f
__ =
x





2x 2 12x 1
1+2x 1 2x 2

f
___
=
u





0
1





evaluating at each of the constant nominals gives the corresponding 4 linearized state equations.

Solution 2.7 Clearly x is a constant nominal if and only if


0 = A x + bu
that is, if and only if A x = bu . There exists such an x if and only if b Im [A ], in other words

-9-

rank A = rank [ A b ].
Also, x is a constant nominal with c x = 0 if and only if
0 = A x + bu
0 = c x
that is, if and only if


A
x=
c




bu

0


As above, this holds if and only if


rank




A
= rank
c




A b
c 0

Finally, x is a constant nominal with c x = u if and only if


0 = A x + bu = ( A + bc ) x
and this holds if and only if
x Ker [ A + bc ]
(If A is invertible, we can be more explicit. For any u the unique constant nominal is x = A 1 bu . Then y = 0 for
u 0 if and only if c A 1 b = 0, and y = u if and only if c A 1 b = 1.)

Solution 2.8
(a) Since



A B
C 0

is invertible, for any K





A + BK B 
=
C
0




A B
C 0




I 0
K I

is invertible. Let




A + BK B 

C
0





R1 R2
R3 R 4

I 0

0 I

Then the 1, 2-block gives R 2 = (A + BK) BR 4 and the 2, 2-block gives CR 2 = I, that is, I = C(A + BK)1 BR 4
Thus [ C (A + BK)1 B ]1 exists and is given by R 4 .
(b) We need to show that there exists N such that
0 = (A + BK)x + BNu
u = Cx
The first equation gives
x = (A + BK)1 BN u
Thus we need to choose N such that
C (A + BK)1 BN u = u
From part (a) we take N = [C (A + BK)1 B ]1 = R 4 .

-10-

Linear System Theory, 2/E

Solutions Manual

Solution 2.10 For u (t) = u , x is a constant nominal if and only if


0 = (A + Du ) x + bu
This holds if and only if bu Im [ A + Du ], that is, if and only if
rank ( A + Du ) = rank




A +Du

bu




If A + Du is invertible, then
x = (A + Du )1 bu

(+)

If A is invertible, then by continuity of the determinant det (A + Du ) 0 for all u such that u is sufficiently
small, and (+) defines a corresponding constant nominal. The corresponding linearized state equation is
.
x (t) = (A + Du ) x (t) + [ b D (A + Du )1 bu ] u (t)
y (t) = C x (t)

Solution 2.12

For the given nominal input, nominal output, and nominal initial state, the nominal solution

satisfies


.
x (t) =

x 1 (t) x 3 (t) , x (0) =


x 2 (t) 2 x 3 (t)








0 
3
2 

1 = x 2 (t) 2 x 3 (t)
Integrating for x 1 (t) and then x 3 (t) easily gives the nominal solution x 1 (t) = t, x 2 (t) = 2 t 3, and x 3 (t) = t 2.
The corresponding linearized state equation is specified by
 0 0
0 
 0 
A = 1 0 1 , B (t)= t , C = 0 1 2
 0 
 0 1 2 


It is unusual that the nominal input and nominal output are constants, but the linearization is time varying.

Solution 2.14 Compute


.
.
.
.
z (t) = x (t) q (t) = A x (t) + Bu (t) + A 1 Bu (t)
.
= A x (t) A[A 1 Bu (t)] + A 1 Bu (t)
.
= A z (t) + A 1 Bu (t)
.
If at any value of ta > 0 we have x (ta ) = q (ta ), that is z (ta ) = 0, and u (t) = 0 for t ta , that is u (t) = u (ta ) for
t ta , then z (t) = 0 for t ta . Thus x (t) = q (ta ) for t ta , and q (t) represents what could be called an
instantaneous constant nominal.

-11-

CHAPTER 3

Solution 3.2 Differentiating term k +1 of the Peano-Baker series using Leibniz rule gives

___

A (1 ) A (2 )

...

A (k +1 ) d k +1

. . . d 1

= A (t) A (2 )

d
. . . d 2 ___
t

d

___
+ A ( 1 )

A (2 )

___
= A ( 1 )


d
___

A () A (2 ) . . . d k +1 . . . d 2
d

d k +1 . . . d 1

A (k +1 )

...

A (k +1 ) d k +1

...

A (2 )

...

A (k +1 )

d k +1 . . . d 1

Repeating this process k times gives

___

A (1 ) A (2 )

...

A (k +1 ) d k +1

. . . d 1

= A ( 1 )

= A ( 1 )

= A ( 1 )

k1

...

A ( k )

___

k



A (k +1 ) d k +1

k1

...

A ( k )

0 A () +

A ( 2 )

0 d k +1

d k . . . d 1

d k . . . d 1

k1

...

A ( k ) d k . . . d 1

A ()




Recognizing this as term k of the uniformly convergent series for (t, ) A () gives

___
(t, ) = (t, ) A ()

(Of course it is simpler to use the formula for the derivative of an inverse matrix given in Exercise 1.17.)
-12-

Linear System Theory, 2/E

Solutions Manual

Solution 3.6 Writing the state equation as a pair of scalar equations, the first one is
.
t
______
x 1 (t)
x 1 (t) =
1 + t2
and an easy computation gives
x 1o
_________
(1 + t 2 )1/2

x 1 (t) =
Then the second scalar equation then becomes

x 1o
.
4t
_________
______
x 2 (t) +
x 2 (t) =
2
(1 + t 2 )1/2
1+t
The complete solution formula gives, with some help from Mathematica,
t

.
(1 + 2 )3/2
1
_________
________
d x 1o
x 2o +
x 2 (t) =
2 2
2 2
(1 + t )
0 (1 + t )
=

   2
 (t 3 /4+5t/ 8)+(3/ 8) sinh1 (t)
1+t
1
_
____________________________
________
x 1o
x
+
2o
(1 + t 2 )2
(1 + t 2 )2

If x 1o = 1, then as t , x 2 (t) 1/ 4, not zero.

Solution 3.7 From the hint, letting


t

r (t) = v ()() d
to

.
we have r (t) = v (t)(t), and
(t) (t) + r (t)

(*)

Multiplying (*) through by the nonnegative v (t) gives


v (t)(t) v (t)(t) + v (t)r (t)
or
.
r (t) v (t)r (t) v (t)(t)
Multiply both sides by the positive quantity
t

v ( ) d

to

to obtain
t

_d_
dt

v ( ) d

r (t)e

to

v (t)(t)e

v ( ) d
to

Integrating both sides from to to t, and using r (to ) = 0 gives

v ( ) d

r (t)e

to

v ()()e
to

Multiplying through by the positive quantity

-13-

v ( ) d
to

Linear System Theory, 2/E

Solutions Manual

v ( ) d

eo
gives
t

v ( ) d

r (t) v ()()e

to

and using (*) yields the desired inequality.

Solution 3.10 Multiply the state equation by 2 z T (t) to obtain


.
_d_
2 z T (t) z (t) =
dt

z (t)2

2 zi (t)aij (t) zj (t)


i =1 j =1

2aij (t)zi (t)zj (t) ,


i =1 j =1

t to

At each t to let
a (t) = 2n 2 max

1 i, j n

aij (t)

Note a (t) is a continuous function of t, as a quick sample sketch indicates. Then, since zi (t) z (t),
_d_
dt

z (t)2

a (t)z (t)2 , t to

Multiplying through by the positive quantity


t

a () d

e
gives

to

a () d

_d_
dt

to

z (t)2

0 , t to

Integrating both sides from to to t and using z (to ) = 0 gives


, t to

z (t) = 0

which implies z (t) = 0 for t to .

Solution 3.11 The vector function x (t) satisfies the given state equation if and only if it satisfies
t

to

to to

to

x (t) = xo + A () x() d +

E (, ) x() d d + B ()u () d

Assuming there are two solutions, their difference z (t) satisfies


t

to

to to

z (t) = A () z() d +

E (, ) z() d d

Interchanging the order of integration in the double integral (Dirichlets formula) gives
-14-

Linear System Theory, 2/E

Solutions Manual

t t

z (t) = A () z() d + E (, ) d z() d


to

to
t

to

A () + E (, ) d z() d

= A (t, ) z () d
to

Thus
t

to

to

(t, ) z () d A (t, )z () d
z (t) = A
By continuity, given T > 0 there exists a finite constant such that A (t, ) for to t to + T. Thus
t

z (t)

z (t) d ,

t [to , to +T ]

to

and the Gronwall-Bellman inequality gives


than one solution.

0 for t [to , to +T ], implying that there can be no more

z (t) =

Solution 3.13 From the Peano-Baker series,


t

(t, )

I + A ( 1 ) d 1 + . . . + A ( 1 )

k1

...

A ( k ) d k . . . d 1

A ( 1 )

j =k +1

j1

...

A ( j ) d j . . . d 1

For any fixed T > 0 there is a finite constant such that A (t) for t [T, T ], by continuity. Therefore

A ( 1 )

j =k +1

j1

...

A ( j ) d j . . . d 1

j =k +1

A ( 1 )

j1

...

A (1 )
j =k +1

A ( j ) d j . . . d 1

j1

...

A ( j ) d j . . . d 1

.
.
.

j =k +1

j =k +1

j =k +1

We need to show that given > 0 there exists K such that

-15-

...

j1

1 d j . . . d 1

t j
_______
j!

(2T) j
______
, t, [T, T ]
j!

Linear System Theory, 2/E

Solutions Manual

j =K +1

2T) j
_(_____
<
j!

(*)

Using the hint,

j =k +1

(2T)i
(2T)k +1 . ______
(2T)k +1+i
2T) j
________
__________
_(_____

=
j!
ki
i =0 (k +1)!
i =0 (k +1+i)!

If k > 2T, then

j =k +1

(2T)k +1
1
(2T)k +1 . _ _______
2T) j ________
__________________
_(_____
=

(k1)!(k +1)(k2T)
1 2T/k
(k +1)!
j!

Because of the factorial in the denominator, given > 0 there exists a K > 2T such that (*) holds.

Solution 3.15 Writing the complete solution of the state equation at t f , we need to satisfy
tf

Ho xo + H f (t f , to ) xo + (t f , )f () d = h

to

(+)

Thus there exists a solution that satisfies the boundary conditions if and only if
tf

h Hf

(t f , )f () d Im[ Ho + H f (t f , to ) ]

to

There exists a unique solution that satisfies the boundary conditions if Ho + H f (t f , to ) is invertible. To compute
a solution x (t) satisfying the boundary conditions:
(1) Compute (t, to ) for t [to , t f ]
(2) Compute Ho + H f (t f , to )
tf

(3) Compute

(t f , )f () d

to

(4) Solve (+) for xo


t

(5) Set x (t) = (t, to ) xo + (t, )f () d , t [to , t f ]


to

-16-

CHAPTER 4

Solution 4.1 An easy way to compute A (t) is to use A (t) = (t, 0)(0, t). This gives
A (t) =

2t 1 
1 2t 




This A (t) commutes with its integral, so we can write (t, ) as the matrix exponential
t

(t, ) = exp

A () d

= exp

(t)2 (t)
(t) (t)2





Solution 4.4 A linear state equation corresponding to the n th -order differential equation is


.
x (t) =

...

0
...

0

.
.
.
.
x (t)
.
.

...
1

.
.
.
a 0 (t) a 1 (t)
an1 (t) 
0
0
.
.
.
0

1
0
.
.
.
0

The corresponding adjoint state equation is




.
z (t) =

0
1
.
.
.
0
0

...
...
.
.
.
...
...

0
0
.
.
.
0
1

a 0 (t) 

a 1 (t)
.

.
z (t)
.

an2 (t)
an1 (t)


th

To put this in the form of an n -order differential equation, start with


.
zn (t) = zn1 (t) + an1 (t) zn (t)
.
zn1 (t) = zn2 (t) + an2 (t) zn (t)
These give
..
.
_d_ [ a (t) z (t) ]
zn (t) = zn1 (t) +
n1
n
dt
_d_ [ a (t) z (t) ]
= zn2 (t) an2 (t) zn (t) +
n1
n
dt
Next,
-17-

Linear System Theory, 2/E

Solutions Manual

.
zn2 (t) = zn3 (t) + an3 (t) zn (t)
gives
.
d2
d3
_d_ [ a (t) z (t) ] + ____
____
[ an1 (t) zn (t) ]
z
(t)
=
z
(t)

n2
n
n
n2
dt
dt 2
dt 3
d2
_d_ [ a (t) z (t) ] + ____
[ an1 (t) zn (t) ]
= zn3 (t) + an3 (t) zn (t)
n2
n
dt
dt 2
Continuing gives the n th -order differential equation
d n2
d n1
dn
_____
_____
____
[ an2 (t) zn (t) ]
(t)
z
(t)
]

[
a
z
(t)
=
n1
n
n
dt n2
dt n1
dt n
_d_ [ a (t) z (t) ] + (1)n +1 a (t) z (t)
+ . . . + (1)n
1
n
0
n
dt

Solution 4.6

For the first matrix differential equation, write the transpose of the equation as (transpose and
differentiation commute)
.T
X (t) = A T (t)X T (t) , X T (to ) = X To

This has the unique solution X T (t) = A T (t) (t, to )X To , so that


X (t) = Xo AT T (t) (t, to )
In the second matrix differential equation, let k (t, ) be the transition matrix for Ak (t), k = 1, 2. Then it is easy
to verify (Leibniz rule) that a solution is
t

to )Xo T2 (t, to )

X (t) = 1 (t,

+ 1 (t, )F () T2 (t, ) d
to

Or, one can generate this expression by using the obvious integrating factors on the left and right sides of the
differential. equation. (To show this is a unique solution, show that the difference Z (t) between any two solutions
satisfies Z (t) = A 1 (t)Z (t) + Z (t)A T2 (t), with Z (to ) = 0. Integrate both sides and apply the Bellman-Gronwall
inequality to show Z (t) is identically zero.)

Solution 4.9 Clearly A (t) commutes with its integral. Thus we compute
exp




0 1

1 0 


and then replace by a () d . From the power series for the exponential,
0

exp




0 1
=
1 0 


k =0

k =0

k =0

1
___
k!




1
_____
(2k)!
1
_____
(2k)!

0 1k k

1 0 







0 1  2k 2k
+
1 0 
(1)k 0
0 (1)k





-18-

k =0

1
_ ______
(2k +1)!

2k +

k =0




0 1  2k +1 2k +1

1 0 

1
_ ______
(2k +1)!





0
(1)k
k +1
(1)
0





2k +1

Linear System Theory, 2/E

Solutions Manual

=
=

cos 0 
+
0 cos 
cos sin 
sin cos 









0 sin 
sin 0 

Replacing as noted above gives (t, 0).


For sufficiency, suppose x (t, 0) = T (t)e Rt . Then T (0) = I and T (t) is continuously
differentiable. Let z (t) = T 1 (t) x (t) so that

Solution 4.10

z (t, 0) = T 1 (t)x (t, 0)T (0) = T 1 (t)T (t)e Rt = e Rt


.
Thus z (t) = R z (t).
For necessity, suppose P (t) is a variable change that gives
.
z (t) = Ra z (t)
Then
z (t, 0) = e

Ra t

= P 1 (t)x (t, 0)P (0)

that is,
x (t, 0) = P (t)e

Ra t

P 1 (0)

Let T (t) = P (t)P 1 (0) and R = P (0)Ra P 1 (0). Then


x (t, 0) = T (t)P (0) e P

(0)RP (0)t

P 1 (0)

= T (t)P (0) [ P 1 (0)e Rt P (0) ] P 1 (0)


= T (t)e Rt

Solution 4.11 Suppose


(t, 0) = e

A1t

A2t

=e

A1t

Then
.
_d_
(t, 0) =
dt
=e
This implies A (t) = e

A1t

A1t

A1t

A2t




( A 1 +A 2 ) e A 2 t

( A 1 +A 2 ) e A 1 t . e A 1 t e A 2 t

A t

[ A 1 +A 2 ] e 1 . Therefore A (0) = A 1 +A 2 is clear, and


.
A t
A t
A t
A t
A (t) = A 1 e 1 ( A 1 +A 2 ) e 1 + e 1 ( A 1 +A 2 ) e 1 (A 1 )
= A 1 A (t) A (t) A 1

Conversely, assume A 1 and A 2 are such that


.
A (t) = A 1 A (t) A (t) A 1 , A (0) = A 1 + A 2
This matrix differential equation has a unique solution (by rewriting it as a linear vector differential equation), and
from the calculation above this solution is
A (t) = e

A1t

( A 1 + A 2 ) e A 1 t

Since

-19-

Linear System Theory, 2/E

Solutions Manual

_d_
dt
we have that (t, 0) = e

A1t A2t

A1t

A2t




= A (t)e

A1t A2t

, e

A10 A20

=I

Solution 4.13 Writing


__ (t, ) = A (t) (t, ) , (, ) = I
A
A
t
in partitioned form shows that
__ (t, ) = A (t) (t, ) , (, ) = 0
21
22
21
21
t
Thus 21 (t, ) is identically zero. But then
__ (t, ) = A (t) (t, ) , (, ) = I
ii
ii
ii
ii
t
for i = 1, 2, and
__ (t, ) = A (t) (t, ) + A (t) (t, ) , (, ) = 0
12
11
12
12
22
12
t
Using Exercise 4.6 with F (t) = A 12 (t) 22 (t, ) gives
t

12 (t, ) = 11 (t, ) A 12 () 22 (, ) d

Solution 4.17 We need to compute a continuously-differentiable, invertible P (t) such that






t 1
1
= P
(t)
1 t






.
0 1
1
P (t) P
(t)P (t)
2
2t 2 t 

Multiplying on the left by P (t), the result can be written as a dimension-4 linear state equation. Choosing the
initial condition corresponding to P (0) = I, some clever guessing gives
 1 0 
P (t) =

t 1

Solution 4.23 Using the formula for the derivative of an inverse matrix given in Exercise 1.17,
__ (, t) = __ 1 (t, ) = 1 (t, )
A
A
A
t
t
= 1
A (t, )

= 1
A (t, )


__ (t, ) 1 (t, )
A
A
t


_____
A (t, ) 1
A (t, )
(t)


A (t)A (t, ) 1
A (t, )


= 1
A (t, ) A (t) = A (, t) A (t)
Transposing gives

-20-

Linear System Theory, 2/E

Solutions Manual

__ T (, t) = A T (t) T (, t)
A
A
t
Since (, ) = I, we have F (t) = A T (t).
Or we can use the result of Exercise 3.2 to compute:

__ (, t) = _____
A (, t) = A (, t)A (t)
A
(t)
t
This implies
__ T (, t) = A T (t) (, t)
A
A
t
Since (, ) = I, we have F (t) = A T (t).

Solution 4.25 We can write


t+

(t + , ) = I +

A () d +

t +


k =2

A (1 ) A (2 ) . . .

k1

A (k ) d k . . . d 1

and
e

_
At ()t

_
= I + At ()t +

Then
R (t, ) = (t

+ , ) e
t +

k =2

_
At ()t

k =2

_
1 k
___
A t ()t k
k!

A (1 ) A (2 ) . . .

k1

_
1 k
___
A t ()t k
A (k ) d k . . . d 1
k!

From A (t) and the triangle inequality,


R (t, )

k =2

k
_t__
= 2 t 2
k!

k =2

2 k2 k2
___

t
k!

Using
1
2
______
___
, k 2

(k2)!
k!
gives
R (t, ) 2 t 2

k =2

= 2 t 2 e t

-21-

1
______
k2 t k2
(k2)!





CHAPTER 5

Solution 5.3 Using the series definition, which involves talent in series recognition,
A 2k +1 =




0 1
, A 2k =
1 0

1 0
, k = 0, 1, . . .
0 1




gives


e At = I +




0 t  ___
1
+
t 0
2!

t 2 0  ___
1
+
0 t2 
3!
t

(e +e )/ 2 (e e )/ 2
=
(e t e t )/ 2 (e t +e t )/ 2 
t









0 t3 
...
+
t3 0 

cosh t sinh t 

sinh t cosh t 

Using the Laplace transform method,


1
_____
2
s 1
s
_____
2
s 1

(sI A)1 =




s 1 
1 s 

s
_____
2
s 1
1
_____
2
s 1

which gives again


e At =

cosh t sinh t 
sinh t cosh t 




Using the diagonalization method, computing eigenvectors for A and letting


 1
1 
P=
 1 1 
gives
P 1 AP =

1 0 
0 1 




Then


e At = P

et 0
0 e t

P 1 =





cosh t sinh t 

sinh t cosh t 

Solution 5.4 Since


A (t) =




t 1
1 t

commutes with its integral,


-22-







Linear System Theory, 2/E

Solutions Manual

A () d

(t, 0) = e 0

= exp

t2/2 t

t t2/2 

And since
t2 / 2 0 
,
0 t2/2 





0 t

t 0

commute,
(t, 0) = exp

1 0 2
t / 2 . exp
0 1

0 1
t
1 0





Using Exercise 5.3 gives


(t, 0) =





e t /2 0
2
0 e t /2

cosh t sinh t 
=
sinh t cosh t 





e t /2 cosh t e t /2 sinh t

2
2
e t /2 sinh t e t /2 cosh t 

Solution 5.7 To verify that


t

A e A d = e At I
0

note that the two sides agree at t = 0, and the derivatives of the two sides with respect to t are identical.
If A is invertible and all its eigenvalues have negative real parts, then limt e At = 0. This gives

e A d = I
0

that is,

A 1 = e A d = e A d

Solution 5.9

Evaluating the given expression at t = 0 gives x (0) = 0. Using Leibniz rule to differentiate the

expression gives
t

D u ( ) d
.
_d_ e A (t) e
bu () d
x (t) =

dt 0
t

__
t

= bu (t) +
0

e A (t) e

D u ( ) d

bu () d

D u ( ) d

Using the product rule and differentiating the power series for e

gives

.
x (t) = bu (t) +
0

Ae A (t) e

D u ( ) d

bu () + e A (t) Du (t)e

D u ( ) d

bu () d


If we assume that AD = DA, then e A (t) D = De A (t) and

-23-

Linear System Theory, 2/E

Solutions Manual

D u ( ) d
D u ( ) d
.
x (t) = bu (t) + A e A (t) e
bu () d + Du (t) e A (t) e
bu () d
t

= A x (t) + Dx (t)u (t) + bu (t)

Solution 5.12 We will show how to define 0 (t), . . . , n1 (t) such that
n1

k (t)Pk =

k =0

n1

n1

k (0)Pk = I

k (t)APk ,

k =0

(*)

k =0

which then gives the desired expression by Property 5.1. From the definitions,
P 1 = AP 0 1 I , P 2 = AP 1 2 P 1 , . . . , Pn1 = APn2 n1 Pn2
Also Pn = (An I)Pn1 = 0 by the Cayley-Hamilton theorem, so APn1 = n Pn1 . Now we equate coefficients of
like Pk s in (*), rewritten as
n1 .
n1
k (t)Pk = k (t)[Pk+1 + k +1 Pk ]
k =0

k =0

to get equations for the desired k (t)s:

.
P 0 : 0 (t) = 1 0 (t)
.
P 1 : 1 (t) = 0 (t) + 2 1 (t)
.
.
. .
Pn1 : n1 (t) = n2 (t) + n n1 (t)

that is,


. 0 (t)
1 (t)

.
.
.

n1 (t) 

1 0 . . .
1 2 . . .

0
0
.
.
.

0
0
.
.
.
0

. .
.
. .
.
. .
.
0 0 . . . n1
0 0 . . . 1 n

0 (t)
1 (t)

.
.
.

n1 (t) 

With the initial condition provided by 0 (0) = 1, k (0) = 0, k = 1, . . . , n1, the analytic solution of this state
equation provides a solution for (*). (The resulting expression for e At is sometimes called Putzers formula.)

Solution 5.17 Write, by Property 5.11,


(t, to ) = P 1 (t)e

R (tto )

P (to )

where P (t) is continuous, T-periodic, and invertible at each t. Let


S = P 1 (to )RP (to ) , Q (t, to ) = P 1 (t)P (to )
Then Q (t, to ) is continuous and invertible at each t, and satisfies
Q (t +T, to ) = P 1 (t +T)P (to ) = P 1 (t)P (to ) = Q (t, to )
with Q (to , to ) = I. Also,

-24-

Linear System Theory, 2/E

Solutions Manual

(t, to ) = P 1 (t) e

P (to )SP 1 (to ) (tto )

= Q (t, to )e

P (to ) = P 1 (t)P (to ) e

S(tto )

P 1 (to )P (to )

S (tto )

Solution 5.19 From the Floquet decomposition and Property 4.9,


T

tr [A ()] d

det (T, 0) = det e RT = e 0

Because the integral in the exponent is positive, the product of eigenvalues of (T, 0) is greater than unity, which
implies that at least one eigenvalue of (T, 0) has magnitude greater than unity.Thus by the argument following
Example 5.12 there exist unbounded solutions.

Solution 5.20 Following the hint, define a real matrix S by


e S 2T = 2 (T, 0)
and set
Q (t) = (t, 0)e St
Clearly Q (t) is real and continuous, and
Q (t +2T) = (t +2T, 0)e S (t +2T) = (t +2T, T)(T, 0)e S 2T e St
= (t +T, 0)(T, 0)e S 2T e St = (t +T, T)2 (T, 0)e S 2T e St
= (t +T, T)e St = (t, 0)e St
= Q (t)
That is, Q (t) is 2T-periodic. (For a proof of the hint, see Chapter 8 of D.L. Lukes, Differential Equations:
Classical to Controlled, Academic Press, 1982.)

Solution 5.22

The solution will be T-periodic for initial state xo if and only if xo satisfies (see text equation

(32))
to +T

(to +T, to ) I ] xo =

(to , )f() d

to

This linear equation has a solution for xo if and only if


to +T

z To

(to , )f() d = 0

(*)

to

for every nonzero vector zo that satisfies


T

[ 1 (to +T, to ) I ]

zo = 0

The solution of the adjoint state equation can be written as


T

z (t) = [ 1 (t, to ) ] zo
Then by Lemma 5.14, (**) is precisely the condition that z (t) be T-periodic. Thus writing (*) in the form

-25-

(**)

Linear System Theory, 2/E

Solutions Manual

to +T

0=

to +T

z To (to ,

)f() d =

to

z T ()f () d

to

completes the proof.

Solution 5.24 Note A = A T , and from Example 5.9,


e At =




cos t sin t 
sin t cos t 

Therefore all solutions of the adjoint equation are periodic, with period of the form k 2, where k is a positive
integer. The forcing term has period T = 2 /, where we assume > 0. The rest of the analysis breaks down
into 3 cases.
Case 1: If 1, 1/ 2, 1/ 3, . . . then the adjoint equation has no T-periodic solution, so the condition (Exercise
5.22)
T

z T ()f () d = 0

(+)

holds vacuously. Thus there will exist corresponding periodic solutions.


Case 2: If = 1, then
T

z
0

()f () d = z To e A f () d
0

= zo 1 sin2 () d + zo 2 cos sin d


0
so there is no periodic solution.
Case 3: If = 1/k, k = 2, 3, . . . , then since
T

cos sin (/k) d = sin sin (/k) d = 0

the condition (+) will hold, and there exist periodic solutions.
In summary, there exist periodic solutions for all > 0 except = 1.

-26-

CHAPTER 6

If the state equation is uniformly stable, then there exists a positive such that for any to and xo
the corresponding solution satisfies

Solution 6.1

x (t) xo ,

t to

Given a positive , take = / . Then, regardless of to , xo implies


x (t)

= , t to

Conversely, given a positive suppose positive is such that, regardless of to , xo implies x (t) ,
t to . For any ta to let xa be such that
xa = 1

(ta , to )xa = (ta , to )

Then xo = xa satisfies xo = , and the corresponding solution at t = ta satisfies


x (ta ) = (ta , to )xo = (ta , to )

Therefore
(ta , to ) /

Such an xa can be selected for any ta , to such that ta to . Therefore


(t, to ) /

for all t and to with t to , and we can take = / to obtain


x (t) = (t, to )xo (t, to )xo xo ,

t to

This implies uniform stability.

Solution 6.4 Using the fact that A (t) commutes with its integral,
t

(t, ) = e

A () d

=I+

e (t)


1
___
e (t)
+
t 
2!





e (t)

e (t)
t

+ ...

For any fixed , 11 (t, ) clearly grows without bound as t , and thus the state equation is not uniformly
stable.

Solution 6.6 Using elementary properties of the norm,

-27-

Linear System Theory, 2/E

Solutions Manual

(t, ) = I

+ A ( ) d + A ( 1 )

A (2 ) d 2 d 1 + . . .

= I + A () d + A (1 )

A (2 ) d 2 d 1 + . . .

= 1 + A () d + A (1 )

A (2 ) d 2 d 1 +

...

(Be careful of t < .) Since A (t) for all t,


t

(t, )

1 + 1 d +

1 d 2 d 1 + . . .

= 1 + t + 2

t2
_|_____
+ ...
2!

For | t ,
(t, )

2 2

_
____
+ ...
2!

1+ +

= e

Solution 6.8 See the proof of Theorem 15.2.


Solution 6.10 Write Re [] = , where > 0 by assumption, so that
t e t = t e t ,

t 0

A simple maximization argument (setting the derivative to zero) gives


t e t

1
___
= ,
e

t 0

so that
t e t ,

t 0

Using this bound we can write


t e t = t e t = t e (/2)t e (/2)t

2 (/2)t
___
e
,
e

t 0

Similarly,
t 2 e t = t 2 e t

4 (/4)t
2 . ___
2
2
___
___
___
e
,
t e (/4)t e (/4)t
t e (/2)t =

e
e

and continuing we get, for any j 0,


j +( j 1)+
+1
j
_2___________
e (/2 )t , t 0
j
( e)
...

t j e t
Therefore

-28-

t 0

Linear System Theory, 2/E

Solutions Manual

t j e t dt
0

j +( j 1)+
+1
_2___________
( e) j
...

e (/2 )t dt
j

j +( j 1)+ . . . +1

2j
_2___________ . ___
j

( e)

22j +( j 1)+ +1
_____________
e j Re [] j +1
...

By Theorem 6.4 uniform stability is equivalent to existence of a finite constant such that
all t 0. Writing

Solution 6.12
e At for

e At =

t j1
t
______
e k
( j1)!

Wkj
k =1 j =1

where 1 , . . . , m are the distinct eigenvalues of A, suppose


Re[k ] 0 , k = 1, . . . , m

(*)

Re[k ] = 0 implies k = 1
k t

Since t e is bounded if Re[k ] < 0 (for any j), and e k = 1 if Re [k ] = 0, it is clear that
bounded for t 0. Thus (*) is a sufficient condition for uniform stability.
A necessary condition for uniform stability is
j1

e At

is

Re[k ] 0 , k = 1, . . . , m
For if Re[k ] > 0 for some k, the proof of Theorem 6.2 shows that e At grows without bound as t . The gap
between this necessary condition and the sufficient condition is illustrated by the two cases
 0 0 
 0 1 
A=
,
A=
 0 0 
 0 0 
Both satisfy the necessary condition, neither satisfy the sufficient condition, and the first case is uniformly stable
while the second case is not (unbounded solutions exist, as shown by easy computation of the transition matrix).
(It can be shown that a necessary and sufficient condition for uniform stability is that each eigenvalue of A has
nonpositive real part and any eigenvalue of A with zero real part has algebraic multiplicity equal to its geometric
multiplicity.)

Solution 6.14 Suppose , > 0 are such that


(t, to ) e

(tto )

for all t, to such that t to . Then given any xo , to , the corresponding solution at t to satisfies
x (t) = (t, to )xo (t, to )xo e

(tto )

xo

and the state equation is uniformly exponentially stable.


Now suppose the state equation is uniformly exponentially stable, so that there exist , > 0 such that
x (t) e

(tto )

xo ,

t to

for any xo and to . Given any to and ta to , choose xa such that


(ta , to )xa = (ta , to ) , xa = 1

Then with xo = xa the corresponding solution at ta satisfies

-29-

Linear System Theory, 2/E

Solutions Manual

x (ta ) = (ta , to )xa = (ta , to ) e

(ta to )

Since such an xa can be selected for any to and ta > to , we have


(t, ) e (t)

for all t, such that t , and the proof is complete.


.

Solution 6.18 The variable change z (t) = P 1 (t) x (t) yields z (t) = 0 if and only if
.
P 1 (t) A (t)P (t) P 1 (t)P (t) = 0

.
for all t. This clearly is equivalent to P (t) = A (t)P (t), which is equivalent to A (t, ) = P (t)P 1 (). Now, if P (t)
is a Lyapunov transformation, that is P (t) < and det P (t) > 0 for all t, then
A (t, ) P (t)P 1 () P (t)

P ()n1
__________
det P ()

n / =
for all t and .
Conversely, suppose

A (t, ) for
P (t)

all t and . Let P (t) = A (t, 0). Then P (t) and

P 1 (t)n1
___________
= P 1 (t)n1 det P (t)
det P 1 (t)

for all t. Using P (t) 1/P 1 (t) gives


det

P (t)

1
__________
1
P (t)n

and since P 1 (t) = A (0, t) ,


det

P (t)

1
___
n

Thus P (t) is a Lyapunov transformation, and clearly


.
P 1 (t) A (t)P (t) P 1 (t)P (t) = 0
for all t.

-30-

CHAPTER 7

Solution 7.3 Let A = FA, and take Q = F 1 , which is positive definite since F is positive definite. Then since F
is symmetric,
T

A Q + QA = A T FF 1 + F 1 FA = A T + A < 0
This gives exponential stability by Theorem 7.4.

Solution 7.5

By our default assumptions, a (t) is continuous. Since Q is constant, symmetric, and positive
definite, the first condition of Theorem 7.2 holds. Checking the second condition,

a (t) a (t)/ 2 
0
A T (t)Q + QA (t) =
 a (t)/ 2
1 
gives the requirements
a (t) 0 , 4a (t) a 2 (t)
Thus the state equation is uniformly stable if a (t) is a continuous function satisfying 0 a (t) 4 for all t.

Solution 7.6 With




Q(t) =

.
a (t) 0 
,
A T (t)Q(t) + Q(t)A (t) + Q (t) =
0 1





.
a (t) 0 

0 4 

we need to assume that a (t) is continuously differentiable and a (t) for some positive constants and so
.
that the first condition of Theorem 7.4 is satisfied. For the second condition we need to assume a (t) , for
some positive constant . Unfortunately this implies, taking any to ,
t

.
a (t) = a (to ) + a () d a (to ) + to t , t to
to

and for sufficiently large t the positivity condition on a (t) will be violated. Thus there is no a (t) for which the
given Q (t) shows uniform exponential stability of the given state equation.

Solution 7.9 We need to assume that a(t) is continuously differentiable. Consider




Q (t) I =

2a (t)+1
1

Suppose there exists a small, positive constant such that

-31-

1

(t)+1
_a______

a (t)


Linear System Theory, 2/E

Solutions Manual

a (t) 1/ (2)

for all t. Then


2a (t) + 1 + 1 > 1
1
(t)+1
______
_a______
= 1+ > 1
1+
1/ (2)
a (t)
and Q (t)I 0, for all t, follows easily. Similarly, with = (2+1)/ we can show IQ (t) 0 using
1
+1
___
_2____
1 = 1
2

2
1
+1
(t)+1 _2____
____
_a______
1
1

a
(t)
a (t)

2a (t) 1

Next consider
.
A (t)Q (t) + Q (t) A (t) + Q (t) =
T

.
2a (t)2a(t)

.
a (t)
_____
2a(t) 2
a (t)

This gives that for uniform exponential stability we also need existence of a small, positive constant such that
.
a 2 (t) 2a 3 (t) a (t) a (t)/2
for all t. For example, a (t) = 1 satisfies these conditions.

Solution 7.11

Suppose that for every symmetric, positive-definite M there exits a unique, symmetric,
positive-definite Q such that
A T Q + QA + 2Q = M

(*)

(A + I)T Q + Q (A + I) = M

(**)

that is,
Then by the argument above Theorem 7.11 we conclude that all eigenvalues of A + I have negative real parts.
That is, if
0 = det [ I (A + I) ] = det [ ( )I A ]
then Re [] < 0. Since > 0, this gives Re [ ] < , that is, all eigenvalues of A have real parts strictly less
than .
Now suppose all eigenvalues of A have real parts strictly less than . Then, as above, eigenvalues of
A + I have negative real parts. Then by Theorem 7.11, given symmetric, positive-definite M there exists a
unique, symmetric, positive-definite Q such that (**) holds, which implies (*) holds.

Solution 7.16 For arbitrary but fixed t 0, let xa be such that


xa = 1

e At xa = e At

By Theorem 7.11 the unique solution of QA + A T Q = M is the symmetric, positive-definite matrix

Q = e A Me A d
T

Thus we can write

-32-

Linear System Theory, 2/E

Solutions Manual

x Ta e A Me A xa d x Ta e A Me A xa d
T

= x Ta Qxa max (Q) = Q


Also, using a change of integration variable from to = t,

x Ta e A Me A xa d = x Ta e A (t + ) Me A(t + ) xa d
T

= x Ta e A t Qe At xa min (Q)e At xa 2 =
T

e At 2
_______
Q 1

Therefore
e At 2
_______
Q
Q 1

Since t was arbitrary, this gives



Q
 1
 
max e At Q 


t0

Solution 7.17

Let F = A + ()I. Then F A +, all eigenvalues of F have real parts less than ,

and
e Ft = e At e ()t
Thus
e At = e ( )t e Ft

(*)

By Theorem 7.11 the unique solution of F Q + QF = I is


T

Q = e F e F d
T

For any n 1 vector x,


T
d T F T F
___
x e e x = x Te F [ F T + F ] e F x
d

F T + F x T e F e F x
T

(Exercise1.9)

2(A +) x T e F e F x
T

Thus for any t 0,

x T e F t e Ft x =
T

d
___
d




x Te F e F x
T




2 (A +) x T e F e F x d
T

2 (A +) x T Qx
Therefore

-33-

Linear System Theory, 2/E

Solutions Manual

x T e F t e Ft x 2 (A +) x T Qx , t 0
T

which gives
e Ft

 
2  ( A +    ) Q
 , t 0

Thus the desired inequality follows from (*).

Solution 7.19 To show uniform exponential stability of A (t), write the 1,2-entry of A (t) as a (t), and let
Q (t) = q (t) I, where
2+e 2t , t 1/ 2
q (t) =
q (t) , 1/ 2 < t < 1/ 2

3 , t 1/ 2

Here q (t) is a continuously-differentiable patch satisfying 2 q (t) 3 for 1/ 2 < t < 1/ 2, and another
condition to be specified below. Then we have 2 I Q (t) 3 I for all t. Next consider
.


.
2q (t)+q (t) a (t)q (t)
A T (t)Q (t) + Q (t)A (t) + Q (t) =
. I
a (t)q (t) 6q (t)+q (t) 

1

We choose = 1 and show that


.

2q (t)+q (t)+1
a (t)q (t)
0
.
a (t)q (t)
6q (t)+q (t)+1 

.
for all t. With t < 1/ 2 or t > 1/ 2 it is easy to show that q (t)q (t)1 0, and a patch function can be sketched
such that this inequality is satisfied for 1/ 2 < t < 1/ 2. Then, for all t,
.
.
2q (t)+q (t)+1 q (t) 0 , 6q (t)+q (t)+1 5q (t) 0
.
.
[2q (t)+q (t)+1][6q (t)+q (t)+1] a 2 (t)q 2 (t) [5a 2 (t)]q 2 (t) 4q 2 (t) 0


Thus we have proven uniform exponential stability.


To show A T (t) is not uniformly exponentially stable, write the state equation as two scalar equations to
compute
A T (t) (t, 0) =





e t
0
(e t e 3t )/ 4 e 3t

, t 0

and the existence of unbounded solutions is clear.


Using the characterization of uniform stability in Exercise 6.1, given > 0, let = 1 (()).
Then > 0, since () > 0, and the inverse exists since (.) is strictly increasing. Then for any to , and any xo such
that xo , the corresponding solution is such that

Solution 7.20

v (t, x (t)) v (to , xo ) (xo ) () = () , t to


Therefore
(x (t)) v (t, x (t)) () , t to

But since (.) is strictly increasing, this gives x (t) , t to , and thus the state equation is uniformly stable.

-34-

CHAPTER 8

Solution 8.3 No. The matrix


2 8
0 1

A=





has negative eigenvalues, but


4 8
8 2

A + AT =





has an eigenvalue at zero.

Solution 8.6 Viewing F (t)x (t) as a forcing term, for any to , xo , and t to we can write
t

x (t) = A +F (t, to ) xo = A (t, to ) xo + A (t, )F () x() d


to

which gives, for suitable constants , > 0,


x (t) e

(tto )

xo +

e (t) F ()x() d

to

Thus
t

x (t) e

to

xo +

F () e x() d

to

and the Gronwall-Bellman inequality (Lemma 3.2) implies


t

e t x (t) e

to

xo e

Therefore

-35-

F () d

to

Linear System Theory, 2/E

Solutions Manual

x(t) e

(tto )

F () d

eo

xo

(tto )

(tto )

F () d

eo
e

xo

xo

and we conclude the desired uniform exponential stability.

Solution 8.8 We can follow the proof of Theorem 8.7 (first and last portions) to show that the solution

Q (t) = e A

(t)

e A (t) d

of
A T (t)Q (t) + Q (t) A (t) = I
is continuously-differentiable and satisfies, for all t,
I Q (t) I

where and are positive constants. Then with

.
F (t) = A (t) 12Q 1 (t)Q (t)

an easy calculation shows


.
F T (t)Q (t) + Q (t)F (t) + Q (t) = A T (t)Q (t) + Q (t) A (t) = I
Thus
.
x (t) = F (t) x (t)
is uniformly exponentially stable by Theorem 7.4.

Solution 8.9 As in Exercise 8.8 we have, for all t,


I Q (t) I

which implies
Q 1 (t)

1
__

Also, by the middle portion of the proof of Theorem 8.7,


.
.
Q (t) 2A (t)Q (t)2
Therefore
.

12Q 1 (t)Q (t)

2
_
___

for all t. Write


.
.
.
x (t) = A (t) x (t) = [ A (t) 12Q 1 (t)Q (t) ] x (t) + 12Q 1 (t)Q (t) x (t)
.

= F (t) x (t) + 12Q 1 (t)Q (t) x (t)

-36-

Linear System Theory, 2/E

Solutions Manual

Then the complete solution formula gives


t
.
x (t) = F (t, to ) xo + F (t, ) 12Q 1 ()Q () x() d
to

and the result of Exercise 8.8 implies that there exists positive constants , such that, for any to and t to ,
x (t) e

(tto )

xo +

e (t)

2
_
___
x() d

to

Therefore
t

x (t) e

to

xo +

to

2
_
____
e x() d

and the Gronwall-Bellman inequality (Lemma 3.2) implies


t

e t x (t) e

to

xo e

2 / d

to

Thus
x (t) e

(2 /)(tto )

xo

Now, writing the left side as A (t, to )xo and for any to and t to choosing the appropriate unity-norm xo gives
A (t, to ) e

(2 /)(tto )

For sufficiently small this gives the desired uniform exponential stability. (Note that Theorem 8.6 also can be
.
used to conclude that uniform exponential stability of x (t) = F (t) x (t) implies uniform exponential stability of
.
.
x (t) = [ F (t) + 12Q 1 (t)Q (t) ] x (t) = A (t) x (t)
for sufficiently small.)
With F (t) = A (t) + ( / 2)I we have that
F (t) satisfy Re [F (t)] / 2. The unique solution of

Solution 8.10

F(t) + / 2,

.
.
F (t) = A (t), and the eigenvalues of

F T (t)Q (t) + Q (t)F (t) = I


is

Q (t) = e F

(t)

e F (t) d

As in the proof of Theorem 8.7, there is a constant such that Q (t) for all t. Now, for any n 1 vector z,
T
d T F T (t) F (t)
___
z e
e
z = z T e F (t) [ F T (t) + F (t) ] e F (t) z
d

(2 + ) z T e F
Thus for any 0,

-37-

(t)

e F (t) z

Linear System Theory, 2/E

Solutions Manual

z T e F

(t)

e F (t) z =

d
___
d




z Te F

(t)

e F (t) z




(2 + ) z T e F

(t)

e F (t) z d

(t)

e F (t) z d

(2 + ) z T e F
0

(2 + ) z T Q (t) z
Thus
eF

(t)

e F (t) (2 + ) Q (t) , 0

and using
e F(t) = e A(t) e ( /2) , 0
gives
e A(t)

 
+

 )
 e ( /2) ,
(2
0

Solution 8.11 Write (the chain rule is valid since u (t) is a scalar)
.
. 
.
db
dA
___
___
(u (t))u (t)
(u (t))u (t) A 1 (u (t))b (u (t)) A 1 (u (t))
q (t) = A 1 (u (t))
du
du


.

= B (t)u (t)
Then
.
x (t) = A (u (t)) x (t) + b (u (t))
= A (u (t)) [ x (t) q (t) ] + A (u (t))q (t) + b (u (t))
= A (u (t)) [ x (t) q (t) ]
gives
_d_ [ x (t) q (t) ] = A (u (t)) [ x (t) q (t) ] + B (t)u. (t)
dt

(*)

Since
.
.
dA
dA
___
_d_ A (u (t)) = ___
(u (t))u (t)
(u (t))u (t) =
du
du
dt
.
we can conclude from Theorem 8.7 that for sufficiently small, and u (t) such that u (t) for all t, there exist
positive constants and (depending on u (t)) such that

A (u (t)) (t, ) e (t)

, t 0

But the smoothness assumptions on A (.) and b (.) and the bounds on u (t) also give that there exists a positive
constant such that B (t) for t 0. Thus the solution formula for (*) gives
x (t) q (t) x (0) q (0) + /

for u (t) as above, and the claimed result follows.

-38-

, t 0

CHAPTER 9

Solution 9.7 Write





B (AI)B (AI)2 B . . .




A 2 B2AB+2 B . . .

ABB

B AB A 2 B . . .




Im Im 2 Im
0 Im 2Im
0 0
Im
0 0
0
.
.
.
.
.
.
.
.
.

...
...
...
...
.
.
.










Clearly the two controllability matrices have the same rank. (The solution is even easier using rank tests from
Chapter 13.)

Solution 9.8 Since A has negative-real-part eigenvalues,

Q = e At BB T e A t dt
T

is well defined, symmetric, and

AQ + QA =




_d_
dt

Ae At BB T e A t + e At BB T e A t A T



e At BB T e A







dt

dt

= BB T
Also it is clear that Q is positive semidefinite. If it is not positive definite, then for some nonzero, n 1 x,

0 = x Qx = x T e At BB T e A t x dt
T

x T e At B 2 dt
0

Thus x e B = 0 for all t 0, and it follows that


T At

-39-

Linear System Theory, 2/E

Solutions Manual

dj
___
dt j

0=




x T e At B

= x TA jB
t =0

for j = 0, 1, 2, . . . . But this implies



x T B AB . . . A n1 B





=0

which contradicts the controllability hypothesis. Thus Q is positive definite.

Solution 9.9 Suppose is an eigenvalue of A, and p is a corresponding left eigenvector. Then p 0, and
p TA = p T
This implies both

_
p HA = p H ,

Now suppose Q is as claimed. Then

A T p = p

_
p H AQp + p H QA T p = p H Qp + p H Qp
= p H BB T p

that is,
2Re [] p H Q p = p H BB T p

(*)

This gives Re [] 0 since Q is positive definite. Now suppose Re [] = 0. Then (*) gives p H B = 0. Also, for
j = 1, 2, . . . ,
_
_
p H A j B = p H A j1 B = . . . = j p H B = 0
Thus

p H B AB . . . A n1 B





=0

which contradicts the controllability assumption. Therefore Re [] < 0.

Solution 9.10 Let

tf

Wy (to , t f ) = C (t f )(t f , t)B (t)B T (t)T (t f , t)C T (t f ) dt


to

If Wy (to , t f ) is invertible, given any x(to ) = xo choose


u (t) = B T (t)T (t f , t)C T (t f )W 1
y (to , t f )C (t f )(t f , to ) xo
Then the corresponding complete solution of the state equation gives
tf

y (t f ) = C (t f )(t f , to ) xo C (t f )(t f , )B ()B T ()T (t f , )C T (t f ) d W 1


y (to , t f ) C (t f )(t f , to ) xo
to

=0
and we have shown output controllability on [to , t f ]..

-40-

Linear System Theory, 2/E

Solutions Manual

Now suppose the state equation is output controllable on [to , t f ], but that Wy (to , t f ) is not invertible. Then
there exists a p 1 vector ya 0 such that y Ta Wy (to , t f )ya = 0. Using by now familiar arguments, this gives
y Ta C (t f )(t f , t)B (t) = 0 , t [to , t f ]
Consider the initial state
xo = (to , t f )C T (t f )[ C (t f )C T (t f ) ]1 ya
which is well defined and nonzero since rank C (t f ) = p. There exists an input ua (t) such that
tf

0 = C (t f )(t f , to ) xo + C (t f )(t f , )B ()ua () d


to

tf

= ya + C (t f )(t f , )B ()ua () d
to

Premultiplying by

y Ta

gives
0= y Ta ya

This contradicts ya 0, and thus Wy (to , t f ) is invertible.


The rank assumption on C (t f ) is needed in the necessity proof to guarantee that xo is well defined. For
m = p = 1, invertibility of Wy (to , t f ) is equivalent to existence of a ta (to , t f ) such that
C (t f )(t f , ta )B (ta ) 0
That is, there exists a ta (to , t f ) such that the output response at t f to an impulse input at ta is nonzero.

Solution 9.11 From Exercise 9.10, since rank C = p, the state equation is output controllable if and only if for
some fixed t f > 0,

tf

Wy = Ce

A (t f t)

BB T e

A T (t f t)

C T dt

is invertible. We will show this holds if and only if




rank

CB CAB . . . CA n1 B




=p

by showing equivalence of the negations. If Wy is not invertible, there exists a nonzero p 1 vector ya such that
y Ta Wy ya = 0. Thus
y Ta Ce

A (t f t)

B = 0 , t [0, t f ]

Differentiating repeatedly, and evaluating at t = t f gives


y Ta CA j B = 0 , j = 0, 1, . . .
Thus

y Ta CB CAB . . . CA n1 B





=0

and this implies


rank




CB CAB . . . CA n1 B




<p

Conversely, if the rank condition fails, then there exists a nonzero ya such that y Ta CA j B = 0,
j = 0, . . . , n1. Then

-41-

Linear System Theory, 2/E

Solutions Manual

y Ta Ce

A (t f t)

n1

k (t f t) A k B = 0 ,

B = y Ta C

t [0, t f ]

k =0

Therefore y Ta Wy ya = 0, which implies that Wy is not invertible.


For m = p = 1 argue as in Solution 9.10 to show that a linear state equation is output controllable if and
only if its impulse response (equivalently, transfer function) is not identically zero.

Solution 9.17 Beginning with


y (t) = c (t)x (t)
.
.
.
y (t) = c (t)x (t) + c (t)x (t)
.
= [c (t) + c (t)A (t)]x (t) + c (t)b (t)u (t)
= L 1 (t)x (t) + L 0 (t)b (t)u (t)
it is easy to show by induction that
k1

y (k) (t) = Lk (t)x (t) +

j =0

Now if

d k j 1
_______
[ L j (t)b (t)u (t) ] , k = 1, 2, . . .
dt k j 1

__ 1

Ln (t)M = 0 (t) 1 (t) . . . n 1 (t)




then


n 1

i (t)Li (t) =
i =0




0 (t)

. . . n 1 (t)





L 0 (t) 

.
.
= Ln (t)
.

Ln 1 (t) 

Thus we can write


y (n) (t)

n 1

n1

i (t)y (i) (t) = Ln (t)x (t) +

i =0
j =0

n 1

n 1

i1

i (t)Li (t)x (t) i=0 i (t) j=0


i =0

n1

d n j 1
_______
[ L j (t)b (t)u (t) ]
dt n j 1

j =0

d i j 1
______
[ L j (t)b (t)u (t) ]
dt i j 1

n 1
i1
d i j 1
d n j 1
______
_______
[ L j (t)b (t)u (t) ]
[
L
(t)b
(t)u
(t)

(t)
]

j
i

i j 1
dt n j 1
i =0
j =0 dt

This is in the desired form of an n th -order differential equation.

-42-

CHAPTER 10

Solution 10.2

We show equivalence of full-rank failure in the respective controllability and observability


matrices, and thus conclude that one realization is controllable and observable (minimal) if and only if the other is
controllable and observable (minimal). First,
rank

B AB

. . . A n1 B




<n

if and only if there exits a nonzero, n 1 vector q such that


q T B = q T AB = . . . = q T A n1 B = 0
This holds if and only if
q T B = q T (A+BC)B = . . . = q T (A+BC)n1 B = 0
which is equivalent to
rank

B (A+BC)B

. . . (A+BC)n1 B




<n

Similarly,


rank






C
CA
.
.
.
CA n1






<n




if and only if there exists a nonzero, n 1 vector p such that


Cp = CAp = . . . = CA n1 p = 0
This is equivalent to
Cp = C (A+BC)p = . . . = C (A+BC)n1 p = 0
which is equivalent to


rank






C
C (A+BC)
.
.
.
C (A+BC)n1

Solution 10.9 Since


-43-








<n

Linear System Theory, 2/E

Solutions Manual

C (t)B () = H (t)F ()

(*)

for all t, , picking an appropriate to and t f > to ,


tf

Mx (to , t f )Wx (to , t f ) = C (t)H (t) dt


T

to

tf

F()B T () d

(**)

to

where the left side is a product of invertible matrices by minimality. Therefore the two matrices on the right side
are invertible. Let
tf

T
P 1 = M 1
x (to , t f ) C (t)H (t) dt
to
T

Then multiply both sides of (*) by C (t) and integrate with respect to t to obtain
tf

Mx (to , t f )B () = C T (t)H (t) dt F ()


to

for all . That is,


B () = P 1 F ()
for all . Similarly, (*) gives
tf

C (t)Wx (to , t f ) = H (t) F()B T () d


to

that is,
tf

C (t) = H (t) F()B T () d W 1


x (to , t f )
to

But (**) then gives


tf

tf

F()B T () d W 1
C T (t)H (t) dt
x (to , t f ) =


to




to





Mx (to , t f ) = P

so we have
C (t) = H (t)P
for all t. Noting that 0 = P 1 . 0 . P, we have that P is a change of variables relating the two zero-A minimal
realizations. Since a change of variables always can be used to obtain a zero-A realization, this shows that any
two minimal realizations of a given weighting pattern are related by a variable change.

Solution 10.11 Evaluating


X (t+) = X (t) X ()
at = t gives that X (t) is invertible, and X 1 (t) = X (t) for all t. Differentiating with respect to t, and with
respect to , and using

__ X (t+) = ___
X (t+)

t
gives

-44-

Linear System Theory, 2/E

Solutions Manual






_d_ X (t)  X () = X (t)
dt







d
___
X ( ) 
d


which implies
d
___
X () = X (t)
d


_d_ X (t)  X ()
dt






Integrate both sides with respect to t from a fixed to to a fixed t f > to to obtain
tf

d
___
X () = X (t)
(t f to )
d
to






_d_ X (t)  dt X ()
dt


Now let
A=

1
_____
t f to

tf

X (t)

to


_d_ X (t)  dt
dt


to write
d
___
X () = A X () , X (0) = I
d
This implies X () = e A . (Of course there are quicker ways. For example note that
d

___
__ X (t+) = ___
X ( )
X (t+) = X (t)
d

t
.
.
Evaluating at = 0 gives X (t) = X (t)X (0), which implies
.

X (t) = X (0)e X (0)t = e X (0) t


Also the result holds for continuous solutions of the functional equation, though the proof is much more difficult.)

Solution 10.12 If rank Gi = ri we can write (admittedly using a matrix factorization unreviewed in the text)
Gi = Ci Bi
where Ci is p ri , Bi is ri m, and both have rank ri . Then it is easy to check that
A = block diagonal

{ i Ir

, i = 1, . . . , r

},

B=





B1
.
.
.
Br






, C=

C1

. . . Cr




is a realization of G (s) of dimension r 1 + . . . + rr = n. We need only show that this realization is controllable
and observable. Write
B1 0 . . . 0 
. . . n1 I 

1
m
0 B 2 . . . 0   Im 1 Im
.
.


.
. 

. .
.
.
B AB . . . A n1 B =
.
.
.
.

. .
.
.



.
. 
.
. 
. .
.
.



. . . n1 I 
0 0 . . . Br   Im r Im
r
m 

On the right side the first matrix has rank n, while the second is invertible due to its Vandermonde structure and
the fact that 1 , . . . , r are distinct. This shows controllability. A similar argument shows observability.
(Controllability and observability can be shown more easily using rank tests developed in Chapter 13.)

-45-

CHAPTER 11

Solution 11.4 Since


rank b Ab = rank

1 1 
=1
1 1 




the state equation is not minimal. It is easy to compute the impulse response:
G (t, ) = C (t)e A (t) B = (t 2 + 1) e (t)
Then a factorization is obvious, giving a minimal realization
.
x (t) = e t u (t)
y (t) = (t 2 + 1)e t x (t)

Solution 11.7 For the given impulse response,


22 (t, ) =




1+e 2t / 2+e 2 / 2 e 2
0
e 2t





It is easy to check that rank 22 (t, ) = 2 for all t, , and a little more calculation shows that rank 33 (t, ) = 2.
Then a minimal realization is, using formulas in the proof of Theorem 11.3,
F(t, ) = 22 (t, )
Fc (t, ) = 1+e 2t / 2+e 2 / 2

e 2




C (t) = Fc (t, t)F 1 (t, t) = 1 0




B (t) = Fr (t, t) =




1+e 2t
e 2t

A (t) = Fs (t, t)F 1 (t, t) =











e 2t 0  1
 F
(t, t) =
2e 2t 0 

Solution 11.12 The infinite Hankel matrix is

-46-




0 1

0 2

Linear System Theory, 2/E

Solutions Manual





1
1
1
.
.
.

1 ...
1 ...
1 ...
. .
. .
. .

1
1
1
.
.
.








and clearly the rank condition in Theorem 11.7 is satisfied with l = k = n = 1. Then, following the proof of
Theorem 11.7,
F = Fs = Fc = Fr = H 1 = H s1 = 1
and a minimal (dimension-1) realization is
.
x (t) = x (t) + u (t)
y (t) = x (t)
For the truncated sequence,







1
1
1
0
.
.
.

1
1
0
0
.
.
.

1
0
0
0
.
.
.

...
...
...
...
.
.
.

0
0
0
0
.
.
.









The rank condition in Theorem 11.7 is satisfied with l = k = n = 3. Taking


F = H3 =




1 1 1
1 1 0  , Fs = H s3 =
1 0 0

Fc = 1 1 1 , Fr =









1 1 0
1 0 0
0 0 0

1
1
1

gives a minimal realization specified by


A = Fs F 1 =




0 1 0
0 0 1 , B=
0 0 0




1
1 , C= 1 0 0
1





(This is an example of Silvermans formulas in Exercise 11.13. Also, it is not hard to see that truncation of the
sequence after any finite number n of 1s will lead to a minimal realization of dimension n.)

Solution 11.13 Writing the rank-n infinite Hankel matrix as












G0 G1 . . .
G1 G2 . . .
.
.
.
.
.
.
.
.
.
Gn1 Gn . . .
.
.
.
.
.
.
.
.
.











suppose for some 1 i n a left-to-right column search yields that the first linearly dependent column is column
i. Then there exist scalars 0 , . . . , i2 such that column i is given by the linear combination
-47-

Linear System Theory, 2/E

Solutions Manual









Gi1
Gi
.
.
.
Gn2+i
.
.
.

= 0

G0 

G1 
. 
.
.  + . . . +
i2
Gn1 
. 
. 
.









Gi2
Gi1
.
.
.
Gn3+i
.
.
.











By ignoring the top entry, this linear combination shows that column i +1 is given by the same linear combination
of the i1 columns to its left, and so on. Thus by the rank assumption on there cannot exist such an i, and the
first n columns of are linearly independent. A similar argument shows that the first n columns of n,n +j are
linearly independent, for every j 0, and thus that nn is invertible.
It remains only to show that the given A, B, C provides a realization for G (s), since minimality is then
immediate. Premultiplication by nn verifies
1
nn

Gk
.
.
.





Then, since A =




Gn +k1

snn

= ek +1 , k = 0, . . . , n1




1
nn ,






Gk
.
.
.
Gn +k1





= snn ek +1 =

Gk +1
.
.
.
Gn +k





, k = 0, . . . , n1




Now, CB = G 0 , and
G0
.
.
.
Gn1

CA j B = CA j1 A





= ... =C





= CA j1

Gj
.
.
.





Gn1+j

G1
.
.
.
Gn










= G j , j = 1, . . . , n





To complete the verification we use the fact that each dependent column of n,n +j is given by the same linear
combination of n columns to its left. This follows by writing column n +1 of as a linear combination of the first
n (linearly independent) columns, and deleting partitions from the top of the resulting expression. This implies
that multiplying any column of n,n +j by A gives the next column to the right. Thus


CA n +j B = CA j





Gn
.
.
.
G 2n1





=C

= Gn +j , j = 1, 2, . . .

-48-

Gn +j
.
.
.
G 2n1+j







CHAPTER 12

Solution 12.1 If the state equation is uniformly bounded-input, bounded-output stable, then it is clear from the
definition that given we can take = .
Now suppose the , condition holds. In particular we can take = 1 and assume is such that, for any to ,
u (t)

1 , t to

implies
y (t) ,

t to

Now suppose u (t) is any bounded input signal. Given to let = sup u (t). Note > 0 can be assumed, for
t to

otherwise we have a trivial case. Then u (t)/ 1 for all t to , and the zero-state response to u (t) satisfies
t

y (t) =

G (t, )u () d

to
t

= G (t, )u ()/ d
to

= sup u (t) , t to
t to

Thus we have
sup y (t) sup u (t)

t to

t to

and conclude uniform bounded-input, bounded-output stability, with = .

Solution 12.8 For any > 0, and constant A and B,


t

W (t, t) =

e A (t) BB T e A

(t)

Changing the variable of integration from to = t yields

W (t, t) = e A e A BB T e A d e A
T

It is easy to prove (by showing the equivalence of the negations by contradiction, as in the proof of Theorem 9.5)
that this is positive definite if and only if

-49-

Linear System Theory, 2/E

Solutions Manual

rank

B AB . . . A n1 B




=n

Then given we can take


= min

e A BB T e A d e A

For a time-varying example, take scalar a (t) = 0, b (t) = e

t /2

. Then

W (t, t) = e t (e 1)
Given any > 0, W (t, t) > 0 for all t, but there exists no > 0 such that
W (t, t)
for all t.

Solution 12.9 Consider a scalar state equation


.
x (t) = b (t)u (t)
y (t) = x (t)
where b (t) is a smooth bump function described as follows. It is a continuous, nonnegative function that is zero
for t [0, 1], and has unit area on [0, 1]. Then for any input signal the zero-state response satisfies
1

y (t) b ()u() d
0

for any t. Thus for any to and any t to ,


1

y (t)

u (t)
b() d . tsup
t
o

sup u (t)
t to

and the state equation is uniformly bounded-input, bounded-output stable with = 1. However if we consider a
bounded input that is continuous and satisfies
1, 0t 1
0, t 2



u (t) =

then limt u (t) = 0, but y (t) = 1 for t 1.


The result is true in the time-invariant case, however. Suppose

G (t) dt = <
0

and suppose u (t) is continuous, and u (t) 0 as t . Then u (t) is bounded, and we let = sup u (t). Now
t0

given > 0, pick T 1 > 0 such that

G (t) dt ___
2

T1

and pick T 2 > 0 such that

-50-

Linear System Theory, 2/E

Solutions Manual

u (t)

___
, t T2
2

Let T = 2 max [T 1 , T 2 ]. Then for t T,


t

y (t)

G (t)u () d
0
T

= G (t) d +
0

___
G (t) d
2 T

Changing the variables of integration gives


t

y (t)

G () d +

tT

___
2

tT

G () d

___
___
=
+
2
2

This shows that y (t) 0 as t .

Solution 12.11 The hypotheses imply that given > 0 there exist 1 , 2 > 0 such that if
xo < 1

u (t) < 2

, t to

where u (t) is n 1, then the solution of


.
x (t) = A (t) x (t) + u (t) , x (to ) = xo
satisfies
x (t) < ,

t to

In particular, with xo = 0, this shows that if u (t) < 2 for t to , then the corresponding zero-state solution of
the state equation
.
x (t) = A (t) x (t) + u (t)
y (t) = x (t)

(*)

satisfies y (t) < for t to . But this implies uniform bounded-input, bounded-output stability by Exercise
12.1. Thus there exists a finite constant such that the impulse response of (*), which is identical to the transition
matrix of A (t), satisfies
t

(t, ) d
to

for all t, to such that t to . Since A (t) is bounded, this gives uniform exponential stability of
.
x (t) = A (t) x (t)
by Theorem 6.8.

Solution 12.12 Suppose the impulse response is G (t), where G (t) = 0 for t < 0. For u (t) = e t , t 0,

-51-

Linear System Theory, 2/E

Solutions Manual

y (t)e t dt =
0

G (t)e d  e t dt =
0

G (t)e t dt




G (t)e d




e t dt

e d

where all integrals are well-defined because of the stability assumption, and , > 0. Changing the variable of
integration in the inner integral from t to = t gives

y (t)e

dt =

G ()e d

G(s) s =

e e d




e (+) d
0

1
_____
G()
+

Without the stability assumption we can say that U (s) = 1/(s+) for Re [s ] > , and the integral for G (s)
converges for Re [s ] > Re [p 1 ], . . . , Re [pn ], where p 1 , . . . , pn are the poles of G (s). Thus

G (s)
_____
= y (t)e st dt
Y (s) =
s+
0
is valid for Re [s ] > , Re [p 1 ], . . . , Re [pn ]. This implies that if
> , Re [p 1 ], . . . , Re [pn ]

then

G ( )
y (t)e t dt = _____
+
0

even though y (t) may be unbounded.


Given u (t), t 0, and xo , suppose x (t) is a solution of the given state equation. Then with
v (t) = y (t) = C x (t) we have
.
x (t) = A x (t) + Bu (t) , x (0) = xo
.
z (t) = AP z (t) + AB(CB)1 C x (t)

Solution 12.14

= AP z (t) + A (I P) x (t) , z (0) = xo


Thus
.
.
x (t) z (t) = AP [ x (t) z (t) ] + Bu (t) , x (0) z (0) = 0
and this gives
t

x (t) z (t) = e AP (t) Bu () d


0

Since PB = 0 and
e AP (t) =

n1

i (t) (AP)i

i =0

-52-

Linear System Theory, 2/E

Solutions Manual

we get
t

x (t) z (t) = 0 (t)Bu () d


0

Then
.
w (t) = (CB)1 CAP z (t) (CB)1 CAB(CB)1 C x (t) + (CB)1 C x (t)
= (CB)1 CAP z (t) (CB)1 CAB(CB)1 C x (t) + (CB)1 CA x (t) + (CB)1 CBu (t)
= (CB)1 CAP z (t) + (CB)1 CA [ B(CB)1 C + I ] x (t) + u (t)
= (CB)1 CAP[ x (t) z (t) ] + u (t)
t

= (CB)1 CAP 0 (t)Bu () d + u (t)


0

Again using PB = 0 gives


w (t) = u (t) , t 0
To address stability, since PB = 0 we see that P is not invertible. Thus AP is not invertible, which implies the
second state equation is never exponentially stable. The scalar case with A = 1, B = C = 1 is uniformly
bounded-input, bounded-output stable, but the resulting
.
z (t) = v (t)
.
w (t) = v (t) + v (t)
is not, as the bounded input v (t) = cos (e t ) shows.

-53-

CHAPTER 13

Solution 13.1 Suppose n = 2 and A has complex eigenvalues. Let


A=




a 11 a 12
a 21 a 22




b=




b1
b2





Then A has eigenvalues


2











22
)
4(a
a 11 +a 22

(a
11
+a
11 a 22 a 12 a 21 )
____________________________________
2

and since the eigenvalues are complex,


(a 11 +a 22 )2 4(a 11 a 22 a 12 a 21 ) = (a 11 a 22 )2 + 4a 12 a 21 < 0
Supposing that det [ b

(*)

Ab ] = 0, we will show that if b 0 we get a contradiction. For


0 = det [ b

Ab ] = a 21 b 21 a 12 b 22 (a 11 a 22 )b 1 b 2

implies
(a 11 a 22 )2 b 21 b 22 = (a 21 b 21 a 12 b 22 )2

(**)

If b 1 = 0, b 2 0, then (**) implies a 12 = 0, which contradicts (*). If b 1 0, b 2 = 0, then (**) implies a 21 = 0,


which contradicts (*). If b 1 0, and b 2 0, then multiplying (*) by b 21 b 22 and using (**) gives
(a 21 b 21 a 12 b 22 )2 + 4a 12 a 21 b 21 b 22 < 0
or,
(a 21 b 21 +a 12 b 22 )2 < 0
which is a contradiction. Thus det [ b Ab ] 0 for every b 0.
Conversely, suppose det [ b Ab ] 0 for every b 0. If A has real eigenvalues, let p be a left eigenvector
of A corresponding to , and take b 0 such that b T p = 0. (Note b and p are real.) Then
p TA = p T , p Tb = 0
which implies that the state equation is not controllable for this b, a contradiction. Therefore A cannot have real
eigenvalues, so it must have complex eigenvalues. (For the more challenging version of the problem, we can
show controllability for all nonzero b implies n = 2 by using a (real) P to transform A to real Jordan form. Then
for n > 2 pick a left eigenvector of P 1 AP and a real b 0 such that p T P 1 b = 0 to obtain a contradiction.)

-54-

Linear System Theory, 2/E

Solutions Manual

Solution 13.4 We need to show that


rank

B AB . . . A n1 B




=n ,

rank

if and only if the (n +p)-dimensional state equation


.
A 0
z (t) =
z (t) +
 C 0 

A B
= n +p
C D

B
u (t)
D

(+)

(++)

is controllable. First suppose (+) holds but (++) is not controllable. Then there exists a complex so such that
rank




Since rank [ so IA

so In A 0
C so Ip

B
 < n +p
D

(*)

B ] = n, this implies
rank

C so Ip D

<p

In turn, this implies so = 0, so that (*) becomes


rank

A 0 B 
< n +p
C 0 D 

and this contradicts the second rank condition in (+).


Conversely, supposing (++) is controllable, then
rank




...
...

AB A 2 B
CB CAB

B
D

= n +p




This implies
rank

B AB . . . A n1 B




=n

in other words, the first rank condition in (+) holds. Now suppose
A B
rank
< n +p
 C D 
Then
rank




so In A 0
C so Ip

B

D


< n +p

so = 0

that is,
so In +p

A 0
C 0

B
D

< n +p
so = 0
and this implies that (++) is not controllable. The contradiction shows that the second rank condition in (+) holds.
rank

Solution 13.5 Since J has a single eigenvalue , controllability is equivalent to the condition
rank IJ


B =n



From the form of the matrix IJ it is clear that a necessary and sufficient condition for controllability is that the
set of rows of B corresponding to zero rows of IJ must be a linearly independent set of 1 m vectors.
In the general Jordan form case, applying this condition for each eigenvalue i gives a necessary and
sufficient condition for controllability. (Note that independence of one set of such rows of B (corresponding to one
distinct eigenvalue) from another set of such rows of B (corresponding to another distinct eigenvalue) is not
required.)

-55-

Linear System Theory, 2/E

Solutions Manual

Solution 13.10 Since




P 1 B

(P 1 AP)P 1 B . . . (P 1 AP)n1 P 1 B




= P 1

B AB . . . A n1 B




and controllability indices are defined by a left-to-right linear independence search, it is clear that controllability
indices are unaffected by state variable changes.
For the second part, let rk be the number of linearly dependent columns in A k B that arise in the left-to-right
column search of [ B AB . . . A n1 B ]. Note r 0 = 0 since rank B = m. Then rk is the number of controllability
indices that have value k. This is because for each of the rk columns of the form A k Bi that are dependent, we
have i k, since for j > 0 the vector A k +j Bi also will be dependent on columns to its left. Thus for
k = 1, . . . , m, rk rk1 gives the number of controllability indices with value k. Writing



BG ABG . . . A k BG =


B AB . . . A k B





G
0
.
.
.
0

0 ... 0 
G ... 0 
. . . 
. . . 
. . .

0 ... G

and using the invertibility of G shows that the same sequence of rk s are generated by left-to-right column search
in [ BG ABG . . . A n1 BG ].

Solution 13.11 For the time-invariant case, if


p TA = p T , p TB = 0
implies p = 0, then
p T (A +BK) = p T , p T B = 0
obviously implies p = 0. Therefore controllability of the open-loop state equation implies controllability of the
closed-loop state equation.
In the time-varying case, suppose the open-loop state equation is controllable on [to , t f ]. Thus given
x (to ) = xo there exists an input signal ua (t) such that the corresponding solution xa (t) satisfies xa (t f ) = 0. Then the
closed-loop state equation
.
z (t) = [ A (t) + B (t)K (t) ] z (t) + B (t)v (t)
with initial state z (to ) = xo and input va (t) = ua (t) K (t) xa (t) has the solution z (t) = xa (t). Thus z (t f ) = 0. Since
this argument applies for any xo , the closed-loop state equation is controllable on [to , t f ].

Solution 13.12 By controllability, we can apply a variable change to controller form, with
A = Ao + Bo UP 1 = PAP 1 , B = Bo R = PB
Then we can choose K such that


A + B K =







0
1
0
0
.
.
.
.
.
.
0
0
p 0 p 1

Now we want to compute b such that

-56-

...
0
...
0
.
.
.
.
.
.
...
1
. . . p
n1









Linear System Theory, 2/E

Solutions Manual

0
0
.
.
.

0
1

B b =







Using to denote various unimportant entries, set




B b = Bo Rb = block diagonal

0
0
.
.
.

0
1




i 1

, i = 1, . . . , m




1
0
.
.
.
0
0

1
.
.
.
0
0

...
...
.
.
.
...
...



.
. b=
.


1








0
0
.
.
.

0
1

This gives a set of equations of the form


m

0 = b 1 + i bi
i =2
m

0 = b 2 + i bi
i =3

.
.
.
0 = bm1 + bm
1 = bm
Clearly there is a solution for the entries of b, regardless of the s. Now it is easy to conclude controllability of
the single-input state equation by calculation of the form of the controllability matrix. Then changing to the
original state variables gives the result since controllability is preserved. In the original variables, take K = K P
and b = b. For an example to show that b alone does not suffice, take Exercise 13.11 with all s zero.

Solution 13.14 Supposing the rank of the controllability matrix is q, Theorem 13.1 gives an invertible Pa such
that
P 1
a APa =




A 11 A 12
0 A 22




, P 1
a B =




B 1
0




, CPa =

C 1

C 2




where A 11 is q q and the state equation defined by C 1 , A 11 , B 1 is controllable. Now suppose


C 1
C 1 A 11
.
.
.




rank






n1

C 1 A 11





=l






Applying Theorem 13.12 there is an invertible Pb such that with


P=




Pb 0
0 Inq

we have

-57-





Linear System Theory, 2/E

Solutions Manual

(P 1
a APa )P





CPa P =

A 11 0
A 21 A 22
0 0

A 13
A 23
A 33

C 1 0 C 2

B 1
B 2
0





, P

(P 1
a B)








(*)







where A 11 is l l, and in fact A 33 = A 22 , C 2 = C 2 . It is easy to see that the state equation formed from
C 1 , A 11 , B 1 is both controllable and observable. Also an easy calculation using block triangular structure shows
that the impulse response of the state equation defined by (*) is
C 1 e

A 11 t

B 1

It remains only to show that l = s. Using the effect of variable changes on the controllability and observability
matrices and the special structure of (*) give




C
CA
.
.
.




CA

n1





B AB . . . A n1 B




C 1

C 1 A 11
.
.
.




n1
C 1 A 11







n1
B 1 A 11 B 1 . . . A 11 B 1




Thus


rank

C 1

C 1 A 11
.
.
.

n1




C 1 A 11

B 1 A 11 B 1 . . .

n1
A 11 B 1




=s

But



rank






C 1

C 1 A 11
.
.
.







l1

C 1 A 11

= rank

l1
B 1 A 11 B 1 . . . A 11 B 1




and so we must have l = s.

-58-




=l




CHAPTER 14

Solution 14.2 For any t f > 0,


tf

W = e At BB T e A t dt
T

is symmetric and positive definite by controllability, and


tf

AW + WA =
T

=e

_d_
dt
At f




e At BB T e A

BB T e

A T t f

T 
t


dt

+ BB T

Letting K = B T W 1 , we have
(A + BK)W + W (A + BK)T = ( e

At f

BB T e

A T t f

+ BB T )

(*)

Suppose is an eigenvalue of A +BK. Then is an eigenvalue of (A+BK)T , and we let p 0 be a corresponding


eigenvector. Then
(A + BK)T p = p
Also,

_
p H (A + BK) = p H

Pre- and post-multiplying (*) by p H and p, respectively, gives


2Re [] p H W p 0
which implies Re [] 0. Further, if Re [] = 0, then
p H (e

At f

BB T e

A T t f

+ BB T ) p = 0

Thus p H B = 0, and this gives


_
p H AB = p H (A + BK BK)B = p H (A + BK)B p H BKB = p H B = 0
Continuing this calculation for p H A 2 B, and so on, gives
p H B AB . . . A n1 B


which contradicts controllability of the given state equation.

-59-




=0

Linear System Theory, 2/E

Solutions Manual

Solution 14.5
(a) For any n 1 vector x,
x H (A + A T ) x = x H A x + x H A T x 2m x H x
If is an eigenvalue of A, and x is a unity-norm eigenvector corresponding to , then
_
A x = x , x HAT = x H
and we conclude

+ 2 m

Therefore any eigenvalue of A satisfies Re [] m , and this implies that for > m all eigenvalues of A + I
have positive real parts. Therefore all eigenvalues of (A T + I) = (A I)T have negative real parts.
(b) Using Theorem 7.11, with > m , the unique solution of
Q (A I)T + (A I)Q = BB T

(*)

is

Q = e (A + I)t BB T e (A

+ I)t

dt

Clearly Q is positive semidefinite. If x T Qx = 0, then


x T e (A + I)t B = 0 , t 0
and the usual sequential differentiation and evaluation at t = 0 gives a contradiction to controllability. Thus Q is
positive definite.
(c) Now consider the linear state equation
.
z (t) = ( A+ IBB T Q 1 )z (t)

(**)

Using (*) to write BB T Q 1 gives


.
T
z (t) = Q (A+ I ) Q 1 z (t)
But Q [ (A + I)T ]Q 1 has negative-real-part eigenvalues, which proves that (**) is exponentially stable.
(d) Invoking Lemma 14.6 gives that
.
z (t) = ( ABB T Q 1 )z (t)
is exponentially stable with rate > m .

Solution 14.6 Given a controllable linear state equation


.
x (t) = A x (t) + Bu (t)
by Exercise 13.12 we can choose an m n matrix K and an m 1 vector b such that
.
x (t) = (A+BK )x (t) + (Bb )u (t)
is a controllable single-input state equation. By a single-input controller form calculation, it is clear that we can
choose a 1 n gain k that yields a closed-loop state equation with any specified characteristic polynomial. That
is,

-60-

Linear System Theory, 2/E

Solutions Manual

A+BK+Bbk = A+B (K+bk)


has the specified characteristic polynomial. Thus for the original state equation, the feedback law
u (t) = (K+bk) x (t)
yields a closed-loop state equation with specified characteristic polynomial.

Solution 14.8

Without loss of generality we can assume the change of variables in Theorem 13.1 has been

performed so that
A=

A 11 A 12
0 A 22




, B=







B1
0




where A 11 is q q, and
rank IA 11


B1 = q



for all complex values of . Then the eigenvalues of A comprise the eigenvalues of A 11 and the eigenvalues of
A 22 . Also, for any complex ,
rank IA


B = rank



IA 11

A 12 B 1
IA 22 0

= q + rank IA 22




(+)

Now suppose rank [ IA B ] = n for all nonnegative-real-part eigenvalues of A. Then by (+) any such
eigenvalue must be an eigenvalue of A 11 , which implies that all eigenvalues of A 22 have negative real parts. But
we can compute an m q matrix K 1 such that A 11 + B 1 K 1 has negative-real-part-eigenvalues. So setting
K = [ K 1 0 ] we have that
A + BK =




A 11 +B 1 K 1 A 12
0
A 22





has negative-real-part eigenvalues.


On the other hand, if there exists a K = [K 1 K 2 ] such that
A + BK =




A 11 +B 1 K 1 A 12 +B 1 K 2
0
A 22





has negative-real-part eigenvalues, then A 22 has negative-real-part eigenvalues. Thus if Re [] 0, then (+) gives
rank IA


B = q+nq = n



Solution 14.9 For controllability assume A and B have been transformed to controller form by a state variable
change. By Exercise 13.10 this does not alter the controllability indices. Then it is easy to show that A+BLC and
B are in controller form with the same block sizes, regardless of L and C. Thus the controllability indices do not
change. Similar arguments apply in the case of observability.

Solution 14.10 For any L, using properties of the trace,

-61-

Linear System Theory, 2/E

Solutions Manual

tr [A+BLC ] = tr [A ] + tr [BLC ]
= tr [A ] + tr [CBL ]
= tr [A ]
>0
Thus at least one eigenvalue of A+BLC has positive real part, regardless of L.

Solution 14.12 Write the k th -row of G(s) in terms of the k th -row Ck of C as


Ck (sI A)1 B =

Ck A j Bs (j+1)

j =0

The k th -relative degree k is such that, since L Aj [Ck ](t)B (t) = Ck A j B,


2
Ck B = . . . = Ck A k B = 0

Ck A

k 1

B0

Thus in the k th -row of G(s), the minimum difference between the numerator and denominator polynomial degrees
among the entries Gk1 (s), . . . , Gkm (s) is k .

-62-

CHAPTER 15

Solution 15.2 The closed-loop state equation can be written as


.
x (t) = Ax(t) + BMz(t) + BNv(t)
= Ax(t) + BMz(t) + BNC [Lz(t)+x(t)]
.
z (t) = Fz(t) + GC [Lz(t)+x(t)]
Making the variable change w (t) = x (t)+Lz (t) gives the description
.
w (t) = Ax(t) + BMz(t) + BNCw(t) + LFz (t) + LGCw (t)
= Ax(t) + [BM+LF ]z(t) + [BN+LG ]Cw (t)
= [AHC ]w (t)
.
z (t) = Fz(t) + GCw(t)
Thus the closed-loop state equation in matrix form is
.

AHC 0 
w (t)


 = 
.
GC F 
z (t) 






w (t) 

z (t) 

and the result is clear.

Solution 15.4 Following the hint, write for any


+

B ()2

d =

(, )(, )B ()B T ()T (, )T (, ) d

(, )2 (, )B ()B T ()T (, ) d

Since A (t) is bounded, by Exercise 6.6 there is a positive constant such that
And since
+

(, )B ()B T ()T (, ) d 1 I

Exercise 1.21 gives, for any ,

-63-

(, )2

2 for [, +].

Linear System Theory, 2/E

Solutions Manual

B ()2

d 2

(, )B ()B T ()T (, ) d

2 n 1 = 1
Now for any , and t [+k , +(k +1)], k = 0, 1, . . . ,
+(k +1)

B ()2

B ()2

k +(j +1)

+j
j =0

B ()2

(k +1) 1 [1 + (t)/ ] 1
This bound is independent of k, so letting 2 = 1 / we have
t

B ()2 d 1 + 2 (t)

for all t, with t . (Of course this provides a simplification of the hypotheses of Theorem 15.5 for the
bounded-A (t) case.)

Solution 15.6 Write the given state equation in the partitioned form



.
za (t) 
 =
.
zb (t) 




A 11 A 12
A 21 A 22

y (t) = Ip











za (t) 
 +
zb (t)





B1
B2




u(t)

za (t) 

zb (t)


and the reduced-dimension observer and feedback in the form


.
zc (t) = [ A 22 H A 12 ] zc (t) + [ B 2 HB 1 ]u(t) + [ A 21 + (A 22 H A 12 )H H A 11 ] za (t)
zb (t) = zc (t) + Hza (t)
u(t) = K 1 za (t) + K 2 zb (t) + Nr(t)
It is predictable that writing the overall closed-loop state equation in terms of the variables za (t), zb (t), and
eb (t) = zb (t)zb (t) is revealing. This gives
.

za (t) 
A 11 +B 1 K 1 A 12 +B 1 K 2 B 1 K 2  za (t) 
B1N 

.

 



 z (t)  =
A 21 +B 2 K 1 A 22 +B 2 K 2 B 2 K 2
zb (t) + B 2 N r(t)
b

 



.

0
0
A 22 HA 12   eb (t) 
0 
eb (t) 




y (t) = Ip


0 0








za (t) 

zb (t)

eb (t) 

Thus we see that the eigenvalues of the closed-loop state equation are provided by the n eigenvalues of A +BK
and the (np) eigenvalues of A 22 H A 12 . Furthermore, the block triangular structure gives the closed-loop
transfer function as

-64-

Linear System Theory, 2/E

Solutions Manual

0 (sIABK )1 BN R(s)

Y(s) = Ip




which is the same as if a static state feedback gain K is used.

Solution 15.9 Similar in style to Solution 14.8.


Solution 15.10 Since
u = Hz + Jv = Hz + JC 2 x + JD 21 r + JD 22 u

we assume that IJD 22 is invertible, and let L = (IJD 22 )1 to write


u = LHz + LJC 2 x + LJD 21 r
Then, substituting for u,
.
x = (A+BLJC 2 )x + BLHz + BLJD 21 r
.
z = (GC 2 +GD 22 LJC 2 )x + (F+GD 22 LH)z + (GD 22 +GD 22 LJD 21 )r
y = (C 1 +D 1 LJC 2 )x + D 1 LHz + D 1 LJD 21 r
This gives the closed-loop coefficients
A =





A+BLJC 2
BLH

,
GC 2 +GD 22 LJC 2 F+GD 22 LH 

C = C 1 +D 1 LJC 2


D 1 LH




B =




BLJD 21
GD 22 +GD 22 LJD 21

, D = D 1 LJD 21

These expressions can be rewritten using


L = (I JD 22 )1 = I + J (ID 22 J)1 D 22
which follows from Exercise 28.2 or is easily verified using the identity in Exercise 28.1.

-65-





CHAPTER 16

Solution 16.4 By Theorem 16.16 there exist polynomial matrices X (s), Y (s), A (s), and B (s) such that
N(s) X(s) + D(s)Y(s) = Ip

(*)

Na (s) A(s) + Da (s)B(s) = Ip

(**)

1
Since D 1 (s)N(s) = D 1
a (s)Na (s), Na (s) = Da (s)D (s)N(s). Substituting this into (**) gives

Da (s)D 1 (s)N(s) A(s) + Da (s)B(s) = Ip


that is,
N(s) A(s) + D(s)B(s) = D(s)D 1
a (s)
Similarly, N(s) = D(s)D 1
a (s)Na (s), and substituting into (*) gives
Na (s) X(s) + Da (s)Y(s) = Da (s)D 1 (s)
1
1
Therefore D(s)D 1
both are polynomial matrices, and thus both are unimodular.
a (s) and [ D(s)D a (s) ]

Solution 16.5 From the given equality,


NL (s)D(s) DL (s)N(s) = 0
and since N (s) and D (s) are right coprime there exist polynomial matrices X(s) and Y(s) such that
X(s)D(s) Y(s)N(s) = I
Putting these two equations together gives





X(s) Y(s)

NL (s) DL (s)





D(s)
N(s)








I
0





It remains only to prove unimodularity. Since NL (s) and DL (s) are left coprime, there exist polynomial matrices
A(s) and B(s) such that
DL (s) A(s) + NL (s)B(s) = I
That is,



X(s) Y(s) 

NL (s) DL (s) 




D(s) B(s) 
 =
N(s) A(s) 

Multiplying on the right by

-66-




I X(s)B(s)+Y(s)A(s) 

0
I


Linear System Theory, 2/E

Solutions Manual




I [X(s)B(s)+Y(s)A(s)] 

0
I


gives


X(s) Y(s)

NL (s) DL (s)







D(s) D(s)[X(s)B(s)+Y(s)A(s)]+B(s) 
 = I
N(s) N(s)[X(s)B(s)+Y(s)A(s)]+A(s) 

That is





X(s) Y(s)

NL (s) DL (s)

D(s) D(s)(X(s)B(s)+Y(s)A(s))+B(s) 

N(s) N(s)(X(s)B(s)+Y(s)A(s))+A(s) 




which is another polynomial matrix. Thus





X(s) Y(s) 

NL (s) DL (s) 

is unimodular.

Solution 16.7 The relationship


(P s+P 1 )1 = R 1 s+R 0
holds if R 1 and R 0 are such that
I = (P s+P 1 ) (R 1 s+R 0 ) = P R 1 s 2 + (P R 0 +P 1 R 1 )s + P 1 R 0
1
1
Taking R 0 = P 1
1 and R 1 = P 1 P P 1 , it remains to verify that P R 1 = 0. We have

I = (P s + . . . + P 0 ) (Q s + . . . + Q 0 )
= P Q s + + (P Q 1 +P 1 Q )s +1 + . . .
with P 1 and Q 1 invertible. Therefore
PQ = 0 ,

P Q 1 +P 1 Q = 0

(+)

The second equation gives


P = P 1 Q Q 1
1
Then we can write
1
R 1 = Q Q 1
1 P 1

and the first equation in (+) gives P R 1 = 0. In summary,


1
1
(P s+P 1 )1 = Q Q 1
1 P 1 s + P 1

and thus P s+P 1 is unimodular.


1

Since N(s)D 1 (s) = N (s)D (s) both are coprime right polynomial fraction descriptions, there
exists a unimodular U(s) such that D(s) = D (s)U(s). Suppose for some integer 1 J m we have

Solution 16.10

ck [D ] = ck [D ] , k = 1, . . . , J1 ; cJ [D ] < cJ [D ]
Writing D(s) and D (s) in terms of columns Dk (s) and D k (s) and writing the (i, j)-entry of U(s) as uij (s) give

-67-

Linear System Theory, 2/E

Solutions Manual

Dk (s) = D 1 (s)u 1,k (s) + . . . + D J (s)uJ,k (s) + . . . + D m (s)um,k (s) , k = 1, . . . , m


Using a similar column notation for D hc and D l (s) gives
D hc
k s

ck [D ]

hc c 1 [D ]

+ D lk (s) = [D 1 s

l
hc c [D ]
l
+D 1 (s)] u 1,k (s) + . . . + [D J s J +D J (s)] uJ,k (s)

hc c [D ]
l
+ . . . + [D m s m +D m (s)] um,k (s) , k = 1, . . . , m

We claim that
ck [D ] =

max
j = 1, . . . , m

{ c j [D ]+degree u j,k (s) }

hc
hc
This is shown by a an argument using linear independence of D 1 , . . . , D m as follows. Let

c =

max
j = 1, . . . , m

and let j,k be the coefficient of s


term on the right side is

c c j [D ]

{ c j [D ]+degree u j,k (s) }

in u j,k (s). Then not all the j,k are zero, and the vector coefficient of the s c
m

j,k D j
j =1

hc

By linear independence this sum is nonzero, which implies ck [D ] = c.


Now, using the definition of J,
ck [D ] < cJ [D ] . . . cm [D ] , k = 1, . . . , J1
and this implies uJ,k (s) = . . . = um,k (s) = 0. Thus U (s) has the form
U (s) =




Ua (s)
0(mJ+1) J

Ub (s) 

Uc (s)


where Ua (s) is (J1) J, from which rank U (s) m1 for all values of s. This contradicts unimodularity, Thus
cJ [D ] = cJ [D ]. The proof is complete since the roles of D (s) and D (s) can be reversed.

-68-

CHAPTER 17

Solution 17.1 If
.
x (t) = A x (t) + Bu (t)
y (t) = C x (t)
T

is a realization of G (s), then


.
z (t) = A T x (t) + C T v (t)
w (t) = B T z (t)
is a realization for G (s) since
T

G (s) = [ G T (s) ] = [ C (sI A)1 B ] = B T (sI A T )1 C T


Furthermore, easy calculation of the controllability and observability matrices of the two realizations shows that
one is minimal if and only if the other is. Now, if N (s) and D (s) give a coprime left polynomial fraction
description for G(s), then there exist polynomial matrices X (s) and Y (s) such that
N (s) X (s) + D(s)Y(s) = I
Therefore
X T (s)N T (s) + Y T (s)D T (s) = I
which implies that N T (s) and D T (s) are right coprime. Also, since D(s) is row reduced, D T (s) is column reduced.
Thus we can write down a controller-form minimal realization for G T (s) = N T (s)[ D T (s) ]1 as per Theorem 17.4,
and this provides a minimal realization for G (s) by the correspondence above.

Solution 17.3 Proof of Theorem 17.7: From Theorem 13.17 we have


Q 1 AQ = A To + Q 1 VBTo , CQ = SB To
Transposing (6) gives
(s)B To = sT (s) T (s)A To
and (13) implies
(s) = D (s)S + T (s)Q 1 V
Substituting into (+) gives
D (s)SB To = T (s) [ sI A To Q 1 VBTo ]

-69-

(+)

Linear System Theory, 2/E

Solutions Manual

Therefore
SB To [ sI A To Q 1 VBTo ]

= D 1 (s)T (s)

Using the definition of N (s),


1

D 1 (s)N (s) = SB To [ sI (A To + Q 1 VBTo ) ] Q 1 B


1

= CQ [ sI Q 1 AQ ] Q 1 B
= C (sI A)1 B
Note that D (s) is row reduced since Dlr = S 1 , which is invertible. Finally, if the state equation is controllable as
well as observable, hence minimal, then it is clear from the definition of D (s) that the degree of the polynomial
fraction description equals the dimension of the minimal realization. Therefore D 1 (s)N (s) is a coprime left
polynomial fraction description.

Solution 17.5 Suppose there is a nonzero h with the property that for each uo there is an xo such that
t

hCe At xo + hCe A (t) Buo e

so

d = 0 , t 0

Suppose G (s) = N (s)D 1 (s) is a coprime right polynomial fraction description. Then taking Laplace transforms
gives
hC (sI A)1 xo + hN (s)D 1 (s)uo (sso )1 = 0
that is,
(sso )hC (sI A)1 xo + hN (s)D 1 (s)uo = 0
If so is not a pole of G (s), then D (so ) is invertible. Thus evaluating at s = so gives
hN (so )D 1 (so )uo = 0
and we have that if so is not a pole of G (s), then for every u o
hN (so )u o = 0
Thus hN (so ) = 0, that is rank N (so ) < p < m, which implies that so is a transmission zero.
Conversely, suppose so is a transmission zero that is not a pole of G (s). Then for a right-coprime
polynomial fraction description G (s) = N (s)D 1 (s) we have that D (so ) is invertible, and rank N (so ) < p < m.
Thus there exists a nonzero 1 p vector h such that hN (so ) = 0. Using the identity (just as in the proof of
Theorem 17.13)
(so I A)1 (s so )1 = (sI A)1 (so I A)1 + (sI A)1 (sso )1
we can write for any uo and the choice xo = (so I A)1 Buo ,





hCe At xo + hCe A (t) Buo e


0

so

d  = hN (so )D 1 (so )uo (sso )1 = 0




That is, h has the property that for any uo there is an xo such that
t

hCe xo + hCe A (t) Buo e


At

-70-

so

d = 0 , t 0

Linear System Theory, 2/E

Solutions Manual

Solution 17.9 Using a coprime right polynomial fraction description


G (s) = N (s)D 1 (s) =

N (s) adj D (s)


____________
det D (s)

suppose for some i, j and complex so we have


= Gij (so ) =

[ N (so ) adj D (so ) ]ij


___________________
det D (so )

Since the numerator is the magnitude of a polynomial, it is finite for every so , and this implies det D (so ) = 0, that
is, so is a pole of G (s).
Now suppose so is such that det D (so ) = 0. By coprimeness of the right polynomial fraction description
N (s)D 1 (s), there exist polynomial matrices X (s) and Y (s) such that
X(s)N(s) + Y(s)D(s) = Im
for all s. Therefore

[ X(s)G(s) + Y(s) ] D(s) = Im


for all s, and thus
det [ X(s)G(s) + Y(s) ] det D(s) = 1
for all s. This implies that at s = so we must have
det

[ X(so )G (so ) + Y (so ) ] =

Since the entries of the polynomial matrices X (so ) and Y (so ) are finite, some entry of G (so ) must have infinite
magnitude.

-71-

CHAPTER 18

Solution 18.2

(a) If x A (A 1 V ), then clearly x Im [A ], and there exists y A 1 V such that x = Ay, which implies x V.
Therefore A (A 1 V ) V Im [A ]. Conversely, suppose x V Im [A ]. Then x Im [A ] implies there exists y
such that x = Ay, and x V implies y A 1 V. Thus x A (A 1 V ), that is, V Im [A ] A (A 1 V ).
(b) If x V + Ker [A ], then we can write
x = xa + xb , xa V , xb Ker [A ]
1

and Ax = Axa AV. Thus x A (AV ), which gives V + Ker [A ] A 1 (AV ). Conversely, if x A 1 (AV ), then
there exists y V such that Ax = Ay, that is, A (xy) = 0. Thus writing
x = y + (xy) V + Ker [A ]
1

gives A (AV ) V + Ker [A ].


(c) If AV W, then using (b) gives A 1 (AV ) = V + Ker [A ] A 1 W. Thus V A 1 W.
Conversely, V A 1 W implies, using (a),
AV A (A 1 W ) = W Im [A ]
Therefore AV W.

Solution 18.4 For x Wa V + Wb V, write


x = x a + x b , x a Wa V , x b Wb V
Then xa , xb V, and xa Wa , xb Wb , which imply xa + xb V and xa + xb Wa + Wb , that is,
x = xa + xb (Wa + Wb ) V
and we have shown that
Wa V + Wb V (Wa +Wb ) V
For the second part, if Wa V, then x (Wa + Wb ) V implies x V and x Wa + Wb . We can write
x = x a + x b , x a Wa V , x b Wb
But x xa = xb V, so we have x Wa + Wb V. This gives
(Wa + Wb ) V Wa + Wb V
The reverse containment follows from the first part since Wa V implies Wa = Wa V.

-72-

Linear System Theory, 2/E

Solutions Manual

Solution 18.9 Clearly C <A | B> = Y if and only if




rank C B AB . . . A n1 B





=p

and thus the proof involves showing that the rank condition is equivalent to positive definiteness of
tf

Ce A (t t) BB T e A (t t) C T dt
T

This is carried out in Solution 9.11.

Solution 18.10 We show equivalence of the negations. First suppose 0 V Ker [C ] is a controlled invariant
subspace. Then picking a friend F of V we have
(A + BF)V V Ker [C ]
Selecting 0 xo V, this gives
e (A + BF)t xo V , t 0
and thus
Ce (A + BF)t xo = 0 , t 0
Thus the closed-loop state equation is not observable, since the zero-input response to xo 0 is identical to the
zero-input response to the zero initial state.
Conversely, suppose the closed-loop state equation is not observable for some F. Then
n1

N = Ker [C (A + BF)k ] 0
k =0

Thus 0xo N implies, using the Cayley-Hamilton theorem,


0 = Cxo = C (A + BF) xo = C (A + BF)2 xo = . . .
That is, (A + BF) xo N, which gives (A + BF)N N. Clearly N Ker [C ], so N is a nonzero controlled invariant
subspace contained in Ker [C ].
Let P 1 , . . . , Pq be a basis for B R = Im [B 1 ] + . . . + Im [Bq ] , P 1 , . . . , Pr be a basis for R ,
P 1 , . . . , Pc be a basis for <A | B> , P 1 , . . . , Pn be a basis for X . Then for i = 1, . . . , q, Bi B R , and for
i = q +1, . . . , m, Bi R , Bi <A | B>. Thus P 1 B = B has the form

Solution 18.11

B =






B 11

B 12
B 22
B 32

0(rq) q
0(cr) q
0(nc) q 0(nc) (mq)








If B 1 , . . . , Bq are linearly independent and we choose P j = B j , j = 1, . . . , q, then B 11 = Iq . Finally, since


<A | B> is invariant for A,
A 11 A 12 
P 1 AP = 

0c (nc) A 22 


-73-

CHAPTER 19

Solution 19.1 First we show


( W + S ) = W S
An n 1 vector x satisfies x ( W + S ) if and only if x T (w + s) = 0 for all w W and s S . This is equivalent
to x T w + x T s = 0 for all w W and s S , and by taking first s = 0 and then w = 0 this is equivalent to x T w = 0
for all w W and x T s = 0 for all s S . These conditions hold if and only if x W and x S , that is,
x W S .
Next we show
( A T S ) = A 1 S
An n 1 vector x satisfies x ( A T S ) if and only if x T y = 0 for all y A T S , which holds if and only if
x T A T z = 0 for all z S, which is the same as (Ax)T z = 0 for all z S , which is equivalent to Ax S , which is
equivalent to x A 1 S .
Finally we prove that ( S ) = S . It is easy to show that S ( S ) since x S implies y T x = 0 for all

y S , that is, x T y = 0 for all y S , which implies x ( S ) .


To show ( S ) S , suppose 0 x ( S ) . Then for all y S we have x T y = 0. That is, if y T z = 0 for
all z S , then x T y = 0. Equivalently, if z T y = 0 for all z S , then x T y = 0. Thus
Ker

z T = Ker





zT 
 ,
xT 

for all z S

(*)

This implies x S , for if not, then for any z S ,


rank

z T < rank





zT 

xT 

By the matrix fact in the Hint, this implies


dim Ker




zT 
 < dim Ker
xT 

zT




which contradicts (*).

Solution 19.2 By induction we will show that (W k ) = V k , where V k is generated by the algorithm for V * in
Theorem 19.3:

-74-

Linear System Theory, 2/E

Solutions Manual

V0 = K
V k +1 = K A 1 (V k + B )
= V k A 1 (V k + B )
For k = 0 the claim becomes ( K ) = K , which is established in Exercise 19.1. So suppose for some nonegative
integer K we have (W K ) = V K . Then, using Exercise 19.1,
(W K +1 ) =




W K + AT[ W K B ]

= (W K )
= VK










A T (W K B )




A T [ (V K ) B ]

But further use of Exercise 19.1 gives




(V K ) B

AT

= A 1 (V K ) B


= A 1 (V K + B)

Thus
(W K +1 ) = V K A 1 (V K + B) = V K +1
This completes the induction proof, and gives V * = V n = (W n ) .

Solution 19.4 We establish the Hint by induction, for F a friend of V *. For k = 1,


k

(A + BF) j1 (B V *) = B V * = V * (A .0 + B )
j =1
= R1
Assume now that for some positive integer K we have
K

(A + BF) j1 (B V *) = R K = V * (AR K1 B )

j =1
Then
K +1

(A + BF) j1 (B V *) = B V * + (A + BF) (A + BF) j1 (B V *)

j =1
j =1
= B V * + (A + BF)R K

From the algorithm, R R V *, thus


K

(A + BF)R K (A + BF)V * V *
Using the second part of Exercise 18.4 gives

B V * + (A + BF)R K = [ B + (A + BF)R K ] V *
Since (A + BF)R K + B = AR K + B, the right side of (+) can be rewritten as

B V * + (A + BF)R K = V * [ AR K + B ]
= R K +1
This completes the induction proof of the Hint, and Theorem 19.6 gives R * = R n .

-75-

(+)

Linear System Theory, 2/E

Solutions Manual

Solution 19.7 The closed-loop state equation


.
x (t) = (A + BF)x (t) + (E + BK)w (t) + BGv (t)
y (t) = Cx (t)
is disturbance decoupled if and only if
C (sI A BF)1 (E + BK) = 0
That is, if and only if

<A +BF Im [E +BK ]> Ker [C ]

(*)

Thus we want to show that there exist F and K such that (*) holds if and only if Im [E ] V * + B, where V * is the
maximal controlled invariant subspace contained in Ker [C ] for the plant.
First suppose F and K are such that (*) holds. Since <A +BF Im [E +BK ]> is invariant under (A + BF), it
is a controlled invariant subspace contained in Ker [C ] for the plant. Then
Im [E +BK ] <A +BF Im [E +BK ]> V *
That is, for any x X there is a v V * such that (E + BK)x = v. Therefore
Ex = v + B (K x)
which implies Im [E ] V * + B.
Conversely, suppose Im [E ] V * + B, where V * is the maximal controlled invariant subspace contained
in Ker [C ] for the plant. We first show how to compute K such that Im [E +BK ] V *. Then we can pick any
friend F of V * and the proof will be finished since we will have

<A +BF Im [E +BK ]> V * Ker [C ]


If w 1 , . . . , wq is a basis for W, then there exist v 1 , . . . , vq V * and u 1 , . . . , uq U such that
Ew j = v j + Bu j , j = 1, . . . , q
Let
K = u 1 . . . uq





w 1 . . . wq

Then
(E + BK)w j = Ew j + BKw j
= v j + Bu j + B u 1 . . . uq e j


= v j , j = 1, . . . , q
That is, K is such that
Im [E + BK ] V *

Solution 19.11 Note first that


span { pr +1 , . . . , pn } = R 2 *
Since R 1 * K 1 = Ker [C 2 ] and R 2 * K 2 = Ker [C 1 ], we have that in the new coordinates,

-76-

Linear System Theory, 2/E

Solutions Manual

C 1 = C 1 P = C 11 0 0

C 2 = C 2 P = 0 C 11 0

Since Im [BG 1 ] B R 1 * R 1 * and BG 1 = PB 1 we have


B 1 =





B 11
0
B 13






Similarly, Im [BG 2 ] B R 2 * R 2 * gives


B 2 =





B 22
B 23






Finally, (A + BF)R i * R i *, i = 1, 2, and (A + BF)P = PA give


A =






A 11
0
A 31

0 0
A 22 0
A 32 A 33







That is, with z (t) = P 1 x (t), the closed-loop state equation takes the partitioned form
.
za (t) = A 11 za (t) + B 11 r 1 (t)
.
zb (t) = A 22 zb (t) + B 22 r 2 (t)
.
zc (t) = A 31 za (t) + A 32 zb (t) + A 33 zc (t) + B 13 r 1 (t) + B 23 r 2 (t)
y 1 (t) = C 11 za (t)
y 2 (t) = C 12 zb (t)

-77-

CHAPTER 20

Solution 20.1

A sketch shows that v (t) is a sequence of unit-height rectangular pulses, occurring every T
seconds, with the width of the k th pulse given by k/5, k = 0, . . . , 5. This is a piecewise-continuous (actually,
piecewise-constant) input, and the continuous-time solution formula gives
t

z (t) = e

F (tto )

z (to ) + e F (t) Gv () d
to

Evaluate this at t = (k +1)T and to = kT to get


(k +1)T

z [(k +1)T ] = e FT z (kT) +

e F (kT +T) Gv () d

kT

Let = kT+T in the integral, to obtain


T

z [(k +1)T ] = e FT z (kT) + e F Gv (kT +T) d


0

Then the special form of v (t) gives


T

z [(k +1)T ] = e FT z (kT) +

e F G d sgn [u (k)]

Tu (k)T

The integral term is not linear in the input sequence u (k), so we approximate the integral when u (k) is small.
Changing integration variable to = T, another way to write the integral term is
u (k)T

e FT

e F G d sgn [u (k)]

For u (k) small,


u (k)T

u (k)T

e F d =

( IF + . . . ) d u (k)T I
0

Then since u (k) sgn [u (k)] = u (k), this gives the approximate, linear, discrete-time state equation.
z [(k +1)T ] = e FT z (kT) + e FT T u (k)

Solution 20.4 For a constant nominal input u (k) = u , constant nominal solutions are given by

-78-

Linear System Theory, 2/E

Solutions Manual

x =




u
2
u

y = u

Easy calculation gives the linearized state equation


x (k +1) =

1 0 
 x (k) +
0 1 







2
 u (k)
4u 

1 x (k) + 2u u (k)

y (k) = 2u




Since A k = (1)k I and CB = 0, the zero-state solution formula easily gives


y (k) = 2u u (k)
Thus the zero-state behavior of the linearized state equation is that of a pure gain.

Solution 20.10

(k, j):




Computing ( j +q, j) for the first few values of q 0 easily leads to the general formula for

0
a 1 (k1)a 2 (k2)a 1 (k3)a 2 (k4) . . . a 1 ( j) 
 ,
.
.
.
a 2 (k1)a 1 (k2)a 2 (k3)a 1 (k4)
a 2 ( j)
0


a 1 (k1)a 2 (k2)a 1 (k3)a 2 (k4) . . . a 2 ( j)


0
 ,
a 2 (k1)a 1 (k2)a 2 (k3)a 1 (k4) . . . a 1 ( j) 

Solution 20.11 By definition, for k j +1,


F (k, j) = F (k1)F (k2) . . . F ( j +1)F ( j)
= A T (1k)A T (2k) . . . A T (1j)A T (j) , k j +1
Therefore, for k j +1,
TF (k, j) = A (j)A (j1) . . . A (k+2)A (k+1)
However, for j+1 k+2, that is, k j +1,
A (j+1, k+1) = A (j)A (j1) . . . A (k+2)A (k+1)
and a comparison gives
A T (k) (k, j) = AT (k) (j+1, k+1) , k j +1

Solution 20.14 For k k 1 +1 ko +1 we can write, somewhat cleverly,


(k,

k1

ko ) (kk 1 ) =

(k,

ko )

(k,

j) ( j, k)

j =k 1

k1

j =k 1

Clearly this gives

-79-

kj odd, 1

kj even, 1

Linear System Theory, 2/E

(k,

Solutions Manual

ko )

1
_____
kk 1

k1

j) ( j, k) , k k 1 +1 ko +1

(k,

j =k 1

Solution 20.16 Given A (k) and F we want P (k) to satisfy


F = P 1 (k +1)A (k)P (k)
for all k. Assuming F is invertible and A (k) is invertible for every k, it is easy to verify that
P (k) = A (k, 0)F k
is the correct choice. Obviously if F = I, then the variable change is P (k) = A (k, 0). Using this in Example
20.19, where
A (k) =

1 a (k) 

0 1 




gives
k1

P (k) = A (k, 0) =





a (i)
i =0





, k 1

and
k1

P 1 (k +1) = A (0, k +1) =





Then an easy multiplication verifies the property.

-80-

a (i) 
i =0




, k 0

CHAPTER 21

Solution 21.3 Using z-transforms,


(zI A)




z 1 

12 z+7 

1
_________
2
z +7z+12




z +7 1 

12 z 

and
Y (z) = zc(zIA)1 xo + c(zIA)1 b U (z)
=

z
_________
z19 z1
z 2 +7z+12


z
z1
____
1/ 20  _________
+ 2
1/ 20 
z +7z+12 z1




=0
Therefore the complete solution is y (k) = 0, k 0.

Solution 21.4 First compute the corresponding discrete-time state equation


x ([(k +1)T ] = Fx (kT) + gu (kT)
y (kT) = hx (kT)
2

Using A = 0, it is easy to compute


F=e

AT

g = e A b d =

1 T
,
0 1




T2/2

T


and h = c. The transfer functions are


Y (s)
_____
= c (sIA)1 b = 0 1
U (s)





s 1  1 0 
= 1/ s
0 s   1

and
Z [y (kT)]
_________
= h (zIF)1 g = 0 1
Z [u (kT)]


z1 T  1 T 2 / 2

 = T / (z1)
T
0 z1 





Solution 21.7
(a) The solution formula gives, using a standard formula for a finite geometric sum,

-81-

Linear System Theory, 2/E

Solutions Manual

k1

x (k) = (1+r/ l)k xo + (1+r / l)kj1 b


j =0

= (1+r/ l)k xo + b (1+r / l)k1




11/(1+r / l)k 
____________

11/(1+r / l) 

= (1+r/ l)k (xo +bl / r) bl / r


(b) In one year a deposit xo yields
x (l) = (1+r / l)l xo
so
effective interest rate =

(1+r / l)l xo xo
_____________
100% = [(1+r / l)l 1] 100%
xo

For r = 0.05, l = 2, the effective interest rate is 5.06%. For r = 0.05, l = 12, the effective interest rate is 5.12%.
(c) Set


0 = x (19) = (1.05)19  xo +


50,000
(50,000) 
_______
_________
 +
0.05
0.05


and solve to obtain xo = $604,266. Of course this means you have actually won only $654,266, but
congratulations remain appropriate.

Solution 21.9 With T = Td / l and v (t) = v (kT), kT t (k +1)T, evaluate the solution formula
t

z (t) = e F (t) z () + e F (t) Gv (Td ) d , t T

at t = (k +1)T, = kT to obtain
T

z [(k +1)T ] =e FT z (kT) + e F d G v [(kl)T ]


0

= Az (kT) + Bv [(kl)T ]
Defining


x (k) =






z (kT) 
v [(kl)T ] 

.
,
.

.

v [(k1)T ] 

u (k) = v (kT) ,

we get

-82-

y (k) = y (kT)

Linear System Theory, 2/E

Solutions Manual




x (k +1) =






A
0
.
.
.
0
0

B
0
.
.
.
0
0

0
1
.
.
.
0
0

...
...
.
.
.
...
...

0
0
.
.  x (k) +
.

1
0








0
0
.
.  u (k) ,
.

0
1




x (0) =






z (0) 
v (lT) 

.
.

.

v (2T) 
v (T) 

y (k) = C 0 . . . 0 x (k)



The dimension of the initial state is n+l. The transfer function of this state equation is the same as the transfer
function of
z (k +1) = Az (k) + Bu (kl)
y (k) = Cz (k)
Taking the z-transform, using the right shift property, gives
Y (z) = C (zIA)1 Bz l U (z)

Solution 21.12 Easy calculation shows that for


Ma =




1 0
 ,
Mb =
0 0




0 1

0 0

a = Ma , but Mb does not.


Ma has a square root, with M

Solution 21.13

By Lemma 21.6, given any ko there is a K-periodic solution of the forced state equation if and
only if there is an xo satisfying
[I (ko +K, ko )]xo =

ko +K1

(ko +K, j +1)f ( j)

(*)

j =ko

Similarly there is a K-periodic solution of the unforced state equation if and only if there is a zo satisfying
[I (ko +K, ko )]zo = 0

(**)

Since there is no zo 0 satisfying (**), it follows that [I(ko +K, ko )] is invertible. This implies that for each ko
there exists a unique xo satisfying (*). For this xo the forced state equation has a K-periodic solution.
However, if there is a zo 0 satisfying (**), (*) might still have a solution if the right side is in the range of
[I(ko +K, ko )].

Solution 21.14

Since the forced state equation has no K-periodic solutions, for any ko there is by Exercise
21.13 a zo 0 such that the solution of
z (k +1) = A (k)z (k) ,

z (ko ) = zo

is K-periodic. Thus by Lemma 21.6,


[I (ko +K, ko )]zo = 0
and therefore [I (ko +K, ko )] is not invertible. Since there are no solutions to
[I (ko +K, ko )]xo =

ko +K1

j =ko

-83-

(ko +K, j +1)f ( j)

Linear System Theory, 2/E

Solutions Manual

we have by linear algebra that there exits a nonzero, n 1 vector p such that
[I (ko +K, ko )]T p = 0
and
ko +K1

pT

j =k

(ko +K, j +1)f ( j) = q 0

Now pick any xo . Then it is easy to show that the corresponding solution satisfies p T x(ko +jK) = p T xo +jq,
j = 1, 2, . . . . This shows that the solution is unbounded.

-84-

CHAPTER 22

Solution 22.1 Similar to Solution 6.1.


Solution 22.4

If the state equation is uniformly exponentially stable, then there exist 1 and 0 < 1 such

that
(k,

j) kj , k j

Equivalently, for every k,


(k +j,

k) j , j 0

which implies
j = sup (k +j, k) j
k

Then
1/j
1/j
lim 1/j
j = lim ( ) = lim

<1
Now suppose
lim ( j )1/j < 1

Picking 0 < < 1 there exists a positive integer J such that


1/j
j < 1 , j J

Let = 1 and
=

1
___
max [ max j , 1 ]
J

1 j J

Then for j J,
(k +j,

k) sup (k +j, j) = j
k

max j J
1 j J

-85-

Linear System Theory, 2/E

Solutions Manual

Similarly, for j > J,


(k +j,

k) sup (k +j, j)
k

= j
< (1) j = j
j
This implies uniform exponential stability.

Solution 22.6 For = 0 the problem is trivial, so suppose 0 and write


k

k k = k k = k ( e ln ) , k 0
Let = ln, so that > 0 since < 1. Then
max k k max t e t
k0

t0

and a simple maximization argument (as in Exercise 6.10) gives


max te t
t0

1
___
e

Therefore
k k

________
= , k 0
e ln

To get a decaying exponential bound, write


k k = k ( )k ( )k = 2 ( )k , k 0
Then

k k
k =0

2
_______
1

For j > 1 write


k j k = k ( 1/j +1 )k . . . k ( 1/j +1 )k . ( 1/j +1 )k
and proceed as above.

Solution 22.7 Use the fact from Exercise 20.11 that


A T (k) (k, j) = AT (k) (j +1, k +1) , k j
Then A (k) is uniformly exponentially stable if and only if there exist 1 and 0 < 1 such that
A (k) (k,

j) kj , k j

T
A
(k) (k,

j) kj , k j

This is equivalent to

which is equivalent to
T
A
(k) (j +1,

k +1) (j +1)(k +1)j , j +1 k +1

-86-

Linear System Theory, 2/E

Solutions Manual

which is equivalent to
A T (k) (k,

j) kj , k j

which is equivalent to uniform exponential stability of A T (k).


However for the case of A T (k), consider the example where A (k) is 3-periodic with
A (0) =




0 2
 ,
A (1) =
1/ 2 0 




0 1/ 2 
 ,
A (2) =
1/ 2 0 

Then
A (k) (3, 0) =




1/ 2 0 

0 1/ 2 

and it is easy to conclude uniform exponential stability. However


A T (k) (3, 0) =




and it is easy to see that there will be unbounded solutions.

-87-

2 0 

0 1/ 8 




2 0 

0 1/ 2 

CHAPTER 23

With Q = qI, where q > 0 we compute A T (k)QA (k)Q to get the sufficient condition for
uniform exponential stability:

Solution 23.1

a 21 (k), a 22 (k) 1

__
, >0
q

Thus the state equation is uniformly exponentially stable if there exists a constant < 1 such that for all k
a 1 (k), a 2 (k)

With
Q=




q1 0
0 q2





where q 1 , q 2 > 0, the sufficient condition for uniform exponential stability becomes existence of a constant > 0
such that for all k,
a 21 (k)

q 2
_____
,
q1

a 22 (k)

q 1
_____
q2

These conclusions show uniform exponential stability under weaker conditions, where one bounded coefficient
can be larger than unity if the other bounded coefficient is suitably small. For example, suppose
sup | a 2 (k) = < . Then we can take q 1 = 2 +0.01, q 2 = 1, and = 0.01 to conclude uniform exponential
k

stability if a 21 (k) 0.99/ (2 +0.01) for all k.

Solution 23.4 Using the transition matrix computed in Exercise 20.10, an easy computation gives that

Q (k) = I +

T ( j, k)( j, k)
j =k +1

is a diagonal matrix with


q 11 (k) = 1 + a 22 (k) + a 21 (k +1)a 22 (k) + a 22 (k +2)a 21 (k +1)a 22 (k)
+ a 21 (k +3)a 22 (k +2)a 21 (k +1)a 22 (k) + . . .
q 22 (k) = 1 + a 21 (k) + a 22 (k +1)a 21 (k) + a 21 (k +2)a 22 (k +1)a 21 (k)
+ a 22 (k +3)a 21 (k +2)a 22 (k +1)a 21 (k) + . . .
Since this Q (k) is guaranteed to satisfy I Q (k) and A T (k)Q (k)A (k)Q (k) I for all k, a sufficient condition
for uniform exponential stability is existence of a constant such that q 11 (k), q 22 (k) for all k. Clearly this
-88-

Linear System Theory, 2/E

Solutions Manual

holds if a 21 (k), a 22 (k) < 1 for all k, but it also holds under weaker conditions. For example suppose the bound is violated only for k = 0, and
a 21 (0) > 1 , a 21 (0)a 22 (1) <
Then we can conclude uniform exponential stability. (More sophisticated analyses should be possible . . . .)

Solution 23.6 If the state equation is exponentially stable, then by Theorem 23.7 there is for any symmetric M
a unique symmetric Q such that
A T QA Q = M
Write
M=




m1 m2
m2 m3

Q=

q1 q2
q2 q3







and write the discrete-time Lyapunov equation as the vector equation






0
a 20
1
0 1a 0 a 0
0
1
2




q1
q2
q3









m 1
m 2
m 3






The condition
det





0
a 20
1
0 1a 0 a 0
0
1
2





reduces to the condition a 0 0, 1, 2. Assuming this condition we compute Q for M = I, and use the fact that
Q > 0 since M > 0. The expression




q1
q2
q3










1
0
a 20
0 1a 0 a 0
1
2
0

1
0
1






gives
Q=

1
______________
a 0 (a 0 +2)(a 0 1)




a 0 (a 20 +a 0 +2) 2a 0 

2a 0
2(a 0 +1) 

By Sylvesters criterion, Q > 0 if and only if


a 0 (a 0 +2) > 0 , (1a 0 )(a 0 +2) > 0

(+)

Note that these conditions subsume the conditions assumed above.


Now suppose the conditions in (+) hold. Then for M = I > 0 there is a solution Q > 0 to the discrete-time
Lyapunov equation. Thus the state equation is exponentially stable. That is, the conditions in (+) are necessary
and sufficient for exponential stability.

Solution 23.10 Suppose is an eigenvalue of A with eigenvector p. Then since M, Q 0 satisfy


A T QA Q = M
we have

-89-

Linear System Theory, 2/E

Solutions Manual

p H A T QAp p H Qp = p H Mp
That is,

( 2 1 )p H Qp = p H Mp
If p H Mp > 0, then 2 1 < 0, which gives < 1. But suppose p H Mp = 0. Then for k 0,
_
0 = 2k p H Mp = k p H Mp k = p H (A T )k MA k p
= (Re [p ])T (A T )k MA k (Re [p ]) + (Im [p ])T (A T )k MA k (Im [p ])
Since M 0, this implies
0 = (Re [p ])T (A T )k MA k (Re [p ]) = (Im [p ])T (A T )k MA k (Im [p ])
By hypothesis this implies
lim A k (Re [p ]) = lim A k (Im [p ]) = 0

Therefore
lim A k p = lim k p = 0

which implies < 1.

-90-

CHAPTER 24

Solution 24.1  Since

    A^T(k) A(k) = [ a_2²(k)   0       ]
                  [ 0         a_1²(k) ]

it is clear that

    λ_max^{1/2}(k) = max [ |a_1(k)| , |a_2(k)| ]

Thus Corollary 24.3 states that the state equation is uniformly stable if there exists a constant γ such that

    Π_{i=j}^{k} max [ |a_1(i)| , |a_2(i)| ] ≤ γ      (#)

for all k, j with k ≥ j. (Note that this condition holds if

    max [ |a_1(k)| , |a_2(k)| ] ≤ 1

for all but a finite number of values of k.) Of course the condition (#) is not necessary. Consider

    x(k+1) = [ 0    1/9 ] x(k)
             [ 4    0   ]

The eigenvalues are ±2/3, so the state equation is uniformly stable, but clearly (#) fails.
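
A short simulation makes the conservatism concrete: for the example above ‖A‖ = 4 at every step, so the products
in (#) grow without bound, yet the trajectories decay (a minimal sketch using the data of the example).

    import numpy as np

    A = np.array([[0.0, 1.0/9.0],
                  [4.0, 0.0]])
    print(np.linalg.norm(A, 2))          # spectral norm 4 > 1
    print(np.linalg.eig(A)[0])           # eigenvalues +-2/3

    x = np.array([1.0, 1.0])
    for k in range(1, 21):
        x = A @ x
        if k % 5 == 0:
            print(k, np.linalg.norm(x))  # decays although (#) fails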

Solution 24.5  Following the hint, set r(k_o) = 0 and

    r(k) = Σ_{j=k_o}^{k−1} γ(j) φ(j) ,  k ≥ k_o + 1

and write the given inequality as

    φ(k) ≤ ψ(k) + η(k) r(k) ,  k ≥ k_o      (*)

Then, using nonnegativity of γ(k),

    r(k+1) = r(k) + γ(k) φ(k)
           ≤ [ 1 + γ(k) η(k) ] r(k) + γ(k) ψ(k) ,  k ≥ k_o

Since 1 + γ(k) η(k) ≥ 1, k ≥ k_o, multiplying through by Π_{j=k_o}^{k} [ 1 + γ(j) η(j) ]^{−1} gives

    r(k+1) Π_{j=k_o}^{k} [ 1 + γ(j) η(j) ]^{−1}
        ≤ r(k) Π_{j=k_o}^{k−1} [ 1 + γ(j) η(j) ]^{−1}
          + γ(k) ψ(k) Π_{j=k_o}^{k} [ 1 + γ(j) η(j) ]^{−1} ,  k ≥ k_o

Iterating this inequality, and using r(k_o) = 0, gives

    r(k) ≤ Σ_{j=k_o}^{k−1} γ(j) ψ(j) Π_{i=j+1}^{k−1} [ 1 + γ(i) η(i) ] ,  k ≥ k_o + 1

and substituting this into (*) yields the result.
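
The bound can be checked numerically; in the sketch below the symbol names φ, ψ, γ, η follow the reconstruction
above (the exercise's own notation may differ), and φ is generated so that the given inequality holds with
equality, the worst case.

    import numpy as np

    rng = np.random.default_rng(0)
    N = 40
    psi = rng.uniform(0.0, 2.0, N)
    gamma = rng.uniform(0.0, 0.3, N)     # gamma(k) >= 0
    eta = rng.uniform(0.0, 0.5, N)       # eta(k) >= 0

    phi = np.zeros(N)
    r = 0.0
    for k in range(N):
        phi[k] = psi[k] + eta[k] * r     # equality in the given inequality
        r += gamma[k] * phi[k]

    for k in range(N):
        s = sum(gamma[j] * psi[j] * np.prod(1 + gamma[j+1:k] * eta[j+1:k])
                for j in range(k))
        assert phi[k] <= psi[k] + eta[k] * s + 1e-9
    print("bound verified at every step")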

Solution 24.7  By assumption ‖Φ_A(k, j)‖ ≤ β for k ≥ j. Treating f(k, z(k)) as an input, the complete solution
formula is

    z(k) = Φ_A(k, k_o) z(k_o) + Σ_{j=k_o}^{k−1} Φ_A(k, j+1) f(j, z(j)) ,  k ≥ k_o + 1

This gives

    ‖z(k)‖ ≤ β ‖z(k_o)‖ + β Σ_{j=k_o}^{k−1} ‖f(j, z(j))‖

           ≤ β ‖z(k_o)‖ + β Σ_{j=k_o}^{k−1} ν_j ‖z(j)‖ ,  k ≥ k_o + 1

Applying Lemma 24.5,

    ‖z(k)‖ ≤ β ‖z(k_o)‖ exp [ β Σ_{j=k_o}^{k−1} ν_j ]

           ≤ β ‖z(k_o)‖ exp [ β Σ_{j=k_o}^{∞} ν_j ]

           ≤ β e^{βν} ‖z(k_o)‖ ,  k ≥ k_o

This implies uniform stability.
For the scalar example

    A(k) = 1/2 ,   f(k, z(k)) = 0 for k ≥ 0 ,  f(k, z(k)) = z(k) for k < 0 ,
    ν_k = 0 for k ≥ 0 ,  ν_k = 1 for k < 0

we have

    Σ_{j=k}^{∞} ν_j = 0 for k ≥ 0 ,   Σ_{j=k}^{∞} ν_j = −k for k < 0

which is finite for each k, but not bounded uniformly in k. For k_o < 0, the solution of this state equation yields

    z(0) = (3/2)^{−k_o} z_o

Clearly any candidate bound γ can be violated by choosing −k_o sufficiently large, so the state equation is not
uniformly stable.
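
The failure of uniform stability in the example is easy to see in simulation (a minimal sketch of the scalar
recursion, with z_o = 1):

    def z_at_zero(ko, zo=1.0):
        # x(k+1) = (1/2) x(k) + f(k, x(k)), with f = x for k < 0, 0 for k >= 0
        z = zo
        for k in range(ko, 0):
            z = 0.5 * z + z              # net factor 3/2 at each k < 0
        return z

    for ko in (-1, -5, -10, -20):
        print(ko, z_at_zero(ko))         # grows like (3/2)^(-ko)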


CHAPTER 25

Solution 25.1  If M(k_o, k_f) is not invertible, then there exists a nonzero, n×1 vector x_a such that

    0 = x_a^T M(k_o, k_f) x_a = Σ_{j=k_o}^{k_f−1} x_a^T Φ^T(j, k_o) C^T(j) C(j) Φ(j, k_o) x_a

      = Σ_{j=k_o}^{k_f−1} ‖ C(j) Φ(j, k_o) x_a ‖²

This implies

    C(j) Φ(j, k_o) x_a = 0 ,  j = k_o , . . . , k_f − 1

which shows that the nonzero initial state x_a yields the same output on the interval as does the zero initial state.
Therefore the state equation is not observable.
On the other hand, for any initial state x_o we can write, just as in the proof of Theorem 25.9,

    M(k_o, k_f) x_o = O^T(k_o, k_f) [ y(k_o)   ]
                                    [   .      ]
                                    [   .      ]
                                    [ y(k_f−1) ]

If M(k_o, k_f) is invertible, then the initial state is uniquely determined by

    x_o = M^{−1}(k_o, k_f) O^T(k_o, k_f) [ y(k_o)   ]
                                         [   .      ]
                                         [   .      ]
                                         [ y(k_f−1) ]
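
The recovery formula can be exercised numerically; the sketch below builds a random time-varying example (all
data illustrative) and recovers the initial state from the stacked output sequence.

    import numpy as np

    rng = np.random.default_rng(1)
    n, kf = 3, 6
    A = [rng.normal(size=(n, n)) for _ in range(kf)]
    C = [rng.normal(size=(1, n)) for _ in range(kf)]

    def Phi(k, j):
        # Phi(k, j) = A(k-1) ... A(j)
        F = np.eye(n)
        for i in range(j, k):
            F = A[i] @ F
        return F

    O = np.vstack([C[j] @ Phi(j, 0) for j in range(kf)])   # observability matrix
    M = O.T @ O                                            # Gramian M(0, kf)

    x0 = rng.normal(size=n)
    y = O @ x0                           # stacked zero-input response
    x0_hat = np.linalg.solve(M, O.T @ y)
    assert np.allclose(x0, x0_hat)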

Solution 25.2  In general the claim is false. If A(k) is zero, then

    W(0, k_f) = Σ_{j=0}^{k_f−1} Φ(k_f, j+1) b(j) b^T(j) Φ^T(k_f, j+1)

              = b(k_f−1) b^T(k_f−1)

This W(0, k_f) has rank at most 1, and if n ≥ 2 the state equation is not reachable on [0, k_f].
The claim is true if A(k) is invertible at each k. Let k_f = n so that

    W(0, n) = Σ_{j=0}^{n−1} Φ(n, j+1) b(j) b^T(j) Φ^T(n, j+1)

Since Φ(n, j+1) is invertible for j = 0, . . . , n−1, let

    b(k) = Φ^{−1}(n, k+1) e_{k+1} ,  k = 0, . . . , n−1

where e_k is the k-th column of I_n. Then

    W(0, n) = Σ_{j=0}^{n−1} e_{j+1} e_{j+1}^T = I_n

and the state equation is reachable on [0, n].
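
The construction in the invertible case can be verified directly (random invertible A(k), illustrative only):

    import numpy as np

    rng = np.random.default_rng(2)
    n = 4
    A = [rng.normal(size=(n, n)) + 3 * np.eye(n) for _ in range(n)]

    def Phi(k, j):
        F = np.eye(n)
        for i in range(j, k):
            F = A[i] @ F
        return F

    # b(k) = Phi^{-1}(n, k+1) e_{k+1}
    b = [np.linalg.solve(Phi(n, k + 1), np.eye(n)[:, k]) for k in range(n)]

    W = sum(np.outer(Phi(n, j + 1) @ b[j], Phi(n, j + 1) @ b[j]) for j in range(n))
    assert np.allclose(W, np.eye(n))     # W(0, n) = I, so reachable on [0, n]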

Solution 25.7  Suppose W_O(k_o, k_f) is invertible. Given a p×1 vector y_f, let

    u(k) = B^T(k) Φ^T(k_f, k+1) C^T(k_f) W_O^{−1}(k_o, k_f) y_f ,  k = k_o , . . . , k_f − 1

and let u(k) = 0 for other values of k. Then it is easy to show that the zero-state response to this input yields
y(k_f) = y_f. Thus the state equation is output reachable on [k_o, k_f].
Conversely, suppose the state equation is output reachable on [k_o, k_f]. If W_O(k_o, k_f) is not invertible, then
there exists a nonzero p×1 vector y_a such that

    0 = y_a^T W_O(k_o, k_f) y_a

      = Σ_{j=k_o}^{k_f−1} y_a^T C(k_f) Φ(k_f, j+1) B(j) B^T(j) Φ^T(k_f, j+1) C^T(k_f) y_a

      = Σ_{j=k_o}^{k_f−1} ‖ y_a^T C(k_f) Φ(k_f, j+1) B(j) ‖²

Therefore

    y_a^T C(k_f) Φ(k_f, j+1) B(j) = 0 ,  j = k_o , . . . , k_f − 1

But by output reachability, with y_f = y_a, there exists an input u_a(k) such that

    y_a = Σ_{j=k_o}^{k_f−1} C(k_f) Φ(k_f, j+1) B(j) u_a(j)

Thus

    y_a^T y_a = Σ_{j=k_o}^{k_f−1} y_a^T C(k_f) Φ(k_f, j+1) B(j) u_a(j) = 0

and this implies y_a = 0. This contradiction shows that W_O(k_o, k_f) must be invertible.
Note that if rank C(k_f) < p, then W_O(k_o, k_f) cannot be invertible, and the state equation cannot be output
reachable.
If m = p = 1, then

    W_O(k_o, k_f) = Σ_{j=k_o}^{k_f−1} G²(k_f, j)

Thus the state equation is output reachable on [k_o, k_f] if and only if G(k_f, j) ≠ 0 for some j = k_o , . . . , k_f − 1.
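
The sufficiency direction can be checked by building the input above and simulating (random time-varying data,
illustrative only):

    import numpy as np

    rng = np.random.default_rng(3)
    n, m, p, kf = 3, 2, 2, 5
    A = [rng.normal(size=(n, n)) for _ in range(kf)]
    B = [rng.normal(size=(n, m)) for _ in range(kf)]
    Cf = rng.normal(size=(p, n))                     # C(kf)

    def Phi(k, j):
        F = np.eye(n)
        for i in range(j, k):
            F = A[i] @ F
        return F

    WO = sum(Cf @ Phi(kf, j+1) @ B[j] @ B[j].T @ Phi(kf, j+1).T @ Cf.T
             for j in range(kf))

    yf = rng.normal(size=p)
    w = np.linalg.solve(WO, yf)
    u = [B[j].T @ Phi(kf, j+1).T @ Cf.T @ w for j in range(kf)]

    x = np.zeros(n)                                  # zero-state response
    for k in range(kf):
        x = A[k] @ x + B[k] @ u[k]
    assert np.allclose(Cf @ x, yf)                   # y(kf) = yf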


Solution 25.13  We will prove that the state equation is reconstructible if and only if

    [ C        ]
    [ CA       ]
    [  .       ] z = 0   implies   A^n z = 0      (*)
    [  .       ]
    [ CA^{n−1} ]

That is, if and only if the null space of the observability matrix is contained in the null space of A^n.
First, suppose the state equation is not reconstructible. Then there exist n×1 vectors x_a and x_b such that
x_a ≠ x_b and

    [ C ; CA ; . . . ; CA^{n−1} ] x_a = [ C ; CA ; . . . ; CA^{n−1} ] x_b ,   A^n x_a ≠ A^n x_b

That is

    [ C ; CA ; . . . ; CA^{n−1} ] (x_a − x_b) = 0 ,   A^n (x_a − x_b) ≠ 0

Thus the condition (*) fails.
Now suppose the condition (*) fails and z is such that

    [ C ; CA ; . . . ; CA^{n−1} ] z = 0   and   A^n z ≠ 0

Obviously z ≠ 0. Then for x(0) = z the zero-input response is

    y(k) = 0 ,  k = 0, . . . , n−1      (+)

and x(n) = A^n z ≠ 0. But the same output sequence is produced by x(0) = 0, and for this initial state x(n) = 0. Thus we
cannot determine from the output (+) whether x(n) = A^n z or x(n) = 0, which implies the state equation is not
reconstructible.
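
The null-space containment in (*) translates into a simple rank test, sketched below with two illustrative
examples (an unobservable but reconstructible pair, and a pair that is neither):

    import numpy as np

    def reconstructible(A, C):
        n = A.shape[0]
        O = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])
        An = np.linalg.matrix_power(A, n)
        # null(O) contained in null(A^n)  iff  rank [O; A^n] = rank O
        return (np.linalg.matrix_rank(np.vstack([O, An]))
                == np.linalg.matrix_rank(O))

    A1 = np.array([[0.0, 0.0], [0.0, 0.5]])    # unobserved mode is nilpotent
    C1 = np.array([[0.0, 1.0]])
    print(reconstructible(A1, C1))             # True

    A2 = np.array([[0.5, 0.0], [0.0, 0.5]])    # unobserved mode survives in A^n
    C2 = np.array([[0.0, 1.0]])
    print(reconstructible(A2, C2))             # False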


CHAPTER 26

Solution 26.2  For the linear state equation

    x(k+1) = [ 1  k ] x(k) + [ 0 ] u(k)
             [ 1  1 ]        [ 1 ]

easy computations give

    R_2(k) = [ B(k)   Φ(k+1, k) B(k−1) ] = [ 0  k ]
                                           [ 1  1 ]

and

    R_3(k) = [ B(k)   Φ(k+1, k) B(k−1)   Φ(k+1, k−1) B(k−2) ] = [ 0  k  2k−1 ]
                                                                [ 1  1  k    ]

From the respective ranks (rank R_2(0) = 1, while rank R_3(k) = 2 for every k) the state equation is 3-step
reachable, but not 2-step reachable.
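
The rank claims are quickly confirmed numerically (as below):

    import numpy as np

    def R2(k):
        return np.array([[0.0, k], [1.0, 1.0]])

    def R3(k):
        return np.array([[0.0, k, 2.0*k - 1.0], [1.0, 1.0, k]])

    for k in range(-2, 3):
        print(k, np.linalg.matrix_rank(R2(k)), np.linalg.matrix_rank(R3(k)))
    # rank R2(0) = 1, so 2-step reachability fails; rank R3(k) = 2 for all k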

Solution 26.4  The (n+1)-dimensional state equation

    z(k+1) = [ A  0 ] z(k) + [ b ] u(k)
             [ c  0 ]        [ d ]

    y(k) = [ 0_{1×n}  1 ] z(k) − u(k)

has the transfer function

    H(z) = [ 0_{1×n}  1 ] [ zI−A   0 ]^{−1} [ b ]  − 1
                          [ −c     z ]      [ d ]

         = [ 0_{1×n}  1 ] [ (zI−A)^{−1}            0      ] [ b ]  − 1
                          [ z^{−1} c (zI−A)^{−1}   z^{−1} ] [ d ]

         = z^{−1} c (zI−A)^{−1} b + z^{−1} d − 1

         = z^{−1} G(z) − 1

Solution 26.6  By Theorem 26.8 G(z) is realizable if and only if it is a matrix of (real-coefficient) strictly-
proper rational functions. By partial fraction expansion of G(z)/z we can write G(z) in the form

    G(z) = Σ_{l=1}^{m} Σ_{r=1}^{σ_l} G_{lr} z / (z − λ_l)^r

Here λ_1 , . . . , λ_m are distinct complex numbers such that if λ_L is complex, then λ_M = conj(λ_L) for some M. Furthermore
the p×m complex matrices satisfy G_{Mr} = conj(G_{Lr}) for r = 1, . . . , σ_L. From Table 1.10 the corresponding unit pulse
response is

    G(k) = Σ_{l=1}^{m} Σ_{r=1}^{σ_l} G_{lr} C(k, r−1) λ_l^{k+1−r}      (#)

where C(k, r−1) is the binomial coefficient. Thus we can state that a unit pulse response G(k) is realizable if and only if
(a) there exist positive integers m, σ_1 , . . . , σ_m, distinct complex numbers λ_1 , . . . , λ_m, and σ_1 + . . . + σ_m complex
p×m matrices G_{lr} such that (#) holds for all k ≥ 1, and
(b) if λ_L is complex, then λ_M = conj(λ_L) for some M, and the corresponding matrices satisfy G_{Mr} = conj(G_{Lr}) for
r = 1, . . . , σ_L.

Solution 26.8  Suppose the given state equation is minimal and of dimension n. We can write its (strictly-
proper, rational) transfer function as

    G(z) = [ c adj(zI−A) b ] / det(zI−A)

where the polynomial det(zI−A) has degree n. If the numerator and denominator polynomials have a common
root, then this root can be canceled without changing the inverse z-transform of G(z). Therefore, following
Example 26.10, we can write by inspection a dimension-(n−1) realization of the unit pulse response of the
original state equation. This contradicts the assumed minimality, and the contradiction gives that the two
polynomials cannot have a common root.
Now suppose the polynomials det(zI−A) and c adj(zI−A) b have no common root, but that the given state
equation is not minimal. Then there is a minimal realization

    z(k+1) = F z(k) + g u(k)
    y(k) = h z(k)

and we then have

    [ h adj(zI−F) g ] / det(zI−F) = [ c adj(zI−A) b ] / det(zI−A)

where the polynomial det(zI−F) has degree no larger than n−1. This implies that the polynomials det(zI−A) and
c adj(zI−A) b have a common root, a contradiction. Therefore the given state equation is minimal.

Solution 26.11 This is essentially the same as Solution 11.12.


Solution 26.12  Either by writing a minimal realization of G(z) in the form of Example 26.10 and computing
cA^k b, k = 0, . . . , 4, or by long division of G(z), it is easy to verify the first 5 Markov parameters.
For the second part we can either work with an assumed transfer function, or assume a dimension-2 state
equation of the form

    x(k+1) = [ 0    1   ] x(k) + [ 0 ] u(k)
             [ a_0  a_1 ]        [ 1 ]

    y(k) = [ c_0  c_1 ] x(k)

From the latter approach, setting

    cb = 0 ,  cAb = 1 ,  cA²b = 1/2 ,  cA³b = 1/2

easily yields c_1 = 0, c_0 = 1, a_0 = 1/4, a_1 = 1/2.
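
The fitted parameters are easily confirmed (values from the solution above):

    import numpy as np

    a0, a1 = 0.25, 0.5
    A = np.array([[0.0, 1.0], [a0, a1]])
    b = np.array([0.0, 1.0])
    c = np.array([1.0, 0.0])            # [c0, c1]

    v = b.copy()
    for k in range(4):
        print(f"cA^{k}b =", c @ v)      # 0, 1, 1/2, 1/2
        v = A @ v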


CHAPTER 27

Solution 27.1 Similar to Solution 12.1.


Solution 27.4  Suppose the entry G_ij(z) has one pole at z = 1, that is

    G_ij(z) = N_ij(z) / [ (z−1) D_ij(z) ]

where all roots of the polynomial D_ij(z) have magnitude less than unity (so D_ij(1) ≠ 0), and the polynomial N_ij(z)
satisfies N_ij(1) ≠ 0. Suppose that the m×1 U(z) has all components zero except for U_j(z) = z/(z−1). Then the
i-th component of the output is given by

    Y_i(z) = z N_ij(z) / [ (z−1)² D_ij(z) ]

By partial fraction expansion y_i(k) includes decaying exponential terms, possibly a constant term, and the term

    [ N_ij(1) / D_ij(1) ] k ,  k ≥ 0

Since this term is unbounded, every realization of G(z) fails to be uniform bounded-input, bounded-output stable.

Solution 27.7  The claim is not true in the time-varying case. Consider the scalar state equation

    x(k+1) = x(k) + δ(k) u(k)
    y(k) = x(k)

where δ(k) is the unit pulse. The zero-state response to any input is y(k) = 0 for all k when k_o > 0, and, when
k_o ≤ 0,

    y(k) = 0 for k ≤ 0 ,   y(k) = u(0) for k ≥ 1

Thus the state equation is uniform bounded-input, bounded-output stable, with gain constant equal to 1. However
for k_o = 0 and u(k) = (1/2)^k we have u(k) → 0 as k → ∞, but y(k) = 1 for all k ≥ 1.
For the time-invariant case the claim can be proved as follows. Assume u(k) → 0 as k → ∞. Given ε > 0
we will find a K such that ‖y(k)‖ ≤ ε, k ≥ K, which shows that y(k) → 0 as k → ∞. With

    y(k) = Σ_{j=0}^{k} G(k−j) u(j)

and an input signal u(k) such that u(k) → 0 as k → ∞, let

    μ = sup_{k≥0} ‖u(k)‖ ,   ρ = Σ_{k=0}^{∞} ‖G(k)‖

The first constant is finite for a well-defined sequence that goes to zero, and the second is finite by uniform
bounded-input, bounded-output stability. Then there is a positive integer K_1 such that

    ‖u(k)‖ ≤ ε/(2ρ) ,  k ≥ K_1 ,      Σ_{k=K_1}^{∞} ‖G(k)‖ ≤ ε/(2μ)

Let K = 2K_1. Then for k ≥ K we have

    ‖y(k)‖ ≤ Σ_{j=0}^{K_1−1} ‖G(k−j)‖ ‖u(j)‖ + Σ_{j=K_1}^{k} ‖G(k−j)‖ ‖u(j)‖

           ≤ μ Σ_{q=k−K_1+1}^{k} ‖G(q)‖ + [ε/(2ρ)] Σ_{q=0}^{k−K_1} ‖G(q)‖

           ≤ μ Σ_{q=K_1}^{∞} ‖G(q)‖ + [ε/(2ρ)] Σ_{q=0}^{∞} ‖G(q)‖

           ≤ ε/2 + ε/2 = ε
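
The time-varying counterexample is easy to simulate (a minimal sketch with k_o = 0):

    def simulate(K=10):
        # x(k+1) = x(k) + delta(k) u(k), y(k) = x(k), with u(k) = (1/2)^k
        x, ys = 0.0, []
        for k in range(K):
            x = x + (0.5**k if k == 0 else 0.0)   # delta(k) = 1 only at k = 0
            ys.append(x)                          # records y(k+1)
        return ys

    print(simulate())    # 1.0, 1.0, 1.0, ... even though u(k) -> 0
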
Solution 27.8 Similar to Solution 12.12.


CHAPTER 28

Solution 28.2  Lemma 16.18 gives that if V_11 and V are invertible, then

    V^{−1} = [ V_11  V_12 ]^{−1}
             [ V_21  V_22 ]

           = [ V_11^{−1} + V_11^{−1} V_12 V_a^{−1} V_21 V_11^{−1}    −V_11^{−1} V_12 V_a^{−1} ]
             [ −V_a^{−1} V_21 V_11^{−1}                              V_a^{−1}                 ]

where V_a = V_22 − V_21 V_11^{−1} V_12. From the expression V V^{−1} = I, written as

    [ V_11  V_12 ] [ W_11  W_12 ] = I
    [ V_21  V_22 ] [ W_21  W_22 ]

we obtain

    V_11 W_11 + V_12 W_21 = I
    V_21 W_11 + V_22 W_21 = 0

Under the assumption that V_11 and V_22 are invertible these imply

    W_11 = V_11^{−1} − V_11^{−1} V_12 W_21 ,   W_21 = −V_22^{−1} V_21 W_11

Solving for W_11 gives

    W_11 = ( V_11 − V_12 V_22^{−1} V_21 )^{−1}

and comparing this with the 1,1-block of V^{−1} from Lemma 16.18 gives

    ( V_11 − V_12 V_22^{−1} V_21 )^{−1} = V_11^{−1} + V_11^{−1} V_12 ( V_22 − V_21 V_11^{−1} V_12 )^{−1} V_21 V_11^{−1}
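
A numerical spot check of the final identity (random blocks, shifted so all the needed inverses safely exist):

    import numpy as np

    rng = np.random.default_rng(4)
    V11 = rng.normal(size=(3, 3)) + 3 * np.eye(3)
    V12 = rng.normal(size=(3, 2))
    V21 = rng.normal(size=(2, 3))
    V22 = rng.normal(size=(2, 2)) + 3 * np.eye(2)

    inv = np.linalg.inv
    lhs = inv(V11 - V12 @ inv(V22) @ V21)
    rhs = (inv(V11)
           + inv(V11) @ V12 @ inv(V22 - V21 @ inv(V11) @ V12) @ V21 @ inv(V11))
    assert np.allclose(lhs, rhs)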

Solution 28.3  Given α > 1 consider

    z(k+1) = A_α z(k) + B_α u(k)

where A_α = αA and B_α = αB. It is easy to see that reachability is preserved, and if we choose K such that

    z(k+1) = ( A_α + B_α K ) z(k) = α ( A + BK ) z(k)

is uniformly exponentially stable, then by Lemma 28.7 we have that

    x(k+1) = ( A + BK ) x(k)

is uniformly exponentially stable with rate α. So choose, by Theorem 28.9,

    K = −B_α^T ( A_α^T )^n [ Σ_{k=0}^{n} A_α^k B_α B_α^T ( A_α^T )^k ]^{−1} A_α^{n+1}

That is,

    K = −B^T α ( αA^T )^n [ Σ_{k=0}^{n} ( αA )^k ( αB )( αB )^T ( αA^T )^k ]^{−1} ( αA )^{n+1}

      = −B^T ( A^T )^n [ Σ_{k=0}^{n} α^{−2(n−k)} A^k B B^T ( A^T )^k ]^{−1} A^{n+1}

Solution 28.4  Similar to Solution 13.11. However for the time-invariant case the reachability matrix rank test
can be used, rather than the eigenvector test, by writing

    [ B  (A+BK)B  (A+BK)²B  . . . ]

        = [ B  AB  A²B  . . . ] [ I  KB  KAB+(KB)²  . . . ]
                                [ 0  I   KB          . . . ]
                                [ 0  0   I           . . . ]
                                [ .  .   .           .     ]

Since the rightmost factor is invertible, the two reachability matrices have the same rank.
Solution 28.6 Similar to Solution 2.8.


Solution 28.8  Supposing that the linear state equation is reachable, there exists K such that all eigenvalues of
A+BK have magnitude less than unity. Therefore (I−A−BK) is invertible, and if we suppose

    [ A−I  B ]
    [ C    0 ]

is invertible, then C (I−A−BK)^{−1} B is invertible from Exercise 28.6. Then given any diagonal, m×m matrix Λ, we
can choose

    N = [ C (I−A−BK)^{−1} B ]^{−1} Λ

to obtain G(1) = Λ. For this closed-loop system, any x(0) and any constant input r(k) = r_o yields

    lim_{k→∞} y(k) = Λ r_o

by the final value theorem. That is, the steady-state value of the response to constant inputs is noninteracting.
(For finite time values, or other inputs, interaction typically occurs.)
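
The steady-state decoupling is visible in simulation; the sketch below uses illustrative matrices and an arbitrary
stabilizing gain K (not data from the exercise).

    import numpy as np

    A = np.array([[0.0, 1.0], [0.25, 0.5]])
    B = np.eye(2)
    C = np.eye(2)
    K = -A + 0.1 * np.eye(2)            # A + BK = 0.1 I, stable
    Lam = np.diag([2.0, -1.0])          # desired diagonal G(1)

    Acl = A + B @ K
    N = np.linalg.inv(C @ np.linalg.inv(np.eye(2) - Acl) @ B) @ Lam
    G1 = C @ np.linalg.inv(np.eye(2) - Acl) @ B @ N
    assert np.allclose(G1, Lam)         # G(1) = Lambda

    x = np.array([1.0, -2.0])
    r = np.array([1.0, 1.0])            # constant reference
    for _ in range(200):
        x = Acl @ x + B @ N @ r
    print(C @ x)                        # approaches Lam @ r = [2, -1]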


CHAPTER 29

Solution 29.1  The error e_b(k) satisfies

    e_b(k+1) = z(k+1) − P_b(k+1) x(k+1)
             = F(k) z(k) + [ G_b(k) C(k) − P_b(k+1) A(k) ] x(k) + [ G_a(k) − P_b(k+1) B(k) ] u(k)
             = F(k) z(k) − F(k) P_b(k) x(k)
             = F(k) e_b(k)

Therefore e_b(k) → 0 exponentially as k → ∞. Now

    e(k) = x(k) − x̂(k)
         = x(k) − H(k) C(k) x(k) − J(k) z(k)
         = −J(k) e_b(k) + [ I − H(k) C(k) − J(k) P_b(k) ] x(k)
         = −J(k) e_b(k)

Therefore if J(k) is bounded, that is, ‖J(k)‖ ≤ ρ for all k, then e_b(k) → 0 implies e(k) → 0, as k → ∞, and
x̂(k) is an asymptotic estimate of x(k).

Solution 29.2  The plant is

    [ x_a(k+1) ]   [ F_11(k)  F_12(k) ] [ x_a(k) ]   [ G_1(k) ]
    [ x_b(k+1) ] = [ F_21(k)  F_22(k) ] [ x_b(k) ] + [ G_2(k) ] u(k)

    y(k) = [ I_p  0 ] [ x_a(k) ]
                      [ x_b(k) ]

With

    P_b(k) = [ −H(k)  I_{n−p} ]

we have

    [ C(k)   ]   [ I_p    0       ]        [ C(k)   ]^{−1}   [ I_p   0       ]
    [ P_b(k) ] = [ −H(k)  I_{n−p} ]  ,     [ P_b(k) ]      = [ H(k)  I_{n−p} ]

Then the equations in Exercise 29.1 give

    F(k) = F_22(k) − H(k+1) F_12(k)
    G_b(k) = F(k) H(k) − H(k+1) F_11(k) + F_21(k)
    G_a(k) = −H(k+1) G_1(k) + G_2(k)

The observer estimate is

    x̂(k) = [ I_p  ] y(k) + [ 0       ] z(k)
           [ H(k) ]        [ I_{n−p} ]

         = [ x_a(k)              ]
           [ H(k) x_a(k) + z(k)  ]

Therefore

    x̂_a(k) = x_a(k)
    x̂_b(k) = H(k) x_a(k) + z(k)

where

    z(k+1) = F(k) z(k) + G_a(k) u(k) + G_b(k) y(k)

This is exactly the same as the reduced-dimension observer in the text.

Solution 29.5 Similar to Solution 15.6.


Solution 29.6 Similar to Solution 15.10.
