Wilson J. Rugh
Department of Electrical and Computer Engineering
Johns Hopkins University
PREFACE
With some lingering ambivalence about the merits of the undertaking, but with a bit more dedication than
the first time around, I prepared this Solutions Manual for the second edition of Linear System Theory. Roughly
40% of the exercises are addressed, including all exercises in Chapter 1 and all others used in developments in the
text. This coverage complements the 60% of those in an unscientific survey who wanted a solutions manual, and
perhaps does not overly upset the 40% who voted no. (The main contention between the two groups involved the
inevitable appearance of pirated student copies and the view that an available solution spoils the exercise.)
I expect that a number of my solutions could be improved, and that some could be improved using only
techniques from the text. Also the press of time and my flagging enthusiasm for text processing impeded the
crafting of economical solutions; some solutions may contain too many steps or too many words. However I
hope that the error rate in these pages is low and that the value of this manual is greater than the price paid.
Please send comments and corrections to the author at rugh@jhu.edu or ECE Department, Johns Hopkins
University, Baltimore, MD 21218 USA.
CHAPTER 1
Solution 1.1
(a) For k = 2, (A + B)² = A² + AB + BA + B². If AB = BA, then (A + B)² = A² + 2AB + B². In general if
AB = BA, then the k-fold product (A + B)^k can be written as a sum of terms of the form A^j B^{k−j}, j = 0, …, k. The
number of terms that can be written as A^j B^{k−j} is given by the binomial coefficient (k choose j). Therefore AB = BA
implies

(A + B)^k = Σ_{j=0}^{k} (k choose j) A^j B^{k−j}
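This identity can be spot-checked numerically. The following sketch assumes NumPy is available; the matrices A and B are illustrative choices (any commuting pair works), not taken from the exercise.

```python
# Spot-check of Solution 1.1(a): if AB = BA then
# (A + B)^k = sum_j C(k, j) A^j B^(k-j).
import numpy as np
from math import comb

A = np.array([[1.0, 2.0], [0.0, 3.0]])
B = 2.0 * A + np.eye(2)        # a polynomial in A, so it commutes with A

k = 5
lhs = np.linalg.matrix_power(A + B, k)
rhs = sum(comb(k, j)
          * np.linalg.matrix_power(A, j) @ np.linalg.matrix_power(B, k - j)
          for j in range(k + 1))
assert np.allclose(lhs, rhs)
```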
(b) Write

det[λI − A(t)] = λⁿ + a_{n−1}(t)λ^{n−1} + ⋯ + a₁(t)λ + a₀(t)

where invertibility of A(t) implies a₀(t) ≠ 0. The Cayley-Hamilton theorem implies

Aⁿ(t) + a_{n−1}(t)A^{n−1}(t) + ⋯ + a₀(t)I = 0

for all t. Multiplying through by A^{−1}(t) yields

A^{−1}(t) = −[ a₁(t)I + a₂(t)A(t) + ⋯ + a_{n−1}(t)A^{n−2}(t) + A^{n−1}(t) ] / a₀(t)

for all t. Since a₀(t) = det[−A(t)], |a₀(t)| = |det A(t)|. Assume ε > 0 is such that |det A(t)| ≥ ε for all t. Since
||A(t)|| ≤ α we have |a_ij(t)| ≤ α, and thus there exists a β such that |a_j(t)| ≤ β for all t. Then, for all t,

||A^{−1}(t)|| = || a₁(t)I + ⋯ + A^{n−1}(t) || / |det A(t)| ≤ ( β + βα + ⋯ + βα^{n−2} + α^{n−1} ) / ε
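A numerical sketch of this inverse formula, using np.poly for the characteristic-polynomial coefficients; the test matrix is an arbitrary well-conditioned choice, not from the text.

```python
# Solution 1.1(b) check: A^{-1} from Cayley-Hamilton.
# np.poly returns [1, a_{n-1}, ..., a_1, a_0].
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 4.0 * np.eye(4)   # comfortably invertible
c = np.poly(A)
n = A.shape[0]

# acc = a_1 I + a_2 A + ... + a_{n-1} A^{n-2} + A^{n-1}
acc = np.zeros_like(A)
P = np.eye(n)
for j in range(1, n + 1):
    acc += c[n - j] * P     # coefficient a_j sits at index n-j (a_n = 1)
    P = P @ A
Ainv = -acc / c[n]          # c[n] = a_0
assert np.allclose(Ainv, np.linalg.inv(A))
```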
Solution 1.2
(a) If λ is an eigenvalue of A, then recursive use of Ap = λp shows that λ^k is an eigenvalue of A^k. However to
show multiplicities are preserved is more difficult, and apparently requires Jordan form, or at least results on
similarity to upper triangular form.
(b) If λ is an eigenvalue of invertible A, then λ is nonzero and Ap = λp implies A^{−1}p = (1/λ)p. As in (a),
addressing preservation of multiplicities is more difficult.
(c) A^T has eigenvalues λ₁, …, λₙ since det(λI − A^T) = det(λI − A)^T = det(λI − A).
(d) A^H has eigenvalues λ̄₁, …, λ̄ₙ using (c) and the fact that the determinant (sum of products) of a conjugate is
the conjugate of the determinant. That is,

det(λI − A^H) = det( (λ̄I − A)^H ) = conj( det(λ̄I − A) )

(e) αA has eigenvalues αλ₁, …, αλₙ since Ap = λp implies (αA)p = (αλ)p.
(f) Eigenvalues of A^T A are not nicely related to eigenvalues of A. Consider the example

A = [ 0  α ; 0  0 ] ,  A^T A = [ 0  0 ; 0  α² ]

where the eigenvalues of A are both zero, and the eigenvalues of A^T A are 0, α². (If A is symmetric, then (a)
applies.)
Solution 1.3
(a) If the eigenvalues of A are all zero, then det(λI − A) = λⁿ and the Cayley-Hamilton theorem shows that A is
nilpotent. On the other hand if one eigenvalue, say λ₁, is nonzero, let p be a corresponding eigenvector. Then
A^k p = λ₁^k p ≠ 0 for all k ≥ 0, and A cannot be nilpotent.
(b) Suppose Q is real and symmetric, and λ is an eigenvalue of Q. Then λ̄ also is an eigenvalue. From the
eigenvalue/eigenvector equation Qp = λp we get p^H Q p = λ p^H p. Also Qp̄ = λ̄p̄, and conjugate transposing gives
p^H Q p = λ̄ p^H p. Subtracting the two results gives (λ − λ̄) p^H p = 0. Since p ≠ 0, this gives λ = λ̄, that is, λ is real.
(c) If A is upper triangular, then λI − A is upper triangular. Recursive Laplace expansion of the determinant about
the first column gives

det(λI − A) = (λ − a₁₁) ⋯ (λ − aₙₙ)

which implies the eigenvalues of A are the diagonal entries a₁₁, …, aₙₙ.
Solution 1.4
(a)

A = [ 0  0 ; 1  0 ]  implies  A^T A = [ 1  0 ; 0  0 ]  implies  ||A|| = 1

(b)

A = [ 3  1 ; 1  3 ]  implies  A^T A = [ 10  6 ; 6  10 ]

Then

det(λI − A^T A) = (λ − 16)(λ − 4)

which implies ||A|| = 4.
(c)

A = [ 1−i  0 ; 0  1+i ]  implies  A^H A = [ (1+i)(1−i)  0 ; 0  (1−i)(1+i) ] = [ 2  0 ; 0  2 ]

which implies ||A|| = √2.
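These three spectral norms can be confirmed with NumPy, whose matrix 2-norm is the largest singular value:

```python
# Check of the spectral norms computed in Solution 1.4.
import numpy as np

A1 = np.array([[0.0, 0.0], [1.0, 0.0]])
A2 = np.array([[3.0, 1.0], [1.0, 3.0]])
A3 = np.array([[1 - 1j, 0], [0, 1 + 1j]])

assert np.isclose(np.linalg.norm(A1, 2), 1.0)
assert np.isclose(np.linalg.norm(A2, 2), 4.0)
assert np.isclose(np.linalg.norm(A3, 2), np.sqrt(2.0))
```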
Solution 1.6 By definition of the spectral norm, for any x ≠ 0 we can write

||A|| = max_{||z||=1} ||Az|| ≥ || A ( x / ||x|| ) || = ||Ax|| / ||x||

since for any nonzero scalar α, replacing x by αx changes neither the ratio nor the maximum. Therefore

||Ax|| ≤ ||A|| ||x||

for all x, the case x = 0 being trivial.

Solution 1.7 Using Exercise 1.6,

||AB|| = max_{||x||=1} ||(AB)x|| = max_{||x||=1} ||A(Bx)|| ≤ max_{||x||=1} { ||A|| ||Bx|| } = ||A|| max_{||x||=1} ||Bx|| = ||A|| ||B||
Solution 1.8 We use the following easily verified facts about partitioned vectors:

|| [ x₁ ; x₂ ] || ≥ ||x₁||, ||x₂|| ;  || [ x₁ ; 0 ] || = ||x₁|| ,  || [ 0 ; x₂ ] || = ||x₂||

Write

Ax = [ A₁₁  A₁₂ ; A₂₁  A₂₂ ] [ x₁ ; x₂ ] = [ A₁₁x₁ + A₁₂x₂ ; A₂₁x₁ + A₂₂x₂ ]

Then, taking x = [ x₁ ; 0 ],

||A|| = max_{||x||=1} ||Ax|| ≥ max_{||x₁||=1} || [ A₁₁x₁ ; A₂₁x₁ ] || ≥ max_{||x₁||=1} ||A₁₁x₁|| = ||A₁₁||

The other partitions are handled similarly. The last part is easy from the definition of induced norm. For example
if

A = [ 0  A₁₂ ; 0  0 ]

then

||A|| = max_{||x||=1} ||Ax|| = max_{||x₂||=1} ||A₁₂x₂|| = ||A₁₂||
Since ||A^T|| = ||A||, for any x we have ||A^T x||₂ ≤ ||A|| ||x||₂. This immediately gives

|x^T A x| ≤ ||A|| ||x||₂²

If λ is an eigenvalue of A and x is a corresponding unity-norm eigenvector, then

|λ| = ||λx|| = ||Ax|| ≤ ||A|| ||x|| = ||A||

For symmetric Q,

λ_max(Q²) = [ max_{1≤i≤n} |λ_i(Q)| ]²

so that ||Q|| = λ_max^{1/2}(Q^T Q) = max_{1≤i≤n} |λ_i(Q)|, and

|x^T Q x| ≤ ||Qx|| ||x|| ≤ [ max_{1≤i≤n} |λ_i(Q)| ] ||x||²

With x_a a unity-norm eigenvector corresponding to an eigenvalue of Q of largest magnitude, x_a^T Q x_a = λ x_a^T x_a, so the bound is attained. Thus

max_{||x||=1} |x^T Q x| = max_{1≤i≤n} |λ_i(Q)| = ||Q||
Solution 1.11 Since ||Ax||² = (Ax)^T(Ax) = x^T A^T A x,

||A|| = [ max_{||x||=1} x^T A^T A x ]^{1/2}

and

max_{||x||=1} x^T A^T A x = λ_max(A^T A)

so we have ||A|| = λ_max^{1/2}(A^T A).
Solution 1.12 Since A^T A > 0 we have λ_i(A^T A) > 0, i = 1, …, n, and (A^T A)^{−1} > 0. Then by Exercise 1.11,

||A^{−1}||² = 1 / λ_min(A^T A)

Ordering the eigenvalues so that λ_n(A^T A) = λ_min(A^T A),

1 / λ_min(A^T A) = [ Π_{i=1}^{n−1} λ_i(A^T A) ] / det(A^T A) ≤ [ λ_max(A^T A) ]^{n−1} / (det A)² = ||A||^{2(n−1)} / (det A)²

Therefore

||A^{−1}|| ≤ ||A||^{n−1} / |det A|
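A quick randomized check of this bound; the shift by 2I is only there to keep the sampled matrices invertible.

```python
# Spot-check of Solution 1.12: ||A^{-1}|| <= ||A||^{n-1} / |det A|
# in the spectral norm.
import numpy as np

rng = np.random.default_rng(1)
n = 4
for _ in range(100):
    A = rng.standard_normal((n, n)) + 2.0 * np.eye(n)
    lhs = np.linalg.norm(np.linalg.inv(A), 2)
    rhs = np.linalg.norm(A, 2) ** (n - 1) / abs(np.linalg.det(A))
    assert lhs <= rhs * (1 + 1e-9)   # tiny slack for rounding
```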
Solution 1.13 Assume A ≠ 0, for the zero case is trivial. For any unity-norm x and y,

|y^T A x| ≤ ||y|| ||Ax|| ≤ ||A||

Therefore

max_{||x||=||y||=1} |y^T A x| ≤ ||A||

Now let x_a be a unity-norm vector such that ||A x_a|| = ||A||, and let

y_a = A x_a / ||A||

Then ||y_a|| = 1 and

y_a^T A x_a = x_a^T A^T A x_a / ||A|| = ||A x_a||² / ||A|| = ||A||² / ||A|| = ||A||

Therefore

max_{||x||=||y||=1} |y^T A x| = ||A||
Solution 1.14 The coefficients of the characteristic polynomial of a matrix are continuous functions of the matrix
entries, since the determinant is a continuous function of the entries (a sum of products). Also the roots of a
polynomial are continuous functions of the coefficients. (A proof is given in Appendix A.4 of E.D. Sontag,
Mathematical Control Theory, Springer-Verlag, New York, 1990.) Since a composition of continuous functions
is a continuous function, the pointwise-in-t eigenvalues of A(t) are continuous in t.
This argument gives that the (nonnegative) eigenvalues of A^T(t)A(t) are continuous in t. Then the maximum at
each t is continuous in t (plot two eigenvalues and consider their pointwise maximum to see this). Finally, since
the square root is a continuous function of nonnegative arguments, we conclude ||A(t)|| is continuous in t.
However for continuously-differentiable A(t), ||A(t)|| need not be continuously differentiable in t. Consider the
example

A(t) = [ t  0 ; 0  t² ]

for which

||A(t)|| = { t ,  0 ≤ t ≤ 1 ;  t² ,  1 < t < ∞ }

Clearly the time derivative of ||A(t)|| is discontinuous at t = 1. (This overlaps Exercise 1.18 a bit.)
Also the eigenvalues of continuously-differentiable A(t) are not necessarily continuously differentiable; consider

A(t) = [ 0  1 ; −1  t ]

An easy computation gives the eigenvalues

λ(t) = t/2 ± √(t² − 4) / 2

Thus

λ̇(t) = 1/2 ± t / ( 2√(t² − 4) )

which fails to be continuous at t = ±2.
Therefore

λ_min(Q^{−1}) = 1 / λ_max(Q) ,  λ_max(Q^{−1}) = 1 / λ_min(Q)
Solution 1.16 If W(t) − εI is symmetric and positive semidefinite for all t, then for any x,

x^T W(t) x ≥ ε x^T x

for all t. At any value of t, let x_t be an eigenvector corresponding to an eigenvalue (necessarily real) λ_t of W(t).
Then

x_t^T W(t) x_t = λ_t x_t^T x_t ≥ ε x_t^T x_t

That is, λ_t ≥ ε. This holds for any eigenvalue of W(t) and every t. Since the determinant is the product of
eigenvalues,

det W(t) ≥ εⁿ > 0

for any t.
Solution 1.17 Using the product rule to differentiate A(t)A^{−1}(t) = I yields

Ȧ(t) A^{−1}(t) + A(t) (d/dt) A^{−1}(t) = 0

which gives

(d/dt) A^{−1}(t) = −A^{−1}(t) Ȧ(t) A^{−1}(t)
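A finite-difference sanity check of this formula, for a hypothetical smooth invertible A(t) (the particular entries below are illustrative only):

```python
# Check of Solution 1.17: d/dt A^{-1}(t) = -A^{-1} (dA/dt) A^{-1}.
import numpy as np

def A(t):        # invertible for all t: diagonally dominant
    return np.array([[2 + np.sin(t), 0.5], [0.3, 2 + np.cos(t)]])

def Adot(t):
    return np.array([[np.cos(t), 0.0], [0.0, -np.sin(t)]])

t, h = 0.7, 1e-6
num = (np.linalg.inv(A(t + h)) - np.linalg.inv(A(t - h))) / (2 * h)
Ainv = np.linalg.inv(A(t))
formula = -Ainv @ Adot(t) @ Ainv
assert np.allclose(num, formula, atol=1e-6)
```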
Solution 1.18
x (t),
functions,
_d_
dt
_d_
dt
_d_
= 2x (t)
dt
x (t)2 = 2x (t)
x (t)
x (t)
Also we can write, using the product rule and the Cauchy-Schwarz inequality,
_d_ x (t)2 = _d_ x T (t) x (t) = x. T (t) x (t) + x T (t) x. (t) = 2x T (t) x. (t)
dt
dt
.
2x (t)x (t)
For t such that x (t) 0, comparing these expressions gives
_d_
dt
x (t) x (t)
If x (t) = 0 on a closed interval, then on that interval the result is trivial. If x (t) = 0 at an isolated point, then
continuity arguments show that the result is valid. Note that for the differentiable function x (t) = t, x (t) = t
is not differentiable at t = 0. Thus we must make the assumption that x (t) is differentiable. (While this
inequality is not explicitly used in the book, the added differentiability hypothesis explains why we always
differentiate x (t)2 = x T (t) x (t) instead of x (t).)
Solution 1.19 To prove the contrapositive claim, suppose for each i, j there is a constant η_ij such that

| ∫_{t₀}^{t} f_ij(σ) dσ | ≤ η_ij ,  t ≥ t₀

Then by the inequality on page 7, noting that max_{i,j} |f_ij(t)| is a continuous function of t and taking the pointwise-
in-t maximum,

|| ∫_{t₀}^{t} F(σ) dσ || ≤ √(mn) max_{i,j} | ∫_{t₀}^{t} f_ij(σ) dσ | ≤ √(mn) Σ_{i=1}^{m} Σ_{j=1}^{n} η_ij < ∞

for all t ≥ t₀. The case of Σ_{j=0}^{k} F(j) is similar.
Solution 1.20 If λ(t), p(t) are a pointwise-in-t eigenvalue/eigenvector pair for A^{−1}(t), then
A^{−1}(t) p(t) = λ(t) p(t), and

|λ(t)| = ||λ(t)p(t)|| / ||p(t)|| = ||A^{−1}(t)p(t)|| / ||p(t)|| ≤ ||A^{−1}(t)|| ≤ α

Therefore

|det A(t)| = 1 / |det A^{−1}(t)| = 1 / |λ₁(t) ⋯ λₙ(t)| ≥ 1/αⁿ > 0

for all t.
Note that

∫_{ta}^{tb} Q(σ) dσ ≥ 0

since for any x,

x^T [ ∫_{ta}^{tb} Q(σ) dσ ] x = ∫_{ta}^{tb} x^T Q(σ) x dσ ≥ 0

For a symmetric positive-semidefinite matrix the norm is bounded by the trace, so

|| ∫_{ta}^{tb} Q(σ) dσ || ≤ tr ∫_{ta}^{tb} Q(σ) dσ = ∫_{ta}^{tb} tr Q(σ) dσ ≤ n ∫_{ta}^{tb} ||Q(σ)|| dσ

Finally, if

∫_{ta}^{tb} Q(σ) dσ ≤ εI

then taking the trace of both sides gives

∫_{ta}^{tb} tr Q(σ) dσ ≤ nε
CHAPTER 2
Solution 2.3 The nominal solution for ũ(t) = sin 3t is ỹ(t) = sin t. Let x₁(t) = y(t), x₂(t) = ẏ(t) to write the
state equation

ẋ(t) = [ x₂(t) ; −(4/3)x₁³(t) − (1/3)u(t) ]

Computing the Jacobians and evaluating along the nominal gives the linearized state equation

δẋ(t) = [ 0  1 ; −4 sin²t  0 ] δx(t) + [ 0 ; −1/3 ] δu(t)
δy(t) = [ 1  0 ] δx(t)

where

δx(t) = x(t) − [ sin t ; cos t ]
0 = −x₁ + x₁² + x₂² = x₁(x₁ − 1) + x₂²

Evidently there are 4 possible constant solutions:

x̃_a = [ 0 ; 0 ] ,  x̃_b = [ 1 ; 0 ] ,  x̃_c = [ 1/2 ; 1/2 ] ,  x̃_d = [ 1/2 ; −1/2 ]

Since

∂f/∂x = [ −2x₂  1−2x₁ ; −1+2x₁  2x₂ ] ,  ∂f/∂u = [ 0 ; 1 ]

evaluating at each of the constant nominals gives the corresponding 4 linearized state equations.
rank A = rank [ A  b ]. Also, x̃ is a constant nominal with c x̃ = 0 if and only if

0 = A x̃ + b ũ
0 = c x̃

that is, if and only if

[ A ; c ] x̃ = [ −b ũ ; 0 ]

and such an x̃ exists for some ũ if and only if

rank [ A ; c ] = rank [ A  b ; c  0 ]
Solution 2.8
(a) Since

[ A+BK  B ; C  0 ] = [ A  B ; C  0 ] [ I  0 ; K  I ]

and both factors on the right are invertible, the left side is invertible. Let

[ A+BK  B ; C  0 ]^{−1} = [ R₁  R₂ ; R₃  R₄ ]

Then the 1,2-block of the product gives R₂ = −(A + BK)^{−1}BR₄ and the 2,2-block gives CR₂ = I, that is, I = −C(A + BK)^{−1}BR₄.
Thus [ C(A + BK)^{−1}B ]^{−1} exists and is given by −R₄.
(b) We need to show that there exists N such that

0 = (A + BK)x̃ + BNũ
ũ = Cx̃

The first equation gives

x̃ = −(A + BK)^{−1}BN ũ

Thus we need to choose N such that

−C(A + BK)^{−1}BN ũ = ũ

From part (a) we take N = −[ C(A + BK)^{−1}B ]^{−1} = R₄.
The constant nominal condition is

0 = (A + Dũ)x̃ + bũ

If A + Dũ is invertible, then

x̃ = −(A + Dũ)^{−1} b ũ    (+)

If A is invertible, then by continuity of the determinant det(A + Dũ) ≠ 0 for all ũ such that |ũ| is sufficiently
small, and (+) defines a corresponding constant nominal. The corresponding linearized state equation is

δẋ(t) = (A + Dũ) δx(t) + [ b − D(A + Dũ)^{−1}bũ ] δu(t)
δy(t) = C δx(t)
Solution 2.12 For the given nominal input, nominal output, and nominal initial state, the nominal solution
satisfies

(d/dt) x̃₁(t) = 1 ,  1 = x̃₂(t) − 2x̃₃(t)

Integrating for x̃₁(t) and then x̃₃(t) easily gives the nominal solution x̃₁(t) = t, x̃₂(t) = 2t² + 3, and x̃₃(t) = t² + 1.
The corresponding linearized state equation is specified by

A = [ 0  0  0 ; 1  0  1 ; 0  1  2 ] ,  B(t) = [ 0 ; t ; 0 ] ,  C = [ 0  1  −2 ]

It is unusual that the nominal input and nominal output are constants, but the linearization is time varying.
CHAPTER 3
Solution 3.2 Differentiating term k+1 of the Peano-Baker series with respect to τ using the Leibniz rule gives

(∂/∂τ) ∫_τ^t A(σ₁) ∫_τ^{σ₁} A(σ₂) ⋯ ∫_τ^{σ_k} A(σ_{k+1}) dσ_{k+1} ⋯ dσ₁
 = −[ ∫_τ^t A(σ₁) ∫_τ^{σ₁} A(σ₂) ⋯ ∫_τ^{σ_{k−1}} A(σ_k) dσ_k ⋯ dσ₁ ] A(τ)

since the lower-limit contribution of the innermost integral is −A(τ), while the lower-limit contributions of the
outer integrals each leave an inner integral of the form ∫_τ^τ and therefore vanish. Recognizing this as −1 times
term k of the uniformly convergent series for Φ(t, τ), multiplied on the right by A(τ), gives

(∂/∂τ) Φ(t, τ) = −Φ(t, τ) A(τ)

(Of course it is simpler to use the formula for the derivative of an inverse matrix given in Exercise 1.17.)
Solution 3.6 Writing the state equation as a pair of scalar equations, the first one is

ẋ₁(t) = −[ t / (1 + t²) ] x₁(t)

and an easy computation gives

x₁(t) = x₁₀ / (1 + t²)^{1/2}

Then the second scalar equation becomes

ẋ₂(t) = −[ 4t / (1 + t²) ] x₂(t) + [ 1 / (1 + t²)^{1/2} ] x₁₀

The complete solution formula gives, with some help from Mathematica,

x₂(t) = [ 1 / (1 + t²)² ] x₂₀ + [ 1 / (1 + t²)² ] ∫₀^t (1 + σ²)^{3/2} dσ · x₁₀
 = [ 1 / (1 + t²)² ] x₂₀ + [ √(1 + t²)(t³/4 + 5t/8) + (3/8) sinh^{−1}(t) ] / (1 + t²)² · x₁₀
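The Mathematica-assisted antiderivative can be verified by simple quadrature; the value t = 1.7 is an arbitrary test point.

```python
# Check of the integral used in Solution 3.6:
# int_0^t (1+s^2)^{3/2} ds = sqrt(1+t^2)(t^3/4 + 5t/8) + (3/8) asinh(t).
import numpy as np

t = 1.7
s = np.linspace(0.0, t, 200001)
f = (1 + s**2) ** 1.5
quad = np.sum((f[1:] + f[:-1]) / 2 * np.diff(s))   # trapezoidal rule
closed = np.sqrt(1 + t**2) * (t**3 / 4 + 5 * t / 8) + (3 / 8) * np.arcsinh(t)
assert abs(quad - closed) < 1e-6
```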
With

r(t) = ∫_{t₀}^t v(σ)φ(σ) dσ

we have ṙ(t) = v(t)φ(t), and the hypothesis

φ(t) ≤ ψ + r(t)    (*)

gives ṙ(t) ≤ v(t)ψ + v(t)r(t). Multiplying by the integrating factor e^{−∫_{t₀}^t v(σ)dσ} we obtain

(d/dt) [ r(t) e^{−∫_{t₀}^t v(σ)dσ} ] = [ ṙ(t) − v(t)r(t) ] e^{−∫_{t₀}^t v(σ)dσ} ≤ v(t) ψ e^{−∫_{t₀}^t v(σ)dσ}

Integrating from t₀ to t gives r(t) ≤ ψ [ e^{∫_{t₀}^t v(σ)dσ} − 1 ], and then (*) gives

φ(t) ≤ ψ e^{∫_{t₀}^t v(σ)dσ}

For the uniqueness argument, suppose ż(t) = A(t)z(t) with z(t₀) = 0, so that

(d/dt) ||z(t)||² = 2 z^T(t) A(t) z(t) ,  t ≥ t₀

At each t ≥ t₀ let

a(t) = 2n² max_{1≤i,j≤n} |a_ij(t)|

Note a(t) is a continuous function of t, as a quick sample sketch indicates. Then, since |z_i(t)| ≤ ||z(t)||,

(d/dt) ||z(t)||² ≤ a(t) ||z(t)||² ,  t ≥ t₀

Multiplying by the integrating factor e^{−∫_{t₀}^t a(σ)dσ} gives

(d/dt) [ e^{−∫_{t₀}^t a(σ)dσ} ||z(t)||² ] ≤ 0 ,  t ≥ t₀

and integrating from t₀ to t, using z(t₀) = 0, gives ||z(t)||² ≤ 0, that is,

z(t) = 0
Solution 3.11 The vector function x(t) satisfies the given state equation if and only if it satisfies

x(t) = x₀ + ∫_{t₀}^t A(σ)x(σ)dσ + ∫_{t₀}^t ∫_{t₀}^σ E(σ, γ)x(γ)dγ dσ + ∫_{t₀}^t B(σ)u(σ)dσ

Thus the difference z(t) between any two solutions satisfies

z(t) = ∫_{t₀}^t A(σ)z(σ)dσ + ∫_{t₀}^t ∫_{t₀}^σ E(σ, γ)z(γ)dγ dσ

Interchanging the order of integration in the double integral (Dirichlet's formula) gives

∫_{t₀}^t ∫_{t₀}^σ E(σ, γ)z(γ)dγ dσ = ∫_{t₀}^t [ ∫_σ^t E(γ, σ)dγ ] z(σ)dσ

Thus

z(t) = ∫_{t₀}^t [ A(σ) + ∫_σ^t E(γ, σ)dγ ] z(σ)dσ = ∫_{t₀}^t Â(t, σ) z(σ)dσ

By continuity, given T > 0 there exists a finite constant α such that ||Â(t, σ)|| ≤ α for t₀ ≤ σ ≤ t ≤ t₀ + T. Thus

||z(t)|| ≤ α ∫_{t₀}^t ||z(σ)|| dσ ,  t ∈ [t₀, t₀+T]

and the Bellman-Gronwall inequality gives z(t) = 0, so solutions are unique.
Consider the difference between Φ(t, τ) and a partial sum of the Peano-Baker series:

z(t) = Φ(t, τ) − [ I + ∫_τ^t A(σ₁)dσ₁ + ⋯ + ∫_τ^t A(σ₁) ∫_τ^{σ₁} A(σ₂) ⋯ ∫_τ^{σ_{k−1}} A(σ_k) dσ_k ⋯ dσ₁ ]
 = Σ_{j=k+1}^∞ ∫_τ^t A(σ₁) ∫_τ^{σ₁} A(σ₂) ⋯ ∫_τ^{σ_{j−1}} A(σ_j) dσ_j ⋯ dσ₁

For any fixed T > 0 there is a finite constant α such that ||A(t)|| ≤ α for t ∈ [−T, T], by continuity. Since term j of
the series is bounded in norm by α^j |t−τ|^j / j!,

|| Σ_{j=k+1}^∞ ∫_τ^t A(σ₁) ⋯ ∫_τ^{σ_{j−1}} A(σ_j) dσ_j ⋯ dσ₁ || ≤ Σ_{j=k+1}^∞ α^j |t−τ|^j / j! ≤ Σ_{j=k+1}^∞ (2Tα)^j / j! ,  t, τ ∈ [−T, T]

It remains to show that given ε > 0 there exists a K such that

Σ_{j=K+1}^∞ (2Tα)^j / j! < ε    (*)

Writing, for k > 2Tα,

Σ_{j=k+1}^∞ (2Tα)^j / j! = [ (2Tα)^{k+1} / (k+1)! ] Σ_{i=0}^∞ (2Tα)^i (k+1)! / (k+1+i)! ≤ [ (2Tα)^{k+1} / (k+1)! ] · 1 / (1 − 2Tα/k)

Because of the factorial in the denominator, given ε > 0 there exists a K > 2Tα such that (*) holds.
Solution 3.15 Writing the complete solution of the state equation at t_f, we need to satisfy

H₀ x₀ + H_f Φ(t_f, t₀) x₀ + H_f ∫_{t₀}^{t_f} Φ(t_f, σ) f(σ) dσ = h    (+)

Thus there exists a solution that satisfies the boundary conditions if and only if

h − H_f ∫_{t₀}^{t_f} Φ(t_f, σ) f(σ) dσ ∈ Im[ H₀ + H_f Φ(t_f, t₀) ]

There exists a unique solution that satisfies the boundary conditions if H₀ + H_f Φ(t_f, t₀) is invertible. To compute
a solution x(t) satisfying the boundary conditions:
(1) Compute Φ(t, t₀) for t ∈ [t₀, t_f]
(2) Compute H₀ + H_f Φ(t_f, t₀)
(3) Compute ∫_{t₀}^{t_f} Φ(t_f, σ) f(σ) dσ
CHAPTER 4
Solution 4.1 An easy way to compute A(t) is to use A(t) = [ (d/dt)Φ(t, 0) ] Φ(0, t). This gives

A(t) = [ 2t  1 ; 1  2t ]

This A(t) commutes with its integral, so we can write Φ(t, τ) as the matrix exponential

Φ(t, τ) = exp [ ∫_τ^t A(σ) dσ ] = exp [ t²−τ²  t−τ ; t−τ  t²−τ² ]
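A numerical check that this Φ reproduces A(t); the series-based matrix exponential below is just a convenience for small matrices.

```python
# Check of Solution 4.1: with Phi(t,0) = exp(M(t)), M(t) = [[t^2, t],[t, t^2]],
# verify that (d/dt Phi(t,0)) Phi(0,t) = A(t) = [[2t, 1],[1, 2t]].
import numpy as np

def expm_series(M, terms=60):      # exponential via truncated power series
    out, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

def Phi(t, tau):
    d, o = t**2 - tau**2, t - tau
    return expm_series(np.array([[d, o], [o, d]]))

t, h = 0.6, 1e-6
dPhi = (Phi(t + h, 0) - Phi(t - h, 0)) / (2 * h)   # central difference
Acheck = dPhi @ Phi(0, t)
assert np.allclose(Acheck, np.array([[2 * t, 1.0], [1.0, 2 * t]]), atol=1e-5)
```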
Solution 4.4 A linear state equation corresponding to the n-th order differential equation is

ẋ(t) = [ 0  1  0  ⋯  0 ; 0  0  1  ⋯  0 ; ⋮ ; 0  0  0  ⋯  1 ; −a₀(t)  −a₁(t)  −a₂(t)  ⋯  −a_{n−1}(t) ] x(t)

For the second state equation, consider

ż(t) = [ 0  0  ⋯  0  −a₀(t) ; 1  0  ⋯  0  −a₁(t) ; 0  1  ⋯  0  −a₂(t) ; ⋮ ; 0  0  ⋯  1  −a_{n−1}(t) ] z(t)

The rows of this vector equation give ż₁(t) = −a₀(t)z_n(t) and

ż_k(t) = z_{k−1}(t) − a_{k−1}(t) z_n(t) ,  k = 2, …, n

for example ż_{n−2}(t) = z_{n−3}(t) − a_{n−3}(t) z_n(t). Solving the last row for z_{n−1}(t) and substituting repeatedly
gives

z_{n−1}(t) = ż_n(t) + a_{n−1}(t)z_n(t)
z_{n−2}(t) = (d²/dt²) z_n(t) + (d/dt)[ a_{n−1}(t)z_n(t) ] + a_{n−2}(t)z_n(t)

so that

(d³/dt³) z_n(t) = ż_{n−2}(t) − (d/dt)[ a_{n−2}(t)z_n(t) ] − (d²/dt²)[ a_{n−1}(t)z_n(t) ]
 = z_{n−3}(t) − a_{n−3}(t)z_n(t) − (d/dt)[ a_{n−2}(t)z_n(t) ] − (d²/dt²)[ a_{n−1}(t)z_n(t) ]

Continuing gives the n-th order differential equation

(dⁿ/dtⁿ) z_n(t) = −(d^{n−1}/dt^{n−1})[ a_{n−1}(t)z_n(t) ] − (d^{n−2}/dt^{n−2})[ a_{n−2}(t)z_n(t) ] − ⋯ − (d/dt)[ a₁(t)z_n(t) ] − a₀(t)z_n(t)
Solution 4.6 For the first matrix differential equation, write the transpose of the equation as (transpose and
differentiation commute)

(d/dt) X^T(t) = A^T(t) X^T(t) ,  X^T(t₀) = X₀^T

so that X^T(t) = Φ_{A^T}(t, t₀) X₀^T, that is, X(t) = X₀ Φ_{A^T}^T(t, t₀). For the second matrix differential equation,

X(t) = Φ₁(t, t₀) X₀ Φ₂^T(t, t₀) + ∫_{t₀}^t Φ₁(t, σ) F(σ) Φ₂^T(t, σ) dσ

Or, one can generate this expression by using the obvious integrating factors on the left and right sides of the
differential equation. (To show this is the unique solution, show that the difference Z(t) between any two solutions
satisfies Ż(t) = A₁(t)Z(t) + Z(t)A₂^T(t), with Z(t₀) = 0. Integrate both sides and apply the Bellman-Gronwall
inequality to show Z(t) is identically zero.)
Solution 4.9 Clearly A(t) commutes with its integral. Thus we compute

exp ( [ 0  1 ; −1  0 ] γ )

and then replace γ by ∫₀^t a(σ) dσ. From the power series for the exponential, using [ 0 1 ; −1 0 ]² = −I,

exp ( [ 0  1 ; −1  0 ] γ ) = Σ_{k=0}^∞ (1/k!) [ 0  1 ; −1  0 ]^k γ^k
 = Σ_{k=0}^∞ [ (−1)^k / (2k)! ] γ^{2k} I + Σ_{k=0}^∞ [ (−1)^k / (2k+1)! ] γ^{2k+1} [ 0  1 ; −1  0 ]
 = [ cos γ  0 ; 0  cos γ ] + [ 0  sin γ ; −sin γ  0 ]
 = [ cos γ  sin γ ; −sin γ  cos γ ]
Solution 4.10 The candidate Φ(t, 0) = e^{A₁t} e^{A₂t} satisfies Φ(0, 0) = e^{A₁·0} e^{A₂·0} = I. Then

(d/dt) [ e^{A₁t} e^{A₂t} ] = A₁ e^{A₁t} e^{A₂t} + e^{A₁t} A₂ e^{A₂t} = e^{A₁t} (A₁ + A₂) e^{A₂t}
 = [ e^{A₁t} (A₁ + A₂) e^{−A₁t} ] e^{A₁t} e^{A₂t}

This implies A(t) = e^{A₁t} (A₁ + A₂) e^{−A₁t}. Since

(d/dt) [ e^{A₁t} e^{A₂t} ] = A(t) e^{A₁t} e^{A₂t} ,  e^{A₁·0} e^{A₂·0} = I

we have that Φ(t, 0) = e^{A₁t} e^{A₂t} for this A(t). (The decomposition Φ(t, 0) = P(t) e^{Rt} P^{−1}(0) follows with
P(t) = e^{A₁t} and R = A₂.)
Φ₁₂(t, τ) = ∫_τ^t Φ₁₁(t, σ) A₁₂(σ) Φ₂₂(σ, τ) dσ

The defining condition on P(t), which involves P^{−1}(t) and Ṗ(t), can be handled as follows: multiplying on the
left by P(t), the result can be written as a dimension-4 linear state equation. Choosing the initial condition
corresponding to P(0) = I, some clever guessing gives

P(t) = [ 1  0 ; t  1 ]
Solution 4.23 Using the formula for the derivative of an inverse matrix given in Exercise 1.17,

(∂/∂t) Φ_A(τ, t) = (∂/∂t) Φ_A^{−1}(t, τ) = −Φ_A^{−1}(t, τ) [ (∂/∂t) Φ_A(t, τ) ] Φ_A^{−1}(t, τ)
 = −Φ_A^{−1}(t, τ) A(t) Φ_A(t, τ) Φ_A^{−1}(t, τ)
 = −Φ_A^{−1}(t, τ) A(t) = −Φ_A(τ, t) A(t)

Transposing gives

(∂/∂t) Φ_A^T(τ, t) = −A^T(t) Φ_A^T(τ, t)

Since Φ(τ, τ) = I, we have F(t) = −A^T(t).
Or we can use the result of Exercise 3.2 to compute:

(∂/∂t) Φ_A(τ, t) = −Φ_A(τ, t) A(t)

This implies

(∂/∂t) Φ_A^T(τ, t) = −A^T(t) Φ_A^T(τ, t)

Since Φ(τ, τ) = I, we have F(t) = −A^T(t).
Φ(σ+t, σ) = I + ∫_σ^{σ+t} A(γ) dγ + Σ_{k=2}^∞ ∫_σ^{σ+t} A(γ₁) ∫_σ^{γ₁} A(γ₂) ⋯ ∫_σ^{γ_{k−1}} A(γ_k) dγ_k ⋯ dγ₁

and

e^{Ā_t(σ)t} = I + Ā_t(σ)t + Σ_{k=2}^∞ (1/k!) Ā_t^k(σ) t^k

Then, since Ā_t(σ)t = ∫_σ^{σ+t} A(γ) dγ,

R(t, σ) = Φ(σ+t, σ) − e^{Ā_t(σ)t}
 = Σ_{k=2}^∞ ∫_σ^{σ+t} A(γ₁) ⋯ ∫_σ^{γ_{k−1}} A(γ_k) dγ_k ⋯ dγ₁ − Σ_{k=2}^∞ (1/k!) Ā_t^k(σ) t^k

With ||A(γ)|| ≤ α, the k-th term of either series is bounded in norm by α^k t^k / k!, so

||R(t, σ)|| ≤ 2 Σ_{k=2}^∞ α^k t^k / k! = 2α²t² Σ_{k=2}^∞ α^{k−2} t^{k−2} / k!

Using

1/k! ≤ (1/2) · 1/(k−2)! ,  k ≥ 2

gives

||R(t, σ)|| ≤ α²t² Σ_{k=2}^∞ α^{k−2} t^{k−2} / (k−2)! = α²t² e^{αt}
CHAPTER 5
Solution 5.3 Using the series definition, which involves talent in series recognition,

A^{2k+1} = [ 0  1 ; 1  0 ] ,  A^{2k} = [ 1  0 ; 0  1 ] ,  k = 0, 1, …

gives

e^{At} = I + [ 0  t ; t  0 ] + (1/2!) [ t²  0 ; 0  t² ] + (1/3!) [ 0  t³ ; t³  0 ] + ⋯
 = [ (e^t+e^{−t})/2  (e^t−e^{−t})/2 ; (e^t−e^{−t})/2  (e^t+e^{−t})/2 ] = [ cosh t  sinh t ; sinh t  cosh t ]

Using the Laplace transform approach,

(sI − A)^{−1} = [ s/(s²−1)  1/(s²−1) ; 1/(s²−1)  s/(s²−1) ]

and the inverse transform again gives the same e^{At}. Using diagonalization, with eigenvalues 1, −1,

e^{At} = P [ e^t  0 ; 0  e^{−t} ] P^{−1} = [ cosh t  sinh t ; sinh t  cosh t ]

For

A(t) = [ t  1 ; 1  t ]

A(t) commutes with its integral, so

Φ(t, 0) = e^{∫₀^t A(σ)dσ} = exp [ t²/2  t ; t  t²/2 ]

And since

[ t²/2  0 ; 0  t²/2 ]  and  [ 0  t ; t  0 ]

commute,

Φ(t, 0) = exp ( [ 1  0 ; 0  1 ] t²/2 ) · exp ( [ 0  1 ; 1  0 ] t )
 = [ e^{t²/2}  0 ; 0  e^{t²/2} ] [ cosh t  sinh t ; sinh t  cosh t ]
 = [ e^{t²/2} cosh t  e^{t²/2} sinh t ; e^{t²/2} sinh t  e^{t²/2} cosh t ]
To verify

∫₀^t e^{Aσ} A dσ = e^{At} − I

note that the two sides agree at t = 0, and the derivatives of the two sides with respect to t are identical.
If A is invertible and all its eigenvalues have negative real parts, then lim_{t→∞} e^{At} = 0. This gives

∫₀^∞ e^{Aσ} A dσ = −I

that is,

A^{−1} = −∫₀^∞ e^{Aσ} dσ
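For a sample Hurwitz matrix the identity can be confirmed by truncated quadrature; the matrix A below is an arbitrary stable choice with a known closed-form exponential.

```python
# Check of A^{-1} = -int_0^inf e^{As} ds for a stable A.
import numpy as np

A = np.array([[-1.0, 1.0], [0.0, -2.0]])

def expmA(s):   # closed-form e^{As} for this particular triangular A
    return np.array([[np.exp(-s), np.exp(-s) - np.exp(-2 * s)],
                     [0.0, np.exp(-2 * s)]])

s = np.linspace(0.0, 40.0, 20001)     # tail beyond 40 is negligible
integral = np.zeros((2, 2))
for i in range(len(s) - 1):
    h = s[i + 1] - s[i]
    integral += (expmA(s[i]) + expmA(s[i + 1])) / 2 * h
assert np.allclose(-integral, np.linalg.inv(A), atol=1e-5)
```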
Solution 5.9 Evaluating the given expression at t = 0 gives x(0) = 0. Using the Leibniz rule to differentiate
the expression

x(t) = ∫₀^t e^{A(t−σ)} e^{∫_σ^t D u(γ)dγ} b u(σ) dσ

gives

ẋ(t) = b u(t) + ∫₀^t (∂/∂t) [ e^{A(t−σ)} e^{∫_σ^t D u(γ)dγ} ] b u(σ) dσ

Using the product rule and differentiating the power series for e^{∫_σ^t D u(γ)dγ} gives

ẋ(t) = b u(t) + ∫₀^t [ A e^{A(t−σ)} e^{∫_σ^t D u(γ)dγ} b u(σ) + e^{A(t−σ)} D u(t) e^{∫_σ^t D u(γ)dγ} b u(σ) ] dσ

that is, using the assumption that A and D commute,

ẋ(t) = b u(t) + A ∫₀^t e^{A(t−σ)} e^{∫_σ^t D u(γ)dγ} b u(σ) dσ + D u(t) ∫₀^t e^{A(t−σ)} e^{∫_σ^t D u(γ)dγ} b u(σ) dσ
 = [ A + D u(t) ] x(t) + b u(t)
Solution 5.12 We will show how to define γ₀(t), …, γ_{n−1}(t) such that

Σ_{k=0}^{n−1} γ̇_k(t) P_k = Σ_{k=0}^{n−1} γ_k(t) A P_k ,  Σ_{k=0}^{n−1} γ_k(0) P_k = I    (*)

which then gives the desired expression by Property 5.1. From the definitions,

P₁ = AP₀ − λ₁I ,  P₂ = AP₁ − λ₂P₁ ,  … ,  P_{n−1} = AP_{n−2} − λ_{n−1}P_{n−2}

Also P_n = (A − λ_nI)P_{n−1} = 0 by the Cayley-Hamilton theorem, so AP_{n−1} = λ_nP_{n−1}. Now we equate coefficients of
like P_k's in (*), rewritten as

Σ_{k=0}^{n−1} γ̇_k(t) P_k = Σ_{k=0}^{n−1} γ_k(t) [ P_{k+1} + λ_{k+1} P_k ]

P₀:  γ̇₀(t) = λ₁ γ₀(t)
P₁:  γ̇₁(t) = γ₀(t) + λ₂ γ₁(t)
⋮
P_{n−1}:  γ̇_{n−1}(t) = γ_{n−2}(t) + λ_n γ_{n−1}(t)

that is,

[ γ̇₀(t) ; γ̇₁(t) ; ⋮ ; γ̇_{n−1}(t) ] = [ λ₁  0  ⋯  0  0 ; 1  λ₂  ⋯  0  0 ; ⋮ ; 0  0  ⋯  λ_{n−1}  0 ; 0  0  ⋯  1  λ_n ] [ γ₀(t) ; γ₁(t) ; ⋮ ; γ_{n−1}(t) ]

With the initial condition provided by γ₀(0) = 1, γ_k(0) = 0, k = 1, …, n−1, the analytic solution of this state
equation provides a solution for (*). (The resulting expression for e^{At} is sometimes called Putzer's formula.)
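A runnable sketch of Putzer's formula for a specific 2×2 matrix; the eigenvalue ordering is arbitrary, and the coefficient ODE is integrated by fixed-step RK4 rather than solved analytically.

```python
# Putzer's formula (Solution 5.12): e^{At} = sum_k g_k(t) P_k, with
# P_0 = I, P_k = (A - l_k I) P_{k-1}, and g' = L g for bidiagonal L.
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # eigenvalues -1 and -2
lam = np.sort(np.linalg.eigvals(A).real)    # real here; any order works
n = A.shape[0]

P = [np.eye(n)]
for k in range(1, n):
    P.append((A - lam[k - 1] * np.eye(n)) @ P[-1])

L = np.diag(lam) + np.diag(np.ones(n - 1), -1)   # bidiagonal coefficient ODE
g = np.zeros(n)
g[0] = 1.0
T, steps = 1.0, 10000
h = T / steps
for _ in range(steps):                            # classical RK4
    k1 = L @ g
    k2 = L @ (g + h / 2 * k1)
    k3 = L @ (g + h / 2 * k2)
    k4 = L @ (g + h * k3)
    g = g + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

expAT = sum(g[k] * P[k] for k in range(n))

w, V = np.linalg.eig(A)                           # reference exponential
ref = (V @ np.diag(np.exp(w * T)) @ np.linalg.inv(V)).real
assert np.allclose(expAT, ref, atol=1e-6)
```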
Φ(t, t₀) = P^{−1}(t) e^{R(t−t₀)} P(t₀) = Q(t, t₀) e^{S(t−t₀)}

Since

det Φ(T, 0) = exp [ ∫₀^T tr A(σ) dσ ]

and the integral in the exponent is positive, the product of eigenvalues of Φ(T, 0) is greater than unity, which
implies that at least one eigenvalue of Φ(T, 0) has magnitude greater than unity. Thus by the argument following
Example 5.12 there exist unbounded solutions.
Solution 5.22 The solution will be T-periodic for initial state x₀ if and only if x₀ satisfies (see text equation
(32))

[ Φ(t₀, t₀+T) − I ] x₀ = ∫_{t₀}^{t₀+T} Φ(t₀, σ) f(σ) dσ

Such an x₀ exists if and only if

z₀^T ∫_{t₀}^{t₀+T} Φ(t₀, σ) f(σ) dσ = 0    (*)

for every z₀ satisfying

[ Φ^{−1}(t₀+T, t₀) ]^T z₀ = z₀    (**)

Let

z(t) = [ Φ^{−1}(t, t₀) ]^T z₀

Then by Lemma 5.14, (**) is precisely the condition that z(t) be T-periodic. Thus writing (*) in the form

0 = ∫_{t₀}^{t₀+T} z₀^T Φ(t₀, σ) f(σ) dσ = ∫_{t₀}^{t₀+T} z^T(σ) f(σ) dσ

shows that a T-periodic solution exists if and only if this orthogonality condition holds for every T-periodic
solution z(t) of the adjoint equation.
Here the transition matrix for the adjoint equation is

[ cos t  sin t ; −sin t  cos t ]

Therefore all solutions of the adjoint equation are periodic, with period of the form k·2π, where k is a positive
integer. The forcing term has period T = 2π/ω, where we assume ω > 0. The rest of the analysis breaks down
into 3 cases.
Case 1: If ω ≠ 1, 1/2, 1/3, … then the adjoint equation has no nontrivial T-periodic solution, so the condition
(Exercise 5.22)

∫₀^T z^T(σ) f(σ) dσ = 0    (+)

holds, and there exist periodic solutions. For the remaining cases, computing

∫₀^T z^T(σ) f(σ) dσ = z₀^T ∫₀^T e^{−Aσ} f(σ) dσ

shows that except when ω = 1 the condition (+) will hold, and there exist periodic solutions.
In summary, there exist periodic solutions for all ω > 0 except ω = 1.
CHAPTER 6
Solution 6.1 If the state equation is uniformly stable, then there exists a positive γ such that for any t₀ and x₀
the corresponding solution satisfies

||x(t)|| ≤ γ ||x₀|| ,  t ≥ t₀

Then given ε > 0, taking δ = ε/γ shows that ||x₀|| ≤ δ implies ||x(t)|| ≤ ε, t ≥ t₀.
Conversely, given a positive ε suppose a positive δ is such that, regardless of t₀, ||x₀|| ≤ δ implies ||x(t)|| ≤ ε,
t ≥ t₀. For any t_a ≥ t₀ let x_a be a unity-norm vector such that

||Φ(t_a, t₀) x_a|| = ||Φ(t_a, t₀)||

Then the solution with x₀ = δ x_a gives ||Φ(t_a, t₀) δ x_a|| ≤ ε, so

||Φ(t_a, t₀)|| ≤ ε/δ

Since t_a ≥ t₀ and t₀ are arbitrary, ||Φ(t, t₀)|| ≤ ε/δ for all t ≥ t₀, and the state equation is uniformly stable with
γ = ε/δ.
Solution 6.4 Using the fact that A(t) commutes with its integral,

Φ(t, τ) = e^{∫_τ^t A(σ)dσ} = I + ∫_τ^t A(σ)dσ + (1/2!) ( ∫_τ^t A(σ)dσ )² + ⋯

For any fixed τ, the entry φ₁₁(t, τ) clearly grows without bound as t → ∞, and thus the state equation is not
uniformly stable.
From the Peano-Baker series,

Φ(t, τ) = I + ∫_τ^t A(σ) dσ + ∫_τ^t A(σ₁) ∫_τ^{σ₁} A(σ₂) dσ₂ dσ₁ + ⋯

so that, with ||A(t)|| ≤ α,

||Φ(t, τ)|| ≤ 1 + ∫_τ^t α dσ + ∫_τ^t α ∫_τ^{σ₁} α dσ₂ dσ₁ + ⋯ = 1 + α|t−τ| + α²|t−τ|²/2! + ⋯

For |t − τ| ≤ δ,

||Φ(t, τ)|| ≤ 1 + αδ + α²δ²/2! + ⋯ = e^{αδ}
Since

max_{t≥0} t e^{−λt} = 1/(λe)

(the maximum occurs at t = 1/λ), we can write

t e^{−λt} = ( t e^{−(λ/2)t} ) e^{−(λ/2)t} ≤ (2/(λe)) e^{−(λ/2)t} ,  t ≥ 0

Similarly,

t² e^{−λt} = ( t e^{−(λ/2)t} ) ( t e^{−(λ/2)t} ) ≤ (2/(λe)) t e^{−(λ/2)t} ≤ (2/(λe)) (4/(λe)) e^{−(λ/4)t}

Continuing in this fashion,

t^j e^{−λt} ≤ [ 2^{j+(j−1)+⋯+1} / (λe)^j ] e^{−(λ/2^j)t} ,  t ≥ 0

Therefore, for Re[λ] > 0,

∫₀^∞ t^j e^{−Re[λ]t} dt ≤ [ 2^{j+(j−1)+⋯+1} / (Re[λ]e)^j ] ∫₀^∞ e^{−(Re[λ]/2^j)t} dt = 2^{2j+(j−1)+⋯+1} / ( e^j Re[λ]^{j+1} )
Solution 6.12 By Theorem 6.4 uniform stability is equivalent to existence of a finite constant γ such that ||e^{At}|| ≤ γ for
all t ≥ 0. Writing

e^{At} = Σ_{k=1}^{m} Σ_{j=1}^{σ_k} W_{kj} [ t^{j−1} / (j−1)! ] e^{λ_k t}

consider the condition

Re[λ_k] ≤ 0 , and Re[λ_k] = 0 implies σ_k = 1 ,  k = 1, …, m    (*)

Since t^{j−1} e^{λ_k t} is bounded for t ≥ 0 if Re[λ_k] < 0 (for any j), and |e^{λ_k t}| = 1 if Re[λ_k] = 0, it is clear that (*) implies
||e^{At}|| is bounded for t ≥ 0. Thus (*) is a sufficient condition for uniform stability.
A necessary condition for uniform stability is

Re[λ_k] ≤ 0 ,  k = 1, …, m

For if Re[λ_k] > 0 for some k, the proof of Theorem 6.2 shows that ||e^{At}|| grows without bound as t → ∞. The gap
between this necessary condition and the sufficient condition is illustrated by the two cases

A = [ 0  0 ; 0  0 ] ,  A = [ 0  1 ; 0  0 ]

Both satisfy the necessary condition, neither satisfies the sufficient condition, and the first case is uniformly stable
while the second case is not (unbounded solutions exist, as shown by easy computation of the transition matrix).
(It can be shown that a necessary and sufficient condition for uniform stability is that each eigenvalue of A has
nonpositive real part and any eigenvalue of A with zero real part has algebraic multiplicity equal to its geometric
multiplicity.)
Suppose

||Φ(t, t₀)|| ≤ γ e^{−λ(t−t₀)}

for all t, t₀ such that t ≥ t₀. Then given any x₀, t₀, the corresponding solution at t ≥ t₀ satisfies

||x(t)|| = ||Φ(t, t₀) x₀|| ≤ ||Φ(t, t₀)|| ||x₀|| ≤ γ e^{−λ(t−t₀)} ||x₀|| ,  t ≥ t₀
Solution 6.18 The variable change z(t) = P^{−1}(t) x(t) yields ż(t) = 0 if and only if

P^{−1}(t) A(t) P(t) − P^{−1}(t) Ṗ(t) = 0

for all t. This clearly is equivalent to Ṗ(t) = A(t)P(t), which is equivalent to Φ_A(t, τ) = P(t)P^{−1}(τ). Now, if P(t)
is a Lyapunov transformation, that is, ||P(t)|| ≤ ρ and |det P(t)| ≥ η > 0 for all t, then

||Φ_A(t, τ)|| ≤ ||P(t)|| ||P^{−1}(τ)|| ≤ ρ · ||P(τ)||^{n−1} / |det P(τ)| ≤ ρⁿ/η

for all t and τ.
Conversely, suppose ||Φ_A(t, τ)|| ≤ γ for all t, τ, and let P(t) = Φ_A(t, 0). Then Ṗ(t) = A(t)P(t), ||P(t)|| ≤ γ, and
||P^{−1}(t)|| = ||Φ_A(0, t)|| ≤ γ, so that

|det P(t)| = 1 / |det P^{−1}(t)| ≥ 1 / ||P^{−1}(t)||ⁿ ≥ 1/γⁿ > 0

Thus P(t) is a Lyapunov transformation.
CHAPTER 7
Solution 7.3 Let Ā = FA, and take Q = F^{−1}, which is positive definite since F is positive definite. Then since F
is symmetric,

Ā^T Q + Q Ā = A^T F F^{−1} + F^{−1} F A = A^T + A < 0

This gives exponential stability by Theorem 7.4.
Solution 7.5 By our default assumptions, a(t) is continuous. Since Q is constant, symmetric, and positive
definite, the first condition of Theorem 7.2 holds. Checking the second condition,

A^T(t) Q + Q A(t) = [ −a(t)  a(t)/2 ; a(t)/2  −1 ]

gives the requirements

a(t) ≥ 0 ,  4a(t) ≥ a²(t)

Thus the state equation is uniformly stable if a(t) is a continuous function satisfying 0 ≤ a(t) ≤ 4 for all t.
Next, with the choice

Q(t) = [ a(t)  0 ; 0  1 ] ,  A^T(t)Q(t) + Q(t)A(t) + Q̇(t) = [ ȧ(t)  0 ; 0  −4 ]

we need to assume that a(t) is continuously differentiable and η ≤ a(t) ≤ ρ for some positive constants η and ρ so
that the first condition of Theorem 7.4 is satisfied. For the second condition we need to assume ȧ(t) ≤ −ν, for
some positive constant ν. Unfortunately this implies, taking any t₀,

a(t) = a(t₀) + ∫_{t₀}^t ȧ(σ) dσ ≤ a(t₀) − ν(t − t₀) ,  t ≥ t₀

and for sufficiently large t the positivity condition on a(t) will be violated. Thus there is no a(t) for which the
given Q(t) shows uniform exponential stability of the given state equation.
Consider the choice

Q(t) = [ 2a(t)+1  1 ; 1  (a(t)+1)/a(t) ]

If a(t) is bounded above and below by positive constants, then Q(t) is bounded above, and since

det Q(t) = ( 2a²(t) + 2a(t) + 1 ) / a(t) > 0

Q(t) also satisfies Q(t) ≥ ηI for some positive η. Next consider

A^T(t)Q(t) + Q(t)A(t) + Q̇(t)

Evaluating the entries in terms of a(t) and ȧ(t) gives that for uniform exponential stability we also need existence
of a small, positive constant ν bounding ȧ(t) pointwise in terms of a(t). For example, a(t) = 1 satisfies these
conditions.
Solution 7.11 Suppose that for every symmetric, positive-definite M there exists a unique, symmetric,
positive-definite Q such that

A^T Q + Q A + 2μQ = −M    (*)

that is,

(A + μI)^T Q + Q (A + μI) = −M    (**)

Then by the argument above Theorem 7.11 we conclude that all eigenvalues of A + μI have negative real parts.
That is, if

0 = det [ λI − (A + μI) ] = det [ (λ − μ)I − A ]

then Re[λ] < 0. Since each eigenvalue η of A satisfies η = λ − μ for such a λ, this gives Re[η] < −μ, that is, all
eigenvalues of A have real parts strictly less than −μ.
Now suppose all eigenvalues of A have real parts strictly less than −μ. Then, as above, eigenvalues of
A + μI have negative real parts. Then by Theorem 7.11, given symmetric, positive-definite M there exists a
unique, symmetric, positive-definite Q such that (**) holds, which implies (*) holds.
With

Q = ∫₀^∞ e^{A^Tσ} M e^{Aσ} dσ

we have, for any x_a, since the integrand is positive semidefinite,

∫₀^∞ x_a^T e^{A^T(t+σ)} M e^{A(t+σ)} x_a dσ ≤ ∫₀^∞ x_a^T e^{A^Tσ} M e^{Aσ} x_a dσ = x_a^T Q x_a

Also

∫₀^∞ x_a^T e^{A^T(t+σ)} M e^{A(t+σ)} x_a dσ = x_a^T e^{A^Tt} Q e^{At} x_a ≥ λ_min(Q) ||e^{At} x_a||² ≥ ||e^{At} x_a||² / ||Q^{−1}||

Therefore, maximizing over unity-norm x_a,

||e^{At}||² ≤ ||Q|| ||Q^{−1}|| ,  t ≥ 0
Solution 7.17 Let F = A + (λ − ε)I. Then ||F|| ≤ ||A|| + λ, all eigenvalues of F have real parts less than −ε,
and

e^{Ft} = e^{At} e^{(λ−ε)t}

Thus

e^{At} = e^{−(λ−ε)t} e^{Ft}    (*)

With

Q = ∫₀^∞ e^{F^Tσ} e^{Fσ} dσ

we have, using Exercise 1.9,

(d/dσ) [ x^T e^{F^Tσ} e^{Fσ} x ] = x^T e^{F^Tσ} [ F^T + F ] e^{Fσ} x ≥ −2(||A|| + λ) x^T e^{F^Tσ} e^{Fσ} x

Since the eigenvalues of F have negative real parts, e^{Fσ} → 0 as σ → ∞, and integrating gives

x^T e^{F^Tt} e^{Ft} x = −∫_t^∞ (d/dσ) [ x^T e^{F^Tσ} e^{Fσ} x ] dσ ≤ 2(||A|| + λ) ∫_t^∞ x^T e^{F^Tσ} e^{Fσ} x dσ ≤ 2(||A|| + λ) x^T Q x

Therefore

x^T e^{F^Tt} e^{Ft} x ≤ 2(||A|| + λ) x^T Q x ,  t ≥ 0

which gives

||e^{Ft}|| ≤ √( 2(||A|| + λ) ||Q|| ) ,  t ≥ 0
Solution 7.19 To show uniform exponential stability, write the 1,2-entry of A(t) as a(t), and let
Q(t) = q(t)I, where

q(t) = { 2 + e^{2t} ,  t ≤ −1/2 ;  q̃(t) ,  −1/2 < t < 1/2 ;  3 ,  t ≥ 1/2 }

Here q̃(t) is a continuously-differentiable patch satisfying 2 ≤ q̃(t) ≤ 3 for −1/2 < t < 1/2, and another
condition to be specified below. Then we have 2I ≤ Q(t) ≤ 3I for all t. Next consider

A^T(t)Q(t) + Q(t)A(t) + Q̇(t) = [ −2q(t)+q̇(t)  a(t)q(t) ; a(t)q(t)  −6q(t)+q̇(t) ] ≤ −νI

for a small positive ν, where the patch condition on q̇(t) enters. For the given A(t) a computation gives

Φ(t, 0) = [ e^{−t}  0 ; (e^{−t}−e^{−3t})/4  e^{−3t} ] ,  t ≥ 0
Solution 7.20
But since the bounding function is strictly increasing, this gives ||x(t)|| ≤ ε, t ≥ t₀, and thus the state equation is uniformly stable.
CHAPTER 8
Solution 8.6 Viewing F(t)x(t) as a forcing term, for any t₀, x₀, and t ≥ t₀ we can write

x(t) = e^{A(t−t₀)} x₀ + ∫_{t₀}^t e^{A(t−σ)} F(σ) x(σ) dσ

Thus, with ||e^{At}|| ≤ γ e^{−λt},

||x(t)|| ≤ γ e^{−λ(t−t₀)} ||x₀|| + ∫_{t₀}^t γ e^{−λ(t−σ)} ||F(σ)|| ||x(σ)|| dσ

that is,

e^{λt} ||x(t)|| ≤ γ e^{λt₀} ||x₀|| + ∫_{t₀}^t γ ||F(σ)|| e^{λσ} ||x(σ)|| dσ

The Bellman-Gronwall inequality then gives

e^{λt} ||x(t)|| ≤ γ e^{λt₀} ||x₀|| e^{γ ∫_{t₀}^t ||F(σ)|| dσ}

Therefore, with ∫_{t₀}^∞ ||F(σ)|| dσ ≤ β,

||x(t)|| ≤ γ e^{γβ} e^{−λ(t−t₀)} ||x₀||
Solution 8.8 We can follow the proof of Theorem 8.7 (first and last portions) to show that the solution

Q(t) = ∫₀^∞ e^{A^T(t)σ} e^{A(t)σ} dσ

of

A^T(t)Q(t) + Q(t)A(t) = −I

is continuously differentiable and satisfies, for all t,

ηI ≤ Q(t) ≤ ρI

for positive constants η and ρ. Then

F(t) = A(t) − (1/2) Q^{−1}(t) Q̇(t)

which implies

||F(t) − A(t)|| ≤ (1/2) ||Q^{−1}(t)|| ||Q̇(t)|| ≤ β/(2η)

where β bounds ||Q̇(t)||.
and the result of Exercise 8.8 implies that there exist positive constants γ and λ such that, for any t₀ and t ≥ t₀,

||x(t)|| ≤ γ e^{−λ(t−t₀)} ||x₀|| + ∫_{t₀}^t γ e^{−λ(t−σ)} δ ||x(σ)|| dσ

where δ is the bound on (1/2)||Q^{−1}(σ)Q̇(σ)||. Therefore

e^{λt} ||x(t)|| ≤ γ e^{λt₀} ||x₀|| + ∫_{t₀}^t γ δ e^{λσ} ||x(σ)|| dσ

and the Bellman-Gronwall inequality gives

||x(t)|| ≤ γ e^{−(λ−γδ)(t−t₀)} ||x₀||

Now, writing the left side as ||Φ_A(t, t₀)x₀|| and for any t₀ and t ≥ t₀ choosing the appropriate unity-norm x₀ gives

||Φ_A(t, t₀)|| ≤ γ e^{−(λ−γδ)(t−t₀)}

For δ sufficiently small this gives the desired uniform exponential stability. (Note that Theorem 8.6 also can be
used to conclude that uniform exponential stability of ẋ(t) = F(t)x(t) implies uniform exponential stability of

ẋ(t) = [ F(t) + (1/2)Q^{−1}(t)Q̇(t) ] x(t) = A(t) x(t)

for δ sufficiently small.)
Solution 8.10 With F(t) = A(t) + (nu/2)I we have F'(t) = A'(t), and the eigenvalues of F(t) satisfy Re lambda[F(t)] <= -nu/2. The unique solution of F^T(t)Q(t) + Q(t)F(t) = -I is

    Q(t) = Int_{0}^{inf} e^{F^T(t)s} e^{F(t)s} ds

As in the proof of Theorem 8.7, there is a constant rho such that ||Q(t)|| <= rho for all t. Now, for any n x 1 vector z,

    (d/ds) [ z^T e^{F^T(t)s} e^{F(t)s} z ] = z^T e^{F^T(t)s} [ F^T(t) + F(t) ] e^{F(t)s} z
                                           >= -(2 mu + nu) z^T e^{F^T(t)s} e^{F(t)s} z

where mu bounds ||A(t)||. Thus for any tau >= 0,

    z^T e^{F^T(t)tau} e^{F(t)tau} z = -Int_{tau}^{inf} (d/ds) [ z^T e^{F^T(t)s} e^{F(t)s} z ] ds
     <= (2 mu + nu) Int_{tau}^{inf} z^T e^{F^T(t)s} e^{F(t)s} z ds
     <= (2 mu + nu) Int_{0}^{inf} z^T e^{F^T(t)s} e^{F(t)s} z ds = (2 mu + nu) z^T Q(t) z

Thus

    e^{F^T(t)tau} e^{F(t)tau} <= (2 mu + nu) Q(t), tau >= 0

and using

    e^{F(t)tau} = e^{A(t)tau} e^{(nu/2)tau}, tau >= 0

gives

    ||e^{A(t)tau}|| <= sqrt((2 mu + nu) rho) e^{-(nu/2)tau}, tau >= 0
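The matrix inequality above can be verified numerically for a frozen F; the example matrix and constants below are illustrative assumptions, with Q obtained from the Lyapunov equation it satisfies:

```python
import numpy as np
from scipy.linalg import expm, solve_continuous_lyapunov

# With F = A + (v/2)I stable, Q = int_0^inf e^{F^T s} e^{F s} ds solves
# F^T Q + Q F = -I, and e^{F^T t} e^{F t} <= (2m + v) Q for t >= 0, m >= ||A||.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
v = 0.5
F = A + (v / 2) * np.eye(2)                     # eigenvalues -0.75, -1.75
m = np.linalg.norm(A, 2)
Q = solve_continuous_lyapunov(F.T, -np.eye(2))  # F^T Q + Q F = -I

ok = True
for t in np.linspace(0.0, 5.0, 26):
    M = expm(F.T * t) @ expm(F * t)
    gap = (2 * m + v) * Q - M                   # should be positive semidefinite
    ok = ok and (np.linalg.eigvalsh(gap).min() >= -1e-9)
print(ok)
```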
Solution 8.11 Write (the chain rule is valid since u(t) is a scalar)

    q'(t) = [ A^{-1}(u(t)) (dA/du)(u(t)) A^{-1}(u(t)) b(u(t)) - A^{-1}(u(t)) (db/du)(u(t)) ] u'(t)
          = -B(t) u'(t)

Then

    x'(t) = A(u(t)) x(t) + b(u(t))
          = A(u(t)) [ x(t) - q(t) ] + A(u(t)) q(t) + b(u(t))
          = A(u(t)) [ x(t) - q(t) ]

gives

    (d/dt) [ x(t) - q(t) ] = A(u(t)) [ x(t) - q(t) ] + B(t) u'(t)   (*)

Since

    (d/dt) A(u(t)) = (dA/du)(u(t)) u'(t)

we can conclude from Theorem 8.7 that for mu sufficiently small, and u(t) such that |u'(t)| <= mu for all t, there exist positive constants gamma and lambda (depending on u(t)) such that

    ||Phi(t, tau)|| <= gamma e^{-lambda(t - tau)}, t >= tau >= 0

But the smoothness assumptions on A(.) and b(.) and the bounds on u(t) also give that there exists a positive constant beta such that ||B(t)|| <= beta for t >= 0. Thus the solution formula for (*) gives

    ||x(t) - q(t)|| <= gamma ||x(0) - q(0)|| + gamma beta mu / lambda, t >= 0
CHAPTER 9
Writing

    [ B (A-aI)B (A-aI)^2 B ... ] = [ B AB-aB A^2B-2aAB+a^2B ... ]

    = [ B AB A^2B ... ]  [ I_m   -aI_m    a^2 I_m  ... ]
                         [ 0      I_m    -2aI_m   ... ]
                         [ 0      0       I_m     ... ]
                         [ .      .       .        .  ]
Clearly the two controllability matrices have the same rank. (The solution is even easier using rank tests from
Chapter 13.)
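The rank equality is easy to confirm numerically; the matrices and shift values below are illustrative assumptions:

```python
import numpy as np

# Rank check: the controllability matrices of (A, B) and (A - aI, B)
# have the same rank for any scalar a.
def ctrb(A, B):
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

A = np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 1.0], [1.0, -2.0, 3.0]])
B = np.array([[0.0], [0.0], [1.0]])
for a in (0.0, 1.5, -2.7):
    r1 = np.linalg.matrix_rank(ctrb(A, B))
    r2 = np.linalg.matrix_rank(ctrb(A - a * np.eye(3), B))
    assert r1 == r2 == 3
print("ranks agree")
```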
With

    Q = Int_{0}^{inf} e^{At} BB^T e^{A^T t} dt

we have

    AQ + QA^T = Int_{0}^{inf} [ A e^{At} BB^T e^{A^T t} + e^{At} BB^T e^{A^T t} A^T ] dt
              = Int_{0}^{inf} (d/dt) [ e^{At} BB^T e^{A^T t} ] dt = -BB^T

Also it is clear that Q is positive semidefinite. If it is not positive definite, then for some nonzero, n x 1 x,

    0 = x^T Q x = Int_{0}^{inf} x^T e^{At} BB^T e^{A^T t} x dt = Int_{0}^{inf} ||x^T e^{At} B||^2 dt

which gives x^T e^{At} B = 0 for all t >= 0. Then

    0 = (d^j/dt^j) [ x^T e^{At} B ] |_{t=0} = x^T A^j B, j = 0, 1, ...

contradicting the controllability assumption. Thus Q is positive definite.
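For a stable example pair (the matrices below are illustrative assumptions), the integral Q can be computed directly from the Lyapunov equation it satisfies and checked for positive definiteness:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Q = int_0^inf e^{At} B B^T e^{A^T t} dt is the unique solution of
# A Q + Q A^T = -B B^T, and Q > 0 exactly when (A, B) is controllable.
A = np.array([[-1.0, 1.0], [0.0, -2.0]])
B = np.array([[0.0], [1.0]])
Q = solve_continuous_lyapunov(A, -B @ B.T)
assert np.allclose(A @ Q + Q @ A.T, -B @ B.T)
eigs = np.linalg.eigvalsh(Q)
assert eigs.min() > 0          # positive definite: (A, B) is controllable here
print("Q > 0:", eigs.min() > 0)
```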
Solution 9.9 Suppose lambda is an eigenvalue of A, and p is a corresponding left eigenvector. Then p != 0, and

    p^T A = lambda p^T

This implies both

    p^H A = lambda p^H,   A^T pbar = lambdabar pbar

Then

    p^H AQ p + p^H QA^T p = lambda p^H Q p + lambdabar p^H Q p = -p^H BB^T p

that is,

    2 Re[lambda] p^H Q p = -p^H BB^T p   (*)

This gives Re[lambda] <= 0 since Q is positive definite. Now suppose Re[lambda] = 0. Then (*) gives p^H B = 0. Also, for j = 1, 2, ...,

    p^H A^j B = lambda p^H A^{j-1} B = ... = lambda^j p^H B = 0

Thus

    p^H [ B AB ... A^{n-1}B ] = 0

contradicting controllability, and we conclude Re[lambda] < 0.
and we have shown output controllability on [t0, tf].
Now suppose the state equation is output controllable on [t0, tf], but that Wy(t0, tf) is not invertible. Then there exists a p x 1 vector ya != 0 such that ya^T Wy(t0, tf) ya = 0. Using by now familiar arguments, this gives

    ya^T C(tf) Phi(tf, t) B(t) = 0, t in [t0, tf]

Consider the initial state

    x0 = Phi(t0, tf) C^T(tf) [ C(tf) C^T(tf) ]^{-1} ya

which is well defined and nonzero since rank C(tf) = p. There exists an input ua(t) such that

    0 = C(tf) Phi(tf, t0) x0 + Int_{t0}^{tf} C(tf) Phi(tf, s) B(s) ua(s) ds
      = ya + Int_{t0}^{tf} C(tf) Phi(tf, s) B(s) ua(s) ds

Premultiplying by ya^T gives

    0 = ya^T ya

a contradiction.
Solution 9.11 From Exercise 9.10, since rank C = p, the state equation is output controllable if and only if for some fixed tf > 0

    Wy = Int_{0}^{tf} C e^{A(tf-t)} BB^T e^{A^T(tf-t)} C^T dt

is invertible. We show this is equivalent to

    rank [ CB CAB ... CA^{n-1}B ] = p

by showing equivalence of the negations. If Wy is not invertible, there exists a nonzero p x 1 vector ya such that ya^T Wy ya = 0. Thus

    ya^T C e^{A(tf-t)} B = 0, t in [0, tf]

Repeatedly differentiating with respect to t and evaluating at t = tf gives ya^T CA^j B = 0, j = 0, ..., n-1, that is,

    ya^T [ CB CAB ... CA^{n-1}B ] = 0

so that the rank condition fails. Conversely, if the rank condition fails, then there exists a nonzero ya such that ya^T CA^j B = 0, j = 0, ..., n-1. Then, by the Cayley-Hamilton theorem,

    ya^T C e^{A(tf-t)} B = ya^T C Sum_{k=0}^{n-1} alpha_k(tf-t) A^k B = 0, t in [0, tf]

so that ya^T Wy ya = 0 and Wy is not invertible.
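The finite rank test is straightforward to apply; the example matrices below are illustrative assumptions:

```python
import numpy as np

# Output-controllability test from Solution 9.11: with rank C = p,
# the system is output controllable iff rank [CB CAB ... CA^{n-1}B] = p.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
n, p = A.shape[0], C.shape[0]
blocks = [C @ np.linalg.matrix_power(A, k) @ B for k in range(n)]
M = np.hstack(blocks)
assert np.linalg.matrix_rank(M) == p
print("output controllable")
```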
Now if, for k = 1, 2, ..., we consider the quantities

    Sum_{j=0}^{k-1} (d^{k-j-1}/dt^{k-j-1}) [ L_j(t) b(t) u(t) ]

and if alpha_0(t), ..., alpha_{n-1}(t) satisfy

    Sum_{i=0}^{n-1} alpha_i(t) L_i(t) = [ alpha_0(t) ... alpha_{n-1}(t) ] [ L_0(t) ]
                                                                          [ .      ]
                                                                          [ L_{n-1}(t) ]  = L_n(t)

then

    Sum_{j=0}^{n-1} (d^{n-j-1}/dt^{n-j-1}) [ L_j(t) b(t) u(t) ]
      = Sum_{i=0}^{n-1} alpha_i(t) Sum_{j=0}^{i-1} (d^{i-j-1}/dt^{i-j-1}) [ L_j(t) b(t) u(t) ]
CHAPTER 10
Solution 10.2

    rank [ B AB ... A^{n-1}B ] < n  if and only if  rank [ B (A+BC)B ... (A+BC)^{n-1}B ] < n

Similarly,

    rank [ C        ]             rank [ C            ]
         [ CA       ]                  [ C(A+BC)      ]
         [ .        ]  < n   iff       [ .            ]  < n
         [ CA^{n-1} ]                  [ C(A+BC)^{n-1} ]
Write

    C(t)B(s) = H(t)F(s)   (*)

so that

    Mx(t0, tf) Wx(t0, tf) = [ Int_{t0}^{tf} C^T(t)H(t) dt ] [ Int_{t0}^{tf} F(s)B^T(s) ds ]   (**)

where the left side is a product of invertible matrices by minimality. Therefore the two matrices on the right side are invertible. Let

    P^{-1} = Mx^{-1}(t0, tf) Int_{t0}^{tf} C^T(t)H(t) dt

Then multiply both sides of (*) by C^T(t) and integrate with respect to t to obtain

    Mx(t0, tf) B(s) = [ Int_{t0}^{tf} C^T(t)H(t) dt ] F(s)

that is, B(s) = P^{-1} F(s). Similarly, multiplying both sides of (*) by B^T(s) and integrating with respect to s gives

    C(t) = H(t) [ Int_{t0}^{tf} F(s)B^T(s) ds ] Wx^{-1}(t0, tf)

and (**) shows

    [ Int_{t0}^{tf} F(s)B^T(s) ds ] Wx^{-1}(t0, tf) = [ Int_{t0}^{tf} C^T(t)H(t) dt ]^{-1} Mx(t0, tf) = P

so we have

    C(t) = H(t)P

for all t. Noting that 0 = P^{-1} . 0 . P, we have that P is a change of variables relating the two zero-A minimal realizations. Since a change of variables always can be used to obtain a zero-A realization, this shows that any two minimal realizations of a given weighting pattern are related by a variable change.
Differentiating the relation X(t+s) = X(t)X(s),

    (d/ds) X(t+s) = (d/dt) X(t+s)

gives

    X'(t) X(s) = X(t) (d/ds) X(s)

which implies

    (d/ds) X(s) = X(-t) X'(t) X(s)

Integrate both sides with respect to t from a fixed t0 to a fixed tf > t0 to obtain

    (tf - t0) (d/ds) X(s) = [ Int_{t0}^{tf} X(-t) X'(t) dt ] X(s)

Now let

    A = (1/(tf - t0)) Int_{t0}^{tf} X(-t) X'(t) dt

to write

    (d/ds) X(s) = A X(s),  X(0) = I

This implies X(s) = e^{As}. (Of course there are quicker ways. For example note that

    (d/ds) X(t+s) = X(t) X'(s)

Evaluating at s = 0 gives X'(t) = X(t)X'(0), which implies X(t) = e^{X'(0)t}.)
Solution 10.12 If rank G_i = r_i we can write (admittedly using a matrix factorization unreviewed in the text)

    G_i = C_i B_i

where C_i is p x r_i, B_i is r_i x m, and both have rank r_i. Then it is easy to check that

    A = block diagonal { lambda_i I_{r_i}, i = 1, ..., r },  B = [ B_1 ]
                                                                 [ .   ] ,  C = [ C_1 ... C_r ]
                                                                 [ B_r ]

is a realization of G(s) of dimension r_1 + ... + r_r = n. We need only show that this realization is controllable and observable. Write

    [ B AB ... A^{n-1}B ] = [ B_1 0  ... 0  ]   [ I_m  lambda_1 I_m  ...  lambda_1^{n-1} I_m ]
                            [ 0  B_2 ... 0  ]   [ I_m  lambda_2 I_m  ...  lambda_2^{n-1} I_m ]
                            [ .  .       .  ]   [ .                                          ]
                            [ 0  0  ... B_r ]   [ I_m  lambda_r I_m  ...  lambda_r^{n-1} I_m ]

On the right side the first matrix has rank n, while the second has full row rank due to its block-Vandermonde structure and the fact that lambda_1, ..., lambda_r are distinct. This shows controllability. A similar argument shows observability.
(Controllability and observability can be shown more easily using rank tests developed in Chapter 13.)
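A small instance of this construction can be checked numerically; the residues, poles, and rank factorizations below are illustrative assumptions:

```python
import numpy as np

# Solution 10.12 construction for G(s) = G1/(s - l1) + G2/(s - l2),
# with rank-one residues factored as G_i = C_i B_i.
lams = [-1.0, -2.0]
G1 = np.array([[1.0, 0.0], [0.0, 0.0]])   # rank 1
G2 = np.array([[1.0, 1.0], [1.0, 1.0]])   # rank 1
C1, B1 = np.array([[1.0], [0.0]]), np.array([[1.0, 0.0]])
C2, B2 = np.array([[1.0], [1.0]]), np.array([[1.0, 1.0]])
assert np.allclose(C1 @ B1, G1) and np.allclose(C2 @ B2, G2)

A = np.diag([lams[0], lams[1]])           # r_1 = r_2 = 1, n = 2
B = np.vstack([B1, B2])
C = np.hstack([C1, C2])
ctrb = np.hstack([B, A @ B])
obsv = np.vstack([C, C @ A])
assert np.linalg.matrix_rank(ctrb) == 2 and np.linalg.matrix_rank(obsv) == 2
print("minimal realization of dimension", A.shape[0])
```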
CHAPTER 11
Since

    rank [ 1 1 ]
         [ 1 1 ]  = 1

the state equation is not minimal. It is easy to compute the impulse response:

    G(t, s) = C(t) e^{A(t-s)} B = (t^2 + 1) e^{-(t-s)}

Then a factorization is obvious, giving a minimal realization

    x'(t) = e^{t} u(t)
    y(t) = (t^2 + 1) e^{-t} x(t)
It is easy to check that rank Gamma_22(t, s) = 2 for all t, s, and a little more calculation shows that rank Gamma_33(t, s) = 2. Then a minimal realization is, using formulas in the proof of Theorem 11.3, F(t, s) = Gamma_22(t, s),

    Fc(t, s) = [ 1 + e^{-2t}/2 + e^{-2s}/2    e^{-2s} ] ,   B(t) = Fr(t, t) = [ 1 + e^{-2t} ]
                                                                              [ e^{-2t}     ]

and, from

    (dF/ds)(t, t) = [ e^{-2t}   0 ]
                    [ 2e^{-2t}  0 ]

the coefficient matrix

    A = [ 0  1 ]
        [ 0 -2 ]
For the Markov parameter sequence of all ones, the behavior matrix is

    Gamma = [ 1 1 1 ... ]
            [ 1 1 1 ... ]
            [ 1 1 1 ... ]
            [ . . .     ]

and clearly the rank condition in Theorem 11.7 is satisfied with l = k = n = 1. Then, following the proof of Theorem 11.7,

    F = Fs = Fc = Fr = H_1 = H_{s1} = 1

and a minimal (dimension-1) realization is

    x'(t) = x(t) + u(t)
    y(t) = x(t)
For the truncated sequence,

    Gamma = [ 1 1 1 0 ... ]
            [ 1 1 0 0 ... ]
            [ 1 0 0 0 ... ]
            [ 0 0 0 0 ... ]
            [ . . . .     ]

    F = H_3 = [ 1 1 1 ]          Fs = H_{s3} = [ 1 1 0 ]
              [ 1 1 0 ] ,                      [ 1 0 0 ]
              [ 1 0 0 ]                        [ 0 0 0 ]

    Fc = [ 1 1 1 ] ,   Fr = [ 1 ]
                            [ 1 ]
                            [ 1 ]

and these give the minimal (dimension-3) realization

    A = [ 0 1 0 ]        B = [ 1 ]
        [ 0 0 1 ] ,          [ 1 ] ,   C = [ 1 0 0 ]
        [ 0 0 0 ]            [ 1 ]

(This is an example of Silverman's formulas in Exercise 11.13. Also, it is not hard to see that truncation of the
sequence after any finite number n of 1's will lead to a minimal realization of dimension n.)
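The dimension-3 realization above can be checked against the truncated sequence directly:

```python
import numpy as np

# Check that the realization built from the truncated sequence reproduces
# the Markov parameters 1, 1, 1, 0, 0, ...
A = np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 1.0], [0.0, 0.0, 0.0]])
B = np.array([[1.0], [1.0], [1.0]])
C = np.array([[1.0, 0.0, 0.0]])
markov = [(C @ np.linalg.matrix_power(A, k) @ B).item() for k in range(6)]
assert markov == [1.0, 1.0, 1.0, 0.0, 0.0, 0.0]
print(markov)
```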
Given the behavior matrix

    Gamma = [ G_0      G_1  ... ]
            [ G_1      G_2  ... ]
            [ .             ... ]
            [ G_{n-1}  G_n  ... ]

suppose for some 1 <= i <= n a left-to-right column search yields that the first linearly dependent column is column i. Then there exist scalars alpha_0, ..., alpha_{i-2} such that column i is given by the linear combination

    [ G_{i-1}   ]           [ G_0     ]                   [ G_{i-2}   ]
    [ G_i       ]  = alpha_0[ G_1     ] + ... + alpha_{i-2}[ G_{i-1}   ]
    [ .         ]           [ .       ]                   [ .         ]
    [ G_{n-2+i} ]           [ G_{n-1} ]                   [ G_{n-3+i} ]

By ignoring the top entry, this linear combination shows that column i+1 is given by the same linear combination of the i-1 columns to its left, and so on. Thus by the rank assumption on Gamma there cannot exist such an i, and the first n columns of Gamma are linearly independent. A similar argument shows that the first n columns of Gamma_{n,n+j} are linearly independent, for every j >= 0, and thus that Gamma_{nn} is invertible.

It remains only to show that the given A, B, C provides a realization for G(s), since minimality is then immediate. Premultiplication by Gamma_{nn} verifies

    Gamma_{nn}^{-1} [ G_k; ... ; G_{n+k-1} ] = e_{k+1}, k = 0, ..., n-1

Then, since A = Gamma_{snn} Gamma_{nn}^{-1},

    A [ G_k; ... ; G_{n+k-1} ] = Gamma_{snn} e_{k+1} = [ G_{k+1}; ... ; G_{n+k} ], k = 0, ..., n-1

Now, CB = G_0, and

    CA^j B = CA^{j-1} A [ G_0; ... ; G_{n-1} ] = CA^{j-1} [ G_1; ... ; G_n ] = ... = C [ G_j; ... ; G_{n-1+j} ] = G_j, j = 1, ..., n

To complete the verification we use the fact that each dependent column of Gamma_{n,n+j} is given by the same linear combination of n columns to its left. This follows by writing column n+1 of Gamma as a linear combination of the first n (linearly independent) columns, and deleting partitions from the top of the resulting expression. This implies that multiplying any column of Gamma_{n,n+j} by A gives the next column to the right. Thus

    CA^{n+j} B = CA^j [ G_n; ... ; G_{2n-1} ] = C [ G_{n+j}; ... ; G_{2n-1+j} ] = G_{n+j}, j = 1, 2, ...
CHAPTER 12
Solution 12.1 If the state equation is uniformly bounded-input, bounded-output stable with gain eta, then it is clear from the definition that given epsilon we can take delta = epsilon/eta.

Now suppose the delta, epsilon condition holds. In particular we can take epsilon = 1 and assume delta is such that, for any t0,

    ||u(t)|| <= delta, t >= t0

implies

    ||y(t)|| <= 1, t >= t0

Now suppose u(t) is any bounded input signal. Given t0 let mu = sup_{t >= t0} ||u(t)||. Note mu > 0 can be assumed, for otherwise we have a trivial case. Then ||delta u(t)/mu|| <= delta for all t >= t0, and the zero-state response to u(t) satisfies

    ||y(t)|| = || Int_{t0}^{t} G(t, s) u(s) ds || = (mu/delta) || Int_{t0}^{t} G(t, s) [ delta u(s)/mu ] ds || <= mu/delta, t >= t0

Thus we have

    sup_{t >= t0} ||y(t)|| <= (1/delta) sup_{t >= t0} ||u(t)||

and the state equation is uniformly bounded-input, bounded-output stable.
For the time-invariant case,

    W(t-d, t) = Int_{t-d}^{t} e^{A(t-s)} BB^T e^{A^T(t-s)} ds = Int_{0}^{d} e^{A tau} BB^T e^{A^T tau} dtau

It is easy to prove (by showing the equivalence of the negations by contradiction, as in the proof of Theorem 9.5) that this is positive definite if and only if

    rank [ B AB ... A^{n-1}B ] = n

For the second part, take A(t) = 0 and b(t) = e^{-t/2}. Then

    W(t-d, t) = Int_{t-d}^{t} e^{-s} ds = e^{-t} (e^{d} - 1)

Given any d > 0, W(t-d, t) > 0 for all t, but there exists no epsilon > 0 such that

    W(t-d, t) >= epsilon

for all t.
Since

    ||y(t)|| <= Int_{0}^{t} ||b(s)|| ||u(s)|| ds <= Int_{0}^{inf} ||b(s)|| ds . sup_{t >= t0} ||u(t)||

the state equation is uniformly bounded-input, bounded-output stable with eta = 1. However we can also consider a bounded input that is continuous and satisfies

    u(t) = { 1, 0 <= t <= 1
           { 0, t >= 2

Next, suppose

    Int_{0}^{inf} ||G(t)|| dt = eta < inf

and suppose u(t) is continuous, and u(t) -> 0 as t -> inf. Then u(t) is bounded, and we let mu = sup_{t >= 0} ||u(t)||. Now, given epsilon > 0, choose T1 such that

    Int_{T1}^{inf} ||G(t)|| dt <= epsilon/(2 mu)

and T2 such that

    ||u(t)|| <= epsilon/(2 eta), t >= T2

Then for t >= T1 + T2,

    ||y(t)|| <= Int_{0}^{T2} ||G(t-s)|| ||u(s)|| ds + Int_{T2}^{t} ||G(t-s)|| ||u(s)|| ds
            <= mu Int_{t-T2}^{t} ||G(tau)|| dtau + (epsilon/(2 eta)) Int_{0}^{t-T2} ||G(tau)|| dtau
            <= mu Int_{T1}^{inf} ||G(tau)|| dtau + (epsilon/(2 eta)) eta
            <= epsilon/2 + epsilon/2 = epsilon

Thus y(t) -> 0 as t -> inf.
Solution 12.11 The hypotheses imply that given epsilon > 0 there exist delta_1, delta_2 > 0 such that

    ||x0|| < delta_1,  ||u(t)|| < delta_2, t >= t0

imply ||y(t)|| < epsilon for t >= t0. In particular, with x0 = 0, this shows that if ||u(t)|| < delta_2 for t >= t0, then the corresponding zero-state solution of the state equation

    x'(t) = A(t) x(t) + u(t)
    y(t) = x(t)   (*)

satisfies ||y(t)|| < epsilon for t >= t0. But this implies uniform bounded-input, bounded-output stability by Exercise 12.1. Thus there exists a finite constant rho such that the impulse response of (*), which is identical to the transition matrix of A(t), satisfies

    Int_{t0}^{t} ||Phi(t, s)|| ds <= rho

for all t, t0 such that t >= t0. Since A(t) is bounded, this gives uniform exponential stability of

    x'(t) = A(t) x(t)

by Theorem 6.8.
Solution 12.12 Suppose the impulse response is G(t), where G(t) = 0 for t < 0. For u(t) = e^{-at}, t >= 0,

    Int_{0}^{inf} y(t) e^{-lt} dt = Int_{0}^{inf} [ Int_{0}^{t} G(t-s) e^{-as} ds ] e^{-lt} dt

where all integrals are well-defined because of the stability assumption, and l, a > 0. Changing the variable of integration in the inner integral from t to tau = t - s and interchanging the order of integration gives

    Int_{0}^{inf} y(t) e^{-lt} dt = [ Int_{0}^{inf} G(tau) e^{-l tau} dtau ] [ Int_{0}^{inf} e^{-(l+a)s} ds ] = Ghat(l) . 1/(l + a)

Without the stability assumption we can say that U(s) = 1/(s+a) for Re[s] > -a, and the integral for Ghat(s) converges for Re[s] > Re[p_1], ..., Re[p_n], where p_1, ..., p_n are the poles of Ghat(s). Thus

    Y(s) = Ghat(s)/(s+a) = Int_{0}^{inf} y(t) e^{-st} dt

is valid for Re[s] > -a, Re[p_1], ..., Re[p_n]. This implies that if

    l > -a, Re[p_1], ..., Re[p_n]

then

    Int_{0}^{inf} y(t) e^{-lt} dt = Ghat(l)/(l + a)
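The identity can be checked numerically for a simple case (the impulse response, input rate, and evaluation point below are illustrative assumptions):

```python
import numpy as np

# Check of Solution 12.12 with G(t) = e^{-2t} and u(t) = e^{-at}, a = 1:
# int_0^inf y(t) e^{-lt} dt should equal Ghat(l)/(l+a) = 1/((l+2)(l+a)).
a, lam = 1.0, 0.5
t = np.linspace(0.0, 60.0, 600001)
y = np.exp(-t) - np.exp(-2.0 * t)       # convolution of e^{-2t} with e^{-t}
f = y * np.exp(-lam * t)
lhs = float(np.sum((f[1:] + f[:-1]) / 2 * np.diff(t)))   # trapezoid rule
rhs = 1.0 / ((lam + 2.0) * (lam + a))
assert abs(lhs - rhs) < 1e-6
print(round(lhs, 6))
```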
Solution 12.14 Since PB = 0 and

    e^{APt} = Sum_{i=0}^{n-1} alpha_i(t) (AP)^i

we get e^{APt}B = B. Then

    w'(t) = -(CB)^{-1} CAP z(t) - (CB)^{-1} CAB (CB)^{-1} C x(t) + (CB)^{-1} C x'(t)
          = -(CB)^{-1} CAP z(t) - (CB)^{-1} CAB (CB)^{-1} C x(t) + (CB)^{-1} CA x(t) + (CB)^{-1} CB u(t)
          = -(CB)^{-1} CAP z(t) + (CB)^{-1} CA [ -B(CB)^{-1}C + I ] x(t) + u(t)
          = (CB)^{-1} CAP [ x(t) - z(t) ] + u(t)
CHAPTER 13
With

    A = [ a_11 a_12 ]  ,  b = [ b_1 ]
        [ a_21 a_22 ]         [ b_2 ]

the eigenvalues of A are

    lambda = ( a_11 + a_22 +/- sqrt( (a_11 + a_22)^2 - 4(a_11 a_22 - a_12 a_21) ) ) / 2   (*)

and

    det [ b Ab ] = a_21 b_1^2 - a_12 b_2^2 - (a_11 - a_22) b_1 b_2

so that det [ b Ab ] = 0 implies

    (a_11 - a_22)^2 b_1^2 b_2^2 = (a_21 b_1^2 - a_12 b_2^2)^2   (**)
We must show that

    rank [ B AB ... A^{n-1}B ] = n ,   rank [ A B ]
                                            [ C D ]  = n + p   (+)

holds if and only if the state equation

    [ x'(t) ]   [ A 0 ] [ x(t) ]   [ B ]
    [ z'(t) ] = [ C 0 ] [ z(t) ] + [ D ] u(t)   (++)

is controllable. First suppose (+) holds but (++) is not controllable. Then there exists a complex s0 such that

    rank [ s0 I_n - A    0        B ]
         [ -C            s0 I_p   D ]  < n + p   (*)

Since rank [ s0 I - A  B ] = n, this implies

    rank [ -C  s0 I_p  D ] < p

which can occur only for s0 = 0. But then (*) gives

    rank [ A 0 B ]
         [ C 0 D ]  < n + p

that is, rank [ A B; C D ] < n + p, a contradiction. Now suppose (++) is controllable, so that

    rank [ B  AB   A^2 B  ... ]
         [ D  CB   CAB    ... ]  = n + p

This implies

    rank [ B AB ... A^{n-1}B ] = n

in other words, the first rank condition in (+) holds. Now suppose

    rank [ A B ]
         [ C D ]  < n + p

Then

    rank [ s0 I_n - A    0       B ]
         [ -C            s0 I_p  D ]  < n + p   for s0 = 0

that is,

    rank [ s0 I_{n+p} - [ A 0; C 0 ]    [ B; D ] ] < n + p   for s0 = 0

and this implies that (++) is not controllable. The contradiction shows that the second rank condition in (+) holds.
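The equivalence can be exercised numerically; the example matrices below are illustrative assumptions:

```python
import numpy as np

# (A,B) controllable and rank [A B; C D] = n+p  <=>  the augmented pair
# ([A 0; C 0], [B; D]) is controllable.
def ctrb_rank(A, B):
    n = A.shape[0]
    blocks, M = [B], B
    for _ in range(n - 1):
        M = A @ M
        blocks.append(M)
    return np.linalg.matrix_rank(np.hstack(blocks))

A = np.array([[0.0, 1.0], [-1.0, -1.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])
n, p = 2, 1
Aa = np.block([[A, np.zeros((n, p))], [C, np.zeros((p, p))]])
Ba = np.vstack([B, D])
cond = (ctrb_rank(A, B) == n and
        np.linalg.matrix_rank(np.block([[A, B], [C, D]])) == n + p)
assert cond == (ctrb_rank(Aa, Ba) == n + p)
print(cond)
```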
Solution 13.5 Since J has a single eigenvalue lambda, controllability is equivalent to the condition

    rank [ lambda I - J   B ] = n

From the form of the matrix lambda I - J it is clear that a necessary and sufficient condition for controllability is that the set of rows of B corresponding to zero rows of lambda I - J be a linearly independent set of 1 x m vectors.
In the general Jordan form case, applying this condition for each eigenvalue lambda_i gives a necessary and sufficient condition for controllability. (Note that independence of one set of such rows of B (corresponding to one distinct eigenvalue) from another set of such rows of B (corresponding to another distinct eigenvalue) is not required.)
Since

    [ P^{-1}B  (P^{-1}AP)P^{-1}B  ...  (P^{-1}AP)^{n-1}P^{-1}B ] = P^{-1} [ B AB ... A^{n-1}B ]

and controllability indices are defined by a left-to-right linear independence search, it is clear that controllability indices are unaffected by state variable changes.

For the second part, let r_k be the number of linearly dependent columns in A^k B that arise in the left-to-right column search of [ B AB ... A^{n-1}B ]. Note r_0 = 0 since rank B = m. Then r_k is the number of controllability indices that have value at most k. This is because for each of the r_k columns of the form A^k b_i that are dependent, we have rho_i <= k, since for j > 0 the vector A^{k+j} b_i also will be dependent on columns to its left. Thus r_k - r_{k-1} gives the number of controllability indices with value exactly k. Writing

    [ BG ABG ... A^k BG ] = [ B AB ... A^k B ]  [ G 0 ... 0 ]
                                                [ 0 G ... 0 ]
                                                [ . .     . ]
                                                [ 0 0 ... G ]

and using the invertibility of G shows that the same sequence of r_k's is generated by the left-to-right column search in [ BG ABG ... A^{n-1}BG ].
Solution 13.12 By controllability, we can apply a variable change to controller form, with

    Ahat = A0 + B0 U P^{-1} = P A P^{-1},   Bhat = B0 R = P B

Then we can choose Khat such that

    Ahat + Bhat Khat = [ 0     1     0    ...  0        ]
                       [ 0     0     1    ...  0        ]
                       [ .                     .        ]
                       [ 0     0     0    ...  1        ]
                       [ -p_0  -p_1  ...      -p_{n-1}  ]

Also,

    Bhat bhat = B0 R bhat,   B0 = block diagonal { [ 0; ... ; 0; 1 ] (rho_i x 1), i = 1, ..., m }

and, using the triangular structure of R with unity diagonal entries, choosing bhat so that Bhat bhat picks out the appropriate column leads to the equations

    0 = bhat_1 + Sum_{i=2}^{m} alpha_i bhat_i
    0 = bhat_2 + Sum_{i=3}^{m} alpha_i bhat_i
    .
    0 = bhat_{m-1} + alpha_m bhat_m
    1 = bhat_m

where the alpha's denote the relevant entries of R. Clearly there is a solution for the entries of bhat, regardless of the alpha's. Now it is easy to conclude controllability of the single-input state equation by calculation of the form of the controllability matrix. Then changing to the original state variables gives the result since controllability is preserved. In the original variables, take K = Khat P and b = bhat. For an example to show that b alone does not suffice, take Exercise 13.11 with all alpha's zero.
Solution 13.14 Supposing the rank of the controllability matrix is q, Theorem 13.1 gives an invertible Pa such that

    Pa^{-1} A Pa = [ A_11 A_12 ]  ,  Pa^{-1} B = [ B_1 ]  ,  C Pa = [ C_1 C_2 ]
                   [ 0    A_22 ]                 [ 0   ]

with (A_11, B_1) controllable, and supposing

    rank [ C_1; C_1 A_11; ... ; C_1 A_11^{n-1} ] = l

the corresponding observability decomposition gives Pb such that, with

    P = Pa [ Pb  0       ]
           [ 0   I_{n-q} ]

we have

    P^{-1}(Pa^{-1} A Pa)P = [ Ahat_11  0        Ahat_13 ]        [ Bhat_1 ]
                            [ Ahat_21  Ahat_22  Ahat_23 ]  ,     [ Bhat_2 ]  ,   [ Chat_1  0  Chat_2 ]   (*)
                            [ 0        0        Ahat_33 ]        [ 0      ]

for the transformed A, B, and C, where Ahat_11 is l x l, and in fact Ahat_33 = A_22, Chat_2 = C_2. It is easy to see that the state equation formed from Chat_1, Ahat_11, Bhat_1 is both controllable and observable. Also an easy calculation using block triangular structure shows that the impulse response of the state equation defined by (*) is

    Chat_1 e^{Ahat_11 t} Bhat_1

It remains only to show that l = s. Using the effect of variable changes on the controllability and observability matrices and the special structure of (*) gives

    [ C; CA; ... ; CA^{n-1} ] [ B AB ... A^{n-1}B ]
      = [ Chat_1; Chat_1 Ahat_11; ... ; Chat_1 Ahat_11^{n-1} ] [ Bhat_1  Ahat_11 Bhat_1 ... Ahat_11^{n-1} Bhat_1 ]

Thus

    rank ( [ Chat_1; Chat_1 Ahat_11; ... ; Chat_1 Ahat_11^{n-1} ] [ Bhat_1 Ahat_11 Bhat_1 ... Ahat_11^{n-1} Bhat_1 ] ) = s

But

    rank [ Chat_1; Chat_1 Ahat_11; ... ; Chat_1 Ahat_11^{l-1} ] = rank [ Bhat_1  Ahat_11 Bhat_1 ... Ahat_11^{l-1} Bhat_1 ] = l

so the product has rank l, that is, l = s.
CHAPTER 14
With

    W = Int_{0}^{tf} e^{-At} BB^T e^{-A^T t} dt

we have

    AW + WA^T = -Int_{0}^{tf} (d/dt) [ e^{-At} BB^T e^{-A^T t} ] dt = BB^T - e^{-A tf} BB^T e^{-A^T tf}

Letting K = -B^T W^{-1}, we have

    (A + BK)W + W(A + BK)^T = AW + WA^T - 2 BB^T = -( e^{-A tf} BB^T e^{-A^T tf} + BB^T )   (*)

Now suppose lambda is an eigenvalue of A + BK with Re[lambda] >= 0, and p is a corresponding left eigenvector, so that

    p^H (A + BK) = lambda p^H

Multiplying (*) on the left by p^H and on the right by pbar gives

    2 Re[lambda] p^H W pbar = -p^H ( e^{-A tf} BB^T e^{-A^T tf} + BB^T ) pbar <= 0

Since W > 0 by controllability and Re[lambda] >= 0, this forces

    p^H ( e^{-A tf} BB^T e^{-A^T tf} + BB^T ) pbar = 0

that is, p^H B = 0. But then p^H A = lambda p^H, so p^H A^j B = lambda^j p^H B = 0 for all j, contradicting controllability. Therefore all eigenvalues of A + BK have negative real parts.
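This gramian-based feedback can be computed numerically; the plant and horizon below are illustrative assumptions, with the gramian evaluated by trapezoid quadrature:

```python
import numpy as np
from scipy.linalg import expm

# With W = int_0^tf e^{-At} B B^T e^{-A^T t} dt and K = -B^T W^{-1},
# the eigenvalues of A + BK have negative real parts.
A = np.array([[0.0, 1.0], [0.0, 0.0]])     # double integrator: not stable
B = np.array([[0.0], [1.0]])
tf, N = 1.0, 2000
dt = tf / N
vals = []
for k in range(N + 1):
    v = expm(-A * (k * dt)) @ B
    vals.append(v @ v.T)
vals = np.stack(vals)
W = ((vals[1:] + vals[:-1]) * 0.5).sum(axis=0) * dt    # trapezoid rule
K = -B.T @ np.linalg.inv(W)
eigs = np.linalg.eigvals(A + B @ K)
assert eigs.real.max() < 0
print("max Re(eig) < 0:", eigs.real.max() < 0)
```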
Solution 14.5
(a) For any n x 1 vector x,

    -2m x^H x <= x^H (A + A^T) x = x^H A x + x^H A^T x <= 2m x^H x

where m bounds ||A||. If lambda is an eigenvalue of A, and x is a unity-norm eigenvector corresponding to lambda, then

    A x = lambda x,   x^H A^T = lambdabar x^H

and we conclude

    | lambda + lambdabar | <= 2m

Therefore any eigenvalue of A satisfies |Re[lambda]| <= m, and this implies that for alpha > m all eigenvalues of A + alpha I have positive real parts. Therefore all eigenvalues of -(A^T + alpha I) = -(A + alpha I)^T have negative real parts.
(b) Using Theorem 7.11, with alpha > m, the unique solution of

    Q (A + alpha I)^T + (A + alpha I) Q = BB^T   (*)

is

    Q = Int_{0}^{inf} e^{-(A + alpha I)t} BB^T e^{-(A + alpha I)^T t} dt   (**)
Solution 14.8 Without loss of generality we can assume the change of variables in Theorem 13.1 has been performed so that

    A = [ A_11 A_12 ]  ,  B = [ B_1 ]
        [ 0    A_22 ]         [ 0   ]

where A_11 is q x q, and

    rank [ lambda I - A_11   B_1 ] = q

for all complex values of lambda. Then the eigenvalues of A comprise the eigenvalues of A_11 and the eigenvalues of A_22. Also, for any complex lambda,

    rank [ lambda I - A   B ] = rank [ lambda I - A_11   -A_12           B_1 ]
                                     [ 0                 lambda I - A_22  0  ]  = q + rank ( lambda I - A_22 )   (+)

Now suppose rank [ lambda I - A  B ] = n for all nonnegative-real-part eigenvalues of A. Then by (+) any such eigenvalue must be an eigenvalue of A_11, which implies that all eigenvalues of A_22 have negative real parts. But we can compute an m x q matrix K_1 such that A_11 + B_1 K_1 has negative-real-part eigenvalues. So setting K = [ K_1  0 ] we have that

    A + BK = [ A_11 + B_1 K_1    A_12 ]
             [ 0                 A_22 ]

has negative-real-part eigenvalues. Conversely, if K = [ K_1  K_2 ] is such that

    [ A_11 + B_1 K_1    A_12 + B_1 K_2 ]
    [ 0                 A_22           ]

has negative-real-part eigenvalues, then A_22 has negative-real-part eigenvalues. Thus if Re[lambda] >= 0, then (+) gives

    rank [ lambda I - A   B ] = q + (n - q) = n
Solution 14.9 For controllability assume A and B have been transformed to controller form by a state variable
change. By Exercise 13.10 this does not alter the controllability indices. Then it is easy to show that A+BLC and
B are in controller form with the same block sizes, regardless of L and C. Thus the controllability indices do not
change. Similar arguments apply in the case of observability.
Since CB = 0,

    tr [A + BLC] = tr [A] + tr [BLC] = tr [A] + tr [CBL] = tr [A] > 0

so the eigenvalue sum of A + BLC is positive. Thus at least one eigenvalue of A + BLC has positive real part, regardless of L.
Writing the kth row of G(s) as

    C_k (sI - A)^{-1} B = Sum_{j=0}^{inf} C_k A^j B s^{-(j+1)}

with C_k A^j B = 0 for j < kappa_k - 1 and C_k A^{kappa_k - 1} B != 0, it follows that in the kth row of G(s) the minimum difference between the denominator and numerator polynomial degrees among the entries G_{k1}(s), ..., G_{km}(s) is kappa_k.
CHAPTER 15
Since A(t) is bounded, by Exercise 6.6 there is a positive constant kappa such that ||Phi(tau, s)||^2 <= kappa^2 for s in [tau, tau + delta]. And since

    Int_{tau}^{tau + delta} Phi(tau, s) B(s) B^T(s) Phi^T(tau, s) ds <= rho_1 I

we have

    Int_{tau}^{tau + delta} ||B(s)||^2 ds <= kappa^2 Int_{tau}^{tau + delta} tr [ Phi(tau, s) B(s) B^T(s) Phi^T(tau, s) ] ds <= kappa^2 n rho_1 = gamma_1

Now for any tau, and t in [tau + k delta, tau + (k+1) delta], k = 0, 1, ...,

    Int_{tau}^{t} ||B(s)||^2 ds <= Sum_{j=0}^{k} Int_{tau + j delta}^{tau + (j+1) delta} ||B(s)||^2 ds <= (k+1) gamma_1 <= [ 1 + (t - tau)/delta ] gamma_1

This bound is independent of k, so letting gamma_2 = gamma_1/delta we have

    Int_{tau}^{t} ||B(s)||^2 ds <= gamma_1 + gamma_2 (t - tau)

for all tau, t with t >= tau. (Of course this provides a simplification of the hypotheses of Theorem 15.5 for the bounded-A(t) case.)
Solution 15.6 Write the given state equation in the partitioned form

    [ za'(t) ]   [ A_11 A_12 ] [ za(t) ]   [ B_1 ]
    [ zb'(t) ] = [ A_21 A_22 ] [ zb(t) ] + [ B_2 ] u(t)

    y(t) = [ I_p  0 ] [ za(t) ]
                      [ zb(t) ]

Then, in terms of za(t), zb(t), and the observer error eb(t), the closed-loop state equation has a block-triangular coefficient matrix. Thus we see that the eigenvalues of the closed-loop state equation are provided by the n eigenvalues of A + BK and the (n-p) eigenvalues of A_22 - H A_12. Furthermore, the block triangular structure gives the closed-loop transfer function as

    Y(s) = [ I_p  0 ] (sI - A - BK)^{-1} B N R(s)
    Ahat = [ A + BLJC_2              BLH           ]      Bhat = [ BLJD_21                 ]
           [ GC_2 + GD_22 LJC_2      F + GD_22 LH  ] ,           [ GD_21 + GD_22 LJD_21    ]

    Chat = [ C_1 + D_1 LJC_2    D_1 LH ] ,   Dhat = D_1 LJD_21
CHAPTER 16
Solution 16.4 By Theorem 16.16 there exist polynomial matrices X(s), Y(s), A(s), and B(s) such that

    X(s)D(s) + Y(s)N(s) = I   (*)

Since N_L(s) and D_L(s) describe the same transfer function, N_L(s)D(s) = D_L(s)N(s), and together with (*) this gives

    [ X(s)     Y(s)   ] [ D(s) ]   [ I ]
    [ -N_L(s)  D_L(s) ] [ N(s) ] = [ 0 ]

It remains only to prove unimodularity. Since N_L(s) and D_L(s) are left coprime, there exist polynomial matrices A(s) and B(s) such that

    D_L(s) A(s) + N_L(s) B(s) = I

That is,

    [ X(s)     Y(s)   ] [ D(s)  -B(s) ]   [ I   -X(s)B(s) + Y(s)A(s) ]
    [ -N_L(s)  D_L(s) ] [ N(s)   A(s) ] = [ 0    I                    ]

Postmultiplying by

    [ I   X(s)B(s) - Y(s)A(s) ]
    [ 0   I                    ]

gives

    [ X(s)     Y(s)   ] [ D(s)   D(s)[X(s)B(s) - Y(s)A(s)] - B(s) ]
    [ -N_L(s)  D_L(s) ] [ N(s)   N(s)[X(s)B(s) - Y(s)A(s)] + A(s) ] = I

Since the right-hand factor is a polynomial matrix, this shows that

    [ X(s)     Y(s)   ]
    [ -N_L(s)  D_L(s) ]

is unimodular.
Writing

    I = (P_mu s^mu + ... + P_0)(Q_nu s^nu + ... + Q_0)
      = P_mu Q_nu s^{mu + nu} + (P_mu Q_{nu-1} + P_{mu-1} Q_nu) s^{mu + nu - 1} + ...

and equating coefficients of like powers of s gives

    P_mu Q_nu = 0 ,   P_mu Q_{nu-1} + P_{mu-1} Q_nu = 0   (+)
Solution 16.10 Since N(s)D^{-1}(s) = Nbar(s)Dbar^{-1}(s) and both are coprime right polynomial fraction descriptions, there exists a unimodular U(s) such that Dbar(s) = D(s)U(s). Suppose for some integer 1 <= J <= m we have

    c_k[Dbar] = c_k[D], k = 1, ..., J-1 ;   c_J[Dbar] < c_J[D]

Writing D(s) and Dbar(s) in terms of columns D_k(s) and Dbar_k(s), and writing the (i, j)-entry of U(s) as u_ij(s), gives

    Dbar_k(s) = [ D_1^{hc} s^{c_1[D]} + D_1^{l}(s) ] u_{1,k}(s) + ... + [ D_J^{hc} s^{c_J[D]} + D_J^{l}(s) ] u_{J,k}(s)
                 + ... + [ D_m^{hc} s^{c_m[D]} + D_m^{l}(s) ] u_{m,k}(s) ,  k = 1, ..., m

where D_j^{hc} is the high-order coefficient column and D_j^{l}(s) collects the lower-degree terms. We claim that

    c_k[Dbar] = max_{j = 1, ..., m} { c_j[D] + deg u_{j,k}(s) }

This is shown by an argument using linear independence of D_1^{hc}, ..., D_m^{hc} as follows. Let

    cbar = max_{j = 1, ..., m} { c_j[D] + deg u_{j,k}(s) }

and let beta_{j,k} be the coefficient of s^{cbar - c_j[D]} in u_{j,k}(s). Then not all the beta_{j,k} are zero, and the vector coefficient of s^{cbar} in Dbar_k(s) is

    Sum_{j=1}^{m} beta_{j,k} D_j^{hc} != 0

Applying the claim to the column-degree assumptions forces U(s) to have the form

    U(s) = [ Ua(s)              Ub(s) ]
           [ 0_{(m-J+1) x J}    Uc(s) ]

where Ua(s) is (J-1) x J, from which rank U(s) <= m-1 for all values of s. This contradicts unimodularity. Thus c_J[Dbar] = c_J[D]. The proof is complete since the roles of D(s) and Dbar(s) can be reversed.
CHAPTER 17
Solution 17.1 Suppose

    x'(t) = A x(t) + B u(t)
    y(t) = C x(t)

is observable, and construct D(s) and N(s) from the observability-form coefficients. Therefore

    D^{-1}(s) N(s) = CQ [ sI - Q^{-1}AQ ]^{-1} Q^{-1} B = C (sI - A)^{-1} B

Note that D(s) is row reduced since its row-degree coefficient matrix is S^{-1}, which is invertible. Finally, if the state equation is controllable as well as observable, hence minimal, then it is clear from the definition of D(s) that the degree of the polynomial fraction description equals the dimension of the minimal realization. Therefore D^{-1}(s)N(s) is a coprime left polynomial fraction description.
Solution 17.5 Suppose there is a nonzero 1 x p vector h with the property that for each u0 there is an x0 such that

    h [ C e^{At} x0 + Int_{0}^{t} C e^{A(t-s)} B u0 e^{s0 s} ds ] = 0 , t >= 0

Suppose G(s) = N(s)D^{-1}(s) is a coprime right polynomial fraction description. Then taking Laplace transforms gives

    hC(sI - A)^{-1} x0 + hN(s)D^{-1}(s) u0 (s - s0)^{-1} = 0

that is,

    (s - s0) hC(sI - A)^{-1} x0 + hN(s)D^{-1}(s) u0 = 0

If s0 is not a pole of G(s), then D(s0) is invertible. Thus evaluating at s = s0 gives

    hN(s0)D^{-1}(s0) u0 = 0

and we have that if s0 is not a pole of G(s), then for every u0

    hN(s0) u0 = 0

Thus hN(s0) = 0, that is, rank N(s0) < p <= m, which implies that s0 is a transmission zero.

Conversely, suppose s0 is a transmission zero that is not a pole of G(s). Then for a right-coprime polynomial fraction description G(s) = N(s)D^{-1}(s) we have that D(s0) is invertible, and rank N(s0) < p <= m. Thus there exists a nonzero 1 x p vector h such that hN(s0) = 0. Using the identity (just as in the proof of Theorem 17.13)

    (s0 I - A)^{-1} (s - s0)^{-1} = (sI - A)^{-1} (s0 I - A)^{-1} + (sI - A)^{-1} (s - s0)^{-1}

we can write, for any u0 and the choice x0 = (s0 I - A)^{-1} B u0, the transformed response as zero. That is, h has the property that for any u0 there is an x0 such that

    h [ C e^{At} x0 + Int_{0}^{t} C e^{A(t-s)} B u0 e^{s0 s} ds ] = 0 , t >= 0
Since the numerator is the magnitude of a polynomial, it is finite for every s0, and this implies det D(s0) = 0, that is, s0 is a pole of G(s).
Now suppose s0 is such that det D(s0) = 0. By coprimeness of the right polynomial fraction description N(s)D^{-1}(s), there exist polynomial matrices X(s) and Y(s) such that

    X(s)N(s) + Y(s)D(s) = I_m

for all s, and since N(s) = G(s)D(s) this gives

    [ X(s)G(s) + Y(s) ] D(s) = I_m

If every entry of G(s0) were finite, evaluating at s = s0 and taking determinants would give det D(s0) != 0, a contradiction. Since the entries of the polynomial matrices X(s0) and Y(s0) are finite, some entry of G(s0) must have infinite magnitude.
CHAPTER 18
Solution 18.2
(a) If x in A(A^{-1}V), then clearly x in Im[A], and there exists y in A^{-1}V such that x = Ay, which implies x in V. Therefore A(A^{-1}V) is contained in V intersect Im[A]. Conversely, suppose x in V intersect Im[A]. Then x in Im[A] implies there exists y such that x = Ay, and x in V implies y in A^{-1}V. Thus x in A(A^{-1}V), that is, V intersect Im[A] is contained in A(A^{-1}V).
(b) If x in V + Ker[A], then we can write

    x = xa + xb ,  xa in V ,  xb in Ker[A]

and Ax = A xa in AV. Thus x in A^{-1}(AV), which gives V + Ker[A] contained in A^{-1}(AV). Conversely, if x in A^{-1}(AV), then there exists y in V such that Ax = Ay, that is, A(x - y) = 0. Thus writing

    x = y + (x - y) in V + Ker[A]

gives A^{-1}(AV) contained in V + Ker[A].
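Part (a) can be illustrated numerically with subspaces represented by orthonormal bases (the random matrices below are illustrative assumptions):

```python
import numpy as np

# Numerical illustration of Solution 18.2(a): A(A^{-1} V) = V intersect Im[A].
rng = np.random.default_rng(0)

def orth(M, tol=1e-10):
    U, s, _ = np.linalg.svd(M, full_matrices=False)
    return U[:, s > tol]

def proj(S):                       # orthogonal projector onto span(S)
    return S @ S.T

def null(M, tol=1e-10):
    U, s, Vt = np.linalg.svd(M)
    r = int((s > tol).sum())
    return Vt[r:].T

n = 4
A = rng.standard_normal((n, n)); A[:, 0] = A[:, 1]   # make A singular
V = orth(rng.standard_normal((n, 2)))
preim = null((np.eye(n) - proj(V)) @ A)              # A^{-1} V
lhs = orth(A @ preim)                                # A(A^{-1} V)
imA = orth(A)
rhs = orth(null(np.vstack([np.eye(n) - proj(V),
                           np.eye(n) - proj(imA)])))  # V intersect Im A
assert np.allclose(proj(lhs), proj(rhs), atol=1e-8)
print(lhs.shape[1])  # dimension of the common subspace
```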
The condition is

    rank C [ B AB ... A^{n-1}B ] = p

and thus the proof involves showing that the rank condition is equivalent to positive definiteness of

    Int_{0}^{tf} C e^{A(tf-t)} BB^T e^{A^T(tf-t)} C^T dt
Solution 18.10 We show equivalence of the negations. First suppose V != 0, V contained in Ker[C], is a controlled invariant subspace. Then picking a friend F of V we have

    (A + BF)V contained in V contained in Ker[C]

Selecting 0 != x0 in V, this gives

    e^{(A + BF)t} x0 in V , t >= 0

and thus

    C e^{(A + BF)t} x0 = 0 , t >= 0

Thus the closed-loop state equation is not observable, since the zero-input response to x0 != 0 is identical to the zero-input response to the zero initial state.
Conversely, suppose the closed-loop state equation is not observable for some F. Then

    N = Intersect_{k=0}^{n-1} Ker [ C(A + BF)^k ] != 0

and N is invariant under A + BF and contained in Ker[C], hence is a nonzero controlled invariant subspace contained in Ker[C].
Solution 18.11

    Bhat = [ B_11              B_12               ]
           [ 0_{(r-q) x q}     B_22               ]
           [ 0_{(c-r) x q}     B_32               ]
           [ 0_{(n-c) x q}     0_{(n-c) x (m-q)}  ]
CHAPTER 19
z T = Ker
zT
,
xT
for all z S
(*)
z T < rank
zT
xT
zT
< dim Ker
xT
zT
Solution 19.2 By induction we will show that (W^k)^perp = V^k, where V^k is generated by the algorithm for V* in Theorem 19.3:

    V^0 = K
    V^{k+1} = K intersect A^{-1}(V^k + B) = V^k intersect A^{-1}(V^k + B)

For k = 0 the claim becomes (K^perp)^perp = K, which is established in Exercise 19.1. So suppose for some nonnegative integer K we have (W^K)^perp = V^K. Then, using Exercise 19.1,

    (W^{K+1})^perp = ( W^K + A^T [ W^K intersect B^perp ] )^perp
                   = (W^K)^perp intersect ( A^T [ W^K intersect B^perp ] )^perp
                   = V^K intersect A^{-1}( [ W^K intersect B^perp ]^perp )
                   = V^K intersect A^{-1}( (W^K)^perp + B )
                   = V^K intersect A^{-1}( V^K + B )

Thus

    (W^{K+1})^perp = V^K intersect A^{-1}(V^K + B) = V^{K+1}

This completes the induction proof, and gives V* = V^n = (W^n)^perp.
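The V* algorithm itself is easy to run numerically; the system below is an illustrative assumption, with subspaces carried as orthonormal bases:

```python
import numpy as np

# Sketch of the V* algorithm of Theorem 19.3:
# V^0 = Ker C,  V^{k+1} = Ker C intersect A^{-1}(V^k + Im B).
def orth(M, tol=1e-10):
    U, s, _ = np.linalg.svd(M, full_matrices=False)
    return U[:, s > tol]

def proj(S):
    return S @ S.T

def null(M, tol=1e-10):
    U, s, Vt = np.linalg.svd(M)
    r = int((s > tol).sum())
    return Vt[r:].T

A = np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 1.0], [-1.0, -2.0, -3.0]])
B = orth(np.array([[0.0], [0.0], [1.0]]))
K = null(np.array([[0.0, 0.0, 1.0]]))     # Ker C
n = 3
V = K
for _ in range(n):
    SB = orth(np.hstack([V, B]))          # V^k + Im B
    pre = null((np.eye(n) - proj(SB)) @ A)        # A^{-1}(V^k + Im B)
    V = null(np.vstack([np.eye(n) - proj(K),
                        np.eye(n) - proj(pre)]))  # K intersect (...)
# V is (a basis for) V*: A V* <= V* + Im B and V* <= Ker C
resid = (np.eye(n) - proj(orth(np.hstack([V, B])))) @ (A @ V)
assert np.allclose(resid, 0, atol=1e-8)
assert np.allclose((np.eye(n) - proj(K)) @ V, 0, atol=1e-8)
print(V.shape[1])
```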
For K = 1,

    Sum_{j=1}^{1} (A + BF)^{j-1} (B intersect V*) = B intersect V* = V* intersect (A . 0 + B) = R^1

Assume now that for some positive integer K we have

    Sum_{j=1}^{K} (A + BF)^{j-1} (B intersect V*) = R^K = V* intersect (A R^{K-1} + B)

Then

    Sum_{j=1}^{K+1} (A + BF)^{j-1} (B intersect V*) = B intersect V* + (A + BF) R^K   (+)

and

    (A + BF) R^K contained in (A + BF) V* contained in V*

Using the second part of Exercise 18.4 gives

    B intersect V* + (A + BF) R^K = [ B + (A + BF) R^K ] intersect V*

Since (A + BF) R^K + B = A R^K + B, the right side of (+) can be rewritten as

    B intersect V* + (A + BF) R^K = V* intersect [ A R^K + B ] = R^{K+1}

This completes the induction proof of the Hint, and Theorem 19.6 gives R* = R^n.
    <A+BF | Im[E+BK]> contained in Ker[C]   (*)

Thus we want to show that there exist F and K such that (*) holds if and only if Im[E] is contained in V* + B, where V* is the maximal controlled invariant subspace contained in Ker[C] for the plant.
First suppose F and K are such that (*) holds. Since <A+BF | Im[E+BK]> is invariant under (A + BF), it is a controlled invariant subspace contained in Ker[C] for the plant. Then

    Im[E+BK] contained in <A+BF | Im[E+BK]> contained in V*

That is, for any x in X there is a v in V* such that (E + BK)x = v. Therefore

    Ex = v + B(-Kx)

which implies Im[E] is contained in V* + B.
Conversely, suppose Im[E] is contained in V* + B, where V* is the maximal controlled invariant subspace contained in Ker[C] for the plant. We first show how to compute K such that Im[E+BK] is contained in V*. Then we can pick any friend F of V* and the proof will be finished since we will have

    <A+BF | Im[E+BK]> contained in V* contained in Ker[C]

Denote the standard basis vectors for the disturbance space by e_1, ..., e_q, and write, using Im[E] contained in V* + B,

    E e_j = v_j + B u_j ,  v_j in V* ,  j = 1, ..., q

Setting K = -[ u_1 ... u_q ] gives

    (E + BK) e_j = E e_j + BK e_j = v_j + B u_j - B [ u_1 ... u_q ] e_j = v_j ,  j = 1, ..., q

That is, K is such that

    Im[E + BK] contained in V*
With the variable change z(t) = P^{-1} x(t),

    Cbar_1 = C_1 P = [ C_11  0  0 ] ,   Cbar_2 = C_2 P = [ 0  C_12  0 ]

and the closed-loop coefficient matrices take the forms

    [ A_11  0     0    ]        [ B_11  0    ]
    [ 0     A_22  0    ]  ,     [ 0     B_22 ]
    [ A_31  A_32  A_33 ]        [ B_13  B_23 ]

That is, with z(t) = P^{-1} x(t), the closed-loop state equation takes the partitioned form

    za'(t) = A_11 za(t) + B_11 r_1(t)
    zb'(t) = A_22 zb(t) + B_22 r_2(t)
    zc'(t) = A_31 za(t) + A_32 zb(t) + A_33 zc(t) + B_13 r_1(t) + B_23 r_2(t)
    y_1(t) = C_11 za(t)
    y_2(t) = C_12 zb(t)
CHAPTER 20
Solution 20.1 A sketch shows that v(t) is a sequence of unit-height rectangular pulses, occurring every T seconds, with the width of the kth pulse given by k/5, k = 0, ..., 5. This is a piecewise-continuous (actually, piecewise-constant) input, and the continuous-time solution formula gives

    z(t) = e^{F(t-t0)} z(t0) + Int_{t0}^{t} e^{F(t-s)} G v(s) ds

Evaluating over one period,

    z[(k+1)T] = e^{FT} z(kT) + Int_{kT}^{kT+T} e^{F(kT+T-s)} G v(s) ds
              = e^{FT} z(kT) + [ Int_{T - |u(k)|T}^{T} e^{F s} ds ] G sgn[u(k)]

The integral term is not linear in the input sequence u(k), so we approximate the integral when u(k) is small. Changing the integration variable, another way to write the integral term is

    e^{FT} [ Int_{0}^{|u(k)|T} e^{-F s} ds ] G sgn[u(k)]

and for |u(k)| small,

    Int_{0}^{|u(k)|T} e^{-F s} ds = Int_{0}^{|u(k)|T} ( I - F s + ... ) ds approx= |u(k)| T I

Then since |u(k)| sgn[u(k)] = u(k), this gives the approximate, linear, discrete-time state equation

    z[(k+1)T] = e^{FT} z(kT) + e^{FT} T G u(k)
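The quality of this small-signal approximation can be checked on a scalar example (the plant data and pulse placement below are illustrative assumptions):

```python
import numpy as np

# Compare the exact pulse-width-modulated update with the approximate
# linear model z+ = e^{FT}(z + T G u) for small u.
F, G, T = -0.8, 1.0, 0.1

def exact_step(z, u):
    # unit-height pulse of width |u|T at the start of the period
    w = abs(u) * T
    integ = (np.exp(F * T) - np.exp(F * (T - w))) / F   # int_{T-w}^{T} e^{Fs} ds
    return np.exp(F * T) * z + np.sign(u) * integ * G

def approx_step(z, u):
    return np.exp(F * T) * (z + T * G * u)

z = 1.0
for u in (0.01, -0.02, 0.015):
    e, a = exact_step(z, u), approx_step(z, u)
    assert abs(e - a) < 1e-4 * abs(e)
print("approximation holds for small u")
```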
Solution 20.4 For a constant nominal input u(k) = utilde, constant nominal solutions are given by

    xtilde = [ utilde   ]
             [ utilde^2 ] ,   ytilde = utilde

and linearization about the nominal gives

    dx(k+1) = [ 1 0 ] dx(k) + [ 2 utilde ] du(k)
              [ 0 1 ]         [ 4 utilde ]

    dy(k) = [ 2 utilde   1 ] dx(k) + 2 utilde du(k)
Solution 20.10 Computing Phi(j+q, j) for the first few values of q >= 0 easily leads to the general formula for Phi(k, j):

    Phi(k, j) = [ a_1(k-1) a_2(k-2) a_1(k-3) a_2(k-4) ... a_2(j)     0                                              ]
                [ 0                                                  a_2(k-1) a_1(k-2) a_2(k-3) a_1(k-4) ... a_1(j) ] ,  k-j even

    Phi(k, j) = [ 0                                                  a_1(k-1) a_2(k-2) a_1(k-3) a_2(k-4) ... a_1(j) ]
                [ a_2(k-1) a_1(k-2) a_2(k-3) a_1(k-4) ... a_2(j)     0                                              ] ,  k-j odd

For the next part, the composition property gives

    (k - k_1) Phi(k, k_0) = Sum_{j=k_1}^{k-1} Phi(k, j) Phi(j, k_0)

so that

    ||Phi(k, k_0)|| <= (1/(k - k_1)) Sum_{j=k_1}^{k-1} ||Phi(k, j)|| ||Phi(j, k_0)|| ,  k >= k_1 + 1 >= k_0 + 1
For

    A(k) = [ 1  a(k) ]
           [ 0  1    ]

composing the transitions gives

    P(k) = Phi_A(k, 0) = [ 1   Sum_{i=0}^{k-1} a(i) ]
                         [ 0   1                    ] ,  k >= 1

and

    P^{-1}(k) = [ 1   -Sum_{i=0}^{k-1} a(i) ]
                [ 0   1                     ] ,  k >= 0
CHAPTER 21
With

    (zI - A)^{-1} = 1/(z^2 + 7z + 12) [ z+7   1 ]
                                      [ -12   z ]

and U(z) = z/(z-1),

    Y(z) = z c(zI-A)^{-1} x0 + c(zI-A)^{-1} b U(z) = 0

after the partial-fraction terms (with coefficients +/- 1/20) cancel. Therefore the complete solution is y(k) = 0, k >= 0.
With

    F = e^{AT} = [ 1  T ]         g = Int_{0}^{T} e^{A s} b ds = [ T^2/2 ]
                 [ 0  1 ] ,                                      [ T     ]

computed using

    (sI - A)^{-1} = [ s  -1 ]^{-1}   [ 1/s   1/s^2 ]
                    [ 0   s ]      = [ 0     1/s   ]

we get

    Z[y(kT)] / Z[u(kT)] = h (zI - F)^{-1} g = [ 0  1 ] [ z-1   -T  ]^{-1} [ T^2/2 ]
                                                       [ 0    z-1 ]       [ T     ]  = T/(z-1)
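The sampled matrices above can be produced with the standard augmented-exponential trick (a sketch; the sample period is an illustrative assumption):

```python
import numpy as np
from scipy.linalg import expm

# Zero-order-hold discretization of the double integrator: F = e^{AT} and
# g = int_0^T e^{As} b ds come out of one augmented matrix exponential.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
b = np.array([[0.0], [1.0]])
T = 0.5
M = np.zeros((3, 3))
M[:2, :2], M[:2, 2:] = A, b
E = expm(M * T)
F, g = E[:2, :2], E[:2, 2:]
assert np.allclose(F, [[1.0, T], [0.0, 1.0]])
assert np.allclose(g, [[T**2 / 2], [T]])
print(F, g.ravel())
```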
Solution 21.7
(a) The solution formula gives, using a standard formula for a finite geometric sum, for a constant deposit u,

x(k) = (1+r/l)^k xo + Σ_{j=0}^{k-1} (1+r/l)^{k-1-j} u
     = (1+r/l)^k xo + [ (1 - 1/(1+r/l)^k) / (1 - 1/(1+r/l)) ] (1+r/l)^{k-1} u

(b) The growth over one year gives the effective annual interest rate

[ ((1+r/l)^l xo - xo) / xo ] · 100% = [ (1+r/l)^l - 1 ] · 100%

For r = 0.05, l = 2, the effective interest rate is 5.06%. For r = 0.05, l = 12, the effective interest rate is 5.12%.
(c) Set

0 = x(19) = (1.05)^{19} xo - (50,000/0.05)(1.05)^{19} + 50,000/0.05

and solve to obtain xo = $604,266. Of course this means you have actually won only $654,266, but
congratulations remain appropriate.
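The numerical values in (b) and (c) can be reproduced in a few lines (pure arithmetic, no assumptions beyond the formulas above):

```python
# Effective-rate and present-value computations from Solution 21.7.
def effective_rate(r, l):
    return ((1 + r / l) ** l - 1) * 100.0

print(round(effective_rate(0.05, 2), 2))   # 5.06
print(round(effective_rate(0.05, 12), 2))  # 5.12

# part (c): balance exhausted by 19 further withdrawals of $50,000 at 5%:
# 0 = 1.05^19 x0 - (50000/0.05)(1.05^19 - 1)
x0 = 50000 * (1.05**19 - 1) / (0.05 * 1.05**19)
print(round(x0))  # 604266
```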
Solution 21.9  With T = Td/l and v(t) = v(kT), kT ≤ t < (k+1)T, evaluate the solution formula
at t = (k+1)T, to = kT to obtain

z[(k+1)T] = A z(kT) + B v[(k-l)T]

where A = e^{FT} and B = ∫_0^T e^{Fσ} dσ G. Defining

x(k) = [ z(kT)     ]
       [ v((k-l)T) ]
       [ .         ]
       [ .         ]       u(k) = v(kT) ,    y(k) = y(kT)
       [ v((k-1)T) ] ,

we get

x(k+1) = [ A  B  0  ···  0 ]         [ 0 ]
         [ 0  0  1  ···  0 ]         [ 0 ]
         [ .  .  .       . ] x(k) +  [ . ] u(k) ,      x(0) = [ z(0)   ]
         [ 0  0  0  ···  1 ]         [ 0 ]                    [ v(-lT) ]
         [ 0  0  0  ···  0 ]         [ 1 ]                    [ .      ]
                                                              [ v(-2T) ]
                                                              [ v(-T)  ]

y(k) = [ C  0  ···  0 ] x(k)

The dimension of the initial state is n+l. The transfer function of this state equation is the same as the transfer
function of

z(k+1) = A z(k) + B u(k-l)
y(k) = C z(k)

Taking the z-transform, using the right shift property, gives

Y(z) = C(zI-A)^{-1} B z^{-l} U(z)
[ 1  0 ]          [ 0  1 ]
[ 0  0 ] ,   Mb = [ 0  0 ]
Solution 21.13  By Lemma 21.6, given any ko there is a K-periodic solution of the forced state equation if and
only if there is an xo satisfying

[ I - Φ(ko+K, ko) ] xo = Σ_{j=ko}^{ko+K-1} Φ(ko+K, j+1) f(j)        (*)

Similarly there is a K-periodic solution of the unforced state equation if and only if there is a zo satisfying

[ I - Φ(ko+K, ko) ] zo = 0        (**)

Since there is no zo ≠ 0 satisfying (**), it follows that [I - Φ(ko+K, ko)] is invertible. This implies that for each ko
there exists a unique xo satisfying (*). For this xo the forced state equation has a K-periodic solution.
However, if there is a zo ≠ 0 satisfying (**), (*) might still have a solution if the right side is in the range of
[I - Φ(ko+K, ko)].
Solution 21.14  Since the forced state equation has no K-periodic solutions, for any ko there is by Exercise
21.13 a zo ≠ 0 such that the solution of

z(k+1) = A(k) z(k) ,   z(ko) = zo

is K-periodic. Thus [I - Φ(ko+K, ko)] is not invertible, while

[ I - Φ(ko+K, ko) ] xo = Σ_{j=ko}^{ko+K-1} Φ(ko+K, j+1) f(j)

has no solution xo. Therefore we have by linear algebra that there exists a nonzero, n × 1 vector p such that

[ I - Φ(ko+K, ko) ]^T p = 0

and

q = p^T Σ_{j=ko}^{ko+K-1} Φ(ko+K, j+1) f(j) ≠ 0

Now pick any xo. Then it is easy to show that the corresponding solution satisfies p^T x(ko+jK) = p^T xo + jq,
j = 1, 2, . . . . This shows that the solution is unbounded.
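The construction in Solutions 21.13 and 21.14 can be illustrated numerically; with made-up K-periodic A(k) and f(k) below, solving (*) for xo does produce a K-periodic solution:

```python
import numpy as np

# Illustrative K-periodic data (not from the exercises).
K = 3
A_seq = [np.array([[0.2, 1.0], [0.0, 0.5]]),
         np.array([[0.0, -0.4], [0.3, 0.1]]),
         np.array([[0.6, 0.0], [0.2, -0.2]])]
f_seq = [np.array([1.0, 0.0]), np.array([0.0, 2.0]), np.array([-1.0, 1.0])]

def A(k):
    return A_seq[k % K]

def f(k):
    return f_seq[k % K]

def Phi(k, j):
    P = np.eye(2)
    for i in range(j, k):
        P = A(i) @ P
    return P

ko = 0
rhs = sum(Phi(ko + K, j + 1) @ f(j) for j in range(ko, ko + K))
x0 = np.linalg.solve(np.eye(2) - Phi(ko + K, ko), rhs)   # solve (*)

# simulate and verify K-periodicity of the resulting solution
x = x0.copy()
for k in range(ko, ko + 2 * K):
    x = A(k) @ x + f(k)
    if (k + 1 - ko) % K == 0:
        assert np.allclose(x, x0)
print("K-periodic solution found")
```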
CHAPTER 22
If the state equation is uniformly exponentially stable, then there exist γ ≥ 1 and 0 ≤ λ < 1 such
that

||Φ(k, j)|| ≤ γ λ^{k-j} ,   k ≥ j

which implies

ν_j = sup_k ||Φ(k+j, k)|| ≤ γ λ^j ,   j ≥ 0

Then

lim_{j→∞} ν_j^{1/j} ≤ lim_{j→∞} (γ λ^j)^{1/j} = lim_{j→∞} γ^{1/j} λ = λ < 1

Now suppose

lim_{j→∞} ν_j^{1/j} < 1

Pick λ < 1 such that ν_j ≤ λ^j for all j ≥ J, where J is a suitable positive integer, and let

γ = 1/λ^J max [ max_{1≤j≤J} ν_j , 1 ]

Then for j ≥ J,

||Φ(k+j, k)|| ≤ sup_k ||Φ(k+j, k)|| = ν_j ≤ λ^j ≤ γ λ^j

and for 1 ≤ j ≤ J,

||Φ(k+j, k)|| ≤ ν_j ≤ γ λ^J ≤ γ λ^j

This implies uniform exponential stability.
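For a constant A the quantity ν_j reduces to ||A^j||, and ν_j^{1/j} converges to the spectral radius of A; a small check with an illustrative (non-normal) stable A:

```python
import numpy as np

# Illustrative constant A, not from the text: eigenvalues 0.9 and 0.8,
# but non-normal, so ||A^j|| overshoots before decaying.
A = np.array([[0.9, 1.0], [0.0, 0.8]])

nu = [np.linalg.norm(np.linalg.matrix_power(A, j), 2) for j in range(1, 200)]
roots = [nu[j - 1] ** (1.0 / j) for j in range(1, 200)]

rho = max(abs(np.linalg.eigvals(A)))
print(roots[-1], rho)   # the root test converges toward rho(A) = 0.9 < 1
assert roots[-1] < 1.0
```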
Here

k λ^k = k e^{k ln λ} ,   k ≥ 0

Let η = -ln λ, so that η > 0 since λ < 1. Then

max_{k≥0} k λ^k ≤ max_{t≥0} t e^{-ηt} = 1/(eη)

Therefore

k λ^k ≤ 1/(e ln(1/λ)) ,   k ≥ 0

and

Σ_{k=0}^∞ k λ^k = λ/(1-λ)²
The state equation z(k+1) = A^T(k) z(k) is uniformly exponentially stable if and only if there exist γ ≥ 1 and
0 ≤ λ < 1 such that

||Φ_{A^T}(k, j)|| ≤ γ λ^{k-j} ,   k ≥ j

This is equivalent to

||A^T(k-1) A^T(k-2) ··· A^T(j)|| ≤ γ λ^{k-j} ,   k ≥ j

which is equivalent to, transposing the product,

||A(j) A(j+1) ··· A(k-1)|| ≤ γ λ^{k-j} ,   k ≥ j

This last product has the factors in the reversed order from Φ_A(k, j) = A(k-1) ··· A(j), so uniform exponential
stability of the two state equations need not be equivalent. For example, with

A(0) = [ 0    2 ]        A(1) = [ 0    1/2 ]        A(2) = [ 2  0   ]
       [ 1/2  0 ] ,             [ 1/2  0   ] ,             [ 0  1/2 ]

we compute

Φ_A(3, 0) = A(2)A(1)A(0) = [ 1/2  0   ]
                           [ 0    1/2 ]

while

Φ_{A^T}(3, 0) = A^T(2)A^T(1)A^T(0) = [ A(0)A(1)A(2) ]^T = [ 2  0   ]
                                                          [ 0  1/8 ]

Extending A(k) 3-periodically gives Φ_A(3m, 0) = (1/2)^m I while Φ_{A^T}(3m, 0) = diag(2^m, 8^{-m}), so
x(k+1) = A(k)x(k) is uniformly exponentially stable while z(k+1) = A^T(k)z(k) is not.
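The two products above are easy to verify numerically:

```python
import numpy as np

# Check of the 3-periodic example: the A-products contract, the A^T-products do not.
A = [np.array([[0.0, 2.0], [0.5, 0.0]]),
     np.array([[0.0, 0.5], [0.5, 0.0]]),
     np.array([[2.0, 0.0], [0.0, 0.5]])]

P = A[2] @ A[1] @ A[0]          # Phi_A(3, 0)
Q = A[2].T @ A[1].T @ A[0].T    # Phi_{A^T}(3, 0)
print(P)  # (1/2) I
print(Q)  # diag(2, 1/8)
assert np.allclose(P, 0.5 * np.eye(2))
assert np.allclose(Q, np.diag([2.0, 0.125]))
```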
CHAPTER 23
Solution 23.1  With Q = qI, where q > 0, we compute A^T(k)QA(k) - Q to get the sufficient condition for
uniform exponential stability:

a1²(k), a2²(k) ≤ 1 - ν/q ,   ν > 0

Thus the state equation is uniformly exponentially stable if there exists a constant β < 1 such that for all k

|a1(k)|, |a2(k)| ≤ β

With

Q = [ q1  0  ]
    [ 0   q2 ]

where q1, q2 > 0, the sufficient condition for uniform exponential stability becomes existence of a constant ν > 0
such that for all k,

a1²(k) ≤ (q2 - ν)/q1 ,    a2²(k) ≤ (q1 - ν)/q2

These conclusions show uniform exponential stability under weaker conditions, where one bounded coefficient
can be larger than unity if the other bounded coefficient is suitably small. For example, suppose
sup_k |a2(k)| = α < ∞. Then we can take q1 = α² + 0.01, q2 = 1, and ν = 0.01 to conclude uniform exponential
stability whenever a1²(k) ≤ 0.99/(α² + 0.01) for all k.
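A one-step numerical check of the diagonal-Q condition, with the illustrative value α = 2 (so one coefficient exceeds unity):

```python
import numpy as np

# Check A'QA - Q <= -nu I for the choices in Solution 23.1 (alpha = 2 is
# an arbitrary illustration; a1 is chosen so a1^2 <= (q2 - nu)/q1 holds).
alpha = 2.0
q1, q2, nu = alpha**2 + 0.01, 1.0, 0.01
Q = np.diag([q1, q2])

a1, a2 = 0.49, 2.0   # a1^2 = 0.2401 <= 0.99/4.01, a2^2 = 4 <= q1 - nu
A = np.array([[0.0, a1], [a2, 0.0]])

M = A.T @ Q @ A - Q
# M <= -nu I, i.e. all eigenvalues of M + nu I are <= 0
assert max(np.linalg.eigvalsh(M + nu * np.eye(2))) <= 1e-12
print("A'QA - Q <= -nu I holds")
```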
Solution 23.4  Using the transition matrix computed in Exercise 20.10, an easy computation gives that

Q(k) = I + Σ_{j=k+1}^∞ Φ^T(j, k) Φ(j, k)

is well defined if a1²(k), a2²(k) ≤ β² < 1 for all k, but it also holds under weaker conditions. For example,
suppose the bound is violated only for k = 0, and

a1²(0) > 1 ,    a1²(0) a2²(1) < 1

Then we can conclude uniform exponential stability. (More sophisticated analyses should be possible . . . .)
Solution 23.6  If the state equation is exponentially stable, then by Theorem 23.7 there is for any symmetric M
a unique symmetric Q such that

A^T Q A - Q = -M

Write

M = [ m1  m2 ]        Q = [ q1  q2 ]
    [ m2  m3 ] ,          [ q2  q3 ]

Then the Lyapunov equation can be written as the linear equation

[ 1   0     -a0² ] [ q1 ]   [ m1 ]
[ 0   1+a0  -a0  ] [ q2 ] = [ m2 ]
[ -1  2     0    ] [ q3 ]   [ m3 ]

The condition

det [ 1   0     -a0² ]
    [ 0   1+a0  -a0  ] ≠ 0
    [ -1  2     0    ]

reduces to the condition a0 ≠ 0, 1, -2. Assuming this condition we compute Q for M = I, and use the fact that
Q > 0 since M > 0. The expression

[ q1 ]   [ 1   0     -a0² ]^{-1} [ 1 ]
[ q2 ] = [ 0   1+a0  -a0  ]      [ 0 ]
[ q3 ]   [ -1  2     0    ]      [ 1 ]

gives

Q = -1/(a0(a0+2)(a0-1)) [ a0(a0²+a0+2)  2a0      ]
                        [ 2a0           2(a0+1)  ]        (+)
Premultiplying A^T QA - Q = -M by p^H and postmultiplying by p, where Ap = λp with p ≠ 0, gives

p^H A^T QAp - p^H Qp = -p^H Mp

That is,

( |λ|² - 1 ) p^H Qp = -p^H Mp

If p^H Mp > 0, then |λ|² - 1 < 0, which gives |λ| < 1. But suppose p^H Mp = 0. Then for k ≥ 0,

0 = |λ|^{2k} p^H Mp = (λ^k p)^H M (λ^k p) = p^H (A^T)^k MA^k p
  = (Re [p])^T (A^T)^k MA^k (Re [p]) + (Im [p])^T (A^T)^k MA^k (Im [p])

Since M ≥ 0, this implies

0 = (Re [p])^T (A^T)^k MA^k (Re [p]) = (Im [p])^T (A^T)^k MA^k (Im [p])

By hypothesis this implies

lim_{k→∞} A^k (Re [p]) = lim_{k→∞} A^k (Im [p]) = 0

Therefore

lim_{k→∞} A^k p = lim_{k→∞} λ^k p = 0

and again |λ| < 1.
CHAPTER 24
Since

A^T(k) A(k) = [ a2²(k)  0      ]
              [ 0       a1²(k) ]

it is clear that

σ_max(k) = λ_max^{1/2}[ A^T(k)A(k) ] = max [ |a1(k)|, |a2(k)| ]

Thus Corollary 24.3 states that the state equation is uniformly stable if there exists a constant γ such that

Π_{i=j}^{k} max [ |a1(i)|, |a2(i)| ] ≤ γ        (#)

for all k, j with k ≥ j. (This holds, for instance, if max [ |a1(k)|, |a2(k)| ] ≤ 1
for all but a finite number of values of k.) Of course the condition (#) is not necessary. Consider

x(k+1) = [ 0  1/9 ] x(k)
         [ 4  0   ]

The eigenvalues are ±2/3, so the state equation is uniformly stable, but clearly (#) fails.
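Numerically, the powers of this A indeed stay bounded even though the norm products in (#) blow up:

```python
import numpy as np

# The example after (#): eigenvalues +-2/3, so powers of A are bounded,
# while the product of norms in (#) grows like 4^(k-j).
A = np.array([[0.0, 1.0 / 9.0], [4.0, 0.0]])

pows = [np.linalg.norm(np.linalg.matrix_power(A, k), 2) for k in range(60)]
print(max(pows))                           # bounded: uniform stability
assert max(pows) < 10.0
assert np.linalg.norm(A, 2) ** 30 > 1e15   # but the bound (#) is useless here
```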
With

r(k) = Π_{j=ko}^{k-1} [ 1 + φ(j)ψ(j) ] ,   k ≥ ko+1        (*)

we have

r(k+1) = [ 1 + φ(k)ψ(k) ] r(k) = r(k) + φ(k)ψ(k) Π_{j=ko}^{k-1} [ 1 + φ(j)ψ(j) ] ,   k ≥ ko+1

and iterating this recursion gives

r(k) = 1 + Σ_{j=ko}^{k-1} φ(j)ψ(j) Π_{i=j+1}^{k-1} [ 1 + φ(i)ψ(i) ] ,   k ≥ ko+1
Solution 24.7  By assumption ||Φ_A(k, j)|| ≤ γ for k ≥ j. Treating f(k, z(k)) as an input, the complete solution
formula is

z(k) = Φ_A(k, ko) z(ko) + Σ_{j=ko}^{k-1} Φ_A(k, j+1) f(j, z(j))

This gives, using the bound ||f(j, z(j))|| ≤ β_j ||z(j)||,

||z(k)|| ≤ γ||z(ko)|| + Σ_{j=ko}^{k-1} γ ||f(j, z(j))||
        ≤ γ||z(ko)|| + γ Σ_{j=ko}^{k-1} β_j ||z(j)|| ,   k ≥ ko+1

so by the Gronwall argument, with Σ_{j=ko}^∞ β_j ≤ β,

||z(k)|| ≤ γ ||z(ko)|| exp [ γ Σ_{j=ko}^{k-1} β_j ] ≤ γ e^{γβ} ||z(ko)|| ,   k ≥ ko

This implies uniform stability.
For the scalar example

A(k) = 1/2 ,    f(k, z(k)) = { 0, k ≥ 0          β_k = { 0, k ≥ 0
                             { z(k), k < 0 ,           { 1, k < 0

we have

Σ_{j=k}^∞ β_j = { 0, k ≥ 0
                { -k, k < 0

which is bounded for each k. But for ko < 0, the solution of this state equation yields

z(0) = (3/2)^{-ko} zo

Clearly any candidate bound can be violated by choosing -ko sufficiently large, so the state equation is not
uniformly stable.
CHAPTER 25
Solution 25.1  If M(ko, kf) is not invertible, then there exists a nonzero, n × 1 vector xa such that

0 = xa^T M(ko, kf) xa = Σ_{j=ko}^{kf-1} ||C(j)Φ(j, ko) xa||²

This implies

C(j)Φ(j, ko) xa = 0 ,   j = ko, . . . , kf-1

which shows that the nonzero initial state xa yields the same output on the interval as does the zero initial state.
Therefore the state equation is not observable.
On the other hand, for any initial state xo we can write, just as in the proof of Theorem 25.9,

M(ko, kf) xo = O^T(ko, kf) [ y(ko)    ]
                           [ .        ]
                           [ .        ]
                           [ y(kf-1)  ]

so if M(ko, kf) is invertible, xo is uniquely determined by the output, and the state equation is observable.
Here

W(0, kf) = Σ_{j=0}^{kf-1} Φ(kf, j+1) b(j) b^T(j) Φ^T(kf, j+1) = b(kf-1) b^T(kf-1)

This W(0, kf) has rank at most 1, and if n ≥ 2 the state equation is not reachable on [0, kf]. By contrast,

W(0, n) = Σ_{j=0}^{n-1} e_{j+1} e_{j+1}^T = I_n

and that state equation is reachable on [0, n].
Suppose WO(ko, kf) is not invertible. Then there exists a nonzero p × 1 vector ya such that

0 = ya^T WO(ko, kf) ya = Σ_{j=ko}^{kf-1} ||ya^T C(kf)Φ(kf, j+1)B(j)||²

Therefore

ya^T C(kf)Φ(kf, j+1)B(j) = 0 ,   j = ko, . . . , kf-1

But by output reachability, with yf = ya, there exists an input ua(k) such that

ya = Σ_{j=ko}^{kf-1} C(kf)Φ(kf, j+1)B(j) ua(j)

Thus

ya^T ya = Σ_{j=ko}^{kf-1} ya^T C(kf)Φ(kf, j+1)B(j) ua(j) = 0

and this implies ya = 0. This contradiction shows that WO(ko, kf) must be invertible.
Note that if rank C(kf) < p, then WO(ko, kf) cannot be invertible, and the state equation cannot be output
reachable.
If m = p = 1, then

WO(ko, kf) = Σ_{j=ko}^{kf-1} G²(kf, j)

where G(kf, j) = C(kf)Φ(kf, j+1)B(j). Thus the state equation is output reachable on [ko, kf] if and only if
G(kf, j) ≠ 0 for some j = ko, . . . , kf-1.
Solution 25.13  We will prove that the state equation is reconstructible if and only if

[ C        ]
[ CA       ]
[ .        ]
[ .        ] z = 0   implies   A^n z = 0        (*)
[ CA^{n-1} ]

That is, if and only if the null space of the observability matrix is contained in the null space of A^n.
First, suppose the state equation is not reconstructible. Then there exist n × 1 vectors xa and xb such that
xa ≠ xb and

[ C        ]        [ C        ]
[ .        ]        [ .        ]
[ .        ] xa =   [ .        ] xb ,    A^n xa ≠ A^n xb
[ CA^{n-1} ]        [ CA^{n-1} ]

That is,

[ C        ]
[ .        ]
[ .        ] (xa - xb) = 0 ,    A^n (xa - xb) ≠ 0
[ CA^{n-1} ]

and (*) fails.
Conversely, suppose (*) fails, so that there exists a z with

[ C        ]
[ .        ]
[ .        ] z = 0   and   A^n z ≠ 0        (+)
[ CA^{n-1} ]

Taking x(0) = z gives y(k) = CA^k z = 0 for k = 0, . . . , n-1, hence by the Cayley-Hamilton theorem y(k) = 0
for all k ≥ 0, and x(n) = A^n z ≠ 0. But the same output sequence is produced by x(0) = 0, and for this initial
state x(n) = 0. Thus we cannot determine from the output whether x(n) = A^n z or x(n) = 0, which implies the
state equation is not reconstructible.
CHAPTER 26
For the state equation

x(k+1) = [ 1  k ] x(k) + [ 0 ] u(k)
         [ 1  1 ]        [ 1 ]

we compute

R2(k) = [ B(k)  Φ(k+1, k)B(k-1) ] = [ 0  k ]
                                    [ 1  1 ]

and

R3(k) = [ B(k)  Φ(k+1, k)B(k-1)  Φ(k+1, k-1)B(k-2) ] = [ 0  k  2k-1 ]
                                                       [ 1  1  k    ]

From the respective ranks the state equation is 3-step reachable, but not 2-step reachable.
The augmented state equation

z(k+1) = [ A  0 ] z(k) + [ b ] u(k)
         [ c  0 ]        [ d ]

y(k) = [ 0_{1×n}  1 ] z(k)

has transfer function

[ 0_{1×n}  1 ] [ zI-A  0 ]^{-1} [ b ]
               [ -c    z ]      [ d ]

= [ 0_{1×n}  1 ] [ (zI-A)^{-1}          0      ] [ b ]
                 [ z^{-1} c(zI-A)^{-1}  z^{-1} ] [ d ]

= z^{-1} c(zI-A)^{-1} b + z^{-1} d

= z^{-1} G(z)
Solution 26.6  By Theorem 26.8 G(z) is realizable if and only if it is a matrix of (real-coefficient) strictly-
proper rational functions. By partial fraction expansion of G(z)/z we can write G(z) in the form

G(z) = Σ_{l=1}^{m} Σ_{r=1}^{σ_l} G_{lr} z/(z - λ_l)^r

Here λ_1, . . . , λ_m are distinct complex numbers such that if λ_L is complex, then λ_M = conj(λ_L) for some M. Furthermore
the p × m complex matrices satisfy G_{Mr} = conj(G_{Lr}) for r = 1, . . . , σ_L. From Table 1.10 the corresponding unit pulse
response is

G(k) = Σ_{l=1}^{m} Σ_{r=1}^{σ_l} G_{lr} binom(k, r-1) λ_l^{k+1-r}        (#)

Thus we can state that a unit pulse response G(k) is realizable if and only if
(a) there exist positive integers m, σ_1, . . . , σ_m, distinct complex numbers λ_1, . . . , λ_m, and σ_1 + . . . + σ_m complex
p × m matrices G_{lr} such that (#) holds for all k ≥ 1, and
(b) if λ_L is complex, then λ_M = conj(λ_L) for some M, and furthermore the p × m complex matrices satisfy G_{Mr} = conj(G_{Lr}) for
r = 1, . . . , σ_L.
Solution 26.8  Suppose the given state equation is minimal and of dimension n. We can write its (strictly-
proper, rational) transfer function as

G(z) = c · adj(zI-A) · b / det(zI-A)

where the polynomial det(zI-A) has degree n. If the numerator and denominator polynomials have a common
root, then this root can be canceled without changing the inverse z-transform of G(z). Therefore, following
Example 26.10, we can write by inspection a dimension-(n-1) realization of the unit pulse response of the
original state equation. This contradicts the assumed minimality, and the contradiction gives that the two
polynomials cannot have a common root.
Now suppose the polynomials det(zI-A) and c · adj(zI-A) · b have no common root, but that the given state
equation is not minimal. Then there is a minimal realization

z(k+1) = Fz(k) + gu(k)
y(k) = hz(k)

and we then have

h · adj(zI-F) · g / det(zI-F) = c · adj(zI-A) · b / det(zI-A)

where the polynomial det(zI-F) has degree no larger than n-1. This implies that the polynomials det(zI-A) and
c · adj(zI-A) · b have a common root, a contradiction. Therefore the given state equation is minimal.
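The cancellation half of the argument can be illustrated numerically; the dimension-2 companion realization below (illustrative numbers, not from the text) of (z - 0.5)/((z - 0.5)(z - 0.3)) has the same unit pulse response as a dimension-1 realization of 1/(z - 0.3):

```python
import numpy as np

# Dimension-2 companion realization of (z - a) / ((z - a)(z - p)); the common
# root a cancels, so a dimension-1 realization of 1/(z - p) matches it.
a, p = 0.5, 0.3
A = np.array([[0.0, 1.0], [-a * p, a + p]])   # char poly z^2 - (a+p) z + a p
b = np.array([0.0, 1.0])
c = np.array([-a, 1.0])                       # numerator z - a

F, g, h = p, 1.0, 1.0                         # realization of 1/(z - p)

for k in range(1, 10):
    markov2 = c @ np.linalg.matrix_power(A, k - 1) @ b
    markov1 = h * F ** (k - 1) * g
    assert np.isclose(markov2, markov1)
print("same unit pulse response")
```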
The companion-form realization is

x(k+1) = [ 0    1   ] x(k) + [ 0 ] u(k)
         [ -a0  -a1 ]        [ 1 ]

y(k) = [ c0  c1 ] x(k)
CHAPTER 27
Write

G_{ij}(z) = N_{ij}(z) / ((z-1) D_{ij}(z))

where all roots of the polynomial D_{ij}(z) have magnitude less than unity (so D_{ij}(1) ≠ 0), and the polynomial N_{ij}(z)
satisfies N_{ij}(1) ≠ 0. Suppose that the m × 1 U(z) has all components zero except for U_j(z) = z/(z-1). Then the
i-th component of the output is given by

Y_i(z) = z N_{ij}(z) / ((z-1)² D_{ij}(z))

By partial fraction expansion y_i(k) includes decaying exponential terms, possibly a constant term, and the term

[ N_{ij}(1) / D_{ij}(1) ] k ,   k ≥ 0

Since this term is unbounded, every realization of G(z) fails to be uniform bounded-input, bounded-output stable.
Solution 27.7  The claim is not true in the time-varying case. Consider the scalar state equation

x(k+1) = x(k) + δ(k)u(k)
y(k) = x(k)

where δ(k) is the unit pulse. The zero-state response to any input is

y(k) = 0 for k ≥ ko when ko > 0, and y(k) = u(0) for k ≥ 1 when ko ≤ 0

Thus the state equation is uniform bounded-input, bounded-output stable with η = 1. However for ko = 0 and
u(k) = (1/2)^k we have u(k) → 0 as k → ∞, but y(k) = 1 for all k ≥ 1.
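Simulating the counterexample confirms the claim (a minimal sketch of the scalar state equation above):

```python
# x(k+1) = x(k) + delta(k) u(k), y(k) = x(k), with input u(k) = (1/2)^k.
def simulate(ko, N=30):
    x, ys = 0.0, []
    for k in range(ko, N):
        ys.append(x)
        x = x + (1.0 if k == 0 else 0.0) * 0.5**k   # delta(k) u(k)
    return ys

ys = simulate(0)
print(ys[1:6])   # y(k) = 1 for every k >= 1, even though u(k) -> 0
assert all(y == 1.0 for y in ys[1:])
```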
For the time-invariant case the claim can be proved as follows. Assume u(k) → 0 as k → ∞. Given ε > 0
we will find a K such that ||y(k)|| ≤ ε, k ≥ K, which shows that y(k) → 0 as k → ∞. With

y(k) = Σ_{j=0}^{k} G(k-j) u(j)

let

μ = sup_{k≥0} ||u(k)|| ,    η = Σ_{k=0}^∞ ||G(k)||

The first constant is finite for a well-defined sequence that goes to zero, and the second is finite by uniform
bounded-input, bounded-output stability. Then there is a positive integer K1 such that

||u(k)|| ≤ ε/(2η) ,   k ≥ K1

and a positive integer K2 such that

Σ_{k=K2}^∞ ||G(k)|| ≤ ε/(2μ)

Then for k ≥ K1 + K2,

||y(k)|| ≤ Σ_{j=0}^{K1-1} ||G(k-j)|| ||u(j)|| + Σ_{j=K1}^{k} ||G(k-j)|| ||u(j)||
        ≤ μ Σ_{q=k-K1+1}^{k} ||G(q)|| + ε/(2η) Σ_{q=0}^{k-K1} ||G(q)||
        ≤ μ · ε/(2μ) + ε/(2η) · η
        = ε/2 + ε/2 = ε
CHAPTER 28
Solution 28.2  Lemma 16.18 gives that if V11 and V are invertible, then

V^{-1} = [ V11  V12 ]^{-1} = [ V11^{-1} + V11^{-1} V12 Va^{-1} V21 V11^{-1}   -V11^{-1} V12 Va^{-1} ]
         [ V21  V22 ]        [ -Va^{-1} V21 V11^{-1}                          Va^{-1}               ]

where Va = V22 - V21 V11^{-1} V12. From the expression VV^{-1} = I, written as

[ V11  V12 ] [ W11  W12 ]
[ V21  V22 ] [ W21  W22 ] = I

we obtain

V11 W11 + V12 W21 = I
V21 W11 + V22 W21 = 0

Under the assumption that V11 and V22 are invertible these imply

W11 = V11^{-1} - V11^{-1} V12 W21 ,    W21 = -V22^{-1} V21 W11

so that W11 = (V11 - V12 V22^{-1} V21)^{-1}, and comparing this with the 1,1-block of V^{-1} from Lemma 16.18 gives

(V11 - V12 V22^{-1} V21)^{-1} = V11^{-1} + V11^{-1} V12 (V22 - V21 V11^{-1} V12)^{-1} V21 V11^{-1}
The feedback gain is

K = B^T [(-A)^T]^n [ Σ_{k=0}^{n} α^{2(n-k)} A^k B B^T (A^T)^k ]^{-1} (-A)^{n+1}

That is,

K = -B^T (A^T)^n [ Σ_{k=0}^{n} α^{2(n-k)} A^k B B^T (A^T)^k ]^{-1} A^{n+1}
Solution 28.4  Similar to Solution 13.11. However for the time-invariant case the reachability matrix rank test
can be used, rather than the eigenvector test, by writing

[ B  (A+BK)B  (A+BK)²B  ··· ] = [ B  AB  A²B  ··· ] [ I  KB  KAB+(KB)²  ··· ]
                                                    [ 0  I   KB          ··· ]
                                                    [ 0  0   I           ··· ]
                                                    [ .  .   .           .   ]

Since the second factor on the right side is invertible, the rank of the reachability matrix is unchanged by state
feedback.
If

[ A-I  B ]
[ C    0 ]

is invertible, then C(I-A-BK)^{-1}B is invertible from Exercise 28.6. Then given any diagonal, m × m matrix Λ, we
can choose

N = [ C(I-A-BK)^{-1} B ]^{-1} Λ

to obtain G(1) = Λ. For this closed-loop system, any x(0) and any constant input R(k) = ro yields

lim_{k→∞} y(k) = Λ ro

by the final value theorem. That is, the steady-state value of the response to constant inputs is noninteracting.
(For finite time values, or other inputs, interaction typically occurs.)
CHAPTER 29
Consider the partitioned state equation

[ xa(k+1) ]   [ F11(k)  F12(k) ] [ xa(k) ]   [ G1(k) ]
[ xb(k+1) ] = [ F21(k)  F22(k) ] [ xb(k) ] + [ G2(k) ] u(k)

y(k) = [ Ip  0 ] [ xa(k) ]
                 [ xb(k) ]

With

Pb(k) = [ -H(k)  I_{n-p} ]

we have

[ C(k)  ]   [ Ip     0       ]           [ C(k)  ]^{-1}   [ Ip    0       ]
[ Pb(k) ] = [ -H(k)  I_{n-p} ]    and    [ Pb(k) ]      = [ H(k)  I_{n-p} ]

Then the estimate is

x̂(k) = [ Ip   ] y(k) + [ 0       ] z(k) = [ xa(k)             ]
        [ H(k) ]        [ I_{n-p} ]        [ H(k)xa(k) + z(k)  ]

Therefore

x̂a(k) = xa(k)
x̂b(k) = H(k)xa(k) + z(k)

where

z(k+1) = F(k)z(k) + Ga(k)u(k) + Gb(k)y(k)

This is exactly the same as the reduced-dimension observer in the text.