
AGC DSP

Adaptive Signal Processing




Problem: Equalise, with an FIR filter, the distorting effect of a communication channel that may be changing with time. If the channel were fixed, a possible solution could be based on the Wiener filter approach; in that case we need to know the correlation matrix of the transmitted signal and the cross-correlation vector between the input and the desired response. When the filter operates in an unknown environment, these quantities must be estimated from the accumulated data.




The problem is particularly acute when not only the environment but also the data involved are non-stationary. In such cases we need to track the behaviour of the signals over time and adapt the correlation parameters as the environment changes. This essentially produces a temporally adaptive filter.




A possible framework is shown below: the input {x[n]} drives an adaptive filter with weights w; the filter output is compared with the desired signal d[n], and the resulting error e[n] drives the adaptation algorithm.

[Figure: adaptive filtering framework — input {x[n]}, Adaptive Filter w, desired signal d[n], error e[n], Algorithm block updating the weights]





Applications are many:
- Digital communications channel equalisation
- Adaptive noise cancellation
- Adaptive echo cancellation
- System identification
- Smart antenna systems
- Blind system equalisation
- And many, many others

Applications





Echo Cancellers in Local Loops

[Figure: local-loop echo cancellation — Tx1/Rx1 at one end and Rx2 at the other are coupled to the local loop through hybrids; at each end an echo canceller, driven by an adaptive algorithm, subtracts the estimated echo from the received signal at a summing node]





Adaptive Noise Canceller

[Figure: adaptive noise canceller — the reference (noise) signal is passed through an FIR filter adapted by the algorithm; the filter output is subtracted from the primary signal (signal + noise) at a summing node]




System Identification

[Figure: system identification — the input signal is applied to both the unknown system and an FIR filter adapted by the algorithm; the difference between the two outputs forms the error that drives the adaptation]




System Equalisation

[Figure: system equalisation — the signal passes through the unknown system and then through the FIR filter; the filter output is compared with a delayed version of the original signal, and the error drives the adaptive algorithm]





Adaptive Predictors

[Figure: adaptive predictor — a delayed version of the signal is fed through the FIR filter, whose output is compared with the current signal; the prediction error drives the adaptive algorithm]





Adaptive Arrays

[Figure: adaptive array — the antenna element outputs are combined in a linear combiner whose weights are adapted to suppress interference]



Basic principles:
1) Form an objective function (performance criterion).
2) Find the gradient of the objective function with respect to the FIR filter weights.
3) There are several different approaches that can be used at this point.
4) Form a differential/difference equation from the gradient.


Let the desired signal be d[n], the input signal x[n] and the output y[n]. Now form the vectors

x[n] = [x[n]  x[n-1]  ...  x[n-m+1]]^T
h = [h[0]  h[1]  ...  h[m-1]]^T

so that

y[n] = x[n]^T h
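As an illustration of forming the tap-delay vector and the filter output (a minimal NumPy sketch; the example signal and weights are arbitrary assumptions, not from the notes):

```python
import numpy as np

def tap_vector(x, n, m):
    """Form x[n] = [x[n], x[n-1], ..., x[n-m+1]]^T, zero-padding before the start of the data."""
    v = np.zeros(m)
    for k in range(m):
        if n - k >= 0:
            v[k] = x[n - k]
    return v

# Example: filter output y[n] = x[n]^T h for an arbitrary input and weight vector
x = np.array([1.0, 0.5, -0.2, 0.3, 0.8])
h = np.array([0.4, 0.2, 0.1])                      # m = 3 filter weights
y = np.array([tap_vector(x, n, len(h)) @ h for n in range(len(x))])
print(y)
```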




Now form the objective function

J(h) = E{(d[n] - y[n])^2}
     = σ_d^2 - p^T h - h^T p + h^T R h

where

R = E{x[n] x[n]^T}   (autocorrelation matrix of the input)
p = E{x[n] d[n]}     (cross-correlation vector between input and desired response)






We wish to minimise this function at the instant n. Using steepest descent we write

h[n+1] = h[n] - (μ/2) ∂J(h[n])/∂h[n]

But

∂J(h)/∂h = -2p + 2Rh




So that the weight update equation becomes

h[n+1] = h[n] + μ(p - R h[n])




Since the objective function is quadratic, this expression will converge in m steps. The equation is not practical, however: if we knew R and p a priori we could find the required (Wiener) solution directly as

h_opt = R^{-1} p
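The following is a small numerical sketch (with an assumed R and p, not taken from the notes) contrasting the closed-form Wiener solution with the steepest-descent recursion above:

```python
import numpy as np

# Assumed example statistics for a 2-tap problem (not from the notes)
R = np.array([[1.0, 0.5],
              [0.5, 1.0]])          # input autocorrelation matrix
p = np.array([0.7, 0.3])            # cross-correlation vector

# Closed-form Wiener solution h_opt = R^{-1} p
h_opt = np.linalg.solve(R, p)

# Steepest descent: h[n+1] = h[n] + mu (p - R h[n])
mu = 0.1                            # step size (must satisfy 0 < mu < 2/lambda_max)
h = np.zeros(2)
for _ in range(200):
    h = h + mu * (p - R @ h)

print(h_opt, h)                     # the recursion approaches h_opt
```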



However, these matrices are not known. Approximate expressions are obtained by ignoring the expectations in the earlier exact forms:

R̂[n] = x[n] x[n]^T
p̂[n] = x[n] d[n]

These estimates are very crude. However, because the update equation accumulates such quantities progressively, we expect the crude forms to improve with time.

The LMS Algorithm




Thus we have

h[n+1] = h[n] + μ x[n](d[n] - x[n]^T h[n])

where the error is

e[n] = d[n] - x[n]^T h[n] = d[n] - y[n]

and hence we can write

h[n+1] = h[n] + μ x[n] e[n]

This is sometimes called stochastic gradient descent.
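A minimal LMS sketch in NumPy for a toy system-identification problem; the "unknown" filter h_true, the noise level and the step size mu are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
m, N = 4, 5000
h_true = np.array([0.5, -0.3, 0.2, 0.1])      # assumed "unknown" system to be identified
x = rng.standard_normal(N)                    # input signal
d = np.convolve(x, h_true)[:N] + 0.01 * rng.standard_normal(N)   # desired signal

mu = 0.01                                     # step size
h = np.zeros(m)
for n in range(m, N):
    xn = x[n:n - m:-1]                        # tap-delay vector [x[n], ..., x[n-m+1]]
    e = d[n] - xn @ h                         # error e[n] = d[n] - x[n]^T h[n]
    h = h + mu * xn * e                       # LMS update h[n+1] = h[n] + mu x[n] e[n]

print(h)                                      # should be close to h_true
```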



Convergence
The parameter μ is the step size, and it should be selected carefully: if it is too small, convergence takes too long; if it is too large, the algorithm can become unstable. Write the autocorrelation matrix in the eigen-factorisation form

R = Q Λ Q^T

Convergence


where Q is orthogonal and Λ is diagonal, containing the eigenvalues of R. The error in the weights with respect to their optimal values is given by (using the Wiener solution p = R h_opt)

h[n+1] - h_opt = h[n] - h_opt + μ(R h_opt - R h[n])

With e_h[n] = h[n] - h_opt we obtain

e_h[n+1] = e_h[n] - μ R e_h[n]

Convergence


Or equivalently

e_h[n+1] = (I - μ Q Λ Q^T) e_h[n]

i.e.

Q^T e_h[n+1] = Q^T (I - μ Q Λ Q^T) e_h[n] = (Q^T - μ Λ Q^T) e_h[n]

Thus we have

Q^T e_h[n+1] = (I - μΛ) Q^T e_h[n]

Form a new variable

v[n] = Q^T e_h[n]

Convergence


So that

v[n+1] = (I - μΛ) v[n]

Thus each element of this new variable depends on its previous value through a scaling constant. The equation will therefore have an exponential form in the time domain, and the largest coefficient on the right-hand side will dominate the convergence.

Convergence


We require that

|1 - μ λ_max| < 1

or

0 < μ < 2/λ_max

In practice we take a much smaller value than this bound.
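As a quick numerical check (with an assumed autocorrelation matrix), the bound can be evaluated from the eigenvalues of R:

```python
import numpy as np

R = np.array([[1.0, 0.5, 0.2],
              [0.5, 1.0, 0.5],
              [0.2, 0.5, 1.0]])       # assumed example autocorrelation matrix

lam = np.linalg.eigvalsh(R)           # eigenvalues (real, since R is symmetric)
mu_max = 2.0 / lam.max()              # stability bound: 0 < mu < 2 / lambda_max
mu = 0.1 * mu_max                     # in practice, a much smaller value is used
print(lam, mu_max, mu)
```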


Estimates


Then it can be seen that, as n → ∞, the weight update equation yields

E{h[n+1]} = E{h[n]}

Taking expectations of both sides of the update equation,

E{h[n+1]} = E{h[n]} + μ E{x[n](d[n] - x[n]^T h[n])}

so that in the limit

0 = μ E{x[n] d[n] - x[n] x[n]^T h[n]}



Limiting forms


This indicates that the solution ultimately tends to the Wiener form, i.e. the estimate is unbiased.


Misadjustment


The misadjustment measures the excess mean square error in the objective function due to gradient noise. Assuming uncorrelatedness, set

J_min = σ_d^2 - p^T h_opt

where σ_d^2 is the variance of the desired response, and the term p^T h_opt is zero when input and desired response are uncorrelated. The misadjustment is then defined as

J_XS = (J_LMS(∞) - J_min) / J_min



Misadjustment


It can be shown that the misadjustment is given by

J_XS = Σ_{i=1}^{m} μ λ_i / (1 - μ λ_i)
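For illustration, the expression can be evaluated numerically for assumed eigenvalues and step size:

```python
import numpy as np

lam = np.array([0.2, 0.5, 1.0, 1.8])          # assumed eigenvalues of R
mu = 0.05                                     # assumed step size

# Misadjustment as quoted above: sum_i mu*lambda_i / (1 - mu*lambda_i)
M = np.sum(mu * lam / (1.0 - mu * lam))
print(M)
```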


Normalised LMS


To make the step size respond to the signal one needs

h[n+1] = h[n] + (2μ / (1 + ||x[n]||^2)) x[n] e[n]

In this case 0 < μ < 1, and the misadjustment is proportional to the step size.
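A sketch of the normalised update, reusing the toy identification set-up from the LMS example (h_true and the data are assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
m, N = 4, 5000
h_true = np.array([0.5, -0.3, 0.2, 0.1])      # assumed unknown system
x = rng.standard_normal(N)
d = np.convolve(x, h_true)[:N]

mu = 0.5                                      # normalised step size, 0 < mu < 1
h = np.zeros(m)
for n in range(m, N):
    xn = x[n:n - m:-1]
    e = d[n] - xn @ h
    h = h + (2 * mu / (1 + xn @ xn)) * xn * e # step scaled by 1 + ||x[n]||^2
print(h)
```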

Transform based LMS

[Figure: transform-based LMS — blocks: input {x[n]}, Transform, Adaptive Filter w, Inverse Transform, desired signal d[n], error e[n], adaptation Algorithm]


Least Squares Adaptive




With

R[n] = Σ_{i=1}^{n} x[i] x[i]^T
p[n] = Σ_{i=1}^{n} x[i] d[i]

we have the least squares solution

h[n] = R[n]^{-1} p[n]

However, this is computationally very intensive to implement directly. Alternative forms make use of recursive estimates of the matrices involved.
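A direct (batch) implementation of these sums is sketched below, mainly to contrast with the recursive form that follows; the helper batch_ls and its arguments are illustrative, not part of the lecture notes:

```python
import numpy as np

def batch_ls(x, d, m):
    """Batch least squares: h = R[n]^{-1} p[n] using all data up to the last sample."""
    R = np.zeros((m, m))
    p = np.zeros(m)
    for n in range(m, len(x)):
        xn = x[n:n - m:-1]              # tap-delay vector
        R += np.outer(xn, xn)           # R[n] = sum_i x[i] x[i]^T
        p += xn * d[n]                  # p[n] = sum_i x[i] d[i]
    return np.linalg.solve(R, p)
```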

Recursive Least Squares




Firstly we note that

p[n] = p[n-1] + x[n] d[n]
R[n] = R[n-1] + x[n] x[n]^T

We now use the Inversion Lemma (or the Sherman-Morrison formula).


Recursive Least Squares (RLS)




Let

P[n] = R[n]^{-1}

Then

k[n] = (R[n-1]^{-1} x[n]) / (1 + x[n]^T R[n-1]^{-1} x[n])

P[n] = P[n-1] - k[n] x[n]^T P[n-1]

The quantity k[n] is known as the Kalman gain.



Recursive Least Squares




Now use k[n] = P[n] x[n] in the computation of the filter weights:

h[n] = P[n] p[n] = P[n](p[n-1] + x[n] d[n])

From the earlier expression for the P[n] update we have

P[n] p[n-1] = P[n-1] p[n-1] - k[n] x[n]^T P[n-1] p[n-1]

and hence

h[n] = h[n-1] + k[n](d[n] - x[n]^T h[n-1])
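A minimal RLS sketch following these recursions (no forgetting factor; the initialisation P[0] = delta·I and the helper name rls are assumptions of the sketch):

```python
import numpy as np

def rls(x, d, m, delta=100.0):
    """Recursive least squares following the recursions above (no forgetting factor)."""
    h = np.zeros(m)
    P = delta * np.eye(m)               # assumed initialisation P[0] = delta * I
    for n in range(m, len(x)):
        xn = x[n:n - m:-1]              # tap-delay vector [x[n], ..., x[n-m+1]]
        Px = P @ xn
        k = Px / (1.0 + xn @ Px)        # Kalman gain k[n]
        e = d[n] - xn @ h               # a priori error d[n] - x[n]^T h[n-1]
        h = h + k * e                   # weight update
        P = P - np.outer(k, xn @ P)     # P[n] = P[n-1] - k[n] x[n]^T P[n-1]
    return h
```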



Kalman Filters


The Kalman filter is a sequential estimation method normally derived from either
- the Bayes approach, or
- the Innovations approach.
Essentially these lead to the same equations as RLS, but the underlying assumptions are different.


Kalman Filters


The problem is normally stated as: given a sequence of noisy observations, estimate the sequence of state vectors of a linear system driven by noise.

Standard formulation:

x[n+1] = A x[n] + w[n]
y[n] = C[n] x[n] + v[n]
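For reference, a bare-bones Kalman predict/update cycle for the stated model; the noise covariances Q and Rv, the function name and the calling convention are assumptions of this sketch rather than part of the notes:

```python
import numpy as np

def kalman_step(x_est, P, y, A, C, Q, Rv):
    """One predict/update cycle for x[n+1] = A x[n] + w[n], y[n] = C x[n] + v[n]."""
    # Predict the next state and its covariance
    x_pred = A @ x_est
    P_pred = A @ P @ A.T + Q
    # Update with the new observation y
    S = C @ P_pred @ C.T + Rv                  # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x_pred + K @ (y - C @ x_pred)
    P_new = (np.eye(len(x_new)) - K @ C) @ P_pred
    return x_new, P_new
```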


Kalman Filters


Kalman filters may be seen as RLS with the following correspondence:

                         State space                 RLS
State-update matrix      A[n]                        I
State-noise variance     Q[n] = E{w[n] w[n]^T}       0
Observation matrix       C[n]                        x[n]^T
Observations             y[n]                        d[n]
State estimate           x[n]                        h[n]


Cholesky Factorisation


In situations where storage and, to some extent, computational demand are at a premium, one can use the Cholesky factorisation technique for a positive definite matrix: express

R = L L^T

where L is lower triangular. There are many techniques for determining the factorisation.
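A quick check with NumPy's built-in factorisation (the example matrix is an arbitrary positive definite choice):

```python
import numpy as np

R = np.array([[4.0, 2.0, 0.6],
              [2.0, 2.0, 0.4],
              [0.6, 0.4, 1.0]])        # example positive definite matrix

L = np.linalg.cholesky(R)              # lower-triangular factor
print(np.allclose(L @ L.T, R))         # True: R = L L^T
```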