
MV and GMV Control Algorithms (MTT/1999)

MINIMUM VARIANCE AND GENERALISED MINIMUM VARIANCE CONTROL ALGORITHMS


M.T. Tham (1999) Dept. of Chemical and Process Engineering University of Newcastle upon Tyne Newcastle upon Tyne, NE1 7RU, UK.

1. INTRODUCTION

In this set of notes, we will look at the design of controllers that take into consideration random disturbances that affect the process. First we will consider the Minimum Variance (MV) controller and examine its performance characteristics. Noting its limitations, the MV algorithm will then be extended to the Generalised Minimum Variance (GMV) algorithm. These algorithms were the first that were designed specifically for self-tuning applications, and are now considered classical formulations. The approaches that are adopted are those that underpin the design and development of modern model based predictive controllers. This set of notes therefore also highlights the design procedures and the tricks that can be applied to design control algorithms with enhanced performance. A brief example is also given to show how the algorithms described here can be implemented as self-tuning controllers.

2. MINIMUM VARIANCE (MV) CONTROL

2.1 Derivation of the MV Control Law

The MV controller seeks a control signal u(t) that will minimise the following performance objective:

J_MV = E{ [w(t) - y(t+k)]^2 | t }    (1)

Copyright 1999 Department of Chemical and Process Engineering University of Newcastle upon Tyne


The notation E{ · | t } denotes an expectation conditional upon data available up to and including current time, t. Taking the expected value of a variable squared gives the variance of that variable. In this case, J_MV therefore refers to the variance of the error between set-point w(t) and the controlled output k time steps in the future, y(t+k). The desired controller is thus one that minimises this variance, hence the name Minimum Variance control.

To enable minimisation of Eq.(1) with respect to u(t), first we need to be able to relate the controlled output y to the manipulated input u. This is made available via a process model, the simplest of which is the CARMA (Controlled Auto-Regressive Moving Average) model:

A y(t) = z^-k B u(t) + C ξ(t)    (2)

Equation (2) is an ARMAX or a CARMA model, where k ≥ 1 is the time delay of the process, expressed as an integer multiple of the sampling interval Ts, and A(z^-1), B(z^-1) and C(z^-1) are polynomials in z^-1. That is:

A(z^-1) = 1 + a_1 z^-1 + a_2 z^-2 + ... + a_NA z^-NA,  NA = deg(A)    (3a)
B(z^-1) = b_0 + b_1 z^-1 + b_2 z^-2 + ... + b_NB z^-NB,  NB = deg(B)    (3b)
C(z^-1) = 1 + c_1 z^-1 + c_2 z^-2 + ... + c_NC z^-NC,  NC = deg(C)    (3c)

ξ(t) is a random zero-mean sequence with finite variance σ^2. That is,

E{ξ(t)} = 0 and E{ξ(t)^2} = σ^2    (4)
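To make the model concrete, Eq.(2) can be expanded into a difference equation and simulated directly. The sketch below is my own illustration (the function name and the coefficient choices are not from these notes):

```python
def simulate_carma(a, b, c, k, u, xi):
    """Simulate A y(t) = z^-k B u(t) + C xi(t).

    a, b, c hold coefficients in ascending powers of z^-1
    (a[0] = c[0] = 1); u and xi are the input and noise sequences.
    Expanding A gives the difference equation
    y(t) = -a1 y(t-1) - ... + b0 u(t-k) + ... + xi(t) + c1 xi(t-1) + ...
    """
    y = [0.0] * len(u)
    for t in range(len(u)):
        acc = 0.0
        for j in range(1, len(a)):           # -a_j y(t-j)
            if t - j >= 0:
                acc -= a[j] * y[t - j]
        for j in range(len(b)):              # b_j u(t-k-j)
            if t - k - j >= 0:
                acc += b[j] * u[t - k - j]
        for j in range(len(c)):              # c_j xi(t-j)
            if t - j >= 0:
                acc += c[j] * xi[t - j]
        y[t] = acc
    return y

# First-order example: y(t) = 0.5 y(t-1) + u(t-1), no noise.
y = simulate_carma([1.0, -0.5], [1.0], [1.0], 1,
                   [1.0, 0.0, 0.0, 0.0], [0.0] * 4)
# response to the input pulse: [0.0, 1.0, 0.5, 0.25]
```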

However, the objective function involves a term in the future, namely y(t+k), which is not available at time t. Therefore the minimisation cannot be performed unless we can replace y(t+k) with a realisable estimate. This can be achieved via the use of the following identity:

C = E A + z^-k F    (5a)

where E and F are again polynomials in z^-1. This identity, known in mathematics as the polynomial division identity, gives essentially the quotient and the remainder of the division of two polynomials. In this case,

C/A = E + z^-k (F/A)    (5b)

where E is the quotient and z^-k (F/A) is the remainder, with deg(E) = k-1 and e_0 = 1.
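The E and F polynomials of Eq.(5a) can be obtained by ordinary polynomial long division, stopped after k steps. A minimal sketch (the function name is mine, not from the notes):

```python
def separation_identity(c, a, k):
    """Solve C = E A + z^-k F for E (deg k-1) and F, Eq.(5a).

    c and a are coefficient lists in ascending powers of z^-1
    (c[0] = a[0] = 1).  Long division of C by A is carried out for
    k steps; the part of the remainder from z^-k onwards is F.
    """
    rem = list(c) + [0.0] * max(0, k + len(a) - 1 - len(c))
    e = []
    for i in range(k):
        q = rem[i]                    # e_i, since a[0] = 1
        e.append(q)
        for j, aj in enumerate(a):    # subtract e_i * z^-i * A
            rem[i + j] -= q * aj
    return e, rem[k:]                 # E coefficients, F coefficients

# A = 1 - 0.9 z^-1, C = 1, k = 2:
e, f = separation_identity([1.0], [1.0, -0.9], 2)
# e = [1.0, 0.9]  (e0 = 1 and deg(E) = k-1, as required)
# f ≈ [0.81], i.e. 1 = (1 + 0.9 z^-1)(1 - 0.9 z^-1) + 0.81 z^-2
```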




Equation (5) assumes further significance in that it enables the separation of current and past values from future values. As a result, Eq.(5) is also known as the Separation Identity. To see how this is accomplished, multiply E into Eq.(2). This gives

E A y(t) = z^-k E B u(t) + C E ξ(t)    (6)

Using Eq.(5a) to substitute for EA in Eq.(6), we get

(C - z^-k F) y(t) = z^-k E B u(t) + C E ξ(t)    (7a)

Time shift Eq.(7a) k steps into the future by multiplying by z^k to give

(C - z^-k F) y(t+k) = E B u(t) + C E ξ(t+k)    (7b)

Next, separate out terms involving future values to the left-hand-side, and terms involving past and current values to the right-hand-side:

C y(t+k) - C E ξ(t+k) = E B u(t) + F y(t)    (8)

Defining

y*(t+k|t) = y(t+k) - E ξ(t+k)    (9)

we then obtain the k-step-ahead predictor of y(t) as

C y*(t+k|t) = E B u(t) + F y(t)    (10)

Now we can use y*(t+k|t) in place of y(t+k) in the objective function, since it is a function of past and current values of y and u only, as signified by the index (t+k|t). Since only y*(t+k|t) is needed, we re-arrange Eq.(10) into a more suitable form, namely

y*(t+k|t) = E B u(t) + F y(t) + H y*(t+k-1|t-1)    (11)

where H is another polynomial in z^-1 defined as

H = (1 - C) z    (12)

Substituting y*(t+k|t) for y(t+k) in the objective function gives

J'_MV = E{ [w(t) - y*(t+k|t)]^2 | t }
      = E{ [w(t) - E B u(t) - F y(t) - H y*(t+k-1|t-1)]^2 | t }



When minimising J_MV w.r.t. u(t), we are seeking a u(t) that will set

∂J_MV/∂u(t) = -2 e_0 b_0 [w(t) - E B u(t) - F y(t) - H y*(t+k-1|t-1)] = 0    (13)

From Eq.(13) it is clear that the required control is

u(t) = [w(t) - F y(t) - H y*(t+k-1|t-1)] / (E B)    (14)

2.2 Implementing MV Control

In calculating u(t) from Eq.(14), we require the coefficients of the F, H, E and B polynomials. To simplify matters, we define the product EB to be G, thus reducing the number of polynomials from 4 to 3. Using this new nomenclature, the k-step-ahead predictor y*(t+k|t) becomes:

y*(t+k|t) = G u(t) + F y(t) + H y*(t+k-1|t-1)    (15)

The control signal is calculated as:

u(t) = (1/g_0) [ w(t) - Σ_{i=1}^{NG} g_i u(t-i) - F y(t) - H y*(t+k-1|t-1) ]    (16)
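For a concrete illustration, the sketch below assembles the MV law of Eq.(16) for the special case C = 1, so that H = (1 - C)z = 0 and the predictor needs no recursion. The helper names and the worked first-order example are my own, not from the notes:

```python
def polymul(p, q):
    """Multiply two polynomials in z^-1 (ascending coefficients)."""
    r = [0.0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] += pi * qj
    return r

def mv_controller(a, b, k):
    """Return a function computing u(t) from Eq.(16), for C = 1.

    With C = 1 the identity 1 = E A + z^-k F gives E and F by long
    division, after which G = E B.
    """
    rem = [1.0] + [0.0] * (k + len(a) - 2)
    e = []
    for i in range(k):
        q = rem[i]
        e.append(q)
        for j, aj in enumerate(a):
            rem[i + j] -= q * aj
    f, g = rem[k:], polymul(e, b)

    def control(w, y_hist, u_hist):
        """y_hist[j] = y(t-j); u_hist[j] = u(t-1-j)."""
        s = w
        for i in range(1, len(g)):           # - sum g_i u(t-i)
            if i - 1 < len(u_hist):
                s -= g[i] * u_hist[i - 1]
        for j, fj in enumerate(f):           # - F y(t)
            if j < len(y_hist):
                s -= fj * y_hist[j]
        return s / g[0]
    return control

# Plant y(t+1) = 0.5 y(t) + u(t): identity gives E = 1, F = 0.5, G = 1.
ctl = mv_controller([1.0, -0.5], [1.0], 1)
u0 = ctl(1.0, [0.0], [])        # 1.0: drives y to w in one step
u1 = ctl(1.0, [1.0], [1.0])     # 0.5: holds y at the set-point
```

In the noise-free first-order example the loop reaches the set-point in one sampling interval, which is the deadbeat behaviour discussed in Section 2.3.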

If we do not know the coefficients of the F, H, E and B polynomials, they will have to be estimated from process input-output data. Time-shifting Eq.(15) k time steps back gives

y*(t|t-k) = y(t) - E ξ(t) = G u(t-k) + F y(t-k) + H y*(t-1|t-k-1)    (17)

or

y(t) = G u(t-k) + F y(t-k) + H y*(t-1|t-k-1) + ε(t),  ε(t) = E ξ(t)    (18)

Equation (18) thus provides the regression expression for estimating the coefficients of G, F and H. If estimation and control is carried out every sampling instant, then we have a self-tuning minimum variance control strategy.

2.3 Properties of the MV Controller

The minimum variance controller has several interesting properties. Re-arrangement of Eq.(14) gives


w(t) = E B u(t) + F y(t) + H y*(t+k-1|t-1) = y*(t+k|t)    (19)

This is known as the 'control law', and tells us that the control signal calculated from Eq.(14) will drive the k-step-ahead predictor y*(t+k|t) to the set-point w(t). Using the definition of Eq.(9),

w(t) = y(t+k) - E ξ(t+k),  i.e.  y(t) = w(t-k) + E ξ(t)    (20)

Thus, if the process model is accurate, then the controlled output will follow set-point after the time delay period: the only error will be that due to a weighted sum of process noise. The controller provides dead-time compensation and the response is the best that is possible. Further, if there is no process noise, i.e. ξ(t) = 0, then it can be seen that the minimum variance controller is equivalent to a deadbeat controller.

Equation (20) also represents the closed loop relationship, and we can see that there are no poles or zeros. This indicates that the minimum variance controller achieves its performance objective by cancelling process dynamics. Therefore, it cannot be applied to non-minimum phase systems. Another limitation is that the minimum variance strategy is often observed to exert excessive control effort, which may not be tolerated from the operational point of view. These practical shortcomings led to the development of the Generalised Minimum Variance controller.

3. GENERALISED MINIMUM VARIANCE (GMV) CONTROL

3.1 Derivation of the GMV Control Law

The Generalised Minimum Variance controller seeks a control signal u(t) that will minimise

J_GMV = E{ [R w(t) - P y(t+k)]^2 + [Q' u(t)]^2 | t }    (21)

Compare this with Eq.(1), the objective function used to synthesise the minimum variance controller. Here, we have placed weightings on the output and set-point, and included a term to penalise excessive control effort, via the use of transfer functions P, R and Q' respectively (P, R and Q' assume general transfer function structures, i.e. P = Pn/Pd). Given the same model as Eq.(2), the problem is again to find a predictor to replace the unknown term


φ(t+k) = P y(t+k) in the objective function of Eq.(21). As we are using transfer function weightings, the corresponding separation identity is

P C = E A + z^-k F / Pd    (22)

with deg(E) = k-1 and e_0 = 1. Note that, due to the inclusion of P and Pd, the E and F polynomials in Eq.(22) are different from those in Eq.(5b). Following the procedure used to derive the minimum variance controller, we first multiply E into the process model, and this gives

E A y(t) = z^-k E B u(t) + C E ξ(t)    (23)

Using Eq.(22) to substitute for EA in Eq.(23), we get

(P C - z^-k F/Pd) y(t) = z^-k E B u(t) + C E ξ(t)    (24)

Time shift Eq.(24) k steps into the future by multiplying by z^k to give

(P C - z^-k F/Pd) y(t+k) = E B u(t) + C E ξ(t+k)    (25)

Next, separate out terms involving future values to the left-hand-side, and terms involving past and current values to the right-hand-side:

C P y(t+k) - C E ξ(t+k) = E B u(t) + F y(t)/Pd    (26)

Now, using the definitions y'(t) = y(t)/Pd; φ*(t+k|t) = φ(t+k) - E ξ(t+k); G = EB; and H = (1 - C) z, the corresponding k-step-ahead predictor is therefore:

φ*(t+k|t) = G u(t) + F y'(t) + H φ*(t+k-1|t-1)    (27)

Substitution of Eq.(27) into Eq.(21) and minimisation with respect to the current manipulated input, u(t), yields

∂J'_GMV/∂u(t) = -2 g_0 [R w(t) - G u(t) - F y'(t) - H φ*(t+k-1|t-1)] + 2 q'_0 Q' u(t) = 0

Simplification and re-arrangement then gives the GMV control law:

φ*(t+k|t) - R w(t) + Q u(t) = 0 ;  where Q = q'_0 Q' / g_0    (28)



Again, if the parameters of the process are known, then the control signal is calculated making use of Eqs.(27) and (28), i.e.

u(t) = [ R w(t) - Σ_{i=1}^{NG} g_i u(t-i) - F y'(t) - H φ*(t+k-1|t-1) ] / (g_0 + Q)    (29)

To be more specific, if Q = λ(1 - z^-1) for example, then

u(t) = [ R w(t) - Σ_{i=1}^{NG} g_i u(t-i) - F y'(t) - H φ*(t+k-1|t-1) + λ u(t-1) ] / (g_0 + λ)    (30)
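As a sketch of Eq.(29) in the simplest setting (P = R = 1, C = 1 so that H = 0, and a scalar weighting Q = lam; the function name and example values are mine), the GMV move is simply the MV move with a detuned divisor:

```python
def gmv_control_move(g, f, lam, w, y_hist, u_hist):
    """One GMV control move, Eq.(29), with P = R = 1, C = 1, Q = lam.

    g = coefficients of G = EB, f = coefficients of F,
    y_hist[j] = y(t-j), u_hist[j] = u(t-1-j).
    """
    s = w
    for i in range(1, len(g)):           # - sum g_i u(t-i)
        if i - 1 < len(u_hist):
            s -= g[i] * u_hist[i - 1]
    for j, fj in enumerate(f):           # - F y(t)
        if j < len(y_hist):
            s -= fj * y_hist[j]
    return s / (g[0] + lam)

# First-order plant y(t+1) = 0.5 y(t) + u(t), for which G = [1], F = [0.5]:
u_mv  = gmv_control_move([1.0], [0.5], 0.0, 1.0, [0.0], [])  # 1.0, the MV move
u_gmv = gmv_control_move([1.0], [0.5], 1.0, 1.0, [0.0], [])  # 0.5, half the effort
```

With lam = 0 the expression reduces to the MV law of Eq.(16), consistent with the interpretation in Section 3.2.1.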

The closed loop expression for the GMV algorithm is slightly more complicated than the minimum variance case, but is not difficult to determine. First re-write the control law as:

φ*(t+k|t) = P y(t+k) - E ξ(t+k) = R w(t) - Q u(t)

Time-shifting k time steps back,

P y(t) - E ξ(t) = R w(t-k) - Q u(t-k)

u(t-k) = [ R w(t-k) - P y(t) + E ξ(t) ] / Q

Substitute this in place of u(t-k) in the model given by Eq.(2), i.e.

A y(t) = B [ R w(t-k) - P y(t) + E ξ(t) ] / Q + C ξ(t)

Re-arrangement gives

(P B + Q A) y(t) = B R w(t-k) + (E B + Q C) ξ(t)

Hence, the closed loop expression for the GMV controller is

y(t) = [ B R w(t-k) + (E B + Q C) ξ(t) ] / (P B + Q A)    (31)

and we can use this to study the properties of the GMV scheme. Note the absence of a time-delay term in the characteristic polynomial. This shows that the GMV controller provides dead-time compensation.




3.2 Interpretations of GMV Control

Depending on the choice of weightings, the GMV can be interpreted in a number of ways.

3.2.1 Minimum variance control

Setting P = 1, R = 1 and Q = 0 in Eq.(31) yields:

y(t) = w(t-k) + E ξ(t)    (32)

That is, the GMV controller gives a performance identical to that obtained from a minimum variance controller. Thus the minimum variance algorithm is merely a special case of the GMV formulation, as may be deduced from the form of the GMV cost function.

3.2.2 Model following control

If we choose Q = 0, then Eq.(31) reduces to

y(t) = [ R w(t-k) + E ξ(t) ] / P    (33)

Equation (33) shows that the controlled output will follow set-point with response characteristics governed by the ratio R/P. Also, the noise term is filtered by 1/P. Therefore, 1/P can be selected to act as a noise filter, while R is specified such that R/P is the desired model that the closed loop has to match. This leads to a model-following or model-reference control strategy.

3.2.3 Smith-predictive control

Finally, with P = 1, R = 1 and Q ≠ 0, we have from the control law of Eq.(28)

y*(t+k|t) - w(t) + Q u(t) = 0    (34)

or

u(t) = [ w(t) - y*(t+k|t) ] / Q    (35)

This has the following block diagram representation:




[Block diagram: the set-point w(t) enters a 1/Q controller, whose output u(t) drives the PROCESS to give y(t); the k-step-ahead prediction y*(t+k|t), i.e. z^k y(t) less the noise term E ξ(t+k), is fed back to the controller.]

Figure 1. Interpretation of GMV as a Smith-Predictive Control Strategy

Thus, 1/Q can be regarded as a controller operating on a predicted feedback, much like the Smith-predictor strategy. Conveniently, this interpretation also allows 1/Q to be specified to have a P+I controller structure and its parameters tuned accordingly. The corresponding closed loop expression is

y(t) = [ B w(t-k) + (E B + Q C) ξ(t) ] / (B + Q A)    (36)

4. OFFSET PROBLEMS

One of the main problems affecting MV and GMV control performance is that of offsets. There are a number of reasons why offsets occur when these controllers are applied.

4.1 Offsets Due to Unknown Disturbances

When implementing MV or GMV control, offsets can occur if the system is affected by an unknown/unmeasurable disturbance with a non-zero mean. To see the effect that an unmeasurable non-zero mean load disturbance has on the closed loop response, consider the regulatory problem (w = 0) and the case when the noise ξ(t) is composed of a zero-mean sequence ζ(t) and a bias term d. In the case of MV control, this means that, from Eq.(20),

y(t) = E (ζ(t) + d)    (37)

Therefore,

E{y(t)} = E{E (ζ(t) + d)} = E{E ζ(t)} + E{E d} = E(1) d

In other words, at the steady state, the value of the output is equal to E(1)d instead of zero as required. With the GMV controller, making use of the closed loop expression given in Eq.(31), the following results:

E{y(t)} = [ E(1) B(1) + Q(1) C(1) ] d / [ P(1) B(1) + Q(1) A(1) ] ≠ 0    (38)

Analyses will also show that the above results will occur if the controllers were synthesised using the model given by Eq.(2), but the process behaves according to:

A y(t) = z^-k B u(t) + C ξ(t) + d    (39)

where d is regarded as an unmeasurable disturbance, i.e. there is process-model mismatch. Therefore, one method to overcome this particular cause of offsets is to use the following model in controller design:

A y(t) = z^-k B u(t) + C ξ(t) + d̂(t), where d̂(t) = d̂(t-1) + y(t) - y*(t|t-k)    (40)

d̂(t) is an on-line estimate of the unmeasured disturbance d. Following the above derivation procedures, using Eq.(40), the k-step ahead predictors and control signals for the MV and GMV controllers become:

Minimum Variance

y*(t+k|t) = E B u(t) + F y(t) + E d̂(t) + H y*(t+k-1|t-1)

u(t) = (1/g_0) [ w(t) - Σ_{i=1}^{NG} g_i u(t-i) - F y(t) - E d̂(t) - H y*(t+k-1|t-1) ]    (41)

Generalised Minimum Variance

φ*(t+k|t) = G u(t) + F y'(t) + E d̂(t) + H φ*(t+k-1|t-1)

u(t) = [ R w(t) - Σ_{i=1}^{NG} g_i u(t-i) - F y'(t) - E d̂(t) - H φ*(t+k-1|t-1) ] / (g_0 + Q)    (42)
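The effect of the d̂(t) correction of Eq.(40) can be seen in a small simulation. This is a sketch under my own assumptions (first-order plant, C = 1, k = 1, constant bias d; the function name is mine):

```python
def simulate_mv_with_bias(w, d, n, compensate):
    """MV control of y(t+1) = 0.5 y(t) + u(t) + d, with the bias d unknown.

    Predictor: y*(t+1|t) = u(t) + 0.5 y(t) + d_hat(t); the control
    sets y*(t+1|t) = w.  d_hat is updated with the prediction error
    as in Eq.(40).  Returns the output after n steps.
    """
    y, d_hat = 0.0, 0.0
    for _ in range(n):
        c = d_hat if compensate else 0.0
        u = w - 0.5 * y - c           # makes the predictor equal w
        y_pred = u + 0.5 * y + c      # = w by construction
        y = 0.5 * y + u + d           # true plant, includes the bias
        if compensate:
            d_hat += y - y_pred       # Eq.(40): integrate the prediction error
    return y

y_plain = simulate_mv_with_bias(1.0, 0.3, 20, False)  # settles at w + d = 1.3
y_comp  = simulate_mv_with_bias(1.0, 0.3, 20, True)   # settles at w = 1.0
```

Without compensation the loop carries a steady offset equal to the bias, as predicted by Eq.(37); with the on-line estimate the offset is removed.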

4.2 Offsets Due to a Scalar Weighting on Control

Offsets can also occur during GMV control when a scalar control weighting is used, as is highlighted by the GMV's closed loop equation, Eq.(31). For clarity, consider the special case where ξ(t) = 0; Q = λ; R = P = 1. Then, the value of the output at the steady-state is:

y(t) → [ B(1) / (B(1) + λ A(1)) ] w(t-k) ≠ w(t-k)    (35)

Exact set-point tracking will therefore not be achieved unless A(1) = 0, i.e. the process has integrating properties. Fortunately, this problem can be solved simply; instead of penalising excessive control, the cost function is modified to penalise excessive changes in control, leading to a cost function of the form:

J = E{ [R w(t) - P y(t+k)]^2 + [λ (1 - z^-1) u(t)]^2 | t }    (36)

Alternatively, a control weighting having an inverse PI (proportional + integral) compensator structure may be employed. Although not identical, the two methods are similar since in both cases Q(1) = 0, i.e. changes in control signals are penalised. The second technique, however, is more attractive as the resulting self-tuned closed loop could now be considered as a conventional control loop with a predictive feedback, as shown in Fig. 1 above. The advantage here is that the control weighting could be adjusted/tuned as if it were a conventional controller, using the numerous tuning rules that are available. Another method involves adjusting the gain of the set-point weighting R such that zero error tracking will be achieved asymptotically, i.e.

B(1) R(1) / [ P(1) B(1) + Q(1) A(1) ] = 1    (37)
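The offset predicted by Eq.(35) is easy to evaluate numerically. A sketch (the function name and example coefficients are mine):

```python
def gmv_scalar_q_offset(a, b, lam):
    """Steady-state output for a unit set-point under Eq.(35):
    y_ss = B(1) / (B(1) + lam * A(1)), where A(1) and B(1) are the
    polynomials evaluated at z = 1, i.e. the sums of their coefficients.
    """
    A1, B1 = sum(a), sum(b)
    return B1 / (B1 + lam * A1)

# A = 1 - 0.5 z^-1, B = 1, so A(1) = 0.5 and B(1) = 1:
y_mv  = gmv_scalar_q_offset([1.0, -0.5], [1.0], 0.0)  # 1.0 -> no offset
y_gmv = gmv_scalar_q_offset([1.0, -0.5], [1.0], 1.0)  # 2/3  -> a 33% offset
```

Note that if A(1) = 0 (an integrating process) the returned value is 1 for any lam, in agreement with the remark above.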


controllers in velocity or incremental form. There are two ways to achieve this.

4.3.1 k-incremental control

For example, a k-incremental predictor can be used instead of the normal positional k-step-ahead predictors derived above. k-incremental predictors are obtained by multiplying the respective predictor expressions by the operator:

Δ_k = 1 - z^-k    (38)

Consider the more general GMV formulation. The k-step ahead predictor becomes

Δ_k φ*(t+k|t) = G Δ_k u(t) + F Δ_k y'(t) + H Δ_k φ*(t+k-1|t-1)    (39)

which on rearrangement gives

φ*(t+k|t) = φ*(t|t-k) + G Δ_k u(t) + F Δ_k y'(t) + H Δ_k φ*(t+k-1|t-1)    (40)

Replacing the current prediction φ*(t|t-k) by its measured value φ(t) leads to:

φ*(t+k|t) = φ(t) + F Δ_k y'(t) + G Δ_k u(t) + H Δ_k φ*(t+k-1|t-1)    (41)

Note firstly that the k-step-ahead prediction is a function of differenced data. Thus, any static non-zero terms, and hence their effects, would be removed. Also, when implemented as a self-tuning or adaptive algorithm where the coefficients of F, G and H are estimated on-line, the estimation problem will be better conditioned since the data will have asymptotic zero means. Substitution of φ*(t+k|t) into the GMV control law, Eq.(28), results in the control signal:

u(t) = [ R w(t) - φ(t) - F Δ_k y'(t) - H Δ_k φ*(t+k-1|t-1) ] / [ Q + Δ_k G ]    (42)

Since the differencing operator defined by Eq.(38) can be factored as:

Δ_k = 1 - z^-k = (1 - z^-1)(1 + z^-1 + z^-2 + ... + z^-(k-1))

and if Q also contains the factor 1 - z^-1, then Eq.(42) provides an integrating control action. Use of the k-incremental predictor with the GMV control law therefore yields an algorithm that has offset rejection properties. From the resulting closed loop expression, i.e.:


y(t) = [ B R w(t-k) + (E B Δ_k + Q C) ξ(t) ] / (P B + Q A)    (43)

we can see that even if E{ξ(t)} = d, E{y(t)} = 0.

4.3.2 Design using the CARIMA model

Another way to develop a controller that has offset rejection properties is to base its design on a CARIMA (Controlled Auto-Regressive Integrated Moving Average) model instead of the CARMA model of Eq.(2):

A y(t) = z^-k B u(t) + C ξ(t)/Δ    (46)

where Δ = 1 - z^-1. The interpretation of ξ(t)/Δ is that it represents a disturbance sequence that manifests as a sequence of steps of random magnitudes occurring at random time intervals. This is, in a sense, more representative of real process disturbances. A GMV type controller based on the model of Eq.(46) and the cost function Eq.(21) can be developed using the separation identity:

P C = E A Δ + z^-k F / Pd    (47)

The resulting predictor is given by

φ*(t+k|t) = G Δ u(t) + F y'(t) + H φ*(t+k-1|t-1)    (48)

where y'(t) = y(t)/Pd; φ*(t+k|t) = φ(t+k) - E ξ(t+k); G = EB; and H = (1 - C) z. Substitution into Eq.(21) and minimising w.r.t. u(t) gives a control law that is identical to Eq.(28), i.e.

φ*(t+k|t) - R w(t) + Q u(t) = 0 ;  where Q = q'_0 Q' / g_0

but because the predictor equations are different, the control signal is calculated as:

u(t) = [ R w(t) - F y'(t) - H φ*(t+k-1|t-1) ] / [ Q + G Δ ]    (49)

Thus, if Q also contains the factor 1 - z^-1, Eq.(49) describes an integrating control action, and the corresponding closed loop expression is


y(t) = [ B R w(t-k) + (E B Δ + Q C) ξ(t) ] / (P B + Q A)    (50)

Considering the regulation problem, even if E{ξ(t)} = d, at the steady-state,

E{y(t)} = [ E(1) B(1) Δ(1) + Q(1) C(1) ] d / [ P(1) B(1) + Q(1) A(1) ] = 0    (51)

which is the desired disturbance rejection result.

4.3.3 Summary

Notice that the closed loop expression obtained using the CARIMA model for controller design, Eq.(50), is very similar to that obtained using k-incremental predictors, Eq.(43). Use of the CARIMA model for controller design, however, leads to a more elegant and generic approach to obtaining controllers with offset rejection properties. In fact, the CARIMA representation is now the standard form used in model based controller design. However, from Eqs.(43) and (50), we can also deduce that the control weighting Q must still be selected to effect offset-free responses.

5. IMPLEMENTATION AS SELF-TUNING CONTROLLERS

5.1 Self-tuning Strategies

Any linear model based controller can be made self-tuning. Given a suitable on-line parameter estimation technique, there are two approaches to self-tuning controller implementation. The intuitive approach is to estimate the parameters of the process model and use these to calculate controller parameters. Since the parameters of the model are estimated explicitly, this strategy is known as an explicit self-tuning scheme. It is also called an indirect approach to self-tuning control because the controller parameters are calculated based on estimated process parameters. Alternatively, the problem can also be posed such that controller parameters are directly estimated. This is also called the implicit approach, as the process parameters are implicitly embedded in the controller parameters. This approach is computationally more efficient since controller design calculations are bypassed. With the explicit scheme, we can code the algorithm such that there is a bank of controller types to choose from, depending on the nature of the estimated process. Adopting the implicit approach means that the controller


type is fixed. Thus, flexibility is lost. Schematics showing the difference between these two self-tuning strategies are shown in the figure below:
[Two schematics: in both, a CONTROLLER drives the PROCESS (manipulated input U, controlled output Y). In the explicit/indirect scheme, an estimator identifies the process model's parameters (PP), from which the controller parameters (PC) are calculated. In the implicit/direct scheme, the controller parameters (PC) are estimated directly.]

Figure 2. Self-tuning Control Schemes

Notice that in both cases, the estimated parameters are used as is. In other words, the certainty equivalence principle is being applied, where estimated parameters are accepted as the true parameters with no further checks on their viability. In the following section, the MV algorithm will be used as an example to show how it can be implemented as an implicit self-tuning controller.

5.2 Self-tuning Minimum Variance Control

To calculate the minimum variance control signal u(t) from Eq.(16), we need the coefficients of the F, H, and G polynomials. For an unknown system, these would have to be estimated


from process input-output data, and the regression expression for parameter estimation is given by Eq.(18):

y(t) = G u(t-k) + F y(t-k) + H y*(t-1|t-k-1) + ε(t),  ε(t) = E ξ(t)

Note that the term ε(t) is uncorrelated with the data in the regression since it is a future value with respect to the time indices of y and u. Thus RLS (recursive least squares) can be employed without modification. The parameter vector which is to be estimated will be

θ^T = [ g_0, g_1, g_2, ..., g_NG, f_0, f_1, f_2, ..., f_NF, h_0, h_1, h_2, ..., h_NH ]

while the corresponding data vector is

x^T(t-k) = [ u(t-k), u(t-k-1), ..., u(t-k-NG), y(t-k), y(t-k-1), ..., y(t-k-NF), y*(t-1|t-k-1), y*(t-2|t-k-2), ..., y*(t-NH|t-k-NH) ]

so that Eq.(18) can be written as:

y(t) = x^T(t-k) θ + ε(t)

Thus, at time t, the equation error will be

e(t) = y(t) - x^T(t-k) θ̂(t-1)

and the parameters updated according to

θ̂(t) = θ̂(t-1) + K(t) e(t)

where K(t) is the gain of the RLS estimator, calculated in the usual way. Once θ̂(t) becomes available, the control signal is calculated as

u(t) = (1/g_0) [ w(t) - Σ_{i=1}^{NG} g_i u(t-i) - F y(t) - H y*(t+k-1|t-1) ]

If estimation and control is carried out every sampling instant, then we have a self-tuning minimum variance control strategy. Note that the minimum variance controller can also be implemented as an explicit self-tuning controller. That is, the parameters of the process model are first identified, then the separation identity is solved for the polynomials E and F. G is
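The 'usual way' of computing the RLS gain K(t) can be sketched as the standard recursive least squares update with a forgetting factor (variable names are mine; this is a textbook form, not code from the notes):

```python
import numpy as np

def rls_update(theta, P, x, y, lam=1.0):
    """One recursive least squares step.

    theta : current parameter estimate, shape (n, 1)
    P     : covariance matrix, shape (n, n)
    x     : data vector x(t-k), shape (n, 1)
    y     : new measurement y(t)
    lam   : forgetting factor (1.0 = ordinary RLS)

    e = y - x' theta            (equation error)
    K = P x / (lam + x' P x)    (estimator gain)
    theta <- theta + K e ;  P <- (P - K x' P) / lam
    """
    e = float(y - x.T @ theta)
    K = P @ x / (lam + float(x.T @ P @ x))
    return theta + K * e, (P - K @ x.T @ P) / lam, e

# Identify y(t) = 2 u(t-1) from noise-free data:
theta = np.zeros((1, 1))
P = 1e6 * np.eye(1)            # large P0: little confidence in theta0
for u in [1.0, -0.5, 2.0]:
    theta, P, _ = rls_update(theta, P, np.array([[u]]), 2.0 * u)
# theta[0, 0] is now very close to the true value 2
```

In the self-tuning loop, theta is repartitioned into the G, F and H coefficients after each update and the control signal computed from Eq.(16).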


then obtained from EB, and H from (1 - C)z. The control signal is calculated from Eq.(16) as before. Implementation of the other algorithms described in this set of notes as implicit or explicit self-tuning controllers proceeds in a similar manner.

6. CONCLUDING REMARKS

Minimum variance control attempts to drive the controlled output to follow set-point in the minimum time possible. This is of course limited by any delays inherent within the system, thereby requiring the use of delay-step-ahead predictive feedback. To achieve minimum time responses, the controller also has to expend the necessary energy in one time step. In a real environment, this ideal breaks down due to problems such as sensitivity to NMP (non-minimum phase) systems, excessive wear and tear of final control elements and so on. The improvement provided by the GMV controller is, however, only superficial since the minimum variance objective still forms the basis of the algorithm. The weightings P, Q' and R are essentially de-gaining factors, leaving the problem of unknown/time-varying delays unsolved.

The latest development in this category of control algorithms is extended horizon control, more commonly known as model based predictive control (MBPC). With extended horizon control, the criterion of achieving minimum variance responses is relaxed in a different manner. Although the effect is similar, controller sensitivity is reduced by allowing it more time to achieve zero output error. Instead of using de-gaining filters, time 'horizons' are employed to tailor the response characteristics of both process input and output. For example, if the algorithm is capable of providing predictions more than time-delay steps into the future, i.e. multi-step predictions, then the controller will be able to 'look' beyond time-delays and periods of inverse response due to NMP dynamics.
By the same token, if it can be stipulated that the algorithm may take several time steps to achieve its objective, then the sensitivities associated with high gain control can be avoided. The above concepts of 'prediction horizon' and 'control horizon' respectively can be realised by minimisation of a performance index of the form:

J = Σ_{m=N1}^{Ny} [ y(t+m) - w(t) ]^2 + λ Σ_{l=1}^{Nu} [ Δu(t+l-1) ]^2    (52)




where Ny and N1 are the maximum and minimum prediction horizons respectively, Nu is the control horizon, while λ is a weighting on control. Note that the criterion costs changes in control. Not only does this ensure offset rejection, as discussed above, it also conveniently allows the assumption that control increments are zero beyond the control horizon. In general, the prediction horizon is related to the plant transient response; the control horizon acts as a coarse tuning parameter which affects the sensitivity of the controller, whilst the control weighting provides the fine tuning variable. Although these 'tuning knobs' have still to be adjusted by the operator, the system is more tolerant to the choice of horizons than the GMV is to the settings for P and Q. The details of extended horizon control will be covered in another set of notes.


