6.3 Least Squares Methods 235
If the matrix \(Z^T Z\) is invertible, then
\[
\hat{\theta} = (Z^T Z)^{-1} Z^T Y \tag{6.56}
\]
The matrix \(P = (Z^T Z)^{-1}\) is called the covariance matrix if the stochastic part has unit variance.
In general, the estimate \(\hat{\theta}\) is unbiased (its expectation is equal to \(\theta\)) if \(\xi(k)\) is white noise.
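As a minimal sketch, equation (6.56) can be evaluated directly with NumPy. The regressors, the true parameter vector, and the noise level below are hypothetical values chosen only for illustration:

```python
import numpy as np

# Hypothetical data: y(k) = theta_1 + theta_2 * u(k) + xi(k)
rng = np.random.default_rng(0)
u = rng.uniform(-1.0, 1.0, 100)
Z = np.column_stack([np.ones_like(u), u])          # regressor matrix Z
theta_true = np.array([2.0, 0.5])                  # assumed true parameters
Y = Z @ theta_true + 0.01 * rng.standard_normal(100)

# theta_hat = (Z^T Z)^{-1} Z^T Y, equation (6.56)
P = np.linalg.inv(Z.T @ Z)                         # covariance matrix P
theta_hat = P @ Z.T @ Y
```

In numerical practice, `np.linalg.lstsq(Z, Y)` is preferable to forming the inverse explicitly, but the direct form above mirrors equation (6.56).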
6.3.1 Recursive Least Squares Method
In recursive least squares (RLS), the estimated parameters are updated with each new data point. This means that the estimate \(\hat{\theta}(k)\) can be obtained by some simple manipulations from the estimate \(\hat{\theta}(k-1)\) based on the data available up to time k − 1.
Characteristic features of recursive methods are as follows:
• Their requirements for computer memory are very modest, as not all measured data up to the current time need to be stored.
• They form an important part of adaptive systems where the actual controllers are based on current process estimates.
• They are easily modified for real-time data treatment and for time-variant parameters.
To better understand the derivation of a recursive identification method, let us consider the following example.
Example 6.4: Recursive mean value calculation
Consider a model of the form
\[
y(k) = a + \xi(k)
\]
where \(\xi(k)\) is a disturbance with a standard deviation of one. It is easy to show that the best estimate of a based on information up to time k, in the least squares sense, is given as the mean of all measurements
\[
\hat{a}(k) = \frac{1}{k} \sum_{i=1}^{k} y(i)
\]
This equation can be rewritten as
\[
\begin{aligned}
\hat{a}(k) &= \frac{1}{k} \left[ \sum_{i=1}^{k-1} y(i) + y(k) \right] \\
&= \frac{1}{k} \left[ (k-1)\,\hat{a}(k-1) + y(k) \right] \\
&= \hat{a}(k-1) + \frac{1}{k} \left[ y(k) - \hat{a}(k-1) \right]
\end{aligned}
\]
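The recursive update derived above can be sketched in Python and checked against the batch mean; the value of a and the number of samples below are hypothetical:

```python
import numpy as np

# Simulated measurements y(k) = a + xi(k), with an assumed a = 3.0
rng = np.random.default_rng(1)
a_true = 3.0
y = a_true + rng.standard_normal(500)

a_hat = 0.0
for k, yk in enumerate(y, start=1):
    # a_hat(k) = a_hat(k-1) + (1/k) * [y(k) - a_hat(k-1)]
    a_hat += (yk - a_hat) / k
```

Each step needs only the previous estimate and the sample count, not the full data record, which illustrates the modest memory requirement of recursive methods noted above.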
The result says that the estimate of the parameter a at time k is equal to the estimate at time k − 1 plus a correction term. The correction term depends linearly on the error between \(\hat{a}(k-1)\) and its prediction at