Adaptive Wiener filters are probably the most fundamental type of adaptive filter. In Figure 4.11, the optimal weights of the filter, w(n), are calculated such that
$$\mathbf{w}_{\mathrm{opt}} = \arg\min_{\mathbf{w}} E\left[e^{2}(n)\right], \qquad e(n) = d(n) - \mathbf{w}^{T}\mathbf{x}(n) \qquad (4.91)$$

where w is the Wiener filter coefficient vector, d(n) is the desired signal, and x(n) is the filter input (tap) vector. Using the orthogonality principle [39], the final form of the mean squared error is:
$$E\left[e^{2}(n)\right] = E\left[d^{2}(n)\right] - 2\,\mathbf{w}^{T}\mathbf{p} + \mathbf{w}^{T}\mathbf{R}\,\mathbf{w} \qquad (4.92)$$
where $E(\cdot)$ represents statistical expectation:
$$\mathbf{R} = E\left[\mathbf{x}(n)\,\mathbf{x}^{T}(n)\right] \qquad (4.93)$$
and
$$\mathbf{p} = E\left[d(n)\,\mathbf{x}(n)\right] \qquad (4.94)$$
By taking the gradient with respect to w and equating it to zero we have:
$$\nabla_{\mathbf{w}}\,E\left[e^{2}(n)\right] = 2\,\mathbf{R}\,\mathbf{w} - 2\,\mathbf{p} = \mathbf{0} \quad\Longrightarrow\quad \mathbf{w}_{\mathrm{opt}} = \mathbf{R}^{-1}\mathbf{p} \qquad (4.95)$$
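Eq. (4.95) can be checked directly in code. Below is a minimal sketch, assuming NumPy; the signals x and d and the taps h_true are hypothetical, invented purely for illustration. It replaces the expectations in Eqs. (4.93) and (4.94) with time averages and solves the Wiener-Hopf equation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: input x(n) passed through an unknown FIR system,
# observed as the desired signal d(n) plus a small amount of noise.
M, N = 4, 10_000
x = rng.standard_normal(N)
h_true = np.array([0.8, -0.3, 0.2, 0.1])       # unknown system to identify
d = np.convolve(x, h_true)[:N] + 0.01 * rng.standard_normal(N)

# Tap-input vectors x(n) = [x(n), x(n-1), ..., x(n-M+1)]^T as matrix rows
X = np.stack([np.roll(x, k) for k in range(M)], axis=1)[M:]
dn = d[M:]

# Time averages in place of expectations (Eqs. 4.93 and 4.94)
R = X.T @ X / len(X)       # autocorrelation matrix R = E[x(n) x^T(n)]
p = X.T @ dn / len(X)      # cross-correlation vector p = E[d(n) x(n)]

w_opt = np.linalg.solve(R, p)   # Wiener solution w = R^{-1} p (Eq. 4.95)
print(w_opt)                    # close to h_true
```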
As R and p are usually unknown, the above minimization is performed iteratively by substituting time averages for the statistical averages. The adaptive filter, in this case, decorrelates the output signals. The general update equation has the form:
$$\mathbf{w}(n+1) = \mathbf{w}(n) + \Delta\mathbf{w}(n) \qquad (4.96)$$
where n is the iteration number, which typically corresponds to the discrete-time index. Δw(n) has to be computed such that E[e²(n)] reaches a reasonable minimum. The simplest and most common way of calculating Δw(n) is the gradient-descent or steepest-descent algorithm [39]. In both cases a criterion, often called a performance index, is defined as a function of the squared error, such as η(e²(n)), which monotonically decreases after each iteration and converges to a global minimum. This requires:
$$\eta\left(e^{2}(n+1)\right) < \eta\left(e^{2}(n)\right) \qquad (4.97)$$
Assuming Δw is very small, a first-order expansion of the performance index gives:

$$\eta\left(\mathbf{w}(n+1)\right) \approx \eta\left(\mathbf{w}(n)\right) + \Delta\mathbf{w}^{T}(n)\,\nabla_{\mathbf{w}}\,\eta\left(\mathbf{w}(n)\right) \qquad (4.98)$$

where $\nabla_{\mathbf{w}}(\cdot)$ represents the gradient with respect to w. Equation (4.98) then satisfies the descent condition (4.97) by setting Δw = −μ∇<sub>w</sub>η(w), where μ is the learning rate or convergence parameter. Hence, the general update equation takes the form:
$$\mathbf{w}(n+1) = \mathbf{w}(n) - \mu\,\nabla_{\mathbf{w}}\,\eta\left(\mathbf{w}(n)\right) \qquad (4.99)$$
Using the least mean square (LMS) approach, $\nabla_{\mathbf{w}}\,\eta(\mathbf{w})$ is replaced by an instantaneous gradient of the squared-error signal, i.e.:
$$\nabla_{\mathbf{w}}\,\eta\left(\mathbf{w}(n)\right) \approx \nabla_{\mathbf{w}}\,e^{2}(n) = 2\,e(n)\,\nabla_{\mathbf{w}}\,e(n) = -2\,e(n)\,\mathbf{x}(n) \qquad (4.100)$$
Therefore, the LMS-based update equation is:
$$\mathbf{w}(n+1) = \mathbf{w}(n) + 2\,\mu\,e(n)\,\mathbf{x}(n) \qquad (4.101)$$
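In code, the LMS recursion of Eq. (4.101) costs only a few operations per sample. The following is a minimal sketch, assuming NumPy; the function name lms and the system-identification framing are illustrative only, and a workable μ must respect the stability bound discussed next:

```python
import numpy as np

def lms(x, d, M=4, mu=0.01):
    """Minimal LMS sketch: adapt an M-tap FIR filter so w^T x(n) tracks d(n).

    mu must satisfy 0 < mu < 1/lambda_max (Eq. 4.102), where lambda_max is
    the largest eigenvalue of the autocorrelation matrix of x.
    """
    w = np.zeros(M)
    e = np.zeros(len(x))
    for n in range(M - 1, len(x)):
        xn = x[n - M + 1:n + 1][::-1]   # tap vector [x(n), ..., x(n-M+1)]
        e[n] = d[n] - w @ xn            # instantaneous error e(n)
        w = w + 2 * mu * e[n] * xn      # LMS update (Eq. 4.101)
    return w, e
```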
Also, the convergence parameter, μ, must be positive and should satisfy the following:
$$0 < \mu < \frac{1}{\lambda_{\max}} \qquad (4.102)$$
where $\lambda_{\max}$ represents the maximum eigenvalue of the autocorrelation matrix R. The LMS algorithm is the simplest and most computationally efficient algorithm. However, its speed of convergence can be slow, especially for correlated signals. The recursive least-squares (RLS) algorithm attempts to provide a high-speed, stable filter, but it is numerically unstable for real-time applications [40, 41]. The performance index is defined as:

$$\xi(n) = \sum_{k=0}^{n} \gamma^{\,n-k}\,e^{2}(k) \qquad (4.103)$$

where 0 < γ ≤ 1 is the forgetting factor [40, 41]. Taking the derivative with respect to w and equating it to zero gives:

$$\frac{\partial\,\xi(n)}{\partial\,\mathbf{w}} = -2\sum_{k=0}^{n} \gamma^{\,n-k}\,e(k)\,\mathbf{x}(k) = \mathbf{0} \qquad (4.104)$$

Replacing $e(k) = d(k) - \mathbf{w}^{T}(n)\,\mathbf{x}(k)$ in Eq. (4.104) and writing it in vector form gives:
$$\mathbf{R}(n)\,\mathbf{w}(n) = \mathbf{p}(n) \qquad (4.105)$$
where
$$\mathbf{R}(n) = \sum_{k=0}^{n} \gamma^{\,n-k}\,\mathbf{x}(k)\,\mathbf{x}^{T}(k) \qquad (4.106)$$
and
$$\mathbf{p}(n) = \sum_{k=0}^{n} \gamma^{\,n-k}\,d(k)\,\mathbf{x}(k) \qquad (4.107)$$
From this equation:
$$\mathbf{w}(n) = \mathbf{R}^{-1}(n)\,\mathbf{p}(n) \qquad (4.108)$$
The RLS algorithm performs the above operation recursively, with R and p estimated at the current time n as:
$$\mathbf{R}(n) = \gamma\,\mathbf{R}(n-1) + \mathbf{x}(n)\,\mathbf{x}^{T}(n) \qquad (4.109)$$
$$\mathbf{p}(n) = \gamma\,\mathbf{p}(n-1) + d(n)\,\mathbf{x}(n) \qquad (4.110)$$
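The recursions (4.109) and (4.110) reproduce the exponentially weighted sums of Eqs. (4.106) and (4.107) exactly, which the short check below confirms (NumPy assumed; the data are random and purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
gamma, M, N = 0.98, 3, 50
xs = [rng.standard_normal(M) for _ in range(N)]   # tap vectors x(k)
ds = rng.standard_normal(N)                       # desired samples d(k)

# Recursive estimates (Eqs. 4.109 and 4.110)
R = np.zeros((M, M))
p = np.zeros(M)
for xk, dk in zip(xs, ds):
    R = gamma * R + np.outer(xk, xk)
    p = gamma * p + dk * xk

# Batch exponentially weighted sums (Eqs. 4.106 and 4.107)
R_batch = sum(gamma ** (N - 1 - k) * np.outer(xs[k], xs[k]) for k in range(N))
p_batch = sum(gamma ** (N - 1 - k) * ds[k] * xs[k] for k in range(N))

assert np.allclose(R, R_batch) and np.allclose(p, p_batch)
```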
In this case the tap-input vector is:

$$\mathbf{x}(n) = \left[x(n),\; x(n-1),\; \ldots,\; x(n-M+1)\right]^{T} \qquad (4.111)$$
where M represents the finite impulse response (FIR) filter order. The inverse of the correlation matrix required in Eq. (4.108) then becomes:
$$\mathbf{R}^{-1}(n) = \left[\gamma\,\mathbf{R}(n-1) + \mathbf{x}(n)\,\mathbf{x}^{T}(n)\right]^{-1} \qquad (4.112)$$
which can be simplified using the matrix inversion lemma [42]:
$$\mathbf{R}^{-1}(n) = \gamma^{-1}\left[\mathbf{R}^{-1}(n-1) - \frac{\mathbf{R}^{-1}(n-1)\,\mathbf{x}(n)\,\mathbf{x}^{T}(n)\,\mathbf{R}^{-1}(n-1)}{\gamma + \mathbf{x}^{T}(n)\,\mathbf{R}^{-1}(n-1)\,\mathbf{x}(n)}\right] \qquad (4.113)$$
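Equation (4.113) is easy to verify numerically. The snippet below, assuming NumPy and a randomly generated, well-conditioned R(n−1), compares the direct inverse of Eq. (4.112) against the lemma form:

```python
import numpy as np

rng = np.random.default_rng(2)
gamma, M = 0.98, 4
x = rng.standard_normal(M)
A = rng.standard_normal((M, M))
R_prev = A @ A.T + M * np.eye(M)      # a well-conditioned R(n-1)
R_prev_inv = np.linalg.inv(R_prev)

# Direct inverse of Eq. (4.112)
direct = np.linalg.inv(gamma * R_prev + np.outer(x, x))

# Matrix inversion lemma form of Eq. (4.113)
num = R_prev_inv @ np.outer(x, x) @ R_prev_inv
den = gamma + x @ R_prev_inv @ x
lemma = (R_prev_inv - num / den) / gamma

assert np.allclose(direct, lemma)
```

The practical gain is that Eq. (4.113) needs only matrix-vector products, avoiding an O(M³) inversion at every sample.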
and finally, the update equation can be written as:
$$\mathbf{w}(n) = \mathbf{w}(n-1) + \mathbf{k}(n)\,e(n) \qquad (4.114)$$
where the gain vector is:

$$\mathbf{k}(n) = \frac{\mathbf{R}^{-1}(n-1)\,\mathbf{x}(n)}{\gamma + \mathbf{x}^{T}(n)\,\mathbf{R}^{-1}(n-1)\,\mathbf{x}(n)} \qquad (4.115)$$
and the error e(n) after each iteration is recalculated as:
$$e(n) = d(n) - \mathbf{w}^{T}(n-1)\,\mathbf{x}(n) \qquad (4.116)$$
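Putting Eqs. (4.113)-(4.116) together gives the standard RLS loop. The following is a minimal sketch, assuming NumPy; here P(n) stands for R⁻¹(n), and the common initialization P(0) = δI with a large δ is an assumption made because R⁻¹(0) is not available:

```python
import numpy as np

def rls(x, d, M=4, gamma=0.99, delta=100.0):
    """Minimal RLS sketch: adapt an M-tap FIR filter to track d(n).

    P tracks R^{-1}(n); P(0) = delta * I is the usual stand-in for the
    unavailable initial inverse correlation matrix.
    """
    w = np.zeros(M)
    P = delta * np.eye(M)
    e = np.zeros(len(x))
    for n in range(M - 1, len(x)):
        xn = x[n - M + 1:n + 1][::-1]           # tap vector [x(n), ..., x(n-M+1)]
        e[n] = d[n] - w @ xn                    # a priori error (Eq. 4.116)
        k = (P @ xn) / (gamma + xn @ P @ xn)    # gain vector (Eq. 4.115)
        w = w + k * e[n]                        # weight update (Eq. 4.114)
        P = (P - np.outer(k, xn @ P)) / gamma   # R^{-1} recursion (Eq. 4.113)
    return w, e
```

Because P(n) is propagated directly, no matrix is ever inverted inside the loop; this is the source of the speed advantage over solving Eq. (4.108) at each step, and also of the numerical sensitivity noted above.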