Title | EEG Signal Processing and Machine Learning |
---|---|
Author | Saeid Sanei |
ISBN | 9781119386933 |
The prediction models can be easily extended to multichannel data. This leads to estimation of matrices of prediction coefficients. These parameters stem from both the temporal and interchannel correlations.
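As an illustrative sketch (not from this text), the coefficient matrices of a multichannel, i.e. vector AR, model can be estimated by stacking the past samples of all channels and solving a least-squares problem; all function and variable names here are our own:

```python
import numpy as np

# Sketch: least-squares estimation of multichannel (vector) AR coefficient
# matrices A_1..A_p, which capture both the temporal and the inter-channel
# correlations. Illustrative only; names are not from the text.
def fit_mvar(X, p):
    """X: (channels, samples) array; returns the p coefficient matrices."""
    C, N = X.shape
    # Regressor: the p past samples of every channel, stacked per time step.
    Z = np.vstack([X[:, p - k:N - k] for k in range(1, p + 1)])  # (C*p, N-p)
    Y = X[:, p:]                                                 # (C,   N-p)
    A = Y @ np.linalg.pinv(Z)                                    # (C,   C*p)
    return [A[:, i * C:(i + 1) * C] for i in range(p)]
```

Model-order selection (e.g. by an information criterion) and stability checks are omitted for brevity.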
3.4.1.2 Prony's Method
Prony's method has previously been used to model EPs [30, 31]. In this model an EP, obtained by applying a short auditory or visual stimulus to the brain, is considered as the impulse response (IR) of a linear infinite impulse response (IIR) system. The original attempt in this area was to fit an exponentially damped sinusoidal model to the data [32]. The method was later modified to model sinusoidal signals [33]. Prony's method is used to calculate the LP parameters. The angles of the poles of the constructed LP filter in the z-plane then give the frequencies of the damped sinusoids of the exponential terms used for modelling the data. Consequently, both the amplitudes of the exponentials and the initial phases can be obtained following the methods used for an AR model, as follows.
Based on the original method we can consider the output of an AR system with zero excitation to be related to its IR as:

(3.44) \quad y(n) = \sum_{k=1}^{p} w_k \, e^{(r_k + j 2\pi f_k)(n-1)T_s}, \qquad n = 1, \ldots, N

where y(n) represents the exponential data samples, p is the prediction order, w_k are the complex amplitudes, r_k the damping factors, f_k the resonance frequencies, and T_s the sampling interval.
Therefore, the model coefficients are first calculated using one of the methods previously mentioned in this section, i.e.
(3.45) \quad A(z) = \sum_{k=0}^{p} a_k z^{-k} = \prod_{k=1}^{p} \left( 1 - z_k z^{-1} \right)

where z_k = e^{(r_k + j 2\pi f_k) T_s} are the roots (poles) of the prediction filter,
and a0 = 1. On the basis of (3.39), y(n) is calculated as the weighted sum of its p past values. The polynomial A(z) is then constructed and rooted, and the parameters f_k and r_k are estimated from its roots z_k. Hence, the damping factors are obtained as
(3.46) \quad r_k = \frac{\ln \left| z_k \right|}{T_s}
and the resonance frequencies as
(3.47) \quad f_k = \frac{1}{2\pi T_s} \tan^{-1}\!\left[ \frac{\mathrm{Im}(z_k)}{\mathrm{Re}(z_k)} \right]
where Re(·) and Im(·) denote respectively the real and imaginary parts of a complex quantity. The w_k parameters are calculated using the fact that

(3.48) \quad y(n) = \sum_{k=1}^{p} w_k \, z_k^{\,n-1}, \qquad n = 1, \ldots, p
In vector form this can be illustrated as Rw = y, where

(3.49) \quad R = \begin{bmatrix} 1 & 1 & \cdots & 1 \\ z_1 & z_2 & \cdots & z_p \\ \vdots & \vdots & & \vdots \\ z_1^{p-1} & z_2^{p-1} & \cdots & z_p^{p-1} \end{bmatrix}

and

(3.50) \quad w = \left[ w_1, w_2, \ldots, w_p \right]^{\mathrm{T}}, \qquad y = \left[ y(1), y(2), \ldots, y(p) \right]^{\mathrm{T}}
In the above solution we considered the number of data samples to be N = 2p, where p is the prediction order. For the cases where N > 2p, a least-squares (LS) solution for w can be obtained as:
(3.51) \quad w = \left( R^{\mathrm{H}} R \right)^{-1} R^{\mathrm{H}} y
where (·)^H denotes the conjugate (Hermitian) transpose. This equation can also be solved using the Cholesky decomposition method. For real data such as EEG signals this equation changes to w = (R^T R)^{-1} R^T y, where (·)^T represents the transpose operation. A similar result can be achieved using principal component analysis (PCA) [25].
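Assuming uniformly sampled data, the whole procedure — the LP coefficients, the pole extraction, the damping factors and resonance frequencies from the pole magnitudes and angles, and the LS amplitude solve — can be sketched as follows (function and variable names are ours, not the book's):

```python
import numpy as np

def prony(y, p, Ts=1.0):
    """Fit p damped complex exponentials y(n) ~ sum_k w_k z_k**n.
    Returns damping factors r_k, frequencies f_k, and amplitudes w_k."""
    y = np.asarray(y, dtype=float)
    N = len(y)
    # 1) Linear-prediction step: y(n) = -sum_k a_k y(n-k), solved by LS.
    M = np.column_stack([y[p - k:N - k] for k in range(1, p + 1)])
    a = np.linalg.lstsq(M, -y[p:], rcond=None)[0]
    # 2) Poles z_k: roots of the characteristic polynomial with a_0 = 1.
    z = np.roots(np.concatenate(([1.0], a)))
    # 3) Damping factors and resonance frequencies from the poles.
    r = np.log(np.abs(z)) / Ts
    f = np.arctan2(z.imag, z.real) / (2 * np.pi * Ts)
    # 4) Amplitudes: LS solution of the Vandermonde system R w = y.
    R = np.vander(z, N, increasing=True).T       # R[n, k] = z_k**n
    w = np.linalg.lstsq(R, y.astype(complex), rcond=None)[0]
    return r, f, w
```

For example, the damped cosine 0.9**n * cos(2*pi*0.1*n) is recovered with p = 2 as a conjugate pole pair with damping ln(0.9) and frequencies ±0.1.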
In cases for which the data are contaminated with white noise, the performance of Prony's method is reasonable. However, for non‐white noise, the noise information is not easily separable from the data and therefore the method may not be sufficiently successful.
As we will see in a later chapter of this book, Prony's algorithm has been used in the modelling and analysis of auditory and visual EPs (AEPs and VEPs) [31, 35].
3.4.2 Nonlinear Modelling
An approach similar to AR or MVAR modelling, in which the output samples are nonlinearly related to the previous samples, may be followed based on the methods developed for forecasting financial growth in economic studies.
In the generalized autoregressive conditional heteroskedasticity (GARCH) method [36], each sample relates to its previous samples through a nonlinear function (or a sum of nonlinear functions). This model was originally introduced to describe time-varying volatility, work honoured with the Nobel Memorial Prize in Economic Sciences in 2003.
Nonlinearities in a time series can be detected with the aid of the McLeod–Li [37] and Brock–Dechert–Scheinkman (BDS) [38] tests. However, both tests lack the ability to reveal the actual kind of nonlinear dependency.
Generally, it is not possible to discern whether the nonlinearity is deterministic or stochastic in nature, nor can we distinguish between multiplicative and additive dependencies. The type of stochastic nonlinearity may be determined using the Hsieh test [39], which can discriminate between additive and multiplicative dependencies. However, the test itself is not used to obtain the model parameters.
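As an illustration of the first of these diagnostics, the McLeod–Li test applies a Ljung–Box-type statistic to the autocorrelations of the squared series; under the null hypothesis of no nonlinear dependence, Q is approximately chi-square distributed. A minimal sketch, assuming SciPy is available for the tail probability:

```python
import numpy as np
from scipy import stats  # assumed available for the chi-square tail

def mcleod_li(x, lags=10):
    """McLeod-Li test: Ljung-Box Q on the autocorrelations of x**2."""
    s = np.asarray(x, dtype=float) ** 2
    s -= s.mean()                                # centred squared series
    N = len(s)
    denom = np.sum(s ** 2)
    Q = 0.0
    for k in range(1, lags + 1):
        rho_k = np.sum(s[k:] * s[:-k]) / denom   # lag-k autocorrelation
        Q += rho_k ** 2 / (N - k)
    Q *= N * (N + 2)
    return Q, stats.chi2.sf(Q, df=lags)          # statistic and p-value
```

A small p-value indicates dependence in the squares (e.g. ARCH-type multiplicative nonlinearity), but, as noted above, not its specific form.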
Considering the input to a nonlinear system to be u(n) and the generated signal, as the output of such a system, to be x(n), a restricted class of nonlinear models suitable for the analysis of such a process is given by:

(3.52) \quad x(n) = g\big( x(n-1), \ldots, x(n-k) \big) + u(n) \, h\big( x(n-1), \ldots, x(n-k) \big)
Multiplicative dependence means nonlinearity in the variance, which requires the function h(.) to be nonlinear; additive dependence, conversely, means nonlinearity in the mean, which holds if the function g(.) is nonlinear. The conditional statistical mean and variance are respectively defined as:
(3.53) \quad \mu_x(n) = E\big[ x(n) \mid x(n-1), \ldots, x(n-k) \big] = g\big( x(n-1), \ldots, x(n-k) \big), \qquad \sigma_x^2(n) = \mathrm{Var}\big[ x(n) \mid x(n-1), \ldots, x(n-k) \big] = \sigma_u^2 \, h^2\big( x(n-1), \ldots, x(n-k) \big)
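A minimal simulation of the model in (3.52) makes the two kinds of dependence concrete; the particular g(·) and h(·) below are our own illustrative choices, not from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

def g(x_prev):   # nonlinear conditional-mean term -> additive dependence
    return 0.5 * np.tanh(x_prev)

def h(x_prev):   # nonlinear conditional-variance term -> multiplicative dependence
    return np.sqrt(0.1 + 0.8 * x_prev ** 2)

N = 5000
u = rng.standard_normal(N)      # zero-mean, unit-variance innovations
x = np.zeros(N)
for n in range(1, N):
    # x(n) = g(past) + u(n) * h(past), a first-order instance of (3.52)
    x[n] = g(x[n - 1]) + u[n] * h(x[n - 1])

# By (3.53): E[x(n) | x(n-1)] = g(x(n-1)),
#            Var[x(n) | x(n-1)] = sigma_u^2 * h(x(n-1))**2
```

Standardizing the residuals, z(n) = (x(n) − g(x(n−1))) / h(x(n−1)), recovers the innovations u(n), which is one way to check a fitted model of this class.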