| Title | Modern Characterization of Electromagnetic Systems and its Associated Metrology |
|---|---|
| Author | Magdalena Salazar-Palma |
| Genre | Physics |
| ISBN | 9781119076537 |
(1.13)
In this case, the reconstruction error for the reduced‐rank model is given by
(1.14)
This equation implies that the mean squared error Ξrr of the low‐rank approximation is smaller than the mean squared error Ξo of the original data vector without any approximation, provided the first term in the summation is small. Low‐rank modelling therefore offers an advantage provided
(1.15)
which illustrates the result of a bias‐variance trade‐off. In particular, it shows that representing the data vector u(n) by a low‐rank model incurs a bias by retaining only p terms of the basis vectors. Interestingly enough, this bias is introduced knowingly in return for a reduction in variance, namely the part of the mean squared error due to the additive noise vector v(n). This is the motivation for using a simpler model: it may not exactly match the underlying physics responsible for generating the data vector u(n), hence the bias, but it is less susceptible to noise, hence the reduction in variance [1, 2].
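As a quick numerical illustration of this trade‐off, the following Python sketch projects noisy snapshots u(n) = s(n) + v(n) onto the p dominant eigenvectors of an estimated correlation matrix and compares the mean squared errors of the full‐rank and reduced‐rank representations. The dimensions, noise level, and signal subspace are illustrative assumptions, not values from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed setup: N noisy snapshots u(n) = s(n) + v(n), where the clean
# signal s(n) lies in a p-dimensional subspace of an M-dimensional space.
M, p, N = 20, 3, 500
sigma = 0.3                                            # noise standard deviation

basis = np.linalg.qr(rng.standard_normal((M, p)))[0]   # orthonormal signal subspace
S = basis @ rng.standard_normal((p, N))                # clean signals s(n)
U_data = S + sigma * rng.standard_normal((M, N))       # noisy observations u(n)

# Estimate the correlation matrix and its eigenvectors (the optimal basis).
R = U_data @ U_data.T / N
w, Q = np.linalg.eigh(R)
Q = Q[:, ::-1]                                         # largest eigenvalues first

# Reduced-rank model: keep only the p dominant terms of the expansion.
U_lowrank = Q[:, :p] @ (Q[:, :p].T @ U_data)

mse_full    = np.mean(np.sum((U_data    - S) ** 2, axis=0))   # variance only
mse_lowrank = np.mean(np.sum((U_lowrank - S) ** 2, axis=0))   # bias + reduced variance
print(f"full rank: {mse_full:.3f}   reduced rank: {mse_lowrank:.3f}")
```

Because the signal here is confined to a p‐dimensional subspace, the full‐rank error comes out near Mσ² while the reduced‐rank error stays close to pσ² plus a small bias, which is the behaviour described by the inequality above.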
We now use this principle in the interpolation/extrapolation of various system responses. Since the data are from a linear time invariant (LTI) system that has a bounded input and a bounded output and satisfies a second‐order partial differential equation, the associated time‐domain eigenvectors are sums of complex exponentials, which in the transformed frequency domain become ratios of two polynomials. As discussed, these eigenvectors form the optimal basis for representing the given data and hence can also be used for interpolation/extrapolation of a given data set. Consequently, we will use either of these two models to fit the data, as seems appropriate. To this effect, we present the Matrix Pencil Method (MP), which approximates the data in the time domain by a sum of complex exponentials, and the Cauchy Method (CM), which fits the data in the transformed domain by a ratio of two polynomials. To apply these two techniques it is necessary to be familiar with two other topics, the singular value decomposition and the total least squares, which are discussed next.
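The equivalence between these two model forms can be checked numerically. The sketch below, using arbitrary poles and residues chosen purely as assumptions for illustration, builds a response as a sum of complex exponentials in time and confirms that in the Laplace domain the same response is a ratio of two polynomials.

```python
import numpy as np

# Assumed poles p_i and residues R_i of a stable LTI response (illustrative only).
poles    = np.array([-1.0 + 2.0j, -1.0 - 2.0j, -0.5 + 0.0j])
residues = np.array([ 0.5 - 0.3j,  0.5 + 0.3j,  1.0 + 0.0j])

# Time domain: the response is a sum of complex exponentials R_i * exp(p_i * t).
t = np.linspace(0.0, 10.0, 201)
x_t = sum(R * np.exp(p * t) for R, p in zip(residues, poles))
print(np.round(x_t[:3].real, 3))                  # first few (real) time samples

# Laplace domain: sum_i R_i / (s - p_i) over a common denominator is N(s)/D(s).
den = np.poly(poles)                              # D(s), with roots at the poles
num = np.zeros(len(poles), dtype=complex)
for i, (R, p) in enumerate(zip(residues, poles)):
    num = np.polyadd(num, R * np.poly(np.delete(poles, i)))

s = 1j * np.linspace(0.1, 5.0, 50)                # sample frequencies s = j*omega
X_partial  = sum(R / (s - p) for R, p in zip(residues, poles))
X_rational = np.polyval(num, s) / np.polyval(den, s)
print(np.allclose(X_partial, X_rational))         # True: rational form matches
```

Each partial fraction R_i/(s − p_i) corresponds to one complex exponential R_i e^(p_i t) in time, which is why the time‐domain and frequency‐domain models carry the same information.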
1.3 An Introduction to Singular Value Decomposition (SVD) and the Theory of Total Least Squares (TLS)
1.3.1 Singular Value Decomposition
As has been described in [https://davetang.org/file/Singular_Value_Decomposition_Tutorial.pdf] “Singular value decomposition (SVD) can be looked at from three mutually compatible points of view. On the one hand, we can see it as a method for transforming correlated variables into a set of uncorrelated ones that better expose the various relationships among the original data items. At the same time, SVD is a method for identifying and ordering the dimensions along which data points exhibit the most variation. This ties in to the third way of viewing SVD, which is that once we have identified where the most variation is, it's possible to find the best approximation of the original data points using fewer dimensions. Hence, SVD can be seen as a method for data reduction. We shall illustrate this last point with an example later on.”
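To make the data‐reduction view concrete, the short sketch below applies NumPy's SVD to an arbitrary, nearly rank‐2 matrix (an assumed example, not data from the book), inspects the ordered singular values, and rebuilds the matrix from only its two dominant dimensions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed example: a 50 x 8 data matrix that is nearly rank 2, i.e. almost all
# of its variation lies along two directions, plus a little noise.
A = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 8))
A += 0.01 * rng.standard_normal(A.shape)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
print(np.round(s, 3))                 # two dominant singular values, the rest tiny

# Rank-2 approximation: keep only the dimensions with the most variation.
k = 2
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
print(np.linalg.norm(A - A_k) / np.linalg.norm(A))   # small relative error
```

Discarding the small singular values removes mostly noise, so the reduced representation stores far fewer numbers while reproducing the original matrix almost exactly.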
First, we will introduce a critical component in solving a Total Least Squares problem called the Singular Value Decomposition (SVD). The singular value decomposition is one of the most important concepts in linear algebra [2]. To start, we need to understand what eigenvectors and eigenvalues are as related to a dynamic system. If we multiply a vector x by a matrix [A], we will get a new vector, Ax. The next equation relates the matrix [A] and an eigenvector x to an eigenvalue λ (just a number) and the original x:
[A] x = λ x (1.16)
[A] is assumed to be a square matrix, x is the eigenvector, and λ is a value called the eigenvalue. Normally, when any vector x is multiplied by any matrix [A], a new vector results with components pointing in different directions than the original x. However, eigenvectors are special vectors that come out in the same direction even after they are multiplied by the matrix [A]. From (1.16) we can see that when one multiplies an eigenvector by [A], the new vector Ax is just the eigenvalue λ times the original x. This eigenvalue determines whether the vector x is shrunk, stretched, reversed, or unchanged. Eigenvectors and eigenvalues play crucial roles in linear algebra, ranging from simplifying matrix algebra, such as taking the 500th power of [A], to solving differential equations. To take the 500th power of [A], one only needs to find the eigenvalues and eigenvectors of [A] and take the 500th power of the eigenvalues. The eigenvectors will not change direction, and combining them with the 500th power of the eigenvalues yields [A]⁵⁰⁰. As we will see in later sections, the eigenvalues can also provide important parameters of a system transfer function, such as the poles.
One way to characterize and extract the eigenvalues of a matrix [A] is to diagonalize it. Diagonalizing a matrix not only provides a quick way to extract the eigenvalues, but also makes important parameters such as the rank and dimension of the matrix easy to determine. To diagonalize the matrix [A], the eigenvalues of [A] must first be placed in a diagonal matrix [Λ]. This is accomplished by forming an eigenvector matrix [S], with the eigenvectors of [A] placed in the columns of [S], and multiplying as follows
[S]⁻¹ [A] [S] = [Λ] (1.17)
(1.17) can now be rearranged and [A] can also be written as
[A] = [S] [Λ] [S]⁻¹ (1.18)
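A brief NumPy check of (1.16)–(1.18), using a small 2 × 2 matrix chosen here purely as an assumed example, also demonstrates the 500th‐power shortcut mentioned above.

```python
import numpy as np

# Assumed example matrix; its eigenvalues are 1.0 and 0.3, so high powers stay finite.
A = np.array([[0.6, 0.3],
              [0.4, 0.7]])

lam, S = np.linalg.eig(A)               # eigenvalues and eigenvector matrix [S]
Lam = np.diag(lam)

# (1.16): A x = lambda x for every eigenvector (column of [S]).
print(np.allclose(A @ S, S * lam))

# (1.17) and (1.18): [S]^-1 [A] [S] = [Lambda] and [A] = [S][Lambda][S]^-1.
print(np.allclose(np.linalg.inv(S) @ A @ S, Lam))
print(np.allclose(S @ Lam @ np.linalg.inv(S), A))

# 500th power via the eigenvalues alone: [A]^500 = [S][Lambda]^500[S]^-1.
A500 = S @ np.diag(lam ** 500) @ np.linalg.inv(S)
print(np.allclose(A500, np.linalg.matrix_power(A, 500)))
```

Only the eigenvalues are raised to the 500th power; the eigenvectors are reused unchanged, exactly as described above.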
We start to encounter problems when matrices are not square but rectangular. Previously we assumed that [A] was an n by n square matrix. Now we will assume [A] is any m by n rectangular matrix. We would still like to simplify the matrix or “diagonalize” it, but using [S]⁻¹[A][S] is no longer ideal for a few reasons: the eigenvectors that form [S] are not always orthogonal, there are sometimes not enough eigenvectors, and using [A] x = λ x requires [A] to be a square matrix. However, this problem can be solved with the singular value decomposition, although of course at a cost. The SVD of [A] results in the following
[A] = [U] [Σ] [V]*, with [U] of size m × r, [Σ] of size r × r, and [V]* of size r × n,
where m is the number of rows of [A], n is the number of columns of [A], and r is the rank of [A]. The SVD of [A], which can now be rectangular or square, will have two sets of singular vectors, u’s and v’s. The u’s are the eigenvectors of [A][A]* and the v’s are the eigenvectors