Artificial Intelligence and Quantum Computing for Advanced Wireless Networks. Savo G. Glisic


is expected to obtain a small ρ in order to make a good approximation.

where $\alpha$ is a weighting parameter that defines the relative trade-off between the squared error loss and the experienced loss, and $e_1(\mathbf{x}_k) = y_k - f_{FM}(\mathbf{x}_k)$, $e_2(\mathbf{x}_k) = f_{SVR}(\mathbf{x}_k) - f_{FM}(\mathbf{x}_k)$. The first term therefore characterizes the error between the desired output and the actual output, while the second term measures the difference between the actual output and the experienced output of the SVR. Each epoch of the hybrid learning algorithm is thus composed of a forward pass and a backward pass, which implement the linear ridge regression algorithm and the gradient descent method on $E$ over the parameters $\Theta$ and $\theta_{0i}$, respectively. The $\theta_{0i}$ are identified by linear ridge regression in the forward pass. In addition, the Gaussian membership function is assumed to be employed, so that $\Theta_j$ is identified with $\sigma_j$.
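To make the trade-off concrete, the composite objective can be written in a few lines. The following is a minimal sketch, assuming the desired outputs and the two model outputs are available as NumPy arrays (the names y, f_fm, f_svr, and alpha are illustrative):

```python
import numpy as np

def hybrid_loss(y, f_fm, f_svr, alpha):
    """Composite objective E = sum_k e1(x_k)^2 + alpha * sum_k e2(x_k)^2."""
    e1 = y - f_fm       # error between desired and actual output
    e2 = f_svr - f_fm   # difference between actual output and SVR output
    return np.sum(e1 ** 2) + alpha * np.sum(e2 ** 2)
```

A larger alpha pulls the fuzzy model toward the SVR output, while alpha = 0 recovers the plain squared error loss.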

Using Eqs. (4.62) and (4.63), we define

$$\phi_i(\mathbf{x}_k) = \frac{\prod_{j=1}^{d}\exp\left(-\left(\frac{x_j - z_{ij}}{\sigma_j}\right)^{2}\right)}{\sum_{i=1}^{c}\prod_{j=1}^{d}\exp\left(-\left(\frac{x_j - z_{ij}}{\sigma_j}\right)^{2}\right)}.\tag{4.67}$$
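Equation (4.67) is a normalized product of Gaussian memberships and vectorizes naturally. A minimal sketch, assuming the rule centers z (shape c × d) and shared widths sigma (shape d) are given as NumPy arrays:

```python
import numpy as np

def phi(x, z, sigma):
    """Normalized Gaussian membership values phi_i(x) of Eq. (4.67).

    x     : (d,)   input vector x_k
    z     : (c, d) rule centers z_ij
    sigma : (d,)   widths sigma_j
    Returns the (c,) vector [phi_1(x), ..., phi_c(x)].
    """
    # prod over j of exp(-((x_j - z_ij) / sigma_j)^2), one value per rule i
    w = np.exp(-(((x - z) / sigma) ** 2)).prod(axis=1)
    return w / w.sum()  # normalize across the c rules
```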

Setting the derivative of $E$ with respect to each $\theta_{0i}$ to zero then gives

$$\frac{\partial E}{\partial \theta_{0i}} = \sum_{k=1}^{n}\phi_i(\mathbf{x}_k)\left(f_{FM}(\mathbf{x}_k) - y_k\right) + \sum_{k=1}^{n}\alpha\,\phi_i(\mathbf{x}_k)\left(f_{FM}(\mathbf{x}_k) - f_{SVR}(\mathbf{x}_k)\right) = 0.\tag{4.68}$$

      These conditions can be rewritten in the form of normal equations:

$$\begin{aligned}
&\theta_{01}\sum_{k=1}^{n}\phi_m(\mathbf{x}_k)\,\phi_1(\mathbf{x}_k) + \cdots + \theta_{0c'}\sum_{k=1}^{n}\phi_m(\mathbf{x}_k)\,\phi_{c'}(\mathbf{x}_k)\\
&\quad=\sum_{k=1}^{n}\phi_m(\mathbf{x}_k)\left(\frac{y_k}{1+\alpha} + \frac{\alpha}{1+\alpha}\left(\sum_{i=1}^{c}k(\mathbf{x}_k,\mathbf{x}_i)\,\theta'_i + b\right) - \sum_{i=1}^{c'}\theta_i\prod_{j=1}^{d}\exp\left(-\left(\frac{x_j - z_{ij}}{\sigma_j}\right)^{2}\right)\right)
\end{aligned}\tag{4.69}$$

where $m = 1, \dots, c'$. This is a standard problem that forms the basis of linear regression, and the best-known formula for estimating $\boldsymbol{\theta} = [\theta_{01}\ \theta_{02}\ \cdots\ \theta_{0c'}]^{\mathrm{T}}$ is the ridge regression estimator:

$$\boldsymbol{\theta} = \left[\mathbf{X}^{\mathrm{T}}\mathbf{X} + \delta\,\mathbf{I}_{c'}\right]^{-1}\mathbf{X}^{\mathrm{T}}\mathbf{Y}\tag{4.70}$$

      where δ is a positive scalar, and

$$\mathbf{X} = \begin{bmatrix}\psi(\mathbf{x}_1)^{\mathrm{T}}\\ \psi(\mathbf{x}_2)^{\mathrm{T}}\\ \vdots\\ \psi(\mathbf{x}_n)^{\mathrm{T}}\end{bmatrix},\qquad \mathbf{Y} = \begin{bmatrix}y'_1\\ y'_2\\ \vdots\\ y'_n\end{bmatrix}\tag{4.71}$$

where $y'_k = \dfrac{y_k}{1+\alpha} + \dfrac{\alpha}{1+\alpha}\left(\sum_{i=1}^{c}k(\mathbf{x}_k,\mathbf{x}_i)\,\theta'_i + b\right) - \sum_{i=1}^{c'}\theta_i\prod_{j=1}^{d}\exp\left(-\left(\frac{x_j - z_{ij}}{\sigma_j}\right)^{2}\right)$ and

$\psi(\mathbf{x}_k) = [\phi_1(\mathbf{x}_k), \phi_2(\mathbf{x}_k), \cdots, \phi_{c'}(\mathbf{x}_k)]^{\mathrm{T}}$, $k = 1, \dots, n$. This completes the forward pass.
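A minimal sketch of this forward pass, under the assumptions that the normalization in (4.67) runs over the same $c'$ rules and that the kernel values $k(\mathbf{x}_k, \mathbf{x}_i)$, the SVR parameters $\theta'_i$ and $b$, and the current $\theta_i$ are available as arrays (all names are illustrative):

```python
import numpy as np

def forward_pass(Xs, y, z, sigma, K, theta_p, b, theta_c, alpha, delta):
    """Ridge-regression forward pass of Eqs. (4.69)-(4.71).

    Xs      : (n, d)  training inputs x_k
    y       : (n,)    desired outputs y_k
    z       : (c', d) rule centers z_ij;  sigma : (d,) widths sigma_j
    K       : (n, c)  kernel values k(x_k, x_i) from the SVR
    theta_p : (c,)    SVR coefficients theta'_i;  b : SVR bias
    theta_c : (c',)   current parameters theta_i appearing in y'_k
    Returns theta = [X^T X + delta*I]^(-1) X^T Y of Eq. (4.70).
    """
    # Firing strengths prod_j exp(-((x_j - z_ij)/sigma_j)^2), shape (n, c')
    g = np.exp(-(((Xs[:, None, :] - z) / sigma) ** 2)).prod(axis=2)
    X = g / g.sum(axis=1, keepdims=True)   # rows psi(x_k)^T, per Eq. (4.67)
    # Targets y'_k of Eq. (4.71): a blend of y_k and the SVR output
    Y = y / (1 + alpha) + alpha / (1 + alpha) * (K @ theta_p + b) - g @ theta_c
    return np.linalg.solve(X.T @ X + delta * np.eye(X.shape[1]), X.T @ Y)
```

Solving the regularized normal equations with np.linalg.solve rather than an explicit matrix inverse is numerically preferable, and $\delta > 0$ guarantees the system is invertible.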

In the backward pass, the error rates propagate backward and the $\sigma_j$ are updated by the gradient descent method. The derivatives with respect to $\sigma_j^{-2}$ are calculated from

(4.72) $\dfrac{\partial E}{\partial \sigma_j^{-2}} = \cdots$
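A minimal sketch of one backward-pass step, assuming for illustration a central-difference approximation of $\partial E/\partial \sigma_j^{-2}$ in place of the closed-form derivative, with the loss supplied as a callable (E_of_sigma, lr, and h are illustrative names):

```python
import numpy as np

def backward_pass(sigma, E_of_sigma, lr=0.01, h=1e-6):
    """One gradient-descent step on the widths, in the sigma_j^(-2) parameterization.

    sigma      : (d,) current widths sigma_j
    E_of_sigma : callable mapping a width vector to the loss E
    """
    inv_sq = sigma ** -2.0           # work with sigma_j^(-2), as in Eq. (4.72)
    grad = np.empty_like(inv_sq)
    for j in range(inv_sq.size):
        p, m = inv_sq.copy(), inv_sq.copy()
        p[j] += h
        m[j] -= h
        # Central difference stands in for the derivative of Eq. (4.72)
        grad[j] = (E_of_sigma(p ** -0.5) - E_of_sigma(m ** -0.5)) / (2 * h)
    inv_sq -= lr * grad              # gradient-descent update on sigma_j^(-2)
    return inv_sq ** -0.5            # map back to sigma_j
```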