Title | Artificial Intelligence and Quantum Computing for Advanced Wireless Networks |
---|---|
Author | Savo G. Glisic |
ISBN | 9781119790310 |
By combining conjunctions, disjunctions, and negations, various partitions of the antecedent space can be obtained; the boundaries are, however, restricted to the rectangular grid defined by the fuzzy sets of the individual variables. As an example, consider the rule “If x1 is not A13 and x2 is A21 then …”
The degree of fulfillment of this rule is computed using the complement and intersection operators:

β = [1 − μA13(x1)] ∧ μA21(x2)    (4.42)
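As a concrete illustration, the degree of fulfillment in Eq. (4.42) can be sketched in a few lines of Python. The triangular membership functions and their parameters below are illustrative assumptions, not values from the text; the complement is taken as 1 − μ and the intersection as the minimum operator.

```python
def tri_mf(x, a, b, c):
    """Triangular membership function with feet at a, c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def degree_of_fulfillment(x1, x2):
    mu_a13 = tri_mf(x1, 4.0, 6.0, 8.0)   # fuzzy set A13 on x1 (assumed shape)
    mu_a21 = tri_mf(x2, 0.0, 2.0, 4.0)   # fuzzy set A21 on x2 (assumed shape)
    # Eq. (4.42): complement (1 - mu) for "not", min for "and"
    return min(1.0 - mu_a13, mu_a21)

print(degree_of_fulfillment(5.0, 1.0))  # → 0.5
```

With x1 = 5 the membership in A13 is 0.5, so its complement is 0.5; with x2 = 1 the membership in A21 is 0.5; the minimum of the two gives β = 0.5.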
The antecedent form with multivariate membership functions, Eq. (4.31), is the most general one, as there is no restriction on the shape of the fuzzy regions. The boundaries between these regions can be arbitrarily curved and oblique to the axes. Also, the number of fuzzy sets needed to cover the antecedent space may be much smaller than in the previous cases. Hence, for complex multivariable systems, this partition may provide the most effective representation.
Defuzzification: In many applications, a crisp output y is desired. To obtain a crisp value, the output fuzzy set must be defuzzified. With the Mamdani inference scheme, the center of gravity (COG) defuzzification method is used. This method computes the y coordinate of the COG of the area under the fuzzy set B′:

y′ = [Σj=1..F μB′(yj) · yj] / [Σj=1..F μB′(yj)]    (4.43)

where F is the number of elements yj in Y. The continuous domain Y thus must be discretized to be able to compute the COG.
Design Example 4.5
Consider the output fuzzy set B′ = [0.2, 0.2, 0.3, 0.9, 1] from the previous example, where the output domain is Y = [0, 25, 50, 75, 100]. The defuzzified output obtained by applying Eq. (4.43) is

y′ = (0.2·0 + 0.2·25 + 0.3·50 + 0.9·75 + 1·100) / (0.2 + 0.2 + 0.3 + 0.9 + 1) = 187.5/2.6 = 72.12
The network throughput (in arbitrary units), computed by the fuzzy model, is thus 72.12.
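The COG computation of Eq. (4.43) is a weighted mean over the discretized output domain, and Design Example 4.5 can be reproduced directly:

```python
def cog_defuzzify(mu, y):
    """Center-of-gravity defuzzification, Eq. (4.43): the weighted mean
    of the discretized output domain y under the memberships mu."""
    num = sum(m * yj for m, yj in zip(mu, y))
    den = sum(mu)
    return num / den

b_prime = [0.2, 0.2, 0.3, 0.9, 1.0]   # output fuzzy set B' from the example
domain = [0, 25, 50, 75, 100]         # discretized output domain Y

print(round(cog_defuzzify(b_prime, domain), 2))  # → 72.12
```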
4.4.2 SVR
The basic idea: Let {(x1, y1), …, (xl, yl)} ⊂ X × ℝ be the given training data, where X denotes the space of the input patterns (e.g., X = ℝd). In ε‐SV regression, the objective is to find a function f(x) that has at most a deviation of ε from the actually obtained targets yi for all the training data, and at the same time is as flat as possible. In other words, we do not care about errors as long as they are less than ε, but will not accept any deviation larger than this. We begin by describing the case of linear functions f, taking the form

f(x) = 〈w, x〉 + b, with w ∈ X, b ∈ ℝ    (4.44)
where 〈 , 〉 denotes the dot product in X. Flatness in the case of Eq. (4.44) means that one seeks a small w. One way to ensure this is to minimize the norm, that is, ‖w‖2 = 〈w, w〉. We can write this problem as a convex optimization problem:

minimize (1/2)‖w‖2
subject to yi − 〈w, xi〉 − b ≤ ε
           〈w, xi〉 + b − yi ≤ ε    (4.45)
The tacit assumption in Eq. (4.45) was that such a function f actually exists that approximates all pairs (xi, yi) with ε precision, or in other words, that the convex optimization problem is feasible. Sometimes, the problem might not have a solution for the given ε, or we also may want to allow for some errors. In analogy with the “soft margin” (see Figure 2.7 of Chapter 2), one can introduce slack variables ξi, ξi∗ to cope with otherwise infeasible constraints, arriving at the formulation

minimize (1/2)‖w‖2 + C Σi=1..l (ξi + ξi∗)
subject to yi − 〈w, xi〉 − b ≤ ε + ξi
           〈w, xi〉 + b − yi ≤ ε + ξi∗
           ξi, ξi∗ ≥ 0    (4.46)
The constant C > 0 determines the trade‐off between the flatness of f and the amount up to which deviations larger than ε are tolerated. This corresponds to dealing with a so‐called ε‐insensitive loss function |ξ|ε described by

|ξ|ε = 0 if |ξ| ≤ ε, and |ξ|ε = |ξ| − ε otherwise    (4.47)
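The ε‐insensitive loss is simple to state in code: residuals inside the ε tube cost nothing, while residuals outside it are penalized linearly by their excess over ε. A minimal sketch:

```python
def eps_insensitive_loss(residual, eps):
    """Epsilon-insensitive loss: zero inside the eps tube,
    linear in the excess |residual| - eps outside it."""
    return max(0.0, abs(residual) - eps)

print(eps_insensitive_loss(0.05, 0.1))  # → 0.0  (inside the tube)
print(eps_insensitive_loss(0.30, 0.1))  # → 0.2  (excess over epsilon)
```

In the soft-margin formulation above, the slack ξi (or ξi∗) of an optimal solution equals exactly this loss for the i-th training point, which is why the penalty term C Σ(ξi + ξi∗) implements the ε‐insensitive loss.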