Mathematical Programming for Power Systems Operation. Alejandro Garcés Ruiz


target="_blank" rel="nofollow" href="#ulink_127e7f2f-3461-5db4-a07a-21883ab1c172">Figure 2.3 Comparison among unit balls defined by norm-2, norm-1, and norm-∞

      Notice that a ball is not necessarily round, at least with this definition. All balls share a common geometric property known as convexity, which is studied in Chapter 3.
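      To make the comparison of Figure 2.3 concrete, here is a minimal numerical sketch (not from the book; it assumes Python with numpy, and the point x = (0.6, 0.7) is an arbitrary choice) that tests whether a point belongs to each unit ball:

    # Illustrative sketch: membership tests for the unit balls of Figure 2.3.
    # A point belongs to the unit ball of a norm when that norm is at most 1.
    import numpy as np

    x = np.array([0.6, 0.7])            # hypothetical test point

    norm_1   = np.sum(np.abs(x))        # ||x||_1   = |x1| + |x2|
    norm_2   = np.sqrt(np.sum(x**2))    # ||x||_2   = Euclidean length
    norm_inf = np.max(np.abs(x))        # ||x||_inf = largest |component|

    print(norm_1 <= 1, norm_2 <= 1, norm_inf <= 1)
    # -> False True True: x lies inside the norm-2 and norm-inf balls
    #    but outside the diamond-shaped norm-1 ball.

      The same point may thus belong to one ball and not another, which is exactly what the overlapping shapes in Figure 2.3 depict.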

      2.3 Global and local optimum

      Let us consider a mathematical optimization problem represented as (Equation 2.22).

$$\min_{x}\; f(x, \beta) \quad \text{subject to} \quad x \in \Omega \tag{2.22}$$

      where $f:\mathbb{R}^n \to \mathbb{R}$ is the objective function, $x \in \mathbb{R}^n$ are the decision variables, $\Omega \subseteq \mathbb{R}^n$ is the feasible set, and $\beta$ are constant parameters of the problem.
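      As a minimal numerical sketch of (Equation 2.22), assuming Python with scipy (an assumption of this sketch, not the book's code), consider the hypothetical instance $f(x, \beta) = (x_1 - \beta)^2 + x_2^2$ with $\Omega$ the box $[-1, 1]^2$:

    # Illustrative instance of problem (2.22), assuming scipy is available.
    import numpy as np
    from scipy.optimize import minimize

    beta = 0.5                                  # constant parameter beta
    f = lambda x: (x[0] - beta)**2 + x[1]**2    # objective function f(x, beta)
    omega = [(-1.0, 1.0), (-1.0, 1.0)]          # feasible set: the box [-1, 1]^2

    res = minimize(f, x0=np.zeros(2), bounds=omega)
    print(res.x)                                # -> approximately [0.5, 0.0]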

      Figure 2.4 Example of local and global optima: a) function with two local minima and their respective neighborhoods, b) function with a unique global minimum (the neighborhood is the entire domain of the function).

      There are two local minima in the first case, whereas there is a unique global minimum in the second case. This concept is more than a fancy theoretical notion; what good is a local optimum if there are even better solutions in another region of the feasible set? In practice, we require global or close-to-global optimum solutions.
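      A small experiment illustrates the danger (a sketch, not from the book; it assumes Python with scipy, and the function is a hypothetical example): a local solver started from two different points returns two different minima of $f(x) = x^4 - 3x^2 + x$.

    # Illustrative sketch: two starting points, two different answers,
    # and only one of them is the global minimum.
    from scipy.optimize import minimize

    f = lambda x: x[0]**4 - 3 * x[0]**2 + x[0]

    print(minimize(f, x0=[ 1.0]).x)   # ->  ~ 1.13 (a local minimum only)
    print(minimize(f, x0=[-1.0]).x)   # -> ~ -1.30 (the global minimum)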

      Figure 2.5 Example of a function with several optimal points.

      It is well known, from basic mathematics, that the optimum of a continuous differentiable function is attained when its derivative is zero. This fact can be formalized in view of the concepts presented in the previous sections. Consider a function $f:\mathbb{R} \to \mathbb{R}$ with a local minimum at $\tilde{x}$. A neighborhood is defined as $N = \{x \in \mathbb{R} : x = \tilde{x} + t,\ |t| < t_0\}$ with the following condition:

$$f(\tilde{x}) \le f(\tilde{x} + t) \tag{2.23}$$

      For $t > 0$, subtracting $f(\tilde{x})$ and dividing by $t$ preserves the inequality, so taking the limit yields:

$$\lim_{t \to 0^+} \frac{f(\tilde{x} + t) - f(\tilde{x})}{t} \ge 0 \tag{2.24}$$

      The same calculation can be made for $t < 0$; in that case, dividing by a negative $t$ reverses the direction of the inequality:

$$\lim_{t \to 0^-} \frac{f(\tilde{x} + t) - f(\tilde{x})}{t} \le 0 \tag{2.25}$$

      Notice that these limits are precisely the definition of the derivative; hence, $f'(\tilde{x}) \ge 0$ and $f'(\tilde{x}) \le 0$. These two conditions hold simultaneously only when $f'(\tilde{x}) = 0$. Consequently, the optimum of a differentiable function is the point where the derivative vanishes. This condition is local, that is, valid in the neighborhood $N$.
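      This first-order condition is easy to check numerically. The following sketch (not from the book; plain Python, with an example function chosen here) approximates $f'$ by a central finite difference and confirms that it vanishes at the minimizer of $f(x) = (x - 2)^2$:

    # Illustrative check of f'(x~) = 0 at a known minimizer.
    def f(x):
        return (x - 2.0)**2

    def derivative(f, x, h=1e-6):
        return (f(x + h) - f(x - h)) / (2 * h)   # central difference

    print(derivative(f, 2.0))   # -> ~0.0, the derivative vanishes at x~ = 2
    print(derivative(f, 3.0))   # -> ~2.0, nonzero away from the optimum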

      This idea can be extended to multivariable functions as follows. Consider a continuous and differentiable function $f:\mathbb{R}^n \to \mathbb{R}$ and a neighborhood given by $N = \{x \in \mathbb{R}^n : x = \tilde{x} + t\Delta x\}$. Now, define a function $g(t) = f(\tilde{x} + t\Delta x)$. If $\tilde{x}$ is a local minimum of $f$, then:

$$f(\tilde{x}) \le f(\tilde{x} + t\Delta x) \tag{2.26}$$

      In terms of the new function g, (Equation 2.26) leads to the following condition:

$$g(t) \ge g(0) \tag{2.27}$$

      This condition implies that $t = 0$ is a local minimum of $g$; moreover,

$$g'(0) = \lim_{t \to 0^+} \frac{f(\tilde{x} + t\Delta x) - f(\tilde{x})}{t} = \left(\nabla f(\tilde{x})\right)^{\top} \Delta x \tag{2.28}$$

      Notice that $g$ is a function of one variable, so the optimality condition $g'(0) = 0$ must hold regardless of the direction of $\Delta x$; since it holds for both $\Delta x$ and $-\Delta x$, the only possibility is that the gradient vanishes, $\nabla f(\tilde{x}) = 0$. Therefore, the optimum of a multivariate function is attained at a point where the gradient is zero. This condition permits finding local optimal points, as presented in the next section. Two questions are still open: under what conditions is the optimum global? And when is the solution unique? We answer these relevant questions in the next chapter. For now, let us see how to find the optimum using the gradient.
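      As a quick numerical check of the condition $\nabla f(\tilde{x}) = 0$ (a sketch assuming Python with numpy; the quadratic below is a hypothetical example), the finite-difference gradient of $f(x) = (x_1 - 1)^2 + 2(x_2 + 3)^2$ vanishes at its minimizer $(1, -3)$ and nowhere else:

    # Illustrative sketch: the gradient is ~zero at the optimum, nonzero elsewhere.
    import numpy as np

    def f(x):
        return (x[0] - 1.0)**2 + 2.0 * (x[1] + 3.0)**2

    def gradient(f, x, h=1e-6):
        g = np.zeros_like(x)
        for i in range(len(x)):
            e = np.zeros_like(x)
            e[i] = h
            g[i] = (f(x + e) - f(x - e)) / (2 * h)   # central difference
        return g

    print(gradient(f, np.array([1.0, -3.0])))   # -> ~[0, 0] at the optimum
    print(gradient(f, np.array([0.0,  0.0])))   # -> ~[-2, 12] elsewhere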

      2.5 The gradient method