Title: Mathematical Programming for Power Systems Operation
Author: Alejandro Garcés Ruiz
ISBN: 9781119747284
where the objective function f : ℝⁿ → ℝ is differentiable. The gradient ∇f(x) represents the direction of greatest increase of f; thus, minimizing f implies moving in the direction opposite to the gradient. Therefore, we use the following iteration:

x_{k+1} = x_k − t ∇f(x_k)

where t > 0 is the step size.
The gradient method consists of applying this iteration until the gradient is small enough, i.e., until ‖∇f(x)‖ ≤ ϵ. It is easier to understand the algorithm by considering concrete problems and their implementation in Python, as given in the next examples.
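For reference, the iteration and the stopping criterion can be collected in a small generic routine. The following sketch is not code from the book; the default step size, tolerance, and iteration limit are assumed values chosen only for illustration:

import numpy as np

def gradient_method(grad_f, x0, t=0.03, eps=1e-6, max_iter=1000):
    # Repeat x <- x - t*grad_f(x) until ||grad_f(x)|| <= eps
    x = np.array(x0, dtype=float)
    for k in range(max_iter):
        g = grad_f(x)
        if np.linalg.norm(g) <= eps:
            break
        x = x - t*g
    return x

# Illustrative use: f(x) = x0**2 + x1**2, whose gradient is 2*x
print(gradient_method(lambda x: 2*x, [10.0, 10.0]))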
Example 2.4
Consider the following optimization problem:

min f(x, y) = 10x² + 15y² + e^(x+y)
The gradient of this function is presented below:

∇f(x, y) = (20x + e^(x+y), 30y + e^(x+y))
We need to find a value (x, y) at which this gradient is zero. Therefore, we use the gradient method. The algorithm starts from an initial point (for example, x = 10, y = 10) and calculates new points as follows:

x ← x − t (20x + e^(x+y))
y ← y − t (30y + e^(x+y))

This step can be implemented in a Python script, as presented below:
import numpy as np
x = 10
y = 10
t = 0.03
for k in range(50):
    dx = 20*x + np.exp(x+y)
    dy = 30*y + np.exp(x+y)
    x += -t*dx
    y += -t*dy
    print('grad:', np.abs([dx, dy]))
print('argmin:', x, y)
In the first line, we import the module NumPy with the alias np. This module contains mathematical functions such as sin, cos, exp, and log, among others. The gradient has two components, dx and dy, which are evaluated in each iteration and used to update the previous point (x, y). We repeat the process 50 times and print the value of the gradient in each iteration. Notice that all the indented statements belong to the for-statement, and hence the gradient is printed in each iteration. In contrast, the argmin is printed only at the end of the process.
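The same example can also be run with the stopping criterion ‖∇f(x)‖ ≤ ϵ instead of a fixed number of iterations. The following variant is only a sketch; the tolerance and the iteration cap are assumed values, not taken from the book:

import numpy as np
x, y = 10.0, 10.0
t = 0.03
eps = 1e-8
for k in range(1000):                    # iteration cap (assumed)
    dx = 20*x + np.exp(x+y)
    dy = 30*y + np.exp(x+y)
    if np.sqrt(dx**2 + dy**2) <= eps:    # stop when the gradient is small enough
        break
    x -= t*dx
    y -= t*dy
print('iterations:', k, 'argmin:', x, y)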
Example 2.5
Python can calculate the gradient automatically using the Autograd module, which is quite intuitive to use. Consider the following script, which solves the same problem presented in the previous example:
import autograd.numpy as np
from autograd import grad  # gradient calculation

def f(x):
    z = 10.0*x[0]**2 + 15*x[1]**2 + np.exp(x[0]+x[1])
    return z

g = grad(f)  # create a function g that returns the gradient
x = np.array([10.0, 10.0])
t = 0.03
for k in range(50):
    dx = g(x)
    x = x - t*dx
print('argmin:', x)
In this case, we defined a function f and its gradient g, where (x, y) was replaced by a vector (x0, x1). The module NumPy was loaded through autograd.numpy so that a gradient function can be obtained automatically. The code executes the same 50 iterations and obtains the same result. The reader should execute both scripts and compare them in terms of computation time and results.
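One simple way to make that comparison is to time each loop with the standard time module. The snippet below is only a sketch of the idea, not code from the book; the hand-coded gradient g_manual reproduces the expressions of Example 2.4:

import time
import autograd.numpy as np
from autograd import grad

def f(x):
    return 10.0*x[0]**2 + 15*x[1]**2 + np.exp(x[0]+x[1])

g = grad(f)  # automatic gradient (Example 2.5)

def g_manual(x):
    # hand-coded gradient (Example 2.4)
    e = np.exp(x[0]+x[1])
    return np.array([20*x[0] + e, 30*x[1] + e])

for name, grad_fun in [('autograd', g), ('manual', g_manual)]:
    x = np.array([10.0, 10.0])
    t0 = time.perf_counter()
    for k in range(50):
        x = x - 0.03*grad_fun(x)
    print(name, 'time:', time.perf_counter() - t0, 'argmin:', x)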
Example 2.6
Consider a small photovoltaic system formed by three solar panels A, B, and C, placed as depicted in Figure 2.6. Each solar panel has a power electronic converter that must be connected to a common point E before the power is transmitted to the final user at D. The converters and the user’s location are fixed, but the common point E can be moved at will. The coordinates of the solar panels and the final user are A = (0, 40), B = (20, 70), C = (30, 0), and D = (100, 50), respectively.
Figure 2.6 A small photovoltaic system with three solar panels.
The costs of the cables are different since each cable carries a different current. Our objective is to find the best position of E in order to minimize the total cost of the cables. Therefore, the following unconstrained optimization problem is formulated:

min over E of  cost_AE · ℓ_AE + cost_BE · ℓ_BE + cost_CE · ℓ_CE + cost_DE · ℓ_DE

where cost_ij is the unit cost of the cable that connects points i and j, and ℓ_ij is the corresponding length. The costs of the cables are cost_AE = 12, cost_BE = 13, cost_CE = 11, and cost_DE = 18. The distance between any two points U = (u0, u1) and V = (v0, v1) is given by the following expression:

dist(U, V) = √((u0 − v0)² + (u1 − v1)²)
This equation is required several times; thus, it is useful to define a function, as presented below:
import numpy as np

A = (0, 40)
B = (20, 70)
C = (30, 0)
D = (100, 50)

def dist(U, V):
    # Euclidean distance between points U and V
    return np.sqrt((U[0]-V[0])**2 + (U[1]-V[1])**2)
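A minimal sketch of how the position of E could then be found with the same gradient method is shown below; it reuses the Autograd approach of Example 2.5, and the starting point, step size, and number of iterations are assumptions rather than values from the book:

import autograd.numpy as np
from autograd import grad

A = (0, 40)
B = (20, 70)
C = (30, 0)
D = (100, 50)

def dist(U, V):
    return np.sqrt((U[0]-V[0])**2 + (U[1]-V[1])**2)

def total_cost(E):
    # Objective: unit cost times cable length, summed over the four cables
    return 12*dist(A, E) + 13*dist(B, E) + 11*dist(C, E) + 18*dist(D, E)

g = grad(total_cost)
E = np.array([50.0, 50.0])   # assumed starting point
t = 0.1                      # assumed step size
for k in range(500):
    E = E - t*g(E)
print('E:', E, 'total cost:', total_cost(E))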