The reader may easily program the chord and secant algorithms and compare their efficiency with Newton's method when solving the test cubic equation, starting from the same initial interval, as was done with Newton's and bisection methods.
Figure 1.2b shows the resulting convergence history of the two methods. From the two sets of data of
Figure 1.2b we may conclude that while the chord method seems to converge linearly, the secant method converges superlinearly with an approximate order $p \approx 1.6$. It can be formally proved that the exact convergence order of the chord method is $p = 1$, whereas for the secant method the order is the golden ratio $p = (1+\sqrt{5})/2 \approx 1.618$ (see exercises at the end of the chapter).
Practical 1.1 Sliding Particles over Surfaces
A small point-like object of mass $m$, initially at rest, is released from the highest point A of the surface $y(x)$, of height $h$, as shown in the following figure. The object slowly starts to slide under the effects of gravity until it reaches point B, at which it loses contact with the surface. The goal of this practical is to determine the abscissa $x_{\rm B}$ of that point.
First, assuming there are no friction forces, show that the speed of the object at abscissa $x$, i.e. still in contact with the surface, is $v = \sqrt{2g\,[h - y(x)]}$. Then show that the horizontal velocity $\dot{x}$ is given by the expression in the figure on the left. Imposing that there is no horizontal acceleration at point B (i.e. $\ddot{x} = 0$), show that (excluding points where $y'(x) = 0$)
$$1 + y'(x)^2 + 2\,[h - y(x)]\,y''(x) = 0$$
at the point where contact is lost.
Consider the family of surfaces shown in the figure on the right. Releasing the object from point A, find the abscissas $x_{\rm B}$ for the parameter values indicated in the figure, the last of them being 0.9.
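Once the contact-loss condition is written as a root-finding problem $F(x_{\rm B}) = 0$, any of the methods of this chapter applies. The family of surfaces in the figure is not reproduced here, so the sketch below uses a circular arc $y(x) = \sqrt{1 - x^2}$ with $h = y(0) = 1$ as a stand-in, for which the classical answer is known: the object leaves the surface at height $y = 2/3$, i.e. at $x_{\rm B} = \sqrt{5}/3 \approx 0.745$.

```python
import math

# Solve the contact-loss condition  1 + y'^2 + 2*(h - y)*y'' = 0  by
# bisection for the stand-in surface y(x) = sqrt(1 - x^2), h = 1.
# (The practical's actual family of surfaces is not reproduced here.)

def F(x):
    y = math.sqrt(1.0 - x * x)
    yp = -x / y                    # y'(x)
    ypp = -1.0 / y**3              # y''(x)
    return 1.0 + yp * yp + 2.0 * (1.0 - y) * ypp

def bisect(F, a, b, tol=1e-12):
    # Plain bisection; assumes F changes sign on [a, b].
    fa = F(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * F(m) <= 0:
            b = m
        else:
            a, fa = m, F(m)
    return 0.5 * (a + b)

x_b = bisect(F, 0.1, 0.99)
print(x_b, math.sqrt(5) / 3)       # both approximately 0.74536
```

For the circle the height at $x_{\rm B}$ is $y(x_{\rm B}) = 2/3$, the familiar "particle sliding off a sphere" result, which serves as a check before tackling the surfaces of the figure.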
1.6 Conditioning
As mentioned before, nonlinear equations usually involve parameters and constants coming from known external data. For example, Eq. (1.5) for the quantum energy levels depends on the well's depth $V_0$ and width $a$, as well as on the mass $m$ of the particle. Therefore, it is obvious that changing the value of $V_0$, $a$, or $m$ in (1.5) will change the value of the corresponding energy levels $E_n$. In particular, we may also expect that if the parameters are changed just by a tiny amount, then the changes in the solutions should also be very small.
In numerical mathematics, a problem is said to be well conditioned when small changes in the input data always lead to small changes in the output solution. By contrast, a problem is said to be ill conditioned when even very small changes in the input data may lead to very large variations in the outcome. In practice it is very important to quantify this sensitivity of the solution to small changes in the input data. Let us quantify this in the case of root‐finding.
Imagine we are asked to find the abscissa $x^*$ at which the graph of a function $g(x)$ intercepts the ordinate $y = d$; see Figure 1.3a. Mathematically, the sought abscissa is the root of the equation $g(x) = d$, where $d$ plays the role of a parameter. If $d$ is slightly changed to $d + \delta d$ (with $\delta d$ small), the new root will accordingly move to a new value $x^* + \delta x$. If the conditions of the inverse function theorem are satisfied in a neighborhood of $x^*$, the equation $g(x) = d$ defines $x$ as a function of $d$, with $x^* = g^{-1}(d)$, where $g^{-1}$ is the inverse function of $g$. Accordingly, the derivatives of $g$ at $x^*$ and of $g^{-1}$ at $d$ satisfy
$$\left(g^{-1}\right)'(d) = \frac{1}{g'(x^*)}.$$
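The relation above says that a perturbation $\delta d$ of the data displaces the root by approximately $\delta x \approx \delta d / g'(x^*)$: the problem is ill conditioned wherever $g'(x^*)$ is small. A minimal numerical illustration, using the assumed function $g(x) = (x-1)^3$ (chosen because its inverse $g^{-1}(d) = 1 + d^{1/3}$ is known exactly, so the displaced root needs no solver):

```python
# Root sensitivity: if g(x*) = d, a perturbation delta of d moves the
# root by approximately delta / g'(x*).  Here g(x) = (x - 1)^3 is an
# illustrative choice whose inverse is known in closed form.

g_inv = lambda d: 1.0 + d ** (1.0 / 3.0)     # inverse of g(x) = (x - 1)^3
g_prime = lambda x: 3.0 * (x - 1.0) ** 2     # g'(x)

d, delta = 1e-9, 1e-12
x_star = g_inv(d)                    # unperturbed root, x* = 1.001
dx = g_inv(d + delta) - x_star       # actual displacement of the root
predicted = delta / g_prime(x_star)  # first-order estimate delta / g'(x*)

print(dx / delta)                    # amplification ~ 3.3e5: ill conditioned
print(predicted / delta)             # 1 / g'(x*) agrees to first order
```

Because $g'(x^*) = 3\cdot 10^{-6}$ is tiny near this root, a perturbation of $10^{-12}$ in the data moves the root by about $3 \times 10^{-7}$, an amplification of five orders of magnitude, exactly as the derivative relation predicts.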